Article

Globally Optimal Multisensor Distributed Random Parameter Matrices Kalman Filtering Fusion with Applications

Yingting Luo, Yunmin Zhu *, Dandan Luo, Jie Zhou, Enbin Song and Donghua Wang
Department of Mathematics, Sichuan University, Chengdu, Sichuan 610064, P.R. China
* Author to whom correspondence should be addressed.
Sensors 2008, 8(12), 8086-8103; https://doi.org/10.3390/s8128086
Submission received: 28 August 2008 / Revised: 26 November 2008 / Accepted: 3 December 2008 / Published: 8 December 2008
(This article belongs to the Special Issue Aerospace Sensor Systems)

Abstract

This paper proposes a new distributed Kalman filtering fusion with random state transition and measurement matrices, i.e., random parameter matrices Kalman filtering. It is proved that under a mild condition the fused state estimate is equivalent to the centralized Kalman filtering using all sensor measurements; therefore, it achieves the best performance. More importantly, this result can be applied to Kalman filtering with uncertain observations, including measurements with a false alarm probability as a special case, as well as to randomly variant dynamic systems with multiple models. Numerical examples are given that support our analysis and show the significant performance loss incurred by ignoring the randomness of the parameter matrices.

1. Introduction

Linear discrete-time systems with random state transition and observation matrices arise in many areas, such as radar control, missile track estimation, satellite navigation, digital control of chemical processes, and economic systems. De Koning [1] gave the linear minimum variance recursive estimation formulae for the linear discrete-time dynamic system with random state transition and measurement matrices, without a detailed rigorous derivation. Such a system can be converted to a linear dynamic system with deterministic parameter matrices and state-dependent process and measurement noises. The conditions of standard Kalman filtering are therefore violated, and the recursive formulae in [1] cannot be derived directly from Kalman filtering theory. In this paper, a rigorous analysis (mainly in the Appendix) shows that under mild conditions the converted system still satisfies the conditions of standard Kalman filtering; therefore, the recursive state estimation of this system still takes the form of a modified Kalman filter. Reference [5] shows that this result can be applied to Kalman filtering with uncertain observations, as well as to randomly variant dynamic systems with multiple models.
Many advanced systems now make use of a large number of sensors in practical applications ranging from aerospace and defense, robotics, and automation systems to the monitoring and control of process plants. An important practical problem in such systems is to find an optimal state estimator given the observations.
When the processing center can receive all measurements from the local sensors in time, centralized Kalman filtering can be carried out, and the resulting state estimates are optimal in the mean square error (MSE) sense. Unfortunately, due to limited communication bandwidth, or to increase the survivability of the system in a poor environment such as a war situation, every local sensor has to carry out Kalman filtering on its own observations first for local requirements, and then transmit the processed data (the local state estimate) to a fusion center. The fusion center then needs to fuse all received local estimates to yield a globally optimal state estimate.
Under some regularity conditions, in particular the assumption of independent sensor noises, an optimal Kalman filtering fusion was proposed in [11,12] and proved to be equivalent to the centralized Kalman filtering using all sensor measurements; therefore, such fusion is globally optimal. Song et al. [7] then proved that, even with cross-correlated sensor noises, under a mild condition the fused state estimate is still equivalent to the centralized Kalman filtering using all sensor measurements.
In the multisensor random parameter matrices case, even if the original sensor noises are mutually independent, the sensor noises of the converted system may still be cross-correlated. Hence, such a multisensor system does not appear to satisfy the conditions for the distributed Kalman filtering fusion given in [11,12]. In this paper, we prove that when the sensor noises or the random measurement matrices of the original system are correlated across sensors, the sensor noises of the converted system are cross-correlated. Even so, similarly to [7], the centralized random parameter matrices Kalman filtering, in which the fusion center receives all sensor measurements, can still be expressed as a linear combination of the local estimates. Therefore, under the assumption that the expectations of all sensor measurement matrices are of full row rank, the performance of the distributed filtering fusion is the same as that of the centralized fusion. Numerical examples are given that support our analysis and show the significant performance loss incurred by ignoring the randomness of the parameter matrices.
The remainder of this paper is organized as follows. In Section 2, we present the concept of random parameter matrices Kalman filtering. In Section 3, we propose an optimal Kalman filtering fusion with random parameter matrices and show that under a mild condition the fused state estimate is equivalent to the centralized Kalman filtering using all sensor measurements. In Section 4, we show that this result can be applied to Kalman filtering with uncertain observations as well as to randomly variant dynamic systems with multiple models; in particular, Kalman filtering with a false alarm probability is a special case of Kalman filtering with random parameter matrices. A simulation example is given in Section 5. Finally, Section 6 presents our conclusions.

2. Random Parameter Matrices Kalman Filtering

Consider the discrete-time dynamic system:
$$x_{k+1} = F_k x_k + v_k \quad (1)$$
$$y_k = H_k x_k + \omega_k, \quad k = 0, 1, 2, \ldots \quad (2)$$
where $x_k \in \mathbb{R}^r$ is the system state, $y_k \in \mathbb{R}^N$ is the measurement, $v_k \in \mathbb{R}^r$ is the process noise, and $\omega_k \in \mathbb{R}^N$ is the measurement noise. The subscript $k$ is the time index. $F_k \in \mathbb{R}^{r \times r}$ and $H_k \in \mathbb{R}^{N \times r}$ are random matrices.
We assume the system has the following statistical properties: $\{F_k, H_k, v_k, \omega_k, k = 0, 1, 2, \ldots\}$ are sequences of random variables that are independent temporally and across sequences, and independent of $x_0$. Moreover, we assume $x_k$ and $\{F_k, H_k, k = 0, 1, 2, \ldots\}$ are mutually independent. The initial state $x_0$, the noises $v_k$, $\omega_k$, and the parameter matrices $F_k$, $H_k$ have the following means and covariances:
$$E(x_0) = \mu_0, \qquad E(x_0 - \mu_0)(x_0 - \mu_0)^T = P_0 \quad (3)$$
$$E(v_k) = 0, \quad E(v_k v_k^T) = R_{v_k}, \qquad E(\omega_k) = 0, \quad E(\omega_k \omega_k^T) = R_{\omega_k} \quad (4)$$
$$E(F_k) = \bar{F}_k, \qquad \mathrm{Cov}(f_{ij}^k, f_{mn}^k) = C_{f_{ij}^k f_{mn}^k} \quad (5)$$
$$E(H_k) = \bar{H}_k, \qquad \mathrm{Cov}(h_{ij}^k, h_{mn}^k) = C_{h_{ij}^k h_{mn}^k} \quad (6)$$
where $f_{ij}^k$ and $h_{ij}^k$ are the $(i, j)$th entries of the matrices $F_k$ and $H_k$, respectively.
Rewriting $F_k$ and $H_k$ as:
$$F_k = \bar{F}_k + \tilde{F}_k \quad (7)$$
$$H_k = \bar{H}_k + \tilde{H}_k \quad (8)$$
and substituting (7), (8) into (1), (2) converts the original system to:
$$x_{k+1} = \bar{F}_k x_k + \tilde{v}_k \quad (9)$$
$$y_k = \bar{H}_k x_k + \tilde{\omega}_k \quad (10)$$
where
$$\tilde{v}_k = v_k + \tilde{F}_k x_k, \qquad \tilde{\omega}_k = \omega_k + \tilde{H}_k x_k \quad (11)$$
System (9), (10) has deterministic parameter matrices, but its process and observation noises depend on the state. Therefore, it apparently does not satisfy the well-known assumptions of standard Kalman filtering.
In the Appendix, we give a detailed proof that system (9), (10) satisfies all conditions of the standard Kalman filtering, and we derive the recursive state estimate of the new system as follows:
Theorem 1. The linear minimum variance recursive state estimation of system (9), (10) is given by:
$$\begin{aligned}
x_{k+1|k+1} &= x_{k+1|k} + K_{k+1}(y_{k+1} - \bar{H}_{k+1} x_{k+1|k}) \\
x_{k+1|k} &= \bar{F}_k x_{k|k} \\
K_{k+1} &= P_{k+1|k} \bar{H}_{k+1}^T (\bar{H}_{k+1} P_{k+1|k} \bar{H}_{k+1}^T + R_{\tilde{\omega}_{k+1}})^+ \\
P_{k+1|k} &= \bar{F}_k P_{k|k} \bar{F}_k^T + R_{\tilde{v}_k} \\
P_{k+1|k+1} &= (I - K_{k+1} \bar{H}_{k+1}) P_{k+1|k} \\
R_{\tilde{v}_k} &= R_{v_k} + E(\tilde{F}_k E(x_k x_k^T) \tilde{F}_k^T) \\
R_{\tilde{\omega}_k} &= R_{\omega_k} + E(\tilde{H}_k E(x_k x_k^T) \tilde{H}_k^T) \\
E(x_{k+1} x_{k+1}^T) &= \bar{F}_k E(x_k x_k^T) \bar{F}_k^T + E(\tilde{F}_k E(x_k x_k^T) \tilde{F}_k^T) + R_{v_k} \\
x_{0|0} &= E x_0, \quad P_{0|0} = \mathrm{Var}(x_0), \quad E(x_0 x_0^T) = E x_0 (E x_0)^T + P_{0|0}
\end{aligned}$$
where the superscript $+$ denotes the Moore-Penrose pseudoinverse, $x_{k+1|k}$ denotes the one-step prediction of $x_{k+1}$, $P_{k+1|k}$ denotes the covariance of the prediction error, $x_{k+1|k+1}$ denotes the updated estimate of $x_{k+1}$, and $P_{k+1|k+1}$ denotes the covariance of the update error.
Compared with the standard Kalman filtering, and noting the notation in (5), (6), the random parameter matrices Kalman filtering has one additional recursion, for $E(x_{k+1} x_{k+1}^T)$:
$$E(x_{k+1} x_{k+1}^T) = \bar{F}_k E(x_k x_k^T) \bar{F}_k^T + E(\tilde{F}_k E(x_k x_k^T) \tilde{F}_k^T) + R_{v_k}$$
where
$$\begin{aligned}
E(\tilde{F}_k E(x_k x_k^T) \tilde{F}_k^T)_{(m,n)} &= \sum_{i=1}^r C_{f_{n1}^k f_{mi}^k} X_{i1}^k + \cdots + \sum_{i=1}^r C_{f_{nr}^k f_{mi}^k} X_{ir}^k \\
E(\tilde{H}_k E(x_k x_k^T) \tilde{H}_k^T)_{(m,n)} &= \sum_{i=1}^r C_{h_{n1}^k h_{mi}^k} X_{i1}^k + \cdots + \sum_{i=1}^r C_{h_{nr}^k h_{mi}^k} X_{ir}^k
\end{aligned}$$
and where $X_{ij}^k$ is the $(i, j)$th entry of $X_k = E(x_k x_k^T)$.
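To make the recursion concrete, the following is a minimal Python sketch of one step of the Theorem 1 filter (an illustration written for this text, not the authors' code). The helper `e_tilde_quad` evaluates $E(\tilde{M} X \tilde{M}^T)$ from the entry covariances via the entrywise formula above; the 4-index layout `C[m, i, n, j] = Cov(m_mi, m_nj)` and all function names are our own conventions:

```python
import numpy as np

def e_tilde_quad(C, X):
    # E(M~ X M~^T)_{mn} = sum_{i,j} Cov(m_{mi}, m_{nj}) X_{ij},
    # with C[m, i, n, j] = Cov(m_{mi}, m_{nj}).
    return np.einsum('minj,ij->mn', C, X)

def rpm_kf_step(x, P, X, y, Fbar, CF, Hbar, CH, Rv, Rw):
    """One recursion of the random parameter matrices Kalman filter.
    x, P   : previous update x_{k|k} and its error covariance P_{k|k}
    X      : previous second moment E(x_k x_k^T)
    CF, CH : entry covariances of F_k and H_{k+1} (4-index arrays)."""
    FXF = e_tilde_quad(CF, X)
    x_pred = Fbar @ x                           # x_{k+1|k}
    P_pred = Fbar @ P @ Fbar.T + Rv + FXF       # P_{k+1|k}, with R_{v~k}
    X_next = Fbar @ X @ Fbar.T + FXF + Rv       # E(x_{k+1} x_{k+1}^T) recursion
    Rw_t = Rw + e_tilde_quad(CH, X_next)        # R_{w~,k+1}
    S = Hbar @ P_pred @ Hbar.T + Rw_t
    K = P_pred @ Hbar.T @ np.linalg.pinv(S)     # Moore-Penrose pseudoinverse
    x_new = x_pred + K @ (y - Hbar @ x_pred)
    P_new = (np.eye(len(x)) - K @ Hbar) @ P_pred
    return x_new, P_new, X_next
```

Setting `CF` and `CH` to zero recovers the standard Kalman filter, which is exactly the "ignoring the randomness" baseline compared against in Section 5.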

3. Random Parameter Matrices Kalman Filtering with Multisensor Fusion

In this section, a new distributed Kalman filtering fusion with random parameter matrices is proposed. The framework of the distributed tracking system is the same as that considered in [12-15]. The advantages of transmitting sensor estimates rather than raw sensor measurements are discussed in [12-15]. We will show that under a mild condition the fused state estimate is equivalent to the centralized Kalman filtering using all sensor measurements; therefore, it achieves the best performance.
The $l$-sensor dynamic system is given by:
$$x_{k+1} = F_k x_k + v_k, \quad k = 0, 1, \ldots, \qquad y_k^i = H_k^i x_k + \omega_k^i, \quad i = 1, \ldots, l \quad (12)$$
where $x_k \in \mathbb{R}^r$ is the state, $y_k^i \in \mathbb{R}^{N_i}$ is the measurement at the $i$-th sensor, $v_k \in \mathbb{R}^r$ is the process noise, and $\omega_k^i \in \mathbb{R}^{N_i}$ is the measurement noise. The parameter matrices $F_k$ and $H_k^i$ are random. We assume that $\{F_k, H_k^i, v_k, \omega_k^i, k = 0, 1, 2, \ldots\}$, $i = 1, \ldots, l$, is a sequence of independent random variables, and every single sensor satisfies the assumptions of the last section.
Convert system (12) to the following one with deterministic parameter matrices:
$$x_{k+1} = \bar{F}_k x_k + \tilde{v}_k, \quad k = 0, 1, \ldots, \qquad y_k^i = \bar{H}_k^i x_k + \tilde{\omega}_k^i, \quad i = 1, \ldots, l \quad (13)$$
where
$$\tilde{v}_k = v_k + \tilde{F}_k x_k, \qquad \tilde{\omega}_k^i = \omega_k^i + \tilde{H}_k^i x_k$$
The stacked measurement equation is written as:
$$y_k = \bar{H}_k x_k + \tilde{\omega}_k$$
where
$$y_k = (y_k^{1T}, \ldots, y_k^{lT})^T, \qquad \bar{H}_k = (\bar{H}_k^{1T}, \ldots, \bar{H}_k^{lT})^T, \qquad \tilde{\omega}_k = (\tilde{\omega}_k^{1T}, \ldots, \tilde{\omega}_k^{lT})^T$$
and the covariance of the noise $\tilde{\omega}_k$ is given by:
$$\mathrm{Var}(\tilde{\omega}_k) = \tilde{R}_k$$
Consider the covariance between the measurement noises of two sensors in the new system. By the above assumptions, we have:
$$\begin{aligned}
E(\tilde{\omega}_k^i \tilde{\omega}_k^{jT}) &= E(\omega_k^i + \tilde{H}_k^i x_k)(\omega_k^j + \tilde{H}_k^j x_k)^T \\
&= E(\omega_k^i \omega_k^{jT} + \omega_k^i x_k^T \tilde{H}_k^{jT} + \tilde{H}_k^i x_k \omega_k^{jT} + \tilde{H}_k^i x_k x_k^T \tilde{H}_k^{jT}) \\
&= E(\omega_k^i \omega_k^{jT}) + E[\tilde{H}_k^i E(x_k x_k^T) \tilde{H}_k^{jT}]
\end{aligned}$$
As shown at the end of Section 2, every entry of the last matrix term in the above equation is a linear combination of covariances $E(h_{ij}^k h_{ls}^k)$. Hence, when $\tilde{H}_k^i$ and $\tilde{H}_k^j$ are correlated, in general $E[\tilde{H}_k^i E(x_k x_k^T) \tilde{H}_k^{jT}] \neq 0$. Therefore, even if $E(\omega_k^i \omega_k^{jT}) = 0$, i.e., the original sensor noises are mutually independent, the sensor noises of the converted system are still cross-correlated, i.e., $\tilde{R}_k$ is not a block-diagonal matrix.
Fortunately, for the case of cross-correlated sensor noises, it was proven in [7] that under a mild condition the fused state estimate is equivalent to the centralized Kalman filtering using all sensor measurements.
According to Theorem 1 and the Kalman filtering formulae given in [8-10], the local Kalman filtering at the $i$-th sensor is:
$$x_{k|k}^i = x_{k|k-1}^i + K_k^i (y_k^i - \bar{H}_k^i x_{k|k-1}^i) = (I - K_k^i \bar{H}_k^i) x_{k|k-1}^i + K_k^i y_k^i, \qquad K_k^i = P_{k|k}^i \bar{H}_k^{iT} (\tilde{R}_k^i)^{-1} \quad (14)$$
where
$$\tilde{R}_k^i = \mathrm{Var}(\tilde{\omega}_k^i)$$
with the covariance of the filtering error given by:
$$P_{k|k}^i = (I - K_k^i \bar{H}_k^i) P_{k|k-1}^i \quad (15)$$
or
$$(P_{k|k}^i)^{-1} = (P_{k|k-1}^i)^{-1} + \bar{H}_k^{iT} (\tilde{R}_k^i)^{-1} \bar{H}_k^i$$
where
$$x_{k|k-1}^i = \bar{F}_{k-1} x_{k-1|k-1}^i, \quad P_{k|k}^i = E(x_{k|k}^i - x_k)(x_{k|k}^i - x_k)^T, \quad P_{k|k-1}^i = E(x_{k|k-1}^i - x_k)(x_{k|k-1}^i - x_k)^T$$
We assume that the row dimension of every sensor measurement matrix $\bar{H}_k^i$ is less than or equal to the dimension of the state, and that every $\bar{H}_k^i$ is of full row rank. In many practical applications this assumption is fulfilled. Thus, $\bar{H}_k^i (\bar{H}_k^i)^+ = I$.
According to [7] and Theorem 1, the centralized Kalman filtering with all sensor data is given by:
$$\begin{aligned}
x_{k|k} &= x_{k|k-1} + K_k (y_k - \bar{H}_k x_{k|k-1}) = (I - K_k \bar{H}_k) x_{k|k-1} + K_k y_k \\
&= (I - K_k \bar{H}_k) x_{k|k-1} + P_{k|k} \bar{H}_k^T \sum_{i=1}^l \tilde{R}_k^{-1}(*i)\, y_k^i
\end{aligned} \quad (16)$$
$$K_k = P_{k|k} \bar{H}_k^T \tilde{R}_k^{-1} \quad (17)$$
where $\tilde{R}_k^{-1}(*i)$ is the $i$-th column block of $\tilde{R}_k^{-1}$. The covariance of the filtering error is given by:
$$P_{k|k}^{-1} = P_{k|k-1}^{-1} + \bar{H}_k^T \tilde{R}_k^{-1} \bar{H}_k = P_{k|k-1}^{-1} + \bar{H}_k^T \sum_{i=1}^l \tilde{R}_k^{-1}(*i)\, \tilde{R}_k^i (\bar{H}_k^{iT})^+ \bar{H}_k^{iT} (\tilde{R}_k^i)^{-1} \bar{H}_k^i \quad (18)$$
or
$$P_{k|k} = (I - K_k \bar{H}_k) P_{k|k-1} \quad (19)$$
where
$$x_{k|k-1} = \bar{F}_{k-1} x_{k-1|k-1}, \quad P_{k|k} = E(x_{k|k} - x_k)(x_{k|k} - x_k)^T, \quad P_{k|k-1} = E(x_{k|k-1} - x_k)(x_{k|k-1} - x_k)^T$$
Using (15) and (18), the estimation error covariance of the centralized Kalman filtering can be expressed in terms of the estimation error covariances of all the local filters:
$$P_{k|k}^{-1} = P_{k|k-1}^{-1} + \bar{H}_k^T \sum_{i=1}^l \tilde{R}_k^{-1}(*i)\, \tilde{R}_k^i (\bar{H}_k^{iT})^+ \big((P_{k|k}^i)^{-1} - (P_{k|k-1}^i)^{-1}\big) \quad (20)$$
Using (17):
$$K_k y_k = P_{k|k} \bar{H}_k^T \sum_{i=1}^l \tilde{R}_k^{-1}(*i)\, y_k^i = P_{k|k} \bar{H}_k^T \sum_{i=1}^l \tilde{R}_k^{-1}(*i)\, \tilde{R}_k^i (\bar{H}_k^{iT})^+ \bar{H}_k^{iT} (\tilde{R}_k^i)^{-1} y_k^i \quad (21)$$
To express the centralized filtering $x_{k|k}$ in terms of the local filtering, by (14) and (15) we have:
$$\bar{H}_k^{iT} (\tilde{R}_k^i)^{-1} y_k^i = (P_{k|k}^i)^{-1} K_k^i y_k^i = (P_{k|k}^i)^{-1} \big[x_{k|k}^i - (I - K_k^i \bar{H}_k^i) x_{k|k-1}^i\big] = (P_{k|k}^i)^{-1} x_{k|k}^i - (P_{k|k-1}^i)^{-1} x_{k|k-1}^i \quad (22)$$
Thus, substituting (19), (21) and (22) into (16) yields:
$$P_{k|k}^{-1} x_{k|k} = P_{k|k-1}^{-1} x_{k|k-1} + \bar{H}_k^T \sum_{i=1}^l \tilde{R}_k^{-1}(*i)\, \tilde{R}_k^i (\bar{H}_k^{iT})^+ \big((P_{k|k}^i)^{-1} x_{k|k}^i - (P_{k|k-1}^i)^{-1} x_{k|k-1}^i\big) \quad (23)$$
That is, the centralized filtering (23) and its error covariance (20) are explicitly expressed in terms of the local filtering results. Hence, the performance of the distributed random parameter matrices Kalman filtering fusion is the same as that of the centralized random parameter matrices fusion.
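For illustration, a fusion center could evaluate (20) and (23) along the following lines (a sketch written for this text, under the assumption that each sensor reports its update, its prediction, and the corresponding covariances; the function name `fuse_local_estimates` and the argument layout are ours):

```python
import numpy as np

def fuse_local_estimates(x_pred, P_pred, locals_, Hbars, Rt):
    """Globally optimal distributed fusion, per Eqs. (20) and (23).
    x_pred, P_pred : centralized prediction x_{k|k-1}, P_{k|k-1}
    locals_ : per-sensor tuples (x_i, P_i, x_i_pred, P_i_pred)
    Hbars   : mean measurement matrices H_bar_k^i (full row rank)
    Rt      : stacked covariance R~_k of the converted sensor noises"""
    Hbar = np.vstack(Hbars)                      # stacked H_bar_k
    Rt_inv = np.linalg.inv(Rt)
    offs = np.cumsum([0] + [H.shape[0] for H in Hbars])
    Pinv = np.linalg.inv(P_pred)                 # running P_{k|k}^{-1}
    info = Pinv @ x_pred                         # running P_{k|k}^{-1} x_{k|k}
    for i, (xi, Pi, xip, Pip) in enumerate(locals_):
        sl = slice(offs[i], offs[i + 1])
        # weight on sensor i: H_bar^T R~^{-1}(*i) R~^i (H_bar^{iT})^+
        Wi = Hbar.T @ Rt_inv[:, sl] @ Rt[sl, sl] @ np.linalg.pinv(Hbars[i].T)
        Pinv = Pinv + Wi @ (np.linalg.inv(Pi) - np.linalg.inv(Pip))
        info = info + Wi @ (np.linalg.inv(Pi) @ xi - np.linalg.inv(Pip) @ xip)
    P = np.linalg.inv(Pinv)
    return P @ info, P                           # fused x_{k|k}, P_{k|k}
```

Note that only the local quantities listed in Remark 2 below are needed from each sensor; the raw measurements $y_k^i$ never have to be transmitted.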

Remark 1

In this paper it is assumed that all sensor observations are synchronous. In practice, this is rarely true. However, over the past 20 years this assumption has been used very often in the track fusion community (see, for example, [7, 11-13], among others). One reason is that the results for synchronous distributed Kalman filtering fusion extend easily to the asynchronous case: by letting $x_{k|k}^i = x_{k|k-1}^i$ and $P_{k|k}^i = P_{k|k-1}^i$ whenever the $i$-th sensor does not receive an observation $y_k^i$ at time $k$, an asynchronous distributed tracking system is reduced to a synchronous one.

Remark 2

The distributed systems here and in [7, 11-15] have a fusion center, and each local sensor transmits $x_{k|k}^i$, $x_{k|k-1}^i$, $P_{k|k}^i$ and $P_{k|k-1}^i$ to the fusion center. Our purpose in this paper is to derive a globally optimal distributed fusion algorithm equivalent to the centralized Kalman filtering in which the fusion center receives all sensor observations. In this framework, the system has not only global optimality but also good survivability in a poor situation. Clearly, such systems are different from the system considered in [16]. The system in [16] does not require any form of central processing facility, and the algorithm there is highly resilient to the loss of one or more sensing nodes, but it costs more communication and has no real-time global optimality. Thus, both have their own advantages and disadvantages. Random parameter matrices Kalman filtering in the framework of [16] may be another future research direction.

4. Applications of Random Parameter Matrices Kalman Filtering

In this section, we show that the results of the last two sections can be applied to Kalman filtering with uncertain observations as well as to randomly variant dynamic systems with multiple models.

4.1. Application to a General Uncertain Observation

The Kalman filtering problem with uncertain observations has attracted extensive attention [2-4]. There are two types of uncertain observations in practice. In the first type, the estimator knows exactly whether the observation fully or partially contains the signal to be estimated, or contains noise alone (see, for example, [2]). By directly using optimal estimation theory, the Kalman filter for the first type of uncertain observations can be derived easily. In the second type, the estimator cannot know whether the observation fully or partially contains the signal or contains noise alone, but the occurrence probability of each case is known. Clearly, the latter is more practical. By applying the random measurement matrix Kalman filtering, we can derive the Kalman filter for the second type of uncertain observations, which is much more general than those in [2-4].
Consider the system:
$$x_{k+1} = F_k x_k + v_k \quad (24)$$
$$y_k = \sum_{i=1}^l I_{\{\gamma_k = i\}} H_k^i x_k + \sum_{i=1}^l I_{\{\gamma_k = i\}} \omega_k^i \quad (25)$$
where all the parameter matrices are non-random, $I_{\{\cdot\}}$ is the indicator function, and a set of multiple observation equations represents the possible observation cases at each time. The random variable $\gamma_k$ is either observable or unobservable. If $\gamma_k = i$, the measurement matrix is $H_k^i$ and the observation noise is $\omega_k^i$. When the value of $\gamma_k$ is observable at each time $k$, this is an uncertain observation of the first type, and the measurement Equation (25) reduces to:
$$y_k = H_k^i x_k + \omega_k^i \quad (26)$$
so that state estimation reduces to classical Kalman filtering, i.e., the least mean square estimate using the available observations $y_k$. To show the application of the random measurement matrix Kalman filtering, we focus on the second type of uncertain observations: in (25), $\gamma_k$ is unobservable at each time $k$, but the occurrence probability of every available measurement matrix is known. Obviously, (2) is a more general form of (25), because in (2) only the expectation and covariance of $H_k$ are known rather than its full distribution. The expectation of $H_k$ can be expressed as:
$$\bar{H}_k = \sum_{j=1}^l p_k^j H_k^j \quad (27)$$
$$\tilde{H}_k = H_k^i - \bar{H}_k, \quad \text{with probability } p_k^i \quad (28)$$
All that remains in order to apply the random measurement matrix Kalman filtering is to calculate:
$$R_{\tilde{\omega}_k} = R_{\omega_k} + E(\tilde{H}_k E(x_k x_k^T) \tilde{H}_k^T) = R_{\omega_k} + \sum_{i=1}^l p_k^i (H_k^i - \bar{H}_k) E(x_k x_k^T) (H_k^i - \bar{H}_k)^T \quad (29)$$
Substituting (27) and (29) into Theorem 1 immediately yields the random measurement matrix Kalman filtering for the model (1), (25).
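For instance, the two moments (27) and (29) could be computed as follows (a small sketch written for this text; `uncertain_obs_moments` is an assumed helper name, not from the paper):

```python
import numpy as np

def uncertain_obs_moments(Hs, ps, Rw, X):
    """H_bar_k per Eq. (27) and R_{w~k} per Eq. (29).
    Hs : candidate measurement matrices H_k^i
    ps : their occurrence probabilities p_k^i (summing to 1)
    X  : current second moment E(x_k x_k^T)"""
    Hbar = sum(p * H for p, H in zip(ps, Hs))
    Rwt = Rw + sum(p * (H - Hbar) @ X @ (H - Hbar).T for p, H in zip(ps, Hs))
    return Hbar, Rwt
```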
In the classical Kalman filtering problem, the observation is always assumed to contain the signal to be estimated. In practice, however, when the exterior interference is strong, i.e., the total covariance of the measurement noise is large, the estimator will sometimes mistake pure noise for the observation. In radar terminology, this is called a false alarm. Usually, the estimator cannot know whether this has happened; only the probability of a false alarm is known. In the following, we show that Kalman filtering with a false alarm probability is a special case of the uncertain observation model (1), (25).
Consider a discrete-time dynamic process:
$$x_{k+1} = F_k x_k + v_k \quad (30)$$
$$y_k = h_k x_k + \omega_k, \quad k = 0, 1, 2, \ldots \quad (31)$$
where $\{F_k, h_k, v_k, \omega_k, k = 0, 1, 2, \ldots\}$ satisfy the assumptions of standard Kalman filtering, $F_k$ and $h_k$ are deterministic matrices, and the false alarm probability of the observation is $1 - p_k$.
Then we can rewrite the measurement equation as follows:
$$y_k = H_k x_k + \omega_k, \quad k = 0, 1, 2, \ldots \quad (32)$$
where the observation matrix $H_k$ is a binary-valued random matrix with:
$$\Pr\{H_k = h_k\} = p_k \quad (33)$$
$$\Pr\{H_k = 0\} = 1 - p_k \quad (34)$$
By (8):
$$\bar{H}_k = p_k h_k \quad (35)$$
$$\Pr\{\tilde{H}_k = (1 - p_k) h_k\} = p_k \quad (36)$$
$$\Pr\{\tilde{H}_k = -p_k h_k\} = 1 - p_k \quad (37)$$
In the false alarm case, the state transition matrix is still deterministic but the measurement matrix is random. By (35), (36) and (37), the covariances of the process and observation noises can be written as follows:
$$R_{\tilde{v}_k} = R_{v_k} \quad (38)$$
$$R_{\tilde{\omega}_k} = R_{\omega_k} + E(\tilde{H}_k E(x_k x_k^T) \tilde{H}_k^T) = R_{\omega_k} + (1 - p_k) p_k h_k E(x_k x_k^T) h_k^T \quad (39)$$
Thus, the Kalman filtering with a false alarm probability is given by:
$$\begin{aligned}
x_{k+1|k+1} &= x_{k+1|k} + K_{k+1}(y_{k+1} - p_{k+1} h_{k+1} x_{k+1|k}) \\
x_{k+1|k} &= F_k x_{k|k} \\
K_{k+1} &= p_{k+1} P_{k+1|k} h_{k+1}^T (p_{k+1}^2 h_{k+1} P_{k+1|k} h_{k+1}^T + R_{\tilde{\omega}_{k+1}})^+ \\
P_{k+1|k} &= F_k P_{k|k} F_k^T + R_{\tilde{v}_k} \\
P_{k+1|k+1} &= (I - p_{k+1} K_{k+1} h_{k+1}) P_{k+1|k} \\
R_{\tilde{\omega}_k} &= R_{\omega_k} + (1 - p_k) p_k h_k E(x_k x_k^T) h_k^T \\
E(x_{k+1} x_{k+1}^T) &= F_k E(x_k x_k^T) F_k^T + R_{v_k} \\
x_{0|0} &= E x_0, \quad P_{0|0} = \mathrm{Var}(x_0), \quad E(x_0 x_0^T) = E x_0 (E x_0)^T + P_{0|0}
\end{aligned}$$
In this section, we have considered the application to a general uncertain observation in the single-sensor case. In a manner analogous to the derivation of Section 4.1, an application to a general uncertain observation in the multisensor case can also be given using Section 3; the procedure is omitted here.

4.2. Application to a Multi-Model Dynamic Process

The multiple-model (MM) dynamic process has been considered by many researchers. Although the possible models considered in those papers are quite general and can depend on the state, no optimal algorithm in the mean square error (MSE) sense has been proposed in the past few decades. On the other hand, when an MM system satisfies the assumptions in this paper, it reduces to a dynamic model with a random transition matrix, and the optimal real-time filter follows directly from the random transition matrix Kalman filtering proposed in Theorem 1.
Consider a system:
$$x_{k+1} = F_k^i x_k + v_k \quad \text{with probability } p_k^i, \quad i = 1, \ldots, l \quad (40)$$
$$y_k = H_k x_k + \omega_k \quad (41)$$
where $\{F_k\}$ and $\{v_k\}$ are independent sequences and $H_k$ is non-random. We use the random matrix $F_k$ to represent the state transition matrix. The expectation of $F_k$ can be expressed as:
$$\bar{F}_k = \sum_{j=1}^l p_k^j F_k^j$$
$$\tilde{F}_k = F_k^i - \bar{F}_k, \quad \text{with probability } p_k^i$$
A necessary step for implementing the random Kalman filtering is to calculate:
$$R_{\tilde{v}_k} = R_{v_k} + E(\tilde{F}_k E(x_k x_k^T) \tilde{F}_k^T) = R_{v_k} + \sum_{i=1}^l p_k^i (F_k^i - \bar{F}_k) E(x_k x_k^T) (F_k^i - \bar{F}_k)^T$$
Thus, all the recursive formulas of the random Kalman filtering are given by:
$$\begin{aligned}
x_{k+1|k+1} &= x_{k+1|k} + K_{k+1}(y_{k+1} - H_{k+1} x_{k+1|k}) \\
x_{k+1|k} &= \bar{F}_k x_{k|k} \\
K_{k+1} &= P_{k+1|k} H_{k+1}^T (H_{k+1} P_{k+1|k} H_{k+1}^T + R_{\omega_{k+1}})^+ \\
P_{k+1|k} &= \bar{F}_k P_{k|k} \bar{F}_k^T + R_{\tilde{v}_k} \\
P_{k+1|k+1} &= (I - K_{k+1} H_{k+1}) P_{k+1|k} \\
R_{\tilde{v}_k} &= R_{v_k} + \sum_{i=1}^l p_k^i (F_k^i - \bar{F}_k) E(x_k x_k^T) (F_k^i - \bar{F}_k)^T \\
E(x_{k+1} x_{k+1}^T) &= \bar{F}_k E(x_k x_k^T) \bar{F}_k^T + R_{v_k} + \sum_{i=1}^l p_k^i (F_k^i - \bar{F}_k) E(x_k x_k^T) (F_k^i - \bar{F}_k)^T \\
x_{0|0} &= E x_0, \quad P_{0|0} = \mathrm{Var}(x_0), \quad E(x_0 x_0^T) = E x_0 (E x_0)^T + P_{0|0}
\end{aligned}$$
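A minimal sketch of one recursion of this multi-model filter might look as follows (written for this text; the name `multimodel_kf_step` and the argument layout are assumptions):

```python
import numpy as np

def multimodel_kf_step(x, P, X, y, Fs, ps, H, Rv, Rw):
    """One recursion of the random transition matrix Kalman filter above.
    Fs, ps : candidate transition matrices F_k^i and their probabilities
    H      : non-random measurement matrix; X = E(x_k x_k^T)."""
    Fbar = sum(p * F for p, F in zip(ps, Fs))
    spread = sum(p * (F - Fbar) @ X @ (F - Fbar).T for p, F in zip(ps, Fs))
    x_pred = Fbar @ x
    P_pred = Fbar @ P @ Fbar.T + Rv + spread        # uses R_{v~k}
    X_next = Fbar @ X @ Fbar.T + Rv + spread        # E(x_{k+1} x_{k+1}^T)
    S = H @ P_pred @ H.T + Rw
    K = P_pred @ H.T @ np.linalg.pinv(S)
    x_new = x_pred + K @ (y - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new, X_next
```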

5. Numerical Example

In this section, three simulations are performed for a dynamic system with random parameter matrices, modeled as an object moving in the plane with process and measurement noise. The simulations give specific applications of the results of the last section and show that the fused random parameter matrices Kalman filtering algorithms can track the object satisfactorily.
Recall that we rigorously proved in Section 3 that the centralized algorithm using all sensor observations $y_k^i$ at the fusion center can be equivalently converted to a distributed algorithm using all sensor estimates $x_{k|k}^i$. In addition, our computer simulations show that the results of the two algorithms are exactly the same. Hence, in the following numerical examples we only compare the distributed random Kalman filtering with the corresponding standard Kalman filtering that ignores the randomness of the parameter matrices. Without loss of generality, our examples assume the local sensors send updates each time they receive a measurement.
Firstly, we consider a three-sensor distributed Kalman filtering fusion problem with false alarm probabilities.

Example 1

The object dynamics and measurement equations are modeled as follows:
$$x_{k+1} = F_k x_k + v_k, \qquad y_k^i = H_k^i x_k + \omega_k^i, \quad i = 1, 2, 3$$
where $\{F_k, H_k^i, v_k, \omega_k^i, k = 0, 1, 2, \ldots\}$ satisfy the assumptions of standard Kalman filtering. The state transition matrix
$$F_k = \begin{pmatrix} \cos(2\pi/300) & -\sin(2\pi/300) \\ \sin(2\pi/300) & \cos(2\pi/300) \end{pmatrix}$$
is a constant. The measurement matrices are given by:
$$H_k^i = \begin{pmatrix} 1 & i \\ 1 & i \end{pmatrix}, \quad i = 1, 2, \qquad H_k^3 = \begin{pmatrix} 1 & 3 \\ 1 & 3 \end{pmatrix}$$
The false alarm probability of the $i$-th sensor is given by:
$$1 - p_k^1 = 0.01, \quad 1 - p_k^2 = 0.02, \quad 1 - p_k^3 = 0.03$$
The initial state is $x_0 = (50, 0)^T$ and $P_{0|0}^i = I$. The covariances of the noises are diagonal, given by $R_v = 1$, $R_{\omega^i} = 2$, $i = 1, 2, 3$. Using a Monte Carlo method with 50 runs, we evaluate the tracking performance of an algorithm by estimating the second moment of the tracking error:
$$E_k^2 = \frac{1}{50} \sum_{i=1}^{50} \| x_{k|k}(i) - x_k \|^2$$
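The tracking error statistic above can be estimated with a small Monte Carlo harness like the following (our illustration; `simulate` and `run_filter` are hypothetical placeholders for a trajectory generator and a filtering algorithm):

```python
import numpy as np

def second_moment_error(simulate, run_filter, n_runs=50):
    """Estimate E_k^2 = (1/n_runs) * sum_i ||x_{k|k}(i) - x_k||^2 over k.
    simulate()     -> (xs, ys): true states and measurements of one run
    run_filter(ys) -> filtered estimates x_{k|k}, same shape as xs"""
    total = None
    for _ in range(n_runs):
        xs, ys = simulate()
        err = np.sum((run_filter(ys) - xs) ** 2, axis=1)
        total = err if total is None else total + err
    return total / n_runs
```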
Figure 1 shows the second moments of the tracking error for the three-sensor Kalman filtering fusion that ignores the false alarms (i.e., standard Kalman filtering) and for the three-sensor random Kalman filtering fusion that accounts for the false alarms (i.e., random Kalman filtering). It can be seen that even though the false alarm probabilities are very small, the distributed random Kalman filtering fusion performs much better than the standard Kalman filtering.
In Example 1, both the sensor noises and the random measurement matrices of the original system are mutually independent, so the sensor noises of the converted system are mutually independent. We now consider an example in which both the noises and the random measurement matrices of the original system are cross-correlated.

Example 2

The object dynamics and measurement equations are modeled as follows:
$$x_{k+1} = F_k x_k + v_k, \qquad y_k^i = \begin{cases} H_k^i x_k + \omega_k^i & \text{with probability } p_k \\ \omega_k + \omega_k^i & \text{with probability } 1 - p_k \end{cases}$$
where $i = 1, 2, 3$ and $\{F_k, H_k^i, v_k, \omega_k, \omega_k^i, k = 0, 1, 2, \ldots\}$ satisfy the assumptions of standard Kalman filtering. The state transition matrix $F_k$ and the measurement matrices $H_k^i$ are the same as in Example 1, and $\omega_k$ is a large interference noise common to all sensors: when a false alarm happens, the sensors mistake this noise for the observation. The false alarm probability is $1 - p_k = 0.05$. Although the sensor noises $\omega_k^i$, $i = 1, 2, 3$, are mutually independent and independent of $\omega_k$, the total measurement noises $\tilde{\omega}_k^i$ are cross-correlated here. The covariances of the noises are diagonal, given by $R_v = 1$, $R_\omega = 2$, $R_{\omega^i} = 0.5$, $i = 1, 2, 3$. The initial state is $x_0 = (50, 0)^T$ and $P_{0|0}^i = I$.
In this example, both the measurement noises and the random measurement matrices of the original system are cross-correlated; hence, the sensor noises of the converted system are cross-correlated. Figure 2 shows that the random Kalman filtering fusion given in Section 3 still works better than the standard Kalman filtering, which ignores the false alarms and incorrectly assumes that the sensor noises are independent.

Example 3

In this simulation, there are three dynamic models, with the corresponding probabilities of occurrence available. The object dynamics and measurement matrix in (40) and (41) are given by:
$$\begin{aligned}
F_k^1 &= \begin{pmatrix} \cos(2\pi/300) & -\sin(2\pi/300) \\ \sin(2\pi/300) & \cos(2\pi/300) \end{pmatrix} \quad \text{with probability } 0.1 \\
F_k^2 &= \begin{pmatrix} \cos(2\pi/250) & -\sin(2\pi/250) \\ \sin(2\pi/250) & \cos(2\pi/250) \end{pmatrix} \quad \text{with probability } 0.2 \\
F_k^3 &= \begin{pmatrix} \cos(2\pi/100) & -\sin(2\pi/50) \\ \sin(2\pi/50) & \cos(2\pi/100) \end{pmatrix} \quad \text{with probability } 0.7 \\
H_k &= \begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix}
\end{aligned}$$
The covariances of the noises are diagonal, given by $R_v = 2$, $R_\omega = 1$. In the following, we compare our numerical results with the interacting multiple model (IMM) algorithm. Since in this example the occurrence probability of each model at every time $k$ is known and independent across time, it is also the transition probability in the IMM. Therefore, the transition probability matrix $\Pi$ at each time in the IMM is fixed and given by:
$$\Pi = \begin{pmatrix} 0.1 & 0.2 & 0.7 \\ 0.1 & 0.2 & 0.7 \\ 0.1 & 0.2 & 0.7 \end{pmatrix}$$
Here $\Pi(i, j)$ denotes the transition probability from model $i$ to model $j$. This assumption also implies that the model probabilities in the IMM are fixed as follows:
$$\pi_k^1 = 0.1, \quad \pi_k^2 = 0.2, \quad \pi_k^3 = 0.7, \quad k = 1, 2, \ldots$$
Figure 3 shows that the random Kalman filtering given in Section 4.2 still works better than the IMM with the fixed transition and model probabilities. This makes sense, since the former is optimal in the MSE sense while the latter is not. However, in practice the occurrence probability of each model very often depends on the state or observation, and is therefore not independent across time. In that case, our new method offers no advantage.

6. Conclusions

In the multisensor random parameter matrices case, it was proven in this paper that when the sensor noises or the measurement matrices of the original system are correlated across sensors, the sensor noises of the converted system are cross-correlated. Hence, such a multisensor system does not appear to satisfy the conditions for the standard distributed Kalman filtering fusion. This paper proposed a new distributed Kalman filtering fusion with random parameter matrices and proved that under a mild condition the fused state estimate is equivalent to the centralized Kalman filtering using all sensor measurements; therefore, it achieves the best performance. More importantly, this result can be applied to Kalman filtering with uncertain observations as well as to randomly variant dynamic systems with multiple models; Kalman filtering with false alarms is a special case of Kalman filtering with uncertain observations. Numerical examples were given that support our analysis and show the significant performance loss incurred by ignoring the randomness of the parameter matrices.

Appendix

In this Appendix, we give the proof of Theorem 1. First, we prove in detail that system (9), (10) satisfies all conditions of the standard Kalman filtering, as follows:

Lemma 1

Suppose the random matrix $F$ and the random vector $x$ are independent. Then:
$$E(F x x^T F^T) = E(F\, E(x x^T)\, F^T)$$

Proof

By the properties of conditional expectation, we have that:
$$E(F x x^T F^T) = E\big(E(F x x^T F^T \mid F)\big) = E\big(F\, E(x x^T \mid F)\, F^T\big) = E\big(F\, E(x x^T)\, F^T\big)$$
Q.E.D.
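Lemma 1 is also easy to check numerically; the following small Monte Carlo verification is our own illustration (the distributions and sample sizes are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
trials, n = 500_000, 2
F = 1.0 + 0.3 * rng.standard_normal((trials, n, n))   # random matrix samples
x = 1.0 + rng.standard_normal((trials, n, 1))         # independent random vectors
xxT = x @ x.transpose(0, 2, 1)
lhs = np.mean(F @ xxT @ F.transpose(0, 2, 1), axis=0)                   # E(F x x^T F^T)
rhs = np.mean(F @ np.mean(xxT, axis=0) @ F.transpose(0, 2, 1), axis=0)  # E(F E(xx^T) F^T)
print(np.max(np.abs(lhs - rhs)))   # close to 0, up to Monte Carlo error
```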

Lemma 2

$$\begin{aligned}
&(a)\quad E(\tilde{v}_k) = 0, \quad E(\tilde{\omega}_k) = 0; \\
&(b1)\quad E(x_0 \tilde{v}_k^T) = 0, \qquad (b2)\quad E(x_0 \tilde{\omega}_k^T) = 0; \\
&(c1)\quad E(\tilde{v}_k \tilde{v}_l^T) = 0, \qquad (c2)\quad E(\tilde{\omega}_k \tilde{\omega}_l^T) = 0, \qquad (c3)\quad E(\tilde{v}_k \tilde{\omega}_l^T) = 0, \quad k \neq l; \\
&(d)\quad E(\tilde{v}_k \tilde{v}_k^T) = R_{\tilde{v}_k}, \quad E(\tilde{\omega}_k \tilde{\omega}_k^T) = R_{\tilde{\omega}_k},
\end{aligned}$$
where
$$R_{\tilde{v}_k} = R_{v_k} + E(\tilde{F}_k E(x_k x_k^T) \tilde{F}_k^T), \qquad R_{\tilde{\omega}_k} = R_{\omega_k} + E(\tilde{H}_k E(x_k x_k^T) \tilde{H}_k^T).$$

Proof

(a):
By the assumptions on the model (1), (2) and the notation in (11), this is obvious.
(b1):
Since {Fk, vk, k = 0,1,2,…} is independent of x0,
$$\begin{aligned}
E(x_0 \tilde{v}_k^T) &= E\{x_0 (v_k + \tilde{F}_k x_k)^T\} = E(x_0 x_k^T \tilde{F}_k^T) \\
&= E\{x_0 (F_{k-1} x_{k-1} + v_{k-1})^T \tilde{F}_k^T\} = E(x_0 x_{k-1}^T F_{k-1}^T \tilde{F}_k^T) \\
&= E\{x_0 (F_{k-2} x_{k-2} + v_{k-2})^T F_{k-1}^T \tilde{F}_k^T\} = E(x_0 x_{k-2}^T F_{k-2}^T F_{k-1}^T \tilde{F}_k^T) \\
&= \cdots = E(x_0 x_0^T F_0^T F_1^T \cdots F_{k-2}^T F_{k-1}^T \tilde{F}_k^T) \\
&= E(x_0 x_0^T) E(F_0^T) E(F_1^T) \cdots E(F_{k-2}^T) E(F_{k-1}^T) E(\tilde{F}_k^T) = 0,
\end{aligned}$$
since $E(\tilde{F}_k) = 0$.
(b2):
Similarly, $E(x_0 \tilde{\omega}_k^T) = 0$.
(c1):
Without loss of generality, we consider only the case $k > l$:
$$E(\tilde{v}_k \tilde{v}_l^T) = E(v_k v_l^T + v_k x_l^T \tilde{F}_l^T + \tilde{F}_k x_k v_l^T + \tilde{F}_k x_k x_l^T \tilde{F}_l^T)$$
Since $x_l$ depends linearly on $F_{l-1} F_{l-2} \cdots F_1 F_0 x_0$, $v_{l-1}$, and $F_{l-1} \cdots F_{l-i+1} v_{l-i}$, $i = 2, 3, \ldots, l$, and $\{F_k, v_k, k = 0, 1, 2, \ldots\}$ is independent of $x_0$:
$$\begin{aligned}
E(v_k x_l^T \tilde{F}_l^T) &= 0, \\
E(\tilde{F}_k x_k v_l^T) &= E\{\tilde{F}_k (F_{k-1} x_{k-1} + v_{k-1}) v_l^T\} = E(\tilde{F}_k F_{k-1} x_{k-1} v_l^T) \\
&= E\{\tilde{F}_k F_{k-1} (F_{k-2} x_{k-2} + v_{k-2}) v_l^T\} = E(\tilde{F}_k F_{k-1} F_{k-2} x_{k-2} v_l^T) + E(\tilde{F}_k F_{k-1} v_{k-2} v_l^T) \\
&= \cdots = E(\tilde{F}_k F_{k-1} \cdots F_l x_l v_l^T) + E(\tilde{F}_k F_{k-1} \cdots F_{l+1} v_l v_l^T) \\
&= E(\tilde{F}_k) E(F_{k-1}) \cdots E(F_{l+1}) E(v_l v_l^T) = 0.
\end{aligned}$$
Noting that $x_l$ and $\{F_k, k = 0, 1, 2, \ldots\}$ are independent, by Lemma 1 we have:
$$\begin{aligned}
E(\tilde{F}_k x_k x_l^T \tilde{F}_l^T) &= E\{\tilde{F}_k (F_{k-1} x_{k-1} + v_{k-1}) x_l^T \tilde{F}_l^T\} = E(\tilde{F}_k F_{k-1} x_{k-1} x_l^T \tilde{F}_l^T) \\
&= E(\tilde{F}_k F_{k-1} F_{k-2} x_{k-2} x_l^T \tilde{F}_l^T + \tilde{F}_k F_{k-1} v_{k-2} x_l^T \tilde{F}_l^T) \\
&= \cdots = E(\tilde{F}_k F_{k-1} F_{k-2} \cdots F_{l+1} F_l x_l x_l^T \tilde{F}_l^T) + E(\tilde{F}_k F_{k-1} F_{k-2} \cdots F_{l+1} v_l x_l^T \tilde{F}_l^T) \\
&= E(\tilde{F}_k) E(F_{k-1} F_{k-2} \cdots F_{l+1}) E(F_l x_l x_l^T \tilde{F}_l^T) + E(\tilde{F}_k) E(F_{k-1} \cdots F_{l+1}) E(v_l x_l^T \tilde{F}_l^T) = 0,
\end{aligned}$$
since $E(\tilde{F}_k) = 0$ and $\tilde{F}_k$ is independent of the remaining factors (the inner expectations exist by Lemma 1).
Hence, $E(\tilde{v}_k \tilde{v}_l^T) = 0$.
(c2):
Again, consider only the case $k > l$. Since $x_l$ depends linearly on $F_{l-1} F_{l-2} \cdots F_1 F_0 x_0$, $v_{l-1}$, and $F_{l-1} F_{l-2} \cdots F_{l-i+1} v_{l-i}$, $i = 2, 3, \ldots, l$, $\{F_k, H_k, v_k, \omega_k, k = 0, 1, 2, \ldots\}$ is independent of $x_0$, and $\{F_k, H_k, k = 0, 1, 2, \ldots\}$ is independent of $x_l$, we have:
$$\begin{aligned}
E(\tilde{\omega}_k \tilde{\omega}_l^T) &= E(\omega_k \omega_l^T + \omega_k x_l^T \tilde{H}_l^T + \tilde{H}_k x_k \omega_l^T + \tilde{H}_k x_k x_l^T \tilde{H}_l^T) = E(\tilde{H}_k x_k x_l^T \tilde{H}_l^T) \\
&= E\{\tilde{H}_k (F_{k-1} x_{k-1} + v_{k-1}) x_l^T \tilde{H}_l^T\} = E(\tilde{H}_k F_{k-1} x_{k-1} x_l^T \tilde{H}_l^T) \\
&= E(\tilde{H}_k F_{k-1} F_{k-2} x_{k-2} x_l^T \tilde{H}_l^T + \tilde{H}_k F_{k-1} v_{k-2} x_l^T \tilde{H}_l^T) \\
&= \cdots = E(\tilde{H}_k F_{k-1} F_{k-2} \cdots F_{l+1} F_l x_l x_l^T \tilde{H}_l^T) + E(\tilde{H}_k F_{k-1} F_{k-2} \cdots F_{l+1} v_l x_l^T \tilde{H}_l^T) = 0.
\end{aligned}$$
(c3):
$$E(\tilde{v}_k \tilde{\omega}_l^T) = E(v_k \omega_l^T + v_k x_l^T \tilde{H}_l^T + \tilde{F}_k x_k \omega_l^T + \tilde{F}_k x_k x_l^T \tilde{H}_l^T).$$
When $k > l$:
$$\begin{aligned}
E(\tilde{v}_k \tilde{\omega}_l^T) &= E(\tilde{F}_k x_k x_l^T \tilde{H}_l^T) = E\{\tilde{F}_k (F_{k-1} x_{k-1} + v_{k-1}) x_l^T \tilde{H}_l^T\} = E(\tilde{F}_k F_{k-1} x_{k-1} x_l^T \tilde{H}_l^T) \\
&= E(\tilde{F}_k F_{k-1} F_{k-2} x_{k-2} x_l^T \tilde{H}_l^T) + E(\tilde{F}_k F_{k-1} v_{k-2} x_l^T \tilde{H}_l^T) \\
&= \cdots = E(\tilde{F}_k F_{k-1} F_{k-2} \cdots F_{l+1} F_l x_l x_l^T \tilde{H}_l^T + \tilde{F}_k F_{k-1} F_{k-2} \cdots F_{l+1} v_l x_l^T \tilde{H}_l^T) = 0.
\end{aligned}$$
When $k < l$:
$$\begin{aligned}
E(v_k x_l^T \tilde{H}_l^T) &= E\{v_k (F_{l-1} x_{l-1} + v_{l-1})^T \tilde{H}_l^T\} = E(v_k x_{l-1}^T F_{l-1}^T \tilde{H}_l^T) \\
&= E(v_k x_{l-2}^T F_{l-2}^T F_{l-1}^T \tilde{H}_l^T) + E(v_k v_{l-2}^T F_{l-2}^T F_{l-1}^T \tilde{H}_l^T) \\
&= \cdots = E(v_k x_k^T F_k^T F_{k+1}^T \cdots F_{l-1}^T \tilde{H}_l^T) + E(v_k v_k^T F_{k+1}^T \cdots F_{l-1}^T \tilde{H}_l^T) = 0,
\end{aligned}$$
$$\begin{aligned}
E(\tilde{F}_k x_k x_l^T \tilde{H}_l^T) &= E\{\tilde{F}_k x_k (F_{l-1} x_{l-1} + v_{l-1})^T \tilde{H}_l^T\} = E(\tilde{F}_k x_k x_{l-1}^T F_{l-1}^T \tilde{H}_l^T) \\
&= E\{\tilde{F}_k x_k (F_{l-2} x_{l-2} + v_{l-2})^T F_{l-1}^T \tilde{H}_l^T\} = \cdots = E\{\tilde{F}_k x_k (F_k x_k + v_k)^T F_{k+1}^T \cdots F_{l-1}^T \tilde{H}_l^T\} \\
&= E(\tilde{F}_k x_k x_k^T F_k^T F_{k+1}^T \cdots F_{l-1}^T \tilde{H}_l^T) = E\big(\tilde{F}_k E(x_k x_k^T) F_k^T\big) E(F_{k+1}^T \cdots F_{l-1}^T) E(\tilde{H}_l^T) = 0.
\end{aligned}$$
Thus, $E(\tilde{v}_k \tilde{\omega}_l^T) = 0$.
(d):
$$\begin{aligned}
E(\tilde{v}_k \tilde{v}_k^T) &= E(v_k v_k^T + v_k x_k^T \tilde{F}_k^T + \tilde{F}_k x_k v_k^T + \tilde{F}_k x_k x_k^T \tilde{F}_k^T) \\
&= R_{v_k} + E(\tilde{F}_k x_k x_k^T \tilde{F}_k^T) = R_{v_k} + E(\tilde{F}_k E(x_k x_k^T) \tilde{F}_k^T)
\end{aligned}$$
Similarly, $E(\tilde{\omega}_k \tilde{\omega}_k^T) = R_{\omega_k} + E(\tilde{H}_k E(x_k x_k^T) \tilde{H}_k^T)$.
Q.E.D.
By Lemma 2, system (9), (10) satisfies all conditions of the standard Kalman filtering. Hence, the recursive state estimate of the new system, i.e., Theorem 1, follows immediately.

Acknowledgments

This work was supported in part by the NSF of China (grants #60874107 and #10826101) and by Project 863 through grant 2006AA12A104.

References and Notes

  1. De Koning, W.L. Optimal Estimation of Linear Discrete-Time Systems with Stochastic Parameters. Automatica 1984, 20, 113–115.
  2. Nahi, N.E. Optimal Recursive Estimation with Uncertain Observation. IEEE Trans. Inf. Theory 1969, 15, 457–462.
  3. Sinopoli, B.; Schenato, L.; Franceschetti, M.; Poolla, K.; Jordan, M.I.; Sastry, S.S. Kalman Filtering with Intermittent Observations. IEEE Trans. Autom. Control 2004, 49, 1453–1464.
  4. Liu, X.H.; Goldsmith, A. Kalman Filtering with Partial Observation Losses. In Proceedings of the 43rd IEEE Conference on Decision and Control, Atlantis, Paradise Island, Bahamas, December 14–17, 2004; pp. 4180–4186.
  5. Luo, D.D.; Zhu, Y.M. Applications of Random Parameter Matrices Kalman Filtering in Uncertain Observation and Multi-Model Systems. In Proceedings of the IFAC World Congress, Seoul, Korea, July 6–11, 2008.
  6. Zhu, Y.M. Multisensor Decision and Estimation Fusion; Kluwer Academic Publishers: Boston, MA, USA, 2003.
  7. Song, E.B.; Zhu, Y.M.; Zhou, J.; You, Z.S. Optimal Kalman Filtering Fusion with Cross-Correlated Sensor Noises. Automatica 2007, 43, 1450–1456.
  8. Goodwin, G.C.; Payne, R. Dynamic System Identification: Experimental Design and Data Analysis; Academic Press: New York, NY, USA, 1977.
  9. Haykin, S. Adaptive Filter Theory; Prentice-Hall: Englewood Cliffs, NJ, USA, 1996.
  10. Ljung, L. System Identification: Theory for the User; Prentice-Hall: Englewood Cliffs, NJ, USA, 1987.
  11. Chong, C.Y.; Chang, K.C.; Mori, S. Distributed Tracking in Distributed Sensor Networks. In Proceedings of the 1986 American Control Conference, Seattle, WA, USA, June 1986.
  12. Hashemipour, H.R.; Roy, S.; Laub, A.J. Decentralized Structures for Parallel Kalman Filtering. IEEE Trans. Autom. Control 1988, 33, 88–93.
  13. Bar-Shalom, Y. Multitarget-Multisensor Tracking: Advanced Applications; Artech House: Norwood, MA, USA, 1990; Volume 1.
  14. Bar-Shalom, Y.; Li, X.R. Multitarget-Multisensor Tracking: Principles and Techniques; YBS Publishing: Storrs, CT, USA, 1995.
  15. Liggins, M.E.; Chong, C.-Y.; Kadar, I.; Alford, M.G.; Vannicola, V.; Thomopoulos, S. Distributed Fusion Architectures and Algorithms for Target Tracking. Proc. IEEE 1997, 85, 95–107.
  16. Rao, B.S.; Durrant-Whyte, H.F. Fully Decentralized Algorithm for Multisensor Kalman Filtering. IEE Proc. D Control Theory Appl. 1991, 138, 413–420.
Figure 1. Comparison of standard Kalman filtering fusion and random Kalman filtering fusion.
Figure 2. Comparison of standard Kalman filtering fusion and random Kalman filtering fusion.
Figure 3. Comparison of IMM and random Kalman filtering.
