Article

Multi-Sensor Consensus Estimation of State, Sensor Biases and Unknown Input

Key Laboratory of Information Fusion Technology, Ministry of Education, School of Automation, Northwestern Polytechnical University, Xi’an 710129, China
*
Author to whom correspondence should be addressed.
Sensors 2016, 16(9), 1407; https://doi.org/10.3390/s16091407
Submission received: 12 March 2016 / Revised: 21 August 2016 / Accepted: 23 August 2016 / Published: 1 September 2016
(This article belongs to the Special Issue Advances in Multi-Sensor Information Fusion: Theory and Applications)

Abstract

This paper addresses the problem of the joint estimation of the system state and generalized sensor biases (GSBs) under a common unknown input (UI) in the case of bias evolution in a heterogeneous sensor network. First, the equivalent UI-free GSB dynamic model is derived and the local optimal estimates of the system state and sensor bias are obtained at each sensor node. Second, based on the state and bias estimates that each node obtains from its neighbors, the UI is estimated via the least-squares method, and the state estimates are then fused via consensus processing. Finally, the multi-sensor bias estimates are further refined based on the consensus estimate of the UI. A numerical example of distributed multi-sensor target tracking is presented to illustrate the proposed filter.

1. Introduction

The well-known Kalman filter is optimal given a linear Gaussian model, but its performance will deteriorate in the presence of unknown biases, such as the unknown and time-varying delays in chemical processes [1,2,3,4], faults or failures in fault-tolerant diagnosis and control systems [5,6], registration errors in multi-sensor fusion [7,8,9,10], or inertial drift in navigation [11,12,13,14]. In general, such a bias is represented as an unknown input (UI) to the nominal model. To the best of our knowledge, approaches to UI modeling and the design of corresponding filters typically fall into one of the following categories.
The first type of UI is zero-mean random noise with unknown covariance. In the case of the stationary noise processes in linear dynamic systems, the covariance can be identified via Bayesian or maximum likelihood estimation. A corresponding filter has been applied for orbit determination for near-earth satellites [15,16]. Recently, this type of filter design has been extended to time-varying covariance [17] and jump Markov stochastic systems [18]. Furthermore, an M-robust estimator [19] has been derived for the simultaneous adaptive estimation of unknown states and observation noise statistics. An auto-covariance least-squares method [20] has been presented to achieve lower-variance covariance estimates along with the necessary and sufficient conditions for the uniqueness of the estimated covariances. By treating unknown covariances as missing data, the adaptive estimation problem has been transformed into a problem of the joint optimization of state estimation and parameter identification, allowing it to be solved in an expectation-maximization (EM) iterative processing framework [21].
The second type of UI is unknown but deterministic. Using least-squares estimation and moving-window hypothesis testing, the corresponding filters can cope with cases in which the UI is piecewise constant [22] or a sum of basis functions with piecewise-constant weights [23]. The problem of state estimation with UIs in both the dynamic and measurement models has been addressed using a joint EM optimization scheme for state estimation, parameter identification and iteration-termination decision-making [24]; this approach has been extended to a distributed EM algorithm for sensor networks [25] and further to an adaptive divided difference filter for nonlinear systems with multiplicative parameters [26].
The third type of UI is completely arbitrary, without the availability of any prior knowledge about its evolution. An asymptotically stable and UI-decoupled observer has been derived [27] based on the condition that the rank of its distribution matrix must be less than that of the UI. For the case in which the UI appears only in the process equation, a combination of least-squares estimation and the Kalman filter has been proposed for joint state and UI estimation, and the necessary and sufficient condition for the existence of such a joint estimator has also been presented [28]. In addition, this result has been extended to the case in which the UI appears in both the process and measurement models [29].
The fourth type of UI is norm-bounded, and robust offline filters have been designed to minimize the gain of the transfer function from the UI to the estimation error [30]. As an alternative approach, a method of linear minimax regret estimation has been developed to minimize the worst-case regret over all bounded data [31].
The fifth type of UI is characterized by a randomly switching parameter obeying a known Markov chain. The corresponding solutions fall within the scope of multiple model estimators [32,33,34,35,36,37], such as the interacting multiple model (IMM), which is well known in maneuvering-target tracking. Motivated by the goal of establishing a general framework for joint state estimation and data association in clutter for the tracking of a maneuvering target, the linear minimum-mean-square-error (MMSE) estimator has been derived for discrete-time Markovian jump linear systems with stochastic coefficient matrices [38].
In general, the filters discussed above are UI-specific because of the significant differences between both the different types of UIs and the solutions designed to address them. However, in many practical applications, much more complicated UIs may be encountered. Recently, the minimum upper bound filter (MUBF) was proposed for a linear stochastic system corrupted by a generalized UI representing an arbitrary linear combination of dynamic UIs, random UIs, and deterministic UIs [39]. The result was further extended to a discrete-time non-linear stochastic system with a UI in its measurements, and an iterative optimization method for joint estimation and parameter identification was derived [40]. In multi-sensor bias estimation [41], the concept of a generalized sensor bias (GSB) has been proposed, which is represented by a dynamic model driven by a structured UI. By deriving an equivalent state-free measurement model and the corresponding UI-free dynamic model for the GSB, the LMMSE can be obtained via the orthogonal principle.
However, the application of the resultant filter is limited to the case in which all sensors have an identical measurement matrix, and only the GSB can be estimated. This problem motivates us to present and solve the problem of the joint estimation of the system state and the GSB under a UI in the case of GSB evolution.
This paper presents the joint estimation of the system state and the GSB under a common UI in the case of bias evolution in a heterogeneous sensor network. To the best of the authors' knowledge, this is the first attempt to achieve this kind of joint estimation. In detail, a two-stage cooperative strategy for pursuing consistent estimates in the distributed network is proposed. First, a joint UI-free evolution model is derived, along with the necessary and sufficient condition for its existence. In the derived model, the process noise and measurement noise are correlated, and the optimal local MMSE estimator with correlated noise is obtained. Second, based on the local estimates, global estimates of the target state and the UI are obtained via network consensus, and the GSB estimates are then refined using the consensus estimate of the UI. The optimal estimates of the target state, the GSBs and the UI are finally obtained.
Throughout the paper, the superscripts "$-1$", "$T$" and "$+$" represent the inverse, transpose and Moore-Penrose pseudo-inverse operations, respectively; $I$ and $0$ represent the identity matrix and the zero matrix of proper dimensions, respectively; $0_{m \times n}$ represents the zero matrix of dimensions $m \times n$; $\mathrm{diag}\{\cdot\}$ indicates a (block-)diagonal matrix; $\mathrm{col}\{\cdot\}$ denotes column augmentation; $E\{\cdot\}$ denotes the mathematical expectation operator; the superscript $i$ indicates a matrix or variable related to the $i$th sensor; $\mathrm{rank}\{\cdot\}$ denotes the rank of the specified matrix; the subscripts "$i$" and "$j$" indicate the $i$th and $j$th subblocks of a matrix, respectively; "$\hat{\ }$" and "$\tilde{\ }$" indicate the estimate and the residual of a random vector, respectively; the subscript "$k|k+1$" represents the estimate of a vector at time $k$ using measurements up to time $k+1$; and $\otimes$ denotes the Kronecker product.

2. Problem Formulation

Sensor bias (SB) is widespread in multi-sensor systems and originates from two types of sources. SBs of the first type are local SBs (LSBs), which are independent among sensors, such as calibration errors, navigation biases and sensor faults/drift. SBs of the second type are global SBs (GLSBs, also called the UI in this work), which are common to all sensors, such as electronic countermeasures (ECMs) in target tracking. A GSB model has been proposed based on dynamic evolution driven by both independent LSBs and a common UI [41]. However, the corresponding filter has two shortcomings: first, all sensors must have an identical measurement matrix, and second, the system state and the UI cannot be estimated simultaneously because the SB estimate is obtained by deriving an equivalent GSB model that neglects the system state and the UI. Here, we present the two assumptions adopted in our work.
Assumption 1. 
All measurements made by sensors in the network are continuously corrupted by the GSB, which includes both LSBs and the GLSB.
Assumption 2. 
The network topology is fixed, and the target can be monitored by all sensors throughout the entire process. There is no fusion centre or lead sensor in this sensor network. In other words, all sensors are equal in status, and they reach a network consensus by exchanging information with their neighbours.
Consider a scenario of collaborative target tracking within a sensor network, as shown in Figure 1. A target is continuously moving within a surveillance zone. Each sensor node obtains raw measurements of the target corrupted by the GSB. At each step, each sensor obtains local estimates of the target state and GSB, exchanges its local system-state estimate with its neighbors, and obtains fused estimates of the target state and UI via network consensus. In other words, this sensor fusion process can be regarded as the joint distributed estimation of the target state, GSB and UI without the limitation that all sensor measurement matrices must be identical.
Consider a network of $s$ sensor nodes. The network topology is represented by an undirected graph $G = (V, E, A)$, where $V = \{1, 2, \ldots, s\}$ denotes the set of nodes, $E \subseteq V \times V$ is the set of permissible communication links, and $A = [a_{ij}]$ is the graph topology matrix, the elements of which are defined as
$$a_{ij} = \begin{cases} 1, & (i,j) \in E \ \text{or} \ i = j \\ 0, & (i,j) \notin E \end{cases}$$
Let $N_i = \{j \in V : a_{ij} > 0, j \neq i\}$ denote the set of neighbors of the $i$th sensor, and let $J_i = N_i \cup \{i\}$ denote the inclusive neighbor set of sensor $i$. Here, the presented network is assumed to be a connected graph. Otherwise, at least two separate sub-networks would exist, and hence, information could only be fused within each sub-network instead of throughout the entire network because information could not be exchanged between the two.
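For illustration, the topology matrix $A$ and the neighbor sets $N_i$ and $J_i$ can be built in a few lines of Python; the edge list below is an assumed example, not one of the networks used later.

```python
import numpy as np

s = 4                                        # number of sensor nodes (illustrative)
edges = [(0, 1), (1, 2), (2, 3), (0, 3)]     # assumed undirected communication links

A = np.eye(s)                                # a_ij = 1 when i = j
for i, j in edges:                           # a_ij = a_ji = 1 when (i, j) is a link
    A[i, j] = A[j, i] = 1.0

# N_i: neighbors of node i (excluding i); J_i: inclusive neighbor set N_i U {i}
N = {i: {j for j in range(s) if A[i, j] > 0 and j != i} for i in range(s)}
J = {i: N[i] | {i} for i in range(s)}
print(J[0])                                  # e.g. {0, 1, 3}
```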
Here, we formulate the following problem of distributed sensor fusion in the presence of time-varying sensors subject to a common UI:
$$x_{k+1} = \Phi_k x_k + \zeta_k \tag{1}$$

$$y_{k+1}^i = H_{k+1}^i x_{k+1} + N_{k+1}^i b_{k+1}^i + v_{k+1}^i \tag{2}$$

$$b_{k+1}^i = F_k^i b_k^i + G_k^i d_k + w_k^i \tag{3}$$

where $x_k \in \mathbb{R}^n$ is the state vector; $y_k^i \in \mathbb{R}^{m_i}$ is the measurement vector of the $i$th sensor; $b_k^i \in \mathbb{R}^{p_i}$ is the $i$th GSB, where $i = 1, 2, \ldots, s$; $d_k \in \mathbb{R}^q$ is the common UI; and $\zeta_k$, $w_k^i$ and $v_k^i$ are all independent, zero-mean, Gaussian, white-noise components with covariances $Q_k$, $S_k^i$ and $R_k^i$, respectively. The initial target state $x_0$ and the $i$th GSB state $b_0^i$ are Gaussian distributed with known means $\bar{x}_0$ and $\bar{b}_0^i$ and associated covariances $\Sigma_0^x$ and $\Sigma_0^{i,b}$, respectively. $\Phi_k$, $H_k^i$, $N_k^i$, $F_k^i$ and $G_k^i$ are known and have appropriate dimensions. Under the assumption that $m_i > p_i \geq q$, the $G_k^i$ are of full column rank.
Our aim is to design an unbiased minimum-variance filter to jointly estimate the target state $x_k$, the GSBs $b_k^i$ and the UI $d_k$ given the distributed measurements recorded up to time $k$.
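To make the roles of Equations (1)–(3) concrete, the following minimal sketch simulates one target state, three sensor biases driven by a common UI, and the resulting biased measurements; all matrices, dimensions and noise levels here are illustrative placeholders rather than the Section 4 parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
s = 3                                            # number of sensors (illustrative)
Phi = np.array([[1.0, 1.0], [0.0, 1.0]])         # assumed target dynamics Phi_k
F = np.array([[0.9]])                            # assumed bias dynamics F_k^i
G = np.array([[1.0]])                            # assumed UI gain G_k^i
H = np.array([[1.0, 0.0]])                       # assumed measurement matrix H_k^i
Nb = np.array([[1.0]])                           # assumed bias coupling N_k^i

x = np.zeros(2)
b = [np.zeros(1) for _ in range(s)]
for k in range(5):
    d = np.array([np.sin(0.1 * k)])              # common unknown input d_k
    x = Phi @ x + 0.01 * rng.standard_normal(2)                  # Equation (1)
    for i in range(s):
        b[i] = F @ b[i] + G @ d + 0.01 * rng.standard_normal(1)  # Equation (3)
        y = H @ x + Nb @ b[i] + 0.1 * rng.standard_normal(1)     # Equation (2)
        print(k, i, y)
```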
Remark 1. 
The GSB $b_k^i$ in Equation (3) represents a generalized bias that includes multiple bias types as special cases:
(1). 
If $F_k^i = I$, $G_k^i = 0$ and $S_k^i = 0$, then $b_{k+1}^i = b_k^i$ represents a constant bias.
(2). 
If $F_k^i \neq 0$ and $G_k^i = 0$, then $b_{k+1}^i = F_k^i b_k^i + w_k^i$ represents a time-varying bias with a known dynamic model.
(3). 
If both $F_k^i = 0$ and $G_k^i = 0$, then $b_{k+1}^i = w_k^i$ represents a zero-mean random error.
(4). 
If $G_k^i \neq 0$, then $b_k^i$ represents a dynamically evolving bias driven by an arbitrary UI.
The second type of UI mentioned in the introduction is equivalent to case (1). The third type is equivalent to case (4). The first and fourth types can be attributed to case (4) when they can be decoupled from Equation (3).
The above four types of UI can therefore be accommodated within the single-model system proposed in this work. The fifth type of UI, however, is usually regarded as a randomly switching parameter obeying a known Markov chain and thus belongs to the class of multiple-model systems, so it cannot be accommodated directly within the presented model. Nevertheless, if a GSB model can be established for each mode of the Markov chain, then, according to Equation (3), the modal GSB model can be written as
$$b_{k+1}^{i,j} = F_k^{i,j} b_k^{i,j} + G_k^{i,j} d_k + w_k^{i,j}, \quad j = 1, 2, \ldots, r \tag{4}$$
where $r$ represents the number of states of the Markov chain. Meanwhile, according to Equation (10), the decoupling condition of the corresponding GSB model can also be confirmed as
$$\mathrm{rank}\{\bar{H}_{k+1}^i \bar{G}_k^{i,j}\} = \mathrm{rank}\{\bar{G}_k^{i,j}\}, \quad j = 1, 2, \ldots, r \tag{5}$$
These decoupling conditions for a UI obeying a Markov chain are more stringent than those considered in this work, as Equation (5) must hold for every mode $j = 1, \ldots, r$. When they do hold, the fifth type of UI can also be accommodated within the proposed framework.
Remark 2. 
There are two distinct strategies of networked sensor fusion. The first is the centralized strategy [41,42], i.e., all sensors transmit their measurements to a fusion centre, which is responsible for integrated data processing for state estimation and/or parameter identification. The second is the distributed strategy [25], i.e., each sensor processes its own measurements and then shares its processing results with its neighbors in an iterative manner to achieve consistent fusion. Obviously, the distributed fusion strategy allows the computational burden to be shared among sensors and consequently is more desirable for large-scale networks. However, distributed fusion is much more complex, involving both local processing and global fusion. Currently, distributed fusion in the presence of time-varying sensor biases driven by a common UI remains an open problem.
Remark 3. 
Concerning our proposed problem, there exist two possible solution approaches. One is iterative optimization between state estimation and UI identification, as in the well-known expectation-maximization (EM) scheme [25]. As shown later via simulation, the EM scheme is not desirable here because it must treat the UI as a constant or slowly varying parameter in each iterative window. The other approach is UI decoupling plus state estimation, which serves as the basis for our proposed method, as derived later. The main technical difficulty of this approach is to determine how to adaptively decouple the UI and then fuse the local state and UI estimates.

3. Main Results

Because of the presence of the UI, the target state and GSB cannot be estimated directly. In this section, an equivalent UI-free dynamic model of the bias is derived and the necessary and sufficient conditions for the UI-free model are explored. Afterwards, optimal estimators, in the sense of minimum variance, are proposed for the target state x k and the local GSBs b k i .

3.1. Local LMMSE Filter for System State and Local SB

Lemma 1. 
For matrices Γ and Ψ with proper dimensions, if the matrix Ψ is of full column rank, the equality
$$\Psi \left( I - (\Gamma \Psi)^+ \Gamma \Psi \right) = 0 \tag{6}$$
will hold if and only if
$$\mathrm{rank}\{\Gamma \Psi\} = \mathrm{rank}\{\Psi\} \tag{7}$$
Proof. 
See Appendix A. ☐
For convenience of derivation, the model defined in Equations (1)–(3) can be rewritten as follows:
$$z_{k+1}^i = A_k^i z_k^i + \bar{G}_k^i d_k + \bar{\zeta}_k^i \tag{8}$$
$$y_{k+1}^i = \bar{H}_{k+1}^i z_{k+1}^i + v_{k+1}^i \tag{9}$$
where $z_k^i = \mathrm{col}\{x_k, b_k^i\}$, $A_k^i = \mathrm{diag}\{\Phi_k, F_k^i\}$, $\bar{G}_k^i = \mathrm{col}\{0_{n \times q}, G_k^i\}$, $\bar{\zeta}_k^i = \mathrm{col}\{\zeta_k, w_k^i\}$ and $\bar{H}_k^i = [H_k^i \;\; N_k^i]$. It is easy to verify that $\bar{S}_k^i \triangleq E\{\bar{\zeta}_k^i (\bar{\zeta}_k^i)^T\} = \mathrm{diag}\{Q_k, S_k^i\}$.
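As a brief sketch, the block construction of $z_k^i$, $A_k^i$, $\bar{G}_k^i$ and $\bar{H}_k^i$ can be written as follows; the component matrices are illustrative placeholders chosen only to show the block structure.

```python
import numpy as np

Phi = np.array([[1.0, 1.0], [0.0, 1.0]])        # Phi_k (n x n)
F_i = np.array([[0.9]])                         # F_k^i (p_i x p_i)
G_i = np.array([[1.0]])                         # G_k^i (p_i x q)
H_i = np.array([[1.0, 0.0]])                    # H_k^i (m_i x n)
N_i = np.array([[1.0]])                         # N_k^i (m_i x p_i)

n, p_i, q = Phi.shape[0], F_i.shape[0], G_i.shape[1]

A_i = np.block([[Phi, np.zeros((n, p_i))],
                [np.zeros((p_i, n)), F_i]])     # A_k^i = diag{Phi_k, F_k^i}
G_bar = np.vstack([np.zeros((n, q)), G_i])      # G_bar_k^i = col{0_{n x q}, G_k^i}
H_bar = np.hstack([H_i, N_i])                   # H_bar_k^i = [H_k^i  N_k^i]
print(A_i.shape, G_bar.shape, H_bar.shape)      # (3, 3) (3, 1) (1, 3)
```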
Theorem 1. 
(UI decoupling). Consider the system model defined in Equations (8) and (9). If and only if
$$\mathrm{rank}\{\bar{H}_{k+1}^i \bar{G}_k^i\} = \mathrm{rank}\{\bar{G}_k^i\} \tag{10}$$
then the UI will be decoupled from $z_{k+1}^i$, i.e., there will exist the following UI-free dynamic model of the sensor bias:
$$z_{k+1}^i = \bar{F}_k^i A_k^i z_k^i + \bar{C}_k^i y_{k+1}^i + \bar{B}_k^i \bar{w}_k^i \tag{11}$$
where
$$\Pi_k^i = (\bar{H}_{k+1}^i \bar{G}_k^i)^+ \tag{12}$$
$$\bar{C}_k^i = \bar{G}_k^i \Pi_k^i \tag{13}$$
$$\bar{F}_k^i = I_{n + p_i} - \bar{C}_k^i \bar{H}_{k+1}^i \tag{14}$$
$$\bar{w}_k^i = \mathrm{col}\{\bar{\zeta}_k^i, v_{k+1}^i\} \tag{15}$$
$$\bar{B}_k^i = [\bar{F}_k^i \;\; -\bar{C}_k^i] \tag{16}$$
Proof. 
See Appendix B. ☐
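The construction in Theorem 1 is easy to reproduce numerically. The following sketch, with illustrative augmented matrices, checks the condition of Equation (10) and forms the matrices of Equations (12)–(16):

```python
import numpy as np

H_bar = np.array([[1.0, 0.0, 2.0],
                  [0.0, 1.0, 1.0]])              # H_bar_{k+1}^i, m_i x (n + p_i)
G_bar = np.array([[0.0], [0.0], [1.0]])          # G_bar_k^i = col{0, G_k^i}, (n + p_i) x q

# Equation (10): the UI can be decoupled only if the ranks match
assert np.linalg.matrix_rank(H_bar @ G_bar) == np.linalg.matrix_rank(G_bar)

Pi = np.linalg.pinv(H_bar @ G_bar)                       # Equation (12)
C_bar = G_bar @ Pi                                       # Equation (13)
F_bar = np.eye(H_bar.shape[1]) - C_bar @ H_bar           # Equation (14)
B_bar = np.hstack([F_bar, -C_bar])                       # Equation (16)

# The identity behind Lemma 1 / Equation (B5): G_bar (I_q - Pi H_bar G_bar) = 0
print(np.allclose(G_bar @ (np.eye(G_bar.shape[1]) - Pi @ H_bar @ G_bar), 0.0))
```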
Remark 4. 
If the matrices $\Gamma$ and $\Psi$ satisfy Equation (7), then the row rank of the matrix $\Gamma$ must be no less than the column rank of the matrix $\Psi$ in Lemma 1. As reflected in Equation (10), this means that the maximum number of UIs that can be decoupled from the system cannot exceed the number of independent measurement components of all sensors.
As shown in Equation (11), the measurements $y_k^i$ are introduced into the equivalent bias dynamic model to decouple $d_k$ from $b_k^i$. Defining $\bar{Q}_k^i \triangleq E\{\bar{w}_k^i (\bar{w}_k^i)^T\}$ and $M_{k+1}^i \triangleq E\{\bar{B}_k^i \bar{w}_k^i (v_{k+1}^i)^T\}$, we have
$$\bar{Q}_k^i = \mathrm{diag}\{\bar{S}_k^i, R_{k+1}^i\} \tag{17}$$
$$M_{k+1}^i = -\bar{C}_k^i R_{k+1}^i \tag{18}$$
Here, $M_{k+1}^i \neq 0$ means that the process noise $\bar{w}_k^i$ is correlated with the measurement noise $v_{k+1}^i$. Thus, designing an unbiased minimum-variance linear filter for the system defined in Equations (1)–(3) is equivalent to finding the MMSE estimate for the system defined in Equations (9) and (11) with correlated process and measurement noise.
Theorem 2. 
The MMSE estimators for the UI-free system defined in Equations (9) and (11) have the following recursion relations:
$$\hat{z}_{k+1|k}^i = \bar{F}_k^i A_k^i \hat{z}_{k|k}^i + \bar{C}_k^i y_{k+1}^i \tag{19}$$
$$P_{k+1|k}^i = \bar{F}_k^i A_k^i P_{k|k}^i (\bar{F}_k^i A_k^i)^T + \bar{B}_k^i \bar{Q}_k^i (\bar{B}_k^i)^T + \bar{C}_k^i R_{k+1}^i (\bar{C}_k^i)^T \tag{20}$$
$$\hat{z}_{k+1|k+1}^i = \hat{z}_{k+1|k}^i + K_{k+1}^i \left( y_{k+1}^i - \bar{H}_{k+1}^i \hat{z}_{k+1|k}^i \right) \tag{21}$$
$$P_{k+1|k+1}^i = P_{k+1|k}^i - K_{k+1}^i \left( \bar{H}_{k+1}^i P_{k+1|k}^i + (M_{k+1}^i)^T \right) \tag{22}$$
where the estimator gain is
$$K_{k+1}^i = \left( P_{k+1|k}^i (\bar{H}_{k+1}^i)^T + M_{k+1}^i \right) \left( \bar{H}_{k+1}^i P_{k+1|k}^i (\bar{H}_{k+1}^i)^T + \bar{H}_{k+1}^i M_{k+1}^i + (\bar{H}_{k+1}^i M_{k+1}^i)^T + R_{k+1}^i \right)^{-1} \tag{23}$$
Proof. 
The term $\bar{C}_k^i y_{k+1}^i$ is an additive known input, and Equations (9) and (11) are linear with additive Gaussian noise. Hence, the optimal filter with correlated process and measurement noise [43] can be directly utilized. ☐
The framework and pseudo-code for the proposed local MMSE estimation are presented in Figure 2 and Table 1, respectively.
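A hedged sketch of one prediction/update cycle of the Theorem 2 recursion (Equations (19)–(23)) is given below; it assumes that $\bar{F}_k^i$, $\bar{C}_k^i$, $\bar{B}_k^i$ and $\bar{H}_{k+1}^i$ have already been built as in Theorem 1, and the function signature is an illustrative choice rather than a prescribed interface.

```python
import numpy as np

def local_mmse_step(z_hat, P, y_next, A, F_bar, C_bar, B_bar, H_bar, Q_bar, R_next):
    """One cycle of the UI-free local filter with correlated process/measurement noise."""
    M = -C_bar @ R_next                                          # Equation (18)
    FA = F_bar @ A
    # Prediction, Equations (19)-(20)
    z_pred = FA @ z_hat + C_bar @ y_next
    P_pred = FA @ P @ FA.T + B_bar @ Q_bar @ B_bar.T + C_bar @ R_next @ C_bar.T
    # Gain, Equation (23)
    S = H_bar @ P_pred @ H_bar.T + H_bar @ M + (H_bar @ M).T + R_next
    K = (P_pred @ H_bar.T + M) @ np.linalg.inv(S)
    # Update, Equations (21)-(22)
    z_upd = z_pred + K @ (y_next - H_bar @ z_pred)
    P_upd = P_pred - K @ (H_bar @ P_pred + M.T)
    return z_upd, P_upd
```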

3.2. Consensus Estimation of the Target State and the UI

Given the local estimate $\hat{z}_{k+1|k+1}^i$ of the $i$th sensor derived in the previous section, each sensor exchanges its own estimates with its neighbours to reach consistent consensus estimates of $x_k$ and $d_k$. The corresponding consensus estimation consists of two parts: an average consensus filter for $x_k$ and a distributed least-squares (DLS) estimate for $d_k$.

3.2.1. Local LS Estimator of UI

Let us rewrite the sensor bias evolution model defined in Equation (3) as
$$G_k^i d_k = b_{k+1}^i - F_k^i b_k^i - w_k^i = \left( \hat{b}_{k+1|k+1}^i - F_k^i \hat{b}_{k|k}^i \right) + \left( \tilde{b}_{k+1|k+1}^i - F_k^i \tilde{b}_{k|k}^i - w_k^i \right) = \bar{b}_k^i + \xi_k^i \tag{24}$$
where $\bar{b}_k^i = \hat{b}_{k+1|k+1}^i - F_k^i \hat{b}_{k|k}^i$ and $\xi_k^i = \tilde{b}_{k+1|k+1}^i - F_k^i \tilde{b}_{k|k}^i - w_k^i$, and $\hat{z}_{k+1|k+1}^i$ is the unbiased estimate of $z_{k+1}^i$ obtained in Section 3.1. It is easily confirmed that $E\{\xi_k^i\} = 0$. We can rewrite Equation (24) in the following compact form:
$$G_k d_k = B_k + \xi_k$$
where $G_k = \mathrm{col}\{G_k^j : j \in J_i\}$, $B_k = \mathrm{col}\{\bar{b}_k^j : j \in J_i\}$ and $\xi_k = \mathrm{col}\{\xi_k^j : j \in J_i\}$. Furthermore, based on the local GSB estimates of the $i$th node and its neighbours, the local DLS estimate of $d_k$ and the corresponding covariance at the $i$th sensor are obtained as follows:
$$\hat{d}_k^i = (G_k^T G_k)^{-1} G_k^T B_k \tag{25}$$
$$P_k^{dd,i} = (G_k^T G_k)^{-1} \tag{26}$$
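The stacked least-squares step of Equations (25) and (26) amounts to a few lines of linear algebra. The sketch below uses assumed neighbor data (two neighbors, each contributing a $2 \times 1$ matrix $G_k^j$ and a residual $\bar{b}_k^j$):

```python
import numpy as np

# Quantities received from the inclusive neighbor set J_i (illustrative values)
G_list = [np.array([[2.58], [-2.50]]), np.array([[2.58], [-2.50]])]      # G_k^j
b_bar_list = [np.array([1.30, -1.20]), np.array([1.25, -1.15])]          # b_bar_k^j

G_stack = np.vstack(G_list)                   # G_k = col{G_k^j : j in J_i}
B_stack = np.concatenate(b_bar_list)          # B_k = col{b_bar_k^j : j in J_i}

P_dd = np.linalg.inv(G_stack.T @ G_stack)     # Equation (26)
d_hat = P_dd @ G_stack.T @ B_stack            # Equation (25)
print(d_hat, P_dd)
```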

3.2.2. Average Consensus Filter for the Target State and the UI

Now, estimates of the target state and the UI have been obtained, but they are local. In other words, the estimates from different nodes are not identical to each other. Therefore, an average consensus fusion (ACF) is presented here to obtain a uniform result at each node via iterative interaction.
Let $W(l) = [w_{ij}(l)]$ denote the linear weight matrix at the $l$th step of iteration. Here, $w_{ij}(l)$ is the weight of node $j$ at node $i$, which satisfies
$$\sum_{j=1}^{s} w_{ij}(l) = 1, \quad \sum_{i=1}^{s} w_{ij}(l) = 1, \quad w_{ij}(l) \geq 0$$
Let $W_k^x(l) = [w_{ij}^x(l)]$ and $W_k^d(l) = [w_{ij}^d(l)]$ represent the weight matrices for the target state and the UI, respectively.
Given the consensus covariances of the state estimate and the UI estimate at node $i$ after the $l$th iteration, denoted by $P_{k|k}^{xx,i}(l)$ and $P_k^{dd,i}(l)$, respectively, we design the weight matrices as follows:
$$w_{ij}^x(l+1) = \begin{cases} \dfrac{\left[ \mathrm{tr}\{P_{k|k}^{xx,j}(l)\} \right]^{-1}}{\sum_{t \in J_i} \left[ \mathrm{tr}\{P_{k|k}^{xx,t}(l)\} \right]^{-1}}, & j \in J_i \\[2ex] 0, & \text{otherwise} \end{cases}$$
and
$$w_{ij}^d(l+1) = \begin{cases} \dfrac{\left[ \mathrm{tr}\{P_k^{dd,j}(l)\} \right]^{-1}}{\sum_{t \in J_i} \left[ \mathrm{tr}\{P_k^{dd,t}(l)\} \right]^{-1}}, & j \in J_i \\[2ex] 0, & \text{otherwise} \end{cases}$$
such that local estimates of higher accuracy will be assigned higher weights in the weighted fusion process.
Then, the initial values can be set to $\hat{x}_{k|k}^i(0) \triangleq \hat{x}_{k|k}^i$, $\hat{d}_k^i(0) \triangleq \hat{d}_k^i$, $P_{k|k}^{xx,i}(0) \triangleq P_{k|k}^{xx,i}$ and $P_k^{dd,i}(0) \triangleq P_k^{dd,i}$, where $\hat{x}_{k|k}^i$ and $P_{k|k}^{xx,i}$ are subblocks of $\hat{z}_{k|k}^i$ and $P_{k|k}^i$, respectively, i.e., $\hat{z}_{k|k}^i = \mathrm{col}\{\hat{x}_{k|k}^i, \hat{b}_{k|k}^i\}$ and $P_{k|k}^i = \begin{bmatrix} P_{k|k}^{xx,i} & P_{k|k}^{xb,i} \\ P_{k|k}^{bx,i} & P_{k|k}^{bb,i} \end{bmatrix}$.
Theorem 3. 
Consider the multi-sensor system defined in Equations (1)–(3), with topology $G = (V, E, A)$ and weight matrices $w_{ij}^x(l)$ and $w_{ij}^d(l)$ for the target state and the UI, respectively, at the $l$th step of iteration. The ACFs for $x_k$ and $d_k$ are
$$\hat{x}_{k|k}^i(l+1) = \sum_{j \in J_i} w_{ij}^x(l) \, \hat{x}_{k|k}^j(l) \tag{27}$$
$$\hat{d}_k^i(l+1) = \sum_{j \in J_i} w_{ij}^d(l) \, \hat{d}_k^j(l) \tag{28}$$
where
$$P_{k|k}^{xx,i}(l+1) = \sum_{j \in J_i} w_{ij}^x(l) \left[ P_{k|k}^{xx,j}(l) + \hat{x}_{k|k}^j(l) \hat{x}_{k|k}^j(l)^T \right] - \hat{x}_{k|k}^i(l+1) \hat{x}_{k|k}^i(l+1)^T \tag{29}$$
$$P_k^{dd,i}(l+1) = \sum_{j \in J_i} w_{ij}^d(l) \left[ P_k^{dd,j}(l) + \hat{d}_k^j(l) \hat{d}_k^j(l)^T \right] - \hat{d}_k^i(l+1) \hat{d}_k^i(l+1)^T \tag{30}$$
Proof. 
See Appendix C. ☐
At the $i$th node, Equations (27)–(30) are iterated until
$$\frac{\left| \mathrm{tr}\{P_{k|k}^{xx,i}(l+1)\} - \mathrm{tr}\{P_{k|k}^{xx,i}(l)\} \right|}{\mathrm{tr}\{P_{k|k}^{xx,i}(l)\}} \leq \eta_x \tag{31}$$
and
$$\frac{\left| \mathrm{tr}\{P_k^{dd,i}(l+1)\} - \mathrm{tr}\{P_k^{dd,i}(l)\} \right|}{\mathrm{tr}\{P_k^{dd,i}(l)\}} \leq \eta_d \tag{32}$$
where $\eta_x$ and $\eta_d$ are positive thresholds for the system state and the UI, respectively.
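The following sketch performs one ACF sweep for the target state, combining the trace-inverse weight design above with Equations (27) and (29) and the stopping test of Equation (31); the UI consensus is analogous. The neighbor sets, estimates and covariances are assumed values for illustration.

```python
import numpy as np

J = {0: [0, 1], 1: [0, 1, 2], 2: [1, 2]}                      # inclusive neighbor sets
x_hat = {0: np.array([1.0, 0.0]),
         1: np.array([1.2, 0.1]),
         2: np.array([0.9, -0.1])}                            # x_hat_{k|k}^i(l)
P = {i: (1.0 + 0.5 * i) * np.eye(2) for i in range(3)}        # P_{k|k}^{xx,i}(l)

def consensus_sweep(x_hat, P, J):
    x_new, P_new = {}, {}
    for i, Ji in J.items():
        inv_tr = {j: 1.0 / np.trace(P[j]) for j in Ji}        # trace-inverse weights
        w = {j: inv_tr[j] / sum(inv_tr.values()) for j in Ji}
        x_new[i] = sum(w[j] * x_hat[j] for j in Ji)           # Equation (27)
        P_new[i] = sum(w[j] * (P[j] + np.outer(x_hat[j], x_hat[j])) for j in Ji) \
                   - np.outer(x_new[i], x_new[i])             # Equation (29)
    return x_new, P_new

eta_x = 0.1
x_next, P_next = consensus_sweep(x_hat, P, J)
stop = {i: abs(np.trace(P_next[i]) - np.trace(P[i])) / np.trace(P[i]) <= eta_x
        for i in J}                                           # Equation (31)
print(stop)
```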

3.2.3. Refinement of the Local Bias Estimates

After the consensus iterations for $\hat{x}_{k|k}^i$ and $\hat{d}_{k-1}^i$, the sensor biases can be refined according to Equation (3), that is,
$$\hat{b}_{k|k}^i = F_{k-1}^i \hat{b}_{k-1|k-1}^i + G_{k-1}^i \hat{d}_{k-1}^i \tag{33}$$
where $\hat{d}_{k-1}^i$ is the consensus estimate of the UI obtained at the $k$th sampling instant. The covariance of $\hat{b}_{k|k}^i$ is as follows:
$$P_{k|k}^{bb,i} = F_{k-1}^i P_{k-1|k-1}^{bb,i} (F_{k-1}^i)^T + G_{k-1}^i P_{k-1}^{dd,i} (G_{k-1}^i)^T + S_{k-1}^i \tag{34}$$
Thus, the estimates $\hat{b}_{k|k}^i$, $P_{k|k}^{bb,i}$, $\hat{x}_{k|k}^i$, $P_{k|k}^{xx,i}$, $\hat{d}_{k-1}^i$ and $P_{k-1}^{dd,i}$ are output as the final results at the $k$th sampling instant.
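A minimal sketch of this refinement step (Equations (33) and (34)) for one sensor is shown below; the model matrices are taken from the numerical example in Section 4, while the previous bias estimate and the consensus UI estimate are assumed values.

```python
import numpy as np

F_prev = np.array([[-0.05, -0.84], [0.5170, 0.8069]])    # F_{k-1}^i
G_prev = np.array([[2.58], [-2.5008]])                   # G_{k-1}^i
S_prev = np.array([[0.36, 0.342], [0.342, 0.3249]])      # S_{k-1}^i

b_prev = np.array([1.0, 1.0])                            # b_hat_{k-1|k-1}^i (assumed)
P_bb_prev = 0.5 * np.eye(2)                              # P_{k-1|k-1}^{bb,i} (assumed)
d_cons = np.array([0.8])                                 # consensus UI estimate (assumed)
P_dd_cons = np.array([[0.2]])                            # P_{k-1}^{dd,i} (assumed)

b_refined = F_prev @ b_prev + G_prev @ d_cons                                   # Equation (33)
P_bb = F_prev @ P_bb_prev @ F_prev.T + G_prev @ P_dd_cons @ G_prev.T + S_prev   # Equation (34)
print(b_refined, np.trace(P_bb))
```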
The framework and pseudo-code for the iterative consensus process are presented in Figure 3 and Table 2, respectively.
Remark 5. 
In this paper, a fusion algorithm for multiple cooperative sensors subject to interference from GSBs is proposed. However, when this algorithm is applied to continuous target tracking in a wireless sensor network (WSN), a new problem should be considered. Because of the sensors' limited detection capabilities and low energy storage, a WSN is divided into many clusters. Each cluster forms a sub-WSN and is responsible for tracking the target for a certain period of time. When the target moves out of the surveillance area of the current cluster, one or more new clusters must be generated dynamically; thus, a strategy for generating such clusters must be proposed. This strategy must specify how to choose the leader and followers in each cluster and how to transmit estimate information between clusters. This problem will be an interesting topic for future research regarding the joint optimization of signal processing and distributed cooperative detection; a first attempt in this direction has been made by our research group [44].
Remark 6. 
For the case of multiple-target tracking with dense clutter, the association relationships between the targets and measurements are unknown. Therefore, the problem of data association must be considered when our proposed algorithm is applied for multiple-target tracking. To this end, the optimal estimation for a single target must be extended to the joint optimization of state estimation and target identification for multiple targets. However, because of the limited detection capabilities of nodes in a WSN, as also mentioned in Remark 5, a WSN is often divided into many clusters, each of which monitors only a relatively small region. Consequently, the probability of multiple targets being present in any single cluster is statistically low. Therefore, our algorithm is focused only on single-target tracking.

4. Numerical Example

Consider examples of distributed networks tracking a single constant-velocity target. The target state is $[x_k, \dot{x}_k, y_k, \dot{y}_k]^T$, where $[x_k, \dot{x}_k]^T$ and $[y_k, \dot{y}_k]^T$ represent the target's Cartesian position and velocity along the x and y axes, respectively. The sampling interval is $T = 1$ s, and the number of samples is 60. Each sensor node synchronously measures the position of the target at each sampling instant. The measurements of each sensor node are corrupted by a local dynamic system bias, and all local system biases originate from a common UI. This situation can be modelled using Equations (1)–(3) with the following parameters:
$$\Phi_k = I_2 \otimes \begin{bmatrix} 1 & T \\ 0 & 1 \end{bmatrix}, \quad F_k^i = \begin{bmatrix} -0.05 & -0.84 \\ 0.5170 & 0.8069 \end{bmatrix}, \quad G_k^i = \begin{bmatrix} 2.58 \\ -2.5008 \end{bmatrix}, \quad H_k^i = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}$$
The initial position of the target is $[0, 0]^T$ m, and the initial velocity is $[50, 60]^T$ m/s. The initial value of the bias is $\bar{b}_0^i = [1, 1]^T$ m. The covariance matrices are as follows (all in m$^2$/s$^4$):
$$R_k^i = \begin{bmatrix} 16 & 0 \\ 0 & 16 \end{bmatrix}, \quad Q_k = 0.01 \times I_2 \otimes \left( \begin{bmatrix} T^2/2 \\ T \end{bmatrix} \begin{bmatrix} T^2/2 & T \end{bmatrix} \right), \quad S_k^i = \begin{bmatrix} 0.36 & 0.342 \\ 0.342 & 0.3249 \end{bmatrix}$$
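For reference, these parameters could be assembled in code as follows; this is a sketch only, and the ordering of the state vector as $[x_k, \dot{x}_k, y_k, \dot{y}_k]^T$ follows the description above.

```python
import numpy as np

T = 1.0
Phi = np.kron(np.eye(2), np.array([[1.0, T], [0.0, 1.0]]))        # Phi_k
F_i = np.array([[-0.05, -0.84], [0.5170, 0.8069]])                # F_k^i
G_i = np.array([[2.58], [-2.5008]])                               # G_k^i
H_i = np.array([[1.0, 0.0, 0.0, 0.0],
                [0.0, 0.0, 1.0, 0.0]])                            # H_k^i
R_i = 16.0 * np.eye(2)                                            # R_k^i
g = np.array([[T**2 / 2.0], [T]])
Q = 0.01 * np.kron(np.eye(2), g @ g.T)                            # Q_k
S_i = np.array([[0.36, 0.342], [0.342, 0.3249]])                  # S_k^i

x0 = np.array([0.0, 50.0, 0.0, 60.0])    # assumed ordering [x, xdot, y, ydot]
b0 = np.array([1.0, 1.0])                # initial bias b_bar_0^i
```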

4.1. A Distributed Network of Twelve Sensors

Consider an example of a distributed network of twelve sensors, for which the matrices $N_k^i$ of the 12 sensor nodes are
$$\begin{aligned} & N_k^1 = \mathrm{diag}(3, -5), \quad N_k^2 = \mathrm{diag}(7, 1), \quad N_k^3 = \mathrm{diag}(-3, 7), \quad N_k^4 = \mathrm{diag}(4, -6), \\ & N_k^5 = \mathrm{diag}(7, 3), \quad N_k^6 = \mathrm{diag}(2, -8), \quad N_k^7 = \mathrm{diag}(-7, 5), \quad N_k^8 = \mathrm{diag}(3, 8), \\ & N_k^9 = \mathrm{diag}(2, -5), \quad N_k^{10} = \mathrm{diag}(7, 4), \quad N_k^{11} = \mathrm{diag}(-9, 8), \quad N_k^{12} = \mathrm{diag}(8, 4) \end{aligned}$$
The dimensions of the surveillance area are 16 km × 14 km. The topological map is depicted in Figure 4a, and the sensors’ true locations and the target trajectory are depicted in Figure 4b.
The UI, plotted in Figure 5, is
$$d_k = \begin{cases} 0 \ \mathrm{m}, & 0 < t \leq 5 \\ 5 \sin\left( \dfrac{\pi (t - 5)}{10} \right) \mathrm{m}, & 5 < t \leq 25 \\ 0 \ \mathrm{m}, & 25 < t \leq 35 \\ 0.5 \, (t - 45) \ \mathrm{m}, & 35 < t \leq 55 \\ 0 \ \mathrm{m}, & 55 < t \leq 60 \end{cases}$$
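A small sketch generating this piecewise UI profile over the 60 samples ($T = 1$ s) is given below; the handling of the interval boundaries follows the expression above and is otherwise an assumption.

```python
import numpy as np

def ui(t):
    """Piecewise unknown input d_k as a function of time t in seconds."""
    if t <= 5:
        return 0.0
    if t <= 25:
        return 5.0 * np.sin(np.pi * (t - 5.0) / 10.0)
    if t <= 35:
        return 0.0
    if t <= 55:
        return 0.5 * (t - 45.0)
    return 0.0

d = np.array([ui(t) for t in range(1, 61)])   # d_k for k = 1, ..., 60 (metres)
```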
The histograms shown in Figure 6 depict the numbers of iteration steps for different thresholds. Clearly, the number of iterations required for consensus increases as the threshold decreases. Moreover, when the threshold is greater than 0.1, the number of iteration steps varies only slowly, whereas when the threshold is less than 0.1, the number of iteration steps, and hence the computational cost, increases sharply. Considering both computational cost and accuracy, the consensus threshold values for both the target-state and UI estimation are chosen to be $\eta_x = \eta_d = 0.1$.
In accordance with Theorem 1, the rank of the filter matrix is $\mathrm{rank}\{\bar{H}_{k+1}^i \bar{G}_k^i\} = \mathrm{rank}\{\bar{G}_k^i\} = 1$, which satisfies Equation (10).
The proposed algorithm is compared with the EM algorithm [25] in this simulation. The iteration threshold for the EM algorithm is $\delta = 10^{-4}$, and the processing window length is $l = 5$. We also apply the standard Kalman filter, which neglects the presence of the UI. The results of the Kalman filter show that the fusion performance deteriorates considerably when bias exists in the sensor measurements.
Because of the page limitation and the similarity in the data variation trends among the simulated sensors, only the estimation results for the fourth sensor are presented here. Figure 7 and Figure 8 show the target-state estimation errors for the Kalman filter (called ‘KF’), the EM method, and the proposed method without consensus (called ‘DP no consensus’) and with consensus (called ‘DP consensus’). It can be seen that in the DP methods, the UI is decoupled from the system. Furthermore, the estimation error of the DP consensus method is less than that of the DP no consensus method, and the estimation accuracy is also improved for the DP consensus method. It is also observed that the estimation results based on the KF and EM methods are not as good as those achieved using the DP methods. Two reasons for these findings can be identified. One is that the UI is ignored in the KF method, causing the KF algorithm to become invalid. The other is that the UI is assumed to take a constant value in the iterative process of the EM method. Similar conclusions can also be drawn from the results of the sensor bias estimation, as shown in Figure 9.
The estimation errors for the UI based on the DP methods with and without consensus are plotted in Figure 10. It can be seen that the UI estimation error is greatly reduced by the consensus processing.
The mean and peak values of the estimation errors for the target state $[x, \dot{x}, y, \dot{y}]^T$ and the bias $[b_x, b_y]^T$ based on the KF, EM and DP methods are listed in Table 3 and Table 4, respectively. It can be seen from Table 3 that the mean estimation error values of the DP consensus method are all smaller than those of the KF, EM and DP no consensus methods. As shown in Table 4, the peak estimation error values are also smallest for the DP consensus method. Thus, the proposed method is proven to be effective.
The run times are also compared as listed in Table 5. It can be seen that the run time for the DP no consensus method is merely 0.33 s because of its non-iteration operation. After iteration, the run time of the DP consensus method is 4.89 s, which is nevertheless shorter than the 6.57 s required for the EM method.

4.2. A Distributed Network of Sixty Sensors

Now we consider an example of a distributed network of sixty sensors, for which the matrices $N_k^i$ of the sensor nodes are
$$\begin{aligned} & N_k^1 = N_k^{13} = N_k^{25} = N_k^{37} = N_k^{49} = \mathrm{diag}(3, -5), \quad N_k^2 = N_k^{14} = N_k^{26} = N_k^{38} = N_k^{50} = \mathrm{diag}(7, 1), \\ & N_k^3 = N_k^{15} = N_k^{27} = N_k^{39} = N_k^{51} = \mathrm{diag}(-3, 7), \quad N_k^4 = N_k^{16} = N_k^{28} = N_k^{40} = N_k^{52} = \mathrm{diag}(4, -6), \\ & N_k^5 = N_k^{17} = N_k^{29} = N_k^{41} = N_k^{53} = \mathrm{diag}(7, 3), \quad N_k^6 = N_k^{18} = N_k^{30} = N_k^{42} = N_k^{54} = \mathrm{diag}(2, -8), \\ & N_k^7 = N_k^{19} = N_k^{31} = N_k^{43} = N_k^{55} = \mathrm{diag}(-7, 5), \quad N_k^8 = N_k^{20} = N_k^{32} = N_k^{44} = N_k^{56} = \mathrm{diag}(3, 8), \\ & N_k^9 = N_k^{21} = N_k^{33} = N_k^{45} = N_k^{57} = \mathrm{diag}(2, -5), \quad N_k^{10} = N_k^{22} = N_k^{34} = N_k^{46} = N_k^{58} = \mathrm{diag}(7, 4), \\ & N_k^{11} = N_k^{23} = N_k^{35} = N_k^{47} = N_k^{59} = \mathrm{diag}(-9, 8), \quad N_k^{12} = N_k^{24} = N_k^{36} = N_k^{48} = N_k^{60} = \mathrm{diag}(8, 4) \end{aligned}$$
The dimensions of the surveillance area are 18 km × 16 km. The topological map is shown in Figure 11a. Sensors’ true locations and the target trajectory are depicted in Figure 11b.
In accordance with Theorem 1, the rank of the filter matrix is $\mathrm{rank}\{\bar{H}_{k+1}^i \bar{G}_k^i\} = \mathrm{rank}\{\bar{G}_k^i\} = 1$ for each sensor in the network, which satisfies Equation (10).
The UI, as plotted in Figure 12, is a stochastic noise obeying a Gaussian distribution with mean 0 and variance $15^2$.
The histograms in Figure 13 depict the mean numbers of consensus iterations for different thresholds. As before, the number of consensus iterations increases as the thresholds decrease, and when the thresholds are less than 0.1, the number of iteration steps increases sharply. Therefore, we choose the thresholds as $\eta_x = \eta_d = 0.1$. It is interesting to see from the histograms that the UI consensus requires roughly one more iteration than the state consensus.
Because of the page limitation and the similar data variation trends in the simulation figures, only the estimation results for the fourth sensor are presented here. The iteration threshold for the EM algorithm is again $\delta = 10^{-4}$, and the processing window length is again $l = 5$. Figure 14, Figure 15, Figure 16 and Figure 17 show that the estimation results of the proposed algorithm are clearly better than those of the KF and EM methods. Indeed, the estimation errors are greatly reduced by the DP consensus operation.
The mean and peak values of the estimation errors of the target state $[x, \dot{x}, y, \dot{y}]^T$ and the bias $[b_x, b_y]^T$ based on the KF, EM and DP methods are listed in Table 6 and Table 7, respectively. It can be seen from these tables that the proposed method is effective.
The run times listed in Table 8 show that the run time of the DP consensus method is slightly shorter than that of the EM method.

4.3. Computational Burden Analysis

The computational burden is also a critical issue in determining the practical applicability of the proposed method and is worth discussing. Generally speaking, the computational burden is strongly related to the total number of sensors and to the connectivity of the sensor network. Thus, the mean run times of the DP consensus method are tested with different sensor numbers and connectivity probabilities. In the simulation, the networks are generated randomly with different connectivity probabilities, and the simulation program is run ten times for every combination of connectivity probability and sensor number. The mean run time per sensor for processing 60 samples is recorded to measure the computational burden.
According to Figure 6 and Figure 13, the thresholds are chosen as $\eta_x = \eta_d = 0.1$ in this simulation. Figure 18 depicts the relationship between the mean run time and the number of sensors at different connectivity probabilities ('CP' in Figure 18). It can be seen from Figure 18 that the run time increases as the number of sensors increases, which means that more iterations are needed to reach a consensus in a larger network. In particular, when the numbers of sensors are 12 and 60 with a connectivity probability of 1, the run times reach 0.08 s and 0.28 s, respectively. Furthermore, for a fixed number of sensors, the run time increases as the connectivity probability decreases. This is because a sparser sensor network topology means fewer neighbouring sensors, and therefore more hops are needed between each pair of sensors; consequently, more iterations are needed to reach consensus, and the computational cost is correspondingly higher. It is also worth mentioning that when the connectivity probability is larger than 0.2, the run times are almost identical, implying that the proposed algorithm is somewhat robust when the connectivity probability is larger than 0.2.
As Figure 18 shows, the mean run time per sensor increases approximately linearly with the number of sensors. Thus, the proposed method can handle a relatively large network within a reasonable period of time.

5. Conclusions

The joint estimation of the system state and sensor biases based on a generalized system model has been proposed. The conditions for UI decoupling derived in this paper are helpful for guiding sensor choice and deployment. Local UI estimates are obtained via the LS approach using all estimates from neighboring sensors. Network consensus processing is then applied to obtain more accurate estimates of the system (target) state and the UI. Future work may address distributed and collaborative processing in sensor networks in the case in which bias exists in only a subset of the sensors.

Acknowledgments

This research is supported by the National Natural Science Foundation of China under Grants 61135001, 61374023, 61374159 and 61403309 and by the Aerospace Support Technology Foundation of China under Grant 2014-HT-XGD.

Author Contributions

Jie Zhou and Yan Liang conceived of, designed and performed the simulations and wrote the manuscript. Feng Yang, Linfeng Xu and Quan Pan reviewed the manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

UI: Unknown input
GSB: Generalized sensor bias
EM: Expectation-maximization
IMM: Interacting multiple model
LMMSE: Linear minimum mean-square error
MUBF: Minimum upper bound filter
SB: Sensor bias
ECM: Electronic countermeasure
LSB: Local sensor bias
GLSB: Global sensor bias
ACF: Average consensus fusion
DLS: Distributed least squares
WSN: Wireless sensor network
KF: Kalman filter
DP no consensus: The proposed method without consensus processing
DP consensus: The proposed method with consensus processing
CP: Connectivity probability

Appendix A

Proof of Lemma 1. 
Sufficiency: If $\mathrm{rank}\{\Gamma\Psi\} = \mathrm{rank}\{\Psi\}$ and $\Psi$ is of full column rank, then the matrix $\Gamma\Psi$ is also of full column rank, and hence its left pseudo-inverse $(\Gamma\Psi)^+ = \left[ (\Gamma\Psi)^T (\Gamma\Psi) \right]^{-1} (\Gamma\Psi)^T$ exists, satisfying
$$(\Gamma\Psi)^+ \Gamma\Psi = I \tag{A1}$$
Clearly, Equation (6) holds.
Necessity: If Equation (6) holds, then $\Psi = \Psi (\Gamma\Psi)^+ \Gamma\Psi$, or equivalently $\Psi^T = (\Gamma\Psi)^T \left[ \Psi (\Gamma\Psi)^+ \right]^T$, implying that $\Psi^T$ belongs to the column space of $(\Gamma\Psi)^T$; hence, we have the following inequality:
$$\mathrm{rank}\{\Psi^T\} \leq \mathrm{rank}\{(\Gamma\Psi)^T\}, \quad \text{i.e.,} \quad \mathrm{rank}\{\Psi\} \leq \mathrm{rank}\{\Gamma\Psi\} \tag{A2}$$
Meanwhile, we know that
$$\mathrm{rank}\{\Gamma\Psi\} \leq \min\left\{ \mathrm{rank}\{\Gamma\}, \mathrm{rank}\{\Psi\} \right\} \leq \mathrm{rank}\{\Psi\} \tag{A3}$$
Combining Equations (A2) and (A3), we obtain Equation (7). ☐

Appendix B

Proof of Theorem 1. 
By left-multiplying both sides of Equation (8) by $\bar{H}_{k+1}^i$, we obtain
$$\bar{H}_{k+1}^i z_{k+1}^i = \bar{H}_{k+1}^i A_k^i z_k^i + \bar{H}_{k+1}^i \bar{G}_k^i d_k + \bar{H}_{k+1}^i \bar{\zeta}_k^i \tag{B1}$$
Meanwhile, Equation (9) can be rewritten as
$$\bar{H}_{k+1}^i z_{k+1}^i = y_{k+1}^i - v_{k+1}^i \tag{B2}$$
Then, substituting Equation (B1) into Equation (B2) yields
$$\bar{H}_{k+1}^i \bar{G}_k^i d_k = y_{k+1}^i - v_{k+1}^i - \bar{H}_{k+1}^i A_k^i z_k^i - \bar{H}_{k+1}^i \bar{\zeta}_k^i$$
with the following solution:
$$d_k = \Pi_k^i \left( y_{k+1}^i - v_{k+1}^i - \bar{H}_{k+1}^i A_k^i z_k^i - \bar{H}_{k+1}^i \bar{\zeta}_k^i \right) + \left( I_q - \Pi_k^i \bar{H}_{k+1}^i \bar{G}_k^i \right) \kappa \tag{B3}$$
where $\Pi_k^i \triangleq (\bar{H}_{k+1}^i \bar{G}_k^i)^+ = \left[ (\bar{H}_{k+1}^i \bar{G}_k^i)^T (\bar{H}_{k+1}^i \bar{G}_k^i) \right]^{-1} (\bar{H}_{k+1}^i \bar{G}_k^i)^T$ as in Equation (12) and Equation (B3), and $\kappa$ is an arbitrary vector of proper dimensions.
Further substituting Equation (B3) into Equation (8) yields
$$\begin{aligned} z_{k+1}^i &= A_k^i z_k^i + \bar{\zeta}_k^i + \bar{C}_k^i \left( y_{k+1}^i - v_{k+1}^i - \bar{H}_{k+1}^i A_k^i z_k^i - \bar{H}_{k+1}^i \bar{\zeta}_k^i \right) + \bar{G}_k^i \left( I_q - \Pi_k^i \bar{H}_{k+1}^i \bar{G}_k^i \right) \kappa \\ &= \bar{F}_k^i A_k^i z_k^i + \bar{C}_k^i y_{k+1}^i + \bar{B}_k^i \bar{w}_k^i + \bar{G}_k^i \left( I_q - \Pi_k^i \bar{H}_{k+1}^i \bar{G}_k^i \right) \kappa \end{aligned} \tag{B4}$$
where $\bar{C}_k^i$ and $\bar{B}_k^i$ are given in Equations (13) and (16), respectively. For any $\kappa$, the necessary and sufficient condition for guaranteeing the uniqueness of $z_{k+1}^i$ in Equation (B4) is
$$\bar{G}_k^i \left( I_q - \Pi_k^i \bar{H}_{k+1}^i \bar{G}_k^i \right) = \bar{G}_k^i \left( I_q - (\bar{H}_{k+1}^i \bar{G}_k^i)^+ \bar{H}_{k+1}^i \bar{G}_k^i \right) = 0 \tag{B5}$$
According to Lemma 1, Equation (B5) will hold if and only if Equation (10) holds. Finally, substituting Equation (B5) into Equation (B4) results in Equations (11)–(13). ☐

Appendix C

Proof of Theorem 3. 
In Equation (27), $w_{ij}^x(l) \geq 0$ can be interpreted as the mixing probability assigned by node $i$ to node $j$ at iteration $l$. The covariance of the mixture is
$$\begin{aligned} & E\left\{ \left[ x_k - \hat{x}_{k|k}^i(l+1) \right] \left[ x_k - \hat{x}_{k|k}^i(l+1) \right]^T \right\} \\ &= E\left\{ \sum_{j \in J_i} w_{ij}^x(l) \left[ x_k - \hat{x}_{k|k}^j(l) + \hat{x}_{k|k}^j(l) - \hat{x}_{k|k}^i(l+1) \right] \left[ x_k - \hat{x}_{k|k}^j(l) + \hat{x}_{k|k}^j(l) - \hat{x}_{k|k}^i(l+1) \right]^T \right\} \\ &= E\left\{ \sum_{j \in J_i} w_{ij}^x(l) \left( \left[ x_k - \hat{x}_{k|k}^j(l) \right] \left[ x_k - \hat{x}_{k|k}^j(l) \right]^T + \left[ \hat{x}_{k|k}^j(l) - \hat{x}_{k|k}^i(l+1) \right] \left[ \hat{x}_{k|k}^j(l) - \hat{x}_{k|k}^i(l+1) \right]^T \right) \right\} \\ &= \sum_{j \in J_i} w_{ij}^x(l) P_{k|k}^{xx,j}(l) + \sum_{j \in J_i} w_{ij}^x(l) \left[ \hat{x}_{k|k}^j(l) - \hat{x}_{k|k}^i(l+1) \right] \left[ \hat{x}_{k|k}^j(l) - \hat{x}_{k|k}^i(l+1) \right]^T \end{aligned} \tag{C1}$$
The second term in Equation (C1) has the following alternative expression:
$$\sum_{j \in J_i} w_{ij}^x(l) \left[ \hat{x}_{k|k}^j(l) - \hat{x}_{k|k}^i(l+1) \right] \left[ \hat{x}_{k|k}^j(l) - \hat{x}_{k|k}^i(l+1) \right]^T = \sum_{j \in J_i} w_{ij}^x(l) \, \hat{x}_{k|k}^j(l) \hat{x}_{k|k}^j(l)^T - \hat{x}_{k|k}^i(l+1) \hat{x}_{k|k}^i(l+1)^T \tag{C2}$$
Substituting Equation (C2) into Equation (C1) yields Equation (29), and Equation (30) can be obtained similarly. ☐

References

  1. Kurz, H.; Goedecke, W. Digital parameter-adaptive control of processes with unknown dead time. Automatica 1981, 17, 245–252. [Google Scholar]
  2. Henson, M.A.; Seborg, D.E. Time delay compensation for nonlinear processes. Ind. Eng. Chem. Res. 1994, 33, 1493–1500. [Google Scholar]
  3. Wang, X.X.; Liang, Y.; Pan, Q.; Wang, Y.G. Measurement random latency probability identification. IEEE Trans. Autom. Control 2016, PP. [Google Scholar] [CrossRef]
  4. Wang, X.X.; Liang, Y.; Pan, Q.; Zhao, C.H.; Yang, F. Design and implementation of Gaussian filter for nonlinear system with randomly delayed measurements and correlated noises. Appl. Math. Comput. 2014, 232, 1011–1024. [Google Scholar]
  5. Shi, Y.; Fang, H. Kalman filter-based identification for systems with randomly missing measurements in a network environment. Int. J. Control 2010, 83, 538–551. [Google Scholar]
  6. Chilin, D.; Liu, J.; delaPeña, D.M.; Christofides, P.D.; Davis, J.F. Detection, isolation and handling of actuator faults in distributed model predictive control systems. J. Process Control 2010, 20, 1059–1075. [Google Scholar]
  7. Okello, N.N.; Challa, S. Joint sensor registration and track-to-track fusion for distributed trackers. Aerosp. Electron. Syst. 2004, 40, 808–823. [Google Scholar]
  8. Lin, X.; Bar-Shalom, Y.; Kirubarajan, T. Multisensor multitarget bias estimation for general asynchronous sensors. Aerosp. Electron. Syst. 2005, 41, 899–921. [Google Scholar]
  9. Spingarn, K.; Robinson, B.H. Attitude determination with UD implementation of decoupled bias estimation. IEEE Trans. Aerosp. Electron. Syst. 2005, 41, 1294–1304. [Google Scholar]
  10. Wang, G.B.; Chen, L.; Jia, S.Y. Optimized bias estimation model for 3-D radar considering platform attitude errors. IEEE Trans. Aerosp. Electron. Syst. 2012, 27, 19–24. [Google Scholar]
  11. Sarma, A. Mathematical modeling of INS error dynamics for integration/debiasing. Position Locat. Navig. Symp. PLANS 2014, 136–146. [Google Scholar] [CrossRef]
  12. Zhong, M.; Guo, J.; Cao, Q. On designing PMI Kalman filter for INS/GPS integrated systems with unknown sensor errors. IEEE Sens. J. 2015, 15, 535–544. [Google Scholar]
  13. Pin, G.; Chen, B.; Parisini, T.; Bodson, M. Robust sinusoid identification with structured and unstructured measurement uncertainties. IEEE Trans. Autom. Control 2014, 59, 1588–1593. [Google Scholar]
  14. Karniely, H.; Siegelmann, H.T. Sensor registration using neural networks. IEEE Trans. Aerosp. Electron. Syst. 2000, 36, 85–101. [Google Scholar]
  15. Mehra, R.K. Approaches to adaptive filtering. IEEE Trans. Autom. Control 1972, 17, 693–698. [Google Scholar]
  16. Myers, K.; Tapley, B.D. Adaptive sequential estimation with unknown noise statistics. IEEE Trans. Autom. Control 1976, 21, 520–523. [Google Scholar]
  17. Liang, Y.; Zhou, D.H.; Pan, Q. A finite-horizon adaptive Kalman filter for linear systems with unknown disturbances. Signal Process. 2004, 84, 2175–2194. [Google Scholar]
  18. Liang, Y.; Zhou, D.H.; Pan, Q. Estimation of varying time delay and parameters of a class of jump Markov nonlinear stochastic systems. Comput. Chem. Eng. 2003, 27, 1761–1778. [Google Scholar]
  19. Durovic, Z.M.; Kovacevic, B.D. Robust estimation with unknown noise statistics. IEEE Trans. Autom. Control 1999, 44, 1292–1296. [Google Scholar]
  20. Odelson, B.J.; Rajamani, M.R.; Rawlings, J.B. A new autocovariance least-squares method for estimating noise covariances. Automatica 2006, 42, 303–308. [Google Scholar]
  21. Bavdekar, V.A.; Deshpande, A.P.; Patwardhan, S.C. Identification of process and measurement noise covariance for state and parameter estimation using extended Kalman filter. J. Process Control 2011, 21, 585–601. [Google Scholar]
  22. Bogler, P.L. Tracking a maneuvering target using input estimation. IEEE Trans. Aerosp. Electron. Syst. 1987, 3, 298–310. [Google Scholar]
  23. Lee, H.; Tahk, M.J. Generalized input-estimation technique for tracking maneuvering targets. IEEE Trans. Aerosp. Electron. Syst. 1999, 35, 1388–1402. [Google Scholar]
  24. Lan, H.; Liang, Y.; Yang, F.; Wang, Z.; Pan, Q. Joint estimation and identification for stochastic systems with unknown inputs. IET Control Theory Appl. 2013, 7, 1377–1386. [Google Scholar]
  25. Lan, H.; Bishop, A.N.; Pan, Q. Distributed joint estimation and identification for sensor networks with unknown inputs. In Proceedings of the IEEE Ninth International Conference on Intelligent Sensors, Sensor Networks and Information Processing (ISSNIP), Singapore, Singapore, 21–24 April 2014.
  26. Wang, X.X.; Song, B.; Pan, Q.; Geng, H. EM-based adaptive divided difference filter for nonlinear system with multiplicative parameter. Int. J. Robust Nonlin. (accepted).
  27. Darouach, M.; Zasadzinski, M.; Boutayeb, M. Extension of minimum variance estimation for systems with unknown inputs. Automatica 2003, 39, 867–876. [Google Scholar]
  28. Gillijns, S.; de Moor, B. Unbiased minimum-variance input and state estimation for linear discrete-time systems. Automatica 2007, 43, 111–116. [Google Scholar]
  29. Gillijns, S.; de Moor, B. Unbiased minimum-variance input and state estimation for linear discrete-time systems with direct feedthrough. Automatica 2007, 43, 934–937. [Google Scholar]
  30. El Ghaoui, L.; Calafiore, G. Robust filtering for discrete-time systems with bounded noise and parametric uncertainty. IEEE Trans. Autom. Control 2001, 46, 1084–1089. [Google Scholar]
  31. Eldar, Y.C.; Ben-Tal, A.; Nemirovski, A. Linear minimax regret estimation of deterministic parameters with bounded data uncertainties. IEEE Trans. Signal Process. 2004, 52, 2177–2188. [Google Scholar]
  32. Mazor, E.; Averbuch, A.; Bar-Shalom, Y.; Dayan, J. Interacting multiple model methods in target tracking: A survey. IEEE Trans. Aerosp. Electron. Syst. 1998, 34, 103–123. [Google Scholar]
  33. Chen, B.; Tugnait, J.K. Interacting multiple model fixed-lag smoothing algorithm for Markovian switching systems. IEEE Trans. Aerosp. Electron. Syst. 2000, 36, 243–250. [Google Scholar]
  34. Wang, Z.; Lam, J.; Liu, X. Robust filtering for discrete-time Markovian jump delay systems. IEEE Signal Process. Lett. 2004, 11, 659–662. [Google Scholar]
  35. Johnston, L.; Krishnamurthy, V. An improvement to the interacting multiple model (IMM) algorithm. IEEE Trans. Signal Process. 2001, 49, 2909–2923. [Google Scholar]
  36. Katsikas, S.K.; Likothanassis, S.D.; Beligiannis, G.N. Genetically determined variable structure multiple model estimation. IEEE Trans. Signal Process. 2001, 49, 2253–2261. [Google Scholar]
  37. Xu, L.F.; Li, X.R.; Duan, Z. Hybrid grid multiple-model estimation with application to maneuvering target tracking. IEEE Trans. Aerosp. Electron. Syst. 2016, 52, 122–136. [Google Scholar]
  38. Yang, Y.; Liang, Y.; Pan, Q. Linear minimum-mean-square error estimation of Markovian jump linear systems with Stochastic coefficient matrices. IET Control Theory Appl. 2014, 8, 1112–1126. [Google Scholar]
  39. Liang, Y.; Zhou, D.; Zhang, L. Adaptive filtering for stochastic systems with generalized disturbance inputs. IEEE Signal Process. Lett. 2008, 15, 645–648. [Google Scholar]
  40. Qin, Y.; Liang, Y.; Yang, Y. Adaptive filter of non-linear systems with generalised unknown disturbances. IET Radar Sonar Navig. 2014, 8, 307–317. [Google Scholar]
  41. Zhou, L.; Liang, Y.; Zhou, J. Linear minimum mean squared estimation of measurement bias driven by structured unknown inputs. IET Radar Sonar Navig. 2014, 8, 977–986. [Google Scholar]
  42. Liang, Y.; Chen, T.; Pan, Q. Multi-rate stochastic H∞ filtering for networked multi-sensor fusion. Automatica 2010, 46, 437–444. [Google Scholar] [CrossRef]
  43. Simon, D. Kalman filter generalizations. In Optimal State Estimation: Kalman, H Infinity, and Nonlinear Approaches; John Wiley & Sons: Hoboken, NJ, USA, 2006. [Google Scholar]
  44. Liang, Y.; Feng, X.; Yang, F. The distributed infectious disease model and its application to collaborative sensor wakeup of wireless sensor networks. Inform. Sci. 2013, 223, 192–204. [Google Scholar]
Figure 1. Collaborative target tracking in a sensor network.
Figure 2. The scheme of joint MMSE estimation of the state and the SB.
Figure 3. The scheme of the consensus estimation process for the target state and the UI.
Figure 4. Examples of networks with 12 sensor nodes: (a) graph-topological map of the network; (b) sensors' real locations and target trajectory.
Figure 5. Real unknown input in the multi-sensor system.
Figure 6. The mean numbers of iteration steps for (a) state and (b) UI estimation with different thresholds.
Figure 7. State estimation errors in the x direction using the KF, DP consensus, DP no consensus and EM methods for sensor 4: (a) position estimation; (b) velocity estimation.
Figure 8. State estimation errors in the y direction using the KF, DP consensus, DP no consensus and EM methods for sensor 4: (a) position estimation; (b) velocity estimation.
Figure 9. Bias estimation errors using the KF, DP consensus, DP no consensus and EM methods for sensor 4: (a) x-axis; (b) y-axis.
Figure 10. UI estimation errors of the DP consensus and DP no consensus methods for sensor 4.
Figure 11. Examples of networks with 60 sensors: (a) graph-topological map of the network; (b) sensors' true locations and target trajectory.
Figure 12. Real unknown input in the multi-sensor system of 60 sensors.
Figure 13. The mean numbers of iteration steps for (a) state and (b) UI estimation with different thresholds in the multi-sensor system of 60 sensors.
Figure 14. State estimation errors in the x direction using the KF, DP consensus, DP no consensus and EM methods for sensor 4 in the multi-sensor system of 60 sensors: (a) position estimation; (b) velocity estimation.
Figure 15. State estimation errors in the y direction using the KF, DP consensus, DP no consensus and EM methods for sensor 4 in the multi-sensor system of 60 sensors: (a) position estimation; (b) velocity estimation.
Figure 16. Bias estimation errors using the KF, DP consensus, DP no consensus and EM methods for sensor 4 in the multi-sensor system of 60 sensors: (a) x-axis; (b) y-axis.
Figure 17. UI estimation errors of the DP consensus and DP no consensus methods for sensor 4 in the multi-sensor system of 60 sensors.
Figure 18. The mean values of the run times with different sensor numbers.
Table 1. Pseudo-code for the local MMSE estimation.

Check the Applicability of the Method
Step 1: Check the existence condition given by Equation (10). If the condition is satisfied, then perform the following computation. Otherwise, the proposed method is not applicable.
Offline Equivalence Transformation
Step 2: Construct the equivalent system-state evolution model by defining $\bar{F}_k^i$, $A_k^i$, $\bar{C}_k^i$, $\bar{B}_k^i$ and $\bar{w}_k^i$ using Equations (12)–(16).
Step 3: Determine the parameters $\Pi_k^i$ and $M_{k+1}^i$ using Equations (12) and (18).
Online Estimator Implementation
Step 4: Initialize the estimators: $P_{0|0}^i = P^i(0)$ and $z_0^i = z^i(0)$.
Step 5: Compute the predictions $\hat{z}_{k+1|k}^i$ and the corresponding covariances $P_{k+1|k}^i$ using Equations (19) and (20).
Step 6: Calculate the filter gains $K_{k+1}^i$ using Equation (23).
Step 7: Obtain the updates $\hat{z}_{k+1|k+1}^i$ and the corresponding covariances $P_{k+1|k+1}^i$ using Equations (21) and (22).
Step 8: Set $k = k + 1$ and go to Step 1.
Table 2. Pseudo-code for the consensus estimation.

Formulate the Local Information Before Transmission
Step 1: Calculate the estimators $\bar{b}_k^i$ using Equation (24).
Exchange Information with All Neighbours of the ith Sensor
Step 2: Exchange $G_k^i$, $\hat{x}_{k|k}^i$, $P_{k|k}^{xx,i}$ and $\bar{b}_k^i$ with all neighbouring sensors.
Formulate the Local LS Estimators of the UI
Step 3: Define the parameters $G_k$ and $B_k$.
Step 4: Use the exchanged information to calculate $\hat{d}_{k-1}^i$ and $P_{k-1}^{dd,i}$ via Equations (25) and (26).
Consensus Estimation after Transmission
Step 5: Set $l = 1$, $\hat{x}_{k|k}^i(0) = \hat{x}_{k|k}^i$, $P_{k|k}^{xx,i}(0) = P_{k|k}^{xx,i}$, $\hat{d}_{k-1}^i(0) = \hat{d}_{k-1}^i$ and $P_{k-1}^{dd,i}(0) = P_{k-1}^{dd,i}$.
Step 6: Calculate the ACF estimators $\hat{x}_{k|k}^i(l)$ and $P_{k|k}^{xx,i}(l)$ using Equations (27) and (29).
Step 7: Calculate the ACF estimators $\hat{d}_{k-1}^i(l)$ and $P_{k-1}^{dd,i}(l)$ using Equations (28) and (30).
Step 8: If Equations (31) and (32) hold, go to Step 9; otherwise, set $l = l + 1$ and go to Step 6.
Refine the Local Sensor Bias Estimators after Consensus
Step 9: Refine the estimates $\hat{b}_{k|k}^i$ and $P_{k|k}^{bb,i}$ using Equations (33) and (34).
Step 10: Set $k = k + 1$ and go to Step 1.
Table 3. Comparison of mean estimation error values.

                        x        ẋ        y        ẏ       b_x      b_y
KF (m)                  1.18     0.48    −2.66     0.29     1.16    −0.28
EM (m)                 26.83     0.79    32.33     0.91    −6.89     5.76
DP no consensus (m)   −10.87    −0.26    18.81     0.63     2.92    −1.51
DP consensus (m)        0.19     0.09    −9.55     0.27    −1.37     0.39
Table 4. Comparison of peak estimation error values.

                        x        ẋ        y        ẏ       b_x      b_y
KF (m)                 76.27     6.21    48.52     3.76    19.11    17.07
EM (m)                 62.26    10.81    59.95    10.30    19.84    12.33
DP no consensus (m)    30.55     1.71    46.73     4.56     9.47     8.12
DP consensus (m)        4.28     1.17    13.90     1.98     4.08     4.67
Table 5. Total run times (60 samples).

             EM      DP No Consensus    DP Consensus
time (s)     6.57          0.33              4.89
Table 6. Comparison of mean estimation error values of the network of 60 sensors.

                        x         ẋ        y        ẏ        b_x       b_y
KF (m)                −12.721    −3.86   −13.54    −5.72     −0.40      1.19
EM (m)                180.14      4.16   299.97     6.58   −182.47   −306.07
DP no consensus (m)   −20.73     −0.84   −15.03    −0.34      8.56      3.63
DP consensus (m)       −1.21     −0.15     0.063   −0.09      3.63      8.56
Table 7. Comparison of peak estimation error values of the network of 60 sensors.

                        x         ẋ        y        ẏ       b_x      b_y
KF (m)                396.46    80.08   526.54    80.03   220.43   156.67
EM (m)                381.98    64.60   443.66    78.49   803.40   973.76
DP no consensus (m)    65.62     7.12    52.05     5.54    21.61    14.26
DP consensus (m)        5.73     0.94     5.76     0.59    12.91    21.61
Table 8. Total run times (60 samples) of the network of 60 sensors.

             EM      DP No Consensus    DP Consensus
time (s)    27.39         1.65              24.56
