Review

Distributed Multisensor Data Fusion under Unknown Correlation and Data Inconsistency

Intelligent Systems Research Institute, Sungkyunkwan University, Suwon, Gyeonggi-do 440-746, Korea
* Author to whom correspondence should be addressed.
Sensors 2017, 17(11), 2472; https://doi.org/10.3390/s17112472
Submission received: 7 September 2017 / Revised: 21 October 2017 / Accepted: 25 October 2017 / Published: 27 October 2017
(This article belongs to the Section Sensor Networks)

Abstract

The paradigm of multisensor data fusion has evolved from a centralized architecture to a decentralized or distributed architecture along with advances in sensor and communication technologies. These days, distributed state estimation and data fusion have been widely explored in diverse fields of engineering and control due to their superior performance over the centralized approach in terms of flexibility, robustness to failure and cost effectiveness in infrastructure and communication. However, distributed multisensor data fusion is not without technical challenges, namely dealing with cross-correlation and inconsistency among state estimates and sensor data. In this paper, we review the key theories and methodologies of distributed multisensor data fusion available to date, with a specific focus on handling unknown correlation and data inconsistency. We aim to provide readers with a unifying view of the individual theories and methodologies by presenting a formal analysis of their implications. Finally, several directions of future research are highlighted.

1. Introduction

Multisensor data fusion refers to the process of utilizing additional and complementary data from multiple sources to achieve inferences that are not feasible from an individual data source operating independently. More specifically, multisensor data fusion aims to obtain a more meaningful and precise estimate of a state by combining data from multiple sensors and model-based predictions. These days, multisensor data fusion has been widely adopted in diverse fields of application, including manufacturing and process control, autonomous navigation (SLAM) [1,2], robotics, remote sensing [3], medical diagnosis, image processing and visual recognition [4,5,6,7], fault-tolerant control [8], etc., beyond its traditional application domain in the military field [9].
The architecture of multisensor data fusion can be broadly categorized into two types, depending on the way raw data is processed: (1) the centralized fusion architecture [10], where raw data from multiple sources is sent directly to and fused in the central node for state estimation, and (2) the distributed fusion architecture [10,11,12], where data measured at multiple sources is processed independently at individual nodes to obtain local estimates before they are sent to the central node for fusion. Although centralized fusion can yield theoretically optimal solutions, it is not scalable with the number of nodes, i.e., processing all sensor measurements at a single location becomes ineffective or infeasible as the number of nodes increases, due to communication overhead and reliability degradation. Distributed fusion, on the other hand, is robust to failures and has the advantage of lower infrastructure and communication costs.
However, distributed fusion needs to take the correlations among local estimates into consideration. This is due to the fact that local estimates can be dependent due to double counting, i.e., sharing prior information or data sources [9,13] and that distributed sensors observe data with definite physical relationships existing among their observations [14,15]. In a centralized architecture where the assumption of statistical independence is applicable, the Kalman filter (KF) [16] provides an optimal estimate in the sense of minimum mean square error (MMSE). On the other hand, in a distributed architecture where the assumption of statistical independence is not applicable, filtering without taking the cross-correlation into account may lead to divergence due to the inconsistency in fused mean and covariance [9]. In the case of known cross-correlations among data sources, the Bar-Shalom Campo (BC) formula [17,18] provides consistent fusion results for a pair of data sources. A generalization to more than two data sources with known cross-correlations is given in References [19,20,21,22].
However, it is difficult to estimate the cross-correlation among the data sources, especially in a distributed fusion architecture. For a large distributed sensor network [23], even keeping track of all cross-correlations may be too expensive for fusion. Unfortunately, simply neglecting the cross-correlations leads to an inconsistent (overconfident) fused mean and covariance [24]. Various methods have been proposed to cope with the problem of fusion under unknown correlations in a distributed architecture. Depending on the way unknown cross-correlations are handled, these methods can be categorized into three groups: (1) Data Decorrelation, where the input data sources are decorrelated before fusion based on measurement reconstruction [25,26] or explicit elimination of double counting [27,28]; (2) Modeling Correlation, where fused solutions are obtained based on some knowledge and modeling of the unknown correlation [29,30,31,32]; and (3) Ellipsoidal Methods (EM), which, under the assumption of bounded cross-correlation, attempt to provide a suboptimal but consistent fused solution by approximating the intersection among individual data sources without any knowledge of the cross-correlation. The EM include the Covariance Intersection method (CI) and its derivatives [33,34,35], the Largest Ellipsoid method (LE) [36], the Internal Ellipsoidal Approximation (IEA) [37,38] and the Ellipsoidal Intersection method (EI) [39].
Another issue in sensor fusion is that sensors frequently provide spurious measurements that are difficult to predict and model. Fusion methodologies typically assume that sensor measurements are affected by Gaussian noise only, so that the covariance of the estimate provides a good approximation of all disturbances affecting the sensor measurements. However, sensors may produce inconsistent and spurious data due to unmodeled faults, including permanent sensor failures, sensor glitches, short-duration spike faults, slowly developing failures of sensor elements, etc. [40,41,42]. Fusing inconsistent sensor data with correct data can lead to severely inaccurate results [43]. For example, when exposed to abnormalities and outliers, the Kalman filter can easily diverge [44]. Hence, a data validation scheme is required to identify and eliminate inconsistencies/faults/outliers before fusion in a distributed fusion architecture.
Multisensor data fusion in the presence of inconsistent and spurious sensor data can be broadly classified into the following three categories: (1) model-based approaches, where inconsistencies are identified and excluded based on a comparison of sensor data against a reference obtained through a mathematical model of the system [45,46]; (2) redundancy-based approaches, where multiple sensors provide estimates of a quantity of interest and the inconsistent estimates are identified and removed by consistency checking and majority voting [40,47]; and (3) fusion-based approaches, where the fused covariance is enlarged to cover all local means and covariances in such a way that the fused estimate remains consistent under spurious data [48,49].
This paper systematically reviews the key theories and methodologies of distributed multisensor data fusion, with a specific focus on fusion under unknown correlation and fusion in the presence of inconsistent and spurious sensor data. While several general reviews of data fusion [9,50,51,52] and data inconsistency [46,53,54] exist, this paper is intended to provide readers with the methodology of fusion under unknown correlation and data inconsistency in the context of distributed data fusion. The rest of the paper is organized as follows. In Section 2, centralized and distributed fusion architectures are explained along with the causes of correlation in distributed sensor systems. Section 3 provides an overview of the Kalman filter framework and fusion in the case of known cross-correlation. In Section 4, various methods for fusion under the assumption of unknown correlation are analyzed. In Section 5, fusion of spurious and inconsistent sensor data is reviewed. Finally, the paper is concluded and several future directions of research in distributed data fusion are highlighted.
Preliminaries:
$\mathbb{R}$ and $\mathbb{R}^+$ denote the set of real and non-negative real numbers, respectively. We denote $A \in \mathbb{R}^{n \times m}$ as a matrix with $n$ rows and $m$ columns. Similarly, $I$ denotes an identity matrix. The inverse and transpose of a matrix $A$ are denoted as $A^{-1}$ and $A^T$, respectively. Given positive semidefinite matrices $A, B \in \mathbb{R}^{n \times n}$, that is, $A, B \geq 0$, then $A \geq B$ means $A - B \geq 0$ ($A - B$ is positive semidefinite). Let $\hat{x} = E[x]$ and $P = E[xx^T] - E[x]E[x^T]$ denote the mean and covariance of the random vector $x \in \mathbb{R}^n$, respectively, where $E[\cdot]$ denotes the expectation. The cross-covariance between two random vectors $x_1, x_2 \in \mathbb{R}^n$ is represented as $P_{12} = E[x_1 x_2^T] - E[x_1]E[x_2^T]$. Furthermore, since the joint covariance matrix is symmetric, $P_{12} = P_{21}^T$. We denote the Gaussian distribution as $x \sim \mathcal{N}(\hat{x}, P)$, with mean $\hat{x}$ and covariance $P$. Furthermore, the Gaussian distribution $\mathcal{N}(\hat{x}, P)$ can be represented by an ellipsoid $\varepsilon(\hat{x}, P) = \{x \in \mathbb{R}^n \mid (x - \hat{x})^T P^{-1} (x - \hat{x}) \leq 1\}$.

2. Fusion Architectures

In a data fusion framework, multiple sensors provide additional and complementary data to a fusion center, where the data is combined to obtain more precise and meaningful information about the underlying states of an object. Based on the availability and processing of raw data, fusion architectures can be divided into centralized and distributed fusion architectures.

2.1. Centralized Fusion Architecture

In a centralized fusion architecture, raw data from multiple sensors is sent directly to a central fusion node, which computes state estimates and makes decisions, as shown in Figure 1. Although local sensors may pre-process the data before transmitting it to the central node, the term 'raw data' signifies sensor measurements or pre-processed data without filtering or local fusion. Each sensor observes and provides measurements to the central system, where the data is filtered and fused. If the data is correctly aligned and associated, and there is no constraint on the communication bandwidth, then the centralized fusion architecture yields a theoretically optimal solution to state estimation. However, processing all the information at a central node poses various issues, such as a large computational load on the central node, a large communication bandwidth requirement, a single point of failure (the central node) and inflexibility to changes in the architecture [50,52,55].

2.2. Distributed Fusion Architecture

Advances in sensor and communication technologies mean that each sensor node can independently process its sensor data to compute local state estimates. In most applications, the raw information is used to compute state estimates of some quantity of interest in the form of a mean and covariance. These estimates are then communicated among sensor nodes and to the central node to form a global state estimate, as depicted in Figure 2. Compared to a centralized architecture, a distributed network of sensors is superior in many settings, offering the potential to solve problems cooperatively, cover a large area and considerably increase spatial resolution, to name a few advantages [12,52,55,56]. Furthermore, local processing of the data means a low processing load on each node due to the distribution of load, lower communication cost, flexibility to changes and robustness to failure. Yet another fusion architecture is the decentralized one, in which nodes operate independently and share information with each other without any central fusion node [14,55]. Unlike a distributed architecture, a decentralized architecture lacks a central node; instead, each node computes the underlying system states and communicates with its peers. The sources of dependency in decentralized and distributed architectures are the same, so these two architectures are treated as one in this paper.
In general, a decentralized or distributed network of sensors cannot achieve the estimation quality of a centralized system but is inherently more flexible and robust to failure. The local sensor estimates in a distributed architecture may be correlated because observations from distributed sensors can be affected by the same process noise [15] and local estimates can be dependent due to double counting [9,13]. A distributed fusion algorithm should take into account the cross-correlation to ensure optimality and consistency. In some situations, sensor measurements may also be affected by unexpected uncertainties, that is, spike faults, permanent failure or slowly developing failure [40,49,52]. Thus, the estimates provided by sensors may be spurious and inconsistent. Hence, a data validation scheme is required to identify and eliminate inconsistent sensor estimates before the fusion process.

2.3. Causes of Correlation

A common reason for the dependency of local estimates in a distributed sensor network is data incest/rumor propagation/double counting of the data [9,13]. Double counting is a situation in which data is unknowingly used multiple times. This may be caused either by recirculation of the information through cyclic paths or by the same information taking several paths from another sensor to the fusion node [9,55], as depicted in Figure 3. For instance, two sensor nodes A and B that are initialized with the same prior estimate $\hat{x}_P$ on the states, i.e., $\hat{x}_A = \hat{x}_P$ and $\hat{x}_B = \hat{x}_P$, have correlated errors, i.e., $E[(\hat{x}_A - x)(\hat{x}_B - x)^T] = P_A = P_B = P_P$. The separation of common sensor data from independent data becomes more difficult as the data is further processed along the communication paths and network topology [55], and the source of the common data becomes unknown. Fusing the local sensor estimates without accounting for the common information results in an underestimated error covariance. Another reason for interdependencies is common process noise [14,35]. A typical example of this is a decentralized monitoring system for chemical processes [14]. The temperature inferred from the pressure information combined with a reaction model and the temperature measured directly from the temperature nodes are dependent. Similarly, a KF estimating the position and another KF maintaining the orientation of a vehicle using the same sensor information results in dependent position and orientation errors [14].

3. Distributed Data Fusion

This section focuses on various data fusion algorithms. First, a Kalman filter and its variants are overviewed, and this is followed by fusion of multiple data sources under exactly known cross-covariance.

3.1. Kalman Filter

The Kalman filter (KF) [16] is a fundamental tool that can be used to analyze and solve a broad class of estimation problems. It has been extensively used for various purposes, including estimation, tracking, sensor fusion, etc. The KF framework consists of a prediction based on the system matrix of the underlying state vector, followed by an update provided by sensor measurements. Consider a linear dynamic system with the following system model and measurement equation,
$x_k = A_k x_{k-1} + B_k u_{k-1} + w_k$
$z_k = H_k x_k + v_k$
where $k$ represents the discrete-time index, $A_k$ is the system matrix, $B_k$ the input matrix, $u_{k-1}$ the input vector, and $x_{k-1}$ the state vector. The process noise $w_k$ and measurement noise $v_k$ are white, zero-mean, uncorrelated Gaussian noises with covariances $Q_k$ and $R_k$, respectively. The Kalman filter prediction of the state estimate and its error covariance is given as [57],
$\hat{x}_k^- = A_k \hat{x}_{k-1} + B_k u_{k-1}$
$P_k^- = A_k P_{k-1} A_k^T + Q_k$
The predicted estimate $\hat{x}_k^-$ and error covariance $P_k^-$ are then combined with the received sensor measurement $z_k$ with covariance $R_k$ to obtain the updated estimate and error covariance matrix,
$\hat{x}_k = \hat{x}_k^- + K_k (z_k - H_k \hat{x}_k^-)$
$P_k = (I - K_k H_k) P_k^- (I - K_k H_k)^T + K_k R_k K_k^T$
where $K_k$ is the Kalman gain, calculated as $K_k = P_k^- H_k^T (H_k P_k^- H_k^T + R_k)^{-1}$. Figure 4 depicts the prediction and update cycle of the KF. The KF has been further modified as the Extended Kalman Filter (EKF) [58] and the Unscented Kalman Filter (UKF) [59,60] to address the issue of non-linearity in state estimation. The EKF and UKF are often employed in the field of robotics for tracking and navigation. In References [61,62], an information-theoretic approach to the KF has been proposed. The Information Filter (IF) is a KF that estimates the information state vector $y$, defined as $y = P^{-1} x$, where $x$ is the state vector and $P$ its covariance. The inverse covariance matrix $P^{-1}$ is equal to the Fisher information matrix, and maximizing the Fisher information about the state is related to MMSE estimation. The representation of the KF as an IF is beneficial when the state vector is larger than the measurement vector [24,62]. Furthermore, a KF implementation of the update stage becomes very complex when the cross-correlations between observation innovations are accounted for. The simple additive nature of the update stage makes the IF highly attractive for multisensor estimation [63].
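As a concrete illustration of the prediction and update cycle described above, a minimal Kalman filter sketch in Python/NumPy is given below. The function and variable names are illustrative; the system matrices A, B, H and the noise covariances Q, R are assumed to come from the application at hand.

```python
import numpy as np

def kf_predict(x, P, A, B, u, Q):
    """Prediction step: propagate the state estimate and its covariance."""
    x_pred = A @ x + B @ u
    P_pred = A @ P @ A.T + Q
    return x_pred, P_pred

def kf_update(x_pred, P_pred, z, H, R):
    """Update step: fuse the prediction with measurement z."""
    S = H @ P_pred @ H.T + R                      # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)           # Kalman gain
    x = x_pred + K @ (z - H @ x_pred)             # updated state estimate
    I_KH = np.eye(len(x)) - K @ H
    P = I_KH @ P_pred @ I_KH.T + K @ R @ K.T      # Joseph-form covariance update
    return x, P
```

The covariance update mirrors the Joseph form used in the update equation above, which tends to be more robust numerically than the shorter $(I - K_k H_k)P_k^-$ form.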

3.2. Fusion under Known Correlation

One simplification in distributed estimation is the assumption of conditional independence of the estimates. However, ignoring the cross-correlation in a distributed architecture leads to inconsistent results, which can cause the fusion algorithm to diverge [9,24]. Various methods have been devised to incorporate known cross-correlation into state estimation and fusion. A well-known result is the Bar-Shalom Campo (BC) formula [17], which is given as,
$P_f = P_1 - (P_1 - P_{12})(P_1 + P_2 - P_{12} - P_{21})^{-1}(P_1 - P_{21})$
$\hat{x}_f = (P_2 - P_{21})(P_1 + P_2 - P_{12} - P_{21})^{-1}\hat{x}_1 + (P_1 - P_{12})(P_1 + P_2 - P_{12} - P_{21})^{-1}\hat{x}_2$
The BC formula provides a consistent fusion result in the sense of Maximum Likelihood [18] for a pair of redundant data sources. A generalization to more than two data sources with known cross-correlations is given in References [19,20,21,22]. A unified fusion rule for centralized, distributed and hybrid fusion architectures with complete prior information was proposed in References [20,64]. A fusion method for discrete multi-rate independent systems based on multi-scale theory was proposed in Reference [65], where the sampling rate ratio between the local estimates is assumed to be a positive integer. Distributed fusion estimation for the case of asynchronous systems with correlated noises was studied in References [66,67,68]. Some authors have also explored learning-based approaches for multisensor data fusion [4,6,7,69,70,71]. While the Kalman filter and Bayesian formulations rely on known statistics for data fusion, learning-based approaches learn the statistical model of the uncertainty from incoming data. In Reference [7], a multi-feature fusion method is used for visual recognition in a multimedia application. A fusion framework for multi-rate multisensor linear systems based on a neural network was proposed in Reference [69]. The framework reformulates the multi-rate multisensor system into a single multisensor system with the highest sampling rate and effectively fuses the local estimates using a neural network. Neural-network-based multisensor data fusion is compared with conventional methods in References [72,73], showing superior fusion performance. However, learning-based approaches are limited by the requirement of a large amount of training data. Interested readers can refer to References [50,52] for more general perspectives and approaches to multisensor data fusion.
Given $n$ sensor estimates $(\hat{x}_1, P_1), (\hat{x}_2, P_2), \ldots, (\hat{x}_n, P_n)$ with exactly known cross-covariances $P_{ij}$, $i, j = 1, \ldots, n$, the fused mean and covariance can be written as [19,20,21,22],
$\hat{x}_f = (H^T P^{-1} H)^{-1} H^T P^{-1} \hat{x}$
$P_f = (H^T P^{-1} H)^{-1}$
with
$\hat{x} = \begin{bmatrix} \hat{x}_1 \\ \hat{x}_2 \\ \vdots \\ \hat{x}_n \end{bmatrix}, \quad P = \begin{bmatrix} P_1 & P_{12} & \cdots & P_{1n} \\ P_{12}^T & P_2 & \cdots & P_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ P_{1n}^T & P_{2n}^T & \cdots & P_n \end{bmatrix}, \quad H = \begin{bmatrix} I_N \\ I_N \\ \vdots \\ I_N \end{bmatrix}$
where the dimensions of $\hat{x}$, $P$ and $H$ are $nN \times 1$, $nN \times nN$ and $nN \times N$, respectively; $n$ is the number of sensors and $N$ is the dimension of the state vector. With full prior information, these fusion rules are proven to be unbiased and optimal in the sense of MMSE. If the estimates are assumed to be independent, that is, $P_{ij} = 0$ for $i \neq j$, then the fused result can be obtained as,
$P_f = \left( \sum_{i=1}^{n} P_i^{-1} \right)^{-1}$
$\hat{x}_f = P_f \sum_{i=1}^{n} P_i^{-1} \hat{x}_i$
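For readers who prefer code, a sketch of the fusion rule of (9) and (10) is given below, assuming the cross-covariances $P_{ij}$ are available; the function name and data layout are illustrative only. Setting all $P_{ij}$ to zero recovers the familiar independent-case rule shown above.

```python
import numpy as np

def fuse_known_correlation(means, covs, cross_covs):
    """Fuse n estimates with known cross-covariances (fusion rule (9)-(10)).

    means:      list of n state vectors, each of dimension N
    covs:       list of n covariance matrices P_i (N x N)
    cross_covs: dict mapping (i, j) with i < j to the cross-covariance P_ij
    """
    n, N = len(means), means[0].shape[0]
    # Assemble the stacked mean, the block joint covariance and H = [I; ...; I]
    x_stack = np.concatenate(means)
    P = np.zeros((n * N, n * N))
    for i in range(n):
        P[i*N:(i+1)*N, i*N:(i+1)*N] = covs[i]
        for j in range(i + 1, n):
            Pij = cross_covs.get((i, j), np.zeros((N, N)))
            P[i*N:(i+1)*N, j*N:(j+1)*N] = Pij
            P[j*N:(j+1)*N, i*N:(i+1)*N] = Pij.T
    H = np.tile(np.eye(N), (n, 1))
    P_inv = np.linalg.inv(P)
    P_f = np.linalg.inv(H.T @ P_inv @ H)          # fused covariance, Eq. (10)
    x_f = P_f @ (H.T @ P_inv @ x_stack)           # fused mean, Eq. (9)
    return x_f, P_f
```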
In order to employ the fusion rule of (9) and (10), the cross-covariances $P_{ij}$ must be computed. The cross-covariance among local sensor estimates can be calculated as [19,21,22,74],
$P_{ij}^{k} = [I - K_i H_i]\left[A P_{ij}^{k-1} A^T + B Q B^T\right][I - K_j H_j]^T$
where $K_i$ is the Kalman gain of the $i$-th local filter and $P_{ij}^{k-1}$ represents the cross-covariance of the previous cycle. As seen from (13), the calculation of the cross-covariance needs internal details of the estimator, such as the Kalman gain, which may not be available in some cases. In such cases, an approximation of the cross-covariance in terms of a correlation coefficient can be obtained [75],
$P_{ij} = \rho \sqrt{P_i P_j}$
An approximation of the cross-covariance with different correlation coefficients for different components of the state can be computed as,
$P_{ij}^{nm} = \rho^{nm} \sqrt{P_i^{nm} P_j^{nm}}$
where $n, m = 1, \ldots, N_x$, with $N_x$ the state dimension. A Monte Carlo simulation can be used to numerically compute the correlation coefficient $\rho$ offline for a specific setup. Figure 5a,b illustrates the effect of the independence assumption on the fused covariance and fused mean of two correlated sensor estimates, respectively. The optimal fused solution $\varepsilon(x_o, P_o)$ is obtained using (7) and (8) by incorporating the known cross-correlation. As shown, when the KF is employed assuming zero correlation between the sensor estimates, an underestimated fused covariance and an inaccurate fused mean are obtained compared with the optimal fused solution. This severely hampers the accuracy of the estimated states and can even result in filter divergence.
It is worth noting that the KF/IF provides optimal results in a centralized architecture because the assumption of independence holds there. In a distributed fusion architecture, optimality can be achieved by computing and incorporating the exact cross-correlation. Furthermore, the fusion algorithms discussed here can be applied either independently or jointly to solve complex fusion problems, depending on the fusion architecture and practical demands.

4. Fusion under Unknown Correlation

There are various sources of correlation affecting the state estimation and fusion process in a distributed architecture. Failing to consider the cross-correlation leads to overconfident results and even divergence of the fusion algorithm [9,24]. However, due to double counting and the unavailability of internal parameters, it is very difficult to exactly estimate the cross-correlation in a vast distributed sensor network. In some applications, such as map building and weather forecasting, the process model may involve hundreds or thousands of states [35]. Maintaining and keeping track of the cross-correlations is expensive, and it scales quadratically with the number of updates [23]. Therefore, various suboptimal strategies have been devised to provide a fused solution from multiple data sources without the need for the actual cross-correlation. The analysis of fusion under unknown correlation is carried out according to the categorization of Figure 6.

4.1. Data Decorrelation

A common cause of cross-correlation in a distributed architecture is data incest/rumor propagation/double counting. Double counting happens when the same data follows different or cyclic paths to reach the fusion node [9,13]. An effective way to avoid the data incest issue is to keep a record of estimate updates. References [27,28] propose a method to remove the correlation by explicitly eliminating double counting. The idea is to recover remote measurements from the state estimates of other sensor nodes, store them and use them to update the node's own state estimate. In this way, the double-counted data is removed before the data is fused. This method assumes a specific network topology to avoid the correlation due to double counting. In References [76,77], a more general solution using graph-theoretic algorithms is proposed, which is viable for arbitrary network topologies with variable time delays. However, this is neither scalable nor practical for a large network of sensors [78]. Another approach to decorrelation is measurement reconstruction [25,26,79], where the system noise is artificially adjusted by reconstructing the measurements so that correlation between the sequences of measurements is removed. The remote measurements are reconstructed at the fusion node based on the local sensor estimates. This method has been further developed for tracking in clutter [80], out-of-sequence filtering [81] and non-Gaussian distributions with Gaussian mixture models [82]. However, internal information such as the Kalman gain, association weights and sensor model is required to exactly reconstruct the measurements [74,75]. The decorrelation methods result in compromised fusion performance due to their dependency on empirical knowledge and a special analysis of the particular real system. Furthermore, as the number of sensors increases, these methods become highly inefficient and impractical.

4.2. Modeling Correlation

Although the exact cross-correlation between local estimates in a distributed architecture is difficult to obtain, the properties of the joint covariance matrix place some restrictions on the possible cross-correlation. Furthermore, certain applications may provide prior knowledge and constraints on the degree of correlation, such that we may infer whether the local estimates are strongly or weakly correlated. In fact, the estimates provided by multiple sensors are neither independent nor exactly dependent, meaning that the cross-correlation is not completely unknown. Thus, information and knowledge regarding the unknown cross-correlation can be exploited to improve the accuracy of the fused solution. Given two sensor estimates $(\hat{x}_1, P_1)$ and $(\hat{x}_2, P_2)$, the joint covariance matrix can be written as,
$P = \begin{bmatrix} P_1 & P_{12} \\ P_{21} & P_2 \end{bmatrix}$
where $P_{12} = P_{21}^T$ is the cross-covariance between the two estimates. The joint covariance matrix $P$ is positive semidefinite if and only if there is a contraction matrix $C$ such that [83],
$P_{12} = P_1^{1/2} C P_2^{1/2}$
where a contraction matrix $C$ is a matrix whose largest singular value is less than or equal to unity. In the case of scalar-valued estimates, the cross-correlation can be computed as,
$P_{12} = \rho \sqrt{P_1 P_2}$
where (17) is a function of the known individual covariances and a correlation coefficient $\rho$ in the range $[-1, 1]$. Based on the correlation model (17), an analytic analysis of the BC formula has been carried out to give an exact solution for fusion under unknown correlation [29]. A closed-form equation for scalar-valued fusion and an approximate solution for vector-valued fusion based on a uniformly distributed correlation coefficient are proposed in Reference [30]. In Reference [84], a tight upper bound for the joint covariance matrix is obtained from the individual covariances $P_1, P_2$ and the constrained correlation coefficient $\rho$. Based on bounded correlations, a general method was proposed as Bounded Covariance Inflation (BCInf) [85], with upper and lower bounds on the cross-correlation. The method exploits the available information regarding known independence in the sensor network. The BCInf method was further developed into Adaptive Bounded Covariance Inflation (ABCInf) through probabilistic and deterministic approaches [86]. An approximate correlation model is adopted for two data sources in high dimensions as [32],
$P_{12} = \rho C_1 C_2^T$
where $\rho$ is the correlation coefficient and $C_1$ is the Cholesky factor satisfying $P_1 = C_1 C_1^T$ (and similarly $C_2$ for $P_2$). It is illustrated in Reference [32] that the proposed model ensures the positive semidefiniteness of the joint covariance matrix $P$ and agrees with the canonical correlation analysis of multivariate correlation [87]. Based on the correlation model (18), track association and fusion are carried out in the maximum likelihood sense in Reference [31]. In Reference [32], the Cholesky decomposition model of the unknown cross-correlation is applied to the BC formula, and the fused solution is iteratively approximated based on a min-max optimization over the unknown correlation coefficient $\rho$. Furthermore, a conservative fusion solution is also obtained under the assumption of a uniformly distributed correlation coefficient $\rho$. In Reference [29], the correlation model (18) was used in the BC formula to analytically estimate the maximum bounds of the unknown correlation in track-to-track fusion. The multisensor estimation problem under the assumption of norm-bounded cross-correlation is studied in Reference [88], where the worst-case fused MSE is minimized over all feasible cross-covariances. To utilize some prior information on the cross-covariance, a formulation named the allowance of cross-covariance is proposed in Reference [89]. Based on the proposed model, an optimal fusion method in the sense of minimizing the worst-case fused MSE by semidefinite programming (SDP) is derived.
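As a small illustration of the Cholesky-based model (18), the sketch below constructs a candidate cross-covariance for an assumed correlation coefficient $\rho$ and checks that the resulting joint covariance matrix remains positive semidefinite; the numerical values are arbitrary and only for demonstration.

```python
import numpy as np

def cross_cov_cholesky(P1, P2, rho):
    """Candidate cross-covariance P12 = rho * C1 @ C2.T from model (18)."""
    C1 = np.linalg.cholesky(P1)   # lower-triangular factor, P1 = C1 @ C1.T
    C2 = np.linalg.cholesky(P2)
    return rho * C1 @ C2.T

def joint_cov_is_psd(P1, P2, P12, tol=1e-9):
    """Check that the joint covariance [[P1, P12], [P12.T, P2]] is PSD."""
    J = np.block([[P1, P12], [P12.T, P2]])
    return np.min(np.linalg.eigvalsh(J)) >= -tol

# Illustrative values: the joint covariance stays PSD for any |rho| <= 1
P1 = np.array([[4.0, 1.8], [1.8, 3.5]])
P2 = np.array([[4.5, 0.5], [0.5, 2.7]])
for rho in (-0.9, 0.0, 0.9):
    print(rho, joint_cov_is_psd(P1, P2, cross_cov_cholesky(P1, P2, rho)))
```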
For two scalar-valued sensor estimates, the cross-covariance $P_{12}$ is well defined by the correlation coefficient $\rho$. However, the number of correlation coefficients increases with the number of sensors, and a closed-form solution becomes difficult even for scalar-valued estimates. For instance, in the case of three data sources in $\mathbb{R}^1$, the joint covariance matrix can be written as,
$P = \begin{bmatrix} P_1 & \rho_{12}\sqrt{P_1 P_2} & \rho_{13}\sqrt{P_1 P_3} \\ \rho_{12}\sqrt{P_1 P_2} & P_2 & \rho_{23}\sqrt{P_2 P_3} \\ \rho_{13}\sqrt{P_1 P_3} & \rho_{23}\sqrt{P_2 P_3} & P_3 \end{bmatrix}$
Three correlation coefficients are now needed to represent the dependencies among the three data sources, and optimizing any function of $P$ in terms of the correlation coefficients becomes a daunting task. In general, it is difficult to interpret the cross-correlation for more than two data sources in high dimensions. It should also be noted that general correlation analysis techniques such as canonical correlation analysis (CCA) [87] and multivariate linear regression (MLA) [90] have limited use in connection with the cross-correlation among multiple data sources, since these techniques assess correlation from a large set of data points. The joint covariance matrix of multiple data sources, on the other hand, is a block covariance matrix that represents the relationships among the individual states of a sensor and among different sensors.

4.3. Ellipsoidal Methods

Suppose that we have two Gaussian sensor estimates $\mathcal{N}(\hat{x}_1, P_1)$ and $\mathcal{N}(\hat{x}_2, P_2)$ of the true state $x$ in $\mathbb{R}^2$. The two data sources are assumed to be correlated with cross-covariance matrix $P_{12}$. From (7) and (8), we can observe that the fused covariance and mean of the two data sources depend on the unknown cross-covariance $P_{12}$. The given sensor estimates can be represented by the ellipsoids $\varepsilon(\hat{x}_1, P_1)$ and $\varepsilon(\hat{x}_2, P_2)$. Figure 7 depicts the zero-mean ellipsoids $\varepsilon(0, P_1)$ and $\varepsilon(0, P_2)$, where the lengths of the ellipsoid axes correspond to the eigenvalues of the respective covariance matrix and the eigenvectors define its orientation. The possible cross-covariances between the data sources are bounded [14,33,34,35], which in turn restricts the possible outcomes of the fused covariance to a bounded set. As shown in Figure 7, for different choices of cross-covariance $P_{12}$, the fused covariance $P_f$ will lie inside the intersection of the individual data sources. The goal of the Ellipsoidal Methods (EM) is to find a bounding covariance $P_{EM}$ such that,
$P_{EM} \geq P_f(P_{12})$
for any choice of cross-covariance matrix $P_{12}$. The Ellipsoidal Methods attempt to provide a fused estimate by approximating the intersection region of the individual ellipsoids. The EM can be further classified into the Covariance Intersection method (CI), the Largest Ellipsoid method (LE), the Internal Ellipsoidal Approximation (IEA) and the Ellipsoidal Intersection method (EI). The three methods LE, IEA and EI aim for a maximum ellipsoid inside the intersection region of the individual ellipsoids and are termed here the Maximum Ellipsoidal methods (ME). The EM are analyzed one by one below.

4.3.1. Covariance Intersection Method

The Covariance Intersection method (CI) [35] was proposed by Julier and Uhlmann for fusion under unknown correlation in a decentralized network. Given two sensor estimates $\hat{x}_1$ and $\hat{x}_2$ of the true state $x$ with corresponding covariance matrices $P_1$ and $P_2$, the CI method can be viewed as a weighted form of the simple convex combination of the individual estimates. The algorithm is given by [14,35],
$x_{CI} = \omega P_{CI} P_1^{-1} \hat{x}_1 + (1 - \omega) P_{CI} P_2^{-1} \hat{x}_2$
$P_{CI}^{-1} = \omega P_1^{-1} + (1 - \omega) P_2^{-1}$
where $\omega \in [0, 1]$ is a weighting parameter, determined numerically in such a way that the determinant or trace of $P_{CI}$ is minimized. The CI method obtains a consistent fused result without computing the cross-correlation. Figure 8 shows two zero-mean estimates as ellipsoids $\varepsilon(0, P_1)$ and $\varepsilon(0, P_2)$. Since, for any possible cross-correlation, the fused result lies inside the intersection region of the individual ellipsoids, the CI method provides a consistent solution by enclosing the intersection region of the individual ellipsoids, as depicted in Figure 8.
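A minimal sketch of the two-sensor CI rule of (20) and (21) is given below, with the weight $\omega$ obtained by a one-dimensional bounded search that minimizes the determinant of the fused covariance. The use of SciPy's minimize_scalar is an implementation choice; any scalar optimizer would do.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def covariance_intersection(x1, P1, x2, P2):
    """Two-sensor Covariance Intersection, Eqs. (20)-(21)."""
    P1_inv, P2_inv = np.linalg.inv(P1), np.linalg.inv(P2)

    def fused_cov(w):
        return np.linalg.inv(w * P1_inv + (1.0 - w) * P2_inv)

    # Pick the weight in [0, 1] that minimizes det(P_CI)
    res = minimize_scalar(lambda w: np.linalg.det(fused_cov(w)),
                          bounds=(0.0, 1.0), method='bounded')
    w = res.x
    P_CI = fused_cov(w)
    x_CI = P_CI @ (w * P1_inv @ x1 + (1.0 - w) * P2_inv @ x2)
    return x_CI, P_CI, w
```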
Since its inception, the CI method has received much attention; some improvements have been made to enhance the capabilities of the methodology itself, while other works have focused on its applications in various fields [2,33,34,49,91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106]. For example, the CI method has been generalized as the split CI method [100] to fuse independent as well as dependent information with an unknown degree of correlation. In Reference [97], the CI method is examined under a Chernoff fusion rule, and it is noted that the CI method is suitable for fusing any distributions and is not limited to Gaussian density functions. Meanwhile, CI is used for non-linear estimation in Reference [107], where the distributions are represented as pseudo-Gaussian densities, while a closed-form optimization of CI for low-dimensional matrices was proposed in Reference [103]. In References [108,109], the CI method is studied for track-to-track fusion with and without memory. Furthermore, a comparative analysis of CI with different optimal fusion rules is presented in Reference [98]. The CI method has been applied in many applications, namely localization [110,111,112], target tracking [113,114], simultaneous localization and mapping (SLAM) [1,2], image integration [99], the NASA Mars rover [101] and spacecraft state estimation [114,115].
Although state-of-the-art, the CI method has its own disadvantages, including: (1) the requirement of a nonlinear iterative optimization and (2) overestimation of the intersection region of the individual covariances, resulting in degraded estimation performance. For the sake of computational efficiency, approaches that directly compute the weights based on the determinants of the individual covariances have been proposed [91,92], at the expense of further performance degradation, since the relative orientation of the individual covariances is not taken into account. Different optimization criteria for weight computation based on information theory [93,94] as well as set theory [95] have also been proposed for computational efficiency. To avoid the computational cost of the CI method for more than two sensors, a sequential covariance intersection (SCI) [96] has been presented. The SCI method reduces the multidimensional non-linear optimization problem of CI into several one-dimensional non-linear optimizations by sequentially applying the two-sensor CI method to n sensors. A proof that the CI method results in a minimum consistent covariance bound for two sensors is given in Reference [104]. Recently, an Inverse Covariance Intersection (ICI) method [105] based on the common information of two sensors was proposed, which results in a tighter estimate than the CI method.

4.3.2. Maximum Ellipsoidal Methods

Contrary to the CI method, which yields a minimum overestimate of the intersection region of the individual covariances, the Maximum Ellipsoidal methods (ME), that is, LE [36], IEA [37,38] and EI [39], seek a maximum ellipsoid inside the intersection region of the individual covariance ellipsoids, as shown in Figure 9. Since the fused covariance for any possible choice of cross-correlation lies inside the intersection of the individual ellipsoids, the ME methods attempt to obtain a maximum ellipsoid inside the intersection region. Although aiming for a common objective, the ME methods follow different approaches, resulting in subtle differences in the computation of the fused mean and covariance. The ME methods are analyzed one by one below.

Largest Ellipsoid Method

To avoid the overestimation of CI, the Largest Ellipsoid method [36] provides the largest ellipsoid inside the intersection of the two individual ellipsoids by manipulating their orientation. Assume two estimates $\hat{x}_1$ and $\hat{x}_2$ with covariances $P_1$ and $P_2$, respectively. The two covariances are first transformed by a rotation matrix $T_r$ as,
$P_1^r = T_r P_1 T_r^T, \quad P_2^r = T_r P_2 T_r^T$
where $T_r = [e_1^T, e_2^T, \ldots, e_n^T]^T$ is the eigenvector matrix of $P_1$. A second, scaling transformation is performed by $T_s$ as,
$P_1^{sr} = T_s P_1^r T_s^T = T_s T_r P_1 T_r^T T_s^T$
with
$T_s = \mathrm{diag}\left(1, \sqrt{\lambda_1^1 / \lambda_1^2}, \ldots, \sqrt{\lambda_1^1 / \lambda_1^n}\right)$
where $\lambda_1^i$ is the $i$-th eigenvalue of $P_1^r$. This scaling operation transforms the ellipsoid of $P_1$ into a sphere, with all eigenvalues of $P_1^{sr}$ being equal. Similarly, the second ellipsoid is transformed as,
$P_2^{sr} = T_s T_r P_2 T_r^T T_s^T$
The intersection of the two ellipsoids $P_1^{sr}$ and $P_2^{sr}$ in the transformed space is computed as,
$E^{sr} = E D E^T$
where $E = [e_1^T, e_2^T, \ldots, e_n^T]^T$ is the eigenvector matrix of $P_2^{sr}$ and $D = \mathrm{diag}(k_1, k_2, \ldots, k_n)$ with $k_i = \min(\lambda_1^i, \lambda_2^i)$. The corresponding largest ellipsoid is transformed back to the original space by the inverse transformations as,
$P_{LE} = T_r^{-1} T_s^{-1} E^{sr} T_s^{-T} T_r^{-T}$
The fused mean $x_{LE}$ of the two data sources is calculated using the simple convex combination of the KF,
$P_K^{-1} x_{LE} = P_1^{-1} \hat{x}_1 + P_2^{-1} \hat{x}_2$
where $P_K^{-1} = P_1^{-1} + P_2^{-1}$.
Although the LE method yields the largest ellipsoid inside the intersection of the individual ellipsoids as the fused covariance, the computation of the fused mean is incorrect, because it is based on the independence assumption of the KF and does not consider the cross-correlation, which may lead to inconsistent results. To ensure consistency and optimality in multisensor data fusion, the correct calculation of the fused mean, as well as of the fused covariance, is important.
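For concreteness, the chain of transformations used by the LE method can be written compactly as in the sketch below; it follows the construction described above (rotation, scaling, intersection in the transformed space, back-transformation) and uses the naive convex KF combination for the fused mean, which is exactly the point criticized above. Variable names are illustrative.

```python
import numpy as np

def largest_ellipsoid_fusion(x1, P1, x2, P2):
    """Largest Ellipsoid (LE) fusion sketch: rotate, scale, intersect, transform back."""
    # Rotation: diagonalize P1
    lam1, V1 = np.linalg.eigh(P1)
    Tr = V1.T
    # Scaling: turn the rotated P1 into a sphere (all eigenvalues equal to lam1[0])
    Ts = np.diag(np.sqrt(lam1[0] / lam1))
    P2sr = Ts @ Tr @ P2 @ Tr.T @ Ts.T
    # Intersection in the transformed space: keep the smaller eigenvalue per axis
    lam2, E = np.linalg.eigh(P2sr)
    Esr = E @ np.diag(np.minimum(lam1[0], lam2)) @ E.T
    # Transform the largest ellipsoid back to the original space
    Ts_inv, Tr_inv = np.linalg.inv(Ts), Tr.T       # Tr is orthonormal
    P_LE = Tr_inv @ Ts_inv @ Esr @ Ts_inv.T @ Tr_inv.T
    # Fused mean via the naive convex (KF) combination used by the LE method
    P1i, P2i = np.linalg.inv(P1), np.linalg.inv(P2)
    x_LE = np.linalg.inv(P1i + P2i) @ (P1i @ x1 + P2i @ x2)
    return x_LE, P_LE
```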

Internal Ellipsoidal Approximation

To fill this gap in the LE method, the Internal Ellipsoidal Approximation method (IEA) [37,38,116] was proposed, which provides an internal approximation of the intersection region of the individual ellipsoids. The fused mean and covariance of the algorithm are written as,
$x_{IEA} = \left( \omega_1 P_1^{-1} + \omega_2 P_2^{-1} \right)^{-1} \left( \omega_1 P_1^{-1} \hat{x}_1 + \omega_2 P_2^{-1} \hat{x}_2 \right)$
$P_{IEA} = \left( 1 - \hat{x}_1^T P_1^{-1} \hat{x}_1 - \hat{x}_2^T P_2^{-1} \hat{x}_2 + x_{IEA}^T P_{IEA}^{-1} x_{IEA} \right) \left( \omega_1 P_1^{-1} + \omega_2 P_2^{-1} \right)^{-1}$
where
$\omega_1 = \dfrac{1 - \min(1, \beta_2)}{1 - \min(1, \beta_1)\min(1, \beta_2)}$
$\omega_2 = \dfrac{1 - \min(1, \beta_1)}{1 - \min(1, \beta_1)\min(1, \beta_2)}$
where $0 \leq \omega_1, \omega_2 \leq 1$, and $\beta_1$ and $\beta_2$ are computed by solving the following quadratic programming problems,
$\beta_1 = \min_{x^T P_2^{-1} x = 1} x^T P_1^{-1} x$
$\beta_2 = \min_{x^T P_1^{-1} x = 1} x^T P_2^{-1} x$
Nonlinear optimization methods such as Newton's method or Lagrange multipliers can be used to compute the values of $\beta_1$ and $\beta_2$. With additional manipulation, the quadratically constrained quadratic problems (QCQP) of (28) and (29) can be transformed into a much simpler form, allowing a direct computation of the unknown variable $x$. Since $P_1$ and $P_2$ are positive semidefinite matrices, we can write,
$P_2 = E D^{1/2} D^{1/2} E^T$
where $D$ is the eigenvalue matrix and $E$ the corresponding eigenvector matrix of $P_2$. Using $y = D^{-1/2} E^T x$, we can rewrite (28) in terms of $y$ as,
$\beta_1 = \min_{\|y\|_2^2 = 1} y^T \left( D^{1/2} E^T P_1^{-1} E D^{1/2} \right) y$
Hence,
$y^T \left( D^{1/2} E^T P_1^{-1} E D^{1/2} \right) y \geq \lambda_{min}\left( D^{1/2} E^T P_1^{-1} E D^{1/2} \right) \|y\|_2^2$
Then $y_{min}$, the normalized eigenvector corresponding to the minimum eigenvalue of $D^{1/2} E^T P_1^{-1} E D^{1/2}$, is a solution to (30). Subsequently, $x$ is recovered as,
$x = E D^{1/2} y_{min}$
The recovered $x$ can then be substituted into (28) to obtain $\beta_1$; a similar approach is followed to calculate $\beta_2$. The computed values of $\beta_1$ and $\beta_2$ are then used in (26) and (27) to compute the weights $\omega_1$ and $\omega_2$. Based on the values of $\beta_1$ and $\beta_2$, the IEA method provides the following relationship between the two ellipsoids [37,116]:
  • If $\beta_1 \geq 1$ and $\beta_2 \leq 1$, then $\omega_1 = 1$, $\omega_2 = 0$, $\varepsilon(0, P_1) \subseteq \varepsilon(0, P_2)$ and $\varepsilon(x_{IEA}, P_{IEA}) = \varepsilon(\hat{x}_1, P_1)$
  • If $\beta_1 \leq 1$ and $\beta_2 \geq 1$, then $\omega_1 = 0$, $\omega_2 = 1$, $\varepsilon(0, P_1) \supseteq \varepsilon(0, P_2)$ and $\varepsilon(x_{IEA}, P_{IEA}) = \varepsilon(\hat{x}_2, P_2)$
  • If $\beta_1 \leq 1$ and $\beta_2 \leq 1$, then $0 < \omega_1, \omega_2 < 1$ and $\varepsilon(0, P_1) \cap \varepsilon(0, P_2) \neq \emptyset$
Although the IEA method aims to approximate the intersection region of the individual ellipsoids, it lacks a strong mathematical foundation and is based on heuristics.
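The eigenvalue reduction of (28)-(30) makes the IEA parameters straightforward to compute numerically; a sketch is given below. The fallback returned when the weight denominator vanishes (coincident ellipsoid sizes) is an added guard, not part of the original formulation.

```python
import numpy as np

def iea_betas(P1, P2):
    """Compute beta_1 and beta_2 of the IEA method via the eigenvalue reduction."""
    def beta(Pa, Pb):
        # beta = min_{x^T Pb^{-1} x = 1} x^T Pa^{-1} x.  With Pb = E D E^T and
        # y = D^{-1/2} E^T x, this equals the smallest eigenvalue of
        # D^{1/2} E^T Pa^{-1} E D^{1/2}.
        d, E = np.linalg.eigh(Pb)
        M = np.diag(np.sqrt(d)) @ E.T @ np.linalg.inv(Pa) @ E @ np.diag(np.sqrt(d))
        return np.min(np.linalg.eigvalsh(0.5 * (M + M.T)))
    return beta(P1, P2), beta(P2, P1)

def iea_weights(beta1, beta2):
    """Weights of Eqs. (26)-(27)."""
    b1, b2 = min(1.0, beta1), min(1.0, beta2)
    denom = 1.0 - b1 * b2
    if denom == 0.0:
        return 0.5, 0.5        # added guard for coincident ellipsoid sizes
    return (1.0 - b2) / denom, (1.0 - b1) / denom
```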

Ellipsoidal Intersection Method

The Ellipsoidal Intersection (EI) method [39] solves the problem of fusion under unknown correlation by computing the fused mean and covariance based on the mutual and exclusive information of the two data sources. Given two sensor estimates $(\hat{x}_1, P_1)$ and $(\hat{x}_2, P_2)$, it is assumed that they can be represented by three mutually uncorrelated estimates $(\hat{a}, A)$, $(\hat{b}, B)$ and $(\Upsilon, \Gamma)$ as [117],
$P_1 = (A^{-1} + \Gamma^{-1})^{-1}, \quad \hat{x}_1 = P_1 (A^{-1} \hat{a} + \Gamma^{-1} \Upsilon)$
$P_2 = (B^{-1} + \Gamma^{-1})^{-1}, \quad \hat{x}_2 = P_2 (B^{-1} \hat{b} + \Gamma^{-1} \Upsilon)$
Hence, both sensor estimates share the common estimate $(\Upsilon, \Gamma)$. Using the mutual and exclusive information, the fused mean and covariance of the algorithm are written as,
$P_{EI} = (P_1^{-1} + B^{-1})^{-1}, \quad x_{EI} = P_{EI} (P_1^{-1} \hat{x}_1 + B^{-1} \hat{b})$
Substituting the results of (32) into (33) gives the fused covariance $P_{EI}$ and fused mean $x_{EI}$ as,
$P_{EI} = (P_1^{-1} + P_2^{-1} - \Gamma^{-1})^{-1}$
$x_{EI} = P_{EI} (P_1^{-1} \hat{x}_1 + P_2^{-1} \hat{x}_2 - \Gamma^{-1} \Upsilon)$
The formulation of (34) and (35) implies that the estimates $(\hat{x}_1, P_1)$ and $(\hat{x}_2, P_2)$ are first fused, followed by subtraction of the common estimate $(\Upsilon, \Gamma)$. The mutual covariance $\Gamma$ is chosen such that the mutual information between the two data sources is maximized. Using eigenvalue decomposition, we can write,
$P_1 = E_1 D_1 E_1^T, \quad \text{and} \quad Q_2 D_2 Q_2^T = D_1^{-0.5} E_1^T P_2 E_1 D_1^{-0.5}$
Then, the mutual covariance $\Gamma$ that maximizes the mutual information is calculated as,
$\Gamma = E_1 D_1^{0.5} Q_2 D_\Gamma Q_2^T D_1^{0.5} E_1^{-1}$
where
$(D_\Gamma)_{ij} = \begin{cases} \max\left( (D_2)_{ii},\, 1 \right) & \text{if } i = j \\ 0 & \text{if } i \neq j \end{cases}$
Similarly, the mean value of the mutual information can be computed as,
$\Upsilon = \left( P_1^{-1} + P_2^{-1} - 2\Gamma^{-1} + 2\eta I \right)^{-1} \left( \left( P_2^{-1} - \Gamma^{-1} + \eta I \right) \hat{x}_1 + \left( P_1^{-1} - \Gamma^{-1} + \eta I \right) \hat{x}_2 \right)$
where the term $\eta$ is added so that $(P_i^{-1} - \Gamma^{-1})$ becomes positive definite rather than positive semidefinite. The value of $\eta$ is selected as follows,
$\eta = \begin{cases} 0 & \text{if } |H| \neq 0 \\ \varsigma\, \lambda^{+}(H) & \text{if } |H| = 0 \end{cases}$
where $H$ is defined as $H = P_1^{-1} + P_2^{-1} - 2\Gamma^{-1}$, $\lambda^{+}(H) \in \mathbb{R}^{+}$ is defined as the smallest non-zero eigenvalue of $H$, and $\varsigma$ is a small positive constant.
A relation between the cross-covariance $P_{12}$ and the mutual covariance $\Gamma$ of $P_1$ and $P_2$ is given as [105],
$P_{12} = P_1 \Gamma^{-1} P_2$
Based on (38), a decentralized fused solution for two sensor estimates, known as Inverse Covariance Intersection (ICI), is proposed in Reference [105]. This method provides a tighter solution than CI for all admissible common information $\Gamma$. The concept of common information is also used in the channel filter [12] and its nonlinear counterpart [118]. In Reference [119], the performance of the EI method is assessed for various real-life scenarios such as the absence of observability, non-linearity of the process model and situations where the computational requirements differ across nodes. For the fusion of scalar-valued estimates, the fused solution provided by EI is equal to that of the CI method.
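A sketch of the EI computation is given below, following the eigenvalue construction of the mutual covariance $\Gamma$ and the fused estimate of (34) and (35). The determinant threshold, the handling of an all-zero $H$, and the scaling constant used for $\eta$ are implementation choices rather than part of the method.

```python
import numpy as np

def ellipsoidal_intersection(x1, P1, x2, P2, zeta=1e-6):
    """Ellipsoidal Intersection (EI) fusion sketch, Eqs. (34)-(35)."""
    # Mutual covariance Gamma from the eigenvalue construction above
    d1, E1 = np.linalg.eigh(P1)
    D1_sqrt, D1_isqrt = np.diag(np.sqrt(d1)), np.diag(1.0 / np.sqrt(d1))
    M = D1_isqrt @ E1.T @ P2 @ E1 @ D1_isqrt
    d2, Q2 = np.linalg.eigh(0.5 * (M + M.T))
    Gamma = E1 @ D1_sqrt @ Q2 @ np.diag(np.maximum(d2, 1.0)) @ Q2.T @ D1_sqrt @ E1.T
    # Mutual mean Upsilon, with the regularization term eta
    P1i, P2i, Gi = np.linalg.inv(P1), np.linalg.inv(P2), np.linalg.inv(Gamma)
    H = P1i + P2i - 2.0 * Gi
    if abs(np.linalg.det(H)) > 1e-12:     # determinant threshold: implementation choice
        eta = 0.0
    else:
        nz = np.abs(np.linalg.eigvalsh(H))
        nz = nz[nz > 1e-12]
        eta = zeta * (np.min(nz) if nz.size else 1.0)
    I = np.eye(P1.shape[0])
    Upsilon = np.linalg.inv(H + 2.0 * eta * I) @ (
        (P2i - Gi + eta * I) @ x1 + (P1i - Gi + eta * I) @ x2)
    # Fused covariance and mean, Eqs. (34)-(35)
    P_EI = np.linalg.inv(P1i + P2i - Gi)
    x_EI = P_EI @ (P1i @ x1 + P2i @ x2 - Gi @ Upsilon)
    return x_EI, P_EI
```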
Example. 
Consider an illustrative example for comparative analysis of EM with the following two sensor estimates,
$\hat{x}_1 = \begin{bmatrix} 1 \\ 2 \end{bmatrix}, \quad P_1 = \begin{bmatrix} 4 & 1.8 \\ 1.8 & 3.5 \end{bmatrix}, \quad \hat{x}_2 = \begin{bmatrix} 0.8 \\ 1.3 \end{bmatrix}, \quad P_2 = \begin{bmatrix} 4.5 & 0.5 \\ 0.5 & 2.7 \end{bmatrix}$
The weights of the CI method are determined by minimizing the determinant of the fused covariance, that is, $\min_{\omega \in [0,1]} \det(P_{CI})$. The Matlab function 'fminbnd' is used to compute the weight, which is then used in (20) and (21) to compute the fused mean and fused covariance of the CI method. For IEA, the parameters $\beta_1$ and $\beta_2$ are computed using (30), and subsequently the weights $\omega_1$ and $\omega_2$ are computed from (26) and (27), respectively. The weights are then used to compute the fused result. The fused covariance and mean of the LE and EI methods are calculated using (22), (23) and (34), (35), respectively. The eigenvalue decompositions of the ME methods are computed using the standard 'eig' function of Matlab. Table 1 summarizes the computed fused mean and covariance of the different EM. The average computation time of each method over 10,000 runs is also given in Table 1. Figure 10a,b depicts the fused covariance ellipsoids of the different EM. The CI method can be seen to provide a minimum overestimate of the intersection region of the individual data sources. The IEA method chooses the first sensor estimate as the fused result despite the fact that $\varepsilon(0, P_1) \not\subseteq \varepsilon(0, P_2)$. The LE and EI methods result in a maximum covariance ellipsoid inside the intersection region. Although aiming for the same goal, the three ME methods differ from each other. For instance, the fused covariances provided by EI and LE are exactly the same in this case, while the fused covariance provided by IEA differs from those of the LE and EI methods. On the other hand, the fused means provided by the three ME methods are all different, as noted from Figure 10b and Table 1.
The CI method provides a consistent fused solution for two estimates based on (19), that is, $P_{CI} - P_f$ is always positive semidefinite. This can also be observed from Figure 10a, where the CI method generates a tight bound on the intersection region, thus ensuring consistency for any choice of cross-correlation. Although consistent, the CI results are conservative, with the possibility of a much less informative fused estimate. On the other hand, the LE and EI methods result in the largest ellipsoid inside the region of intersection. However, these methods may become inconsistent, that is, $P_{LE}, P_{EI} \ngeq P_f$, for some choices of the cross-covariance $P_{12}$. The EI method yields less conservative results than CI and may perform better when the local sensor estimates are weakly correlated.
It can be observed from Table 1 that the CI method incurs a high computational cost compared with the other methods. To observe the effect of the data dimension on the computation time of the EM, we randomly generated data of different dimensions for evaluation. Figure 11 depicts the average computation time over 10,000 runs of each method for fusing two data sources of increasing dimension. Although the ME methods perform efficiently for low data dimensions, they may become inefficient as the dimension of the data sources increases, as seen from Figure 11.

4.3.3. Analysis of Ellipsoidal Methods for Three Sensors

In some situations, more than two sensors may provide estimates of a particular state in a distributed sensor system. The role of the data fusion framework is then to provide a consistent and minimum-variance fused solution when more than two sensors are involved. All three ME methods are devised for fusing two sensors only. Conservative solutions can be achieved for the fusion of more than two sensors by sequentially applying the ME methods in a decentralized fashion, similar to SCI [96]. The CI method, on the other hand, provides a generalization to $n$ sensors [49]. The CI method computes an estimate $P_{CI}$ for $n$ sensors by combining the individual covariances $P_i$, $i = 1, \ldots, n$, with scalar weights $\omega_i$ such that $\sum_{i=1}^{n} \omega_i = 1$. The fused mean and covariance for $n$ sensor estimates are then obtained as,
$x_{CI} = P_{CI} \left( \omega_1 P_1^{-1} \hat{x}_1 + \omega_2 P_2^{-1} \hat{x}_2 + \cdots + \omega_n P_n^{-1} \hat{x}_n \right)$
$P_{CI} = \left( \omega_1 P_1^{-1} + \omega_2 P_2^{-1} + \cdots + \omega_n P_n^{-1} \right)^{-1}$
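A sketch of this n-sensor generalization is given below; the weights are obtained by minimizing the trace of $P_{CI}$ over the simplex with SciPy's constrained minimizer, which is one possible choice of optimization criterion and solver.

```python
import numpy as np
from scipy.optimize import minimize

def covariance_intersection_n(means, covs):
    """n-sensor Covariance Intersection with trace-minimizing weights."""
    n = len(means)
    inv_covs = [np.linalg.inv(P) for P in covs]

    def fused_cov(w):
        return np.linalg.inv(sum(wi * Pi for wi, Pi in zip(w, inv_covs)))

    # Minimize trace(P_CI) subject to w_i >= 0 and sum(w_i) = 1
    cons = ({'type': 'eq', 'fun': lambda w: np.sum(w) - 1.0},)
    res = minimize(lambda w: np.trace(fused_cov(w)),
                   x0=np.full(n, 1.0 / n),
                   bounds=[(0.0, 1.0)] * n, constraints=cons)
    w = res.x
    P_CI = fused_cov(w)
    x_CI = P_CI @ sum(wi * Pi @ xi for wi, Pi, xi in zip(w, inv_covs, means))
    return x_CI, P_CI, w
```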
However, a simple example reveals that the minimum-overestimation property of CI does not hold for more than two sensors.
Example. 
Consider an illustrative example with the following three sensor estimates,
$\hat{x}_1 = \begin{bmatrix} 0 \\ 0 \end{bmatrix}, \quad P_1 = \begin{bmatrix} 0.5 & 0 \\ 0 & 8 \end{bmatrix}, \quad \hat{x}_2 = \begin{bmatrix} 0 \\ 0 \end{bmatrix}, \quad P_2 = \begin{bmatrix} 6.1250 & 3.2476 \\ 3.2476 & 2.3750 \end{bmatrix}, \quad \hat{x}_3 = \begin{bmatrix} 0 \\ 0 \end{bmatrix}, \quad P_3 = \begin{bmatrix} 6.1250 & -3.2476 \\ -3.2476 & 2.3750 \end{bmatrix}$
Figure 12 depicts the corresponding covariance ellipsoids of the three sensors. The fused covariance of the three sensors, for any value of the correlation, lies inside the hexagonal intersection area of the three ellipsoids. By definition, the CI method should provide a tight overestimate of the hexagonal intersection region, shown in Figure 12 as $\varepsilon(0, P_{CIA})$. However, trace minimization of $P_{CI} = \left( \sum_{i=1}^{n} \omega_i P_i^{-1} \right)^{-1}$ leads to a larger overestimate than the tight one. This means that the generalization of CI as a minimum tight overestimate for more than two sensors must differ from that proposed in [49]. Figure 13 shows the fused results obtained by sequentially applying the ME methods to the three sensors: first, two sensor estimates are fused, followed by fusion of the third estimate. The fused covariance ellipsoids for three sequences, that is, $P_{123}$, $P_{132}$ and $P_{231}$, are depicted. By the definition of the ME methods, the fused result for three sensors should be a maximum ellipsoid inside the intersection region $\varepsilon(0, P_{EMA})$, as shown in Figure 13. However, the ME methods provide underestimated fused solutions, as depicted in Figure 13. It can also be noted that different fusion sequences result in different fused ellipsoids.
Remarks. 
The choice of a fusion method under the assumption of unknown cross-correlation depends on the underlying fusion problem. The data decorrelation methods remove the correlation before fusing the estimates but are limited to small network topologies. It is always preferable to use the exact cross-correlation in a distributed fusion architecture to achieve optimality. If there is some prior knowledge of the extent of the correlation, then using that information can improve the estimation accuracy. The CI method can be used to consistently fuse data with unknown correlation; however, the CI results are conservative, with the possibility of much lower accuracy. The EI method can be used to obtain a less conservative solution. Table 2 summarizes the characteristics of the various methods for fusion under unknown correlation.

5. Fusion of Inconsistent and Spurious Data

The distributed fusion methodologies discussed above assume that the input sensor mean and covariance estimates are consistent; in other words, the covariance provides a good approximation of all disturbances affecting the sensor measurements. However, in reality, uncertainties in sensor measurements may come not only from noise but also from unexpected situations, such as short-duration spike faults, sensor glitches, permanent failures or slowly developing failures of sensor elements [40,41,42]. Since these types of uncertainties are not attributable to the inherent noise, they are difficult to model. Consequently, the estimates provided by a sensor node in a distributed sensor network may be spurious and inconsistent. Fusing such inconsistent estimates with correct estimates can lead to severely inaccurate results [43]. Hence, a data validation scheme is required to identify and eliminate sensor inconsistencies before fusion in a distributed architecture. Various methods exist in the literature to tackle the issue of data inconsistency; they can be broadly categorized into three groups based on their approach to the problem. These groups of methods are overviewed one by one below.

5.1. Model Based Approaches

The model-based approaches, also known as analytical redundancy approaches [45,46], identify functional relationships among the measured states through a mathematical model that can either be developed from the underlying physics or derived directly from the measurements. A residual $r_k$ is then generated between the actual sensor output $y_k$ and the estimated model output $\hat{y}_k$, i.e.,
$r_k = y_k - \hat{y}_k$
A zero-mean residual, that is, $E[r_k] = 0$, means no fault, while a deviation of the mean from zero signifies the presence of a fault. In Reference [120], a Nadaraya-Watson statistical estimator and a priori observations are used to validate the sensor measurements. In References [121,122,123], residuals or innovations generated by the Kalman filter (KF) were used for fault detection; the faults are identified by statistical tests on the whiteness, mean and covariance of the residuals. A failure detection approach for a GPS integrity monitoring system based on the KF was proposed in Reference [123]. The idea is to process subsets of the measurements with a bank of auxiliary KFs and use the generated estimates as a reference for failure detection. In Reference [124], the KF prediction was used as a reference to detect inconsistencies in sensor measurements. An adaptive sensor/actuator fault detection and isolation scheme based on the KF for an Unmanned Aerial Vehicle (UAV) was proposed in Reference [125]. The method detects faults in the system by applying a statistical test to the innovation covariance of the KF, and then adapts the process and measurement noise accordingly to avoid the deterioration of state estimation due to inconsistencies. This method is used in Reference [126] to improve the accuracy of personal positioning systems in outdoor environments. Common tools for evaluating the statistical characteristics of the residuals are the generalized likelihood ratio test [127], the chi-square test [128] and the multiple hypothesis test [46]. Some authors have also proposed Extended KF (EKF) [129,130] and Unscented KF (UKF) [131] based approaches, with the advantage of detecting inconsistencies in non-linear systems. Multisensor data fusion with fault detection and removal based on the Kullback-Leibler Divergence (KLD) for a multi-robot system was proposed in Reference [132]. The method computes the KLD between the a priori and posteriori distributions of the Information Filter (IF) and uses Kullback-Leibler Criterion (KLC) thresholding to detect and remove the spurious sensor data.
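As a concrete instance of residual-based validation, the sketch below applies a chi-square test to the Kalman filter innovation: a measurement whose normalized innovation squared exceeds the chi-square threshold is flagged as inconsistent and can be excluded from the update. The significance level and the gating rule are illustrative choices.

```python
import numpy as np
from scipy.stats import chi2

def measurement_is_consistent(z, x_pred, P_pred, H, R, alpha=0.001):
    """Chi-square gating of a measurement against the KF prediction."""
    r = z - H @ x_pred                          # innovation (residual)
    S = H @ P_pred @ H.T + R                    # innovation covariance
    nis = float(r.T @ np.linalg.inv(S) @ r)     # normalized innovation squared
    return nis <= chi2.ppf(1.0 - alpha, df=len(z))
```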
Some researchers have also used fuzzy logic [133,134], knowledge-based [135] and neural network (NN) based [136,137,138,139] approaches to identify sensor inconsistencies. In Reference [135], a knowledge-based machine learning approach is used to solve the interference and drift problem caused by sensor aging in an E-nose. A probabilistic NN for sensor validation of jet engines was presented in Reference [136]. The network was trained on comprehensive data of faulty and healthy situations generated from an engine performance model; a turbofan engine was used to evaluate the performance of the network, with a high success rate of fault identification. Compared with conventional model-based approaches, which require a bank of estimators for sensor validation, an efficient AI-based method was proposed in Reference [137] for fault detection. The method employs a single NN estimator and achieves the same performance as the group of parallel estimators but with much lower computational cost. In Reference [140], the residual of a recurrent neural network (RNN) was used to identify faults in the sensors and actuators of non-linear systems. A NN for fault detection in aircraft sensors and actuators was proposed in Reference [139], where an EKF was used to update the weights of the neural network. The use of the EKF for tuning the weights of the neural network results in a fast convergence rate of learning. The method was found to be more accurate and efficient in fault detection than the conventional NN-based approach.
In a distributed architecture, model based approaches can be used by individual sensor nodes to validate their own estimates before transmitting them to the fusion center. In addition, they can also be employed at the central node for validating the incoming multisensor data. The disadvantage of model based approaches is the requirement of an explicit mathematical model and prior information for sensor validation, which may not be available in some cases. Learning based approaches ease this requirement by learning the statistical characteristics of the system from training data. However, learning based approaches need a large amount of data for training and depend on the accumulated experience and data history of the target system.

5.2. Redundancy Based Approaches

In data/hardware/sensor redundancy based approaches, two or more sensors measure the same critical state, and faulty sensors are then detected and isolated by consistency checking and majority voting [45]. For instance, a voter-based fault detection system for the multiple sensor subsystems of GPS, an inertial navigation system (INS) and a Doppler attitude and heading reference system (DAHRS) was presented in Reference [47]. The method is based on the overlap of Gaussian confidence regions of two local sensor estimates in a decentralized system. A sensor voter algorithm to manage three redundant sensors was presented in Reference [141]. Inconsistency detection for hypersonic cruise vehicles (HCVs) based on redundant multisensor navigation systems was proposed in Reference [142]. The system consists of two blocks, where the first block consists of the complementary sensors of INS and GPS, and the second block comprises INS and a celestial navigation system (CNS). The method uses the chi-square test and the sequential probability ratio test (SPRT) to detect inconsistencies in the local sensor estimates of each block before their data are sent to the central node for obtaining a global estimate. Fault detection and isolation applications on redundant aircraft sensors based on fuzzy logic and majority voting were proposed in References [143,144], respectively. Without any prior information, a method to detect spurious sensor data based on a Bayesian framework was proposed in References [40,41]. The method adds a term to the Bayesian formulation which has the effect of increasing the variance of the posterior distribution when the measurement from one of the sensors is inconsistent with respect to the other. The Gaussian likelihood function of a state X, given a measurement z_n from sensor n (n = 1, 2), can be written as
$$p(Z = z_n \mid X = x) = \frac{1}{\sigma_n \sqrt{2\pi}} \exp\!\left\{ -\frac{(x - z_n)^2}{2\sigma_n^2} \right\}, \qquad n = 1, 2$$
The posterior fused mean and covariance can be computed as,
$$x_f = \frac{\sigma_2^2}{\sigma_1^2 + \sigma_2^2}\, z_1 + \frac{\sigma_1^2}{\sigma_1^2 + \sigma_2^2}\, z_2, \qquad \sigma_f^2 = \left[\sigma_1^{-2} + \sigma_2^{-2}\right]^{-1}$$
The method developed a modified Bayesian (MB) formulation as,
$$p(X = x \mid Z = z_n) \propto \frac{1}{\sigma_1 \sqrt{2\pi}} \exp\!\left\{ -\frac{(x - z_1)^2}{2\sigma_1^2 f} \right\} \times \frac{1}{\sigma_2 \sqrt{2\pi}} \exp\!\left\{ -\frac{(x - z_2)^2}{2\sigma_2^2 f} \right\}$$
where f = m^2/{m^2 − (z_1 − z_2)^2} and m represents the maximum expected difference between the sensor readings. The factor f depends on the squared difference between the measurements and has the effect of increasing or decreasing the variance of the posterior fused distribution as compared to the individual sensor variances. Thus, the MB framework is capable of determining whether fusing two measurements would lead to an increase or a decrease in the posterior distribution variance. Subsequently, a decision to fuse or not can be made based on this increase or decrease in the posterior variance. In References [43,145], the MB framework along with Kalman filtering is applied to improve the accuracy of robotic position estimation in the presence of inconsistencies. In Reference [8], a fault-tolerant multisensor perception system with redundant parallel blocks was presented for mobile robot localization, where each block consists of duplicate sensors and a fusion block. The idea is to compare the measurements of the redundant sensors in each block, as well as the KF-fused results of the individual blocks, to detect inconsistencies.
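As a numerical illustration of the MB formulation, the sketch below fuses two scalar readings with and without the factor f and then applies one possible decision rule based on the change in posterior variance. The readings, variances and bound m are hypothetical, and the placement of f as a variance-inflation factor follows the reconstruction of the equations above.

```python
# Sketch: modified Bayesian (MB) fusion of two scalar sensor readings.
# The readings z1, z2, their variances and the expected maximum difference m
# are hypothetical values chosen only to illustrate the effect of the factor f.

def standard_fusion(z1, s1, z2, s2):
    """Inverse-variance (standard Bayesian) fusion of two Gaussian estimates."""
    var_f = 1.0 / (1.0 / s1 + 1.0 / s2)
    x_f = var_f * (z1 / s1 + z2 / s2)
    return x_f, var_f

def modified_bayesian_fusion(z1, s1, z2, s2, m):
    """MB fusion: inflate each variance by f = m^2 / (m^2 - (z1 - z2)^2)."""
    d2 = (z1 - z2) ** 2
    if d2 >= m ** 2:
        return None, None            # readings differ more than expected: do not fuse
    f = m ** 2 / (m ** 2 - d2)
    return standard_fusion(z1, s1 * f, z2, s2 * f)

z1, s1 = 10.2, 0.5 ** 2              # sensor 1: reading and variance
z2, s2 = 11.8, 0.7 ** 2              # sensor 2: reading and variance
m = 2.5                              # maximum expected difference between readings

x_std, v_std = standard_fusion(z1, s1, z2, s2)
x_mb, v_mb = modified_bayesian_fusion(z1, s1, z2, s2, m)
print(f"standard fusion : mean={x_std:.3f}, var={v_std:.3f}")
print(f"MB fusion       : mean={x_mb:.3f}, var={v_mb:.3f}")
# One possible decision rule: fuse only if the MB posterior variance is smaller
# than the smallest individual sensor variance; otherwise keep the local estimates.
fuse = v_mb is not None and v_mb < min(s1, s2)
print("decision:", "fuse" if fuse else "keep individual estimates")
```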
Redundancy based approaches may fail if multiple sensors fail simultaneously. This is possible because redundant sensors operate in the same working environment and thus tend to have similar usage life expectations. In Reference [146], a combination of a model based approach and majority voting is used to remove modeled and unmodeled faults in a target tracking scenario. Similarly, hybrids of data redundancy and analytic redundancy based on the unscented and extended Kalman filters are proposed in References [147,148], respectively.
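A toy version of the majority-voting idea over three redundant scalar sensors is sketched below; the agreement threshold and readings are arbitrary, and practical voters are considerably more elaborate than this sketch.

```python
# Sketch: majority voting over three redundant scalar sensors.
# A sensor is kept if it agrees (within a threshold) with at least half of the
# other sensors; thresholds and readings are illustrative only.
def majority_vote(readings, threshold):
    n = len(readings)
    kept = []
    for i, zi in enumerate(readings):
        agreements = sum(
            1 for j, zj in enumerate(readings)
            if j != i and abs(zi - zj) <= threshold
        )
        if agreements >= (n - 1) / 2:      # agrees with at least half of the others
            kept.append(zi)
    return kept

readings = [10.1, 10.3, 13.7]              # third sensor reads spuriously high
consistent = majority_vote(readings, threshold=1.0)
print("consistent readings:", consistent)  # -> [10.1, 10.3]
if consistent:
    print("fused (simple mean):", sum(consistent) / len(consistent))
else:
    print("no majority agreement; flag all sensors as suspect")
```

Note that if all three readings disagree (for example, when several sensors fail simultaneously), the voter cannot form a majority, which is exactly the failure mode described above.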

5.3. Fusion Based Approaches

Some authors have also explored the fusion of inconsistent sensor estimates within the Bayesian probabilistic framework. For instance, Uhlmann proposed Covariance Union (CU) [49] to consistently fuse spurious data coming from multiple sources. The CU method unifies two or more sensor estimates that are inconsistent. Given n local estimates $(\hat{x}_1, P_1), (\hat{x}_2, P_2), \ldots, (\hat{x}_n, P_n)$, the CU method provides a unioned estimate $(\hat{x}_u, P_u)$ which is consistent with all of the estimates as long as at least one of the estimates $(\hat{x}_i, P_i)$ is consistent. The CU constraints are
$$\begin{aligned}
P_u &\succeq P_1 + (\hat{x}_u - \hat{x}_1)(\hat{x}_u - \hat{x}_1)^{\mathrm{T}} \\
P_u &\succeq P_2 + (\hat{x}_u - \hat{x}_2)(\hat{x}_u - \hat{x}_2)^{\mathrm{T}} \\
&\;\;\vdots \\
P_u &\succeq P_n + (\hat{x}_u - \hat{x}_n)(\hat{x}_u - \hat{x}_n)^{\mathrm{T}}
\end{aligned}$$
For a pair of estimates, a closed-form representation of the CU fused covariance can be obtained. Define the eigendecomposition
$$P_1 = E_1 D_1 E_1^{\mathrm{T}}$$
Then
$$I = T P_1 T^{\mathrm{T}} \quad \text{and} \quad \tilde{P}_2 = T P_2 T^{\mathrm{T}} = E_2 D_2 E_2^{\mathrm{T}}$$
where $T = D_1^{-1/2} E_1^{\mathrm{T}}$ and I is the identity matrix. Then, we can write
$$P_u = E_1 D_1^{1/2} E_2 \max(D_2, I)\, E_2^{\mathrm{T}} D_1^{1/2} E_1^{\mathrm{T}}$$
where max denotes the element-wise maximum of the diagonal matrices D_2 and I. Figure 14 shows the merging of two coincident estimates by CU. The unioned fused result for multiple sensor estimates can be obtained by solving the CU constraints above through numerical optimization [149]. In References [51,150], the CU method is explored to consistently fuse more than two sensor estimates. To ensure consistency for more than two estimates, the CU method should be applied collectively rather than pairwise recursively [150]. Furthermore, an implementation of the CU algorithm in MATLAB and C is developed in Reference [150]; however, the implementation incurs a high computational cost and is not practical for real-time applications. A proof that the CU method provides a minimum enclosing ellipsoid for the fusion of local estimates is given in Reference [151]. A Generalized Covariance Union (GCU) to merge multiple hypotheses in tracking applications is presented in Reference [48]. The GCU method provides tighter estimates than CU by exploiting hypothesis probability bounds; it reduces to CU when the hypothesis probability is absent and to standard mixture reduction (SMR) methods when the hypothesis probability is exactly known. The CU method has been studied for navigation [152] and in comparison with other track-to-track fusion algorithms [129], and is shown to perform well in the presence of inconsistencies. A hybrid of the CI and CU methods for network-centric data fusion is shown to be highly flexible and resilient against corrupted sensor data [153]. However, the CU method incurs a high computational cost and results in an inappropriately large, conservative fused solution.
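The pairwise closed form above amounts to a few lines of linear algebra. The sketch below implements it with numpy for two illustrative 2-D estimates, choosing the unioned mean as the midpoint of the two means (one simple, not necessarily optimal, choice) and checking the CU constraints afterwards; it is a paraphrase of the construction in the text, not the reference implementation of [150].

```python
# Sketch: Covariance Union (CU) for a pair of estimates using the
# eigendecomposition-based closed form given above. The input estimates are
# arbitrary illustrative values; the unioned mean is taken as the midpoint of
# the two means, and the covariances are first translated so the constraints
# hold for that choice of mean.
import numpy as np

def covariance_union_pair(x1, P1, x2, P2):
    x_u = 0.5 * (x1 + x2)                          # one simple choice of unioned mean
    # translate each covariance to remain consistent for the shifted mean
    P1t = P1 + np.outer(x_u - x1, x_u - x1)
    P2t = P2 + np.outer(x_u - x2, x_u - x2)
    # closed-form union of the two (translated) covariances
    D1, E1 = np.linalg.eigh(P1t)
    T = np.diag(D1 ** -0.5) @ E1.T                 # whitens P1t: T P1t T^T = I
    D2, E2 = np.linalg.eigh(T @ P2t @ T.T)
    Pu_w = E2 @ np.diag(np.maximum(D2, 1.0)) @ E2.T   # union in whitened space
    Tinv = E1 @ np.diag(D1 ** 0.5)                 # inverse of the whitening map
    P_u = Tinv @ Pu_w @ Tinv.T                     # map back to the original space
    return x_u, P_u

x1, P1 = np.array([0.0, 0.0]), np.array([[2.0, 0.3], [0.3, 1.0]])
x2, P2 = np.array([1.0, 0.5]), np.array([[1.0, -0.2], [-0.2, 2.5]])
x_u, P_u = covariance_union_pair(x1, P1, x2, P2)
print("x_u =", x_u)
print("P_u =\n", P_u)
# sanity check of the CU constraints P_u >= P_i + (x_u - x_i)(x_u - x_i)^T
for x_i, P_i in [(x1, P1), (x2, P2)]:
    gap = P_u - (P_i + np.outer(x_u - x_i, x_u - x_i))
    print("min eigenvalue of gap:", np.linalg.eigvalsh(gap).min())  # >= 0 up to rounding
```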
Remarks. 
It should be noted that, to ensure consistency in distributed data fusion, the effect of spurious data needs to be taken into consideration in addition to unknown correlation. To this end, methods for identifying spurious data and for managing consistency under spurious data, either by removing the spurious data or by enlarging the fused covariance, were introduced. The choice of a fault-tolerant method for distributed data fusion depends upon the underlying problem and the availability of system information. A suitable model based approach can be employed by local sensors for sensor validation whenever prior information regarding the system model is available. Without any prior information, the redundancy of a distributed architecture can be exploited to identify inconsistencies in the fusion pool. However, redundancy based approaches may fail when multiple sensors simultaneously provide inconsistent data. The CU method can be used to consistently fuse spurious data coming from multiple sources; yet, the method is computationally expensive and results in inappropriately large, conservative fused results. The fault-tolerant methods can also be applied jointly to improve fusion performance in the presence of inconsistencies and to solve complex fusion problems according to practical demands. Table 3 summarizes the characteristics of fusion approaches for inconsistent data sources.

6. Conclusions and Future Directions

In this paper, we reviewed and analyzed the theories and approaches for multisensor data fusion in a distributed architecture. The reasons for the dependencies of local sensor estimates were discussed and various fusion algorithms for correlated data sources were summarized. Both classic results and recent developments in distributed multisensor data fusion under the assumption of unknown correlation were analyzed. Several fault-tolerant approaches for the identification and removal/fusion of inconsistent sensor data were also reviewed. The appropriateness of a fusion technique depends on the underlying problem and the established assumptions of each method. Based on the literature review, future research directions are summarized here:
  • The algorithms for fusion under unknown correlation in the literature are mostly devised for the two-sensor case. A general fusion framework for more than two data sources under unknown correlation is still an open research question.
  • A major limitation of the distributed fusion methods is that almost all the methods described are based on the traditional KF framework. Investigating these methods within a more powerful framework, such as the particle filter, may be an interesting topic.
  • While some research has been done on an explicit characterization of correlation for low-dimensional data sources, a general description and mathematical model of unknown correlation among multiple data sources is still an open question.
  • Another interesting topic is the use of neural networks for estimating the unknown correlation among multiple sensors in a distributed architecture.
  • Detection and removal of inconsistent and spurious sensor estimates in a distributed fusion architecture under unknown correlation is also an interesting problem.
  • Examining distributed fusion algorithms for networks of nonlinear systems under unknown uncertainties may be an open and challenging research direction.
  • The lack of a standard evaluation framework to assess the performance of distributed fusion algorithms is another issue. Most fusion algorithms are either tested on simulated data with arbitrary assumptions or applied to a specific real-world problem.

Acknowledgments

This research was supported, in part, by the “Space Initiative Program” of the National Research Foundation (NRF) of Korea (NRF-2013M1A3A3A02042335), sponsored by the Korean Ministry of Science, ICT and Future Planning (MSIP), in part, by the “3D Visual Recognition Project” of the Korea Evaluation Institute of Industrial Technology (KEIT) (2015-10060160), and in part, by the “Robot Industry Fusion Core Technology Development Project” of KEIT (R0004590).

Author Contributions

Muhammad Abu Bakr and Sukhan Lee conceived and designed the content. Abu Bakr drafted the paper. Lee supervised Abu Bakr with critical assessment of the draft for quality revision.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Uhlmann, J.; Julier, S.; Csorba, M. Nondivergent simultaneous map building and localization using covariance intersection. In Proceedings of the SPIE 3087, Navigation and Control Technologies for Unmanned Systems II, Orlando, FL, USA, 23 April 1997. [Google Scholar]
  2. Julier, S.; Uhlmann, J. Using covariance intersection for SLAM. Robot. Auton. Syst. 2007, 55, 3–20. [Google Scholar] [CrossRef]
  3. Ghassemian, H. A review of remote sensing image fusion methods. Inf. Fusion 2016, 32, 75–89. [Google Scholar] [CrossRef]
  4. Zhang, L.; Zhang, D. Robust visual knowledge transfer via extreme learning machine-based domain adaptation. IEEE Trans. Image Process. 2016, 25, 4959–4973. [Google Scholar] [CrossRef] [PubMed]
  5. Zhang, L.; Zhang, D. Metricfusion: Generalized metric swarm learning for similarity measure. Inf. Fusion 2016, 30, 80–90. [Google Scholar] [CrossRef]
  6. Zhang, L.; Zuo, W.; Zhang, D. LSDT: Latent sparse domain transfer learning for visual adaptation. IEEE Trans. Image Process. 2016, 25, 1177–1191. [Google Scholar] [CrossRef] [PubMed]
  7. Zhang, L.; Zhang, D. Visual understanding via multi-feature shared learning with global consistency. IEEE Trans. Multimed. 2016, 18, 247–259. [Google Scholar] [CrossRef]
  8. Bader, K.; Lussier, B.; Schön, W. A fault tolerant architecture for data fusion: A real application of Kalman filters for mobile robot localization. Robot. Auton. Syst. 2017, 88, 11–23. [Google Scholar] [CrossRef]
  9. Smith, D.; Singh, S. Approaches to multisensor data fusion in target tracking: A survey. IEEE Trans. Knowl. Data Eng. 2006, 18, 1696–1710. [Google Scholar] [CrossRef]
  10. Liggins, M.E., II; Hall, D.; Llinas, J. Handbook of Multisensor Data Fusion: Theory and Practice; CRC Press: Boca Raton, FL, USA, 2017. [Google Scholar]
  11. Hall, D.; Chong, C.; Llinas, J.; Liggins, M.E., II. Distributed Data Fusion for Network-Centric Operations; CRC Press: Boca Raton, FL, USA, 2012. [Google Scholar]
  12. Grime, S.; Durrant-Whyte, H. Data fusion in decentralized sensor networks. Control Eng. Pract. 1994, 2, 849–863. [Google Scholar] [CrossRef]
  13. Marrs, A.; Reed, C.; Webb, A.; Webber, H. Data Incest and Symbolic Information Processing; United Kingdom Defence Evaluation and Research Agency: Farnborough, UK, 1999.
  14. Julier, S.; Uhlmann, J. Generalised decentralised data fusion with covariance intersection. In Handbook of Multisensor Data Fusion: Theory and Practice; CRC Press: Boca Raton, FL, USA, 2017. [Google Scholar]
  15. Bar-Shalom, Y. On the track-to-track correlation problem. IEEE Trans. Autom. Control 1981, 26, 571–572. [Google Scholar] [CrossRef]
  16. Kalman, R. A new approach to linear filtering and prediction problems. J. Basic Eng. 1960, 82, 35–45. [Google Scholar] [CrossRef]
  17. Bar-Shalom, Y.; Campo, L. The effect of the common process noise on the two-sensor fused-track covariance. IEEE Trans. Aerosp. 1986. [Google Scholar] [CrossRef]
  18. Chang, K.C.K.; Saha, R.K.; Bar-Shalom, Y. On optimal track-to-track fusion. IEEE Trans. Aerosp. Electron. Syst. 1997, 33, 1271–1276. [Google Scholar] [CrossRef]
  19. Lee, S.; Bakr, M. An optimal data fusion for distributed multisensor systems: Covariance extension method. In Proceedings of the 11th International Conference on Ubiquitous Information Management and Communication, Beppu, Japan, 5–7 January 2017. [Google Scholar]
  20. Li, X.; Zhu, Y.; Wang, J.; Han, C. Optimal linear estimation fusion. I. Unified fusion rules. IEEE Trans. Inf. Theory 2003, 49, 2192–2208. [Google Scholar] [CrossRef]
  21. Shin, V.; Lee, Y.; Choi, T. Generalized Millman’s formula and its application for estimation problems. Signal Process. 2006, 86, 257–266. [Google Scholar] [CrossRef]
  22. Sun, S. Multi-sensor optimal information fusion Kalman filters with applications. Aerosp. Sci. Technol. 2004, 8, 57–62. [Google Scholar] [CrossRef]
  23. Uhlmann, J.; Julier, S.; Durrant-Whyte, H. A Culminating Advance in the Theory and Practice of Data Fusion, Filtering and Decentralized Estimation; Technical Report; Covariance Intersection Working Group (CIWG): Oxford, UK, 1997. [Google Scholar]
  24. Maybeck, P. Stochastic Models, Estimation, and Control; Academic Press: Cambridge, MA, USA, 1982. [Google Scholar]
  25. Pao, L. Distributed multisensor fusion. Guid. Navig. Control Conf. 1994. [Google Scholar] [CrossRef]
  26. Pao, L.; Kalandros, M. Algorithms for a class of distributed architecture tracking. In Proceedings of the 1997 American Control Conference, Albuquerque, NM, USA, 6 June 1997. [Google Scholar]
  27. McLaughlin, S.; Evans, R.; Krishnamurthy, V. Data incest removal in a survivable estimation fusion architecture. In Proceedings of the Sixth International Conference of Information Fusion, Cairns, Australia, 8–11 July 2003. [Google Scholar]
  28. McLaughlin, S.; Krishnamurthy, V. Managing data incest in a distributed sensor network. In Proceedings of the 2003 IEEE International Conference on Acoustics, Speech, and Signal Processing, Hong Kong, China, 6–10 April 2003. [Google Scholar]
  29. Bakr, M.; Lee, S. Track level fusion with an estimation of maximum bound of unknown correlation. In Proceedings of the 2016 International Conference on Control, Automation and Information Sciences (ICCAIS), Ansan, Korea, 27–29 October 2016. [Google Scholar]
  30. Reinhardt, M.; Noack, B.; Baum, M. Analysis of set-theoretic and stochastic models for fusion under unknown correlations. In Proceedings of the 14th International Conference on Information Fusion (FUSION), Chicago, IL, USA, 5–8 July 2011. [Google Scholar]
  31. Kaplan, L.; Blair, W. Simulations studies of multisensor track association and fusion methods. In Proceedings of the 2006 IEEE Aerospace Conference, Big Sky, MT, USA, 4–11 March 2006. [Google Scholar]
  32. Zhu, H.; Zhai, Q.; Yu, M.; Han, C. Estimation fusion algorithms in the presence of partially known cross-correlation of local estimation errors. Inf. Fusion 2014, 18, 187–196. [Google Scholar] [CrossRef]
  33. Chen, L.; Arambel, P.; Mehra, R. Estimation under unknown correlation: Covariance intersection revisited. IEEE Trans. Autom. Control 2002, 47, 1879–1882. [Google Scholar] [CrossRef]
  34. Chen, L.; Arambel, P.; Mehra, R. Fusion under unknown correlation-covariance intersection as a special case. In Proceedings of the Fifth International Conference on Information Fusion, Annapolis, MD, USA, 8–11 July 2002. [Google Scholar]
  35. Julier, S.; Uhlmann, J. A non-divergent estimation algorithm in the presence of unknown correlations. In Proceedings of the 1997 American Control Conference, Albuquerque, NM, USA, 6 June 1997. [Google Scholar]
  36. Benaskeur, A. Consistent fusion of correlated data sources. In Proceedings of the IEEE 2002 28th Annual Conference of the Industrial Electronics Society, Sevilla, Spain, 5–8 November 2002. [Google Scholar]
  37. Zhou, Y.; Li, J. Robust decentralized data fusion based on internal ellipsoid approximation. IFAC Proc. Vol. 2008, 41, 9964–9969. [Google Scholar] [CrossRef]
  38. Zhou, Y.; Li, J. Data fusion of unknown correlations using internal ellipsoidal approximation. IFAC Proc. Vol. 2008, 41, 2856–2860. [Google Scholar] [CrossRef]
  39. Sijs, J.; Lazar, M. State fusion with unknown correlation: Ellipsoidal intersection. Automatica 2012, 48, 1874–1878. [Google Scholar] [CrossRef]
  40. Kumar, M.; Garg, D.; Zachery, R. A method for judicious fusion of inconsistent multiple sensor data. IEEE Sens. J. 2007, 7, 723–733. [Google Scholar] [CrossRef]
  41. Kumar, M.; Garg, D.; Zachery, R. A generalized approach for inconsistency detection in data fusion from multiple sensors. In Proceedings of the 2006 American Control Conference, Minneapolis, MN, USA, 14–16 June 2006. [Google Scholar]
  42. Kumar, M.; Garg, D.; Zachery, R. Stochastic adaptive sensor modeling and data fusion. In Proceedings of the SPIE 6174, Smart Structures and Materials 2006: Sensors and Smart Structures Technologies for Civil, Mechanical, and Aerospace Systems, San Diego, CA, USA, 5 April 2006. [Google Scholar]
  43. Abdulhafiz, W.; Khamis, A. Handling data uncertainty and inconsistency using multisensor data fusion. Adv. Artif. Intell. 2013. [Google Scholar] [CrossRef]
  44. Đurović, Ž.; Kovačević, B. QQ-plot approach to robust Kalman filtering. Int. J. Control 1995, 61, 837–857. [Google Scholar] [CrossRef]
  45. Jiang, L. Sensor Fault Detection and Isolation Using System Dynamics Identification Techniques. Ph.D. Thesis, The University of Michigan, Ann Arbor, MI, USA, 2011. [Google Scholar]
  46. Hwang, I.; Kim, S.; Kim, Y.; Seah, C. A survey of fault detection, isolation, and reconfiguration methods. IEEE Trans. Control 2010, 18, 636–653. [Google Scholar] [CrossRef]
  47. Kerr, T. Decentralized filtering and redundancy management for multisensor navigation. Trans. Aerosp. Electron. Syst. 1987, AES-23, 83–119. [Google Scholar] [CrossRef]
  48. Reece, S.; Roberts, S. Generalised covariance union: A unified approach to hypothesis merging in tracking. IEEE Trans. Aerosp. 2010, 46. [Google Scholar] [CrossRef]
  49. Uhlmann, J. Covariance consistency methods for fault-tolerant distributed data fusion. Inf. Fusion 2003, 4, 201–215. [Google Scholar] [CrossRef]
  50. Castanedo, F. A review of data fusion techniques. Sci. World J. 2013, 2013, 704504. [Google Scholar] [CrossRef] [PubMed]
  51. Luo, R.; Yih, C.; Su, K. Multisensor fusion and integration: Approaches, applications, and future research directions. IEEE Sens. J. 2002, 2, 107–119. [Google Scholar] [CrossRef]
  52. Khaleghi, B.; Khamis, A.; Karray, F.; Razavi, S. Multisensor data fusion: A review of the state-of-the-art. Inf. Fusion 2013, 14, 28–44. [Google Scholar] [CrossRef]
  53. Gao, Z.; Cecati, C.; Ding, S. A survey of fault diagnosis and fault-tolerant techniques—Part I: Fault diagnosis with model-based and signal-based approaches. IEEE Trans. Ind. Electron. 2015, 62, 3757–3767. [Google Scholar] [CrossRef]
  54. Allerton, D.; Jia, H. A review of multisensor fusion methodologies for aircraft navigation systems. J. Navig. 2005. [Google Scholar] [CrossRef]
  55. Noack, B. State Estimation for Distributed Systems with Stochastic and Set-Membership Uncertainties; KIT Scientific Publishing: Karlsruhe, Germany, 2014. [Google Scholar]
  56. Liggins, M.; Chong, C.; Kadar, I.; Alford, M. Distributed fusion architectures and algorithms for target tracking. Proc. IEEE 1997, 85, 95–107. [Google Scholar] [CrossRef]
  57. Simon, D. Optimal State Estimation; John Wiley & Sons, Inc.: Hoboken, NJ, USA, 2006; ISBN 9780470045343. [Google Scholar]
  58. Bar-Shalom, Y.; Li, X. Estimation and Tracking-Principles, Techniques, and Software; Artech House, Inc.: Norwood, MA, USA, 1993. [Google Scholar]
  59. Julier, S.; Uhlmann, J. New extension of the Kalman filter to nonlinear systems. In Proceedings of the SPIE 3068, Signal Processing, Sensor Fusion, and Target Recognition VI, Orlando, FL, USA, 28 July 1997. [Google Scholar]
  60. Wan, E.; Van Der Merwe, R. The unscented Kalman filter for nonlinear estimation. In Proceedings of the IEEE 2000 Adaptive Systems for Signal Processing, Communications, and Control Symposium (Cat. No.00EX373), Lake Louise, AB, Canada, 4 October 2000. [Google Scholar]
  61. Hernandez, M.; Kirubarajan, T. Multisensor resource deployment using posterior Cramér-Rao bounds. IEEE Trans. 2004, 40, 399–416. [Google Scholar] [CrossRef]
  62. Mutambara, A. Decentralized Estimation and Control for Multisensor Systems; CRC Press: Boca Raton, FL, USA, 1998. [Google Scholar]
  63. Grocholsky, B. Information-Theoretic Control of Multiple Sensor Platforms. Ph.D. Thesis, The University of Sydney, Australia, 2002. [Google Scholar]
  64. Li, X.; Zhu, Y.; Han, C. Unified optimal linear estimation fusion. I. Unified models and fusion rules. In Proceedings of the Third International Conference on Information Fusion, Paris, France, 10–13 July 2000. [Google Scholar]
  65. Yan, L.P.; Liu, B.S.; Zhou, D.H. The modeling and estimation of asynchronous multirate multisensor dynamic systems. Aerosp. Sci. Technol. 2006, 10, 63–71. [Google Scholar] [CrossRef]
  66. Lin, H.; Sun, S. Distributed fusion estimation for multi-sensor asynchronous sampling systems with correlated noises. Int. J. Syst. Sci. 2017, 48, 952–960. [Google Scholar] [CrossRef]
  67. Alouani, A.; Gray, J.; McCabe, D. Theory of distributed estimation using multiple asynchronous sensors. IEEE Trans. Aerosp. Electron. Syst. 2005, 41, 717–722. [Google Scholar] [CrossRef]
  68. Lin, H.; Sun, S. Distributed fusion estimator for multisensor multirate systems with correlated noises. IEEE Trans. Syst. Man Cybern. 2017. [Google Scholar] [CrossRef]
  69. Safari, S.; Shabani, F.; Simon, D. Multirate multisensor data fusion for linear systems using Kalman filters and a neural network. Aerosp. Sci. Technol. 2014, 39, 465–471. [Google Scholar] [CrossRef]
  70. Liu, Q.; Brigham, K.; Rao, N. Estimation and Fusion for Tracking Over Long-Haul Links Using Artificial Neural Networks. IEEE Trans. Signal Inf. Process. Netw. 2017. [Google Scholar] [CrossRef]
  71. Luo, X.; Chang, X. A Novel Data Fusion Scheme using Grey Model and Extreme Learning Machine in Wireless Sensor Networks. Int. J. Control Autom. Syst. 2015, 13, 539–546. [Google Scholar] [CrossRef]
  72. Yadaiah, N.; Singh, L.; Bapi, R.S.; Rao, V.S.; Deekshatulu, B.L.; Negi, A. Multisensor Data Fusion Using Neural Networks. In Proceedings of the 2006 IEEE International Joint Conference on Neural Network, Vancouver, BC, Canada, 16–21 July 2006; pp. 875–881. [Google Scholar]
  73. Brigham, K.; Kumar, B.V.; Rao, N.S. Learning-based approaches to Nonlinear Multisensor fusion in Target Tracking. In Proceedings of the 16th International Conference on Information Fusion (FUSION), Istanbul, Turkey, 9–12 July 2013. [Google Scholar]
  74. Duraisamy, B.; Schwarz, T.; Wohler, C. Track level fusion algorithms for automotive safety applications. In Proceedings of the 2013 International Conference on Signal Processing Image Processing & Pattern Recognition (ICSIPR), Coimbatore, India, 7–8 February 2013; pp. 179–184. [Google Scholar]
  75. Bar-Shalom, Y.; Willett, P.; Tian, X. Tracking and Data Fusion: A Handbook of Algorithms; YBS Publishing: Storrs, CT, USA, 2011. [Google Scholar]
  76. McLaughlin, S.; Evans, R. A graph theoretic approach to data incest management in network centric warfare. In Proceedings of the 2005 8th International Conference on Information Fusion, Philadelphia, PA, USA, 25–28 July 2005. [Google Scholar]
  77. Bréhard, T.; Krishnamurthy, V. Optimal data incest removal in Bayesian decentralized estimation over a sensor network. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP, Honolulu, HI, USA, 15–20 April 2007. [Google Scholar]
  78. Nicholson, D.; Lloyd, C.; Julier, S. Scalable distributed data fusion. In Proceedings of the Fifth International Conference on Information Fusion, Annapolis, MD, USA, 8–11 July 2002. [Google Scholar]
  79. Chong, C.; Mori, S.; Barker, W. Architectures and algorithms for track association and fusion. IEEE Aerosp. 2000, 15, 5–13. [Google Scholar]
  80. Khawsuk, W.; Pao, L. Decorrelated state estimation for distributed tracking of interacting targets in cluttered environments. In Proceedings of the 2002 American Control Conference, Anchorage, AK, USA, 8–10 May 2002. [Google Scholar]
  81. Mallick, M.; Schmidt, S.; Pao, L.Y.; Chang, K.C. Out-of-sequence track filtering using the decorrelated pseudo-measurement approach. In Proceedings of the SPIE 5428, Signal and Data Processing of Small Targets, Orlando, FL, USA, 25 August 2004; pp. 154–166. [Google Scholar]
  82. Trailovic, L.; Pao, L. Variance estimation and ranking of Gaussian mixture distributions in target tracking applications. In Proceedings of the 41st IEEE Conference on Decision and Control, Las Vegas, NV, USA, 10–13 December 2002. [Google Scholar]
  83. Horn, R.; Johnson, C. Topics in Matrix Analysis; Cambridge University Press: Cambridge, UK, 1994. [Google Scholar]
  84. Hanebeck, U.; Briechle, K.; Horn, J. A tight bound for the joint covariance of two random vectors with unknown but constrained cross-correlation. In Proceedings of the International Conference on Multisensor Fusion and Integration for Intelligent Systems, Baden, Germany, 20–22 August 2001. [Google Scholar]
  85. Reece, S.; Roberts, S. Robust, low-bandwidth, multi-vehicle mapping. In Proceedings of the 2005 8th International Conference on Information Fusion, Philadelphia, PA, USA, 25–28 July 2005. [Google Scholar]
  86. Julier, S. Estimating and exploiting the degree of independent information in distributed data fusion. In Proceedings of the 12th International Conference on Information Fusion, Seattle, WA, USA, 6–9 July 2009. [Google Scholar]
  87. Schreier, P. A unifying discussion of correlation analysis for complex random vectors. IEEE Trans. Signal Process. 2008, 56, 1327–1336. [Google Scholar] [CrossRef]
  88. Qu, X.; Zhou, J.; Song, E.; Zhu, Y. Minimax robust optimal estimation fusion in distributed multisensor systems with uncertainties. IEEE Signal Process. Lett. 2010, 17, 811–814. [Google Scholar]
  89. Gao, Y.; Li, X.; Song, E. Robust linear estimation fusion with allowable unknown cross-covariance. In Proceedings of the 17th International Conference on Information Fusion, Salamanca, Spain, 7–10 July 2014. [Google Scholar]
  90. Thompson, B. Canonical correlation analysis. Encycl. Stat. Behav. Sci. 2005. [Google Scholar] [CrossRef]
  91. Franken, D.; Hupper, A. Improved fast covariance intersection for distributed data fusion. In Proceedings of the 2005 8th International Conference on Information Fusion, Philadelphia, PA, USA, 25–28 July 2005. [Google Scholar]
  92. Niehsen, W. Information fusion based on fast covariance intersection filtering. In Proceedings of the Fifth International Conference on Information Fusion, Annapolis, MD, USA, 8–11 July 2002. [Google Scholar]
  93. Hurley, M. An information theoretic justification for covariance intersection and its generalization. In Proceedings of the Fifth International Conference on Information Fusion, Annapolis, MD, USA, 8–11 July 2002. [Google Scholar]
  94. Wang, Y.; Li, X. A fast and fault-tolerant convex combination fusion algorithm under unknown cross-correlation. In Proceedings of the 12th International Conference on Information Fusion, Seattle, WA, USA, 6–9 July 2009. [Google Scholar]
  95. Chong, C.; Mori, S. Convex combination and covariance intersection algorithms in distributed fusion. In Proceedings of the 4th International Conference on Information Fusion, Montreal, QC, Canada, 7–10 August 2001. [Google Scholar]
  96. Deng, Z.; Zhang, P.; Qi, W.; Liu, J.; Gao, Y. Sequential covariance intersection fusion Kalman filter. Inf. Sci. 2012, 189, 293–309. [Google Scholar] [CrossRef]
  97. Farrell, W.; Ganesh, C. Generalized chernoff fusion approximation for practical distributed data fusion. In Proceedings of the 12th International Conference on Information Fusion, Seattle, WA, USA, 6–9 July 2009. [Google Scholar]
  98. Deng, Z.; Zhang, P.; Qi, W.; Yuan, G.; Liu, J. The accuracy comparison of multisensor covariance intersection fuser and three weighting fusers. Inf. Fusion 2013, 14, 177–185. [Google Scholar] [CrossRef]
  99. Guo, Q.; Chen, S.; Leung, H.; Liu, S. Covariance intersection based image fusion technique with application to pansharpening in remote sensing. Inf. Sci. 2010, 180, 3434–3443. [Google Scholar] [CrossRef]
  100. Li, H.; Nashashibi, F.; Yang, M. Split covariance intersection filter: Theory and its application to vehicle localization. IEEE Trans. Intell. Transp. Syst. 2013, 14, 1860–1871. [Google Scholar] [CrossRef]
  101. Uhlmann, J.; Julier, S. NASA Mars rover: A testbed for evaluating applications of covariance intersection. In Proceedings of the SPIE 3693, Unmanned Ground Vehicle Technology, Orlando, FL, USA, 22 July 1999. [Google Scholar]
  102. Hu, J.; Xie, L.; Zhang, C. Diffusion Kalman filtering based on covariance intersection. IEEE Trans. Signal Process. 2012, 60, 891–902. [Google Scholar] [CrossRef]
  103. Reinhardt, M.; Noack, B. Closed-form optimization of covariance intersection for low-dimensional matrices. In Proceedings of the 2012 15th International Conference on Information Fusion (FUSION), Singapore, 9–12 July 2012. [Google Scholar]
  104. Reinhardt, M.; Noack, B.; Arambel, P. Minimum covariance bounds for the fusion under unknown correlations. IEEE Signal 2015, 22, 1210–1214. [Google Scholar] [CrossRef]
  105. Noack, B.; Sijs, J.; Reinhardt, M.; Hanebeck, U. Decentralized data fusion with inverse covariance intersection. Automatica 2017, 79, 35–41. [Google Scholar] [CrossRef]
  106. Cong, J.; Li, Y.; Qi, G.; Sheng, A. An order insensitive sequential fast covariance intersection fusion algorithm. Inf. Sci. 2016, 367–368, 28–40. [Google Scholar] [CrossRef]
  107. Noack, B.; Baum, M.; Hanebeck, U. Covariance intersection in nonlinear estimation based on pseudo gaussian densities. In Proceedings of the 14th International Conference on Information Fusion (FUSION), Chicago, IL, USA, 5–8 July 2011. [Google Scholar]
  108. Ajgl, J.; Straka, O. Covariance intersection in track-to-track fusion with memory. In Proceedings of the 2016 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI), Baden, Germany, 19–21 September 2016. [Google Scholar]
  109. Ajgl, J.; Straka, O. Covariance intersection in track-to-track fusion without memory. In Proceedings of the 19th International Conference on Information Fusion (FUSION), Heidelberg, Germany, 5–8 July 2016. [Google Scholar]
  110. Luo, R.; Chen, O.; Tu, L. Nodes localization through data fusion in sensor network. In Proceedings of the 19th International Conference on Advanced Information Networking and Applications, Taipei, Taiwan, 28–30 March 2005. [Google Scholar]
  111. Luo, R.; Liao, C.; Lin, S. Multi-sensor fusion for reduced uncertainty in autonomous mobile robot docking and recharging. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, St. Louis, MO, USA, 10–15 October 2009. [Google Scholar]
  112. Lazarus, S.; Ashokaraj, I.; Tsourdos, A. Vehicle localization using sensors data fusion via integration of covariance intersection and interval analysis. IEEE Sens. 2007, 7, 1302–1314. [Google Scholar] [CrossRef]
  113. Wang, Y.; Li, X. Distributed estimation fusion with unavailable cross-correlation. IEEE Trans. Aerosp. Electron. Syst. 2012, 48, 259–278. [Google Scholar] [CrossRef]
  114. De Campos Ferreira, J.; Waldmann, J. Covariance intersection-based sensor fusion for sounding rocket tracking and impact area prediction. Control Eng. Pract. 2007, 15, 389–409. [Google Scholar] [CrossRef]
  115. Arambel, P.; Rago, C.; Mehra, R. Covariance intersection algorithm for distributed spacecraft state estimation. In Proceedings of the 2001 American Control Conference, Arlington, VA, USA, 25–27 June 2001. [Google Scholar]
  116. Zhou, Y.; Wang, D.; Pei, T.; Tian, S. Robust estimation fusion in wireless senor networks with outliers and correlated noises. Int. J. Distrib. Sens. Netw. 2014, 10. [Google Scholar] [CrossRef]
  117. Noack, B.; Sijs, J.; Hanebeck, U. Algebraic analysis of data fusion with ellipsoidal intersection. In Proceedings of the 2016 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI), Baden, Germany, 19–21 September 2016. [Google Scholar]
  118. Noack, B.; Lyons, D.; Nagel, M. Nonlinear information filtering for distributed multisensor data fusion. In Proceedings of the American Control Conference (ACC), San Francisco, CA, USA, 29 June–1 July 2011. [Google Scholar]
  119. Sijs, J.; Lazar, M. Empirical case-studies of state fusion via ellipsoidal intersection. In Proceedings of the 14th International Conference on Information Fusion, Chicago, IL, USA, 5–8 July 2011. [Google Scholar]
  120. Wellington, S.; Atkinson, J. Sensor validation and fusion using the Nadaraya-Watson statistical estimator. In Proceedings of the Fifth International Conference on Information Fusion, Annapolis, MD, USA, 8–11 July 2002. [Google Scholar]
  121. Doraiswami, R.; Cheded, L. A unified approach to detection and isolation of parametric faults using a Kalman filter residual-based approach. J. Frankl. Inst. 2013, 350, 938–965. [Google Scholar] [CrossRef]
  122. Huang, S.; Tan, K.K.; Lee, T.H. Fault Diagnosis and Fault-Tolerant Control in Linear Drives Using the Kalman Filter. IEEE Trans. Ind. Electron. 2012, 59, 4285–4292. [Google Scholar] [CrossRef]
  123. Da, R.; Lin, C. A new failure detection approach and its application to GPS autonomous integrity monitoring. IEEE Trans. Aerosp. Electron. 1995, 31, 499–506. [Google Scholar]
  124. Kai, Q.; Hui, Y.; Peng, Y.; Yan, R. An integrated fault detection scheme for the federated filter. In Proceedings of the 2013 Fourth International Conference on Digital Manufacturing & Automation, Qingdao, China, 29–30 June 2013. [Google Scholar]
  125. Hajiyev, C.; Soken, H.E. Robust Adaptive Kalman Filter for estimation of UAV dynamics in the presence of sensor/actuator faults. Aerosp. Sci. Technol. 2013, 28, 376–383. [Google Scholar] [CrossRef]
  126. Pulido Herrera, E.; Kaufmann, H.; Secue, J.; Quirós, R.; Fabregat, G. Improving data fusion in personal positioning systems for outdoor environments. Inf. Fusion 2013, 14, 45–56. [Google Scholar] [CrossRef]
  127. Jamouli, H.; Sauter, D. A generalized likelihood ratio test for a fault-tolerant control system. Int. J. Innov. Comput. 2012, 8, 1743–1771. [Google Scholar]
  128. Jacob, P. Probability and Statistics for Engineers and Scientists (9th Edition). Chance 2013, 26, 53. [Google Scholar] [CrossRef]
  129. Matzka, S.; Altendorfer, R. A comparison of track-to-track fusion algorithms for automotive sensor fusion. In Proceedings of the 2008 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems, Seoul, Korea, 20–22 August 2008. [Google Scholar]
  130. Del Gobbo, D.; Napolitano, M.; Famouri, P. Experimental application of extended Kalman filtering for sensor validation. IEEE Trans. 2001, 9, 376–380. [Google Scholar] [CrossRef]
  131. Sepasi, M.; Sassani, F. On-line fault diagnosis of hydraulic systems using Unscented Kalman Filter. Int. J. Control. Autom. Syst. 2010, 8, 149–156. [Google Scholar] [CrossRef]
  132. Al Hage, J.; El Najjar, M.E.; Pomorski, D. Multi-sensor fusion approach with fault detection and exclusion based on the Kullback–Leibler Divergence: Application on collaborative multi-robot system. Inf. Fusion 2017, 37, 61–76. [Google Scholar] [CrossRef]
  133. Frolik, J.; Abdelrahman, M. A confidence-based approach to the self-validation, fusion and reconstruction of quasi-redundant sensor data. IEEE Trans. 2001, 50, 1761–1769. [Google Scholar] [CrossRef]
  134. Jeyanthi, R.; Anwamsha, K. Fuzzy-based sensor validation for a nonlinear bench-mark boiler under MPC. In Proceedings of the 2016 10th International Conference on Intelligent Systems and Control (ISCO), Coimbatore, India, 7–8 January 2016. [Google Scholar]
  135. Zhang, L.; Zhang, D. Domain Adaptation Extreme Learning Machines for Drift Compensation in E-Nose Systems. IEEE Trans. Instrum. Meas. 2015, 64, 1790–1801. [Google Scholar] [CrossRef]
  136. Mathioudakis, K.; Romessis, C. Probabilistic neural networks for validation of on-board jet engine data. Proc. Inst. Mech. Eng. Part G 2004, 218, 59–72. [Google Scholar] [CrossRef]
  137. Michail, K.; Deliparaschos, K.M.; Tzafestas, S.G.; Zolotas, A.C. AI-Based Actuator/Sensor Fault Detection With Low Computational Cost for Industrial Applications. IEEE Trans. Control Syst. Technol. 2016, 24, 293–301. [Google Scholar] [CrossRef]
  138. Chine, W.; Mellit, A.; Lughi, V.; Malek, A.; Sulligoi, G. A novel fault diagnosis technique for photovoltaic systems based on artificial neural networks. Renew. Energy 2016, 90, 501–512. [Google Scholar] [CrossRef]
  139. Abbaspour, A.; Aboutalebi, P.; Yen, K.; Sargolzaei, A. Neural adaptive observer-based sensor and actuator fault detection in nonlinear systems: Application in UAV. ISA Trans. 2017. [Google Scholar] [CrossRef] [PubMed]
  140. Talebi, H.A.; Khorasani, K.; Tafazoli, S. A Recurrent Neural-Network-Based Sensor and Actuator Fault Detection and Isolation for Nonlinear Systems with Application to the Satellite’s Attitude Control Subsystem. IEEE Trans. Neural Netw. 2009, 20, 45–60. [Google Scholar] [CrossRef] [PubMed]
  141. Dajani-Brown, S.; Cofer, D.; Hartmann, G.; Pratt, S. Formal Modeling and Analysis of an Avionics Triplex Sensor Voter. In Model Checking Software. SPIN 2003; Springer: Berlin/Heidelberg, Germany, 2003; pp. 34–48. [Google Scholar]
  142. Wang, R.; Xiong, Z.; Liu, J.; Xu, J. Chi-square and SPRT combined fault detection for multisensor navigation. IEEE Trans. 2016, 52, 1352–1365. [Google Scholar] [CrossRef]
  143. Berdjag, D.; Cieslak, J.; Zolghadri, A. Fault detection and isolation of aircraft air data/inertial system. In Progress in Flight Dynamics, Guidance, Navigation, Control, Fault Detection, and Avionics; EDP Sciences: Les Ulis, France, 2013; Volume 6, pp. 317–332. [Google Scholar]
  144. Kassab, M.A.; Taha, H.S.; Shedied, S.A.; Maher, A. A novel voting algorithm for redundant aircraft sensors. In Proceedings of the 11th World Congress on Intelligent Control and Automation, Shenyang, China, 29 June–4 July 2014; pp. 3741–3746. [Google Scholar]
  145. Abdulhafiz, W.A.; Khamis, A. Bayesian approach to multisensor data fusion with Pre- and Post-Filtering. In Proceedings of the 2013 10th IEEE International Conference on Networking, Sensing and Control (ICNSC), Evry, France, 10–12 April 2013; pp. 373–378. [Google Scholar]
  146. Reece, S.; Roberts, S.; Claxton, C. Multi-sensor fault recovery in the presence of known and unknown fault types. In Proceedings of the 12th International Conference on Information Fusion, Seattle, WA, USA, 6–9 July 2009. [Google Scholar]
  147. Kim, H.; Park, S.; Kim, Y.; Park, C. Hybrid fault detection and isolation method for UAV inertial sensor redundancy management system. IFAC Proc. Vol. 2005. [Google Scholar] [CrossRef]
  148. Kim, S.; Kim, Y.; Park, C.; Jung, I. Hybrid fault detection and isolation techniques for aircraft inertial measurement sensors. In Proceedings of the Navigation, and Control Conference and Exhibit, Guidance, Navigation, and Control and Co-located Conferences, Providence, RI, USA, 16–19 August 2004. [Google Scholar]
  149. Boyd, S.; Vandenberghe, L. Convex Optimization; Cambridge University Press: Cambridge, UK, 2004. [Google Scholar]
  150. Bochardt, O.; Calhoun, R.; Uhlmann, J. Generalized information representation and compression using covariance union. In Proceedings of the 9th International Conference on Information Fusion, Florence, Italy, 10–13 July 2006. [Google Scholar]
  151. Bochardt, O.; Uhlmann, J. On the Equivalence of the General Covariance Union (GCU) and Minimum Enclosing Ellipsoid (MEE) Problems. arXiv, 2010; arXiv:1012.4795. [Google Scholar]
  152. Julier, S.; Uhlmann, J. A method for dealing with assignment ambiguity. In Proceedings of the 2004 American Control Conference, Boston, MA, USA, 30 June–2 July 2004. [Google Scholar]
  153. Nicholson, D. An automatic method for eliminating spurious data from sensor networks. In Proceedings of the IEE Target Tracking 2004: Algorithms and Applications, Brighton, UK, 23–24 March 2004. [Google Scholar]
Figure 1. Centralized fusion architecture.
Figure 2. Distributed fusion architecture. Each node consists of a sensor and a fusion node.
Figure 3. Causes of double counting.
Figure 4. Kalman filter prediction-update procedure (z^{-1} represents a unit time delay).
Figure 5. Ellipsoidal fusion of two estimates ε(x_1, P_1) and ε(x_2, P_2): (a) zero mean; (b) non-zero mean. Compared to the optimal solution ε(x_o, P_o), the Kalman filter (KF) yields underestimated results ε(x_K, P_K) by ignoring cross-correlation.
Figure 6. Taxonomy of fusion under unknown correlation.
Figure 7. Illustration of the fused covariance P_f of individual data sources for the correlation coefficient in the range [−1, 1]. The gray area represents all possibilities of a fused covariance.
Figure 8. Two estimates at the origin, i.e., ε(0, P_1) and ε(0, P_2), and their fused result ε(0, P_CI), provided by the Covariance Intersection method.
Figure 9. Two estimates at the origin, i.e., ε(0, P_1) and ε(0, P_2), and the aimed fused result ε(0, P_ME) of Maximum Ellipsoidal methods.
Figure 10. Two estimates ε(x_1, P_1) and ε(x_2, P_2) and their fused results provided by the CI and ME methods, where three instances of ME are considered: (a) zero mean; (b) non-zero mean.
Figure 11. Comparison of the CI and ME methods in terms of computation time for different data dimensions.
Figure 12. Illustration of three ellipsoids ε(0, P_1), ε(0, P_2), ε(0, P_3) and their fusion result ε(0, P_CI), provided by the CI method. The figure also shows the actual fused result ε(0, P_CIA) for CI.
Figure 13. Illustration of three ellipsoids ε(0, P_1), ε(0, P_2), ε(0, P_3) and their fusion results provided by ME methods. The figure also shows the actual fused result ε(0, P_MEA) for ME methods.
Figure 14. Illustration of two ellipsoids ε(0, P_1) and ε(0, P_2) and their consistent fused result ε(0, P_U), provided by the CU method.
Table 1. Fused result and average computation time of different ellipsoidal methods.
CI:  x_CI = [0.899, 1.80]^T;  P_CI = [3.97, 1.48; 1.48, 3.23];  det(P_CI) = 10.663;  time = 1.1668 ms
LE:  x_LE = [0.82, 1.60]^T;  P_LE = [3.33, 0.98; 0.98, 2.49];  det(P_LE) = 7.365;  time = 0.1514 ms
IEA: x_IEA = [1, 2]^T;  P_IEA = [4, 1.8; 1.8, 3.5];  det(P_IEA) = 10.76;  time = 0.2879 ms
EI:  x_EI = [0.52, 1.41]^T;  P_EI = [3.33, 0.98; 0.98, 2.49];  det(P_EI) = 7.365;  time = 0.2353 ms
Table 2. Summary of various algorithms for fusion under unknown correlation (columns: Framework, Algorithms, Characteristics).
Data Decorrelation — Double counting removal [27,28,76,77]
  • Tracking and explicitly removing the double counting
  • Assumes a particular network topology
  • Neither scalable nor practical solution for a large network of sensors
Measurement reconstruction [25,26]
  • Decorrelating the sequence of measurements by reconstructing the measurements at fusion node
  • Internal information like Kalman gain, association weights, and sensor model information etc. are required to reconstruct the measurements
  • Inefficient and impractical for large distributed sensor networks
Modeling Correlation [29,30,31,32,84]
  • Approximate the unknown cross-covariance based on a function of correlation coefficient
  • A closed form solution for scalar-valued and approximate solution for fusion of vector-valued two estimates
  • Improved fusion performance by incorporating knowledge of cross-correlation
  • Difficult to interpret cross-correlation for multiple estimates
Ellipsoidal Methods — Covariance Intersection Method [14,33,34,35]
  • Provides a consistent and minimum bound for two data sources
  • Does not provide a tight overestimate for more than two data sources
  • Computationally demanding
Largest Ellipsoid Method [36]
  • Provides a less conservative estimate of fused covariance than CI
  • Fused mean value is based on the independent assumption of KF
Internal Ellipsoidal Approximation [37,38]
  • Approximate the fused covariance by an internal maximum ellipsoid
  • Based on heuristics
Ellipsoidal Intersection Method [39]
  • The fused mean and covariance is calculated based on mutual and exclusive information of the two data sources
  • Less conservative than CI but may provide inconsistent fused results in some cases
  • Limited to the fusion of two data sources
Table 3. Overview of the methodologies for inconsistent and spurious data sources (columns: Approaches, Characteristics).
Model based approaches [121,122,125,132,135,137,139]
  • Identification and subsequent removal of inconsistent and abnormal data
  • Uses residuals generated between the modeled outputs and actual sensor measurements to identify inconsistency
  • Need prior information and limited to specific failure model(s)
Redundancy based approaches [40,41,43,47,141,143,144]
  • Uses consistency checking and majority voting to identify inconsistency among multiple sensor estimates of the same state
  • Identification of corrupted sensor estimates without prior information
  • May fail if inconsistent estimates are provided by multiple sensors simultaneously
Fusion based approaches [48,49]
  • Provides a consistent fused result as long as one estimate is consistent
  • May result in an inappropriately conservative fused solution
  • Computationally demanding
