Article

A Decentralized Sensor Fusion Scheme for Multi Sensorial Fault Resilient Pose Estimation

by Moumita Mukherjee *, Avijit Banerjee, Andreas Papadimitriou, Sina Sharif Mansouri and George Nikolakopoulos
Robotics and AI Group, Department of Computer, Electrical and Space Engineering, Luleå University of Technology, SE-97187 Luleå, Sweden
* Author to whom correspondence should be addressed.
Sensors 2021, 21(24), 8259; https://doi.org/10.3390/s21248259
Submission received: 6 October 2021 / Revised: 29 November 2021 / Accepted: 6 December 2021 / Published: 10 December 2021
(This article belongs to the Topic Autonomy for Enabling the Next Generation of UAVs)

Abstract: This article proposes a novel decentralized, two-layered, multi-sensorial fusion architecture for establishing a resilient pose estimation scheme. As will be presented, the first layer of the fusion architecture considers a set of distributed nodes: all possible combinations of pose information from the different sensors are integrated, with multiple extended Kalman filters providing the corresponding pose estimates. Based on the estimated poses obtained from the first layer, a Fault Resilient Optimal Information Fusion (FR-OIF) paradigm is introduced in the second layer to provide a trusted pose estimate. The second layer incorporates the output of each node (constructed in the first layer) in a weighted linear combination, while explicitly accounting for the maximum likelihood fusion criterion. Moreover, in the case of inaccurate measurements, the proposed FR-OIF formulation enables self-resiliency through a built-in fault isolation mechanism, and the FR-OIF scheme thus achieves accurate localization even in the presence of sensor failures or erroneous measurements. To demonstrate the effectiveness of the proposed fusion architecture, extensive experimental studies have been conducted with a micro aerial vehicle equipped with various onboard pose sensors, namely a 3D lidar, a RealSense camera, an ultra-wideband node, and an IMU. The efficiency of the proposed framework is extensively evaluated through multiple experimental results, while its superiority is demonstrated through a comparison with the classical multi-sensorial centralized fusion approach.

1. Introduction

State estimation is a challenging problem in the field of robotics that has been significantly explored in recent years across different scientific and technological communities, such as robotics [1], aerospace [2], automatic control [3], artificial intelligence [4], and computer vision [5]. In this framework, one of the most interesting problems is the estimation of the pose of a robot, especially when multiple sensors are utilized for pose determination, together with related sensor fusion schemes, in order to increase the overall accuracy of the estimation while, at the same time, introducing the proper resiliency.
Towards this direction, research on sensor fusion has lately been evolving rapidly, since multi-sensor fusion has the ability to integrate or combine data streams from different sources, increasing the quality of the measurements while decreasing the corrupting noise. Thus, in multi-sensorial fusion architectures, as in the case of pose estimation, it is very common to have multiple sensors providing, e.g., full pose estimates or partial pose estimates (translation or orientation), and to realize the overall sensor fusion scheme in either a centralized [6] or a distributed [6,7] manner. In centralized fusion architectures, a central (single) node is utilized, where direct measurement data or raw data from multiple sensors are fused using some type of Kalman filter, depending on whether the system is linear or nonlinear. In this context, the Extended Kalman Filter (EKF) has received considerable attention in numerous research works on multi-sensor fusion utilizing real sensor data for the localization of ground robots and Micro Aerial Vehicles (MAVs) [8,9,10,11]. However, EKF-based fusion algorithms involve a local linearization of the system. In order to explicitly account for the nonlinearities involved in the dynamical model, a few progressive approaches consider the Unscented Kalman Filter (UKF) [12,13] and the Particle Filter (PF) [14] for multi-sensor fusion in robotic localization. Even though the UKF and PF are superior for addressing nonlinearities, this advantage comes with an additional computational burden. Comparison studies reported in [15,16] demonstrated that, for robot localization, the performance of the EKF is comparable for all practical purposes. Apart from the conventional Kalman-based approaches, various innovative numerical-optimization-based multi-sensor fusion algorithms have been proposed, such as moving horizon estimation [17,18], set membership estimation [19], and graph optimization [20], while MAV localization in GPS-denied environments has recently been investigated in [19,21]. The numerical-optimization-based estimation framework has the capability to incorporate the nonlinearity of the dynamical model, as well as various physical constraints. However, it imposes additional computational complexity and limitations regarding the theoretical guarantees on convergence properties.
In general, a centralized structure with multiple inputs and outputs works very well for data fusion. However, it is not sufficient for all cases: one of the sensors can fail suddenly for a certain period of the total operating duration, and in such a situation the centralized approach follows the faulty sensor’s data, with the disadvantage that it can neither detect nor eliminate the fault introduced by the sensor. Although it is possible to attain an almost optimal solution in the centralized fusion framework, in real and field deployments, processing all the sensors at a single node is most likely ineffective and can lead to direct failures in the case of sensor defects or temporary performance degradation. On the other hand, distributed fusion [22] typically consists of at least two layers: in the first layer, raw data are collected from the different sensor measurement units to create local estimates, which are subsequently forwarded to the second layer for further fusion of the corresponding nodes. Typically, in the first layer a Kalman filter is utilized to provide the local estimates from the sensors [23], while, depending on the data assimilation procedure, various decentralized fusion methods have been investigated for the second layer [24]. In this direction, information-filter-based decentralized fusion [22] has been considered for the localization of mobile robots and MAVs in [24,25]. However, these formulations do not explicitly incorporate the correlation between local estimates. Contemplating the dependency of the local nodes, various covariance-intersection-based decentralized fusion algorithms for collaborative localization have been investigated in [26,27,28,29,30]. In this context, an innovative maximum likelihood criterion for performing the decentralized fusion was presented in [31], where the information from the local nodes was integrated as a weighted combination to provide the fused states, with the weighting matrices judiciously determined based on the cross-covariances of the local estimates. However, these decentralized fusion approaches have been implemented under the assumption that the measurements from the sensors are only influenced by inherent unbiased Gaussian noise, while in reality, and for field robotic applications, the related measurements are spurious [30] due to unexpected uncertainties, such as temporarily inoperative surroundings (e.g., low illumination conditions or the presence of smoke/dust), faults, spikes, and sensor glitches. In such situations, the magnitude of the inaccuracy in the measurements is much larger when compared to the normal noise [32]. In order to address this issue, various fault detection and isolation methods, in combination with decentralized fusion, have been investigated in the related state-of-the-art literature [28,33,34].
In summary, distributed fusion is a framework that is robust to failures and can provide the proper resiliency for critical applications, while eliminating the risk of single-point failures. Specifically in robotic applications, perception based on decentralized fusion can handle the fundamental problems addressed in numerous implementations, such as multi-robot tracking, cooperative localization and navigation [35], multi-robot Simultaneous Localization and Mapping (SLAM) [36], distributed multi-view 3D reconstruction and mapping, and multi-robot monitoring [37]. Map merging, data association, and robot localization can only be efficient if the robots are sufficiently capable of autonomously perceiving the world. Therefore, centralized and decentralized fusion, or fusion in general, plays an important role in all robotic applications.
In this article, a unique decentralized multi-sensor fusion approach is proposed that introduces a novel flexible fault resilient structure, isolating the information from the faulty sensor while retaining the information provided by the other sensor measurements in an optimal information fusion framework. The classical optimal information fusion (OIF) presented in [31] has a provision to incorporate a sensor-level fault isolation that operates as a separate unit. However, an increasing number of sensors/local nodes, along with the presence of temporally sporadic (spikes, temporary failure) measurements, demands a more flexible fusion architecture to account for fault resiliency in real time. In the present work, an innovative fault isolation mechanism is embedded in the OIF architecture by representing the isolation operation as a constrained optimization problem. Thus, the novelty of this article stems from the novel application of an optimal-information two-layered sensor fusion method, which has been modified to handle sensor breakdown during operation, as will be described in the sequel.
The main focus of this article is to develop a two-layered, multi-sensor fusion scheme with self-resiliency, assisted by a unique optimal isolation algorithm. The technique works in several stages. In the first step, information from multiple asynchronous sensors is exhaustively exploited in an orderly sequence by introducing the concept of nodes. Each node individually fuses position and orientation information from independent sensors separately. In this way, the nodal architecture introduces the feasibility of distinguishing partially defective measurements and thereby broadens the possibility of fusing all acceptable information. For instance, if a sensor provides an accurate position but a defective orientation measurement for a short duration, utilizing the position measurement is beneficial in lieu of discarding the entire sensor information. Next, in the second layer, the information from each node is blended in a weighted combination by employing a maximum likelihood estimator to collectively obtain the most accurate pose. Moreover, the proposed fault resilient optimal information fusion architecture incorporates an inbuilt fault isolation mechanism to discard disputed outcomes from sensors: once disputed outcomes are observed, the weighting parameters of the optimal information filter are adjusted to accommodate the fault isolation. The second contribution stems from the effectiveness evaluation of the second-layer fusion architecture when a sensor is not working correctly for a certain period or when the system receives inaccurate measurements from the sensors. In this case, an optimal information filter with fault handling capability is proposed and incorporated to obtain more robust and accurate responses from the corrupted outcomes, and a modified covariance weighting scheme is introduced for combining all possible nodes in the second layer, which is utilized to resolve the effect of erroneous measurements. The final contribution of the article is the performance comparison of the existing centralized and decentralized fusion techniques with the proposed novel decentralized fusion approach. Both existing methods perform well when the measured sensor data are faultless; otherwise, the proposed fault resilient decentralized fusion scheme works far better than the existing centralized and decentralized filter estimation. The comparison of these different fusion architectures has been carried out using experimentally collected data sets.

2. Problem Formulation

Aiming towards real-time, resilient and accurate navigation for autonomous exploration through an unknown environment, an aerial robotic vehicle equipped with multiple asynchronous real sensors, as depicted in Figure 1, will be considered as the baseline of this work, without loss of generality, since the overall framework has the merit of being platform agnostic. In this case, the considered sensor suite of the MAV contains a Velodyne Puck LITE, an Intel RealSense T265 camera, the IMU of a Pixhawk 4 flight controller, and a single UWB node. The collected point cloud from the 3D lidar is processed online to provide odometry based on [38]. Essentially, the 3D lidar odometry and the RealSense T265 camera are each capable of providing the real-time 3D pose of the MAV independently, whereas the IMU provides the measurements of the angular velocity, as well as the acceleration of the MAV. In addition, a network of five UWB nodes is set strategically around the utilized flying arena for estimating the position of the MAV based on the mounted UWB node. The overall objective is to provide a novel decentralized multi-sensor fusion architecture that can blend the information from the various real sensor measurements to obtain the most accurate pose of the MAV in real time. A schematic representation of the MAV pose is depicted in Figure 2.
The pose is described using two reference frames, namely the world frame $W = \{X_W, Y_W, Z_W\}$ and the body-fixed frame $B = \{X_B, Y_B, Z_B\}$. The body-fixed frame is attached to the MAV’s centre of mass, while the inertial frame is assumed to be attached at a point on the ground with its $X$, $Y$ and $Z$ axes directed along the East, North and Up (ENU) directions, respectively. The sensors are mounted in the body-fixed frame $B$ of the MAV and provide the information regarding its pose. The position of the MAV (essentially the origin of the body-fixed frame) is defined as $p$, described with respect to the world frame $W$, as shown in Figure 2. The orientation of the MAV with respect to the world frame can be visualized using the Euler angle representation $\{\phi, \theta, \psi\}$, denoting the roll, pitch and yaw rotations, respectively. In order to avoid the singularities associated with Euler angles, the orientation of the MAV is represented by a quaternion [39], denoted $q$. The position of the MAV, $p$, and its orientation, $q$, together designate the pose of the vehicle.

2.1. Asynchronous Sensors Associated with the Present Study

The MAV under consideration, equipped with various sensors, is presented in Figure 1. The sensors considered in the present context are: (a) the IMU of a Pixhawk 4 flight controller, which provides the acceleration $a_m$, the angular velocity $\omega_m$ and the orientation $q_{IMU}$ of the MAV; (b) the Velodyne Puck LITE 3D lidar, assisted by Lidar Odometry (LIO) [38], providing a 3D pose of the MAV denoted by $p_{LIO}, q_{LIO}$; (c) the Intel RealSense T265 visual sensor, integrated with visual odometry (VIO), providing a position and orientation denoted by $p_{VIO}, q_{VIO}$; and (d) the Ultra-Wideband (UWB) transceivers, providing the position of the MAV, denoted by $p_{UWB}$.

2.2. The MAV’s Utilized Kinematic Model

The decentralized sensor fusion architecture at its core utilizes a model-based estimation framework, where the nonlinear kinematic model of the MAV is considered as [40]:
$\dot{p} = v_t$  (1a)
$\dot{v}_t = a_t$  (1b)
$\dot{q} = \tfrac{1}{2}\, q \otimes \omega_t$  (1c)
where $p \in \mathbb{R}^{3\times1}$ denotes the position, $v_t \in \mathbb{R}^{3\times1}$ the velocity, and $q \in \mathbb{R}^{4\times1}$ the orientation of the MAV in the quaternion representation. Here, $a_t \in \mathbb{R}^{3\times1}$ represents the total acceleration of the vehicle expressed in the inertial frame of reference, whereas $\omega_t \in \mathbb{R}^{3\times1}$ denotes the body rates experienced by the vehicle. Physically, these quantities (acceleration and body rates) are characterized as inputs to the system, which are typically measured using an IMU and denoted $a_m$ and $\omega_m$. In general, the measurements from the sensors are noisy and include sensor biases as well. To establish this fact, the measurement signals are represented as:
$a_m = R_t^{T}\,(a_t - g_t) + a_{bt} + a_n$  (2a)
$\omega_m = \omega_t + \omega_{bt} + \omega_n$  (2b)
where $a_{bt} \in \mathbb{R}^{3\times1}$ denotes the accelerometer bias, $\omega_{bt} \in \mathbb{R}^{3\times1}$ the gyroscopic bias, and $a_n$ and $\omega_n$ the additive noise on the acceleration and the angular rate, respectively; $R_t \triangleq R(q) \in SO(3)$ denotes the transformation matrix from the body to the world frame, and $g_t \in \mathbb{R}^{3\times1}$ denotes the gravity vector. Moreover, it is also accounted that the bias terms $a_{bt}, \omega_{bt}$ are driven by process noise, dynamically represented as:
$\dot{a}_{bt} = a_{\omega}$  (3a)
$\dot{\omega}_{bt} = \omega_{\omega}$  (3b)
$\dot{g}_t = 0_{3\times1}$  (3c)
where $a_{\omega} \in \mathbb{R}^{3\times1}$ and $\omega_{\omega} \in \mathbb{R}^{3\times1}$ are the accelerometer and gyroscopic process noise, respectively. Rearranging Equation (2a,b), the total acceleration and body rate can be expressed as:
$a_t = R_t\,(a_m - a_{bt} - a_n) + g_t$  (4a)
$\omega_t = \omega_m - \omega_{bt} - \omega_n$  (4b)
Substituting Equations (4a,b) into (1b,c), respectively, yields:
$\dot{v}_t = R_t\,(a_m - a_{bt} - a_n) + g_t$  (5a)
$\dot{q} = \tfrac{1}{2}\, q \otimes (\omega_m - \omega_{bt} - \omega_n)$  (5b)
The above equations of motion of the MAV are expressed in compact mathematical notation as:
$\dot{X}_t = f_t(X_t, u_m, w)$  (6a)
$y_t = [\, I_{7\times7} \;\; 0_{7\times12} \,]\, X_t$  (6b)
where the state vector is denoted as $X_t = [p_t, v_t, q, a_{bt}, \omega_{bt}, g_t]^T \in \mathbb{R}^{19\times1}$, the noisy measured input based on the IMU readings is denoted as $u_m \in \mathbb{R}^{6\times1}$, and the random process noise $w \in \mathbb{R}^{6\times1}$ is defined as:
$u_m = \begin{bmatrix} a_m - a_n \\ \omega_m - \omega_n \end{bmatrix}, \qquad w = \begin{bmatrix} a_{\omega} \\ \omega_{\omega} \end{bmatrix}$
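To make the propagation step concrete, the following sketch (not from the paper; the variable names and the [w, x, y, z] quaternion convention are our assumptions) implements one Euler-discretized step of Equations (1) and (5), with the bias-corrected IMU readings acting as inputs:

```python
# Illustrative sketch (not the authors' code): one Euler-discretized
# propagation step of Equations (1) and (5). Quaternions are assumed
# unit-norm and stored as [w, x, y, z]; all variable names are our own.
import numpy as np

def quat_mult(q, r):
    """Hamilton product of two quaternions [w, x, y, z]."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def quat_to_rot(q):
    """Rotation matrix R(q) in SO(3), mapping body-frame to world-frame vectors."""
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

def propagate(p, v, q, a_b, w_b, g, a_m, w_m, dt):
    """One Euler step: bias-corrected (noise-free) IMU readings drive the
    position, velocity and quaternion; biases and gravity stay constant,
    as in Equations (3a-c)."""
    a_t = quat_to_rot(q) @ (a_m - a_b) + g   # Equation (4a), without noise
    w_t = w_m - w_b                          # Equation (4b), without noise
    p_new = p + dt * v
    v_new = v + dt * a_t
    q_new = q + dt * 0.5 * quat_mult(q, np.concatenate(([0.0], w_t)))
    return p_new, v_new, q_new / np.linalg.norm(q_new), a_b, w_b, g
```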

3. Decentralized Sensor Fusion Architecture

In order to make use of all the available information from the sensor measurements in the best possible way, a two-layered fusion architecture is considered here. A schematic overview of the proposed fusion architecture is presented in Figure 3. Since the primary focus here is to determine the MAV pose, the sensors involved in this process are capable of measuring the position and/or the orientation, with the IMU being the only exception, as it provides the acceleration and body rates of the vehicle. In the first layer, information from multiple asynchronous sensors is exhaustively exploited in an orderly sequence by introducing the concept of nodes. Each node provides a pose of the MAV, obtained by taking its position from one sensor and its orientation from another. In the second layer, the information from each node is used in a weighted combination to collectively obtain the most accurate pose of the MAV in a maximum likelihood manner. A complete and detailed overview of the fusion architecture is presented in the sequel.

3.1. First Layered Decentralized Fusion Architecture

In the first layer, we introduce the concept of decentralized nodes. In the context of the multi-sensor framework, the position and orientation of the MAV are obtained from two distinct sensors that are arbitrarily selected to construct a node. Incorporating all such possible combinations of sensor measurements, a total of seven nodes are constructed in the present setup, alphabetically denoted as node-$l$, where $l \in \{A, B, \dots, G\}$; the position and orientation information accounted for by the individual nodes is described in Table 1, and a compact encoding is sketched after Figure 4. In constructing the nodes, each decentralized node is associated with an Extended Kalman Filter (EKF) [41], which blends the measurements from the various sensors in all possible combinations. Each node receives the IMU information as the measured actuation/control input to the kinematic model associated with the Kalman filter, while the position and orientation from two distinct sensors are utilized as the measurement information. In the context of the present setup, Figure 4 describes all the possible decentralized nodes and the associated measurements [42].
Figure 4. Combination of position and orientation information flow from various sensors in constituting the different nodes of the first layer.
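For reference, the node composition of Table 1 can be encoded as simple (position sensor, orientation sensor) pairs, as shown in the sketch below (the labels and the dictionary layout are our own shorthand):

```python
# Node definitions of Table 1: each first-layer node fuses the position from
# one sensor with the orientation from another (labels are our own shorthand).
NODES = {
    "A": ("realsense", "imu"),
    "B": ("3d_lidar",  "realsense"),
    "C": ("uwb",       "imu"),
    "D": ("uwb",       "3d_lidar"),
    "E": ("3d_lidar",  "imu"),
    "F": ("realsense", "3d_lidar"),
    "G": ("uwb",       "realsense"),
}
```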
Previously, in Section 2.2, the kinematic model of the MAV was presented in continuous-time form in Equation (6a). In order to describe the decentralized nodes associated with the first layer in a compact mathematical form (in correlation with the EKF), an equivalent of Equations (6a,b) in discrete-time representation (using Euler discretization [43]) is given by:
$x_k = f_{k-1}(x_{k-1}, u_{k-1}, \omega_{k-1})$  (7a)
$y_k = h_k(x_k, v_k) = [\, I_{7\times7} \;\; 0_{7\times12} \,]\, x_k + v_k$  (7b)
where $k$ denotes the discrete time instant. It should be noted that, in order to account for model inaccuracy, we consider a process noise $\omega_k \in \mathbb{R}^{19}$. Moreover, an additional measurement noise vector $v_k \in \mathbb{R}^{7}$ is introduced in Equation (7b) to encapsulate a realistic output model under the influence of the noisy measurements provided by the real sensors. The process and measurement noise are assumed to follow Gaussian distributions:
$\omega_k \sim \mathcal{N}(0, Q_k), \qquad v_k \sim \mathcal{N}(0, R_k)$  (8)
where $Q_k$ and $R_k$ represent the process noise covariance matrix and the measurement noise covariance matrix, respectively. The mathematical operator $E$ denotes the expectation and the superscript $T$ indicates the transpose. Starting with an initial guess of the a posteriori estimate $\hat{x}_{l0}^{+} = E(x_{l0})$ and $P_{l0}^{+} = E[(x_l - \hat{x}_{l0}^{+})(x_l - \hat{x}_{l0}^{+})^{T}]$, along with the assumption in Equation (8), the $l$th node is described as a local EKF with the following prediction-correction formalism:
Prediction Steps:
$\hat{x}_{l0}^{+} = E(x_{l0}), \qquad P_{l0}^{+} = E[(x_l - \hat{x}_{l0}^{+})(x_l - \hat{x}_{l0}^{+})^{T}]$  (9)
$\hat{x}_{lk}^{-} = f_{l,k-1}(\hat{x}_{l,k-1}^{+}, u_{k-1}, 0)$  (10a)
$K_{lk} = P_{lk}^{-} H_{lk}^{T}\,(H_{lk} P_{lk}^{-} H_{lk}^{T} + R_{lk})^{-1}$  (10b)
$P_{lk}^{-} = F_{lk}\, P_{l,k-1}^{+}\, F_{lk}^{T} + L_{lk}\, Q_{lk}\, L_{lk}^{T}$  (10c)
where the ‘+’ superscript denotes an a posteriori estimate, the ‘−’ superscript denotes an a priori estimate, and the subscript $l$ indicates the corresponding variable of the $l$th node, with $l \in \{A, B, \dots, G\}$. The Jacobian matrices are defined as:
$F_{lk} = \dfrac{\partial f_{l,k-1}}{\partial x_{lk}}, \qquad L_{lk} = \dfrac{\partial f_{lk}}{\partial u_{k}}, \qquad H_{lk} = \dfrac{\partial h_{lk}}{\partial x_{lk}}$
while the input excitations $u_k$ (linear acceleration and angular velocity) used in the prediction process are essentially obtained from the IMU measurements.
Correction Steps:
$\hat{x}_{lk}^{+} = \hat{x}_{lk}^{-} + K_{lk}\,[\, y_{lk} - h_{lk}(\hat{x}_{lk}^{-}, 0) \,]$  (11a)
$P_{lk}^{+} = (I - K_{lk} H_{lk})\, P_{lk}^{-}$  (11b)
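For illustration, a minimal per-node EKF cycle mirroring Equations (9)-(11) is sketched below; the model $f$, the output map $h$ and the Jacobian routine are assumed to be supplied by the node (a hypothetical interface, not the authors' code):

```python
# Minimal sketch of one per-node EKF cycle, Equations (9)-(11); the model f,
# output map h and the Jacobian routine are assumed supplied by the node.
import numpy as np

def ekf_step(x_post, P_post, u, y, f, h, jacobians, Q, R):
    F, L, H = jacobians(x_post, u)   # F_lk, L_lk, H_lk of the lth node
    # Prediction, Equations (10a) and (10c)
    x_prior = f(x_post, u)
    P_prior = F @ P_post @ F.T + L @ Q @ L.T
    # Kalman gain, Equation (10b)
    K = P_prior @ H.T @ np.linalg.inv(H @ P_prior @ H.T + R)
    # Correction, Equations (11a,b); the bracketed term is the innovation
    innovation = y - h(x_prior)
    x_post_new = x_prior + K @ innovation
    P_post_new = (np.eye(len(x_post)) - K @ H) @ P_prior
    return x_post_new, P_post_new, innovation
```

The returned innovation is the quantity later reused for fault detection in the second layer.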
Note that, in the process of constructing the nodes, the associated EKF filters out the noisy measurements coming from the actual sensor units. It is apparent that each node is individually capable of providing information regarding the pose of the MAV. However, the accuracy of the pose information obtained from each decentralized node varies depending on the accuracy of the sensors involved in constituting the node. Hence, to obtain the best possible pose estimate, a second-layer architecture is presented in the sequel.

3.2. Second Layer Decentralized Fusion Architecture

The estimated states $\hat{x}_l$, $l \in \{A, B, \dots, G\}$, from each node are placed together in a weighted combination to jointly obtain an accurate estimate of the MAV pose, using an Optimal Information Filter (OIF) based on maximum likelihood estimation. Thus, the FR-OIF is used in the second layer, which incorporates the capability of choosing optimal weights based on the covariances obtained from the first-layer fusion. Moreover, the proposed formulation embeds a fault isolation mechanism within the OIF architecture. The collective estimate of the fused state vector, as the output of the second layer, is expressed as:
$\hat{x}_k = \sum_{l \in \{A,\dots,G\}} \bar{A}_{lk}\, \hat{x}_{lk}$  (12)
where $\bar{A}_{lk}$, $l \in \{A, B, \dots, G\}$, represents the arbitrary weight associated with the corresponding node. Here, $\hat{x}_k$ and $\hat{x}_{lk}$ denote the outcome of the second-layer fusion and the estimated state of the $l$th node from the first layer, respectively. These weight parameters are optimally determined based on a minimum variance (maximum likelihood) criterion.
Assuming that both the FR-OIF and the EKFs act as unbiased estimators, i.e., $E(\hat{x}_k) = E(x_k)$ and $E(\hat{x}_{lk}) = E(x_k)$, and taking the expectation of both sides of Equation (12) yields:
$\bar{A}_{Ak} + \bar{A}_{Bk} + \dots + \bar{A}_{Gk} = I$  (13)
where $x_k$ represents the actual state of the MAV. The estimation error of the FR-OIF, i.e., $x_k - \hat{x}_k$, is expressed as:
$\tilde{x}_k = x_k - \sum_{l \in \{A,\dots,G\}} \bar{A}_{lk}\, \hat{x}_{lk}$  (14)
Using the constraint relation from Equation (13), the estimation error in Equation (14) is rewritten as:
$\tilde{x}_k = \sum_{l \in \{A,\dots,G\}} \bar{A}_{lk}\,(x_k - \hat{x}_{lk}) = \sum_{l \in \{A,\dots,G\}} \bar{A}_{lk}\, \tilde{x}_{lk} = W_k^{T}\, \tilde{x}_L$  (15)
where $W_k = [\bar{A}_{Ak}, \bar{A}_{Bk}, \dots, \bar{A}_{Gk}]^{T}$ and $\tilde{x}_L = [\tilde{x}_{Ak}^{T}, \tilde{x}_{Bk}^{T}, \dots, \tilde{x}_{Gk}^{T}]^{T}$. Hence, the error covariance matrix of the second layer of the FR-OIF is expressed as:
$P_k = E(\tilde{x}_k \tilde{x}_k^{T}) = W_k^{T}\, \Sigma_k\, W_k$  (16)
where $\Sigma_k = E(\tilde{x}_L \tilde{x}_L^{T}) = [P_{(l,m)k}]$, $l, m \in \{A, B, \dots, G\}$, collects the cross-covariance matrices [31] between the $l$th and $m$th nodes, expressed as:
$P_{(l,m)k} = (I - K_{mk} H_{mk})\,\big(F_{lk}\, P_{(l,m)k-1}\, F_{lk}^{T} + Q_k\big)\,(I - K_{mk} H_{mk})^{T}$  (17)
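As an illustration of how $\Sigma_k$ is formed in practice, the following sketch (our own, hypothetical interface) runs one step of the pairwise recursion of Equation (17) and assembles the block matrix for the node ordering A-G, with each node state being $n$-dimensional:

```python
# Sketch (our own interface): one step of the pairwise cross-covariance
# recursion of Equation (17) and the assembly of the block matrix Sigma_k.
import numpy as np

def cross_cov_step(P_lm_prev, F_l, Q, K_m, H_m):
    """Equation (17) for the (l, m) node pair."""
    A = np.eye(H_m.shape[1]) - K_m @ H_m
    return A @ (F_l @ P_lm_prev @ F_l.T + Q) @ A.T

def assemble_sigma(P_pairs, nodes, n):
    """P_pairs[(l, m)] is the n x n cross-covariance between nodes l and m."""
    N = len(nodes)
    Sigma = np.zeros((N * n, N * n))
    for i, l in enumerate(nodes):
        for j, m in enumerate(nodes):
            Sigma[i*n:(i+1)*n, j*n:(j+1)*n] = P_pairs[(l, m)]
    return Sigma
```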
At this point, it is possible to obtain the weight parameter matrix $W_k$ by solving the following static optimization problem, as described in [31]:
$\min_{\bar{A}_{Ak},\dots,\bar{A}_{Gk}} J_k = \frac{1}{2}\, tr(P_k) = \frac{1}{2}\, tr(W_k^{T} \Sigma_k W_k) \quad$ subject to $\quad \bar{A}_{Ak} + \bar{A}_{Bk} + \dots + \bar{A}_{Gk} = I$  (18)
However, the solution of Equation (18), obtained from the classical OIF in [31], is unable to provide sufficient resiliency in the presence of inaccurate measurements from one or a group of sensors. Such corrupted sensor measurements over a short portion of the operating period are often encountered in reality: for example, in the absence of sufficient visual features in part of the surrounding environment, the RealSense camera fails to determine the MAV’s position, or, in the presence of a bright moving object in the lidar’s field of view, the lidar fails to provide an accurate pose.
In order to overcome such shortcomings, a fault isolation technique is proposed as an enhancement to the decentralized estimation scheme, where the classical OIF formulation is modified to incorporate an inbuilt fault isolation mechanism within the OIF structure. If a group of sensors is identified as corrupted for a time interval, based on a fault detection method, the corresponding nodes associated with the faulty sensors need to be eliminated from the second-layer architecture during the defective period of operation. In this article, and without loss of generality, the mechanisms for identifying the fault occurrence are not considered in depth; it is assumed that the time of the fault and the faulty node can be identified. However, straightforwardly nullifying the weight matrices linked with the corrupted nodes, without altering the other weights, violates the constraint in Equation (13). Hence, the optimization problem must be reformulated to bring in the flexibility of enabling/disabling a group of nodes online, while the MAV is operating in real-life applications. Let us consider that the $i$th node, where $i \in \{A, B, \dots, G\}$, is found to be corrupted for a short time interval. This brings an additional constraint, collectively presented as:
$\sum_{l \in \{A,\dots,G\},\; l \neq i} \bar{A}_{lk} = I, \qquad \bar{A}_{ik} = 0$  (19)
The above constraints are combined and represented in compact mathematical notation as:
$\sum_{l \in \{A,\dots,G\}} \delta_{lk}\, \bar{A}_{lk} = I$  (20)
where $\delta_{lk} \in \{0, 1\}$ is a scalar multiplying factor. We impose $\delta_{lk} = 0$ if the $l$th node is found to be corrupted, and $\delta_{lk} = 1$ otherwise. The modified optimization problem is presented as:
$\min_{\bar{A}_{Ak},\dots,\bar{A}_{Gk}} J_k = \frac{1}{2}\, tr(P_k) = \frac{1}{2}\, tr(W_k^{T} \Sigma_k W_k) \quad$ subject to $\quad W_k^{T} e_{\delta k} - I = 0$  (21)
where $e_{\delta k} = [\delta_{Ak} I, \delta_{Bk} I, \dots, \delta_{Gk} I]^{T}$. Following the solution approach for an optimization problem with an equality constraint using the Lagrange multiplier method [44], the augmented cost function is presented as:
$\bar{J}_k = \frac{1}{2}\, tr(W_k^{T} \Sigma_k W_k) + tr\big(\Lambda_k (W_k^{T} e_{\delta k} - I)\big)$  (22)
where $\Lambda_k \in \mathbb{R}^{19\times19}$ represents the Lagrange multiplier. Evaluating the necessary conditions of optimality, i.e., $\frac{\partial \bar{J}_k}{\partial W_k} = 0$ and $\frac{\partial \bar{J}_k}{\partial \Lambda_k} = 0$, yields:
$\begin{bmatrix} \Sigma_k & e_{\delta k} \\ e_{\delta k}^{T} & 0 \end{bmatrix} \begin{bmatrix} W_k \\ \Lambda_k \end{bmatrix} = \begin{bmatrix} 0 \\ I \end{bmatrix}$  (23)
From the solution of Equation (23), the optimal weight matrix $W_k$ is obtained as:
$W_k = \Sigma_k^{-1} e_{\delta k}\, (e_{\delta k}^{T} \Sigma_k^{-1} e_{\delta k})^{-1}$  (24)
Substituting the optimal weight matrix $W_k$ into Equation (12), one obtains the estimated state from the second-layer fusion architecture. The revised $W_k$, as a function of $e_{\delta k}$, enables the modified OIF to be resilient in the presence of faulty measurements from a group of sensors.
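A minimal sketch of the second-layer computation follows, assuming the stacked $\Sigma_k$ and the per-node estimates are available; it evaluates Equation (24) and the fused state of Equation (12), with the $\delta$ flags switching faulty nodes off:

```python
# Sketch of the second-layer FR-OIF update: Equation (24) for the optimal
# weights and Equation (12) for the fused state. Sigma is the stacked
# cross-covariance matrix; delta holds the per-node flags (0 = faulty).
import numpy as np

def froif_fuse(Sigma, x_nodes, delta):
    n = len(x_nodes[0])                              # state dimension per node
    I = np.eye(n)
    # e_delta stacks delta_l * I for every node, as below Equation (21)
    e_delta = np.vstack([d * I for d in delta])      # shape (N*n, n)
    S_inv_e = np.linalg.solve(Sigma, e_delta)        # Sigma^{-1} e_delta
    W = S_inv_e @ np.linalg.inv(e_delta.T @ S_inv_e) # Equation (24)
    # Fused estimate: x_hat = W^T [x_A; ...; x_G], Equation (12)
    return W.T @ np.concatenate(x_nodes)
```

With all flags set to 1, this reduces to the classical OIF solution of [31]; zeroing a flag excludes the corresponding node without violating the constraint of Equation (20).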

Fault Detection

In the presented methodology, the corruption or fault of a sensor is diagnosed via the first-layer fusion, where the Kalman filter innovation from Equation (11a) is utilized to detect the fault and locate the sensor failure in the corresponding time frame. The innovation can be written as $I_{D,l} = [\, y_{vk} - h_k(\hat{x}_{lk}, 0) \,]$. In this way, the innovation vector $I_{D,l}$ compares the estimated pose of the individual node with the pose obtained from a Vicon motion capture system. Here, the subscript $D$ denotes the detected fault, while the subscript $v$ indicates the pose obtained from the Vicon. The Vicon provides the most accurate pose, which can be considered as a ground truth. Once the innovation is computed for all the nodes, it is compared with a threshold value. The logical rule is given as:
$|I_{D,l}| \geq \Delta : \; \delta_l = 0, \;\; l\text{th node is faulty}; \qquad 0 \leq |I_{D,l}| < \Delta : \; \delta_l = 1, \;\; l\text{th node is legitimate}$  (25)
where $|\cdot|$ denotes the absolute value operator and $\Delta$ is a threshold that can be chosen arbitrarily.
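The rule of Equation (25) reduces to a simple per-node threshold test; a sketch follows, using the $\Delta = 0.4$ tolerance reported in Section 4 as the default (the dictionary interface is our own):

```python
# Sketch of the logical rule in Equation (25): a node is flagged faulty when
# the norm of its innovation I_{D,l} reaches the threshold Delta (0.4 in the
# experiments of Section 4); the dictionary interface is our own.
import numpy as np

def fault_flags(innovations, delta_threshold=0.4):
    """innovations maps a node label to its innovation vector I_{D,l};
    returns delta_l = 0 for faulty nodes and delta_l = 1 for legitimate ones."""
    return {l: 0 if np.linalg.norm(v) >= delta_threshold else 1
            for l, v in innovations.items()}
```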

4. Experimental Framework Evaluation

For the experimental evaluation of the proposed scheme, data collected from a MAV during a manual flight are utilized. The platform and its components are depicted in Figure 1. In this case, the sensor suite of the MAV consists of the Velodyne Puck LITE based lidar odometry, the Intel RealSense T265 camera for visual odometry, the IMU of a Pixhawk 4 flight controller, and a single UWB node. The details of the sensor suite are described in Section 2. The MAV was manually flown along an approximately rectangular trajectory. During the experiment, the information from the multiple sensors was recorded and subsequently used to evaluate the efficacy of the proposed FR-OIF framework. Apart from the onboard sensor suite, a Vicon motion capture system was used to provide the most accurate pose of the MAV, which is considered as the ground truth in the present context.
The fusion method works in three stages when the sensor outcomes are faulty. In the first step, the various nodes generate their equivalent estimated states and the associated innovations (as presented in Equation (11a)). The actuation input (angular velocity $\omega_m$ and linear acceleration $a_m$) for all the nodes is obtained from the IMU, which is shared among all the nodes. In the second step, the innovation terms of the various nodes are compared with a constant threshold; this process essentially identifies the defective nodes. Eventually, in the last step, the resilient fault isolation mechanism of the FR-OIF architecture eliminates the faulty measurements. The corresponding numerical values of the various system and design parameters considered in the present article are: initial guess for the error covariance $P_{l0} = I_{19\times19}$; initial state $X_{l0} = [0_{1\times6}, 1, 0_{1\times10}, 9.81]^{T}$, $l \in \{A, \dots, G\}$; process noise covariance $Q_{lk} = 1000 \times I_{19\times19}$; measurement noise covariance $R_{lk} = 10 \times I_{18\times18}$; and threshold tolerance for the innovation $\Delta = 0.4$. The experimental results, along with a comparison study based on the centralized EKF approach, are presented in the sequel.
The estimated trajectory of the MAV is presented in Figure 5. From the obtained results, it is evident that the FR-OIF provides a pose estimate that closely follows the ground truth trajectory obtained from the Vicon system. The variation of the estimated MAV position along $X, Y, Z$ is presented in Figure 6. It can be observed in Figure 5 that, in the absence of faulty measurements, the estimated trajectories obtained from the centralized fusion (CF) and the decentralized optimal information fusion (OIF) are approximately equivalent. However, in the operating region where a faulty measurement is encountered, the estimated trajectories obtained from both CF and OIF deviate significantly from the ground truth. The variation of the MAV’s orientation in the Euler angle representation is shown in Figure 7. The estimated orientation obtained from the FR-OIF is close to the Vicon-based ground truth. However, during the experiment, while the aerial robot was moving along the rectangular trajectory, it was manoeuvring with only small roll, pitch and yaw variations. Hence, the noticeable performance improvement of the FR-OIF is more prominent in the translational motion than in the rotations.
In order to demonstrate the effectiveness of the proposed fault resilient framework, momentary faults are synthetically introduced into the sensor data measured during the experiment. The evaluation is carried out in multiple scenarios depicting the presence of spurious measurements from multiple sensors, as follows:
  • Case-1: Temporal fault only in the LIO measurement between (20–30) s, while the measurements from all other sensors are unaltered.
  • Case-2: Temporal fault only in the VIO measurement between (50–60) s, while the measurements from all other sensors are unaltered.
  • Case-3: Temporal faults in both the LIO and UWB measurements, introduced at different operating points: the UWB measurement is faulty during (20–30) s, whereas the LIO reports faulty data during (35–45) s. Hence, this case evaluates multiple faults from different sensors at separate operating points.
  • Case-4: Simultaneous temporal failure of the LIO and UWB during (20–30) s.
Note that the sensors selected for reporting faulty operation and the corresponding time durations are chosen arbitrarily, without loss of generality. The variation of the estimated positions for all the cases under consideration is depicted in Figure 8, Figure 9, Figure 10 and Figure 11. It is evident from the results that the proposed FR-OIF successfully determines the MAV position in the presence of the various possible failure conditions and temporal faulty sensor measurements. More significantly, from Figure 11, one can see that the FR-OIF demonstrates its efficacy even for a simultaneous failure of multiple sensors, as described in Case-4.
The variations of the estimated position obtained from the different nodes A–G are also depicted in Figure 8, Figure 9, Figure 10 and Figure 11, corresponding to the various cases under consideration. Each node, individually employing an EKF, indeed removes the measurement noise from the estimated pose. However, the accuracy of each node depends on the measurements of the sensors associated with it. For example, in the situation described in Case-1, the faulty measurement is associated with the LIO. Hence, the estimated positions from nodes B and E, which use the LIO-based position information (refer to Table 1), are inaccurate, as shown in Figure 6. Similarly, the estimated orientations obtained from nodes D and F are erroneous during (20–30) s, the period of the faulty LIO operation, since these nodes use the LIO-based orientation as a measurement. The equivalent analysis also holds for the remaining case studies.

4.1. Comparison of FR-OIF with Centralized and Distributed Fusion Approaches

In order to demonstrate the efficiency of the proposed FR-OIF, a comparative study with the EKF-based centralized multi-sensor fusion, as well as with the decentralized optimal information fusion, is presented here. For the sake of completeness, a brief description of the EKF-based centralized fusion architecture is given in the sequel. The centralized EKF follows exactly the same prediction and correction steps described in Equations (9)–(11b), with the only exception that the measured poses from the multiple sensors are collectively taken into account. Hence, the output equation is described as:
$y_k = [\, p_{LIO},\, q_{LIO},\, p_{VIO},\, q_{VIO},\, p_{UWB},\, q_{IMU} \,]^{T} \in \mathbb{R}^{21\times1}$  (26)
The dimensions of the measurement noise covariance matrix $R_k$, the Kalman gain $K_k$ and the output gradient $H_k$ are redefined accordingly.
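For clarity, the stacked measurement of Equation (26) amounts to a simple concatenation of the individual sensor readings, as the following sketch (a hypothetical helper of our own) shows:

```python
# Sketch (hypothetical helper): the centralized measurement of Equation (26)
# is the concatenation of the individual sensor readings, giving
# 3 + 4 + 3 + 4 + 3 + 4 = 21 entries.
import numpy as np

def stack_measurement(p_lio, q_lio, p_vio, q_vio, p_uwb, q_imu):
    y = np.concatenate([p_lio, q_lio, p_vio, q_vio, p_uwb, q_imu])
    assert y.shape == (21,)
    return y
```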
Apart from the EKF-based centralized approach, a comparison study has been carried out with the existing decentralized OIF method [31]. The formulation presented in [31] considers a weighted sum of the individual nodes in the second-layer fusion, as described in Equation (18). Both the CF and the OIF are evaluated for all the scenarios presented in Cases 1–4, with faulty sensor measurements from multiple sensors. The variations of the estimated pose for the CF, OIF and FR-OIF are presented in Figure 5, Figure 6, Figure 7 and Figure 8. The comparison study reveals that, in the absence of sporadic measurements, the performance of all the methods under consideration (CF, OIF and FR-OIF) is approximately equivalent. However, in the presence of faulty measurements (Cases 1–4), it can be observed that the estimated trajectories obtained from the CF and OIF approaches deviate from the ground truth. Moreover, the presented experimental study brings out another interesting fact: the performance of the OIF closely resembles that of the CF. This is highlighted in Figure 8, which is an emphasized version of Figure 6 for a time duration of operation under faulty measurements (the fact is also evident in the other figures). In contrast, the FR-OIF is capable of providing an accurate pose estimate even in the presence of faults from multiple sensors.

4.2. Accuracy in Terms of the Root Mean Square Error

In this section, the accuracy of the fusion algorithms is evaluated in terms of the Root Mean Square Error (RMSE). In this case, the comparison is made in two steps. Firstly, the estimated poses are compared using a sliding-window RMSE, with the corresponding plots depicted in Figure 12 and Figure 13 on a logarithmic scale. Secondly, to compare the performance of the estimated poses obtained from the various nodes and fusion approaches, the single-value RMSE is computed and illustrated in Table 2 and Table 3.
Note that the RMSE values are computed by considering the Vicon-based ground truth as the reference. The RMSE comparison tables show that the second-layer FR-OIF method is superior when the sensor measurements are erroneous. In order to visualize the variation of the RMSE along the trajectory, a sliding-window logarithmic RMSE (RMSLE) is considered with a window size of 100 samples. Since the RMSLE provides the error on a logarithmic scale, a smaller (more negative) magnitude signifies a higher accuracy, as can be seen from the variation of the RMSLE presented in Figure 12 and Figure 13. The translational motion during the experiment produced significant differences in the RMSE of Table 2, as well as in Figure 12; however, since the root mean square error of the orientations varies only slightly with the efficiency of the estimators, the effect is not as visible in Table 3 and Figure 13.
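For reference, a sliding-window logarithmic RMSE of the kind plotted in Figures 12 and 13 can be computed as sketched below (our own implementation, assuming a per-axis error sequence and the 100-sample window stated above):

```python
# Sketch (our own implementation): sliding-window logarithmic RMSE for one
# axis/angle error sequence, with the 100-sample window stated in the text.
import numpy as np

def sliding_rmsle(estimate, ground_truth, window=100):
    err_sq = (np.asarray(estimate) - np.asarray(ground_truth)) ** 2
    rmse = np.array([np.sqrt(err_sq[k - window:k].mean())
                     for k in range(window, len(err_sq) + 1)])
    return np.log10(rmse)   # more negative values indicate higher accuracy
```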
The proposed FR-OIF provides excellent performance in terms of the RMSE compared to the centralized fusion approach. Moreover, based on the experimental results, it can be concluded that the proposed multi-sensor fusion is capable of providing a resilient pose estimation in the presence of faulty measurements, and it has a great potential for various practical applications involving multiple sensors with sufficient redundancy.
In the present context, the evaluation of the proposed fault resilient fusion is carried out with experimental data, where temporal sensor faults are synthetically injected for validation purposes. Part of the future work will consider evaluating the proposed FR-OIF sensor fusion framework in a field robotic experiment, where momentary sensor failures are unavoidable in the presence of dusty/smoky and dark environments. Additionally, it should be noted that, in the presented approach, and as in most fault detection approaches in the related literature, $\Delta$ has been selected ad hoc as a constant, without loss of generality; the adaptive determination of this value is also part of the future work.

5. Conclusions

In this article, a novel decentralized multi-sensor fusion framework for the resilient pose estimation of a MAV has been presented. The proposed multi-sensor fusion considers a two-layered architecture. In the first layer, a set of nodes is constructed by combining the information from different sensors through EKFs. Each node provides an estimate of the MAV pose, and these estimates are collectively integrated using the OIF to provide an optimal overall estimate. Moreover, a unique fault isolation is embedded in the classical OIF formulation to incorporate resiliency in the presence of faulty measurements. Based on the experimental study, an interesting fact has been established: without an external fault isolation mechanism, the performance of the classical OIF closely resembles that of the centralized EKF-based multi-sensor fusion approach; hence, these two methods are not sufficient to eliminate the fault accurately. In contrast, the proposed fault resilient optimal isolation technique is adequately capable of overcoming such shortcomings. Even though the FR-OIF presented in this article considers the pose estimation of a MAV, the formulation is quite generic, and it can be applied to various autonomous navigation tasks with different robotic platforms involving multiple sensors.

Author Contributions

Investigation, M.M.; Methodology, M.M., A.B.; Supervision, A.B., G.N.; Validation, M.M., A.P.; Writing—original draft, M.M., A.B., S.S.M.; Writing—review and editing, M.M., A.B. and G.N. All authors have read and agreed to the published version of the manuscript.

Funding

This work has been partially funded by the European Union’s Horizon 2020 Research and Innovation Programme under the Grant Agreement No. 101003591 NEXGEN SIMS.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Wanasinghe, T.R.; Mann, G.K.; Gosine, R.G. Decentralized cooperative localization for heterogeneous multi-robot system using split covariance intersection filter. In Proceedings of the 2014 Canadian Conference on Computer and Robot Vision, Montreal, QC, Canada, 6–9 May 2014; pp. 167–174. [Google Scholar]
  2. Durrant-Whyte, H.F.; Rao, B.; Hu, H. Toward a fully decentralized architecture for multi-sensor data fusion. In Proceedings of the IEEE International Conference on Robotics and Automation, Cincinnati, OH, USA, 13–18 May 1990; pp. 1331–1336. [Google Scholar]
  3. Rigatos, G.G. Extended Kalman and particle filtering for sensor fusion in motion control of mobile robots. Math. Comput. Simul. 2010, 81, 590–607. [Google Scholar] [CrossRef]
  4. Yazdkhasti, S.; Sasiadek, J. Multi Sensor Fusion Based on Adaptive Kalman Filtering; Springer: Warsaw, Poland, 2017; pp. 317–333. [Google Scholar] [CrossRef]
  5. Brena, R.F.; Aguileta, A.A.; Trejo, L.A.; Molino-Minero-Re, E.; Mayora, O. Choosing the Best Sensor Fusion Method: A Machine-Learning Approach. Sensors 2020, 20, 2350. [Google Scholar] [CrossRef] [PubMed]
  6. Liggins, M., II; Hall, D.; Llinas, J. Handbook of Multisensor Data Fusion: Theory and Practice; CRC Press: Boca Raton, FL, USA, 2017. [Google Scholar]
  7. Hall, D.; Chong, C.Y.; Llinas, J.; Liggins, M., II. Distributed Data Fusion for Network-Centric Operations; CRC Press: Boca Raton, FL, USA, 2017. [Google Scholar]
  8. Hoang, T.; Duong, P.; Van, N.; Viet, D.; Vinh, T. Multi-sensor perceptual system for mobile robot and sensor fusion-based localization. In Proceedings of the 2012 International Conference on Control, Automation and Information Sciences (ICCAIS), Saigon, Vietnam, 26–29 November 2012; pp. 259–264. [Google Scholar]
  9. Vasquez, B.P.E.A.; Gonzalez, R.; Matia, F.; De la Puente, P. Sensor fusion for tour-guide robot localization. IEEE Access 2018, 6, 78947–78964. [Google Scholar] [CrossRef]
  10. Mueller, M.W.; Hamer, M.; D’Andrea, R. Fusing ultra-wideband range measurements with accelerometers and rate gyroscopes for quadrocopter state estimation. In Proceedings of the 2015 IEEE International Conference on Robotics and Automation (ICRA), Seattle, WA, USA, 26–30 May 2015; pp. 1730–1736. [Google Scholar]
  11. Al Khatib, E.I.; Jaradat, M.A.; Abdel-Hafez, M.; Roigari, M. Multiple sensor fusion for mobile robot localization and navigation using the Extended Kalman Filter. In Proceedings of the 2015 10th International Symposium on Mechatronics and Its Applications (ISMA), Sharjah, United Arab Emirates, 8–10 December 2015; pp. 1–5. [Google Scholar]
  12. Cotugno, G.; D’Alfonso, L.; Lucia, W.; Muraca, P.; Pugliese, P. Extended and Unscented Kalman Filters for mobile robot localization and environment reconstruction. In Proceedings of the 21st Mediterranean Conference on Control and Automation, Platanias, Greece, 25–28 June 2013; pp. 19–26. [Google Scholar]
  13. Anjum, M.L.; Park, J.; Hwang, W.; Kwon, H.i.; Kim, J.H.; Lee, C.; Kim, K.S. Sensor data fusion using unscented kalman filter for accurate localization of mobile robots. In Proceedings of the ICCAS 2010, Gyeonggi-do, Korea, 27–30 October 2010; pp. 947–952. [Google Scholar]
  14. Ullah, I.; Shen, Y.; Su, X.; Esposito, C.; Choi, C. A localization based on unscented Kalman filter and particle filter localization algorithms. IEEE Access 2019, 8, 2233–2246. [Google Scholar] [CrossRef]
  15. D’Alfonso, L.; Lucia, W.; Muraca, P.; Pugliese, P. Mobile robot localization via EKF and UKF: A comparison based on real data. Robot. Auton. Syst. 2015, 74, 122–127. [Google Scholar] [CrossRef]
  16. Martinelli, F. Robot localization: Comparable performance of EKF and UKF in some interesting indoor settings. In Proceedings of the 2008 16th Mediterranean Conference on Control and Automation, Ajaccio, France, 25–27 June 2008; pp. 499–504. [Google Scholar]
  17. Wang, S.; Chen, L.; Gu, D.; Hu, H. An optimization based moving horizon estimation with application to localization of autonomous underwater vehicles. Robot. Auton. Syst. 2014, 62, 1581–1596. [Google Scholar] [CrossRef]
  18. Kimura, K.; Hiromachi, Y.; Nonaka, K.; Sekiguchi, K. Vehicle localization by sensor fusion of LRS measurement and odometry information based on moving horizon estimation. In Proceedings of the 2014 IEEE Conference on Control Applications (CCA), Juan Les Antibes, France, 8–10 October 2014; pp. 1306–1311. [Google Scholar]
  19. Zhou, B.; Qian, K.; Fang, F.; Ma, X.; Dai, X. Multi-sensor fusion robust localization for indoor mobile robots based on a set-membership estimator. In Proceedings of the 2015 IEEE International Conference on Cyber Technology in Automation, Control and Intelligent Systems (CYBER), Shenyang, China, 8–12 June 2015; pp. 157–162. [Google Scholar]
  20. Fang, X.; Wang, C.; Nguyen, T.M.; Xie, L. Graph optimization approach to range-based localization. IEEE Trans. Syst. Man Cybern. Syst. 2020, 51, 6830–6841. [Google Scholar] [CrossRef] [Green Version]
  21. Nguyen, T.M.; Cao, M.; Yuan, S.; Lyu, Y.; Nguyen, T.H.; Xie, L. Viral-fusion: A visual-inertial-ranging-lidar sensor fusion approach. IEEE Trans. Robot. 2021, 1–20. [Google Scholar] [CrossRef]
  22. Nebot, E.M.; Bozorg, M.; Durrant-Whyte, H.F. Decentralized architecture for asynchronous sensors. Auton. Robot. 1999, 6, 147–164. [Google Scholar] [CrossRef]
  23. Alatise, M.B.; Hancke, G.P. A review on challenges of autonomous mobile robot and sensor fusion methods. IEEE Access 2020, 8, 39830–39846. [Google Scholar] [CrossRef]
  24. Zali, A.; Bozorg, M.; Masouleh, M.T. Localization of an indoor mobile robot using decentralized data fusion. In Proceedings of the 2019 7th International Conference on Robotics and Mechatronics (ICRoM), Tehran, Iran, 20–21 November 2019; pp. 328–333. [Google Scholar]
  25. Santos, M.C.; Santana, L.V.; Martins, M.M.; Brandão, A.S.; Sarcinelli-Filho, M. Estimating and controlling uav position using rgb-d/imu data fusion with decentralized information/kalman filter. In Proceedings of the 2015 IEEE International Conference on Industrial Technology (ICIT), Seville, Spain, 17–19 March 2015; pp. 232–239. [Google Scholar]
  26. Li, H.; Nashashibi, F. Cooperative multi-vehicle localization using split covariance intersection filter. IEEE Intell. Transp. Syst. Mag. 2013, 5, 33–44. [Google Scholar] [CrossRef]
  27. Sijs, J.; Lazar, M.; Bosch, P. State fusion with unknown correlation: Ellipsoidal intersection. In Proceedings of the 2010 American Control Conference, Baltimore, MD, USA, 30 June–2 July 2010; pp. 3992–3997. [Google Scholar]
  28. Wu, M.; Ma, H.; Zhang, X. Decentralized cooperative localization with fault detection and isolation in robot teams. Sensors 2018, 18, 3360. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  29. Carrillo-Arce, L.C.; Nerurkar, E.D.; Gordillo, J.L.; Roumeliotis, S.I. Decentralized multi-robot cooperative localization using covariance intersection. In Proceedings of the 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems, Tokyo, Japan, 3–7 November 2013; pp. 1412–1417. [Google Scholar]
  30. Wang, X.; Sun, S.; Li, T.; Liu, Y. Fault tolerant multi-robot cooperative localization based on covariance union. IEEE Robot. Autom. Lett. 2021, 6, 7799–7806. [Google Scholar] [CrossRef]
  31. Sun, S.L.; Deng, Z.L. Multi-sensor optimal information fusion Kalman filter. Automatica 2004, 40, 1017–1023. [Google Scholar] [CrossRef]
  32. Bakr, M.A.; Lee, S. Distributed multisensor data fusion under unknown correlation and data inconsistency. Sensors 2017, 17, 2472. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  33. Al Hage, J.; El Najjar, M.E.; Pomorski, D. Multi-sensor fusion approach with fault detection and exclusion based on the Kullback–Leibler Divergence: Application on collaborative multi-robot system. Inf. Fusion 2017, 37, 61–76. [Google Scholar] [CrossRef]
  34. Li, T.; Corchado, J.M.; Sun, S. Partial consensus and conservative fusion of Gaussian mixtures for distributed PHD fusion. IEEE Trans. Aerosp. Electron. Syst. 2018, 55, 2150–2163. [Google Scholar] [CrossRef] [Green Version]
  35. Rekleitis, I. Cooperative Localization and Multi-Robot Exploration. Ph.D. Thesis, School of Computer Science, McGill University, Montreal, Quebec, Canada, 2003. [Google Scholar]
  36. Kshirsagar, J.; Shue, S.; Conrad, J.M. A survey of implementation of multi-robot simultaneous localization and mapping. In Proceedings of the SoutheastCon 2018, St. Petersburg, FL, USA, 19–22 April 2018; pp. 1–7. [Google Scholar]
  37. Perron, J.M.; Huang, R.; Thomas, J.; Zhang, L.; Tan, P.; Vaughan, R.T. Orbiting a moving target with multi-robot collaborative visual slam. In Proceedings of the Workshop on Multi-View Geometry in Robotics (MVIGRO), Rome, Italy, 16 July 2015; pp. 1339–1344. [Google Scholar]
  38. Shan, T.; Englot, B.; Meyers, D.; Wang, W.; Ratti, C.; Rus, D. LIO-SAM: Tightly-coupled Lidar Inertial Odometry via Smoothing and Mapping. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, NV, USA, 24 October–24 January 2020; pp. 5135–5142. [Google Scholar]
  39. Corke, P.I.; Khatib, O. Robotics, Vision and Control: Fundamental Algorithms in MATLAB; Springer: Berlin/Heidelberg, Germany, 2011; Volume 73. [Google Scholar]
  40. Sola, J. Quaternion kinematics for the error-state Kalman filter. arXiv 2017, arXiv:1711.02508. [Google Scholar]
  41. Simon, D. Optimal State Estimation: Kalman, H Infinity, and Nonlinear Approaches; John Wiley & Sons: Hoboken, NJ, USA, 2006. [Google Scholar]
  42. Givens, M.W.; Coopmans, C. A survey of inertial sensor fusion: Applications in suas navigation and data collection. In Proceedings of the 2019 International Conference on Unmanned Aircraft Systems (ICUAS), Atlanta, GA, USA, 11–14 June 2019; pp. 1054–1060. [Google Scholar]
  43. Yuksel, G.; Isik, O.R. Numerical analysis of Backward–Euler discretization for simplified magnetohydrodynamic flows. Appl. Math. Model. 2015, 39, 1889–1898. [Google Scholar] [CrossRef]
  44. Rao, S.S. Engineering Optimization: Theory and Practice; John Wiley & Sons: Hoboken, NJ, USA, 2019. [Google Scholar]
Figure 1. The aerial robot considered with the heterogeneous and asynchronous sensors for establishing the multi-layer sensor fusion architecture.
Figure 2. Co-ordinate frames: subscript W denotes the global frame and subscript B denotes the body frame.
Figure 3. The fault resilient optimal information filter with a two-layered fusion arrangement. An emphasized description of the first layer fusion is given in Figure 4.
Figure 5. Variation of the estimated MAV trajectory obtained from the Centralized Fusion (CF), the decentralized Optimal Information Fusion (OIF), the Fault Resilient (FR)-OIF, and the ground truth (position from the Vicon camera). The FR-OIF provides an estimated position of the MAV that is close to the ground truth obtained from the Vicon motion capture system.
Figure 6. Estimated positions along the X-Y-Z components obtained from the intermediate nodes (A, …, G) and the FR-OIF, compared with the CF, the OIF and the ground truth. The evaluation is carried out in the presence of a temporal fault in the LIO (Case-1). Except for the fused position obtained from the FR-OIF, all the estimated positions deviate during 20–30 s.
Figure 7. Variation of the orientation, represented using Euler angles, obtained from the intermediate nodes (A, …, G), the CF, the OIF, the FR-OIF and the ground truth. During the experiment, the translational motion is dominant over the rotational motion; as a result, the significant impact of the FR-OIF is difficult to visualize.
Figure 8. Case-1: Comparison of the estimated position obtained from the CF, the OIF, the FR-OIF and the Vicon-based ground truth, presented as an emphasized visualization of Figure 5. In the presence of a fault in the LIO during (20–30) s of operation, the centralized and classical OIF approaches are unable to recover from the failure in the estimated states, while the proposed FR-OIF successfully recovers from the faulty measurements and provides a position estimate closely comparable with the ground truth.
Figure 9. Case-2: Estimated positions along the X-Y-Z components obtained from the intermediate nodes (A, …, G) and the FR-OIF, compared with the CF, the OIF and the ground truth. The evaluation is carried out in the presence of a temporal fault in the VIO. Except for the fused position obtained from the FR-OIF, all the estimated positions deviate during 50–60 s.
Figure 10. Case-3: Estimated positions along the X-Y-Z components obtained from the intermediate nodes (A, …, G) and the FR-OIF, compared with the CF, the OIF and the ground truth. The evaluation is carried out in the presence of temporal faults in the UWB (20–30) s and the LIO (35–45) s. Except for the fused position obtained from the FR-OIF, all the estimated positions deviate in the presence of the faulty measurements.
Figure 11. Case-4: Estimated positions along the X-Y-Z components obtained from the intermediate nodes (A, …, G) and the FR-OIF, compared with the CF, the OIF and the ground truth. The evaluation is carried out in the presence of simultaneous temporal faults in the UWB and the LIO during (20–30) s. Except for the fused position obtained from the FR-OIF, all the estimated positions deviate in the presence of the faulty measurements.
Figure 12. Variation of the semi-logarithmic root mean square error for the position along X-Y-Z, obtained from the different fusion schemes (CF, OIF, FR-OIF).
Figure 13. Semi-logarithmic root mean square error for the orientation, represented in Euler angles, obtained from the different fusion schemes (CF, OIF, FR-OIF).
Table 1. Combination of sensor information used to construct the nodes of the proposed first layer.

| Pose        | A                | B                | C   | D        | E        | F                | G                |
|-------------|------------------|------------------|-----|----------|----------|------------------|------------------|
| Position    | RealSense camera | 3D lidar         | UWB | UWB      | 3D lidar | RealSense camera | UWB              |
| Orientation | IMU              | RealSense camera | IMU | 3D lidar | IMU      | 3D lidar         | RealSense camera |
Table 2. RMSE comparison for the estimation of the positions in meters.

| Axis | A      | B      | C      | D      | E      | F      | G      | CF     | OIF    | FR-OIF |
|------|--------|--------|--------|--------|--------|--------|--------|--------|--------|--------|
| X    | 0.1114 | 0.5740 | 0.2509 | 0.2509 | 0.5740 | 0.1114 | 0.2509 | 0.2503 | 0.2340 | 0.1394 |
| Y    | 0.0806 | 2.1856 | 0.0793 | 0.0793 | 2.1856 | 0.0806 | 0.0793 | 0.7564 | 0.6452 | 0.0650 |
| Z    | 0.0502 | 0.4616 | 0.1122 | 0.1122 | 0.4616 | 0.0502 | 0.1122 | 0.1751 | 0.1557 | 0.0499 |
Table 3. RMSE comparison for the estimation of the orientation in Euler angles.

| Angle | A      | B      | C      | D      | E      | F      | G      | CF     | OIF    | FR-OIF |
|-------|--------|--------|--------|--------|--------|--------|--------|--------|--------|--------|
| ψ     | 0.4667 | 0.7522 | 0.4667 | 0.7473 | 0.4667 | 0.7473 | 0.0168 | 0.5343 | 0.5154 | 0.5340 |
| θ     | 2.0663 | 2.1664 | 2.0663 | 1.9324 | 2.0663 | 1.9324 | 0.0188 | 1.9618 | 1.9618 | 1.9505 |
| ϕ     | 1.9387 | 1.9246 | 1.9387 | 1.9357 | 1.9387 | 1.9357 | 0.0066 | 1.8365 | 1.8456 | 1.8341 |