Article

Kalman Filtering for Attitude Estimation with Quaternions and Concepts from Manifold Theory

by Pablo Bernal-Polo * and Humberto Martínez-Barberá
Department of Information and Communication Engineering, University of Murcia, 30100 Murcia, Spain
* Author to whom correspondence should be addressed.
Sensors 2019, 19(1), 149; https://doi.org/10.3390/s19010149
Submission received: 5 December 2018 / Revised: 26 December 2018 / Accepted: 27 December 2018 / Published: 3 January 2019
(This article belongs to the Collection Multi-Sensor Information Fusion)

Abstract

The problem of attitude estimation is broadly addressed using the Kalman filter formalism and unit quaternions to represent attitudes. This paper is also included in this framework, but introduces a new viewpoint from which the notions of “multiplicative update” and “covariance correction step” are conceived in a natural way. Concepts from manifold theory are used to define the moments of a distribution in a manifold. In particular, the mean and the covariance matrix of a distribution of unit quaternions are defined. Non-linear versions of the Kalman filter are developed applying these definitions. A simulation is designed to test the accuracy of the developed algorithms. The results of the simulation are analyzed and the best attitude estimator is selected according to the adopted performance metric.

1. Introduction

The estimation of the mechanical state of a vehicle is a field of broad interest. A vehicle is considered a rigid body, and its state of motion is represented by four mathematical objects: two of them represent its position and velocity, and the other two represent its orientation and angular velocity. This paper focuses on the estimation of the angular state, composed of the orientation and the angular velocity.
Although there are other mathematical tools used for estimation [1], the Kalman Filter [2] has become the algorithm par excellence in this area. Because of its simplicity, the rigor and elegance of its mathematical derivation, and its recursive nature, it is very attractive for many practical applications. Its non-linear versions have been widely used in orientation estimation: the Extended Kalman Filter (EKF) and the Unscented Kalman Filter (UKF) [3]. However, problems arise from the parametrization used to represent the orientation.
The orientation of a system is represented by the rotation transformation that relates two reference frames: a reference frame anchored to that system, and an external reference frame. A thorough survey of attitude representations is provided in Reference [4]. The parametrization used to represent the rotation transformation may be singular or present discontinuities, among other issues. Table 1 summarizes the main characteristics of the most used parametrizations.
Having in mind that the special orthogonal group SO(3) has dimension three, ideally we would seek a continuous and non-singular representation expressed by 3 parameters. However, since 1964 we know that “...it is topologically impossible to have a global 3-dimensional parametrization without singular points for the rotation group” [5]. Knowing this, we would not be wrong to say that unit quaternions are the most convenient representation we have, and will have, for orientations. Reference [6] reviews the literature on attitude estimation up to 1982, when other parametrizations like Euler angles were common, and lays the foundation of modern quaternion-based attitude estimation, on which this paper is built. After that work, many others have explored this viewpoint and have demonstrated its superiority [7,8,9,10,11,12].
Quaternions are 4-dimensional entities, but only those having unit norm represent a rotation transformation. This fact poses a problem for applying the ordinary Kalman Filter, so different approaches have emerged. Since a quaternion is of dimension 4, one tends to think at first of a $4 \times 4$ covariance matrix, and of the direct application of the Kalman Filter [13]. Given that all predictions are contained in the surface defined by the unit constraint, the covariance matrix shrinks in the direction orthogonal to this surface, which leads to a singular covariance matrix after several updates. A second perspective was first taken in Reference [6] and was later named the “Multiplicative Extended Kalman Filter” [8,11,12]. In this second approach we define an “error-quaternion” that is transformed into a 3-vector. We use this vector to build the covariance matrix, and we talk about a “$3 \times 3$ representation of the quaternion covariance matrix”. However, there are still details of this adaptation that are currently being developed. Namely, the “covariance correction step” [14].
This paper presents a new viewpoint on the problem of attitude estimation using Kalman filters when the orientation is represented by unit quaternions. Noticing that unit quaternions live in a manifold (the unit sphere in $\mathbb{R}^4$), we use basic concepts from manifold theory to define the mean and covariance matrix of a distribution of unit quaternions. With these definitions we develop two estimators based on the Kalman filter (one EKF-based and another UKF-based), arriving at the concepts of “multiplicative update” and “covariance correction step” in a natural and satisfying way. The natural emergence of these ideas establishes a solid foundation for the development of general navigation algorithms. Lastly, we also analyze the accuracy of the estimations of these two estimators using simulations.
The organization of this paper is as follows. In Section 2 we review quaternion basics. We also present the new viewpoint on the definition of the quaternion mean and covariance matrix. In Section 3 we present the developed estimation algorithms. In Section 4 we define the performance metric, describe the simulation scheme, and present and discuss the results of the simulations. Finally, Section 5 concludes the paper.

2. Quaternions Describing Orientations

2.1. Quaternions

Quaternions are hypercomplex numbers composed of a real part and an imaginary part. The imaginary part is expressed using three different imaginary units $\{i, j, k\}$ satisfying the Hamilton axiom:
$$ i^2 = j^2 = k^2 = i\,j\,k = -1 . $$
A quaternion $q$ can be represented with 4 real numbers, using several notations:
$$ q \;=\; q_0 + q_1 i + q_2 j + q_3 k \;\equiv\; (q_0, q_1, q_2, q_3)^T \;\equiv\; (q_0, \mathbf{q}^T)^T . $$
We will denote quaternions with bold italic symbols ($\boldsymbol{q}$), while vectors will be denoted with bold upright symbols ($\mathbf{q}$). Vectors will be written in matrix form, and the transpose of a matrix $M$ will be denoted as $M^T$.
The quaternion product is defined by Equation (1), which produces the multiplication rule
$$ p * q = \begin{pmatrix} p_0\, q_0 - \mathbf{p} \cdot \mathbf{q} \\ p_0\, \mathbf{q} + q_0\, \mathbf{p} + \mathbf{p} \times \mathbf{q} \end{pmatrix} , $$
where $(\cdot)$ represents the usual dot product, and $(\times)$ represents the 3-vector cross product. Note that the quaternion product $(*)$ is different from the product denoted by $(\otimes)$ in Reference [4]. Given this multiplication rule, the inverse of a quaternion $q$ (the one for which $q * q^{-1} = q^{-1} * q = 1$) is given by
$$ q^{-1} = \frac{1}{\|q\|^2}\, q^* = \frac{1}{\|q\|^2}\, (q_0, -\mathbf{q}^T)^T , $$
where $q^*$ represents the complex conjugate quaternion. Note that if $q$ is a unit quaternion (a quaternion with $\|q\| = 1$), then $q^{-1} = q^*$.
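To make the algebra concrete, the following is a minimal C++ sketch of a quaternion type implementing the product, conjugate, and inverse above. The struct layout and names are illustrative choices of ours, not the implementation used in the paper.

```cpp
#include <cmath>

// Minimal quaternion type: w is the real part q0, and (x, y, z) is the
// imaginary 3-vector part q. A sketch for illustration only.
struct Quat {
    double w, x, y, z;

    // Hamilton product p * q, following the multiplication rule above:
    // (p0 q0 - p.q, p0 q + q0 p + p x q).
    Quat operator*(const Quat& q) const {
        return { w*q.w - x*q.x - y*q.y - z*q.z,
                 w*q.x + x*q.w + y*q.z - z*q.y,
                 w*q.y + y*q.w + z*q.x - x*q.z,
                 w*q.z + z*q.w + x*q.y - y*q.x };
    }

    Quat conj() const { return { w, -x, -y, -z }; }          // q*
    double norm2() const { return w*w + x*x + y*y + z*z; }   // ||q||^2

    // q^-1 = q* / ||q||^2; for a unit quaternion this reduces to conj().
    Quat inverse() const {
        const double n2 = norm2();
        return { w/n2, -x/n2, -y/n2, -z/n2 };
    }
};
```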

2.2. Quaternions Representing Rotations

Each rotation transformation is mapped to a rotation matrix $R$ and to two unit quaternions $q$ and $-q$, all of them related through
$$ R(q) = \begin{pmatrix} 1 - 2q_2^2 - 2q_3^2 & 2(q_1 q_2 - q_3 q_0) & 2(q_1 q_3 + q_2 q_0) \\ 2(q_1 q_2 + q_3 q_0) & 1 - 2q_1^2 - 2q_3^2 & 2(q_2 q_3 - q_1 q_0) \\ 2(q_1 q_3 - q_2 q_0) & 2(q_2 q_3 + q_1 q_0) & 1 - 2q_1^2 - 2q_2^2 \end{pmatrix} . $$
Note that $R(q) = R(-q)$.
Quaternions representing rotations have the form
$$ q = \big( \cos(\theta/2),\ \hat{\mathbf{q}}^T \sin(\theta/2) \big)^T , $$
where $\hat{\mathbf{q}}$ denotes the unit vector that defines the rotation axis, and $\theta$ the angle of rotation. Having this form, they satisfy the restriction
$$ \|q\|^2 = q_0^2 + q_1^2 + q_2^2 + q_3^2 = 1 . $$
This means that quaternions describing rotations live in the unit sphere of $\mathbb{R}^4$, denoted $S^3$. This space is a manifold, and some concepts regarding these mathematical objects are useful in our context. In particular, the concept of chart is of special interest.
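Continuing the illustrative Quat type above, a short sketch of the rotation $R(q)$ acting on a 3-vector (the Vec3 alias is our own):

```cpp
#include <array>
using Vec3 = std::array<double, 3>;

// Rotate a 3-vector with the matrix R(q) of the equation above.
// Assumes q is a unit quaternion.
Vec3 rotate(const Quat& q, const Vec3& v) {
    const double w = q.w, x = q.x, y = q.y, z = q.z;
    const double R[3][3] = {
        { 1 - 2*(y*y + z*z), 2*(x*y - z*w),     2*(x*z + y*w)     },
        { 2*(x*y + z*w),     1 - 2*(x*x + z*z), 2*(y*z - x*w)     },
        { 2*(x*z - y*w),     2*(y*z + x*w),     1 - 2*(x*x + y*y) } };
    return { R[0][0]*v[0] + R[0][1]*v[1] + R[0][2]*v[2],
             R[1][0]*v[0] + R[1][1]*v[1] + R[1][2]*v[2],
             R[2][0]*v[0] + R[2][1]*v[1] + R[2][2]*v[2] };
}
```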

2.3. Distributions of Unit Quaternions

When dealing with the Kalman filter, the distribution of a random variable $\mathbf{x}$ is encoded by its mean $\bar{\mathbf{x}} = E[\mathbf{x}]$ and its covariance matrix $P$ defined as
$$ P = E\big[ (\mathbf{x} - \bar{\mathbf{x}})(\mathbf{x} - \bar{\mathbf{x}})^T \big] . $$
This definition makes sense when our random variables are defined in the Euclidean space. But how do we define the covariance matrix of a random variable living in a manifold like ours? How can we define the covariance for unit quaternions if $q - \bar{q}$ does not represent a rotation? (Unit quaternions form a group under multiplication, but not under addition. This means that the addition of two unit quaternions may not result in another unit quaternion. Therefore, the addition of two unit quaternions may not represent a rotation.) What would the covariance matrix be if each quaternion were equiprobable in the unit sphere? We cannot redefine the covariance matrix, because the Kalman filter uses this precise form in its derivations, but we can take advantage of the properties of a manifold. Let us retrieve some important definitions:
Definition 1 (Homeomorphism).
A homeomorphism is a function $f : X \to Y$ between two topological spaces $X$ and $Y$ satisfying the following properties:
  • f is a bijection,
  • f is continuous,
  • its inverse function $f^{-1}$ is continuous.
If such a function exists, we say that $X$ and $Y$ are homeomorphic.
Definition 2 (Manifold).
An $n$-manifold $M^n$ is a topological space in which each point is locally homeomorphic to the Euclidean space $\mathbb{R}^n$. That is, each point $x \in M^n$ has a neighborhood $N \subseteq M^n$ for which we can define a homeomorphism $f : N \to B^n$, with $B^n$ the unit ball of $\mathbb{R}^n$.
Definition 3 (Chart).
A chart for a manifold $M^n$ is a homeomorphism $\varphi$ from an open subset $U \subseteq M^n$ to an open subset of the Euclidean space $V \subseteq \mathbb{R}^n$. That is, a chart is a function
$$ \varphi : U \subseteq M^n \to V \subseteq \mathbb{R}^n , $$
with $\varphi$ a homeomorphism. Traditionally, a chart is expressed as the pair $(U, \varphi)$.
Given these definitions we can continue our reasoning.
Reference [8] discusses four “attitude error representations”. Namely, the one we will call Orthographic (O), the Rodrigues Parameters (RP), the Modified Rodrigues Parameters (MRP), and the Rotation Vector (RV). The first three are perspective projections (called Orthographic, Gnomonic, and Stereographic, respectively). The last one is a projection called Equidistant. But all four are charts defining a homeomorphism from the manifold $S^3$ to the Euclidean space $\mathbb{R}^3$. That is, they map a point $q$ in the manifold to a point $\mathbf{e}$ in $\mathbb{R}^3$. Table 2 arranges these chart definitions, together with their domain and image. We must ensure the charts to be bijections so that they properly define a homeomorphism, and that they do not map $q$ and $-q$ to different points of $\mathbb{R}^3$, since they represent the same rotation. We achieve this by the given definitions of the domain and image for each chart.
Figure 1 shows how points in the sphere $S^2$ (subspace of the sphere $S^3$ where quaternions live) are mapped to points in $\mathbb{R}^2$ (subspace of $\mathbb{R}^3$ where the images of the charts are contained) through each one of the named charts. Since our charts are homeomorphisms, it is possible to invert the functions. Figure 2 shows how points from $\mathbb{R}^2$ are mapped to points in the manifold through the inverted charts. As pointed out in Reference [8], all four charts provide the same second-order approximation for a point $\mathbf{e} \in \mathbb{R}^3$ near the origin, to a quaternion $q \in S^3$:
$$ \varphi^{-1}(\mathbf{e}) \approx \begin{pmatrix} 1 - \|\mathbf{e}\|^2/8 \\ \mathbf{e}/2 \end{pmatrix} . $$
We should notice that, $\mathbb{R}^3$ and $S^3$ having different metrics, a chart $\varphi$ will inevitably produce a deformation of the space. However, for quaternions in the neighborhood of the identity quaternion (top of the sphere), our charts behave like the identity transformation between the imaginary part of these quaternions and the points near the origin in $\mathbb{R}^3$, as suggested by (10). This is a desirable property, as it means that the space around the identity quaternion closely resembles the Euclidean space, which is the space for which the Kalman filter is designed. But this just happens in the neighborhood of the identity quaternion. However, we can extend this property to any quaternion $\bar{q} \in S^3$ noting that any quaternion $q \in S^3$ can be expressed as a “deviation” from the first one through the quaternion product:
$$ q = \bar{q} * \delta^{\bar{q}} , $$
where $\delta^{\bar{q}}$ represents such a deviation. (This definition is arbitrary: we could have chosen to relate the quaternions through $q = \delta^{\bar{q}} * \bar{q}$, but it is important to establish one of these definitions, and then be consistent with it. However, (11) entails a computational advantage for the computation of (37).) Then, we define a chart $\varphi_{\bar{q}}$ for each quaternion $\bar{q} \in S^3$ as
$$ \mathbf{e}^{\bar{q}} = \varphi_{\bar{q}}(q) = \varphi(\delta^{\bar{q}}) , $$
where $\delta^{\bar{q}} = \bar{q}^* * q$, and where we have denoted the point of the Euclidean space mapped to the quaternion $q \in S^3$ through the chart $\varphi_{\bar{q}}$ as $\mathbf{e}^{\bar{q}}$. Then, we will have a set of charts $\{\varphi_{\bar{q}}\}_{\bar{q}}$, each one resembling the Euclidean space around a quaternion $\bar{q} \in S^3$, and mapping this last quaternion to the origin of $\mathbb{R}^3$. We will refer to the Euclidean space associated with the chart $\varphi_{\bar{q}}$ as the $\bar{q}$-centered chart. Thus, the homeomorphism $\varphi_{\bar{q}}^{-1}$ takes a point $\mathbf{e}^{\bar{q}}$ in the $\bar{q}$-centered chart and maps it to a point $q$ in the manifold through
$$ q = \varphi_{\bar{q}}^{-1}(\mathbf{e}^{\bar{q}}) = \bar{q} * \varphi^{-1}(\mathbf{e}^{\bar{q}}) . $$
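As an illustration of these centered charts, here is a hedged C++ sketch for one of the charts studied later (the Rodrigues Parameters chart of Table 2 and Appendix A.2); chartRP and chartRPInv are our own names, and the Quat/Vec3 helpers are the ones sketched above.

```cpp
#include <cmath>

// RP chart and its inverse (see Appendix A.2):
//   e = phi(delta)    = 2 delta_vec / delta_0,
//   delta = phi^-1(e) = (2, e)^T / sqrt(4 + ||e||^2).
Vec3 chartRP(const Quat& d) {
    return { 2*d.x/d.w, 2*d.y/d.w, 2*d.z/d.w };
}

Quat chartRPInv(const Vec3& e) {
    const double s = 1.0 / std::sqrt(4.0 + e[0]*e[0] + e[1]*e[1] + e[2]*e[2]);
    return { 2*s, e[0]*s, e[1]*s, e[2]*s };
}

// qbar-centered chart: map a quaternion to the chart centered at qbar,
// using delta = qbar^* * q, and back.
Vec3 toChart(const Quat& qbar, const Quat& q)   { return chartRP(qbar.conj() * q); }
Quat fromChart(const Quat& qbar, const Vec3& e) { return qbar * chartRPInv(e); }
```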
After reviewing these concepts, we can define the covariance matrix of a distribution of unit quaternions.
Given a unit quaternion $\bar{q}$ and a chart $\varphi$, we will define the expected value of a distribution of unit quaternions in the $\bar{q}$-centered chart as
$$ \bar{\mathbf{e}}^{\bar{q}} = E[\mathbf{e}^{\bar{q}}] , $$
and its covariance matrix as
$$ P^{\bar{q}} = E\big[ (\mathbf{e}^{\bar{q}} - \bar{\mathbf{e}}^{\bar{q}})(\mathbf{e}^{\bar{q}} - \bar{\mathbf{e}}^{\bar{q}})^T \big] , $$
and the probability density of each unit quaternion $q$ would be defined through the homeomorphism $q = \varphi_{\bar{q}}^{-1}(\mathbf{e}^{\bar{q}})$. Then, a distribution of unit quaternions needs four mathematical objects to be encoded: $\{\varphi, \bar{q}, \bar{\mathbf{e}}^{\bar{q}}, P^{\bar{q}}\}$. Although a distribution of unit quaternions is unique, given this definition, its expected value $\bar{\mathbf{e}}^{\bar{q}}$ and its covariance matrix $P^{\bar{q}}$ may take different values depending on the chosen quaternion $\bar{q}$ and chart $\varphi$. However, knowing that the Kalman filter is designed for the Euclidean space, it will be convenient to choose a unit quaternion $\bar{q}$ central in the distribution, so that the manifold space around it closely resembles the most significant region for the covariance matrix in the $\bar{q}$-centered chart. It is particularly convenient to choose a quaternion $\bar{q}$ such that $\bar{\mathbf{e}}^{\bar{q}} = \mathbf{0}$, so that the covariance matrix is centered at the origin of the $\bar{q}$-centered chart.

2.4. Transition Maps

At some step of the Kalman filter, we will have a distribution of unit quaternions defined in a $\bar{q}$-centered chart, and we will be interested in expressing our distribution in another $\bar{p}$-centered chart. The concept of transition map is relevant for this purpose.
Definition 4 (Transition map).
Given two charts $(U_\alpha, \varphi_\alpha)$ and $(U_\beta, \varphi_\beta)$ for a manifold $M$, with $U_{\alpha\beta} = U_\alpha \cap U_\beta$, we can define a function $\varphi_{\alpha\beta} : \varphi_\alpha(U_{\alpha\beta}) \to \varphi_\beta(U_{\alpha\beta})$ as
$$ \varphi_{\alpha\beta}(x) = \varphi_\beta\big( \varphi_\alpha^{-1}(x) \big) , $$
with $x \in \varphi_\alpha(U_{\alpha\beta})$. The function $\varphi_{\alpha\beta}$ is called a transition map. Being that $\varphi_\alpha$ and $\varphi_\beta$ are homeomorphisms, so is $\varphi_{\alpha\beta}$.
For the present case, let us consider two unit quaternions $\bar{p}$ and $\bar{q}$, both related through
$$ \bar{p} = \bar{q} * \bar{\delta} . $$
These two quaternions define the charts $\varphi_{\bar{p}}$ and $\varphi_{\bar{q}}$. We build the transition map that relates a point $\mathbf{e}^{\bar{q}}$ expressed in the $\bar{q}$-centered chart with a point $\mathbf{e}^{\bar{p}}$ expressed in the $\bar{p}$-centered chart doing
$$ \mathbf{e}^{\bar{p}} = \varphi_{\bar{p}}\big( \varphi_{\bar{q}}^{-1}(\mathbf{e}^{\bar{q}}) \big) = \varphi\big( \bar{p}^* * \bar{q} * \varphi^{-1}(\mathbf{e}^{\bar{q}}) \big) = \varphi\big( \bar{\delta}^* * \varphi^{-1}(\mathbf{e}^{\bar{q}}) \big) . $$
That is to say, first we take the point $\mathbf{e}^{\bar{q}}$ in the $\bar{q}$-centered chart, and we obtain its associated quaternion $q$ in the manifold using $\varphi_{\bar{q}}^{-1}$. Then, we transform this quaternion $q$ to a point $\mathbf{e}^{\bar{p}}$ in the $\bar{p}$-centered chart. Nevertheless, knowing the quaternion $\bar{\delta}$, we do not need to explicitly compute $q$. In fact, being able to express the same quaternion $q$ as two different deviations,
$$ q = \bar{q} * \delta^{\bar{q}} = \bar{p} * \delta^{\bar{p}} \quad\Rightarrow\quad \delta^{\bar{p}} = \bar{p}^* * q = \bar{\delta}^* * \delta^{\bar{q}} . $$
Note the equivalence of expressions (18c) and (19).
Table 3 displays the transition maps for the charts studied. The detailed derivations of these transition maps can be found in Appendix A. Figure 3 attempts to provide some insight into how points are transformed through the transition map of each chart.
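In code, the transition map needs only the quaternion $\bar{\delta}$, as the derivation shows; a sketch using the illustrative RP chart helpers above:

```cpp
// Re-express a chart point around a new center pbar = qbar * dbar,
// composing the chart maps as in the transition-map equation above.
Vec3 transition(const Quat& dbar, const Vec3& e_q) {
    return chartRP(dbar.conj() * chartRPInv(e_q));
}
```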

3. Manifold Kalman Filters

In this section we present the models adopted for the Manifold Kalman Filters (MKF), and we display the resulting algorithms.
The state of the system at a time $t$ is defined by an orientation, encoded with a unit quaternion $q_t$, and by an angular velocity $\boldsymbol{\omega}_t$. We will consider them to be random variables, and we will try to estimate their value using a Kalman filter.
Our unit quaternions $q_t \in \{ q \in \mathbb{H} : \|q\| = 1 \}$ will define the rotation transformation that relates a vector $\mathbf{v}_t$, expressed in a reference frame $S$ attached to the solid whose state we want to describe, with the same vector $\mathbf{v}'_t$ expressed in an external reference frame $S'$:
$$ \mathbf{v}'_t = R(q_t)\, \mathbf{v}_t \quad\Leftrightarrow\quad (0, \mathbf{v}'^T_t)^T = q_t * (0, \mathbf{v}^T_t)^T * q_t^* . $$
For example, if we measure an acceleration $\mathbf{a}_t$ in the reference frame $S$, the acceleration in the inertial reference frame $S'$ would be given by $\mathbf{a}'_t = R(q_t)\, \mathbf{a}_t$. This acceleration would be the one that we would have to integrate to obtain the position estimated by an accelerometer.
The vector $\boldsymbol{\omega}_t$ will define the angular velocity of the solid, measured in $S$. Note that we do not include the biases of the sensors in the state of our system. We will assume that our sensors are calibrated, so the biases are zero.
We can predict the value of the random variables that describe the state of our system through the following motion equations:
$$ \frac{d\boldsymbol{\omega}(t)}{dt} = \mathbf{q}^\omega(t) , $$
$$ \frac{dq(t)}{dt} = \frac{1}{2}\, q(t) * \boldsymbol{\omega}(t) = \frac{1}{2}\, q(t) * \big( 0, \boldsymbol{\omega}^T(t) \big)^T , $$
where $\mathbf{q}^\omega(t)$ is a random variable that represents the process noise, and is associated with the torque acting on the system, and with its inertia tensor. Its expected value at a given time $t$ will be denoted as $\bar{\mathbf{q}}^\omega_t$, and its covariance matrix will be denoted as $Q^\omega_t$.
We will assume that we have sensors giving measurements of the angular velocity $\boldsymbol{\omega}^m_t$ (which provide information about the relative change in orientation), and of a vector $\mathbf{v}^m_t$ whose value $\mathbf{v}_t$ expressed in the external reference frame $S'$ is known (this provides information about the absolute orientation). Examples of such sensors could be a gyroscope giving angular velocity measurements, an accelerometer measuring the gravity vector near the Earth surface ($\mathbf{v}_t := \mathbf{g}$), or a magnetometer measuring the Earth magnetic field ($\mathbf{v}_t := \mathbf{B}$). The measurement model relates these measurements with the variables that describe the state of the system:
$$ \mathbf{v}^m_t = R^T(q_t)\, \big( \mathbf{q}^v_t + \mathbf{v}_t \big) + \mathbf{r}^v_t , $$
$$ \boldsymbol{\omega}^m_t = \boldsymbol{\omega}_t + \mathbf{r}^\omega_t , $$
where $\mathbf{r}^\omega_t$ and $\mathbf{r}^v_t$ are random variables with zero mean and covariance matrices $R^\omega_t$ and $R^v_t$, respectively, that represent the measurement noises, and $\mathbf{q}^v_t$ is another random variable with mean $\bar{\mathbf{q}}^v_t$ and covariance matrix $Q^v_t$ representing external disturbances in the measurement of the vector $\mathbf{v}_t$. For example, it could represent accelerations other than gravity for an accelerometer, or magnetic disturbances produced by moving irons for a magnetometer.
We will assume that the measurements arrive at discrete times $\{t_n\}_n$. The format $x_{t|t_n}$ will be used to denote a variable $x$ at a time $t$, having included measurements up to a time $t_n$, with $t > t_n$. For the $n$-th time stamp, in which a measurement arrives, we will write $x_{t|n}$ for the sake of simplicity. Then, our knowledge about the state at a time $t$, having included measurements up to a time $t_n$ with $t > t_n$, is described by a distribution encoded in the collection of mathematical objects $\{\varphi, \bar{p}, \bar{\mathbf{x}}^{\bar{p}}_{t|n}, P^{\bar{p}}_{t|n}\}$, as described in Section 2.3. For the present case, $\bar{\mathbf{x}}^{\bar{p}}_{t|n} = (\bar{\mathbf{e}}^{\bar{p}}_{t|n}, \bar{\boldsymbol{\omega}}_{t|n})^T$ is the expected value of the distribution, and $P^{\bar{p}}_{t|n}$ is its $6 \times 6$ covariance matrix, both expressing the quaternion distribution in the $\bar{p}$-centered chart. Preferably, $\bar{p}$ will be a unit quaternion central in the distribution, so that the mapping of points from the $\bar{p}$-centered chart to the manifold causes minimal deformation in such distribution. The unit quaternion $\bar{q}_{t|n} = \varphi_{\bar{p}}^{-1}(\bar{\mathbf{e}}^{\bar{p}}_{t|n})$ will be our best estimation of the real quaternion $q_t$ that defines the orientation of the system with respect to the external reference frame $S'$ at time $t$.
The following subsections present the developed Kalman filters: one version based on the EKF and another version based on the UKF. The EKF is based on the linearization of the non-linear models to calculate the predicted covariance matrices. That is, the EKF approximates non-linear functions using their Jacobian matrices. To apply the EKF, our functions must be differentiable. On the other hand, the UKF is based on a deterministic sampling to approximate the distribution of our random variables. We select a minimal set of samples whose mean and covariance matrix are those of the state distribution. Then, they are transformed by the non-linear models, and the resulting set of points is used to compute the means and covariance matrices necessary to perform the Kalman update. This second approach does not need the functions to be differentiable.

3.1. Manifold Extended Kalman Filter

In this section we present the EKF-based estimator: the Manifold Extended Kalman Filter (MEKF). We offer here the main results of the more detailed derivation given in Appendix B.
A measurement
$$ \mathbf{z}_n = \begin{pmatrix} \mathbf{v}^m_n \\ \boldsymbol{\omega}^m_n \end{pmatrix} $$
arrives at time $t_n$. Our knowledge about the orientation at a previous time $t_{n-1}$ is described by a distribution expressed in the $\bar{q}_{n-1|n-1}$-centered chart. We assume that this distribution has mean
$$ \bar{\mathbf{x}}^{\bar{q}_{n-1|n-1}}_{n-1|n-1} = \begin{pmatrix} \bar{\mathbf{e}}^{\bar{q}_{n-1|n-1}}_{n-1|n-1} = \mathbf{0} \\ \bar{\boldsymbol{\omega}}_{n-1|n-1} \end{pmatrix} , $$
and covariance matrix $P^{\bar{q}_{n-1|n-1}}_{n-1|n-1}$. That is, we have an initial 4-tuple
$$ \{\varphi,\ \bar{q}_{n-1|n-1},\ \bar{\boldsymbol{\omega}}_{n-1|n-1},\ P^{\bar{q}_{n-1|n-1}}_{n-1|n-1}\} . $$
The state prediction at time $t_n$, given all the information up to $t_{n-1}$, is computed through
$$ \bar{\boldsymbol{\omega}}_{n|n-1} = \bar{\boldsymbol{\omega}}_{n-1|n-1} , $$
$$ \delta^\omega_n = \begin{pmatrix} \cos\big( \|\bar{\boldsymbol{\omega}}_{n|n-1}\|\, \Delta t_n/2 \big) \\ \dfrac{\bar{\boldsymbol{\omega}}_{n|n-1}}{\|\bar{\boldsymbol{\omega}}_{n|n-1}\|} \sin\big( \|\bar{\boldsymbol{\omega}}_{n|n-1}\|\, \Delta t_n/2 \big) \end{pmatrix} , $$
$$ \bar{q}_{n|n-1} = \bar{q}_{n-1|n-1} * \delta^\omega_n , $$
$$ F_n = \begin{pmatrix} R^T(\delta^\omega_n) & I\, \Delta t_n \\ 0 & I \end{pmatrix} , $$
$$ P^{\bar{q}_{n|n-1}}_{n|n-1} = F_n \big( P^{\bar{q}_{n-1|n-1}}_{n-1|n-1} + Q_n \big) F_n^T , $$
with
$$ Q_n = \begin{pmatrix} Q^\omega_n\, \frac{(\Delta t_n)^3}{3} & Q^\omega_n\, \frac{(\Delta t_n)^2}{2} \\ Q^\omega_n\, \frac{(\Delta t_n)^2}{2} & Q^\omega_n\, \Delta t_n \end{pmatrix} . $$
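A sketch of the attitude part of this prediction step, under the assumption (as in the equations above) that the angular velocity is constant over the step; names are illustrative:

```cpp
#include <cmath>

// Propagate the attitude mean: qbar <- qbar * delta_w, where delta_w is the
// rotation accumulated by a constant angular velocity wbar over dt.
void predictAttitude(Quat& qbar, const Vec3& wbar, double dt) {
    const double wn = std::sqrt(wbar[0]*wbar[0] + wbar[1]*wbar[1] + wbar[2]*wbar[2]);
    Quat dw{ 1.0, 0.0, 0.0, 0.0 };
    if (wn > 0.0) {
        const double a = wn*dt/2.0, s = std::sin(a)/wn;
        dw = { std::cos(a), wbar[0]*s, wbar[1]*s, wbar[2]*s };
    }
    qbar = qbar * dw;
    // The covariance prediction F_n (P + Q_n) F_n^T would follow here.
}
```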
The measurement prediction at the same time is given by
$$ \bar{\mathbf{v}}^m_{n|n-1} = R^T(\bar{q}_{n|n-1})\, \big( \bar{\mathbf{q}}^v_n + \mathbf{v}_n \big) , $$
$$ \bar{\boldsymbol{\omega}}^m_{n|n-1} = \bar{\boldsymbol{\omega}}_{n|n-1} , $$
$$ \bar{\mathbf{z}}_{n|n-1} = \begin{pmatrix} \bar{\mathbf{v}}^m_{n|n-1} \\ \bar{\boldsymbol{\omega}}^m_{n|n-1} \end{pmatrix} , $$
$$ H_n = \begin{pmatrix} [\bar{\mathbf{v}}^m_{n|n-1}]_\times & 0 \\ 0 & I \end{pmatrix} , $$
$$ S_{n|n-1} = H_n\, P^{\bar{q}_{n|n-1}}_{n|n-1}\, H_n^T + \begin{pmatrix} R^T(\bar{q}_{n|n-1})\, Q^v_n\, R(\bar{q}_{n|n-1}) + R^v_n & 0 \\ 0 & R^\omega_n \end{pmatrix} , $$
where $[\mathbf{v}]_\times$ stands for
$$ [\mathbf{v}]_\times = \begin{pmatrix} 0 & -v_3 & v_2 \\ v_3 & 0 & -v_1 \\ -v_2 & v_1 & 0 \end{pmatrix} . $$
At this point, we compute the Kalman gain $K_n$ and use it to obtain the optimal estimation of the state:
$$ K_n = P^{\bar{q}_{n|n-1}}_{n|n-1}\, H_n^T\, S_{n|n-1}^{-1} , $$
$$ \bar{\mathbf{x}}^{\bar{q}_{n|n-1}}_{n|n} = \bar{\mathbf{x}}^{\bar{q}_{n|n-1}}_{n|n-1} + K_n \big( \mathbf{z}_n - \bar{\mathbf{z}}_{n|n-1} \big) , $$
$$ P^{\bar{q}_{n|n-1}}_{n|n} = \big( I - K_n H_n \big)\, P^{\bar{q}_{n|n-1}}_{n|n-1} , $$
where $\bar{\mathbf{x}}^{\bar{q}_{n|n-1}}_{n|n-1} = (\bar{\mathbf{e}}^{\bar{q}_{n|n-1}}_{n|n-1} = \mathbf{0}, \bar{\boldsymbol{\omega}}_{n|n-1})^T$. Finally, we need to obtain the updated unit quaternion $\bar{q}_{n|n}$, and compute the mean and the covariance matrix in the $\bar{q}_{n|n}$-centered chart, so that the distribution is expressed in the same conditions as at the beginning of the iteration. The point $\bar{\mathbf{e}}^{\bar{q}_{n|n-1}}_{n|n}$ that results from (41), and that is defined in the $\bar{q}_{n|n-1}$-centered chart, corresponds to a unit quaternion in the manifold. This is the updated unit quaternion $\bar{q}_{n|n}$ which we are looking for:
$$ \bar{q}_{n|n} = \varphi_{\bar{q}_{n|n-1}}^{-1}\big( \bar{\mathbf{e}}^{\bar{q}_{n|n-1}}_{n|n} \big) = \bar{q}_{n|n-1} * \varphi^{-1}\big( \bar{\mathbf{e}}^{\bar{q}_{n|n-1}}_{n|n} \big) = \bar{q}_{n|n-1} * \bar{\delta}_n . $$
Knowing that the Kalman update (41) could produce any point in the $\bar{q}_{n|n-1}$-centered chart, we will need to “saturate” it to the closest point contained in the image of each chart. The point $\bar{\mathbf{e}}^{\bar{q}_{n|n-1}}_{n|n}$ in the $\bar{q}_{n|n-1}$-centered chart is the origin in the $\bar{q}_{n|n}$-centered chart. Then, the expected value of the state in this new chart will be given by $\bar{\mathbf{x}}^{\bar{q}_{n|n}}_{n|n} = (\bar{\mathbf{e}}^{\bar{q}_{n|n}}_{n|n} = \mathbf{0}, \bar{\boldsymbol{\omega}}_{n|n})^T$, as at the beginning of the iteration.
To update the covariance matrix we need to consider its definition (15). We want to compute $P^{\bar{q}_{n|n}}$ having $P^{\bar{q}_{n|n-1}}$ and knowing the relation $\mathbf{e}^{\bar{p}}(\mathbf{e}^{\bar{q}})$ provided by the transition maps in Table 3. Continuing with the EKF philosophy, the update for the covariance matrix will be found by linearizing $\mathbf{e}^{\bar{p}}(\mathbf{e}^{\bar{q}})$ around the point where the majority of the information is comprised (in our case, the point $\bar{\mathbf{e}}^{\bar{q}} = \bar{\mathbf{e}}^{\bar{q}_{n|n-1}}_{n|n}$):
$$ e^{\bar{p}}_i(\mathbf{e}^{\bar{q}}) = e^{\bar{p}}_i(\bar{\mathbf{e}}^{\bar{q}}) + \sum_j \frac{\partial e^{\bar{p}}_i(\mathbf{e}^{\bar{q}})}{\partial e^{\bar{q}}_j} \bigg|_{\mathbf{e}^{\bar{q}} = \bar{\mathbf{e}}^{\bar{q}}} \big( e^{\bar{q}}_j - \bar{e}^{\bar{q}}_j \big) + O\big( \|\mathbf{e}^{\bar{q}} - \bar{\mathbf{e}}^{\bar{q}}\|^2 \big) , $$
where we have used the big $O$ notation to describe the limiting behavior of the error term of the approximation as $\mathbf{e}^{\bar{q}} \to \bar{\mathbf{e}}^{\bar{q}}$. In particular, if we define
$$ (T)_{ij} = \frac{\partial e^{\bar{p}}_i(\mathbf{e}^{\bar{q}})}{\partial e^{\bar{q}}_j} \bigg|_{\mathbf{e}^{\bar{q}} = \bar{\mathbf{e}}^{\bar{q}}} , $$
then,
$$ \mathbf{e}^{\bar{p}} - \bar{\mathbf{e}}^{\bar{p}} = \mathbf{e}^{\bar{p}}(\mathbf{e}^{\bar{q}}) - \mathbf{e}^{\bar{p}}(\bar{\mathbf{e}}^{\bar{q}}) \approx T \big( \mathbf{e}^{\bar{q}} - \bar{\mathbf{e}}^{\bar{q}} \big) , $$
and the final update for the covariance matrix will be computed through
$$ P^{\bar{q}_{n|n}}_{n|n} = E\big[ (\mathbf{x}^{\bar{q}_{n|n}}_{n|n} - \bar{\mathbf{x}}^{\bar{q}_{n|n}}_{n|n})(\mathbf{x}^{\bar{q}_{n|n}}_{n|n} - \bar{\mathbf{x}}^{\bar{q}_{n|n}}_{n|n})^T \big] \approx \begin{pmatrix} T(\bar{\delta}_n) & 0 \\ 0 & I \end{pmatrix} P^{\bar{q}_{n|n-1}}_{n|n} \begin{pmatrix} T(\bar{\delta}_n) & 0 \\ 0 & I \end{pmatrix}^T . $$
Table 4 summarizes the resulting $T$-matrix for each chart, along with its application domain. A detailed derivation of these $T$-matrices can be found in Appendix C.
After the final computation we obtain the 4-tuple
$$ \{\varphi,\ \bar{q}_{n|n},\ \bar{\boldsymbol{\omega}}_{n|n},\ P^{\bar{q}_{n|n}}_{n|n}\} , $$
which is a condition equivalent to (27), in which we started the iteration.
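The distinctive final step, the multiplicative update followed by the covariance correction, can be sketched as follows for the RP chart, whose $T$-matrix (Table 4, Appendix C.2) is $T = \bar{\delta}_0 (\bar{\delta}_0 I - [\bar{\boldsymbol{\delta}}]_\times)$. Mat3 and the helper routines are our own illustrative scaffolding; only the attitude block of the covariance is shown.

```cpp
#include <array>
using Mat3 = std::array<std::array<double, 3>, 3>;

static Mat3 mul(const Mat3& A, const Mat3& B) {
    Mat3 C{};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            for (int k = 0; k < 3; ++k) C[i][j] += A[i][k] * B[k][j];
    return C;
}

static Mat3 transpose(const Mat3& A) {
    Mat3 B{};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j) B[i][j] = A[j][i];
    return B;
}

// Multiplicative update of the attitude mean, followed by the covariance
// correction step with the RP-chart T-matrix.
void chartUpdate(Quat& qbar, const Vec3& e, Mat3& Pee) {
    const Quat d = chartRPInv(e);  // delta-bar of the update
    qbar = qbar * d;               // multiplicative update
    const double d0 = d.w;
    const Mat3 T = {{ {  d0*d0,   d0*d.z, -d0*d.y },
                      { -d0*d.z,  d0*d0,   d0*d.x },
                      {  d0*d.y, -d0*d.x,  d0*d0  } }};
    Pee = mul(mul(T, Pee), transpose(T));
}
```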

3.2. Manifold Unscented Kalman Filter

In this section we present the UKF-based estimator: the Manifold Unscented Kalman Filter (MUKF).
A measurement $\mathbf{z}_n$ arrives at time $t_n$. Our knowledge about the orientation at a previous time $t_{n-1}$ is described by a distribution expressed in the $\bar{q}_{n-1|n-2}$-centered chart. This distribution is encoded in the 4-tuple
$$ \{\varphi,\ \bar{q}_{n-1|n-2},\ \bar{\mathbf{x}}^{\bar{q}_{n-1|n-2}}_{n-1|n-1},\ P^{\bar{q}_{n-1|n-2}}_{n-1|n-1}\} . $$
The first step in the UKF is to create the augmented $N \times 1$ mean $\tilde{\mathbf{x}}_n$ and the $N \times N$ covariance matrix $\tilde{P}_n$. Since the measurement equations are linear in the random variables $\mathbf{r}^\omega_t$ and $\mathbf{r}^v_t$, we can leave their covariance matrices out of the augmented one and add them later:
$$ \tilde{\mathbf{x}}_n = \begin{pmatrix} \bar{\mathbf{x}}^{\bar{q}_{n-1|n-2}}_{n-1|n-1} \\ \bar{\mathbf{q}}^\omega_n \\ \bar{\mathbf{q}}^v_n \end{pmatrix} , \qquad \tilde{P}_n = \begin{pmatrix} P^{\bar{q}_{n-1|n-2}}_{n-1|n-1} & 0 & 0 \\ 0 & Q^\omega_n & 0 \\ 0 & 0 & Q^v_n \end{pmatrix} . $$
Then, we obtain the matrix $L_n$ which satisfies $L_n L_n^T = \tilde{P}_n$, and we use it to generate the $2N+1$ sigma points $\{\mathcal{X}_j\}_{j=0}^{2N}$ as described in Reference [15]:
$$ \mathcal{X}_{i,0} = (\tilde{\mathbf{x}}_n)_i , $$
$$ \mathcal{X}_{i,j} = (\tilde{\mathbf{x}}_n)_i + \frac{(L_n)_{ij}}{\sqrt{2 W_j}} \quad \text{for } j = 1, \dots, N , $$
$$ \mathcal{X}_{i,j+N} = (\tilde{\mathbf{x}}_n)_i - \frac{(L_n)_{ij}}{\sqrt{2 W_j}} \quad \text{for } j = 1, \dots, N , $$
being $W_j = (1 - W_0)/(2N)$ for $j \neq 0$, where $W_0$ regulates the importance given to the sigma point $\mathcal{X}_0$ in the computation of the mean. These sigma points $\{\mathcal{X}_j\}_j$ are expressed in the $\bar{q}_{n-1|n-2}$-centered chart. We need to express them in the manifold before applying the evolution equations and the measurement equations:
$$ \mathcal{X}^q_j = \varphi_{\bar{q}_{n-1|n-2}}^{-1}\big( \mathcal{X}^e_j \big) = \bar{q}_{n-1|n-2} * \varphi^{-1}\big( \mathcal{X}^e_j \big) , $$
$$ \mathcal{Y}^\omega_j = \mathcal{X}^\omega_j + \mathcal{X}^{q^\omega}_j\, \Delta t_n , $$
$$ \mathcal{Y}^q_j = \mathcal{X}^q_j * \begin{pmatrix} \cos\big( \|\mathcal{Y}^\omega_j\|\, \Delta t_n/2 \big) \\ \hat{\mathcal{Y}}^\omega_j \sin\big( \|\mathcal{Y}^\omega_j\|\, \Delta t_n/2 \big) \end{pmatrix} , $$
$$ \mathcal{Z}^v_j = R^T\big( \mathcal{Y}^q_j \big) \big( \mathcal{X}^v_j + \mathbf{v}_{t_n} \big) , $$
$$ \mathcal{Z}^\omega_j = \mathcal{Y}^\omega_j , $$
where, for the $j$-th sigma point, $\mathcal{X}^e_j$ is its chart point part and $\mathcal{X}^q_j$ is the quaternion to which it is mapped, $\mathcal{X}^\omega_j$ is its angular velocity part, $\mathcal{X}^{q^\omega}_j$ is its angular velocity noise part, $\mathcal{Y}^\omega_j$ is its angular velocity prediction, $\mathcal{Y}^q_j$ is the quaternion part of its prediction (we have assumed that the angular velocity $\mathcal{Y}^\omega_j$ is constant in the time interval $[t_{n-1}, t_n)$, so that we can use (A20)), $\mathcal{X}^v_j$ is the vector process noise part, $\mathcal{Z}^v_j$ is its vector measurement prediction, $\mathcal{Z}^\omega_j$ is its angular velocity measurement prediction, and $\Delta t_n = t_n - t_{n-1}$. Note that when applying the inverse chart $\varphi^{-1}$ we will need to “saturate” $\mathcal{X}^e_j$ to the closest point in the image of $\varphi$. Having these new sigma points, we can obtain the means and covariance matrices of the distributions present in the UKF. First, defining $\mathcal{Z}_j := (\mathcal{Z}^v_j, \mathcal{Z}^\omega_j)^T$, the means are computed through
$$ \bar{q}_{n|n-1} = \frac{\sum_j W_j\, \mathcal{Y}^q_j}{\big\| \sum_j W_j\, \mathcal{Y}^q_j \big\|} , $$
$$ \bar{\boldsymbol{\omega}}_{n|n-1} = \sum_j W_j\, \mathcal{Y}^\omega_j , $$
$$ \bar{\mathbf{x}}^{\bar{q}_{n|n-1}}_{n|n-1} = \begin{pmatrix} \varphi_{\bar{q}_{n|n-1}}(\bar{q}_{n|n-1}) = \mathbf{0} \\ \bar{\boldsymbol{\omega}}_{n|n-1} \end{pmatrix} , $$
$$ \bar{\mathbf{z}}_{n|n-1} = \sum_j W_j\, \mathcal{Z}_j , $$
where we have used a variation of the result provided in Reference [16]. Namely,
$$ \bar{q} \approx \frac{\sum_j q_j}{\big\| \sum_j q_j \big\|} , $$
with $q_j \cdot q_k > 0$ for $j, k = 0, \dots, 2N$. This result is shown to minimize the fourth-order approximation of the distance defined as the sum of squared angles between the rotation transformation represented by each quaternion $q_j$ and the one represented by $\bar{q}$. This approach to computing the mean quaternion is extremely efficient, and its derivation is elegant and simple. In order to ensure that $q_j \cdot q_k > 0$ it is useful to remember the property that both $q$ and $-q$ represent the same rotation. This property is also useful for introducing the quaternions into the domain of $\varphi$ to execute the next step of the filter.
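A sketch of this quaternion averaging, with the sign flips that enforce $q_j \cdot q_k > 0$; this follows the normalized weighted sum above, not any library routine:

```cpp
#include <cmath>
#include <vector>

// Weighted quaternion mean: align all quaternions with the first one
// (q and -q represent the same rotation), then normalize the weighted sum.
Quat quatMean(const std::vector<Quat>& qs, const std::vector<double>& w) {
    Quat sum{ 0.0, 0.0, 0.0, 0.0 };
    for (std::size_t j = 0; j < qs.size(); ++j) {
        Quat q = qs[j];
        const double dot = q.w*qs[0].w + q.x*qs[0].x + q.y*qs[0].y + q.z*qs[0].z;
        if (dot < 0.0) q = { -q.w, -q.x, -q.y, -q.z };
        sum = { sum.w + w[j]*q.w, sum.x + w[j]*q.x,
                sum.y + w[j]*q.y, sum.z + w[j]*q.z };
    }
    const double n = std::sqrt(sum.norm2());
    return { sum.w/n, sum.x/n, sum.y/n, sum.z/n };
}
```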
After this, we use the obtained mean quaternion $\bar{q}_{n|n-1}$ to express each sigma point in the $\bar{q}_{n|n-1}$-centered chart, and compute the covariance matrices:
$$ \mathcal{Y}^e_j = \varphi_{\bar{q}_{n|n-1}}\big( \mathcal{Y}^q_j \big) = \varphi\big( \bar{q}^*_{n|n-1} * \mathcal{Y}^q_j \big) , $$
$$ P^{\bar{q}_{n|n-1}}_{n|n-1} = \sum_j W_j\, \mathcal{Y}_j\, \mathcal{Y}_j^T , $$
$$ P^{yz}_{n|n-1} = \sum_j W_j\, \mathcal{Y}_j \big( \mathcal{Z}_j - \bar{\mathbf{z}}_{n|n-1} \big)^T , $$
$$ S_{n|n-1} = \sum_j W_j \big( \mathcal{Z}_j - \bar{\mathbf{z}}_{n|n-1} \big) \big( \mathcal{Z}_j - \bar{\mathbf{z}}_{n|n-1} \big)^T + \begin{pmatrix} R^v_n & 0 \\ 0 & R^\omega_n \end{pmatrix} , $$
where we have denoted $\mathcal{Y}_j := (\mathcal{Y}^e_j, \mathcal{Y}^\omega_j - \bar{\boldsymbol{\omega}}_{n|n-1})^T$. Finally, we compute the UKF version of the Kalman gain $K_n$, and we use it to obtain the optimal estimation of the state:
$$ K_n = P^{yz}_{n|n-1}\, S_{n|n-1}^{-1} , $$
$$ \bar{\mathbf{x}}^{\bar{q}_{n|n-1}}_{n|n} = \bar{\mathbf{x}}^{\bar{q}_{n|n-1}}_{n|n-1} + K_n \big( \mathbf{z}_n - \bar{\mathbf{z}}_{n|n-1} \big) , $$
$$ P^{\bar{q}_{n|n-1}}_{n|n} = P^{\bar{q}_{n|n-1}}_{n|n-1} - K_n\, S_{n|n-1}\, K_n^T , $$
arriving at the same conditions in which we began the iteration, with a distribution expressed in the $\bar{q}_{n|n-1}$-centered chart, and encoded by the 4-tuple
$$ \{\varphi,\ \bar{q}_{n|n-1},\ \bar{\mathbf{x}}^{\bar{q}_{n|n-1}}_{n|n},\ P^{\bar{q}_{n|n-1}}_{n|n}\} . $$
Our best estimation for the orientation at this time is
$$ \bar{q}_{n|n} = \varphi_{\bar{q}_{n|n-1}}^{-1}\big( \bar{\mathbf{e}}^{\bar{q}_{n|n-1}}_{n|n} \big) = \bar{q}_{n|n-1} * \varphi^{-1}\big( \bar{\mathbf{e}}^{\bar{q}_{n|n-1}}_{n|n} \big) , $$
where $\bar{\mathbf{e}}^{\bar{q}_{n|n-1}}_{n|n}$ is the part of the mean $\bar{\mathbf{x}}^{\bar{q}_{n|n-1}}_{n|n}$ that represents the quaternion in the $\bar{q}_{n|n-1}$-centered chart.
Note that setting $\bar{q}_{n-1|n-2} := \bar{q}_{n-1|n-1}$ and $\bar{\mathbf{e}}^{\bar{q}_{n-1|n-2}}_{n-1|n-1} := \mathbf{0}$ at the beginning of each iteration yields the traditional version of the algorithm, where a “reset operation” is performed instead of the covariance matrix update.

4. Simulation Results

This section presents the results of the simulations used to measure the accuracy of each estimator. Simulations are chosen instead of real experiments because a real system entails an uncertainty in the measurement of the true attitude: the attitude that is used to compare with that estimated by the algorithms. There are sources of error ranging from a miscalibration of the measurement system to a possible bias in the “true attitude” produced by another attitude estimator, which makes it problematic to define an adequate metric to measure the accuracy of the algorithms. For this reason, the authors consider that using a simulation is more reliable to avoid possible biases in the results due to said sources of error. Others have performed similar types of tests [7,17]. However, the results do not seem to be statistically conclusive: only the estimations of some orientation trajectories are shown.
We perform our comparison through a simulation in which we do have an absolute knowledge of the attitude of the system: a true oracle exists in a simulation. Therefore, we can compare the real orientation with the attitude estimated by the algorithms having fed them only with simulated measurements that we obtain from such known orientations. We will extract our performance metrics from a wide set of orientation trajectories in order to obtain statistically conclusive results.
We try to answer three questions with the simulation test. The first question is, is there a chart for which we get a greater accuracy in attitude estimation? The second one is, what algorithm produces the most accurate attitude estimation, the MEKF or the MUKF? The last question stems from the fact that previous algorithms on attitude estimation, such as the Multiplicative Extended Kalman Filter, did not contemplate updating the distribution from one chart to another as done at (47b) in the MEKF. However, their estimators performed well [6,7,12]. Then the third question is, does this “chart update” imply an improvement in the accuracy of the attitude estimation?
Although a simulation has been used to compare our algorithms, these have also been tested with a real IMU. In the Supplementary Materials one can find a demonstration video, the source code used in the video, the source code used to generate the simulations, and the source code used to obtain the computational cost of the algorithms in each platform.

4.1. Performance Metric

We have already described a quaternion $q$ as a deviation from another quaternion $\bar{q}$ as $q = \bar{q} * \delta$. Now we define the instantaneous error between an estimated attitude, represented by a unit quaternion $\bar{q}$, and the real attitude, represented by the unit quaternion $q$, as the angle we have to rotate one of them to transform it into the other. That is, the angle of the rotation transformation defined by the quaternion $\delta_e$ such that $q = \bar{q} * \delta_e$. Recalling (6), this angle can be computed as:
$$ \theta_e = 2 \arccos\big( (\bar{q}^* * q)_0 \big) = 2 \arccos\big( \bar{q} \cdot q \big) , $$
having previously ensured that $\bar{q} \cdot q \geq 0$, using the fact that both $q$ and $-q$ represent the same rotation transformation.
The angle $\theta_e$ will vary along an orientation trajectory. Then, we will define the mean error in orientation estimation for a given trajectory starting at time $t = 0$ and ending at time $t = T$ as
$$ e_\theta = \frac{1}{T} \int_0^T \theta_e(t)\, dt . $$
Finally, $e_\theta$ will depend on the followed trajectory, and on the set of measurements taken. We will need to generate several orientation trajectories to obtain the mean value $\bar{e}_\theta$ and the variance $\sigma^2_{\bar{e}_\theta}$ that characterize the distribution of the error in orientation estimation $e_\theta$ for each algorithm. We will define the confidence interval for the computed $\bar{e}_\theta$ as
$$ \Big( \bar{e}_\theta - 3\, \sigma_{\bar{e}_\theta}/\sqrt{N_s}\ ,\ \bar{e}_\theta + 3\, \sigma_{\bar{e}_\theta}/\sqrt{N_s} \Big) , $$
where $N_s$ is the number of samples taken for the $\bar{e}_\theta$ computation, so that $\sigma^2_{\bar{e}_\theta}/N_s$ is the variance of the sample mean distribution.
The value of $\bar{e}_\theta$, the lower the better, gives us a measure of how well an algorithm estimates the orientation. We will consider that the performance of an algorithm A is better than the performance of another algorithm B if $\bar{e}_\theta(A) < \bar{e}_\theta(B)$ and their confidence intervals do not overlap.
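A sketch of this instantaneous error, computed directly from the 4D dot product as in the equation above:

```cpp
#include <algorithm>
#include <cmath>

// Angle of the rotation taking the estimate qbar into the true attitude q.
// The absolute value enforces qbar . q >= 0 (q and -q are the same rotation);
// clamping guards acos against rounding slightly above 1.
double attitudeError(const Quat& qbar, const Quat& q) {
    const double dot = qbar.w*q.w + qbar.x*q.x + qbar.y*q.y + qbar.z*q.z;
    return 2.0 * std::acos(std::min(1.0, std::fabs(dot)));
}
```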

4.2. Simulation Scheme

To compute the performance metrics we will need to generate a large number of simulations. Each independent simulation will consist of three steps: initialization, convergence, and estimation.
In the initialization step we set up the initial conditions according to the chosen simulation parameters. This includes generating the initial unit quaternion $q_0$ from a uniform distribution in $S^3$, setting the initial angular velocity $\boldsymbol{\omega}_0$ to zero, setting the update frequency $f_{\text{update}}$, generating the variances of the process noises $\sigma^2_\omega$ and $\sigma^2_v$ from uniform distributions in the intervals $(0, Q^\omega_{\max}]$ and $(0, Q^v_{\max}]$ respectively, and initializing the estimation algorithm. The initialization of the MEKF includes setting $\bar{q}_{0|0} = 1$, $\bar{\boldsymbol{\omega}}_{0|0} = \mathbf{0}\ \text{rad/s}$, and $P^{\bar{q}_{0|0}}_{0|0} = 10^2\, I$. On the other hand, the initialization of the MUKF includes setting $\bar{q}_{0|-1} = 1$, $\bar{\mathbf{e}}^{\bar{q}_{0|-1}}_{0|0} = \mathbf{0}$, $\bar{\boldsymbol{\omega}}_{0|0} = (1, 1, 1)^T\ \text{rad/s}$, and $P^{\bar{q}_{0|-1}}_{0|0} = 10^2\, I$. The angular velocity is not initialized to $\mathbf{0}$ in the MUKF because it has been observed that it is sometimes necessary to “break the symmetry” for the algorithm to converge; especially when we do not apply the chart update (when we perform the “reset operation”) for the RV chart. The covariance matrices that appear in both algorithms are initialized as $Q^\omega_n = I\ \text{rad}^2/\text{s}^4$, $Q^v_n = 10^{-2}\, I\ \text{p.d.u.}$ (“p.d.u.” stands for “Procedure Defined Unit”; in the present case it depends on the definition of the vector $\mathbf{v}$), $R^\omega_n = R_\omega\, I\ \text{rad}^2/\text{s}^2$, and $R^v_n = R_v\, I\ \text{p.d.u.}$, where $R_\omega$ and $R_v$ are the variances of the measurement noise that will be used in the simulation. We give this information about the measurement noise to the algorithms because it can be obtained offline, while the information about the process noise cannot. Given that a priori we cannot know how the system will behave, the values of $Q^\omega_n$ and $Q^v_n$ have been chosen according to what we understand could be normal. Choosing these values we are assuming that after a second it is normal for the angular velocity to have changed by $1\ \text{rad/s}$, and also that it is normal to find external noises added to the vector $\mathbf{v}_t$ of magnitude $10^{-1}\ \text{p.d.u.}$. For the mean values we set $\bar{\mathbf{q}}^\omega_n = \mathbf{0}\ \text{rad/s}^2$ and $\bar{\mathbf{q}}^v = \mathbf{0}\ \text{p.d.u.}$.
In the convergence step we keep the system in the initial orientation $q_0$. Simulated measurements are generated using (23) and (24). For each measurement, a different $\mathbf{v}_t$ is sampled from a uniform distribution in the unit sphere of $\mathbb{R}^3$. The values for each component of $\mathbf{q}^v_t$, $\mathbf{r}^v_t$, and $\mathbf{r}^\omega_t$ are obtained from normal distributions with zero mean and variances $\sigma^2_v$, $R_v$, and $R_\omega$ respectively. The term $R^T(q_t)$ in (23) is obtained from the true attitude $q_t$, which in the convergence step takes the value $q_t = q_0$. The term $\boldsymbol{\omega}_t$ in (24) is the true angular velocity, which in the convergence step takes the value $\boldsymbol{\omega}_t = \mathbf{0}$. The tested algorithm updates its state estimation until the inequality $\theta_e(t) < \theta_{e_0}$ is satisfied, where $\theta_e(t)$ is the value of the error (72), and $\theta_{e_0}$ is a parameter of the simulation. The convergence step could have been replaced by an initialization of the attitude estimated by the algorithm, $\bar{q}_t$, to the real value $q_t$, but then it would have also been necessary to fix a certain covariance matrix. Since the metric of the space generated by each chart is different, it is difficult to set a covariance matrix that provides the same information for each chart. It seemed more natural to the authors to allow the algorithm to find the true attitude by its own means, and for the covariance matrix to converge to a value in each case.
Finally, in the estimation step we generate a random but continuous orientation sequence using a Wiener process for the angular velocity:
$$ \boldsymbol{\omega}_t = \boldsymbol{\omega}_{t - \delta t} + \mathbf{n}_t \sqrt{\delta t} , $$
$$ q_t = q_{t - \delta t} * \begin{pmatrix} \cos\big( \|\boldsymbol{\omega}_t\|\, \delta t/2 \big) \\ \dfrac{\boldsymbol{\omega}_t}{\|\boldsymbol{\omega}_t\|} \sin\big( \|\boldsymbol{\omega}_t\|\, \delta t/2 \big) \end{pmatrix} , $$
where $\mathbf{n}_t$ is a random vector whose components are sampled from a normal distribution with zero mean and variance $\sigma^2_\omega$, and $\delta t$ is the simulation time step, which is related to the algorithm time step $\Delta t$ through $\text{dtdtsim}\ \delta t = \Delta t$, being dtdtsim an integer parameter that determines the number of simulation updates per algorithm update. Note that we multiply $\mathbf{n}_t$ by $\sqrt{\delta t}$ and not by $\delta t$. We do it this way so that the covariance matrix after $k$ steps does not depend on the simulation time step $\delta t$. In fact, after a time $T = k\, \delta t$ the covariance matrix of the angular velocity will have grown by $\Delta P_\omega = k\, I \sigma^2_\omega\, \delta t = I \sigma^2_\omega\, T$, and not by $(\Delta P_\omega)' = k\, I \sigma^2_\omega\, (\delta t)^2 = I \sigma^2_\omega\, T\, \delta t$. After each dtdtsim simulation updates, a simulated measurement is generated in the same way it was done in the convergence step, and the algorithm is updated with it. The simulation will run for a time $T_{\text{sim}} = k\, \Delta t$, where $k$ is an integer number. This way we will perform the last algorithm update at the end of the simulation. The error (72) will be evaluated after each algorithm update, and it will be accumulated through the simulation to obtain the averaged error (73). After each simulation, we will obtain a sample for the computation of $\bar{e}_\theta$ and $\sigma^2_{\bar{e}_\theta}$. We will perform $N_s$ of these simulations to obtain the confidence interval (74).
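One simulation step of this trajectory generator might be sketched as follows; sigma_w and the random-number plumbing are assumptions of the sketch:

```cpp
#include <cmath>
#include <random>

// One Wiener-process step: random-walk the angular velocity with n_t*sqrt(dt),
// then rotate the attitude assuming the velocity is constant over dt.
void simStep(Quat& q, Vec3& w, double sigma_w, double dt, std::mt19937& rng) {
    std::normal_distribution<double> n(0.0, sigma_w);
    const double s = std::sqrt(dt);
    w = { w[0] + n(rng)*s, w[1] + n(rng)*s, w[2] + n(rng)*s };
    const double wn = std::sqrt(w[0]*w[0] + w[1]*w[1] + w[2]*w[2]);
    if (wn > 0.0) {
        const double a = wn*dt/2.0, sn = std::sin(a)/wn;
        q = q * Quat{ std::cos(a), w[0]*sn, w[1]*sn, w[2]*sn };
    }
}
```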

4.3. Results

In this section we present the results of the simulations. The algorithms are tested for update frequencies $f_{\text{update}} = 1/\Delta t$ in the interval $[2, 1000]$ Hz. This range has been chosen with the possible limitations of a real system in mind. For example, the maximum data rate of a low-cost IMU is around $1000$ Hz. On the other hand, the update frequency may be limited by processing. The computational cost of each estimator has been evaluated on two platforms: an Arduino MEGA 2560, and a Raspberry Pi 3 Model B. The code has been written in C++. The resulting maximum update frequencies are presented in Figure 4, which indicates that the MEKF can be executed approximately 3 times faster than the MUKF.
Although the algorithms have been developed allowing a different $\Delta t_n$ for each update, the simulations are performed using a constant $\Delta t$ and the simulation parameters depicted in Table 5.
The parameters $\theta_{e_0}$, $T_{\text{sim}}$, dtdtsim, and $N_s$ have been chosen trying to reach a compromise between the precision of the results and the execution time of the simulation. The values for $Q^\omega_{\max}$ and $Q^v_{\max}$ have been chosen in such a way that the estimation algorithms face both normal situations ($Q^\omega_n \sim \sigma^2_\omega I$ and $Q^v_n \sim \sigma^2_v I$) and situations that were not foreseen ($Q^\omega_n \nsim \sigma^2_\omega I$ or $Q^v_n \nsim \sigma^2_v I$). A typical low-cost IMU has $R_\omega \approx 10^{-4}\ \text{rad}^2/\text{s}^2$ and $R_v \approx 10^{-4}\ g^2$. The values chosen for $R$ represent an imprecise sensor ($10^{-2}$), a normal sensor ($10^{-4}$), and a precise sensor ($10^{-6}$). The value of $W_0$ has been chosen so that all sigma points have the same importance, but very similar results, if not identical, have been obtained for other selections of $W_0$.

4.3.1. Chart Choice

The results of the simulation are presented in Figure 5. The average of the performance metric is shown along with its confidence interval for each of the selected update frequencies. The results of the MEKF and the MUKF are shown in different graphs, but drawn in the same one are the results for each chart and for a given MKF. In this way we are able to distinguish if a chart has an advantage over the others.
We observe that there is no chart that is especially advantageous. All things being equal, we would opt for the RP chart. For this chart it is not necessary to worry about the domain, since it maps $q$ and $-q$ to the same point of $\mathbb{R}^3$ and with the same $T$-matrix; nor about the image, since it is all of $\mathbb{R}^3$. In addition, the expressions of $\varphi^{-1}$ and the $T$-matrix for the MEKF are simpler for the RP chart. These computational advantages make us prefer the RP chart over the others.

4.3.2. MEKF vs. MUKF

Figure 6 also presents the results of the simulations. This time, we display on the same graph the resulting performance metrics for the MUKF and the MEKF when the RP chart is used. In this way, we can distinguish if one MKF has an advantage over the other.
We note that the MEKF performs the same as or better than the MUKF. This differs from the usual experience, in which the UKF outperforms the EKF in traditional non-linear estimation applications. The fact that the charts resemble the Euclidean space near the origin (see Section 2.3) might be favoring the MEKF, since the Jacobian matrices, used to approximate the non-linear functions, are evaluated at that point. However, the sigma points generated for the MUKF are sampled far from the origin of the chart, where the non-linearities become noticeable. We are facing a very particular scenario in which the model is approximately linear for the MEKF, while for the MUKF it is not. In addition, due to the difference in computational cost (see Figure 4), the MUKF update frequencies will generally be lower than those of the MEKF, which will imply worse accuracy in its estimations. Then, the MEKF with the RP chart seems to be our best option.

4.3.3. Chart Update vs. No Chart Update

Figure 7 presents the results of each MKF with each chart in a different graph, but displayed in the same one are the results using the “chart update” and the results without using it.
We can observe that there is almost no difference between using the “chart update” and not using it. The concepts used in this paper have helped us to understand the mechanisms of the MKF, and ultimately to arrive at the concepts of “multiplicative update” and “covariance correction step” with the $T$-matrix definition. However, it is not necessary to apply the last update (47b) in practice: we will obtain essentially the same accuracy in our estimations.

5. Conclusions

We have used concepts from manifold theory to define the expected value and the covariance matrix of a distribution in a manifold. In particular, we have defined the expected value and covariance matrix of a distribution of unit quaternions in $S^3$, the unit sphere of $\mathbb{R}^4$, using the concept of chart. These definitions have helped us to develop Kalman filters for orientation estimation, where the attitude has been represented by a unit quaternion. They have also helped us solve the problem of the “covariance correction step”. Two estimators have been developed: one based on the EKF (the MEKF), and another based on the UKF (the MUKF). The MEKF and the MUKF have been tested in simulations, and some results have been obtained. The conclusions of the simulations are:
  • There is no chart that presents a clear advantage over the others, but the RP chart has some characteristics that motivate us to prefer it.
  • The MEKF is preferable to the MUKF due to its lower computational cost and its greater accuracy in orientation estimation.
  • The “chart update” is not necessary for the MKF in practice.
Then, the MEKF with the RP chart and without applying the “chart update” is our best attitude estimator according to the adopted performance metric. This algorithm resembles the conventional “Multiplicative Extended Kalman Filter”, but we have obtained the MEKF without having to redefine any aspect of the classic Kalman filter.

Supplementary Materials

The following are available online at https://www.mdpi.com/1424-8220/19/1/149/s1: SupplementaryMaterials.zip.

Author Contributions

Conceptualization, P.B.-P.; methodology, P.B.-P.; software, P.B.-P.; validation, P.B.-P. and H.M.-B.; formal analysis, P.B.-P.; investigation, P.B.-P.; resources, H.M.-B.; data curation, P.B.-P.; writing–original draft preparation, P.B.-P.; writing–review and editing, P.B.-P. and H.M.-B.; visualization, P.B.-P.; supervision, H.M.-B.; project administration, P.B.-P. and H.M.-B.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
EKF: Extended Kalman Filter
UKF: Unscented Kalman Filter
MKF: Manifold Kalman Filter
MEKF: Manifold Extended Kalman Filter
MUKF: Manifold Unscented Kalman Filter
O: Orthographic
RP: Rodrigues Parameters
MRP: Modified Rodrigues Parameters
RV: Rotation Vector

Appendix A. Derivation of Transition Maps

This appendix contains the derivation of the transition map for each chart.

Appendix A.1. Orthographic

Using the inverse of the transformation that defines the chart, $\varphi^{-1}$,
$$ \delta^{\bar{q}} = \begin{pmatrix} \sqrt{1 - \|\mathbf{e}^{\bar{q}}\|^2/4} \\ \mathbf{e}^{\bar{q}}/2 \end{pmatrix} . $$
Introducing (A1) into (19),
$$ \delta^{\bar{p}} = \bar{\delta}^* * \delta^{\bar{q}} = \begin{pmatrix} \bar{\delta}_0 \sqrt{1 - \|\mathbf{e}^{\bar{q}}\|^2/4} + \bar{\boldsymbol{\delta}} \cdot \dfrac{\mathbf{e}^{\bar{q}}}{2} \\ \bar{\delta}_0\, \dfrac{\mathbf{e}^{\bar{q}}}{2} - \sqrt{1 - \|\mathbf{e}^{\bar{q}}\|^2/4}\; \bar{\boldsymbol{\delta}} - \bar{\boldsymbol{\delta}} \times \dfrac{\mathbf{e}^{\bar{q}}}{2} \end{pmatrix} . $$
Finally, applying the chart definition,
$$ \mathbf{e}^{\bar{p}} = 2\, \boldsymbol{\delta}^{\bar{p}} = \bar{\delta}_0\, \mathbf{e}^{\bar{q}} - \sqrt{4 - \|\mathbf{e}^{\bar{q}}\|^2}\; \bar{\boldsymbol{\delta}} - \bar{\boldsymbol{\delta}} \times \mathbf{e}^{\bar{q}} . $$

Appendix A.2. Rodrigues Parameters

Using the inverse of the transformation that defines the chart, $\varphi^{-1}$,
$$ \delta^{\bar{q}} = \frac{1}{\sqrt{4 + \|\mathbf{e}^{\bar{q}}\|^2}} \begin{pmatrix} 2 \\ \mathbf{e}^{\bar{q}} \end{pmatrix} . $$
Introducing (A4) into (19),
$$ \delta^{\bar{p}} = \bar{\delta}^* * \delta^{\bar{q}} = \frac{1}{\sqrt{4 + \|\mathbf{e}^{\bar{q}}\|^2}} \begin{pmatrix} 2\, \bar{\delta}_0 + \bar{\boldsymbol{\delta}} \cdot \mathbf{e}^{\bar{q}} \\ \bar{\delta}_0\, \mathbf{e}^{\bar{q}} - 2\, \bar{\boldsymbol{\delta}} - \bar{\boldsymbol{\delta}} \times \mathbf{e}^{\bar{q}} \end{pmatrix} . $$
Finally, applying the chart definition,
$$ \mathbf{e}^{\bar{p}} = \frac{2\, \boldsymbol{\delta}^{\bar{p}}}{\delta^{\bar{p}}_0} = \frac{2 \big( \bar{\delta}_0\, \mathbf{e}^{\bar{q}} - 2\, \bar{\boldsymbol{\delta}} - \bar{\boldsymbol{\delta}} \times \mathbf{e}^{\bar{q}} \big)}{2\, \bar{\delta}_0 + \bar{\boldsymbol{\delta}} \cdot \mathbf{e}^{\bar{q}}} . $$

Appendix A.3. Modified Rodrigues Parameters

Using the inverse of the transformation that defines the chart, $\varphi^{-1}$,
$$ \delta^{\bar{q}} = \frac{1}{16 + \|\mathbf{e}^{\bar{q}}\|^2} \begin{pmatrix} 16 - \|\mathbf{e}^{\bar{q}}\|^2 \\ 8\, \mathbf{e}^{\bar{q}} \end{pmatrix} . $$
Introducing (A7) into (19),
$$ \delta^{\bar{p}} = \bar{\delta}^* * \delta^{\bar{q}} = \frac{1}{16 + \|\mathbf{e}^{\bar{q}}\|^2} \begin{pmatrix} \bar{\delta}_0 \big( 16 - \|\mathbf{e}^{\bar{q}}\|^2 \big) + 8\, \bar{\boldsymbol{\delta}} \cdot \mathbf{e}^{\bar{q}} \\ 8\, \bar{\delta}_0\, \mathbf{e}^{\bar{q}} - \big( 16 - \|\mathbf{e}^{\bar{q}}\|^2 \big)\, \bar{\boldsymbol{\delta}} - 8\, \bar{\boldsymbol{\delta}} \times \mathbf{e}^{\bar{q}} \end{pmatrix} . $$
Finally, applying the chart definition,
$$ \mathbf{e}^{\bar{p}} = \frac{4\, \boldsymbol{\delta}^{\bar{p}}}{1 + \delta^{\bar{p}}_0} = \frac{4 \Big( 8\, \bar{\delta}_0\, \mathbf{e}^{\bar{q}} - \big( 16 - \|\mathbf{e}^{\bar{q}}\|^2 \big)\, \bar{\boldsymbol{\delta}} - 8\, \bar{\boldsymbol{\delta}} \times \mathbf{e}^{\bar{q}} \Big)}{16 + \|\mathbf{e}^{\bar{q}}\|^2 + \bar{\delta}_0 \big( 16 - \|\mathbf{e}^{\bar{q}}\|^2 \big) + 8\, \bar{\boldsymbol{\delta}} \cdot \mathbf{e}^{\bar{q}}} . $$

Appendix A.4. Rotation Vector

Using the inverse of the transformation that defines the chart, $\varphi^{-1}$,
$$ \delta^{\bar{q}} = \begin{pmatrix} \cos\big( \|\mathbf{e}^{\bar{q}}\|/2 \big) \\ \hat{\mathbf{e}}^{\bar{q}} \sin\big( \|\mathbf{e}^{\bar{q}}\|/2 \big) \end{pmatrix} . $$
Introducing (A10) into (19),
$$ \delta^{\bar{p}} = \bar{\delta}^* * \delta^{\bar{q}} = \begin{pmatrix} \bar{\delta}_0 \cos\big( \|\mathbf{e}^{\bar{q}}\|/2 \big) + \bar{\boldsymbol{\delta}} \cdot \hat{\mathbf{e}}^{\bar{q}} \sin\big( \|\mathbf{e}^{\bar{q}}\|/2 \big) \\ \bar{\delta}_0\, \hat{\mathbf{e}}^{\bar{q}} \sin\big( \|\mathbf{e}^{\bar{q}}\|/2 \big) - \cos\big( \|\mathbf{e}^{\bar{q}}\|/2 \big)\, \bar{\boldsymbol{\delta}} - \bar{\boldsymbol{\delta}} \times \hat{\mathbf{e}}^{\bar{q}} \sin\big( \|\mathbf{e}^{\bar{q}}\|/2 \big) \end{pmatrix} . $$
Finally, applying the chart definition,
$$ \mathbf{e}^{\bar{p}} = \frac{2\, \boldsymbol{\delta}^{\bar{p}}}{\|\boldsymbol{\delta}^{\bar{p}}\|} \arcsin\big( \|\boldsymbol{\delta}^{\bar{p}}\| \big) , $$
with
$$ \boldsymbol{\delta}^{\bar{p}} = \bar{\delta}_0\, \hat{\mathbf{e}}^{\bar{q}} \sin\big( \|\mathbf{e}^{\bar{q}}\|/2 \big) - \cos\big( \|\mathbf{e}^{\bar{q}}\|/2 \big)\, \bar{\boldsymbol{\delta}} - \bar{\boldsymbol{\delta}} \times \hat{\mathbf{e}}^{\bar{q}} \sin\big( \|\mathbf{e}^{\bar{q}}\|/2 \big) . $$
Note that all the transition maps are expressed using the $\bar{\delta}$ quaternion. Given that $\bar{\mathbf{e}} = \varphi(\bar{\delta})$, we could also have expressed them using $\bar{\mathbf{e}}$, which is what we get after applying the Kalman update (41). However, our choice makes the transition maps take a simpler form. In addition, having to compute the quaternion $\bar{\delta}$ to perform (43c), this choice does not imply a computational overhead.

Appendix B. Details in the Derivation of the MEKF

This appendix contains the details in the derivation of the Manifold Extended Kalman Filter used in this study.

Appendix B.1. State Prediction

This subsection contains the derivation of the equations for the state prediction.

Appendix B.1.1. Evolution of the Expected Value of the State

Taking expected values in Equation (21) we obtain
$$ \frac{d\bar{\boldsymbol{\omega}}}{dt} = \bar{\mathbf{q}}^\omega \quad\Rightarrow\quad \bar{\boldsymbol{\omega}}(t) = \bar{\boldsymbol{\omega}}_0 + \bar{\mathbf{q}}^\omega_t\, t , $$
with $\bar{\mathbf{q}}^\omega_t$ the expected value of the random variable $\mathbf{q}^\omega$ at time $t$. Taking $\bar{\boldsymbol{\omega}}_0 = \bar{\boldsymbol{\omega}}_{n-1|n-1}$ we arrive at
$$ \bar{\boldsymbol{\omega}}_{t|n-1} = \bar{\boldsymbol{\omega}}_{n-1|n-1} + \bar{\mathbf{q}}^\omega_t\, t . $$
On the other hand, approximating (22) with its Taylor series up to first order around the current state $(\bar{q}, \bar{\boldsymbol{\omega}})$, and taking its expected value, we obtain
$$ E\left[ \frac{dq(t)}{dt} \right] \approx \frac{1}{2}\, \bar{q}(t) * \bar{\boldsymbol{\omega}}(t) . $$
This differential equation has no general closed solution. But if we assume that the expected value of the process noise $\bar{\mathbf{q}}^\omega(t)$ is zero when $t \in (t_{n-1}, t_n)$, so that $\bar{\boldsymbol{\omega}}(t)$ is constant in that interval, then we will have the matrix differential equation
$$ \frac{d\bar{q}(t)}{dt} = \check{\Omega}_n\, \bar{q}(t) , $$
with
$$ \check{\Omega}_n := \frac{1}{2} \begin{pmatrix} 0 & -\bar{\omega}_1 & -\bar{\omega}_2 & -\bar{\omega}_3 \\ \bar{\omega}_1 & 0 & \bar{\omega}_3 & -\bar{\omega}_2 \\ \bar{\omega}_2 & -\bar{\omega}_3 & 0 & \bar{\omega}_1 \\ \bar{\omega}_3 & \bar{\omega}_2 & -\bar{\omega}_1 & 0 \end{pmatrix}_{n|n-1} . $$
This differential equation has the solution
$$ \bar{q}(t) = e^{\check{\Omega} t}\, \bar{q}_0 , $$
where $\bar{q}_0$ represents the initial conditions. After taking $\bar{q}_0 = \bar{q}_{n-1|n-1}$, we obtain the prediction $\bar{q}_{t|n-1}$, which can be expressed using the quaternion product as
$$ \bar{q}_{t|n-1} = \bar{q}_{n-1|n-1} * \delta^\omega_t = \bar{q}_{n-1|n-1} * \begin{pmatrix} \cos\big( \|\bar{\boldsymbol{\omega}}_{t|n-1}\|\, \Delta t/2 \big) \\ \dfrac{\bar{\boldsymbol{\omega}}_{t|n-1}}{\|\bar{\boldsymbol{\omega}}_{t|n-1}\|} \sin\big( \|\bar{\boldsymbol{\omega}}_{t|n-1}\|\, \Delta t/2 \big) \end{pmatrix} , $$
with $\Delta t = t - t_{n-1}$.

Appendix B.1.2. Evolution of the State Covariance Matrix

For a continuous nonlinear system of the form
$$ \frac{d\mathbf{x}}{dt} = \mathbf{f}(\mathbf{x}, t) + \mathbf{g}(\mathbf{q}) , $$
we know [18] that the covariance matrix satisfies the following differential equation:
$$ \frac{dP}{dt} = F P + P F^T + G Q G^T , $$
where $F = \frac{\partial \mathbf{f}}{\partial \mathbf{x}}$, and $G = \frac{\partial \mathbf{g}}{\partial \mathbf{q}}$. This is so because the evolution equation for $\Delta\mathbf{x} = \mathbf{x} - \bar{\mathbf{x}}$ is approximately given by:
$$ \frac{d\mathbf{x}}{dt} \approx \frac{d\bar{\mathbf{x}}}{dt} + \frac{\partial \mathbf{f}}{\partial \mathbf{x}} \bigg|_{\mathbf{x} = \bar{\mathbf{x}}} \big( \mathbf{x} - \bar{\mathbf{x}} \big) + \frac{\partial \mathbf{g}}{\partial \mathbf{q}} \bigg|_{\mathbf{q} = \bar{\mathbf{q}}} \big( \mathbf{q} - \bar{\mathbf{q}} \big) , $$
and $P$ is defined as $P = E[\Delta\mathbf{x}\, (\Delta\mathbf{x})^T]$. However, we have a different definition for $P$:
$$ P = E\left[ \begin{pmatrix} \mathbf{e}^{\bar{q}} - \bar{\mathbf{e}}^{\bar{q}} \\ \boldsymbol{\omega} - \bar{\boldsymbol{\omega}} \end{pmatrix} \begin{pmatrix} \mathbf{e}^{\bar{q}} - \bar{\mathbf{e}}^{\bar{q}} \\ \boldsymbol{\omega} - \bar{\boldsymbol{\omega}} \end{pmatrix}^T \right] . $$
Then we need to find the evolution equation for $\mathbf{e}^{\bar{q}}$. Recall that we are assuming $\bar{\mathbf{e}}^{\bar{q}} = \mathbf{0}$ at the beginning of the iteration. Knowing that any quaternion in the unit sphere can be expressed as a deviation from a central quaternion $\bar{q}$ as $q = \bar{q} * \delta$, and using the differential Equations (22) and (A16), we can find a differential equation for the quaternion $\delta$:
$$ q = \bar{q} * \delta \quad\Rightarrow\quad \dot{q} = \dot{\bar{q}} * \delta + \bar{q} * \dot{\delta} \quad\Rightarrow\quad \frac{1}{2}\, q * \boldsymbol{\omega} \approx \frac{1}{2}\, \bar{q} * \bar{\boldsymbol{\omega}} * \delta + \bar{q} * \dot{\delta} , $$
where a dot over a symbol represents the time derivative, and we have omitted the time dependence. Isolating the time derivative $\dot{\delta}$,
$$ \begin{aligned} \dot{\delta} &\approx \frac{1}{2}\, \bar{q}^* * q * \boldsymbol{\omega} - \frac{1}{2}\, \bar{q}^* * \bar{q} * \bar{\boldsymbol{\omega}} * \delta = \\ &= \frac{1}{2} \big( \delta * \boldsymbol{\omega} - \bar{\boldsymbol{\omega}} * \delta \big) = \\ &= \frac{1}{2} \left[ \begin{pmatrix} \delta_0 \\ \boldsymbol{\delta} \end{pmatrix} * \begin{pmatrix} 0 \\ \boldsymbol{\omega} \end{pmatrix} - \begin{pmatrix} 0 \\ \bar{\boldsymbol{\omega}} \end{pmatrix} * \begin{pmatrix} \delta_0 \\ \boldsymbol{\delta} \end{pmatrix} \right] = \\ &= \frac{1}{2} \begin{pmatrix} -(\boldsymbol{\omega} - \bar{\boldsymbol{\omega}}) \cdot \boldsymbol{\delta} \\ \delta_0\, (\boldsymbol{\omega} - \bar{\boldsymbol{\omega}}) - (\boldsymbol{\omega} + \bar{\boldsymbol{\omega}}) \times \boldsymbol{\delta} \end{pmatrix} = \\ &= \frac{1}{2} \begin{pmatrix} -\Delta\boldsymbol{\omega} \cdot \boldsymbol{\delta} \\ \delta_0\, \Delta\boldsymbol{\omega} - (2\bar{\boldsymbol{\omega}} + \Delta\boldsymbol{\omega}) \times \boldsymbol{\delta} \end{pmatrix} . \end{aligned} $$
Knowing that, for each of our charts, the $\delta$ quaternion can be approximated by (10), we can obtain an approximate differential equation for a point $\mathbf{e}$ expressed in the $\bar{q}$-centered chart. Note that we have not explicitly denoted $\mathbf{e}^{\bar{q}}$ or $\delta^{\bar{q}}$. This will be assumed implicitly, since these quantities will always be expressed in the $\bar{q}$-centered chart in this appendix. Using the chain rule for the time derivative and expression (A26e),
$$ \dot{e}_i = \sum_j \frac{\partial e_i}{\partial \delta_j}\, \dot{\delta}_j \approx \sum_j 2\, \delta_{ij}\, \dot{\delta}_j \quad\Rightarrow\quad \dot{\mathbf{e}} \approx \delta_0\, \Delta\boldsymbol{\omega} - (2\bar{\boldsymbol{\omega}} + \Delta\boldsymbol{\omega}) \times \boldsymbol{\delta} \approx \Big( 1 - \frac{\|\mathbf{e}\|^2}{8} \Big) \Delta\boldsymbol{\omega} - (2\bar{\boldsymbol{\omega}} + \Delta\boldsymbol{\omega}) \times \frac{\mathbf{e}}{2} . $$
Then, the first-order approximation to the differential Equation (A27c) would be
$$ \dot{\mathbf{e}} \approx \Delta\boldsymbol{\omega} - \bar{\boldsymbol{\omega}} \times \mathbf{e} . $$
On the other hand, combining Equations (21) and (A14) we obtain
$$ \frac{d\, \Delta\boldsymbol{\omega}}{dt} = \frac{d(\boldsymbol{\omega} - \bar{\boldsymbol{\omega}})}{dt} = \mathbf{q}^\omega - \bar{\mathbf{q}}^\omega =: \Delta\mathbf{q}^\omega . $$
Summarizing,
$$ \frac{d}{dt} \begin{pmatrix} \mathbf{e} \\ \Delta\boldsymbol{\omega} \end{pmatrix} \approx \begin{pmatrix} -[\bar{\boldsymbol{\omega}}]_\times & I \\ 0 & 0 \end{pmatrix} \begin{pmatrix} \mathbf{e} \\ \Delta\boldsymbol{\omega} \end{pmatrix} + \begin{pmatrix} \mathbf{0} \\ \Delta\mathbf{q}^\omega \end{pmatrix} , $$
therefore the matrices $F$, $G$, and $Q$ in (A22) are in our case
$$ F = \begin{pmatrix} -[\bar{\boldsymbol{\omega}}]_\times & I \\ 0 & 0 \end{pmatrix} , \qquad G = I , \qquad Q = \begin{pmatrix} 0 & 0 \\ 0 & E[\Delta\mathbf{q}^\omega\, (\Delta\mathbf{q}^\omega)^T] \end{pmatrix} . $$
We are now in a position to solve the differential Equation (A22). Let us consider its homogeneous version first:
$$ \frac{dP_H}{dt} = F P_H + P_H F^T , $$
which has as solution
$$ P_H = e^{F t}\, C_0\, e^{F^T t} . $$
Taking into account the definition of the matrix exponential, and after computing the powers of $F$, we obtain
$$ e^{F t} = \begin{pmatrix} \sum_{n=0}^{\infty} \frac{(-\Omega)^n t^n}{n!} & \sum_{n=1}^{\infty} \frac{(-\Omega)^{n-1} t^n}{n!} \\ 0 & I \end{pmatrix} \approx \begin{pmatrix} R^T(\delta^\omega) & I\, t \\ 0 & I \end{pmatrix} , $$
where we have denoted $\Omega = [\bar{\boldsymbol{\omega}}]_\times$, and $\delta^\omega = \big( \cos(\|\bar{\boldsymbol{\omega}}\|\, t/2),\ \frac{\bar{\boldsymbol{\omega}}^T}{\|\bar{\boldsymbol{\omega}}\|} \sin(\|\bar{\boldsymbol{\omega}}\|\, t/2) \big)^T$. We have also assumed that $t$ takes small values, so we can approximate the infinite sums truncating them at the first term.
To find the solution of the non-homogeneous differential equation we use the variation of constants method:
$$ P = e^{F t}\, C(t)\, e^{F^T t} \quad\Rightarrow\quad \frac{dP}{dt} = F e^{F t} C(t) e^{F^T t} + e^{F t} C(t) e^{F^T t} F^T + e^{F t} \frac{dC(t)}{dt} e^{F^T t} = F P + P F^T + e^{F t} \frac{dC(t)}{dt} e^{F^T t} . $$
Identifying terms with (A22) we obtain
$$ e^{F t} \frac{dC(t)}{dt} e^{F^T t} = G Q G^T \quad\Rightarrow\quad \frac{dC(t)}{dt} = e^{-F t}\, Q\, e^{-F^T t} = \sum_{n=0}^{\infty} \sum_{m=0}^{\infty} \frac{(-F)^n t^n}{n!}\, Q\, \frac{(-F^T)^m t^m}{m!} $$
$$ \Rightarrow\quad C(t) = C_0 + \sum_{n=0}^{\infty} \sum_{m=0}^{\infty} \frac{(-F)^n}{n!}\, Q\, \frac{(-F^T)^m}{m!}\, \frac{t^{n+m+1}}{n+m+1} . $$
Finally, truncating the summation in (A42) at the first non-zero elements, and inserting the result into (A37), we obtain (32), where we have identified $C_0 = P(0)$ through the initial conditions.

Appendix B.2. Measurement Prediction

This subsection contains the derivation of the equations for the measurement prediction.

Appendix B.2.1. Expected Value of the Measurement Prediction

Taking expected values on (24), and assuming $\bar{\mathbf{r}}^\omega_t = \mathbf{0}$, we arrive at (35). On the other hand, approximating (23) with its Taylor series up to first order around the current estimation of the state $(\bar{q}, \bar{\boldsymbol{\omega}})$, taking its expected value, and assuming $\bar{\mathbf{r}}^v_t = \mathbf{0}$, we obtain (34).

Appendix B.2.2. Covariance Matrix of the Measurement Prediction

In order to find the covariance matrix of the measurement prediction we need the linear approximation of the vector measurement around the point $x_0 := (\mathbf{e} = \mathbf{0},\ \mathbf{q}^v = \bar{\mathbf{q}}^v,\ \mathbf{r}^v = \bar{\mathbf{r}}^v)$:
$$ \mathbf{v}^m \approx \bar{\mathbf{v}}^m + \frac{\partial \mathbf{v}^m}{\partial \mathbf{e}} \bigg|_{x_0} \mathbf{e} + \frac{\partial \mathbf{v}^m}{\partial \mathbf{q}^v} \bigg|_{x_0} \big( \mathbf{q}^v - \bar{\mathbf{q}}^v \big) + \frac{\partial \mathbf{v}^m}{\partial \mathbf{r}^v} \bigg|_{x_0} \big( \mathbf{r}^v - \bar{\mathbf{r}}^v \big) . $$
It is direct to identify
$$ \frac{\partial \mathbf{v}^m}{\partial \mathbf{q}^v} \bigg|_{x_0} = R^T(\bar{q}) , \qquad \frac{\partial \mathbf{v}^m}{\partial \mathbf{r}^v} \bigg|_{x_0} = I . $$
On the other hand, rewriting (23) as
$$ \mathbf{v}^m = \delta^* * \bar{q}^* * \big( \mathbf{q}^v + \mathbf{v} \big) * \bar{q} * \delta + \mathbf{r}^v = R^T(\delta) \Big[ R^T(\bar{q}) \big( \mathbf{q}^v + \mathbf{v} \big) \Big] + \mathbf{r}^v , $$
and noting that setting $\mathbf{e} = \mathbf{0}$ is equivalent to setting $\delta = 1$,
$$ \frac{\partial v^m_i}{\partial e_j} \bigg|_{x_0} = \sum_k \frac{\partial v^m_i}{\partial \delta_k} \bigg|_{x_0} \frac{\partial \delta_k}{\partial e_j} \bigg|_{\mathbf{e} = \mathbf{0}} = \sum_{k,l} \frac{\partial R^T_{il}(\delta)}{\partial \delta_k} \bigg|_{\delta = 1} \Big[ R^T(\bar{q}) \big( \bar{\mathbf{q}}^v + \mathbf{v} \big) \Big]_l \frac{\partial \delta_k}{\partial e_j} \bigg|_{\mathbf{e} = \mathbf{0}} . $$
Now, recalling (5) we have
$$ \frac{\partial R^T(\delta)}{\partial \delta_k} \bigg|_{\delta = 1} = 2 \begin{pmatrix} 0 & \delta_{3k} & -\delta_{2k} \\ -\delta_{3k} & 0 & \delta_{1k} \\ \delta_{2k} & -\delta_{1k} & 0 \end{pmatrix} \equiv -2 \sum_n \varepsilon_{inl}\, \delta_{nk} , $$
with $\varepsilon_{inl}$ the Levi-Civita symbol, and $\delta_{nk}$ the Kronecker delta. Recalling (10) we also have
$$ \frac{\partial \delta}{\partial e_j} = \begin{pmatrix} -e_j/4 \\ \delta_{1j}/2 \\ \delta_{2j}/2 \\ \delta_{3j}/2 \end{pmatrix} \Bigg|_{\mathbf{e} = \mathbf{0}} \quad\Rightarrow\quad \frac{\partial \delta_k}{\partial e_j} \bigg|_{\mathbf{e} = \mathbf{0}} = \big( 1 - \delta_{0k} \big)\, \delta_{kj}/2 . $$
Then, introducing in (A48) Equations (A49), (34), and (A50),
$$ \frac{\partial v^m_i}{\partial e_j} \bigg|_{x_0} \approx -\sum_{k,l,n} \varepsilon_{inl}\, \delta_{nk}\, \bar{v}^m_l \big( 1 - \delta_{0k} \big)\, \delta_{kj} = \sum_l \varepsilon_{ilj}\, \bar{v}^m_l \equiv \big( [\bar{\mathbf{v}}^m]_\times \big)_{ij} , $$
where we have used $\varepsilon_{ijl} = -\varepsilon_{ilj}$. Finally, assuming the independence of the random variables $\mathbf{x}_t = (\mathbf{e}_t, \boldsymbol{\omega}_t)^T$, $\mathbf{q}^v_t$, $\mathbf{r}^v_t$, and $\mathbf{r}^\omega_t$, and computing the covariance matrix $S_t = E[(\mathbf{z}_t - \bar{\mathbf{z}}_t)(\mathbf{z}_t - \bar{\mathbf{z}}_t)^T]$ with $\mathbf{z}_t = (\mathbf{v}^m_t, \boldsymbol{\omega}^m_t)^T$ and (A43), we arrive at (37) and (38).

Appendix C. Derivation of the T-matrices

This appendix contains the derivation of the $T$-matrix for each chart.

Appendix C.1. Orthographic

Our transition map (A3) can be written as

$$e^{\bar p}_i\!\left(e^{\bar q}\right) = \bar\delta_0\, e^{\bar q}_i - \sqrt{4-\sum_k \left(e^{\bar q}_k\right)^2}\;\bar\delta_i - \sum_{l,m}\varepsilon_{ilm}\,\bar\delta_l\, e^{\bar q}_m, \qquad \text{(A52)}$$

being $\varepsilon_{ilm}$ the Levi-Civita symbol. Computing (45) for (A52) we obtain

$$\left(T\right)_{ij} = \left[\bar\delta_0\,\delta_{ij} + \frac{\sum_k e^{\bar q}_k\,\delta_{kj}}{\sqrt{4-\sum_k\left(e^{\bar q}_k\right)^2}}\,\bar\delta_i - \sum_{l,m}\varepsilon_{ilm}\,\bar\delta_l\,\delta_{mj}\right]_{e^{\bar q}=\bar e^{\bar q}} = \bar\delta_0\,\delta_{ij} + \frac{\bar e^{\bar q}_j}{\sqrt{4-\sum_k\left(\bar e^{\bar q}_k\right)^2}}\,\bar\delta_i - \sum_l \varepsilon_{ilj}\,\bar\delta_l.$$

This expression can be rewritten in matrix form as

$$T = \bar\delta_0\, I + \frac{\bar\delta\left(\bar e^{\bar q}\right)^T}{\sqrt{4-\left\|\bar e^{\bar q}\right\|^2}} - \left[\bar\delta\right]_\times.$$

Finally, recalling that for this chart $\bar\delta_0 = \sqrt{1-\left\|\bar e^{\bar q}\right\|^2/4}$ and $\bar\delta = \bar e^{\bar q}/2$, we arrive at the final expression

$$T = \bar\delta_0\, I + \frac{\bar\delta\,\bar\delta^T}{\bar\delta_0} - \left[\bar\delta\right]_\times.$$
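The T-matrix just derived can be cross-checked against a numerical Jacobian of the transition map (A52). A minimal sketch (our own, not the paper's code), assuming the conventions reconstructed above, in particular $\bar e^{\bar q} = 2\,\bar\delta$:

```python
# Finite-difference check of the orthographic T-matrix (illustrative sketch).
import numpy as np

def skew(v):
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

d = np.array([0.8, 0.2, -0.3, 0.4]); d /= np.linalg.norm(d)
d0, dv = d[0], d[1:]                     # \bar{delta}, with d0 > 0

def transition(e):                       # transition map (A52)
    return d0 * e - np.sqrt(4.0 - e @ e) * dv - np.cross(dv, e)

ebar = 2.0 * dv                          # image of the chart center
T = d0 * np.eye(3) - skew(dv) + np.outer(dv, dv) / d0

J, h = np.empty((3, 3)), 1e-6
for j in range(3):
    ej = np.zeros(3); ej[j] = h
    J[:, j] = (transition(ebar + ej) - transition(ebar - ej)) / (2.0 * h)
print(np.abs(J - T).max())               # ~1e-10
```

Note that transition(ebar) evaluates to zero, as it should: the new chart is centered at the point whose old coordinates are ebar.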

Appendix C.2. Rodrigues Parameters

First, let us denote the numerator of (A6) as $N\!\left(e^{\bar q}\right)$, and its denominator as $D\!\left(e^{\bar q}\right)$:

$$N\!\left(e^{\bar q}\right) := \bar\delta_0\, e^{\bar q} - 2\,\bar\delta - \bar\delta\times e^{\bar q}, \qquad \text{(A56)}$$

$$D\!\left(e^{\bar q}\right) := 2\,\bar\delta_0 + \bar\delta\cdot e^{\bar q}. \qquad \text{(A57)}$$

Now let us evaluate (A56) at $\bar e^{\bar q} = 2\,\bar\delta/\bar\delta_0$:

$$N\!\left(\bar e^{\bar q}\right) = \bar\delta_0\,\frac{2\,\bar\delta}{\bar\delta_0} - 2\,\bar\delta - \underbrace{\bar\delta\times\bar e^{\bar q}}_{=\,0\ \left(\bar\delta\,\parallel\,\bar e^{\bar q}\right)} = 0.$$

Then, the approximation of $N\!\left(e^{\bar q}\right)$ does not have terms of order $O(1)$. This means that we will only need to approximate $D\!\left(e^{\bar q}\right)$ to the zeroth order: any further term of $D$ would produce, after multiplying by the linear approximation of $N\!\left(e^{\bar q}\right)$, a higher-order term. Let us then calculate each approximation.

We can rewrite (A56) as

$$N_i\!\left(e^{\bar q}\right) = \bar\delta_0\, e^{\bar q}_i - 2\,\bar\delta_i - \sum_{k,l}\varepsilon_{ikl}\,\bar\delta_k\, e^{\bar q}_l, \qquad \text{(A59)}$$

with $\varepsilon_{ikl}$ the Levi-Civita symbol. Applying (44) to (A59),

$$N_i\!\left(e^{\bar q}\right) \approx \sum_j \left[\bar\delta_0\,\delta_{ij} - \sum_{k,l}\varepsilon_{ikl}\,\bar\delta_k\,\delta_{lj}\right]_{e^{\bar q}=\bar e^{\bar q}} \left(e^{\bar q}_j - \bar e^{\bar q}_j\right) = \sum_j \left[\bar\delta_0\,\delta_{ij} - \sum_k \varepsilon_{ikj}\,\bar\delta_k\right]\left(e^{\bar q}_j - \bar e^{\bar q}_j\right),$$

being $\delta_{ij}$ the Kronecker delta. Returning to matrix notation, the linear approximation of $N\!\left(e^{\bar q}\right)$ is

$$N\!\left(e^{\bar q}\right) = \left(\bar\delta_0\, I - \left[\bar\delta\right]_\times\right)\left(e^{\bar q} - \bar e^{\bar q}\right) + O\!\left(\left\|e^{\bar q}-\bar e^{\bar q}\right\|^2\right). \qquad \text{(A61)}$$

On the other hand, evaluating (A57) at $\bar e^{\bar q}$ we obtain the zeroth-order approximation:

$$D\!\left(e^{\bar q}\right) = 2\,\bar\delta_0 + \bar\delta\cdot\frac{2\,\bar\delta}{\bar\delta_0} + O\!\left(\left\|e^{\bar q}-\bar e^{\bar q}\right\|\right) = \frac{2}{\bar\delta_0}\underbrace{\left(\bar\delta_0^2 + \left\|\bar\delta\right\|^2\right)}_{=\,1} + O\!\left(\left\|e^{\bar q}-\bar e^{\bar q}\right\|\right) = \frac{2}{\bar\delta_0} + O\!\left(\left\|e^{\bar q}-\bar e^{\bar q}\right\|\right). \qquad \text{(A62)}$$

Finally, combining (A61) and (A62) we can compute the linear approximation of (A6):

$$e^{\bar p}\!\left(e^{\bar q}\right) = \frac{2\left(\bar\delta_0\, I - \left[\bar\delta\right]_\times\right)\left(e^{\bar q}-\bar e^{\bar q}\right) + O\!\left(\left\|e^{\bar q}-\bar e^{\bar q}\right\|^2\right)}{2/\bar\delta_0 + O\!\left(\left\|e^{\bar q}-\bar e^{\bar q}\right\|\right)} = \bar\delta_0\left(\bar\delta_0\, I - \left[\bar\delta\right]_\times\right)\left(e^{\bar q}-\bar e^{\bar q}\right) + O\!\left(\left\|e^{\bar q}-\bar e^{\bar q}\right\|^2\right).$$
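The same numerical cross-check applies to the RP chart, now with $\bar e^{\bar q} = 2\,\bar\delta/\bar\delta_0$ (again our own sketch, not the paper's code):

```python
# Finite-difference check of the RP T-matrix (illustrative sketch).
import numpy as np

def skew(v):
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

d = np.array([0.8, 0.2, -0.3, 0.4]); d /= np.linalg.norm(d)
d0, dv = d[0], d[1:]

def transition(e):                       # transition map (A6), as in Table 3
    return 2.0 * (d0 * e - 2.0 * dv - np.cross(dv, e)) / (2.0 * d0 + dv @ e)

ebar = 2.0 * dv / d0
T = d0 * (d0 * np.eye(3) - skew(dv))

J, h = np.empty((3, 3)), 1e-6
for j in range(3):
    ej = np.zeros(3); ej[j] = h
    J[:, j] = (transition(ebar + ej) - transition(ebar - ej)) / (2.0 * h)
print(np.abs(J - T).max())               # ~1e-10
```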

Appendix C.3. Modified Rodrigues Parameters

First, let us denote the numerator of (A9) as $N\!\left(e^{\bar q}\right)$, and its denominator as $D\!\left(e^{\bar q}\right)$:

$$N\!\left(e^{\bar q}\right) := 8\,\bar\delta_0\, e^{\bar q} - \left(16 - \left\|e^{\bar q}\right\|^2\right)\bar\delta - 8\,\bar\delta\times e^{\bar q}, \qquad \text{(A64)}$$

$$D\!\left(e^{\bar q}\right) := 16 + \left\|e^{\bar q}\right\|^2 + \bar\delta_0\left(16 - \left\|e^{\bar q}\right\|^2\right) + 8\,\bar\delta\cdot e^{\bar q}. \qquad \text{(A65)}$$

Now let us evaluate (A64) at $\bar e^{\bar q} = 4\,\bar\delta/\left(1+\bar\delta_0\right)$:

$$N\!\left(\bar e^{\bar q}\right) = 8\,\bar\delta_0\,\frac{4\,\bar\delta}{1+\bar\delta_0} - \left(16 - \frac{16\left\|\bar\delta\right\|^2}{\left(1+\bar\delta_0\right)^2}\right)\bar\delta - \underbrace{8\,\bar\delta\times\bar e^{\bar q}}_{=\,0\ \left(\bar e^{\bar q}\,\parallel\,\bar\delta\right)} = \frac{16\,\bar\delta}{1+\bar\delta_0}\left(2\,\bar\delta_0 - \left(1+\bar\delta_0\right) + \frac{1-\bar\delta_0^2}{1+\bar\delta_0}\right) = 0.$$

Then, as with the RP chart, the approximation of $N\!\left(e^{\bar q}\right)$ does not have terms of order $O(1)$, and we will only need to approximate $D\!\left(e^{\bar q}\right)$ to the zeroth order.

We can write (A64) as

$$N_i\!\left(e^{\bar q}\right) = 8\,\bar\delta_0\, e^{\bar q}_i - \left(16 - \sum_k\left(e^{\bar q}_k\right)^2\right)\bar\delta_i - 8\sum_{l,m}\varepsilon_{ilm}\,\bar\delta_l\, e^{\bar q}_m, \qquad \text{(A67)}$$

with $\varepsilon_{ilm}$ the Levi-Civita symbol. Applying (44) to (A67),

$$N_i\!\left(e^{\bar q}\right) \approx \sum_j \left[8\,\bar\delta_0\,\delta_{ij} + \sum_k 2\,e^{\bar q}_k\,\delta_{kj}\,\bar\delta_i - 8\sum_{l,m}\varepsilon_{ilm}\,\bar\delta_l\,\delta_{mj}\right]_{e^{\bar q}=\bar e^{\bar q}} \left(e^{\bar q}_j - \bar e^{\bar q}_j\right) = \sum_j \left[8\,\bar\delta_0\,\delta_{ij} + 2\,\bar e^{\bar q}_j\,\bar\delta_i - 8\sum_l \varepsilon_{ilj}\,\bar\delta_l\right]\left(e^{\bar q}_j - \bar e^{\bar q}_j\right),$$

being $\delta_{ij}$ the Kronecker delta. Returning to matrix notation, the linear approximation of $N\!\left(e^{\bar q}\right)$ is

$$N\!\left(e^{\bar q}\right) = \left(8\,\bar\delta_0\, I + 2\,\bar\delta\left(\bar e^{\bar q}\right)^T - 8\left[\bar\delta\right]_\times\right)\left(e^{\bar q}-\bar e^{\bar q}\right) + O\!\left(\left\|e^{\bar q}-\bar e^{\bar q}\right\|^2\right) = 8\left(\bar\delta_0\, I + \frac{\bar\delta\,\bar\delta^T}{1+\bar\delta_0} - \left[\bar\delta\right]_\times\right)\left(e^{\bar q}-\bar e^{\bar q}\right) + O\!\left(\left\|e^{\bar q}-\bar e^{\bar q}\right\|^2\right). \qquad \text{(A69)}$$

On the other hand, evaluating (A65) at $\bar e^{\bar q}$ we obtain the zeroth-order approximation:

$$D\!\left(\bar e^{\bar q}\right) = 16 + \frac{16\left\|\bar\delta\right\|^2}{\left(1+\bar\delta_0\right)^2} + \bar\delta_0\left(16 - \frac{16\left\|\bar\delta\right\|^2}{\left(1+\bar\delta_0\right)^2}\right) + 8\,\bar\delta\cdot\frac{4\,\bar\delta}{1+\bar\delta_0} = \frac{16}{1+\bar\delta_0}\left(\left(1+\bar\delta_0\right) + \frac{\left\|\bar\delta\right\|^2}{1+\bar\delta_0} + \bar\delta_0\left(1+\bar\delta_0\right) - \frac{\bar\delta_0\left\|\bar\delta\right\|^2}{1+\bar\delta_0} + 2\left\|\bar\delta\right\|^2\right) = \frac{16}{1+\bar\delta_0}\left(2 + 2\,\bar\delta_0^2 + 2\left(1-\bar\delta_0^2\right)\right) = \frac{64}{1+\bar\delta_0}, \qquad \text{(A70)}$$

where we have used the equality $\left\|\bar\delta\right\|^2 = 1 - \bar\delta_0^2$ for unit quaternions. Finally, combining (A69) and (A70) we can compute the linear approximation of (A9):

$$e^{\bar p}\!\left(e^{\bar q}\right) = \frac{4\cdot 8\left(\bar\delta_0\, I + \frac{\bar\delta\,\bar\delta^T}{1+\bar\delta_0} - \left[\bar\delta\right]_\times\right)\left(e^{\bar q}-\bar e^{\bar q}\right) + O\!\left(\left\|e^{\bar q}-\bar e^{\bar q}\right\|^2\right)}{64/\left(1+\bar\delta_0\right) + O\!\left(\left\|e^{\bar q}-\bar e^{\bar q}\right\|\right)} = \frac{1+\bar\delta_0}{2}\left(\bar\delta_0\, I + \frac{\bar\delta\,\bar\delta^T}{1+\bar\delta_0} - \left[\bar\delta\right]_\times\right)\left(e^{\bar q}-\bar e^{\bar q}\right) + O\!\left(\left\|e^{\bar q}-\bar e^{\bar q}\right\|^2\right) = \frac{1}{2}\left(\left(1+\bar\delta_0\right)\left(\bar\delta_0\, I - \left[\bar\delta\right]_\times\right) + \bar\delta\,\bar\delta^T\right)\left(e^{\bar q}-\bar e^{\bar q}\right) + O\!\left(\left\|e^{\bar q}-\bar e^{\bar q}\right\|^2\right).$$
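Once more, the result can be cross-checked with a numerical Jacobian, now with $\bar e^{\bar q} = 4\,\bar\delta/\left(1+\bar\delta_0\right)$ (our own sketch):

```python
# Finite-difference check of the MRP T-matrix (illustrative sketch).
import numpy as np

def skew(v):
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

d = np.array([0.8, 0.2, -0.3, 0.4]); d /= np.linalg.norm(d)
d0, dv = d[0], d[1:]

def transition(e):                       # transition map (A9), as in Table 3
    n = 8.0 * d0 * e - (16.0 - e @ e) * dv - 8.0 * np.cross(dv, e)
    return 4.0 * n / (16.0 + e @ e + d0 * (16.0 - e @ e) + 8.0 * (dv @ e))

ebar = 4.0 * dv / (1.0 + d0)
T = 0.5 * ((1.0 + d0) * (d0 * np.eye(3) - skew(dv)) + np.outer(dv, dv))

J, h = np.empty((3, 3)), 1e-6
for j in range(3):
    ej = np.zeros(3); ej[j] = h
    J[:, j] = (transition(ebar + ej) - transition(ebar - ej)) / (2.0 * h)
print(np.abs(J - T).max())               # ~1e-10
```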

Appendix C.4. Rotation Vector

Let us start by evaluating the vector $\delta^{\bar p}$ in (A12) and (A13) at the point $\bar e^{\bar q}$. For this chart $\bar e^{\bar q} = 2\,\hat{\bar\delta}\arcsin\left\|\bar\delta\right\|$, so that $\cos\left(\left\|\bar e^{\bar q}\right\|/2\right) = \bar\delta_0$ and $\hat{\bar e}^{\bar q}\sin\left(\left\|\bar e^{\bar q}\right\|/2\right) = \bar\delta$. Then

$$\delta^{\bar p}\!\left(\bar e^{\bar q}\right) = \bar\delta_0\underbrace{\hat{\bar e}^{\bar q}\sin\frac{\left\|\bar e^{\bar q}\right\|}{2}}_{=\,\bar\delta} - \underbrace{\cos\frac{\left\|\bar e^{\bar q}\right\|}{2}}_{=\,\bar\delta_0}\bar\delta - \bar\delta\times\underbrace{\hat{\bar e}^{\bar q}\sin\frac{\left\|\bar e^{\bar q}\right\|}{2}}_{=\,\bar\delta} = 0. \qquad \text{(A72)}$$

Then, the first-order approximation of $\delta^{\bar p}$ around $\bar e^{\bar q}$ will have the form

$$\delta^{\bar p} = \tilde T\left(e^{\bar q} - \bar e^{\bar q}\right) + O\!\left(\left\|e^{\bar q}-\bar e^{\bar q}\right\|^2\right),$$

and $\delta^{\bar p} \to 0$ as $e^{\bar q} \to \bar e^{\bar q}$. Taking the Taylor series of $\arcsin x$,

$$\frac{\arcsin\left\|\delta^{\bar p}\right\|}{\left\|\delta^{\bar p}\right\|} = \frac{\left\|\delta^{\bar p}\right\| + O\!\left(\left\|\delta^{\bar p}\right\|^3\right)}{\left\|\delta^{\bar p}\right\|} = 1 + O\!\left(\left\|\delta^{\bar p}\right\|^2\right) = 1 + O\!\left(\left\|e^{\bar q}-\bar e^{\bar q}\right\|^2\right),$$

so that (A12) is linearized as

$$2\,\frac{\delta^{\bar p}}{\left\|\delta^{\bar p}\right\|}\arcsin\left\|\delta^{\bar p}\right\| = 2\,\tilde T\left(e^{\bar q}-\bar e^{\bar q}\right) + O\!\left(\left\|e^{\bar q}-\bar e^{\bar q}\right\|^2\right).$$

We only lack the $\tilde T$ matrix. We will need the linear approximations of $\cos\left(\left\|e^{\bar q}\right\|/2\right)$ and $\hat e^{\bar q}\sin\left(\left\|e^{\bar q}\right\|/2\right)$ around $\bar e^{\bar q}$. To this end we first obtain the linear approximation of $\|x\|$:

$$\|x\| = \sqrt{\sum_k x_k^2} = \|\bar x\| + \sum_j \left[\frac{\sum_k x_k\,\delta_{kj}}{\sqrt{\sum_k x_k^2}}\right]_{x=\bar x}\left(x_j-\bar x_j\right) + O\!\left(\|x-\bar x\|^2\right) = \|\bar x\| + \hat{\bar x}^T\left(x-\bar x\right) + O\!\left(\|x-\bar x\|^2\right).$$

Noticing that

$$\frac{\partial \|x\|}{\partial x} = \hat{\bar x}^T + O\!\left(\|x-\bar x\|\right),$$

our computations are straightforward:

$$\cos\frac{\|x\|}{2} = \cos\frac{\|\bar x\|}{2} - \frac{1}{2}\sin\frac{\|\bar x\|}{2}\,\hat{\bar x}^T\left(x-\bar x\right) + O\!\left(\|x-\bar x\|^2\right).$$

For our particular case,

$$\cos\frac{\left\|e^{\bar q}\right\|}{2} = \bar\delta_0 - \frac{1}{2}\,\bar\delta^T\left(e^{\bar q}-\bar e^{\bar q}\right) + O\!\left(\left\|e^{\bar q}-\bar e^{\bar q}\right\|^2\right). \qquad \text{(A79)}$$

On the other hand,

$$\frac{\sin\left(\|x\|/2\right)}{\|x\|} = \frac{\sin\left(\|\bar x\|/2\right)}{\|\bar x\|} + \left(\frac{\cos\left(\|\bar x\|/2\right)}{2\,\|\bar x\|} - \frac{\sin\left(\|\bar x\|/2\right)}{\|\bar x\|^2}\right)\hat{\bar x}^T\left(x-\bar x\right) + O\!\left(\|x-\bar x\|^2\right).$$

Now, writing $x = \bar x + \left(x-\bar x\right)$ we arrive at

$$\hat x\,\sin\frac{\|x\|}{2} = \left(\bar x + \left(x-\bar x\right)\right)\frac{\sin\left(\|x\|/2\right)}{\|x\|} = \hat{\bar x}\,\sin\frac{\|\bar x\|}{2} + \left[\frac{\sin\left(\|\bar x\|/2\right)}{\|\bar x\|}\,I + \left(\frac{1}{2}\cos\frac{\|\bar x\|}{2} - \frac{\sin\left(\|\bar x\|/2\right)}{\|\bar x\|}\right)\hat{\bar x}\,\hat{\bar x}^T\right]\left(x-\bar x\right) + O\!\left(\|x-\bar x\|^2\right).$$

For our particular case,

$$\hat e^{\bar q}\sin\frac{\left\|e^{\bar q}\right\|}{2} = \bar\delta + \left[\frac{\left\|\bar\delta\right\|}{2\arcsin\left\|\bar\delta\right\|}\,I + \left(\frac{\bar\delta_0}{2} - \frac{\left\|\bar\delta\right\|}{2\arcsin\left\|\bar\delta\right\|}\right)\hat{\bar\delta}\,\hat{\bar\delta}^T\right]\left(e^{\bar q}-\bar e^{\bar q}\right) + O\!\left(\left\|e^{\bar q}-\bar e^{\bar q}\right\|^2\right) = \bar\delta + \frac{1}{2}\left[\left(I - \hat{\bar\delta}\,\hat{\bar\delta}^T\right)\frac{\left\|\bar\delta\right\|}{\arcsin\left\|\bar\delta\right\|} + \bar\delta_0\,\hat{\bar\delta}\,\hat{\bar\delta}^T\right]\left(e^{\bar q}-\bar e^{\bar q}\right) + O\!\left(\left\|e^{\bar q}-\bar e^{\bar q}\right\|^2\right). \qquad \text{(A82)}$$

Finally, we just have to substitute (A79) and (A82) in (A13) to obtain the required linear approximation. Returning to the original notation we have

$$2\,\frac{\delta^{\bar p}}{\left\|\delta^{\bar p}\right\|}\arcsin\left\|\delta^{\bar p}\right\| = 2\,\delta^{\bar p} + O\!\left(\left\|e^{\bar q}-\bar e^{\bar q}\right\|^2\right) = 2\,\bar\delta_0\,\hat e^{\bar q}\sin\frac{\left\|e^{\bar q}\right\|}{2} - 2\cos\frac{\left\|e^{\bar q}\right\|}{2}\,\bar\delta - 2\,\bar\delta\times\hat e^{\bar q}\sin\frac{\left\|e^{\bar q}\right\|}{2} + O\!\left(\left\|e^{\bar q}-\bar e^{\bar q}\right\|^2\right)$$

$$= 2\,\bar\delta_0\,\bar\delta + \bar\delta_0\left[\left(I-\hat{\bar\delta}\,\hat{\bar\delta}^T\right)\frac{\left\|\bar\delta\right\|}{\arcsin\left\|\bar\delta\right\|} + \bar\delta_0\,\hat{\bar\delta}\,\hat{\bar\delta}^T\right]\left(e^{\bar q}-\bar e^{\bar q}\right) - 2\,\bar\delta_0\,\bar\delta + \bar\delta\,\bar\delta^T\left(e^{\bar q}-\bar e^{\bar q}\right) - \bar\delta\times\left[\left(I-\hat{\bar\delta}\,\hat{\bar\delta}^T\right)\frac{\left\|\bar\delta\right\|}{\arcsin\left\|\bar\delta\right\|} + \bar\delta_0\,\hat{\bar\delta}\,\hat{\bar\delta}^T\right]\left(e^{\bar q}-\bar e^{\bar q}\right) + O\!\left(\left\|e^{\bar q}-\bar e^{\bar q}\right\|^2\right)$$

$$= \left[\left(\bar\delta_0\left(I-\hat{\bar\delta}\,\hat{\bar\delta}^T\right) - \left[\bar\delta\right]_\times\right)\frac{\left\|\bar\delta\right\|}{\arcsin\left\|\bar\delta\right\|} + \bar\delta_0^2\,\hat{\bar\delta}\,\hat{\bar\delta}^T + \bar\delta\,\bar\delta^T\right]\left(e^{\bar q}-\bar e^{\bar q}\right) + O\!\left(\left\|e^{\bar q}-\bar e^{\bar q}\right\|^2\right)$$

$$= \left[\left(\bar\delta_0\left(I-\hat{\bar\delta}\,\hat{\bar\delta}^T\right) - \left[\bar\delta\right]_\times\right)\frac{\left\|\bar\delta\right\|}{\arcsin\left\|\bar\delta\right\|} + \hat{\bar\delta}\,\hat{\bar\delta}^T\right]\left(e^{\bar q}-\bar e^{\bar q}\right) + O\!\left(\left\|e^{\bar q}-\bar e^{\bar q}\right\|^2\right),$$

where we have used $\bar\delta\times\bar\delta = 0$, $\bar\delta\times\hat{\bar\delta}\,\hat{\bar\delta}^T = 0$, and, in the last step, $\bar\delta_0^2\,\hat{\bar\delta}\,\hat{\bar\delta}^T + \bar\delta\,\bar\delta^T = \left(\bar\delta_0^2 + \left\|\bar\delta\right\|^2\right)\hat{\bar\delta}\,\hat{\bar\delta}^T = \hat{\bar\delta}\,\hat{\bar\delta}^T$.
Note that the linear approximations of our transition maps are valid for $e^{\bar q}$ near $\bar e^{\bar q}$. However, we have not made any assumption about the quaternion $\bar\delta$. This means that our linear approximations are exact for any $\bar\delta = \varphi^{-1}\!\left(\bar e^{\bar q}\right)$ in the domain of each T-matrix, provided that $e^{\bar q}$ is close enough to $\bar e^{\bar q}$.
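The RV T-matrix can be cross-checked numerically as well; a sketch (ours, not the paper's), with $\bar e^{\bar q} = 2\,\hat{\bar\delta}\arcsin\left\|\bar\delta\right\|$. The transition map is evaluated only near $\bar e^{\bar q}$, where $\delta^{\bar p}\neq 0$, so the $0/0$ at the center itself is avoided:

```python
# Finite-difference check of the RV T-matrix (illustrative sketch).
import numpy as np

def skew(v):
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

d = np.array([0.8, 0.2, -0.3, 0.4]); d /= np.linalg.norm(d)
d0, dv = d[0], d[1:]
nd = np.linalg.norm(dv); dh = dv / nd

def transition(e):                       # RV transition map, as in Table 3
    ne = np.linalg.norm(e)
    s = (e / ne) * np.sin(ne / 2.0)      # \hat{e} sin(|e|/2)
    dp = d0 * s - np.cos(ne / 2.0) * dv - np.cross(dv, s)
    ndp = np.linalg.norm(dp)
    return 2.0 * (dp / ndp) * np.arcsin(ndp)

ebar = 2.0 * dh * np.arcsin(nd)
P = np.outer(dh, dh)
T = (d0 * (np.eye(3) - P) - skew(dv)) * nd / np.arcsin(nd) + P

J, h = np.empty((3, 3)), 1e-6
for j in range(3):
    ej = np.zeros(3); ej[j] = h
    J[:, j] = (transition(ebar + ej) - transition(ebar - ej)) / (2.0 * h)
print(np.abs(J - T).max())               # small, at finite-difference accuracy
```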

References

1. Crassidis, J.L.; Markley, F.L.; Cheng, Y. Survey of nonlinear attitude estimation methods. J. Guid. Control Dyn. 2007, 30, 12–28.
2. Kalman, R.E. A new approach to linear filtering and prediction problems. J. Basic Eng. 1960, 82, 35–45.
3. Julier, S.J.; Uhlmann, J.K. New Extension of the Kalman Filter to Nonlinear Systems; AeroSense’97; International Society for Optics and Photonics: Bellingham, WA, USA, 1997; pp. 182–193.
4. Shuster, M.D. A survey of attitude representations. Navigation 1993, 8, 439–517.
5. Stuelpnagel, J. On the parametrization of the three-dimensional rotation group. SIAM Rev. 1964, 6, 422–430.
6. Lefferts, E.J.; Markley, F.L.; Shuster, M.D. Kalman filtering for spacecraft attitude estimation. J. Guid. Control Dyn. 1982, 5, 417–429.
7. Crassidis, J.L.; Markley, F.L. Unscented filtering for spacecraft attitude estimation. J. Guid. Control Dyn. 2003, 26, 536–542.
8. Markley, F.L. Attitude error representations for Kalman filtering. J. Guid. Control Dyn. 2003, 26, 311–317.
9. Hall, J.K.; Knoebel, N.B.; McLain, T.W. Quaternion attitude estimation for miniature air vehicles using a multiplicative extended Kalman filter. In Proceedings of the 2008 IEEE/ION Position, Location and Navigation Symposium, Monterey, CA, USA, 5–8 May 2008; IEEE: Piscataway, NJ, USA; pp. 1230–1237.
10. VanDyke, M.C.; Schwartz, J.L.; Hall, C.D. Unscented Kalman filtering for spacecraft attitude state and parameter estimation. Adv. Astronaut. Sci. 2004, 118, 217–228.
11. Markley, F.L. Multiplicative vs. additive filtering for spacecraft attitude determination. Dyn. Control Syst. Struct. Space 2004, 6, 311–317.
12. Crassidis, J.L.; Markley, F.L. Attitude Estimation Using Modified Rodrigues Parameters. Available online: https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/19960035754.pdf (accessed on 1 January 2019).
13. Bar-Itzhack, I.; Oshman, Y. Attitude determination from vector observations: Quaternion estimation. IEEE Trans. Aerosp. Electron. Syst. 1985, AES-21, 128–136.
14. Mueller, M.W.; Hehn, M.; D’Andrea, R. Covariance correction step for Kalman filtering with an attitude. J. Guid. Control Dyn. 2016, 40, 2301–2306.
15. Julier, S.J.; Uhlmann, J.K. Unscented filtering and nonlinear estimation. Proc. IEEE 2004, 92, 401–422.
16. Gramkow, C. On averaging rotations. J. Math. Imaging Vis. 2001, 15, 7–16.
17. LaViola, J.J. A comparison of unscented and extended Kalman filtering for estimating quaternion motion. In Proceedings of the American Control Conference, Denver, CO, USA, 4–6 June 2003; IEEE: Piscataway, NJ, USA, 2003; Volume 3, pp. 2435–2440.
18. Xie, L.; Popa, D.; Lewis, F.L. Optimal and Robust Estimation: With an Introduction to Stochastic Control Theory; CRC Press: Boca Raton, FL, USA, 2007.
Figure 1. Points in the manifold with q₃ = 0 are mapped with points in the Euclidean space through each chart φ. (a) S²; (b) O; (c) RP; (d) MRP; (e) RV.
Figure 2. Points in the Euclidean space with e₃ = 0 are mapped with points in the manifold through each chart inverse φ⁻¹. (a) ℝ²; (b) O; (c) RP; (d) MRP; (e) RV.
Figure 3. Points in a q̄-centered chart are transformed using the transition map defined by each chart, and travel to the chart centered in the quaternion mapped with e^q̄ = (1, 1, 0)ᵀ in the previous chart. (a) ℝ²; (b) O; (c) RP; (d) MRP; (e) RV.
Figure 4. Maximum update frequency for each approach. The lines at the top represent the mean and the deviation (3σ) of the distribution of maximum update frequencies. “CU” stands for Chart Update, while “NCU” stands for No Chart Update.
Figure 5. Mean of the performance metric for each approach. Results from different charts are plotted in the same graph. Results from different MKFs are plotted in different graphs. Bars represent the confidence interval (3σ) for the mean computation.
Figure 6. Mean of the performance metric for each MKF. Only results for the RP chart are plotted. Bars represent the confidence interval (3σ) in the mean computation.
Figure 7. Mean of the performance metric for each approach. Results from the approach in which we apply the chart update and from the one in which we do not are plotted together. Bars represent the confidence interval (3σ) in the mean computation.
Table 1. Main characteristics of the most used parametrizations to represent an orientation.

| Representation | Parameters | Continuous | Non-Singular | Linear Evolution Equation |
|---|---|---|---|---|
| Euler angles | 3 | ✗ | ✗ | ✗ |
| Axis-angle | 3–4 | ✗ | ✗ | ✗ |
| Rotation matrix | 9 | ✓ | ✓ | ✓ |
| Unit quaternion | 4 | ✓ | ✓ | ✓ |
Table 2. Main characteristics of the charts studied. A quaternion is written q = (q₀, 𝐪), with 𝐪 its vector part and 𝐪̂ = 𝐪/‖𝐪‖.

| Chart | Domain | Image | e = φ(q) | q = φ⁻¹(e) |
|---|---|---|---|---|
| O | {q ∈ S³ : q₀ ≥ 0} | {e ∈ ℝ³ : ‖e‖ ≤ 2} | 2𝐪 | ( √(1 − ‖e‖²/4) , e/2 ) |
| RP | {q ∈ S³ : q₀ > 0} | ℝ³ | 2𝐪/q₀ | (2, e)/√(4 + ‖e‖²) |
| MRP | {q ∈ S³ : q₀ ≥ 0} | {e ∈ ℝ³ : ‖e‖ ≤ 4} | 4𝐪/(1 + q₀) | (16 − ‖e‖², 8e)/(16 + ‖e‖²) |
| RV | {q ∈ S³ : q₀ ≥ 0} | {e ∈ ℝ³ : ‖e‖ ≤ π} | 2𝐪̂ arcsin‖𝐪‖ | ( cos(‖e‖/2) , ê sin(‖e‖/2) ) |
Table 3. Transition maps for the charts studied.

| Chart | Transition map e^p̄(e^q̄) |
|---|---|
| O | δ̄₀ e^q̄ − √(4 − ‖e^q̄‖²) δ̄ − δ̄ × e^q̄ |
| RP | 2(δ̄₀ e^q̄ − 2δ̄ − δ̄ × e^q̄) / (2δ̄₀ + δ̄ · e^q̄) |
| MRP | 4(8δ̄₀ e^q̄ − (16 − ‖e^q̄‖²) δ̄ − 8 δ̄ × e^q̄) / (16 + ‖e^q̄‖² + δ̄₀(16 − ‖e^q̄‖²) + 8 δ̄ · e^q̄) |
| RV | 2 (δ^p̄/‖δ^p̄‖) arcsin‖δ^p̄‖, with δ^p̄ = δ̄₀ ê^q̄ sin(‖e^q̄‖/2) − cos(‖e^q̄‖/2) δ̄ − δ̄ × ê^q̄ sin(‖e^q̄‖/2) |
Table 4. T-matrices for the transition maps of the charts studied, with δ̂ = δ̄/‖δ̄‖.

| Chart | T(δ̄) matrix | Domain |
|---|---|---|
| O | δ̄₀ I − [δ̄]× + δ̄ δ̄ᵀ/δ̄₀ | {δ̄ ∈ S³ : δ̄₀ > 0} |
| RP | δ̄₀ (δ̄₀ I − [δ̄]×) | {δ̄ ∈ S³ : δ̄₀ ≠ 0} |
| MRP | ½ ((1 + δ̄₀)(δ̄₀ I − [δ̄]×) + δ̄ δ̄ᵀ) | {δ̄ ∈ S³ : δ̄₀ ≥ 0} |
| RV | (δ̄₀ (I − δ̂ δ̂ᵀ) − [δ̄]×) ‖δ̄‖/arcsin‖δ̄‖ + δ̂ δ̂ᵀ | {δ̄ ∈ S³ : δ̄₀ ≥ 0, δ̄ ≠ 0} |
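The four T-matrices of Table 4 are small enough to implement in a few lines. The following is our own reference sketch (not code from the paper), with δ̄ = (d0, dv) a unit quaternion assumed to lie in the corresponding domain:

```python
# Reference implementation of the T(delta) matrices of Table 4 (a sketch).
import numpy as np

def skew(v):
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def T_matrix(delta, chart):
    """T-matrix of the chart ('O', 'RP', 'MRP' or 'RV') at the unit
    quaternion delta = (d0, dv), assumed to lie in the chart's domain."""
    d0, dv = delta[0], delta[1:]
    if chart == "O":
        return d0 * np.eye(3) - skew(dv) + np.outer(dv, dv) / d0
    if chart == "RP":
        return d0 * (d0 * np.eye(3) - skew(dv))
    if chart == "MRP":
        return 0.5 * ((1.0 + d0) * (d0 * np.eye(3) - skew(dv))
                      + np.outer(dv, dv))
    if chart == "RV":
        n = np.linalg.norm(dv)
        P = np.outer(dv / n, dv / n)
        return (d0 * (np.eye(3) - P) - skew(dv)) * n / np.arcsin(n) + P
    raise ValueError("unknown chart: " + chart)
```

All four agree near δ̄ = 1: as δ̄₀ → 1 each T-matrix tends to the identity (the RV expression in the limit ‖δ̄‖ → 0), which is consistent with the transition map reducing to the identity when the chart center does not move.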
Table 5. Parameters used in the simulations.

| Parameter | Value |
|---|---|
| θ_{e₀} | 1 |
| T_sim | 10 s |
| dt/dt_sim | 100 |
| N_s | 1000 |
| Q^ω_max | 10² rad²/s³ |
| Q^v_max | 1 p.d.u. |
| R | {10⁻², 10⁻⁴, 10⁻⁶} |
| R^ω | R rad²/s² |
| R^v | R p.d.u. |
| W₀ | 1/25 |
