Article

Enhanced Attitude and Altitude Estimation for Indoor Autonomous UAVs

by Salvatore Rosario Bassolillo 1, Egidio D'Amato 2,*, Immacolata Notaro 1, Gennaro Ariante 2, Giuseppe Del Core 2 and Massimiliano Mattei 3

1 Department of Engineering, University of Campania Luigi Vanvitelli, 81031 Aversa, Italy
2 Department of Science and Technology, University of Naples Parthenope, 80143 Naples, Italy
3 Department of Electrical Engineering and Information Technologies, University of Naples Federico II, 80125 Naples, Italy
* Author to whom correspondence should be addressed.
Drones 2022, 6(1), 18; https://doi.org/10.3390/drones6010018
Submission received: 24 November 2021 / Revised: 31 December 2021 / Accepted: 10 January 2022 / Published: 12 January 2022

Abstract

In recent years, the use of Unmanned Aerial Vehicles (UAVs) has grown considerably in the civil sector, owing to their high operational flexibility. Two key factors are currently driving their success in this field: decreasing production costs and increasing navigation accuracy. In this paper, we propose a Kalman-filtering-based sensor fusion algorithm using a low-cost navigation platform that contains an inertial measurement unit (IMU), five ultrasonic ranging sensors and an optical flow camera. The aim is to improve navigation in indoor or GPS-denied environments. A multi-rate version of the Extended Kalman Filter is adopted to deal with heterogeneous sensors running at different sampling rates and with the non-linearities of the model. The effectiveness of the proposed sensor platform is evaluated by means of numerical tests on a quadrotor flight dynamics simulator. Results show that the attitude estimation algorithm is precise and robust, with a computational cost low enough to be implemented on low-cost platforms.

1. Introduction

In the last two decades, the use of small and micro-UAVs has spread considerably, driven by the increased level of autonomy and the development of low-cost electronic devices such as microcontrollers and sensors [1]. In particular, the ability of multi-rotors (quadcopters, hexacopters, etc.) to take off and land vertically, move in any direction, and hover over a fixed position makes them suitable for reconnaissance missions in hostile and hazardous environments [2,3], where other aircraft and robots cannot be used. However, their use requires the solution of several technological problems, first of all the need for a robust and reliable attitude estimator, possibly executable on low-cost computational hardware and using only measurements from light-weight sensors.
Nowadays, the most widely used platforms for UAV navigation are based on the Global Positioning System (GPS) [4] and the Inertial Navigation System (INS) [5], including an IMU with magnetic, angular rate, and gravity sensors (MARG). However, GPS provides position data by receiving information from a network of satellites, which makes it unusable in indoor environments, where the signal is lost.
MARG sensors consist of Micro Electro-Mechanical Systems (MEMS) gyroscopes, accelerometers and magnetometers, which can provide the orientation of a rigid body with respect to a fixed reference system. Unlike GPS receivers, they can provide attitude information in indoor environments too, since they do not rely on signals from external sources. Moreover, the growing success of these sensors in many applications [6] is due to their low weight, low cost, low power consumption and relatively high real-time performance [7].
Although essential, the problem of estimating attitude from sensor measurements can be non-trivial [8,9].
Most recent sensor fusion algorithms provide the orientation estimate in quaternion form [10,11]. Quaternions are a useful mathematical tool that, unlike the Euler angle representation, does not suffer from singular configurations.
Over the years, several solutions have been developed to solve the attitude estimation problem, and different sensor data fusion algorithms have been implemented and tested on-board [12]. The best-known techniques make use of complementary filters [13,14,15,16,17], thanks to their low computational burden. In [13], a constant-gain filter is considered to estimate the attitude of a rigid body. This method ensures good attitude estimation at low computational cost, but it is sensitive to local magnetic disturbances. In [14], a nonlinear explicit complementary filter is proposed, improved with an anti-windup nonlinear integrator that allows effective gyro-bias compensation. To increase robustness to magnetic disturbances, a decoupling strategy was implemented to make the roll and pitch estimates independent of the magnetometer. In [17], a complementary filter provides the quaternion estimate as the algebraic solution of a system derived from inertial/magnetic observations. Tilt and heading are estimated separately, to avoid the impact of magnetic disturbances on roll and pitch.
However, despite the effectiveness and computational efficiency of such techniques, the increasing computational power of low-cost electronic boards has favored more accurate solutions based on Kalman Filtering (KF). Such techniques are widely used in human motion analysis [18,19,20,21,22], robotics [23,24] and aerospace applications [25,26]. Most UAV attitude estimation algorithms are based on the Extended Kalman Filter (EKF) [27,28,29] and the Unscented Kalman Filter (UKF) [30,31].
At the cost of a further increase in computational burden, several examples based on Particle Filtering (PF) also exist in the literature [9,32,33,34].
Since attitude estimation depends on sensor accuracy and can suffer from the presence of fast dynamics, integrating an INS, based on a sensor fusion algorithm and an IMU, with other kinds of sensors can be useful. Speed and position can also be obtained by integrating IMU and GPS sensors [35], an approach that has been applied to autonomous vehicles only recently [36]. Exploiting the complementary characteristics of GPS and IMU sensors, the author of [37] developed an integrated GPS/IMU system to estimate the attitude and position of a rigid body in 3-D space. The attitude estimate is obtained by comparing the acceleration vector, implicitly available in the derivative of the GNSS velocity, with the measurements provided by the accelerometers. In [38], the authors present a nonlinear observer for estimating position, velocity, attitude, and gyro bias by combining a GNSS receiver with an INS. The presented navigation equations take the Earth's rotation and curvature into account to improve accuracy. In [39], the authors exploited GPS Precise Point Positioning (PPP) on a fixed-wing UAV for photogrammetric mapping with centimeter-level precision on the horizontal plane.
Such solutions are not effective in indoor scenarios or in the absence of a GPS signal, and MARG-based dead reckoning alone results in poor position estimates. To improve attitude and position estimation, several solutions based on vision and sound propagation techniques have been proposed over the years.
In low-cost and commercial drones, one of the most popular solutions makes use of visual odometry. Optical flow is a naturally inspired technique, introduced in the 1950s by the American psychologist Gibson [40] to describe the visual perception of the world by animals. It can be considered the projection onto the image plane of the apparent 3D motion of objects, caused by the relative movement between an observer and the scene.
The concept of optical flow has been widely used in many fields, and different types of algorithms have been developed over the years. The Lucas-Kanade algorithm [41] and the Horn-Schunck method [42] are based on the gradient descent approach. In [43], a method for object recognition based on image features is presented, while in [44] the author used an interpolation technique to compute optical flow and ego-motion.
Starting from the experience of the pioneers in this discipline, many researchers have combined optical flow algorithms with inertial platforms for navigation and obstacle avoidance purposes in the UAV field [45,46,47,48,49].
From the hardware point of view, several kinds of optical sensors have been adopted on small and micro-UAVs. Optical-mouse-based sensors were used first, thanks to their very low cost, but their need for strong lighting limited their success. Being more sensitive, CMOS image sensors alleviate this problem, allowing indoor operation without artificial lighting, as in [50].
Another solution to improve indoor navigation is Simultaneous Localization and Mapping (SLAM) [51,52,53,54,55,56,57,58,59]. However, although SLAM techniques are extremely useful in unknown environments, their cost in terms of weight and computational burden is usually high, making them difficult to implement on micro and mini UAVs.
In this paper, we propose a novel technique to compute accurate attitude and altitude estimates by fusing data from an IMU, a downward-oriented optical flow camera and five downward-oriented distance sensors based on ultrasonic or time-of-flight (TOF) devices. Being placed at different points, the distance sensors can improve the accuracy of the attitude estimate in addition to providing the UAV altitude. One of the aims of this work is to show the benefits of using several distance sensors for attitude estimation as well as for velocity and altitude measurement. Furthermore, the downward-oriented optical flow camera makes it possible to estimate the velocity vector in the horizontal plane and to correct the attitude during fast maneuvers.
Since the involved sensors run at different sampling rates, the proposed algorithm makes use of a multi-rate EKF [60,61,62]. A campaign of numerical simulations is carried out to prove the effectiveness of the idea, comparing the results obtained with several sensor fusion schemes. Two sensor configurations are considered: one involving a single TOF sensor and one with five TOF devices. Each configuration is tested in two possible architectures: a tightly coupled and a loosely coupled scheme.
The paper is organized as follows: Section 2 defines the UAV attitude and altitude estimation problems; Section 3 presents the multi-rate EKF approach; finally, Section 4 describes the proposed testing platform and shows the numerical results obtained by integrating the system into a quadrotor dynamic simulator.

2. UAV Attitude and Altitude Estimation Problem

To define the UAV position and orientation, two reference frames are considered, as shown in Figure 1:
  • North-East-Down (NED) Reference Frame, located on the Earth's surface, with:
    the $X_E$-axis pointing north, parallel to the Earth's surface;
    the $Y_E$-axis pointing east, parallel to the Earth's surface;
    the $Z_E$-axis pointing downward, toward the Earth's surface.
  • Body Reference Frame, centred at the Center of Gravity ($CG$) of the quadrotor, with:
    the $X_B$-axis pointing along arm 1–3, positive forward;
    the $Y_B$-axis pointing along arm 2–4, positive rightward;
    the $Z_B$-axis pointing downward, to form a right-handed reference frame.
Vehicle attitude is defined as its orientation with respect to a reference frame, and its accurate estimation is an important topic in the robotics and aerospace fields.
At the initial time, the NED and body reference frames can be taken to coincide at the UAV departure point.
After that, the transformation from the NED to the body frame can be obtained as a sequence of three ordered rotations (Figure 2), called the 3-2-1 sequence [63]:
  • a rotation around the $Z_E$ axis by the yaw angle $\psi$, from $OX_EY_EZ_E$ to $OX'Y'Z'$;
  • a rotation around the $Y'$ axis by the pitch angle $\theta$, from $OX'Y'Z'$ to $OX''Y''Z''$;
  • a rotation around the $X''$ axis by the roll angle $\phi$, from $OX''Y''Z''$ to $OX_BY_BZ_B$.
The Euler angles $\psi$, $\theta$ and $\phi$ describe the aircraft attitude. An alternative representation makes use of quaternions [17,64], considering $q = [q_0, q_1, q_2, q_3]^T$ as the unit quaternion associated with the rotation matrix $R_{BE}(q)$ from the NED reference frame to the body frame:
$$ R_{BE}(q) = \begin{bmatrix} q_1^2 - q_2^2 - q_3^2 + q_0^2 & 2(q_1 q_2 - q_3 q_0) & 2(q_1 q_3 + q_2 q_0) \\ 2(q_1 q_2 + q_3 q_0) & -q_1^2 + q_2^2 - q_3^2 + q_0^2 & 2(q_2 q_3 - q_1 q_0) \\ 2(q_1 q_3 - q_2 q_0) & 2(q_2 q_3 + q_1 q_0) & -q_1^2 - q_2^2 + q_3^2 + q_0^2 \end{bmatrix} \qquad (1) $$
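As a quick numerical check of Equation (1), the following minimal Python sketch (our own illustration, with a scalar-first quaternion; the function name is not from the paper) builds $R_{BE}$ and verifies its orthonormality:

```python
import numpy as np

def R_BE(q):
    """Rotation matrix of Equation (1); q = [q0, q1, q2, q3], scalar first."""
    q0, q1, q2, q3 = q
    return np.array([
        [q1**2 - q2**2 - q3**2 + q0**2, 2*(q1*q2 - q3*q0),              2*(q1*q3 + q2*q0)],
        [2*(q1*q2 + q3*q0),             -q1**2 + q2**2 - q3**2 + q0**2, 2*(q2*q3 - q1*q0)],
        [2*(q1*q3 - q2*q0),             2*(q2*q3 + q1*q0),              -q1**2 - q2**2 + q3**2 + q0**2],
    ])

# Any unit quaternion yields an orthonormal matrix: R @ R.T == I up to rounding.
q = np.array([0.9, 0.1, 0.3, 0.2])
q /= np.linalg.norm(q)
assert np.allclose(R_BE(q) @ R_BE(q).T, np.eye(3))
```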
Gyroscope model. If the gyroscope were not affected by noise and uncertainties, the attitude could be estimated by integrating the angular velocity $\omega = [p, q, r]^T$ from a known initial condition. In fact, the quaternion vector obeys the following dynamics:
$$ \dot{q}(t) = \frac{1}{2} Q(\omega(t)) \, q(t) \qquad (2) $$
where the matrix $Q(\omega(t))$ is
$$ Q(\omega(t)) = \begin{bmatrix} 0 & -p(t) & -q(t) & -r(t) \\ p(t) & 0 & r(t) & -q(t) \\ q(t) & -r(t) & 0 & p(t) \\ r(t) & q(t) & -p(t) & 0 \end{bmatrix} \qquad (3) $$
However, the gyroscope output is affected by noise with zero mean and a given standard deviation. In steady-state conditions, a straightforward integration would lead to a non-zero final angle due to the so-called Angular Random Walk (ARW) [65,66].
To compensate for this effect, a gyro drift model is introduced, whose sensed angular speed vector $\omega_S(t)$ differs from $\omega(t)$ by a bias $b_g(t) = [b_p(t), b_q(t), b_r(t)]^T$ and a stochastic zero-mean noise $\upsilon_\omega(t) = [\upsilon_p(t), \upsilon_q(t), \upsilon_r(t)]^T$:
$$ \omega_S(t) = [p_s(t), q_s(t), r_s(t)]^T = \omega(t) + b_g(t) + \upsilon_\omega(t) \qquad (4) $$
Therefore, by computing $\omega(t)$ from (4), the equation describing the attitude dynamics becomes
$$ \dot{q}(t) = \frac{1}{2} Q\big(\omega_S(t) - b_g(t) - \upsilon_\omega(t)\big) \, q(t) \qquad (5) $$
Usually, in slowly varying conditions, the accelerometer and the magnetometer can be used to estimate the gyroscope biases and compensate for the attitude drift, since their measurements depend on the UAV orientation [67,68,69]. The gyroscope bias is commonly modeled as a first-order Gauss-Markov process [70,71], described by the first-order stochastic differential equation
$$ \dot{b}_g(t) = -\frac{1}{\tau_g} b_g(t) + \upsilon_b \qquad (6) $$
where $\tau_g$ is the correlation time of the process and $\upsilon_b$ is the associated white Gaussian noise [72,73,74,75,76].
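To make the role of Equations (5) and (6) in the filter concrete, the following Python sketch performs one Euler integration step of the attitude and bias dynamics with the noise terms set to zero, as done in the EKF prediction phase (function names are ours, for illustration only):

```python
import numpy as np

def Omega(w):
    """Matrix Q(omega) of Equation (3); w = [p, q, r]."""
    p, q, r = w
    return np.array([[0.0,  -p,  -q,  -r],
                     [  p, 0.0,   r,  -q],
                     [  q,  -r, 0.0,   p],
                     [  r,   q,  -p, 0.0]])

def predict_attitude(q_vec, b_g, w_s, dt, tau_g):
    """One Euler step of Equations (5) and (6), noise-free."""
    q_new = q_vec + dt * 0.5 * Omega(w_s - b_g) @ q_vec
    q_new /= np.linalg.norm(q_new)   # keep the quaternion on the unit sphere
    b_new = b_g - dt * b_g / tau_g   # first-order Gauss-Markov decay
    return q_new, b_new
```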
Accelerometer model. A tri-axial accelerometer measures the forces per unit mass acting on the body along three specific orthogonal directions. The inertial acceleration depends on these forces and on the gravity acceleration $g = [0, 0, g]^T$. The specific force in the body reference frame is:
$$ F^B(t) = a^B(t) + R_{BE}(q(t)) \, g \qquad (7) $$
The acceleration in the inertial reference frame, $a^E(t)$, depends on $a_t^B(t)$ as follows:
$$ a^E(t) = a_t^B(t) + \omega(t) \times \big(\omega(t) \times L_B\big) + \dot{\omega}(t) \times L_B \qquad (8) $$
where $L_B$ is the distance vector between the center of gravity and the accelerometer.
The acceleration in the body reference frame can then be written as follows:
$$ a^B(t) = R_{BE}(q(t)) \, a^E(t) = R_{BE}(q(t)) \big( a_t^B(t) + \omega(t) \times (\omega(t) \times L_B) + \dot{\omega}(t) \times L_B \big) \qquad (9) $$
In the absence of noise and uncertainties, the velocity in the inertial frame could be obtained by integrating (8). However, to compensate for the drift arising from the integration of the acceleration, a bias $b_a(t) = [b_{a_x}(t), b_{a_y}(t), b_{a_z}(t)]^T$ and a stochastic zero-mean noise $\nu_a(t) = [\nu_{a_x}(t), \nu_{a_y}(t), \nu_{a_z}(t)]^T$ are considered:
$$ a_S^B(t) = [a_{x,S}^B(t), a_{y,S}^B(t), a_{z,S}^B(t)]^T = a^B(t) + R_{BE}(q(t)) \, g + b_a(t) + \nu_a(t) = R_{BE}(q(t)) \, g + a'^B(t) + \nu_a(t) \qquad (10) $$
In the case of slowly varying maneuvers, the vector $a'^B(t) = a^B(t) + b_a(t)$ can be estimated using a first-order dynamics:
$$ \dot{a}'^B(t) = -\frac{1}{\tau_a} a'^B(t) \qquad (11) $$
where $\tau_a$ is a time constant related to the UAV dynamics.
The velocity in the inertial frame can then be obtained by integrating $a'^B(t)$:
$$ \dot{V}^E(t) = R_{BE}^T(q(t)) \, a'^B(t) \qquad (12) $$
The altitude $z^E$ is related to $V^E$ as follows:
$$ \dot{z}^E(t) = [0, 0, 1] \cdot V^E(t) \qquad (13) $$
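Under our reconstruction of Equations (10)–(13), and reusing the R_BE helper above, the translational part of the model can be sketched as follows (illustrative names, not the authors' code):

```python
import numpy as np

def accel_output(q_vec, a_prime_B, g=9.81):
    """Noise-free accelerometer output of Equation (10): gravity mapped into
    the body frame plus the lumped state a'_B = a_B + b_a."""
    return R_BE(q_vec) @ np.array([0.0, 0.0, g]) + a_prime_B

def predict_translation(V_E, z_E, a_prime_B, q_vec, dt, tau_a):
    """One Euler step of Equations (11)-(13)."""
    V_E_new = V_E + dt * (R_BE(q_vec).T @ a_prime_B)  # Equation (12)
    z_E_new = z_E + dt * V_E[2]                       # Equation (13)
    a_new = a_prime_B - dt * a_prime_B / tau_a        # Equation (11)
    return V_E_new, z_E_new, a_new
```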
Magnetometer model. A magnetometer measures the local magnetic field in a given direction, which is a combination of the Earth's magnetic field and the magnetic field due to the presence of ferromagnetic material [77]. The horizontal component of the local Earth magnetic field points towards the Earth's magnetic north pole, while the ratio between the horizontal and vertical components depends on the location on the Earth.
Assuming that the sensor does not travel over significant distances and that the local magnetic sources are weak and not moving, such that a preliminary calibration can estimate the constant magnetic field they induce, the local magnetic field $M^B$ in the body reference frame can be expressed as follows:
$$ M^B(t) = R_{BE}(q(t)) \, M^E \qquad (14) $$
where $M^E$ is the Earth's magnetic field. However, at the time instant t, the magnetometer measurements $M_S(t)$ are affected by the sensor noise $\nu_M$:
$$ M_S(t) = [M_{x,S}(t), M_{y,S}(t), M_{z,S}(t)]^T = R_{BE}(q(t)) \, M^E + \nu_M(t) \qquad (15) $$
Here, the presence of additional fields due to ferromagnetic material in an indoor environment is neglected, since this problem is handled independently in the literature [78,79,80,81,82].
Optical flow model. Optical flow is the distribution of the apparent velocities of the brightness patterns moving in an image [42]. It can be due to the relative motion between objects and the viewer [40]. According to the pinhole imaging model, which defines a relationship between the 3D motion field and the 2D image plane, the sensed optical flow [46,50] is:
$$ V_S(t) = \begin{bmatrix} u_S(t) \\ v_S(t) \end{bmatrix} = \begin{bmatrix} \frac{u(t) \, \beta}{z^E(t)} - q(t) \, \beta + r(t) \, d_y(t) + \nu_{O_x}(t) \\ \frac{v(t) \, \beta}{z^E(t)} + p(t) \, \beta - r(t) \, d_x(t) + \nu_{O_y}(t) \end{bmatrix} \qquad (16) $$
where $u(t)$ and $v(t)$ are the components of the velocity vector in the body frame at the time instant t, $\beta$ is the focal length, $z^E(t)$ is the altitude of the UAV, $d_x(t)$ and $d_y(t)$ are the components on the horizontal plane of the distance vector between the center of gravity of the UAV and the camera, and $\nu_{O_x}(t)$ and $\nu_{O_y}(t)$ represent the sensor noise on the x and y axes.
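For illustration, the noise-free version of Equation (16) can be written as follows (a hypothetical helper; q_rate denotes the pitch rate, to avoid clashing with the quaternion symbol, and z_E must be non-zero):

```python
def optical_flow_output(u, v, p, q_rate, r, z_E, d_x, d_y, beta):
    """Noise-free sensed optical flow of Equation (16); beta is the focal length."""
    u_S = u * beta / z_E - q_rate * beta + r * d_y
    v_S = v * beta / z_E + p * beta - r * d_x
    return u_S, v_S
```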
Time-of-flight sensor model. The primary function of a time-of-flight sensor is to measure the distance between the transmitter and the objects in its field of view. The principle is based on the speed of light: the distance is evaluated by measuring the time it takes for the transmitted signal to travel to and from the object.
The use of several devices makes it possible to measure the distance to the ground from several points on the vehicle, which is useful to improve the attitude estimate. Consider $n_{TOF}$ TOF devices mounted on the UAV structure and pointing in the positive $z_B$ axis direction. Let $\xi_i^B = [x_i^B, y_i^B, z_i^B]^T$, $i = 1, \dots, n_{TOF}$, be the position of the i-th device in the body reference frame. During the flight, the position of the sensor in the inertial reference frame, $\xi_i^E(t) = [x_i^E(t), y_i^E(t), z_i^E(t)]^T$, is
$$ \xi_i^E(t) = R_{BE}^T(q(t)) \, \xi_i^B \qquad (17) $$
The vector of the sensed distance measurements is
$$ D_S(t) = [d_{S,1}(t), \dots, d_{S,n_{TOF}}(t)]^T, \qquad d_{S,i}(t) = \frac{[0, 0, 1] \, \xi_i^E(t)}{\cos\phi(t) \cos\theta(t)} + \nu_{TOF_i}(t), \quad i = 1, \dots, n_{TOF} \qquad (18) $$
where $\nu_{TOF_i}(t)$ is the sensor noise on the i-th device (see Figure 3).
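A sketch of the TOF output model, under our reconstruction of Equations (17) and (18) and reusing the R_BE helper above (names are illustrative):

```python
import numpy as np

def tof_outputs(q_vec, xi_B, phi, theta):
    """Noise-free distance vector D_S of Equation (18); the body-frame
    positions of the n_TOF devices are the rows of xi_B (n_TOF x 3)."""
    e3 = np.array([0.0, 0.0, 1.0])
    scale = np.cos(phi) * np.cos(theta)
    # Equation (17) maps each sensor position to the inertial frame
    return np.array([e3 @ (R_BE(q_vec).T @ xi) / scale for xi in xi_B])
```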
Overall mathematical model. Equations (5), (6) and (11)–(13) can be expressed in the state-space form:
$$ \dot{x}(t) = \gamma\big(x(t), u(t), \upsilon(t)\big) \qquad (19) $$
where $x(t) = [q(t)^T, b_g(t)^T, V^E(t)^T, a'^B(t)^T, z^E(t)]^T$ is the state vector, $u(t) = \omega_S(t)$ is the input vector, and $\upsilon = [\upsilon_\omega^T, \upsilon_b^T]^T$ is the process noise. Similarly, Equations (10), (15), (16) and (18) can be written in the following form:
$$ y(t) = g\big(x(t), \nu(t)\big) \qquad (20) $$
where $y(t) = [a_S(t)^T, M_S(t)^T, V_S(t)^T, D_S(t)^T]^T$ is the output vector and $\nu(t) = [\nu_a(t)^T, \nu_M(t)^T, \nu_O(t)^T, \nu_{TOF}(t)^T]^T$ is the measurement noise vector.

3. Multi-Rate Extended Kalman Filtering Approach

In order to fuse the raw data and obtain a reliable estimate of the attitude and of the vertical position with respect to the ground, a sensor fusion algorithm must be adopted. Since the sensors work at different, asynchronous sampling rates, a multi-rate version of the EKF, hereinafter referred to as MR-EKF, is needed.
Let us consider the nonlinear discrete-time dynamic model of our system:
$$ x_k = f(x_{k-1}, u_{k-1}, \upsilon_{k-1}) \qquad (21) $$
$$ y_k = g(x_k, u_k, \nu_k) \qquad (22) $$
where $x$ is the state, $u$ is the input, $y$ is the measured output, and $\upsilon$ and $\nu$ are the process and measurement noises, respectively. This discrete-time system can be obtained by applying Euler's discretization to (19) and (20), where $\zeta_k = \zeta(t_k)$ and $t_k = k T_S$ is the k-th sampling time instant.
Assume a zero-mean Gaussian distribution for the noises $\upsilon_{k-1}$ and $\nu_k$, with known covariance matrices $Q_{k-1}$ and $R_k$:
$$ \upsilon_{k-1} \sim \mathcal{N}(0, Q_{k-1}), \qquad \nu_k \sim \mathcal{N}(0, R_k) \qquad (23) $$
The estimation algorithm is composed of two distinct phases, namely the prediction and the correction phase.
At each time step, the prediction phase provides the a-priori state estimate $\hat{x}_k^-$ and the state covariance matrix $P_k^- = E\big((x_k - \hat{x}_k^-)(x_k - \hat{x}_k^-)^T\big)$, whereas the update phase computes the a-posteriori state estimate $\hat{x}_k^+$ and the state covariance matrix $P_k^+ = E\big((x_k - \hat{x}_k^+)(x_k - \hat{x}_k^+)^T\big)$.
According to the KF approach, at each time instant k the current state can be estimated by using the previous a-posteriori estimate $\hat{x}_{k-1}^+$ and the last acquired set of measurements $y_{m,k}$, which depends on the rate of each sensor.
Therefore, the a-priori estimate of the state, $\hat{x}_k^-$, is computed by using Equation (21) with null process noise ($\upsilon = 0$):
$$ \hat{x}_k^- = f(\hat{x}_{k-1}^+, u_{k-1}, 0) $$
The a-priori state covariance matrix $P_k^-$ is computed as follows:
$$ P_k^- = A_{k-1} P_{k-1}^+ A_{k-1}^T + Y_{k-1} Q_{k-1} Y_{k-1}^T $$
where
$$ A_{k-1} = \left. \frac{\partial f}{\partial x} \right|_{\hat{x}_{k-1}^+, u_{k-1}, 0}, \qquad Y_{k-1} = \left. \frac{\partial f}{\partial \upsilon} \right|_{\hat{x}_{k-1}^+, u_{k-1}, 0} $$
are the Jacobian matrices of the function $f(\cdot)$, and $P_{k-1}^+$ is the a-posteriori state covariance matrix at the time instant $k-1$.
Consider n sensors that provide measurements at different sampling rates $\rho_i$, $i = 1, \dots, n$, ordered so that
$$ \rho_1 \ge \rho_2 \ge \dots \ge \rho_n $$
To perform the update of the a-priori state estimate, assume the minimum sample time $T_S = 1/\rho_1$ to obtain the discrete-time model (21), (22).
The update must be executed taking into account the measurements available at time k, which can be collected in a vector $\tilde{y}_{m_k,k}$ of variable size.
To take missing measurements into account, only a partial measurement model is adopted in the update phase.
Let us define $\Delta_k$ as a diagonal matrix with $(\Delta_k)_{jj} = 1$ if the j-th output is available and $(\Delta_k)_{jj} = 0$ otherwise.
The partial estimated output is obtained by using the output Equation (22) and neglecting the measurement noise:
$$ \hat{y}_k = \Delta_k \, g(\hat{x}_k^-, u_k, 0) $$
Once a new measurement has been acquired, the a-posteriori state estimate can be computed by updating the a-priori estimate with a feedback term that depends only on the active measurements:
$$ \hat{x}_k^+ = \hat{x}_k^- + \tilde{L}_k \big( y_{m,k} - \hat{y}_k \big) $$
where the gain $\tilde{L}_k$ is given by the following equation:
$$ \tilde{L}_k = P_k^- \tilde{C}_k^T \big( \tilde{C}_k P_k^- \tilde{C}_k^T + \tilde{N}_k \tilde{R}_k \tilde{N}_k^T \big)^{-1} $$
with
$$ \tilde{C}_k = \Delta_k \left. \frac{\partial g}{\partial x} \right|_{\hat{x}_k^-, u_k, 0}, \qquad \tilde{N}_k = \Delta_k \left. \frac{\partial g}{\partial \nu} \right|_{\hat{x}_k^-, u_k, 0} $$
Finally, the a-posteriori state covariance matrix is updated as follows:
$$ P_k^+ = \big( I - \tilde{L}_k \tilde{C}_k \big) P_k^- $$
where I is the identity matrix.
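Putting the two phases together, the following Python sketch shows one full MR-EKF cycle under simplifying assumptions of ours: the noises are treated as additive, so the noise Jacobians reduce to the identity and the selection matrix $\Delta_k$; the Jacobians of f and g are approximated by finite differences; and a pseudo-inverse replaces the inverse, since masking zeroes out the rows and columns of the unavailable outputs. It is an illustration of the scheme above, not the authors' implementation:

```python
import numpy as np

def mr_ekf_step(x, P, u, y_meas, avail, f, g, Q, R, eps=1e-6):
    """One MR-EKF cycle: prediction plus correction restricted to the
    outputs flagged as available in the boolean array `avail`."""
    n = x.size

    def jac(fun, x0):
        # forward-difference Jacobian of fun at x0
        f0 = fun(x0)
        J = np.zeros((f0.size, n))
        for i in range(n):
            dx = np.zeros(n)
            dx[i] = eps
            J[:, i] = (fun(x0 + dx) - f0) / eps
        return J

    # prediction phase
    x_pred = f(x, u)
    A = jac(lambda xx: f(xx, u), x)
    P_pred = A @ P @ A.T + Q

    # correction phase: mask out the outputs that are not available now
    Delta = np.diag(avail.astype(float))
    y_hat = Delta @ g(x_pred, u)
    C = Delta @ jac(lambda xx: g(xx, u), x_pred)
    S = C @ P_pred @ C.T + Delta @ R @ Delta.T
    L = P_pred @ C.T @ np.linalg.pinv(S)  # pinv: masked rows/cols make S singular
    x_new = x_pred + L @ (Delta @ y_meas - y_hat)
    P_new = (np.eye(n) - L @ C) @ P_pred
    return x_new, P_new
```

At the fast rate $T_S = 1/\rho_1$ only the highest-rate outputs would typically be flagged in `avail`; the optical flow and TOF entries are switched on at the time steps when their slower samples arrive.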

4. Results

To validate the proposed algorithm, numerical results were obtained in a quadrotor dynamic simulation environment that includes sensor dynamics and measurement noise. Such uncertainty was simulated by adding a bias and a white Gaussian noise to every sensor output. The main simulation parameters are summarized in Table 1.
The idea is to test the algorithm in view of its implementation on a low-cost embedded board.
Although the navigation system has the objective of acquiring, filtering and fusing sensor data to extract information about the UAV and the surrounding environment, the same electronic board can also be used for guidance and control tasks.
The sensor platform involves a low-cost IMU, consisting of a 3-axis accelerometer, a 3-axis gyroscope and a 3-axis magnetometer, an optical flow camera and five time-of-flight distance sensors. Four of them can be installed below the arms and the fifth at the centre of the structure. The optical flow camera and the time-of-flight devices are oriented downwards in order to measure the velocity vector in the horizontal plane and the relative distance to the ground.
The proposed simulation campaign considers as available sensors a tri-axial accelerometer, a gyroscope, a magnetometer, an optical flow camera, and two possibilities for the TOFs: five devices or a single one (see Figure 4). The sensor fusion algorithm is implemented using two different architectures: a tightly coupled scheme (A) and a loosely coupled scheme (B).
According to the tightly coupled approach, all measurements are fused by the same EKF (see Figure 5), whose model is defined by the state Equations (5), (6) and (11)–(13), and the output Equations (10), (15), (16) and (18).
The loosely coupled scheme involves two separate filters (see Figure 6): the former EKF is based on the IMU and the TOF devices and estimates the attitude on the basis of Equations (5), (6), (10), (15) and (18); the latter, an MR-EKF, is based on the TOF devices plus the optical flow and estimates altitude and velocity by means of Equations (11)–(13), (16) and (18).
Hence, four configurations are analysed:
  • $A_5$: $n_{TOF} = 5$ and tightly coupled EKF architecture;
  • $A_1$: $n_{TOF} = 1$ and tightly coupled EKF architecture;
  • $B_5$: $n_{TOF} = 5$ and loosely coupled EKF architecture;
  • $B_1$: $n_{TOF} = 1$ and loosely coupled EKF architecture;
where two of them, $A_5$ and $B_5$, use more than one TOF to augment the attitude estimate, while the $A_1$ and $B_1$ schemes are typical architectures, used to compare the results with the state of the art.
As shown in Figure 4, the quadrotor dynamics includes attitude and altitude control. The simulation starts at t = 0 from an equilibrium hovering condition at the altitude h = 10 m. Roll and pitch maneuvers are imposed by applying two reference doublet signals with a duration of 16 s and an amplitude of 24 degrees, on the roll angle at $t_\phi = 2$ s and on the pitch angle at $t_\theta = 30$ s. A yaw maneuver is carried out by applying a trapezoidal signal with a duration of 7 s and an amplitude of 12 degrees to the yaw reference signal. Finally, a variation in altitude is simulated using a ramp reference signal going from 10 m to 11 m.
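For reproducibility, the reference signals of this scenario can be generated, for instance, as follows (a sketch assuming the usual positive-then-negative doublet shape; the start time and slope of the altitude ramp are not specified in the text and are assumed here):

```python
import numpy as np

def doublet(t, t0, width, amp):
    """Doublet reference: +amp for the first half of the window, -amp for the second."""
    tau = t - t0
    if 0.0 <= tau < width / 2:
        return amp
    if width / 2 <= tau < width:
        return -amp
    return 0.0

t = np.arange(0.0, 60.0, 0.01)                                     # time grid [s]
phi_ref = np.array([doublet(ti, 2.0, 16.0, 24.0) for ti in t])     # roll ref [deg]
theta_ref = np.array([doublet(ti, 30.0, 16.0, 24.0) for ti in t])  # pitch ref [deg]
z_ref = 10.0 + np.clip((t - 50.0) / 5.0, 0.0, 1.0)  # 10 m -> 11 m ramp (timing assumed)
```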
Figure 7 and Figure 8 show the sensor measurements.
Figure 9, Figure 10 and Figure 11 show the results of the estimation process for every configuration, while Table 2 and Table 3 summarize the performance of each model in terms of mean absolute error and standard deviation [13,83].
Figure 9 shows the improvements obtained by the proposed technique over other standard EKF-based INSs. Indeed, a standard INS neglects the pure linear acceleration term, resulting in an over-estimation of the pitch and roll angles due to the unobservable body accelerations measured during a maneuver. Furthermore, the linear approximation required by the EKF formulation makes the roll and pitch angles more sensitive to the vertical component of the acceleration, causing a coupling effect during the pitch maneuver at t = 30 s. The use of a tightly coupled sensor fusion algorithm solves these issues (models $A_1$ and $A_5$, using IMU, optical flow and TOF sensors together). Furthermore, the presence of five TOF devices provides the greatest improvement in terms of pitch and roll estimation (model $A_5$). This improvement is also visible in the velocity vector estimation: while the components on the horizontal plane depend mostly on the optical flow, so that the added accuracy only corrects relatively small errors, the greatest improvements are obtained in the vertical component of the velocity and in the altitude estimate. The presence of five TOF sensors prevents errors on the attitude, making the altitude estimate more reliable.
Table 4 summarizes the computational load, in terms of CPU occupancy, needed to execute the proposed filters on an Intel Core i7-10750H based laptop. For the multi-rate EKF, it was measured by computing the ratio (percentage) between the time spent performing the filtering and the overall simulation time. As shown in the table, the introduction of the multi-rate EKF limits the increase in computational burden between models $B_1$ and $A_5$.
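The occupancy figure can be obtained with a simple wall-clock ratio, for example (a sketch; `filter_step` stands for any callable filter, such as the mr_ekf_step sketch above):

```python
import time

def cpu_occupancy(filter_step, steps, sim_time_s):
    """Percentage of the overall simulation time spent inside the filter.
    `steps` is an iterable of argument tuples for `filter_step`."""
    spent = 0.0
    for args in steps:
        t0 = time.perf_counter()
        filter_step(*args)
        spent += time.perf_counter() - t0
    return 100.0 * spent / sim_time_s
```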

5. Conclusions

Small unmanned aerial vehicles need accurate attitude and position estimates in order to deal with applications where the risk of collision is high and the control must be absolutely reliable.
With this aim, the paper dealt with the design of a Kalman-filtering-based sensor fusion algorithm that augments the typical drone navigation suite, based on inertial sensors (accelerometers, gyroscopes and magnetometers), with an optical flow camera and several time-of-flight distance sensors, in order to improve the attitude estimate while measuring the distance to the ground.
The results show the effectiveness of the proposed estimation procedure, comparing several sensor configurations and highlighting the typical issues of inertial navigation, which is subject to attitude estimation errors in accelerated flight.
Currently, the typical sensor configuration on commercial small drones involves an optical flow camera and only one ground distance sensor (sonar or TOF). The cost of adding more distance sensors is minimal in terms of weight and computational burden, but it permits a great improvement in the attitude estimate as well as in the velocity and altitude measurements.

Author Contributions

Conceptualization, E.D. and I.N.; Data curation, S.R.B.; Formal analysis, E.D., I.N. and M.M.; Funding acquisition, G.D.C.; Investigation, S.R.B., E.D., I.N. and G.A.; Methodology, E.D. and I.N.; Resources, G.D.C. and M.M.; Software, S.R.B.; Supervision, G.D.C. and M.M.; Validation, S.R.B., E.D., I.N. and G.A.; Visualization, S.R.B. and G.A.; Writing—original draft, S.R.B., E.D. and I.N.; Writing—review & editing, E.D., I.N., G.D.C. and M.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Friedman, C.; Chopra, I.; Rand, O. Indoor/outdoor scan-matching based mapping technique with a helicopter MAV in GPS-denied environment. Int. J. Micro Air Veh. 2015, 7, 55–70. [Google Scholar] [CrossRef] [Green Version]
  2. Maza, I.; Caballero, F.; Capitan, J.; Martinez-de Dios, J.R.; Ollero, A. Firemen monitoring with multiple UAVs for search and rescue missions. In Proceedings of the 2010 IEEE Safety Security and Rescue Robotics, Bremen, Germany, 26–30 July 2010; pp. 1–6. [Google Scholar]
  3. Cole, D.T.; Sukkarieh, S.; Göktoğan, A.H. System development and demonstration of a UAV control architecture for information gathering missions. J. Field Robot. 2006, 23, 417–440. [Google Scholar] [CrossRef]
  4. Flores, G.; Zhou, S.; Lozano, R.; Castillo, P. A vision and GPS-based real-time trajectory planning for a MAV in unknown and low-sunlight environments. J. Intell. Robot. Syst. 2014, 74, 59–67. [Google Scholar] [CrossRef] [Green Version]
  5. Ali, A.; El-Sheimy, N. Low-cost MEMS-based pedestrian navigation technique for GPS-denied areas. J. Sens. 2013, 2013, 197090. [Google Scholar] [CrossRef] [Green Version]
  6. Jang, J.S.; Liccardo, D. Small UAV automation using MEMS. IEEE Aerosp. Electron. Syst. Mag. 2007, 22, 30–34. [Google Scholar] [CrossRef]
  7. Barbour, N.; Schmidt, G. Inertial sensor technology trends. IEEE Sens. J. 2001, 1, 332–339. [Google Scholar] [CrossRef]
  8. Gośliński, J.; Giernacki, W.; Królikowski, A. A nonlinear filter for efficient attitude estimation of unmanned aerial vehicle (UAV). J. Intell. Robot. Syst. 2019, 95, 1079–1095. [Google Scholar] [CrossRef] [Green Version]
  9. D’Amato, E.; Nardi, V.A.; Notaro, I.; Scordamaglia, V. A Particle Filtering Approach for Fault Detection and Isolation of UAV IMU Sensors: Design, Implementation and Sensitivity Analysis. Sensors 2021, 21, 3066. [Google Scholar] [CrossRef]
  10. Bar-Itzhack, I.Y. REQUEST-A recursive QUEST algorithm for sequential attitude determination. J. Guid. Control Dyn. 1996, 19, 1034–1038. [Google Scholar] [CrossRef] [Green Version]
  11. Psiaki, M.L. Attitude-determination filtering via extended quaternion estimation. J. Guid. Control Dyn. 2000, 23, 206–214. [Google Scholar] [CrossRef]
  12. Ahmed, H.; Tahir, M. Accurate attitude estimation of a moving land vehicle using low-cost MEMS IMU sensors. IEEE Trans. Intell. Transp. Syst. 2016, 18, 1723–1739. [Google Scholar] [CrossRef]
  13. Madgwick, S.O.; Harrison, A.J.; Vaidyanathan, R. Estimation of IMU and MARG orientation using a gradient descent algorithm. In Proceedings of the 2011 IEEE International Conference on Rehabilitation Robotics, Zurich, Switzerland, 29 June–1 July 2011; pp. 1–7. [Google Scholar]
  14. Hua, M.D.; Ducard, G.; Hamel, T.; Mahony, R.; Rudin, K. Implementation of a nonlinear attitude estimator for aerial robotic vehicles. IEEE Trans. Control Syst. Technol. 2013, 22, 201–213. [Google Scholar] [CrossRef] [Green Version]
  15. Euston, M.; Coote, P.; Mahony, R.; Kim, J.; Hamel, T. A complementary filter for attitude estimation of a fixed-wing UAV. In Proceedings of the 2008 IEEE/RSJ International Conference on Intelligent Robots and Systems, Nice, France, 22–26 September 2008; pp. 340–345. [Google Scholar]
  16. Wu, J.; Zhou, Z.; Fourati, H.; Li, R.; Liu, M. Generalized linear quaternion complementary filter for attitude estimation from multisensor observations: An optimization approach. IEEE Trans. Autom. Sci. Eng. 2019, 16, 1330–1343. [Google Scholar] [CrossRef]
  17. Valenti, R.G.; Dryanovski, I.; Xiao, J. Keeping a good attitude: A quaternion-based orientation filter for IMUs and MARGs. Sensors 2015, 15, 19302–19330. [Google Scholar] [CrossRef] [Green Version]
  18. Caputo, F.; D’Amato, E.; Greco, A.; Notaro, I.; Spada, S. Human posture tracking system for industrial process design and assessment. In Intelligent Human Systems Integration; Springer International Publishing: Cham, Switzerland, 2018; pp. 450–455. [Google Scholar]
  19. Tao, W.; Liu, T.; Zheng, R.; Feng, H. Gait analysis using wearable sensors. Sensors 2012, 12, 2255–2283. [Google Scholar] [CrossRef] [PubMed]
  20. Sabatini, A.M. Quaternion-based extended Kalman filter for determining orientation by inertial and magnetic sensing. IEEE Trans. Biomed. Eng. 2006, 53, 1346–1356. [Google Scholar] [CrossRef]
  21. Hung, T.N.; Suh, Y.S. Inertial sensor-based two feet motion tracking for gait analysis. Sensors 2013, 13, 5614–5629. [Google Scholar] [CrossRef]
  22. Caputo, F.; Greco, A.; D’Amato, E.; Notaro, I.; Sardo, M.L.; Spada, S.; Ghibaudo, L. A human postures inertial tracking system for ergonomic assessments. In Congress of the International Ergonomics Association; Springer: Berlin/Heidelberg, Germany, 2018; pp. 173–184. [Google Scholar]
  23. Barshan, B.; Durrant-Whyte, H.F. Inertial navigation systems for mobile robots. IEEE Trans. Robot. Autom. 1995, 11, 328–342. [Google Scholar] [CrossRef] [Green Version]
  24. Jun, M.; Roumeliotis, S.I.; Sukhatme, G.S. State estimation of an autonomous helicopter using Kalman filtering. In Proceedings of the 1999 IEEE/RSJ International Conference on Intelligent Robots and Systems. Human and Environment Friendly Robots with High Intelligence and Emotional Quotients (Cat. No. 99CH36289), Kyongju, Korea, 17–21 October 1999; Volume 3, pp. 1346–1353. [Google Scholar]
  25. Gebre-Egziabher, D.; Hayward, R.C.; Powell, J.D. Design of multi-sensor attitude determination systems. IEEE Trans. Aerosp. Electron. Syst. 2004, 40, 627–649. [Google Scholar] [CrossRef]
  26. Choukroun, D.; Bar-Itzhack, I.Y.; Oshman, Y. Novel quaternion Kalman filter. IEEE Trans. Aerosp. Electron. Syst. 2006, 42, 174–190. [Google Scholar] [CrossRef] [Green Version]
  27. D’Amato, E.; Notaro, I.; Mattei, M.; Tartaglione, G. Attitude and position estimation for an UAV swarm using consensus Kalman filtering. In Proceedings of the 2015 IEEE Metrology for Aerospace (MetroAeroSpace), Benevento, Italy, 4–5 June 2015; pp. 519–524. [Google Scholar]
  28. Hyyti, H.; Visala, A. A DCM Based Attitude Estimation Algorithm for Low-Cost MEMS IMUs. Int. J. Navig. Obs. 2015, 2015, 503814. [Google Scholar] [CrossRef] [Green Version]
  29. Costanzi, R.; Fanelli, F.; Monni, N.; Ridolfi, A.; Allotta, B. An attitude estimation algorithm for mobile robots under unknown magnetic disturbances. IEEE/ASME Trans. Mech. 2016, 21, 1900–1911. [Google Scholar] [CrossRef]
  30. De Marina, H.G.; Espinosa, F.; Santos, C. Adaptive UAV attitude estimation employing unscented Kalman filter, FOAM and low-cost MEMS sensors. Sensors 2012, 12, 9566–9585. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  31. Chiella, A.C.; Teixeira, B.O.; Pereira, G.A. Quaternion-based robust attitude estimation using an adaptive unscented Kalman filter. Sensors 2019, 19, 2372. [Google Scholar] [CrossRef] [Green Version]
  32. Oshman, Y.; Carmi, A. Attitude estimation from vector observations using a genetic-algorithm-embedded quaternion particle filter. J. Guid. Control Dyn. 2006, 29, 879–891. [Google Scholar] [CrossRef]
  33. Zhang, C.; Taghvaei, A.; Mehta, P.G. Attitude estimation with feedback particle filter. In Proceedings of the 2016 IEEE 55th Conference on Decision and Control (CDC), Las Vegas, NV, USA, 12–14 December 2016; pp. 5440–5445. [Google Scholar]
  34. Cheng, Y.; Crassidis, J.L. Particle filtering for attitude estimation using a minimal local-error representation. J. Guid. Control Dyn. 2010, 33, 1305–1310. [Google Scholar] [CrossRef]
  35. Abdelkrim, N.; Aouf, N.; Tsourdos, A.; White, B. Robust nonlinear filtering for INS/GPS UAV localization. In Proceedings of the 2008 16th Mediterranean Conference on Control and Automation, Ajaccio, France, 25–27 June 2008; pp. 695–702. [Google Scholar]
  36. Kumar, V. Integration of Inertial Navigation System and Global Positioning System Using Kalman Filtering; Indian Institute of Technology, Bombay Mumbai: Mumbai, India, 2004. [Google Scholar]
  37. Hua, M.D. Attitude estimation for accelerated vehicles using GPS/INS measurements. Control Eng. Pract. 2010, 18, 723–732. [Google Scholar] [CrossRef]
  38. Grip, H.F.; Fossen, T.I.; Johansen, T.A.; Saberi, A. Nonlinear observer for GNSS-aided inertial navigation with quaternion-based attitude estimation. In Proceedings of the 2013 American Control Conference, Washington, DC, USA, 17–19 June 2013; pp. 272–279. [Google Scholar]
  39. Grayson, B.; Penna, N.T.; Mills, J.P.; Grant, D.S. GPS precise point positioning for UAV photogrammetry. Photogramm. Rec. 2018, 33, 427–447. [Google Scholar] [CrossRef] [Green Version]
  40. Gibson, J.J. The Perception of the Visual World; Riverside Press: Cambridge, UK, 1950. [Google Scholar]
  41. Lucas, B.D.; Kanade, T. An iterative image registration technique with an application to stereo vision. In Proceedings of the 7th International Joint Conference on Artificial Intelligence (IJCAI ’81), Vancouver, BC, Canada, 24–28 August 1981; Volume 81, pp. 674–679. [Google Scholar]
  42. Horn, B.K.; Schunck, B.G. Determining optical flow. Artif. Intell. 1981, 17, 185–203. [Google Scholar] [CrossRef] [Green Version]
  43. Lowe, D.G. Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vis. 2004, 60, 91–110. [Google Scholar] [CrossRef]
  44. Srinivasan, M.V. An image-interpolation technique for the computation of optic flow and egomotion. Biol. Cybern. 1994, 71, 401–415. [Google Scholar] [CrossRef]
  45. Zufferey, J.C.; Floreano, D. Toward 30-gram autonomous indoor aircraft: Vision-based obstacle avoidance and altitude control. In Proceedings of the 2005 IEEE International Conference on Robotics and Automation, Barcelona, Spain, 18–22 April 2005; pp. 2594–2599. [Google Scholar]
  46. Shen, C.; Bai, Z.; Cao, H.; Xu, K.; Wang, C.; Zhang, H.; Wang, D.; Tang, J.; Liu, J. Optical flow sensor/INS/magnetometer integrated navigation system for MAV in GPS-denied environment. J. Sens. 2016, 2016, 6105803. [Google Scholar] [CrossRef] [Green Version]
  47. Chao, H.; Gu, Y.; Napolitano, M. A survey of optical flow techniques for robotics navigation applications. J. Intell. Robot. Syst. 2014, 73, 361–372. [Google Scholar] [CrossRef]
  48. Yu, Z.; Hu, Y.; Huang, J. GPS/INS/Odometer/DR integrated navigation system aided with vehicular dynamic characteristics for autonomous vehicle application. IFAC-PapersOnLine 2018, 51, 936–942. [Google Scholar] [CrossRef]
  49. Golovan, A.A. INS/Odometer Integration: Positional Approach. Gyroscopy Navig. 2021, 12, 186–194. [Google Scholar] [CrossRef]
  50. Honegger, D.; Meier, L.; Tanskanen, P.; Pollefeys, M. An open source and open hardware embedded metric optical flow cmos camera for indoor and outdoor applications. In Proceedings of the 2013 IEEE International Conference on Robotics and Automation, Karlsruhe, Germany, 6–10 May 2013; pp. 1736–1741. [Google Scholar]
  51. Smith, R.; Self, M.; Cheeseman, P. Estimating uncertain spatial relationships in robotics. In Autonomous Robot Vehicles; Springer: Berlin/Heidelberg, Germany, 1990; pp. 167–193. [Google Scholar]
  52. Dissanayake, M.G.; Newman, P.; Clark, S.; Durrant-Whyte, H.F.; Csorba, M. A solution to the simultaneous localization and map building (SLAM) problem. IEEE Trans. Robot. Autom. 2001, 17, 229–241. [Google Scholar] [CrossRef] [Green Version]
  53. Dissanayake, G.; Williams, S.B.; Durrant-Whyte, H.; Bailey, T. Map management for efficient simultaneous localization and mapping (SLAM). Auton. Robot. 2002, 12, 267–286. [Google Scholar] [CrossRef]
  54. Williams, S.; Dissanayake, G.; Durrant-Whyte, H. Towards terrain-aided navigation for underwater robotics. Adv. Robot. 2001, 15, 533–549. [Google Scholar] [CrossRef] [Green Version]
  55. Kim, J.H.; Sukkarieh, S. Airborne simultaneous localisation and map building. In Proceedings of the 2003 IEEE International Conference on Robotics and Automation (Cat. No. 03CH37422), Taipei, Taiwan, 14–19 September 2003; Volume 1, pp. 406–411. [Google Scholar]
  56. Kim, J.; Sukkarieh, S. Autonomous airborne navigation in unknown terrain environments. IEEE Trans. Aerosp. Electron. Syst. 2004, 40, 1031–1045. [Google Scholar]
  57. Bryson, M.; Sukkarieh, S. Building a Robust Implementation of Bearing-only Inertial SLAM for a UAV. J. Field Robot. 2007, 24, 113–143. [Google Scholar] [CrossRef] [Green Version]
  58. Wang, F.; Wang, K.; Lai, S.; Phang, S.K.; Chen, B.M.; Lee, T.H. An efficient UAV navigation solution for confined but partially known indoor environments. In Proceedings of the 11th IEEE International Conference on Control & Automation (ICCA), Taichung, Taiwan, 18–20 June 2014; pp. 1351–1356. [Google Scholar]
  59. Li, R.; Liu, J.; Zhang, L.; Hang, Y. LIDAR/MEMS IMU integrated navigation (SLAM) method for a small UAV in indoor environments. In Proceedings of the 2014 DGON Inertial Sensors and Systems (ISS), Karlsruhe, Germany, 16–17 September 2014; pp. 1–15. [Google Scholar]
  60. Albertos, P. Block multirate input-output model for sampled-data control systems. IEEE Trans. Autom. Control 1990, 35, 1085–1088. [Google Scholar] [CrossRef]
  61. Kranc, G. Input-output analysis of multirate feedback systems. IRE Trans. Autom. Control 1957, 3, 21–28. [Google Scholar] [CrossRef]
  62. Khargonekar, P.; Poolla, K.; Tannenbaum, A. Robust control of linear time-invariant plants using periodic compensation. IEEE Trans. Autom. Control 1985, 30, 1088–1096. [Google Scholar] [CrossRef] [Green Version]
  63. Blanco, J.L. A tutorial on se (3) transformation parameterizations and on-manifold optimization. Univ. Malaga Tech. Rep 2010, 3, 6. [Google Scholar]
  64. Markley, F.L. Attitude error representations for Kalman filtering. J. Guid. Control Dyn. 2003, 26, 311–317. [Google Scholar] [CrossRef]
  65. Park, S.; Horowitz, R.; Tan, C.W. Dynamics and control of a MEMS angle measuring gyroscope. Sens. Actuators A Phys. 2008, 144, 56–63. [Google Scholar] [CrossRef]
  66. Vitali, A. Noise Analysis and Identification in MEMS Sensors, Allan, Time, Hadamard, Overlapping, Modified, Total Variance; Technical Report DT0064; STMicroelectronics: Geneva, Switzerland, 2016. [Google Scholar]
  67. D’Amato, E.; Mattei, M.; Notaro, I.; Scordamaglia, V. UAV Sensor FDI in Duplex Attitude Estimation Architectures Using a Set-Based Approach. IEEE Trans. Instrum. Meas. 2018, 67, 2465–2475. [Google Scholar] [CrossRef]
  68. Crassidis, J.L.; Markley, F.L.; Cheng, Y. Survey of nonlinear attitude estimation methods. J. Guid. Control Dyn. 2007, 30, 12–28. [Google Scholar] [CrossRef]
  69. D’Amato, E.; Mattei, M.; Mele, A.; Notaro, I.; Scordamaglia, V. Fault tolerant low cost IMUS for UAVs. In Proceedings of the 2017 IEEE International Workshop on Measurement and Networking, Naples, Italy, 27–29 September 2017; pp. 1–6. [Google Scholar]
  70. Quinchia, A.G.; Falco, G.; Falletti, E.; Dovis, F.; Ferrer, C. A comparison between different error modeling of MEMS applied to GPS/INS integrated systems. Sensors 2013, 13, 9549–9588. [Google Scholar] [CrossRef] [Green Version]
  71. Tereshkov, V.M. A simple observer for gyro and accelerometer biases in land navigation systems. J. Navig. 2015, 68, 635–645. [Google Scholar] [CrossRef] [Green Version]
  72. Hamel, T.; Mahony, R. Attitude estimation on SO[3] based on direct inertial measurements. In Proceedings of the 2006 IEEE International Conference on Robotics and Automation, Orlando, FL, USA, 15–19 May 2006; pp. 2170–2175. [Google Scholar]
  73. Thienel, J.; Sanner, R. A coupled nonlinear spacecraft attitude controller and observer with an unknown constant gyro bias and gyro noise. IEEE Trans. Autom. Control 2003, 48, 2011–2015. [Google Scholar] [CrossRef]
  74. Atesoglu, Ö.; Nalbantoglu, V.; Seymen, B. Gyro-bias estimation filter design for the stabilization accuracy enhancement of two axes gimbaled sighting systems. IFAC Proc. Vol. 2008, 41, 5011–5017. [Google Scholar] [CrossRef]
  75. Bauer, P.; Bokor, J. Multi-mode extended Kalman filter for aircraft attitude estimation. IFAC Proc. Vol. 2011, 44, 7244–7249. [Google Scholar] [CrossRef]
  76. Lam, Q.; Stamatakos, N.; Woodruff, C.; Ashton, S. Gyro modeling and estimation of its random noise sources. In Proceedings of the AIAA Guidance, Navigation, and Control Conference and Exhibit, Austin, TX, USA, 11–14 August 2003; p. 5562. [Google Scholar]
  77. Kok, M.; Hol, J.D.; Schön, T.B. Using Inertial Sensors for Position and Orientation Estimation. Found. Trends Signal Process. 2017, 11, 1–153. [Google Scholar] [CrossRef] [Green Version]
  78. Afzal, M.H.; Renaudin, V.; Lachapelle, G. Assessment of indoor magnetic field anomalies using multiple magnetometers. In Proceedings of the ION GNSS10, Portland, OR, USA, 21–24 September 2010; pp. 525–533. [Google Scholar]
  79. Afzal, M.H.; Renaudin, V.; Lachapelle, G. Multi-Magnetometer Based Perturbation Mitigation for Indoor Orientation Estimation. Navigation 2011, 58, 279–292. [Google Scholar] [CrossRef]
  80. Renaudin, V.; Afzal, M.H.; Lachapelle, G. New method for magnetometers based orientation estimation. In Proceedings of the IEEE/ION Position, Location and Navigation Symposium, Indian Wells, CA, USA, 4–6 May 2010; pp. 348–356. [Google Scholar] [CrossRef]
  81. Kuevor, P.E.; Cutler, J.W.; Atkins, E.M. Improving Attitude Estimation Using Gaussian-Process-Regression-Based Magnetic Field Maps. Sensors 2021, 21, 6351. [Google Scholar] [CrossRef]
  82. Hellmers, H.; Norrdine, A.; Blankenbach, J.; Eichhorn, A. An IMU/magnetometer-based Indoor positioning system using Kalman filtering. In Proceedings of the International Conference on Indoor Positioning and Indoor Navigation, Montbeliard, France, 28–31 October 2013; pp. 1–9. [Google Scholar] [CrossRef]
  83. Mahony, R.; Hamel, T.; Pflimlin, J.M. Nonlinear complementary filters on the special orthogonal group. IEEE Trans. Autom. Control 2008, 53, 1203–1218. [Google Scholar] [CrossRef] [Green Version]
Figure 1. Definitions of the NED and Body reference frames.
Figure 2. Rotations needed for the transformation from the NED to the Body reference frame.
Figure 3. TOF sensors arrangement on the quadrotor structure.
Figure 4. Simulation scheme. The Closed Loop Quadrotor Dynamics module includes the aircraft and its flight control system; the Sensors blocks include sensor dynamics and noise.
Figure 5. Tightly coupled architecture (model $A_\eta$).
Figure 6. Loosely coupled architecture (model $B_\eta$).
Figure 7. Simulated sensor measurements.
Figure 8. Simulated TOF sensor measurements.
Figure 9. Estimated quadrotor attitude in terms of Euler angles.
Figure 10. Estimated velocity vector components in the inertial reference frame.
Figure 11. Estimated altitude $z^E$.
Table 1. Quadcopter and sensors main parameters. $I_N$ denotes the $N \times N$ identity matrix.

Parameter                                        Value
Arm length [m]                                   1
Mass [kg]                                        0.6
Gyroscope sampling frequency [Hz]                $10^3$
Accelerometer sampling frequency [Hz]            $10^3$
Magnetometer sampling frequency [Hz]             $10^2$
Optical Flow sampling frequency [Hz]             10
TOF sampling frequency [Hz]                      50
Gyroscope bias [rad/s]                           $[0.1, 0.08, 0.11]^T$
Accelerometer bias [m/s$^2$]                     $[0.1, 0.15, 0.2]^T$
Magnetometer bias [G]                            $[0.05, 0.02, 0.06]^T$
Optical Flow bias [m/s]                          $[0.1, 0.07]^T$
TOF bias [m]                                     $[1.0, 0.7, 0.5, 0.1, 0.2]^T \cdot 10^{-3}$
Gyroscope noise covariance [(rad/s)$^2$]         $8.0 \cdot I_3 \cdot 10^{-8}$
Accelerometer noise covariance [(m/s$^2$)$^2$]   $1.0 \cdot I_3 \cdot 10^{-4}$
Magnetometer noise covariance [G$^2$]            $4.0 \cdot I_3 \cdot 10^{-8}$
Optical Flow noise covariance [(m/s)$^2$]        $1.0 \cdot I_2 \cdot 10^{-6}$
TOF noise covariance [m$^2$]                     $2.6 \cdot I_5 \cdot 10^{-8}$
Table 2. Models comparison in terms of Mean Absolute Error.

                          Model $A_5$   Model $A_1$   Model $B_5$   Model $B_1$
$MAE_\phi$ [deg]          0.0066        0.7369        0.0063        0.2156
$MAE_\theta$ [deg]        0.0098        0.6616        0.0096        0.0707
$MAE_\psi$ [deg]          0.0179        0.2104        0.019         0.1572
$MAE_{z_E}$ [m]           0.0012        0.084         0.0013        0.0899
$MAE_{V_{E_x}}$ [m/s]     0.0106        0.0149        0.0075        0.0594
$MAE_{V_{E_y}}$ [m/s]     0.0174        0.0089        0.009         0.0082
$MAE_{V_{E_z}}$ [m/s]     0.0055        0.017         0.0058        0.0055
Table 3. Models comparison in terms of standard deviation of the errors.

                          Model $A_5$   Model $A_1$   Model $B_5$   Model $B_1$
$\sigma_\phi$ [deg]       0.0619        2.8755        0.0640        3.0172
$\sigma_\theta$ [deg]     0.0730        3.0470        0.0770        3.0838
$\sigma_\psi$ [deg]       0.1949        1.5606        0.1979        1.66002
$\sigma_{z_E}$ [m]        0.0028        0.1627        0.0030        0.1851
$\sigma_{V_{E_x}}$ [m/s]  0.1042        0.4232        0.1316        0.4406
$\sigma_{V_{E_y}}$ [m/s]  0.1136        0.4255        0.1307        0.3780
$\sigma_{V_{E_z}}$ [m/s]  0.0231        0.0858        0.0250        0.1941
Table 4. CPU time (ratio between the time spent in the EKF function and the overall simulation time).

           Model $A_5$   Model $A_1$   Model $B_5$   Model $B_1$
CPU [%]    3.34          3.15          3.03          1.9
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
