A Sensor Fusion Approach to Observe Quadrotor Velocity

The growing use of Unmanned Aerial Vehicles (UAVs) raises the need to improve their autonomous navigation capabilities. Visual odometry allows dispensing with positioning systems, such as GPS, especially during indoor flights. This paper reports an effort toward UAV autonomous navigation by proposing a translational velocity observer for a quadrotor based on inertial and visual measurements. The proposed observer complementarily fuses available measurements from different domains and is synthesized following the Immersion and Invariance observer design technique. A formal Lyapunov-based proof of observer error convergence to zero is provided. The proposed observer algorithm is evaluated using numerical simulations in the Parrot Mambo Minidrone App from Simulink-MATLAB.


Introduction
Thanks to the advancement of unmanned aerial vehicles, these platforms have gained ground in various applications, including law enforcement, precision agriculture, and architectural/industrial inspection. Additionally, researchers have adopted this aircraft type, which offers excellent academic advantages, to test new nonlinear control theory methods. Crewless aircraft have made a quantum leap, and nonlinear control theory has evolved alongside them. The foundations of classical nonlinear control theory, built by, for instance, [1,2], have been rapidly adapted to this century's new technologies, as reported in [3,4]. Hence, quadrotor aerial vehicles have become a worldwide standard platform for robotics research. In particular, the Parrot Mambo Minidrone App in Simulink-MATLAB offers excellent functionality for testing new control and state observation algorithms. It provides semi-realistic inertial and visual sensors, making real-time implementation straightforward. Regarding the first developments of sensor fusion for unmanned aerial vehicles, in [5] the measurements from an Inertial Navigation System (INS) and GPS sensors are fused using a Kalman filter. The simulation experiment examined the data of both sensors, and tool implementations of the filter were considered. The research article [6] establishes a basic requirement for an autonomous robot: merging the data of different sensors; odometric and sonar sensors are fused by means of an Extended Kalman Filter (EKF). The adaptive algorithm performs precise localization of the vehicle and can be applied in a wide range of experimental situations. The researchers in [7] describe a simple yet powerful statistical technique for fusing information from different sensors that aims to obtain the spectral error densities of the navigation gravity model, completing a robust aerial model that absorbs vertical disturbances. The information is computed in the time-dependent frequency domain.
In [8], the authors present a real-time implementation of position controllers based on the Adaptive Proportional-Integral-Derivative (APID) method for the Parrot Mambo Minidrone. An adaptive mechanism based on a second-order sliding mode control is also included to modify the conventional parameters of the altitude controller. Simulations and experimental flights validate the success of this approach. The research reported in [9] synthesizes Proportional-Integral-Derivative (PID), Linear Quadratic Regulator (LQR), and Model Predictive Control (MPC) algorithms for a Parrot Minidrone, and the control algorithms' strengths and weaknesses are analyzed. These control techniques were chosen considering that an MPC design for Parrot Minidrones is unavailable in the existing literature. The automatic code generation capabilities offered by the Simulink coder facilitated the experimental implementation of the proposed control algorithms. The use of a linear quadrotor model evidently causes a relatively large mismatch between simulations and experimental results. The work in [10] presented the real-time implementation of a novel Cartesian translational robust control strategy for the Parrot Mambo Minidrone. Multiple data were collected from actual flight experiments involving a set of four Parrot Mambo Minidrones. A first-order dynamic with time delay was identified as the mathematical model for the Cartesian translational dynamics. The control strategy was designed based on a parameter-variant discrete-time linear system. Reference [11] reports the design and implementation of a cascade attitude controller for the Parrot Mambo Minidrone. The controller was designed using the triple-step and nonlinear integral sliding mode control (NISMC) methods, considering the three-DOF nonlinear model. In [12], a hand gesture-based drone controller for the Parrot Mambo Minidrone is developed; the controller can perform take-off, hovering, and landing operations. The delay between executing each command is only three seconds, and the system accuracy obtained for hand gesture detection is remarkably good. The system can be further extended by modifying the controller so that the UAV can move and perform multiple commands using hand gestures.
Current directions aim to develop nonlinear observer strategies for autonomous navigation using only onboard sensors. The work in [13] proposed an embedded fast Nonlinear Model Predictive Control (NMPC) algorithm. This controller ensures a stable and safe flight of micro aerial robots relying solely on onboard sensors to localize themselves. The implemented controller can drive the micro aerial vehicle to track a path and withstand external disturbances. The design of a computationally efficient optical flow algorithm for tiny multirotor aerial vehicles, also known as pocket drones, is the focus of the work reported in [14]. The EdgeFlow algorithm uses a compressed representation of an image frame to match it with the one from the previous time step; the adaptive time horizon also enables it to detect sub-pixel flow, from which velocities can be observed. Reference [15] presents a fusion filter design using the Kalman Filter (KF) complemented with the No Motion No Integration (NMNI) filter to reduce the influence of noise when observing the tilt angle using only one accelerometer and one gyroscope. This approach relieves the burden of multiple sensors for attitude tracking, reducing battery energy consumption and helping the drone obtain a precise angle under rotor vibrations.
Translational velocity observation for quadrotors is an appealing problem from practical and theoretical perspectives. In [16], a globally exponentially convergent observer based on nonlinear adaptive techniques is proposed. The observer uses measurements from an Attitude and Heading Reference System (AHRS). Numerical simulations evaluate the observer algorithm, which shows good performance in the presence of noisy acceleration measurements. Reference [17] reports the design of a deterministic quadrotor translational velocity observer employed to reconstruct the scale factor of the position determined by a Simultaneous Localization And Mapping (SLAM) algorithm. The translational velocity observer uses the translational acceleration and the non-scaled translational position; its performance is evaluated through numerical simulations. Translational velocity observation is also an essential issue for fixed-wing aircraft. Reference [18] presents an exponentially stable nonlinear wind velocity observer for fixed-wing unmanned aerial vehicles. The proposed observer employs a GNSS-aided Inertial Navigation System (INS), an attitude observer, and a pitot-static probe measuring dynamic pressure and airspeed in the longitudinal direction. The proposed observer estimates wind velocity and, as by-products, the angles of attack and sideslip and the scaling factor of the pitot-static probe measurement, without requiring UAV maneuvers with Persistence of Excitation (PE). The algorithm is well-suited for embedded systems and was tested through simulations. Finally, in [19], a uniform, semi-global, exponentially stable nonlinear observer for attitude, gyro bias, position, velocity, and specific force estimation for a fixed-wing UAV is presented. The nonlinear observer uses inertial and visual measurements without any assumptions about the flight altitude or the structure of the terrain being recorded. Experimental data from a UAV test flight and simulated data show that the nonlinear observer performs robustly.
Examining the existing literature reveals a significant gap in quadrotor translational velocity observation: no deterministic observer complementarily fuses inertial and visual measurements with a formal proof of convergence of the observation error and a semi-realistic numerical simulation. By incorporating novel methodologies such as the Immersion and Invariance method, this research fills this gap and presents a technique to complementarily fuse both measurements, accompanied by a formal proof of the convergence to zero of the observation error. This research was driven by the aim to enhance quadrotor capabilities to fly autonomously using only onboard sensors. In relation to the autonomous performance of aerial vehicles, the work in [20] considers mounted base stations, which are expected to become an integral component of future intelligent transportation systems. The aerial vehicle, integrated with the ground vehicular network, is used to bridge coverage gaps, offer broader communication services, and improve network connection stability. Environmental self-awareness is the main task performed by the autonomous unmanned vehicle. As mentioned, a fundamental goal of the autonomous UAV is the development of navigation systems that help realize any kind of aerial operation; in [21], a nano-UAV learns to detect and fly through programmed trajectories with an autonomous navigation system based on neural networks. That research is extremely ambitious because, besides the autonomy requirements, it needs to establish appropriate sensor fusion as well as an overall perspective of the whole aerial environment. Key insights from the present research include the complementary nature of the inertial and visual sensors. Moreover, the findings reported in this article complement the work presented in [17]. The nonlinear translational velocity observer of [17] degrades when the quadrotor remains in hover; this problem is overcome here by adding the optical flow measurement. Adding the optical measurement also gives the proposed observer an advantage over the observer reported in [16]. Finally, it is important to note that the nonlinear observers reported in [18,19] use different sensors.
This paper introduces a novel sensor fusion strategy for observing the quadrotor translational velocity. The proposed observer algorithm uniquely fuses inertial and visual measurements. This fusion strategy is synthesized following the Immersion and Invariance method introduced in [3]. The observer error convergence to zero is guaranteed using Lyapunov theory arguments, and its performance is evaluated through numerical simulations in the semi-realistic Parrot Minidrone app from MATLAB-Simulink. It is demonstrated that the proposed observer operates effectively under noisy measurements and different quadrotor motions.
This work has the following structure: Section 2 presents the quadrotor model and describes the available measurements. Section 3 reports the sensor fusion algorithm and formally states the nonlinear observer design. Section 4 is devoted to the numerical simulation study, the observation error, and the stability properties of the designed observer. Finally, Section 5 presents the conclusions of this work.

Quadrotor Mathematical Model and Available Measurements

Quadrotor Dynamics
The basic structure of the dynamic quadrotor model has been reported in various research papers and books. This work considers the dynamic quadrotor model reported in [17,22]. The model contains states expressed in two reference frames, the inertial and the fixed-body frames, and is described by a set of differential equations; see Figure 1.
where X = [x y z]^⊤ is the quadrotor position in the inertial reference frame, R ∈ SO(3) is the rotation matrix from fixed-body to inertial coordinates, satisfying R R^⊤ = I, where I ∈ R^{3×3} is the identity matrix, and V_b = [u v w]^⊤ denotes the translational velocity expressed in the fixed-body frame. Moreover, m represents the vehicle's mass, g is the gravitational acceleration constant, e_3 = [0 0 1]^⊤, and T_T is the total thrust force produced by the four rotors.

Available Measurements
This work considers that the quadrotor has an Inertial Measurement Unit (IMU), a monocular camera pointing downwards, and a height sensor. Thus, the following signals are available.

Quadrotor's Specific Translational Acceleration
As the leading electronic sensor, multi-rotors carry an IMU, which provides information in the fixed-body coordinates. An IMU delivers the specific translational acceleration a_b, the angular velocity Ω, and the intensity of the Earth's magnetic field. According to the work reported in [23], the specific translational acceleration measured by the IMU's accelerometer mounted on board an aerial vehicle is given by (2), where F_T^b models the total external forces acting on the vehicle where the IMU is mounted. From the second equation in (1), Equation (3) follows. Notice that this equation does not include the term −mS(Ω)V_b because the Coriolis acceleration is an inertial force. Substituting (3) into (2), the specific acceleration force measured by an accelerometer on board a quadrotor is given by (4). The first available measurement is the specific translational acceleration, which reads element-to-element in the following way,

Quadrotor's Attitude and Angular Velocity
The IMU data, processed through an estimation algorithm, constitutes an Attitude and Heading Reference System (AHRS) that provides the quadrotor attitude R and angular velocity Ω. The AHRS can deliver the attitude using a parameterization such as Euler angles or quaternions, or by directly delivering the rotation matrix R. Thus, one has

Optical Flow
Nowadays, the second crucial sensor for autonomous navigation is a monocular camera. By processing the camera image, the pixel velocity can be measured and related to the quadrotor velocity.
The cornerstone of interpreting an image inside a computer is brightness. The camera sensors integrate the irradiance from the scene so that I(x_p, y_p) defines the brightness of the pixel located at (x_p, y_p); thus, the k-th image captured at time t can be characterized as in [24]. Computer vision deals with extracting meaningful information from images, that is, extracting meaningful information from brightness. A much more straightforward problem is computing image motion information from brightness. Consider two images of the same pixel location, I_1(x_p, y_p, t_1) and I_2(x_p, y_p, t_2), taken from infinitesimally close vantage points at consecutive time instants t_2 ≈ t_1, where t_2 = t_1 + ∆t with ∆t an infinitesimal time increment. Assume that only pixels belonging to flat image portions parallel to the image plane and moving parallel to the image plane are considered. Then,
x_p(t_1 + ∆t) = x_p(t_1) + u_p ∆t
y_p(t_1 + ∆t) = y_p(t_1) + v_p ∆t
with u_p and v_p the pixel velocity components. As a result, one obtains Equation (7). Expanding the right-hand side of Equation (7) in a Taylor series, one has Equation (8). Neglecting the high-order terms, the resulting equation is known as the brightness constancy constraint [24] or the optical flow constraint equation [25,26]. Note that it is impossible to compute the speed components u_p and v_p perpendicular to the image gradient using the brightness constancy constraint; this drawback is known as the aperture problem [27]. It is necessary to evaluate the brightness constancy constraint at each pixel location belonging to a region where u_p and v_p can be assumed constant, for example, a window W_i(x_p, y_p) ⊂ R, with R the image plane. If the window W_i(x_p, y_p) is fixed inside the image plane, the computed speeds u_p and v_p are known as the optical flow. Using the differential method proposed by Lucas-Kanade [28], the computation of constant values of u_p and v_p in each small neighborhood W_i(x_p, y_p) can be implemented as the minimization of a weighted sum of squared constraint residuals, a function giving more influence to constraints at the center of W_i(x_p, y_p) than to those on the boundary. As reported in [29], this method is one of the most reliable. Consider a camera on board the aerial vehicle monitoring several characteristic points P_i, as illustrated in Figure 2; it is worth mentioning that this Figure comes from [30] with slight additions. The velocity of each point relative to the camera is given by Equation (10). From the perspective projection condition, it follows that the location of each characteristic point projects onto the image plane as in Equation (11),
Figure 2. Pinhole camera principle [30].
with f the focal length of the camera lens. Combining (10) and (11), one obtains the pixel velocity (u_p^i, v_p^i) registered by the camera due to the quadrotor motion. Figure 2 shows that z_c = z. Using the differential method proposed by Lucas-Kanade to determine u_p and v_p in a region W_i(x_p, y_p), it is possible to obtain the quadrotor translational velocity as in Equation (13), for an image region W_i(x_p, y_p) for some i.
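The Lucas-Kanade step described above can be sketched numerically. The following is a minimal sketch, not the simulator's implementation: it solves, over one window, the weighted least-squares problem built from the brightness constancy constraint, where the Gaussian weighting width sigma and the gradient approximations are illustrative assumptions.

```python
import numpy as np

def lucas_kanade_window(I1, I2, sigma=2.0):
    """Estimate a constant pixel velocity (u_p, v_p) in one window by
    weighted least squares on the brightness constancy constraint
    Ix*u_p + Iy*v_p + It = 0."""
    I1 = I1.astype(float)
    I2 = I2.astype(float)
    Iy, Ix = np.gradient(I1)          # spatial gradients (rows = y, cols = x)
    It = I2 - I1                      # temporal gradient between the frames
    h, w = I1.shape
    yy, xx = np.mgrid[0:h, 0:w]
    # Gaussian weights: constraints near the window centre count more
    W = np.exp(-((xx - w / 2) ** 2 + (yy - h / 2) ** 2) / (2 * sigma ** 2))
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    wd = W.ravel()
    # Weighted normal equations  A^T W A [u_p, v_p]^T = -A^T W It
    ATA = A.T @ (A * wd[:, None])
    ATb = A.T @ (wd * (-It.ravel()))
    return np.linalg.solve(ATA, ATb)  # [u_p, v_p] in pixels per frame
```

On a synthetic pattern shifted by one pixel along x between frames, the routine recovers u_p ≈ 1 and v_p ≈ 0, up to the linearization error of the constraint.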
Consider the following assumption.
Assumption 1. The optical flow is computed in a region W_i that contains the pixel image origin; that is, x_p = y_p = 0. Moreover, the quadrotor has a height controller that ensures z ≈ z̄ for some constant z̄, and w ≈ 0.
Under Assumption 1, Equation (13) reduces accordingly, with

Height Sensor
Ultrasonic or laser distance-measuring devices can be mounted on quadrotors. Hence, it is assumed that the following signals are also measured; thus, the matrix H_1 is also measurable since it can be expressed in terms of the measured signals. Finally, note that, in terms of the measurable signals, the second equation in (1) can be expressed as
V̇_b = g y_2^⊤ e_3 + y_1 − S(y_3) V_b   (16)

Nonlinear Observer Design
Sensor fusion has become essential for solving mobile robotics state observation problems [31]. Better spatial and temporal coverage, robustness to sensor failures, and increased state observation accuracy are the main desirable properties of sensor fusion algorithms. This research article proposes a sensor fusion algorithm to estimate the quadrotor Cartesian velocity based on the Immersion and Invariance observer design technique proposed in [3]. The proposed fusion algorithm can be classified as complementary fusion across domains [32,33]: both sensors measure the same quantity in different domains, acceleration and pixel velocity, and they work in a complementary configuration. Figure 3 schematically illustrates the proposed sensor fusion. The sensor fusion algorithm is designed in the deterministic nonlinear time-invariant framework. The observer design method can be explained as follows. Consider the following nonlinear, deterministic, time-invariant system [3],
where η ∈ R ⊂ R^n and y ∈ Y ⊂ R^m are the unmeasured and measured states, respectively.
Definition 1. The dynamic system
η̂̇ = ϕ(η̂, y)   (18)
with η̂ ∈ R^n, is a sensor fusion observer for the unmeasured state η if there exists a mapping β : R^n × R^m → R^n such that the manifold M, defined in (19), has the following properties:
• M is positively invariant;
• All trajectories of (17) and (18) that start in a neighborhood of M asymptotically converge to M.
The design of an observer of the form given in Definition 1 requires additional properties of the mapping β(η̂, y), as stated in the following result.
Theorem 1. Consider the system (17). Assume that the vector fields f_1(η, y) and f_2(η, y) are forward complete and that there exists a differentiable map β : R^n × R^m → R^n such that
A1: for all η̂ and y, the map β(η̂, y) satisfies the stated conditions, and the resulting error dynamics have a (globally) asymptotically stable equilibrium at the origin, uniformly in η and y.
Then, the system (18), with ϕ chosen accordingly, is a (global) observer for (17).
Remark 1. The result in Theorem 1 is a simplified version of the general observer design theory reported in [3]. The proof of Theorem 1 was reported in [17]. The sensor fusion characteristics of the observer in Definition 1 are as follows. Assume that two measurements y_1 and y_2 contain information on the nonmeasurable state η. Then, it is possible to define a function γ(y_1, y_2) that fuses both measurements, so that the function β(η̂, γ(y_1, y_2)) integrates the fusion into the observer design procedure.
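The mechanism of Definition 1 and Theorem 1 can be illustrated on a toy problem (not the quadrotor model): a scalar second-order system with measured position and unmeasured velocity. The mapping β(y) = k·y and all numerical values below are illustrative assumptions.

```python
import numpy as np

# Toy Immersion-and-Invariance observer: the system
#   x1_dot = x2,  x2_dot = -a*x2 + u(t)
# has measured output y = x1 and unmeasured state x2. With beta(y) = k*y,
# the estimate is x2_hat = eta_hat + k*y, and on the manifold
# eta_hat + beta(y) - x2 = 0 the error z = x2_hat - x2 obeys
# z_dot = -(a + k)*z, i.e., exponential convergence for k > -a.
a, k, dt = 0.5, 4.0, 1e-3
u = lambda t: np.sin(t)

x1, x2 = 0.0, 1.0        # true states (x2 unknown to the observer)
eta = 0.0                # observer state; initial estimate x2_hat = 0
errs = []
for i in range(int(10 / dt)):
    t = i * dt
    x2_hat = eta + k * x1
    # observer dynamics: a copy of x2_dot evaluated at the estimate,
    # minus k times the estimated derivative of y
    eta += dt * (-a * x2_hat + u(t) - k * x2_hat)
    # plant integration (forward Euler)
    x1, x2 = x1 + dt * x2, x2 + dt * (-a * x2 + u(t))
    errs.append(abs(eta + k * x1 - x2))
# errs decays geometrically at rate (a + k), as Theorem 1 predicts
```

The off-manifold coordinate z is driven to zero by the choice of the observer vector field, which is precisely the invariance and attractivity of M required in Definition 1.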

Cartesian Velocity Observer
Here, the quadrotor Cartesian velocity observer is designed. The observer fuses all the measurements described in Section 2.2. First, using all available measurements, the following measurements are tailored. From the definition of the manifold (19), the observer error is defined in (23), with α_1 and α_2 scalar constants. Note that the function γ(ȳ_1, ȳ_4) fuses the information of the same quantity, the translational velocity, expressed in different domains: body-axes acceleration ȳ_1 and optical flow ȳ_4. The scalars α_1 and α_2 modulate the fusion. The time derivative of the observer error Ṽ_b is computed next. Using (16) and (22), one obtains (26). Now, to express (26) in terms of the observer error, Equation (23) is solved for V_b and substituted into (26). Defining the correction term (28), with Γ a matrix gain, the state observer dynamics can be defined as in (29). It is important to verify that the state observer dynamics depend only on available measurements and known parameters. Then, the vector differential equation (30) describes the observer error dynamics. Hence, one has the following.
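The fusion role of γ(ȳ_1, ȳ_4) can be caricatured in one dimension: the inertial channel is integrated (reliable at high frequency) while the optical-flow channel corrects the accumulated drift (reliable at low frequency). The simplified update below, including the gains alpha1, alpha2, and gamma, is a hedged sketch of this complementary structure only; it is not the paper's full observer (29), which also involves the S(y_3) coupling.

```python
import numpy as np

def fuse_step(v_hat, accel_body, flow_vel, dt, alpha1=1.0, alpha2=1.0, gamma=5.0):
    """One step of a one-dimensional complementary fusion sketch:
    predict from the inertial channel, correct toward the visual channel."""
    v_pred = v_hat + dt * alpha1 * accel_body                  # integrate acceleration
    return v_pred + dt * gamma * alpha2 * (flow_vel - v_pred)  # optical-flow correction
```

With a biased accelerometer and an unbiased optical-flow velocity, the estimate stays in a neighborhood of the true velocity instead of drifting, which is the qualitative behavior the complementary fusion is designed to achieve.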
Proposition 1. Consider that Assumption 1 holds. Assume that the quadrotor is equipped with a set of sensors to measure y_i, i = 1, …, 6. Assume that the quadrotor flies over a surface with enough visual characteristics and that there is a region W_i, containing the pixel location x_p = y_p = 0, where the optical flow is constant. Then, there exist constants α_1, α_2 and a matrix Γ such that the observer dynamics (29) complementarily fuse the available measurements and the observer error Ṽ_b exponentially converges to zero.
Proof. The function γ(ȳ_1, ȳ_4) in (24) performs the complementary fusion of the available measurements directly related to the quadrotor translational velocity. From this point, the observer design follows the lines of Theorem 1.
To analyze the observation error's stability properties, consider the Lyapunov function (31), with Γ_1 ∈ R^{3×3} a diagonal positive definite matrix. Thus, one has the standard quadratic bounds, with λ_m(A) and λ_M(A) the smallest and greatest eigenvalues of a matrix A.
The time derivative of (31) along the trajectories of the observer error dynamics (30) follows. Since S(y_3) is a skew-symmetric matrix, the corresponding quadratic term vanishes. It is straightforward to verify that there exist α_1, α_2, Γ, and Γ_1 such that the resulting matrix is negative definite. Thus, the time derivative of the Lyapunov function is negative definite, and the proof is concluded.

Results
This section presents a semi-realistic numerical simulation study to validate the theoretical developments of the previous section. First, the available measurements, the body-axes acceleration and the optical flow, must be computed. Then, the proposed observer is implemented. It is essential to underscore that the Parrot Minidrone simulator provided by MATLAB-Simulink incorporates realistic quadrotor dynamics and sensor models. The works in [8-10] illustrate that the experimental implementation is straightforward after performing numerical simulations using this simulator; see also the work in [34], where semi-realistic simulations are performed.
The Parrot Minidrone's physical characteristics and the camera specifications are summarized in Table 1.

Determination of the Parameter µ
Note that in Equation (24), two essential measurements to reconstruct the quadrotor velocity are ȳ_1 and ȳ_4. Table 1 shows that the quadrotor's mass is available, but the parameter µ is not. Hence, the quadrotor follows a circular trajectory, recording its acceleration a^b_x and velocity u along the 0X_b axis, to determine µ from the corresponding relationship. It is important to highlight that, to compute µ, the measured acceleration a^b_x was filtered using a first-order low-pass filter, as recommended in [23]. Then, the value of µ was computed as the average of the µ values obtained during the flight. Figure 4 shows the recorded acceleration a^b_x (white line) and the reconstructed acceleration ā^b_x (blue line). Hence, for this quadrotor, one has µ = 0.0035. The parameter µ is related to the blades' aerodynamic profile and induced drag forces. This positive constant is known in the helicopter literature as blade drag [23,35]. Note that this is an open-loop reconstruction, so a^b_x and ā^b_x are not expected to coincide exactly. However, as reported in [17,23], this procedure gives an adequate approximation of µ.
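The identification procedure can be sketched as follows, assuming the blade-drag relation takes the form a^b_x = −(µ/m)·u (the paper's exact relationship appears in the omitted equation); the filter time constant tau and the low-speed threshold are illustrative assumptions.

```python
import numpy as np

def estimate_mu(a_x, u, m, dt, tau=0.1):
    """Estimate the blade-drag parameter mu from logged acceleration a_x and
    velocity u, assuming a_x = -(mu/m)*u. The acceleration is smoothed with
    a first-order low-pass filter, then mu is averaged over the flight."""
    a_f = np.empty_like(a_x)
    a_f[0] = a_x[0]
    alpha = dt / (tau + dt)               # discrete first-order filter gain
    for k in range(1, len(a_x)):
        a_f[k] = a_f[k - 1] + alpha * (a_x[k] - a_f[k - 1])
    valid = np.abs(u) > 0.05              # avoid dividing by near-zero speed
    return float(np.mean(-m * a_f[valid] / u[valid]))
```

Averaging the point-wise ratio over the whole flight, as described above, attenuates both the high-frequency sensor noise (through the filter) and the remaining zero-mean fluctuations (through the mean).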

Optical Flow Algorithm Design
The optical flow estimation algorithm is implemented in the Parrot Minidrone Competition simulation environment, specifically in the Image Processing subsystem of the Flight Control System block. The optical flow estimation proceeds as follows: (a) the monocular camera information, which arrives in the Y1UY2V format, is transformed into the RGB format; (b) the RGB image is transformed to grayscale and filtered using an FIR (Finite Impulse Response) filter represented as a 2D coefficient matrix or a pair of separable filter coefficient vectors; (c) the filtered image is used to estimate the optical flow employing the Lucas-Kanade method. The block implementing the Lucas-Kanade method delivers the pixel displacement per frame as a complex number for each pixel of the image.
These calculated displacements are multiplied by the corresponding number of frames per second (an intrinsic parameter of the camera) to obtain the pixel velocity. Finally, the pixel velocities are subjected to statistical processing consisting of selecting a region of interest on the image plane which, considering Assumption 1, corresponds to the image origin. Figure 5 shows a block diagram of the optical flow algorithm.
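Steps (a)-(c) and the scaling stage can be sketched outside Simulink as follows; this is a minimal sketch under stated assumptions: frames are taken to arrive already in RGB (the Y1UY2V conversion is simulator-specific and omitted), the FIR filter is a separable 5-tap binomial smoother, FPS is an assumed frame rate, and flow_px_per_frame stands in for the Lucas-Kanade block output.

```python
import numpy as np

FPS = 60.0                                  # assumed camera frame rate
KERNEL = np.array([1, 4, 6, 4, 1]) / 16.0   # separable binomial FIR taps

def preprocess(rgb):
    """(b) RGB -> grayscale (BT.601 weights), then separable FIR smoothing
    applied along rows and then along columns."""
    gray = rgb[..., 0] * 0.299 + rgb[..., 1] * 0.587 + rgb[..., 2] * 0.114
    gray = np.apply_along_axis(lambda r: np.convolve(r, KERNEL, 'same'), 1, gray)
    gray = np.apply_along_axis(lambda c: np.convolve(c, KERNEL, 'same'), 0, gray)
    return gray

def pixel_velocity(flow_px_per_frame, roi_mask):
    """Scale displacement/frame to pixels/second and average over the region
    of interest around the image origin (Assumption 1)."""
    return FPS * np.mean(flow_px_per_frame[roi_mask], axis=0)
```

The separable filter is applied as two 1-D convolutions, mirroring the "pair of separable filter coefficient vectors" option mentioned in step (b).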

Quadrotor Trajectories
To test the proposed observer, two trajectories are considered. In the first one, the quadrotor tracks a circle, while in the second one, the quadrotor visits two waypoints. Figure 6 shows the quadrotor's path during the circle-tracking flight at a constant altitude. Figure 7 depicts the quadrotor's path during the two-waypoint flight, again at a constant altitude.

Measurements
Note that the parameter µ is required to implement the proposed observer and to reconstruct the quadrotor-specific force. The quadrotor acceleration is obtained directly from the IMU implemented in the simulator. Figures 8 and 9 show the specific force along the 0X_b axis delivered by the IMU when the quadrotor tracks the circular and the square-type (the quadrotor moves first along the 0X_b axis and, 10 s later, along the 0Y_b axis) trajectories, respectively. It is essential to state that the IMU from Simulink is implemented considering that the accelerometers also measure the gravity force, contradicting Equation (4). Hence, the gravitational force is subtracted from the IMU's accelerometer measurement to match (4).

Observer Evaluation
The observer state dynamics described by Equation (29) were implemented in the simulator. Figures 16 and 17 present the observation errors to examine the proposed algorithm's performance more closely. Note that the observation errors only converge to a neighborhood of zero. This behavior results from the semi-realistic simulation that includes noise in the measurements. However, the strong result in Proposition 1 hints at this observer behavior.
Remark 2. Under mild assumptions, consider that measurement noise enters the translational velocity dynamics (16) as an additive term δ(t), with ∥δ(t)∥ ≤ δ̄ a bounded time-varying disturbance modeling the noise in all measurements. Then, the observer error dynamics become perturbed accordingly. Now, from the analysis in Proposition 1, one obtains the bound in (44). Hence, from Lemma 9.2 in [2], it follows that the observer error is ultimately bounded, as observed in Figure 16. Note in (44) that the ultimate bound depends on the noise level modeled by δ̄.
A more challenging problem, where the proposed observer design method may fail, is the case where the noise enters the translational velocity dynamics through two channels, with ∥δ_1(t)∥ ≤ δ̄_1 and ∥δ_2(t)∥ ≤ δ̄_2 bounded time-varying disturbances.

Observer Gains
A nonlinear time-varying model describes the observer error dynamics (see Equation (30)). Thus, determining adequate observer gains is a complex task; however, under some mild assumptions, it is possible to determine a suitable observer gain combination, at least locally. For example, assume that y_3 ≈ 0; then, the observer error dynamics reduce to a linear time-invariant form, and the eigenvalues of the resulting matrix locally shape the observer error dynamics, as in Equation (47). From Equation (47), the selection of γ_3 follows trivially. Now, assuming that γ_1 = γ_2, the combination of γ_1, α_1, and α_2 was selected as follows. For a fixed value of γ_1, the eigenvalue λ = λ_1 was computed considering intervals for α_1 and α_2 that give a negative value, as illustrated in Figure 18. It is important to remember that a negative eigenvalue alone does not guarantee an adequate observation of the quadrotor velocity because of numerical problems in the simulation.
Figure 18 shows the region of suitable values of the scalar constants together with the corresponding eigenvalues. The displayed color scale denotes the set of eigenvalues λ obtained for the different values that the scalar constants α_1 and α_2 can take. The last result, Figure 19, shows the estimation errors analyzed only with respect to the x-axis, because the y-axis presents analogous behavior. As expected, increasing the magnitude of λ decreases the observer error until, at a specific value of λ, numerical errors appear and the observer error diverges.
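The gain selection of Figure 18 amounts to sweeping a grid of gain pairs and keeping those whose linearized error matrix has all eigenvalues in the open left half-plane. The sketch below is generic: A_fun is a hypothetical placeholder for the (here omitted) simplified error matrix, not the paper's expression.

```python
import numpy as np

def sweep_gains(A_fun, a1_grid, a2_grid):
    """For each gain pair, record the largest real part of the eigenvalues of
    the linearized error matrix A_fun(a1, a2); negative entries mark
    (locally) stable combinations, as in the Figure 18 map."""
    lam = np.empty((len(a1_grid), len(a2_grid)))
    for i, a1 in enumerate(a1_grid):
        for j, a2 in enumerate(a2_grid):
            lam[i, j] = np.max(np.linalg.eigvals(A_fun(a1, a2)).real)
    return lam
```

Plotting lam over the grid reproduces the kind of chromatic stability map shown in Figure 18, from which a gain pair with a suitably negative slowest eigenvalue can be picked.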

Conclusions
This article proposed a novel sensor fusion algorithm to observe the translational velocity of a quadrotor using available measurements from onboard sensors. Besides the successful implementation of the designed observation algorithm, the main contributions are listed next:

• The designed observer can estimate the linear speeds of the aircraft with precision for different trajectories.
• The optical flow was obtained without using image features or patterns, in contrast to other research works.
• The application of Lyapunov theory through a properly proposed Lyapunov function demonstrates the asymptotic convergence to zero of the nonlinear observer error.
• As mentioned initially, this article was intended to compensate for the overall information loss in indoor flights by observing the vehicle's translational velocities. The observed velocity is precise enough that the main objective has been successfully fulfilled.
• As evoked in the introduction, this research surpassed the performance of the observer proposed in [17], mainly in flight trajectories in which, at certain moments, the MAV makes no displacements and remains in stationary or hover flight, thanks to the incorporation of the optical flow into the observation algorithm.
In the quadrotor model, S(•) is a skew-symmetric matrix such that a × b = S(a)b for all a, b ∈ R^3, J = diag{J_xx, J_yy, J_zz} is the inertia matrix, Ω = [p q r]^⊤ is the quadrotor rotational velocity expressed in fixed-body frame coordinates, and M_b is the vector of moments generated by the rotors' differential thrust and reaction moments.

Figure 4. Measured acceleration a^b_x and reconstructed acceleration ā^b_x.

Figure 8. Specific force measured along the 0X_b axis while the quadrotor follows a circular trajectory.

Figure 9. Specific force measured along the 0X_b axis while the quadrotor follows a square-type trajectory.
Figures 10 and 11 depict the quadrotor optical flow along the 0X_b axis, computed as described in Section 4.2, when the quadrotor follows the circular and square-type trajectories, respectively. From Equation (22), it is clear that the signals plotted in Figures 8-11 contain information about the translational velocity V_b, although a direct relationship between them is not evident.

Figure 10. Computed optical flow along the 0X_b axis while the quadrotor tracks a circular trajectory.

Figure 11. Computed optical flow along the 0X_b axis while the quadrotor tracks a square-type trajectory.

Figure 13. Observed speed v + Γ_22 σ_2 (blue line) and speed computed by the Parrot Mambo simulator algorithm v (yellow line).
Figures 14 and 15 present the observer error behavior when the quadrotor follows the square-type trajectory. The speeds computed by the Parrot Mambo Minidrone algorithm are also shown.

Figure 15. Observed speed v + Γ_22 σ_2 (blue line) and speed computed by the Parrot Mambo simulator algorithm v (yellow line).

Figures 12-15 illustrate that the selected observer gains performed adequately to identify both translational speeds.

Table 1. Drone's physical characteristics and camera specifications.