Article

Generalized q-Method Relative Pose Estimation for UAVs with Onboard Sensor Measurements

Kyl Stanfield 1, Ahmad Bani Younes 2 and Mohammad Hayajneh 3,*

1 Guidance, Navigation, & Controls (GNC) Engineer, 5500 Campanile Drive, San Diego, CA 92182, USA
2 Department of Aerospace Engineering, San Diego State University, San Diego, CA 92181, USA
3 Faculty of Engineering, Mechatronics Engineering, The Hashemite University, Zarqa 13133, Jordan
* Author to whom correspondence should be addressed.
Sensors 2025, 25(6), 1939; https://doi.org/10.3390/s25061939
Submission received: 20 January 2025 / Revised: 13 March 2025 / Accepted: 14 March 2025 / Published: 20 March 2025
(This article belongs to the Special Issue UAV and Sensors Applications for Navigation and Positioning)

Abstract

The q-method for pose estimation utilizes onboard measurement vectors of reference objects to calculate air vehicle position and orientation with respect to an Inertial frame. The method solves a quaternion eigenvalue problem for the optimal pose, minimizing the error in the derived system of equations. The generalized q-method extends Davenport's q-method for satellite attitude estimation by incorporating inertial position into the relative model and eliminating assumptions in the derivation that restrict it to spacecraft applications. The pose estimation model is then developed and implemented for UAV applications using an onboard camera to obtain measurements in a controlled environment. Combined with numerical methods, algorithm outputs for position and orientation are validated against truth data, demonstrating accurate estimation despite sensor error.

1. Introduction

In aerospace engineering and robotics, the estimation of attitude and position is an essential factor for precise autonomous operations [1,2,3,4,5]. Attitude estimation determines the vehicle's orientation in three-dimensional space, while position estimation determines its translational coordinates. Modern methods integrate optimal filtering techniques by fusing measurements from local or global sensors with knowledge of the vehicle's motion dynamics. Traditional methods often involve sensor fusion techniques combining visual, inertial, and GPS data. Visual odometry and simultaneous localization and mapping (SLAM) have been widely adopted for pose estimation using cameras and other imaging sensors, while inertial measurement units (IMUs) are often used to provide complementary motion information [6]. Techniques such as the Kalman filter, Extended Kalman Filter (EKF), and Unscented Kalman Filter (UKF) have been extensively applied. For instance, Zhang et al. [7] demonstrated the effectiveness of the EKF in fusing data from inertial measurement units (IMUs) and global navigation satellite systems (GNSSs) to improve the robustness of UAV positioning in urban environments. Similarly, Liu et al. [8] explored the use of particle filters for attitude estimation in multi-sensor setups, highlighting their resilience to sensor noise and dynamic changes. One significant challenge in these methods is dealing with sensor noise and environmental conditions [9] that degrade sensor performance, such as poor lighting or featureless scenes. Several researchers have focused on enhancing these techniques through the integration of LiDAR and deep learning-based approaches [10] to improve robustness and real-time performance, particularly in GPS-denied environments. Another challenge is handling the coupled dynamics of the estimated position and attitude simultaneously [11]. Therefore, this research develops a generalized formulation that combines the estimation of position and attitude based on the well-known q-method framework [12].
The quaternion-based q-method for attitude estimation was developed by Paul Davenport [12,13] as a compact solution for optimally estimating the rotation of a body relative to an Inertial frame. Quaternions are widely used because of their efficiency and computational advantages [14,15] over other three-parameter attitude representations, such as Euler angles [16,17]. They are ideal for avoiding singularities, such as gimbal lock, which can occur when using Euler angles during large rotations [18]. Quaternions are also computationally lighter and more stable in dynamic conditions, which makes them popular in real-time applications [19,20]. The integration of a quaternion-based algorithm with the Kalman filter has been widely applied for real-time attitude estimation in aerospace applications [21,22]. Further progression has led to the development of more advanced algorithms, such as Unscented Kalman Filters (UKFs) and particle filters, which demonstrate enhanced performance in handling non-linear systems [23]. Additionally, recent advancements have incorporated deep learning approaches to enhance the pose estimation process. For example, methods that fuse quaternions with convolutional neural networks (CNNs) have demonstrated high levels of accuracy in both orientation and position estimation [24]. These developments have addressed uncoupled attitude estimation without integrating the translational dynamics. The need for solutions that integrate the coupled dynamics is essential for many motion-based proximity applications.
This paper generalizes Davenport's q-method to solve for the relative pose—the coupled state of attitude and translation—for estimation of UAVs with onboard measurements. This paper first provides a comprehensive overview of mathematical preliminaries and Davenport's q-method for attitude estimation, including a background on the Wahba problem [25], laying a solid foundation for the subsequent advancements. The development of the quaternion-based approach is explained, highlighting its advantages over traditional methods in terms of simplicity and reduced computational overhead. Following this, the q-method pose estimation model is formulated as a novel, more generalized approach that optimally estimates a body's position and orientation. This approach requires two sets of measurements for reference points, enabling the calculation of the optimal relative pose from an Inertial frame. This framework eliminates various assumptions inherent to the original technique for spacecraft application and provides a precise method for aircraft and spacecraft pose estimation when given known inertial reference data and body-generated measurements. The intricacies and methodologies are fully documented here. Subsequently, a simulation is constructed to corroborate the practical application of the newly devised pose estimation method in a controlled environment. Numerical methods are employed to solve the newly formed six degrees of freedom (6DoF) estimation equations to minimize the system error. This analysis provides empirical evidence of the technique's effectiveness in a real-world setting.
By unifying the models for attitude and position estimation, the generalized q-method offers a comprehensive solution for the relative pose estimation problem, bridging the gap between quaternion-based rotation estimation and full pose determination. Through rigorous derivation of the governing equations and models, we provide a cohesive framework that extends the reach of the q-method, setting the stage for its practical application in aerospace systems and beyond. The main contribution is summarized as the development of a generalized q-method for pose estimation. This method is based on formulating the coupled dynamics of 6DoF motion in a singularity-free manner. The generalized q-method was tested using relative pose environments in two different attitude configurations with realistically acquired data and an experimental setup in the SPACE lab. The performance of the method was validated against ground truth data obtained from a Vicon system, which provides sub-millimeter accuracy.

2. Mathematical Preliminaries

2.1. Coordinate Frame and System Definition

Illustrated in Figure 1, there are three frames: the Inertial reference (I) frame, the Body reference (B) frame, and the relative reference (D) frame. The Inertial frame remains fixed in space, while the Body frame is free to move throughout the system relative to the I frame. The B frame is oriented by some rotation C (Direction Cosine Matrix) relative to the I frame. The respective position vectors from the Inertial frame (system origin) are given by $\bar{r}_{B/I}$ and $\bar{r}_{D/I}$. The vector $\bar{r}_{D/B}$ points from B to D and can be expressed in the body-centered frame as $\bar{r}_{D/B}^B$ or in the inertial-centered frame as $\bar{r}_{D/B}^I$, the two being related by $C\,\bar{r}_{D/B}^I = \bar{r}_{D/B}^B$. This relation will be upheld across estimation methods; the upfront definitions are given because individual notation varies, but these fundamentals bind the varying models together with a unifying system. Ultimately, the final goal is to obtain a solution that yields the rotation (C or q) and the position of the body with respect to the Inertial frame, $\bar{r}_{B/I}^I$. This general system relation will be interrelated across all forms of this problem.

2.2. Wahba’s Least Squares Optimization Problem

In 1965, mathematician Grace Wahba formulated an intuitive way to describe a rotation between two coordinate systems [25]. This method cast the problem as a least squares optimization to minimize the error between the known relative reference position vector in the Inertial frame $\bar{r}_{D/B}^I$ and the measurements in the B frame $\bar{r}_{D/B}^B$. This optimization problem then leads to the optimal estimate of the rotation matrix C that satisfies the relation between the two frames, $C\,\bar{r}_{D/B}^I = \bar{r}_{D/B}^B$. For ease, we will designate $\hat{r}$ as the unit vector for $\bar{r}_{D/B}^I$ and $\hat{b}$ as the unit vector for $\bar{r}_{D/B}^B$ used in this model. Also, this model was derived for spacecraft attitude estimation purposes and finds use in [26] with the simplification $\bar{r} = \bar{r}_{D/I}^I \approx \bar{r}_{D/B}^I$ (referencing Figure 1). This assumption can be made when $\|\bar{r}_{D/I}^I\| \gg \|\bar{r}_{B/I}^I\|$, where the relative distance of a celestial body used for D reference measurements (e.g., a star catalog) is far greater than the distance of the satellite body to the Earth.
Now, the relation for each reference vector, $C\hat{r}_i = \hat{b}_i$, is applicable in an ideal world where body-frame measurements are without any error. In reality, all N measurements $\hat{b}_i$ will possess some form of innate sensor error, noise, or uncertainty. The error term for each measurement can be condensed into $\delta_i$, giving the relation $\hat{b}_i = C\hat{r}_i + \delta_i$.
This can then be formulated as a least squares problem: find the solution for C that minimizes the error $\delta_i$ in the system, shown in Equation (1). Expanding the least squares converts it into a minimization problem for $L(C)$, Equation (2), which is solved for the optimal direction cosine matrix C.
$$\|\delta_i\|^2 = \|\hat{b}_i - C\hat{r}_i\|^2 = \left(\hat{b}_i - C\hat{r}_i\right)^T\left(\hat{b}_i - C\hat{r}_i\right) \tag{1}$$
$$L(C) = \frac{1}{2}\sum_{i=1}^{n}\alpha_i\|\delta_i\|^2 = \frac{1}{2}\sum_{i=1}^{n}\alpha_i\left(\hat{b}_i^T\hat{b}_i + \hat{r}_i^T C^T C\,\hat{r}_i - \hat{r}_i^T C^T\hat{b}_i - \hat{b}_i^T C\,\hat{r}_i\right) \tag{2}$$
Given that $L(C) = \min$, the problem can also be modeled as a maximization problem where $G(C) = \max$. The term $\alpha_i$ is a set of non-negative weights for each observation; $\alpha_i$ may remain unweighted with a scalar value of one if weighting is not desired or the uncertainties are unknown. Through some algebraic simplification, the following equation is obtained:
$$G(C) = \sum_{i=1}^{n}\alpha_i\,\hat{b}_i^T C\,\hat{r}_i = 1 - L(C) = \max \tag{3}$$
This leads to one final equation in simplified form, maximizing the trace of the product of the direction cosine matrix C and the summation matrix B, given in Equations (4) and (5).
$$G(C) = \mathrm{tr}\left[C B^T\right] = \max \tag{4}$$
$$B = \sum_{i=1}^{n}\alpha_i\,\hat{b}_i\hat{r}_i^T \tag{5}$$
This forms the basis of Wahba's problem for optimal attitude estimation. The objective is to solve for a direction cosine matrix C that maximizes the function $G(C)$. However, it is not always easy to solve for C directly, although solutions have since been developed to do so [27]. The formalized solution using the quaternion is given in the following sections.
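To make the construction concrete, the following is a minimal numerical sketch (Python/NumPy, chosen here purely for illustration; the function name and signature are ours) of the attitude profile matrix B of Equation (5) and the gain function $G(C)$ of Equation (4):

```python
import numpy as np

def wahba_gain(C, b_hat, r_hat, alpha=None):
    """Evaluate G(C) = tr(C B^T) for a candidate rotation matrix C, given
    N unit measurement vectors b_hat (N x 3), reference unit vectors
    r_hat (N x 3), and optional non-negative weights alpha (N,)."""
    b_hat, r_hat = np.asarray(b_hat, float), np.asarray(r_hat, float)
    if alpha is None:
        alpha = np.ones(len(b_hat))      # unweighted case: alpha_i = 1
    # Attitude profile matrix B = sum_i alpha_i b_i r_i^T   (Equation (5))
    B = sum(a * np.outer(b, r) for a, b, r in zip(alpha, b_hat, r_hat))
    return np.trace(C @ B.T)             # G(C) of Equation (4), to be maximized
```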

2.3. Functional Concepts for Quaternions

The quaternion [28] q is an application of complex numbers on a Clifford algebra in $\mathbb{R}^4$, defined as the following:
$$q = q_0 + q_1 i + q_2 j + q_3 k \tag{6}$$
The group of quaternions as defined by Hamilton in 1843 [29] utilizes imaginary units that follow the definition $i^2 = j^2 = k^2 = ijk = -1$, with $q_0, q_1, q_2, q_3 \in \mathbb{R}$. It is also common to represent the quaternion as two components: the vector component ($i$, $j$, and $k$) and the scalar component (denoted by $q_0$). The purpose of the scalar component is to provide an additional, redundant parameter that keeps the quaternion fully defined in the event that a singularity may occur. This keeps the quaternion singularity free. Another way of writing this is thus:
$$q = (q_0,\ \bar{q}) \tag{7}$$
Note that $q_0 = \cos\frac{\phi}{2} \in \mathbb{R}$ and $\bar{q} = [q_1, q_2, q_3]^T = \bar{e}\sin\frac{\phi}{2} \in \mathbb{R}^3$. Here, $\bar{e}$ is the principal axis unit vector (components along $i$, $j$, $k$) and $\phi$ is the principal angle, used for attitude and rotation representation purposes. Additionally, the quaternion may sometimes be defined with the scalar component last, as $q_4$. However, for the purpose of this paper, the quaternion will always use the scalar component as the first element, $q_0$. In practice, the scalar component gives the angle of rotation, and the normalized vector component provides the direction of the rotation axis.
A quaternion used for attitude representation is a unit quaternion (also called a rotation quaternion) with norm $\|q\| = 1$, satisfying the condition $q^T q = 1$, similarly to how a direction cosine matrix (DCM) possesses its orthogonality property $C C^T = I_{3\times 3}$. The norm constraint is the additional relation that fully constrains the quaternion in the event of an angular singularity. A quaternion describing the orientation of the X frame from the Y frame, $q_{X/Y}$, satisfies the condition $(q_{X/Y})^*(q_{X/Y}) = (q_{X/Y})(q_{X/Y})^* = 1_q$, where $1_q \equiv (1,\ \bar{0}_{3\times 1})$. See Table 1 for operation definitions.
Because the quaternion is constructed with a scalar component and a vector component, ordinary linear algebra operations may not apply to the quaternion as they would to a typical vector. As such, Table 1 [30] summarizes the operations implemented when working with quaternion algebra. For example, the distinction must be made that $(q_{X/Y})^*(q_{X/Y})$ is the quaternion multiplied by its conjugate using the quaternion operations, while $q^T q$ is taken as the traditional $4\times 1$ column vector multiplied by its transpose.
Three-dimensional vectors may also be interpreted as special cases of quaternions. This allows for the combined use of quaternions and vectors in three-dimensional dynamics governing equations. Redefining a vector $\bar{s} \in \mathbb{R}^3$ in the form of a quaternion is carried out as shown below.
$$s_q = (s_0,\ \bar{s}) \quad \text{with} \quad s_0 = 0 \tag{8}$$
Thus, a distinction is formed between the quaternion itself, used for attitude representation, and variables in 'quaternion' form. Note that when a vector $\bar{s}$ is converted into quaternion form, it will not be a unit quaternion. Lastly, the quaternion has explicit applications for changing reference frames, both in general and for variables in quaternion format. The change of reference frame for a vector in quaternion form from the X frame to the Y frame is achieved via the following:
$$s_q^Y = q_{Y/X}^*\; s_q^X\; q_{Y/X} \tag{9}$$
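As a compact illustration of Table 1 and Equations (8) and (9), the sketch below implements the Hamilton product, the conjugate, and the frame change for a 'quaternionized' vector (scalar-first storage; a minimal sketch with function names of our choosing):

```python
import numpy as np

def qmul(a, b):
    """Hamilton product (Table 1 multiplication rule), scalar-first storage:
    ab = (a0*b0 - a_vec . b_vec,  a0*b_vec + b0*a_vec + a_vec x b_vec)."""
    a0, av, b0, bv = a[0], a[1:], b[0], b[1:]
    return np.concatenate(([a0 * b0 - av @ bv],
                           a0 * bv + b0 * av + np.cross(av, bv)))

def qconj(q):
    """Conjugate q* = (q0, -q_vec)."""
    return np.concatenate(([q[0]], -q[1:]))

def change_frame(q_YX, s_X):
    """Equation (9): embed the 3-vector as s_q = (0, s) per Equation (8),
    then s_q^Y = q_{Y/X}* s_q^X q_{Y/X}; return the transformed 3-vector."""
    s_q = np.concatenate(([0.0], np.asarray(s_X, float)))
    return qmul(qmul(qconj(q_YX), s_q), q_YX)[1:]   # scalar part stays zero
```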

2.4. Davenport’s q-Method for Attitude Estimation

A solution for this estimation problem may be obtained by substituting the quaternion in place of the direction cosine matrix. The equation below is an identity for a direction cosine matrix as a function of q. Here, $q_0$ is the quaternion scalar component, $\bar{q}$ is the quaternion vector component, and $[\bar{q}\times]$ is the skew-symmetric matrix of $\bar{q}$.
$$C(q) = \left(q_0^2 - \bar{q}^T\bar{q}\right)I_{3\times 3} + 2\bar{q}\bar{q}^T - 2q_0[\bar{q}\times] \tag{10}$$
Through some manipulation, this optimization problem can be parameterized in terms of q. Substituting Equation (10) into Equation (4) and isolating the quaternion puts the system objective function $G(q)$ into the compact form $q^T K q$. K is a $4\times 4$ matrix, and this provides a quadratic form that allows for easy minimization [31] (or maximization, in this case).
$$G(q) = \begin{bmatrix} q_0 & \bar{q}^T \end{bmatrix} \begin{bmatrix} \mathrm{tr}\,B & \left(\sum_{i=1}^{n}\alpha_i\,\hat{b}_i\times\hat{r}_i\right)^T \\ \sum_{i=1}^{n}\alpha_i\,\hat{b}_i\times\hat{r}_i & B + B^T - \mathrm{tr}\,B\; I_{3\times 3} \end{bmatrix} \begin{bmatrix} q_0 \\ \bar{q} \end{bmatrix} = q^T K q \tag{11}$$
A constraint $q^T q = 1$ is then added to solve for the nontrivial solution of the optimization problem, as shown in Equation (12). This provides the optimal quaternion $q_{opt}$ that defines the rotation between the two frames. Differentiating with respect to q and setting the derivative to zero, Equation (13), solves for the maximum value of this quadratic form, giving Equation (14).
$$G^*(q) = q^T K q - \lambda\left(q^T q - 1\right) = \max \tag{12}$$
$$\frac{dG^*(q)}{dq} = 2Kq - 2\lambda q = 0 \tag{13}$$
$$K q_{opt} = \lambda_{max}\, q_{opt} \tag{14}$$
This yields a very straightforward solution where the optimal quaternion $q_{opt}$ is exactly the eigenvector that corresponds to the maximum real eigenvalue $\lambda_{max}$ of the matrix K. Therefore, following this procedure provides $q_{opt}$ to minimize the error in the attitude estimation system via the K matrix. This involves summations in the B matrix, constructed with measurement vectors $\hat{b}_i$ and known reference direction vectors $\hat{r}_i$. Given the relation in Equation (10), the solution provides the rotation matrix C that describes the rotation transformation from the Inertial frame to the Body frame.
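The full procedure fits in a few lines. The sketch below (an illustrative implementation with names of our choosing, not the authors' code) builds the K matrix of Equation (11) and extracts $q_{opt}$ as the eigenvector of the maximum eigenvalue per Equation (14):

```python
import numpy as np

def davenport_q_method(b_hat, r_hat, alpha=None):
    """Return the optimal scalar-first quaternion solving K q = lambda_max q.
    b_hat, r_hat: (N, 3) body measurements and inertial reference vectors."""
    b_hat, r_hat = np.asarray(b_hat, float), np.asarray(r_hat, float)
    if alpha is None:
        alpha = np.ones(len(b_hat))
    B = sum(a * np.outer(b, r) for a, b, r in zip(alpha, b_hat, r_hat))
    z = sum(a * np.cross(b, r) for a, b, r in zip(alpha, b_hat, r_hat))
    K = np.zeros((4, 4))                              # Equation (11) blocks
    K[0, 0] = np.trace(B)
    K[0, 1:] = K[1:, 0] = z
    K[1:, 1:] = B + B.T - np.trace(B) * np.eye(3)
    w, V = np.linalg.eigh(K)                          # K is symmetric
    q_opt = V[:, np.argmax(w)]                        # eigenvector of lambda_max
    return q_opt / np.linalg.norm(q_opt)              # enforce q^T q = 1
```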
For awareness, there are some nuances that can be added to the q-method attitude estimation process, the first being that $\hat{b}_i$ and $\hat{r}_i$ are not required to be directional unit vectors. Both $\bar{b}_i$ and $\bar{r}_i$ can be positional vectors instead. The results for attitude estimation are identical; the exact derivation of the Wahba problem and q-method would differ slightly due to the lack of simplification that comes with unit vectors, but the core process and equations remain the same. In practice, unit directional vectors are more commonly used [32] because precise knowledge of the distance to a reference body can be difficult to obtain, especially for satellite applications.
With the above assumptions in play, Davenport’s q-method yields great results. However, we can develop a more generalized model that will continue to be applicable for satellite applications, but will not be limited by approximations or simplifications. The following section will expand on this knowledge to incorporate position estimation in tandem with attitude estimation to formulate a robust pose estimation method.

3. q-Method Pose Estimation Derivation

This section now shifts focus to the newly developed q-method for pose estimation. Prior use of the q-method focused on spacecraft applications for attitude estimation, whereas this more generalized approach eliminates both the unit-vector-direction and $\bar{r} = \bar{r}_{D/I}^I \approx \bar{r}_{D/B}^I$ assumptions for the 'known' reference values $\bar{r}_i$. This indeed complicates the problem and makes it more difficult to estimate, but also incorporates the inertial position $\bar{p}$ into the estimation; the final model shows promising compatibility for 6DoF drone pose estimation. Figure 2 illustrates how the q-method pose estimation relates back to the global system definitions in Figure 1, differing from the original q-method. The objective is to estimate the attitude, C, as well as the vehicle position, $\bar{p}$, in order to estimate the pose of the air vehicle. A list of variables is provided:
  • $\bar{r}_{B/I}^I$ — Body position from the Inertial reference (Earth); to be estimated as $\bar{p}$.
  • $\bar{r}_{D/I}^I$ — Known reference position from the Inertial frame (Earth), $\bar{r}$.
  • $\bar{r}_{D/B}^B$ — Measured position vectors in the Body frame, with error, $\bar{b}$.
  • $\bar{r}_{D/B}^I$ — Unknown; essentially requires prior knowledge. If we had this, position estimation would be easy. Let it be known as $\bar{d}$. We will use this to estimate $\bar{p}$, given that $\bar{d} = \bar{r} - \bar{p}$ by following the vector addition of the system definition.
We are able to take advantage of a simple substitution for $\bar{r}_{D/B}^I$ by virtue of the common system definition. The q-method takes $\bar{r}_{D/B}^I = \hat{r}$. For the q-method pose derivation, let $\bar{r}_{D/B}^I = \bar{d} = \bar{r} - \bar{p}$. The relation for the drone body measurements with error becomes $\bar{b}_i = C\bar{d}_i + \delta_i$. Substituting this equivalence into Equation (1), a slight modification is made to the original Wahba problem. This straightforward substitution unifies the estimation derivations, allowing for a similar system of equations. Throughout this derivation, all aforementioned variables are kept identical, including the weighting factor $\alpha_i$. Notably, other research exists expanding upon the Wahba problem to incorporate position [33,34], but not for the quaternion-based purposes used here.
$$L(C) = \|\delta_i\|^2 = \|\bar{b}_i - C(\bar{r}_i - \bar{p})\|^2 = \left(\bar{b}_i - C(\bar{r}_i - \bar{p})\right)^T\left(\bar{b}_i - C(\bar{r}_i - \bar{p})\right) = \left(\bar{b}_i - C\bar{d}_i\right)^T\left(\bar{b}_i - C\bar{d}_i\right) \tag{15}$$
Further expanding the minimization cost function $L(C, \bar{d})$, and then grouping terms, gives
$$L(C, \bar{d}) = \frac{1}{2}\sum_{i=1}^{n}\alpha_i\|\delta_i\|^2 = \frac{1}{2}\sum_{i=1}^{n}\alpha_i\left(\bar{b}_i^T\bar{b}_i + \bar{d}_i^T C^T C\,\bar{d}_i - \bar{d}_i^T C^T\bar{b}_i - \bar{b}_i^T C\,\bar{d}_i\right) \tag{16}$$
$$L(C, \bar{d}) = \frac{1}{2}\sum_{i=1}^{n}\alpha_i\,\bar{b}_i^T\bar{b}_i + \frac{1}{2}\sum_{i=1}^{n}\alpha_i\,\bar{d}_i^T\bar{d}_i - \sum_{i=1}^{n}\alpha_i\,\bar{b}_i^T C\,\bar{d}_i \tag{17}$$
Note that Equation (17) possesses two terms that Equation (3) combines into one, owing to the unit vector measurements. Equation (3) also converts the problem into a maximization problem for simplicity, but the form above continues as $L(C, \bar{d})$ and $L(q, \bar{d})$, opting to keep the third term negative. Also, note the dimensions of q ($4\times 1$) and $\bar{d}$ ($3\times 1$); the following equations make use of the aforementioned 'quaternionized' vector format of Equation (8) when required.
The third term still resembles that in Equation (3) and is subsequently equivalent to $\mathrm{tr}\,[C\beta^T]$. Like B in Equation (5), $\beta$ is defined as
$$\beta = \sum_{i=1}^{n}\alpha_i\,\bar{b}_i\bar{d}_i^T \tag{18}$$
Similarly, the cost function is put into terms of $L(q, \bar{d})$ using Equation (10). Only the third term is a function of q (or C), and substituting $\bar{d}$ for $\hat{r}$ allows for the use of Equation (11), as derived by Davenport. The cost function then becomes
$$L(q, \bar{d}) = \frac{1}{2}\sum_{i=1}^{n}\alpha_i\,\bar{b}_i^T\bar{b}_i + \frac{1}{2}\sum_{i=1}^{n}\alpha_i\,\bar{d}_i^T\bar{d}_i - q^T\kappa q \tag{19}$$
where the third term, $q^T\kappa q$, is expressed as
$$q^T\kappa q = \begin{bmatrix} q_0 & \bar{q}^T \end{bmatrix} \begin{bmatrix} \mathrm{tr}\,\beta & \left(\sum_{i=1}^{n}\alpha_i\,\bar{b}_i\times\bar{d}_i\right)^T \\ \sum_{i=1}^{n}\alpha_i\,\bar{b}_i\times\bar{d}_i & \beta + \beta^T - \mathrm{tr}\,\beta\; I_{3\times 3} \end{bmatrix} \begin{bmatrix} q_0 \\ \bar{q} \end{bmatrix} \tag{20}$$
With the cost function $f(x)$ now in terms of the variables of interest, $x = [q, \bar{d}]$, the quaternion norm constraint $q^T q = 1$ is imposed as an equality constraint $g(x)$. The Lagrangian function [35] is applied to obtain the optimal solution that minimizes the objective function $f(x)$ (minimizing the error), thereby obtaining the optimal solution for q and $\bar{d}$. The Lagrangian function is defined as $L(x, \lambda) \triangleq f(x) - \lambda g(x)$.
$$L(q, \bar{d}, \lambda) = \frac{1}{2}\sum_{i=1}^{n}\alpha_i\,\bar{b}_i^T\bar{b}_i + \frac{1}{2}\sum_{i=1}^{n}\alpha_i\,\bar{d}_i^T\bar{d}_i - q^T\kappa q - \lambda\left(q^T q - 1\right) \tag{21}$$
The necessary conditions for minimizing the system are $\frac{\partial L}{\partial \bar{d}} = 0$ and $\frac{\partial L}{\partial q} = 0$. The partial differential equations used to solve the system are shown below.
$$\frac{\partial L}{\partial \bar{d}} = 0 = \sum_{i=1}^{n}\alpha_i\,\bar{d}_i - q^T\frac{\partial \kappa}{\partial \bar{d}}\,q \tag{22}$$
$$\frac{\partial L}{\partial q} = 0 = 2\kappa q - 2\lambda q \tag{23}$$
Equations (22) and (23) can now be used to solve for the optimal position estimate $\bar{p}^*$ and the optimal quaternion describing the rotation, $q_{opt}$. $\frac{\partial L}{\partial q}$ fortunately gives a solution comparable to the q-method, making use of $\kappa$, shown in Equation (20), to find $q_{opt}$ as the eigenvector for the corresponding maximum real eigenvalue. For example, Equation (22) substitutes $\bar{d}_i = (\bar{r}_i - \bar{p}^*)$ so that $\sum_{i=1}^{n}\alpha_i\,\bar{d}_i = \sum_{i=1}^{n}\alpha_i\,\bar{r}_i - \sum_{i=1}^{n}\alpha_i\,\bar{p}^*$. The final set of equations for q-method pose estimation is thereby formed.
$$\bar{p}^* = \frac{1}{N}\left(\sum_{i=1}^{n}\alpha_i\,\bar{r}_i - q^T\frac{\partial \kappa}{\partial \bar{d}}\,q\right) \tag{24}$$
$$\kappa\, q_{opt} = \lambda_{max}\, q_{opt} \tag{25}$$
One last simplification is made due to the term $\frac{\partial \kappa}{\partial \bar{d}}$ within Equation (24). The partial derivative of $\kappa \in \mathbb{R}^{4\times 4}$ with respect to a vector $\bar{d} \in \mathbb{R}^3$ yields a $4\times 4\times 3$ higher-order tensor. The evaluation of this term can be found in a prior publication [36], along with validation of q-method pose estimation for simulated spacecraft applications. Thus, an intuitive equation for $\bar{p}^*$ exists by again referencing the system definition and taking a weighted average for $\bar{p} = \bar{r}_{B/I}^I = \bar{r}_{D/I}^I - \bar{r}_{D/B}^I$. By definition, $\bar{r}_{D/B}^I = q_{I/B}\,\bar{r}_{D/B}^B\,q_{I/B}^* = q_{B/I}^*\,\bar{b}_i\,q_{B/I}$, where $q^*$ is the quaternion conjugate.
$$\bar{p}^* = \frac{1}{N}\left(\sum_{i=1}^{n}\alpha_i\,\bar{r}_i - q\left(\sum_{i=1}^{n}\alpha_i\,\bar{b}_i\right)q^*\right) \tag{26}$$
To summarize, the goal of q-method pose estimation is to find $\bar{p}^*$ and $q_{opt}$. To do so, one must first formulate $\kappa$, making use of $\beta$, which in turn depends on $\bar{d}$. If one of the optimal values ($\bar{p}^*$ or $q_{opt}$) is known, the other can easily be calculated. More generally, a solution can be found using any valid numerical solving method. The next sections investigate the results using a numerical solving scheme in parallel with physical sensors in a controlled lab environment.
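The coupled structure suggests a simple alternating scheme: hold q fixed to recover $\bar{p}^*$ from Equation (26), then rebuild $\bar{d}_i = \bar{r}_i - \bar{p}^*$ and re-solve the eigenvalue problem for q. The sketch below (reusing the helpers from the earlier sketches; the fixed-point loop is our illustrative choice, distinct from the fsolve scheme of Section 4) shows one such approach. Per the remark in Section 2.4, the eigenvalue solve does not require unit vectors, so the same routine applies to the positional $\bar{d}_i$.

```python
import numpy as np
# reuses qmul, qconj, and davenport_q_method from the earlier sketches

def q_method_pose(b_vecs, r_vecs, alpha=None, iters=20):
    """Alternating solve for the coupled pose (p*, q_opt).
    b_vecs: (N, 3) body-frame measurements; r_vecs: (N, 3) inertial refs."""
    b_vecs, r_vecs = np.asarray(b_vecs, float), np.asarray(r_vecs, float)
    N = len(b_vecs)
    if alpha is None:
        alpha = np.ones(N)
    p = np.zeros(3)                                  # crude initial position
    for _ in range(iters):
        d = r_vecs - p                               # d_i = r_i - p
        q = davenport_q_method(b_vecs, d, alpha)     # kappa shares K's structure
        # Equation (26): p* = (1/N)(sum a_i r_i - q (sum a_i b_i) q*)
        b_sum = np.concatenate(([0.0], alpha @ b_vecs))
        p = (alpha @ r_vecs - qmul(qmul(q, b_sum), qconj(q))[1:]) / N
    return p, q
```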

4. Experimental Setup and Methodology

4.1. Experimental Overview and Setup

To further verify the functionality of the system of equations, the q-method pose estimation is incorporated with onboard measurements obtained from drone-camera experiments. Figure 3 illustrates how a Quanser drone used an Intel RealSense (R200) camera to measure the locations of five objects/beacons in 3D space. The beacons are placed at varying locations and depths upon a curved surface. The camera's depth-sensing feature was used to determine the location of each beacon in space relative to the camera/drone frame. Camera detection of each beacon is shown in Figure 4 and Figure 5. These detections provide positional measurements in the camera frame, which are then converted into the body-centered drone frame.
To validate the estimates, the results were compared against true reference data. Vicon's motion capture system defined the locations of the five beacons as well as the drone. The Vicon system consists of eight high-resolution cameras that capture moving objects at a rate of 120 Hz. Each beacon and the drone were defined as rigid bodies (objects) in three dimensions using reflective markers. Figure 4 shows two of the eight Vicon cameras mounted across the lab and the test setup, forming the Inertial frame. This provides a full overview of the hardware used for the experimental setup across the Body and Inertial frames.
To test the method for orientation and position estimation in a physical setting, the drone operated on a static mount while detecting the beacons to obtain reference measurements. The first experiment used a stabilized, nominally oriented (no-tilt) drone to measure the beacons, as shown in Figure 5 with the drone camera. A second experiment was performed where the drone was statically tilted at a roll angle of −15 degrees in the Body frame, as shown in Figure 6, also with the onboard camera.
Two corrections to the experimental setup were required while post-processing the data. First, the position of the camera on the drone needed to be taken into account, as it was +15 cm ahead of the true Vicon-defined drone centroid. This was handled by adding a +15 cm bias to the Vicon positional truth data during the pose estimation analysis and validation. Second, the camera measurements contained a constant bias, beyond the camera-position offset, that needed to be considered. This was addressed by performing a static calibration in the same position as the experiment for both drone orientations and then subtracting the mean bias of the camera for each individual reference marker. Further details are provided in the following section after the inputs.

4.2. Initial Conditions and Inputs

The two experiments were performed and the resulting output data were fed into MATLAB 2018a for post-processing, algorithm implementation, and analysis. Figure 7 illustrates, as pseudo-code, the validation process for both configurations. Figure 8 provides a visual representation of the true data within MATLAB using the Vicon system. A correction factor of +15 cm is added to account for the position of the camera on the drone with respect to the Body frame. This offset can be seen in the figure, as the camera centroid is offset from the drone body itself. Additionally, several hundred measurements were taken to characterize the average measurement bias of the camera sensor and correct the input reference measurements. The estimation algorithm and results come from a single instantaneous state: one each for the 0° roll and −15° roll.
The pose estimation using the previously outlined system of equations was accomplished with the aid of the built-in MATLAB numerical equation solver fsolve for this multi-variable system. The function uses a Trust-Region Dogleg method to solve for the optimal solution that minimizes the residual error of the system. The following is the exact input used to calculate the residual $f(x)$ within the solver, based on Equations (25) and (26).
$$f(x) = 0 = \begin{bmatrix} \sum_{i=1}^{n}\alpha_i\left(\bar{r}_i - \bar{p}^*\right) - q\left(\sum_{i=1}^{n}\alpha_i\,\bar{b}_i\right)q^* \\ \left(2\kappa q - 2\lambda q\right)/10 \end{bmatrix} \tag{27}$$
The second row of $f(x)$ is multiplied by $10^{-1}$ in order to better condition the system. Because both sides of the equation equal zero, this scalar does not change the solution. $\lambda$ is included for the purpose of solving the system numerically. Thus, $f(\bar{p}, q, \lambda) = 0$, where $x = [\bar{p}, q, \lambda]$, minimizes the system.
An initial guess $x_0$ is required to iteratively solve the function via numerical methods. As such, a nearby initial guess is arbitrarily selected for the quaternion $q_0$, position $p_0$, and $\lambda$ of the system. For this experiment, let $p_0 = [0, 0, 1]$ m, $q_0 = [1; 0; 0; 0]$, and $\lambda_0 = \sum \bar{b}_i^T\bar{b}_i$. This formulation for $\lambda_0$ was obtained through empirical observations of what generally works well when using this estimation model in conjunction with a numerical equation solver.
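For reference, a sketch of this numerical setup is given below. The paper's implementation is in MATLAB with fsolve; here scipy.optimize.fsolve stands in, and the unit-norm constraint is appended as an extra residual row to square the system (that squaring step is our assumption, not stated in the paper). The synthetic data at the top are stand-ins for the Table 2 vectors.

```python
import numpy as np
from scipy.optimize import fsolve
# reuses qmul, qconj from the Section 2.3 sketch

def residual(x, b_vecs, r_vecs, alpha):
    """f(x) of Equation (27) with x = [p (3), q (4), lambda (1)]."""
    p, q, lam = x[:3], x[3:7], x[7]
    d = r_vecs - p
    beta = sum(a * np.outer(b, di) for a, b, di in zip(alpha, b_vecs, d))
    z = sum(a * np.cross(b, di) for a, b, di in zip(alpha, b_vecs, d))
    kappa = np.zeros((4, 4))                         # Equation (20) blocks
    kappa[0, 0] = np.trace(beta)
    kappa[0, 1:] = kappa[1:, 0] = z
    kappa[1:, 1:] = beta + beta.T - np.trace(beta) * np.eye(3)
    b_sum = np.concatenate(([0.0], alpha @ b_vecs))
    row1 = alpha @ d - qmul(qmul(q, b_sum), qconj(q))[1:]
    row2 = (2.0 * kappa @ q - 2.0 * lam * q) / 10.0  # scaled for conditioning
    return np.concatenate((row1, row2, [q @ q - 1.0]))  # norm row squares system

# stand-in scene (near-identity attitude); the experiment uses the Table 2 data
rng = np.random.default_rng(0)
r_vecs = rng.uniform(-1.0, 1.0, (5, 3)) + [1.5, 0.0, 1.0]
p_true = np.array([0.15, 0.0, 1.08])
b_vecs = r_vecs - p_true + 0.005 * rng.normal(size=(5, 3))
alpha = np.ones(5)

# Table 2 initial guess: p0 = [0, 0, 1] m, identity quaternion, empirical lambda0
x0 = np.concatenate(([0.0, 0.0, 1.0], [1.0, 0.0, 0.0, 0.0],
                     [np.sum(b_vecs * b_vecs)]))
sol = fsolve(residual, x0, args=(b_vecs, r_vecs, alpha))
p_opt, q_opt = sol[:3], sol[3:7] / np.linalg.norm(sol[3:7])
```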
Table 2 provides the value of each parameter used to initialize the system. The results generated hereafter make use of these provided inputs. Given N reference points for the known reference vectors $\bar{r}_{D/I}^I$ and N body measurements $\bar{b}_i$ with measurement error $\sigma_i$, the objective is to estimate $\bar{p}^*$ and $q_{opt}$ to be as close to the true values $\bar{r}_{B/I}^I$ and $q_{B/I}$ as possible.
Measurement errors and correction factors for each roll condition are all provided in Table 2 for transparency. Starting with the reference measurement error $\sigma_i$, a scalar percent error value is given for each onboard measurement of each reference vector in the Body frame. $\sigma_i$ is computed using the experimental and true positional values of the reference beacons with respect to the camera, given as $\left|\bar{r}_{D/B}^B - \bar{b}_i\right| / \left|\bar{r}_{D/B}^B\right| \times 100\%$. The mean measurement error $\sigma_i^2$ is the average error across all N measurements used in this procedure. This measurement error, while lacking directional information, is a simplified way to partially quantify the error in the system when comparing the accompanying algorithm results.
In addition, the average sensor error $\bar{\sigma}$ is the average error across time (700 data points) of the camera sensor for each reference beacon. A static collection of data identical to $\bar{b}_i$ was performed to verify that the snapshot state captured in $\bar{b}_i$ is not drastically different from the average of the camera's ordinarily noisy measurements. The same static data collection was also used to determine the sensor's average experimental bias $\delta\bar{r}_{D/B}^B$ and correct the raw measurement vectors, such that $\bar{b}_i = \bar{b}_i + \delta\bar{r}_{D/B}^B$. Methods and error information are disclosed here, but procedures for absolute correction and minimization of sensor error are intentionally not incorporated, in order to understand estimation performance in the presence of some degree of measurement error.
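The bias correction and per-beacon percent error described above reduce to a few lines. A sketch under the stated definitions (variable names and the exact bias bookkeeping are our assumptions):

```python
import numpy as np

def camera_bias(raw_samples, true_vec):
    """Static-calibration sketch: average many raw camera samples (~700) of one
    beacon and difference against the Vicon truth, so that the correction is
    applied as b_i = b_raw + delta_r. Here delta_r = truth - mean(raw), our
    assumed realization of 'subtracting the mean bias' described above."""
    return np.asarray(true_vec, float) - np.mean(raw_samples, axis=0)

def percent_error(b_meas, r_true_body):
    """sigma_i of Table 2: |r_D/B^B - b_i| / |r_D/B^B| * 100 for one beacon."""
    r = np.asarray(r_true_body, float)
    return 100.0 * np.linalg.norm(r - b_meas) / np.linalg.norm(r)
```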

5. Results

The generalized q-method pose estimation formulation successfully estimated both the position $\bar{p}$ and orientation $q_{opt}$ of the drone for both attitude configurations. Both $\bar{p}$ and $q_{opt}$ are the optimal values for the pose that minimize the error of the system given the drone camera measurements with sensor noise. Figure 9 is designed to mirror the system definition (Figure 2). It overlays the estimated position $\bar{p}$ onto the true body position vector $\bar{r}_{B/I}^I$. Here, all inertial and truth data obtained from the Vicon system are used to validate the results of the pose estimation. The inertial system origin is the center floor of the laboratory in which the experiment was performed. The true reference position vectors $\bar{r}_{D/I}^I$ are essential prior knowledge of the Inertial frame or system, wherever the drone intends to operate, and were also obtained by Vicon.
The position error (1.8% and 1.56%) is calculated as $\left\|\bar{r}_{B/I}^I - \bar{p}^*\right\| / \left\|\bar{r}_{B/I}^I\right\|$, and the position angular error is defined as $\cos^{-1}\left[\left(\bar{r}_{B/I}^I \cdot \bar{p}^*\right) / \left(\left\|\bar{r}_{B/I}^I\right\|\,\left\|\bar{p}^*\right\|\right)\right]$ to represent the angular error between the true and estimated position vectors. The quaternion attitude error is not as intuitive to represent, but is found via the principal angle $\phi$ within the quaternion product $\delta q = q_{B/I}\,q_{opt}^*$ [15], where $q_{opt}^*$ is the conjugate of the estimated quaternion. The equation for $\phi$ is $\phi = 2\cos^{-1}(q_0)$ from the prior quaternion definition. Observe that the quaternion error (principal angle error, $\delta\phi$) for the two test cases is 0.453° and 0.716°, respectively, between the true and estimated quaternions. The small positional error and small quaternion error demonstrate the reliability and accuracy of the q-method for pose estimation despite the relatively few reference measurements.
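These error metrics translate directly into code. A minimal sketch using the stated definitions (reusing qmul/qconj from the Section 2.3 sketch; the absolute value on $q_0$, which folds the $\pm q$ sign ambiguity, is our addition):

```python
import numpy as np
# reuses qmul, qconj from the Section 2.3 sketch

def pose_errors(p_true, p_est, q_true, q_est):
    """Position error (fraction), position angular error and principal-angle
    attitude error (both in degrees), per the Section 5 definitions."""
    p_true, p_est = np.asarray(p_true, float), np.asarray(p_est, float)
    pos_err = np.linalg.norm(p_true - p_est) / np.linalg.norm(p_true)
    cos_ang = (p_true @ p_est) / (np.linalg.norm(p_true) * np.linalg.norm(p_est))
    ang_err = np.degrees(np.arccos(np.clip(cos_ang, -1.0, 1.0)))
    dq = qmul(q_true, qconj(q_est))          # delta q = q_B/I (x) q_opt*
    phi = np.degrees(2.0 * np.arccos(np.clip(abs(dq[0]), 0.0, 1.0)))
    return pos_err, ang_err, phi
```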
The remaining figures detail the results of interest across each iteration of the numerical solver until convergence is reached. Iterative results for both the 0° roll and −15° roll cases are overlaid within Figure 10, Figure 11 and Figure 12 for ease of presentation. Figure 10 displays the principal angle error $\delta\phi$ for an understanding of the aggregate attitude error. The bottom plot of the same figure shows the norm of the residual error $f(x)$ as fsolve converges to a stopping condition.
Figure 11 itemizes each element of the attitude error ($\delta q_1$, $\delta q_2$, $\delta q_3$, $\delta q_4$) between the true quaternion $q_{B/I}$ and the estimated quaternion $q_{opt}$. Lastly, Figure 12 shows the error for each element between the estimated and true positions. The deltas are not expected to be exactly zero; the expectation is not to find the exact true value, but rather the optimal estimate that minimizes the error of the system (introduced by the noisy measurements). Further performance enhancements could be made with the addition of more reference points, which for an estimation model like this directly improves performance.
All figures and results presented show the advantage of the pose estimation q-method in its ability to solve for the state of the system with minimal error. In each instance, the numerical equation solver converged and obtained a sufficient solution for the UAV. When compared to the inertial Vicon truth data, the predicted position and orientation compare closely with the truth despite the error embedded in the onboard camera measurement reference vectors.

Method Comparison and Evaluation

The generalization of the q-method focuses on adopting a solution to estimate both orientation and position simultaneously. The main aim of this paper is not to assess estimation performance or related metrics. Since the proposed algorithm solves the algebraic equation using an eigenvalue approach, it is not entirely fair to compare it with other methods that may involve additional computational steps beyond the scope of this approach. Nevertheless, tests were conducted to compare this method with the q-method (for orientation only) and the modified Optimal Linear Attitude Estimator, OLAE (for orientation and position) [37], showing nearly identical performance results. The same sensor data and initial conditions were used for the comparison.
OLAE is a single-point real-time estimator that utilizes the Rodrigues (or Gibbs) vector, a minimal-element attitude parameterization. The optimality criterion, which differs from Wahba’s constrained criterion, is strictly quadratic and unconstrained. This method estimates both position and orientation based on vector observations obtained through vision-based camera technology. The generalized q-method also estimates the translation vector with the same accuracy and rate. Table 3 summarizes the results.
The results show that both the generalized q-method and OLAE perform similarly in terms of position and orientation errors. At a 0° roll angle, OLAE has a slightly smaller position error but a slightly higher orientation error. At a 15° roll angle, the position errors for both methods are nearly identical, with OLAE having a marginally larger error. Both methods show an increase in orientation error at 15°, with the generalized q-method maintaining a slight advantage in orientation accuracy. Overall, the differences are small, with both methods offering comparable performance depending on the roll angle.

6. Conclusions

The q-method for pose estimation was designed to take body-frame positional reference measurements and compare them to relative, known positions from an Inertial frame. In doing so, both the orientation and position of the body may be estimated so as to minimize the error introduced into the system via the measurements. This paper first provided a summary of Davenport's q-method for attitude estimation and then built on the existing model, while also acknowledging the assumptions and differences that went into the original model. The pose estimation model presented here went to great lengths to keep a consistent system relation between the Inertial, Body, and reference frames in a way that mirrors the q-method. It is via this common system definition that the pose estimation equations were derived by mirroring the original q-method derivation. From this newly developed model, the pose estimation results proved to provide a very accurate solution with relatively simple equations. The computational cost of the numerical methods could be a limitation of this method, but further work could be performed to develop closed-form solutions for this system of equations. Nevertheless, with ample processing power and computing speed, any conventional numerical method used in conjunction with the q-method for pose estimation will provide reliable results.
While results within this paper are promising, the experiment performed was for a single steady-state estimation. Additional estimation approaches like these offer an extra layer of redundancy and dependability for any autonomous aircraft or spacecraft. Further integration into a dynamic system would be required with parallel use of filters, GPS, dynamic models, and/or accelerometer and gyroscopic sensors. Any hardware or communication failures for these flight-essential features would require backup methods for state estimation; the generalized q-method pose estimation provides a backup measure preventing loss of the vehicle, provided prior environmental reference knowledge is utilized.
The estimation error itself is determined by the optimality of the Wahba problem, which is formulated based on the measurement noise. This paper does not aim to reduce the estimation error further, but rather focuses on generalizing the classical q-method to provide pose estimation solutions for coupled vehicle dynamics. Certain validation steps of this pose estimation method were omitted due to redundancy, but additional validation was performed to verify equivalency between the generalized q-method and the classical q-method. Noise reduction and elimination can be further investigated according to the estimation tolerance defined by the problem itself. Preceding work has also been performed as part of the validation of the q-method pose estimation, investigating error sensitivity to initial conditions and numerical convergence through a Monte Carlo analysis [36].
Unlike Davenport's q-method, which often makes the far-away star assumption and uses a star catalog defined with respect to the Earth, this pose estimation method generalizes the model to be more mathematically intuitive for general applications. The pose estimation model is not restricted to satellite attitude estimation, as demonstrated in this paper using a drone/UAV. This itself is a benefit. In addition, if the position is already known, these equations function identically to the q-method. In essence, the integration of pose estimation methodologies with the q-method represents a significant advancement, promising more robust and adaptable solutions for spatial perception and analysis across various domains.

Author Contributions

Conceptualization, K.S. and A.B.Y.; methodology, K.S., A.B.Y. and M.H.; software, K.S.; validation, K.S.; formal analysis, K.S.; investigation, K.S., A.B.Y. and M.H.; resources, A.B.Y. and M.H.; data curation, K.S. and M.H.; writing—original draft preparation, K.S.; writing—review and editing, K.S., A.B.Y. and M.H.; visualization, K.S.; supervision, A.B.Y.; project administration, K.S., A.B.Y. and M.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The original contributions presented in this study are included in the article material. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Crassidis, J.; Junkins, J. Optimal Estimation of Dynamic Systems; Advances in Applied Mathematics; CRC Press LLC: Boca Raton, FL, USA, 2024. [Google Scholar]
  2. Forbes, J.R. Fundamentals of Spacecraft Attitude Determination and Control [Bookshelf]. IEEE Control Syst. Mag. 2015, 35, 56–58. [Google Scholar] [CrossRef]
  3. Barfoot, T.D. State Estimation for Robotics; Cambridge University Press: Cambridge, MA, USA, 2017. [Google Scholar]
  4. Fourati, H.; Belkhiat, D.E.C. (Eds.) Multisensor Attitude Estimation: Fundamental Concepts and Applications, 1st ed.; CRC Press: Boca Raton, FL, USA, 2016. [Google Scholar] [CrossRef]
  5. Yang, Y.; Shen, Q.; Li, J.; Deng, Z.; Wang, H.; Gao, X. Position and Attitude Estimation Method Integrating Visual Odometer and GPS. Sensors 2020, 20, 2121. [Google Scholar] [CrossRef] [PubMed]
  6. Wang, S.; Ahmad, N.S. A Comprehensive Review on Sensor Fusion Techniques for Localization of a Dynamic Target in GPS-Denied Environments. IEEE Access 2025, 13, 2252–2285. [Google Scholar] [CrossRef]
  7. Zhang, A.; Atia, M.M. An efficient tuning framework for Kalman filter parameter optimization using design of experiments and genetic algorithms. NAVIGATION J. Inst. Navig. 2020, 67, 775–793. [Google Scholar] [CrossRef]
  8. Liu, M.; Chen, Y.; Zhang, L.; Wang, Y. Attitude Estimation Algorithm of Portable Mobile Robot Based on Quaternion Complementary Filtering. Micromachines 2021, 12, 1373. [Google Scholar] [CrossRef]
  9. Cheng, Y.; Crassidis, J.L. Particle Filtering for Attitude Estimation Using a Minimal Local-Error Representation. J. Guid. Control. Dyn. 2010, 33, 1305–1310. [Google Scholar] [CrossRef]
  10. Wang, S.; Clark, R.; Wen, H.; Trigoni, N. DeepVO: Towards end-to-end visual odometry with deep Recurrent Convolutional Neural Networks. In Proceedings of the 2017 IEEE International Conference on Robotics and Automation (ICRA), Singapore, 29 May–3 June 2017; pp. 2043–2050. [Google Scholar] [CrossRef]
  11. Boughellaba, M.; Tayebi, A. Distributed Hybrid Attitude Estimation for Multi-agent Systems on SO(3). In Proceedings of the 2023 American Control Conference (ACC), San Diego, CA, USA, 31 May–2 June 2023; pp. 1048–1053. [Google Scholar] [CrossRef]
  12. Davenport, P.B. A Vector Approach to the Algebra of Rotations with Applications; NASA TN D-4606; NASA: Washington, DC, USA, 1968.
  13. Keat, J. Analysis of Least-Squares Attitude Determination Routine DOAOP; Technical Report CSC/TM-77/6034; NASA Goddard Space Flight Center: Greenbelt, MD, USA, 1977. [CrossRef]
  14. Bani Younes, A.; Mortari, D.; Turner, J.D.; Junkins, J.L. Attitude Error Kinematics. J. Guid. Control. Dyn. 2014, 37, 330–336. [Google Scholar] [CrossRef]
  15. Bani Younes, A.; Mortari, D. Derivation of All Attitude Error Governing Equations for Attitude Filtering and Control. Sensors 2019, 19, 4682. [Google Scholar] [CrossRef] [PubMed]
  16. Bani Younes, A.; Turner, J.D.; Junkins, J.L. A Generic Optimal Control Tracking Solution for Various Attitude Error Parameterizations. In Proceedings of the 23rd AAS/AIAA Space Flight Mechanics Meeting, Advances in the Astronautical Sciences, Kauai, HI, USA, 10–14 February 2013; Volume 148. [Google Scholar]
  17. Bani Younes, A.; Turner, J.D.; Majji, M.; Junkins, J.L. Nonlinear Tracking Control of Maneuvering Rigid Spacecraft. In Proceedings of the 21st AAS/AIAA Space Flight Mechanics Meeting, Advances in the Astronautical Sciences, New Orleans, LA, USA, 13–17 February 2011; Volume 140, pp. 965–981. [Google Scholar]
  18. Euston, M.; Coote, P.; Mahony, R.; Kim, J.; Hamel, T. A complementary filter for attitude estimation of a fixed-wing UAV. In Proceedings of the 2008 IEEE/RSJ International Conference on Intelligent Robots and Systems, Nice, France, 22–26 September 2008; pp. 340–345. [Google Scholar] [CrossRef]
  19. Van Der Ha, J.; Shuster, M. A Tutorial on Vectors and Attitude. Control Syst. IEEE 2009, 29, 94–107. [Google Scholar] [CrossRef]
  20. Stanfield, K.; Bani Younes, A. Dual-Quaternion Analytic LQR Control Design for Spacecraft Proximity Operations. Sensors 2021, 21, 3597. [Google Scholar] [CrossRef] [PubMed]
  21. Welch, G.; Bishop, G. An Introduction to the Kalman Filter; University of North Carolina at Chapel Hill, Department of Computer Science: Chapel Hill, NC, USA, 2006. [Google Scholar]
  22. Titterton, D.; Weston, J. Strapdown inertial navigation technology—2nd edition—[Book review]. IEEE Aerosp. Electron. Syst. Mag. 2005, 20, 33–34. [Google Scholar] [CrossRef]
  23. Särkkä, S. Bayesian Filtering and Smoothing; Cambridge University Press: Cambridge, MA, USA, 2013. [Google Scholar]
  24. Risso, M.; Daghero, F.; Motetti, B.A.; Pagliari, D.J.; Macii, E.; Poncino, M.; Burrello, A. Optimized Deployment of Deep Neural Networks for Visual Pose Estimation on Nano-drones. arXiv 2024, arXiv:cs.CV/2402.15273. [Google Scholar]
  25. Farrell, J.L.; Stuelpnagel, J.C.; Wessner, R.H.; Velman, J.R.; Brook, J.E. A Least Squares Estimate of Satellite Attitude (Grace Wahba). SIAM Rev. 1966, 8, 384–393. [Google Scholar] [CrossRef]
  26. Crassidis, J.; Andrews, S.; Markley, L.; Ha, K. Contingency Designs for Attitude Determination of TRMM. In Proceedings of the Flight Mechanics/Estimation Theory Symposium, Greenbelt, MD, USA, 15–18 May 1995. [Google Scholar]
  27. Markley, L. Statistical Attitude Determination. In Proceedings of the Encyclopedia of Aerospace Engineering; John Wiley & Sons, Ltd.: Hoboken, NJ, USA. [CrossRef]
  28. Diebel, J. Representing Attitude: Euler Angles, Unit Quaternions, and Rotation Vectors. Matrix 2006, 58, 1–35. [Google Scholar]
  29. Koecher, M.; Remmert, R. Hamilton’s Quaternions. Numbers 1991, 123, 189–220. [Google Scholar] [CrossRef]
  30. Stanfield, K.S.; Younes, A.B. Dual Attitude Representations and Kinematics for Six-Degree-of-Freedom Spacecraft Dynamics. IEEE Access 2023, 11, 34349–34358. [Google Scholar] [CrossRef]
  31. Shuster, M.D.; Oh, S.D. Three-Axis Attitude Determination from Vector Observations. J. Guid. Control 1981, 4, 70–77. [Google Scholar] [CrossRef]
  32. Crassidis, J.L. Spacecraft Attitude Determination. In Proceedings of the Encyclopedia of Systems and Control; Springer International Publishing: Berlin/Heidelberg, Germany, 2021; pp. 2097–2104. [Google Scholar] [CrossRef]
  33. de Ruiter, A.H.J.; Forbes, J.R. On the Solution of Wahba's Problem on SO(n). J. Astronaut. Sci. 2013, 60, 1–31. [Google Scholar] [CrossRef]
  34. Cheng, Y.; Crassidis, J.L. A Total Least-Squares Estimate for Attitude Determination. In Proceedings of the AIAA Scitech 2019 Forum, San Diego, CA, USA, 7–11 January 2019. [Google Scholar] [CrossRef]
  35. Rockafellar, R.T. Lagrange Multipliers and Optimality. SIAM Rev. 1993, 35, 183–238. [Google Scholar] [CrossRef]
  36. Stanfield, K. Advanced Pose Developments for Aerospace Applications in Estimation, Modeling, and Control. Master’s Thesis, School of Aerospace Engineering, San Diego State University, San Diego, CA, USA, 2024. [Google Scholar]
  37. Mortari, D.; Markley, F.L.; Singla, P. Optimal Linear Attitude Estimator. J. Guid. Control. Dyn. 2007, 30, 1619–1627. [Google Scholar] [CrossRef]
Figure 1. Relative coordinate frame system definition.
Figure 2. Definition of q-method pose estimation system.
Figure 3. Quanser drone and five reference beacons.
Figure 4. (a) RGB imagery from drone camera. (b) Partial Vicon system within SDSU SPACE lab.
Figure 5. Beacon detection and location definitions.
Figure 6. Reference beacons measured with −15° tilted drone.
Figure 7. Procedural validation for q-method pose estimation.
Figure 8. Experimental visualization. [Left: 0° roll. Right: −15° roll].
Figure 9. Pose estimation results.
Figure 10. Principal angle error between true and estimated quaternions (Top) and norm of residual error for numerical solver (Bottom) across iterations.
Figure 11. Attitude error between true and estimated quaternion elements [$\delta q_1$, $\delta q_2$, $\delta q_3$, $\delta q_4$] across iterations for numerical solver.
Figure 12. Error for estimated position elements [$p_x$, $p_y$, $p_z$] across iterations for numerical solver.
Table 1. Quaternion operations.

| Operation | Definition |
| --- | --- |
| Addition | $a + b = (a_0 + b_0,\ \bar{a} + \bar{b})$ |
| Scalar multiplication | $\lambda a = (\lambda a_0,\ \lambda\bar{a})$ |
| Multiplication | $ab = (a_0 b_0 - \bar{a}\cdot\bar{b},\ a_0\bar{b} + b_0\bar{a} + \bar{a}\times\bar{b})$ |
| Conjugate | $a^* = (a_0,\ -\bar{a})$ |
| Dot product | $a \cdot b = (a_0 b_0 + \bar{a}\cdot\bar{b},\ \bar{0}_{3\times 1})$ |
| Cross product | $a \times b = (0,\ a_0\bar{b} + b_0\bar{a} + \bar{a}\times\bar{b})$ |
| Norm | $\|a\| = \sqrt{a \cdot a}$ |
Table 2. Simulation initial condition inputs.

| Parameter | Sym. | Unit | Input 0° Roll | Input −15° Roll |
| --- | --- | --- | --- | --- |
| Num. Ref. Measurements | $N$ | – | 5 | 5 |
| True Orientation | $q_{B/I}$ | – | [0.9999; 0.0007; −0.0014; −0.0044] | [0.9913; −0.1309; 0.0003; 0.0004] |
| True Position | $\bar{r}_{B/I}^I$ | [m] | [0.1454, −0.0038, 1.0788] | [0.1539, −0.0127, 1.0842] |
| Ref. Measurement Error | $\sigma_i$ | [%] | [0.28, 0.29, 0.23, 0.05, 0.40] | [0.63, 0.64, 0.01, 0.27, 0.36] |
| Mean Measurement Error | $\sigma_i^2$ | [%] | 0.2487 | 0.3809 |
| Average Sensor Error | $\bar{\sigma}$ | [%] | [0.25, 0.24, 0.30, 0.33, 0.25] | [0.29, 0.40, 0.31, 0.42, 0.22] |
| Measurement Vectors (w/Error) | $\bar{b}_i$ | [m] | (1.317, 0.326, 0.241), (1.390, 0.186, 0.080), (1.472, 0.201, 0.336), (1.339, 0.490, 0.102), (1.464, 0.192, 0.261) | (1.327, 0.276, 0.312), (1.367, 0.224, 0.036), (1.465, 0.252, 0.267), (1.459, 0.091, 0.307), (1.324, 0.470, 0.032) |
| True Reference Positions | $\bar{r}_{D/I}^I$ | [m] | (1.467, 0.335, 1.316), (1.530, 0.194, 0.994), (1.616, 0.190, 1.411), (1.606, 0.1821, 0.814), (1.485, 0.481, 1.178) | (1.469, 0.333, 1.316), (1.533, 0.193, 0.993), (1.618, 0.188, 1.409), (1.608, 0.180, 0.812), (1.485, 0.478, 1.177) |
| Raw Measurements | $\bar{b}_i$ (raw) | [m] | (1.416, 0.131, 0.241), (1.507, 0.013, 0.089), (1.495, 0.279, 0.315), (1.495, 0.274, 0.276), (1.355, 0.483, 0.084) | (1.404, 0.173, 0.142), (1.507, 0.008, 0.140), (1.548, 0.171, 0.353), (1.478, 0.314, 0.213), (1.363, 0.456, 0.198) |
| Camera Correction Factor | $\delta\bar{r}_{D/B}^B$ | [m] | (0.099, 0.196, 0.000), (0.117, 0.173, 0.009), (0.023, 0.078, 0.021), (0.156, 0.216, 0.378), (0.109, 0.291, 0.346) | (0.077, 0.103, 0.170), (0.140, 0.232, 0.104), (0.083, 0.081, 0.087), (0.019, 0.222, 0.520), (0.038, 0.014, 0.167) |
| Weight Parameter | $\alpha$ | – | 1.0 | 1.0 |

Each triple lists one beacon's [x, y, z] components.
Table 3. Comparison of error results for different roll angles.

| Roll Angle | Estimation Method | Position Error $\delta p$ [m] | Angle Error $\delta\phi$ [deg] |
| --- | --- | --- | --- |
| 0° | q-Method Pose | 0.01965 | 0.4493 |
| 0° | OLAE | 0.01898 | 0.4544 |
| 15° | q-Method Pose | 0.01708 | 0.7159 |
| 15° | OLAE | 0.01720 | 0.7202 |
