Static Attitude Determination Using Convolutional Neural Networks

The need to estimate the orientation between frames of reference is crucial in spacecraft navigation. Robust algorithms for this type of problem have been built by following algebraic approaches, but data-driven solutions are becoming more appealing due to their stochastic nature. Hence, this paper proposes an approach based on convolutional neural networks to deal with measurement uncertainty in static attitude determination problems. PointNet models were trained with datasets containing different numbers of observation vectors, which were used to build attitude profile matrices, the inputs of the system. The uncertainty of measurements in the test scenarios was taken into consideration when choosing the best model. The proposed model, based on convolutional neural networks, proved to be less sensitive to higher noise than traditional algorithms, such as the singular value decomposition (SVD)-based method, the q-method, the quaternion estimator (QUEST), and the second estimator of the optimal quaternion (ESOQ2).


Introduction
The process of finding rotations between frames, although classic, is a recurrent one. It consists of providing information about the difference between some objects of interest as seen from the perspective of the body frame and in the desired frame of reference. This difference is represented as a rotation operation; thus, the goal is to find the rotation axis and rotation angle that best rotate a frame given by ($\vec{a}$, $\vec{b}$, $\vec{c}$) to another frame given by ($\vec{x}$, $\vec{y}$, $\vec{z}$). Finding the rotation between frames is applied in many fields, such as robotics [1,2], navigation and control [3], and computer graphics [4,5], among others [6].
In spacecraft navigation, this process is called attitude determination, and typically, various sensors are used to collect information about the objects of interest. These data are returned as a vector, known as a measurement or observation vector. A given measurement vector provides two orientation parameters, but three orientation parameters are necessary to determine an attitude. This leads to an underdetermined problem if only one vector (N = 1) is used, since the number of parameters is less than the minimum required. On the other hand, the problem becomes overdetermined if two or more measurement vectors (N ≥ 2) are used, since they provide at least four orientation parameters, i.e., more than the three that are needed.
Following this reasoning, a number of measurements such that 1 < N < 2 would be needed to determine a proper attitude. However, it is not possible to obtain a non-natural number of measurements; thus, a set of measurement vectors is needed to estimate the proper orientation. The first proposed solution was Wahba's problem [7], a least-squares approximation given by

$L(A) = \frac{1}{2} \sum_{i=1}^{N} a_i \left\| \vec{b}_i - A\vec{r}_i \right\|^2, \qquad (1)$

where $\vec{b}_i$ is an observation vector in the body frame, $\vec{r}_i$ is a vector in the reference frame, $a_i$ is a weight for each observation vector, and $A$ is the optimal attitude matrix. This first approach to attitude determination led to the development of several new algorithms derived directly from Equation (1), resulting in a trade-off: more robust models are slower to converge and require more computational effort, while very fast algorithms may have low accuracy [8], thus requiring careful fine-tuning.
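As a concrete illustration, Wahba's loss from Equation (1) can be sketched in a few lines of NumPy; the function and variable names below are illustrative, not taken from any of the cited algorithms:

```python
import numpy as np

def wahba_loss(A, b, r, a):
    """Wahba's loss L(A) = 1/2 * sum_i a_i * ||b_i - A @ r_i||^2.

    A : (3, 3) candidate attitude matrix
    b : (N, 3) observation vectors in the body frame
    r : (N, 3) reference vectors
    a : (N,)  non-negative weights
    """
    residuals = b - r @ A.T                     # each row: b_i - A @ r_i
    return 0.5 * np.sum(a * np.sum(residuals**2, axis=1))
```

For noise-free measurements, the loss vanishes at the true attitude, which is the property the optimal algorithms exploit.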
To solve the problem, it is possible to use a combination of artificial neural networks (ANNs) with Kalman filters (KFs) [9] or even recurrent neural networks (RNNs) in order to estimate the new attitude [10]. These approaches require some previous attitude information in order to adjust the predictions, that is, they depend on time information. Therefore, these types of algorithms are known as attitude estimation or dynamic attitude determination algorithms. This paper focuses on another type of algorithm that does not need previous attitude information for the prediction, resulting in a static attitude determination.
The quaternion estimator (QUEST) method [11,12], the singular value decomposition (SVD)-based method [13], the second estimator of the optimal quaternion (ESOQ2) method [14], and the q-method [15,16] belong to the static attitude determination class of algorithms, since their inputs are observation vectors taken simultaneously, or close enough in time that spacecraft motion between the measurements can be ignored. Therefore, they can determine the attitude at that given instant. However, the elimination of the time variable makes this class of algorithms sensitive to the quality of the input data, i.e., the precision of the sensors drastically affects the quality of the results. Due to their nature, these algorithms play a crucial role in the "lost-in-space" scenario [17], where the spacecraft has no previous information about its attitude. The navigation system must then instantaneously infer an orientation by relying on the quality of the measurements taken by its sensors, since this directly affects the performance of the algorithms [18].
In this context, this paper focuses on applying machine learning to mitigate the impact of measurement perturbations on the predictions. Specifically, it proposes an approach based on a convolutional neural network (CNN) architecture for determining the rotation difference between two coordinate frames. The contribution of this paper lies in the strategy of choosing the model by considering its uncertainty. Furthermore, this paper shows how the model's performance compares with that of traditional algorithms and whether this data-driven approach to static attitude determination can mitigate the impact of input distortion.

Attitude Representations
Rotation matrices can be parametrized in many different ways; the most commonly used tools are the Euler axis/angle, Euler angles, and quaternions. The Euler axis/angle describes a rotation as a unit vector $\vec{e}$ that lies along the rotation axis and a rotation angle φ about this axis. Although it has a clear physical interpretation, its axis is undefined for φ ∈ {0, 2π}; that is, it is not a continuous representation.
Euler angles are defined as a set of three parameters: φ, θ, and ψ. These angles break a single rotation into a sequence of three rotations using two intermediate frames throughout the process, which makes the problem relatively easy to interpret. However, for θ = ±90°, the solution degenerates, leading to a problem called "gimbal lock", which causes the φ and ψ rotations to act about the same axis. That is why one should be careful when using Euler angle representations.
Quaternions are four-dimensional vectors that embed the rotation axis and the rotation angle in a complex domain. Despite not having an apparent physical interpretation, they are the preferred parametrization among specialists, since they do not require trigonometric functions and avoid the "gimbal lock" issue. However, this representation has an interesting property: $\vec{q}$ and $-\vec{q}$ represent the same rotation, i.e., antipodal quaternions are equivalent. Therefore, quaternions are not a continuous representation either.
In trying to find a valid continuous representation for SO(3), a new method was proposed by [19]. They assumed that a representation space R produced by a neural network could be mapped into the original space X ∈ SO(3) through a function f : R → X and mapped back through a function g : X → R.
The function f is a mathematical function used as part of the forward pass of the network at both training and inference time. For 6D representations, this function is guaranteed to return an orthogonal 3 × 3 matrix, to which the loss function can then be applied. For a 6D representation, this mapping function is defined as

$f([\vec{a}_1 \; \vec{a}_2]) = [\vec{b}_1 \; \vec{b}_2 \; \vec{b}_3], \quad \vec{b}_1 = N(\vec{a}_1), \quad \vec{b}_2 = N(\vec{a}_2 - (\vec{b}_1 \cdot \vec{a}_2)\vec{b}_1), \quad \vec{b}_3 = \vec{b}_1 \times \vec{b}_2, \qquad (2)$

where N(·) stands for the normalization function, × represents the vector cross product, · represents the vector dot product, and the column vectors $\vec{a}_1$ and $\vec{a}_2$ are vectors from R that are estimated by the network.
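A minimal NumPy sketch of this Gram-Schmidt-style mapping, following the construction of [19] (the function name is illustrative):

```python
import numpy as np

def six_d_to_rotation(a1, a2):
    """Map a 6D representation (two 3-vectors a1, a2) to a matrix in SO(3).

    Implements the Gram-Schmidt mapping f of Zhou et al. [19]: normalize a1,
    orthogonalize a2 against it, and complete a right-handed frame.
    """
    b1 = a1 / np.linalg.norm(a1)              # b1 = N(a1)
    b2 = a2 - (b1 @ a2) * b1                  # remove the component along b1
    b2 /= np.linalg.norm(b2)                  # b2 = N(...)
    b3 = np.cross(b1, b2)                     # b3 = b1 x b2
    return np.stack([b1, b2, b3], axis=1)     # columns b1, b2, b3
```

Because the output is built from an orthonormal, right-handed triad, it is a valid rotation matrix for any non-degenerate input, which is what makes the representation continuous and learnable.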

Related Work
As mentioned before, if a spacecraft needs to know its orientation, some prior information must be gathered by sensors. They return the data as three-dimensional unit vectors representing the pointing direction of an object of interest with respect to the body frame. From these data, it is possible to find a proper rotation that will project the observations onto the target frame. There is an extensive field of study covering regression in rotation representations by using ANNs. In the computer vision field, there is a problem that is similar to the one described before, whose premise is to find the orientation of objects with respect to the camera. This task also shares the same goal as attitude determination, that is, it determines the orientations or poses of objects of interest from one coordinate frame to another. In the context of computer vision, this task is referred to as pose estimation.
Xiang et al. [20] tried to estimate the rigid transformation from the object coordinate frame O to the camera coordinate frame C given an input image. Each image contains a collection of objects, and the main task is to find their poses (location and orientation), represented by a 3D rotation R and a 3D translation T, with the rotation parametrized by a quaternion and thus written as $R(\vec{q})$. They used the mean squared error loss function to minimize the error, measuring the distance between a set of 3D points $\vec{x} \in O$ rotated by the ground-truth rotation $R(\vec{q})$ and by the predicted rotation $R(\hat{\vec{q}})$. However, using this approach, the network performed poorly when the rotation angles were close to 0° and 180°.
Do et al. [21] used an architecture called Mask R-CNN, which took an RGB image containing several objects as input and returned their classes, segmentation masks, and bounding boxes. They added a fourth output on top of this architecture, which estimated the orientation and localization of each object. This output layer was a four-dimensional vector, where the first three scalar values represented the rotation and the last one represented the translation. However, instead of using representations such as Euler angles and quaternions, they made use of the Lie algebra so(3) to represent the rotation, since a skew-symmetric matrix built from an arbitrary element of so(3) can be mapped to SO(3) through the exponential map. Hence, they only needed to regress a vector $\vec{x} \in \mathbb{R}^3$ to define a proper rotation.
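The exponential map from so(3) to SO(3) mentioned above can be sketched with Rodrigues' formula; this is a standard construction, not the authors' code:

```python
import numpy as np

def so3_exp(x):
    """Exponential map from so(3) to SO(3) via Rodrigues' formula.

    x is an arbitrary vector in R^3: its norm is the rotation angle and
    its direction the rotation axis.
    """
    theta = np.linalg.norm(x)
    if theta < 1e-12:
        return np.eye(3)                      # exp(0) is the identity rotation
    K = np.array([[0.0, -x[2], x[1]],
                  [x[2], 0.0, -x[0]],
                  [-x[1], x[0], 0.0]]) / theta   # skew-symmetric unit-axis matrix
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)
```

This is why regressing only three numbers suffices: every vector in $\mathbb{R}^3$ maps to a valid rotation matrix.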
Point clouds can be interpreted as 3D vectors that describe a shape or object in some arbitrary coordinate frame; hence, this type of data can provide a lot of useful information, such as depth, volume, orientation, and location. In trying to explore this type of information, a novel approach [22] was proposed to address the 6D pose estimation problem: Instead of using RGB images, they used point cloud segments as the input to the system. The proposed neural network was an adaptation of the PointNet architecture [23], which takes a set of point clouds as input and returns a rotation matrix that best describes the orientation of the input data.
Those works addressed the 6D pose estimation task by considering different types of input data, such as RGB images and point clouds, as well as distinct attitude representations. Although all of them achieved good results for the task, they presented some limitations. First, their attitude representations were not adequate for the job because they showed discontinuities for certain angles. Second, the input data did not offer clear information about the orientation or relationship between the coordinate systems, thus demanding more effort from the network to detect those features.
In this context, this paper proposes a model that uses a different type of input data, which is denoted as an attitude profile matrix; this embeds the similarity between two coordinate frames and alleviates the model's effort. In addition, the proposed model uses another type of attitude representation that can be learned and does not suffer from discontinuities.

Methodology
PointNet is a neural network architecture that accepts raw point clouds as input and is robust with respect to input perturbation and corruption [24]. Taking into account the promising results of PointNet, several extensions have been developed [25][26][27][28]. Hence, PointNet was chosen as the base architecture for the experiments presented in this paper. Figure 1 shows the network structure. Some modifications have been made to the traditional PointNet architecture in order to improve its performance. In the vanilla PointNet, a set of vectors in the original coordinate frame and another set in the target coordinate frame are used as input and fed into the model in pairs. However, in the proposed model, they were replaced by the attitude profile matrix. In fact, the same pairs of vectors were used to build the profile matrix according to Equation (3). By flattening this matrix, an array with the shape (P, 9, 1) was generated, where P represents the batch size. However, it was considered that all weights a i were equal (star-tracking scenario) because it led to lower errors when compared to random weights, even in the evaluation phase, as can be seen in Section 4.1.
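The attitude profile matrix is conventionally defined as $B = \sum_i a_i \vec{b}_i \vec{r}_i^{T}$; assuming this is what Equation (3) denotes, building and flattening it for the network input can be sketched as:

```python
import numpy as np

def attitude_profile(b, r, a=None):
    """Attitude profile matrix B = sum_i a_i * b_i @ r_i.T.

    b : (N, 3) body-frame observation vectors
    r : (N, 3) reference-frame vectors
    a : (N,) weights; None means equal weights (star-tracking scenario)
    """
    n = b.shape[0]
    if a is None:
        a = np.full(n, 1.0 / n)               # equal weights
    # B[j, k] = sum_i a_i * b_i[j] * r_i[k]
    return np.einsum('i,ij,ik->jk', a, b, r)

# The 3x3 matrix is flattened to 9 values before being fed to the network:
# x = attitude_profile(b, r).reshape(9)
```

The profile matrix already encodes the correlation between the two frames, which is what relieves the network from having to discover that relationship from raw vector pairs.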
All one-dimensional convolutions were built using a kernel of size 9. However, in all but the last convolution, the shape of the feature maps was maintained; as a result, after applying the convolution operation, the second dimension of the tensor did not change. Next, all rectified linear unit (ReLU) [29] activation functions were replaced with the Swish activation [29] because ReLU turned out to lead to more significant errors during training and evaluation. Finally, as the last operation, the mapping function f, which maps a latent rotation representation R to SO(3), was implemented as in [19].
Dropout layers were added after each activation function [30]. The reasons were twofold: to act as a regularizer to prevent overfitting and to measure the model uncertainty.
Since the dropout technique randomly drops some neurons with a probability p at each forward pass, each update to a layer during training is performed with a different view of the configured layer, forcing the knowledge to be spread out across all neurons equally, thus acting as a regularizer. This can be interpreted as if several neural networks are being trained simultaneously because at each forward pass, the model "changes" its architecture.
Notice that regular dropout can be interpreted as a Bayesian approximation of a Gaussian process [31]. By applying dropout during both training and inference, the predictions can be analyzed as if they had been made by many different networks, that is, as a Monte Carlo sample from the space of all possible networks; hence the name Monte Carlo Dropout (MCD) [32]. Therefore, a distribution of predictions is gathered, and a measure of the model's uncertainty can be evaluated.
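A toy sketch of Monte Carlo Dropout on a single linear layer, assuming inverted-dropout scaling; the real model applies the same idea across its full forward pass, and all names here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(42)

def mc_dropout_forward(x, W, p, n_samples=1000):
    """Keep dropout active at inference and collect a distribution of
    predictions for the same input x.

    x : (d,) input vector, W : (k, d) toy weight matrix, p : drop probability.
    Returns the predictive mean and standard deviation over n_samples passes.
    """
    preds = []
    for _ in range(n_samples):
        mask = rng.random(x.shape) >= p                   # drop each unit w.p. p
        x_dropped = np.where(mask, x, 0.0) / (1.0 - p)    # inverted-dropout scaling
        preds.append(W @ x_dropped)                       # stochastic forward pass
    preds = np.asarray(preds)
    return preds.mean(axis=0), preds.std(axis=0)
```

The spread of the sampled predictions (here, the standard deviation) is what serves as the uncertainty measure.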
In an actual application, the star-tracker algorithm returns two sets of vectors from a star image: the body vectors, which represent each star in the body frame, and the reference vectors, which represent the same stars as retrieved from the satellite's onboard database. The proposed model uses these pairs of vectors to build the attitude profile matrix according to Equation (3). Afterward, the model reads a flattened version of this matrix and estimates the orientation matrix.

Implementation Details
The traditional algorithms were designed to determine the best attitude representation given a set of unit observation vectors and their respective weights. In essence, those vectors are random, and the only requirement is that they must be unitary and have a boresight axis, which is the axis that expresses the direction of the observed object.
Following this reasoning, the dataset was built according to the number of samples and the number of observation vectors per sample. First, random unit vectors with arbitrary boresight axes were generated, forming an array with shape (S, O, 3), where S stands for the number of samples, O is the number of observation vectors, and the last dimension is the number of components of each vector, set to 3 since the data lie in $\mathbb{R}^3$. This first collection of vectors was chosen to be the reference vectors $\vec{r}$.
The random rotation matrices A true were generated through Equation (4), where the angles were drawn from a uniform distribution ranging from −π rad to π rad, and the axis vectors were drawn from a uniform distribution.
The random direction cosine matrices (DCMs) with shape (S, 3, 3) served as the ground-truth attitude matrices [33]. The reference vectors were simply rotated by each attitude matrix to generate the body vectors $\vec{b}$ with shape (S, O, 3) as follows:

$\vec{b}_i = A_{true}\,\vec{r}_i + \vec{n}_i, \qquad (5)$

where $\vec{n}_i$ is a vector of measurement errors drawn from a zero-mean Gaussian distribution $\mathcal{N}(0, \sigma_i)$ with standard deviations $10^{-6} \le \sigma_i \le 0.01$, serving as additive white Gaussian noise (AWGN) [34].
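Assuming the steps above, the synthetic dataset generation can be sketched as follows; shapes and distributions follow the text, while the helper name is hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_dataset(S, O, sigma=1e-3):
    """Generate (reference, body, attitude) triples as described in the text.

    Rotations use a uniform angle in [-pi, pi] about a random unit axis;
    body vectors follow Equation (5): b = A_true @ r + n.
    """
    # random unit reference vectors, shape (S, O, 3)
    r = rng.standard_normal((S, O, 3))
    r /= np.linalg.norm(r, axis=-1, keepdims=True)

    # random axis/angle -> DCM via Rodrigues' formula, shape (S, 3, 3)
    axes = rng.standard_normal((S, 3))
    axes /= np.linalg.norm(axes, axis=-1, keepdims=True)
    angles = rng.uniform(-np.pi, np.pi, size=S)
    A = np.empty((S, 3, 3))
    for s in range(S):
        e, phi = axes[s], angles[s]
        K = np.array([[0.0, -e[2], e[1]],
                      [e[2], 0.0, -e[0]],
                      [-e[1], e[0], 0.0]])
        A[s] = np.eye(3) + np.sin(phi) * K + (1 - np.cos(phi)) * (K @ K)

    # body vectors: rotate each r_i and add white Gaussian noise
    b = np.einsum('sij,soj->soi', A, r) + rng.normal(0.0, sigma, size=(S, O, 3))
    return r, b, A
```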

Training
The dataset was generated with a total number of samples $S = 2^{13}$, of which 30% was used as the testing dataset and 4% as the validation dataset. The optimizer used was the Adam algorithm [35] with a learning rate of $10^{-4}$ and a weight decay of $10^{-4}$ applied every 500 epochs. The learning rate was also reduced by a factor of ten every 500 epochs.
The work was developed using the TensorFlow 2 framework, and the models were trained on an Nvidia GeForce GTX 1060 6 GB GPU (graphics processing unit). To train the network, the geodesic loss function defined by Equation (6) was used to minimize the error:

$L_{geo} = \arccos\!\left(\frac{\operatorname{tr}(A) - 1}{2}\right), \qquad (6)$

where A is the rotation difference given by the multiplication $A_{true} A_{pred}^{T}$. This equation was also used as a metric, along with Wahba's loss, defined by Equation (1), where all weights $a_i$ were considered equal during the evaluation. The argument of the arccosine in Equation (6) was clipped to the range [−1 + eps, 1 − eps] to avoid numerical issues, where eps = $10^{-7}$ is sufficient.
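Assuming Equation (6) is the standard angle-from-trace geodesic distance, the clipped loss can be sketched as:

```python
import numpy as np

EPS = 1e-7  # clipping margin, as in the text

def geodesic_loss(A_true, A_pred):
    """Geodesic distance between two rotations: the rotation angle of
    A_true @ A_pred.T, with the arccos argument clipped to avoid NaNs
    at the boundaries of its domain."""
    A = A_true @ A_pred.T                     # rotation difference
    cos_phi = (np.trace(A) - 1.0) / 2.0
    return np.arccos(np.clip(cos_phi, -1.0 + EPS, 1.0 - EPS))
```

Note that the clipping bounds the minimum achievable loss at roughly $\sqrt{2\,\text{eps}} \approx 4.5 \times 10^{-4}$ rad, a deliberate trade of a small bias for numerical stability.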
To validate the model's performance, several training routines were scheduled. For each routine, different values for O were used to check if this could bring a meaningful impact on the model's robustness. Different models were trained, considering a value for O that ranged from 3 to 7. Every training was performed by considering 2000 epochs with a batch size of 64. In addition, for each O, three different dropout rates were considered: 10%, 15%, and 20%.

Model Training
Initially, the model was trained with an earlier version of the architecture presented in Section 3, where the pairs of body and reference vectors, which were concatenated along the last dimension, were defined as the input to the network. The number of observation vectors was set as the kernel size parameter in the convolution layers. In all except the last convolution, the shape of the feature maps was maintained. In this assessment, the ReLU activation function was used instead of Swish.
Variations were made to improve the model, since this assessment did not achieve acceptable error values. The pairs of vectors were replaced by the attitude profile matrix, and the architecture presented in Figure 1 was formulated. However, ReLU was considered instead of Swish. Throughout this paper, this model will be called B-ReLU. This model had two versions: B-ReLU-A and B-ReLU-B. The former considered a star-tracking scenario, and the latter considered the actual weights for each pair of inputs when building the attitude profile matrices.
It was possible to notice that using the actual weights to build the B matrix led to poor results; thus, it was decided to use the star-tracking scenario. Since the Swish activation function showed more promising results than the ReLU activation function and its variations, this function was defined as the default in the following analyses. Then, the final model was built (see Section 3), which was called B-Swish.
After training the models, their performance was measured using the validation set. The models whose parameters corresponded to the smallest Wahba's loss were tested. As shown in Figures 2 and 3, increasing the hyperparameter O led to better results on the validation dataset. This decrease in error indicates that the network captured more insight about the attitude profile matrix when using more observation vectors. Moreover, as the dropout rate increased, the network tended to produce worse results, indicating that the parameters were over-penalized.
The network struggled to converge when using pairs of body and reference vectors as inputs. Moreover, when using the accurate weights to build the attitude profile, worse results were obtained. The Swish activation function improved the network performance; thus, it was chosen as the final architecture. It was still necessary to verify which of the B-Swish models was more reliable and if the results really reflected the system's expected behavior when subjected to unseen scenarios.

Model Choice
The models' performance was verified by using the twelve test cases specified by Markley [36]. Each test case contains a collection of vectors $\vec{r}_i$ and their corresponding measurement errors $\sigma_i$. These $\vec{r}_i$ vectors represent what, in practice, would be the output of the spacecraft's onboard sensors, such as star trackers, sun sensors, and magnetometers. Notice that these data would be returned as three-dimensional vectors. The $\sigma_i$ errors represent how accurate the respective observation vectors are, that is, they hold the information about the precision of the sensors that generated these sets of vectors. The body vectors $\vec{b}_i$ were constructed by following Equation (5), where $A_{true}$ is defined by Equation (7).

The first four cases consider an ideal scenario, where all observation vectors lie along the boresight axes and are orthogonal to each other. The fifth demonstrates a case where the vectors are orthogonal but do not lie along the boresight axes. From the sixth scenario onwards, a narrow-field-of-view star tracker is considered, where the rotation about the tracker's boresight is much less well determined than the pointing of the boresight. Despite being handcrafted, the tests simulate interesting scenarios that could happen in an actual application and give an idea of an algorithm's behavior in real situations.
The twelve cases were specified as follows: Case 1. In this case, there are three measurement vectors with measurement noise $\sigma_1 = \sigma_2 = \sigma_3 = 10^{-6}$ rad, as follows:
For each case, the attitude profile matrix was built as defined in Equation (3) in order to serve as the input to the neural network, where a star-tracking scenario was considered, that is, each observation vector was equally weighted. Then, with the dropout layers enabled, N forward passes were executed through the network to sample a set of possible outputs. In those tests, the definition N = 1000 was set.
After each forward pass, Wahba's error was evaluated using Equation (1) with the weights given by the test case, defined in Equation (12b), and with $A_{pred}$ in place of $A_{true}$ to measure the true distance. This led to an array of errors of size N for each test case. At the end of the N forward passes, the mean error was computed for each test case, as shown in Table 1. The error values are displayed on a logarithmic scale, where the bold entries represent the smallest values.
If only the errors displayed in Table 1 were considered when choosing the most eligible model, it would not be possible to draw any meaningful insight: the models trained with four and five observations performed best, and a dropout rate of 10% was, in general, better than the other rates. Thus, the procedure described in [37] was followed, which evaluates the rotation error with respect to the average rotation matrix of the predicted DCMs. Given the set of predicted DCMs $\{\hat{R}_j \mid j = 1, 2, \dots, N\}$, it is possible to calculate an average rotation matrix $R_{avg}$, that is, a matrix that represents the mean of a distribution of rotation matrices. This average rotation was found as the orthogonal projection onto SO(3) of the arithmetic mean $\bar{R} = N^{-1} \sum_{j=1}^{N} \hat{R}_j$; this can be further reviewed in [38], which uses an SVD approach to evaluate $R_{avg}$.
Once $R_{avg}$ was determined, the noise matrices $\Delta R_j$ could be evaluated. These matrices hold the orientation difference between each predicted DCM $\hat{R}_j$ and the average DCM $R_{avg}$, found by performing the following matrix multiplication:

$\Delta R_j = \hat{R}_j R_{avg}^{T}, \qquad (13)$

where each $\Delta R_j$ is parametrized by a rotation axis $\vec{e}_j$ and a rotation angle $\varphi_j$. After finding the noise matrices, their respective Euler vectors $\vec{u}_j = \varphi_j \vec{e}_j$ were extracted, and a covariance matrix was calculated as follows:

$C(\vec{u}) = \frac{1}{N} \sum_{j=1}^{N} \vec{u}_j \vec{u}_j^{T}. \qquad (14)$

Since $\vec{u}_j$ is a representation of the difference rotation matrix $\Delta R_j$, taking the diagonal entries of $C(\vec{u})$ gives the variances about the x, y, and z axes of the predicted DCMs with respect to $R_{avg}$. This procedure was performed for each test case to measure the uncertainty of the rotation axes predicted by each model. The measured variances (in degrees²) for each dropout rate are displayed in Figures 4-6.
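A sketch of the averaging and covariance procedure, assuming the SVD projection of [38] and the standard axis extraction from the skew-symmetric part of a rotation matrix (function names are illustrative):

```python
import numpy as np

def average_rotation(Rs):
    """Orthogonal projection of the arithmetic mean onto SO(3) via SVD."""
    M = Rs.mean(axis=0)
    U, _, Vt = np.linalg.svd(M)
    D = np.diag([1.0, 1.0, np.linalg.det(U @ Vt)])   # force det = +1
    return U @ D @ Vt

def axis_covariance(Rs):
    """Covariance of the Euler vectors u_j = phi_j * e_j of the noise
    matrices dR_j = R_j @ R_avg.T; the diagonal gives the variances
    about the x, y, and z axes."""
    R_avg = average_rotation(Rs)
    us = []
    for R in Rs:
        dR = R @ R_avg.T
        phi = np.arccos(np.clip((np.trace(dR) - 1.0) / 2.0, -1.0, 1.0))
        if phi < 1e-12:
            us.append(np.zeros(3))                   # no rotation difference
            continue
        # rotation axis from the skew-symmetric part of dR
        e = np.array([dR[2, 1] - dR[1, 2],
                      dR[0, 2] - dR[2, 0],
                      dR[1, 0] - dR[0, 1]]) / (2.0 * np.sin(phi))
        us.append(phi * e)
    us = np.asarray(us)
    return np.cov(us, rowvar=False)
```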
The models with a dropout rate of 10% had lower variance levels in the three axes for the majority of the test cases. It is worth mentioning that these models were tested with the same dropout rate as that with which they were trained, that is, the models trained with a dropout rate of 10% were tested with the same rate, and so on. By doing this, it was possible to verify whether the number of observation vectors produced the same behavior independent of the dropout rate used. For the $\vec{u}_y$ and $\vec{u}_z$ components, the values were similar, but the model trained with four observations had smaller variances for the $\vec{u}_x$ component compared to the others.
These results led to the choice of the model trained with four observation vectors as the best option. Since the experiments dealt with directional data, the standard equations for the mean and variance could not be used to calculate the uncertainty. Nonetheless, the dropout procedure was still applied during inference, generating different outputs for the same input; therefore, those predictions and the covariance approach could still be used to estimate the uncertainty.

The distance between the true angle (extracted from Equation (7)) and the angles of the predicted attitude matrices $A_{pred}$ was also measured. For each test case, N predictions were taken, with their angles extracted using Equation (6). Figures 7-9 show how the absolute angular differences (in degrees) were spread for the three dropout values. The dispersion of the differences was narrower for the model that used four observation vectors.
It was verified that the model trained with four vectors had the best results. To better understand the models, two new tests were performed. The first evaluated the behavior of the models when using a different dropout rate during testing. The second estimated how capable the models were of retaining knowledge when the number of switched-off neurons was varied. Wahba's error and the variance of the B-Swish-4obs model were evaluated as before, but the dropout rate was swept from 0% to 20% in steps of 2.5%. As shown in Figure 10, as the dropout rate increased, the model trained with 10% could not retain as much knowledge as the other models, even though it achieved lower errors when the dropout layer was disabled (0%).

Similarly, the model trained with 10% became more uncertain of its predictions as the testing dropout rate increased, as can be seen in Figure 11. The upper bound refers to the variance about the x-axis, the lower bound to the z-axis, and the dashed lines to the variance about the y-axis.

The results presented in Figures 10 and 11 are only relevant if the model is running in an unstable environment, that is, one where some neurons could "be lost" during inference; thus, the model has to be robust with respect to the number of active neurons. For the comparison with the traditional algorithms, the best scenario was used, that is, B-Swish-4obs trained with a dropout rate of 10%, since it resulted in the lowest Wahba's error when the dropout layers were disabled.

Comparison with Traditional Algorithms
To confirm that the chosen neural network model is the most appropriate one for the problem in question, a benchmark was carried out using QUEST [39], an SVD-based method [40], the q-method, and ESOQ2. To do so, the twelve test cases defined above were used, but now S samples were generated for each case with their respective measurement noises $\mathcal{N}(0, \sigma_i)$ applied. For the tests, S = 4000 was considered.
For each sample used in the algorithm, Wahba's error was calculated using Equation (1) with the values of $a_i$ given by each test case. After all samples had been evaluated, the mean error $\frac{1}{S}\sum_{j=1}^{S} L(\hat{R}_j)$ was computed for each case. This procedure was executed for all algorithms. The dropout layers were disabled during inference, since there was no need to evaluate the model uncertainty in this phase. The results are depicted in Figure 12 on a logarithmic scale for better visualization. The SVD-based and q-method algorithms achieved the lowest errors, since they are the most robust. In some situations, such as in cases 3, 4, 8, and 9, all algorithms had similar results, since the inputs to the system had more significant associated measurement noise. Although the neural network performed worse in most cases, its predictions were more constant; that is, when considerable noise was introduced, the system error did not increase proportionally as it did for the other algorithms.
The relationship between the measurement noise and Wahba's error is presented in Figure 13. This analysis considered a weighted average of the measurement noises for each test case, using Equation (12). It is possible to notice that Wahba's error remained constant for the neural network as the weighted average increased, unlike for the traditional algorithms. In addition, the presence of a single accurate observation vector (low-valued σ) in a test case was enough to prevent the conventional algorithms from performing poorly. Case 10 is an example of this, since the traditional algorithms became less affected by the noise despite having two measurement noises with higher values.
Figure 13. Impact of the average measurement noise on model performance (Wahba's error on a log scale).

Conclusions
This paper proposes a neural network model for the problem of static attitude determination based on an existing architecture called PointNet. Modifications to the system's layers, input shapes, and output shapes were made due to poor results in preliminary tests. Changing the ReLU activation function to Swish improved the loss on the validation set. The most noticeable gain came from changing the system's input, probably because the network was unable to obtain meaningful information about how the pairs of observation and reference vectors were correlated. In addition, traditional algorithms use calculation techniques that may require more computational resources, such as SVD and eigenvalue/eigenvector decompositions. With the proposed method, after the neural network is trained on Earth (offline training), the embedded system relies on pre-established weights; thus, it only requires multiplications and additions at runtime.
Using a star-tracking scenario during the training stage resulted in smaller values of Wahba's error during testing than using the actual weights. This is interesting because it removes the need to store the actual weights $a_i$ in order to build the attitude profile matrix B.

The error on the validation data indeed decreased when the number of measurement vectors increased, but this did not reflect the actual behavior when the test case scenarios were run. This might be because the test cases used only up to three vectors, and the models trained with more observation vectors could not generalize well to fewer vectors. However, for the same test cases, the variances of the predicted DCMs were similar about the y and z axes when measuring the uncertainty. On the other hand, they had significant differences about the x axis, especially for cases 6 to 12, where the input vectors had a boresight along the x axis.

While the chosen model could not surpass the traditional algorithms when the input vectors had more significant measurement noise associated with them, it did not suffer from this noise as the other algorithms did, achieving similar values of Wahba's error; therefore, it is less sensitive to the input noise.

As future work, it is suggested that Bingham distribution concepts be incorporated into the model. Since the output is a type of directional data, this may allow a better estimation of the uncertainty, perhaps reducing it and yielding improvements in Wahba's loss. Finally, the effective use and testing of the proposed solution in real situations need to be considered.