Article

Tightly Coupled Visual–Inertial Fusion for Attitude Estimation of Spacecraft

1 Institute of Optics and Electronics of Chinese Academy of Sciences, Chengdu 610209, China
2 Key Laboratory of Science and Technology on Space Optoelectronic Precision Measurement, Chinese Academy of Sciences, Chengdu 610209, China
3 School of Electronic, Electrical and Communication Engineering, University of Chinese Academy of Sciences, Beijing 100049, China
* Author to whom correspondence should be addressed.
Remote Sens. 2024, 16(16), 3063; https://doi.org/10.3390/rs16163063
Submission received: 26 June 2024 / Revised: 16 August 2024 / Accepted: 18 August 2024 / Published: 20 August 2024

Abstract

The star sensor boasts the highest accuracy in spacecraft attitude measurement. However, it is vulnerable to disturbances, including high-dynamic motion, stray light, and various in-orbit environmental factors. These disruptions may lead to a significant decline in attitude accuracy or even abnormal output, potentially inducing a state of disorientation in the spacecraft. Thus, it is usually coupled with a high-frequency gyroscope to compensate for this limitation. Nevertheless, the accuracy of long-term attitude estimation using a gyroscope decreases due to the presence of bias. We propose an optimization-based tightly coupled scheme to enhance attitude estimation accuracy under dynamic conditions as well as to bolster the star sensor’s robustness in cases like lost-in-space. Our approach commenced with visual–inertial measurement preprocessing and estimator initialization. Subsequently, the enhancement of attitude and bias estimation precision was achieved by minimizing visual and inertial constraints. Additionally, a keyframe-based sliding window approach was employed to mitigate potential failures in visual sensor measurements. Numerical tests were performed to validate that, under identical dynamic conditions, the proposed method achieves a 50% improvement in the accuracy of yaw, pitch, and roll angles in comparison to the star sensor only.

1. Introduction

Attitude estimation is undoubtedly the most important module for spacecraft in-orbit services. Currently, the star sensor is the most precise measuring instrument for attitude determination. It detects and analyzes starlight through an image sensor, using the detected stars as reference objects [1]. It then matches the detected stars with the star catalog carried by the spacecraft to obtain the absolute attitude in inertial space [2]. However, the performance of the star sensor is restricted by its exposure time, resulting in an attitude update frequency of around 10 Hz. Hence, it is susceptible to motion blur when operating under high-dynamic conditions [3,4]. In this case, the precision of all three axes, particularly the roll axis, declines noticeably. Moreover, when the spacecraft operates in certain regions and experiences interference from stray light such as sunlight, short-term lost-in-space may occur, requiring re-identification of the star images [5,6]. Furthermore, gyroscopes are extensively utilized for measuring the three-axis angular velocity of spacecraft due to their small size and low cost. By virtue of its fast dynamic response and high attitude output frequency, the gyroscope serves as a self-contained reference system that operates independently of external information. However, the output of the gyroscope is subject to bias, and long-term measurement accumulates errors [7]. Therefore, the gyroscope often operates alongside direct attitude measurement devices to offset the effect of bias [8].
Spacecraft in-orbit services may encounter high-dynamic conditions such as satellite orbit adjustments and spacecraft docking. In this context, high-dynamic conditions refer to situations where the spacecraft’s angular velocity exceeds 3°/s. Therefore, the developmental trend in spacecraft attitude estimation lies in maintaining high precision and stability in output. Combining the star sensor and gyroscope for attitude estimation is advantageous, as they complement each other and compensate for gyroscope measurement bias. Extensive research has been conducted to investigate the fusion of data from both sensors [9,10,11]. The joint attitude estimation of the star sensor and gyroscope can be classified into two combination modes: loose coupling [12] and tight coupling [13,14]. Loose coupling is an attitude-level combination method where the sensors independently perform attitude estimation, and the observed vector is the error attitude between the two sensors. However, this approach renders the system vulnerable to sensor errors, resulting in fast computation speed at the expense of compromised accuracy. Tight coupling is a feature-level combination method that directly utilizes raw data from sensors, such as the star tracker’s star vectors and the gyroscope’s angular rates. The gyroscope is responsible for attitude propagation, while the star vectors correct the gyroscope’s bias [9]. Although it imposes higher requirements on the parameter accuracy of a multi-sensor system, this approach overcomes the limitations of loosely coupled methods and is currently a mainstream solution for spacecraft attitude estimation.
The key to tightly coupled attitude algorithms is multi-sensor data fusion, which is classified as a nonlinear dynamic state-estimation problem. In general, any estimation problem involves the combination of a minimization criterion, a model of the states to be estimated, and an estimation algorithm [15]. In this sense, commonly used estimators can be classified into two categories: Bayesian filters and optimization-based methods. Bayesian filters such as the Kalman filter estimate system states by minimizing the covariance matrix of the estimation error. In spacecraft navigation, attitude evolves over time, and its recursive estimation is typically addressed using nonlinear extensions of the original Kalman filter, such as extended Kalman filter (EKF) [16], error-state Kalman filter (ESKF) [17], and invariant Kalman filter (IKF) [18,19]. Alternatively, sigma-point filters like the unscented Kalman filter (UKF) [20,21,22] are used to propagate the mean and variance of the state by introducing a set of statistical sigma points. However, the standard filtering techniques mentioned above are highly sensitive to the accuracy of prior assumptions and measurements [23,24]. When prior assumptions are inaccurate, or measurements lack precision or are erroneous, these techniques can lead to suboptimal state estimation, resulting in a significant reduction in estimation efficiency. To address this issue, extensive research has been conducted to improve the robustness of these filters. Strategies such as robust cost functions, robust statistics, minimax optimization, or heavy-tailed priors have been employed to handle measurement outliers and to model complex noise distributions. For example, Bellés et al. [25] proposed a nonlinear robust iterative ESKF. This filter, based on robust statistics, mitigates the impact of outlier measurements by utilizing a robust scoring function. Gupta et al. [26] utilized Rao–Blackwellization to integrate a robust EKF with a particle filter into a unified robust Bayesian filtering framework. They further enhanced the robust EKF by tracking multiple linearization points of the EKF, along with its robust cost function. Additionally, Pavlović et al. [27] derived a new statistically linearized M-robust Kalman filtering technique that combines the Huber’s M-robust estimator [28], which approximates optimal maximum likelihood estimation, with the approximate minimum variance Kalman filter. However, the primary focus of such research has been on enhancing the robustness of filters in handling observation outliers and dealing with noise that does not conform to specific distributions.
On the other hand, although optimization algorithms have long been criticized for their computational cost due to the batch processing of observations and predictions, recent years have seen significant advancements in optimization-based spacecraft attitude estimation algorithms. This progress is largely due to improvements in hardware capabilities and the sparse nature of star features. Moving horizon estimation (MHE) is a typical algorithm for optimal state estimation by constructing a least-squares problem (minimum variance). Rao [29] first proposed a general theory for constrained moving horizon estimation. Firstly, it considers a finite sliding window of data when updating the estimation rather than relying solely on the current measurement. Secondly, it directly minimizes the least-squares cost function rather than deriving recursive update formulas based on linearization. Vandersteen [30] and Polóni [31] further discussed various computational acceleration techniques, such as leveraging the sparsity and banded structure of the matrix to be inverted, as well as utilizing hot starts based on previous solutions. Another form of optimization is the use of factor graphs [32]. For instance, Zeng [33] proposed an algorithm based on probabilistic graphical models for GPS/INS integrated navigation systems. This approach transforms the fusion process into a graphical representation with connected factors defined by the measurements. Wang [34] introduced an optimization technique based on the factor graph framework, effectively addressing the challenge of sensor computation delays. The results demonstrate that the optimization exhibits a faster convergence from large initialization errors and superior estimation accuracy in comparison to filter methods. However, current optimization-based methods do not address how to handle scenarios with missing visual observations or how to eliminate the cumulative errors resulting from batch optimization.
Based on the analysis above, existing research largely fails to account for short-term loss of image information, thereby compromising the ability to ensure stable attitude estimation under star tracking modes. Moreover, when continuously operating in high-dynamic scenarios, how to effectively integrate visual–inertial data to improve the accuracy of three-axis attitude estimation as well as how to expedite algorithm convergence are challenges that this paper aims to address. For this reason, this paper presents a tightly coupled visual–inertial fusion method for spacecraft attitude estimation.
The primary contributions of this work include the following:
  • We designed a method utilizing visual–inertial nonlinear optimization for spacecraft attitude estimation. This approach effectively enhances the accuracy of both attitude and gyroscope bias estimation in high-dynamic scenarios;
  • We introduced a loosely coupled initialization scheme, which determines the initial value of gyroscope bias to enhance the convergence of the back-end;
  • Building upon the traditional visual reprojection errors, inertial pre-integration errors, and keyframe marginalization errors, we put forward an additional absolute attitude error term to eliminate the cumulative errors caused by the relative constraints and rectify the estimation error.
The experimental results prove the feasibility of the proposed method.

2. Overview

The structure of the proposed tightly coupled visual–inertial fusion method is depicted in Figure 1. The system starts by preprocessing the raw measurements of the two sensors, where stars are extracted as features and tracked. The gyroscope measurements between two consecutive frames are simultaneously pre-integrated. During the initialization procedure, a loosely coupled approach between the vision and the gyroscope is employed. First, the initial attitude of every frame in the sliding window is determined by star identification through the star sensor. This information is then used to align with the pre-integration of the gyroscope to initialize the bias. The nonlinear optimization estimator jointly optimizes the visual reprojection errors, inertial pre-integration errors, absolute attitude error, and the marginalized information constructed from all the keyframes in the sliding window to achieve the optimal state estimation.
We employ the following notation throughout this work: $(\cdot)^w$ denotes the world frame, which for spacecraft coincides with the International Celestial Reference System (ICRS). $(\cdot)^b$ denotes the body frame, which is defined to be the same as the gyroscope frame. Thus, the attitude of the spacecraft in the inertial frame can be expressed as $R_b^w$. We consider $(\cdot)^c$ as the star sensor frame; there exists a rotation matrix $R_c^b$ from the star sensor frame to the gyroscope frame. $b_k$ is the body frame of the k-th image. Furthermore, we represent rotations using Hamilton quaternions, and the attitude kinematics model is $q_{i+1} = q_i \otimes \delta q$, where $\otimes$ denotes the multiplication operation between two quaternions. $\hat{(\cdot)}$ denotes the actual noisy measurement or estimate of a quantity.
The remaining sections of this paper are organized as follows: Section 3 outlines the preprocessing of raw measurements from the sensors, Section 4 details the nonlinear estimator initialization process, and next, Section 5 provides comprehensive constraint information about the nonlinear estimator. Experiment results are shown and discussed in Section 6. Section 7 concludes with a summary and discussion of our work.

3. Measurement Preprocessing

From this section onwards, a comprehensive introduction to the visual–inertial tightly coupled fusion scheme is provided, which comprises three primary elements. Prior to introducing each component separately, it is assumed that the camera intrinsic parameters (focal length, principal point, and optical lens distortion) have been calibrated, so the intrinsic parameter matrix is known. Moreover, the extrinsic parameters between the camera and the gyroscope were previously determined, and star catalog errors are neglected. Additionally, we assume that the trigger signals were aligned, so the measurement delay between the two sensors is disregarded.

3.1. Gyroscope Pre-Integration

The raw measurement of the gyroscope is given by the following:

$$\hat{\omega} = \omega + b + n_g$$

where $\omega$ denotes the true, noise-free angular velocity. $b$ denotes a slowly varying bias term that consists of two parts: the constant part $b_f$ and the random part $b_\omega$, which is also called the random walk. It can be described as a Wiener process: $\dot{b}_\omega = n_\omega$, $n_\omega \sim \mathcal{N}(0, \sigma_\omega^2)$, and its unit in continuous form is $\mathrm{rad/s^2/\sqrt{Hz}}$. $n_g$ is an additive Gaussian white noise term, $n_g \sim \mathcal{N}(0, \sigma_g^2)$; its unit in continuous form is $\mathrm{rad/s/\sqrt{Hz}}$. Both of their units in discrete form are $\mathrm{rad/s}$.
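To make this measurement model concrete, the sketch below simulates Equation (1) in discrete time under the assumptions stated above; the sample period, noise densities, and struct layout are illustrative assumptions only, not the parameters or code of this work.

```cpp
// Illustrative sketch of the gyroscope model in Equation (1): bias random walk
// plus additive white noise, discretized at a sample period dt.
#include <Eigen/Dense>
#include <cmath>
#include <random>

struct GyroSim {
  double dt = 0.01;         // 100 Hz sampling period (placeholder)
  double sigma_g = 1e-4;    // white-noise density, rad/s/sqrt(Hz) (placeholder)
  double sigma_w = 1e-6;    // bias random-walk density, rad/s^2/sqrt(Hz) (placeholder)
  Eigen::Vector3d bias = Eigen::Vector3d(1e-3, -1e-3, 5e-4);  // b_f plus drifting part b_w
  std::mt19937 rng{42};
  std::normal_distribution<double> gauss{0.0, 1.0};

  // Returns omega_hat = omega + b + n_g and lets the bias drift as a Wiener process.
  Eigen::Vector3d measure(const Eigen::Vector3d& omega_true) {
    Eigen::Vector3d n_g, n_w;
    for (int i = 0; i < 3; ++i) { n_g[i] = gauss(rng); n_w[i] = gauss(rng); }
    // Discrete standard deviations: sigma_g/sqrt(dt) for the noise, sigma_w*sqrt(dt) for the walk.
    Eigen::Vector3d meas = omega_true + bias + (sigma_g / std::sqrt(dt)) * n_g;
    bias += (sigma_w * std::sqrt(dt)) * n_w;
    return meas;
  }
};
```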
Given two consecutive frames, denoted as $b_k$ and $b_{k+1}$, the attitude in the world frame can be propagated by using the gyroscope measurements during the time interval $[t_k, t_{k+1}]$, which can be used as the initial state of the $b_{k+1}$ frame.

$$q_{b_{k+1}}^{w} = q_{b_k}^{w} \otimes \int_{t \in [t_k,\, t_{k+1}]} \frac{1}{2}\,\Omega\!\left(\hat{\omega}_t - b_t\right) q_t^{b_k}\, dt$$

where

$$\Omega(\omega) = \begin{bmatrix} -\lfloor \omega \rfloor_{\times} & \omega \\ -\omega^{T} & 0 \end{bmatrix}, \qquad \lfloor \omega \rfloor_{\times} = \begin{bmatrix} 0 & -\omega_z & \omega_y \\ \omega_z & 0 & -\omega_x \\ -\omega_y & \omega_x & 0 \end{bmatrix}$$
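As an illustration of how Equation (2) can be evaluated in practice, the sketch below propagates the attitude quaternion with a zeroth-order (constant rate per sample) discretization of the integral; this particular discretization and the function interface are assumptions made for illustration, not the authors' implementation.

```cpp
// Propagate q_{b_k}^w to q_{b_{k+1}}^w with bias-corrected gyroscope samples,
// assuming the angular rate is constant over each sample period dt.
#include <Eigen/Dense>
#include <Eigen/Geometry>
#include <vector>

Eigen::Quaterniond PropagateAttitude(Eigen::Quaterniond q_wb,              // q_{b_k}^w (body to world)
                                     const std::vector<Eigen::Vector3d>& gyro,
                                     const Eigen::Vector3d& bias,
                                     double dt) {
  for (const Eigen::Vector3d& w_meas : gyro) {
    const Eigen::Vector3d w = w_meas - bias;          // remove the estimated bias
    const double angle = w.norm() * dt;
    Eigen::Quaterniond dq = Eigen::Quaterniond::Identity();
    if (angle > 1e-12) {
      dq = Eigen::Quaterniond(Eigen::AngleAxisd(angle, w.normalized()));
    }
    q_wb = (q_wb * dq).normalized();                  // right-multiply the body-frame increment
  }
  return q_wb;
}
```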
It can be seen that current attitude depends on the previous states (attitude and bias). However, the optimization method seeks to optimize all states over a period of time, which can lead to redundant calculations. Therefore, the primary motivation for pre-integration is to separate the absolute attitude from the state variables.
$$\beta_{b_{k+1}}^{b_k} = \left(q_{b_k}^{w}\right)^{-1} \otimes q_{b_{k+1}}^{w} = \int_{t \in [t_k,\, t_{k+1}]} \frac{1}{2}\,\Omega\!\left(\hat{\omega}_t - b_t\right) q_t^{b_k}\, dt$$

With $b_k$ serving as the reference frame, the pre-integration term in Equation (4) can be derived solely from gyroscope measurements. However, the bias is still a state to be optimized in the estimator, so its effect needs to be separated from the pre-integration term. To simplify the mathematical formulation, the following assumptions are made: (1) The bias value $b_t$ is assumed to be constant and remains fixed throughout the entire pre-integration calculation. (2) When the bias estimate changes, the pre-integration is approximately corrected using a linear model. (3) Only the first-order linear model for the bias is retained, while higher-order terms are discarded. Therefore, we have the following:

$$\beta_{b_{k+1}}^{b_k} \approx \hat{\beta}_{b_{k+1}}^{b_k} \otimes \begin{bmatrix} 1 \\ \frac{1}{2} J_b^{\beta}\, \delta b \end{bmatrix}$$

where $J_b^{\beta} = \partial \beta_{b_{k+1}}^{b_k} / \partial b$ denotes the Jacobian matrix of the attitude with respect to the bias, and $\hat{\beta}_{b_{k+1}}^{b_k}$ denotes the attitude increment between two consecutive frames without bias error. The derivation details of the continuous-time linearized error dynamics and noise covariance matrix can be found in [35], and are similar to those of the ESKF [17].
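The following sketch illustrates the two steps just described: accumulating the pre-integration term of Equation (4) with the bias frozen at its linearization value, and applying the first-order correction of Equation (5) when the bias estimate changes. The accumulation of the Jacobian $J_b^{\beta}$ follows [35] and is assumed to be available; names and interfaces are illustrative.

```cpp
// Rotation pre-integration (Equation (4)) and first-order bias correction (Equation (5)).
#include <Eigen/Dense>
#include <Eigen/Geometry>
#include <vector>

// Accumulate beta_hat over [t_k, t_{k+1}] with the bias frozen at b_lin.
Eigen::Quaterniond PreintegrateRotation(const std::vector<Eigen::Vector3d>& gyro,
                                        const Eigen::Vector3d& b_lin, double dt) {
  Eigen::Quaterniond beta = Eigen::Quaterniond::Identity();
  for (const Eigen::Vector3d& w_meas : gyro) {
    const Eigen::Vector3d dtheta = (w_meas - b_lin) * dt;
    Eigen::Quaterniond dq = Eigen::Quaterniond::Identity();
    if (dtheta.norm() > 1e-12)
      dq = Eigen::Quaterniond(Eigen::AngleAxisd(dtheta.norm(), dtheta.normalized()));
    beta = (beta * dq).normalized();
  }
  return beta;
}

// Apply Equation (5): beta ≈ beta_hat ⊗ [1, 0.5 * J * delta_b] when the bias
// estimate moves by delta_b, avoiding a full re-integration.
Eigen::Quaterniond CorrectForBias(const Eigen::Quaterniond& beta_hat,
                                  const Eigen::Matrix3d& J_beta_b,
                                  const Eigen::Vector3d& delta_b) {
  const Eigen::Vector3d half = 0.5 * (J_beta_b * delta_b);
  const Eigen::Quaterniond dq(1.0, half.x(), half.y(), half.z());  // small-angle quaternion
  return (beta_hat * dq).normalized();
}
```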

3.2. Visual Preprocessing

Since stars can be regarded as sources at infinity, the imaging model of the star sensor can be considered to satisfy the pinhole projection model. Consider $w$ as a star vector in the inertial frame; its right ascension and declination are $\alpha$ and $\delta$, respectively.

$$w = \begin{bmatrix} \cos\alpha\cos\delta \\ \sin\alpha\cos\delta \\ \sin\delta \end{bmatrix}$$

Assume that the x-axis is the camera's optical axis. The projected point $p$ in the image frame can be expressed as follows:

$$p = \begin{bmatrix} 1 \\ u \\ v \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ c_x & f_x & 0 \\ c_y & 0 & f_y \end{bmatrix} \begin{bmatrix} 1 \\ Y/X \\ Z/X \end{bmatrix} = K P = K R_w^{c}\, w$$

where $(u, v)$ represent pixel coordinates. $K$ denotes the camera intrinsic matrix, and $f$ represents the focal length measured in pixels; it is assumed here that the horizontal and vertical pixel dimensions are identical, so $f_x = f_y = f$. $(c_x, c_y)$ represents the principal point position. $P$ is the vector in the camera frame, which is calculated from the star vector $w$ and the attitude matrix $R_w^{c}$ from the inertial frame to the camera frame.
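A minimal sketch of Equations (6) and (7) is given below: a catalog star defined by its right ascension and declination is rotated into the camera frame and projected with the x-axis taken as the optical axis. The intrinsic values passed in are placeholders, not the parameters of the sensor used in this paper.

```cpp
// Project a catalog star onto the detector (Equations (6) and (7)).
#include <Eigen/Dense>
#include <cmath>

Eigen::Vector2d ProjectStar(double alpha, double delta,
                            const Eigen::Matrix3d& R_cw,   // world -> camera attitude
                            double f, double cx, double cy) {
  Eigen::Vector3d w(std::cos(alpha) * std::cos(delta),     // Equation (6)
                    std::sin(alpha) * std::cos(delta),
                    std::sin(delta));
  Eigen::Vector3d P = R_cw * w;                            // star vector in the camera frame
  double u = cx + f * P.y() / P.x();                       // Equation (7): x is the optical axis
  double v = cy + f * P.z() / P.x();
  return Eigen::Vector2d(u, v);
}
```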
Given two or more star vectors, there are two methods for solving the attitude matrix $R_w^{c}$. The first method involves linear algebra techniques such as SVD and QUEST [36]. The second method employs nonlinear optimization, which is similar to bundle adjustment (BA).
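For reference, the sketch below shows the generic SVD route for recovering the attitude from two or more matched star vectors (Wahba's problem); it is the textbook construction, not the authors' specific implementation nor the QUEST algorithm of [36].

```cpp
// SVD solution to Wahba's problem: given matched unit vectors b_i (camera frame)
// and r_i (inertial frame), find the rotation minimizing sum_i ||b_i - R r_i||^2.
#include <Eigen/Dense>
#include <vector>

Eigen::Matrix3d SolveWahbaSVD(const std::vector<Eigen::Vector3d>& b,    // measured, camera frame
                              const std::vector<Eigen::Vector3d>& r) {  // reference, inertial frame
  Eigen::Matrix3d B = Eigen::Matrix3d::Zero();
  for (size_t i = 0; i < b.size(); ++i) B += b[i] * r[i].transpose();   // attitude profile matrix
  Eigen::JacobiSVD<Eigen::Matrix3d> svd(B, Eigen::ComputeFullU | Eigen::ComputeFullV);
  const Eigen::Matrix3d U = svd.matrixU(), V = svd.matrixV();
  const double d = (U * V.transpose()).determinant();                   // enforce det(R) = +1
  const Eigen::Matrix3d D = Eigen::Vector3d(1.0, 1.0, d).asDiagonal();
  return U * D * V.transpose();                                         // maps inertial -> camera
}
```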
When a new image frame is obtained, the centroid method is used to extract the stars. After completing the star identification for the first image, two strategies are employed to track and match the features: the Lucas–Kanade (LK) optical flow method [37] is used while the gyroscope bias has not yet been corrected, and after the bias correction, feature tracking is performed using the dynamics propagation model.
Between two sequential frames, for each star feature, the dynamics propagation model can be readily obtained from Equations (2) and (7) as follows:

$$\begin{cases} u_{i+1} = \dfrac{u_i + v_i\,\omega_x \Delta t + f\,\omega_z \Delta t}{\left(u_i\,\omega_z \Delta t + v_i\,\omega_y \Delta t\right)/f + 1} \\[2ex] v_{i+1} = \dfrac{v_i - u_i\,\omega_x \Delta t - f\,\omega_y \Delta t}{\left(u_i\,\omega_z \Delta t + v_i\,\omega_y \Delta t\right)/f + 1} \end{cases}$$

where the pairs $(u_i, v_i)$ and $(u_{i+1}, v_{i+1})$ denote the pixel coordinates of the same star at consecutive gyroscope sampling instants, $(\omega_x, \omega_y, \omega_z)$ denotes the three-axis angular velocity in rad/s, and $f$ represents the focal length.
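A sketch of Equation (8) used as a tracking predictor is shown below; the sign convention follows the equation as reconstructed above, and the interface is illustrative.

```cpp
// Predict where a tracked star lands in the next frame from the body rate over dt
// (Equation (8)). Pixel coordinates are taken relative to the principal point; f in pixels.
#include <Eigen/Dense>

Eigen::Vector2d PredictPixel(double u, double v, const Eigen::Vector3d& omega,
                             double f, double dt) {
  const double wx = omega.x() * dt, wy = omega.y() * dt, wz = omega.z() * dt;
  const double denom = (u * wz + v * wy) / f + 1.0;
  const double u_next = (u + v * wx + f * wz) / denom;
  const double v_next = (v - u * wx - f * wy) / denom;
  return Eigen::Vector2d(u_next, v_next);
}
```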

4. Estimator Initialization

The gyroscope constant bias varies each time it is started up, and it is not a fixed value. The ground calibration results only represent statistical characteristics during a specific period in a particular environment. Furthermore, environmental factors such as the temperature and mechanical vibration can affect the random walk. Therefore, it is necessary to initialize the gyroscope bias to obtain a precise value to enhance optimization [38].
In this paper, we employ an attitude alignment approach to estimate the initial gyroscope bias. As the spacecraft moves slowly and steadily during star identification, the star sensor provides very accurate attitude measurements. Therefore, as depicted in Figure 2, aligning the visual and inertial relative attitudes among all image frames within the initial sliding window can effectively estimate the bias. Furthermore, to expedite the star searching process, we employed the LK optical flow method to track identified stars after completing the initial star identification for the first image frame.
Assume there are two consecutive frames $b_k$ and $b_{k+1}$ in the sliding window $\mathcal{S}$, for which we have solved the attitudes $q_{b_k}^{w}$ and $q_{b_{k+1}}^{w}$ from the star sensor and the corresponding pre-integration $\beta_{b_{k+1}}^{b_k}$ from the gyroscope. If the gyroscope had no bias, the relative attitudes determined by the two sensors would be precisely identical. A cost function can therefore be constructed as follows:

$$\min_{\delta b} \sum_{k=0}^{|\mathcal{S}|-1} \left\| \left(\beta_{b_{k+1}}^{b_k}\right)^{-1} \otimes \left(q_{b_k}^{w}\right)^{-1} \otimes q_{b_{k+1}}^{w} \right\|^2$$

According to Equation (5), the pre-integration can be linearized with respect to the bias. Moreover, since the minimum of the cost function is the unit quaternion and we only consider the imaginary part of the quaternions, it can be expressed as follows:

$$\sum_{k=0}^{|\mathcal{S}|-1} J_b^{\beta}\, \delta b = \sum_{k=0}^{|\mathcal{S}|-1} 2 \left( \left(\hat{\beta}_{b_{k+1}}^{b_k}\right)^{-1} \otimes \left(q_{b_k}^{w}\right)^{-1} \otimes q_{b_{k+1}}^{w} \right)_{\mathrm{vec}}$$

where $(\cdot)_{\mathrm{vec}}$ takes the imaginary part of a quaternion. We then solve this linear system via Cholesky decomposition. After solving for the gyroscope bias, it is necessary to recompute all pre-integration terms in the sliding window. A suitable initial bias reduces the impact of the first sliding window's attitude on subsequent attitude estimates.
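The initialization step of Equations (9) and (10) can be summarized by the sketch below, which stacks the per-frame residuals into normal equations and solves them by Cholesky (LDLT) factorization; the container-based interface is an illustrative assumption.

```cpp
// Solve for the gyroscope bias increment from star-sensor attitudes and pre-integrations
// (Equations (9)-(10)): accumulate sum(J^T J) db = sum(J^T r) and solve by Cholesky.
#include <Eigen/Dense>
#include <Eigen/Geometry>
#include <vector>

Eigen::Vector3d InitGyroBias(const std::vector<Eigen::Quaterniond>& q_wb,      // star-sensor attitudes
                             const std::vector<Eigen::Quaterniond>& beta_hat,  // pre-integrations
                             const std::vector<Eigen::Matrix3d>& J_beta_b) {   // Jacobians w.r.t. bias
  Eigen::Matrix3d A = Eigen::Matrix3d::Zero();
  Eigen::Vector3d rhs = Eigen::Vector3d::Zero();
  for (size_t k = 0; k + 1 < q_wb.size(); ++k) {
    // Residual: 2 * vec( beta_hat^{-1} ⊗ (q_{b_k}^w)^{-1} ⊗ q_{b_{k+1}}^w )
    const Eigen::Quaterniond r_q = beta_hat[k].conjugate() * q_wb[k].conjugate() * q_wb[k + 1];
    const Eigen::Vector3d r = 2.0 * r_q.vec();
    A += J_beta_b[k].transpose() * J_beta_b[k];
    rhs += J_beta_b[k].transpose() * r;
  }
  return A.ldlt().solve(rhs);   // Cholesky (LDLT) solve for the bias increment
}
```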

5. Nonlinear Optimization Estimator

After initialization of the estimator, the algorithm primarily utilizes batch optimization to achieve visual–inertial tightly coupled attitude estimation for spacecraft in high-dynamic scenarios. This method provides optimal estimates of the attitude and gyroscope bias. Additionally, a sliding window is introduced to limit the state size and meet real-time requirements. The sliding window comprises all the information regarding the frames, including image information and gyroscope pre-integration. Only frames with sufficient feature displacement in the image are considered keyframes. This ensures that there are not too many co-visible features between frames, which could otherwise result in an overly dense H matrix during least-squares optimization. Here, the H matrix is the product of the first-order Jacobian with its transpose, which serves as an approximation of the second-order Hessian matrix used in the Gauss–Newton method. Figure 3 illustrates the operational principle of the nonlinear optimization estimator when the latest image frame serves as a keyframe.
For the sake of convenience in notation, the state $x_k$ of the $b_k$ frame includes the attitude in the world frame $R_{b_k}^{w}$ ($q_{b_k}^{w}$) and the bias $b_k$ in the body frame. We consider N + 1 frames in the sliding window, starting at 0 and ending at N. Here, the time index N always refers to the latest image, and the state vector to be estimated is defined as follows:

$$\mathcal{X} = \left[ x_{0:N-1},\ x_N \right]$$
The tightly coupled problem of visual–inertial fusion can be expressed as a joint optimization of a cost function comprising errors from all sensors’ measurements:
$$\min_{\mathcal{X}} \left\{ \left\| e_p(\mathcal{X}) \right\|^2 + \sum_{k=0}^{|\mathcal{S}|-1} \left\| e_{b_{k+1}}^{b_k}(\mathcal{X}) \right\|^2_{W_{b_{k+1}}^{b_k}} + \sum_{(i,j) \in \mathcal{S}} \sum_{l \in \zeta(i,j)} \left\| e_l^{c_{ij}}(\mathcal{X}) \right\|^2_{W_l^{c_{ij}}} + \sum_{k=0}^{|\mathcal{S}|} \sum_{l \in b_k} \left\| e_l^{c_k}(\mathcal{X}) \right\|^2_{W_l^{c_k}} \right\}$$

where $e_p(\mathcal{X})$ denotes the marginalized prior information; $e_{b_{k+1}}^{b_k}(\mathcal{X})$, $e_l^{c_{ij}}(\mathcal{X})$, and $e_l^{c_k}(\mathcal{X})$ denote the errors of the inertial, visual, and absolute attitude measurements, respectively. Similarly, $W_{b_{k+1}}^{b_k}$, $W_l^{c_{ij}}$, and $W_l^{c_k}$ denote their corresponding information matrices. The indexes of the features co-visible in the i-th and j-th camera frames are written as the set $\zeta(i,j)$. Since the observation model's noise is assumed to follow a Gaussian distribution, the least-squares method actually optimizes the error expressed as the Mahalanobis distance, which is a Euclidean distance weighted by the information matrix. Hence, the information matrix is related to the noise term and is the inverse of the noise covariance matrix; it can be seen as the weight of the error term. The larger the information matrix, the smaller the observation noise, and the more reliable the measurements from the sensor.
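In a least-squares solver, the Mahalanobis-weighted terms of Equation (12) are commonly handled by whitening each residual with a square-root factor of its information matrix, so that an ordinary squared norm reproduces the weighted cost. The sketch below demonstrates this equivalence; the numerical values are illustrative only.

```cpp
// Whitening a residual by the Cholesky factor of its information matrix W = Sigma^{-1},
// so that ||L^T e||^2 equals the Mahalanobis cost e^T W e used in Equation (12).
#include <Eigen/Dense>
#include <iostream>

int main() {
  Eigen::Matrix2d Sigma;                   // measurement noise covariance (e.g., pixel noise)
  Sigma << 0.09, 0.0,
           0.0,  0.09;                     // 0.3-pixel standard deviation, for illustration
  Eigen::Matrix2d W = Sigma.inverse();     // information matrix
  Eigen::Matrix2d L = W.llt().matrixL();   // W = L * L^T
  Eigen::Vector2d e(0.4, -0.2);            // some reprojection residual, in pixels
  Eigen::Vector2d e_whitened = L.transpose() * e;
  std::cout << "e^T W e     = " << e.dot(W * e) << "\n";
  std::cout << "||L^T e||^2 = " << e_whitened.squaredNorm() << "\n";  // identical value
  return 0;
}
```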
For solving the least-squares problem utilizing the nonlinear Gauss–Newton method, it is possible to express each error term as follows:
$$e(\mathcal{X} + \delta\mathcal{X}) \approx e(\mathcal{X}) + J_{\mathcal{X}}\, \delta\mathcal{X}$$

where $J_{\mathcal{X}}$ is the Jacobian matrix of the errors with respect to the state variables, and $\delta\mathcal{X}$ is a small perturbation of the state variables.
In the following, we introduce individual measurement errors from various sensors.

5.1. Inertial Measurement Error

The gyroscope measurement errors comprise the bias error between successive frames and the resulting change in attitude caused by this bias error. These errors reflect the discrepancy between the observed and estimated values. Therefore, we can define the gyroscope measurement error $e_{b_{k+1}}^{b_k}$ between any two consecutive frames within a sliding window $\mathcal{S}$ as follows:

$$e_{b_{k+1}}^{b_k}(\mathcal{X}) = \begin{bmatrix} \delta\theta_{b_{k+1}}^{b_k} \\ \delta b_{b_{k+1}}^{b_k} \end{bmatrix} = \begin{bmatrix} 2\left( \left(\hat{\beta}_{b_{k+1}}^{b_k}\right)^{-1} \otimes \left(q_{b_k}^{w}\right)^{-1} \otimes q_{b_{k+1}}^{w} \right)_{\mathrm{vec}} \\ b_{k+1} - b_k \end{bmatrix}$$

where $\hat{\beta}_{b_{k+1}}^{b_k}$ denotes the attitude increment without bias change, which was discussed in the gyroscope pre-integration section. It can be seen that each gyroscope measurement error term is related only to frames $b_k$ and $b_{k+1}$ and is independent of the other frames within the sliding window. This means that the partial derivatives of the error term with respect to unrelated state variables in the Jacobian matrix are 0.
$$\frac{\partial e_{b_{k+1}}^{b_k}}{\partial \left[ q_{b_k}^{w},\, b_k \right]} = \begin{bmatrix} \left[\, 0_{3\times1},\ \left( \left[\left(q_{b_{k+1}}^{w}\right)^{-1} \otimes q_{b_k}^{w}\right]_L \left[\hat{\beta}_{b_{k+1}}^{b_k}\right]_R \right)_{\mathrm{bot}} \right] & -\left( \left[\left(q_{b_{k+1}}^{w}\right)^{-1} \otimes q_{b_k}^{w} \otimes \hat{\beta}_{b_{k+1}}^{b_k}\right]_L \right)_{\mathrm{bot}} J_{b_k}^{\beta} \\ 0_{3\times4} & -I_{3\times3} \end{bmatrix}$$

$$\frac{\partial e_{b_{k+1}}^{b_k}}{\partial \left[ q_{b_{k+1}}^{w},\, b_{k+1} \right]} = \begin{bmatrix} \left[\, 0_{3\times1},\ \left( \left[\left(\hat{\beta}_{b_{k+1}}^{b_k}\right)^{-1} \otimes \left(q_{b_k}^{w}\right)^{-1} \otimes q_{b_{k+1}}^{w}\right]_L \right)_{\mathrm{bot}} \right] & 0_{3\times3} \\ 0_{3\times4} & I_{3\times3} \end{bmatrix}$$

where

$$[q]_L = \begin{bmatrix} s & -v^{T} \\ v & sI + \lfloor v \rfloor_{\times} \end{bmatrix}, \qquad [q]_R = \begin{bmatrix} s & -v^{T} \\ v & sI - \lfloor v \rfloor_{\times} \end{bmatrix}$$

$q = [s, v]$, where $s$ represents the real part and $v$ represents the vector part; $[q]_L$ and $[q]_R$ denote the left and right quaternion multiplication matrices, respectively. $(\cdot)_{\mathrm{bot}}$ extracts the lower-right $3 \times 3$ block of the given matrix. The derivation of the Jacobian matrix is given in the Supplementary Materials.
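A compact sketch of evaluating the residual of Equation (14) from the bias-corrected pre-integration is given below; whitening by the information matrix and the Jacobians of Equations (15) and (16) are left to the solver, and the interface is illustrative.

```cpp
// Gyroscope residual of Equation (14): attitude part 2*vec(beta_hat^{-1} ⊗ q_k^{-1} ⊗ q_{k+1}),
// bias part b_{k+1} - b_k.
#include <Eigen/Dense>
#include <Eigen/Geometry>

Eigen::Matrix<double, 6, 1> GyroResidual(const Eigen::Quaterniond& q_wb_k,    // q_{b_k}^w
                                         const Eigen::Quaterniond& q_wb_k1,   // q_{b_{k+1}}^w
                                         const Eigen::Quaterniond& beta_hat,  // bias-corrected pre-integration
                                         const Eigen::Vector3d& b_k,
                                         const Eigen::Vector3d& b_k1) {
  Eigen::Matrix<double, 6, 1> e;
  const Eigen::Quaterniond dq = beta_hat.conjugate() * q_wb_k.conjugate() * q_wb_k1;
  e.head<3>() = 2.0 * dq.vec();   // small-angle attitude error
  e.tail<3>() = b_k1 - b_k;       // bias change between consecutive frames
  return e;
}
```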

5.2. Visual Reprojection Error

In a pinhole camera model, the reprojection error is typically defined on the image plane. Figure 4 illustrates a scenario where a point $l$ in space is projected onto the image planes $c_i$ and $c_j$, resulting in pixel coordinates $p_1$ and $p_2$, respectively. The predicted pixel location on $c_j$, denoted as $\hat{p}_2$, is obtained by rotating $p_1$ with $R_{c_i}^{c_j}$. In this case, the reprojection error is the difference between the predicted pixel coordinates $\hat{p}_2$ and the corresponding observed pixel coordinates $p_2$.
We consider that two distinct image planes, c i and c j , share a common observed feature l. The visual error of this feature can be defined as follows:
$$e_l^{c_{ij}}(\mathcal{X}) = \begin{bmatrix} \hat{u}_l^{c_j} - u_l^{c_j} \\ \hat{v}_l^{c_j} - v_l^{c_j} \end{bmatrix}$$

where $(u_l^{c_j}, v_l^{c_j})$ and $(\hat{u}_l^{c_j}, \hat{v}_l^{c_j})$ denote the observed and predicted pixel coordinates of $l$ in camera plane $c_j$, respectively. The visual preprocessing section discussed the utilization of $\hat{P}_l^{c_j} = (\hat{x}_l^{c_j}, \hat{y}_l^{c_j}, \hat{z}_l^{c_j})$ to signify the feature vector in the camera frame. It can be calculated in the following way:

$$\hat{P}_l^{c_j} = R_b^{c}\, R_w^{b_j}\, R_{b_i}^{w}\, R_c^{b}\, P_l^{c_i}$$

where $P_l^{c_i} = (x_l^{c_i}, y_l^{c_i}, z_l^{c_i})$ is the observed vector of $l$ in the camera frame $c_i$. Therefore, we can establish two-frame attitude constraints through visual errors; unlike the inertial constraints, these do not need to act between adjacent frames, as they are connected by a shared visual feature across several frames.
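The sketch below evaluates the reprojection residual of Equations (17) and (18) for one feature shared by frames $c_i$ and $c_j$; the quaternion-based interface and intrinsic parameters are illustrative assumptions.

```cpp
// Visual reprojection error (Equations (17)-(18)): transfer the vector observed
// in camera frame c_i to c_j through the two body attitudes and the known
// extrinsics, project (x-axis = optical axis), and compare with the observation.
#include <Eigen/Dense>
#include <Eigen/Geometry>

Eigen::Vector2d ReprojectionError(const Eigen::Vector3d& P_ci,       // observed vector in c_i
                                  const Eigen::Vector2d& pix_cj,     // observed pixel in c_j
                                  const Eigen::Quaterniond& q_w_bi,  // R_{b_i}^w
                                  const Eigen::Quaterniond& q_w_bj,  // R_{b_j}^w
                                  const Eigen::Quaterniond& q_b_c,   // extrinsic R_c^b (camera -> body)
                                  double f, double cx, double cy) {
  const Eigen::Vector3d P_cj =
      q_b_c.conjugate() * (q_w_bj.conjugate() * (q_w_bi * (q_b_c * P_ci)));  // Equation (18)
  const double u_hat = cx + f * P_cj.y() / P_cj.x();
  const double v_hat = cy + f * P_cj.z() / P_cj.x();
  return Eigen::Vector2d(u_hat - pix_cj.x(), v_hat - pix_cj.y());            // Equation (17)
}
```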
The derivatives with respect to the state variables follow the chain rule of differentiation and are taken with respect to the attitude in the world frame, which allows the perturbation rotation to be expressed in the form of a right multiplication. It is noteworthy that we defined the x-axis of the camera as its optical axis. The calculation proceeds as follows:
$$\frac{\partial e_l^{c_{ij}}}{\partial R_{b_i}^{w}} = \frac{\partial e_l^{c_{ij}}}{\partial \hat{P}_l^{c_j}}\, \frac{\partial \hat{P}_l^{c_j}}{\partial R_{b_i}^{w}}$$

So, the first term on the right-hand side of the equation represents the derivative of the error term with respect to the three components of $\hat{P}_l^{c_j}$. According to Equation (7), it is not difficult to deduce:

$$\frac{\partial e_l^{c_{ij}}}{\partial \hat{P}_l^{c_j}} = \begin{bmatrix} -\dfrac{\hat{y}_l^{c_j}}{\left(\hat{x}_l^{c_j}\right)^{2}} & \dfrac{1}{\hat{x}_l^{c_j}} & 0 \\[2ex] -\dfrac{\hat{z}_l^{c_j}}{\left(\hat{x}_l^{c_j}\right)^{2}} & 0 & \dfrac{1}{\hat{x}_l^{c_j}} \end{bmatrix}$$
The second term in the equation is obtained directly by taking the derivative of Equation (18) with respect to the rotation matrix. Similar to solving for inertial measurement errors, it is also achieved by applying small perturbations to the relevant rotation matrices. Therefore, we obtain the following:
$$\frac{\partial \hat{P}_l^{c_j}}{\partial R_{b_i}^{w}} = -R_b^{c}\, R_w^{b_j}\, R_{b_i}^{w} \left\lfloor R_c^{b} P_l^{c_i} \right\rfloor_{\times}$$

Using the same method, we can obtain the derivative with respect to the other rotation, $R_{b_j}^{w}$. The process of derivation is included in the Supplementary Materials.

$$\frac{\partial \hat{P}_l^{c_j}}{\partial R_{b_j}^{w}} = R_b^{c} \left\lfloor R_w^{b_j}\, R_{b_i}^{w}\, R_c^{b}\, P_l^{c_i} \right\rfloor_{\times}$$

5.3. Absolute Attitude Error

The above two error terms describe the relative attitude constraints between two image frames, thus lacking absolute attitude as a reference baseline. However, the state propagation relies on the dynamics of the gyroscope attitude equation, which leads to accumulated errors over longer optimization cycles and cannot be easily resolved on its own. While SLAM systems rely on loop closure detection to identify repeated environmental features in the map and then correct previous localization and map construction based on these repeated features to reduce the impact of accumulated errors, this approach is not practical for spacecraft. To address this issue, a new error term is proposed, which aims to minimize the accumulated errors due to optimization by using absolute attitude calculated from the star sensor. We refer to this as absolute attitude error. The construction method of this constraint is similar to visual reprojection error, ensuring the maximum reduction of potential errors in the optimization system and providing a more accurate assessment of the absolute accuracy of sensor measurements.
As demonstrated in Figure 5, $c_k$ is a frame within the sliding window $\mathcal{S}$. For a given feature, the pixel observed in the image is $p_l$, whereas the pixel predicted using Equation (8) is $\hat{p}_l$. Due to various factors, such as extraction errors and gyroscope bias, a slight rotation $\mathrm{Exp}(\phi)$ is required to align $p_l$ and $\hat{p}_l$, where $\mathrm{Exp}(\cdot)$ is the exponential map, which converts a vector belonging to the Lie algebra into the corresponding rotation matrix of the Lie group.
Similar to Equation (17), the absolute attitude error can be defined as follows:
$$e_l^{c_k}(\mathcal{X}) = \begin{bmatrix} \hat{u}_l^{c_k} - u_l^{c_k} \\ \hat{v}_l^{c_k} - v_l^{c_k} \end{bmatrix}$$

The transformation of the feature vector in the camera frame can be determined through the equation presented below:

$$\hat{P}_l^{c_k} = R_w^{c_k}\, R_{b_k}^{w}\, R_c^{b}\, P_l$$

where $R_w^{c_k}$ and $R_{b_k}^{w}$ denote the attitudes calculated by the star sensor and the gyroscope, respectively. It is worth noting that the state to be optimized is $R_{b_k}^{w}$, while $R_w^{c_k}$ is considered constant. Here, we will not reiterate the process of differentiating with respect to the variables, which follows Equation (19).
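Analogously to the reprojection error, the absolute attitude error of Equations (23) and (24) can be evaluated as sketched below, with the star-sensor attitude held fixed; the interface is illustrative.

```cpp
// Absolute attitude error (Equations (23)-(24)): map the observed camera-frame vector
// to the world frame with the optimized body attitude, back with the star sensor's
// (fixed) attitude, reproject, and compare with the observed pixel.
#include <Eigen/Dense>
#include <Eigen/Geometry>

Eigen::Vector2d AbsoluteAttitudeError(const Eigen::Vector3d& P_l,        // observed vector, camera frame
                                      const Eigen::Vector2d& pix_ck,     // observed pixel
                                      const Eigen::Quaterniond& q_w_bk,  // R_{b_k}^w, optimized state
                                      const Eigen::Quaterniond& q_w_ck,  // star-sensor attitude, held fixed
                                      const Eigen::Quaterniond& q_b_c,   // extrinsic R_c^b
                                      double f, double cx, double cy) {
  const Eigen::Vector3d P_ck = q_w_ck.conjugate() * (q_w_bk * (q_b_c * P_l));  // Equation (24)
  const double u_hat = cx + f * P_ck.y() / P_ck.x();
  const double v_hat = cy + f * P_ck.z() / P_ck.x();
  return Eigen::Vector2d(u_hat - pix_ck.x(), v_hat - pix_ck.y());              // Equation (23)
}
```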

5.4. Marginalized Information

As the estimation time increases, the size of the states to be estimated will continue to grow, leading to a sharp increase in computational complexity and a decrease in algorithm efficiency. Typically, a sliding window based on keyframes is used to control the size of the states to be optimized, with optimization limited to a subset of keyframes from past moments and the current frame, as shown in Figure 3. Discarding past states requires the application of marginalization techniques. Marginalization is a technique that discards the oldest keyframe states within the sliding window while retaining their prior information to constrain the current frame.
As shown in Figure 6, this approach employs two marginalization strategies based on whether the newly acquired image frame is a keyframe. Firstly, if the image frame is a keyframe, then the oldest keyframe within the sliding window is marginalized: its gyroscope states and the features with almost no co-visibility with other keyframes are marginalized, transforming into the prior information $e_p(\mathcal{X})$ added to the overall objective function. This is because star features are very sparse and there may be multiple co-visible features between keyframes; marginalizing all measurements of the oldest keyframe would result in an overly dense H matrix, making the optimization difficult to solve. Secondly, if the image frame is not a keyframe, then the measurements of the newest keyframe within the sliding window are directly replaced with the new measurements. However, gyroscope measurements still need to be retained to ensure pre-integration consistency. Marginalization is performed using the Schur complement; readers can refer to [39] for the details.
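A generic sketch of Schur-complement marginalization on the normal equations is shown below; the partitioning into marginalized and retained blocks is illustrative, and the details of constructing the prior factor follow [39].

```cpp
// Marginalize the first m states of the normal equations H * dx = b via the
// Schur complement; the prior carried to the next window is H_p * dx_r = b_p.
#include <Eigen/Dense>

void Marginalize(const Eigen::MatrixXd& H, const Eigen::VectorXd& b, int m,
                 Eigen::MatrixXd* H_prior, Eigen::VectorXd* b_prior) {
  const int r = static_cast<int>(H.rows()) - m;      // retained block size
  const Eigen::MatrixXd Hmm = H.topLeftCorner(m, m);
  const Eigen::MatrixXd Hmr = H.topRightCorner(m, r);
  const Eigen::MatrixXd Hrm = H.bottomLeftCorner(r, m);
  const Eigen::MatrixXd Hrr = H.bottomRightCorner(r, r);
  const Eigen::VectorXd bm = b.head(m), br = b.tail(r);
  // Hmm is symmetric positive semi-definite; solve instead of forming an explicit inverse.
  const Eigen::MatrixXd Hmm_inv_Hmr = Hmm.ldlt().solve(Hmr);
  const Eigen::VectorXd Hmm_inv_bm = Hmm.ldlt().solve(bm);
  *H_prior = Hrr - Hrm * Hmm_inv_Hmr;                // Schur complement of Hmm
  *b_prior = br - Hrm * Hmm_inv_bm;
}
```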

6. Experiments and Results

In this section, simulations and experiments are provided to validate the reliability of the proposed method. All the work was implemented in C++, and the objective function described above was minimized using the Google Ceres optimizer [40] with the Levenberg–Marquardt algorithm.

6.1. Simulations

To facilitate a better comparison of results, an EKF algorithm [41] was implemented in C++. The EKF is currently the most widely used attitude estimation algorithm for spacecraft, because spacecraft attitude estimation typically involves weakly nonlinear systems; in addition, the EKF has low computational complexity and performs well in real-time applications. Both methods use the same sensor parameters. Table 1 lists the specific parameters of the star sensor and gyroscope, where the gyroscope noise parameters are given in continuous-time form. Star charts were simulated in Matlab 2016 using stars brighter than magnitude 6 (Mv) from the Smithsonian Astrophysical Observatory (SAO) catalog. The camera used a pinhole imaging model and was defocused.
A sinusoidal curve motion was applied to the yaw, pitch, and roll axes’ angular velocities, and 1000 star images were continuously simulated, as shown in Figure 7. The proposed method uses a sliding window that retains historical information from the past 12 keyframes. The first 12 images within the sliding window were considered keyframes and used for estimator initialization. Subsequently, a new keyframe was established every eight images.
Figure 8 illustrates the attitude and bias estimation results obtained using the proposed optimization method and the EKF algorithm under a standard deviation of 0.3 pixels for star position error. Subfigures (a)–(c) show the attitude estimation errors for the yaw, pitch, and roll axes, while subfigures (d)–(f) display the bias estimation for all three axes. Both the attitude and bias estimation results of the proposed method outperformed those of the EKF algorithm. During the initial stages of optimization, there may be transient erroneous estimates due to the gyroscope not yet completing initialization within the first sliding window. However, once initialization is completed, it provides good initial bias values to accelerate the convergence of the algorithm.
Star position error is the primary factor affecting the accuracy of star sensor attitude measurements, with estimation accuracy decreasing as star point position error increases. It is important to note that the roll angle is most affected. This is determined by the nature of the camera itself, as pixel movements of the camera are insensitive to rotations about the line of sight (the optical axis perpendicular to the target surface). Conversely, roll angle errors are highly sensitive to pixel errors. Even small position errors can amplify roll angle errors. Therefore, under the same position error, the estimation error of the roll angle by the star sensor may be nearly 10 times larger than that of the other two axes.

6.2. The Robustness of Attitude Estimation

The accuracy of attitude estimation is significantly influenced by the error in star positions. To assess the algorithm’s performance under different star sensor position errors, six sets of noise ranging from 0.1 to 0.6 pixels were applied. Each set was independently repeated ten times, and the results were averaged. Gyroscope parameters remained constant. A comparison was made with the results obtained from the EKF algorithm and directly computed by the star sensor.
Table 2 presents the standard deviation of three-axis attitude errors under different star position errors. As shown in Figure 9, the three-axis attitude estimation almost linearly increases with the increase in star position error. This is because observation noise directly affects two constraints in the cost function, namely camera visual reprojection error and absolute attitude error, ultimately leading to decreased solution accuracy. Additionally, observation noise also affects the estimation of gyroscope bias, ultimately impacting gyroscope pre-integration constraints. As illustrated in Figure 8, the estimation error of the roll axis is significantly higher than that of the other two axes. However, in high-dynamic scenes, the proposed optimization method significantly improves the three-axis estimation accuracy, with all three axes showing an increase of nearly 50% compared to the attitude solved by the star sensor only.
Gyroscope measurement noise is also a crucial factor influencing attitude estimation. To test the impact of gyroscope accuracy on the performance of the proposed method, different levels of measurement noise were set. The standard deviations in continuous form were sequentially set to 1 × 10⁻³ °/s, 1 × 10⁻⁴ °/s, 1 × 10⁻⁵ °/s, and 1 × 10⁻⁶ °/s. The star position error was maintained at 0.3 pixels, with a constant bias of 1 × 10⁻³ °/s and a random walk of 5 × 10⁻⁵ °/s/√s. Each set was independently repeated ten times, and the final results were averaged. The simulation results are shown in Figure 10, and the numerical results are presented in Table 3.

It can be observed that when the gyroscope measurement noise is less than 1 × 10⁻⁴ °/s, the three-axis estimation accuracy hardly improves, indicating that the estimation is mainly limited by the star position error at this noise level. Moreover, when the noise level increases by ten times, the accuracy of the three axes does not decrease significantly, suggesting that the proposed algorithm can adapt to higher noise levels.

6.3. The Robustness of Bias Estimation

The gyroscope’s constant bias is not a fixed value; it varies each time it is powered on and used. Offline calibration values only reflect the statistical characteristics over a period of time in a specific working environment. Additionally, bias undergoes slow changes during usage due to environmental factors such as temperature, described as the angular rate random walk using Wiener processes. Therefore, both the constant bias and angular rate random walk are crucial parameters influencing gyroscope bias estimation. Similarly, under the condition of keeping other sensor parameters unchanged, multiple sets of different constant biases and angular rate random walks were configured to evaluate the accuracy of the proposed algorithm in estimating gyroscope bias.
Figure 11 illustrates the estimation results for three sets of gyroscope constant bias, with the parameters in subfigures (a)–(c) set to 1 × 10⁻³ °/s, 1 × 10⁻² °/s, and 1 × 10⁻¹ °/s, respectively. It is evident that no prior calibration values are required, and the gyroscope can obtain good initial bias values through a simple initialization process. Additionally, the results show that, thanks to the initialization, the magnitude of the constant bias does not affect the accuracy of the bias estimation. Figure 12 presents the bias estimation results under three different levels of random walk, with the parameters in subfigures (a)–(c) set to 5 × 10⁻⁵ °/s/√s, 5 × 10⁻⁴ °/s/√s, and 1 × 10⁻³ °/s/√s, respectively. The results show that even under a significant random walk, the proposed algorithm can dynamically estimate the bias. Although the estimate lags somewhat during periods of rapid change, the bias is accurately estimated once it stabilizes.

6.4. The Outlier Test for the Star Sensor

Scenarios such as stray light and extreme conditions can cause temporary spatial disorientation and anomalies in star sensor measurements. Traditional filter methods lack the capability to update the state with measurements, relying solely on the gyroscope for state propagation. In contrast, the proposed optimization method seeks the optimal state estimation over a period of time. When visual observations are lacking, it constrains the current estimation based on information retained from past image frames and the current gyroscope measurements until new valid observations are obtained. Additionally, upon regaining valid measurements, the star sensor only needs to quickly identify stars based on the current attitude, without the need for reidentification across the entire celestial sphere or large local sky areas. To address the extreme scenarios described above, corresponding simulation experiments were designed. The attitude and bias estimation results are shown in Figure 13. Even when star sensor measurement data are unavailable for up to 10 s, i.e., no input from 100 star images, the proposed method (blue line) maintains good attitude estimation across all three axes. Once new valid observation information is obtained, the star sensor quickly resumes output (red line).
Table 4 shows the estimation accuracy of the three-axis attitude of the star sensor under different durations of spatial disorientation. From the results, even when image loss occurs for up to 10 s, the attitude estimation accuracy of the yaw and pitch axes remains unaffected thanks to the accuracy of the estimation of the two-axis bias. Although the accuracy of the roll axis is slightly reduced, with an average decrease of only 2.6″, it demonstrates that the proposed algorithm can still stably output the three-axis attitude even when the image information of the star sensor is momentarily lost.

7. Conclusions

This paper presents a spacecraft-oriented, optimization-based, tightly coupled scheme for visual–inertial attitude estimation. Continuous optimization of the states over time yields higher-precision attitude estimation. The proposed method initializes the estimator by utilizing the initial attitude from the star sensor. During the front-end measurement preprocessing stage, star features are tracked and stored to facilitate the establishment of visual constraints in the back-end. Pre-integration enables the gyroscope to establish correlations between adjacent frames. To ensure real-time estimation, a keyframe-based sliding window is utilized to limit the size of the state. Upon the arrival of a new keyframe, marginalization is employed to retain the constraint information from the oldest keyframe on the current frame. Additionally, we propose an absolute attitude constraint to mitigate accumulated errors. The simulation and experimental evaluations encompassed various conditions, including star position error, gyroscope bias error, and star sensor lost-in-space. The results substantiate that, in continuous high-dynamic scenarios, our proposed method demonstrates a 50% improvement in the accuracy of the estimated yaw, pitch, and roll angles compared to those calculated by the star sensor alone.
Future work entails implementing the solution on an embedded platform. Given the limited on-chip resources of spacecraft, the focus will be on efficiently storing a large amount of star feature information and on enhancing computational capability.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/rs16163063/s1.

Author Contributions

Conceptualization, J.Y.; methodology, J.Y.; software, J.Y.; validation, J.Y., Y.M., and H.L.; investigation, J.Y.; resources, R.Z.; writing—original draft, J.Y.; writing—review and editing, Y.M. and H.L.; visualization, J.Y. and Z.Z.; supervision, R.Z.; project administration, R.Z.; funding acquisition, R.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Outstanding Youth Science and Technology Talents Program of Sichuan (2022JDJQ0027); the West Light of Chinese Academy of Sciences (No. YA21K001); Sichuan Province Science and Technology Support Program (No. 24NSFSC2625).

Data Availability Statement

The original contributions presented in the study are included in the article and Supplementary Materials; further inquiries can be directed to the corresponding author.

Acknowledgments

Thanks to our colleagues working with us in the department of the Institute of Optics and Electronics, Chinese Academy of Sciences.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Ma, J.B.; Xu, J.; Cao, Z.B. A method of autonomous orbit determination for satellite using star sensor. Sci. China Ser. G-Phys. Mech. Astron. 2005, 48, 268–281. [Google Scholar] [CrossRef]
  2. Zhu, Z.J.; Ma, Y.B.; Dan, B.B.; Liu, E.H.; Zhu, Z.F.; Yi, J.H.; Tang, Y.P.; Zhao, R.J. An autonomous global star identification algorithm based on the fast mst index and robust multi-order cca pattern. Remote Sens. 2023, 15, 2251. [Google Scholar] [CrossRef]
  3. Han, J.L.; Yang, X.B.; Xu, T.T.; Fu, Z.Q.; Chang, L.; Yang, C.L.; Jin, G. An end-to-end identification algorithm for smearing star image. Remote Sens. 2021, 13, 4541. [Google Scholar] [CrossRef]
  4. Yi, J.H.; Ma, Y.B.; Zhu, Z.F.; Zhu, Z.J.; Tang, Y.P.; Zhao, R.J. A blurred star image restoration method based on gyroscope data and enhanced sparse model. Meas. Sci. Technol. 2023, 34, 115105. [Google Scholar] [CrossRef]
  5. Lu, K.L.; Liu, E.H.; Zhao, R.J.; Tian, H.; Zhang, H. A fast star identification algorithm of star sensors in the lis mode. Remote Sens. 2022, 14, 1739. [Google Scholar] [CrossRef]
  6. Yuan, H.; Li, D.X.; Wang, J. A robust star identification algorithm based on a masked distance map. Remote Sens. 2022, 14, 4699. [Google Scholar] [CrossRef]
  7. Tan, W.F.; Dai, D.K.; Wu, W.; Wang, X.S.; Qin, S.G. A comprehensive calibration method for a star tracker and gyroscope units integrated system. Sensors 2018, 18, 3106. [Google Scholar] [CrossRef]
  8. Passaro, V.M.N.; Cuccovillo, A.; Vaiani, L.; De Carlo, M.; Campanella, C.E. Gyroscope technology and applications: A review in the industrial perspective. Sensors 2017, 17, 2284. [Google Scholar] [CrossRef]
  9. Sun, T.; Xing, F.; You, Z.; Wang, X.C.; Li, B. Deep coupling of star tracker and mems-gyro data under highly dynamic and long exposure conditions. Meas. Sci. Technol. 2014, 25, 085003. [Google Scholar] [CrossRef]
  10. Hou, B.W.; He, Z.M.; Zhou, H.Y.; Wang, J.Q. Integrated design and accuracy analysis of star sensor and gyro on the same benchmark for satellite attitude determination system. IEEE-Caa J. Autom. Sin. 2019, 6, 1074–1080. [Google Scholar] [CrossRef]
  11. Labibian, A.; Bahrami, A.H.; Haghshenas, J. Development of a Computationally Efficient Algorithm for Attitude Estimation of a Remote Sensing Satellite. In Proceedings of the Conference on Sensors, Systems, and Next-Generation Satellites XXI, Warsaw, Poland, 11–14 September 2017; SPIE: Bellingham, WA, USA, 2017. [Google Scholar]
  12. Ning, X.L.; Liu, L.L. A two-mode ins/cns navigation method for lunar rovers. IEEE Trans. Instrum. Meas. 2014, 63, 2170–2179. [Google Scholar] [CrossRef]
  13. Gou, B.; Cheng, Y.M.; de Ruiter, A.H.J. Ins/cns navigation system based on multi-star pseudo measurements. Aerosp. Sci. Technol. 2019, 95, 105506. [Google Scholar] [CrossRef]
  14. Mu, R.J.; Sun, H.C.; Li, Y.T.; Cui, N.G. Ins/cns deeply integrated navigation method of near space vehicles. Sensors 2020, 20, 5885. [Google Scholar] [CrossRef] [PubMed]
  15. Candy, J.V. Model-Based Signal Processing; John Wiley & Sons: Hoboken, NJ, USA, 2005. [Google Scholar]
  16. Filipe, N.; Kontitsis, M.; Tsiotras, P. Extended kalman filter for spacecraft pose estimation using dual quaternions. J. Guid. Control. Dyn. 2015, 38, 1625–1641. [Google Scholar] [CrossRef]
  17. Solà, J. Quaternion kinematics for the error-state kalman filter. arXiv 2017, arXiv:1711.02508. [Google Scholar]
  18. Barrau, A.; Bonnabel, S. Invariant kalman filtering. Annu. Rev. Control Robot. Auton. Syst. 2018, 1, 237–257. [Google Scholar] [CrossRef]
  19. Barrau, A.; Bonnabel, S. The invariant extended kalman filter as a stable observer. IEEE Trans. Autom. Control. 2016, 62, 1797–1812. [Google Scholar] [CrossRef]
  20. Hajiyev, C.; Soken, H.E. Robust adaptive unscented kalman filter for attitude estimation of pico satellites. Int. J. Adapt. Control. Signal Process. 2014, 28, 107–120. [Google Scholar] [CrossRef]
  21. Lee, D.; Vukovich, G.; Lee, R. Robust adaptive unscented kalman filter for spacecraft attitude estimation using quaternion measurements. J. Aerosp. Eng. 2017, 30, 04017009. [Google Scholar] [CrossRef]
  22. Wang, X.C.; You, Z.; Zhao, K.C. Inertial/celestial-based fuzzy adaptive unscented kalman filter with covariance intersection algorithm for satellite attitude determination. Aerosp. Sci. Technol. 2016, 48, 214–222. [Google Scholar] [CrossRef]
  23. Shen, S.J.; Michael, N.; Kumar, V. Tightly-coupled monocular visual-inertial fusion for autonomous flight of rotorcraft mavs. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Seattle, WA, USA, 26–30 May 2015; IEEE: Piscataway, NJ, USA, 2015; pp. 5303–5310. [Google Scholar]
  24. Cano, C.; Arab, N.; Chaumette, E.; Larzabal, P.; El Korso, M.N.; Vin, I. Kalman filter for radio source power and direction of arrival estimation. Eurasip J. Adv. Signal Process. 2024, 2024, 66. [Google Scholar] [CrossRef]
  25. Bellés, A.; Medina, D.; Chauchat, P.; Labsir, S.; Vilà-Valls, J. Robust error-state kalman-type filters for attitude estimation. Eurasip J. Adv. Signal Process. 2024, 2024, 75. [Google Scholar] [CrossRef]
  26. Gupta, S.; Mohanty, A.; Gao, G.C. Urban localization using robust filtering at multiple linearization points. Eurasip J. Adv. Signal Process. 2023, 2023, 100. [Google Scholar] [CrossRef]
  27. Pavlovic, M.; Banjac, Z.; Kovacevic, B. Approximate kalman filtering by both m-robustified dynamic stochastic approximation and statistical linearization methods. Eurasip J. Adv. Signal Process. 2023, 2023, 69. [Google Scholar] [CrossRef]
  28. Huber, P.J.; Ronchetti, E.M. Robust Statistics; John Wiley & Sons: Hoboken, NJ, USA, 2011. [Google Scholar]
  29. Rao, C.V.; Rawlings, J.B.; Mayne, D.Q. Constrained state estimation for nonlinear discrete-time systems: Stability and moving horizon approximations. IEEE Trans. Autom. Control. 2003, 48, 246–258. [Google Scholar] [CrossRef]
  30. Vandersteen, J.; Diehl, M.; Aerts, C.; Swevers, J. Spacecraft attitude estimation and sensor calibration using moving horizon estimation. J. Guid. Control. Dyn. 2013, 36, 734–742. [Google Scholar] [CrossRef]
  31. Poloni, T.; Rohal’-Ilkiv, B.; Johansen, T.A. Moving horizon estimation for integrated navigation filtering. In Proceedings of the 5th IFAC Conference on Nonlinear-Model-Predictive-Control (NMPC), Seville, Spain, 17–20 September 2015; pp. 519–526. [Google Scholar]
  32. Chiu, H.P.; Zhou, X.S.; Carlone, L.; Dellaert, F.; Samarasekera, S.; Kumar, R. Constrained optimal selection for multi-sensor robot navigation using plug-and-play factor graphs. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Hong Kong, China, 31 May–7 June 2014; IEEE: Piscataway, NJ, USA, 2014; pp. 663–670. [Google Scholar]
  33. Zeng, Q.H.; Chen, W.N.; Liu, J.Y.; Wang, H.Z. An improved multi-sensor fusion navigation algorithm based on the factor graph. Sensors 2017, 17, 641. [Google Scholar] [CrossRef]
  34. Wang, S.Q.; Zhang, S.J.; Zhou, B.T.; Wang, X.F. A factor graph approach for attitude estimation of spacecraft using a stellar gyroscope. IEEE Access 2019, 7, 20060–20075. [Google Scholar] [CrossRef]
  35. Forster, C.; Carlone, L.; Dellaert, F.; Scaramuzza, D. Imu preintegration on manifold for efficient visual-inertial maximum-a-posteriori estimation. In Proceedings of the 11th Conference on Robotics—Science and Systems, Sapienza Univ Rome, Rome, Italy, 13–17 July 2015. [Google Scholar]
  36. Shuster, M.D.; Oh, S.D. 3-axis attitude determination from vector observations. J. Guid. Control. 1981, 4, 70–77. [Google Scholar] [CrossRef]
  37. Ning, X.L.; Ding, Z.H.; Chen, P.P.; Wu, W.R.; Fang, J.C.; Liu, G. Spacecraft angular velocity estimation method using optical flow of stars. Sci. China-Inf. Sci. 2018, 61, 112203. [Google Scholar] [CrossRef]
  38. Qin, T.; Li, P.L.; Shen, S.J. Vins-mono: A robust and versatile monocular visual-inertial state estimator. IEEE Trans. Robot. 2018, 34, 1004–1020. [Google Scholar] [CrossRef]
  39. Leutenegger, S.; Lynen, S.; Bosse, M.; Siegwart, R.; Furgale, P. Keyframe-based visual-inertial odometry using nonlinear optimization. Int. J. Robot. Res. 2015, 34, 314–334. [Google Scholar] [CrossRef]
  40. Agarwal, S.M.K. Ceres Solver. Available online: https://github.com/ceres-solver/ceres-solver (accessed on 27 April 2023).
  41. Li, J.; Wei, X.G.; Zhang, G.J. An extended kalman filter-based attitude tracking algorithm for star sensors. Sensors 2017, 17, 1921. [Google Scholar] [CrossRef] [PubMed]
Figure 1. A block diagram of proposed approach. The green and orange blocks, respectively, symbolize the front-end preprocessing of visual and inertial measurements. The blue block represents the back-end data fusion.
Figure 2. A graph of estimator initialization. The yellow stars denote observations, the red dashed line represents the visual relative attitude, and the blue line represents the inertial relative attitude.
Figure 3. The graph illustrates the operational principles of the local nonlinear optimization estimator.
Figure 4. An illustration of reprojection error.
Figure 5. An illustration of absolute attitude error.
Figure 6. Marginalization strategy. The yellow stars represent observations, while the red crosses indicate the removal of related states and measurements.
Figure 7. Simulated output of the gyroscope.
Figure 8. Results of three-axis attitude error and bias error using different methods: (a–c) three-axis attitude error; (d–f) three-axis bias error.
Figure 9. Trend of three-axis attitude error variation: (a) yaw-axis; (b) pitch-axis; (c) roll-axis.
Figure 10. Three-axis attitude error under different gyroscope measurement errors: (a) yaw-axis; (b) pitch-axis; (c) roll-axis.
Figure 11. Bias estimation results under different constant bias: (a) 1 × 10⁻³ °/s; (b) 1 × 10⁻² °/s; (c) 1 × 10⁻¹ °/s.
Figure 12. Bias estimation results under different random walks: (a) 5 × 10⁻⁵ °/s/√s; (b) 5 × 10⁻⁴ °/s/√s; (c) 1 × 10⁻³ °/s/√s.
Figure 13. Three-axis attitude and bias estimation during a 10 s absence of image information: (a–c) three-axis attitude error; (d–f) three-axis bias.
Table 1. Parameters of visual–inertial combination system.
Camera Parameters                          Gyroscope Parameters
Focal Length      15 mm                    Bias Error        0.05 °/s
Image Size        1024 × 1024              Random Walk       0.0005 °/s
Field of View     20°                      White Noise       0.005 °/s
Exposure Time     100 ms                   Output Rate       100 Hz
Table 2. Standard deviation of three-axis attitude error under different position errors.
                 Star Sensor                          EKF                                  Our Approach
Pixel Error      σ_yaw (″)  σ_pitch (″)  σ_roll (″)   σ_yaw (″)  σ_pitch (″)  σ_roll (″)   σ_yaw (″)  σ_pitch (″)  σ_roll (″)
0.1              1.8        1.7          14           1.3        1.2          10.5         1.1        1.1          6.7
0.2              3.6        3.4          27.2         2.5        2.3          20.6         1.9        2            12.9
0.3              5.4        5.4          41.4         3.6        3.5          33.7         2.9        2.8          18.8
0.4              7          6.8          54.8         4.8        4.6          43.8         3.7        3.5          28.4
0.5              8.5        8.3          65.7         6.1        5.8          47.4         4.6        4.2          32.9
0.6              10.4       10.3         83.7         7          7.1          62.4         5.3        5.3          45
Table 3. Results of three-axis attitude error.
Noise (°/s)      1 × 10⁻⁶    1 × 10⁻⁵    1 × 10⁻⁴    1 × 10⁻³
σ_yaw (″)        2.80        2.83        2.91        3.58
σ_pitch (″)      2.68        2.72        2.76        3.53
σ_roll (″)       18.38       18.62       18.76       21.17
Table 4. Estimation error of three-axis attitudes during different periods of image information loss.
Time (s)         1        5        7        10
σ_yaw (″)        2.9      2.9      2.8      2.9
σ_pitch (″)      2.8      2.8      2.8      2.8
σ_roll (″)       18.8     20.4     21.1     21.4