
This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/3.0/).

An optical flow-based technique is proposed to estimate spacecraft angular velocity from sequences of star-field images. It does not require star identification and can thus also deliver angular rate information when attitude determination is not possible, as during platform de-tumbling or slewing. Region-based optical flow calculation is carried out on successive star images preprocessed to remove background. Sensor calibration parameters, the Poisson equation, and a least-squares method are then used to estimate the angular velocity vector components in the sensor rotating frame. A theoretical error budget is developed to estimate the expected angular rate accuracy as a function of camera parameters and star distribution in the field of view. The effectiveness of the proposed technique is tested using star field scenes generated by a hardware-in-the-loop testing facility and acquired by a commercial off-the-shelf camera sensor. Simulated cases comprise rotations at different rates. Experimental results are presented which are consistent with theoretical estimates. In particular, very accurate angular velocity estimates are generated at lower slew rates, while in all cases the achievable accuracy in the estimation of the angular velocity component along the boresight is about one order of magnitude worse than that of the other two components.

Spacecraft requiring accurate three-axis attitude control are all equipped with star sensors to support attitude determination with high accuracy. In recent years, star tracker technology has seen a remarkable evolution. In particular, these sensors have gained significant improvements in their autonomy and capabilities [

to produce high-accuracy, high-reliability attitude angle and rate estimates without external support;

to operate in a wide range of mission conditions;

to solve the lost-in-space problem autonomously and in a short time;

to deliver angular rate information even when attitude determination is not feasible, as during platform de-tumbling or slewing.

These functionalities should be achieved via additional software routines rather than by hardware enhancements (apart from improved sensitivity of photodetectors), and different operating modes should control sensor operation. As a result, software for system control and management becomes very complex.

Among the cited advanced functionalities, one of the most demanding, in terms of algorithm and software complexity and sensor operation management, is the determination of the satellite inertial angular velocity during slewing and/or de-tumbling phases. Indeed, many existing satellites execute slewing maneuvers at rates lower than 1°/s, at which the star sensor is still able to acquire star field images, so that star centroids can be computed on the focal plane. Instead, higher angular rates (>1°/s) are being proposed for high-agility small satellites and next generation Earth Observation satellites [

On the other hand, there is a growing interest in systems able to propagate attitude of very small satellites (such as CubeSats) using low cost sensors and optics [

In this paper a technique for angular rate determination based on optical flow computation is analyzed. Besides being adopted for vision-based guidance and control of Unmanned Aircraft Systems, optical flow techniques have found use in space applications in the fields of remote sensing and space exploration. Regarding spaceborne remote sensing, optical flow measurements have been used for example to estimate glacier motion from multi-temporal sequences of electro-optical (EO) images [

The optical flow technique proposed in this paper relies on the computation of a displacement field between successive star images; a least-squares method is then used to find the best estimate of the angular velocity vector components in the rotating frame matching the observed displacement field. The effectiveness of the proposed technique is tested using star field scenes reproduced by an indoor testing facility and acquired by a commercial off-the-shelf camera sensor, briefly described in the paper. Specifically, star field scenes relevant to representative satellite slewing maneuvers are simulated. The corresponding images are then processed with the optical flow algorithm to extract the angular rate information, which is finally compared with the input provided to the testing facility.

Satellite angular rate estimation, independent of star identification and attitude measurement, has also been discussed in [

In particular, [

In [

With regards to these latter works, the work presented in this paper provides the following original contributions:

the entire angular velocity measurement process is presented, comprising accurate and efficient optical flow computation and its relation to algorithm tuning;

a complete theoretical error budget is developed that allows predicting the expected measurement accuracy as a function of camera and geometric parameters;

the developed methodology is tested in hardware-in-the-loop simulations of representative satellite slewing maneuvers.

The paper is organized as follows: Section 2 describes the adopted algorithm together with a preliminary error budget to estimate the expected angular accuracy; Sections 3 and 4 then describe, respectively, the adopted indoor facility with the simulation scenario, and the results of the algorithm tests on star field scenes acquired with the laboratory facility.

The developed algorithm is composed of a few basic steps: given a pair of consecutive star field images, the acquired images are first pre-processed to eliminate background noise, and the velocity vector field (which is in fact a displacement field) is calculated in pixels. Then, unit vectors and unit vector derivatives corresponding to the computed velocity vectors are evaluated by exploiting a neural network calibration, which estimates at the same time the intrinsic and extrinsic parameters relevant to the adopted experimental setup. Once unit vectors and their derivatives are known, the Poisson equation, which expresses the time derivative of a unit vector in a rotating reference frame, and a least-squares method are used to find the best estimate of the angular velocity vector components in the rotating frame.

The above-mentioned process is summarized in

Given a pair of consecutive grey-level images, a background noise removal process is first carried out separately on both images to eliminate sensor noise, which can affect the accuracy of the optical flow computation. To this end, a global threshold technique [

An example of the background noise removal process around a star is shown in the corresponding figure. Star sensors typically have a limiting visible magnitude m_{v} of 6–6.5. Assuming as reference a 20° FOV and m_{v} = 6.2 as the detection limit, the resulting average number of detectable stars in the sensor FOV is 40 [
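As an illustration, a minimal global-threshold background removal step might look as follows (the threshold rule, image mean plus k standard deviations, and the value of k are illustrative assumptions, not the paper's exact tuning):

```python
import numpy as np

def remove_background(img, k=5.0):
    """Global-threshold background removal (sketch).
    Pixels at or below mean + k*std of the image are treated as
    background noise and zeroed; brighter (star) pixels are kept."""
    img = img.astype(float)
    threshold = img.mean() + k * img.std()
    return np.where(img > threshold, img, 0.0)
```

The step is applied separately to each frame of the pair before the optical flow computation, so that only star pixels contribute to the region matching.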

In general, the optical flow is the 2-D motion field, which is the perspective projection onto the image plane of the true 3-D velocity field of moving surfaces in space [

The basic assumption in measuring the image motion is that the intensity structures of local time-varying image regions are approximately constant for at least a short time duration. The classical “optical flow constraint equation” [
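The brightness-constancy constraint referenced above can be stated explicitly (standard form, reproduced here for convenience, with $u$ and $v$ the image-plane velocity components and subscripts denoting partial derivatives of the intensity $I$):

```latex
\frac{\partial I}{\partial x}\,u + \frac{\partial I}{\partial y}\,v + \frac{\partial I}{\partial t} = 0
```

This single equation is under-determined for the two unknowns $(u, v)$, which is why the different families of methods discussed below add further assumptions.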

Different approaches can be adopted to compute optical flow [

Differential techniques compute velocity from spatiotemporal derivatives of image intensity or of filtered versions of the images (using low-pass or band-pass filters). In this framework,

As an example, Horn and Schunck's method assumes that the motion field is smooth over the entire image domain and enforces a global smoothness term [

Differential techniques are not the best solution in the considered case for several reasons. First of all, after background removal, images are very sparse, with few non-zero pixels and a significant departure from the smoothness properties these techniques rely on. Thus, accurate numerical differentiation is typically unachievable. The same happens if background removal is not applied, because of the negative impact of noise. It must also be considered that if a very high resolution camera is used,

Since phase-based and energy-based methods work in the Fourier domain, in the star sensor case they also suffer from the same problems as differential techniques.

Region-based matching is, instead, an appealing solution because it works well even in noisy images without smooth intensity patterns, and in the case of large pixel velocities such as those considered here.

The basic principle is to evaluate velocity as the displacement that yields the best fit between image regions at different times. Specifically, in the considered application, a customized two-step method is adopted in which a coarse estimate of the star displacement on the focal plane is computed first and then refined to improve accuracy in the velocity field estimate:

First of all, the integer shift in pixels between frame n and frame n + 1

The coarse estimate of

The two steps are repeated for each star detected and labeled in the first image. Once star displacements are determined, the information can be easily translated into velocity information (in pixels) by taking the frame rate into account. Within this framework, it is assumed that accurate image timing is available, thanks to the adoption of proper hardware (camera and shutter technique) and software (real-time operating systems and proper coding of image acquisition).
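The two-step matching described above can be sketched as follows (the window and search sizes, and the sum-of-squared-differences criterion, are illustrative assumptions; the refinement step is implemented here as an intensity-weighted centroid, one common choice):

```python
import numpy as np

def match_star(img1, img2, centroid, win=7, search=20):
    """Two-step region matching for one star (sketch).
    Step 1: coarse integer-pixel shift minimizing the SSD between a
    window around the star in frame n and candidate windows in frame n+1.
    Step 2: sub-pixel refinement via intensity-weighted centroids."""
    r, c = int(centroid[0]), int(centroid[1])
    tpl = img1[r - win:r + win + 1, c - win:c + win + 1]
    best, best_ssd = (0, 0), np.inf
    for dr in range(-search, search + 1):          # step 1: coarse search
        for dc in range(-search, search + 1):
            cand = img2[r + dr - win:r + dr + win + 1,
                        c + dc - win:c + dc + win + 1]
            if cand.shape != tpl.shape or (r + dr - win) < 0 or (c + dc - win) < 0:
                continue
            ssd = np.sum((cand - tpl) ** 2)
            if ssd < best_ssd:
                best_ssd, best = ssd, (dr, dc)
    dr, dc = best                                  # step 2: sub-pixel refinement
    cand = img2[r + dr - win:r + dr + win + 1, c + dc - win:c + dc + win + 1]
    rows, cols = np.mgrid[-win:win + 1, -win:win + 1]

    def offset(w):
        m = w.sum()
        return (np.sum(rows * w) / m, np.sum(cols * w) / m) if m > 0 else (0.0, 0.0)

    o2, o1 = offset(cand), offset(tpl)
    return (dr + o2[0] - o1[0], dc + o2[1] - o1[1])
```

The function returns the star displacement in (row, column) pixels between the two frames; dividing by the frame interval turns it into the velocity vector used in the subsequent steps.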

Once star centroids and vector displacements between two consecutive frames are known, the subsequent step is to convert this information into unit vectors and their derivatives. This must take camera calibration parameters into account and can be done in different ways.

For example, a classical calibration procedure can be used to estimate, firstly, camera intrinsic parameters to be used in a pinhole camera model plus distortion effects (e.g., focal length, optical center, radial and tangential distortion,
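For the classical route, a minimal sketch of the centroid-to-unit-vector conversion under a pure pinhole model is as follows (distortion compensation is omitted, and the function and parameter names are illustrative, not the paper's):

```python
import numpy as np

def pixel_to_unit_vector(u, v, fx, fy, cx, cy):
    """Pinhole back-projection (sketch, no distortion model).
    Maps a star centroid (u, v) in pixels to a line-of-sight unit
    vector in the sensor frame, z axis along the boresight.
    fx, fy: focal lengths in pixels; cx, cy: optical center."""
    x = (u - cx) / fx
    y = (v - cy) / fy
    s = np.array([x, y, 1.0])
    return s / np.linalg.norm(s)
```

Unit-vector derivatives are then obtained by applying the same mapping to the displaced centroid and dividing the difference by the frame interval, or analytically through the Jacobian of the mapping.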

In the considered case, an end-to-end neural-network-based calibration procedure is used, which correctly takes account of all the intrinsic and extrinsic parameters relevant to the camera and the test facility [

Once unit vectors and their derivatives are known, angular velocity estimation is based on the Poisson equation, which relates the temporal derivatives of the star unit vectors in the Inertial Reference Frame (IRF) and in the Star sensor Reference Frame (SRF):

Thus, three non-independent linear equations (in three unknowns) can be written for each star, leading to N × 3 linear equations if N is the number of stars for which the optical flow has been calculated.

These N × 3 equations can be solved for ω by a classical least-squares technique based on orthogonal-triangular decomposition, which is computationally light thanks to the sparse structure of the problem matrix. Once the solution for ω is obtained, measurement residuals can be calculated to detect anomalous values and thus provide a first assessment of the method's reliability.
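A compact sketch of this step, assuming the Poisson relation ds/dt|_SRF = s × ω for star directions fixed in the inertial frame, and using NumPy's dense least-squares solver in place of the sparse orthogonal-triangular formulation:

```python
import numpy as np

def skew(v):
    """Cross-product (skew-symmetric) matrix: skew(v) @ w == np.cross(v, w)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def estimate_omega(units, unit_rates):
    """Stack three linear equations per star from ds/dt = s x omega
    and solve the resulting (3N x 3) system for omega in the
    least-squares sense. Residuals of the fit can be inspected
    afterwards to flag anomalous star measurements."""
    A = np.vstack([skew(s) for s in units])        # (3N, 3)
    b = np.hstack([sd for sd in unit_rates])       # (3N,)
    omega, *_ = np.linalg.lstsq(A, b, rcond=None)
    return omega
```

With at least two non-collinear star directions the stacked system has full rank, so the three angular velocity components are observable.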

A theoretical analysis can be carried out to derive a first order error budget for the selected technique. The input parameters for the error budget are: the angular resolution of the considered sensor, the angular velocity to be measured and the consequent velocity field pattern of the stars, the attitude of the SRF with respect to the inertial reference frame (which determines the star distribution within the camera field of view), and the number of detected stars (which depends on star sensor sensitivity and, again, on sensor attitude).

With reference to the corresponding figure, θ is the angular separation of the star line-of-sight from the X_{s},Z_{s} plane, and ϕ is the angular separation from the sensor boresight Z_{s} of its projection on the X_{s},Z_{s} plane. In addition, we define χ as the angle of the generic star line-of-sight with respect to the sensor boresight axis.

The error analysis can be carried out separately for the different components of the angular velocity in the SRF (ω_{1s}, ω_{2s}, ω_{3s}). Let us first consider ω_{1s}, i.e., the case ω_{2s} = ω_{3s} = 0, ω_{1s} ≠ 0. In this case

The unit vector components can be written in terms of the ϕ and θ angles. Since star sensors typically have small FOVs, we can apply the small angle approximation thus getting:

And from

Then, we can relate the ϕ and θ rate of change directly to the star displacement on the focal plane:
$$\dot{\phi} \approx \frac{\dot{x}_c}{f}, \qquad \dot{\theta} \approx \frac{\dot{y}_c}{f}$$

where x_{c} and y_{c} are the star coordinates on the focal plane and f is the focal length.

Thus, we get the final approximate relation, in which the first component of the inertial angular velocity vector is directly related to the velocity component along the y_{s} axis computed by means of the optical flow technique and expressed as an angular velocity:

This yields a direct estimate of ω_{1s}. In what follows, we use

From a numerical point of view, the uncertainty in the single-star displacement measurement can be assumed to be of the order of the IFOV divided by the square root of N_{starpixels}, the number of pixels covered by the star image.

Since ω_{1s} represents a rotation around an axis perpendicular to the sensor boresight, the corresponding velocity field measured on the focal plane is uniform. The final estimate of ω_{1s} is thus obtained by combining N identical, and identically distributed, measurements of the star displacement. Hence, the uncertainty in ω_{1s} does not depend on the star position in the FOV and can be estimated as:

Assuming realistic values for the frame rate (10 Hz), the number of pixels per star (10), and the number of detected stars (40), we get the uncertainty in ω_{1s} as a function of camera IFOV shown in the corresponding figure: over the considered IFOV range, the uncertainty in ω_{1s} goes from about 0.0035°/s to about 0.035°/s.
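One plausible first-order scaling consistent with the figures quoted above is σ ≈ IFOV · f_rate / √(N_starpixels · N_stars); the exact dependence is an assumption made here for illustration:

```python
import math

def omega1_uncertainty(ifov_deg, frame_rate_hz, n_star_pixels, n_stars):
    """First-order uncertainty in omega_1s (sketch; the scaling law is
    an illustrative assumption). The single-star displacement error is
    taken as IFOV/sqrt(n_star_pixels) per frame pair, scaled by the
    frame rate, and N identically distributed star measurements are
    combined with the usual 1/sqrt(N) reduction."""
    sigma_single = ifov_deg * frame_rate_hz / math.sqrt(n_star_pixels)
    return sigma_single / math.sqrt(n_stars)
```

With the prototype's IFOV of 0.017°, a 10 Hz frame rate, 10 pixels per star, and 40 stars, this gives about 0.0085°/s, within the 0.0035–0.035°/s range quoted above.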

Uncertainty in ω_{2s} can be estimated in exactly the same way, and the error budget is identical since azimuth and elevation IFOVs usually coincide. It is worth noting that the estimated uncertainty does not depend on the angular rotation value which produced the observed velocity field. Of course, this conclusion relies on the validity of the proposed model, i.e., on the assumption that the slew rate is small enough that star images can still be formed on the focal plane in the considered subsequent frames.

The error budget for ω_{3s} is somewhat different. Combining the previous relations with ω_{1s} = ω_{2s} = 0, ω_{3s} ≠ 0, and with the small angle assumption, we get:

Combining

The first term can be further developed by using spherical trigonometry. Indeed, with reference to

From the small angle assumption we get:

Thus, from

The χ angle obviously depends on the observed star, and its maximum value depends on the FOV size.

The uncertainty in ω_{3s} can then be calculated at first order, and for a single star, as:

By developing the different terms we get:

By using

For the typically encountered angular velocities and high frame rates (10 Hz or more), the second term in the above equation is larger than the first one, which yields the following approximate form of the uncertainty in ω_{3s} for a single star:

The final ω_{3s} estimate is obtained by combining star measurements having different error distributions. However, a preliminary estimate of the ω_{3s} uncertainty can be obtained by taking an average value of χ and again applying the factor resulting from the combination of N measurements.

Considering an average value of 5° for χ (realistic for typical medium-large size FOVs), the achievable accuracy is about one order of magnitude worse than the one attainable for ω_{1s}. This is also consistent with the usual difference between the attitude measurement uncertainties across and along the boresight axis of a star sensor [. The corresponding figure shows the resulting theoretical uncertainty in ω_{3s} as a function of camera IFOV. Of course, the actual estimation uncertainty depends on the distribution of detected stars within the sensor FOV, and thus also on the actual attitude of the satellite.
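Under the assumptions above, the boresight-rate uncertainty can be summarized compactly (a sketch consistent with the one-order-of-magnitude statement; $\bar{\chi}$ must be expressed in radians):

```latex
\sigma_{\omega_{3s}} \approx \frac{\sigma_{\omega_{1s}}}{\bar{\chi}}
```

For $\bar{\chi} = 5^\circ \approx 0.087$ rad, the ratio $1/\bar{\chi} \approx 11.5$, i.e., about one order of magnitude, in agreement with the comparison against the ω_{1s} budget.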

Tests for performance assessment of the discussed procedure were carried out by means of a functional hardware prototype of a star sensor operated in a laboratory facility for star field scene simulation.

The star sensor prototype was designed to implement the operational modes suggested by the European Space Agency [

The laboratory test facility (

a single pixel of the LCD screen is exploited to simulate a single star of a star field if static pointing is considered, or in the case of low-rate dynamics of the orbiting platform. By contrast, when high-rate attitude dynamics are accounted for in the simulation, a single star is represented by the strip of pixels reproducing its apparent trajectory in the sensor FOV during the update time of the displayed star field scene. Pixel brightness control is used to reproduce the star's apparent brightness. Approximations result from this simulation approach as a consequence of the spatial, temporal, and pixel brightness digital discretization of the synthetic star field scenes and relevant sequences. However, a theoretical, worst-case analysis [

a collimating lens allows for simulating the large distance of the star sensor from the light source;

a high-performance video processor is adopted for LCD display control by an embedded computer, to carry out static as well as dynamic simulations. The former simply consist of sequences of star field scenes, as resulting from assigned sensor attitudes. The latter reproduce the evolution of the star field observed by the sensor during assigned maneuvers (orbit and/or attitude dynamics), with accurate timing;

sensor position within the darkroom and collimating lens selection guarantee matching of the instrument FOV and the LCD apparent angular size. Micro-translators and rotators are used for fine regulation and alignment of the sensor orientation and the facility intrinsic reference frame,

finally, precise matching is software-based. In particular, it is realized by means of a neural calibration function used to compensate for residual misalignment after installation in the darkroom and to finely adjust the sensor output to the LCD star angular positions [

The above hardware is completed by the Experiment-Control Workstation, which coordinates simulation and sensor operation during tests and also generates the needed simulated star field data off-line, before star sensor testing.

The accuracy and reliability of the proposed method can be evaluated by exploiting the described hardware-in-the-loop facility. In all the simulated cases, a circular equatorial Low Earth Orbit (LEO) at an altitude of 500 km is considered. This choice does not compromise the general validity of the results, since a wide range of attitude maneuvers is simulated to evaluate the effect of different star image patterns on method accuracy. Initially, the satellite body reference frame (BRF) is assumed to coincide with the classically defined orbital reference frame (ORF),

The SRF is thus obtained from the BRF by a 180° rotation around axis 1. As a consequence, the star sensor boresight axis initially points in the zenith direction in the equatorial plane. The reference frames used for the simulations are depicted in

The simulated cases differ in the considered attitude maneuvers. In the first two cases (case 1 and case 2), a satellite rotation around axis 1 with constant angular velocity (1 deg/s in case 1, 5 deg/s in case 2) is superimposed on the constant angular velocity of the Keplerian orbit (6.243·10^{−2} deg/s along the negative 2 axis initially), so that the star sensor boresight axis moves out of the equatorial plane towards the North pole while the satellite rotates around the Earth. This condition allows evaluating method performance with a varying number of detected stars and an almost uniform apparent velocity field on the focal plane (pure translation).

In the other two cases (case 3 and case 4), the satellite rotates around the star sensor boresight axis, again with constant angular velocity (1 deg/s in case 3, 5 deg/s in case 4). This condition is representative of the case in which the velocity field on the focal plane is not uniform (pure rotation). Actually, a small translational component due to the orbital angular velocity is present in the acquired images.

The simulated angular rates are relevant to the slew maneuvers of many existing satellites, which are typically executed at rates lower than 1°/s. In this condition, the star sensor is able to acquire star field images, and star centroids can be computed on the focal plane. Higher angular rates (>1°/s) are instead proposed for high-agility small satellites and next-generation Earth Observation satellites [

For the reader's convenience, all the simulated cases are summarized in the corresponding table, where the reported values (ω_{1s}, ω_{2s}, and ω_{3s}) represent the components along the SRF axes of the inertial angular velocity vector of the SRF.

In this case, initially the true angular velocity vector has non-zero components only along the x_{s} and y_{s} axes of the SRF. As a consequence, the velocity field pattern represents a pure translation with a larger component along the y_{s} axis. This condition is evident in

As a result of the relatively large number of detected stars, and hence velocity vectors, both the larger x_{s} component (1 deg/s) and the smaller y_{s} component (0.06 deg/s) are measured with good accuracy, as shown in

Although the proposed technique is specifically tuned to work with star images, it is of great interest to investigate its application to cases with higher angular velocities, where stripes rather than point-like stars are imaged on the focal plane and a large displacement in pixels is measured between consecutive frames. Case 2 is representative of this condition (see the corresponding figure), with the orbital rate component measured with slightly worse accuracy compared with case 1. Instead, the estimate of ω_{1s} shows a small negative bias (due to a slight under-estimation of the star displacement) and a larger error standard deviation, which is also found in the estimate of the third component.

Considering now the first radial rotation case (case 3), the velocity field pattern is of course very different from the one observed in cases 1 and 2, with a rotation around the boresight axis superimposed on the horizontal translation due to the orbital angular velocity. Notwithstanding the large variation of the velocity magnitudes on the focal plane, the optical flow algorithm is able to capture the motion field (shown in

Performance in terms of mean and standard deviation of the errors with respect to the assigned values is summarized in the corresponding table. The noise in the measurement of ω_{1s} is of the order of 10^{−2} deg/s (about 1% of the “true” value), whereas the noise in the boresight component is always about one order of magnitude higher. In absolute terms, a slightly better performance is measured in the radial rotation case, which is still in agreement with the theoretical error budget, taking into account that the number of detected stars and the average off-boresight angle (of the order of 55 and 7°, respectively) were larger than the reference values assumed in deriving

This paper focused on an optical flow-based technique to estimate spacecraft angular velocity from successive images of star fields. The main steps of the developed algorithm are image pre-processing for background removal, region-based optical flow computation, and the least-squares solution of a linear system obtained by expressing the time derivative of a unit vector in a rotating reference frame for each detected star.

Algorithm performance was evaluated on a set of star images generated with different rates and geometries (1°/s and 5°/s out-of-plane or radial rotations) by a hardware-in-the-loop testing facility and acquired by a commercial off-the-shelf camera sensor.

The method showed good performance in terms of accuracy and reliability, and experimental results were consistent with the developed theoretical error budget, which takes account of star fields and camera parameters. In the case of the out-of-plane rotation at 1°/s, unbiased angular rate estimates were generated and the measurement noise was of the order of 10^{−2} deg/s for the off-boresight components, while the achievable accuracy for the angular velocity component along the boresight was about one order of magnitude worse. A slightly better performance was obtained in the 1°/s radial rotation case due to the number and the average off-boresight angle of the detected stars.

Rotation at 5°/s represents a very challenging situation for angular velocity measurement, with star strips on the image plane and a significant reduction of the signal-to-noise ratio. Nevertheless, the developed algorithm was able to measure these velocities with satisfactory accuracy, especially in the radial rotation case.

Future work is aimed at optimizing algorithm tuning in view of real-time implementation. In fact, the computational burden depends dramatically on the settings related to the maximum angular velocity that has to be measured. From this point of view, a feedback control scheme, where the current algorithm settings depend on the latest angular velocity estimate and the measurement residual, seems to be a promising solution. Furthermore, the measurement residual can also be used to generate a real-time estimate of the measurement covariance, which allows the generated output to be effectively integrated in dynamic filtering schemes, possibly also comprising estimates from other sensors.

The authors declare no conflict of interest.

Algorithm flow-chart.

Background noise removal process (pseudo colors are used for the sake of clarity).

Definition of the generic star angles in SRF: the star line-of-sight is in red, Z_{s} is the sensor boresight axis.

Approximate theoretical uncertainty in ω_{1s} estimate as a function of sensor IFOV.

Theoretical uncertainty in ω_{3s} as a function of sensor IFOV.

Laboratory facility set-up for star field simulation and star sensor tests.

Reference frames for the considered simulations.

Velocity field as estimated by the optical flow algorithm from a couple of consecutive images (case 1).

Estimated angular velocity components against “true” values (case 1, 10 frames per second).

Sample image of star stripes relevant to case 2, significantly modified for the sake of clarity (large angular velocity).

Estimated angular velocity components against “true” values (case 2, 10 frames per second).

Vector field as estimated by the optical flow algorithm from a couple of consecutive images (case 3).

Estimated angular velocity components against “true” values (case 3, 10 frames per second).

Estimated angular velocity components against “true” values (case 4, 10 frames per second).

Star sensor prototype specifications.

Field Of View | 22.48° × 17.02° |

Effective Focal Length | 16 mm |

F-number | 1.4 |

Star Sensitivity | <visible magnitude 7 |

Image Sensor | ½″ CCD Progressive Scan |

Image Size | 1,280 × 1,024 pixel |

Instantaneous Field Of View | 0.017° × 0.017° |

Test facility features relevant to sensor FOV match.

Display active area H × V (m) | 0.641 × 0.401 |

Display resolution H × V (pixel) | 2,560 × 1,600 |

Collimating lens focal length (m) | 1.3 |

Collimator diameter (mm) | 50 |

Display apparent angular size (deg) | 27.6 (H) × 17.5 (V) |

Display pixel apparent angular size at screen centre (deg) | 0.011 × 0.011 (H × V) |

Overall magnification ratio (with 16-mm-focal sensor optics) | 1.23 × 10^{−2} |

Summary of simulated test cases: initial conditions.

| | Case 1 | Case 2 | Case 3 | Case 4 |
|---|---|---|---|---|
| ω_{1S} (°/s) | 1 | 5 | 0 | 0 |
| ω_{2S} (°/s) | −6.243·10^{−2} | −6.243·10^{−2} | −6.243·10^{−2} | −6.243·10^{−2} |
| ω_{3S} (°/s) | 0 | 0 | −1 | −5 |

Synthetic statistics relevant to low slew rates.

| | Case 1 mean | Case 1 std | Case 3 mean | Case 3 std |
|---|---|---|---|---|
| Error on ω_{1s} (°/s) | −1.20·10^{−4} | 1.64·10^{−2} | −2.88·10^{−3} | 6.72·10^{−3} |
| Error on ω_{2s} (°/s) | −1.90·10^{−3} | 9.20·10^{−3} | 1.61·10^{−3} | 5.71·10^{−3} |
| Error on ω_{3s} (°/s) | −4.96·10^{−3} | 1.22·10^{−1} | −6.66·10^{−3} | 5.57·10^{−2} |

Synthetic statistics relevant to high slew rates.

| | Case 2 mean | Case 2 std | Case 4 mean | Case 4 std |
|---|---|---|---|---|
| Error on ω_{1s} (°/s) | −1.79·10^{−1} | 1.81·10^{−1} | −9.57·10^{−3} | 1.52·10^{−2} |
| Error on ω_{2s} (°/s) | 4.26·10^{−4} | 3.90·10^{−2} | 9.61·10^{−3} | 7.47·10^{−3} |
| Error on ω_{3s} (°/s) | −1.28·10^{−1} | 1.38 | 7.95·10^{−3} | 1.21·10^{−1} |