Article

A Stellar Imaging Error Correction Method Based on an Ellipsoid Model: Taking Ziyuan 3-02 Satellite Data Analysis as an Example

College of Astronautics, Nanjing University of Aeronautics and Astronautics, Nanjing 210016, China
*
Author to whom correspondence should be addressed.
Sensors 2018, 18(12), 4259; https://doi.org/10.3390/s18124259
Submission received: 29 September 2018 / Revised: 30 November 2018 / Accepted: 30 November 2018 / Published: 4 December 2018
(This article belongs to the Section Remote Sensors)

Abstract

Stellar point image coordinates are among the important observations needed for high-precision space attitude measurement with a star sensor. Highly coupled imaging errors occur under dynamic imaging conditions. Using the results of preliminary star point extraction from star sensor imaging data combined with a superimposed time series, we analyze the relative motion and trajectory of the star point images, establish an imaging error ellipsoid fitting model based on the elliptical orbit of the satellite platform, and achieve geometric error correction of a star sensor's image star points through multi-parameter screening of the ambiguous solutions of the intersections of the elliptic equations. On simulation data, the method achieved a correction accuracy improvement of 89.8%, and each star point coordinate required 0.259 s to calculate, on average. In addition, the method was applied to real data from the satellite Ziyuan 3-02 to correct the star points. The experiment shows that the mean of the attitude quaternion errors over all components was reduced by 52.3%. Our results show that the estimated parameters of dynamic imaging errors can effectively compensate the star point image observations and improve the accuracy of attitude calculation.

1. Introduction

A star sensor is an electronic measurement system that can measure the three-axis attitude of the carrier satellite using a star as the measurement object and a photodetector as the core component. Owing to their strong autonomy, good concealment, high reliability, and high precision, star sensors have become important instruments for measuring spacecraft attitude in the aerospace field. They are widely used with Earth observation satellites and space exploration aircraft. In recent years, with the rapid development of aerospace science and technology, the dynamic high-precision attitude measurement of aircraft has become even more important for space target surveillance and geosynchronous observation of constellations of spacecraft [1,2,3,4].
Dynamic high-precision attitude measurement has become indispensable for aircraft [5,6]. Because this technique requires the image to provide extremely accurate star point coordinates, research on resolving star sensor imaging errors is ongoing [7]. A star sensor introduces complex imaging errors in the dynamic imaging process, making the accuracy of attitude measurement difficult to guarantee [8,9]. These imaging errors are due to the relative motion between the starry sky and the camera [10]; star point positioning errors include both static [8,11] and dynamic systematic errors [12]. All of these errors can make the star point position inaccurate, thus affecting the measured attitude of the satellite platform.
The development of methods for correcting these errors is ongoing. On the theoretical level, Jia et al. [13] used the frequency domain method to explore the factors influencing errors involved in star point extraction from the perspective of an imaging model. However, that study did not shed light on the inherent mechanism of error generation. Yao et al. [14] established a distortion model based on point coordinate measurement, which implements non-uniformity error correction for each pixel. Li et al. [15] pointed out that suppressing pixel non-uniformity noise can greatly improve the accuracy of star point positioning based on analysis of a star point positioning error model. Schmidt et al. [16] considered the in-orbit usage and real-time requirements of the star sensor, treated all noise as pixel non-uniformity noise, predicted the pixel-level noise, and corrected the star point positioning error using the background difference method. Liao et al. [17] considered combining a star sensor and an inertial platform to counteract the negative effects of excessive angular velocity of the carrier satellite and to ensure that star point positioning eliminates cumulative errors. At a practical engineering level, Samaan et al. [18] considered selecting more sensitive chips to avoid star blur, while Yang et al. [19] used the least squares support vector regression method to train and fit an image to compensate for the star point coordinates. Most of these methods analyzed the star sensor imaging errors from the perspective of imaging principles or hardware and compensated for the errors. Kalman filter [20] and smoother techniques can also improve attitude measurements, but they are highly coupled with the hardware design model and act on the attitude directly.
The above studies make important contributions to the theoretical development and application of measurement technology. However, each of them considers one type of error to the neglect of others. In addition, they lack error analysis for star maps obtained from a moving satellite platform. In response, a mathematical model that considers multiple imaging errors needs to be established. In this paper, we develop a generic model, built by analyzing a large amount of star map data, to be used in real-time attitude determination. Specifically, we approach error correction from the data side rather than the hardware side. Our method is easier to implement than a stricter physical model because it does not require knowledge of how a specific star sensor works; the only input needed is the coordinates before correction. The resulting model can then be used to correct star point coordinates from the same star sensor.
In our method, we consider the star maps of the star sensor to be superimposed, and the relative motion trajectories of the stars indicate the motion of the satellite to some extent. We select point coordinates on every trajectory randomly and uniformly. Then, we conduct bidirectional fitting of the motion trajectory. If the errors obtained by fitting from the two directions are not consistent, a further quadratic fit is required. This fit establishes a link between the image position and the coefficients of the first fit. The image position can be indicated by one set of the point coordinates mentioned above; the selection principle is that the point coordinates should best match the results of the first fitting. These parameters are then used to constrain and screen the corrected coordinates, so as to improve their accuracy.
This study is based on image data collected by the star sensor. We take the motion of the satellite platform, use an elliptic equation to fit the relative motion trajectory of the star image, and implement geometric error correction of the image star points using multiple parameters to classify the ambiguous solutions of the intersection of the elliptic equation. As long as a set of identical star sensor data are available to obtain the initial model, the coordinates of the star point can be quickly compensated, and the model can be continuously updated to improve its accuracy. Our research shows that after dynamic imaging error analysis and parameter estimation, the intersection of multi-estimated parameter curves can effectively be used to compensate for the image star point observation values. The time required for the method is short, which will allow for better observations for the strict calibration and attitude calculation of subsequent star sensors.

2. Methods

2.1. Technical Outline of the Ellipsoid Model Method

As mentioned above, we wanted to solve the imaging error problem of the star sensor based on star maps. We considered the star maps of the star sensor to be superimposed, and the relative motion trajectory of the star indicates the motion of the satellite to some extent. Then, we conducted bidirectional fitting of the motion trajectory. Based on this fitting, an elliptic model was established, the parameters were estimated and the coordinates were compensated to correct the star point error.
Figure 1 shows the flow of the ellipsoid model method.

2.2. Ellipsoid Model of Image Star Points

In the celestial system, stars stand at very great distances from satellite platforms that orbit the Earth; therefore, stars can be thought of as relatively static control points that can be used to determine the attitude of a satellite platform. The process of star sensor imaging is shown in Figure 2. During a known period, a satellite moves in an elliptical orbit. Note that a spacecraft experiencing acceleration in any direction will not follow an elliptical orbit. Fortunately, observation satellites are usually not accelerating when observations occur. We consider that when an observation satellite is in a stable attitude, the satellite is slewing at a constant angular rate. As the satellite moves along its elliptical orbit, the camera center of the star sensor changes accordingly. Because the star is stationary, the imaging trajectory of the star during this time reflects the elliptical orbit of the satellite platform to some extent.
However, because the x and y coordinates in the image are perpendicular to each other and measured relatively independently, and the semi-major and semi-minor axes of the elliptical orbit are negligible relative to the star distance during star sensor imaging, a single imaging trajectory has two possible fitting directions (the x and y directions); such ambiguous solutions are considered in the subsequent parameter estimation. If the errors obtained by fitting from the two directions are consistent, that is, the errors from direction x are approximately equal to those from direction y, then the error can be compensated. If they are not consistent, a further quadratic fit is required.
From the perspective of the image, the imaging trajectory of a single star point can be represented by a certain elliptical arc equation. Since the satellite platform traverses only a small elliptical arc during this period, a parabolic approximation is used in place of the small elliptical arc equation. This is analyzed bidirectionally in x and y, so an image point trajectory equation can be established (Equation (1)).
$$\begin{cases} x = a y^{2} + b y + c \\ y = a' x^{2} + b' x + c' \end{cases}\tag{1}$$
In Equation (1), x and y are points selected randomly and uniformly on the relative trajectories of the stars after the superposition of multi-frame star maps.
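The bidirectional fit of Equation (1) can be sketched with NumPy's polynomial fitting; the trajectory samples below are hypothetical, for illustration only:

```python
import numpy as np

# Hypothetical (x, y) pixel samples picked uniformly along one superimposed
# star trajectory; the values are illustrative only.
x = np.array([100.0, 180.0, 260.0, 340.0, 420.0, 500.0])
y = np.array([400.0, 452.0, 498.0, 538.0, 572.0, 600.0])

# Bidirectional quadratic (parabolic) fits of Equation (1):
#   x = a*y^2 + b*y + c      (x as a function of y)
#   y = a'*x^2 + b'*x + c'   (y as a function of x)
a, b, c = np.polyfit(y, x, deg=2)
ap, bp, cp = np.polyfit(x, y, deg=2)

# Residuals of each directional fit; disagreement between the two
# directions signals that a further quadratic fit is needed.
res_x = x - np.polyval([a, b, c], y)
res_y = y - np.polyval([ap, bp, cp], x)
print(np.abs(res_x).mean(), np.abs(res_y).mean())
```

Note that the two directional fits are generally not inverses of each other, which is exactly why the two residual levels can differ.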
Each type of imaging trajectory in the figure can be fitted with an ellipsoid shape. Although the matching ellipse corresponding to each imaging trajectory is not the same, these trajectories are all obtained from the same star maps and are formed by the same satellite platform, so they should have the same regularity. In other words, they all express the motion trajectory of the satellite platform, but some scaling relationship does exist between them. Therefore, the elliptic coefficient parameters should satisfy a certain model related to the image position. The image position can be indicated by one set of the point coordinates x, y in Equation (1). The selection principle is that the selected set of star coordinates should best express the fitting coefficients (a, b, c, a′, b′, c′); that is, the selected point coordinates x̄, ȳ should best match the results of the first fitting. A quadratic model is used, and its mathematical expression is:
$$\begin{cases}
a = k_{1}\bar{x}^{2} + m_{1}\bar{y}^{2} + n_{1}\bar{x}\bar{y} + p_{1}\bar{x} + q_{1}\bar{y} + l_{1}\\
b = k_{2}\bar{x}^{2} + m_{2}\bar{y}^{2} + n_{2}\bar{x}\bar{y} + p_{2}\bar{x} + q_{2}\bar{y} + l_{2}\\
c = k_{3}\bar{x}^{2} + m_{3}\bar{y}^{2} + n_{3}\bar{x}\bar{y} + p_{3}\bar{x} + q_{3}\bar{y} + l_{3}\\
a' = k_{4}\bar{x}^{2} + m_{4}\bar{y}^{2} + n_{4}\bar{x}\bar{y} + p_{4}\bar{x} + q_{4}\bar{y} + l_{4}\\
b' = k_{5}\bar{x}^{2} + m_{5}\bar{y}^{2} + n_{5}\bar{x}\bar{y} + p_{5}\bar{x} + q_{5}\bar{y} + l_{5}\\
c' = k_{6}\bar{x}^{2} + m_{6}\bar{y}^{2} + n_{6}\bar{x}\bar{y} + p_{6}\bar{x} + q_{6}\bar{y} + l_{6}
\end{cases}\tag{2}$$
In this equation, x̄ and ȳ are the image coordinates that best match the results of the first fitting; a, b, c, a′, b′, and c′ are the fitting coefficients of the equations in Equation (1); and k_i, m_i, n_i, p_i, q_i, and l_i (i = 1, …, 6) are the fitting coefficients of the quadratic model. Equation (2) is called the ellipsoid model.
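Each coefficient surface in Equation (2) is linear in its unknowns (k, m, n, p, q, l), so it can be fitted by ordinary least squares. The sketch below fits the surface for coefficient a; the per-trajectory coordinates and coefficient values are hypothetical:

```python
import numpy as np

# Hypothetical per-trajectory inputs: representative image coordinates
# (x_bar, y_bar) and the first-fit coefficient a of each trajectory.
x_bar = np.array([120.0, 300.0, 480.0, 640.0, 800.0, 950.0])
y_bar = np.array([200.0, 260.0, 330.0, 410.0, 500.0, 580.0])
a_coef = np.array([7.7e-3, 2.1e-3, 1.5e-3, 2.9e-3, 2.0e-3, 1.7e-3])

# Design matrix of the quadratic surface in Equation (2):
#   a = k*x^2 + m*y^2 + n*x*y + p*x + q*y + l
A = np.column_stack([x_bar**2, y_bar**2, x_bar * y_bar,
                     x_bar, y_bar, np.ones_like(x_bar)])
k, m, n, p, q, l = np.linalg.lstsq(A, a_coef, rcond=None)[0]

# Six trajectories give a square system, so the fit is (numerically) exact;
# more trajectories would yield an over-determined least-squares solution.
residual = A @ np.array([k, m, n, p, q, l]) - a_coef
print(np.abs(residual).max())
```

The same call, repeated for b, c, a′, b′, and c′, produces all six surfaces of the ellipsoid model.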

2.3. Parameter Estimation and Coordinate Compensation

In the above process, the star map data acquired by the star sensor are expressed by a set of parameterized elliptic equations, and the actual imaging error has been smoothed and corrected in the fitting process of the superimposed image traces. Using the elliptic equations of the above six related parameters, we can realize the correction and compensation of the image observations before calibration and attitude calculation of the star sensor.
The principle on which the ellipsoid model is based shows that any trajectory in the image plane will correspond to a curve on the surface (Figure 3). Specifically, at a given parameter value, a plane parallel to the xOy plane intersects the surface in the corresponding conic; solutions can then be found by intersecting the conics of each group. In theory, the intersection point gives the corrected coordinates.
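As a sketch of this intersection step, substituting the y-direction parabola into the x-direction parabola reduces the problem to finding the real roots of a quartic; the coefficient values below are assumed for illustration:

```python
import numpy as np

# Illustrative (assumed) parabola coefficients recovered from the model
# for one star point:
a, b, c = 1.0e-3, 0.2, 207.5      # x = a*y^2 + b*y + c
ap, bp, cp = -1.0e-3, 1.2, 100.0  # y = ap*x^2 + bp*x + cp

# Substituting y(x) into the first parabola yields a quartic in x whose
# real roots are the candidate corrected x coordinates.
y_of_x = np.poly1d([ap, bp, cp])
quartic = a * y_of_x**2 + b * y_of_x + np.poly1d([c]) - np.poly1d([1.0, 0.0])

roots = quartic.roots
real_x = roots[np.abs(roots.imag) < 1e-6].real
candidates = np.array([[xr, y_of_x(xr)] for xr in real_x])
print(candidates)
```

With the values chosen here, the point (500, 450) satisfies both parabolas and therefore appears among the candidates; the remaining real roots are the ambiguous solutions that must be screened.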
In practice, however, the upper left corner of Figure 4 shows a superimposed imaging trajectory of multi-frame data. The black figure results from superimposing consecutive multi-frame star maps, which yields many relative motion trajectories, shown by the white curves. Because the stars are stationary, each trajectory corresponds to the regular orbit of the spacecraft. The ellipse in this figure illustrates the orbit and the motion of the satellite platform; the circle at one focus represents the Earth. According to this analysis, the traces after multi-frame data superposition embody the orbit equation of the satellite platform. Although different stars have different imaging positions, their traces over a specific period should all conform to an arc of the elliptical orbit, represented schematically by the four colored arcs, and the motion trajectory we obtain may correspond to any one of them.
This explains why the results have ambiguous solutions. It is therefore theoretically reasonable and effective to use an elliptic equation to fit the imaging error along the image path. Because the fit is a conic, the intersection will yield ambiguous solutions that must be filtered.
We now describe the principle involved in filtering the ambiguous intersection solutions. As shown in Figure 5a,b, point A is the original star point; one coordinate of point C is closer to point A, but the other coordinate is farther away. Therefore, point B, the closer point, should be selected as the candidate point of the modified solution. As shown in Figure 5c, four ambiguous solutions, points B, C, D, and E, are possible. Therefore, it is necessary to consider the two-way coordinates comprehensively and select the point closest to point A as the candidate point of the modified solution.
Figure 6 shows the steps for filtering the ambiguous solutions. Here, we set a threshold ε to filter the ambiguous solutions; any solution within ε of the original point is called a candidate point of the modified solution. In the simulation experiment, ε was set to 2 pixels, while in the real data experiment, it was set to 15 pixels.
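The ε-screening step can be sketched as follows; the point values are hypothetical:

```python
import numpy as np

def filter_candidates(A, candidates, eps):
    """Keep intersection solutions within eps pixels of the original star
    point A (both coordinates considered jointly) and return the closest
    one, or None if no candidate qualifies."""
    A = np.asarray(A, dtype=float)
    candidates = np.asarray(candidates, dtype=float)
    d = np.hypot(candidates[:, 0] - A[0], candidates[:, 1] - A[1])
    keep = d <= eps
    if not keep.any():
        return None
    return candidates[keep][np.argmin(d[keep])]

# Hypothetical original point A and four ambiguous solutions B, C, D, E:
A = (512.0, 384.0)
cands = [(513.2, 384.9), (530.0, 380.0), (512.5, 401.0), (498.0, 399.0)]
print(filter_candidates(A, cands, eps=2.0))  # only B survives the screen
```

Using the joint (Euclidean) distance rather than either coordinate alone is what resolves cases like point C above, which is closer in one coordinate but farther in the other.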

3. Experiment and Results Analysis

3.1. Experiment Data

In this paper, two sets of experiments were designed: one using simulated data and one using real data.
The design idea of the simulation experiment is as follows: first, we simulated a set of original star maps using MATLAB2014 and added Gaussian white noise. We then carried out rough extraction of the star points. Next, the ellipsoid model was applied to the dataset to correct the coordinates of the star point. Here, we assumed that the simulated star point coordinates before the noise was added were the real coordinates; the corrected coordinates were compared with the real coordinates.
The simulation conditions were as follows. The computer used had a 1.70 GHz CPU and 4.00 GB of memory, and ran Windows 8.1. We input an initial set of satellite attitudes and then simulated star maps using the principles of geometric imaging. In this process, we assumed that the focal length of the camera was f = 43.3 mm, the pixel size was 0.015 mm, the signal-to-noise ratio (SNR) was 5 dB, the limiting stellar magnitude was set to 6, and the photo size was set to 1024 × 1024.
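The geometric imaging step under these parameters can be sketched with a simple pinhole projection; the star direction vectors are hypothetical, and the 0.5-pixel noise level is illustrative rather than the stated 5 dB SNR:

```python
import numpy as np

rng = np.random.default_rng(0)

# Camera parameters from the simulation setup above.
f_mm, pixel_mm, size = 43.3, 0.015, 1024
f_px = f_mm / pixel_mm  # focal length in pixels

# Hypothetical unit star direction vectors in the camera frame (z along
# the boresight); a pinhole model projects them onto the detector.
dirs = np.array([[0.02, -0.01, 1.0],
                 [-0.05, 0.03, 1.0],
                 [0.00, 0.06, 1.0]])
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)

u = f_px * dirs[:, 0] / dirs[:, 2] + size / 2
v = f_px * dirs[:, 1] / dirs[:, 2] + size / 2
true_pts = np.column_stack([u, v])

# Gaussian white noise stands in for detector noise before the rough
# star point extraction step (the 0.5 px sigma is illustrative).
noisy_pts = true_pts + rng.normal(scale=0.5, size=true_pts.shape)
print(true_pts.round(2))
```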
In the second set of experiments, the ellipsoid model was applied to the real data to correct the star point errors, and the satellite attitude was then determined. In this paper, the satellite attitude was represented by quaternions and compared with the quaternions provided by the onboard gyroscope.
The experimental data used in this experiment are all real data obtained by the star sensor aboard the carrier satellite ZY-3-02 of China. Two sets of data were used: the 0702 group of star maps was obtained on 2 July 2016, and the 0712 group was obtained on 12 July 2016. The size of each star map was 1024 × 1024 pixels. The 0702 group was used as raw data for model fitting, and the 0712 group was used to test the model in the comparison test. First, the 0702 data were used to test the conformity of the model to its own data to verify the correctness of the model. Then, the 0712 data were used to test the applicability and effectiveness of the model.
Regarding the correction test of this model: for the simulation experiment, the MATLAB star point coordinates before the noise was added can be taken as exact, so the accuracy of the correction can be verified directly. For the real data, the true coordinates were unknown, so an external precision evaluation could not be performed; instead, the mean square error of the difference between the coordinates before and after correction was used to evaluate the degree of data coincidence.
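The internal-precision evaluation can be sketched as follows; the coordinates are hypothetical:

```python
import numpy as np

# Hypothetical before/after star point coordinates for three points.
before = np.array([[101.2, 204.8], [355.1, 410.3], [612.7, 588.9]])
after = np.array([[100.6, 205.1], [354.4, 409.8], [611.9, 589.5]])

# With no ground truth, the mean square error of the before/after
# differences serves as an internal measure of data coincidence.
diff = after - before
mse = np.mean(np.sum(diff**2, axis=1))
print(round(mse, 4))
```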

3.2. Experiment Results and Analysis

3.2.1. Experiment Results of Simulation

As described above, the present study began by performing two-way fitting in the x and y directions of the frame superposition results of the simulation data; we selected 100 point coordinates on every trajectory and fit them. The resulting fitting equations are shown in Equations (3) and (4).
$$\begin{cases}
x_{1} = 7.68287111\times10^{-3}\,y_{1}^{2} + 1.69300874\,y_{1} + 661.87217940\\
x_{2} = 2.09244654\times10^{-3}\,y_{2}^{2} + 0.93951053\,y_{2} + 271.60179426\\
x_{3} = 1.51004741\times10^{-3}\,y_{3}^{2} + 0.83393057\,y_{3} + 127.79101785\\
x_{4} = 2.88847604\times10^{-3}\,y_{4}^{2} + 0.67623070\,y_{4} + 290.02821260\\
x_{5} = 1.97606378\times10^{-3}\,y_{5}^{2} + 0.71818976\,y_{5} + 168.06019984\\
x_{6} = 1.73496887\times10^{-3}\,y_{6}^{2} + 0.41826417\,y_{6} + 3.5860763886
\end{cases}\tag{3}$$
$$\begin{cases}
y_{1} = 7.10384860\times10^{-4}\,x_{1}^{2} + 1.49411085\,x_{1} - 677.38150299\\
y_{2} = 7.71334248\times10^{-4}\,x_{2}^{2} + 1.33870777\,x_{2} - 302.26592681\\
y_{3} = 8.05331580\times10^{-4}\,x_{3}^{2} + 1.24388547\,x_{3} - 140.42369637\\
y_{4} = 6.81424112\times10^{-4}\,x_{4}^{2} + 1.27494096\,x_{4} - 290.11935213\\
y_{5} = 7.35369538\times10^{-5}\,x_{5}^{2} + 1.23711026\,x_{5} - 170.93902255\\
y_{6} = 6.46955865\times10^{-4}\,x_{6}^{2} + 1.05814885\,x_{6} + 50.97186378
\end{cases}\tag{4}$$
The results of the bidirectional fitting are shown using MATLAB software. Figure 7 shows that the fitting results from the x and y directions are, in general, not very consistent. Therefore, starting from a single trajectory, Figure 8 shows the bidirectional fitting curves for one trajectory. A difference exists in the two-way error; this difference will affect the determination of the error and the correction of the coordinates. After calculation, the average fitting error in the x direction is 1.86 pixels, while that in the y direction is 0.483 pixels. Thus, if we correct the error from only one direction, x or y, the result is not accurate. Therefore, we propose a method that uses an ellipsoid model and intersection to correct the error. Specifically, based on the two-way fitting curves, the coefficients are quadratically fitted. In theory, the required coordinate correction solution should satisfy the quadratic fitting equation of each coefficient; however, considering the actual situation and the existence of fitting error, the intersections of the quadratic fitting curves were regarded as the required coordinate correction solutions. Because the result of each intersection may be a modified solution, each result was recorded as a candidate point of the modified solution, and the average value was used as the final correction solution.
Then, we selected the most suitable point coordinate for every trajectory, giving six point coordinates with which to fit the coefficients. Equations (5) and (6) give the results of the quadratic fitting:
$$\begin{cases}
a = 1.6608\times10^{-7}\,\bar{x}^{2} + 2.7379\times10^{-7}\,\bar{y}^{2} + 4.5693\times10^{-7}\,\bar{x}\bar{y} - 2.1240\times10^{-4}\,\bar{x} - 2.8122\times10^{-4}\,\bar{y} + 6.8125\times10^{-2}\\
b = 4.9939\times10^{-5}\,\bar{x}^{2} - 1.1482\times10^{-4}\,\bar{y}^{2} - 1.8560\times10^{-4}\,\bar{x}\bar{y} + 7.4011\times10^{-2}\,\bar{x} + 1.1331\times10^{-1}\,\bar{y} - 23.778\\
c = 3.2112\times10^{-3}\,\bar{x}^{2} + 7.8633\times10^{-3}\,\bar{y}^{2} + 1.0791\times10^{-2}\,\bar{x}\bar{y} - 3.7043\,\bar{x} - 8.1798\,\bar{y} + 1579.1764
\end{cases}\tag{5}$$
$$\begin{cases}
a' = 1.3153\times10^{-9}\,\bar{x}^{2} + 4.6631\times10^{-9}\,\bar{y}^{2} + 5.4640\times10^{-9}\,\bar{x}\bar{y} - 1.5831\times10^{-6}\,\bar{x} - 2.9625\times10^{-6}\,\bar{y} - 3.5370\\
b' = 2.7497\times10^{-6}\,\bar{x}^{2} - 4.3032\times10^{-6}\,\bar{y}^{2} - 7.6637\times10^{-6}\,\bar{x}\bar{y} + 3.9498\times10^{-3}\,\bar{x} + 3.3121\times10^{-3}\,\bar{y} + 0.2368\\
c' = 1.2293\times10^{-3}\,\bar{x}^{2} + 1.1384\times10^{-3}\,\bar{y}^{2} + 3.1252\times10^{-3}\,\bar{x}\bar{y} - 2.4382\,\bar{x} - 0.2757\,\bar{y} + 354.3663
\end{cases}\tag{6}$$
Table 1 compares the coordinates before and after the correction of the simulation data, and the amount of time needed to correct each coordinate. Table 2 compares the errors before and after the correction of the simulation data. From the tables, we can calculate the mean square error of the errors: before correction it was 0.4709, and after correction it was 0.0479, an accuracy improvement of 89.8%. In addition, the average time required for a star point coordinate correction is 0.259 s. Therefore, the model can correct the star point errors effectively and quickly. From the experimental results, the time required by the model can fully meet the needs of typical missions.
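The 89.8% figure follows directly from the two mean square errors as a relative reduction:

```python
# Relative accuracy improvement implied by the reported mean square errors.
mse_before, mse_after = 0.4709, 0.0479
improvement = (mse_before - mse_after) / mse_before
print(f"{improvement:.1%}")  # → 89.8%
```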

3.2.2. Experiment Results of Real Data

As described above, this study began by performing two-way fitting in the x and y directions of the frame superposition results; we selected 120 point coordinates on every trajectory and fit them. The obtained fitting equations are shown in Equations (7) and (8).
$$\begin{cases}
x_{1} = 6.06412528\times10^{-4}\,y_{1}^{2} + 1.66896674\times10^{-1}\,y_{1} + 796.35949068\\
x_{2} = 5.603967868\times10^{-4}\,y_{2}^{2} + 1.51297560\times10^{-1}\,y_{2} + 777.4138421\\
x_{3} = 6.94229250\times10^{-4}\,y_{3}^{2} + 3.87739417\times10^{-2}\,y_{3} + 782.26733260\\
\quad\vdots\\
x_{26} = 3.75154707\times10^{-4}\,y_{26}^{2} - 1.52832756\times10^{-1}\,y_{26} - 138.7228401
\end{cases}\tag{7}$$
For the detailed equations, see Appendix A, Equations (A1)–(A26).
$$\begin{cases}
y_{1} = 8.45008923\times10^{-3}\,x_{1}^{2} + 16.94088086\,x_{1} - 8104.997895661\\
y_{2} = 6.39386799\times10^{-3}\,x_{2}^{2} + 13.43513064\,x_{2} - 6553.51591621\\
y_{3} = 4.54951256\times10^{-3}\,x_{3}^{2} + 10.01200305\,x_{3} - 4939.881501419\\
\quad\vdots\\
y_{26} = 2.23761618\times10^{-3}\,x_{26}^{2} + 2.03570683\,x_{26} + 845.427003311
\end{cases}\tag{8}$$
For the detailed equations, see Appendix A, Equations (A27)–(A52).
Then, we selected the most suitable point coordinate for every trajectory, giving twenty-six point coordinates with which to fit the coefficients. Equations (9) and (10) give the results of the quadratic fitting:
$$\begin{cases}
a = 3.055\times10^{-10}\,\bar{x}^{2} - 7.183\times10^{-11}\,\bar{y}^{2} + 2.223\times10^{-10}\,\bar{x}\bar{y} + 1.069\times10^{-7}\,\bar{x} + 1.097\times10^{-7}\,\bar{y} + 2.965\times10^{-4}\\
b = 5.749\times10^{-9}\,\bar{x}^{2} - 2.552\times10^{-7}\,\bar{y}^{2} - 7.113\times10^{-7}\,\bar{x}\bar{y} + 1.209\times10^{-4}\,\bar{x} + 1.588\times10^{-5}\,\bar{y} + 9.857\times10^{-2}\\
c = 4.636\times10^{-5}\,\bar{x}^{2} + 3.324\times10^{-5}\,\bar{y}^{2} + 4.137\times10^{-5}\,\bar{x}\bar{y} + 1.023\,\bar{x} - 0.247\,\bar{y} + 9.633
\end{cases}\tag{9}$$
$$\begin{cases}
a' = 5.034\times10^{-9}\,\bar{x}^{2} - 1.285\times10^{-8}\,\bar{y}^{2} - 5.001\times10^{-11}\,\bar{x}\bar{y} + 4.153\times10^{-6}\,\bar{x} + 1.839\times10^{-5}\,\bar{y} - 8.775\times10^{-3}\\
b' = 8.300\times10^{-6}\,\bar{x}^{2} + 1.222\times10^{-5}\,\bar{y}^{2} - 1.277\times10^{-5}\,\bar{x}\bar{y} + 4.970\times10^{-3}\,\bar{x} - 1.503\times10^{-2}\,\bar{y} + 6.963\\
c' = 5.794\times10^{-3}\,\bar{x}^{2} - 3.412\times10^{-3}\,\bar{y}^{2} + 8.521\times10^{-3}\,\bar{x}\bar{y} - 4.190\,\bar{x} + 3.772\,\bar{y} - 469.338
\end{cases}\tag{10}$$
For the detailed equations, see Appendix A, Equations (A53) and (A54).
Table 3 and Table 4 compare the coordinates before and after the correction of the 0702 and 0712 groups, respectively, using the ellipsoid model. Table 3 and Table 4 represent two different experimental datasets. The entries in the table respectively refer to the horizontal pixel coordinate of points before the correction, the vertical coordinate before the correction, the horizontal pixel coordinate of points after the correction, the vertical coordinate after the correction, the difference between the horizontal pixel coordinate of points before and after the correction, the difference between the vertical coordinate of points before and after the correction, and the time required to correct each coordinate.
Table 3 and Table 4 show that most of the differences for the 0702 group correction are at the pixel level; meanwhile, for the 0712 group, the differences involve more than a dozen pixels. After calculation, the average time to correct a star point coordinate of the 0702 group is 0.272 s, while the 0712 group required 0.284 s. The correction effect of the ellipsoid model for the 0702 group was thus significantly better than that for the 0712 group.
Table 5 gives the mean square error of the difference for the two groups of experiments. Although the model conforms less closely to the 0712 group data, the correction effect still improved. We can conclude that the model applies effectively not only to its own data but also to other data. This shows that the model has good correction ability, applicability, and effectiveness.
In addition, we used the coordinates before and after the correction to calculate the attitude. The attitude directly obtained by gyroscope was used as the original reference attitude. The quaternion was used to measure the accuracy comparison before and after the correction. Table 6 gives a comparison of the quaternion calculated before and after the correction with the quaternion provided by the gyroscope. Table 7 shows the mean of the errors of the four components before and after the correction, and the mean of the combined error. From the table, we can see that the error obtained after correction is reduced by 52.3%, demonstrating the validity of the method.
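The component-wise quaternion comparison can be sketched as follows; the quaternion values are hypothetical, and the sign alignment handles the fact that q and −q represent the same rotation:

```python
import numpy as np

def quat_error(q, q_ref):
    """Component-wise absolute error between a computed attitude quaternion
    and the gyroscope reference, after sign alignment (q and -q represent
    the same rotation)."""
    q = np.asarray(q, dtype=float)
    q_ref = np.asarray(q_ref, dtype=float)
    if np.dot(q, q_ref) < 0.0:
        q = -q
    return np.abs(q - q_ref)

# Hypothetical unit quaternions for illustration only (scalar-last order).
q_gyro = np.array([0.1, 0.2, 0.3, 0.927362])
q_before = np.array([0.104, 0.197, 0.302, 0.925])
q_after = np.array([0.101, 0.199, 0.3005, 0.9268])

err_before = quat_error(q_before, q_gyro)
err_after = quat_error(q_after, q_gyro)
reduction = 1.0 - err_after.mean() / err_before.mean()
print(err_before.mean(), err_after.mean(), reduction)
```

Averaging such per-frame component errors over the data set yields the kind of mean error reduction reported in Table 7.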

4. Discussion

This paper describes a model of the pixel coordinates of the star point centroid during attitude measurement by a star sensor. Although the pixel coordinates of the star can be given more accurately, some problems still exist. First, because a motion streak has a certain width in the star map after frame superposition, when we performed the quadratic curve fitting, we had to select random points as uniformly as possible within the streak area before the center curve was fitted. Thus, the accuracy of the model was related to the measured points. In addition, when the second fitting was performed, the number of selected points was small, resulting in low model conformity. More data sources need to be added to solve this problem. Another weakness is that the model was established on the assumption of a stable satellite attitude; if the spacecraft is accelerating or decelerating, its applicability is limited. We will address this shortcoming in our future work.

5. Conclusions

Based on simulation data and real imaging data from a star sensor on ZY-3-02, this paper proposes a star point centroid determination model for the whole star map based on the trajectory of motion and the correlation between stars in the same star domain. In the simulation experiments, the mean square error of the errors before correction was 0.4709, and that after correction was 0.0479, an improvement in accuracy of 89.8%. In addition, the average time required for a star point coordinate correction was 0.259 s. Therefore, the model can correct the star point error effectively, and the time required by the model can fully meet the needs of typical missions. In the real data experiments, the error obtained after correction was reduced by 52.3%, demonstrating the validity of this method for star point coordinate correction. Moreover, the 0702 group and 0712 group results show that the model not only applies effectively to its own data but can also be applied to other data. This shows that the model has strong correction ability, applicability, and effectiveness.
As the number of current data points is small, the correction effect is not yet ideal. Nevertheless, we have proposed a new method for star point centroid correction, proved its applicability and effectiveness, and provided strongly correlated star point centroids and image control point coordinates for subsequent star map recognition. Our work should help with star map recognition and determination of the initial attitude of the satellite.

Author Contributions

B.W., W.Z., and Q.S. conceived and designed the study. B.W. and Y.G. performed the experiments and analyzed the corresponding results. W.Z. and Q.S. wrote the paper jointly.

Funding

This work was supported in part by the National Natural Science Foundation of China under Project No. 41701531. It was also supported in part by the Natural Science Foundation of Jiangsu Province under Project No. BK20170782. We would also like to convey our heartfelt gratitude to the reviewers and members of the editorial team for their comments and contributions.

Acknowledgments

The authors wish to express their gratitude to Zhang, L. and Zhang, H.P. for their careful checking of the English style and for providing a computer for the study.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

As described above, the present study began by performing bidirectional fitting in the x and y directions of the frame superposition results of the 0702 group; the obtained fitting equations are as follows.
Equations (A1)–(A26):
x1 = 6.064125277003e-4 y1² + 1.6689667394e-1 y1 + 796.3594906760
x2 = 5.603967862245e-4 y2² + 1.5129755955e-1 y2 + 777.4138421293
x3 = 6.942292495556e-4 y3² + 3.8773941718e-2 y3 + 782.2673326060
x4 = 6.9866696267418e-4 y4² + 7.3468440479e-2 y4 + 636.9685887266
x5 = 6.92550589668876e-4 y5² − 1.108955567e-1 y5 + 675.5242865971
x6 = 5.32516700176634e-4 y6² + 9.79781524e-2 y6 + 598.1488366720
x7 = 5.26405122100068e-4 y7² + 7.032782218e-2 y7 + 567.6751982558
x8 = 4.94957822979376e-4 y8² + 7.706918615e-2 y8 + 522.9766827270
x9 = 4.54097927642127e-4 y9² + 9.896728706e-2 y9 + 460.0217822490
x10 = 5.15730007995388e-4 y10² + 2.69593363783739e-3 y10 + 486.5398225760
x11 = 6.53122667389336e-4 y11² − 2.50385151027044e-1 y11 + 572.6263772800
x12 = 5.85490588013185e-4 y12² − 1.42415989149641e-1 y12 + 505.8106528400
x13 = 4.74126270204956e-4 y13² + 3.83482088667469e-2 y13 + 392.2837162508
x14 = 4.71203052395599e-4 y14² − 3.31811303106649e-2 y14 + 354.4012201753
x15 = 3.79915059769796e-4 y15² + 8.98617759975948e-2 y15 + 244.3665492481
x16 = 3.24462995469601e-4 y16² + 1.52415000053149e-1 y16 + 218.4113406868
x17 = 5.55508684171826e-4 y17² − 2.41104822702083e-1 y17 + 358.3023588778
x18 = 4.64037170393650e-4 y18² − 1.48369917015442e-1 y18 + 208.5281832245
x19 = 3.33949002276114e-4 y19² + 9.38688532562692e-2 y19 + 95.7945289242
x20 = 3.15382117297735e-4 y20² + 1.13261214115208e-1 y20 + 56.4667047992
x21 = 3.49587932089585e-4 y21² + 3.76491335780276e-3 y21 + 8.7743658633
x22 = 3.41376891070825e-4 y22² + 2.62880465034960e-2 y22 − 56.0527664244
x23 = 3.47137228707709e-4 y23² − 2.81198687797805e-2 y23 − 49.2361978412
x24 = 3.44292642962884e-4 y24² − 1.95088525902766e-2 y24 − 149.6236465132
x25 = 8.19803700326146e-4 y25² − 2.48594132359937e-1 y25 + 794.5774447065
x26 = 3.75154707466548e-4 y26² − 1.52832756066449e-1 y26 − 138.7228401308
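Each of Equations (A1)–(A26) is a second-order least-squares fit of x against y along one star trajectory. As a minimal sketch of how such a fit is obtained (this is not the authors' code; the sample points below are hypothetical, with coefficients chosen to resemble Equation (A1)), a single call to numpy.polyfit suffices:

```python
import numpy as np

# Hypothetical centroid samples along one smeared star trajectory
# (pixels); real samples would come from the frame-superposition result.
y_s = np.array([0.0, 100.0, 200.0, 300.0, 400.0])
x_true = 6.0e-4 * y_s**2 + 0.167 * y_s + 796.36   # shape resembling Eq. (A1)
rng = np.random.default_rng(0)
x_s = x_true + rng.normal(0.0, 0.05, y_s.size)    # simulated centroiding noise

# One direction of the bidirectional fitting: least-squares parabola
# x = a*y^2 + b*y + c; np.polyfit returns the coefficients [a, b, c].
a, b, c = np.polyfit(y_s, x_s, 2)
```

The y-direction fits of Equations (A27)–(A52) are the same call with the roles of the two coordinate arrays exchanged.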
Equations (A27)–(A52):
y1 = −8.4500892281e-3 x1² + 16.94088086027 x1 − 8104.997895661
y2 = −6.393867991146e-3 x2² + 13.43513064487 x2 − 6553.51591621
y3 = −4.549512564531e-3 x3² + 10.01200304615 x3 − 4939.881501419
y4 = −4.518244577148e-3 x4² + 8.952093236062 x4 − 3787.984622645
y5 = −2.402507498491e-3 x5² + 5.610200779061 x5 − 2443.306507477
y6 = −6.010233278600e-3 x6² + 10.81873853981 x6 − 4274.291994271
y7 = −3.614497708002e-3 x7² + 7.073360866266 x7 − 2742.174871950
y8 = −4.237069994487e-3 x8² + 7.706747363717 x8 − 2785.062200705
y9 = −6.170966371196e-3 x9² + 9.576347661311 x9 − 3056.347455449
y10 = −1.116740793190e-3 x10² + 3.066114266387 x10 − 960.5426709476
y11 = −2.010675564919e-3 x11² + 4.401470832127 x11 − 1422.943998352
y12 = −1.906314455616e-3 x12² + 4.171902190955 x12 − 1275.861001848
y13 = −3.181035477800e-3 x13² + 5.601531485740 x13 − 1586.192408171
y14 = −2.524957329257e-3 x14² + 4.507537314563 x14 − 1033.583612067
y15 = −4.886967737468e-3 x15² + 6.244881334510 x15 − 1175.412397577
y16 = −1.170515468933e-2 x16² + 9.873356558673 x16 − 1585.748185160
y17 = −1.669077240334e-3 x17² + 3.310608186704 x17 − 472.8322687479
y18 = −1.796822656313e-3 x18² + 3.093110472883 x18 − 128.5098829008
y19 = −4.613783247914e-3 x19² + 4.820927360987 x19 − 358.8531962554
y20 = −5.789077643825e-3 x20² + 4.845118755650 x20 − 210.6715800810
y21 = −2.482603076529e-3 x21² + 3.016689538945 x21 + 229.2187895223
y22 = −2.581428566220e-3 x22² + 2.762645788154 x22 + 383.7869915917
y23 = −2.060238721019e-3 x23² + 2.530532601516 x23 + 465.5466135950
y24 = −1.850361178625e-3 x24² + 2.093337244263 x24 + 689.6723894220
y25 = −3.355245643787e-3 x25² + 7.611874637896 x25 − 3591.097656766
y26 = −2.237616175281e-3 x26² + 2.035706833294 x26 + 845.4270033110
Equation (A53):
a = 3.055e-10 x̄² − 7.183e-11 ȳ² + 2.223e-10 x̄ȳ + 1.069e-7 x̄ + 1.097e-7 ȳ + 2.965e-4
b = 5.749e-9 x̄² − 2.552e-7 ȳ² − 7.113e-7 x̄ȳ + 1.209e-4 x̄ + 1.588e-5 ȳ + 9.857e-2
c = 4.636e-5 x̄² + 3.324e-5 ȳ² + 4.137e-5 x̄ȳ + 1.023 x̄ − 0.247 ȳ + 9.633
Equation (A54):
a = 5.034e-9 x̄² − 1.285e-8 ȳ² − 5.001e-11 x̄ȳ + 4.153e-6 x̄ + 1.839e-5 ȳ − 8.775e-3
b = 8.300e-6 x̄² + 1.222e-5 ȳ² − 1.277e-5 x̄ȳ + 4.970e-3 x̄ − 1.503e-2 ȳ + 6.963
c = 5.794e-3 x̄² − 3.412e-3 ȳ² + 8.521e-3 x̄ȳ − 4.190 x̄ + 3.772 ȳ − 469.338
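A corrected star-point coordinate follows from intersecting the two directional fits of one trajectory. The sketch below is an illustrative simplification, not the authors' implementation: substituting y = g(x) into x = f(y) yields a quartic in x, and the multi-parameter screening of ambiguous solutions (Figure 6) is reduced here to choosing the candidate nearest the original star point. The function name and the nearest-candidate rule are assumptions for illustration.

```python
import numpy as np

def intersect_bidirectional_fits(fx, gy, original):
    """Intersect x = f(y) with y = g(x), both quadratic fits.

    fx, gy: coefficient triples (a, b, c), highest power first.
    original: the uncorrected star-point coordinate (x, y).
    Returns the real intersection nearest to `original` (a simplified
    stand-in for the paper's multi-parameter screening).
    """
    g = np.poly1d(gy)                       # y as a polynomial in x
    a, b, c = fx
    # Substitute y = g(x) into x = a*y^2 + b*y + c and move x across:
    quartic = a * (g ** 2) + b * g + np.poly1d([c]) - np.poly1d([1.0, 0.0])
    candidates = [(r.real, g(r.real))
                  for r in quartic.roots if abs(r.imag) < 1e-6]
    # Resolve the ambiguity: keep the candidate closest to the original point.
    return min(candidates,
               key=lambda p: (p[0] - original[0])**2 + (p[1] - original[1])**2)
```

For example, with f(y) = y² and g(x) = x − 2 the real candidates are (1, −1) and (4, 2); an original point at (3.8, 2.1) selects (4.0, 2.0).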

Figure 1. The flow chart of the ellipsoid model method.
Figure 2. Schematic diagram of the ellipsoid model.
Figure 3. Principle of the ellipsoid model.
Figure 4. Reasons for ambiguous solutions.
Figure 5. Principle of filtering ambiguous solutions: (a) coordinate x of ambiguous solutions closer to original star point; (b) coordinate y of ambiguous solutions closer to original star point; (c) all ambiguous solutions almost close to original star point.
Figure 6. Flow chart of filtering ambiguous solutions.
Figure 7. The results of the bidirectional fitting.
Figure 8. The results of the bidirectional fitting curve for one trajectory.
Table 1. The coordinate comparison table before and after the correction of simulation data.

Real coordinate x0 (pixel) | Real coordinate y0 (pixel) | Coordinate before the correction x1 (pixel) | Coordinate before the correction y1 (pixel) | Coordinate after the correction x2 (pixel) | Coordinate after the correction y2 (pixel) | Time of correction (s)
769.5230 | 51.6882 | 769.4922 | 51.6136 | 769.9165 | 51.7856 | 0.267
428.8964 | 129.9844 | 429.6600 | 128.5301 | 428.4559 | 129.7142 | 0.253
756.1361 | 143.6791 | 756.4673 | 143.0625 | 755.8765 | 143.6954 | 0.255
324.2521 | 178.2613 | 325.0366 | 178.8114 | 323.7439 | 178.6738 | 0.254
506.1756 | 180.5237 | 507.7108 | 180.8036 | 505.8419 | 180.8086 | 0.260
344.5397 | 167.9456 | 342.6581 | 168.2032 | 343.6593 | 168.0500 | 0.263
Table 2. The errors of the six star points before and after the correction of the simulation data.

Star point | 1 | 2 | 3 | 4 | 5 | 6
Errors before the correction (pixel) | 0.0808 | 1.6426 | 0.6999 | 0.9581 | 1.5605 | 1.8992
Errors after the correction (pixel) | 0.4054 | 0.5166 | 0.2601 | 0.6546 | 0.4388 | 0.8866
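The entries of Table 2 can be reproduced from Table 1 as the Euclidean distance of each extracted coordinate from the true coordinate. The sketch below assumes that this is the error measure used (the recomputed values match the tabulated ones to within rounding):

```python
import math

# Table 1 rows: (x0, y0, x1, y1, x2, y2) = true coordinate, coordinate
# before the correction, and coordinate after the correction, in pixels.
rows = [
    (769.5230,  51.6882, 769.4922,  51.6136, 769.9165,  51.7856),
    (428.8964, 129.9844, 429.6600, 128.5301, 428.4559, 129.7142),
    (756.1361, 143.6791, 756.4673, 143.0625, 755.8765, 143.6954),
    (324.2521, 178.2613, 325.0366, 178.8114, 323.7439, 178.6738),
    (506.1756, 180.5237, 507.7108, 180.8036, 505.8419, 180.8086),
    (344.5397, 167.9456, 342.6581, 168.2032, 343.6593, 168.0500),
]

# Per-point position error: Euclidean distance to the true coordinate.
err_before = [math.hypot(x1 - x0, y1 - y0) for x0, y0, x1, y1, _, _ in rows]
err_after  = [math.hypot(x2 - x0, y2 - y0) for x0, y0, _, _, x2, y2 in rows]
```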
Table 3. The coordinate comparison table before and after the correction of the 0702 group.

Number | Coordinate before the correction x0 (pixel) | Coordinate before the correction y0 (pixel) | Coordinate after the correction x1 (pixel) | Coordinate after the correction y1 (pixel) | D-value Δx (pixel) | D-value Δy (pixel) | Time of correction (s)
1 | 796 | 86 | 796.9949 | 85.18099 | −0.9949 | 0.8190 | 0.285
2 | 697 | 260 | 697.7184 | 258.9675 | −0.7184 | 1.0325 | 0.288
3 | 812 | 530 | 815.7728 | 531.539 | −3.7728 | −1.539 | 0.269
4 | 676 | 301 | 676.0389 | 300.9522 | −0.0389 | 0.0478 | 0.263
5 | 651 | 336 | 650.5735 | 336.4726 | 0.4265 | −0.4726 | 0.243
6 | 550 | 348 | 549.5405 | 348.5098 | 0.4595 | −0.5098 | 0.264
7 | 676 | 603 | 676.9681 | 603.761 | −0.9681 | −0.761 | 0.274
8 | 672 | 626 | 674.1938 | 624.0945 | −2.1938 | 1.9055 | 0.272
9 | 658 | 646 | 658.367 | 646.5949 | −0.367 | −0.5949 | 0.271
10 | 442 | 276 | 443.7777 | 274.1592 | −1.7777 | 1.8408 | 0.268
11 | 621 | 790 | 618.8719 | 787.6551 | 2.1281 | 2.3449 | 0.278
12 | 271 | 164 | 270.568 | 165.9817 | 0.432 | −1.9817 | 0.273
13 | 250 | 153 | 250.4332 | 153.4049 | −0.4332 | −0.4049 | 0.266
14 | 116 | 147 | 114.5381 | 148.6052 | 1.4619 | −1.6052 | 0.284
15 | 82 | 158 | 81.45622 | 156.7154 | 0.5438 | 1.2846 | 0.276
Table 4. The coordinate comparison table before and after the correction of the 0712 group.

Number | Coordinate before the correction x0 (pixel) | Coordinate before the correction y0 (pixel) | Coordinate after the correction x1 (pixel) | Coordinate after the correction y1 (pixel) | D-value Δx (pixel) | D-value Δy (pixel) | Time of correction (s)
1 | 701 | 243 | 703.9327 | 239.6972 | 2.9327 | −3.3028 | 0.287
2 | 699 | 247 | 700.9408 | 244.4221 | 1.9408 | −2.5779 | 0.263
3 | 706 | 251 | 704.8268 | 251.6018 | −1.1732 | 0.6018 | 0.284
4 | 703 | 255 | 702.3503 | 255.2876 | −0.6497 | 0.2876 | 0.298
5 | 707 | 258 | 703.726 | 260.3408 | −3.274 | 2.3408 | 0.276
6 | 705 | 262 | 702.381 | 264.8682 | −2.619 | 2.8682 | 0.283
7 | 703 | 250 | 703.6072 | 249.3701 | 0.6072 | −0.6299 | 0.279
8 | 702 | 266 | 699.179 | 267.7982 | −2.821 | 1.7982 | 0.294
9 | 709 | 269 | 702.1712 | 273.5862 | −6.8288 | 4.5862 | 0.278
10 | 704 | 273 | 699.3447 | 277.8297 | −4.6553 | 4.8297 | 0.298
11 | 710 | 276 | 701.3928 | 282.0021 | −8.6072 | 6.0021 | 0.270
12 | 706 | 279 | 697.5668 | 285.7076 | −8.4332 | 6.7076 | 0.282
13 | 711 | 282 | 706.9911 | 293.0142 | −4.0089 | 11.0142 | 0.278
14 | 709 | 285 | 701.2951 | 293.4709 | −7.7049 | 8.4709 | 0.272
15 | 707 | 290 | 697.0002 | 299.0767 | −9.9998 | 9.0767 | 0.297
16 | 712 | 292 | 700.0789 | 303.2288 | −11.9211 | 11.2288 | 0.294
17 | 709 | 299 | 698.7398 | 308.9167 | −10.2602 | 9.9167 | 0.289
18 | 708 | 295 | 703.9664 | 306.281 | −4.0336 | 11.281 | 0.296
19 | 711 | 303 | 700.7709 | 315.0803 | −10.2291 | 12.0803 | 0.277
20 | 716 | 306 | 698.7524 | 321.6062 | −17.2476 | 15.6062 | 0.286
Table 5. The mean square error of the difference between the two groups.

Group | Lateral mean square error σx (pixel) | Longitudinal mean square error σy (pixel)
0702 | 1.4702 | 1.3225
0712 | 5.0044 | 5.2348
Table 6. Comparison of quaternion of the 0712 group.

Real quaternion (q0, q1, q2, q3) | Quaternion before the correction (q0, q1, q2, q3) | Quaternion after the correction (q0, q1, q2, q3)
(0.6865, −0.0656, 0.7057, 0.1623) | (0.6882, 0.1548, 0.6928, 0.1498) | (0.7029, −0.0632, 0.7050, 0.0697)
(0.6865, −0.0658, 0.7058, 0.1621) | (0.6885, 0.1535, 0.6931, 0.1485) | (0.7028, −0.0644, 0.7049, 0.0710)
(0.6864, −0.0659, 0.7059, 0.1619) | (0.6883, 0.1546, 0.6928, 0.1495) | (0.7038, −0.0537, 0.7058, 0.0602)
(0.6865, −0.0658, 0.7058, 0.1621) | (0.6881, 0.1556, 0.6925, 0.1506) | (0.7042, −0.0481, 0.7062, 0.0546)
Table 7. Comparison of mean of errors of quaternion.

Quantity | Mean of q0 error | Mean of q1 error | Mean of q2 error | Mean of q3 error | Mean of all
Mean of quaternion errors before the correction | 0.0018 | 0.2204 | 0.0130 | 0.0125 | 0.0619
Mean of quaternion errors after the correction | 0.0169 | 0.0024 | 0.0003 | 0.0982 | 0.0295
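The before-correction row of Table 7 can be reproduced from Table 6 as the mean absolute error of each quaternion component over the four listed epochs. The sketch below assumes this averaging; it recomputes only the before-correction values, since those follow exactly from the four rows shown:

```python
# Table 6: real attitude quaternions and the quaternions computed from the
# star points before the correction, for four epochs of the 0712 group.
real = [
    (0.6865, -0.0656, 0.7057, 0.1623),
    (0.6865, -0.0658, 0.7058, 0.1621),
    (0.6864, -0.0659, 0.7059, 0.1619),
    (0.6865, -0.0658, 0.7058, 0.1621),
]
before = [
    (0.6882, 0.1548, 0.6928, 0.1498),
    (0.6885, 0.1535, 0.6931, 0.1485),
    (0.6883, 0.1546, 0.6928, 0.1495),
    (0.6881, 0.1556, 0.6925, 0.1506),
]

def mean_abs_component_error(est, ref):
    """Mean absolute error of each quaternion component over all epochs."""
    n = len(ref)
    return tuple(sum(abs(e[i] - r[i]) for e, r in zip(est, ref)) / n
                 for i in range(4))

errs = mean_abs_component_error(before, real)   # per-component mean errors
```

Rounded to four decimals this gives (0.0018, 0.2204, 0.0130, 0.0125), whose mean 0.0619 matches the first row of Table 7.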

Citation: Wang, B.; Zhou, W.; Gao, Y.; Sheng, Q. A Stellar Imaging Error Correction Method Based on an Ellipsoid Model: Taking Ziyuan 3-02 Satellite Data Analysis as an Example. Sensors 2018, 18, 4259. https://doi.org/10.3390/s18124259