Star Image Prediction and Restoration under Dynamic Conditions

The star sensor is widely used in spacecraft attitude control systems for attitude measurement. However, under highly dynamic conditions, frame loss and smearing of the star image may appear and result in decreased accuracy or even failure of star centroid extraction and attitude determination. To improve the performance of the star sensor under dynamic conditions, a gyroscope-assisted star image prediction method and an improved Richardson-Lucy (RL) algorithm based on the ensemble back-propagation neural network (EBPNN) are proposed. First, for the frame loss problem of the star sensor, considering the distortion of the star sensor lens, a prediction model of the star spot position is obtained from the angular rates of the gyroscope. Second, to restore the smeared star image, the point spread function (PSF) is calculated from the angular velocity of the gyroscope. Then, we use the EBPNN to predict the number of iterations required by the RL algorithm to complete the star image deblurring. Finally, simulation experiments are performed to verify the effectiveness and real-time performance of the proposed algorithm.


Introduction
With the development of navigation technology, the requirements for spacecraft attitude measurement are becoming increasingly demanding [1,2]. In general, star sensors and gyroscopes are often used in spacecraft to measure attitude information. The star sensor is considered the most accurate attitude-measuring device under stable conditions [3]. However, under dynamic conditions, frame loss and blurring of the star image may occur, which leads to decreased accuracy or even failure of star centroid extraction and attitude determination. Therefore, only by solving the frame loss and blurring problems of the star image can the star sensor maintain good performance under dynamic conditions. Because gyroscopes have relatively high measurement accuracy and excellent dynamic performance over short periods, using the gyroscope to assist in improving the dynamic performance of the star sensor has become a hot topic [4-9].
During spacecraft motion, due to external interference and the limitations of the star sensor, the star sensor is prone to frame loss, which can result in a lack of coherence in moving-image tracking and even the loss of key motion features. Therefore, how to eliminate the frame loss error has become a research hotspot in the field of image processing. Currently, the primary methods for eliminating the frame loss error include elimination based on the support vector machine (SVM) [10,11], elimination based on iterative error compensation [12,13], and elimination based on adaptive minimum-error threshold segmentation [14]. These methods eliminate the interference noise in the image and compensate for the frame loss error, but still cannot avoid the frame loss itself. To overcome these shortcomings, a method for eliminating the frame loss by using a motion image-tracking model is presented in [15]. The frame loss of the star image is mainly determined by the exposure time and readout time of the star sensor [2]; therefore, in [16-18], parallel processing is used to overlap the exposure time and readout time to reduce the frame loss of the star image. In [19,20], the authors used image intensifiers to increase the sensitivity of the image sensor, thereby reducing the occurrence of frame loss in star sensors. In [21], Wang et al. proposed using field-programmable gate arrays (FPGAs) to improve the processing ability of the star sensor and reduce the readout time. Yu et al. [22] proposed reducing the occurrence of frame loss by using an intensified star sensor. Although FPGAs and image intensifiers can help the star sensor reduce the occurrence of frame loss, the additional hardware increases the weight and power consumption of the star sensor and limits its application in micro-spacecraft.
Motion blur of the star image is another important factor that affects the dynamic performance of the star sensor. To improve the dynamic performance of the star sensor, many scholars have done extensive research in the field of image processing, especially on star image deblurring algorithms [23]. According to whether the point spread function (PSF) is known or not, deblurring methods can be classified into two typical forms: blind image deblurring (BID) with an unknown PSF, and non-blind image deblurring (NBID) with a known PSF [24]. Mathematically, NBID is an inverse problem, and NBID algorithms have good real-time performance. Currently, most BID algorithms perform blur kernel estimation and image deblurring simultaneously and recursively to approach the sharp image [25-30]; therefore, BID methods have poor real-time performance. Because star sensors are widely used in spacecraft, the real-time requirements are high. Therefore, we study an NBID algorithm for star image deblurring.
Two problems must be solved when restoring the blurred star image: how to determine the blur kernel, and which deblurring method to use. The gyroscope can measure the angular rates of the carrier and is easy to integrate, and the blur kernel parameters (blur angle and blur length) can be calculated from the angular rate information output by the gyroscope. In this paper, a gyroscope is used to assist in the calculation of the blur kernel. For star image deblurring, there are two commonly used NBID algorithms. One is the Wiener filter [31,32]: Quan et al. [31] proposed a Wiener filter based on the optimal window technique for recovering the blurred star image, and Ma et al. [32] proposed an improved one-dimensional Wiener filtering method for star image deblurring. Although these two methods have good real-time performance, they also amplify the noise in the image. The other is the Richardson-Lucy (RL) algorithm, which can effectively suppress the noise in the deblurred star image [33,34]. However, the RL algorithm provides no iterative convergence criterion, and the optimal number of iterations has to be found by time-consuming trial and error. If the number of blurred star images to be processed is large, this disadvantage cannot be ignored.
In this paper, to overcome the shortcomings of the above methods and further improve the performance of the star sensor under highly dynamic conditions, we propose an improved gyroscope-assisted star image prediction method and an RL non-blind deblurring algorithm. In the star image prediction method, considering the second-order distortion of the star sensor lens, a prediction model between the angular rates of the gyroscope and the position of the star spot is established. For the improved RL algorithm, we first analyze the point spread function (PSF) model of the star sensor under different motion conditions; then an ensemble back-propagation neural network (EBPNN) prediction model based on an improved bagging method is constructed to predict the number of termination iterations required by the conventional RL algorithm, which overcomes the disadvantage of the traditional RL algorithm that the number of iterations must be set manually.
The rest of this paper is organized as follows. In Section 2, we introduce the star image prediction model for the case of frame loss of the star image. The improved RL algorithm is described in Section 3. In Section 4, simulation results are shown to demonstrate the effectiveness of our method. Finally, we conclude in Section 5.

Prediction Model of the Star Image
The star sensor is a vision sensor that can be used to measure the attitude of a spacecraft [35]. To obtain high-precision attitude information of the spacecraft, we must ensure that the image sensor of the star sensor outputs the star image continuously. Due to the highly dynamic motion of the spacecraft, frame loss of the star image often occurs. Therefore, it is especially important to ensure that the star sensor can still output high-precision attitude information when frames of the star image are lost. In this section, we show how to predict the position of the star spot from the angular rates of the gyroscope in the presence of distortion of the star sensor lens.

In Figure 1, the star sensor obtains the direction vector of the navigation star in the celestial inertial coordinate system by observing stars on the celestial sphere. At time t, the attitude matrix of the star sensor in the celestial coordinate system is A(t); the star sensor detects the direction vector v_i of the navigation star in the celestial coordinate system, and the corresponding image vector in the star sensor coordinate system is W_i. The image coordinate of the principal point of the lens of the star sensor is (x_0, y_0), and the coordinate of the navigation star S_i on the image plane is (x, y). Since the optical lens of the star sensor mainly exhibits second-order radial distortion, the ideal image coordinate (x', y') of the navigation star S_i can be expressed as

  x' = x + k_x x [(x - x_0)^2 + (y - y_0)^2],
  y' = y + k_y y [(x - x_0)^2 + (y - y_0)^2],   (1)

where k_x and k_y represent the second-order radial distortion coefficients in the X and Y directions, respectively. Assuming that the focal length of the star sensor is f, the direction vector W_i can be given by

  W_i = 1/sqrt((x' - x_0)^2 + (y' - y_0)^2 + f^2) * [-(x' - x_0), -(y' - y_0), f]^T.   (2)

According to the attitude matrix A(t) of the star sensor, the relationship between the vectors W_i and v_i can be obtained as

  W_i = A(t) v_i,   (3)

where the attitude matrix A(t) can be solved by the N-vector method, the TRIAD method, the QUEST method, the q-method, or the least-squares method [36]. In this paper, we use the angular velocity information of the gyroscope to calculate the attitude matrix A(t).
In Figure 2, the prediction model of the star spot is shown. Since the gyroscope outputs the angular rate vector w(t) of the star sensor, the attitude kinematics can be written as

  dA(t)/dt = -[w(t)x] A(t),   (4)

where [w(t)x] represents the cross-product (skew-symmetric) matrix of the star sensor angular rate vector w(t). Over a short prediction interval Delta t, Equation (4) gives the first-order attitude update

  A(t + Delta t) ~ (I - [w(t)x] Delta t) A(t).   (5)

According to Equations (4) and (5), the relationship between W_i(t + Delta t) and W_i(t) is

  W_i(t + Delta t) = (I - [w(t)x] Delta t) W_i(t),   (6)

where W_i(t) can be calculated from the star image. According to Equations (1) and (6), we can obtain the position prediction model: the predicted direction vector W_i(t + Delta t) is projected onto the image plane through the focal length f, and the second-order radial distortion of Equation (1) is applied to obtain the predicted image coordinate of the star spot.
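As a concrete illustration of this prediction model, the following sketch propagates a star spot through the first-order attitude update and re-projects it onto the image plane. The function names, the pinhole projection sign convention, and the zero default distortion coefficients are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def skew(w):
    """Cross-product (skew-symmetric) matrix [w x] of a 3-vector."""
    wx, wy, wz = w
    return np.array([[0.0, -wz, wy],
                     [wz, 0.0, -wx],
                     [-wy, wx, 0.0]])

def predict_star_spot(xy, w, dt, f, principal_point=(0.0, 0.0), kx=0.0, ky=0.0):
    """Predict the star spot position after dt seconds of rotation at rate w.

    xy: current image coordinate of the star spot
    w: gyroscope angular rate vector (rad/s) in the sensor frame
    f: focal length, in the same units as the image coordinates
    kx, ky: second-order radial distortion coefficients (assumed small)
    """
    x0, y0 = principal_point
    x, y = xy
    r2 = (x - x0) ** 2 + (y - y0) ** 2
    # apply the second-order radial distortion model to get the ideal coordinate
    xi, yi = x + kx * x * r2, y + ky * y * r2
    # unit direction vector W_i in the sensor frame (one common sign convention)
    W = np.array([-(xi - x0), -(yi - y0), f])
    W /= np.linalg.norm(W)
    # first-order attitude propagation: W(t+dt) = (I - [w x] dt) W(t)
    Wp = (np.eye(3) - skew(w) * dt) @ W
    Wp /= np.linalg.norm(Wp)
    # project back onto the image plane
    xp = x0 - f * Wp[0] / Wp[2]
    yp = y0 - f * Wp[1] / Wp[2]
    return xp, yp
```

For example, a star at the principal point drifts along the v axis when the sensor rotates about its X-axis, as predicted by Equation (6).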

Improved Star Image Deblurring Algorithm
Generally, establishing the PSF under a specific motion is the key to star image recovery. In this section, we first analyze the PSF models of the blurred star image caused by rotation of the star sensor around the optical axis and the non-optical axes, and calculate the PSF for the corresponding motion condition from the angular velocity information of the gyroscope. Then, we introduce an improved RL algorithm to recover the blurred star image.

Motion Blur Model of the Star Image
To better recover the blurred star image, the primary task is to obtain the PSF; therefore, it is necessary to analyze the mechanism of star image blurring. The star sensor is a navigation device that acquires the attitude by observing stars. Because the star sensor photographs the sky against a dark background, the exposure time needs to be increased appropriately in order to increase the number of navigation stars in the star image. If the star sensor moves over a wide range during the exposure time, the same star is imaged at different locations on the star image, which results in blurring. Mathematically, the model of star image blurring can be written as

  g(x, y) = f(x, y) (*) h(x, y) + n(x, y),   (8)

where f(x, y), g(x, y), and h(x, y) denote the sharp star image, the blurred star image, and the PSF, respectively; (*) represents the two-dimensional convolution operator; and n(x, y) denotes the image noise. Different motion types of the star sensor produce different PSFs, so the PSF is central to describing the model of the blurred star image. Since the distance from the navigation star to the Earth is much larger than the distance from the star sensor to the Earth, linear motion has a negligible effect on star image blur and can be ignored. Therefore, we mainly analyze the model of the blurred star image generated by angular motion.
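The degradation model g = f (*) h + n above can be sketched directly. This is a minimal illustration (the function name and parameter defaults are assumptions), using additive Gaussian noise as in the later experiments.

```python
import numpy as np
from scipy.signal import convolve2d

def blur_star_image(sharp, psf, noise_sigma=0.01, seed=0):
    """Apply the degradation model g = f (*) h + n to a sharp star image.

    sharp: 2-D array, sharp star image with values in [0, 1]
    psf: 2-D blur kernel; normalized here so the blur is energy preserving
    noise_sigma: std of the additive Gaussian noise (illustrative value)
    """
    rng = np.random.default_rng(seed)
    psf = psf / psf.sum()                       # normalize the kernel
    blurred = convolve2d(sharp, psf, mode="same", boundary="fill")
    noisy = blurred + rng.normal(0.0, noise_sigma, sharp.shape)
    return np.clip(noisy, 0.0, 1.0)
```

A point source convolved with a 1 x 5 horizontal kernel spreads its energy over five pixels, which is exactly the smearing effect described above.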
In Figure 3a, the star image blur caused by angular motion is shown. Since the exposure time of the star sensor is short, the angular velocity of the star sensor can be considered constant during the exposure time. Moreover, the star sensor coordinate system is assumed to coincide with the body-fixed frame. In Figure 3b, the model of the blurred star image generated by the star sensor rotating around the X-axis is shown; the initial angle between the starlight direction and the principal optical axis of the star sensor is theta, and the projection of the navigation star on the star image is P. When the star sensor rotates clockwise around the X-axis at an angular velocity w_x, the rotational angle during the exposure time Delta t is

  Delta phi = w_x Delta t.   (9)

The star spot shifts along the v axis of the image plane from v = f tan(theta) to v' = f tan(theta + Delta phi). As a result of the short exposure time of the star sensor, Delta phi is quite small, and the first-order Taylor expansion of tan(theta + Delta phi) gives

  tan(theta + Delta phi) ~ tan(theta) + Delta phi / cos^2(theta).   (10)

Substituting Equation (10) into (9), we have

  Delta v = v' - v ~ f w_x Delta t / cos^2(theta).   (11)
In general, the rotational motion characteristics of the star sensor in the O_sX and O_sY directions are the same. As shown in Figure 3c, during the exposure time Delta t the star sensor rotates clockwise around the Y-axis at an angular velocity w_y, the rotational angle is Delta phi = w_y Delta t, and the star spot shifts along the u axis of the image plane by a translation Delta u ~ f w_y Delta t / cos^2(theta).
When the star sensor rotates around the X-axis and the Y-axis with angular rates w_x and w_y, respectively, the rotation angle after the exposure time Delta t is Delta phi = sqrt(w_x^2 + w_y^2) Delta t, and the translation vector of the star spot is

  (Delta u, Delta v) ~ (f w_y Delta t / cos^2(theta), f w_x Delta t / cos^2(theta)).   (13)

In general, when the star sensor rotates around the cross-boresight directions (O_sX and O_sY), the blur length L and the blur kernel angle phi of the star image can be given by

  L = sqrt(Delta u^2 + Delta v^2),  phi = arctan(Delta v / Delta u).   (14)

Then, the PSF of the blurred star image is expressed as [37,38]

  h(x, y) = 1/L,  if sqrt(x^2 + y^2) <= L/2 and y = x tan(phi);
  h(x, y) = 0,  otherwise.   (15)

In Figure 3d, the star sensor rotates clockwise around the Z-axis at an angular rate w_z; the point P(u, v) moves along a circle with the image center O_C as the center, and the arc PP' can be approximated by the chord PP'. Inspired by reference [39], the motion of the star spot can be regarded as a uniform linear motion on the focal plane. The displacements of the star spot in the directions of the X- and Y-axes can be expressed as

  Delta u ~ -v w_z Delta t,  Delta v ~ u w_z Delta t.   (16)

The star image blur kernel angle phi and the chord length PP' are given by

  phi = arctan(Delta v / Delta u),  PP' = sqrt(Delta u^2 + Delta v^2).   (17)-(18)

According to the geometric relation in Figure 3d,

  sqrt(u^2 + v^2) = r.   (19)

Substituting Equation (19) into Equation (18), Equation (18) can be rewritten as

  PP' ~ r w_z Delta t.   (20)

Therefore, when the star sensor rotates around the Z-axis, the PSF of the blurred star image takes the same uniform-linear-motion form as Equation (15), with blur length PP' and blur angle phi.   (21)

In summary, according to Equations (15) and (21), the model of the multiple-blurred star image is obtained by cascading the PSFs of the cross-boresight and boresight rotations,   (22)

where C_b^s denotes the rotation matrix from the body coordinate system to the star sensor coordinate system. Because the star sensor is fixed on the spacecraft, C_b^s can be calibrated in advance.
After obtaining the PSF, the NBID algorithm is used to recover the blurred star image.
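A minimal sketch of turning gyroscope rates into a linear motion-blur kernel follows, assuming the cross-boresight displacement formulas derived above; the function names and the pixel-sampling scheme used to rasterize the kernel are illustrative choices, not the paper's implementation.

```python
import numpy as np

def linear_motion_psf(length, angle):
    """Normalized linear motion-blur kernel of the given blur length
    (pixels) and blur angle (radians, measured from the u axis)."""
    n = max(int(np.ceil(length)), 1)
    size = 2 * n + 1
    psf = np.zeros((size, size))
    # accumulate samples uniformly along the blur path, centered on the kernel
    for t in np.linspace(-length / 2, length / 2, 4 * n + 1):
        r = n + int(round(t * np.sin(angle)))
        c = n + int(round(t * np.cos(angle)))
        psf[r, c] += 1.0
    return psf / psf.sum()

def psf_from_gyro(wx, wy, f, theta, dt):
    """Blur length and angle for cross-boresight rotation: wx, wy are the
    gyro rates in rad/s, f the focal length in pixels, theta the initial
    off-axis angle of the star, dt the exposure time."""
    du = f * wy * dt / np.cos(theta) ** 2
    dv = f * wx * dt / np.cos(theta) ** 2
    return np.hypot(du, dv), np.arctan2(dv, du)
```

For instance, a pure Y-axis rotation of 0.1 rad/s over a 0.01 s exposure with f = 1000 pixels produces a one-pixel horizontal blur.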

Richardson-Lucy (RL) Algorithm
NBID algorithms include both linear and nonlinear algorithms. The most common linear NBID algorithms are the inverse filtering algorithm, the Wiener filtering algorithm, and the least-squares algorithm [3]. Compared with linear NBID algorithms, nonlinear NBID algorithms are better at suppressing noise and preserving image edge information. Currently, the RL algorithm [40] is the most widely used nonlinear iterative restoration algorithm. The RL algorithm is a deconvolution algorithm derived from the maximum a posteriori probability estimate. It assumes that the noise in the image follows a Poisson distribution, so the likelihood probability of the blurred image g given the sharp image f is

  p(g | f) = prod_(x,y) [ (f (*) h)(x, y) ]^g(x,y) exp[ -(f (*) h)(x, y) ] / g(x, y)!,   (24)

where (x, y) denotes the pixel coordinate, g(x, y) represents the blurred image, h(x, y) denotes the PSF, and (*) denotes the two-dimensional convolution operator.

To obtain the maximum likelihood solution for the sharp image f(x, y), we minimize the negative log-likelihood energy function

  E(f) = sum_(x,y) [ (f (*) h)(x, y) - g(x, y) ln (f (*) h)(x, y) ].   (25)

By differentiating E(f) and normalizing the blur kernel h(x, y), the RL algorithm iteratively updates the image by

  f^(n+1)(x, y) = f^n(x, y) [ h(-x, -y) (*) ( g(x, y) / (f^n (*) h)(x, y) ) ],   (26)

where n represents the iteration number.
The RL algorithm has two important properties [40]: non-negativity and energy preservation. It constrains the estimates of the sharp image to be non-negative and preserves the total energy of the image during the iteration, so the RL algorithm has excellent performance in star image deblurring. However, the RL algorithm provides no iterative convergence criterion, and the optimal number of iterations needs to be obtained through time-consuming trial and error. This shortcoming of the RL algorithm cannot be ignored when dealing with a large number of blurred star images. Therefore, it is necessary to study an improved RL algorithm that sets the number of iterations automatically.
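The multiplicative RL update described above can be implemented in a few lines. This sketch (the function name and the flat initialization are assumptions) takes the iteration count as an input, which is exactly the quantity the EBPNN later predicts.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(g, psf, n_iter):
    """Richardson-Lucy deconvolution with a fixed iteration count.

    g: blurred image, psf: blur kernel (normalized inside), n_iter:
    number of iterations (supplied by the EBPNN in the proposed method).
    """
    psf = psf / psf.sum()                 # normalize the blur kernel
    psf_flip = psf[::-1, ::-1]            # h(-x, -y)
    f = np.full_like(g, 0.5)              # non-negative initial estimate
    eps = 1e-12                           # guard against division by zero
    for _ in range(n_iter):
        ratio = g / (fftconvolve(f, psf, mode="same") + eps)
        f = f * fftconvolve(ratio, psf_flip, mode="same")
    return f
```

Because every factor in the update is non-negative, the estimate stays non-negative, and the normalized kernel keeps the total image energy approximately constant, matching the two properties noted above.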

Improved RL Algorithm
To overcome the shortcomings of the RL algorithm, we propose an improved RL algorithm, whose flow diagram is shown in Figure 4. First, we set the parameters of the star sensor, including the field of view, focal length, star magnitude limit, resolution of the star image, etc. We use these parameters to simulate a large number of sharp star images and the corresponding blurred star images. Second, according to the angular rates output by the gyroscope, we calculate the PSF of each blurred star image, use the RL algorithm to deblur the star image, and record the optimal number of iterations used. The optimal number of iterations and the sum of the magnitudes of the Fourier coefficients (SUMFC) of the PSF of the blurred star image are used for training the ensemble back-propagation neural network (EBPNN) [41]. After the training is completed, the optimal-iteration-number prediction model of the RL algorithm is obtained. Finally, when the navigation system is in use, the PSF of the blurred star image is obtained according to the angular velocity of the gyroscope, and the SUMFC of the PSF is used as the input of the prediction model. The star image is then deblurred with the number of RL iterations output by the prediction model. In particular, when predicting the number of iterations, the EBPNN prediction model based on the improved bagging method uses the SUMFC of the PSF of the blurred star image as its input. We blurred the sharp star image (Figure 5a) with different PSFs; the relationship between the SUMFC of the PSFs and the corresponding number of iterations required by the RL algorithm is shown in Figure 6. There is an obvious nonlinear relationship between them, which prompts us to use the EBPNN to predict the optimal number of iterations of the RL algorithm. In order to further improve the prediction accuracy of the ensemble neural network, we introduce a just-in-time learning algorithm to optimize the sample sets of the bagging method. Consider two input samples x_i and x_q, where x_q is the currently acquired input sample and x_i is a training sample in the training set D. The distance and the angle between them can be calculated by

  d(x_i, x_q) = ||x_i - x_q||,  cos(theta(x_i, x_q)) = x_i^T x_q / (||x_i|| ||x_q||).   (27)

The similarity between x_i and x_q is

  S(x_i, x_q) = lambda exp(-d(x_i, x_q)) + (1 - lambda) cos(theta(x_i, x_q)),   (28)

where,  is the weighting factor, the larger the ( , ) i q S x x value, the higher the similarity between i x and q x .
We select the k samples closest to the currently acquired sample x_q from the training sample set D_i (i = 1, 2, ...) and arrange the new sample set in descending order of similarity.
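The distance-and-angle similarity measure and the k-nearest selection step can be sketched as follows; the function names and the default weighting factor lambda = 0.5 are illustrative assumptions.

```python
import numpy as np

def similarity(xi, xq, lam=0.5):
    """Just-in-time learning similarity between samples xi and xq,
    combining a distance term and an angle term; lam is the weighting
    factor lambda."""
    xi = np.atleast_1d(np.asarray(xi, dtype=float))
    xq = np.atleast_1d(np.asarray(xq, dtype=float))
    d = np.linalg.norm(xi - xq)
    cos = xi @ xq / (np.linalg.norm(xi) * np.linalg.norm(xq))
    return lam * np.exp(-d) + (1.0 - lam) * cos

def select_k_nearest(D, xq, k, lam=0.5):
    """Return the k training samples from D most similar to xq,
    sorted in descending order of similarity."""
    ranked = sorted(D, key=lambda xi: similarity(xi, xq, lam), reverse=True)
    return ranked[:k]
```

The selected subset then serves as the locally weighted training set for the bagging sub-models.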
Minimize ( ) J  to obtain the model parameter  at the current moment, and then obtain its local model: In particular, we find that the computational complexity of the EBPNN model increases with the increase of the number of BPNN models, but the prediction accuracy of the EBPNN model does not always increase with it, sometimes it even decreases.Therefore, after considering the computational complexity and prediction accuracy of the EBPNN model, we decide to use three sub-BP neural network models to construct the EBPNN model.As shown in Figure 7, three BP neural networks are trained by different sample sets ' ( 1,2,3) , and the integrated prediction model is obtained by aggregating the three BP neural networks.When the EBPNN is used for prediction, we use the weighted method to integrate the output of each neural network and take the integrated result as the output of EBPNN.In the process of integrating the output of each BP neural network, first, we calculate the average training errors ( 1,2,3) i e i  of three sub-models on their respective training sample set.Then, we construct a weighting vector w of 1 n  dimensions, the value of n is the same as the number of sub-BP neural network models, so 3 n  , . 
Finally, we calculate the prediction results of the three sub-models for the input data x_q by Equation (31) and form a 1 x 3 output vector y_q'. The final prediction result of the EBPNN is expressed as

  y_q = w y_q'^T = sum_(i=1..3) w_i y_q,i'.   (32)

To verify the effectiveness of the EBPNN prediction model, we analyzed the accuracy of the iteration numbers estimated by the model. In the training stage of the EBPNN model, each BP neural network adopts a three-layer structure; the numbers of nodes in the input layer, hidden layer, and output layer are set to 1, 10, and 1, respectively, and the sigmoid function is used as the activation function. The original training set D contains 1708 samples. In Figure 8, we show the number of iterations predicted by the EBPNN model and compare it with the optimal number of iterations. The number of iterations estimated by the EBPNN almost coincides with the optimal number of iterations, and the error between them is small. Therefore, the performance of the EBPNN prediction model meets our requirements. After the EBPNN predicts the number of iterations, we use the improved RL algorithm to obtain the sharp star image, and then we can accurately estimate the attitude information through star image segmentation, star extraction, star identification, star matching, and other operations [43].
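The weighted aggregation of the sub-model outputs can be sketched as follows. The paper states only that the weights are built from the average training errors e_i with smaller errors receiving larger weights; the inverse-error normalization used here is an illustrative assumption.

```python
import numpy as np

def ensemble_predict(sub_preds, train_errors):
    """Weighted aggregation of sub-model outputs for the EBPNN.

    sub_preds: predictions y_q' of the n sub-models for one input
    train_errors: average training errors e_i of the sub-models;
    weights are inversely proportional to the errors (assumption).
    """
    e = np.asarray(train_errors, dtype=float)
    w = (1.0 / e) / np.sum(1.0 / e)       # weights sum to 1
    return float(np.asarray(sub_preds, dtype=float) @ w)
```

With equal training errors this reduces to a plain average; a sub-model with three times the error of its peer contributes a quarter of the weight.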

Simulation Results and Analysis
To prove the effectiveness of the star image prediction method and the improved RL algorithm in a highly dynamic environment, in the following sections we compare and analyze the prediction accuracy of the star spot and the accuracy of the attitude estimation before and after star image deblurring.

Star Image Prediction Experiment
In this section, to validate the star image prediction method, we need to simulate the star images acquired by the star sensor at different times. In the star image simulation, we determine the position of the navigation star in the star image based on the boresight direction of the star sensor and the right ascension and declination of the navigation star. Since the star sensor is fixed on the spacecraft, it obtains different star images as the spacecraft moves. We assume that the exposure time of the star sensor is 0.01 s, the field of view is 20 deg x 20 deg, the image sensor size is 865 pixels x 865 pixels, the pixel size is 20 um, and the focal length is 49 mm, and we select the stars brighter than magnitude 3 in the Yale Bright Star Catalogue as the guide star catalog. We use these parameters and the spacecraft trajectory to simulate the images at different times and use them as the ground truth of the star images. According to the above parameters, the resolution of the simulated star image is 865 x 865; to speed up the processing, we crop a 512 x 512 region as the star image to be processed. The simulated trajectory of the spacecraft is shown in Figure 9, and 1500 frames of consecutive star images are simulated; the first and the 1500th frames are shown in Figure 10. To validate the star image prediction method, we predict the star image based on the first frame and the angular velocity of the gyroscope, and compare it with the ground truth. Figure 11a,b show the ground truth of the 1500th frame and the 1500th frame predicted by the proposed algorithm. To demonstrate the accuracy of the prediction algorithm more intuitively, Table 1 lists the centroid coordinates of the star spots in the real and predicted 1500th frames, where (x, y) represents the centroid coordinate of a star spot in the real star image, (x', y') is the centroid coordinate of the predicted star spot, and Delta x and Delta y represent the differences of the horizontal and vertical coordinates between the true and predicted star spots, respectively. As seen from Table 1, the maximum errors of the horizontal and vertical coordinates of the star spots predicted by our method within 15 s are 0.89 and 0.50 pixels, respectively.
To further analyze the prediction algorithm, starting from the first frame shown in Figure 10a, we successively predicted the star positions in 1500 star images and analyzed the mean estimation error of the star spot positions in each predicted star image. As shown in Figure 12, the mean coordinate error of the predicted star spots increases with the number of predicted frames, but stays within a small range. Therefore, in the case of short-term frame loss, the proposed method achieves an accurate prediction of the star spot.
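The centroid coordinates compared above come from star spot centroiding, which can be sketched as an intensity-weighted mean over thresholded pixels; the paper does not specify its exact segmentation procedure, so the simple global threshold used here is an assumption.

```python
import numpy as np

def star_centroid(image, threshold):
    """Intensity-weighted centroid (x, y) of the star pixels above a
    global threshold -- a simplified star spot extraction step."""
    mask = image > threshold
    ys, xs = np.nonzero(mask)      # row/column indices of star pixels
    w = image[mask]                # pixel intensities as weights
    return (xs @ w) / w.sum(), (ys @ w) / w.sum()
```

Subtracting the centroids extracted from the true and predicted frames yields the Delta x, Delta y errors reported in Table 1.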

Experiments on Star Image Deblurring
In this section, we present some examples to validate the proposed gyro-assisted improved RL algorithm. First, we analyze the blurring of the star image when the star sensor rotates around the X-axis, the Y-axis, the Z-axis, the X- and Y-axes, and the three axes simultaneously. Then, we add Gaussian white noise with zero mean and variance 0.01 to the blurred star images. Finally, the blurred star images are deblurred by the proposed algorithm, and we compare the deblurred star images with the original sharp star images. Figure 13 shows the magnified original star image, the blurred star image caused by the star sensor rotating around the X- and Y-axes (w_x = 10 deg/s, w_y = 10 deg/s), the deblurred star image, and the gray distribution of the star spot in each. As can be seen from Figure 13, the gray value of the star spot in the blurred star image decreases significantly; after deblurring, the smearing is obviously suppressed, and the gray value and gray distribution of the star spot are closer to those in the original star image.
The star sensor is an attitude measurement device. To reflect the deblurring performance of the proposed algorithm more intuitively, we compare the attitude information of the spacecraft estimated from the star image before and after deblurring. The star image observed by the star sensor at a certain time is shown in Figure 14. First, we apply angular-motion blurring to the observed star image; then we use the proposed algorithm and the automatic iterative RL algorithm to deblur the star image, and compare the attitude information estimated from the deblurred images. The automatic iterative RL algorithm calculates the mean square error (MSE) of the currently restored image while automatically increasing the number of iterations and compares it with the MSE of the image restored in the previous iteration. If the MSE of the currently restored image is higher than that of the previous iteration, the previous iteration number is taken as the optimal number of iterations, and the corresponding restored image is the optimal restoration result. The attitude estimation results are shown in Tables 2-6, where "Fail" indicates that the attitude information of the spacecraft cannot be estimated from the star image because the degree of blurring is too high. From Tables 2-6, it can be seen that without restoration the attitude estimation fails once the angular velocity of the star sensor around the X-axis, the Y-axis, the Z-axis, the X- and Y-axes, and the three axes exceeds the respective thresholds listed in the tables; after restoration, the two methods have similar performance, and the attitude errors are kept in a small range. This is because, as the angular velocity of the star sensor increases, the blur extent of the star image grows and the gray value of the star spot decreases significantly; when the gray value of a blurred star falls below the threshold for star image segmentation, the blurred star can hardly be detected. However, after restoration of the blurred star image, the gray value of the star spot is improved and the gray distribution of the star spot is closer to the true distribution, so the star spot can be extracted even under high dynamic conditions. Finally, the attitude of the spacecraft can be estimated from these star spots.
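The baseline automatic iterative RL algorithm can be sketched as follows. The paper does not state which reference image the MSE is computed against; comparing the re-blurred estimate with the observed image is one plausible reading, used here as an assumption, and the function name is illustrative.

```python
import numpy as np
from scipy.signal import fftconvolve

def rl_auto_iterations(g, psf, max_iter=200):
    """Automatic iterative RL: stop when the MSE between the re-blurred
    estimate and the observed image stops decreasing, and return the
    previous (best) restoration and its iteration number."""
    psf = psf / psf.sum()
    psf_flip = psf[::-1, ::-1]
    f = np.full_like(g, 0.5)
    best_f, best_mse, eps = f, np.inf, 1e-12
    for n in range(1, max_iter + 1):
        ratio = g / (fftconvolve(f, psf, mode="same") + eps)
        f = f * fftconvolve(ratio, psf_flip, mode="same")
        mse = np.mean((fftconvolve(f, psf, mode="same") - g) ** 2)
        if mse >= best_mse:            # MSE rose: previous iterate was optimal
            return best_f, n - 1
        best_f, best_mse = f, mse
    return best_f, max_iter
```

The step-by-step search over iteration counts is exactly the cost the EBPNN predictor avoids, which is why the proposed method is faster.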
To verify the real-time performance of the proposed algorithm in the case of Gaussian noise, we use the proposed algorithm and the automatic iterative RL algorithm to restore the blurred star image caused by the star sensor rotating around the Z-axis and compare the time consumed by the two methods. As shown in Figure 15, the real-time performance of the proposed algorithm is significantly better than that of the iterative RL algorithm. This is mainly because the proposed algorithm uses the ensemble neural network based on the improved bagging method to quickly predict the number of iterations required by the RL algorithm, whereas the iterative RL algorithm requires a step-by-step iteration to find the required number of iterations. Second, in the case where the blurred star image is contaminated by Poisson noise, we present the deblurring performance of the proposed method and compare it with the automatic iterative RL algorithm. Figure 16 shows the magnified original star image, the blurred star image caused by the star sensor rotating around the X- and Y-axes, the deblurred star image, and the gray distribution of the star spot in the case of Poisson noise. Figure 17 shows the real-time performance of the proposed algorithm and the iterative RL algorithm when dealing with the blurred star image caused by the star sensor rotating around the Z-axis; the result shows that the real-time performance of our algorithm is better than that of the iterative RL algorithm when the degree of blurring of the star image is large. In summary, the proposed method and the iterative RL algorithm both significantly improve the dynamic performance of the star sensor and have similar restoration performance; however, the real-time performance of our algorithm is better than that of the iterative RL algorithm, especially in the case of Gaussian white noise.

Figure 1 .
Figure 1. Star image model of the star sensor.

Figure 2 .
Figure 2. Prediction model of the star spot.

Figure 3 .
Figure 3. Motion blur star image model. (a) Blurred star image generated by the angular motion; (b) blurred star image generated by the rotation of the star sensor around the X-axis; (c) blurred star image generated by the rotation of the star sensor around the Y-axis; (d) blurred star image generated by the rotation of the star sensor around the Z-axis.
The point P moves along a circle with O_C as the center and r as the radius. The rotation angle of the star sensor is Delta psi = w_z Delta t during the exposure time Delta t. Since the exposure time of the star sensor is short, the arc PP' can be approximated by its chord.

Figure 4 .
Figure 4. Flow diagram of the improved Richardson-Lucy (RL) algorithm for star image deblurring.

Figure 5 .
Figure 5. Original sharp star image and its gray distribution. (a) Original sharp star image; (b) gray distribution of the star spot.

Figure 6 .
Figure 6. The relationship between the sum of the magnitudes of the Fourier coefficients (SUMFC) of the point spread function (PSF) and the corresponding optimal number of iterations.

Figure 8 .
Figure 8. Comparison between the optimal number of iterations and the estimated number of iterations.

Figure 9 .Figure 10 .
Figure 9. The spacecraft trajectory. (a) Three-dimensional trajectory of the spacecraft; (b) projection of the spacecraft trajectory on the surface of the Earth.

Figure 11 .
Figure 11. True star image versus predicted star image. (a) The true value of the 1500th frame star image; (b) the 1500th frame star image predicted by the proposed algorithm.

Figure 13 .
Figure 13. The magnified star image and the gray distribution of the star spot in the case of Gaussian white noise. (a) The magnified original star image; (b) gray value distribution of the star spot in the original star image; (c) the magnified blurred star image (w_x = w_y = 10 deg/s); (d) gray value distribution of the star spot in the blurred star image.

Figure 14 .
Figure 14. Star spots observed by a star sensor.

Figure 15 .
Figure 15. Comparison of running time between the proposed method and the iterative RL method in the case of Gaussian noise.

Combined with Tables 7-11, we can see that in the case of Poisson noise, the attitude estimation fails when the angular velocity of the star sensor rotating around the X-axis, the Y-axis, the Z-axis, the X- and Y-axes, and the three axes exceeds 40 deg/s. After the blurred star image is restored by the proposed algorithm and the automatic iterative RL algorithm, the maximum angular velocity at which the attitude can be estimated is expanded to 160 deg/s; the two methods have similar performance, and the attitude errors are kept in a small range.

Figure 16 .
Figure 16. The magnified star image and the gray level distribution of the star spot in the case of Poisson noise. (a) The magnified original star image; (b) gray level distribution of the star spot in the original star image; (c) the magnified blurred star image (w_x = w_y = 10 deg/s); (d) gray level distribution of the star spot in the blurred star image.

Figure 17 .
Figure 17. Comparison of running time between the proposed method and the iterative RL method in the case of Poisson noise.

Table 1 .
Comparison of the coordinates between the ideal and the predicted star spots in the 1500th star image.
Figure 12. Predicted star position error.

Table 2 .
Comparison of attitude estimation in the case of Gaussian white noise (vary w_x).

Table 3 .
Comparison of attitude estimation in the case of Gaussian white noise (vary w_y).

Table 4 .
Comparison of attitude estimation in the case of Gaussian white noise (vary w_z).

Table 5 .
Comparison of attitude estimation in the case of Gaussian white noise (vary w_x and w_y).

Table 6 .
Comparison of attitude estimation in the case of Gaussian white noise (vary w_x, w_y, and w_z).
After the blurred star image is restored by the proposed algorithm and the automatic iterative RL algorithm, the maximum angular velocity at which the attitude can be estimated is expanded to 75 deg/s.

Table 7 .
Comparison of attitude estimation in the case of Poisson noise (vary w_x).

Table 8 .
Comparison of attitude estimation in the case of Poisson noise (vary w_y).

Table 9 .
Comparison of attitude estimation in the case of Poisson noise (vary w_z).

Table 10 .
Comparison of attitude estimation in the case of Poisson noise (vary w_x and w_y).

Table 11 .
Comparison of attitude estimation in the case of Poisson noise (vary w_x, w_y, and w_z).