Article

A Novel Systematic Error Compensation Algorithm Based on Least Squares Support Vector Regression for Star Sensor Image Centroid Estimation

Department of Automation, Tsinghua University, Beijing 100084, China
*
Author to whom correspondence should be addressed.
Sensors 2011, 11(8), 7341-7363; https://doi.org/10.3390/s110807341
Submission received: 13 May 2011 / Revised: 18 July 2011 / Accepted: 22 July 2011 / Published: 25 July 2011
(This article belongs to the Section Physical Sensors)

Abstract

The estimation of star image centroids is a critical operation that directly affects the precision of attitude determination for star sensors. This paper presents a theoretical study of the systematic error introduced by the star centroid estimation algorithm. The systematic error is analyzed through a frequency domain approach and numerical simulations. It is shown that the systematic error consists of an approximation error and a truncation error, which result from the discretization approximation and the sampling window limitation, respectively. A criterion for choosing the size of the sampling window to reduce the truncation error is given in this paper. The systematic error can be evaluated as a function of the actual star centroid position under different Gaussian widths of the star intensity distribution. In order to eliminate the systematic error, a novel compensation algorithm based on least squares support vector regression (LSSVR) with a Radial Basis Function (RBF) kernel is proposed. Simulation results show that when the compensation algorithm is applied to a 5-pixel star sampling window, the accuracy of star centroid estimation is improved from 0.06 to 6 × 10−5 pixels.

1. Introduction

The star tracker is a satellite-based embedded system that estimates the orientation of the satellite in space. This information is essential for any space mission, as it supplies all attitude data required for satellite control. Other sensors are used for the same purpose (gyroscopes, sun trackers, magnetometers, GPS), but star trackers are more accurate and allow attitude estimation without prior information [1]. For these reasons, star trackers are used onboard 3-axis stabilized spacecraft. Star trackers estimate the orientation directly from images of stars taken by an onboard camera. The estimation is based on a comparison of the star locations in the image with those in a predefined catalogue. One important factor influencing the performance of the star tracker is the estimation of star centroid locations in the image. This process becomes difficult when noise exists. This work applies least squares support vector regression (LSSVR) with a Radial Basis Function (RBF) kernel to improve the estimation process.

The noise influencing the estimation process can be divided into two types: random noise and systematic noise. The random noise includes shot noise, dark current noise, CCD readout noise, and radiation noise, which are closely related to the hardware of the CCD sensor [2]. In order to obtain high-accuracy star locations in the image, sub-pixel centroid algorithms should be adopted, such as the center of mass (COM), polynomial, and B-spline interpolators [3]. The systematic noise is due to the nature of the centroid algorithm. The systematic noise of the centroid algorithm can cause an accuracy loss of several arc-seconds, so it is essential to analyze the systematic error and design a compensation method to improve the accuracy of star centroid location estimation in the image. In this paper, the systematic error is discussed in detail, while the random noise is only briefly analyzed.

The properties of the systematic error have been investigated by many scholars. In general, the systematic error of centroid estimation is related to the energy distribution of starlight on the star image (the Gaussian width), the sampling frequency, the size of the sampling window, and the actual position of the star point. Grossman et al. [4] pointed out that the systematic error is reduced by increasing degrees of blur and by wider defocusing of the starlight over neighboring pixels. However, Hegedus et al. [5] pointed out that the error first decreases and then increases as the star Gaussian width is increased. Stanton et al. [6] obtained a roughly sinusoidal functional relationship between the systematic error and the actual position of the star point under a fixed blur size. Alexander et al. [7] analyzed the systematic error caused by the center of mass algorithm through a spatial-frequency-based approach. Jean [8] supplemented Alexander's work and proposed a Fourier phase shift method to calculate the sub-pixel position under more complex signals. Rufino et al. [9] obtained the starlight intensity point spread function (PSF) considering diffraction and CCD defocus, and used a BP neural network to compensate the systematic error. Jia et al. [10] studied the systematic error using a frequency domain method considering the sampling frequency limitation and the sampling window limitation, and also proposed an analytical compensation algorithm to reduce the systematic error of star centroid estimation.

This paper analyzes the systematic error caused by the center of mass (COM) centroid estimation algorithm. Through frequency domain analysis and numerical simulations, it is found that the systematic error consists of an approximation error and a truncation error. The approximation error results from the discretization approximation, which arises when the spatial frequency of a star image is higher than the sampling frequency of the detector. The truncation error appears when the size of the sampling window is smaller than the Gaussian width of the star intensity distribution. A criterion for choosing the size of the sampling window is given to reduce the truncation error as much as possible. Through numerical simulations, the systematic error can be evaluated as a function of the actual star centroid position under different Gaussian widths of the star intensity distribution. In order to eliminate the systematic error, a novel systematic error compensation algorithm based on least squares support vector regression (LSSVR) with a Radial Basis Function (RBF) kernel is proposed. This algorithm can control the shape of the function estimation kernel and the prediction accuracy. The experimental results demonstrate that the proposed approach improves the accuracy of the star centroid position estimation dramatically.

The rest of this paper is organized as follows. In Section 2, the error of star centroid estimation algorithm is analyzed from three aspects through a frequency domain approach and numerical simulations: the integral error, the approximation error and the truncation error. A detailed description of our novel compensation algorithm based on the LSSVR is given in Section 3. In Section 4, the performance of the LSSVR compensation algorithm is evaluated. Finally, the conclusions of the paper are drawn in Section 5.

2. Error Analysis of Star Centroid Estimation Algorithm

The star centroid calculation is used to pinpoint the star location in the image. In order for digital centroid algorithms to achieve sub-pixel accuracy in star centroid position estimation, the star sensor camera should be defocused slightly so that the star energy is spread over several neighboring pixels [11]. The center of mass (COM) algorithm is the most widely used method to calculate the centroid position of star images, and the following error analysis is based on the COM algorithm [1,2,4,10].

2.1. The Integral Error of Center of Mass (COM) Algorithm

It is evident that the sub-pixel star centroid cannot be obtained directly from one single pixel. The COM algorithm uses several neighboring pixels around the brightest pixel to calculate the sub-pixel star centroid position. The ideal star centroid position in the image plane is (x̂c, ŷc), which can be computed by:

$$\hat{x}_c=\frac{\iint_W x\,I(x,y)\,\mathrm{d}x\,\mathrm{d}y}{\iint_W I(x,y)\,\mathrm{d}x\,\mathrm{d}y},\qquad \hat{y}_c=\frac{\iint_W y\,I(x,y)\,\mathrm{d}x\,\mathrm{d}y}{\iint_W I(x,y)\,\mathrm{d}x\,\mathrm{d}y}\tag{1}$$
where W is the sampling window area that includes all validated neighbor pixels around the starlight in the image plane, x and y are the coordinates of the pixels in W, and I(x, y) is the detected signal irradiance intensity at pixel (x, y). Equation (1) is the theoretical model of the COM algorithm; it must be discretized for use in digital computation. After discretization, Equation (1) can be written as:
$$\hat{x}_g=\frac{\sum_{i=1}^{n}x_i I_i}{\sum_{i=1}^{n}I_i},\qquad \hat{y}_g=\frac{\sum_{i=1}^{n}y_i I_i}{\sum_{i=1}^{n}I_i}\tag{2}$$
where x̂g and ŷg are the actual star centroid positions in the image plane after discretization. The n discrete pixels constituting the sampling window replace the continuous area W in Equation (1); xi and yi are the coordinates of the geometric center of the i-th pixel, and Ii is the irradiance intensity integrated over the i-th pixel.

From Equation (2), it can be seen that three factors influence the star centroid estimation accuracy: the size of the sampling window W, the pixel coordinates xi in W, and the signal intensity Ii in the corresponding pixels. The systematic error is caused by the discrete approximation of the coordinates xi and by truncating the sampling window W, while the uncertainty in detecting Ii leads to random noise. The 1-D situation in the x direction will be discussed; the analysis is equally valid for the x and y directions in the 2-D situation. Assuming the systematic error and the random noise are small and uncorrelated, the integral error of the COM can be described by the expression [9]:

$$\sigma_{\hat{x}_g}^2=\sum_{i=1}^{n}\left[\left(\frac{\partial \hat{x}_g}{\partial x_i}\right)^2\sigma_x^2+\left(\frac{\partial \hat{x}_g}{\partial I_i}\right)^2\sigma_I^2\right]\tag{3}$$
where σ_{x̂g} is the integration error of x̂g, σx is the systematic error resulting from substituting the pixel geometric center for the irradiance integration over the whole pixel and from truncating the sampling window, and σI is the random error caused by various noise sources, namely shot noise, dark current noise, CCD readout noise, radiation noise, etc.

Firstly, we consider the random error, which is caused by the uncertainty in detecting Ii. We assume that the measured signal intensity Ii at pixel xi consists of two components, a 'true' intensity Ei and a noise intensity σI, so that Ii = Ei + σI. The derivatives in Equation (3) can be computed from Equation (2), giving:

$$\sigma_{\hat{x}_g,I}^2=\sum_{i=1}^{n}(x_i-x_0)^2\,\frac{\sigma_I^2}{I_0^2}=\left(\frac{\sigma_I}{I_0}\right)^2\sum_{i=1}^{n}(x_i-x_0)^2\tag{4}$$
where the total signal $I_0=\sum_{i=1}^{n}I_i$, the 'true' signal $E_0=\sum_{i=1}^{n}E_i$, and the 'true' star centroid position $x_0=\sum_{i=1}^{n}x_iE_i/E_0$.

If σI is small, then I0 ≈ E0, and Equation (4) shows that the random error is inversely proportional to the signal-to-noise ratio (SNR). Enhancing the SNR therefore effectively reduces the random error. The random error analysis is not the key content of this study; many random noise elimination algorithms are described elsewhere [4,12] and are not covered in this paper.
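As a quick illustration of this inverse-SNR scaling, the following Python sketch (our own illustration, not part of the original simulations) applies the COM estimate of Equation (2) to a noisy 1-D Gaussian profile; the 5-pixel window, the Gaussian width, and the additive noise model are illustrative assumptions.

```python
import numpy as np

def random_error_std(snr, trials=20000, sigma_psf=0.5, x0=0.3, seed=0):
    """Monte Carlo standard deviation of the COM estimate at a given SNR.

    The 'true' pixel signal is a Gaussian profile sampled at the centers
    of a 5-pixel window (pixel pitch T = 1); zero-mean Gaussian noise with
    standard deviation E_max / snr is added to every pixel.
    """
    rng = np.random.default_rng(seed)
    x = np.arange(-2.0, 3.0)                                  # 5 pixel centers
    e = np.exp(-(x - x0) ** 2 / (2 * sigma_psf ** 2))         # 'true' intensities
    noise = rng.normal(0.0, e.max() / snr, size=(trials, e.size))
    signal = e + noise
    estimates = np.sum(x * signal, axis=1) / np.sum(signal, axis=1)  # COM, Eq. (2)
    return float(estimates.std())

e_low = random_error_std(snr=50)
e_high = random_error_std(snr=100)   # doubling the SNR roughly halves the error
```

Doubling the SNR roughly halves the standard deviation of the centroid estimate, in line with the 1/SNR dependence of Equation (4).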

The analysis of the systematic error is the main topic of this paper. From Equation (3), one can also use the derivative with respect to the parameter xi to determine the systematic error, which can be expressed as:

$$\sigma_{\hat{x}_g,x}^2=\sigma_x^2\sum_{i=1}^{n}\left(\frac{I_i}{I_0}\right)^2\tag{5}$$

As can be seen, the systematic error σ_{x̂g,x} cannot be calculated directly through Equation (5), because little information about σx is available in the time domain. In order to express the systematic error explicitly, we analyze it using a frequency-domain-based method and numerical simulations.

2.2. Theoretical Analysis of the Systematic Approximation Error under Sampling Frequency Limitation

In this section, frequency domain analysis is adopted to obtain more information about the relationship between the systematic error and the ideal star centroid position, considering only the sampling frequency limitation. When the spatial frequency of the star image is higher than the sampling frequency of the detector, one type of systematic error, termed the approximation error, arises in calculating the star centroid position. We derive an approximately sinusoidal relationship between the approximation systematic error and the ideal star centroid position. This theoretical relationship motivates the design of novel algorithms to compensate the systematic error.

The star image sampling process is illustrated in Figure 1 and can be divided into two steps. The waveform e(x) is the intensity profile of the starlight projected on the surface of the CCD. The signal intensity function e(x) is convolved with the pixel sensitivity function p(x) to generate the continuous pixel signal function f(x). After multiplying by the pixel sampling function t(x), we obtain the discrete signal function g(x), which can be written as:

$$f(x)=I(x,x_0)\ast p(x),\qquad g(x)=f(x)\cdot t(x)=\left[I(x,x_0)\ast p(x)\right]\cdot t(x)\tag{6}$$

When the CCD's fill factor is close to 100%, the pixel sensitivity function p(x) equals a rectangle function. t(x) is the sampling function, a comb function with sampling frequency fs = 1/T, where T is the pixel length. p(x) and t(x) are given as follows:

$$p(x)=\operatorname{rect}(x),\qquad t(x)=\operatorname{comb}(x)=\sum_{k=-\infty}^{\infty}\delta(x-kT)\tag{7}$$

The Fourier transform of the continuous function f(x) can be written as:

$$F(s)=\int_{-\infty}^{\infty}f(x)\exp(-2\pi isx)\,\mathrm{d}x\tag{8}$$
and the derivative F′(s) of F(s) can be expressed as:
$$F'(s)=-2\pi i\int_{-\infty}^{\infty}xf(x)\exp(-2\pi isx)\,\mathrm{d}x\tag{9}$$

Then the ideal centroid position x̂c of f(x) can be calculated through Equations (8) and (9), as stated by Alexander [7]:

$$\hat{x}_c=\frac{\int xf(x)\,\mathrm{d}x}{\int f(x)\,\mathrm{d}x}=\frac{F'(0)}{-2\pi iF(0)}\tag{10}$$

Likewise, the centroid of the sampled function g(x) can be written as:

$$\hat{x}_g=\frac{\int xg(x)\,\mathrm{d}x}{\int g(x)\,\mathrm{d}x}=\frac{G'(0)}{-2\pi iG(0)}\tag{11}$$

As described above, x̂c is the ideal star centroid position and x̂g is the actual star centroid position affected by the approximation systematic error. In the following, we analyze how the approximation systematic error influences x̂g and derive its theoretical model through frequency domain analysis.

Starlight can be viewed as a point light source, so the starlight intensity point spread function is reasonably approximated by a Gaussian function; the 2-D function can be written as [2,10,13]:

$$f(x,y)=\frac{I_0}{2\pi\sigma_{\mathrm{PSF}}^2}\exp\left[-\frac{(x-x_0)^2+(y-y_0)^2}{2\sigma_{\mathrm{PSF}}^2}\right]\tag{12}$$

Considering only the x direction, the 1-D case reduces to:

$$f(x)=\frac{I_0}{\sqrt{2\pi}\,\sigma_{\mathrm{PSF}}}\exp\left[-\frac{(x-x_0)^2}{2\sigma_{\mathrm{PSF}}^2}\right]\tag{13}$$
where x0 represents the ideal star centroid position, equal to x̂c, and σPSF is the Gaussian width parameter. From Equation (13), f(x) can be expressed as fe(x) shifted by an offset d from the origin, i.e.:
$$f(x)=f_e(x-d)\tag{14}$$

From Equation (13), it can be seen that d equals x0. The Fourier transform of f(x) is written as:

$$F(s)=\exp(-2\pi ids)F_e(s)\tag{15}$$
where Fe(s) is the Fourier transform of fe(x).

From Equation (11), the approximation systematic error σ_{x̂g,x} can be written as:

$$\sigma_{\hat{x}_g,x}=\hat{x}_g-x_0=\frac{G'(0)}{-2\pi iG(0)}-x_0=\frac{G'(0)}{-2\pi iG(0)}-d\tag{16}$$

From Equation (6), G(s) is given by the frequency-domain convolution G(s) = F(s) ∗ T(s). According to the form of t(x) in Equation (7) and the sampling frequency fs = 1/T, G(s) can be written as:

$$G(s)=\sum_{n=-\infty}^{\infty}F(s-n/T)=\sum_{n=-\infty}^{\infty}F(s-nf_s)=\sum_{n=-\infty}^{\infty}\exp[-2\pi id(s-nf_s)]F_e(s-nf_s)\tag{17}$$

Then the derivative of G(s) is:

G ( s ) = n = { 2 π id exp [ 2 π id ( s nf s ) ] F e ( s nf s ) } + n = exp [ 2 π id ( s nf s ) ] F e ( s nf s ) = 2 π id * G ( s ) + n = exp [ 2 π id ( s nf s ) ] F e ( s nf s )

Then substituting Equations (17) and (18) into (16) yields:

σ X ˜ g , x = G ( 0 ) 2 π iG ( 0 ) d = 2 π idG ( 0 ) + n = exp [ 2 π id ( s nf s ) ] F e ( s nf s ) | s = 0 2 π iG ( 0 ) d = n = exp [ 2 π id ( s nf s ) ] F e ( s nf s ) | s = 0 2 π iG ( 0 )

Substituting s = 0 into Equation (19), and noting that Fe(s) is even and Fe′(s) is odd, the numerator of σ_{x̂g,x} in Equation (19) can be calculated as:

$$\left.\sum_{n=-\infty}^{\infty}\exp[-2\pi id(s-nf_s)]F_e'(s-nf_s)\right|_{s=0}=F_e'(0)-\sum_{n=1}^{\infty}F_e'(nf_s)\left[\exp(2\pi id\,nf_s)-\exp(-2\pi id\,nf_s)\right]=-\sum_{n=1}^{\infty}2iF_e'(nf_s)\sin(2\pi d\,nf_s)\tag{20}$$

From Equation (17), the denominator of σ_{x̂g,x} can be obtained as:

$$G(0)=\left.\sum_{n=-\infty}^{\infty}\exp[-2\pi id(s-nf_s)]F_e(s-nf_s)\right|_{s=0}=F_e(0)+\sum_{n=1}^{\infty}\left[\exp(2\pi id\,nf_s)+\exp(-2\pi id\,nf_s)\right]F_e(nf_s)=F_e(0)+\sum_{n=1}^{\infty}2\cos(2\pi d\,nf_s)F_e(nf_s)\tag{21}$$

Substituting Equations (20) and (21) into Equation (19) gives the approximation systematic error σ_{x̂g,x} as:

$$\sigma_{\hat{x}_g,x}=\frac{-2i\sum_{n=1}^{\infty}F_e'(nf_s)\sin(2\pi d\,nf_s)}{-2\pi i\left[F_e(0)+\sum_{n=1}^{\infty}2\cos(2\pi d\,nf_s)F_e(nf_s)\right]}=\frac{\sum_{n=1}^{\infty}F_e'(nf_s)\sin(2\pi d\,nf_s)}{\pi\left[F_e(0)+\sum_{n=1}^{\infty}2\cos(2\pi d\,nf_s)F_e(nf_s)\right]}\tag{22}$$
Here fs = 1/T is the sampling frequency, and all distances are measured in units of the pixel length (T = 1); from Equation (14), d equals x0, so Equation (22) can be rewritten as:
$$\sigma_{\hat{x}_g,x}=\frac{\sum_{n=1}^{\infty}F_e'(n)\sin(2\pi nx_0)}{\pi\left[F_e(0)+\sum_{n=1}^{\infty}2\cos(2\pi nx_0)F_e(n)\right]}\tag{23}$$

From Equation (6), it follows that:

$$F_e(s)=\mathcal{F}\{f_e(x)\}=\mathcal{F}\{I(x,0)\ast\operatorname{rect}(x)\}=I_0\exp[-2(\pi s\sigma_{\mathrm{PSF}})^2]\,\frac{\sin(\pi s)}{\pi s}\tag{24}$$
and, at integer values of s, its derivative is
$$F_e'(s)=I_0\exp[-2(\pi s\sigma_{\mathrm{PSF}})^2]\,\frac{\cos(\pi s)}{s}\tag{25}$$
where ℱ{·} denotes the Fourier transform operation. Therefore:
$$F_e(0)=I_0,\qquad F_e(n)=0\ \ (n\geq 1),\qquad F_e'(n)=(-1)^nI_0\exp[-2(\pi n\sigma_{\mathrm{PSF}})^2]/n\tag{26}$$

Substituting Equation (26) into Equation (23) yields:

$$\sigma_{\hat{x}_g,x}=\frac{1}{\pi}\sum_{n=1}^{\infty}(-1)^n\exp[-2(\pi n\sigma_{\mathrm{PSF}})^2]\sin(2\pi nx_0)/n\tag{27}$$
Equation (27) is the theoretical expression of the approximation systematic error of star image centroid estimation for a Gaussian intensity distribution. Under a fixed sampling frequency (fs = 1), the approximation error σ_{x̂g,x} is related to the Gaussian width σPSF and the ideal star centroid position x0, and it decreases as the Gaussian width increases. Under the condition σPSF > 0.3, only the first term of the series is significant, and Equation (27) reduces to:
$$\sigma_{\hat{x}_g,x}=-\frac{1}{\pi}\exp[-2(\pi\sigma_{\mathrm{PSF}})^2]\sin(2\pi x_0)\tag{28}$$

From Equation (28), it is seen that there is an approximately sinusoidal relationship between σ_{x̂g,x} and x0, and the amplitude of σ_{x̂g,x} decreases as the Gaussian width σPSF increases. In the following, numerical simulations are also used to verify the theoretical expression of the approximation systematic error in Equation (28).
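The series of Equation (27) and its first-harmonic approximation of Equation (28) can be evaluated numerically; the following Python sketch (an illustration under the T = 1 convention above) makes the comparison concrete.

```python
import math

def approx_error_series(x0, sigma_psf, n_terms=20):
    """Approximation systematic error, Equation (27): truncated harmonic series."""
    return sum(
        ((-1) ** n / (math.pi * n))
        * math.exp(-2 * (math.pi * n * sigma_psf) ** 2)
        * math.sin(2 * math.pi * n * x0)
        for n in range(1, n_terms + 1)
    )

def approx_error_first_term(x0, sigma_psf):
    """First-harmonic approximation, Equation (28), valid for sigma_psf > 0.3."""
    return -(1.0 / math.pi) * math.exp(-2 * (math.pi * sigma_psf) ** 2) \
        * math.sin(2 * math.pi * x0)
```

For σPSF > 0.3 the higher harmonics are negligible, so the two functions agree closely, and the error amplitude shrinks rapidly as σPSF grows.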

In the numerical simulations, the ideal star centroid position x0 is varied from 0 to 1 with an interval of 0.002, and the Gaussian width σPSF is varied from 0.1 to 1.2 with an interval of 0.1. Because the starlight intensity point spread function (PSF) is reasonably approximated by the 2-D Gaussian function in Equation (12) and is symmetrical in the x and y directions, only the 1-D situation in the x direction is considered. The actual star centroid position x̂g can therefore be calculated by the following equation:

$$\hat{x}_g=\frac{\int xg(x)\,\mathrm{d}x}{\int g(x)\,\mathrm{d}x}=\frac{\sum_{i=1}^{n}x_iI_i}{\sum_{i=1}^{n}I_i}\tag{29}$$

Then, the approximation systematic error can be expressed by:

$$\sigma_{\hat{x}_g,x}=\hat{x}_g-x_0\tag{30}$$

One premise should be stated: the fill factor of the active pixel sensors is assumed to be 100%, and each pixel has the same photon response. The detected signal intensity of the i-th pixel is then:

$$I_i=\int_{x_i}^{x_{i+1}}I(x,x_0)\,\mathrm{d}x\tag{31}$$
where I(x, x0) equals f(x) in Equation (13).

The sampling window size is fixed at 5 × 5 pixels. Under different Gaussian widths, a group of curves relating the approximation systematic error σ_{x̂g,x} to the ideal star centroid position x0 can be obtained. The 3-D numerical simulation results of the relationship between σ_{x̂g,x} and x0 are shown in Figure 2.
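The simulation just described can be sketched in Python (a re-implementation for illustration, not the authors' MATLAB code). The pixel intensities of Equation (31) are computed with the Gaussian error function, and x0 is measured here as the sub-pixel offset from the central pixel's center, so it spans [−0.5, 0.5] instead of [0, 1].

```python
import numpy as np
from math import erf, sqrt

def pixel_intensities(x0, sigma_psf, window=5):
    """Equation (31): integrate the 1-D Gaussian PSF of Equation (13)
    over each pixel; pixel i spans [c_i - 0.5, c_i + 0.5], pitch T = 1."""
    centers = np.arange(window) - window // 2
    cdf = lambda u: 0.5 * (1.0 + erf((u - x0) / (sqrt(2.0) * sigma_psf)))
    intensities = np.array([cdf(c + 0.5) - cdf(c - 0.5) for c in centers])
    return intensities, centers

def com_error(x0, sigma_psf, window=5):
    """Systematic error (Equation (30)) of the COM estimate (Equation (29))."""
    intensities, centers = pixel_intensities(x0, sigma_psf, window)
    x_hat = float(np.sum(centers * intensities) / np.sum(intensities))
    return x_hat - x0
```

For small σPSF this reproduces the sinusoidal error of Equation (28); for σPSF around 0.9 the 5-pixel window cuts off the Gaussian tails and the error grows, which is the truncation effect analyzed in Section 2.3.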

From Figure 2, it can be seen that the systematic error σ_{x̂g,x} and x0 have an approximately sinusoidal relationship when the Gaussian width σPSF is small (σPSF < 0.5), which is consistent with the theoretical analysis in Equation (28); however, when σPSF is large, the relationship between σ_{x̂g,x} and x0 becomes linear. This is an interesting result, and another type of systematic error, termed the truncation systematic error, is introduced here to describe it. The approximation systematic error is caused by the sampling frequency limitation, while the truncation systematic error is caused by the sampling window limitation. The truncation error appears when the size of the sampling window is smaller than the Gaussian width, and it will be discussed in detail in the next section.

2.3. Theoretical Analysis of the Systematic Truncation Error under Sampling Window Limitation

In this section, we analyze the truncation error and give a criterion for choosing the sampling window size to reduce the systematic error as much as possible. The simulations above show that the truncation error appears when the sampling window size is relatively small. The sampling window area decides how many validated neighbor pixels around the star signal in the image plane are involved in calculating the star centroid position. Figure 3 demonstrates how the sampling window size introduces error into the star centroid position estimation.

Figure 3(a) shows the case where the Gaussian width σPSF is larger than the sampling window size. The sampled signal g(k) covers only part of g(x), truncating some effective pixels from the original star signal. With fewer pixels available for calculating the star centroid position, a truncation systematic error is introduced into the final estimate. In Figure 3(b), the sampling window size is larger than the Gaussian width σPSF. In this case, g(k) contains all the information of the star signal g(x), and the size of the sampling window does not cause a truncation systematic error. Under this condition, the error is dominated by the systematic approximation error.

Here, we also use the numerical simulations (designed in Section 2.2) to analyze the truncation systematic error. We select Gaussian widths from 0.1 to 1.2, run the simulations again, and present the 2-D experimental results under σPSF = 0.3, 0.4, 0.5, 0.7, 0.9, 1.1 in Figure 4, as well as the number of pixels occupied by the Gaussian curve under different Gaussian widths in Figure 5.

From Figure 4, it can be seen that the relationship between σ_{x̂g,x} and x0 changes from approximately sinusoidal to linear as the Gaussian width increases. Combining Figures 4 and 5, the reason for the truncation error becomes clear. When the Gaussian width is smaller than 0.5, the number of pixels occupied by the Gaussian curve in Figure 5 is smaller than the 5-pixel window size (the sampling window size selected in our numerical simulations). In this case, the systematic error is caused only by the sampling frequency limitation and is dominated by the approximation error. When the Gaussian width is larger than 0.5, the number of pixels occupied by the Gaussian curve exceeds the 5-pixel window size. In this case, the star signal is truncated by the smaller sampling window, and only part of the effective pixels can be involved in calculating the star centroid position. Under this condition, the error is dominated by the truncation systematic error.

In order to reduce the truncation error as much as possible, a criterion for choosing the size of the sampling window is put forth: the sampling window should be slightly larger than the Gaussian width. The Gaussian width (PSF size) is determined by the defocusing. For a small displacement δZ from the image plane, the Gaussian width increases, and its diameter is [14]:

$$D=\frac{\delta Z}{F_{\#}}\tag{32}$$
where F# is the f-number of the image sensor optics, and D is in μm.

The size of sampling window can be chosen following the function below:

$$W_{\mathrm{size}}=\operatorname{fix}\left(\frac{D}{\mathrm{pixel}_{\mathrm{size}}}\right)+1\tag{33}$$
where fix is the corresponding MATLAB function, which rounds its argument toward zero. The term pixel size is the single pixel size of the image plane (e.g., for the STAR250, the pixel size is 25 μm), and D/pixel size is the Gaussian width of the star signal in pixels. To make the sampling window larger than the Gaussian width, Wsize adds one additional pixel to the Gaussian width. With this choice, the truncation systematic error is reduced as much as possible, and the systematic error of the COM algorithm is dominated by the approximation error alone.
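The criterion of Equations (32) and (33) amounts to a one-line helper. The following Python sketch uses the 25 μm STAR250 pixel size from the text; the defocus displacement and f-number in the example are illustrative assumptions.

```python
import math

PIXEL_SIZE_UM = 25.0   # STAR250 single pixel size from the text

def window_size(delta_z_um, f_number):
    """Equations (32)-(33): blur diameter D = delta_Z / F#, then one extra
    pixel so the window exceeds the Gaussian width. For positive arguments,
    MATLAB's fix() (round toward zero) coincides with floor()."""
    d_um = delta_z_um / f_number
    return math.floor(d_um / PIXEL_SIZE_UM) + 1

# e.g., a 150 um defocus at F/1.5 gives a 100 um blur spot -> 5-pixel window
```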

Through appropriate numerical simulation, we can obtain the relationship between the systematic error σ_{x̂g,x}, the ideal star centroid position x0, and the actual star centroid position x̂g contaminated by the error. From Equation (30), the ideal star centroid position x0 is calculated as:

$$x_0=\hat{x}_g-\sigma_{\hat{x}_g,x}\tag{34}$$

3. The LSSVR Compensation Algorithm

The relationship between the systematic error σ_{x̂g,x} and the actual star centroid position x̂g is the basis of our compensation algorithm. We design a novel algorithm based on least squares support vector regression (LSSVR) to estimate the systematic error, which can then be used to eliminate the systematic error caused by the nature of the COM algorithm.

3.1. The Least Squares Support Vector Regression

The support vector machine (SVM) technique was developed by Vapnik in 1995 [15]. SVM is motivated by statistical learning theory and is based on the principle of structural risk minimization, which has been shown to be superior to the empirical risk minimization principle employed by traditional neural networks. It can be applied to classification and regression. SVR is used to find the underlying relationship between input and target output vectors, especially for modeling nonlinear relationships, and has been proven to be a powerful method for solving problems in nonlinear density estimation and function estimation [16,17]. LSSVR, proposed by Suykens, is an alternative formulation of SVR [18]. The reasons for choosing LSSVR for function estimation are its lower memory requirements and its achievement of a global solution with a fast training speed [19,20]. The primal ridge regression model of LSSVR for the function estimation problem is formulated as:

$$\min_{W,b,\xi}\ \mathcal{R}(W,b,\xi)=\frac{1}{2}W^{T}W+\gamma\frac{1}{2}\sum_{i=1}^{N}\xi_i^2\tag{35}$$
subject to the equality constraints:
$$y_i=W^{T}\varphi(x_i)+b+\xi_i,\qquad i=1,\ldots,N\tag{36}$$
where γ is a positive real constant and the ξi are slack variables. For this function estimation problem, the Lagrangian is:
$$L_N(W,b,\xi;\alpha)=\frac{1}{2}W^{T}W+\gamma\frac{1}{2}\sum_{i=1}^{N}\xi_i^2-\sum_{i=1}^{N}\alpha_i\left(W^{T}\varphi(x_i)+b+\xi_i-y_i\right)\tag{37}$$
where αi are Lagrange multipliers. The conditions for optimality are given by [21]:
$$\frac{\partial L_N}{\partial W}=0\ \Rightarrow\ W=\sum_{i=1}^{N}\alpha_i\varphi(x_i),\qquad\frac{\partial L_N}{\partial b}=0\ \Rightarrow\ \sum_{i=1}^{N}\alpha_i=0,$$
$$\frac{\partial L_N}{\partial \xi_i}=0\ \Rightarrow\ \alpha_i=\gamma\xi_i,\qquad\frac{\partial L_N}{\partial \alpha_i}=0\ \Rightarrow\ W^{T}\varphi(x_i)+b+\xi_i-y_i=0\tag{38}$$

After eliminating W and ξ, the Karush-Kuhn-Tucker (KKT) system is obtained as:

$$\begin{bmatrix}0 & I_n^{T}\\ I_n & K+\gamma^{-1}E\end{bmatrix}\begin{bmatrix}b\\ \alpha\end{bmatrix}=\begin{bmatrix}0\\ y\end{bmatrix}\tag{39}$$
where In = [1,…,1]T, α = [α1,…,αN]T, y = [y1,…,yN]T, and Kij = K(xi, xj) = φ(xi)Tφ(xj). K(·,·) is the kernel function, which can be expressed as the inner product of two vectors in some feature space. Many Mercer kernel functions K(x, xi) can be chosen, such as the hyperbolic tangent kernel K(xi, xj) = tanh(k xjTxi + θ), the polynomial kernel K(xi, xj) = (xjTxi + 1)d, and the RBF kernel K(xi, xj) = exp{−||xi − xj||2/σ2}. Finally, for an input x, the output of the LSSVR model can be predicted as:
$$f(x)=W^{T}\varphi(x)+b=\sum_{i=1}^{N}\alpha_i^{*}K(x_i,x)+b^{*}\tag{40}$$
where αi* and b* are the optimal solutions of Equation (39).

From Equation (40), it can be seen that LSSVR only solves a set of linear equations rather than the dual quadratic programming problem of SVR. Furthermore, if the RBF kernel is used, only two parameters (γ, σ) are needed for the LSSVR in Equation (39), whereas SVR requires, in addition to (γ, σ), the parameter ε of the ε-insensitive loss function. The low computational complexity of LSSVR makes it suitable for our systematic error compensation algorithm.
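A minimal LSSVR implementation therefore just builds and solves the linear system of Equation (39). The Python sketch below is our own illustration using the RBF kernel of the form given above; the toy training curve and the parameter values (γ, σ) are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(a, b, sigma):
    """RBF kernel K(x_i, x_j) = exp(-||x_i - x_j||^2 / sigma^2), 1-D inputs."""
    return np.exp(-(a[:, None] - b[None, :]) ** 2 / sigma ** 2)

def lssvr_fit(x, y, gamma, sigma):
    """Solve the (N+1)x(N+1) KKT system of Equation (39) for (b*, alpha*)."""
    n = len(x)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = A[1:, 0] = 1.0                                  # the 1_n borders
    A[1:, 1:] = rbf_kernel(x, x, sigma) + np.eye(n) / gamma    # K + E/gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[0], sol[1:]                                     # b*, alpha*

def lssvr_predict(x_new, x_train, b, alpha, sigma):
    """Equation (40): f(x) = sum_i alpha_i* K(x_i, x) + b*."""
    x_new = np.atleast_1d(np.asarray(x_new, dtype=float))
    return rbf_kernel(x_new, x_train, sigma) @ alpha + b

# fit a toy sinusoidal error curve with illustrative parameters
x_train = np.linspace(0.0, 1.0, 50)
y_train = 0.06 * np.sin(2 * np.pi * x_train)
b_star, alpha_star = lssvr_fit(x_train, y_train, gamma=1e4, sigma=0.1)
```

Note that training reduces to a single dense linear solve, which is exactly the computational advantage over the quadratic program of standard SVR.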

3.2. LSSVR Calculation

The LSSVR model is used for function estimation. In practice, we cannot obtain the ideal star centroid position x0, but we can obtain the actual star centroid position x̂g calculated by Equation (29). According to Equation (40), the LSSVR can be used to estimate the functional relationship between the systematic error σ_{x̂g,x} and the actual star centroid position x̂g. Using the RBF kernel, the estimation function can be written as:

$$\sigma(x)=\sum_{i=1}^{N}\alpha_i^{*}\exp\left[-(x_i-x)^2/(2\sigma^2)\right]+b^{*}\tag{41}$$
where x is the actual star centroid position x̂g in practical operation, and αi* and b* are the optimal solutions of Equation (39). When x̂g is input to the LSSVR model, it predicts the corresponding systematic error σ_{x̂g,x}, and Equation (34) then yields the ideal star centroid position x0. Through this operation, we achieve the aim of eliminating the systematic star centroid position error caused by the nature of the COM algorithm.
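Putting the pieces together, the whole compensation scheme can be sketched end-to-end in Python. This is an illustration under the same assumptions as before (1-D Gaussian PSF, 5-pixel window, σPSF = 0.3, x0 measured as an offset from the window center, and illustrative LSSVR parameters): simulate training pairs of actual position and systematic error, solve the KKT system of Equation (39), then apply Equation (34).

```python
import numpy as np
from math import erf, sqrt

SIGMA_PSF = 0.3     # Gaussian width (situation 1 in the text)
WINDOW = 5          # 5-pixel sampling window, pixel pitch T = 1

def com_estimate(x0):
    """COM centroid of a Gaussian spot whose pixel intensities follow
    Equation (31); x0 is the sub-pixel offset from the window centre."""
    centers = np.arange(WINDOW) - WINDOW // 2
    cdf = lambda u: 0.5 * (1.0 + erf((u - x0) / (sqrt(2.0) * SIGMA_PSF)))
    intensities = np.array([cdf(c + 0.5) - cdf(c - 0.5) for c in centers])
    return float(np.sum(centers * intensities) / np.sum(intensities))

# training samples: actual positions x_g and their systematic errors
x0_grid = np.linspace(-0.5, 0.5, 301)
xg = np.array([com_estimate(t) for t in x0_grid])
err = xg - x0_grid                                   # Equation (30)

# LSSVR with RBF kernel, trained by solving Equation (39);
# GAMMA and SIG are illustrative parameter choices
GAMMA, SIG = 1e4, 0.1
n = len(xg)
K = np.exp(-(xg[:, None] - xg[None, :]) ** 2 / SIG ** 2)
A = np.zeros((n + 1, n + 1))
A[0, 1:] = A[1:, 0] = 1.0
A[1:, 1:] = K + np.eye(n) / GAMMA
sol = np.linalg.solve(A, np.concatenate(([0.0], err)))
b_star, alpha = sol[0], sol[1:]

def compensate(x_g_measured):
    """Predict the systematic error with Equation (40), then apply
    Equation (34): x0 = x_g - predicted error."""
    k = np.exp(-(xg - x_g_measured) ** 2 / SIG ** 2)
    return float(x_g_measured - (k @ alpha + b_star))
```

On this sketch, the raw COM estimate carries a systematic error of a few hundredths of a pixel, while the compensated estimate recovers the true position to well under a millipixel.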

4. Experimental Results and Analysis

In this section, we design a number of experiments to verify the performance of the systematic error compensation algorithm based on least squares support vector regression. The experiments proceed in three steps. First, before using the LSSVR for function estimation, the training samples are obtained through numerical simulations. Second, several parameters influence the performance of the LSSVR for function estimation, so cross-validation is used to find their optimal values and guarantee the fitting and prediction accuracy of the LSSVR model. Third, the compensation algorithm is applied to the processing of a simulated star image to judge the performance of the proposed LSSVR systematic error compensation algorithm. All simulations are carried out on the MATLAB 7.1 software platform running on a Pentium IV 2.8 GHz processor.

4.1. Pre-Process the Training Samples

In order to use the LSSVR to regress the relationship in Equation (34) between the ideal star centroid position x0, the actual star centroid position x̂g (under the systematic error), and the systematic error σ_{x̂g,x} under different Gaussian widths, a number of numerical simulations are designed to obtain the relationship function among them. Considering the real image sensor STAR250: its image plane size is 512 × 512 pixels, the single pixel size is 25 μm, and the FOV size is 8° × 8°. The starlight projected onto the image plane can be viewed as a point light source, and the starlight intensity point spread function is reasonably approximated by the Gaussian function. Considering only the x direction, it can be expressed by Equation (13). We also assume the fill factor of the active pixel sensors is 100% and each pixel has the same photon response; the detected signal intensity is then given by Equation (31). As mentioned in Sections 2.2 and 2.3, two situations should be considered. The first is when the sampling window size is larger than the Gaussian width; in this case the systematic error is dominated by the approximation error. The second is when the Gaussian width is larger than the sampling window size; in this case the systematic error is composed of the approximation error and the truncation error. In actual operation, the Gaussian width increases as the starlight intensity is strengthened, so if a fixed-size sampling window is used, such as 3 × 3 or 5 × 5 pixels, both situations will occur. The experiments take full consideration of both situations, and the sampling window size is set to 5 × 5 pixels. The Gaussian width σPSF is set to 0.3 (situation 1) and 0.9 (situation 2), respectively. Other values of σPSF can be simulated using the same method to form the compensation template for eliminating the systematic error under different σPSF scenarios.

We assume a single star is projected at the position (50, 160). Considering only the x direction, the star centroid position in x will range from 50 to 51. We subdivide one pixel into 300 equal parts, so the ideal star centroid position x0 takes the values 50.0033, 50.0067, …, up to 51, with a simulation step of 1/300 ≈ 0.0033 pixels. If higher star centroid position accuracy is desired, one can reduce the simulation step, at the cost of increased computation time for training the LSSVR. For every trial, we record x0 and the corresponding actual star centroid position g; their difference is the systematic error σg, x. Under σPSF = 0.3 and 0.9, we obtain the relationships shown in Figure 6.
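The sample-generation procedure above can be sketched as follows. The `pixel_cdf` and `com_over_window` routines are our own illustrative reconstruction of the pixel-integrated Gaussian model of Equations (13) and (31) and of the center-of-mass estimator, not the authors' MATLAB code:

```python
import math

def pixel_cdf(u, x0, sigma_psf):
    # cumulative Gaussian PSF centred at x0 (Eq. (13), integrated)
    return 0.5 * (1.0 + math.erf((u - x0) / (math.sqrt(2.0) * sigma_psf)))

def com_over_window(x0, sigma_psf, half=2):
    """Center-of-mass estimate g over a (2*half+1)-pixel window around
    the brightest pixel, using the pixel-integrated signal of Eq. (31)
    with a 100% fill factor."""
    center = round(x0)
    num = den = 0.0
    for k in range(center - half, center + half + 1):
        # signal collected by pixel k, spanning [k - 0.5, k + 0.5]
        v = pixel_cdf(k + 0.5, x0, sigma_psf) - pixel_cdf(k - 0.5, x0, sigma_psf)
        num += k * v
        den += v
    return num / den

# sweep one pixel in 300 steps of 1/300 pixel, as in the text
samples = []
for i in range(1, 301):
    x0 = 50.0 + i / 300.0
    g = com_over_window(x0, sigma_psf=0.3)     # 5-pixel window, situation 1
    samples.append((g, g - x0))                # (input g, systematic error)

max_err = max(abs(e) for _, e in samples)
```

For σPSF = 0.3 this sweep yields a maximum systematic error on the order of 0.05–0.06 pixels, consistent with the curve of Figure 6(a).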

In Figure 6, we can see that the maximum systematic error is nearly 0.06 pixels under σPSF = 0.3 and nearly 0.1 pixels under σPSF = 0.9. For the STAR250, one pixel corresponds to 56.25″, so 0.06 pixels is approximately 4 arc-seconds, an error large enough to affect the accuracy of the star sensor. It is therefore necessary to design a compensation algorithm to reduce the systematic error. The 300 training samples can be used to train the LSSVR model to estimate the function above.

4.2. The Fitting Accuracy of the LSSVR

The fitting and prediction accuracy are the two main criteria used to judge the quality of our LSSVR model. Three parameters mainly influence them: the parameter σ of the RBF kernel, the degree d of the polynomial kernel, and the parameter γ of the slack variable in Equation (37). The number of training samples is 300, a relatively small number, so we employ the leave-one-out cross-validation approach to choose the optimal parameters. In the optimization of these parameters, the root mean squared error of prediction (RMSEP) of the assessment set is used as the evaluation criterion:

$$\mathrm{RMSEP}=\sqrt{\frac{1}{N}\sum_{i=1}^{N}\big[Y_i-f(x_i)\big]^2}=\sqrt{\frac{1}{N}\sum_{i=1}^{N}\Big[Y_i-\Big(\sum_{j=1}^{N}\alpha_j^{*}K(x_j,x_i)+b^{*}\Big)\Big]^2}\quad(42)$$
where Yi is the ideal star centroid position x0, f(xi) is the prediction output of the LSSVR model (with the actual star centroid position g as input), and N is the number of prediction samples. Using the criterion of Equation (42), we compared the performance of the RBF kernel and the polynomial kernel. The RMSEP of the RBF kernel is smaller than that of the polynomial kernel by at least one order of magnitude, so we choose the RBF kernel, and the LSSVR parameters σ and γ are optimized; σ = 2 and γ = 2.6 × 105 are used in the calculation. The performance of the regression of the LSSVR is shown in Figure 7.
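As a concrete illustration of the trained model, a minimal numpy sketch of the LSSVR in its dual form (the Suykens linear KKT system [18,19]) is given below. The training curve is a synthetic sinusoid standing in for the measured (g, σg, x) pairs, and the kernel width and γ used here are illustrative values chosen for the synthetic data, not the paper's tuned optimum:

```python
import numpy as np

def lssvr_fit(x, y, gamma, sigma):
    """Train a 1-D LSSVR with RBF kernel by solving the KKT linear system
    [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]."""
    x = np.asarray(x, float)
    K = np.exp(-(x[:, None] - x[None, :]) ** 2 / (2.0 * sigma ** 2))
    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0                     # bias constraint row
    A[1:, 0] = 1.0                     # bias column
    A[1:, 1:] = K + np.eye(n) / gamma  # regularized kernel block
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    b, alpha = sol[0], sol[1:]

    def predict(xq):
        xq = np.asarray(xq, float)
        Kq = np.exp(-(xq[:, None] - x[None, :]) ** 2 / (2.0 * sigma ** 2))
        return Kq @ alpha + b          # f(x) = sum_j alpha_j K(x_j, x) + b
    return predict

# synthetic stand-in for the 300 (g, sigma_{g,x}) training pairs
g = np.linspace(50.0, 51.0, 300)
err = 0.06 * np.sin(2.0 * np.pi * g)            # sinusoid-like error curve
predict = lssvr_fit(g, err, gamma=1.0e4, sigma=0.1)
fit_err = np.max(np.abs(predict(g) - err))      # in-sample fitting error
```

Evaluating the RMSEP of Equation (42) in leave-one-out fashion over a grid of (σ, γ) with the same routine reproduces the parameter-selection step described above.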

In Figure 7, we can see that the fitting curve nearly overlaps the relationship function in Figure 6, which illustrates that the fitting accuracy of the LSSVR is very high in both situations. The corresponding fitting errors of the LSSVR are shown in Figure 8.

The fitting error is defined as the difference between the actual systematic error σg, x and the predicted systematic error σlssvr, which is the output of the LSSVR model. From Figure 8, we can see that in both situations the maximum fitting error of the LSSVR is smaller than 4 × 10−5 pixels. However, a high fitting accuracy alone cannot fully characterize the performance of the LSSVR model; what we are most concerned with is its prediction accuracy.

4.3. The Prediction Accuracy of the LSSVR

First, we give the definition of the prediction accuracy of the LSSVR model. We use the LSSVR model to predict the systematic error σlssvr from the input actual star centroid position g; the star centroid position after compensation is then calculated as:

$$\tilde{x}_0 = g - \mathrm{LSSVR}_{\mathrm{predict}}(g) = g - \sigma_{\mathrm{lssvr}}\quad(43)$$
where g is the actual star centroid position in practical operation (input of the LSSVR), σlssvr is the predicted systematic error (output of the LSSVR), and x̃0 is the star centroid position after compensation. From Equation (43), we obtain the prediction error of the LSSVR model as:
$$\zeta_{\mathrm{predict}} = \tilde{x}_0 - x_0\quad(44)$$
where x0 is the ideal star centroid position and ζpredict is the prediction error of the LSSVR.

With the optimal parameters, an LSSVR model was trained using the 300 samples of Section 4.1. To test the prediction performance of the trained model, we select 500 star points projected randomly onto the CCD image plane. Again considering only the x direction, the 500 star centroid positions range from 100 to 201. The experiments are shown in Figure 9.

The results of the 500 random experiments under σPSF = 0.3 and σPSF = 0.9 are shown on the left side of Figure 9(a,b); the right sides are the corresponding enlarged views. The blue line is the ideal star position x0 and the red line is the compensated star centroid position x̃0. From the right side of Figure 9, we can see that every compensated x̃0 is very close to its corresponding ideal position x0, which demonstrates that our trained LSSVR model achieves high prediction accuracy. The prediction error of the LSSVR model is shown in Figure 10.

From Figure 10, we can see that the prediction errors of our LSSVR model are smaller than 6 × 10−5 pixels in both situations, σPSF = 0.3 and σPSF = 0.9. The result shows that the proposed compensation algorithm achieves high star centroid position accuracy under different Gaussian widths. The accuracy of our systematic error compensation algorithm is much higher than that of methods proposed by other scholars, such as the neural network method [9], which reaches 5 × 10−3 pixel accuracy, and the analytical compensation method [10], which reaches 2 × 10−4 pixel accuracy.

4.4. The Performance of the Compensation Algorithm in Simulations

In addition to the single star point simulations, we also apply the compensation algorithm to simulated star image testing. We select a star sensor field of view (FOV) pointing randomly, with right ascension, declination and rotation angle of (130, 60, 60) degrees. The FOV size is 18 degrees. Using the SKY2000 version 4 star catalog (developed by NASA's Goddard Space Flight Center), the stars in the image all have magnitudes lower (brighter) than 6.5, and σPSF = 0.3. The simulated star image is shown in Figure 11.
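A simulated image of this kind can be produced by projecting each catalog star through an ideal pinhole camera model. The sketch below uses the standard gnomonic projection; the focal length and pixel size are illustrative assumptions, and the roll rotation and lens distortion are omitted:

```python
import math

def project_star(ra, dec, ra0, dec0, f_mm=50.0, pix_um=25.0):
    """Gnomonic (pinhole) projection of a catalog star onto the focal
    plane, in pixels relative to the boresight (ra0, dec0).
    Angles are in degrees; f_mm and pix_um are illustrative values."""
    a, d = math.radians(ra), math.radians(dec)
    a0, d0 = math.radians(ra0), math.radians(dec0)
    # cosine of the angular distance between star and boresight
    cosc = (math.sin(d0) * math.sin(d)
            + math.cos(d0) * math.cos(d) * math.cos(a - a0))
    # standard gnomonic tangent-plane coordinates
    xi = math.cos(d) * math.sin(a - a0) / cosc
    eta = (math.cos(d0) * math.sin(d)
           - math.sin(d0) * math.cos(d) * math.cos(a - a0)) / cosc
    scale = f_mm * 1000.0 / pix_um        # focal length in pixel units
    return xi * scale, eta * scale
```

A star on the boresight maps to (0, 0), and a star 1° off in declination lands roughly f·tan 1° from the image center, as expected for a pinhole model.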

In Figure 11, we can see that there are 20 stars in the star image. We select 10 of them to compare their errors before and after compensation. The results are shown in Table 1.

From the experiments above, we find that the systematic error compensation based on least squares support vector regression achieves high accuracy star centroid position estimation and meets the high attitude pointing accuracy requirements of star sensors.

4.5. The Performance of the Compensation Algorithm in Actual Images Experiments

In addition to the simulated image testing, we also apply the compensation algorithm to actual images. The actual night sky images were captured at NAOC's observation station in Xinglong, Hebei Province (China), in December 2009. We took about 900 images in different directions with a Canon 20D camera, whose focal length is 50 mm; the pixel size is 6.42 μm, the field of view is 25.36 × 17.06 degrees, and the image plane size is 3,504 × 2,336 pixels. To reduce the effects of image distortion, we used only the central 12 × 12 degree field of view of each image. One actual night sky image is shown in Figure 12.

We used the zenith observation method to test the accuracy of the star tracker [22,23]. The zenith method treats the Earth as a uniformly rotating turntable. It requires a high accuracy spirit level to ensure the star tracker points in the zenith direction. The star tracker captures images of the zenith and calculates its attitude. We then use astronomical calculations to determine the ideal zenith direction at the shooting time. By comparing the star tracker's attitude with the ideal zenith attitude, we can evaluate the accuracy of the star tracker and thus verify the effectiveness of our LSSVR compensation algorithm.
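The ideal zenith direction follows from the fact that the zenith's right ascension equals the local sidereal time and its declination equals the site latitude. The sketch below uses the common polynomial approximation of Greenwich Mean Sidereal Time and neglects precession, nutation and the UT1−UTC offset, which the full method of [22,23] would account for:

```python
def zenith_radec(jd_ut1, lon_deg, lat_deg):
    """Approximate (RA, Dec) in degrees of the local zenith at Julian
    date jd_ut1. RA = GMST + east longitude; Dec = geodetic latitude."""
    d = jd_ut1 - 2451545.0                        # days since J2000.0
    gmst = (280.46061837 + 360.98564736629 * d) % 360.0
    return (gmst + lon_deg) % 360.0, lat_deg

# e.g., near the Xinglong station (site coordinates here are
# approximate assumptions, not the surveyed values)
ra, dec = zenith_radec(2455186.5, 117.6, 40.4)
```

Each of the 66 shooting times is converted to a Julian date and fed through this calculation to obtain the corresponding ideal zenith attitude.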

Of the 900 actual night sky images, about 100 point at the zenith. We selected 66 of them, taken under different noise conditions, to test the accuracy of the star tracker and thus our LSSVR compensation algorithm. From the 66 actual images, the star tracker computes 66 attitude directions; from the shooting times and location, the zenith observation method yields the corresponding 66 ideal zenith directions. Before calculating the accuracy of the star tracker, we must eliminate the constant bias on the star tracker's optical axis caused by assembly. We choose 10 of the 66 images to calculate the mean constant bias on the optical axis. After eliminating this bias, we obtain the accuracy of the star tracker on the yaw axis and the roll axis. The experimental results are shown in Figure 13.
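The bias-removal step can be sketched as follows; the residual values are synthetic placeholders, since the real attitude measurements are not reproduced here:

```python
import numpy as np

# synthetic yaw-axis residuals (arcsec) for 66 zenith images:
# a constant assembly bias plus random measurement noise
rng = np.random.default_rng(0)
residuals = 12.0 + rng.normal(0.0, 3.0, size=66)

calib = residuals[:10]              # 10 images reserved for bias estimation
bias = calib.mean()                 # mean constant bias on the optical axis
corrected = residuals[10:] - bias   # accuracy assessed on the remaining 56
```

After subtracting the estimated constant bias, the remaining residuals scatter around zero, and their spread measures the star tracker accuracy on that axis.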

From Figure 13, we can see that the accuracy of the star tracker after compensation is higher than before compensation. Since the 66 actual images were taken under different random noise conditions, these experiments also test the performance of our compensation algorithm under random noise. Figure 13 shows that when the random noise is large, the improvement from compensation is not obvious, whereas when the random noise is small, the accuracy of the star tracker after compensation is very high. Improving the performance of our LSSVR compensation algorithm under large random noise conditions is left for future work.

5. Conclusions

This paper analyzed the systematic error of star image centroid estimation using frequency domain analysis and numerical simulations. The sampling frequency limitation and the sampling window size limitation are fully considered, and the systematic error is accordingly divided into an approximation error and a truncation error. Through the frequency domain analysis, approximately sinusoidal and linear relationships between the systematic error and the actual star centroid position are obtained under the sampling frequency limitation and the sampling window size limitation, respectively. A novel systematic error compensation algorithm based on the LSSVR is presented, and a number of experiments covering the two types of systematic error are designed to test it. Simulation results show that after compensation, the residual systematic error of star centroid estimation is less than 6 × 10−5 pixels for a 5 × 5 pixel sampling window. Compared with the neural network method and the analytical compensation algorithm, the proposed method is one to two orders of magnitude more accurate and can meet the requirements of high accuracy star sensors. Since we have not considered the influence of random noise on the proposed method, the performance of our LSSVR compensation algorithm under large random noise conditions will be studied in future work.

Acknowledgments

This research was supported by the National Basic Research Program of China (973 Program, No.2010CB731800) and Chinese Aviation Science Foundation Grant (2008758003).

References

  1. Liebe, CC. Accuracy performance of star trackers–A tutorial. IEEE Trans. Aero. Electron. Syst 2002, 38, 587–599. [Google Scholar]
  2. Katake, AB. Modeling, Image Processing and Attitude Estimation of High Speed Star Sensors. Ph.D. Thesis, Texas A&M University, College Station, TX, USA. 2006, 33–37. [Google Scholar]
  3. Jahne, B. Practical Handbook on Image Processing for Scientific Application, 2nd ed; CRC Press: Boca Raton, FL, USA, 2004. [Google Scholar]
  4. Grossman, SB; Emmons, RB. Performance analysis and size optimization of focal planes for point-source tracking algorithm applications. Opt. Eng 1984, 23, 167–176. [Google Scholar]
  5. Hegedus, ZS; Small, GW. Shape measurement in industry with sub-pixel definition. Acta Polytech. Scand. Appl 1985, 150, 101–104. [Google Scholar]
  6. Stanton, RH; Alexander, JW; Dennison, EW; Glavich, TA; Hovland, LF. Optical tracking using charge-coupled devices. Opt. Eng 1987, 26, 930–938. [Google Scholar]
  7. Alexander, BF; Kim, CN. Elimination of systematic error in subpixel accuracy centroid estimation. Opt. Eng 1991, 30, 1320–1330. [Google Scholar]
  8. Jean, PF. Subpixel accuracy location estimation from digital signals. Opt. Eng 1992, 31, 2465–2471. [Google Scholar]
  9. Rufino, G; Accardo, D. Enhancement of the centroiding algorithm for star tracker measure refinement. Acta Astronaut 2003, 53, 135–147. [Google Scholar]
  10. Jia, H; Yang, JK; Li, XJ; Yang, JC; Yang, MF; Liu, YW; Hao, YC. Systematic error analysis and compensation for high accuracy star centroid estimation of star tracker. Sci. China Ser. E: Eng. Mater. Sci 2010, 53, 3145–3152. [Google Scholar]
  11. Eisenman, AR; Liebe, CC. The Advancing State-of-the-Art in Second Generation Star Trackers. Proceedings of the 2nd IEEE International Conference on Aerospace Applications, Aspen, CO, USA, 1–4 September 1998.
  12. Faraji, H; Maclean, WJ. CCD noise removal in digital images. IEEE Trans. Image Process 2006, 15, 2676–2685. [Google Scholar]
  13. Li, YF; Hao, ZH. Research of hyper accuracy subpixel subdivision location algorithm for star image. Opt. Technol 2005, 31, 666–671. [Google Scholar]
  14. Smith, WJ. Modern Lens Design: A Resource Manual; McGraw-Hill: New York, NY, USA, 1992; Volume 47. [Google Scholar]
  15. Vapnik, V. The Nature of Statistical Learning Theory; Springer-Verlag: New York, NY, USA, 1995. [Google Scholar]
  16. Vapnik, V; Golowich, S; Smola, A. Support vector method for function approximation, regression estimation and signal processing. Adv. Neural Inf. Proc. Syst 1996, 9, 281–287. [Google Scholar]
  17. Scholkopf, B; Smola, A. A tutorial on support vector regression. Stat. Comput 2004, 14, 199–222. [Google Scholar]
  18. Suykens, JAK; Vandewalle, J. Least squares support vector machine classifiers. Neural Process. Lett 1999, 9, 293–300. [Google Scholar]
  19. Suykens, JAK; Gestel, TV; Brabanter, JD; Moor, BD; Vandewalle, J. Least Squares Support Vector Machines; World Scientific: Singapore, 2002. [Google Scholar]
  20. Li, YK; Shao, XG; Cai, WS. A consensus least squares support vector regression (LS-SVR) for analysis of near-infrared spectra of plant samples. Talanta 2007, 72, 217–222. [Google Scholar]
  21. Goodarzi, M; Freitas, MP; Wu, CH; Duchowicz, PR. pKa modeling and prediction of a series of pH indicators through genetic algorithm-least square support vector regression. Chemom. Intell. Lab 2010, 101, 102–109. [Google Scholar]
  22. Li, CY; Li, HF; Sun, CH. Astronomical calibration method and observation analysis for high-accuracy star sensor. Opt. Precis. Eng 2006, 14, 558–562. [Google Scholar]
  23. IERS Technical Note No. 32. Available online: http://www.iers.org/nn_11216/SharedDocs/Publikationen/EN/IERS/Publications/tn/TechnNote32/tn32,templateId=raw,property=publicationFile.pdf/tn32.pdf (accessed on 25 April 2010).
Figure 1. The process of star image sampling: e(x) is the starlight stripe intensity profile function; p(x) is the pixel sensitivity function; f(x) is the continuous pixel signal function; t(x) is the sampling function; g(x) is the discrete pixel signal function.
Figure 2. Numerical simulations of the relationship between the approximation systematic error of the COM algorithm and the ideal star centroid positions under different Gaussian widths.
Figure 3. (a) The width of Gaussian is larger than the sampling window size; (b) The width of Gaussian is smaller than the sampling window size.
Figure 4. The 2-D result of systematic error of star centroid position estimation under different Gaussian widths.
Figure 5. The number of pixels occupied by a star under different Gaussian widths σPSF.
Figure 6. (a) The relationship curve between g and σg, x under σPSF = 0.3. (b) For σPSF = 0.9.
Figure 7. (a) The fitting performance of the LSSVR under σPSF = 0.3. (b) For σPSF = 0.9.
Figure 8. (a) The fitting accuracy of the LSSVR under σPSF = 0.3. (b) For σPSF = 0.9.
Figure 9. (a) Experiments of random star positions under σPSF = 0.3. (b) For σPSF = 0.9.
Figure 10. (a) The star centroid position error before and after compensation under σPSF = 0.3. (b) For σPSF = 0.9.
Figure 11. The simulated star image pointing at (130, 60, 60).
Figure 12. One actual night sky image with a 12 × 12 degree FOV.
Figure 13. (a) The accuracy of the star tracker on the yaw axis. (b) The accuracy of the star tracker on the roll axis.
Table 1. The systematic error before and after compensation of the simulated star image.
Star number | Ideal x position | Actual x position before compensation | Error before compensation (pixel) | Actual x position after compensation | Error after compensation (pixel)
1 | 188.533005 | 188.574817 | 0.041812 | 188.5330528 | 0.0000478
2 | 33.886746 | 33.898649 | 0.011903 | 33.8867553 | 0.0000093
3 | −83.046154 | −83.009843 | 0.036311 | −83.0461024 | 0.0000516
4 | 200.032901 | 199.976565 | 0.056336 | 200.0329395 | 0.0000385
5 | 94.366492 | 94.328661 | 0.037831 | 94.3664767 | 0.0000153
6 | −38.586794 | −38.600159 | 0.013365 | −38.5867838 | 0.0000102
7 | 24.180883 | 24.170196 | 0.010687 | 24.1808541 | 0.0000289
8 | 79.488555 | 79.526198 | 0.037643 | 79.4885707 | 0.0000157
9 | 69.740746 | 69.773062 | 0.032316 | 69.7407809 | 0.0000349
10 | −95.161995 | −95.120380 | 0.041615 | −95.1620123 | 0.0000173

Share and Cite

MDPI and ACS Style

Yang, J.; Liang, B.; Zhang, T.; Song, J. A Novel Systematic Error Compensation Algorithm Based on Least Squares Support Vector Regression for Star Sensor Image Centroid Estimation. Sensors 2011, 11, 7341-7363. https://doi.org/10.3390/s110807341
