Open Access Article

Celestial Object Imaging Model and Parameter Optimization for an Optical Navigation Sensor Based on the Well Capacity Adjusting Scheme

Key Laboratory of Precision Opto-mechatronics Technology, Ministry of Education, School of Instrumentation Science and Opto-electronics Engineering, Beihang University, No. 37 Xueyuan Road, Haidian District, Beijing 100191, China
* Author to whom correspondence should be addressed.
Academic Editor: Francesco De Leonardis
Sensors 2017, 17(4), 915; https://doi.org/10.3390/s17040915
Received: 3 January 2017 / Revised: 30 March 2017 / Accepted: 19 April 2017 / Published: 21 April 2017
(This article belongs to the Section Physical Sensors)

Abstract

The simultaneous extraction of optical navigation measurements from a target celestial body and star images is essential for autonomous optical navigation. Generally, a single optical navigation sensor cannot capture well-exposed images of the target celestial body and stars simultaneously because their irradiance difference is large. Multi-sensor integration or complex image processing algorithms are commonly utilized to solve this problem. This study analyzes and demonstrates the feasibility of imaging the target celestial body and stars well-exposed within a single exposure through a single field of view (FOV) optical navigation sensor using the well capacity adjusting (WCA) scheme. First, the irradiance characteristics of the celestial body are analyzed. Then, the celestial body edge model and star spot imaging model are established for the case in which the WCA scheme is applied. Furthermore, the effect of exposure parameters on the accuracy of star centroiding and edge extraction is analyzed using the proposed models. Optimal exposure parameters are also derived by Monte Carlo simulation to obtain the best performance of the navigation sensor. Finally, laboratory and night sky experiments are performed to validate the correctness of the proposed models and optimal exposure parameters.
Keywords: optical navigation sensor; well capacity adjusting; star centroid estimation; edge extraction; exposure parameter optimization

1. Introduction

Optical autonomous navigation is a key technology in deep space exploration. This process is usually accomplished by multi-sensor integration, combining star sensors, navigation cameras, inertial measurement devices, and other equipment. The attitude information of a spacecraft is obtained by a star sensor and an inertial measurement element. The navigation camera captures the target celestial image with background stars and extracts the target celestial line-of-sight (LOS) vector according to the current spacecraft attitude. The spacecraft position can then be calculated by integrating these optical navigation measurements according to their geometric relationship. An optical navigation system based on multi-sensor integration is not only structurally complicated, costly, and power-hungry, but also suffers from installation errors between sensors, which further restrict the achievable navigation accuracy. The best solution for deep space exploration missions is a single navigation sensor that can simultaneously obtain the attitude and the LOS vector from the sensor to the centroid of the target celestial body. This approach requires the sensor to image the stars and the target celestial body and to extract their navigation measurements simultaneously. However, a large gap exists between the irradiance of stars and that of the target celestial body. For reference, standard image sensors have a dynamic range of 40 dB to 70 dB [1], which is insufficient to ensure that the target celestial body and stars are well-exposed simultaneously. The problem of insufficient dynamic range can generally be solved in three ways.
The design of optical systems to lower the incident flux of a celestial body is first considered. In Reference [2], the combined Earth-/star sensor for attitude and orbit determination of geostationary satellites is investigated. The combined Earth-/star sensor has two fields of view (FOV) for the observation of the Earth and stars. The two FOVs are combined on the detector through a beamsplitter. The partially transmissive mirror reflects 91% of starlight onto the detector, while transmitting only 9% of the Earth’s brightness. The Earth’s incident flux is further reduced by a filter. This method directly lowers the incident flux of the high irradiance target from the source, which is convenient for the subsequent image processing. A disadvantage of this approach is the complexity of the optical system design, higher weight, and higher costs.
The second method enhances the dynamic range by image processing algorithms. In [3,4,5,6], multi-exposure fusion techniques are adopted to enhance the dynamic range. A set of different exposure images is obtained. Then, these images are fused into an image where all scenes or areas of interest appear well exposed. The advantage of the multi-exposure technique is that it can enhance the dynamic range without degrading the signal-to-noise ratio (SNR) [7]. The main drawback of exposure fusion is its limitation to static scenes, and any object movement incurs severe ghosting artifacts in the fused result. Given that a spacecraft is always in the motion state, this method is inapplicable in such condition.
New image sensor designs have also been proposed to attain an extended dynamic range. The photocurrent in logarithmic-response image sensors is fed to a resistor with a logarithmic current-voltage characteristic [8,9,10]. Logarithmic-response image sensors can obtain a wide dynamic range, but they have several disadvantages (i.e., image lag, low SNR, large fixed pattern noise, and poor image quality). The undesired lag effect is most pronounced in low-light conditions and is caused by a long settling time constant that can exceed the frame time. Another wide-dynamic-range image sensor based on time-to-saturation information was reported in [11,12,13,14]. Each pixel measures and records the time required to reach its saturation level, and the incident light is extrapolated from this time. The light intensity is derived from the time information stored in the memory of each pixel, and the final image is then reconstructed. However, each pixel of the detector requires a signal detection circuit, comparator, digital memory, and other components to detect the saturation state and store the time information. This results in large pixel sizes and a low fill factor, which limit the sensor resolution. The well capacity adjusting (WCA) scheme described by Knight [15] and Sayag [16] and implemented by Decker [17] compresses the sensor's charge versus photocurrent response curve using a lateral overflow gate. This technology is currently widely employed by integrating a lateral overflow integration capacitor into the pixel of complementary metal-oxide-semiconductor (CMOS) detectors [18,19,20]. The well capacity is monotonically increased once or multiple times up to its maximum value during integration. The accumulated photoelectrons of high irradiance signals are suppressed, whereas low irradiance signals are unaffected. The WCA scheme enhances the dynamic range, but at the expense of substantial degradation in SNR.
In [21], the navigation camera used a sequence of long and short exposures for optical navigation. A short exposure in which the celestial body is not saturated permits determination of the celestial body's location within the image, while a long exposure permits determination of the stars' locations. Setting a long exposure time so that dim stars satisfy the detection SNR limit is necessary to give the navigation sensor a reliable attitude determination function. However, a long exposure time can lead to overexposure of the target celestial body, which expands the imaged shape because the apparent diameter increases, and produces a strong stray light effect that can overwhelm the response to the target stars. Given that the WCA scheme is widely utilized in CMOS image sensors, a high dynamic range image can be obtained in a single exposure with this technique.
This study first analyzes the irradiance characteristics of a celestial body. Then, the celestial body edge model and star spot imaging model are established for the case in which the WCA scheme is applied. The effect of exposure parameters on the accuracy of star centroid estimation and edge detection is analyzed based on the proposed models. The exposure parameters are optimized to ensure that the optical navigation measurements satisfy the requirement of navigation accuracy. Compared with conventional navigation sensors, this study provides a feasible approach to a miniaturized single-FOV optical navigation sensor, which costs less, weighs less, is simple to design, and can simultaneously obtain attitude information and the LOS vector from the sensor to the centroid of the target celestial body. This navigation sensor has strong applicability and can be utilized for a variety of navigation tasks.

2. Irradiance Characteristics of a Celestial Body

The study of the irradiance characteristics of a celestial body is the prerequisite for the optimization of the exposure parameters of navigation sensors. The irradiance of a celestial body received by the detector is analyzed in this section.
Figure 1 shows the spatial position relationship between the navigation sensor and the observed target. A planet typically shines by reflecting solar radiation. The radiation of the sun is assumed to be isotropic, so its irradiance is inversely proportional to the square of the distance [22]. Thus, the irradiance received at the target celestial surface is expressed as:
$$I_P = I_{sun} \left( \frac{r_S}{R_{SP}} \right)^2 \quad (1)$$
where $I_{sun}$ is the irradiance of the sun, $r_S$ is the radius of the sun, and $R_{SP}$ is the distance between the sun and the target celestial body. The incident energy on the surface of a planet is only partially reflected back into cosmic space, whereas the rest is absorbed by the surface. $r_P$ is defined as the radius of the celestial body, and $A$ is the Bond albedo, which represents the fraction of energy incident on an astronomical body that is scattered back into space at all wavelengths and phase angles. Thus, the total radiant flux reflected by the surface of the celestial body is expressed as:
$$L_R = \pi A r_P^2 \frac{r_S^2}{R_{SP}^2} I_{sun} \quad (2)$$
We consider that the navigation sensor observes the planet at a distance $R_{PC}$. In most cases, only part of the illuminated area of the celestial body can be observed. If the surface of the planet is assumed to be homogeneous, then the distribution of the reflected radiant energy depends on only two factors: the distance $R_{PC}$ and the phase angle $\xi$, which is the angle at the celestial body between the Sun and the observer. The phase function $P(\xi)$ is the ratio of the reflected radiant flux at phase angle $\xi$ to the radiant flux at zero phase angle; when $\xi = 0°$, $P(\xi) = 1$. Thus, the irradiance received by the navigation sensor is expressed in the following form:
$$I_C = \frac{C P(\xi) L_R}{4\pi R_{PC}^2} = \frac{C A r_P^2 r_S^2}{4 R_{PC}^2 R_{SP}^2} P(\xi) I_{sun} \quad (3)$$
where $C$ is a constant. The total reflected energy of the celestial body is distributed over a spherical surface. As the distance increases, the radiant flux density decreases, but the total radiant flux remains the same. Therefore, the total radiant flux over the sphere of radius $R_{PC}$ centered on the celestial body is expressed as:
$$L_R = \oint_S I_C \, dS = \oint_S \frac{C P(\xi) L_R}{4\pi R_{PC}^2} \, dS \quad (4)$$
Figure 2 shows that the area element $dS$ can be expressed in terms of its coordinates as $dS = R_{PC}^2 \sin\xi \, d\varphi \, d\xi$. Therefore, $C$ is derived as:
$$C = \frac{4\pi R_{PC}^2}{\oint_S P(\xi) \, dS} = \frac{4\pi}{\int_{\varphi=0}^{2\pi} \int_{\xi=0}^{\pi} P(\xi) \sin\xi \, d\varphi \, d\xi} = \frac{2}{\int_0^\pi P(\xi) \sin\xi \, d\xi} \quad (5)$$
In astronomy, the Bond albedo $A$ is related to the geometric albedo $\rho$ by the expression $A = \rho q$ [23], where:
$$q = 2 \int_0^\pi \frac{I(\xi)}{I(0)} \sin\xi \, d\xi = 2 \int_0^\pi P(\xi) \sin\xi \, d\xi \quad (6)$$
Thus, the constant C must obey the following relationship:
$$C = \frac{4\rho}{A} \quad (7)$$
The preceding analysis shows that the irradiance received by the sensor can be expressed as:
$$I_C = \frac{\rho \, r_P^2 \, r_S^2}{R_{PC}^2 R_{SP}^2} P(\xi) I_{sun} \quad (8)$$
Equation (8) shows that $I_C$ is a function of $R_{PC}$, $R_{SP}$, and the phase angle $\xi$. Visual magnitude is the relative quantity generally adopted to measure the irradiance of a celestial object. For example, when the Moon is observed from a geosynchronous orbit, its visual magnitude varies from −2.5 to −12.74. The irradiance of a celestial body is commonly much higher than that of stars.
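As a numerical illustration, Equation (8) can be evaluated directly once a phase function is chosen. The sketch below is a minimal Python implementation; the Lambertian phase function used here is an illustrative assumption (real bodies require empirically measured phase curves), and it is not claimed to be the phase function used in the paper.

```python
import math

def lambert_phase(xi):
    """Lambertian sphere phase function (illustrative assumption):
    P(xi) = (sin(xi) + (pi - xi) * cos(xi)) / pi, which satisfies P(0) = 1."""
    return (math.sin(xi) + (math.pi - xi) * math.cos(xi)) / math.pi

def celestial_irradiance(I_sun, rho, r_P, r_S, R_PC, R_SP, phase_fn, xi):
    """Irradiance received by the sensor, Eq. (8):
    I_C = rho * r_P^2 * r_S^2 / (R_PC^2 * R_SP^2) * P(xi) * I_sun."""
    return rho * (r_P ** 2 * r_S ** 2) / (R_PC ** 2 * R_SP ** 2) * phase_fn(xi) * I_sun
```

As Equation (8) predicts, the received irradiance falls off with the squares of both the sun-planet and planet-sensor distances and scales linearly with the geometric albedo and the phase function.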

3. Celestial Object Imaging Model based on the Well Capacity Adjusting Scheme

The WCA scheme is widely employed in CMOS image sensors. During the integration period of the WCA scheme, the well capacity of a pixel is increased several times to extend the range of the incident signal. The dynamic range, defined as the ratio of the largest nonsaturating signal to the standard deviation of the noise under dark conditions, is thereby enhanced. Figure 3a plots the accumulated photoelectrons versus integration time for three different irradiance signals in normal integration mode. The accumulated photoelectrons increase linearly with integration time until they reach the full well capacity. The accumulated photoelectrons of a pixel can be expressed as:
$$Q = \begin{cases} IT & I \le Q_{MAX}/T \\ Q_{MAX} & \text{otherwise} \end{cases} \quad (9)$$
where $I$ is the photocurrent of the incident signal. The largest photocurrent of a nonsaturating incident signal is given by $I_{MAX} = Q_{MAX}/T$.
The total integration time is divided into several segments when the WCA scheme is utilized. The well capacity is adjusted to a higher value at the beginning of each segment until it reaches the full well capacity. The accumulated photoelectrons are a piecewise linear function of the integration time. Figure 3b shows the integration time divided into two segments, namely, $T_S$ and $T - T_S$. $T_S$ is designated as the adjusting integration time (AIT). The well capacity is adjusted from $Q_S$ to the full well capacity $Q_{MAX}$ at time $T_S$. Notably, when the accumulated photoelectrons reach $Q_S$ (e.g., the high irradiance signal $I_H$ in the figure), the output photoelectrons are clipped until time $T_S$; the excess photoelectrons spill over through the lateral overflow gate. However, the accumulated photoelectrons of the low irradiance signal $I_L$ are unaffected. Therefore, the accumulated photoelectrons of a pixel under the WCA scheme can be expressed as:
$$Q = \begin{cases} IT & I \le Q_S/T_S \\ Q_S + I(T - T_S) & Q_S/T_S < I \le (Q_{MAX} - Q_S)/(T - T_S) \\ Q_{MAX} & \text{otherwise} \end{cases} \quad (10)$$
The largest photocurrent of a nonsaturating incident signal under the WCA scheme is given by $I'_{MAX} = (Q_{MAX} - Q_S)/(T - T_S)$. The standard deviation of the noise under dark conditions remains the same. Thus, the dynamic range is enhanced by a factor:
$$\lambda = \frac{I'_{MAX}}{I_{MAX}} = \frac{(Q_{MAX} - Q_S) \, T}{Q_{MAX} (T - T_S)} \quad (11)$$
For a signal that does not saturate after integration, Equation (10) can also be expressed as:
$$Q = IT - \varepsilon\!\left( I - \frac{Q_S}{T_S} \right) (I T_S - Q_S) \quad (12)$$
where $\varepsilon(t) = \begin{cases} 1 & t \ge 0 \\ 0 & t < 0 \end{cases}$ is the unit step function. As shown in Equation (12), the exposure parameters $Q_S$ and $T_S$ explicitly determine the output photoelectrons. Therefore, the celestial body edge model and star spot imaging model are first established in this study. Then, the influence of the exposure parameters on the accuracy of the extracted optical navigation measurements is analyzed.
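The pixel response of Equations (9)-(12) is straightforward to encode. The following Python sketch implements the WCA response in both its piecewise form (Equation (10)) and its step-function form (Equation (12)), together with the dynamic-range enhancement factor; it is a minimal illustration, not the authors' simulation code.

```python
def wca_response(I, T, T_S, Q_S, Q_MAX):
    """Accumulated photoelectrons under the WCA scheme, piecewise form (Eq. (10)).
    I: photocurrent [e-/s]; T: total integration time; T_S: adjusting
    integration time; Q_S: reduced well capacity; Q_MAX: full well capacity."""
    if I <= Q_S / T_S:
        return I * T                        # never reaches Q_S: no clipping
    if I <= (Q_MAX - Q_S) / (T - T_S):
        return Q_S + I * (T - T_S)          # clipped at Q_S during [0, T_S]
    return Q_MAX                            # saturated

def wca_response_step(I, T, T_S, Q_S, Q_MAX):
    """Equivalent step-function form (Eq. (12)), capped at Q_MAX:
    Q = I*T - eps(I - Q_S/T_S) * (I*T_S - Q_S)."""
    eps = 1.0 if I >= Q_S / T_S else 0.0
    return min(I * T - eps * (I * T_S - Q_S), Q_MAX)

def dr_gain(T, T_S, Q_S, Q_MAX):
    """Dynamic-range enhancement factor: (Q_MAX - Q_S)*T / (Q_MAX*(T - T_S))."""
    return (Q_MAX - Q_S) * T / (Q_MAX * (T - T_S))
```

With the values used later in this paper ($T = 30$ ms, $T_S = 29.6$ ms, $Q_S = 1000\ e^-$, $Q_{MAX} = 15{,}000\ e^-$), `dr_gain` returns 70, i.e., roughly a 37 dB extension of the dynamic range.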

3.1. Celestial Body Edge Model

The edge of a celestial body can ideally be modeled as a step function in a 1D section. Given the effect of the point spread function (PSF) of the optical system, the real edge is blurred; the result is called the blurred edge model. The Gaussian PSF radius $\sigma_{PSF}$ indicates the extent of the blurring. The celestial body image is assumed to be an ideal disk, such that the radial energy distribution along the direction normal to the edge is isotropic. Therefore, the 2D imaging model can be regarded as a 1D edge model rotated 360° around the center of the disk. A 1D celestial body edge model is thus established, and the subsequent analyses are based on it to simplify the theoretical analysis and calculation. The blurred edge (Figure 4) can be modeled by convolving the ideal step edge with the PSF, which is expressed as [24]:
$$f(x) = \frac{k}{2} \left[ \mathrm{erf}\!\left( \frac{x - l}{\sqrt{2}\,\sigma_{PSF}} \right) + 1 \right] + h \quad (13)$$
This model has four parameters, namely, the background intensity $h$, edge contrast $k$, edge location $l$, and Gaussian PSF radius $\sigma_{PSF}$. The background intensity is zero if random noise is disregarded. Then, the 1D celestial body edge model is expressed as:
$$f(x) = \frac{\phi_P \eta_{QE} T}{2} \left[ \mathrm{erf}\!\left( \frac{x - l}{\sqrt{2}\,\sigma_{PSF}} \right) + 1 \right] \quad (14)$$
Thus, the 1D edge gradient model is derived as:
$$f'(x) = \frac{\phi_P \eta_{QE} T}{\sqrt{2\pi}\,\sigma_{PSF}} \exp\!\left[ -\frac{(x - l)^2}{2\sigma_{PSF}^2} \right] \quad (15)$$
where $T$ is the integration time, $\phi_P$ is the incident flux density of the celestial body on the image plane, and $\eta_{QE}$ is the quantum efficiency of the image sensor. Equation (15) implies that the gradient is maximized at the edge location $x = l$.
In Figure 5, the small-amplitude blue solid line denotes the intensity distribution for a short integration time, in which the central area is under-saturated; the edge location can be obtained at the half-amplitude intersection point. The large-amplitude light blue dashed line denotes the intensity distribution that a long integration time would ideally produce. However, the actual intensity distribution follows the blue solid line because the central area is oversaturated. This scenario results in an extension of the apparent diameter of the celestial body. Therefore, the real edge location cannot be extracted.
The WCA scheme is used to avoid oversaturation of the celestial body. Let the total integration time be $T$ and the AIT be $T_S$. The well capacity is adjusted at time $T_S$ from $Q_S$ to $Q_{MAX}$. In Figure 6, the red solid line denotes the intensity distribution at time $T_S$; the accumulated photoelectrons in the central area are clipped at $Q_S$. The dark blue solid line denotes the intensity distribution at the end of the integration. Compared with Figure 5, the central area of the celestial body image is under-saturated, which avoids the extension of the apparent diameter of the celestial body. However, Figure 6 shows that a shallow energy ring forms around the central area. Therefore, the 1D celestial body edge model becomes a piecewise function when the WCA scheme is applied and can be expressed as:
$$f(x) = \begin{cases} \dfrac{\phi_P \eta_{QE} T}{2} \left[ \mathrm{erf}\!\left( \dfrac{x - l}{\sqrt{2}\,\sigma_{PSF}} \right) + 1 \right] & x \le x_c \\[2ex] Q_S + \dfrac{\phi_P \eta_{QE} (T - T_S)}{2} \left[ \mathrm{erf}\!\left( \dfrac{x - l}{\sqrt{2}\,\sigma_{PSF}} \right) + 1 \right] & x > x_c \end{cases} \quad (16)$$
where $x_c$ is the solution of $\mathrm{erf}\!\left( \frac{x - l}{\sqrt{2}\,\sigma_{PSF}} \right) = \frac{2 Q_S}{\phi_P \eta_{QE} T_S} - 1$, which denotes the inflection point of the intensity distribution. The irradiance of a celestial body is commonly high, such that $\phi_P \eta_{QE} T_S \gg Q_S$; therefore, $x_c < l$.
The 1D edge gradient model when the WCA scheme is utilized can be derived as:
$$f'(x) = \begin{cases} \dfrac{\phi_P \eta_{QE} T}{\sqrt{2\pi}\,\sigma_{PSF}} \exp\!\left[ -\dfrac{(x - l)^2}{2\sigma_{PSF}^2} \right] & x \le x_c \\[2ex] \dfrac{\phi_P \eta_{QE} (T - T_S)}{\sqrt{2\pi}\,\sigma_{PSF}} \exp\!\left[ -\dfrac{(x - l)^2}{2\sigma_{PSF}^2} \right] & x > x_c \end{cases} \quad (17)$$
When $x = l$, the second branch of Equation (17) attains its maximum value. Therefore, the true edge location can be extracted when the WCA scheme is adopted.
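To make Equation (16) concrete, the following Python sketch evaluates the 1D WCA edge profile. The inflection point $x_c$ is located numerically by bisection (the inverse error function is not in the Python standard library), and all parameter values in the test are illustrative assumptions rather than values from the paper's optical system.

```python
import math

def edge_wca(x, l, sigma_psf, phi_eta, T, T_S, Q_S):
    """1D celestial-body edge profile under the WCA scheme (Eq. (16)).
    phi_eta stands for phi_P * eta_QE, the photoelectron flux density."""
    def cum(u):
        # blurred step profile: (erf((u - l) / (sqrt(2)*sigma_psf)) + 1) / 2
        return 0.5 * (math.erf((u - l) / (math.sqrt(2.0) * sigma_psf)) + 1.0)

    # x_c solves phi_eta * T_S * cum(x_c) = Q_S: the point where the charge
    # accumulated by the adjusting time T_S just reaches Q_S.
    lo, hi = l - 10.0 * sigma_psf, l + 10.0 * sigma_psf
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if phi_eta * T_S * cum(mid) < Q_S:
            lo = mid
        else:
            hi = mid
    x_c = 0.5 * (lo + hi)

    if x <= x_c:
        return phi_eta * T * cum(x)            # dim side: never clipped
    return Q_S + phi_eta * (T - T_S) * cum(x)  # bright side: clipped at Q_S
```

Far inside the disk the profile levels off at $Q_S + \phi_P \eta_{QE} (T - T_S)$ instead of saturating, which is exactly the mechanism that preserves the true edge location.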

3.2. Star Spot Imaging Model

Establishing an accurate star spot imaging model is the first step toward high star centroiding accuracy. Stars can be considered point sources at infinity, so stellar rays can be approximated as parallel light. Incident starlight passes through the optical lens and is focused at a point in the focal plane. However, the lens is generally slightly defocused to improve the centroiding accuracy, so a star spot spreads over several pixels in the image plane. The profile of a star spot can be described by the Gaussian PSF, where the Gaussian PSF radius $\sigma_{PSF}$ indicates the extent of dispersion: when $\sigma_{PSF}$ is large, the region over which a star spot spreads is large. The star spot imaging model in normal integration mode is commonly assumed to be a 2D Gaussian function (Figure 7a) and can be expressed as:
$$E_{nor}(x, y) = \frac{\phi_S T \eta_{QE}}{2\pi \sigma_{PSF}^2} \exp\!\left[ -\frac{(x - x_0)^2 + (y - y_0)^2}{2\sigma_{PSF}^2} \right] = \Phi_S(x, y) \, T \quad (18)$$
where $(x_0, y_0)$ is the true centroid of the star and $\phi_S$ is the incident flux of the star on the image plane. $\Phi_S(x, y) = \frac{\phi_S \eta_{QE}}{2\pi \sigma_{PSF}^2} \exp\!\left[ -\frac{(x - x_0)^2 + (y - y_0)^2}{2\sigma_{PSF}^2} \right]$ is defined as the energy distribution function. The accumulated photoelectrons of a bright star are suppressed under the WCA scheme, and the excess photoelectrons are drained via the overflow gate. The star spot imaging model when the WCA scheme is applied (Figure 7b) can be expressed as:
$$E_{WCA}(x, y) = \Phi_S(x, y) \, T - \varepsilon\!\left( \Phi_S(x, y) - \frac{Q_S}{T_S} \right) \left[ \Phi_S(x, y) \, T_S - Q_S \right] \quad (19)$$
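The star spot models above can be sketched in a few lines of Python. The functions below implement the Gaussian energy distribution $\Phi_S(x, y)$ and the clipped WCA response $E_{WCA}(x, y)$; the flux and PSF values in the test are hypothetical, chosen only to exercise the dim (unclipped) and bright (clipped) regimes.

```python
import math

def star_flux_density(x, y, x0, y0, phi_s_eta, sigma_psf):
    """Phi_S(x, y): Gaussian energy distribution of a defocused star spot.
    phi_s_eta stands for phi_S * eta_QE."""
    r2 = (x - x0) ** 2 + (y - y0) ** 2
    return phi_s_eta / (2.0 * math.pi * sigma_psf ** 2) * math.exp(-r2 / (2.0 * sigma_psf ** 2))

def star_pixel_wca(x, y, x0, y0, phi_s_eta, sigma_psf, T, T_S, Q_S):
    """E_WCA(x, y): photoelectrons of a star-spot pixel under the WCA scheme.
    The bright core (where Phi_S * T_S >= Q_S) is clipped at Q_S during T_S."""
    phi = star_flux_density(x, y, x0, y0, phi_s_eta, sigma_psf)
    eps = 1.0 if phi >= Q_S / T_S else 0.0
    return phi * T - eps * (phi * T_S - Q_S)
```

For a dim star the step function never triggers and the WCA image equals the normal-mode image, which is why dim-star centroiding is essentially unaffected by the scheme.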
The celestial body edge model and star spot imaging model when utilizing the WCA scheme have been established so far. In the subsequent section, the effect of exposure parameters on the accuracy of star centroiding and edge detection is analyzed using the proposed models. Then, the exposure parameters are optimized to obtain the best performance of the navigation sensor utilizing the WCA scheme.

4. Celestial Object Image Feature Extraction Accuracy Performance Utilizing the WCA Scheme

The navigation measurements of deep space optical navigation systems are generally combined to calculate the LOS direction and spacecraft location [25]. The accuracy of the apparent diameter and celestial body centroiding is determined by the precision of edge detection. The accuracy of the LOS direction of the navigation sensor is determined by the attitude measurement precision, that is, the centroiding accuracy of the background stars. In this section, the effect of exposure parameters on the accuracy of star centroiding and edge detection is analyzed using the proposed image model, which provides theoretical support for parameter optimization.

4.1. Edge Detection Accuracy Performance Utilizing the WCA Scheme

The edge is the part of the image where brightness changes sharply. The edge points based on the blurred edge model are given by the maxima of the first image derivative or zero crossing point of the second image derivative. Steger proposed a subpixel edge extraction algorithm in his doctoral thesis [26]. The basic principle of the algorithm is to perform the second-order Taylor expansion about the pixel where the local gradient is maximized in the direction of the edge normal and to determine the subpixel location of the zero crossing point of the second derivative. In this study, the edge of the celestial body is extracted using this algorithm, which is essentially a fitting interpolation algorithm.
Given that the edge detection algorithm is based on image derivative information, it is highly sensitive to noise. Therefore, the image derivatives must be estimated by convolving the image with the derivatives of the Gaussian smoothing kernel. Edges appear as bright lines in an image that contains the absolute value of the gradient. The second-order Taylor expansion about the maximized gradient pixel in the direction of the edge normal is expressed as:
$$r(x) = r(x_0) + r'(x_0)(x - x_0) + \frac{1}{2} r''(x_0)(x - x_0)^2 \quad (20)$$
where $x_0$ is the pixel center. The subpixel location of the edge, where $r'(x) = 0$, is expressed as:
$$l_0 = x_0 - \frac{r'(x_0)}{r''(x_0)} \quad (21)$$
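Equation (21) amounts to one Newton-style refinement step on the smoothed profile. A minimal sketch, assuming the profile is available as a callable and estimating the derivatives by central differences (in practice they come from convolutions with Gaussian-derivative kernels, as in Steger's algorithm):

```python
def subpixel_edge(r, x0, h=1.0):
    """Subpixel refinement per Eq. (21): l0 = x0 - r'(x0) / r''(x0),
    the extremum of the 2nd-order Taylor expansion of r about the pixel
    center x0. Here r is a callable profile and the derivatives are
    estimated by central differences with step h (an illustrative stand-in
    for Gaussian-derivative convolution)."""
    d1 = (r(x0 + h) - r(x0 - h)) / (2.0 * h)            # r'(x0)
    d2 = (r(x0 + h) - 2.0 * r(x0) + r(x0 - h)) / h ** 2  # r''(x0)
    return x0 - d1 / d2
```

For a locally parabolic profile this recovers the extremum exactly, which is why the refinement achieves subpixel accuracy near the gradient maximum.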
The 1D celestial body edge model is a piecewise function when the WCA scheme is applied. The gradient of the edge model is derived by convolving the edge model with the first derivative of the Gaussian smoothing kernel and is expressed as:
$$r'(x) = f'(x) * g(x) = \int_{-\infty}^{\infty} f'(\tau) \, g(x - \tau) \, d\tau = \int_{-\infty}^{x_c} \frac{\phi_P \eta_{QE} T}{\sqrt{2\pi}\,\sigma_{PSF}} \exp\!\left[ -\frac{(\tau - l)^2}{2\sigma_{PSF}^2} \right] \frac{1}{\sqrt{2\pi}\,\sigma} \exp\!\left[ -\frac{(x - \tau)^2}{2\sigma^2} \right] d\tau + \int_{x_c}^{\infty} \frac{\phi_P \eta_{QE} (T - T_S)}{\sqrt{2\pi}\,\sigma_{PSF}} \exp\!\left[ -\frac{(\tau - l)^2}{2\sigma_{PSF}^2} \right] \frac{1}{\sqrt{2\pi}\,\sigma} \exp\!\left[ -\frac{(x - \tau)^2}{2\sigma^2} \right] d\tau = \frac{\phi_P \eta_{QE}}{\sqrt{2\pi(\sigma_{PSF}^2 + \sigma^2)}} \exp\!\left[ -\frac{(x - l)^2}{2(\sigma_{PSF}^2 + \sigma^2)} \right] \left\{ T - \frac{T_S}{2} + \frac{T_S}{2} \mathrm{erf}[\Theta(x)] \right\} \quad (22)$$
Thus, the second derivative of the edge model is derived as:
$$r''(x) = -\frac{\phi_P \eta_{QE} (x - l)}{\sqrt{2\pi}\,(\sigma_{PSF}^2 + \sigma^2)^{3/2}} \exp\!\left[ -\frac{(x - l)^2}{2(\sigma_{PSF}^2 + \sigma^2)} \right] \left\{ T - \frac{T_S}{2} + \frac{T_S}{2} \mathrm{erf}[\Theta(x)] \right\} - \frac{\phi_P \eta_{QE} T_S \sigma_{PSF}}{2\pi \sigma (\sigma_{PSF}^2 + \sigma^2)} \exp\!\left[ -\frac{(x - l)^2}{2(\sigma_{PSF}^2 + \sigma^2)} \right] \exp\!\left[ -\Theta^2(x) \right] \quad (23)$$
where $\Theta(x) = \frac{\sigma_{PSF}^2 (x_c - x) + \sigma^2 (x_c - l)}{\sqrt{2(\sigma_{PSF}^2 + \sigma^2)}\,\sigma_{PSF}\,\sigma}$. The edge location is the zero-crossing point of the second derivative, which requires solving Equation (23) for $r''(x) = 0$ with $r'(x)\,r'''(x) < 0$. Given that an analytical solution cannot be calculated, the effect of the exposure parameters on the edge detection accuracy is analyzed by numerical simulation. The edge localization error is a function of the Gaussian radius $\sigma_{PSF}$, well capacity $Q_S$, total integration time $T$, AIT $T_S$, Gaussian smoothing kernel radius $\sigma$, and edge location $l$. For a given optical system, $\sigma_{PSF}$ is constant, $\sigma$ is a parameter of the edge detection algorithm, and $T$ is determined by the limiting detectable star visual magnitude of the navigation sensor. This study therefore focuses on the edge detection error caused by $Q_S$ and $T_S$. A systematic error in edge detection is also introduced by pixelization; however, $l$ is in practice a random variable uniformly distributed over a pixel. The root mean square error over $l$ is defined as the edge detection error, which is expressed as:
$$\delta_E(Q_S, T_S) = \left[ \int_{-0.5}^{0.5} \delta^2(Q_S, T_S, l) \, dl \right]^{1/2} \quad (24)$$
$\sigma_{PSF} = 0.67$ pixels, $\sigma = 0.55$ pixels, and $T = 30$ ms are set at this point. Then, simulated celestial body images with temporal noise and fixed pattern noise are generated. The noise sources taken into consideration include photon shot noise, dark current noise, readout noise, quantization noise, dark signal non-uniformity, and photo response non-uniformity. The full well capacity is set to $Q_{MAX} = 15{,}000\ e^-$, consistent with the image sensor we utilized. The edge detection error simulation results are shown in Figure 8.
First, the relationship between the edge detection error $\delta_E$ and the AIT $T_S$ is discussed. The black line indicates that $\delta_E$ is evidently affected by $T_S$, and an interval exists wherein $\delta_E$ is significantly small. When $T_S$ is short, the second segment of the integration time $T - T_S$ is relatively long, which leads to oversaturation of the central region pixels and extension of the apparent diameter of the celestial body. Conversely, when $T_S$ is excessively long, $T - T_S$ is relatively short, which leads to a small intensity contrast between the central region and the energy ring; the algorithm then extracts the edge of the "ring" instead of the actual edge location. Therefore, $\delta_E$ first decreases to a minimum and then increases as $T_S$ increases.
Second, the relationship between the edge detection error $\delta_E$ and the well capacity $Q_S$ is discussed. Setting the AIT to an appropriate value leads to a relatively small $\delta_E$; $T_S = 29.6$ ms is set at this point. The red line indicates that $\delta_E$ varies only slightly and remains nearly constant as $Q_S$ begins to increase. As $Q_S$ continues to increase, $\delta_E$ increases sharply. The reason for this behavior is provided in Figure 9, which shows the simulated images of the celestial object and the second derivative of the edge model expressed in Equation (23) for well capacity values of $1000\ e^-$, $5000\ e^-$, and $14{,}000\ e^-$.
In Figure 9, the purple color represents the saturated pixels, whereas the cyan color represents the pixels with zero intensity. The yellow arc represents the true edge of the celestial body. The red cross symbol indicates the zero-crossing point of the second derivative, which is located at the true edge location, whereas the blue circle symbol indicates the zero-crossing point that deviates from the true edge location. Figure 9a shows that the intensity of the energy ring is small when the well capacity Q S is small and that the zero-crossing points can be extracted at the actual edge location. Figure 9b shows that the intensity of the energy ring increases with the increase in Q S and that the zero crossing points exist at the location of the energy ring. The algorithm extracts double edges, and the false edge can be rejected. Figure 9c shows that the intensity of the energy ring is higher with a larger Q S and that no zero-crossing points are obtained at the actual edge location. The extracted edge deviates from the true location. Thus, a large Q S value results in false edge extraction.
Thus, the well capacity and AIT are the main factors that affect the edge detection error. $T_S$ evidently influences the accuracy of edge detection: the edge detection error first decreases to a minimum and then increases as $T_S$ increases. $Q_S$ must not be excessively large.

4.2. Star Centroiding Accuracy Performance Utilizing the WCA Scheme

Star centroiding accuracy is the basis of attitude accuracy. The star centroiding accuracy performance is analyzed when the WCA scheme is employed to ensure attitude accuracy. The total centroiding error is decomposed into x- and y-component errors, which can be proven to behave identically. Thus, this study analyzes the x-component error as an example.
The centroiding error of the x-component δ x when the WCA scheme is applied is expressed as:
$$\delta_x = \frac{\sum_i \sum_j x_i I_{ij}}{\sum_i \sum_j I_{ij}} - x_0 = \frac{\sum_i \sum_j \Phi_S(x_i, y_j) \, T \, x_i - \sum_i \sum_j x_i \, \varepsilon\!\left( \Phi_S(x_i, y_j) - \frac{Q_S}{T_S} \right) \left[ \Phi_S(x_i, y_j) \, T_S - Q_S \right]}{\phi_S \eta_{QE} T - \sum_i \sum_j \varepsilon\!\left( \Phi_S(x_i, y_j) - \frac{Q_S}{T_S} \right) \left[ \Phi_S(x_i, y_j) \, T_S - Q_S \right]} - x_0 \quad (25)$$
In Equation (25), $\delta_x$ is a function of the Gaussian radius $\sigma_{PSF}$, well capacity $Q_S$, total integration time $T$, AIT $T_S$, incident flux of the star on the image plane $\phi_S$, and actual star location $x_0$. $T$ is determined by the limiting detectable star visual magnitude of the navigation sensor. If $x_0$ moves within a pixel, then $\delta_x$ changes periodically. In practice, $x_0$ is a random variable uniformly distributed over a pixel within the range [−0.5, 0.5). The root mean square error is defined as the error of the x-component, as:
$$\delta_{x,S}(Q_S, T_S) = \left[ \int_{-0.5}^{0.5} \delta_x^2(Q_S, T_S, x_0) \, dx_0 \right]^{1/2} \quad (26)$$
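The centroiding error of Equations (25) and (26) can be approximated numerically. The sketch below samples the clipped star spot on a small pixel grid and averages over sub-pixel positions $x_0$; the grid size, pixel-center sampling (instead of per-pixel integration), and noise-free setting are simplifying assumptions of this illustration, not the paper's full noise simulation.

```python
import math

def centroid_error_x(x0, phi_s_eta, sigma_psf, T, T_S, Q_S, half=5):
    """delta_x for one true position x0 (star centered at (x0, 0)):
    centroid of the WCA-clipped spot minus x0, on a (2*half+1)^2 grid.
    phi_s_eta stands for phi_S * eta_QE."""
    num = den = 0.0
    for i in range(-half, half + 1):
        for j in range(-half, half + 1):
            phi = (phi_s_eta / (2.0 * math.pi * sigma_psf ** 2)
                   * math.exp(-((i - x0) ** 2 + j ** 2) / (2.0 * sigma_psf ** 2)))
            clip = (phi * T_S - Q_S) if phi >= Q_S / T_S else 0.0
            q = phi * T - clip          # E_WCA at pixel (i, j)
            num += i * q
            den += q
    return num / den - x0

def rms_centroid_error(phi_s_eta, sigma_psf, T, T_S, Q_S, n=50):
    """delta_{x,S}: RMS of delta_x over x0 uniform in [-0.5, 0.5)."""
    total = 0.0
    for k in range(n):
        x0 = (k + 0.5) / n - 0.5
        total += centroid_error_x(x0, phi_s_eta, sigma_psf, T, T_S, Q_S) ** 2
    return math.sqrt(total / n)
```

Because the clipping is symmetric about the spot center, the error vanishes for a perfectly centered star and grows only mildly with sub-pixel offset, consistent with the weak sensitivity to $T_S$ reported below.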
After adding temporal noise and fixed pattern noise to the simulated star image, the relationship among $\delta_{x,S}$, $Q_S$, and $T_S$ is analyzed for different star magnitudes. Figure 10 shows the simulated results for star magnitudes 2, 4, 5, and 6. First, the relationship between the star centroiding error $\delta_{x,S}$ and the AIT $T_S$ is discussed. Figure 10a shows that the centroiding error of a magnitude-2 star increases slowly with $T_S$. Figure 10b-d shows that the centroiding error variation caused by $T_S$ can be neglected. Thus, $\delta_{x,S}$ is only weakly affected by $T_S$. Second, the relationship between the star centroiding error $\delta_{x,S}$ and the well capacity $Q_S$ is discussed. Figure 10a-c shows that $\delta_{x,S}$ decreases as $Q_S$ increases. However, the centroiding error variation of a dim star (Figure 10d) caused by the well capacity can be neglected.
The total star centroiding error is expressed as:
$$\delta_{S,cen} = \sqrt{\delta_{x,S}^2 + \delta_{y,S}^2} \quad (27)$$
The relationship between the total star centroiding error and the exposure parameters is consistent with that of the x-component error. Thus, the centroiding error of a dim star is unaffected by the WCA scheme. The centroiding error of a bright star decreases as the well capacity increases, and the effect of the AIT can be ignored.
In the preceding sections, the centroiding accuracy of a single star is analyzed when the WCA scheme is adopted. In practice, many stars in the FOV are required to increase the attitude determination accuracy, and dim stars are generally recorded in greater numbers than bright ones. The overall star centroiding error is therefore defined as the weighted average of the centroiding errors over star magnitudes; it directly reflects the attitude determination accuracy of the optical navigation sensor. The star magnitudes range from 0 to 7 at 0.5 intervals, and stars with magnitudes from m_V − 0.25 to m_V + 0.25 are assigned to magnitude m_V to simplify the analysis. The overall star centroiding error is thus expressed as:
$$\delta_{S,All} = \frac{\sum_{i=0}^{7} \delta_{S,M_{Vi}} \left(N_{M_{Vi},FOV} - N_{M_{V(i-1)},FOV}\right)}{N_{M_{V7},FOV}}$$
where N M V i , F O V is the average number of stars brighter than magnitude M V i in the FOV, and is expressed as [27]:
$$N_{M_{Vi},FOV} = 6.57\, e^{1.08 M_{Vi}}\, \frac{1 - \cos(A/2)}{2}$$
where A is the FOV size. The number of stars in the FOV increases exponentially with star magnitude M_{Vi}; there are far more dim stars than bright ones in the FOV. As a result, the centroiding errors of dim stars contribute most to the overall centroiding error. Figure 11 shows the relationship between the overall centroiding error and the well capacity Q_S when T_S = 29.6 ms. The overall centroiding error remains nearly constant as Q_S increases because the centroiding error of dim stars is far less sensitive to Q_S than that of bright stars. Therefore, the variation of the overall centroiding error caused by the well capacity can be neglected.
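The star-count formula and the magnitude-weighted average above can be sketched as follows; the 20° FOV and the per-magnitude error values are hypothetical placeholders chosen only to exercise the weighting, not figures from the paper:

```python
import math

def stars_in_fov(mv, fov_deg):
    """Average number of stars brighter than magnitude mv in a circular FOV
    of full angle fov_deg (Liebe's empirical formula, Equation (28))."""
    return 6.57 * math.exp(1.08 * mv) * (1 - math.cos(math.radians(fov_deg) / 2)) / 2

def overall_centroid_error(err_by_mag, fov_deg):
    """Equation-(27)-style weighted average: each magnitude bin mv is weighted
    by the number of stars it contributes, N(mv) - N(previous mv)."""
    mags = sorted(err_by_mag)
    total = stars_in_fov(mags[-1], fov_deg)
    acc, prev = 0.0, 0.0
    for mv in mags:
        n = stars_in_fov(mv, fov_deg)
        acc += err_by_mag[mv] * (n - prev)
        prev = n
    return acc / total

# Hypothetical per-magnitude centroiding errors (pixels):
errs = {2.0: 0.010, 4.0: 0.015, 5.0: 0.025, 6.0: 0.040}
print(stars_in_fov(6.0, 20.0))            # expected star count for an assumed 20 deg FOV
print(overall_centroid_error(errs, 20.0)) # dominated by the dim-star error
```

Because the star count grows exponentially with magnitude, the weighted result lands close to the dim-star error, which is why the overall error is insensitive to the well capacity.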
In summary, the well capacity Q_S is the main exposure parameter affecting the centroiding error of a single star; however, its effect on the overall attitude accuracy can be neglected.

5. Exposure Parameter Optimization

In the preceding sections, the celestial body edge model and star spot imaging model are established. Both models are piecewise functions, so an analytical solution for the optimal exposure parameters is difficult to obtain. Thus, the optimal exposure parameters are obtained through Monte Carlo simulation. The exposure parameters include the total integration time T, the AIT T_S, and the well capacity Q_S. An appropriate value of T ensures that sufficient stars are covered in the FOV for star pattern recognition, which is a prerequisite for a reliable attitude measurement function. The total integration time is generally set to a relatively long value to ensure that dim stars can be reliably identified, which can cause the target celestial body to become overexposed. Therefore, the WCA scheme is adopted, and the AIT and well capacity are optimized to obtain the best navigation performance. The optical system employed in this study has the following parameters: aperture D = 40 mm, focal length f = 100 mm, and optical transmission τ = 80%. The image sensor is the CMV20000, whose parameters are listed in Table 1.

5.1. Total Integration Time T

The total integration time is determined by the limiting detectable star visual magnitude of the navigation sensor. Navigation sensors must conduct star pattern recognition to obtain attitude information, and a sufficient number of navigation stars must be present in the FOV to ensure the effectiveness of the star pattern recognition algorithm. A star can generally be identified reliably if the SNR of at least five of its pixels exceeds 5. Thus, T must ensure that the SNR of the darkest of these pixels of the limiting detectable star exceeds 5, which can be expressed as:
$$SNR = \frac{K \phi_S \eta_{QE} T}{N_{noise}} = \frac{K \phi_{S0} \eta_{QE} T \cdot 2.512^{-M_V}}{\sqrt{n_{Shot}^2 + n_{Dark}^2 + n_{PRNU}^2 + n_{DSNU}^2 + n_{read}^2 + n_{ADC}^2}} > 5$$
Thus, the following expression can be derived:
$$T > \frac{5 \sqrt{n_{Shot}^2 + n_{Dark}^2 + n_{PRNU}^2 + n_{DSNU}^2 + n_{read}^2 + n_{ADC}^2}}{K \phi_{S0} \eta_{QE} \cdot 2.512^{-M_V}}$$
where n_Shot, n_Dark, n_PRNU, n_DSNU, n_read, and n_ADC denote the standard deviations of the photon shot noise, dark current noise, photo response non-uniformity (PRNU) noise, dark signal non-uniformity (DSNU) noise, readout noise, and quantization noise, respectively. Photon shot noise follows a Poisson distribution and depends on the incident flux on the pixel; its variance equals the number of photoelectrons accumulated during imaging. Dark current noise also follows a Poisson distribution, and its variance equals the product of the dark current and the exposure time. The standard deviation of the quantization noise is n_ADC = 1/(√12 · G), where G denotes the conversion gain. These noise terms can be derived from the parameters in Table 1. K is the ratio of the energy of the darkest pixel to the total energy of the star signal; K = 0.0287 is set at this point. ϕ_S0 = 5.4 × 10⁴ photons/ms is the incident flux of a magnitude-0 star. M_V is the limiting detectable star visual magnitude of the navigation sensor; M_V = 6 is set at this point. Equation (31) shows that the total integration time must satisfy T > 25.4 ms. T = 30 ms is set to provide the system with a certain degree of redundancy.
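Under one plausible reading of this noise model (shot and dark variances growing linearly with time, PRNU as 1% of the signal, DSNU as 10 e⁻/s, and quantization noise of 1/(√12·G)), the minimum T can be found numerically by bisection. The exact threshold depends on how each term is modeled, so this sketch lands in the same tens-of-milliseconds range as the paper's 25.4 ms rather than reproducing it exactly:

```python
import math

# Sensor/optics parameters from Table 1 and Section 5.1 (per-ms rates)
K       = 0.0287          # darkest-pixel share of the star's total energy
phi_s0  = 5.4e4           # photons/ms for a magnitude-0 star
eta_qe  = 0.45            # e-/photon quantum efficiency
mv      = 6.0             # limiting detectable magnitude
dark    = 0.125           # e-/ms dark current (125 e-/s)
dsnu    = 0.010           # e-/ms DSNU (10 e-/s)
prnu    = 0.01            # 1 % of signal
n_read  = 8.0             # e- read noise
n_adc   = (1.0 / 0.25) / math.sqrt(12.0)   # 1 LSB = 1/G e-; quantization std

def snr(t_ms):
    """SNR of the darkest pixel of the limiting star after t_ms of integration."""
    s = K * phi_s0 * eta_qe * t_ms * 2.512 ** (-mv)
    var = s + dark * t_ms + (prnu * s) ** 2 + (dsnu * t_ms) ** 2 + n_read ** 2 + n_adc ** 2
    return s / math.sqrt(var)

def min_integration_time(target=5.0, lo=0.1, hi=200.0):
    """Smallest T with snr(T) > target, by bisection (snr is monotonic here)."""
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if snr(mid) < target else (lo, mid)
    return hi

print(min_integration_time())  # tens of ms; the paper reports T > 25.4 ms
```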

5.2. Adjusting Integration Time T S

The AIT T_S is one of the main factors that influence the edge detection accuracy of the celestial body. Based on the previous conclusion, the edge detection error first reaches a minimum and then increases as T_S increases, whereas the variation of the star centroiding error caused by T_S can be neglected. Therefore, this study focuses on optimizing T_S to minimize the edge detection error.
Monte Carlo simulation is performed through the following procedure. First, a total of 8000 groups of celestial body images are generated using the proposed model. The well capacity values range from 3000 e⁻ to 5000 e⁻ at 25 e⁻ intervals, and the AIT values range from 29.6 ms to 29.8 ms at 0.002 ms intervals. Each group contains 100 celestial body images with fixed center coordinates, to which random noise is added; the radius of the celestial body obeys a uniform distribution over a pixel, which is equivalent to the edge location varying uniformly over a pixel. Second, the edge data of the 100 celestial bodies are extracted using the edge detection algorithm, and the absolute fitting radius error is obtained by fitting these edge points with the least squares circle fitting algorithm. The absolute radius error is a direct expression of the edge detection error. The standard deviation of these absolute radius errors in one group is taken as the average edge detection error at the given well capacity and AIT values. Finally, this procedure is repeated until all groups of celestial body images are processed.
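The least squares circle fitting step can be sketched with the standard algebraic (Kåsa) fit; the paper does not specify which circle-fit variant it uses, so this is one common choice, shown on synthetic noisy edge points:

```python
import math
import random

def fit_circle(points):
    """Algebraic least-squares circle fit (Kasa method): solve the normal
    equations of x^2 + y^2 + a*x + b*y + c = 0 for (a, b, c)."""
    n = len(points)
    sx = sum(p[0] for p in points); sy = sum(p[1] for p in points)
    sxx = sum(p[0] ** 2 for p in points); syy = sum(p[1] ** 2 for p in points)
    sxy = sum(p[0] * p[1] for p in points)
    sxz = sum(p[0] * (p[0] ** 2 + p[1] ** 2) for p in points)
    syz = sum(p[1] * (p[0] ** 2 + p[1] ** 2) for p in points)
    sz = sxx + syy
    A = [[sxx, sxy, sx], [sxy, syy, sy], [sx, sy, n]]
    rhs = [-sxz, -syz, -sz]
    def det3(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    d = det3(A)
    sol = []
    for j in range(3):                      # Cramer's rule, column by column
        m = [row[:] for row in A]
        for i in range(3):
            m[i][j] = rhs[i]
        sol.append(det3(m) / d)
    a, b, c = sol
    cx, cy = -a / 2, -b / 2
    return cx, cy, math.sqrt(cx * cx + cy * cy - c)

# Noisy synthetic edge points on a circle of radius 35.5 px centred at (100, 80)
random.seed(1)
pts = []
for k in range(100):
    th = 2 * math.pi * k / 100
    rr = 35.5 + random.gauss(0, 0.1)
    pts.append((100 + rr * math.cos(th), 80 + rr * math.sin(th)))
cx, cy, r = fit_circle(pts)
print(cx, cy, r)
```

The fitted radius minus the true radius plays the role of the absolute radius error accumulated over each Monte Carlo group.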
The simulation conditions are set as follows: the incident flux of the celestial body on the image plane, averaged over one pixel, ϕ_P = 7.5 × 10⁴ photons/ms; the total integration time T = 30 ms; the Gaussian radius σ_PSF = 0.67 pixels; and the Gaussian smoothing kernel radius σ = 0.55 pixels. The AIT T_S that corresponds to the minimum standard deviation of the edge detection error is selected as the optimal T_S at a certain well capacity, shown as a red scattered point in Figure 12.
The simulation results are shown in Figure 12. The blue solid line indicates the linear function between well capacity Q S and optimal T S fitted by MATLAB. The optimal T S increases with the increase in well capacity. Therefore, the optimal AIT is derived as:
$$T_S = T - \frac{Q_{MAX} - Q_S}{\phi_P \eta_{QE}}$$
The same conclusion can also be obtained under other simulation conditions. Thus, the optimal AIT T S when utilizing the WCA scheme is given by Equation (32).
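Equation (32) can be checked directly against the exposure settings used elsewhere in the paper (T = 30 ms, Q_MAX = 15,000 e⁻, ϕ_P = 7.5 × 10⁴ photons/ms, η_QE = 0.45); the computed values match the T_S entries listed with the Moon observations in Table 2 to two decimals:

```python
def optimal_ait(t_ms, q_max, q_s, phi_p, eta_qe):
    """Optimal adjusting integration time from Equation (32):
    T_S = T - (Q_MAX - Q_S) / (phi_P * eta_QE)."""
    return t_ms - (q_max - q_s) / (phi_p * eta_qe)

# Well capacities used in the paper's Moon observations
for q_s in (3046, 3750, 4453, 6093):
    print(q_s, optimal_ait(30.0, 15000, q_s, 7.5e4, 0.45))
```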

5.3. Well Capacity Q S

The optimization of the well capacity Q_S is determined by two conditions. First, if Q_S is set excessively low, the SNR of a dim star decreases, which can cause identification of the limiting detectable star to fail; a lower limit on the well capacity is therefore necessary. Second, Q_S is the main factor that affects the accuracy of edge detection when the WCA scheme is applied, and the optimal Q_S must minimize the edge detection error. The optimal well capacity Q_S is thus determined jointly by these two conditions.
The intensity distribution of the limiting detectable star must not be degraded, to ensure that it can be identified reliably. Thus, the lower limit of the well capacity is expressed as:
$$\frac{Q_S}{T_S} \geq K_B\, \phi_{S0}\, \eta_{QE}\, 2.512^{-M_{Vt}}$$
By substituting Equation (32) into Equation (33), the following expression is obtained:
$$Q_S \geq \frac{\left(\phi_P T \eta_{QE} - Q_{MAX}\right) K_B\, \phi_{S0}\, 2.512^{-M_{Vt}}}{\phi_P - K_B\, \phi_{S0}\, 2.512^{-M_{Vt}}}$$
where K_B is the ratio of the energy of the brightest pixel to the total energy of the star signal; K_B = 0.2965 is set at this point. M_Vt is the star magnitude limit threshold: the SNR of a star dimmer than magnitude M_Vt does not decrease when the WCA scheme is employed. M_Vt = 5.5 is set at this point, and Q_S ≥ 1400 e⁻ is obtained.
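Evaluating Equation (34) with the stated constants gives a lower bound of roughly 1.3 × 10³ e⁻, consistent in magnitude with the paper's rounded Q_S ≥ 1400 e⁻ (small differences in the constants or rounding shift the last digits):

```python
# Lower bound on the adjusted well capacity, Equation (34).
K_B    = 0.2965   # brightest-pixel share of the star's total energy
phi_s0 = 5.4e4    # photons/ms for a magnitude-0 star
mv_t   = 5.5      # threshold magnitude whose SNR must stay unaffected
phi_p  = 7.5e4    # photons/ms per pixel from the celestial body
t_ms   = 30.0     # total integration time
eta_qe = 0.45     # e-/photon quantum efficiency
q_max  = 15000.0  # full well capacity

beta = K_B * phi_s0 * 2.512 ** (-mv_t)   # brightest-pixel photon rate at M_Vt
q_s_min = (phi_p * t_ms * eta_qe - q_max) * beta / (phi_p - beta)
print(q_s_min)    # roughly 1.3e3 e-; the paper quotes Q_S >= 1400 e-
```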
The well capacity Q_S is the other factor that influences the edge detection performance of the celestial body, and Monte Carlo simulation is performed to obtain its optimal value. Given that the relationship between the optimal AIT and the well capacity has already been derived in the previous section, the optimal exposure parameters under the current imaging conditions can be determined. A total of 300 groups of celestial body images are generated based on the celestial body edge model, and random noise is then added. The well capacity values range from 2000 e⁻ to 5000 e⁻ at 10 e⁻ intervals, and each T_S is derived from Equation (32) at the corresponding Q_S. The edge data of the 100 celestial bodies in each group are extracted using the edge detection algorithm, and the absolute fitting radius error is obtained by fitting these edge points with the least squares circle fitting algorithm. The standard deviation of these absolute radius errors in one group is taken as the average edge detection error at a certain well capacity. The well capacity Q_S that corresponds to the minimum standard deviation of the edge detection error is selected as the optimal Q_S, provided that it also satisfies the condition expressed in Equation (34).
The simulation results for the optical system in this study are shown in Figure 13. The abscissa denotes the well capacity Q_S, and the values in parentheses denote the optimal T_S corresponding to each Q_S. The edge detection error first reaches a minimum and then increases as Q_S increases. The symbol "*" indicates the optimal Q_S. Thus, the optimal exposure parameters are identified: the optimal well capacity of the navigation sensor is 3750 e⁻, and the corresponding optimal AIT is 29.67 ms.

6. Experimental Results and Analysis

A laboratorial single-star imaging and accuracy analysis experiment and a night sky experiment are conducted to validate the correctness of the proposed models, the accuracy performance analysis, and the optimal exposure parameters. The image sensor of the navigation sensor adopted in these experiments is the CMV20000, whose parameters are listed in Table 1.

6.1. Laboratorial Single-Star Imaging and Accuracy Analysis Experiment

A laboratorial single-star imaging and accuracy analysis experiment is performed to validate the star spot imaging model and centroiding accuracy performance when the WCA scheme is employed. The autocollimator in the laboratory is used to generate infinite distance star signals with different star magnitudes. The navigation sensor is mounted on a turntable, as shown in Figure 14.
The exposure parameters of the navigation sensor are set as follows: total integration time T = 30 ms, AIT T_S = 29.67 ms, and well capacity Q_S = 3750 e⁻. Figure 15 shows the star images of different magnitudes using the normal integration mode and the WCA scheme. The intensity of the bright star is clearly reduced when the WCA scheme is applied, whereas the energy distribution of the dim star is unaffected. The experiment therefore validates the star spot imaging model.
The centroiding accuracy performance when utilizing the WCA scheme is then validated. The exposure parameters of the navigation sensor are set as follows: total integration time T = 30 ms, AIT T_S = 29.67 ms, and well capacity Q_S ranging from 2000 e⁻ to 8000 e⁻ at 500 e⁻ intervals. Star centroiding is performed and the experimental data are recorded. Although the true position of the star is unknown, the average centroid position (x̄_cen, ȳ_cen) of a bright unsaturated star image can be taken as an estimate of the true position. The standard deviations of the centroiding errors of each magnitude with respect to (x̄_cen, ȳ_cen) are then calculated at each Q_S. The experimental results are shown in Figure 16.
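The scatter-about-the-mean statistic used here can be sketched as follows; the centroid samples are hypothetical values, included purely to show the computation:

```python
import math

def centroid_scatter(centroids):
    """Standard deviation of centroid samples about their mean position, the
    accuracy proxy used when the true star position is unknown."""
    n = len(centroids)
    mx = sum(c[0] for c in centroids) / n
    my = sum(c[1] for c in centroids) / n
    vx = sum((c[0] - mx) ** 2 for c in centroids) / n
    vy = sum((c[1] - my) ** 2 for c in centroids) / n
    return math.sqrt(vx), math.sqrt(vy)

# Hypothetical repeated centroid measurements of one star (pixels)
samples = [(512.031, 640.012), (512.048, 639.988), (512.019, 640.027),
           (512.055, 639.995), (512.037, 640.008)]
print(centroid_scatter(samples))
```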
In Figure 16, the experimental results are denoted with red solid lines, whereas the simulation results are denoted with blue solid lines. The centroiding accuracy performance from the experiment is consistent with the simulation results. Thus, the laboratorial single-star imaging and accuracy analysis experiment validates the conclusions of this study.

6.2. Night Sky Observation and Accuracy Analysis Experiment

A night sky observation and accuracy analysis experiment is performed to validate the correctness of the celestial body edge model and the optimal exposure parameters when the WCA scheme is applied. The experiment also verifies whether the optical navigation measurements of the stars and the target celestial body, extracted from a single exposure with the optimal exposure parameters, satisfy the navigation accuracy requirements. In Figure 17, the navigation sensor is installed on a tripod. The hardware configuration of the navigation sensor is the same as that previously described.

6.2.1. Observations of the Moon and Accuracy Analysis Experiment

Images of the Moon are obtained utilizing the normal integration mode and the WCA scheme, as shown in Figure 18. Figure 18b shows that the image is largely overexposed, and the apparent diameter of the Moon is significantly extended. Figure 18c shows the Moon imaged with the optimal exposure parameters utilizing the WCA scheme. Although the total integration time is the same as in Figure 18b, the image of the Moon is well exposed and exhibits suitable performance for navigation measurement extraction when the WCA scheme is applied. Figure 18d shows the image of the Moon when the well capacity is set to 6093 e⁻; the energy ring around the Moon is clearly visible. The observation results validate the correctness of the celestial body edge model when the WCA scheme is adopted.
The theoretical value of the apparent diameter of the Moon is estimated to be 71.02 pixels on the image plane by applying the STK software to simulate the distance from the observation location to the Moon. The edge detection algorithm is employed to extract the edge of the Moon. The least squares circle fitting is utilized to obtain the apparent radius and centroid of the Moon image. The results are shown in Figure 19.
In Figure 19, the red scatter points are the extracted edge points, the yellow circle is the fitted circle, and the "+" symbol marks the centroid position. The minimum relative deviation of the apparent radius with respect to the theoretical value is obtained when the optimal exposure parameters are adopted. Table 2 shows the average extraction results of the apparent radius under different exposure conditions. The apparent radius errors are larger than the simulation results, a discrepancy that may be attributed to atmospheric turbulence and lens calibration error; these factors are beyond the scope of this study and are not considered in the model. In summary, the Moon observations validate the reliability of the error analysis and parameter optimization.

6.2.2. Observations of the Moon and Stars in the Same FOV

Images of stars and the Moon in the same FOV are taken utilizing the WCA scheme and optimal exposure parameters, as shown in Figure 20. The star pattern identification algorithm is applied, and the identified stars are denoted with the yellow “+” symbol. A total of 12 stars are identified. The brightest star magnitude is 3.0, whereas the dimmest star magnitude is 6.2. The centroid positions and magnitude of the identified stars in the image are listed in Table 3.
Under the optimal exposure parameters, the navigation sensor can simultaneously identify the stars that are dimmer than the limiting detectable star magnitude and extract the high-accuracy edge location of the celestial body (results listed in Table 2). In summary, by utilizing the WCA scheme, the navigation sensor can image the stars and target celestial body well-exposed simultaneously within a single exposure and can reliably extract high-accuracy optical navigation measurements that satisfy the navigation demand. The night sky observation and accuracy analysis experiment validates our study conclusions.

7. Conclusions

This study aims to solve the problem that an optical navigation sensor cannot simultaneously image the target celestial body and stars well-exposed, given that their irradiance difference is generally large. We first analyze the irradiance characteristics of a celestial body and propose a solution that utilizes the WCA scheme. The celestial body edge model and star spot imaging model are then established for the WCA scheme, and the effect of the exposure parameters on the accuracy of star centroid estimation and edge extraction is analyzed based on these models. The AIT T_S and well capacity Q_S are the main factors that influence the edge detection accuracy of the celestial body: the edge detection error first reaches a minimum and then increases as T_S increases, and an interval exists in which the edge detection error is significantly small. When T_S is set to an appropriate value, the edge detection error likewise first reaches a minimum and then increases with Q_S. The well capacity is the main factor that influences the centroiding accuracy of a single star: the centroiding error of a bright star decreases as Q_S increases, whereas the centroiding error of a dim star is mainly caused by random noise. Because more dim stars are recorded than bright stars in the FOV, the overall centroiding error variation caused by the exposure parameters can be neglected. The exposure parameters are optimized to ensure that the optical navigation measurements satisfy the navigation accuracy requirement, and the optimal Q_S and the analytical solution of the optimal T_S are obtained by Monte Carlo simulation. The laboratorial and night sky experiments validate the correctness of the models, the proposed optimal exposure parameters, and the other study conclusions.
This study validates the feasibility of extracting attitude information and LOS vector from the sensor to the centroid of the target celestial body simultaneously by utilizing a miniaturized single FOV optical navigation sensor.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (NSFC) (No. 61222304) and the Specialized Research Fund for the Doctoral Program of Higher Education of China (No. 20121102110032). We gratefully acknowledge this support.

Author Contributions

Hao Wang is responsible for the overall work, performance of the simulations, analysis and experiments, and writing of this paper. Jie Jiang provided partial research ideas and modified the paper. Guangjun Zhang is the research group leader who provided general guidance during the research and approved this paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. El Gamal, A.; Eltoukhy, H. CMOS image sensors. IEEE Circuits Devices Mag. 2005, 21, 6–20. [Google Scholar] [CrossRef]
  2. Kühl, C.T. Combined Earth-/Star Sensor for Attitude and Orbit Determination of Geostationary Satellites. Ph.D. Dissertation, Universität Stuttgart, Stuttgart, Baden-Württemberg, Germany, March 2005. [Google Scholar]
  3. Jinno, T.; Okuda, M. Multiple exposure fusion for high dynamic range image acquisition. IEEE Trans. Image Process. 2012, 21, 358–365. [Google Scholar] [CrossRef] [PubMed]
  4. Zhang, W.; Cham, W.-K. Reference-guided exposure fusion in dynamic scenes. J. Vis. Commun. Image Represent. 2012, 23, 467–475. [Google Scholar] [CrossRef]
  5. Goshtasby, A.A. Fusion of multi-exposure images. Image Vis. Comput. 2005, 23, 611–618. [Google Scholar] [CrossRef]
  6. Mertens, T.; Kautz, J.; Van Reeth, F. Exposure fusion: A simple and practical alternative to high dynamic range photography. In Computer Graphics Forum; Blackwell Publishing Ltd.: Oxford, UK, 2009; Volume 28, pp. 161–171. [Google Scholar]
  7. Yang, D.X.; El Gamal, A. Comparative analysis of SNR for image sensors with enhanced dynamic range. In Sensors, Cameras, and Systems for Scientific/Industrial Applications; Blouke, M.M., Williams, G.M., Jr., Eds.; SPIE: San Jose, CA, USA, 1999; Volume 3649, pp. 197–211. [Google Scholar]
  8. Cheng, H.-Y.; Choubey, B.; Collins, S. An integrating wide dynamic-range image sensor with a logarithmic response. IEEE Trans. Electron Devices 2009, 56, 2423–2428. [Google Scholar] [CrossRef]
  9. Kavadias, S.; Dierickx, B.; Scheffer, D.; Alaerts, A.; Uwaerts, D.; Bogaerts, J. A logarithmic response CMOS image sensor with on-chip calibration. IEEE J. Solid State Circuits 2000, 35, 1146–1152. [Google Scholar] [CrossRef]
  10. Kim, D.; Song, M. An enhanced dynamic-range CMOS image sensor using a digital logarithmic single-slope ADC. IEEE Trans. Circuits Syst. II Express Briefs 2012, 59, 653–657. [Google Scholar] [CrossRef]
  11. Brajovic, V.; Kanade, T. A sorting image sensor: An example of massively parallel intensity-to-time processing for low-latency computational sensors. In Proceedings of the IEEE International Conference on Robotics and Automation, Minneapolis, MN, USA, 22–28 April 1996; pp. 1638–1643. [Google Scholar]
  12. Luo, Q.; Harris, J.G.; Chen, Z.J. A time-to-first spike CMOS image sensor with coarse temporal sampling. Analog Integr. Circuits Signal Process. 2006, 47, 303–313. [Google Scholar] [CrossRef]
  13. Stoppa, D.; Simoni, A.; Gonzo, L.; Gottardi, M.; Dalla Betta, G.-F. Novel CMOS image sensor with a 132-dB dynamic range. IEEE J. Solid State Circuits 2002, 37, 1846–1852. [Google Scholar] [CrossRef]
  14. Stoppa, D.; Vatteroni, M.; Covi, D.; Baschirotto, A.; Sartori, A.; Simoni, A. A 120-dB dynamic range CMOS image sensor with programmable power responsivity. IEEE J. Solid State Circuits 2007, 42, 1555–1563. [Google Scholar] [CrossRef]
  15. Knight, T.F. Design of an Integrated Optical Sensor with on-Chip Preprocessing. Ph.D. Dissertation, Massachusetts Institute of Technology, Cambridge, MA, USA, 1983. [Google Scholar]
  16. Sayag, M. Non-Linear Photosite Response in CCD Imagers. U.S. Patent No. 5,055,667, 8 October 1991. [Google Scholar]
  17. Decker, S.; McGrath, D.; Brehmer, K.; Sodini, C.G. A 256 × 256 CMOS imaging array with wide dynamic range pixels and column-parallel digital output. IEEE J. Solid State Circuits 1998, 33, 2081–2091. [Google Scholar] [CrossRef]
  18. Fossum, E.R. High Dynamic Range Cascaded Integration Pixel Cell and Method of Operation. U.S. Patent No. 6,888,122, 3 May 2005. [Google Scholar]
  19. Akahane, N.; Sugawa, S.; Adachi, S.; Mori, K.; Ishiuchi, T.; Mizobuchi, K. A sensitivity and linearity improvement of a 100-dB dynamic range CMOS image sensor using a lateral overflow integration capacitor. IEEE J. Solid State Circuits 2006, 41, 851–858. [Google Scholar] [CrossRef]
  20. Lee, W.; Akahane, N.; Adachi, S.; Mizobuchi, K.; Sugawa, S. A high S/N ratio and high full well capacity CMOS image sensor with active pixel readout feedback operation. In Proceedings of the IEEE Asian Solid-State Circuits Conference (ASSCC ’07), Jeju City, Korea, 12–14 November 2007; pp. 260–263. [Google Scholar]
  21. Hawkins, S.E., III; Boldt, J.D.; Darlington, E.H.; Espiritu, R.; Gold, R.E.; Gotwols, B.; Grey, M.P.; Hash, C.D.; Hayes, J.R.; Jaskulek, S.E. The Mercury dual imaging system on the MESSENGER spacecraft. Space Sci. Rev. 2007, 131, 247–338. [Google Scholar] [CrossRef]
  22. Iqbal, M. An Introduction to Solar Radiation, 1st ed.; Academic Press: Don Mills, ON, Canada, 2012; pp. 35–36. [Google Scholar]
  23. The Information of Bond Albedo. Available online: https://en.wikipedia.org/wiki/Bond_albedo (accessed on 9 November 2016).
  24. Hagara, M.; Kulla, P. Edge detection with sub-pixel accuracy based on approximation of edge with Erf function. Radioengineering 2011, 20, 516–524. [Google Scholar]
  25. Lightsey, G.E.; Christian, J.A. Onboard image-processing algorithm for a spacecraft optical navigation sensor system. J. Spacecr. Rocket. 2012, 49, 337–352. [Google Scholar] [CrossRef]
  26. Steger, C. Unbiased Extraction of Curvilinear Structures from 2D and 3D Images. Ph.D. Dissertation, Technische Universität München, Munich, Germany, 1998. [Google Scholar]
  27. Liebe, C.C. Accuracy performance of star trackers-a tutorial. IEEE Trans. Aerosp. Electron. Syst. 2002, 38, 587–599. [Google Scholar] [CrossRef]
Figure 1. Spatial position relationship between the sensor and target celestial body.
Figure 2. Reflected radiation flux over a sphere.
Figure 3. (a) Accumulated photoelectrons versus time in normal integration mode; and (b) accumulated photoelectrons versus time using the WCA scheme.
Figure 4. Blurred edge model.
Figure 5. 1D profile of the celestial body image in normal integration mode.
Figure 6. 1D profile of the celestial body image when using the WCA scheme.
Figure 7. (a) Star signal intensity distribution utilizing normal integration mode; and (b) star signal intensity distribution using the WCA scheme.
Figure 8. Edge detection error δ E (left) versus well capacity Q S and AIT T S (right).
Figure 9. Effect of different well capacity values on the edge detection results when: (a) Q S = 1000 e ; (b) Q S = 5000 e ; and (c) Q S = 14 , 000 e .
Figure 10. δ x , S versus well capacity Q S and AIT T S for different star magnitudes: (a) star magnitude = 2; (b) star magnitude = 4; (c) star magnitude = 5; and (d) star magnitude = 6.
Figure 11. Overall star centroiding error versus well capacity.
Figure 12. Simulation results of the optimal T S .
Figure 13. Simulation results of the optimal Q S .
Figure 14. Setup for the laboratorial experiment.
Figure 15. (ad) Star images of magnitudes 2, 4, 5 and 6 when using the normal integration mode; and (eh) star images of magnitudes 2, 4, 5 and 6 when using the WCA scheme.
Figure 16. Star centroiding error versus well capacity: (a) star magnitude = 2; (b) star magnitude = 4; (c) star magnitude = 5; and (d) star magnitude = 6.
Figure 17. Setup of the night sky experiment.
Figure 18. Lunar images with different exposure parameters: (a) T = 0.4 ms utilizing the normal integration mode; (b) T = 30 ms utilizing the normal integration mode; (c) T = 30 ms , T S = 29.67 ms , Q S = 3750 e utilizing the WCA scheme; and (d) T = 30 ms , T S = 29.67 ms , Q S = 6093 e utilizing the WCA scheme.
Figure 19. Edge detection and circle fitting results of the Moon image.
Figure 20. Observations of the Moon and stars in the same FOV.
Table 1. Parameters of the CMV20000 image sensor.
| Parameter | Value | Parameter | Value |
| --- | --- | --- | --- |
| Active pixels | 5120 × 3840 | PRNU | 1% |
| Pixel pitch | 6.4 μm × 6.4 μm | DSNU | 10 e⁻/s |
| Full well capacity, Q_MAX | 15,000 e⁻ | Read noise | 8 e⁻ |
| Conversion gain | 0.25 DN/e⁻ | Quantization bits | 12 |
| Dark current | 125 e⁻/s | Quantum efficiency, η_QE | 0.45 e⁻/photon |
Table 2. Apparent radius of the Moon under different exposure conditions.
| Exposure Conditions | Apparent Radius/Pixels | Error/Pixels |
| --- | --- | --- |
| Normal integration, T = 0.4 ms | 71.44 | 0.42 |
| Normal integration, T = 30 ms | 75.36 | 4.34 |
| WCA, T = 30 ms, T_S = 29.65 ms, Q_S = 3046 e⁻ | 71.46 | 0.44 |
| WCA, T = 30 ms, T_S = 29.67 ms, Q_S = 3750 e⁻ | 71.36 | 0.34 |
| WCA, T = 30 ms, T_S = 29.69 ms, Q_S = 4453 e⁻ | 71.60 | 0.58 |
| WCA, T = 30 ms, T_S = 29.74 ms, Q_S = 6093 e⁻ | 72.12 | 1.10 |
Table 3. Centroid positions and magnitude of the identified stars.
| Number | Centroid/Pixels | Magnitude | Number | Centroid/Pixels | Magnitude |
| --- | --- | --- | --- | --- | --- |
| 1 | (3439.886, 3084.843) | 3.8 | 7 | (57.533, 1493.233) | 4.9 |
| 2 | (2985.837, 2934.429) | 3.0 | 8 | (3045.821, 1069.692) | 6.2 |
| 3 | (1707.333, 2326.095) | 4.4 | 9 | (3131.244, 1017.171) | 5.3 |
| 4 | (2575.164, 2232.000) | 5.2 | 10 | (3624.311, 684.475) | 4.8 |
| 5 | (1012.179, 256.769) | 5.5 | 11 | (977.373, 675.311) | 4.3 |
| 6 | (3025.250, 1638.643) | 5.4 | 12 | (4056.727, 59.644) | 3.1 |