Article

A Practical Star Image Registration Algorithm Using Radial Module and Rotation Angle Features

National Key Laboratory of Science and Technology on ATR, College of Electronic Science and Technology, National University of Defense Technology, Changsha 410073, China
*
Author to whom correspondence should be addressed.
Remote Sens. 2023, 15(21), 5146; https://doi.org/10.3390/rs15215146
Submission received: 31 August 2023 / Revised: 21 October 2023 / Accepted: 26 October 2023 / Published: 27 October 2023
(This article belongs to the Section Remote Sensing Image Processing)

Abstract

Star image registration is the most important step in applications of astronomical image differencing, stacking, and mosaicking, which require high robustness, accuracy, and real-time capability of the algorithm. At present, there are no high-performance registration algorithms available in this field. In the present paper, we propose a star image registration algorithm that relies only on radial module features (RMF) and rotation angle features (RAF) while providing excellent robustness, high accuracy, and good real-time performance. Test results on a large amount of simulated and real data show that the comprehensive performance of the proposed algorithm is significantly better than that of four classical baseline algorithms in the presence of rotation, insufficient overlapping area, false stars, position deviation, magnitude deviation, and complex sky backgrounds, making it a more ideal star image registration algorithm than current alternatives.

Graphical Abstract

1. Introduction

Current methods for space environment situational awareness [1,2,3,4,5], near-Earth asteroid detection [6], large-scale sky surveys [7,8], etc., yield massive amounts of star image data every day. Star image registration is a fundamental data processing step for astronomical image differencing [9], stacking [10], and mosaicking [11] applications, and places high demands on the robustness, accuracy, and real-time capability of registration algorithms.
Image registration is a difficult problem in the field of image processing, aiming to register images of the same object acquired under different conditions using computed inter-pixel transformation relations. Generally, image registration algorithms can be divided into three types: intensity-based, feature-based, and neural network-based [12]. Intensity-based image registration algorithms evaluate the intensity correlation of image pairs in the spatial or frequency domain; they are classical statistical registration algorithms, with representative examples including normalized cross-correlation (NCC) [13,14], mutual information [15], and the Fourier–Mellin transform (FMT) [16]. The advantages of intensity-based registration algorithms are that they do not require feature extraction and are simple to implement. The disadvantages are that they are computationally expensive, that they fail when the background changes drastically or when grayscale information is poor (which unfortunately is not uncommon in star images), and that in most cases the similarity measure function cannot attain the global optimal solution. Feature-based registration algorithms establish correspondences between robust features extracted from image pairs [17]; they usually have better real-time performance and robustness, have been a hot topic of research, and are mainly divided into feature descriptor-based and spatial distribution relationship-based algorithms. Feature descriptor-based registration algorithms include SIFT [18,19], SURF [20,21,22], FAST [23], ORB [24], and others. To ensure the rotation invariance of the feature descriptors, most of these algorithms and their improved versions need to use the gradient information of the local regions of the feature points to determine the dominant orientation; however, because grayscale astronomical star images have the same gradient drop in every direction, different local regions produce different dominant orientations, and these algorithms therefore lack robustness against rotation in star image registration applications. In addition, the FAST and ORB algorithms mainly detect corner points as feature points in the image; as corner points are almost non-existent in star images, these algorithms are not applicable to this field. Spatial distribution relationship-based registration algorithms mainly include polygonal algorithms [25,26,27] and grid algorithms [28,29]. Polygonal algorithms often cannot balance real-time performance and robustness: exhaustively testing all combined point pairs favors robustness, but the computational load becomes massive when there are many stars, which is often unacceptable in practical applications; thus, there is an unavoidable trade-off between real-time performance and robustness. Grid algorithms mostly determine the reference axis empirically based on the brightness or angle of neighboring stars, and their performance deteriorates dramatically when the stars are dense or their brightness is unreliable. Neural network-based registration algorithms are now widely studied in the field of remote sensing image registration, and can be broadly classified into two types: deep neural network-based [30,31,32] and graph neural network-based [33,34] algorithms.
The greatest advantage of deep neural networks is their powerful feature extraction ability, especially for advanced features; the richer the image features, the better the performance of deep neural network algorithms. However, star images lack conventional features such as color, texture, and shape, and advanced features are scarce; these characteristics mean that registration algorithms based on deep neural networks are rarely applicable to this field. The combined SuperPoint and SuperGlue (SPSG) algorithm is typical of graph neural network-based registration algorithms: SuperPoint is a network model that simultaneously extracts image feature points and descriptors, while SuperGlue is a feature registration network based on a graph neural network with an attention mechanism. Registration algorithms incorporating SPSG achieved excellent results in the CVPR Image Matching Challenge for three consecutive years (from 2020 to 2022). As with feature descriptor-based registration methods, SPSG-based algorithms suffer from the same lack of robustness against rotation in star image registration applications, and their high GPU compute and memory requirements can limit their use in applications with limited hardware resources.
Challenges in star image registration include image rotation, insufficient overlapping area, false stars, position deviation, magnitude deviation, high-density stars, complex sky backgrounds, and more, which impose high requirements in terms of robustness, accuracy, and real-time performance. In this paper, we design a high-precision, robust, real-time star image registration algorithm by relying on the radial module feature and the rotation angle feature, exploiting the characteristic that the relative distances between star pairs show little variance at different positions on the image, and using a reasonable voting strategy. The proposed algorithm belongs to the class of spatial distribution relationship-based registration algorithms; it is similar to the mechanism by which animals recognize stars [35] and draws on the ideas of polar coordinate-based grid star map matching [29] and voting strategy-based star map matching [36]. A large number of simulated and real data test results show that the comprehensive performance of the proposed algorithm is significantly better than that of four classical baseline algorithms when image rotation, insufficient overlap area, false stars, position deviation, magnitude deviation, high-density stars, and complex sky backgrounds are present, resulting in a more ideal star image registration algorithm.
The rest of this paper is structured as follows. In Section 2, we provide a brief description of the image preprocessing approach used in this paper. In Section 3, we describe the principles and implementation steps of the proposed star image registration algorithm in detail. Section 4 deals with the full and rigorous testing of our algorithm’s performance using simulated and real data, as well as a comparison with classical baseline registration algorithms. Section 5 discusses the comprehensive performance of star image registration algorithms. Finally, Section 6 concludes the paper.

2. Image Preprocessing

2.1. Background Suppression

A typical star image is shown in Figure 1, which consists of the four parts shown in Equation (1):
$I = B + S + O + N,$
where $I$ denotes the star image, $B$ denotes low-frequency undulating backgrounds such as mist, airglow, and ground-level stray light, $S$ denotes stellar images, $O$ denotes near-Earth-orbit object images such as satellites, debris, and near-Earth asteroids, and $N$ denotes high-frequency noise such as cosmic ray noise and defective pixels. According to their frequency characteristics, the components of a star image can be divided into the mainly low-frequency undulating background and the mainly high-frequency stars, near-Earth orbit (NEO) objects, and noise. High-frequency noise and NEO objects are similar to the stellar images, making them difficult to suppress in the preprocessing stage. Therefore, in order to extract the stellar centroids as accurately as possible, the low-frequency undulating background should be suppressed first.
The effectiveness and real-time performance of morphological filtering have been well proven in the field of weak target detection [37]; thus, the preprocessing step in this paper uses the morphological reconstruction algorithm [38] to suppress the undulating background. The specific steps are to first obtain the marker image using Top-Hat filtering, as shown in Equation (2):
$H_0 = I - I \circ b_{th},$
where $I$ is the original image, $\circ$ is the morphological opening operator, and $b_{th}$ is the structuring element, generally selected as a flat structuring element with a size of three to five times the defocus blur diameter. To improve the positioning accuracy of the stellar centroid, telescopes are usually designed with defocus [39], allowing the single-pixel star image to spread into a bright spot; the defocus blur diameter is the diameter of this bright spot.
Next, the morphological reconstruction operation is executed as shown in Equation (3):
$H_{k+1} = (H_k \oplus b_{rc}) \wedge I,$
where $H_k$ is the marker image in the iterative process, $\oplus$ is the morphological dilation operator, $b_{rc}$ is the structuring element of the reconstruction process (generally a 3 × 3 flat structuring element), and $\wedge$ indicates that the corresponding pixel takes the smaller value. The reconstruction process is stopped when $H_{k+1} = H_k$, at which point $H_k$ is the estimated image background.
Finally, the estimated image background is subtracted from the original image to obtain the background-suppressed image, denoted as $I_S$.
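For concreteness, the sketch below transcribes the background-suppression step (Equations (2) and (3)) in Python using scikit-image. The function name, the choice of three times the blur diameter for $b_{th}$, and the use of skimage's grayscale reconstruction-by-dilation (which performs the fixed-point iteration of Equation (3)) are our own illustrative choices, not the authors' implementation.

```python
import numpy as np
from skimage.morphology import opening, reconstruction

def suppress_background(image, blur_diameter=7):
    """Estimate and subtract the low-frequency background (Eqs. (2)-(3)).

    image is assumed to be a non-negative float array; blur_diameter is the
    defocus blur diameter in pixels (7 is an example value).
    """
    # Eq. (2): marker image via Top-Hat filtering; b_th is a flat structuring
    # element roughly 3-5x the defocus blur diameter (3x is an assumed choice).
    b_th = np.ones((3 * blur_diameter, 3 * blur_diameter))
    marker = image - opening(image, b_th)
    # Eq. (3): iterate H_{k+1} = (H_k dilated by a 3x3 element) capped
    # pixel-wise by I until stable; grayscale reconstruction-by-dilation
    # performs exactly this fixed-point loop.
    background = reconstruction(marker, image, method='dilation',
                                footprint=np.ones((3, 3)))
    # Background-suppressed image I_S.
    return image - background
```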

2.2. Stellar Centroid Positioning

The binary mask image is obtained by segmenting the image after background suppression using the adaptive threshold shown in Equation (4):
$T_{se} = \overline{I_S} + k \times \sigma_{I_S},$
where $\overline{I_S}$ and $\sigma_{I_S}$ are the mean and standard deviation of the background-suppressed image, respectively, and $k$ is an adjustment coefficient, generally a real value from 0.5 to 1.5.
Then, the connected components with holes in the binary mask image are filled. Holes in the connected components may be caused by bright star overexposure that triggers the CCD gain protection mechanism, resulting in a dead pixel on the star image or a dark spot in the center of the star. After filling the holes, objects with too few pixels are removed. Objects with too few pixels are usually defective pixels or faint stars that cannot be precisely positioned and which need to be eliminated if possible. Finally, the intensity-weighted centroid formula shown in Equation (5) is used to calculate the stellar centroid:
$x_c = \dfrac{\sum_{x \in D}\sum_{y \in D} x\, I(x,y)}{\sum_{x \in D}\sum_{y \in D} I(x,y)}; \quad y_c = \dfrac{\sum_{x \in D}\sum_{y \in D} y\, I(x,y)}{\sum_{x \in D}\sum_{y \in D} I(x,y)},$
where $x_c$ and $y_c$ are the row and column coordinates of the stellar centroid, respectively, and $D$ is the corresponding connected component.
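To make the centroid-positioning step concrete, the sketch below transcribes Equations (4) and (5) with SciPy; the coefficient $k$, the minimum object size, and the function name are illustrative choices rather than the authors' exact settings.

```python
import numpy as np
from scipy import ndimage

def locate_centroids(i_s, k=1.0, min_pixels=3):
    """i_s: background-suppressed image I_S as a float array."""
    # Eq. (4): adaptive threshold from the image mean and standard deviation.
    t_se = i_s.mean() + k * i_s.std()
    mask = i_s > t_se
    # Fill holes caused by overexposed star cores, then label components.
    mask = ndimage.binary_fill_holes(mask)
    labels, n = ndimage.label(mask)
    centroids = []
    for lbl in range(1, n + 1):
        component = labels == lbl
        # Drop objects with too few pixels (defective pixels, faint stars).
        if component.sum() < min_pixels:
            continue
        # Eq. (5): intensity-weighted centroid over connected component D.
        rows, cols = np.nonzero(component)
        weights = i_s[rows, cols]
        centroids.append((np.sum(rows * weights) / weights.sum(),
                          np.sum(cols * weights) / weights.sum()))
    return np.array(centroids)
```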

3. Star Image Registration

3.1. Matching Features

Stellar light is focused on the light-sensitive plane through the lens to form an image; thus, the imaging model can be regarded as a pinhole perspective projection model [40], as shown in Figure 2.
In Figure 2, $O_w x_w y_w z_w$ is the world coordinate system and $Oxyz$ is the camera coordinate system. According to the principle of pinhole imaging, the angle $\theta_i = \theta_o$ always holds, and because a star is a point source of light at an infinite distance from the camera, $\theta_i$ does not change with the movement of the camera. It follows that the angle between two stars is constant no matter when, where, and in what position they are photographed, making the angle between the stars a good matching feature, which in this paper we call the radial module feature (RMF). As the position of a star cannot be uniquely determined using only the interstellar angles, we add a rotation angle feature (RAF): the angle of counterclockwise rotation from the reference axis, formed by the host star (HS) and the starting star (SS) in the image plane, to the line connecting the HS and a neighboring star. This feature is rotationally invariant once both the HS and SS are determined. The matching features of the HS are shown in Figure 3.
Calculating the angle between stars requires introducing the focal length, which varies due to temperature, mechanical vibrations, etc. This variation causes errors in the calculation of the angle; in order to avoid this error and allow the direct use of two-dimensional coordinates to reduce the computation, the distance between the stars can be used as the RMF instead of the angle between them. It should be noted that while the distance between stars varies at different positions in the image, based on the following analysis this variation is acceptable in image registration for most astronomical telescopes. Letting the field of view angle be $fov$ and the image size be $n$ pixels, the amount of variation can be analyzed in combination with Figure 2 as follows:
$d^2 = (d_1^2 + f^2) + (d_2^2 + f^2) - 2\sqrt{d_1^2 + f^2}\sqrt{d_2^2 + f^2}\cos\theta_o = \dfrac{n^2}{4}\cot^2\dfrac{fov}{2}\left[(\tan^2\theta_1 + 1) + (\tan^2\theta_2 + 1) - 2\sqrt{\tan^2\theta_1 + 1}\sqrt{\tan^2\theta_2 + 1}\cos\theta_o\right],$
where $fov$ is the field of view angle and $\cot$ is the cotangent function. When $\theta_1 = \theta_2 = \frac{\theta_o}{2}$, the distance between the stars takes its minimum value:
$d_{min} = n \tan\dfrac{\theta_o}{2} \cot\dfrac{fov}{2}.$
When $\theta_1 = \theta_2 = \frac{fov}{2}$, the distance between the stars takes its maximum value:
$d_{max} = n \sin\dfrac{\theta_o}{2} \csc\dfrac{fov}{2}.$
Then, the maximum value of the variation when the angle between the stars is $\theta_o$ is shown in Equation (9):
$d_{err} = n \sin\dfrac{\theta_o}{2}\csc\dfrac{fov}{2} - n \tan\dfrac{\theta_o}{2}\cot\dfrac{fov}{2}.$
For a given field of view angle $fov$ and image size $n$, the distance variation takes its maximum value when $\cos^3\frac{\theta_o}{2} = \cos\frac{fov}{2}$:
$d_{err}^{max} = n \csc\dfrac{fov}{2}\left(1 - \cos^{\frac{2}{3}}\dfrac{fov}{2}\right)^{1.5}.$
The relationship between the maximum distance variation on the one hand and the field of view and image size on the other is shown in Figure 4.
Figure 4 and Equation (10) can be used as references for the subsequent adoption of angles between stars or distances between stars as the RMF. For example, according to Figure 4, for most astronomical telescopes with a field of view of less than 5 degrees, the effect on registration of variations in the distance of a star pair at different image positions can be largely avoided by setting a suitable matching threshold. Of course, for telescopes with a very large field of view or ultra-high resolution, such as the LSST, it is recommended to use the angle between stars as the RMF. These two choices are identical in the registration process except for their different methods of calculating the features, and we subsequently use the distance between the stars as the radial module feature in this paper, as shown in Figure 5.
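As a quick sanity check of Equation (10), the snippet below evaluates the maximum distance variation for this paper's simulation settings ($fov = 2.5$ degrees, $n = 1024$ pixels, used here as example values):

```python
import numpy as np

fov = np.radians(2.5)   # field of view angle in radians
n = 1024                # image size in pixels
# Eq. (10): maximum variation of the inter-star distance across the image.
d_err_max = n / np.sin(fov / 2) * (1 - np.cos(fov / 2) ** (2 / 3)) ** 1.5
print(d_err_max)        # ~0.09 pixels, far below the 2-pixel threshold
```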
As shown in Figure 5a, the RMF of the HS refers to the distances of all neighboring stars to the HS in the image plane, which is translationally and rotationally invariant; the RAF continues to refer to the angle of counterclockwise rotation from the reference axis formed by the HS and the SS to the line connecting the HS and each neighboring star in the image plane. As the distances between star pairs at different positions in the image do not vary much, the angle between the stars hardly varies with their positions; that is, the RAF is translationally and rotationally invariant once the SS is determined. Figure 5b shows the feature list of the HS containing the RMF and RAF. In summary, the RMF and RAF can be used to uniquely determine the position of each neighboring star under the condition that the HS and the SS are predetermined, which makes it easy to obtain matching stars with high confidence and a low false match rate.

3.2. Registration Process

The registration process mainly consists of determining the candidate HS and SS and verifying them while obtaining all matched stars to calculate the transformation parameters. The details of this process are described below.

3.2.1. Calculating RMF and Initial RAF

The RMF of the stars in the moving image and the fixed image are calculated, obtaining the respective RMF matrices $D_M$ and $D_F$:
$D_M = \begin{bmatrix} D_{M_1,M_1} & \cdots & D_{M_1,M_{N_m}} \\ \vdots & \ddots & \vdots \\ D_{M_{N_m},M_1} & \cdots & D_{M_{N_m},M_{N_m}} \end{bmatrix} = \begin{bmatrix} D_{M_1} \\ \vdots \\ D_{M_{N_m}} \end{bmatrix},$
$D_F = \begin{bmatrix} D_{F_1,F_1} & \cdots & D_{F_1,F_{N_f}} \\ \vdots & \ddots & \vdots \\ D_{F_{N_f},F_1} & \cdots & D_{F_{N_f},F_{N_f}} \end{bmatrix} = \begin{bmatrix} D_{F_1} \\ \vdots \\ D_{F_{N_f}} \end{bmatrix},$
where $D_{i,j}$ represents the Euclidean distance between the $i$-th star and the $j$-th star on the star image, while $N_m$ and $N_f$ denote the number of stars extracted from the moving image and the fixed image, respectively. A total of $\frac{N_m^2 + N_f^2 - N_m - N_f}{2}$ Euclidean distances need to be computed to obtain the RMF.
The initial RAF (IRAF) of all stars in the moving image and the fixed image are calculated, obtaining the respective IRAF matrices $A_M$ and $A_F$:
$A_M = \begin{bmatrix} A_{M_1,M_1} & \cdots & A_{M_1,M_{N_m}} \\ \vdots & \ddots & \vdots \\ A_{M_{N_m},M_1} & \cdots & A_{M_{N_m},M_{N_m}} \end{bmatrix} = \begin{bmatrix} A_{M_1} \\ \vdots \\ A_{M_{N_m}} \end{bmatrix},$
$A_F = \begin{bmatrix} A_{F_1,F_1} & \cdots & A_{F_1,F_{N_f}} \\ \vdots & \ddots & \vdots \\ A_{F_{N_f},F_1} & \cdots & A_{F_{N_f},F_{N_f}} \end{bmatrix} = \begin{bmatrix} A_{F_1} \\ \vdots \\ A_{F_{N_f}} \end{bmatrix},$
where $A_{i,j}$ represents the initial rotation angle of the $j$-th star when the $i$-th star is the HS; furthermore, it is specified that $A_{i,j} = 0$ when $i = j$. A total of $\frac{N_m^2 + N_f^2 - N_m - N_f}{2}$ vector angles need to be computed to obtain the IRAF. The IRAF is the rotation angle calculated using the image horizontal axis instead of the reference axis formed by the HS and the SS. As shown in Figure 6, it differs from the RAF by a fixed angle, namely the angle of counterclockwise rotation from the image horizontal axis to the line connecting the HS and the SS. The RAF is obtained by subtracting this angle directly from the IRAF once the SS is determined.
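A minimal sketch of this step is given below, assuming the extracted centroids of one image as an (N, 2) array; the function name and the axis/sign convention for the angles are our own illustrative choices.

```python
import numpy as np

def rmf_and_iraf(stars):
    """stars: (N, 2) array of centroid (row, col) coordinates."""
    # Eqs. (11)-(12): pairwise Euclidean distances; row i is the RMF of
    # star i taken as host star (N(N-1)/2 unique distances per image).
    diff = stars[None, :, :] - stars[:, None, :]   # diff[i, j] = star_j - star_i
    d = np.hypot(diff[..., 0], diff[..., 1])
    # Eqs. (13)-(14): initial rotation angles measured against the image
    # horizontal axis, with the diagonal (i == j) defined as 0; the sign
    # convention here is an assumption for illustration.
    a = np.degrees(np.arctan2(diff[..., 0], diff[..., 1]))
    np.fill_diagonal(a, 0.0)
    return d, a
```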

3.2.2. Determining the Candidate HS and SS

The determination of the HS and the SS seeks to find two matching pairs of stars in the image pair to be registered, one of which serves as the HS and the other as the SS; this is the key step in the registration process described in this paper. To ensure robustness, a voting algorithm based on the RMF is used to obtain the candidate HS and SS. The following is an example of the process of determining the candidate matching star in the fixed image for the $i$-th star in the moving image.
The RMF $D_{M_i}$ (row $i$ of $D_M$) of the $i$-th star in the moving image is taken and its matching degree with each star in the fixed image is counted separately, as shown in Equation (15):
$U_{i,j} = \left| D_{M_i} \cap D_{F_j} \right|,$
where $D_{M_i}$ and $D_{F_j}$ are calculated by Equations (11) and (12), respectively, and $|\cdot|$ denotes the cardinality of the set, i.e., the number of elements. Due to the positioning error and to the distance between the stars being different at different image positions, the intersection operation here does not require two elements to be exactly equal; as long as the difference between the two elements is within the set threshold, they are considered to be the same. Each vote requires a total of $N_f$ set intersection operations, where the two sets in each intersection have $N_m$ and $N_f$ elements, respectively.
Considering the existence of false stars, noise, and other interference, as well as the fact that matching by the RMF alone is not unique, in order to avoid missing the correct match as much as possible, all stars with a matching degree greater than the threshold are selected here as candidate matching stars. The set $J_F$ of candidate matching stars in the fixed image is provided by Equation (16):
$J_F = \left\{ j \mid U_{i,j} > \max(U_i) - \mathrm{std}(U_i),\ j = 1, \ldots, N_f \right\},$
where $U_i = \{U_{i,1}, \ldots, U_{i,N_f}\}$ denotes the matching degree set. We select two different stars in the moving image as the HS and SS, respectively, and determine the corresponding candidate HS and SS in the fixed image using the candidate matching star determination method described above.
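The voting step of Equations (15) and (16) can be sketched as follows; the tolerant set intersection is approximated by counting fixed-image features that lie within a tolerance of some moving-image feature, and the 2-pixel tolerance follows the threshold used in this paper (names are illustrative):

```python
import numpy as np

def candidate_matches(d_m_i, d_f, tol=2.0):
    """d_m_i: RMF of the i-th moving-image star (row i of D_M).
    d_f: fixed-image RMF matrix, one row per star."""
    u = np.empty(d_f.shape[0])
    for j, d_f_j in enumerate(d_f):
        # Eq. (15): tolerant intersection cardinality, counting features of
        # D_F_j within tol of some feature of D_M_i.
        u[j] = np.sum(np.abs(d_f_j[:, None] - d_m_i[None, :]).min(axis=1) < tol)
    # Eq. (16): keep every star whose vote exceeds max(U_i) - std(U_i).
    return np.nonzero(u > u.max() - u.std())[0]
```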

3.2.3. Verifying and Obtaining Matching Star Pairs

The candidate HS and SS combinations need to be rigorously verified to ensure that a sufficiently reliable correct combination is obtained; this process needs to avoid unnecessary redundancy as much as possible in order to ensure real-time performance. Let the candidate HS and SS in the moving image be $M_h$ and $M_s$, respectively, and let the corresponding candidate matching HS and SS in the fixed image be $F_h$ and $F_s$, respectively. First, we need to verify whether the difference between the HS-SS distances in the moving image and the fixed image is less than the threshold value, as shown in Equation (17). Although this step is simple, it can eliminate a large number of incorrect combinations of the HS and the SS and ensure strong real-time performance.
$\left| D_{M_h,M_s} - D_{F_h,F_s} \right| < T_d.$
Here, $D_{M_h,M_s}$ and $D_{F_h,F_s}$ denote the $M_s$-th and $F_s$-th features of the RMF of the HS in the moving image and the fixed image, respectively (i.e., the distances between the HS and the SS), and $T_d$ is the distance threshold, which depends on the positioning error of the stellar centroid. Assuming that the calculated stellar centroids in both the moving image and the fixed image obey a uniform distribution within a circle of radius $R$ centered on the true centroid (with $R$ representing the degree of deviation of the stellar centroid positioning), $T_d$ is $\frac{128R}{45\pi}$, which can be made appropriately larger in practical applications to ensure robustness. In this paper, the distance threshold is set to two pixels.
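The constant $\frac{128R}{45\pi} \approx 0.905R$ is the expected distance between two points drawn independently and uniformly from a disk of radius $R$, which a short Monte Carlo check confirms (the sampling code below is illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
R, n = 1.0, 1_000_000

def uniform_disk(n):
    # r = R*sqrt(u) makes the samples uniform over the disk's area.
    r = R * np.sqrt(rng.uniform(size=n))
    t = rng.uniform(0, 2 * np.pi, size=n)
    return np.column_stack([r * np.cos(t), r * np.sin(t)])

d = np.linalg.norm(uniform_disk(n) - uniform_disk(n), axis=1)
print(d.mean(), 128 * R / (45 * np.pi))   # both ~0.905R
```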
If the initial test is passed, a secondary test is performed in combination with the RAF. From the IRAF matrices $A_M$ and $A_F$ of the moving image and fixed image, the IRAFs $A_{M_h}$ (row $M_h$ of $A_M$) and $A_{F_h}$ (row $F_h$ of $A_F$) of the corresponding HS candidates are taken, and the RAFs $A'_{M_h}$ and $A'_{F_h}$ are obtained by correcting the IRAFs according to the candidate SS, as shown in Equations (18) and (19):
$A'_{M_h} = \mathrm{mod}\left( A_{M_h} - A_{M_h,M_s} + 180,\ 360 \right) - 180, \qquad A'_{M_h,M_h} = 0,$
$A'_{F_h} = \mathrm{mod}\left( A_{F_h} - A_{F_h,F_s} + 180,\ 360 \right) - 180, \qquad A'_{F_h,F_h} = 0.$
Thus far, all matching features, i.e., the RMF and RAF of the host star in the moving and fixed images, have been obtained, as shown in Equations (20) and (21):
$F_M = \begin{bmatrix} D_{M_h} \\ A'_{M_h} \end{bmatrix} = \begin{bmatrix} D_{M_h,M_1} & \cdots & D_{M_h,M_{N_m}} \\ A'_{M_h,M_1} & \cdots & A'_{M_h,M_{N_m}} \end{bmatrix},$
$F_F = \begin{bmatrix} D_{F_h} \\ A'_{F_h} \end{bmatrix} = \begin{bmatrix} D_{F_h,F_1} & \cdots & D_{F_h,F_{N_f}} \\ A'_{F_h,F_1} & \cdots & A'_{F_h,F_{N_f}} \end{bmatrix},$
where $F_M$ and $F_F$ represent the matching features of the HS in the moving image and the fixed image, respectively, which are equivalent to the set of neighboring star coordinates with the HS as the origin and the line connecting the HS and the SS as the polar axis.
Next, we calculate the pairwise Euclidean distances of the feature points in $F_M$ and $F_F$ to obtain the distance verification matrix
$D_T = \begin{bmatrix} d_{M_1,F_1} & \cdots & d_{M_1,F_{N_f}} \\ \vdots & \ddots & \vdots \\ d_{M_{N_m},F_1} & \cdots & d_{M_{N_m},F_{N_f}} \end{bmatrix} = \begin{bmatrix} d_{M_1} \\ \vdots \\ d_{M_{N_m}} \end{bmatrix} = \begin{bmatrix} d_{F_1} \\ \vdots \\ d_{F_{N_f}} \end{bmatrix}^T,$
where $d_{i,j}$ denotes the distance between the $i$-th feature point in the moving image and the $j$-th feature point in the fixed image.
The candidate matching marker matrix is obtained using Equation (23):
$F_T = \left( \begin{bmatrix} d_{M_1} \\ \vdots \\ d_{M_{N_m}} \end{bmatrix} == \begin{bmatrix} \min(d_{M_1}) \\ \vdots \\ \min(d_{M_{N_m}}) \end{bmatrix} \right) \ \&\ \left( \begin{bmatrix} d_{F_1} \\ \vdots \\ d_{F_{N_f}} \end{bmatrix}^T == \begin{bmatrix} \min(d_{F_1}) \\ \vdots \\ \min(d_{F_{N_f}}) \end{bmatrix}^T \right),$
where the operator $==$ determines whether two elements are equal (its result being a Boolean value) and $\&$ is the logical AND operator. Note that the matches in the candidate match matrix $F_T$ obtained using the above method are all one-to-one.
Finally, the candidate matches whose distances exceed the threshold are eliminated using Equation (24) to obtain the match marker matrix
$F = F_T \ \&\ (D_T < T_d),$
where the distance threshold $T_d$ is the same as the distance threshold in Equation (17).
If $F_{ij} = \mathrm{true}$, the $i$-th star in the moving image matches the $j$-th star in the fixed image; thus, the number of true entries in $F$ is the number of matched star pairs. If the number of matched star pairs is greater than the matching threshold $T_N$, the registration is judged to be successful; otherwise, the remaining combinations of HS and SS candidates continue to be verified. If all combinations fail verification, the voting strategy is used to determine the candidate matching stars for the remaining stars in the moving image, and the above steps are repeated until verification is passed or there are no remaining stars in the moving image to be voted on. The selection of the matching threshold $T_N$ affects both the registration quality and the robustness of the algorithm; too large a value may reduce the registration rate, while too small a value may lead to misregistration. Thus, the choice ought to be proportional to the number of extracted stars; in this paper, the setting is $T_N = \max\left(100, \frac{\min(N_m, N_f)}{3}\right)$, where $N_m$ and $N_f$ denote the number of stars extracted from the moving image and the fixed image, respectively.
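The secondary test of Equations (18)-(24) condenses into a short sketch. The function below is illustrative (names and the degree convention are ours), assuming as inputs the HS rows $D_{M_h}$, $A_{M_h}$, $D_{F_h}$, $A_{F_h}$ and the SS indices:

```python
import numpy as np

def verify(d_m_h, a_m_h, m_s, d_f_h, a_f_h, f_s, t_d=2.0):
    # Eqs. (18)-(19): RAF = IRAF minus the HS-SS angle, wrapped to (-180, 180].
    raf_m = np.mod(a_m_h - a_m_h[m_s] + 180, 360) - 180
    raf_f = np.mod(a_f_h - a_f_h[f_s] + 180, 360) - 180
    # Eqs. (20)-(21): features are the neighbors' polar coordinates about the
    # HS with the HS-SS line as polar axis; convert to Cartesian for Eq. (22).
    f_m = np.column_stack([d_m_h * np.cos(np.radians(raf_m)),
                           d_m_h * np.sin(np.radians(raf_m))])
    f_f = np.column_stack([d_f_h * np.cos(np.radians(raf_f)),
                           d_f_h * np.sin(np.radians(raf_f))])
    # Eq. (22): pairwise distance verification matrix D_T.
    d_t = np.linalg.norm(f_m[:, None, :] - f_f[None, :, :], axis=2)
    # Eq. (23): one-to-one candidates, the minimum of both row and column.
    f_t = ((d_t == d_t.min(axis=1, keepdims=True)) &
           (d_t == d_t.min(axis=0, keepdims=True)))
    # Eq. (24): final match marker matrix F after thresholding.
    return f_t & (d_t < t_d)
```

The number of true entries in the returned matrix is then compared against $T_N$ as described above.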

3.2.4. Maximum Matching Number Registration

This step is performed when the test cannot be passed even after all stars in the moving image have been voted on. Failure does not necessarily mean that the image pair truly cannot be registered; it may instead be due to the matching threshold $T_N$ being too large. To ensure the matching quality, $T_N$ is usually set appropriately large and proportional to the number of extracted stars; when there are too many false stars among the extracted stars or insufficient overlapping areas between images, $T_N$ is likely to be too large, resulting in failure to pass the test. To avoid this situation, an additional step is added to ensure the success of the registration. This step plays a very important role in the robustness of the algorithm.
After each execution of the secondary test, the respective serial numbers of the HS and the SS instances in the moving image and the fixed image are recorded, along with the corresponding number of matched star pairs. If all the stars in the moving image have been voted on and the test cannot be passed, the record with the highest number of matched star pairs is searched for; if the number of matched star pairs in this record is greater than three, the information of all matching star pairs is obtained using the aforementioned algorithm as the final matching result.

3.2.5. Calculating the Transformation Parameters

Most feature-based registration algorithms use a strategy of initial rough matching followed by optimization to reject the mismatched points [41]. As the algorithm in this paper obtains matched star pairs with high confidence and a low mismatch rate, the step of rejecting mismatched pairs is not needed, and the transformation parameters can be calculated directly.
In this paper, the homography transformation model [25,40] is used, and the transformation parameters between the moving image and the fixed image are solved using the Total Least Squares (TLS) method. If the Cartesian coordinates of the matched star pairs in the moving image and the fixed image are as shown in Equation (25),
$C_M = \begin{bmatrix} x_1^M & \cdots & x_n^M \\ y_1^M & \cdots & y_n^M \end{bmatrix}; \quad C_F = \begin{bmatrix} x_1^F & \cdots & x_n^F \\ y_1^F & \cdots & y_n^F \end{bmatrix},$
where $x_i^M$ and $y_i^M$ are the row and column coordinates of the star in the moving image, respectively, and $x_i^F$ and $y_i^F$ are the corresponding row and column coordinates of the star in the fixed image, then
$x_i^F = \dfrac{h_1 x_i^M + h_2 y_i^M + h_3}{h_7 x_i^M + h_8 y_i^M + 1}; \quad y_i^F = \dfrac{h_4 x_i^M + h_5 y_i^M + h_6}{h_7 x_i^M + h_8 y_i^M + 1}.$
We can now construct the system of equations $AH = b$ shown in Equation (27):
$\begin{bmatrix} x_1^M & y_1^M & 1 & 0 & 0 & 0 & -x_1^M x_1^F & -y_1^M x_1^F \\ 0 & 0 & 0 & x_1^M & y_1^M & 1 & -x_1^M y_1^F & -y_1^M y_1^F \\ \vdots & & & & & & & \vdots \\ x_i^M & y_i^M & 1 & 0 & 0 & 0 & -x_i^M x_i^F & -y_i^M x_i^F \\ 0 & 0 & 0 & x_i^M & y_i^M & 1 & -x_i^M y_i^F & -y_i^M y_i^F \\ \vdots & & & & & & & \vdots \\ x_n^M & y_n^M & 1 & 0 & 0 & 0 & -x_n^M x_n^F & -y_n^M x_n^F \\ 0 & 0 & 0 & x_n^M & y_n^M & 1 & -x_n^M y_n^F & -y_n^M y_n^F \end{bmatrix} \begin{bmatrix} h_1 \\ h_2 \\ \vdots \\ h_8 \end{bmatrix} = \begin{bmatrix} x_1^F \\ y_1^F \\ \vdots \\ x_n^F \\ y_n^F \end{bmatrix}.$
The transformation parameters are then solved using the TLS method. Letting $B = [A \mid b]$, we perform singular value decomposition on $B$, as shown in Equation (28):
$B = U \Sigma V^T.$
If the right singular vector corresponding to the minimum singular value is $V_s = (V_{s0}, \ldots, V_{s8})^T$, then the solution for the transformation parameters is as shown in Equation (29):
$H = -\dfrac{1}{V_{s8}} \left( V_{s0}, \ldots, V_{s7} \right)^T.$
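A compact sketch of this TLS solution (Equations (27)-(29)) is given below, assuming matched coordinate arrays as in Equation (25); it is an illustrative transcription, not the authors' code:

```python
import numpy as np

def solve_homography_tls(c_m, c_f):
    """c_m, c_f: (n, 2) matched (row, col) coordinates in the moving and
    fixed images, n >= 4."""
    x_m, y_m = c_m[:, 0], c_m[:, 1]
    x_f, y_f = c_f[:, 0], c_f[:, 1]
    zeros, ones = np.zeros_like(x_m), np.ones_like(x_m)
    # Eq. (27): two rows of A per matched star pair.
    a_top = np.column_stack([x_m, y_m, ones, zeros, zeros, zeros,
                             -x_m * x_f, -y_m * x_f])
    a_bot = np.column_stack([zeros, zeros, zeros, x_m, y_m, ones,
                             -x_m * y_f, -y_m * y_f])
    a = np.vstack([a_top, a_bot])
    b = np.concatenate([x_f, y_f])
    # Eq. (28): SVD of the augmented matrix B = [A | b].
    _, _, vt = np.linalg.svd(np.column_stack([a, b]))
    v_s = vt[-1]   # right singular vector of the smallest singular value
    # Eq. (29): TLS solution, normalized by the last component.
    h = -v_s[:8] / v_s[8]
    return np.append(h, 1.0).reshape(3, 3)
```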
For easier understanding, the flow chart of the registration algorithm used in this paper is shown in Figure 7.

4. Simulation and Real Data Testing

The robustness, accuracy, and real-time performance of the proposed algorithm were rigorously and comprehensively tested from the three perspectives of registration rate, registration accuracy, and running time using both simulated and real data. We compared our algorithm with the classical top-performing NCC [14], FMT [16], SURF [20], and SPSG [33,34] registration algorithms. The PC specifications were as follows: CPU, Montage Jintide(R) C6248R ×4; RAM, 128 GB; GPU, NVIDIA A100-40GB ×4. The GPUs were mainly used to run the SPSG comparison algorithm.

4.1. Simulation Data Testing

The simulation parameters were as follows: field of view (FoV), 2.5 degrees; image size, 1024 × 1024; detection capability (DC), 13 Mv; 90% of the energy of a star image concentrated within a 3 × 3 pixel area; maximum defocus blur diameter, 7 pixels; star image trailing length, 15 pixels; and trailing angle, 45 degrees. The image background was simulated using Equation (30):
$B = [x^3, x^2 y, x y^2, y^3, x^2, x y, y^2, x, y, 1] \begin{bmatrix} p_9 \\ \vdots \\ p_0 \end{bmatrix},$
where $x$ and $y$ denote the column and row coordinates of the pixel, respectively, and $[p_9, \ldots, p_0]^T$ is the parameter vector, set from statistics of 1000 real image backgrounds acquired at different times; the specific values are shown in Table 1, where $w$ is a random real variable with a mean of 0 and a variance of 1.
The ratio of high-frequency noise (speckle-shaped, to simulate cosmic radiation noise and NEO objects), the ratio of hot pixels to total pixels, and the inter-image rotation angle were configured according to the different test scenarios. The catalog used for the simulation was a union of Tycho-2, the Third U.S. Naval Observatory CCD Astrograph Catalog (UCAC3), and the Yale Bright Star Catalog, Fifth Edition (BSC5). The center of the field of view points to the Geostationary Orbit (GEO) belt, an orbital belt with declination near 0 degrees and right ascension from 0 to 360 degrees, where the stellar density varies widely, allowing the performance of the registration algorithms to be verified under different stellar density scenarios. The number of stars in the field of view corresponding to different right ascensions is shown in Figure 8.
Five scenarios were tested using the simulation data: rotation, overlapping regions, false stars, position deviation, and magnitude deviation, all of which are frequently encountered in star image registration. The registration rate, registration accuracy, and run time were calculated, where the registration rate refers to the ratio of the number of registered image pairs to the total number of pairs, the registration accuracy refers to the average distance between the matched reference stars after registration (the true values of the reference stars are provided by the image simulation), and the run time refers to the average run time of the registration algorithm. It should be noted that the positions of the stars were rounded in the simulations, causing a rounding error that follows a uniform distribution over the interval [−0.5, 0.5]; thus, if there is rotation or translation between the simulated image pairs to be registered, the resulting registration error has a mean value of 0.52 pixels, as shown in Equation (31):
$\int_{-0.5}^{0.5}\int_{-0.5}^{0.5}\int_{-0.5}^{0.5}\int_{-0.5}^{0.5} \sqrt{(x_M - x_F)^2 + (y_M - y_F)^2}\, \mathrm{d}x_M\, \mathrm{d}x_F\, \mathrm{d}y_M\, \mathrm{d}y_F \approx 0.52.$

4.1.1. Rotation

The purpose of this test was to evaluate the performance of the registration algorithms at different rotation angles between images. The specific configuration was as follows: rotation angle between images, 1 to 360 degrees with a step size of 1 degree; 200 pairs of images tested for each rotation angle (center point: declination 0 degrees; right ascension, 0 to 358.2 degrees with a step size of 1.8 degrees); 72,000 total pairs of images. No high-frequency noise or hot pixels were added, and the rest of the parameters used their default values. Figure 9, Figure 10 and Figure 11 show the statistics for the registration rate, registration accuracy, and running time of each algorithm at the different rotation angles used in this test.
According to the statistical results, the performance of the proposed algorithm is optimal and is completely unaffected by the rotation angle. The registration rate is 100% for all rotation angles; the registration accuracy is about 0.5 pixels, which is mainly caused by the rounding error and the positioning error; and the running time is about 0.15 s, which includes the running time of background suppression, star centroid extraction, and registration, and meets real-time requirements when compared to the exposure times of several seconds or even ten seconds for star images. The registration performance of all the compared algorithms in this test was poor, and only a few were effective close to 0 and 180 degrees; in particular, the results of the grayscale-based NCC algorithm were completely invalid due to its algorithmic principle. It should be noted that the performance of the SURF and FMT algorithms is periodic with a period of 180 degrees, while that of the SPSG algorithm has a period of 360 degrees.

4.1.2. Overlapping Regions

The purpose of this test was to evaluate the performance of the registration algorithms with different overlapping regions between images. The specific configuration was as follows: angle between the center directions of the images to be registered, 0 to 2.5 degrees with a step size of 0.01 degrees, corresponding to an overlapping region of 100% to 0% of the image; 200 pairs of images tested for each angle (the right ascension and declination of the center of the fixed image were 0 degrees, and the centers of the moving images were uniformly distributed on a circle centered on the center of the fixed image); 50,200 total pairs of images; no high-frequency noise or hot pixels added; and no rotation between pairs of images. The rest of the parameters used their default values. Figure 12, Figure 13 and Figure 14 show the statistics for the registration rate, registration accuracy, and running time of each algorithm for the different overlapping regions in this test. Note that the overlapping region here refers to the overlapping region of the circular field of view, while the simulated image is square; thus, the actual overlap region is larger, and even when the overlap region of the circular field of view is 0 there may be a small overlap at the corners of the square field of view.
According to the results in Figure 12, the proposed algorithm is the most robust to different overlapping regions, and can maintain a 100% registration rate when the proportion of overlapping regions is greater than 28% (i.e., the angle is less than 1.5 degrees); the best-performing of the other algorithms in the same case, NCC, achieves only about 30%. The FMT and SPSG algorithms outperform the NCC and SURF algorithms when the overlap region is large, while the latter two outperform the FMT and SPSG algorithms when the overlap region is small, which seems to be contrary to the previous experience that grayscale-based registration algorithms generally have lower registration rates than feature-based registration algorithms when the overlap region is small. The registration accuracy of the proposed algorithm is maintained at about 0.5 pixels, which is mainly caused by the rounding error and the positioning error; the registration accuracy of the NCC algorithm is similar to that of the proposed algorithm, while the errors of the rest of the compared algorithms are slightly larger. In particular, the registration accuracy of the SPSG algorithm deteriorates sharply when the overlapping area is insufficient. In terms of running time, the proposed algorithm takes about 0.15 s when the overlap region is larger than 60%; the running time then gradually increases due to the increase in the number of unmatched stars. The running time is less than 1 s when the overlap region is larger than 34%, less than 3 s when it is larger than 14%, and no more than 6 s in the worst case. The real-time performance of the feature-based SURF and SPSG algorithms is significantly better than that of the grayscale-based FMT and NCC algorithms.

4.1.3. False Stars

The purpose of this test was to evaluate the performance of the registration algorithms at different false star densities. Here, 'false stars' mainly refers to unmatched interference such as high-frequency noise (cosmic ray noise, defective pixels, etc.), NEO objects, and hot pixels, which were simulated by adding speckle-like noise and highlighted single pixels. The specific configuration was as follows: speckle-like noise rate, 0 to $5 \times 10^{-4}$ with a step of $2.5 \times 10^{-6}$; highlighted single pixel rate, 0 to $5 \times 10^{-5}$ with a step of $2.5 \times 10^{-7}$; total false star rate, 0 to $5.5 \times 10^{-4}$ with a step of $2.75 \times 10^{-6}$. The false star rate is the ratio of the number of false stars to the total number of pixels in the image. Each false star rate was tested on 200 pairs of images (center point: declination 0 degrees; right ascension, 0 to 358.2 degrees with a step size of 1.8 degrees), for a total of 40,200 image pairs; there was no rotation between image pairs, and the rest of the parameters used their default values. Figure 15, Figure 16 and Figure 17 show the statistics for the registration rate, registration accuracy, and running time of each algorithm in this test for different numbers of false stars on the image (equivalent to the false star rate) and different ratios of the number of false stars to the total number of objects (the sum of the numbers of real and false stars).
As can be seen from Figure 15, the proposed algorithm maintains the optimal registration rate of 100%, while the registration rates of the feature-based SPSG and SURF algorithms are significantly better than those of the grayscale-based NCC and FMT algorithms and are less affected by false stars. The registration rates of the NCC and FMT algorithms are more affected by false stars; in particular, the registration rate of the FMT algorithm deteriorates sharply to near 0 in the cases with more false stars. As can be seen from Figure 16, because there is no translation or rotation in this scene, it is not affected by the rounding error, and the registration accuracy is better than 0.07 pixels for all of the algorithms except SPSG and FMT, which are only slightly worse. As can be seen from Figure 17, the feature-based algorithms are significantly better overall than the grayscale-based algorithms in terms of run time. The run time of the proposed algorithm gradually increases from 0.15 to 0.3 s, which, while slightly inferior to the SURF algorithm, fully satisfies real-time performance requirements.

4.1.4. Positional Deviation

The purpose of this test was to evaluate the performance of the registration algorithms under different positional deviations; these were simulated by adding Gaussian noise with a mean of 0 pixels and a standard deviation of $\sigma$ pixels to the row and column coordinates of each star. The specific configuration was as follows: $0 \le \sigma \le 6$ with a step size of 0.01 pixels; 200 pairs of images tested for each $\sigma$ value (center point: declination 0 degrees; right ascension, 0 to 358.2 degrees with a step size of 1.8 degrees); 120,200 pairs of images in total; no high-frequency noise or hot pixels added; and no rotation between pairs of images. The rest of the parameters used their default values. Figure 18, Figure 19 and Figure 20 show the statistics for the registration rate, registration accuracy, and run time of each algorithm for the different positional deviations in this test.
As can be seen from Figure 18, the proposed algorithm maintains the optimal registration rate of 100%; among the compared algorithms, SPSG has a better registration rate of more than 80%, FMT is the worst, and, slightly surprisingly, NCC has a better registration rate than SURF. In terms of registration accuracy, the reference accuracy calculated from the true simulated coordinates ("Racc" in Figure 19, indicated by the solid black line) was added to the statistics as a comparison because of the position deviation preset during the simulation of this scene. As can be seen from Figure 19, the proposed algorithm and SPSG are the best in terms of registration accuracy, with little difference from the reference accuracy; FMT is the worst, and NCC and SURF have about the same registration accuracy. As can be seen from Figure 20, all the compared algorithms are relatively smooth in terms of run time, and the feature-based SURF and SPSG algorithms are better than the grayscale-based NCC and FMT algorithms. The proposed algorithm is comparable to SURF, at about 0.15 s, when the positional deviation is less than two pixels; the run time gradually increases to about 4 s when the deviation is greater than two pixels because the combination of the HS and the SS needs to be verified several times.

4.1.5. Magnitude Deviation

The purpose of this test was to evaluate the performance of the registration algorithms with different magnitude deviations, simulated by adding Gaussian noise with a mean of 0 Mv and a standard deviation of $\sigma$ Mv to the true magnitude values. The specific configuration was as follows: $0 \le \sigma \le 2$ with a step size of 0.01 Mv; 200 pairs of images tested for each $\sigma$ value (center point: declination 0 degrees; right ascension, 0 to 358.2 degrees with a step size of 1.8 degrees); 40,200 pairs of images in total; no high-frequency noise or hot pixels added; and no rotation between image pairs. The remaining parameters used their default values. Figure 21, Figure 22 and Figure 23 show the statistics for the registration rate, registration accuracy, and run time of each algorithm for the different magnitude deviations in this test.
As can be seen from Figure 21, the proposed algorithm maintains the optimal registration rate of 100%; among the other algorithms, SPSG performs best, FMT performs worst, and NCC has a better registration rate than the SURF algorithm, which seems to be contrary to the previous experience that grayscale-based registration is inferior to feature-based registration when there is a deviation in magnitude (or brightness). As can be seen from Figure 22, the registration accuracy is high for all of the algorithms except SPSG because the scene has no translation or rotation and as such is not affected by the rounding error. When the magnitude deviation is greater than 0.5 Mv, the registration accuracy of the proposed algorithm ranges from 0.05 to 0.1 pixels, NCC and SURF fluctuate slightly, and SPSG gradually deteriorates to 3.5 pixels. As can be seen from Figure 23, the run times of all the compared algorithms are essentially the same as in the other test scenarios. The proposed algorithm is optimal, with a run time of less than 0.2 s when the magnitude deviation is less than 1.6 Mv; while the run time increases gradually for magnitude deviations greater than 1.6 Mv, it does not exceed 0.7 s.

4.2. Real Data Testing

The real data parameters were as follows: field of view, 2.5 degrees; image size, 1024 × 1024; and detection capability, about 13 Mv; the data comprise a total of 338,577 real star images from 523 tasks over 31 days. The parameters of the simulated data were set with reference to the real data, and the real images were effectively taken in the GEO belt; thus, Figure 8 reflects the range of the number of stars in the field of view of the real images. The more complex imaging environment of the real data is better able to test the comprehensive performance of the algorithms; in particular, realistic complex backgrounds are difficult to simulate, and the real images can effectively test an algorithm's registration performance for star images with complex backgrounds. As with the simulated data, the three indexes of registration rate, registration accuracy, and run time were counted in the test using the real data. The main difference between these two cases is that the true values of the reference points cannot be provided when calculating the registration accuracy as they can for the simulation data; as such, the result of the algorithm with the largest number of matched star pairs among all algorithms is selected as the reference. It should be noted that, due to the limitations of the telescope operation mode, there was no rotation between the real images; however, the performance of the algorithms in the presence of image rotation was adequately tested on the simulated data. Figure 24, Figure 25 and Figure 26 show the statistics for the registration rate, registration accuracy, and run time of each algorithm on these different tasks.
According to the statistical results, the proposed algorithm is optimal in terms of registration rate, registration accuracy, and running time; the most unexpected result is that the SURF algorithm is completely ineffective at registering real star images, probably because it cannot adapt to their complex background. As can be seen in Figure 24, the proposed algorithm maintains 100% registration rate, while among the other algorithms NCC achieves a registration rate of more than 98% on most of the tasks, SPSG is able to achieve better than 92% on most of the tasks despite slight fluctuations, FMT’s registration rate fluctuates greatly between 40% and 100%, and SURF is completely ineffective. As can be seen from Figure 25, the proposed algorithm has the best performance in terms of registration accuracy at about 0.1 pixels for all tasks; as there is no rounding error in the real data, this error is mainly caused by positioning errors. The accuracy of the FMT algorithm is between 0.15 and 0.2 pixels for most tasks, the accuracy of the NCC algorithm is between 0.3 and 0.5 pixels, mostly around 0.3, and the SPSG algorithm has poor registration accuracy between 0.4 and 1.85 pixels. As can be seen from Figure 26, the proposed algorithm is optimal in terms of run time at around 0.15 s, the SPSG algorithm is next at 0.3 to 0.7 s, and the grayscale-based NCC and FMT algorithms have the longest run times of 1.1 to 1.8 s.

5. Discussion

This section discusses the comprehensive performance of star image registration algorithms. In order to obtain the comprehensive performance scores of the proposed algorithm and the compared algorithms, we carried out a comprehensive evaluation of each algorithm based on the registration rate, registration accuracy, and run time when processing simulated images with rotation, overlapping area change, false stars, position deviation, magnitude deviation, and real images. For this, we used the scoring strategy in Equation (32):
$B = \dfrac{1}{N} \sum_{i=1}^{N} S_i^s \left( w_a \times S_i^a + w_t \times S_i^t \right),$
where $N$ is the number of registered image pairs; $S_i^s$, $S_i^a$, and $S_i^t$ are the registration success, registration accuracy, and run time scores of the $i$-th pair, respectively; and $w_a = 0.8$ and $w_t = 0.2$ are the weights of registration accuracy and run time, respectively. Considering that in practice the registration accuracy reflects the core performance of the algorithm better than the run time, and that the run time can be improved through parallel design, upgraded hardware, or additional hardware, the weight assigned to the registration accuracy was greater. The specific definitions of $S_i^s$, $S_i^a$, and $S_i^t$ are as follows:
$S_i^s = \begin{cases} 1, & \text{Success} \\ 0, & \text{Fail} \end{cases},$
$S_i^a = 100 \times \max\left( 0,\ 1 - \dfrac{\max(0,\ a_i - Ra_i)}{2 \times \max(0.5,\ Ra_i)} \right),$
where $a_i$ is the registration accuracy of the registration algorithm and $Ra_i$ is the reference accuracy; and
$S_i^t = \begin{cases} 100, & 0 \le t_i \le 0.2\ \mathrm{s} \\ 90, & 0.2 < t_i \le 0.5\ \mathrm{s} \\ 70, & 0.5 < t_i \le 1\ \mathrm{s} \\ 50, & 1 < t_i \le 3\ \mathrm{s} \\ 30, & t_i > 3\ \mathrm{s} \end{cases},$
where $t_i$ is the run time of the registration algorithm. Table 2 shows the statistics of the comprehensive performance of the algorithms; "OLR" represents the overlapping region test, "PT" represents the algorithm proposed in this paper, and "Score" is the average score of all test items used to evaluate the comprehensive performance of the algorithms. A radar chart displaying the comprehensive performance of the tested star image registration algorithms is shown in Figure 27.
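For reproducibility, the scoring strategy of Equations (32)-(35) can be transcribed directly; the sketch below assumes per-pair arrays of success flags, accuracies, reference accuracies, and run times (the function name and input layout are illustrative):

```python
import numpy as np

def comprehensive_score(success, accuracy, ref_accuracy, run_time,
                        w_a=0.8, w_t=0.2):
    """All arguments are arrays with one entry per registered image pair."""
    s_s = np.asarray(success, dtype=float)                       # Eq. (33)
    a, ra = np.asarray(accuracy), np.asarray(ref_accuracy)
    s_a = 100 * np.maximum(0, 1 - np.maximum(0, a - ra)
                           / (2 * np.maximum(0.5, ra)))          # Eq. (34)
    t = np.asarray(run_time)
    s_t = np.select([t <= 0.2, t <= 0.5, t <= 1, t <= 3],
                    [100, 90, 70, 50], default=30)               # Eq. (35)
    return np.mean(s_s * (w_a * s_a + w_t * s_t))                # Eq. (32)
```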
From Table 2 and Figure 27, it can be seen that the comprehensive performance of the proposed algorithm is significantly better than that of all the compared algorithms in all scenes; its performance is more balanced, being only slightly worse on the overlapping region change test. All of the other algorithms have poor processing performance in the presence of rotation and insufficient overlapping regions between images. The NCC algorithm performs better when processing images with false stars and real images without rotation. Although the FMT algorithm does not perform well when processing simulated images, it performs reasonably well when processing real images without rotation by virtue of its high registration accuracy. The SURF algorithm performs better when processing images with false stars; however, it cannot register real images, probably because it cannot adapt to complex real backgrounds. Thus, a high-performance background suppression algorithm must be used in order to register real star images with SURF. The SPSG algorithm performs reasonably well when processing images with false stars and positional deviations.
The shortcomings of the algorithm and the scenarios in which it is applicable are reported below. Figure 14 and Figure 20 show that the running time of the proposed algorithm is relatively long in extreme cases where the overlap area is too small or the position deviation is too large; however, the probability of such situations (i.e., an overlap area of less than 30% or a positioning error of more than four pixels) arising in real scenarios is not large. When the overlapping area is too small, our results suggest that the registration rate and real-time performance can be improved by adding prior information to help determine the registrable area in advance. When the performance of the algorithm is affected by a large positioning error, the appropriateness of the stellar centroid positioning algorithm needs to be reconsidered; this problem can also be mitigated by appropriately increasing the distance threshold. In addition, the method described herein is applicable to all application scenarios where there are point targets in the image, such as astronomical images, and is applicable in principle to the problem of registering images of large targets when a set of matchable points can be extracted during preprocessing.

6. Conclusions

In this paper, we have proposed a registration algorithm for star images that relies only on the RMF and RAF. Our test results using large amounts of simulated and real data show that the comprehensive performance of the proposed algorithm is significantly better than that of other similar algorithms in the presence of rotation, insufficient overlapping regions, false stars, positional deviations, magnitude deviations, and complex sky backgrounds, making for a more ideal star image registration algorithm. Notably, our test results show that the proposed algorithm has longer running times when the overlap region is too small or the positional deviation is too large, leaving room for improvement in future work. In addition, the robustness, real-time performance, and registration accuracy of the classical NCC, FMT, SURF, and SPSG registration algorithms were fully tested in this paper in the context of star image registration, providing a valuable reference for related future research and applications.

Author Contributions

Conceptualization, Q.S.; methodology, Q.S.; software, Q.S.; validation, Q.S., L.L. and J.Z.; formal analysis, Q.S.; investigation, Q.S.; resources, Z.N.; data curation, Y.L.; writing—original draft preparation, Q.S.; writing—review and editing, Z.N. and Z.W.; visualization, Q.S.; supervision, Z.N.; project administration, Z.N.; funding acquisition, Z.N. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded in part by the Youth Science Foundation of China (Grant No. 61605243).

Data Availability Statement

The simulation data supporting the results of this study are available from the corresponding author upon reasonable request. The raw real image data are cleaned periodically due to their excessive volume; intermediate processing results are available upon reasonable request.

Acknowledgments

The authors thank the Xi’an Satellite Control Center for providing real data to fully validate the performance of the algorithm, and are very grateful to all the editors and reviewers for their hard work on this paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Jiang, P.; Liu, C.; Yang, W.; Kang, Z.; Fan, C.; Li, Z. Automatic extraction channel of space debris based on wide-field surveillance system. npj Microgravity 2022, 8, 14. [Google Scholar] [CrossRef] [PubMed]
  2. Barentine, J.C.; Venkatesan, A.; Heim, J.; Lowenthal, J.; Kocifaj, M.; Bará, S. Aggregate effects of proliferating low-Earth-orbit objects and implications for astronomical data lost in the noise. Nat. Astron. 2023, 7, 252–258. [Google Scholar] [CrossRef]
  3. Li, Y.; Niu, Z.; Sun, Q.; Xiao, H.; Li, H. BSC-Net: Background Suppression Algorithm for Stray Lights in Star Images. Remote Sens. 2022, 14, 4852. [Google Scholar] [CrossRef]
  4. Li, H.; Niu, Z.; Sun, Q.; Li, Y. Co-Correcting: Combat Noisy Labels in Space Debris Detection. Remote Sens. 2022, 14, 5261. [Google Scholar] [CrossRef]
  5. Liu, L.; Niu, Z.; Li, Y.; Sun, Q. Multi-Level Convolutional Network for Ground-Based Star Image Enhancement. Remote Sens. 2023, 15, 3292. [Google Scholar] [CrossRef]
  6. Shou-cun, H.; Hai-bin, Z.; Jiang-hui, J. Statistical Analysis on the Number of Discoveries and Discovery Scenarios of Near-Earth Asteroids. Chin. Astron. Astrophys. 2023, 47, 147–176. [Google Scholar] [CrossRef]
  7. Ivezić, Ž.; Kahn, S.M.; Tyson, J.A.; Abel, B.; Acosta, E.; Allsman, R.; Alonso, D.; AlSayyad, Y.; Anderson, S.F.; Andrew, J.; et al. LSST: From Science Drivers to Reference Design and Anticipated Data Products. Astrophys. J. 2019, 873, 111. [Google Scholar] [CrossRef]
  8. Bosch, J.; AlSayyad, Y.; Armstrong, R.; Bellm, E.; Chiang, H.F.; Eggl, S.; Findeisen, K.; Fisher-Levine, M.; Guy, L.P.; Guyonnet, A.; et al. An Overview of the LSST Image Processing Pipelines. Astrophysics 2018, arXiv:1812.03248. [Google Scholar] [CrossRef]
  9. Mong, Y.L.; Ackley, K.; Killestein, T.L.; Galloway, D.K.; Vassallo, C.; Dyer, M.; Cutter, R.; Brown, M.J.I.; Lyman, J.; Ulaczyk, K.; et al. Self-supervised clustering on image-subtracted data with deep-embedded self-organizing map. Mon. Not. R. Astron. Soc. 2023, 518, 752–762. [Google Scholar] [CrossRef]
  10. Singhal, A.; Bhalerao, V.; Mahabal, A.A.; Vaghmare, K.; Jagade, S.; Kulkarni, S.; Vibhute, A.; Kembhavi, A.K.; Drake, A.J.; Djorgovski, S.G.; et al. Deep co-added sky from Catalina Sky Survey images. Mon. Not. R. Astron. Soc. 2021, 507, 4983–4996. [Google Scholar] [CrossRef]
  11. Yu, C.; Li, B.; Xiao, J.; Sun, C.; Tang, S.; Bi, C.; Cui, C.; Fan, D. Astronomical data fusion: Recent progress and future prospects—A survey. Exp. Astron. 2019, 47, 359–380. [Google Scholar] [CrossRef]
  12. Paul, S.; Pati, U.C. A comprehensive review on remote sensing image registration. Int. J. Remote Sens. 2021, 42, 5400–5436. [Google Scholar] [CrossRef]
  13. Wu, P.; Li, W.; Song, W. Fast, accurate normalized cross-correlation image matching. J. Intell. Fuzzy Syst. 2019, 37, 4431–4436. [Google Scholar] [CrossRef]
  14. Lewis, J. Fast Normalized Cross-Correlation. Vis. Interface 1995, 120–123. Available online: www.scribblethink.org/Work/nvisionInterface/nip.pdf (accessed on 25 October 2023).
  15. Yan, X.; Zhang, Y.; Zhang, D.; Hou, N.; Zhang, B. Registration of Multimodal Remote Sensing Images Using Transfer Optimization. IEEE Geosci. Remote Sens. Lett. 2020, 17, 2060–2064. [Google Scholar] [CrossRef]
  16. Reddy, B.S.; Chatterji, B.N. An FFT-Based Technique for Translation, Rotation and Scale-Invariant Image Registration. IEEE Trans. Image Process. 1996, 5, 1266–1271. [Google Scholar] [CrossRef] [PubMed]
  17. Misra, I.; Rohil, M.K.; Moorthi, S.M.; Dhar, D. Feature based remote sensing image registration techniques: A comprehensive and comparative review. Int. J. Remote Sens. 2022, 43, 4477–4516. [Google Scholar] [CrossRef]
  18. Tang, G.; Liu, Z.; Xiong, J. Distinctive image features from illumination and scale invariant keypoints. Multimed. Tools Appl. 2019, 78, 23415–23442. [Google Scholar] [CrossRef]
  19. Chang, H.H.; Wu, G.L.; Chiang, M.H. Remote Sensing Image Registration Based on Modified SIFT and Feature Slope Grouping. IEEE Geosci. Remote Sens. Lett. 2019, 16, 1363–1367. [Google Scholar] [CrossRef]
  20. Bay, H.; Ess, A.; Tuytelaars, T.; Van Gool, L. Speeded-Up Robust Features (SURF). Comput. Vis. Image Underst. 2008, 110, 346–359. [Google Scholar] [CrossRef]
  21. Liu, Y.; Wu, X. An FPGA-Based General-Purpose Feature Detection Algorithm for Space Applications. IEEE Trans. Aerosp. Electron. Syst. 2023, 59, 98–108. [Google Scholar] [CrossRef]
  22. Zhou, H.; Yu, Y. Applying rotation-invariant star descriptor to deep-sky image registration. Front. Comput. Sci. China 2018, 12, 1013–1025. [Google Scholar] [CrossRef]
  23. Rosten, E.; Porter, R.; Drummond, T. Faster and better: A machine learning approach to corner detection. IEEE Trans. Pattern Anal. Mach. Intell. 2010, 32, 105–119. [Google Scholar] [CrossRef] [PubMed]
  24. Rublee, E.; Rabaud, V.; Konolige, K.; Bradski, G. ORB: An efficient alternative to SIFT or SURF. In Proceedings of the IEEE International Conference on Computer Vision, Barcelona, Spain, 6–13 November 2011; pp. 2564–2571. [Google Scholar] [CrossRef]
  25. Lin, B.; Xu, X.; Shen, Z.; Yang, X.; Zhong, L.; Zhang, X. A Registration Algorithm for Astronomical Images Based on Geometric Constraints and Homography. Remote Sens. 2023, 15, 1921. [Google Scholar] [CrossRef]
  26. Lang, D.; Hogg, D.W.; Mierle, K.; Blanton, M.; Roweis, S. Astrometry.net: Blind astrometric calibration of arbitrary astronomical images. Astron. J. 2010, 139, 1782. [Google Scholar] [CrossRef]
  27. Garcia, L.J.; Timmermans, M.; Pozuelos, F.J.; Ducrot, E.; Gillon, M.; Delrez, L.; Wells, R.D.; Jehin, E. prose: A python framework for modular astronomical images processing. Mon. Not. R. Astron. Soc. 2021, 509, 4817–4828. [Google Scholar] [CrossRef]
  28. Li, J.; Wei, X.; Wang, G.; Zhou, S. Improved Grid Algorithm Based on Star Pair Pattern and Two-dimensional Angular Distances for Full-Sky Star Identification. IEEE Access 2019, 8, 1010–1020. [Google Scholar] [CrossRef]
  29. Zhang, G.; Wei, X.; Jiang, J. Full-sky autonomous star identification based on radial and cyclic features of star pattern. Image Vis. Comput. 2008, 26, 891–897. [Google Scholar] [CrossRef]
  30. Ma, W.; Zhang, J.; Wu, Y.; Jiao, L.; Zhu, H.; Zhao, W. A Novel Two-Step Registration Method for Remote Sensing Images Based on Deep and Local Features. IEEE Trans. Geosci. Remote Sens. 2019, 57, 4834–4843. [Google Scholar] [CrossRef]
  31. Li, L.; Han, L.; Ding, M.; Cao, H. Multimodal image fusion framework for end-to-end remote sensing image registration. IEEE Trans. Geosci. Remote Sens. 2023, 61, 1–14. [Google Scholar] [CrossRef]
  32. Ye, Y.; Tang, T.; Zhu, B.; Yang, C.; Li, B.; Hao, S. A Multiscale Framework with Unsupervised Learning for Remote Sensing Image Registration. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–15. [Google Scholar] [CrossRef]
  33. DeTone, D.; Malisiewicz, T.; Rabinovich, A. SuperPoint: Self-Supervised Interest Point Detection and Description. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Salt Lake City, UT, USA, 18–22 June 2018. [Google Scholar] [CrossRef]
  34. Sarlin, P.E.; DeTone, D.; Malisiewicz, T.; Rabinovich, A. SuperGlue: Learning Feature Matching With Graph Neural Networks. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020; pp. 4937–4946. [Google Scholar] [CrossRef]
  35. Foster, J.J.; Smolka, J.; Nilsson, D.E.; Dacke, M. How animals follow the stars. Proc. R. Soc. Biol. Sci. 2018, 285, 20172322. [Google Scholar] [CrossRef] [PubMed]
  36. Kolomenkin, M.; Pollak, S.; Shimshoni, I.I.; Lindenbaum, M. Geometric voting algorithm for star trackers. IEEE Trans. Aerosp. Electron. Syst. 2008, 44, 441–456. [Google Scholar] [CrossRef]
  37. Wei, M.S.; Xing, F.; You, Z. A real-time detection and positioning method for small and weak targets using a 1D morphology-based approach in 2D images. Light Sci. Appl. 2018, 7, 97–106. [Google Scholar] [CrossRef]
  38. Vincent, L. Morphological grayscale reconstruction in image analysis: Applications and efficient algorithms. IEEE Trans. Image Process. 1993, 2, 176–201. [Google Scholar] [CrossRef] [PubMed]
  39. McKee, P.; Nguyen, H.; Kudenov, M.W.; Christian, J.A. StarNAV with a wide field-of-view optical sensor. Acta Astronaut. 2022, 197, 220–234. [Google Scholar] [CrossRef]
  40. Khodabakhshian, S.; Enright, J. Neural Network Calibration of Star Trackers. IEEE Trans. Instrum. Meas. 2022, 71, 1–10. [Google Scholar] [CrossRef]
  41. Ma, J.; Jiang, X.; Fan, A.; Jiang, J.; Yan, J. Image Matching from Handcrafted to Deep Features: A Survey. Int. J. Comput. Vis. 2020, 129, 23–79. [Google Scholar] [CrossRef]
Figure 1. Example of a typical star image.
Figure 2. Stellar imaging schematic.
Figure 3. The matching features of the HS.
Figure 4. The maximum distance variation of the stellar pair at different image positions versus the FoV and image size.
Figure 5. (a) RMF and RAF of the HS and (b) HS feature list.
Figure 6. Example of the IRAF of an HS.
Figure 7. Flow chart of the registration process.
Figure 8. Number of stars in the field of view in GEO orbit (FoV = 2.5°, DC = 13 Mv).
Figure 9. Rotation test registration rate.
Figure 10. Rotation test registration accuracy.
Figure 11. Rotation test run time.
Figure 12. Overlapping regions test registration rate.
Figure 13. Overlapping regions test registration accuracy.
Figure 14. Overlapping regions test run time.
Figure 15. False stars test registration rate.
Figure 16. False stars test registration accuracy.
Figure 17. False stars test run time.
Figure 18. Positional deviation test registration rate.
Figure 19. Positional deviation test registration accuracy.
Figure 20. Positional deviation test run time.
Figure 21. Magnitude deviation test registration rate.
Figure 22. Magnitude deviation test registration accuracy.
Figure 23. Magnitude deviation test run time.
Figure 24. Real data test registration rate.
Figure 25. Real data test registration accuracy.
Figure 26. Real data test run time.
Figure 27. Radar chart showing the comprehensive performance of the tested star image registration algorithms.
Table 1. Background parameters configuration table (each parameter is linear in w: p_i = a_i + w × b_i; values as printed in the source).

p0 = 160.10 + w × 6.25
p1 = 1.64 × 10^4 + w × 2.82 × 10^2
p2 = 1.31 × 10^2 + w × 2.14 × 10^2
p3 = 3.43 × 10^7 + w × 6.04 × 10^5
p4 = −2.83 × 10^6 + w × 3.10 × 10^5
p5 = −1.80 × 10^6 + w × 5.04 × 10^5
p6 = −1.82 × 10^9 + w × 3.87 × 10^8
p7 = 8.07 × 10^9 + w × 2.74 × 10^8
p8 = −4.02 × 10^9 + w × 1.96 × 10^8
p9 = 1.51 × 10^9 + w × 3.61 × 10^8
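
For illustration only, the sketch below shows how Table 1 might be consumed in code: the ten coefficients are assembled for a given w as p_i = a_i + w × b_i and then used to evaluate a polynomial background. Treating the model as a 9th-order polynomial in a single normalized coordinate r is an assumption made here purely for demonstration; Table 1 fixes the coefficients (copied as printed) but not the functional form, which is defined in the body of the paper.

```python
import numpy as np

# (a_i, b_i) pairs from Table 1, so that p_i = a_i + w * b_i (values copied as printed)
COEFFS = [
    (160.10,   6.25),
    (1.64e4,   2.82e2),
    (1.31e2,   2.14e2),
    (3.43e7,   6.04e5),
    (-2.83e6,  3.10e5),
    (-1.80e6,  5.04e5),
    (-1.82e9,  3.87e8),
    (8.07e9,   2.74e8),
    (-4.02e9,  1.96e8),
    (1.51e9,   3.61e8),
]

def background(r, w):
    """Evaluate the assumed 9th-order polynomial background model.

    r -- normalized coordinate (scalar or array); the polynomial form is an assumption
    w -- background parameter that drives the coefficients in Table 1
    """
    p = [a + w * b for a, b in COEFFS]               # p_i = a_i + w * b_i
    return sum(pi * np.power(r, i) for i, pi in enumerate(p))
```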
Table 2. Comprehensive performance statistics of star image registration algorithms.

Algorithm       Rotation   Overlap   False Stars   Pos. Dev.   Mag. Dev.   Real Data   Score
NCC                 0.25     36.56         81.72       59.64       56.98       72.40    51.26
FMT                 4.36     23.64         11.55       10.10       30.93       69.84    25.07
SURF               19.95     34.84         94.06       53.82       47.64        0.00    41.72
SPSG                4.86     32.88         61.83       88.78       26.45       59.37    45.69
PT (proposed)      99.13     78.94         94.19       93.44       94.18       99.91    93.30
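
As a consistency check on the reconstructed table, the Score column matches the arithmetic mean of the six per-test scores; the snippet below reproduces it to within rounding of the last digit:

```python
# Per-test scores from Table 2: Rotation, Overlap, False Stars, Pos. Dev., Mag. Dev., Real Data
rows = {
    "NCC":  [0.25, 36.56, 81.72, 59.64, 56.98, 72.40],
    "FMT":  [4.36, 23.64, 11.55, 10.10, 30.93, 69.84],
    "SURF": [19.95, 34.84, 94.06, 53.82, 47.64, 0.00],
    "SPSG": [4.86, 32.88, 61.83, 88.78, 26.45, 59.37],
    "PT":   [99.13, 78.94, 94.19, 93.44, 94.18, 99.91],
}
for name, scores in rows.items():
    print(f"{name}: {sum(scores) / len(scores):.2f}")
# Means: 51.26, 25.07, 41.72, 45.695 (table prints 45.69), 93.30
```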
