A Registration Algorithm for Astronomical Images Based on Geometric Constraints and Homography

Abstract: The registration of astronomical images is one of the key technologies for improving the detection accuracy of small, weak targets in astronomical image-based observation. The registration error strongly influences the trace association of targets. However, most existing methods for point matching and image transformation lack pertinence for this actual scene. In this study, we propose a registration algorithm for astronomical images based on geometric constraints and homography. First, the position changes of stars in an image caused by the motion of the platform on which the camera was stationed were studied, in order to choose a more targeted registration model based on homography transformation. Next, each image was divided into regions, and the enclosed stable stars were used to construct triangles, reducing both the errors from unevenly distributed points and the number of triangles. Then, the triangles in the same region of two images were matched by the geometric constraints of side lengths and a new cumulative confidence matrix. Finally, a two-stage estimation strategy was applied to eliminate the influence of false pairs and realize accurate registration. The proposed method was tested on sequences of real optical images under different imaging conditions and confirmed to outperform baseline methods in the dispersion rate of points, the accuracy of matching, and the registration error. The mean pixel errors after registration for the different sequences are all less than 0.5 when the approximate rotation angle per image ranges from 0.58 × 10^-2 to 5.89 × 10^-2.


Introduction
In recent years, the number of astronomical objects in images, including satellites and debris, has increased remarkably, making the surveillance of astronomical objects a hotspot in the field of remote sensing [1][2][3]. Many ground-based and astronomical image-based optical systems have been developed for this purpose. However, the backgrounds of the acquired astronomical images are complex, due not only to densely distributed stars and noise but also to the complex motion of in-orbit systems, which affects the detection and tracking of targets. Image registration, the process of matching two images containing some of the same objects and estimating the optimal geometric transformation matrix to align them, is a reasonable solution to this problem. The results of registration can reduce the interference of most stars and align the trajectories of targets according to their movement rules, enabling moving targets to be tracked subsequently. Among the various image registration methods, feature-based image registration has become a focus of research in many fields. For astronomical images, stars are ideal and readily available feature points, as compared to commonly used features such as corner points, endpoints, and edges. Therefore, there are two main approaches to the registration of astronomical images: those based on the similarity of spatial relationships and those based on feature descriptions.
Triangle-based methods, which are based on the similarity of spatial relationships, are the most common methods in this field. The traditional triangle methods focus on the mapping relationship between the stars in the detected star map and the stars in the star catalog, so they can only be used when the star catalog is known. In the reference image, every three stars within a specific range are composed into a triangle, whose angular distances are used as the side lengths of the triangle. If the differences between the triangles from the reference image and the star catalog are within a threshold, the corresponding star points are matched. However, the biggest disadvantage of this method is its high computational demand when building and matching triangles, which limits its application on resource-constrained platforms. Many scholars have conducted significant research to address these challenges [4,5]. The first approach was to reduce the number of points indexed in the catalog, with the premise of keeping valid points. The second was to change the representation of the triangles. However, there have been few reasonable solutions to difficulties such as low efficiency, high consumption of memory resources, and the selection of an appropriate threshold.
Another type of triangle-based method relies only on the information in the images, and the stars of two images are used to construct triangles. In order to capture translation and rotation, as well as the small changes in scale caused by temperature-related changes in focal length, the triangles are matched based on the triangle similarity theorem [6][7][8][9][10][11]. Groth [6] proposed a pattern-matching method to align pairs of coordinates according to two 2D lists, which had been formed from triplets of points in each list. The methods proposed in [7,8] followed a similar concept; they not only implemented it but also improved it independently, with novel strategies. On this basis, Zhou [11] proposed an improved method based on efficient triangle similarity. He used statistical information to extract stable star points, which were used to form triangles with the nearest two stable stars, and improved the weight matrix in triangle matching by introducing a degree of similarity. The improved voting matrix was used for triangle matching. In the method proposed in [10], the authors introduced the Delaunay subdivision, a geometric topological structure, to further reduce the number of triangles formed by stars. The greatest defects of these methods were their extensive time requirements, high memory consumption, and their need for sophisticated voting schemes to guarantee matching accuracy. In addition, some methods used in other fields [12][13][14] even extended the triangle method to building polygons, but they encountered the same problems. At present, PixInsight [15], a relatively mature software package, is a typical example of a successful application based on the triangle similarity method. The brightest 200 stars are selected, which eliminates the inefficiency caused by including all the stars in the calculation. However, significant challenges remain when it is applied to embedded platforms.
The methods based on feature description have attempted to improve the registration accuracy by enhancing the feature descriptors. Generally, these methods include feature detection, feature description, and feature matching [16]. Among them, feature description has become the dominant technique. Many rotation-invariant descriptors, such as SIFT [17], SURF [18], GLOH [19], ORB [20], BRISK [21], FREAK [22], KAZE [23], and HOG [24], have been proposed and detailed in the literature [16]. The dominant orientation assignment has been the key part of these methods, guaranteeing rotation-invariant features in many applications. The estimation of the dominant orientation requires gradient information and the surrounding localized features. However, the gray value of stars in images decreases similarly in all directions, and the varied sizes of the local area yield inconsistent orientations, which easily produce errors in complex astronomical scenes. In addition, few of the existing methods have been devoted to the design of special descriptors for stars. Ruiz [25] applied a scale-invariant feature transform (SIFT) descriptor to register stellar images. Zhou [16] estimated a dominant orientation from the geometrical relationships between a described star and two neighboring stable stars; the local patch size adaptively changed according to the described star size, and an adaptive speeded-up robust features (SURF) descriptor was constructed.
After the matching of point pairs, the pairs are used to solve the image transformation parameters and achieve accurate registration. The transformation in most research has been regarded as a non-rigid transformation, with translation and rotation as the two most prominent rigid deformations [10,16]. However, the registration errors have been significant in certain scenarios, especially when the images were significantly rotated or the point pairs were concentrated in local areas. Recently, many deep-learning methods have been proposed for the estimation of these parameters. These methods can be classified into supervised [26,27] and unsupervised [28][29][30] categories. The former utilize synthesis to generate examples with ground-truth labels for training, while the latter directly minimize the photometric or feature loss between images. However, these methods pose significant challenges. Firstly, the synthesized examples are unable to reflect a real scene's parallax and dynamic objects. Secondly, the stars in the images are the key features, but their sizes are small and the range of gray values is large, with limited texture information. When the input size of the network is small and the astronomical image being processed is large, the images must be resized; in addition, there are multiple convolutional layers in the network, and this two-step process further decreases the valid information of the images. Therefore, methods based on deep learning are not suitable for the registration of astronomical images.
In this work, we aimed to realize the registration of astronomical images in actual orbit. The relations between the changes in stars and the in-orbit motion of the platforms were studied, to select an accurate and effective registration model. The concept of stable stars in images is proposed in this paper. Stable stars refers to those stars that have the highest gray values and the largest sizes in an image, and whose appearance is usually stable across multiple images. Each image was divided into regions, in which the enclosed stable stars were selected based on statistics, to guarantee the uniformity of the points used for matching over the whole image. Only the points in the same regions, rather than all the points, were used to construct the triangles, and therefore the number of triangles was substantially reduced. During the matching of the triangles, the relative changes in the side lengths substituted for the angles, and a new cumulative confidence matrix was designed. The traditional transformation model and the proposed transformation model were coordinated to eliminate the influence of false pairs: the former was used for the primary screening of false pairs, and the latter further estimated the parameters of the transformation on this basis, to realize an accurate registration.
As a whole, the main contributions of this paper can be summarized as follows:

1. We discussed in detail the changes in the stars in images caused by the in-orbit motion of the platforms and the limitations of the traditional model. An accurate and effective registration model was chosen to replace the traditional one.

2. The triangles were characterized by their side lengths instead of their angles. Furthermore, the lengths were transformed into relative values, to reduce the difficulty of threshold settings during the matching of the triangles.

3. A strategy with a two-stage parameter estimation, based on two different models, was proposed to realize accurate registration.
The remainder of this paper is organized as follows. Section 2 introduces the registration model of astronomical images in detail. Section 3 presents the proposed method along with its theoretical background. The experimental results are presented in Section 4, the advantages and limitations of the presented method are discussed in Section 5, and the paper concludes with Section 6.

Registration Model for Astronomical Images
Figure 1 shows an in-orbit satellite under typical conditions and the influence of its position changes on imaging. At the times t_i and t_j, the camera observed the same fixed star. Because the distance of the star was far greater than the orbital altitude r of the satellite, the stars in the images were points at infinity, and the movement of the satellite could be ignored. Therefore, the changes in the star positions in the images were more likely caused by the change in satellite attitude at different times. The relationship between the same target in two frames of images was deduced as follows. Let the camera's intrinsic parameter matrix be

K = [ F_x 0 C_x ; 0 F_y C_y ; 0 0 1 ]

where F_x, F_y, C_x, and C_y are the intrinsic parameters of the camera.

For an infinite star point, if its corresponding pixel coordinate is [x, y, 1]^T and its coordinate on the normalized image plane is [X, Y, 1]^T, then

[x, y, 1]^T = K [X, Y, 1]^T.

In the coordinate system of the camera when taking the first frame, the origin was set as the optical center, and the Z axis was set as the optical axis. Because there was only a rotational relationship between the camera attitudes, for the point whose coordinate in the first frame was [X_1, Y_1, Z_1]^T, after the attitude change matrix R of the camera between the imaging times of the two frames, its coordinate [X_2, Y_2, Z_2]^T in the second-frame camera coordinate system satisfied the following:

[X_2, Y_2, Z_2]^T = R [X_1, Y_1, Z_1]^T.

That is, for the point whose coordinates on the normalized image plane of the first frame were [X, Y, 1]^T, its coordinates in the camera coordinate system of the second frame are now:

[X_2, Y_2, Z_2]^T = R [X, Y, 1]^T.

The optical center of the camera and the point formed a line. The coordinates of the intersection of this line with the normalized image plane were calculated as:

[X_2/Z_2, Y_2/Z_2, 1]^T.

The corresponding pixel coordinates (x', y') of this point in the second frame were calculated as:

[x', y', 1]^T = K [X_2/Z_2, Y_2/Z_2, 1]^T.

If the attitude Euler angles of the second frame relative to the first, rotating around the X, Y, and Z axes, were α, β, and γ, respectively, and the rotation order was Z, Y, X, the relationship between them and R was calculated as:

R = R_X(α) R_Y(β) R_Z(γ)

where R_X(α), R_Y(β), and R_Z(γ) are the elementary rotation matrices about the X, Y, and Z axes, respectively. The above deduction was regarded as the process of homography transformation. The homography matrix can be written as:

H = K R K^{-1}.

If the relationship between two images could be approximated as a translation and rotation of the image, that is, the affine transformation commonly used in this field, it would be equivalent to the existence of R_0 and T_0 satisfying:

[x', y']^T = R_0 [x, y]^T + T_0.

The third row of K R K^{-1} was calculated as:

[ r_31/F_x , r_32/F_y , r_33 - r_31 C_x/F_x - r_32 C_y/F_y ]

where r_31, r_32, and r_33 are the elements of the third row of R. The scale factor Z of the projection is the product of this row and [x, y, 1]^T, so the above problem could be simplified to at least meeting the condition Z ≈ 1 for all pixels. If, and only if, α and β were close to 0, and Z was close to 1, could R_0 and T_0 be solved. However, when the satellite operated in an orbit at an altitude of 200-2000 km, its orbital period was 90-120 min, so the operating angular speed was 3-4 arcmin per second. In this scenario, the attitude of the satellite changed rapidly, and the translation and rotation of the image could not reflect the actual changes in the image. Therefore, an image registration model based on homography transformation was applied in this paper.
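The model above can be illustrated with a minimal numerical sketch: the homography H = K R K^{-1} is assembled from illustrative intrinsic parameters and Euler angles (the values below are not taken from the paper) and applied to pixel coordinates.

```python
import numpy as np

def rotation_zyx(alpha, beta, gamma):
    """Attitude change matrix for Euler angles (radians) about the X, Y, Z
    axes, applied in the order Z, then Y, then X: R = Rx(alpha) Ry(beta) Rz(gamma)."""
    ca, sa = np.cos(alpha), np.sin(alpha)
    cb, sb = np.cos(beta), np.sin(beta)
    cg, sg = np.cos(gamma), np.sin(gamma)
    Rx = np.array([[1, 0, 0], [0, ca, -sa], [0, sa, ca]])
    Ry = np.array([[cb, 0, sb], [0, 1, 0], [-sb, 0, cb]])
    Rz = np.array([[cg, -sg, 0], [sg, cg, 0], [0, 0, 1]])
    return Rx @ Ry @ Rz

def homography_from_attitude(K, R):
    """H = K R K^-1 maps the pixel coordinates of an infinite (star) point
    in the first frame to its pixel coordinates in the second frame."""
    return K @ R @ np.linalg.inv(K)

def warp_point(H, x, y):
    p = H @ np.array([x, y, 1.0])
    return p[0] / p[2], p[1] / p[2]  # divide by the scale factor Z
```

With α = β = 0, H reduces to an in-plane rotation about the principal point, which is exactly the regime where the affine approximation holds.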

The Matching Method for Stars
Because the locations of fixed stars are salient features in images, the fixed stars in the two images to be matched were selected to calculate the transformation matrix.

Triangle Construction
For the selected star points in the reference map and the target map, every three stars were combined into triangles, forming the sets T_ref and T_src, respectively. A total of k star points could generate up to C_k^3 triangles, so the number of constructed triangles was positively correlated with the number of stars. Because of the large number of detected stars, the computational demand was significant. Marszalek [4] used the brighter stars among the star points to construct triangles. This method had two problems, however: (1) the threshold for selecting bright stars was uncertain. The selection threshold directly affected the number of star points involved in the construction of the triangles. If the threshold was too low, the number of stars involved in the calculation was too large, making it difficult to achieve efficiency; when the threshold was too high, the points could be insufficient. (2) Tuytelaars et al. [31,32] proposed that the feature points should be evenly distributed across the whole image, in order to estimate the transformation matrix more accurately. The study in [4] could not ensure that the selected bright stars were evenly distributed across the whole image.
In order to resolve the above problems, we searched for a limited number of stable stars in an image. However, there could be stray light, thermal noise, and other effects in the images, and some pixels, or even regions, could have local highlights. It was difficult to achieve a global star search with only a single fixed threshold. Therefore, the image was divided into N × M regions. For a single region, the gray values of the contained pixels were counted to set the threshold dynamically, and the image was binarized as follows:

I_bw(x, y) = 1, if I(x, y) > T_g; 0, otherwise, with T_g = c_sort(⌈α% · n_p⌉)

where T_g is the threshold of gray value, c_sort is the sequence formed by arranging the gray values of all n_p pixels in the local area from small to large, and α is the scale coefficient. If the gray value of a pixel was higher than that of α% of the points in the region, the pixel was set to 1 in the binary image I_bw. The connected regions in I_bw were sorted by area, and the N* largest connected regions were selected as local stable stars. In this way, we significantly reduced the influence of strong isolated interference, such as thermal noise. The weighted centroids of the stable stars were calculated as their centers. For a connected domain of n_l pixels, the calculation was as follows:

x_c = Σ_{i=1}^{n_l} I(x_i, y_i) x_i / Σ_{i=1}^{n_l} I(x_i, y_i), y_c = Σ_{i=1}^{n_l} I(x_i, y_i) y_i / Σ_{i=1}^{n_l} I(x_i, y_i)

where (x_c, y_c) is the center of a star, and I(x_i, y_i) is the gray value of the pixel (x_i, y_i). These stable stars were used to generate C_{N*}^3 triangles. The camera was usually attached to a satellite. If the attitudes of both the satellite and the camera were stable, the movement distance of a stable star between frames was usually minimal. The registration of the corresponding star points was therefore also carried out within each region, for better efficiency. The set N of all the triangles in a region was formed as follows:

N = { Δ_{i,j,k} | 1 ≤ i < j < k ≤ N* }

where Δ_{i,j,k} is the triangle with vertices at the three star points i, j, k.
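The per-region selection can be sketched as follows; the percentile-based threshold, the number of kept stars, and the flood-fill component labeling are illustrative choices, not the authors' exact implementation.

```python
import numpy as np

def _connected_components(bw):
    """4-connected component labeling via iterative flood fill."""
    labels = np.zeros(bw.shape, dtype=int)
    comps = []
    for sy, sx in zip(*np.nonzero(bw)):
        if labels[sy, sx]:
            continue
        stack, pixels = [(sy, sx)], []
        labels[sy, sx] = len(comps) + 1
        while stack:
            y, x = stack.pop()
            pixels.append((y, x))
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if 0 <= ny < bw.shape[0] and 0 <= nx < bw.shape[1] \
                        and bw[ny, nx] and not labels[ny, nx]:
                    labels[ny, nx] = labels[sy, sx]
                    stack.append((ny, nx))
        comps.append(pixels)
    return comps

def stable_stars_in_region(region, alpha=99.0, n_star=6):
    """Dynamic threshold from the sorted gray values, binarization, then the
    N* largest connected components kept as stable stars, located by their
    gray-value-weighted centroids."""
    t_g = np.percentile(region, alpha)        # exceeded by the top (100 - alpha)% of pixels
    comps = _connected_components(region > t_g)
    comps.sort(key=len, reverse=True)         # sort by area, keep the N* largest
    centers = []
    for pixels in comps[:n_star]:
        ys = np.array([p[0] for p in pixels])
        xs = np.array([p[1] for p in pixels])
        w = region[ys, xs].astype(float)      # gray values as centroid weights
        centers.append((float((w * xs).sum() / w.sum()),
                        float((w * ys).sum() / w.sum())))
    return centers
```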
Figure 2 shows an illustration of the triangle construction. Two images significantly affected by stray light were divided into 3 × 3 regions, and then six stable stars were chosen. For a better visual effect, only the triangles satisfying the Delaunay condition are shown.

Triangle Matching
Due to the movement of the observation platform, the gray values, positions, and sizes of star points in two frames could change. To identify the stable features of the triangles before and after movement, images of different sizes, from 1000 to 4000 pixels, were divided into 3 × 3 regions, and 50 points were randomly generated inside, to simulate the stable stars. The attitude of the camera was changed from 0.1° to 0.2° around the x and y axes, as shown in Figure 3. The relative variations in the lengths of the line segments formed by the points were recorded. The statistical results showed that the relative variations in the lengths between the star points in two frames were smaller than 4% under the slight movement caused by the change in attitude, as shown in Figure 4. This indicated that the side lengths of the triangles composed of stars were stable. In this paper, a registration method based on the triangular length constraint was therefore proposed to match the corresponding star points. To match triangles, we used their feature information. The features of a triangle include its three sides and three angles. In order to facilitate the calculations, this study used the lengths of the three sides to build a clue matrix. The clue matrix Z was divided into C_{N*}^3 rows, and each row stored the lengths of the three sides of a triangle and the numbers of its three vertices, in the following format:

[ l_ij , l_ik , l_jk , i , j , k ]

where l_ij is the distance between the ith and jth star points.
After the clue matrices of the corresponding regions in two images were constructed and recorded as Z and Z', they were used for the registration of the corresponding star points. The registration was divided into two steps: the first step matched the triangles, and the second step matched the points. The judgment basis of the triangle matching was that the corresponding side lengths of the two triangles were almost equal. Because the side lengths of each triangle were arranged in a different order in the two clue matrices, it was necessary to sort the side lengths first. The matching of points was based on the success of the triangle matching: the three sides of the two triangles corresponded to each other individually, and the points were matched according to the principle that the vertices of two corresponding sides were corresponding star points. The steps of the registration were as follows:

(1) The first three columns of Z and Z' were extracted, and each row was rearranged from small to large to form the matrices L and L', respectively.

(2) The triangle Δ_{1,2,3} in the first row of L was extracted for pairing. The relative differences between the three side lengths of Δ_{1,2,3} and the side lengths of all the triangles in L' were calculated, and the relative difference matrix D was constructed, with one row per triangle in L'.

(3) The smallest value in D was regarded as the lowest sum of the differences of each side, D_single, and the difference of the sum of the three sides, D_all, which were defined as follows:

D_single(s) = |l'_s - l_s| / l_s, s = 1, 2, 3; D_all = |Σ_s l'_s - Σ_s l_s| / Σ_s l_s

where l_s and l'_s are the sorted side lengths of the triangles from L and L', and i', j', k' are the numbers of the stars in Z'.

(4) If the relative changes in the distances met the following standard, namely that the difference of each side was smaller than 10% and the difference of the sum was smaller than 5%, the pair was judged a success, that is:

Δ_{i',j',k'} ≅ Δ_{1,2,3}, if max_s D_single(s) < 10% and D_all < 5%; otherwise Δ_{1,2,3} ≅ []

where ≅ means the success of pairing and [] means no matching reference triangle.

(5) If the matching of Δ_{i',j',k'} and Δ_{1,2,3} was successful, the corresponding points were matched next. The sides l_{i'j'}, l_{i'k'}, l_{j'k'} and l_{12}, l_{13}, l_{23} had a one-to-one correspondence. The intersection of l_{i'j'} and l_{i'k'} was i', and the intersection of l_{12} and l_{13} was 1, so i' and 1 were corresponding points. Similarly, j' and 2, as well as k' and 3, were corresponding points. Considering that there could be mismatched triangle pairs, a weight matrix of N* × N* was constructed. Each row in the weight matrix represented a star point in the previous image, and each column represented a star point in the current image. Each intersection represented a pair of star points, and its value represented the cumulative confidence of the matching. The single confidence v' was defined as follows:

v'(m, n) = 1 - D_{m,n}

where m and n are the two points in a single matching, and D_{m,n} is the sum of the absolute values of the relative differences between the two sides of the matched triangles that meet at the matched points.

(6) Operations (2)-(5) were repeated until all triangles in L had been matched. The cumulative confidence v was the sum of the single confidences over the registration of all triangles:

v(m, n) = Σ v'(m, n)

If v(m, n) was the maximum value of both the mth row and the nth column and v(m, n) > 5, that is, the matching of the two points had been successful, the star numbers of the corresponding point pair in the ith region were saved in the pair matrix P_i.

(7) The same processing was carried out for each region. The information of the matched points in all P_i was summarized in P_total.
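The matching procedure can be condensed into a small sketch. The 10%/5% thresholds and the v > 5 criterion follow the text; pairing sorted sides with their opposite vertices, and using one minus the summed relative side differences as the single confidence, are simplifying assumptions about the exact bookkeeping.

```python
import numpy as np
from itertools import combinations

def triangles(points):
    """All C(N,3) triangles; each is its three side lengths sorted ascending,
    each length paired with the index of the vertex opposite that side."""
    pts = [np.asarray(p, float) for p in points]
    tris = []
    for i, j, k in combinations(range(len(pts)), 3):
        d = lambda a, b: float(np.hypot(*(pts[a] - pts[b])))
        tris.append(sorted([(d(j, k), i), (d(i, k), j), (d(i, j), k)]))
    return tris

def match_points(pts_ref, pts_src, t_single=0.10, t_sum=0.05, v_min=5.0):
    """Match stars in one region via side-length constraints and a
    cumulative confidence matrix."""
    tr, ts = triangles(pts_ref), triangles(pts_src)
    v = np.zeros((len(pts_ref), len(pts_src)))
    for tri in tr:
        for trip in ts:
            # relative difference of each (sorted) side, and of the perimeter
            rel = [abs(a[0] - b[0]) / b[0] for a, b in zip(trip, tri)]
            s_ref = sum(b[0] for b in tri)
            rel_sum = abs(sum(a[0] for a in trip) - s_ref) / s_ref
            if max(rel) < t_single and rel_sum < t_sum:
                # matched sides imply matched opposite vertices
                for (_, n), (_, m) in zip(trip, tri):
                    v[m, n] += 1.0 - sum(rel)  # accumulate confidence
    pairs = []
    for m in range(v.shape[0]):
        n = int(np.argmax(v[m]))
        if v[m, n] > v_min and v[m, n] == v[:, n].max():
            pairs.append((m, n))
    return pairs
```

Because a rigid in-plane motion preserves all pairwise distances, every correct triangle pair passes the test and the true correspondences accumulate confidence rapidly.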

Parameter Estimation
The points in P_total were used to calculate the homography matrix between two frames. The pixel coordinates (x', y') of a corresponding point in the second frame could be calculated as follows:

[x', y', 1]^T ~ H [x, y, 1]^T, H = [ h_1 h_2 h_3 ; h_4 h_5 h_6 ; h_7 h_8 h_9 ]

For reasons of scaling, there were only eight unknowns, rather than nine, when solving for H: once a certain value was assigned to any non-zero element in the matrix, the other elements would also acquire certain values, according to the proportion. Usually, the matrix was obtained by setting the value of h_9 to 1, recorded as follows:

H = [ h_1 h_2 h_3 ; h_4 h_5 h_6 ; h_7 h_8 1 ]

This ensured that the values of the elements h_3 and h_6 were equal to x' and y', respectively, when x and y were each equal to 0.
From the previous derivation, for each pair of points on the two planes, two equations based on x' and y' could be established simultaneously:

x' = (h_1 x + h_2 y + h_3) / (h_7 x + h_8 y + 1), y' = (h_4 x + h_5 y + h_6) / (h_7 x + h_8 y + 1)

That is,

h_1 x + h_2 y + h_3 - h_7 x x' - h_8 y x' = x'
h_4 x + h_5 y + h_6 - h_7 x y' - h_8 y y' = y'

The solution of the matrix H required eight constraints, so more than four pairs of points needed to be provided. For n_total pairs of points, the equations were stacked into the homogeneous system

A h = 0

where A is the 2 n_total × 9 coefficient matrix and h = [h_1, ..., h_9]^T. The solution of this equation was then obtained through the singular value decomposition

A = U D V^T

where D is a diagonal matrix whose diagonal elements are arranged in descending order. Minimizing ||A h|| subject to ||h|| = 1 gives the optimum at h = V x with x = (0, 0, ..., 1)^T; that is, the column vector of V corresponding to the minimum singular value was the solution of the parameters.
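A compact direct-linear-transform sketch of this solution (without the coordinate normalization a production implementation would typically add):

```python
import numpy as np

def estimate_homography(src, dst):
    """Stack two equations per point pair into A h = 0 and take the right
    singular vector of the smallest singular value as the solution."""
    A = []
    for (x, y), (xp, yp) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -x * xp, -y * xp, -xp])
        A.append([0, 0, 0, x, y, 1, -x * yp, -y * yp, -yp])
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]  # normalize so that h_9 = 1
```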
Since the number of pairs was clearly more than required, there could be some mismatched pairs. These mismatched pairs resulted in a large error in the calculation of H, even though their projection errors were not always the largest. In contrast, when translation and rotation were used to approximate the homography transformation, the mismatched point pairs usually produced the largest projection errors. Therefore, a two-stage estimation was calculated as follows: (1) P_total and Equation (13) were used to calculate R_0 and T_0 by the least squares method, and the pairs with the largest projection errors were eliminated; (2) the homography matrix H was then estimated from the remaining pairs.
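The first screening stage can be sketched as a least-squares rigid fit followed by a residual cutoff; the k-times-median rejection threshold is an illustrative choice, as the exact rejection rule is not reproduced here.

```python
import numpy as np

def fit_rigid(src, dst):
    """Least-squares rotation R0 and translation T0 (2D Procrustes/Kabsch)."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    U, _, Vt = np.linalg.svd((src - cs).T @ (dst - cd))
    S = np.diag([1.0, np.sign(np.linalg.det(U @ Vt))])  # enforce a proper rotation
    R = (U @ S @ Vt).T
    return R, cd - R @ cs

def screen_pairs(src, dst, k=3.0):
    """Stage one: fit R0, T0 and keep only the pairs whose projection error
    is within k times the median residual (k is an illustrative choice)."""
    R, T = fit_rigid(src, dst)
    res = np.linalg.norm(np.asarray(src, float) @ R.T + T - np.asarray(dst, float), axis=1)
    return res <= k * np.median(res) + 1e-9  # inlier mask for the homography stage
```

Because a mismatched pair cannot be absorbed by a rigid model, its residual stands out, whereas a full homography fit can partially accommodate it and hide the error.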

(2) Baselines
The representative methods proposed in [10,16,18,20] were selected and combined as the baselines in this paper. Among them, establishing a global threshold for image binarization has been a common method for the selection of stable stars. SURF and ORB have been the most widely used classic descriptors for the matching of corresponding points. Using the triangle relations between stars has been another commonly used approach in this field. Zhou [16] constructed the triangle relations between stars with a global threshold and embedded the relations into SURF. Yang [10] used the Delaunay subdivision to describe the triangle relations between stable stars.
(3) Evaluation metrics

The dispersion rate (DR) of the initially selected stable stars for triangle construction directly affected the calculation of H. For a reasonable quantification of this index, an image was divided into an n × n grid, and DR was defined in terms of c_i, the number of points in the ith grid cell.
After selecting the stable stars, their corresponding pairs of points were matched. The accuracy of matching was defined as follows:

Accuracy = N_cp / N_p

where N_cp is the number of correct pairs and N_p is the total number of pairs. The corresponding pairs were used to estimate the parameters of the different transformation methods. The points belonging to the pairs in one image were projected onto the other image. The pixel error Error_ij between the points projected from the jth image to the ith image and the original points in the ith image was defined as follows:

Error_ij = (1/N_p) Σ sqrt( (x_i - x_ji)^2 + (y_i - y_ji)^2 )

where (x_i, y_i) and (x_ji, y_ji) are the image coordinates of the original points and the projection points, respectively. The targets were also projected from the images onto the reference image, forming the corresponding trajectories. Non-linear least squares fitting [33] was utilized for these trajectories. In addition to typical statistical indices, such as SSE, R-squared, and RMSE, the curvature was also used, to determine the degree of smoothness of the fitting results. The curvature K was defined as follows:

K = |y''| / (1 + y'^2)^(3/2)
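The accuracy, pixel error, and curvature metrics can be computed as in the following sketch (the curvature uses the parametric finite-difference form, which reduces to the expression above for a trajectory y(x) and assumes densely sampled points).

```python
import numpy as np

def matching_accuracy(n_correct, n_pairs):
    return n_correct / n_pairs  # N_cp / N_p

def pixel_error(ref_pts, proj_pts):
    """Mean Euclidean distance between the original points in image i and
    the points projected from image j."""
    ref, proj = np.asarray(ref_pts, float), np.asarray(proj_pts, float)
    return float(np.mean(np.linalg.norm(ref - proj, axis=1)))

def curvature(x, y):
    """Parametric curvature |x'y'' - y'x''| / (x'^2 + y'^2)^(3/2) via finite
    differences; equivalent to |y''| / (1 + y'^2)^(3/2) for a graph y(x)."""
    dx = np.gradient(np.asarray(x, float))
    dy = np.gradient(np.asarray(y, float))
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    return np.abs(dx * ddy - dy * ddx) / (dx ** 2 + dy ** 2) ** 1.5
```

For a circular arc of radius r, the computed curvature approaches the constant 1/r, so a smoother fitted trajectory yields a flatter curvature curve.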

Comparative Analysis of the Corresponding Point-Matching
In this section, the proposed method is evaluated in stages and compared to the baselines. In the experiment, the first image was regarded as the reference image. The subsequent images were registered to the reference image, to simulate the different deviations caused by different movement speeds of the platform and different imaging intervals. During stable star selection, each baseline was used to select 100 points in an image, in order to ensure a sufficient number of valid stable stars and to match the number of stable stars determined by the proposed method. Each image was divided into a 4 × 4 grid, and the results for the average dispersion rate are shown in Table 3. The dispersion rates of the points chosen by the global threshold methods, SURF, and ORB varied over a wide range. The distributions of the stars found by the proposed method were reasonably balanced across the astronomical images, which benefited their registration.
Table 3. Comparison of the dispersion rates of the selected stable stars between the proposed method and the baselines.

Two registration models, based on affine transformation and homography transformation, with parameters estimated from the matched point pairs, were utilized to realize the transformation of the images. The images in a sequence were registered to the reference image, and the mean pixel errors are shown in Table 6. The homography transformation has more parameters than the affine transformation, so it could not be estimated without a sufficient number of pairs. The proposed method performed better than the baselines under both models. For the same pairs identified by the baselines, the errors of the affine-transformation-based model were much smaller than those of the homography-transformation-based model. Because the homography transformation and the affine transformation reflect changes in the third and second dimensions, respectively, wrongly matched pairs had a greater effect on the estimation of the former. Even a correct pair could occasionally have the largest error when using the homography transformation. Therefore, the approach used in this paper, which eliminated incorrect pairs efficiently by first applying the affine transformation, was superior. The results of our method showed that the errors of the homography transformation decreased, as compared to those of the affine transformation, when the pairs used for estimation were correct, which also proved that the proposed model was more accurate. In order to show the differences between the two registration models, the second image and the tenth image in Seq. 1 were registered to the reference image using the proposed method with the different transformations. Three regions in the registered images and the reference image were randomly selected, as shown in Figures 8-10. In each region, two typical areas, marked with red and blue rectangles, were selected and enlarged. Among them, the red area in Figure 10(a1,b1) was an astronomical image target, so it appeared only once in the same place. The rest of the areas contained only stars.

The location of a region affected the results of the affine transformation, especially when it was a significant distance from the approximate center of rotation, as shown in Figures 9 and 10(a1,a2). Furthermore, when the time interval between two images was small, the differences in registration were not obvious, as shown in Figures 8-10(a1,b1). With an increase in time, Figures 8-10(a2,b2) show that the deviation of the registered images could not be ignored. Furthermore, Figure 11 shows the influence of the time interval on the different registration models. The pixel error of the matched points was positively correlated with the time interval when using the affine transformation, in all sequences. Because a longer time interval implied a larger rotation angle of the platform, the affine transformation did not accord with actual in-orbit conditions. In contrast, the performance of the homography transformation was more stable, which further illustrated the effectiveness of the proposed registration model. The error of our method was also affected by the size of the images and the movements of the stars between two images. If the size was small and the movements were significant, the matched points in the image to be registered moved out of their original regions as the time interval increased, which resulted in fewer points that could be matched and worse estimation results, as in Seq. 6.
The final objective of our study was to improve target detection in astronomical images. In addition to reducing the influence of the stars, the revision of the target trajectories was another important factor to consider. The trajectories before and after fitting are shown in Figure 12. The typical statistical indices are listed in Table 7. Furthermore, the curvatures of the trajectories revised by the two types of transformation are marked with red and green lines in Figure 12(b1-b5). The trajectories revised by the homography transformation, and the corresponding curvature curves, were smoother, which was advantageous for trace association. These results once again proved the effectiveness of the proposed method.

Discussion
A novel registration method for astronomical images was proposed in this paper. Few studies in the literature address our objectives. The proposed method achieved better accuracy and speed than the methods described in Section 1. The advantages and limitations of the presented method are discussed in this section.

1. The changes in star positions in images, caused by the in-orbit motion of platforms, were deduced in order to choose the registration model used in this paper. The traditional model was compared with the chosen model, and its limitations were analyzed in detail; the chosen model was more accurate and efficient.
2. We divided the whole image into regions and used a dynamic threshold based on statistics to identify the fixed stars easily and efficiently. These stars were distributed uniformly across the whole image, which was advantageous for the parameter estimation of the model.
3. We used the side lengths as the features of the triangles. The side lengths were transformed into relative values, which reduced the difficulty of setting thresholds when matching the triangles and improved the robustness of the method.
4. The estimation of parameters comprised two stages. The traditional model was used first to eliminate wrongly matched points, because it is more sensitive to them; the chosen model was then applied to realize the accurate registration. We thus eliminated significant errors due to wrongly matched points, which are typically difficult to confirm directly.
5. The method involves empirical parameters, one of which is the number of regions. For the same image, a larger number means more and smaller regions, which increases the computational burden and the likelihood that matched points move out of their original regions; a smaller number can result in an uneven distribution of the selected points. Setting the regions automatically, based on the sizes of the images and the possible motion of the targets, will be a key focus of our next research.
6. Our purpose in this work was to improve the performance of detection and tracking. In future work, we will propose a new detection and tracking method based on this work, to further expand upon it for potential real-space objects.
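Point 3 above can be sketched roughly as follows. The feature definition (sorted side lengths divided by the perimeter), the tolerance value, and the brute-force pairing are illustrative assumptions, not the paper's exact matching rule, which additionally applies a cumulative confidence matrix; the sketch only shows why relative side lengths make the matching threshold easy to set.

```python
import numpy as np
from itertools import combinations

def triangle_features(points):
    """For every triangle of star centroids, compute the sorted side lengths
    divided by the perimeter, so that thresholds for matching are relative
    values rather than absolute pixel distances."""
    feats = []
    for tri in combinations(range(len(points)), 3):
        a, b, c = (np.asarray(points[i], float) for i in tri)
        sides = np.sort([np.linalg.norm(a - b),
                         np.linalg.norm(b - c),
                         np.linalg.norm(c - a)])
        feats.append((tri, sides / sides.sum()))
    return feats

def match_triangles(feats_ref, feats_cur, tol=1e-3):
    """Pair triangles whose relative side-length features agree within tol
    (brute force; a confidence-based vote would refine these candidates)."""
    pairs = []
    for tri_r, f_r in feats_ref:
        for tri_c, f_c in feats_cur:
            if np.abs(f_r - f_c).max() < tol:
                pairs.append((tri_r, tri_c))
    return pairs
```

Because the feature is invariant to rotation and translation, the same stars observed after a small platform rotation produce near-identical relative side lengths, and a single dimensionless tolerance suffices for every region of the image.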

Conclusions
This paper presented a method for the accurate registration of astronomical images. The central concepts of the method were the registration model, sub-regional processing, and the description of the triangle relations of stable stars. Our registration model combined the frequently used affine transformation with the practical in-orbit situation. The proposed method was tested on four sequences of astronomical images, acquired with two astronomical image-based telescopes and a ground-based telescope. By applying strategies of local thresholds and local star matching, the proposed method achieved superior performance in handling large rotation angles and superior resistance to interference such as stray light and noise. The results also indicated that the proposed method is highly competitive in terms of matching accuracy and outperforms the baselines for registering astronomical images.

Figure 1 .
Figure 1. An in-orbit satellite under typical conditions. (a) Position change. (b) Influence of (a) on imaging.

Figure 2 .
Figure 2. An illustration of triangle construction in two images for registration. (a) Reference image. (b) Current image. (c) Photomicrographs of the marked region in (a). (d) Photomicrographs of the marked region in (b).

Figure 3 .
Figure 3. The simulation of points in the 1500 × 1500 image.

Figure 4 .
Figure 4. The range of the relative variation in lengths.

Figure 8 .
Figure 8. Region 1 in the difference images using (a) affine transformation and (b) homography transformation. (a1,b1) The difference image of the second frame and the reference image. (a2,b2) The difference image of the tenth frame and the reference image.

Figure 9 .
Figure 9. Region 2 in the difference images using (a) affine transformation and (b) homography transformation. (a1,b1) The difference image of the second frame and the reference image. (a2,b2) The difference image of the tenth frame and the reference image.

Figure 10 .
Figure 10. Region 3 in the difference images using (a) affine transformation and (b) homography transformation. (a1,b1) The difference image of the second frame and the reference image. (a2,b2) The difference image of the tenth frame and the reference image.

Table 1 .
Information of telescopes in sequences.

Table 2 .
Information of images in sequences.

Table 5 .
Computational cost comparison for corresponding point-matching among the proposed method and baselines.

Table 6 .
Comparison of pixel error among the proposed method and baselines.

Table 7 .
Comparison of trajectory fitting by different transformations.