1. Introduction
In recent years, the number of astronomical objects in images, including satellites and debris, has increased remarkably, making the surveillance of astronomical objects a hotspot in the field of remote sensing [1,2,3]. Many ground-based and space-based optical systems have been developed for this purpose. However, the backgrounds of the acquired astronomical images are complex, not only because of densely distributed stars and noise, but also because of the complex motion of in-orbit systems, which affects the detection and tracking of targets. Image registration, the process of matching two images that contain some of the same objects and estimating the optimal geometric transformation to align them, is a reasonable solution to this problem. The registration results can suppress the interference of most stars and align the trajectories of targets according to their movement rules, so that moving targets can be tracked subsequently. Among the various image registration methods, feature-based registration has become a focus of research in many fields. For astronomical images, stars are ideal and readily available feature points, compared with commonly used features such as corner points, endpoints, and edges. Accordingly, there are two main approaches to the registration of astronomical images: those based on the similarity of spatial relationships and those based on feature description.
Triangle-based methods, which rely on the similarity of spatial relationships, are the most common methods in this field. The traditional triangle methods focus on the mapping relationship between the stars in the detected star map and the stars in a star catalog, so they can only be used when the star catalog is known. In the reference image, every three stars within a specific range are composed into a triangle, whose angular distances are used as the side lengths of the triangle. If the differences between the triangles from the reference image and those from the star catalog are within a threshold, the corresponding star points are matched. However, the biggest disadvantage of this approach is its high computational demand when building and matching triangles, which limits its application on resource-constrained platforms. Many scholars have conducted significant research to address these challenges [4,5]. The first approach was to reduce the number of points indexed in the catalog, on the premise of keeping valid points. The second was to change the representation of the triangles. However, there have been few reasonable solutions to difficulties such as low efficiency, high memory consumption, and the selection of an appropriate threshold.
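As a simple worked illustration of the side lengths used by these catalog-based methods, the angular distance between two stars can be computed from their right ascension and declination. The snippet below is a generic sketch with hypothetical coordinate values, not code from the cited works.

```python
import numpy as np

def radec_to_unit_vector(ra_deg, dec_deg):
    """Convert right ascension/declination (degrees) to a unit direction vector."""
    ra, dec = np.radians(ra_deg), np.radians(dec_deg)
    return np.array([np.cos(dec) * np.cos(ra),
                     np.cos(dec) * np.sin(ra),
                     np.sin(dec)])

def angular_distance_deg(star_a, star_b):
    """Angular distance (degrees) between two stars given as (ra, dec) pairs."""
    u, v = radec_to_unit_vector(*star_a), radec_to_unit_vector(*star_b)
    return np.degrees(np.arccos(np.clip(np.dot(u, v), -1.0, 1.0)))

# Hypothetical (ra, dec) values of three catalog stars, in degrees
stars = [(10.68, 41.27), (11.20, 40.95), (10.95, 41.80)]

# Sorted angular distances serve as the side lengths of the star triangle
sides = sorted(angular_distance_deg(stars[i], stars[j])
               for (i, j) in [(0, 1), (1, 2), (0, 2)])
print(sides)
```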
Another type of triangle-based method relies only on the information in the images themselves, with the stars of the two images used to construct triangles. In order to capture translation, rotation, and the small changes in scale caused by temperature-related changes in focal length, the triangles are matched based on the triangle similarity theorem [6,7,8,9,10,11]. Groth [6] proposed a pattern-matching method to align pairs of coordinates in two 2D lists according to triangles formed from triplets of points in each list. The methods proposed in [7,8] followed a similar concept; they not only implemented it but also improved it independently with novel strategies. On this basis, Zhou [11] proposed an improved method based on efficient triangle similarity, in which statistical information was used to extract stable star points, each of which formed a triangle with its two nearest stable stars; the weight matrix in triangle matching was improved by introducing a degree of similarity, and the improved voting matrix was used for triangle matching. In the method proposed in [10], the authors introduced the Delaunay subdivision, a geometric topological structure, to further reduce the number of triangles formed by stars. The greatest defects of these methods are their extensive time requirements, high memory consumption, and need for sophisticated voting schemes to guarantee matching accuracy. In addition, some methods used in other fields [12,13,14] have even extended the triangle method to the construction of polygons, but they encounter the same problems. At present, PixInsight [15], a relatively mature software package, is a typical example of a successful application of the triangle similarity method. It selects the 200 brightest stars, which eliminates the inefficiency caused by including all the stars in the calculation. However, significant challenges remain when it is applied to embedded platforms.
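For illustration, a minimal sketch of this image-to-image triangle-similarity idea is given below: triangles are formed from triplets of detected star centroids, triangles whose sorted side lengths agree within a relative tolerance cast votes for their vertex correspondences, and each star is matched to its highest-voted counterpart. The point sets, tolerance, and voting rule are assumptions for illustration only, not the scheme of any specific cited method.

```python
import numpy as np
from itertools import combinations

def canonical_triangle(points, idx):
    """Return the vertex indices ordered by the length of their opposite side,
    together with the sorted side lengths of the triangle."""
    i, j, k = idx
    opp = {i: np.linalg.norm(points[j] - points[k]),
           j: np.linalg.norm(points[i] - points[k]),
           k: np.linalg.norm(points[i] - points[j])}
    order = tuple(sorted(opp, key=opp.get))   # vertices sorted by opposite side length
    sides = np.array(sorted(opp.values()))    # side lengths in the same canonical order
    return order, sides

def match_by_triangle_similarity(pts_ref, pts_sen, rel_tol=0.01):
    """Vote for point correspondences whose triangles have similar side lengths."""
    votes = np.zeros((len(pts_ref), len(pts_sen)), dtype=int)
    tris_sen = [canonical_triangle(pts_sen, idx)
                for idx in combinations(range(len(pts_sen)), 3)]
    for idx_r in combinations(range(len(pts_ref)), 3):
        order_r, sides_r = canonical_triangle(pts_ref, idx_r)
        for order_s, sides_s in tris_sen:
            if np.all(np.abs(sides_r - sides_s) <= rel_tol * sides_r):
                for r, s in zip(order_r, order_s):
                    votes[r, s] += 1
    # Greedy assignment: each reference star goes to its highest-voted counterpart
    return {r: int(np.argmax(votes[r])) for r in range(len(pts_ref)) if votes[r].max() > 0}

pts_a = np.array([[10.0, 20.0], [120.0, 40.0], [60.0, 150.0], [200.0, 200.0]])
pts_b = pts_a + np.array([5.0, -3.0])   # translated copy standing in for the sensed image
print(match_by_triangle_similarity(pts_a, pts_b))   # expected: {0: 0, 1: 1, 2: 2, 3: 3}
```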
The methods based on feature description attempt to improve registration accuracy by enhancing the feature descriptors. Generally, these methods include feature detection, feature description, and feature matching [16]. Among these steps, feature description has become the dominant technique. Many rotation-invariant descriptors, such as SIFT [17], SURF [18], GLOH [19], ORB [20], BRISK [21], FREAK [22], KAZE [23], and HOG [24], have been proposed and detailed in the literature [16]. The dominant orientation assignment is the key part of these methods for guaranteeing rotation-invariant features in many applications. The estimation of the dominant orientation requires gradient information from the surrounding localized features. However, the gray value of a star in an image decreases similarly in all directions, and varied sizes of the local area yield inconsistent orientations, which easily produce errors in complex astronomical scenes. In addition, few existing methods have been devoted to the design of descriptors specifically for stars. Ruiz [25] applied a scale-invariant feature transform (SIFT) descriptor to register stellar images. Zhou [16] estimated a dominant orientation from the geometric relationships between a described star and two neighboring stable stars; the local patch size changed adaptively according to the size of the described star, and an adaptive speeded-up robust features (SURF) descriptor was constructed.
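A generic sketch of the detect-describe-match pipeline referred to above is shown below, using OpenCV's standard SIFT implementation and Lowe's ratio test (not the adaptive star descriptors of [16]); the image paths are placeholders, and SIFT availability depends on the OpenCV build.

```python
import cv2

# Load two star images as grayscale (paths are placeholders)
img1 = cv2.imread("frame_1.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("frame_2.png", cv2.IMREAD_GRAYSCALE)

# Detect keypoints and compute 128-D descriptors
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Brute-force matching with Lowe's ratio test to discard ambiguous matches
matcher = cv2.BFMatcher(cv2.NORM_L2)
matches = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]
print(f"{len(good)} tentative matches")
```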
After the matching of point pairs, the pairs are used to solve the image transformation parameters and achieve accurate registration. The transformation in most research has been regarded as a non-rigid transformation, but translation and rotation have been the two most prominent rigid deformations [10,16]. However, the registration errors have been significant in certain scenarios, especially when the images were significantly rotated or the point pairs were concentrated in local areas. Recently, many deep-learning methods have been proposed for the estimation of these parameters. These methods can be classified into supervised [26,27] and unsupervised [28,29,30] categories. The former use synthesis to generate examples with ground-truth labels for training, while the latter directly minimize the photometric or feature loss between images. However, these methods face significant challenges. Firstly, the synthesized examples cannot reflect the parallax and dynamic objects of real scenes. Secondly, the stars in the images are the key features, but they are small, their range of gray values is large, and they carry limited texture information. When the input size of the network is small and the astronomical image being processed is large, the image must be resized; the multiple convolutional layers in the network then further reduce the valid information, so this two-step process discards much of the limited star information. Therefore, methods based on deep learning are not suitable for the registration of astronomical images.
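For comparison, the conventional (non-learning) parameter estimation from matched point pairs can be sketched as follows, using OpenCV's RANSAC-based estimator of a rotation-plus-translation (partial affine) model; the point coordinates and thresholds are illustrative assumptions.

```python
import numpy as np
import cv2

# Matched star centroids in the reference image (toy values)
pts_ref = np.array([[100.0, 120.0], [340.0, 80.0], [220.0, 300.0], [400.0, 260.0]])

# Simulate the sensed image: rotate by 2 degrees and translate by (5, -3) pixels
theta = np.radians(2.0)
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
pts_sen = pts_ref @ R.T + np.array([5.0, -3.0])

# Robustly estimate a rotation + translation (+ uniform scale) model with RANSAC
M, inliers = cv2.estimateAffinePartial2D(pts_ref, pts_sen, method=cv2.RANSAC,
                                         ransacReprojThreshold=3.0)
print(M)                 # 2x3 matrix [[s*cos, -s*sin, tx], [s*sin, s*cos, ty]]
print(inliers.ravel())   # 1 for accepted pairs, 0 for rejected pairs
```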
In this work, we aimed to realize the registration of astronomical images in actual orbit. The relationship between the changes in the stars and the in-orbit motion of the platform was studied, in order to select an accurate and effective registration model. The concept of stable stars in images is proposed in this paper: stable stars are the stars with the highest gray values and the largest sizes in an image, whose appearance is usually stable across multiple images. Each image was divided into regions, and the stable stars inside each region were selected based on statistics, to guarantee the uniformity of the points used for matching over the whole image. Only the points in the same regions, rather than all the points, were used to construct the triangles, so the number of triangles was substantially reduced. During triangle matching, relative changes in the side lengths were substituted for the angles, and a new cumulative confidence matrix was designed. The traditional transformation model and the proposed transformation model were coordinated to eliminate the influence of false pairs: the former was used for the primary screening of false pairs, and the latter further estimated the transformation parameters on this basis, to realize an accurate registration.
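A minimal sketch of this two-stage idea, as we read it, is given below: a rigid/affine fit is used only for the primary screening of false pairs, and a homography is then estimated on the surviving inliers. The function and its inputs are hypothetical and do not reproduce the authors' implementation.

```python
import numpy as np
import cv2

def two_stage_registration(pts_ref, pts_sen):
    """Hypothetical sketch: stage 1 screens false pairs with a RANSAC affine fit;
    stage 2 estimates a homography on the remaining inliers."""
    pts_ref = np.asarray(pts_ref, dtype=np.float64)
    pts_sen = np.asarray(pts_sen, dtype=np.float64)

    # Stage 1: translation + rotation (+ scale) model used only to reject gross outliers
    _, mask = cv2.estimateAffinePartial2D(pts_ref, pts_sen, method=cv2.RANSAC,
                                          ransacReprojThreshold=3.0)
    keep = mask.ravel().astype(bool)

    # Stage 2: homography fitted to the inliers (requires at least four surviving pairs)
    H, _ = cv2.findHomography(pts_ref[keep], pts_sen[keep], method=0)
    return H
```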
As a whole, the main contributions of this paper can be summarized as follows.
- 1. We discussed in detail the changes in the stars in images caused by the in-orbit motion of the platforms, as well as the limitations of the traditional model. An accurate and effective registration model was chosen to replace the traditional one.
- 2. The triangles were characterized by their side lengths instead of their angles. Furthermore, the lengths were transformed into relative values to reduce the difficulty of threshold setting during triangle matching.
- 3. A two-stage parameter-estimation strategy based on two different models was proposed to realize accurate registration.
The organization of the remainder of this paper is as follows. Section 2 introduces the registration model of astronomical images in detail. In Section 3, the proposed method, along with the theoretical background, is introduced. The experimental results are presented in Section 4, the advantages and limitations of the presented method are discussed in Section 5, and the paper concludes with Section 6.
2. Registration Model for Astronomical Images
Figure 1 shows an in-orbit satellite under typical conditions and the influence of its position changes on imaging. At the times $t_1$ and $t_2$, the camera observed the same fixed star. Because the distance of the star was far greater than the orbital altitude $h$ of the satellite, the stars in the images were infinite points, and the movement of the satellite could be ignored. Therefore, the changes in the star positions in the images were more likely caused by the change in satellite attitude at different times. The relationship between the same target in two frames of images was deduced as follows.
Let the camera's intrinsic parameter matrix be
$$\mathbf{K} = \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix},$$
where $f_x$, $f_y$, $c_x$, and $c_y$ are the intrinsic parameters of the camera.
For an infinite star point, if its corresponding pixel coordinate is $\mathbf{p}_1 = [u_1, v_1, 1]^{T}$ and its normalized image plane coordinate is $\mathbf{x}_1 = [x_1, y_1, 1]^{T}$, then
$$\mathbf{p}_1 = \mathbf{K}\mathbf{x}_1,$$
where
$$\mathbf{x}_1 = \mathbf{K}^{-1}\mathbf{p}_1.$$
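As a quick numerical check of this relation (with hypothetical intrinsic parameter values):

```python
import numpy as np

# Hypothetical intrinsic parameters (pixels)
fx, fy, cx, cy = 2000.0, 2000.0, 512.0, 512.0
K = np.array([[fx, 0, cx], [0, fy, cy], [0, 0, 1]])

p1 = np.array([700.0, 300.0, 1.0])   # homogeneous pixel coordinate
x1 = np.linalg.inv(K) @ p1           # normalized image plane coordinate
print(x1)                            # [ 0.094 -0.106  1.   ]
assert np.allclose(K @ x1, p1)       # p1 = K x1
```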
In the coordinate system of the camera, when taking the first frame, the origin was set as the optical center, and the Z axis was set as the optical axis. Because there was only a rotational relationship between the camera attitudes, for a point whose coordinate in the first frame was $\mathbf{P}_1$, after the attitude change matrix $\mathbf{R}$ of the camera between the imaging times of the two frames, its coordinate $\mathbf{P}_2$ in the second frame camera coordinate system satisfied the following:
$$\mathbf{P}_2 = \mathbf{R}\mathbf{P}_1.$$
That is, the coordinates of the point in the camera's coordinate system of the second frame, whose coordinates on the normalized image plane of the first frame were $\mathbf{x}_1$, are now:
$$\mathbf{P}_2 = \mathbf{R}\mathbf{x}_1 = [X_2, Y_2, Z_2]^{T}.$$
The optical center of the camera and the point formed a line. The coordinates of the intersection of this line and the normalized image plane were calculated by:
$$\mathbf{x}_2 = \frac{1}{Z_2}\mathbf{P}_2 = \left[\frac{X_2}{Z_2}, \frac{Y_2}{Z_2}, 1\right]^{T}.$$
The corresponding pixel coordinates of this point in the second frame, $\mathbf{p}_2$, were calculated as:
$$\mathbf{p}_2 = \mathbf{K}\mathbf{x}_2 = \frac{1}{Z_2}\mathbf{K}\mathbf{R}\mathbf{K}^{-1}\mathbf{p}_1.$$
If the attitude Euler angles of the second frame, rotating around the X, Y, and Z axes relative to the first frame, were $\alpha$, $\beta$, and $\gamma$, respectively, and the rotation order was Z, Y, X, the relationship between $\mathbf{R}$ and these angles was calculated as:
$$\mathbf{R} = \mathbf{R}_X(\alpha)\,\mathbf{R}_Y(\beta)\,\mathbf{R}_Z(\gamma),$$
where
$$\mathbf{R}_X(\alpha) = \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\alpha & -\sin\alpha \\ 0 & \sin\alpha & \cos\alpha \end{bmatrix},\quad
\mathbf{R}_Y(\beta) = \begin{bmatrix} \cos\beta & 0 & \sin\beta \\ 0 & 1 & 0 \\ -\sin\beta & 0 & \cos\beta \end{bmatrix},\quad
\mathbf{R}_Z(\gamma) = \begin{bmatrix} \cos\gamma & -\sin\gamma & 0 \\ \sin\gamma & \cos\gamma & 0 \\ 0 & 0 & 1 \end{bmatrix}.$$
The above deductions were regarded as the process of homography transformation. The homography matrix can be written as:
$$\mathbf{H} = \mathbf{K}\mathbf{R}\mathbf{K}^{-1} = \begin{bmatrix} h_{11} & h_{12} & h_{13} \\ h_{21} & h_{22} & h_{23} \\ h_{31} & h_{32} & h_{33} \end{bmatrix}.$$
If the relationship between the two images could be approximated as a translational and rotational relationship of the image, that is, the affine transformation commonly used in this field, it was equivalent to the existence of a rotation angle $\theta$ and a translation vector $\mathbf{t} = [t_x, t_y]^{T}$, satisfying:
$$\mathbf{p}_2 = \begin{bmatrix} \cos\theta & -\sin\theta & t_x \\ \sin\theta & \cos\theta & t_y \\ 0 & 0 & 1 \end{bmatrix}\mathbf{p}_1 = \mathbf{H}\mathbf{p}_1.$$
The third row of $\mathbf{H}$ was calculated as:
$$\begin{bmatrix} h_{31} & h_{32} & h_{33} \end{bmatrix} = \begin{bmatrix} \dfrac{r_{31}}{f_x} & \dfrac{r_{32}}{f_y} & r_{33} - \dfrac{c_x r_{31}}{f_x} - \dfrac{c_y r_{32}}{f_y} \end{bmatrix},$$
where $r_{ij}$ denotes the element in the $i$-th row and $j$-th column of $\mathbf{R}$. Then, the above problem could be simplified to at least meet the following condition:
$$h_{31} \approx 0, \quad h_{32} \approx 0, \quad h_{33} \approx 1.$$
If, and only if, $h_{31}$ and $h_{32}$ were close to 0 and $h_{33}$ was close to 1, could $\theta$ and $\mathbf{t}$ be solved. However, when the satellite operated in an orbit at an altitude of 200–2000 km from the ground, the time for it to orbit the earth was 90–120 min, so that the operating angular speed was 3–4 arcmin per second. In this scenario, the attitude of the satellite changed rapidly, and the translation and rotation of the image could not reflect the actual changes in the image. Therefore, the image registration model based on homography transformation was applied in this paper.
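To make the final argument concrete, the sketch below builds the rotation-only homography $\mathbf{H} = \mathbf{K}\mathbf{R}\mathbf{K}^{-1}$ for an assumed camera and attitude change and inspects its third row; when that row departs noticeably from $[0, 0, 1]$, the translation-plus-rotation approximation breaks down and the homography model must be used. All numerical values are hypothetical.

```python
import numpy as np

# Hypothetical intrinsics (pixels) and attitude change between two frames
fx, fy, cx, cy = 2000.0, 2000.0, 512.0, 512.0
K = np.array([[fx, 0, cx], [0, fy, cy], [0, 0, 1]])

alpha, beta, gamma = np.radians([0.5, 0.8, 2.0])   # rotations about X, Y, Z
ca, sa = np.cos(alpha), np.sin(alpha)
cb, sb = np.cos(beta), np.sin(beta)
cg, sg = np.cos(gamma), np.sin(gamma)
Rx = np.array([[1, 0, 0], [0, ca, -sa], [0, sa, ca]])
Ry = np.array([[cb, 0, sb], [0, 1, 0], [-sb, 0, cb]])
Rz = np.array([[cg, -sg, 0], [sg, cg, 0], [0, 0, 1]])
R = Rx @ Ry @ Rz                                   # rotation order Z, Y, X

H = K @ R @ np.linalg.inv(K)                       # rotation-only homography

# The affine (translation + rotation) approximation needs the third row ~ [0, 0, 1]
h31, h32, h33 = H[2]
print(H[2])
if abs(h31) < 1e-6 and abs(h32) < 1e-6 and abs(h33 - 1.0) < 1e-6:
    print("translation + rotation model is adequate")
else:
    print("use the full homography model")
```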