A Full-Spectrum Registration Method for Zhuhai-1 Satellite Hyperspectral Imagery

Accurate registration is an essential prerequisite for analysis and applications involving remote sensing imagery. It is usually difficult to extract enough matching points for inter-band registration in hyperspectral imagery due to the different spectral responses for land features in different image bands. This is especially true for non-adjacent bands. The inconsistency in geometric distortion caused by topographic relief also makes it inappropriate to use a single affine transformation relationship for the geometric transformation of the entire image. Currently, accurate registration between spectral bands of Zhuhai-1 satellite hyperspectral imagery remains challenging. In this paper, a full-spectrum registration method was proposed to address this problem. The method combines the transfer strategy based on the affine transformation relationship between adjacent spectral bands with the differential correction from dense Delaunay triangulation. Firstly, the scale-invariant feature transform (SIFT) extraction method was used to extract and match feature points of adjacent bands. The RANdom SAmple Consensus (RANSAC) algorithm and the least-squares method are then used to eliminate mismatching point pairs to obtain fine matching point pairs. Secondly, a dense Delaunay triangulation was constructed based on the fine matching point pairs. The affine transformation relation for non-adjacent bands was established for each triangle using the affine transformation relation transfer strategy. Finally, the affine transformation relation was used to perform differential correction for each triangle. Three Zhuhai-1 satellite hyperspectral images covering different terrains were used as experiment data. The evaluation results showed that the adjacent band registration accuracy ranged from 0.2 to 0.6 pixels. The structural similarity measure and cosine similarity measure between non-adjacent bands were both greater than 0.80.
Moreover, the full-spectrum registration accuracy was less than 1 pixel. These registration results can meet the needs of Zhuhai-1 hyperspectral imagery applications in various fields.


Introduction
The Zhuhai-1 hyperspectral satellite is a commercial remote sensing micro-nano satellite constellation that was launched and is currently operated by Zhuhai Orbita Aerospace Science and Technology Co., Ltd. It is the first satellite constellation constructed and operated by a private company in China. The Zhuhai-1 satellite constellation is planned to feature 34 small satellites. Additionally, the gray levels can vary for the same land features in different spectral bands, since the hyperspectral images are obtained by spectral imagers with gradient filters. This makes it problematic to extract and match the feature points of non-adjacent spectral bands in OHS hyperspectral images. Therefore, accurate inter-band registration of Zhuhai-1 satellite hyperspectral images remains challenging. In this paper, a novel full-spectrum registration algorithm was proposed based on feature point matching methods. The algorithm combines the transfer strategy based on the affine transformation relationship between adjacent bands with the differential correction from dense Delaunay triangulation. The performance of the proposed registration method was evaluated using the coordinate deviation of matching points, as well as similarity measures, for hyperspectral images with different topographies and roll angles.

Materials
In this study, the experimental imagery was acquired by the Zhuhai-1 hyperspectral satellite constellation (OHS-A, B, C, D) launched on 26 April 2018. The camera onboard the Zhuhai-1 hyperspectral satellite consists of three spliced CMOS image sensors. The splicing method for the focal plane is illustrated in Figure 1, and Figure 2 displays the spectral composition of each CMOS image sensor. Each CMOS image sensor is composed of 5056 × 2968 pixels, the size of each pixel is 4.25 µm, and the spectral range is 400-1000 nm. Each CMOS image sensor divides the 400-1000 nm spectrum evenly into 32 spectral bands with a gradient filter. Each spectral band occupies about 92 lines on the sensor (2968 divided by 32). Only eight rows of pixels are used in each spectral band for eight-level integral imaging due to the low resolution and signal-to-noise ratio (Figure 2). The main parameters for the OHS satellites and payload are displayed in Table 1. Figure 3 shows the spectral response functions of each band of the OHS hyperspectral sensors.
Three hyperspectral datasets with different topographies and roll angles were selected to evaluate the registration accuracy. The first dataset covers the southwestern part of Arizona, USA; the area consists of a large mountain basin with plain areas. The second dataset covers the central part of Chongzuo City, Guangxi Zhuang Autonomous Region, China; this area is crossed by the Zuojiang River and its tributaries and features both alluvial plains and hilly areas. The third dataset covers the northern part of Turpan City, Xinjiang Uygur Autonomous Region, China; the image shows the majestic Bogda Mountains, as well as the oasis plain belt that lies below sea level. The size of each image was 5056 × 5056 pixels. The experiment dataset parameters are listed in Table 2.

Methods
The proposed full-spectrum inter-band registration method consists of two parts: relative and absolute registration. Relative registration involves matching feature points between adjacent spectral bands to obtain accurate matching points. Absolute registration involves selecting a spectral band as the reference image and performing registration for all other bands. This process involves transferring the affine transformation relationship between adjacent bands and differential correction from dense Delaunay triangulation. By comparing image quality evaluation metrics such as histogram, clarity, Shannon entropy, and signal-to-noise ratio (SNR) across bands, an image band with higher quality was selected as the reference image. Figure 4 displays the full-spectrum registration workflow used in this study.

Feature Matching of Adjacent Bands
The scale-invariant feature transform (SIFT) feature operator is a local image feature description operator that maintains invariance to image scaling, rotation, and affine transformation [32]. In this study, the feature points were extracted and matched based on SIFT features. The flow of the algorithm is described below.

Keypoint detection
The image scale space L(x, y, σ) was produced from the convolution of a variable-scale Gaussian function G(x, y, σ) with an input image I(x, y), as shown in the following equations:

L(x, y, σ) = G(x, y, σ) ⊗ I(x, y),
G(x, y, σ) = (1 / (2πσ²)) exp(−(x² + y²) / (2σ²)),

where ⊗ is the convolution operation in x and y; (x, y) are pixel coordinates; and σ is the scale space factor, whose size determines the smoothness scale of the image. A large scale corresponds to general features in the image, while a small scale corresponds to detailed features. Different scale space factors were used to construct a difference of Gaussians (DoG) pyramid based on the original image, where each DoG layer is the difference of two adjacent Gaussian scales, D(x, y, σ) = L(x, y, kσ) − L(x, y, σ). A point was selected as a keypoint if it is the maximum or minimum among its 26 neighbors (8 in the same layer plus 9 each in the layers above and below) in the DoG scale space.
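The scale-space and DoG construction above can be sketched in a few lines of NumPy. The function names (`gaussian_blur`, `dog_pyramid`, `is_extremum`) and the pyramid parameters are illustrative assumptions, not the exact implementation used in this study:

```python
import numpy as np

def gaussian_kernel(sigma):
    # 1-D kernel sampled from G(x, sigma), proportional to exp(-x^2 / (2 sigma^2))
    radius = int(3 * sigma + 0.5)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def gaussian_blur(img, sigma):
    # Separable 2-D convolution: L(x, y, sigma) = G(x, y, sigma) (convolved with) I(x, y)
    k = gaussian_kernel(sigma)
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, rows)

def dog_pyramid(img, sigma0=1.6, k=2**0.5, levels=3):
    # Each DoG layer is the difference of two adjacent Gaussian scales:
    # D(x, y, sigma) = L(x, y, k*sigma) - L(x, y, sigma)
    scales = [gaussian_blur(img, sigma0 * k**i) for i in range(levels + 1)]
    return [scales[i + 1] - scales[i] for i in range(levels)]

def is_extremum(dog, level, y, x):
    # Keypoint test: extremum among the 26 neighbours (8 in the same DoG
    # layer plus 9 each in the layers above and below).
    cube = np.stack([dog[level + d][y - 1:y + 2, x - 1:x + 2] for d in (-1, 0, 1)])
    v = dog[level][y, x]
    return bool(v == cube.max() or v == cube.min())
```

In practice a production SIFT implementation also uses multiple octaves and sub-pixel keypoint refinement; the sketch only shows the single-octave extremum test described above.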

Keypoint direction matching
An orientation histogram was formed from the gradient orientations of sample points within a region around the keypoint. The orientation histogram ranged from 0° to 360°, and each 10° represented a direction. A total of 36 directions were derived in this study. The peaks in the orientation histogram correspond to the dominant directions of local gradients at the keypoint.

Keypoint descriptor
The local area around the feature point was rotated clockwise by θ (i.e., adjusted to 0°) when constructing the feature descriptor to ensure its rotation invariance. In the rotated region, the surrounding pixel region was partitioned with the keypoint as the center. The gradient accumulation value is calculated for eight directions on each sub-block, and a gradient direction histogram was then drawn. Finally, the feature vector representing local keypoint information could be obtained. Through the above steps, the influence of geometric factors such as scale change and rotation on the SIFT feature vector could be removed. Moreover, the influence of illumination change could be reduced if the SIFT feature vector is further normalized [33].

Feature matching
After the feature vectors are generated for the two images, the similarity of keypoints is measured based on Euclidean distance. A keypoint p is taken from the reference image and used to search for the two keypoints n and m that have the closest Euclidean distances to p in the matched image. The match is accepted if the ratio of the closest distance to the second-closest distance is less than a certain threshold.
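The nearest/second-nearest ratio test can be written directly in NumPy as a sketch; `ratio_test_match` is a hypothetical name and the 0.8 threshold is only an assumed example value:

```python
import numpy as np

def ratio_test_match(desc_ref, desc_sen, ratio=0.8):
    """Lowe-style ratio test: accept a match only when the nearest
    neighbour is sufficiently closer than the second-nearest one."""
    desc_ref = np.asarray(desc_ref, float)
    desc_sen = np.asarray(desc_sen, float)
    matches = []
    for i, d in enumerate(desc_ref):
        dists = np.linalg.norm(desc_sen - d, axis=1)
        n, m = np.argsort(dists)[:2]          # nearest and second-nearest
        if dists[n] < ratio * dists[m]:
            matches.append((i, int(n)))
        # otherwise the match is ambiguous and is discarded
    return matches
```

A real SIFT pipeline would apply this test to 128-dimensional descriptors, but the logic is identical for any vector length.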

Elimination of mismatched points
The RANdom SAmple Consensus (RANSAC) algorithm was used to find the model supported by the maximum number of matching point pairs, thereby eliminating mismatched point pairs. The basic steps of the algorithm are described as follows: Step 1: Randomly select the minimum number of points required to determine the model parameters.
Step 2: Calculate the model parameters.
Step 3: Find how many points from the entire point set fit the model within a user-defined tolerance λ.
Step 4: Re-estimate the model parameters using identified inliers and terminate the process if the fraction of inlier points over the entire point set exceeds a predefined threshold ψ.
Step 5: Otherwise, repeat steps 1 through 4 for a prescribed number of iterations.
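The five steps above can be sketched as follows for a 2-D affine model (the loop is identical for a homography); `ransac_affine` is a hypothetical name, and `tol` and `psi` are illustrative stand-ins for the tolerance λ and threshold ψ:

```python
import numpy as np

def fit_affine(src, dst):
    # Least-squares affine fit: [X, Y] = [x, y, 1] @ coef, coef has shape (3, 2)
    M = np.hstack([src, np.ones((len(src), 1))])
    coef, *_ = np.linalg.lstsq(M, dst, rcond=None)
    return coef

def ransac_affine(src, dst, tol=1.0, psi=0.8, iters=200, seed=0):
    """RANSAC sketch: sample 3 pairs, fit, count inliers within tol,
    and refit on the inliers once the inlier fraction exceeds psi."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(src), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(len(src), 3, replace=False)       # Step 1
        coef = fit_affine(src[idx], dst[idx])              # Step 2
        pred = np.hstack([src, np.ones((len(src), 1))]) @ coef
        resid = np.linalg.norm(pred - dst, axis=1)
        inliers = resid < tol                              # Step 3
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
        if inliers.mean() >= psi:                          # Step 4: stop early
            break                                          # Step 5: else iterate
    return fit_affine(src[best_inliers], dst[best_inliers]), best_inliers
```

With noise-free inliers and gross outliers, the loop almost always finds a clean minimal sample within a couple of hundred iterations, so the final refit uses only correct correspondences.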

Homography matrix estimation
The matching point pairs of two images are defined as p1(x1, y1) and p2(x2, y2), and their homography matrix H satisfies:

[x2, y2, 1]^T ≅ H [x1, y1, 1]^T, where H = [[h11, h12, h13], [h21, h22, h23], [h31, h32, h33]].

Expanding the matrix product and eliminating the unknown scale factor gives:

x2 = (h11 x1 + h12 y1 + h13) / (h31 x1 + h32 y1 + h33),
y2 = (h21 x1 + h22 y1 + h23) / (h31 x1 + h32 y1 + h33),

which can be rearranged into two equations that are linear in the elements of H. The last element of the homography matrix H is normalized to 1, so H has eight unknown variables, and at least four pairs of matching points (no three of which are collinear) are needed to obtain the matrix for the two images. Usually, enough homonymous matching points can be obtained in adjacent spectral bands. Since the number of extracted keypoints is affected by the image features of ground objects, about 5000 to 100,000 matching points were obtained for each image in this study. The triangulation in the absolute registration process becomes denser with more matching points, producing higher registration accuracy. The least-squares method is used to estimate the homography matrix H from the resulting overdetermined system of linear equations.
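A direct least-squares (DLT-style) estimate of H with h33 normalized to 1 can be sketched as below. This is an illustrative reconstruction of the linear system described above, not the exact code used in the study:

```python
import numpy as np

def estimate_homography(p1, p2):
    """Least-squares estimate of H (h33 = 1) from >= 4 matching pairs.
    Each pair contributes two equations linear in the 8 unknowns."""
    rows, rhs = [], []
    for (x1, y1), (x2, y2) in zip(p1, p2):
        # x2 * (h31*x1 + h32*y1 + 1) = h11*x1 + h12*y1 + h13, and similarly for y2
        rows.append([x1, y1, 1, 0, 0, 0, -x1 * x2, -y1 * x2]); rhs.append(x2)
        rows.append([0, 0, 0, x1, y1, 1, -x1 * y2, -y1 * y2]); rhs.append(y2)
    h, *_ = np.linalg.lstsq(np.asarray(rows, float), np.asarray(rhs, float),
                            rcond=None)
    return np.append(h, 1.0).reshape(3, 3)  # restore the normalized h33 = 1
```

With more than four pairs the system is overdetermined and the least-squares solution averages out small localization errors in the matches.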
Matching point adjustment
The coordinates of each matching point in the reference band can be predicted in the sensed image using the estimated homography matrix H. The differences in the x and y directions and the root mean square (RMS) error are calculated from the predicted values and the actual matching points. Figure 5 displays a schematic diagram of the adjustment of matching points: P1 and P2 are a matching point pair, and P2′ is the predicted position of P1 on the sensed image, calculated using the transformation matrix H. The root mean square error of a matching point pair with actual coordinates (x, y) and predicted coordinates (x′, y′) is

RMS_i = sqrt((x′ − x)² + (y′ − y)²),

and matching point pairs with a root mean square error greater than one are eliminated. This process is repeated until the total root mean square error over all N matching point pairs,

RMS = sqrt((1/N) Σ RMS_i²),

is less than one pixel.
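The adjustment loop can be sketched as below. For brevity, this variant removes the single worst pair per iteration rather than all pairs with an error above one pixel, which reaches the same stopping condition; `prune_matches` is a hypothetical name:

```python
import numpy as np

def prune_matches(p1, p2, H, rms_tol=1.0):
    """Iteratively drop the worst matching pair until the total RMS
    reprojection error over the remaining pairs is below rms_tol pixels."""
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    keep = np.ones(len(p1), dtype=bool)
    while True:
        hom = np.hstack([p1[keep], np.ones((keep.sum(), 1))]) @ H.T
        pred = hom[:, :2] / hom[:, 2:3]          # predicted positions on sensed image
        err = np.linalg.norm(pred - p2[keep], axis=1)
        if np.sqrt(np.mean(err**2)) < rms_tol:   # total RMS below tolerance: stop
            return keep
        worst = np.flatnonzero(keep)[np.argmax(err)]
        keep[worst] = False                      # eliminate the worst pair
```

In a full pipeline H would also be re-estimated after each elimination round; the sketch keeps H fixed to stay short.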

Delaunay Triangulation Construction
The Delaunay triangulation for a set of points in a two-dimensional domain has two important properties. The first is the empty circumcircle property: the circle circumscribing any Delaunay triangle contains no other input points in its interior. The second is the maximum-minimum angle property: the Delaunay triangulation maximizes the minimum angle among all triangulations of the point set in the plane. Commonly used Delaunay triangulation construction algorithms include the incremental insertion method, the triangulation growth method, and the divide-and-conquer method. In this study, the divide-and-conquer method was employed to construct the Delaunay triangulation.
The Delaunay triangulation is created using the matching points obtained from relative registration (Figure 6). In this step, the triangulation is constructed for the matching points in the reference band. An affine transformation relationship is derived from the vertices of each small triangle in the reference band and the corresponding matching points in the sensed band. The equations are given as:

X = a1 x + b1 y + c1,
Y = a2 x + b2 y + c2,

where a1, b1, c1, a2, b2, and c2 represent the affine transformation coefficients for the two triangles; (x, y) are the coordinates of a matching point in the reference band; and (X, Y) are the coordinates of the corresponding matching point in the image to be registered.
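For a single triangle, the six affine coefficients follow exactly from the three vertex correspondences (three equations per coordinate), and interior points are then corrected differentially. A minimal NumPy sketch is shown below; in practice the triangulation itself could come from, e.g., `scipy.spatial.Delaunay`:

```python
import numpy as np

def triangle_affine(tri_ref, tri_sen):
    # Solve X = a1*x + b1*y + c1 and Y = a2*x + b2*y + c2 exactly from
    # the three vertex pairs of one Delaunay triangle.
    M = np.hstack([np.asarray(tri_ref, float), np.ones((3, 1))])
    return np.linalg.solve(M, np.asarray(tri_sen, float))  # (3, 2) coefficients

def warp_point(pt, coef):
    # Differential correction: map an interior point of the reference
    # triangle into the sensed band using that triangle's affine model.
    return np.array([pt[0], pt[1], 1.0]) @ coef
```

Because three non-collinear vertices determine the affine model uniquely, the per-triangle system is square and solved exactly rather than by least squares.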


Affine Transformation Relationship Transfer Strategy
The affine transformation relationship for adjacent bands can be transferred to perform registration for all other bands after a spectral band is selected as the reference band. The criterion for reference band selection is that the band should represent the texture characteristics of ground objects and have high contrast. The reference band can be chosen by analyzing the signal-to-noise ratio and range of the spectral bands. Generally, a middle or red band is selected as the reference band. There are two situations in the registration process: the registration of bands adjacent to the reference band and of bands non-adjacent to it. In the first case, the differential correction method based on dense Delaunay triangulation is used directly for registration. In the second case, the registration of non-adjacent bands is completed by combining the transferred affine transformation relationship of adjacent bands with a differential correction from dense Delaunay triangulation. Figure 7 displays a schematic diagram of full-spectrum registration, in which the reference image is Band 1 and the solid circle points in Figure 7a are the matching points between Band 1 and Band 2. The registration steps for Band 1 and Band 2 are described below.
Step 1: The matching points in Band 1 (the solid circled points in Figure 7a) are used to construct a dense Delaunay triangulation D1.
Step 2: The affine transformation coefficients are calculated using Equation (15). This is based on the three vertices of the small triangle in D1 (the red solid triangle vertices), and the corresponding matching points in Band 2 (the red dashed triangle vertices in Figure 7b).
Step 3: An affine transformation is performed on the internal points of the small triangles in D1 to calculate their corrected coordinates in Band 2. Nearest-neighbor, bilinear, or bicubic interpolation is then used to calculate the new pixel values.
Step 4: Steps (2) and (3) are repeated until all triangle calculations are complete.
The registration of Band 3 and Band 1 can be used as an example. The following steps describe registration in the second case.
Step 1: The matching points for Band 2 and Band 3 (the diamond point in Figure 7b) are used to construct a dense Delaunay triangulation D2.
Step 2: Equation (15) is used to calculate the affine transformation coefficients. This calculation is based on the small triangle vertices in D2 (the red diamond points), and the corresponding matching points in Band 3 (the red diamond points in Figure 7c).
Step 3: The coordinates for the matching points between Band 1 and Band 2 (the red solid circle point in Figure 7b) are calculated for Band 3 (the red dashed circle point in Figure 7c) based on the affine transformation relationship.
Step 4: Steps (2) and (3) are repeated until all the matching points in Band 1 and Band 2 (the solid circle point in Figure 7b) are calculated for Band 3 (the dotted circle point in Figure 7c).
Based on the above steps, the coordinates of the matching points for Band 1 and Band 2 are derived in Band 3 using the transfer strategy based on the affine transformation relationship between adjacent bands. Thus, Band 3 can be regarded as being adjacent to Band 1. The registration of Band 3 can then be performed using the differential correction method based on dense Delaunay triangulation. The coordinates of matching points in Band 1 and Band 2 can be calculated for Band N, which allows for the registration of Band N.
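Steps (1)-(4) of the transfer strategy amount to locating each Band-1/Band-2 matching point inside triangulation D2 and applying that triangle's affine model. A pure-NumPy sketch, with hypothetical names (`transfer_points`, `barycentric_inside`) and the triangulation supplied as explicit triangle vertex lists:

```python
import numpy as np

def barycentric_inside(tri, pt, eps=1e-9):
    # Point-in-triangle test via barycentric coordinates: all weights >= 0.
    A = np.vstack([np.asarray(tri, float).T, np.ones(3)])
    lam = np.linalg.solve(A, np.array([pt[0], pt[1], 1.0]))
    return bool(np.all(lam >= -eps))

def transfer_points(points_b2, tris_b2, tris_b3):
    """Transfer strategy sketch: each matching point known in Band 2 is
    located in triangulation D2 and mapped into Band 3 with that
    triangle's affine model, so Band 3 becomes 'adjacent' to Band 1."""
    out = []
    for p in points_b2:
        for t2, t3 in zip(tris_b2, tris_b3):
            if barycentric_inside(t2, p):
                M = np.hstack([np.asarray(t2, float), np.ones((3, 1))])
                coef = np.linalg.solve(M, np.asarray(t3, float))  # per-triangle affine
                out.append(np.array([p[0], p[1], 1.0]) @ coef)
                break
    return np.array(out)
```

Chaining this step band by band is what lets the coordinates of the Band 1 matching points be propagated to Band N.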
The constructed Delaunay triangulations will differ due to the different positions and numbers of feature points extracted for each band. The position of a calculated matching point in Band 1 can fall outside the Delaunay triangulation in Band M (e.g., the red dotted circles in Figure 8). In this case, the center of gravity of each triangle was calculated to find the triangle closest to the matching point (the red dotted triangle in Figure 8), and the affine transformation relationship established by that closest triangle is used for resampling pixels. To reduce registration errors that might be caused by mixed pixels, a bilinear resampling operation was performed when applying the affine transformation.


Coordinate Deviation of Homonymous Points
The coordinate deviation of homonymous matching points was calculated to estimate the registration accuracy between adjacent bands using the following equations:

m_X = sqrt(Σ (X_b − X_w)² / N),
m_Y = sqrt(Σ (Y_b − Y_w)² / N),
m_A = sqrt(m_X² + m_Y²),   (16)

where (X_b, Y_b) are the coordinates of a matching point in the reference band; (X_w, Y_w) are the coordinates of the matching point in the band to be registered; m_X, m_Y, and m_A are the accuracy values in the X and Y directions and the total accuracy of the registered image, respectively; and N is the number of matching points used in the accuracy assessment. Overall, 20 evenly distributed homonymous points were manually selected for the accuracy assessment of non-adjacent bands. Equation (16) was used to evaluate the registration accuracy for non-adjacent bands.
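Equation (16) is straightforward to compute from two arrays of homonymous point coordinates; a minimal sketch:

```python
import numpy as np

def registration_accuracy(ref_pts, reg_pts):
    """Equation (16): RMS deviation of homonymous points in X, Y,
    and the total accuracy m_A."""
    d = np.asarray(ref_pts, float) - np.asarray(reg_pts, float)
    m_x = np.sqrt(np.mean(d[:, 0] ** 2))
    m_y = np.sqrt(np.mean(d[:, 1] ** 2))
    m_a = np.sqrt(m_x ** 2 + m_y ** 2)
    return m_x, m_y, m_a
```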

Similarity Measures
A block strategy was used to generate the similarity measures for two bands. First, the two bands were divided into blocks, whose sizes were determined based on the size of the image. Generally, the block size can be set as 50 × 50, 100 × 100, or 200 × 200 pixels, and it was set as 100 × 100 pixels in this study. The similarity measures for corresponding blocks were then calculated to evaluate the distribution of accuracy in the entire image. In this study, two commonly used similarity measures, structural similarity index (SSIM) and cosine similarity (CS), were selected for the accuracy assessment of image registration.
The structural similarity measure is an objective image quality evaluation standard that is in line with the characteristics of the human visual system [34]. This indicator measures image similarity in three aspects: brightness (mean), contrast (variance), and structure. The index is defined as:

SSIM(P, Q) = [l(P, Q)]^α [c(P, Q)]^β [s(P, Q)]^γ,   (17)

with

l(P, Q) = (2 μ_P μ_Q + c1) / (μ_P² + μ_Q² + c1),
c(P, Q) = (2 σ_P σ_Q + c2) / (σ_P² + σ_Q² + c2),
s(P, Q) = (σ_PQ + c3) / (σ_P σ_Q + c3),

where P and Q are the reference image and the image to be registered, respectively; μ_P, μ_Q, σ_P², σ_Q², and σ_PQ represent the means, variances, and covariance of images P and Q; c1, c2, and c3 are small positive constants used to avoid instability from near-zero denominators; and the parameters α, β, and γ are all greater than zero and adjust the proportions of the three components in the model. The value range of SSIM is [0, 1], with larger values indicating higher image similarity. When α = β = γ = 1 and c3 = c2/2, the index simplifies to:

SSIM(P, Q) = (2 μ_P μ_Q + c1)(2 σ_PQ + c2) / ((μ_P² + μ_Q² + c1)(σ_P² + σ_Q² + c2)).

Cosine similarity measures the similarity between two vectors as the cosine of the angle in their inner product space, and is suitable for comparing vectors of any dimension [35]. A digital image can be regarded as a vector in a high-dimensional feature space, which makes it a natural application domain for the cosine similarity algorithm. The satellite image is treated as a vector, and the cosine distance between the vectors is calculated to characterize the similarity between two images (Equation (23)):

CS = Σ X_i Y_i / (sqrt(Σ X_i²) sqrt(Σ Y_i²)),   (23)

where X_i and Y_i represent the image vectors of the reference and sensed images, respectively. The normalized cosine similarity values range from 0 to 1, with larger values representing higher image similarity.
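The block strategy with the simplified SSIM (α = β = γ = 1, c3 = c2/2) and cosine similarity can be sketched in NumPy as follows; the constants c1 and c2 and the default block size are illustrative assumptions:

```python
import numpy as np

def ssim(p, q, c1=1e-4, c2=9e-4):
    # Simplified SSIM (alpha = beta = gamma = 1, c3 = c2/2) on one block.
    mp, mq = p.mean(), q.mean()
    vp, vq = p.var(), q.var()
    cov = ((p - mp) * (q - mq)).mean()
    return ((2 * mp * mq + c1) * (2 * cov + c2)) / \
           ((mp**2 + mq**2 + c1) * (vp + vq + c2))

def cosine_similarity(p, q):
    # Treat each block as a vector and compute the normalized inner product.
    x, y = p.ravel(), q.ravel()
    return float(x @ y / (np.linalg.norm(x) * np.linalg.norm(y)))

def blockwise_similarity(img_ref, img_reg, block=100):
    # Divide both bands into block x block tiles and score each tile pair.
    h, w = img_ref.shape
    scores = []
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            a = img_ref[i:i + block, j:j + block]
            b = img_reg[i:i + block, j:j + block]
            scores.append((ssim(a, b), cosine_similarity(a, b)))
    return scores
```

For identical blocks both measures evaluate to 1, and misregistered blocks pull both scores down, which is what makes the per-block map useful for visualizing the spatial distribution of accuracy.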

Experiment Results
In the experiment, B15 (686 nm) was selected as the reference image to perform registration for all other bands. Several of the most common bands were selected for registration accuracy analysis due to the large number of spectral bands. These bands included: B03 (490 nm) and B14 (670 nm), B08 (560 nm) and B28 (880 nm), and B10 (596 nm) and B30 (910 nm). The two bands were overlaid in the form of a checkerboard to verify the registration algorithm. Figure 9 displays the registration results using the proposed method. Visually, misalignment was not identified for rivers, roads, fields, and other land cover features between bands. The overall registration result was satisfactory.


Comparison with IOEM Method
The proposed method was compared with the IOEM method [31]. The IOEM method uses the external and internal parameter error from the hyperspectral camera for on-orbit calibration. Moreover, the IOEM method uses the established internal orientation element model to complete the registration of all spectral bands. Figure 11 illustrates the registration results for the IOEM method and the proposed method for the dataset of OHS-Guangxi, China. The true color composite images produced by the IOEM method exhibited an obvious ghost-shadow phenomenon. Conversely, the true color composite imagery produced by the proposed method clearly showed ground objects with sharper edges and details, and thus featured a better registration accuracy. The homonymous points were extracted from the adjacent spectral bands after registration. The coordinate displacement between homonymous points was then calculated using Equation (16) to estimate the registration accuracy between adjacent spectral bands. Figure 12 shows the registration accuracy of adjacent bands when using the two methods. The maximum error produced by the IOEM method was 0.65 pixels and the minimum error was 0.30 pixels. However, the maximum error produced by the proposed method was 0.57 pixels and the minimum error was 0.23 pixels. Different accuracies were achieved for the three experimental images. Relatively high errors were revealed in the image covering Xinjiang, China. Since there are large deserts and fewer texture details in the image of Xinjiang, fewer feature points were detected than the other two images. A relatively small number of correctly matched keypoints may lead to a decrease in registration accuracy. The homonymous points were extracted from the adjacent spectral bands after registration. The coordinate displacement between homonymous points was then calculated using Equation (16) to estimate the registration accuracy between adjacent spectral bands. 
In total, 20 uniformly distributed homonymous points were manually selected to assess the registration accuracy between the non-adjacent band pairs B03–B14, B08–B28, and B10–B30, respectively. The coordinate deviation between homonymous points was calculated using Equation (16). The registration error between non-adjacent bands was greater than one pixel for the IOEM method.
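The accuracy assessment above can be sketched in code. Equation (16) is not reproduced in this section, so the sketch below assumes it is the root-mean-square of the Euclidean coordinate displacements between homonymous points; the point coordinates are synthetic and purely illustrative.

```python
import numpy as np

def registration_rmse(pts_ref, pts_reg):
    """RMSE of coordinate displacements between homonymous points.

    pts_ref, pts_reg: (N, 2) arrays of (x, y) image coordinates of the
    same ground points in the reference band and the registered band.
    """
    pts_ref = np.asarray(pts_ref, dtype=float)
    pts_reg = np.asarray(pts_reg, dtype=float)
    # Euclidean displacement of each homonymous point pair
    d = np.linalg.norm(pts_ref - pts_reg, axis=1)
    return float(np.sqrt(np.mean(d ** 2)))

# Illustrative use: 20 homonymous points with small (sub-pixel) residuals
rng = np.random.default_rng(0)
ref = rng.uniform(0, 5000, size=(20, 2))
reg = ref + rng.normal(0, 0.3, size=(20, 2))
print(registration_rmse(ref, reg))
```

With residuals drawn at a standard deviation of 0.3 pixels, the reported RMSE stays well below one pixel, matching the sub-pixel accuracies reported for the proposed method.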
In contrast, the registration accuracy between non-adjacent bands reached the sub-pixel level with the proposed method, as shown in Table 3; its lower RMSE values indicate higher registration accuracy than the IOEM method.

In this study, the structural similarity measure and the cosine similarity measure were used to evaluate the similarity between registered spectral bands, calculated over windows of 100 × 100 pixels. As shown in Figure 13, the structural and cosine similarity measures between B03 and B14, B08 and B28, and B10 and B30 were all greater than 0.80 for the three experimental datasets. The distribution of structural similarity values is more concentrated in the Guangxi image than in the other two experimental sites, while the cosine similarity values are more concentrated in the Xinjiang image (Figure 13). This is likely because the two indices measure image similarity in different aspects; therefore, the similarity measures should be used conjunctively to gain a comprehensive understanding of the registration results.
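The two similarity measures can be sketched as follows. This is a minimal illustration, not the paper's implementation: the SSIM here uses global patch statistics (the single-window form of Wang et al.'s index), whereas practical SSIM implementations typically use local Gaussian-weighted windows, and the band patches are synthetic.

```python
import numpy as np

def ssim_global(x, y, dynamic_range=255.0):
    """Simplified SSIM computed from global statistics of one patch pair."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    c1 = (0.01 * dynamic_range) ** 2   # stabilizing constants
    c2 = (0.03 * dynamic_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

def cosine_similarity(x, y):
    """Cosine of the angle between two patches viewed as vectors."""
    x = np.asarray(x, dtype=float).ravel()
    y = np.asarray(y, dtype=float).ravel()
    return float(x @ y / (np.linalg.norm(x) * np.linalg.norm(y)))

# Illustrative 100 x 100 pixel windows from two well-registered bands:
# band_b is a scaled, noisy version of band_a
rng = np.random.default_rng(1)
band_a = rng.uniform(0, 255, size=(100, 100))
band_b = 0.9 * band_a + rng.normal(0, 5, size=(100, 100))
print(ssim_global(band_a, band_b), cosine_similarity(band_a, band_b))
```

Because SSIM responds to luminance, contrast, and structure while cosine similarity only measures the angle between the patches as vectors, the two indices can rank the same image pair differently, which is consistent with the differing concentration of the two indices across the Guangxi and Xinjiang images.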
The registration results can help facilitate the application of Zhuhai-1 hyperspectral imagery in various fields. Hyperspectral imagery captures reflectance at hundreds of wavelengths and enables analysis of the spectra of materials on the Earth's surface. Registered Zhuhai-1 hyperspectral imagery can be used for urban land cover mapping, mineral mapping, and forest and plant species identification [36]. The full-spectrum registration approach can also be compared with other approaches (e.g., deep-learning-based methods) for further improvement [37,38]. With the capability of capturing increasingly complex image characteristics, deep convolutional neural networks have been applied to feature extraction for image registration and have outperformed traditional feature detectors with fewer outliers [39,40]. The proposed method can be adjusted and applied to hyperspectral images acquired by other satellite or airborne hyperspectral sensors. It also has great potential for the registration of satellite images captured at different times, from different viewpoints, or by different sensors, facilitating applications such as change detection and urban sprawl monitoring [41].

Conclusions
In this paper, a method was developed to address the challenge of accurately registering the spectral bands of Zhuhai-1 satellite hyperspectral imagery. The proposed full-spectrum registration method combines a transfer strategy for the affine transformation relationship between adjacent bands with differential correction based on dense Delaunay triangulation. The affine transformation relationships between adjacent bands are used to establish the corresponding relationships between non-adjacent bands, enabling the registration of non-adjacent bands with large gray-level variations. Moreover, a dense Delaunay triangulation is constructed to establish multiple affine transformation relations for triangular differential correction, eliminating the influence on registration accuracy of local distortion caused by terrain undulation. Finally, experiments were conducted using three Zhuhai-1 satellite hyperspectral images covering different topographies.
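The transfer strategy for non-adjacent bands can be sketched as the composition of adjacent-band affine transformations in homogeneous coordinates. This is a simplified single-transform illustration (in the proposed method, one such relation is transferred per Delaunay triangle), and the band indices and matrices below are hypothetical.

```python
import numpy as np

def to_h(a):
    """Lift a 2x3 affine matrix [A | t] to its 3x3 homogeneous form."""
    return np.vstack([a, [0.0, 0.0, 1.0]])

def transfer_affine(adjacent_affines, i, j):
    """Compose A(i->i+1), ..., A(j-1->j) to obtain A(i->j) as a 2x3 matrix.

    adjacent_affines[k] is the 2x3 affine mapping band k coordinates to
    band k+1 coordinates, e.g. estimated from SIFT matches after RANSAC.
    """
    h = np.eye(3)
    for k in range(i, j):
        # left-multiply so transforms apply in band order i, i+1, ..., j-1
        h = to_h(adjacent_affines[k]) @ h
    return h[:2, :]

# Hypothetical adjacent-band affines (small inter-band shifts)
affines = [np.array([[1.0, 0.0, 0.5], [0.0, 1.0, -0.2]]),
           np.array([[1.0, 0.0, 0.4], [0.0, 1.0, 0.1]]),
           np.array([[1.0, 0.0, -0.1], [0.0, 1.0, 0.3]])]

a03 = transfer_affine(affines, 0, 3)      # band 0 -> band 3 directly
pt = np.array([100.0, 200.0, 1.0])
print(a03 @ pt)   # translations accumulate to approximately (100.8, 200.2)
```

Composing per-triangle affines in this way is what allows bands with very different spectral responses to be aligned even when direct feature matching between them fails.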
The experiment showed that the proposed method could obtain accurate and robust registration results. The registration accuracy between adjacent spectral bands ranged from 0.2 to 0.6 pixels. The registration accuracy for non-adjacent bands reached a sub-pixel level, and the resulting structural and cosine similarity measures were both greater than 0.80.