Planar Block Adjustment for China’s Land Regions with LuoJia1-01 Nighttime Light Imagery

The Luojia1-01 satellite provides high-resolution, high-sensitivity nighttime light data at a resolution of 130 m. To effectively use Luojia1-01 nighttime light data for global applications, the problems of relative and absolute positioning accuracy must be solved. This paper proposes a high-accuracy regional geometric processing method for nighttime light imagery. We use a nighttime light image matching algorithm to obtain tie points, which are then used in a planar block adjustment together with ground control points. Next, all images are orthorectified. Finally, we obtain the nighttime light map of China by mosaicking all of the nighttime light orthoimages. According to the experimental results for 275 Luojia1-01 images, the root mean square error of the tie points is 0.983 pixels and that of independent checkpoints is 195.491 m (less than 1.5 pixels) after planar block adjustment. The experimental results prove the validity and feasibility of the proposed method.


Introduction
The Luojia 1-01 satellite was launched at 12:13 on 2 June 2018 [1]. It is the first Chinese professional nighttime remote-sensing satellite and was jointly developed by Wuhan University and Chang Guang Satellite Technology Co., Ltd. The spatial resolution of the satellite (130 m) is superior to those of the Defense Meteorological Satellite Program's Operational Linescan System (DMSP/OLS) (2.7 km) and the Suomi National Polar-Orbiting Partnership Visible Infrared Imaging Radiometer Suite (NPP/VIIRS) (740 m). Luojia 1-01 is located in a 645-km sun-synchronous orbit, and it provides high radiometric quantization (15 bits) with a swath of 250 km. It can complete global nighttime light image acquisition within 15 days. In addition, the nighttime light data are continuously updated by the Luojia 1-01 satellite, and all imagery is provided to users for free. Users can download data after registration at http://www.hbeos.org.cn/.

Image Selection
The July to October time period was selected based on the completeness of the archive during this period; the overlap of one-track Luojia1-01 data was about 80%. To reduce the data volume, data extraction was carried out so that the overlap was reduced to about 20%. At the same time, cloud-contaminated data were excluded by manual screening to ensure that the nighttime light images were cloud-free. As shown in Figure 2a, a cloud-contaminated image is blurred, so the ground lighting information cannot be obtained accurately.
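The thinning step above reduces to simple arithmetic: if adjacent along-track scenes overlap by 80%, each scene advances by 20% of a scene length, so keeping every fourth scene leaves roughly 20% overlap. A minimal sketch (the scene IDs and helper name are illustrative assumptions, not part of the actual processing chain):

```python
def thin_scenes(scenes, overlap_in=0.8, overlap_out=0.2):
    """Keep every k-th scene so along-track overlap drops
    from overlap_in to roughly overlap_out."""
    step_in = 1.0 - overlap_in      # advance per scene (fraction of a scene)
    step_out = 1.0 - overlap_out    # desired advance per kept scene
    k = round(step_out / step_in)   # stride between kept scenes
    return scenes[::k]

track = [f"scene_{i:03d}" for i in range(12)]
kept = thin_scenes(track)           # keeps scene_000, scene_004, scene_008
```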

Removal of Stray Light
From the perspective of satellite design, Luojia1-01 has a consistent ascending-node overpass time of 22:30 Beijing time. This time slot permits observation of the near-maximum extent of lighting in regions where lights are gradually turned off as midnight approaches. In addition, according to the characteristics of the satellite platform orbit and the incidence behavior of stray light, the satellite was designed with a special-shaped baffle to further eliminate stray light from the sun [40].
Using optical satellite products acquired during the same period as guidance, we excluded radiation anomalies caused by biomass burning. Non-electric light information generated by the full moon was also compared against this guidance product and excluded.

Nighttime Light Image Matching
After data preprocessing, the acquired cloud-free Luojia1-01 nighttime light imagery was automatically matched to obtain TPs. The following is the image matching process.

Extract Feature Points
In this study, the scale-invariant feature transform (SIFT) operator based on a multi-scale space was used to match the nighttime light imagery. The Gaussian function and the original image were convolved to obtain the scale space of the image, and the Gaussian pyramid was constructed based on the following equation:

G(x, y, σ) = (1 / (2πσ²)) · exp(−((x − m/2)² + (y − n/2)²) / (2σ²)),  (1)

where G(x, y, σ) represents the Gaussian function with scale σ; m and n are the dimensions of the Gaussian template; and (x, y) are the coordinates of the image point. The smaller the value of σ, the more obvious the detail features of the image.

L(x, y, σ) = G(x, y, σ) * I(x, y),  (2)

where L(x, y, σ) denotes the scale space with scale parameter σ, I(x, y) represents the original image, and * denotes the convolution operation.
According to Equations (1) and (2), a difference operation was performed at adjacent scales to obtain the difference-of-Gaussian (DoG) pyramid:

D(x, y, σ) = (G(x, y, k_i σ) − G(x, y, k_j σ)) * I(x, y) = L(x, y, k_i σ) − L(x, y, k_j σ),  (3)

where k_i and k_j denote different layers of the Gaussian pyramid, with i and j smaller than the number of Gaussian layers; the DoG pyramid is thus constructed on the basis of the Gaussian pyramid. A pixel-by-pixel comparison was then performed on each DoG image, and the extreme points obtained by the comparison were taken as the candidate feature points.
After the feature points were acquired, positioning and orientation processing was performed to make the feature points invariant to translation, scaling, and rotation. Next, a descriptor of each feature point was established to ensure that the feature points were robust to changes in illumination and viewing angle.
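The DoG extrema detection of Equations (1)-(3) can be sketched as follows, using scipy's Gaussian filter in place of an explicit octave pyramid. The image, sigma values, scale factor k, and contrast threshold are illustrative assumptions, not the paper's parameters:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_extrema(image, sigma0=1.6, k=2 ** 0.5, n_scales=4, thresh=0.01):
    """Return (row, col, scale_index) of local extrema in the DoG stack."""
    sigmas = [sigma0 * k ** i for i in range(n_scales)]
    L = np.stack([gaussian_filter(image.astype(float), s) for s in sigmas])
    D = L[1:] - L[:-1]                       # DoG: L(k_i*sigma) - L(k_j*sigma)
    pts = []
    for s in range(1, D.shape[0] - 1):       # compare with 26 neighbours
        for r in range(1, D.shape[1] - 1):
            for c in range(1, D.shape[2] - 1):
                cube = D[s - 1:s + 2, r - 1:r + 2, c - 1:c + 2]
                v = D[s, r, c]
                if abs(v) > thresh and (v == cube.max() or v == cube.min()):
                    pts.append((r, c, s))
    return pts

# A bright "city light" blob as a toy nighttime image.
img = np.zeros((32, 32))
img[15:18, 15:18] = 1.0
points = dog_extrema(img)
```

In practice the candidate points would then pass through the orientation assignment and descriptor stages described above.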

Acquisition of Matching Point Pairs
The obtained feature points contained many erroneous matches, which needed to be deleted. In this study, a two-layer image feature point matching algorithm was adopted, consisting of rough matching and precise matching. Rough matching determines candidate matching point pairs using the Euclidean distance between the reference image and the registered image. Precise matching then eliminates erroneous pairs from the rough matching using the random sample consensus (RANSAC) algorithm.
Considering that the conventional RANSAC algorithm [41] requires a large number of iterations, the optimal value may not be obtained if only an iteration threshold is set. Therefore, the distance interval α = [MIN, θ·MAX] was selected as the screening threshold for matching point pairs, where MAX and MIN are the maximum and minimum matching distances between the reference image and the registered image. Here, θ is a scaling factor between 0 and 1; the number of matching point pairs is positively correlated with θ, and the false matching rate is negatively correlated with θ. After experimental verification, the maximum matching distance based on the mismatching threshold was identified; hence, the maximum matching distance α = θ·MAX was used to eliminate mismatched pairs. Once the distance α was determined, the rough-matching threshold was fixed and the matching point pairs were selected based on distance; finally, precise matching was performed with the RANSAC algorithm.
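The two-layer scheme above can be sketched as follows: rough matching by nearest-neighbour descriptor distance with the θ·MAX cutoff, then a RANSAC stage that here fits a simple 2-D translation as a stand-in for the full geometric model. The descriptor arrays, θ value, and tolerance are illustrative assumptions:

```python
import numpy as np

def rough_match(desc_ref, desc_reg, theta=0.6):
    """Nearest-neighbour pairs whose distance is at most theta * MAX."""
    d = np.linalg.norm(desc_ref[:, None, :] - desc_reg[None, :, :], axis=2)
    nn = d.argmin(axis=1)                     # best candidate per reference point
    dist = d[np.arange(len(desc_ref)), nn]
    alpha = theta * dist.max()                # alpha = theta * MAX
    keep = dist <= alpha
    return np.flatnonzero(keep), nn[keep]

def ransac_translation(p_ref, p_reg, n_iter=200, tol=2.0, seed=0):
    """Estimate a translation robust to mismatches; return the inlier mask."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(p_ref), dtype=bool)
    for _ in range(n_iter):
        i = rng.integers(len(p_ref))          # one pair defines a translation
        t = p_reg[i] - p_ref[i]
        resid = np.linalg.norm(p_reg - (p_ref + t), axis=1)
        inliers = resid < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers
```

Pairs surviving both layers would then serve as TPs for the block adjustment.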

Planar Block Adjustment
When the intersection angle between satellite images participating in the block adjustment is small (less than 10°), the satellite images are generally known to have weak convergence geometry between adjacent images [36]. Therefore, the traditional stereo block adjustment method cannot be applied directly.
In response to this problem, we propose the planar block adjustment method, which calculates only the orientation compensation parameters of the satellite images and the ground planar coordinates of the TPs. The elevation values of the TPs are interpolated from a digital elevation model (DEM). This method ensures the stability and accuracy of the adjustment solution. The detailed planar block adjustment technical flowchart is presented in Figure 3. In Figure 3, GCPs are the control point files, TPs are the tie points acquired by nighttime light image matching, and rational polynomial coefficients (RPCs) represent the orientation parameter files. GCPs, TPs, and RPCs are used as input data. The final adjustment results are the compensation parameters, which are given as an output file. The following sections provide a detailed description of the planar block adjustment.

Mathematical Model of Planar Block Adjustment
The rational function model (RFM) was chosen as the geometric model of the planar block adjustment. The RFM establishes the relationship between ground coordinates (latitude, longitude, height) and the corresponding pixel coordinates (line, sample) [42]. The general form is:

R_n = P1(X_n, Y_n, Z_n) / P2(X_n, Y_n, Z_n),
C_n = P3(X_n, Y_n, Z_n) / P4(X_n, Y_n, Z_n),  (4)

where (R_n, C_n) are the measured line and sample coordinates of the nth image point, corresponding to the ground point with object space coordinates (X_n, Y_n, Z_n), which are the variables of the polynomials P_i (i = 1, 2, 3, 4) whose degree should not exceed three. For example, the form of the polynomial P1 is:

P1 = a_0 + a_1·Z + a_2·Y + a_3·X + a_4·Z·Y + a_5·Z·X + a_6·Y·X + a_7·Z² + a_8·Y² + a_9·X² + a_10·Z·Y·X + a_11·Z²·Y + a_12·Z²·X + a_13·Y²·Z + a_14·Y²·X + a_15·Z·X² + a_16·Y·X² + a_17·Z³ + a_18·Y³ + a_19·X³,

where the a_ij (i = 1, 2, 3, 4; j = 0, 1, ..., 19) are the RPCs. Based on Equation (4), we can rewrite it to obtain the following formula:

R_n + ΔR = P1(X_n, Y_n, Z_n) / P2(X_n, Y_n, Z_n),
C_n + ΔC = P3(X_n, Y_n, Z_n) / P4(X_n, Y_n, Z_n).  (5)

The affine transformation model is widely used for systematic error compensation in image space:

ΔR = e_0 + e_1·C_n + e_2·R_n,
ΔC = f_0 + f_1·C_n + f_2·R_n,  (6)

where (e_0, e_1, e_2) and (f_0, f_1, f_2) are the parameters of the affine transformation, and ΔR and ΔC are the image compensation terms. A minimum of three GCPs is needed to solve for the six unknowns. The two offset parameters, e_0 and f_0, which compensate for most of the error, can be solved using only one GCP; the offset and drift parameters can be solved simultaneously using two GCPs.
According to Equations (4)-(6), the parameters of the affine transformation are set as unknowns. Based on the RFM, the image orientation error equation for ground control points is written in matrix form as:

V = A·t + B·x − l,

where V = [v_R, v_C]^T is the residual vector of the observed sample and line coordinates of the image point, t = [Δe_0, Δe_1, Δe_2, Δf_0, Δf_1, Δf_2]^T is the correction vector of the system error compensation parameters, x = [ΔX, ΔY, ΔZ]^T is the correction vector of the ground point coordinates, l is the discrepancy vector, and A and B are the coefficient matrices of partial derivatives with respect to the unknowns.
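A sketch of projecting a ground point through an RFM with affine compensation follows. The 20-term polynomial ordering and the toy coefficient values are illustrative assumptions, not real Luojia1-01 RPCs, and normalization of coordinates is omitted for brevity:

```python
import numpy as np

def poly20(coef, X, Y, Z):
    """Third-degree RPC polynomial with coefficients a0..a19."""
    terms = np.array([1, Z, Y, X, Z*Y, Z*X, Y*X, Z*Z, Y*Y, X*X,
                      Z*Y*X, Z*Z*Y, Z*Z*X, Y*Y*Z, Y*Y*X, Z*X*X,
                      Y*X*X, Z**3, Y**3, X**3], dtype=float)
    return float(coef @ terms)

def rfm_project(rpc, X, Y, Z, e=(0.0, 0.0, 0.0), f=(0.0, 0.0, 0.0)):
    """Return compensated (line, sample) for ground point (X, Y, Z)."""
    R = poly20(rpc["p1"], X, Y, Z) / poly20(rpc["p2"], X, Y, Z)
    C = poly20(rpc["p3"], X, Y, Z) / poly20(rpc["p4"], X, Y, Z)
    # affine compensation, evaluated at the uncompensated (R, C):
    # Delta_R = e0 + e1*C + e2*R,  Delta_C = f0 + f1*C + f2*R
    dR = e[0] + e[1] * C + e[2] * R
    dC = f[0] + f[1] * C + f[2] * R
    return R - dR, C - dC

# Toy RPC: line = 2*Y, sample = 3*X (denominators identically 1).
zeros = np.zeros(20)
rpc = {"p1": zeros.copy(), "p2": zeros.copy(),
       "p3": zeros.copy(), "p4": zeros.copy()}
rpc["p1"][2] = 2.0
rpc["p3"][3] = 3.0
rpc["p2"][0] = 1.0
rpc["p4"][0] = 1.0
```

With all compensation parameters at zero the function reduces to the plain RFM of Equation (4); nonzero e_0 or f_0 shifts the whole image, which is why a single GCP already removes most of the error.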
Because the elevation is obtained by interpolating the DEM, all height-related terms are set to 0 and the elevation corrections are omitted from the following formulas. The coefficient matrix B then takes the specific planar form:

B = [[∂R/∂X, ∂R/∂Y], [∂C/∂X, ∂C/∂Y]],

with the partial derivatives evaluated in the regularized (normalized) coordinate system, where (R_s, C_s, X_s, Y_s) are the regularized proportionality coefficients [43] used to scale the line, sample, and ground coordinates, (X_n, Y_n) are the planar object space coordinates of the nth ground point, and P_i (i = 1, 2, 3, 4) are the polynomials of Equation (4). The discrepancy vector l = [R_n − R', C_n − C']^T, where (R', C') are the line and sample coordinates of the image point computed from the current approximate values of the unknowns.
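The planar coefficient matrix B can be built numerically by differentiating any ground-to-image projection with respect to X and Y only, since the height terms are zeroed (Z comes from the DEM). `project` below stands for an arbitrary RFM projection function; the linear toy model is an assumption for illustration:

```python
import numpy as np

def planar_jacobian(project, X, Y, Z, h=1e-6):
    """2x2 matrix B = [[dR/dX, dR/dY], [dC/dX, dC/dY]] at (X, Y, Z)."""
    R0, C0 = project(X, Y, Z)
    Rx, Cx = project(X + h, Y, Z)     # forward difference in X
    Ry, Cy = project(X, Y + h, Z)     # forward difference in Y
    return np.array([[(Rx - R0) / h, (Ry - R0) / h],
                     [(Cx - C0) / h, (Cy - C0) / h]])

# Toy projection: line = 2*Y + 0.1*Z, sample = 3*X  ->  B = [[0, 2], [3, 0]]
toy = lambda X, Y, Z: (2.0 * Y + 0.1 * Z, 3.0 * X)
B = planar_jacobian(toy, 1.0, 2.0, 100.0)
```

Note that Z still enters the projection; it is simply held fixed during differentiation, mirroring the zeroed height columns of the error equation.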
After each adjustment iteration, new ground coordinates of the TPs are obtained. At this point, the DEM is added as an elevation constraint: the elevation value Z of each ground point is interpolated from the DEM, and this Z is fed back into the adjustment system together with the planar coordinates (X, Y) for the next iteration, until the adjustment result converges.
Referring to Figure 4, S1 and S2 are a pair of TPs of the same name point. In the adjustment calculation, we took the average elevation value Z0 corresponding to the two points as the initial value and obtained the elevation compensation after adjustment; the initial elevation Z0 was then compensated to obtain Z1. Due to the weak convergence geometry of Luojia 1-01 nighttime light imagery, the elevation error in Z1 was amplified and the adjustment could not converge. Therefore, a new elevation value Z'1 was obtained by DEM interpolation to replace Z1 in the next adjustment iteration. In this way, the new interpolated elevation value Z'2 was obtained, and the iterative calculation continued until the adjustment converged.
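The DEM constraint loop can be sketched as follows: after each planar update, Z is re-queried from the DEM by bilinear interpolation instead of trusting the adjusted height. The toy DEM grid (1-unit spacing) and the stand-in planar update are illustrative assumptions:

```python
import numpy as np

def dem_interp(dem, x, y):
    """Bilinear interpolation on a DEM grid with 1-unit spacing."""
    i, j = int(np.floor(y)), int(np.floor(x))
    dy, dx = y - i, x - j
    return ((1 - dy) * (1 - dx) * dem[i, j] + (1 - dy) * dx * dem[i, j + 1]
            + dy * (1 - dx) * dem[i + 1, j] + dy * dx * dem[i + 1, j + 1])

def constrained_iteration(dem, x, y, update, n_iter=5):
    """Alternate a planar update with DEM re-interpolation of Z."""
    z = dem_interp(dem, x, y)
    for _ in range(n_iter):
        x, y = update(x, y, z)          # planar block-adjustment step
        z = dem_interp(dem, x, y)       # Z'_k replaces the adjusted height
    return x, y, z

# Tilted-plane DEM: height equals the row index, i.e. z = y.
dem = np.arange(4, dtype=float)[:, None] * np.ones((4, 4))
```

Because Z is never a free unknown, the weak convergence geometry cannot amplify height errors between iterations.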

Solution of Large-Scale Adjustment Equation
We normalized the error equation and limited the data to [−1, 1] to ensure the convenience and stability of the solution. After that, we solved the equation to obtain the correction parameters and updated the image affine transformation coefficients. The equation is solved with an iterative solver, the spectral correction iterative method, to estimate the correction parameters and the ground point coordinates. The specific process is as follows.
Two error equations can be established for each image point. When the number of image points is sufficient, the normal equation can be formed according to the principle of least squares adjustment:

N·X = W,

where N is the normal matrix, X is the vector of unknowns (the correction parameters t and the ground point corrections x), and W is the constant vector. Adding X to both sides of the above equation gives:

(N + E)·X = W + X,

where E is the identity matrix of the same dimensions as N. Because both sides of the equation contain the unknown vector X, it can only be solved by an iterative method, whose iteration formula is:

X^(k+1) = (N + E)^(−1)·(W + X^(k)).

Next, X^(0) = 0 is substituted into the above equation and X^(1) is solved; then X^(2), ..., X^(k) are solved iteratively. The iteration terminates when the difference between the corrections of two successive cycles is less than the specified threshold, at which point we obtain the corrected parameters t and the corrections x of the ground point coordinates. The algorithm continuously updates the ground point coordinates and image orientation parameters through this iterative process until the translation parameters e_0 and f_0 in the orientation parameters of each image are smaller than the set threshold, finishing the iterative adjustment. The RPC model is then refined using the affine transformation compensation coefficients.
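The spectral correction iteration above can be sketched in a few lines: instead of inverting a possibly ill-conditioned N directly, iterate X^(k+1) = (N + E)^(−1)(W + X^(k)) from X^(0) = 0. The small positive-definite test system is an illustrative assumption:

```python
import numpy as np

def spectral_correction_solve(N, W, tol=1e-10, max_iter=10000):
    """Iteratively solve N @ X = W via (N + E) @ X_(k+1) = W + X_k."""
    E = np.eye(N.shape[0])
    M_inv = np.linalg.inv(N + E)        # factor once, reuse every cycle
    X = np.zeros(N.shape[0])            # X^(0) = 0
    for _ in range(max_iter):
        X_new = M_inv @ (W + X)
        if np.linalg.norm(X_new - X) < tol:   # correction-difference test
            return X_new
        X = X_new
    return X

# Toy normal equation built as A^T A x = A^T b (positive definite here).
A = np.array([[2.0, 1.0], [1.0, 3.0], [0.0, 1.0]])
b = np.array([3.0, 4.0, 1.0])
N, W = A.T @ A, A.T @ b
X = spectral_correction_solve(N, W)
```

For a positive-definite N the iteration matrix (N + E)^(−1) has spectral radius below 1, so the fixed point is exactly the least squares solution of N·X = W.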

Experimental Design
To carry out the planar block adjustment with LuoJia1-01 nighttime light imagery, related experimental designs and data acquisition work needed to be prepared. First, the TPs of the images were obtained by automatic matching. Second, GCPs on the nighttime light images were collected. Third, the relevant parameters of the Chinese test areas were listed. The accuracies of the free network adjustment and the control network adjustment then needed to be verified, respectively.

Test Area for TPs Matching of Nighttime Light Imagery
According to the nighttime light image matching algorithm, four-track data of Luojia1-01 images covering North America, Europe, and Asia were selected randomly to perform matching experiments, and the robustness and matching accuracy of the algorithm were assessed. Figure 5 shows the distribution of the four-track data.

GCPs Selection for Nighttime Light Imagery
According to the light information in the nighttime light imagery, we selected GCPs on Google Earth. Compared with the optical satellite images in Google Earth, nighttime light imagery has fewer features, so the selection of GCPs should follow these basic principles: GCPs should not be chosen in regions with large relief; GCPs cannot be selected in unlit areas (there is little nighttime light information in the desert and mountainous areas of western China, so GCPs are relatively sparse there); and road intersections should be given priority. Figure 6 shows the selection example of one GCP.

China's Test Area Parameters
Based on the method proposed in this paper, a block adjustment experiment was performed on the Luojia1-01 nighttime light imagery of the entire Chinese land area. The relevant parameters of the specific test area are listed in Table 1.

Nighttime Light Image Matching Experiment
The purpose of nighttime light image matching is to automatically capture the TPs between images. Experimental verification was carried out according to Section 2.3.2. The matching effect of Luojia 1-01 nighttime light images is displayed in Figure 7.

The TPs were obtained with the matching algorithm, and then the block adjustment was performed to verify the matching accuracy with the four-track data. Table 2 lists the TP verification adjustment results. From Figure 7, it can be seen that the nighttime light image-matching algorithm is reliable. The experimental results of image matching show that the matching points are sufficient and their distribution is relatively uniform. At the same time, according to the experimental results for the four-track data listed in Table 2, the root mean square error (RMSE) of the TPs is better than one pixel, which meets the accuracy requirement for the nighttime light image mosaic.

Planar Block Adjustment with Luojia 1-01 Nighttime Light Imagery
The planar block adjustment test was performed on the full coverage of the Luojia1-01 image data using the test scheme described in Section 3.1.3. The GCP, ICP, TP, and image distribution maps are shown in Figures 8 and 9.

The ICPs' residuals obtained with and without GCPs for the test area can be clearly seen in Tables 3 and 4. As shown in Figures 10 and 11, the systematic errors of the Luojia1-01 images are well eliminated. The result of the free network adjustment shows obvious systematic errors at the ICPs in the residual map (Figure 10); the sizes and directions of the residuals are consistent. The result of the control network adjustment shows that the checkpoint error after the block adjustment is about 1.5 pixels, and the absolute positioning accuracy of the imagery is effectively improved from 683.386 m to 195.491 m. Orthorectification is then performed based on the refined RPCs [43]. The effect at the edges of the orthophotos is shown in Figure 12.
Through the control network adjustment, significant improvements in the relative and absolute positioning accuracy of the Luojia1-01 nighttime light imagery were achieved. Finally, the Luojia1-01 nighttime light map of China was completed; the display effect is shown in Figure 13. Unlike imagery from optical satellites, nighttime light imagery cannot capture feature and geomorphological information. However, the public can enjoy beautiful night scenes through nighttime remote-sensing images, and researchers can analyze and invert nighttime light imagery to serve various applications.
According to the experimental results, the relative positioning accuracy is better than one pixel, and the absolute positioning accuracy reaches 1.5 pixels after planar block adjustment with GCPs. The high relative positioning accuracy ensures high spatial consistency of the mosaic images, with no obvious gaps at the edges of adjacent images. High absolute positioning accuracy is equally important: time-series analysis based on multitemporal nighttime light images requires it to ensure a unified geospatial benchmark.
The TPs were obtained through the nighttime light image matching algorithm in this paper, but a few mismatched points were still found during the subsequent adjustment verification. In future studies on the geometric processing of nighttime light imagery, we will further improve the automatic matching of nighttime light images to obtain higher-accuracy TPs. In addition, the absolute positioning accuracy can be improved by introducing higher-accuracy GCPs into the planar block adjustment.

Conclusions
Through the nighttime light image geometric processing method proposed in this paper, high-accuracy geometric processing was carried out for China's land regions with LuoJia1-01 nighttime light imagery. We performed nighttime light image matching and planar block adjustment. The absolute positioning accuracy of the imagery reached about 1.5 pixels, improving from about 700 m to 195 m, while the relative positioning accuracy is better than one pixel. In the end, we completed the nighttime light map of China by mosaicking. According to the experimental results, the proposed geometric processing method effectively eliminates the systematic geometric positioning error and ensures geometric positioning consistency between adjacent images. In addition, the experimental results verify the feasibility of the planar block adjustment method based on the RPC model for nighttime light remote-sensing image processing. The large-scale adjustment method proposed in this paper achieves an efficient and robust block adjustment of massive data. Luojia 1-01 nighttime light imagery provides better visualization, radiometric quantization, and positioning accuracy, so it can be widely used for urban expansion analysis, economic evaluation, disaster assessment, and carbon emission estimation at global or regional scales.