Rational-Function-Model-Based Rigorous Bundle Adjustment for Improving the Relative Geometric Positioning Accuracy of Multiple Korea Multi-Purpose Satellite-3A Images

Recent advancements in satellite technology have significantly increased the availability of high-resolution imagery for Earth observation, enabling nearly all regions to be captured frequently throughout the year. These images have become a vast source of big data and hold immense potential for various applications, including environmental monitoring, urban planning, and disaster management. However, obtaining ground control points (GCPs) and performing geometric correction is a time-consuming and costly process, often limiting the efficient use of these images. To address this challenge, this study introduces a Rational Function Model (RFM)-based rigorous bundle adjustment method that enhances the relative geometric positioning accuracy of multiple KOMPSAT-3A images without the need for GCPs. The proposed method was tested using multiple KOMPSAT-3A images, and the results showed a significant improvement in geometric accuracy, with the mean positional error reduced from 30.02 pixels to 2.21 pixels. This enhancement ensures that images corrected by the proposed method are reliable and accurate, making it highly valuable for various geospatial applications.


Introduction
Recently, advancements in satellite technology have led to a significant increase in high-resolution imagery for Earth observation. Nearly all regions are now captured daily or on multiple occasions throughout the year. These satellite images have become a vast source of big data holding immense potential for utilization [1][2][3]. They can be processed into valuable time series data that facilitate analyses such as Earth monitoring and change detection. Such analyses provide crucial and up-to-date information on the evolution of terrain and human activities. For the accuracy and applicability of these analyses, the precision of the transformation between satellite image coordinates and the corresponding ground coordinates is crucial. Commonly used high-resolution satellite images, such as those from the Korea Multi-Purpose Satellite-3A (KOMPSAT-3A) and IKONOS, may not achieve high accuracy by direct geo-referencing due to inherent systematic errors in their initial Rational Function Model (RFM) [4][5][6]. These errors are compensated for using a bias error model [7,8], which is established by an affine transformation model based on ground control points (GCPs) as references for ground coordinates. However, obtaining high-precision GCPs is typically challenging in the practical utilization of satellite images due to time, cost, and policy constraints [9,10]. In addition, the process of identifying GCPs in satellite images and performing real-time geometric correction incurs significant time and cost compared to the actual generation of the satellite image itself. Therefore, reducing the dependency on GCPs is a highly efficient and effective approach. Relative geometric correction, which considers the transformation between satellite images without GCPs, is an essential technology in the current context, where the number of satellite images is increasing exponentially [11][12][13].
Satellite relative geometric correction commonly employs two main methods, depending on the number of processed images. One is pair-wise image registration [14][15][16], and the other is a bundle adjustment-based approach [11][17][18][19]. The pair-wise image registration method involves image matching and estimating the transformation relationship to align a target image with a reference image. This method can generally be divided into three categories: feature-based, area-based, and deep-learning-based methods [20,21]. These image registration methods require identifying corresponding points between the target and reference images to estimate a transformation relationship between the two. Various transformation models, such as similarity, affine, and homography transformations, can be used to align the images. Among these, the homography transformation is often utilized for satellite images due to its ability to accurately model the perspective changes and geometric distortions typically present in such imagery. However, when processing multiple satellite images, it is necessary to design a process that is independent of the reference image selection and free of error propagation. In contrast to pair-wise image registration, bundle adjustment-based methods iteratively adjust the given initial RPCs and re-estimate the ground coordinates [22]. They are not restricted by the number of images and have the advantage of simultaneously estimating the RPC correction parameters. Despite these advantages, there are two difficulties in bundle adjustment-based relative geometric correction. First, stable convergence of the adjustment requires user intervention to set weights for the observations and correction parameters [23]. Second, the resulting images are not generated by a simple image transformation, but by projecting onto a complex terrain after adjusting the ground coordinates [24,25]. To address similar issues, Mari et al. introduced a modified Huber loss function to apply initial weights [22]. In recent studies, the concept of virtual control points has been proposed to impose constraints on the adjustment [17,26]. However, these methods did not adjust the RPC correction parameters and ground coordinates simultaneously. Therefore, a more rigorous bundle adjustment method is needed that estimates both the RPC correction parameters and the ground coordinates simultaneously while accounting for the uncertainty of tie point extraction.
To address these difficulties, we propose a novel relative geometric correction method based on RFM-based rigorous bundle adjustment of multiple high-resolution satellite images. Our method aims to resolve relative positional errors between satellite images without the need for reference data such as GCPs or a DEM, enabling the rapid generation of corrected images. The proposed method divides each image into nine sections to extract tie points more evenly and effectively, especially over complex terrain. It then establishes a mathematical model based on the Rational Function Model (RFM) with a correction term for the RPCs. Subsequently, it conducts a rigorous bundle adjustment, iteratively estimating the RPC correction factors and refining the ground coordinates of the tie points. Finally, it generates a virtual ground terrain model from the adjusted ground coordinates of the tie points and produces the result image by applying the corrected RPCs and the virtual terrain model. The proposed method was tested using multiple KOMPSAT-3A images, and the results demonstrated its robustness and efficiency.

Materials and Methods
In this experiment, KOMPSAT-3A satellite images processed at the L1R level, with only radiometric correction and no geometric correction, were selected as the input data. The dataset covers various acquisition and geometric environments, consisting of images captured in different regions. Relative geometric correction was performed with varying numbers of satellite images. Table 1 presents the dataset used in the experiment. Each dataset consists of two or more images, selected based on the region and the attitude of acquisition. Figure 1 visually presents the satellite images; on its right side, enlarged satellite images are displayed. As shown in Figure 1, the datasets have overlapping regions for tie point extraction. In the enlarged images, it is evident that the alignment of the same object is notably imprecise, attributable to the initial relative positional error. Moreover, when the convergence angle is high, the discrepancy in relief displacement direction exacerbates the misalignment.

Tie Point Extraction with SIFT Algorithm
In the relative geometric correction process, tie points, which are corresponding points between two or more images with overlapping areas, are the crucial foundational units. Our tie point extraction algorithm is based on the SIFT algorithm [27], which is known for its scale and rotation invariance. The SIFT algorithm was employed to automatically extract distinctive feature points from the images; it is well suited for satellite images that require a substantial number of feature points. Furthermore, it effectively extracts unique and stable feature points, facilitating the subsequent matching step, and it provides sub-pixel image coordinates, giving finer granularity in representing feature points. Since the descriptor of a SIFT feature point is a 128-dimensional vector, matching a large number of tie points takes a long time. However, the subsequent bundle adjustment and virtual terrain model generation steps require as many sampling points as possible. In particular, if the result image is created from a small number of sampling points, it may be distorted, for example by warping, in satellite images covering a wide area. Therefore, a SIFT-based algorithm is essential for extracting tie points from satellite images.
The left side of Figure 2 shows the processing flow chart of the tie point extraction algorithm used in our proposed method. As depicted in the flow chart, our algorithm divides the image area into nine regions and applies the SIFT algorithm to each region independently. This per-region processing, rather than applying SIFT to the entire image at once, is crucial for fast processing and for extracting more evenly distributed feature points, which contributes to an accurate projection model with denser reference points in the subsequent process. After feature point extraction, matching is performed by computing similarities between the feature descriptors to find corresponding points. The initial matches are sorted by matching distance, and the top 30% of matches, i.e., those with the highest similarity, are retained. These retained matches are then filtered with the RANSAC algorithm to obtain the final set of tie points [28]. In this process, the reference model of RANSAC is the homography model, which represents the projective transformation between the image planes, and the threshold is set to allow a small error (10 pixels in our method). This comprehensive approach ensures the selection of the most relevant and accurate tie points for bundle adjustment, and the sub-pixel precision of the SIFT features contributes to higher accuracy in the adjustment. In the bundle adjustment process, each extracted feature point, with its ground coordinates projected onto the image, is adjusted individually.
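The per-region extraction and top-30% filtering described above can be sketched as follows. This is a minimal NumPy illustration under our own naming; the actual SIFT detection, descriptor matching, and RANSAC filtering (e.g., via OpenCV) are omitted, and the descriptor distances shown are placeholder values.

```python
import numpy as np

def tile_bounds(h, w, n=3):
    """Split an image of size (h, w) into an n x n grid of tile bounds,
    mirroring the paper's division of each image into nine regions."""
    ys = np.linspace(0, h, n + 1, dtype=int)
    xs = np.linspace(0, w, n + 1, dtype=int)
    return [(ys[i], ys[i + 1], xs[j], xs[j + 1])
            for i in range(n) for j in range(n)]

def keep_best_matches(distances, frac=0.30):
    """Keep the top `frac` of matches with the smallest descriptor
    distance, as done before the RANSAC filtering step."""
    order = np.argsort(distances)
    k = max(1, int(len(distances) * frac))
    return order[:k]

bounds = tile_bounds(1200, 1800)   # nine tiles covering the image
best = keep_best_matches(np.array([5., 1., 9., 3., 2., 8., 4., 7., 6., 10.]))
```

After this pre-filtering, the surviving matches would be passed to a homography-based RANSAC with a 10-pixel inlier threshold, as described above.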

RFM-Based Observation Equation
To perform rigorous bundle adjustment, it is necessary to define a set of observation equations representing the transformation between satellite image coordinates and ground coordinates based on the extracted tie points. The KOMPSAT-3A satellite images use the RFM as the mathematical model for this transformation, accompanied by the provided RPCs. Considering the adjustment factors of the satellite images, the observation equations can be expressed as follows:

l = Line(Lat, Lon, H) + Δl(l, s) + e_l,
s = Sample(Lat, Lon, H) + Δs(l, s) + e_s, (1)

where l and s are the measured line (row) and sample (column) coordinates of the tie points in image space; Δl and Δs are the image correction functions in the line and sample directions; Line and Sample are the projected line and sample coordinates of the ground coordinates (Lat, Lon, H) of the tie points computed with the RPCs; and e_l and e_s are random unobservable errors in the line and sample directions.
In this study, the image correction functions Δl and Δs are expressed as first-order polynomials based on an affine transformation:

Δl = a_0 + a_S·s + a_L·l,
Δs = b_0 + b_S·s + b_L·l, (2)

where a_0, a_S, a_L, b_0, b_S, and b_L are the coefficients of the affine transformation. The terms a_0 and b_0 represent parameters that absorb various sources of error. Specifically, a_0 accounts for the in-track error, pitch attitude error, and the line component of the principal point and sensor position errors, while b_0 absorbs the across-track error, roll attitude error, and the sample component of the principal point and sensor position errors. a_S and b_S absorb errors in the radial direction and interior orientation errors, such as lens distortion and focal length, while a_L and b_L capture the effects of gyro drift during image scanning. These parameters play a crucial role in the observation equations for correcting the image coordinates and aligning them accurately with the ground coordinates, accounting for various systematic errors and distortions. Line and Sample in Equation (1) represent the transformation between ground coordinates and image coordinates, described by the RFM of the satellite image. For KOMPSAT-3A, this RFM is composed of a total of 80 coefficients, as shown in Equation (3). These coefficients form a rational function that, given ground coordinates, calculates the corresponding image coordinates.
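The first-order affine correction above is simple enough to state directly in code. The sketch below is illustrative only; the function name and coefficient packing are ours.

```python
def affine_correction(line, samp, a, b):
    """Evaluate the first-order (affine) image correction functions:
    line correction  = a0 + aS*samp + aL*line
    sample correction = b0 + bS*samp + bL*line."""
    a0, aS, aL = a
    b0, bS, bL = b
    dl = a0 + aS * samp + aL * line
    ds = b0 + bS * samp + bL * line
    return dl, ds

# With all-zero coefficients (the initial state of the adjustment),
# the correction vanishes and the initial RPC projection is unchanged.
dl, ds = affine_correction(100.0, 200.0, (0.0, 0.0, 0.0), (0.0, 0.0, 0.0))
```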

Line(Lat, Lon, H) = (Num_L(U, V, W) / Den_L(U, V, W)) · S_L + O_L,
Sample(Lat, Lon, H) = (Num_S(U, V, W) / Den_S(U, V, W)) · S_S + O_S, (3)

where Num_L, Den_L, Num_S, and Den_S are third-order polynomials of 20 terms each, with RPC coefficients a_1–a_20, b_1–b_20, c_1–c_20, and d_1–d_20; U, V, and W are the normalized latitude, longitude, and height values obtained from the given geodetic latitude, longitude, and height using the scale and offset coefficients; and S_L, O_L, S_S, and O_S are the scale and offset coefficients for de-normalizing the normalized line and sample coordinates. Figure 3 visually represents the observation equations employed in our approach. As depicted in Figure 3, our observation equations reflect a projection of ground coordinates onto image coordinates with the initial RPCs, followed by the calculation and application of image correction values on the image plane to estimate the corresponding ground coordinates. The observation equations are indexed by the image points of the tie points, assuming independence of observations across images while targeting the same ground coordinates. Consequently, for the image coordinates of tie points acquired from multiple images, the indexed observation equations are expressed as follows:

l_ij = Line_j(Lat_k, Lon_k, H_k) + Δl_j(l_ij, s_ij) + e_l,ij,
s_ij = Sample_j(Lat_k, Lon_k, H_k) + Δs_j(l_ij, s_ij) + e_s,ij,

where i is the index of the tie point image coordinates observed in each image, j is the index of the image, and k is the index of the ground coordinates of the tie point.
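The forward projection of Equation (3) can be sketched as below. One caveat: the ordering of the 20 polynomial terms is a product convention; the ordering used here is the common RPC00B one and is an assumption, so the metadata of the actual imagery should be consulted.

```python
import numpy as np

def rpc_terms(U, V, W):
    # The 20 cubic polynomial terms of the RFM numerators/denominators.
    # NOTE: this term ordering follows the common RPC00B convention and
    # is an assumption; check the product metadata for the exact order.
    return np.array([
        1.0, V, U, W, V*U, V*W, U*W, V*V, U*U, W*W,
        U*V*W, V**3, V*U*U, V*W*W, V*V*U, U**3, U*W*W,
        V*V*W, U*U*W, W**3])

def rpc_project(lat, lon, h, a, b, c, d, norm):
    """Forward-project ground coordinates to line/sample with an RFM.
    a, b, c, d are the four sets of 20 RPCs (80 coefficients in total,
    as for KOMPSAT-3A); `norm` holds the scale/offset values used for
    normalizing the ground coordinates and de-normalizing the output."""
    U = (lat - norm['LAT_OFF']) / norm['LAT_SCALE']   # normalized latitude
    V = (lon - norm['LON_OFF']) / norm['LON_SCALE']   # normalized longitude
    W = (h - norm['H_OFF']) / norm['H_SCALE']         # normalized height
    t = rpc_terms(U, V, W)
    line = (np.dot(a, t) / np.dot(b, t)) * norm['LINE_SCALE'] + norm['LINE_OFF']
    samp = (np.dot(c, t) / np.dot(d, t)) * norm['SAMP_SCALE'] + norm['SAMP_OFF']
    return line, samp
```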

Rigorous Bundle Adjustment
Bundle adjustment is a computational technique that simultaneously refines the geometric parameters of a set of images and the 3D coordinates of observed features, enhancing overall positional accuracy (Figure 4). Before performing bundle adjustment, it is necessary to define the parameters to be adjusted. In this study, the parameters of the image correction functions (a_0, a_S, a_L, b_0, b_S, b_L) and the projected ground coordinates (Latitude, Longitude, Height) of the tie points are selected as adjustment parameters. The initial values of the image correction parameters are all set to 0, and the initial tie point ground coordinates are obtained by forward intersection with the initial RPCs. When dealing with multiple images and tie points, Equation (8) becomes complicated to interpret, so it is linearized with a first-order Taylor series expansion. The linearized observation equations can then be used to estimate the unknowns through a least squares approach, as shown in Equation (11) [29]. The vector of adjustment parameters, shown in Equation (12), is grouped into the image correction parameters and the ground coordinates. Because the parameter vector is grouped, the design matrix of partial derivatives can also be grouped accordingly, as shown in Equation (15), and finally the bundle adjustment matrix is constructed by considering the weights of the observations, as expressed in Equation (16), where W, Ẇ, and Ẅ are the weight matrices for the observations, the image correction parameters, and the ground coordinates of the tie points, respectively; w is the misclosure of the observations; and ẇ and ẅ are the misclosures of the image correction parameters and ground coordinates, which are substituted with zero in the bundle adjustment. As shown in Equation (16), bundle adjustment estimates increments of the adjustment parameters, typically through multiple iterations and updates, to compute the final parameter estimates. These estimates are affected by the weight matrices (W, Ẇ, Ẅ). If absolute coordinates such as GCPs are used, the weight matrix can be set according to the accuracy implied by the control points. However, in a relative correction process that uses only tie points, the weighting of the tie point observations is ambiguous, and performing the adjustment with the same weights at every iteration is unstable. In our method, the weight matrix is recalculated and applied at each iteration to perform a more rigorous bundle adjustment.
To recalculate the weight matrix, we evaluate the result of the adjustment at each iteration of the bundle adjustment. The result is assessed through the residuals and the covariance matrix of the residuals, which can be calculated using Equation (17) [30]:

C_v̂ = C_l − A C_x̂ Aᵀ, (17)

where C_l is the covariance matrix of the observations before adjustment, A is the design matrix, and C_x̂ is the covariance matrix of the estimated parameters. The estimated new weight matrix for the next iteration is then computed as the inverse of this residual covariance. These calculations provide valuable information for assessing the accuracy and reliability of the estimated parameters and allow for iterative refinement in subsequent iterations of the rigorous bundle adjustment.
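The iterative re-weighting loop can be illustrated with a deliberately simplified sketch. The example below solves a small linear least squares problem and recomputes the observation weights from the current residuals at each iteration; it stands in for the full partitioned normal-equation system of Equation (16), and the residual-based weighting rule is chosen purely for illustration, not taken from the paper.

```python
import numpy as np

def reweighted_least_squares(A, l, n_iter=5):
    """Solve A x ~= l, recomputing the observation weight matrix from
    the current residuals at every iteration. This mimics the idea of
    re-deriving weights per iteration in the rigorous adjustment."""
    W = np.eye(len(l))                     # initial: equal weights
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        N = A.T @ W @ A                    # normal matrix
        x = np.linalg.solve(N, A.T @ W @ l)
        v = l - A @ x                      # residuals after this solve
        W = np.diag(1.0 / (1.0 + v**2))    # down-weight large residuals
    return x, v

# Consistent observations are recovered exactly regardless of weighting.
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [1.0, 2.0]])
l = A @ np.array([2.0, 1.0])
x, v = reweighted_least_squares(A, l)
```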

Result Image Generation Based on Virtual DEM
After the rigorous bundle adjustment, the image correction parameters and the ground coordinates of the tie points have been estimated for each image to achieve an optimal level of geometric positional accuracy. Relative geometric correction between images, such as image registration, generally uses an affine or homography transformation model to generate the result images. However, because our method adjusts the ground coordinates of the tie points, which are projected from the satellite images, the result image is projected onto a complex 3D model rather than onto a simple plane.

To utilize the 3D model, it is necessary to define a 3D object space, a virtual Digital Elevation Model (vDEM), which serves as the basis for the re-projection process. To generate the vDEM, we used the adjusted ground coordinates of the tie points. However, the extracted tie points are not dense enough to generate a vDEM covering a large-scale satellite image; therefore, the vDEM is generated by interpolating the 3D coordinates.
Figure 5 shows the concept of generating the result image used in this study. First, the size and area of the result image must be set. These can be calculated by applying the initial RPCs and the adjusted image correction parameters to the four corner points of the original image. Next, the 3D coordinates corresponding to each pixel of the result image are estimated by interpolating the sampled tie points. The horizontal coordinates are assigned at a regular sampling distance with reference to the GSD of the original image, and the vertical coordinates are estimated using Inverse Distance Weighting (IDW) interpolation [31]:

h' = (Σ_i h_i / d_i^p) / (Σ_i 1 / d_i^p),

where h' is the estimated height value at the target pixel, h_i is the height value at neighboring point i, d_i is the distance between the target and the neighboring point, and p is a power parameter that controls the influence of the neighboring points on the interpolation.
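The IDW interpolation above can be sketched as follows; the function name and the handling of a coincident sample point are our own choices.

```python
import numpy as np

def idw_height(target_xy, pts_xy, pts_h, p=2.0, eps=1e-12):
    """Estimate the height at `target_xy` by Inverse Distance Weighting:
    a weighted mean of the neighbors' heights with weights 1/d^p."""
    d = np.linalg.norm(pts_xy - target_xy, axis=1)
    if np.any(d < eps):                    # target coincides with a sample
        return float(pts_h[np.argmin(d)])
    w = 1.0 / d**p
    return float(np.sum(w * pts_h) / np.sum(w))

pts_xy = np.array([[0.0, 0.0], [2.0, 0.0]])
pts_h = np.array([10.0, 20.0])
h_mid = idw_height(np.array([1.0, 0.0]), pts_xy, pts_h)  # equidistant -> mean
```

A larger power parameter p makes the interpolation more local, giving nearby tie points stronger influence on the estimated height.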
The 3D ground coordinates corresponding to each pixel of the result image are then projected forward into the original image to obtain the corresponding original pixel coordinates. Finally, the pixel value for each pixel of the result image is extracted from the original image.
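The resampling loop just described can be sketched generically. The vDEM lookup and the (corrected) RFM projection are passed in as callables, since their actual implementations are described in the preceding sections; nearest-neighbor sampling is used here purely for brevity.

```python
import numpy as np

def generate_result_image(src, out_shape, ground_of_pixel, project):
    """For each pixel of the result image: look up its 3D ground
    coordinates on the vDEM, forward-project them into the original
    image with the (corrected) RFM, and copy the nearest source pixel."""
    h, w = out_shape
    out = np.zeros((h, w), dtype=src.dtype)
    for r in range(h):
        for c in range(w):
            lat, lon, hgt = ground_of_pixel(r, c)    # vDEM lookup
            line, samp = project(lat, lon, hgt)      # original-image coords
            li, si = int(round(line)), int(round(samp))
            if 0 <= li < src.shape[0] and 0 <= si < src.shape[1]:
                out[r, c] = src[li, si]              # nearest-neighbor sample
    return out

# Sanity check with identity mappings: the output reproduces the source.
src = np.arange(16, dtype=np.float64).reshape(4, 4)
out = generate_result_image(
    src, (4, 4),
    ground_of_pixel=lambda r, c: (float(r), float(c), 0.0),
    project=lambda lat, lon, hgt: (lat, lon))
```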

Test Results
Our proposed method was tested on four datasets consisting of multiple KOMPSAT-3A satellite images. The proposed rigorous bundle adjustment is intended to improve the relative positional accuracy between satellite images. To validate the performance of our method, we established check points for each dataset. For relative geometric correction, the check points were not based on actual ground coordinates; instead, corresponding points were visually identified in the satellite images. The extracted check points are used to calculate the re-projection error.
However, the RFM, which interprets the relationship between 2D image coordinates and 3D ground coordinates in a homogeneous system, can only describe the viewing direction from the image coordinates. This means that, to compute the projected ground coordinates from the image coordinates, the height value must be defined in advance through the virtual terrain model; accurate 3D ground coordinates are then estimated using an iterative ray-tracing technique. Therefore, we manually extracted the same object locations from the overlapping areas between satellite images and calculated the reprojection errors, as shown in Figure 6. As mentioned in Section 2.1, our proposed relative geometric correction method obtains a large number of tie points, since they provide the connectivity for coordination between satellite images. Figure 7 shows the distribution of tie points and check points (black dots are tie points; red dots are check points). As shown in Figure 7, the tie points are evenly distributed, except in areas where tie points are difficult to extract, such as water bodies and mountainous areas. The check points were also extracted throughout the overlap area and used for validation. Table 2 shows the tie point extraction results and the initial relative positional errors for each dataset. As shown in Table 2, there are significant relative positional errors, at both the tie points and the check points, between satellite images that have not been geometrically corrected.

Result of Rigorous Bundle Adjustment
To show the effectiveness of the proposed method, the parameter values resulting from the rigorous bundle adjustment should converge stably. The bundle adjustment matrix grows significantly as the number of images and tie points increases, and the computational load escalates accordingly. To ensure stable convergence, the unavoidable repeated matrix operations should satisfy the termination conditions within a minimal number of iterations. The degree of adjustment varies depending on the composition of the weight matrix. Our proposed method applies re-weighting at each iteration to drive the parameter increments close to zero. Tables 3-6 show the incremental parameter values at each iteration of our rigorous bundle adjustment. The threshold for the termination condition was set differently for each parameter, considering their respective scales; these thresholds were empirically determined to be low enough that pixel-level reprojection errors are not significantly affected. Consequently, our rigorous bundle adjustment terminated after relatively few iterations for all datasets, with the adjustment parameters converging stably toward zero without divergence. Table 7 presents the final RPC correction parameter values for each adjusted image. The RPC correction parameters (a_0, a_S, a_L, b_0, b_S, b_L) after bundle adjustment vary across images, reflecting the specific corrections needed for each dataset and the distortion characteristics of each image. The refined RPC correction parameters enhance the accuracy of the transformation between image coordinates and ground coordinates. To further validate their effectiveness, we calculated the reprojection errors using the manually acquired check points.

Relative Positional Accuracies of the Check Points
To evaluate the accuracy of the adjusted image correction parameters, we compared the relative positional errors of the manually acquired check points. The relative positional errors were calculated by reprojecting the check points onto the adjusted images using the adjusted correction parameters. The results are summarized in Table 8. Our analysis showed that the adjusted images exhibited significantly reduced positional errors compared to the unadjusted images. The mean positional error decreased from 30.02 pixels to 2.21 pixels, indicating a more consistent and accurate alignment of the images. To further demonstrate the efficiency of our iterative bundle adjustment method, we include graphs showing the initial error and the re-projected model error after convergence (Figures 8-11). These graphs clearly show the decrease in reprojection error with each iteration, highlighting the convergence and stability of the adjustment process. The error drops significantly in the initial iterations and stabilizes as it approaches the final minimal value, confirming the robustness of our approach.

Corrected Result Images
In this section, we present a visual comparison illustrating the improvements achieved through our rigorous bundle adjustment process. Figure 12 shows an example of the positional transformation of key features by comparing the original unadjusted images with the adjusted images, highlighting how our method corrects positional inaccuracies. Figures 13-16 provide enlarged images of specific regions to emphasize the enhanced alignment of features such as building edges and road intersections. These detailed views further validate the quantitative improvements discussed in Section 3.2, showcasing the practical benefits of our bundle adjustment process. The reduction in positional errors and the enhanced alignment of features visually confirm the effectiveness of our method and provide a comprehensive validation of its capability to significantly improve the accuracy and reliability of satellite image correction. The final relative geometric correction results were obtained by applying the proposed rigorous bundle adjustment and using the corrected RFM with the virtual elevation model. The result images show the images transformed using the adjustment parameters estimated by the proposed method. As the result images show, projecting the satellite images onto the ground with the unadjusted initial RFM reveals noticeable positioning errors in all datasets. To verify that the initial error was properly corrected, the result image of each dataset was enlarged and the continuity of identified objects was compared. The proposed bundle adjustment method successfully mitigated the positional errors of the initial RFM, leading to improved alignment and coherence of objects across the images.

Conclusions
This study demonstrates the effectiveness of the proposed RFM-based rigorous bundle adjustment method in improving the relative geometric positioning accuracy of multiple high-resolution satellite images. By implementing iterative adjustment, our method achieves convergence within a minimal number of iterations, ensuring both efficiency and precision. The experimental results validate the robustness and reliability of the proposed approach: the iterative bundle adjustment process effectively reduced positional errors and stabilized the adjustment parameters.
The quantitative results show that the model error averages around 1.22 pixels, achieving a high level of modeling accuracy, and that the check point error is reduced from 30.02 pixels to 2.21 pixels, indicating a significant improvement in the alignment of the satellite images. Additionally, the visual comparisons provide clear evidence of improved consistency and reduced discrepancies in the adjusted images, particularly in the alignment of key features such as building edges and road intersections. In summary, the proposed rigorous bundle adjustment method offers a comprehensive solution for enhancing the accuracy and reliability of satellite image correction; it not only corrects positional inaccuracies but also improves the overall quality and usability of satellite imagery. Future research will consider the use of existing DEMs to further adjust the generated virtual DEMs and to estimate absolute accuracy, including a comparison of the proposed method with absolute geometric correction methods. Moreover, the robustness of the approach under different convergence angles and its applicability in diverse topographic regions will be examined. Finally, future work will focus on refining this approach and exploring its application to various types of satellite data to extend its benefits across different contexts and datasets.

Figure 1. Test area and enlarged images.

Figure 2. Tie point extraction algorithm in the proposed method.

Figure 3. Visual representation of the RFM with correction factors.

Figure 4. Bundle adjustment concept for multiple satellite images.

Figure 5. Method for generating result images with the vDEM as a virtual projection model. The colored dots represent sampling points after ground-coordinate adjustment.

Figure 6. Height value estimation based on ray tracing.

Figure 7. Extraction result of tie points (black dots) and check points (red dots).

Figure 8. Scatter plot of tie points used for bundle adjustment of Dataset A.

Figure 9. Scatter plot of tie points used for bundle adjustment of Dataset B.

Figure 10. Scatter plot of tie points used for bundle adjustment of Dataset C.

Figure 11. Scatter plot of tie points used for bundle adjustment of Dataset D.

Figure 12. Concept of the transform of modeling points and the result image. The red dots represent initial tie points; the blue dots represent tie points after adjustment.

Figure 13. Enlarged result images with the proposed method (Dataset A).

Figure 14. Enlarged result images with the proposed method (Dataset B).

Figure 15. Enlarged result images with the proposed method (Dataset C).

Figure 16. Enlarged result images with the proposed method (Dataset D).

Table 1. Properties of the dataset used in the study.

Table 2. Tie point extraction results.

Table 3. Bundle adjustment results for Dataset A.

Table 4. Bundle adjustment results for Dataset B.

Table 5. Bundle adjustment results for Dataset C.

Table 6. Bundle adjustment results for Dataset D.

Table 7. Image RPC correction parameters after rigorous bundle adjustment.

Table 8. Reprojection error by pair before and after bundle adjustment.