Article

Rational-Function-Model-Based Rigorous Bundle Adjustment for Improving the Relative Geometric Positioning Accuracy of Multiple Korea Multi-Purpose Satellite-3A Images

Department of Geoinformatic Engineering, Inha University, 100 Inharo, Michuhol-gu, Incheon 22212, Republic of Korea
* Author to whom correspondence should be addressed.
Remote Sens. 2024, 16(16), 2890; https://doi.org/10.3390/rs16162890
Submission received: 28 June 2024 / Revised: 5 August 2024 / Accepted: 6 August 2024 / Published: 7 August 2024
(This article belongs to the Section Remote Sensing Image Processing)

Abstract

Recent advancements in satellite technology have significantly increased the availability of high-resolution imagery for Earth observation, enabling nearly all regions to be captured frequently throughout the year. These images have become a vast source of big data and hold immense potential for various applications, including environmental monitoring, urban planning, and disaster management. However, obtaining ground control points (GCPs) and performing geometric correction is a time-consuming and costly process, often limiting the efficient use of these images. To address this challenge, this study introduces a Rational Function Model (RFM)-based rigorous bundle adjustment method to enhance the relative geometric positioning accuracy of multiple KOMPSAT-3A images without the need for GCPs. The proposed method was tested using KOMPSAT-3A images. The results showed a significant improvement in geometric accuracy, with mean positional errors reduced from 30.02 pixels to 2.21 pixels. This enhancement ensured that the corrected images derived from the proposed method were reliable and accurate, making them highly valuable for various geospatial applications.

1. Introduction

Recently, advancements in satellite technology have led to a significant increase in high-resolution imagery for Earth observation. Nearly all regions are now captured daily or on multiple occasions throughout the year. These satellite images have become a vast source of big data holding immense potential for utilization [1,2,3]. They can be processed into valuable time series data that facilitate analyses such as Earth monitoring and change detection, providing crucial and up-to-date information on the evolution of terrain and human activities. For such analyses to be accurate and applicable, the precision of the conversion between satellite image coordinates and the corresponding ground coordinates is crucial. Commonly used high-resolution satellite images, such as those from Korea Multi-Purpose Satellite-3A (KOMPSAT-3A) and IKONOS, may not achieve high accuracy through direct geo-referencing due to inherent systematic errors in their initial Rational Function Model (RFM) [4,5,6]. These errors are compensated for using a bias error model [7,8], typically established as an affine transformation based on ground control points (GCPs) that serve as references for ground coordinates. However, obtaining high-precision GCPs is typically challenging in the practical utilization of satellite images due to time, cost, and policy constraints [9,10]. In addition, identifying GCPs in satellite images and performing precise geometric correction incurs significant time and cost compared to the actual generation of the satellite image itself. Therefore, reducing dependency on GCPs is a highly efficient and effective approach. A relative geometric correction, which considers the transformation between satellite images without GCPs, is an essential technology in the current context, where the number of satellite images is increasing exponentially [11,12,13].
Relative geometric correction of satellite images commonly employs one of two main methods, depending on the number of processed images. One is pair-wise image registration [14,15,16], and the other is a bundle adjustment-based approach [11,17,18,19]. The pair-wise image registration method involves image matching and estimating the transformation relationship to align a target image with a reference image. It can generally be divided into three categories: feature-based, area-based, and deep-learning-based methods [20,21]. These image registration methods require identifying corresponding points between the target and reference images to estimate a transformation between the two images. Various transformation models, such as similarity, affine, and homography transformations, can be used to align images. Among these, the homography transformation is often utilized for satellite images due to its ability to accurately model the perspective changes and geometric distortions typically present in such imagery. However, when processing multiple satellite images, it is necessary to design a process that is independent of the reference image selection and avoids error propagation. In contrast to pair-wise image registration, bundle adjustment-based methods iteratively adjust the given initial RPCs and re-estimate ground coordinates [22]. They are not restricted by the number of images and have the advantage of simultaneously estimating RPC correction parameters. Despite these advantages, there are two difficulties in bundle adjustment-based relative geometric correction. First, stable convergence of the adjustment requires user intervention to set weights for the observations and correction parameters [23]. Second, the resulting images are not generated by a simple image transformation, but by projection onto a complex terrain formed by adjusting the ground coordinates [24,25]. To address similar issues, Marí et al. introduced a modified Huber loss function to apply initial weights [22]. In recent studies, the concept of virtual control points has been proposed to impose constraints on the adjustment [17,26]. However, these methods did not adjust the RPC correction parameters and ground coordinates simultaneously. Therefore, a more rigorous bundle adjustment method is needed that estimates both the RPC correction parameters and the ground coordinates simultaneously while considering the accuracy of uncertain tie point extraction.
To address the above difficulties, we propose a novel relative geometric correction method based on RFM-based rigorous bundle adjustment of multiple high-resolution satellite images. Our method aims to resolve relative positional errors between satellite images without reference data such as GCPs or a DEM, enabling the rapid generation of corrected images. The proposed method divides each image into nine sections to extract tie points more evenly and effectively, especially over complex terrain. It then establishes a mathematical model based on the Rational Function Model (RFM) with a correction term for the RPCs. Subsequently, it conducts rigorous bundle adjustment, iteratively estimating the RPC correction factors and refining the ground coordinates of the tie points. Finally, it generates a virtual ground terrain model from the adjusted ground coordinates of the tie points and produces the result image by applying the corrected RPCs and the virtual terrain model. The proposed method was tested using multiple KOMPSAT-3A images, and the results demonstrated its robustness and efficiency.

2. Materials and Methods

In this experiment, KOMPSAT-3A satellite images processed at the L1R level, i.e., with only radiometric correction and no geometric correction, were selected as the input data. The datasets cover various acquisition and geometric conditions and consist of images captured in different regions. Relative geometric correction was performed with varying numbers of satellite images. Table 1 presents the datasets used in the experiment. Each dataset consists of two or more images, selected based on the region and attitude of acquisition. Figure 1 visually represents the satellite images; on its right side, enlarged satellite images are displayed. As shown in Figure 1, the datasets have overlapping regions for tie point extraction. In the enlarged images, it is evident that the alignment of the same object is notably imprecise, attributable to the initial relative positional error. Moreover, when the convergence angle is high, the discrepancy in relief displacement direction exacerbates the misalignment.

2.1. Tie Point Extraction with SIFT Algorithm

In the relative geometric correction process, tie points, which are corresponding points between two or more images with overlapping areas, are the crucial foundational units. Our tie-point extraction algorithm is based on the SIFT algorithm [27], which is known for its scale and rotation invariance. The SIFT algorithm was employed to automatically extract distinctive feature points from the images. It is well suited to satellite images, which require a substantial number of feature points, and it effectively extracts unique and stable feature points, facilitating the subsequent matching-point extraction. The SIFT algorithm also provides sub-pixel image coordinates, giving finer granularity in representing feature points. Since the descriptor of a SIFT feature point is a 128-dimensional vector, matching a large number of tie points takes a long time. However, the subsequent bundle adjustment and virtual terrain model generation require as many sampling points as possible; in particular, if the result image is created from only a small number of sampling points, it may be distorted, such as by warping, in satellite images taken over a wide area. Therefore, a SIFT-based algorithm is essential for extracting dense tie points from satellite images.
The left side of Figure 2 shows the processing flow chart of the tie point extraction algorithm used in our proposed method. As depicted in the flow chart, our tie point extraction algorithm divides an image into nine regions and applies the SIFT algorithm to each region independently. This processing, rather than applying SIFT to the entire image at once, is crucial for fast processing and for extracting more evenly distributed feature points, which contributes to creating an accurate projection model with denser reference points in the subsequent process. After extracting the feature points, a matching process is performed by calculating the similarities of the feature descriptors to find corresponding points. The initial matches are sorted by matching distance, and the top 30%, i.e., the matches with the highest similarity, are retained. These retained matching points undergo further processing through the RANSAC algorithm to obtain the final set of tie points [28]. In this process, the reference model of RANSAC is the homography model, which describes the perspective transformation between the image planes, and the threshold is set to allow a small error (10 pixels in our method), as illustrated in the sketch below.
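To make the workflow concrete, the following is a minimal sketch of this tie-point extraction step, assuming OpenCV's SIFT implementation and grayscale input images. The grid split (nine regions), the top-30% match retention, and the 10-pixel RANSAC threshold follow the description above; the function names and the use of `cv2.findHomography` are illustrative, not the authors' exact implementation.

```python
import cv2
import numpy as np

def extract_tie_points(img_ref, img_tgt, grid=3, keep_ratio=0.3, ransac_px=10.0):
    """Sketch of the tie-point extraction described above: SIFT per grid
    cell, distance-sorted matching, then homography-based RANSAC."""
    sift = cv2.SIFT_create()

    def grid_features(img):
        # Run SIFT on each of the nine sub-regions independently so that
        # feature points stay evenly distributed over the image.
        kps, descs = [], []
        h, w = img.shape[:2]
        for r in range(grid):
            for c in range(grid):
                mask = np.zeros((h, w), np.uint8)
                mask[r*h//grid:(r+1)*h//grid, c*w//grid:(c+1)*w//grid] = 255
                kp, desc = sift.detectAndCompute(img, mask)
                if desc is not None:
                    kps.extend(kp)
                    descs.append(desc)
        return kps, np.vstack(descs)

    kp1, d1 = grid_features(img_ref)
    kp2, d2 = grid_features(img_tgt)

    # Match the 128-D descriptors and keep the top 30% by matching distance.
    matches = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True).match(d1, d2)
    matches = sorted(matches, key=lambda m: m.distance)
    matches = matches[:int(len(matches) * keep_ratio)]

    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # RANSAC with a homography model and a 10-pixel inlier threshold.
    _, inliers = cv2.findHomography(pts1, pts2, cv2.RANSAC, ransac_px)
    keep = inliers.ravel().astype(bool)
    return pts1[keep], pts2[keep]
```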
This comprehensive approach ensures the selection of the most relevant and accurate tie points for bundle adjustment. The sub-pixel precision of the SIFT features contributes to achieving higher accuracy in the bundle adjustment, in which the ground coordinates of each extracted feature point, projected onto the image, are adjusted individually.

2.2. RFM-Based Observation Equation

To perform rigorous bundle adjustment, it is necessary to define a set of observation equations representing the transformation between satellite image coordinates and ground coordinates based on the extracted tie points. KOMPSAT-3A satellite images use the RFM as the mathematical model for this transformation and are delivered with RPCs. Considering the adjustment factors of satellite images in the RFM-based transformation between image and ground, the observation equations are expressed as follows:
$$line = \Delta line + Line(Lat, Lon, Hgt) + v_{line}$$
$$sample = \Delta sample + Sample(Lat, Lon, Hgt) + v_{sample} \quad (1)$$
where $line$ and $sample$ are the measured line (row) and sample (column) coordinates of tie points in image space; $\Delta line$ and $\Delta sample$ are the image correction functions in the line and sample directions; $Line$ and $Sample$ are the projected line and sample coordinates of the ground coordinates $(Lat, Lon, Hgt)$ of the tie points using the RPCs; and $v_{line}$ and $v_{sample}$ are random unobservable errors in the line and sample directions.
In this study, the image correction functions Δ l i n e and Δ s a m p l e are expressed as first-order polynomials based on affine transformation. This equation is described as follows:
$$\Delta line = a_0 + a_s \cdot sample + a_l \cdot line$$
$$\Delta sample = b_0 + b_s \cdot sample + b_l \cdot line \quad (2)$$
where $a_0, a_s, a_l, b_0, b_s, b_l$ are the coefficients of the affine transformation. The terms $a_0$ and $b_0$ represent parameters that absorb various sources of error. Specifically, $a_0$ accounts for in-track error and pitch attitude error, as well as the line component of principal point and sensor position errors; $b_0$ absorbs across-track error, roll attitude error, and the sample component of principal point and sensor position errors. $a_s$ and $b_s$ absorb errors in the radial direction and interior orientation errors, such as lens distortion and focal length, while $a_l$ and $b_l$ capture the effects of gyro drift during image scanning.
These parameter values play a crucial role in the observation equation for correcting the image coordinates and aligning them accurately with the ground coordinates. They contribute to the precise adjustment of satellite image positions, accounting for various systematic errors and distortions. $Line$ and $Sample$ in Equation (1) represent the transformation between ground coordinates and image coordinates, described by the RFM of the satellite image. For KOMPSAT-3A, this RFM is composed of a total of 80 coefficients, as shown in Equation (3). These coefficients form a rational function that, when provided with ground coordinates, calculates the corresponding image coordinates.
$$Line(Lat, Lon, Hgt) = \frac{Numer_{Line}(P, L, H)}{Denom_{Line}(P, L, H)} \cdot LINE\_SCALE + LINE\_OFF$$
$$Sample(Lat, Lon, Hgt) = \frac{Numer_{Sample}(P, L, H)}{Denom_{Sample}(P, L, H)} \cdot SAMPLE\_SCALE + SAMPLE\_OFF \quad (3)$$
where
$$Numer_{Line}(P, L, H) = a_1 + a_2 L + a_3 P + a_4 H + a_5 LP + a_6 LH + a_7 PH + a_8 L^2 + a_9 P^2 + a_{10} H^2 + a_{11} PLH + a_{12} L^3 + a_{13} LP^2 + a_{14} LH^2 + a_{15} L^2P + a_{16} P^3 + a_{17} PH^2 + a_{18} L^2H + a_{19} P^2H + a_{20} H^3 = \sum_{i=0}^{3}\sum_{j=0}^{3}\sum_{k=0}^{3} a_n P^i L^j H^k, \quad (i+j+k) < 4 \quad (4)$$
$$Denom_{Line}(P, L, H) = \sum_{i=0}^{3}\sum_{j=0}^{3}\sum_{k=0}^{3} b_n P^i L^j H^k, \quad (i+j+k) < 4 \quad (5)$$
$$Numer_{Sample}(P, L, H) = \sum_{i=0}^{3}\sum_{j=0}^{3}\sum_{k=0}^{3} c_n P^i L^j H^k, \quad (i+j+k) < 4 \quad (6)$$
$$Denom_{Sample}(P, L, H) = \sum_{i=0}^{3}\sum_{j=0}^{3}\sum_{k=0}^{3} d_n P^i L^j H^k, \quad (i+j+k) < 4 \quad (7)$$
where $P$, $L$, and $H$ are the normalized latitude, longitude, and height values obtained from the given geodetic ground coordinates using the scale and offset coefficients. $a_{1 \sim 20}$, $b_{1 \sim 20}$, $c_{1 \sim 20}$, and $d_{1 \sim 20}$ are the RPC coefficients, and $LINE\_SCALE$, $LINE\_OFF$, $SAMPLE\_SCALE$, and $SAMPLE\_OFF$ are the coefficients for de-normalizing the normalized line and sample coordinates.
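For illustration, the forward projection of Equations (3)-(7) can be sketched as below. The dictionary keys mirror the common RPC00B field names and the 20-term coefficient ordering above; they are an assumption for the sketch, not the KOMPSAT-3A metadata format.

```python
import numpy as np

def rpc_poly(coef, P, L, H):
    """Evaluate the 20-term cubic RPC polynomial of Equations (4)-(7).
    coef holds the 20 coefficients; P, L, H are normalized lat, lon, height."""
    terms = np.array([
        1, L, P, H, L*P, L*H, P*H, L*L, P*P, H*H,
        P*L*H, L**3, L*P*P, L*H*H, L*L*P, P**3, P*H*H, L*L*H, P*P*H, H**3])
    return coef @ terms

def project(rpc, lat, lon, hgt):
    """Forward projection Line()/Sample() of Equation (3): normalize the
    ground coordinates, evaluate the four polynomials, de-normalize."""
    P = (lat - rpc["LAT_OFF"]) / rpc["LAT_SCALE"]
    L = (lon - rpc["LON_OFF"]) / rpc["LON_SCALE"]
    H = (hgt - rpc["HEIGHT_OFF"]) / rpc["HEIGHT_SCALE"]
    line = rpc_poly(rpc["LINE_NUM"], P, L, H) / rpc_poly(rpc["LINE_DEN"], P, L, H)
    samp = rpc_poly(rpc["SAMP_NUM"], P, L, H) / rpc_poly(rpc["SAMP_DEN"], P, L, H)
    return (line * rpc["LINE_SCALE"] + rpc["LINE_OFF"],
            samp * rpc["SAMP_SCALE"] + rpc["SAMP_OFF"])
```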
Figure 3 visually represents the observation equations employed in our approach. As depicted in Figure 3, the observation equations project ground coordinates onto image coordinates with the initial RPCs, and then calculate and apply image correction values on the image plane so that the corresponding ground coordinates can be estimated.
The observation equations are indexed by the image points of the tie points, assuming that observations are independent across images while referring to the same ground coordinates. Consequently, for each image coordinate of a tie point acquired from multiple images, the indexed observation equations are expressed as follows:
$$F_{Line} = -line_i^j + \left(a_0^j + a_s^j \cdot sample_i^j + a_l^j \cdot line_i^j\right) + Line^j(Lat_k, Lon_k, Hgt_k)$$
$$F_{Sample} = -sample_i^j + \left(b_0^j + b_s^j \cdot sample_i^j + b_l^j \cdot line_i^j\right) + Sample^j(Lat_k, Lon_k, Hgt_k) \quad (8)$$
where $i$ is the index of the tie point image coordinates observed in each image, $j$ is the index of the image, and $k$ is the index of the ground coordinates of the tie point.

2.3. Rigorous Bundle Adjustment

Bundle adjustment is a computational technique that simultaneously refines the geometric parameters of a set of images and 3D coordinates of observed features, enhancing overall positional accuracy (Figure 4).
Before performing bundle adjustment, it is necessary to define the parameters to be adjusted. In this study, the parameters of the image correction functions ($a_0, a_s, a_l, b_0, b_s, b_l$) and the projected ground coordinates (Latitude, Longitude, Height) of the tie points are selected as adjustment parameters. The initial values of the image correction parameters are all set to 0, and the initial ground coordinates of the tie points are set by forward intersection with the initial RPCs. When dealing with multiple images and tie points, Equation (8) becomes complicated to interpret, so we utilize the first-order Taylor series expansion to express it as follows:
$$F_i^0 + dF_i + \varepsilon = 0 \quad (9)$$
where
$$F_i = \begin{bmatrix} F_{Line_i} \\ F_{Sample_i} \end{bmatrix} \quad (10)$$
The linearized observation equations can be used to estimate the unknowns through the least squares approach and can be represented as shown in Equation (11) [29]. The $dx$ matrix of the adjustment parameters is represented as shown in Equation (12), grouped into the image correction parameters and the ground coordinates.
$$\begin{bmatrix} \dfrac{\partial F_{Line_i}}{\partial x} \\ \dfrac{\partial F_{Sample_i}}{\partial x} \end{bmatrix} dx = -F_i^0 \quad (11)$$
$$dx = \begin{bmatrix} dx_{RFM} & dx_{LLH} \end{bmatrix}^T \quad (12)$$
where
$$dx_{RFM} = \begin{bmatrix} a_0^0 & a_s^0 & a_l^0 & b_0^0 & b_s^0 & b_l^0 & \cdots & a_0^j & a_s^j & a_l^j & b_0^j & b_s^j & b_l^j \end{bmatrix}^T \quad (13)$$
$$dx_{LLH} = \begin{bmatrix} Lat_0 & Lon_0 & Hgt_0 & \cdots & Lat_k & Lon_k & Hgt_k \end{bmatrix}^T \quad (14)$$
As the $dx$ matrix is grouped, the design matrix, to which the partial derivatives are applied, can also be grouped as shown in Equation (15); finally, the bundle adjustment matrix is constructed by considering the weights of the observations, as expressed in Equation (16).
$$\begin{bmatrix} \dfrac{\partial F_{Line_i}}{\partial x} \\ \dfrac{\partial F_{Sample_i}}{\partial x} \end{bmatrix} dx = \begin{bmatrix} \dfrac{\partial F_{Line_i}}{\partial x_{RFM}} & \dfrac{\partial F_{Line_i}}{\partial x_{LLH}} \\ \dfrac{\partial F_{Sample_i}}{\partial x_{RFM}} & \dfrac{\partial F_{Sample_i}}{\partial x_{LLH}} \end{bmatrix} \begin{bmatrix} dx_{RFM} \\ dx_{LLH} \end{bmatrix} = \begin{bmatrix} \dot{B} & \ddot{B} \end{bmatrix} \begin{bmatrix} dx_{RFM} \\ dx_{LLH} \end{bmatrix} \quad (15)$$
$$\begin{bmatrix} w & 0 & 0 \\ 0 & \dot{w} & 0 \\ 0 & 0 & \ddot{w} \end{bmatrix} \begin{bmatrix} \dot{B} & \ddot{B} \\ I & 0 \\ 0 & I \end{bmatrix} \begin{bmatrix} dx_{RFM} \\ dx_{LLH} \end{bmatrix} = \begin{bmatrix} w & 0 & 0 \\ 0 & \dot{w} & 0 \\ 0 & 0 & \ddot{w} \end{bmatrix} \begin{bmatrix} M_{obs} \\ M_{RFM} = 0 \\ M_{LLH} = 0 \end{bmatrix} \quad (16)$$
where $w$, $\dot{w}$, and $\ddot{w}$ represent the weights for the observations, the image correction parameters, and the ground coordinates of the tie points, respectively. $M_{obs}$ is the misclosure of the observations, and $M_{RFM}$ and $M_{LLH}$ are the misclosures for the image correction parameters and the ground coordinates; these misclosures are set to zero in the bundle adjustment.
As shown in Equation (16), bundle adjustment is the process of estimating increments of the parameters to be adjusted, typically through multiple iterations and updates, to compute the final parameter estimates. The results of these estimates are affected by the weights of the observations, given by the weight matrices ($w, \dot{w}, \ddot{w}$). If absolute coordinates such as GCPs are used, the weight matrix can be set from the accuracy implied by the control points. However, in a relative correction process that utilizes only tie points, the weighting of the tie point observations is ambiguous, and performing the adjustment with the same weights at every iteration is unstable. In our method, the weight matrix is recalculated and applied at each iteration to perform a more rigorous bundle adjustment.
To recalculate the weight matrix, we evaluated the result of the adjustment at each iteration of the bundle adjustment. The result of the adjustment is estimated by residuals and the covariance matrix of residuals. The covariance matrix of residuals can be calculated using Equation (17) [30].
$$C_{vv} = C_{LL} - B C_{\hat{p}\hat{p}} B^T \quad (17)$$
where
$$B = \begin{bmatrix} \dot{B} & \ddot{B} \\ I & 0 \\ 0 & I \end{bmatrix} \quad (18)$$
$$C_{\hat{p}\hat{p}} = \left(B^T C_{LL}^{-1} B\right)^{-1} \quad (19)$$
$C_{LL}$ is the covariance matrix before adjustment, $v$ is the residual vector, and the estimated new weight matrix ($\hat{C}_{LL}^{-1}$) for the next iteration can be calculated as follows:
$$\hat{C}_{LL}^{-1} = \frac{v^T C_{LL}^{-1} v}{trace\left(C_{vv} C_{LL}^{-1}\right)} \quad (20)$$
These calculations provide valuable information for assessing the accuracy and reliability of the estimated parameters and allow for iterative refinement in subsequent iterations of the rigorous bundle adjustment.
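As a rough illustration of the loop described in this section, the sketch below runs a Gauss-Newton adjustment over the stacked parameter vector of Equations (12)-(14) and re-weights at each iteration. It is deliberately simplified: it uses numerical Jacobians and a single scalar variance factor in place of the full covariance propagation of Equations (17)-(20), and it reuses the hypothetical `project()` helper from the previous sketch.

```python
import numpy as np

def bundle_adjust(obs, rpcs, tie_llh, iters=10, tol=1e-7):
    """Simplified sketch of the rigorous bundle adjustment loop.
    x = [6 affine parameters per image | Lat/Lon/Hgt per tie point];
    obs is a list of (img_idx, pt_idx, line, sample) measurements."""
    n_img = len(rpcs)
    x = np.concatenate([np.zeros(6 * n_img), np.asarray(tie_llh, float).ravel()])
    W = np.eye(2 * len(obs))                       # initial unit weights

    def residuals(x):
        r = np.zeros(2 * len(obs))
        for n, (j, k, line, samp) in enumerate(obs):
            a0, a_s, a_l, b0, b_s, b_l = x[6*j:6*j + 6]
            lat, lon, hgt = x[6*n_img + 3*k : 6*n_img + 3*k + 3]
            L, S = project(rpcs[j], lat, lon, hgt)  # forward RFM sketch above
            r[2*n]     = (a0 + a_s*samp + a_l*line + L) - line  # Eq. (8), line
            r[2*n + 1] = (b0 + b_s*samp + b_l*line + S) - samp  # Eq. (8), sample
        return r

    for _ in range(iters):
        r0 = residuals(x)
        B = np.zeros((r0.size, x.size))
        for i in range(x.size):                    # numerical Jacobian, per column
            xp = x.copy(); xp[i] += 1e-6
            B[:, i] = (residuals(xp) - r0) / 1e-6
        N = B.T @ W @ B + 1e-12 * np.eye(x.size)   # slightly damped normal matrix
        dx = np.linalg.solve(N, -B.T @ W @ r0)     # increment, cf. Eq. (16)
        x += dx
        dof = max(r0.size - x.size, 1)
        sigma2 = (r0 @ W @ r0) / dof               # a posteriori variance factor
        W = np.eye(r0.size) / max(sigma2, 1e-12)   # scalar re-weighting each iteration
        if np.max(np.abs(dx)) < tol:               # termination condition
            break
    return x
```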

2.4. Result Image Generation Based on Virtual DEM

After rigorous bundle adjustment, the adjustment parameters of the image correction and ground coordinates of the tie points were calculated for each image to achieve an optimal level of geometric positional accuracy. Relative geometric correction between images, such as image registration, generally uses the affine or homography transformation model to generate result images. However, because our method adjusts the ground coordinates of the tie point, which are projected from the satellite image, the result image is projected onto a complex 3D model rather than a simple plane.
To utilize the 3D model, it is necessary to define a 3D object space, a virtual Digital Elevation Model (vDEM), which will be the basis for the re-projection process. To generate the vDEM, we used the adjusted ground coordinates of the tie points. However, the extracted tie points are not dense enough to generate a vDEM representing a large-scale satellite image. Therefore, the vDEM is generated by estimating additional 3D coordinates between them.
Figure 5 shows the concept of generating the result image used in this study. First, the size and area of the result image must be set. This can be calculated by applying the initial RPC and the adjusted image correction parameters to four corner points of the original image. Next, 3D coordinates corresponding to each pixel of the result image are estimated by applying interpolation methods with sampled tie points. The horizontal coordinates were assigned at a regular sampling distance with reference to the GSD of the original image, and the vertical coordinates were estimated by applying an Inverse Distance Weighting (IDW) interpolation method [31]. The formula for IDW is as follows:
$$H = \frac{\sum_i w_i \cdot H_i}{\sum_i w_i} \quad (21)$$
where
$$w_i = \frac{1}{distance^{Power}} \quad (22)$$
$H$ represents the estimated height value of the target pixel; $H_i$ is the height value at a neighboring pixel; $distance$ is the distance between the target and the neighboring pixel; and $Power$ is a power parameter that controls the influence of the neighboring pixel on the interpolation.
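A minimal sketch of this IDW step, with illustrative names, might look as follows.

```python
import numpy as np

def idw_height(target_xy, sample_xy, sample_h, power=2.0, eps=1e-12):
    """IDW interpolation of Equations (21)-(22): the height of a target
    pixel is the weighted mean of neighboring tie-point heights, with
    weights 1 / distance**power."""
    d = np.linalg.norm(sample_xy - target_xy, axis=1)
    if np.any(d < eps):              # target coincides with a sample point
        return float(sample_h[np.argmin(d)])
    w = 1.0 / d**power
    return float(np.sum(w * sample_h) / np.sum(w))
```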
Each pixel of the result image is thus associated with 3D ground coordinates on the vDEM. The pixel coordinates in the original image are then obtained from these 3D ground coordinates by forward projection. Finally, the pixel value for each pixel of the result image is extracted from the original image.

3. Test Results

Our proposed method was tested on four datasets consisting of multiple KOMPSAT-3A satellite images. The proposed rigorous bundle adjustment is intended to improve the relative positional accuracy between satellite images. To validate its performance, we established check points for each dataset. Because the correction is relative, the check points were not based on actual ground coordinates; instead, corresponding points were visually identified in the satellite images. The extracted check points were used to calculate the re-projection error.
However, the RFM, which relates 2D image coordinates to 3D ground coordinates in a homogeneous system, can only describe a ray direction from the image coordinates. This means that, to compute projected ground coordinates from image coordinates, the height value must be defined in advance through the virtual terrain model; accurate 3D ground coordinates are then estimated using an iterative ray-tracing technique. Therefore, we manually extracted the same object locations from the overlapping areas between satellite images and calculated the re-projection errors, as shown in Figure 6.
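The sketch below illustrates how such a pairwise re-projection error could be computed: a check point in image A is projected to the ground at a fixed height by Newton iteration (a one-height-plane reduction of the ray-tracing idea in Figure 6) and then re-projected into image B. It reuses the hypothetical `project()` helper from Section 2.2; the function names are illustrative.

```python
import numpy as np

def image_to_ground(rpc, line, samp, hgt, iters=20):
    """Invert the forward RFM at a fixed height by Newton iteration,
    starting from the scene center given by the RPC offsets."""
    lat, lon = rpc["LAT_OFF"], rpc["LON_OFF"]
    for _ in range(iters):
        l0, s0 = project(rpc, lat, lon, hgt)
        # Numerical Jacobian of (line, sample) w.r.t. (lat, lon).
        d = 1e-6
        l1, s1 = project(rpc, lat + d, lon, hgt)
        l2, s2 = project(rpc, lat, lon + d, hgt)
        J = np.array([[(l1 - l0) / d, (l2 - l0) / d],
                      [(s1 - s0) / d, (s2 - s0) / d]])
        dlat, dlon = np.linalg.solve(J, np.array([line - l0, samp - s0]))
        lat, lon = lat + dlat, lon + dlon
    return lat, lon

def reprojection_error(rpc_a, rpc_b, pt_a, pt_b, hgt):
    """Project a check point from image A to the ground at height hgt,
    re-project it into image B, and compare with the measured point."""
    lat, lon = image_to_ground(rpc_a, pt_a[0], pt_a[1], hgt)
    line_b, samp_b = project(rpc_b, lat, lon, hgt)
    return float(np.hypot(line_b - pt_b[0], samp_b - pt_b[1]))
```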
As mentioned in Section 2.1, our proposed relative geometric correction method obtains a large number of tie points, since they provide the connectivity that links the satellite images together. Figure 7 shows the distribution of tie points and check points (black dots are tie points; red dots are check points). As shown in Figure 7, the tie points are evenly distributed, except in areas where tie points are difficult to extract, such as water and mountainous areas. The check points were also extracted throughout the overlap area and used for validation.
Table 2 shows the tie point extraction results and the initial relative positional errors for each dataset. As shown in Table 2, there are significant relative positional errors between satellite images that are not geometrically corrected for both the tie points and check points.

3.1. Result of Rigorous Bundle Adjustment

To show the effectiveness of the proposed method, the parameter values estimated by the rigorous bundle adjustment should converge stably. The bundle adjustment matrix grows significantly as the number of images and tie points increases, and the computational load escalates sharply with it. To ensure stable convergence, the inevitably repeated matrix operations should satisfy the termination conditions within as few iterations as possible. The degree of adjustment varies depending on the composition of the weight matrix. Our proposed method employs a rigorous technique of re-weighting at each iteration to drive the variation of the parameters close to zero. Table 3, Table 4, Table 5 and Table 6 show the incremental values of the parameters at each iteration when our rigorous bundle adjustment method is applied.
The threshold for the termination condition of the bundle adjustment was set differently for each parameter, considering their respective scales. These thresholds were empirically determined to be sufficiently low to ensure that pixel-level reprojection errors do not have a significant impact. Consequently, our rigorous bundle adjustment terminated after relatively few iterations for all datasets, with the adjustment parameters converging stably towards zero without divergence. Table 7 presents the final RPC correction parameter values for each adjusted image.
The RPC correction parameters ($a_0, a_s, a_l, b_0, b_s, b_l$) after bundle adjustment vary across the images, reflecting the specific corrections needed for each dataset. These parameters represent the adjustment necessary to account for the distortion characteristics of each image. The refined RPC correction parameters enhance the accuracy of the transformation between image coordinates and ground coordinates. To further validate their effectiveness, we calculated the reprojection errors using manually acquired check points.

3.2. Relative Positional Accuracies of the Check Points

To evaluate the accuracy of adjusted image correction parameters, we compared the relative positional errors of the manually acquired check points. The relative positional errors were calculated by reprojecting the check points onto the adjusted images using the adjusted correction parameters. The results are summarized in Table 8.
Our analysis showed that the adjusted images exhibited significantly reduced positional errors compared to the unadjusted images. The mean positional error decreased from 30.02 pixels to 2.21 pixels, indicating a more consistent and accurate alignment of the images. To further demonstrate the efficiency of our iterative bundle adjustment method, we include graphs showing the initial error and the re-projected model error after convergence (Figure 8, Figure 9, Figure 10 and Figure 11). These graphs clearly show the decrease in the reprojection error with each iteration, highlighting the convergence and stability of the adjustment process. The error decreases sharply in the initial iterations and stabilizes as it approaches the final minimal value, confirming the robustness of our approach.

3.3. Corrected Result Images

In this section, we present a visual comparison to illustrate the improvements achieved through our rigorous bundle adjustment process. Figure 12 shows an example of the positional transformation of key features by comparing the original unadjusted images with the adjusted images. This comparison highlights how our method effectively corrects positional inaccuracies.
Figure 13, Figure 14, Figure 15 and Figure 16 provide enlarged images of specific regions to emphasize the enhanced alignment of features such as building edges and road intersections. These detailed views further validate the quantitative improvements discussed in Section 3.2, showcasing the practical benefits of our bundle adjustment process. The reduction in positional errors and the enhanced alignment of features visually confirm the effectiveness of our bundle adjustment method. Together, these results provide a comprehensive validation of our approach, demonstrating its capability to significantly improve the accuracy and reliability of satellite image correction.
Finally, the result of the relative geometric correction was achieved by applying the proposed rigorous bundle adjustment method and using the corrected RFM with a virtual elevation model. The result images illustrate the images transformed using adjustment parameters estimated after implementing the proposed method. As shown in the result images, projecting the satellite images onto the ground with the unadjusted initial RFM reveals noticeable positioning errors in all datasets. To verify the proper correction of the initial error, the result image was enlarged for each dataset, and the continuity of the objects identified in the image was compared. The proposed bundle adjustment method successfully mitigated the positional errors of the initial RFM, leading to improved alignment and coherence of the objects across the images.

4. Conclusions

This study demonstrates the effectiveness of the proposed RFM-based rigorous bundle adjustment method in improving the relative geometric positioning accuracy of multiple high-resolution satellite images. By implementing iterative adjustment, our method successfully achieves convergence with a minimal number of iterations, ensuring efficiency and precision. The experimental results validate the robustness and reliability of the proposed approach. The iterative bundle adjustment process effectively reduced positional errors and stabilized adjustment parameters.
The quantitative results show that the model error averages around 1.22 pixels, indicating a high level of modeling accuracy, and that the check point error is reduced from 30.02 pixels to 2.21 pixels, a substantial improvement in the alignment of the satellite images. Additionally, the visual comparisons provided clear evidence of improved consistency and reduced discrepancies in the adjusted images, particularly in the alignment of key features such as building edges and road intersections. In summary, the proposed rigorous bundle adjustment method offers a comprehensive solution for enhancing the accuracy and reliability of satellite image correction. This method not only corrects positional inaccuracies but also improves the overall quality and usability of satellite imagery. Future research will consider the use of existing DEMs for further adjustment of the generated virtual DEMs to estimate absolute accuracy. This will include comparing the accuracy of the proposed method with absolute geometric correction methods. Moreover, the robustness of the approach under different convergence angles and its applicability in diverse topographic regions will be examined. Finally, future research will focus on refining this approach further and exploring its application to various types of satellite data to extend its benefits across different contexts and datasets.

Author Contributions

Conceptualization, T.K.; Methodology, S.B.; Validation, S.B.; Formal analysis, T.K.; Writing—original draft, S.B.; Writing—review & editing, T.K.; Supervision, T.K. All authors have read and agreed to the published version of the manuscript.

Funding

This work is supported by the Korea Agency for Infrastructure Technology Advancement (KAIA) grant funded by the Ministry of Land, Infrastructure and Transport (Grant RS-2022-00155763).

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Park, J.; Lee, D.; Lee, J.; Cheon, E.; Jeong, H. Study on Disaster Response Strategies Using Multi-Sensors Satellite Imagery. Korean J. Remote Sens. 2023, 39, 755–770.
2. Liu, Y.; Shen, C.; Chen, X.; Hong, Y.; Fan, Q.; Chan, P.; Lan, J. Satellite-Based Estimation of Roughness Length over Vegetated Surfaces and Its Utilization in WRF Simulations. Remote Sens. 2023, 15, 2686.
3. Afaq, Y.; Manocha, A. Analysis on Change Detection Techniques for Remote Sensing Applications: A Review. Ecol. Inform. 2021, 63, 101310.
4. Jiang, Y.H.; Zhang, G.; Chen, P.; Li, D.R.; Tang, X.M.; Huang, W.C. Systematic Error Compensation Based on a Rational Function Model for Ziyuan1-02C. IEEE Trans. Geosci. Remote Sens. 2015, 53, 3985–3995.
5. Yoon, W.; Park, H.; Kim, T. Feasibility Analysis of Precise Sensor Modelling for KOMPSAT-3A Imagery Using Unified Control Points. Korean J. Remote Sens. 2018, 34, 1089–1100.
6. Saleh, T.M.; Zahran, M.I.; Al-Shehaby, A.R.; Gomaa, M.S. Performance Enhancement of Rational Function Model (RFM) for Improved Geo-Position Accuracy of IKONOS Stereo Satellite Imagery. J. Geomat. 2018, 12, 1–12.
7. Tong, X.; Liu, S.; Weng, Q. Bias-Corrected Rational Polynomial Coefficients for High Accuracy Geo-Positioning of QuickBird Stereo Imagery. ISPRS J. Photogramm. Remote Sens. 2010, 65, 218–226.
8. Son, J.H.; Yoon, W.; Kim, T.; Rhee, S. Iterative Precision Geometric Correction for High-Resolution Satellite Images. Korean J. Remote Sens. 2021, 37, 431–447.
9. Deng, M.; Zhang, G.; Cai, C.; Xu, K.; Zhao, R.; Guo, F.; Suo, J. Improvement and Assessment of the Absolute Positioning Accuracy of Chinese High-Resolution SAR Satellites. Remote Sens. 2019, 11, 1465.
10. Cabo, C.; Sanz-Ablanedo, E.; Roca-Pardiñas, J.; Ordóñez, C. Influence of the Number and Spatial Distribution of Ground Control Points in the Accuracy of UAV-SfM DEMs: An Approach Based on Generalized Additive Models. IEEE Trans. Geosci. Remote Sens. 2021, 59, 10618–10627.
11. Ma, Z.; Wu, X.; Yan, L.; Xu, Z. Geometric Positioning for Satellite Imagery without Ground Control Points by Exploiting Repeated Observation. Sensors 2017, 17, 240.
12. Sánchez, M.; Cuartero, A.; Barrena, M.; Plaza, A. A New Method for Positional Accuracy Analysis in Georeferenced Satellite Images without Independent Ground Control Points. Remote Sens. 2020, 12, 4132.
13. Yang, B.; Pi, Y.; Li, X.; Wang, M. Relative Geometric Refinement of Patch Images without Use of Ground Control Points for the Geostationary Optical Satellite GaoFen4. IEEE Trans. Geosci. Remote Sens. 2017, 56, 474–484.
14. Sommervold, O.; Gazzea, M.; Arghandeh, R. A Survey on SAR and Optical Satellite Image Registration. Remote Sens. 2023, 15, 850.
15. Hou, X.; Gao, Q.; Wang, R.; Luo, X. Satellite-Borne Optical Remote Sensing Image Registration Based on Point Features. Sensors 2021, 21, 2695.
16. Velesaca, H.O.; Bastidas, G.; Rouhani, M.; Sappa, A.D. Multimodal Image Registration Techniques: A Comprehensive Survey. Multimed. Tools Appl. 2024, 83, 63919–63947.
17. Yang, B.; Wang, M.; Xu, W.; Li, D.; Gong, J.; Pi, Y. Large-Scale Block Adjustment without Use of Ground Control Points Based on the Compensation of Geometric Calibration for ZY-3 Images. ISPRS J. Photogramm. Remote Sens. 2017, 134, 1–14.
18. Fu, Q.; Liu, S.; Tong, X.; Wang, H. Block Adjustment of Large-Scale High-Resolution Optical Satellite Imagery without GCPs Based on the GPU. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2019, 42, 91–94.
19. Fu, Q.; Tong, X.; Liu, S.; Ye, Z.; Jin, Y.; Wang, H.; Hong, Z. GPU-Accelerated PCG Method for the Block Adjustment of Large-Scale High-Resolution Optical Satellite Imagery without GCPs. Photogramm. Eng. Remote Sens. 2023, 89, 211–220.
20. Zitova, B.; Flusser, J. Image Registration Methods: A Survey. Image Vis. Comput. 2003, 21, 977–1000.
21. Chen, Y.; Jiang, J. A Two-Stage Deep Learning Registration Method for Remote Sensing Images Based on Sub-Image Matching. Remote Sens. 2021, 13, 3443.
22. Marí, R.; de Franchis, C.; Meinhardt-Llopis, E.; Anger, J.; Facciolo, G. A Generic Bundle Adjustment Methodology for Indirect RPC Model Refinement of Satellite Imagery. Image Process. Line 2021, 11, 344–373.
23. Agarwal, S.; Snavely, N.; Seitz, S.M.; Szeliski, R. Bundle Adjustment in the Large. In Proceedings of the 11th European Conference on Computer Vision, Heraklion, Greece, 5–11 September 2010.
24. Zareei, S.; Kelbe, D.; Sirguey, P.; Mills, S.; Eyers, D.M. Virtual Ground Control for Survey-Grade Terrain Modelling from Satellite Imagery. In Proceedings of the 36th International Conference on Image and Vision Computing New Zealand (IVCNZ), Wellington, New Zealand, 1–6 December 2021.
25. Yang, B.; Wang, M.; Pi, Y. Block-Adjustment without GCPs for Large-Scale Regions Only Based on the Virtual Control Points. Acta Geod. Cartogr. Sin. 2017, 46, 874.
26. Pi, Y.; Yang, B.; Li, X.; Wang, M. Robust Correction of Relative Geometric Errors among GaoFen-7 Regional Stereo Images Based on Posteriori Compensation. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2022, 15, 3224–3234.
27. Lowe, D.G. Distinctive Image Features from Scale-Invariant Keypoints. Int. J. Comput. Vis. 2004, 60, 91–110.
28. Fischler, M.A.; Bolles, R.C. Random Sample Consensus: A Paradigm for Model Fitting with Applications to Image Analysis and Automated Cartography. Commun. ACM 1981, 24, 381–395.
29. Grodecki, J.; Dial, G. Block Adjustment of High-Resolution Satellite Images Described by Rational Polynomials. Photogramm. Eng. Remote Sens. 2003, 69, 59–68.
30. McGlone, J.C.; Mikhail, E.; Bethel, J. Manual of Photogrammetry; American Society for Photogrammetry and Remote Sensing: Bethesda, MD, USA, 2004.
31. Bartier, P.M.; Keller, C.P. Multivariate Interpolation to Incorporate Thematic Surface Data Using Inverse Distance Weighting (IDW). Comput. Geosci. 1996, 22, 795–799.
Figure 1. Test area and enlarged images.
Figure 2. Tie point extraction algorithm in proposed method.
Figure 3. Visual representation of RFM with correction factors.
Figure 4. Bundle adjustment concept of multiple satellite images.
Figure 5. Method for generating result images with vDEM as a virtual projection model. The colored dots represent sampling points after ground coordinates adjustment.
Figure 6. Height value estimation based on ray tracing.
Figure 7. Extraction result of tie points (black dots) and check points (red dots).
Figure 8. Scatter plot for tie points used for bundle adjustment of Dataset A.
Figure 9. Scatter plot for tie points used for bundle adjustment of Dataset B.
Figure 10. Scatter plot for tie points used for bundle adjustment of Dataset C.
Figure 11. Scatter plot for tie points used for bundle adjustment of Dataset D.
Figure 12. Concept of transform of modeling points and result image. The red dots represent initial tie points, while the blue dots represent tie points after adjustment.
Figure 13. Enlarged result images with our proposed method (Dataset A).
Figure 14. Enlarged result images with our proposed method (Dataset B).
Figure 15. Enlarged result images with our proposed method (Dataset C).
Figure 16. Enlarged result images with our proposed method (Dataset D).
Table 1. Properties of the datasets used in the study. Bisector elevation and convergence angles are per-dataset averages.

| Dataset | Image | Date of Acquisition | Image Center Latitude | Image Center Longitude | Column GSD | Row GSD | Average Bisector Elevation Angle | Average Convergence Angle |
|---|---|---|---|---|---|---|---|---|
| A | 1 | 25 September 2017 | 37.66915019° | 126.69710367° | 0.636 m | 0.738 m | 58.63° | 3.48° |
| A | 2 | 30 October 2017 | 37.67087524° | 126.70190406° | 0.648 m | 0.722 m | | |
| B | 1 | 19 January 2018 | 37.44616080° | 126.67440106° | 0.622 m | 0.692 m | 62.21° | 22.05° |
| B | 2 | 27 January 2018 | 37.47981407° | 126.66328898° | 0.662 m | 0.703 m | | |
| C | 1 | 15 April 2016 | 34.52600627° | 127.22193512° | 0.542 m | 0.539 m | 80.29° | 16.37° |
| C | 2 | 19 August 2016 | 34.49270726° | 127.27720400° | 0.557 m | 0.568 m | | |
| C | 3 | 31 December 2016 | 34.53265698° | 127.23364734° | 0.613 m | 0.582 m | | |
| D | 1 | 8 January 2016 | 37.48699304° | 126.98696326° | 0.678 m | 0.779 m | 62.62° | 19.98° |
| D | 2 | 15 February 2017 | 37.51165646° | 126.94987635° | 0.702 m | 0.672 m | | |
| D | 3 | 23 February 2017 | 37.51529077° | 126.94700648° | 0.578 m | 0.620 m | | |
| D | 4 | 24 February 2017 | 37.46389359° | 126.96478021° | 0.652 m | 0.609 m | | |
Table 2. Tie point extraction result.

| Dataset | Number of Images | Number of Pairs | Total Number of Feature Points | Total Number of Tie Points (After RANSAC) | Average Model Error (Initial) | Average Check Error (Initial) |
|---|---|---|---|---|---|---|
| A | 2 | 1 | 13,773 | 9669 | 21.29 pixels | 20.57 pixels |
| B | 2 | 1 | 13,878 | 4183 | 4.43 pixels | 4.08 pixels |
| C | 3 | 3 | 32,728 | 2956 | 30.71 pixels | 30.01 pixels |
| D | 4 | 6 | 83,760 | 18,796 | 37.66 pixels | 38.05 pixels |
Table 3. Bundle adjustment result of Dataset A (average absolute increments per iteration).

| Iteration Count | $\Delta a_0$ | $\Delta a_s$ | $\Delta a_l$ | $\Delta b_0$ | $\Delta b_s$ | $\Delta b_l$ | $\Delta Lat$ | $\Delta Lon$ | $\Delta Hgt$ |
|---|---|---|---|---|---|---|---|---|---|
| 1 | 11.13112 | 5.82 × 10⁻⁵ | 0.00331 | 7.16572 | 4.03 × 10⁻⁵ | 0.00085 | 7.54 × 10⁻⁵ | 4.80 × 10⁻⁶ | 14.26810 |
| 2 | 0.06689 | 4.20 × 10⁻⁷ | 2.93 × 10⁻⁵ | 0.00955 | 1.28 × 10⁻⁶ | 6.01 × 10⁻⁶ | 7.61 × 10⁻⁵ | 5.08 × 10⁻⁷ | 1.37377 |
| 3 | 0.00038 | - | - | 1.73 × 10⁻⁵ | - | - | 1.47 × 10⁻⁷ | 9.03 × 10⁻⁹ | 0.02504 |
| 4 | - | - | - | - | - | - | 1.06 × 10⁻⁷ | 7.88 × 10⁻⁹ | 3.94 × 10⁻⁵ |
| 5 | - | - | - | - | - | - | 5.25 × 10⁻⁸ | 4.13 × 10⁻⁹ | 1.03 × 10⁻⁷ |
| 6 | - | - | - | - | - | - | - | - | - |
Table 4. Bundle adjustment result of Dataset B (average absolute increments per iteration).

| Iteration Count | $\Delta a_0$ | $\Delta a_s$ | $\Delta a_l$ | $\Delta b_0$ | $\Delta b_s$ | $\Delta b_l$ | $\Delta Lat$ | $\Delta Lon$ | $\Delta Hgt$ |
|---|---|---|---|---|---|---|---|---|---|
| 1 | 2.41720 | 3.57 × 10⁻⁵ | 0.00035 | 4.88137 | 0.00053 | 0.00058 | 4.80 × 10⁻⁵ | 3.80 × 10⁻⁶ | 10.41926 |
| 2 | 0.03722 | 3.32 × 10⁻⁶ | 1.25 × 10⁻⁵ | 0.01179 | 9.95 × 10⁻⁷ | 3.54 × 10⁻⁶ | 5.74 × 10⁻⁶ | 5.49 × 10⁻⁷ | 1.02884 |
| 3 | 0.00015 | - | 1.15 × 10⁻⁷ | 2.94 × 10⁻⁵ | - | - | 2.38 × 10⁻⁷ | 2.58 × 10⁻⁸ | 0.02372 |
| 4 | - | - | - | - | - | - | 1.82 × 10⁻⁷ | 2.46 × 10⁻⁸ | 0.00017 |
| 5 | - | - | - | - | - | - | 6.85 × 10⁻⁸ | 1.07 × 10⁻⁸ | 2.06 × 10⁻⁶ |
| 6 | - | - | - | - | - | - | - | - | - |
Table 5. Bundle adjustment result of Dataset C (average absolute increments per iteration).

| Iteration Count | $\Delta a_0$ | $\Delta a_s$ | $\Delta a_l$ | $\Delta b_0$ | $\Delta b_s$ | $\Delta b_l$ | $\Delta Lat$ | $\Delta Lon$ | $\Delta Hgt$ |
|---|---|---|---|---|---|---|---|---|---|
| 1 | 14.71461 | 0.00072 | 0.00044 | 12.29137 | 0.00117 | 0.00047 | 2.02 × 10⁻⁴ | 1.71 × 10⁻⁴ | 30.48027 |
| 2 | 0.23150 | 0.00014 | 9.31 × 10⁻⁵ | 0.19389 | 0.00010 | 7.45 × 10⁻⁵ | 2.94 × 10⁻⁶ | 2.01 × 10⁻⁶ | 1.49510 |
| 3 | 0.00076 | 4.70 × 10⁻⁷ | 2.43 × 10⁻⁷ | 0.00069 | 3.30 × 10⁻⁷ | 2.50 × 10⁻⁷ | 6.13 × 10⁻⁹ | 3.31 × 10⁻⁹ | 0.00389 |
| 4 | - | - | 1.00 × 10⁻⁷ | - | - | - | 8.12 × 10⁻⁹ | - | - |
| 5 | - | - | - | - | - | - | - | - | - |
Table 6. Bundle adjustment result of Dataset D (average absolute increments per iteration).

| Iteration Count | $\Delta a_0$ | $\Delta a_s$ | $\Delta a_l$ | $\Delta b_0$ | $\Delta b_s$ | $\Delta b_l$ | $\Delta Lat$ | $\Delta Lon$ | $\Delta Hgt$ |
|---|---|---|---|---|---|---|---|---|---|
| 1 | 9.40009 | 0.00082 | 0.00105 | 14.61643 | 0.00105 | 0.00421 | 2.06 × 10⁻⁴ | 2.27 × 10⁻⁴ | 27.31976 |
| 2 | 0.70476 | 1.88 × 10⁻⁵ | 0.00030 | 0.24589 | 7.9 × 10⁻⁵ | 5.01 × 10⁻⁵ | 5.53 × 10⁻⁶ | 5.08 × 10⁻⁶ | 0.92514 |
| 3 | 0.00104 | 2.75 × 10⁻⁸ | 4.35 × 10⁻⁷ | 0.00033 | 1.15 × 10⁻⁷ | - | 8.34 × 10⁻⁹ | 7.10 × 10⁻⁹ | 0.00111 |
| 4 | - | - | 1.45 × 10⁻⁷ | - | - | - | 5.32 × 10⁻⁹ | 4.12 × 10⁻⁹ | - |
| 5 | - | - | - | - | - | - | - | - | - |
Table 7. Image RPC correction parameters after rigorous bundle adjustment.

| Dataset | Image | $a_0$ | $a_s$ | $a_l$ | $b_0$ | $b_s$ | $b_l$ |
|---|---|---|---|---|---|---|---|
| A | 1 | 9.0938 | 7.85 × 10⁻⁵ | −0.0024 | −6.7058 | 6.10 × 10⁻⁵ | −0.0011 |
| A | 2 | −13.1718 | −3.74 × 10⁻⁵ | 0.0042 | 7.6217 | −1.99 × 10⁻⁵ | 0.0006 |
| B | 1 | 1.7469 | −5.20 × 10⁻⁷ | −0.0002 | 6.3413 | −0.0006 | −0.0006 |
| B | 2 | −3.0970 | 6.87 × 10⁻⁵ | 0.0005 | −3.3978 | 0.0005 | 0.0005 |
| C | 1 | −17.3310 | 0.0009 | 0.0002 | −17.9893 | 0.0016 | −0.0009 |
| C | 2 | −4.4031 | 0.0008 | 8.50 × 10⁻⁵ | 7.3781 | 0.0013 | −0.0005 |
| C | 3 | 23.1065 | −0.0008 | 0.0011 | 11.7404 | 0.0005 | −0.0001 |
| D | 1 | 10.8340 | −0.0005 | 0.0012 | −30.8398 | 0.0011 | 0.0039 |
| D | 2 | 2.7666 | −0.0010 | 0.0021 | −8.5713 | 0.0014 | 0.0086 |
| D | 3 | −8.4264 | −0.0008 | 0.0001 | 4.1117 | 0.0007 | −0.0029 |
| D | 4 | −15.2881 | −0.0010 | 0.0016 | −15.8881 | 0.0012 | 0.0017 |
Table 8. Reprojection error by pair before and after bundle adjustment.

| Dataset | Image Pair | Model Error (Before) | Check Error (Before) | Model Error (After) | Check Error (After) |
|---|---|---|---|---|---|
| A | 1 and 2 | 21.29 pixels | 20.57 pixels | 1.02 pixels | 2.14 pixels |
| B | 1 and 2 | 4.43 pixels | 4.08 pixels | 0.67 pixels | 1.75 pixels |
| C | 1 and 2 | 25.81 pixels | 26.26 pixels | 1.75 pixels | 2.10 pixels |
| C | 1 and 3 | 41.3 pixels | 40.10 pixels | 1.27 pixels | 2.05 pixels |
| C | 2 and 3 | 26.01 pixels | 25.73 pixels | 1.71 pixels | 2.57 pixels |
| D | 1 and 2 | 50.58 pixels | 49.25 pixels | 1.40 pixels | 2.43 pixels |
| D | 1 and 3 | 34.12 pixels | 33.68 pixels | 1.27 pixels | 2.00 pixels |
| D | 1 and 4 | 44.86 pixels | 45.32 pixels | 1.36 pixels | 2.93 pixels |
| D | 2 and 3 | 28.70 pixels | 24.40 pixels | 1.08 pixels | 2.19 pixels |
| D | 2 and 4 | 44.12 pixels | 44.02 pixels | 0.80 pixels | 2.31 pixels |
| D | 3 and 4 | 16.87 pixels | 16.80 pixels | 0.99 pixels | 1.89 pixels |
