Technical Note

A New Combined Adjustment Model for Geolocation Accuracy Improvement of Multiple Sources Optical and SAR Imagery

1 Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing 100094, China
2 Key Laboratory of Technology in Geo-Spatial Information Processing and Application Systems, Institute of Electronics, Chinese Academy of Sciences, Beijing 100190, China
3 School of Electronic, Electrical and Communication Engineering, University of Chinese Academy of Sciences, Beijing 101408, China
* Author to whom correspondence should be addressed.
Remote Sens. 2021, 13(3), 491; https://doi.org/10.3390/rs13030491
Submission received: 20 December 2020 / Revised: 19 January 2021 / Accepted: 25 January 2021 / Published: 30 January 2021

Abstract

Large volumes of Earth observation data obtained from different platforms have been widely used in various fields, and geometric calibration is a fundamental step for these applications. Traditional calibration methods are developed based on the rational function model (RFM), which is produced by image vendors as a substitution for the rigorous sensor model (RSM). Generally, the fitting accuracy of the RFM is well within 1 pixel, but it degrades to several pixels in mountainous areas, especially for Synthetic Aperture Radar (SAR) imagery. Therefore, this paper proposes a new combined adjustment model for geolocation accuracy improvement of multiple-source satellite SAR and optical imagery. Tie points are extracted with a robust image matching algorithm, and the relationship between the parameters of the Range-Doppler (RD) model and the RFM is established by transforming them into the same geodetic coordinate system. In addition, a heterogeneous weight strategy is designed for better convergence. Experimental results indicate that the proposed model achieves a geolocation accuracy of approximately 2.60 pixels in the X direction and 3.50 pixels in the Y direction. Compared with traditional methods based on the RFM, the proposed model provides a new way to use multiple-source remote sensing data synergistically.

Graphical Abstract

1. Introduction

With the development of satellite imaging technology, it is increasingly common to obtain repeated observations of the same object from multiple sources within a short time, which provides abundant imagery widely used in many fields, such as 3D reconstruction [1], change detection [2] and semantic classification [3]. Nowadays, the application of multiple-source airborne and spaceborne remote sensing imagery is increasingly popular in archaeology and cultural heritage as a supplement to traditional methods [4], since it provides sufficient texture information. Terrestrial results obtained by laser scanning suffer from high cost and missing data, whereas combining them with photogrammetry provides an affordable and practical approach for the production of 3D models. Compared with spaceborne imagery, airborne remote sensing images are widely applied due to their high resolution, which provides enough detail of buildings. In 2014, Xu et al. proposed a methodology integrating laser scanning and image-based 3D reconstruction techniques for the production of 3D models [5]. Meyer et al. investigated an optimized Unmanned Aerial Vehicle (UAV) system for the reconstruction of large-scale cultural heritage sites [6]. A digital 3D model of Asinou Church in Cyprus was obtained using a consumer-level DJI platform equipped with a GoPro camera, and a 3D printer was used to create a physical model of the church [7]. Moreover, multispectral and hyperspectral remote sensing data also contribute greatly to the classification of material features of cultural heritage [8]. During the production of 3D models, all these datasets have to be rectified due to their low geometric performance [4]. Some open-source tools, including structure from motion (SfM) [9] and dense multi-view 3D reconstruction (DMVR) [10], are widely applied.
However, most of them are designed for optical images, whereas the integration of SAR data is not well-considered for photogrammetric applications.
Generally, the geolocation accuracy of satellite images varies greatly according to the satellite platform and imaging principle. Hence, the geometric processing of remote sensing images is a fundamental step for further photogrammetric applications. Traditional methods for obtaining geometrically calibrated satellite images are developed based on the rigorous sensor model (RSM) [11]. Usually, the establishment of the RSM requires orbit, attitude and other information of the on-orbit satellite platform. Therefore, the formulation of the RSM can be rather complicated.
As a replacement, generic models that fit the RSM have been proposed, such as the direct linear transformation (DLT) model [12] and the rational function model (RFM) [13]. These generic models use polynomial functions to relate image-space and object-space coordinates, and all of them are independent of the distinct characteristics of satellite sensors. The RFM in particular is widely used in the photogrammetric processing of remote sensing images due to its simplicity of implementation and standardization [14]. Generally, the RFM is developed based on third-order polynomials, and its parameters are known as rational polynomial coefficients (RPCs). Experiments have verified the feasibility and efficiency of the RFM with various optical remote sensing datasets [15,16,17,18]. In 2010, Teo et al. compared three block adjustment models based on SPOT images [19]. Experimental results showed that the geometric performance of the three models is similar, and with the help of ground control points (GCPs) all of them can significantly improve geolocation accuracy. Choi et al. investigated the 3D performance of vendor-provided RPCs using two stereo pairs of high-resolution GeoEye-1 and WorldView-2 images. The results indicated that the performance of the RFM and the RSM is nearly the same, and the plane accuracy without any GCPs can reach about 2.3 m [20].
Unlike traditional optical satellite images, images obtained from Synthetic Aperture Radar (SAR) sensors provide valuable information at all times and in all weather [21]. Benefitting from these characteristics, SAR technology has been greatly improved. World-class SAR sensors, such as TerraSAR-X [22], ALOS [23] and COSMO-SkyMed [24], can provide images with a geolocation accuracy of better than 10 m; single look complex (SLC) images from the TerraSAR-X platform can even reach the decimeter level. In contrast, the performance of Chinese GF-3 SAR imagery is relatively poor, with a plane accuracy of approximately 40 m [25,26]. Therefore, investigations have been conducted to improve the geolocation accuracy of GF-3 SAR images [27,28,29,30], most of them based on the RPCs provided by image vendors. However, for most world-class SAR sensors, coefficients of the RD model are supplied instead of the RFM, and the production of RPCs by users inevitably introduces extra fitting errors. Generally, RPCs can be approximated with the terrain-independent method [31], in which a set of virtual GCPs is arranged in a grid format on planes located at different heights. Usually, the fitting accuracy of approximated RPCs is better than 5% of a pixel, whereas errors increase to the pixel level in areas with undulating terrain [32].
Compared with traditional satellites, the geometric performance of imagery obtained from small satellites, a new kind of satellite with small volume and quick response, is unstable and usually poor due to low measurement accuracy [15,33,34,35]. Traditional methods use existing reference data (such as GCPs, digital orthophoto maps (DOMs) or LiDAR data) to improve their geolocation accuracy [36,37,38,39]. In practice, the collection of such reference data requires considerable financial and human resources. Therefore, combined adjustment methods designed for multiple-source satellite imagery have been investigated. In 2015, Jeong et al. investigated the performance of images from IKONOS, QuickBird and KOMPSAT-2 [40], and redundant observations were used for geolocation accuracy improvement of multiple-source satellite images [41]. In contrast, the integration of optical and SAR images has seldom been investigated. Furthermore, most previous studies are based on the RFM, which is not suitable for the geometric processing of multiple-source optical and SAR images in most cases.
In this paper, we propose a new and generic combined adjustment model designed for optical and SAR satellite images. When aerial remote sensing images are considered, coefficients of the RFM for the optical imagery should first be produced by users before applying the proposed model. By introducing the relationship between coordinates defined in the geodetic coordinate system and the Cartesian coordinate system, parameters of the RD model are transformed into the same system as the RFM. The normal equations of the combined adjustment model are then developed based on an image-space compensation model, and a heterogeneous weight strategy is introduced for better convergence. With the help of a modified least-squares method, the ill-conditioned problem can be solved efficiently.
The remainder of this paper is organized as follows: the basic principles of the RFM-based combined adjustment model are introduced in Section 2. The proposed combined adjustment model and the determination of the heterogeneous weight strategy are presented in Section 3. In Section 4, experimental results using multiple optical and SAR images covering the Mount Song area verify the efficiency of the proposed method. Conclusions and discussions are drawn in Section 5.

2. Basic Principle of the RFM

Generally, the RSM is composed of various types of on-orbit information of the satellite platform, which leads to a complicated form. Therefore, the RFM has been proposed as a substitution for the RSM. The relationship between image-space and object-space coordinates is described by ratios of polynomials:
$$
x = \frac{Num_S(P,L,H)}{Den_S(P,L,H)} = \frac{\begin{bmatrix}1 & P & L & H & \cdots\end{bmatrix}\begin{bmatrix}a_0 & a_1 & a_2 & a_3 & \cdots & a_{19}\end{bmatrix}^{T}}{\begin{bmatrix}1 & P & L & H & \cdots\end{bmatrix}\begin{bmatrix}b_0 & b_1 & b_2 & b_3 & \cdots & b_{19}\end{bmatrix}^{T}}
$$
$$
y = \frac{Num_L(P,L,H)}{Den_L(P,L,H)} = \frac{\begin{bmatrix}1 & P & L & H & \cdots\end{bmatrix}\begin{bmatrix}c_0 & c_1 & c_2 & c_3 & \cdots & c_{19}\end{bmatrix}^{T}}{\begin{bmatrix}1 & P & L & H & \cdots\end{bmatrix}\begin{bmatrix}d_0 & d_1 & d_2 & d_3 & \cdots & d_{19}\end{bmatrix}^{T}},
$$
where $(x, y)$ are the normalized image-space coordinates and $(P, L, H)$ denote the normalized latitude, longitude and height in object space. $Num_S$, $Den_S$, $Num_L$ and $Den_L$ are third-order polynomials consisting of 80 coefficients in total, denoted $a_i$, $b_i$, $c_i$ and $d_i$ ($i = 0, 1, 2, \ldots, 19$).
The normalized coordinates are related to the original image and geodetic coordinates according to Equation (2):
$$
\begin{aligned}
s &= x \cdot SAMP\_SCALE + SAMP\_OFF \\
l &= y \cdot LINE\_SCALE + LINE\_OFF \\
\phi &= P \cdot LAT\_SCALE + LAT\_OFF \\
\lambda &= L \cdot LONG\_SCALE + LONG\_OFF \\
h &= H \cdot HEIGHT\_SCALE + HEIGHT\_OFF,
\end{aligned}
$$
where $(\phi, \lambda, h)$ are the geodetic latitude, longitude and height obtained with the corresponding offset and scale factors, and $(s, l)$ are the image sample and line coordinates in pixels, with pixel (0, 0) at the top-left corner of the image.
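As a concrete illustration of Equations (1) and (2), the following Python sketch projects a geodetic point to image coordinates with a set of RPCs. The 20-term ordering of the polynomial basis follows the common RPC00B convention, which is an assumption here; vendors may order coefficients differently.

```python
import numpy as np

def rpc_project(lat, lon, h, num_s, den_s, num_l, den_l, off_scale):
    """Project geodetic coordinates to image coordinates via the RFM.
    num_s/den_s/num_l/den_l: 20 RPCs each (the a_i, b_i, c_i, d_i of Eq. (1))."""
    # Normalize object-space coordinates with the vendor offset/scale factors
    P = (lat - off_scale["LAT_OFF"]) / off_scale["LAT_SCALE"]
    L = (lon - off_scale["LONG_OFF"]) / off_scale["LONG_SCALE"]
    H = (h - off_scale["HEIGHT_OFF"]) / off_scale["HEIGHT_SCALE"]

    def poly20(c, P, L, H):
        # Third-order polynomial basis, RPC00B term order (assumed)
        t = np.array([1, L, P, H, L*P, L*H, P*H, L*L, P*P, H*H,
                      P*L*H, L**3, L*P*P, L*H*H, L*L*P, P**3,
                      P*H*H, L*L*H, P*P*H, H**3])
        return np.dot(c, t)

    x = poly20(num_s, P, L, H) / poly20(den_s, P, L, H)
    y = poly20(num_l, P, L, H) / poly20(den_l, P, L, H)
    # Denormalize to sample/line pixel coordinates, Eq. (2)
    s = x * off_scale["SAMP_SCALE"] + off_scale["SAMP_OFF"]
    l = y * off_scale["LINE_SCALE"] + off_scale["LINE_OFF"]
    return s, l
```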
$$
\begin{aligned}
F_r &= a_0 + a_1 \cdot s + a_2 \cdot l - r \\
F_c &= b_0 + b_1 \cdot s + b_2 \cdot l - c.
\end{aligned}
$$
The RFM fits the RSM well in most cases. However, geolocation errors still exist due to the low measurement accuracy of on-orbit sensors. A commonly used model for systematic error compensation is the affine transformation model. In Equation (3), r and c are the extracted image-space coordinates, and $a_0$, $a_1$, $a_2$, $b_0$, $b_1$ and $b_2$ are the affine transformation coefficients. Usually, systematic errors can be largely eliminated with a translation model using only $a_0$ and $b_0$. Hence, the error equations based on the RFM can be simplified and described as follows [42]:
$$
V = A \cdot X_A + B \cdot X_B - l, \quad P,
$$
where V is the residual vector; A and B are the design matrices containing the partial derivatives with respect to the unknowns; $X_A$ and $X_B$ are the correction vectors of the affine transformation parameters and the object-space coordinates, respectively; l is the vector of residual errors; and P denotes the weight matrix.
The normal equations can be established from Equation (4), according to the principle of least-squares adjustment:
$$
\begin{bmatrix} A^{T}PA & A^{T}PB \\ B^{T}PA & B^{T}PB \end{bmatrix}
\begin{bmatrix} X_A \\ X_B \end{bmatrix}
=
\begin{bmatrix} A^{T}Pl \\ B^{T}Pl \end{bmatrix}.
$$
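The solve step of Equation (5) can be sketched as follows; the tiny ridge term added to the normal matrix stands in for the "modified least-squares method" mentioned in the text and is purely illustrative.

```python
import numpy as np

def solve_block_adjustment(A, B, l, P):
    """Solve the normal equations of Eq. (5) for the corrections X_A (affine
    parameters) and X_B (object-space coordinates) in one joint step."""
    M = np.hstack([A, B])          # combined design matrix [A | B]
    N = M.T @ P @ M                # normal matrix
    rhs = M.T @ P @ l
    # A small ridge term guards against an ill-conditioned normal matrix
    # (illustrative stand-in for a modified least-squares scheme)
    X = np.linalg.solve(N + 1e-12 * np.eye(N.shape[0]), rhs)
    n_a = A.shape[1]
    return X[:n_a], X[n_a:]        # X_A, X_B
```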

3. Methodology

3.1. Overview

Generally, optical remote sensing imagery can be processed based on the RFM due to its simplicity and standardization. In contrast, the geometric processing of SAR imagery is usually conducted with the RD model due to the lack of RPCs. In this paper, a new combined adjustment model that aggregates the RD model and the RFM is proposed for the geometric calibration of multiple-source optical and SAR imagery. The workflow is shown in Figure 1. Firstly, the sensor orientation of the SAR images is processed for systematic error compensation, which can be considered the coarse-calibration stage. Secondly, conjugate tie points are extracted using the feature-based OS-SIFT method, which is more robust and efficient for remote sensing images than matching methods from computer vision [43]. Subsequently, the proposed combined adjustment model consisting of the RD model and the RFM is applied in the fine-calibration stage. The geolocation accuracy of the calibrated SAR and optical images is thereby improved significantly, which facilitates further photogrammetric applications.

3.2. Sensor Orientation of SAR Imagery

The RD model is usually considered as the rigorous sensor model for SAR images, which is composed of three equations: the Range equation, the Doppler equation and the Earth model equation as follows:
$$
\begin{aligned}
R &= \left| \boldsymbol{R}_S - \boldsymbol{R}_T \right| \\
f_{Dc} &= \frac{2}{\lambda \left| \boldsymbol{R}_S - \boldsymbol{R}_T \right|} \left( \boldsymbol{V}_S - \boldsymbol{V}_T \right) \cdot \left( \boldsymbol{R}_S - \boldsymbol{R}_T \right) \\
&\frac{X^2 + Y^2}{(R_e + h)^2} + \frac{Z^2}{(R_p + h)^2} = 1,
\end{aligned}
$$
where R is the measured slant range between the target and the sensor; $\boldsymbol{R}_T$ and $\boldsymbol{R}_S$ are the position vectors of the target and the SAR sensor, respectively; $\boldsymbol{V}_T$ and $\boldsymbol{V}_S$ are the corresponding velocity vectors; $f_{Dc}$ denotes the Doppler centroid frequency; (X, Y, Z) denotes the target position; h is the target height relative to the surface of the Earth; and $R_e$ and $R_p$ are the equatorial and polar radii of the Earth [32].
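A minimal sketch of how the range and Doppler conditions of Equation (6) can be evaluated for a candidate ground point is given below; the Doppler sign convention and the zero target velocity (stationary target) are assumptions.

```python
import numpy as np

def rd_residuals(target_xyz, sensor_pos, sensor_vel, slant_range, f_dc, wavelength):
    """Range and Doppler residuals of the RD model (Eq. (6)) for a candidate
    target position; both residuals vanish when the target satisfies the model.
    A stationary target (V_T = 0) is assumed."""
    dr = sensor_pos - target_xyz                 # line-of-sight vector R_S - R_T
    r = np.linalg.norm(dr)
    range_res = r - slant_range                  # range equation residual
    # Doppler residual: measured centroid minus geometry-implied centroid
    doppler_res = f_dc - (2.0 / (wavelength * r)) * np.dot(sensor_vel, dr)
    return range_res, doppler_res
```

In a full geolocation routine these residuals would be driven to zero (e.g. by Newton iteration) together with the Earth model constraint.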
According to Equation (6), the geolocation accuracy of SAR images is mainly influenced by the slant range measurement error, the azimuth time error, the Doppler centroid frequency error, the ephemeris error of the satellite platform and the topographic error [30]. Therefore, the calibration of SAR images can be complicated due to these different error sources. The slant range measurement error and the azimuth time error play the most important roles among these factors, leading to geolocation errors in the X and Y directions, respectively. In particular, the slant range measurement error is associated with different combinations of bandwidth and pulse width of the SAR sensor, and it can be calibrated with a simple static model by correcting the internal calibration time delay error and the atmospheric time delay error [39].
After the calibration of the slant range measurement error, the geolocation accuracy of SAR images can be improved by several meters. This coarse calibration is achieved by updating the sensor parameters of the RD model. Normally, however, the geolocation accuracy of coarse-calibrated SAR imagery cannot meet the requirements of further photogrammetric applications. Traditional methods for geolocation accuracy improvement of SAR images usually depend on additional reference data, such as GCPs and LiDAR data, whose collection requires considerable financial and human resources. Hence, a combined adjustment model free of ground control is proposed for the production of remote sensing data with very high geolocation accuracy.

3.3. Combined Adjustment Model

3.3.1. Unification of Coordinate System

Previous studies have introduced different types of combined adjustment models for multiple-source remote sensing data. Most of them are developed based on the RFM, and almost all of them depend heavily on additional existing reference data, as mentioned above. In contrast, we propose a new combined adjustment model designed for the geometric calibration of SAR and optical images.
For optical images, interior and exterior orientation parameters are necessary for the establishment of the collinearity equations [44]. Considering the complexity and inconsistency between different platforms, coefficients of the RFM are provided by image vendors as a substitution. Usually, the production of RPCs is conducted in the geodetic coordinate system, whereas the RD model is defined in the Cartesian coordinate system. Therefore, parameters of the RD model are transformed into the geodetic coordinate system according to Equation (7):
$$
\begin{aligned}
X &= (N + H) \cos\phi \cos\lambda \\
Y &= (N + H) \cos\phi \sin\lambda \\
Z &= \left[ N (1 - e^2) + H \right] \sin\phi,
\end{aligned}
$$
where (X, Y, Z) are the space rectangular coordinates; $\phi$ and $\lambda$ are the geodetic latitude and longitude in radians; and N is the radius of curvature in the prime vertical, with
$$
N = \frac{R_e}{\sqrt{1 - e^2 \sin^2\phi}},
$$
where e denotes the first eccentricity of the Earth ellipsoid.
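Equations (7) and (8) are the standard geodetic-to-ECEF conversion, which can be sketched as follows; the WGS84 semi-major axis and eccentricity are assumed as defaults.

```python
import math

def geodetic_to_ecef(lat_deg, lon_deg, h, a=6378137.0, e2=6.69437999014e-3):
    """Convert geodetic coordinates to Cartesian ECEF via Eqs. (7)-(8).
    Defaults: WGS84 semi-major axis a and first eccentricity squared e2."""
    phi = math.radians(lat_deg)
    lam = math.radians(lon_deg)
    # Radius of curvature in the prime vertical, Eq. (8)
    N = a / math.sqrt(1.0 - e2 * math.sin(phi) ** 2)
    X = (N + h) * math.cos(phi) * math.cos(lam)
    Y = (N + h) * math.cos(phi) * math.sin(lam)
    Z = (N * (1.0 - e2) + h) * math.sin(phi)
    return X, Y, Z
```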
Given the relationship between geodetic and space rectangular coordinates, parameters of the RD model can be translated into the geodetic coordinate system. However, since the RD model is built on the range and Doppler equations, its geolocation errors must also be reprojected into image space to keep them consistent with the RFM.

3.3.2. Combined Normal Equations

As demonstrated in Equation (3), an affine transformation model can efficiently compensate for the geolocation error of the RFM. The situation is more complicated for the RD model, because no formula directly relates the object-space and image-space coordinates. As discussed above, the slant range measurement error and the azimuth time delay error are the main factors influencing the reprojection error in image space. For simplicity, the RD model can be rewritten as follows:
$$
\begin{aligned}
G_r &= \left( \boldsymbol{V}_S - \boldsymbol{V}_T \right) \cdot \left( \boldsymbol{R}_S - \boldsymbol{R}_T \right) - \frac{\lambda f_{Dc}}{2} \cdot R \\
G_c &= \left| \boldsymbol{R}_S - \boldsymbol{R}_T \right| - R,
\end{aligned}
$$
where $G_r$ and $G_c$ depend on the image-space coordinates. To simplify the normal equations, the traditional "stop and go" assumption is adopted here [45]. Hence, parameters of the RD model can be expressed through the image-space coordinates as:
$$
\begin{aligned}
R &= R_{near} + \frac{(x + dx) \cdot c_s}{2 f_s}, \quad R_{ref} = R_{near} + \frac{W}{2} \cdot w_s \\
dt_0 &= \frac{R - R_{ref}}{c_s}, \quad dt_1 = t_0 + \frac{y + dy}{prf} \\
f_{DC} &= \sum_{i=0}^{l} d_i \cdot dt_0^{\,i} \\
\boldsymbol{R}_S &= \sum_{i=0}^{n} \boldsymbol{p}_i \cdot dt_1^{\,i}, \quad \boldsymbol{V}_S = \sum_{i=0}^{m} \boldsymbol{q}_i \cdot dt_1^{\,i},
\end{aligned}
$$
where (x, y) are the image-space coordinates and (dx, dy) are the corresponding corrections; $R_{near}$ is the measured range corresponding to the first pixel in the image sample direction; $t_0$ denotes the imaging start time in the image line direction; $c_s$ is the speed of light; $f_s$ is the sampling frequency in the slant range direction; $R_{ref}$ denotes the reference slant range; $dt_0$ is the Doppler reference time; $d_i$ are the coefficients of the Doppler centroid frequency polynomial; W and $w_s$ are the image width and the pixel spacing in the width direction; $dt_1$ denotes the azimuth time; $prf$ is the pulse repetition frequency; and $\boldsymbol{p}_i$ and $\boldsymbol{q}_i$ are the coefficients fitting the position and velocity vectors of the satellite platform.
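The pixel-dependent quantities of Equation (11) might be evaluated as in the following sketch; the keys of the `meta` dictionary are illustrative names rather than fields of any actual product format, and each orbit component is modeled by one polynomial in the azimuth time.

```python
import numpy as np

def rd_parameters(x, y, meta, dx=0.0, dy=0.0):
    """Evaluate the pixel-dependent RD quantities of Eq. (11) for image
    coordinates (x, y) and corrections (dx, dy)."""
    c = 299_792_458.0                        # speed of light c_s (m/s)
    # Slant range of image column x, Eq. (11) first line
    R = meta["R_near"] + (x + dx) * c / (2.0 * meta["f_s"])
    R_ref = meta["R_near"] + meta["W"] / 2.0 * meta["w_s"]
    dt0 = (R - R_ref) / c                    # Doppler reference time
    dt1 = meta["t0"] + (y + dy) / meta["prf"]  # azimuth time of line y
    # Polynomial models (ascending coefficients d_i, p_i, q_i, reversed
    # because np.polyval expects highest power first)
    f_dc = np.polyval(meta["d"][::-1], dt0)
    R_S = np.array([np.polyval(p[::-1], dt1) for p in meta["p"]])
    V_S = np.array([np.polyval(q[::-1], dt1) for q in meta["q"]])
    return R, f_dc, R_S, V_S
```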
Hence, normal equations assembling the RD model and RFM can be developed as:
$$
\begin{aligned}
F_r &= a_0 + a_1 \cdot s + a_2 \cdot l - r \\
F_c &= b_0 + b_1 \cdot s + b_2 \cdot l - c \\
G_r &= \left( \boldsymbol{V}_S - \boldsymbol{V}_T \right) \cdot \left( \boldsymbol{R}_S - \boldsymbol{R}_T \right) - \frac{\lambda f_{Dc}}{2} \cdot R \\
G_c &= \left| \boldsymbol{R}_S - \boldsymbol{R}_T \right| - R.
\end{aligned}
$$
Assuming the numbers of optical and SAR images are m and n, respectively, the coefficient matrices of the normal equations can be obtained in the form of Equations (12) and (13).
$$
A = \mathrm{diag}\left( A_O^1, \ldots, A_O^m, A_S^1, \ldots, A_S^n \right), \quad
A_O^i = \begin{bmatrix} \dfrac{\partial F_r^i}{\partial a_0^i} & 0 \\ 0 & \dfrac{\partial F_c^i}{\partial b_0^i} \end{bmatrix}, \quad
A_S^j = \begin{bmatrix} \dfrac{\partial G_r^j}{\partial y^j} & \dfrac{\partial G_r^j}{\partial x^j} \\ \dfrac{\partial G_c^j}{\partial y^j} & \dfrac{\partial G_c^j}{\partial x^j} \end{bmatrix}.
$$
$$
B = \mathrm{diag}\left( B_O^1, \ldots, B_O^m, B_S^1, \ldots, B_S^n \right), \quad
B_O^i = \begin{bmatrix} \dfrac{\partial F_r^i}{\partial P} & \dfrac{\partial F_r^i}{\partial L} & \dfrac{\partial F_r^i}{\partial H} \\ \dfrac{\partial F_c^i}{\partial P} & \dfrac{\partial F_c^i}{\partial L} & \dfrac{\partial F_c^i}{\partial H} \end{bmatrix}, \quad
B_S^j = \begin{bmatrix} \dfrac{\partial G_r^j}{\partial P} & \dfrac{\partial G_r^j}{\partial L} & \dfrac{\partial G_r^j}{\partial H} \\ \dfrac{\partial G_c^j}{\partial P} & \dfrac{\partial G_c^j}{\partial L} & \dfrac{\partial G_c^j}{\partial H} \end{bmatrix}.
$$
Partial derivatives of the optical images based on the RFM are easily obtained, whereas the formulas corresponding to $G_r$ and $G_c$ are more complicated. With the help of Equation (11), the partial derivatives $\partial G_r / \partial x$, $\partial G_r / \partial y$, $\partial G_c / \partial x$ and $\partial G_c / \partial y$ can be derived as:
$$
\begin{aligned}
\frac{\partial G_r}{\partial x} &= -\frac{\lambda f_{Dc}}{2} \cdot \frac{\partial R}{\partial x} - \frac{\lambda R}{2} \cdot \frac{\partial f_{DC}}{\partial dt_0} \cdot \frac{\partial dt_0}{\partial x}
= -\frac{\lambda f_{Dc}}{2} \cdot \frac{c_s}{2 f_s} - \frac{\lambda R}{2} \cdot \frac{1}{2 f_s} \sum_{i=1}^{l} i\, d_i \cdot dt_0^{\,i-1} \\
\frac{\partial G_r}{\partial y} &= \left( \boldsymbol{V}_S - \boldsymbol{V}_T \right) \cdot \frac{\partial \boldsymbol{R}_S}{\partial dt_1} \cdot \frac{\partial dt_1}{\partial y} + \left( \boldsymbol{R}_S - \boldsymbol{R}_T \right) \cdot \frac{\partial \boldsymbol{V}_S}{\partial dt_1} \cdot \frac{\partial dt_1}{\partial y} \\
&= \left( \boldsymbol{V}_S - \boldsymbol{V}_T \right) \cdot \frac{1}{prf} \sum_{i=1}^{n} i\, \boldsymbol{p}_i \cdot dt_1^{\,i-1} + \left( \boldsymbol{R}_S - \boldsymbol{R}_T \right) \cdot \frac{1}{prf} \sum_{i=1}^{m} i\, \boldsymbol{q}_i \cdot dt_1^{\,i-1} \\
\frac{\partial G_c}{\partial x} &= -\frac{\partial R}{\partial x} = -\frac{c_s}{2 f_s} \\
\frac{\partial G_c}{\partial y} &= \frac{\partial \left| \boldsymbol{R}_S - \boldsymbol{R}_T \right|}{\partial dt_1} \cdot \frac{\partial dt_1}{\partial y} = \frac{\boldsymbol{R}_S - \boldsymbol{R}_T}{\left| \boldsymbol{R}_S - \boldsymbol{R}_T \right|} \cdot \frac{1}{prf} \sum_{i=1}^{n} i\, \boldsymbol{p}_i \cdot dt_1^{\,i-1}.
\end{aligned}
$$
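Analytic partials such as those in Equation (14) are easy to get wrong, so a central finite-difference check is a useful companion during implementation. The helper below is a generic sketch, not part of the authors' method.

```python
def numerical_partial(f, args, idx, eps=1e-6):
    """Central finite-difference approximation of the partial derivative of
    f with respect to its idx-th argument, evaluated at `args`. Handy for
    sanity-checking analytic partials like those of G_r and G_c."""
    a_plus = list(args)
    a_plus[idx] += eps
    a_minus = list(args)
    a_minus[idx] -= eps
    return (f(*a_plus) - f(*a_minus)) / (2.0 * eps)
```

Comparing this numerical value against the closed-form expression at a few sample pixels quickly exposes sign or chain-rule mistakes.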
The partial derivatives of $G_r$ and $G_c$ with respect to P, L and H are given in Equation (15).
$$
\begin{aligned}
\frac{\partial G_r}{\partial P} &= \frac{\partial G_r}{\partial X}\frac{\partial X}{\partial \phi}\frac{\partial \phi}{\partial P} + \frac{\partial G_r}{\partial Y}\frac{\partial Y}{\partial \phi}\frac{\partial \phi}{\partial P} + \frac{\partial G_r}{\partial Z}\frac{\partial Z}{\partial \phi}\frac{\partial \phi}{\partial P}
= -\frac{\pi}{180} \left[ \left( V_{S,x} - V_{T,x} \right) \frac{\partial X}{\partial \phi} + \left( V_{S,y} - V_{T,y} \right) \frac{\partial Y}{\partial \phi} + \left( V_{S,z} - V_{T,z} \right) \frac{\partial Z}{\partial \phi} \right] \\
\frac{\partial G_r}{\partial L} &= -\frac{\pi}{180} \left[ \left( V_{S,x} - V_{T,x} \right) \frac{\partial X}{\partial \lambda} + \left( V_{S,y} - V_{T,y} \right) \frac{\partial Y}{\partial \lambda} + \left( V_{S,z} - V_{T,z} \right) \frac{\partial Z}{\partial \lambda} \right] \\
\frac{\partial G_r}{\partial H} &= -\left[ \left( V_{S,x} - V_{T,x} \right) \frac{\partial X}{\partial H} + \left( V_{S,y} - V_{T,y} \right) \frac{\partial Y}{\partial H} + \left( V_{S,z} - V_{T,z} \right) \frac{\partial Z}{\partial H} \right] \\
\frac{\partial G_c}{\partial P} &= -\frac{\pi}{180} \cdot \frac{1}{\left| \boldsymbol{R}_S - \boldsymbol{R}_T \right|} \left[ \left( R_{S,x} - R_{T,x} \right) \frac{\partial X}{\partial \phi} + \left( R_{S,y} - R_{T,y} \right) \frac{\partial Y}{\partial \phi} + \left( R_{S,z} - R_{T,z} \right) \frac{\partial Z}{\partial \phi} \right] \\
\frac{\partial G_c}{\partial L} &= -\frac{\pi}{180} \cdot \frac{1}{\left| \boldsymbol{R}_S - \boldsymbol{R}_T \right|} \left[ \left( R_{S,x} - R_{T,x} \right) \frac{\partial X}{\partial \lambda} + \left( R_{S,y} - R_{T,y} \right) \frac{\partial Y}{\partial \lambda} + \left( R_{S,z} - R_{T,z} \right) \frac{\partial Z}{\partial \lambda} \right] \\
\frac{\partial G_c}{\partial H} &= -\frac{1}{\left| \boldsymbol{R}_S - \boldsymbol{R}_T \right|} \left[ \left( R_{S,x} - R_{T,x} \right) \frac{\partial X}{\partial H} + \left( R_{S,y} - R_{T,y} \right) \frac{\partial Y}{\partial H} + \left( R_{S,z} - R_{T,z} \right) \frac{\partial Z}{\partial H} \right].
\end{aligned}
$$
Given Equations (7) and (8), the partial derivatives of X, Y and Z with respect to $\phi$, $\lambda$ and H are easily derived. Hence, the error equations can be established in matrix form as:
$$
\begin{aligned}
V_O &= A_O \cdot X_{A_O} + B_O \cdot X_{B_O} - l_O, \quad P_O \\
V_S &= A_S \cdot X_{A_S} + B_S \cdot X_{B_S} - l_S, \quad P_S,
\end{aligned}
$$
where the subscripts O and S denote matrices designed for optical and SAR images, respectively.
The establishment of the combined normal equations provides a generic way for the geometric processing of SAR and optical imagery. However, the absolute geolocation accuracy after free block adjustment cannot meet our requirements without GCPs. Therefore, a heterogeneous weight strategy, defined through $P_O$ and $P_S$, is introduced for better convergence.

3.3.3. Heterogeneous Weight Strategy

Traditional block adjustment methods are developed with an identity weight matrix, which implies that all involved observations contribute equally. However, the geometric performance of multiple-source remote sensing data varies greatly between platforms. Generally, most world-class SAR imagery achieves an accuracy of better than 10 m, whereas the geolocation accuracy of optical imagery ranges from several meters to hundreds of meters. Therefore, a heterogeneous weight strategy is proposed to ensure that images with higher accuracy contribute more during the combined adjustment, so that an optimum result can be obtained without using GCPs.
Different from the traditional identity weight matrix, the heterogeneous weight matrix composed of $P_O$ and $P_S$ is defined as follows:
$$
P = \mathrm{diag}\left( P_O^{r,1}, P_O^{c,1}, \ldots, P_O^{r,m}, P_O^{c,m}, P_S^{r,1}, P_S^{c,1}, \ldots, P_S^{r,n}, P_S^{c,n} \right),
$$
$$
\begin{aligned}
P_O^r &= m_O \cdot \frac{1}{q} \cdot \frac{1}{H \cdot \tan\theta} \cdot \frac{1}{A_O} \\
P_O^c &= m_O \cdot \frac{l_o}{l_{sum_o}} \cdot \frac{1}{q} \cdot \frac{1}{H \cdot \tan\omega} \cdot \frac{1}{C_O} \\
P_S^r &= m_S \cdot \frac{1}{q} \cdot \frac{1}{A_S} \\
P_S^c &= m_S \cdot \frac{l_s}{l_{sum_s}} \cdot \frac{1}{q} \cdot \frac{\tan\theta}{C_S},
\end{aligned}
$$
where the subscripts O and S indicate terms designed for optical and SAR imagery, respectively; m is an adaptive parameter keeping the balance between different weights; q is the resolution of each involved image; H is the orbital height of the optical satellite; $\theta$ and $\omega$ are the measured rolling and pitching angles of the optical imaging sensor (for SAR imagery, $\theta$ represents the looking angle of the sensor); l denotes the number of images in each group, divided according to different principles; $l_{sum}$ is the total number of involved optical/SAR images; and C and A represent the relative geolocation error of each image computed during each iteration.
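The per-image weights of Equation (17) could be computed as in the following sketch; the parameter names mirror the text, but the exact functional form is our reading of the equation and should be treated as an assumption.

```python
import math

def optical_weights(m_o, q, H, theta, omega, l_o, l_sum_o, A_o, C_o):
    """Row/column weights for one optical image, following Eq. (17).
    theta/omega: rolling and pitching angles (radians); A_o/C_o: relative
    geolocation errors from the current iteration."""
    p_row = m_o * (1.0 / q) * (1.0 / (H * math.tan(theta))) * (1.0 / A_o)
    p_col = (m_o * (l_o / l_sum_o) * (1.0 / q)
             * (1.0 / (H * math.tan(omega))) * (1.0 / C_o))
    return p_row, p_col

def sar_weights(m_s, q, theta_look, l_s, l_sum_s, A_s, C_s):
    """Row/column weights for one SAR image; theta_look is the looking angle."""
    p_row = m_s * (1.0 / q) * (1.0 / A_s)
    p_col = m_s * (l_s / l_sum_s) * (1.0 / q) * (math.tan(theta_look) / C_s)
    return p_row, p_col
```

Images with smaller relative errors (A, C) thus receive larger weights, so they dominate the adjustment, which is the behavior the text describes.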
Without GCPs, the above strategy provides generic guidance for determining an optimum weight for each observation. In practice, the determination of $m_O$ and $m_S$ is conducted through multiple trials. A converged solution is obtained with the aid of a modified least-squares method, and further photogrammetric applications can be investigated after the fine calibration of the multiple-source dataset.

4. Experimental Results and Analysis

4.1. Experimental Dataset

Considering the revisit periods of different commercial satellites with very high resolution, the collection of a multiple-source, multiple-observation dataset is time-consuming and expensive, although the cost of open-access datasets, such as Sentinel-1 data, is much lower. Here, multiple remote sensing images obtained from the Jilin-1 (JL-1) optical small satellite constellation and the Gaofen-3 (GF-3) SAR satellite are used to verify the efficiency of the proposed method. As China's first commercial optical satellite constellation, JL-1 was composed of 14 small satellites by the end of 2020. The resolution of JL-1 optical images is 0.92 m and the swath width is 11 km. Thanks to the non-fixed camera on the platform, images can be obtained at different imaging times and looking angles, which provides a multiple-observation dataset with more information. The GF-3 satellite is China's first civilian microwave remote sensing imaging satellite. The nominal resolution of GF-3 images varies from 1 m to 500 m, with swath widths ranging from 10 km to 650 km. In this experiment, 7 JL-1 optical images obtained from 4 different platforms and 3 GF-3 SAR images covering a rural area around Mount Song are selected. Detailed information is listed in Table 1, and the geometric distribution can be found in Figure 2.
Influenced by the imaging modality, targets in SAR images are more difficult to identify than in optical images. Therefore, 5 check points are extracted from an existing database of control points, all located at the corners of border areas or at road intersections. Moreover, 147 tie points are extracted automatically with an efficient multiple-source image matching method [43]. Figure 3 shows the geographical distribution of the involved optical and SAR images, as well as the distribution of the extracted tie points.

4.2. Performance of the Combined Adjustment

Before the adjustment, the slant range measurement error is first calibrated based on our previous statistical results [39]. Table 2 gives the geolocation results before and after this coarse calibration. The initial geolocation accuracy of both GF-3 SAR images is approximately 11 pixels in the X direction and 8 pixels in the Y direction, in accordance with previous studies [25]. After calibration, the geolocation error in the X direction caused by the slant range measurement error is largely eliminated, whereas the change in the Y direction is negligible. The coarse-calibration step thus improves the geolocation accuracy of the GF-3 SAR images significantly, which guarantees a better result for the whole combined adjustment process.
To verify the efficiency of the proposed combined adjustment model, a traditional RFM-based adjustment is conducted for comparison, which requires the RPCs of the SAR images to be produced in advance. Generally, the terrain-dependent method relying on well-distributed GCPs performs best [46]. In contrast, the terrain-independent method is commonly applied with the help of an open-source DEM; based on the established spatial grid, the fitting accuracy of the produced RPCs can reach the sub-pixel level [47]. Hence, RPCs of the GF-3 SAR images are produced after coarse calibration, and the RFM-based combined adjustment model including all SAR and optical images is developed according to Equation (4).
Figure 4 gives the relative geolocation accuracy between the optical and SAR images after combined adjustment. Without GCPs, the results of the free combined adjustment cannot meet the requirements of further processing, such as the production of geocoded 3D products and target localization. Hence, the heterogeneous weight strategy is applied for better convergence. The final results are listed in Table 3. After processing, the geolocation accuracy improves to approximately 3 pixels in the X direction and 4.5 pixels in the Y direction, and the proposed model outperforms the traditional RFM-based combined adjustment model. Furthermore, Figure 5 shows the error distributions after processing with the two methods. Compared with the traditional model, the proposed model also converges best, which can be derived from the consistency between images.
Table 3 gives the root mean square error (RMSE) of the whole dataset processed with the different models. With an identity weight matrix, the RFM-based combined adjustment model improves the geolocation accuracy of the whole dataset from about 146.36 pixels in the X direction and 111.75 pixels in the Y direction to 61.97 and 72.23 pixels, respectively. The proposed model performs better, with an accuracy of 59.43 pixels in the X direction and 71.89 pixels in the Y direction.

5. Discussions and Conclusions

Traditional methods for the geometric processing of multiple-source optical and SAR imagery are developed based on the RFM. Unlike most optical satellite imagery, RPCs of SAR images are not always provided by image vendors, especially for popular SAR sensors such as the TerraSAR-X and Sentinel-1 satellites. Therefore, RPCs for SAR imagery have to be produced additionally by users, and the fitting accuracy is highly dependent on the terrain.
Aiming at a generic and simple way to geometrically calibrate multiple-source optical and SAR images, we propose a new combined adjustment model. First, the slant range measurement error of SAR images obtained from the GF-3 satellite is calibrated based on our previous work. After this coarse-calibration step, tie points are automatically extracted from both the optical and SAR images. The combined adjustment model is then established by reprojecting parameters of the RD model into the same coordinate system as the RPCs. Together with the heterogeneous weight strategy, the proposed model gives the best performance. Compared with traditional methods, it provides a new way to integrate multiple-source optical and SAR data without introducing extra fitting errors, and it also enables precise photogrammetric reconstruction.

Author Contributions

Conceptualization, N.J., F.W. and H.Y.; Data curation, N.J.; Formal analysis, N.J. and F.W.; Funding acquisition, F.W. and H.Y.; Investigation, N.J. and H.Y.; Methodology, N.J. and F.W.; Project administration, N.J., F.W. and H.Y.; Resources, F.W.; Writing-original draft, N.J.; Writing-review & editing, F.W. and H.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Key R&D Program of China, grant number 2017YFB0502901.

Informed Consent Statement

Not applicable.

Acknowledgments

The authors would like to thank Wentao Wang for the support in data processing.

Conflicts of Interest

The authors declare no conflict of interest.

Figure 1. Flow chart of our proposed method.
Figure 2. Geometric distribution of multiple sources JL-1 and GF-3 imagery.
Figure 3. The experimental overlapping area of the JL-1 and GF-3 imagery, marked with extracted tie points (in red) in the experimental analysis. (a) An example of extracted tie points on optical imagery. (b) The corresponding scene on SAR imagery.
Figure 4. Relative geolocation accuracy between optical and SAR images after combined adjustment.
Figure 5. Geolocation error distribution after processing with (a) the rational function model (RFM) based combined adjustment model and (b) our proposed model.
Table 1. Detailed information of the experimental dataset (DEC means descending, and ASC is the abbreviation of ascending).

| Platform | Acquisition Date | Orbit | Incidence Angle (°) | Size (Pixels) | Resolution (m) |
|----------|------------------|-------|---------------------|-----------------|----------------|
| JL-104-1 | 2 April 2018     | DEC   | 8.73                | 11,516 × 12,143 | 0.93 × 0.94    |
| JL-104-2 | 23 June 2018     | DEC   | 4.33                | 11,506 × 12,148 | 0.92 × 0.92    |
| JL-104-3 | 29 October 2018  | DEC   | 0.84                | 11,518 × 12,120 | 0.92 × 0.92    |
| JL-105-1 | 15 June 2018     | DEC   | −0.52               | 11,518 × 12,056 | 0.92 × 0.92    |
| JL-105-2 | 10 October 2018  | DEC   | 5.19                | 11,518 × 12,008 | 0.92 × 0.94    |
| JL-106-1 | 7 June 2018      | DEC   | 1.38                | 11,530 × 12,007 | 0.92 × 0.92    |
| JL-107-1 | 31 March 2018    | DEC   | 1.97                | 11,513 × 11,991 | 0.92 × 0.92    |
| GF3-1    | 28 December 2016 | DEC   | 37.43               | 16,215 × 21,531 | 2.24 × 2.86    |
| GF3-2    | 14 November 2019 | DEC   | 24.68               | 21,625 × 23,354 | 1.12 × 2.61    |
| GF3-3    | 20 December 2019 | ASC   | 41.43               | 18,124 × 20,316 | 2.24 × 3.03    |
Table 2. Geometric performance of GF-3 Synthetic Aperture Radar (SAR) images before and after coarse calibration (pixels).

| Images | Before Calibration (X) | Before Calibration (Y) | After Calibration (X) | After Calibration (Y) |
|--------|------------------------|------------------------|-----------------------|-----------------------|
| GF3-1  | 11.11                  | 8.09                   | 1.56                  | 8.09                  |
| GF3-2  | 10.96                  | 7.84                   | 1.73                  | 7.84                  |
| GF3-3  | 12.37                  | 8.21                   | 1.61                  | 8.21                  |
Table 3. Geometric performance of the whole dataset with different combined adjustment methods (pixels).

| Items    | Before Adjustment (X) | Before Adjustment (Y) | After Adjustment (X) | After Adjustment (Y) | After Weighted Adjustment (X) | After Weighted Adjustment (Y) |
|----------|-----------------------|-----------------------|----------------------|----------------------|-------------------------------|-------------------------------|
| RFM      | 146.36                | 111.75                | 61.97                | 72.23                | 2.97                          | 4.78                          |
| Proposed | 146.36                | 111.75                | 59.43                | 71.89                | 2.65                          | 4.43                          |