Article

High-Precision Geolocation of SAR Images via Multi-View Fusion Without Ground Control Points

College of Electronic Science and Technology, National University of Defense Technology (NUDT), Changsha 410073, China
* Author to whom correspondence should be addressed.
Remote Sens. 2025, 17(22), 3775; https://doi.org/10.3390/rs17223775
Submission received: 22 October 2025 / Revised: 16 November 2025 / Accepted: 18 November 2025 / Published: 20 November 2025

Highlights

What are the main findings?
  • Proposes a novel GCP (Ground Control Point)-free high-precision geolocation method based on multi-view SAR (Synthetic Aperture Radar) image fusion, incorporating outlier detection, weighted fusion, and refined estimation strategies.
  • For actual measured airborne SAR data, the proposed method achieves an average 84.78% improvement in positioning accuracy relative to dual-view fusion methods, attaining meter-level positioning precision. Ablation experiments confirm that outlier removal and refined estimation contribute 82.42% and 22.75%, respectively, to this accuracy gain.
What is the implication of the main finding?
  • The proposed method is compatible with three or more multi-view images, while excluding outlier images with systematic geolocation errors inconsistent across views.
  • The method integrates a weighted fusion strategy and the minimum norm least-squares criterion, enabling GCP-free high-precision estimation of planar systematic geolocation errors of individual images through maximizing utilization of multi-view redundant information.

Abstract

Synthetic Aperture Radar (SAR) images generated via range-Doppler (RD) model-based geometric correction often suffer from non-negligible systematic geolocation errors due to the cumulative impacts of platform positioning inaccuracies, payload time synchronization offsets, and atmospheric propagation delays. These errors limit the applicability of SAR data in high-precision geometric applications, especially in scenarios where ground control points (GCPs), traditionally used for calibration, are inaccessible or costly to acquire. To address this challenge, this study proposes a novel GCP-free high-precision geolocation method based on multi-view SAR image fusion, integrating outlier detection, weighted fusion, and refined estimation strategies. The method first establishes a positioning error correlation model for homologous point pairs in multi-view SAR images. Under the assumption of approximately equal positioning errors, initial systematic error estimates are obtained for all dual-view combinations. It then identifies and removes outlier images with inconsistent systematic errors via coefficient of variation analysis, retaining a subset of multi-view images with stable calibration parameters. A weighted fusion strategy, tailored to the geometric error propagation model, is applied to the optimized subset to balance the influence of angular relationships on error estimation. Finally, the minimum norm least-squares method refines the fusion results to enhance consistency and accuracy. Validation experiments on both simulated and actual airborne SAR images demonstrate the method's effectiveness. For actual measured data, the proposed method achieves an average positioning accuracy improvement of 84.78% compared with dual-view fusion methods, with meter-level precision. Ablation studies confirm that outlier removal and refined estimation contribute 82.42% and 22.75% to accuracy gains, respectively. These results indicate that the method fully leverages multi-view information to robustly estimate and compensate for 2D systematic errors (range and azimuth), enabling high-precision planar geolocation of airborne SAR images without GCPs.

1. Introduction

Synthetic Aperture Radar (SAR) systematic geocorrected products are typically geocoded to a map projection without ground control points (GCPs), relying on the range-Doppler (RD) signal model [1,2,3,4]. However, due to cumulative errors from aircraft positioning/attitude determination, payload time calibration, radio wave propagation delays, and local average elevation inaccuracies, even rigorous geolocation models may result in SAR images with non-negligible systematic geolocation errors, failing to meet the demands of high-precision applications [4,5]. Meter-level precise geolocation, a critical geometric correction step, traditionally requires a rigorous geometric model calibrated with user-collected GCPs [6]. Yet, GCP acquisition is often arduous in remote regions, featureless terrains (e.g., glaciers, deserts), or restricted areas [5,7], where reliable GCPs may be inaccessible or cost-prohibitive. Moreover, SAR-specific radiometric and geometric effects—such as speckle noise, foreshortening, layover, and shadow—render GCP identification and collection far more challenging than in optical imagery. Thus, developing GCP-free precise geolocation methods for SAR images remains an urgent need [5,7,8].
Recent efforts have explored stereo localization via multi-view SAR image fusion [9]. Chen and Dowman [10] proposed a spaceborne SAR 3D intersection algorithm based on weighted least-squares. Luo et al. [11] achieved robust multi-view spaceborne SAR 3D positioning using the range-Doppler model. Yin et al. [12] developed a two-stage stereo positioning method for multi-view spaceborne SAR, involving initial and secondary positioning on a normalized RD model to enhance accuracy and stability. However, these methods cannot eliminate positioning offsets induced by SAR system calibration parameter errors.
To address such errors, researchers have investigated high-precision GCP-free geolocation by fusing multi-pass, multi-view SAR data of the same area [13,14]. In the absence of GCPs, calibration parameters can be corrected using geometric relationships between overlapping SAR images under specific observation configurations [8,15]. For instance, a self-calibration model with symmetric geometric constraints [16] enables systematic error estimation but requires two images from different orbits with identical incident angles. Zhang et al. [17] proposed a self-calibration method using at least three overlapping images, leveraging spatial intersection residuals of conjugate points to detect timing offsets. For dual-view airborne SAR, Zhang et al. [18] estimated self-calibration parameters via a simple affine transformation model on ground-range images, improving GCP-free geolocation accuracy.
Key geometric calibration parameters for SAR include range-direction fast time and azimuth-direction slow time, which originate from systematic errors (e.g., instrument internal delays, GNSS-radar time synchronization errors). These parameters are generally assumed stable when the same radar acquires images from different perspectives [16,17,18], forming the basis for geometric self-calibration via multi-view data fusion.
However, for three or more multi-view SAR images, even from the same system, estimates of 2D calibration parameters (azimuth and range) derived from different dual-view combinations often exhibit large fluctuations. In some cases, individual images may show significant positioning offsets (see experimental results), qualifying as outliers. This arises because non-ideal factors causing systematic positioning errors are complex and diverse: minor error terms—such as platform positioning errors, antenna phase center offsets, atmospheric delays, and ground elevation projection errors—exhibit randomness across different viewing angles [19]. Consequently, effective outlier detection and removal are critical to avoid compromising self-calibration.
Notably, the stability of self-calibration and the handling of ill-conditioned problems in multi-view scenarios have attracted research attention. For spaceborne SAR, Yin et al. [20] proposed a sensitive geometric self-calibration method based on the RD model, utilizing the determinant and an accuracy stabilization factor to filter images, thereby mitigating singular solutions and enhancing robustness. That study identified satellite position error as the primary error source and recommended acquiring ipsilateral images with incidence angles greater than 8° to improve geometric configuration. For UAV-borne SAR with spatially variant slant range errors, Luo et al. [21] proposed a geometric auto-calibration method that incorporated a tie-point quality-based weighting strategy and an iterative eigenvalue-correction least-squares solution, achieving significant accuracy improvements. These methods underscore the importance of image selection, error source analysis, and robust solving in multi-view self-calibration. Nevertheless, the effective detection of outlier images and the optimal fusion of multi-view data for consistent and high-precision geolocation remain challenging.
This study presents a novel GCP-free high-precision geolocation approach based on multi-view synthetic aperture radar (SAR) image fusion. Focusing on multi-view airborne SAR ground-range images, the proposed method is implemented through the following steps:
(1)
A positioning error correlation model for homologous point pairs in multi-view SAR images is established, where model parameters are derived from an affine transformation-based geometric positioning model constructed using the four corner points of the ground-range image.
(2)
Under the assumption of approximately equal positioning errors, initial error estimates are obtained for all dual-view combinations. Potential outlier images are then detected and eliminated to prevent interference with the self-calibration process.
(3)
The remaining multi-view data are fused in two successive steps using the weighted least squares method and the minimum norm least squares method, respectively, to suppress inconsistent errors across different views.
This approach significantly enhances both the GCP-free geolocation accuracy and algorithm robustness for multi-view SAR images. The main contributions of this work are summarized as follows:
(1)
By leveraging the stability of SAR geometric calibration parameters, an outlier detection and removal method is proposed to retain the subset of multi-view SAR images with the most consistent calibration parameters.
(2)
A weighted least squares fusion strategy based on a multi-view error propagation model is designed to accurately estimate self-calibration parameters. This strategy balances the influence of angular geometric relationships on positioning error estimation, with a particular focus on suppressing large error propagation caused by small angle differences.
(3)
The initial weighted fusion estimation results are further refined using the minimum norm least-squares principle. By minimizing residuals and balancing the respective equations, more accurate and robust results are obtained.
The remainder of this paper is organized as follows: Section 2 introduces the multi-view positioning error correlation model. Section 3 elaborates on the principles and steps of the proposed outlier removal and multi-view SAR fusion positioning method. Section 4 validates the method using both simulated and measured data, with performance analysis conducted via ablation and comparative experiments. Some factors affecting algorithm accuracy are discussed in Section 5. Finally, conclusions are drawn in Section 6.

2. Multi-View Positioning Error Correlation Model

In practical airborne SAR image processing, a simple first-order affine transformation model is typically established using the latitude and longitude coordinates of the four corner points attached to ground-range SAR images. This model describes the transformation from pixel coordinates $(i, j)$ to the systematic geolocation coordinates $(B, L)$ of scatterers:
$$\begin{bmatrix} B & L \end{bmatrix} = \begin{bmatrix} i & j \end{bmatrix} \mathbf{C} + \mathbf{D}, \tag{1}$$
where $B$ and $L$ denote the geographic latitude and longitude of the scatterer's systematic geolocation, respectively. $\mathbf{C}$ and $\mathbf{D}$ represent the coefficient matrices for the linear term and constant term of the affine transformation model, respectively, which can be derived inversely from the latitude and longitude coordinates of the image's four corner points [18].
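As a concrete illustration, the following minimal sketch (ours, not the authors' implementation) recovers $\mathbf{C}$ and $\mathbf{D}$ from the four corner points by least squares; the corner pixel and latitude/longitude values are hypothetical placeholders.

```python
import numpy as np

# Minimal sketch (not the authors' code): recover the affine model
# [B L] = [i j] C + D of Eq. (1) from the four corner points attached to a
# ground-range SAR image. The corner coordinates below are hypothetical.
corners_px = np.array([[0, 0], [0, 4999], [4999, 0], [4999, 4999]], float)
corners_geo = np.array([[28.1000, 112.9000],   # (B, L) of each corner
                        [28.1000, 112.9450],
                        [28.1450, 112.9000],
                        [28.1450, 112.9450]])

# Stack [i j 1] rows and solve for the 3x2 stacked matrix [C; D].
G = np.hstack([corners_px, np.ones((4, 1))])
M, *_ = np.linalg.lstsq(G, corners_geo, rcond=None)
C, D = M[:2, :], M[2, :]        # C: 2x2 linear term; D: constant term

def pixel_to_geo(i, j):
    """Systematic geolocation (B, L) of pixel (i, j) via Eq. (1)."""
    return np.array([i, j]) @ C + D
```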
Considering residual errors in the SAR system’s geometric calibration parameters, the actual geographic coordinates of ground scatterers strictly satisfy:
$$\begin{bmatrix} B_{\mathrm{real}} & L_{\mathrm{real}} \end{bmatrix} = \left( \begin{bmatrix} i & j \end{bmatrix} + \begin{bmatrix} \dfrac{r}{\rho_r} & \dfrac{a}{\rho_a} \end{bmatrix} \right) \mathbf{C} + \mathbf{D}, \tag{2}$$
where $B_{\mathrm{real}}$ and $L_{\mathrm{real}}$ are the true latitude and longitude of the scatterer at pixel $(i, j)$ in the SAR image; $\rho_r$ and $\rho_a$ are the ground sampling distances (GSD) in the range and azimuth directions, respectively; $r$ and $a$ denote the positioning offsets in the range and azimuth directions. These offsets are closely related to the SAR system's range electronic delay and azimuth timing offset, exhibiting good numerical stability during multi-view image acquisition with slight random fluctuations across different views. Our objective is to achieve geometric self-calibration by fusing multi-view SAR images, thereby estimating the 2D positioning offsets, i.e., range ($r$) and azimuth ($a$), of each image as accurately as possible.
After system-level positioning, rotational and scale differences between multi-view images are eliminated. Therefore, a high-precision area-based matching method [22] is employed to align multi-view SAR images, yielding multiple sets of homologous points (sharing identical true geographic coordinates). Compared to feature-based methods, this approach yields more reliable homologous point pairs. Based on these points, the positioning error correlation model for dual-view SAR homologous point pairs is established as follows:
$$\left( \begin{bmatrix} i_m & j_m \end{bmatrix} + \begin{bmatrix} \dfrac{r_m}{\rho_r^m} & \dfrac{a_m}{\rho_a^m} \end{bmatrix} \right) \mathbf{C}_m + \mathbf{D}_m = \left( \begin{bmatrix} i_n & j_n \end{bmatrix} + \begin{bmatrix} \dfrac{r_n}{\rho_r^n} & \dfrac{a_n}{\rho_a^n} \end{bmatrix} \right) \mathbf{C}_n + \mathbf{D}_n, \tag{3}$$
where m and n correspond to SAR images from different views. Formula (3) can be rewritten as:
$$\begin{bmatrix} \dfrac{r_m}{\rho_r^m} & \dfrac{a_m}{\rho_a^m} \end{bmatrix} \mathbf{C}_m - \begin{bmatrix} \dfrac{r_n}{\rho_r^n} & \dfrac{a_n}{\rho_a^n} \end{bmatrix} \mathbf{C}_n = \begin{bmatrix} i_n & j_n \end{bmatrix} \mathbf{C}_n + \mathbf{D}_n - \begin{bmatrix} i_m & j_m \end{bmatrix} \mathbf{C}_m - \mathbf{D}_m = \begin{bmatrix} B_n & L_n \end{bmatrix} - \begin{bmatrix} B_m & L_m \end{bmatrix} = \begin{bmatrix} \Delta B_{nm} & \Delta L_{nm} \end{bmatrix}, \tag{4}$$
where $[B_m \; L_m]$ and $[B_n \; L_n]$ are the systematic geolocation coordinates of homologous points (Hps) in images m and n, respectively, calculated using (1); $[\Delta B_{nm} \; \Delta L_{nm}]$ represents the systematic geolocation difference of Hps between the dual-view images. Further transformation of (4) yields:
$$\begin{cases} \dfrac{r_m}{\rho_r^m} c_{11}^m + \dfrac{a_m}{\rho_a^m} c_{21}^m - \dfrac{r_n}{\rho_r^n} c_{11}^n - \dfrac{a_n}{\rho_a^n} c_{21}^n = \Delta B_{nm}, \\[2mm] \dfrac{r_m}{\rho_r^m} c_{12}^m + \dfrac{a_m}{\rho_a^m} c_{22}^m - \dfrac{r_n}{\rho_r^n} c_{12}^n - \dfrac{a_n}{\rho_a^n} c_{22}^n = \Delta L_{nm}. \end{cases} \tag{5}$$
Formula (5) can be rewritten as:
$$\begin{bmatrix} \dfrac{c_{11}^m}{\rho_r^m} & \dfrac{c_{21}^m}{\rho_a^m} & -\dfrac{c_{11}^n}{\rho_r^n} & -\dfrac{c_{21}^n}{\rho_a^n} \\[2mm] \dfrac{c_{12}^m}{\rho_r^m} & \dfrac{c_{22}^m}{\rho_a^m} & -\dfrac{c_{12}^n}{\rho_r^n} & -\dfrac{c_{22}^n}{\rho_a^n} \end{bmatrix} \begin{bmatrix} r_m \\ a_m \\ r_n \\ a_n \end{bmatrix} = \begin{bmatrix} \Delta B_{nm} \\ \Delta L_{nm} \end{bmatrix}, \tag{6}$$
where $c_{ij}^m$ denotes the element in the i-th row and j-th column of $\mathbf{C}_m$, and $c_{ij}^n$ the corresponding element of $\mathbf{C}_n$. Equation (6) establishes the relationship between the range-azimuth positioning errors of dual-view images and the latitude-longitude differences of their Hps; the matrix elements reflect the influence of the affine model coefficients and the ground sampling distances on error propagation. Since $r_m$, $a_m$, $r_n$, and $a_n$ are all unknowns, this system of two equations in four unknowns is rank-deficient and cannot be solved directly. To address this, dual-view fusion positioning typically assumes equal positioning errors for the two images to avoid rank deficiency. However, this assumption often leads to significant positioning accuracy loss in multi-view observation scenarios.
Generalizing to the case of N multi-view SAR images, the positioning error correlation model for multi-view SAR images is established as:
$$\mathbf{Y}_{N(N-1) \times 1} = \mathbf{A}_{N(N-1) \times 2N} \, \mathbf{X}_{2N \times 1},$$
$$\mathbf{A} = \begin{bmatrix} \mathbf{T}_1 & -\mathbf{T}_2 & \mathbf{0} & \cdots & \cdots & \mathbf{0} \\ \mathbf{T}_1 & \mathbf{0} & -\mathbf{T}_3 & \cdots & \cdots & \mathbf{0} \\ \vdots & & & \ddots & & \vdots \\ \mathbf{T}_1 & \mathbf{0} & \cdots & \cdots & -\mathbf{T}_{N-1} & \mathbf{0} \\ \mathbf{T}_1 & \mathbf{0} & \cdots & \cdots & \mathbf{0} & -\mathbf{T}_N \\ \mathbf{0} & \mathbf{T}_2 & -\mathbf{T}_3 & \cdots & \cdots & \mathbf{0} \\ \vdots & & & \ddots & & \vdots \\ \mathbf{0} & \cdots & \mathbf{T}_m & \cdots & -\mathbf{T}_n & \cdots \\ \vdots & & & & \ddots & \vdots \\ \mathbf{0} & \cdots & \cdots & \mathbf{0} & \mathbf{T}_{N-1} & -\mathbf{T}_N \end{bmatrix}, \tag{7}$$
where
$$\mathbf{X} = \begin{bmatrix} r_1 & a_1 & r_2 & a_2 & \cdots & r_N & a_N \end{bmatrix}^{\mathrm{T}},$$
$$\mathbf{Y} = \begin{bmatrix} \Delta B_{21} & \Delta L_{21} & \Delta B_{31} & \Delta L_{31} & \cdots & \Delta B_{N(N-1)} & \Delta L_{N(N-1)} \end{bmatrix}^{\mathrm{T}},$$
$$\mathbf{T}_m = \begin{bmatrix} \dfrac{c_{11}^m}{\rho_r^m} & \dfrac{c_{21}^m}{\rho_a^m} \\[2mm] \dfrac{c_{12}^m}{\rho_r^m} & \dfrac{c_{22}^m}{\rho_a^m} \end{bmatrix},$$
and $\mathbf{0}$ is the $2 \times 2$ zero matrix. Although this system remains rank-deficient, the increased observation dimension enables optimal estimation in the sense of minimum norm least squares.
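For readers implementing the model, the sketch below assembles the stacked system of Equation (7); the per-view affine matrices, GSDs, and pairwise latitude/longitude differences of the Hps are assumed to be available from earlier steps, and the function names are illustrative.

```python
import numpy as np
from itertools import combinations

# Sketch of assembling the rank-deficient stacked system Y = A X of Eq. (7).
# `C[m]` (2x2 affine matrix), `rho_r[m]`, `rho_a[m]`, and the pairwise lat/lon
# differences `dBL[(n, m)] = [dB_nm, dL_nm]` are assumed inputs.
def block_T(C_m, rho_r_m, rho_a_m):
    """2x2 block T_m defined after Eq. (7)."""
    return np.array([[C_m[0, 0] / rho_r_m, C_m[1, 0] / rho_a_m],
                     [C_m[0, 1] / rho_r_m, C_m[1, 1] / rho_a_m]])

def assemble_system(C, rho_r, rho_a, dBL, N):
    pairs = list(combinations(range(N), 2))        # all pairs (m, n), m < n
    A = np.zeros((2 * len(pairs), 2 * N))
    Y = np.zeros(2 * len(pairs))
    for k, (m, n) in enumerate(pairs):
        A[2*k:2*k+2, 2*m:2*m+2] = block_T(C[m], rho_r[m], rho_a[m])
        A[2*k:2*k+2, 2*n:2*n+2] = -block_T(C[n], rho_r[n], rho_a[n])
        Y[2*k:2*k+2] = dBL[(n, m)]
    return A, Y                  # unknowns X = [r_1, a_1, ..., r_N, a_N]
```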

3. Methodology

Figure 1 illustrates the main workflow of the proposed method. First, multi-view images are paired to perform dual-view fusion positioning, yielding rough estimates of 2D positioning offsets for each pair. While systematic 2D positioning offsets of multi-view SAR images are generally consistent, anomalous offsets may arise under complex influences (e.g., aircraft trajectory deviations caused by intense airflow disturbances). To address this, a coefficient of variation-based method is proposed to identify consistency in multi-view positioning errors and eliminate anomalous images, enabling outlier removal prior to multi-view fusion. Next, considering that error estimation accuracy of dual-view pairs is closely tied to their geometric relationships (e.g., angular differences), weighting coefficients are computed using the sensitivity of the error propagation model to fuse positioning error offsets across multi-view SAR images. Finally, fusion estimates are refined under the minimum norm least-squares criterion to achieve high-precision planar positioning for each SAR image. This method effectively suppresses the impact of inconsistent multi-view positioning errors. Additionally, the weighted fusion strategy accounts for error sensitivity differences arising from angular geometric characteristics, leveraging complementary and redundant information in multi-view images to maximize algorithm accuracy and robustness.

3.1. Consistency Identification of Multi-View Image Positioning Errors and Outlier Removal

Assuming equal systematic positioning errors for images m and n, Equation (3) can be simplified for any homologous point as:
$$\begin{bmatrix} r_{mn} & a_{mn} \end{bmatrix} \left( \begin{bmatrix} \dfrac{1}{\rho_r^m} & 0 \\ 0 & \dfrac{1}{\rho_a^m} \end{bmatrix} \mathbf{C}_m - \begin{bmatrix} \dfrac{1}{\rho_r^n} & 0 \\ 0 & \dfrac{1}{\rho_a^n} \end{bmatrix} \mathbf{C}_n \right) = \begin{bmatrix} \Delta B_{nm} & \Delta L_{nm} \end{bmatrix}, \tag{8}$$
where $r_{mn}$ and $a_{mn}$ denote the range and azimuth positioning errors, respectively. Let
$$\mathbf{K}_{(n,m)} = \begin{bmatrix} k_{11}^{(n,m)} & k_{12}^{(n,m)} \\ k_{21}^{(n,m)} & k_{22}^{(n,m)} \end{bmatrix} = \left( \begin{bmatrix} \dfrac{1}{\rho_r^m} & 0 \\ 0 & \dfrac{1}{\rho_a^m} \end{bmatrix} \mathbf{C}_m - \begin{bmatrix} \dfrac{1}{\rho_r^n} & 0 \\ 0 & \dfrac{1}{\rho_a^n} \end{bmatrix} \mathbf{C}_n \right)^{-1}, \tag{9}$$
where $k_{ij}^{(n,m)}$ represents the element in the i-th row and j-th column of $\mathbf{K}_{(n,m)}$. Substituting (9) into (8) yields the range and azimuth positioning error estimates from dual-view fusion:
$$\begin{bmatrix} r_{mn} & a_{mn} \end{bmatrix} = \begin{bmatrix} \Delta B_{nm} & \Delta L_{nm} \end{bmatrix} \mathbf{K}_{(n,m)}. \tag{10}$$
The accuracy of dual-view estimates depends on both the consistency of positioning errors between views and the inherent angular difference in dual-view observation geometry. With additional multi-view SAR images, the subset with the most consistent positioning errors can be selected based on the dispersion of error estimates across all dual-view pairs.
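A minimal sketch of this dual-view estimate (Equations (8)-(10)) is given below; all inputs are assumed given, and averaging the latitude/longitude differences over the Hps is one plausible reduction rather than the authors' exact procedure.

```python
import numpy as np

# Sketch of the dual-view estimate of Eqs. (8)-(10): under the equal-error
# assumption, the shared offsets (r_mn, a_mn) follow from the lat/lon
# differences of the homologous points.
def dual_view_estimate(C_m, C_n, rho_m, rho_n, dB, dL):
    """rho_* = (rho_r, rho_a); dB, dL: arrays of Hp lat/lon differences."""
    S_m = np.diag([1.0 / rho_m[0], 1.0 / rho_m[1]]) @ C_m
    S_n = np.diag([1.0 / rho_n[0], 1.0 / rho_n[1]]) @ C_n
    K = np.linalg.inv(S_m - S_n)               # Eq. (9)
    dBL = np.array([np.mean(dB), np.mean(dL)])
    r_mn, a_mn = dBL @ K                       # Eq. (10), row-vector form
    return r_mn, a_mn
```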
The workflow for identifying multi-view positioning error consistency and outlier removal is shown in Figure 2. For N multi-view SAR images, pairwise correlation yields $N(N-1)/2$ estimates via dual-view fusion. The dispersion of these estimates reflects the consistency of true positioning errors across the multi-view images. Thus, the coefficient of variation [23] is employed to quantify this consistency, calculated separately for the range and azimuth errors:
$$V_r = \frac{s_r}{u_r}, \qquad V_a = \frac{s_a}{u_a}, \tag{11}$$
where $V_r$ and $V_a$ are the coefficients of variation for the range and azimuth errors, respectively; $s_r$ and $s_a$ denote their standard deviations; $u_r$ and $u_a$ represent their means.
Consistency is evaluated by comparing V r and V a against a threshold γ . While a 20% threshold is typical in practice, error propagation effects inflate estimation dispersion, so γ is adjusted to 1 in this algorithm.
Specifically, if the coefficient of variation in either dimension exceeds 1, the actual positioning error discrepancies between images are unacceptably large, precluding accurate estimation. In such cases, images are sequentially excluded one at a time from the N images, generating N subsets each containing N − 1 images. For each subset, the estimates and the two coefficients of variation are computed. The subset with the smallest coefficients is selected, and if both values are below 1, it is adopted as the optimally consistent dataset. If not, exclusion continues until a subset with relatively consistent systematic positioning errors is identified.
If the coefficient of variation in either dimension remains above 1 even after reducing the subset to three images, the three images with the most consistent errors are selected for fusion positioning.
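The sketch below illustrates one way to implement this consistency test and leave-one-out exclusion loop; `pair_estimate` is a hypothetical callable returning the dual-view estimate for an image pair, e.g., built on the previous sketch.

```python
import numpy as np
from itertools import combinations

# Sketch of the consistency test and leave-one-out removal of Section 3.1.
# `pair_estimate(m, n)` returns the dual-view estimate (r_mn, a_mn).
def coeff_of_variation(pairs, pair_estimate):
    est = np.array([pair_estimate(m, n) for m, n in pairs])
    return np.abs(est.std(axis=0) / est.mean(axis=0))   # (V_r, V_a), Eq. (11)

def select_consistent_subset(n_images, pair_estimate, gamma=1.0):
    idx = list(range(n_images))
    while True:
        V = coeff_of_variation(list(combinations(idx, 2)), pair_estimate)
        if V.max() < gamma or len(idx) == 3:   # consistent, or minimum size
            return idx
        # drop the image whose exclusion gives the smallest worst-case CV
        def score(j):
            rest = [i for i in idx if i != j]
            return coeff_of_variation(list(combinations(rest, 2)),
                                      pair_estimate).max()
        idx.remove(min(idx, key=score))
```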

3.2. Multi-View Positioning Error Fusion Estimation Based on a Geometric Error Propagation Model

Consistency identification and outlier removal yield a subset of multi-view SAR images with relatively consistent systematic positioning errors. Concurrently, a geometric error propagation model for dual-view SAR positioning estimates is established, enabling a weighted fusion strategy to compute multi-view positioning error estimates. To quantify positioning errors between any two images, an equivalent equation is derived using true geographic coordinates of Hps:
$$\begin{bmatrix} \dfrac{r_m}{\rho_r^m} & \dfrac{a_m}{\rho_a^m} \end{bmatrix} \mathbf{C}_m - \begin{bmatrix} \dfrac{r_n}{\rho_r^n} & \dfrac{a_n}{\rho_a^n} \end{bmatrix} \mathbf{C}_n = \begin{bmatrix} \Delta B_{nm} & \Delta L_{nm} \end{bmatrix}. \tag{12}$$
Let
$$\begin{bmatrix} \Delta r_{mn} & \Delta a_{mn} \end{bmatrix} = \begin{bmatrix} r_m & a_m \end{bmatrix} - \begin{bmatrix} r_n & a_n \end{bmatrix}, \tag{13}$$
where $r_m$, $a_m$ and $r_n$, $a_n$ are the true range and azimuth positioning errors of images m and n, respectively; $\Delta r_{mn}$ and $\Delta a_{mn}$ denote their range and azimuth error differences. Substituting (13) into (12) yields
$$\begin{bmatrix} \dfrac{r_m}{\rho_r^m} & \dfrac{a_m}{\rho_a^m} \end{bmatrix} \mathbf{C}_m - \begin{bmatrix} \dfrac{r_m - \Delta r_{mn}}{\rho_r^n} & \dfrac{a_m - \Delta a_{mn}}{\rho_a^n} \end{bmatrix} \mathbf{C}_n = \begin{bmatrix} \Delta B_{nm} & \Delta L_{nm} \end{bmatrix}. \tag{14}$$
Rearranging (14) gives
$$\begin{bmatrix} r_m & a_m \end{bmatrix} = \begin{bmatrix} \Delta B_{nm} & \Delta L_{nm} \end{bmatrix} \mathbf{K}_{(n,m)} - \begin{bmatrix} \Delta r_{mn} & \Delta a_{mn} \end{bmatrix} \begin{bmatrix} \dfrac{1}{\rho_r^n} & 0 \\ 0 & \dfrac{1}{\rho_a^n} \end{bmatrix} \mathbf{C}_n \mathbf{K}_{(n,m)}. \tag{15}$$
Thus, the estimation error introduced by image n into image m during dual-view positioning is
$$\begin{bmatrix} \hat{r}_m^n & \hat{a}_m^n \end{bmatrix} - \begin{bmatrix} r_m & a_m \end{bmatrix} = \begin{bmatrix} \Delta r_{mn} & \Delta a_{mn} \end{bmatrix} \begin{bmatrix} \dfrac{1}{\rho_r^n} & 0 \\ 0 & \dfrac{1}{\rho_a^n} \end{bmatrix} \mathbf{C}_n \mathbf{K}_{(n,m)}, \tag{16}$$
where $\hat{r}_m^n$ and $\hat{a}_m^n$ are the range and azimuth positioning error estimates obtained when the positioning errors of images m and n are assumed to be equal. Equation (16) indicates that, owing to discrepancies in the true systematic positioning errors, the error propagation matrix governing error transfer from image n to image m satisfies
$$\mathbf{H}_{(n,m)} = \begin{bmatrix} H_{11}^{(n,m)} & H_{12}^{(n,m)} \\ H_{21}^{(n,m)} & H_{22}^{(n,m)} \end{bmatrix} = \begin{bmatrix} \dfrac{1}{\rho_r^n} & 0 \\ 0 & \dfrac{1}{\rho_a^n} \end{bmatrix} \mathbf{C}_n \mathbf{K}_{(n,m)}, \tag{17}$$
where $\mathbf{H}_{(n,m)}$ is the error propagation matrix from image n to image m. The larger the absolute value of an element, the stronger the interference of image n's error on image m's estimation result, providing a basis for the weight design in the subsequent weighted fusion. Substituting (17) into (16) yields:
$$\begin{cases} \hat{r}_m^n - r_m = \Delta r_{mn} \cdot H_{11}^{(n,m)} + \Delta a_{mn} \cdot H_{21}^{(n,m)}, \\ \hat{a}_m^n - a_m = \Delta r_{mn} \cdot H_{12}^{(n,m)} + \Delta a_{mn} \cdot H_{22}^{(n,m)}. \end{cases} \tag{18}$$
Larger magnitudes of $|H_{11}^{(n,m)}|$, $|H_{12}^{(n,m)}|$, $|H_{21}^{(n,m)}|$, and $|H_{22}^{(n,m)}|$ indicate greater estimation errors induced by actual positioning error differences. This implies that the errors introduced by SAR images at different angles to a reference image depend on their mutual geometric relationship. Let $\alpha_{n,m}^r$ and $\alpha_{n,m}^a$ denote the range and azimuth influence factors, defined as
$$\alpha_{n,m}^r = \left| H_{11}^{(n,m)} \right| + \left| H_{21}^{(n,m)} \right|, \qquad \alpha_{n,m}^a = \left| H_{12}^{(n,m)} \right| + \left| H_{22}^{(n,m)} \right|. \tag{19}$$
Systematic positioning error estimates from all pairwise combinations of multi-view images are then fused via a weighted strategy to enhance per-view error estimation accuracy. The fused positioning error for image m is
$$r_m^{(0)} = \sum_{n=1, n \neq m}^{N} w_{n,m}^r \cdot \hat{r}_m^n, \qquad a_m^{(0)} = \sum_{n=1, n \neq m}^{N} w_{n,m}^a \cdot \hat{a}_m^n, \tag{20}$$
where $r_m^{(0)}$ and $a_m^{(0)}$ are the fused range and azimuth error estimates for image m. The weights $w_{n,m}^r$ and $w_{n,m}^a$ satisfy
$$w_{n,m}^r = \frac{1/\alpha_{n,m}^r}{\sum_{n=1, n \neq m}^{N} 1/\alpha_{n,m}^r}, \qquad w_{n,m}^a = \frac{1/\alpha_{n,m}^a}{\sum_{n=1, n \neq m}^{N} 1/\alpha_{n,m}^a}, \tag{21}$$
and
$$\sum_{n=1, n \neq m}^{N} w_{n,m}^r = 1, \qquad \sum_{n=1, n \neq m}^{N} w_{n,m}^a = 1. \tag{22}$$
This yields multi-view positioning error estimates consistent with the relative consistency of true errors. Subsequently, positioning errors of excluded views can be estimated using equivalent equations derived from Hps’ true geographic coordinates.
Substituting (18) into (20) shows that
$$\begin{aligned} r_m^{(0)} &= \sum_{n=1, n \neq m}^{N} w_{n,m}^r \cdot \hat{r}_m^n = r_m + \sum_{n=1, n \neq m}^{N} w_{n,m}^r \left( \Delta r_{mn} \cdot H_{11}^{(n,m)} + \Delta a_{mn} \cdot H_{21}^{(n,m)} \right), \\ a_m^{(0)} &= \sum_{n=1, n \neq m}^{N} w_{n,m}^a \cdot \hat{a}_m^n = a_m + \sum_{n=1, n \neq m}^{N} w_{n,m}^a \left( \Delta r_{mn} \cdot H_{12}^{(n,m)} + \Delta a_{mn} \cdot H_{22}^{(n,m)} \right). \end{aligned} \tag{23}$$
Estimation accuracy is ensured when the errors are relatively consistent (i.e., $\Delta r_{mn} \ll r_m$). Equation (23) decomposes each fused range and azimuth error estimate into two parts: the true error and a weighted sum of the dual-view estimation errors. The weight coefficients are determined by the geometric error propagation matrix, which means that images with smaller angular differences (i.e., larger propagation errors) receive lower weights. When the errors of the multi-view images are highly consistent, the weighted sum tends to zero, and the fusion result approaches the true error.
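A compact sketch of the weighted fusion of Equations (17)-(21) for a single reference image follows; `S`, `est`, and the function name are illustrative, with the per-view matrices and dual-view estimates assumed precomputed.

```python
import numpy as np

# Sketch of the weighted fusion of Eqs. (17)-(21) for one reference image m.
# `S[m]` is diag(1/rho_r, 1/rho_a) @ C_m (as in the dual-view sketch) and
# `est[n]` holds the dual-view estimate (r_m^n, a_m^n) of image m obtained
# by pairing it with view n.
def fuse_for_image(m, others, S, est):
    w_r, w_a, acc = [], [], []
    for n in others:
        K = np.linalg.inv(S[m] - S[n])         # Eq. (9)
        H = S[n] @ K                           # propagation matrix, Eq. (17)
        alpha_r = abs(H[0, 0]) + abs(H[1, 0])  # range influence, Eq. (19)
        alpha_a = abs(H[0, 1]) + abs(H[1, 1])  # azimuth influence
        w_r.append(1.0 / alpha_r)
        w_a.append(1.0 / alpha_a)
        acc.append(est[n])
    w_r = np.array(w_r) / np.sum(w_r)          # normalization, Eq. (21)
    w_a = np.array(w_a) / np.sum(w_a)
    acc = np.array(acc)                        # rows: (r_m^n, a_m^n)
    return float(w_r @ acc[:, 0]), float(w_a @ acc[:, 1])   # Eq. (20)
```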

3.3. Refinement of Fusion Estimation Based on Minimum Norm Least Squares

Following multi-view positioning error fusion, the results are refined using the minimum norm least-squares criterion [24,25], solved via the Moore–Penrose generalized inverse [26], to achieve accurate error estimation and enable high-precision planar positioning of SAR images. Equivalent equations between different views are used to derive the minimum norm least-squares solution for the weighted fusion estimates.
Taking three images as an example:
$$\mathbf{Y} = \mathbf{A}\mathbf{X}, \qquad \mathbf{A} = \begin{bmatrix} \dfrac{c_{11}^1}{\rho_r^1} & \dfrac{c_{21}^1}{\rho_a^1} & -\dfrac{c_{11}^2}{\rho_r^2} & -\dfrac{c_{21}^2}{\rho_a^2} & 0 & 0 \\[2mm] \dfrac{c_{12}^1}{\rho_r^1} & \dfrac{c_{22}^1}{\rho_a^1} & -\dfrac{c_{12}^2}{\rho_r^2} & -\dfrac{c_{22}^2}{\rho_a^2} & 0 & 0 \\[2mm] \dfrac{c_{11}^1}{\rho_r^1} & \dfrac{c_{21}^1}{\rho_a^1} & 0 & 0 & -\dfrac{c_{11}^3}{\rho_r^3} & -\dfrac{c_{21}^3}{\rho_a^3} \\[2mm] \dfrac{c_{12}^1}{\rho_r^1} & \dfrac{c_{22}^1}{\rho_a^1} & 0 & 0 & -\dfrac{c_{12}^3}{\rho_r^3} & -\dfrac{c_{22}^3}{\rho_a^3} \\[2mm] 0 & 0 & \dfrac{c_{11}^2}{\rho_r^2} & \dfrac{c_{21}^2}{\rho_a^2} & -\dfrac{c_{11}^3}{\rho_r^3} & -\dfrac{c_{21}^3}{\rho_a^3} \\[2mm] 0 & 0 & \dfrac{c_{12}^2}{\rho_r^2} & \dfrac{c_{22}^2}{\rho_a^2} & -\dfrac{c_{12}^3}{\rho_r^3} & -\dfrac{c_{22}^3}{\rho_a^3} \end{bmatrix},$$
$$\mathbf{X} = \begin{bmatrix} r_1 & a_1 & r_2 & a_2 & r_3 & a_3 \end{bmatrix}^{\mathrm{T}}, \qquad \mathbf{Y} = \begin{bmatrix} \Delta B_{21} & \Delta L_{21} & \Delta B_{31} & \Delta L_{31} & \Delta B_{32} & \Delta L_{32} \end{bmatrix}^{\mathrm{T}}. \tag{24}$$
Let $\mathbf{X}^{(0)} = \begin{bmatrix} r_1^{(0)} & a_1^{(0)} & r_2^{(0)} & a_2^{(0)} & r_3^{(0)} & a_3^{(0)} \end{bmatrix}^{\mathrm{T}}$ denote the fusion estimates. The residuals of the equation system satisfy
$$\mathbf{Y}^* = \mathbf{Y} - \mathbf{A}\mathbf{X}^{(0)} = \mathbf{A}\mathbf{X} - \mathbf{A}\mathbf{X}^{(0)} = \mathbf{A}\left( \mathbf{X} - \mathbf{X}^{(0)} \right) \triangleq \mathbf{A}\mathbf{Z}. \tag{25}$$
Due to rank deficiency, the Moore–Penrose generalized inverse of the coefficient matrix is used to obtain the minimum norm least-squares solution $\mathbf{Z}$ of (25):
$$\mathbf{Z} = \mathbf{A}^{+} \mathbf{Y}^*, \tag{26}$$
where $\mathbf{A}^{+}$ is the Moore–Penrose generalized inverse of $\mathbf{A}$. The final positioning error estimate is
$$\mathbf{X}^* = \mathbf{X}^{(0)} + \mathbf{Z}. \tag{27}$$
Compared to initial fusion estimates, refined results exhibit smaller equation residuals and higher accuracy. This method balances residuals across equations, preventing dominance by extreme errors and enhancing robustness. Incorporating refined error estimates into latitude/longitude calculations and repositioning SAR images accordingly yields high-precision multi-view SAR positioning results.
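The refinement step reduces to a few lines; the sketch below (assuming A, Y, and the fused estimate X0 are available from the earlier steps) applies Equations (25)-(27) via NumPy's pseudoinverse.

```python
import numpy as np

# Sketch of the refinement of Eqs. (25)-(27): correct the fused estimate X0
# by the minimum norm least-squares solution of the residual system, using
# the Moore-Penrose pseudoinverse. A and Y are the stacked multi-view system
# (see the assembly sketch in Section 2); X0 is the weighted fusion result.
def refine_minimum_norm(A, Y, X0):
    residual = Y - A @ X0                      # Y* of Eq. (25)
    Z = np.linalg.pinv(A) @ residual           # Eq. (26), minimum-norm LS
    return X0 + Z                              # X* of Eq. (27)
```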

4. Experiments and Analysis

To validate the effectiveness of the proposed method, performance evaluations were conducted using both simulated and actual measured airborne SAR images. Additionally, the impact of different algorithmic components on performance was analyzed, and the efficacy of each component was confirmed via ablation studies. Comparative experiments with the dual-view positioning method demonstrated that the proposed algorithm achieves higher estimation accuracy and more robust positioning results for both simulated and measured data.

4.1. Experiments and Analysis of Airborne SAR Simulated Data

In actual flight scenarios, the positioning error of the carrier is unpredictable, whereas in simulated flights, such errors can be explicitly designed. Thus, simulated data are utilized in this subsection to verify the effectiveness of the proposed algorithm.
For the convenience of simulation validation, this experiment neglected the influence of terrain relief, approximating the target area as a planar surface. In this context, adjusting the rotation angle of the airborne SAR image is equivalent to modifying the heading angle of the airborne SAR system. To simplify the experimental setup, we resampled the ground sampling distance in both the range and azimuth directions to 1 m. For the simulation experiments, we used image tiles covering a 1500 m × 1500 m area. The mapping relationship between pixel coordinates and geographic coordinates of the simulated images can be derived from that of actual ground-range airborne SAR images. Specifically, a real ground-range airborne SAR image was selected, as illustrated in Figure 3. It was then rotated to various angles and supplemented with distinct systematic positioning errors to construct the simulated multi-view SAR image dataset.

4.1.1. Parameter Settings

To verify the effectiveness of the proposed method, multiple experimental scenarios were designed, categorized into three types: (1) multi-view SAR images with relatively consistent systematic positioning errors under large angular differences; (2) multi-view SAR images with relatively consistent systematic positioning errors under small angular differences; and (3) multi-view SAR images with relatively inconsistent systematic positioning errors under large angular differences. The performance of the proposed algorithm was validated by comparing its results with those of the dual-view fusion positioning algorithm across these scenarios. Additionally, by analyzing the positioning errors of the proposed algorithm under different scenarios, the influence of angular geometric relationships on the algorithm’s estimation accuracy was investigated. Furthermore, the effectiveness of each algorithm module was verified by comparing positioning errors before and after the implementation of different steps within the same scenario.
(1) Scenario I: Multi-view SAR images with relatively consistent systematic positioning errors under large angular differences. Simulated images A, B, C, and D were generated by rotating the initial image by 0°, 90°, 160° and 260°, respectively, with distinct positioning errors introduced into each.
(2) Scenario II: Multi-view SAR images with relatively consistent systematic positioning errors under small angular differences. The simulated images in this scenario were created by rotating the initial image by 0°, 20°, 40° and 60°, followed by the introduction of positioning errors and cropping of the region of interest.
(3) Scenario III: Multi-view SAR images with relatively inconsistent systematic positioning errors under large angular differences. The simulated images in this scenario share the same rotation angles as those in Scenario I, but the positioning error introduced into image A was modified.
The specific parameters for the three scenarios are provided in Table 1. The term “2D positioning errors” refers to errors in the range and azimuth directions.

4.1.2. Positioning Accuracy of the Proposed Algorithm Under Different Simulated Scenarios

After selecting Hps and estimating multi-view SAR image positioning errors using the proposed algorithm, a high-precision estimation of the 2D systematic positioning error for the initial ground-range SAR image was obtained. Using these estimates as the systematic positioning errors, a new mapping between pixel and geographic coordinates can be established, enabling repositioning of SAR images to achieve high-precision results. Furthermore, since the introduced positioning errors are consistent across the entire image, the positioning error at any single point is representative of the overall positioning accuracy. In this context, any pixel from the multi-view SAR images can be selected as a check point. The positioning error at this point—calculated by comparing its repositioning result with that of the actual ground-range SAR image—serves as the positioning accuracy metric for the simulated image.
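As a minimal illustration, repositioning a pixel with the estimated offsets amounts to evaluating Equation (2) with the estimates in place of the true offsets; the per-image quantities of Section 2 are assumed at hand.

```python
import numpy as np

# Sketch of repositioning with estimated offsets (r, a): evaluating Eq. (2)
# with the estimates substituted for the true offsets yields the corrected
# geographic coordinates (B, L). C, D, rho_r, rho_a are per-image inputs.
def reposition(i, j, r, a, C, D, rho_r, rho_a):
    return np.array([i + r / rho_r, j + a / rho_a]) @ C + D
```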
(1) Scenario I: Multi-view SAR images with relatively consistent systematic positioning errors under large angular differences. To validate the effectiveness of the proposed method, Table 2 presents the positioning accuracy after multi-view fusion positioning, with comparisons to the dual-view fusion positioning method proposed by Zhang et al. [18].
Table 2 demonstrates that fusing four SAR images from different viewpoints significantly improves the positioning accuracy of airborne SAR images. The proposed method achieves an average positioning accuracy improvement of 31.26% (reduced from 6.36 m to 4.37 m) compared to the dual-view fusion method. A comparison between dual-view fusion estimates and weighted fusion estimates indicates that multi-view positioning error fusion estimation based on the geometric error propagation model effectively enhances positioning accuracy, with the weighted fusion estimates outperforming dual-view fusion by an average of 23.91% (reduced from 6.36 m to 4.84 m). Additionally, comparing weighted fusion estimates with refined results shows that correcting fusion estimates via the minimum norm least-squares method yields consistent positioning accuracy across multi-view SAR images, with an average improvement of 9.66% (reduced from 4.84 m to 4.37 m).
(2) Scenario II: Multi-view SAR images with relatively consistent systematic positioning errors under small angular differences. Table 3 presents the positioning accuracy after multi-view fusion positioning. It shows that the proposed method achieves an average positioning accuracy improvement of 78.13% (reduced from 21.44 m to 4.69 m) over dual-view fusion positioning. A comparison of planar positioning accuracies between Scenarios I and II reveals slightly lower accuracy under smaller angular differences than under larger ones. However, the proposed algorithm still delivers significant improvements in error estimation accuracy under small angular differences, highlighting its superior angular adaptability.
(3) Scenario III: Multi-view SAR images with inconsistent systematic positioning errors under large angular differences.
Table 4 presents the corresponding positioning accuracy. In principle, once an image is identified as an outlier, it is excluded from subsequent fusion calculations. However, to fully demonstrate the method's effectiveness, we still calculate the positioning error of the outlier image (Image A) using the fusion positioning method (based on Images B, C, and D); this verifies that the method can even improve the accuracy of outliers (Image A's accuracy improved from 72.01 m to 4.69 m), rather than retaining the outlier in the fusion process. As shown, the proposed method achieves an average positioning accuracy improvement of 87.84% (reduced from 38.57 m to 4.69 m) over dual-view fusion positioning under this scenario. A comparison of positioning accuracies before and after outlier removal indicates that, in the presence of images with significantly divergent systematic positioning errors, unoptimized fusion and refinement fail to effectively estimate systematic positioning errors, resulting in poor performance. In contrast, the integration of multi-view positioning error consistency identification and outlier removal in this study significantly enhances estimation accuracy, with an average improvement of 82.08% (reduced from 26.17 m to 4.69 m) in refined estimation accuracy after dataset optimization.
In summary, comparative analyses of the proposed method and the dual-view fusion method across different scenarios confirm the effectiveness of the proposed approach. A cross-scenario comparison (Scenarios I vs. II) reveals that angular geometric relationships between images influence estimation accuracy, with superior performance observed under larger angular differences. Ablation experiments within each scenario further validate the efficacy of three key steps: multi-view positioning error consistency identification and outlier removal, weighted fusion estimation of multi-view systematic positioning errors based on the geometric error propagation model, and refined estimation of multi-view systematic positioning errors via the minimum norm least-squares method. Overall, the proposed method achieves higher accuracy and robustness in positioning compared to the dual-view fusion method.

4.2. Monte Carlo Simulation Experiments and Analysis

To quantitatively validate the effectiveness and robustness of the proposed multi-view SAR image fusion positioning method, a series of Monte Carlo simulation experiments were conducted. These experiments were designed to compare the performance of the proposed method against the direct averaging method under various viewing angle configurations and positioning error conditions.

4.2.1. Experimental Setup

In the simulations, multi-view observations were emulated by applying rotational transformations to a real ground-range SAR image. To simplify the experimental scenario, we set the GSD for both the range and azimuth directions to 1 m. Systematic geolocation errors, commonly present in practical SAR systems, were simulated by adding random positioning errors to the image plane. Specifically, independent errors with a mean of 50 pixels and a standard deviation of 10 pixels were introduced in both the X and Y directions. To ensure statistical significance, the results for each scenario were obtained over 1000 independent Monte Carlo runs. The experiments were categorized into three distinct scenarios:
  • Experiment 1 (Three Views): Images were configured with viewing angles of 0°, 90°, and 260°.
  • Experiment 2 (Four Views, Consistent Errors): Images were configured with viewing angles of 0°, 90°, 260°, and 170°, with consistent systematic errors applied to all images.
  • Experiment 3 (Four Views, Inconsistent Error): The viewing angles were identical to Experiment 2, but a significantly divergent systematic positioning error (mean of −50 pixels, standard deviation of 10 pixels) was introduced into image 4 to simulate an outlier; a minimal sketch of the error-generation setup follows this list.
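The sketch below reflects our reading of this setup (not the authors' code); the seed and loop structure are illustrative.

```python
import numpy as np

# Minimal sketch of the per-run error generation: independent 2D offsets
# with a mean of 50 pixels and a standard deviation of 10 pixels, plus, for
# Experiment 3, a divergent outlier error (mean -50 pixels) on image 4.
rng = np.random.default_rng(seed=0)
N_RUNS, N_VIEWS = 1000, 4

for run in range(N_RUNS):
    errors = rng.normal(loc=50.0, scale=10.0, size=(N_VIEWS, 2))    # (x, y)
    errors[3] = rng.normal(loc=-50.0, scale=10.0, size=2)  # Exp. 3 outlier
    # ... apply `errors` to the rotated images, then run both the proposed
    # method and direct averaging, and accumulate positioning errors ...
```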

4.2.2. Results and Comparative Analysis

The positioning errors for all images in each experiment were recorded for a quantitative performance evaluation. Table 5 summarizes the comparative results.
Under conditions of relatively consistent systematic errors across views (Experiments 1 and 2), the proposed method demonstrates a significant enhancement in both positioning accuracy and stability compared to the direct averaging approach. In the three-view configuration (Experiment 1), the mean positioning error was reduced from approximately 10.81 m to a consistent 7.41 m for all images, constituting a 31.5% accuracy improvement. This advantage became even more pronounced in the four-view scenario (Experiment 2), where the mean error was reduced from 11.46 m to 6.45 m, corresponding to a 43.7% improvement. Crucially, unlike the direct averaging method which produced fluctuating errors across images, the proposed method yielded a uniform error estimate for all images within each scenario. This consistent performance underscores its superior capability to effectively leverage the geometric constraints inherent in multiple views, a fundamental aspect entirely overlooked by the simplistic averaging approach.
Meanwhile, the proposed method exhibits exceptional robustness against outlier views, as unequivocally demonstrated in Experiment 3. When one image contained a severe, inconsistent positioning error, the direct averaging method suffered catastrophic failure, with errors soaring as high as 94.02 m due to contamination from the outlier. In stark contrast, the proposed method successfully maintained the positioning error at a stable 7.66 m for all images—a performance level comparable to the consistent four-view case in Experiment 2. This result provides strong empirical evidence for the high effectiveness of the integrated outlier detection and removal module. It confirms that the module reliably identifies and excludes images with anomalous systematic errors, thereby preserving the high accuracy of the fusion process even in challenging, non-ideal observational conditions.

4.3. Experiments and Analysis of Airborne SAR Measured Data

A positioning experiment was performed using multi-view ground-range airborne SAR images of the same target area, acquired by an operational airborne SAR system. Figure 4 presents a high-definition optical image of the airborne radar observation area, where the regions enclosed by rectangular boxes of different colors correspond to the areas imaged by the four SAR images. In this optical image, Hp_i (i = 1, …, 10) denotes the locations of ten homologous points (Hps) selected from the SAR images, while Tp_i (i = 1, …, 10) represents the locations of ten test points (Tps) evenly distributed across the SAR images. The true latitude and longitude of these Hps and Tps were pre-measured using a high-precision Global Positioning System (GPS). Notably, these true geographic coordinates were used solely for evaluating the positioning accuracy of the proposed method and were not involved in the positioning computation.

4.3.1. Experimental Parameters of Measured Data

In this experiment, the airborne SAR system imaged the same target area in strip mode along different flight paths. The multi-view images acquired during four separate imaging campaigns are displayed in Figure 5.

4.3.2. Positioning Accuracy Evaluation of the Proposed Method

After multi-view image registration, 10 groups of Hps were extracted; their positions are marked as Hp1 to Hp10 on the optical reference image in Figure 4. As can be seen from Figure 5, although each Hp in the four multi-view images exhibits a certain degree of anisotropic scattering, high-precision matching can still keep the positional error of the Hps at the pixel level (usually meter-level). Given the randomness of this error, its impact on algorithm accuracy can essentially be neglected thanks to the sample-averaging effect of multiple groups of Hps. Furthermore, we selected 10 Tps on the images, marked as Tp1–Tp10, and the geographic coordinates of each test point on the high-precision optical reference image were used as the true values for positioning error evaluation.
Table 6 lists the true 2D positioning errors (range and azimuth directions) of the multi-view SAR images. As indicated in Table 6, the real positioning errors of the images in Figure 5a–c are relatively consistent, whereas those of Figure 5d differ significantly from the others. Using the Hps selected based on Figure 4, the multi-view fusion positioning method proposed in this study was applied to estimate the positioning error of each SAR image, with the results presented in Table 7. Table 7 demonstrates that the proposed method exhibits good positioning stability across the ten Tps, with relatively consistent target positioning errors in both longitude and latitude. Specifically, the positioning errors of the ten Tps range from 0 to 3 m, and the root mean square error (RMSE) of the proposed method is 1.55 m.
A comparison between Table 6 and Table 7 reveals that the proposed method can effectively estimate the systematic positioning errors of multi-view SAR images, achieving meter-level positioning accuracy. The average of the estimation results was adopted as the systematic positioning error, which was then used to compensate for the latitude and longitude of the coarsely positioned quadrangle points. Repositioning the SAR image with this compensation yielded high-precision multi-view SAR positioning results. Taking the repositioned image in Figure 5a as an example, the planar positioning errors of the ten Tps were calculated, with the results shown in Table 8.

4.3.3. Method Comparison and Ablation Experiments

Multi-view SAR images were repositioned using both the dual-view fusion positioning method and the proposed method, and the positioning accuracies of these approaches were evaluated using the Tps. The results are presented in Table 9, which integrates method comparisons and ablation experiments to facilitate analysis of algorithm component effects.
Table 9 indicates that the dual-view positioning method [18] exhibits unstable positioning accuracy, with performance degradation likely occurring under conditions of small heading angle differences and inconsistent systematic positioning errors. In contrast, the proposed algorithm significantly enhances positioning accuracy while ensuring estimation robustness. For the measured data, the proposed method achieves an average positioning accuracy improvement of 84.78% (reduced from 10.71 m to 1.63 m) compared to the dual-view fusion method.
Furthermore, a comparison of positioning accuracies before and after outlier removal reveals that weighted fusion estimation and refined estimation alone fail to effectively estimate systematic positioning errors when images with large discrepancies in systematic positioning errors are present. However, the integration of multi-view positioning error consistency identification and outlier removal leads to significant enhancements in estimation accuracy, with the refined positioning accuracy achieving an 82.42% improvement (reduced from 9.27 m to 1.63 m) after optimization.
Additionally, a comparison between weighted fusion estimates and refined results demonstrates that without the minimum norm least-squares method, positioning accuracy is inconsistent and exhibits significant variability. After correcting fusion estimates using this method, the positioning accuracy across multi-view SAR images becomes more consistent, with an average improvement of 22.75% (reduced from 2.11 m to 1.63 m).
Without loss of generality, the SAR image from the first viewing angle, shown in Figure 5a, is selected. As shown in Figure 6, we selected three areas on the optical base map to demonstrate the results before and after applying the positioning method. Figure 7 illustrates the fit between the positioning results, before and after correction, and the high-precision optical base map. It can be observed that, prior to multi-view fusion positioning, the initial SAR image exhibits significant positional offsets relative to the reference high-accuracy optical image in both the latitudinal and longitudinal directions. After multi-view fusion positioning, by contrast, the calibrated SAR images show improved consistency with the optical image in terms of road networks and texture features.
These results confirm that the proposed algorithm can estimate and compensate for the 2D systematic positioning errors (in both range and azimuth directions) of each SAR image. Following the completion of four-corner positioning and slant-to-ground conversion using the RD model, this method enables high-precision planar positioning of SAR images without the need for GCPs.

5. Discussion

(1)
The Impact of Radar Target Anisotropy
The method in this paper needs to extract multiple sets of homologous point pairs from multi-view SAR images through high-precision registration. Strictly speaking, the anisotropy of artificial targets will inevitably reduce the registration accuracy of multi-view images, especially in urban high-rise building areas. However, the actual multi-view airborne SAR images used in our experiments cover a large area, and the limited number of strongly anisotropic targets ensures pixel-level registration accuracy and maintains the reliability of the homologous points. The positional error introduced by this non-ideal registration is far smaller than the systematic geolocation error (at least tens of pixels) of each SAR image. Therefore, the impact of radar target anisotropy on the method in this paper is negligible.
(2)
The Impact of Complex Terrain Scenarios
The method proposed in this paper focuses on correcting the overall geolocation offset of images, and it offers both high efficiency and high precision, giving it significant engineering application value. The spatially variant systematic geolocation errors introduced by errors in the geolocation model parameters account for a relatively small proportion; the main source of spatial variability in the errors is usually terrain relief. Addressing this issue would significantly increase the complexity of the algorithm and is not the research focus of this paper.
This study uses multi-view airborne SAR images of flat terrain as the experimental data to verify the effectiveness of the core innovations. For multi-view SAR images of mountainous and urban high-rise building scenarios, terrain undulations and tall buildings may locally cause layover, shadows, and additional relative positioning errors. On one hand, this tends to reduce the image registration accuracy in local areas; on the other hand, it will weaken the consistency of the systematic positioning errors of multi-view images in local areas, thereby posing technical challenges to the application of the method proposed in this paper.
In future work, we plan to avoid selecting Hps in urban high-rise building areas through regional masking, correct terrain-induced errors by integrating a Digital Elevation Model (DEM) to address the issue of spatial variability of errors in mountainous scenarios, and acquire more multi-view SAR image data under different scenarios to experimentally verify the adaptability of this method in complex terrains.
(3)
The number of multi-view images and the proportion of outlier images
When only 2 images from different viewing angles are input, the method in Reference [18] can be directly adopted. When conditions allow for the acquisition of more multi-view SAR images, the method proposed in this paper should be used. The new method not only makes full use of multi-view geometric structure information to improve fusion accuracy but also can automatically eliminate a small number of SAR images with inconsistent systematic errors, thereby significantly enhancing the accuracy and robustness of the algorithm.
Theoretically, the more multi-view images there are, the higher the fusion processing accuracy will be. Considering the limited sources of current multi-view SAR image data, this paper only compares the processing performance when 2, 3, or 4 images are input. Experimental results show that the fusion of 4 multi-view images yields results under different combinations that are significantly better than those obtained by fusing 2 or 3 images, with the highest positioning accuracy reaching 1.63 m, which is already close to the level of methods corrected using control points.
However, in practical applications, considering multiple factors such as geometric distortion of terrain and ground objects, differences in multi-view target scattering, and a certain degree of inconsistency in multi-view positioning errors, inputting too many images will not only fail to continuously improve algorithm performance but also significantly increase algorithm complexity. Therefore, considering both algorithm complexity and accuracy convergence, we suggest that the number of multi-view images in practical applications should be 3–7, among which the proportion of outlier images should not exceed 30% (i.e., 1–2 images are appropriate).

6. Conclusions

This study presents a novel approach for high-precision planar geolocation based on multi-view SAR image fusion, which eliminates the need for GCPs. The method accounts for discrepancies in positioning errors among multi-view airborne SAR images and employs the coefficient of variation to optimize the multi-view SAR image dataset. Additionally, a weighting strategy and a correction approach are developed to achieve high-precision fusion estimation of positioning errors for each view image. Simulation experiments analyze the influence of angular geometric relationships between images on estimation accuracy across different scenarios. Furthermore, cross-scenario result comparisons validate the effectiveness of the consistency identification and outlier removal modules. Within each scenario, ablation experiments are conducted to investigate the complex interactions between the proposed algorithm’s components and their respective impacts on overall performance. To further validate the method, experiments using actual airborne SAR measured data are performed. Results show that the proposed method achieves an average positioning accuracy improvement of 84.78% compared with alternative methods. These findings demonstrate that the method fully leverages information from multiple distinct SAR views, thereby enhancing both the positioning accuracy and robustness of multi-view SAR images.

Author Contributions

All authors made significant contributions to this work. A.Y. developed the theoretical framework. A.Y., H.Y., Y.J., W.T. and Z.D. conceived and designed the experiments. H.Y. performed the experiments and drafted the manuscript. H.Y. and A.Y. analyzed the data. Z.D. provided insightful suggestions on both the work and the manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China under Grants 62101568, 62371460, and 62471474; partially by the National Postdoctoral Program for Innovative Talents under Grant BX20230473; partially by the Excellent Youth Program of the Hunan Provincial Natural Science Foundation under Grant 2024JJ4046; and partially by the Science and Technology Innovation Program of Hunan Province under Grant 2024RC3122.

Data Availability Statement

The data used in this study are available from the corresponding author upon reasonable request.

Acknowledgments

The authors would like to express their gratitude to all anonymous reviewers for their valuable comments and constructive suggestions, which significantly improved the quality of this paper.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Curlander, J.C. Location of Spaceborne SAR Imagery. IEEE Trans. Geosci. Remote Sens. 1982, 20, 359–364. [Google Scholar] [CrossRef]
  2. Moreira, A.; Prats-Iraola, P.; Younis, M.; Krieger, G.; Hajnsek, I.; Papathanassiou, K.P. A Tutorial on Synthetic Aperture Radar. IEEE Geosci. Remote Sens. Mag. 2013, 1, 6–43. [Google Scholar] [CrossRef]
  3. Eineder, M.; Fritz, T.; Mittermayer, J.; Roth, A.; Breit, H. TerraSAR-X Ground Segment, Basic Product Specification Document; The German Aerospace Center (DLR): Cologne, Germany, 2008. [Google Scholar]
  4. Sun, Y.; Lei, L.; Li, Z.; Kuang, G.; Yu, Q. Detecting changes without comparing images: Rules induced change detection in heterogeneous remote sensing images. ISPRS J. Photogramm. Remote Sens. 2025, 230, 241–257. [Google Scholar] [CrossRef]
  5. Wang, T.; Li, X.; Zhang, G.; Lin, M.; Deng, M.; Cui, H.; Jiang, B.; Wang, Y.; Zhu, Y.; Wang, H.; et al. Large-Scale Orthorectification of GF-3 SAR Images Without Ground Control Points for China’s Land Area. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5221617. [Google Scholar] [CrossRef]
  6. You, H.; Fu, K. Precision Processing of Synthetic Aperture Radar Images; Science Press: Beijing, China, 2011. [Google Scholar]
  7. Cheng, P.; Toutin, T. Automated High Accuracy Geometric Correction and Mosaicking without Ground Control Points. RADARSAT-2 data. GeoInformatics 2010, 13, 22. [Google Scholar]
  8. Ke, S.; Wang, Q.; Liu, Z. Energy-Based Geometric Self-Calibration Method for Spaceborne SAR Without GCPs. IEEE Trans. Geosci. Remote Sens. 2025, 63, 5215414. [Google Scholar]
  9. Raggam, H.; Gutjahr, K.; Perko, R.; Schardt, M. Assessment of the Stereo-Radargrammetric Mapping Potential of TerraSAR-X Multibeam Spotlight Data. IEEE Trans. Geosci. Remote Sens. 2010, 48, 971–977. [Google Scholar] [CrossRef]
  10. Chen, P.-H.; Dowman, I.J. A Weighted Least Squares Solution for Space Intersection of Spaceborne Stereo SAR Data. IEEE Trans. Geosci. Remote Sens. 2001, 39, 233–240. [Google Scholar] [CrossRef]
  11. Luo, Y.; Qiu, X.; Dong, Q.; Fu, K. A Robust Stereo Positioning Solution for Multiview Spaceborne SAR Images Based on the Range–Doppler Model. IEEE Geosci. Remote Sens. Lett. 2022, 19, 4008705. [Google Scholar] [CrossRef]
  12. Yin, L.; Yang, Y.; Deng, M.; Huang, Y.; Chen, K. A Twofold Stereo Positioning Method for Multiview Spaceborne SAR Images. IEEE Geosci. Remote Sens. Lett. 2023, 20, 4010605. [Google Scholar] [CrossRef]
  13. Wang, G. A Novel Two-Step Registration Method for Multi-Aspect SAR Images. In Proceedings of the 2018 China International SAR Symposium (CISS), Shanghai, China, 10–12 October 2018; pp. 1–4. [Google Scholar]
  14. Walterscheid, I. Multistatic and Multi-aspect SAR Data Acquisition to Improve Image Interpretation. In Proceedings of the 2013 IEEE International Geoscience and Remote Sensing Symposium, Melbourne, VIC, Australia, 21–26 July 2013; pp. 4194–4197. [Google Scholar]
  15. Zhang, B.; Yu, A.; Chen, X.; Wang, Z.; Dong, Z. Comparative Analysis of Single-View and Multi-View Airborne SAR Positioning Error and Course Planning for Multi-View Airborne SAR Optimal Positioning. Remote Sens. 2022, 14, 3055. [Google Scholar] [CrossRef]
  16. Xu, K.; Liu, S.; Wang, Z.; Zhang, G.; Li, Y.; Wu, B. Geometric Autocalibration of SAR Images Utilizing Constraints of Symmetric Geometry. IEEE Geosci. Remote Sens. Lett. 2022, 19, 4515005. [Google Scholar] [CrossRef]
  17. Zhang, G.; Deng, M.; Cai, C.; Zhao, R. Geometric Self-calibration of YaoGan-13 Images Using Multiple Overlapping Images. Sensors 2019, 19, 2367. [Google Scholar] [CrossRef]
  18. Zhang, B.; Yu, A.; Chen, X.; Tang, F.; Zhang, Y. An Image Planar Positioning Method Base on Fusion of Dual-View Airborne SAR Data. Remote Sens. 2023, 15, 2499. [Google Scholar] [CrossRef]
  19. Ding, C.; Liu, J.; Lei, B.; Qiu, X. Preliminary Exploration of Systematic Geolocation Accuracy of GF-3 SAR Satellite System. J. Radars 2017, 6, 11–16. [Google Scholar]
  20. Yin, L.; Deng, M.; Yang, Y.; Huang, Y.; Tang, Q. A sensitive geometric self-calibration method and stability analysis for multiview spaceborne SAR images based on the range-Doppler model. ISPRS J. Photogramm. Remote Sens. 2025, 220, 550–562. [Google Scholar] [CrossRef]
  21. Luo, Y.; Qiu, X.; Cheng, Y. A Geometric Auto-Calibration Method for Multiview UAV-Borne FMCW SAR Images. IEEE Geosci. Remote Sens. Lett. 2024, 21, 4001805. [Google Scholar] [CrossRef]
  22. Ye, Y.; Bruzzone, L.; Shan, J.; Bovolo, F.; Zhu, Q. Fast and Robust Matching for Multimodal Remote Sensing Image Registration. IEEE Trans. Geosci. Remote Sens. 2019, 57, 9059–9070. [Google Scholar] [CrossRef]
  23. Mao, S.; Wang, J.; Pu, X. Advanced Mathematical Statistics; Higher Education Press: Beijing, China, 2006. [Google Scholar]
  24. Wu, M.D.; Li, B.; Wang, W.H. Advanced Engineering Mathematics; Science Press: Beijing, China, 2004. [Google Scholar]
  25. Mattsson, P.; Zachariah, D.; Stoica, P. Analysis of the Minimum-Norm Least-Squares Estimator and Its Double-Descent Behavior. IEEE Signal Process. Mag. 2023, 40, 39–75. [Google Scholar] [CrossRef]
  26. Chen, H.; Wang, Y. A Family of Higher-order Convergent Iterative Methods for Computing the Moore–Penrose Inverse. Appl. Math. Comput. 2011, 218, 4012–4016. [Google Scholar] [CrossRef]
Figure 1. Algorithm flow block diagram. The contributions of this paper are shown in color. $\hat{r}_{mn}$ and $\hat{a}_{mn}$ are the range- and azimuth-direction positioning error estimates obtained when the positioning errors of images m and n are assumed to be equal. $r_m^*$ and $a_m^*$ denote the refined range- and azimuth-direction positioning error estimates produced by our method.
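For concreteness, the following is a minimal Python/NumPy sketch of the refined-estimation step in Figure 1. It assumes, as a labeled modeling assumption rather than the paper's exact formulation, that each dual-view estimate $\hat{r}_{mn}$ approximates the average of the individual range errors $r_m$ and $r_n$; stacking all retained pairs gives a linear system whose minimum norm least-squares solution is obtained via the Moore–Penrose pseudoinverse [25,26]. The numeric values are hypothetical.

```python
import numpy as np

def refine_errors(pair_estimates, n_images):
    """Minimum-norm least-squares refinement of per-image errors.

    pair_estimates: dict mapping (m, n) -> dual-view error estimate
        (e.g., the range-direction estimates r_hat_mn), 0-indexed.
    Returns the refined per-image estimates r*_m.
    """
    pairs = list(pair_estimates.keys())
    A = np.zeros((len(pairs), n_images))
    b = np.zeros(len(pairs))
    for row, (m, n) in enumerate(pairs):
        # Assumed observation model: r_hat_mn ~ (r_m + r_n) / 2,
        # i.e., a dual-view estimate reflects the errors of both images.
        A[row, m] = 0.5
        A[row, n] = 0.5
        b[row] = pair_estimates[(m, n)]
    # The Moore-Penrose pseudoinverse yields the minimum-norm
    # least-squares solution, which stays well defined even if A is
    # rank deficient (e.g., after outlier images have been removed).
    return np.linalg.pinv(A) @ b

# Hypothetical pairwise range-error estimates for four images (metres):
r_hat = {(0, 1): 40.0, (0, 2): 41.5, (0, 3): 43.5,
         (1, 2): 36.5, (1, 3): 38.5, (2, 3): 40.0}
print(refine_errors(r_hat, 4))
```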
Figure 2. Flowchart for error consistency identification and outlier removal.
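The screening in Figure 2 can be sketched as follows: compute the coefficient of variation (standard deviation divided by mean, as in the Vr and Va values reported in Tables 2–4 and 9) of the per-image estimates, and while it exceeds a threshold, reject the image whose removal reduces it the most. This is a minimal Python sketch; the greedy iteration and the threshold of 0.3 are illustrative assumptions, not the paper's exact decision rule.

```python
import numpy as np

def coeff_of_variation(x):
    """Coefficient of variation: std / |mean| (population std)."""
    x = np.asarray(x, dtype=float)
    return np.std(x) / abs(np.mean(x))

def screen_outliers(per_image_estimates, threshold=0.3):
    """Iteratively reject the image whose removal most reduces the
    coefficient of variation, while the CV exceeds `threshold`.

    per_image_estimates: per-image error estimates (e.g., the
        per-image means of the dual-view estimates).
    Returns indices of retained images.
    """
    keep = list(range(len(per_image_estimates)))
    x = list(per_image_estimates)
    while len(keep) > 2 and coeff_of_variation(x) > threshold:
        # Try removing each image in turn; drop the one whose removal
        # leaves the most consistent (lowest-CV) subset.
        cvs = [coeff_of_variation(x[:i] + x[i + 1:]) for i in range(len(x))]
        worst = int(np.argmin(cvs))
        del keep[worst], x[worst]
    return keep

# Per-image mean error estimates (m); image 0 is inconsistent, mirroring
# the Scenario III pattern in which image A is rejected:
print(screen_outliers([72.0, 31.3, 22.6, 28.4]))  # -> [1, 2, 3]
```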
Figure 3. Real ground-range SAR image.
Figure 4. High-definition optical image of the imaging area and the four multi-view SAR imaging areas.
Figure 5. Multi-view SAR images. (a) Heading angle 143.97°. (b) Heading angle 160.44°. (c) Heading angle 173.22°. (d) Heading angle −173.20°. The white rectangular box in (a) indicates an area of actual data loss in the original SAR image, caused by an anomaly in the airborne system’s ground data transmission and not artificially added.
Figure 6. The selected demonstration areas on the high-definition optical image. Red dots indicate the centers of the selected areas.
Figure 7. Comparison of positioning results. (a,c,e) show the calibration effect before multi-view fusion positioning; (b,d,f) show the calibration effect after multi-view fusion positioning.
Table 1. Parameter Settings for Different Scenarios.

| Scenario | Parameter | Image A | Image B | Image C | Image D |
|---|---|---|---|---|---|
| Scenario I | Heading angle (°) | 143.97 | 233.97 | 303.97 | 43.97 |
| | 2D positioning error (m) | (45, −35) | (35, −50) | (38, −45) | (42, −40) |
| | Planar positioning error (m) | 57.01 | 61.03 | 58.90 | 58.00 |
| Scenario II | Heading angle (°) | 143.97 | 163.97 | 183.97 | 203.97 |
| | 2D positioning error (m) | (45, −35) | (35, −50) | (38, −45) | (42, −40) |
| | Planar positioning error (m) | 57.01 | 61.03 | 58.90 | 58.00 |
| Scenario III | Heading angle (°) | 143.97 | 233.97 | 303.97 | 43.97 |
| | 2D positioning error (m) | (−45, 35) | (35, −50) | (38, −45) | (42, −40) |
| | Planar positioning error (m) | 57.01 | 61.03 | 58.90 | 58.00 |
Table 2. Method Comparison and Ablation Experiments (Scenario I). “——” indicates that the positioning accuracy of this image cannot be estimated.

| | | Image A | Image B | Image C | Image D | Overall Mean |
|---|---|---|---|---|---|---|
| Simulated image planar positioning error (m) | | 57.01 | 61.03 | 58.90 | 58.00 | 58.74 |
| Dual-view fusion positioning accuracy (m) [18] | A + B | 12.75 | 12.75 | —— | —— | —— |
| | A + C | 6.20 | —— | 6.20 | —— | —— |
| | A + D | 3.81 | —— | —— | 3.81 | —— |
| | B + C | —— | 5.08 | 5.08 | —— | —— |
| | B + D | —— | 6.13 | —— | 6.13 | —— |
| | C + D | —— | —— | 4.18 | 4.18 | —— |
| | Mean | 7.58 | 7.99 | 5.15 | 4.71 | 6.36 |
| Coefficient of variation before outlier removal: Vr = 0.11, Va = 0.11; no dataset optimization required | | | | | | |
| Multi-view fusion positioning accuracy (m) | Weighted fusion estimation | 6.31 | 6.33 | 3.46 | 3.25 | 4.84 |
| | Refined estimation | 4.37 | 4.37 | 4.37 | 4.37 | 4.37 |
| Improved positioning accuracy compared to [18] | | 42.35% | 45.31% | 15.15% | 7.22% | 31.26% |
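For readers who want to experiment with the “Weighted fusion estimation” rows in Tables 2–4, the sketch below shows one plausible scheme. The paper derives its weights from the geometric error propagation model; as a stand-in assumption, this sketch weights each dual-view estimate by sin² of the heading-angle difference between the two images, so near-parallel tracks (as in Scenario II) contribute little. All pairwise estimates here are hypothetical.

```python
import numpy as np

def weighted_fusion(pair_estimates, headings_deg):
    """Fuse dual-view error estimates into per-image estimates.

    pair_estimates: dict (m, n) -> pairwise estimate for images m, n.
    headings_deg: per-image heading angles in degrees.
    Weighting by sin^2 of the heading difference is an assumed proxy
    for the paper's geometric error propagation model: pairs with
    near-parallel headings receive little weight.
    """
    n = len(headings_deg)
    fused = np.zeros(n)
    weights = np.zeros(n)
    for (m, k), est in pair_estimates.items():
        d = np.deg2rad(headings_deg[m] - headings_deg[k])
        w = np.sin(d) ** 2
        for i in (m, k):
            fused[i] += w * est
            weights[i] += w
    return fused / weights

# Scenario I headings (degrees) and hypothetical pairwise estimates (m):
headings = [143.97, 233.97, 303.97, 43.97]
r_hat = {(0, 1): 40.0, (0, 2): 41.5, (0, 3): 43.5,
         (1, 2): 36.5, (1, 3): 38.5, (2, 3): 40.0}
print(weighted_fusion(r_hat, headings))
```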
Table 3. Method Comparison and Ablation Experiments (Scenario II).

| | | Image A | Image B | Image C | Image D | Overall Mean |
|---|---|---|---|---|---|---|
| Simulated image planar positioning error (m) | | 57.01 | 61.03 | 58.90 | 58.00 | 58.74 |
| Dual-view fusion positioning accuracy (m) [18] | A + B | 51.91 | 51.91 | —— | —— | —— |
| | A + C | 17.84 | —— | 17.84 | —— | —— |
| | A + D | 5.83 | —— | —— | 5.83 | —— |
| | B + C | —— | 16.79 | 16.79 | —— | —— |
| | B + D | —— | 17.84 | —— | 17.84 | —— |
| | C + D | —— | —— | 18.44 | 18.44 | —— |
| | Mean | 25.19 | 28.85 | 17.69 | 14.04 | 21.44 |
| Coefficient of variation before outlier removal: Vr = 0.59, Va = 0.42; no dataset optimization required | | | | | | |
| Multi-view fusion positioning accuracy (m) | Weighted fusion estimation | 18.57 | 6.18 | 3.25 | 6.91 | 8.73 |
| | Refined estimation | 4.69 | 4.69 | 4.69 | 4.69 | 4.69 |
| Improved positioning accuracy compared to [18] | | 81.38% | 83.74% | 73.49% | 66.60% | 78.13% |
Table 4. Method Comparison and Ablation Experiments (Scenario III).

| | | Image A | Image B | Image C | Image D | Overall Mean |
|---|---|---|---|---|---|---|
| Simulated image planar positioning error (m) | | 57.01 | 61.03 | 58.90 | 58.00 | 58.74 |
| Dual-view fusion positioning accuracy (m) [18] | A + B | 82.54 | 82.54 | —— | —— | —— |
| | A + C | 58.53 | —— | 58.53 | —— | —— |
| | A + D | 74.97 | —— | —— | 74.97 | —— |
| | B + C | —— | 5.08 | 5.08 | —— | —— |
| | B + D | —— | 6.13 | —— | 6.13 | —— |
| | C + D | —— | —— | 4.18 | 4.18 | —— |
| | Mean | 72.01 | 31.25 | 22.60 | 28.43 | 38.57 |
| Multi-view fusion positioning accuracy (m), before outlier removal | Coefficient of variation | Vr = 2.36, Va = 1.21 | | | | |
| | Weighted fusion estimation | 56.53 | 19.66 | 25.69 | 17.12 | 29.75 |
| | Refined estimation | 26.17 | 26.17 | 26.17 | 26.17 | 26.17 |
| Multi-view fusion positioning accuracy (m), after outlier removal | Coefficient of variation | Vr = 0.04, Va = 0.06; reject image A | | | | |
| | Weighted fusion estimation | 4.69 | 5.44 | 3.60 | 5.24 | 4.74 |
| | Refined estimation | 4.69 | 4.69 | 4.69 | 4.69 | 4.69 |
| Improved positioning accuracy compared to [18] | | 93.49% | 84.99% | 79.25% | 83.50% | 87.84% |
Table 5. Comparative Positioning Errors from Monte Carlo Simulations. “——” indicates the experiment does not include that image.

| Experiment Scenario | Method | Image 1 Error (m) | Image 2 Error (m) | Image 3 Error (m) | Image 4 Error (m) |
|---|---|---|---|---|---|
| Experiment 1 | Dual-view fusion average positioning accuracy [18] | 11.87 | 9.99 | 10.57 | —— |
| | Proposed method | 7.41 | 7.41 | 7.41 | —— |
| Experiment 2 | Dual-view fusion average positioning accuracy [18] | 11.02 | 11.06 | 11.87 | 11.93 |
| | Proposed method | 6.45 | 6.45 | 6.45 | 6.45 |
| Experiment 3 | Dual-view fusion average positioning accuracy [18] | 31.83 | 40.26 | 44.23 | 94.02 |
| | Proposed method | 7.66 | 7.66 | 7.66 | 7.66 |
Table 6. True Values of 2D Positioning Errors for Multi-View SAR Images.

| Point | Figure 5a (m) | Figure 5b (m) | Figure 5c (m) | Figure 5d (m) |
|---|---|---|---|---|
| Tp1 | [14.75, 5.25] | [14.93, 5.49] | [14.84, 5.05] | [10.46, 9.56] |
| Tp2 | [14.93, 6.22] | [14.47, 6.50] | [14.56, 6.18] | [10.66, 8.68] |
| Tp3 | [13.06, 4.05] | [13.57, 4.38] | [13.98, 4.57] | [9.36, 8.55] |
| Tp4 | [14.82, 5.70] | [14.76, 5.88] | [14.54, 5.68] | [10.24, 10.34] |
| Tp5 | [14.56, 6.60] | [14.38, 6.35] | [14.05, 6.19] | [10.67, 9.86] |
| Tp6 | [14.76, 5.47] | [14.67, 5.73] | [14.53, 5.40] | [10.95, 8.42] |
| Tp7 | [15.12, 5.37] | [15.00, 5.56] | [15.09, 5.73] | [10.88, 8.79] |
| Tp8 | [14.07, 5.95] | [14.35, 5.77] | [13.69, 5.47] | [9.53, 10.78] |
| Tp9 | [13.55, 5.40] | [13.89, 5.67] | [13.72, 5.23] | [9.98, 10.56] |
| Tp10 | [14.47, 6.26] | [14.57, 5.78] | [14.32, 5.49] | [10.54, 9.78] |
| Mean | [14.41, 5.63] | [14.46, 5.71] | [14.33, 5.50] | [10.33, 9.53] |
Table 7. Estimated 2D Positioning Errors for Multi-View SAR Images.

| Point | Figure 5a (m) | Figure 5b (m) | Figure 5c (m) | Figure 5d (m) |
|---|---|---|---|---|
| Tp1 | [15.19, 5.29] | [15.35, 5.61] | [15.22, 5.27] | [10.79, 9.86] |
| Tp2 | [14.75, 7.00] | [14.45, 7.29] | [14.72, 6.96] | [11.00, 9.40] |
| Tp3 | [14.71, 2.15] | [14.81, 2.19] | [14.71, 2.16] | [9.50, 6.04] |
| Tp4 | [14.62, 6.27] | [14.68, 6.48] | [14.60, 6.28] | [10.43, 10.91] |
| Tp5 | [15.17, 5.30] | [15.08, 4.88] | [15.09, 5.47] | [11.58, 10.87] |
| Tp6 | [14.49, 5.88] | [14.49, 6.19] | [14.45, 5.88] | [10.99, 8.90] |
| Tp7 | [14.30, 5.47] | [14.18, 5.49] | [14.30, 5.48] | [10.17, 8.37] |
| Tp8 | [12.53, 6.30] | [13.54, 7.11] | [12.79, 4.43] | [9.56, 12.11] |
| Tp9 | [13.94, 5.40] | [14.22, 6.00] | [13.97, 5.62] | [10.13, 11.00] |
| Tp10 | [15.31, 5.66] | [15.48, 5.37] | [15.27, 5.42] | [11.49, 9.87] |
| Mean | [14.50, 5.53] | [14.63, 5.66] | [14.51, 5.30] | [10.56, 9.73] |
Table 8. Positioning Errors of Figure 5a Obtained by the Proposed Method.

| Point | ΔB (m) | ΔL (m) | Planar (m) |
|---|---|---|---|
| Tp1 | −0.17 | −1.12 | 1.13 |
| Tp2 | 0.61 | −0.77 | 0.99 |
| Tp3 | 0.70 | 0.57 | 0.91 |
| Tp4 | −1.71 | 1.40 | 2.20 |
| Tp5 | −1.52 | 0.69 | 1.67 |
| Tp6 | −0.62 | −0.90 | 1.10 |
| Tp7 | 0.47 | 1.55 | 1.62 |
| Tp8 | 1.12 | 0.11 | 1.13 |
| Tp9 | 1.11 | −0.59 | 1.25 |
| Tp10 | 1.29 | 1.51 | 1.99 |
| RMSE | 1.04 | 1.15 | 1.55 |
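As a check on how Table 8 is assembled, the per-point planar error is the root-sum-square of the ΔB and ΔL components (e.g., Tp1: √(0.17² + 1.12²) ≈ 1.13 m). The snippet below computes the planar errors and a column-wise RMSE; the normalization convention used for the RMSE row is not stated here, so the divisor of n below is an assumption, and the printed values may differ slightly from the table.

```python
import numpy as np

# (dB, dL) residuals for Tp1-Tp10 from Table 8, in metres.
residuals = np.array([
    [-0.17, -1.12], [0.61, -0.77], [0.70, 0.57], [-1.71, 1.40],
    [-1.52, 0.69], [-0.62, -0.90], [0.47, 1.55], [1.12, 0.11],
    [1.11, -0.59], [1.29, 1.51],
])

# Planar error per point: root-sum-square of the two components.
planar = np.hypot(residuals[:, 0], residuals[:, 1])

# Column-wise RMSE; normalizing by n (not n - 1) is an assumption.
rmse = np.sqrt(np.mean(residuals ** 2, axis=0))
print(planar.round(2))  # per-point planar errors, e.g. 1.13 for Tp1
print(rmse.round(2), np.sqrt(np.mean(planar ** 2)).round(2))
```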
Table 9. Method Comparison and Ablation Experiment Results on Airborne SAR Measured Data. (A–D denote Figures 5a–d.)

| | | Figure 5a | Figure 5b | Figure 5c | Figure 5d | Overall Mean |
|---|---|---|---|---|---|---|
| Positioning accuracy of system-level geometric correction (m) | | 15.50 | 15.56 | 15.41 | 13.96 | 15.11 |
| Dual-view fusion positioning accuracy (m) [18] | A + B | 2.78 | 2.63 | —— | —— | —— |
| | A + C | 1.38 | —— | 1.26 | —— | —— |
| | A + D | 9.09 | —— | —— | 7.55 | —— |
| | B + C | —— | 5.08 | 5.11 | —— | —— |
| | B + D | —— | 23.56 | —— | 19.21 | —— |
| | C + D | —— | —— | 27.38 | 23.51 | —— |
| | Mean | 4.42 | 10.42 | 11.25 | 16.76 | 10.71 |
| Multi-view fusion positioning accuracy (m), before outlier removal | Coefficient of variation | Vr = 2.50, Va = 1.00 | | | | |
| | Weighted fusion estimation | 5.27 | 12.28 | 8.61 | 14.05 | 10.05 |
| | Refined estimation | 8.91 | 10.15 | 9.36 | 8.64 | 9.27 |
| Multi-view fusion positioning accuracy (m), after outlier removal | Coefficient of variation | Vr = 0.24, Va = 0.04; reject Figure 5d | | | | |
| | Weighted fusion estimation | 2.34 | 1.38 | 2.20 | 2.53 | 2.11 |
| | Refined estimation | 1.55 | 1.48 | 1.60 | 1.89 | 1.63 |
| Improved positioning accuracy compared to [18] | | 64.93% | 85.80% | 85.78% | 88.72% | 84.78% |
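The improvement percentages in the last row of Table 9 follow directly from the per-image dual-view means and the refined accuracies after outlier removal, e.g., for Figure 5a: (4.42 − 1.55)/4.42 ≈ 64.93%. A short snippet makes the arithmetic explicit:

```python
# Per-image dual-view mean accuracy [18] and refined multi-view accuracy
# after outlier removal, taken from Table 9 (metres); the last entry in
# each list is the overall mean.
dual_view = [4.42, 10.42, 11.25, 16.76, 10.71]
refined = [1.55, 1.48, 1.60, 1.89, 1.63]

improvement = [100.0 * (d - r) / d for d, r in zip(dual_view, refined)]
print([f"{p:.2f}%" for p in improvement])
# ['64.93%', '85.80%', '85.78%', '88.72%', '84.78%'] -- matches Table 9.
```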