Article

Rational Polynomial Coefficient Estimation via Adaptive Sparse PCA-Based Method

College of Electronic Science and Technology, National University of Defense Technology, Changsha 410073, China
* Author to whom correspondence should be addressed.
Remote Sens. 2024, 16(16), 3018; https://doi.org/10.3390/rs16163018
Submission received: 13 June 2024 / Revised: 8 August 2024 / Accepted: 15 August 2024 / Published: 17 August 2024

Abstract

The Rational Function Model (RFM) is composed of numerous highly correlated Rational Polynomial Coefficients (RPCs) and establishes a mathematical relationship between two-dimensional image coordinates and three-dimensional spatial coordinates. Because of ill-posedness and overparameterization, the estimated RPCs are sensitive to even slight perturbations in the observation data, particularly when only a limited number of Ground Control Points (GCPs) is available. Recently, Principal Component Analysis (PCA) has demonstrated significant performance improvements on the RFM optimization problem. In a PCA-based RFM, each Principal Component (PC) is a linear combination of all variables in the design matrix. However, some of the original variables are noise related and contribute little or nothing to the construction of the PCs, which leads to overparameterization and makes the RPC estimation process ill-posed. To address this problem, we propose an Adaptive Sparse Principal Component Analysis-based RFM method (ASPCA-RFM) for RPC estimation. In this method, an Elastic Net sparsity constraint is introduced to ensure that each PC contains only a small number of original variables, which automatically eliminates unnecessary variables during PC computation. Since the optimal regularization parameters of the Elastic Net vary significantly across scenarios, an adaptive regularization parameter approach is proposed to dynamically adjust the regularization parameters according to the explained variance of the PCs and the degrees of freedom. With the proposed method, the noise and error in the design matrix are reduced, and the ill-posedness and overparameterization of RPC estimation are significantly mitigated. Extensive experiments validate the effectiveness of our method: compared to existing state-of-the-art methods, it yields markedly improved or competitive performance.

1. Introduction

Optical remote sensing satellites are increasingly used in a wide range of applications, such as digital map generation, space surveillance, meteorological monitoring and disaster early warning [1]. To extract accurate spatial information from remote sensing images, two categories of sensor models are used to represent the mathematical relationship between image and ground spaces: rigorous models and generic models [2]. Rigorous models simulate the physical imaging process of satellite sensors; hence, they are extremely complex and require ephemeris, attitude and other ancillary data from the satellite platform [3,4]. In contrast, the Rational Function Model (RFM), the most popular generic model, is independent of satellite sensors and provides a mathematically simple and straightforward way to describe the geometric mapping from the three-dimensional world to the corresponding two-dimensional image coordinate system [5].
The RFM is the ratio of two cubic polynomials with 80 Rational Polynomial Coefficients (RPCs). The estimation of the RPCs is accomplished through a regression algorithm using a series of Ground Control Points (GCPs), each consisting of geographic coordinates (i.e., latitude, longitude and height) and the corresponding image coordinates (i.e., line and sample). Nevertheless, the existence of a large number of highly correlated RPCs makes the estimation essentially an ill-posed problem [5]. Due to the multicollinearity of the design matrix, a small amount of noise or error in the observation data may significantly affect the estimated RPCs [6]. This paper aims to address the problems of ill-posedness and overparameterization during the RPC estimation process, especially when the observation data are insufficient.
Previous studies on RPC estimation can be roughly divided into two types: regularization-based methods [7,8,9,10] and variable selection methods [11,12,13,14,15,16]. The first type finds optimal results through parameter penalties during the optimization process. Ridge estimation is a regularization-based technique that computes RPCs by minimizing the residuals together with the $\ell_2$-norm of the estimated parameters [7]. However, $\ell_2$-norm regularization does not reduce the demand for observation data. Long et al. [9] proposed an $\ell_1$-norm-regularized least squares (L1LS) method that applies the $\ell_1$-norm penalty to the parameters. L1LS typically yields sparse RPCs with only a few non-zero elements, making the ill-posed problem well posed; consequently, it can estimate more robust RPCs from fewer GCPs. Despite these advantages, manually selecting an optimal value of the regularization parameter remains challenging. To address this problem, Gholinejad et al. [10] proposed a parameter-free linear framework for RFM optimization, which imposes the $\ell_1$-norm on both the residuals and the RPCs within a linear objective function.
The second type removes unnecessary parameters from the RPCs and is known as variable selection. Zhao et al. [11] directly left out the third-order terms, while Zhang et al. [12] omitted redundant RPCs using a scatter matrix and an elimination transformation strategy. Other well-known variable selection methods include the particle swarm optimization method [13], the nested regression-based optimal selection method [14] and the two-stage uncorrelated and statistically significant RFM (USS-RFM) [15]. To determine the optimal RPCs, Zoej et al. [16] employed a meta-heuristic optimization algorithm that searches the solution space based on the root mean square error over a set of GCPs. In recent years, with the rapid development of satellite processing hardware, many researchers [17,18] have explored onboard processing techniques for satellite remote sensing images, especially implementations of RPC estimation; this line of work focuses on high-performance and disruptive computing in remote sensing and provides technical support and research directions for solving RPCs on board satellites. Furthermore, Zhang et al. [19] proposed a field-programmable gate array (FPGA)-based fixed-point (FP)-RPC orthorectification method, which significantly improves computational speed and reduces resource consumption compared to PC-based implementations while preserving accuracy.
As mentioned above, due to ill-posedness and overparameterization, any small perturbation in the observation data can significantly affect the estimated results [6]. The Principal Component Analysis-based RFM method (PCA-RFM) [20] performs variable selection in the Principal Component (PC) space, achieving dimensionality reduction while mitigating the effect of noise. In PCA-RFM, PCs with small eigenvalues are considered noise related and can be discarded directly: PCs with eigenvalues smaller than a preset threshold are removed, and the design matrix is then reconstructed through the inverse PCA transformation. The reconstructed design matrix, with significantly reduced noise interference, allows stable RPCs to be obtained through QR decomposition and the least squares (LS) method [21]. However, selecting the optimal threshold parameter is a key factor in determining the algorithm's performance; therefore, building on PCA-RFM, the APCA-RFM method [6] developed a thresholding ridge ratio criterion to obtain a parameter-free RFM method. Another method with a mathematical representation similar to PCA is the Empirical Orthogonal Function and Singular Value Decomposition method (EOF-SVD), which is utilized to analyze spatio-temporal environmental data [22,23]. EOF-SVD combines EOF analysis and SVD techniques, focuses on extracting the main spatio-temporal patterns and variations from the data, and is commonly applied in fields such as meteorology, oceanography and earth sciences.
The aforementioned PCA-based RFM methods reduce the effect of multicollinearity by discarding noise-related PCs. Each PC is a linear combination of all original variables, with each element of the corresponding eigenvector indicating the weight of a variable in the design matrix. In RFM problems, the extraction of GCPs inevitably introduces varying degrees of error and noise, which directly destabilizes the variables in the design matrix. Typically, the elements of the eigenvectors are all non-zero; however, some of the original variables are noise related and contribute little or nothing to PC construction. Removing these unnecessary elements from the eigenvectors yields more robust regression results [24]. Motivated by this, we introduce Sparse Principal Component Analysis (SPCA) [24] into the RPC estimation problem and propose an Adaptive Sparse PCA-RFM method (ASPCA-RFM). Even with insufficient GCPs, the proposed method can extract accurate and stable geographic information from remote sensing images.
Similarly to SPCA [24], we incorporate the Elastic Net [25] constraint into the PCA decomposition to derive sparse eigenvectors. With this approach, each PC consists of only a subset of the original variables in the design matrix. However, each eigenvector requires an Elastic Net regression (i.e., 78 regression operations for all PCs), which demands significant extra computation. To address this, we adopt the Non-linear Iterative Partial Least Squares (NIPALS) [26] framework to compute the eigenvectors iteratively and apply the Elastic Net constraint only to signal-related PCs. Moreover, applying the Elastic Net to the eigenvectors automatically shrinks noise-related PCs to zero vectors, without the need for a threshold parameter or a complex automatic threshold selection algorithm as in PCA-RFM or APCA-RFM. Furthermore, since the regularization parameters of the Elastic Net largely determine the algorithm's performance, we propose an adaptive regularization parameter approach that dynamically adjusts the regularization parameters based on the explained variance of the PCs and the degrees of freedom, thereby enhancing the performance of ASPCA-RFM. The main contributions of this paper are three-fold:
(1)
We incorporate SPCA into the RPC estimation problem to automatically eliminate unnecessary and noise-related variables during PC computation.
(2)
We propose an adaptive regularization parameter approach to dynamically adjust the regularization parameters based on the explained variance of PCs and degrees of freedom, enhancing the method’s robustness in different scenarios.
(3)
We conduct extensive experiments to evaluate the performance of the proposed method, and the results show that our method demonstrates improved performance compared to existing competing methods.
The remainder of this paper is organized as follows. In Section 2, the RFM structure and the PCA-based RFM algorithm are introduced. In Section 3, an improved RPC estimation method based on adaptive sparse PCA is proposed. In Section 4, the performance of different competing methods is evaluated on several real datasets to verify the effectiveness of the proposed method. Finally, Section 5 presents some discussion and conclusions.

2. Theoretical Background

2.1. Rational Function Model

The RFM defines a transformation relationship between the image pixel coordinates and the ground coordinates by the ratios of polynomials. The mathematical form of the ground-to-image transformation [27] can be expressed as follows:
$$l = \frac{N_l(X, Y, Z)}{D_l(X, Y, Z)} \tag{1}$$
$$s = \frac{N_s(X, Y, Z)}{D_s(X, Y, Z)} \tag{2}$$
where $(l, s)$ represents the normalized line and sample coordinates in image space, and $(X, Y, Z)$ represents the normalized coordinates of ground points in object space. $N_l$, $D_l$, $N_s$ and $D_s$ denote four third-order polynomial functions, given by the following equations.
$$\begin{aligned} N_l(X, Y, Z) &= a_1 + a_2 X + a_3 Y + \cdots + a_{20} Z^3 \\ D_l(X, Y, Z) &= b_1 + b_2 X + b_3 Y + \cdots + b_{20} Z^3 \\ N_s(X, Y, Z) &= c_1 + c_2 X + c_3 Y + \cdots + c_{20} Z^3 \\ D_s(X, Y, Z) &= d_1 + d_2 X + d_3 Y + \cdots + d_{20} Z^3 \end{aligned} \tag{3}$$
where $a_i, b_i, c_i, d_i$ $(i = 1, 2, \ldots, 20)$ are the Rational Polynomial Coefficients. Although the RFM is a nonlinear model, it is usually expressed in a linear form by setting $b_1$ and $d_1$ to 1. When $b_1$ and $d_1$ are not equal to 1, the line and sample coefficients can be divided by $b_1$ and $d_1$, respectively, to obtain a linear form of the RFM, as follows:
$$\begin{aligned} N_l(X, Y, Z) - l \cdot D_l(X, Y, Z) &= 0 \\ N_s(X, Y, Z) - s \cdot D_s(X, Y, Z) &= 0 \end{aligned} \tag{4}$$
In order to estimate the RPCs, the above equations can be reformulated as a standard linear Gauss–Markov model [9]:
$$\mathbf{y} = \mathbf{A}\mathbf{x} + \mathbf{e} \tag{5}$$
where $\mathbf{y} \in \mathbb{R}^{2n}$ is the vector of observations, $\mathbf{A} \in \mathbb{R}^{2n \times r}$ is the design matrix, $\mathbf{x} \in \mathbb{R}^{r}$ is the vector of unknowns (i.e., the RPCs) and $\mathbf{e} \in \mathbb{R}^{2n}$ is the vector of random errors. Here, $n$ denotes the number of GCPs, and $r$ is set to 78, the number of unknown RPCs. Thus, the design matrix can be written as follows:
$$\mathbf{A} = \begin{bmatrix} \mathbf{A}_l & \mathbf{0} \\ \mathbf{0} & \mathbf{A}_s \end{bmatrix}_{2n \times r} \tag{6}$$
where $\mathbf{A}_l \in \mathbb{R}^{n \times 39}$ and $\mathbf{A}_s \in \mathbb{R}^{n \times 39}$ are the design matrices of the lines and samples, respectively, which can be formulated as follows:
$$\mathbf{A}_l = [\mathbf{A}_l^1, \mathbf{A}_l^2, \ldots, \mathbf{A}_l^n]^T, \qquad \mathbf{A}_s = [\mathbf{A}_s^1, \mathbf{A}_s^2, \ldots, \mathbf{A}_s^n]^T \tag{7}$$
$$\begin{aligned} \mathbf{A}_l^i &= [\,1, X_i, Y_i, \ldots, Z_i^3, -l_i X_i, -l_i Y_i, \ldots, -l_i Z_i^3\,]^T \\ \mathbf{A}_s^i &= [\,1, X_i, Y_i, \ldots, Z_i^3, -s_i X_i, -s_i Y_i, \ldots, -s_i Z_i^3\,]^T \end{aligned} \tag{8}$$
The vector of observations $\mathbf{y}$ and the vector of RPCs $\mathbf{x}$ are defined as follows:
$$\mathbf{y} = [\,l_1, l_2, \ldots, l_n, s_1, s_2, \ldots, s_n\,]^T \tag{9}$$
$$\mathbf{x} = [\,a_1, a_2, \ldots, a_{20}, b_2, b_3, \ldots, b_{20}, c_1, c_2, \ldots, c_{20}, d_2, d_3, \ldots, d_{20}\,]^T \tag{10}$$
The standard linear Gauss–Markov model can be estimated by the Ordinary Least Squares (OLS) method [21]:
$$\hat{\mathbf{x}} = (\mathbf{A}^T\mathbf{A})^{-1}\mathbf{A}^T\mathbf{y} \tag{11}$$
In Equation (6), $\mathrm{rank}(\mathbf{A}) < \min(2n, r)$, indicating linear correlation between the column vectors of $\mathbf{A}$. This multicollinearity may result in instability of the regression model and inaccuracy of the parameter estimation.
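To make the structure of Equations (5)–(10) concrete, the following NumPy sketch assembles the observation system from $n$ GCPs. This is an illustrative sketch rather than the authors' implementation: the monomial ordering in cubic_terms is an assumed convention (any fixed ordering of the 20 cubic terms is valid), and the inputs are assumed to be already normalized coordinates.

```python
import numpy as np
from itertools import product

def cubic_terms(X, Y, Z):
    # All 20 monomials X^i * Y^j * Z^k with i + j + k <= 3; the constant
    # term comes first. The ordering is an assumed convention.
    return np.array([X**i * Y**j * Z**k
                     for i, j, k in product(range(4), repeat=3)
                     if i + j + k <= 3])

def design_row(coord, X, Y, Z):
    # One row of A_l (coord = l) or A_s (coord = s), per Equation (8),
    # with b1 = d1 = 1 so the denominator's constant term is dropped.
    m = cubic_terms(X, Y, Z)
    return np.concatenate([m, -coord * m[1:]])   # 20 + 19 = 39 entries

def build_system(l, s, X, Y, Z):
    # Block-diagonal design matrix (Equation (6)) and observations (Equation (9)).
    Al = np.stack([design_row(*p) for p in zip(l, X, Y, Z)])
    As = np.stack([design_row(*p) for p in zip(s, X, Y, Z)])
    A = np.block([[Al, np.zeros_like(As)],
                  [np.zeros_like(Al), As]])      # 2n x 78
    return A, np.concatenate([l, s])

# OLS (Equation (11)); lstsq avoids explicitly inverting the often
# ill-conditioned normal matrix A^T A:
# x_hat, *_ = np.linalg.lstsq(A, y, rcond=None)
```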

2.2. PCA-Based RFM Optimization

The measurement noise of the observations (GCPs) is inevitably transmitted to the design matrix; thus, it is effective to apply PCA to the design matrix to obtain denoised results. The mean-centered design matrix $\bar{\mathbf{A}}$ is computed as follows:
$$\bar{\mathbf{A}} = \mathbf{A} - \mathrm{mean}(\mathbf{A}) \tag{12}$$
Then, the covariance matrix $\mathbf{C}$ of $\bar{\mathbf{A}}$ is determined by the following equation [21]:
$$\mathbf{C} = \frac{\bar{\mathbf{A}}^T\bar{\mathbf{A}}}{n - 1} \tag{13}$$
After specifying $\mathbf{C}$, the PCA transformation [20] is applied to $\bar{\mathbf{A}}$, as follows:
$$\mathbf{Q} = \bar{\mathbf{A}}\mathbf{V} \tag{14}$$
where $\mathbf{V}$ is the matrix of eigenvectors of $\mathbf{C}$ and $\mathbf{Q}$ is the matrix of 78 Principal Components (PCs), equal in number to the unknown RPCs. Let $\lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_{78}$ denote the eigenvalues of $\mathbf{C}$ in decreasing order; eigenvectors with larger eigenvalues are considered more important, as they explain more variance. Therefore, only the first $k$ PCs are retained while the others are discarded. Normally, the value of $k$ is determined by a preset threshold, as follows:
$$k = \max\left\{\, j : \lambda_j > t,\ j = 1, \ldots, 78 \,\right\} \tag{15}$$
where $t$ is a preset threshold value and $\mathbf{V}_k$ is the matrix consisting of the first $k$ eigenvectors. Once $\mathbf{V}_k$ is assigned, the reconstructed design matrix $\mathbf{A}_{re}$ is determined by the following formulas [6].
$$\bar{\mathbf{A}}_{re} = \bar{\mathbf{A}}\mathbf{V}_k\mathbf{V}_k^T \tag{16}$$
$$\mathbf{A}_{re} = \bar{\mathbf{A}}_{re} + \mathrm{mean}(\mathbf{A}) \tag{17}$$
Finally, the noise in the design matrix is significantly reduced, and the unknown RPCs can be computed by QR decomposition with column pivoting and the LS method [21]. Through the PCA-based RFM method, the estimated RPCs are more robust to ill-posedness and overparameterization.
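As a concrete illustration of Equations (12)–(17), the following sketch re-implements the PCA-RFM reconstruction and solve step. It is a minimal sketch, not the reference code: np.linalg.lstsq is used as a simple stand-in for the pivoted-QR least-squares solver described in the text.

```python
import numpy as np

def pca_rfm(A, y, t=0.01):
    # Mean-center the design matrix (Equation (12)).
    mean = A.mean(axis=0)
    A_bar = A - mean
    n = A.shape[0] // 2                      # number of GCPs
    C = A_bar.T @ A_bar / (n - 1)            # covariance matrix (Equation (13))
    lam, V = np.linalg.eigh(C)               # eigh returns ascending eigenvalues
    lam, V = lam[::-1], V[:, ::-1]           # reorder to descending
    k = int(np.sum(lam > t))                 # retained PCs (Equation (15))
    Vk = V[:, :k]
    A_re = A_bar @ Vk @ Vk.T + mean          # Equations (16)-(17)
    # Solve for the RPCs on the denoised matrix; lstsq stands in for the
    # QR-with-column-pivoting + LS solver used in the paper.
    x, *_ = np.linalg.lstsq(A_re, y, rcond=None)
    return x
```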

3. Methodology

3.1. SPCA-Based RFM Method via NIPALS

In PCA-based RPC estimation methods, as indicated in Equation (14), each element of an eigenvector represents the weight of a variable in the design matrix. However, not all variables are necessary, and some are even noise related for the RPC estimation problem. Removing these redundant elements from the eigenvectors yields more robust regression results, particularly when the observation data are insufficient. To this end, we introduce the Sparse Principal Component Analysis (SPCA) method into the RFM task. SPCA can be viewed as adding a variable selection capability to PCA, determining through sparsity constraints which variables are most important for constructing the PCs. SPCA [24] imposes an Elastic Net constraint on the reconstruction of the eigenvectors, as follows:
$$\hat{\mathbf{v}}_i = \arg\min_{\mathbf{v}_i} \left\| \mathbf{q}_i - \bar{\mathbf{A}}\mathbf{v}_i \right\|_2^2 + \mu \left( \frac{1-\alpha}{2} \left\| \mathbf{v}_i \right\|_2^2 + \alpha \left\| \mathbf{v}_i \right\|_1 \right) \tag{18}$$
where $\hat{\mathbf{v}}_i$ is the output of the Elastic Net regression, and $\mathbf{v}_i$ and $\mathbf{q}_i$ denote the $i$-th eigenvector and $i$-th PC, respectively. The Elastic Net is essentially a combination of $\ell_2$-norm and $\ell_1$-norm [28] regularization, with two regularization parameters $\mu$ and $\alpha$. $\alpha$ controls the balance between the $\ell_1$ and $\ell_2$ penalties: when $\alpha = 1$, the Elastic Net reduces to the Lasso; when $\alpha = 0$, it becomes ridge regression. $\mu$ controls the overall strength of the regularization; a higher value of $\mu$ increases the penalty on the magnitude of the coefficients, leading to sparser results. By introducing the Elastic Net constraint into the estimation of the eigenvectors, SPCA automatically sets unnecessary variables to zero when constructing the PCs, which reduces their impact on the estimated PCs and the regression results. Because the design matrix suffers from severe multicollinearity and contains a large number of redundant variables, especially when data are insufficient, SPCA ensures that each PC consists of only a subset of the most important original variables, which significantly enhances the robustness of RPC estimation.
According to Equation (18), each eigenvector requires an Elastic Net regression, which is computationally inefficient. The most popular algorithms for computing the PCA decomposition are Eigenvalue Decomposition (EVD) and the Non-linear Iterative Partial Least Squares (NIPALS) algorithm [26]. EVD is the traditional approach and obtains the PCs by computing the eigenvalues and eigenvectors of the covariance matrix of the data; when all PCs are required, EVD is more numerically robust and efficient. NIPALS computes the PCA decomposition iteratively, one PC at a time in descending eigenvalue order; when only a few leading PCs with the largest eigenvalues are required, NIPALS is more computationally efficient. In the RFM problem, only a small number of PCs usually needs to be computed (e.g., 10 to 20 out of a total of 78). Therefore, to reduce computation, we implement the proposed method within the NIPALS framework. The SPCA-based RFM algorithm is described in Algorithm 1.
In the NIPALS algorithm, the input is the mean-centered design matrix $\bar{\mathbf{A}}$. The matrix $\mathbf{V}_k \in \mathbb{R}^{r \times k}$ contains the first $k$ sparse eigenvectors (in descending eigenvalue order), where $k$ is the number of retained PCs. $\mathbf{Q}_k \in \mathbb{R}^{2n \times k}$ represents the geometric projection of $\bar{\mathbf{A}}$ onto the columns of $\mathbf{V}_k$, also known as the PCs. All symbols in Algorithm 1 are the same as those in Section 2.2.
Algorithm 1 $[\mathbf{V}_k, \mathbf{Q}_k, k] = \mathrm{NIPALS}(\bar{\mathbf{A}})$
1: Require: mean-centered design matrix $\bar{\mathbf{A}}$
2:   Set $\mathbf{R} = \bar{\mathbf{A}}$
3:   Initialize $\mathbf{V}_0 = \mathbf{Q}_0 = \varnothing$
4:   Set $\varepsilon = 10^{-6}$ (convergence threshold)
5:   Initialize $\mathbf{q}$ to a non-zero column of $\mathbf{R}$
6:   for $j = 1$ to 78 do
7:     Set $\mathbf{q}_{new} = \mathbf{q}$ and $\mathbf{q}_{old} = 10\,\mathbf{q}$
8:     while $\| \mathbf{q}_{new} - \mathbf{q}_{old} \|_2 > \varepsilon$ do
9:       $\mathbf{q}_{old} = \mathbf{q}_{new}$
10:       $\mathbf{v} = \mathbf{R}^T\mathbf{q} / (\mathbf{q}^T\mathbf{q})$
11:       $\mathbf{v} = \mathbf{v} / \sqrt{\mathbf{v}^T\mathbf{v}}$
12:       $\mathbf{q} = \mathbf{R}\mathbf{v}$
13:       $\mathbf{q}_{new} = \mathbf{q}$
14:     end while
15:     $\hat{\mathbf{v}} = \arg\min_{\mathbf{v}} \| \mathbf{q} - \bar{\mathbf{A}}\mathbf{v} \|_2^2 + \mu \left( \frac{1-\alpha}{2} \| \mathbf{v} \|_2^2 + \alpha \| \mathbf{v} \|_1 \right)$
16:     if $\hat{\mathbf{v}} = \mathbf{0}$ then
17:       $k = j - 1$
18:       break
19:     end if
20:     $\mathbf{Q}_j = [\mathbf{Q}_{j-1}\ \ \mathbf{R}\hat{\mathbf{v}}]$
21:     $\mathbf{V}_j = [\mathbf{V}_{j-1}\ \ \hat{\mathbf{v}}]$
22:     $\mathbf{R} = \mathbf{R} - \mathbf{q}\mathbf{v}^T$
23:   end for
24:   return $\mathbf{V}_k, \mathbf{Q}_k, k$
NIPALS can be regarded as a variant of the power method [29], which iteratively computes each PC and then performs matrix deflation before computing the next one. Steps 8–14 in Algorithm 1 essentially perform the iterations $\mathbf{v} \leftarrow \bar{\mathbf{A}}^T\bar{\mathbf{A}}\mathbf{v}$ and $\mathbf{q} \leftarrow \bar{\mathbf{A}}\bar{\mathbf{A}}^T\mathbf{q}$, during which $\mathbf{v}$ and $\mathbf{q}$ converge to the eigenvector and the PC, respectively. In step 22, after the current PC is obtained in each iteration, Hotelling's deflation method [30] is employed to deflate the matrix. This operation removes the influence of the current PC from the design matrix, preparing it for the extraction of the subsequent PC.
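A compact Python sketch of Algorithm 1 is given below, using scikit-learn's ElasticNet for step 15. It is a simplified illustration under two stated assumptions: scikit-learn scales the data-fit term by 1/(2n), so its (alpha, l1_ratio) pair matches the paper's (μ, α) only up to that scaling, and the fixed (mu, alpha) arguments stand in for the adaptive values introduced in Section 3.2.

```python
import numpy as np
from sklearn.linear_model import ElasticNet

def sparse_nipals(A_bar, mu, alpha, max_pcs=78, eps=1e-6):
    # Sketch of Algorithm 1: power iterations extract one PC at a time;
    # an Elastic Net regression then sparsifies the eigenvector (step 15).
    R = A_bar.copy()
    V, Q = [], []
    for _ in range(max_pcs):
        # Start from a non-zero column (the largest-norm column is one choice).
        q = R[:, int(np.argmax(np.linalg.norm(R, axis=0)))].copy()
        q_old = 10 * q
        while np.linalg.norm(q - q_old) > eps:          # steps 8-14
            q_old = q
            v = R.T @ q / (q @ q)
            v = v / np.linalg.norm(v)
            q = R @ v
        enet = ElasticNet(alpha=mu, l1_ratio=alpha,
                          fit_intercept=False, max_iter=10000)
        v_hat = enet.fit(A_bar, q).coef_                # step 15
        if not np.any(v_hat):                           # steps 16-19: zero vector
            break
        Q.append(R @ v_hat)                             # step 20
        V.append(v_hat)                                 # step 21
        R = R - np.outer(q, v)                          # step 22: deflation
    k = len(V)
    return np.array(V).T, np.array(Q).T, k
```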
Another advantage of using the Elastic Net for the eigenvector regression is that it automatically shrinks noise-related eigenvectors to zero vectors. Therefore, the algorithm does not need a preset threshold to eliminate noise-related PCs: when the computed eigenvector is a zero vector, NIPALS stops, as shown in steps 16–19, automatically removing noise-related eigenvectors while maintaining computational efficiency. Figure 1a,b show a comparison of the eigenvectors computed by PCA and SPCA, respectively, for remote sensing images from the GF-1 satellite [31] with only 20 GCPs. To better visualize the sparsity of the eigenvectors, the $\ln(\cdot)$ operation was applied to all elements, with zero elements set to $10^{-5}$. In Figure 1a, the elements of the PCA eigenvectors are almost all non-zero, whereas SPCA in Figure 1b achieves a sparser result. Compared to PCA, which discards whole eigenvectors, SPCA uses the Elastic Net to automatically select elements within each eigenvector. However, in Figure 1b, SPCA shrinks the eigenvector to a zero vector at the 25th position, whereas PCA with a predefined threshold truncates at the 13th position; this discrepancy can lead to performance degradation. In Section 3.2, we propose an adaptive regularization parameter approach to address this problem.

3.2. Adaptive Regularization Parameter Approach

Although no threshold parameter is needed to distinguish signal-related from noise-related PCs, setting the two optimal regularization parameters of the Elastic Net, which significantly affect the sparsity of the eigenvectors, remains challenging. Higher values of $\mu$ and $\alpha$ impose a heavier $\ell_1$ penalty, resulting in sparser eigenvectors, so there is a clear trade-off between sparsity and reconstruction loss. To achieve better performance, we propose an adaptive regularization parameter approach.
When performing Elastic Net regression, the regularization parameter directly determines the sparsity of the eigenvectors. In general, the eigenvalue corresponds to the amount of variance explained by each PC. To obtain more robust eigenvector representations while preserving as much of the original information as possible, a different regularization parameter should be assigned to each eigenvector in the Elastic Net regression. A smaller eigenvalue indicates that the corresponding eigenvector is more noise related, so a larger regularization penalty should be applied to obtain a sparser representation and reduce the impact of noise and unnecessary variables. Therefore, we assign a regularization parameter to each eigenvector based on the magnitude of its eigenvalue.
Since the SPCA method is implemented through NIPALS, the eigenvalues can be obtained from the following definition.
$$\mathbf{C}\mathbf{v}_i = \lambda_i \mathbf{v}_i \tag{19}$$
The adaptive regularization strategy is then formulated as follows:
$$\mu_i = \frac{\tau}{\lambda_i} = \frac{\tau}{\mathbf{v}_i^T\mathbf{C}\mathbf{v}_i} \tag{20}$$
where the regularization parameter $\mu_i$ is inversely proportional to the eigenvalue $\lambda_i$, $\tau$ is a scalar parameter, $\mathbf{v}_i$ is the eigenvector at the $i$-th iteration and $\mathbf{C}$ is the covariance matrix in Equation (13). Equation (20) enables a dynamically adjusted regularization process.
As shown in Equation (20), we set the regularization parameter to be inversely proportional to the eigenvalue to better utilize the relationship between eigenvalues and regularization parameters. Specifically, eigenvectors with larger eigenvalues capture more data variability, and thus should be assigned a smaller sparsity penalty to retain more original information that is critical for accurate reconstruction. Meanwhile, noise-related eigenvectors with smaller eigenvalues should be assigned larger regularization constraints to eliminate noise-related signals. To this end, we construct a connection between the regularization parameters and the eigenvalues, ensuring a dynamic balance between the preservation of information and the elimination of noise.
Figure 1c displays the sparsity distribution of the eigenvectors in ASPCA-RFM. Comparing Figure 1a,c, with the adaptive regularization parameter approach the eigenvector shrinks to a zero vector at the 13th position, the same position obtained in PCA-RFM with a preset threshold. Moreover, the sparsity of the eigenvectors in Figure 1c shows an increasing trend under the adaptive regularization parameter approach, demonstrating the effectiveness of the adaptive strategy in Equation (20).
When the number of variables (columns) in the design matrix exceeds the number of observations (rows), the model's degrees of freedom are negative, potentially making it unsolvable. Different levels of sparsity penalty should therefore be applied according to the degrees of freedom, so that no significant information is lost when GCPs are sufficient and the impact of multicollinearity is reduced when GCPs are insufficient. Based on this idea, we tie the balance parameter $\alpha$ between the $\ell_1$ and $\ell_2$ norms to the degrees of freedom $df = n - r/2$.
$$\alpha = \frac{1}{1 + e^{\,df/20}} = \frac{1}{1 + e^{\,(n - r/2)/20}} \tag{21}$$
Considering that $\alpha$ must lie within the interval $(0, 1)$, we employ a sigmoid function to normalize the output, ensuring that $\alpha$ remains within this range. In Equation (21), a preset constant of 20 is used as the divisor of the degrees of freedom, strategically placing 39 GCPs at the center of the sigmoid function, as shown in Figure 2; this is also the boundary between positive and negative degrees of freedom. This choice prevents $\alpha$ from approaching one for most degrees of freedom, maintaining a balanced regularization effect across data scenarios. Through the sigmoid normalization, $\alpha$ is larger when the degrees of freedom are smaller, applying stronger regularization; conversely, when the number of GCPs is larger, $\alpha$ is smaller, reducing the regularization intensity. This allows dynamic adjustment of the regularization parameter in different data scenarios, maintaining the robustness of the model.
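A minimal sketch of the two adaptive rules, assuming unit-norm eigenvectors, $r = 78$ and the constants stated above:

```python
import numpy as np

def adaptive_mu(v, C, tau=8e-5):
    # Equation (20): the overall penalty is inversely proportional to the
    # eigenvalue lambda_i = v^T C v of the current unit-norm eigenvector.
    return tau / float(v @ C @ v)

def adaptive_alpha(n_gcps, r=78):
    # Equation (21): sigmoid of the degrees of freedom df = n - r/2,
    # centered at 39 GCPs (df = 0, alpha = 0.5).
    df = n_gcps - r / 2
    return 1.0 / (1.0 + np.exp(df / 20.0))

# Fewer GCPs push alpha toward 1 (stronger l1 penalty):
# adaptive_alpha(10) ~= 0.81, adaptive_alpha(39) = 0.5, adaptive_alpha(80) ~= 0.11
```

Within the NIPALS loop, $\mu_i$ would be recomputed from the current eigenvector before step 15, whereas $\alpha$ is fixed once per scene from the number of GCPs.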
The algorithm returns the matrices of modified PCs and eigenvectors. Clearly, the eigenvectors are no longer orthogonal, so the original reconstruction of the design matrix in Equation (16) no longer applies. A general way to reconstruct the design matrix is by the LS method [21]:
$$\bar{\mathbf{A}}_{re} = \mathbf{Q}_k(\mathbf{Q}_k^T\mathbf{Q}_k)^{-1}\mathbf{Q}_k^T\bar{\mathbf{A}} \tag{22}$$
The proof of Equation (22) is shown in Appendix A. Finally, the complete process of the ASPCA-RFM algorithm is provided in Algorithm 2, as follows:
Algorithm 2 $\mathbf{x} = \mathrm{ASPCA}(\mathbf{A}, \mathbf{y})$
1: Require: design matrix $\mathbf{A} \in \mathbb{R}^{2n \times r}$, observation vector $\mathbf{y} \in \mathbb{R}^{2n}$
2:   $\bar{\mathbf{A}} = \mathbf{A} - \mathrm{mean}(\mathbf{A})$
3:   $[\mathbf{V}_k, \mathbf{Q}_k, k] = \mathrm{NIPALS}(\bar{\mathbf{A}})$
4:   $\bar{\mathbf{A}}_{re} = \mathbf{Q}_k(\mathbf{Q}_k^T\mathbf{Q}_k)^{-1}\mathbf{Q}_k^T\bar{\mathbf{A}}$
5:   $\mathbf{A}_{re} = \bar{\mathbf{A}}_{re} + \mathrm{mean}(\mathbf{A})$
6:   Compute $\mathbf{x}$ by QR decomposition with column pivoting and the LS method on $\mathbf{A}_{re}$
7:   return $\mathbf{x}$
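Putting the pieces together, the following sketch mirrors Algorithm 2, reusing the hypothetical sparse_nipals helper from Section 3.1; as before, np.linalg.lstsq stands in for the pivoted-QR least-squares solve, and fixed (mu, alpha) arguments stand in for the adaptive parameters.

```python
import numpy as np

def aspca_rfm(A, y, mu, alpha):
    # Steps 2-5: center, run sparse NIPALS, reconstruct by LS (Equation (22)).
    mean = A.mean(axis=0)
    A_bar = A - mean
    Vk, Qk, k = sparse_nipals(A_bar, mu, alpha)
    # Q_k (Q_k^T Q_k)^-1 Q_k^T A_bar, computed without an explicit inverse:
    A_bar_re = Qk @ np.linalg.lstsq(Qk, A_bar, rcond=None)[0]
    A_re = A_bar_re + mean
    # Step 6: RPCs from the reconstructed design matrix.
    x, *_ = np.linalg.lstsq(A_re, y, rcond=None)
    return x
```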

4. Experiments

4.1. Datasets and Metrics

In the experiment, we evaluated our method on six real datasets collected from three different satellite platforms. The detailed information on the datasets is listed in Table 1. All GCPs in these datasets are obtained through an automatic matching algorithm [32] using Landsat-8 satellite images as the geographic reference. In addition, the height information of the GCPs is obtained from the global Digital Elevation Model (DEM) data. Table 1 shows the specific information about each dataset, including the satellite platform, image resolution, elevation range, geographic coverage and the number of matched control points.
GF1-A and GF1-B are two Wide Field View (WFV) images captured by the GF-1 satellite [31], both with a Ground Sampling Distance (GSD) of 16 m. To match the coverage area of the Landsat-8 images, GF1-A and GF1-B are cropped to 40 × 40 and 20 × 20 km², respectively. The elevation ranges of GF1-A and GF1-B are 403∼917 and −22∼280 m, respectively; the first image was taken in a mountainous area, while the second was captured in an urban area. Using the automatic matching algorithm, 200 and 100 control points were extracted from GF1-A and GF1-B, respectively. Figure 3a,b illustrate the distribution of GCPs in GF1-A and GF1-B.
In the experiment, we also applied two scenes from the EO-1 satellite [33] with a GSD of 30 m. EO1-A covers a 30 × 30 km² mountainous area with an elevation range of 4907 to 5361 m, while EO1-B covers a 27 × 27 km² rural area with an elevation range of 331 to 437 m. Here, 250 and 120 matched GCPs are available for the evaluation of EO1-A and EO1-B, respectively. The distribution of GCPs in EO1-A and EO1-B is shown in Figure 3c,d.
Additionally, the experiment utilized two scenes of remote sensing images, namely S2-A and S2-B, captured by the Sentinel-2 satellite [34]. S2-A was collected over an urban area with several distinct rivers, while S2-B was captured over a rural area in the central part of the United States. Both S2-A and S2-B cover an area of 110 × 110 km². The elevation range of S2-A is from −55 to 1257 m, and that of S2-B is from 847 to 1272 m. For testing, 80 and 300 evenly distributed control points were extracted for S2-A and S2-B, respectively. The distribution of GCPs in S2-A and S2-B is shown in Figure 3e,f.
To better examine the impact of the varying number and distribution of GCPs on the results, we followed the approach of APCA-RFM [6]. The GCPs in each dataset were divided into two parts by random sampling: the first part is a fixed number of GCPs randomly selected for the estimation of the RPCs, while the second part consists of the remaining points, used as Independent Check Points (ICPs) for accuracy evaluation. For each dataset, this random sampling was repeated five times to obtain five Root Mean Square Error (RMSE) values, whose average and standard deviation are used as the final evaluation metrics. The average RMSE thus reflects the accuracy of a method, and the standard deviation reflects its stability under different distributions of GCPs. For example, GF1-A has 200 available control points; to evaluate performance with 40 GCPs, random sampling was conducted five times, each with 40 points as GCPs and the remaining 160 as ICPs, and the RMSE was computed for each split. The RMSE is given by the following formula [35]:
$$\mathrm{RMSE} = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(\varepsilon_{l_i}^2 + \varepsilon_{s_i}^2\right)} \tag{23}$$
where $\varepsilon_l$ and $\varepsilon_s$ denote the line and sample errors, respectively, and $N$ represents the number of ICPs.
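For reference, Equation (23) in code, where eps_l and eps_s are the per-ICP line and sample residuals in pixels:

```python
import numpy as np

def rmse(eps_l, eps_s):
    # Equation (23): combined line/sample RMSE over the N check points.
    return float(np.sqrt(np.mean(eps_l**2 + eps_s**2)))
```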

4.2. Parameter Setting of the Methods in Comparison

In this study, to verify the effectiveness of the proposed method, four state-of-the-art methods, namely ridge estimation [7], L1LS [9], PCA-RFM [20] and APCA-RFM [6], were applied to the datasets for comparison. The L1LS method provides stable and sparse RPC results using $\ell_1$-norm-regularized least squares when the available observation data are relatively insufficient. The APCA-RFM method separates the noise-related PCs from the signal-related ones using an automatic thresholding ridge ratio. For the ridge estimation method, the parameter $\lambda$ was manually set to $10^{-4}$. In the L1LS method, previous analysis in [9] has shown that $\lambda = 10^{-4}$ is the optimal choice. The PCA-RFM method set the threshold parameter $t$ to 0.01. In the proposed method, the scalar value $\tau$ was set to $8 \times 10^{-5}$, and the Elastic Net was solved using the coordinate descent algorithm [36]. All experiments were conducted on a 13th Gen Intel Core i5 CPU at 2.50 GHz with 15.70 GB usable RAM.

4.3. Comparative Results

In this experiment, the analysis is divided into two main parts: analysis with a sufficient number of GCPs (positive degrees of freedom) and analysis with a limited number of GCPs (negative degrees of freedom). In the first part, 40 and 50 GCPs were selected to compute the RPCs with each algorithm, while the remaining GCPs were used as evaluation sets. In the second part, the performance of the different methods was investigated with only 10, 15 and 20 GCPs. The quantitative results, in terms of the average and standard deviation of the RMSE over the six datasets and the different numbers of GCPs, are shown in Table 2. The average value directly indicates the error magnitude of a method, while the standard deviation reflects its robustness to different distributions of GCPs.
As shown in Table 2, although the ridge estimation method has the weakest performance among the tested methods, it yields reasonable results when there is an adequate number of GCPs. In contrast, when the number of GCPs is limited, the L1LS method produces relatively stable results, but its performance is still inferior to the PCA-based RFM methods. With an adequate number of GCPs, the proposed ASPCA-RFM method shows performance comparable to the APCA-RFM method across all datasets. For the GF1-A and GF1-B datasets, APCA-RFM performs best in terms of average RMSE, while ASPCA-RFM performs best in terms of standard deviation (with 40 and 50 GCPs). In particular, for the S2-A dataset, the proposed method achieves higher precision than the other methods in both the average and the standard deviation of the RMSE. For 40 and 50 GCPs, except on GF1-B, the average RMSEs of the proposed method are below one pixel across all datasets. When the degrees of freedom are positive, the results of ASPCA-RFM and APCA-RFM are close, with ASPCA-RFM occasionally slightly inferior to APCA-RFM.
When the number of GCPs is far below 39, the RPC estimation problem is severely affected by ill-posedness and overparameterization. The ridge estimation method performs poorly and almost fails with 20, 15 and 10 GCPs, whereas the L1LS method improves on ridge estimation and shows strong robustness. Even in the 10-GCP case, the PCA-based methods provide practically feasible results. With a limited number of GCPs, the proposed method performs better by a marked margin on almost all datasets and for all numbers of GCPs. In particular, on the GF1-A dataset with 10 GCPs, the proposed ASPCA-RFM method reduces the average RMSE by 1.4 pixels compared to the second-best method. Besides the average RMSE, the proposed method also achieves lower RMSE standard deviations on almost all datasets, indicating better robustness under different GCP distributions. Overall, with a limited number of GCPs, the ASPCA-RFM method outperforms the other methods in both the average and the standard deviation of the RMSE.
In the experiment, we also compared the average computational time of the proposed method and the other methods, as shown in Table 3. The computational cost of ASPCA-RFM is higher than the other methods as it requires multiple Elastic Net regression operations. However, all methods are fast and require less than 0.1 s per run, which is sufficient to support practical applications. To verify the efficiency of our proposed method optimized by NIPALS, we also implemented the ASPCA-RFM based on EVD, namely ASPCA-RFM-EVD. In Table 3, our proposed method is much faster than ASPCA-RFM-EVD, which demonstrates the effectiveness of NIPALS. On the other hand, in general, EVD is more reliable in terms of numerical stability compared to NIPALS. The setting of the initial value, iteration stopping criteria and number of iterations in NIPALS may affect the quality of the final results. However, according to the setting in Algorithm 1, we can ensure that the results optimized by EVD and NIPALS are the same to the third decimal place, which indicates that both optimization methods yield almost the same numerical quality.

4.4. Model Analysis

The adaptive regularization parameter approach aims to dynamically assign suitable regularization parameters for different PCs and different numbers of observation data. To investigate its effect, we conducted an experiment to understand the behavior of the adaptive regularization parameter approach in ASPCA-RFM. We randomly selected different numbers of GCPs from six datasets, specifically 50, 40, 30, 20 and 10. In the experiment, the number of zero values for each eigenvector was counted and summarized, representing the sparsity for each vector. Figure 4 illustrates the variation in sparsity with eigenvectors across all datasets and the overall trend under different numbers of GCPs. The eigenvectors are sorted in descending order of eigenvalues. From the perspective of different eigenvectors, in all results, the sparsity gradually increases with the order of the eigenvectors, indicating that eigenvectors with smaller eigenvalues correspond to higher sparsity. The results show that the adaptive regularization parameter strategy we described in Equation (20) was accurately executed. From the perspective of different numbers of GCPs, overall, as the number of GCPs increases, sparsity shows a decreasing trend. This validates the mathematical design of the adaptive regularization parameter, as shown in Equation (21), making the ASPCA-RFM algorithm more robust when faced with insufficient and unevenly distributed GCPs.
To better understand the efficiency of the adaptive regularization parameter approach, we designed two comparative experiments to demonstrate its performance advantages over fixed regularization parameters. In the first comparative experiment, the adaptive overall penalty coefficient $\mu$ in Equation (20) was replaced with fixed values, specifically $5 \times 10^{-6}$, $1 \times 10^{-5}$, $5 \times 10^{-5}$, $1 \times 10^{-4}$, $5 \times 10^{-4}$ and $1 \times 10^{-3}$. The performance of the fixed values and the adaptive approach was tested under different numbers of GCPs on the six datasets, and the average RMSE results are shown in Figure 5. The results show that, as $\mu$ changed, the error fluctuated significantly, whereas the adaptive algorithm consistently maintained the lowest RMSE values. In nearly all datasets and GCP configurations, the adaptive regularization parameter achieved the best performance, fully demonstrating its effectiveness.
The second comparative experiment aims to compare the performance difference between the adaptive strategy and fixed values for the balance regularization parameter α . Similar to the first comparative experiment, the adaptive α in Equation (21) was replaced with fixed values, specifically 0.9, 0.7, 0.5, 0.3 and 0.1. The experimental results for different numbers of GCPs in six datasets are shown in Figure 6, with all the results presented using the average RMSE. From an overall perspective, the adaptive regularization parameters outperform the fixed values in most scenarios. On the other hand, compared to the adaptive algorithm, fixed regularization parameters may perform better for specific numbers of GCPs or specific datasets, but their stability is much lower. Most importantly, there is no single fixed value that performs best across all GCPs and datasets. For example, in the GF1-A dataset of Figure 6a, when α = 0.1 , although this fixed parameter achieves the best result when the number of GCPs is 15, it results in very high RMSE values for the other numbers of GCPs. In the EO1-B dataset of Figure 6d, although the results for α = 0.5 were the best when the GCPs were 15 and 20, the RMSE was the worst when the GCPs were 25, and the performance for other numbers of GCPs was also inferior to the adaptive algorithm. In summary, although the adaptive balance regularization parameter did not achieve the best results in all datasets and all numbers of GCPs, compared to the fixed values, the adaptive algorithm consistently achieved stable and lower RMSE values. This demonstrated its effectiveness in adaptively adjusting the regularization parameters based on the number of GCPs, ultimately achieving more robust results.

5. Conclusions

In this paper, an adaptive Sparse Principal Component Analysis method is proposed for RPC estimation to address the problems of ill-posedness and overparameterization. The ASPCA-RFM introduces the SPCA to obtain sparse eigenvectors, which reduces the impact of noise and errors in the design matrix. Considering the high computational cost of performing Elastic Net regression on each PC, we propose using the NIPALS framework to perform Elastic Net regression only on signal-related eigenvectors and automatically remove unnecessary PCs. To further improve performance, we propose an adaptive regularization parameter approach that can dynamically adjust the regularization parameters based on the explained variance of PCs and the amount of observation data.
Extensive experiments have been conducted, and the results show that the proposed ASPCA-RFM outperforms or is highly competitive with existing RFM methods. In particular, the proposed method performs better than the other methods when the number of GCPs is limited. The experimental results also demonstrate that the proposed adaptive regularization parameter approach achieves a good balance between sparsity and information loss. In future work, we will investigate a more reliable method for dynamically changing the regularization parameters, especially in cases with a sufficient number of GCPs. In addition, considering that deep learning methods have demonstrated significant performance improvements compared with traditional methods, we plan to explore a deep learning-based RPC estimation method. The proposed RPC estimation method may contribute to the practical application of optical satellites.

Author Contributions

Conceptualization, T.Y.; methodology, T.Y.; software, T.Y.; validation, T.Y.; formal analysis, T.Y.; resources, T.Y. and Y.W.; data curation, T.Y.; writing—original draft preparation, T.Y.; writing—review and editing, Y.W. and P.W.; supervision, Y.W. and P.W.; project administration, Y.W. and P.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The original contributions presented in the study are included in the article, further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

Proof of Equation (22).
Given the mean-centered design matrix $\bar{\mathbf{A}} \in \mathbb{R}^{n \times r}$ with $n$ measurements and $r$ variables, a good reconstruction of $\bar{\mathbf{A}}$ can be obtained by linear regression on the modified PC matrix $\mathbf{Q}$:
$$\bar{\mathbf{A}}_{re} = \mathbf{Q}\mathbf{\Theta} \tag{A1}$$
In general, the optimal least-squares linear reconstruction of the original design matrix $\bar{\mathbf{A}}$ is given by
$$\mathbf{\Theta} = \arg\min_{\tilde{\mathbf{\Theta}}} \left\| \mathbf{Q}\tilde{\mathbf{\Theta}} - \bar{\mathbf{A}} \right\|_F^2 \tag{A2}$$
where $\| \cdot \|_F$ is the Frobenius norm. The solution is the well-known least squares solution:
$$\mathbf{\Theta} = (\mathbf{Q}^T\mathbf{Q})^{-1}\mathbf{Q}^T\bar{\mathbf{A}} \tag{A3}$$
Finally, the optimal linear reconstruction of $\bar{\mathbf{A}}$ can be expressed as
$$\bar{\mathbf{A}}_{re} = \mathbf{Q}(\mathbf{Q}^T\mathbf{Q})^{-1}\mathbf{Q}^T\bar{\mathbf{A}} \tag{A4}$$ □
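The identity can be checked numerically on random stand-in matrices; the shapes below are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)
Q = rng.normal(size=(100, 10))        # stand-in for the modified PC matrix
A_bar = rng.normal(size=(100, 78))    # stand-in for the centered design matrix

Theta = np.linalg.lstsq(Q, A_bar, rcond=None)[0]       # Equation (A3)
proj = Q @ np.linalg.inv(Q.T @ Q) @ Q.T @ A_bar        # Equation (A4)
assert np.allclose(Q @ Theta, proj)                    # reconstructions agree
```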

References

  1. Shen, X.; Li, Q.; Wu, G.; Zhu, J. Bias compensation for rational polynomial coefficients of high-resolution satellite imagery by local polynomial modeling. Remote Sens. 2017, 9, 200. [Google Scholar] [CrossRef]
  2. Toutin, T. Geometric processing of remote sensing images: Models, algorithms and methods. Int. J. Remote Sens. 2004, 25, 1893–1924. [Google Scholar] [CrossRef]
  3. Zhang, L.; He, X.; Balz, T.; Wei, X.; Liao, M. Rational function modeling for spaceborne SAR datasets. ISPRS J. Photogramm. Remote Sens. 2011, 66, 133–145. [Google Scholar] [CrossRef]
  4. Zhou, G.; Liu, X. Orthorectification model for extra-length linear array imagery. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–10. [Google Scholar] [CrossRef]
  5. Tao, C.V.; Hu, Y. A comprehensive study of the rational function model for photogrammetric processing. Photogramm. Eng. Remote Sens. 2001, 67, 1347–1358. [Google Scholar]
  6. Gholinejad, S.; Amiri-Simkooei, A.; Moghaddam, S.H.A.; Naeini, A.A. An automated PCA-based approach towards optimization of the rational function model. ISPRS J. Photogramm. Remote Sens. 2020, 165, 133–139. [Google Scholar] [CrossRef]
  7. Yuan, X.; Lin, X. A method for solving rational polynomial coefficients based on ridge estimation. Geomat. Inf. Sci. Wuhan Univ. 2008, 33, 1130–1133. [Google Scholar]
  8. Wu, Y.; Ming, Y. A fast and robust method of calculating RFM parameters for satellite imagery. Remote Sens. Lett. 2016, 7, 1112–1120. [Google Scholar] [CrossRef]
  9. Long, T.; Jiao, W.; He, G. RPC estimation via ℓ1-norm-regularized least squares (L1LS). IEEE Trans. Geosci. Remote Sens. 2015, 53, 4554–4567. [Google Scholar] [CrossRef]
  10. Gholinejad, S.; Naeini, A.A.; Amiri-Simkooei, A. Optimization of RFM Problem Using Linearly Programed l1-Regularization. IEEE Trans. Geosci. Remote Sens. 2021, 60, 1–9. [Google Scholar] [CrossRef]
  11. Zhao, L.; Liu, F.; Li, J.; Wang, W. Research on reducing term of higher order in RFM model. Sci. Surv. Mapp. 2007, 32, 14–17. [Google Scholar]
  12. Zhang, Y.; Lu, Y.; Wang, L.; Huang, X. A new approach on optimization of the rational function model of high-resolution satellite imagery. IEEE Trans. Geosci. Remote Sens. 2011, 50, 2758–2764. [Google Scholar] [CrossRef]
  13. Moghaddam, S.H.A.; Mokhtarzade, M.; Moghaddam, S.A.A. Optimization of RFM’s structure based on PSO algorithm and figure condition analysis. IEEE Geosci. Remote Sens. Lett. 2018, 15, 1179–1183. [Google Scholar] [CrossRef]
  14. Tengfei, L.; Weili, J.; Guojin, H. Nested regression based optimal selection (NRBOS) of rational polynomial coefficients. Photogramm. Eng. Remote Sens. 2014, 80, 261–269. [Google Scholar] [CrossRef]
  15. Moghaddam, S.H.A.; Mokhtarzade, M.; Naeini, A.A.; Amiri-Simkooei, A. A statistical variable selection solution for RFM ill-posedness and overparameterization problems. IEEE Trans. Geosci. Remote Sens. 2018, 56, 3990–4001. [Google Scholar]
  16. Zoej, M.V.; Mokhtarzade, M.; Mansourian, A.; Ebadi, H.; Sadeghian, S. Rational function optimization using genetic algorithms. Int. J. Appl. Earth Obs. Geoinf. 2007, 9, 403–413. [Google Scholar]
  17. Zhou, G. On-Board Processing for Satellite Remote Sensing Images; CRC Press: Boca Raton, FL, USA, 2023. [Google Scholar]
  18. Cavallaro, G.; Heras, D.B.; Wu, Z.; Maskey, M.; Lopez, S.; Gawron, P.; Coca, M.; Datcu, M. High-Performance and Disruptive Computing in Remote Sensing: HDCRS—A new Working Group of the GRSS Earth Science Informatics Technical Committee [Technical Committees]. IEEE Geosci. Remote Sens. Mag. 2022, 10, 329–345. [Google Scholar] [CrossRef]
  19. Zhang, R.; Zhou, G.; Zhang, G.; Zhou, X.; Huang, J. RPC-based orthorectification for satellite images using FPGA. Sensors 2018, 18, 2511. [Google Scholar] [CrossRef]
  20. Naeini, A.A.; Moghaddam, S.H.A.; Sheikholeslami, M.M.; Amiri-Simkooei, A. Application of PCA Analysis and QR Decomposition to Address RFM’s Ill-Posedness. Photogramm. Eng. Remote Sens. 2020, 86, 17–21. [Google Scholar] [CrossRef]
  21. Golub, G.H.; Van Loan, C.F. Matrix Computations; JHU Press: Baltimore, MD, USA, 2013. [Google Scholar]
  22. Hao, L.; Pan, C.; Liu, P.; Zhou, D.; Zhang, L.; Xiong, Z.; Liu, Y.; Sun, G. Detection of the coupling between vegetation leaf area and climate in a multifunctional watershed, Northwestern China. Remote Sens. 2016, 8, 1032. [Google Scholar] [CrossRef]
  23. Li, J.; Fan, K.; Zhou, L. Satellite observations of El Niño impacts on Eurasian spring vegetation greenness during the period 1982–2015. Remote Sens. 2017, 9, 628. [Google Scholar] [CrossRef]
  24. Zou, H.; Hastie, T.; Tibshirani, R. Sparse principal component analysis. J. Comput. Graph. Stat. 2006, 15, 265–286. [Google Scholar] [CrossRef]
  25. Zou, H.; Hastie, T. Regularization and variable selection via the elastic net. J. R. Stat. Soc. Ser. B Stat. Methodol. 2005, 67, 301–320. [Google Scholar] [CrossRef]
  26. Wold, H.O.A. Nonlinear Estimation by Iterative Least Square Procedures. In Research Papers in Statistics: Festschrift for J. Neyman; David, F.N., Ed.; Wiley: New York, NY, USA, 1966. [Google Scholar]
  27. Zhou, G.; Yuan, M.; Li, X.; Sha, H.; Xu, J.; Song, B.; Wang, F. Optimal regularization method based on the L-curve for solving rational function model parameters. Photogramm. Eng. Remote Sens. 2021, 87, 661–668. [Google Scholar] [CrossRef]
  28. Tibshirani, R. Regression shrinkage and selection via the lasso. J. R. Stat. Soc. Ser. B Stat. Methodol. 1996, 58, 267–288. [Google Scholar] [CrossRef]
  29. Kincaid, D.R.; Cheney, E.W. Numerical Analysis: Mathematics of Scientific Computing; American Mathematical Society: Providence, RI, USA, 2009; Volume 2. [Google Scholar]
  30. Hotelling, H. Analysis of a complex of statistical variables into principal components. J. Educ. Psychol. 1933, 24, 417. [Google Scholar] [CrossRef]
  31. Zhang, G.; Xu, K.; Huang, W. Auto-calibration of GF-1 WFV images using flat terrain. ISPRS J. Photogramm. Remote Sens. 2017, 134, 59–69. [Google Scholar] [CrossRef]
  32. Fan, Z.; Liu, Y.; Liu, Y.; Zhang, L.; Zhang, J.; Sun, Y.; Ai, H. 3MRS: An effective coarse-to-fine matching method for multimodal remote sensing imagery. Remote Sens. 2022, 14, 478. [Google Scholar] [CrossRef]
  33. Ungar, S.G.; Pearlman, J.S.; Mendenhall, J.A.; Reuter, D. Overview of the earth observing one (EO-1) mission. IEEE Trans. Geosci. Remote Sens. 2003, 41, 1149–1159. [Google Scholar] [CrossRef]
  34. Drusch, M.; Del Bello, U.; Carlier, S.; Colin, O.; Fernandez, V.; Gascon, F.; Hoersch, B.; Isola, C.; Laberinti, P.; Martimort, P.; et al. Sentinel-2: ESA’s optical high-resolution mission for GMES operational services. Remote Sens. Environ. 2012, 120, 25–36. [Google Scholar] [CrossRef]
  35. Shan, X.; Zhang, J. Does the Rational Function Model’s Accuracy for GF1 and GF6 WFV Images Satisfy Practical Requirements? Remote Sens. 2023, 15, 2820. [Google Scholar] [CrossRef]
  36. Friedman, J.; Hastie, T.; Tibshirani, R. Regularization paths for generalized linear models via coordinate descent. J. Stat. Softw. 2010, 33, 1. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Distribution of sparsity for eigenvectors tested on the GF1-A dataset with 20 GCPs. (a) Sparsity of eigenvectors using PCA. (b) Sparsity of eigenvectors using SPCA with fixed regularization parameters. (c) Sparsity of eigenvectors using ASPCA-RFM with adaptive regularization parameters.
Figure 2. Sigmoid form of the regularization parameter α. Thirty-nine GCPs mark the center point of the sigmoid function, which is also the boundary between positive and negative degrees of freedom.
Figure 3. Distribution of GCPs for different datasets. (a) GF1-A. (b) GF1-B. (c) EO1-A. (d) EO1-B. (e) S2-A. (f) S2-B.
Figure 4. Sparsity of different eigenvectors when handling different numbers of GCPs on six datasets. (a) GF1-A. (b) GF1-B. (c) EO1-A. (d) EO1-B. (e) S2-A. (f) S2-B.
Figure 5. Average RMSE results for fixed μ and adaptive μ. (a) GF1-A. (b) GF1-B. (c) EO1-A. (d) EO1-B. (e) S2-A. (f) S2-B.
Figure 6. Average RMSE results for fixed α and adaptive α. (a) GF1-A. (b) GF1-B. (c) EO1-A. (d) EO1-B. (e) S2-A. (f) S2-B.
Table 1. Information about the datasets.
| Dataset | Satellite | Area Type | GSD (m) | Coverage (km²) | Elevation Range (m) | No. of GCPs |
|---|---|---|---|---|---|---|
| GF1-A | GF-1 | Mountain | 16 | 40 × 40 | 403∼917 | 200 |
| GF1-B | GF-1 | Urban | 16 | 20 × 20 | −22∼280 | 100 |
| EO1-A | EO-1 | Mountain, Lake | 30 | 30 × 30 | 4907∼5361 | 250 |
| EO1-B | EO-1 | Rural | 30 | 27 × 27 | 331∼437 | 120 |
| S2-A | Sentinel-2 | Urban, River | 60 | 110 × 110 | −55∼1257 | 80 |
| S2-B | Sentinel-2 | Rural | 60 | 110 × 110 | 847∼1272 | 300 |
Table 2. Average and standard deviation values (in pixels) for different methods evaluated on datasets.
| Dataset | GCPs/ICPs | Ridge Estimation | L1LS | PCA-RFM | APCA-RFM | ASPCA-RFM |
|---|---|---|---|---|---|---|
| GF1-A | 10/190 | 286.52 ± 115.44 | 3.8254 ± 2.5172 | 2.6414 ± 3.0570 | 2.5632 ± 3.0873 | 1.1704 ± 0.1413 |
| | 15/185 | 69.087 ± 23.892 | 1.7546 ± 0.6008 | 1.0119 ± 0.1279 | 1.0017 ± 0.1106 | 0.9980 ± 0.0958 |
| | 20/180 | 48.835 ± 37.972 | 1.7815 ± 0.3979 | 1.1274 ± 0.2124 | 1.2313 ± 0.3530 | 1.1193 ± 0.2361 |
| | 40/160 | 4.4347 ± 1.6321 | 1.5912 ± 0.3304 | 0.8598 ± 0.0322 | 0.8382 ± 0.0289 | 0.8519 ± 0.0265 |
| | 50/150 | 3.0036 ± 1.4762 | 1.3455 ± 0.1606 | 0.8629 ± 0.0430 | 0.8489 ± 0.0394 | 0.8699 ± 0.0387 |
| GF1-B | 10/90 | 83.932 ± 21.314 | 3.1493 ± 4.2251 | 1.9714 ± 0.4728 | 1.8910 ± 0.6624 | 1.7997 ± 0.5219 |
| | 15/85 | 44.755 ± 22.098 | 4.2002 ± 4.2721 | 1.3033 ± 0.2391 | 1.5418 ± 0.6608 | 1.2666 ± 0.2015 |
| | 20/80 | 12.587 ± 5.5499 | 1.2065 ± 0.2709 | 1.1808 ± 0.0900 | 1.1688 ± 0.1349 | 1.1266 ± 0.0586 |
| | 40/60 | 7.8813 ± 8.3771 | 3.0036 ± 3.9359 | 1.0754 ± 0.1423 | 0.9914 ± 0.0511 | 1.0201 ± 0.0472 |
| | 50/50 | 7.3447 ± 10.400 | 3.3032 ± 4.2816 | 0.9870 ± 0.0583 | 0.9836 ± 0.0730 | 0.9850 ± 0.0578 |
| EO1-A | 10/240 | 109.31 ± 33.435 | 1.5254 ± 1.0866 | 0.7464 ± 0.1207 | 0.7741 ± 0.1235 | 0.6639 ± 0.0747 |
| | 15/235 | 58.242 ± 24.074 | 0.8955 ± 0.2221 | 0.6998 ± 0.1110 | 0.6995 ± 0.1325 | 0.6827 ± 0.0991 |
| | 20/230 | 24.292 ± 12.448 | 0.8026 ± 0.1566 | 0.6768 ± 0.0831 | 0.6152 ± 0.0834 | 0.6059 ± 0.0261 |
| | 40/210 | 4.0333 ± 1.7325 | 0.7326 ± 0.0876 | 0.5615 ± 0.0314 | 0.5622 ± 0.0664 | 0.5612 ± 0.0403 |
| | 50/200 | 1.4546 ± 0.5880 | 0.6847 ± 0.0407 | 0.5384 ± 0.0342 | 0.5172 ± 0.0163 | 0.5351 ± 0.0396 |
| EO1-B | 10/110 | 145.15 ± 51.848 | 3.6538 ± 4.1398 | 1.1291 ± 0.3580 | 1.2531 ± 0.9818 | 1.0544 ± 0.3865 |
| | 15/105 | 44.734 ± 14.071 | 1.3268 ± 0.2681 | 0.9563 ± 0.1560 | 0.9571 ± 0.2341 | 0.8635 ± 0.0599 |
| | 20/100 | 27.488 ± 10.997 | 1.1318 ± 0.3017 | 0.8690 ± 0.1708 | 0.8970 ± 0.1278 | 0.8299 ± 0.0847 |
| | 40/80 | 4.4904 ± 3.5525 | 0.9986 ± 0.1542 | 0.6987 ± 0.1015 | 0.7515 ± 0.1123 | 0.6927 ± 0.0565 |
| | 50/70 | 2.2041 ± 0.6681 | 0.9495 ± 0.1838 | 0.6770 ± 0.0823 | 0.6518 ± 0.0542 | 0.6915 ± 0.1411 |
| S2-A | 10/70 | 168.88 ± 74.618 | 6.0041 ± 2.7493 | 1.2099 ± 0.4341 | 1.4811 ± 0.4198 | 1.2015 ± 0.4081 |
| | 15/65 | 107.80 ± 93.904 | 4.6899 ± 3.2891 | 0.7706 ± 0.1451 | 0.8446 ± 0.2767 | 0.7485 ± 0.1598 |
| | 20/60 | 107.84 ± 106.98 | 4.8182 ± 3.7693 | 0.8929 ± 0.3100 | 1.0044 ± 0.2442 | 0.8224 ± 0.2362 |
| | 40/40 | 36.784 ± 34.181 | 2.0378 ± 0.7463 | 0.6694 ± 0.0356 | 0.7820 ± 0.2396 | 0.6538 ± 0.0277 |
| | 50/30 | 20.586 ± 21.144 | 1.5930 ± 0.8513 | 0.6606 ± 0.0694 | 0.8995 ± 0.3331 | 0.6452 ± 0.0418 |
| S2-B | 10/290 | 107.13 ± 32.422 | 2.1641 ± 0.5022 | 1.2462 ± 0.3278 | 1.3315 ± 0.5066 | 1.3981 ± 0.5374 |
| | 15/285 | 59.161 ± 31.510 | 2.1688 ± 0.5086 | 1.0541 ± 0.3425 | 1.5211 ± 0.7938 | 0.9921 ± 0.3166 |
| | 20/280 | 22.722 ± 18.665 | 1.6835 ± 0.3464 | 0.9005 ± 0.1483 | 0.9135 ± 0.2239 | 0.8233 ± 0.0686 |
| | 40/260 | 2.9398 ± 1.5114 | 1.7074 ± 0.3256 | 0.7457 ± 0.0193 | 1.0462 ± 0.7065 | 0.7366 ± 0.0065 |
| | 50/250 | 1.9344 ± 0.5927 | 1.6297 ± 0.1987 | 0.7217 ± 0.0103 | 0.7177 ± 0.0082 | 0.7290 ± 0.0252 |
Table 3. Average computational time for different methods on all datasets. ASPCA-RFM-EVD is our proposed method implemented based on EVD.
| Ridge Estimation | L1LS | PCA-RFM | APCA-RFM | ASPCA-RFM-EVD | ASPCA-RFM (Ours) |
|---|---|---|---|---|---|
| 0.0008 s | 0.0013 s | 0.0009 s | 0.0021 s | 0.0845 s | 0.0454 s |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

