Article

An Effective Correction Method for Seriously Oblique Remote Sensing Images Based on Multi-View Simulation and a Piecewise Model

1 School of Electronic and Electrical Engineering, Shanghai University of Engineering Science, Longteng Road No. 333, Shanghai 201620, China
2 School of Computer Science, Shanghai Key Laboratory of Intelligent Information Processing, Fudan University, Shanghai 200433, China
* Author to whom correspondence should be addressed.
Sensors 2016, 16(10), 1725; https://doi.org/10.3390/s16101725
Submission received: 13 July 2016 / Revised: 15 September 2016 / Accepted: 29 September 2016 / Published: 18 October 2016
(This article belongs to the Section Remote Sensors)

Abstract

Conventional correction approaches are unsuitable for effectively correcting remote sensing images acquired under seriously oblique conditions, which exhibit severe distortions and resolution disparity. Considering that the extraction of control points (CPs) and the parameter estimation of the correction model play important roles in correction accuracy, this paper introduces an effective correction method for large angle (LA) images. Firstly, a new CP extraction algorithm based on multi-view simulation (MVS) is proposed to ensure the effective matching of CP pairs between the reference image and the LA image. Then, a new piecewise correction algorithm is proposed using the optimized CPs, where the concept of a distribution measurement (DM) is introduced to quantify the CP distribution. The whole image is partitioned into contiguous subparts, which are corrected by different correction formulae to guarantee the accuracy of each subpart. Extensive experimental results demonstrate that the proposed method significantly outperforms conventional approaches.

1. Introduction

Multi-angle remote sensing, providing multi-angle observations of the same scene, enhances our ability to monitor and identify the Earth’s surface [1]. However, to successfully use multi-angle images, the distortions caused by imaging from multiple viewing angles must be removed [2]. Geometric correction, especially in the case of imaging with large viewing angles, is an indispensable preprocessing step for the processing and application of remote sensing images [3]. Two key factors affect correction accuracy. One is how to effectively detect and accurately match feature points between the reference image and the distorted image. The other is how to precisely estimate the model parameters using control points (CPs, i.e., matched feature points).
Previous studies of feature point extraction methods can be divided into three categories. The first category is based on image gray levels and uses the gray-level differences around each pixel of an image [4]. The second category is based on boundaries; detected features include invariant corners at maxima of the boundary chain-code curvature and intersection points after polygon approximation [5]. The third category is based on a parametric model that models each pixel of an image [6]. Since the same target in the distorted image often exhibits local distortions and gray-level differences compared with the reference image, the second category, which depends on edge quality, and the third category, which is restricted to a parametric model, are unsuitable. The first category is comparatively simple and easy to combine with other methods. According to the evaluation of various feature point extraction algorithms in [7], the scale-invariant feature transform (SIFT) [8] provides high stability under image translation, rotation, and scaling. However, its modeling principle does not include complete affine invariance, so in practice it is only robust to a limited range of angle changes. Morel [9] later increased the number of extracted points by simulating the scale and the parameters related to the optical axis of the imaging sensor, and normalizing the parameters related to translation and rotation. However, that algorithm must perform the simulation and normalization for each image pair, which is complex and time consuming in practice. In this paper, a new CP extraction algorithm is proposed that compensates for the perspective difference of large angle (LA) images. A reference feature point set is formed by simulating multiple views of the reference image and performing feature point detection on the simulated images. The proposed algorithm ensures effective matching of feature points between the reference image and LA images.
Moreover, how to precisely estimate the model parameters is another important aspect of correction accuracy [10,11]. Goncalves [12] experimentally proved the influence of the CP distribution on solving the model parameters. The correction accuracy achieved with a uniform distribution of CPs is better than that obtained with a non-uniform set [13]. In addition, traditional methods usually employ the same correction model for a whole image [14], while few have considered the serious local distortions and resolution changes caused by LA imaging. Also, the large dimensions of an image obviously magnify the errors of the correction model parameters: if a 1000 × 1000 image is processed using a second-order polynomial model, the quadratic terms involve coordinate values up to (10³)² = 10⁶, so small parameter errors are amplified accordingly. Piecewise linear functions [15,16,17], which divide the image into triangular elements by Delaunay triangulation and take an affine transformation as the mapping function, are suitable for reducing local geometric distortions. However, the correction accuracy achievable with an affine transformation model is lower than that of a projective transformation model for LA images, and the projective transformation model additionally requires at least five CP pairs. Consequently, the proposed piecewise correction algorithm is indispensable for LA image correction. Firstly, the image is separated into grids according to the variability of the spatial resolution; severely distorted areas receive relatively dense grids. Based on these grids, the overall CP distribution is controlled and the piecewise locations are determined. Then, the CPs are optimized by introducing the CP distribution measurement (DM) to select representative CPs in each grid. Finally, the subparts deduced from the piecewise locations are corrected by different correction formulae using the optimized CPs.
The rest of this paper is organized as follows: In Section 2, CP extraction based on multi-view simulation (MVS) is introduced. The piecewise correction algorithm with CPs optimized by DM is described in Section 3. In Section 4, experimental results are presented to quantify the effectiveness of the new method. The conclusions of this paper, with a summary of our results, are provided in the final section.

2. CPs Extraction Based on Multi-View Simulation

SIFT, having excellent performance against scaling, rotation, noise, and illumination changes, has been widely applied in the feature detection field. However, its modeling process does not consider the oblique imaging angle, which changes the neighborhood pixels as shown in Figure 1. For example, in the scale-space extremum detection, determining whether the current pixel is a maximum or minimum uses the 26 pixels in its neighborhood as a reference. If the reference pixels are changed, for example by the resolution reduction and gray-level distortion caused by LA imaging, the result of the extremum detection changes. As another example, the key point description also uses the gradients and directions of neighboring pixels as a reference; if the reference pixels are changed, the description vector changes, which reduces the matching probability. In [18], the impact of LA imaging on spatial resolution is analyzed. For LA images, the spatial resolution along the seriously oblique direction decreases with increasing imaging angle, as shown in Figure 2. The off-nadir ground resolution $RES_\theta$ is given by:
$$RES_\theta = \frac{\beta H}{\cos^2 \theta} = \frac{RES_n}{\cos^2 \theta} \qquad (1)$$
where $RES_n$ is the nadir resolution and θ is the angle from nadir. The off-nadir resolution is thus degraded by a factor of the cosine squared of θ. Equation (1) demonstrates that the resolution compression is strongly nonlinear when the imaging angle exceeds 45°. The performance of SIFT is stable within 50° but is clearly ineffective beyond 50°, as shown in the experiments of Section 4.1; the experimental results are consistent with this theoretical analysis.
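As a quick numerical illustration of Equation (1), the following sketch shows how fast the off-nadir resolution degrades; the nadir resolution value is an assumption for illustration only:

```python
# Numeric check of Equation (1); the nadir resolution value is illustrative.
import numpy as np

res_nadir = 0.5  # nadir ground resolution in meters (assumed)
for deg in (0, 30, 45, 50, 60, 70):
    theta = np.radians(deg)
    res = res_nadir / np.cos(theta) ** 2   # RES_theta = RES_n / cos^2(theta)
    print(f"{deg:2d} deg: {res:.2f} m")    # degradation accelerates past 45 deg
```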
Based on the above analysis, in order to compensate for the viewing angle differences between the reference image and LA images, SIFT is extended to the multi-view space in this paper. The transformation model employed in MVS is described in Section 2.1. The detailed procedure of CP extraction based on MVS is introduced in Section 2.2.

2.1. Multi-View Transformation Model

The projection relation of the ground scene $I_0$ to the digital image I can be described as:
$$I = A T I_0 \qquad (2)$$
where A and T are the projection and translation, respectively.
According to the affine approximation theory [19], the above model can be expressed as:
$$I(x, y) \approx I(a x + b y + e_1,\; c x + d y + e_2) \qquad (3)$$
If the determinant of A is strictly positive, then following the Singular Value Decomposition (SVD) principle, A has a unique decomposition:
$$A = \begin{bmatrix} a & b \\ c & d \end{bmatrix} = H_\lambda R_1(\psi) T_t R_2(\phi) = \lambda \begin{bmatrix} \cos\psi & -\sin\psi \\ \sin\psi & \cos\psi \end{bmatrix} \begin{bmatrix} t & 0 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} \cos\phi & -\sin\phi \\ \sin\phi & \cos\phi \end{bmatrix} \qquad (4)$$
where λ > 0 is a zoom parameter, $R_1$ and $R_2$ are two rotation matrices, and $T_t$ expresses a tilt, namely a diagonal matrix whose first eigenvalue is t > 1 and whose second eigenvalue equals 1.
The geometric explanation of this matrix decomposition is shown in Figure 3. The angle φ ∈ [0, π) is called the longitude; it is formed by the plane containing the normal and the optical axis with a fixed vertical plane. The angle θ = arccos(1/t), with θ ∈ [0, π/2), is called the latitude; it is formed by the optical axis with the normal to the image plane $I_0$. ψ is the rotation angle of the imaging sensor around the optical axis. As seen in Figure 3, the motion of the imaging sensor leads to a parameter change from the positive perspective (λ₀ = 1, t₀ = 1, φ₀ = ψ₀ = 0) to the oblique view (λ, t, φ, ψ), which represents the transformation I(x,y) → I(A(x,y)), where the longitude φ and the latitude θ determine the amount of perspective change. The MVS is therefore mainly based on these two factors. θ is represented by the tilt parameter t = 1/cos θ, which measures the amount of obliqueness between the reference image and LA images. The oblique distortion causes a directional subsampling in the direction given by the longitude φ.
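As a small consistency check of Equation (4), the following sketch composes A from (λ, ψ, t, φ) and recovers the latitude from the tilt; the parameter values are assumptions chosen for illustration:

```python
# Sketch of the decomposition in Equation (4); parameter values are assumed.
import numpy as np

def compose(lam, psi, t, phi):
    """A = lambda * R1(psi) @ T_t @ R2(phi)."""
    R1 = np.array([[np.cos(psi), -np.sin(psi)], [np.sin(psi), np.cos(psi)]])
    R2 = np.array([[np.cos(phi), -np.sin(phi)], [np.sin(phi), np.cos(phi)]])
    Tt = np.diag([t, 1.0])                       # tilt: first eigenvalue t > 1
    return lam * R1 @ Tt @ R2

A = compose(lam=1.2, psi=0.3, t=2.0, phi=0.8)
theta = np.degrees(np.arccos(1.0 / 2.0))         # latitude theta = arccos(1/t)
print(np.linalg.det(A) > 0, round(theta, 1))     # -> True 60.0
```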
The Affine SIFT (ASIFT) approach [9] operates on each image to simulate all distortions caused by a variation of the camera optical axis direction, and then applies the SIFT method. ASIFT provides robust image matching between two images under viewpoint change. However, ASIFT is time consuming. Assume that the tilt and angle ranges are [t_min, t_max] = [1, 4√2] and [φ_min, φ_max] = [0°, 180°], and the sampling steps are Δt = √2 and Δφ = 72°/t. The complexity of the ASIFT approach is 1 + (|Γ_t| − 1)(180°/72°) = 1 + 5 × 2.5 = 13.5 times that of a single SIFT routine, where |Γ_t| = |{1, √2, 2, 2√2, 4, 4√2}| = 6. Even if the two-resolution scheme is applied, the ASIFT complexity is still more than twice that of SIFT. Moreover, the multiple matching of ASIFT results in matching-point redundancy, which demands a huge amount of calculation and long computing time during the CP optimization process. Even so, ASIFT provides a good basis for extracting CPs between two images with viewing angle differences.
Inspired by ASIFT, a new CP extraction algorithm is proposed that simulates multiple views of the reference image to compensate for the viewing angle difference of LA images. The new algorithm then applies SIFT to the simulated images to construct a reference feature point database. Finally, feature matching is conducted between the reference feature point database and the feature point set detected from the LA image. An overview of the proposed algorithm is shown in Figure 4.

2.2. Procedure of CPs Extraction Based on Multi-View Simulation

The proposed algorithm proceeds by the following steps:
  • The reference image I is transformed by simulating all possible affine distortions caused by changing the camera optical axis orientation. In this process, the oblique transform is simulated by the tilt parameter t. Assuming that the y direction is the oblique direction, it represents I(x,y) → I(x,ty), which is equivalent to performing a t-subsampling in the y direction. Therefore, an antialiasing filter is applied beforehand, namely convolution by a Gaussian with standard deviation $c\sqrt{t^2 - 1}$. This ensures a very small aliasing error.
  • The simulated images of the reference image are calculated from the tilt parameter t and the longitude φ with given sampling step lengths. As the latitude θ increases by a fixed amount, the image distortion grows; hence, the sampling density of θ should increase with θ. Accordingly, the tilt parameter follows a geometric series $t_{k_1} = 1, u, u^2, \ldots, u^m$ ($k_1 = 0, 1, \ldots, m$), with u > 1. Meanwhile, the image distortion induced by a fixed longitude displacement Δφ is more drastic at the larger latitude θ = arccos(1/t). Hence, the longitudes form an arithmetic series for each tilt, $\phi_{k_2} = 0, v/t, \ldots, nv/t$ ($k_2 = 0, 1, \ldots, n$), where the integer $k_2$ satisfies $k_2 v/t < 180°$. The calculation of the simulated images can be described as:
    $$I_k(t_{k_1}, \phi_{k_2}) = \begin{bmatrix} \cos\phi_{k_2} & -\sin\phi_{k_2} \\ \sin\phi_{k_2} & \cos\phi_{k_2} \end{bmatrix} \begin{bmatrix} t_{k_1} & 0 \\ 0 & 1 \end{bmatrix} I \qquad (k = 1, \ldots, m \times n) \qquad (5)$$
    where $I_k$ denotes the set of simulated images of the reference image I, namely $I_1, I_2, \ldots, I_{m \times n}$.
  • Feature point detection and description (SIFT) are applied to the reference image I and the simulated images $I_1, I_2, \ldots, I_{m \times n}$. The resulting feature vector sets $A_I, A_{I_1}, A_{I_2}, \ldots, A_{I_{m \times n}}$ are combined to form the reference feature vector set A:
    $$A = A_I \cup A_{I_1} \cup A_{I_2} \cup \ldots \cup A_{I_{m \times n}} \qquad (6)$$
  • The feature vector set B, obtained from the distorted LA image, is matched against the reference feature vector set A by employing the Euclidean nearest-neighbor distance ratio criterion (ENDR) [20] and Random Sample Consensus (RANSAC) [21].
The two-resolution scheme [9] is applied to the proposed algorithm to reduce the computation complexity. With this, matched feature points, namely CPs, are extracted.
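For concreteness, the following is a minimal sketch of this procedure built on OpenCV's stock SIFT, using the sampling parameters of Section 2.2 (u = √2, v = 72°, and c = 0.8 as in ASIFT). The function names, the y-only antialiasing blur, and the use of findHomography for the RANSAC step are our illustrative assumptions, not the authors' implementation:

```python
# Hedged sketch of MVS-based CP extraction; not the authors' original code.
import cv2
import numpy as np

def simulate_view(ref, t, phi):
    """One oblique view: rotate by longitude phi, antialias, t-subsample in y."""
    h, w = ref.shape[:2]
    M = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), phi, 1.0)
    view = cv2.warpAffine(ref, M, (w, h))        # corners are cropped (sketch)
    if t > 1.0:
        sigma = 0.8 * np.sqrt(t * t - 1.0)       # antialiasing c*sqrt(t^2 - 1)
        ks = max(3, int(6 * sigma) | 1)          # odd Gaussian kernel size
        gy = cv2.getGaussianKernel(ks, sigma)
        view = cv2.sepFilter2D(view, -1, np.array([1.0]), gy)  # blur y only
        view = cv2.resize(view, (w, max(1, int(h / t))))       # I(x,y)->I(x,ty)
    return view, M

def back_project(pt, M, t):
    """Map a keypoint from a simulated view back to reference coordinates."""
    x, y = pt[0], pt[1] * t                      # undo the y subsampling
    Mi = cv2.invertAffineTransform(M)            # undo the rotation
    return (Mi[0, 0] * x + Mi[0, 1] * y + Mi[0, 2],
            Mi[1, 0] * x + Mi[1, 1] * y + Mi[1, 2])

def extract_cps(ref, la, m=4, u=np.sqrt(2.0), v=72.0, ratio=0.8):
    """Build the reference set A over all simulated views, match B against it."""
    sift = cv2.SIFT_create()
    pts_a, des_a = [], []
    for k1 in range(m + 1):                      # tilts t = 1, u, ..., u^m
        t = u ** k1
        phis = [0.0] if t == 1.0 else np.arange(0.0, 180.0, v / t)
        for phi in phis:
            view, M = simulate_view(ref, t, phi)
            kps, des = sift.detectAndCompute(view, None)
            if des is None:
                continue
            pts_a += [back_project(kp.pt, M, t) for kp in kps]
            des_a.append(des)
    des_a = np.vstack(des_a)

    kps_b, des_b = sift.detectAndCompute(la, None)
    # ENDR: keep a match only if its nearest neighbor is clearly closer
    # than the second-nearest one.
    knn = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des_b, des_a, k=2)
    good = [p[0] for p in knn
            if len(p) == 2 and p[0].distance < ratio * p[1].distance]
    src = np.float32([pts_a[g.trainIdx] for g in good]).reshape(-1, 1, 2)
    dst = np.float32([kps_b[g.queryIdx].pt for g in good]).reshape(-1, 1, 2)
    # RANSAC (here with a projective model) rejects the remaining outliers.
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    keep = mask.ravel().astype(bool)
    return src[keep].reshape(-1, 2), dst[keep].reshape(-1, 2)
```

The two-resolution speedup of [9], which matches first on subsampled images, is omitted here for brevity.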

3. Piecewise Correction with Optimized CPs Based on Distribution Measurement

Different correction models require different minimum numbers of CPs. However, a larger number of CPs does not necessarily yield a better root mean square error (RMSE) when the CPs are concentrated in a local area; CPs that are uniformly distributed over the image allow accurate model parameter estimation. To quantify the uniformity of the CPs, the concept of DM is introduced based on information entropy. Then the proposed piecewise correction algorithm with the optimized CPs is presented.

3.1. The Distribution Measurement of CPs

Zhu [22] indicated that the probability of correct matching and the information content are strongly correlated; that is, the probability of correct matching increases with increasing information entropy. The information entropy of every CP can be obtained by using the second-order differential invariants as the descriptor. The second-order differential invariants v are [23]:
$$v = \begin{bmatrix} L_x L_x + L_y L_y \\ L_{xx} L_x L_x + 2 L_{xy} L_x L_y + L_{yy} L_y L_y \\ L_{xx} + L_{yy} \\ L_{xx} L_{xx} + 2 L_{xy} L_{xy} + L_{yy} L_{yy} \end{bmatrix} \qquad (7)$$
where $L_x$ and $L_y$ are convolutions with the first-order Gaussian derivative kernels, and $L_{xx}$, $L_{xy}$, and $L_{yy}$ are convolutions with the second-order ones.
In order to estimate the probabilities needed for the entropy, the descriptor space of v is partitioned into a multi-dimensional grid of cells. The partitioning depends on the distance between descriptors, measured by the Mahalanobis distance. The distance between descriptors $v_1$ and $v_2$ is:
$$D(v_1, v_2) = \left[ (v_1 - v_2)^T \Lambda^{-1} (v_1 - v_2) \right]^{1/2} \qquad (8)$$
where Λ is the covariance matrix. Λ can be decomposed by SVD into $\Lambda^{-1} = G^T P G$, where P is a diagonal matrix and G is an orthogonal matrix. Then v can be normalized by P, giving $v_{\mathrm{norm}} = P^{1/2} G v$. After this normalization, $D(v_1, v_2)$ can be rewritten as:
$$D(v_1, v_2) = \left\| P^{1/2} G (v_1 - v_2) \right\| \qquad (9)$$
The normalized descriptor space is then partitioned into cells of equal size, and the probability of each cell is used to calculate the entropy of v:
$$H(v) = -\sum_i p_i \log(p_i) \qquad (10)$$
where $p_i = n_v / n$ is the probability of v; $n_v$ and n are the number of CPs in each cell and the total number of CPs, respectively.
The CP distribution can be regarded as the spatial dispersion of points, measured by the normalized mean distance between each CP and the distribution center of the CPs. The distribution center of the CPs is:
$$(\bar{x}, \bar{y}) = \left( \frac{\sum_{i=1}^{n} w_i x_i}{\sum_{i=1}^{n} w_i},\; \frac{\sum_{i=1}^{n} w_i y_i}{\sum_{i=1}^{n} w_i} \right) \qquad (11)$$
and DM is computed as:
$$\mathrm{DM} = \left( \left[ \sum_{i=1}^{n} \left( \frac{x_i - \bar{x}}{I_x} \right)^2 + \sum_{i=1}^{n} \left( \frac{y_i - \bar{y}}{I_y} \right)^2 \right] \Big/ n \right)^{1/2} \qquad (12)$$
where $I_x$ and $I_y$ are the row and column dimensions of the region. The parameter $w_i$ is the weight of CP i, obtained from the entropy H(v) of Equation (10).
The larger the DM, the better the CP distribution; the smaller the DM, the more concentrated the CPs.
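The following is a minimal sketch of the DM computation of Equations (11) and (12), assuming the CP coordinates and their entropy-derived weights are already available; the random test data are purely illustrative:

```python
# Sketch of Equations (11)-(12); the test data below are illustrative only.
import numpy as np

def distribution_measurement(x, y, w, Ix, Iy):
    """Normalized mean distance of the CPs from their weighted center."""
    xc = np.sum(w * x) / np.sum(w)               # weighted center, Eq. (11)
    yc = np.sum(w * y) / np.sum(w)
    n = len(x)                                   # Eq. (12)
    return np.sqrt((np.sum(((x - xc) / Ix) ** 2)
                    + np.sum(((y - yc) / Iy) ** 2)) / n)

rng = np.random.default_rng(0)
Ix = Iy = 1000                                   # region dimensions (pixels)
w = np.ones(45)                                  # uniform weights for the demo
xu, yu = rng.uniform(0, Ix, 45), rng.uniform(0, Iy, 45)      # spread CPs
xc_, yc_ = rng.normal(500, 30, 45), rng.normal(500, 30, 45)  # clustered CPs
print(distribution_measurement(xu, yu, w, Ix, Iy))   # ~0.4: good spread
print(distribution_measurement(xc_, yc_, w, Ix, Iy)) # ~0.04: concentrated
```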

3.2. Piecewise Correction Strategy with Optimized CPs

According to the previous analysis, piecewise correction is indispensable for LA images, and the distribution quality of the CPs strongly affects the RMSE. Consequently, the image is gridded to determine the piecewise locations and simultaneously control the overall CP distribution. Within each grid, DM is employed to constrain the CP quality. Through these procedures, the distribution uniformity of the CPs is guaranteed from the global level down to the local level.
Since the resolution of LA images degrades nonlinearly along the oblique direction (assumed to be the y direction), the gridding strategies for the x and y directions differ. For the x direction, the image is divided evenly into M subparts. For the y direction, the resolution variability is used to measure the distortion; our aim is for each grid to cover approximately the same rate of resolution change. The resulting gridding strategy for the y direction is shown in Figure 5.
It can be seen that the larger the angle is, the smaller the division interval is. The gridded strategy for the y direction is:
$$\theta_n = \arccos \sqrt{ \frac{N}{ \dfrac{n}{\cos^2 \theta_{\mathrm{top}}} + \dfrac{N - n}{\cos^2 \theta_{\mathrm{bottom}}} } } \qquad (0 \le n \le N) \qquad (13)$$
where $\theta_0 = \theta_{\mathrm{bottom}}$ and $\theta_N = \theta_{\mathrm{top}}$ are the boundary viewing angles, and N is the number of subparts in the y direction.
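A minimal sketch of this gridding rule follows: Equation (13) spaces the grid boundaries so that each subpart sees roughly the same change in 1/cos²θ, i.e., in resolution. The viewing-angle span used here is an illustrative assumption:

```python
# Sketch of Equation (13); the viewing-angle span below is illustrative.
import numpy as np

def y_grid_angles(theta_top, theta_bottom, N):
    """theta_n for n = 0..N: equal steps in 1/cos^2(theta)."""
    n = np.arange(N + 1)
    inv = n / np.cos(theta_top) ** 2 + (N - n) / np.cos(theta_bottom) ** 2
    return np.arccos(np.sqrt(N / inv))

angles = np.degrees(y_grid_angles(np.radians(70), np.radians(50), N=5))
print(np.round(angles, 1))   # [50. 58.4 63.1 66.1 68.3 70.]: intervals
                             # shrink toward the larger viewing angle
```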
The proposed piecewise correction algorithm proceeds as follows (a code sketch of the per-grid CP selection is given after the list):
  • Divide the whole image into N × M grids;
  • Set the maximum number of CPs to be selected in each grid, namely m/(N × M), where m is the total number of CPs in the whole image. The following steps (a)–(c) are executed for each grid:
    (a)
    Calculate the information entropy (IE) of every CP, sort the CPs in descending order of IE, reserve the top m/(N × M) CPs, and go to (b). The remaining CPs form a spare set;
    (b)
    Calculate the DM of the reserved CPs. If DM > TQ, or all the reserved CPs have been processed, go to (c); if DM ≤ TQ, delete the CP nearest to the distribution center, retrieve the CP with the maximum IE from the spare set, and re-execute (b). Here, TQ is the threshold;
    (c)
    Once all the grids have been processed, CP selection terminates.
    Figure 6 illustrates the CPs selection process in one grid.
  • Partition the image into P correction subparts by merging adjacent grids along the y direction. To ensure correction consistency at the piecewise edges, adjacent subparts share an overlapping grid, as shown in Figure 7.
  • Estimate the model parameters by using the selected CPs for each subpart. All the corrected subparts are integrated to obtain the final result.
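The per-grid selection of steps (a)–(c) can be sketched as follows; the entropy values are assumed to be precomputed per CP, and for brevity the sketch uses an unweighted distribution center (the paper's Equation (11) weights it by entropy):

```python
# Hedged sketch of the per-grid CP selection (steps (a)-(c)); names are
# illustrative and the center is unweighted for simplicity.
import numpy as np
from dataclasses import dataclass

@dataclass
class Cp:
    x: float
    y: float
    entropy: float      # information entropy H(v) of the CP's descriptor

def select_in_grid(cps, quota, Ix, Iy, TQ=0.35):
    """Keep up to `quota` CPs in one grid, trading entropy for spread."""
    ranked = sorted(cps, key=lambda c: c.entropy, reverse=True)
    reserved, spare = ranked[:quota], ranked[quota:]
    while spare:                                  # (b): iterate until DM > TQ
        x = np.array([c.x for c in reserved])
        y = np.array([c.y for c in reserved])
        xc, yc = x.mean(), y.mean()
        dm = np.sqrt(np.sum(((x - xc) / Ix) ** 2
                            + ((y - yc) / Iy) ** 2) / len(reserved))
        if dm > TQ:
            break                                 # distribution spread enough
        # Drop the CP nearest the center; pull in the best spare by IE.
        nearest = int(np.argmin((x - xc) ** 2 + (y - yc) ** 2))
        reserved.pop(nearest)
        reserved.append(spare.pop(0))
    return reserved                               # (c): done for this grid
```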

4. Experimental Results and Analysis

In order to comprehensively illustrate the performance of the proposed method, two groups of datasets are employed. One group is taken from our semi-physical platform, imaging nadir views as the reference image and off-nadir views as the distorted images; the viewing angles range from 30° to 70° at intervals of 10°, as shown in Figure 8.
The other group consists of aerial remote sensing images, shown in Figure 9, whose details are listed in Table 1. Our programs are implemented in MATLAB 2012a, and the experiments are conducted on a Windows 7 computer with a Xeon 2.66 GHz CPU and 8 GB of memory.

4.1. Performance of CPs Extraction Based on Multi-View Simulation

This experiment examines the adaptability of MVS-based CP extraction for LA images. The MVS of the reference image must cover, with sufficient accuracy, all the distortions caused by variation of the imaging sensor's optical axis; therefore, the MVS is applied with very large parameter sampling ranges and small sampling steps. As the latitude θ increases, the distortions grow; in particular, near 90° a small change of latitude θ can give rise to severe distortion, so the sampling density must grow with θ. Setting u = √2 satisfies this requirement. Similarly, since the image distortion at high latitude is larger than that at low latitude, the sampling step of the longitude φ should be reduced as the latitude θ increases; we set $\Delta\phi = 72°/t_{k_1}$. The corresponding sampling parameters for MVS are given in Table 2. In our experiment, we set m = 4 and n = 2.
Table 3 lists the numbers of correct matches obtained by ASIFT, SIFT, Harris-Affine [24], Hessian-Affine [7], and the proposed algorithm. The computation costs of ASIFT, SIFT, and the proposed algorithm are shown in Table 4.
It can be seen that the performance of every algorithm degrades with increasing imaging angle. ASIFT and the proposed algorithm, which obtain more CPs than the other algorithms, are more robust for LA images. For the distorted image at 70°, for example, SIFT obtains 0 CPs, the proposed algorithm 52 CPs, and ASIFT 120 CPs. Compared with SIFT, the proposed algorithm successfully achieves LA image correction, so the moderate extra time is worthwhile. ASIFT, however, is time consuming compared with the proposed algorithm: its computation cost is nearly twice that of the proposed algorithm, and its multiple matching produces matching-point redundancy, which demands a huge amount of calculation and long computing time in the subsequent processing. This indicates that the proposed algorithm is both necessary and more efficient than the other algorithms, especially for LA images. In addition, for the proposed algorithm, the fact that the result at 40° exceeds that at 30° is related to the sampling parameter u.

4.2. Performance of the Piecewise Correction with Optimized CPs

The total number of CPs for the whole image is adjustable, normally 30–100; we set the total m = 45. Considering the number of CPs and the computational complexity, we set M = 3, N = 5, and p = 2, as shown in Figure 7. If $I_x$ equals $I_y$, the threshold TQ should be 0.25; considering that $I_x$ and $I_y$ usually differ, TQ is adjusted to 0.35. For the CP optimization process, the computation costs for the two CP sets obtained by ASIFT and by the proposed algorithm are listed in Table 5. The results show that ASIFT is too time consuming to apply in practice, which is further evidence that the proposed algorithm is indispensable. The CPs before and after optimization are shown in Figure 10.
It is obvious that the CPs cluster around certain objects before optimization; after optimization, the dense CPs are thinned out. To examine the behavior of the proposed method, this part presents the correction results obtained with the total CPs from Section 2.2, the CPs optimized using DM, the piecewise linear functions [15] using the optimized CPs, and the piecewise strategy from Section 3.2, respectively. Except for the piecewise linear functions, the projective transformation is employed as the correction model. To ensure a fair comparison, all experiments use the same test points, in both quantity and quality; the number of test points is set to 12. The RMSE is used to measure the correction accuracy, given by:
$$\mathrm{RMSE} = \sqrt{ \sum_{i=1}^{n} \left( \Delta x_i^2 + \Delta y_i^2 \right) / n } \qquad (14)$$
where n is the number of test points, and $\Delta x_i$ and $\Delta y_i$ are the offsets between the reference coordinates and the corrected coordinates. We also compute the correction errors in the x and y directions as $\sqrt{\sum_{i=1}^{n} \Delta x_i^2 / n}$ and $\sqrt{\sum_{i=1}^{n} \Delta y_i^2 / n}$.
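As a small sketch of these metrics (the offsets below are made up for illustration):

```python
# Equation (14) and the per-direction errors; offsets are made up.
import numpy as np

dx = np.array([0.4, -0.9, 1.2, 0.3])   # x offsets of the test points (px)
dy = np.array([0.8, 0.5, -1.1, 0.6])   # y offsets of the test points (px)

rmse_x = np.sqrt(np.mean(dx ** 2))     # x-direction error
rmse_y = np.sqrt(np.mean(dy ** 2))     # y-direction error
rmse = np.sqrt(np.mean(dx ** 2 + dy ** 2))  # Eq. (14)
```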
The correction errors in the x and y directions are compared in Table 6, and the RMSEs are compared in Table 7. The mosaic results of the different correction methods for the 60° image are shown in Figure 11, and Figure 12 shows the mosaic results of aerial images 1 and 2.
Visual inspection shows that Figure 11a and Figure 12b exhibit obvious double shadows, meaning that correction using the total CPs produces large correction errors. Similarly, Figure 11c and Figure 12f also exhibit double shadows, demonstrating that the piecewise linear functions are not suitable for correcting LA images. Tables 6 and 7 show that the optimized distribution of CPs effectively improves the correction accuracy. After CP optimization, the reduced number of CPs does not reduce the correction accuracy; on the contrary, the accuracy in both the x and y directions improves, as does the final RMSE. For the LA image at 60°, the RMSE improves by 12.03 pixels (from 14.57 to 2.54).
Compared with model estimation over the whole image, the proposed piecewise strategy improves the correction accuracy in both the x and y directions, and the final RMSE improves accordingly. The improvement grows with increasing oblique angle, especially in the y direction. For the LA image at 70°, the RMSE of the proposed piecewise correction improves by 0.45 pixels compared to whole-image correction. We can conclude that the proposed algorithm is effective and adaptable for correcting LA distorted images.

5. Conclusions

Traditional correction methods are unsuitable for effectively correcting remote sensing images acquired under seriously oblique conditions. On the basis of an analysis of the characteristics of LA images, a new correction method has been proposed. The proposed method simulates multiple views of the reference image to ensure the effective matching of feature points, and uses a piecewise strategy based on optimized CPs to achieve highly accurate correction. The new method was analyzed theoretically and assessed experimentally by correcting LA images from our semi-physical platform and from aerial remote sensing. Compared with the traditional methods, the new method can effectively correct LA images, and the correction accuracy is significantly improved. The method has important application value in geometric correction, especially under LA imaging conditions.

Acknowledgments

This work was supported by the project of local colleges’ and universities’ capacity construction of Science and Technology Commission in Shanghai (15590501300), the National Natural Science Foundation of China (61201244), the Shanghai funded training program of young teachers (E1-8500-15-01046) and the Doctoral Program Initiation Fund of SUES (E1-0501-15-0209).

Author Contributions

The paper was a collaborative effort between the authors. Chunyuan Wang contributed to the theoretical analysis, modeling, simulation, and manuscript preparation. Xiang Liu, Xiaoli Zhao and Yongqi Wang were collectively responsible for result discussion and improved this manuscript’s English language and style.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Ma, J.; Chan, J.C.-W.; Canters, F. Fully Automatic Subpixel Image Registration of Multiangle CHRIS/Proba Data. IEEE Trans. Geosci. Remote Sens. 2010, 48, 2829–2839. [Google Scholar]
  2. Berenstein, R.; Hočevar, M.; Godeša, T.; Edan, Y.; Ben-Shahar, O. Distance-Dependent Multimodal Image Registration for Agriculture Tasks. Sensors 2015, 15, 20845–20862. [Google Scholar] [CrossRef] [PubMed]
  3. Kang, Y.-S.; Ho, Y.-S. An efficient image rectification method for parallel multi-camera arrangement. IEEE Trans. Consum. Electron. 2011, 57, 1041–1048. [Google Scholar] [CrossRef]
  4. Zheng, Z.; Wang, H.; Teoh, E.K. Analysis of gray level corner detection. Pattern Recognit. Lett. 1999, 20, 149–162. [Google Scholar] [CrossRef]
  5. Mokhtarian, F.; Mackworth, A. Scale-based description and recognition of planar curves and two-dimensional shapes. IEEE Trans. Pattern Anal. Mach. Intell. 1986, 8, 34–43. [Google Scholar] [CrossRef] [PubMed]
  6. Baker, S.; Nayar, S.; Murase, H. Parametric feature detection. Int. J. Comput. Vis. 1998, 27, 27–50. [Google Scholar] [CrossRef]
  7. Mikolajczyk, K.; Schmid, C. A performance evaluation of local descriptors. IEEE Trans. Pattern Anal. Mach. Intell. 2005, 27, 1615–1630. [Google Scholar] [CrossRef] [PubMed]
  8. Lowe, D. Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vis. 2004, 60, 91–110. [Google Scholar] [CrossRef]
  9. Morel, J.M.; Yu, G. ASIFT: A new framework for fully affine invariant image comparison. SIAM J. Imaging Sci. 2009, 2, 438–469. [Google Scholar] [CrossRef]
  10. Zitova, B.; Flusser, J. Image registration methods: A survey. Image Vis. Comput. 2003, 21, 977–1000. [Google Scholar] [CrossRef]
  11. Bhatti, H.A.; Rientjes, T.; Haile, A.T.; Habib, E.; Verhoef, W. Evaluation of Bias Correction Method for Satellite-Based Rainfall Data. Sensors 2016, 16, 884–901. [Google Scholar] [CrossRef] [PubMed]
  12. Goncalves, H.; Goncalves, J.A.; Corte-Real, L. Measures for an objective evaluation of the geometric correction process quality. IEEE Geosci. Remote Sens. Lett. 2009, 6, 292–296. [Google Scholar] [CrossRef]
  13. Sedaghat, A.; Mokhtarzade, M.; Ebadi, H. Uniform robust scale-invariant feature matching for optical remote sensing images. IEEE Trans. Geosci. Remote Sens. 2011, 49, 4516–4527. [Google Scholar] [CrossRef]
  14. Tait, R.J.; Schaefer, G.; Hopgood, A.A.; Zhu, S.Y. Efficient 3-D medical image registration using a distributed blackboard architecture. In Proceedings of the 28th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, New York, NY, USA, 30 August–3 September 2006; pp. 3045–3048.
  15. Goshtasby, A. Piecewise linear mapping functions for image registration. Pattern Recognit. 1986, 19, 459–466. [Google Scholar] [CrossRef]
  16. González, V.A.J. An experimental evaluation of non-rigid registration techniques on Quickbird satellite imagery. Int. J. Remote Sens. 2008, 29, 513–527. [Google Scholar]
  17. Han, Y.; Choi, J.; Byun, Y.; Kim, Y. Parameter Optimization for the Extraction of Matching Points between High-Resolution Multisensor Images in Urban Areas. IEEE Trans. Geosci. Remote Sens. 2014, 52, 5612–5621. [Google Scholar]
  18. Wang, C.; Gu, Y.; Zhang, Y. A coarse-to-fine correction method for seriously oblique remote sensing image. ICIC Express Lett. 2011, 5, 4503–4509. [Google Scholar]
  19. Yu, G.; Morel, J.M. A fully affine invariant image comparison method. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, Taipei, Taiwan, 19–24 April 2009; pp. 1597–1600.
  20. Beis, J.; Lowe, D. Shape indexing using approximate nearest-neighbor search in high-dimensional space. In Proceedings of the International Conference on Computer Vision and Pattern Recognition, San Juan, Puerto Rico, 17–19 June 1997; pp. 1000–1006.
  21. Bennour, A.; Tighiouart, B. Automatic high resolution satellite image registration by combination of curvature properties and random sample consensus. In Proceedings of the 2012 6th International Conference on Sciences of Electronics, Technologies of Information and Telecommunications, Sousse, Tunisia, 21–24 March 2012; pp. 370–374.
  22. Zhu, Q.; Wu, B.; Xu, Z. Seed point selection method for triangle constrained image matching propagation. IEEE Geosci. Remote Sens. Lett. 2006, 3, 207–211. [Google Scholar] [CrossRef]
  23. Schmid, C.; Mohr, R.; Bauckhage, C. Evaluation of interest point detectors. Int. J. Comput. Vis. 2000, 37, 151–172. [Google Scholar] [CrossRef]
  24. Mikolajczyk, K.; Schmid, C. An Affine Invariant Interest Point Detector. In Computer Vision—ECCV 2002; Heyden, A., Sparr, G., Nielsen, M., Johansen, P., Eds.; Springer: Berlin/Heidelberg, Germany, 2002. [Google Scholar]
Figure 1. The changes of neighborhood pixels (a) the reference image; (b) the large angle image.
Figure 2. Spatial resolution of off-nadir and nadir.
Figure 3. Geometric explanation for the matrix decomposition of transformation.
Figure 4. The overview of the proposed algorithm based on MVS.
Figure 5. The gridded strategy in the y direction.
Figure 6. The CPs selection process in one grid.
Figure 7. The grid division and the piecewise strategy (e.g., p = 2, N = 5, M = 3).
Figure 8. Images taken from semi-physical platform (the left is the reference image).
Figure 9. Two sets of aerial images (a) Reference image 1; (b) Distorted image 1; (c) Reference image 2; (d) Distorted image 2.
Figure 10. CPs optimization results (a) CPs distribution of 40° before optimization; (b) CPs distribution of 40° after optimization; (c) CPs distribution of 60° before optimization; (d) CPs distribution of 60° after optimization; (e) CPs distribution of aerial 1 before optimization; (f) CPs distribution of aerial 1 after optimization; (g) CPs of aerial 2 before optimization; (h) CPs of aerial 2 after optimization.
Figure 11. The correction results (for 60° as an example) (a) Correction results by the total CPs; (b) Correction results by the optimized CPs; (c) Correction results by the piecewise linear functions; (d) Correction results by the proposed piecewise strategy.
Figure 12. The correction results (a) The correction results by the total CPs of Aerial image 1; (b) The correction results by the total CPs of Aerial image 2; (c) The correction results by the optimized CPs of Aerial image 1; (d) The correction results by the optimized CPs of Aerial image 2; (e) The correction results by the piecewise linear functions of Aerial image 1; (f) The correction results by the piecewise linear functions of Aerial image 2; (g) The correction results by the proposed piecewise strategy of Aerial image 1; (h) The correction results by the proposed piecewise strategy of Aerial image 2.
Table 1. Introduction of the aerial remote sensing images.
Index | Size (pixel) | Resolution (m) | View Angle (°) | Rotation (°) | Location
image1 | 500 × 500 | 0.5 | 40 | 15 | Liaoning, China
image2 | 800 × 700 | 0.1 | 60 | 15 | Rhode Island, USA
Table 2. The corresponding sampling parameters for multi-view simulation.
Factor m | 1 | 2 | 3 | 4 | 5
θ | 45° | 60° | 69° | 75° | 80°
Δφ | 48° | 36° | 24° | 18° | 12°
Table 3. Correct Matches of different algorithms (Number).
Index | ASIFT | SIFT | HarAff | HesAff | Proposed Method
30° | 1126 | 155 | 38 | 33 | 212
40° | 1733 | 123 | 31 | 19 | 225
50° | 918 | 63 | 18 | 11 | 181
60° | 685 | 18 | 9 | 5 | 96
70° | 120 | 0 | 0 | 0 | 52
Aerial image 1 | 1247 | 112 | 29 | 17 | 215
Aerial image 2 | 128 | 41 | 10 | 7 | 137
Table 4. Computation Cost (Second).
Index | ASIFT | SIFT | Proposed Method
30° | 11.25 | 4.58 | 6.11
40° | 10.66 | 4.75 | 6.36
50° | 9.76 | 4.07 | 5.89
60° | 9.56 | 3.01 | 5.76
70° | 9.56 | 2.64 | 4.18
Aerial image 1 | 9.27 | 4.66 | 6.77
Aerial image 2 | 10.86 | 4.85 | 6.14
Table 5. Computation Cost of CPs Optimization (Second).
Index | CPs by ASIFT | CPs by the Proposed Method
30° | 30.83 | 5.74
40° | 38.34 | 5.26
50° | 29.76 | 3.21
60° | 20.92 | 2.99
70° | 3.56 | 1.68
Aerial image 1 | 36.88 | 4.84
Aerial image 2 | 3.69 | 3.56
Table 6. Comparison of correction errors for the x and y direction (pixel).
Index | Corrected by the Total CPs (x / y) | Corrected by the Optimized CPs (x / y) | The Piecewise Linear Functions (x / y) | The Piecewise Correction (x / y)
30° | 0.73 / 1.11 | 0.69 / 0.93 | 0.87 / 1.19 | 0.67 / 0.93
40° | 0.86 / 0.96 | 0.86 / 0.93 | 1.01 / 2.19 | 0.81 / 0.93
50° | 1.05 / 2.46 | 0.89 / 1.76 | 1.51 / 3.12 | 0.83 / 1.57
60° | 12.86 / 6.87 | 0.99 / 2.34 | 3.03 / 4.31 | 1.05 / 1.70
70° | 19.65 / 9.04 | 1.59 / 3.74 | 5.27 / 7.10 | 1.51 / 2.92
Aerial 1 | 0.83 / 1.51 | 0.81 / 1.32 | 0.98 / 1.42 | 0.69 / 1.23
Aerial 2 | 6.98 / 14.38 | 1.96 / 3.12 | 4.66 / 6.34 | 1.84 / 2.66
Table 7. Comparison of RMSE (pixel).
Index | Corrected by the Total CPs | Corrected by the Optimized CPs | The Piecewise Linear Functions | The Piecewise Correction
30° | 1.44 | 1.17 | 1.47 | 1.16
40° | 1.29 | 1.27 | 2.21 | 1.24
50° | 2.65 | 1.98 | 3.43 | 1.79
60° | 14.57 | 2.54 | 5.26 | 2.07
70° | 23.09 | 4.07 | 8.77 | 3.32
Aerial 1 | 1.72 | 1.58 | 1.76 | 1.42
Aerial 2 | 15.98 | 3.68 | 7.85 | 3.23
