Article

Geometry-Based Global Alignment for GSMS Remote Sensing Images

1 Key Laboratory of Specialty Fiber Optics and Optical Access Networks, Shanghai University, Shanghai 200070, China
2 Institute of Information Engineering, Chinese Academy of Sciences, Beijing 100195, China
3 The 16th Institute, China Aerospace Science and Technology Corporation, Shaanxi 710100, China
* Author to whom correspondence should be addressed.
Remote Sens. 2017, 9(6), 587; https://doi.org/10.3390/rs9060587
Submission received: 11 March 2017 / Revised: 11 May 2017 / Accepted: 7 June 2017 / Published: 10 June 2017
(This article belongs to the Special Issue Learning to Understand Remote Sensing Images)

Abstract

Alignment of latitude and longitude for all pixels is important for geo-stationary meteorological satellite (GSMS) images. To align landmarks and non-landmarks in the GSMS images, we propose a geometry-based global alignment method. First, the Global Self-consistent, Hierarchical, High-resolution Geography (GSHHG) database and the GSMS images are expressed as feature maps by geometric coding. Initial feature matching is obtained from the geometric and gradient similarity of the feature maps. Then, a local geometric refinement algorithm based on neighborhood spatial consistency is used to remove outliers. Since the earth is not a standard sphere, polynomial fitting models are used to describe the global relationship between latitude, longitude and the coordinates of all pixels in the GSMS images. Finally, with the registered landmarks and polynomial fitting models, the latitude and longitude of each pixel in the GSMS images can be calculated. Experimental results show that the proposed method globally aligns the GSMS images with high accuracy and recall and significantly lower computational complexity.

1. Introduction

In many applications, such as weather forecasting and environmental monitoring, determining the latitude and longitude of each pixel in the GSMS images is of great importance. However, GSMS imaging is round-the-clock, all-weather, long-range and high-resolution, which brings new challenges to practical applications.
Remote sensing image matching algorithms are usually divided into two categories: area-based methods and feature-based methods [1,2]. Area-based matching establishes correspondence between two images through similarity measures based on correlation functions; classical measures include cross-correlation [3] and root mean square error (RMSE) [4]. A rough-location method [5] was proposed to locate remote sensing images with specific physiognomy: by matching the remote sensing image against a digital map, researchers can roughly locate the image with a location error of less than 10 km. However, the GSMS images are generally degraded by illumination changes, scale variation, cloud cover and other factors, so these algorithms do not work well. Feature-based matching is widely applied to remote sensing images [6,7,8,9] because of its robustness; for example, the scale-invariant feature transform (SIFT) [10,11] performs excellently in most circumstances. However, few feature points can be extracted from the GSMS images with SIFT due to their poor texture. In addition, feature-based alignment that uses only the local gradient distribution leads to low precision because the GSMS images contain too many similar features.
A key challenge for these point-matching methods is removing outliers, whose presence degrades the accuracy of the matching results [12,13]. Many outlier-removal algorithms based on geometric constraints and spatial information are in common use. Among them, Random Sample Consensus (RANSAC) [14] is one of the most popular: it randomly selects a sample from the consensus set in each iteration and finds the largest consensus set to calculate the final model parameters. When outliers are in the minority, RANSAC performs well and robustly; when they are in the majority, it becomes time-consuming and unstable. By exploring the spatial relationship of matching points, a matching strategy using spatially consistent matching [8] was proposed to remove outliers. In [15,16,17], the authors proposed spatial coding algorithms for image search that rely on the relative position relationships between pairs of matching feature points. These methods take all matching feature pairs into account and encode their coordinates to discover false matches between two images. However, the spatial relationship consistency in these methods is too strict for landmark alignment. Since the earth is not a standard sphere, position deviations exist in the GSMS images; spatial relationship consistency is effective only in a small region, and it causes many correctly matched features to be deleted by mistake. Furthermore, the number of landmarks is so large that it slows the outlier-removal process. Aguilar et al. [18] proposed Graph Transformation Matching (GTM), which builds a K-Nearest-Neighbor (KNN) graph to express the neighborhood geometric structure of the feature points; mismatched feature points are identified from the differences between the KNN graphs built in the two images. Shi et al. [19] proposed an image registration algorithm using point structure information.
After obtaining robust initial matching point pairs, the final matching results are estimated using GTM, which removes outliers from the initial correspondences based on the local structure information of each point. On the basis of the GTM algorithm, the Weighted Graph Transformation Matching (WGTM) algorithm [20] was proposed. Using the angular distance between the edges that connect a feature point to its KNN as the weight, WGTM can handle pseudo-isomorphic structures only to a certain extent, because angular distance is invariant only to scale and rotation, and shear deformations are not considered. Liu et al. [21] proposed the Restricted Spatial Order Constraints (RSOC) algorithm, which uses a filtering strategy based on two-way geometric order constraints and two decision criteria. However, when the K nearest neighbors of the outliers are all the same, RSOC fails to remove such outliers. Zhang et al. [22] proposed a triangle-area representation of the K nearest neighbors (KNN-TAR), which uses the KNN-TAR descriptor to find candidate outliers and removes the real outliers using local structure and global information. In [23], an algorithm based on an integrated spatial structure constraint (ISSC) was proposed for remote sensing image registration. First, a global structure constraint is constructed for each correspondence outside the tentative set to increase the number of inliers and raise the correct rate simultaneously. Then, a local structure constraint based on the triangle-area representation is applied to the neighboring points of each correspondence to remove outliers. Recently, Zhao et al. [24] proposed a vertex trichotomy descriptor that uses the geometric relations between vertices and lines, constructed by mapping each vertex into trichotomy sets. A recovery and filtering vertex trichotomy matching (RFVTM) algorithm was designed to recover some inliers based on identical vertex trichotomy descriptors and restricted transformation errors.
Much work has also addressed the image alignment problem. Previous works fall into two main categories: direct [25] and feature-based methods [26,27]. Direct approaches minimize pixel-to-pixel dissimilarities, while feature-based approaches first locate a sparse set of reliable features in the image and then recover the motion parameters from their correspondences. Miller et al. [28] proposed the congealing method, which uses an entropy measure to align images with respect to the distribution of the data. Cox et al. [29] proposed a least-squares congealing algorithm that minimizes the sum of squared distances between images. Minimization of a log-determinant cost function [30] has also been used to align images.
Inspired by these approaches, we propose a geometry-based global alignment method to align GSMS remote sensing images. According to the geometric and gradient similarity of feature maps from the GSHHG database and GSMS images, initial feature matching is obtained. Then, feature refinement with a neighborhood spatial consistent matching (NSCM) algorithm is used to remove outliers. Finally, polynomial models are fitted to describe the offsets' tendency according to the matched point set. With the fitted polynomial models, the latitude and longitude of all pixels in the GSMS images can be determined.

2. Materials and Methods

2.1. Local Feature Matching by Geometric Coding

The shorelines of the GSHHG database correspond to the edges of the GSMS images [31], which means that shorelines can be used to simplify alignment of GSHHG and GSMS images.
Since the GSHHG database consists of polygon and line data, it is much smaller than other reference data such as digital elevation models and digital vector maps. With the sub-satellite point (longitude $\alpha_0$, latitude $\gamma_0$) and satellite height $H$, the landmarks in GSHHG are mapped to a two-dimensional plane by perspective projection, and the GSHHG database is thus quantized to a binary image. As shown in Figure 1a, the white pixels are the landmarks defined in the GSHHG database.
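For intuition, the perspective mapping of a GSHHG landmark onto the satellite image plane can be sketched as below. This is an illustrative sphere-model sketch under assumed geometry; the function name, the mean earth radius value and the exact projection conventions are our assumptions, not the paper's implementation:

```python
import numpy as np

def project_landmark(lon, lat, lon0, lat0, H, R=6371.0):
    """Sketch of a perspective projection of a (lon, lat) landmark onto the
    image plane of a geostationary satellite at height H (km) above the
    sub-satellite point (lon0, lat0).  Sphere model; angles in degrees."""
    a = np.radians(lon - lon0)
    g, g0 = np.radians(lat), np.radians(lat0)
    # Earth-centered coordinates, z-axis rotated through the sub-satellite point
    x = R * np.cos(g) * np.sin(a)
    y = R * (np.sin(g) * np.cos(g0) - np.cos(g) * np.cos(a) * np.sin(g0))
    z = R * (np.sin(g) * np.sin(g0) + np.cos(g) * np.cos(a) * np.cos(g0))
    # Pinhole perspective divide: the satellite sits at distance R + H
    d = (R + H) - z
    u = (R + H) * x / d   # image-plane column (arbitrary scale)
    v = (R + H) * y / d   # image-plane row (arbitrary scale)
    return u, v
```

By construction, the sub-satellite point projects to the image center (0, 0), and points symmetric in latitude about the sub-satellite point project symmetrically in the row direction.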
The GSMS image is normalized [32,33] so that the GSHHG and GSMS images have the same size. The edges of the GSMS image extracted by Structured Forests [34] are defined as the edge probability image. As shown in Figure 1b, each element denotes the probability of the pixel being an edge candidate. To distinguish edge candidates from noise, the probability image is binarized to generate the edge binary image as depicted in Figure 1c.
For a landmark $P_i^G(x_i^G, y_i^G)$ in the GSHHG image, the neighborhood coding matrix $\mathbf{W}$ can be constructed. For a pixel $P_i^S(x_i^S, y_i^S)$ in the edge probability image, the neighborhood coding matrix $\mathbf{P}$ can be constructed; similarly, the neighborhood coding matrix $\bar{\mathbf{P}}$ can be generated from the edge binary image. The matrices $\mathbf{W}$, $\mathbf{P}$ and $\bar{\mathbf{P}}$ all have the same size $(2K+1) \times (2K+1)$.
Then, local features are matched by comparing their geometric similarity and gradient similarity. The geometric similarity between a landmark $P_i^G(x_i^G, y_i^G)$ in the GSHHG image and a pixel $P_i^S(x_i^S, y_i^S)$ in the edge binary image is calculated as:

$$E_{geo}(i, x_i^S, y_i^S) = \sum_{s=-K}^{K} \sum_{t=-K}^{K} W_{s,t}^{i} \,\mathrm{AND}\, \bar{P}_{s,t},$$

where $W_{s,t}^{i}$ and $\bar{P}_{s,t}$ denote the element in row $s$ and column $t$ of $\mathbf{W}$ and $\bar{\mathbf{P}}$, respectively.
Similarly, the gradient similarity between a landmark $P_i^G(x_i^G, y_i^G)$ in the GSHHG image and a pixel $P_i^S(x_i^S, y_i^S)$ in the edge probability image is calculated as:

$$E_{gra}(i, x_i^S, y_i^S) = \sum_{s=-K}^{K} \sum_{t=-K}^{K} W_{s,t}^{i} \times P_{s,t}.$$
The number of landmarks located within the template is calculated as:

$$C_{geo}(i, x_i^G, y_i^G) = \sum_{s=-K}^{K} \sum_{t=-K}^{K} W_{s,t}^{i}.$$
Both geometric and gradient similarity are measured to match local features. The procedure of local feature matching between the GSHHG and GSMS image is shown in Algorithm 1. Figure 1d shows the result of initial feature points matching.
Algorithm 1: Local feature matching.
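The three similarity measures above can be computed with simple array operations. This is a minimal illustrative sketch; the function name and patch conventions are our assumptions, not the paper's exact code:

```python
import numpy as np

def similarity_scores(W, P_prob, P_bin):
    """Similarity measures for one candidate pixel.

    W      : (2K+1, 2K+1) binary neighborhood coding matrix of a GSHHG landmark.
    P_prob : same-size patch of the edge probability image.
    P_bin  : same-size patch of the edge binary image.
    """
    W = np.asarray(W)
    # E_geo: count of positions where landmark and edge pixels coincide (AND)
    e_geo = int(np.logical_and(W > 0, np.asarray(P_bin) > 0).sum())
    # E_gra: edge-probability mass under the landmark template
    e_gra = float((W * np.asarray(P_prob)).sum())
    # C_geo: number of landmark pixels inside the template
    c_geo = int((W > 0).sum())
    return e_geo, e_gra, c_geo
```

In Algorithm 1, a landmark would be compared against candidate pixels in a search window and the candidate with the best geometric and gradient similarity (relative to $C_{geo}$) retained.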

2.2. Feature Refinement with Neighborhood Spatial Consistent Matching (NSCM)

Since there are lots of similar features in the GSMS image, local feature matching will lead to mismatching. The red circles in Figure 1d show mismatched features. The mauve circles in Figure 1d present many-to-one matched features due to the aperture effect.
The geometric relationship between matched features should not change too much across images. Based on this principle, we propose a neighborhood spatial consistent matching (NSCM) algorithm to remove outliers whose offsets between matched features have sudden mutations.
After Section 2.1, the matched set can be denoted as $M = \{(P_i^G, P_i^S) = ((x_i^G, y_i^G), (x_i^S, y_i^S)),\ i = 1, 2, \ldots, N\}$, where the superscripts "G" and "S" refer to the GSHHG and GSMS images, respectively, $(P_i^G, P_i^S)$ denotes a pair of matched features and $N$ is the number of matched features.
Given one landmark $P_i^G(x_i^G, y_i^G)$ in the GSHHG image, its $n$ nearest landmarks can be represented as $N^G = \{P_{i_j}^G(x_{i_j}^G, y_{i_j}^G),\ j = 1, \ldots, n\}$ and their corresponding points in the GSMS image as $N^S = \{P_{i_j}^S(x_{i_j}^S, y_{i_j}^S),\ j = 1, \ldots, n\}$. Their offsets $D = \{(Dx_{i_j}, Dy_{i_j}),\ j = 1, \ldots, n\}$ are defined as:

$$Dx_{i_j} = x_{i_j}^G - x_{i_j}^S, \quad Dy_{i_j} = y_{i_j}^G - y_{i_j}^S.$$
The neighborhood offsets of the matched feature pair $(P_i^G, P_i^S)$ can be formulated as:

$$\bar{D}x_i = \sum_{j=1}^{n} \mu_j \cdot Dx_{i_j}, \quad \bar{D}y_i = \sum_{j=1}^{n} \mu_j \cdot Dy_{i_j},$$

where $\mu_j = k \cdot \exp(-\|P_{i_j}^G - P_i^G\|^2 / \sigma^2)$ is constrained so that $\sum_j \mu_j = 1$, with $k$ a normalizing constant. The closer $P_{i_j}^G$ is to $P_i^G$, the higher the weight $\mu_j$ assigned to $Dx_{i_j}$ and $Dy_{i_j}$.
The offsets between $P_i^G(x_i^G, y_i^G)$ and $P_i^S(x_i^S, y_i^S)$ in row and column are calculated as:

$$\Delta x_i = x_i^G - x_i^S, \quad \Delta y_i = y_i^G - y_i^S.$$
For the given matched feature pair $(P_i^G, P_i^S)$, neighborhood spatial consistency requires that $\Delta x_i$ should not deviate too much from $\bar{D}x_i$, and likewise $\Delta y_i$ from $\bar{D}y_i$. This constraint is formulated as:

$$|\Delta x_i - \bar{D}x_i| < \delta, \quad |\Delta y_i - \bar{D}y_i| < \varepsilon,$$

where $\delta$ and $\varepsilon$ are two thresholds controlling the sensitivity to deformations. If their values are large, incorrectly matched features are more likely to be regarded as inliers; both are set to 0.5 according to experimental results. If $(P_i^G, P_i^S)$ satisfies this low-distortion constraint, it is considered an inlier.
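The NSCM refinement can be sketched as follows. The kernel width `sigma` is an assumed parameter (the paper does not report its value), and the brute-force nearest-neighbor search is for clarity only:

```python
import numpy as np

def nscm_inliers(pts_g, pts_s, n=17, sigma=10.0, delta=0.5, eps=0.5):
    """Keep matched pairs whose offset agrees with the Gaussian-weighted
    mean offset of their n nearest neighbors in the GSHHG image.

    pts_g, pts_s : (N, 2) arrays of matched GSHHG / GSMS coordinates.
    Returns a boolean inlier mask of length N.
    """
    pts_g = np.asarray(pts_g, float)
    pts_s = np.asarray(pts_s, float)
    offsets = pts_g - pts_s                       # (Δx_i, Δy_i) per pair
    keep = []
    for i in range(len(pts_g)):
        d = np.linalg.norm(pts_g - pts_g[i], axis=1)
        d[i] = np.inf                             # exclude the pair itself
        nbr = np.argsort(d)[:n]                   # n nearest landmarks
        w = np.exp(-d[nbr] ** 2 / sigma ** 2)
        w /= w.sum()                              # normalize so Σ μ_j = 1
        mean_off = w @ offsets[nbr]               # (D̄x_i, D̄y_i)
        dev = np.abs(offsets[i] - mean_off)
        keep.append(dev[0] < delta and dev[1] < eps)
    return np.array(keep)
```

On a synthetic grid with a constant offset and a single outlying match, only the outlier violates the constraint and is removed.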
Figure 2 is the illustration of mismatched features and many-to-one matched features. As shown in Figure 2a, (point 3, point 3’) is a pair of mismatched features. The offsets between them in row and column are 2 and 2 . The offsets between other pairs in neighborhood are 1 and 2. Since the offsets of (point 3, point 3’) are over thresholds, they are removed. In Figure 2b, (point 2, point 2’) and (point 3, point 3’) are pairs of many-to-one matched features. The offsets between point 3 and point 3’ in row and column are 2 and 4. The offsets between other pairs in its neighborhood are 1 and 2. (point 3, point 3’) is removed and (point 2, point 2’) is considered an inlier.
The details of the initial matching result and feature refinement in the southern coastal area of Thailand are shown in Figure 3. Figure 3a,c present the details of the top red circles and mauve circles, respectively, in Figure 1d. As shown in Figure 3b,d, these mismatched features are removed.

2.3. Pixel Alignment Based on Polynomial Fitting

The earth is not a standard sphere. When a sphere model is used to describe the earth, the farther a pixel is from the projection center, the larger its distance distortion. In this case, the sphere-to-plane transformation model is not suitable for describing the projection model of the GSMS image.
However, the offsets between the GSHHG and GSMS images in rows and columns vary smoothly. For a point $P_i^G(x_i^G, y_i^G)$ in the GSHHG image and its corresponding point $P_i^S(x_i^S, y_i^S)$ in the GSMS image, the row and column offsets are $\Delta x_i$ and $\Delta y_i$ according to Equation (6). Polynomial functions are applied to fit the tendency of the offsets in rows and columns. Based on an $m$-th order polynomial, the fitting functions are defined as:
$$f_{\Delta x_i}(x_i^G, y_i^G) = \sum_{k=0}^{m} a_k (x_i^G)^k (y_i^G)^{m-k} + b_0, \quad f_{\Delta y_i}(x_i^G, y_i^G) = \sum_{k=0}^{m} c_k (x_i^G)^k (y_i^G)^{m-k} + d_0,$$

where $a_0, \ldots, a_m$, $c_0, \ldots, c_m$, $b_0$ and $d_0$ are the coefficients to be estimated.
The points $P_i^G(x_i^G, y_i^G)$ in the matched set and their corresponding $\Delta x_i$ are used to estimate the coefficients of the polynomial fitting function $f_{\Delta x_i}(x_i^G, y_i^G)$. The correlation coefficient and RMSE are considered to select the optimal coefficients. The fitted function $f_{\Delta x_i}(x_i^G, y_i^G)$ describes how the row offset changes with the coordinate $(x_i^G, y_i^G)$. Similarly, the coefficients of $f_{\Delta y_i}(x_i^G, y_i^G)$ can be estimated from $P_i^G(x_i^G, y_i^G)$ and the corresponding $\Delta y_i$; the fitted function describes how the column offset changes with the coordinate.
The offsets of pixels between the GSHHG and GSMS images can then be obtained from the fitted functions $f_{\Delta x_i}(x_i^G, y_i^G)$ and $f_{\Delta y_i}(x_i^G, y_i^G)$. For each pixel $(x_i^G, y_i^G)$ in the GSHHG image, its corresponding point $(x_i^S, y_i^S)$ in the GSMS image is:

$$x_i^S = x_i^G - f_{\Delta x_i}(x_i^G, y_i^G), \quad y_i^S = y_i^G - f_{\Delta y_i}(x_i^G, y_i^G).$$
For each pixel in the GSHHG image, the latitude and longitude information is already known. Polynomial fitting functions align all pixels of GSHHG with GSMS images globally. Therefore, the latitude and longitude of all pixels in the GSMS image can be obtained.
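The fitting and alignment steps above can be sketched as a least-squares fit of the two offset surfaces followed by the pixel mapping of Equation (9). The function names and the use of `numpy.linalg.lstsq` are our assumptions (the paper selects coefficients by correlation coefficient and RMSE):

```python
import numpy as np

def _basis(pts, m):
    """Design matrix for f(x, y) = sum_{k=0}^{m} a_k x^k y^(m-k) + b_0."""
    x, y = pts[:, 0], pts[:, 1]
    cols = [x ** k * y ** (m - k) for k in range(m + 1)]
    cols.append(np.ones_like(x))
    return np.column_stack(cols)

def fit_offset_surface(pts_g, offsets, m=3):
    """Least-squares estimate of one offset component's coefficients."""
    coef, *_ = np.linalg.lstsq(_basis(np.asarray(pts_g, float), m),
                               np.asarray(offsets, float), rcond=None)
    return coef

def eval_offset_surface(coef, pts, m=3):
    """Evaluate the fitted offset surface at the given GSHHG pixels."""
    return _basis(np.asarray(pts, float), m) @ coef

def align_pixels(pts_g, coef_dx, coef_dy, m=3):
    """Equation (9): map GSHHG pixels to their GSMS coordinates."""
    pts_g = np.asarray(pts_g, float)
    xs = pts_g[:, 0] - eval_offset_surface(coef_dx, pts_g, m)
    ys = pts_g[:, 1] - eval_offset_surface(coef_dy, pts_g, m)
    return np.column_stack([xs, ys])
```

Fitting one surface per offset component (rows and columns) matches the paper's use of two independent polynomial models.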

3. Results and Discussion

3.1. Dataset and Evaluation Criteria

The remote sensing images used in this experiment are from the FengYun-2D meteorological satellite, whose sub-satellite point is near (86°E, 0°N). Because of radial distortion, only landmarks located within ±60° of longitude and ±60° of latitude around the sub-satellite point are chosen as reference data. The size of the GSMS image is normalized to 10,000 × 10,000 pixels. For efficiency, both the GSHHG and GSMS images are divided into patches [35,36,37] of size $S_1 \times S_2$ pixels, and feature points are matched in each pair of patches. Some shorelines cannot be detected in the GSMS image because of cloud occlusion, which makes them difficult to match. To reduce this difficulty, 25 patches with relatively more edges in the GSMS image are selected for local feature matching and feature refinement with NSCM.
To evaluate the performance, the ground truth is manually selected from the points with the maximum gradient within their neighborhood. For each landmark in the GSHHG image, we find its corresponding point in the GSMS image as accurately as possible. Since the ground truth is labelled manually, small errors may remain. If the distance between a ground-truth point and a matched point is no greater than one pixel, the matched point is considered correct. Special care is taken so that the manually labelled ground truth does not contain landmarks under cloud or fog in the GSMS image.
In our experiments, three evaluation criteria are mainly used: precision, recall and RMSE:

$$\mathrm{precision} = \frac{N_{inliers}}{N_{inliers} + N_{outliers}}, \quad \mathrm{recall} = \frac{N_{inliers}}{N_{groundtruth}}, \quad \mathrm{RMSE} = \sqrt{\frac{1}{N_p} \sum_{i=1}^{N_p} \|P_i - P_i'\|^2},$$

where $N_{inliers}$ is the number of inliers in the matched set, $N_{outliers}$ the number of outliers in the matched set, $N_{groundtruth}$ the number of ground-truth points, $N_p$ the number of matched pairs, $P_i$ the matched points and $P_i'$ the corresponding ground-truth points in the GSMS image.
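A direct implementation of Equation (10) under the one-pixel inlier rule might look like this (the function name and input conventions are our assumptions):

```python
import numpy as np

def evaluate(matched_pts, gt_pts, n_ground_truth):
    """Precision, recall and RMSE per Equation (10).

    matched_pts, gt_pts : (N_p, 2) matched points and their ground-truth
    counterparts in the GSMS image; a match is an inlier when it lies
    within one pixel of the ground truth.
    """
    matched_pts = np.asarray(matched_pts, float)
    gt_pts = np.asarray(gt_pts, float)
    err = np.linalg.norm(matched_pts - gt_pts, axis=1)
    n_inliers = int((err <= 1.0).sum())
    n_outliers = len(matched_pts) - n_inliers
    precision = n_inliers / (n_inliers + n_outliers)
    recall = n_inliers / n_ground_truth
    rmse = float(np.sqrt((err ** 2).mean()))
    return precision, recall, rmse
```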

3.2. Local Feature Matching by Geometric Coding

The size of the template is a key parameter for geometric coding based local feature matching. Figure 4 shows the precision and recall with $K$ varying from 20 to 40. If the template is too small, more points are matched but more of them are mismatched, so precision and recall are lower. As $K$ increases, precision increases and finally stabilizes. If the template is too large, recall decreases because the number of matched features gradually decreases. Considering the tradeoff between precision and recall, $K$ is set to 30 in our experiments.

3.3. Feature Refinement with Neighborhood Spatial Consistent Matching (NSCM)

The NSCM algorithm is applied to remove the outliers caused by similar features and aperture effect. In the NSCM algorithm, the n nearest matched pairs are selected as neighborhood reference pairs. As depicted in Figure 5, with the value of n increasing, more neighborhood spatial consistent information is utilized and more outliers are removed. However, the spatial constraints also become stricter and the recall is decreasing. Considering the tradeoff between precision and recall, the value of n is set as 17 in the feature refinement process.

3.4. Comparison among Feature Matching Algorithms

The proposed NSCM approach is compared with seven refinement algorithms: RANSAC [14], GTM [18], WGTM [20], RSOC [21], KNN-TAR [22], ISSC [23] and RFVTM [24]. Figure 6 presents the performance of these eight algorithms, and the means of the experimental results are shown in Table 1. Table 1 indicates that the average precision of NSCM is the highest while its recall ranks in the middle; however, the subsequent processing improves recall on the basis of this high precision. The RMSE of NSCM is the smallest, as shown in Table 1.
As shown in Table 1, NSCM significantly outperforms the other algorithms in time efficiency. Assume there are $N$ feature pairs in the matched results. In this paper, $n$ is set to 17, which is much smaller than $N$, so the computational complexity of NSCM is $O(n \times N^2) = O(N^2)$.

3.5. Pixel Alignment Based on Polynomial Fitting

Based on the matched set obtained by feature refinement with NSCM, the offsets between the GSHHG and GSMS images in rows and columns are fitted. The Interpolant, Lowess and Polynomial fitting types are used to get an optimal solution by comparing their precision, recall and RMSE. Table 2 shows the statistical results of the three common fitting functions. The precision of Polynomial fitting is slightly higher compared with Interpolant fitting and Lowess fitting. The recall of Polynomial fitting is far larger than the others, and the RMSE is slightly smaller than the others. In conclusion, the Polynomial fitting outperforms the other methods in all evaluation criteria.
Figure 7 shows the results of Polynomial fitting functions with order $m$ from 1 to 5. As shown in Figure 7a,b, when $m$ is small, precision and recall are low due to under-fitting. However, a high-order polynomial leads to over-fitting: when $m$ becomes large, precision and recall drop sharply and the RMSE becomes very high. Therefore, third-order Polynomial fitting functions are used to fit the offsets' tendency.
Table 3 gives the three mean values including precision, recall and RMSE before and after pixel alignment. The values of precision are close, but the recall after pixel alignment increases greatly. Figure 8 shows the result of landmark alignment. All pixels in the GSMS remote sensing image are precisely located.
With pixel alignment, the latitude and longitude of all pixels in the GSMS image can be calculated. For each pixel $p_i$, its longitude $\alpha_i$ and latitude $\gamma_i$ are obtained by NSCM and Polynomial fitting. The coordinates of $p_i$ in the sub-satellite-based earth coordinate system are:

$$X_i = R \sin(\gamma_i - \gamma_0) \cos(\alpha_i - \alpha_0), \quad Y_i = R \sin(\gamma_i - \gamma_0) \sin(\alpha_i - \alpha_0), \quad Z_i = R \cos(\gamma_i - \gamma_0),$$

where $R$ is the radius of the earth, and $\alpha_0$ and $\gamma_0$ are the longitude and latitude of the sub-satellite point. With the coordinates $(X_i, Y_i, Z_i)$ and the intensity, the GSMS image can be displayed on a 3D earth as shown in Figure 9.
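Equation (11) can be sketched directly; the mean earth radius value used below is an assumption (the paper only denotes it $R$):

```python
import numpy as np

EARTH_RADIUS_KM = 6371.0  # assumed mean earth radius

def to_earth_coords(lon, lat, lon0, lat0, radius=EARTH_RADIUS_KM):
    """Sub-satellite-based earth coordinates per Equation (11).
    Angles in degrees; (lon0, lat0) is the sub-satellite point."""
    a = np.radians(lon - lon0)   # α_i − α_0
    g = np.radians(lat - lat0)   # γ_i − γ_0
    x = radius * np.sin(g) * np.cos(a)
    y = radius * np.sin(g) * np.sin(a)
    z = radius * np.cos(g)
    return x, y, z
```

Applied per pixel, this yields the point cloud rendered as the 3D earth of Figure 9; the sub-satellite point itself maps to $(0, 0, R)$.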

4. Conclusions

In this paper, we implement global alignment of all pixels in the GSMS images. Before global alignment, we match features between the landmarks of GSHHG and the edges of the GSMS images by geometric and gradient similarity measurement. Using the spatial consistency of the matched pairs, feature refinement with a neighborhood spatial consistent matching algorithm is proposed to remove outliers. Experimental results show that, compared with other methods, our algorithm achieves higher accuracy and lower RMSE at a significantly lower time cost. Based on polynomial fitting, global pixel alignment is applied to obtain the latitude and longitude of all pixels in the GSMS images and improves the recall significantly. Future work will focus on three-dimensional spherical stitching of multi-view remote sensing images.

Acknowledgments

We would like to thank Professor Qi Tian (the University of Texas at San Antonio) for his suggestions on this project. We also gratefully acknowledge the support of the National Satellite Meteorological Center. This work is supported by the National Natural Science Foundation of China (61572307).

Author Contributions

Dan Zeng supervised and designed the research work, in addition to writing the manuscript; Rui Fang, Shiming Ge and Shuying Li performed the experiments, and participated in experimental designs and data processing; and Zhijiang Zhang helped with writing and revisions.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Zitova, B.; Flusser, J. Image registration methods: A survey. Image Vis. Comput. 2003, 21, 977–1000. [Google Scholar] [CrossRef]
  2. Govindarajulu, S.; Reddy, K.N.K. Image Registration on satellite Images. IOSR-JECE 2012, 3, 10–17. [Google Scholar] [CrossRef]
  3. Xing, C.; Qiu, P. Intensity-based image registration by nonparametric local smoothing. IEEE Trans. Pattern Anal. Mach. Intell. 2011, 33, 2081–2092. [Google Scholar] [CrossRef] [PubMed]
  4. Jung, J.S.; Song, J.H.; Kwag, Y.K. High precision automatic geocoding method of SAR image using GSHHS. In Proceedings of the 2011 3rd International Asia-Pacific Conference on Synthetic Aperture Radar (APSAR), Seoul, Korea, 26–30 September 2011; pp. 1–4. [Google Scholar]
  5. Jianbin, X.; Wen, H.; Zhe, L.; Yirong, W.; Maosheng, X. The study of rough-location of remote sensing image with coastlines. In Proceedings of the 2003 IEEE International Geoscience and Remote Sensing Symposium (IGARSS’03), Toulouse, France, 21–25 July 2003; Volume 6, pp. 3964–3966. [Google Scholar]
  6. Liu, X.; Tian, Z.; Leng, C.; Duan, X. Remote sensing image registration based on KICA-SIFT descriptors. In Proceedings of the 2010 Seventh International Conference on Fuzzy Systems and Knowledge Discovery (FSKD), Singapore, 10–12 August 2010; Volume 1, pp. 278–282. [Google Scholar]
  7. Wang, G.-H.; Zhang, S.-B.; Wang, H.B.; Li, C.-H.; Tang, X.-M.; Tian, J.J.; Tian, J. An algorithm of parameters adaptive scale-invariant feature for high precision matching of multi-source remote sensing image. In Proceedings of the 2009 Joint Urban Remote Sensing Event, Shanghai, China, 20–22 May 2009; pp. 1–7. [Google Scholar]
  8. Fan, B.; Huo, C.; Pan, C.; Kong, Q. Registration of optical and SAR satellite images by exploring the spatial relationship of the improved SIFT. IEEE Geosci. Remote Sens. Lett. 2013, 10, 657–661. [Google Scholar] [CrossRef]
  9. Wang, X.; Li, Y.; Wei, H.; Liu, F. An ASIFT-based local registration method for satellite imagery. Remote Sens. 2015, 7, 7044–7061. [Google Scholar] [CrossRef]
  10. Lowe, D.G. Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vis. 2004, 60, 91–110. [Google Scholar] [CrossRef]
  11. Wang, Q.; Zhu, G.; Yuan, Y. Statistical quantization for similarity search. Comput. Vis. Image Underst. 2014, 124, 22–30. [Google Scholar] [CrossRef]
  12. Goncalves, H.; Corte-Real, L.; Goncalves, J.A. Automatic image registration through image segmentation and SIFT. IEEE Trans. Geosci. Remote Sens. 2011, 49, 2589–2600. [Google Scholar] [CrossRef]
  13. Ma, J.; Chan, J.C.W.; Canters, F. Fully automatic subpixel image registration of multiangle CHRIS/Proba data. IEEE Trans. Geosci. Remote Sens. 2010, 48, 2829–2839. [Google Scholar]
  14. Fischler, M.A.; Bolles, R.C. Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography. Commun. ACM 1981, 24, 381–395. [Google Scholar] [CrossRef]
  15. Zhou, W.; Lu, Y.; Li, H.; Song, Y.; Tian, Q. Spatial coding for large scale partial-duplicate web image search. In Proceedings of the 18th ACM International Conference on Multimedia, Firenze, Italy, 25–29 October 2010; pp. 511–520. [Google Scholar]
  16. Zhou, W.; Li, H.; Lu, Y.; Tian, Q. Large scale image search with geometric coding. In Proceedings of the 19th ACM International Conference on Multimedia, Scottsdale, AZ, USA, 28 November–1 December 2011; pp. 1349–1352. [Google Scholar]
  17. Zheng, L.; Wang, S. Visual phraselet: Refining spatial constraints for large scale image search. IEEE Signal Proc. Lett. 2013, 20, 391–394. [Google Scholar] [CrossRef]
  18. Aguilar, W.; Frauel, Y.; Escolano, F.; Martinez-Perez, M.E.; Espinosa-Romero, A.; Lozano, M.A. A robust graph transformation matching for non-rigid registration. Image Vis. Comput. 2009, 27, 897–910. [Google Scholar] [CrossRef]
  19. Shi, Q.; Ma, G.; Zhang, F.; Chen, W.; Qin, Q.; Duo, H. Robust image registration using structure features. IEEE Geosci. Remote Sens. Lett. 2014, 11, 2045–2049. [Google Scholar]
  20. Izadi, M.; Saeedi, P. Robust weighted graph transformation matching for rigid and nonrigid image registration. IEEE Trans. Image Proc. 2012, 21, 4369–4382. [Google Scholar] [CrossRef] [PubMed]
  21. Liu, Z.; An, J.; Jing, Y. A simple and robust feature point matching algorithm based on restricted spatial order constraints for aerial image registration. IEEE Trans. Geosci. Remote Sens. 2012, 50, 514–527. [Google Scholar] [CrossRef]
  22. Zhang, K.; Li, X.Z.; Zhang, J.X. A robust point-matching algorithm for remote sensing image registration. IEEE Geosci. Remote Sens. Lett. 2014, 11, 469–473. [Google Scholar] [CrossRef]
  23. Jiang, J.; Shi, X. A Robust Point-Matching Algorithm Based on Integrated Spatial Structure Constraint for Remote Sensing Image Registration. IEEE Geosci. Remote Sens. Lett. 2016, 13, 1716–1720. [Google Scholar] [CrossRef]
  24. Zhao, M.; An, B.; Wu, Y.; Van Luong, H.; Kaup, A. RFVTM: A Recovery and Filtering Vertex Trichotomy Matching for Remote Sensing Image Registration. IEEE Trans. Geosci. Remote Sens. 2017, 55, 375–391. [Google Scholar] [CrossRef]
  25. Battiato, S.; Bruna, A.R.; Puglisi, G. A robust block-based image/video registration approach for mobile imaging devices. IEEE Trans. Multimed. 2010, 12, 622–635. [Google Scholar] [CrossRef]
  26. Elibol, A. A Two-Step Global Alignment Method for Feature-Based Image Mosaicing. Math. Comput. Appl. 2016, 21, 30. [Google Scholar] [CrossRef]
  27. Adams, A.; Gelfand, N.; Pulli, K. Viewfinder Alignment. In Computer Graphics Forum; Blackwell Publishing Ltd.: Oxford, UK, 2008; Volume 27, pp. 597–606. [Google Scholar]
  28. Learned-Miller, E.G. Data driven image models through continuous joint alignment. IEEE Trans. Pattern Anal. Mach. Intell. 2006, 28, 236–250. [Google Scholar] [CrossRef] [PubMed]
  29. Cox, M.; Sridharan, S.; Lucey, S.; Cohn, J. Least squares congealing for unsupervised alignment of images. In Proceedings of the 2008 IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2008), Anchorage, AK, USA, 23–28 June 2008; pp. 1–8. [Google Scholar]
  30. Vedaldi, A.; Guidi, G.; Soatto, S. Joint data alignment up to (lossy) transformations. In Proceedings of the 2008 IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2008), Anchorage, AK, USA, 23–28 June 2008; pp. 1–8. [Google Scholar]
  31. Tang, F.; Zou, X.; Yang, H.; Weng, F. Estimation and correction of geolocation errors in FengYun-3C Microwave Radiation Imager Data. IEEE Trans. Geosci. Remote Sens. 2016, 54, 407–420. [Google Scholar] [CrossRef]
  32. Wang, Q.; Zou, C.; Yuan, Y.; Lu, H.; Yan, P. Image registration by normalized mapping. Neurocomputing 2013, 101, 181–189. [Google Scholar] [CrossRef]
  33. Wang, Q.; Yuan, Y.; Yan, P.; Li, X. Saliency detection by multiple-instance learning. IEEE Trans. Cybern. 2013, 43, 660–672. [Google Scholar] [CrossRef] [PubMed]
  34. Dollár, P.; Zitnick, C.L. Structured forests for fast edge detection. In Proceedings of the 2013 IEEE International Conference on Computer Vision (ICCV), Sydney, Australia, 1–8 December 2013; pp. 1841–1848. [Google Scholar]
  35. Gao, J.; Kim, S.J.; Brown, M.S. Constructing image panoramas using dual-homography warping. In Proceedings of the 2011 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Colorado Springs, CO, USA, 20–25 June 2011; pp. 49–56. [Google Scholar]
  36. Zaragoza, J.; Chin, T.J.; Brown, M.S.; Suter, D. As-projective-as-possible image stitching with moving DLT. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Portland, OR, USA, 23–28 June 2013; pp. 2339–2346. [Google Scholar]
37. Chang, C.H.; Sato, Y.; Chuang, Y.Y. Shape-preserving half-projective warps for image stitching. In Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Columbus, OH, USA, 23–28 June 2014; pp. 3254–3261. [Google Scholar]
Figure 1. The landmarks and GSMS images in the southern coastal area of Thailand and their initial matching results. The points in circles are outliers. (a) landmarks; (b) edge probability image; (c) edge binary image; (d) initial matching.
Figure 2. Illustration of mismatched features and many-to-one matched features. (a) mismatched features; (b) many-to-one matched features.
Figure 3. The details of the initial matching and feature refinement in the southern coastal area of Thailand. (a) initial matching; (b) feature refinement; (c) initial matching; and (d) feature refinement.
Figure 4. Performance of local feature matching with different values of K.
Figure 5. Mean precision and recall values of feature refinement with different values of n (the number of candidate matched pairs nearest the seed matched pair). (a) mean precision; and (b) mean recall.
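The refinement step evaluated in Figure 5 keeps a candidate match only when it is spatially consistent with the n candidate matched pairs nearest to it. A minimal sketch of such a neighborhood spatial consistency check, in which a pair is retained if its displacement agrees with the median displacement of its n nearest pairs (the function name, median test, and tolerance are illustrative assumptions, not the authors' exact NSCM algorithm):

```python
import numpy as np

def nscm_refine(src_pts, dst_pts, n=5, tol=2.0):
    """Keep a matched pair if its displacement vector stays within `tol`
    pixels of the median displacement of its n nearest candidate pairs.
    Illustrative sketch only; parameters are hypothetical."""
    src = np.asarray(src_pts, dtype=float)
    dst = np.asarray(dst_pts, dtype=float)
    disp = dst - src                          # per-pair displacement vectors
    keep = np.zeros(len(src), dtype=bool)
    for i in range(len(src)):
        d = np.linalg.norm(src - src[i], axis=1)
        nbrs = np.argsort(d)[1:n + 1]         # n nearest neighboring pairs
        med = np.median(disp[nbrs], axis=0)   # robust local displacement
        keep[i] = np.linalg.norm(disp[i] - med) <= tol
    return keep
```

Because the median is robust to a minority of outliers among the neighbors, a single grossly mismatched pair is rejected while consistent pairs survive.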
Figure 6. Performance of eight algorithms on 25 images. NSCM is competitive with RANSAC, GTM, WGTM, RSOC, KNN-TAR, ISSC and RFVTM in precision, recall and RMSE. (a) precision; (b) recall; (c) RMSE.
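The comparison in Figure 6 uses precision, recall, and RMSE over the retained matches. A sketch of how these standard criteria can be computed, given the indices of retained matches and of the truly correct matches (the definitions are the conventional ones; function and argument names are hypothetical):

```python
import numpy as np

def match_metrics(kept_idx, correct_idx, pred_pts, gt_pts):
    """precision = correct retained matches / all retained matches;
    recall = correct retained matches / all correct matches available;
    RMSE = root-mean-square residual (pixels) over retained matches.
    Illustrative definitions, not the authors' code."""
    kept = set(kept_idx)
    correct = set(correct_idx)
    tp = len(kept & correct)
    precision = tp / len(kept) if kept else 0.0
    recall = tp / len(correct) if correct else 0.0
    idx = sorted(kept)
    if idx:
        resid = np.asarray(pred_pts, float)[idx] - np.asarray(gt_pts, float)[idx]
        rmse = float(np.sqrt(np.mean(np.sum(resid ** 2, axis=1))))
    else:
        rmse = float("nan")
    return precision, recall, rmse
```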
Figure 7. Mean precision, recall and RMSE values of polynomial fitting with different values of m (the order of the polynomial function). (a) mean precision; (b) mean recall; and (c) mean RMSE.
Figure 8. Pixel alignment results.
Figure 9. 3D earth.
Table 1. Mean precision, recall and RMSE values in NSCM, RANSAC, GTM, WGTM, RSOC, KNN-TAR, ISSC and RFVTM.
Evaluation Criteria    NSCM    RANSAC    GTM      WGTM     RSOC     KNN-TAR    ISSC    RFVTM
precision (%)          96.2    95.2      95.3     96.0     94.2     95.7       95.6    95.2
recall (%)             50.8    61.9      49.5     67.4     61.8     42.8       47.4    63.7
RMSE (pixel)           1.14    1.18      1.15     1.16     1.40     1.34       1.38    1.44
time (s)               0.48    1.08      18.12    16.21    10.92    2.91       2.74    1.89
Table 2. Mean precision, recall and RMSE values in Interpolant fitting, Lowess fitting and Polynomial fitting with m set to 3.
Evaluation Criteria    Interpolant Fitting    Lowess Fitting    Polynomial Fitting
precision (%)          92.9                   92.9              93.0
recall (%)             68.2                   57.5              91.2
RMSE (pixel)           2.33                   2.45              2.06
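The polynomial fitting evaluated in Table 2 can be sketched as a bivariate least-squares fit of order m = 3 mapping pixel coordinates to latitude and longitude from the registered landmarks. The helper names and the use of numpy.linalg.lstsq are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def poly_design(x, y, m=3):
    """Bivariate polynomial design matrix with all terms x^i * y^j of
    total order i + j <= m (m = 3 follows Table 2)."""
    cols = [x ** i * y ** j for i in range(m + 1) for j in range(m + 1 - i)]
    return np.stack(cols, axis=1)

def fit_geolocation(col_px, row_px, lat, lon, m=3):
    """Least-squares fit of pixel (column, row) -> (latitude, longitude)
    from registered landmarks. Illustrative sketch only."""
    A = poly_design(np.asarray(col_px, float), np.asarray(row_px, float), m)
    coef_lat, *_ = np.linalg.lstsq(A, np.asarray(lat, float), rcond=None)
    coef_lon, *_ = np.linalg.lstsq(A, np.asarray(lon, float), rcond=None)
    return coef_lat, coef_lon

def predict_geolocation(col_px, row_px, coef_lat, coef_lon, m=3):
    """Evaluate the fitted model to geolocate arbitrary (non-landmark) pixels."""
    A = poly_design(np.asarray(col_px, float), np.asarray(row_px, float), m)
    return A @ coef_lat, A @ coef_lon
```

Once the coefficients are fitted on the landmarks, predict_geolocation assigns a latitude and longitude to every pixel in the image, which is the global alignment step the paper describes.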
Table 3. Mean precision, recall and RMSE values before and after pixel alignment.
Evaluation Criteria    Before    After
precision (%)          96.2      93.0
recall (%)             50.8      91.2
RMSE (pixel)           1.14      2.06
Zeng, D.; Fang, R.; Ge, S.; Li, S.; Zhang, Z. Geometry-Based Global Alignment for GSMS Remote Sensing Images. Remote Sens. 2017, 9, 587. https://doi.org/10.3390/rs9060587