# Remote Sensing Image Registration Using Multiple Image Features


## Abstract


## 1. Introduction

## 2. Methodology

#### 2.1. Mixture-Feature Gaussian Mixture Model (MGMM)

#### 2.2. Combination and Complementation of Multiple Image Features

#### 2.2.1. Euclidean Distance

#### 2.2.2. Shape Context (SC)

#### 2.2.3. SIFT Distance

#### 2.2.4. Multiple Image Feature Based Correspondence Estimation

#### 2.3. Geometric Constraint for L2E Based Energy Optimization

#### 2.3.1. ${L}_{2}E$

#### 2.3.2. Motion Coherent Based Geometric Constraint

#### 2.4. Main Process

#### 2.4.1. Extraction of the Feature Point Sets

#### 2.4.2. Feature Point Set Registration

**Correspondence estimation:** The pairwise global and local geometric structure discrepancies $\mathbf{G}$ and $\mathbf{L}$, and the SIFT distance $\mathbf{S}$, are obtained from Equations (4)–(6), respectively. The posterior probability matrix, also known as the correspondence matrix, is then written as:
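Equations (4)–(6) are not reproduced in this excerpt, so the following Python sketch only illustrates the general form of such a fused posterior: the three discrepancy matrices are combined into a single cost, turned into Gaussian affinities, and normalized against a uniform outlier component. The weights `tau`, `gamma` and the scale `sigma2` are hypothetical parameters, not the paper's.

```python
import numpy as np

def posterior_matrix(G, L, S, tau=0.5, gamma=0.9, sigma2=1.0):
    """Hypothetical sketch: fuse global (G), local (L) and SIFT (S)
    discrepancy matrices into a posterior/correspondence matrix.
    tau weights geometry against intensity; gamma is the inlier weight."""
    D = tau * (G + L) / 2.0 + (1.0 - tau) * S      # combined discrepancy
    K = np.exp(-D / (2.0 * sigma2))                # Gaussian affinities
    outlier = (1.0 - gamma) / K.shape[1]           # uniform outlier mass
    P = gamma * K / K.shape[1]
    return P / (P.sum(axis=1, keepdims=True) + outlier)
```

Each row of the result sums to at most one; the leftover mass is the probability that the corresponding point is an outlier.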

**Transformation updating:** The Riesz representation theorem states that if $\psi $ is a bounded linear functional on a Hilbert space $\mathcal{H}$, then there is a unique vector $\nu $ in $\mathcal{H}$ such that $\psi (t)=\langle t,\nu \rangle $ for all $t \in \mathcal{H}$. The cumbersome mapping problem thus reduces to finding the unique vector $\nu $. This motivates us to model the non-rigid transformation $\mathcal{T}$ by requiring it to lie within a reproducing kernel Hilbert space (RKHS).
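A common concrete realization of this RKHS formulation (used, for example, by CPD-style methods) represents the transformation as $\mathcal{T}(\mathbf{Y})=\mathbf{Y}+\mathbf{G}\mathbf{W}$, with $\mathbf{G}$ a Gaussian Gram matrix over the control points, and solves a regularized weighted least-squares problem for the coefficients $\mathbf{W}$. The sketch below follows that pattern; the kernel width `beta` and regularization weight `lam` are illustrative values, not the paper's settings.

```python
import numpy as np

def gaussian_gram(Y, beta=2.0):
    # Gram matrix of the Gaussian RKHS kernel over control points Y (n x d)
    d2 = ((Y[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * beta ** 2))

def update_transform(X, Y, P, beta=2.0, lam=3.0):
    """Sketch of one M-step: find W minimizing the P-weighted fitting
    error of Y + G W against X plus a smoothness penalty lam * tr(W^T G W).
    P is the posterior/correspondence matrix from the estimation step."""
    n = Y.shape[0]
    G = gaussian_gram(Y, beta)
    dP = np.diag(P.sum(axis=1))                     # row sums of P
    # normal equations of the weighted, regularized least-squares problem
    W = np.linalg.solve(dP @ G + lam * np.eye(n), P @ X - dP @ Y)
    return Y + G @ W
```

When the correspondence is already perfect (P is the identity and X equals Y), the solved coefficients vanish and the transformation is the identity, as expected.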

**Algorithm 1:** Feature matching using multiple features for remote sensing registration

#### 2.5. Image Transformation and Resampling

#### 2.6. Method Analysis

#### 2.6.1. Computational Complexity

#### 2.6.2. Parametric Setting

## 3. Experiments and Results

#### 3.1. Evaluation Criterion

#### 3.2. Results on Feature Matching

#### 3.3. Results on Image Registration

#### 3.4. Reliability and Availability Examination of Our Method

## 4. Conclusions

## Acknowledgments

## Author Contributions

## Conflicts of Interest

## References


**Figure 1.** Illustration of the SC. (**a**) and (**b**): diagrams of the log-polar histogram bins centered at ${\mathbf{x}}_{i}$ and ${\mathbf{y}}_{j}$ used in computing the shape contexts. (**c**) and (**d**): each shape context, e.g., ${h}_{i}$ or ${h}_{j}$, is a log-polar histogram of the coordinates of the rest of the point set measured using the centered point as the origin, where darker denotes a larger value.
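For reference, a minimal sketch of computing one such log-polar histogram. The bin counts (5 radial by 12 angular) and the log-radius range are common defaults from the shape-context literature, not necessarily the values used in this paper.

```python
import numpy as np

def shape_context(points, i, n_r=5, n_theta=12):
    """Log-polar histogram of the other points around points[i]:
    bin by log-distance (n_r bins) and angle (n_theta bins)."""
    diff = np.delete(points, i, axis=0) - points[i]
    d = np.linalg.norm(diff, axis=1)
    theta = np.arctan2(diff[:, 1], diff[:, 0])
    d = d / d.mean()                                  # scale invariance
    r_edges = np.logspace(np.log10(0.125), np.log10(2.0), n_r + 1)
    r_bin = np.clip(np.searchsorted(r_edges, d) - 1, 0, n_r - 1)
    t_bin = ((theta + np.pi) / (2 * np.pi) * n_theta).astype(int) % n_theta
    hist = np.zeros((n_r, n_theta))
    np.add.at(hist, (r_bin, t_bin), 1)                # accumulate counts
    return hist
```

Every other point falls into exactly one bin, so the histogram entries sum to the number of remaining points.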

**Figure 2.** Illustration of how the SIFT feature descriptor is obtained. (**a**): Repeatedly convolving the initial image with Gaussians to produce the set of scale-space images. (**b**): Subtracting adjacent Gaussian images to produce the difference-of-Gaussian images. (**c**): Extrema detection by comparing a pixel (marked with a red circle) to its 26 neighbors in $3\times 3$ regions at the current and adjacent scales (marked with green circles). (**d**): Feature descriptor creation by computing the gradient magnitude and orientation at each image sample point. A Gaussian window, indicated by the overlaid circle, is used for weighting. (**e**): Orientation histogram accumulation by summarizing the contents over $4\times 4$ subregions (a $2\times 2$ subregion is shown for convenience), with the length of each arrow corresponding to the sum of the gradient magnitudes near that direction within the region, generating $4\times 4\times 8$ descriptors, where 8 is the number of directions.

**Figure 3.** The robustness comparison between ${L}_{2}E$ and MLE. We estimate the mean of a normally distributed sample $\mathcal{N}(0,1)$ contaminated with extra samples from $\mathcal{N}(5,1)$. The outlier-to-inlier ratios are $5\%$, $10\%$, $20\%$, $30\%$ and $50\%$. The vertical dashed lines indicate the extrema. ${L}_{2}E$ has a global minimum at approximately 0 and a local minimum at approximately 5, which conform to the inlier and outlier distributions, respectively. In contrast, the deviation of the MLE increases as the ratio grows.
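The experiment in the caption can be reproduced in a few lines: minimize the closed-form ${L}_{2}E$ criterion for a unit-variance normal over a grid of candidate means, and compare with the MLE, which for this model is simply the sample mean. The contamination ratio below (30%) is one of the listed settings.

```python
import numpy as np

def l2e_normal(mu, x, sigma=1.0):
    """L2E criterion for a N(mu, sigma^2) model with known sigma:
    the integral of the squared model density minus twice the mean
    model density evaluated at the data."""
    dens = np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    return 1.0 / (2.0 * sigma * np.sqrt(np.pi)) - 2.0 * dens.mean()

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0, 1, 700),
                    rng.normal(5, 1, 300)])          # 30% contamination
grid = np.linspace(-2, 7, 901)
mu_l2e = grid[np.argmin([l2e_normal(m, x) for m in grid])]
mu_mle = x.mean()                                    # MLE of the mean
```

The ${L}_{2}E$ estimate stays close to 0 despite the contamination, while the MLE is pulled toward the outlier cluster at 5, matching the behavior shown in the figure.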

**Figure 4.** Illustration of the velocity field. (**a**): two given point pairs. (**b**): a coherent velocity field. (**c**): a velocity field that is less coherent.

**Figure 5.** The summary of our method. (**a**): Extracting the putative corresponding set ${\{{\mathbf{x}}_{i},{\mathbf{y}}_{i}\}}_{i=1}^{n}$ and SIFT descriptor set ${\{{\mathbf{u}}_{i},{\mathbf{v}}_{i}\}}_{i=1}^{n}$ using the SIFT algorithm, and obtaining two types of discrepancies, i.e., the global and local geometric structure discrepancies $\mathbf{G}$ and $\mathbf{L}$, and the intensity information discrepancy $\mathbf{S}$. (**b**): Substituting the combined features into the optimization framework and obtaining the transformed point set $\widehat{\mathbf{Y}}$; note that a loop is included. (**c**): Image registration based on the backward approach.

**Figure 6.** Illustration of the image transformation and resampling. (**a**) Computing a TPS transformation model $\mathcal{P}$ which maps $\widehat{\mathbf{Y}}$ back onto $\mathbf{Y}$; meanwhile, constructing a grid ${\mathbf{\Gamma}}^{r}$ of the same size as ${I}^{r}$; (**b**) Mapping ${\mathbf{\Gamma}}^{r}$ using $\mathcal{P}$, obtaining ${\widehat{\mathbf{\Gamma}}}^{r}$; (**c**) Limiting indexes to the boundaries of ${\widehat{\mathbf{\Gamma}}}^{\mathbf{r}}\cap {\mathbf{\Gamma}}^{\mathbf{s}}$; (**d**) Getting intensities from ${I}^{s}$ within the boundaries to generate ${I}^{t}$, where each pixel in ${I}^{t}$ is determined by bicubic interpolation.
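Steps (b)–(d) amount to backward warping: each output pixel is mapped through $\mathcal{P}$ into the sensed image and its intensity is interpolated there. A dependency-free sketch follows, with a user-supplied `mapping` function standing in for the fitted TPS model; bilinear interpolation is used instead of the paper's bicubic to keep the sketch short.

```python
import numpy as np

def backward_warp(sensed, mapping, out_shape):
    """Backward resampling sketch. For every pixel of the output grid
    (same size as the reference image), `mapping` gives the source
    coordinates in the sensed image, and the intensity is pulled back
    by bilinear interpolation. Coordinates falling outside the sensed
    image are limited to its boundaries."""
    rows, cols = np.mgrid[0:out_shape[0], 0:out_shape[1]]
    r, c = mapping(rows.astype(float), cols.astype(float))
    r = np.clip(r, 0, sensed.shape[0] - 1)           # limit indexes
    c = np.clip(c, 0, sensed.shape[1] - 1)
    r0, c0 = np.floor(r).astype(int), np.floor(c).astype(int)
    r1 = np.minimum(r0 + 1, sensed.shape[0] - 1)
    c1 = np.minimum(c0 + 1, sensed.shape[1] - 1)
    fr, fc = r - r0, c - c0
    top = sensed[r0, c0] * (1 - fc) + sensed[r0, c1] * fc
    bot = sensed[r1, c0] * (1 - fc) + sensed[r1, c1] * fc
    return top * (1 - fr) + bot * fr
```

With the identity mapping the warp reproduces the input image exactly, which is a quick sanity check for the index handling.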

**Figure 7.** Demonstration of the advantage of the MGMM. Red *: the target feature point set $\mathbf{X}$. Green ∘: the estimated corresponding point set ${\mathbf{X}}^{*}$. Blue +: the source feature point set $\mathcal{T}\left(\mathbf{Y}\right)$. Upper and lower rows: the registration processes of CPD and our method, respectively. $\mathbf{X}$ and $\mathbf{Y}$ are extracted from a remote sensing image pair.

**Figure 8.** Examination of the availability of the MGMM. Red *: the target feature point set $\mathbf{X}$. Blue ∘: the source feature point set $\mathcal{T}\left(\mathbf{Y}\right)$. (**a**,**b**): registration results of GMMREG and our method, respectively. An obvious deviation exists in the result of GMMREG.

**Figure 9.** Feature matching demonstrations on eight typical image pairs of datasets (i) and (ii). For each image pair, the rows from the first to the fourth are: the image pair, and the feature matching results of ours, SIFT and CPD, respectively. Blue lines indicate true positives and true negatives; red lines indicate false positives and false negatives. The PRs of the three methods on each image pair are listed from (**a**) to (**h**) as follows. Ours: **99.00**, **99.29**, **97.31**, **97.40**, **99.32**, **99.62**, **98.31**, **98.89**; SIFT: 85.68, 85.18, 45.12, 60.26, 82.43, 80.34, 82.43, 83.55; CPD: 86.86, 88.14, 85.02, 88.24, 87.22, 85.55, 89.76, 85.23.

**Figure 10.** Registration examples on four typical image pairs from dataset (i). The first two rows: the sensed and reference images, with the yellow crosses denoting the landmarks. Rows from the third onward are the registration results of ours, SIFT, SURF, CPD, RSOC and GLMDTPS. For each method, the registration results are shown using two rows, with the upper row showing the transformed image and the lower row showing the $5\times 5$ checkerboard. The registration errors are highlighted with red rectangles.

**Figure 11.** Registration examples on four typical image pairs from dataset (ii). The first two rows: the sensed and reference images, with the yellow crosses denoting the landmarks. Rows from the third onward are the registration results of ours, SIFT, SURF, CPD, RSOC and GLMDTPS. For each method, the registration results are shown using two rows, with the upper row showing the transformed image and the lower row showing the $5\times 5$ checkerboard. The registration errors are highlighted with red rectangles.

**Figure 12.** Registration examples on the seven typical satellite image pairs. From (**a**) to (**g**): Berlin, Las Vegas, Mekong River, Paris, WuHan, Washington and Hawaii. Columns from left to right: the sensed, reference and transformed images, the $5\times 5$ checkerboards, and the feature matching results. Blue lines indicate true positives and true negatives; red lines indicate false positives and false negatives.

| Series | Criteria | Compared Methods | Datasets Used |
|---|---|---|---|
| I | PR | SIFT, CPD, Ours | (i), (ii) |
| II | All | All | (i), (ii) |
| III | All | Ours | (iii) |

| Type 1 | Type 2 | Type 3 |
|---|---|---|
| 87 to 505 | 30 | 35 to 165 |

**Table 3.** Experimental results of series (I). Quantitative comparisons of the mean PR of the Type 1 methods are carried out. Bold fonts indicate the best results. All units are in percentage.

| Dataset | SIFT | CPD | Ours |
|---|---|---|---|
| (i) | 78.30 | 90.81 | **98.25** |
| (ii) | 72.78 | 90.17 | **97.15** |

**Table 4.** Experimental results of series (II). Quantitative comparisons on image registration, measured using the mean RMSE, MAE and SD, are carried out. Bold fonts indicate the best results. All units are in pixels.

| | Dataset | SIFT | SURF | CPD | GLMDTPS | RSOC | Ours |
|---|---|---|---|---|---|---|---|
| RMSE | (i) | 13.5287 | 7.9837 | 3.2386 | 3.0152 | 2.2737 | **1.0171** |
| | (ii) | 11.4466 | 7.0627 | 7.2645 | 5.5991 | 4.1743 | **1.4331** |
| MAE | (i) | 16.0080 | 12.2803 | 7.5404 | 7.1459 | 6.4448 | **4.0271** |
| | (ii) | 14.2989 | 9.8411 | 10.3778 | 9.1102 | 7.3585 | **3.9188** |
| SD | (i) | 14.1826 | 10.2241 | 5.8594 | 5.4353 | 4.8013 | **3.1957** |
| | (ii) | 11.9613 | 8.2523 | 4.3619 | 7.6531 | 5.7844 | **3.0021** |

**Table 5.** Experimental results of series (III). Quantitative tests on feature matching and image registration are carried out to examine the availability and robustness of our method using the PR, RMSE, MAE and SD. (a) to (g): results on the corresponding pairs of Figure 12. Mean: the mean of the results on all image pairs of dataset (iii). For the PR the units are in percentage; for the RMSE, MAE and SD the units are in pixels.

| | (a) | (b) | (c) | (d) | (e) | (f) | (g) | Mean |
|---|---|---|---|---|---|---|---|---|
| PR | 96.88 | 96.77 | 95.97 | 95.77 | 95.09 | 98.19 | 95.57 | 96.54 |
| RMSE | 0.8358 | 1.3963 | 1.9556 | 1.4272 | 0.7743 | 1.4342 | 0.3171 | 1.1628 |
| MAE | 2.1458 | 4.0208 | 2.9097 | 1.9317 | 2.5778 | 2.3542 | 2.2958 | 2.6051 |
| SD | 1.8562 | 2.5498 | 2.5171 | 1.8061 | 2.1429 | 2.0098 | 2.0717 | 2.1362 |

© 2017 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Yang, K.; Pan, A.; Yang, Y.; Zhang, S.; Ong, S.H.; Tang, H. Remote Sensing Image Registration Using Multiple Image Features. *Remote Sens.* **2017**, *9*, 581.
https://doi.org/10.3390/rs9060581
