Fast Aerial Image Geolocalization Using the Projective-Invariant Contour Feature
Abstract
1. Introduction
2. Materials and Methods
2.1. Projective-Invariant Contour Feature
2.1.1. Definition of the PICF
- (1) $p_i$ is a point sampled on the road contour around the center point $c$;
- (2) $\theta_i = \operatorname{atan2}(y_i - y_c,\, x_i - x_c)$, where $\theta_i$ is the angle of the vector $p_i - c$ relative to the X axis;
- (3) the sample points are sorted by angle in ascending order: if $i < j$, then $\theta_i < \theta_j$.
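The angle and ordering conditions above can be sketched in a few lines (a NumPy-based illustration; the function names and sample points are mine, not the paper's):

```python
import numpy as np

def contour_point_angles(center, points):
    """Angle of each vector (point - center) relative to the X axis, in [0, 2*pi)."""
    v = np.asarray(points, dtype=float) - np.asarray(center, dtype=float)
    theta = np.arctan2(v[:, 1], v[:, 0])
    return np.mod(theta, 2 * np.pi)

def sort_by_angle(center, points):
    """Order contour points counter-clockwise so that i < j implies theta_i < theta_j."""
    theta = contour_point_angles(center, points)
    order = np.argsort(theta)
    return np.asarray(points, dtype=float)[order], theta[order]

center = (0.0, 0.0)
pts = [(1, 0), (0, 1), (-1, 0), (0, -1), (1, 1)]
sorted_pts, angles = sort_by_angle(center, pts)  # counter-clockwise, starting from (1, 0)
```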
2.1.2. Equivariance of the PICF
2.1.3. The PICF Descriptor
- (1) Sample contour points: Divide the circular area centered at the point $c$ into $N$ sectors with angle step $\Delta\theta = 2\pi/N$; for each sector $A_i$, find the contour point closest to $c$ and take it as the sample point $p_i$ on the contour in direction $\theta_i$; compute the relative coordinate of $p_i$ to the point $c$ as $v_i = p_i - c$; if no contour point exists in the sector, then set $v_i = \mathbf{0}$.
- (2) Estimate the two candidate alignment transformations $H_1$ and $H_2$ from the sampled contour points.
- (3) Align contour points: Project all contour points with $H_1$ and $H_2$, respectively, and obtain two transformed point sets $P_1$ and $P_2$.
- (4) Compute the PICF descriptor: Convert the coordinates of the points in each transformed point set to polar coordinates and sort the points by angle in ascending order, giving $\{(\rho_j, \phi_j)\}$. For each angular bin $B_k$, compute the average distance of the contour points in the bin to the center point as $d_k = \frac{1}{|B_k|}\sum_{j \in B_k}\rho_j$, where $k = 1, \ldots, M$ and $M$ is the dimension of the descriptor. Finally, the two possible PICF descriptors of the point $c$ in the point set $P$ are obtained as $f_1 = (d_1^{(1)}, \ldots, d_M^{(1)})$ and $f_2 = (d_1^{(2)}, \ldots, d_M^{(2)})$.
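The sector sampling and the polar binning described above can be sketched as follows (a minimal NumPy illustration under my own notation and defaults; the projective alignment step is omitted, so this is not the full descriptor):

```python
import numpy as np

def sample_contour_points(center, contour, n_sectors=36):
    """In each angular sector around `center`, keep the contour point closest
    to the center; sectors containing no contour point get the zero vector."""
    c = np.asarray(center, dtype=float)
    pts = np.asarray(contour, dtype=float)
    v = pts - c
    theta = np.mod(np.arctan2(v[:, 1], v[:, 0]), 2 * np.pi)
    sector = (theta / (2 * np.pi / n_sectors)).astype(int)
    dist = np.linalg.norm(v, axis=1)
    samples = np.zeros((n_sectors, 2))
    for i in range(n_sectors):
        mask = sector == i
        if mask.any():
            idx = np.flatnonzero(mask)[np.argmin(dist[mask])]
            samples[i] = v[idx]
    return samples

def polar_bin_descriptor(samples, n_bins=18):
    """Average distance to the center in each angular bin (the d_k values)."""
    theta = np.mod(np.arctan2(samples[:, 1], samples[:, 0]), 2 * np.pi)
    rho = np.linalg.norm(samples, axis=1)
    valid = rho > 0                      # skip empty-sector placeholders
    bins = (theta / (2 * np.pi / n_bins)).astype(int)
    desc = np.zeros(n_bins)
    for k in range(n_bins):
        mask = valid & (bins == k)
        if mask.any():
            desc[k] = rho[mask].mean()
    return desc
```

On a unit-circle contour centered at the origin, every bin average comes out to 1, which is a quick sanity check of the binning.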
2.1.4. Discussion
2.2. Matching PICFs with Geometric Coherence
2.2.1. Geometric Coherence between PICF Correspondences
2.2.2. Refining Initial Matches with Geometric Coherence
Algorithm 1: PICF matching algorithm.
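One plausible sketch of geometric-consistent matching — mutual nearest neighbours on descriptors, then a pairwise distance-ratio coherence filter — is the following. The function names, the coherence criterion, and the tolerance are illustrative assumptions, not the paper's Algorithm 1:

```python
import numpy as np

def mutual_nearest_matches(desc_q, desc_r):
    """Initial matches: descriptor pairs that are each other's nearest neighbour."""
    d = np.linalg.norm(desc_q[:, None, :] - desc_r[None, :, :], axis=2)
    nn_q = d.argmin(axis=1)           # best reference index for each query
    nn_r = d.argmin(axis=0)           # best query index for each reference
    return [(i, j) for i, j in enumerate(nn_q) if nn_r[j] == i]

def coherent_matches(matches, pts_q, pts_r, tol=0.2):
    """Keep correspondences whose pairwise distance ratio (query vs. reference)
    agrees with the median ratio -- a crude geometric-coherence check."""
    ratios = {}
    for a in range(len(matches)):
        for b in range(a + 1, len(matches)):
            iq, ir = matches[a]
            jq, jr = matches[b]
            dq = np.linalg.norm(pts_q[iq] - pts_q[jq])
            dr = np.linalg.norm(pts_r[ir] - pts_r[jr])
            if dr > 0:
                ratios[(a, b)] = dq / dr
    if not ratios:
        return matches
    med = np.median(list(ratios.values()))
    good = set()
    for (a, b), r in ratios.items():
        if abs(r - med) <= tol * med:
            good.update((a, b))
    return [matches[k] for k in sorted(good)]
```

With a mostly correct match set, an outlier correspondence disagrees with the median distance ratio in nearly all of its pairs and is dropped.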
2.3. PICF-Based Fast Geolocalization
- (1) If the number of matching pairs is larger than a threshold $\tau$ (20 in this paper), an initial homography transformation is estimated from all the matching pairs using the RANSAC method and then refined with plain ICP. The refined transformation is then verified via the alignment accuracy between the query road map and the reference road map.
- (2) If fewer than $\tau$ feature correspondences are obtained, or the first strategy fails to recover the correct transformation, the obtained pairs are sorted by their matching scores. A homography transformation is then estimated from a single correspondence and refined using all the road points in the query road map with a local-to-global iterative closest point algorithm; the refined transformation is again verified via the alignment accuracy. Each correspondence in the sorted pair set is tested until a homography transformation that aligns the query road map with the reference one accurately enough is found. If the correct transformation is still not recovered after testing all the correspondences obtained with the geometric-consistent matching algorithm, the remaining correspondences from the initial matching are checked in the same manner.
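Both strategies end with the same verification step: warp the query road map with the candidate homography and measure how well it aligns with the reference road map. A minimal sketch of such a check (the inlier-fraction criterion and the thresholds are my assumptions):

```python
import numpy as np

def apply_homography(H, pts):
    """Apply a 3x3 homography to an (n, 2) array of points."""
    pts_h = np.c_[pts, np.ones(len(pts))] @ H.T
    return pts_h[:, :2] / pts_h[:, 2:3]

def alignment_accuracy(H, query_pts, ref_pts, dist_thresh=3.0):
    """Fraction of warped query road points lying within `dist_thresh`
    of some reference road point."""
    warped = apply_homography(H, np.asarray(query_pts, dtype=float))
    ref = np.asarray(ref_pts, dtype=float)
    d = np.linalg.norm(warped[:, None, :] - ref[None, :, :], axis=2)
    return float((d.min(axis=1) <= dist_thresh).mean())

def verify(H, query_pts, ref_pts, min_accuracy=0.8):
    """Accept the candidate transformation only if enough points align."""
    return alignment_accuracy(H, query_pts, ref_pts) >= min_accuracy
```

A correct transformation drives the inlier fraction toward 1; a wrong hypothesis leaves most warped points far from any reference road point and is rejected.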
2.3.1. Initial Homography Transformation Estimation from One Matching Pair
- (1) Compute the matrix $A$: $A = T_r^{-1} T_q$, where $T_q$ and $T_r$ are the alignment transformations of the matched query and reference PICFs;
- (2) Compute the vector $(h_{31}, h_{32})^{\top}$: the homography transformation is approximated by an affinity, so we get $(h_{31}, h_{32})^{\top} = (0, 0)^{\top}$;
- (3) Compute the translation vector $t$: since the contour center point $c_q$ is projected to $c_r$, we get $t = c_r - A c_q$.
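Once the 2 × 2 linear part of the affinity (called `A` here, my naming) is known, the remaining steps amount to zeroing the projective row and solving for the translation from the matched contour centers. A sketch under that assumption:

```python
import numpy as np

def homography_from_single_match(A, center_q, center_r):
    """Build a 3x3 homography (affine approximation) from the 2x2 linear part A
    and one matched pair of contour centers: t = c_r - A @ c_q."""
    c_q = np.asarray(center_q, dtype=float)
    c_r = np.asarray(center_r, dtype=float)
    t = c_r - A @ c_q                 # translation from the center constraint
    H = np.eye(3)
    H[:2, :2] = A                     # linear (affine) part
    H[:2, 2] = t                      # translation part; projective row stays (0, 0, 1)
    return H
```

By construction the resulting homography maps the query contour center exactly onto the reference one, which is the single constraint a one-pair estimate provides.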
2.3.2. Transformation Refining
2.3.3. Transformation Validation
3. Results
3.1. Experiments on Synthetic Aerial Image Data Sets
3.2. Experiments on Real Aerial Image Data Sets
4. Discussion
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
Appendix A
References
- He, H.; Yang, D.; Wang, S.; Wang, S.; Li, Y. Road Extraction by Using Atrous Spatial Pyramid Pooling Integrated Encoder-Decoder Network and Structural Similarity Loss. Remote Sens. 2019, 11, 1015.
- Batra, A.; Singh, S.; Pang, G.; Basu, S.; Jawahar, C.; Paluri, M. Improved road connectivity by joint learning of orientation and segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (IEEE), Seoul, Korea, 27 October–2 November 2019; pp. 10385–10393.
- Bastani, F.; He, S.; Abbar, S.; Alizadeh, M.; Balakrishnan, H.; Chawla, S.; Madden, S.; DeWitt, D. RoadTracer: Automatic extraction of road networks from aerial images. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (IEEE), Salt Lake City, UT, USA, 18–22 June 2018; pp. 4720–4728.
- Cheng, G.; Wang, Y.; Xu, S.; Wang, H.; Xiang, S.; Pan, C. Automatic road detection and centerline extraction via cascaded end-to-end convolutional neural network. IEEE Trans. Geosci. Remote Sens. 2017, 55, 3322–3337.
- Li, Y.; Yang, D.; Wang, S.; He, H.; Hu, J.; Liu, H. Road-Network-Based Fast Geolocalization. IEEE Trans. Geosci. Remote Sens. 2020, 1–12.
- Costea, D.; Leordeanu, M. Aerial image geolocalization from recognition and matching of roads and intersections. arXiv 2016, arXiv:1605.08323.
- Dumble, S.J.; Gibbens, P.W. Airborne vision-aided navigation using road intersection features. J. Intell. Robot. Syst. 2015, 78, 185–204.
- Jung, J.; Yun, J.; Ryoo, C.K.; Choi, K. Vision based navigation using road-intersection image. In Proceedings of the 2011 11th International Conference on Control, Automation and Systems, Gyeonggi-do, Korea, 26–29 October 2011; pp. 964–968.
- Wu, L.; Hu, Y. Vision-aided navigation for aircrafts based on road junction detection. In Proceedings of the 2009 IEEE International Conference on Intelligent Computing and Intelligent Systems, Shanghai, China, 20–22 November 2009; Volume 4, pp. 164–169.
- Won, C.S.W.; Park, D.K.P.; Park, S.P. Efficient Use of MPEG-7 Edge Histogram Descriptor. ETRI J. 2002, 24, 23–30.
- Chalechale, A.; Naghdy, G.; Mertins, A. Sketch-based image matching using angular partitioning. IEEE Trans. Syst. Man Cybern. Part A 2005, 35, 28–41.
- Belongie, S.; Malik, J.; Puzicha, J. Shape matching and object recognition using shape contexts. IEEE Trans. Pattern Anal. Mach. Intell. 2002, 24, 509–522.
- Zhang, Z. Iterative point matching for registration of free-form curves and surfaces. Int. J. Comput. Vis. 1994, 13, 119–152.
- Chen, C.S.; Hung, Y.P.; Cheng, J.B. A fast automatic method for registration of partially-overlapping range images. In Proceedings of the Sixth International Conference on Computer Vision (IEEE Cat. No. 98CH36271), Bombay, India, 7 January 1998; pp. 242–248.
- Stewart, C.V.; Tsai, C.L.; Roysam, B. The dual-bootstrap iterative closest point algorithm with application to retinal image registration. IEEE Trans. Med. Imaging 2003, 22, 1379–1394.
- Ma, J.; Zhao, J.; Jiang, J.; Zhou, H.; Guo, X. Locality Preserving Matching. Int. J. Comput. Vis. 2019, 127, 512–531.
- Ma, J.; Zhou, H.; Zhao, J.; Gao, Y.; Jiang, J.; Tian, J. Robust Feature Matching for Remote Sensing Image Registration via Locally Linear Transforming. IEEE Trans. Geosci. Remote Sens. 2015, 53, 6469–6481.
- Boykov, Y.; Veksler, O.; Zabih, R. Fast approximate energy minimization via graph cuts. IEEE Trans. Pattern Anal. Mach. Intell. 2001, 23, 1222–1239.
- Boykov, Y.; Kolmogorov, V. An experimental comparison of min-cut/max-flow algorithms for energy minimization in vision. IEEE Trans. Pattern Anal. Mach. Intell. 2004, 26, 1124–1137.
- Kolmogorov, V.; Zabih, R. What energy functions can be minimized via graph cuts? IEEE Trans. Pattern Anal. Mach. Intell. 2004, 26, 147–159.
- Esteves, C.; Allen-Blanchette, C.; Zhou, X.; Daniilidis, K. Polar Transformer Networks. In Proceedings of the International Conference on Learning Representations, Vancouver, BC, Canada, 30 April–3 May 2018.
- Muja, M.; Lowe, D.G. Fast approximate nearest neighbors with automatic algorithm configuration. VISAPP (1) 2009, 2, 2.
- Litman, R.; Korman, S.; Bronstein, A.M.; Avidan, S. Inverting RANSAC: Global model detection via inlier rate estimation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (IEEE), Boston, MA, USA, 7–12 June 2015; pp. 5243–5251.
Video Number | Localization | Maximum Altitude (m) | Trajectory Length (km) | Video Resolution |
---|---|---|---|---|
1 | Suizhou, Hubei | 1000 | 4.325 | 4096 × 2160 |
2 | Suizhou, Hubei | 1000 | 4.476 | 4096 × 2160 |
3 | Suizhou, Hubei | 1000 | 4.557 | 4096 × 2160 |
4 | Suizhou, Hubei | 1000 | 1.543 | 4096 × 2160 |
5 | Suizhou, Hubei | 1000 | 4.238 | 4096 × 2160 |
6 | Suizhou, Hubei | 1000 | 4.366 | 4096 × 2160 |
7 | Suizhou, Hubei | 1000 | 1.442 | 4096 × 2160 |
8 | Suizhou, Hubei | 1000 | 4.493 | 4096 × 2160 |
9 | Suizhou, Hubei | 1000 | 4.601 | 4096 × 2160 |
10 | Suizhou, Hubei | 1000 | 4.512 | 4096 × 2160 |
11 | Suizhou, Hubei | 1000 | 4.272 | 4096 × 2160 |
12 | Suizhou, Hubei | 1000 | 3.023 | 4096 × 2160 |
14 | Xi'an, Shaanxi | 500 | 1.732 | 1920 × 1080 |
15 | Xi'an, Shaanxi | 500 | 7.387 | 1920 × 1080 |
16 | Xi'an, Shaanxi | 350 | 6.125 | 1920 × 1080 |
17 | Xi'an, Shaanxi | 439 | 8.885 | 1920 × 1080 |
18 | Xi'an, Shaanxi | 442 | 8.569 | 1920 × 1080 |
19 | Xi'an, Shaanxi | 444 | 8.491 | 1920 × 1080 |
20 | Fengyang, Anhui | 400 | 4.831 | 3840 × 2160 |
21 | Fengyang, Anhui | 400 | 4.687 | 3840 × 2160 |
22 | Fengyang, Anhui | 400 | 2.032 | 3840 × 2160 |
Localization | Longitude Range | Latitude Range | Area |
---|---|---|---|
Suizhou, Hubei | [113.0000, 113.1792] | [31.6584, 32.0334] | 54.99 km × 37.97 km |
Xi'an, Shaanxi | [108.9296, 109.1767] | [34.0707, 34.2053] | 22.46 km × 14.90 km |
Fengyang, Anhui | [117.4997, 117.8000] | [32.7357, 33.0812] | 28.30 km × 38.12 km |
Localization | Total Number of Mosaics | Successful Mosaics (2RIT) | Successful Mosaics (Ours) |
---|---|---|---|
Suizhou, Hubei | 21 | 14 | 16 |
Xi'an, Shaanxi | 24 | 14 | 20 |
Fengyang, Anhui | 7 | 5 | 7 |
© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Share and Cite
Li, Y.; Wang, S.; He, H.; Meng, D.; Yang, D. Fast Aerial Image Geolocalization Using the Projective-Invariant Contour Feature. Remote Sens. 2021, 13, 490. https://doi.org/10.3390/rs13030490