Imagery Network Fine Registration by Reference Point Cloud Data Based on the Tie Points and Planes
Abstract
1. Introduction
2. Methods and Materials
2.1. Tie Point Extraction and Tie Plane Estimation
2.2. Constraint Equations and Bundle Adjustment
3. Experimental Data and Implementation
3.1. Study Dataset
3.2. Implementation
- Specify initial values for the EOPs, IOPs, and APs. These approximate values can come from onboard sensor positioning systems or from a small number of coarse control points (in this paper, the coarse EOP values for the UAV and close-range datasets come from GNSS observations and GCPs, respectively, while the IOPs and APs come from an initial bundle adjustment of the imagery network using those EOP values).
- Find the closest point cloud points to each object-space photogrammetric tie point. If the average of all distances (d) is small enough, stop the procedure; otherwise, continue.
- Fit a plane to the points found in Step 2 and estimate the DVs and NVs.
- Compare the NVs of the photogrammetric planes with those of the corresponding point cloud planes, and remove gross-error tie points and their planes.
- Form the observation equations (i.e., the error equations).
- Solve the least-squares bundle adjustment to obtain corrections to the EOP, IOP, and AP values and to the object-space coordinates of the tie points, and update the unknown parameters.
- Calculate the Root Mean Square Error (RMSE) of all corrections to the unknown values.
- If the RMSE is less than the threshold (selected as 0.01 by trial and error), go back to Step 2; otherwise, repeat from Step 6.
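Steps 2–4 above hinge on two operations: a least-squares plane fit to the cloud points nearest each tie point, and a normal-vector comparison to reject gross tie points. A minimal sketch of those two operations in Python (function names and the 10° tolerance are illustrative assumptions, not values from the paper):

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane fit to an (n, 3) point array.
    Returns the unit normal vector (NV) and the signed distance
    term (DV) of the plane n.x + d = 0."""
    centroid = points.mean(axis=0)
    # The right singular vector for the smallest singular value of the
    # centred points is the direction of least variance: the normal.
    _, _, vt = np.linalg.svd(points - centroid, full_matrices=False)
    normal = vt[-1]
    return normal, -normal.dot(centroid)

def normals_agree(n1, n2, tol_deg=10.0):
    """Gross-error test: True if two unit normals differ by less than
    tol_deg (plane normals have arbitrary sign, hence |cos|)."""
    cos = abs(np.clip(np.dot(n1, n2), -1.0, 1.0))
    return np.degrees(np.arccos(cos)) < tol_deg

# Points sampled near the plane z = 0.5 with slight noise.
rng = np.random.default_rng(0)
pts = np.column_stack([rng.uniform(-1, 1, 50),
                       rng.uniform(-1, 1, 50),
                       np.full(50, 0.5) + rng.normal(0, 1e-3, 50)])
nv, dv = fit_plane(pts)
print(normals_agree(nv, np.array([0.0, 0.0, 1.0])))  # → True
```

In the full procedure, each pair of fitted photogrammetric and point cloud planes would pass through a check like `normals_agree` before its tie point enters the bundle adjustment of Steps 5–8.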
4. Analysis and Discussion
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Bing, L.; Xuchu, Y.; Anzhu, Y.; Gang, W. Deep convolutional recurrent neural network with transfer learning for hyperspectral image classification. J. Appl. Remote Sens. 2018, 12, 026028.
- Kim, M.; Lee, J.; Han, D.; Shin, M.; Im, J.; Lee, J.; Quackenbush, L.J.; Gu, Z. Convolutional Neural Network-Based Land Cover Classification Using 2-D Spectral Reflectance Curve Graphs with Multitemporal Satellite Imagery. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 4604–4617.
- Islam, K.; Jashimuddin, M.; Nath, B.; Nath, T.K. Land use classification and change detection by using multi-temporal remotely sensed imagery: The case of Chunati Wildlife Sanctuary, Bangladesh. Egypt. J. Remote Sens. Space Sci. 2018, 21, 37–47.
- Liénard, J.; Vogs, A.; Gatziolis, D.; Strigul, N. Embedded, real-time UAV control for improved, image-based 3D scene reconstruction. Meas. J. Int. Meas. Confed. 2016, 81, 264–269.
- Khoshboresh Masouleh, M.; Sadeghian, S. Deep learning-based method for reconstructing three-dimensional building cadastre models from aerial images. J. Appl. Remote Sens. 2019, 13, 23–45.
- Ekhtari, N.; Glennie, C.; Fernandez-Diaz, J.C. Classification of Airborne Multispectral Lidar Point Clouds for Land Cover Mapping. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 2068–2078.
- Salehi, A.; Mohammadzadeh, A. Building Roof Reconstruction Based on Residue Anomaly Analysis and Shape Descriptors from Lidar and Optical Data. Photogramm. Eng. Remote Sens. 2017, 83, 281–291.
- Prieto, S.A.; Adán, A.; Quintana, B. Preparation and enhancement of 3D laser scanner data for realistic coloured BIM models. Vis. Comput. 2018, 36, 113–126.
- Schmitz, B.; Holst, C.; Medic, T.; Lichti, D.D. How to Efficiently Determine the Range Precision of 3D Terrestrial Laser Scanners. Sensors 2019, 19, 1466.
- Li, N.; Huang, X.; Zhang, F.; Li, D. Registration of Aerial Imagery and Lidar Data in Desert Areas Using Sand Ridges. Photogramm. Rec. 2015, 30, 263–278.
- Zhang, J.; Lin, X. Advances in fusion of optical imagery and LiDAR point cloud applied to photogrammetry and remote sensing. Int. J. Image Data Fusion 2017, 8, 1–31.
- Yang, B.; Chen, C. Automatic registration of UAV-borne sequent images and LiDAR data. ISPRS J. Photogramm. Remote Sens. 2015, 101, 262–274.
- Parmehr, E.G.; Fraser, C.S.; Zhang, C.; Leach, J. Automatic registration of optical imagery with 3D LiDAR data using statistical similarity. ISPRS J. Photogramm. Remote Sens. 2014, 88, 28–40.
- Mishra, R.; Zhang, Y. A review of optical imagery and airborne lidar data registration methods. Open Remote Sens. J. 2012, 5, 54–63.
- Sun, S.; Salvaggio, C. Complex building roof detection and strict description from LIDAR data and orthorectified aerial imagery. In Proceedings of the 2012 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Munich, Germany, 22–27 July 2012; pp. 5466–5469.
- Pandey, G.; McBride, J.R.; Savarese, S.; Eustice, R.M. Automatic Targetless Extrinsic Calibration of a 3D Lidar and Camera by Maximizing Mutual Information. In Proceedings of the AAAI, Toronto, ON, Canada, 22–26 July 2012.
- Miled, M.; Soheilian, B.; Habets, E.; Vallet, B. Hybrid online mobile laser scanner calibration through image alignment by mutual information. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2016, 3, 25–31.
- Viola, P.; Wells, W.M., III. Alignment by maximization of mutual information. Int. J. Comput. Vis. 1997, 24, 137–154.
- Eslami, M.; Saadatseresht, M. A New Tie Plane-Based Method for Fine Registration of Imagery and Point Cloud Dataset. Can. J. Remote Sens. 2020, 46, 295–312.
- Mastin, A.; Kepner, J.; Fisher, J.J. Automatic registration of LIDAR and optical images of urban scenes. In Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Miami, FL, USA, 20–25 June 2009; pp. 2639–2646.
- Omidalizarandi, M.; Neumann, I. Comparison of target- and mutual information-based calibration of terrestrial laser scanner and digital camera for deformation monitoring. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2015, 40, 559.
- Teo, T.A.; Huang, S.H. Automatic co-registration of optical satellite images and airborne lidar data using relative and absolute orientations. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2013, 6, 2229–2237.
- Chen, Y.; Medioni, G. Object Modeling by Registration of Multiple Range Images. IEEE Int. Conf. Robot. Autom. 1992, 2724–2729.
- Moussa, W.; Abdel-Wahab, M.; Fritsch, D. An Automatic Procedure for Combining Digital Images and Laser Scanner Data. ISPRS Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2012, XXXIX-B5, 229–234.
- Mitishita, E.; Habib, A.; Centeno, J.; Machado, A.; Lay, J.; Wong, C. Photogrammetric and lidar data integration using the centroid of a rectangular roof as a control point. Photogramm. Rec. 2008, 23, 19–35.
- Kwak, T.-S.; Kim, Y.-I.; Yu, K.-Y.; Lee, B.-K. Registration of aerial imagery and aerial LiDAR data using centroids of plane roof surfaces as control information. KSCE J. Civ. Eng. 2006, 10, 365–370.
- Zheng, S.; Huang, R.; Zhou, Y. Registration of optical images with lidar data and its accuracy assessment. Photogramm. Eng. Remote Sens. 2013, 79, 731–741.
- Huang, R.; Zheng, S.; Hu, K. Registration of Aerial Optical Images with LiDAR Data Using the Closest Point Principle and Collinearity Equations. Sensors 2018, 18, 1770.
- Glira, P.; Pfeifer, N.; Mandlburger, G. Hybrid Orientation of Airborne LiDAR Point Clouds and Aerial Images. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2019, 4, 567–574.
- Habib, A.; Morgan, M.; Kim, E.M.; Cheng, R. Linear features in photogrammetric activities. In Proceedings of the ISPRS Congress, Istanbul, Turkey, 12–23 July 2004; p. 610.
- Aldelgawy, M.; Detchev, I.D.; Habib, A.F. Alternative procedures for the incorporation of LiDAR-derived linear and areal features for photogrammetric geo-referencing. In Proceedings of the American Society for Photogrammetry and Remote Sensing (ASPRS) Annual Conference, Portland, OR, USA, 28 April–2 May 2008.
- Srestasathiern, P. Line Based Estimation of Object Space Geometry and Camera Motion. Doctoral Dissertation, The Ohio State University, Columbus, OH, USA, 2012.
- Barsai, G.; Yilmaz, A.; Nagarajan, S.; Srestasathiern, P. Registration of Images to Lidar and GIS Data Without Establishing Explicit Correspondences. Photogramm. Eng. Remote Sens. 2017, 83, 705–716.
- Li, J.; Yang, B.; Chen, C.; Huang, R.; Dong, Z.; Xiao, W. Automatic registration of panoramic image sequence and mobile laser scanning data using semantic features. ISPRS J. Photogramm. Remote Sens. 2018, 136, 41–57.
- Safdarinezhad, A.; Mokhtarzade, M.; Valadan Zoej, M.J. Shadow-Based Hierarchical Matching for the Automatic Registration of Airborne LiDAR Data and Space Imagery. Remote Sens. 2016, 8, 466.
- Lowe, D.G. Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vis. 2004, 60, 91–110.
- Eslami, M.; Mohammadzadeh, A. Developing a Spectral-Based Strategy for Urban Object Detection from Airborne Hyperspectral TIR and Visible Data. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2016, 9, 1808–1816.
- Goshtasby, A.A. Image Registration: Principles, Tools and Methods; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2012.
- Luhmann, T.; Robson, S.; Kyle, S.; Harley, I. Close Range Photogrammetry: Principles, Techniques and Applications; Whittles: Scotland, UK, 2011; ISBN 9781849950572.
- Arefi, H. Iterative approach for efficient digital terrain model production from CARTOSAT-1 stereo images. J. Appl. Remote Sens. 2011, 5, 053527.
| Cameras | | | Laser Scanners | | |
|---|---|---|---|---|---|
| Sensor Type | Canon 7D | DJI Camera Sensor | Sensor Type | ScanStation2 | RIEGL VMX-250 |
| Site | Close-range | UAV | Site | Close-range | UAV |
| Distance to object (m) | 4–6 | 80 | Scan speed (points/s) | 50,000 | 600,000 |
| GSD (m) | 0.001 | 0.025 | Point density (points/m²) | 180,000 | 40,000 |
| Number of images | 14 | 37 | Range accuracy (mm) | 5 | 5 |
Coarse Registration Accuracy (pixel):

| Error Type (Study Area) | Maximum Error | Average Error | RMSE |
|---|---|---|---|
| Total (XYZ) (Close-range) | 39.04 | 29.67 | 30.60 |
| Total (XYZ) (UAV) | 25.81 | 20.5 | 20.85 |

Fine Registration Accuracy (pixel):

| Error Type (Study Area) | Maximum Error | Average Error | RMSE |
|---|---|---|---|
| Total (XYZ) (Close-range) | 2.51 | 2.15 | 2.25 |
| Total (XYZ) (UAV) | 2.57 | 2.26 | 2.28 |
Coarse Registration Accuracy (pixel):

| Error Type (Study Area) | Maximum Error | Average Error | RMSE |
|---|---|---|---|
| Range (Z) (Close-range) | 38 | 28.95 | 29.89 |
| Range (Z) (UAV) | 25.1 | 19.25 | 19.81 |

Fine Registration Accuracy (pixel):

| Error Type (Study Area) | Maximum Error | Average Error | RMSE |
|---|---|---|---|
| Range (Z) (Close-range) | 2.01 | 1.87 | 1.88 |
| Range (Z) (UAV) | 2.14 | 1.63 | 1.68 |
| Author Name | Study Site | Maximum Error | Average Error | RMSE |
|---|---|---|---|---|
| Our Method | Close-range | 2.51 | 2.15 | 2.25 |
| Our Method | UAV | 2.57 | 2.26 | 2.28 |
| Huang et al. [28] | Close-range | 2.52 | 2.01 | 2.14 |
| Huang et al. [28] | UAV | 2.12 | 1.89 | 1.96 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Share and Cite
Eslami, M.; Saadatseresht, M. Imagery Network Fine Registration by Reference Point Cloud Data Based on the Tie Points and Planes. Sensors 2021, 21, 317. https://doi.org/10.3390/s21010317