Generation of Virtual Ground Control Points Using a Binocular Camera
Abstract
1. Introduction
2. Materials and Methods
- The process starts with an image database from which a sparse point cloud is generated using OpenSfM [30].
- Outliers are removed using a middle-pass filter.
- The point cloud is converted into a mesh using Poisson surface reconstruction [33].
- The resulting mesh is textured to make the reconstruction more realistic.
- A georeferencing process is performed in global coordinates according to the EPSG:4326 WGS 84 [34] standard (in degrees).
- The reconstruction is checked for GCPs. If GCPs are present, a digital elevation model and an orthophoto of the reconstruction are generated, together with a quality report (the classic process; see the sketch after this list). If the reconstruction has no GCPs, the distances between selected points of the image are checked so that they can be mapped inside the image (with the binocular camera), and VGCPs are generated using the method proposed in Section 2.3.
- Once the VGCPs are obtained, the process is repeated for the subsequent reconstruction.
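When GCPs are available, the classic branch amounts to a standard OpenDroneMap run. The following is a minimal sketch of how such a run could be launched through Docker; the dataset path, project name, and the specific flags are illustrative assumptions rather than the authors' exact configuration.

```python
# Hedged sketch: launching the "classic" ODM pipeline (GCPs present) via Docker.
# Paths, the project name, and the flag selection are placeholders.
import subprocess

subprocess.run([
    "docker", "run", "--rm",
    "-v", "/data/datasets:/datasets",            # mount the image database
    "opendronemap/odm",
    "--project-path", "/datasets", "project",    # dataset folder with an images/ subfolder
    "--dsm",                                     # also build the digital elevation model
    "--gcp", "/datasets/project/gcp_list.txt",   # ground control point file
], check=True)
```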
2.1. Pre-Processing with COLMAP Library
- The process starts with an image database from which a sparse point cloud is generated using an incremental reconstruction technique with the COLMAP library.
- A database is created containing every point of the point cloud and the camera parameters estimated by COLMAP.
- Using the libimage-exiftool-perl package [35], the Exchangeable image file format (EXIF) metadata of each image is modified, in particular the GPS tags (a minimal sketch follows this list).
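A minimal sketch of this EXIF update, assuming exiftool is installed on the system; the image name and coordinate values below are placeholders, not data from the paper.

```python
# Hedged sketch: overwrite the GPS EXIF tags of one image with the exiftool CLI.
import subprocess

def write_gps_exif(image_path: str, lat: float, lon: float, alt: float) -> None:
    """Write decimal-degree GPS coordinates and altitude into the image's EXIF."""
    subprocess.run([
        "exiftool", "-overwrite_original",
        f"-GPSLatitude={abs(lat)}",  f"-GPSLatitudeRef={'N' if lat >= 0 else 'S'}",
        f"-GPSLongitude={abs(lon)}", f"-GPSLongitudeRef={'E' if lon >= 0 else 'W'}",
        f"-GPSAltitude={alt}",
        image_path,
    ], check=True)

write_gps_exif("DJI_0001.JPG", 18.876543, -99.219876, 1510.0)  # placeholder values
```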
2.2. Re-Projections from Point Cloud to Image
- The process starts with either a binocular or monocular point cloud.
- Each point in the cloud is re-projected onto the query image and normalized.
- Any points that fall outside the image dimensions are then removed (see the sketch after this list). For instance, the re-projection may place a point of the query image at pixel coordinates (5454, 8799); if these values exceed the image resolution, that point is eliminated, as is any other point that falls outside the image bounds.
- The process is repeated for each image of the dataset.
- The output is stored in one file per image. For each monocular image, the file lists the image name, the point number, and the local and global coordinates; for each binocular image, the file lists the local coordinates followed by the global coordinates. The formats differ because binocular images contain more points than monocular images.
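A minimal sketch of the re-projection and bounds filtering described above, assuming a pinhole camera with intrinsics K and pose (R, t) taken from the reconstruction; the variable names and data layout are illustrative, not the authors' code.

```python
# Hedged sketch: project sparse 3D points onto a query image and keep only the
# points that land inside the image.
import numpy as np

def reproject_and_filter(points_3d, K, R, t, width, height):
    """points_3d: (N, 3) world coordinates. Returns kept pixel coords and a keep mask."""
    cam = (R @ points_3d.T + t.reshape(3, 1)).T        # world frame -> camera frame
    in_front = cam[:, 2] > 0                           # discard points behind the camera
    pix_h = (K @ cam.T).T                              # homogeneous pixel coordinates
    pix = pix_h[:, :2] / pix_h[:, 2:3]                 # normalize by depth
    inside = (
        in_front
        & (pix[:, 0] >= 0) & (pix[:, 0] < width)
        & (pix[:, 1] >= 0) & (pix[:, 1] < height)
    )
    return pix[inside], inside
```

A point projected to (5454, 8799), as in the example above, would be kept only if both values fall within the image resolution.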
2.3. Proposed Methodology for the Generation of Virtual Ground Control Points (VGCPs)
- An image bank is acquired with both the binocular and the monocular cameras. The monocular images come from a flight at a higher altitude (50 m, 100 m) covering the whole area to be reconstructed, with an overlap of 80% (height and overlap are fixed considering [36,37,38,39,40]). The binocular images are acquired during a second flight at a lower altitude (5 m, 10 m). Some central points of the area to be reconstructed are selected, for example, about four points for a flat terrain of 20,000 m². The points are chosen roughly 20 m apart, so that several images collected by the cameras connect them. The binocular flight data are saved in SVO format and decompressed after the flight.
- Apply COLMAP pre-processing (feature extraction, matching, and geometric verification) to the monocular images.
- Generate a sparse reconstruction from a set of monocular images using the OpenSfM module of Web OpenDroneMap.
- Re-project the point cloud onto each monocular camera image to obtain per-image 2D points (Section 2.2).
- Generate one point cloud per query image from the binocular camera.
- Re-project each binocular point cloud onto the corresponding binocular camera image.
- Compute the correspondences for each pair of images in which a VGCP is to be generated.
- Remove from the list of correspondences any match that lacks an associated point in both the binocular and the monocular point clouds, taking into account queryIdx and trainIdx (the indices returned by the matching algorithm) as well as the RANSAC inlier matrix (see the first sketch after this list).
- Search for an initial point, taking into account the position of the point and the number of images in which the point appears.
- Find the points surrounding the initial point with a radius of 500 pixels.
- For each point, find the difference in the x, y, and z axes between the binocular and monocular coordinates and calculate the average error.
- Correct the distance error by shifting the coordinates of the starting point considering the average error per axis.
- Convert to EPSG:4326 WGS 84 [34] format (global coordinates in degrees); see the second sketch after this list.
- Store the information in the following order: (1) the geodetic coordinates of the rectified point, (2) the spatial position of the rectified point in the image, (3) the name of the image within the image database, and (4) the VGCP number it represents, taking into account all images. This storage process is performed for each image containing this reference object and all reference objects.
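A minimal sketch of the correspondence-filtering step referenced in the list above (the first sketch). SIFT, a brute-force matcher, and a RANSAC homography stand in for whichever detector, matcher, and geometric check the pipeline actually uses; `bino_points` and `mono_points` are assumed lookup tables from keypoint index to re-projected cloud point.

```python
# Hedged sketch: keep only RANSAC inliers that have a 3D point on both the
# binocular and the monocular side, using the queryIdx/trainIdx match indices.
import cv2
import numpy as np

def filter_matches(img_bino, img_mono, bino_points, mono_points):
    sift = cv2.SIFT_create()
    kp_b, des_b = sift.detectAndCompute(img_bino, None)
    kp_m, des_m = sift.detectAndCompute(img_mono, None)
    matches = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True).match(des_b, des_m)

    # Geometric verification with RANSAC; the mask flags inlier correspondences.
    src = np.float32([kp_b[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_m[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    _, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    kept = []
    for m, inlier in zip(matches, mask.ravel()):
        # Drop outliers and matches without an associated cloud point on both sides.
        if inlier and m.queryIdx in bino_points and m.trainIdx in mono_points:
            kept.append(m)
    return kept
```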
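The second sketch referenced in the list covers the per-axis error correction and the conversion to EPSG:4326. The 500-pixel radius follows the text; the use of pyproj and of EPSG:32614 (UTM zone 14N) as the metric CRS follows the tools cited in the paper, but the data structures are assumptions.

```python
# Hedged sketch: average the binocular-vs-monocular error of the neighbouring
# points, shift the seed point by that error, then convert UTM metres to degrees.
import numpy as np
from pyproj import Transformer

def rectify_point(seed_xy, seed_bino_xyz, neighbours, radius_px=500.0):
    """neighbours: iterable of (pixel_xy, bino_xyz, mono_xyz) for candidate points."""
    diffs = [
        np.asarray(mono) - np.asarray(bino)
        for (px, bino, mono) in neighbours
        if np.linalg.norm(np.asarray(px) - np.asarray(seed_xy)) <= radius_px
    ]
    mean_err = np.mean(diffs, axis=0)              # average error per axis (x, y, z)
    return np.asarray(seed_bino_xyz) + mean_err    # shifted (rectified) seed point

# Metric UTM coordinates -> geodetic degrees for the VGCP file (placeholder values).
to_wgs84 = Transformer.from_crs("EPSG:32614", "EPSG:4326", always_xy=True)
lon, lat = to_wgs84.transform(475321.0, 2087654.0)
```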
2.4. Acquisition of Datasets
2.4.1. Monocular Drone
2.4.2. Binocular Drone
2.4.3. Monocular Datasets
2.4.4. Binocular Datasets
2.5. Metrics
2.5.1. RMSE
2.5.2. CE90 and LE90 Metrics
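For reference, a minimal sketch of how the three metrics can be computed from per-checkpoint residuals. It uses the common percentile-based definitions of CE90 (horizontal) and LE90 (vertical); the paper's exact formulation may differ.

```python
# Hedged sketch: RMSE, CE90, and LE90 from residuals (estimated minus reference).
import numpy as np

def accuracy_metrics(dx, dy, dz):
    """dx, dy, dz: arrays of per-checkpoint residuals along each axis, in metres."""
    dx, dy, dz = map(np.asarray, (dx, dy, dz))
    rmse = np.sqrt(np.mean(dx**2 + dy**2 + dz**2))   # 3D root-mean-square error
    ce90 = np.percentile(np.hypot(dx, dy), 90)       # 90th percentile horizontal error
    le90 = np.percentile(np.abs(dz), 90)             # 90th percentile vertical error
    return rmse, ce90, le90
```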
2.5.3. Development Tools
3. Results
3.1. Results of COLMAP Pre-processing
3.2. Results of Virtual Ground Control Points in Dataset 3
3.3. Analysis of the Creation of Virtual Ground Control Points in Dataset 4
4. Discussion
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
Appendix A. Re-Projection Results
References
- INEGI. Metadatos de: Modelo Digital de Elevación de México; Technical Report; CentroGeo: México, Mexico, 2015. [Google Scholar]
- Guisado-Pintado, E.; Jackson, D.W.; Rogers, D. 3D mapping efficacy of a drone and terrestrial laser scanner over a temperate beach-dune zone. Geomorphology 2019, 328, 157–172. [Google Scholar] [CrossRef]
- James, M.; Robson, S.; D’Oleire-Oltmanns, S.; Niethammer, U. Optimising UAV topographic surveys processed with structure-from-motion: Ground control quality, quantity and bundle adjustment. Geomorphology 2017, 280, 51–66. [Google Scholar] [CrossRef]
- Wierzbicki, D.; Nienaltowski, M. Accuracy Analysis of a 3D Model of Excavation, Created from Images Acquired with an Action Camera from Low Altitudes. ISPRS Int. J. Geo-Inf. 2019, 8, 83. [Google Scholar] [CrossRef]
- Muradás Odriozola, G.; Pauly, K.; Oswald, S.; Raymaekers, D. Automating Ground Control Point Detection in Drone Imagery: From Computer Vision to Deep Learning. Remote Sens. 2024, 16, 794. [Google Scholar] [CrossRef]
- Li, W.; Liu, G.; Zhu, L.; Li, X.; Zhang, Y.; Shan, S. Efficient detection and recognition algorithm of reference points in photogrammetry. In Optics, Photonics and Digital Technologies for Imaging Applications IV; Schelkens, P., Ebrahimi, T., Cristóbal, G., Truchetet, F., Saarikko, P., Eds.; SPIE: Brussels, Belgium, 2016; Volume 9896, p. 989612. [Google Scholar] [CrossRef]
- Jain, A.; Mahajan, M.; Saraf, R. Standardization of the Shape of Ground Control Point (GCP) and the Methodology for Its Detection in Images for UAV-Based Mapping Applications. In Advances in Intelligent Systems and Computing; Springer: Berlin/Heidelberg, Germany, 2020; Volume 943, pp. 459–476. [Google Scholar] [CrossRef]
- Purevdorj, T.; Yokoyama, R. An approach to automatic detection of GCP for AVHRR imagery. J. Jpn. Soc. Photogramm. Remote Sens. 2002, 41, 28–38. [Google Scholar] [CrossRef]
- Montazeri, S.; Gisinger, C.; Eineder, M.; Zhu, X.X. Automatic Detection and Positioning of Ground Control Points Using TerraSAR-X Multiaspect Acquisitions. IEEE Trans. Geosci. Remote Sens. 2018, 56, 2613–2632. [Google Scholar] [CrossRef]
- Montazeri, S.; Zhu, X.X.; Balss, U.; Gisinger, C.; Wang, Y.; Eineder, M.; Bamler, R. SAR ground control point identification with the aid of high resolution optical data. In Proceedings of the 2016 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Beijing, China, 10–15 July 2016; pp. 3205–3208. [Google Scholar] [CrossRef]
- Zhu, Z.; Bao, T.; Hu, Y.; Gong, J. A Novel Method for Fast Positioning of Non-Standardized Ground Control Points in Drone Images. Remote Sens. 2021, 13, 2849. [Google Scholar] [CrossRef]
- Nitti, D.O.; Morea, A.; Nutricato, R.; Chiaradia, M.T.; La Mantia, C.; Agrimano, L.; Samarelli, S. Automatic GCP extraction with high resolution COSMO-SkyMed products. In SAR Image Analysis, Modeling, and Techniques XVI; Notarnicola, C., Paloscia, S., Pierdicca, N., Mitchard, E., Eds.; SPIE: Ciudad de México, Mexico, 2016; Volume 10003, p. 1000302. [Google Scholar] [CrossRef]
- Tomaštík, J.; Mokroš, M.; Surový, P.; Grznárová, A.; Merganič, J. UAV RTK/PPK Method—An Optimal Solution for Mapping Inaccessible Forested Areas? Remote Sens. 2019, 11, 721. [Google Scholar] [CrossRef]
- Feng, A.; Vong, C.N.; Zhou, J.; Conway, L.S.; Zhou, J.; Vories, E.D.; Sudduth, K.A.; Kitchen, N.R. Developing an image processing pipeline to improve the position accuracy of single UAV images. Comput. Electron. Agric. 2023, 206, 107650. [Google Scholar] [CrossRef]
- Feng, A.; Zhou, J.; Vories, E.; Sudduth, K.A. Evaluation of cotton emergence using UAV-based imagery and deep learning. Comput. Electron. Agric. 2020, 177, 105711. [Google Scholar] [CrossRef]
- Wang, R.; Lungu, M.; Zhou, Z.; Zhu, X.; Ding, Y.; Zhao, Q. Least global position information based control of fixed-wing UAVs formation flight: Flight tests and experimental validation. Aerosp. Sci. Technol. 2023, 140, 108473. [Google Scholar] [CrossRef]
- Cieslewski, T.; Derpanis, K.G.; Scaramuzza, D. SIPs: Succinct Interest Points from Unsupervised Inlierness Probability Learning. In Proceedings of the 2019 International Conference on 3D Vision (3DV), Quebec City, QC, Canada, 16–19 September 2019; pp. 604–613. [Google Scholar] [CrossRef]
- DeTone, D.; Malisiewicz, T.; Rabinovich, A. SuperPoint: Self-Supervised Interest Point Detection and Description. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Salt Lake City, UT, USA, 18–22 June 2018; Volume 2018-June, pp. 337–33712. [Google Scholar] [CrossRef]
- Ono, Y.; Fua, P.; Trulls, E.; Yi, K.M. LF-Net: Learning local features from images. In Proceedings of the Advances in Neural Information Processing Systems, Montréal, QC, Canada, 3–8 December 2018; Volume 2018-December, pp. 6234–6244. [Google Scholar] [CrossRef]
- Yi, K.M.; Trulls, E.; Lepetit, V.; Fua, P. LIFT: Learned Invariant Feature Transform. In Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Springer: Berlin/Heidelberg, Germany, 2016; Volume 9910 LNCS, pp. 467–483. [Google Scholar] [CrossRef]
- Lowe, D.G. Distinctive image features from scale invariant keypoints. Int. J. Comput. Vis. 2004, 60, 91–110. [Google Scholar] [CrossRef]
- Lindenberger, P.; Sarlin, P.E.; Pollefeys, M. LightGlue: Local Feature Matching at Light Speed. In Proceedings of the ICCV 2023, Paris, France, 1–6 October 2023. [Google Scholar]
- Sarlin, P.E.; DeTone, D.; Malisiewicz, T.; Rabinovich, A. SuperGlue: Learning Feature Matching With Graph Neural Networks. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020; pp. 4937–4946. [Google Scholar] [CrossRef]
- Sun, J.; Shen, Z.; Wang, Y.; Bao, H.; Zhou, X. LoFTR: Detector-Free Local Feature Matching with Transformers. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021. [Google Scholar]
- Wang, Q.; Zhang, J.; Yang, K.; Peng, K.; Stiefelhagen, R. MatchFormer: Interleaving Attention in Transformers for Feature Matching. In Proceedings of the CVPR, Macao, China, 4–8 December 2022. [Google Scholar]
- OpenDroneMap. Open Drone Map. Available online: https://www.opendronemap.org/ (accessed on 8 May 2024).
- DJI. DJI MAVIC PRO PLATINUM. Available online: https://www.dji.com/mx/support/product/mavic-pro-platinum (accessed on 8 May 2024).
- Smith, D.; Heidemann, H.K. New Standard for New Era: Overview of the 2015 ASPRS Positional Accuracy Standards for Digital Geospatial Data. Photogramm. Eng. Remote Sens. 2015, 81, 173–176. [Google Scholar]
- Whitehead, K.; Hugenholtz, C.H. Applying ASPRS Accuracy Standards to Surveys from Small Unmanned Aircraft Systems (UAS). Photogramm. Eng. Remote Sens. 2015, 81, 787–793. [Google Scholar] [CrossRef]
- Adorjan, M. OpenSfM A collaborative Structure-from-Motion System. Ph.D. Thesis, Technische Universität Wien, Wien, Austria, 2016. [Google Scholar]
- Agarwal, S.; Furukawa, Y.; Snavely, N.; Simon, I.; Curless, B.; Seitz, S.M.; Szeliski, R. Building Rome in a day. Commun. ACM 2011, 54, 105–112. [Google Scholar] [CrossRef]
- Meza, J.; Marrugo, A.G.; Sierra, E.; Guerrero, M.; Meneses, J.; Romero, L.A. A Structure-from-Motion Pipeline for Topographic Reconstructions Using Unmanned Aerial Vehicles and Open Source Software. In Communications in Computer and Information Science; Springer: Berlin/Heidelberg, Germany, 2018; Volume 885, pp. 213–225. [Google Scholar] [CrossRef]
- Kazhdan, M.; Bolitho, M.; Hoppe, H. Poisson Surface Reconstruction. In Proceedings of the Fourth Eurographics Symposium on Geometry Processing (SGP’06), Goslar, Germany, 26–28 June 2006; pp. 61–70. [Google Scholar]
- European Petroleum Survey Group. EPSG:4326 WGS 84; Available online: https://epsg.io/4326 (accessed on 8 May 2024).
- Harvey, P. libimage-exiftool-perl. Available online: https://packages.ubuntu.com/search?keywords=libimage-exiftool-perl (accessed on 8 May 2024).
- Chand, S.; Bollard, B. Low altitude spatial assessment and monitoring of intertidal seagrass meadows beyond the visible spectrum using a remotely piloted aircraft system. Estuar. Coast. Shelf Sci. 2021, 255, 107299. [Google Scholar] [CrossRef]
- Iheaturu, C.; Okolie, C.; Ayodele, E.; Egogo-Stanley, A.; Musa, S.; Ifejika Speranza, C. A simplified structure-from-motion photogrammetry approach for urban development analysis. Remote Sens. Appl. Soc. Environ. 2022, 28, 100850. [Google Scholar] [CrossRef]
- Vautherin, J.; Rutishauser, S.; Schneider-Zapp, K.; Choi, H.F.; Chovancova, V.; Glass, A.; Strecha, C. Photogrammetric Accuracy and Modeling of Rolling Shutter Cameras. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2016, III-3, 139–146. [Google Scholar] [CrossRef]
- Page, M.T.; Perotto-Baldivieso, H.L.; Ortega-S, J.A.; Tanner, E.P.; Angerer, J.P.; Combs, R.C.; Camacho, A.M.; Ramirez, M.; Cavazos, V.; Carroll, H.; et al. Evaluating Mesquite Distribution Using Unpiloted Aerial Vehicles and Satellite Imagery. Rangel. Ecol. Manag. 2022, 83, 91–101. [Google Scholar] [CrossRef]
- Varbla, S.; Ellmann, A.; Puust, R. Centimetre-range deformations of built environment revealed by drone-based photogrammetry. Autom. Constr. 2021, 128, 103787. [Google Scholar] [CrossRef]
- EPSG. EPSG:32614 WGS 84/UTM zone 14N. Available online: https://epsg.io/32614 (accessed on 8 May 2024).
- PROJ. pyproj. Available online: https://pypi.org/project/pyproj/ (accessed on 8 May 2024).
- DroneDeploy. Drones in Agriculture: The Ultimate Guide to Putting Your Drone to Work on the Farm. In DroneDeploy eBook; Tradepub: Campbell, CA, USA, 2018; pp. 1–26. [Google Scholar]
- DJI. DJI Agras T10. Available online: https://www.dji.com/mx/t10 (accessed on 8 May 2024).
- Stereolabs. Zed 2. Available online: https://www.stereolabs.com/products/zed-2 (accessed on 8 May 2024).
- NVIDIA. Jetson Nano. Available online: https://www.nvidia.com/es-la/autonomous-machines/embedded-systems/jetson-nano/product-development/ (accessed on 8 May 2024).
- Bankert, A.R.; Strasser, E.H.; Burch, C.G.; Correll, M.D. An open-source approach to characterizing Chihuahuan Desert vegetation communities using object-based image analysis. J. Arid Environ. 2021, 188, 104383. [Google Scholar] [CrossRef]
- Bossoukpe, M.; Faye, E.; Ndiaye, O.; Diatta, S.; Diatta, O.; Diouf, A.; Dendoncker, M.; Assouma, M.; Taugourdeau, S. Low-cost drones help measure tree characteristics in the Sahelian savanna. J. Arid Environ. 2021, 187, 104449. [Google Scholar] [CrossRef]
- Camarillo-Peñaranda, J.R.; Saavedra-Montes, A.J.; Ramos-Paja, C.A. Recomendaciones para Seleccionar Índices para la Validación de Modelos. Ph.D. Thesis, Universidad Nacional de Colombia, Bogotá, Colombia, 2013. [Google Scholar] [CrossRef]
- Martínez-Carricondo, P.; Agüera-Vega, F.; Carvajal-Ramírez, F.; Mesas-Carrascosa, F.J.; García-Ferrer, A.; Pérez-Porras, F.J. Assessment of UAV-photogrammetric mapping accuracy based on variation of ground control points. Int. J. Appl. Earth Obs. Geoinf. 2018, 72, 1–10. [Google Scholar] [CrossRef]
- Negrón Baez, P.A. Redes Neuronales Sigmoidal Con Algoritmo Lm Para Pronostico De Tendencia Del Precio De Las Acciones Del Ipsa; Technical Report; Pontificia Universidad Católica de Valparaíso: Valparaíso, Chile, 2014. [Google Scholar]
- Amazon; DigitalGlobe. Accuracy of WorldView Products. Available online: https://dg-cms-uploads-production.s3.amazonaws.com/uploads/document/file/38/DG_ACCURACY_WP_V3.pdf/ (accessed on 8 May 2024).
- Docker. Docker. Available online: https://www.docker.com/ (accessed on 8 May 2024).
- Kaehler, A.; Bradski, G. OpenCV 3; O’Reilly Media: Sebastopol, CA, USA, 2017; pp. 32–35. [Google Scholar]
- Ubuntu. Available online: https://ubuntu.com/ (accessed on 8 May 2024).
- Specht, M.; Szostak, B.; Lewicka, O.; Stateczny, A.; Specht, C. Method for determining of shallow water depths based on data recorded by UAV/USV vehicles and processed using the SVR algorithm. Measurement 2023, 221, 113437. [Google Scholar] [CrossRef]
- Cledat, E.; Jospin, L.; Cucci, D.; Skaloud, J. Mapping quality prediction for RTK/PPK-equipped micro-drones operating in complex natural environment. ISPRS J. Photogramm. Remote Sens. 2020, 167, 24–38. [Google Scholar] [CrossRef]
- Gao, K.; Aliakbarpour, H.; Fraser, J.; Nouduri, K.; Bunyak, F.; Massaro, R.; Seetharaman, G.; Palaniappan, K. Local Feature Performance Evaluation for Structure-From-Motion and Multi-View Stereo Using Simulated City-Scale Aerial Imagery. IEEE Sens. J. 2021, 21, 11615–11627. [Google Scholar] [CrossRef]
- Jiang, S.; Jiang, C.; Jiang, W. Efficient structure from motion for large-scale UAV images: A review and a comparison of SfM tools. ISPRS J. Photogramm. Remote Sens. 2020, 167, 230–251. [Google Scholar] [CrossRef]
- Liu, X.; Lian, X.; Yang, W.; Wang, F.; Han, Y.; Zhang, Y. Accuracy Assessment of a UAV Direct Georeferencing Method and Impact of the Configuration of Ground Control Points. Drones 2022, 6, 30. [Google Scholar] [CrossRef]
- Techapinyawat, L.; Goulden-Brady, I.; Garcia, H.; Zhang, H. Aerial characterization of surface depressions in urban watersheds. J. Hydrol. 2023, 625, 129954. [Google Scholar] [CrossRef]
- Awasthi, B.; Karki, S.; Regmi, P.; Dhami, D.S.; Thapa, S.; Panday, U.S. Analyzing the Effect of Distribution Pattern and Number of GCPs on Overall Accuracy of UAV Photogrammetric Results. In Lecture Notes in Civil Engineering; Springer: Berlin/Heidelberg, Germany, 2020; Volume 51, pp. 339–354. [Google Scholar] [CrossRef]
- Adar, S.; Sternberg, M.; Argaman, E.; Henkin, Z.; Dovrat, G.; Zaady, E.; Paz-Kagan, T. Testing a novel pasture quality index using remote sensing tools in semiarid and Mediterranean grasslands. Agric. Ecosyst. Environ. 2023, 357, 108674. [Google Scholar] [CrossRef]
- Elkhrachy, I. Accuracy Assessment of Low-Cost Unmanned Aerial Vehicle (UAV) Photogrammetry. Alex. Eng. J. 2021, 60, 5579–5590. [Google Scholar] [CrossRef]
- Gillins, D.T.; Kerr, D.; Weaver, B. Evaluation of the Online Positioning User Service for Processing Static GPS Surveys: OPUS-Projects, OPUS-S, OPUS-Net, and OPUS-RS. J. Surv. Eng. 2019, 145, 05019002. [Google Scholar] [CrossRef]
| Process | 100 m W/o | 100 m W | 100 m Diff (%) | 75 m W/o | 75 m W | 75 m Diff (%) | 50 m W/o | 50 m W | 50 m Diff (%) |
|---|---|---|---|---|---|---|---|---|---|
| Absolute RMSE | 0.69 | 0.6 | −0.24 | 0.56 | 0.57 | −1.55 | 0.51 | 0.53 | −3.04 |
| Relative RMSE | 0.11 | 0.04 | 61.06 | 0.053 | 0.052 | 1.26 | 0.027 | 0.025 | 8.54 |
| CE90 | 0.71 | 0.83 | −16.57 | 0.376 | 0.384 | −2.26 | 0.36 | 0.38 | −5.02 |
| LE90 | 0.54 | 0.41 | 23.45 | 0.67 | 0.68 | −1.88 | 0.67 | 0.64 | 4.69 |
| Average difference | | | 16.9% | | | −1.1% | | | 1.29% |
| Process | 100 m W/o | 100 m W | 100 m Diff (%) | 75 m W/o | 75 m W | 75 m Diff (%) | 50 m W/o | 50 m W | 50 m Diff (%) |
|---|---|---|---|---|---|---|---|---|---|
| Absolute RMSE | 0.637 | 0.656 | −2.984 | 0.551 | 0.547 | 0.666 | 0.463 | 0.466 | −0.576 |
| Relative RMSE | 0.125 | 0.121 | 3.191 | 0.043 | 0.028 | 36.154 | 0.025 | 0.022 | 13.158 |
| CE90 | 0.659 | 0.696 | −5.539 | 0.633 | 0.453 | 28.436 | 0.345 | 0.356 | −3.188 |
| LE90 | 0.659 | 0.666 | −0.986 | 0.579 | 0.568 | 1.901 | 0.669 | 0.639 | 4.484 |
| Average difference | | | −1.6% | | | 16.8% | | | 3.47% |
| Process | Original | Only VGCPs | COLMAP | COLMAP + VGCPs |
|---|---|---|---|---|
| Absolute RMSE | 0.7227 | 0.5223 | 0.7450 | 0.7107 |
| Relative RMSE | 0.0967 | 0.0983 | 0.0967 | 0.0937 |
| CE90 | 0.7930 | 0.5905 | 0.7950 | 0.5720 |
| LE90 | 0.4440 | 0.2835 | 0.4390 | 0.2255 |
| Average difference | 0 | 21.9204% | −0.5541% | 20.4611% |
| Process | 100 m W/o | 100 m W | 100 m Diff (%) | 50 m W/o | 50 m W | 50 m Diff (%) |
|---|---|---|---|---|---|---|
| Absolute RMSE | 0.41 | 0.43 | −5.65 | 0.42 | 0.26 | 36.93 |
| Relative RMSE | 0.044 | 0.039 | 10.6 | 0.103 | 0.064 | 37.62 |
| CE90 | 0.42 | 0.21 | 49.88 | 0.199 | 0.039 | 80.4 |
| LE90 | 0.41 | 0.135 | 67.07 | 0.887 | 0.094 | 89.42 |
| Average difference | | | 30.47% | | | 61.08% |
| Article | Height | Camera | GCP | Sensors | Overlap | Metrics | Algorithm | RMSE |
|---|---|---|---|---|---|---|---|---|
| [47] | 120 m | senseFly eBee Plus | 34 | * | 75% | RMSE | Pix4D | 0.2–1.1 m |
| [48] | 80 m | DJI Spark | 0 | * | 90% | RMSE | Pix4D | 97 cm |
| [37] | 61 m | DJI Mavic 2 Pro | 9 | * | 75% | RMSE | Pix4D | 21–96 cm |
| [38] | 50 m | 4 different cameras | 7 | * | * | RMSE | Pix4D | 3–88 cm |
| [61] | 75–120 m | DJI Mavic 2 Pro | 9 | * | 80% | RMSE | Pix4D | 5.5–31.6 cm |
| [62] | 40–50 m | DJI Phantom 3 Advanced | 20 | * | 80% | RMSE | Pix4D | 11.3–18.2 cm |
| [39] | 50–100 m | DJI Phantom IV and DJI Phantom IV RTK | 0 | RTK | 80% | RMSE | Pix4D | 2.8–15.6 cm |
| [63] | 75 m | DJI Phantom 4 | 15–20 | * | 80% | RMSE | Pix4D | 8.1–14.6 cm |
| Ours | 50–100 m | DJI Mavic Pro Platinum | 4 | Binocular camera | 80% | RMSE | ODM | 3.9–9.3 cm |
| [64] | 70 m | DJI Mavic Pro Platinum | 21 | * | 70% | RMSE | Pix4D | 4.2–6.2 cm |
| [36] | 50 m | DJI Phantom 4 Pro | 0 | * | 85% | RMSE | Pix4D | * |