# How to Build a 2D and 3D Aerial Multispectral Map?—All Steps Deeply Explained


## Abstract


## 1. Introduction

## 2. Related Work

**Figure 3.** Comparison of the positional and height deviations between the results obtained from PMVS2 and Agisoft. The vectors in (**a**,**c**) represent the positional deviations of points; larger vectors indicate larger deviations. The circles in (**b**,**d**) describe the height deviations of points, where larger circles represent larger height deviations. The crosses illustrate reference markers. Adapted from [10].

## 3. Methodology

### 3.1. Data Load/Input

### 3.2. Structure from Motion

#### 3.2.1. Metadata Extraction

#### 3.2.2. Feature Detection

#### 3.2.3. Feature Matching

#### 3.2.4. Track Creation

#### 3.2.5. Reconstruction

#### 3.2.6. Undistort

### 3.3. Multi-View Stereo

#### 3.3.1. Stereo Pair Selection

#### 3.3.2. Depth Map Estimation

#### 3.3.3. Depth Maps Filtering

#### 3.3.4. Depth Map Fusion

### 3.4. Meshing Reconstruction

#### 3.4.1. Space Function

#### 3.4.2. Vector Definition

#### 3.4.3. Poisson Equation Solution

#### 3.4.4. Isosurface Extraction

### 3.5. Texturing Reconstruction

#### 3.5.1. Preprocessing

#### 3.5.2. View Selection

#### 3.5.3. Color Adjustment

### 3.6. Georeferencing

### 3.7. Orthomap

## 4. Experimental Results

### 4.1. RGB Product

### 4.2. Multispectral Product

#### Multi Band

### 4.3. Thermal Product

## 5. Discussion

## 6. Conclusions

## Author Contributions

## Funding

## Institutional Review Board Statement

## Informed Consent Statement

## Data Availability Statement

## Acknowledgments

## Conflicts of Interest

## References

- Roser, M.; Ritchie, H. Technological Progress. Published online at OurWorldInData.org, 2013. Available online: https://ourworldindata.org/technological-progress (accessed on 3 February 2021).
- Lowe, D.G. Object Recognition from Local Scale-Invariant Features. In Proceedings of the Seventh IEEE International Conference on Computer Vision, Kerkyra, Greece, 20–27 September 1999; Volume 2, pp. 1150–1157.
- Lowe, D.G. Distinctive Image Features from Scale-Invariant Keypoints. Int. J. Comput. Vis. **2004**, 60, 91–110.
- OpenDroneMap Authors. ODM—A Command Line Toolkit to Generate Maps, Point Clouds, 3D Models and DEMs from Drone, Balloon or Kite Images. OpenDroneMap/ODM GitHub Page, 2020. Available online: https://github.com/OpenDroneMap/ODM (accessed on 2 October 2020).
- Agisoft Metashape. Available online: https://www.agisoft.com/ (accessed on 20 October 2020).
- Pix4D. Available online: https://www.pix4d.com/ (accessed on 25 October 2020).
- Arc3D. Available online: https://homes.esat.kuleuven.be/~visit3d/webservice/v2/index.php (accessed on 30 October 2020).
- Bundler. Available online: http://www.cs.cornell.edu/~snavely/bundler/ (accessed on 1 November 2020).
- Casella, V.; Chiabrando, F.; Franzini, M.; Manzino, A.M. Accuracy Assessment of a UAV Block by Different Software Packages, Processing Schemes and Validation Strategies. ISPRS Int. J. Geo-Inf. **2020**, 9, 164.
- Neitzel, F.; Klonowski, J. Mobile 3D Mapping with a Low-Cost UAV System. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. **2011**, XXXVIII-1/C22, 39–44.
- Sona, G.; Pinto, L.; Pagliari, D.; Passoni, D.; Gini, R. Experimental Analysis of Different Software Packages for Orientation and Digital Surface Modelling from UAV Images. Earth Sci. Inform. **2014**, 7, 97–107.
- Wang, T.; Zeng, J.; Liu, D.-J.; Yang, L.-Y. Fast Stitching of DOM Based on Small UAV. J. Inf. Optim. Sci. **2017**, 38, 1211–1219.
- Karantanellis, E.; Arav, R.; Dille, A.; Lippl, S.; Marsy, G.; Torresani, L.; Elberink, S.O. Evaluating the Quality of Photogrammetric Point-Clouds in Challenging Geo-Environments–A Case Study in an Alpine Valley. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. **2020**, XLIII-B2-2020, 1099–1105.
- Guimarães, N.; Pádua, L.; Adão, T.; Hruška, J.; Peres, E.; Sousa, J.J. VisWebDrone: A Web Application for UAV Photogrammetry Based on Open-Source Software. ISPRS Int. J. Geo-Inf. **2020**, 9, 679.
- Schütz, M. Potree: Rendering Large Point Clouds in Web Browsers. Ph.D. Thesis, Institut für Computergraphik und Algorithmen, Wien, Austria, 2016.
- Schütz, M.; Ohrhallinger, S.; Wimmer, M. Fast Out-of-Core Octree Generation for Massive Point Clouds. Comput. Graph. Forum **2020**, 39, 155–167.
- Crickard, P. Leaflet.js Essentials; Packt Publishing: Birmingham, UK, 2014.
- Hrushchak, Y. Visual Localization for Iseauto Using Structure from Motion. MSc Thesis, Tallinn University of Technology, Tallinn, Estonia, 2019.
- Mapillary. OpenSfM. Available online: https://github.com/mapillary/OpenSfM#getting-started (accessed on 15 October 2020).
- Camera Distortion. Available online: http://gazebosim.org/tutorials?tut=camera_distortion (accessed on 12 October 2020).
- Frontera, F.; Smith, M.J.; Marsh, S. Preliminary Investigation into the Geometric Calibration of the MicaSense RedEdge-M Multispectral Camera. ISPRS-Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. **2020**, XLIII-B2-2020, 17–22.
- Bay, H.; Ess, A.; Tuytelaars, T.; Gool, L.V. Speeded-Up Robust Features (SURF). Comput. Vis. Image Underst. **2008**, 110, 346–359.
- Rublee, E.; Rabaud, V.; Konolige, K.; Bradski, G. ORB: An Efficient Alternative to SIFT or SURF. In Proceedings of the 2011 International Conference on Computer Vision, Barcelona, Spain, 6–13 November 2011; pp. 2564–2571.
- Koenderink, J.J. The Structure of Images. Biol. Cybern. **1984**, 50, 363–370.
- Lindeberg, T. Scale-Space Theory: A Basic Tool for Analysing Structures at Different Scales. J. Appl. Stat. **1994**, 21, 224–270.
- Mikolajczyk, K.; Schmid, C. An Affine Invariant Interest Point Detector. In Computer Vision—ECCV 2002, Proceedings of the 7th European Conference on Computer Vision, Copenhagen, Denmark, 28–31 May 2002; Lecture Notes in Computer Science; Heyden, A., Sparr, G., Nielsen, M., Johansen, P., Eds.; Springer: Berlin/Heidelberg, Germany, 2002; Volume 2350.
- Huang, M.; Mu, Z.; Zeng, H.; Huang, H. A Novel Approach for Interest Point Detection via Laplacian-of-Bilateral Filter. J. Sens. **2015**, 2015, 685154.
- Brown, M.; Lowe, D. Invariant Features from Interest Point Groups. In Proceedings of the British Machine Vision Conference; Marshall, D., Rosin, P.L., Eds.; BMVA Press: Dundee, UK, 2002; pp. 23.1–23.10.
- Harris, C.; Stephens, M. A Combined Corner and Edge Detector. In Proceedings of the 4th Alvey Vision Conference, Manchester, UK, 31 August–2 September 1988; pp. 147–151.
- Schmid, C.; Mohr, R. Local Grayvalue Invariants for Image Retrieval. IEEE Trans. Pattern Anal. Mach. Intell. **1997**, 19, 530–535.
- Edelman, S.; Intrator, N.; Poggio, T. Complex Cells and Object Recognition; NIPS*97, Visual Processing, 1997, Unpublished.
- Eichhorn, J.; Chapelle, O. Object Categorization with SVM: Kernels for Local Features; Max Planck Institute for Biological Cybernetics: Baden-Württemberg, Germany, 2004.
- Csurka, G.; Dance, C.; Fan, L.; Willamowski, J.; Bray, C. Visual Categorization with Bags of Keypoints. Workshop Stat. Learn. Comput. Vis. **2004**, 1, 1–2.
- O'Hara, S.; Draper, B. Introduction to the Bag of Features Paradigm for Image Classification and Retrieval. arXiv **2011**, arXiv:1101.3354.
- Muja, M.; Lowe, D.G. Fast Approximate Nearest Neighbors with Automatic Algorithm Configuration. In VISAPP 2009, Proceedings of the 4th International Conference on Computer Vision Theory and Applications, Lisbon, Portugal, 5–8 February 2009; Volume 1, pp. 331–340.
- Friedman, J.H.; Bentley, J.L.; Finkel, R.A. An Algorithm for Finding Best Matches in Logarithmic Expected Time. ACM Trans. Math. Softw. **1977**, 3, 209–226.
- Beis, J.S.; Lowe, D.G. Shape Indexing Using Approximate Nearest-Neighbour Search in High-Dimensional Spaces. In Proceedings of the 1997 Conference on Computer Vision and Pattern Recognition (CVPR '97), San Juan, PR, USA, 17–19 June 1997; pp. 1000–1006.
- Hough, P.V.C. Method and Means for Recognizing Complex Patterns. U.S. Patent 3,069,654, 18 December 1962.
- Ballard, D.H. Generalizing the Hough Transform to Detect Arbitrary Shapes. Pattern Recognit. **1981**, 13, 111–122.
- Grimson, W.E.L.; Lozano-Pérez, T.; Huttenlocher, D.P. Object Recognition by Computer: The Role of Geometric Constraints; MIT Press: Cambridge, MA, USA, 1990.
- Lowe, D.G. Local Feature View Clustering for 3D Object Recognition. In Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2001), Kauai, HI, USA, 8–14 December 2001; p. 1.
- Havlena, M.; Schindler, K. VocMatch: Efficient Multiview Correspondence for Structure from Motion. In Computer Vision—ECCV 2014, Proceedings of the 13th European Conference, Zurich, Switzerland, 6–12 September 2014; Lecture Notes in Computer Science; Fleet, D., Pajdla, T., Schiele, B., Tuytelaars, T., Eds.; Springer: Cham, Switzerland, 2014; Volume 8691.
- Levenberg, K. A Method for the Solution of Certain Non-Linear Problems in Least Squares. Q. Appl. Math. **1944**, 2, 164–168.
- Kataria, R.; DeGol, J.; Hoiem, D. Improving Structure from Motion with Reliable Resectioning. In Proceedings of the 2020 International Conference on 3D Vision (3DV), Fukuoka, Japan, 25–28 November 2020; pp. 41–50.
- Adorjan, M. OpenSfM: A Collaborative Structure-from-Motion System. Ph.D. Thesis, Vienna University of Technology, Vienna, Austria, 2016.
- Wu, F.; Wei, H.; Wang, X. Correction of Image Radial Distortion Based on Division Model. Opt. Eng. **2017**, 56, 013108.
- Furukawa, Y.; Hernández, C. Multi-View Stereo: A Tutorial. In Foundations and Trends in Computer Graphics and Vision; Now Publishers: Hanover, MA, USA, 2013; Volume 9, pp. 1–148.
- Furukawa, Y.; Ponce, J. Accurate, Dense, and Robust Multiview Stereopsis. IEEE Trans. Pattern Anal. Mach. Intell. **2010**, 32, 1362–1376.
- Seitz, S.M.; Dyer, C.R. Photorealistic Scene Reconstruction by Voxel Coloring. Int. J. Comput. Vis. **1999**, 35, 151–173.
- Vogiatzis, G.; Esteban, C.H.; Torr, P.H.; Cipolla, R. Multiview Stereo via Volumetric Graph-Cuts and Occlusion Robust Photo-Consistency. IEEE Trans. Pattern Anal. Mach. Intell. **2007**, 29, 2241–2246.
- Sinha, S.N.; Mordohai, P.; Pollefeys, M. Multi-View Stereo via Graph Cuts on the Dual of an Adaptive Tetrahedral Mesh. In Proceedings of the IEEE International Conference on Computer Vision, Rio de Janeiro, Brazil, 14–21 October 2007; pp. 1–8.
- Shen, S. Accurate Multiple View 3D Reconstruction Using Patch-Based Stereo for Large-Scale Scenes. IEEE Trans. Image Process. **2013**, 22, 1901–1914.
- Faugeras, O.; Keriven, R. Variational Principles, Surface Evolution, PDEs, Level Set Methods and the Stereo Problem. IEEE Trans. Image Process. **1998**, 7, 336–344.
- Esteban, C.H.; Schmitt, F. Silhouette and Stereo Fusion for 3D Object Modeling. Comput. Vis. Image Underst. **2004**, 96, 367–392.
- Hiep, V.H.; Keriven, R.; Labatut, P.; Pons, J.P. Towards High-Resolution Large-Scale Multi-View Stereo. In Proceedings of the 2009 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; pp. 1430–1437.
- Kolev, K.; Cremers, D. Integration of Multiview Stereo and Silhouettes via Convex Functionals on Convex Domains. In Computer Vision—ECCV 2008: 10th European Conference on Computer Vision, Marseille, France, 12–18 October 2008, Proceedings, Part I; Lecture Notes in Computer Science, Volume 5302; Forsyth, D., Torr, P., Zisserman, A., Eds.; Springer: Berlin/Heidelberg, Germany, 2008.
- Goesele, M.; Curless, B.; Seitz, S.M. Multi-View Stereo Revisited. In Proceedings of the 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'06), New York, NY, USA, 17–22 June 2006; pp. 2402–2409.
- Merrell, P.; Akbarzadeh, A.; Wang, L.; Mordohai, P.; Frahm, J.M.; Yang, R.; Nistér, D.; Pollefeys, M. Real-Time Visibility-Based Fusion of Depth Maps. In Proceedings of the 2007 IEEE 11th International Conference on Computer Vision, Rio de Janeiro, Brazil, 14–21 October 2007; pp. 1–8.
- Fuhrmann, S.; Goesele, M. Fusion of Depth Maps with Multiple Scales. ACM Trans. Graph. **2011**, 30, 148.
- Lhuillier, M.; Quan, L. A Quasi-Dense Approach to Surface Reconstruction from Uncalibrated Images. IEEE Trans. Pattern Anal. Mach. Intell. **2005**, 27, 418–433.
- Goesele, M.; Snavely, N.; Curless, B.; Hoppe, H.; Seitz, S.M. Multi-View Stereo for Community Photo Collections. In Proceedings of the 2007 IEEE 11th International Conference on Computer Vision, Rio de Janeiro, Brazil, 14–21 October 2007; pp. 1–8.
- Cernea, D. OpenMVS: Multi-View Stereo Reconstruction Library. 2020. Available online: https://github.com/cdcseacave/openMVS (accessed on 10 February 2021).
- Barnes, C.; Shechtman, E.; Finkelstein, A.; Goldman, D.B. PatchMatch: A Randomized Correspondence Algorithm for Structural Image Editing. ACM Trans. Graph. **2009**, 28, 24.
- Li, J.; Li, E.; Chen, Y.; Xu, L.; Zhang, Y. Bundled Depth-Map Merging for Multi-View Stereo. In Proceedings of the 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Francisco, CA, USA, 13–18 June 2010; pp. 2769–2776.
- Bleyer, M.; Rhemann, C.; Rother, C. PatchMatch Stereo—Stereo Matching with Slanted Support Windows. In Proceedings of the British Machine Vision Conference; Hoey, J., McKenna, S., Trucco, E., Eds.; BMVA Press: Dundee, UK, 2011; pp. 14.1–14.11.
- Jamwal, N.; Jindal, N.; Singh, K. A Survey on Depth Map Estimation Strategies. In Proceedings of the International Conference on Signal Processing (ICSP 2016), Vidisha, India, 7–9 November 2016; pp. 1–5.
- Pustelnik, N.; Laurent, C. Proximity Operator of a Sum of Functions; Application to Depth Map Estimation. IEEE Signal Process. Lett. **2017**, 24, 1827–1831.
- Choe, J.; Joo, K.; Imtiaz, T.; Kweon, I.S. Volumetric Propagation Network: Stereo-LiDAR Fusion for Long-Range Depth Estimation. IEEE Robot. Autom. Lett. **2021**, 6, 4672–4679.
- Zheng, E.; Dunn, E.; Raguram, R.; Frahm, J.M. Efficient and Scalable Depthmap Fusion. In Proceedings of the British Machine Vision Conference 2012, Surrey, UK, 3–7 September 2012.
- PDAL Contributors. PDAL Point Data Abstraction Library. 2018. Available online: https://doi.org/10.5281/zenodo.2556738 (accessed on 19 February 2021).
- Khatamian, A.; Arabnia, H.R. Survey on 3D Surface Reconstruction. J. Inf. Process. Syst. **2016**, 12, 338–357.
- Digne, J.; Cohen-Steiner, D.; Alliez, P.; De Goes, F.; Desbrun, M. Feature-Preserving Surface Reconstruction and Simplification from Defect-Laden Point Sets. J. Math. Imaging Vis. **2014**, 48, 369–382.
- DeCarlo, D.; Metaxas, D. Blended Deformable Models. IEEE Trans. Pattern Anal. Mach. Intell. **1996**, 18, 443–448.
- Terzopoulos, D.; Witkin, A.; Kass, M. Constraints on Deformable Models: Recovering 3D Shape and Nonrigid Motion. Artif. Intell. **1988**, 36, 91–123.
- Taubin, G. An Improved Algorithm for Algebraic Curve and Surface Fitting. In Proceedings of the 1993 4th International Conference on Computer Vision, Berlin, Germany, 11–14 May 1993; pp. 658–665.
- Szeliski, R.; Tonnesen, D.; Terzopoulos, D. Modeling Surfaces of Arbitrary Topology with Dynamic Particles. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, New York, NY, USA, 15–17 June 1993; pp. 82–87.
- Hoffmann, C.; Hopcroft, J. The Geometry of Projective Blending Surfaces. Artif. Intell. **1988**, 37, 357–376.
- Muraki, S. Volumetric Shape Description of Range Data Using "Blobby Model". In SIGGRAPH '91: Proceedings of the 18th Annual Conference on Computer Graphics and Interactive Techniques; Association for Computing Machinery: New York, NY, USA, 1991; pp. 227–235.
- Hanson, A.J. Hyperquadrics: Smoothly Deformable Shapes with Convex Polyhedral Bounds. Comput. Vis. Graph. Image Process. **1988**, 44, 191–210.
- Terzopoulos, D.; Metaxas, D. Dynamic 3D Models with Local and Global Deformations: Deformable Superquadrics. IEEE Trans. Pattern Anal. Mach. Intell. **1991**, 13, 703–714.
- Barr, A.H. Superquadrics and Angle-Preserving Transformations. IEEE Comput. Graph. Appl. **1981**, 1, 11–23.
- O'Donnell; Boult; Fang, X.-S.; Gupta. The Extruded Generalized Cylinder: A Deformable Model for Object Recovery. In Proceedings of the 1994 IEEE Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 21–23 June 1994; pp. 174–181.
- Marr, D.; Nishihara, H.K. Representation and Recognition of the Spatial Organization of Three-Dimensional Shapes. Proc. R. Soc. Lond. Ser. B Biol. Sci. **1978**, 200, 269–294.
- Hoppe, H.; DeRose, T.; Duchamp, T.; McDonald, J.; Stuetzle, W. Surface Reconstruction from Unorganized Point Clouds. In SIGGRAPH '92: Proceedings of the 19th Annual Conference on Computer Graphics and Interactive Techniques; Association for Computing Machinery: New York, NY, USA, 1992.
- Klein, R. Voronoi Diagrams and Delaunay Triangulations. In Encyclopedia of Algorithms; Kao, M.Y., Ed.; Springer: New York, NY, USA, 2016.
- Amenta, N.; Bern, M.; Kamvysselis, M. A New Voronoi-Based Surface Reconstruction Algorithm. In SIGGRAPH '98: Proceedings of the 25th Annual Conference on Computer Graphics and Interactive Techniques; Association for Computing Machinery: New York, NY, USA, 1998; pp. 415–421.
- Amenta, N.; Choi, S.; Kolluri, R.K. The Power Crust. In SMA '01: Proceedings of the Sixth ACM Symposium on Solid Modeling and Applications; Association for Computing Machinery: New York, NY, USA, 2001; pp. 249–266.
- Bernardini, F.; Mittleman, J.; Rushmeier, H.; Silva, C.; Taubin, G. The Ball-Pivoting Algorithm for Surface Reconstruction. IEEE Trans. Vis. Comput. Graph. **1999**, 5, 349–359.
- Gopi, M.; Krishnan, S.; Silva, C.T. Surface Reconstruction Based on Lower Dimensional Localized Delaunay Triangulation. Comput. Graph. Forum **2000**, 19, 467–478.
- Gopi, M.; Krishnan, S. A Fast and Efficient Projection-Based Approach for Surface Reconstruction. In Proceedings of the XV Brazilian Symposium on Computer Graphics and Image Processing, Fortaleza, Brazil, 10 October 2002.
- Dinh, H.Q.; Turk, G.; Slabaugh, G. Reconstructing Surfaces Using Anisotropic Basis Functions. In Proceedings of the Eighth IEEE International Conference on Computer Vision (ICCV 2001), Vancouver, BC, Canada, 7–14 July 2001; Volume 2, pp. 606–613.
- Huang, H.; Li, D.; Zhang, H.; Ascher, U.; Cohen-Or, D. Consolidation of Unorganized Point Clouds for Surface Reconstruction. ACM Trans. Graph. **2009**, 28, 1–7.
- Alexa, M.; Behr, J.; Cohen-Or, D.; Fleishman, S.; Levin, D.; Silva, C.T. Computing and Rendering Point Set Surfaces. IEEE Trans. Vis. Comput. Graph. **2003**, 9, 3–15.
- Öztireli, A.C.; Guennebaud, G.; Gross, M. Feature Preserving Point Set Surfaces Based on Non-Linear Kernel Regression. Comput. Graph. Forum **2009**, 28, 493–501.
- Kazhdan, M.; Bolitho, M.; Hoppe, H. Poisson Surface Reconstruction. In SGP '06: Proceedings of the Fourth Eurographics Symposium on Geometry Processing; Eurographics Association: Goslar, Germany, 2006; pp. 61–70.
- Li, X.; Wan, W.; Cheng, X.; Cui, B. An Improved Poisson Surface Reconstruction Algorithm. In Proceedings of the 2010 International Conference on Audio, Language and Image Processing, Shanghai, China, 23–25 November 2010; pp. 1134–1138.
- Bolitho, M.; Kazhdan, M.; Burns, R.; Hoppe, H. Parallel Poisson Surface Reconstruction. In Advances in Visual Computing: 5th International Symposium, ISVC 2009, Las Vegas, NV, USA, 30 November–2 December 2009, Proceedings, Part I; Lecture Notes in Computer Science, Volume 5875; Bebis, G., Boyle, R., Parvin, B., Koracin, D., Kuno, Y., Wang, J., Pajarola, R., Lindstrom, P., Hinkenjann, A., Encarnação, M.L., et al., Eds.; Springer: Berlin/Heidelberg, Germany, 2009.
- Kazhdan, M.; Maloney, A. PoissonRecon. Available online: https://github.com/mkazhdan/PoissonRecon (accessed on 2 March 2021).
- Lorensen, W.E.; Cline, H.E. Marching Cubes: A High Resolution 3D Surface Construction Algorithm. In SIGGRAPH '87: Proceedings of the 14th Annual Conference on Computer Graphics and Interactive Techniques; Association for Computing Machinery: New York, NY, USA, 1987; pp. 163–169.
- Fletcher, C.A.J. Computational Galerkin Methods; Springer Series in Computational Physics; Springer: Berlin/Heidelberg, Germany, 1984.
- Kazhdan, M.; Hoppe, H. Screened Poisson Surface Reconstruction. ACM Trans. Graph. **2013**, 32, 1–13.
- Kazhdan, M.; Hoppe, H. An Adaptive Multi-Grid Solver for Applications in Computer Graphics. Comput. Graph. Forum **2019**, 38, 138–150.
- Kazhdan, M.; Chuang, M.; Rusinkiewicz, S.; Hoppe, H. Poisson Surface Reconstruction with Envelope Constraints. Comput. Graph. Forum **2020**, 39, 173–182.
- Höllig, K.; Reif, U.; Wipper, J. Weighted Extended B-Spline Approximation of Dirichlet Problems. SIAM J. Numer. Anal. **2001**, 39, 442–462.
- Elizabeth. Full 3D vs. 2.5D Processing. Available online: https://support.dronesmadeeasy.com/hc/en-us/articles/207855366-Full-3D-vs-2-5D-Processing (accessed on 8 March 2021).
- Li, S.; Xiao, X.; Guo, B.; Zhang, L. A Novel OpenMVS-Based Texture Reconstruction Method Based on the Fully Automatic Plane Segmentation for 3D Mesh Models. Remote Sens. **2020**, 12, 3908.
- Fu, Y.; Yan, Q.; Yang, L.; Liao, J.; Xiao, C. Texture Mapping for 3D Reconstruction with RGB-D Sensor. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 4645–4653.
- Kehl, W.; Navab, N.; Ilic, S. Coloured Signed Distance Fields for Full 3D Object Reconstruction. In Proceedings of the British Machine Vision Conference (BMVC), 2014. Available online: http://www.bmva.org/bmvc/2014/papers/paper012/index.html (accessed on 20 March 2021).
- Callieri, M.; Cignoni, P.; Corsini, M.; Scopigno, R. Masked Photo Blending: Mapping Dense Photographic Data Set on High-Resolution Sampled 3D Models. Comput. Graph. **2008**, 32, 464–473.
- Bernardini, F.; Martin, I.M.; Rushmeier, H. High-Quality Texture Reconstruction from Multiple Scans. IEEE Trans. Vis. Comput. Graph. **2001**, 7, 318–332.
- Hoegner, L.; Stilla, U. Automatic 3D Reconstruction and Texture Extraction for 3D Building Models from Thermal Infrared Image Sequences. Quant. InfraRed Thermogr. **2016**.
- Bi, S.; Kalantari, N.K.; Ramamoorthi, R. Patch-Based Optimization for Image-Based Texture Mapping. ACM Trans. Graph. **2017**, 36, 1–11.
- Zhao, H.; Li, X.; Ge, H.; Lei, N.; Zhang, M.; Wang, X.; Gu, X. Conformal Mesh Parameterization Using Discrete Calabi Flow. Comput. Aided Geom. Des. **2018**, 63, 96–108.
- Li, S.; Luo, Z.; Zhen, M.; Yao, Y.; Shen, T.; Fang, T.; Quan, L. Cross-Atlas Convolution for Parameterization Invariant Learning on Textured Mesh Surface. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019; pp. 6136–6145.
- Liu, L.; Ye, C.; Ni, R.; Fu, X.-M. Progressive Parameterizations. ACM Trans. Graph. **2018**, 37, 1–12.
- Inzerillo, L.; Di Paola, F.; Alogna, Y. High Quality Texture Mapping Process Aimed at the Optimization of 3D Structured Light Models. ISPRS-Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. **2019**, XLII-2/W9, 389–396.
- Eisemann, M.; De Decker, B.; Magnor, M.; Bekaert, P.; De Aguiar, E.; Ahmed, N.; Theobalt, C.; Sellent, A. Floating Textures. Comput. Graph. Forum **2008**, 27, 409–418.
- Aganj, E.; Monasse, P.; Keriven, R. Multi-View Texturing of Imprecise Mesh. In Computer Vision—ACCV 2009; Lecture Notes in Computer Science, Volume 5995; Zha, H., Taniguchi, R., Maybank, S., Eds.; Springer: Berlin/Heidelberg, Germany, 2010.
- Lempitsky, V.; Ivanov, D. Seamless Mosaicing of Image-Based Texture Maps. In Proceedings of the 2007 IEEE Conference on Computer Vision and Pattern Recognition, Minneapolis, MN, USA, 17–22 June 2007; pp. 1–6.
- Gal, R.; Wexler, Y.; Ofek, E.; Hoppe, H.; Cohen-Or, D. Seamless Montage for Texturing Models. Comput. Graph. Forum **2010**, 29, 479–486.
- Allene, C.; Pons, J.P.; Keriven, R. Seamless Image-Based Texture Atlases Using Multi-Band Blending. In Proceedings of the 2008 19th International Conference on Pattern Recognition, Tampa, FL, USA, 8–11 December 2008; pp. 1–4.
- Yang, Y.; Zhang, Y. A High-Realistic Texture Mapping Algorithm Based on Image Sequences. In Proceedings of the 2018 26th International Conference on Geoinformatics, Kunming, China, 28–30 June 2018; pp. 1–8.
- Li, W.; Gong, H.; Yang, R. Fast Texture Mapping Adjustment via Local/Global Optimization. IEEE Trans. Vis. Comput. Graph. **2019**, 25, 2296–2303.
- Burt, P.J.; Adelson, E.H. A Multiresolution Spline with Application to Image Mosaics. ACM Trans. Graph. **1983**, 2, 217–236.
- Pérez, P.; Gangnet, M.; Blake, A. Poisson Image Editing. ACM Trans. Graph. **2003**, 22, 313–318.
- Waechter, M.; Moehrle, N.; Goesele, M. Let There Be Color! Large-Scale Texturing of 3D Reconstructions. In Computer Vision—ECCV 2014: 13th European Conference, Zurich, Switzerland, 6–12 September 2014, Proceedings, Part V; Lecture Notes in Computer Science, Volume 8693; Fleet, D., Pajdla, T., Schiele, B., Tuytelaars, T., Eds.; Springer: Cham, Switzerland, 2014.
- Waechter, M.; Moehrle, N.; Goesele, M. MVS-Texturing. 2014. Available online: https://github.com/nmoehrle/mvs-texturing (accessed on 30 March 2021).
- Geva, A. ColDet—3D Collision Detection Library. 2000. Available online: https://github.com/fougue/claudette (accessed on 28 March 2021).
- Sinha, S.N.; Steedly, D.; Szeliski, R.; Agrawala, M.; Pollefeys, M. Interactive 3D Architectural Modeling from Unordered Photo Collections. ACM Trans. Graph. **2008**, 27, 1–10.
- Grammatikopoulos, L.; Kalisperakis, I.; Karras, G.; Petsa, E. Automatic Multi-View Texture Mapping of 3D Surface Projections. 2012. Available online: https://www.researchgate.net/publication/228367713_Automatic_multi-view_texture_mapping_of_3D_surface_projections (accessed on 23 March 2021).
- Guennebaud, G.; Jacob, B.; Capricelli, T.; Garg, R.; Hertzberg, C.; Holoborodko, P.; Lenz, M.; Niesen, J.; Nuentsa, D.; Steiner, B.; et al. Eigen v3. 2010. Available online: http://eigen.tuxfamily.org (accessed on 3 April 2021).
- GDAL/OGR Contributors. GDAL/OGR Geospatial Data Abstraction Software Library; Open Source Geospatial Foundation, 2021. Available online: https://gdal.org (accessed on 12 April 2021).
- American Society of Civil Engineers. Glossary of the Mapping Sciences; ASCE: Reston, VA, USA, 1994; ISBN 9780784475706. Available online: https://books.google.pt/books?id=jPVxSDzVRP0C (accessed on 15 April 2021).
- Smith, G.S. Digital Orthophotography and GIS; Green Mountain GeoGraphics, Ltd. Available online: https://proceedings.esri.com/library/userconf/proc95/to150/p124.html (accessed on 20 April 2021).
- Ritter, N.; Brown, E. Libgeotiff. 2020. Available online: https://github.com/OSGeo/libgeotiff (accessed on 20 April 2021).
- Salvado, A.B.; Mendonça, R.; Lourenço, A.; Marques, F.; Matos-Carvalho, J.P.; Campos, L.M.; Barata, J. Semantic Navigation Mapping from Aerial Multispectral Imagery. In Proceedings of the 2019 IEEE 28th International Symposium on Industrial Electronics (ISIE), Vancouver, BC, Canada, 12–14 June 2019; pp. 1192–1197.
- Evangelidis, G.D.; Psarakis, E.Z. Parametric Image Alignment Using Enhanced Correlation Coefficient Maximization. IEEE Trans. Pattern Anal. Mach. Intell. **2008**, 30, 1858–1865.
- NationalDronesAu. FLIR Image Extractor CLI. Available online: https://github.com/nationaldronesau/FlirImageExtractor (accessed on 1 May 2021).
- Pino, M.; Matos-Carvalho, J.P.; Pedro, D.; Campos, L.M.; Seco, J.C. UAV Cloud Platform for Precision Farming. In Proceedings of the 2020 12th International Symposium on Communication Systems, Networks and Digital Signal Processing (CSNDSP), Porto, Portugal, 20–22 July 2020; pp. 1–6.
- Pedro, D.; Matos-Carvalho, J.P.; Azevedo, F.; Sacoto-Martins, R.; Bernardo, L.; Campos, L.; Fonseca, J.M.; Mora, A. FFAU—Framework for Fully Autonomous UAVs. Remote Sens. **2020**, 12, 3533.

**Figure 1.** Configurations used to evaluate the effects of GCP distribution and the GCP vs. CP ratio: (**a**) serves as a control group, as all markers are used as GCPs; (**b**) illustrates the second configuration, where the markers are split into two types, with 12 used as GCPs and 7 as CPs; (**c**) evaluates the results obtained by using fewer markers as GCPs. Adapted from [9].

**Figure 2.** Comparison of different programs. (**a**) Point cloud produced by Microsoft Photosynth; several gaps are present and features (cars) are barely visible. (**b**) Bundler's point cloud presents similar results to (**a**), with gaps as well as barely detectable features. (**c**) PMVS2 presents a clearer cloud than (**a**,**b**); the point cloud is almost complete except at the edges and the center, and the cars are clearly distinguishable. (**d**) Agisoft Metashape produced a complete point cloud with no gaps. Adapted from [10].

**Figure 5.** The left column represents the images resulting from Gaussian functions with different values of k. The right column displays the Difference-of-Gaussian images resulting from the subtraction of neighboring images. Adapted from [27].

**Figure 6.** Detection of feature points. The sample point is marked by a red circle, and its neighbors from the same and adjacent scales are marked with blue circles. Adapted from [27].
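The 26-neighbor comparison the figure describes can be sketched in a few lines of NumPy. This is an illustrative simplification of the SIFT scale-space extremum test, not the pipeline's actual implementation; the contrast threshold value is an assumption.

```python
import numpy as np

def is_scale_space_extremum(dog, s, y, x, contrast_thresh=0.03):
    """Check whether pixel (y, x) in DoG layer s is an extremum among its
    26 neighbors: 8 in the same layer plus 9 in each adjacent scale."""
    value = dog[s, y, x]
    if abs(value) < contrast_thresh:      # discard weak responses early
        return False
    patch = dog[s - 1:s + 2, y - 1:y + 2, x - 1:x + 2]  # 3x3x3 neighborhood
    return value >= patch.max() or value <= patch.min()

# Toy DoG stack: 3 scales of an 8x8 image with one strong peak.
dog = np.zeros((3, 8, 8))
dog[1, 4, 4] = 1.0                        # clear maximum across scales
print(is_scale_space_extremum(dog, 1, 4, 4))   # the peak is an extremum
print(is_scale_space_extremum(dog, 1, 4, 5))   # a flat neighbor is not
```

In the full SIFT pipeline, candidates passing this test are further refined by sub-pixel interpolation and the edge-response (principal curvature) check described for Figure 7.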

**Figure 7.** Detection of keypoints and further selection of invariant features: (**a**) the original image; (**b**) circles mark the keypoints identified by the Difference-of-Gaussian function, which are then filtered based on their contrast; (**c**) the keypoints remaining after contrast filtering; (**d**) the prevailing keypoints after further exclusion by the principal curvature ratio threshold. Adapted from [3].

**Figure 8.** Keypoint descriptor computation. In the left image, the arrows represent the gradient magnitudes and directions of the neighbors around a keypoint, and the circle displays the Gaussian weighting function. The right image shows a 4 × 4 sub-region keypoint descriptor resulting from the accumulation of orientations and magnitudes from the left histogram. The arrow length reflects the total gradient magnitude for that orientation within the region. Adapted from [32].

**Figure 9.** On the left, an image is given to the algorithm for feature extraction. The features detected in that image are then compared with the features of the right image. The lines illustrate the corresponding features between the two images. Adapted from [3].
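Correspondence search of this kind is commonly implemented as nearest-neighbor matching over descriptors with Lowe's ratio test. A minimal NumPy sketch follows; it is illustrative only, using random stand-in descriptors rather than real SIFT output, and the ratio value is the conventional default, not necessarily the one used in the article's pipeline.

```python
import numpy as np

def match_features(desc_a, desc_b, ratio=0.8):
    """Brute-force nearest-neighbor matching with Lowe's ratio test:
    a match is kept only when the closest descriptor in B is clearly
    better than the second closest."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)   # Euclidean distances
        j1, j2 = np.argsort(dists)[:2]               # two best candidates
        if dists[j1] < ratio * dists[j2]:
            matches.append((i, int(j1)))
    return matches

rng = np.random.default_rng(0)
desc_b = rng.random((50, 128))                       # 128-dim, SIFT-like
desc_a = desc_b[[3, 17, 42]] + rng.normal(0, 0.01, (3, 128))  # noisy copies
print(match_features(desc_a, desc_b))  # each row matches its source index
```

Production pipelines typically replace the brute-force loop with an approximate nearest-neighbor index (e.g. a k-d tree) for speed, but the acceptance criterion is the same.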

**Figure 11.** Comparison of an image before and after distortion removal. The red lines represent straight lines/structures in the real world. Adapted from [46].

**Figure 13.** Neighboring depth map test to remove redundancy. ${C}_{i}$ represents the camera of the depth map being filtered, and ${N}_{1-4}$ represent the neighboring cameras. The value $d(X,{C}_{i})$ is the depth of the pixel projected by ${C}_{i}$, and $\lambda (X,{N}_{1-4})$ are the depth values of the same pixel as projected by the neighboring cameras. The cameras ${N}_{1}$ and ${N}_{2}$ present depth values larger than the depth value of ${C}_{i}$, so the points projected from ${N}_{1}$ and ${N}_{2}$ are considered occluded and are removed from the ${C}_{i}$ depth map. The depth value of the point projected by ${N}_{4}$ is close to the depth value of ${C}_{i}$, so to avoid redundancy the point from ${N}_{4}$ is removed, as its projection can be classified as the point projected by ${C}_{i}$. Finally, ${N}_{3}$ presents a lower depth value than ${C}_{i}$, so its point is retained, as it satisfies neither of the conditions above. Adapted from [52].
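The occlusion/redundancy test the caption describes can be sketched per point: compare the reference depth from ${C}_{i}$ with the depths the neighboring maps assign to the same 3D point. The Python version below is a simplified illustration; the tolerance `eps` and the fusion-by-averaging step are assumptions for the sketch, not the exact rule from [52].

```python
def filter_depth_point(d_ref, neighbor_depths, eps=0.01):
    """Classify a depth-map point of camera C_i against neighboring maps.

    For each neighbor's depth of the same projected 3D point:
      - within +/- eps (relative) of d_ref -> redundant, fuse into C_i
      - larger than d_ref                  -> occluded in the neighbor's view
      - smaller than d_ref                 -> neighbor keeps its own point
    The reference point survives; redundant neighbor measurements are
    averaged into it to reduce noise.
    """
    kept = [d_ref]
    occluded, redundant = 0, 0
    for d in neighbor_depths:
        if abs(d - d_ref) <= eps * d_ref:
            kept.append(d)            # redundant: fuse with the C_i point
            redundant += 1
        elif d > d_ref:
            occluded += 1             # occluded behind the C_i surface
    fused = sum(kept) / len(kept)
    return fused, occluded, redundant

# One point seen by four neighbors, mirroring the N_1..N_4 cases above.
fused, occ, red = filter_depth_point(10.0, [12.5, 11.8, 9.97, 8.0])
print(round(fused, 3), occ, red)   # → 9.985 2 1
```

Running this over every pixel of every depth map, then back-projecting the surviving fused depths, yields the merged point cloud used for dense reconstruction.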

**Figure 14.** Dense reconstruction resulting from the MVS algorithm. Figure 14b represents the model after point filtering was applied.

**Figure 18.** Objects portrayed in the 3D model exhibit sharper edges, and the supports of vertical objects are better delineated. In contrast, the 2.5D model's edges are rounder, and the overhung areas of vertical objects do not present any detail.

**Figure 19.** Color adjustment method. The left image illustrates a mesh; the right image represents the variation in sample weight with the distance to ${v}_{1}$. Adapted from [126].

**Figure 20.** A 20-pixel-wide patch used in Poisson editing. The outer blue boundary (dark blue) and outer red boundary (dark red) are used as the boundary conditions of the Poisson equation. The values of the pixels on the blue patch are assigned by computing the mean pixel values of the image corresponding to the patch, while pixels on the red patch are given the same value as the corresponding pixel in the image. Adapted from [126].

**Figure 23.**The resulting orthomap of the surveyed area with approximately 0.2 m between two consecutive points of the point cloud.

**Figure 25.** Orthomap generated from RGB imagery. Two consecutive points of the point cloud are approximately 0.09 m apart.

**Figure 27.** Orthomap generated from a single band (in this case the blue band) obtained from the MicaSense camera. Two consecutive points are approximately 0.2 m apart.

**Figure 28.**Point cloud models obtained from the processing of images from a single band, in this case the blue band.

**Figure 29.** Multispectral orthomap generated from images obtained using the MicaSense camera. The distance between two consecutive points is approximately 0.2 m.

**Figure 30.**Multispectral point cloud models generated from the processing of imagery obtained from MicaSense camera.

**Figure 31.** Orthomap generated from thermal images. On average, two consecutive points are 0.01 m apart.

**Table 1.**Results obtained from the different software tested. Adapted from [9].

| Configuration | Software | Statistic | GCP x (m) | GCP y (m) | GCP z (m) | CP x (m) | CP y (m) | CP z (m) |
|---|---|---|---|---|---|---|---|---|
| Casella et al. [9] Config 1: GCP 18 | Metashape | mean | 0.000 | 0.000 | 0.000 | | | |
| | | std | 0.003 | 0.003 | 0.009 | | | |
| | | rmse | 0.003 | 0.003 | 0.009 | | | |
| | UAS Master | mean | 0.000 | 0.000 | 0.000 | | | |
| | | std | 0.002 | 0.002 | 0.008 | | | |
| | | rmse | 0.002 | 0.002 | 0.008 | | | |
| | Pix4D | mean | 0.000 | 0.000 | −0.001 | | | |
| | | std | 0.004 | 0.005 | 0.010 | | | |
| | | rmse | 0.004 | 0.005 | 0.010 | | | |
| | ContextCapture | mean | 0.000 | 0.000 | 0.000 | | | |
| | | std | 0.004 | 0.004 | 0.009 | | | |
| | | rmse | 0.004 | 0.004 | 0.009 | | | |
| | MicMac | mean | 0.000 | 0.000 | 0.000 | | | |
| | | std | 0.004 | 0.005 | 0.005 | | | |
| | | rmse | 0.004 | 0.005 | 0.005 | | | |
| Casella et al. [9] Config 2: GCP 11/CP 7 | Metashape | mean | 0.000 | 0.000 | 0.000 | −0.001 | −0.001 | −0.001 |
| | | std | 0.003 | 0.003 | 0.009 | 0.004 | 0.005 | 0.013 |
| | | rmse | 0.003 | 0.003 | 0.009 | 0.004 | 0.005 | 0.013 |
| | UAS Master | mean | 0.000 | 0.000 | 0.000 | 0.002 | −0.001 | 0.010 |
| | | std | 0.003 | 0.003 | 0.008 | 0.007 | 0.004 | 0.017 |
| | | rmse | 0.003 | 0.003 | 0.008 | 0.007 | 0.004 | 0.020 |
| | Pix4D | mean | 0.000 | 0.000 | −0.001 | 0.002 | 0.002 | 0.003 |
| | | std | 0.004 | 0.005 | 0.008 | 0.005 | 0.007 | 0.015 |
| | | rmse | 0.004 | 0.005 | 0.008 | 0.005 | 0.007 | 0.015 |
| | ContextCapture | mean | 0.001 | −0.001 | 0.000 | 0.001 | −0.002 | −0.003 |
| | | std | 0.005 | 0.004 | 0.009 | 0.008 | 0.007 | 0.012 |
| | | rmse | 0.005 | 0.004 | 0.009 | 0.008 | 0.007 | 0.012 |
| | MicMac | mean | 0.000 | −0.001 | −0.001 | 0.000 | 0.000 | −0.003 |
| | | std | 0.004 | 0.005 | 0.006 | 0.005 | 0.006 | 0.005 |
| | | rmse | 0.004 | 0.005 | 0.006 | 0.005 | 0.005 | 0.006 |
| Casella et al. [9] Config 3: GCP 6/CP 12 | Metashape | mean | 0.000 | 0.000 | 0.000 | −0.001 | −0.005 | −0.007 |
| | | std | 0.001 | 0.004 | 0.006 | 0.004 | 0.004 | 0.016 |
| | | rmse | 0.001 | 0.004 | 0.006 | 0.004 | 0.006 | 0.017 |
| | UAS Master | mean | 0.000 | −0.001 | 0.002 | 0.001 | 0.000 | 0.007 |
| | | std | 0.007 | 0.005 | 0.015 | 0.005 | 0.004 | 0.023 |
| | | rmse | 0.007 | 0.005 | 0.015 | 0.005 | 0.004 | 0.024 |
| | Pix4D | mean | 0.000 | 0.001 | −0.001 | −0.001 | 0.001 | 0.002 |
| | | std | 0.004 | 0.008 | 0.008 | 0.005 | 0.005 | 0.014 |
| | | rmse | 0.004 | 0.008 | 0.008 | 0.005 | 0.005 | 0.014 |
| | ContextCapture | mean | −0.003 | 0.002 | 0.011 | −0.007 | 0.000 | 0.020 |
| | | std | 0.007 | 0.005 | 0.027 | 0.009 | 0.007 | 0.037 |
| | | rmse | 0.008 | 0.005 | 0.029 | 0.011 | 0.007 | 0.042 |
| | MicMac | mean | 0.000 | 0.000 | −0.001 | −0.001 | −0.005 | −0.005 |
| | | std | 0.006 | 0.005 | 0.006 | 0.003 | 0.005 | 0.007 |
| | | rmse | 0.006 | 0.005 | 0.006 | 0.004 | 0.007 | 0.009 |

**Table 2.**CP RMSE values obtained for each software. Adapted from [11].

| Software | x (mm) | y (mm) | h (mm) |
|---|---|---|---|
| LPS | 48 | 47 | 90 |
| EyeDEA | 16 | 12 | 36 |
| PhotoModeler | 51 | 41 | 137 |
| Pix4D | 81 | 46 | 214 |
| Metashape | 74 | 61 | 83 |

**Table 3.** Errors resulting from the comparison of different programs. The first configurations of Casella et al. and Sona et al. are used as control configurations to calibrate the system; the following configurations test different numbers of GCPs, using checkpoints as evaluation points. It is worth noting that Metashape produces the best fit of the data model (lowest RMSE values) when given a more balanced GCP/CP split (such as Config 2 of Casella et al.), while MicMac, as an open-source program, should be mentioned for providing the lowest error in the Z component. In Guimarães et al., errors are quite similar between the two programs surveyed across the two study areas.

| Configuration | Software | Statistic | GCP X (m) | GCP Y (m) | GCP Z (m) | CP X (m) | CP Y (m) | CP Z (m) |
|---|---|---|---|---|---|---|---|---|
| Casella et al. [9] Config 1: GCP 18 | Metashape | mean | 0.000 | 0.000 | 0.000 | | | |
| | | std | 0.003 | 0.003 | 0.009 | | | |
| | | rmse | 0.003 | 0.003 | 0.009 | | | |
| | UAS Master | mean | 0.000 | 0.000 | 0.000 | | | |
| | | std | 0.002 | 0.002 | 0.008 | | | |
| | | rmse | 0.002 | 0.002 | 0.008 | | | |
| | Pix4D | mean | 0.000 | 0.000 | −0.001 | | | |
| | | std | 0.004 | 0.005 | 0.010 | | | |
| | | rmse | 0.004 | 0.005 | 0.010 | | | |
| | ContextCapture | mean | 0.000 | 0.000 | 0.000 | | | |
| | | std | 0.004 | 0.004 | 0.009 | | | |
| | | rmse | 0.004 | 0.004 | 0.009 | | | |
| | MicMac | mean | 0.000 | 0.000 | 0.000 | | | |
| | | std | 0.004 | 0.005 | 0.005 | | | |
| | | rmse | 0.004 | 0.005 | 0.005 | | | |
| Casella et al. [9] Config 2: GCP 11/CP 7 | Metashape | mean | 0.000 | 0.000 | 0.000 | −0.001 | −0.001 | −0.001 |
| | | std | 0.003 | 0.003 | 0.009 | 0.004 | 0.005 | 0.013 |
| | | rmse | 0.003 | 0.003 | 0.009 | 0.004 | 0.005 | 0.013 |
| | UAS Master | mean | 0.000 | 0.000 | 0.000 | 0.002 | −0.001 | 0.010 |
| | | std | 0.003 | 0.003 | 0.008 | 0.007 | 0.004 | 0.017 |
| | | rmse | 0.003 | 0.003 | 0.008 | 0.007 | 0.004 | 0.020 |
| | Pix4D | mean | 0.000 | 0.000 | −0.001 | 0.002 | 0.002 | 0.003 |
| | | std | 0.004 | 0.005 | 0.008 | 0.005 | 0.007 | 0.015 |
| | | rmse | 0.004 | 0.005 | 0.008 | 0.005 | 0.007 | 0.015 |
| | ContextCapture | mean | 0.001 | −0.001 | 0.000 | 0.001 | −0.002 | −0.003 |
| | | std | 0.005 | 0.004 | 0.009 | 0.008 | 0.007 | 0.012 |
| | | rmse | 0.005 | 0.004 | 0.009 | 0.008 | 0.007 | 0.012 |
| | MicMac | mean | 0.000 | −0.001 | −0.001 | 0.000 | 0.000 | −0.003 |
| | | std | 0.004 | 0.005 | 0.006 | 0.005 | 0.006 | 0.005 |
| | | rmse | 0.004 | 0.005 | 0.006 | 0.005 | 0.005 | 0.006 |
| Casella et al. [9] Config 3: GCP 6/CP 12 | Metashape | mean | 0.000 | 0.000 | 0.000 | −0.001 | −0.005 | −0.007 |
| | | std | 0.001 | 0.004 | 0.006 | 0.004 | 0.004 | 0.016 |
| | | rmse | 0.001 | 0.004 | 0.006 | 0.004 | 0.006 | 0.017 |
| | UAS Master | mean | 0.000 | −0.001 | 0.002 | 0.001 | 0.000 | 0.007 |
| | | std | 0.007 | 0.005 | 0.015 | 0.005 | 0.004 | 0.023 |
| | | rmse | 0.007 | 0.005 | 0.015 | 0.005 | 0.004 | 0.024 |
| | Pix4D | mean | 0.000 | 0.001 | −0.001 | −0.001 | 0.001 | 0.002 |
| | | std | 0.004 | 0.008 | 0.008 | 0.005 | 0.005 | 0.014 |
| | | rmse | 0.004 | 0.008 | 0.008 | 0.005 | 0.005 | 0.014 |
| | ContextCapture | mean | −0.003 | 0.002 | 0.011 | −0.007 | 0.000 | 0.020 |
| | | std | 0.007 | 0.005 | 0.027 | 0.009 | 0.007 | 0.037 |
| | | rmse | 0.008 | 0.005 | 0.029 | 0.011 | 0.007 | 0.042 |
| | MicMac | mean | 0.000 | 0.000 | −0.001 | −0.001 | −0.005 | −0.005 |
| | | std | 0.006 | 0.005 | 0.006 | 0.003 | 0.005 | 0.007 |
| | | rmse | 0.006 | 0.005 | 0.006 | 0.004 | 0.007 | 0.009 |
| Sona et al. [11] Config 1: GCP 15 | LPS | rmse | 0.109 | 0.089 | 0.215 | | | |
| | EyeDEA | rmse | 0.057 | 0.050 | 0.142 | | | |
| | PhotoModeler | rmse | 0.023 | 0.021 | 0.057 | | | |
| | Pix4D | rmse | 0.025 | 0.023 | 0.061 | | | |
| | Metashape | rmse | 0.008 | 0.007 | 0.020 | | | |
| Sona et al. [11] Config 2: GCP 5/CP 10 | LPS | rmse | 0.119 | 0.101 | 0.259 | 0.050 | 0.050 | 0.130 |
| | EyeDEA | rmse | 0.068 | 0.061 | 0.181 | 0.073 | 0.081 | 0.329 |
| | PhotoModeler | rmse | 0.026 | 0.023 | 0.066 | 0.054 | 0.050 | 0.114 |
| | Pix4D | rmse | 0.030 | 0.028 | 0.076 | 0.039 | 0.054 | 0.213 |
| | Metashape | rmse | 0.009 | 0.008 | 0.023 | 0.050 | 0.019 | 0.055 |
| Guimarães et al. [14] Study Area 1 | Pix4D | mean | 0.000 | 0.000 | 0.000 | | | |
| | | rmse | 0.009 | 0.007 | 0.019 | | | |
| | MicMac | mean | 0.002 | −0.003 | 0.018 | | | |
| | | rmse | 0.012 | 0.009 | 0.021 | | | |
| Guimarães et al. [14] Study Area 2 | Pix4D | mean | 0.004 | −0.006 | 0.004 | | | |
| | | rmse | 0.017 | 0.015 | 0.022 | | | |
| | MicMac | mean | −0.002 | 0.003 | 0.006 | | | |
| | | rmse | 0.019 | 0.016 | 0.023 | | | |
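The per-axis mean, std, and rmse figures reported in the tables above can be reproduced from checkpoint residuals. Below is a small illustrative helper; the residual values in the example are made up, not taken from any of the cited studies.

```python
import math

def residual_stats(errors):
    """Mean, population std, and RMSE of checkpoint residuals along one
    axis, as reported per x/y/z column in the accuracy tables."""
    n = len(errors)
    mean = sum(errors) / n
    std = math.sqrt(sum((e - mean) ** 2 for e in errors) / n)
    rmse = math.sqrt(sum(e ** 2 for e in errors) / n)
    return mean, std, rmse

# Toy residuals (metres) for one coordinate axis of a few checkpoints.
mean, std, rmse = residual_stats([0.004, -0.002, 0.003, -0.005])
print(f"mean={mean:.3f} std={std:.3f} rmse={rmse:.3f}")
```

Note that rmse² = mean² + std², which is why the rmse and std columns coincide whenever the mean error is zero, as in most of the rows above.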

| Dataset | n° Images | n° Features Detected | n° Image Pairs | n° Points | Point Density (per square unit) | Time Taken (s) |
|---|---|---|---|---|---|---|
| RGB | 353 | 634,825 | 1769 | 525,244 | 81.8143 | 473 |
| Single Band (Blue) | 36 | 153,703 | 173 | 1,054,350 | 99.3281 | 238 |
| Multi Band | 180 | 153,703 | 173 | 1,040,177 | 97.3424 | 650 |
| Thermal | 321 | 132,740 | 1429 | 3,660,046 | 30.5284 | 930 |

| Platform | RGB (Y/N) | Multispectral (Y/N) | Thermal (Y/N) |
|---|---|---|---|
| OpenDroneMap | Y | Y | N |
| Our Platform | Y | Y | Y |

Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Vong, A.; Matos-Carvalho, J.P.; Toffanin, P.; Pedro, D.; Azevedo, F.; Moutinho, F.; Garcia, N.C.; Mora, A.
How to Build a 2D and 3D Aerial Multispectral Map?—All Steps Deeply Explained. *Remote Sens.* **2021**, *13*, 3227.
https://doi.org/10.3390/rs13163227

**AMA Style**

Vong A, Matos-Carvalho JP, Toffanin P, Pedro D, Azevedo F, Moutinho F, Garcia NC, Mora A.
How to Build a 2D and 3D Aerial Multispectral Map?—All Steps Deeply Explained. *Remote Sensing*. 2021; 13(16):3227.
https://doi.org/10.3390/rs13163227

**Chicago/Turabian Style**

Vong, André, João P. Matos-Carvalho, Piero Toffanin, Dário Pedro, Fábio Azevedo, Filipe Moutinho, Nuno Cruz Garcia, and André Mora.
2021. "How to Build a 2D and 3D Aerial Multispectral Map?—All Steps Deeply Explained" *Remote Sensing* 13, no. 16: 3227.
https://doi.org/10.3390/rs13163227