Article

Advanced 3D Photogrammetric Surface Reconstruction of Extensive Objects by UAV Camera Image Acquisition

Michele Calì 1 and Rita Ambu 2
1 Department of Electric, Electronics and Computer Engineering, University of Catania, V.le A. Doria, 6, 95125 Catania, Italy
2 Department of Mechanical, Chemical and Materials Engineering, University of Cagliari, via Marengo 2, 09123 Cagliari, Italy
* Author to whom correspondence should be addressed.
Sensors 2018, 18(9), 2815; https://doi.org/10.3390/s18092815
Submission received: 10 August 2018 / Revised: 24 August 2018 / Accepted: 24 August 2018 / Published: 26 August 2018
(This article belongs to the Section Remote Sensors)

Abstract

This paper proposes a replicable methodology to enhance the accuracy of the photogrammetric reconstruction of large-scale objects based on the optimization of the procedures for Unmanned Aerial Vehicle (UAV) camera image acquisition. The relationships between the acquisition grid shapes, the acquisition grid geometric parameters (pitches, image rates, camera framing, flight heights), and the 3D photogrammetric surface reconstruction accuracy were studied. Ground Sampling Distance (GSD), the number of photos necessary to assure the desired overlap, and the surface reconstruction accuracy were related to grid shapes, image rate, and camera framing at different flight heights. The established relationships make it possible to choose the best combination of grid shapes and acquisition grid geometric parameters to obtain the desired accuracy for the required GSD. This outcome was assessed by means of a case study related to the ancient arched brick Bridge of the Saracens in Adrano (Sicily, Italy). The reconstruction of the three-dimensional surfaces of this structure, obtained by the efficient Structure-From-Motion (SfM) algorithms of the commercial software Pix4Dmapper, supported the study by validating it with experimental data. A comparison between the surface reconstructions obtained with different acquisition grids at different flight heights and the measurements obtained with a 3D terrestrial laser scanner and total station-theodolites allowed the accuracy to be evaluated in terms of Euclidean distances.

1. Introduction

Automated photogrammetry using UAV image acquisition for digital surface reconstruction has become more widespread in recent years. This can be attributed to the enhanced performance of UAVs [1,2,3] and to the development of different computer vision algorithms [4] and computational techniques, which have greatly reduced processing times and improved the quality of the reconstruction [5,6,7].
These techniques have been used for different purposes, including shape detection [8,9] and the 3D surface reconstruction of large-scale elements requiring a high number of photos, such as natural environments and geographical configurations [10,11,12], buildings and urban textures [13,14,15], archaeological sites [16,17], and industrial installations [18,19]. In many of these applications, there is a pressing need to reconstruct 3D structures from the 2D images collected by a UAV camera quickly and with great accuracy.
When a UAV is used simply as a platform to acquire images along a pre-programmed grid and GPS-enabled trajectory at a predetermined frame rate [20], it is likely that the acquisition of more images, and/or their integration with other images, will be necessary to obtain the required accuracy in the three-dimensional reconstruction [21]. When both the dimensions of the object being reconstructed and the required accuracy increase, the computational time of the algorithms also increases significantly, thus limiting their use in high-speed reconstruction applications.
Researchers have proposed improved algorithms for different situations based on early SfM algorithms [22]. A variety of SfM strategies have emerged, including incremental [23,24], hierarchical [25], and global [26,27,28] approaches. To date, however, the procedures developed in the computer vision community have focused more on the speed of the implemented procedures and on the success rate of image orientation.
To the authors' knowledge, no scientific work has so far presented systematic data on how the results of a photogrammetric reconstruction improve as a function of different acquisition grid shapes and acquisition grid geometric parameters (pitches, image rates, camera framing, flight heights).
The goal of the research was to determine the relationship between acquisition grid shapes, acquisition grid geometric parameters, and the accuracy obtained in the 3D reconstruction of large objects’ surfaces. The parameters investigated were therefore the grid geometric parameters: pitches, image rates, camera framing, and flight heights. The three simplest grid shapes (rectangular, elliptical, and cylindrical) were compared with each other by studying, for each of them, the influence of pitches, image rates, camera framing, and flight heights on the accuracy of the surface reconstruction. The study highlighted how much the Ground Sampling Distance (GSD), the number of photos necessary to assure the desired overlap, and the surface reconstruction accuracy were related to grid shapes, image rate, and camera framing at different flight heights (15 m, 20 m, 30 m, 40 m, and 50 m).
The case study of the Bridge of the Saracens in Adrano (Sicily), a valuable example of Roman architecture characterized by an elongated longitudinal shape, geometric singularities, multiple and variously inclined features, and different lighting levels, provided relevant results on the examined relationships and supported the methodology by validating it with experimental data.
Images, obtained by a CMOS 12 MP sensor, were processed in the Pix4Dmapper version 3 commercial software to generate dense point clouds. Function Based Method (FBM) [29] and Area Based Matching (ABM) [30,31] algorithms were employed to evaluate the degree of overlap in the acquired images. Cutting-edge matching techniques with binary descriptors were used to match keypoints quickly and accurately.
The quality of the Digital Surface Models (DSM) obtained from UAV image acquisitions was evaluated by comparing the photogrammetric reconstructions to the data acquired by a 3D laser scanner and total station-theodolites.
The remainder of the paper is organized into five sections. Section 2 describes the materials and methods used. Section 3 illustrates the image acquisition process and the surface reconstruction models. Section 4 presents the evaluation of the reconstruction accuracy. Section 5 compares and discusses, by means of a statistical synthesis of the surface reconstruction data, the relationship between acquisition grid shapes, acquisition grid parameters, and accuracy in 3D reconstruction. Conclusions are drawn in Section 6.

2. Materials and Methods

The fieldwork involved the following steps: acquisition, reconstruction, and analysis sessions. Initially, the aerial photo shooting was performed with different grid geometric parameters and configurations. For the aerial photogrammetric acquisition, an amateur UAV hexacopter with LiPo 4S cells (4000 mAh, 14.4 V, 576 Wh—over 20 min autonomy) was used (Figure 1a,b).
The control board was an Ardupilot APM 2.6 with Arducopter 3.1.5 flying software and a PC Mission Planner ground station. The board was equipped with a gyroscope, an accelerometer, a magnetometer, and a barometer, which supplied the processor with 3D data concerning position and acceleration, and with a HobbyKing F30 electronic speed controller (ESC) (HK, Hong Kong, China) for the brushless motors.
An action camera GoPro Hero 4 Black Edition (Woodman Labs, San Francisco, CA, USA) (Figure 1c) was positioned beneath the drone on a gimbal support. This support allowed the camera to rotate about three axes (pitch, roll, yaw), controlled by a digital board and manoeuvred by brushless motors which damped any drone movement, keeping the camera steady. The camera had a CMOS 12 MP 1/2.9″ solid-state sensor which could also sense electromagnetic data, so that the radiometric data in the image pixels could be checked. A video transmitter provided real-time shots on a 7″ LCD monitor incorporated into the radio control unit (Figure 1d). The UAV platform and camera characteristics are summarized in Table 1 and Table 2, respectively.
GPS LEA6H (uBlox, Thalwil, Switzerland) with Ground Station PC Mission Planner software (version 1) converted a discrete number of points into geo-referenced coordinates (GRC) by generating waypoint acquisition grids. In this way, it was possible to define different configurations of 3D grids. From each GRC, an image was acquired [32]. Figure 2a shows an example of a rectangular waypoint acquisition grid at a flight height of 40 m. The numbers in the figure indicate the points in which the drone started, finished, and changed its trajectory. Figure 2b shows, instead, an elliptical waypoint acquisition grid at a flight height ( h v ) of 30 m.
The commercial software Pix4Dmapper was used to create a polygonal mesh of the surface from the image collections captured by the UAV. The method employed for mosaicking the images was Bundle Block Adjustment (BBA) with SfM algorithms [33]. Each image has six exterior orientation parameters, the camera position and its roll, pitch, and yaw angles, which map each scene point (x, y, z) to the corresponding image point (x′, y′). The parameters of these equations were as follows: (xc, yc, zc) was the camera position and (mij) was the 3 × 3 rotation matrix defined by the roll, pitch, and yaw angles of the camera. There were also fixed interior parameters: (xp, yp) was the principal point, whereas fx and fy were the focal length ratios. The principal point and focal lengths, along with the radial lens distortion, were determined by a calibration procedure before each acquisition flight.
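For reference, the collinearity relations implied by these parameters can be written in their standard form (a reconstruction of the usual expression, since the equations themselves are not reproduced here; the exact internal notation of Pix4Dmapper may differ):
x' = x_p - f_x \frac{m_{11}(x - x_c) + m_{12}(y - y_c) + m_{13}(z - z_c)}{m_{31}(x - x_c) + m_{32}(y - y_c) + m_{33}(z - z_c)}
y' = y_p - f_y \frac{m_{21}(x - x_c) + m_{22}(y - y_c) + m_{23}(z - z_c)}{m_{31}(x - x_c) + m_{32}(y - y_c) + m_{33}(z - z_c)}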
By means of the described method, 20 different polygonal meshes were generated, corresponding to 20 different waypoint acquisition grids, by combining three different acquisition grid shapes (rectangular, elliptical, and cylindrical), five flight heights (15 m, 20 m, 30 m, 40 m, and 50 m), and different camera framings. The relationship between acquisition grid shapes and acquisition grid geometric parameters, as well as GSD and 3D reconstruction accuracy at a given image overlap value, was determined.
For rectangular camera grid acquisitions, different camera framing orientations were used, obtaining two different image collection sets. The first set of images was acquired by vertically oriented aerial photo shooting, which will hereinafter be referred to as the “Rectangular Grid with Vertical Camera” (RGVC). The RGVC was capable of generating orthomosaic image collections. The second image collection set was acquired by a camera oriented differently for each image using a gimbal and action camera, which will hereinafter be referred to as the “Rectangular Grid with Oscillating Camera” (RGOC).
The image collection sets in elliptical and cylindrical camera grid acquisitions were made exclusively by a camera oriented differently for each image, and will hereinafter be referred to as the “Elliptical Grid” (EG) referring to the first acquisition grid, and the “Cylindrical Grid” (CG) referring to the second acquisition grid.
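To illustrate how such waypoint grids can be generated from the geometric parameters discussed in the next section, the following Python sketch builds an elliptical grid of waypoints with an approximately constant tangential pitch. It is a minimal example written for this article, not the authors' flight-planning code, and the semi-axis values are only roughly consistent with the EG at 30 m reported in Table 4.

```python
import numpy as np

def elliptical_waypoints(a, b, h_v, p_tan):
    """Waypoints (local x, y, z in metres) on an ellipse with semi-axes a and b,
    flown at height h_v and spaced by an approximately constant tangential pitch p_tan."""
    t = np.linspace(0.0, 2.0 * np.pi, 10000)
    x, y = a * np.cos(t), b * np.sin(t)
    ds = np.hypot(np.diff(x), np.diff(y))        # arc-length increments
    s = np.concatenate(([0.0], np.cumsum(ds)))   # cumulative arc length along the ellipse
    n_points = int(s[-1] // p_tan)               # how many waypoints fit the perimeter
    idx = np.searchsorted(s, np.arange(n_points) * p_tan)
    return np.column_stack((x[idx], y[idx], np.full(n_points, h_v)))

# Illustrative values only (close to the EG at 30 m in Table 4: ptan = 11.7 m, 22 waypoints).
waypoints = elliptical_waypoints(a=55.0, b=28.0, h_v=30.0, p_tan=11.7)
print(len(waypoints), "waypoints generated")
```

Each generated point would then be converted into geo-referenced coordinates (GRC) by the ground station software, as described above.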
In the case study of the bridge surface reconstruction, the accuracy of the obtained twenty surface reconstructions was evaluated with the data acquired using a 3D laser scanning device Konica Minolta 9v-I (Konica Minolta, Ramsey, NJ, USA), precision ≤2 mm (Figure 1e), and a total station Geodimeter 480 (Geoglobex, Monza, Italy), distance accuracy ≤3 mm (Figure 1f).

3. Image Acquisition and Surface Reconstructions

3.1. Image Acquisition

In the case study of the bridge surface reconstruction, the aerial image acquisition phases are illustrated as follows. Aerial image acquisitions were made with RGVC, RGOC, EG, and CG at five flight heights: 15 m, 20 m, 30 m, 40 m, and 50 m. The acquisition pitches p (m) in the grids forming the waypoints (Figure 3) were a function of the required Ground Sampling Distance (GSD) (cm/pixel) and overlap. They were calculated according to the following Equation (1) [34]:
p = \left( \frac{Im_{Wp} \times GSD}{100} \right) \times (1 - overlap)    (1)
where Im_Wp was the image width in pixels. When the acquired images had different length and width (ImL and ImW), it was necessary to define two different pitches in order to keep the overlap constant along the two orthogonal directions of the acquisition grids (Width Overlap and Length Overlap). In the RGVC, taking the longitudinal extension as the reference and positioning the grids as shown in Figure 2a, it was possible to determine a longitudinal pitch (pl) and, orthogonal to it, a transversal pitch (pt), which ensured a constant overlap value of 66% (Figure 3). Figure 3 highlights three areas, shown in blue, orange, and green, corresponding to different acquired images and to the zone of their overlap, which defines the length and width of overlapping; the dots represent the subsequent positions of the UAV from which the three images were acquired. In the present study, in which the image dimensions in pixels (Im_Lp and Im_Wp) were 4000 pixels and 3000 pixels, respectively, the following acquisition pitch values were established:
p_l = \left( \frac{Im_{Lp} \times GSD}{100} \right) \times (1 - overlap) = \left( \frac{4000 \times GSD}{100} \right) \times 33.3\%    (2)
p_t = \left( \frac{Im_{Wp} \times GSD}{100} \right) \times (1 - overlap) = \left( \frac{3000 \times GSD}{100} \right) \times 33.3\%    (3)
where GSD (cm/pixel) was calculated according to the following Equation (4) as a function of hv:
GSD = \frac{h_v \times S_w \times 100}{F_l \times Im_{Wp}}    (4)
In this equation, Sw was the sensor width and Fl was the focal length (see Table 2). Therefore, in RGVC, an exactly constant overlap value of 66% along the two orthogonal directions was obtained (Table 3a).
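As a quick numeric check of Equations (1)–(4), the short Python sketch below computes the acquisition pitches for a given GSD and flight height. It is an illustration written for this article, using the nominal camera constants of Table 2; the GSD values in Table 3 were obtained with the calibrated camera model and may differ slightly from the nominal estimate of Equation (4).

```python
# Numeric sketch of Equations (1)-(4) (illustrative, not the authors' code).
# Camera constants from Table 2; image dimensions in pixels from Section 3.1.
SENSOR_WIDTH_MM = 16.8        # S_w
FOCAL_LENGTH_MM = 15.5        # F_l
IMG_WIDTH_PX = 3000           # Im_Wp
IMG_LENGTH_PX = 4000          # Im_Lp

def gsd_cm_per_px(h_v_m, img_width_px=IMG_WIDTH_PX):
    """Equation (4): nominal Ground Sampling Distance (cm/pixel) at flight height h_v (m)."""
    return (h_v_m * SENSOR_WIDTH_MM * 100.0) / (FOCAL_LENGTH_MM * img_width_px)

def pitch_m(img_size_px, gsd_cm_per_pixel, overlap=0.66):
    """Equation (1): acquisition pitch (m) giving the required overlap."""
    return (img_size_px * gsd_cm_per_pixel / 100.0) * (1.0 - overlap)

# Using the GSD reported in Table 3a for h_v = 40 m (1.26 cm/pixel):
gsd_40 = 1.26
print("pt =", round(pitch_m(IMG_WIDTH_PX, gsd_40), 1), "m")    # ~12.9 m, as in Table 3a
print("pl =", round(pitch_m(IMG_LENGTH_PX, gsd_40), 1), "m")   # ~17.1 m, close to Table 3a
```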
In RGOC, EG, and CG, the grid shapes and acquisition pitches p were established to ensure overlap values close to 66% along the two orthogonal directions. These overlap values close to 66% in RGVC, RGOC, EG, and CG ensured that each keypoint was captured in at least three different shots (Figure 4), thus avoiding the risk of insufficient feature overlap across images in post-flight image processing, as well as the subsequent risk of failure in the 3D reconstruction. A similar requirement is also reported in other research projects [35] to ensure full coverage in aerial laser scanning.
In the RGOC, an overlap value of 66% was imposed along the transversal direction, determining the values of the transversal pitches (pt). Using longitudinal pitches (pl) equal to those of the RGVC, it was verified that the overlap obtained along the longitudinal direction was never less than 66% (Table 3b). The gimbal rotation ϑr around the horizontal axis parallel to the longitudinal direction was synchronized with the transverse distance x (m) of the UAV platform from the center line and with the flight height hv (m) according to Equation (5):
\vartheta_r = \tan^{-1} \left( \frac{x - w_o/2}{h_v - h_o} \right)    (5)
where ho (m) stands for the object height and wo (m) stands for the object width (Figure 5).
In the EG (Figure 6), an overlap value of 66% was imposed along the tangential direction (Tangential Overlap), thus obtaining the value of the tangential pitch (ptan), and it was verified that the overlap obtained along the transversal direction (Transversal Overlap), orthogonal to the first, was never less than 66% (Table 3c). In the EG, the gimbal rotation ϑe around the axis tangential to the elliptic orbit was synchronized with the minimum distance ρ from the acquired object according to the following Equation (6):
\vartheta_e = \tan^{-1} \left( \frac{\rho}{h_v - h_o} \right)    (6)
Equation (6) was also used to determine the lengths of the ellipse axes at the various flight heights. In particular, the axis lengths were chosen to obtain a gimbal rotation ϑe around the tangential axis close to 30° at point A and close to 40° at point L. These values allowed both overlap values (tangential and transversal) to be kept close to 66% at the same time.
In CG, the gimbal rotation ϑc around the horizontal axis parallel to the longitudinal axis was synchronized with the radial direction of the cylindrical trajectory according to the following Equation (7) (Figure 7):
\vartheta_c = \tan^{-1} \left( \frac{x - w_o/2}{h_v - h_o/2} \right)    (7)
By replacing the value hv in Equation (1) with the minimum distance from the bridge (dmin), pitch values that ensured overlap values close to 66% were determined. Figure 8 shows the waypoint grid at hv = 40 m. The values of GSD, overlaps, and pitches are shown in Table 3d.
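The following Python sketch shows how the gimbal rotations of Equations (5)–(7) could be evaluated during flight planning. It is an illustration written for this article under the geometric interpretation of Figures 5 and 7 (offsets measured from the object edge); the example object dimensions are assumptions, not the surveyed dimensions of the bridge.

```python
import math

def gimbal_angle_deg(x, h_v, w_o, h_o, grid="RGOC"):
    """Equations (5) and (7): gimbal rotation (degrees) given the UAV transverse
    offset x from the centre line, flight height h_v, object width w_o and
    object height h_o (all in metres)."""
    if grid == "RGOC":                                        # Equation (5)
        return math.degrees(math.atan2(x - w_o / 2.0, h_v - h_o))
    if grid == "CG":                                          # Equation (7)
        return math.degrees(math.atan2(x - w_o / 2.0, h_v - h_o / 2.0))
    raise ValueError("grid must be 'RGOC' or 'CG'")

def gimbal_angle_eg_deg(rho, h_v, h_o):
    """Equation (6) for the elliptical grid: rho is the minimum distance (m)
    from the acquired object."""
    return math.degrees(math.atan2(rho, h_v - h_o))

# Illustrative object dimensions (assumed, not surveyed values).
print(round(gimbal_angle_deg(x=20.0, h_v=15.0, w_o=10.0, h_o=8.0, grid="RGOC"), 1))
```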
Table 4 shows the geometric parameters (grid pitches, width, and length) of RGVC, RGOC, EG, and CG. The number of photos (n. GRC) required in each acquisition set to obtain overlap values close to 66% is also reported.
Figure 9 shows the EG at 30 m and the RGVC at 50 m.

3.2. Surface Reconstruction

Once the image collections for the reconstruction were acquired, the system calibrated the cameras almost automatically, based on the exchangeable image file (EXIF) information, as also reported in [7], and on the fisheye lens camera model (Pix4D) found in the images; the GCPs were then used to align the images in 3D space, and a complete, single mesh was produced using SfM algorithms. The reconstruction of the surface is based on the following steps:
  • The algorithm searches for matching points by analyzing all the acquired images. Matching techniques with binary descriptors, together with FBM and ABM, are employed to identify features irrespective of their position, scale, and rotation (see the sketch after this list). Studies on the performance of such feature descriptors are given in [36].
  • Matching points, as well as approximate values of the image position and orientation provided by Ardupilot APM 2.6, are used in BBA (Bundle Block Adjustment) [37,38] to reconstruct the exact position and orientation of the camera for every acquired image.
  • Based on this reconstruction, the matching points are verified and their 3D coordinates are calculated. The geo-reference system used is uBlox LEA6H with Groundstation PC Mission Planner software (version 1), based on GPS measurements from the UAV Ardupilot APM 2.6 during the flight.
  • Those 3D points are interpolated to form a triangulated irregular network in order to obtain a polygonal mesh of surfaces (Figure 10a,b) [39,40]. At this stage, a dense 3D model can increase the spatial resolution of the triangle structure [41].
  • The polygonal mesh of surfaces is used to project every image pixel, obtaining a mapped texture surface (Figure 10d), and to calculate the georeferenced orthomosaic [42].
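The matching step is performed internally by the commercial software; purely as an illustration of keypoint matching with binary descriptors of the kind mentioned in the first step above, a minimal OpenCV sketch (an assumed example, not the software's actual implementation) could look as follows:

```python
import cv2

# Load two overlapping aerial images in grayscale (paths are placeholders).
img1 = cv2.imread("frame_001.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("frame_002.jpg", cv2.IMREAD_GRAYSCALE)

# ORB provides binary descriptors, which are matched with the Hamming distance.
orb = cv2.ORB_create(nfeatures=5000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

print(f"{len(matches)} candidate correspondences between the two frames")
```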
In detail, from the 3D feature points calculated by the SfM, the mesh data were obtained by using Delaunay triangulation. The mesh was then used as an outline of the object and projected onto the plane of the images to obtain the estimated depth maps. These maps were optimized and corrected using a patch-based pixel matching algorithm. Finally, dense point cloud data were obtained by fusing these depth maps. From the resulting accurate cloud points, a 3D polygonal mesh was obtained (step 4 of the reconstruction steps above). The polygonal mesh obtained can be easily transformed, through open source algorithms, into a NURBS surface (Figure 10c) for different applications. Figure 10 shows some steps of the described procedure in the case study of the bridge surface reconstruction (CG at hv = 15 m).
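As a simplified illustration of the triangulated irregular network (TIN) step, the sketch below triangulates the horizontal projection of a point cloud with a Delaunay triangulation; it assumes a roughly 2.5D cloud and uses placeholder data, whereas the meshing performed by the commercial software is considerably more sophisticated.

```python
import numpy as np
from scipy.spatial import Delaunay

# points: dense cloud as an (N, 3) array of x, y, z coordinates (placeholder data).
points = np.random.rand(1000, 3) * [100.0, 20.0, 10.0]

# Triangulate the horizontal projection; each triangle keeps the z of its vertices,
# yielding a triangulated irregular network (TIN) over the surveyed surface.
tri = Delaunay(points[:, :2])
faces = tri.simplices              # (M, 3) indices into `points`

print(f"TIN with {len(points)} vertices and {len(faces)} triangular faces")
```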
The flowchart relative to the reconstruction algorithm is summarized in Figure 11. Within the blocks highlighted in yellow, the equivalent open source algorithm is shown.
Reference [43] provides the information needed to implement the reconstruction algorithms also in open source software, obtaining results similar to those of the commercial software. The same paper reports the feature matching algorithm written in pseudocode.

4. Reconstruction Accuracy Evaluation

The accuracy evaluation of the 20 reconstructions in the case study of the bridge was performed by measuring two reference shapes: the pounding upper surface (pus) and the south side of the north-east surveyed arch (arc) of the bridge. Using the terrestrial laser scanner and the total station-theodolites, the coordinates of 29 keypoints on the pounding upper surface (Figure 12a) and the coordinates of 11 keypoints on the south side of the north-east surveyed arch (Figure 12b) were acquired.
The choice of these two shapes, lying in two mutually orthogonal positions, allowed a proper accuracy evaluation of the reconstructions along all three dimensions. These shapes had, indeed, a favorable spatial distribution, which allowed an accurate validation of the length, width, and height and an evaluation of the reconstructed profile of the arch. Six of these points (outlined with green markers in Figure 10a) were those used as GCPs.
The partial scans were acquired by positioning the scanner’s sensor plane parallel to the upper surface. The comparison was carried out using Meshlab [44] and CloudCompare [45,46,47,48] open source software. Moreover, Meshlab was used to align the partial scans with the 3D model produced by Pix4Dmapper. Instead, the CloudCompare software was employed to estimate the surface deviation between the mesh obtained with Pix4Dmapper and the point cloud obtained with 3D laser scanning.
The ICP algorithm, implemented in Meshlab, was used to align each partial view obtained with the 3D laser scanning to the Pix4Dmapper triangular mesh model. In order to compare the two data types, the cloud-to-mesh distance function offered by CloudCompare was selected, as it was considered more robust to local noise. The cloud-to-mesh distance function computed the distance between each vertex of the point cloud and the nearest triangle of the mesh surface: when the orthogonal projection of the vertex lay inside the surface defined by a triangle, the distance between the vertex and its point of intersection on that surface was calculated. Accuracy (acc) was measured by evaluating these distances between the obtained points and the homologous points in the 3D reconstructions. For a precise accuracy evaluation, for both shapes, the accuracy was evaluated by introducing the standard deviation (σ) of such differences for a typical length of the shape (Equations (8) and (9)). With reference to the pounding upper surface (pus), the typical distance considered was the mean value ABmean of the 14 distances AxBx (x = 1, 2, …, 14), and for the arch (arc), the observed typical distance was the mean value Rmean of the 11 rays Ry (y = A, B, …, M).
acc_{pus} = \frac{14}{\sigma_{\overline{AB}} \times \sum_{x=1}^{14} \left( \overline{A_x B_x}_{measured} - \overline{A_x B_x}_{reconstruction} \right)} = \frac{1}{\sigma_{\overline{AB}} \times \overline{AB}_{mean}} \quad \text{for } x = 1, 2, \ldots, 14    (8)
acc_{arc} = \frac{11}{\sigma_R \times \sum_{y=A}^{M} \left( R_{y\,measured} - R_{y\,reconstruction} \right)} = \frac{1}{\sigma_R \times R_{mean}} \quad \text{for } y = A, B, \ldots, M    (9)
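A minimal Python sketch of how the accuracy index of Equations (8) and (9) could be computed from pairs of measured and reconstructed distances is given below; the arrays are illustrative placeholders, not the survey data, and the choice of the standard deviation estimator is an assumption, since the paper does not specify it.

```python
import numpy as np

def accuracy(measured, reconstructed):
    """Equations (8)-(9): acc = 1 / (sigma x mean error), where the errors are the
    differences between measured and reconstructed homologous distances (cm)."""
    errors = np.asarray(measured) - np.asarray(reconstructed)
    mean_err = errors.mean()          # e.g. ABmean or Rmean in Table 5
    sigma = errors.std(ddof=1)        # sample standard deviation (estimator assumed)
    return 1.0 / (sigma * mean_err), mean_err, sigma

# Illustrative values only (14 distances on the pounding upper surface, in cm).
measured = np.array([512.0, 498.5, 503.2, 510.1, 495.7, 505.9, 500.3,
                     507.4, 499.1, 502.8, 506.6, 497.9, 504.4, 501.0])
reconstructed = measured - np.random.normal(loc=0.7, scale=1.8, size=measured.size)
acc_pus, ab_mean, sigma_ab = accuracy(measured, reconstructed)
print(f"mean error = {ab_mean:.2f} cm, sigma = {sigma_ab:.2f} cm, acc = {acc_pus:.2f}")
```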
Figure 13 shows the error in the mean distances (cm) and its standard deviations (σAB; σR), calculated in cm, in the case of the surface reconstruction with CG at hv = 15 m.
Table 5 shows the values of the mean error of the distances (ABmean; Rmean) and of the standard deviations of the distances (σAB; σR), expressed in cm, for the 20 reconstructions. These values enabled the accuracy of each of the 20 reconstructions considered in the present work to be measured.
In particular, the mean error of the distances (ABmean; Rmean) indicated the mean accuracy of each acquisition method, and the standard deviation of the error distances (σAB; σR) indicated the extent of the error distribution. The most accurate reconstructions were those characterized by smaller values of the mean distance error and smaller values of the standard deviation. Table 5 also shows the values of accpus and accarc evaluated with Equations (8) and (9), respectively.
To compare the 20 different types of surface reconstruction in relation to the number of photos used (n. GRC) and the mean GSD offered by each of them, the following parameters ξ (cm²) were calculated by multiplying the accuracy acc by the GSD and by the number of photos (n. GRC):
\xi_{pus} = acc_{pus} \times GSD \times n.GRC
\xi_{arc} = acc_{arc} \times GSD \times n.GRC
Lower values of these parameters corresponded to improving the accuracy and the speed of reconstruction at the same time. Table 6 shows the values of the products ξ (cm²) for the 20 different types of surface reconstruction.
The parameters ξ made it possible to qualify each of the methods analyzed in this study regardless of the GSD value, which does not account for the acquisition orientation or for the position relative to the part being acquired.

5. Data Comparison and Discussions

The comparative analysis of the data obtained from the twenty reconstructions highlighted some interesting points of discussion and provided useful information for the photogrammetric surface reconstruction of large-scale objects. In all the reconstructions studied, the errors followed Gaussian-like distributions. The accuracy was always proportional to GSD, but the number of images that needed to be acquired to obtain the desired accuracy varied according to the grid shapes and the acquisition parameters used. Normalizing to one the sum of the two factors (ξpus + ξarc = ξ), it appears clear (Figure 14) that the acquisition with CG provided a markedly superior reconstruction quality at all flight heights. In Figure 14, the normalized ξ values for the grid shapes are shown in green for the 15 m flight height, in blue for 20 m, in brown for 30 m, in red for 40 m, and in grey for 50 m. On average, the CG grid shape improved the accuracy by a factor of six to seven.
The image overlaps were proportional to the flight height and inversely proportional to the grid pitches studied. The synergic effects of grid shape, grid pitch, and camera framing, instead, could not always be predicted, and their right combination can provide enhanced accuracy in photogrammetric surface reconstruction.
Equations (1) to (7) made it possible to correlate GSD, flight height, and overlap with one another. In particular, such equations can be used to identify the flight heights that assure the desired values of GSD and overlap.
The acquisition with a single RGVC often did not suffice to reach the desired accuracy. In fact, although it produced acceptable values of GSD, it lacked greatly in the acquisition of surfaces orthogonal to the shot direction, such as the side walls of the bridge.
Moving from the acquisition with RGVC to the acquisition with RGOC, it was possible to keep the transversal overlap steady (66%) while increasing the transversal pitch. This occurred, according to Equation (1), at the expense of the GSD. Moreover, the rotation of the camera by the angle ϑr according to Equation (5) involved an increase in the longitudinal overlap.
Similar to the image overlap, the surface deviation values and the distances between feature points appeared to be inversely proportional to the flight height and highly dependent on the acquisition grid type. The overlap in the acquisitions with constant grid pitch proved to be extremely variable, especially in the elliptical grid. In the acquisitions around the bridge with elliptical grids, some efficiency was lost in terms of the number of images required to cover the surface area.
The conducted analysis showed that all the reconstructions contained errors proportional to the flight height, and this was especially true for RGVC. This acquisition type, although one of the most widely used, provided the worst results both in terms of accuracy (acc) and of the parameter ξ.
The acquisition with CG provided the best results. Thanks to modern technologies and to a good GPS system, this kind of grid can be implemented easily and with high accuracy in a UAV acquisition system.

6. Conclusions

A replicable and generalizable methodology was illustrated to improve the quality of the 3D digital surface reconstruction of large-scale objects by the photogrammetric technique. Using commercial software (Pix4Dmapper) based on Structure-from-Motion algorithms, the existing relationships between the grid shapes, the acquisition grid parameters, the image overlap, and the accuracy of reconstruction were evaluated and discussed. The proposed relationships enabled the appropriate selection of flight height, acquisition grid shape, and camera framing for a pre-established overlap value and required GSD.
The experiments conducted on the reconstruction of the Bridge of the Saracens in Adrano (Sicily) illustrated the effectiveness of the developed methodology, which enhanced the 3D reconstruction of a highly complex architectural structure with the desired accuracy. The surface reconstruction errors were evaluated statistically against measurements obtained with a 3D laser scanner and total station-theodolites.
The experimental results indicated that, for large-scale objects characterized by an elongated longitudinal shape, geometric singularities, and multiple and variously inclined features, the proposed method could improve the accuracy, while increasing the speed of reconstruction at the same time, by a factor of six to seven. The authors believe that it might be interesting to apply the study to more complex acquisition grid shapes (i.e., a zig-zag grid).

Author Contributions

The authors contributed equally to this work.

Funding

This research received no external funding.

Acknowledgments

The authors wish to thank Eng. Riccardo Zammataro and Dr. Giuseppe D’Angelo for their technical contribution in the development of work.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Chen, S.; Laefer, D.F.; Mangina, E. State of technology review of civilian UAVs. Recent Pat. Eng. 2016, 10, 160–174. [Google Scholar] [CrossRef]
  2. Cano, E.; Horton, R.; Liljegren, C.; Bulanon, D.M. Comparison of small unmanned aerial vehicles performance using image processing. J. Imaging 2017, 3, 4. [Google Scholar] [CrossRef]
  3. Colomina, I.; Molina, P. Unmanned aerial systems for photogrammetry and remote sensing: A review. ISPRS J. Photogramm. Remote Sens. 2014, 92, 79–97. [Google Scholar] [CrossRef]
  4. Byrne, J.; Laefer, D.F.; O’Keeffe, E. Maximizing feature detection in aerial unmanned aerial vehicle datasets. J. Appl. Remote Sens. 2017, 11, 025015. [Google Scholar] [CrossRef] [Green Version]
  5. Aguilar, W.G.; Angulo, C. Real-time video stabilization without phantom movements for micro aerial vehicles. EURASIP J. Image Video Process. 2014, 2014, 46. [Google Scholar] [CrossRef] [Green Version]
  6. Heng, L.; Honegger, D.; Lee, G.H.; Meier, L.; Tanskanen, P.; Fraundorfer, F.; Pollefeys, M. Autonomous visual mapping and exploration with a micro aerial vehicle. J. Field Robot. 2014, 31, 654–675. [Google Scholar] [CrossRef]
  7. Byrne, J.; O’Keeffe, E.; Lennon, D.; Laefer, D.F. 3D reconstructions using unstabilized video footage from an unmanned aerial vehicle. J. Imaging 2017, 3, 15. [Google Scholar] [CrossRef]
  8. Gaszczak, A.; Brekon, T.P.; Han, J. Real-time people and vehicle detection from UAV imagery. In Intelligent Robots and Computer Vision XXVIII: Algorithms and Techniques, Proceedings SPIE 7878, San Francisco Airport, CA, USA, 23–27 January 2011; Röning, J., Casasent, D.P., Hall, E.L., Eds.; SPIE: Bellingham, WA, USA, 2011; Article 78780B. [Google Scholar] [CrossRef] [Green Version]
  9. Karim, S.; Zhang, Y.; Laghari, A.A.; Asif, M.R. Image processing based proposed drone for detecting and controlling street crimes. In Proceedings of the 17th IEEE International Conference on Communication Technology, Chengdu, China, 27–30 October 2017; pp. 1725–1730. [Google Scholar]
  10. Mancini, F.; Dubbini, M.; Gattelli, M.; Stecchi, F.; Fabbri, S.; Gabbianelli, G. Using unmanned aerial vehicles (UAV) for high-resolution reconstruction of topography: The structure from motion approach on coastal environments. Remote Sens. 2013, 5, 6880–6898. [Google Scholar] [CrossRef] [Green Version]
  11. Liénard, J.; Vogs, A.; Gatziolis, D.; Strigul, N. Embedded, real-time UAV control for improved, image-based 3D scene reconstruction. Measurement 2016, 81, 264–269. [Google Scholar] [CrossRef]
  12. Liang, Z.; Guo, Q.; Xu, J.; Hu, J. A UAV Image and Geographic Data Integration Processing Method and Its Applications. In Proceedings of the 2017 2nd International Conference on Communication and Information Systems, Wuhuan, China, 7–9 November 2017; Association for Computing Machinery: New York, NY, USA, 2017; pp. 268–272. [Google Scholar]
  13. Ham, Y.; Han, K.K.; Lin, J.J.; Golparvar-Fard, M. Visual monitoring of civil infrastructure systems via camera-equipped Unmanned Aerial Vehicles (UAVs): A review of related works. Vis. Eng. 2016, 4, 1. [Google Scholar] [CrossRef]
  14. Koutsoudis, A.; Vidmar, B.; Ioannakis, G.; Arnaoutoglou, F.; Pavlidis, G.; Chamzas, C. Multi-image 3D reconstruction data evaluation. J. Cult. Herit. 2014, 15, 73–79. [Google Scholar] [CrossRef]
  15. Zhang, W.; Li, M.; Guo, B.; Li, D.; Guo, G. Rapid texture optimization of three-dimensional urban model based on oblique images. Sensors 2017, 17, 911. [Google Scholar] [CrossRef] [PubMed]
  16. Campana, S. Drones in Archaeology. State-of-the-art and Future Perspectives. Archaeol. Prospect. 2017, 24, 275–296. [Google Scholar] [CrossRef] [Green Version]
  17. Sauerbier, M.; Eisenbeiss, H. UAVs for the documentation of archaeological excavations. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2010, XXXVIII, 526–531. [Google Scholar]
  18. Li, Q.; Li, D.C.; Wu, Q.F.; Tang, L.W.; Huo, Y.; Zhang, Y.X.; Cheng, N. Autonomous navigation and environment modeling for MAVs in 3-D enclosed industrial environments. Comput. Ind. 2013, 64, 1161–1177. [Google Scholar] [CrossRef]
  19. Rabbani, T. Automatic Reconstruction of Industrial Installations Using Point Clouds and Images; Publications on Geodesy, 62; NCG Nederlandse Commissie voor Geodesie Netherlands Geodetic Commission: Delft, The Netherlands, 2006; pp. 1–175. [Google Scholar]
  20. Moulon, P.; Monasse, P.; Marlet, R. Adaptive structure from motion with a contrario model estimation. In Computer Vision—ACCV 2012; Lee, K.M., Matsushita, Y., Rehg, J.M., Hu, Z., Eds.; Lecture Notes in Computer Science; Springer: Berlin, Germany, 2012; pp. 257–270. [Google Scholar]
  21. Wu, C. Towards linear-time incremental structure from motion. In Proceedings of the International Conference on 3D Vision-3DV2013, Seattle, WA, USA, 29 June–1 July 2013; pp. 127–134. [Google Scholar]
  22. Beardsley, P.A.; Torr, P.H.S.; Zisserman, A. 3D model acquisition from extended image sequences. In Lecture Notes in Computer Science, Computer Vision—ECCV’96; Buxton, B., Cipolla, R., Eds.; Springer: Berlin, Germany, 1996; pp. 683–695. [Google Scholar]
  23. Mohr, R.; Veillon, F.; Quan, L. Relative 3-D reconstruction using multiple uncalibrated images. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, New York, NY, USA, 15–17 June 1993; pp. 543–548. [Google Scholar] [Green Version]
  24. Dellaert, F.; Seitz, S.M.; Thorpe, C.E.; Thrun, S. Structure from motion without correspondence. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition-CVPR 2000, Hilton Head Island, SC, USA, 15 June 2000; pp. 557–564. [Google Scholar]
  25. Gherardi, R.; Farenzena, M.; Fusiello, A. Improving the Efficiency of Hierarchical Structure-and-Motion. In Proceedings of the 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Francisco, CA, USA, 13–18 June 2010; pp. 1594–1600. [Google Scholar]
  26. Moulon, P.; Monasse, P.; Marlet, R. Global fusion of relative motions for robust, accurate and scalable structure. In Proceedings of the 2013 IEEE International Conference on Computer Vision, Sydney, NSW, Australia, 1–8 December 2013; pp. 3248–3255. [Google Scholar]
  27. Crandall, D.J.; Owens, A.; Snavely, N.; Huttenlocher, D.P. SfM with MRFs: Discrete-continuous optimization. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 35, 2841–2853. [Google Scholar] [CrossRef] [PubMed]
  28. Sweeney, C.; Sattler, T.; Höllerer, T.; Turk, M. Optimizing the viewing graph for structure-from-motion. In Proceedings of the 2015 IEEE International Conference on Computer Vision (ICCV), Santiago, Chile, 7–13 December 2015; pp. 801–809. [Google Scholar]
  29. Cheng, G.; Han, J. A survey on object detection in optical remote sensing images. ISPRS J. Photogramm. Remote Sens. 2016, 117, 11–28. [Google Scholar] [CrossRef] [Green Version]
  30. Zhang, Q.; Li, Y.; Blum, R.S.; Xiang, P. Matching of images with projective distortion using transform invariant low-rank textures. J. Vis. Commun. Image Represent. 2016, 38, 602–613. [Google Scholar] [CrossRef] [Green Version]
  31. Lamis, G.; Draa, A.; Chikhi, S. An ear biometric system based on artificial bees and the scale invariant feature transform. Expert Syst. Appl. 2016, 57, 49–61. [Google Scholar]
  32. Qu, Y.; Huang, J.; Zhang, X. Rapid 3D Reconstruction for Image Sequence Acquired from UAV Camera. Sensors 2018, 18, 225. [Google Scholar] [CrossRef]
  33. Zarco-Tejada, P.J.; Diaz-Varela, R.; Angileri, V.; Loudjani, P. Tree height quantification using very high resolution imagery acquired from an unmanned aerial vehicle (UAV) and automatic 3D photo-reconstruction methods. Eur. J. Agron. 2014, 55, 89–99. [Google Scholar] [CrossRef] [Green Version]
  34. Pix4d Manual. Available online: http://pix4d.com (accessed on 10 August 2018).
  35. Hinks, T.; Carr, H.; Laefer, D.F. Flight optimization algorithms for aerial LiDAR capture for urban infrastructure model generation. J. Comput. Civ. Eng. 2009, 23, 330–339. [Google Scholar] [CrossRef]
  36. Mikolajczyk, K.; Schmid, C. An affine invariant interest point detector. In European Conference on Computer Vision; Springer: Berlin/Heidelberg, Germany, 2002; pp. 128–142. [Google Scholar]
  37. Triggs, B.; McLauchlan, P.F.; Hartley, R.I.; Fitzgibbon, A.W. Bundle adjustment—A modern synthesis. In International Workshop on Vision Algorithms; Springer: Berlin/Heidelberg, Germany, 1999; pp. 298–372. [Google Scholar]
  38. Lu, F.; Hartley, R. A fast optimal algorithm for L 2 triangulation. In Asian Conference on Computer Vision; Springer: Berlin/Heidelberg, Germany, 2007; pp. 279–288. [Google Scholar]
  39. Szeliski, R.; Scharstein, D. Symmetric sub-pixel stereo matching. In European Conference on Computer Vision; Springer: Berlin/Heidelberg, Germany, 2002; pp. 525–540. [Google Scholar]
  40. Hirschmuller, H. Stereo processing by semiglobal matching and mutual information. IEEE Trans. Pattern Anal. Mach. Intell. 2008, 30, 328–341. [Google Scholar] [CrossRef] [PubMed]
  41. Strecha, C.; Von Hansen, W.; Van Gool, L.; Fua, P.; Thoennessen, U. On benchmarking camera calibration and multi-view stereo for high resolution imagery. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2008), Anchorage, AK, USA, 23–28 June 2008; pp. 1–8. [Google Scholar] [CrossRef]
  42. Strecha, C.; Van Gool, L.; Fua, P. A generative model for true orthorectification. In Proceedings of the ISPRS Congress Beijing, Beijing, China, 3–11 July 2008. No. CVLAB-CONF-2008-09. [Google Scholar]
  43. Torres, J.C.; Arroyo, G.; Romo, C.; De Haro, J. 3D Digitization using structure from motion. In Proceedings of the CEIG-Spanish Computer Graphics Conference, At Jaén, Spain, 12–14 September 2012. [Google Scholar]
  44. Falkingham, P.L. Generating a Photogrammetric Model Using Visual SFM, and Post-Processing with Meshlab; Technical Report; Brown University: Providence, RI, USA, 2012. [Google Scholar]
  45. Calì, M.; Oliveri, S.M.; Sequenzia, G.; Fatuzzo, G. Error control in UAV image acquisitions for 3D reconstruction of extensive architectures. In Advances on Mechanics, Design Engineering and Manufacturing; Eynard, B., Nigrelli, V., Olivieri, S., Peris-Fajarnes, G., Rizzuti, S., Eds.; Lecture Notes in Mechanical Engineering; Springer International Publishing: Cham, Switzerland, 2017; pp. 1211–1220. [Google Scholar]
  46. Zanetti, E.M.; Bignardi, C. Structural analysis of skeletal body elements: Numerical and experimental methods. In Biomechanical Systems Technology: Volume 3, Muscular Skeletal Systems; World Scientific Publishing: Singapore, 2009; pp. 185–225. [Google Scholar]
  47. Dewez, T.J.; Girardeau-Montaut, D.; Allanic, C.; Rohmer, J. Facets: A Cloudcompare Plugin to Extract Geological Planes from Unstructured 3D Point Clouds. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2016, XLI, 799–804. [Google Scholar] [CrossRef]
  48. Rajendra, Y.D.; Mehrotra, S.C.; Kale, K.V.; Manza, R.R.; Dhumal, R.K.; Nagne, A.D.; Vibhute, A.D. Evaluation of Partially Overlapping 3D Point Cloud’s Registration by using ICP variant and CloudCompare. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2014, XL, 891–897. [Google Scholar] [CrossRef]
Figure 1. (a) UAV platform; (b) batteries and camera; (c) gimbal and camera; (d) LCD screen on the radio control; (e) laser scanner Konica Minolta 9v-I; (f) Geodimeter 480 total station-theodolites.
Figure 2. Waypoint acquisition grids: (a) Rectangular at h v = 40 m; (b) Elliptical at h v = 30 m; (c) Cylindrical h v = 40 m.
Figure 3. Waypoint and overlap in RGVC at h v = 40 m.
Figure 4. Common keypoint in three sequentially captured images in EG at h v = 30 m.
Figure 5. Gimbal rotation ϑr around the horizontal axis parallel to the longitudinal direction in EG at h v = 15 m.
Figure 6. Waypoint and overlap in EG at h v = 15 m.
Figure 7. Gimbal rotation ϑc in CG at h v = 15 m.
Figure 8. Waypoint in CG at h v = 40 m.
Figure 9. EG at h v = 30 m and RGVC at h v = 50 m.
Figure 10. Surface reconstruction: (a) polygonal mesh of surfaces; (b) polygonal mesh visualized with shade; (c) nurbs surface; (d) mapped texture surface.
Figure 11. Flowchart of the main steps of the surface reconstruction algorithm.
Figure 12. (a) 29 keypoints on the pounding upper surface; (b) 11 keypoints on the south side of the north-east arch.
Figure 13. Error in mean distances (cm) and its standard deviation distances (cm) distribution graphs in CG at 15 m: (a) pounding upper surface; (b) south side of the north-east surveyed arch.
Figure 14. ξ factor normalized.
Table 1. UAV Platform.

| Technical Specifications | Value/Typology |
|---|---|
| Frame | Hexacopter |
| Engine | T-Motors 2216 (RoHS, Hong Kong, China) |
| Engine size (mm) | Φ27.8 × 34 |
| Engine weight (g) | 75 |
| Idle current @10 V (A) | 0.04 |
| Batteries | LiPo 4S—4000 mAh |
| Max power (Wh) | 576 |
| Rotors | Nylon 10 × 5 pitch |
| GPS | uBlox LEA6H |
| Flying software | Arducopter 3.1.5 |
| UAV weight (g) | 1120 |
Table 2. GoPro Hero 4 Black Edition.

| Parameter | Value |
|---|---|
| Sensor | CMOS 12 MP 1/2.9″ |
| Focal length (Fl) (mm) | 15.5 |
| Sensor width (Sw) (mm) | 16.8 |
| Sensor length (Sl) (mm) | 22.4 |
| ISO sensitivity | 80–6400 |
| Lens range | f/2.0–f/5.9 |
| Burst shooting (fps) | 2.3 |
| Weight (g) | 198 |
Table 3. (a) GSD, overlaps, and pitches in RGVC; (b) GSD, overlaps, and pitches in RGOC; (c) GSD, overlaps, and pitches in EG; (d) GSD, overlaps, and pitches in CG.
(a)

| hv (m) | GSD (cm/pixel) | ImW (m) | ImL (m) | W. Over. (%) | L. Over. (%) | pt (m) | pl (m) |
|---|---|---|---|---|---|---|---|
| 15 | 0.36 | 10.8 | 14.5 | 66.0 | 66.0 | 3.7 | 4.9 |
| 20 | 0.54 | 16.3 | 21.7 | 66.0 | 66.0 | 5.5 | 7.4 |
| 30 | 0.90 | 27.1 | 36.1 | 66.0 | 66.0 | 9.2 | 12.3 |
| 40 | 1.26 | 37.9 | 50.6 | 66.0 | 66.0 | 12.9 | 17.2 |
| 50 | 1.63 | 48.8 | 65.0 | 66.0 | 66.0 | 16.6 | 22.1 |

(b)

| hv (m) | ϑr (°) | GSD (cm/pixel) | ImW (m) | ImL (m) | W. Over. (%) | L. Over. (%) | pt (m) | pl (m) |
|---|---|---|---|---|---|---|---|---|
| 15 | 0.0 | 0.36 | 10.8 | 14.5 | 66.0 | 66.0 | 3.7 | 4.9 |
|  | 4.0 | 0.36 | 10.9 | 14.5 | 66.0 | 66.1 | 3.7 | 4.9 |
|  | 25.5 | 0.40 | 12.0 | 16.0 | 66.0 | 69.3 | 4.1 | 4.9 |
| 20 | 0.0 | 0.54 | 16.3 | 21.7 | 66.0 | 66.0 | 5.6 | 7.5 |
|  | 9.9 | 0.55 | 16.5 | 22.0 | 66.0 | 66.0 | 5.6 | 7.5 |
|  | 31.2 | 0.63 | 19.0 | 25.3 | 66.0 | 70.5 | 6.5 | 7.5 |
| 30 | 4.0 | 0.93 | 28.0 | 37.3 | 66.0 | 66.0 | 9.5 | 12.3 |
|  | 25.6 | 1.00 | 30.0 | 40.1 | 66.0 | 68.3 | 10.2 | 12.3 |
| 40 | 6.1 | 1.32 | 39.6 | 52.8 | 66.0 | 66.0 | 13.5 | 17.2 |
|  | 35.6 | 1.43 | 42.8 | 57.1 | 66.0 | 68.5 | 14.6 | 17.2 |
| 50 | 7.2 | 1.71 | 51.2 | 68.3 | 66.0 | 66.0 | 17.4 | 22.1 |
|  | 28.7 | 1.85 | 55.6 | 74.1 | 66.0 | 68.7 | 18.9 | 22.1 |

(c)

| hv (m) | ϑe (°) | GSD (cm/pixel) | ImW (m) | ImL (m) | Tan. Over. (%) | Trans. Over. (%) | ptan (m) |
|---|---|---|---|---|---|---|---|
| 15 | 23.2 | 0.41 | 12.3 | 16.4 | 66.0 | 66.1 | 4.2 |
|  | 35.04 | 0.76 | 15.3 | 20.4 | 66.0 | 78.3 | 5.2 |
| 20 | 27.7 | 0.67 | 20.1 | 26.8 | 66.0 | 66.0 | 6.8 |
|  | 38.7 | 0.77 | 23.0 | 30.7 | 66.0 | 80.3 | 7.8 |
| 30 | 33.6 | 1.15 | 34.4 | 45.9 | 66.0 | 66.1 | 11.7 |
|  | 41.3 | 1.28 | 38.3 | 51.1 | 66.0 | 80.1 | 13.0 |
| 40 | 38.7 | 1.69 | 53.6 | 71.5 | 66.0 | 66.0 | 17.2 |
|  | 42.4 | 1.79 | 50.7 | 67.6 | 66.0 | 79.8 | 18.2 |
| 50 | 38.7 | 2.15 | 64.5 | 86.1 | 66.0 | 66.0 | 21.9 |
|  | 43.0 | 2.30 | 69.0 | 92.0 | 66.0 | 80.6 | 23.5 |

(d)

| hv (m) | ϑc (°) | GSD (cm/pixel) | ImW (m) | ImL (m) | W. Over. (%) | L. Over. (%) | pt (m) | pl (m) |
|---|---|---|---|---|---|---|---|---|
| 15 | 0.0 | 0.36 | 10.8 | 14.5 | 66.0 | 66.0 | 3.7 | 4.9 |
|  | 90.0 | 0.43 | 13.0 | 17.3 | 66.0 | 71.7 | 4.4 | 4.9 |
| 20 | 0.0 | 0.54 | 16.3 | 21.7 | 66.0 | 66.0 | 5.5 | 7.4 |
|  | 90.0 | 0.61 | 18.4 | 24.6 | 66.0 | 70.0 | 6.3 | 7.4 |
| 30 | 0.0 | 0.90 | 27.1 | 36.1 | 66.0 | 66.0 | 9.2 | 12.3 |
|  | 90.0 | 0.98 | 29.3 | 39.0 | 66.0 | 68.5 | 10.0 | 12.3 |
| 40 | 0.0 | 1.26 | 37.9 | 50.6 | 66.0 | 66.0 | 12.9 | 17.2 |
|  | 90.0 | 1.34 | 40.1 | 53.5 | 66.0 | 67.8 | 13.6 | 17.2 |
| 50 | 0.0 | 1.63 | 48.8 | 65.0 | 66.0 | 66.0 | 16.6 | 22.1 |
|  | 90.0 | 1.70 | 50.9 | 67.9 | 66.0 | 67.4 | 17.3 | 22.1 |
Table 4. Geometric characteristics of: RGVC, RGOC, EG, and CG.

RGVC—Rectangular Grid with Vertical Camera:

| hv (m) | pt (m) | pl (m) | Width (m) | Length (m) | n. GRC |
|---|---|---|---|---|---|
| 15 | 3.7 | 4.9 | 14.8 | 73.5 | 80 |
| 20 | 5.5 | 7.4 | 22 | 81 | 60 |
| 30 | 9.2 | 12.3 | 27.6 | 86 | 32 |
| 40 | 12.9 | 17.2 | 38.7 | 103.2 | 28 |
| 50 | 16.6 | 22.1 | 49.8 | 110.5 | 24 |

RGOC—Rectangular Grid with Oscillating Camera:

| hv (m) | pt (m) | pl (m) | Width (m) | Length (m) | n. GRC |
|---|---|---|---|---|---|
| 15 | 3.7; 4.1 | 4.9 | 15.5 | 73.5 | 80 |
| 20 | 5.6; 6.5 | 7.4 | 24.1 | 81 | 60 |
| 30 | 9.5; 10 | 12.3 | 41.7 | 86 | 40 |
| 40 | 13.5; 14.6 | 17.2 | 59.4 | 103.2 | 35 |
| 50 | 17.4; 18.9 | 22.1 | 77.1 | 110.5 | 30 |

EG—Elliptical Grid:

| hv (m) | ptan (m) | Width (m) | Length (m) | n. GRC |
|---|---|---|---|---|
| 15 | 4.2; 5.2 | 26 | 80 | 36 |
| 20 | 6.8; 7.8 | 36 | 90 | 28 |
| 30 | 11.7; 13 | 56 | 110 | 22 |
| 40 | 17.2; 18.2 | 76 | 130 | 20 |

CG—Cylindrical Grid:

| hv (m) | pt (m) | pl (m) | Width (m) | Length (m) | n. GRC |
|---|---|---|---|---|---|
| 15 | 3.7; 4.4 | 4.9 | 30 | 73.5 | 208 |
| 20 | 5.5; 6.3 | 7.4 | 40 | 81 | 144 |
| 30 | 9.2; 10 | 12.3 | 60 | 86 | 88 |
| 40 | 12.9; 13.6 | 17.2 | 80 | 103.2 | 70 |
Table 5. Error in mean distances and its standard deviations in cm in the 20 surface reconstructions.

| Acq. Grid Typology | ABmean | σAB | accpus | Rmean | σR | accarc |
|---|---|---|---|---|---|---|
| RGVC 15 m | 0.65 | 1.8 | 0.85 | 1.08 | 2.9 | 0.35 |
| RGVC 20 m | 1.07 | 1.9 | 0.49 | 1.69 | 2.9 | 0.20 |
| RGVC 30 m | 1.71 | 1.9 | 0.31 | 2.70 | 3.0 | 0.12 |
| RGVC 40 m | 2.73 | 2.0 | 0.18 | 6.17 | 3.0 | 0.05 |
| RGVC 50 m | 3.36 | 2.1 | 0.14 | 8.38 | 3.1 | 0.04 |
| RGOC 15 m | 0.54 | 1.2 | 1.54 | 0.6 | 1.9 | 0.88 |
| RGOC 20 m | 0.93 | 1.3 | 0.83 | 1.03 | 2.0 | 0.50 |
| RGOC 30 m | 1.48 | 1.4 | 0.48 | 1.58 | 2.1 | 0.30 |
| RGOC 40 m | 2.14 | 1.4 | 0.33 | 2.98 | 2.2 | 0.15 |
| RGOC 50 m | 2.60 | 1.5 | 0.26 | 3.96 | 2.2 | 0.11 |
| EG 15 m | 0.45 | 1.1 | 2.02 | 0.70 | 1.7 | 0.84 |
| EG 20 m | 0.78 | 1.2 | 1.07 | 1.26 | 1.7 | 0.47 |
| EG 30 m | 1.28 | 1.2 | 0.65 | 2.03 | 1.8 | 0.27 |
| EG 40 m | 1.87 | 1.3 | 0.41 | 4.76 | 1.8 | 0.12 |
| EG 50 m | 2.50 | 1.3 | 0.31 | 6.43 | 1.8 | 0.09 |
| CG 15 m | 0.31 | 0.40 | 4.67 | 0.50 | 0..0 | 2.30 |
| CG 20 m | 0.52 | 0.58 | 2.77 | 0.84 | 1.1 | 1.09 |
| CG 30 m | 0.82 | 0.74 | 1.53 | 1.30 | 1.1 | 0.70 |
| CG 40 m | 1.15 | 0.80 | 1.09 | 2.95 | 1.1 | 0.31 |
| CG 50 m | 1.54 | 0.80 | 0.81 | 3.97 | 1.2 | 0.21 |
Table 6. ξ factors for the 20 surface reconstructions.

| Acq. Grid Typology | ξpus (cm²) | ξarc (cm²) |
|---|---|---|
| RGVC 15 m | 24.7 | 4.2 |
| RGVC 20 m | 15.9 | 3.6 |
| RGVC 30 m | 8.9 | 2.6 |
| RGVC 40 m | 6.5 | 1.9 |
| RGVC 50 m | 5.5 | 1.5 |
| RGOC 15 m | 47.1 | 26.7 |
| RGOC 20 m | 29.3 | 17.7 |
| RGOC 30 m | 18.7 | 11.5 |
| RGOC 40 m | 16.0 | 7.3 |
| RGOC 50 m | 13.7 | 6.1 |
| EG 15 m | 33.5 | 13.9 |
| EG 20 m | 21.5 | 9.4 |
| EG 30 m | 17.4 | 7.3 |
| EG 40 m | 14.3 | 4.1 |
| EG 50 m | 12.3 | 3.5 |
| CG 15 m | 369.0 | 165.3 |
| CG 20 m | 230.2 | 90.5 |
| CG 30 m | 126.5 | 57.8 |
| CG 40 m | 98.8 | 28.1 |
| CG 50 m | 81.0 | 21.0 |
