Article

Variables Influencing the Accuracy of 3D Modeling of Existing Roads Using Consumer Cameras in Aerial Photogrammetry

by Juan J. González-Quiñones 1, Juan F. Reinoso-Gordo 2,*, Carlos A. León-Robles 2, José L. García-Balboa 3 and Francisco J. Ariza-López 3

1 TargetPix, 18015 Granada, Spain
2 Department of Architectural and Engineering Graphic Expression, University of Granada, 18071 Granada, Spain
3 Department of Cartographic, Geodesic and Photogrammetric Engineering, University of Jaén, 23071 Jaén, Spain
* Author to whom correspondence should be addressed.
Sensors 2018, 18(11), 3880; https://doi.org/10.3390/s18113880
Submission received: 18 September 2018 / Revised: 27 October 2018 / Accepted: 4 November 2018 / Published: 11 November 2018
(This article belongs to the Section Remote Sensors)

Abstract

Point cloud (PC) generation from photogrammetry–remotely piloted aircraft systems (RPAS) at high spatial and temporal resolution and accuracy is of increasing importance for many applications. For several years, photogrammetry–RPAS has been used in civil engineering works to derive products such as digital elevation models (DEMs), triangulated irregular networks (TINs), contour lines, orthophotographs, etc. This study analyzes the influence of the variables involved in the accuracy of PC generation over asphalt surfaces and determines the most influential variable by means of an artificial neural network (ANN) trained with patterns identified in the test flights. The inputs were the variables under study, and the output was the three-dimensional root mean square error (3D-RMSE) of the PC at each ground control point (GCP). The results show that the variable with the greatest influence on PC accuracy is the modulation transfer function 50 (MTF50). In addition, the study obtained an average 3D-RMSE of 1 cm. The results can be used by the scientific and civil engineering communities to take the MTF50 into account when obtaining images from RPAS cameras and to predict the accuracy of a PC over asphalt with the ANN developed. This ANN could also be the seed of a large database containing patterns from the many cameras and lenses on the world market.

1. Introduction

Point cloud generation from photogrammetry–remotely piloted aircraft systems (RPAS) at high spatial and temporal resolution and accuracy is of increasing importance for many applications [1,2]. The improvement is such that photogrammetry–RPAS is now a robust tool to retrieve topographic products such as digital elevation models (DEMs), triangulated irregular networks (TINs), contour lines, and orthophotographs [3,4,5,6,7,8]. In addition, RPAS requires less time for photogrammetric data acquisition, cutting costs in comparison with piloted airplanes [9].
The structure from motion (SfM) technique [12] arose from combining the mathematics and stages of conventional photogrammetry [10,11] with computer vision methods. SfM adjusts the bundles of all photos in such a way that the root mean square error (RMSE) of the projection of terrain points onto the corresponding photos is minimized by the least squares method [13,14,15]. This optimization problem is known as bundle adjustment [16,17]. SfM builds on two important publications: the book Multiple View Geometry in Computer Vision [18] and the scale invariant feature transform (SIFT) algorithm [19]. The former addresses the relative orientation problem from the field of projective geometry, using the concept of outliers (incorrectly paired points) in order to eliminate those points through random sample consensus (RANSAC) [20,21,22]. The resulting matrix calculations, with a large number of rows (one per homologous point), are accelerated by sparse matrix techniques [23]. The SIFT algorithm, another important milestone, automatically identifies homologous points in the overlapping (common) areas of photographs, with a success rate unexceeded since then. Lowe patented the SIFT algorithm, and other researchers have studied alternative algorithms and found similar results [24].
Roads, being part of the civil infrastructure, require conservation and maintenance carried out in a way that saves time and therefore cost. The first step in this process, determining the current state of the pavement, is possible by analyzing the point cloud (PC) obtained from photogrammetry–RPAS. It is therefore useful to analyze the influence of the variables involved in the accuracy of PC generation over the asphalt surface and to identify the most influential one. The key variables for PC accuracy, and consequently the ones studied here, are focal length, ground sample distance (GSD), overlap, and modulation transfer function (MTF). The choice of these variables was based on previous research: Rosnell and Honkavaara [25] investigated methods for PC generation combining different overlaps and using different cameras; Hernandez-Lopez et al. [26] analyzed geometrical features based on GSD and overlap that should be taken into account when making a photogrammetric flight plan; Mesas-Carrascosa et al. [27] concluded that the GSD must be taken into account when making a photogrammetric flight plan; Westoby et al. [28] recommended taking the overlap into account when generating PCs by photogrammetry; Tahar [29] analyzed different photogrammetric methods to obtain PCs combining different focal lengths; Näsi et al. [30] inspected anomalous reflectance characteristics of trees with hyperspectral cameras combining different GSDs; and Eltner and Schneider [31] studied the quality of DEMs generated from different cameras and different focal lengths. In addition, from our experience we noted that sharper images produced better PC quality, so sharpness was also included in our study; a variable that can measure this physical phenomenon is the modulation transfer function 50 (MTF50).
It is necessary to know the camera’s internal features in order to develop PCs from camera images using photogrammetric techniques [32]. These features can be retrieved through the camera calibration process. The internal features include the coordinates of the image projection center (xp, yp), the focal length (f), the radial distortion coefficients (k1, k2, k3), and the decentering lens distortion coefficients (p1, p2) [33]. The camera calibration process has been analyzed carefully in recent years and many techniques have been developed [34,35].
The variables are defined as follows: (a) Focal length is the distance in millimeters from the back nodal point to the point where light rays entering the lens converge to form a sharp image on the sensor. Depending on the focal length, any lens induces a spherical distortion in the resulting photo [36]. (b) GSD is the distance between pixel centers of a digital photo, measured in millimeters on the ground. (c) MTF is determined from the modulation of the Fourier transform applied over a time or space domain; it is used, for example, in the speech field [37], in the study of the response of physical systems [38,39], and in photography to measure lens sharpness [40,41]. In recent years, the MTF has been analyzed to improve measurements obtained with photogrammetric techniques [42,43,44,45,46]. (d) Overlap is the percentage of the ground area covered by one photo that is also covered by the next photo.
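For reference, two standard relations behind definitions (b) and (c), not stated explicitly in the text, are the GSD of a vertical photograph and the modulation used to build the MTF (the symbols p, H, f, ν, I_max, and I_min are ours):

```latex
\mathrm{GSD} = \frac{p \cdot H}{f},
\qquad
M(\nu) = \frac{I_{\max}(\nu) - I_{\min}(\nu)}{I_{\max}(\nu) + I_{\min}(\nu)},
\qquad
\mathrm{MTF}(\nu) = \frac{M_{\mathrm{image}}(\nu)}{M_{\mathrm{object}}(\nu)}
```

where p is the sensor pixel pitch, H the flying height above ground, f the focal length, ν a spatial frequency, and I_max, I_min the maximum and minimum intensities of a pattern at that frequency.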
The influence of several variables on another variable or behavior can be analyzed by multiple regression to develop system models, multivariate analysis of variance (MANOVA), principal component analysis (PCA), etc. García-Balboa et al. [47] showed that an artificial neural network (ANN) is the most suitable method for training on complex datasets, which is why we selected this method. For the present purpose, the ANN fulfills the dual role of analyzing the system and estimating its accuracy. This ANN may also serve as the seed of an open global ANN to estimate the accuracy of a PC: through a large data server on the web, patterns (input variables, output accuracy) from different cameras and lenses could be added to retrain the ANN and provide feedback.
When a complete 3D reconstruction of a highway network is needed, the storage capacity should be prepared for managing large amounts of data. Software currently exists to deal with this kind of large data problem, e.g., Autodesk, Bentley, Agisoft Photoscan, and Pix4D.

2. Methodology

The methodology followed throughout this research is presented in Figure 1.

2.1. RPAS System, Hardware, and Software

The RPAS platform used in this study is a multirotor hexacopter (Figure 2) developed by the authors. The system is designed to take zenith photographs. The RPAS payload is a small-format nonmetric digital camera, a Lumix tz35 (4608 × 3456 pixels, Live MOS 1/2.33″, Leica DC Vario-Elmar lens), serial number FA3KA001057, manufactured in Tokyo (Japan). The payload is stabilized by a gimbal system to improve the accuracy of the camera’s exterior orientation parameters [48]. The hexacopter carries an onboard navigation system based on: (i) a u-blox LEA6 global positioning system (GPS) receiver, (ii) an inertial measurement unit (IMU) with a 3-axis gyro and accelerometer, (iii) an MS5611 barometric pressure sensor, and (iv) an Arduino-based microcontroller (ArduPilotMega 2.5), manufactured in Turin (Italy). With these avionics, the hexacopter can fly autonomously through a predefined set of waypoints. To set the waypoints, the authors developed a script that generates a track under the conditions required in each flight plan. The track format is compatible with the ground station software used to operate the RPAS, Mission Planner 1.2.85. The script takes the flight variables as inputs (start and end flight points, desired overlap, and desired GSD) and produces the track as output. It calculates the flight height and the points along the track at which the RPAS camera has to take photos.
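The flight-planning script itself is not reproduced here; the following Python sketch only illustrates the kind of computation it performs (flying height from the requested GSD, and along-track camera-trigger spacing from the requested overlap), using the standard vertical-photogrammetry relations. All names are ours, and the Lumix tz35 pixel pitch is approximate.

```python
# Minimal flight-planning sketch (not the authors' script); vertical camera, flat terrain assumed.

def flight_height(gsd_m: float, focal_mm: float, pixel_pitch_mm: float) -> float:
    """Flying height above ground (m) that yields the requested GSD (m/pixel)."""
    return gsd_m * focal_mm / pixel_pitch_mm

def trigger_spacing(gsd_m: float, along_track_px: int, overlap_pct: float) -> float:
    """Along-track distance (m) between consecutive exposures for a given longitudinal overlap."""
    footprint = gsd_m * along_track_px            # ground footprint along the flight line (m)
    return footprint * (1.0 - overlap_pct / 100)  # photo base between exposures

if __name__ == "__main__":
    # Example: GSD = 5 mm, focal length 4 mm, ~1.34 um pixel pitch (approx. for a 1/2.33" sensor),
    # 3456 px assumed along track, 90% longitudinal overlap.
    h = flight_height(0.005, 4.0, 0.00134)
    b = trigger_spacing(0.005, 3456, 90)
    print(f"flying height = {h:.1f} m, camera trigger every {b:.2f} m")
```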

2.2. Experiments Design

The variables studied here are focal length, ground sample distance, modulation transfer function, and longitudinal overlap. The first three variables are related to each individual photo, and since the PC is developed from a photo set, the way in which each photo is taken will influence the PC. The last variable, overlap, is related to the pixels common to every pair of consecutive photos, so it can also influence the resulting PC.
This study was carried out on a road closed to traffic in the city of Granada (southern Spain). The test-field road measured about 100 × 10 m. Specifically, the area was located at geographical coordinates 37.186539° N, −3.627872° (latitude, longitude). On the test-field road, we randomly placed 30 ground control points (GCPs), maintaining the same initial random layout throughout the flight tests (Figure 3), in order to determine the ground truth. The coordinates of each GCP were established by total station measurements, which provided a positioning accuracy of 5 mm. Thus, each GCP represents a measurement with known coordinates.
We performed 15 kinds of flight mission (Table 1). Each type of flight mission was repeated 4 times to ensure statistically valid results. Thus, 60 flight mission samples were gathered. The variables of each flight mission (focal length, GSD, and overlap) were imposed as flight and camera constraints.
The design of each flight required writing the mission for the RPAS. Before the mission was uploaded to the RPAS, the script described above was applied in order to: (a) direct the flight along the planned trajectory and (b) activate the trigger at the specific location of each desired zenith photograph.
The flights were executed with common parameters (the same in all flights). As the photographs were taken in motion, the fastest shutter speed possible (1/2000 s) was required, and ISO 100 was used to ensure minimum noise; these are the limits offered by the Lumix tz35. Because of the shutter speed and ISO required, the aperture had to be opened as much as possible, i.e., F3.4, to admit maximum light.

2.3. Development of PC and 3D-RMSE Computation

After acquiring the 60 packs of photographs, each PC was developed in order to obtain the RMSE of each GCP in the PC. The software used to compute the PC and the RMSE was Agisoft PhotoScan 1.1. The internal orientation (self-calibration) and the relative orientation were performed by the program within the same process, generating a PC in relative coordinates. Then, the absolute coordinates (ground truth) of each of the 30 GCPs were imported to develop the PC in absolute coordinates, with the respective error calculated by Agisoft PhotoScan as the 3D-RMSE (Figure 4):

3D-RMSE = √(RMSE_x² + RMSE_y² + RMSE_z²).
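As an illustration of the formula above, a minimal Python sketch computing the 3D-RMSE from per-GCP residuals (variable names are ours; in the paper this value was obtained directly from Agisoft PhotoScan):

```python
import numpy as np

def rmse_3d(residuals_xyz: np.ndarray) -> float:
    """3D-RMSE from per-GCP residuals of shape (n_gcp, 3), in metres."""
    rmse_axis = np.sqrt(np.mean(residuals_xyz ** 2, axis=0))  # RMSE_x, RMSE_y, RMSE_z
    return float(np.sqrt(np.sum(rmse_axis ** 2)))             # sqrt(RMSE_x^2 + RMSE_y^2 + RMSE_z^2)

# Example: residuals (photogrammetric minus total-station coordinates) for 30 GCPs
residuals = np.random.normal(scale=0.01, size=(30, 3))
print(f"3D-RMSE = {rmse_3d(residuals):.4f} m")
```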

2.4. MTF50

In this study, we needed to include a variable related to image sharpness and contrast. This is because the images are taken in motion, so blur can spread into neighboring pixels and hinder the homologous-pixel identification algorithm. Thus, we analyzed the MTF50. The MTF is the spatial frequency response (SFR) of an imaging system or a component; it is the contrast at a given spatial frequency relative to the contrast at low frequencies. The spatial frequency can be expressed in cycles (or lines) per millimeter or per pixel. To obtain the MTF50, we followed the flowchart in Figure 5. First, we defined two regions of interest (ROIs) for each target (the space composed of asphalt and the white portion of the target). Each photo contains at least one target, so each photo has at least two ROIs. Then, each ROI was converted from red–green–blue (RGB) to grayscale. Afterward, we computed the grayscale average curve in the horizontal space domain of ROI.H to obtain I.h and the average curve in the vertical space domain of ROI.V to obtain I.v (see Figure 5). Next, we established the modulation transfer functions from I.h and I.v to obtain II.h and II.v. Then, we recorded the spatial frequency at which the MTF falls to 0.5 (50% contrast), which is the MTF50. Finally, the average of MTF50.h and MTF50.v was calculated; this was the MTF50 value of each photograph of each flight.
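A simplified Python sketch of the workflow above for a single ROI: average the grayscale ROI along one axis to obtain the intensity profile (I.h or I.v), differentiate it to obtain an edge response, take the magnitude of its Fourier transform, normalize, and read off the spatial frequency at which the response falls to 0.5. This is only a stand-in for the authors' procedure (the target geometry and several processing details are omitted), and it reads MTF50 in its conventional sense of the frequency at 50% contrast; all names are ours.

```python
import numpy as np

def mtf50_from_roi(roi_gray: np.ndarray, axis: int = 0) -> float:
    """Estimate MTF50 (cycles/pixel) from a grayscale ROI spanning an asphalt/white edge."""
    profile = roi_gray.mean(axis=axis)        # average intensity curve across the ROI (I.h or I.v)
    lsf = np.diff(profile)                    # line spread function across the edge
    mtf = np.abs(np.fft.rfft(lsf))
    mtf = mtf / mtf[0]                        # normalize so that MTF(0) = 1
    freqs = np.fft.rfftfreq(lsf.size)         # spatial frequencies in cycles/pixel
    below = np.where(mtf <= 0.5)[0]           # first frequency at which contrast drops to 50%
    return float(freqs[below[0]]) if below.size else float(freqs[-1])

# Per photo: average the horizontal and vertical estimates, as described in the text, e.g.
# mtf50 = 0.5 * (mtf50_from_roi(roi_h, axis=0) + mtf50_from_roi(roi_v, axis=1))
```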

2.5. Setting and Training a Neural Network to Forecast 3D-RMSE of 3D Model from Arbitrary Input Parameters

The basic aim of this work was to forecast the 3D-RMSE of PCs developed from photogrammetric flight missions carried out under certain controlled conditions (Figure 6). The variables in the methodology were focal length, GSD, longitudinal overlap, and MTF50. For this, a backpropagation artificial neural network was designed, set up, trained, validated, and tested in MATLAB r2012b.
The monitored variables were the input neurons of the ANN, while the 3D-RMSE was the output neuron. The optimal number of hidden layers was not known beforehand and was determined by trial and error. As this was not a complex problem, the ANN fit better when it was set up with a smaller number of hidden layers [49]. The problem presented here, with a single neuron in the output layer, does not seem to be very complex. Therefore, we began by testing a single hidden layer, and since the convergence was deemed satisfactory, this design was retained.
In summary, the data were as follows:
-
There were 60 photographic packs acquired under different conditions controlling the variables under study. Each photographic pack generated a PC, so 60 PCs were generated. Each PC had 30 GCPs, and the 3D-RMSE of each GCP in each PC was determined. Then, we averaged the 3D-RMSE over groups of 6 GCPs (by homogeneous zones), obtaining 5 3D-RMSE samples for each flight (i.e., each PC).
-
The MTF50 in each photo on the x-axis and the y-axis was measured in order to use their average in the ANN.
-
Based on the above, 60 PCs × 5 3D-RMSE samples per PC = 300 samples were generated to train, validate, and test the ANN. Chosen at random, 210 samples were used for training, 45 for validation, and 45 for testing.
Several configurations were designed to train the ANN, and the configuration that worked best is shown in Figure 7a: one hidden layer with two neurons. Convergence was achieved in 10 epochs, with a mean square error (MSE) of 1.257·10−5 (Figure 7b). The errors in each epoch were near zero and their histogram has a Gaussian shape around zero (Figure 7c). More information on the training process of the ANN is provided in Figure 7d.
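The network was built in MATLAB r2012b; as an equivalent sketch, the following Python code (scikit-learn is our substitution, and the data files are hypothetical) sets up the same architecture and data split:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# X: 300 samples x 4 inputs (MTF50, GSD, longitudinal overlap, focal length)
# y: 300 3D-RMSE values (one per group of 6 GCPs in each of the 60 point clouds)
X, y = np.load("inputs.npy"), np.load("rmse3d.npy")  # hypothetical files

# 210 training / 45 validation / 45 test samples, chosen at random
X_train, X_rest, y_train, y_rest = train_test_split(X, y, train_size=210, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_rest, y_rest, train_size=45, random_state=0)

# Multilayer perceptron with one hidden layer of two neurons
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(2,), solver="lbfgs", max_iter=2000, random_state=0),
)
model.fit(X_train, y_train)
print("validation MSE:", np.mean((model.predict(X_val) - y_val) ** 2))
print("test MSE:      ", np.mean((model.predict(X_test) - y_test) ** 2))
```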

3. Analysis and Discussion of the Results

Once the neural network was established, we proceeded to analyze it. The mean square error at convergence was 1.257·10−5 m. This value is less than a millimeter, indicating that when values are fed to the neural network, the output will have an error of about ±1.257·10−5 m. In addition, using the trained neural network, we determined which of the variables had the greatest influence on the output accuracy. To do so, we designed a ceteris paribus input table: for each of the four variables we calculated the midpoint of its experimental range, then fixed three variables at their midpoints and varied the remaining variable in steps of 5% of its range. This was done for each of the four variables. These inputs were fed to the neural network to compute the error, yielding the graphs shown in Figure 8. Each line represents one of the four variables considered in this paper. The x-axis runs through the whole range over which each variable was studied, with 0, 50, and 100 being the beginning, midpoint, and end of the range, respectively. The y-axis gives the 3D-RMSE computed by the neural network for those inputs.
The trends of the curves in Figure 8 provide the following information: (a) when the MTF50 increases, the PC 3D-RMSE decreases; (b) when the GSD increases, so does the PC 3D-RMSE; (c) when the overlap increases, the PC 3D-RMSE decreases; and (d) when the focal length increases, so does the PC 3D-RMSE (as expected). Thus, we may deduce that any improvement in the average sharpness of the photographs from which the PC is developed will reduce the 3D-RMSE, meaning that the accuracy of the PC will be better. This is because the software can identify each pixel better and can therefore produce a model closer to reality. The same is true of the GSD, or ground pixel size: as might be expected, the larger the pixel, the less ground information is captured and the coarser the data, so the software has geometrical information of poorer quality with which to develop the PC. Likewise, the greater the longitudinal overlap, the better the PC, because the software has more homologous points between two consecutive images and thus produces a more accurate PC.
After analyzing the graphs in Figure 8, we fitted a least-squares line to the 3D-RMSE output of the neural network for each variable in order to determine which variable most influences the 3D-RMSE of the PC development process. The slopes of the resulting lines were: (a) MTF50: −4.1·10−4; (b) GSD: 3.7·10−4; (c) overlap: −2.6·10−4; and (d) focal length: 1.2·10−4. Comparing the magnitudes of these slopes, the variable that changes the output most over its range, and therefore the one that most influences the accuracy of a PC on asphalt, is the MTF50 of each photograph, followed by the GSD, the overlap, and the focal length.
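A sketch of the ceteris paribus sweep and the slope fit described above, assuming the trained `model` from the previous sketch; the numeric range bounds below are placeholders, to be replaced by the experimental ranges of each variable:

```python
import numpy as np

# Experimental range [min, max] of each input, in the training order:
# MTF50, GSD, overlap, focal length (placeholder bounds only).
ranges = np.array([[0.10, 0.50],
                   [5.0, 9.5],
                   [70.0, 90.0],
                   [4.0, 6.0]])
mid = ranges.mean(axis=1)
percent = np.linspace(0.0, 100.0, 21)  # 0%, 5%, ..., 100% of each variable's range

slopes = {}
for i, name in enumerate(["MTF50", "GSD", "overlap", "focal length"]):
    grid = np.tile(mid, (percent.size, 1))                            # other variables at midpoints
    grid[:, i] = ranges[i, 0] + percent / 100.0 * (ranges[i, 1] - ranges[i, 0])
    rmse = model.predict(grid)                                        # trained model from previous sketch
    slope, _ = np.polyfit(percent, rmse, 1)                           # least-squares line through the curve
    slopes[name] = slope

# The variable with the largest absolute slope is the most influential one.
print(sorted(slopes.items(), key=lambda kv: -abs(kv[1])))
```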
Additionally, we consider the MTF50 as a measure linked to each individual image, rather than, as is traditional, a measure belonging to the camera system (lens). The MTF50 can provide a measure related to blurring, motion, and sharpness of an image, because its value depends on the light-to-dark transitions in the image. This idea departs from the traditional treatment of MTF50 values.

4. Conclusions

The method used in this study determined the accuracy obtained when computing a point cloud (PC) over asphalt by photogrammetry, and this accuracy is similar to that of a total station. In particular, the average accuracy found was about ±1 cm. Therefore, the method is suitable for photogrammetric mapping of asphalt areas to support decision-making about the need for renovation, conservation, and maintenance of road surfaces.
It has been demonstrated that the most influential variable in the accuracy of the PC is the MTF50, which represents a measure of sharpness. In other words, obtaining an accurate PC requires the use of cameras with suitable MTF50 values. We also expect that a key feature for improving the MTF50 is shutter speed, because the images are taken with the RPAS in motion; thus, the shutter speed should be faster than 1/2000 s. We recommend further research on how to improve the MTF50 of a photograph taken with a moving camera or of a vibrating subject.
The next variable to consider is the GSD. With other variables fixed, flight at lower altitudes provides greater accuracy in the PC. This presents a drawback: the area photographed will be smaller. Therefore, we recommend the use of a nonmetric camera with the image sensor as large as possible to guarantee higher flights without loss of GSD accuracy.
According to the classical ANN approach, the comparison between the 3D-RMSE estimated by the fitted ANN and the test subset suggests that ANN is a suitable tool for estimating the 3D-RMSE when fed by the following variables: MTF50, GSD, overlap, and focal length. We developed a tool that can advise as to the values those variables should take in order to ensure predefined accuracy. For future research, the tool has the advantage of being able to add more patterns to the neural network, and by training, validating, and testing, it can also expand the range of variables.
Mapping with RPAS has proven to be easier and more versatile than conventional topographic surveying and photogrammetric flight data collection with an airplane. It is especially valuable for areas of intermediate size, between those suited to standard topographic surveying (small areas) and conventional photogrammetric flights (large areas).

Author Contributions

J.F.R.-G. and J.J.G.-Q. took part in the entire research process. F.J.A.-L. contributed to the experiment design. C.A.L.-R. carried out the photogrammetry process. J.L.G.-B. supported the development of the artificial neural network.

Funding

The article processing charge (APC) was funded by the Research Group “Ingeniería Cartográfica” (Grant No. PAIDI-TEP-164 from the Regional Government of Andalucía) from the University of Jaén.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Mancini, F.; Dubbini, M.; Gattelli, M.; Stecchi, F.; Fabbri, S.; Gabbianelli, G. Using Unmanned Aerial Vehicles (UAV) for High-Resolution Reconstruction of Topography: The Structure from Motion Approach on Coastal Environments. Remote Sens. 2013, 5, 6880–6898. [Google Scholar] [CrossRef] [Green Version]
  2. Hugenholtz, C.H.; Walker, J.; Brown, O.; Myshak, S. Earthwork Volumetrics with an Unmanned Aerial Vehicle and Softcopy Photogrammetry. J. Surv. Eng. 2015, 141, 06014003. [Google Scholar] [CrossRef]
  3. Nelson, A.; Reuter, H.I.; Gessler, P. Chapter 3 DEM Production Methods and Sources. Dev. Soil Sci. 2009, 33, 65–85. [Google Scholar] [CrossRef]
  4. Timofte, R.; Zimmermann, K.; Van Gool, L. Multi-view traffic sign detection, recognition, and 3D localisation. Mach. Vis. Appl. 2014, 25, 633–647. [Google Scholar] [CrossRef]
  5. Golparvar-Fard, M.; Balali, V.; de la Garza, J.M. Segmentation and Recognition of Highway Assets Using Image-Based 3D Point Clouds and Semantic Texton Forests. J. Comput. Civ. Eng. 2015, 29, 04014023. [Google Scholar] [CrossRef]
  6. Turner, D.; Lucieer, A.; Watson, C. An Automated Technique for Generating Georectified Mosaics from Ultra-High Resolution Unmanned Aerial Vehicle (UAV) Imagery, Based on Structure from Motion (SfM) Point Clouds. Remote Sens. 2012, 4, 1392–1410. [Google Scholar] [CrossRef] [Green Version]
  7. Ai, M.; Hu, Q.; Li, J.; Wang, M.; Yuan, H.; Wang, S. A Robust Photogrammetric Processing Method of Low-Altitude UAV Images. Remote Sens. 2015, 7, 2302–2333. [Google Scholar] [CrossRef] [Green Version]
  8. Balali, V.; Jahangiri, A.; Machiani, S.G. Multi-class US traffic signs 3D recognition and localization via image-based point cloud model using color candidate extraction and texture-based recognition. Adv. Eng. Inform. 2017, 32, 263–274. [Google Scholar] [CrossRef]
  9. Aber, J.S.; Marzolff, I.; Ries, J.B. Small-Format Aerial Photography: Principles, Techniques and Geoscience Applications; Elsevier Scientific: Amsterdam, The Netherlands, 2010; ISBN-10: 0444532609. [Google Scholar]
  10. Cochrane, G.R. Manual of photogrammetry. American Society of Photogrammetry. N. Z. Geog. 1982, 38. [Google Scholar] [CrossRef]
  11. Guillem, S.; Herráez, J. Restitución Analítica; Publication service of the Polytechnic University of Valencia: Valencia, Spain, 1995. [Google Scholar]
  12. Faugeras, O.; Luong, Q.-T.; Papadopoulo, T. The Geometry of Multiple Images: The Laws that Govern the Formation of Multiple Images of a Scene and Some of Their Applications; MIT Press: Cambridge, MA, USA, 2001; ISBN 9780262062206. [Google Scholar]
  13. Harwin, S.; Lucieer, A. Assessing the Accuracy of Georeferenced Point Clouds Produced via Multi-View Stereopsis from Unmanned Aerial Vehicle (UAV) Imagery. Remote Sens. 2012, 4, 1573–1599. [Google Scholar] [CrossRef] [Green Version]
  14. Rupnik, E.; Nex, F.; Toschi, I.; Remondino, F. Aerial multi-camera systems: Accuracy and block triangulation issues. ISPRS J. Photogramm. Remote Sens. 2015, 101, 233–246. [Google Scholar] [CrossRef]
  15. Martinez-Carricondo, P. Técnicas Fotogramétricas Desde Vehículos Aéreos no Tripulados Aplicadas a la Obtención de Productos Cartográficos Para la Ingeniería Civil. Ph.D. Thesis, University of Almería, Almería, Spain, 2016. [Google Scholar]
  16. Brown, D. The bundle adjustment—Progress and prospects. In XIII Congress of the ISPRS. International Archives of Photogrammetry; International Society for Photogrammetry and Remote Sensing: Helsinki, Finland, 1976; Volume 21, p. 33. [Google Scholar]
  17. Triggs, B.; McLauchlan, P.F.; Hartley, R.I.; Fitzgibbon, A.W. Bundle Adjustment—A Modern Synthesis. In Vision Algorithms: Theory and Practice; Springer: Berlin/Heidelberg, Germany, 2000; pp. 298–372. [Google Scholar]
  18. Hartley, R.; Zisserman, A. Multiple View Geometry in Computer Vision; Cambridge University Press: Cambridge, UK, 2000; ISBN-10: 0521540518. [Google Scholar]
  19. Lowe, D.G. Distinctive Image Features from Scale-Invariant Keypoints. Int. J. Comput. Vis. 2004, 60, 91–110. [Google Scholar] [CrossRef] [Green Version]
  20. Fischler, M.A.; Bolles, R.C. Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography. Commun. ACM 1981, 24, 381–395. [Google Scholar] [CrossRef]
  21. Hast, A.; Nysjö, J.; Marchetti, A. Optimal RANSAC-Towards a Repeatable Algorithm for Finding the Optimal Set. J. WSCG 2013, 21, 21–30. [Google Scholar]
  22. Den Hollander, R.J.M.; Hanjalic, A. A Combined RANSAC-Hough Transform Algorithm for Fundamental Matrix Estimation. In Proceedings of the British Machine Vision Conference 2007, Warwick, UK, 10–13 September 2007. [Google Scholar]
  23. Lourakis, M.I.A.; Argyros, A.A. SBA: A Software Package for Generic Sparse Bundle Adjustment. ACM Trans. Math. Softw. 2009, 36, 1–30. [Google Scholar] [CrossRef]
  24. Rublee, E.; Rabaud, V.; Konolige, K.; Bradski, G. ORB: An efficient alternative to SIFT or SURF. In Proceedings of the 2011 International Conference on Computer Vision, Barcelona, Spain, 6–13 November 2011; pp. 2564–2571. [Google Scholar]
  25. Rosnell, T.; Honkavaara, E. Point cloud generation from aerial image data acquired by a quadrocopter type micro unmanned aerial vehicle and a digital still camera. Sensors 2012, 12, 453–480. [Google Scholar] [CrossRef] [PubMed]
  26. Hernandez-Lopez, D.; Felipe-Garcia, B.; Gonzalez-Aguilera, D.; Arias-Perez, B. An Automatic Approach to UAV Flight Planning and Control for Photogrammetric Applications. Photogramm. Eng. Remote Sens. 2013, 79, 87–98. [Google Scholar] [CrossRef]
  27. Mesas-Carrascosa, F.; Rumbao, I.; Berrocal, J.; Porras, A. Positional Quality Assessment of Orthophotos Obtained from Sensors Onboard Multi-Rotor UAV Platforms. Sensors 2014, 14, 22394–22407. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  28. Westoby, M.J.; Brasington, J.; Glasser, N.F.; Hambrey, M.J.; Reynolds, J.M. ‘Structure-from-Motion’ photogrammetry: A low-cost, effective tool for geoscience applications. Geomorphology 2012, 179, 300–314. [Google Scholar] [CrossRef] [Green Version]
  29. Tahar, K.N. An evaluation on different number of ground control points in unmanned aerial vehicle photogrammetric block. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2013, XL-2/W2, 93–98. [Google Scholar] [CrossRef]
  30. Näsi, R.; Honkavaara, E.; Lyytikäinen-Saarenmaa, P.; Blomqvist, M.; Litkey, P.; Hakala, T.; Viljanen, N.; Kantola, T.; Tanhuanpää, T.; Holopainen, M. Using UAV-Based Photogrammetry and Hyperspectral Imaging for Mapping Bark Beetle Damage at Tree-Level. Remote Sens. 2015, 7, 15467–15493. [Google Scholar] [CrossRef] [Green Version]
  31. Eltner, A.; Schneider, D. Analysis of Different Methods for 3D Reconstruction of Natural Surfaces from Parallel-Axes UAV Images. Photogramm. Rec. 2015, 30, 279–299. [Google Scholar] [CrossRef]
  32. Habib, A.; Quackenbush, P.; Lay, J.; Wong, C.; Al-Durgham, M. Calibration and stability analysis of medium-format digital cameras. In Proceedings of SPIE—The International Society for Optical Engineering; Tescher, A.G., Ed.; International Society for Optics and Photonics: Ottawa, ON, Canada, 2006; p. 631218. [Google Scholar]
  33. Atkinson, K.B.; Keith, B. Close Range Photogrammetry and Machine Vision; Whittles: Caithness, UK, 2001; ISBN-10: 1870325737. [Google Scholar]
  34. Gašparović, M.; Dubravko, G.A. Two-step camera calibration method developed for micro uav’s. Remote Sens. Spat. Inf. Sci. 2016, XLI-B1. [Google Scholar] [CrossRef]
  35. Pérez, M.; Agüera, F.; Carvajal, F. Low cost surveying using an unmanned aerial vehicle. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2013, XL-1/W2, 311–315. [Google Scholar] [CrossRef]
  36. Chen, T.; Shibasaki, R.; Lin, Z. A Rigorous Laboratory Calibration Method for Interior Orientation of an Airborne Linear Push-Broom Camera. Photogramm. Eng. Remote Sens. 2007, 73, 369–374. [Google Scholar] [CrossRef]
  37. Chi, T.; Gao, Y.; Guyton, M.C.; Ru, P.; Shamma, S. Spectro-temporal modulation transfer functions and speech intelligibility. J. Acoust. Soc. Am. 1999, 106, 2719–2732. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  38. Cheung, D.T. MTF modelling of backside-illuminated PV detector arrays. Infrared Phys. 1981, 21, 301–310. [Google Scholar] [CrossRef]
  39. Schuster, J. Numerical simulation of the modulation transfer function (MTF) in infrared focal plane arrays: Simulation methodology and MTF optimization. In Physics and Simulation of Optoelectronic Devices XXVI; Osiński, M., Arakawa, Y., Witzigmann, B., Eds.; SPIE: Bellingham, WA, USA, 2018; Volume 10526, p. 53. [Google Scholar]
  40. Dereniak, E.L.; Dereniak, T.D. Geometrical and Trigonometric Optics; Cambridge University Press: Cambridge, UK, 2008; ISBN 9780521887465. [Google Scholar]
  41. Geary, J.M. Introduction to Lens Design: With Practical ZEMAX Examples; Willmann-Bell: Richmond, VA, USA, 2002; ISBN 9780943396750. [Google Scholar]
  42. Gašparović, M.; Geodetskog, D. Testing of Image Quality Parameters of Digital Cameras for Photogrammetric Surveying with. Geod. List Glas. Hrvat. Geod. društva 2016, 70, 256–266. [Google Scholar]
  43. Kraft, T.; Geßner, M.; Meißner, H.; Cramer, M.; Gerke, M.; Przybilla, H.J.; Kraft, T.; Gessner, M.; Meissner, H. Evaluation of a metric camera system tailored for high precision uav applications. Remote Sens. Spat. Inf. Sci. 2016, XLI-B1. [Google Scholar] [CrossRef]
  44. Honkavaara, E.; Jaakkola, J.; Markelin, L.; Becker, S. Evaluation of Resolving Power and MTF of DMC. Available online: https://www.researchgate.net/publication/228374332_Evaluation_of_resolving_power_and_MTF_of_DMC (accessed on 10 November 2018).
  45. Efe, M. Improved analytical modulation transfer function for image intensified charge coupled devices. Comp. Sci. 2010, 18. [Google Scholar] [CrossRef]
  46. Estribeau, M.; Magnan, P. Fast MTF measurement of CMOS imagers using ISO 12233 slanted-edge methodology. Proceedings of SPIE—The International Society for Optical Engineering, Supaero, France, 19 February 2004; 2004; p. 243. [Google Scholar]
  47. García-Balboa, J.L.; Reinoso-Gordo, J.F.; Ariza-López, F.J. Automated Assessment of Road Generalization Results by Means of an Artificial Neural Network. GIScience Remote Sens. 2012, 49, 558–596. [Google Scholar] [CrossRef]
  48. Gašparović, M.; Jurjević, L. Gimbal Influence on the Stability of Exterior Orientation Parameters of UAV Acquired Images. Sensors 2017, 17, 401. [Google Scholar] [CrossRef] [PubMed]
  49. Isasi Viñuela, P.; Galván León, I.M. Redes de neuronas artificiales: Un enfoque práctico; Prentice Hall: Upper Saddle River, NJ, USA, 2004; ISBN 9788420540252. [Google Scholar]
Figure 1. Research workflow. RPAS, remotely piloted aircraft systems; GCP, ground control point; MTF50, modulation transfer function 50.
Figure 2. RPAS.
Figure 3. Flight test zone.
Figure 4. Development of the point cloud (PC). The first column shows the GCP number, the following three show the real GCP (ground truth), and the last one shows the square error between the PC generated by photogrammetry with respect to the ground truth.
Figure 5. Workflow to determine the MTF50 of each photograph on the vertical and horizontal axes.
Figure 6. Artificial neural network inputs–output.
Figure 7. (a) Design of the ANN scheme; (b) mean square error (MSE); (c) error histogram shows the errors of each phase of ANN setting (training, validation, and testing); (d) training parameters of the ANN.
Figure 8. Influence of each variable on model accuracy.
Table 1. Set of flight missions. GSD, ground sample distance.
Flight Mission    Focal Length (mm)    GSD (mm)    Overlap (%)
1                 4                    7           90
2                 4                    7           80
3                 4                    7           70
4                 6                    5           90
5                 6                    5           80
6                 6                    5           70
7                 4                    5           90
8                 4                    5           80
9                 4                    5           70
10                6                    7           90
11                6                    7           80
12                6                    7           70
13                4                    9.5         90
14                4                    9.5         80
15                4                    9.5         70

