Article

Super-Resolution Images Methodology Applied to UAV Datasets to Road Pavement Monitoring

DIING—Department of Engineering, University of Palermo, Viale Delle Scienze Ed. 8, 90128 Palermo, Italy
* Author to whom correspondence should be addressed.
Drones 2022, 6(7), 171; https://doi.org/10.3390/drones6070171
Submission received: 13 May 2022 / Revised: 20 June 2022 / Accepted: 4 July 2022 / Published: 12 July 2022
(This article belongs to the Special Issue UAV Photogrammetry for 3D Modeling)

Abstract

The increasingly widespread use of smartphone-grade cameras on drones has driven the development of several algorithms to improve image refinement. Although the latest generations of drone cameras let the user achieve high-resolution images, the large number of pixels to be processed and the acquisitions from multiple distances required for stereo views often fail to guarantee satisfactory results. In particular, high flight altitudes strongly impact the accuracy and result in images which are undefined or blurry. This is not acceptable in the field of road pavement monitoring, where the conventional algorithms used for image resolution conversion, such as bilinear interpolation, cannot retrieve high-frequency information from an undefined capture. This aspect is felt more strongly when using the recorded images to build a 3D scenario, since its geometric accuracy grows with the resolution of the photos. Super-Resolution algorithms (SRa) register multiple low-resolution images to interpolate sub-pixel information. The aim of this work is to assess, at high flight altitudes, the geometric precision of a 3D model obtained using the Morpho Super-Resolution™ algorithm in a road pavement distress monitoring case study.

1. Introduction

The use of drones for road pavement monitoring is becoming increasingly widespread, as it provides continuous feedback on the health of the infrastructure and makes it possible to track the deviations that occur from one acquisition to the next. However, when physical constraints force flights at high altitudes, the metric error of the final model exceeds 10 cm, which is unacceptable for monitoring purposes. Super-resolution (SR) imaging compensates for the shortcomings of the acquisition system by deriving a higher-resolution image from several images of the same scene. Reconstructing a high-resolution image from a single low-resolution one is an inverse problem that is still ill-posed: many high-resolution solutions can be associated with any given low-resolution pixel. To constrain the solution space, it is therefore necessary to provide a large amount of preliminary information.
SR imaging is a typical problem in the computer vision field [1,2,3,4,5,6,7,8,9,10,11]. Nevertheless, unfavourable lighting or geometric conditions, e.g., a long distance from the object during photogrammetric surveys, cause problems in the construction of the 3D meshed model and therefore lead to unacceptable final RMSE values [12]. The creation of a 3D model from historical photos of a building that no longer exists is another typical case in which only low-resolution images are available. Of particular interest, however, is UAV acquisition for monitoring the condition of road pavements [13]. The goal is to assess the accuracy of a 3D model obtained from a photogrammetric survey with super-resolution images and, more specifically, to validate its RMSE against a ground-truth 3D model to which no corrections have been applied [14] (Figure 1).
For this purpose, a road pavement surface was chosen as the test sample. In the case of flat surfaces, as in this case study, the transformation between image and object coordinates is obtained by a planar homography [15], which is described by a 3 × 3 non-singular matrix H:
$$\mathbf{x}=\begin{bmatrix}x\\ y\\ z\end{bmatrix}=\begin{bmatrix}h_1 & h_2 & h_3\\ h_4 & h_5 & h_6\\ h_7 & h_8 & h_9\end{bmatrix}\begin{bmatrix}X\\ Y\\ Z\end{bmatrix}=\mathbf{H}\,\mathbf{X} \tag{1}$$
where the vectors x = [x y 1]^T and X = [X Y 1]^T express image and object points in homogeneous coordinates, respectively. The GRS, in that case, has two degrees-of-freedom (DoF) only, given that the object is planar. Equation (1) shows two essential aspects: a single image is sufficient to reconstruct a planar object, and, since H is non-singular, the inverse transformation can always be computed. On the other hand, object coordinates (X) are not always available; in this case, the homography can be estimated with the method proposed by Barazzetti [16].
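As a minimal NumPy illustration of Equation (1), the sketch below maps a planar object point to image coordinates through a homography and back through its inverse; the matrix entries are illustrative, not values estimated in this survey.

```python
# Planar homography mapping (Equation (1)) and its inverse (illustrative values).
import numpy as np

H = np.array([[1.02, 0.01, 120.0],   # h1 h2 h3 (illustrative entries)
              [0.00, 0.98,  45.0],   # h4 h5 h6
              [1e-5, 2e-5,   1.0]])  # h7 h8 h9

X = np.array([2.5, 1.0, 1.0])        # object point [X Y 1]^T in homogeneous coordinates
x = H @ X                            # image point, defined up to scale
x = x / x[2]                         # normalize so the third coordinate equals 1

# Because H is non-singular, the inverse transformation always exists:
X_back = np.linalg.inv(H) @ x
X_back = X_back / X_back[2]
```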
A known distance measured in the field allows the 3D model to be scaled [14], and this information is sufficient to reconstruct the 3D coordinates, so there is no need to acquire further metric data such as object points or known ratios of distances and angles [17,18].

2. Related Works

The SRa problem has mainly been investigated in the computer vision field. Yang et al. reviewed and evaluated SRa in 2014 [19]. Among them, the example-based methods [20] achieved state-of-the-art performance. Other studies proposed mapping functions such as kernel regression [21], simple functions [22], random forests [23] and anchored neighbourhood regression to further improve mapping accuracy and speed. The sparse-coding-based method and its several improvements [8,12,24] are among the state-of-the-art SR methods today. In the aforementioned methods, the patches are the focus of the optimization; the patch extraction and aggregation steps are considered pre-/post-processing and handled separately.
The learning-based methods consist of algorithms which can be grouped into external, internal, and convolutional neural network categories, depending on the source of the training dataset [25]. The learning algorithms on which the external image super-resolution method is based provide the relationship between low- and high-resolution image patches. These algorithms include several methods, of which the convolutional neural network appears to be the most efficient [26]. This approach usually performs well on images with many patterns and textures, but it does not generalize to image structures outside the input image and fails to generate correct predictions on images of other classes [25,26].
Little research is currently available on deep learning techniques based on neural network algorithms for image super-resolution. Nevertheless, owing to their reliability, multi-layer perceptron (MLP) algorithms are used for deblurring and for natural image denoising [27,28].
In a recent study, Ahmadian et al. proposed a super-resolution method in which a second-order image gradient allows the edges and details of high- and low-resolution images to be obtained [25]. Unlike other neural networks, which use the backpropagation method to update the weight vectors, competitive learning is employed [25]. In particular, the researchers used old and modern image datasets to train a single-image super-resolution algorithm based on a self-organizing neural network, the k-nearest-neighbour algorithm and the Laplace gradient operator, comparing its performance with previous works and showing that, despite the potential of this approach, the processing speed of the algorithm is slower than that of other traditional methods [25,29,30].
Concerning the aim of this paper, super-resolution reconstruction (SRR) technology, especially the super-resolution convolutional neural network (SRCNN), has been shown to be appropriate for the detection of structural cracks in images acquired using UAVs [31]. The performance of computer vision crack detection models strongly depends on the quality of the collected images [32]. Even if UAVs are common and efficient equipment for collecting images of concrete structures or road pavement surfaces, vibrations and the distance to the target surveying area may cause a loss of image information and make it difficult to detect cracks [31,32,33,34,35].
SRR algorithms can overcome the problems of motion blur and insufficient image resolution, improving the accuracy needed to detect surface distresses such as cracks. Several methods based on deep learning are available to improve the performance of SRR algorithms, but few studies have employed these techniques with crack detection as the research objective. Bae et al. compared the SR images reconstructed by the SrcNet® model with low-resolution images, showing that while SR images improve the recall of detection, a decrease in detection accuracy is observed at the same time [36]. Other authors found that crack segmentation accuracy improved with SRR for low-resolution crack images, but the effect of the SR reconstruction on the quantification of crack features was not explored [37,38,39].
It is clear that the influence of several SRR networks on crack or surface distress reconstruction has not yet been fully investigated.

3. Methodology

3.1. Image Processing with Super-Resolution

Single-image super-resolution (SISR) provides the opportunity to reconstruct a high-resolution image ISR from a single low-resolution image ILR. The relationship between ILR and the original high-resolution image IHR varies with the situation [40]. Several studies assume that ILR is a bicubic down-sampled version of IHR, but other degradation factors, such as blur, decimation or noise, can also be considered in practical applications [41]. To achieve an ISR, the spatial resolution of the image needs to be increased, i.e., the number of pixel rows, columns or both. Image interpolation (image scaling) refers to resizing digital images and is widely used by image-related applications; the traditional interpolation-based methods include the following (a minimal code sketch follows the list):
  • Nearest-neighbour interpolation, an algorithm that selects the value of the nearest pixel for each position to be interpolated, regardless of any other pixels;
  • Bilinear interpolation (BLI), which provides better performance than nearest-neighbour interpolation while keeping a relatively fast speed;
  • Bicubic interpolation (BCI), which performs cubic interpolation on each of the two axes; compared to BLI, BCI takes 4 × 4 pixels into account and produces smoother results with fewer artifacts, but at a much lower speed.
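The sketch below upsamples the same frame with the three classical methods using OpenCV; the input file name and scale factor are illustrative assumptions.

```python
# Classical interpolation-based upscaling with OpenCV (illustrative inputs).
import cv2

img = cv2.imread("frame_low_res.jpg")   # hypothetical low-resolution frame
h, w = img.shape[:2]
scale = 2                               # 2x upscaling factor

new_size = (w * scale, h * scale)
nearest  = cv2.resize(img, new_size, interpolation=cv2.INTER_NEAREST)  # copies nearest pixel
bilinear = cv2.resize(img, new_size, interpolation=cv2.INTER_LINEAR)   # 2 x 2 neighbourhood
bicubic  = cv2.resize(img, new_size, interpolation=cv2.INTER_CUBIC)    # 4 x 4 neighbourhood
```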
Starting from the high-resolution image, the low-resolution image is modelled using the expression below (Equation (2)), where X is the high-resolution image, Y is the low-resolution image, F is the degradation function and σ the noise:
$$Y = F(X;\,\sigma) \tag{2}$$
The degradation parameter σ is unknown; only the high-resolution image and the corresponding low-resolution image are provided. In order to approximate the inverse of the degradation function, a neural network can be employed, using only the HR and LR image data [42]. A sketch of the forward degradation model is given below.
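As a minimal sketch of the degradation model of Equation (2), the snippet below simulates a low-resolution observation Y from a high-resolution image X through blur, decimation and additive noise; the blur kernel, scale factor and noise level are illustrative assumptions, not values used in this study.

```python
# Simulated degradation Y = F(X; sigma): blur + decimation + noise (illustrative).
import cv2
import numpy as np

def degrade(x_hr: np.ndarray, scale: int = 2, sigma_noise: float = 2.0) -> np.ndarray:
    blurred = cv2.GaussianBlur(x_hr, ksize=(7, 7), sigmaX=1.5)        # optical blur
    h, w = blurred.shape[:2]
    decimated = cv2.resize(blurred, (w // scale, h // scale),
                           interpolation=cv2.INTER_AREA)              # down-sampling
    noisy = decimated.astype(np.float64) + np.random.normal(0.0, sigma_noise, decimated.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)                    # additive sensor noise

x_hr = cv2.imread("frame_high_res.jpg")  # hypothetical high-resolution frame
y_lr = degrade(x_hr)                     # simulated low-resolution observation
```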
Learning the end-to-end mapping function F requires the estimation of the network parameters Θ = {W1, W2, W3, B1, B2, B3}. The estimation is carried out by minimizing the MSE between the reconstructed images F(Y; Θ) and the corresponding high-resolution ground-truth images X. Given a set of high-resolution images {Xi} and their low-resolution counterparts {Yi}, the loss function is (Equation (3)):
$$L(\Theta)=\frac{1}{n}\sum_{i=1}^{n}\left\lVert F(Y_i;\Theta)-X_i\right\rVert^{2} \tag{3}$$
where n is the number of training samples. Using the MSE as the loss function favours a high PSNR, a widely used metric for quantitatively evaluating image restoration quality that is partially related to perceptual quality. Dong et al. [26] demonstrated that the weight update of Equation (4) performs well; the loss is minimized using stochastic gradient descent with standard backpropagation:
$$\Delta_{i+1}=0.9\,\Delta_i-\eta\,\frac{\partial L}{\partial W_i^{\ell}},\qquad W_{i+1}^{\ell}=W_i^{\ell}+\Delta_{i+1} \tag{4}$$
where ℓ ∈ {1, 2, 3} and i are the indices of layers and iterations, η is the learning rate, and ∂L/∂W_i^ℓ is the derivative of the loss with respect to the weights. The filter weights of each layer are initialized by drawing randomly from a Gaussian distribution with zero mean and standard deviation 0.001 (and 0 for the biases). The learning rate is 10−4 for the first two layers and 10−5 for the last layer; Dong et al. empirically found that a smaller learning rate in the last layer is important for the convolutional neural network to converge [26]. A minimal training sketch is given below.
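For concreteness, the following PyTorch sketch reproduces the three-layer SRCNN training step described by Equations (3) and (4), using the 9-1-5 filter setting of Dong et al. [26]. It is a schematic reconstruction of that paper's setup, not the Morpho implementation used later in this study; the random tensors stand in for real (bicubic-upsampled LR, HR) patch pairs.

```python
# Three-layer SRCNN with MSE loss and momentum SGD (Equations (3) and (4)).
import torch
import torch.nn as nn

class SRCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 64, kernel_size=9, padding=4)   # patch extraction
        self.conv2 = nn.Conv2d(64, 32, kernel_size=1)             # non-linear mapping
        self.conv3 = nn.Conv2d(32, 1, kernel_size=5, padding=2)   # reconstruction
        for m in (self.conv1, self.conv2, self.conv3):
            nn.init.normal_(m.weight, mean=0.0, std=1e-3)          # N(0, 0.001) init
            nn.init.zeros_(m.bias)                                 # biases set to 0

    def forward(self, y):
        y = torch.relu(self.conv1(y))
        y = torch.relu(self.conv2(y))
        return self.conv3(y)

model = SRCNN()
criterion = nn.MSELoss()                # loss of Equation (3)
optimizer = torch.optim.SGD([           # update of Equation (4): momentum 0.9
    {"params": model.conv1.parameters(), "lr": 1e-4},
    {"params": model.conv2.parameters(), "lr": 1e-4},
    {"params": model.conv3.parameters(), "lr": 1e-5},  # smaller lr in the last layer
], momentum=0.9)

# One illustrative training step on random stand-in tensors:
y_lr = torch.rand(8, 1, 33, 33)         # bicubic-upsampled LR patches
x_hr = torch.rand(8, 1, 33, 33)         # ground-truth HR patches
loss = criterion(model(y_lr), x_hr)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```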
The quality of the images, and the corresponding 3D model, significantly improves with the use of interpolation provided by the end-to-end mapping function.

3.2. Photogrammetric Technique for Pavement Distress Detection Using Images Obtained by Drone

Several studies have been conducted using photogrammetric techniques for pavement distress analysis. Nevertheless, the aspects most often considered were the detection of cars or vegetation; regarding the geometric accuracy required for pavement distress detection, no articles investigate improving the quality of the source images. The experimentation conducted in this paper shows that, once the UAV photogrammetric survey has been carried out, it is possible to upgrade the resolution of the source images to achieve a more detailed 3D model.
A DJI MAVIC 2 Pro drone with the features shown in Table 1 was used (Figure 2).
To achieve good survey results, it is necessary to plan the flight path. Usually, when using a camera mounted on a drone, it is better to follow a serpentine path rather than a straight-line path; nevertheless, this depends on the features of the road and of its edges. In our case, the flight was performed as a circular "hyperlapse", due to the width of the road and the presence of light poles in its centre, taking care to guarantee the photogrammetric overlap needed for the recognition of homologous points.
The most critical parameter to be considered is the ground sampling distance (GSD). The GSD represents the smallest detail that can be accurately observed in an image; the smaller the GSD, the finer the measurable details [17]. Models are interpreted from this parameter, since it has been demonstrated that the smallest visible details are two to three times the value of the GSD [43]. According to the manuals, the smallest cracks and common distresses are generally no smaller than 10 mm, and with a resolution of 3 mm these distresses can be accurately identified. Therefore, to appropriately reach a 3 mm resolution for pavement distresses, the GSD should be no greater than 1 mm.
The GSD is given by Equation (5) below, where D is the object distance, f the focal length, and px_size the pixel size:
$$\mathrm{GSD}=\frac{D\cdot px_{\mathrm{size}}}{f} \tag{5}$$
The field of view (FOV) was 46.7°, calculated using Equation (6):
$$\mathrm{FOV}=2\cdot\tan^{-1}\!\left(\frac{d}{2f}\right) \tag{6}$$
where d is the diagonal length of the sensor and f is the focal length. Camera calibration was performed as part of the Structure-from-Motion (SfM) process, which calculated the initial and optimized values of the interior orientation parameters. The images were taken at a flight altitude of 30 m. A numeric check of Equations (5) and (6) is sketched below.
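As a worked check of Equations (5) and (6), the following snippet uses the altitude above and sensor values reported elsewhere in this article (pixel size 2.38 µm and calibrated focal length 10.26 mm from Figure 10, sensor 13.2 × 8.8 mm from Table 1), treated here as nominal, illustrative inputs.

```python
# Numeric check of GSD (Equation (5)) and FOV (Equation (6)) with nominal inputs.
import math

D = 30.0            # object distance (flight altitude), m
px_size = 2.38e-6   # pixel size, m (Figure 10, low-resolution dataset)
f = 10.26e-3        # calibrated focal length, m (Figure 10)

gsd = D * px_size / f                        # Equation (5)
print(f"GSD = {gsd * 1e3:.2f} mm/pix")       # ~6.96 mm/pix at 30 m

# Equation (6) with the sensor short side (8.8 mm) reproduces the ~46.7 deg
# reported above; the full diagonal (~15.9 mm) would give a wider ~75 deg.
d = 8.8e-3
fov = 2 * math.atan(d / (2 * f))             # Equation (6)
print(f"FOV = {math.degrees(fov):.1f} deg")  # ~46.4 deg
```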

4. Investigation

4.1. Road Pavement Monitoring

Road pavement distress detection and analysis is a crucial aspect for transportation authorities seeking to optimize maintenance strategies. Distress evaluation represents one of the most important steps of the well-known pavement management system (PMS) analysis method, which usually requires reliable measurements of the geometric characteristics of the damage [44,45,46]. In particular, the classification of distresses based on their severity requires very accurate measurements, which conventional surveying devices guarantee, although at a high cost in terms of technologies and techniques [35].
In recent years, image-based technologies for automated distress detection have represented an established alternative to conventional ones [47,48,49,50]. High-resolution imagery is central to efficiently detecting and measuring the road surface, even if standard aerial imagery implies limitations in survey handling and high costs [51]. For this reason, UAVs are increasingly employed to achieve high flexibility, lower costs and quickness in the large-scale surveying field. UAVs record images with centimetre spatial resolution, providing sufficient detail for the detection and extraction of some pavement condition features once processed into a 3D reconstruction [51].
In several studies UAV image datasets have been used to reconstruct the road pavement surface, to observe its conditions, and, more specifically, to measure, in an accurate way, the deformations in distresses such as potholes and rutting [13,35,51,52,53,54,55].
In order to identify the severity of certain road surface distresses, a cheaper methodology to process datasets from drone acquisitions is the stereovision approach, which includes photogrammetry and structure-from-motion (SfM) [56]. The 3D model output allows the desired metric information on the surveyed distresses to be extracted efficiently. SfM can be used for pavement distresses such as rutting, block cracking, transverse cracking and potholes, and it enables the surveyor to meet the requirements provided by the pavement distress manuals (Figure 3) [46]. This last aspect has prompted the present research to consider the improvement of UAV image resolution as the main means to achieve the accuracy required by the international distress manuals (Table 2).
At a certain flight altitude, the spatial resolution of the UAV images limits the detection of distresses such as individual cracks, given that their width is mostly less than 0.01 m. However, a UAV equipped with a 12 Mpx CMOS sensor flying at an altitude of 5 m to 10 m can generate a spatial resolution down to the millimetre order of accuracy [35].
To deeply investigate the accuracy of UAV images for road pavement monitoring, and the desirable improvement of them to generate precise 3D models, a case study in Palermo, Italy was performed implementing the previously mentioned Super-resolution approach.

4.2. Case Study

In the present study, the algorithm described above has been used to determine whether the quality of the reconstructed 3D model of a road surface is sensitive to improvements in image quality, and to quantify that improvement. A parking area within the University of Palermo, Italy, was chosen as the test road to avoid the restrictions imposed on drone flights (Figure 4) [57]. Two datasets were collected: a low-resolution image set and a high-resolution one. The low-resolution dataset was then processed with the Morpho super-resolution algorithm and compared with the ground truth source.
The Morpho super-resolution™ algorithm is provided within the software of the same name, in which the following functions are available:
  • Video stabilization Movie Solid, a technique named "electronic image stabilization" that cancels out camera shake electronically by cropping an area of the image. Another technique, called optical image stabilization, mechanically moves the lens to compensate for the shaking of the camera [58]. The huge advantage of electronic over optical image stabilization is the absence of special hardware requirements, so it also works with inexpensive products; the disadvantage is the shrinkage of the effective angle of view, since the image is always cropped.
  • Image stabilization PhotoSolid, which provides sharp images without camera shake or noise [59]. Those who own a single-lens reflex camera may know well that camera shake and noise are the main counterparts of image degradation. Cameras, not limited to those of mobile phones, are devices that measure the amount of incident light: the more light enters a camera, the brighter the image (and vice versa). When taking photos in a dark scene, such as at night, noise prevails over the incoming light, which results in noisy images.
  • Image enhancement Morpho Semantic Filtering™, software that enhances images through AI-based segmentation and pixel filtering [60].
  • Fast AI inference engine SoftNeuro®, which operates in multiple environments, utilizing learning results obtained through a variety of deep learning frameworks. It is user-friendly and does not require any deep learning knowledge. SoftNeuro can also import models from various frameworks and run fast on several different architectures; it is both flexible and fast thanks to the separation of the layer from its execution pattern, a concept called a routine.
The Morpho super-resolution™ software provides valid support for obtaining SR images starting from LR ones, which is one of the most important features in image-based modelling [61]. Once all of the images have been converted from low resolution to super-resolution, the image-based reconstruction can be deployed (Figure 5).
As previously mentioned, the three datasets to process are the low-resolution (drone output), super-resolution (drone outputs processed) and ground truth, respectively.

4.3. SfM Reconstructions

Within the length of the road, a limited area was chosen on which the distress acquisition and processing were focused. After the image processing to convert the low-resolution pictures into super-resolution ones, the output dataset was used to build the dense point cloud of the road pavement surface by means of the Agisoft Metashape Pro software [62]; a sketch of this step is given below.
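The reconstruction step can be scripted; the following is a hedged sketch assuming the Agisoft Metashape 1.x Python API (method names changed in later versions, e.g., buildDenseCloud became buildPointCloud in 2.x), with hypothetical paths.

```python
# SfM/MVS reconstruction sketch with the Metashape 1.x Python API (assumed).
import glob
import Metashape

doc = Metashape.Document()
chunk = doc.addChunk()
chunk.addPhotos(glob.glob("super_res_dataset/*.jpg"))  # hypothetical image folder

chunk.matchPhotos(downscale=1)     # feature detection and matching at full resolution
chunk.alignCameras()               # camera poses + sparse cloud (self-calibration)
chunk.buildDepthMaps(downscale=2)  # multi-view stereo depth maps
chunk.buildDenseCloud()            # dense point cloud used in the comparisons

doc.save("super_res_project.psx")
```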
The same processing was applied to all of the datasets [63] and, consequently, the RMS between the dense clouds generated from the different datasets was assessed. The reliability of the ground-truth processing was also verified [64]. This approach is widely used in computer vision for image detection, but it is not common for SfM editing [65,66].
Secondly, to further validate the methodology, the dense cloud built from the low-resolution image dataset was elaborated [67,68] and compared with the one obtained from the high-resolution images. In this way, it was possible to compare the RMS between the low-resolution dense cloud and the high-resolution source cloud with the RMS between the super-resolution dense cloud and the same high-resolution source [69]. A sketch of the cloud-to-cloud comparison is given below.
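The cloud-to-cloud comparison performed here with CloudCompare can be summarized as follows: for each point of the evaluated cloud, take the distance to its nearest neighbour in the reference cloud and report the RMS. A minimal NumPy/SciPy sketch, assuming the clouds are already scaled and aligned, with hypothetical file names:

```python
# Nearest-neighbour cloud-to-cloud (C2C) RMS, as computed in CloudCompare.
import numpy as np
from scipy.spatial import cKDTree

reference = np.loadtxt("ground_truth_cloud.txt")  # N x 3 XYZ points
evaluated = np.loadtxt("super_res_cloud.txt")     # M x 3 XYZ points

tree = cKDTree(reference)
dist, _ = tree.query(evaluated, k=1)              # distance to nearest reference point

rms = np.sqrt(np.mean(dist ** 2))
print(f"C2C RMS = {rms * 100:.2f} cm")
```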
The following figures (Figure 6, Figure 7, Figure 8 and Figure 9) show the differences between the SfM reconstructions obtained from the different datasets, the low- and super-resolution ones.
The CloudCompare v2 open-source software was used to investigate the quality of the 3D information [70]. Before being imported into the CloudCompare platform, the dense clouds were scaled in Agisoft Metashape during the reconstruction step [71,72,73].
In Figure 6 and Figure 7, the processed low- and super-resolution datasets are shown, respectively.
Figure 8 and Figure 9 present the results of the processing on the low- and super-resolution dense clouds, respectively. As output of the 3D reconstructions processed with the Metashape software, the reports of the acquisitions imported and processed with the SfM methodology, and the calibration coefficients with the correlation matrix, are shown for the low-resolution dataset (Table 3 and Table 4) and the super-resolution dataset (Table 5 and Table 6), respectively.
Figure 10, Figure 11 and Figure 12 show the results for the low-resolution and super-resolution datasets. In particular, the image residuals and the camera locations are represented for both datasets; Table 7 gives the error estimates for the camera locations.

5. Results and Discussion

The optimization of the dataset allowed us to obtain a very important output that can open new frontiers in the road monitoring field.
The alignments of the low- and super-resolution clouds with the ground-truth one show that the metric accuracy increases as the RMSE value decreases, i.e., it increases when the dataset is processed with the super-resolution images.
It is important to underscore that, without any visual adjustment, it would be impossible to align the low-resolution cloud with the ground-truth one, due to the difficulty in recognizing the homologous markers [74]. Figure 13 and Figure 14 show the merged dense clouds of the low- and super-resolution datasets of the road pavement surface with the ground-truth one.
The mean square deviation of the comparison between the clouds shows that, for the low-resolution dataset originating directly from the flight, the value exceeds 10 cm and drops only slightly along the entire comparison surface (red area in the diagram) (Figure 15).
The merged clouds of the different comparisons show that the dense cloud obtained from the super-resolution dataset is closer to the ground truth than the low-resolution one. In fact, the comparison with the super-resolution dataset gives a very different result from the previous one (blue area in the diagram) (Figure 16): the RMS values range from 0 to 1.5 cm.

6. Conclusions

This study was conducted principally to show the applicability of the SRa to images acquired by a drone 30 m above the pavement. This height was chosen as it is likely to be sufficient to avoid most physical obstacles in real-world applications. Naturally, images taken from this height are more susceptible to noise and other interference and, as a result, the fine detail required to conduct an accurate analysis is often lost or obscured. The SRa is designed to digitally enhance the quality of low-resolution images, and the results presented demonstrate the algorithm's applicability in increasing the accuracy of the resulting 3D model. This was demonstrated through a dramatic reduction in the RMSE, from in excess of 10 cm to between 0 and 1.5 cm.
The results achieved demonstrate that, in the case of low-resolution images, it is opportune to increase the quality of the source images using super-resolution image software to obtain a better-quality model. In this investigation, the performance of super-resolution imaging was evaluated not so much for image recognition as for the construction of a 3D model from images. The proposed methodology comprised two main phases: the first to process the low-resolution images into super-resolution ones, and the second to process the dataset for the 3D reconstruction. Finally, the dense clouds were compared to verify the quality of the 3D model information. In other words, the comparison was conducted on the dense cloud generated from the dataset processed from low to super resolution. The RMS values achieved demonstrate an RMS ≥ 10 cm for the comparison involving the low-resolution pavement distress dense cloud and 0 ≤ RMS ≤ 1.5 cm for the super-resolution dense cloud. In other words, in the case of a low-resolution dataset, super-resolution image processing improves the quality of the 3D model (Figure 17).
The metric accuracy changes according to the matrix transformation of the dataset images. Along the path of the super-resolution algorithm, the matrix transformation achieves different accuracy values. Figure 17 shows that the geometric accuracy values of the super-resolution dataset are always better than those of the original one.
In future studies, the authors intend to investigate more deeply the effects of the SRa on low-resolution images to create accurate 3D models from which distresses in road pavements can be easily and accurately detected. This study serves as a proof of concept tying together techniques from multiple scientific disciplines; the results presented are promising and therefore ripe for further exploration.

Author Contributions

Conceptualization, L.I.; data curation, L.I.; formal analysis, F.A.; funding acquisition, G.D.M.; investigation, L.I., F.A. and G.D.M.; methodology, L.I.; resources, F.A. and M.Z.U.; supervision, L.I. and G.D.M.; validation, L.I., G.D.M. and M.Z.U.; writing—original draft, L.I. and F.A.; Writing—review & editing, F.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research has been produced with the financial assistance of the European Union under the ENI CBC Mediterranean Sea Basin Program, for Education, Research, technological development and Innovation, under the grant agreement n.28/1682.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

This paper has been supported by the ENI CBC Mediterranean Sea Basin Program, for Education, Research, technological development and Innovation.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Takeda, H.; Farsiu, S.; Milanfar, P. Kernel regression for image processing and reconstruction. IEEE Trans. Image Process. 2007, 16, 349–366. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  2. Takeda, H.; Milanfar, P.; Protter, M.; Elad, M. Super-resolution without explicit subpixel motion estimation. IEEE Trans. Image Process. 2009, 18, 1958–1975. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  3. Park, S.C.; Park, M.K.; Kang, M.G. Super-resolution image reconstruction: A technical overview. IEEE Signal Process. Mag. 2003, 20, 21–36. [Google Scholar] [CrossRef] [Green Version]
  4. Ng, M.K.; Yip, A.M. A fast MAP algorithm for high-resolution image reconstruction with multisensors. Multidimens. Syst. Signal Process. 2001, 12, 143–164. [Google Scholar] [CrossRef]
  5. Farsiu, S.; Robinson, D.; Elad, M.; Milanfar, P. Robust shift and add approach to superresolution. In Proceedings of the Applications of Digital Image Processing XXVI, San Diego, CA, USA, 3–8 August 2003; Volume 5203. [Google Scholar] [CrossRef]
  6. Farsiu, S.; Robinson, M.D.; Elad, M.; Milanfar, P. Fast and robust multiframe super resolution. IEEE Trans. Image Process. 2004, 13, 1327–1344. [Google Scholar] [CrossRef]
  7. Ng, M.K.; Shen, H.; Lam, E.Y.; Zhang, L. A total variation regularization based super-resolution reconstruction algorithm for digital video. EURASIP J. Adv. Signal Process. 2007, 2007, 074585. [Google Scholar] [CrossRef] [Green Version]
  8. Liu, C.; Sun, D. On bayesian adaptive video super resolution. IEEE Trans. Pattern Anal. Mach. Intell. 2014, 36, 346–360. [Google Scholar] [CrossRef] [Green Version]
  9. Shen, H.; Zhang, L.; Huang, B.; Li, P. A MAP approach for joint motion estimation, segmentation, and super resolution. IEEE Trans. Image Process. 2007, 16, 479–490. [Google Scholar] [CrossRef]
  10. Zhang, H.; Zhang, L.; Shen, H. A Blind Super-Resolution Reconstruction Method Considering Image Registration Errors. Int. J. Fuzzy Syst. 2015, 17, 353–364. [Google Scholar] [CrossRef]
  11. Elad, M.; Feuer, A. Restoration of a single superresolution image from several blurred, noisy, and undersampled measured images. IEEE Trans. Image Process. 1997, 6, 1646–1658. [Google Scholar] [CrossRef] [Green Version]
  12. Inzerillo, L. SfM Techniques Applied in Bad Lighting and Reflection Conditions: The Case of a Museum Artwork. In Advances in Intelligent Systems and Computing; Springer: Cham, Switzerland, 2020; Volume 943. [Google Scholar] [CrossRef]
  13. Roberts, R.; Inzerillo, L.; Di Mino, G. Developing a framework for using structure-from-motion techniques for road distress applications. Eur. Transp.-Trasp. Eur. 2020, 77, 1–11. [Google Scholar] [CrossRef]
  14. Fan, B.; Kong, Q.; Wang, X.; Wang, Z.; Xiang, S.; Pan, C.; Fua, P. A performance evaluation of local features for image-based 3D reconstruction. IEEE Trans. Image Process. 2019, 28, 4774–4789. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  15. Qiao, G.; Lu, P.; Scaioni, M.; Xu, S.; Tong, X.; Feng, T.; Wu, H.; Chen, W.; Tian, Y.; Wang, W.; et al. Landslide investigation with remote sensing and sensor network: From susceptibility mapping and scaled-down simulation towards in situ sensor network design. Remote Sens. 2013, 5, 4319–4346. [Google Scholar] [CrossRef] [Green Version]
  16. Barazzetti, L. Planar metric rectification via parallelograms. In Proceedings of the Videometrics, Range Imaging, and Applications XI, Munich, Germany, 23–26 May 2011; Volume 8085. [Google Scholar] [CrossRef]
  17. Remondino, F.; Nocerino, E.; Toschi, I.; Menna, F. A critical review of automated photogrammetric processing of large datasets. In International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences—ISPRS Archives; Copernicus Publications: Göttingen, Germany, 2017; Volume 42. [Google Scholar] [CrossRef] [Green Version]
  18. Rublee, E.; Rabaud, V.; Konolige, K.; Bradski, G. ORB: An efficient alternative to SIFT or SURF. In Proceedings of the IEEE International Conference on Computer Vision, Barcelona, Spain, 6–13 November 2011. [Google Scholar] [CrossRef]
  19. Yang, C.Y.; Ma, C.; Yang, M.H. Single-image super-resolution: A benchmark. In Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Springer Science + Business Media: Berlin, Germany, 2014; Volume 8692 LNCS. [Google Scholar] [CrossRef] [Green Version]
  20. Glasner, D.; Bagon, S.; Irani, M. Super-resolution from a single image. In Proceedings of the IEEE International Conference on Computer Vision, Kyoto, Japan, 29 September–2 October 2009. [Google Scholar] [CrossRef] [Green Version]
  21. Kim, K.I.; Kwon, Y. Single-image super-resolution using sparse regression and natural image prior. IEEE Trans. Pattern Anal. Mach. Intell. 2010, 32, 1127–1133. [Google Scholar] [CrossRef]
  22. Yang, J.; Lin, Z.; Cohen, S. Fast image super-resolution based on in-place example regression. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Portland, OR, USA, 23–28 June 2013. [Google Scholar] [CrossRef] [Green Version]
  23. Timofte, R.; De, V.; van Gool, L. Anchored neighborhood regression for fast example-based super-resolution. In Proceedings of the IEEE International Conference on Computer Vision, Sydney, Australia, 1–8 December 2013. [Google Scholar] [CrossRef]
  24. Timofte, R.; De Smet, V.; Van Gool, L. A+: Adjusted anchored neighborhood regression for fast super-resolution. In Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Springer Science + Business Media: Berlin, Germany, 2015; Volume 9006. [Google Scholar] [CrossRef]
  25. Ahmadian, K.; Reza-Alikhani, H.-R. Single image super-resolution with self-organization neural networks and image Laplace gradient operator. Multimed. Tools Appl. 2022, 81, 10607–10630. [Google Scholar] [CrossRef]
  26. Dong, C.; Loy, C.C.; He, K.; Tang, X. Image Super-Resolution Using Deep Convolutional Networks. IEEE Trans. Pattern Anal. Mach. Intell. 2016, 38, 295–307. [Google Scholar] [CrossRef] [Green Version]
  27. Burger, H.C.; Schuler, C.J.; Harmeling, S. Image denoising: Can plain neural networks compete with BM3D? In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Providence, RI, USA, 16–21 June 2012. [Google Scholar] [CrossRef] [Green Version]
  28. Schuler, C.J.; Burger, H.C.; Harmeling, S.; Scholkopf, B. A machine learning approach for non-blind image deconvolution. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Portland, OR, USA, 23–28 June 2013. [Google Scholar]
  29. Li, X.; Orchard, M.T. New edge-directed interpolation. IEEE Trans. Image Process. 2001, 10, 1521–1527. [Google Scholar] [CrossRef] [Green Version]
  30. Schulter, S.; Leistner, C.; Bischof, H. Fast and accurate image upscaling with super-resolution forests. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015. [Google Scholar] [CrossRef]
  31. Xiang, C.; Wang, W.; Deng, L.; Shi, P.; Kong, X. Crack detection algorithm for concrete structures based on super-resolution reconstruction and segmentation network. Autom. Constr. 2022, 140, 104346. [Google Scholar] [CrossRef]
  32. Liu, Y.; Yeoh, J.K.W.; Chua, D.K.H. Deep Learning–Based Enhancement of Motion Blurred UAV Concrete Crack Images. J. Comput. Civ. Eng. 2020, 34, 04020028. [Google Scholar] [CrossRef]
  33. Kim, H.; Lee, J.; Ahn, E.; Cho, S.; Shin, M.; Sim, S.H. Concrete crack identification using a UAV incorporating hybrid image processing. Sensors 2017, 17, 2052. [Google Scholar] [CrossRef] [Green Version]
  34. Ellenberg, A.; Kontsos, A.; Moon, F.; Bartoli, I. Bridge related damage quantification using unmanned aerial vehicle imagery. Struct. Control Health Monit. 2016, 23, 1168–1179. [Google Scholar] [CrossRef]
  35. Inzerillo, L.; Di Mino, G.; Roberts, R. Image-based 3D reconstruction using traditional and UAV datasets for analysis of road pavement distress. Autom. Constr. 2018, 96, 457–469. [Google Scholar] [CrossRef]
  36. Bae, H.; Jang, K.; An, Y.K. Deep super resolution crack network (SrcNet) for improving computer vision–based automated crack detectability in in situ bridges. Struct. Health Monit. 2021, 20, 1428–1442. [Google Scholar] [CrossRef]
  37. Kim, J.; Shim, S.; Cho, G.C. A Study on the Crack Detection Performance for Learning Structure Using Super-Resolution. 2021. Available online: http://www.i-asem.org/publication_conf/asem21/6.TS/3.W5A/4.TS1406_6949.pdf (accessed on 20 May 2022).
  38. Kondo, Y.; Ukita, N. Crack segmentation for low-resolution images using joint learning with super- resolution. In Proceedings of the MVA 2021—17th International Conference on Machine Vision Applications, Aichi, Japan, 25–27 July 2021. [Google Scholar] [CrossRef]
  39. Sathya, K.; Sangavi, D.; Sridharshini, P.; Manobharathi, M.; Jayapriya, G. Improved image based super resolution and concrete crack prediction using pre-trained deep learning models. J. Soft Comput. Civ. Eng. 2020, 4, 40–51. [Google Scholar] [CrossRef]
  40. Jin, Y.; Mishkin, D.; Mishchuk, A.; Matas, J.; Fua, P.; Yi, K.M.; Trulls, E. Image Matching Across Wide Baselines: From Paper to Practice. Int. J. Comput. Vis. 2020, 129, 517–547. [Google Scholar] [CrossRef]
  41. Low, D.G. Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vis. 2004, 60, 91–110. [Google Scholar] [CrossRef]
  42. Verdie, Y.; Yi, K.M.; Fua, P.; Lepetit, V. TILDE: A Temporally Invariant Learned DEtector. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015. [Google Scholar] [CrossRef] [Green Version]
  43. Höhle, J. Oblique aerial images and their use in cultural heritage documentation. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2013, XL-5/W2, 349–354. [Google Scholar] [CrossRef] [Green Version]
  44. Di Mino, G.; Salvo, G.; Noto, S. Pavement management system model using a LCCA—Microsimulation integrated approach. Adv. Transp. Stud. 2014, 1, 101–112. [Google Scholar] [CrossRef]
  45. Arhin, S.A.; Williams, L.N.; Ribbiso, A.; Anderson, M.F. Predicting Pavement Condition Index Using International Roughness Index in a Dense Urban Area. J. Civ. Eng. Res. 2015, 2015. [Google Scholar]
  46. Miller, J.S.; Bellinger, W.Y. Distress Identification Manual for the Long-Term Pavement Performance Program. Publ. US Dep. Transp. Fed. Highw. Adm. 2003. [Google Scholar]
  47. Puan, O.C.; Mustaffar, M.; Ling, T.-C. Automated Pavement Imaging Program (APIP) for Pavement Cracks Classification and Quantification. Malays. J. Civ. Eng. 2007, 19, 1–16. [Google Scholar]
  48. Chambon, S.; Moliard, J.M. Automatic road pavement assessment with image processing: Review and comparison. Int. J. Geophys. 2011, 2011, 1–20. [Google Scholar] [CrossRef] [Green Version]
  49. Wang, K.C.P.; Gong, W. Automated pavement distress survey: A review and a new direction. Pavement Eval. Conf. 2002, 21–25. [Google Scholar]
  50. Wang, K.C.P. Elements of automated survey of pavements and a 3D methodology. J. Mod. Transp. 2011, 19, 51–57. [Google Scholar] [CrossRef] [Green Version]
  51. Zhang, C. An UAV-based photogrammetric mapping system for road condition assessment. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2008, 37, 627–632. [Google Scholar]
  52. Roberts, R.; Inzerillo, L.; Di Mino, G. Using uav based 3d modelling to provide smart monitoring of road pavement conditions. Information 2020, 11, 568. [Google Scholar] [CrossRef]
  53. Inzerillo, L.; Roberts, R. 3d image based modelling using google earth imagery for 3d landscape modelling. In Advances in Intelligent Systems and Computing; Springer: Berlin/Heidelberg, Germany, 2019; Volume 919. [Google Scholar] [CrossRef]
  54. Pan, Y.; Zhang, X.; Cervone, G.; Yang, L. Detection of Asphalt Pavement Potholes and Cracks Based on the Unmanned Aerial Vehicle Multispectral Imagery. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 3701–3712. [Google Scholar] [CrossRef]
  55. Kang, D.; Cha, Y.J. Autonomous UAVs for Structural Health Monitoring Using Deep Learning and an Ultrasonic Beacon System with Geo-Tagging. Comput. Civ. Infrastruct. Eng. 2018, 33, 885–902. [Google Scholar] [CrossRef]
  56. Westoby, M.J.; Brasington, J.; Glasser, N.F.; Hambrey, M.J.; Reynolds, J.M. “Structure-from-Motion” photogrammetry: A low-cost, effective tool for geoscience applications. Geomorphology 2012, 179, 300–314. [Google Scholar] [CrossRef] [Green Version]
  57. Shen, T.; Luo, Z.; Zhou, L.; Zhang, R.; Zhu, S.; Fang, T.; Quan, L. Matchable Image Retrieval by Learning from Surface Reconstruction. In Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Springer Science + Business Media: Berlin/Heidelberg, Germany, 2019; Volume 11361 LNCS. [Google Scholar] [CrossRef] [Green Version]
  58. Luo, Z.; Zhou, L.; Bai, X.; Chen, H.; Zhang, J.; Yao, Y.; Li, S.; Fang, T.; Quan, L. ASLFeat: Learning local features of accurate shape and localization. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020. [Google Scholar] [CrossRef]
  59. Schönberger, J.L.; Hardmeier, H.; Sattler, T.; Pollefeys, M. Comparative evaluation of hand-crafted and learned local features. In Proceedings of the 30th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, Honolulu, HI, USA, 21–26 July 2017; Volume 2017-January. [Google Scholar] [CrossRef]
  60. Moulon, P.; Monasse, P.; Perrot, R.; Marlet, R. OpenMVG: Open multiple view geometry. In Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Springer Science + Business Media: Berlin/Heidelberg, Germany, 2017; Volume 10214 LNCS. [Google Scholar] [CrossRef] [Green Version]
  61. Freedman, G.; Fattal, R. Image and video upscaling from local self-examples. ACM Trans. Graph. 2011, 30, 1–11. [Google Scholar] [CrossRef] [Green Version]
  62. Fedele, R.; Scaioni, M.; Barazzetti, L.; Rosati, G.; Biolzi, L. Delamination tests on CFRP-reinforced masonry pillars: Optical monitoring and mechanical modeling. Cem. Concr. Compos. 2014, 45, 243–254. [Google Scholar] [CrossRef]
  63. Jancosek, M.; Pajdla, T. Multi-view reconstruction preserving weakly-supported surfaces. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Colorado Springs, CO, USA, 20–25 June 2011. [Google Scholar] [CrossRef]
  64. Gruen, A. Development and Status of Image Matching in Photogrammetry. Photogramm. Rec. 2012, 27, 36–57. [Google Scholar] [CrossRef]
  65. Barazzetti, L.; Scaioni, M. Crack measurement: Development, testing and applications of an automatic image-based algorithm. ISPRS J. Photogramm. Remote Sens. 2009, 64, 285–296. [Google Scholar] [CrossRef]
  66. Barazzetti, L.; Scaioni, M. Development and implementation of image-based algorithms for measurement of deformations in material testing. Sensors 2010, 10, 7469–7495. [Google Scholar] [CrossRef] [Green Version]
  67. Fraser, C.S. Photogrammetric measurement to one part in a million. Photogramm. Eng. Remote Sens. 1992, 58, 305–310. [Google Scholar]
  68. Fraser, C.S. Automatic camera calibration in close range photogrammetry. Photogramm. Eng. Remote Sens. 2013, 79, 381–388. [Google Scholar] [CrossRef] [Green Version]
  69. Stathopoulou, E.K.; Welponer, M.; Remondino, F. Open-source image-based 3D reconstruction pipelines: Review, comparison and evaluation. In International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences—ISPRS Archives; Copernicus Publications: Göttingen, Germany, 2019; Volume 42. [Google Scholar] [CrossRef] [Green Version]
  70. Niederheiser, R.; Mokroš, M.; Lange, J.; Petschko, H.; Prasicek, G.; Elberink, S.O. Deriving 3d point clouds from terrestrial photographs—Comparison of different sensors and software. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2016, XLI-B5, 685–692. [Google Scholar] [CrossRef] [Green Version]
  71. Di Filippo, A.; Villecco, F.; Cappetti, N.; Barba, S. A Methodological Proposal for the Comparison of 3D Photogrammetric Models. In Lecture Notes in Mechanical Engineering; Springer: Berlin/Heidelberg, Germany, 2022. [Google Scholar] [CrossRef]
  72. Barba, S.; Ferreyra, C.; Cotella, V.A.; di Filippo, A.; Amalfitano, S. A SLAM Integrated Approach for Digital Heritage Documentation. In Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Springer Science + Business Media: Berlin/Heidelberg, Germany, 2021; Volume 12794 LNCS. [Google Scholar] [CrossRef]
  73. Morena, S.; Molero Alonso, B.; Barrera-Vera, J.A.; Barba, S. As-built graphic documentation of the Monumento a la Tolerancia. Validation of low-cost survey techniques. EGE-Expresión Gráfica Edif. 2020, 98–114. [Google Scholar] [CrossRef]
  74. Fukozono, T. Recent studies on time prediction of slope failure. Landslide News 1990, 4, 9–12. [Google Scholar]
Figure 1. Proposed methodology overview.
Figure 2. DJI MAVIC 2 Pro drone used for the surveys.
Figure 3. Road pavement distress examples: (a) rutting; (b) transverse cracking; (c) block cracking; (d) potholes.
Figure 4. Low-resolution image of the case study from the drone camera.
Figure 5. Two different steps in the processing to carry out the super-resolution image from a low-resolution one: f1 × f1 and f3 × f3. Starting from a low-resolution image (Y), the first functional layer of the process extracts a set of feature maps; the last layer combines the predictions within a spatial neighbourhood to produce the final high-resolution image F(Y), called the super-resolution image. Between the two layers lies the non-linear mapping. This is the sparse-coding-based method viewed as a convolutional neural network.
Figure 6. SfM reconstruction using low-resolution image dataset: (a) model shaded view; (b) solid mesh view.
Figure 7. SfM reconstruction using super-resolution image dataset: (a) model shaded view; (b) solid mesh view.
Figure 8. Dense cloud of low-resolution dataset.
Figure 9. Dense cloud of super-resolution dataset.
Figure 10. Image residuals: (a) low-resolution dataset (resolution 5568 × 3648; focal length 10.26 mm; pixel size 2.38 × 2.38 µm); (b) super-resolution dataset (resolution 9000 × 5897; focal length 10.26 mm; pixel size 1.47 × 1.47 µm).
Figure 11. Reconstructed digital elevation model: (a) low-resolution dataset (resolution 2.47 cm/pix; point density 0.164 points/cm2); (b) super-resolution dataset (resolution 1.42 cm/pix; point density 0.194 points/cm2).
Figure 12. Camera locations and error estimation: (a) LR; (b) SR.
Figure 13. Merged dense cloud of the low-resolution image dataset and the ground truth dense cloud.
Figure 14. Merged dense cloud of the super-resolution image dataset and the ground truth dense cloud.
Figure 15. RMSE histogram in cm for the low-resolution dense cloud against the ground truth cloud.
Figure 16. RMSE histogram in cm for the super-resolution dense cloud against the ground truth cloud.
Figure 17. RMS [m] and matrix transformation for both low- and super-resolution images.
Table 1. UAV and camera settings for the surveys.

Device                           DJI Mavic 2 Pro    Camera 1
Camera resolution (megapixel)    20                 20.9
Image size (pixel)               5568 × 3648        5568 × 3712
Sensor size (mm)                 13.2 × 8.8         23.5 × 17.5
Focal length (35 mm eq.)         28                 24
ISO                              200                100
Shutter speed                    1/60 to 1/125      1/250
Aperture                         f/5.6              f/8

1 A Nikon Zfc mirrorless camera was used for the ground truth survey.
Table 2. Road pavement distress manuals indications related to 4 considered distresses.

Distress               Indicator          Severity Levels 1,2
Block Cracking         Crack Width (mm)   3–19 mm
Transverse Cracking    Crack Width (mm)   3–19 mm
Potholes               Depth (mm)         25–50 mm
Rutting                Depth (mm)         >12 mm

1 In the distress identification manuals [46], for cracking phenomena, a low level is defined for width values less than 6 mm, a medium level for width values from 6 to 19 mm and a high severity level for width values greater than 19 mm. 2 Concerning potholes and rutting distresses, there is no distinction in severity levels.
Table 3. Summary of 3D model reconstruction from LR images.

Number of images:     20               Camera stations:     20
Flying altitude:      49.7 m           Tie points:          11,985
Ground resolution:    1.24 cm/pix      Projections:         36,698
Coverage area:        5.01 × 10³ m²    Reprojection error:  0.823 pix
Table 4. Calibration coefficients and correlation matrix related to the LR 3D model.

       Value        Error        F      Cx     Cy     K1     K2     K3     P1     P2
F      4295.9       2.1          1.00   −0.32  −1.00  −0.18  0.22   −0.28  −0.09  0.06
Cx     53.9991      0.24                1.00   0.33   −0.04  0.00   0.04   0.94   −0.11
Cy     65.7928      3.1                        1.00   0.15   −0.19  0.25   0.10   −0.07
K1     −0.015095    0.00022                           1.00   −0.97  0.91   −0.05  −0.01
K2     0.030799     0.00075                                  1.00   −0.98  0.04   0.05
K3     −0.031978    0.00083                                         1.00   −0.01  −0.05
P1     0.003128     1.9 × 10⁻⁵                                             1.00   −0.03
P2     −0.00052     1.3 × 10⁻⁵                                                    1.00
Table 5. Summary of 3D model reconstruction from SR images.

Number of images:     20               Camera stations:     20
Flying altitude:      51.6 m           Tie points:          19,855
Ground resolution:    7.11 mm/pix      Projections:         38,519
Coverage area:        4.77 × 10³ m²    Reprojection error:  1.05 pix
Table 6. Calibration coefficients and correlation matrix related to the SR 3D model.

       Value        Error        F      Cx     Cy     K1     K2     K3     P1     P2
F      6950.26      2.6          1.00   −0.29  −1.00  −0.18  0.21   −0.26  −0.04  0.07
Cx     85.7596      0.26                1.00   0.30   −0.07  0.04   0.00   0.94   −0.13
Cy     97.1246      3.8                        1.00   0.14   −0.18  0.24   0.05   −0.07
K1     −0.0151      0.00015                           1.00   −0.96  0.91   −0.09  −0.03
K2     0.029503     0.00054                                  1.00   −0.98  0.08   0.04
K3     −0.02972     0.0006                                          1.00   −0.05  −0.04
P1     0.003039     1.3 × 10⁻⁵                                             1.00   −0.04
P2     −0.00049     9.3 × 10⁻⁶                                                    1.00
Table 7. Average camera location error for both SR and LR 3D models.

             X Error (cm)   Y Error (cm)   Z Error (cm)   XY Error (cm)   Tot. Error (cm)
LR model 1   5.135658       18.424         17.5754        19.8569         112.0556
SR model 2   1.22132        23.8425        22.3766        24.7335         25.6146

1 Errors from the low-resolution images 3D model. 2 Errors from the super-resolution images 3D model.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
