Article

Digital Outcrop Model Generation from Hybrid UAV and Panoramic Imaging Systems

by Alysson Soares Aires 1,*, Ademir Marques Junior 1, Daniel Capella Zanotta 1, André Luiz Durante Spigolon 2, Mauricio Roberto Veronez 1 and Luiz Gonzaga, Jr. 1

1 Vizlab—X-Reality and Geoinformatics Lab, Department of Graduate Program in Applied Computing, UNISINOS University, São Leopoldo 93022-750, Brazil
2 Petrobras Research and Development Center (CENPES), Rio de Janeiro 21941-915, Brazil
* Author to whom correspondence should be addressed.
Remote Sens. 2022, 14(16), 3994; https://doi.org/10.3390/rs14163994
Submission received: 12 July 2022 / Revised: 9 August 2022 / Accepted: 11 August 2022 / Published: 17 August 2022
(This article belongs to the Section Remote Sensing in Geology, Geomorphology and Hydrology)

Abstract

The study of outcrops in the geosciences is being significantly improved by the enhancement of technologies that aim to build digital outcrop models (DOMs). Usually, the virtual environment is built from a collection of partially overlapping photographs taken from diverse perspectives, frequently using unmanned aerial vehicles (UAV). However, in situations including very steep features or even sub-vertical patterns, incomplete coverage of objects is expected. This work proposes an integration framework that uses terrestrial spherical panoramic images (SPI), acquired by an omnidirectional fusion camera, together with a UAV survey to overcome the gaps left by traditional mapping of complex natural structures, such as outcrops. The omnidirectional fusion camera produces wide-field-of-view images from different perspectives, which are able to considerably improve the representation of the DOM, mainly where the UAV has geometric viewing restrictions. We designed controlled experiments to verify that SPI performance is equivalent to that of the UAV. The adaptive integration is accomplished through an optimized selective strategy based on an octree framework. The quality of the 3D model generated using this approach was assessed by quantitative and qualitative indicators. The results show the potential of generating a more reliable 3D model using SPI allied with UAV image data while reducing field survey time and complexity.

1. Introduction

Outcrops are bodies of rock that have been exposed at the surface by natural processes or human activities. The oil and gas industry has long benefited from outcrops as analogues for subsurface reservoirs, i.e., with characteristics similar to the structures where oil and gas are actually found [1]. These benefits are related to the continuous, easily accessible and high-quality quantitative data that well-exposed analogue outcrops can offer, in contrast to labor-intensive traditional subsurface data acquisition methods [2,3]. The literature has shown that most real reservoir characteristics can be properly extracted using analogue outcrops [4].
One convenient assessment method that has been attracting increasing attention is representation through digital outcrop models (DOMs). The generation of DOMs (i.e., 3D polygonal meshes of outcrops with photo-realistic texture) has become more frequent and accurate with the advancement of survey and processing techniques. Building DOMs usually starts by collecting a set of photographic frames in a controlled fashion via aerial platforms such as unmanned aerial vehicles (UAV) [5,6]. The acquisition is performed under a previously defined flight plan, which guarantees the necessary overlap between consecutive frames for subsequently producing the 3D model. The procedure involves photogrammetry principles and computer vision algorithms [7] and is widely employed, since it is a cheap, rapid and effective technique [8,9,10].
A particular limitation faced when reconstructing outcrops using photogrammetry principles is that the collection of frames must properly observe the outcrop from all perspectives. However, many outcrops present steep walls, including many small subvertical structures (vertical angles smaller than 0°), which prevents observing the outcrop structures from sufficient perspectives. Moreover, approaching the outcrop with a UAV or flying at low altitude to increase the number of perspectives may result in collisions or other accidents that can damage the equipment. In most cases, the user must complement the dataset with terrestrial approaches, in which a digital camera is handled manually along the entire lower portion of the outcrop. However, handling the digital camera without any pre-defined planning can result in images with varied scales and possible gaps, which dramatically impact the final model and, in some cases, are only noticed after the field survey has ended.
Spherical panoramic images (SPIs) can circumvent the above-mentioned problem by automatically acquiring full sets of images able to feed reconstruction models from the bottom perspective. An SPI (also known as a 360° image) can provide a wide field of view of the object of interest and its surroundings in a single capture, avoiding many photo acquisitions [11]. Some studies also report that using SPIs for 3D modeling allowed important time savings by avoiding unnecessary reworking [12,13,14,15,16,17,18]. In this work, we propose using SPIs along with UAV data for digital outcrop modeling in a hybrid fashion. The integration is accomplished by a region-based optimization strategy that selects the best point source to compose the final model. While a UAV is able to capture data from the aerial perspective, SPI data properly capture base features from the ground perspective. The integrated product thus combines the advantages of both methods in a complementary fashion, circumventing each system's limitations.
Previous studies have solidly demonstrated the effectiveness of SPI data for 3D reconstruction purposes in approaches similar to the one discussed in the present work. We have found that the GoPro Fusion camera is the model most frequently used in scientific papers. One study [14] used the GoPro Fusion camera onboard a UAV, in addition to a common frame camera, for 3D reconstruction of a castle for cultural heritage documentation. The authors justified the integration of both sensors by the need to reconstruct, in a single flight, both horizontal and vertical features of the building, which usually requires more than one flight plan with different camera angle configurations for proper modeling. The accuracy reported for the 3D model was calculated using artificial targets surveyed with global navigation satellite system (GNSS) receiver equipment; they achieved an RMSE of 5.42 cm. Ref. [19] conducted a terrestrial survey with a GoPro Fusion camera to generate a 3D model of trees in an orange crop. The camera was properly calibrated in a controlled room with coded targets to estimate internal parameters and lens distortion. The mean error computed using checkpoints collected with GNSS equipment was 3.8 cm. The authors concluded that this technique succeeded in accurately retrieving geometrical attributes from orange fruits and trees and that omnidirectional cameras are a good alternative due to the smaller number of images needed to build a 3D model.
Other works have used a variety of camera brands and models for SPI acquisition. The first research using omnidirectional cameras for spherical panoramas deserves attention [15], as it was a pioneering advance in the study of spherical photogrammetry [20]. The camera employed in that case was a Panono 360°, composed of 36 camera sensors, which resulted in an equirectangular image with over 130 megapixel resolution. The authors took a total of 14 panoramas and built a dense point cloud of 1.5 billion points. The accuracy assessment performed on the generated point cloud resulted in a 3D mean error of 8 cm, which was considered too coarse and raised the need for post-processing. Still, the conclusion was optimistic, suggesting that the methodology required improvement but could be used properly to document cultural heritage. Ref. [13] used a Xiaomi Mijia Mi Sphere 360 camera and performed several evaluations using the generated dense point clouds and orthophotos of different locations, carrying out the processing on two commercial software platforms (Agisoft Metashape and Pix4Dmapper). One of the evaluations concerned accuracy, which was addressed by using checkpoints collected with a total station, reaching a 3D RMSE of approximately 1.2 cm in the worst processing scenario. The dense point cloud was compared against a 3D laser scanner point cloud to assess the geometry of the product generated from the spherical panoramas, obtaining a discrepancy of about 0.5 cm. The authors reported results similar to traditional photogrammetric projects with traditional cameras (within 0.5 and 1.5 pixel accuracy). However, they mentioned that the resolution of the products was usually 4 to 6 times worse when using an omnidirectional camera, given the wider field of view covered compared to traditional cameras.
In the next sections, we introduce, test and discuss the proposed integration technique based simultaneously on UAV and SPI data for 3D outcrop modeling. In order to confirm the initial assumptions, we performed validation tests on the SPI output and calculated appropriate metrics against a UAV-adjusted output used as reference. Based on these results, we then refined the SPI data with tools designed to recognize and remove occasional outliers. With the point clouds corrected and properly adjusted, we implemented our optimized approach to automatically rebuild the point cloud according to the point source presenting the best performance.

2. Materials and Methods

2.1. Study Area

The outcrop selected for testing our approach is located on Brazil's southern coast, and the acquisitions were performed between 12 December and 14 December 2019. The area is in José Lutzenberger State Park, also known as Parque Estadual da Guarita, in the city of Torres, RS, Brazil (Figure 1). Outcrops of the Paraná basin are exposed in the form of discontinuous cliffs and show excellent exposed contact between the Botucatu Formation (eolian sandstones) and the Serra Geral Formation (basalts). The outcrops show a relatively thick layer of basalts with minor patches of sand-grade sediments from a major volcanic flood event that overlaid the sandstone package along a discordance surface [21].
The rock exposure is divided into three major outcrops (Figure 2). The southern one is known as Torre Sul, followed by a smaller outcrop to the north named Torre da Guarita and a large one named Morro das Furnas. The rocky wall selected for testing the proposed integration method was restricted to a portion of the Torre da Guarita outcrop, more specifically the base of its NE face, which has easy terrestrial access from the beach and shows a good exposure of Botucatu sandstones, as detailed in Figure 2. The outcrop is approximately 30 m in height from its base to its peak and 47 m in length.

2.2. Spherical Panoramic Images (SPIs)

It is opportune to start by establishing the basic characteristics of SPI acquisition and their relationship with traditional photogrammetry. SPIs are photographic products that can provide large views of a scene through their wide fields of view, allowing regional contextualization of the object of interest from a single image. They are produced when a group of two or more overlapping photos taken from a single point of view is submitted to a stitching algorithm, which merges the photos, smoothing the overlapping regions. The stitching procedure usually saves considerable processing time during model reconstruction and also compensates in real time for scale differences and overlap adjustment across the collection of SPI frames, ensuring complete coverage of a given target at uniform resolution.
Among the different types of panoramic images, the spherical panorama is the most complete in terms of field of view, because it covers 360 degrees horizontally and 180 degrees vertically (the full sphere), allowing the representation of an entire scene [22]. The spherical panorama can be represented in a 2D image using the equirectangular projection, which is a type of cartographic projection that preserves lengths measured on horizontal lines of the image (parallels). This projection is usually employed in photogrammetric applications because it allows simple correlations between pixel coordinates on the 2D image (Cartesian coordinates) and points on the 3D sphere (polar coordinates). The photogrammetric description of spherical panoramas can be found in [11,23].
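To make the pixel-to-sphere correspondence concrete, the sketch below converts equirectangular pixel coordinates into a unit viewing direction. It is an illustrative example only, not code from the authors' pipeline; the axis convention (y up, z forward) and the half-pixel offset are assumptions.

```python
import numpy as np

def pixel_to_direction(u, v, width, height):
    """Map equirectangular pixel coordinates (u = column, v = row) to a unit
    viewing direction on the sphere (assumed convention: y up, z forward)."""
    # longitude (azimuth) spans [-pi, pi] across the image width
    lon = (np.asarray(u, dtype=float) + 0.5) / width * 2.0 * np.pi - np.pi
    # latitude (elevation) spans [+pi/2, -pi/2] from the top to the bottom row
    lat = np.pi / 2.0 - (np.asarray(v, dtype=float) + 0.5) / height * np.pi
    x = np.cos(lat) * np.sin(lon)
    y = np.sin(lat)
    z = np.cos(lat) * np.cos(lon)
    return np.stack([x, y, z], axis=-1)

# Example: the image centre maps (approximately) to the forward direction (0, 0, 1)
# pixel_to_direction(width / 2, height / 2, width, height)
```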
Omnidirectional cameras are a convenient way to generate SPIs, since they have more than one lens, usually of the fisheye type [24], facing different viewing angles and capturing all the images simultaneously. The images can then be stitched together, generating a full spherical panorama in a single shot and considerably reducing image capture time and complexity [15]. The GoPro Fusion camera mentioned earlier is an example (Figure 3). It has two embedded lenses, one at the front and another at the back of the instrument, providing a full spherical image.
In Figure 4, example front and back perspectives of the Torre da Guarita outcrop captured by the two main lenses of a GoPro Fusion camera are shown, as well as the equirectangular result obtained by internally merging them. In order to reconstruct a 3D point cloud, the same system can systematically capture successive perspectives covering all outcrop faces by simply following a pre-defined pathway.

2.3. UAV Survey

A DJI Mavic 2 Pro UAV was used to perform the aerial image coverage. Given the conical shape of the outcrop, a circular flight around its boundaries at about 5 to 10 m distance proved sufficient to acquire all the necessary images, resulting in a total of 280 photos, which were later processed in the Agisoft Metashape version 1.6.2 photogrammetric software. This processing resulted in the dense point cloud illustrated in Figure 5. Although we made our best efforts to cover the entire outcrop using the traditional UAV survey, many parts were not covered due to occlusion caused by subvertical structures (white regions).

2.4. SPI Survey and Processing

The SPI survey was conducted with the GoPro camera presented in Figure 3. The sensor embedded in the camera is a 1/2.3-inch CMOS with a nominal focal length of 3 mm and a pixel resolution of 9.3 Mp. The product delivered after both captured images are stitched together is an equirectangular image of approximately 16.6 Mp.
The camera was manually held above the operator's head at a height of about 2.6 m from the ground, resulting from the operator standing with the arm fully extended upwards (2.1 m) plus the 0.5 m extendable grip that comes with the equipment (also fully extended). While capturing the images, the camera was positioned so that its optical axis was perpendicular to the outcrop. The GoPro mobile application fully controls the camera via a wireless connection while streaming the field of view to an external monitor. Thus, no specific planning was needed except for the walking path along the object, where images were collected approximately 1 m apart.
A total of 18 fisheye pairs were captured. The entire acquisition process took less than 10 min, from initializing the camera to capturing the last photo. The post-processing phase, which included stitching both fisheye images to generate the equirectangular output, was performed in the laboratory with the GoPro Studio software.
The processing of the equirectangular collection using the Agisoft Metashape software was performed by selecting the spherical camera model in the Camera Calibration settings. The camera models provided by the software are algorithms designed for different types of cameras; they are responsible for transforming point coordinates in the camera's projection into image pixel coordinates and vice versa, allowing estimation of the camera positions for 3D geometry reconstruction.

2.5. Point Cloud Adjustment and Quality Assessment

As usual, unwanted noise is frequent in both the SPI and UAV raw point clouds and has to be eliminated. The Statistical Outlier Removal (SOR) tool of the software was applied to eliminate outliers that could bias the statistical analyses. This tool calculates the average distance of a point to its k nearest neighbors. Points whose distance exceeds the global average plus a user-defined number of standard deviations are then removed from the point cloud.
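As an illustration of this filtering step, the sketch below reproduces the SOR logic with NumPy and SciPy rather than the software's own implementation; the default parameter values mirror those reported later in the paper (six neighbors, one standard deviation), but the function itself is an assumption, not the tool's source code.

```python
import numpy as np
from scipy.spatial import cKDTree

def statistical_outlier_removal(points, k=6, n_std=1.0):
    """SOR-style filter: drop points whose mean distance to their k nearest
    neighbours exceeds the global mean plus n_std standard deviations."""
    tree = cKDTree(points)
    # query k + 1 neighbours because the nearest neighbour of a point is itself
    dists, _ = tree.query(points, k=k + 1)
    mean_dist = dists[:, 1:].mean(axis=1)   # per-point mean neighbour distance
    threshold = mean_dist.mean() + n_std * mean_dist.std()
    keep = mean_dist <= threshold
    return points[keep], keep
```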
After this mandatory step, the next action is intended to geometrically integrate both sets of points for further analysis. This process was performed using the software's registration tool by picking homologous features (matching points) between the UAV and SPI point clouds. We assumed that, wherever the UAV survey could properly acquire data, the point cloud was adequately generated. Thus, these data were used for quality assessment in areas also covered by SPI imaging (overlapping areas).
With the SPI- and UAV-derived DOMs geometrically aligned, the Cloud-to-Mesh (C2M) tool in the CloudCompare v.2.10 software is used to compute the absolute (i.e., Euclidean) distance between every single point of the SPI point cloud and the UAV mesh surface. To allow visualization of the results calculated by this tool, a scalar field (SF) is generated and can be applied over the point cloud. The SF is a value representation based on a color gradient which, in this case, expresses the distance calculated by C2M for every point of the point cloud.
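For readers without CloudCompare, an equivalent point-to-mesh distance can be sketched with the trimesh library as below. This is a hedged stand-in for the C2M tool; the variable names and the use of trimesh are assumptions, not part of the authors' workflow.

```python
import numpy as np
import trimesh

def cloud_to_mesh_distance(spi_points, mesh_vertices, mesh_faces):
    """Unsigned Euclidean distance from every SPI point to the UAV reference mesh."""
    mesh = trimesh.Trimesh(vertices=mesh_vertices, faces=mesh_faces, process=False)
    # closest_point returns the nearest surface point, its distance and the triangle hit
    _, distances, _ = trimesh.proximity.closest_point(mesh, spi_points)
    return np.asarray(distances)

# The returned array can be attached to the point cloud as a scalar field
# and colour-mapped for inspection, mimicking the C2M visualization.
```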
To improve reliability, a robust statistical analysis [25] was conducted to better interpret the discrepancy values calculated for each point of the generated point clouds against the UAV reference mesh after running the C2M algorithm. This method detects outliers while taking into account the skewness of the data, which are weakly symmetrical and far from ideal in real-world scenarios. Moreover, the asymmetry of a univariate continuous distribution, such as the point clouds of outcrops, is strongly affected by the presence of one or more outliers. The formula is based on the standard boxplot [26] but weighted by the Medcouple (MC) index, which measures the skewness of the observations. The MC index is calculated by the following equation:
$MC(X_n) = \underset{x_i < \mathrm{med}_n < x_j}{\operatorname{med}} \; \frac{(x_j - \mathrm{med}_n) - (\mathrm{med}_n - x_i)}{x_j - x_i},$ (1)
where $x_i$ and $x_j$ are members of a univariate sample $X_n$ of size $N$ ($X_n = x_1, \ldots, x_N$) from a continuous unimodal distribution, and $\mathrm{med}_n$ stands for the sample median. When the MC index returns a positive value, the observations are skewed to the right; when it is zero, the observations are purely symmetrical; and negative results indicate that the observations are skewed to the left. The MC was then applied to the interquartile range (IQR) outlier detection method according to the following equations, for the MC > 0 (Equation (2)) and MC < 0 (Equation (3)) cases:
$[\,Q_1 - 1.5\,e^{-4\,MC} \cdot IQR;\;\; Q_3 + 1.5\,e^{3\,MC} \cdot IQR\,],$ (2)
$[\,Q_1 - 1.5\,e^{-3\,MC} \cdot IQR;\;\; Q_3 + 1.5\,e^{4\,MC} \cdot IQR\,].$ (3)
The IQR corresponds to the range between the left ($Q_1$) and right ($Q_3$) quartiles (the central 50% of the data around the median). After both quartiles are calculated, every observation that falls outside these intervals is considered an outlier. In this case, the distance values calculated by C2M are absolute values.
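A minimal sketch of this adjusted-boxplot outlier test is given below, using the medcouple implementation from statsmodels; the helper name and the idea of subsampling very large clouds before computing MC are assumptions made for illustration.

```python
import numpy as np
from statsmodels.stats.stattools import medcouple

def adjusted_boxplot_bounds(d, max_samples=50000, seed=0):
    """Outlier fences for skewed data, following Equations (2) and (3)."""
    d = np.asarray(d, dtype=float)
    q1, q3 = np.percentile(d, [25, 75])
    iqr = q3 - q1
    # the medcouple implementation is quadratic, so subsample very large clouds
    sample = d if d.size <= max_samples else np.random.default_rng(seed).choice(d, max_samples, replace=False)
    mc = float(medcouple(sample))
    if mc >= 0:
        lower = q1 - 1.5 * np.exp(-4.0 * mc) * iqr
        upper = q3 + 1.5 * np.exp(3.0 * mc) * iqr
    else:
        lower = q1 - 1.5 * np.exp(-3.0 * mc) * iqr
        upper = q3 + 1.5 * np.exp(4.0 * mc) * iqr
    return lower, upper

# Example: keep only discrepancies inside the fences
# lo, hi = adjusted_boxplot_bounds(c2m_distances)
# inliers = c2m_distances[(c2m_distances >= lo) & (c2m_distances <= hi)]
```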
After removing the outliers, the model's discrepancy evaluation uses the remaining C2M absolute values to obtain descriptive statistics, such as the mean $\bar{d}$ (Equation (4)) and the standard deviation $\sigma_d$ (Equation (5)), given by:
$\bar{d} = \frac{\sum_{i=1}^{n} d_i}{n},$ (4)
$\sigma_d = \sqrt{\frac{1}{n-1} \sum_{i=1}^{n} \left(d_i - \bar{d}\right)^2},$ (5)
where $d_i$ is each point's discrepancy from the aligned UAV 3D mesh. Distribution-free descriptive statistics, on the other hand, are based on robust measures, such as the median, given by:
$\mathrm{median}(d) = \begin{cases} d_{(n+1)/2}, & \text{for an odd number of observations} \\ \dfrac{d_{n/2} + d_{n/2+1}}{2}, & \text{for an even number of observations} \end{cases}$ (6)
and the normalized median absolute deviation (NMAD), given by:
$NMAD = 1.4826 \cdot \mathrm{median}\left(\,\lvert d_i - \mathrm{median}(d) \rvert\,\right).$ (7)
The root mean squared error (RMSE) is also a common measure of model quality, as presented in some of the related works. In the present context, the RMSE is given by:
$RMSE = \sqrt{\frac{\sum_{i=1}^{n} d_i^2}{n}}.$ (8)
The above-described measures can be effectively used to detect occasional errors and eliminate them, thereby conforming the SPI point cloud to the same configuration found in the UAV data.
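These summary statistics are straightforward to compute; the short function below gathers Equations (4) to (8) in one place. The function name and the dictionary output are illustrative choices, not part of the published workflow.

```python
import numpy as np

def discrepancy_statistics(d):
    """Summary statistics (Equations (4)-(8)) for the absolute C2M discrepancies d."""
    d = np.asarray(d, dtype=float)
    med = np.median(d)
    return {
        "mean": d.mean(),                              # Equation (4)
        "std": d.std(ddof=1),                          # Equation (5), sample standard deviation
        "median": med,                                 # Equation (6)
        "nmad": 1.4826 * np.median(np.abs(d - med)),   # Equation (7)
        "rmse": np.sqrt(np.mean(d ** 2)),              # Equation (8)
    }
```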

2.6. Automatic Hybrid Point Cloud Integration Strategy

Assuming the quality of the point cloud generated from SPI data is shown to be consistent with the data acquired using UAV photogrammetry, the last step of the proposed method can be applied. In our proposition, the final point cloud is composed of a combination of both the SPI and UAV point cloud sources. We suggest the application of an octree approach [27] to analyze every region of the point cloud in 3D space. The octree is a tree data structure in which each internal node has exactly eight children. Octrees are the three-dimensional analogs of quadtrees and are most often used to partition a three-dimensional space by recursively subdividing it into eight octants. The blocks within an octree are referred to as volume elements, or voxels. The voxels are small cubes with side $r$ that cover all the points of the two clouds (Figure 6). Each voxel $V_i$ can be represented in 3D Cartesian coordinates as
$V_i = i(i_x, i_y, i_z),$ (9)
where $i_x$, $i_y$ and $i_z$ are the spatial coordinates of each voxel $i$ of side $r$. In our analysis, each voxel is checked to determine which source (SPI or UAV) contributes most of its points. For every $V_i$, the set of UAV points $S_{u_i}$ belonging to it can be computed as
$p_{u_i}(i_x, i_y, i_z) \in S_{u_i}(i_x, i_y, i_z) \iff p_{u_i}(i_x, i_y, i_z) \cap V_i = (i_x, i_y, i_z),$ (10)
where $p_{u_i}(i_x, i_y, i_z)$ is the UAV point $p_{u_i}$ at position $(i_x, i_y, i_z)$. In an analogous form, the set of SPI points $S_{s_i}$ belonging to $V_i$ can be expressed as
$p_{s_i}(i_x, i_y, i_z) \in S_{s_i}(i_x, i_y, i_z) \iff p_{s_i}(i_x, i_y, i_z) \cap V_i = (i_x, i_y, i_z).$ (11)
The winning source has its points preserved, whereas the other is eliminated according to the following rule:
$V_i \rightarrow \begin{cases} \omega_u, & \text{if } n(S_{u_i}) \geq n(S_{s_i}) \\ \omega_s, & \text{otherwise} \end{cases} \quad \forall\, i \in [1, N],$ (12)
where $n(S_{u_i})$ and $n(S_{s_i})$ are the numbers of points inside a given $V_i$ for the UAV and SPI, respectively. The rationale behind this voxel-based selective strategy is that, for some parts of the reconstructed outcrop, the SPI presents the better observations, providing higher-quality points (mainly for subvertical structures). Conversely, other regions may be best observed in the UAV data (mainly the upper parts), which justifies preserving UAV points there while the SPI points are eliminated.
The final product is formed by the set of points resulting from the trade-off between SPI and UAV points for each consecutive partition of the octree. The resulting set of hybrid points can be used to produce the mesh and the textured model. The Python script of the presented octree methodology is available on the GitHub platform (https://github.com/ademirmarquesjunior/octree_cloud_merge, accessed on 11 July 2022).
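The released script implements the full octree pipeline; the sketch below shows only the core majority-vote rule (Equation (12)) using a flat voxel grid of side r instead of a recursive octree, so it is an illustrative simplification rather than the authors' code.

```python
import numpy as np

def voxel_majority_merge(uav_points, spi_points, r=0.10):
    """Keep, for each voxel of side r, only the source (UAV or SPI) with more points."""
    all_pts = np.vstack([uav_points, spi_points])
    source = np.concatenate([np.zeros(len(uav_points), dtype=int),
                             np.ones(len(spi_points), dtype=int)])   # 0 = UAV, 1 = SPI
    # integer voxel index (i_x, i_y, i_z) for every point
    voxel_idx = np.floor((all_pts - all_pts.min(axis=0)) / r).astype(np.int64)
    # collapse each index triple into a single voxel id
    _, voxel_id = np.unique(voxel_idx, axis=0, return_inverse=True)
    n_voxels = voxel_id.max() + 1
    # count UAV and SPI points per voxel
    uav_count = np.bincount(voxel_id[source == 0], minlength=n_voxels)
    spi_count = np.bincount(voxel_id[source == 1], minlength=n_voxels)
    winner = np.where(uav_count >= spi_count, 0, 1)   # ties kept as UAV, as in Equation (12)
    keep = source == winner[voxel_id]
    return all_pts[keep], source[keep]
```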
To quantitatively assess the results of the integration technique, the distances between each point and its respective neighbors were measured using a kd-tree algorithm [28]. This algorithm performs a binary subdivision of the point cloud, where leaves are further divided into two nodes until each final node or branch in the tree contains one element. Using the kd-tree algorithm to assess 10 neighbors for each point, we measure the mean distances to evaluate the point cloud densification in both the original UAV/SPI cloud and the optimized UAV/SPI cloud obtained from the octree cloud merging method.
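A possible implementation of this density check uses SciPy's cKDTree, as sketched below; querying k + 1 neighbors and discarding the self-match is an implementation detail assumed here rather than taken from the paper.

```python
import numpy as np
from scipy.spatial import cKDTree

def mean_neighbor_distance(points, k=10):
    """Mean distance from each point to its k nearest neighbours (density proxy)."""
    tree = cKDTree(points)
    # k + 1 neighbours are queried; column 0 is the point itself at distance 0
    dists, _ = tree.query(points, k=k + 1)
    return dists[:, 1:].mean(axis=1)
```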

3. Results

3.1. SPI Dense Point Cloud

The SPI-based reconstruction workflow is very similar to the process designed for traditional cameras, involving camera alignment and dense point cloud generation (Figure 7a). The dense point cloud generated from the SPIs contained a total of 35,771,842 points. However, most of those points belonged to surrounding features that were not of interest to this study. After a rough delimitation of the area, the point cloud was reduced to 10,047,654 points. The SOR tool was then applied using six sampling points for the average calculation and one standard deviation around the average as the exclusion criterion. The tool removed 20% of the SPI points used as input, resulting in 8,373,045 points. The registration performed to align the SPI point cloud to the reference triangular mesh (UAV) was done using seven matching points, achieving an RMSE of 4.5 cm.
The C2M analysis was then applied to assess the quality of the SPI result against the UAV mesh, where the UAV presented reliable points. Figure 7b shows the C2M computation represented via the SF, using the previously cleaned dense point cloud. This color gradient is attached as an attribute to each point of the cloud and helps to visualize geographically the local discrepancies between the SPI point cloud and the UAV reference mesh. The C2M tool resulted in a mean of 6.6 cm and a standard deviation of 12.6 cm.
Occasional hot spots (reddish colors in the map) on the outcrop wall can be noticed in the C2M result. As previously discussed, outcrops present many faces not easily covered by images from a UAV survey, which causes low density or absence of points in the cloud. This observation confirms the initial supposition that UAV reconstruction has strong limitations in correctly reconstructing subvertical features. Moreover, these areas can strongly bias the SPI assessment, since missing UAV points would wrongly suggest errors in the SPI reconstruction. We thus disregarded the assessment in these areas and assumed that the quality presented by the SPI is consistent with that found in areas where reference (UAV) data are available. The MC index was able to identify and eliminate the corresponding outliers. Using the skewed data analysis, we could identify the regions of the UAV dense point cloud with scarce coverage (sub-vertical surfaces) and classify the associated discrepancies as outliers, removing them from further statistical analyses.
The MC index calculated for the discrepancy values, which indicates the skewness of the observations, was 0.407. This means that the discrepancy histogram has its tail on the right side of the distribution. Because the MC index was greater than 0, we used Equation (2), as suggested in the literature, which resulted in an upper limit of 11.5 cm, meaning that all discrepancy values above this limit were considered outliers by this method. These outliers represent a total of 2,298,313 points, or 7% of the point cloud. Finally, all points classified as outliers were removed and the comparison was computed once again, with a more reliable statistical interpretation. This had a large impact on the final discrepancy statistics between the point cloud and the reference surface, which reached a mean of 2.4 cm and a standard deviation of 2.2 cm. The SPI-refined model also achieved a median of 1.7 cm, an NMAD of 1.6 cm and an RMSE of 3.3 cm.

3.2. Dense Point Cloud UAV and SPI Integration

After preparing the SPI and UAV final point clouds for integration by excluding noise-like elements and verifying their geometric consistency, we could proceed with the selective strategy of counting the number of points inside each voxel of the octree. Experiments showed that voxels with r = 10 cm are suitable for selecting the best source of points to integrate into the final point cloud (Figure 8). To evaluate the generated point cloud, we can observe the reduction in the number of points in relation to the sum of the two initial clouds. Before applying the proposed method, the merged cloud had a total of 13,165,841 points (4,342,870 from SPI and 8,822,971 from UAV), whereas after applying the proposed method we obtained a point cloud with 10,946,938 points (3,787,022 from SPI and 7,159,916 from UAV). This reduced the (uncompressed) data storage from 544 MB to 452 MB and the memory usage from 617 MB to 513 MB, a 17% reduction in both cases, while increasing outcrop coverage. The reduction was also reflected in the execution time when measuring the points' mean distances using the kd-tree algorithm (Figure 9).
In Figure 10, a visual comparison of the 3D reconstruction with and without SPI data illustrates the improvement. Figure 10a shows the original UAV point cloud reconstruction along with a zoomed area. Figure 10b shows the resulting point cloud after integrating SPI data through the proposed strategy. As can be seen, most missing parts were filled with points from the SPI survey. Figure 10c shows spatially the contribution of each source, with UAV points in red and SPI points in blue. As expected, the upper part is mostly occupied by UAV points, whereas the bottom is filled with SPI points. Small red clusters are occasionally found in regions dominated by blue and vice versa; these areas correspond to subvertical features with faces oriented downwards (blue) and upwards (red). This last aspect of the result confirms the hypothesis of the proposed technique according to the initial assumptions. Additional results in Figure 9 quantitatively compare the UAV point cloud with the joined UAV/SPI point cloud, which is denser. In Figure 9, the color scale is given by the mean neighbor distance of each point, computed with the kd-tree algorithm using a neighborhood of k = 10.

4. Discussion

The fine reconstruction of complex environments such as outcrops is challenging due to their very detailed features and the requirement for high-fidelity outputs. Geoscientists are paying increasing attention to trustworthy solutions for remote visualization and interpretation of the environment through digital media. The effectiveness of a reconstructed model relies on the ability of the methodology to provide a reliable environment that represents, as far as possible, the real characteristics and sensations experienced by the analyst in an actual survey. Being able to rely on effective and rapid methodologies to achieve this goal is essential to make a large number and great diversity of existing outcrops available for analysis anywhere and at any time.
The approach proposed in this work improves the ability to visualize any portion of an outcrop, allowing adequate reconstruction by 3D models. This is accomplished by adding another systematic view of a scene previously only observed from an aerial (UAV) perspective: SPI imaging. The controlled experiment performed here, including pre-processing to verify the adequacy of the SPI, guaranteed the effectiveness of the integration approach and established the procedure necessary to generate the model. Due to the severe spatial limitations of the unmanned platform, many parts of the studied outcrop remained uncovered by the aerial survey. After including the SPI data in a controlled way, as described here, the original point cloud was densified, and the final product resulted in a more faithful representation of the real environment. At the same time, it is important to stress that SPI data could not be used alone, since features located high on the outcrop could not be visualized, as shown by Figure 7b.
Even though terrestrial photogrammetry is not a novel approach for the generation of DOMs, one of the contributions of this work is the use of an omnidirectional camera, such as the GoPro Fusion. This type of camera can be helpful in terrestrial photogrammetry projects by covering a full spherical field of view from the point where the image is captured. This feature is important because it provides the overlap between photos required by digital photogrammetry in a more practical way and with fewer photos than traditional frame cameras, which can considerably reduce both survey time and complexity.
In addition, another contribution of this paper is the development of a weighted dense-cloud merging Python script that can integrate a pair of aligned point clouds, made freely available to the community. Open-source tools can help popularize the photogrammetry technique and assist researchers in their most varied research areas.

5. Conclusions

In this work, we tested the viability of generating DOMs by integrating UAV and SPI data, the latter covering a full spherical field of view from the point where the image is captured. The results showed that the inclusion of SPI data was able to overcome the limitations traditionally faced by UAV mapping and successfully improved the final result by populating regions of the point cloud that did not have sufficient numbers of points, mainly in subvertical patterns. The use of SPI cameras is simple and safe for non-experts and also prevents the UAV from having to get too close to the ground or the target object. With no need to rely on trained personnel to conduct this activity, the presented technique can also help to reduce field survey expenses. Further improvements will focus on reducing the color difference between the two sources of data.

Author Contributions

Conceptualization, A.S.A. and D.C.Z.; data curation, A.S.A. and A.M.J.; formal analysis, A.M.J.; investigation, A.S.A., A.M.J. and D.C.Z.; methodology, A.S.A., D.C.Z. and A.M.J.; resources, A.L.D.S., M.R.V. and L.G.J.; software, A.M.J.; validation, A.L.D.S. and M.R.V.; visualization, A.S.A. and M.R.V.; writing—original draft preparation, A.S.A.; writing—review and editing, A.S.A., D.C.Z. and A.M.J.; supervision, M.R.V. and L.G.J.; project administration, A.L.D.S. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by Petróleo Brasileiro S.A. (PETROBRAS) under grant number 4600556376; Agência Nacional do Petróleo, Gás Natural e Biocombustíveis (ANP) under grant 460058379; and Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES).

Data Availability Statement

The data that support the findings of this study are available from the corresponding author, upon reasonable request.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Kurz, T.H.; Buckley, S.; Howell, J.; Schneider, D. Geological outcrop modelling and interpretation using ground based hyperspectral and laser scanning data fusion. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2008, 37, 1229–1234. [Google Scholar]
  2. Jones, R.R.; Mccaffrey, K.J.; Imber, J.; Wightman, R.; Smith, S.A.; Holdsworth, R.E.; Clegg, P.; De Paola, N.; Healy, D.; Wilson, R.W. Calibration and validation of reservoir models: The importance of high resolution, quantitative outcrop analogues. Geol. Soc. Lond. Spec. Publ. 2008, 309, 87–98. [Google Scholar] [CrossRef]
  3. Marques, A., Jr.; Horota, R.K.; de Souza, E.M.; Kupssinskü, L.; Rossa, P.; Aires, A.S.; Bachi, L.; Veronez, M.R.; Gonzaga, L., Jr.; Cazarin, C.L. Virtual and digital outcrops in the petroleum industry: A systematic review. Earth-Sci. Rev. 2020, 208, 103260. [Google Scholar] [CrossRef]
  4. Howell, J.A.; Martinius, A.W.; Good, T.R. The application of outcrop analogues in geological modelling: A review, present status and future outlook. Geol. Soc. Lond. Spec. Publ. 2014, 387, 1–25. [Google Scholar] [CrossRef]
  5. Bilmes, A.; D’Elia, L.; Lopez, L.; Richiano, S.; Varela, A.; del Pilar Alvarez, M.; Bucher, J.; Eymard, I.; Muravchik, M.; Franzese, J.; et al. Digital outcrop modelling using “structure-from-motion” photogrammetry: Acquisition strategies, validation and interpretations to different sedimentary environments. J. S. Am. Earth Sci. 2019, 96, 102325. [Google Scholar] [CrossRef]
  6. Westoby, M.J.; Brasington, J.; Glasser, N.F.; Hambrey, M.J.; Reynolds, J.M. ‘Structure-from-Motion’ photogrammetry: A low-cost, effective tool for geoscience applications. Geomorphology 2012, 179, 300–314. [Google Scholar] [CrossRef]
  7. Furukawa, Y.; Hernández, C. Multi-view stereo: A tutorial. Found. Trends® Comput. Graph. Vis. 2015, 9, 1–148. [Google Scholar] [CrossRef]
  8. Chandler, J.H.; Buckley, S. Structure from motion (SFM) photogrammetry vs terrestrial laser scanning. In Geoscience Handbook 2016: AGI Data Sheets, 5th ed.; Carpenter, M.B., Keane, C.M., Eds.; American Geosciences Institute: Alexandria, VA, USA, 2016; ISBN 978-0913312476. [Google Scholar]
  9. Favalli, M.; Fornaciai, A.; Isola, I.; Tarquini, S.; Nannipieri, L. Multiview 3D reconstruction in geosciences. Comput. Geosci. 2012, 44, 168–176. [Google Scholar] [CrossRef]
  10. James, M.R.; Robson, S. Straightforward reconstruction of 3D surfaces and topography with a camera: Accuracy and geoscience application. J. Geophys. Res. Earth Surf. 2012, 117. [Google Scholar] [CrossRef]
  11. Barazzetti, L.; Fangi, G.; Remondino, F.; Scaioni, M. Automation in multi-image spherical photogrammetry for 3D architectural reconstruction. In Proceedings of the 11th International Symposium on Virtual Reality, Archaeology and Cultural Heritage (VAST), Paris, France, 21–24 September 2010. [Google Scholar]
  12. Barazzetti, L.; Previtali, M.; Roncoroni, F. 3D modelling with the Samsung Gear 360. In Proceedings of the 2017 TC II and CIPA-3D Virtual Reconstruction and Visualization of Complex Architectures. International Society for Photogrammetry and Remote Sensing, Nafplio, Greece, 1–3 March 2017; Volume 42, pp. 85–90. [Google Scholar]
  13. Barazzetti, L.; Previtali, M.; Roncoroni, F. Can we use low-cost 360 degree cameras to create accurate 3D models? Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2018, 42, 69–75. [Google Scholar] [CrossRef]
  14. Calantropio, A.; Chiabrando, F.; Einaudi, D.; Teppati Losè, L. 360° images for UAV multisensor data fusion: First tests and results. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2019, XLII-2/W13, 227–234. [Google Scholar] [CrossRef]
  15. Fangi, G.; Pierdicca, R.; Sturari, M.; Malinverni, E. Improving spherical photogrammetry using 360° omni-cameras: Use cases and new applications. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2018, 42, 331–337. [Google Scholar] [CrossRef]
  16. Gottardi, C.; Guerra, F. Spherical images for cultural heritage: Survey and documentation with the Nikon KM360. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2018, 42, 385–390. [Google Scholar] [CrossRef]
  17. Kwiatek, K.; Tokarczyk, R. Immersive photogrammetry in 3D modelling. GeoMat Environ. Eng. 2015, 9, 51–62. [Google Scholar] [CrossRef]
  18. Ramos, A.P.; Prieto, G.R. Only image based for the 3d metric survey of gothic structures by using frame cameras and panoramic cameras. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2016, 41, 363–370. [Google Scholar] [CrossRef]
  19. Castanheiro, L.; Tommaselli, A.; Campos, M.; Berveglieri, A.; Santos, G. 3D Reconstruction of Citrus Trees Using an Omnidirectional Optical System. In Proceedings of the 2020 IEEE Latin American GRSS & ISPRS Remote Sensing Conference (LAGIRS), Santiago, Chile, 12 August 2020; pp. 409–414. [Google Scholar]
  20. Fangi, G. The Multi-image spherical Panoramas as a tool for Architectural Survey. CIPA Herit. Doc. 2011, 21, 311–316. [Google Scholar]
  21. Zerfass, H.; dos Anjos-Zerfass, G.d.S.; Ruban, D.A.; Yashalova, N.N. Basalt hills of Torres, southern Brazil: World-class geology, its heritage value, and tourism perspectives. J. S. Am. Earth Sci. 2020, 97, 102424. [Google Scholar] [CrossRef]
  22. Cabezos-Bernal, P.M.; Cisneros Vivó, J. Panoramas esféricos estereoscópicos. EGA Revista de Expresión Gráfica Arquitectónica 2016, 21, 70–81. [Google Scholar] [CrossRef]
  23. Fangi, G.; Nardinocchi, C. Photogrammetric processing of spherical panoramas. Photogramm. Rec. 2013, 28, 293–311. [Google Scholar] [CrossRef]
  24. Covas, J.; Ferreira, V.; Mateus, L. 3D reconstruction with fisheye images strategies to survey complex heritage buildings. In Proceedings of the 2015 Digital Heritage, Granada, Spain, 28 September–2 October 2015; Volume 1, pp. 123–126. [Google Scholar]
  25. Hubert, M.; Van der Veeken, S. Outlier detection for skewed data. J. Chemom. A J. Chemom. Soc. 2008, 22, 235–246. [Google Scholar] [CrossRef]
  26. Tukey, J.W. Exploratory Data Analysis; Addison-Wesley Series in Behavioral Science: Quantitative Methods; Addison-Wesley: Reading, MA, USA, 1977; Volume 2. [Google Scholar]
  27. Oliveira, J.; Buxton, B. An Efficient Octree for Interactive Large Model Visualization. Tech Report-RN/05/13. Department of Computer Science, University College London, 2005. Available online: https://www.researchgate.net/publication/268494482_An_Efficient_Octree_For_Interactive_Large_Model_Visualization (accessed on 11 July 2022).
  28. Chen, Y.; Zhou, L.; Tang, Y.; Singh, J.P.; Bouguila, N.; Wang, C.; Wang, H.; Du, J. Fast neighbor search by using revised kd tree. Inf. Sci. 2019, 472, 145–162. [Google Scholar] [CrossRef]
Figure 1. Localization map showing the location of the study area in Torres, Brazil.
Figure 2. Outcrops of Parque Estadual da Guarita site, showing the three discontinuous structures (a) and the region captured with the omnidirectional camera (b).
Figure 3. GoPro Fusion camera. (a) Side view and (b) front view with extensible grip attached.
Figure 4. (a) Front view. (b) Back view. (c) Equirectangular image. Images captured by the GoPro Fusion camera, with the independent result of the front (a) and back (b) lenses and the equirectangular image after the stitching process (c). Notice that the information in (c) is the result of special processing involving the integration of both fisheye acquisitions.
Figure 5. Dense point cloud of Torre da Guarita generated by the UAV survey. (a) The bottom part presents many gaps (white “holes”) indicating a lack of photogrammetric information. These areas correspond to subvertical features, common in outcrops. (b) Details show many subvertical regions of the tower with no point data.
Figure 6. Octree structure proposed for selecting the source of points (UAV or SPI) to compose the final cloud. Each small cube (voxel) of side r is evaluated to define remaining and excluded points based on majority voting (output). In the presented example, SPI points were preserved.
Figure 7. Dense point clouds obtained from equirectangular image. (a) The blue spheres represent the camera’s position. (b) SPI equirectangular scalar field of calculated discrepancy values resulting from the C2M tool, according to UAV data.
Figure 8. 3D representation of the final point cloud, with viewing perspectives in light blue (frames for UAV and spheres for SPI).
Figure 9. Dense point clouds colored by mean neighborhood distances from kd-tree algorithm computation. Subfigure (a) represents the point cloud from the UAV alone whilst (b) shows the joined UAV/SPI point cloud using the proposed method.
Figure 10. Result of the optimized integration between UAV and SPI point clouds. (a,b) Bottom part of the UAV point cloud with several gaps caused by occlusion of subvertical features. (c,d) Same region after integration of sensors. (e,f) Resulting octree analysis showing blue voxels filled with SPI points and red voxels filled with UAV points.