Article

Integrating UAV and TLS Approaches for Environmental Management: A Case Study of a Waste Stockpile Area

by Seung Woo Son *, Dong Woo Kim, Woong Gi Sung and Jae Jin Yu
Korea Environment Institute, Bldg. B, 370 Sicheong-daero, Sejong 30147, Korea
* Author to whom correspondence should be addressed.
Remote Sens. 2020, 12(10), 1615; https://doi.org/10.3390/rs12101615
Submission received: 15 April 2020 / Revised: 14 May 2020 / Accepted: 16 May 2020 / Published: 18 May 2020

Abstract

A methodology for optimal volume computation for the environmental management of waste stockpiles was derived by integrating the terrestrial laser scanning (TLS) and unmanned aerial vehicle (UAV) technologies. Among the UAV-based point clouds generated under various flight scenarios, the most accurate point cloud was selected for analysis. The root mean square errors (RMSEs) of the TLS- and UAV-based methods were 0.202 and 0.032 m, respectively, and the volume computations yielded 41,226 and 41,256 m3, respectively. Both techniques showed high accuracy but also exhibited drawbacks in terms of their spatial features and efficiency. The TLS and UAV methods required 800 and 340 min, respectively, demonstrating the high efficiency of the UAV method. The RMSE and volume obtained using the TLS/UAV fusion model were calculated as 0.030 m and 41,232 m3, respectively. The UAV approach generally yielded high point cloud accuracy and volume computation efficiency.

Graphical Abstract

1. Introduction

The construction process of large-scale civil engineering development projects, such as quarry development projects, creates numerous soil-cutting and soil-banking slopes. If the cutting and banking slopes at a construction site are left unrestored for a long time, intense rainfall leads to soil erosion and subsequent damage to surrounding streams via sediment deposition onto river beds. Problems such as collapsed soil slopes can arise depending on the treatment conditions associated with the cutting and banking slopes. In Korea, environmental impact assessments are conducted in accordance with the Environmental Impact Assessment Act to prevent problems that may arise from forest land development and quarry development projects. The environmental impact assessments for quarry development projects suggest that environmental pollution prevention measures, such as the installation of dustproof covers to reduce the occurrence of scattered dust due to the long-term stacking of waste stockpiles and the installation of waterproof covers to reduce soil leaks during the rainy season, should be implemented in the project sites. The impact of these projects on the environment should be minimized by continuous monitoring and rapid treatment of stacked waste stockpiles. Developers should be able to periodically maintain the treatment conditions of waste stockpiles by accurately and quickly computing their volumes.
Traditional stockpile volume measurement methods include the truckload-and-bucket-count method and the eyeballing method. The truckload-and-bucket-count method is useful for computing a small stockpile volume; however, it takes a long time to compute a large-scale stockpile volume. The eyeballing method is also significantly less accurate because computations are roughly performed based on the user's estimated measurements. With advances in technology, the global positioning system (GPS) or terrestrial laser scanning (TLS) can be used to compute volumes more efficiently. TLS has been used to estimate volumes in various fields [1]. TLS computes the distance from the instrument to each point on an object surface based on the time the emitted light takes to return after being reflected by the surface [2].
TLS-based point cloud generation has applications in numerous fields (e.g., industry, forestry, and academic research) [3,4,5,6,7]. Pitkänen et al. [3] employed TLS in stem diameter measurements for a more accurate volume estimation, while Liu et al. [4] measured the leaf angle distribution affecting the fluxes of radiation, carbon, and water. Cultural heritage sites or buildings can be represented as point clouds for observation or monitoring purposes [5,8]. Xu et al. [9] utilized TLS to monitor the annual surface elevation of glaciers and to accurately measure their boundaries. Osada et al. [10] proposed a method to minimize the Global Navigation Satellite System (GNSS) measurements in places such as city corridors where GNSS signals are weak. The TLS method has been intensively employed owing to its ability to generate dense, high-accuracy three-dimensional (3-D) point clouds [6,11]. However, its utility is limited by its high cost and time consumption [12]. Monitoring waste stockpiles using TLS is also subject to numerous limitations at project sites located in hazardous areas, such as mountainous areas, where access is poor due to the surrounding environmental features [13]. The danger intensifies as people approach the waste mounds. Wastes create unpredictable landscapes that are difficult to walk on and may cause respiratory or skin problems depending on their type. Due to the limited access on foot, a ladder is used to reach the waste mound top and place ground control points (GCPs).
With recent developments in computer vision technology, unmanned aerial vehicles (UAVs) have been widely used to complement the limitations of TLS [14,15] in various fields such as weed control [16], agricultural mapping [17], forest phenology [18], forest structure [19], and disaster prevention [20,21]. There has been a recent surge of research on the generation of point clouds using photogrammetry based on UAVs (also known as "drones") to achieve location accuracy [22,23,24]. UAV-based orthoimages are employed to generate point clouds based on the scale-invariant feature transform and structure from motion (SfM) algorithms. Point clouds are also used to construct digital elevation models (DEMs), digital surface models (DSMs), and digital terrain models (DTMs).
The quality of a point cloud depends on various flight parameters (e.g., flight altitude, image overlap, and GCP configuration), which have been the focus of numerous studies [23,25,26,27]. Agüera-Vega et al. [23] evaluated the accuracy of images constructed using UAVs according to the number of GCPs. Mesas-Carrascosa et al. [26] extracted the most accurate orthomosaic images of a wheat field based on different flight altitudes, flight modes (stop and cruise mode), and GCP settings. Furthermore, Mesas-Carrascosa et al. [25] conducted an aerial survey of an archaeological area using the flight altitude, overlap setting, and number of GCPs as flight parameters. Dandois et al. [27] compared canopy heights obtained in a deciduous forest under different altitude and overlap settings against field data and airborne LiDAR data.
UAV technology has the advantages of accessibility and efficient site maintenance through periodic monitoring and is thus cheaper and less time-consuming than TLS; however, TLS can generate denser and more accurate point clouds [11,14,28]. Relatively limited research has been conducted on point cloud generation or volume computation using TLS and UAV technologies in an integrated manner. Müller et al. [29] mapped a study area using both TLS and UAV methods for eruption site monitoring, as both approaches can be selected depending on the technological characteristics and geomorphological conditions of the study area. Silva et al. [30] performed volume computation in mining areas using UAV, GNSS, and TLS technologies, and evaluated the calculation accuracy and the number of person-hours required to implement the applied technologies. The differences between the TLS- and UAV-derived values should be analyzed spatially, not only as aggregate accuracy statistics. By identifying where the two technologies differ on a spatial plane, an integrated technique can be derived that selectively applies the features of each to match the geomorphological characteristics of the survey area. Developing an upgraded technique that fuses the two technologies so that their respective disadvantages are mutually compensated should therefore be studied to further optimize volume computation.
In this study, we compared and fused the TLS and UAV technologies by analyzing their spatial features and efficiencies. The specific objectives of this study were to (1) build point clouds using the TLS and UAV technologies, (2) perform waste stockpile volume computations to derive an optimal computation technique based on technology fusion, and (3) present a comparative analysis of these technologies.

2. Materials and Methods

First, we separately built two point clouds using TLS and UAV technologies, evaluated their accuracies, and performed waste stockpile volume computations. For the UAV investigation, we set up various scenarios and performed volume computation for the scenario yielding the most accurate point cloud. A variety of scenarios were set up for the UAV because different flight designs were required to estimate the effect of the number of GCPs and their placement on model accuracy [23,31,32]. Then, we conducted a comparative spatial analysis of the TLS and UAV technologies and performed waste stockpile volume computations via a TLS/UAV fusion model. Finally, we analyzed the volume computation results.
For the UAV technology, we used a DJI Inspire 1 Pro (Shenzhen, China), a rotary-wing UAV resistant to wind and capable of flying for approximately 15 min. Since a single mission was unable to cover the entire study area, a sufficient number of batteries were prepared for repeated mission flights. For image acquisition, we used a ZENMUSE X5 camera (Shenzhen, China) with 16 megapixels and a 72° diagonal field of view.
For TLS, we used a Leica ScanStation P40 (Aarau, Switzerland) with an ultra-high scan rate of 1,000,000 points/s at a maximum range of 270 m. The GNSS survey was performed using a Trimble R8s GNSS receiver (Sunnyvale, California, USA) to enhance the positional accuracies of the point clouds and evaluate the accuracy of each point cloud. The R8s is equipped with 440 channels and supports GPS and GLONASS satellites. Image processing and GNSS survey data points were processed separately using the Pix4D Mapper 4.3 (Prilly, Switzerland), Cyclone 9.2.1 (Aarau, Switzerland), CloudCompare 2.10.2, and ArcMap 10.1 (Redlands, California, USA) software.
The accuracy of TLS- and UAV-based point cloud data was compared using the model-to-model cloud comparison (M3C2) algorithm, and the UAV-based point cloud efficiency was evaluated based on the UAV flight time, GCP, and control point (CP) measurement time. The individual steps of the process are described in Section 2.2, Section 2.3 and Section 2.4.

2.1. Study Area

A site for waste stockpiles in Jipyeon-ri, Sejong City, South Korea was selected as the study area. Sejong City is a planned city in which large construction sites and residential areas coexist. Excavated earth materials and construction wastes are stored at a temporary waste disposal site, damaging the landscape and posing problems to residential areas via wind-blown dust. Construction site waste management is, therefore, a compelling issue, and accurate waste volume computation is important for the waste removal plan. The waste disposal site selected as the study area extends over ~6000 m2 and contains a significant volume of waste, which is challenging to measure. The waste mound reaches a peak height of approximately 20 m and spreads over a 130 m × 90 m area. Wastes are mostly from construction sites and include concrete, slag, and sand. Due to the limited access to the mound featuring uneven surfaces, measurements were taken using a ladder in this study (Figure 1).

2.2. TLS- and UAV-Based Point Cloud Generation and Waste Stockpile Volume Computation

The UAV images and TLS measurements were taken on 10 October 2018. UAV images were taken from 10 a.m. to 11 a.m., whereas TLS measurements were conducted from 2 p.m. to 4 p.m. The temperature and humidity data provided by a nearby weather station on 10 October 2018 were 16.3 °C and 49.5% at 10 a.m. and 15.9 °C and 34.8% at 4 p.m., respectively.
The overall process of TLS-based point cloud generation can be divided into three phases. (1) In the goal setting and planning phase, the scan positions and distances should be planned to account for shadow zones and disturbances. (2) In the field scanning phase, scanning is performed based on the planned scan positions, and backups are executed to prevent data loss. The quality of the scanned data is verified during scanning such that rescanning is performed if necessary. In this study, field scanning was conducted at 20 scan positions (Figure 2). To ensure accurate registration of individual scan data, GCPs should be measured in the survey area and the measured values should be reflected in the ensuing data processing. This study used four GCPs. (3) In the data processing phase, the datasets acquired in the field scanning phase are registered and converted into georeferenced coordinates. The converted data points are realigned and unnecessary parts are removed.
Data processing was performed using the Cyclone 9.2.1 software and accuracy evaluations and volume computations were conducted using CloudCompare 2.10.2 and ArcMap 10.1, respectively.
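To illustrate the georeferencing step of phase (3), the following minimal numpy sketch estimates a rigid (rotation plus translation) transform from GCP correspondences with the Kabsch algorithm and applies it to a scan. The GCP coordinates and scan data are hypothetical; this is only an illustration of the underlying computation, not the Cyclone workflow used in the study.

```python
import numpy as np

def rigid_transform(src, dst):
    """Estimate rotation R and translation t such that R @ p + t maps src onto dst
    (Kabsch algorithm). src, dst: (N, 3) arrays of corresponding GCP coordinates."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)           # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t

# Hypothetical example: four GCPs in scan-local and georeferenced (grid) coordinates.
gcp_scan = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.2], [10.0, 8.0, 0.1], [0.0, 8.0, -0.1]])
rot_true = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
gcp_geo = gcp_scan @ rot_true.T + np.array([211000.0, 411000.0, 35.0])

R, t = rigid_transform(gcp_scan, gcp_geo)
scan_points = np.random.rand(1000, 3) * 10.0      # stand-in for a registered scan
georeferenced = scan_points @ R.T + t             # apply the transform to every point
```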
To implement the UAV-based point cloud generation, we set the flight altitude, image overlap, and number of GCPs as key parameters. The flight plan must be carefully designed and verified; otherwise, data for a particular location of interest may be missed, forcing the UAV to return and repeat part of the mission [33].
The flight altitude was varied from 40 to 160 m (in 40 m intervals). A higher flight altitude can decrease the flight time by reducing the number of images required to cover the survey area, but it results in a larger ground sampling distance, i.e., lower image resolution and quality. Therefore, the flight height was set to four different levels, accounting for the height of the waste pile in the survey area.
3-D reconstruction is only possible when at least two images overlap, and the overlap ratio is typically set at 60–80% [34,35], or 80% for cities with complex landscapes [34]. Greater image overlap can enhance the quality of the image registration results, but it requires more flight and data processing time. We set fairly high overlap rates in this study, i.e., an 85% forward lap (FL) and a 65% side lap (SL).
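As a rough illustration of how the altitude and overlap settings translate into ground sampling distance (GSD), image footprint, and exposure spacing, the following sketch uses nominal Micro Four Thirds values assumed for the ZENMUSE X5 (17.3 × 13.0 mm sensor, 4608 × 3456 px, 15 mm lens) and a landscape camera orientation; these parameters are assumptions, not values taken from the flight logs.

```python
# Nominal sensor assumptions; substitute the actual camera specifications where they differ.
SENSOR_W_MM, SENSOR_H_MM = 17.3, 13.0
IMG_W_PX = 4608
FOCAL_MM = 15.0

def footprint_and_gsd(altitude_m):
    """Ground footprint (m) and ground sampling distance (cm/px) at a given altitude."""
    foot_w = SENSOR_W_MM * altitude_m / FOCAL_MM      # across-track footprint
    foot_h = SENSOR_H_MM * altitude_m / FOCAL_MM      # along-track footprint
    gsd_cm = foot_w / IMG_W_PX * 100.0
    return foot_w, foot_h, gsd_cm

def photo_spacing(altitude_m, forward_lap=0.85, side_lap=0.65):
    """Distance between exposures along track and between adjacent flight lines."""
    foot_w, foot_h, _ = footprint_and_gsd(altitude_m)
    return foot_h * (1.0 - forward_lap), foot_w * (1.0 - side_lap)

for alt in (40, 80, 120, 160):
    w, h, gsd = footprint_and_gsd(alt)
    along, across = photo_spacing(alt)
    print(f"{alt:>3} m: footprint {w:.0f} x {h:.0f} m, GSD {gsd:.1f} cm/px, "
          f"spacing {along:.1f} m along-track / {across:.1f} m between lines")
```

At the lowest altitude of 40 m, these assumed parameters give a GSD of roughly 1 cm/px, which is consistent with the need for many more images (and thus longer flights) than at 160 m.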
The number of GCPs is an important parameter related to image quality. Numerous studies have attempted to explain the relationship between the number of GCPs and image quality enhancement. In [36], one GCP per 2 ha yielded the highest accuracy, while [37] highlighted the importance of an even GCP distribution across the survey area. In this study, we also examined the association between the number of GCPs and point cloud accuracy by setting a sufficient number of GCPs based on previous studies. We conducted two surveys (i.e., one that included the waste pile and one that excluded it) in two GCP placement scenarios, placing 10 GCPs across each survey area (Figure 3). This GCP placement criterion is different from that in previous studies in which the number of GCPs or their even distribution was more important. We used a different criterion to test our hypothesis that the altitude-dependent GCP placement influences image quality.
Data processing was performed using the Pix4D Mapper 4.3 software, and accuracy evaluations and volume computations were conducted using CloudCompare 2.10.2 and ArcMap 10.1, respectively. Volume computation was performed using the following equation:
$$V_i = L_i \times W_i \times H_i,$$
where Li, Wi, and Hi are the length, width, and height of the cell, respectively [38]. The height of the cell is the difference between the terrain altitude of the cell given at its center and the base altitude in the cell’s center, defined as follows [38]:
$$H_i = Z_{Ti} - Z_{Bi},$$
where ZTi is the altitude in the center of cell i of the 3-D terrain and ZBi is the altitude in the center of cell i from the base surface [38].
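A minimal numpy sketch of this cell-wise computation is given below; the grid resolution, base-surface altitude, and mound shape are hypothetical, and the ArcMap workflow actually used in the study is not reproduced here.

```python
import numpy as np

def stockpile_volume(z_terrain, z_base, cell_size):
    """Cell-wise volume following the equations above: V_i = L_i * W_i * H_i with
    H_i = Z_Ti - Z_Bi. z_terrain, z_base: 2-D arrays of cell-centre altitudes (m)
    on the same grid; cell_size: cell edge length (m). NaN cells are ignored."""
    h = z_terrain - z_base                      # height of material in each cell
    h = np.where(np.isnan(h), 0.0, h)           # skip cells with no data
    fill = h[h > 0].sum() * cell_size**2        # material above the base surface
    cut = -h[h < 0].sum() * cell_size**2        # material below the base surface
    return fill, cut

# Hypothetical 0.1 m grid: a 20 m mound modelled as a Gaussian bump over a flat base.
x, y = np.meshgrid(np.arange(0, 130, 0.1), np.arange(0, 90, 0.1))
z_base = np.full_like(x, 30.0)                  # assumed base-surface altitude (m)
z_terr = z_base + 20.0 * np.exp(-(((x - 65) / 30) ** 2 + ((y - 45) / 20) ** 2))
fill, cut = stockpile_volume(z_terr, z_base, cell_size=0.1)
print(f"fill volume ~ {fill:,.0f} m3, cut volume ~ {cut:,.0f} m3")
```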
The accuracy of the point clouds generated using the TLS and UAV technologies was evaluated by comparing them with those generated by setting a large number of control points (CPs) in the survey area based on a GNSS field survey. The discrepancy between the 3-D model and CPs can be used to evaluate the model accuracy using the root mean square error (RMSE). The RMSE is a statistical metric commonly used to validate the accuracy of point cloud-based modeling approaches such as TLS and UAV [31,39]. The RMSE is a popular and easily understood proxy when the “ground truth” dataset is a set of distributed points rather than a continuous “truth” surface [21]. In this study, CPs were measured with the VRS/RTK-GNSS in Trimble R8s. In total, 311 CPs were measured, as shown in Figure 4.
Measurements were performed at CPs located across the study area, generating reference data essential for evaluating the point cloud accuracy. The RMSE reflects the accuracy of each of the x, y, and z components, but we computed the RMSE using the xyz value, which was obtained by combining the RMSEs corresponding to x, y, and z, as follows:
$$RMSE_{XYZ} = \sqrt{RMSE_X^2 + RMSE_Y^2 + RMSE_Z^2},$$
where RMSEx can be defined as follows:
$$RMSE_X = \sqrt{\frac{\sum_{i=1}^{n} \Delta x_i^2}{n}},$$
where Δxi is the difference between the CP coordinates and coordinates determined from the point cloud and n is the number of points. The same equation applies to RMSEY and RMSEZ mutatis mutandis.
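The sketch below shows one way to evaluate these RMSE metrics, assuming each surveyed CP is compared against its nearest neighbour in the point cloud (a scipy KD-tree is used for the matching); both the matching strategy and the synthetic data are assumptions made for illustration.

```python
import numpy as np
from scipy.spatial import cKDTree

def rmse_xyz(cp_coords, cloud):
    """Per-axis RMSE_X, RMSE_Y, RMSE_Z and the combined RMSE_XYZ between surveyed
    control points and their nearest neighbours in the point cloud ((N, 3) arrays)."""
    tree = cKDTree(cloud)
    _, idx = tree.query(cp_coords)                  # nearest cloud point per CP
    delta = cloud[idx] - cp_coords                  # per-axis coordinate differences
    rmse = np.sqrt((delta ** 2).mean(axis=0))       # RMSE_X, RMSE_Y, RMSE_Z
    return rmse, float(np.sqrt((rmse ** 2).sum()))  # combined RMSE_XYZ

# Hypothetical use: 311 CPs checked against a synthetic point cloud.
cloud = np.random.rand(100000, 3) * [130.0, 90.0, 20.0]
cps = cloud[np.random.choice(len(cloud), 311, replace=False)]
cps = cps + np.random.normal(0.0, 0.02, cps.shape)  # simulated 2 cm survey noise
per_axis, combined = rmse_xyz(cps, cloud)
print("RMSE x/y/z:", per_axis, "RMSE xyz:", combined)
```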

2.3. Comparison of Spatial Features and Efficacy of Point Clouds

We compared the point clouds generated using the two previously discussed technologies by analyzing their spatial features and efficiencies. Three techniques are generally used when comparing two spatial models, namely the DEM of difference (DoD), direct cloud-to-cloud (C2C), and cloud-to-mesh or cloud-to-model distance (C2M) techniques [40,41]. However, these methods have disadvantages when used to compare point clouds, which are summarized as follows.
The DoD method, which is used to compare two DEMs, cannot handle overhangs, and its information density decreases as surface steepness increases [39]. Moreover, it is not a full 3-D representation but a 2.5-D model in which a single z value is assigned to each cell, rendering it unsuitable for evaluating the complex morphology of solid waste [42].
The C2C approach is the simplest and fastest direct method for a 3-D comparison of point clouds [43]. For each point of the second point cloud, the closest point can be defined in the first point cloud. In its simplest version, the change in the surface can be estimated as the distance between the two points. However, this method cannot be used to calculate spatially variable confidence intervals [41].
In the C2M method, the change in the surface can be calculated based on the distance between a point cloud and a reference 3-D mesh [44], which generally requires time-consuming manual inspection. As in the DoD technique, interpolation of missing data introduces uncertainties that are difficult to quantify [41].
To overcome the uncertainties associated with the spatial data comparison, we can use the M3C2 algorithm [41], which enables the rapid analysis of point clouds with complex surface topographies [40,41,45]. The M3C2 algorithm finds the best-fitting normal direction for each point and then calculates the distance between the two point clouds along a cylinder of a given radius projected in the normal direction [46]. Barnhart and Crosby [40] divided the M3C2 algorithm into two steps—point normal estimation and difference computation. Users may specify if local point normals are calculated or if normals are fixed in either the horizontal or vertical direction [40]. Horizontal point normals allow true horizontal erosion rates to be calculated from the M3C2 analysis, whereas vertical normals allow M3C2 data to be used for strictly vertical erosion and aggradation measurements [40]. In this study, point cloud comparison was performed using the M3C2 algorithm instead of a conventional spatial model comparison technique.
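For illustration, a heavily simplified sketch of these two M3C2 steps is given below: a PCA-based normal is estimated at each core point, and a signed distance is then computed from the mean positions of the two clouds inside a cylinder oriented along that normal. The parameter values and synthetic data are arbitrary, and the full CloudCompare implementation additionally handles multi-scale normal estimation, core point sub-sampling, and per-point confidence intervals.

```python
import numpy as np
from scipy.spatial import cKDTree

def m3c2_distance(cloud1, cloud2, core_points,
                  normal_radius=1.0, cyl_radius=0.5, cyl_halflength=2.0):
    """Simplified M3C2-style signed distance between two point clouds.
    cloud1, cloud2, core_points: (N, 3) arrays. Returns one signed distance per
    core point along the locally estimated normal (NaN where a cylinder is empty)."""
    tree1, tree2 = cKDTree(cloud1), cKDTree(cloud2)
    search_r = float(np.hypot(cyl_radius, cyl_halflength))
    distances = np.full(len(core_points), np.nan)

    for i, p in enumerate(core_points):
        # Step 1: estimate the local surface normal from cloud1 neighbours via PCA.
        idx = tree1.query_ball_point(p, normal_radius)
        if len(idx) < 3:
            continue
        nbrs = cloud1[idx] - p
        _, _, vt = np.linalg.svd(nbrs, full_matrices=False)
        normal = vt[-1]                      # direction of smallest local variance
        if normal[2] < 0:                    # preferred orientation: point toward +Z
            normal = -normal

        # Step 2: mean along-normal position of each cloud inside the cylinder.
        means = []
        for cloud, tree in ((cloud1, tree1), (cloud2, tree2)):
            cand = cloud[tree.query_ball_point(p, search_r)]
            rel = cand - p
            axial = rel @ normal                                  # along-normal offset
            radial = np.linalg.norm(rel - np.outer(axial, normal), axis=1)
            inside = (np.abs(axial) <= cyl_halflength) & (radial <= cyl_radius)
            if not inside.any():
                break
            means.append(axial[inside].mean())
        if len(means) == 2:
            distances[i] = means[1] - means[0]    # positive: cloud2 lies above cloud1
    return distances

# Hypothetical use with synthetic data: a flat surface raised uniformly by 0.5 m.
rng = np.random.default_rng(0)
c1 = np.c_[rng.uniform(0, 50, 20000), rng.uniform(0, 50, 20000), np.zeros(20000)]
c2 = c1 + [0.0, 0.0, 0.5]
print(np.nanmean(m3c2_distance(c1, c2, c1[::100])))   # ~0.5
```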
For efficiency analysis, we employed the time variables used by Silva et al. [30] when comparing the UAV, GNSS, and LiDAR and those used by Son et al. [47] when building a UAV-based DSM. The efficiency and accuracy were compared by analyzing the time required for point cloud generation based on the UAV and TLS technologies. For the UAV case, we selected the scenario with the highest accuracy.

2.4. Point Cloud Fusion and Volume Computation

We built one point cloud by fusing the TLS- and UAV-based point clouds to compare the spatial accuracy and efficiency of the TLS/UAV fusion model with those of each individual model. Although higher accuracy is expected for the point cloud generated using the fusion model, the efficiency of the process should also be considered. We fused the two technologies and analyzed the performance of the fusion method to test the hypothesis that the respective shortcomings of the TLS- and UAV-based point clouds can be resolved by fusing them. As the UAV-based point cloud, we selected the most accurate cloud from among the eight point clouds generated in eight different scenarios.
The fused TLS- and UAV-based point cloud equation can be expressed as follows:
$$PCD_F = PCD_T + PCD_U,$$
where PCD_T is the TLS-based point cloud, PCD_U is the UAV-based point cloud, and PCD_F is the fused TLS- and UAV-based point cloud.
The TLS- and UAV-based point clouds were fused using the CloudCompare 2.10.2 software. Since the point clouds use the same coordinate system (Korea 2000/Central Belt 2010–EPSG:5186), there is no need to perform additional georeferencing. The volume computation accuracy of the point cloud generated by the fusion approach was evaluated using the same method employed for the TLS and UAV methods.
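Because both clouds already share EPSG:5186 coordinates, the fusion in the equation above amounts to concatenating the two point sets. The sketch below shows this with numpy; the file names are hypothetical, and the optional voxel thinning step (5 cm voxels assumed) is an illustration of how the denser TLS areas could be balanced, not part of the workflow described above.

```python
import numpy as np

# PCD_T and PCD_U as (N, 3) arrays already expressed in EPSG:5186
# (Korea 2000 / Central Belt 2010), so no re-projection is needed before merging.
pcd_tls = np.loadtxt("tls_points.xyz")       # hypothetical exported ASCII clouds
pcd_uav = np.loadtxt("uav_points.xyz")
pcd_fused = np.vstack([pcd_tls, pcd_uav])    # PCD_F = PCD_T + PCD_U

# Optional: thin the merged cloud with a simple voxel-grid filter so that densely
# scanned TLS areas do not dominate later surface interpolation.
voxel = 0.05                                 # assumed 5 cm voxel size
keys = np.floor(pcd_fused / voxel).astype(np.int64)
_, first_idx = np.unique(keys, axis=0, return_index=True)
pcd_fused = pcd_fused[np.sort(first_idx)]
np.savetxt("fused_points.xyz", pcd_fused, fmt="%.3f")
```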

3. Results

3.1. Point Cloud Generation and Volume Computation

TLS-based point cloud data were obtained by scanning the survey area from all 20 scan positions. The individual scan data were registered into a single point cloud, yielding reasonably high accuracy (RMSE = 0.202 m). The volume computed using the TLS-based point cloud was 41,226 m3.
UAV-based point clouds were generated for eight scenarios (A–H), with four flight altitudes and two sets of 10 GCPs as variables (Table 1).
Among eight scenarios (A–H) in which point clouds were generated, scenario A was the most accurate one (RMSE = 0.032 m). Scenario A was configured with a flight altitude of 40 m and a set of 10 GCPs considering the waste height. In scenarios considering waste height (A, C, E, and G), the RMSE increased with the increasing flight altitude, which indicates that accuracy has an inverse correlation with flight altitude. In scenarios with evenly distributed GCPs (B, D, F, and H) that did not consider waste height, no correlation was observed between the RMSE and flight altitude.
Volume computation was conducted on eight UAV flight scenarios using the corresponding point clouds (Table 2).
Among eight scenarios (A–H) in which volume computation was performed, those with GCPs placed atop the waste pile (A, C, E, and G) exhibited similar values (~41,000 m3). This finding can be examined in association with the RMSE. The computed volumes for the other scenarios (B, D, F, and H), which did not have high point cloud accuracy, deviated considerably from each other. Such observations are attributed to the fact that volume is obtained from point cloud data comprising x, y, and z values. In other words, x, y, and z coordinates must be sufficiently accurate to reduce the estimated volume uncertainty.

3.2. Comparison of Spatial Features and Efficacy of Point Clouds

The M3C2 algorithm was employed to compare and analyze the point clouds generated using the TLS and UAV technologies. For UAV-based point clouds, scenario A, which had the highest accuracy, was used. Although both the TLS and UAV methods yielded point clouds with fairly high accuracies, they had certain drawbacks.
Figure 5a,b shows the side-view images comparing the TLS- and UAV-based point clouds of the waste disposal site.
Figure 5a-1,b-1 depicts the point clouds generated using the TLS approach, whereas Figure 5a-2,b-2 presents those obtained using the UAV method. The latter two images show missing portions, presumably due to the difference between the TLS position and UAV shooting position. The TLS technology scans sideways from positions fixed on the ground, but UAV images taken from above are more likely to miss side-view aspects. Constructing a model similar to the original shape is possible using SfM algorithms, with images taken from different positions as configured when setting the UAV flight parameters. However, this technique was not sufficient to reproduce irregularly curved sides.
Figure 6a,b shows the top-view images comparing the TLS- and UAV-based point clouds of the waste disposal site.
Figure 6a-1,b-1 presents the images of the TLS-based point cloud, whereas Figure 6a-2,b-2 depicts the UAV-based point cloud. Although TLS was also performed on top of the waste pile, the TLS-based point cloud exhibits gaps (unscanned portions), presumably due to uneven surfaces with steps and grooves. The TLS method was particularly prone to errors when representing grooves in the point cloud. In contrast, grooves and curves were well reflected in the UAV-based point cloud. In addition to the advantage of the UAV's vertical shooting position in taking top-view photos, as mentioned in the side-view image discussion, the GCPs placed atop the waste pile presumably contributed to the representation accuracy.
We then calculated the time required to generate a point cloud, i.e., from the beginning of the TLS and UAV flight to point cloud completion, to compare the time requirements of the TLS and UAV technologies (Table 3).
The TLS and UAV methods required 800 and 340 min, respectively. The same amount of time was spent measuring CPs, which were used for accuracy evaluation, because the same data were used for TLS and UAV tests. TLS required more time, with the exception of the time spent measuring the GCPs. Given the small size of wastes in the study area compared to the typical volumes of disaster and construction wastes, the feasibility of using TLS technology for volume computation is considered low.

3.3. Point Cloud Fusion and Volume Computation

We then built a single point cloud by fusing the TLS- and UAV-based point clouds. The fusion model yielded the following values: RMSE = 0.030 m and volume = 41,232 m3 (Table 4).
The point cloud accuracy of the fusion model was higher than those of the TLS and UAV methods, but similar to that of the UAV method. Müller et al. [29] constructed a high-resolution DEM by fusing TLS and UAV technologies to monitor an eruption site. Although a fusion model was used to reflect the geomorphological characteristics of the study area, comparisons and analyses showed that the UAV approach alone can yield the desired results.

4. Discussion

The accuracy of the UAV-based point clouds was evaluated for each scenario. In general, a lower flight height enhances image resolution and allows more images with overlapping parts to be taken, resulting in higher image quality and accuracy. However, no significant effect of the flight altitude was observed when the GCPs were placed only on flat land. This finding suggests that the GCP arrangement is associated with the accuracy of the point cloud model [31,48,49]. Most previous studies, conducted in areas with only slight elevation variations, have focused on the number of GCPs rather than on their placement [13,50]. The results of this study demonstrate that GCPs should also be placed at the highest points in an area with significant elevation variations.
Although the UAV method outperformed the TLS approach in terms of point cloud accuracy, this finding does not necessarily indicate that UAV technology is superior to TLS technology. In the UAV approach, an optimized point cloud could be built by selecting the best performing of the eight different scenarios. If the TLS method had been conducted using a similarly optimized setup (i.e., with more scan stations and more elaborate measurements), its accuracy would also have improved. Jo and Hong [5] built point clouds of the same target object using TLS and UAV technologies and computed the accuracies of the x, y, and z coordinates. The TLS approach yielded more accurate x and y coordinates, whereas the UAV method generated slightly more accurate z coordinates. There remains considerable room for discussion regarding the performance of these two techniques in terms of time, cost, and efficiency.
In the TLS/UAV point cloud fusion model, the TLS and UAV technologies can mutually compensate for the disadvantages of each other. Although TLS is advantageous over UAV technology when surveying a small area (in terms of image accuracy), it has limitations in surveying large areas [51]. Jo and Hong [5] suggested that, in the fusion of the UAV- and TLS-based point clouds of an area with buildings and surrounding grounds, a UAV can be employed to obtain the point cloud at the top of the building, which is difficult to obtain via TLS, thereby enhancing the overall accuracy of the 3-D point cloud data.
In this study, we applied the TLS and UAV methods to the sides and top of the waste pile, respectively, and showed that the integrated use of TLS and UAV technologies can compensate for the drawbacks of each method. However, given the insignificant difference in accuracy between the UAV- and fusion model-based point clouds, the efficacy of these methods should be further examined. The total time spent on point cloud generation was 800 min for TLS and 340 min for the UAV. The fusion model required considerably more time because of its own analysis time in addition to the time taken for the TLS and UAV approaches. Consequently, the UAV method alone can be considered highly advantageous in terms of efficiency.
In summary, the fusion model may be a rational solution to the problems associated with UAV and TLS technologies, but it is less efficient than the UAV approach. The UAV method is prone to errors in side-view photogrammetry during point cloud generation, which can be overcome by UAV tilt control and flying along the sides of a waste pile. The insufficient representation of the sides in a point cloud obtained in this study is ascribable to the limitation of the vertical UAV shooting position. In view of this insufficiency, future research must focus on deriving an optimal configuration of various flight parameters, such as the camera position and direction, to enhance the accuracy of point cloud generation and volume computation.
The present study estimated the waste volume from the environmental management perspective and requires further discussion from the temporal standpoint. Wastes from mass developments or natural disasters must be quantified and examined in a timely manner for proper management. Large-scale wastes from mass developments must be processed within a predefined time period according to the Environmental Impact Assessment Act, while those from natural disasters must be quickly quantified to prevent subsequent damage. In light of this, the UAV-based volume estimation method exhibits relatively high accuracy and shorter processing time; it can therefore be applied in a variety of situations wherein fast and accurate volume estimation is critical.

5. Conclusions

Continuous monitoring of developing project sites is essential to minimize environmental problems resulting from long-term stacking of waste stockpiles, such as scattered dust and soil loss. Three volume computation methods (TLS, UAV, and TLS/UAV) were compared and analyzed in this study to obtain accurate volume estimates that must precede continuous monitoring activities. Waste stockpile volume computations were performed using the generated point clouds, and the accuracy and efficacy of the three techniques were compared.
All three techniques were suitable for generating highly accurate point clouds. The fusion model was the most accurate one, followed by the UAV and TLS approaches, with the point cloud accuracy of the fusion model being similar to that of the UAV method. Similar volumes were computed using all three techniques. The UAV-based volume estimation method was the most effective one in terms of time requirement, whereas the fusion method yielded the most accurate results (Table 5).
Although the UAV approach is advantageous for rapidly performing waste stockpile volume computations in a large area, the UAV and TLS technologies can mutually compensate for their respective weaknesses in capturing images through scanning or aerial photography, depending on the situation and target object. The fusion method exploits the advantages of both UAV and TLS. It is deemed more appropriate for historical objects requiring a detailed point cloud than for forestry reconstruction and preservation, where point cloud generation at the lower end of the forest is challenging [28,52].
Future studies should determine the most efficient method of computing the volumes of solid waste stockpiles at different scales by considering the time requirements and economic factors, such as equipment and labor costs. Research on topographic point cloud generation with a LiDAR sensor mounted on a UAV is ongoing and will enable point cloud generation with a higher density than UAV photogrammetry [53]. Thus, we should also perform volume computations using a UAV camera and UAV LiDAR, followed by a comparison of their performance. The results of this study demonstrate that efficient environmental management can be achieved using UAVs for environmental impact assessments or environmental monitoring associated with large-scale development projects.

Author Contributions

Conceptualization, S.W.S.; methodology, J.J.Y.; software, D.W.K.; validation, S.W.S., D.W.K., and J.J.Y.; formal analysis, W.G.S.; investigation, J.J.Y.; writing—original draft preparation, S.W.S.; writing—review and editing, S.W.S.; visualization, D.W.K.; supervision, S.W.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Korea Environment Institute, following the “Reviewing Feasibility of Applying of Drones and BIM in Environment Impact Assessment (BA2019-09)” project.

Acknowledgments

Parts of this study were previously presented in a Ph.D. dissertation (Spatial data collection methodology for estimating large-scale waste quantity based on Unmanned Aerial Systems and Terrestrial Laser Scanning); therefore, some of the methods and results overlap.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Kociuba, W.; Kubisz, W.; Zagórski, P. Use of terrestrial laser scanning (TLS) for monitoring and modelling of geomorphic processes and phenomena at a small and medium spatial scale in Polar environment (Scott River—Spitsbergen). Geomorphology 2014, 212, 84–96.
2. Wang, W.; Zhao, W.; Huang, L.; Vimarlund, V.; Wang, Z. Applications of terrestrial laser scanning for tunnels: A review. J. Traffic Transp. Eng. (Engl. Ed.) 2014, 1, 325–337.
3. Pitkänen, T.P.; Raumonen, P.; Kangas, A. Measuring stem diameters with TLS in boreal forests by complementary fitting procedure. ISPRS J. Photogramm. Remote Sens. 2019, 147, 294–306.
4. Liu, J.; Skidmore, A.K.; Wang, T.; Zhu, X.; Premier, J.; Heurich, M.; Beudert, B.; Jones, S. Variation of leaf angle distribution quantified by terrestrial LiDAR in natural European beech forest. ISPRS J. Photogramm. Remote Sens. 2019, 148, 208–220.
5. Jo, Y.H.; Hong, S. Three-dimensional digital documentation of cultural heritage site based on the convergence of terrestrial laser scanning and unmanned aerial vehicle photogrammetry. ISPRS Int. J. Geo-Inf. 2019, 8, 53.
6. Wang, P.; Li, R.; Bu, G.; Zhao, R. Automated low-cost terrestrial laser scanner for measuring diameters at breast height and heights of plantation trees. PLoS ONE 2019, 14, 1–26.
7. Martínez-Espejo Zaragoza, I.; Caroti, G.; Piemonte, A.; Riedel, B.; Tengen, D.; Niemeier, W. Structure from motion (SfM) processing of UAV images and combination with terrestrial laser scanning, applied for a 3D-documentation in a hazardous situation. Geomat. Nat. Hazards Risk 2017, 8, 1492–1504.
8. Xu, C.; Li, Z.; Li, H.; Wang, F.; Zhou, P. Long-range terrestrial laser scanning measurements of annual and intra-annual mass balances for Urumqi Glacier No. 1, eastern Tien Shan, China. Cryosphere 2019, 13, 2361–2383.
9. Xu, Z.; Wu, L.; Shen, Y.; Li, F.; Wang, Q.; Wang, R. Tridimensional reconstruction applied to cultural heritage with the use of camera-equipped UAV and terrestrial laser scanner. Remote Sens. 2014, 6, 10413–10434.
10. Osada, E.; Sośnica, K.; Borkowski, A.; Owczarek-Wesołowska, M.; Gromczak, A. A direct georeferencing method for terrestrial laser scanning using GNSS data and the vertical deflection from global earth gravity models. Sensors 2017, 17, 1489.
11. Tong, X.; Liu, X.; Chen, P.; Liu, S.; Luan, K.; Li, L.; Liu, S.; Liu, X.; Xie, H.; Jin, Y.; et al. Integration of UAV-based photogrammetry and terrestrial laser scanning for the three-dimensional mapping and monitoring of open-pit mine areas. Remote Sens. 2015, 7, 6635–6662.
12. Hugenholtz, C.H.; Walker, J.; Brown, O.; Myshak, S. Earthwork volumetrics with an unmanned aerial vehicle and softcopy photogrammetry. J. Surv. Eng. 2014, 141, 06014003.
13. Ruzgiene, B.; Berteška, T.; Gečyte, S.; Jakubauskiene, E.; Aksamitauskas, V.Č. The surface modelling based on UAV Photogrammetry and qualitative estimation. Measurement 2015, 73, 619–627.
14. Mancini, F.; Dubbini, M.; Gattelli, M.; Stecchi, F.; Fabbri, S.; Gabbianelli, G. Using unmanned aerial vehicles (UAV) for high-resolution reconstruction of topography: The structure from motion approach on coastal environments. Remote Sens. 2013, 5, 6880–6898.
15. Ren, H.; Zhao, Y.; Xiao, W.; Hu, Z. A review of UAV monitoring in mining areas: Current status and future perspectives. Int. J. Coal Sci. Technol. 2019, 6, 320–333.
16. Pérez-Ortiz, M.; Peña, J.M.; Gutiérrez, P.A.; Torres-Sánchez, J.; Hervás-Martínez, C.; López-Granados, F. Selecting patterns and features for between- and within- crop-row weed mapping using UAV-imagery. Expert Syst. Appl. 2016, 47, 85–94.
17. Rokhmana, C.A. The Potential of UAV-based Remote Sensing for Supporting Precision Agriculture in Indonesia. Proc. Environ. Sci. 2015, 24, 245–253.
18. Klosterman, S.; Melaas, E.; Wang, J.; Martinez, A.; Frederick, S.; O'Keefe, J.; Orwig, D.A.; Wang, Z.; Sun, Q.; Schaaf, C.; et al. Fine-scale perspectives on landscape phenology from unmanned aerial vehicle (UAV) photography. Agric. For. Meteorol. 2018, 248, 397–407.
19. Ota, T.; Ogawa, M.; Mizoue, N.; Fukumoto, K.; Yoshida, S. Forest Structure Estimation from a UAV-Based Photogrammetric Point Cloud in Managed Temperate Coniferous Forests. Forests 2017, 8, 343.
20. Hsieh, Y.C.; Chan, Y.C.; Hu, J.C. Digital elevation model differencing and error estimation from multiple sources: A case study from the Meiyuan Shan landslide in Taiwan. Remote Sens. 2016, 8, 199.
21. Eker, R.; Aydın, A.; Hübl, J. Unmanned aerial vehicle (UAV)-based monitoring of a landslide: Gallenzerkogel landslide (Ybbs-Lower Austria) case study. Environ. Monit. Assess. 2018, 190, 28.
22. Harwin, S.; Lucieer, A. Assessing the accuracy of georeferenced point clouds produced via multi-view stereopsis from unmanned aerial vehicle (UAV) imagery. Remote Sens. 2012, 4, 1573–1599.
23. Agüera-Vega, F.; Carvajal-Ramírez, F.; Martínez-Carricondo, P. Assessment of photogrammetric mapping accuracy based on variation ground control points number using unmanned aerial vehicle. Measurement 2017, 98, 221–227.
24. Gindraux, S.; Boesch, R.; Farinotti, D. Accuracy assessment of digital surface models from Unmanned Aerial Vehicles' imagery on glaciers. Remote Sens. 2017, 9, 186.
25. Mesas-Carrascosa, F.-J.; Garcia, M.D.N.; de Larriva, J.E.M.; Garcia-Ferrer, A. An analysis of the influence of flight parameters in the generation of unmanned aerial vehicle (UAV) orthomosaicks to survey archaeological areas. Sensors 2016, 16, 1838.
26. Mesas-Carrascosa, F.J.; Torres-Sánchez, J.; Clavero-Rumbao, I.; García-Ferrer, A.; Peña, J.M.; Borra-Serrano, I.; López-Granados, F. Assessing optimal flight parameters for generating accurate multispectral orthomosaicks by UAV to support site-specific crop management. Remote Sens. 2015, 7, 12793–12814.
27. Dandois, J.P.; Olano, M.; Ellis, E.C. Optimal altitude, overlap, and weather conditions for computer vision uav estimates of forest structure. Remote Sens. 2015, 7, 13895–13920.
28. Tian, J.; Dai, T.; Li, H.; Liao, C.; Teng, W.; Hu, Q.; Ma, W.; Xu, Y. A novel tree height extraction approach for individual trees by combining TLS and UAV image-based point cloud integration. Forests 2019, 10, 537.
29. Müller, D.; Walter, T.R.; Titt, T.; Schöpa, A.; Witt, T.; Steinke, B.; Gudmundsson, M.T.; Dürig, T. High-resolution digital elevation modeling from TLS and UAV campaign reveals structural complexity at the 2014/2015 Holuhraun eruption site, Iceland. Front. Earth Sci. 2017, 5, 59.
30. Da Silva, C.A.; Duarte, C.R.; Souto, M.V.S.; Santos, A.L.S.D.; Amaro, V.E.; Bicho, C.P.; Sabadia, J.A.B. Evaluating the accuracy in volume calculation in a pile of waste using UAV, GNSS and LiDAR. Bol. Ciências Geodésicas 2016, 22, 73–94.
31. Sanz-Ablanedo, E.; Chandler, J.H.; Rodríguez-Pérez, J.R.; Ordóñez, C. Accuracy of unmanned aerial vehicle (UAV) and SfM photogrammetry survey as a function of the number and location of ground control points used. Remote Sens. 2018, 10, 1606.
32. Son, S.W.; Yoon, J.H.; Jeon, H.J.; Kim, D.W.; Yu, J.J. Optimal flight parameters for unmanned aerial vehicles collecting spatial information for estimating large-scale waste generation. Int. J. Remote Sens. 2019, 40, 8010–8030.
33. Evers, L.; Dollevoet, T.; Barros, A.I.; Monsuur, H. Robust UAV mission planning. Ann. Oper. Res. 2014, 222, 293–315.
34. Pepe, M.; Fregonese, L.; Scaioni, M. Planning airborne photogrammetry and remote-sensing missions with modern platforms and sensors. Eur. J. Remote Sens. 2018, 51, 412–436.
35. Mikhail, E.; Bethel, J.; McGlone, J. Introduction to Modern Photogrammetry; John Wiley & Sons: New York, NY, USA, 2001; p. 496.
36. Coveney, S.; Roberts, K. Lightweight UAV digital elevation models and orthoimagery for environmental applications: Data accuracy evaluation and potential for river flood risk modelling. Int. J. Remote Sens. 2017, 38, 3159–3180.
37. Aber, J.; Marzolff, I.; Ries, J.B. Small Format Aerial Photography: Principles, Techniques and Geoscience Applications; Elsevier: Amsterdam, The Netherlands, 2016; p. 394.
38. Raeva, R.L.; Filipova, S.L.; Filipov, D.G. Volume computation of a stockpile - A case study comparing GPS and UAV measurements in an open pit quarry. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2016, XLI-B1, 999–1004.
39. Gallay, M.; Lloyd, C.D.; McKinley, J.; Barry, L. Assessing modern ground survey methods and airborne laser scanning for digital terrain modelling: A case study from the Lake District, England. Comput. Geosci. 2013, 51, 216–227.
40. Barnhart, T.; Crosby, B. Comparing two methods of surface change detection on an evolving thermokarst using high-temporal-frequency terrestrial laser scanning, Selawik River, Alaska. Remote Sens. 2013, 5, 2813–2837.
41. Lague, D.; Brodu, N.; Leroux, J. Accurate 3D comparison of complex topography with terrestrial laser scanner: Application to the Rangitikei canyon (NZ). ISPRS J. Photogramm. Remote Sens. 2013, 82, 10–26.
42. Yu, J.J.; Park, H.-S.; Kim, D.W.; Jeong, H.Y.; Seung, W.S. Assessing the applicability of sea cliff monitoring using multi-camera and SfM method. J. Korean Geomorphol. Assoc. 2018, 25, 67–80.
43. Girardeau-Montaut, D.; Roux, M.; Marc, R.; Thibault, G. Change detection on points cloud data acquired with a ground laser scanner. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2005, 36, W19.
44. Cignoni, P.; Rocchini, C.; Scopigno, R. Metro: Measuring error on simplified surfaces. Comput. Graph. Forum 1998, 17, 167–174.
45. James, M.R.; Robson, S.; Smith, M.W. 3-D uncertainty-based topographic change detection with structure-from-motion photogrammetry: Precision maps for ground control and directly georeferenced surveys. Earth Surf. Process. Landf. 2017, 42, 1769–1788.
46. Cook, K.L. An evaluation of the effectiveness of low-cost UAVs and structure from motion for geomorphic change detection. Geomorphology 2017, 278, 195–208.
47. Son, S.W.; Kim, D.W.; Yoon, J.-H.; Jeon, H.-J.; Yu, J.J. 3D model construction and evaluation using drone in terms of time efficiency. J. Korea Acad. Ind. Coop. Soc. 2018, 19, 497–505.
48. Shahbazi, M.; Sohn, G.; Théau, J.; Menard, P. Development and evaluation of a UAV-photogrammetry system for precise 3D environmental modeling. Sensors 2015, 15, 27493–27524.
49. Harwin, S.; Lucieer, A.; Osborn, J. The impact of the calibration method on the accuracy of point clouds derived using unmanned aerial vehicle multi-view stereopsis. Remote Sens. 2015, 7, 11933–11953.
50. Oniga, V.; Breaban, A.; Statescu, F. Determining the Optimum Number of Ground Control Points for Obtaining High Precision Results Based on UAS Images. Proceedings 2018, 2, 352.
51. Chen, N.; Ni, N.; Kapp, P.; Chen, J.; Xiao, A.; Li, H. Structural analysis of the Hero Range in the Qaidam Basin, northwestern China, using integrated UAV, terrestrial LiDAR, Landsat 8, and 3-D seismic data. Remote Sens. 2015, 8, 4581–4591.
52. Tomaštík, J.; Mokroš, M.; Saloš, S.; Chudỳ, F.; Tunák, D. Accuracy of photogrammetric UAV-based point clouds under conditions of partially-open forest canopy. Forests 2017, 8, 151.
53. Solazzo, D.; Sankey, J.B.; Sankey, T.T.; Munsen, S.M. Mapping and measuring aeolian sand dunes with photogrammetry and LiDAR from unmanned aerial vehicles (UAV) and multispectral satellite imagery on the Paria Plateau, AZ, USA. Geomorphology 2018, 319, 174–185.
Figure 1. Location of the study area, Jipyeon-ri in Sejong City, South Korea.
Figure 2. Terrestrial laser scanning (TLS) scan positions and ground control points (GCPs).
Figure 3. Ground control point (GCP) positions: (a) all GCPs, (b) GCPs placed without considering waste height, and (c) GCPs placed considering waste height.
Figure 4. Control point (CP) positions in the study area.
Figure 5. Side views of the point clouds taken at the waste disposal site: (a)-1 TLS-based point cloud and (a)-2 UAV-based point cloud; (b)-1 TLS-based point cloud and (b)-2 UAV-based point cloud.
Figure 6. Top views of the waste point clouds: (a)-1 TLS-based point cloud and (a)-2 UAV-based point cloud; (b)-1 TLS-based point cloud and (b)-2 UAV-based point cloud.
Table 1. Unmanned aerial vehicle (UAV)-based point cloud accuracy for various UAV flight scenarios.
All scenarios used an overlap of FL 85%/SL 65% and a set of 10 GCPs.

| Flight Altitude (m) | GCP Placement | Scenario | xyz RMSE (m) | Number of Images |
| 40 | Considering waste height | A | 0.032 | 443 |
| 40 | Without considering waste height | B | 0.447 | 443 |
| 80 | Considering waste height | C | 0.055 | 133 |
| 80 | Without considering waste height | D | 0.293 | 133 |
| 120 | Considering waste height | E | 0.075 | 65 |
| 120 | Without considering waste height | F | 0.325 | 65 |
| 160 | Considering waste height | G | 0.104 | 23 |
| 160 | Without considering waste height | H | 0.193 | 23 |
Table 2. Volume computed in each UAV flight scenario.
| Scenario | Volume (m3) | Scenario | Volume (m3) |
| A | 41,256 | B | 43,042 |
| C | 41,405 | D | 42,818 |
| E | 41,449 | F | 43,013 |
| G | 41,621 | H | 42,578 |
Table 3. Time requirements for TLS- and UAV-based point cloud generation.
| Point Cloud | Scan/Flight Time | GCP Measurement | CP Measurement | Image Processing | Total |
| TLS | 120 min | 20 min | 50 min | 610 min | 800 min |
| UAV | 20 min | 50 min | 50 min | 220 min | 340 min |
Table 4. Point cloud accuracy and volume computation results of UAV, TLS, and TLS/UAV fusion models.
| | UAV | TLS | Fusion |
| xyz RMSE (m) | 0.032 | 0.202 | 0.030 |
| Volume (m3) | 41,256 | 41,226 | 41,232 |
Table 5. Performance evaluation of the three volume computation methods (ranked from 1 to 3).
| | UAV | TLS | Fusion |
| Point Cloud Accuracy | 2 | 3 | 1 |
| Time Requirements | 1 | 2 | 3 |
