Article

Ground Control Point-Free Unmanned Aerial Vehicle-Based Photogrammetry for Volume Estimation of Stockpiles Carried on Barges

1. School of Geomatics, East China University of Technology, Nanchang 330013, China
2. School of Water Resources & Environmental Engineering, East China University of Technology, Nanchang 330013, China
3. National Field Observation and Research Station of Landslides in the Three Gorges Reservoir Area of Yangtze River, China Three Gorges University, Yichang 443002, China
4. School of Geodesy and Geomatics, Wuhan University, Wuhan 430079, China
* Author to whom correspondence should be addressed.
Sensors 2019, 19(16), 3534; https://doi.org/10.3390/s19163534
Submission received: 9 July 2019 / Revised: 1 August 2019 / Accepted: 10 August 2019 / Published: 13 August 2019

Abstract: In this study, an approach using ground control point-free unmanned aerial vehicle (UAV)-based photogrammetry is proposed to estimate the volume of stockpiles carried on barges in a dynamic environment. Compared with similar studies regarding UAVs, an indirect absolute orientation based on the geometry of the vessel is used to establish a custom-built framework that provides a unified reference instead of prerequisite ground control points (GCPs). To ensure sufficient overlap and reduce manual intervention, the stereo images are extracted from a UAV video for aerial triangulation. A region of interest is defined to exclude the area of water in all UAV images using a simple linear iterative clustering algorithm, which segments the UAV images into superpixels and helps to improve the accuracy of image matching. Structure-from-motion is used to recover three-dimensional geometry from the overlapping images without the assistance of exterior orientation parameters obtained from the airborne global positioning system and inertial measurement unit. Then, the semi-global matching algorithm is used to generate stockpile-covered and stockpile-free surface models. These models are oriented into a custom-built framework established from known distances, such as the length and width of the vessel, and they do not require GCPs for coordinate transformation. Lastly, the volume of a stockpile is estimated by multiplying the height difference between the stockpile-covered and stockpile-free surface models by the size of the grid that is defined using the resolution of these models. Results show a relatively small deviation of approximately ±2% between the volume estimated by UAV photogrammetry and the volume calculated by traditional manual measurement. Therefore, the proposed approach can be considered a better solution for the volume measurement of stockpiles carried on barges in a dynamic environment because UAV-based photogrammetry not only attains superior density and spatial accuracy but also considerably reduces data collection time.

1. Introduction

Measurement of stockpile volumes, which comprise various materials, such as coal, gypsum, chips, gravel, dirt, rock, quarry materials, and mine tailings, is an essential task in the construction and mining industry [1,2,3,4]. Conventional methods of volume measurement include the trapezoidal and cross-sectioning methods [5]. These methods generally assume that the geometric shape is regular (e.g., rectangular or triangular prisms and trapezoids) and can be modeled using ideal mathematical models, such as trapezoidal, Simpson-based, cubic spline, and cubic Hermite formulas. Moreover, these methods require the collection of three-dimensional (3D) points with an appropriate distribution and density for volume calculation [6]. However, in most instances, the surface of stockpiles is not a regular geometric shape; therefore, ideal mathematical models cannot accurately represent the real shapes of stockpiles [2].
Various methods are available to estimate the volume of stockpiles with irregular geometry. These methods can be divided into two categories on the basis of the density of the measured 3D points: (1) sparse point-based and (2) dense point cloud-based methods. Global positioning system (GPS) and total station instruments are often used to collect a set of sparse 3D points for modeling a stockpile surface and to calculate the volume by interpolating these points [7,8]. Although point-sampled methods can handle the complex surface of stockpiles, the volume cannot be estimated properly if the number of 3D points is small [2]. The representation of the stockpile surface also depends on the distribution of the 3D points and the interpolation method [5,9]. Moreover, ports need to unload stockpiles carried on barges without delay, whereas the aforementioned methods are usually labor intensive and time consuming and therefore unsuitable for rapid volume measurement of stockpiles carried on barges. Close-range photogrammetry is commonly used to estimate the volume of stockpiles by accurately measuring 3D points from overlapping images taken from different perspectives using a close-range camera. Such a method is fast, productive, and economical. In many cases, 3D points on the surface of stockpiles can still be obtained even when the stockpile is inaccessible or subject to safety restrictions. Yakar et al. [10] investigated the performance of close-range photogrammetry for volume computation in an excavation site and verified its applicability to volume calculations. Fawzy [3] used digital close-range photogrammetry as an alternative to traditional methods using total station instruments for volume calculation, which yielded a better result (97.21%) compared with that of the traditional method. Abbaszadeh and Rastiveis [11] compared close-range photogrammetry using a nonprofessional camera with traditional field surveying for volume estimation; they reported that accurate volume estimation can be achieved using close-range photogrammetry owing to its ability to measure negative slopes and berms. In addition, close-range photogrammetry can measure 3D points in a non-contact manner, which reduces the risk of exposing surveyors to danger during on-site operations. However, finding suitable camera placements to capture overlapping images from different perspectives is difficult around stockpiles carried on barges, where space is narrow.
Unmanned aerial vehicle (UAV)-based photogrammetry, which is flexible and low cost, can work in the close-range domain, can generate high-resolution and dense 3D point clouds, and can be used for the aerial mapping of 3D terrain models [12,13,14]; thus, it has been widely accepted for the volume estimation of stockpiles on land [15,16,17,18]. UAV-based photogrammetry typically acquires high-resolution aerial stereo images at low altitudes and reconstructs the 3D surface of a stockpile [17]. Previous studies have confirmed the accuracy of 3D modeling using UAV-based photogrammetry [19,20]. High-resolution remote sensing images with a fine ground sampling distance offer an opportunity to describe irregular stockpiles in detail, and they can be used to create precise 3D surface models (i.e., stockpile-covered and stockpile-free surface models) using structure-from-motion (SfM) and dense matching. Several ground control points (GCPs) are typically measured using a real-time kinematic global positioning system to georeference the UAV-based photogrammetric point clouds. Compared with the abovementioned methods, which rely on discretely distributed measuring points obtained from GPS or total station instruments for stockpile surface modeling, UAV-based photogrammetry can provide a more accurate solution for estimating the volume of stockpiles. In addition, UAVs allow surveyors to collect overlapping images far away from stockpiles instead of climbing them; in this manner, surveyors are not exposed to danger during on-site operations. Terrestrial laser scanning (TLS) has also become a popular tool for obtaining the 3D points of a terrain surface [21,22]. TLS-based methods are widely used to measure the volume of stockpiles because they can rapidly capture dense 3D point clouds for modeling irregularly shaped stockpile surfaces [23,24]. However, TLS-based methods still require surveyors to walk around the boundaries of stockpiles at the edge of the vessel or climb up the stockpiles to achieve full coverage of the surface. Small, lightweight LiDAR sensors, such as the HDL-32E, can be mounted on a UAV for airborne laser scanning (ALS), which can collect 3D point clouds only when the barges are essentially stationary; ALS-based methods cannot be used to reconstruct the surface of stockpiles when barges are moving or shaking. Meanwhile, a laser instrument is far more expensive than low-cost drones, e.g., DJI UAVs. Therefore, UAV-based photogrammetry is more applicable to the volume estimation of stockpiles than TLS-based methods.
At present, the volume of stockpiles often has to be measured while the barge is moving or shaking, i.e., in a dynamic environment. In contrast to similar studies on absolute orientation in UAV-based photogrammetry [19,20], UAV imaging and GCP measurement are complicated here by the relative motion between the barge and the background. A stable framework for collecting GCPs to georeference the generated digital surface model of stockpiles cannot be guaranteed in a dynamic environment. Consequently, studies on the use of UAV-based photogrammetry for the volume measurement of stockpiles under barge movement have rarely been reported. Although photogrammetric software, such as Pix4D and Agisoft PhotoScan, can also estimate volume without GCPs [25,26], the workflows supported by such software are only applicable to volume estimation on land and differ considerably from the case of stockpiles carried on barges. Specifically, the stockpile-free surface is typically not a plane but a complex irregular surface; thus, measuring the volume above a plane, as offered by software such as Agisoft PhotoScan, cannot yield the volume of stockpiles carried on barges, and a unified reference is still required to align the stockpile-covered and stockpile-free surface models. On this basis, an accurate and efficient approach using GCP-free UAV photogrammetry is proposed in this study to estimate the volume of a stockpile carried on a barge in a dynamic environment. An indirect absolute orientation based on the geometry of the vessel is used to establish a custom-built framework that provides a unified reference between the stockpile-covered and stockpile-free surface models. In addition, UAV images cover a large proportion of water, which is typically characterized by weak texture and variable undulation; as a result, the water around a barge is meaningless for the surface model of the barge. Thus, a region of interest (ROI) within the UAV images is extracted to improve the performance of image matching. In particular, a coarse-to-fine matching strategy is initially used to determine the corresponding points among overlapping images via the scale-invariant feature transform (SIFT) algorithm [27] and the subpixel Harris operator [28]. Then, SfM and semi-global matching (SGM) algorithms [29] are used to recover the 3D geometry and generate the dense point clouds of the stockpile-covered and stockpile-free surface models. In turn, these dense point clouds are transformed into a custom-built framework using a rotation matrix that consists of tilt and plane rotations. Lastly, the volume of the stockpile is estimated by multiplying the height difference between the stockpile-covered and stockpile-free surface models by the size of the grid that is defined using the resolution of these models.
The main contribution of this study is an approach using GCP-free UAV-based photogrammetry that is particularly suitable for estimating the volume of stockpiles carried on barges in a dynamic environment. In this approach, adaptive aerial stereo image extraction is used to obtain sufficient overlap from the UAV video for the photogrammetric process, and the simple linear iterative clustering (SLIC) algorithm is used to generate an ROI that improves the performance of image matching by excluding water. In particular, a custom-built framework, instead of prerequisite GCPs, is defined to provide the alignment between the stockpile-covered and stockpile-free surface models.
The remainder of this paper is organized as follows: In Section 2, the study area and the materials are introduced. In Section 3, the proposed approach using UAV-based photogrammetry is described in detail. In Section 4, comparative experimental results are presented together with detailed analysis and discussion. In Section 5, the conclusions of this study and possible future work are presented.

2. Study Area and Materials

2.1. Test Site

The study area is located downstream of the Zhuhai Bridge (22°09′28″N, 113°25′56″E) and covers approximately 1.5 km²; it is approximately 14 km southeast of Macau, China (Figure 1a,b). The stockpiles consist of sand and gravel (Figure 1c), which are used for the construction of the third runway of the Hong Kong International Airport with a reclamation area over 600 ha. This project includes land formation, construction of sea embankments outside the land, foundation reinforcement, installation of monitoring and testing equipment, and construction of a drainage system.
The construction area of this project is characterized by a slowly rising tide and a quick ebb tide, which are caused by the influence of the surrounding topography. Furthermore, the flat tide lasts a long time, and the tidal range is between 0.2 and 2.5 m, as shown in Figure 1d. The water flow velocity in the middle part of the construction area is relatively slow, and the velocity gradually decreases from north to south. Moreover, the water depth of the construction area is shallow in the south and deep in the north. The shallow mud surface is the water area near the south and north sides of the airport land area, with an elevation of approximately −2.0 m, whereas the deep mud surface is the northeast water area, with an elevation between −5.4 and −7.1 m. The volume of the reclamation area, which is mainly filled with sand, is approximately 90 million m³. Traditionally, as shown in Figure 2, the stockpile needs to be reshaped into a regular shape, e.g., a trapezoid, and the volume of stockpiles carried on barges is usually determined by field measurements using measuring tools, e.g., a ruler or ranging instrument. However, the traditional method suffers from several problems, such as low accuracy, low efficiency, the need for numerous surveyors, large human error, difficulty of monitoring, and frequent disagreement with the sand supplier. In this case, as shown in Figure 1c, placing instruments (e.g., GPS, prism, and total station) within the narrow space of barges or on the surface of stockpiles to measure volume is difficult. To find an alternative to traditional manual volume measurement, UAV photogrammetry and laser scanning are compared and evaluated in terms of indicators such as accuracy, efficiency, cost, and working conditions.

2.2. Field Measurements

The volume of stockpiles carried on barges should be measured at the test site before unloading the stockpiles into the construction area. In this study, the traditional measuring method and laser scanning were compared with the proposed method in June 2018. These experiments were performed under good weather conditions, e.g., sunny and windless. The field measurements include three parts:
(1) For the traditional method, as shown in Figure 2, a labor-intensive operation is performed to reshape the surface of a stockpile into a regular trapezoid; the task requires four people and approximately 2 h. A measuring tape is used to measure the widths and lengths of the top and the bottom. Thus, the volume $V_{\text{stockpile}}$ of a stockpile with the regular trapezoidal shape shown in Figure 3a can be calculated using the following formula:
$$ V_{\text{stockpile}} = V_{\text{stockpile}}^{\text{down}} + V_{\text{stockpile}}^{\text{up}}, \quad V_{\text{stockpile}}^{\text{up}} = \frac{h}{6}\left[ S_{\text{top}} + S_{\text{bottom}} + \left( w_{\text{top}} + w_{\text{bottom}} \right)\left( l_{\text{top}} + l_{\text{bottom}} \right) \right], \quad S_{\text{top}} = w_{\text{top}} l_{\text{top}}, \quad S_{\text{bottom}} = w_{\text{bottom}} l_{\text{bottom}}, \tag{1} $$
where $V_{\text{stockpile}}^{\text{down}}$ and $V_{\text{stockpile}}^{\text{up}}$ are the volumes of the stockpile below and above the flat surface of the vessel, respectively; $V_{\text{stockpile}}^{\text{down}}$ is a constant; $S_{\text{top}}$ and $S_{\text{bottom}}$ are the areas of the top and bottom planes, respectively; $w_{\text{top}}$ and $l_{\text{top}}$ are the width and length of the top plane, respectively; $w_{\text{bottom}}$ and $l_{\text{bottom}}$ are the width and length of the bottom plane, respectively; $h$ is the height of the reshaped stockpile above the vessel surface. However, the trapezoid reshaped through manual operation is seldom perfectly regular, and the error between the calculated result and the real volume of the stockpile cannot be ignored. To calculate the volume of the stockpile as accurately as possible, the stockpile is partitioned into several small trapezoids that are reshaped and measured separately; a small trapezoid is shown in Figure 3b, and a code sketch of this calculation is given at the end of this subsection.
(2) For the laser scanning-based method, a Velodyne HDL-32E LiDAR sensor, as shown in Figure 4a, is used to collect the 3D point clouds of the surface of the stockpiles because of its small size (only 5.7 inches high and 3.4 inches in diameter), light weight (less than 2 kg), and rugged build; it features 32 lasers across a 40° vertical field of view [30]. The detailed technical parameters of the Velodyne HDL-32E LiDAR sensor (Velodyne, San Jose, CA, USA) are given in Table 1. The surveyor carries the sensor on his back and walks along the side of the barge cabin to scan the stockpile-covered and stockpile-free surfaces at approximately 0.7 million laser returns per second. The barges must be essentially stationary during laser scanning; otherwise, the measured 3D point clouds become invalid. In other words, laser scanning cannot be used to reconstruct the surface of stockpiles when barges are moving or shaking.
(3) For UAV-based photogrammetry, the field measurement, i.e., GCP measurement, is conducted using a real-time kinematic GPS for approximately 20 min. Five GCPs on each of the experimental barges are measured for absolute orientation, and seven GCPs are measured as checkpoints to validate the accuracy of the stockpile-covered and stockpile-free surface models. Two of these GCPs are shown in Figure 5. The validity of the measured GCPs is ensured by conducting the field measurement in a windless environment and on a static barge. The GCPs are used only for evaluation in this study; they are not actually needed for GCP-free UAV photogrammetry.
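As a concrete illustration of Equation (1), the following minimal Python sketch (function and variable names are ours, not from the paper) computes the above-deck volume from the tape measurements; the below-deck component is passed in as a known constant.

```python
def trapezoid_stockpile_volume(w_top, l_top, w_bottom, l_bottom, h, v_down=0.0):
    """Volume of a reshaped (prismoidal) stockpile from tape measurements.

    w_*, l_*: widths and lengths of the top and bottom planes (m)
    h: height of the stockpile above the flat vessel surface (m)
    v_down: constant volume below the flat vessel surface (m^3)
    """
    s_top = w_top * l_top
    s_bottom = w_bottom * l_bottom
    v_up = (s_top + s_bottom + (w_top + w_bottom) * (l_top + l_bottom)) * h / 6.0
    return v_down + v_up

# Example: a 20 m x 8 m bottom, 16 m x 6 m top, and a 2.5 m high pile
print(trapezoid_stockpile_volume(6, 16, 8, 20, 2.5))   # about 316.7 m^3
```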

2.3. UAV-Based Video Acquisition

Generally, UAV-based stereo remotely sensed images are acquired in autonomous flights with waypoints predefined using mission planning software [19,20]. However, this method cannot satisfy the requirement of overlapping images using fixed waypoints when barges are moving or shaking. In this case, the images are extracted from UAV-based videos instead of still photographs to ensure sufficient overlap. Given the low cost and flexible operation of quadcopters, this study uses a small quadcopter (DJI Mavic Pro, DJI, Shenzhen, China). It can capture high-resolution true-color videos through a consumer-grade camera with a 1/2.3 inch complementary metal-oxide-semiconductor (CMOS) sensor, a field of view of 78.8°, and a focal length of 28 mm (35 mm format equivalent) [31]. The DJI Mavic Pro maintains the nadir orientation of the camera during video acquisition. UAV videos are obtained under good weather conditions, e.g., clear and sunny with winds below 10 m/s. The flight altitude is set to 35 m above the barge level, and the ground sample distance is 2.3 cm/pix. The flight speed is set to 5 m/s, and the flying time is less than 1 min for each barge. The overlapping UAV-based remotely sensed images are extracted from the UAV-based video at a given frame interval. The interior orientation parameters of the sensor carried on the DJI Mavic Pro are calculated from several views of a calibration pattern, i.e., the 2D chessboard pattern exhibited in Figure 6, in a relative reference system to minimize the systematic errors of the sensor. The systematic errors, i.e., distortions, are modeled using a polynomial with four coefficients, i.e., two radial and two tangential distortion coefficients. The mean reprojection error of the adjustment is 0.46 pixels during the solution of the sensor parameters, which are listed in Table 2. The parameters are further optimized through self-calibrating bundle adjustment.
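The chessboard-based interior orientation described above can be reproduced with standard tools. The sketch below is a minimal example using OpenCV's calibration routines; the 9 × 6 corner pattern, the image folder, and fixing the third radial coefficient to obtain a four-coefficient distortion model are our assumptions for illustration, not details from the paper.

```python
import glob
import cv2
import numpy as np

# Assumed 9x6 inner-corner chessboard; object points lie in an arbitrary planar frame.
pattern = (9, 6)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_points, img_points, img_size = [], [], None
for path in glob.glob("calibration/*.jpg"):            # hypothetical image folder
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
    img_size = gray.shape[::-1]
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_points.append(objp)
        img_points.append(corners)

# Four distortion coefficients (k1, k2, p1, p2): fix the third radial term to zero.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, img_size, None, None, flags=cv2.CALIB_FIX_K3)
print("mean reprojection error (px):", rms)
print("camera matrix:\n", K, "\ndistortion:", dist.ravel())
```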

3. Method

This study aims to use a workflow for the volume estimation of stockpiles carried on barges using UAV photogrammetry without the assistance of GCPs. The proposed approach, as demonstrated in Figure 7, includes four stages: (1) Self-adaptive stereo images are extracted to obtain overlapping images from UAV-based video. (2) Photogrammetric methods are used to generate 3D point clouds and reconstruct the stockpile-covered and stockpile-free surfaces through a series of steps, i.e., ROI-based image matching, SfM, bundle adjustment, and SGM. (3) A custom-built framework is established on the basis of the physical and geometric structure of the vessel and used to transform the stockpile-covered and stockpile-free surface models into a unified local spatial reference framework for volume estimation through the rotation transformations, i.e., tilt and plane rotations. (4) The volume of the stockpile is estimated by multiplying the height difference between the stockpile-covered and stockpile-free surface models by the size of the grid that is defined using the resolution of these models.

3.1. Aerial Stereo Image Extraction

In this study, UAV-based video is captured to ensure sufficient overlap because it provides a continuous sequence of frames. The overlap of the aerial stereo images is set to 80% to guarantee sufficient overlap in the case of large fluctuations of the stockpile surface. On the basis of the flight altitude above the barge level $ABL$, the flight speed $s_{\text{UAV}}$, and the focal length $f$ of the sensor, the frame sampling interval $T_{\text{frame}}$ is initially set using the following formula:
$$ T_{\text{frame}} = \frac{w_{\text{CMOS}} \cdot ABL \cdot \left( 1 - r_{\text{overlap}}^{(o)} \right)}{s_{\text{UAV}} \cdot f}, \tag{2} $$
where $w_{\text{CMOS}}$ denotes the width of the CMOS sensor and $r_{\text{overlap}}^{(o)}$ is the given overlap rate. Ideally, the flight speed is assumed to be a fixed value. However, maintaining a constant speed is difficult under manual operation because of the operator's skill and external factors, such as wind and the barge's moving speed. Therefore, as shown in Figure 8, the overlap is validated to extract stereo images with an overlap of approximately 80%, providing an effective solution when the UAV slows down or speeds up during data acquisition. The steps, which are also sketched in code after the list, are as follows:
Step 1: Extract the stereo images from the UAV-based video according to the frame interval $T_{\text{frame}}$.
Step 2: Match the stereo images using the SIFT algorithm on top of the image pyramid; then calculate the overlap $r_{\text{overlap}}$ of the extracted stereo images and compute the overlap difference $\Delta r_{\text{overlap}}$ between the overlap $r_{\text{overlap}}$ and the given overlap $r_{\text{overlap}}^{(o)}$.
Step 3: Update $T_{\text{frame}} = T_{\text{frame}} + 0.1$ when $\Delta r_{\text{overlap}}$ is greater than 5%; otherwise, update $T_{\text{frame}} = T_{\text{frame}} - 0.1$ when $\Delta r_{\text{overlap}}$ is less than −5%.
Step 4: Repeat Steps 1, 2, and 3 until stereo images with an overlap of approximately 80% are extracted.
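The following sketch illustrates the overlap-driven frame extraction above, assuming OpenCV for video access and an `estimate_overlap` helper (e.g., pyramid-based SIFT matching as in Step 2). Unlike the iterative re-extraction of Steps 1–4, the interval is adjusted on the fly as frames are read; the 80% target, ±5% tolerance, and 0.1 step follow the text.

```python
import cv2

def extract_stereo_frames(video_path, estimate_overlap,
                          t_frame=1.0, target=0.80, tol=0.05, step=0.1):
    """Extract frames whose pairwise overlap stays near the target rate.

    estimate_overlap(img_a, img_b) -> overlap ratio in [0, 1], assumed to be
    implemented with pyramid-based SIFT matching as described in Step 2.
    """
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    frames, t = [], 0.0
    ok, prev = cap.read()
    if not ok:
        return frames
    frames.append(prev)
    while True:
        t += t_frame
        cap.set(cv2.CAP_PROP_POS_FRAMES, int(round(t * fps)))
        ok, cur = cap.read()
        if not ok:
            break
        diff = estimate_overlap(prev, cur) - target
        if diff > tol:        # too much overlap -> lengthen the sampling interval
            t_frame += step
        elif diff < -tol:     # too little overlap -> shorten the sampling interval
            t_frame = max(step, t_frame - step)
        frames.append(cur)    # the current frame is kept even while adapting
        prev = cur
    return frames
```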

3.2. ROI-based Photogrammetry

Typically, image acquisition from low-cost UAVs (e.g., DJI quadcopters) suffers from large perspective distortions and poor sensor geometry [20,32], i.e., systematic errors, which need to be eliminated using the sensor parameters (Table 2).
The ROI of the barges and the stockpiles is defined to exclude the area of water in all UAV images and suit the volume measurement of the stockpiles carried on barges, thereby improving the accuracy of image matching and accelerating photogrammetry. In accordance with the clear gap in image color, intensity, and texture between barge and water, image segmentation is used to classify water and non-water regions. Moreover, effective and efficient segmentation is achieved by segmenting the UAV image on top of the pyramid (Figure 9a,b) into superpixels (Figure 9c) using the simple linear iterative clustering (SLIC) algorithm, which does not require much computational cost [33]. Mathematically, a down-sampled UAV image $I_s^{w \times h}$ can be split into $M$ superpixels, i.e., $I_s$ becomes a set of superpixels $\{ R_m \mid m \in M \}$, where $R_m$ denotes the region of superpixel $m$.
As shown in Figure 9b, the water regions $\Omega_{\text{water}}$ in a UAV image have relatively homogeneous characteristics compared with the barge and can thus be used to exclude the superpixels within the water and create a mask of the barge. The barge region $\Omega_{\text{barge}}$ can be calculated by excluding the water as:
$$ \Omega_{\text{barge}} = I_s^{w \times h} - \Omega_{\text{water}}. \tag{3} $$
The operation of merging two adjacent regions $R_k$ and $R_l$ into a new region is defined as:
$$ f(R_k \cup R_l) = f(R_k) \oplus f(R_l), \tag{4} $$
where $\oplus$ denotes some operation on the values of $R_k$ and $R_l$. Let $f(R_k) = (N_k, \mu_k, c_k)$, in which
$$ N_k = \left| R_k \right|, \quad \mu_k = \frac{1}{N_k} \sum_{i \in R_k} g_i, \quad c_k = \frac{1}{N_k} \sum_{i \in R_k} X_i, \tag{5} $$
where $N_k$, $\mu_k$, and $c_k$ are the number of pixels, the color mean, and the region center of superpixel $k$, respectively; $g_i$ denotes the color values of pixel $i$ in the red, green, and blue channels, and $X_i$ denotes its image coordinates. Then, a new region $R_{\text{new}}$ is generated, and the corresponding $N_{\text{new}}$, $\mu_{\text{new}}$, and $c_{\text{new}}$ are updated as follows:
$$ N_{\text{new}} = N_k + N_l, \quad \mu_{\text{new}} = \frac{N_k \mu_k + N_l \mu_l}{N_{\text{new}}}, \quad c_{\text{new}} = \frac{N_k c_k + N_l c_l}{N_{\text{new}}}. \tag{6} $$
The merging criterion for $R_k$ and $R_l$ can be defined using a similarity distance $d_{k,l}$, which is formed by a weighted combination of the color mean distance $d_{k,l}^{(\mu)}$, the region center distance $d_{k,l}^{(c)}$, and the connectivity $d_{k,l}^{(b)}$ as follows:
$$ d_{k,l}(R_k, R_l) = \alpha\, d_{k,l}^{(\mu)} + \beta\, d_{k,l}^{(c)} + \gamma\, d_{k,l}^{(b)}, \tag{7} $$
where $\alpha$, $\beta$, and $\gamma$ are the weights corresponding to $d_{k,l}^{(\mu)}$, $d_{k,l}^{(c)}$, and $d_{k,l}^{(b)}$, respectively; the three distances are computed as:
$$ d_{k,l}^{(\mu)} = \frac{N_k}{N_{\text{new}}} \left| \mu_k - \mu_{\text{new}} \right|^2 + \frac{N_l}{N_{\text{new}}} \left| \mu_l - \mu_{\text{new}} \right|^2, \quad d_{k,l}^{(c)} = \frac{N_k}{N_{\text{new}}} \left| c_k - c_{\text{new}} \right|^2 + \frac{N_l}{N_{\text{new}}} \left| c_l - c_{\text{new}} \right|^2, \quad d_{k,l}^{(b)} = 1 - \frac{L_{\text{weak}}}{L}, \tag{8} $$
where $L$ denotes the length of the common boundary between $R_k$ and $R_l$; $L_{\text{weak}}$ is the length of the weak part of the boundary and is determined by the gradient intensity on the boundary (Figure 9d), which is computed using the Sobel operator. Generally, the UAV images contain water regions on both sides of the barge to ensure full coverage of the barge sides; thus, a single strip of overlapping UAV images can cover a barge. In this case, two red superpixels on both sides of the UAV image in Figure 9c are selected as seeds to trigger superpixel merging. Then, the ROI is obtained by reversing the water regions using Equation (3).
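For illustration, the superpixel segmentation and seed-based merging described above can be prototyped with an off-the-shelf SLIC implementation. The sketch below assumes scikit-image and uses a simplified merge criterion based only on the color-mean distance (the region-center and connectivity terms of Equation (7) are omitted); superpixels touching the left and right image borders serve as water seeds.

```python
import numpy as np
from skimage.segmentation import slic
from skimage.transform import rescale

def water_mask(image, n_segments=400, color_tol=0.08, scale=0.25):
    """Rough water mask on a down-sampled RGB image with values in [0, 1].

    Simplified variant of the merging above: a superpixel is assigned to the
    water region when its mean color is close to that of the seed superpixels.
    """
    small = rescale(image, scale, channel_axis=-1, anti_aliasing=True)
    labels = slic(small, n_segments=n_segments, compactness=10, start_label=0)
    means = np.array([small[labels == k].mean(axis=0)
                      for k in range(labels.max() + 1)])
    # Seeds: superpixels touching the left and right image borders (water).
    seeds = set(labels[:, 0]) | set(labels[:, -1])
    seed_color = means[list(seeds)].mean(axis=0)
    water = {k for k in range(len(means))
             if np.linalg.norm(means[k] - seed_color) < color_tol}
    water |= seeds
    return np.isin(labels, list(water))   # True over water; the ROI is the inverse
```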
Feature extraction and matching are performed using a sublevel Harris operator (S-Harris) coupled with the SIFT algorithm, which are among the most commonly used methods in photogrammetry and computer vision [34,35]. To achieve evenly distributed matches, this study uses a coarse-to-fine matching strategy to find corresponding points between two stereo images under the constraint of the ROI instead of directly matching the images using SIFT. In the coarse-matching stage, ROI-based SIFT feature extraction and matching on the top of the UAV image pyramid are performed to compute the initial relative orientation of the two stereo images. In the fine-matching stage, the gridded S-Harris operator [28] and the SIFT descriptor (S-Harris-SIFT) are jointly used to find corresponding points along the epipolar lines obtained from the initial relative orientation. These two stages accelerate image matching and improve its accuracy, and in particular yield evenly distributed matches even in areas with weak texture.
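A compact sketch of ROI-constrained matching with OpenCV is given below. It covers only the coarse stage, i.e., SIFT restricted to the ROI mask, a ratio test, and RANSAC filtering with a fundamental matrix; the gridded S-Harris refinement along epipolar lines described above is omitted.

```python
import cv2
import numpy as np

def coarse_roi_matches(img1, img2, roi1, roi2, ratio=0.8):
    """SIFT matching restricted to the barge ROI, filtered by RANSAC.

    roi1, roi2: uint8 masks (255 inside the ROI, 0 over water).
    Returns the matched point coordinates in both images.
    """
    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(img1, roi1)
    k2, d2 = sift.detectAndCompute(img2, roi2)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    raw = matcher.knnMatch(d1, d2, k=2)
    good = [p[0] for p in raw
            if len(p) == 2 and p[0].distance < ratio * p[1].distance]
    pts1 = np.float32([k1[m.queryIdx].pt for m in good])
    pts2 = np.float32([k2[m.trainIdx].pt for m in good])
    # Epipolar check: keep only inliers of a RANSAC fundamental matrix.
    F, inliers = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 3.0, 0.99)
    keep = inliers.ravel().astype(bool)
    return pts1[keep], pts2[keep]
```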
Traditionally, aerial triangulation is assisted by initial exterior orientation parameters obtained from an airborne GPS and inertial measurement unit. In contrast to traditional photogrammetry, the exterior orientation parameters of each frame in the UAV video are not recorded and are therefore not available for aerial triangulation. Fortunately, SfM is well suited to recovering 3D geometry from the stereo images extracted from the video because it allows 3D reconstruction without the assistance of exterior orientation parameters. Self-calibrating bundle adjustment (using the sparse bundle adjustment software [36]) is also conducted to optimize the 3D points and the interior (Table 2) and exterior orientation parameters. In addition, the SGM algorithm is utilized to reconstruct the fine details of the stockpile surface by dense matching within the ROI.
A sequence of stereo images extracted from the UAV video is selected to evaluate the 3D reconstruction of the stockpile using SIFT, non-ROI-based coarse-to-fine matching (i.e., non-ROI-based matching), and ROI-based coarse-to-fine matching (i.e., ROI-based matching). The visualization results are exhibited in Figure 10, and the number of matches is summarized in Table 3. The results show that denser matches with higher quality can be obtained using ROI-based matching than using SIFT or non-ROI-based matching. The root mean square error (RMSE) statistics summarized in Table 4 are calculated on the basis of the seven checkpoints and their corresponding 3D points measured on the digital surface model (DSM). On this basis, 3D reconstruction using ROI-based matching performs better than that using SIFT or non-ROI-based matching in terms of the number and quality of matches and the RMSE values. As shown in Table 4, the horizontal (i.e., X and Y) and vertical (i.e., Z) RMSEs obtained by ROI-based matching are approximately 4 and 5 cm, respectively. These RMSE values are relatively satisfactory and sufficient for estimating the volume of the stockpiles carried on the barges. In addition, in terms of computational efficiency, ROI-based matching shows a significant improvement, consuming approximately one-fifth of the time required by SIFT. Furthermore, even though ROI-based matching adds the cost of ROI detection, it requires only slightly more time than non-ROI-based matching because of the reduced search scope for matches.
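For reference, the checkpoint RMSE values of Table 4 correspond to the usual per-axis formulation, sketched below; the checkpoint arrays are placeholders.

```python
import numpy as np

def checkpoint_rmse(measured_xyz, reference_xyz):
    """Per-axis RMSE between DSM-derived points and surveyed checkpoints."""
    diff = np.asarray(measured_xyz) - np.asarray(reference_xyz)
    return np.sqrt((diff ** 2).mean(axis=0))   # returns (RMSE_X, RMSE_Y, RMSE_Z)
```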

3.3. Custom-Built Framework

Generally, absolute orientation is an essential step that transforms photogrammetric point clouds from an arbitrary coordinate system into a ground coordinate system using several GCPs. However, the generic method of absolute orientation is only suitable for static ground objects and not for stockpiles in a dynamic environment. This step cannot simply be omitted here, because the stockpile-covered and stockpile-free surface models must be aligned for volume estimation. In comparison with topographic surveys, in which all photogrammetric point clouds need to be transformed into a unified geographic coordinate system, only the stockpile-covered and stockpile-free surface models need to be transformed into a local spatial framework for volume estimation in the present study. Therefore, a custom-built framework is established to transform the stockpile-covered and stockpile-free surface models into a local spatial coordinate system based on the geometry of the vessel, without requiring GCP measurement or a georeferenced coordinate system. The coordinate transformation can be represented as follows:
$$ \begin{bmatrix} x \\ y \\ z \end{bmatrix} = \lambda R \begin{bmatrix} x' \\ y' \\ z' \end{bmatrix} + T, \tag{9} $$
where $[x\; y\; z]^{T}$ and $[x'\; y'\; z']^{T}$ are the coordinates in the custom-built framework and the auxiliary coordinate system, respectively; $R$ and $T$ are the rotation and translation matrices, respectively; $\lambda$ denotes a scale factor.
As exhibited in Figure 11a, assuming that a horizontal plane $\alpha$ with the known length $l$ and width $w$ of the vessel can be extracted to establish a local planar coordinate system XOY, the four corners of this rectangle define four coordinates in the custom-built framework, i.e., $(0, 0, 0)$, $(w, 0, 0)$, $(w, l, 0)$, and $(0, l, 0)$, and the corresponding coordinates $(x_i^{(a)}, y_i^{(a)}, z_i^{(a)})$, $i \in \{1, 2, 3, 4\}$, in the arbitrary coordinate system can be measured on the stockpile-covered and stockpile-free surface models. The normal vector $n = (0, 0, 1)$ of this plane is taken as the Z axis, i.e., a custom-built coordinate system O-XYZ is established. The photogrammetric point clouds in the arbitrary coordinate system are transformed into an auxiliary coordinate system O-X′Y′Z′, which is established on the basis of the geometry and normal vector $n'$ of plane $\beta$, defined by the coordinates $(x_i^{(a)}, y_i^{(a)}, z_i^{(a)})$, $i \in \{1, 2, 3, 4\}$, in the arbitrary coordinate system. The coefficients $\{A, B, C, D\}$ of plane $\beta$ can be solved from the following system of four equations:
$$ A x_i^{(a)} + B y_i^{(a)} + C z_i^{(a)} + D = 0, \quad i \in \{1, 2, 3, 4\}. \tag{10} $$
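For illustration, the plane coefficients can be obtained from the four corner coordinates by homogeneous least squares; the following NumPy sketch is our formulation rather than the paper's explicit solution of the four equations.

```python
import numpy as np

def plane_coefficients(corners):
    """Least-squares plane A x + B y + C z + D = 0 through four (or more) points.

    corners: (n, 3) array of corner coordinates in the arbitrary frame.
    Returns (A, B, C, D) with the normal (A, B, C) normalized to unit length.
    """
    pts = np.asarray(corners, dtype=float)
    # Homogeneous system [x y z 1] @ [A B C D]^T = 0; the solution is the
    # right-singular vector associated with the smallest singular value.
    M = np.hstack([pts, np.ones((pts.shape[0], 1))])
    _, _, vt = np.linalg.svd(M)
    A, B, C, D = vt[-1]
    norm = np.linalg.norm([A, B, C])
    return A / norm, B / norm, C / norm, D / norm
```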
As shown in Figure 11b, the normal vector $n' = (A, B, C)$ of this plane is taken as the Z′ axis. The X′ axis is defined by the cross product of the normal vectors $n$ and $n'$, and the Y′ axis is then defined by the cross product of the Z′ and X′ axes. Subsequently, each 3D photogrammetric point in the arbitrary coordinate system can be transformed into the auxiliary coordinate system O-X′Y′Z′ according to its distances to the three coordinate planes of O-X′Y′Z′, i.e., $d_{yoz}$, $d_{xoz}$, and $d_{xoy}$. As shown in Figure 11c, the coordinate of a point $p$ in the auxiliary coordinate system O-X′Y′Z′ is $[d_{yoz}, d_{xoz}, d_{xoy}]$, which can be computed as:
$$ d_{yoz} = \frac{\left| A_{yoz} x^{(a)} + B_{yoz} y^{(a)} + C_{yoz} z^{(a)} + D_{yoz} \right|}{\sqrt{A_{yoz}^{2} + B_{yoz}^{2} + C_{yoz}^{2}}}, \quad d_{xoz} = \frac{\left| A_{xoz} x^{(a)} + B_{xoz} y^{(a)} + C_{xoz} z^{(a)} + D_{xoz} \right|}{\sqrt{A_{xoz}^{2} + B_{xoz}^{2} + C_{xoz}^{2}}}, \quad d_{xoy} = \frac{\left| A_{xoy} x^{(a)} + B_{xoy} y^{(a)} + C_{xoy} z^{(a)} + D_{xoy} \right|}{\sqrt{A_{xoy}^{2} + B_{xoy}^{2} + C_{xoy}^{2}}}, \tag{11} $$
where $[x^{(a)}, y^{(a)}, z^{(a)}]$ is the coordinate of the point $p$ in the arbitrary coordinate system; $\{A_{yoz}, B_{yoz}, C_{yoz}, D_{yoz}\}$, $\{A_{xoz}, B_{xoz}, C_{xoz}, D_{xoz}\}$, and $\{A_{xoy}, B_{xoy}, C_{xoy}, D_{xoy}\}$ are the coefficients of planes YOZ, XOZ, and XOY, respectively. Then, the scale factor of Equation (9) can be computed after centralization of the coordinates as follows:
$$ \lambda = \frac{1}{n} \sum_{i=1}^{n} \frac{\sqrt{\bar{x}_{i}^{2} + \bar{y}_{i}^{2} + \bar{z}_{i}^{2}}}{\sqrt{\bar{x}_{i}^{\prime 2} + \bar{y}_{i}^{\prime 2} + \bar{z}_{i}^{\prime 2}}}, \quad \bar{x}_{i} = x_{i} - \frac{1}{n}\sum_{i=1}^{n} x_{i}, \quad \bar{y}_{i} = y_{i} - \frac{1}{n}\sum_{i=1}^{n} y_{i}, \quad \bar{z}_{i} = z_{i} - \frac{1}{n}\sum_{i=1}^{n} z_{i}, \quad \bar{x}_{i}^{\prime} = x_{i}^{\prime} - \frac{1}{n}\sum_{i=1}^{n} x_{i}^{\prime}, \quad \bar{y}_{i}^{\prime} = y_{i}^{\prime} - \frac{1}{n}\sum_{i=1}^{n} y_{i}^{\prime}, \quad \bar{z}_{i}^{\prime} = z_{i}^{\prime} - \frac{1}{n}\sum_{i=1}^{n} z_{i}^{\prime}, \tag{12} $$
where $n$ is the number of known coordinates, which is set to 4 in terms of the number of corners on the barge in this study; $(\bar{x}_{i}, \bar{y}_{i}, \bar{z}_{i})$ and $(\bar{x}_{i}^{\prime}, \bar{y}_{i}^{\prime}, \bar{z}_{i}^{\prime})$ are the centralized coordinates in the coordinate systems O-XYZ and O-X′Y′Z′, respectively. Then, the transformation between the two sets of coordinates in Equation (9) can be expressed using the following equation, from which the rotation matrix $R$ can be computed:
$$ \begin{bmatrix} \bar{x}_{i} \\ \bar{y}_{i} \\ \bar{z}_{i} \end{bmatrix} = \lambda R \begin{bmatrix} \bar{x}_{i}^{\prime} \\ \bar{y}_{i}^{\prime} \\ \bar{z}_{i}^{\prime} \end{bmatrix}. \tag{13} $$
As shown in Figure 11b, the rotation matrix that consists of a tilt and plane rotation can be computed as:
$$ R = R(\theta) R(\omega) = \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\theta & -\sin\theta \\ 0 & \sin\theta & \cos\theta \end{bmatrix} \begin{bmatrix} \cos\omega & -\sin\omega & 0 \\ \sin\omega & \cos\omega & 0 \\ 0 & 0 & 1 \end{bmatrix} = \begin{bmatrix} \cos\omega & -\sin\omega & 0 \\ \cos\theta \sin\omega & \cos\theta \cos\omega & -\sin\theta \\ \sin\theta \sin\omega & \sin\theta \cos\omega & \cos\theta \end{bmatrix}, \tag{14} $$
where the tilt rotation angle θ can be computed as:
$$ \theta = \cos^{-1}\left( \frac{n \cdot n'}{\left| n \right| \left| n' \right|} \right). \tag{15} $$
Next, the plane rotation angle $\omega$ is computed using Equation (14) and the known rotation matrix $R$. Then, the translation matrix $T$ is computed using Equation (9). Therefore, the geometric structure of a barge can be utilized to establish a custom-built framework that defines a local reference between the stockpile-covered and stockpile-free surface models instead of requiring prerequisite GCP measurements. The advantage of the custom-built framework is that it allows GCP-free UAV photogrammetry to perform 3D reconstruction at the true physical scale regardless of whether the barge is moving.
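To make the procedure above concrete, the following NumPy sketch derives the scale, rotation, and translation of Equation (9) directly from the four deck corners and the known vessel width and length. Instead of composing the tilt and plane rotations of Equation (14) through the intermediate auxiliary coordinate system, it solves the equivalent similarity transform by orthogonal Procrustes alignment of the centralized coordinates (cf. Equations (12) and (13)); the corner ordering is an assumption.

```python
import numpy as np

def custom_frame_transform(corners_arbitrary, width, length):
    """Similarity transform from the arbitrary photogrammetric frame to the
    custom-built vessel frame, estimated from the four deck corners.

    corners_arbitrary: (4, 3) array of corners measured on the surface model,
    ordered to match the target corners (0,0,0), (w,0,0), (w,l,0), (0,l,0).
    Returns (scale, R, T) such that x_custom = scale * R @ x_arbitrary + T.
    """
    src = np.asarray(corners_arbitrary, dtype=float)
    dst = np.array([[0, 0, 0], [width, 0, 0],
                    [width, length, 0], [0, length, 0]], dtype=float)
    src_c, dst_c = src - src.mean(axis=0), dst - dst.mean(axis=0)
    # Scale: mean ratio of centralized point norms (cf. Equation (12)).
    scale = np.mean(np.linalg.norm(dst_c, axis=1) / np.linalg.norm(src_c, axis=1))
    # Rotation by orthogonal Procrustes (Kabsch), equivalent to composing the
    # tilt and plane rotations of Equation (14).
    U, _, Vt = np.linalg.svd(dst_c.T @ (scale * src_c))
    D = np.diag([1, 1, np.sign(np.linalg.det(U @ Vt))])  # keep a right-handed frame
    R = U @ D @ Vt
    T = dst.mean(axis=0) - scale * R @ src.mean(axis=0)
    return scale, R, T

def apply_transform(points, scale, R, T):
    """Map an (n, 3) point cloud into the custom-built frame."""
    return scale * (np.asarray(points) @ R.T) + T
```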

3.4. Surface Modeling and Volume Estimation

A photogrammetric point of a stockpile in the 3D custom-built space can be represented using an $O \times 3$ matrix $P(p_x, p_y, p_z)$, where $O$ is the number of points. Accordingly, all the 3D photogrammetric points can be converted into an M-by-N matrix filled with height values, where M and N are the numbers of 3D points in the directions of the length and width, respectively. Generally, the objective of surface modeling is to create a mathematical function or use an interpolation algorithm to approximate the true stockpile surface from the point clouds. The execution time of surface modeling often needs to meet real-time requirements; nonetheless, fitting the stockpile surface from such a dense point cloud is time consuming, as UAV photogrammetry generates sufficiently dense point clouds (e.g., 3.5 cm/pix in this study). Thus, a good trade-off between modeling time and accuracy is achieved by using the M-by-N matrix to represent the 3D surface of the stockpile instead of complex interpolation methods. In addition, noise filtering of the 3D surface is performed using the difference-of-Gaussian and moving surface functions [20]. Subsequently, the volume $V_{\text{stockpile}}$ of the stockpiles carried on the barges is calculated by multiplying the height difference between the stockpile-covered and stockpile-free surface models by the size of the grid, which is defined using the resolution of these models. The volume can then be computed as:
$$ V_{\text{stockpile}} = \sum_{i=1}^{M} \sum_{j=1}^{N} \left( H_{\text{covered}}^{(i,j)} - H_{\text{free}}^{(i,j)} \right) d_l\, d_w, \tag{16} $$
where $H_{\text{covered}}^{(i,j)}$ and $H_{\text{free}}^{(i,j)}$ are the height values of the stockpile-covered and stockpile-free surface models at grid cell $(i, j)$, respectively; $d_l$ and $d_w$ are the grid cell sizes, defined by the model resolution, along the length $l$ and width $w$ of the vessel, respectively.
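Given two height grids aligned in the custom-built frame, Equation (16) reduces to a cell-wise difference summed over the deck area. The sketch below rasterizes a point cloud into the M-by-N height matrix described above and then evaluates the volume; the use of per-cell maxima and the handling of empty cells are our choices, not details from the paper.

```python
import numpy as np

def height_grid(points, length, width, cell=0.035):
    """Rasterize an (n, 3) point cloud in the custom-built frame into an
    M-by-N height matrix over the vessel deck (cell size in metres).

    Each cell stores the maximum height of the points falling inside it;
    cells containing no points are set to NaN.
    """
    pts = np.asarray(points, dtype=float)
    m, n = int(np.ceil(length / cell)), int(np.ceil(width / cell))
    grid = np.full((m, n), -np.inf)
    rows = np.clip((pts[:, 1] / cell).astype(int), 0, m - 1)  # along the length (Y)
    cols = np.clip((pts[:, 0] / cell).astype(int), 0, n - 1)  # along the width (X)
    np.maximum.at(grid, (rows, cols), pts[:, 2])
    grid[np.isinf(grid)] = np.nan
    return grid

def stockpile_volume(h_covered, h_free, cell=0.035):
    """Equation (16): sum of cell-wise height differences times the cell area.

    Empty (NaN) cells are ignored and negative differences (noise) are
    clipped to zero; both choices are ours.
    """
    diff = h_covered - h_free
    diff = np.where(np.isnan(diff), 0.0, np.clip(diff, 0.0, None))
    return float(diff.sum() * cell * cell)
```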

4. Results and Discussion

In the experiments, stockpiles carried on eight barges (four moving and four stationary during UAV video acquisition) are used to evaluate the proposed method, which is compared with traditional manual measurement. In addition, point clouds collected from a different source, i.e., laser scanning, are used to examine how close the volumes obtained by GCP-free UAV photogrammetry are to an independent measurement. Traditionally, the volume of a regularly shaped stockpile estimated using the trapezoidal method is considered accurate and acceptable.
The point clouds of the stockpile-covered and stockpile-free surfaces of one of the eight barges are shown in Figure 12 and Figure 13. The visualization results show that the UAV photogrammetric point clouds are close to the LiDAR point clouds obtained by laser scanning, and the point clouds from the two sources can represent the finely detailed stockpile-covered and stockpile-free surfaces. As shown in Figure 12, laser scanning can acquire all the interior side structures of the stockpile-free surface (i.e., the vessel), although acquiring these structures is not necessarily required for the proposed volume estimation, which depends on the height values. The cross section along the y-axis within the point clouds of the stockpile-free surface obtained by laser scanning is closer to the true stockpile-free surface structure than that obtained by UAV photogrammetry; nevertheless, the difference is not evident. Where the stockpile is stacked beyond the scanning angle of the laser, which prevents complete 3D points from being collected at the top of the stockpile-covered surface, the missing areas of the point clouds are filled using B-spline interpolation, as shown in Figure 13a,e [4]. In addition, the cross sections along the x and y axes in Figure 13 also exhibit similar stockpile-covered surfaces obtained by laser scanning and UAV photogrammetry. In the experiments, the UAV-derived and LiDAR point clouds of four of the eight barges (i.e., the nonmoving barges) are absolutely oriented using the five measured GCPs. The RMSE is calculated on the basis of the seven GCPs and their corresponding 3D points measured on the stockpile-covered and stockpile-free surface models. The error statistics are summarized in Table 5. The X and Y RMSE values, which differ slightly between UAV photogrammetry and laser scanning, are approximately 5 cm. The vertical RMSE values of the two methods are less than 8 cm, although laser scanning demonstrates better vertical accuracy than UAV photogrammetry. The RMSE values of the UAV-based method remain relatively satisfactory and sufficient for estimating the volume of stockpiles carried on barges.
Table 6 shows the volumes measured by traditional manual measurement, laser scanning, and UAV photogrammetry. Correspondingly, volume estimation is also performed using UAV photogrammetry with GCPs (B1−B4) and without (B5−B8). The deviation between the volumes estimated by laser scanning and UAV photogrammetry and the volume calculated by traditional manual measurement is relatively small, approximately ±2%, which can be considered acceptable because volume is commonly required to be within ±3% of the total amount [11,16]. Inevitably, errors occur in volume calculation using the traditional method, and providing a completely correct volume for comparison is difficult. Benefiting from the dense point clouds obtained from laser scanning and UAV photogrammetry, the stockpile-covered and stockpile-free surfaces can be accurately detailed, thereby providing results similar to those of the traditional method. Thus, the results suggest that laser scanning (in a non-dynamic environment) and UAV photogrammetry can be used as effective alternatives to traditional manual measurement for the volume estimation of stockpiles carried on barges.
Although all three methods may accurately calculate the volume of stockpiles, the proposed approach using GCP-free UAV photogrammetry is considered the most suitable for a dynamic environment for four reasons. First, the traditional measurement calculates volume by manually shaping the stockpiles into regular shapes (e.g., trapezoids). This method is costly and time consuming. Furthermore, a perfectly regular stockpile shape is difficult to obtain, the local details of the stockpiles cannot be captured, and the quality of stockpile shaping is difficult to control. By contrast, laser scanning and UAV photogrammetry can provide dense point clouds that accurately reconstruct the stockpile-covered and stockpile-free surfaces. Second, the surveyor needs to walk around the vessel as smoothly as possible during laser scanning to avoid excessive swaying of the laser sensor; otherwise, the point clouds cannot be fitted properly, especially at vessel corners and stairs. In addition, when a stockpile is stacked beyond the scanning angle, the surveyor needs to climb the stockpile to scan the surface, and the instability of walking on top of the stockpile may cause sensor vibration and generate invalid data; some invalid data were indeed generated because of sensor shaking during the experiments. Moreover, the point clouds of the stockpile cannot be obtained when the barge is moving or shaking, whereas UAV photogrammetry can still perform data acquisition regardless of whether the barge is moving. Third, the traditional method and laser scanning require boarding for survey operations and may expose surveyors to safety risks during on-site operations; by contrast, UAV photogrammetry can be conducted from a distance without boarding the barge and without touching the stockpile. In this study, GCP-free UAV photogrammetry using a custom-built framework does not need GCP field measurement; only the physical geometric structure of the vessel is needed to establish a local reference between the stockpile-covered and stockpile-free surface models. Fourth, GCP-free photogrammetry, which requires an average of only 20 min per barge for data acquisition and processing, is more efficient than the traditional method and laser scanning, which require 120 and 40 min, respectively. Overall, the results obtained by GCP-free UAV photogrammetry are approximately equal to those obtained by laser scanning, and the proposed approach has strong applicability to barges in a dynamic environment. The overall results suggest that GCP-free UAV photogrammetry can be used as an effective alternative to manual measurement for rapid volume estimation of stockpiles carried on barges.

5. Conclusions

The present study aimed to estimate the volume of stockpiles carried on eight barges using low-cost and high-density 3D UAV-based photogrammetric point clouds. These point clouds were oriented using the geometric structure of the vessel instead of prerequisite GCPs. A UAV video with a ground sample distance of approximately 2.3 cm/pix was captured to extract stereo images, ensuring sufficient overlap even with large fluctuations of the stockpile surface. High-density 3D point clouds (with a resolution of 3.5 cm/pix in this study) were generated to reconstruct the stockpile-covered and stockpile-free surfaces. The horizontal and vertical RMSE values of the stockpile-covered and stockpile-free surface models were approximately 5 cm and less than 8 cm, respectively. A local spatial coordinate system based on the geometric structure of the vessel was also established to provide a reference between the stockpile-covered and stockpile-free surface models. A relatively small deviation of approximately ±2% between the volume estimated by UAV photogrammetry and the volume calculated by traditional manual measurement was obtained. This deviation can be considered acceptable with respect to the applicable accuracy requirements and satisfies the requirement of volume estimation for the stockpiles carried on the eight barges. Therefore, the overall results suggest that our method can be used as an effective alternative to the traditional method for estimating the volume of stockpiles carried on barges.
The custom-built framework established in this study assumes a horizontal plane on the vessel of the barge and is therefore only suitable for barges with a rectangular vessel. In some cases, the geometric structure of the vessel may be deformed rather than rigidly planar or rectangular, in which case systematic errors may arise and reduce the accuracy of volume estimation. In future studies, we will attempt to construct a 3D custom-built framework to minimize the systematic errors caused by the deformation of the geometric structure of a barge.

Author Contributions

H.H. proposed the framework of estimating volume of stockpile and wrote the source code and the paper. T.C. and S.H. designed the experiments and revised the paper. H.Z. generated the datasets and performed the experiments.

Funding

This study was financially supported by the National Natural Science Foundation of China (41861062 and 41401526), the Natural Science Foundation of Jiangxi Province of China (20171BAB213025 and 20181BAB203022), and the Open Foundation of National Field Observation and Research Station of Landslides in the Three Gorges Reservoir Area of Yangtze River, China Three Gorges University (2018KTL14).

Acknowledgments

The authors thank Peng Lu, Shuangfei Huang, and Zishen Wang for providing datasets. The authors also want to thank the anonymous reviewers for their constructive comments that significantly improved our manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Lu, T.-F.; Myo, M.T.R. Optimal stockpile voxel identification based on reclaimer minimum movement for target grade. Int. J. Miner. Process. 2011, 98, 74–81. [Google Scholar] [CrossRef]
  2. Zhao, S.; Lu, T.-F.; Koch, B.; Hurdsman, A. Stockpile modelling using mobile laser scanner for quality grade control in stockpile management. In Proceedings of the 12th International Conference on Control Automation Robotics & Vision (ICARCV), Guangzhou, China, 5–7 December 2012; pp. 811–816. [Google Scholar]
  3. Fawzy, H.E.-D. The accuracy of determining the volumes using close range photogrammetry. IOSR J. Mech. Civil Eng. 2015, 12, 10–15. [Google Scholar]
  4. Zhao, S. 3D real-time stockpile mapping and modelling with accurate quality calculation using voxels. Ph.D. Thesis, University of Adelaide, Adelaide, Australia, 2016. [Google Scholar]
  5. Bajtala, M.; Brunčák, P.; Kubinec, J.; Lipták, M.; Sokol, Š. The problem of determining the volumes using modern surveying techniques. J. Interdiscip. Res. 2012, 147–150. [Google Scholar]
  6. Uysal, M.; Toprak, A.S.; Polat, N. DEM generation with UAV photogrammetry and accuracy analysis in Sahitler hill. Measurement 2015, 73, 539–543. [Google Scholar] [CrossRef]
  7. Hamzah, H.B.; Said, S.M. Measuring volume of stockpile using imaging station. Geoinf. Sci. J. 2011, 11, 15–32. [Google Scholar]
  8. Siriba, D.N.; Matara, S.M.; Musyoka, S.M. Improvement of volume estimation of stockpile of earthworks using a concave hull-footprint. Int. Sci. J. Micro Macro Mezzo Geoinf. 2015, 5, 11–25. [Google Scholar]
  9. Yilmaz, H.M. Close range photogrammetry in volume computing. Exp. Tech. 2010, 34, 48–54. [Google Scholar] [CrossRef]
  10. Yakar, M.; Yilmaz, H.M.; Mutluoglu, O. Close range photogrammetry and robotic total station in volume calculation. Int. J. Phys. Sci. 2010, 5, 86–96. [Google Scholar]
  11. Abbaszadeh, S.; Rastiveis, H. A comparison of close-range photogrammetry using a non-professional camera with field surveying for volume estimation. In Proceedings of the International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Volume XLII-4/W4, Tehran, Iran, 7–10 October 2017; pp. 1–4. [Google Scholar]
  12. Mancini, F.; Castagnetti, C.; Rossi, P.; Dubbini, M.; Fazio, N.L.; Perrotti, M.; Lollino, P. An integrated procedure to assess the stability of coastal rocky cliffs: From UAV close-range photogrammetry to geomechanical finite element modeling. Remote Sens. 2017, 9, 1235. [Google Scholar] [CrossRef]
  13. Sayab, M.; Aerden, D.; Paananen, M.; Saarela, P. Virtual structural analysis of Jokisivu open pit using ‘structure-from-motion’ unmanned aerial vehicle (UAV) photogrammetry: Implications for structurally-controlled gold deposits in southwest Finland. Remote Sens. 2018, 10, 1296. [Google Scholar] [CrossRef]
  14. Sluijs, J.V.; Kokelj, S.V.; Fraser, R.H.; Tunnicliffe, J.; Lacelle, D. Permafrost terrain dynamics and infrastructure impacts revealed by UAV photogrammetry and thermal imaging. Remote Sens. 2018, 10, 1734. [Google Scholar] [CrossRef]
  15. Shahbazi, M.; Sohn, G.; Théau, J.; Menard, P. Development and evaluation of a UAV-photogrammetry system for precise 3D environmental modeling. Sensors 2015, 15, 27493–27524. [Google Scholar] [CrossRef]
  16. Raeva, P.L.; Filipova, S.L.; Filipov, D.G. Volume computation of a stockpile – a study case comparing GPS and UAV measurements in an open pit quarry. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2016, XLI-B1, 999–1004. [Google Scholar] [CrossRef]
  17. Aerial Survey Techniques Applied for Stockpile Progress Measurements. Available online: https://www.iadc-dredging.com/ul/cms/terraetaqua/document/4/6/7/467/467/1/article-aerial-survey-techniques-applied-for-stockpile-progress-measurements-terra-et-aqua-141-3.pdf (accessed on 1 June 2018).
  18. Fast and Accurate Volume Measurement, Drones in Mining. Available online: https://www.pix4d.com/blog/drone-mining-stockpile-volume-pix4dmapper (accessed on 1 June 2018).
  19. Li, H.; Chen, L.; Wang, Z.; Yu, Z. Mapping of river terraces with low-cost UAS based structure-from-motion photogrammetry in a complex terrain setting. Remote Sens. 2019, 11, 464. [Google Scholar] [CrossRef]
  20. He, H.; Yan, Y.; Chen, T.; Cheng, P. Tree height estimation of forest plantation in mountainous terrain from bare-earth points using a DoG-coupled radial basis function neural network. Remote Sens. 2019, 11, 1271. [Google Scholar] [CrossRef]
  21. Starek, M.J.; Mitasova, H.; Wegmann, K.W.; Lyons, N. Space-time cube representation of stream bank evolution mapped by terrestrial laser scanning. IEEE Geosci. Remote Sens. Lett. 2013, 10, 1369–1373. [Google Scholar] [CrossRef]
  22. Fan, L.; Atkinson, P.M. Accuracy of digital elevation models derived from terrestrial laser scanning data. IEEE Geosci. Remote Sens. Lett. 2015, 12, 1923–1927. [Google Scholar] [CrossRef]
  23. Oberholzer, A. Mobile Scanning for Stockpile Volume Reporting. Available online: http://www.ee.co.za/wp-content/uploads/2014/02/mobile-scanning-for-stockpille.pdf (accessed on 1 June 2018).
  24. Xu, Z.; Xu, E.; Wu, L.; Liu, S.; Mao, Y. Registration of terrestrial laser scanning surveys using terrain-invariant regions for measuring exploitative volumes over open-pit mines. Remote Sens. 2019, 11, 606. [Google Scholar] [CrossRef]
  25. Agisoft PhotoScan User Manual. 2019. Available online: https://www.agisoft.com/pdf/photoscan-pro_1_4_en.pdf (accessed on 25 July 2019).
  26. Volume Measurements with Agisoft PhotoScan. 2019. Available online: https://www.agisoft.com/pdf/PS_1.1_Tutorial%20(IL)%20-%20Volume%20measurements.pdf (accessed on 25 July 2019).
  27. Lowe, D.G. Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vis. 2004, 60, 91–110. [Google Scholar] [CrossRef]
  28. He, H.; Chen, M.; Chen, T.; Li, D. Matching of remote sensing images with complex background variations via Siamese convolutional neural network. Remote Sens. 2018, 10, 355. [Google Scholar] [CrossRef]
  29. Hirschmüller, H. Stereo processing by semiglobal matching and mutual information. IEEE Trans. Pattern Anal. Mach. Intell. 2008, 30, 328–341. [Google Scholar] [CrossRef]
  30. HDL-32E. User’s Manual and Programing Guide. 2018. Available online: https://velodynelidar.com/lidar/products/manual/63-9113%20HDL-32E%20manual_Rev%20G.pdf (accessed on 1 June 2018).
  31. Mavic Pro. User Manual. 2017. Available online: https://dl.djicdn.com/downloads/mavic/20171219/Mavic%20Pro%20User%20Manual%20V2.0.pdf (accessed on 1 June 2018).
  32. Puliti, S.; Ørka, H.O.; Gobakken, T.; Næsset, E. Inventory of small forest areas using an unmanned aerial system. Remote Sens. 2015, 7, 9632–9654. [Google Scholar] [CrossRef]
  33. Achanta, R.; Shaji, A.; Smith, K.; Lucchi, A.; Fua, P.; Süsstrunk, S. SLIC superpixels. Tech. Rep. 2010. [Google Scholar]
  34. Azad, P.; Asfour, T.; Dillmann, R. Combining Harris interest points and the SIFT descriptor for fast scale-invariant object recognition. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, St. Louis, MO, USA, 10–15 October 2009; pp. 4275–4280. [Google Scholar]
  35. Sun, Y.; Zhao, L.; Huang, S.; Yan, L.; Dissanayake, G. L2-SIFT: SIFT feature extraction and matching for large images in large-scale aerial photogrammetry. ISPRS J. Photogramm. 2014, 91, 1–16. [Google Scholar] [CrossRef]
  36. sba: A Generic Sparse Bundle Adjustment C/C++ Package. 2018. Available online: http://users.ics.forth.gr/~lourakis/sba/ (accessed on 5 June 2018).
Figure 1. Test site downstream of the Zhuhai Bridge, Southern China: (a) the study area that includes several barges; (b) the geospatial location described by Google Earth; (c) on-site several barges; (d) a tidal change plot in the test site.
Figure 2. Volume measurement using the traditional method: (a) stockpile carried on a barge; (b) manual operation for reshaping the surface of the stockpile; (c) stockpile with a trapezoidal surface through the reshaping of (b); (d) volume measurement using a tool, e.g., measuring tape.
Figure 3. Stockpile with a regular trapezoidal shape: (a) model of a stockpile with a regular trapezoidal shape above the flat surface of the vessel; (b) a small stockpile with a regular trapezoidal shape on a barge.
Figure 4. Velodyne LiDAR and laser scanning operation: (a) Velodyne HDL-32E LiDAR sensor; (b) laser scanning operation using the Velodyne HDL-32E LiDAR sensor.
Figure 5. Two ground control points (GCPs) on a barge: (a) a GCP numbered 5; (b) a GCP numbered 6.
Figure 6. Eight views of the 2D chessboard shown as examples. (a–h) are the eight views of the 2D chessboard. Red circles with center marks denote the reference corners.
Figure 7. Workflow of the volume estimation of stockpiles carried on barges using GCP-free unmanned aerial vehicle (UAV) photogrammetry.
Figure 8. Validation of overlap for stereo image extraction.
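To illustrate the overlap validation sketched in Figure 8, a minimal example is given below: frames are sampled from the UAV video at a fixed interval, and a candidate frame is kept only if its estimated overlap with the last kept frame falls within a target range. The sampling step, the overlap bounds, and the use of ORB features with a match-count proxy for overlap are illustrative assumptions rather than the procedure adopted in this study.

```python
# Sketch: extract stereo frames from a UAV video, keeping a frame only when its
# estimated overlap with the last kept frame falls inside a target range.
# The sampling step, overlap bounds, and the match-ratio proxy for overlap are
# illustrative assumptions, not the procedure used in the paper.
import cv2

def extract_frames(video_path, step=15, min_overlap=0.6, max_overlap=0.9):
    cap = cv2.VideoCapture(video_path)
    orb = cv2.ORB_create(2000)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    kept, last_kp, last_desc = [], None, None
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % step == 0:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            kp, desc = orb.detectAndCompute(gray, None)
            if last_desc is None:
                kept.append(frame)
                last_kp, last_desc = kp, desc
            elif desc is not None and len(kp) > 0:
                matches = matcher.match(last_desc, desc)
                overlap = len(matches) / max(len(last_kp), 1)  # crude overlap proxy
                if min_overlap <= overlap <= max_overlap:
                    kept.append(frame)
                    last_kp, last_desc = kp, desc
        idx += 1
    cap.release()
    return kept
```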
Figure 9. Region of interest (ROI) extraction at the top level of the image pyramid by jointly using the simple linear iterative clustering (SLIC) and Sobel algorithms: (a) UAV image pyramid; (b) the down-sampled image; (c) the result of SLIC segmentation, in which two red superpixels are selected as seeds; (d) the gradient information detected using the Sobel algorithm; (e) the ROI, where blue and yellow denote the regions of water and barge, respectively.
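A minimal sketch of this ROI extraction step is given below, assuming scikit-image for SLIC and the Sobel operator; the superpixel count, compactness, and gradient threshold are assumed values, and the seed-based labelling of Figure 9c is simplified to a per-superpixel mean-gradient test (low-texture superpixels are treated as water, the rest as barge).

```python
# Sketch: ROI extraction on a down-sampled UAV frame using SLIC superpixels and
# a Sobel gradient map. Superpixel count, compactness, and the gradient threshold
# are assumed values; the paper's seed-based labelling is simplified to a
# per-superpixel mean-gradient test (low texture -> water, otherwise -> barge).
import numpy as np
from skimage import io, color, filters, transform
from skimage.segmentation import slic

def extract_roi(image_path, scale=0.25, n_segments=300, grad_thresh=0.02):
    img = io.imread(image_path)
    small = transform.rescale(img, scale, channel_axis=-1, anti_aliasing=True)
    gray = color.rgb2gray(small)
    labels = slic(small, n_segments=n_segments, compactness=10, start_label=0)
    grad = filters.sobel(gray)
    roi = np.zeros(labels.shape, dtype=bool)           # True -> barge/stockpile
    for lab in np.unique(labels):
        mask = labels == lab
        if grad[mask].mean() > grad_thresh:             # textured superpixel
            roi[mask] = True
    return roi                                          # water is the complement
```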
Figure 10. Results of sparse and dense matching in 3D space: (a) the overlapping images used; (b), (c), and (d) are the sparse point clouds obtained using SIFT, non-ROI-based matching, and ROI-based matching, respectively; (e), (f), and (g) are the dense point clouds corresponding to the results of (b), (c), and (d); the subregions marked in red contain low-quality 3D points.
Figure 11. Custom-built framework: (a) a custom-built framework that is established using a horizontal plane α with known length l and width w; (b) the spatial geometric rotation between the custom-built coordinate system O-XYZ and an auxiliary coordinate system O-X′Y′Z′; (c) the transformation of a photogrammetric point p′ from the arbitrary coordinate system to the auxiliary coordinate system.
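The orientation into the custom-built framework of Figure 11 can be sketched as follows, assuming that points on the vessel deck and two deck corners separated by the known vessel length have already been identified: a plane is fitted to the deck points, the point cloud is rotated so that this plane becomes Z = 0, and the model is scaled so that the measured corner distance matches the known length. This is a simplified illustration of the idea, not the full transformation used in the paper.

```python
# Sketch: orient a photogrammetric point cloud into a custom-built frame using
# (i) a plane fitted to points on the vessel deck and (ii) a known deck length.
# Inputs (deck_pts, corner_a, corner_b, known_length) are assumed to be available
# from manual picking; this is a simplified illustration, not the full method.
import numpy as np

def align_to_deck(points, deck_pts, corner_a, corner_b, known_length):
    centroid = deck_pts.mean(axis=0)
    # Plane normal = right singular vector with the smallest singular value.
    _, _, vt = np.linalg.svd(deck_pts - centroid)
    normal = vt[-1]
    if normal[2] < 0:                        # make the normal point "up"
        normal = -normal
    # Rotation that maps the deck normal onto the Z axis.
    z = np.array([0.0, 0.0, 1.0])
    v = np.cross(normal, z)
    c, s = np.dot(normal, z), np.linalg.norm(v)
    if s < 1e-12:
        rot = np.eye(3)
    else:
        vx = np.array([[0, -v[2], v[1]], [v[2], 0, -v[0]], [-v[1], v[0], 0]])
        rot = np.eye(3) + vx + vx @ vx * ((1 - c) / s**2)
    # Scale from a known distance on the deck (e.g., the vessel length).
    scale = known_length / np.linalg.norm(corner_a - corner_b)
    return scale * (points - centroid) @ rot.T
```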
Figure 12. Comparisons of point clouds generated from laser scanning and UAV photogrammetry for stockpile-free modeling: (a) and (b) are the point clouds generated from laser scanning and UAV photogrammetry, respectively; (c) and (d) are the respective cross sections of (a) and (b) along the x-axis; (e) and (f) are the respective cross sections of (a) and (b) along the y-axis.
Figure 13. Comparisons of point clouds generated from laser scanning and UAV photogrammetry for stockpile-covered modeling: (a) and (b) are the point clouds generated from laser scanning and UAV photogrammetry, respectively; (c) and (d) are the respective cross sections of (a) and (b) along the x-axis; (e) and (f) are the respective cross sections of (a) and (b) along the y-axis.
Table 1. Technical parameters of the Velodyne HDL-32E LiDAR sensor.
Parameters                                 Value
Maximum measuring distance                 100 m
Channels                                   32
Accuracy                                   ±2 cm
Field of view angle (vertical)             +10.67° to −30.67°
Angular resolution (vertical)              1.33°
Field of view angle (horizontal)           360°
Angular resolution (horizontal/azimuth)    0.1° to 0.4°
Rotation rate                              5 to 20 Hz
Output                                     0.7 million points per second
Table 2. Parameters of the sensor carried on the DJI Mavic Pro. fx and fy are the focal lengths expressed in pixel units; (cx, cy) is a principal point that is usually at the image center; k1 and k2 are the radial distortion coefficients; p1 and p2 are the tangential distortion coefficients.
Parameters    Value (pixel)
Frame size    1920 × 1080
fx            1537.42
fy            1536.61
cx            917.71
cy            542.67
k1            0.17256914
k2            −0.82566273
p1            0.00007309
p2            −0.00528847
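For readers who wish to reuse the calibration values in Table 2, the following sketch shows how they map onto the standard pinhole-plus-distortion model; OpenCV is used purely for illustration (the paper does not state the implementation), and the file names are placeholders.

```python
# Sketch: assemble the intrinsic matrix and distortion vector from Table 2 and
# undistort a video frame. OpenCV is used here only for illustration; the paper
# does not state which implementation was used, and file names are placeholders.
import cv2
import numpy as np

fx, fy = 1537.42, 1536.61
cx, cy = 917.71, 542.67
k1, k2 = 0.17256914, -0.82566273
p1, p2 = 0.00007309, -0.00528847

K = np.array([[fx, 0.0, cx],
              [0.0, fy, cy],
              [0.0, 0.0, 1.0]])
dist = np.array([k1, k2, p1, p2])       # OpenCV order: k1, k2, p1, p2[, k3]

frame = cv2.imread("frame_0001.png")    # a 1920 x 1080 frame extracted from the video
undistorted = cv2.undistort(frame, K, dist)
cv2.imwrite("frame_0001_undistorted.png", undistorted)
```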
Table 3. Number of matches obtained using scale-invariant feature transform (SIFT), S-Harris-SIFT, and ROI-based S-Harris-SIFT, respectively.
Method                    Sparse Matching    Dense Matching
SIFT                      6330               237,411
non-ROI-based matching    8561               250,012
ROI-based matching        10,876             298,338
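To make the comparison in Table 3 concrete, the sketch below shows how a binary ROI mask can restrict feature detection to the barge region so that matches are not wasted on water. The SIFT implementation, ratio-test threshold, and mask handling are illustrative assumptions and do not reproduce the S-Harris-SIFT variant evaluated here.

```python
# Sketch: ratio-test feature matching restricted to an ROI mask (barge region).
# cv2.SIFT_create is used for illustration; the paper's S-Harris-SIFT detector
# and its exact parameters are not reproduced here.
import cv2
import numpy as np

def match_in_roi(img1, img2, roi1, roi2, ratio=0.75):
    sift = cv2.SIFT_create()
    mask1 = roi1.astype(np.uint8) * 255             # non-zero pixels = ROI
    mask2 = roi2.astype(np.uint8) * 255
    kp1, des1 = sift.detectAndCompute(img1, mask1)
    kp2, des2 = sift.detectAndCompute(img2, mask2)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = []
    for m, n in matcher.knnMatch(des1, des2, k=2):   # Lowe's ratio test
        if m.distance < ratio * n.distance:
            good.append(m)
    return kp1, kp2, good
```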
Table 4. Error statistics of checkpoints measured using DSMs. These errors are derived from differencing with checkpoints that serve as a reference. DSMs: digital surface models; RMSE: root mean square error.
Method                    RMSE X (cm)    RMSE Y (cm)    RMSE Z (cm)    Total RMSE (cm)
SIFT                      9.10           8.05           13.78          10.71
non-ROI-based matching    4.96           5.34           7.02           5.66
ROI-based matching        4.08           4.14           5.09           4.29
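The per-axis RMSE values in Tables 4 and 5 follow the usual definition, i.e., the root mean square of the differences between the model-derived and reference coordinates; a minimal sketch is given below. How the three axes are combined into the "Total RMSE" column is not spelled out here, so the quadratic mean used in the sketch is only an assumption.

```python
# Sketch: per-axis RMSE of checkpoint residuals (measured minus reference), in cm.
# How the per-axis values are combined into the "Total RMSE" column is not spelled
# out in the table; the quadratic mean below is purely an assumption.
import numpy as np

def rmse_per_axis(measured_xyz, reference_xyz):
    diff = np.asarray(measured_xyz) - np.asarray(reference_xyz)   # shape (n, 3)
    rmse = np.sqrt(np.mean(diff**2, axis=0))                      # RMSE X, Y, Z
    total = np.sqrt(np.mean(rmse**2))                             # assumed combination
    return rmse, total
```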
Table 5. Error statistics of GCPs measured using the stockpile-covered and stockpile-free surface models in a non-dynamic environment. These errors are derived from differencing with GCPs that serve as a reference. RMSE: root mean square error. B1–B4 denote the individual barges.
No.    Data Source    RMSE X (cm)    RMSE Y (cm)    RMSE Z (cm)    Total RMSE (cm)
B1     LiDAR          5.67           5.22           4.29           5.09
       UAV            4.32           5.83           7.72           5.81
B2     LiDAR          4.96           5.04           4.19           4.55
       UAV            5.52           5.05           7.33           5.83
B3     LiDAR          5.21           5.13           4.43           4.86
       UAV            4.53           4.39           6.13           4.89
B4     LiDAR          4.96           5.35           4.23           4.85
       UAV            4.08           4.14           5.09           4.29
Table 6. Comparisons of volume calculation using traditional manual measurement (Manual), laser scanning (LiDAR), and UAV photogrammetry (UAV). B1–B8 denote the individual barges; DE and NDE denote dynamic and non-dynamic environments, respectively; Res. L.: the residual between LiDAR and Manual; Res. U.: the residual between UAV and Manual; Dev. L.: the percentage deviation calculated by dividing Res. L. by Manual; Dev. U.: the percentage deviation calculated by dividing Res. U. by Manual; NULL: invalid data in the dynamic environment.
No.    Time          Case    Manual (m³)    LiDAR (m³)    UAV (m³)    Res. L. (m³)    Res. U. (m³)    Dev. L. (%)    Dev. U. (%)
B1     08/06/2018    NDE     2356.17        2315.41       2308.34     −40.76          −47.83          −1.73          −2.03
B2     08/06/2018    NDE     2173.44        2208.87       2200.61     35.43           27.17           1.63           1.25
B3     10/06/2018    NDE     2832.83        2779.57       2783.54     −53.26          −49.29          −1.88          −1.74
B4     10/06/2018    NDE     2074.23        2047.68       2053.90     −26.55          −20.33          −1.28          −0.98
B5     11/06/2018    DE      2662.10        NULL          2718.27     NULL            51.17           NULL           1.92
B6     11/06/2018    DE      2783.73        NULL          2829.10     NULL            45.37           NULL           1.63
B7     12/06/2018    DE      2431.41        NULL          2386.19     NULL            −45.22          NULL           −1.86
B8     12/06/2018    DE      2568.35        NULL          2531.62     NULL            −36.73          NULL           −1.43
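As a check on how the last four columns of Table 6 are derived, the residual is the estimated volume minus the manual volume, and the deviation is that residual expressed as a percentage of the manual volume; for barge B2, 2200.61 − 2173.44 = 27.17 m³ and 27.17 / 2173.44 ≈ 1.25%. A minimal sketch follows.

```python
# Sketch: residual and percentage deviation of an estimated volume against the
# manual measurement, as reported in Table 6.
def residual_and_deviation(estimated_m3, manual_m3):
    residual = estimated_m3 - manual_m3
    deviation_pct = 100.0 * residual / manual_m3
    return residual, deviation_pct

# Example with barge B2 (UAV vs. Manual): -> (27.17, ~1.25)
print(residual_and_deviation(2200.61, 2173.44))
```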
