
This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution license.

Roads play an indispensable role as part of the infrastructure of society. In recent years, society has witnessed the rapid development of laser mobile mapping systems (LMMS), which acquire dense and accurate point cloud data at high measurement rates. This paper presents a way to automatically estimate the required excavation volume when widening a road from point cloud data acquired by an LMMS. Firstly, the input point cloud is downsampled to a uniform grid and outliers are removed. For each of the resulting grid points, both on and off the road, the local surface normal and 2D slope are estimated. Normals and slopes are consecutively used to separate road from off-road points, which enables the estimation of the road centerline and road boundaries. In the final step, the points on the left and right side of the road are divided into slices of 1 m width, up to a distance of 4 m perpendicular to the roadside. Computing and summing the volume of each slice yields the required excavation volume for widening the road on either the left or the right side. The procedure, including a quality analysis, is demonstrated on a stretch of mountain road approximately 132 m long, as sampled by a Lynx LMMS. The results in this particular case show that the required excavation volume on the left side is 8% more than that on the right side. In addition, the error in the results is assessed in two ways: first, by adding up estimated local errors, and second, by comparing results from two different datasets sampling the same piece of road, both acquired by the Lynx LMMS. Both approaches indicate that the error in the estimated volume is below 4%. The proposed method is relatively easy to implement and runs smoothly on a desktop PC.
The whole workflow of the LMMS data acquisition and subsequent volume computation can be completed in one or two days and provides road engineers with much more detail than traditional single-point surveying methods such as Total Station or GPS profiling. A drawback is that an LMMS system can only sample what is within the view of the system from the road.

There are various modern surveying and mapping techniques that make it possible to quickly obtain 3D surface geometry data. Notably, in recent years, there has been rapid development in Light Detection and Ranging (LiDAR) systems, for both airborne and mobile applications. In this study, a car-based system is used, which is generally referred to as a Laser Mobile Mapping System (LMMS). Such a system typically integrates one or more LiDAR profilers with a Global Navigation Satellite System (GNSS) and an odometer for positioning, and an Inertial Measurement Unit (IMU) for attitude control [

The information discussed above plays an important role in the ongoing maintenance of the road, especially for mountain roads where there is a risk for rock and stone fall. Additionally, the geometric features observed on a mountainous road could be helpful for monitoring the flow of rainwater in the case of heavy rain and may be used to assist natural disaster prevention [

This paper presents an approach for automatically estimating the roadside material volume to be excavated for road widening. Firstly, LMMS point cloud data in a mountainous area is downsampled and the outliers and noisy points are removed. Based on these data, normal vectors and 2D slopes are estimated at every point. Then, an automatic iterative floating window approach, taking advantage of point height, normal vector and slope, is used to filter and segment the road points. After that, a local neighbourhood feature is defined based on the vectors between a query point and its neighbouring points to obtain the outline and skeleton of the road. These steps finally allow us to compute the volume of the roadside material that would have to be moved to widen the road by 4 m. In addition, an analysis of the quality of the results is presented, notably by comparing results from two different datasets both sampling the same region of interest. Finally, conclusions and future work are discussed.

This section explains the procedure for estimating the amount of roadside material that has to be removed for road widening. It consists of five steps. (1) Point cloud data preprocessing; the original point cloud data have a very high point density and need to be downsampled before processing. Additionally, outliers are removed, and a Digital Surface Model (DSM) is generated; (2) Surface normal estimation; (3) Slope and aspect estimation; (4) Road detection and segmentation; taking the normal and slopes as estimated in steps (2) and (3) as input, an automatic iterative filtering approach is used to segment the road points from the downsampled point cloud data; (5) Volume computation; the material volume that needs to be moved to widen the road is computed based on step (4). A flowchart of the procedure is shown in

The raw point cloud data have a very high point density, so for quick and efficient processing, downsampling is a necessity. The approach followed here is to represent the point cloud data by voxels [
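Voxel-based downsampling can be sketched as follows: each point is hashed into a cubic voxel of fixed edge length, and all points falling in the same voxel are replaced by their centroid. This is a minimal illustration under our own naming (the function `voxelDownsample` and the struct `Point` are ours, not from the paper's implementation):

```cpp
#include <cmath>
#include <map>
#include <tuple>
#include <utility>
#include <vector>

struct Point { double x, y, z; };

// Voxel-grid downsampling: bin points into cubic voxels of edge length
// `voxel` and replace the contents of each voxel by its centroid.
std::vector<Point> voxelDownsample(const std::vector<Point>& pts, double voxel) {
    // per-voxel running sum of coordinates and point count
    std::map<std::tuple<long, long, long>, std::pair<Point, int>> cells;
    for (const Point& p : pts) {
        auto key = std::make_tuple((long)std::floor(p.x / voxel),
                                   (long)std::floor(p.y / voxel),
                                   (long)std::floor(p.z / voxel));
        auto& cell = cells[key];  // value-initialized to zeros on first use
        cell.first.x += p.x; cell.first.y += p.y; cell.first.z += p.z;
        cell.second += 1;
    }
    std::vector<Point> out;
    for (const auto& kv : cells) {
        const Point& s = kv.second.first;
        int n = kv.second.second;
        out.push_back({s.x / n, s.y / n, s.z / n});
    }
    return out;
}
```

With a 1 m voxel, three points of which two share a voxel reduce to two output points, each at its voxel's centroid.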

The outliers of the downsampled point cloud data have to be removed before estimating geometric features from the point cloud data. In this procedure, the query point neighbourhood concept is introduced: the neighbourhood of a query point p_{query} consists of all points p_{i} that lie within a given search radius around p_{query}.

Outliers are the measurements located at jump edges or discontinuous boundaries where there should be no points [. To detect them, the mean distance d_{k} of each query point p_{query} to its k nearest neighbours is computed; points whose mean distance deviates too strongly from the global mean of these distances are discarded, and P^{*} is the set of remaining points.
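This filtering can be sketched in the spirit of the PCL statistical outlier removal filter used later in the processing. The sketch below (function name and brute-force neighbour search are ours; a real implementation would use a k-d tree) drops every point whose mean distance to its k nearest neighbours exceeds the global mean plus alpha standard deviations:

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

struct Pt { double x, y, z; };

// Statistical outlier removal: compute each point's mean distance to its k
// nearest neighbours; drop points whose mean distance exceeds
// (global mean + alpha * global st.dev) of those mean distances.
std::vector<Pt> removeOutliers(const std::vector<Pt>& pts, int k, double alpha) {
    const int n = (int)pts.size();
    std::vector<double> meanDist(n, 0.0);
    for (int i = 0; i < n; ++i) {
        std::vector<double> d;
        for (int j = 0; j < n; ++j) {
            if (j == i) continue;
            d.push_back(std::hypot(std::hypot(pts[i].x - pts[j].x,
                                              pts[i].y - pts[j].y),
                                   pts[i].z - pts[j].z));
        }
        int m = std::min(k, (int)d.size());
        std::partial_sort(d.begin(), d.begin() + m, d.end());
        double s = 0.0;
        for (int j = 0; j < m; ++j) s += d[j];
        meanDist[i] = s / m;
    }
    // global statistics of the mean distances
    double mu = 0.0, var = 0.0;
    for (double v : meanDist) mu += v;
    mu /= n;
    for (double v : meanDist) var += (v - mu) * (v - mu);
    double sigma = std::sqrt(var / n);
    std::vector<Pt> kept;
    for (int i = 0; i < n; ++i)
        if (meanDist[i] <= mu + alpha * sigma) kept.push_back(pts[i]);
    return kept;
}
```

On a regular 3x3 grid of points plus one point far away, the far point's mean neighbour distance dominates the statistics and it is the only one removed.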

The surface normal at a discrete point is a vector perpendicular to the tangential plane of the local surface at that point. Various methods exist to estimate a normal at a certain point in 3D point cloud data [

When estimating a local normal using least-squares, the goal is to approximate the tangent plane at a point of interest and take that plane's normal. The best-fitting plane is obtained by determining those planar parameters that minimize the squared distances between suitable neighbouring points and the plane to be estimated. Suppose we have a point of interest with Euclidean coordinates (x, y, z)^{T}, and let n = (n_{x}, n_{y}, n_{z})^{T} denote the unknown unit normal of the plane.

Additionally, |n| = 1 is required, and the neighbouring points used in the plane fit are denoted p_{i} = (x_{i}, y_{i}, z_{i})^{T}.

The solution of
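For illustration, a simplified, unconstrained variant of the least-squares normal estimate can be coded directly: fit the plane z = ax + by + c to the neighbourhood via the 3x3 normal equations and take n = (-a, -b, 1), normalized. This sketch (the function `fitNormal` and the use of Cramer's rule are ours) is only valid for local surfaces that are not near-vertical, whereas the paper's constrained formulation with |n| = 1 has no such restriction:

```cpp
#include <array>
#include <cmath>
#include <vector>

struct P3 { double x, y, z; };

// Simplified least-squares normal: fit z = a*x + b*y + c to the neighbourhood
// by solving the 3x3 normal equations with Cramer's rule, then normalize
// n = (-a, -b, 1). Assumes the local surface is not near-vertical.
std::array<double, 3> fitNormal(const std::vector<P3>& nb) {
    double sxx = 0, sxy = 0, syy = 0, sx = 0, sy = 0, sz = 0, sxz = 0, syz = 0;
    double n = (double)nb.size();
    for (const P3& p : nb) {
        sxx += p.x * p.x; sxy += p.x * p.y; syy += p.y * p.y;
        sx  += p.x;       sy  += p.y;       sz  += p.z;
        sxz += p.x * p.z; syz += p.y * p.z;
    }
    // Normal equations: [sxx sxy sx; sxy syy sy; sx sy n] [a b c]^T = [sxz syz sz]^T
    auto det3 = [](double a1, double a2, double a3,
                   double b1, double b2, double b3,
                   double c1, double c2, double c3) {
        return a1 * (b2 * c3 - b3 * c2) - a2 * (b1 * c3 - b3 * c1)
             + a3 * (b1 * c2 - b2 * c1);
    };
    double D  = det3(sxx, sxy, sx,  sxy, syy, sy,  sx, sy, n);
    double Da = det3(sxz, sxy, sx,  syz, syy, sy,  sz, sy, n);  // column 1 -> rhs
    double Db = det3(sxx, sxz, sx,  sxy, syz, sy,  sx, sz, n);  // column 2 -> rhs
    double a = Da / D, b = Db / D;
    double len = std::sqrt(a * a + b * b + 1.0);
    return {-a / len, -b / len, 1.0 / len};
}
```

For four points lying exactly on the plane z = 2x, the estimated normal is (-2, 0, 1)/sqrt(5), as expected.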

The 2D slope, also known as the 2D gradient, is the vector field of a surface. The vector direction points in the direction of the greatest change in height, and the vector's magnitude is equal to the rate of change. Based on the previously generated gridded DSM, the height at every grid point is interpolated from neighbouring points by inverse distance interpolation with power 2. If the grid size is d_{grid}, the slope at a grid point p_{query} is estimated from the height differences h_{i} between p_{query} and its neighbouring grid points p_{i}.
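On a regular grid, the gradient at an interior grid point can be approximated by central differences; the slope magnitude and aspect then follow directly. A minimal sketch (function name is ours; boundary handling is omitted):

```cpp
#include <cmath>
#include <utility>
#include <vector>

// 2D slope (gradient) on a regular DSM grid via central differences.
// Returns {magnitude, aspect in radians} at interior cell (r, c);
// `d` is the grid spacing in meters.
std::pair<double, double> slopeAt(const std::vector<std::vector<double>>& h,
                                  int r, int c, double d) {
    double gx = (h[r][c + 1] - h[r][c - 1]) / (2.0 * d);  // along columns
    double gy = (h[r + 1][c] - h[r - 1][c]) / (2.0 * d);  // along rows
    return { std::sqrt(gx * gx + gy * gy), std::atan2(gy, gx) };
}
```

For a ramp whose height increases by 1 m per 1 m grid column, the slope magnitude is 1 and the aspect points along the column direction.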

Based on the normal vectors and slopes that were computed at each point in the previous steps, an automatic iterative point cloud data filtering approach is used to detect road surface points from the point cloud data. The main steps are as follows:

(1) Input an initial grid and window size;

(2) Generate a virtual reference 3D gridded layer. The elevation of each grid point is interpolated from its neighbouring points, together with the grid point's unit normal vector and its angle to the zenith. Based on this layer, a window of predefined size is created to move over the grid and point cloud;

(3) In the current moving window, compare for each grid point the elevation and normal direction of the 3D layer with those of the point cloud, i.e., calculate the height and angle differences between the 3D layer and the point cloud and verify whether the differences exceed the thresholds;

(4) If the differences are less than the distance and direction thresholds, the point is accepted as a road point; otherwise the point is regarded as an off-road point;

(5) Go to step (2) and generate a new virtual layer using a smaller grid size, then iteratively process the point cloud again;

(6) The loop ends when the grid size reaches a pre-defined smallest size.
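The steps above can be sketched as follows. This is a deliberately simplified version under our own naming (struct `RPt`, function `filterRoad`): the reference layer is built by simple per-cell averaging of the currently accepted points rather than interpolation, and the moving window coincides with a grid cell. Each pass halves the grid size until the minimum size is reached:

```cpp
#include <cmath>
#include <map>
#include <utility>
#include <vector>

struct RPt { double x, y, z, nz; };  // nz: z-component of the unit normal

// Simplified iterative road filtering: each pass builds a virtual reference
// layer by averaging the heights of the currently accepted points per grid
// cell; a point is kept as "road" if its height deviates less than dh from
// the layer and its normal is within angDeg of the zenith. The grid size is
// halved each pass until it reaches minGrid.
std::vector<bool> filterRoad(const std::vector<RPt>& pts, double grid,
                             double minGrid, double dh, double angDeg) {
    std::vector<bool> road(pts.size(), true);
    const double cosT = std::cos(angDeg * std::acos(-1.0) / 180.0);
    for (double g = grid; g >= minGrid; g /= 2.0) {
        // virtual layer: per-cell height sum and count of accepted points
        std::map<std::pair<long, long>, std::pair<double, int>> layer;
        for (std::size_t i = 0; i < pts.size(); ++i) {
            if (!road[i]) continue;
            auto key = std::make_pair((long)std::floor(pts[i].x / g),
                                      (long)std::floor(pts[i].y / g));
            layer[key].first += pts[i].z;
            layer[key].second += 1;
        }
        // re-classify every point against the layer
        for (std::size_t i = 0; i < pts.size(); ++i) {
            auto key = std::make_pair((long)std::floor(pts[i].x / g),
                                      (long)std::floor(pts[i].y / g));
            auto it = layer.find(key);
            if (it == layer.end()) { road[i] = false; continue; }
            double ref = it->second.first / it->second.second;
            road[i] = std::fabs(pts[i].z - ref) < dh && pts[i].nz > cosT;
        }
    }
    return road;
}
```

With the paper's thresholds (0.3 m height difference, 15 degrees to the zenith), flat points with vertical normals survive all passes, while a raised point with a tilted normal is rejected.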

In this step, based on the road points segmented in step (4), the road's outlines are determined. First, a local feature descriptor is defined based on the neighbourhood concept.

Let p_{1}, p_{2}, …, p_{k} be the neighbours of a query point p_{query} within a given radius, and let v_{i} denote the vector from p_{query} to neighbour p_{i}. The local feature descriptor at p_{query} is defined as the norm of the sum of these vectors.

In case the query point is indeed an edge point, the value ‖Σ_{i} v_{i}‖ is large; points whose descriptor value exceeds a threshold t_{des} are therefore regarded as road outline points.
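One plausible 2D formulation of this descriptor (the function name and the averaging over k are our choices; the paper does not fully specify the normalization) is the norm of the mean neighbour-offset vector: for an interior point the offsets roughly cancel, while at the road outline they all point inwards and the norm grows.

```cpp
#include <cmath>
#include <vector>

struct V2 { double x, y; };

// Edge descriptor: norm of the mean of the vectors from the query point to
// its neighbours. Near zero for interior points (offsets cancel); large at
// boundary points (offsets share a direction).
double edgeDescriptor(const V2& q, const std::vector<V2>& nb) {
    double sx = 0.0, sy = 0.0;
    for (const V2& p : nb) { sx += p.x - q.x; sy += p.y - q.y; }
    sx /= nb.size(); sy /= nb.size();
    return std::sqrt(sx * sx + sy * sy);
}
```

For a query point surrounded symmetrically the descriptor is zero; for a point with all neighbours on one side it equals the distance to their mean.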

After the road outline is determined, the road central line is estimated based on the location of the road edges. Now suppose the road needs to be widened to four lanes. As a consequence, a certain volume of the road has to be removed or added to extend the flat road surface.

As shown in

To minimize the sample error, the resolution for both the road parallel and the road perpendicular direction is defined based on the point cloud data resolution, as shown in

Suppose
where V_{L} and V_{R} denote the total excavation volumes on the left and the right side of the road, respectively.

If for example

The methodology described above was implemented on an ordinary Dell desktop computer with an Intel Xeon 3.6 GHz CPU and 16 GB of RAM. The software was implemented in C++. Also, the Point Cloud Library (PCL) statistical outlier removal filter was used in the processing. The whole processing took 23.184 s for the tested dataset.

The point cloud data studied in this paper was acquired by the University of Vigo, Spain. The approximate location of the studied road is shown in

The entire study area is shown in

As depicted in

Road points were identified according to the method outlined in Section 2. The minimum virtual grid size was set to 0.1 m, because there is a very high point density in the original point cloud dataset. Additionally, the height threshold was set to 0.3 m and the angle threshold was set to 15 degrees. In this processing, a total of 42,717 road points was extracted and segmented, as shown in

First, based on the segmented road points, the road outline was determined following the method of Section 2.5. In this paper, the local descriptor threshold was set at 1.5, which means that all points with an edge descriptor value greater than 1.5 were regarded as road outline points.

The extraction results are shown in

After the extraction of the points above the possible location of the widened road (

On the left side of the road, 542.2 m^{3} needs to be excavated compared to 462.3 m^{3} on the right side. Because the studied road is not straight, the road-parallel distances for the two road sides differ. The right side is 137.2 m long, whereas the left side is 124.1 m long. At the location indicated by the arrow in both

There is no data available from other sensors that can be used to verify the computed results. Instead, the quality of the results is analyzed by considering the quality of the input data in combination with an analysis of how this quality propagates into the final volume computations. In addition, the excavation volumes were determined from a second LMMS dataset, acquired in a second run by the same system on the same day. Moreover, a possible measurement plan for further validation of the results is sketched.

Since the total volume is computed by summing up slices, the variance of the total volume equals the sum of the variances of the individual sliced volumes. The random error in the computation of a sliced volume consists of a component caused by random measurement errors in the original point cloud, denoted σ_{PTS}, and a component caused by the limited representation of the local surface relief, denoted σ_{R}. Writing these components for slice i as σ_{i,pts} and σ_{i,r}, the total variance follows as σ^{2}_{Total} = Σ_{i} (σ^{2}_{i,pts} + σ^{2}_{i,r}).

Thus, the error of the volume of a single slice is studied first. According to the specifications of the Lynx LMMS and previous error studies [

As can be seen in

The resulting standard deviation (st.dev) values for the eight blocks that together form the slice depicted in

To summarize the results from

Assuming a st.dev value per block of 0.55 m, the st.dev of a slice volume equals 1.56 m^{3}. Assuming 132 slices, this results in a st.dev for the total volume on one road side of 17.9 m^{3}. This st.dev value corresponds to an error below 4% when compared to a total excavation volume of about 500 m^{3}. As the current error is dominated by surface relief, a reduction in the error could be obtained by decreasing the block size.
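The arithmetic behind these numbers is standard propagation of independent random errors: variances add, so the st.dev of a sum of n equal, independent contributions is sqrt(n) times the st.dev of one contribution (the function name below is ours):

```cpp
#include <cmath>

// Propagation of independent random errors: for n equal, independent
// contributions with st.dev sigmaPerUnit each, the st.dev of their sum is
// sqrt(n) * sigmaPerUnit, since variances add.
double propagate(double sigmaPerUnit, int n) {
    return std::sqrt((double)n) * sigmaPerUnit;
}
```

Applied twice: sqrt(8) x 0.55 gives about 1.56 per slice (8 blocks), and sqrt(132) x 1.56 gives about 17.9 for 132 slices, matching the values above.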

To validate the results presented in Section 3, the same method is applied in this section to a second dataset obtained with the same LMMS on the same day. The differences in outcome are compared to the error estimates discussed in Section 4.1.

For the second run, the same system was used but the position of the car on the road was different, as will be shown below. As for the first dataset, the data of the second run consists of a georeferenced point cloud and of a dataset giving the trajectory of the LMMS car during data acquisition. The point cloud of the second run cropped to the same piece of road consists of 6,374,830 points, and has a point density of ∼2,000 points per square meter. A side view of the second run point cloud is shown in

Following the same methodology as described in Section 2, the data of the second run was processed, and the excavation volumes for both road sides were computed. The results are shown in

A comparison of the results from both datasets is given in

As can be seen from

As shown in

This effect is illustrated in

There are several options to further validate the results of the methodology proposed in this paper in a field experiment. A general idea is to locally use other, preferably superior, measuring methods to sample the geometry of a piece of the road and roadside considered, and to repeat the computations with these superior data. A traditional method would be to use a total station or RTK-GPS to measure some profiles of 3D road surface points in a local georeferenced datum, and to import the obtained data into modeling software such as AutoCAD or 3ds Max to construct a local road model and compute the volume. This method should give accurate results, but is labor intensive. A totally different approach would be to perform measurements directly before a planned road extension. In this way, the real volume of the excavated material can be measured and compared to the results of the analysis of the corresponding LMMS data.

In this paper, a method is proposed for the estimation of the excavation volume of a planned road widening from a Laser Mobile Mapping point cloud. Starting from LMMS point cloud data sampling a mountainous road, we used uniform-size voxels to downsample the point cloud data and remove outliers. Then, local normals and 2D slopes were estimated at each resulting grid point to separate road from off-road points. Finally, the volume to be excavated for widening the road by 4 m on either side was computed. It was shown on LMMS data representing a mountain road in Spain that the volume to be excavated on the left side differs by 8% from that on the right side. A more detailed analysis of one slice of data indicates that the error in the estimated excavation volume is below 4%. The results were partly validated by a comparison to results from analyzing a second point cloud obtained by the same system on the same day, but from a different trajectory. The excavation volumes estimated from the two datasets differed by 2.5%-3.5%.

A further step would be to use the proposed method for determining the widening of the road of, e.g., 4 m by

The authors would like to thank the three anonymous reviewers for their comments in improving the manuscript. Also, the authors gratefully acknowledge financial support from the China Scholarship Council. The authors would also like to thank the support from project p10-TIC-6114 JUNTA ANDALUCIA. This paper is partly supported by IQmulus (FP7-ICT-2011-318787), a project aiming at a High-volume Fusion and Analysis Platform for Geospatial Point Clouds, Coverages and Volumetric Data Sets.

The authors declare no conflict of interest.

Overall methodology of data processing.

Neighbourhood of a query point within a certain radius.

Normal estimation at a query point using a local plane fitting approach.

Illustration of local neighbourhood feature descriptor.

Computation of excavation volume.

Slice volume computational geometry.

Approximate location of the studied road.

Original point cloud dataset of the study area. (

2D slope at each point of the studied road.

Segmented road from the original point cloud dataset.

Road outline, central line and expanded road outline. (

Profiled height on the expanded roadside.

Slice volume computed from the expanded road outlines on both sides.

Cumulative volume on the extended roadside.

Single slice volume computation error analysis.

Side view of randomly selected road side slice. (

Side view of point cloud data from the second run.

Cumulative volume of a roadside extension determined from point cloud data of the second run.

Height difference per block between original and second run point cloud data (m).

Geometry relation between LMMS and steep roadside terrain.

Mean and standard deviation of points per block in meters. First three rows: original point cloud; Last three rows: downsampled point cloud.

Original Points | number | 839 | 1070 | 571 | 284 | 200 | 175 | 354 | 415
| mean | 972.84 | 974.94 | 977.57 | 979.20 | 981.26 | 982.70 | 984.62 | 987.02
| st.dev | 0.53 | 0.71 | 0.50 | 0.38 | 0.40 | 0.46 | 0.62 | 0.69
Down Sampled Points | number | 45 | 33 | 33 | 38 | 35 | 40 | 36 | 31
| mean | 972.43 | 975.03 | 977.24 | 979.19 | 981.31 | 982.76 | 984.54 | 986.60
| st.dev | 0.51 | 0.59 | 0.59 | 0.57 | 0.48 | 0.56 | 0.65 | 0.75

Comparison of excavation volumes determined from original data and data from second run.

| Left Side | Right Side
Original data (m^{3}) | 542.2 | 462.3
Data from second run (m^{3}) | 556.3 | 478.2
Difference (%) | 2.53 | 3.32