Article

Automatic Extraction of Road Points from Airborne LiDAR Based on Bidirectional Skewness Balancing

by Jorge Martínez Sánchez *, Francisco Fernández Rivera, José Carlos Cabaleiro Domínguez, David López Vilariño and Tomás Fernández Pena
Centro Singular de Investigación en Tecnoloxías Intelixentes (CiTIUS), University of Santiago de Compostela, 15782 Santiago de Compostela, Spain
* Author to whom correspondence should be addressed.
Remote Sens. 2020, 12(12), 2025; https://doi.org/10.3390/rs12122025
Submission received: 19 May 2020 / Revised: 14 June 2020 / Accepted: 22 June 2020 / Published: 24 June 2020

Abstract:
Road extraction from Light Detection and Ranging (LiDAR) has become a hot topic over recent years. Nevertheless, it is still challenging to perform this task in a fully automatic way. Experiments are often carried out over small datasets with a focus on urban areas and it is unclear how these methods perform in less urbanized sites. Furthermore, some methods require the manual input of critical parameters, such as an intensity threshold. Aiming to address these issues, this paper proposes a method for the automatic extraction of road points suitable for different landscapes. Road points are identified using pipeline filtering based on a set of constraints defined on the intensity, curvature, local density, and area. We focus especially on the intensity constraint, as it is the key factor to distinguish between road and ground points. The optimal intensity threshold is established automatically by an improved version of the skewness balancing algorithm. Evaluation was conducted on ten study sites with different degrees of urbanization. Road points were successfully extracted in all of them with an overall completeness of 93%, a correctness of 83%, and a quality of 78%. These results are competitive with the state-of-the-art.

Graphical Abstract

1. Introduction

Precise information about the road network is crucial for a wide range of applications such as city planning, traffic management, or road monitoring. Over the last few decades, remote sensing techniques have been extensively studied to accomplish these tasks [1]. Road extraction from aerial and satellite imagery has achieved varying degrees of success, but it remains a difficult task due to the limitations of passive sensing. This has led many researchers to explore the advantages of using LiDAR data [2]. LiDAR is an active sensing technique that uses light pulses to record a point cloud. For each point, accurate 3D coordinates, among other information, are provided. Elevation information helps to distinguish roads from other impervious objects, and reflectance intensity provides good separability of ground materials with similar elevation [3]. To improve results, some authors have fused aerial imagery with LiDAR data [4,5,6]; however, in this work we focus on road point extraction based exclusively on LiDAR data. It is worth exploring and evaluating what can be achieved with LiDAR data alone, as using only one data source decreases costs and avoids the complexities of data coregistration. Moreover, in many cases this is the only input available.
Several authors have presented methods for road point extraction in recent years. This extraction is often used as input to other characterization steps such as vectorization. In [7] road points are identified by a hierarchical classification based on height, intensity, and local point density. The intensity threshold is selected manually. Then, a binary image is produced, and the road is further refined using morphological operations and connected component analysis. Two mainly urban sites with some rural traits, Fairfield and Yeronga, were tested. In [8] ground points inside a manually selected intensity range are kept, and a constrained Delaunay Triangulated Irregular Network (TIN) is used to group the points and remove small regions. An urban area of the city of Shashi was tested. In [9] the intensity distribution of ground points is assumed to follow a Gaussian mixture model, and an expectation maximization algorithm is used to find the maximum likelihood solution of the model. An intensity image is produced from the ground points and each pixel is classified according to its posterior probability. A certain K value is required, so that pixels are categorized into K classes. Moreover, the road class needs to be manually identified from these K classes. Three sites, Atlanta, Denver, and Oakland, mainly covering downtown and surrounding residential areas, were tested. In [10] ground points inside a manually selected intensity range are kept, and the remaining noise is removed by a morphological opening operation. An urban area of Niagara was tested. In [11] ground points are segmented based on the local intensity distribution histogram and filtered by intensity gradient and area. A region of the city of Vaihingen was tested. In [12] road points are identified based on intensity, local point density, and region area constraints. A new method for the calculation of the intensity threshold based on the skewness balancing algorithm [13] is introduced, and it was tested on four urban sites, including Vaihingen. This method assumes that the distribution of intensity values always yields a positive skewness. While this is generally true for urban areas, in this work we show that this behavior is not guaranteed in mixed or rural areas.
Alternatively, road centerlines can be extracted without a prior extraction of the road points. In [14,15] road centerlines are obtained by mean shift clustering with an adaptive window size based on the intensity and surface roughness. The well-known sites of Vaihingen and Toronto were used in the experiments. While these methods yield better results than previous studies, it is unclear how a mean shift clustering will perform in less urbanized areas. Mean shift clustering is an iterative procedure that shifts each data point to the average of the data points inside a window until a stable average is reached. The idea is that ground points will converge into scattered points while road points will converge into the road centerline. This way, road centerlines can be extracted by searching for linear features. However, some concerns remain: the appropriate selection of the window size, the high computational cost, and the fact that in rural sites it is common to find areas with a homogeneous intensity and surface roughness similar to road segments, which may prevent the ground points from converging into scattered points. Furthermore, because the road points are not labelled, extracting road features, such as roughness or width, is not straightforward after the extraction of the centerlines.
A different approach is to extract road points with machine learning techniques such as AdaBoost [16], Multiple Classifier Systems (MCS) [17], Support Vector Machines (SVM) [18,19], Conditional Random Fields (CRF) [20,21], Random Forests (RF) [22,23], or Maximum Likelihood Classification (MLC) [24]. However, the need for training data has important implications. First, the creation of labelled data is a difficult task with high economic and time costs, so only a small amount of reference data is often available. In addition, due to the heterogeneous characteristics of point cloud data, achieving high performance on datasets different from those used for training is not guaranteed.
Two issues remain to be addressed to perform road extraction successfully. First, methods often need the manual input of critical parameters, particularly the intensity threshold. Second, it is difficult to assess whether a specific method is suitable for particular data, given that the datasets used in most studies are focused on urban areas and experiments are conducted on a very small number of test sites. To address both issues, in this paper we propose a new framework for the extraction of road points that is highly automatic, requiring only a single non-critical user parameter, and that introduces an improved skewness balancing algorithm for the calculation of the intensity threshold. We evaluate its performance on a meaningful dataset covering a variety of both urban and rural landscapes.
The rest of the paper is organized as follows: Section 2 presents the framework for road point extraction. Section 2.1 describes the improved method to automatically estimate the intensity threshold. Experiments are presented in Section 3 and discussed in Section 4. Section 5 presents the conclusions.

2. Method

In a preliminary stage of the road extraction process, the distinction between ground and non-ground points is required. Different approaches can be considered to accomplish this task. In this work, we use a two-phase region-growing segmentation followed by a height jump detection algorithm from a previous work [25]. Points are grouped into clusters based on their 3D coordinates, intensity, and normal vector. For each cluster, the height differences at the boundaries with its neighboring clusters are calculated and, in case of high differences, the cluster is labelled as non-ground.
From this input, road points are separated from ground points by the pipeline filtering shown in Figure 1. In each stage of the pipeline, points are evaluated to verify whether they fulfil a given characteristic. Any road point is assumed to satisfy the following characteristics: it yields low intensity, it lies on a plane, it is surrounded mostly by road points, and it belongs to a set of road points with a meaningful size. Only first-return points with these characteristics are identified as road points. Non-first returns, which are generally present in ground areas below trees, are not considered because their inherently low intensity produces noticeable false positives in forested areas. A limited number of road points may not fulfil all these constraints, e.g., road points within road markings yield high intensity values and road points under vegetation will be non-first returns, but these points can be recovered in later stages. In the following subsections, the filters that evaluate whether each point fulfils these characteristics are introduced in detail.

2.1. Stage I: Intensity Filter

The intensity can be defined as the ratio of the received to the emitted power of the laser beam. The intensity value depends on the target characteristics (reflectance and roughness), acquisition geometry, instrumental effects, and environmental effects. It is well known that road materials, such as asphalt or concrete, have a low reflectance, thus road points generally yield low intensity values [3]. The challenge resides in selecting an appropriate intensity threshold for every dataset. Although intensity is one of the key factors to distinguish between ground and road points, little research has been conducted to investigate automatic thresholding. In [12] a skewness balancing algorithm to automatically calculate the intensity threshold is proposed. The basic assumption of the skewness balancing algorithm is that naturally measured samples will lead to a normal distribution due to the central limit theorem [26]. Originally, it was assumed that elevation values of ground points follow a normal distribution, which is disturbed by object points, so that by their removal, the normally distributed ground points are obtained [13]. This concept was adopted in [12] for road extraction. In our proposal, we extend this algorithm by (i) improving its robustness to intensity outliers, (ii) making it suitable for non-urban areas, and (iii) standardizing the balancing speed. A flowchart of the proposed algorithm, which we have called bidirectional skewness balancing, is shown in Figure 2. The main stages of this method are described in detail in the following subsections.

2.1.1. Outlier Removal

The skewness is a measure of the degree of asymmetry of a distribution. It can be used to detect whether a distribution is concentrated on the left or on the right side. This information is used to determine the direction of the skewness balancing (as explained in Section 2.1.4). Nevertheless, the skewness is very sensitive to the presence of outliers. In particular, very high intensity values lead the skewness to be positive. A significant number of outliers can even artificially change the skewness value from negative to positive. This, coupled with the fact that the presence of intensity outliers across datasets is common in practice, makes the removal of outliers a mandatory task prior to any processing. To this end, we use the Interquartile Range (IQR), which is defined as:
$IQR = Q_3 - Q_1$ (1)
where $Q_3$ and $Q_1$ are the third and first quartiles of the distribution, respectively. Then, the maximum value of the box-and-whisker plot is calculated, which is set as the maximum intensity allowed, that is:
$I_{max} = Q_3 + 1.5 \cdot IQR$ (2)
All points yielding an intensity $I > I_{max}$ are considered outliers and are filtered.
A clear example of the need for the removal of outliers is shown in Figure 3. Please note that the presence of some points yielding a very high intensity leads to a highly positive skewness (see Figure 3a). A positive skewness value will initiate a backward balancing which will fail to obtain an accurate intensity threshold. By removing the outliers, the actual nature of the distribution is clearly exposed, which yields a negative skewness (see Figure 3b). In our proposal, a negative skewness value initiates a forward balancing to calculate the appropriate intensity threshold.
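To make the filter concrete, the following is a minimal NumPy/SciPy sketch of the IQR rule described above; the function name, the synthetic intensities, and the use of scipy.stats.skew are illustrative assumptions rather than part of the original implementation.

```python
import numpy as np
from scipy.stats import skew

def remove_intensity_outliers(intensities):
    """Filter intensity outliers with the IQR rule (Equations (1) and (2))."""
    q1, q3 = np.percentile(intensities, [25, 75])
    i_max = q3 + 1.5 * (q3 - q1)          # upper whisker of the box-and-whisker plot
    return intensities[intensities <= i_max], i_max

# Synthetic illustration: a small number of very bright returns makes the
# skewness strongly positive; after the IQR filter the sign reflects the
# actual shape of the distribution (cf. Figure 3).
rng = np.random.default_rng(0)
ground = rng.normal(120.0, 15.0, 10000)        # Gaussian bulk of ground returns
outliers = rng.uniform(1000.0, 4000.0, 50)     # spurious high-intensity returns
raw = np.concatenate([ground, outliers])
kept, i_max = remove_intensity_outliers(raw)
print(skew(raw), skew(kept))                   # strongly positive vs. close to zero
```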

2.1.2. Right Tail Removal

Even after the removal of outliers, the Gaussian in the histogram formed by ground points generally presents a notable right tail (see Figure 4a). This produces a negative impact on the algorithm and needs to be addressed. The skewness balancing stops when reaching symmetry, i.e., zero skewness. This means that in the presence of a significant right tail, an equivalent significant left tail will be preserved. This situation can be observed in Figure 4b, where the forward balancing stops when reaching the left tail instead of advancing until the base of the Gaussian. Consequently, the calculated intensity threshold is systematically low. This effect is mitigated if the right tail is removed. This is performed by filtering the points with an intensity beyond a certain percentile (see Figure 4c). This way, the forward balancing continues until reaching the base of the Gaussian (see Figure 4d), resulting in a better estimation of the intensity threshold.
We have determined experimentally that the 95th percentile provides consistent results in all cases. To calculate the intensity value corresponding to a certain percentile, the intensity values are sorted in ascending order, and the index of the percentile within the list is calculated using the nearest-rank method:
$n = \lceil \frac{P}{100} \cdot N \rceil$ (3)
where $n$ is the index, also known as the ordinal rank, $P$ is the considered percentile, and $N$ is the list size.
Figure 5 shows how the estimation of the intensity threshold improves when the right tail is removed. Points are colored by their intensity values mapped into a heat map. Please note that by applying this filter, previously missing road segments are now detected. The new road points are the ones yielding the highest intensity (red and orange colors). Some false positives are also added, but it is crucial to detect all road points in this stage, and they can be removed in the next stages.
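A possible implementation of the nearest-rank cut, assuming the intensities are held in a NumPy array, is sketched below; the returned cutoff corresponds to the maximum raw intensity used later in Equation (4). Names are ours.

```python
import numpy as np

def remove_right_tail(intensities, percentile=95.0):
    """Drop intensities above the given percentile (nearest-rank method, Equation (3))."""
    values = np.sort(intensities)
    n = int(np.ceil(percentile / 100.0 * values.size))   # ordinal rank (1-based)
    cutoff = values[n - 1]
    return intensities[intensities <= cutoff], cutoff
```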

2.1.3. Intensity Scaling

In [12] the skewness balancing is performed by subtracting 1 from the intensity threshold iteratively. Nevertheless, intensity ranges can vary greatly, as the intensity can be scaled into different ranges, commonly between 8 and 16 bits [27]. This means that using an absolute step size of 1 will result in different relative step sizes for point clouds with different intensity ranges. In other words, the balancing speed will not be the same across point clouds. For example, in the Vaihingen data, intensity values are scaled to an 8 bit range, so the relative step size is 1/255, whereas in the St Arnaud data, intensity values are scaled to a 16 bit range, so the step size would be 1/65,535, which increases the number of iterations and reduces the computational performance significantly. There are also cases in which the intensity range after removing the outliers can be smaller than the initial bit range. In the Toronto data, the intensity values are scaled to an 8 bit range, but after removing the outliers, the remaining values are in the [0, 58] range, so the step size would be 1/58. Such a large relative step size may lead to an intensity threshold that is not optimal, as the skewness balancing would have stopped earlier if a smaller step size had been used. In order to enforce an equal relative step size, the remaining intensity values of each point cloud are scaled into the [0, 255] range and the absolute step size is always kept as 1. This ensures a relative step size of 1/255 independently of the intensity range.
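As a minimal sketch of this step (assuming the minimum intensity after the previous filters is zero or close to it), the scaling could be written as:

```python
import numpy as np

def scale_intensities(intensities):
    """Map the remaining intensities into [0, 255] so that an absolute balancing
    step of 1 always corresponds to a relative step of 1/255."""
    i_max = intensities.max()
    return np.round(intensities / i_max * 255.0), i_max
```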

2.1.4. Bidirectional Balancing

Once the intensities have been scaled, the skewness is calculated and a forward or backward balancing is initiated depending on whether its value is negative or positive, respectively. Let us analyze the reasons to consider both directions. In [12] only backward balancing is used, because it is assumed that intensity values of road points follow a normal distribution, which is disturbed by non-road points, so that by their removal, the normally distributed road points are obtained. However, this is only true for urban areas. In such scenarios, there is a predominance of road points over non-road points, meaning that the distribution is left-concentrated and its skewness is positive, as shown in the example of Figure 6a. In contrast, in less urbanized or rural areas the situation is just the opposite: there is a predominance of non-road points, so the distribution is right-concentrated and its skewness is negative, as shown in the example of Figure 6b. In other words, the disturbance to remove is the non-predominant class, road or non-road, which changes among study sites. Considering this, we reformulate the assumptions of the skewness balancing as follows:
  1. Intensity values of both road and non-road points are normally distributed (skewness = 0).
  2. A predominance of road points leads to a left-concentrated distribution (skewness > 0).
  3. A predominance of non-road points leads to a right-concentrated distribution (skewness < 0).
While the original algorithm only works properly in cases under assumptions 1 and 2, we propose to generalize the core principle of the algorithm in order to make it also suitable for cases under assumption 3.
The idea of the algorithm is to filter points iteratively until reaching a zero skewness, in other words, until only the points in the Gaussian-shaped peak remain. For a distribution with positive skewness this filtering is done backwards (right-to-left), as the peak is located on the left side. For a distribution with negative skewness, we propose to perform the filtering forwards (left-to-right) as the peak is located on the right side. A flowchart of the forward balancing is shown in Figure 7. Therefore, the direction of the balancing is dictated by the skewness value.
Because of the intensity scaling described in the previous subsection, the obtained intensity threshold in the skewness balancing will be scaled. This can be reverted into its raw intensity value by:
$I_T = \hat{I}_T \cdot I_{max} / 255$ (4)
where $I_T$ is the raw intensity threshold, $\hat{I}_T$ is the scaled intensity threshold, and $I_{max}$ is the maximum raw intensity after the removal of the right tail. All points with intensity $I > I_T$ are filtered out. Points with $I = 0$ are also filtered, as we found that they typically correspond to water surfaces.
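Putting the previous steps together, a condensed sketch of the balancing loop could look as follows. It is a simplification of the flowcharts in Figures 2 and 7, and all function and variable names are ours, not the authors' implementation.

```python
import numpy as np
from scipy.stats import skew

def bidirectional_skewness_balancing(scaled, i_max_raw):
    """Trim the scaled intensity distribution until its skewness crosses zero and
    return the raw intensity threshold (Equation (4)).

    scaled    -- intensities already mapped into [0, 255]
    i_max_raw -- maximum raw intensity after right-tail removal
    """
    values = np.asarray(scaled, dtype=float)
    if skew(values) > 0:
        # Backward balancing: the Gaussian peak is on the left, trim from the right.
        threshold = values.max()
        while values.size > 3 and skew(values) > 0:
            threshold -= 1
            values = values[values <= threshold]
    else:
        # Forward balancing: the Gaussian peak is on the right, trim from the left.
        threshold = values.min()
        while values.size > 3 and skew(values) < 0:
            threshold += 1
            values = values[values >= threshold]
    return threshold * i_max_raw / 255.0
```

Candidate road points are then the first-return points whose raw intensity does not exceed the returned threshold (excluding points with $I = 0$), as described above.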

2.2. Stage II: Curvature Filter

After filtering the points by intensity, their curvature is considered in the next stage of the pipeline. In order to analyze the curvature of a point, its neighborhood must be identified. A common neighborhood model is a sphere with radius $r$. The length of $r$ is selected based on two factors: the average point spacing (APS) and the minimum road width (MRW). On the one hand, it is necessary to consider a significant number of neighboring points; an appropriate value could be $r = 2 \cdot APS$. On the other hand, if the length of $r$ is large with respect to the road width, there will be a high percentage of non-road points inside the neighborhood influencing the curvature calculation. Ideally, a point on the centerline of the road will have only road points as neighbors, while road points near the edges will have a lower number of road points as neighbors. Considering this fact, $r$ must not be larger than half of the road width. The proposed value of $r$ is:
$r = \min(2 \cdot APS, MRW / 2)$ (5)
Please note that for the neighbors search, only points in the same flight strip are considered, as points between overlapping strips can yield a high curvature due to issues in the geometric adjustment.
Then the 3 × 3 covariance matrix of the neighborhood is obtained. Its eigenvalues $\{\lambda_1, \lambda_2, \lambda_3\}$ can be obtained by Principal Component Analysis (PCA), and they satisfy $\lambda_1 \geq \lambda_2 \geq \lambda_3 \geq 0$, because the covariance matrix is symmetric positive semi-definite. These eigenvalues describe the local geometric structure of the point set, from which the surface variation [28], also known as change in curvature [29], is:
$C_\lambda = \frac{\lambda_3}{\lambda_1 + \lambda_2 + \lambda_3}$ (6)
$C_\lambda$ is zero for points lying on a plane, which is expected for road points. Nevertheless, because LiDAR data have inherent noise, an exact zero value is not expected; therefore, a threshold needs to be set. We found experimentally that the value 0.005, used by other authors [30,31], is suitable for the whole dataset. Thus, only points with $C_\lambda < 0.005$ are considered planar and are kept for the next filtering stage.
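A possible sketch of this filter, using a k-d tree for the spherical neighbor search, is shown below; the restriction of the search to points of the same flight strip is omitted for brevity, and the use of scipy.spatial.cKDTree is our choice, not the paper's.

```python
import numpy as np
from scipy.spatial import cKDTree

def planar_points(points, radius, max_curvature=0.005):
    """Return a boolean mask of points whose surface variation (Equation (6)) is
    below the threshold, using a spherical neighborhood of the given radius
    (Equation (5)).  points is an (N, 3) array of x, y, z coordinates."""
    tree = cKDTree(points)
    planar = np.zeros(len(points), dtype=bool)
    neighborhoods = tree.query_ball_point(points, r=radius)
    for i, idx in enumerate(neighborhoods):
        if len(idx) < 3:
            continue                               # too few points to estimate a plane
        cov = np.cov(points[idx].T)                # 3 x 3 covariance of the neighborhood
        eigvals = np.linalg.eigvalsh(cov)          # ascending: lambda3 <= lambda2 <= lambda1
        total = eigvals.sum()
        if total > 0:
            planar[i] = eigvals[0] / total < max_curvature
    return planar
```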

2.3. Stage III: Density Filter

Roads are connected objects; thus, road points must also be connected. Therefore, a road point is expected to be surrounded by other road points. We adopt the constraint of local point density presented in [7], which is the percentage of road points within the neighborhood of a given point. The assumption is that a road point will have a different local point density depending on its position within the road, but always above a certain minimum. A point in the middle of the road is expected to have a local point density of 100% (Figure 8a), while a point at the edge of the road would have a local point density near 50% (Figure 8b). In an extreme case, where the point is in a corner with a sharp bend of 90°, it will still maintain a local point density of 25% (Figure 8c). Therefore, points yielding a local point density lower than 25% are filtered out. To define the neighborhood, a sphere with a radius of $d$ is used. According to the previous reasoning, $d$ must not be larger than half the minimum road width (MRW). In our case we use $d = MRW / 2$.
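The sketch below illustrates one way to compute the local point density; counting all ground points in the denominator is our assumption, and the names are illustrative.

```python
import numpy as np
from scipy.spatial import cKDTree

def local_density_filter(candidates, ground, radius, min_ratio=0.25):
    """Keep candidate road points whose spherical neighborhood of radius d = MRW/2
    contains at least min_ratio candidate points among all ground points."""
    cand_tree = cKDTree(candidates)
    ground_tree = cKDTree(ground)
    n_cand = np.array([len(i) for i in cand_tree.query_ball_point(candidates, r=radius)])
    n_all = np.array([len(i) for i in ground_tree.query_ball_point(candidates, r=radius)])
    ratio = n_cand / np.maximum(n_all, 1)
    return ratio >= min_ratio
```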

2.4. Stage IV: Area Filter

Some isolated points may still fulfil all constraints described in the previous stages. In order to retain only meaningful points and remove this kind of noise, the points are grouped into clusters, and only clusters with a substantial area are kept after this filter. This clustering is achieved by a simple region-growing procedure, in which the seed is the first currently non-clustered point, and the inclusion criterion is to add all neighbors inside a sphere with radius 1 m of the current point, until no more nearby points can be found. A minimum area $A_{min} = 2 \cdot MRW^2$ is set, which can be seen as the minimum road rectangle: a rectangle with a width equal to the MRW and a length of twice the width.
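A simplified sketch of this last stage is given below; the bounding-box footprint is our own proxy for the cluster area, since the text does not detail how the area is measured, and all names are illustrative.

```python
import numpy as np
from scipy.spatial import cKDTree

def area_filter(points, mrw=2.0, grow_radius=1.0):
    """Group candidate road points by region growing (1 m spherical neighborhoods)
    and keep clusters whose footprint reaches A_min = 2 * MRW^2."""
    tree = cKDTree(points)
    labels = np.full(len(points), -1, dtype=int)
    n_clusters = 0
    for seed in range(len(points)):
        if labels[seed] != -1:
            continue
        stack = [seed]
        labels[seed] = n_clusters
        while stack:
            current = stack.pop()
            for nb in tree.query_ball_point(points[current], r=grow_radius):
                if labels[nb] == -1:
                    labels[nb] = n_clusters
                    stack.append(nb)
        n_clusters += 1
    a_min = 2.0 * mrw ** 2
    keep = np.zeros(len(points), dtype=bool)
    for label in range(n_clusters):
        members = labels == label
        # Area proxy: axis-aligned bounding box of the cluster footprint (x, y).
        extent = points[members, :2].max(axis=0) - points[members, :2].min(axis=0)
        keep[members] = extent[0] * extent[1] >= a_min
    return keep
```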

3. Results

3.1. Data and Parameters

A dataset of ten point clouds is used for our experiments. This dataset has been carefully selected to cover a wide variety of scenarios. To the best of our knowledge, it is the most complete dataset used for road extraction in the literature. Regarding the acquisition of the point clouds, they were obtained from five different sources, acquired with eight different sensors, and yield point densities ranging from 2 to 20 p/m2. The selected sites along with relevant information are shown in Table 1. While data from Babcock International [32] and LaboraTe [33] are not publicly accessible, data from the ISPRS Benchmark [34], OpenTopography [35], and PNOA (Spanish National Plan for Aerial Orthophotography) [36] are openly available.
Regarding the urbanization level, the point clouds range from highly urbanized areas with a predominance of road over ground, such as city cores, to lightly urbanized areas with a dominance of ground over road, also including moderately urbanized areas with a more balanced ratio between road and ground. The degree of urbanization is quantified as the percentage of road points within the ground points. As mentioned in Section 2.1.4, there is a relation between the degree of urbanization and the skewness of the intensity values (see Figure 9). Because of the lack of ground-truth for this purpose, we have taken the percentage of road points extracted by our method as the degree of urbanization. A trend can be recognized: the higher the urbanization, the higher the skewness. The skewness value used in the figure is the one obtained after the removal of intensity outliers (see Section 2.1.1).
The method was tested on every point cloud of the dataset setting the minimum road width to 2 m. The selection of this parameter is not critical, and prior knowledge of the road width is not actually needed. A detailed explanation is given in Section 4.2.

3.2. Quantitative Evaluation

A ground-truth for each point cloud was generated by a stratified random sampling with a sample size of 100 points per class, following the recommendations in [41]. These points were then manually labelled, interpreting both the point cloud and aerial images. We use three quality metrics, namely completeness, correctness, and quality, introduced in [42], to quantitatively assess the capability of the method to extract road points. Their mathematical definitions are shown in Equations (7)–(9),
$\text{Completeness} = \frac{T_p}{T_p + F_n}$ (7)
$\text{Correctness} = \frac{T_p}{T_p + F_p}$ (8)
$\text{Quality} = \frac{T_p}{T_p + F_p + F_n}$ (9)
where $T_p$ is the number of correctly labelled road points, $F_p$ is the number of wrongly labelled road points, and $F_n$ is the number of missed road points. The values of the metrics are shown in Table 2.
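For reference, the three metrics can be computed directly from the point-wise confusion counts; the counts in the example below are invented for illustration and merely reproduce values close to the reported averages.

```python
def evaluation_metrics(tp, fp, fn):
    """Completeness, correctness and quality (Equations (7)-(9)) from confusion counts."""
    completeness = tp / (tp + fn)
    correctness = tp / (tp + fp)
    quality = tp / (tp + fp + fn)
    return completeness, correctness, quality

# Illustrative counts only: 93 true positives, 19 false positives, 7 false negatives.
print(evaluation_metrics(93, 19, 7))   # roughly (0.93, 0.83, 0.78)
```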
To determine the suitability of the method, we refer to [43], where minimum values of 0.60 and 0.75 for completeness and correctness, respectively, were defined for the result to become practically useful. Our method satisfies this minimum for the whole dataset. Furthermore, the authors stated that to be of real practical importance, a method should probably yield completeness and correctness values around 0.70 and 0.85, respectively. With average completeness and correctness values of 0.93 and 0.83, respectively, our method also meets this quality standard.
Please note that the Carola site has not been evaluated quantitatively. This is because its points yield an abnormally high curvature (see Figure 10), which makes generating ground-truth infeasible. Also, the curvature filter cannot be applied under these conditions, as it would filter out most of the points. Nevertheless, we tested the pipeline filtering without this filter. Visual inspection demonstrates that the pipeline can still be effective even without the curvature filter stage (see Figure 11f).

3.3. Qualitative Evaluation

The road points extracted by our method along with aerial images of the sites are shown in Figure 11. Visual inspection shows that road points have been extracted in all point clouds. Relevant information about the filtering process is shown in Table 3. Execution times of the pipeline filtering were measured on a desktop computer with an Intel Core i7-4790 CPU and 16 GB of RAM. Execution times mainly depend on the point cloud size and on the percentage of road points, as a higher percentage means that more candidate road points will be processed in the stages of the pipeline filter.
To identify the limitations of the method, we analyzed the two sites yielding quality values lower than 0.70, namely Arzúa and Trabada. In the case of Arzúa, the road surface yields a significantly higher intensity than in the other point clouds. This behavior reduces the intensity difference between road and ground points, greatly hindering their distinction. This is the reason why, although the main road has been extracted, some road segments inside the city core and the eastern road segment are missing. Furthermore, a few adjacent strips show noticeably different intensity values. Because of this, there is a noticeable gap in the southwestern road segment (see Figure 12). The two aforementioned issues lead the Arzúa site to yield the second smallest completeness value. Overall, a higher quality could be achieved by improving the intensity correction and normalization. In the case of Trabada, there is a noticeable number of false positives, most of them corresponding to tilled areas. These areas yield a lower intensity than the surrounding ground, and, if planar enough, they can be very hard to distinguish from road surfaces such as parking lots. Although it would be reasonable to expect tilled areas to be significantly less planar than the road surface, this is not the case in Trabada. We tried to deal with this issue by reducing the curvature threshold, but this change removes both tilled areas and road surfaces (see Figure 13). In this particular data, both classes yield similar curvature, and this feature cannot be used to distinguish them. Nevertheless, note that this dataset was acquired 15 years ago, and we do not expect this behavior to occur in more recent data.

4. Discussion

4.1. Intensity Threshold

It is worth mentioning some insights about the intensity threshold calculation (see Table 3). On the one hand, it is worth highlighting the importance of removing the intensity outliers before calculating the skewness. The balancing direction depends on the sign of the skewness, thus its calculation is crucial. Take as examples the Carola, Truro, and Victor Harbor sites. The initial skewness is positive but, after removing the outliers, it changes to negative, which also changes the balancing direction. Without this correction it would not be possible to calculate the optimal intensity threshold. On the other hand, the calculated intensity threshold varies greatly across point clouds. By normalizing each threshold to provide comparable values, we can observe that road points are expected to yield an intensity lower than half of the intensity range, after removing the outliers. This can be considered a rule of thumb. However, the optimal intensity threshold can be significantly lower, and its value is unique to each dataset. Therefore, a fully automatic method, such as the one proposed, is needed.

4.2. Road Width Parameter

An important matter is the selection of the minimum road width (MRW) parameter. To illustrate its influence, we measured the quality obtained using different values for the Vaihingen and Truro sites (see Figure 14). The road width is subject to government regulations, with typical values ranging from 2 m to 6 m, but for the sake of analysis, we also used larger values. Using typical values produced similar qualities; only for large values does the quality decrease significantly, especially in Truro, because road widths are typically smaller in less urbanized landscapes. Also, note that this parameter is not a constraint for the road width itself, but rather a reference for three stages of the pipeline filtering, so roads with both smaller and larger widths will also be extracted.

4.3. Point Density

Selecting the target point density is an important task when planning a LiDAR survey. Generally, acquiring denser point clouds leads to more accurate LiDAR-derived products, at the expense of an increased cost (e.g., a higher grade sensor, more flight hours …). Therefore, it is critical to know the point density requirements of the LiDAR applications. For this road extraction method, our experiments show that point density does not affect the result significantly, as long as it is above a lower limit. We believe that 2 p/m2 is the lower limit of point density that achieves satisfactory results. We consider this limit to be reasonable, as most point clouds acquired over recent years surpass this point density, and furthermore, acquisition techniques are under constant development and future trends point towards the acquisition of denser point clouds. Nevertheless, note that the higher the point density, the more accurately the extracted road will match the real-world road. In other words, decreasing the point density increases the uncertainty of the road model.

4.4. Challenging Conditions

The road extraction may struggle in the presence of water and wetland, where the intensity values can be as low as on the road surface. For instance, in the Victor Harbor site, a water area in the south-east part of the site is wrongly labelled as road surface (see Figure 11t). Deep water normally does not reflect the laser beam, but LiDAR points can be recorded near the coast or in the presence of floating sediment. The topic of detecting water is out of the scope of this work, so in the case of coastal areas, a water extraction procedure should be applied beforehand. A more challenging situation is the existence of wetland. In the St Arnaud site, the ground surrounding a narrow river is wrongly labelled as road surface (see Figure 11j). While for this particular site the amount of wetland is relatively small, sites with large wetland areas could be an issue, although it is reasonable to expect roads to exhibit more pronounced linear features than wetland areas. Finally, we have observed that some paved areas attached to the road (e.g., garage entrances) are generally identified as road. These might have to be filtered out depending on the definition of road for the given end-user application.

4.5. Comparison with Other Methods

The completeness (Cp), correctness (Cr), and quality (Q) that recent studies have achieved are summarized in Table 4. Note, however, that it is difficult to compare the methods, as the authors use different datasets. Furthermore, in some studies, LiDAR-derived features such as elevation and intensity are rasterized, so that processing and evaluation are carried out at pixel level instead of point level, which leads to a loss of information. Our method yields the highest completeness and ranks third in terms of correctness and quality, surpassed only by the CRF methods [20,21]. Nonetheless, our method offers clear advantages: first, it does not need training as CRF methods do, and second, it has been tested in a wide range of scenarios, while the CRF methods have been tested only in three small regions of the urban site of Vaihingen. Notice that the lack of test sites is a prevailing issue among studies, and experiments are often carried out on one study site only, typically urban.

5. Conclusions

In this work, a method for the automatic extraction of road points is proposed. The only parameter required from the user is a rough estimation of the minimum road width. Road points are identified by a pipeline filtering based on intensity, curvature, local density, and area constraints. We showed the relevance of the intensity feature, and we presented an improved algorithm for its automatic thresholding. Experiments conducted on a dataset with ten study sites showed that the method is suitable regardless of the landscape characteristics and the acquisition procedure. Road points were extracted even for sites with little presence of roads. Some limitations were also exposed; in particular, low intensity areas such as tilled areas, wetland, and water can introduce false positives. Also, we showed that the selection of the minimum road width parameter is not critical, with little effect on the quality of the results as long as the value is within a typical range. Overall, the tests demonstrate the suitability of our approach as a general method for road point extraction. As the main difficulty when extracting a road model is to first extract the road points, our contribution offers a competitive solution to this challenge.

Author Contributions

Conceptualization, J.M.S., F.F.R., J.C.C.D., D.L.V., and T.F.P.; methodology, J.M.S., F.F.R., J.C.C.D., D.L.V., and T.F.P.; software, J.M.S.; validation, J.M.S.; formal analysis, J.M.S.; investigation, J.M.S.; resources, F.F.R., J.C.C.D., D.L.V., and T.F.P.; data curation, J.M.S.; writing—original draft preparation, J.M.S.; writing—review and editing, F.F.R., J.C.C.D., D.L.V., and T.F.P.; visualization, J.M.S.; supervision, F.F.R., J.C.C.D., D.L.V., and T.F.P.; project administration, F.F.R.; funding acquisition, F.F.R. All authors have read and agreed to the published version of the manuscript.

Funding

This work has received financial support from the Consellería de Cultura, Educación e Ordenación Universitaria (accreditation 2019-2022 ED431G-2019/04 and reference competitive group 2019-2021, ED431C 2018/19) and the European Regional Development Fund (ERDF), which acknowledges the CiTIUS-Research Center in Intelligent Technologies of the University of Santiago de Compostela as a Research Center of the Galician University System. This work was also supported in part by Babcock International Group PLC (Civil UAVs Initiative Fund of Xunta de Galicia) and the Ministry of Education, Culture and Sport, Government of Spain (Grant Number TIN2016-76373-P).

Acknowledgments

The Vaihingen data set was provided by the German Society for Photogrammetry, Remote Sensing, and Geoinformation (DGPF) (Cramer, 2010): http://www.ifp.uni-stuttgart.de/dgpf/DKEP-Allg.html. The authors would like to acknowledge the provision of the Downtown Toronto data set by Optech Inc., First Base Solutions Inc., GeoICT Lab at York University, and ISPRS WG III/4. This work uses data services provided by the OpenTopography Facility with support from the National Science Foundation under NSF Award Numbers 1557484, 1557319, and 1557330. Babcock International and LaboraTe group (USC) provided the Alcoy and Trabada data, respectively.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Wang, W.; Yang, N.; Zhang, Y.; Wang, F.; Cao, T.; Eklund, P. A review of road extraction from remote sensing images. J. Traffic Transp. Eng. (Engl. Ed.) 2016, 3, 271–282. [Google Scholar] [CrossRef] [Green Version]
  2. Quackenbush, L.J.; Im, I.; Zuo, Y. Road extraction: A review of LiDAR-focused studies. In Remote Sensing of Natural Resources; CRC Press: Boca Raton, FL, USA, 2013; pp. 155–169. [Google Scholar]
  3. Song, J.H.; Han, S.H.; Yu, K.; Kim, Y.I. Assessing the possibility of land-cover classification using LiDAR intensity data. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2002, 34, 259–262. [Google Scholar]
  4. Hu, X.; Tao, C.; Hu, Y. Automatic road extraction from dense urban area by integrated processing of high resolution imagery and LIDAR data. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2004, 35, 288–292. [Google Scholar]
  5. Chen, Y.; Su, W.; Li, J.; Sun, Z. Hierarchical object oriented classification using very high resolution imagery and LIDAR data over urban areas. Adv. Space Res. 2009, 43, 1101–1110. [Google Scholar] [CrossRef]
  6. Wang, G.; Zhang, Y.; Li, J.; Song, P. 3D Road information extraction from LiDAR data fused with aerial-images. In Proceedings of the 2011 IEEE International Conference on Spatial Data Mining and Geographical Knowledge Services, Fuzhou, China, 29 June–1 July 2011; pp. 362–366. [Google Scholar] [CrossRef]
  7. Clode, S.; Rottensteiner, F.; Kootsookos, P.; Zelniker, E. Detection and vectorization of roads from LiDAR data. Photogramm. Eng. Remote Sens. 2007, 73, 517–535. [Google Scholar] [CrossRef] [Green Version]
  8. Jiangui, P.; Guang, G. A method for main road extraction from airborne LiDAR data in urban area. In Proceedings of the 2011 International Conference on Electronics, Communications and Control (ICECC), Ningbo, China, 9–11 September 2011; pp. 2425–2428. [Google Scholar] [CrossRef]
  9. Zhao, J.; You, S.; Huang, J. Rapid extraction and updating of road network from airborne LiDAR data. In Proceedings of the 2011 IEEE Applied Imagery Pattern Recognition Workshop (AIPR), Washington, DC, USA, 11–13 October 2011; pp. 1–7. [Google Scholar] [CrossRef]
  10. Wang, Y.; Chen, S.; Zhang, Y.; Chen, H.; Guo, P.; Yang, J. Automatic road extraction for airborne LiDAR data. In International Symposium on Photoelectronic Detection and Imaging 2013: Laser Sensing and Imaging and Applications; International Society for Optics and Photonics: Beijing, China, 2013; Volume 8905, p. 890528. [Google Scholar]
  11. Li, Y.; Yong, B.; Wu, H.; An, R.; Xu, H. Road detection from airborne LiDAR point clouds adaptive for variability of intensity data. Optik 2015, 126, 4292–4298. [Google Scholar] [CrossRef]
  12. Hui, Z.; Hu, Y.; Jin, S.; Yevenyo, Y.Z. Road centerline extraction from airborne LiDAR point cloud based on hierarchical fusion and optimization. ISPRS J. Photogramm. Remote Sens. 2016, 118, 22–36. [Google Scholar] [CrossRef]
  13. Bartels, M.; Wei, H. Threshold-free object and ground point separation in LIDAR data. Pattern Recognit. Lett. 2010, 31, 1089–1099. [Google Scholar] [CrossRef]
  14. Hu, X.; Li, Y.; Shan, J.; Zhang, J.; Zhang, Y. Road centerline extraction in complex urban scenes from LiDAR data based on multiple features. IEEE Trans. Geosci. Remote Sens. 2014, 52, 7448–7456. [Google Scholar]
  15. Li, Y.; Hu, X.; Guan, H.; Liu, P. An efficient method for automatic road extraction based on multiple feature from LiDAR data. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2016, 41, 289–293. [Google Scholar] [CrossRef]
  16. Lodha, S.K.; Fitzpatrick, D.M.; Helmbold, D.P. Aerial Lidar Data Classification using AdaBoost. In Proceedings of the Sixth International Conference on 3-D Digital Imaging and Modeling (3DIM 2007), Montreal, QC, Canada, 21–23 August 2007; pp. 435–442. [Google Scholar] [CrossRef] [Green Version]
  17. Samadzadegan, F.; Hahn, M.; Bigdeli, B. Automatic road extraction from LIDAR data based on classifier fusion. In Proceedings of the 2009 Joint Urban Remote Sensing Event, Shanghai, China, 20–22 May 2009; pp. 1–6. [Google Scholar] [CrossRef]
  18. Azizi, Z.; Najafi, A.; Sadeghian, S. Forest Road Detection Using LiDAR Data. J. For. Res. 2014, 25, 975–980. [Google Scholar] [CrossRef]
  19. Matkan, A.A.; Hajeb, M.; Sadeghian, S. Road Extraction from Lidar Data Using Support Vector Machine Classification. Photogramm. Eng. Remote Sens. 2014, 80, 409–422. [Google Scholar] [CrossRef] [Green Version]
  20. Niemeyer, J.; Rottensteiner, F.; Soergel, U. Contextual classification of lidar data and building object detection in urban areas. ISPRS J. Photogramm. Remote Sens. 2014, 87, 152–165. [Google Scholar] [CrossRef]
  21. Niemeyer, J.; Rottensteiner, F.; Soergel, U.; Heipke, C. Contextual classification of point clouds using a two-stage CRF. ISPRS Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2015, XL-3/W2, 141–148. [Google Scholar] [CrossRef] [Green Version]
  22. Ferraz, A.; Mallet, C.; Chehata, N. Large-scale road detection in forested mountainous areas using airborne topographic lidar data. ISPRS J. Photogramm. Remote Sens. 2016, 112, 23–36. [Google Scholar] [CrossRef]
  23. Karila, K.; Matikainen, L.; Puttonen, E.; Hyyppä, J. Feasibility of multispectral airborne laser scanning data for road mapping. IEEE Geosci. Remote Sens. Lett. 2017, 14, 294–298. [Google Scholar] [CrossRef]
  24. Upadhayay, S.; Yadav, M.; Singh, D.P. Road network mapping using airborne LiDAR data. ISPRS Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2018, XLII-5, 707–711. [Google Scholar] [CrossRef] [Green Version]
  25. Martínez, J.; Rivera, F.F.; Cabaleiro, J.C.; Vilariño, D.L.; Pena, T.F.; Miranda, D. A rule-based classification from a region-growing segmentation of airborne LiDAR. In Image and Signal Processing for Remote Sensing XXII; International Society for Optics and Photonics: Edinburgh, UK, 2016; Volume 10004, p. 100040F. [Google Scholar]
  26. Duda, R.O.; Hart, P.E.; Stork, D.G. Pattern Classification; Wiley: New York, NY, USA, 2001. [Google Scholar]
  27. The American Society for Photogrammetry Remote Sensing (ASPRS). LAS Specification Version 1.4; ASPRS Board Meeting; ASPRS: Bethesda, MD, USA, 2011. [Google Scholar]
  28. Pauly, M.; Gross, M.; Kobbelt, L.P. Efficient simplification of point-sampled surfaces. In Proceedings of the IEEE Visualization (VIS 2002), Boston, MA, USA, 27 October–1 November 2002; pp. 163–170. [Google Scholar] [CrossRef] [Green Version]
  29. Weinmann, M.; Jutzi, B.; Mallet, C. Feature relevance assessment for the semantic interpretation of 3D point cloud data. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2013, 5, W2. [Google Scholar] [CrossRef] [Green Version]
  30. Rutzinger, M.; Maukisch, M.; Petrini-Monteferri, F.; Stötter, J. Development of algorithms for the extraction of linear patterns from airborne laser scanning data. In Geomorphology for the Future—Conference Proceedings; Kellerer-Pirklbauer, A., Keiler, M., Embleton-Hamann, C., Stötter, J., Eds.; Innsbruck University Press: Innsbruck, Austria, 2007; pp. 161–168. [Google Scholar]
  31. Sampath, A.; Shan, J. Segmentation and reconstruction of polyhedral building roofs from aerial lidar point clouds. IEEE Trans. Geosci. Remote Sens. 2010, 48, 1554–1567. [Google Scholar] [CrossRef]
  32. Babcock International. Trusted to Deliver. 2019. Available online: https://www.babcockinternational.com/ (accessed on 19 March 2019).
  33. LaboraTe. Laboratorio do Territorio. 2019. Available online: http://laborate.usc.es/ (accessed on 11 April 2019).
  34. Cramer, M. The DGPF-test on digital airborne camera evaluation–overview and test design. Photogramm. Fernerkund. Geoinf. 2010, 2010, 73–82. [Google Scholar] [CrossRef]
  35. OpenTopography. OpenTopography High-Resolution Topography Data and Tools. 2019. Available online: https://opentopography.org/ (accessed on 19 March 2019).
  36. Gobierno de La Rioja. IDE Rioja. 2019. Available online: https://iderioja.github.io/clasificacion_lidar/ (accessed on 25 April 2019).
  37. Guo, Q. Luquillo CZO Rio Blanco and Rio Mameyes Airborne Lidar; National Center for Airborne Laser Mapping (NCALM): Houston, TX, USA, 2010; Distributed by OpenTopography. [Google Scholar] [CrossRef]
  38. Tasman District Council. Golden Bay, Tasman, New Zealand Airborne Lidar; AAM New Zealand Limited: Auckland, New Zealand, 2017; Distributed by OpenTopography. [Google Scholar] [CrossRef]
  39. Rogers, J. Truro and Provincetown, MA: LiDAR in Salt March Environments Airborne Lidar; National Center for Airborne Laser Mapping (NCALM): Houston, TX, USA, 2010; Distributed by OpenTopography. [Google Scholar] [CrossRef]
  40. UDEM. Victor Harbour South Australia Airborne Lidar. The Coastal Urban Digital Elevation Modelling in High Priority Regions (UDEM); Cooperative Research Centre for Spatial Information (CRCSI): Auckland, New Zealand, 2011; Distributed by OpenTopography. [Google Scholar] [CrossRef]
  41. Congalton, R.G. A review of assessing the accuracy of classifications of remotely sensed data. Remote Sens. Environ. 1991, 37, 35–46. [Google Scholar] [CrossRef]
  42. Heipke, C.; Mayer, H.; Wiedemann, C.; Jamet, O. Evaluation of automatic road extraction. Int. Arch. Photogramm. Remote Sens. 1997, 32, 151–160. [Google Scholar]
  43. Mayer, H.; Hinz, S.; Bacher, U.; Baltsavias, E. A test of automatic road extraction approaches. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2006, 36, 209–214. [Google Scholar]
Figure 1. Flowchart of the pipeline filtering.
Figure 2. Flowchart of the bidirectional skewness balancing.
Figure 3. Normalized histogram (HST) of Truro (see Section 3.1) (a) before and (b) after outlier removal.
Figure 4. Effects of the tail filtering for the Carola data. The histogram is shown (a) after removing outliers, (b) after removing values lower than the threshold obtained by the forward skewness balancing without tail removal, (c) after removing the right tail, and (d) after removing values lower than the threshold obtained by the forward skewness balancing with tail removal.
Figure 5. Results of the intensity filtering for Carola data (a) before and (b) after the removal of the right tail. Points are colored by a cold-to-hot gradient of their intensity values. Some road segments enclosed in the highlighted white areas are recovered.
Figure 6. Comparison of the intensity values distribution between (a) the urban site of Vaihingen and (b) the rural site of Alcoy. The dominant point class, road or non-road, determines if the distribution is left-concentrated or right-concentrated, respectively.
Figure 7. Flowchart of the forward skewness balancing.
Figure 8. Visualization of a circular neighborhood with radius r for different points in the road: (a) a point in the centerline, (b) a point in the road edge and (c) a point in a sharp bend of 90 degrees.
Figure 9. Skewness vs. level of urbanization. The red line is a first-degree polynomial fit. Please note that the fit is far from perfect; it is shown only to help visualize the trend.
Figure 10. Triangulated Irregular Network (TIN) of a spot in the (a) Vaihingen and (b) Carola sites. In Carola, the high curvature of the points in flat surfaces such as roofs and ground is clearly visible.
Figure 11. Aerial images of the tested data (left) and extracted road points (right).
Figure 12. Intensity correction issues in Arzúa. The change in intensity is clearly visible at the overlap of the flight strips (highlighted with red arrows). The road segment inside this high-intensity strip cannot be identified by our method because of its high intensity compared with other road segments.
Figure 13. Planar (black) and non-planar (red) points using different thresholds in the curvature filter: (a) 0.005, (b) 0.004, (c) 0.003, (d) 0.002. The scene corresponds to an area of east Trabada with tilled areas with nearby road segments.
Figure 14. Influence of MRW parameter in the quality of the results for Vaihingen and Truro sites.
Table 1. Point clouds included in the dataset.
Site | Urbanization | Date | Sensor | Points (M) | Density (p/m2) | Source
Alcoy | Medium | NA/2013 | Leica ALS60 | 19.14 | 8.65 | Babcock Int.
Arzúa | Low | 03/2018 | Riegl VQ-480i | 40.70 | 20.35 | Babcock Int.
Carola | Medium | 07/2010 | Optech Gemini | 44.87 | 10.28 | OpenTopo. [37]
Logroño | Medium | 09/2016 | Leica ALS80 | 63.80 | 2.00 | PNOA
St Arnaud | Low | 12/2017 | Riegl LMS-Q1560 | 28.83 | 9.54 | OpenTopo. [38]
Toronto | High | 02/2009 | Optech Orion M | 13.86 | 6.00 | ISPRS
Trabada | Low | 11/2004 | Optech 2033 | 29.39 | 4.00 | LaboraTe
Truro | Medium | 07/2010 | Optech Gemini | 25.93 | 4.35 | OpenTopo. [39]
Vaihingen | High | 08/2008 | Leica ALS50 | 18.56 | 4.00 | ISPRS
Victor Harbor | Medium | 09/2011 | Optech Gemini | 43.48 | 2.81 | OpenTopo. [40]
Table 2. Results of the quantitative evaluation. Average values are highlighted.
Site | Completeness | Correctness | Quality
Alcoy | 0.97 | 0.77 | 0.75
Arzúa | 0.81 | 0.82 | 0.69
Logroño | 0.99 | 0.80 | 0.79
St Arnaud | 0.99 | 0.88 | 0.87
Toronto | 0.80 | 0.93 | 0.76
Trabada | 0.97 | 0.69 | 0.68
Truro | 0.97 | 0.97 | 0.94
Vaihingen | 0.95 | 0.80 | 0.77
Victor Harbor | 0.96 | 0.73 | 0.71
Min. | 0.80 | 0.69 | 0.68
Avg. | 0.93 | 0.83 | 0.78
Max. | 0.99 | 0.97 | 0.94
Table 3. Skewness balancing insights and results of the road extraction. SkInit: initial skewness; SkIQR: skewness after outlier removal; SkPCT: skewness after tail removal; BDir: balancing direction; IT: obtained intensity threshold; IT [0, 255]: IT normalized into the [0, 255] range.
Site | SkInit | SkIQR | SkPCT | BDir | IT | IT [0, 255] | Road % | Exec. Time (s)
Alcoy | −0.705 | −0.749 | −0.947 | Forward | 103 | 108 | 18.24 | 117.1
Arzúa | −0.937 | −1.174 | −1.570 | Forward | 790 | 135 | 4.15 | 464.4
Carola | 81.564 | −0.379 | −0.593 | Forward | 25 | 82 | 15.47 | 186.1
Logroño | −0.867 | −0.997 | −1.224 | Forward | 17,760 | 119 | 16.80 | 844.8
St Arnaud | −1.542 | −1.564 | −1.682 | Forward | 26,609 | 109 | 2.89 | 101.4
Toronto | 3.327 | 0.971 | 1.013 | Backward | 9 | 38 | 41.00 | 88.8
Trabada | −0.647 | −0.647 | −0.873 | Forward | 103 | 103 | 19.35 | 186.6
Truro | 1.470 | −0.135 | −0.530 | Forward | 154 | 72 | 17.32 | 85.2
Vaihingen | 0.434 | 0.398 | 0.183 | Backward | 71 | 71 | 71.10 | 238.8
Victor Harbor | 10.327 | −0.470 | −0.627 | Forward | 12 | 91 | 14.62 | 712.2
Table 4. Comparison of quality metrics with existing methods. Highest values are highlighted.
Author | Cp (%) | Cr (%) | Q (%) | Study Sites
Clode et al. (2007) [7] | 83.50 | 73.50 | 63.50 | 2 (Fairfield and Yeronga)
Samadzadegan et al. (2009) [17] | 53.94 | 56.64 | 53.10 | 1 (Castrop-Rauxel)
Jiangui and Guang (2011) [8] | 60.35 | 66.81 | NA | 1 (Shashi)
Azizi et al. (2014) [18] | 75.07 | 63.02 | 52.11 | 1 (Golestan)
Matkan et al. (2014) [19] | 85.34 | 71.54 | 63.56 | 1 (Rheine, 3 areas)
Niemeyer et al. (2014) [20] | 87.08 | 93.04 | 81.75 | 1 (Vaihingen, 3 areas)
Niemeyer et al. (2015) [21] | 90.40 | 87.30 | 79.90 | 1 (Vaihingen, 3 areas)
Li et al. (2015) [11] | 92.94 | 75.50 | 71.41 | 1 (Vaihingen)
Proposed method | 93.00 | 83.00 | 78.00 | 10 (see Table 1)
