Article

A Staged Real-Time Ground Segmentation Algorithm of 3D LiDAR Point Cloud

School of Electronic and Information Engineering, Soochow University, Suzhou 215006, China
* Author to whom correspondence should be addressed.
Electronics 2024, 13(5), 841; https://doi.org/10.3390/electronics13050841
Submission received: 15 January 2024 / Revised: 13 February 2024 / Accepted: 20 February 2024 / Published: 22 February 2024

Abstract

Ground segmentation is a crucial task in the field of 3D LiDAR perception for autonomous driving. It is commonly used as a preprocessing step for tasks such as object detection and road extraction. However, the existing ground segmentation algorithms often struggle to meet the requirements of robustness and real-time performance due to significant variations in ground slopes and flatness across different scenes, as well as the influence of objects such as grass, flowerbeds, and trees in the environment. To address these challenges, this paper proposes a staged real-time ground segmentation algorithm. The proposed algorithm not only achieves high real-time performance but also exhibits improved robustness. Based on a concentric zone model, the algorithm filters out reflected noise points and vertical non-ground points in the first stage, improving the validity of the fitted ground plane. In the second stage, the algorithm effectively addresses the issue of undersegmentation of ground points through three steps: ground plane fitting, ground plane validity judgment, and ground plane repair. The experimental results on the SemanticKITTI dataset demonstrate that the proposed algorithm outperforms the existing methods in terms of segmentation results.

1. Introduction

In recent years, 3D LiDAR has increasingly been employed in autonomous vehicles and mobile robots [1]. Its operating principle involves emitting laser pulses toward targets and capturing the reflected signals. By calculating the round-trip time of the laser pulses, the distance to the targets is determined, resulting in the generation of a 3D point cloud. Compared to 3D stereo cameras, 3D LiDAR offers advantages such as high measurement accuracy, long detection range, minimal sensitivity to lighting conditions, and resistance to electromagnetic interference. Consequently, it is commonly used in tasks such as localization and mapping [2,3], object detection [4,5], pedestrian detection and tracking [6,7], and scene segmentation [8].
As a preprocessing step, ground segmentation algorithms play a crucial role in filtering out irrelevant information for subsequent perception tasks [9]. By dividing the 3D point cloud into ground and non-ground points, such an algorithm can effectively reduce data volume and computational requirements. Specifically, the points classified as ground can be utilized for road edge extraction [10,11], navigation [12,13], and automatic parking [14], while the non-ground points can be used for object and vehicle detection [15,16,17], obstacle avoidance [18,19], and target tracking [20,21]. Therefore, ground segmentation algorithms hold significant research value in the field of autonomous driving.
Currently, significant progress has been made in algorithms related to ground segmentation. Huang et al. [22] introduced a fast ground point segmentation algorithm by incorporating Markov random fields. Bogoslavskyi et al. [23] proposed a method to convert point clouds from mechanically rotating multi-line LiDAR sensors into range images, which can then be used for ground segmentation. Himmelsbach et al. [24] were the first to apply the concept of ground fitting and introduced a ground segmentation algorithm based on ground line fitting. Hu et al. [25] and Lim et al. [26] conducted further research based on ground fitting and proposed ground segmentation algorithms by fitting ground plane equations. Furthermore, with the rapid development of deep learning in recent years, Qi et al. [27] and Hua et al. [28] proposed PointNet++ and Pointwise CNN networks based on convolutional neural networks, respectively, achieving high-precision point cloud segmentation.
Due to the common usage of ground segmentation algorithms as preprocessing steps, they need to balance accuracy and speed. Traditional methods such as range images [23], higher-order inference [22], and ground plane fitting [24,25,26] have lower computational requirements but exhibit lower segmentation accuracy. On the other hand, learning-based methods [27,28] can achieve high-precision segmentation but often require more computational resources and struggle with real-time tasks. Additionally, manually annotating point cloud datasets incurs high labor costs.
In response to the above challenges, this paper proposes a staged ground point segmentation algorithm based on ground plane fitting, designed specifically for real-time tasks. The algorithm builds on Patchwork++ [29] and remedies its coarse segmentation of the point cloud at fine details. It has a clearer structural hierarchy than the existing methods, and additional modules are designed for more detailed segmentation of the point cloud. The contributions of this paper are threefold:
  • First, in the first stage of interference point removal, two modules are designed: region-wise reflected noise removal and region-wise vertical interference removal. These modules effectively address the issue of non-ground points interfering with ground plane fitting.
  • Second, in the second stage of ground plane fitting and ground point segmentation, this paper introduces novel criteria for assessing the validity of fitted planes and the region-wise invalid ground plane repair module. These address the issue of undersegmentation of ground points caused by failed ground plane fitting.
  • Finally, the proposed algorithm is evaluated on the SemanticKITTI dataset [30]. The experimental results demonstrate that, compared to the existing methods, the proposed algorithm achieves the best segmentation results.

2. Related Work

2.1. 2.5D Grid-Based Method

2.5D grid-based methods are commonly used to represent original point cloud data in two-dimensional space, reducing the data volume. Here, 2.5D refers to grid partitioning in planar space, where each grid cell carries its own height: a simplified, discretized representation of part of the three-dimensional environment. Among these methods, elevation maps are widely employed. Douillard et al. [31] constructed an elevation map by calculating the average height of all the points within each grid cell and subsequently performed ground point segmentation using clustering methods. While this algorithm exhibits certain advantages in segmentation accuracy, it requires significant computational resources. Sun et al. [10] proposed a fast ground point segmentation algorithm based on elevation differences. This algorithm constructs a uniform polar grid model and segments ground points by computing the elevation differences between each point and its neighboring points within the grid. Anand et al. [32] divided the point cloud into 0.5 m × 0.5 m cells and performed ground point extraction and filtering by comparing the z-values of the points within each cell.

2.2. Ground Modeling

Ground modeling utilizes plane fitting techniques to estimate ground planes from point cloud data and performs ground point segmentation based on the fitted ground plane model. Himmelsbach et al. [24] utilized a uniform polar grid model to fit linear equations representing the ground plane within each sector ring region and segmented the ground points based on these equations. This algorithm has low computational complexity but is not suitable for rugged terrain. Hu et al. [25] divided a point cloud into multiple rectangular cells along the direction of motion of a vehicle and performed ground point segmentation by fitting a plane equation for each cell. Zermas et al. [33] employed a uniform polar grid model to fit the ground plane equation, which enhanced the algorithm’s robustness. To address the computational complexity issue in plane-fitting-based methods, Lim et al. [26] and Lee et al. [29] proposed the Patchwork and Patchwork++ algorithms, respectively, based on the concentric zone model. These algorithms effectively improve segmentation accuracy and efficiency.

2.3. Adjacent Points and Local Features

These methods segment ground points by exploiting the relationships between neighboring points and local features of the 3D point cloud. Among them, the range image method is commonly used. Bogoslavskyi et al. [23,34] transformed point clouds generated by mechanically rotating multi-line LiDAR into range images and then exploited the feature relationships between adjacent pixels for point cloud segmentation. Guo et al. [35] combined the concentric zone model with the range image method and proposed a coarse-to-fine ground point segmentation method. Region-growing-based point cloud segmentation algorithms appeared earlier: Moosmann et al. [36] randomly selected initial seed points and performed region growing using local convexity as a similarity criterion, thereby achieving point cloud segmentation.

2.4. Higher-Order Inference

In ground point segmentation tasks, the sparsity issue of LiDAR point clouds often leads to misclassification of points. Therefore, higher-order inference algorithms used in visual segmentation tasks have also been applied in point cloud segmentation. Rummelhard et al. [37] proposed a 3D point cloud ground-labeling adaptive method based on local ground elevation estimation. This method models the ground as a spatiotemporal conditional random field to segment the point cloud. Huang et al. [22] introduced a fast ground point segmentation method based on a coarse-to-fine Markov random field. This method first performs coarse segmentation using an improved elevation map and then achieves fine segmentation by solving a Markov random field model.

2.5. Learning-Based Method

Deep learning methods were initially applied to image segmentation tasks, where they achieved excellent results, but they were adopted for point cloud segmentation relatively late. Qi et al. [8] proposed an end-to-end network model called PointNet, which was the first to use raw point cloud data for segmentation. Building upon PointNet, Qi et al. [27] further proposed PointNet++, which addressed PointNet's inability to extract local features around each point. Varney et al. [38] employed a dense pyramid structure instead of the traditional "U"-shaped architecture to extract more features and demonstrated experimentally that the network exhibits superior performance.

3. Materials and Methods

Based on the concentric zone model (CZM), this algorithm designs several key modules to achieve ground extraction from a LiDAR point cloud. The CZM divides a point cloud into multiple sector ring regions at regular radial and azimuthal intervals, and the density of sector ring regions in each zone can be set through parameters. The key modules include region-wise reflected noise removal (R-RNR), region-wise vertical interference removal (R-VIR), region-wise ground plane fitting (R-GPF), region-wise ground plane validity judgment (R-GPVJ), region-wise invalid ground plane repair (R-IGPR), and region-wise ground point segmentation (R-GPS). Based on their functionality, the modules can be divided into two stages: an interference point removal stage and a ground plane fitting stage. The overall framework of the algorithm is illustrated in Figure 1.

3.1. Problem Definition

Given a LiDAR point cloud P, where each point p_i contains its three-dimensional coordinates (x, y, z) and its reflection intensity I, the problem of ground point extraction aims to classify each point p_i in P as either a ground point or a non-ground point. Ground points typically represent road surfaces, sidewalks, parking lots, and other ground elements, while non-ground points encompass buildings, vehicles, vegetation, pedestrians, and other non-ground elements. If we define the set of ground points as G and the set of non-ground points as N, the set P can be represented as follows:
$$P = \bigcup_{i} p_i = G \cup N \tag{1}$$

3.2. CZM: Concentric Zone Model

Because the ground surface is typically not a perfectly flat plane, it is necessary to divide the point cloud set P into multiple small regions and assume that the ground in each region is flat. Based on this idea, the uniform polar grid model [39,40] was proposed, as shown in Figure 2a. The uniform polar grid model divides the set P into multiple rings based on the radial distance from the center, and each ring is further divided into smaller sector ring regions based on the azimuth angle. However, the model has some obvious problems. When a sector ring region is located too close to the center, its area becomes small, so it contains few points. In addition, the point cloud becomes sparser as the distance from the center increases, which means that sector ring regions far from the center also contain few points. Both scenarios can lead to poor ground plane fitting within the sector ring regions.
To address these issues, we employ the concentric zone model (CZM), which is the same as that used in Patchwork [26] and Patchwork++ [29]. As depicted in Figure 2b, CZM can be parameterized to adjust the density of sector ring regions in each zone. By setting the number of sector ring regions to be divided in each zone, the concentric zone model can ensure that the rings closer to and farther from the center have fewer sector ring regions.
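To make the partitioning concrete, the following is a minimal NumPy sketch of CZM indexing. The per-zone ring and sector counts follow Section 4.3, while the radial zone boundaries are illustrative assumptions, since the paper does not list them.

```python
import numpy as np

ZONE_EDGES = np.array([2.7, 12.0, 30.0, 50.0, 80.0])  # assumed radial boundaries [m]
RINGS_PER_ZONE = np.array([2, 4, 4, 4])               # rings per zone (Section 4.3)
SECTORS_PER_ZONE = np.array([16, 32, 45, 16])         # sectors per ring (Section 4.3)

def czm_index(points: np.ndarray):
    """Map each point's (x, y) to a (zone, ring, sector) triple of the CZM."""
    r = np.hypot(points[:, 0], points[:, 1])
    theta = np.mod(np.arctan2(points[:, 1], points[:, 0]), 2.0 * np.pi)
    # Zone: which radial band the point falls into (clipped at the extremes).
    zone = np.clip(np.searchsorted(ZONE_EDGES, r, side="right") - 1, 0, 3)
    lo, hi = ZONE_EDGES[zone], ZONE_EDGES[zone + 1]
    # Ring: uniform radial subdivision within the zone.
    n_rings = RINGS_PER_ZONE[zone]
    ring = np.clip(((r - lo) / (hi - lo) * n_rings).astype(int), 0, n_rings - 1)
    # Sector: uniform azimuthal subdivision; the count varies per zone, so the
    # nearest and farthest zones get coarser azimuthal resolution.
    n_sectors = SECTORS_PER_ZONE[zone]
    sector = np.clip((theta / (2.0 * np.pi) * n_sectors).astype(int), 0, n_sectors - 1)
    return zone, ring, sector
```

Grouping points by these (zone, ring, sector) triples yields the sector ring regions that every subsequent module processes independently.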

3.3. Stage One: Interference Point Removal

Interference point removal in Stage One is employed to remove reflected noise points and vertical non-ground interference points, providing more effective and reliable point cloud data for subsequent ground plane fitting.

3.3.1. R-RNR: Region-Wise Reflected Noise Removal

In the set of LiDAR point cloud P, noise points tend to occur below the ground plane. These noise points can significantly affect the reliability of the fitted ground plane.
According to the research conducted by Zhao et al. [41], as shown in Figure 3, when a laser beam illuminates surfaces made of materials such as glass or metal, specular reflection often occurs. If the specularly reflected beam then hits an obstacle, it is reflected back to the laser receiver along the reverse path. Since the sensor is unaware of this reflection, the acquired points are located incorrectly; these points are referred to as reflected noise points. As the incident angle increases, the reflection intensity of these noise points decreases. Based on this principle, Patchwork++ [29] introduces the reflected noise removal (RNR) module, which traverses all points in the point cloud below a specified height and identifies those with reflection intensity below a threshold as reflected noise points.
However, as shown in Figure 4, due to variations in reflectivity among different materials or non-uniform reflectivity across a material's surface, the reflection intensity of reflected noise points commonly exceeds the set threshold, resulting in incomplete removal of reflected noise points. To address this issue, the proposed algorithm introduces the region-wise reflected noise removal (R-RNR) module, which performs reflected noise removal separately for each sector ring region. The method first extracts a point set P_{z,R-RNR} from the sector ring region with z-values below the threshold Z_{th,R-RNR} and counts the number of points N_z in this set. Then, from P_{z,R-RNR}, a point set P_{I,R-RNR} is extracted whose reflection intensities fall below the threshold I_{th,R-RNR}, and the number of points N_I in this set is counted (here Z_{th,R-RNR} ≤ 0 m and 0 < I_{th,R-RNR} ≤ 1). The reflected noise point set P_{RN} is determined as follows:
$$P_{RN} = \begin{cases} P_{z,\text{R-RNR}}, & N_z \ge N_{z,thr} \ \text{and} \ N_I \ge N_{I,thr} \\ P_{I,\text{R-RNR}}, & \text{otherwise} \end{cases} \tag{2}$$
where N_{z,thr} is the threshold for the number of points with z-values less than Z_{th,R-RNR}, and N_{I,thr} is the threshold for the number of points with reflection intensity lower than I_{th,R-RNR}.
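A minimal NumPy sketch of R-RNR for one sector ring region follows; it assumes an (N, 4) array of (x, y, z, intensity) and uses the Section 4.3 values as defaults. The direction of the two count comparisons reflects our reading of Formula (2): a region with many below-threshold points, at least one of which is intensity-confirmed noise, has its whole low set removed.

```python
import numpy as np

def r_rnr(pts: np.ndarray, z_th: float = -0.3, i_th: float = 0.2,
          n_z_thr: int = 40, n_i_thr: int = 1) -> np.ndarray:
    """Return a boolean mask over `pts` marking reflected noise points.

    Sketch of Formula (2): if the region contains many points below the
    height threshold (N_z >= n_z_thr) and at least n_i_thr of them also
    have low intensity, the whole low set is treated as noise; otherwise
    only the low-intensity points are removed, as in the RNR of
    Patchwork++ [29].
    """
    below = pts[:, 2] < z_th            # P_{z,R-RNR}: points below the height threshold
    low_i = below & (pts[:, 3] < i_th)  # P_{I,R-RNR}: ...with low reflection intensity
    if below.sum() >= n_z_thr and low_i.sum() >= n_i_thr:
        return below
    return low_i
```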

3.3.2. R-VIR: Region-Wise Vertical Interference Removal

As shown in Figure 5, when large vertical elements such as walls appear in the scene, the number of vertical points in some sector ring regions can be significantly greater than the number of ground points. These abundant vertical points can heavily interfere with ground plane fitting, resulting in a significant deviation between the fitted ground plane and the actual ground, thereby affecting the final ground point segmentation.
To address the above issue, this algorithm introduces the region-wise vertical interference removal (R-VIR) module to remove large-scale vertical interference points from each sector ring region point set. This module first extracts a point set P_{z,R-VIR} from the sector ring region with z-values greater than the threshold Z_{th,R-VIR}. To prevent ground points from being incorrectly identified as vertical interference points, Z_{th,R-VIR} is set to be greater than 0 m. Then, from P_{z,R-VIR}, the N_{th,R-VIR} points with the smallest z-values are selected as the initial seed points. The principal component analysis (PCA) method [42] is used to fit a plane, yielding the unit normal vector (A, B, C) and the plane equation Ax + By + Cz + D = 0. If the angle between the unit normal vector and the Z-axis exceeds the threshold θ_{th,R-VIR}, the fitted plane is classified as a vertical plane. The distance from each point in P_{z,R-VIR} to the fitted plane is then calculated; if the distance is less than the threshold d_{th,R-VIR}, the point is considered a vertical interference point and is removed from P_{z,R-VIR}. The above steps are repeated until the angle between the unit normal vector of the fitted plane and the Z-axis is less than or equal to θ_{th,R-VIR} (i.e., the fitted plane is no longer vertical), or the number of points in P_{z,R-VIR} falls below N_{th,R-VIR}. Finally, all the vertical interference points are filtered out of the sector ring region point set.
In addition, this algorithm uses the orthogonal distance formula to calculate the distance d between a point p(x, y, z) and the fitted plane Ax + By + Cz + D = 0:

$$d = \frac{|Ax + By + Cz + D|}{\sqrt{A^2 + B^2 + C^2}} \tag{3}$$

In the plane equation Ax + By + Cz + D = 0 fitted using the PCA method [42], (A, B, C) is a unit normal vector, so A^2 + B^2 + C^2 = 1 and the distance calculation simplifies to the following formula:

$$d = |Ax + By + Cz + D| \tag{4}$$
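The following sketch shows one way to implement the PCA plane fit and the R-VIR loop in NumPy. The default thresholds follow Section 4.3, and the iteration logic is our reading of the procedure above rather than the authors' code.

```python
import numpy as np

def fit_plane_pca(pts: np.ndarray):
    """PCA plane fit: the unit normal is the eigenvector of the covariance
    matrix with the smallest eigenvalue; D follows from the centroid."""
    centroid = pts.mean(axis=0)
    eigvals, eigvecs = np.linalg.eigh(np.cov((pts - centroid).T))
    normal = eigvecs[:, 0]              # unit normal (A, B, C)
    return normal, -normal @ centroid   # plane: normal . p + D = 0

def r_vir(pts: np.ndarray, z_th: float = 0.2, n_seed: int = 20,
          d_th: float = 0.3, theta_th_deg: float = 45.0) -> np.ndarray:
    """Sketch of R-VIR: repeatedly fit a plane to the lowest points above
    z_th and strip the points near it while the plane is still vertical."""
    keep = np.ones(len(pts), dtype=bool)
    while True:
        high = np.where(keep & (pts[:, 2] > z_th))[0]    # P_{z,R-VIR}
        if len(high) < n_seed:
            break
        seeds = high[np.argsort(pts[high, 2])[:n_seed]]  # lowest-z seed points
        normal, d = fit_plane_pca(pts[seeds, :3])
        theta_z = np.degrees(np.arccos(abs(normal[2])))  # angle of normal vs. Z-axis
        if theta_z <= theta_th_deg:
            break                                        # plane no longer vertical
        dist = np.abs(pts[high, :3] @ normal + d)        # Formula (4): unit normal
        keep[high[dist < d_th]] = False                  # remove interference points
    return pts[keep]
```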

3.4. Stage Two: Ground Plane Fitting and Ground Point Segmentation

After obtaining reliable point cloud data through Stage One, Stage Two is employed for ground plane fitting and ground point segmentation. This stage consists of four main modules: R-GPF, R-GPVJ, R-IGPR, and R-GPS. The first three modules aim to obtain the ground plane fitting equation for each sector ring region, while the last module is responsible for partitioning each point based on the fitted ground plane.

3.4.1. R-GPF: Region-Wise Ground Plane Fitting

Based on the idea of the concentric zone model, the ground within each sector ring region can be assumed to form a single plane. Therefore, an ideal ground plane equation can be fitted from the point cloud within each sector ring region.
The R-GPF module used in this algorithm follows the same approach as Patchwork++ [29]. First, the N_{th,R-GPF} points with the lowest z-values are selected from the sector ring region point set after interference points have been removed, and their average z-value z_avg is calculated. Then, all points within the sector ring region point set with z-values less than z_avg + z_{th-seed} are selected as initial seed points, where z_{th-seed} is the height threshold for the initial seed points. Selecting the initial seed points around the calculated average z-value effectively keeps outlier points out of the initial seed set. The initial seed points serve as the points to be fitted, and the PCA method [42] is employed to fit a plane, yielding the unit normal vector (A, B, C) and the plane equation Ax + By + Cz + D = 0. The distance d between each point in the sector ring region point set and the fitted plane is calculated and compared with the threshold d_{th,R-GPF}. If d ≤ d_{th,R-GPF}, the point is considered close to the fitted plane and becomes a new seed point; if d > d_{th,R-GPF}, the point receives no further processing. Based on the new seed points, the above operations are repeated N_{R-GPF} times to iteratively obtain the final fitted ground plane equation. Through these iterations, the fitted plane gradually converges to the actual ground plane. Here, N_{R-GPF} should be small to avoid a large computational burden.
In this module, Formula (4) is used to compute the distance between the point and the plane.
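A sketch of R-GPF under the same assumptions follows, reusing fit_plane_pca from the R-VIR sketch above; the defaults follow Section 4.3. It also returns the final seed points, since the R-GPVJ criteria below need their statistics.

```python
import numpy as np  # fit_plane_pca as defined in the R-VIR sketch above

def r_gpf(pts: np.ndarray, n_lowest: int = 20, z_th_seed: float = 0.2,
          d_th: float = 0.1, n_iter: int = 3):
    """Sketch of R-GPF: iterative region-wise ground plane fitting."""
    z_avg = np.sort(pts[:, 2])[:n_lowest].mean()      # mean of the lowest points
    seeds = pts[pts[:, 2] < z_avg + z_th_seed, :3]    # initial seed points
    normal, d = fit_plane_pca(seeds)
    for _ in range(n_iter):
        dist = np.abs(pts[:, :3] @ normal + d)        # Formula (4): unit normal
        seeds = pts[dist <= d_th, :3]                 # points close to the plane
        if len(seeds) < 3:                            # too few points to refit
            break
        normal, d = fit_plane_pca(seeds)              # refit on the new seeds
    return normal, d, seeds
```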

3.4.2. R-GPVJ: Region-Wise Ground Plane Validity Judgment

Due to issues such as insufficient fitted points, inclusion of outliers, and the absence of the fitted plane within some sector ring regions, the fitted planes within certain sector ring regions may significantly deviate from the actual ground surface, making them unreliable for subsequent ground point segmentation. Therefore, it is necessary to analyze the validity of the fitted ground planes within each sector ring region.
This module assesses the validity of the fitted ground plane using three criteria: plane uprightness, plane elevation, and plane flatness.
Plane uprightness is represented by the angle θ_Z between the unit normal vector of the fitted plane and the Z-axis. A smaller θ_Z means the normal is more closely aligned with the Z-axis, i.e., the fitted plane is closer to horizontal. In practical situations, the slope of the ground should conform to architectural design specifications and should not be excessively steep. Thus, when θ_Z does not exceed the threshold θ_{th,R-GPVJ}, the uprightness requirement of the fitted plane is considered satisfied.
Plane elevation is represented by the average z-value z_avg of the fitted plane's seed points. A larger z_avg indicates a higher average elevation of the fitted plane. In real-world scenarios, the ground plane generally has a low elevation, except on uphill sections. Therefore, when z_avg is below the threshold z_{th,R-GPVJ}, the elevation requirement of the fitted plane is considered satisfied.
Plane flatness is assessed using the singular value S_n associated with the fitted plane's normal direction. A larger S_n indicates a less flat plane. In practical situations, the ground plane is typically flat, except on non-structured roads that may exhibit unevenness. Therefore, when S_n falls below the threshold S_{th,R-GPVJ}, the flatness requirement of the fitted plane is considered satisfied.
θ_{th,R-GPVJ} applies to all sector ring regions and is set manually in advance. In contrast, z_{th,R-GPVJ} and S_{th,R-GPVJ} take different values for each concentric zone, and these values are dynamically updated based on the average elevation and flatness of all the sector ring regions on each ring.
Based on these three criteria, this module uses the judgment table shown in Table 1, where uprightness, elevation, and flatness are conditions and validity is the final result. Table 1 lists all four judgment rules. Since uprightness is the most reliable criterion for ruling a plane out as ground, the fitted plane is directly judged invalid whenever uprightness is not satisfied, as in state 1. Provided uprightness is satisfied, the fitted plane is judged valid as long as at least one of elevation and flatness is satisfied, as in states 3 and 4; in our experiments, requiring all three criteria simultaneously proved too stringent and incorrectly excluded some ground planes, such as pavements and rough roads. When only uprightness is satisfied, the fitted plane is still judged invalid, as in state 2.
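A compact sketch of this judgment logic follows. The fixed elevation and flatness thresholds here are illustrative stand-ins for the per-zone, dynamically updated values described above, and the flatness term uses a smallest-eigenvalue proxy for the normal-direction singular value S_n.

```python
import numpy as np

def r_gpvj(normal: np.ndarray, seeds: np.ndarray, theta_th_deg: float = 45.0,
           z_th: float = -1.0, s_th: float = 0.1) -> bool:
    """Sketch of Table 1: valid iff uprightness holds and at least one of
    elevation or flatness holds. z_th and s_th are hypothetical values."""
    theta_z = np.degrees(np.arccos(abs(normal[2])))
    uprightness = theta_z <= theta_th_deg              # plane not too steep
    elevation = seeds[:, 2].mean() < z_th              # z_avg below threshold
    # Proxy for S_n: standard deviation of the seeds along the normal direction.
    s_n = np.sqrt(np.linalg.eigvalsh(np.cov((seeds - seeds.mean(0)).T))[0])
    flatness = s_n < s_th
    return bool(uprightness and (elevation or flatness))
```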

3.4.3. R-IGPR: Region-Wise Invalid Ground Plane Repair

When a sector ring region contains outlier points or a relatively low proportion of ground points, its fitted plane will be deemed invalid by the R-GPVJ module. In this case, directly classifying all the points in that sector ring region as non-ground points would lead to undersegmentation. Therefore, this algorithm proposes the R-IGPR module to re-estimate and repair the ground plane of sector ring regions with invalid fitted planes, using the valid planes of neighboring regions.
This module evaluates and re-estimates the invalid fitted planes based on the concept of neighborhoods, where only sector ring regions within the same zone are considered neighbors. As shown in Figure 6a, when the sector ring region containing the invalid plane is located at a non-edge position in the zone, the validity of its four neighboring planes is considered. As shown in Figure 6b, when the sector ring region is located at an edge position of the zone, only the validity of its three neighboring planes is considered. If the number of valid neighboring planes is greater than or equal to two, the sector ring region is considered suitable for ground plane repair; otherwise, no repair is performed. For sector ring regions eligible for repair, the calculation method is as follows.
Assume there are n valid fitted planes in its neighborhood, with plane equations A_i x + B_i y + C_i z + D_i = 0, i = 1, 2, …, n. The equation of the repaired ground plane is as follows:
$$\frac{\sum_{i=1}^{n} A_i}{n}\,x + \frac{\sum_{i=1}^{n} B_i}{n}\,y + \frac{\sum_{i=1}^{n} C_i}{n}\,z + \frac{\sum_{i=1}^{n} D_i}{n} = 0 \tag{5}$$
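Formula (5) amounts to coefficient-wise averaging of the valid neighboring plane equations, which is cheap and keeps the repaired plane consistent with its local neighborhood. A small sketch:

```python
import numpy as np

def r_igpr(neighbor_planes: list) -> np.ndarray:
    """Sketch of Formula (5): average two or more valid neighboring planes.
    Each entry is (A_i, B_i, C_i, D_i); the averaged (A, B, C) is generally
    no longer a unit vector, so distances must use Formula (3) afterwards."""
    planes = np.asarray(neighbor_planes, dtype=float)
    assert len(planes) >= 2, "repair requires at least two valid neighbors"
    return planes.mean(axis=0)
```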

3.4.4. R-GPS: Region-Wise Ground Point Segmentation

Finally, based on the processing results of all the above modules, the point cloud dataset P is divided into ground points and non-ground points.
First, the reflected noise points, the vertical interference points, and the points from sector ring regions whose fitted planes remain invalid are directly classified as non-ground points. The remaining points are divided based on the fitted planes of their sector ring regions. The orthogonal distance d between each point in the sector ring region and the fitted plane is calculated and compared with the threshold d_{th,R-GPS}. If d ≤ d_{th,R-GPS}, the point is considered close to the fitted plane and classified as a ground point; otherwise, it is classified as a non-ground point. Note in particular that, for a plane equation Ax + By + Cz + D = 0 repaired by the R-IGPR module, A^2 + B^2 + C^2 ≠ 1 in general, so Formula (3) must be used to calculate the orthogonal distance.
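The final split can be sketched as a single distance test; using Formula (3), with the explicit norm, keeps it correct both for the unit-normal planes from R-GPF and for the averaged planes from R-IGPR.

```python
import numpy as np

def r_gps(pts: np.ndarray, plane: np.ndarray, d_th: float = 0.1):
    """Sketch of R-GPS: classify points by orthogonal distance to the plane.
    `plane` holds (A, B, C, D); Formula (3) divides by ||(A, B, C)||, so the
    normal need not be unit length."""
    n, d = plane[:3], plane[3]
    dist = np.abs(pts[:, :3] @ n + d) / np.linalg.norm(n)
    ground = dist <= d_th
    return pts[ground], pts[~ground]   # (ground points, non-ground points)
```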

4. Results and Discussion

4.1. Dataset Preparation

In this experiment, we evaluate the ground segmentation performance of the proposed algorithm against the existing algorithms on the SemanticKITTI dataset [30]. SemanticKITTI is a large-scale dataset designed for autonomous driving scenarios, collected with a 64-line mechanically rotating LiDAR mounted on a vehicle in real-world driving environments. The dataset consists of 22 sequences: sequences 00–10 contain both the raw point cloud data and per-point class labels, while sequences 11–20 include only the raw point cloud data. We therefore conducted experiments on the labeled sequences 00–10. The dataset contains seven major classes and twenty-eight minor classes, with road, sidewalk, parking, and other ground categorized under the ground class. In this experiment, points of the ground class are considered ground points, while points of the remaining classes are considered non-ground points. Sequences 00–10 contain a total of 23,210 frames of point clouds from different scenes, so the dataset allows for an objective comparison of segmentation methods.

4.2. Evaluation Metrics

In this experiment, precision, recall, and F1-score are used as evaluation metrics to quantitatively compare the segmentation performance of the different methods. If we define N_{TP}, N_{FP}, N_{TN}, and N_{FN} as the numbers of true positives, false positives, true negatives, and false negatives, respectively, the evaluation metrics are calculated as follows:
$$\mathrm{precision} = \frac{N_{TP}}{N_{TP} + N_{FP}} \tag{6}$$

$$\mathrm{recall} = \frac{N_{TP}}{N_{TP} + N_{FN}} \tag{7}$$

$$F1 = \frac{2 \times \mathrm{precision} \times \mathrm{recall}}{\mathrm{precision} + \mathrm{recall}} \tag{8}$$
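Given boolean masks of predicted and ground-truth ground points, Formulas (6)-(8) reduce to a few lines; a minimal sketch:

```python
import numpy as np

def evaluate(pred: np.ndarray, gt: np.ndarray):
    """Compute precision, recall, and F1 from boolean masks (Formulas (6)-(8))."""
    n_tp = np.sum(pred & gt)    # ground predicted as ground
    n_fp = np.sum(pred & ~gt)   # non-ground predicted as ground
    n_fn = np.sum(~pred & gt)   # ground predicted as non-ground
    precision = n_tp / (n_tp + n_fp)
    recall = n_tp / (n_tp + n_fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1
```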

4.3. Parameter Setting

The concentric zone model is divided into four zones of different specifications. In order of distance from near to far, the zones contain two, four, four, and four rings, and each ring in these zones is divided into sixteen, thirty-two, forty-five, and sixteen sector ring regions, respectively. The parameter settings for the R-RNR module are: Z_{th,R-RNR} = −0.3 m, I_{th,R-RNR} = 0.2, N_{z,thr} = 40, and N_{I,thr} = 1. The parameter settings for the R-VIR module are: Z_{th,R-VIR} = 0.2 m, N_{th,R-VIR} = 20, d_{th,R-VIR} = 0.3 m, and θ_{th,R-VIR} = 45°. The parameter settings for the R-GPF module are: N_{th,R-GPF} = 20, z_{th-seed} = 0.2 m, d_{th,R-GPF} = 0.1 m, and N_{R-GPF} = 3. The R-GPVJ module uses θ_{th,R-GPVJ} = 45°, and the R-GPS module uses d_{th,R-GPS} = 0.1 m.
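For convenience, these settings can be collected in one place, for example as the Python dictionary below. The layout is just an illustrative convention, and the negative sign on Z_{th,R-RNR} follows the constraint Z_{th,R-RNR} ≤ 0 m from Section 3.3.1.

```python
PARAMS = {
    "czm":    {"rings_per_zone": [2, 4, 4, 4],
               "sectors_per_zone": [16, 32, 45, 16]},
    "r_rnr":  {"z_th": -0.3, "i_th": 0.2, "n_z_thr": 40, "n_i_thr": 1},
    "r_vir":  {"z_th": 0.2, "n_seed": 20, "d_th": 0.3, "theta_th_deg": 45.0},
    "r_gpf":  {"n_lowest": 20, "z_th_seed": 0.2, "d_th": 0.1, "n_iter": 3},
    "r_gpvj": {"theta_th_deg": 45.0},  # z_th and s_th are updated per zone at runtime
    "r_gps":  {"d_th": 0.1},
}
```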

4.4. Comparison with Existing Methods

The segmentation results of the proposed algorithm and the existing algorithms are compared in Figure 7 and Table 2. Figure 7 shows the qualitative segmentation results of the five algorithms, from which it can be seen that, compared to Patchwork [26] and Patchwork++ [29], the proposed algorithm produces fewer red points. In addition, although the number of red points for GPF [33] is similar to that of the proposed algorithm, the proposed algorithm produces fewer blue points and more green points. Meanwhile, the segmentation result of LineFit [24] shows noticeably fewer green points and many blue points, indicating a very unsatisfactory result. From the evaluation metrics in Table 2, it is evident that the proposed algorithm achieves the highest F1-score, with precision second only to LineFit [24] and recall second only to Patchwork++ [29]. These results indicate that, compared to the LineFit [24], GPF [33], Patchwork [26], and Patchwork++ [29] algorithms, the proposed algorithm exhibits the best overall ground segmentation performance.
The higher precision of the proposed algorithm compared to that of GPF [33], Patchwork [26], and Patchwork++ [29] can be attributed to its superior segmentation of fine details, particularly in the transition regions between ground and non-ground points. As illustrated in Figure 7, the segmentation results of the proposed algorithm show fewer red and blue points in the road edge region. This is mainly due to the first stage of the proposed algorithm, which removes the majority of the interference points and thereby improves the fitting precision in complex, detail-rich scenes. Additionally, the second stage of the proposed algorithm repairs invalid fitted ground planes, further enhancing the integrity of the fitted planes and resulting in more precise ground point segmentation.
However, the proposed algorithm still has limitations when facing large lawn areas, as shown in the second image of Figure 7f. Although the proposed algorithm performs better here than Patchwork [26] and Patchwork++ [29], many lawn points are still incorrectly classified as ground. In contrast, LineFit [24] and GPF [33] perform well in this area. This is mainly because the difference between the lawn and the actual ground is so small that the validity judgment module cannot completely exclude planes fitted to lawn areas, which leads to subsequent misclassification.
Overall, the results demonstrate that the proposed algorithm outperforms the existing algorithms in terms of ground point segmentation effectiveness, as indicated by its F1-score. The algorithm’s ability to capture fine details and accurately segment ground points, especially in challenging scenarios, contributes to its superior performance.

4.5. Effect of R-RNR

This experiment compares the RNR module proposed in Patchwork++ [29] with the R-RNR module proposed in this paper. Figure 8 shows that, when the point cloud contains reflected noise with uneven reflection intensity, the RNR module identifies only the obvious reflected noise points with low reflection intensity, whereas the R-RNR module successfully identifies all the reflected noise points. This indicates that the proposed R-RNR module significantly outperforms the RNR module of Patchwork++ [29] in its ability to filter out reflected noise.

4.6. Algorithm Speed

The segmentation speeds of the different algorithms are compared on a platform equipped with an Intel(R) Core(TM) i9-9900K CPU; no GPU resources were used. Table 2 shows that the segmentation times of LineFit [24] and GPF [33] are much longer than those of the other algorithms and completely fail to satisfy real-time requirements. This is because LineFit [24] divides the point cloud into 43,200 bins and fits lines to the points in each bin, which incurs high computational complexity. Meanwhile, GPF [33] fits a ground plane to the global point cloud, causing a huge computational burden due to the large number of fitted seed points. In contrast, Patchwork [26] divides the point cloud into fewer regions, drastically reducing the number of seed points required to fit the ground planes, so Patchwork [26] has the lowest computational complexity. Patchwork++ [29], as an improvement of Patchwork [26], adds more modules to improve segmentation accuracy, so its computational complexity is higher than that of Patchwork [26]. The proposed algorithm, as an improvement of Patchwork++ [29], adds low-complexity modules to handle fine details and removes some unnecessary modules to improve segmentation speed. Therefore, its computational complexity is only slightly higher than that of Patchwork++ [29].
In summary, the processing time of the proposed method is comparable to that of the existing real-time segmentation algorithm Patchwork++ [29] and slightly longer than that of Patchwork [26]. The computational complexity of the proposed method is low and meets real-time requirements.
Furthermore, we measured the time taken by each part of the proposed algorithm, as shown in Table 3. The Others category includes steps such as data format conversion, data preprocessing, and visualization of results, and therefore takes a relatively long time. Beyond this, the R-GPF module takes the most time because it performs multiple ground plane fits for each sector ring region. The R-VIR module, which also requires plane fitting, takes less time because the number of sector ring regions containing vertical interference points is generally small, so relatively few vertical plane fits are needed. The R-RNR module must compare and count the height and reflection intensity of each point in the sector ring region and then judge and partition the points, so it also takes a relatively large share of time. The R-GPVJ and R-IGPR modules only evaluate criteria on the fitted plane equations and repair a small number of planes, so their computational complexity is relatively low. Finally, the R-GPS module calculates the orthogonal distances of a large number of points to the plane and classifies them, so it also has high computational complexity. In addition, since all the above modules process the point cloud of a single sector ring region at a time, the sector ring regions can be processed in parallel.

5. Conclusions

This paper proposes a staged real-time ground segmentation algorithm for 3D LiDAR point clouds. Experiments verify that the proposed algorithm effectively addresses the undersegmentation of ground points caused by inaccurate and incomplete ground plane fitting in existing algorithms, thereby improving the effectiveness of ground segmentation. Moreover, like the existing real-time algorithms, the proposed algorithm has low computational complexity, ensuring fast segmentation speed.
In future research, we plan to use this algorithm as a preprocessing step, where the segmented ground points and non-ground points can be utilized for road extraction and object detection tasks, respectively. Additionally, we aim to design an adaptive algorithm to reduce the number of parameters that need to be set manually.

Author Contributions

Conceptualization, X.C.; Methodology, W.D.; Software, W.D.; Validation, W.D.; Formal analysis, W.D.; Investigation, W.D., X.C. and J.J.; Resources, X.C.; Data curation, W.D.; Writing—original draft, W.D.; Writing—review & editing, W.D., X.C. and J.J.; Visualization, J.J.; Supervision, X.C.; Project administration, X.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding. The APC was funded by Soochow University.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Roriz, R.; Cabral, J.; Gomes, T. Automotive LiDAR technology: A survey. IEEE Trans. Intell. Transp. Syst. 2021, 23, 6282–6297. [Google Scholar] [CrossRef]
  2. Zhang, J.; Singh, S. LOAM: Lidar odometry and mapping in real-time. In Proceedings of the Robotics: Science and Systems, Berkeley, CA, USA, 12–16 July 2014; Volume 2, pp. 1–9. [Google Scholar]
  3. Pramatarov, G.; De Martini, D.; Gadd, M.; Newman, P. BoxGraph: Semantic place recognition and pose estimation from 3D LiDAR. In Proceedings of the 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Kyoto, Japan, 23–27 October 2022; pp. 7004–7011. [Google Scholar]
  4. Chen, Y.; Liu, S.; Shen, X.; Jia, J. Fast point r-cnn. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019; pp. 9775–9784. [Google Scholar]
  5. Zimmer, W.; Ercelik, E.; Zhou, X.; Ortiz, X.J.D.; Knoll, A. A survey of robust 3d object detection methods in point clouds. arXiv 2022, arXiv:2204.00106. [Google Scholar]
  6. Wang, H.; Wang, B.; Liu, B.; Meng, X.; Yang, G. Pedestrian recognition and tracking using 3D LiDAR for autonomous vehicle. Robot. Auton. Syst. 2017, 88, 71–78. [Google Scholar] [CrossRef]
  7. Hasan, M.; Hanawa, J.; Goto, R.; Suzuki, R.; Fukuda, H.; Kuno, Y.; Kobayashi, Y. LiDAR-based detection, tracking, and property estimation: A contemporary review. Neurocomputing 2022, 506, 393–405. [Google Scholar] [CrossRef]
  8. Qi, C.R.; Su, H.; Mo, K.; Guibas, L.J. Pointnet: Deep learning on point sets for 3d classification and segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 652–660. [Google Scholar]
  9. Gomes, T.; Matias, D.; Campos, A.; Cunha, L.; Roriz, R. A survey on ground segmentation methods for automotive LiDAR sensors. Sensors 2023, 23, 601. [Google Scholar] [CrossRef] [PubMed]
  10. Sun, P.; Zhao, X.; Xu, Z.; Wang, R.; Min, H. A 3D LiDAR data-based dedicated road boundary detection algorithm for autonomous vehicles. IEEE Access 2019, 7, 29623–29638. [Google Scholar] [CrossRef]
  11. Wang, G.; Wu, J.; He, R.; Yang, S. A point cloud-based robust road curb detection and tracking method. IEEE Access 2019, 7, 24611–24625. [Google Scholar] [CrossRef]
  12. Liu, Z.; Amini, A.; Zhu, S.; Karaman, S.; Han, S.; Rus, D.L. Efficient and robust lidar-based end-to-end navigation. In Proceedings of the 2021 IEEE International Conference on Robotics and Automation (ICRA), Xi’an, China, 30 May–5 June 2021; pp. 13247–13254. [Google Scholar]
  13. Patoliya, J.; Mewada, H.; Hassaballah, M.; Khan, M.A.; Kadry, S. A robust autonomous navigation and mapping system based on GPS and LiDAR data for unconstraint environment. Earth Sci. Inform. 2022, 15, 2703–2715. [Google Scholar] [CrossRef]
  14. Lee, B.; Wei, Y.; Guo, I.Y. Automatic parking of self-driving car based on lidar. Int. Arch. Photogramm. Remote. Sens. Spat. Inf. Sci. 2017, 42, 241–246. [Google Scholar] [CrossRef]
  15. He, C.; Zeng, H.; Huang, J.; Hua, X.S.; Zhang, L. Structure aware single-stage 3d object detection from point cloud. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 14–19 June 2020; pp. 11873–11882. [Google Scholar]
  16. Fan, L.; Xiong, X.; Wang, F.; Wang, N.; Zhang, Z. Rangedet: In defense of range view for lidar-based 3d object detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, BC, Canada, 11–17 October 2021; pp. 2918–2927. [Google Scholar]
  17. Wu, J.; Xu, H.; Tian, Y.; Pi, R.; Yue, R. Vehicle detection under adverse weather from roadside LiDAR data. Sensors 2020, 20, 3433. [Google Scholar] [CrossRef] [PubMed]
  18. Wahyono, E.P.; Ningrum, E.S.; Dewanto, R.S.; Pramadihanto, D. Stereo vision-based obstacle avoidance module on 3D point cloud data. Telkomnika 2020, 18, 1514–1521. [Google Scholar] [CrossRef]
  19. Chen, H.; Lu, P. Real-time identification and avoidance of simultaneous static and dynamic obstacles on point cloud for UAVs navigation. Robot. Auton. Syst. 2022, 154, 104124. [Google Scholar] [CrossRef]
  20. Choi, J.; Ulbrich, S.; Lichte, B.; Maurer, M. Multi-target tracking using a 3d-lidar sensor for autonomous vehicles. In Proceedings of the 16th International IEEE Conference on Intelligent Transportation Systems (ITSC 2013), The Hague, The Netherlands, 6–9 October 2013; pp. 881–886. [Google Scholar]
  21. Adnan, M.; Slavic, G.; Martin Gomez, D.; Marcenaro, L.; Regazzoni, C. Systematic and comprehensive review of clustering and multi-target tracking techniques for LiDAR point clouds in autonomous driving applications. Sensors 2023, 23, 6119. [Google Scholar] [CrossRef] [PubMed]
  22. Huang, W.; Liang, H.; Lin, L.; Wang, Z.; Wang, S.; Yu, B.; Niu, R. A fast point cloud ground segmentation approach based on coarse-to-fine Markov random field. IEEE Trans. Intell. Transp. Syst. 2021, 23, 7841–7854. [Google Scholar] [CrossRef]
  23. Bogoslavskyi, I.; Stachniss, C. Efficient online segmentation for sparse 3D laser scans. PFG- Photogramm. Remote. Sens. Geoinf. Sci. 2017, 85, 41–52. [Google Scholar] [CrossRef]
  24. Himmelsbach, M.; Hundelshausen, F.V.; Wuensche, H.J. Fast segmentation of 3D point clouds for ground vehicles. In Proceedings of the 2010 IEEE Intelligent Vehicles Symposium, La Jolla, CA, USA, 21–24 June 2010; pp. 560–565. [Google Scholar]
  25. Hu, X.; Rodriguez, F.S.A.; Gepperth, A. A multi-modal system for road detection and segmentation. In Proceedings of the 2014 IEEE Intelligent Vehicles Symposium Proceedings, Dearborn, MI, USA, 8–11 June 2014; pp. 1365–1370. [Google Scholar]
  26. Lim, H.; Oh, M.; Myung, H. Patchwork: Concentric zone-based region-wise ground segmentation with ground likelihood estimation using a 3D LiDAR sensor. IEEE Robot. Autom. Lett. 2021, 6, 6458–6465. [Google Scholar] [CrossRef]
  27. Qi, C.R.; Yi, L.; Su, H.; Guibas, L.J. Pointnet++: Deep hierarchical feature learning on point sets in a metric space. In Proceedings of the Advances in Neural Information Processing Systems 30 (NIPS 2017), Long Beach, CA, USA, 4–9 December 2017. [Google Scholar]
  28. Hua, B.S.; Tran, M.K.; Yeung, S.K. Pointwise convolutional neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 984–993. [Google Scholar]
  29. Lee, S.; Lim, H.; Myung, H. Patchwork++: Fast and robust ground segmentation solving partial under-segmentation using 3D point cloud. In Proceedings of the 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Kyoto, Japan, 23–27 October 2022; pp. 13276–13283. [Google Scholar]
  30. Behley, J.; Garbade, M.; Milioto, A.; Quenzel, J.; Behnke, S.; Stachniss, C.; Gall, J. Semantickitti: A dataset for semantic scene understanding of lidar sequences. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019; pp. 9297–9307. [Google Scholar]
  31. Douillard, B.; Underwood, J.; Melkumyan, N.; Singh, S.; Vasudevan, S.; Brunner, C.; Quadros, A. Hybrid elevation maps: 3D surface models for segmentation. In Proceedings of the 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems, Taipei, Taiwan, 18–22 October 2010; pp. 1532–1538. [Google Scholar]
  32. Anand, B.; Senapati, M.; Barsaiyan, V.; Rajalakshmi, P. LiDAR-INS/GNSS-based real-time ground removal, segmentation, and georeferencing framework for smart transportation. IEEE Trans. Instrum. Meas. 2021, 70, 1–11. [Google Scholar] [CrossRef]
  33. Zermas, D.; Izzat, I.; Papanikolopoulos, N. Fast segmentation of 3d point clouds: A paradigm on lidar data for autonomous vehicle applications. In Proceedings of the 2017 IEEE International Conference on Robotics and Automation (ICRA), Singapore, 29 May–3 June 2017; pp. 5067–5073. [Google Scholar]
  34. Bogoslavskyi, I.; Stachniss, C. Fast range image-based segmentation of sparse 3d laser scans for online operation. In Proceedings of the RSJ International Conference on Intelligent Robots and Systems (IROS), Daejeon, Republic of Korea, 9–14 October 2016; pp. 163–169. [Google Scholar]
  35. Guo, D.; Yang, G.; Qi, B.; Wang, C. A Fast Ground Segmentation Method of LiDAR Point Cloud From Coarse-to-Fine. IEEE Sens. J. 2022, 23, 1357–1367. [Google Scholar] [CrossRef]
  36. Moosmann, F.; Pink, O.; Stiller, C. Segmentation of 3D lidar data in non-flat urban environments using a local convexity criterion. In Proceedings of the 2009 IEEE Intelligent Vehicles Symposium, Xi’an, China, 3–5 June 2009; pp. 215–220. [Google Scholar]
  37. Rummelhard, L.; Paigwar, A.; Nègre, A.; Laugier, C. Ground estimation and point cloud segmentation using spatiotemporal conditional random field. In Proceedings of the 2017 IEEE Intelligent Vehicles Symposium (IV), Los Angeles, CA, USA, 11–14 June 2017; pp. 1105–1110. [Google Scholar]
  38. Varney, N.; Asari, V.K. Pyramid point: A multi-level focusing network for revisiting feature layers. IEEE Geosci. Remote Sens. Lett. 2022; Early Access. [Google Scholar] [CrossRef]
  39. Narksri, P.; Takeuchi, E.; Ninomiya, Y.; Morales, Y.; Akai, N.; Kawaguchi, N. A slope-robust cascaded ground segmentation in 3D point cloud for autonomous vehicles. In Proceedings of the 2018 21st International Conference on Intelligent Transportation Systems (ITSC), Maui, HI, USA, 4–7 November 2018; pp. 497–504. [Google Scholar]
  40. Lim, H.; Hwang, S.; Myung, H. ERASOR: Egocentric ratio of pseudo occupancy-based dynamic object removal for static 3D point cloud map building. IEEE Robot. Autom. Lett. 2021, 6, 2272–2279. [Google Scholar] [CrossRef]
  41. Zhao, X.; Yang, Z.; Schwertfeger, S. Mapping with reflection-detection and utilization of reflection in 3d lidar scans. In Proceedings of the 2020 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR), Abu Dhabi, United Arab Emirates, 4–6 November 2020; pp. 27–33. [Google Scholar]
  42. Feng, C.; Taguchi, Y.; Kamat, V.R. Fast plane extraction in organized point clouds using agglomerative hierarchical clustering. In Proceedings of the 2014 IEEE International Conference on Robotics and Automation (ICRA), Hong Kong, China, 31 May–7 June 2014; pp. 6218–6225. [Google Scholar]
Figure 1. Overall framework of the staged real-time ground segmentation algorithm.
Figure 2. Comparison of two models for point cloud partitioning. (a) Uniform polar grid model. (b) Concentric zone model (CZM).
Figure 3. Theoretical model for reflected noise in LiDAR.
Figure 4. Reflected noise points with uneven reflection intensity. The colors used in the representation indicate varying reflection intensities, with red indicating low reflection intensity and green indicating high reflection intensity.
Figure 5. Theoretical model of vertical interference points. (a) Visual description of a large number of vertical interference points and a small number of ground points. (b) Visual description of the impact of a large number of vertical interference points on fitting ground planes.
Figure 6. Neighborhood selection of invalid fitted ground planes in the R-IGPR. (a) Neighborhood selection of invalid planes at non-edge position. (b) Neighborhood selection of invalid planes at edge position.
Figure 7. Comparison of segmentation outputs from different algorithms. (a) Ground truth from the SemanticKITTI dataset [30]. (b) The segmentation results of LineFit [24]. (c) The segmentation results of GPF [33]. (d) The segmentation results of Patchwork [26]. (e) The segmentation results of Patchwork++ [29]. (f) The segmentation results of the proposed algorithm. In (a), green and red points represent ground and non-ground points. In (b–f), green, red, and blue points represent TP, FP, and FN, respectively; the fewer the red and blue points and the more the green points, the better the result. Furthermore, compared to (a), the points missing in (b–f) are TN.
Figure 8. Comparison of reflected noise removal. (a) The original point cloud from SemanticKITTI dataset [30], with red boxes indicating the location of reflected noise points. (b) The reflected noise points. Red points indicate low reflection intensity close to 0, green points indicate high reflection intensity close to 1, and blue points indicate reflection intensity close to 0.5. (c) The identification results of RNR [29]. (d) The identification results of the R-RNR. The pink bounding box around the points indicates successful identification of those points as reflected noise.
Table 1. Criteria for judging the validity of fitted planes (✓ = satisfied, ✗ = not satisfied, – = does not affect the judgment).

State | Uprightness | Elevation | Flatness | Validity
1 | ✗ | – | – | Invalid
2 | ✓ | ✗ | ✗ | Invalid
3 | ✓ | ✓ | – | Valid
4 | ✓ | – | ✓ | Valid
Table 2. Comparison of evaluation metrics for different algorithms.

Method | Precision (%) | Recall (%) | F1 (%) | Time (ms)
LineFit [24] | 87.80 | 71.95 | 78.00 | 267.9
GPF [33] | 74.50 | 95.55 | 82.50 | 179.8
Patchwork [26] | 72.74 | 98.12 | 82.59 | 15.3
Patchwork++ [29] | 73.23 | 99.30 | 83.32 | 28.6
Proposed | 77.26 | 98.59 | 85.69 | 30.6
Table 3. Percentage of time occupied by each part of the proposed algorithm.

Part | Percentage of Time (%)
CZM | 10.7
R-RNR | 11.3
R-VIR | 9.4
R-GPF | 12.6
R-GPVJ | 7.2
R-IGPR | 7.4
R-GPS | 11.6
Others | 29.8
