A Staged Real-Time Ground Segmentation Algorithm of 3D LiDAR Point Cloud

Round 1
Reviewer 1 Report
Comments and Suggestions for Authors
The paper proposes a ground segmentation algorithm for 3D LiDAR point clouds, which is an important step in autonomous driving and robot navigation. The proposed method, based on a concentric zone model, segments ground from non-ground points through a two-stage process involving interference point removal and ground plane fitting. The experimental results, particularly on the SemanticKITTI dataset, demonstrate the algorithm's effectiveness compared to existing methods. However, to strengthen the overall manuscript, the authors need to address the following concerns:
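For orientation, a minimal sketch of what such a two-stage, zone-wise pipeline generally looks like is given below; the function name, the interference filter, and the thresholds are illustrative assumptions, not the manuscript's actual implementation.

```python
import numpy as np

def segment_ground(points, zone_bins, dist_thresh=0.2):
    """Sketch of a two-stage, zone-wise ground segmentation.

    points    : (N, 3) float array of LiDAR returns (x, y, z)
    zone_bins : iterable of integer index arrays, one per concentric-zone cell
    Returns a boolean mask over `points` marking estimated ground returns.
    """
    ground = np.zeros(len(points), dtype=bool)
    for idx in zone_bins:
        idx = np.asarray(idx)
        cell = points[idx]
        if len(cell) < 3:
            continue
        # Stage 1 (illustrative): drop likely interference/reflected returns
        # lying well below the cell's estimated floor height.
        z = cell[:, 2]
        floor = np.median(np.sort(z)[: min(20, len(z))])
        keep = z > floor - 0.5
        cell, idx = cell[keep], idx[keep]
        if len(cell) < 3:
            continue
        # Stage 2 (illustrative): fit a local plane by PCA on the lowest
        # points, then label points near that plane as ground.
        seeds = cell[np.argsort(cell[:, 2])[: max(3, len(cell) // 5)]]
        centroid = seeds.mean(axis=0)
        _, _, vt = np.linalg.svd(seeds - centroid)
        normal = vt[-1]  # direction of least variance = plane normal
        dist = np.abs((cell - centroid) @ normal)
        ground[idx[dist < dist_thresh]] = True
    return ground
```

PCA on the lowest points is a common choice for seed-plane estimation in zone-wise methods such as Patchwork; the manuscript's actual interference criteria and fitting procedure may differ and should be illustrated stage by stage, as requested below.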
Major Concerns:
- The manuscript does not sufficiently address the generalizability of the proposed approach to different scenarios or datasets. The authors demonstrate performance primarily on the SemanticKITTI dataset, which limits understanding of its applicability in other environments. The authors should consider additional related datasets to provide a more comprehensive evaluation across diverse environmental conditions; further analysis in diverse settings would strengthen the paper's claims of generalization.
- The authors need to enhance the clarity and depth of the literature review, particularly in distinguishing the proposed work's novelty from existing methods such as Patchwork++ and other SOTA approaches.
- It is highly recommended to present the overall framework in more detail. The provided pipeline is not clear to the reader and seems superficial. Please provide a detailed illustration of each stage, along with more detailed insight into the computational complexity of each stage and its impact on real-time processing capabilities.
- The paper needs more analysis of its real-time performance, especially a comparison of computational requirements with existing methods. Although Section 4.6 touches on this, a comprehensive evaluation is essential for a thorough understanding of the method's real-time capabilities.
- The authors need to include more SOTA methods in the comparison rather than only 2-3; this would give a more complete picture of where the proposed method stands.
- The paper should discuss specific scenarios where the algorithm might underperform, as these insights are currently missing. Including details on its limitations will enhance understanding of its applicability and areas for improvement.
- Please consider expanding the scope of this project by exploring additional databases to ensure a comprehensive literature review.
- Please improve some of the technical descriptions, particularly those of the concentric zone model adjustments and the interference removal criteria; this would help readers and improve clarity.
- Please fix typos such as the one in Section 3.2, where "ith" should be properly formatted (e.g., as "i-th").
Author Response
Please see the attachment.
Author Response File: Author Response.pdf
Reviewer 2 Report
Comments and Suggestions for Authors
The article presents a real-time (fast) algorithm for separating ground points from non-ground points. The algorithm is built from a sequence of filtering and decision steps, as described in Fig.1. The authors use the SemanticKITTI dataset to benchmark the algorithm they propose.
The article – although it is written clearly – misses important aspects when it comes to legibility and ease of understanding: a multitude of notations are either not defined or only implicitly defined, so the reader must deduce them from context; a prime example is CZM (the Concentric Zone Model), which first appears in Fig.1. Suggest a more direct explanation of the concepts, e.g. in Fig.1.
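For readers who reach this comment before a definition, a minimal sketch of how a concentric zone model typically bins a point cloud is given below; the zone boundaries, ring counts, and sector counts are illustrative assumptions, not the manuscript's actual parameters.

```python
import numpy as np
from collections import defaultdict

def concentric_zone_bins(points,
                         zone_edges=(2.7, 12.0, 30.0, 50.0, 80.0),
                         rings_per_zone=(2, 4, 4, 4),
                         sectors_per_zone=(16, 32, 54, 32)):
    """Group points into (zone, ring, sector) cells of a concentric zone model.

    points : (N, 3) array; zone_edges give radial zone boundaries, and each
    zone is subdivided into its own number of rings and angular sectors.
    All parameter values here are illustrative defaults, not the paper's.
    """
    r = np.hypot(points[:, 0], points[:, 1])
    theta = np.arctan2(points[:, 1], points[:, 0]) + np.pi  # in (0, 2*pi]
    bins = defaultdict(list)
    for i, (rad, ang) in enumerate(zip(r, theta)):
        for z in range(len(zone_edges) - 1):
            r_min, r_max = zone_edges[z], zone_edges[z + 1]
            if r_min <= rad < r_max:
                ring = int((rad - r_min) / (r_max - r_min) * rings_per_zone[z])
                sector = int(ang / (2 * np.pi) * sectors_per_zone[z])
                bins[(z,
                      min(ring, rings_per_zone[z] - 1),
                      min(sector, sectors_per_zone[z] - 1))].append(i)
                break  # points outside all zones are simply not binned
    return bins
```

The usual motivation for the concentric layout is that near-range zones can use finer radial and angular resolution than far zones, where returns are sparse.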
A second aspect is the superficial notation for points and sets of points, as seen in lines 148-150 of the article, and later in the arbitrary indexing just before Equation 2 of the paper. There, the abbreviation is so heavy that it makes understanding extremely difficult.
A second suggestion is that the authors should explain the reflected noise removal _within_ a single cell – similarly to the case of Fig.5.
A third suggestion would be to be clearer w.r.t. the ALGORITHMIC aspects and to provide a rigorous description of the algorithm.
Minor comments:
- suggest making the title clearer: the algorithm is not _BASED_ON_ 3D LiDAR point clouds; rather, it is applied to them. A suggestion is:
A Real-Time Ground Segmentation Algorithm of 3D LiDAR Point Clouds
- it is unclear what 2.5D means – the authors should define it.
- In Fig.4 the details are unclear.
- Equations 3 and 4 together are redundant: suggest keeping only Eq.4, or mentioning that the vector (A,B,C) is normalized, which eliminates the denominator from (4) (see the distance formula recalled after this list).
- In Eq.8 the “F1 – score” should be clarified: the ‘-‘ sign might be read as an operator (the standard definition is recalled after this list).
- Fig.7 is extremely confusing: in (a) the red dots denote the (N) class, while later the red dots denote FALSE POSITIVES.
- A second problem with Fig.7 is that there are points missing from (b),(c),(d) that are present in (a). Suggest clarifying the fact that those points are the TRUE NEGATIVES.
- Before Sec.4.6 there should be a clear definition of the ALGORITHM. Suggest stating the complexity of each step (approximate, if exact is not possible) and whether the steps can be parallelized.
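For reference, the point-to-plane distance that Eqs. 3-4 presumably express (written here in its generic textbook form, not quoted from the manuscript) is

```latex
d = \frac{\lvert A x_0 + B y_0 + C z_0 + D \rvert}{\sqrt{A^2 + B^2 + C^2}},
\qquad
d = \lvert A x_0 + B y_0 + C z_0 + D \rvert \ \text{ if } A^2 + B^2 + C^2 = 1,
```

so stating that the normal (A, B, C) is unit-length indeed makes the denominator, and hence one of the two equations, redundant.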
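Likewise, the standard F1-score definition in question (the hyphen is part of the name, not a minus sign) is

```latex
F_1 = \frac{2 \cdot \mathrm{Precision} \cdot \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}} .
```

Writing it with a subscript, as $F_1$-score, removes the ambiguity the comment points out.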
Author Response
Please see the attachment.
Author Response File: Author Response.pdf
Reviewer 3 Report
Comments and Suggestions for Authors
This paper presents a method for ground segmentation in 3D LiDAR point clouds. The algorithm shares several features with Patchwork++, but with a few improvements and extensions that make it better at filtering out non-ground points from the ground point list. The proposed method is marginally better than the Patchwork++ algorithm and a few others evaluated in this work.
The literature review is adequate, and the details of the method are explained in sufficient detail in Section 3. I suggest the following revisions to improve the paper:
· The meaning of Table 1 is unclear. Is there a missing left column?
· A better explanation is needed for Figure 7. What are the blue points? From context, it seems that these are ground points that are not classified correctly, but I can’t find that stated anywhere in the captions or text.
· The method is tested on the SemanticKITTI dataset, presumably on the 11 sequences that include labeled point clouds? Is ground segmentation performed scan-by-scan, or on the total point cloud? How many different scans are labeled? Does using only 1 dataset bias your results? How would using point clouds generated with a different sensor, say a VLP-16 with only 16 lasers, change the performance of the algorithm? Please address these questions in the discussion section of the paper.
· “Patchwork” is mostly capitalized throughout the paper, but in a few places it is lowercase. Please make it consistent.
Author Response
Please see the attachment.
Author Response File: Author Response.pdf
Round 2
Reviewer 2 Report
Comments and Suggestions for Authors
The authors made the necessary edits and clarifications.