Peer-Review Record

Study on Single-Tree Extraction Method for Complex RGB Point Cloud Scenes

Remote Sens. 2023, 15(10), 2644; https://doi.org/10.3390/rs15102644
by Kai Xia 1,2,3,*,†, Cheng Li 1,2,3,†, Yinhui Yang 1,2,3, Susu Deng 4 and Hailin Feng 1,2,3
Reviewer 1:
Reviewer 2: Anonymous
Reviewer 3: Anonymous
Submission received: 11 April 2023 / Revised: 13 May 2023 / Accepted: 15 May 2023 / Published: 19 May 2023
(This article belongs to the Special Issue 3D Point Clouds in Forest Remote Sensing III)

Round 1

Reviewer 1 Report

This paper proposes a new single-tree point cloud segmentation algorithm for UAV-generated colored point clouds. Building on the existing semantic segmentation network RandLA-Net, the authors add position-information processing modules and thereby improve RandLA-Net. The IMP-LFA module plays an important role in enhancing the segmentation accuracy. The error analysis sections give a thorough discussion of the reasons for mis-segmentation and mis-clustering. To sum up, this is a pretty good paper, which I suggest publishing in Remote Sensing.

Author Response

Response to Reviewers

Thank you very much for your review of our paper and for providing valuable comments and suggestions. Here is our response to your evaluation:

We greatly appreciate your positive feedback on our proposed single-tree point cloud segmentation algorithm for UAV-generated colored point clouds. As you correctly pointed out, we have improved the existing semantic segmentation network RandLA-Net by incorporating position information processing modules.

We sincerely appreciate your overall assessment of the paper as being of high quality and your recommendation for publication in Remote Sensing. Once again, we thank you for your diligent review, as your expert feedback has been immensely helpful to our research.

Reviewer 2 Report

General comments:

The paper “Study on single tree extraction method for complex RGB point cloud scenes” focuses on point cloud semantic segmentation using an algorithm based on the neural network RandLA-Net and Meanshift clustering. Although semantic segmentation of large-scale RGB point clouds is a very interesting topic, some parts of the paper are insufficiently developed. As a new method is presented, the structure of the new algorithm should be described in more detail, and the differences from the RandLA-Net LFA module explained and emphasized. A deeper analysis of the algorithm's performance for different tree and point cloud characteristics, based on the differences between the analysed data sets, should also be presented in the discussion.

 

Specific comments

Abstract: The study justification comprises a major part of the abstract; these paragraphs may be reduced, adding some information on the proposed modifications of RandLA-Net and including results on both segmentation and clustering accuracies.

Table 1. Pixel spatial resolution should be included. The type of sensor is reported instead of the maximum flight time.

L138-141. Explain why the test and training sample proportions differ between sites.

L177-181. This error does not correspond to the data used and should not be reported in Materials and Methods. The error of the SfM point clouds used in this study may be very different and may affect the RandLA-Net and Meanshift results, so it should be provided instead.

 L194. Detail which colour attributes are used.

L196-198. This justifies the proposed improvement; it should be explained in more detail.

Figure 3. All the symbols should be defined in the figure caption (for instance, what N,8 and N/4,32…, MLP, and LocSE mean). Similarly, in line 228 SE and CBAM are used but have not been defined previously in the text (they should be defined both in the text and in the figure).

L228. Explain what the SE and CBAM modules do and how they enhance the LFA module's ability to extract local features.

L229. Which spatial and channel features? Please explain.
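
For reference, a minimal sketch of a standard Squeeze-and-Excitation (SE) channel-attention block applied to point-wise features, assuming a PyTorch-style implementation; this is the generic SE design from the literature, not the paper's actual IMP-LFA code, and all class and variable names are illustrative:

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-Excitation: global pooling -> bottleneck MLP -> channel-wise gating."""
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_points, channels) point-wise features
        squeeze = x.mean(dim=1)            # "squeeze": average over all points -> (batch, channels)
        weights = self.gate(squeeze)       # "excitation": per-channel weights in (0, 1)
        return x * weights.unsqueeze(1)    # re-weight each feature channel
```

CBAM extends this idea with an additional spatial-attention branch after the channel branch, which is presumably what the question about "spatial and channel features" at L229 refers to.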

L249-251. As these point clustering methods are not applied in this study, they should be mentioned in the Introduction or the Discussion instead of in the Materials and Methods section.

L284 & 289-290. Clarify whether missing quantities (L284) or missed samples (L289) are used for the missing rate computation.
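
To make the ambiguity concrete, a minimal sketch of one common reading of a missing rate computed from tree counts; the function and argument names are illustrative assumptions, not the paper's notation:

```python
def missing_rate(num_reference_trees: int, num_matched_trees: int) -> float:
    """Fraction of reference trees with no matching extracted tree (illustrative definition)."""
    missed = num_reference_trees - num_matched_trees
    return missed / num_reference_trees

# e.g. 100 reference trees, 92 correctly matched -> missing rate of 0.08
print(missing_rate(100, 92))
```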

L295-297. Information on the network optimization algorithm and the clustering algorithm should be explained separately, linking each algorithm to its respective parameters.
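
For context on the clustering parameters in question, a minimal sketch of Meanshift-based separation of individual trees with scikit-learn; the bandwidth value, the use of horizontal coordinates only, and the synthetic points are placeholder assumptions, not the paper's actual settings:

```python
import numpy as np
from sklearn.cluster import MeanShift

# Points labelled "tree" by the semantic segmentation stage (random stand-ins here).
tree_points = np.random.rand(500, 3) * 20.0   # (x, y, z) in metres

# bandwidth is the kernel radius: it controls whether neighbouring crowns are
# merged into one cluster or split apart, so it should be reported explicitly.
ms = MeanShift(bandwidth=2.0, bin_seeding=True)
labels = ms.fit_predict(tree_points[:, :2])   # cluster on horizontal positions

print(f"Detected {len(np.unique(labels))} candidate trees")
```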

 L398. “dividing multiple trees with connected canopies into a single tree” -> “merging multiple trees with connected canopies into a single tree”

Discussion: Some interpretations of your results provided in the Results section may be better placed in the Discussion section. Also, deeper insight into the performance of the novel algorithm and its differences from RandLA-Net and other semantic segmentation and clustering-based extraction methods would enhance the discussion (for instance: why was the improvement greater in Area 2?).

Author Response

Response to Reviewers

We sincerely appreciate the reviewers’ constructive comments and suggestions. We have carefully addressed each of the comments, and our specific responses to each individual question or comment are as follows.

Author Response File: Author Response.doc

Reviewer 3 Report

The paper is well written, but presents some flaws that should be addressed somewhere in the text.

The authors never mention the method they are using to create the point clouds they are segmenting: it's photogrammetry, not SfM or the like.

The study areas are very small and almost devoid of trees. The existing trees are rather well separated. In such an environment, I suspect all the other methods used for LiDAR point clouds would give similar results. It is not stated that the segmentation is done for only one species in each area, and the reader only realizes this when reaching the results. The authors do mention multi-species segmentation as future work, though.

Although common in the literature, I do not see the advantage of training on twice as much data as is classified. It would be more logical to train on a small amount of data and classify a large amount. If you have to manually classify that much data, you might as well do the rest manually too. Of course, some manual labelling is needed for testing, but intuitively the statistical results would be much more meaningful if the ratio between training and testing data were 1 to 5 or 1 to 10, for instance.

Please see attached file for other comments.

Comments for author File: Comments.pdf


Author Response

Response to Reviewers

We sincerely appreciate the reviewers’ constructive comments and suggestions. We have carefully addressed each of the comments, and our specific responses to each individual question or comment are as follows.

Author Response File: Author Response.doc
