Remote Sensing Parameter Extraction of Artificial Young Forests under the Interference of Undergrowth

Round 1
Reviewer 1 Report
Comments for author File: Comments.pdf
Author Response
Dear Reviewer 1,
We sincerely appreciate your invaluable feedback on our paper. Your insightful suggestions have greatly enhanced the quality of our work. We have diligently incorporated your recommendations into the revised manuscript, and the detailed modifications can be found in the attached document. Your guidance has been instrumental in refining our research. Thank you for your help and continued support.
Best regards,
Tao Zefu
Author Response File: Author Response.pdf
Reviewer 2 Report
It would be interesting to add information about the samples' age and structure, especially the mixture percentage of the studied species with the other species.
On page 4, I think the author confused the LiDAR data with the UAV imagery. This should be verified by the author.
Figure 2: I advise the author to place the two boxes for UAV imagery and sample measured data at the same level as the LiDAR data box.
Author Response
Dear Reviewer 2,
We sincerely appreciate your invaluable feedback on our paper. Your insightful suggestions have greatly enhanced the quality of our work. We have diligently incorporated your recommendations into the revised manuscript, and the detailed modifications can be found in the attached document. Your guidance has been instrumental in refining our research. Thank you for your help and continued support.
Best regards,
Tao Zefu
Author Response File: Author Response.pdf
Reviewer 3 Report
Abstract
“canopy height model (CHM)” must be “Canopy Height Model (CHM)”; please check all abbreviations in the paper.
The abstract does not need an introduction; please rewrite it to summarize the suggested approach, its input, output, and accuracy.
Please check all numbers in the paper and never forget to add the units.
Introduction
Please avoid using “we”, “our”, and “us”; use the passive voice instead. Please check the whole paper.
Line 40: please define the expressions “biomass”, “forest stock”, and “carbon emission”, and add references.
The reference citations in the text do not follow the journal's author guidelines; please check all citations after reviewing the journal guidelines.
The related work summarized in this section is not sufficient, because a huge number of papers have been published in this research area. Also, machine learning applications were not mentioned. Please extend the related work section using review papers such as:
Gharineiat, Z., Tarsha Kurdi, F., Campbell, G. Review of automatic processing of topography and surface feature identification LiDAR data using machine learning techniques. Remote Sens. 2022, 14(19), 4685. https://doi.org/10.3390/rs14194685
Solares-Canal, A., Alonso, L., Picos, J. et al. Automatic tree detection and attribute characterization using portable terrestrial lidar. Trees 37, 963–979 (2023). https://doi.org/10.1007/s00468-023-02399-0
Lines 86 to 96 mix the contribution, the suggested approach, and the paper structure. Moreover, the paper title seems far from the paper topic, and the topic itself is not clear. The introduction must be organized carefully; please set out these three elements clearly in the introduction section. The current presentation unfortunately gives an impression of poor contribution.
Materials and Methods
Please do not place two section titles consecutively; add a transition paragraph between them. Please check the whole paper.
Study Area Overview and Data Collection
What is the definition of “handheld RTK”?
What is the “geometric correction of the image”? Please add a reference or a definition.
Do you mean the tree footprint diameter by “crown width”? Please add a small figure showing the tree parameters.
Please define the parameters shown in Table 1 below the table.
The context of the data presented in Table 1 is not clear: why did you calculate the max, min, mean, etc. for all direct measurements? I think you must add a new section describing the experiment design.
Line 131: “average point density of 1300 points per square meter”: try to eliminate the duplicated points; then you will get the real density value.
What is the extent of the scanned area, and how many points does the measured point cloud contain?
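The duplicated-points issue raised above can be made concrete with a small sketch. Everything here is invented for illustration (the synthetic cloud, the 1 mm snapping grid, and the helper name `point_density` are not from the paper):

```python
# Toy illustration of the duplicated-points issue: repeated LiDAR
# returns inflate the apparent point density. Snapping coordinates to
# a small grid (1 mm here) and deduplicating gives a more realistic
# value. All numbers and the helper name are invented for illustration.
def point_density(points, area_m2, grid=0.001):
    """Points per square metre after removing near-duplicate points."""
    unique = {(round(x / grid), round(y / grid), round(z / grid))
              for x, y, z in points}
    return len(unique) / area_m2

# Synthetic cloud: 1000 distinct points over 1 m^2, each recorded twice.
pts = [(i * 0.001, (i * 7 % 1000) * 0.001, 0.0) for i in range(1000)]
pts = pts + pts                          # duplicates double the raw count

raw_density = len(pts) / 1.0             # inflated: 2000 pts/m^2
real_density = point_density(pts, 1.0)   # after dedup: 1000 pts/m^2
```

The raw count reports double the true density; deduplication recovers the real value, which is the reviewer's point.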
2.2 Methods
You say “LiDAR data” and “UAV imagery data”, yet the UAV was used to acquire both datasets.
Line 139: “Height Model (CHM) was extracted”: how was it extracted?
“The Marker-controlled Watershed Segmentation algorithm was then employed to extract tree feature parameters from both the CHM data and the UAV imagery”: please cite a reference.
Line 141: “The preprocessed point cloud”: “preprocessed” is a general word; please replace it by explaining which kind of preprocessing you mean.
Line 141: “was segmented using the point cloud clustering segmentation algorithm”: please cite a reference.
In Figure 2, I think you mean DTM instead of DEM.
A Digital Elevation Model (DEM) is a representation of the bare ground (bare earth) topographic surface of the Earth excluding trees, buildings, and any other surface objects. ( see https://www.usgs.gov/faqs/what-digital-elevation-model-dem).
So, to calculate the CHM, we need a DTM rather than a DEM; do you agree? If you do use a DEM, please explain how and cite a reference.
The same issue occurs in Section 2.1.1.
Please add a reference for every operation performed, or explain it (please check the whole paper). In Figure 2, you say “Normalized point cloud”; please explain how this is done starting from the DEM. It can also be done without a DEM!
In the flowchart presented in Figure 2, you forgot to add the output.
2.2.1 LiDAR Data Processing: please be precise, please replace the word “Processing”.
Line 147: “acquisition process”: do you mean “acquisition step”? Please be precise, and check the whole paper.
Please choose the section titles according to the flowchart in Figure 2, so that the link between them is clear.
CHM = DSM - DEM should be CHM = DSM - DTM, shouldn't it?
“DEM is a digital representation of the ground terrain based on elevation data”: where did you find this definition? (See https://www.usgs.gov/faqs/what-digital-elevation-model-dem.)
Please check definitions using terminology dictionary or official references.
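The relation questioned in these comments can be written out explicitly. The following sketch uses invented elevation values only (none are from the paper):

```python
# Sketch of the relation under discussion: the Canopy Height Model
# (CHM) is the per-pixel difference between the Digital Surface Model
# (DSM, top of canopy) and the bare-earth terrain model, clamped at
# zero. All elevation values below are invented for illustration.
dtm = [[100.0, 101.0],
       [102.0, 103.0]]   # bare-earth elevation (m)
dsm = [[112.5, 101.0],
       [110.0, 103.2]]   # first-return surface elevation (m)

chm = [[max(s - t, 0.0)  # clamp small negative artifacts to 0
        for s, t in zip(srow, trow)]
       for srow, trow in zip(dsm, dtm)]
# chm[0][0] is 12.5 m (a tree); chm[0][1] is 0.0 m (bare ground)
```

The subtraction only yields canopy height if the second raster really is the bare-earth surface, which is why the DTM/DEM terminology matters here.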
2.2.2 UAV Imagery Processing
“After geometric correction and preprocessing to eliminate distortions, the UAV imageries were converted from the RGB color space to the HSV color space. In this process, the R, G, and B attributes first need to be normalized”: please explain how each operation is realized or add reference(s).
If you developed any equation yourself, please explain how you derived it; if not, please add reference(s) (please check all equations in the paper).
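The normalization step mentioned in the quoted passage is, most likely, the standard scaling of 8-bit R, G, B values to [0, 1] before RGB-to-HSV conversion. A minimal sketch (the pixel value is an arbitrary example, not taken from the paper):

```python
import colorsys

# Sketch of the normalization step the quoted passage refers to: 8-bit
# R, G, B values are scaled to [0, 1] before the standard RGB -> HSV
# conversion (here done with the stdlib colorsys module). The pixel
# value is an arbitrary example, not taken from the paper.
r8, g8, b8 = 34, 139, 34                       # a green, canopy-like pixel
r, g, b = r8 / 255.0, g8 / 255.0, b8 / 255.0   # normalization to [0, 1]
h, s, v = colorsys.rgb_to_hsv(r, g, b)         # h, s, v each in [0, 1]
```

Note that `colorsys` returns all three components in [0, 1]; if the paper reports hue in degrees, it would be `h * 360`.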
2.2.3 Raster Image segmentation algorithm
“marker-controlled watershed segmentation algorithm”: please add reference(s).
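For orientation, the idea behind marker-based crown segmentation of a CHM can be sketched in a few lines. This toy steepest-ascent labeling is only conceptually related to a true marker-controlled watershed, and the 5x5 height grid is invented for illustration:

```python
# Toy, pure-Python sketch of the idea behind marker-based crown
# segmentation of a CHM: every pixel is assigned to the local maximum
# (a "marker", i.e. a candidate tree top) reached by steepest ascent.
# The 5x5 grid is invented; a real marker-controlled watershed
# implementation is more involved than this.
CHM = [
    [0, 1, 1, 0, 0],
    [1, 5, 2, 1, 1],
    [1, 2, 1, 2, 1],
    [0, 1, 2, 7, 2],
    [0, 1, 1, 2, 1],
]

def climb(i, j):
    """Follow the steepest ascent from (i, j) to a local maximum."""
    while True:
        best = (i, j)
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                ni, nj = i + di, j + dj
                if (0 <= ni < 5 and 0 <= nj < 5
                        and CHM[ni][nj] > CHM[best[0]][best[1]]):
                    best = (ni, nj)
        if best == (i, j):
            return best          # reached a marker (local maximum)
        i, j = best

# Label every pixel with the marker it climbs to; two crowns emerge.
labels = {(i, j): climb(i, j) for i in range(5) for j in range(5)}
crowns = set(labels.values())
```

On this grid the two local maxima at (1, 1) and (3, 3) act as markers, and every pixel is assigned to one of the two resulting crowns.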
The section numbers are not correct (e.g., 2.2.3 appears twice). Please check the section and figure numbering. Moreover, please number the equations.
2.2.3 Point Cloud Clustering Segmentation Algorithm
Please define the parameters used in Figure 5 below the same figure.
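As background for this section's title, here is a minimal sketch of Euclidean (distance-based) clustering, one common reading of “point cloud clustering segmentation”. The points, the 1.0 m radius, and the helper name are invented for illustration:

```python
import math

# Minimal sketch of Euclidean (distance-based) point-cloud clustering:
# points connected by chains of mutual distances below a radius are
# merged via a flood fill. The 2D points and the 1.0 m radius are
# invented; real single-tree segmentation works on 3D clouds.
def euclidean_clusters(points, radius):
    """Group point indices whose chain distance stays below `radius`."""
    unvisited = set(range(len(points)))
    clusters = []
    while unvisited:
        stack = [unvisited.pop()]
        cluster = set(stack)
        while stack:
            p = stack.pop()
            near = {q for q in unvisited
                    if math.dist(points[p], points[q]) < radius}
            unvisited -= near
            cluster |= near
            stack.extend(near)
        clusters.append(sorted(cluster))
    return clusters

# Two well-separated "trees" and one isolated point.
pts = [(0.0, 0.0), (0.5, 0.2), (0.9, 0.1),   # tree 1
       (5.0, 5.0), (5.4, 5.3),               # tree 2
       (9.0, 0.0)]                           # isolated noise point
clusters = euclidean_clusters(pts, 1.0)      # yields three clusters
```

The radius is the key parameter: too small splits one crown into several clusters, too large merges neighbouring trees, which is exactly the failure mode the paper's undergrowth setting makes worse.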
2.2.4 Segmentation Result Accuracy Verification
Please follow the journal guidelines when writing section titles (some words start with capital letters and others do not; why?).
Line 257: please explain how the field measurements were carried out.
5. Conclusions
Please extend the conclusion by adding future work.
Minor editing of English language required
Author Response
Dear Reviewer 3,
We sincerely appreciate your invaluable feedback on our paper. Your insightful suggestions have greatly enhanced the quality of our work. We have diligently incorporated your recommendations into the revised manuscript, and the detailed modifications can be found in the attached document. Your guidance has been instrumental in refining our research. Thank you for your help and continued support.
Best regards,
Tao Zefu
Author Response File: Author Response.pdf
Reviewer 4 Report
This paper introduces a single-tree recognition method using CHM images, UAV images, and LiDAR point clouds. The key idea is to use the marker-controlled watershed segmentation algorithm and point cloud clustering segmentation algorithm. However, there are some concerns as follows:
1. Some important references are missing in the Introduction. The authors should provide a more thorough literature review on tree classification and multi-modality. Please add a few recent works from a broader scope to this paper.
[1] Zou, Xinhuai, et al. "Tree classification in complex forest point clouds based on deep learning." IEEE Geoscience and Remote Sensing Letters 14.12 (2017): 2360-2364.
[2] Xia, Y., Wang, C., Xu, Y., Zang, Y., Liu, W., Li, J., & Stilla, U. (2019). RealPoint3D: Generating 3D point clouds from a single image of complex scenarios. Remote Sensing, 11(22), 2644.
2. It would be better to give more details of the point cloud collection to help readers understand the details of the procedure.
3. In the experiments, the authors should compare the time complexity when using the CHM images, UAV images, and LiDAR point clouds, respectively.
good
Author Response
Dear Reviewer 4,
We sincerely appreciate your invaluable feedback on our paper. Your insightful suggestions have greatly enhanced the quality of our work. We have diligently incorporated your recommendations into the revised manuscript, and the detailed modifications can be found in the attached document. Your guidance has been instrumental in refining our research. Thank you for your help and continued support.
Best regards,
Tao Zefu
Author Response File: Author Response.pdf
Reviewer 5 Report
This paper aims to improve tree recognition in artificial young forests, since current methods may produce significant measurement errors caused by the interference between tree canopies and undergrowth vegetation.
The authors combined UAV imagery, LiDAR point cloud data, and canopy height model images to achieve this.
Several comments to improve the quality of the paper:
- Both in the abstract and throughout the paper, please indicate that the case study is located in China.
- The Introduction should include a clearer paragraph describing how the rest of the paper is organised.
- Please ensure every single acronym is written out in full the first time it is mentioned (e.g., immediately after Figure 2, which includes several).
- Methods: I miss a figure showing the number and directions of the UAV flight lines so that readers can see the image capture routes.
- Please specify and describe the platform used to run all the cleaning, modelling, and data filtering algorithms. After all, the authors used well-known algorithms that are integrated into programs such as CloudCompare.
- The authors described the limitations detected in their results, but did not specifically expand on the limitations of their research. This could be added in Section 4.
- Please revise the format of authors' names in references 24 and 28.
Author Response
Dear Reviewer 5,
We sincerely appreciate your invaluable feedback on our paper. Your insightful suggestions have greatly enhanced the quality of our work. We have diligently incorporated your recommendations into the revised manuscript, and the detailed modifications can be found in the attached document. Your guidance has been instrumental in refining our research. Thank you for your help and continued support.
Best regards,
Tao Zefu
Author Response File: Author Response.pdf
Round 2
Reviewer 3 Report
My concerns have been addressed satisfactorily.
Minor editing of English language required
Reviewer 4 Report
I really do not understand why reference [1] is considered not closely aligned with the content of this study. And I do not see any comparison of the time complexity of the different methods. I tend to reject this paper since my concerns have not been addressed.
[1] Zou, Xinhuai, et al. "Tree classification in complex forest point clouds based on deep learning." IEEE Geoscience and Remote Sensing Letters 14.12 (2017): 2360-2364.
N/A