Article
Peer-Review Record

Comparative Analysis of Algorithms to Cleanse Soil Micro-Relief Point Clouds

Geomatics 2023, 3(4), 501-521; https://doi.org/10.3390/geomatics3040027
by Simone Ott 1,*, Benjamin Burkhard 1, Corinna Harmening 2, Jens-André Paffenholz 3 and Bastian Steinhoff-Knopp 4
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Submission received: 11 October 2023 / Revised: 20 November 2023 / Accepted: 23 November 2023 / Published: 26 November 2023

Round 1

Reviewer 1 Report

Comments and Suggestions for Authors

The article compares terrain point extraction (ground filtering) methods for detecting changes in soil micro-relief on farmland.

Dear authors,

I find your article very nicely organized, with a very clear assessment. I have only one minor remark and one question, which in principle need not affect the article itself:

Question:

Given the size of the area and the density of the point cloud, the cloth resolution setting for CSF (0.1 m) seems large to me. I know that older versions of CloudCompare had this limitation and that it could only be bypassed via command-line execution, but the current version (I am using 2.12.2) no longer has it. Is this the reason for not using a lower value?
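For illustration only (not a request for changes), here is a minimal sketch of how a finer cloth resolution could be set using the standalone CSF Python bindings; the package, parameter values and file name are my own placeholders, not your actual CloudCompare workflow:

```python
# Minimal sketch (placeholder data and parameters): CSF ground filtering with a
# finer cloth resolution than the 0.1 m reported in the paper.
import numpy as np
import CSF  # standalone Python bindings of the Cloth Simulation Filter

xyz = np.loadtxt("plot_cloud.txt")         # hypothetical N x 3 array of point coordinates

csf = CSF.CSF()
csf.params.bSloopSmooth = False            # no slope post-processing on near-flat plots
csf.params.cloth_resolution = 0.05         # finer cloth resolution in metres (illustrative)
csf.params.class_threshold = 0.03          # ground/off-ground distance threshold (assumed)

csf.setPointCloud(xyz)
ground_idx, offground_idx = CSF.VecInt(), CSF.VecInt()
csf.do_filtering(ground_idx, offground_idx)

ground_points = xyz[np.array(ground_idx)]  # points classified as bare soil
```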

Remark (this is not an incentive to cite, just possibly useful information; I also deal with green vegetation filtering):

Due to the varying colors and the difficulty of delimiting a color region in RGB space, I would recommend a vegetation index (calculated from RGB only) as an additional variable for classification here, rather than the raw colors; e.g. ExG or GLI have proven best, for instance in the studies below (a short computation sketch follows the references):

1. https://doi.org/10.3390/rs15133254

2. https://doi.org/10.3390/drones3030061

3. https://doi.org/10.3390/rs12020317
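A short computation sketch of the two RGB-only indices mentioned above (ExG and GLI); the array names and threshold are placeholders only:

```python
# Per-point vegetation indices computed from the point cloud's RGB attributes only.
import numpy as np

def exg_gli(rgb):
    # rgb: N x 3 array of red, green and blue values (any common scaling)
    rgb = rgb.astype(float)
    r, g, b = rgb[:, 0], rgb[:, 1], rgb[:, 2]

    total = r + g + b + 1e-12                             # avoid division by zero
    rn, gn, bn = r / total, g / total, b / total

    exg = 2.0 * gn - rn - bn                               # Excess Green index
    gli = (2.0 * g - r - b) / (2.0 * g + r + b + 1e-12)    # Green Leaf Index
    return exg, gli

# Example: flag likely vegetation points with a simple, illustrative threshold.
# exg, gli = exg_gli(point_colours)
# vegetation_mask = exg > 0.05
```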

I am satisfied with the article and have no reservations about it; I would just like to know the answer to the question asked. I do not require modifications.

Best Regards

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 2 Report

Comments and Suggestions for Authors

The paper “Contactless Lawn Mowing: Performance of Vegetation Elimination in Point Clouds on Plot Scale” evaluates the performance of various methods for separating a point cloud into ground points and vegetation. The paper is written satisfactorily; however, there are multiple major issues that require thorough corrections, as follows:

 

The title is misleading, as the paper has nothing to do with mowing and contains no results relevant to it. The title has to reflect the content of the paper and must be changed accordingly.

 

The abstract is not clearly written, as it is misleading regarding deep and machine learning (line 19): deep learning is actually a subset of machine learning. The same confusion has to be corrected throughout the text and in the figures, e.g. Fig. 1.

 

The authors use the word “epoch” for the time points of the datasets, which confuses the reader because in machine learning this word usually denotes one pass of a dataset through an algorithm. The authors should therefore check and change the wording throughout the paper.

 

As a great deal of the paper's value lies in its datasets, the authors should append the produced dataset as supplementary material. An alternative solution is to deposit the data in a repository, e.g. Mendeley Data. Please read this paper: https://www.mdpi.com/2306-5729/6/2/15 .

 

A great deal of the paper relies on machine learning. However, details are missing regarding the learning/training datasets. How were the datasets split into training and validation sets? Training and testing datasets must not overlap. Additionally, why was cross-validation, e.g. k-fold cross-validation, not used? This technique is frequently used to report reliable results. What was done to check for and prevent underfitting and overfitting? The results should be extended regarding this matter (a minimal sketch of the suggested evaluation follows below).
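To make the suggestion concrete, a minimal sketch of k-fold cross-validation with scikit-learn; the classifier and the feature/label files are placeholders, not the authors' actual classification setup, and only the validation scheme itself is the point:

```python
# Minimal sketch: stratified k-fold cross-validation so that training and test
# points never overlap within a fold. Classifier and inputs are placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

X = np.load("point_features.npy")   # hypothetical per-point features (e.g. colour, roughness)
y = np.load("point_labels.npy")     # hypothetical labels: 0 = soil, 1 = vegetation

clf = RandomForestClassifier(n_estimators=200, random_state=0)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)

scores = cross_val_score(clf, X, y, cv=cv, scoring="f1")
print(f"F1 per fold: {scores}, mean: {scores.mean():.3f}")
```

Comparing training and validation scores across folds also gives a first check for over- or underfitting.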

 

Regarding methodology: existing methods are described in too much detail, which unnecessarily prolongs the paper. Additionally, some figures are taken from other papers; the authors should check whether they have the rights to use them. Some figures are actually not needed, e.g. Figs. 5 and 6. Do they have any value for the reader?



 

Comments on the Quality of English Language

The writing has to be improved, as some sentences/paragraphs are unclear or contain obvious mistakes. A native speaker should check the paper. Some examples:

  • In line 26 the following sentence is not clear: “we recommend CANUPO with colour as scalar field in combination with CSF.” What is a scalar field? The abstract should be written in a more general way.

  • The paragraph at line 227 starts with “Therefore…”, which is unusual.

  • Line 648: “ The surface structure produced, differed most quantitatively with CSF.” 

  • Some sentences are missing verbs, e.g., line 215:  “Further information in [17].” 

  • Fig 4 has some errors, e.g. Confusion Matrizes.

  • Line 333 is not clear. Filtering algorithms basically perform a binary classification. 

  • Figure 11 has text that is too small. The same applies to other figures.

  • In Figure 11 it is unclear what plot 6 is; the caption refers to plot 2.

 

Some other smaller typos: 

  • space missing in equation 3 (“+FN”)

  • Line 307: “ Data gaps in the soil surface due to shadowing by vegetation becomes…”

 

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Round 2

Reviewer 2 Report

Comments and Suggestions for Authors

Dear authors, I think the paper has been improved satisfactorily for publication.
