Communication
Peer-Review Record

Towards Prediction and Mapping of Grassland Aboveground Biomass Using Handheld LiDAR

Remote Sens. 2023, 15(7), 1754; https://doi.org/10.3390/rs15071754
by Jeroen S. de Nobel 1, Kenneth F. Rijsdijk 1, Perry Cornelissen 1,2 and Arie C. Seijmonsbergen 1,*
Reviewer 1: Anonymous
Reviewer 2:
Reviewer 3: Anonymous
Reviewer 4: Anonymous
Submission received: 10 February 2023 / Revised: 22 March 2023 / Accepted: 23 March 2023 / Published: 24 March 2023
(This article belongs to the Special Issue Local-Scale Remote Sensing for Biodiversity, Ecology and Conservation)

Round 1

Reviewer 1 Report

This is an interesting paper and one that will be useful as guidance to others in the field. The authors do a good job of explaining their test of handheld lidar for assessing aboveground biomass by building random forest models. While I generally approve of the methodology and agree with the results and conclusions, I think the paper could be improved by adding some additional definitions, details on RF modeling, and computation of error rates, as well as some minor editing. See inline comments on the review copy of the paper. I think the discussion could also be improved by further exploring sources of error and how they may affect the results. While wind is rightly mentioned as a potential source of measurement error, a larger challenge is assessing error rates when the stated precision of the handheld scanner (6 mm) is finer than that of the AeroPoints (20 mm), as well as potential errors in the lab measurement of AGB (of which there is no mention). Otherwise this is a well written and valuable contribution to the literature.

Comments for author File: Comments.pdf

Author Response

Dear Reviewer,

We thank you for your critical and to-the-point comments, which helped to improve our manuscript. Some comments we could address directly by following your suggestions; others can only be identified in this demonstration project and are difficult to quantify. Below, we first respond to your general remarks and then reply point by point, in sequential order, to the specific remarks earmarked in the separate .pdf document.

In response to your general remarks, with which we largely agree:

  • We added definitions to the paper (stretch, drift, ghost points, discarded areas) to make sure these terms are clear to all readers.
  • We added details on the RF model (also requested by another reviewer) and on error sources.
  • We checked and found that both the relative vertical and horizontal accuracy of the AeroPoints are within 10 mm (not 20 mm) according to the AeroPoints™ technical sheet, which decreases potential errors in the RF model.
  • All laboratory work followed standardized protocols, aligned with widely used procedures, under supervision of the laboratory support staff of IBED. We assume potential errors from the lab procedures to be very small relative to other potential errors.
  • We repaired the indicated typos, implemented the grammar suggestions, and made many other corrections after re-evaluating the text in the newly uploaded version of our manuscript.

Replies to your remarks made in the separate pdf document:

Introduction:

“Is that really the case?”

>>> We realize and agree that we probably have not reviewed all available literature. We changed this statement to: ‘In most studies…’

Section 3.2

“What are the accuracy characteristics of this scanner? Web search shows ‘relative accuracy up to 6 mm’.” >>> Thank you, accuracy is indeed an important topic and we agree that the relative accuracy of the scanner should be reported. We added in the text: ‘has a relative accuracy of approximately 6 mm’, as provided by the technical information.

“Define these terms cloud drift, stretch, and discarded areas for readers not familiar with HMLS”

>> We agree that these terms are not widely known amongst readers. We explained drift, stretch, and discarded areas by inserting the following sentences: ‘These difficulties may cause distortions in the scan direction (stretch) or between adjacent scanlines (drift). Discarded areas are areas without data points as a result of incomplete scanning.’

“You should insert a trademark symbol as this is a tradename.” >> We added a trademark symbol to AeroPoint™ and also to CloudCompare™ and eCognition™, along with the relevant links to their websites.

“At 20 mm accuracy, are these units sufficient to assess the accuracy of a ~6 mm lidar? What would be the impact on the biomass computation of this level of accuracy? Any?”

>> We re-checked the relative accuracy of the AeroPoints in the technical specification document (available at: https://www.geometius.nl/wp-content/uploads/2017/11/Brochures-AeroPoints-1.pdf): the relative accuracy is <10 mm both horizontally and vertically. We changed this in the text. For the biomass estimation this means that errors stemming from the ~6 mm or ~10 mm accuracy are expected to be relatively small. Field sampling location selection and actual sampling within the 30 cm circular sampling location, 1 m from the center of the AeroPoints™, also have a locational error, assumed to be within this same 10 mm range.

“Define ghost points.” >> We defined what ghost points are in our research; we added: ‘(3D points resulting from erroneously scanned moving objects)’.

“As well as Ntree, also an Mtry hyperparameter?”

>> Yes, you are right, we also used the mtry hyperparameter; we added this to the text (and it is indeed used later on).

“How was this evaluated? %var explained from the randomForest package? Included in your R scripts to be published in figshare? It appears that no test data was left out of the AGB dataset, so only internal cross validation when constructing the model?”

>> Yes, we used the %var explained from the random forest model; this is included in the R scripts available in figshare. We indeed did not leave out test data: we used all samples for constructing and testing the model, which is related to our relatively small dataset and the expected relatively low variation in AGB values. We do think this will become important when sampling larger extents and areas with more heterogeneous vegetation (e.g., mixtures of grass, shrubs, and trees).
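For readers unfamiliar with this statistic, the sketch below shows how the out-of-bag “% Var explained” is read from a fitted randomForest regression in R. It is a minimal illustration only, not the published figshare script; the `metrics` data frame and `agb` vector are synthetic placeholders that merely mimic the shape of the input (15 metrics, 30 plots).

```r
# Minimal sketch (not the published figshare script): fit an RF regression and
# read the internal out-of-bag "% Var explained". Inputs below are synthetic
# placeholders illustrating the data shape only (15 metrics, 30 plots).
library(randomForest)

set.seed(42)
metrics <- as.data.frame(matrix(runif(30 * 15), nrow = 30,
                                dimnames = list(NULL, paste0("metric", 1:15))))
agb     <- runif(30, min = 50, max = 400)   # placeholder AGB values

rf_model <- randomForest(x = metrics, y = agb,
                         ntree = 500,   # ntree hyperparameter
                         mtry  = 5,     # mtry hyperparameter
                         importance = TRUE)

print(rf_model)          # summary includes "% Var explained" (out-of-bag)
tail(rf_model$rsq, 1)    # the same pseudo R-squared as a single number
```

Because every tree is fit on a bootstrap sample, the out-of-bag predictions act as an internal cross-validation, which is why no separate test set was used here.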

“How was this determined?”

>>> Thank you for asking: ‘…appeared to contain too few data points…’. This is seen in Figure 5: locations B15 and B16 lie at the outer end of the scan line, and after the SLAM algorithm processing too few data points were available at these locations. We added a cross reference to Figure 5 in the text.

“This is an important point to highlight and could lead to uncertainty. Is there any way to account for this?”

>> Thank you, a very valid remark. The passage ‘…both the scanning and the measurement of Hmax could potentially be influenced by wind force that affects the position of isolated higher grass plumes…’ indeed concerns an important issue when investigating at fine scales.

Accounting for wind influence during scanning is difficult. One could consider measuring only during windless conditions (we address this in the Concluding Remarks), or enlarging the sample radius and taking more samples so that this effect is likely reduced. Another idea would be to use the video footage for additional measurements and/or corrections to the datasets.

Section 5.2

“Is there a danger that this overfits to the training data? If AGB data had been left out in a test data set, it would be possible to evaluate more objectively.” >>> You are probably right, and we foresee accounting for potential overfitting in future campaigns. For this study, a demonstration in small areas with low spatial variability of AGB across areas A and B, we regard the analysis as explorative and therefore did not investigate this potential danger in depth.
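As an illustration of the held-out evaluation the reviewer suggests, a future campaign could proceed along the lines of the sketch below. It is a hedged example only: it reuses the synthetic `metrics` and `agb` placeholders from the earlier sketch, and with only 30 samples such a split would be very unstable.

```r
# Sketch of a held-out evaluation (illustrative only; reuses the synthetic
# 'metrics' and 'agb' placeholders from the earlier sketch).
library(randomForest)

set.seed(1)
test_idx <- sample(nrow(metrics), size = round(0.3 * nrow(metrics)))  # ~30% held out

rf_fit <- randomForest(x = metrics[-test_idx, ], y = agb[-test_idx],
                       ntree = 500, mtry = 5)

pred <- predict(rf_fit, newdata = metrics[test_idx, ])
rmse <- sqrt(mean((pred - agb[test_idx])^2))   # error on data the model never saw
r2   <- cor(pred, agb[test_idx])^2
c(RMSE = rmse, R2 = r2)
```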

 

Reviewer 2 Report

This work is very meaningful for the prediction and mapping of grassland AGB using handheld LiDAR. Some suggestions are as follows:

1. I think the scientific problems and the reason for using LiDAR data for prediction and mapping were not explained clearly in the introduction, and the reasons for using an RF model in the paper were insufficient.

2. Could you introduce the Random Forest (RF) regression model briefly in the paper?

3. In the process of fitting the experimental data, it is mentioned that the model used in this paper has good fitting accuracy, but it still lacks a comparison with other methods. Please add this content.

4. In the discussion part of this paper, the reasons for the formation of model accuracy errors are not discussed in depth. Please supplement appropriately.

5. The content of the concluding remarks is not closely tied to the preceding analysis. Please supplement appropriately.

6. The format of the references is not standard.

 

 

Author Response

Thank you for your valuable suggestions and interest in our work; they helped to improve the quality of our manuscript. We addressed most of them in the updated manuscript. Here, we detail our changes:

  1. I think the scientific problems and the reason for using LiDAR data for prediction and mapping were not explained clearly in the introduction, and the reasons for using an RF model in the paper were insufficient.

We added the following sentence to the introduction to more concretely support our choice for the handheld mobile laser scanner; it covers most of our considerations:

“In addition, handheld LiDAR inventories produce high-resolution data, are easily repeatable to support fine-scale monitoring of vegetation structure and AGB across seasons, are cost effective, and can be adjusted in the field to changing terrain conditions, an advantage over terrestrial laser scanning campaigns and multi-spectral imagery.”

We have now rewritten the reasons for using an RF model in the introduction as follows:

In our research, we use fifteen input metrics derived from the LiDAR data and only a small sample size. We opted for the RF model because it has the advantage that only a relatively small amount of training data is required to support numerous predictors [14].

 

In addition, we mention (and followed) the state-of-the-art overview of Morais et al. (2021), who compared 26 grassland studies in which machine learning methods have been used; 3 of these studies used LiDAR in combination with RF, while the other studies mostly used spectral imagery to build their machine learning models.

 

  2. Could you introduce the Random Forest (RF) regression model briefly in the paper?

Thank you for this question; we agree that some additional introduction is useful. We introduced the RF regression model more explicitly in Section 3.3 by rewriting and adding text. Specifically, we added:

“Random Forest regression is an ensemble learning algorithm that can rank and select important variables for biomass prediction. By bootstrapping the samples it constructs decision trees, each with a randomized subset of predictors [14]. Important hyperparameters in the model that can be tuned are ntree and mtry: ntree sets the number of decision trees and mtry determines the number of features that are randomly selected at each node [31].”
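To make the variable-ranking aspect of this description concrete, the sketch below shows how a fitted randomForest model ranks its predictors by permutation importance in R. It is illustrative only and assumes a model object (here called `rf_model`, as in the earlier sketch) that was fitted with `importance = TRUE`.

```r
# Sketch of how a fitted RF ranks its input metrics (assumes 'rf_model' was
# fitted with importance = TRUE, e.g. as in the earlier sketch).
library(randomForest)

imp <- importance(rf_model, type = 1)              # %IncMSE (permutation importance)
imp[order(imp[, 1], decreasing = TRUE), , drop = FALSE]  # metrics ranked by importance
varImpPlot(rf_model)                               # quick visual ranking
```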

  3. In the process of fitting the experimental data, it is mentioned that the model used in this paper has good fitting accuracy, but it still lacks a comparison with other methods. Please add this content.

Thank you for this remark/suggestion; indeed, it would be great to have such comparisons. A quantitative, accuracy-wise comparison for our nature reserve cannot be made without building these other models. However, we have added information on R2 values from the literature to the discussion, which refers to other machine learning methods used in aboveground biomass prediction. Specifically, we added:

Morais et al. [2] reviewed 26 studies that used various machine learning methods to predict AGB in grassland-dominated areas, most of them using satellite imagery as the data source (R2 ranging from 0.22 to 0.94). In three studies, RF models were used in combination with LiDAR data (R2 values of 0.59, 0.61 and 0.79), which is in line with our findings.

  4. In the discussion part of this paper, the reasons for the formation of model accuracy errors are not discussed in depth. Please supplement appropriately.

Thank you for your question. We agree that we can provide more information on the formation of model errors. First, we could increase the sample size (now 30) and split the data into training and validation datasets, which is expected to increase model stability. We do think that this issue will become more important in grasslands with more variation in AGB. To link the RF model to the ecological field situation, we also added a sentence addressing that grasslands with low spatial AGB variability can develop into heterogeneous grasslands over time, which will influence which metrics explain variation in vegetation structure and AGB, as reported in the literature (e.g. Bakx et al. 2019). We added the following lines:

We expect that with increasing variability of vegetation cover over time in the Oostvaardersplassen (e.g. a mosaic of grassland, shrubs and trees), other combinations of LiDAR metrics (including horizontal metrics) will contribute to variations in vegetation structure [11] and the prediction of AGB in the RF model.

  5. The content of the concluding remarks is not closely tied to the preceding analysis. Please supplement appropriately.

We added text to the concluding remarks section to tie it more closely to the analyses, in particular:

Future research could focus on further optimization of our three workflow routines, especially by increasing the sample size, using separate training and validation data, increasing the mapping extent, collecting data in other seasons, and testing the synergy with other sensors, such as existing nation-wide LiDAR data or Sentinel imagery.

 

  6. The format of the references is not standard.

Thank you for noticing. We tried to follow the guidelines of Remote Sensing and changed several references to align with them. We assume/hope that any remaining mistakes will be resolved with help from the editing team before the final proofreading.

Reviewer 3 Report

The paper deals with an innovative application of hand-held lidars. It is clear and easy to read, and the conclusions are based on satisfying experiments.

Author Response

Many thanks for your confidence in and interest in our work, and for reviewing the text! We did change some content based on the comments of the other reviewers, which helped to further improve our manuscript.

Reviewer 4 Report

I read your contribution with high interest and strongly appreciated the use of a handheld scanner for such applications. The methods are well described and the algorithms used are clear. It is really nice that you also investigate the weaknesses of your approach and attempted an error analysis. Clearly, these are "pre-results", and it would be interesting to compare with other seasons using a similar methodology and to know which adaptations would be needed. I have a few points that were bothering me: there are many abbreviations throughout the manuscript; maybe you can think about reducing them. I am also missing a clear focus in the introduction. I can read it "between the lines", but it would be nice to have a bullet list that summarizes the main achievements or goals of this contribution.

Author Response

Thank you for your kind words and comments!

Indeed, we intend to extend these first results across seasons in the coming years, learn from the experiences documented in this manuscript, and try linking to spectral information as well; we addressed this in the updated manuscript. We fully agree with you that ‘weaknesses’, or ‘sources of uncertainty’, should be addressed, which is the way forward to optimize future research.

Also based on a recommendation of the special issue editor, we reduced the number of abbreviations in the text to make it more accessible for interested readers who are unfamiliar with the topic.

Specifically, we removed OVP (indicating the study area), SMR, ANN and SR, because these abbreviations were used only a few times. We kept RMSE, AGB, RF, LiDAR, SLAM and OBIA, because these are central to our manuscript, are used abundantly throughout it, and are commonly used in the literature.

We added only minor updates to the introduction, basically to support our choice for LiDAR and RF; since the three other reviewers did not mention a severe lack of focus, we did not opt for a bulleted clarification. In essence, we developed a first workflow for mapping grassland aboveground biomass, here in a small demonstration project. We did include many small suggestions to improve the overall manuscript, so we hope the new manuscript now meets the standards of Remote Sensing.

Round 2

Reviewer 2 Report

I think the modified version has made great progress. I suggest that the research method be further supplemented.

Author Response

Dear Reviewer,

Many thanks for noting that our manuscript has made great progress. You left one suggestion: ‘I suggest that the research method be further supplemented.’

Following your remark, we looked for possibilities to insert additional, concrete supplementary information to further improve our methodology section.

We added four small insertions that we think will streamline the methods section.

  1. In the data collection Section 3.2, we inserted information on collecting ground control points with the HMLS by keeping the scanner steady for at least 10 seconds at the center of an AeroPoint field marker.
  2. A few details were added on the oven-drying and weighing of the field samples.
  3. In Section 3.3, we added that the height data of each LiDAR point are used to calculate the 15 metrics, to streamline that part of the section (see the sketch after this list).
  4. In our mapping Section 3.4, we added a few lines to better explain the procedure for producing the objects that form the basis of the final AGB maps.
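As a sketch of what computing metrics from the height data of each LiDAR point amounts to for a single sampling location, the example below derives a few common height statistics in R. The vector `z` of normalized point heights is a hypothetical placeholder, and these statistics are only examples; the manuscript's actual set of 15 metrics is the one defined in Section 3.3.

```r
# Sketch only: example height metrics from a hypothetical vector 'z' of
# normalized LiDAR point heights (m) clipped to one 30 cm sampling circle.
# The manuscript's full set of 15 metrics is defined in Section 3.3.
z <- c(0.12, 0.35, 0.07, 0.41, 0.28, 0.19)                # placeholder heights

h_max  <- max(z)                                          # maximum height
h_mean <- mean(z)                                         # mean height
h_sd   <- sd(z)                                           # height variability
h_pct  <- quantile(z, probs = c(0.25, 0.50, 0.75, 0.95))  # height percentiles
c(Hmax = h_max, Hmean = h_mean, Hsd = h_sd, h_pct)
```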

We hope this has further improved our manuscript and that it can now be accepted for publication.
