Peer-Review Record

Comparison of Low-Cost Commercial Unpiloted Digital Aerial Photogrammetry to Airborne Laser Scanning across Multiple Forest Types in California, USA

Remote Sens. 2021, 13(21), 4292; https://doi.org/10.3390/rs13214292
by James E. Lamping 1,*, Harold S. J. Zald 2, Buddhika D. Madurapperuma 1 and Jim Graham 3
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Reviewer 3: Anonymous
Submission received: 31 August 2021 / Revised: 18 October 2021 / Accepted: 21 October 2021 / Published: 25 October 2021
(This article belongs to the Special Issue Trends in UAV Remote Sensing Applications: Part II)

Round 1

Reviewer 1 Report

The manuscript by Lamping et al. describes some important and exciting advances in the use of emerging low-cost drone and GNSS technology for forest mapping. The manuscript documents an extensive and innovative analytical effort and adds to our understanding of the performance of DAP relative to lidar in different contexts, as well as the influence of DAP camera angle; knowledge in both of these areas is important for determining how/when to incorporate drones into forest mapping workflows. The manuscript is generally well written, with generally sound analysis and inferences. That said, there are several important issues that must be addressed to bring this manuscript to a publishable state. I believe these issues can be addressed primarily through a revision of the language and interpretations. I am keen to review a revision of this manuscript and ultimately see this important work published.

General points:

  1. There were no plots or areas withheld for regression model validation; the models were evaluated simply based on their fit to the training data. Therefore, it is misleading to refer to the model-fitted structural metrics as “predictions” and the models as “accurate”. The language used in the paper implies that the performance that was observed is what one can expect for new field plots outside the training sample (which is where the utility of remote sensing comes in--measuring plots where you don’t also have field data). Rather than “predicted”, “predictions”, “accuracy”, “accurate”, the paper should use terms such as “explained”, “model fit”, “fitted values”. One important reason for this is the possibility of overfitting, especially with only ~20 plots per forest type. The point cloud summary data may fit the observed forest metrics closely, but this may be due to patterns specific to the training dataset that do not hold in a broader population of plots. Put another way, the point cloud summary statistics may contain variability that coincidentally aligns with the field data, but not in a way that is more broadly accurate and generalizable outside the training sample.
  2. How was the drone point cloud spatially co-registered to the lidar point cloud? This was not described. The HPGPS units that were used allow you to obtain highly precise *relative* positioning among rover-measured points (GCPs), but they do not enable precision much better than a Garmin for *absolute* positioning unless the base station log is post-processed and corrections are applied to all points. Was this done? Much of the strong performance in comparison to lidar was attributed to the use of HPGPS GCPs. I’m not sure that this is required for high performance, especially if nothing was done to spatially co-register the DAP and lidar point clouds. This is further reinforced by the fact that the closed-canopy stands such as DF did not show worse agreement with the lidar CHM relative to the other stands (suggesting that alignment is possible even with few GCPs). What about the alternative of flying a mission with no GCPs and then manually co-registering the two point clouds (shifting each point in a cloud by the same x-y offset so that the clouds optimally align)? Manually co-registering point clouds could also better allow for pixel-to-pixel and tree-level change detection. Addressing this point requires modifications to the methods, as well as Section 4.4 (first and third paragraph) and elsewhere.
  3. In several places (especially the discussion section 4.1), it is implied that to normalize the DAP point clouds, you need either a high-cost lidar acquisition or to use the DTM derived from DAP. It is also implied that one of the reasons for the poorer performance of DAP in denser stands is that it is difficult to normalize the point cloud because it is not possible to create an accurate DTM from the DAP point cloud. However, what about using a DTM from a different, existing data source that is more universally available (i.e., not a dedicated lidar acquisition), such as USGS 3DEP or SRTM DEMs? Using a 10m (or even 30m) DEM (interpolated to a finer resolution) should be reasonable given the spatial scale at which topography varies relative to tree heights. If you were to use this data source, you could potentially get an accurate DAP CHM even from closed-canopy stands without needing existing lidar data. Therefore, I don’t think you need to write off the utility of DAP for dense stands so readily.
  4. The creation of the multi-angled image set is not described. Was it created by combining all images from the nadir and off-nadir sets? If so, the multi-angled set has double the image density (effectively, higher overlap), which would lead you to expect more accurate surface and structure metric estimation, and this would make it difficult to attribute differences specifically to the use of multiple angles. Relatedly, the mission plans (flight paths) were not clearly described: were they flown as parallel transects in a serpentine fashion? The missing information includes, critically, the parameters of the off-nadir image collection mission. In recent work, it is common practice when using an off-nadir camera angle to fly a grid mission (two perpendicular transect missions) so that each tree can be seen from four different sides. If this was not done, the implications should be described, with comparison to other literature noting that results might have been better with a grid mission.
  5. The discussion is not well linked to existing literature. Only 5 statements are linked to the literature with citations. There are opportunities to compare the results obtained to results from similar studies, discuss how results may have been different/better if methods were different (e.g., higher image overlap), etc. One important opportunity to link to the literature involves discussing the potential for individual tree/TAO detection to help ameliorate some of the concerns present with the plot-based approach taken here (this latter link could be made in just 1-2 sentences).
  6. Individual Metashape project runs are stochastic (you can get slightly different point clouds and surfaces each time you run Metashape on a given image set; https://www.agisoft.com/forum/index.php?topic=6485.0). This should be included as a potential explanation for slight differences in performance between image sets (to avoid implying that it is all due to the stand). Similarly, were there differences in weather conditions (clouds, wind) between sites or between missions within a site that might have led to different performance?
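To make general point 3 concrete, the sketch below height-normalizes a toy DAP point cloud against a coarse external DEM via bilinear interpolation. All grid spacings, elevations, and point coordinates are synthetic stand-ins; this illustrates the general technique, not the authors' pipeline:

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Hypothetical coarse DEM (e.g., a 10 m USGS 3DEP tile), here synthetic:
# a gentle planar slope, elevation in meters.
xs = np.arange(0, 100, 10.0)                           # easting of cell centers (m)
ys = np.arange(0, 100, 10.0)                           # northing of cell centers (m)
dem = 500.0 + 0.1 * xs[None, :] + 0.05 * ys[:, None]   # shape (len(ys), len(xs))

# Bilinear interpolator giving ground elevation at arbitrary (y, x) positions.
ground = RegularGridInterpolator((ys, xs), dem,
                                 bounds_error=False, fill_value=None)

# Toy DAP point cloud: columns are x, y, z (absolute elevation, m).
pts = np.array([[25.0, 35.0, 520.0],
                [60.0, 10.0, 515.0],
                [80.0, 80.0, 540.0]])

# Height-normalize: subtract interpolated ground elevation from each point's z.
z_ground = ground(pts[:, [1, 0]])   # interpolator expects (y, x) order
heights = pts[:, 2] - z_ground      # canopy height above ground (m)
```

In practice the DEM would be read from a raster file in the same CRS as the point cloud; because terrain varies far more smoothly than canopy height, even a 10 m (or 30 m) grid interpolated this way can plausibly serve as the terrain reference in closed-canopy stands where a DAP-derived DTM fails.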
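Likewise, the manual x-y co-registration suggested in general point 2 can be sketched as a brute-force search over integer-pixel shifts between two CHM rasters on a common square grid. This is purely illustrative (a real workflow would refine to sub-pixel shifts or use an ICP-style point-cloud alignment); the rasters here are random synthetic data:

```python
import numpy as np

def best_xy_shift(ref, mov, max_shift=5):
    """Return the integer (dy, dx) shift of `mov` that best matches `ref`,
    scored by mean squared difference over the overlapping window.
    Assumes both rasters are square and share a grid."""
    best, best_score = (0, 0), np.inf
    n = ref.shape[0]
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            # Crop each raster to the region where the shifted grids overlap.
            a = ref[max(dy, 0):n + min(dy, 0), max(dx, 0):n + min(dx, 0)]
            b = mov[max(-dy, 0):n + min(-dy, 0), max(-dx, 0):n + min(-dx, 0)]
            score = np.mean((a - b) ** 2)
            if score < best_score:
                best_score, best = score, (dy, dx)
    return best

# Synthetic check: displace a random "CHM" and recover the known offset.
rng = np.random.default_rng(0)
chm = rng.normal(20.0, 5.0, (50, 50))
shifted = np.roll(chm, (2, -3), axis=(0, 1))
recovered = best_xy_shift(chm, shifted)
```

A co-registration step like this (applied once, as a constant x-y offset to the whole cloud) is what would make pixel-to-pixel or tree-level comparison between DAP and lidar products meaningful without relying on GCP absolute accuracy.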

Specific points:

L30: “Douglas-fir forest”: Is there room within the word limit to say what about this forest likely made the performance so poor (occlusion of the ground due to high stand density)?

L32: Change “accurately” to “somewhat accurately” or similar, since R-sq of 0.53-0.85 can’t be qualified as “accurate” outright.

L32: The “(R-sq = 0.53-0.85)” should come after “when compared to field data” since this is the comparison that these values correspond to (as written, it implies it’s for comparison to lidar).

L98: “and the widespread”: Need to add a word, like “and enable the widespread”

L103: Change “allowing” to “potentially allowing”

L194: Change comma to period.

L242: “an” to “a”

L283: Remove apostrophe; delete “made”

L289: “Structural metrics”: This is misleading because I think people usually think of structural metrics as BA, TPH, QMD, etc. What you mean here is “point cloud summary statistics” or something similar. This also occurs on L311 and potentially elsewhere.

L304: “Models were lidar”: There is a typo or something else off with the wording here.

L315 and L319: One list includes BAH but omits AGB; one has AGB but omits BAH. Need to fix?

L327: Figure 4 is referenced from the text before Fig. 3. Need to re-number figures and re-order them so they are referenced in numerical order (check all figs. because it might not be just 3 and 4).

L338: Delete “data”

L400: As with abstract (L32), change “accurately” to “somewhat accurately” or similar. You could convey that this is still a really valuable tool by referencing how the accuracy is nearly as high as can be achieved through lidar.

L402: Delete “type generated”

L407: Change “addition of” to “use of”? Or are you specifically referring to adding nadir and off-nadir image sets together?

L421: “reduced levels of accuracy in DSMs, DTMs, and CHMs”: do you mean specifically DSMs and CHMs? The DTMs were generally pretty good.

L436: Delete “comparable, but” since “comparable” and “slightly lower” mean different things (or, if comparable is meant to imply “close to”, that is already implied by “slightly”).

L488: Change “outperformed” to “marginally outperformed” or similar.

L488: Change “is the preferred” to “has traditionally been the preferred”

L491: Change to “Our study found that under specific forest conditions, low-cost UAS DAP can generate…”

L501: “Should not be used when doing pixel-to-pixel…”: If the concerns about inaccurate DAP DTMs (and normalization) are addressed by using existing DEMs (general point #3), is this still a major concern? Is there still a reason to believe DAP would lead to such highly variable results from one mission to the next within the same stand that it can’t be used in place of lidar? This could be a good place to tie into opportunities for individual tree/TAO detection literature (general point #5), which may enable more confidently linking observations of the same tree across acquisitions from different time points. Also applies to L527: “direct comparisons”.

L509: This sentence implies that multi/hyperspectral data are necessary for species ID, but this is a bit of a non-sequitur because even the RGB spectral data may enable accurate species ID (at least relative to lidar). It may not be necessary to reference multi/hyperspectral here.

L524: “should include off-nadir images”: This contradicts the results and earlier discussion and should be deleted.

Fig. 2: Note in the caption the elevation that the images were collected from. Is it the same as the whole imagery collection mission?

Figs. 3-5: Caption and/or figure should indicate which axis is DAP and which is lidar.

Fig. 6: Figure and caption say “THP” where I believe “TPH” is intended.

Fig. 7: May need to define “snag” in caption, or change figure to say “standing dead tree”.

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 2 Report

Many congratulations to the authors for submitting a very good research manuscript on the use of drone technology in predicting key forest attributes. However, it needs minor corrections before it can be published. I have marked my comments in the attached manuscript. A few comments below:

1) Spatial coordinates could be included for the study site map

2) Discussion section could also cover the innovative portion of your study with reference to the past studies. You could also discuss how your system is effective in terms of cost compared to airborne and satellite systems.


Comments for author File: Comments.pdf

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 3 Report

The article is interesting, well developed, and easy to read. The authors compared surveys made with low-cost UAV/DGPS against airborne lidar in order to estimate different forest parameters.

I think that the paper has the quality to be accepted in this journal, but it needs some improvements.

My recommendations to the authors are:

Lines 101, 108. The authors mention only one reference on the use of tilted cameras for DAP in forest areas and state that this setting is largely unexplored. However, several papers analyse the effect of tilted cameras on the estimation of forest parameters. I recommend that the authors look for more references on this topic and compare the results reported in those studies with the results shown in this manuscript. Some examples:

Frey, J.; Kovach, K.; Stemmler, S.; Koch, B. UAV Photogrammetry of forests as a vulnerable process. A sensitivity analysis for a structure from motion RGB-image pipeline. Remote Sens. 2018, 10, 912.

Perroy, R.L.; Sullivan, T.; Stephenson, N. Assessing the impacts of canopy openness and flight parameters on detecting a sub-canopy tropical invasive plant using a small unmanned aerial system. ISPRS J. Photogramm. Remote Sens. 2017, 125, 174–183.

Wallace, L.; Bellman, C.; Hally, B.; Hernandez, J.; Jones, S.; Hillman, S. Assessing the ability of image-based point clouds captured from a UAV to measure the terrain in the presence of canopy cover. Forests 2019, 10, 284.

Moreira, B.M.; Goyanes, G.; Pina, P.; Vassilev, O.; Heleno, S. Assessment of the Influence of Survey Design and Processing Choices on the Accuracy of Tree Diameter at Breast Height (DBH) Measurements Using UAV-Based Photogrammetry. Drones 2021, 5, 43.


Table 2. I recommend that the authors add to this table the type of corrections applied to the different lidar surveys (PPK, RTK, or other), so that readers know how the geolocation of the dense clouds was improved.


Lines 253, 256. The authors should explain why they used 85% sidelap/frontlap and a 30° camera tilt. Did they conduct preliminary tests and then choose the best options, or did they rely on the literature to decide?


Line 266. When referring to the GPS receivers (other than the one on the UAS), the authors should use DGPS (differential GPS) rather than HPGPS.


Discussion. I recommend that the authors report the errors associated with the GCP geolocation. They mention that in areas with denser vegetation cover, and with the camera tilted, the GCPs are hard to identify in several images, which is probably the source of the higher “errors” (Line 455). Reporting the GCP deviations would help readers understand this.

In several manuscript sections (for example, line 417) the authors state that only a previous lidar survey can provide a sufficiently accurate DTM to be used as the terrain reference in subsequent flights. Lidar is indeed one option, but it is not the only tool for obtaining an accurate DTM: there are many examples of high-precision DTMs obtained through DGPS surveys or digitized from earlier cartography. I recommend that the authors modify the text in those places.


Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Round 2

Reviewer 1 Report

I was pleased to review the revision of the manuscript by Lamping et al. I thank the authors for so rapidly and thoroughly addressing my concerns. This is a solid and exciting addition to the remote forest mensuration literature. I have only two remaining concerns that I believe can be addressed with a couple additional sentences in the methods and/or discussion.

  1. Thanks for the clarification on your use of LOOCV. You said you revised the description of the LOOCV methods, but I don’t see what was changed (though I really appreciated you providing the tracked changes). In any case, your use of terms such as “predictions” makes sense now that you have pointed out this detail I overlooked (which I acknowledge was already in the manuscript), and I think the description is OK as is. However, with this better understanding, I worry that the forest metric performance results are a bit misleading. As you say in your response to my previous Point 1, “Our goal in this study was only to evaluate UAS DAP as a data source if no lidar data was available for a given area, not to suggest a definitive model to be used in certain landscapes when using UAS DAP.” I therefore believe you should ideally present the validation accuracy metrics for each stand separately. It is clear (e.g., from Fig. 5) that there is much more uncertainty (lower validation R-sq) within stands than among them. By combining all stands’ predictions before computing accuracy metrics, you inflate the accuracy relative to what you would expect in a real-world application to a single stand. With a different model for each stand, you get predictions that are centered around the observed mean for each stand. But because there is great variability among stands in mean forest structure, when you combine the predictions across multiple stands you get clusters of points (each cluster corresponding to a stand), and the clusters generally line up along the 1:1 line, making the fit look tighter than if you looked at each stand model separately, or than if you used an overall model across all stands.
While you would ideally do these additional analyses, I would be satisfied if you simply added a sentence or two to the discussion acknowledging that within-stand prediction accuracy may be lower than what you observe when you combine predictions from stand-specific models across many stands (as you did).
  2. I appreciate you pointing out that because you decimated the point cloud, there is not a higher point density for the multi-angled missions (which had ~twice the number of images as the single-angled missions). Nonetheless, higher image density may mean more than just higher point density; it might mean more accurate 3D representation (e.g., better representing an area that could not be confidently represented with fewer images and thus where the points were filtered out, even before decimation). This should be acknowledged (even in just one sentence) in the discussion.
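The pooling concern in point 1 is easy to demonstrate numerically. In the sketch below, stand-level "predictions" are pure noise around each stand mean (so within-stand skill is essentially zero), yet pooling three stands with very different means still produces a high combined R-sq because the clusters line up along the 1:1 line. All numbers are synthetic:

```python
import numpy as np

def r_squared(obs, pred):
    """Coefficient of determination: 1 - SSE / SST."""
    sse = np.sum((obs - pred) ** 2)
    sst = np.sum((obs - np.mean(obs)) ** 2)
    return 1.0 - sse / sst

rng = np.random.default_rng(42)

# Three hypothetical stands with very different mean "heights" (m); within
# each stand the predictions are noise uncorrelated with the observations.
stand_means = [10.0, 25.0, 40.0]
obs, pred = [], []
for m in stand_means:
    obs.append(m + rng.normal(0.0, 2.0, 20))    # observed plot values
    pred.append(m + rng.normal(0.0, 2.0, 20))   # noise around the stand mean

# Within any single stand the model explains essentially nothing...
within = [r_squared(o, p) for o, p in zip(obs, pred)]

# ...but pooling the stands yields a high apparent R-sq driven entirely
# by the between-stand differences in mean structure.
pooled = r_squared(np.concatenate(obs), np.concatenate(pred))
```

The gap between `pooled` and the per-stand values is exactly the inflation the reviewer describes: between-stand variance dominates the total sum of squares, so pooled accuracy metrics overstate what a stand-specific application would achieve.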



Author Response

Please see the attachment.

Author Response File: Author Response.pdf
