Article
Peer-Review Record

A Practical Assessment of Using sUASs (Drones) to Detect and Quantify Wright Fishhook Cactus (Sclerocactus wrightiae L.D. Benson) Populations in Desert Grazinglands

by Thomas H. Bates 1, Val J. Anderson 2, Robert L. Johnson 3, Loreen Allphin 2, Dustin Rooks 4 and Steven L. Petersen 2,*
Reviewer 1: Anonymous
Reviewer 2:
Submission received: 19 March 2022 / Revised: 14 April 2022 / Accepted: 24 April 2022 / Published: 28 April 2022

Round 1

Reviewer 1 Report

The manuscript presents a study aimed at testing usability of drones to survey a small cactus in southwestern USA. Overall, I think this is a well written manuscript. The vast majority of my comments are minor and editorial in nature.

L 25 – Since the journal has an international readership, note it is southwest USA desert grazinglands (typo?)

L 25+ - Chicago style would require space between numerals and units (1-8 cm, 10 m, etc.), same with mathematic symbols (p < 0.001)

L 46, 54, 112-118, 140 – Italicize scientific names.

Figure 2. Define the inset of Utah.

Table 1 needs horizontal lines

L 159 – typo Figure 3

L 160, Figure 3, 205 – superscript m2

L 176 – censused not censured?

L 215, 241-245 – Italicize R package names for ease of reading.

Equations 1 and 2 – I understand the tilde as used in R programing, but as an equation for the reader, it should be an equals sign. Same with the notation of 1| for the random effects, not necessary here as they are defined as random effects in L 260-261.

L 264 ?

L 263 – Since Wright fishhook cactus can only be distinguished from a congener by flower/filament color (L 54), how effective was the sUAS at distinguishing species? To me, this is an essential question to address as the congener appears to have overlapping range and vegetatively are indistinguishable. It is one thing that the cacti can be detected and counted from flight elevation of the drone, but species confirmation is needed. It isn’t just that you can count cactus individuals, but distinction of species. If this requires subsequent ground verification, then I think a statement regarding that is necessary.

Author Response

Thank you for this review and suggestions. These were very helpful in improving this manuscript. We have addressed each of the comments that you provided. 

Comment: L 25 – Since the journal has an international readership, note it is southwest USA desert grazinglands (typo?)

Response: Great point. We have rewritten this sentence to reflect the location for readers worldwide. It now states “Wright fishhook cactus (Sclerocactus wrightiae), a small (1-8 cm diameter) endangered species endemic to grazinglands in the southwest desert of Utah, USA”.

Comment: L 25+ - Chicago style would require space between numerals and units (1-8 cm, 10 m, etc.), same with mathematic symbols (p < 0.001)

Response: We have changed all occurrences where a space was missing and inserted those as suggested. This includes between numerals and units and mathematic symbols. Thanks for pointing that out.

Comment: L 46, 54, 112-118, 140 – Italicize scientific names.

Response: Great observation. Thank you. We have italicized all of these scientific names.

Comment: Figure 2. Define the inset of Utah.

Response: This was a helpful comment. We have rewritten this figure caption to address this issue. It now states “Figure 2. (a) Flight locations; (b) enlarged map of a flight area (macro-plot) within a single flight location. Plots are all located in southcentral Utah, USA (see inset map).”

Comment: Table 1 needs horizontal lines

Response: Horizontal lines were added both above and below the text. Thanks for pointing that out.

Comment: L 159 – typo Figure 3

Response: The misspelled word was corrected.

Comment: L 160, Figure 3, 205 – superscript m2

Response: All squared symbols have been raised to the superscript as recommended.

Comment: L 176 – censused not censured?

Response: Yes, good point. This word choice was corrected.

Comment: L 215, 241-245 – Italicize R package names for ease of reading.

Response: All R package names were italicized as recommended.

Comment: Equations 1 and 2 – I understand the tilde as used in R programing, but as an equation for the reader, it should be an equals sign. Same with the notation of 1| for the random effects, not necessary here as they are defined as random effects in L 260-261.

Response: These changes were made. We used an equals sign for both equations instead of the tilde and took out the “1|”.

Comment: L 264 ?

Response: This hard return error was corrected.

Comment: L 263 – Since Wright fishhook cactus can only be distinguished from a congener by flower/filament color (L 54), how effective was the sUAS at distinguishing species? To me, this is an essential question to address as the congener appears to have overlapping range and vegetatively are indistinguishable. It is one thing that the cacti can be detected and counted from flight elevation of the drone, but species confirmation is needed. It isn’t just that you can count cactus individuals, but distinction of species. If this requires subsequent ground verification, then I think a statement regarding that is necessary.

Response: This is a very good point. We have included this statement in the discussion section to address this issue. “Additionally, the correct identification of Wright fishhook cactus from its overlapping congener, small-flower fishhook cactus, is essential in correctly monitoring population densities of each species. We recommend accounting for this differentiation by obtaining images during each species’ specific flowering period, classifying images that account for flower color, and including field verification to ensure species are identified correctly.“

Reviewer 2 Report

I reviewed an earlier iteration of this manuscript for a different MDPI journal last year and was provided the authors’ responses to my previous comments.  I read those responses through carefully, as well as the new version of the manuscript.  I think the manuscript has improved considerably, both in how the study is framed and its conclusions are drawn, and in its clarity.

 

The authors have addressed most of my previous comments well.  I did suggest the paired-samples t-test as appropriate here, and on that point I think the authors’ response missed the point.  The comparisons are on exactly the same plot in each case because each plot was surveyed 4 times (field survey, 10 m, 15 m, 20 m).  This has nothing to do with the pairing of grazed and ungrazed sites (not sure whether there was confusion on that issue).  The point is that there is no need to control for site, cactus ID or anything else when comparing the survey methods, because those methods were applied to exactly the same sites at almost exactly the same time (i.e. with minimal likelihood of any changes to the vegetation between the methods being compared).  I demonstrated last time that paired t-tests are more powerful in this scenario than the modelling used: they were able to find significant differences for comparisons in which the modelling used in the manuscript failed to find those same differences significant.  This is not a type 1 error, but simply a reflection that the pairing in the paired-samples t-test controls the extraneous variation much more precisely than fitting fixed and random effects in a mixed model, for example.  Anyway, I think it is fine for the authors to decide whether to strengthen their results by using paired-samples t-tests or similar, or just stick with the analyses they have.  There is nothing fundamentally wrong with those analyses – it is just that they are less powerful: they are not able to find some of the differences that paired-samples t-tests do find.  The conclusions of the research would not change in any meaningful way, other than to have stronger statistical support.
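The reviewer's power argument can be illustrated with a short simulation. All numbers here are synthetic (the plot effects, noise levels, and mean difference are invented for illustration, not taken from the study): when between-plot variation is large but each method is applied to the same plots, pairing removes the plot-level variance and recovers the method difference that an unpaired comparison misses.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_plots = 12

# Large between-plot variation swamps a small, consistent method difference
plot_effect = rng.normal(30, 10, n_plots)                 # baseline cacti per plot
counts_10m = plot_effect + rng.normal(0, 1, n_plots)      # counts from 10 m imagery
counts_15m = plot_effect - 3 + rng.normal(0, 1, n_plots)  # 15 m sees ~3 fewer

# Unpaired comparison: plot-to-plot variance hides the difference
t_ind, p_ind = stats.ttest_ind(counts_10m, counts_15m)

# Paired comparison: differencing within plots removes the plot variance
t_rel, p_rel = stats.ttest_rel(counts_10m, counts_15m)

print(f"unpaired p = {p_ind:.3f}, paired p = {p_rel:.2e}")
```

The paired p-value comes out orders of magnitude smaller than the unpaired one, which is exactly the extra power the reviewer describes.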

 

The biggest change that I think is necessary is in how the errors (associated with the validation matrix – Table 2) are explained and talked about, and (later) related to the application of estimating population sizes.  There are several relevant points here:

 

*Lines 286-7 bizarrely state that “As anticipated, the errors of commission (EOC) decreased as flight altitude increased”.  This makes no sense.  The commission errors are where places in an image are marked as a cactus but prove not to be a cactus, i.e. false positives. Clearly these errors get more prevalent at higher flight altitudes. The higher the proportion of false positives (i.e. commission errors) the lower will be the ‘percent confirmed’, in direct proportion: they are complements of each other. So if an image had 100 potential cacti marked but only one was an actual cactus then the commission error rate is 99% and the percent confirmed is 1%.  What Table 2 shows is that the percent confirmed decreased as flight altitude increased, which means that the commission errors INCREASED.  Here the authors appear to be confusing the % error with its complement (the % correct).  In doing so, they refer to the EOC column in Table 2, but that is the CORRECTION TERM (see below) associated with the error of commission, and is again not the commission error but its complement.  So lines 286-7 should change to:

“As anticipated, the errors of commission (i.e. 1 minus the EOC correction term in Table 2) increased as flight altitude increased.”
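The complement relationship the reviewer describes is simple arithmetic; a minimal sketch using the hypothetical 100-marked / 1-confirmed example from the paragraph above:

```python
marked = 100      # potential cacti marked in the image
confirmed = 1     # of those, actually a cactus

percent_confirmed = confirmed / marked            # 0.01 -> 1%
commission_error = (marked - confirmed) / marked  # 0.99 -> 99%

# The two are complements: they always sum to 1
assert abs(percent_confirmed + commission_error - 1.0) < 1e-12
```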

 

*Given that the potential confusion (demonstrated by the authors, as above) is also likely in many readers, I think Table 2 needs better explanation (probably in the section headed ‘Validation Matrix’) and some modification:

First, I suggest adding a column headed ‘false positives’ or ‘commission error’, with the numbers in the column either being the total number of false positives (e.g. 101 for 10m altitude) or the percentage commission error (e.g. 35.3% for 10m altitude), accordingly.  If so, then the correction in the previous point can be modified accordingly (and this would helpfully avoid the awkward need to specify 1 minus the EOC correction term).

Second, I suggest that there should be better explanation of what the correction terms are for.  If I understand correctly, these correction terms are in order to produce population estimates for situations where the actual population numbers are not known – in this context, that means for correcting estimates produced from marking cacti in the images, without field surveys.  This is stated (though confusingly – see next point) much later in the manuscript – lines 364-5: “The counts obtained from the imagery can be multiplied by the net error term (Table 2) to obtain population estimates.”  As is already well explained in the caption of Table 2, the net error correction term can be calculated directly by dividing ‘actual’ by ‘marked’, meaning that the EOC and EOO correction terms are not necessary for producing the population estimates.  However, I think it is helpful to demonstrate that the net error correction term is also the product of the EOC correction term and the EOO correction term, as Table 2 does.  But currently this risks confusing error correction terms with errors.  They are not the same, as amply demonstrated above.  It would be very helpful for readers, in my opinion, to explain (in the Validation Matrix section, cross referenced in the caption of Table 2) that:

a) The correction terms are for estimating population from the number of marked individuals.

b) Errors of omission will tend to lead to underestimation of population size, so the greater the omission error the bigger the correction term should be.

c) Errors of commission will tend to lead to overestimation of population size, so the greater the commission error the SMALLER the correction term should be (it is the complement of the commission error).

d) The ‘net error correction term’ is the net adjustment that needs to be made to estimate population sizes, taking into account both the omission and commission errors (which act to cancel each other out). Thus it is not a measure of error, but of the adjustment needed to account for the combined effects of the errors of omission and commission.

e) Therefore the correction terms are derived from the errors, but are not themselves errors – far from it in the cases of the EOC and net error correction terms.
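Points a)–e) can be made concrete with a numeric sketch. The 101 false positives and 35.3% commission error are the 10 m figures quoted above; the marked total of 286 is back-calculated from those two numbers, and the "actual" count of 200 is an invented illustration, not the study's value:

```python
marked = 286       # potential cacti marked on the 10 m imagery
false_pos = 101    # marked but not Wright fishhook cactus (35.3% commission error)
confirmed = marked - false_pos   # 185 confirmed as the target species
actual = 200       # hypothetical true population on the plots

commission_error = false_pos / marked   # the error itself (~0.353)
eoc_term = confirmed / marked           # EOC correction term = 1 - commission error
eoo_term = actual / confirmed           # EOO correction term (>1: omissions undercount)
net_term = actual / marked              # net correction term, computed directly

# The net correction term equals the product of the two correction terms,
# and multiplying an image count by it recovers the population estimate
assert abs(net_term - eoc_term * eoo_term) < 1e-12
assert round(marked * net_term) == actual
```

Note that the EOC and EOO corrections pull in opposite directions, so the net term is closer to 1 than either error alone would suggest, which is the "cancel each other out" behaviour in point d).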

 

*Explaining those things clearly will both make a lot more sense of the various columns in Table 2, avoiding confusion, and will properly justify lines 364-5, where the application of this population estimation process is mentioned.  There also need to be changes made to parts of the text, to avoid confusion between errors and error correction terms, as follows:

Lines 231-2: “The validation data matrix also included three correction terms: errors of omission (EOO), errors of commission (EOC), and net error.” Here the text clearly suggests, wrongly, that the correction terms are errors.  I suggest adding ‘for’ repeatedly, for clarity: “The validation data matrix also included three correction terms: for errors of omission (EOO), for errors of commission (EOC), and for net error.” The next sentence should add that the purpose of the correction terms is to enable estimates of population from the number of marked individuals.

Lines 234-5: “Errors of commission were defined as the ratio of cacti confirmed to the number of cacti marked.”  This is simply wrong because this is a definition of the rate of correct identifications, which is the opposite of commission errors. This needs to be changed to explain correctly what commission errors are (easy if you can simply point to the extra column I recommend above for Table 2!) and then why higher commission errors mean a lower correction term to account for them (as I explain above).

Lines 235-6: “Errors of omission were defined as the ratio of actual cacti to the number of confirmed cacti.” Similar problem here. Although the EOO correction term increases with EOO, it is a derivative of the error of omission and is a correction term, which should not be stated as the definition of errors of omission.

Lines 236-7: “Net error was determined by multiplying the error of omission by the error of commission”. Once again, this sentence incorrectly refers to correction terms as errors.

Line 365: “can be multiplied by the net error term (Table 2) to obtain population estimates” is once again misleading for the same reason. The ‘net error correction term’ is not a measure of error, but of the correction needed to account for errors that act to cancel each other out.  So it should not be called an error term.  That part of the sentence should therefore have ‘correction’ inserted, so it reads:

“can be multiplied by the net error correction term (Table 2) to obtain population estimates”.

 

*The final point relating to estimating populations from imagery is that the calculations shown in Table 2 assume that the work done with the images that is reported in this manuscript is representative of searching images in the absence of data on the actual numbers. I do not think this is a safe assumption, particularly when talking about surveying different areas, as in lines 367-8:

“Cacti were also discernable in all flight altitudes indicating sUAS may be of use in finding new populations.”

The problem here is that in the work reported in this manuscript, sites with no cacti of the target species were excluded – see lines 125-7: “Five plots were removed from the study because they lacked cactus plants, had environmental conditions not suitable for cactus establishment, or supported species other than Wright fishhook cactus.”

This means that people examining the images in this study knew there were cacti of the target species in the images, for all the data in the analyses.  This strikes me as qualitatively different from a situation where it is not known whether there are any such cacti in an image being inspected – and indeed, for any given image taken in a previously unsurveyed area, quite possible or even quite likely that there are none. In such a situation, I would expect detection rates may drop considerably.

 

MORE MINOR AND/OR SPECIFIC POINTS

 

Line 158: “with 0.25, 0.40, and 0.55 GSD (ground sample distance; resolution)”. This should include the units of GSD, so add ‘cm/pixel’.
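The practical meaning of those GSD values can be sketched quickly. The 1-8 cm cactus diameter comes from the abstract, and the pairing of each altitude with a GSD follows the manuscript's ordering (10 m → 0.25 cm/pixel, etc.), which is an inference here rather than a quoted table:

```python
# GSD (cm/pixel) at each flight altitude, as listed in the manuscript
gsd = {"10 m": 0.25, "15 m": 0.40, "20 m": 0.55}

# A Wright fishhook cactus is roughly 1-8 cm across, so its pixel
# footprint shrinks quickly as altitude (and GSD) increases
for altitude, cm_per_px in gsd.items():
    lo, hi = 1 / cm_per_px, 8 / cm_per_px
    print(f"{altitude}: a cactus spans {lo:.1f}-{hi:.1f} pixels")
```

At 20 m the smallest cacti cover under two pixels, which is consistent with the reviewer's later point that plants are harder to identify where GSD exceeds 0.25 cm.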

 

Around line 229: I suggest adding a label for potential cacti that were marked but found not to be a Wright fishhook cactus - the source of commission errors. I suggest 'false positive' as the label, and that can then be added as a column in Table 2 (as suggested above).

 

Lines 293-4: “An average of three more cacti per macro-plot were counted in the 10 m imagery than in the 15 m (p<0.001), and six more than in the 20 m imagery (p<0.001).”  From Fig.5 this appears to be incorrect: looks like ‘three’ and ‘six’ need to change to ‘six’ and ‘nine’ respectively.

 

Line 298: “For size class 1, three more cacti were counted in the 10 m than in the 20m”. The number in the 10 m was only 1.3, so it cannot be three more than anything!  It’s basically about one more.

 

Lines 366-7: “Thus, if high accuracy count data is not requisite, sUAS could shift the workload from the short flowering period to other times of the year.” I criticised this before because I found it misleading, and the authors explained it well in their response letter but did not make the change in the manuscript.  Please add that the suggestion here is that numerous images could be taken during the flowering season and then analyzed during the non-flowering season (for time efficiency) – rather than that drone flights would be conducted during the non-flowering season.

 

Line 379: “higher elevation drone flights (0.25 cm GSD)”. First, I think it should be ‘altitude’, not ‘elevation’, here. Second, it is not clear what is meant by the text in brackets, which is referring to the resolution of the lowest-altitude drone flights, not the higher-altitude ones.  Perhaps it should read something like “(where GSD > 0.25 cm)”?

 

Finally, there are some typos and similar that I spotted. I only noted a few down, though – a good copy-edit is needed.  Ones I noted:

Line 25: ‘grazinlands’ should be ‘grazinglands’.

Line 144: ‘suing’ should be ‘using’.

Line 159: ‘Figjre’ should be ‘Figure’.

Line 176: ‘censured’ should be ‘censused’.

Line 264: remove the errant paragraph break.

Author Response

Thank you for your outstanding review of our paper. Your input, particularly on issues relating to our error matrix presentation, was incredibly helpful. Here we provide a response to each of the comments provided. 

Comment: I did suggest the paired-samples t-test as appropriate here, and on that point I think the authors’ response missed the point.  The comparisons are on exactly the same plot in each case because each plot was surveyed 4 times (field survey, 10 m, 15 m, 20 m). This has nothing to do with the pairing of grazed and ungrazed sites (not sure whether there was confusion on that issue).  The point is that there is no need to control for site, cactus ID or anything else when comparing the survey methods, because those methods were applied to exactly the same sites at almost exactly the same time (i.e. with minimal likelihood of any changes to the vegetation between the methods being compared).  I demonstrated last time that paired t-tests are more powerful in this scenario than the modelling used: they were able to find significant differences for comparisons in which the modelling used in the manuscript failed to find those same differences significant.  This is not a type 1 error, but simply a reflection that the pairing in the paired-samples t-test controls the extraneous variation much more precisely than fitting fixed and random effects in a mixed model, for example.  Anyway, I think it is fine for the authors to decide whether to strengthen their results by using paired-samples t-tests or similar, or just stick with the analyses they have.  There is nothing fundamentally wrong with those analyses – it is just that they are less powerful: they are not able to find some of the differences that paired-samples t-tests do find.  The conclusions of the research would not change in any meaningful way, other than to have stronger statistical support.

Response: This comment makes sense. This research was conducted by a graduate student who has since completed his thesis and taken a new position. I have had difficulty communicating with the student and obtaining the data needed to conduct this t-test. If we can proceed to publish this paper with the current analysis (since you mention that this analysis is not fundamentally wrong), that would be greatly appreciated. However, if this is a concern given the expectations for this analysis, I will continue working to obtain the data and conduct the t-test.

 

Comment: Lines 286-7 bizarrely state that “As anticipated, the errors of commission (EOC) decreased as flight altitude increased”.  This makes no sense.  The commission errors are where places in an image are marked as a cactus but prove not to be a cactus, i.e. false positives. Clearly these errors get more prevalent at higher flight altitudes. The higher the proportion of false positives (i.e. commission errors) the lower will be the ‘percent confirmed’, in direct proportion: they are complements of each other. So if an image had 100 potential cacti marked but only one was an actual cactus then the commission error rate is 99% and the percent confirmed is 1%.  What Table 2 shows is that the percent confirmed decreased as flight altitude increased, which means that the commission errors INCREASED.  Here the authors appear to be confusing the % error with its complement (the % correct).  In doing so, they refer to the EOC column in Table 2, but that is the CORRECTION TERM (see below) associated with the error of commission, and is again not the commission error but its complement.  So lines 286-7 should change to:

“As anticipated, the errors of commission (i.e. 1 minus the EOC correction term in Table 2) increased as flight altitude increased.”

Response: This is a very important and helpful comment and correction. As you pointed out, this information was portrayed incorrectly. We have modified the table as suggested in your next comment, adding a column headed commission error (%), which we now point to in the text here. Thank you for pointing that out.

 

Comment: *Given that the potential confusion (demonstrated by the authors, as above) is also likely in many readers, I think Table 2 needs better explanation (probably in the section headed ‘Validation Matrix’) and some modification: I suggest adding a column headed ‘false positives’ or ‘commission error’, with the numbers in the column either being the total number of false positives (e.g. 101 for 10m altitude) or the percentage commission error (e.g. 35.3% for 10m altitude), accordingly.  If so, then the correction in the previous point can be modified accordingly (and this would helpfully avoid the awkward need to specify 1 minus the EOC correction term).

Response: Again, this was a very helpful comment. We have created a new column headed commission error (%) and inserted the values showing 1 − EOC, which makes much more sense than what we had before. Excellent input.

 

Comment: I suggest that there should be better explanation of what the correction terms are for.  If I understand correctly, these correction terms are in order to produce population estimates for situations where the actual population numbers are not known – in this context, that means for correcting estimates produced from marking cacti in the images, without field surveys.  This is stated (though confusingly – see next point) much later in the manuscript – lines 364-5: “The counts obtained from the imagery can be multiplied by the net error term (Table 2) to obtain population estimates.”  As is already well explained in the caption of Table 2, the net error correction term can be calculated directly by dividing ‘actual’ by ‘marked’, meaning that the EOC and EOO correction terms are not necessary for producing the population estimates.  However, I think it is helpful to demonstrate that the net error correction term is also the product of the EOC correction term and the EOO correction term, as Table 2 does.  But currently this risks confusing error correction terms with errors.  They are not the same, as amply demonstrated above.  It would be very helpful for readers, in my opinion, to explain (in the Validation Matrix section, cross referenced in the caption of Table 2) that:

a) The correction terms are for estimating population from the number of marked individuals.

b) Errors of omission will tend to lead to underestimation of population size, so the greater the omission error the bigger the correction term should be.

c) Errors of commission will tend to lead to overestimation of population size, so the greater the commission error the SMALLER the correction term should be (it is the complement of the commission error).

d) The ‘net error correction term’ is the net adjustment that needs to be made to estimate population sizes, taking into account both the omission and commission errors (which act to cancel each other out). Thus it is not a measure of error, but of the adjustment needed to account for the combined effects of the errors of omission and commission.

e) Therefore the correction terms are derived from the errors, but are not themselves errors – far from it in the cases of the EOC and net error correction terms.

*Explaining those things clearly will both make a lot more sense of the various columns in Table 2, avoiding confusion, and will properly justify lines 364-5, where the application of this population estimation process is mentioned. 

Response: I really appreciate this feedback; it was very helpful, and the details you provided were excellent. If you are OK with it, we have included most of the wording from this review directly in the text. If this comes across as plagiarism of your ideas, please let me know and I will make adjustments; otherwise, we would like to keep it as is, since it was such a clear explanation.

 

Comment: There also need to be changes made to parts of the text, to avoid confusion between errors and error correction terms, as follows:

Lines 231-2: “The validation data matrix also included three correction terms: errors of omission (EOO), errors of commission (EOC), and net error.” Here the text clearly suggests, wrongly, that the correction terms are errors.  I suggest adding ‘for’ repeatedly, for clarity: “The validation data matrix also included three correction terms: for errors of omission (EOO), for errors of commission (EOC), and for net error.” The next sentence should add that the purpose of the correction terms is to enable estimates of population from the number of marked individuals.

Response: These edits were incorporated into the text just as was suggested.

 

Comment: Lines 234-5: “Errors of commission were defined as the ratio of cacti confirmed to the number of cacti marked.”  This is simply wrong because this is a definition of the rate of correct identifications, which is the opposite of commission errors. This needs to be changed to explain correctly what commission errors are (easy if you can simply point to the extra column I recommend above for Table 2!) and then why higher commission errors mean a lower correction term to account for them (as I explain above).

Response: We have adjusted the tables and reworded this sentence so it correctly reflects commission errors. Again, this was a very helpful comment that pointed out an issue with the way we presented our data. We appreciate this insight.

 

Comment: Lines 235-6: “Errors of omission were defined as the ratio of actual cacti to the number of confirmed cacti.” Similar problem here. Although the EOO correction term increases with EOO, it is a derivative of the error of omission and is a correction term, which should not be stated as the definition of errors of omission.

Response: We have rewritten this sentence to correct the definition of errors of omission in this study.

 

Comment: Lines 236-7: “Net error was determined by multiplying the error of omission by the error of commission”. Once again, this sentence incorrectly refers to correction terms as errors.

Response: We have modified this sentence to more effectively describe what was measured.

 

Comment: Line 365“can be multiplied by the net error term (Table 2) to obtain population estimates” is once again misleading for the same reason. The ‘net error correction term’ is not a measure of error, but of the correction needed to account for errors that act to cancel each other out.  So it should not be called an error term.  That part of the sentence should therefore have ‘correction’ inserted, so it reads:

“can be multiplied by the net error correction term (Table 2) to obtain population estimates”.

Response: This change was made to the sentence to correct this issue with our explanation of these terms.

 

Comment: *The final point relating to estimating populations from imagery is that the calculations shown in Table 2 assume that the work done with the images that is reported in this manuscript is representative of searching images in the absence of data on the actual numbers. I do not think this is a safe assumption, particularly when talking about surveying different areas, as in lines 367-8:

“Cacti were also discernable in all flight altitudes indicating sUAS may be of use in finding new populations.”

The problem here is that in the work reported in this manuscript, sites with no cacti of the target species were excluded – see lines 125-7: “Five plots were removed from the study because they lacked cactus plants, had environmental conditions not suitable for cactus establishment, or supported species other than Wright fishhook cactus.”

This means that people examining the images in this study knew there were cacti of the target species in the images, for all the data in the analyses. This strikes me as qualitatively different from a situation where it is not known whether there are any such cacti in an image being inspected – and indeed, for any given image taken in a previously unsurveyed area, quite possible or even quite likely that there are none. In such a situation, I would expect detection rates may drop considerably.

Response: A comment was added that reflects this issue. We inserted the statement “Cacti were also discernable in all flight altitudes indicating sUAS may be of use in finding new populations; however, these images were taken in areas where Wright fishhook cactus was known to occur. In areas where plant presence is not known, detection rates could be lower.”

 

MORE MINOR AND/OR SPECIFIC POINTS

Comment: Line 158: “with 0.25, 0.40, and 0.55 GSD (ground sample distance; resolution)”. This should include the units of GSD, so add ‘cm/pixel’.

Response: The phrase has been modified to include the units cm/pixel.

 

Comment: Around line 229: I suggest adding a label for potential cacti that were marked but found not to be a Wright fishhook cactus - the source of commission errors. I suggest 'false positive' as the label, and that can then be added as a column in Table 2 (as suggested above).

Response: Excellent suggestion. We have included a sentence in this paragraph stating that “false positives” were cacti that were marked but found not to be Wright fishhook cactus, the source of commission error.

 

Comment: Lines 293-4: “An average of three more cacti per macro-plot were counted in the 10 m imagery than in the 15 m (p<0.001), and six more than in the 20 m imagery (p<0.001).”  From Fig.5 this appears to be incorrect: looks like ‘three’ and ‘six’ need to change to ‘six’ and ‘nine’ respectively.

Response: Yes, thanks. That was a mistake. Thanks for pointing that out. These values have been corrected.

 

Comment: Line 298: “For size class 1, three more cacti were counted in the 10 m than in the 20m”. The number in the 10 m was only 1.3, so it cannot be three more than anything!  It’s basically about one more.

Response: Thank you for pointing this out. We have corrected these inaccuracies and modified the wording to be more clear.

 

Comment: Lines 366-7: “Thus, if high accuracy count data is not requisite, sUAS could shift the workload from the short flowering period to other times of the year.” I criticised this before because I found it misleading, and the authors explained it well in their response letter but did not make the change in the manuscript.  Please add that the suggestion here is that numerous images could be taken during the flowering season and then analyzed during the non-flowering season (for time efficiency) – rather than that drone flights would be conducted during the non-flowering season.

Response: Excellent suggestion. We modified the wording as suggested, which now states: “sUAS could be used to obtain numerous images during the relatively short flowering period, which could then be analyzed during the non-flowering period (increasing sampling efficiency).”

 

Comment: Line 379: “higher elevation drone flights (0.25 cm GSD)”. First, I think it should be ‘altitude’, not ‘elevation’, here. Second, it is not clear what is meant by the text in brackets, which is referring to the resolution of the lowest-altitude drone flights, not the higher-altitude ones.  Perhaps it should read something like “(where GSD > 0.25 cm)”?

Response: Good point. We have reworded this so that it now states “…cactus plants were more difficult to identify from higher altitude drone flights (where GSD > 0.25 cm)…”

 

Comment: Line 25: ‘grazinlands’ should be ‘grazinglands’.

Response: This word is now correctly spelled.

 

Comment: Line 144: ‘suing’ should be ‘using’.

Response: Thank you for pointing that out. This misspelling was corrected.

 

Comment: Line 159: ‘Figjre’ should be ‘Figure’.

Response: The misspelled word (Figure) was corrected.

 

Comment: Line 176: ‘censured’ should be ‘censused’.

Response: Excellent point. This word choice was corrected.

 

Comment: Line 264: remove the errant paragraph break.

Response: This hard return error was corrected.

This manuscript is a resubmission of an earlier submission. The following is a list of the peer review reports and author responses from that submission.


Round 1

Reviewer 1 Report

I commend the authors for the effort put forth to submit this manuscript. The contents of the manuscript are interesting and contribute to the science and applications of sUAS for endangered species monitoring. One of the big takeaways of the paper, in my mind, is that the technique, while not as reliable as ground surveys, may represent a first assessment in areas where cacti are suspected but not yet confirmed. This should be further up in the discussion because in my mind it's a big selling point of the paper. I found the analyses to be appropriate for the manuscript and only had minor wording changes. I do recommend the authors enlarge some of the figures or increase their resolution.

Comments for author File: Comments.pdf

Author Response

Thank you for your comments and input. These were very helpful and improved the paper. The following are the comments/suggested edits provided by the reviewer, followed by our responses.

Introduction

Comment: recast sentence or split into two. Recommend changing the wording "human take" to something else to prevent confusion to readers.

Response: we broke this sentence into two sentences, and changed the wording to improve clarity. This included changing "human take" to "when humans collect or eradicate individuals of a species...". 

Comment: add the scientific name for Wright's cactus after the first use of the common name in the Introduction.

Response: we added the scientific name. Good catch. 

Comment: change the word "fanciers" to "collectors"

Response: done

Materials and Methods

Comment: remove the word "both" in the first sentence of section 2.2.

Response: done. The word "both" was deleted. 

Comment: delete the word "All" in the sentence "All flights were conducted during the flowering period".

Response: done. The word "All" was deleted.

Comment: in section 2.4. Analysis, delete the words "..., a total of..."

Response: These words were deleted for clarity and for conciseness.

Results

Comment: increase the resolution or size of the graph figures.

Response: we increased each image so they were more legible and clear. Thank you, that was a great suggestion.  

Discussion

Comment: One of the big takeaways in the paper in my mind is the fact that the technique, while not being as reliable as ground surveys, does represent perhaps a first assessment in an area where cacti may be suspected but not present. This should be further up in the discussion because in my mind it's a big selling point of the paper.

Response: This was a great idea. We addressed this by including a sentence midway through the first paragraph in the discussion that states "While our study found that ground-based measurements are highly effective in locating Wright’s fishhook cactus, remote sensing technology can be a useful first assessment technique for locating plants, particularly in areas where cacti may be suspected but have not yet been located."

Reviewer 2 Report

Signed review by Richard Field (University of Nottingham, UK)

 

This paper tests the use of low-flown drone imagery to detect individuals of a small, rare cactus that is of conservation concern, in a desert landscape that is used for grazing.  This case study is presented as helping to move the field forward in terms of detecting small, rare plants using remote sensing.  As such, it is potentially appropriate for both the journal and the Special Issue ‘Applications of Remote Sensing for Livestock and Grazing Land Management’.  However, (A) there are problems with the write-up of the research done, including lack of explanation of various aspects of the methods, presentation of the findings in a contradictory way, and logical inconsistencies.  Also, the rather complex analyses seem to miss significant differences that simple alternative analyses find.  But most fundamentally, at least with respect to publication in this journal, (B) I think the way the images are analysed can be improved, to produce more of an advance in the field.  I now expand on these main points, and then add (C) some more specific issues.

 

(A) PROBLEMS WITH THE MANUSCRIPT, AS A WRITE-UP OF THE WORK THAT HAS BEEN DONE.

 

The ABSTRACT can be improved in various ways, including:

*It does not say how the images were analysed, which is surely essential information in this journal!

*It only reports statistical significance and not effect size. The fact that the finer-resolution images produce significantly better results is so unsurprising as to be almost obvious a priori, but how much improvement is achieved at the finer resolutions is the key information.

*The statement of the main implication seems too vague to be useful: ‘We suggest that sUAS can be effectively used to locate cactus within grazingland areas, but should be coupled with ground surveys for higher accuracy and reliability.’

*To make matters worse, the next sentence (final one of the Abstract) then more-or-less contradicts it! That final sentence is ‘While sUAS-based remote sensing may have been successfully used in a variety of vegetative surveys for larger species and groups, it is important to acknowledge that these technologies can have limitations in effectively detecting small, low-growing individual plants such as fishhook cactus species.’

 

If I correctly understand what was done, the image analysis was just people looking at the imagery and trying to spot the cacti – so no technical advance is offered by the research.

The key results are that this exercise missed 62-90% of the cacti that were actually present, and of the things in the images that were marked as individual cacti of the target species, 35-73% of them were not actually the target species (depending on the resolution, in both cases).  In both respects, the least-bad results were for the finest resolution (unsurprisingly).  The amount of time taken to do the drone work and image analysis was actually more than the detailed field surveys of the same areas (line 297).

 

So the abstract does not actually convey the key results, which were basically that the method used did not provide a clear advantage over the field surveys that are currently used for the species.

 

The two possible advantages of the methods that are mentioned in the manuscript are not mentioned in the abstract: obtaining population estimates by utilising the error term, and using drone surveys outside the short flowering period of the target species.  (But I’m not convinced; more on this below.)

 

 

The METHODS NEED TO BE EXPLAINED BETTER, in several respects:

‘GPS locations were recorded for each cactus’ (line 114). Hand-held GPS or differential?  What was the accuracy involved?  If the error was several metres then how does this affect the matching with cacti identified from the imagery?  I guess hand-held GPS with ~4m error, which thus seems likely to introduce error to the validation matrix.  If so, it needs pointing out and quantifying, and ideally steps taken to address the problem.

 

‘Before conducting flights, we explored the possibilities of using both near infrared (NIR) and Red-Green-Blue (RGB) imagery to detect Wright fishhook cacti. While some species, such as prickly pears (Opuntia sp.), presented a distinct reflectance signature in NIR, the signature of Wright fishhook cacti was weak and less effective in distinguishing plants compared to RGB images.’ (From line 122.)  This appears to be an integral part of the research presented in this manuscript, but no further information is provided, and there is no citation of anything to back it up (so presumably it is not published elsewhere).  This is far from sufficient information on this aspect of the method (and this aspect of the method is very much in the remit of remote sensing).

 

‘Plots were censused on foot for cacti immediately following the three flights’ (Line 136). Were the ground censuses and the visual image ‘analyses’ done by the same people or independent ones?  If the same people, then there seems to be a chance of biasing the results – for example, by people remembering which plots had more of the cacti in, and adjusting their visual inspection of each image accordingly (whether consciously or unconsciously).

 

Line 147: ‘Our original intent was to use object-based image analysis (OBIA) in eCognition (Trimble Inc., Sunnyvale, California) to count the number of cacti in each image. However, after we determined that the software could not define a cactus as an object, we abandoned this method.’ Need more details about what was tried, here.  This is integral to the remit of the journal.

 

Line 151: ‘we determined that hand counting individuals from the images would be the best alternative.’  I assume that this means that the authors just looked at the images and tried to identify the cacti visually, but this seems to contradict what is said later (see below).  This needs to be clearer.

 

Line 187, equation 1: it is not clear whether ‘Site’ refers to the site in the sense used in Section 2.1 (two plots per site) or the plot.  It would seem to make more sense for it to be the plot, in the context of this manuscript, which basically treats the plots as independent samples, rather than paired.

 

Line 195, equation 2: what is meant by ‘cactus ID’ here?  Why is it included as a random effect?  I do not see the reasoning, here.

 

 

PROBLEMS WITH THE ANALYSES

First, there seems to be an error in reporting the results: ‘An average of three more cacti per macro-plot were counted in the 10 m imagery than in the 15 m (p<0.001), and six more than in the 20 m imagery (p<0.001)’ (line 224). Fig. 4a and Table 1 show that there were over 13 cacti found per plot, on average, in the 10 m imagery, only about 7 in the 15 m imagery and only about 4 in the 20 m imagery.  So it is six more and nine more per plot, respectively, not three and six.

 

More importantly, the mixed modelling approach seems unnecessarily complex.  On inspection of the results, it seemed surprising to me that the modelling did not find significant differences for some comparisons (for example, in Fig. 4b and 4d, and in Fig. 5b and 5d).  The results are paired: each plot has a ground survey, and a remote sensing survey at each of three resolutions.  Surely you can just do a paired-samples t-test to compare these results, therefore!  That controls extraneous influences much better than the mixed modelling, as well as being much simpler.  I quickly pulled the data from supplementary file 2 and ran a paired-samples t-test on sqrt-transformed counts, to compare 10 m imagery with 15 m imagery.  I did this for the total count, and for each of the three size classes.  The differences were all significant, with P<0.015 in all cases.  For comparison, the mixed modelling had P=0.08 and P=0.17, respectively, for size classes 3 and 1, according to lines 226-230.  Spreadsheet attached (in pdf form – sorry, the system would not let me submit the actual spreadsheet).
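The analysis the reviewer describes is straightforward to reproduce. A minimal sketch using scipy, with toy per-plot counts (not the study's data) standing in for supplementary file 2:

```python
import numpy as np
from scipy.stats import ttest_rel

# Toy per-plot counts (NOT the study's data): 10 m vs 15 m imagery,
# one value per macro-plot, so the samples are naturally paired.
counts_10m = np.array([13, 15, 9, 18, 12, 14, 11, 16])
counts_15m = np.array([7, 9, 5, 10, 6, 8, 6, 9])

# Square-root transform to stabilise the variance of the counts,
# then a paired-samples t-test (each plot acts as its own control).
t, p = ttest_rel(np.sqrt(counts_10m), np.sqrt(counts_15m))
print(f"t = {t:.2f}, p = {p:.4f}")
```

The pairing is the key design point: differencing within plots removes between-plot variation (soil, density, observer) that the mixed model otherwise has to estimate.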

 

 

CONTRADICTIONS need to be sorted out.

Line 264: ‘This suggests that this technology and image analysis software are suitable for quantifying and monitoring plant populations.’  This seems to be a contradiction with the method actually used: according to lines 147-151, the attempt at object-based classification was abandoned and instead people just tried to spot cactuses in the imagery by eye.  In other words, there was no software-based image analysis!

 

Line 268: ‘While accuracy in detection was increased, the amount of time required to cover the study area was reduced.’  Wrong!  With finer resolution, the accuracy increased but the amount of time required also INCREASED.

 

Line 280: ‘The counts obtained from the imagery can be multiplied by the net error term (Table 1) to obtain population estimates.’ I do not think it is sufficient just to float this.  Arguably, if this manuscript is contributing anything useful, then it is probably here – so this proposed method needs to be demonstrated empirically.  For example, for each image used in the manuscript (or for each site), you could predict using the error term for other images or sites (i.e. excluding the focal one), and then correlate those predictions with the actual populations in the plots.  (This is not a contradiction but an omission; I include it here because the next apparent contradiction builds on it.)
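The leave-one-out validation proposed here could be sketched as follows. The counts are toy numbers, and the pooled true-to-detected ratio is a hypothetical stand-in for the manuscript's "net error term":

```python
# Toy data (NOT the study's): per-site image counts and true ground counts.
image_counts = [4, 7, 3, 9, 5, 6]
ground_counts = [13, 18, 11, 25, 15, 17]

def loo_estimates(image_counts, ground_counts):
    """Leave-one-out population estimates: for each focal site, derive a
    correction factor (true/detected ratio) from the OTHER sites only,
    then scale the focal site's image count by that factor."""
    estimates = []
    for i in range(len(image_counts)):
        others_img = sum(c for j, c in enumerate(image_counts) if j != i)
        others_grd = sum(c for j, c in enumerate(ground_counts) if j != i)
        factor = others_grd / others_img  # pooled correction factor
        estimates.append(image_counts[i] * factor)
    return estimates

est = loo_estimates(image_counts, ground_counts)
print([round(e, 1) for e in est])
```

Correlating these held-out estimates against the actual ground counts would give an honest measure of how well the error-term approach generalises to sites it was not fitted on.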

 

Line 282: ‘Thus, if high accuracy count data is not requisite, sUAS could shift the workload from the short flowering period to other times of the year.’  If I am correct in my assumption that the flowers are part of visually identifying the cacti from the images, then there seems no basis for the statement that the surveying could be done outside the flowering season.  Further, if it is relatively easy to identify the cacti without flowers, then why are the much more accurate ground surveys only done in the flowering season?  Indeed, it seems to contradict what is said from line 49: the focal species ‘is only readily distinguishable from its widespread relative, the small-flower fishhook cactus (Sclerocactus parviflorus Clover & Jotter), using flower and filament color.’

 

Line 302, which starts the Conclusions section: ‘We found that sUAS along with ground-based surveys can be used to improve the detection and capacity for monitoring the endangered Wright fishhook cactus.’  This seems to contradict both the findings (rather poor detection of the cacti from the images) and the previous paragraph, particularly line 297: ‘evaluation, the use of sUAS constituted an overall loss in time relative to ground censuses.’

 

 

(B) RECOMMENDATION FOR RE-ANALYSIS OF THE IMAGERY

I wonder whether machine learning-based analysis of the imagery would actually produce better results, if done in a different way to the object-based classification you tried.  I have a PhD student currently using code that he has adapted from the first half of the following article: https://towardsdatascience.com/color-identification-in-images-machine-learning-application-b26e770c4c71. He is using this to find small patches of non-leaf background in leaf scan images.  The background is not uniform in colour, but does differ in colour from the leaves.  This seems equivalent to images where there are small patches of colour (the cactus flowers) that are different from the rest of the image.

 

It seems to work well, with some modification.  My suggestion, then, is that you work up to (and including) the ‘Get colors from an image’ section of the article, which should produce a pie chart showing the dominant colours in an image (and the number of pixels of each colour type). You will probably need to increase the number of clusters (i.e. K in the K-means clustering), given that the flowers occupy only a very small part of any image, but that shouldn’t be a problem. The colours are shown on the pie chart, so it is easy to see whether they look right, and to match them to parts of the images.

 

You will also need to add in this line if you want to classify the image using your KMeans colour clusters:

img_quantised = clf.cluster_centers_[labels].reshape(img.shape).astype('uint8')

 

You will then be able to visualise using the colours you have.

 

Note: this is all Python code (though presumably the same is also possible in R).

 

If the cactus flower colours are too close to some of the background colours (e.g. yellowish sand or soil), then it may not work, but it seems worth a try.  If it works, it would seem to me to produce the sort of advance that one might expect to see in this journal.
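Assuming the flower colour is rare but distinct, the colour-clustering idea can be sketched without the sklearn dependency of the linked article (whose `clf.cluster_centers_` attribute the code line above refers to). This is a minimal numpy-only K-means over pixel RGB values, run on a synthetic toy image rather than the study's imagery:

```python
import numpy as np

def dominant_colors(pixels, k=5, iters=20, seed=0):
    """Tiny K-means over an (N, 3) array of RGB pixels.
    Returns (centers, counts): k dominant colours and the number of
    pixels assigned to each cluster."""
    rng = np.random.default_rng(seed)
    centers = pixels[rng.choice(len(pixels), k, replace=False)].astype(float)
    for _ in range(iters):
        # assign each pixel to its nearest centre
        d = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = pixels[labels == j].mean(axis=0)
    counts = np.bincount(labels, minlength=k)
    return centers, counts

# Toy "image": mostly sandy background with a small patch of magenta "flowers".
rng = np.random.default_rng(1)
sand = rng.normal([194, 178, 128], 10, size=(980, 3))
flowers = rng.normal([200, 30, 120], 10, size=(20, 3))
img = np.clip(np.vstack([sand, flowers]), 0, 255)

centers, counts = dominant_colors(img, k=3)
print(centers.round(0), counts)
```

As the reviewer notes, k would likely need to be increased in practice so that a colour occupying a very small fraction of the pixels can claim its own cluster; whether the flower colour actually separates depends on how distinct it is from the background.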

 

 

(C) OTHER SPECIFIC POINTS

Title, line 2: the Latin species name should be in italics.

 

Line 113: although it is not the focus here, the pairing should be explained, and not just left hanging.

Similarly, in supplementary file 2, F and U are not defined (I guess fenced and unfenced?)

 

Line 116: why were 15 of the 20 plots selected for drone flights?  It is odd, given that this translates into both (paired) plots in 7 sites and only one of the two plots in an 8th site.

 

Define ‘GSD’ before reducing to that abbreviation.

 

Figure 3: it would be helpful to annotate the images to show the locations of the two flowering individuals.

 

Line 180: ‘defined by Ronald Kass (2001)’. Kass (2001) is in the reference list, so delete ‘Ronald’ and add the reference number.

 

The sentence in lines 205-7 is redundant, just repeating information from above and from Table 1.  Delete.

 

When analysing the effect of plant size (e.g. Figure 6), I wonder whether it would make more sense to measure the diameter of the flower, rather than the plant – or both.  I get the impression that the flowers are what are visually picked out of the images more than the other parts of the plants.

Comments for author File: Comments.pdf

Author Response

Please see attachment

Author Response File: Author Response.docx

Reviewer 3 Report

(Remarks to the Author):

 

The authors developed a practical method to quantify Wright fishhook cactus populations using a drone system.

 

Overall, this research is very interesting and related to the aim of this journal. Actually, it is good to know that the drone method works well to detect the population of small vegetation in drylands. Moreover, the language and structure of this paper are good. I would like to recommend this paper for a minor revision.

 

Here are my comments and suggestion for this paper:

 

  1. Please add the importance of Wright Fishhook Cactus to the dryland ecosystem in the abstract.
  2. It is better to add a workflow to show the reader the process of obtaining the images and processing the images. You can find an example in “Drone-Based Remote Sensing for Research on Wind Erosion in Drylands: Possible Applications”.
  3. It is better to add a table to detail the parameters of the drone and the camera. You can find an example in “Drone-Based Remote Sensing for Research on Wind Erosion in Drylands: Possible Applications”.
  4. For 2.4.2, please add some details for lme4 [21], lmerTest [22], MuMIn [23]. Did you use a classification method for the analysis here?

 

 

 

Author Response

We appreciate your review and feedback. This input has helped improve this paper. The following are the comments/suggested edits provided by the reviewer followed by our response.

Comment: add the importance of Wright Fishhook Cactus to dryland ecosystems in the abstract.

Response: we added a sentence to the abstract that now more effectively describes the value of this cactus by stating "..., that enhances soil stability, provides nectar for pollinating insect species, and increases biodiversity in hot arid environments."

Comment: add a workflow to show the reader the process of obtaining the images and processing the images. 

Response: a workflow chart, much like the one provided in the recommended paper, was added.

Comment: add a table to detail the parameters of the drone and the camera.

Response: A table was added with the specifications of the camera, imagery, and flight parameters, comparable to the table in the recommended paper by Zhang et al.

Comment: add some details for lme4 [21], lmerTest [22], MuMIn [23].

Response: we added text to clarify what these packages do, which now reads "lme4 provides functions for fitting and analyzing mixed models, MuMIn performs model selection and averaging, and lmerTest provides p-values in Type I, Type II, or Type III summary tables for linear mixed models". We did not perform any other classification methods.
