Communication
Peer-Review Record

Cloud Processing for Simultaneous Mapping of Seagrass Meadows in Optically Complex and Varied Water

Remote Sens. 2022, 14(3), 609; https://doi.org/10.3390/rs14030609
by Eva M. Kovacs 1,*, Chris Roelfsema 1, James Udy 2, Simon Baltais 3, Mitchell Lyons 1 and Stuart Phinn 1
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Reviewer 3: Anonymous
Submission received: 26 November 2021 / Revised: 14 January 2022 / Accepted: 24 January 2022 / Published: 27 January 2022
(This article belongs to the Section Remote Sensing Communications)

Round 1

Reviewer 1 Report

My comments have been sufficiently addressed

Author Response

Thank you for reviewing our manuscript.

Reviewer 2 Report

In the current manuscript, seagrass meadows are mapped simultaneously in clear and more turbid waters using cloud-based processing. The methodological approach was developed earlier for coral reef mapping. From that perspective, the current manuscript does not present anything new; however, this is the first time that seagrass has been mapped using this methodology.

Main concern: The mapping depth depends on the ability of the remote sensing sensors to “see” the water bottom. High turbidity makes the water optically deep even in very shallow areas. I would assume that the depth of detection cannot go much deeper than the Secchi depth (in your case, 0.3-1.4 m in the most turbid areas, and below 9 m even in the clearest parts of the study area). However, you state that “Depth limit for object analysis was set at 5 m” (2.3 Mapping method, paragraph 2) and “Mapping depth was limited to 10 m, a depth at which it becomes unreasonable to view seagrass via satellite imagery” (2.3 Mapping method, paragraph 2). Is the mapping depth of 10 m justified if Secchi depths of 0.3-4.4 m prevailed in the study area?

For me, the main concern with the current approach lies in the fact that benthic mapping seems to be performed also in areas of optically deep water, when in fact those areas should be delineated as optically deep. Please explain in what way the depth data were considered in the model. Was the detectable depth reduced in areas of higher water turbidity?

Would it be possible to add a bathymetry layer, so that the approximate depths from which the seagrass coverage assessments were retrieved can be seen alongside the measured Secchi depths?
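To illustrate the kind of turbidity-dependent depth masking being asked about, the following is a minimal sketch (not the authors' method): pixels whose bathymetric depth exceeds a detection limit derived from the local Secchi depth are flagged as optically deep. The factor of 2 (detection limit ≈ 2 × Secchi depth) and all array values are assumptions for illustration only.

```python
import numpy as np

# Assumed illustrative factor: substrate detectable to ~2x the Secchi depth.
DETECTION_FACTOR = 2.0

depth = np.array([[1.0, 3.0, 8.0],
                  [0.5, 6.0, 4.0]])   # bathymetry (m), invented values
secchi = np.array([[1.4, 1.4, 4.4],
                   [0.3, 4.4, 1.4]])  # Secchi depth (m), invented values

# True where the bottom is deeper than the turbidity-dependent limit,
# i.e. where benthic classes should not be assigned.
optically_deep = depth > DETECTION_FACTOR * secchi
print(optically_deep)
```

A benthic classifier would then only label pixels where `optically_deep` is `False`, with the remainder reported as "optically deep / unmappable" rather than as substrate.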

Other remarks:

Could you give a general characterization of the water column? By “turbidity”, do you mean high loads of suspended sediments in the water column? High loads of dissolved organic matter?

Could you give some explanation for the substrate misclassifications? Could it be that high loads of suspended sediments in the water column resemble the signal that comes from bare substrate? High sediment loads in the water column can obscure the vegetation signal, causing vegetated areas to be mapped as bare substrate. Could this be one of the reasons why bare substrate was misclassified?

Once again, I raise the issue of in situ data collection. How well do you think the point observations (could you give the approximate area of a single point?) describe the 30 × 30 m pixel size of Landsat? Were the locations of point collection determined before the field campaign (using previously collected satellite data, airborne data, orthophotos, etc.) so that relatively homogeneous areas were selected for field data collection? I would expect at least a short discussion of this matter.

Minor remarks:

Page 3-4, 2.1 Study Site, paragraph 1: Figures 1a-1d are referred to in the text, while only panels 1a and 1b are provided in Figure 1 (page 3).

Page 4, 2.3 Mapping method: both pixel-based and object-based classification are referred to in the text. If the image was segmented into objects before classification (paragraph 2), then I assume object-based classification was executed? But you still refer to pixel-based classification in paragraph 3. This requires some explanation.

Author Response

Please refer to the attached document.

Author Response File: Author Response.docx

Reviewer 3 Report

The manuscript uses a random forest classifier, based on the GEE platform, to map seagrass in turbid and clear waters. Even though the research objective is very clear, the method used in this study is too simple and lacks innovation. Besides, the accuracy of the mapping results is not high, and the sensitivity of the method to water turbidity is not mentioned. Other minor concerns are as follows:

  1. A more detailed review of seagrass extraction methodologies from remote sensing is required in the introduction section.
  2. If another Landsat 8 image with a very near date were used, that is, if the water environment (water clarity) were different, would the classification result be roughly the same using the same method? I think the classification result depends strongly on the imagery used.
  3. For the imagery used for classification, it is necessary to make clear how many training points fall in clear water and how many fall in turbid water. What are the mapping accuracies of seagrass in turbid and clear water, respectively?
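The per-water-type accuracy the reviewer asks for in point 3 could be reported with a simple stratified tally; the sketch below is illustrative only, with invented point records (predicted class, reference class, water stratum) rather than data from the manuscript.

```python
from collections import defaultdict

# Invented validation records: (predicted, reference, stratum).
points = [
    ("seagrass", "seagrass", "clear"),
    ("seagrass", "bare",     "turbid"),
    ("bare",     "bare",     "clear"),
    ("seagrass", "seagrass", "turbid"),
    ("bare",     "seagrass", "turbid"),
]

counts = defaultdict(lambda: [0, 0])  # stratum -> [correct, total]
for predicted, reference, stratum in points:
    counts[stratum][1] += 1
    if predicted == reference:
        counts[stratum][0] += 1

# Overall accuracy per stratum (clear vs. turbid water).
accuracy = {s: correct / total for s, (correct, total) in counts.items()}
print(accuracy)
```

Splitting the validation points by an independent turbidity layer in this way would directly answer how sensitive the classification is to water clarity.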

Author Response

As advised in the submission receipt email from Ms Carol Fang, because these comments were received more than three days after my notification of the comments from Reviewers 1 and 2 (Wednesday, 8 December 2021, 6:34 PM), I am required to address the comments from those reviewers only:

"Currently, we have collected 2 reports. We request you make major revisions before it is processed further. Moreover, there maybe have another review report on the way. Maybe you will receive it in the next 2-3 days. If not, you may only revise your work according to the two reports. "

Round 2

Reviewer 2 Report

I would like to thank the authors for their responses. Still, I have the following remarks/suggestions:

(1) In their response, the authors write that “As such, there are multiple adaptations (that are described in the text) that make the manuscript novel and of use to a wide audience.” At the same time, in the manuscript (page 4, line 141) they say: “The methodology described in [13] was followed with some minor adjustments for seagrass habitats.” I would suggest defining more specifically (in a point-by-point list) which methodological adaptations were used in the current manuscript compared to the paper by Lyons et al. 2020.

(2) I would suggest removing the entire sentence (page 4, line 158): “Mapping depth was limited to 5 m, as at depths of ~10 m it becomes unreasonable to view seagrass via satellite imagery [36].” This sentence is irrelevant and confusing: if you set your depth limit to 5 m, then it becomes unnecessary to talk about a 10 m depth limit.

(3) As I understand it, the mapping depth limit was set to 5 m based on the bathymetry layers? Still, in my view it would be necessary to delineate optically shallow and optically deep pixels within that 5 m layer. According to your Secchi measurements, it can be assumed that in areas of lower water quality the depth of substrate detection was considerably shallower than 5 m, and it is very confusing if you show that you retrieved bottom maps from areas that cannot be mapped (or can only be mapped as optically deep water) due to the decreased water quality. I would suggest addressing this issue at least in your discussion.

Author Response

Please refer to the attachment

Author Response File: Author Response.pdf

This manuscript is a resubmission of an earlier submission. The following is a list of the peer review reports and author responses from that submission.


Round 1

Reviewer 1 Report

Kovacs et al. present a study to automatically map seagrass along different gradients of water turbidity using machine learning techniques within the Google Earth Engine environment. While the study is at the state of the art and environmentally relevant within the climate change context, I have one major objection regarding the repeatability and robustness of the study. Indeed, the authors highlight that: “making subsequent comparable maps near impossible” (line 70) and “Importantly, any capability to assess seagrass habitats accurately, quantitatively, and spatially over time requires a method that is robust, consistent, and repeatable [8] so that map outputs are comparable over time” (lines 189-191). However, the method was tested locally and on a single Landsat 8 satellite image. Is it possible to test the method on at least one other image from 2015 to see how robust the mapping is? To use this method effectively at a global scale, comprising waters/regions/years of different turbidity, would the dataset need to be retrained each time? How dependent is the method on the training dataset? How many points would ideally be required?


Methodology:

Line 121: “1,076 points from January to October of 2015”. Not enough description of the field data is provided. Were those points collected in transects? At random isolated stations? How representative is one point of a 30 m pixel of Landsat 8? Were points within a pixel averaged?


Line 122: “randomly assigned to training or calibration datasets”. What was the proportion of calibration and validation points? 50-50%?


Is the script available?


Results: Was the performance different according to water turbidity? In which cases was confusion highest? In more turbid water? At percent cover close to 25%? Is it possible that seasonal changes are important at low percent cover?


Discussion: Lines 189-194: “Importantly, any capability to assess seagrass habitats accurately, quantitatively, and spatially over time requires a method that is robust, consistent, and repeatable [8] so that map outputs are comparable over time. Due to its inherent simplicity and applicability across a range of water clarities, the method described here potentially provides capabilities for mapping seagrass extent and dynamics from local to global scales, supporting an understanding of coastal estuarine dynamics and the monitoring and management of these areas.” The first and second sentences of this paragraph seem contradictory to me. In the first, the authors say that outputs should be comparable over time, but in the second it seems sufficient to test a single image over a range of turbidities? Please rephrase.


Reviewer 2 Report

The authors utilize a model previously applied for coral mapping to a study of seagrass in an optically complex coastal environment.

Overall, the short study was well conducted and should be of interest to the community, and it is decently written. The topic is timely, and the use of a model that requires relatively little external data or local tuning will be useful, particularly for monitoring or management programs where absolute accuracy is less important than spatial/temporal coverage.

There are some points that need clarification, however. While I appreciate the brevity of the paper, it is too sparse in places. In particular, the results are barely presented at all, and this must be addressed before the paper is suitable for publication.

I would like more detail on the original model that was utilized and how it was modified. While the authors cite it, there should be a better summary in this paper.

Describe the input satellite imagery: processing level, date range, bands utilized, and any selection or quality criteria that were used.

For context, please cite and discuss other seagrass and benthic mapping efforts in challenging/turbid conditions. Possible studies include:

Dierssen, H.M.; Bostrom, K.J.; Chlus, A.; Hammerstrom, K.; Thompson, D.; Lee, Z.P. Pushing the Limits of Seagrass Remote Sensing in the Turbid Waters of Elkhorn Slough, California. Remote Sens. 2019, 11(14), 1664. https://doi.org/10.3390/rs11141664

Garcia, R.A.; Lee, Z.; Hochberg, E.J. Hyperspectral Shallow-Water Remote Sensing with an Enhanced Benthic Classifier. Remote Sens. 2018, 10, 147. https://doi.org/10.3390/rs10010147

Hedley, J.; Russell, B.; Randolph, K.; Vásquez-Elizondo, R.M.; Dierssen, H. Hyperspectral mapping of seagrass leaf area index and species by a physics-based approach: do sensitivity analyses and practical application agree? Front. Mar. Sci. 2017. https://doi.org/10.3389/fmars.2017.00362

 

This will also help place the success of the model against hyperspectral airborne efforts that utilize complex, regionally tuned models and spectral information for work in challenging coastal environments. What can be achieved without expensive campaigns?

Results

Lines 168-169: I don't really understand this sentence, but far more information (and figures/graphs) must be presented to show whether the model is useful. Did the model perform better in some areas of the bay than others? Please go into far more depth here.


It could be argued that this is not a seagrass mapping exercise but a macrophyte mapping one. A coral mask was applied, but what about algae? What about mixed pixels? In my opinion, this point is the largest issue with the study.
