Article
Peer-Review Record

Comparative Analysis of the Global Forest/Non-Forest Maps Derived from SAR and Optical Sensors. Case Studies from Brazilian Amazon and Cerrado Biomes

Remote Sens. 2021, 13(3), 367; https://doi.org/10.3390/rs13030367
by Edson E. Sano 1,2,*, Paola Rizzoli 3, Christian N. Koyama 4, Manabu Watanabe 4, Marcos Adami 5, Yosio E. Shimabukuro 6, Gustavo Bayma 7 and Daniel M. Freitas 1
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Submission received: 24 December 2020 / Revised: 19 January 2021 / Accepted: 20 January 2021 / Published: 21 January 2021
(This article belongs to the Special Issue Remote Sensing Data Interpretation and Validation)

Round 1

Reviewer 1 Report

This is a very interesting comparative analysis between forest/non-forest maps derived from different remote sensing sensors.

The causes of the discrepancies shown in Figures 10 and 14 are not well explained. Also, could the descriptions in lines 501-512 referring to specific sections of the figure be flipped (is 14D actually 14A, etc.)? When describing the reservoir, I do not see a reservoir in the 14D Landsat image, but perhaps I see one in 14A. How are reforestation or harvested areas evident when only FNF classes are shown? Areas known to have changed between observations should be masked, as they will only cause confusion. Can you speculate why the TanDEM-X FNF map should be more effective for identifying fragmented forests?

What I would like to see in the conclusions are recommendations for which product to use, but unfortunately the comparison against the Landsat 8 analysis is inconclusive. Please explain that more quantitatively.

I am quite surprised by the statement "This study indicated that SAR-derived FNF global products over the tropical rainfall forests and savannahs are not really comparable with each other and neither with the Landsat 8 derived products." "Not really comparable" is a low standard, and it is apparently not met. I would like to see more detail about what you mean by that. Figure 7 would seem to indicate that they are comparable, and Figures 9 and 13 seem to indicate overall good agreement. You also state in line 530 that there is 90% and 80% agreement, so why are they not comparable? What are the exact criteria you are looking for in order for the products to be considered comparable?

The authors set out to answer the question: "how comparable is the analyzed global FNF maps derived from SAR and optical satellites?", and apparently the answer is that they are not really comparable.  But "not really comparable" is not specific or quantitative enough to answer the question regarding "how comparable".
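To make the request for an explicit criterion concrete, the sketch below shows one possible quantitative test of comparability between two co-registered binary FNF maps (overall pixel-wise agreement and Cohen's kappa against a chosen threshold). The function name, array inputs, and the 90% threshold are illustrative assumptions for this example, not values or code taken from the manuscript.

```python
# Hypothetical sketch of a quantitative "comparability" criterion between two
# co-registered binary FNF maps (1 = forest, 0 = non-forest). Illustrative only.
import numpy as np

def fnf_agreement(map_a, map_b, threshold=0.90):
    """Return overall agreement, Cohen's kappa, and whether the chosen
    agreement threshold is met."""
    a = np.asarray(map_a).astype(bool).ravel()
    b = np.asarray(map_b).astype(bool).ravel()
    n = a.size

    # 2 x 2 confusion matrix counts: rows = map A, columns = map B
    ff = np.sum(a & b)      # both forest
    nn = np.sum(~a & ~b)    # both non-forest
    fn = np.sum(a & ~b)     # A forest, B non-forest
    nf = np.sum(~a & b)     # A non-forest, B forest

    overall = (ff + nn) / n                     # observed agreement
    p_a, p_b = (ff + fn) / n, (ff + nf) / n     # forest proportions in A and B
    expected = p_a * p_b + (1 - p_a) * (1 - p_b)  # agreement expected by chance
    kappa = (overall - expected) / (1 - expected)

    return overall, kappa, overall >= threshold
```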

Author Response

Please see the attachment.

Author Response File: Author Response.docx

Reviewer 2 Report

Good for publication in the present form.

Author Response

We would like to thank the reviewer for the time spent reading the manuscript.

Round 2

Reviewer 1 Report

Regarding one previous question, "Areas known to have changed between observations should be masked as they will only cause confusion": what I meant is that, when the time gap between the products covers areas that are known to have changed, these areas should be masked out in the figures, because including them makes the differences between the products confusing to interpret. I recommend that you make this change to your figures, as it will make them easier for the reader to interpret.
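As an illustration of the masking being requested, the minimal sketch below sets pixels flagged in a known-change mask to a no-data value before an FNF difference map is displayed or summarised. The array names (fnf_sar, fnf_optical, change_mask) and the no-data value are assumptions for the example, not objects from the manuscript.

```python
# Illustrative sketch: exclude known-change pixels from an FNF difference map.
import numpy as np

NODATA = 255  # assumed flag value for masked (changed) pixels in an 8-bit map

def masked_difference(fnf_sar, fnf_optical, change_mask):
    """Difference map (0 = agree, 1 = disagree) with known-change areas masked.
    change_mask is True where land cover is known to have changed between the
    two acquisition dates."""
    diff = (np.asarray(fnf_sar, dtype=np.uint8)
            != np.asarray(fnf_optical, dtype=np.uint8)).astype(np.uint8)
    diff[np.asarray(change_mask, dtype=bool)] = NODATA  # drop changed pixels
    return diff
```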

This manuscript is a resubmission of an earlier submission. The following is a list of the peer review reports and author responses from that submission.


Round 1

Reviewer 1 Report

The authors present comparisons between different satellite sensors to see how their forest/non-forest estimates compare with each other. I found it to be a very interesting paper, with only minor corrections needed prior to publication. One thing lacking in the paper is an assessment of accuracy relative to the Landsat products. This is ultimately what the readers would like to know. Please check carefully for grammatical errors.

Line 223. For the JAXA FNF product, a clear definition of forest is provided here. However, I did not see the forest definition documented for the DLR TanDEM-X product, the Landsat LULC product, or the Landsat 8 OLI product. Please provide information about how these datasets define "forest". Are the definitions comparable?

Paragraph starting on line 303. What is the accuracy of the forest classification based on visual interpretation?

Line 355. Yes, it seems easy to obtain an FNF map, but is it possible to know which one is more accurate?

Line 362. Please provide a reference. Since the X-band map is derived from the coherence rather than the backscatter, could this mitigate the difference in penetration capability?

Figure 7. Could a Landsat image of this same area from 2015 be included in this figure?

Figure 9 and later figures. Please provide the exact center geographic coordinates, and add a label indicating whether green and red correspond to forest or non-forest in the SAR imagery. It would be more helpful if, instead of an RGB composite of the Landsat imagery, a forest/non-forest classification of these data were provided. If this is not possible, please explain why not.

In general, how are water bodies indicated in the FNF maps by DLR and JAXA? Please provide the exact center coordinates.

Figure 14. Please provide the exact center coordinates.

Conclusions. Please explicitly state which product corresponds more closely with the Landsat results.

Reviewer 2 Report

1) The title seems too lengthy. A slight rearrangement is required; words such as "case study" and "comparative analysis" should be used in the rearrangement. The method used for the comparative analysis must be given in the title.

2) The abstract plainly talks about the comparative analysis, but the methods used to analyse the images are not given in the abstract. They must be included in the last few lines of the abstract.

3) The introduction talks only about the available datasets. It must provide a critical analysis of how the comparative analysis is carried out, with the pros and cons of different methods.

4) Section 2.3 gives a brief idea of the methods. It talks about statistical methods and the nearest-neighbour method, which are very weak methods. The rest of the paragraphs in the same section do not discuss any methods, which makes the paper scientifically weak.

5) The next section immediately appears to be the results section, which shows that this paper talks only about the datasets rather than the methods. At least 2-3 algorithms that can perform a comparative analysis must be included. Without any methods, how can you arrive at results?

6) Figures 5 and 6 alone give a brief glimpse of the results, which is not at all sufficient. Again, it is more of a statistical analysis, which is normally used to validate actual methods such as artificial neural networks; these analyses are only supportive results. A few other similar figures are given with such analysis, which is again not sufficient. Better classifiers should have been used.

7) A few other figures, such as scatter plots, are given, but they do not add much weight to the paper. The other figures do not specifically aid in a proper comparative analysis with performance measures.

8) The conclusion section is not convincing, in the sense that no inferences drawn from the results section are given.

9) There are too many references for an analysis paper. If it were a review paper, that would be fine, but this paper is not a review paper.

Reviewer 3 Report

Good for publication in the present form.
