Article
Peer-Review Record

Mapping Natural Forest Remnants with Multi-Source and Multi-Temporal Remote Sensing Data for More Informed Management of Global Biodiversity Hotspots

Remote Sens. 2020, 12(9), 1429; https://doi.org/10.3390/rs12091429
by Joni Koskikala *, Markus Kukkonen and Niina Käyhkö
Reviewer 1:
Reviewer 2: Anonymous
Submission received: 8 March 2020 / Revised: 25 April 2020 / Accepted: 29 April 2020 / Published: 1 May 2020
(This article belongs to the Section Remote Sensing Image Processing)

Round 1

Reviewer 1 Report

The authors of the manuscript "Mapping natural forest remnants with multitemporal sensor fusion for more informed management of Global Biodiversity Hotspots" attempt to improve tropical forest mapping using different remote sensing data. Although Sentinel-1 and Sentinel-2 are used in the mapping process, I do not see any fusion process implemented or mentioned in the text except in the title!

There are major issues which must be handled by the authors:

1. I tried to understand the remote sensing part, which is the most important with respect to the journal. From line 143 to line 165, I did not see any reference to the important data source "Copernicus", to the correction techniques, or any explanation of how these techniques work, such as Sen2Cor.
The same applies to the Sentinel-1 pre-processing steps, such as thermal noise removal, image border noise removal, image calibration, radiometric and geometric corrections, and speckle filtering.
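For reference, a minimal sketch of the kind of Sentinel-1 GRD pre-processing chain listed above, expressed with SNAP's Python interface (snappy). This is an illustration under stated assumptions, not the authors' actual workflow: the operator names are standard SNAP GPF operators, but the input file name and every parameter choice shown are placeholders.

```python
# Illustrative sketch only, not the authors' workflow: a typical Sentinel-1 GRD
# pre-processing chain using SNAP's Python interface (snappy). Operator names
# are standard SNAP GPF operators; the scene name and parameter choices are
# placeholder assumptions.
from snappy import ProductIO, GPF, jpy

HashMap = jpy.get_type('java.util.HashMap')

def apply_operator(name, source, **params):
    """Run a single SNAP GPF operator on a product with the given parameters."""
    parameters = HashMap()
    for key, value in params.items():
        parameters.put(key, value)
    return GPF.createProduct(name, parameters, source)

product = ProductIO.readProduct('S1A_IW_GRDH_example.zip')         # hypothetical scene
product = apply_operator('ThermalNoiseRemoval', product)           # thermal noise removal
product = apply_operator('Remove-GRD-Border-Noise', product)       # image border noise removal
product = apply_operator('Calibration', product)                   # radiometric calibration (sigma0 by default)
product = apply_operator('Speckle-Filter', product, filter='Lee')  # speckle filtering (example filter choice)
product = apply_operator('Terrain-Correction', product)            # geometric/terrain correction with the default DEM
ProductIO.writeProduct(product, 's1_preprocessed', 'GeoTIFF')
```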

2. The authors did not explain the size of the samples or their type of distribution in the subsection on reference data collection (lines 184 to 195).

 

3. Lines 197 to 204 do not clearly explain the 115 variables and how they are considered independent in order to combine them into 10 groups. (Did the authors use specific criteria in the grouping process?)

4. The variables in Table 1 are not clear. As an example, the authors write that, according to the literature, there are 30 descriptive Haralick variables of mean, variance and correlation extracted from the Sentinel-2 bands. What do these descriptions mean? Please add some examples to the description.

5. The authors used the Random Forest (RF) algorithm without indicating why it was selected. A subsection should be dedicated to explaining RF, and in the introduction the authors should add state-of-the-art literature that supports their use of RF.
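As an illustration of the kind of classifier such a subsection would describe, here is a minimal Random Forest sketch in Python with scikit-learn. It is not the authors' implementation: the feature matrix and labels are synthetic placeholders, and the 500 trees and 6 variables per node are simply the values Reviewer 2 asks about elsewhere in this record, used here for illustration.

```python
# Illustrative only: a Random Forest forest/non-forest classifier of the kind
# discussed in the manuscript, shown with scikit-learn on placeholder data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 115))        # placeholder: 300 samples x 115 predictor variables
y = rng.integers(0, 2, size=300)       # placeholder labels: 1 = forest, 0 = non-forest

rf = RandomForestClassifier(
    n_estimators=500,   # number of trees
    max_features=6,     # variables tried at each split ("mtry")
    oob_score=True,     # out-of-bag error as an internal accuracy estimate
    random_state=42,
)
rf.fit(X, y)
print('OOB accuracy:', rf.oob_score_)
print('10-fold CV accuracy:', cross_val_score(rf, X, y, cv=10).mean())
```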

6. In the results subsection, the authors indicate on lines 265 to 266 that 1023 models were tested. Does this represent the different combinations of the 10 variable groups indicated in Figure 1? More details are needed. In addition, please explain how TSS works.
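For context, a short sketch of the two points raised here, under the assumption that the 1023 models correspond to all non-empty combinations of the 10 variable groups (2^10 - 1 = 1023) and that TSS is the standard True Skill Statistic computed from a 2x2 confusion matrix. The group names and confusion-matrix counts below are placeholders.

```python
# Illustrative only: why there are 1023 candidate models, and how the True
# Skill Statistic (TSS) is typically computed from a confusion matrix.
from itertools import combinations

groups = [f'group_{i}' for i in range(1, 11)]   # placeholder names for the 10 variable groups
all_combos = [c for r in range(1, len(groups) + 1) for c in combinations(groups, r)]
print(len(all_combos))                           # 1023 = 2**10 - 1 non-empty combinations

def tss(tp, fp, fn, tn):
    """True Skill Statistic = sensitivity + specificity - 1, from a 2x2 confusion matrix."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity + specificity - 1

# Hypothetical confusion matrix for one forest / non-forest model:
print(round(tss(tp=80, fp=10, fn=20, tn=90), 2))  # 0.70
```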

Author Response

Please see the attachment

Author Response File: Author Response.pdf

Reviewer 2 Report

See detailed comments in the pdf version attached here!

# Lines 29, 105 and 241-242: I understand that forests of less than one hectare can introduce noise into your data. However, you regularly underline the importance of identifying forest patches of 1 to 10 ha in your paper, but you do not explain why forest patches of <1 ha are suppressed, nor do you discuss the potential impact (positive, negative or neutral) of ignoring these forests. I think this should be clearly introduced and discussed in the manuscript.

# Lines 58-60: You could split this sentence into two shorter sentences to make it clearer.

# Lines 84-85 and 185-186: Why use both Google Earth and Bing Maps? Were there significant differences between them in terms of quality, temporality or spatial distribution?

Figure 2: In the legend, describe all the acronyms to help the readability of the research design.

2.2.2. Datasets and processing: I imagine that you used a standardized methodology to prepare the data, but references are sometimes missing to understand where the methods come from (e.g., lines 160-163 or 170-173).

Lines 157-158: Why were these data specifically acquired during the dry season? To limit the presence of clouds? In the discussion, you mention seasonality (line 405) to distinguish natural evergreen forests from artificial forests. Was it therefore to better identify evergreen forests?

Lines 186-187: What are the size and shape of these samples? 10 m² square pixels? (line 108)

Lines 192-193: How is this threshold justified? In addition, how did you define what is a canopy and what is not? By using height thresholds?

Lines 202-204: This part is crucial for the following analysis, but I do not understand what “grouped” means and what its implications are for the model. Does this mean that in the dataset used for Random Forest, you used all 115 attributes obtained for each sample point? If so, does this mean that for each sample point and each group, you had between 2 and 30 different values, representing different attributes?
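To make the question concrete, here is a minimal sketch (with invented column and group names) of one plausible reading of “grouped”: every sample point keeps all of its attribute values, a group is simply a named subset of those columns, and a model for a given group combination is trained on the union of the selected columns.

```python
# Illustrative only: a possible representation of variable "groups" as named
# column subsets of a single feature table. Column and group names are invented.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
# Placeholder: 5 sample points, two groups with 3 and 2 variables each.
features = pd.DataFrame(
    rng.normal(size=(5, 5)),
    columns=['s2_band_1', 's2_band_2', 's2_band_3', 's1_vv', 's1_vh'],
)
groups = {
    'optical_bands': ['s2_band_1', 's2_band_2', 's2_band_3'],
    'radar_backscatter': ['s1_vv', 's1_vh'],
}

selected = ['optical_bands', 'radar_backscatter']       # one group combination
columns = [col for g in selected for col in groups[g]]  # union of the groups' columns
X = features[columns]                                   # predictors for this combination
print(X.shape)                                          # (5, 5)
```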

Part 2.2.5: Please specify (if I understand correctly) that in your RF model, the dependent variable is the Forest/Non-forest category from the sample points you obtained previously. Otherwise it may be hard to understand what the RF model is analyzing here.

Lines 218-221: Here, specify in brackets which step produces what you later call the “best performing” and “best combination” models; this will greatly help readers understand this part.

Lines 248-249: What justifies these thresholds?

Lines 288-289: Does this mean that all the results presented in Figure 3 and Figure 4 were obtained from an RF model with 500 trees and 6 variables per node? If so, this should perhaps be stated earlier in part 3.1.

Figure 5: Does point/image 1 correspond to figures B, C and D; point/image 2 to figures E, F and G; and point/image 3 to figures H, I and J? If so, specify this in the legend. In addition, consider using different colours/shades of grey for forest cover, canopy gaps and cloud gaps in figures B to I; this will improve the readability of the figure.

Lines 348-351: What do you mean by “degraded forests”? Does this group include evergreen forests with a canopy cover of <75%? I am not familiar with the ecology of these forests, but what about forests that underwent natural secondary disturbances that opened more than 25% of the canopy? Could this lead to ignoring some natural forest areas when their canopy is not dense enough?

Line 378: Here, you mention a TSS value ranging from 0.24 to 0.81 for the CCEF model, but in Table 3 the values range from 0.39 to 0.84; why?

Lines 386-388: You discuss here a point that seems fundamental to me, as highlighted by previous comments. Why was it difficult to differentiate between natural and artificial forests in this region? Could this kind of problem have been observed elsewhere without being noticed? Are there alternatives to determine more effectively whether the forests in this region are natural or not?

Lines 417-425: This section discusses the use of the groups, which need to be better explained in the methods, as noted previously. Nevertheless, if I understood the method correctly, couldn't the analysis of the groups have been carried out as a preliminary analysis, after which you would keep only the parameters belonging to the groups that performed best?

Comments for author File: Comments.pdf

Author Response

Please see the attachment

Author Response File: Author Response.pdf

Round 2

Reviewer 1 Report

The authors of the research paper "Mapping natural forest remnants with multitemporal sensor fusion for more informed management of Global Biodiversity Hotspots" have responded in detail to the questions and requirements set by the reviewers.

There is one minor issue that was not properly addressed by the authors.

It is recommended to reduce the number of references concerning the techniques used to correct the images. Instead, it would be better to add more detail about the use of these techniques, such as the specific selection of parameters in Sen2Cor.
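For illustration only, this is how Sen2Cor is commonly run from its L2A_Process command line; the product name is hypothetical, and the resolution flag is just one example of the kind of parameter choice the reviewer would like to see documented.

```python
# Illustrative only: invoking Sen2Cor's L2A_Process on a (hypothetical) Level-1C
# product to produce Level-2A surface reflectance at a chosen resolution.
import subprocess

subprocess.run(
    ['L2A_Process', '--resolution', '10', 'S2A_MSIL1C_example.SAFE'],
    check=True,
)
```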

 

Author Response

Please see the attachment

 

Author Response File: Author Response.docx

Reviewer 2 Report

Review for Remote Sensing journal

Paper ID: 754111

There has been a very substantial improvement in the introduction, good job! The authors generally responded to all my comments, and when they did not, the explanations provided were relevant. I therefore believe that the article could now be accepted for publication. However, I still have some minor comments on the manuscript, which do not require another round of revision.

Line 45: remove the comma.

Lines 51-52: I would add commas for the definition of fragmentation (…fragmentation, i.e. the division of habitat into..., may become…).

Lines 57-58: “from a conservation perspective”?

Lines 59-61: I think small fragments could harbour unique or rare biodiversity values, but this depends on the time since disturbance, among other factors, because it is also probable that most of the species of recently fragmented forests are already represented in the species pool of non-fragmented ones. So I think you need some references here.

Line 66: I would remove the word “intact” because it seems like you are going to focus on these kinds of forests.

Line 76: I would remove the word “satellite” (repetition).

Line 87: remove “to”.

Figure 2 legend: be consistent with the spaces when using “=”. Figure 2, part 4: Aren't there spaces missing between some words?

Line 209: “at 30m resolution”?

Lines 212-213: I do not see the connection between these two lines; I think this needs to be reformulated.

Line 216: remove “derived from global SRTM DEM” (repetition).

Line 222: do you mean “to account” instead of “to count”?

Lines 223-225: I understand that, in the context of your study, no previous research has established a threshold to discriminate woodlands and mountain forests. I therefore think that the use of a 75% threshold is valid in the context of your study. However, you could briefly explain in the manuscript the reason for your choice, clearly stating that it is a relatively “subjective” threshold that could be improved in the future.

 

Lines 229-234: I would reformulate all of this as follows: “Altogether, 115 variables were extracted from the training sample locations (Table 1) using the original resolution of each satellite dataset”.

2.2.5. Variable grouping: Creating a new section for this point will indeed help to clarify the methodology. However, in your last response to Reviewer 2, you provide more detail explaining the choice of this methodology and how this decision is reflected in your analysis. For these reasons, I think some of the explanations given in the response to Reviewer 2 could be added to this section.

 

Line 268: What do you mean by “These training samples (Natural forest/other vegetation) were used as dependent variables”? I understand that you use “Natural forest vs other vegetation” as your dependent variable, but you should consider that you also have independent variables in your training dataset. I think this part needs to be reformulated.

Line 270: change “;” to “:”.

Line 275: I would not say that cross-validation (CV) is independent. Even when using the more traditional 70-30% split without CV for training and validation, you cannot say that the two datasets are independent. The same applies to line 364.

Line 286: “TSS values of 0.2–0.4”

Line 296: It is not clear to me how the additional validation was done and how it was measured.

Line 297: The term “accuracy” appears twice (repetition).

Line 299: “…we removed manually all the forest patches smaller…”. There are more cases like this throughout the text.

Lines 326-328: From what product were the statistics calculated? From the CCEF model? Or from the stack of the three products? It is not clear to me.

Line 333: “Out of 1023 models tested, the most frequent variable groups in the best performing models…”

Line 336: Be consistent in the names of the variable groups. By “texture of optical”, which variable group are you referring to: contrast or descriptive?

Lines 358-360: I think these lines do not provide relevant information.

Line 363: How was the overall accuracy obtained? On the other hand, specify that “OA” means “overall accuracy” either here or in Table 3.

Line 370: the least forest cover? less forest cover?

Lines 371-376: I think this part is a mix of results and discussion. Although the ideas are there, I would reformulate it, at least for the discussion section. For me, the important thing is not that your model estimates less forest cover than previous methods (this belongs in the results section); the important thing is that it is able to identify gaps within forested areas thanks to its greater precision (unlike previous models), and this results in a lower estimated forest cover that is closer to what we observe in reality in continuous forested areas (discussion).

Line 374: Change “Furthermore” to “However”, because before this you talked about the strengths of the model and now you are talking about its limitations.

Line 459: “than that of previous products”

Lines 465-467: It is a little confusing when you say “…confirming the previous estimates, but there are significant geographical differences”. I suppose you are talking in terms of protection status among blocks, and only according to your results. As it is expressed, it seems as if you have found differences between previous studies and yours in terms of protection status.

Line 478: I think there is a typo here (“ne inventory” instead of “new inventories”)

 

Line 483: I would mention the accuracy value (TSS) also here in the discussion.

Lines 485-486: I think you mean that your approach also provided a more robust assessment for previously mapped forest areas, but not a direct evaluation of previous products.

Line 488: “in model performance”.

Line 499: “the natural closed canopy forests”

Line 500: By “forests of sparser canopy cover”, are you referring to kinds of forests other than (closed canopy) evergreen forests? Or to evergreen forests with sparser canopy due, for example, to relatively recent disturbances? I think clarification is needed here. Please specify whether you have been able to determine if the forests of sparse canopy cover surrounding the dense forests were disturbed by human activities or by natural disturbances.

Line 506: I am not sure that “objective” is the most suitable term here.

Line 552: “on the model performance”.

Line 557: “that the recent developments”

Line 558: “allows”

Author Response

Please see the attachment

Author Response File: Author Response.docx
