Article
Peer-Review Record

Investigating the Impact of Digital Elevation Models on Sentinel-1 Backscatter and Coherence Observations

Remote Sens. 2020, 12(18), 3016; https://doi.org/10.3390/rs12183016
by Ignacio Borlaf-Mena 1,2,*, Maurizio Santoro 3, Ludovic Villard 4, Ovidiu Badea 1,5 and Mihai Andrei Tanase 1,2
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Reviewer 3: Anonymous
Reviewer 4: Anonymous
Submission received: 27 July 2020 / Revised: 8 September 2020 / Accepted: 10 September 2020 / Published: 16 September 2020
(This article belongs to the Special Issue SAR for Forest Mapping)

Round 1

Reviewer 1 Report

I began reading this manuscript with great interest, yet found nothing particularly novel: the experimental setting is very common and no especially interesting finding is revealed. However, the manuscript could be published provided the following comments are addressed:

1) The title should be revised to: "DEM selection affects topographic normalization for radar observations";

2) The names of the DEMs used should be given in the abstract;

3) What criteria did you use to select the images over the two sites? Why not select all the images available during the study year?

4) The specific meaning of IOR should be given in the Methods section;

5) Section 4.4 is too short;

6) The topographic effects on radar observations and the accuracies of the topographic normalizations obtained from the different DEMs are not given directly.

Author Response

Please see the attachment

Author Response File: Author Response.pdf

Reviewer 2 Report

The manuscript focuses on evaluating how different DEM products affect the normalization of Sentinel-1 backscattered intensity and interferometric coherence. DEM products of various resolutions are evaluated at two study sites with complex topography, and a land cover classification task is also used for evaluation. However, the two study sites are relatively small, which weakens the conclusions. Moreover, the study lacks theoretical or methodological innovation in solving the problem and reads more like a technical report than a research paper.

Apart from the above, the following points need to be addressed:

  1. In line 251, "a linear SVM is used as the classifier was selected for its robustness and short execution times". "Yes" for the short execution times, because it avoids the computationally intensive kernel calculation, but "no" for its robustness, since a linear classifier cannot separate classes that are not linearly separable. That is why the kernelized SVM has been regarded as an optimal land cover classifier (Mountrakis et al., 2011). If execution time is a major concern, the authors should use a GPU to speed up the computation, for example with the open-source library ThunderSVM (Wen et al., 2018). A minimal sketch of a kernelized SVM with balanced class weights is given after this list.
  2. Lines 260-265: why limit every class to the number of pixels available for the least extensive class? Upsampling the minority class might yield better results. On the other hand, forcing every class to have the same number of samples biases the sampling design.
  3. Lines 355-362: I think the misclassified forest pixels are partly due to the biased sampling design.
  4. Are any texture features used in the classification? If not, adding them might be helpful given the spatial autocorrelation of land cover.
  5. According to the authors, the 5 m ALS-derived DEM is used as the reference. More details are needed on how this DEM was generated and how the reference data for the land cover classes were produced.
  6. A comparable experiment should also be conducted with spectral bands included in the land cover classification.
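
To make points 1 and 2 concrete, a minimal scikit-learn sketch of an RBF-kernel SVM with balanced class weights is given below. The feature matrix X and label vector y are simulated placeholders (hypothetical stand-ins for per-pixel backscatter/coherence features and land cover labels), so this only illustrates the suggested alternative, not the authors' actual pipeline.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import classification_report

# Simulated stand-ins for per-pixel features (e.g. VV/VH backscatter, coherence)
# and land cover labels; these would be replaced by the real training data.
rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 4))
y = rng.integers(0, 4, size=5000)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

# RBF-kernel SVM; class_weight="balanced" reweights classes inversely to their
# frequency instead of discarding samples from the more extensive classes.
clf = make_pipeline(
    StandardScaler(),
    SVC(kernel="rbf", C=1.0, gamma="scale", class_weight="balanced"))
clf.fit(X_train, y_train)

print(classification_report(y_test, clf.predict(X_test)))
```

If ThunderSVM is available, it advertises a scikit-learn-like interface, so a GPU-backed SVC could in principle be swapped in with minimal changes.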

 

Mountrakis, G., Im, J., & Ogole, C. (2011). Support vector machines in remote sensing: A review. ISPRS Journal of Photogrammetry and Remote Sensing, 66(3), 247-259.

Wen, Z., Shi, J., Li, Q., He, B., & Chen, J. (2018). ThunderSVM: A fast SVM library on GPUs and CPUs. The Journal of Machine Learning Research, 19(1), 797-801.

Author Response

Please see the attachment

Author Response File: Author Response.pdf

Reviewer 3 Report

This manuscript assesses the impact of DEMs on the normalization of Sentinel-1 SAR backscattered intensity and on interferometric coherence. With the ALS DEM as a benchmark, the IOR and coherence are evaluated and compared between the different DEMs. This is a detailed piece of work whose results and conclusions can help in selecting a better DEM for SAR radiometric normalization, at least for Sentinel-1. I have some advice for improving the manuscript.

 

This manuscript is written in a highly professional manner, but it neglects to introduce some technical terms, which may limit the audience, as readers may not understand the manuscript clearly without consulting much additional literature. In my opinion, the authors should add definitions or descriptions of some terms, such as IOR (for which I think a more detailed calculation method, considering the SAR geometry and terrain parameters, is needed), the confusion matrix, and the error metrics (the purpose of these two should also be explained).

 

Some specific points are as follows.

  1. Line 20: in "the difference to results obtained with a high-resolution local DEM", the word "to" does not seem correct.
  2. Line 127, “e.a.” and line 262, “(n)” seem to be clerical errors.
  3. Lines 196-198: "The interferometric coherence was estimated from the interferograms using an adaptive window estimator whose size varied between 3x3 and 9x9 pixels." How do you determine the suitable window size? (A minimal sketch of windowed coherence estimation is given after this list.)
  4. Please describe the variables in Equation (1) and N in Equations (2)-(5).
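
As a point of reference for item 3, the sketch below shows how the sample coherence magnitude is commonly computed over an estimation window. It uses a fixed boxcar window purely to illustrate the quantity whose window size the adaptive estimator varies; the simulated data are placeholders and this is not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def coherence(s1, s2, win=5):
    """Sample coherence magnitude of two co-registered SLC images, estimated
    with a fixed win x win boxcar window (a stand-in for the adaptive
    3x3 to 9x9 estimator described in the manuscript)."""
    cross = s1 * np.conj(s2)
    num = uniform_filter(cross.real, win) + 1j * uniform_filter(cross.imag, win)
    den = np.sqrt(uniform_filter(np.abs(s1) ** 2, win) *
                  uniform_filter(np.abs(s2) ** 2, win))
    return np.abs(num) / np.maximum(den, 1e-12)

# Simulated complex data as placeholders for two co-registered SLC images.
rng = np.random.default_rng(1)
shape = (200, 200)
noise = rng.normal(size=shape) + 1j * rng.normal(size=shape)
s1 = rng.normal(size=shape) + 1j * rng.normal(size=shape)
s2 = 0.7 * s1 + 0.3 * noise
print(coherence(s1, s2).mean())
```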

 

Author Response

Please see the attachment

Author Response File: Author Response.pdf

Reviewer 4 Report

Review

DEM selection affects radar topographic normalization

 

Introduction:

The introduction is good and provides a solid overview of the problem; I think it is acceptable in its current form. Nevertheless, as always, I can suggest small improvements.

The first paragraph introduces the problem, and the following paragraphs describe the methods used to derive a DTM from space. Then, in line 115, the authors write about "terrain normalization", an important concept used throughout the paper. I suggest discussing this concept earlier, perhaps in the first paragraph: what will be understood as "normalization" in the paper?

The aims of the paper are well described in the introduction.

Study areas:

The first paragraph introduces the study areas, but they are poorly described here. Some details are only given later in the text, such as the relief differences or the relation of vegetation to slope. These details could be summarized here, as they would support the choice of these areas.

 

Methods:

Equation 1: Why is n=1?

Results:

Line 290: “The main differences between sites were the lower spread of IOR for the needle leaf forest at the Spanish site, and the smaller spread of AW3D compared with TDX20/30 at the same site”

I agree with the first part, but I cannot see the second point. Am I wrong, or is the spread of AW3D larger for needleleaf forest in Romania?

The caption of Figure 3 does not mention grassland and bare soil (panels a and b).

Question: are we supposed to compare grassland with bare soil, as displayed in Figure 3? The authors do not mention these plots in the text.

In Table 5, some text is repeated.

Discussion:

It is unusual to use so many references in the discussion, which is where the authors present the discussion and conclusions of their own work. Since the discussion reports their own results, they should reduce the citations in this part of the text; otherwise, it gives the impression that the conclusions come from the references. I understand that they are trying to show that their statements are supported by the literature, but perhaps they could change the way this is done.

Example: "These factors may have eased the detection of vegetation-free pixels, 'pushing' the reported data nearer to the true terrain surface once resampled to 30 m [17,16]" (lines 414-415).

Line 423: the results are well discussed; nevertheless, I miss a final discussion of the source data with respect to acquisition date and spatial resolution. Since some datasets need to be resampled, can the resampling affect the results?

The authors could link the discussion of the results to the accuracies listed in Table 1. As the original datasets have different accuracies, it is natural to expect different results.

Author Response

Please see the attachment

Author Response File: Author Response.pdf

Round 2

Reviewer 1 Report

My concerns have all been satisfactorily addressed. I have no further comments.

Author Response

Thank you for reviewing our manuscript.

Reviewer 2 Report

First, the authors justify the work as building on a previous article (Truckenbrodt et al., 2019) and state that this study is the most complete review of the topic to date. I will leave it to the domain experts to confirm whether this is true. On the other hand, the authors mention that the study area cannot be enlarged because of data limitations. This justification sounds fair to me; please make sure to include it in the article.

Second, the issue of the linear SVM is not resolved. The authors mention that, for some reason, they were not able to use the sped-up version of a kernelized SVM, and this may compromise the classification results and part of the conclusions. I think the authors should at least try a random forest classifier as an alternative if they really cannot get the non-linear SVM to work (a minimal sketch is given below).
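
As an illustration of the random forest alternative, a minimal scikit-learn sketch follows; the features and labels are simulated placeholders rather than the authors' data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Simulated stand-ins for per-pixel features and land cover labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 4))
y = rng.integers(0, 4, size=5000)

# Random forest with balanced class weights as a drop-in alternative classifier.
rf = RandomForestClassifier(n_estimators=500, class_weight="balanced",
                            n_jobs=-1, random_state=0)
print(cross_val_score(rf, X, y, cv=5).mean())
```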

Third, I think the gap and edge issues could be resolved using spatial interpolation (e.g., kriging), and texture information such as the GLCM features mentioned by the authors is still worth trying (see the sketch below).
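
To illustrate the GLCM suggestion, a minimal scikit-image sketch of two common texture measures computed over a small window is given below (the functions are spelled greycomatrix/greycoprops in older scikit-image releases); the simulated backscatter patch is a placeholder, not the authors' data.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(window, levels=32):
    """GLCM contrast and homogeneity for one image window, after quantising
    the values to `levels` grey levels (a minimal illustration only)."""
    scaled = (window - window.min()) / (np.ptp(window) + 1e-12)
    q = np.floor(scaled * (levels - 1)).astype(np.uint8)
    glcm = graycomatrix(q, distances=[1], angles=[0, np.pi / 2],
                        levels=levels, symmetric=True, normed=True)
    return (graycoprops(glcm, "contrast").mean(),
            graycoprops(glcm, "homogeneity").mean())

# Simulated backscatter-like patch as a placeholder input.
rng = np.random.default_rng(2)
patch = rng.gamma(shape=4.0, scale=0.02, size=(21, 21))
print(glcm_features(patch))
```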

Author Response

Please see the attachment

 

Author Response File: Author Response.pdf
