Peer-Review Record

Refugee Camp Monitoring and Environmental Change Assessment of Kutupalong, Bangladesh, Based on Radar Imagery of Sentinel-1 and ALOS-2

Remote Sens. 2019, 11(17), 2047; https://doi.org/10.3390/rs11172047
by Andreas Braun 1,*, Falah Fakhri 2 and Volker Hochschild 1
Reviewer 1:
Reviewer 2: Anonymous
Submission received: 23 June 2019 / Revised: 27 August 2019 / Accepted: 28 August 2019 / Published: 30 August 2019

Round 1

Reviewer 1 Report

This paper covers two broad topics:

1.      The application of time series SAR-derived metrics and topographical variables to map land cover and its changes at the site level

2.      The science of land cover mapping and accuracy assessment and making transferable methodologies

These are merged to achieve a broad aim – to map the ecological impact of explosive refugee movements in a fragile cross-border environment in a transparent and reproducible way. While the first topic is addressed well, I think the second is treated more poorly, and a number of improvements are needed. I am also left wondering whether the broad aim was achieved – what is the ecological impact of the refugee movements (if any)? This is poorly documented, except for a few brief statements in the conclusion. Overall, I would suggest that many improvements are needed before publication.


Specific Comments:

·        Line 13: When? Give timing of border crossing for context

·        Line 26-27: "machine learning" and "humanitarian action" appear in the keywords but are not mentioned or referred to again in the paper; they should be

·        Line 43: Spell out UNOSAT acronym

·        Line 52: delete ‘the’, comma after ‘applications’, ‘historical’ instead of ‘archived’ – current imagery can also be archived.

·        Line 56: delete "intends to" and say "uses"; "attributed to" instead of "related to"

·        Line 56-58 should detail when the displacement occurred

·        Line 64: “were” instead of are

·        Line 65: Only 1.5 hectares of forest? Is this correct? Please double check

·        Line 77: put “sectors” at end of sentence

·        Line 115: “reported”, not reports

·        Line 117-121: In addition to the criticisms of Hassan et al. presented here, the authors should also address the shortcomings of using the NDVI more broadly – for example, soil effects: Soils tend to darken when wet, so that their reflectance is a direct function of water content. If the spectral response to moistening is not exactly the same in the two spectral bands, the NDVI of an area can appear to change as a result of soil moisture changes (precipitation or evaporation) and not because of vegetation changes. See for example: https://www.sciencedirect.com/science/article/pii/S0048969717301407?via%3Dihub
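
To make the effect concrete, here is a minimal numeric sketch of the soil-moisture effect on NDVI described above; the reflectance values are purely illustrative assumptions, not measurements from the study area:

```python
# Hypothetical reflectance values, chosen only to illustrate the
# soil-moisture effect on NDVI; not data from the study area.
def ndvi(nir, red):
    return (nir - red) / (nir + red)

# Dry bare soil: both bands relatively bright.
print(ndvi(nir=0.30, red=0.20))  # ~0.20

# Same soil after rain: both bands darken, but not proportionally,
# so NDVI rises although the vegetation itself has not changed.
print(ndvi(nir=0.18, red=0.10))  # ~0.29
```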

·        Line 127: Be careful about how you phrase "..because the forest cover significantly changes between the dry and the rainy season in Bangladesh." Is it true change or just leaf-on/leaf-off effects? I doubt the forest cover significantly changes in terms of spatial extent between a wet and a dry season, but its green canopy cover might indeed change. Please clarify.

·        Line 132: replace "and" with "thus valid". Be careful about how you phrase "..images can potentially be captured and analysed at more regular intervals". SAR imagery is actually acquired less frequently than optical imagery for any given place on Earth, but the cloud-cover problem means that in practice more SAR images are usable (especially in the tropics). Please rephrase.

·        Line 133: Replace “Its” at start of sentence with a noun- the potential of what?

·        Line 135: change “independency”

·        Line 136: insert “the” before main

·        Line 142: insert “of” before microwave

·        Line 147: A brief sentence on the backscatter mechanism would help the reader – please include one. Also, the reference dates from 1998, so the assertion that few polarimetric radar sensors exist may be outdated. Can the authors justify the statement with evidence, or at least point to a more recent reference which states the same?

·        Line 148: Insert “of” before anthropogenic

·        Line 148-158: Encapsulate the wider impact of the study – is there a governmental policy, sustainable development goal, international human rights target etc. that the study informs. There is a missing link here, the study should aim to inform a wider goal that the monitoring aspect contributes to.

·        Line 173: use “section” instead of chapter (throughout the text)

·        Line 174-175: There is no mention of global land cover maps and the changes that they reveal about the area – these should at least be mentioned.  Why would you not use a global land cover change map for this study?

·        Line 178: Be careful about how you phrase “..increase the information content of the radar images..” You will not increase or change their information content but you will increase spatial and temporal coverage by combining them. Please rephrase.

·        Figure 1: This needs much improvement. See below:

·        Why is part of the study area in the sea? It should be clipped to the terrestrial border

·        Please add the camp outline as shown in fig. 3

·        Add an inset map for global context - showing where this is in the world

·        Add country labels – where is Bangladesh?

·        Can you add the boundaries of the protected areas mentioned in line 171-172? See https://www.protectedplanet.net/ for boundary polygons

·        The town names are unreadable and probably unnecessary, unless referred to in the text

·        Line 180: The four polarisations (HH/HV/VH/VV) should be spelled out at first mention

·        Line 178-183: The significance of the C-band and L-band setting should be explained. Also, ALOS data are not free while Sentinel data are. This should be made clear.

Table 2, footnote: does the portion of the study area include the part in the sea (shown in fig. 1)? If so, this will create an error in the coverage. I would recalculate the % coverage based on the terrestrial-only study area.

Line 209: Why “radiometric” indices? Use “spectral” not radiometric, unless you derived the indices from the radiometric data, i.e. before conversion to surface reflectance, which I doubt you did.

Line 210: The four target classes should be defined better and justified. Why were these four chosen and how are they described, e.g. what kind of vegetation is it? Is urban the right name of the LC class? Since it is a short term, informal settlement, I would rename this class accordingly.

Line 212: “on” not “from”

Figure 2:

·        What % of the pre-classification sample was split into training and validation? Please annotate the workflow with the % numbers.

·        There should be one or more data boxes coming out of the accuracy assessment to show what data were used as input to that process

Line 226-229: Be sure to name the version of the software used as well.  Is the computer code available to others?

Line 229: bilinear resampling changes the native image values; it is better done after, rather than prior to, the image classification – see Lillesand et al. (2008), Remote Sensing and Image Interpretation (6th edition), p. 488. Please justify why you did it before and the implications of that for the workflow.
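
As a brief illustration of why the order matters – a sketch that assumes nothing about the authors' actual processing chain – bilinear resampling interpolates neighbouring pixel values and thus creates new intermediate values along boundaries, while nearest-neighbour preserves the native values:

```python
import numpy as np
from scipy.ndimage import zoom

# A sharp boundary between two surface types (values are illustrative).
patch = np.array([[0.0, 0.0, 1.0, 1.0]])

print(zoom(patch, (1, 2), order=0))  # nearest neighbour: only native values survive
print(zoom(patch, (1, 2), order=1))  # bilinear: mixed values appear at the boundary
```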

Line 230: change ‘radiometric‘

Line 236: mIR, swIR1, swIR2 are not explained anywhere

Line 239-240: Please rephrase this sentence, it is unclear

Line 257: were

Line 261: You mention the Lee filter for speckle effects but do not say why – please justify this filter over the alternatives.

Line 295: rephrase

Figure 3: Add North arrow. Add map legend. Make it consistent with changes to figure 1. Include inset map for geographical context.

Line 313: “Consistent with” not compared to

Figure 4: same as figure 1 and 3

Figure 5: Why does A have linear trend lines and B have none? I'd prefer not to have trend lines on either, but you should be consistent. The use of lines to join the discrete time points is already a linear interpolation of sorts, and a trend line is excessive for such a short time period.

Line 396: “well” after “correlates”

Figure 6: This does not work for me – please remove the figure and rethink its purpose; the colour tones cannot be distinguished. If the figure is necessary, consider zooming into a smaller area to show the LC change, e.g. a 10 × 10 km area. It is currently not publishable.

Line 406-414: Congalton and Green (1999) must be cited in here somewhere – it is the best guide to LC classification accuracy assessment. Some of the assumptions the authors make are questionable and suggest they have not read it – why choose 1,000 points per class?

For example, one suggestion for the determination of the number of sample points needed for accuracy assessment is to use the binomial probability formula (Fitzpatrick-Lins, 1981) to estimate the appropriate number of samples over the entire study area:


N = Z² · p · q / E²

where Z = 2 (rounded from the standard normal deviate of 1.96 for the 95% two-sided confidence level), p is the expected percent accuracy, q = 100 − p, and E is the allowable error (in percent).


As an example, if an accuracy of 85 % is expected, with an allowable error of 5% (95% confidence), 204 sample points would be needed to apply this approach. If the allowable error changed to only 2%, the number of sample points needed would increase to 1,275.
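
For concreteness, a short sketch reproducing these two worked examples – a straightforward implementation of the binomial formula above, not code from the study:

```python
def binomial_sample_size(expected_accuracy_pct, allowable_error_pct, z=2.0):
    """Fitzpatrick-Lins (1981): N = Z^2 * p * q / E^2, with q = 100 - p."""
    p = expected_accuracy_pct
    q = 100.0 - p
    return (z ** 2 * p * q) / allowable_error_pct ** 2

print(binomial_sample_size(85, 5))  # 204.0 sample points
print(binomial_sample_size(85, 2))  # 1275.0 sample points
```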

I would like to see a similar formula applied in this study to determine the precise number of points. Actually, this probably also applies to the 25 manually interpreted points used to train the CART, mentioned on line 240.

Also, how are those points spatially distributed, and what strategy was chosen for the sampling? This will determine whether within-class spatial variation is fully exploited in the feature space.

Another issue to be addressed is the size and spacing of the sampling points. Generally, as a minimum, a sample site should be no smaller than a 3 × 3 cluster of pixels or the equivalent polygon area. See Justice and Townshend (1981) for further reading.

It is still not clear how many of the samples taken from the unsupervised classes (CART method) were split into training and validation points. For example, line 292 says that one third of the available training pixels per class was used, but later the authors say that 1,000 points per training class were used for validation. Thus, overall, how many points were extracted from the unsupervised classes, and of these, what proportion was used for training versus validation? The same point must never be used twice, i.e. for both training and validation, as that would corrupt the classification methodology. This should also be annotated on figure 2, as mentioned.
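
As an illustration of the disjointness requirement – a minimal sketch in which the pool size and split ratio are assumptions, not figures from the manuscript:

```python
import numpy as np

rng = np.random.default_rng(42)
n_points = 3000                 # hypothetical pool of labelled sample points
indices = rng.permutation(n_points)

n_train = n_points // 3         # e.g. one third reserved for training
train_idx = indices[:n_train]
valid_idx = indices[n_train:]   # the remainder is kept for validation only

# No point may appear in both sets.
assert np.intersect1d(train_idx, valid_idx).size == 0
```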

Another issue with CART as opposed to visual interpretation of training and validation points is that CART will also have errors in defining the coarse pre-classification. How is this error accounted for?

Figure 7: What is the "number of results" on the y-axis? It is not clear. The blue line should be labelled "cumulative".

Figure 8: Needs a north arrow, country names, etc. It does not work for me – I cannot see where the maps from Hassan et al. are, and A and B are not described in the figure caption anywhere. This figure needs more work to make it acceptable and publishable. The legend colours are indistinguishable.

Line 484: “used” after variable

Table 4:

·        It would be nice to see these results broken down by class – what are the most important features for each class?

·        The top 3 SAR features should be annotated with "S1" to show that they are Sentinel-1 derived

·        All the other SAR features should be annotated with the respective sensor they came from

Line 497-500: It would be nice to see the significance of L-band and C-band SAR discussed with respect to these results: C-band is important for tree leaves, where volume scattering is dominant, while L-band is sensitive to tree trunks and limbs. What, therefore, is the implication of this for the results presented in table 4? Is there a difference in importance between the S1- and A2-derived features, related to the wavelength of the SAR?

Line 570: Emergency is not the right word – consider a different word

Line 576-578: The issue with this statement relates to accuracy – how do you know the classifier will perform to the same level of accuracy in another area? The validation approach only shows agreement between the reference and classified data for the area studied.

Line 587: “of” after quality, “physical” before surface

Line 600: 49, not 45

Line 602: “the” after of, “were” DEM

Line 597-605: This discussion would be much more valuable if the results were presented by class, as mentioned previously – what were the most important features for each LC class?

Line 615-617: The problem with the resources you highlight is that they are all likely to be commercial data, not free and open like Sentinel. This is very significant for the humanitarian applications that you discuss. In fact, this issue is not addressed enough in the discussion (as I mentioned previously in relation to the ALOS data, which are not free) – humanitarian applications will demand free data because of the nature of the issue. The authors should address and discuss this.

Line 629-630: This last sentence does not make sense as currently written

Actually I argue that the method is only partly transferable to humanitarian organisations because of the high costs involved for SAR imagery such as ALOS

This last sentence should link the study back to the aims more as well

Line 636: The line says that A.B. was responsible for funding acquisition but the following line says that the research received no external funding. Please clarify. 

Line 638: I think it would be worth declaring the value of the ALOS data, even if it was covered by a grant. Future readers who would like to replicate the method would then know how much it would cost.

Line 644: Fig A1-A11: who are the “experts” mentioned for interpretation of missing urban areas? Are they the same as those used for the 25 points per class mentioned on line 239?

Generally:

The selection of features for classification seems strange and is not well explained. What is the logic, e.g. in choosing topographic variables? Is there a reference to justify this? Water presence, for example, is largely predetermined by topography, in that water will occur in areas of little slope and mild terrain; topographic features could therefore be the primary predictor for the water class, even though they would not explain much variation within it.

After reading the paper, I am still wondering what land cover transitions characterise the change in the area. For example, how much forest was converted to urban and bare ground during the period? Perhaps figure 6 could be revised to show this more explicitly; there are many graphical ways to show the transitions between classes, both between each date and overall.

General text changes

·        change “phonological” to “phenological” throughout the text

·        change "data is" to "data were/are" throughout the text. "Datum" is singular; "data" are plural.

·        I would not say "vegetation bodies" – it is okay to say "water bodies".


Comments for author File: Comments.pdf

Author Response

Please see attached word file

Author Response File: Author Response.docx

Reviewer 2 Report

The paper addresses an interesting topic, the material is suitable and the conclusions seem reliable. It appears, however, a bit verbose, especially in the justification of the adopted processing framework, which in my opinion is too complex for the proposed task.


My main concern regards the use of indices besides the original image bands. The derived indices are combinations of the original bands, so they bring no independent information, which is acknowledged in the paper. Now, although the classification algorithms used somehow compensate for this redundancy, I believe the use of additional bands carrying no actually independent information adds little to the obtained results. As a minimum, I find this issue should be addressed in the paper, e.g. by showing some results obtained by using only the independent bands in the classification. This could be accommodated by reducing some of the lengthy descriptions to a more concise form.
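
As a toy illustration of this point – entirely synthetic data and an assumed setup, not a re-analysis of the study – a random forest given derived bands that are exact functions of the originals typically gains little over the independent bands alone:

```python
# Synthetic two-band, two-class problem; the derived bands (difference
# and normalized difference) carry no independent information.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 2000
b1, b2 = rng.normal(size=(2, n))
y = (b1 + 0.5 * b2 > 0).astype(int)

X_independent = np.column_stack([b1, b2])
X_redundant = np.column_stack([b1, b2, b1 - b2, (b1 - b2) / (b1 + b2 + 1e-9)])

for X in (X_independent, X_redundant):
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    print(cross_val_score(clf, X, y, cv=3).mean())  # very similar accuracies
```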


In the case of the SAR-derived indices, there is an additional issue, which regards the use of different polarization channels coming from sensors operating in different frequency bands.

As should be clear from the literature regarding the definition of such indices, they are intended as combinations of polarimetric information to highlight particular features of the surface, based on physical interpretation of scattering mechanisms and statistical measurements of SAR backscatter. In all cases, their derivation is based on the assumption that the different polarimetric channels are all acquired in the same frequency band. In practice, using channels acquired in different carrier frequencies (in C and L bands in this case) does not ensure physical consistency, which is one of the main rationales behind the use of these indices. 


Generally, since, as the authors rightfully observe on line 620 of the manuscript, the classification is here aimed at recognizing a limited number of target classes (4), I strongly suspect that using so many "derived" bands does not add much to the results.

Also, the combined use of several window sizes for the preprocessing (1x1, ..., 49x49 pixels) simply brings in different amounts of spatial smoothing, which reduces the impact of noise at the expense of spatial resolution. Depending on the level of spatial detail of the features used, some smoothing levels may be more useful than others. Recognizing such relations before processing could ease the additional computational burden caused by multiplying all input bands by the number of different filter window sizes.
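
A brief sketch of this trade-off, using a synthetic speckle-like test image; the window sizes mirror those mentioned above, while everything else is an assumption for illustration:

```python
import numpy as np
from scipy.ndimage import uniform_filter

rng = np.random.default_rng(0)
# Exponentially distributed intensities mimic fully developed speckle.
image = rng.exponential(scale=1.0, size=(200, 200))

for size in (1, 3, 9, 25, 49):
    smoothed = uniform_filter(image, size=size)
    # Variance drops as the window grows: less noise, but also less detail.
    print(size, round(float(smoothed.var()), 4))
```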


I find these issues somehow distract the reader from the main topic of the paper, which is the integrated use of radar and optical remote sensing, as well as topography, for the particular task of documenting in an original way the effects of this humanitarian crisis. Simplifying the methodology would render the paper more poignant and interesting, in my opinion.


Some more detailed comments follow.


Sect. 2.2.2: apart from the observations above about the necessity of such a large quantity of indices, when listing them, at least a short description of what each index is supposed to highlight should be provided. Also, the values of the parameters G, C1, C2, etc. (line 237) used in the formulae should be made explicit. Finally, along the same line of reasoning, the graphs in Appendix A appear not very useful without a succinct description of what each label, colour, percentage and fraction represents.


In Figure 3, color scales for all image channels should be provided, in order to ease the interpretation.


Line 313: I believe the year here should be 2017 instead of 2018.


Lines 338-343: in view of these observations, have you tried just dropping the Oct. 2016 image from the analysis?


Line 345: "indifferent scattering characteristics". What is meant here?


Line 374: "between February and June 2018". I'd rather say between June and July 2018.


In Figure 6, how was the dashed line traced? By hand? Why not adopt a more quantitative criterion (e.g. one based on pixel values)?


Figure 7: is the cumulative accuracy (blue) line really useful? What kind of information is it supposed to convey?


Table 3: in the notes, "EoO = error of commission" -> "EoO = error of omission"


Lines 444-447: the description of this comparison should be improved. 


Line 462: "Resultantly" ?


Section 3.3: a considerable confusion arises here between individual "features" (which are, as I understand, the individual image "bands" used in the classification) and the "feature types", which are those listed in Table 4. I suggest re-reading the section and improving the description.


Line 582 and also elsewhere in the paper: "phonological" -> "phenological"


Lines 593-595: I think this aspect is not treated thoroughly enough in the paper to allow such a conclusion to be drawn (see also the concerns above about indices derived from different frequency bands!). Also, the conclusion in lines 617-619 appears incorrect.


Author Response

Please see attached word file

Author Response File: Author Response.docx

Round 2

Reviewer 1 Report

Figures 1 and 2 have poor resolution and are difficult to read. Can they be provided as high-resolution figures?

Figure 8 is still not well annotated; explain what A and B are in the caption, as well as the meaning of the colour matrix.

There are still spelling mistakes, especially in abstract and introduction, please do another spell and grammar check.

The use of the word "chapter" is not correct; please rephrase to "section" throughout the manuscript.

The manuscript is too long and detailed; the authors should consider removing unnecessary detail in order to make it easier to read.

Author Response

please see attached document

Author Response File: Author Response.docx

Reviewer 2 Report

The paper has considerably improved, thanks to the efforts of the authors in answering the numerous observations raised by the reviewers. I still find that one of the points I raised in the previous review should be dealt with somewhat better in the text. In the following I explain this, along with some more minor points which, in my opinion, need correction.

Regarding the multifrequency polarization combinations – namely the indices defined in eqs. (7)-(14): except for the work of Omar et al. [68], which however uses them as simple regressors against the vegetation biomass observable, none of the cited literature uses polarization channels coming from different frequencies within the same index. Indeed, even the work of Schmitt et al. [110], when combining multi-frequency data, uses quad-pol images from each sensor and then compares homogeneous indices derived for each frequency.

In fact, microwave backscatter from different frequencies, even neglecting differences due to the sensors (which may be approximately corrected by absolute calibration and by using similar incidence angles and acquisition geometries), may show different relative levels, due to the different scattering mechanisms taking place at the different frequencies. To make a simple example, the P_surface component of your "Pauli" decomposition, being derived as the difference between HH (in L band) and VV (in C band), will be biased by the relatively stronger level of L-band backscatter compared to C-band. A similar effect can be seen for the P_dihedral component, where the HH part will dominate. The same can be said for the other indices, except for the RFDI, which is derived from L-band channels only. To use such a combination quantitatively, a possible method would be to apply some form of "correction" of one channel with respect to the other, with "gain" factors which should take into account the expected dependence of the backscatter level on the frequency of the impinging radiation. Such gain factors should somehow carry some information about the type of surface being imaged...

Although such observations may seem pedantic at first sight, they stem from the fact that the theory behind the mentioned indices assumes that the original data are taken from a "canonical" polarimetric matrix, i.e. acquired in the same frequency, which exhibits well-determined symmetry and other mathematical properties that cannot be verified if different entries are acquired in different frequencies. Having said that, I will also concede that it is probably acceptable to use such "hybrid" combined indices in a qualitative way, i.e. as simple linear combinations of independent observation channels; also, I believe that the potential distorting effect on the classifications is drowned in the large number of derived, redundant layers used by the RF algorithm.

What I recommend is to avoid over-interpretation of the real physical significance of such indices, e.g. in the description of the maps in Fig. 3. In practice, it should be made clear (or clearer) that all the indices introduced are actually "mock-ups" of the actual indices, derived by stitching together polarization channels acquired in different frequency bands. So, although their names can be retained in order to convey at large their original meaning, they should be considered simply, as mentioned above, as combinations of different channels, without any further physical interpretation.
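
To make the bias concrete, here is a minimal sketch of such a hybrid combination and of the gain-correction idea; the channel values, the function name and the gain factor are illustrative assumptions, not quantities from the manuscript:

```python
import numpy as np

def hybrid_surface_component(hh_l_band, vv_c_band, gain=1.0):
    """Mock-up 'P_surface'-like combination of L-band HH and C-band VV.

    Because L-band backscatter is typically stronger than C-band over
    vegetated terrain, the uncorrected combination is biased towards HH.
    `gain` rescales the C-band channel to compensate; estimating it per
    surface type is exactly the difficulty raised above.
    """
    return hh_l_band - gain * vv_c_band

# Synthetic backscatter values in linear power units (illustrative only).
hh_l = np.array([0.12, 0.30, 0.08])
vv_c = np.array([0.05, 0.10, 0.04])

print(hybrid_surface_component(hh_l, vv_c))            # biased towards L-band HH
print(hybrid_surface_component(hh_l, vv_c, gain=2.0))  # with an assumed gain factor
```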
Other minor comments follow.

The word "chapter" should be substituted by "section" throughout the paper – as already suggested by other reviewers.

Moreover, the correction "phonological" -> "phenological", again already suggested in previous revisions, is not applied consistently through the manuscript.

Line 21: "...of 1500..." – add "hectares".

Line 183: "Wildlife" is repeated.

Lines 322-323: it is worth specifying that the 25 points are chosen *for each class*. In this way, the sentence "these 100 points" makes sense.

Eq. (8): "GV" -> "HV"?

Line 372 (and eq. 14): the definitions should be for P_dihedral, P_volume and P_surface: the way they are combined in an RGB image has nothing to do with their definition. Of course, the warnings contained in the preceding paragraph should be applied to this nomenclature, to avoid over-interpretation.

Line 373: "Technically they can only be derived": consider correcting to "Technically, all these indices can only be derived...".

Line 455: "The reciprocal share open areas..." – what does this mean?

Figure 6: light yellow and light green seem to represent the same class transition (vegetation -> open). Is that a typo?

Figure A2 probably has an inconsistency in one threshold representation. More generally, the figures in Appendix A should be reorganized and/or reformatted with larger fonts to be readable.

Author Response

please see attached document

Author Response File: Author Response.docx
