Article
Peer-Review Record

Understanding the Requirements for Surveys to Support Satellite-Based Crop Type Mapping: Evidence from Sub-Saharan Africa

Remote Sens. 2021, 13(23), 4749; https://doi.org/10.3390/rs13234749
by George Azzari 1, Shruti Jain 1, Graham Jeffries 2, Talip Kilic 3,* and Siobhan Murray 3
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Reviewer 3: Anonymous
Submission received: 8 September 2021 / Revised: 5 November 2021 / Accepted: 8 November 2021 / Published: 23 November 2021
(This article belongs to the Collection Sentinel-2: Science and Applications)

Round 1

Reviewer 1 Report

The presented manuscript, titled "Understanding the Requirements for Surveys to Support Satellite-Based Crop Type Mapping: Evidence from Sub-Saharan Africa", seeks to assess the relationship between the quantity and type of field data and the quality of resulting single-crop-type maps, while accounting for plot size and classification input features in a controlled machine learning classification experiment. The objective of the study is timely, advances the current state of knowledge, and is well formulated. While the manuscript seeks to provide an important contribution, major adjustments are needed to justify publication in Remote Sensing. In the following, I elaborate on the key issues.

The authors collected and made use of an exhaustive set of reference data for Malawi, which enabled them to assess the influence of the named factors on classification performance with a large number of test samples. However, the evaluation metrics used here do not inform about the different types of error made by the models. Including an assessment of class-wise user's and producer's accuracy is pivotal for readers interested in producing, for example, rather conservative maps of maize cover with a low commission error for the maize class. The authors should consider presenting the overall accuracy used here in combination with user's and producer's accuracy for the maize class. This will also further support the insights presented in Section 3.6.
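For clarity, both class-wise metrics follow directly from the confusion matrix. A minimal Python sketch (the labels below are illustrative placeholders, not the authors' data or code):

    # Class-wise user's and producer's accuracy for a binary maize/other map.
    import numpy as np
    from sklearn.metrics import confusion_matrix

    y_true = np.array([1, 1, 0, 1, 0, 0, 1, 0])  # 1 = maize, 0 = other (placeholder)
    y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0])  # placeholder predictions

    cm = confusion_matrix(y_true, y_pred, labels=[0, 1])
    # Rows are the reference class, columns the mapped class.
    producers_acc = np.diag(cm) / cm.sum(axis=1)  # complement of omission error
    users_acc = np.diag(cm) / cm.sum(axis=0)      # complement of commission error
    overall_acc = np.diag(cm).sum() / cm.sum()

    print("maize producer's accuracy:", producers_acc[1])
    print("maize user's accuracy:   ", users_acc[1])
    print("overall accuracy:        ", overall_acc)

A conservative maize map in the sense described above is one with high user's accuracy for the maize class.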

Furthermore, the experiment focuses solely on the distinction between maize and other crops and does not include the classification of non-cropland land cover types, which would normally have an additional impact on classification performance if no cropland mask is available. The authors need to be clear about this in the study, i.e., that applying the crop type models presented here assumes the availability of a timely and accurate cropland mask.

The authors should be more explicit about how the intercropped fields were used, as to my understanding intercropped fields containing maize end up labeled as "maize" in the reference data.

While many of the findings are in line with previous work, it is unclear to me how the corner of a field boundary should serve as a good training sample for classifying crop types. Please consider whether this experiment is conceptually flawed before making recommendations on field data collection based on corner points.

For Section 2.2.2, please briefly justify the use of the selected indices based on the current literature and elaborate on the potential effects of integrating 20 m SWIR bands when producing 10 m maps.
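To make the resolution point concrete: any index built from a 20 m SWIR band forces a resampling choice before a 10 m map can be produced, and the resulting feature only nominally has 10 m detail. A minimal numpy sketch (band names, values, and the index formula are illustrative assumptions, not necessarily the authors' definitions):

    import numpy as np

    nir_10m = np.random.rand(4, 4)   # e.g. a 10 m NIR band such as Sentinel-2 B8
    swir_20m = np.random.rand(2, 2)  # e.g. a 20 m SWIR band such as Sentinel-2 B11

    # Nearest-neighbour upsampling: each 20 m pixel becomes a 2x2 block of
    # 10 m pixels, so any SWIR-based index inherits 20 m spatial detail.
    swir_10m = np.kron(swir_20m, np.ones((2, 2)))

    index_10m = (nir_10m - swir_10m) / (nir_10m + swir_10m)
    print(index_10m.shape)  # (4, 4): nominally 10 m, effectively 20 m in SWIR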

Please be more explicit about the stage at which the reference data were split into train/val/test sets. Did you really use the same test data for all experiments? If so, please state that. Also, please inform the reader about the model hyperparameters used.
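To state the concern concretely: for accuracies to be comparable across experiments, the test set must be fixed once, before any experiment-specific subsampling of the training pool. A minimal scikit-learn sketch of that discipline (placeholder data and variable names, not the authors' pipeline):

    import numpy as np
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X = rng.random((1000, 10))         # placeholder features
    y = rng.integers(0, 2, size=1000)  # placeholder maize/other labels

    # Hold out the test set exactly once, stratified by class.
    X_pool, X_test, y_pool, y_test = train_test_split(
        X, y, test_size=0.2, random_state=42, stratify=y)

    # Each experiment then draws its training (and validation) subsets from
    # (X_pool, y_pool) only; (X_test, y_test) never changes, so scores
    # remain directly comparable across experiments.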

I understand it is not the key objective of the study, but I suggest you better highlight that the methods granting multiple samples per field (boundary points, convex hull, plot points) offer potential in scenarios with limited field data availability.

Figure 7 suggests that comparably good classification performance can already be reached with a very limited training dataset (~2% lower when using a few hundred samples compared to >4000). This deserves discussion.

Overall, the manuscript is well organized but overly lengthy. Shortening and consolidation of several passages are needed, and some structural changes to the manuscript are required. For instance, the introduction features many aspects related to yield mapping, such as the scale-productivity relationship introduced in L65-68. These aspects are interesting but do not directly relate to the issues presented in this work, which focuses on crop type mapping per se. Also, the summary of the key findings presented in the introduction could be removed or placed elsewhere. Sections 2.1.1 and 2.1.2 duplicate large passages of text, as similar or identical methods were used to prepare the reference data. Section 3.6 describes an additional experiment, which should be introduced in the methods section. The results contain an overly large number of figures, which need to be further consolidated or moved into the appendix. The discussion section fails to embed the presented findings in the current literature, which would be an important step in contextualizing them. There is no conclusion in the manuscript; the summary presented in the introduction would be better placed here. All figures and tables in the manuscript appear as low-quality graphics, which often compromises readability.

 

Author Response

Please see the attachment.

Author Response File: Author Response.docx

Reviewer 2 Report

The current study addresses a topic of relevance and general interest to the readers of the journal. However, the manuscript requires some modifications. I recommend the authors address the concerns and comments below.

1. Introduction:

Lines 125-139: All findings should be in the discussion and conclusion sections, not in the introduction.

2. Material and Methods:

Lines 314-316: What processing was applied to the S1 data? Did the authors just apply the LIA? If so, why were other processing steps, such as calibration and orthorectification, not applied?

Line 324, Table 5: There are many vegetation indices; why did the authors select these particular ones? Also, why was the Normalized Burn Ratio 1 selected? It is an index designed to highlight burnt areas.

Lines 336 and 340 (Equations 1 and 2): What are beta, w, and t? They are not defined in the manuscript.
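For context, in satellite-based crop mapping such symbols usually belong to a harmonic regression fitted over the image time series, e.g. (a common convention, which may or may not match Equations 1 and 2):

    f(t) = \beta_0 + \sum_{k=1}^{K} \left[ \beta_{2k-1} \cos(k \omega t) + \beta_{2k} \sin(k \omega t) \right], \qquad \omega = \frac{2\pi}{T}

where the \beta are fitted coefficients, \omega is the base angular frequency over the season length T, and t is the acquisition date. Whatever convention the authors follow, the symbols must be defined in the manuscript.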

Appendix:

Lines 755-758: The pre-processing of the S2 data should be presented in Materials and Methods, not in an appendix.

Author Response

Please see the attachment.

Author Response File: Author Response.docx

Reviewer 3 Report

The authors provide guidance on how surveys should be conducted to support satellite-based crop mapping in African countries such as Ethiopia and Malawi.

The structure of the manuscript is treated in a very general way. The references are insufficient; e.g., Section 2.2.1 does not provide a single reference.

Please correct the structure and reorganize the manuscript.

Please present the data used for the analyses consistently and in order. The presentation of the data is not clear. Please provide the exact time span of the data used. At one point the authors use the term "materials", at another "survey data", and they also appear to describe some databases. Please clarify and reorder the information provided.

E.g., in Section 2.1.1, "Malawi", the authors both present data and investigate GIS issues; the latter should be moved to another section (3.1?).

In Section 2.2, the authors surprisingly introduce some new data, so the reader is unsure how to follow the concept.

In Section 2.2.3, the authors present a piece of the methodology, and in Section 2.2.4 they present some new data again? In the next section, 2.3, the authors jump back to methodology? Please improve the structure.

Line 450 – there is no "Data" section; please clarify.

2.3.3 – Please provide the practical background for the modeling and testing scenarios.

Line 472 – please clarify the term "sensitivity analyses".

Please enlarge the figures and improve their quality. Please provide detailed captions rather than shorthand. The figures are very small and not informative. Please improve them.

The authors did not provide any conclusions. Why? Please provide the conclusions of your research.

 

Author Response

Please see the attachment.

Author Response File: Author Response.docx

Round 2

Reviewer 1 Report

I thank the authors for the detailed reply and their efforts in addressing the comments and suggestions. I believe the manuscript has improved considerably. However, I have one major point and a few minor ones which deserve attention before publication of the manuscript.

---

My only major point relates to the adjustments made in Section 2.2.2, where new information on the image processing raises questions about which data were actually used and how they were preprocessed. The authors state that they used L2A (surface reflectance) data but performed atmospheric correction based on a linear regression model. I fail to understand why surface reflectance products should be atmospherically corrected, what the regression model entails, and how this was done. Also, the authors introduce a new cloud, cloud-shadow, haze, and snow detection, which needs a reference or, if presented for the first time, at least information on how it operates. Clarifying this issue is required.

---

Additionally, some of my earlier remarks need a little more work. These do not affect the quality of the research per se but can still substantially improve the manuscript, and I would appreciate it if the authors took the time to adjust these points. I attach the full correspondence here for context:

---

Reviewer 1 initial comment: Section 3.6 describes an additional experiment which should be introduced in the methods section.

Response: We have elected not to do this, since what we do in Section 3.6 is not "an experiment" vis-à-vis the simulated experiments presented in the methods section. In Section 3.6, we are simply conducting a comparative analysis of the "final products" (in the form of 10-meter resolution rasters) based on the best competing models under each approach to georeferencing plot locations.

Reviewer 1 reply: I disagree; you present a new analysis workflow, which should be described in the methods section. Specifically, most of the information described in lines 624-655 should be placed in the Methods section.

---

Reviewer 1 initial comment: The results contain an overly large number of figures which need to be further consolidated or moved into the appendix.

Response: Without specific suggestions from the reviewer, we have not altered the composition of the figures, as we deem them to be the minimalist set to include vis-à-vis the experimental framework and the research questions. We have, however, increased the resolution of the graphics.

Reviewer 1 reply: In my opinion, 15 figures and 10 tables are not minimalist. For starters, you could combine Tables 8 and 9 into one table; Figures 10 and 11 present similar information, so one could be moved to the appendix. I still find Figures 4 and 6 not very informative, as the individual panels in each figure look extremely similar, so those could also be combined into one figure.

---

Reviewer 1 initial comment: All figures and tables in the manuscript appear as low quality graphics, which often compromise readability.

Response: We have, however, increased the resolution of the graphics.

Reviewer 1 reply: The graphics still render at low quality; tables and captions also appear as graphics and partly show mouse cursors and Word autocorrect highlights (see, e.g., Table 1 or Figure 7). Please make sure to use high-quality graphics and to present captions and tables in text format.

Author Response

Please see the attachment.

Author Response File: Author Response.docx
