Article
Peer-Review Record

Mapping Rice Paddy Distribution Using Remote Sensing by Coupling Deep Learning with Phenological Characteristics

Remote Sens. 2021, 13(7), 1360; https://doi.org/10.3390/rs13071360
by A-Xing Zhu 1,2,3,4,5, Fang-He Zhao 2,4,*, Hao-Bo Pan 1 and Jun-Zhi Liu 1
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Reviewer 3: Anonymous
Submission received: 5 March 2021 / Revised: 25 March 2021 / Accepted: 30 March 2021 / Published: 2 April 2021

Round 1

Reviewer 1 Report

Dear authors, please address the following in the revised version.

  1. Introduction: add a subsection 1.1 "Contributions". Please summarize your objectives here without including the paper structure. Then, at the end, describe how the paper is structured, following the objectives of your study.

Fig. 1: the workflow diagram is missing some components, e.g., the atmospheric correction method. Also, increase the font size and the overall size of the workflow diagram; the present format is difficult to read.

 

Lines 149-166: Please include the equations for EVI2 and LSWI. Furthermore, LSWI is not a vegetation index; it is the Land Surface Water Index. The workflow needs to be modified accordingly as well. In total, this section of the manuscript requires three equations.
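For reference, the widely used definitions of these two indices are given below (the exact Sentinel-2 band assignments would need to be confirmed against the manuscript):

```latex
\mathrm{EVI2} = 2.5 \times \frac{\rho_{\mathrm{NIR}} - \rho_{\mathrm{Red}}}{\rho_{\mathrm{NIR}} + 2.4\,\rho_{\mathrm{Red}} + 1},
\qquad
\mathrm{LSWI} = \frac{\rho_{\mathrm{NIR}} - \rho_{\mathrm{SWIR}}}{\rho_{\mathrm{NIR}} + \rho_{\mathrm{SWIR}}}
```

where \(\rho\) denotes surface reflectance in the named band.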

2.3. Deep Learning model

Please also include the architecture of the LSTM in the manuscript and explain it, since it is the ML model used in this study and should therefore be explained in context. Furthermore, how the LSTM was implemented is not documented in the paper, e.g., the software package and programming language used.
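To illustrate the kind of implementation detail being requested, here is a minimal sketch of a per-pixel time-series LSTM classifier. This is not the authors' actual implementation: PyTorch, the layer sizes, and the sequence length are assumptions purely for illustration; the eight input features echo the five spectral bands plus three indices mentioned later in the responses.

```python
import torch
import torch.nn as nn

class RiceLSTM(nn.Module):
    """Binary rice / non-rice classifier over a per-pixel spectral time series.

    Input shape: (batch, time_steps, n_features), i.e. one feature vector
    per Sentinel-2 acquisition date.
    """
    def __init__(self, n_features: int = 8, hidden_size: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        _, (h_n, _) = self.lstm(x)                 # final hidden state
        return torch.sigmoid(self.head(h_n[-1]))   # rice probability per pixel

# Toy usage: 32 pixels, 15 acquisition dates, 8 features each
model = RiceLSTM()
probabilities = model(torch.randn(32, 15, 8))
```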

Study area and data collection.

Please add a table listing the Sentinel-2 images by date, and include the number of field samples against each image for the readers' convenience.

3.3. "Evaluation of the performance": change this title to "Evaluation metrics".

I do not see the equation for the area under curve (AUC) values or an explanation of how they were calculated.
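For reference, the AUC is conventionally computed as the area under the ROC curve via trapezoidal integration; whether the authors follow this convention is exactly what the manuscript should state:

```latex
\mathrm{AUC} = \int_0^1 \mathrm{TPR}\,\mathrm{d}(\mathrm{FPR})
\;\approx\; \sum_{k=1}^{K} \frac{\mathrm{TPR}_{k} + \mathrm{TPR}_{k-1}}{2}
\left( \mathrm{FPR}_{k} - \mathrm{FPR}_{k-1} \right)
```

where the points \((\mathrm{FPR}_k, \mathrm{TPR}_k)\) are obtained by sweeping the classification threshold.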

Table 1 is also confusing: it should contain OA, AUC, P, and R. However, "mean overall accuracy" appears out of nowhere, and OA is not included in the table. Perhaps "mean overall accuracy" is OA, but the terms are not consistent, which can confuse readers.
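For reference, the standard confusion-matrix definitions of the metrics the reviewer expects the table to report consistently (TP, TN, FP, FN denote true/false positives/negatives):

```latex
\mathrm{OA} = \frac{TP + TN}{TP + TN + FP + FN}, \qquad
P = \frac{TP}{TP + FP}, \qquad
R = \frac{TP}{TP + FN}
```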

Fig. 3: Please enlarge it to occupy the space available on the left side of the image. You may use the following paper as a reference for preparing the figures.

https://www.mdpi.com/2072-4292/13/3/463

The rest of the results are clearly presented and make sense, and the contribution provides a novel method. My recommendation is to publish the trained models on GitHub or another platform for those interested in using your model to map paddy rice from Sentinel data.

Thank you and good luck.
Author Response

Please see the attachment

Author Response File: Author Response.docx

Reviewer 2 Report

The authors tried to couple a deep learning method with phenological data of rice. The topic is of interest, and the manuscript is well written and organised.

The manuscript can be accepted after minor revisions:

- Please, use the passive form instead of We / we

- Page 6, lines 240-249: what is the ground sample distance (GSD)?

- Results: the computational time of the different methods should be reported.

Author Response

Point 1: Please, use the passive form instead of We / we

Response 1: The paper has been examined thoroughly and all sentences that used “We/we” have been changed to passive forms. The detailed revisions are listed below:

  • Line 34: From “we can achieve …” to “it is possible to achieve …”.
  • Line 186: From “we then collected samples” to “samples were collected”.
  • Line 190: From “we can expect that …” to “it is expected that …”.
  • Line 195: From “we defined intervals of RCLE values” to “intervals of RCLE values are designated …”.
  • Line 197-198: From “We use random sampling method” to “Random sampling method is used”.
  • Line 278: From “we computed the time series of …” to “the time series of … was computed”.
  • Line 283-284: From “We chose 0.9 as the threshold for RCLE values and 0.1 as the threshold for LSWImin values” to “The threshold for RCLE values was set to 0.9 and the threshold for LSWImin values was set to 0.1”.
  • Line 313-315: From “We randomly selected 350 rice paddy (positive) samples” to “In the sampling step, 350 rice paddy (positive) samples were randomly selected”.
  • Line 358-360: From “we calculated and compared the overall accuracy (OA), precision (P), recall (R) and area under curve (AUC) value of the respective classification results” to “the overall accuracy (OA), precision (P), recall (R) and area under curve (AUC) value of the respective classification results were calculated and compared”.
  • Line 399: From “We conducted a hypothesis test” to “A hypothesis test was conducted”.
  • Line 401: From “We assume the classification errors …” to “The classification errors were assumed …”.
  • Line 403-404: From “We build the hypothesis that” to “The hypothesis is that …”.
  • Line 458 From “We calculated the overall accuracies of the above three maps” to “The overall accuracies of the above three maps were calculated”.
  • Line 502-503: From “we examined the impacts of …” to “the impacts of … were examined”.
  • Line 542: From “we substitute the LSTM component with…” to “the LSTM component was substituted with …”.
  • Line 543-544: From “we chose random forest and support vector machine” to “random forest and support vector machine were chosen”.

Point 2: Page 6, lines 240-249: what is the ground sample distance (GSD)?

Response 2: The samples were collected using field photos, and the ground sampling distance is 1 meter. This information has been added in Section 3.1 at Line 267.

Point 3: Results: the computational time of the different methods should be reported.

Response 3: The computational time for producing rice paddy distribution maps using the different methods has been reported in Section 4.2, Lines 473-481. Since computational efficiency is not a focus of this paper, the results are reported only in the text, as follows:

“The computational time for mapping the spatial distribution with the three methods was also recorded during the experiment. It took 13,442 seconds for the pheno-deep method to produce a rice paddy distribution map, 12,472 seconds for the deep learning alone method, and 575 seconds for the phenological alone method. The pheno-deep method took slightly more time than the deep learning alone method and much more than the phenological alone method. It should also be noted that, unlike the deep learning alone method, the pheno-deep method does not require any field samples, and its accuracy improved greatly over the phenological alone method. The additional computing time is therefore not a significant concern in the application of the pheno-deep method.” (Lines 473-481)

Reviewer 3 Report

Manuscript is well written and seems interesting to read but with following concerns:

  1. Please elaborate on the evaluation process of the method (with which map is it compared, and how many points were considered for the evaluation)?
  2. Please tabulate the comparison of the pheno-deep method with all other methods you considered.

Thank you.

Author Response

Response to Reviewer 3 Comments

Point 1: Please elaborate on the evaluation process of the method (with which map is it compared, and how many points were considered for the evaluation)?

Response 1: We have added more details to the description of the evaluation process. A total of 364 samples are used for validation, as specified in Section 3.1 at Line 268-269. The evaluation process is explained in Section 3.3 at Line 341-357. The overall performance of the pheno-deep method is evaluated based on the average results of 50 experiments with different randomly selected samples. The deep learning alone method is also repeated 50 times with different samples to obtain average results for comparison (a minimal sketch of this repeated evaluation is given below). The phenological alone method is not repeated due to the stability of its result.

In Section 4.2 where the accuracy of the maps under different terrain conditions is evaluated, the number of validation samples used in the northern, central, and southern areas is 135, 169, and 60, respectively. This has been added in Section 4.2 at Line 446-447.
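As an aside, the repeated-sampling evaluation described above can be expressed compactly in code. This is a self-contained sketch, not the authors' actual code: the lambda in the usage line stands in for the real train-and-score step, which is not reproduced here.

```python
import numpy as np

def repeated_evaluation(run_once, n_runs=50, seed=42):
    """Average a stochastic train/evaluate cycle over n_runs repetitions,
    each drawing a fresh random training sample, as described in the
    response (50 runs, scored against the fixed validation set)."""
    rng = np.random.default_rng(seed)
    scores = np.array([run_once(rng) for _ in range(n_runs)])
    return scores.mean(), scores.std()

# Toy usage: replace the lambda with the actual train-and-score step.
mean_oa, std_oa = repeated_evaluation(lambda rng: 0.9 + 0.02 * rng.standard_normal())
print(f"mean OA = {mean_oa:.3f} +/- {std_oa:.3f}")
```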

 

Point 2: Please tabulate the comparison of the pheno-deep method with all other methods you considered.

Response 2: The comparison of the pheno-deep method and the deep learning alone method is tabulated in Table 2 at Line 397. The comparison of the pheno-deep method and the phenological alone method is tabulated in Table 3 at Line 437. The pheno-deep method is compared with the other two methods separately for two reasons. First, the purposes of the two comparisons are different. The comparison to the deep learning alone method aims to examine whether the performance of the LSTM model trained with samples from phenological results (pheno-deep method) differs significantly from that trained with field samples (deep learning alone method). The comparison to the phenological alone method aims to examine whether the pheno-deep method can overcome the noise in the phenological results. Second, the pheno-deep method used 5 spectral bands and 3 remote sensing indices for classification while the phenological alone method utilized only 2 remote sensing indices, so it would not be fair to compare them directly. In Table 3, the pheno-deep method with only LSWI and EVI2 data is therefore compared with the phenological alone method, so that the data used in the two methods are the same. This has been explained in Section 3.3 at Line 352-355.

The comparison of the three methods across the three areas with different terrain conditions is tabulated in Table 4 at Line 470.

Round 2

Reviewer 1 Report

The authors have answered all the questions and incorporated all the changes.
