Article
Peer-Review Record

Integrating Growth and Environmental Parameters to Discriminate Powdery Mildew and Aphid of Winter Wheat Using Bi-Temporal Landsat-8 Imagery

Remote Sens. 2019, 11(7), 846; https://doi.org/10.3390/rs11070846
by Huiqin Ma 1,2,3, Wenjiang Huang 2,3,*, Yuanshu Jing 1, Chenghai Yang 4, Liangxiu Han 5, Yingying Dong 2,3, Huichun Ye 2,3, Yue Shi 2,3,6, Qiong Zheng 2,3,7, Linyi Liu 2,3,6 and Chao Ruan 2,3,8
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Submission received: 22 February 2019 / Revised: 27 March 2019 / Accepted: 5 April 2019 / Published: 8 April 2019

Round 1

Reviewer 1 Report

General Comments: The authors evaluated the performance of three classification models to discriminate healthy, powdery mildew-infected, and aphid-damaged winter wheat in a region of China. The models are a support vector machine (SVM), a back-propagation neural network (BPNN), and a coupled method called SMOTE-BPNN. They defined four different input datasets (four combinations of growth indices and environmental factors) to highlight the importance of the bi-temporal growth indices when they are used as an input to these classification models. The results indicate that the coupled model with bi-temporal growth indices performs better than the other evaluated models. In general, the paper is well written and assesses a new coupled model for detecting the major issues (crop disease and insect pest) that occur almost every year in winter wheat grown in China. The introduction is coherent, and I suggest adding some examples in places to make it clearer. The description of the methodology, particularly the field survey, is insufficient and must be completed. Adding figures, such as the network structure used for both SVM and BPNN, would be helpful for readers. The results are fine, but I think the tables could be moved to an appendix and their main parts illustrated in radar charts. The discussion is a scientific explanation that justifies the inputs used in this study. I have listed below my comments, which can enhance the quality of the manuscript and must be addressed by the authors in the next round.

 

Major:


1. Introduction:

Line 62-67: “For instance, sugar beet Cercospora leaf spot, leaf rust, and powdery mildew were successfully detected and classified using spectral vegetation indices [8,9]. Some major and common wheat diseases and pests (i.e., yellow rust, powdery mildew, aphid and take-all disease) were successfully distinguished based on spectral reflectance, traditional spectral vegetation indices, specific wavelet features and new spectral indices…” It is important to mention which bands or vegetation indices are most useful for powdery mildew detection, and why.

Line 68-70: “Conversely, multispectral satellite imagery with low cost, good data quality and wide swath coverage is a feasible method for crop disease and pest monitoring” What about the temporal and spatial resolution, overpass time, and cloud issues of such imagery?

Line 71-73: “For instance, relying on a hyperspectral experiment, Yuan et al. [17] simulated the reflectance channels and some classic vegetation indices (VIs) of seven high-resolution satellite sensors to discriminate …” I think this part is not related to a hyperspectral experiment. Since Yuan et al. used satellite imagery, it must be a “multispectral experiment”. Check this again.

Line 87: “but also require appropriate environmental conditions” Mention some examples of “environmental conditions” to make this clearer.

Line 127-130: “(1) to assess the feasibility of using a bi-temporal feature set which integrated growth indices and environmental factors to discriminate wheat powdery mildew and aphid; and (2) to evaluate the performance of the bi-temporal feature set based SMOTE-BPNN discrimination approach and its capability for mapping the damage from the disease and pest.” What is the difference between objectives 1 and 2? It is not clear to me and must be revised. I think the first objective is to evaluate the performance of the three classification models mentioned in the Methodology section, and the second is to assess the impact of the bi-temporal feature set on the accuracy of the classification models when it is used as an input.


2. Materials and Methods

Line 136-137: “The local climate and environmental conditions provide a suitable developing environment for powdery mildew and aphid.” The local climate and environmental conditions must be defined.

Fig 1: Location of the experimental fields

(a) I think it is better to change the flag icons to point markers or small markers because of their size and number.

(b) What are red, pink, and grey? They must be defined in the legend.

(c) If the authors have access to a land cover map of the study area, use it in this figure.

(d) Add a scale bar to the right map as well.

Line 144-145: “Five 1-m × 1-m plots were selected at a 30-m × 30-m area to match the spatial resolution of Landsat-8 satellite imagery.” It is important to show the grid of Landsat pixels and the five 1-m plots in Fig 1.

Line 147-148: “Wheat growth conditions, height, and disease and pest occurrence severity were recorded in the survey.”

(a)    If possible, add a sample of the form used in the survey to the appendix section.

(b)    “Wheat growth conditions”? What does this mean? Provide some examples.

(c)    How did the authors measure the “pest occurrence severity”?

Line 156-157: “Plots were randomly selected for model calibration and the remaining 46 plots were used for validation.” (a) I understand that the authors surveyed 137 field plots and then categorized them into training and testing datasets, but the role of the “five 1-m × 1-m plots” is not clear to me.

(b) As far as I know, ANN-based models use three data subsets: training, validation, and testing. In your case, I assume that calibration is the training set and validation is the testing set; what about the validation set that is usually used to tune model parameters such as the number of layers, neurons, and kernel parameters?

(c) Mention the exact dates on which the authors were in the field for the survey instead of “mid-May 2014”. I am asking because the authors used two Landsat images captured on May 15 and May 22, 2014.

(d) I think a single Landsat 8 pixel (30 m × 30 m) must cover more than one plot in your case. Is that true? If yes, how can the model distinguish a healthy plot from non-healthy plots with the same input (vegetation indices from Landsat 8)?


2.3. Image Selection and Preprocessing

Why is the difference between the two Landsat dates 7 days? Is the study area covered by two Landsat paths? (The revisit cycle of Landsat-8 on a single path is 16 days, so a 7-day interval suggests the images come from adjacent, overlapping paths.)

Line 168-170: “A decision tree method was applied to the extraction of the winter wheat planting area.” The results of the decision tree must be illustrated in a figure, or at least shown as a layer in Fig 1.

Line 182-183: If the “habitat factors” and “environment factors” are the same, use one term consistently throughout the manuscript.

Line 190-191: “The single-channel method and the TIRS-1 (tenth band of Landsat-8 imagery) of the thermal infrared sensor was used to calculate LST as shown in equation (2).” Is there a citation for the equation?


-Balance Calibration Data Using Synthetic Minority Oversample Technique (SMOTE) Algorithm

Line 202-221: “Before the construction of crop disease… pest discrimination models”

I don’t know how difficult it would be, but it would be helpful to prepare a flowchart or schematic figure for the description in lines 202-221.
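To illustrate the kind of workflow being described, the following is a minimal sketch of SMOTE rebalancing followed by a BPNN-style classifier. It assumes the imbalanced-learn and scikit-learn libraries and uses placeholder feature and label arrays; it is not the authors' actual data, class sizes, or code.

# Minimal sketch: balance an imbalanced calibration set with SMOTE, then train
# a BPNN-style network on the balanced data. X and y are placeholders only.
import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(91, 10))                  # 91 calibration plots, 10 features (hypothetical)
y = np.array([0] * 60 + [1] * 20 + [2] * 11)   # imbalanced classes: healthy, mildew, aphid

X_bal, y_bal = SMOTE(random_state=0).fit_resample(X, y)   # oversample minority classes

clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
clf.fit(X_bal, y_bal)                          # back-propagation training on the balanced set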


-Wheat Powdery Mildew and Aphid Discrimination Using Back Propagation Neural Networks (BPNN)

The structure of the neural network used in the study must be illustrated in this subsection.

Line 230-233: “In this study, a novel MOSTE-BPNN approach for crop disease and pest discrimination was proposed by combining the new calibration data balanced by  the MSOTE algorithm with.”

(a) “SMOTE-BPNN” or “MSOTE-BPNN”? Check this throughout the manuscript and make sure that a single acronym is used.

(b) The number of hidden layers, the number of neurons, the kernel (activation) function, and the stopping criteria (number of epochs) used in the current study must be mentioned here or added to the neural network structure figure.

(c) Using “coupled SMOTE-BPNN” instead of “novel” would probably make more sense, since the authors couple SMOTE with BPNN and did not develop either of them.

 

- Accuracy Assessment of Disease and Pest Discrimination

The equations for all of those statistics (overall accuracy, user’s accuracy, producer’s accuracy, and kappa coefficient) must be written in this section.
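For reference, these statistics are conventionally defined from the confusion matrix as follows (standard definitions, quoted here only as a reminder of what the section should state). With $n_{ij}$ the number of validation samples of reference class $j$ assigned to class $i$, $n_{i+}=\sum_j n_{ij}$, $n_{+j}=\sum_i n_{ij}$, and $N=\sum_{i,j} n_{ij}$:

$OA = \frac{1}{N}\sum_i n_{ii}$, \quad $PA_j = \frac{n_{jj}}{n_{+j}}$, \quad $UA_i = \frac{n_{ii}}{n_{i+}}$, \quad $\kappa = \frac{N\sum_i n_{ii} - \sum_i n_{i+}\,n_{+i}}{N^2 - \sum_i n_{i+}\,n_{+i}}$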

Provide a short description of both G-means and F-score.
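As a reminder of the standard definitions: the F-score is the harmonic mean of precision $P$ and recall $R$, $F_1 = \frac{2PR}{P+R}$, and the G-mean is the geometric mean of the per-class recalls, $G = \left(\prod_{i=1}^{k} R_i\right)^{1/k}$, which for two classes reduces to $\sqrt{\text{sensitivity}\times\text{specificity}}$; both are commonly used for imbalanced classification problems.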

 

3. Results

Line 256-257: “Typically, the disease causes the changes of biophysical and biochemical parameters of plants, such as pigments, water content and canopy structure as well as leaf color changes due to pustules or lesions.” It is important for readers to know how changes in pigment, water content, etc. can affect the VIs used in this study, particularly the bi-temporal ones. Since part of the answer to this comment will be addressed in the discussion, I recommend merging the results section with the discussion to cover the impact of those factors on the employed VIs in one section.

Fig 3: Means and standard deviations of the selected normalized growth indices and environmental factors for both healthy and damaged (powdery mildew and aphid) plots on (a) May 15 and (b) May 22.

(a)    The number of samples from which these VIs are computed must be mentioned in the legend (e.g., Healthy (30), …).

(b) SR = N/R on both dates is 0.25 for the Aphid class. I wonder how the same SR can give different SIPI values. I am asking because we know that SIPI is equal to (SR-1)/(SR+1), and if SR = 0.25 then SIPI must be equal to 0.6 for Aphid, but it is not.

(c) Still, it is not clear to me why the authors use two sequential Landsat images.

(d) Why is the LST (land surface temperature) at most 1? Did the authors use a normalized version of it?

(e) Is there any physical reason why, for example, DVI is so high for the powdery mildew category and so low for the aphid category?

(f) I wonder whether the authors did fieldwork on both dates, May 15 and May 22, or just on one day in mid-May.


-Mapping Powdery Mildew and Aphid Damage

Tables 5-8: Confusion matrices and classification accuracies produced by different feature sets

(a)    Move them to an appendix and instead use radar charts just for OA and Kappa: two radar charts, one for OA and one for Kappa, plotting SMOTE-BPNN, BPNN, and SVM on the same graph. Each vertex corresponds to one of the models, and each polygon (graph) corresponds to one of the four input datasets (bi-temporal growth indices and environmental factors; bi-temporal growth indices; single-date growth indices and environmental factors; single-date growth indices). A sketch of such a chart is given after item (b) below.

(b)    Use appropriate abbreviations for the four input datasets instead of the full names.
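As referenced in item (a), the following is a minimal sketch of one such radar chart for OA, assuming purely hypothetical accuracy values (for illustration only; they are not the manuscript's results):

# Minimal sketch of the suggested radar (spider) chart for OA: one vertex per
# model, one polygon per input feature set. All values are hypothetical.
import numpy as np
import matplotlib.pyplot as plt

models = ["SMOTE-BPNN", "BPNN", "SVM"]
feature_sets = {                              # hypothetical OA values
    "Bi-temporal GI + EF": [0.82, 0.78, 0.75],
    "Bi-temporal GI":      [0.79, 0.76, 0.73],
    "Single-date GI + EF": [0.74, 0.71, 0.70],
    "Single-date GI":      [0.70, 0.68, 0.66],
}

angles = np.linspace(0, 2 * np.pi, len(models), endpoint=False).tolist()
angles += angles[:1]                          # repeat the first angle to close each polygon

fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
for name, values in feature_sets.items():
    vals = values + values[:1]
    ax.plot(angles, vals, label=name)
    ax.fill(angles, vals, alpha=0.1)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(models)
ax.legend(loc="lower right", fontsize=8)
plt.show()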

Table 9: Area statistics for the different wheat damages based on the three methods using bi-temporal growth indices and environmental factors.

I recommend reporting these numbers as percentages (%) but mentioning the total damaged area (the sum of these numbers) in the table title.

Fig 4: Damage maps of winter wheat produced by (a) SMOTE-BPNN, (b) BPNN, and (c) SVM using bi-temporal feature set integrating growth indices and environmental factors.

I think the colors of healthy wheat and wheat powdery mildew are too close to one another. Using dark green for the healthy class, red for powdery mildew, and black for aphid would probably be more distinguishable.

 

4. Discussion:

Line 362-371: “Winter wheat undergoes a series of physiological and biochemical changes (i.e., pigments…content in the leaf, damaged by aphid piercing the leaf and sucking out leaf juice, results in a higher reflectance in the visible and SWIR regions than the non-infested leaf [6]. The leaf tissue destructed  by aphid infestation leads to a lower reflectance than the non-infested leaf in the NIR region [64]… “blue shifting” phenomenon has also been found in the red edge position [67]”

Since the proposed model can effectively distinguish healthy from non-healthy crops, and the authors conducted a field survey, I highly recommend adding a figure in this section showing the spectral response of the three categories at the Landsat multispectral wavelengths. That would be really helpful for readers to explore the impact of aphid and powdery mildew on the spectral response.

 

Minor:

Line 66-67: “it is very difficult for practical application due to their high cost and low availability”. This statement requires a proper citation.

Line 55-56: “These two threats can result in serious loss of grain yield and quality”. Mention the amount of loss as a percentage.

Line 96-97: “Few studies combined the information from these two aspects into disease and pest monitoring and differentiation.” This statement requires a proper citation.

Line 103-104: “more attention needs to paid to.” Change it to “more attention needs to be paid to”

Line 115-117: “A single hidden BPNN layer can generally approximate any nonlinear function with arbitrary precision, which makes BPNN popular for predicting complex nonlinear systems [33].” Is this true for a single hidden BPNN layer with a linear kernel function?

Line 170: “a decision tree method was applied to the extraction of the winter wheat planting area based on the phenological information for the main crops in the study area.” Based on the phenological information, such as what?

Table 1. Since both the crop growth condition VIs and the habitat situation indices are obtained from satellite data, it would be more readable to add LST and Greenness to Table 1 and change the table title.

Fig 2: To cover the whole methodology, I recommend adding SVM and the original version of BPNN to the flowchart before the “Synthetic minority …” box.

Line 392-393: “The bi-temporal variations help to eliminate field anomalies other than the disease and pest infestations” Field anomalies such as what?

Line 432-433: “However, limited by the spatial-temporal resolution of Lnadsat-8 images” A typo: “Lnadsat-8” should be “Landsat-8”.

 


Author Response

Dear Reviewer,

We really appreciate your suggestions and comments. We agree with these suggestions and have significantly revised the manuscript accordingly.

Our point-by-point responses and/or changes to the reviewer’s suggestions/comments are listed as follows, and the changes are marked in red.


Author Response File: Author Response.pdf

Reviewer 2 Report

This manuscript presents a methodology for simultaneous discrimination between the appearance of powdery mildew and aphids on winter wheat at the study site in China. It is based on two closely acquired Landsat 8 images, which are used to calculate various growth and environmental parameters. It is suggested that the Synthetic Minority Oversampling Technique (SMOTE) algorithm helps when dealing with the imbalanced calibration and validation dataset. All factors concerning the objectives of the study, the analysis and the derived results are discussed and connected with the relevant scientific findings. The authors describe the connection between the biophysical parameters of winter wheat and spectral reflectance to explain the use of Landsat-8 imagery data to carry out this experiment. They also discuss the advantages of the applied analysis methods and explain possible reasons for the more accurate results.

The manuscript is clearly written in good English, offers elements of innovation and should be of interest to the readers of Remote Sensing after some minor corrections are performed:


The introduction provides sufficient background knowledge and related work on the studied subject, including information about the pathogens, remote sensing studies of interest, and relevant analysis methods, mostly neural networks (Back-Propagation Neural Networks, BPNN). I would expand the discussion of multispectral satellite imagery (e.g., Landsat 8) for disease identification, e.g.:

- Mirik et al., 2013. Remote monitoring of wheat streak mosaic progression using sub-pixel classification of landsat 5 TM imagery for site specific disease management in winter wheat. http://dx.doi.org/10.4236/ars.2013.21003.

- Navrozidis et al., 2018. Identification of purple spot disease on asparagus crops across spatial and spectral scales. doi: 10.1016/j.compag.2018.03.035.


Section 3.2 provides the results from SMOTE analysis. How can you check the quality of SMOTE results? Please provide an evaluation if possible.
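For example, a minimal sanity check on SMOTE output (sketched here with placeholder arrays and the imbalanced-learn library, not the authors' data) could compare class counts and per-class feature statistics before and after resampling:

# Minimal sketch of a sanity check on SMOTE output: class counts should become
# balanced, while per-class feature statistics should stay close to the originals.
from collections import Counter
import numpy as np
from imblearn.over_sampling import SMOTE

rng = np.random.default_rng(0)
X = rng.normal(size=(91, 10))                  # placeholder features
y = np.array([0] * 60 + [1] * 20 + [2] * 11)   # placeholder imbalanced labels

X_bal, y_bal = SMOTE(random_state=0).fit_resample(X, y)

print("class counts before:", Counter(y))
print("class counts after: ", Counter(y_bal))
for c in np.unique(y):
    shift = np.abs(X[y == c].mean(axis=0) - X_bal[y_bal == c].mean(axis=0)).max()
    print(f"class {c}: max absolute shift in feature means = {shift:.3f}")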


L145: I'm not certain that five 1-m² plots are adequate to describe a 900-m² pixel. Please justify this decision or list it among the potential sources of error.


L142: What were the dates of the field survey of the 137 field plots? What is the influence of the difference between the field survey dates and the satellite acquisition dates? This is particularly important for very dynamic diseases.


L294: Please define what "acceptable accuracy" means.


L392-393: An important issue is discussed here: potential spectral fluctuations that occur at various dates of image acquisition along a time series (due to phenology, cultivation or other environmental conditions irrelevant to the disease) could be eliminated by using images of multiple acquisition dates. Please explain and discuss further.


A few typing errors to correct: L104 "needs to be paid", L304 "higher than", L230 and L232 correct "SMOTE". 

Author Response

Dear Reviewer,

We really appreciate your suggestions and comments. We agree with these suggestions and have significantly revised the manuscript accordingly.

Our point-by-point responses and/or changes to the reviewer’s suggestions/comments are listed as follows, and the changes are marked in red.


Author Response File: Author Response.pdf

Round 2

Reviewer 1 Report

The manuscript has been significantly improved. My comments have been clearly addressed and the figures have been revised. I have just one minor issue, related to Table 4: change "oa", "pa", "ea", and "ua" to "OA", "PA", "EA", and "UA" to be consistent with the manuscript. In addition, check the equation given for "ea" in the table. Also, make sure that the appendix section is included in the final version of the submission; I am pointing this out because the appendix was in the authors' response but not in the submitted manuscript.

Author Response

Dear Reviewer, 

We really appreciate your suggestions and comments. We agree with these suggestions and have significantly revised the manuscript accordingly. Our point-by-point responses and/or changes to the reviewer’s suggestions/comments are listed as follows, and the changes are marked in yellow highlight.


Author Response File: Author Response.pdf
