Article
Peer-Review Record

Evaluating Data Augmentation Effects on the Recognition of Sugarcane Leaf Spot

Agriculture 2022, 12(12), 1997; https://doi.org/10.3390/agriculture12121997
by Yiqi Huang 1, Ruqi Li 1,2, Xiaotong Wei 1,3, Zhen Wang 1,2, Tianbei Ge 4 and Xi Qiao 1,2,*
Submission received: 17 October 2022 / Revised: 18 November 2022 / Accepted: 21 November 2022 / Published: 24 November 2022
(This article belongs to the Special Issue Model-Assisted and Computational Plant Phenotyping)

Round 1

Reviewer 1 Report

This paper investigates how to improve the recognition accuracy for sugarcane leaf disease when the dataset comes from a complex environment and lacks samples. The authors use unsupervised generative adversarial networks (DCGANs) and supervised data augmentation. Although the paper is well written and presented, the idea of using GANs is not new or novel; it has been presented previously, as the authors state in the introduction, e.g., for grape leaf disease [4].

The last paragraph of Section 1, "the main contribution of this paper...": points (1) and (3) have the same meaning.

In Section 2.4, the training technique of the deep convolutional GAN (DCGAN) for G (the generator) and D (the discriminator) should be clearly presented.

 

Author Response

Point 1: This paper investigates how to improve the recognition accuracy for sugarcane leaf disease when the dataset comes from a complex environment and lacks samples. The authors use unsupervised generative adversarial networks (DCGANs) and supervised data augmentation. Although the paper is well written and presented, the idea of using GANs is not new or novel; it has been presented previously, as the authors state in the introduction, e.g., for grape leaf disease [4].

 

Response 1: Firstly, in reference [4] as well as references [21-24], the datasets used in the experiments were all captured against the simple background of a laboratory environment, and those methods are not applicable to crop leaves in their native state (complex backgrounds) as used in this paper. Secondly, no studies on sugarcane have used unsupervised data augmentation. Finally, although unsupervised data augmentation has been used in some fields, most studies still apply supervised data augmentation during image preprocessing; we hope this paper highlights the contrast between supervised and unsupervised data augmentation and helps make unsupervised data augmentation a mainstream image preprocessing method.
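The contrast drawn here can be made concrete: supervised augmentation applies fixed, label-preserving transforms to each image, whereas a DCGAN learns to synthesize new samples. A minimal sketch of the supervised side (the transform choices are illustrative assumptions, not the paper's exact pipeline):

```python
import numpy as np

def augment_supervised(image, rng):
    """Return label-preserving variants of one image (H x W x C, uint8)."""
    out = [image]
    out.append(np.fliplr(image))   # horizontal flip
    out.append(np.flipud(image))   # vertical flip
    for k in (1, 2, 3):            # 90/180/270 degree rotations
        out.append(np.rot90(image, k))
    # random brightness jitter, clipped back to the valid pixel range
    factor = rng.uniform(0.8, 1.2)
    out.append(np.clip(image.astype(np.float32) * factor, 0, 255).astype(np.uint8))
    return out

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
augmented = augment_supervised(img, rng)
print(len(augmented))  # one source image yields 7 training samples
```

Unsupervised augmentation with a DCGAN, by contrast, trains a generator on the class and samples it, which is why it can produce lesion patterns not present in any single source image.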

 

Point 2: The last paragraph of Section 1, "the main contribution of this paper...": points (1) and (3) have the same meaning.

Response 2: I have revised lines 136-138 of the article based on your suggestions: "The main contributions of this paper are summarized as a proposed method based on DeepLabV3+, DCGAN and MobileNetV3-large for the accurate identification of sugarcane leaf spot classes in real environments."

 

Point 3: In Section 2.4, the training technique of the deep convolutional GAN (DCGAN) for G (the generator) and D (the discriminator) should be clearly presented.

 

Response 3: Thanks a lot for your proposal. I have revised the article according to your suggestion and added Figure 5 to the description of G and D in Section 2.4.
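For context, the alternating training of G and D requested here follows the standard GAN objective. The probabilities below are toy stand-ins for D's outputs, not values from the paper:

```python
import numpy as np

def d_loss(d_real, d_fake):
    """Discriminator loss: -(log D(x) + log(1 - D(G(z)))), averaged."""
    return -(np.log(d_real) + np.log(1.0 - d_fake)).mean()

def g_loss(d_fake):
    """Non-saturating generator loss: -log D(G(z)), averaged."""
    return -np.log(d_fake).mean()

# Toy outputs of D (probability that the input is a real leaf image)
d_real = np.array([0.9, 0.8])  # D applied to real images
d_fake = np.array([0.2, 0.1])  # D applied to generated images

# Step 1: with G frozen, update D to lower d_loss
loss_d = d_loss(d_real, d_fake)
# Step 2: with D frozen, update G to lower g_loss (push D(G(z)) toward 1)
loss_g = g_loss(d_fake)
print(loss_d, loss_g)
```

The two steps repeat on every batch; early in training loss_g is large because D easily rejects generated images, as the toy numbers show.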

Author Response File: Author Response.pdf

Reviewer 2 Report

- The introduction must be improved and should focus on relevant literature only.

- In lines 35-37, the authors highlight the use of CNNs, yet the following paragraphs discuss studies based on classical machine learning techniques (lines 42-44, 47-50). Please discuss the advantages of using CNNs and discuss relevant studies only.

- Line 53 may contain a typo (what does "classification of disease TensorFlow technology" refer to here?).

- Line 116: the acronym "DCGAN" must be defined at first mention.

- Section 2.2: please state the size of the original images in the different datasets.

- What is the input size of DeepLabV3 and MobileNetV3-large?

- Please briefly discuss the content of Table 1. I could not understand the concept of MobileNetV3-large by reading Section 2.5; please enrich its content.

- Subsections 2.6.2-2.6.8 may be combined into one paragraph, with the equations then listed.

- Line 224: please fix the error in the text "Specificity measures the rate of normal liver tissue."

- Section 3.1: the authors used Labelme software to generate mask maps of the 790 original images and trained DeepLabV3 to segment sugarcane in the same 790 images. Before applying image recognition, please discuss the reason for using a semantic segmentation model. Can an image recognition model be used directly to recognize sugarcane leaf spots?

- Line 261: the number of images per class is confusing. Sugarcane was segmented in 790 images, and each sugarcane leaf image was cut into partial square images, yet the sum of images containing red rot, ring spot, rust, and healthy leaves is 529. The numbers in lines 270-271 are also confusing. A table or a graph may help.

- Why did the authors use MobileNetV3-large in particular? I suggest comparing the results with 3-4 different deep learning architectures.

- Please discuss the total training time and the complexity of the model(s).

- Discuss the limitations of the adopted approach.

Author Response

Since the reply to you contains a chart, please check the attachment.

Author Response File: Author Response.pdf

Reviewer 3 Report

The authors have presented their work with the title "Research on the effect of data augmentation on the recognition of sugarcane leaf spot".

The research contributions of this paper are mainly reflected in the following points:

- This paper investigates how to improve the recognition accuracy for sugarcane leaf disease when the dataset comes from a complex environment and lacks samples.

- The authors have carried out an unsupervised data augmentation method for complex backgrounds to obtain better accuracy, and a MobileNetV3-large-based model for the recognition of sugarcane leaf spot is designed.

- DCGAN and DeepLabV3+ are used to augment the training dataset, together with the MobileNetV3-large model, to improve the recognition of sugarcane leaf spot.

- Figure 3 is not very clear; I request the authors to update it.

- Not all the references listed at the end of the paper appear to be cited in the running text; I request the authors to check and cite all references properly. If they are all cited, then no change is needed, but please check once.

- In Equations 3 and 4, explain the meaning of A∩B and A∪B and clearly explain why they are used.

Comments for author File: Comments.pdf

Author Response

Thanks for spending your valuable time to review my article.

 

I have revised the article and responded to your comments and suggestions.

 

Point 1: Figure 3 is not very clear; I request the authors to update it.

 

Response 1: I have replaced Figure 3 with a clearer version; please check the attachment.

 

Point 2: Not all the references listed at the end of the paper appear to be cited in the running text; I request the authors to check and cite all references properly. If they are all cited, then no change is needed, but please check once.

 

Response 2: Thanks for the suggestion; I have checked the article again.

 

Point 3: In Equations 3 and 4, explain the meaning of A∩B and A∪B and clearly explain why they are used.

 

Response 3: Thanks a lot for the reminder. I should not have added intermediate steps in the middle of the equations; that way of writing is easy to misunderstand. I have modified Equations 3 and 4 according to your suggestions; please check the attachment.
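For reference, the set expressions discussed here most often appear as the intersection-over-union ratio |A∩B| / |A∪B| used to score segmentation masks. A minimal sketch on binary masks (an illustration of the general metric, not a reproduction of the paper's Equations 3 and 4):

```python
import numpy as np

def iou(a, b):
    """Intersection over Union of two binary masks: |A ∩ B| / |A ∪ B|."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 1.0  # both masks empty: treat as a match

# Two 4x4 masks whose 3x3 foreground squares overlap on a 2x2 region
a = np.zeros((4, 4), dtype=int); a[0:3, 0:3] = 1  # |A| = 9
b = np.zeros((4, 4), dtype=int); b[1:4, 1:4] = 1  # |B| = 9
print(iou(a, b))  # intersection is 4, union is 14
```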

 

Wish you a pleasant life and good health!

Author Response File: Author Response.pdf

Round 2

Reviewer 2 Report

- The authors did not address most of my comments adequately.

- The introduction needs a deep revision. The authors should provide some details and context, not only list existing techniques from the literature.

- In lines 105-126, the authors listed studies that utilized data augmentation techniques in the medical field and did not focus on plant diseases. Many studies related to plant diseases augmented their data using different variations of GANs, and the authors did not discuss them. Some of the studies:

1. Douarre, C.; Crispim-Junior, C.F.; Gelibert, A.; Tougne, L.; Rousseau, D. Novel data augmentation strategies to boost supervised segmentation of plant disease. Comput. Electron. Agric. 2019, 165, 104967. doi:10.1016/j.compag.2019.104967.

2. Abbas, A.; Jain, S.; Gour, M.; Vankudothu, S. Tomato plant disease detection using transfer learning with C-GAN synthetic images. Comput. Electron. Agric. 2021, 187, 106279. doi:10.1016/j.compag.2021.106279.

3. Zhang, J.; Rao, Y.; Man, C.; Jiang, Z.; Li, S. Identification of cucumber leaf diseases using deep learning and small sample size for agricultural Internet of Things. Int. J. Distrib. Sens. Networks 2021, 17. doi:10.1177/15501477211007407.

4. Zhang, M.; Liu, S.; Yang, F.; Liu, J. Classification of Canker on Small Datasets Using Improved Deep Convolutional Generative Adversarial Networks. IEEE Access 2019, 7, 49680-49690. doi:10.1109/ACCESS.2019.2900327.

- Additional studies can be found in the following review paper:

Lu, Y.; Chen, D.; Olaniyi, E.; Huang, Y. Generative adversarial networks (GANs) for image augmentation in agriculture: A systematic review. Comput. Electron. Agric. 2022, 200, 107208. doi:10.1016/j.compag.2022.107208.

- After reading the above review paper (Lu et al. 2022), the research gap is not quite clear.

- Please revise Section 2.6. The definition of "precision" is not "Precision predicts the liver tumor tissue".

- I think that if the authors augmented the raw data through DCGAN and fed the images directly into MobileNetv2, they might get a similar range of accuracies. For the benefit of the readers, please discuss the reason for choosing DeepLabV3 in the methodology.

- The authors did not show comparisons of different deep learning architectures in the manuscript.

- In the discussion, I would suggest visualizing some samples that show false positive and false negative predictions.

- The limitations should be included in the discussion section, not in a new section.

- A native speaker must thoroughly check the paper.

Author Response

Point 1: The authors did not address most of my comments adequately.

 

Response 1: I am sorry that in the first round of replies I wrote "Since the reply to you contains a chart, please check the attachment." without specifying which attachment. My reply was in the attachment "author-coverletter-23632939.v1.docx", which contains the results of the experiment you asked me to do with additional image recognition networks.

But I must admit I did not do a sufficient job, especially in the introduction, and I have revised the introduction of this article again in this round of revisions.

 

Point 2: The introduction needs a deep revision. The authors should provide some details and context, not only list existing techniques from the literature.

 

Response 2: I searched for background material to revise the introduction, and part of the lengthy listing of techniques has been condensed.

 

Point 3: In lines 105-126, the authors listed studies that utilized data augmentation techniques in the medical field and did not focus on plant diseases. Many studies related to plant diseases augmented their data using different variations of GANs, and the authors did not discuss them. Some of the studies.

 

Response 3: Thank you very much for the references you gave me. I read them carefully and used them to replace the references from non-agricultural fields in the text (lines 137-151).

 

Point 4: Please revise Section 2.6. The definition of “precision” is not “Precision predicts the liver tumor tissue”.

 

Response 4: I have modified this passage (lines 295-296).
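For clarity, the corrected definitions reduce to counts from the confusion matrix. A minimal sketch with hypothetical counts (not the paper's results):

```python
def precision(tp, fp):
    """Precision: fraction of positive predictions that are correct."""
    return tp / (tp + fp)

def specificity(tn, fp):
    """Specificity: fraction of actual negatives correctly rejected."""
    return tn / (tn + fp)

def recall(tp, fn):
    """Recall (sensitivity): fraction of actual positives recovered."""
    return tp / (tp + fn)

# Hypothetical counts for one disease class
tp, fp, tn, fn = 80, 10, 95, 15
print(precision(tp, fp))    # 80 / 90
print(specificity(tn, fp))  # 95 / 105
print(recall(tp, fn))       # 80 / 95
```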

 

Point 5: I think that if the authors augmented the raw data through DCGAN and fed the images directly into MobileNetv2, they might get a similar range of accuracies. For the benefit of the readers, please discuss the reason for choosing DeepLabV3 in the methodology.

 

Response 5: I added the experiment following your method. It was found that data augmentation using DCGAN on complex backgrounds performed poorly (the images generated by DCGAN are shown in Table 6), because the complex backgrounds led to large-scale misidentification of rust and ring spot, two diseases that are difficult to distinguish. I analyzed this issue in the discussion section (lines 471-477).

 

Point 6: The authors did not show comparisons of different deep learning architectures in the manuscript.

 

Response 6: I added three image recognition networks and found the most suitable one for subsequent experiments by comparing both accuracy and training time (Figure 11 and Table 8).

 

Point 7: In the discussion, I would suggest visualizing some samples that show false positive and false negative predictions.

 

Response 7: I followed your suggestion to find the misclassified images and analyzed the reasons for misclassification in the discussion after tracing them back to the original images (lines 486-512, Figures 12 and 13).

 

Point 8: The limitations should be included in the discussion section, not in a new section.

 

Response 8: I have revised the article as you suggested (lines 554-564).

Author Response File: Author Response.pdf
