 
 
Article
Peer-Review Record

Comparing Inception V3, VGG 16, VGG 19, CNN, and ResNet 50: A Case Study on Early Detection of a Rice Disease

Agronomy 2023, 13(6), 1633; https://doi.org/10.3390/agronomy13061633
by Syed Rehan Shah 1, Salman Qadri 1, Hadia Bibi 2, Syed Muhammad Waqas Shah 3, Muhammad Imran Sharif 4,* and Francesco Marinello 5
Reviewer 1:
Reviewer 2:
Submission received: 14 May 2023 / Revised: 15 June 2023 / Accepted: 16 June 2023 / Published: 18 June 2023
(This article belongs to the Section Precision and Digital Agriculture)

Round 1

Reviewer 1 Report

See the attachment.

Comments for author File: Comments.pdf

Extensive editing of English language required

Author Response

Please find the attachment. 

Author Response File: Author Response.pdf

Reviewer 2 Report

Review on “Comparing Inception V3, VGG 16, VGG 19, CNN and Resnet 50: a case study on early detection of a rice disease”

This study compares different CNN architectures for classifying rice blast disease. The dataset was collected from online sources, then preprocessed, augmented, and enhanced for training. Pretrained Inception V3, VGG 16, VGG 19, CNN, and ResNet 50 models were used, and the performance of each model was compared. The base of each model was frozen except for the fully connected layers, which were trained on the new dataset. The modified ResNet 50 achieved the highest accuracy, 99.16%.
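To make the data-handling point above concrete: augmentation should be applied only after the train/test split, so that augmented copies of a test image never leak into training. A minimal Python sketch under that assumption (the horizontal-flip augmenter and the list-of-rows image representation are hypothetical illustrations, not the authors' actual pipeline):

```python
import random

def split_then_augment(samples, test_fraction=0.2, seed=0):
    """Split first, then augment only the training portion,
    so no augmented copy of a test image leaks into training."""
    rng = random.Random(seed)
    shuffled = samples[:]
    rng.shuffle(shuffled)
    n_test = int(len(shuffled) * test_fraction)
    test, train = shuffled[:n_test], shuffled[n_test:]
    # Hypothetical augmentation: a horizontal flip of each training image,
    # represented here as reversing each row of a 2-D pixel list.
    augmented = [[row[::-1] for row in img] for img in train]
    return train + augmented, test

train, test = split_then_augment([[[1, 2]], [[3, 4]], [[5, 6]], [[7, 8]], [[9, 0]]])
print(len(train), len(test))  # 8 1
```

The essential design choice is simply the ordering: the split happens before any augmented sample is generated.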

 

  1. Title: Resnet is written as one of the model names in the title. However, the model name is conventionally written as ResNet. I recommend correcting it.
  2. Section 2 (Related Work) consists of a single, very long paragraph; I suggest splitting it into a few paragraphs.
  3. Table 1 describes the same disease infecting different parts of the plant. Is that correct? If so, you should emphasize that only the leaf was used for disease detection. Additionally, it would be better to include pictures showing the symptoms listed in Table 1.
  4. You mention that the Gradio web server was used for deploying the model and detecting disease. Please provide some background on this tool: why it matters for your research, how it works, how it was made, etc.
  5. Table 2: In the Training split column, are you splitting the data into training, testing, and validation sets? I could not find any explanation of that column.
  6. In line 162, you mention that the dataset was downloaded from Kaggle. However, an adequate description is needed: where does the dataset originate, how were the photos captured, etc.?
  7. Line 184: Should the word “neutrons” be “neurons”? Alternatively, I recommend “nodes”, since the units are not actual biological neurons.
  8. Line 197: You mention “convo 1x1”, which appears to refer to convolution. Please state explicitly what this abbreviation means.
  9. Line 198: A period is missing.
  10. Line 202: As in line 184, please check the word “neutrons.”
  11. Lines 207-214: This paragraph explains some equations, but its explanation does not match Equation (ii).
  12. Figure 4: According to this figure, augmentation was applied to create the testing dataset. This is not correct: augmentation is meant to enlarge the training set, not the testing set.
  13. Line 234: You mention that different machine vision and deep learning models with cross-validation techniques were used to eliminate the complexity of the training and testing ratio. Please elaborate on what you mean by “the complexity of the training and testing ratio”, as it is unclear.
  14. Figure 4: According to this figure, image pre-processing and enhancement are applied to create the testing dataset, yet the pre-processing and enhancement steps are never adequately explained.
  15. Line 240: There are many releases of Python 3, and behavior differs between versions, especially across libraries. Please state the exact Python version used.
  16. Line 242: What is “Tesla T4”? Please specify (presumably an NVIDIA GPU model).
  17. Line 243: Should “Gb” be “GB”?
  18. In this research you work with several CNNs; TensorFlow and PyTorch are currently the two most popular deep-learning frameworks. Are you using either of them? Please mention it.
  19. Table 4: As in line 184, please check the word “Neutrons.”
  20. The proposed ResNet differs between Figure 3 and Figure 4, so it needs to be clarified which ResNet you are actually using.
  21. The text in Figure 6 is too small and looks stretched. Furthermore, you should label the two images (a) and (b).
  22. Which model's results does the confusion matrix in Figure 6 show? And why are 0.83 of the healthy leaf samples classified as diseased?
  23. You use the phrase “early detection” in your title, yet the paper contains no information or explanation about early detection. How do early symptoms differ from late symptoms? How early can the model detect the disease?
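On the cross-validation question raised in comment 13: k-fold cross-validation is the usual way to remove dependence on a single train/test ratio, since every sample is tested exactly once. A minimal stdlib sketch (the fold count and sample count are illustrative, not taken from the manuscript):

```python
def k_fold_indices(n_samples, k):
    """Yield (train_idx, test_idx) pairs; each sample appears in exactly
    one test fold, so results do not hinge on one train/test ratio."""
    indices = list(range(n_samples))
    # Distribute any remainder across the first n_samples % k folds.
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0)
                  for i in range(k)]
    start = 0
    for size in fold_sizes:
        test_idx = indices[start:start + size]
        train_idx = indices[:start] + indices[start + size:]
        yield train_idx, test_idx
        start += size

folds = list(k_fold_indices(10, 5))
print(len(folds), folds[0][1])  # 5 [0, 1]
```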
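The 0.83 entry questioned in comment 22 is consistent with a row-normalized confusion matrix, where each row is divided by the number of true samples of that class. A short sketch of that normalization (the two-class labels and counts below are hypothetical, chosen only to reproduce a 0.83-style cell, not the paper's data):

```python
from collections import Counter

def row_normalized_confusion(y_true, y_pred, labels):
    """Count (true, predicted) pairs, then divide each row by its
    true-class total: cell (i, j) is the fraction of class-i samples
    that were predicted as class j."""
    counts = Counter(zip(y_true, y_pred))
    matrix = []
    for t in labels:
        row_total = sum(counts[(t, p)] for p in labels)
        matrix.append([counts[(t, p)] / row_total if row_total else 0.0
                       for p in labels])
    return matrix

# Hypothetical example: 5 of 6 healthy leaves misread as diseased
# yields an off-diagonal entry of 5/6 ≈ 0.83.
y_true = ["healthy"] * 6 + ["blast"] * 4
y_pred = ["blast"] * 5 + ["healthy"] + ["blast"] * 4
m = row_normalized_confusion(y_true, y_pred, ["healthy", "blast"])
print([round(v, 2) for v in m[0]])  # [0.17, 0.83]
```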


Author Response

Please find the attachment. 

Author Response File: Author Response.pdf

Round 2

Reviewer 1 Report

The authors have addressed most of the comments.

Minor editing of English language required

Author Response

Thanks for the suggestion; we have proofread the manuscript and improved the English. 

Reviewer 2 Report

The second review on “Comparing Inception V3, VGG 16, VGG 19, CNN and Resnet 50: a case study on early detection of a rice disease.”

  1. Line 234: In the sentence “Where i is the CNN . . .”, what does i mean here? There is no i in Equation (i). Additionally, X and H_l are not explained in the text.
  2. Line 253: Which equation refers to the double accumulation? There is no double accumulation in Equation (ii). Additionally, please revise “i”.
  3. Equation (iii): Each variable needs to be explained in the text.
  4. Equation (iv): It is better to mention the abbreviations used in this equation in the text to clarify their meaning.

Minor editing of English language required

Author Response

Please find the attachment. 

Author Response File: Author Response.pdf
