Article
Peer-Review Record

Multi-Plant Disease Identification Based on Lightweight ResNet18 Model

Agronomy 2023, 13(11), 2702; https://doi.org/10.3390/agronomy13112702
by Li Ma 1,2, Yuanhui Hu 1, Yao Meng 1, Zhiyi Li 3 and Guifen Chen 4,*
Reviewer 1: Anonymous
Reviewer 2:
Reviewer 3: Anonymous
Submission received: 17 September 2023 / Revised: 22 October 2023 / Accepted: 25 October 2023 / Published: 27 October 2023
(This article belongs to the Special Issue Computer Vision and Deep Learning Technology in Agriculture)

Round 1

Reviewer 1 Report

Comments and Suggestions for Authors

The authors present a lightweight deep learning model for multi-plant disease classification based on leaf image data. This is achieved by modifying the ResNet18 architecture in several ways, employing a few techniques drawn from the literature. The paper is well organized, and the authors have done a good job of presenting a variety of empirical results to justify some of their design choices. However, there are some issues that need to be addressed to improve the quality of the paper and to justify its publication.

The prior work section mentions very few studies, even though there are quite a few well-cited works on the same problem in the literature. Moreover, plant-disease detection approaches that are optimized for low computational cost have not been mentioned either. Since the paper is focused on computational efficiency, it is only appropriate to mention at least some of those works.

It isn't clear whether the authors have a validation dataset for hyper-parameter tuning; it appears there are only training and test sets. It is important to have a validation dataset to tune hyper-parameters and also to ensure that there isn't any overfitting. The training plots (Figure 9) clearly show that training for a fixed number of epochs has led to overfitting, which should be a cause for concern regarding the validity of the comparisons.
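As an editorial illustration of the practice suggested above, here is a minimal sketch of holding out a validation split and stopping on validation loss rather than training for a fixed epoch budget. The model, data, and hyper-parameters are placeholders, not the authors' actual setup.

```python
# Minimal sketch (not the authors' code): hold out a validation split and stop on
# validation loss instead of training for a fixed number of epochs. Model, data,
# and hyper-parameters below are dummies chosen only to make the snippet runnable.
import torch
from torch import nn
from torch.utils.data import TensorDataset, DataLoader, random_split

data = TensorDataset(torch.randn(1000, 16), torch.randint(0, 20, (1000,)))
train_set, val_set = random_split(data, [800, 200])        # separate validation split
train_loader = DataLoader(train_set, batch_size=64, shuffle=True)
val_loader = DataLoader(val_set, batch_size=64)

model = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 20))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

best_val, patience, bad_epochs = float("inf"), 5, 0
for epoch in range(100):                                    # upper bound, not a fixed budget
    model.train()
    for x, y in train_loader:
        optimizer.zero_grad()
        loss_fn(model(x), y).backward()
        optimizer.step()
    model.eval()
    with torch.no_grad():
        val_loss = sum(loss_fn(model(x), y).item() for x, y in val_loader) / len(val_loader)
    if val_loss < best_val - 1e-4:
        best_val, bad_epochs = val_loss, 0
        torch.save(model.state_dict(), "best.pt")           # keep the best checkpoint
    elif (bad_epochs := bad_epochs + 1) >= patience:         # stop before overfitting sets in
        break
```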

There is no justification provided for choosing a subset of about 18,000 images from the roughly 54,000 available in the Plant Village dataset. What would be the effect on the results if a different subset of 18,000 images were chosen? When the comparison results are so close to each other, these details are very important. Also, choosing a subset and then mixing it with their own images makes comparison with other methods in the literature difficult. Ideally, the authors should provide results for the Plant Village dataset alone (using all the images). This would make the comparisons much easier to analyze.
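To make the subset-sensitivity question concrete, the following hypothetical sketch draws a class-stratified subset with a fixed seed; repeating the experiment over several seeds and reporting the mean and standard deviation of accuracy would show how much the results depend on the particular 18,000-image draw. The directory layout is assumed, not taken from the paper.

```python
# Hypothetical sketch: seeded, class-stratified subsampling of an image-folder dataset
# laid out as "plant_village/<class_name>/<image>.jpg" (an assumed layout). Re-running
# with different seeds quantifies sensitivity to the chosen subset.
import random
from collections import defaultdict
from pathlib import Path

def sample_subset(root: str, per_class: int, seed: int) -> list[Path]:
    by_class: dict[str, list[Path]] = defaultdict(list)
    for img in Path(root).glob("*/*.jpg"):
        by_class[img.parent.name].append(img)
    rng = random.Random(seed)                        # fixed seed => reproducible subset
    subset: list[Path] = []
    for _, images in sorted(by_class.items()):
        rng.shuffle(images)
        subset.extend(images[:per_class])
    return subset

for seed in (0, 1, 2):                               # train/evaluate once per subset
    subset = sample_subset("plant_village", per_class=900, seed=seed)
```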

There are also numerous assertions throughout the paper about why some of the design choices might make sense. For example, it is stated that Model_Lite performs better on apple leaves because of the SE and residual double-layer connections, but no clear and verifiable evidence is provided for this assertion. Such assertions could be made and validated if at least some empirical experimental data were provided to support them.

Another important factor to consider is the size of the dataset. The authors used approximately one third of the full dataset. Larger models are likely to do better on larger datasets; hence, the comparisons are skewed in favor of lighter models because of the dataset size. This important issue has not been addressed by the authors.

The exponents in Table 7 need to be fixed.

Regardless of these shortcomings, the authors present a very light model (with just 32,000 parameters) that can compete with some of the heavier models. This indicates that the results might be worth consideration by the community.

Author Response

Please refer to the attached document

Author Response File: Author Response.pdf

Reviewer 2 Report

Comments and Suggestions for Authors

In this study, the authors proposed an improved residual network-based multi-plant disease recognition method that combines the characteristics of plant diseases. Their experimental dataset comprises 20 types of plant diseases: 13 selected from the publicly available Plant Village dataset and seven classes of self-constructed apple leaf images with complex backgrounds containing disease symptoms. They reported that the experimental results demonstrate that their improved network model, Model_Lite, contains only about 1/344th of the parameters and requires 1/35th of the computational effort of the original ResNet18, with a marginal decrease in average accuracy of only 0.34%. As a result, they indicated that Model_Lite holds significant potential for widespread application in plant disease recognition and can serve as a valuable reference for future research on lightweight network model design. I have listed my suggestions below.

1.      Were any faults or warnings indicated by the system? What are the limitations of this study? I think it would be better if an explanation of these issues were added to the article.


2.      I think the work is very important. Thank you for contributing to the scientific literature on the subject.

Comments on the Quality of English Language

Although there are minor grammatical errors in the writing, they can be corrected with a final proofread.

Author Response

Please refer to the attached document

Author Response File: Author Response.pdf

Reviewer 3 Report

Comments and Suggestions for Authors


This research study proposes a lightweight plant leaf disease recognition network model based on ResNet18. The topic of this paper is a very active research area.

My main concern is the paper's originality; the authors need to explain exactly what the novelty of this paper is. It seems that the authors use the ResNet18 architecture and make some modifications to its layers.

Novelty is missing in this research paper.

The experiments look good, but including a real IoT testbed system would make the study more compelling. Also, for model assessment, is ResNet18 the best choice? What about tuning the model, or trying another neural network architecture?

Achieving a lightweight model comes at the cost of some recognition accuracy. What is the goal of the lightweight model? It seems to me that the authors plan to implement the model on an edge device such as a Raspberry Pi (TinyML)?
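As a rough illustration of what a lightweight model buys on such an edge target, the sketch below counts parameters and estimates the float32 footprint of a stand-in network, then exports it with TorchScript; the architecture here is a placeholder, not the authors' Model_Lite.

```python
# Hypothetical sketch: sizing and exporting a small CNN for an edge device such as a
# Raspberry Pi. The architecture is a placeholder, not the authors' Model_Lite.
import torch
from torch import nn

model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, 20),                                # 20 disease classes
)
n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params} parameters, ~{n_params * 4 / 1024:.1f} KiB at float32")

example = torch.randn(1, 3, 224, 224)
traced = torch.jit.trace(model, example)              # TorchScript export for deployment
traced.save("model_sketch.pt")
```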

The authors mention (manuscript line 582): “The study's primary limitation is that identifying complex background diseases and species still requires improvement.”

However, the authors also mention: “We introduced the SE attention module to address the challenges in recognizing differences with complex backgrounds.” Can you defend this argument, please? Isn't this a contradiction?
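For readers unfamiliar with the module quoted above, the following is a generic sketch of a standard Squeeze-and-Excitation (SE) block (Hu et al., 2018); the authors' exact variant, reduction ratio, and placement inside the residual blocks may differ.

```python
# Generic Squeeze-and-Excitation (SE) block sketch; the reduction ratio and placement
# are assumptions, not details taken from the paper under review.
import torch
from torch import nn

class SEBlock(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)           # squeeze: global average per channel
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),                             # excitation: channel weights in (0, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                   # reweight feature-map channels

print(SEBlock(64)(torch.randn(2, 64, 56, 56)).shape)   # torch.Size([2, 64, 56, 56])
```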

Author Response

Please refer to the attached document

Author Response File: Author Response.pdf
