Improving the Accuracy of Agricultural Pest Identification: Application of AEC-YOLOv8n to Large-Scale Pest Datasets
Round 1
Reviewer 1 Report
Comments and Suggestions for Authors
The presented work was performed on a current topic. It is associated with ensuring food security and maintaining the quality and volume of the harvest.
The authors presented the indicators of existing models and determined the direction for improving the pest detection process. The structure of the developed model, the approaches used to train it and improve its quality indicators are described quite fully. Tunable hyperparameters are specified. The training was carried out on a fairly large dataset, which allows us to judge the high quality of the model.
There are a number of questions and comments regarding the work:
1. An assessment of damage from poor-quality and untimely detection and classification of pests has not been carried out.
2. The advantages and disadvantages of existing and developing methods for detecting and classifying pests in comparison with each other and with regulated ones are not given.
3. Differences in costs when combating different types of pests and, therefore, the relevance of this study are not given.
4. The required quality indicators of the developed model are not justified.
5. The model has not been evaluated on real images in agricultural farms. All research is carried out only on a prepared dataset.
6. Conclusions do not contain numerical data and do not reveal to what extent the research goal has been achieved.
Author Response
For research article
Response to Reviewer 1 Comments
1. Summary
Thank you for taking time out of your busy schedule to review my paper. Our responses to your questions are below. Thank you again!
2. Questions for General Evaluation (Reviewer's Evaluation)

Does the introduction provide sufficient background and include all relevant references? Must be improved.
Are all the cited references relevant to the research? Yes.
Is the research design appropriate? Can be improved.
Are the methods adequately described? Can be improved.
Are the results clearly presented? Must be improved.
Are the conclusions supported by the results? Must be improved.
3. Point-by-point response to Comments and Suggestions for Authors
Comments 1: [An assessment of damage from poor-quality and untimely detection and classification of pests has not been carried out.]
Response 1: [Thank you for your valuable comments. I have cited a reference in the Introduction to illustrate the losses caused by untimely detection and classification of pests and to emphasise the importance of timely detection and control. Here is what I added: With the modernization of agriculture, pests have become one of the main threats to agricultural production. Their invasion not only leads to a significant decline in crop yields and a serious reduction in the quality of agricultural products, but may also spread crop diseases, resulting in substantial economic losses. According to Gandhi and others, nearly half of the world's crop yields are affected by pests and diseases. This not only affects agricultural productivity and product quality, but also negatively impacts food security and economic development.] We agree with this comment. Therefore, we have made changes to line 39 of the first paragraph on page 1.
Comments 2: [The advantages and disadvantages of existing and developing methods for detecting and classifying pests in comparison with each other and with regulated ones are not given.]
Response 2: [Thank you for your valuable comments. In the Introduction, I have explained the limitations of traditional pest detection methods. I have also noted that, even after the rise of deep learning techniques, many researchers focus only on single-species pest detection tasks, and I have emphasised the importance of large-scale, multi-category pest detection. Thank you again for your suggestions!]
Comments 3: [Differences in costs when combating different types of pests and, therefore, the relevance of this study are not given.]
Response 3: [Thank you for your valuable comments. In the Introduction, I have described the importance of rapid and accurate identification of pest species and locations in terms of the cost of chemical pesticides, pointing out that this will help to improve the efficiency of pest detection and thus reduce the cost of control. Here is what I added: By quickly and accurately identifying the species and location of pests, farmers can be helped to control them more precisely, reduce unnecessary pesticide use, and thus reduce costs and environmental pollution. In addition, it can help farmers identify the gathering areas of specific pests and carry out precise spraying rather than spraying the entire field, effectively reducing pesticide use and costs.] We agree with this comment. Therefore, we have made changes to line 47 of the first paragraph on page 2.
Comments 4: [The required quality indicators of the developed model are not justified.]
Response 4: [Thank you for your valuable comments! In the "Materials and Methods" section, I have provided a better description of the model's evaluation metrics and explained why these metrics are justified for the pest detection task. Here is what I added: The object detection evaluation metrics used in this paper include precision (P), recall (R) and average precision (mAP50 and mAP50-95). P measures the accuracy of the model's recognition results, that is, the proportion of true pests among the samples the model judges to be pests. In pest detection, high precision means that the model can identify pests more accurately and reduce cases of misidentifying other objects as pests. The formula is as follows:

P = TP / (TP + FP)

where TP refers to cases where the model correctly classifies a sample that is actually positive as positive, and FP refers to cases where the model incorrectly classifies a sample that is actually negative as positive. R shows the proportion of images that actually contain pests that are correctly detected by the model. Recall measures the comprehensiveness of the model, i.e. whether pest targets are missed. In pest detection, a high recall means that the model is able to identify as many pests in the field as possible and avoid omissions, leading to more effective control. The formula is as follows:

R = TP / (TP + FN)

where FN refers to cases where the model incorrectly classifies a sample that is actually positive as negative. AP values range from 0 to 1, with higher values indicating better model performance. AP is a very important metric because it takes into account not only the precision of the model but also its recall, thus providing a balanced assessment.
mAP50 pays particular attention to how accurately the predicted box matches the ground-truth box at an IoU threshold of 50%, while mAP50-95 averages performance over IoU thresholds from 50% to 95%. In practical applications, we hope that the model can not only identify the pest but also accurately locate it for precise control. The formulas are as follows:

AP = ∫₀¹ P(R) dR (the area under the precision-recall curve), and mAP = (1/N) Σᵢ APᵢ, where N is the number of classes. ] We agree with this comment. Therefore, we have made changes to line 329 of the second paragraph on page 10.
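To make the precision and recall definitions above concrete, here is a small illustrative Python sketch (not part of the manuscript; the TP/FP/FN counts are hypothetical):

```python
def precision(tp, fp):
    # P = TP / (TP + FP): proportion of detections that are true pests
    return tp / (tp + fp)

def recall(tp, fn):
    # R = TP / (TP + FN): proportion of actual pests that were detected
    return tp / (tp + fn)

# Hypothetical counts from one evaluation run
tp, fp, fn = 90, 10, 30
print(precision(tp, fp))  # 0.9
print(recall(tp, fn))     # 0.75
```

A detector with high precision but low recall misses pests; one with high recall but low precision raises false alarms, which is why AP balances the two.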
Comments 5: [The model has not been evaluated on real images in agricultural farms. All research is carried out only on a prepared dataset.]
Response 5: [Thank you for your valuable comments! In the visual comparison of detection results in the Results section, Figure 11 illustrates the model's detection results in a real agricultural scenario, which provides a more objective basis for evaluating the model. Thank you again for your valuable advice!]
Comments 6: [Conclusions do not contain numerical data and do not reveal to what extent the research goal has been achieved.]
Response 6: [Thank you for your valuable comments! I have refined the Conclusion section by adding specific numerical data, including precision (P), recall (R), mAP50 (average precision at an IoU threshold of 50%) and mAP50-95 (average precision over IoU thresholds from 50% to 95%). These data indicate that the improved AEC-YOLOv8n model achieves higher detection precision. Here is what I added: In the end, our improved algorithm significantly raised the overall detection scores, with P, R, mAP50, and mAP50-95 reaching 58.9%, 62.6%, 67.1%, and 43.1%, respectively. This result shows that AEC-YOLOv8n can achieve high detection accuracy in large-scale, diverse pest detection tasks. In particular, the test results on the IP102 dataset further confirm the superiority of our algorithm, which shows higher accuracy than comparable algorithms. The algorithm therefore has significant application potential and value in agricultural pest detection.] We agree with this comment. Therefore, we have made changes to line 503 of the second paragraph on page 18.
4. Response to Comments on the Quality of English Language
Response 1: (in red)
5. Additional clarifications
Reviewer 2 Report
Comments and Suggestions for Authors
Introduction
The introduction is well-written and provides sufficient background information related to deep learning and the selected models. However, the novelty or the research gap identified by the authors is not robust. The authors claim that previous studies focused on small datasets, and their study aims to use a large dataset. If this is the case, you need to highlight the limitations and challenges of using big datasets in your introduction. This will provide a strong basis for claiming such objectives.
Methodology
I miss the data section! It is not clear how the IP102 Dataset has been used.
Results and Analysis
I recommend that the authors split the results section from the analysis. One possible solution is to merge the methodology and analysis into one section and have the results section stand alone.
In its current version, it is difficult to follow the results section, and this needs to be revised completely.
Discussion
The manuscript is missing a discussion section. What is presented is more or less the results without justification or comparison of the main findings with previous studies and without highlighting the degree of agreement.
Author Response
For research article
Response to Reviewer 2 Comments
1. Summary
Thank you for taking time out of your busy schedule to review my paper. Our responses to your questions are below. Thank you again!
2. Questions for General Evaluation (Reviewer's Evaluation)

Does the introduction provide sufficient background and include all relevant references? Can be improved.
Are all the cited references relevant to the research? Yes.
Is the research design appropriate? Must be improved.
Are the methods adequately described? Must be improved.
Are the results clearly presented? Must be improved.
Are the conclusions supported by the results? Must be improved.
3. Point-by-point response to Comments and Suggestions for Authors
Comments 1: [Introduction: The introduction is well-written and provides sufficient background information related to deep learning and the selected models. However, the novelty or the research gap identified by the authors is not robust. The authors claim that previous studies focused on small datasets, and their study aims to use a large dataset. If this is the case, you need to highlight the limitations and challenges of using big datasets in your introduction. This will provide a strong basis for claiming such objectives.]
Response 1: [Thank you for your kind comments. You are right that the limitations and challenges of using large datasets should have been emphasised in the introduction; the IP102 dataset is also described in more detail in the Methodology. Here is what I added: However, most of the above studies on pest detection focused on small-scale, single-type datasets, and research on detecting large-scale, multi-class pest targets was relatively insufficient, making it difficult to meet the needs of multi-target identification and response in real scenarios. Therefore, IP102, a large-scale crop pest and disease dataset, was selected as the model training dataset in this paper. IP102 contains more than 75,000 images covering 102 categories, and its natural long-tail distribution, hierarchical classification, class imbalance and intra-class variability pose significant challenges for model training and evaluation. In response to these challenges, this paper proposes a pest detection framework based on an improved YOLOv8n, which aims to handle large-scale, multi-category pest detection effectively using deep learning algorithms and to achieve high-accuracy pest identification in diverse agricultural environments.] We agree with this comment. Therefore, we have made changes to line 94 of the first paragraph on page 3.
Comments 2: [Methodology: I miss the data section! It is not clear how the IP102 Dataset has been used.]
Response 2: [Thank you for pointing this out. I agree that the details of the IP102 dataset belong in the Methodology, and I have now described the dataset there.]
Comments 3: [Results and Analysis: I recommend that the authors split the results section from the analysis. One possible solution is to merge the methodology and analysis into one section and have the results section stand alone. In its current version, it is difficult to follow the results section, and this needs to be revised completely.]
Response 3: [Thank you for pointing this out. I have made the Results section stand alone so that it is clearer and easier to follow.]
Comments 4: [Discussion: The manuscript is missing a discussion section. What is presented is more or less the results without justification or comparison of the main findings with previous studies and without highlighting the degree of agreement.]
Response 4: [Thank you for your valuable comments. I have revised the Discussion section based on your suggestions, summarising the innovations of this study, comparing it with previous studies, and discussing its significance, limitations, and future research directions.] We agree with this comment. Therefore, we have made changes to line 445 of the second paragraph on page 15.
4. Response to Comments on the Quality of English Language
Response 1: (in red)
5. Additional clarifications
Round 2
Reviewer 1 Report
Comments and Suggestions for Authors
The presented work was performed on a current topic. It is associated with ensuring food security and maintaining the quality and volume of the harvest.
The authors presented the indicators of existing models and determined the direction for improving the pest detection process. The structure of the developed model, the approaches used to train it and improve its quality indicators are described quite fully. Tunable hyperparameters are specified. The training was carried out on a fairly large dataset, which allows us to judge the high quality of the model.
The presented responses to comments and explanations make the material easier to understand, but for the most part they are very general in nature and do not provide an evaluative (numerical) idea of the research problem and the results achieved. Despite this, I believe that the work can be accepted for publication after minor adjustments.
Author Response
Please see the attachment
Author Response File: Author Response.pdf
Reviewer 2 Report
Comments and Suggestions for Authors
I think the authors revised the manuscript according to my comments and suggestions, and the manuscript has improved. However, I still think that the discussion section is more like a results section. The authors need to compare their results and findings with previous studies and provide justification. Additionally, the quality of the figures is very poor; please correct them.
Author Response
Please see the attachment.
Author Response File: Author Response.pdf