Article
Peer-Review Record

A Multiple Instance Learning Approach to Study Leaf Wilt in Soybean Plants

Agriculture 2025, 15(6), 614; https://doi.org/10.3390/agriculture15060614
by Sanjana Banerjee 1,*, Paula Ramos 2, Chris Reberg-Horton 2, Steven Mirsky 3, Anna Locke 4 and Edgar Lobaton 1
Reviewer 1:
Reviewer 2: Anonymous
Reviewer 3: Anonymous
Submission received: 5 February 2025 / Revised: 3 March 2025 / Accepted: 7 March 2025 / Published: 13 March 2025
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)

Round 1

Reviewer 1 Report

Comments and Suggestions for Authors
  1. This manuscript builds on the research in [11] and improves the algorithm by extending it from binary classification to multi-class classification. However, the overall innovativeness is insufficient. Considering that reference [11] was published seven years ago, the authors should further strengthen the novelty. Moreover, it is recommended to add an analysis of the selection of different MIL pooling methods and to supplement the experiments with data comparing the performance of different pooling methods on the soybean leaf wilt classification task, to strengthen the scientific rigor of the model construction.
  2. The experimental results are compared only with some mainstream models (DenseNet121 and ViT). The scope of comparison models is not comprehensive enough to fully highlight the advantages of the MIL model. It is suggested to expand the set of comparison models and include more representative models in the field of agricultural image analysis, such as ResNet and EfficientNet.
  3. The analysis of the experimental results focuses mainly on indicators such as accuracy and lacks discussion of the model's performance in practical application scenarios (such as different field environments and differences among soybean varieties), so the practicality analysis is insufficient. It is recommended to expand the comparison of the MIL model and these models across multiple metrics (such as precision, recall, and F1-score) to more comprehensively demonstrate the advantages and disadvantages of the MIL model (a minimal illustrative sketch of such a pooling and multi-metric comparison follows this list).
  4. Please unify "multi-class classification" and "multi-class regression" throughout the manuscript. It should be "multi-class classification".
  5. The citation order of the references needs to be corrected. It should not start from [31] at the beginning of the manuscript.
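
A minimal sketch of what the pooling-method and multi-metric comparison requested in points 1 and 3 could look like, assuming PyTorch and scikit-learn; the embedding size, class count, and random "bags" below are hypothetical placeholders, not the authors' implementation:

```python
# Illustrative sketch (hypothetical sizes and data, not the paper's code):
# a MIL head with interchangeable pooling operators, evaluated with the
# per-class precision/recall/F1 report suggested above.
import torch
import torch.nn as nn
from sklearn.metrics import classification_report

class MILHead(nn.Module):
    def __init__(self, embed_dim=128, n_classes=5, pooling="attention"):
        super().__init__()
        self.pooling = pooling
        self.attn = nn.Sequential(nn.Linear(embed_dim, 64), nn.Tanh(),
                                  nn.Linear(64, 1))
        self.classifier = nn.Linear(embed_dim, n_classes)

    def forward(self, bag):                          # bag: (n_instances, embed_dim)
        if self.pooling == "max":
            pooled = bag.max(dim=0).values           # instance-wise max pooling
        elif self.pooling == "mean":
            pooled = bag.mean(dim=0)                 # instance-wise mean pooling
        else:
            w = torch.softmax(self.attn(bag), dim=0)  # attention weights per instance
            pooled = (w * bag).sum(dim=0)             # attention-weighted pooling
        return self.classifier(pooled)

# Compare pooling variants on hypothetical bags of instance embeddings.
torch.manual_seed(0)
bags = [torch.randn(16, 128) for _ in range(40)]
y_true = torch.randint(0, 5, (40,)).tolist()

for pooling in ["max", "mean", "attention"]:
    model = MILHead(pooling=pooling).eval()
    with torch.no_grad():
        y_pred = [model(b).argmax().item() for b in bags]
    print(f"--- {pooling} pooling ---")
    print(classification_report(y_true, y_pred, zero_division=0))
```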

Author Response

Please see the attachment

Author Response File: Author Response.pdf

Reviewer 2 Report

Comments and Suggestions for Authors

Overall, the study explores a very interesting research topic that is of primary importance. The methods are executed well and the results are well documented.

My comments are as follows:

  1. The abstract needs to be improved. While more focus has been given to the deep learning part, a better introduction and a statement of the implications should also be added to the abstract.
  2. More details about the baseline methods need to be provided.
  3. I would suggest making the code base and data available online so that this study can be reproduced. If the authors do not have permission to share the data, the code alone will be fine. This will allow other researchers to reproduce these methods for other crops and other diseases.

 

Author Response

Please see the attachment

Author Response File: Author Response.pdf

Reviewer 3 Report

Comments and Suggestions for Authors

This paper proposes an efficient, lightweight, and interpretable drought stress detection model for soybean plants by integrating remote sensing with multiple instance learning (MIL) in deep learning. The model demonstrates excellent performance on the soybean wilt classification task and outperforms the DenseNet121 model. In general, I have the following concerns:

(1) The references should be cited in order, e.g., references [31-33].

(2) In Figure 4, does the experiment use the same dataset for the comparisons of (a) DenseNet121, (b) the proposed MIL model, and (c) the Vision Transformer? For example, for class 0 the number of images is 37 for DenseNet121 and the proposed MIL model, but 41 for the Vision Transformer. The same discrepancy appears for the other classes.
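
A hedged illustration of how this discrepancy could be checked, using hypothetical label lists rather than the paper's data: compare the per-class ground-truth counts actually used in each model's evaluation.

```python
# Illustrative sanity check (placeholder labels, not the paper's data):
# verify that each confusion matrix in Figure 4 was computed on the same
# test split by comparing per-class ground-truth counts.
from collections import Counter

def class_counts(labels):
    """Return a sorted {class: count} summary for one evaluation run."""
    return dict(sorted(Counter(labels).items()))

# These would be the ground-truth label lists used when evaluating each
# model; shown here with placeholder values mirroring the reported counts.
y_densenet = [0] * 37 + [1] * 40
y_mil      = [0] * 37 + [1] * 40
y_vit      = [0] * 41 + [1] * 36

counts = {name: class_counts(y) for name, y in
          [("DenseNet121", y_densenet), ("MIL", y_mil), ("ViT", y_vit)]}
print(counts)
print("same test split:", len({tuple(c.items()) for c in counts.values()}) == 1)
```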

Author Response

Please see the attachment

Author Response File: Author Response.pdf

Round 2

Reviewer 1 Report

Comments and Suggestions for Authors

The work has been revised based on my comments, so it is recommended for publication.
