Article
Peer-Review Record

A Hybrid Deep Learning Architecture for Apple Foliar Disease Detection

Computers 2024, 13(5), 116; https://doi.org/10.3390/computers13050116
by Adnane Ait Nasser and Moulay A. Akhloufi *
Reviewer 1:
Reviewer 2: Anonymous
Submission received: 28 March 2024 / Revised: 23 April 2024 / Accepted: 3 May 2024 / Published: 7 May 2024

Round 1

Reviewer 1 Report

Comments and Suggestions for Authors

This paper introduces a hybrid deep learning architecture, "CTPlantNet", that efficiently classifies apple leaf diseases using CNN models and a vision Transformer model. The architecture identifies and classifies plant diseases using two open-access datasets, Plant Pathology 2020-FGVC-7 and Plant Pathology 2021-FGVC-8. The proposed model achieves an accuracy of 98.28% on Plant Pathology 2020-FGVC-7 and 95.96% on Plant Pathology 2021-FGVC-8, which the authors consider better than the state of the art.

 

The introduction presents the importance of the research in the context of agrifood, apples being an important crop. The related work section focuses on deep learning approaches for plant leaf image classification, the various datasets and models used, and the results obtained.

The Materials and Methods section presents the publicly available datasets used. The proposed architecture is described in Figure 2 and the text that follows. I am not sure what value Figure 3 brings to the paper, as it is very rudimentary.

The evaluation methodology is also presented. Is ACC what is generally denoted as accuracy in the literature? If so, please use the same name (or specify the acronym correspondence – line 284).

Table 1 presents results in comparison with related works. While it is clear there are some improvements, the results are not better than all others obtained. In this context, please explain what the added value of this research is. What is the relevance of each metric, and its meaning in this case?

In Table 2 the comparison is clearer and the results obtained are better. Can you motivate the benefits of the small improvement in precision? Other metrics are not presented, so can any conclusion be drawn from this? Is overfitting possible? Have you tested the robustness of the model further?

The paper is very well written, the methodology is clearly explained, and I appreciate that the results are compared with the state of the art. However, the interpretation of the results seems exaggerated and not sufficiently motivated as to their added value. The conclusion claims ‘impressive performance’ while the results are very similar to the state of the art, and it is not clear whether they perform better overall, as robustness is not tested. In this context I would encourage the authors to take this research to the next level and evaluate the models for robustness with a different testing dataset, or something similar. Otherwise the question arises – is this yet another model that performs similarly to others, and why would that matter? Please add a discussion section that touches on these points, the positive implications and the limitations of these results, if needed with more data/results to prove your claims. While this paper would work very well as a conference presentation, for a journal the practice is usually to elaborate further on the subject, try a few more angles, and get deeper into the what and why.

Citations are generally appropriate and recent. It is unclear why the authors cite their own work on CXR chest diseases in [22] and [23]. Reference [1] needs revision.

Author Response

Dear Reviewer,

Please find attached a cover letter highlighting the modifications we have made, taking into consideration your comments and suggestions.

Best regards

Author Response File: Author Response.docx

Reviewer 2 Report

Comments and Suggestions for Authors

The manuscript by Nasser and Akhloufi is devoted to the development of a hybrid deep learning architecture for apple foliar disease detection. The work is potentially interesting; however, I have comments and questions.

1. Section 1. Introduction: There are numerous optical methods for assessing plants, including multispectral, hyperspectral, and fluorescence imaging. Are these methods used in the detection of apple foliar diseases? What are the advantages of analyzing color images compared to these imaging methods? What methods can be used for the earliest detection of diseases? This should be discussed in the Introduction.

2. Section 3.3. Implementation: It seems that the numbers of samples in the training and testing sets are not shown in the work (or I did not find them). Please include these numbers for the initial images and the final numbers (after the image transformations used to increase the quantity of images).

3. The authors used images showing leaves in detail; however, images of the canopy (which are typical for UAV measurements) can differ strongly. Can the developed hybrid deep learning architecture be used for apple canopy images? Do the authors plan to analyze apple canopies in the future? This should be discussed, I suppose.

Author Response

Dear Reviewer,

Please find attached a cover letter highlighting the modifications we have made, taking into consideration your comments and suggestions.

Regards

Author Response File: Author Response.docx

Round 2

Reviewer 1 Report

Comments and Suggestions for Authors

The initial review questions have not been entirely answered, and some have barely been addressed. Please see below.

1. The question arises – is this yet another model that performs similarly to others, and why would that matter? An improvement of 0.12% seems rather small. Are any other measures relevant? Is the computation time better or worse?

2. Add a discussion section to explain the implications and limitations of these results and, if needed, add more data/experiment results to prove your claims.

3. While this paper would work very well as a conference presentation, for a journal the practice is usually to elaborate further on the subject, try a few more angles (experiments, new contributions), and get deeper into the what and why. Otherwise, its originality and the impact of its contributions remain unclear.

Author Response

Dear Reviewer,

Thank you for your valuable insights and comments. Attached, you will find our cover letter containing responses to your feedback and suggestions from the second round of review.

Best regards,

Author Response File: Author Response.pdf

Round 3

Reviewer 1 Report

Comments and Suggestions for Authors

The authors have clarified the originality points of their work and have expanded the discussion section, which increases the value of the paper. However, the results are not very convincing as to their relevance in the state-of-the-art context.
