Article
Peer-Review Record

Transfer Learning with Convolutional Neural Networks for Cider Apple Varieties Classification

Agronomy 2022, 12(11), 2856; https://doi.org/10.3390/agronomy12112856
by Silverio García Cortés 1,*, Agustín Menéndez Díaz 2, José Alberto Oliveira Prendes 3 and Antonio Bello García 2
Submission received: 12 October 2022 / Revised: 9 November 2022 / Accepted: 11 November 2022 / Published: 15 November 2022
(This article belongs to the Section Precision and Digital Agriculture)

Round 1

Reviewer 1 Report

This paper presents a case study on image classification applied to the identification of cider apple varieties. While the overall approach is not new (several transfer learning examples are available in the literature), the paper is well written and presents both the problem and the methodology used to reach a comfortable accuracy level.

Hence, the paper structure and contributions require almost no changes, besides perhaps a baseline for comparison (e.g.: the accuracy of a human operator). Some phrases are not sufficiently clear, like the discussion on "suitable morphology" lines 110-112 or the lack of precision on line 140 (4 to 6 photos), and deserve a better explanation.

Minor corrections may be made to the abstract ("98%.04%", line 18), the missing accuracy values in Table 5, and the duplicate citation of Chollet (references 18 and 20). The latter also requires more details, such as the book/article title, DOI/URL, etc.

Overall, the paper is not novel but does a good job of demonstrating what image classification can bring to the agricultural field, so it deserves publication.

Author Response

Dear Reviewer:

Thank you for your constructive comments and suggestions. In the following lines we address your comments with the corresponding modifications to the original manuscript. Please see our responses to each of your concerns below.

“This paper presents a case study on image classification applied to the identification of cider apple varieties. While the overall approach is not new (several transfer learning examples are available in the literature), the paper is well written and presents both the problem and the methodology used to reach a comfortable accuracy level.”

Thank you.

“Hence, the paper structure and contributions require almost no changes, besides perhaps a baseline for comparison (e.g.: the accuracy of a human operator).”

To our knowledge, there is no study on the specific accuracy of human identification of apple varieties. In our experience working with apple nursery managers, orchard managers and apple pickers, the percentage of varieties they can distinguish with the naked eye is approximately 80% for nursery managers, 50% for orchard managers and 75% for pickers. We worked with those varieties for which the three groups agreed on the identification. It is difficult to know which aspects they look at, as tree morphology, leaves and other features often provide information external to the image of the apple itself. Nor is it a simple problem for them to solve, since, especially in old apple orchards, grafts of different apple varieties may even coexist on the same tree.

To address your recommendation, we have added this sentence at line 130:

“Although there are no rigorous studies on this aspect, in our experience we appreciate that a human operator accustomed to working with local apple varieties may be able to visually distinguish up to 80% of the varieties used in the production of PDO cider.”

 “Some phrases are not sufficiently clear, like the discussion on "suitable morphology" lines 110-112 or the lack of precision on line 140 (4 to 6 photos), and deserve a better explanation.”

 We have clarified the two sentences. Now they are:

Line 118:

“not enough fruit specimens with sufficiently developed variety-specific characteristics were obtained”

And (line 152):

“… and from four to six pictures from positions 3, 4 and 5 (front, peduncle up and peduncle down respectively) depending on the symmetry and quality state of each apple specimen”

“Minor corrections may be made to the abstract ("98%.04%", line 18),”

This error has been fixed.

“the missing accuracy values in Table 5,”

This error has also been fixed.

“or the duplicate citation of Chollet (references 18 and 20). The latter also requires more details, such as the book/article title, DOI/URL, etc.”

We have done our best to correct this. Reference [18] now points to the Keras repository on GitHub and reference [20] points to the Keras online documentation. Both now include the URL and access date.

“Overall, the paper is not novel but does a good job of demonstrating what image classification can bring to the agricultural field, so it deserves publication.”

Thanks again. We have tried to respond to your review requests with the above corrections.


Reviewer 2 Report

agronomy-1995530-peer-review-v1

The authors present a simple, cost-effective, and off-line colour vision system for classifying apple cultivars. However, the authors need to make the novelty of their study clear in the introduction section. Other comments need to be addressed by the authors to get the manuscript accepted.


Introduction

The authors need to state the novel idea of the study, taking into account that colour vision and CNNs have already been used numerous times in similar applications.

Lines 79-84: You explain NNs after introducing CNNs. This is not needed here. Either delete this paragraph or move it before the CNN section.


Materials and Methods

The authors have to replace Figure 7 with a more detailed figure that explains the data analysis procedure followed in the study.

Results

Figure 8: The fonts are not clear and need enhancement, especially in the upper part.

Discussion

The authors did not compare their results to any previous study. This must be fixed.

Conclusions

Needs rewriting, as it is far too long.

Author Response

Dear Reviewer:

Thanks for your comments and suggestions. In the following lines we try to fulfill your indications with some modifications to the original manuscript. Please see the corresponding comments to your concerns.

“The authors present a simple, cost-effective, and off-line colour vision system for classifying apple cultivars. However, the authors need to make the novelty of their study clear in the introduction section. Other comments need to be addressed by the authors to get the manuscript accepted.”

We address this request with our answer and the manuscript modification under the corresponding point of your review below.


Introduction

“The authors need to state the novel idea of the study, taking into account that colour vision and CNNs have already been used numerous times in similar applications.”

We agree with you that there is no methodological novelty in the use of transfer learning techniques on offline colour images for classification purposes. However, published articles usually address much simpler problems, such as classifying very different fruit types or grading the state of a single fruit type into three categories. Our case goes a little further, into the less frequently studied field of apple variety identification, with a classification into nine morphologically very similar classes.

On the other hand, we believe we also bring nuances to the transfer learning procedure: advice on the architectures that provide the best and fastest results; training all the weights of the network, as opposed to the usual practice of training only the head of the CNN, or the head plus a few base layers; and verification that initial weights pretrained even on a very different image database improve both execution time and accuracy compared with random initial values. We believe these details can make a practical difference in obtaining good classification results, and that this applied knowledge can therefore be of interest to Agronomy readers.

We have added some lines regarding the novelty in the Introduction section.

Lines (88-91) now are:

“This paper proves the ability of convolutional neural networks to perform classifications of nine apple varieties with very subtle visual appearance differences. We advise on the best public CNN architectures available for the work and show the best strategies to achieve high accuracy values during the Transfer Learning process.”
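The full-network fine-tuning strategy described in our response can be sketched in Keras roughly as follows. This is a minimal illustration, not our exact configuration: MobileNetV2, the 224×224 input size and the Adam learning rate are assumptions chosen for the sketch.

```python
# Sketch of transfer learning with ALL layers trainable, as opposed to
# the common practice of freezing the convolutional base and training
# only the classification head. MobileNetV2, the input size and the
# learning rate are illustrative choices, not the paper's settings.
import tensorflow as tf

NUM_VARIETIES = 9  # nine morphologically similar cider apple varieties


def build_model(pretrained: bool = True) -> tf.keras.Model:
    # In practice weights="imagenet" provides the pretrained starting
    # point; pretrained=False is only useful for quick structural tests.
    base = tf.keras.applications.MobileNetV2(
        weights="imagenet" if pretrained else None,
        include_top=False,
        input_shape=(224, 224, 3),
    )
    base.trainable = True  # fine-tune every layer, not just the head
    model = tf.keras.Sequential([
        base,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(NUM_VARIETIES, activation="softmax"),
    ])
    # A low learning rate avoids destroying the pretrained features
    # during the first epochs of full-network fine-tuning.
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```

The key line is `base.trainable = True`: with the base left trainable, every weight is updated from its pretrained value rather than kept frozen.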

“Lines 79-84: You explain NNs after introducing CNNs. This is not needed here. Either delete this paragraph or move it before the CNN section.”

This has been fixed following your suggestion: we changed the order of the paragraphs and updated the affected reference numbering.


 Materials and Methods

“The authors have to replace Figure 7 with a more detailed figure that explains the data analysis procedure followed in the study.”

This figure has been replaced following your suggestion.


 Results

“Figure 8: The fonts are not clear and need enhancement, especially in the upper part.”

This figure has also been redone following your suggestion.


 Discussion

“The authors did not compare their results to any previous study. This must be fixed.”

Following your suggestion, we have added the information below to the Discussion section of the paper.


We have found some comparable studies dedicated to the classification of fruit varieties with transfer learning techniques. One of them [7], dedicated to mango varieties, uses two CNN architectures, although in one case (ResNet50) it is used in a hybrid way, in combination with other machine learning techniques (Naive Bayes, linear SVM, polynomial SVM and logistic regression). They report accuracies between 70% and 100% for that hybrid classification of eight mango varieties. The same authors also report other classification results (from different authors) for the same mango image dataset, with accuracies between 88.57% and 92.42% using transfer learning with Inception v3, Xception, DenseNet and MobileNet architectures (this reference was probably accessed privately by the authors of the first study, as it appears not to have been published). Finally, the same authors [7] also report 100% accuracy for MobileNet transfer learning and MobileNet fine tuning. In our case, we obtained accuracy results slightly higher than those of the unpublished study and, logically, slightly lower than the perfect accuracy claimed by [7]. We recall that theirs is a study on a different fruit with a much smaller image base, with only 200 colour images belonging to eight mango varieties.

Other studies work with many different fruit types in the image database, for example [29] (72 different types of fruit). Their accuracy in their best tests, using the VGG16 architecture, is less than one point higher than ours. Inception v3 was the only other model they tested, with slightly lower accuracy than our result.
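The "hybrid" approach discussed above, a pretrained CNN used as a fixed feature extractor feeding a classical classifier, can be sketched as follows. This is only an illustration of the general technique, not the configuration of the cited study [7]: the ResNet50 input size, the random stand-in data and the linear SVM settings are assumptions.

```python
# Sketch of hybrid CNN + classical-ML classification: a CNN backbone
# extracts a feature vector per image and a linear SVM is trained on
# those features. Parameters are illustrative, not those of [7].
import numpy as np
import tensorflow as tf
from sklearn.svm import SVC

def extract_features(images: np.ndarray) -> np.ndarray:
    # ResNet50 without its classification head; global average pooling
    # turns each image into a single 2048-dimensional feature vector.
    # weights=None keeps the sketch self-contained; in practice
    # weights="imagenet" supplies the pretrained features.
    base = tf.keras.applications.ResNet50(weights=None, include_top=False,
                                          pooling="avg",
                                          input_shape=(64, 64, 3))
    return base.predict(images, verbose=0)

# Toy usage: random arrays stand in for fruit images of two classes.
X = np.random.rand(8, 64, 64, 3).astype("float32")
y = np.array([0, 1, 0, 1, 0, 1, 0, 1])
clf = SVC(kernel="linear").fit(extract_features(X), y)
```

The design trade-off is that only the small SVM is trained, which is fast and works with few images, whereas full fine-tuning (as in our paper) adapts the CNN features themselves to the target classes.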


Conclusions

“Needs rewriting, as it is far too long.”

The Conclusions have now been shortened by condensing their basic ideas and deleting some paragraphs.


We have tried to respond to your review requests with the above corrections. Thanks again.


Round 2

Reviewer 2 Report

The authors addressed most of the comments except the enhancement of the figures. Figures 8-10 are still not clear, and the numbers and text in the plots cannot be read clearly. This must be fixed.

Author Response

Dear reviewer,

In this second revised version of the paper, we have tried to improve the readability of Figures 8 to 13, following your requirements.

We have improved Figure 8 by redoing it and splitting it into Figures 8, 9, 10 and 11, and we have also redone Figures 12 and 13 for better clarity.

We hope we have met your expectations.

Thank you for your comments.
