Article

On the Efficacy of Handcrafted and Deep Features for Seed Image Classification

Department of Mathematics and Computer Science, University of Cagliari, Via Ospedale 72, 09124 Cagliari, Italy
* Author to whom correspondence should be addressed.
J. Imaging 2021, 7(9), 171; https://doi.org/10.3390/jimaging7090171
Submission received: 29 July 2021 / Revised: 26 August 2021 / Accepted: 26 August 2021 / Published: 31 August 2021
(This article belongs to the Section Computer Vision and Pattern Recognition)

Abstract

Computer vision techniques have become important in agriculture and plant sciences due to their wide variety of applications. In particular, the analysis of seeds can provide meaningful information on their evolution, the history of agriculture, the domestication of plants, and knowledge of diets in ancient times. This work proposes an exhaustive comparison of several different types of features in the context of multiclass seed classification, leveraging two public plant seed data sets to classify their families or species. In detail, we studied possible optimisations of five traditional machine learning classifiers trained with seven different categories of handcrafted features. We also fine-tuned several well-known convolutional neural networks (CNNs) and the recently proposed SeedNet to determine whether and to what extent their deep features may be advantageous over handcrafted features. The experimental results demonstrated that CNN features are appropriate to the task and representative of the multiclass scenario. In particular, SeedNet achieved a mean F-measure of at least 96%. Nevertheless, in several cases the handcrafted features showed performance satisfactory enough to be considered a valid alternative. In detail, we found that the Ensemble strategy combined with all the handcrafted features can achieve a mean F-measure of at least 90.93%, in considerably less time. We consider the obtained results an excellent preliminary step towards realising an automatic seed recognition and classification framework.

1. Introduction

The last few decades have seen considerable growth in the use of image processing techniques to solve various problems in agriculture and the life sciences because of their wide variety of applications [1]. This growth is mainly due to the fact that computer vision techniques have been combined with deep learning techniques. The latter have offered promising results in various application fields, such as haematology [2], biology [3,4], and botany [5,6]. Deep learning algorithms differ from traditional machine learning (ML) methods in that they require little or no preprocessing of images and can infer an optimal representation of the data from raw images without the need for prior feature selection, resulting in a more objective and less biased process. Moreover, the ability to investigate the structural details of biological components, such as organisms and their parts, can significantly influence biological research. According to Kamilaris et al. [7], image analysis is a significant field of research in agriculture for seed, crop, or leaf classification, anomaly or disease detection, and other related activities.
In the agricultural area, the cultivation of crops is based on seeds, mainly for food production. In this study, we focus in particular on carpology, the branch of plant science that examines seeds and fruits from a morphological and structural point of view. It generally faces two main challenges: reconstructing the evolution of a particular plant species and recreating what the landscape, and therefore its flora and fauna, looked like. Professionals in this field typically capture images of seeds using a digital camera or a flatbed scanner. The latter, in particular, offers quality and speed of workflow thanks to the constant illumination conditions and the defined image size [8,9,10,11,12]. In this context, seed image classification can play a fundamental role for manifold purposes: classifying crops, fruits, and vegetables, recognising diseases, or extracting specific feature information for archaeobotanical studies, and so forth. One of the tools most used by biologists is ImageJ [13,14,15], regarded as a standard piece of image analysis software because it is freely available, platform-independent, and easily applicable by biological researchers to quantify laboratory tests.
A traditional image analysis procedure uses a pipeline of four steps: preprocessing, segmentation, feature extraction, and classification, although deep learning workflows have emerged since the proposal of the convolutional neural network (CNN) AlexNet in 2012 [16]. CNNs do not follow the typical image analysis workflow because they can extract features independently, without the need for feature descriptors or specific feature extraction techniques. In the traditional pipeline, image preprocessing techniques are used to prepare the image before analysis, to eliminate possible distortions or unnecessary data, or to highlight and enhance distinctive features for further processing. Next, the segmentation step divides the significant regions into sets of pixels with shared characteristics such as colour, intensity, or texture. The purpose of segmentation is to simplify and change the image representation into something more meaningful and easier to analyse. Extracting features from the regions of interest identified by segmentation is the next step. In particular, features can be based on shape, structure, or colour [17,18]. The last step is classification, which assigns a label to the objects using supervised or unsupervised machine learning approaches; a minimal code sketch of this pipeline is given after the list below. Compared to manual analysis, the use of seed image analysis techniques brings several advantages to the process:
(i)
It speeds up the analysis process;
(ii)
It minimises distortions created by natural light and microscopes;
(iii)
It automatically identifies specific features;
(iv)
It automatically classifies families or genera.
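To make the four-step pipeline concrete, the following is a minimal sketch using scikit-image and scikit-learn; the Otsu thresholding, the descriptor choice, and the dark-seed-on-light-background assumption are illustrative and not the exact procedure of [6].

```python
import numpy as np
from skimage import io, color, filters, measure
from sklearn.svm import SVC

def extract_seed_features(image_path):
    """Toy version of the traditional pipeline for one seed image:
    preprocessing, segmentation, and feature extraction."""
    rgb = io.imread(image_path)
    grey = color.rgb2gray(rgb)                    # preprocessing
    mask = grey < filters.threshold_otsu(grey)    # segmentation (assumes dark seed, light background)
    regions = measure.regionprops(measure.label(mask))
    seed = max(regions, key=lambda r: r.area)     # keep the largest connected region
    # A few handcrafted descriptors: shape (area, eccentricity, solidity) and mean colour
    mean_colour = rgb[seed.coords[:, 0], seed.coords[:, 1]].mean(axis=0)
    return np.r_[seed.area, seed.eccentricity, seed.solidity, mean_colour]

# Classification: stack the feature vectors and fit any supervised model, e.g.
# X = np.stack([extract_seed_features(p) for p in image_paths])
# SVC().fit(X, y)
```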
In this work, we address the problem of multiclass classification of seed images from two different perspectives. First, we study possible optimisations of five traditional machine learning classifiers, as adopted in our previous work [6], trained with seven different categories of handcrafted (HC) features extracted from seed images with our proposed ImageJ tool [19]. Second, we train several well-known convolutional neural networks and SeedNet, a CNN recently proposed in our previous work [6], to determine whether and to what extent features extracted from them are advantageous over handcrafted features for training the same traditional machine learning methods described above. In particular, Table 1 summarises the study, contributions, and tools provided by our previous works and by the one presented here.
The overall aim of the work is to propose a comprehensive comparison of seed classification systems, based on both handcrafted and deep features, to produce an accurate and efficient classification of heterogeneous seeds. This is important, for example, to obtain archaeobotanical information about seeds and to recognise their types effectively. More specifically, the classification addressed in this work is fine-grained, oriented to single seeds rather than sets of seeds, as in [20]. In detail, our contribution is threefold:
(i)
We exploit handcrafted, a combination of handcrafted, and CNN-extracted features;
(ii)
We compare the classification results of five different models, trained with HC and CNN-extracted features;
(iii)
We evaluate the classification results from a multiclass perspective to assess which type of descriptor may be most suitable for this task.
This research aims to classify individual seeds belonging to the same family or species from two different and heterogeneous seed data sets, where differences in colour, shape, and structure can be challenging to detect. We also want to highlight how traditional classification techniques trained with handcrafted features can outperform CNNs in training speed and achieve accuracy close to that of CNNs in this task.
The rest of the article is organised as follows. Section 2 reviews the state of the art in plant science, with a focus on seed image analysis. Section 3 presents the data sets used and the classification experiments. The experimental evaluation is discussed in Section 4, and finally, Section 5 gives the conclusions of the work.

2. Related Work

This work aims to classify seeds of different families, or species, according to the data set used. In general, computer vision techniques have been applied to this or similar tasks, even though no studies address heterogeneous seed identification or classification. For example, several authors have proposed methods to detect or classify types of seeds [5,6,21], leaves [22,23,24], and crops [25], to identify the quality of crops [26] or diseased leaves or crops [1,25,27,28,29], using both traditional and deep learning-based techniques. Table 2 gives a summary of the main methods and findings of the literature.

2.1. Leaf Detection and Classification

Several methods have been proposed for tasks similar to seed classification, such as leaf identification and recognition. Examples include a Support Vector Machine (SVM)-based method trained with leaf-related features such as shape, colour, and texture [22] and a mobile leaf detection system based on saliency computation to extract regions of interest, followed by segmentation with the region growing algorithm, which exploits both saliency map and colour features [24]. Finally, Hall et al. [23] used a combination of handcrafted and deep features as part of a classification system based on Random Forest (RF). It can classify the leaves of different plant species using a data set of over 1900 images divided into 32 species. The last method is particularly relevant and can be applied to seed images; however, like Di Ruberto et al. [22], it uses images of leaves acquired in ideal conditions, i.e., with an artificial background, rather than in actual conditions with a natural background, as in our investigation. As for Putzu et al. [24], the main focus is on the processing of leaves with complex artificial backgrounds in a mobile application scenario, which is far from the purposes of this work.

2.2. Leaf Diseases Identification

Works that are similar but oriented to the identification of diseases have been proposed by different authors [1,25,27,28,29]. Some examples include the work of Sladojevic et al. [27], who used CaffeNet (a single-GPU version of AlexNet's CNN) to identify leaf diseases, while AlexNet and GoogLeNet have been used to identify 14 crop species and 26 different diseases [25]. In addition, LeNet has been used for the recognition of diseased banana leaves in [28]. Barman et al. [1] proposed a real-time citrus leaf disease detection and classification system based on the MobileNet CNN [30]. Finally, Gajjar et al. [29] proposed a novel CNN as part of a framework for real-time identification of diseases in a crop, tested on 20 different healthy and diseased leaves of four different plants. Each of the works in this section uses convolutional neural networks suitable for leaf disease detection. Contrary to the analysis carried out in our work, they did not study the handcrafted features that are particularly important in discriminating different types of seeds.
Table 2. Overview of existing works in this field with key insights from the proposed methods.

Work | Task | Main Method | Observations
[22] | Leaf detection | HC features + SVM | Ideal background
[23] | Leaf classification | HC + CNN features + RF | Ideal background
[24] | Leaf detection | Saliency + colour features + SVM | Complex artificial background
[25] | Leaf disease detection | AlexNet and GoogLeNet | No investigation on HC features
[28] | Leaf disease detection | LeNet | No investigation on HC features
[27] | Leaf disease detection | CaffeNet | No investigation on HC features
[1] | Leaf disease detection | MobileNet | No investigation on HC features
[29] | Leaf disease detection | Novel CNN | No investigation on HC features
[31] | Crop detection | YOLOv3 modified | Detection system for monitoring
[26] | Crop quality detection | CNN features + SVM | No investigation on HC features
[5] | Seed detection | HC features + LDA | Identification of a single seed class
[21] | Seed germination ability classification | AlexNet modified | Identification of single seed quality
[6] | Seed classification | HC features + ML methods | No optimisations on ML methods
[6] | Seed classification | CNN methods | No investigation on deep features + ML methods

2.3. Classification of Crops

A closely related task is the classification of crops. For example, Junos et al. [31] proposed a detection system based on an improved version of YOLOv3 to detect loose palm fruits from images acquired under various natural conditions; on the other hand, Zhu et al. [26] realised a system to recognise the appearance quality of carrots, based on an SVM classifier trained with features extracted by CNNs. In particular, the ResNet101 network offered excellent results. CNNs are also used in these works. In the first case, the task is different because the proposed system is a monitoring framework rather than a fine-grained classification system. Zhu et al. [26], however, employed features extracted from CNNs, as in our work, although they did not employ handcrafted features as a term of comparison for detecting the quality of carrots.

2.4. Seed Detection and Classification

The works most closely related to ours belong to this category. In particular, Sarigu et al. [5] performed plum variety identification employing the seed endocarp's shape, colour, and texture descriptors, followed by Linear Discriminant Analysis (LDA) to obtain the most representative features, while Przybylo et al. [21] and Loddo et al. [6] employed CNNs. The former used a modified version of AlexNet [32] and focused on the task of acorn germination ability classification, based on the colour intensity of seed sections as a feature. The latter proposed a new CNN, called SeedNet, to classify and sort seeds belonging to different families or species. As in the first two works, we aim to classify seeds; moreover, we investigated the same data sets employed in the latter work. The first two works, however, focused on a single seed variety, while in our previous work [6] we proposed a new CNN for the classification and retrieval of fine-grained seeds without going into the details of handcrafted and deep features, which is one of the main purposes of the current work.

3. Materials and Methods

We leverage two data sets in this work. They contain images of heterogeneous seeds, both in number and in characteristics. They are publicly available on request. Each one was preprocessed as described in our previous work [6] and used for seed family or species classification using handcrafted or deep features. In the following, we start by describing the data sets in Section 3.1.1 and Section 3.1.2. We then provide the implementation details of the classification strategy. We validate the performance through an empirical evaluation with a 10-fold cross-validation strategy and visualise the process’s overall and class-specific discriminative features.

3.1. Data Sets Description

In this section, we describe the data sets used for the experiments.

3.1.1. Canada Data Set

The Canada data set is publicly available [33]. It contains 587 images of seeds, organised into families. Every seed belongs to the Magnoliophyta phylum. Each image has one of three resolutions: 600 × 800, 600 × 480, or 600 × 400 pixels. We exploited this data set because
(i)
It provides several different families, and
(ii)
The background of the images is clean and requires a precise and unique preprocessing strategy.
In particular, for the experiments, we selected the six most represented families, namely Amaranthaceae, Apiaceae, Asteraceae, Brassicaceae, Plantaginaceae, and Solanaceae, for a total of 215 seed images. Figure 1 shows a sample for each family, and Table 3 indicates the number of samples. Each original image contains a scale marker as a dimensional reference for the seed; it was removed following the preprocessing procedure proposed in [6].

3.1.2. Cagliari Data Set

The basic collection of the Banca del Germoplasma della Sardegna (BG-SAR), University of Cagliari, Italy, was used to create the local data set. It consists of 3386 samples from 120 different plant species. Each seed is a member of the Fabaceae family and varies significantly in size and colour. The images have a resolution of 2125 × 2834 [34]. We used a preprocessing procedure defined by our previous work [6] to remove the background and extract single seeds for classification.
We chose the 23 species with the most numerous samples: Amorpha, Anagyris, Anthyllis barba jovis, Anthyllis cytisoides, Astragalus glycyphyllos, Calicotome, Caragana, Ceratonia, Colutea, Cytisus purgans, Cytisus scoparius, Dorycnium pentaphyllum, Dorycnium rectum, Hedysarum coronarium, Lathyrus aphaca, Lathyrus ochrus, Medicago sativa, Melilotus officinalis, Pisum, Senna alexandrina, Spartium junceum, Trifolium, and Vicia faba, for a total of 1988 seeds.
Figure 2 depicts one sample from each family in the Cagliari data set, while Table 4 shows the number of samples.

3.2. Evaluation Metrics

The following metrics are used to evaluate the performance of each classification model: accuracy (Acc), precision (Prec), specificity (Spec), and recall (Rec). Accuracy is defined as the proportion of correctly labelled instances to the total number of instances. Precision is the proportion of true positives in a set of positive results. Specificity is the proportion of negative results that are correctly identified, and recall is the proportion of positive results that are correctly identified. They are defined as follows:
\[ \mathrm{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}, \tag{1} \]
\[ \mathrm{Precision} = \frac{TP}{TP + FP}, \tag{2} \]
\[ \mathrm{Specificity} = \frac{TN}{FP + TN}, \tag{3} \]
\[ \mathrm{Recall} = \frac{TP}{TP + FN}. \tag{4} \]
TP, FP, TN, and FN indicate True Positives, False Positives, True Negatives, and False Negatives, respectively. Finally, since we are facing a multiclass imbalance problem, we also applied two of the most common global metrics for multiclass imbalance learning to evaluate the performance of the classifiers [35]. The measures are the macro geometric mean (MAvG), defined as the geometric mean of the partial accuracy of each class, and the mean F-measure (MFM), which is the average of the F-measure computed for each class. They are defined as:
\[ \mathrm{MAvG} = \left( \prod_{i=1}^{J} \mathrm{Acc}_i \right)^{\frac{1}{J}}, \tag{5} \]
\[ \mathrm{MFM} = \frac{\sum_{i=1}^{J} \text{F-measure}(i)}{J}, \tag{6} \]
where i represents the current class and J the total number of classes. The F-measure(i) for class i is defined as:
\[ \text{F-measure}(i) = \frac{2 \times TP(i)}{2 \times TP(i) + FP(i) + FN(i)}. \tag{7} \]
We point out that each of the metrics reported in this work has been calculated as a macro average over the classes.
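As a sketch of Equations (5) and (6), assuming integer-encoded labels and treating the partial accuracy of a class as its per-class recall, the two global metrics can be computed from the confusion matrix as follows:

```python
import numpy as np
from sklearn.metrics import confusion_matrix, f1_score

def mavg_mfm(y_true, y_pred):
    """MAvG (Equation (5)) and MFM (Equation (6)) for a multiclass problem."""
    cm = confusion_matrix(y_true, y_pred)
    # Partial accuracy of class i: correct predictions for i over all samples of i.
    acc_per_class = np.diag(cm) / cm.sum(axis=1)
    mavg = np.prod(acc_per_class) ** (1.0 / len(acc_per_class))
    # Macro-averaged F1 is exactly the mean of the per-class F-measures.
    mfm = f1_score(y_true, y_pred, average="macro")
    return mavg, mfm
```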

3.3. Seed Classification

The handcrafted features were extracted using the ImageJ tool described in [19], which can extract up to 64 different features. In particular, 32 describe morphological shape, 16 texture, and 16 colour intensity. Among the texture features, Haralick's GLCM, which describes the arrangement of pixel pairs with the same grey level [36], was used to extract information on local similarities. All of them can be computed at the four typical orientations: 0°, 45°, 90°, and 135°. More precisely, we extracted the following second-order statistics from the GLCM: energy, contrast, correlation, and homogeneity.
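As an illustration, the 16 GLCM statistics (4 properties × 4 orientations) can be computed with scikit-image as follows; the pixel distance of 1 and the 256 grey levels are assumptions, not values stated above:

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from skimage.util import img_as_ubyte

def glcm_texture_features(grey_image):
    """Energy, contrast, correlation, and homogeneity of the GLCM
    at 0, 45, 90, and 135 degrees: 4 x 4 = 16 texture features."""
    img = img_as_ubyte(grey_image)
    glcm = graycomatrix(img, distances=[1],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=256, symmetric=True, normed=True)
    props = ("energy", "contrast", "correlation", "homogeneity")
    return np.concatenate([graycoprops(glcm, p).ravel() for p in props])
```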
The handcrafted descriptors have been compared to deep features extracted from several well-known network architectures: Vgg16, Vgg19, AlexNet, GoogLeNet, InceptionV3, ResNet101, ResNet18, ResNet50, and SeedNet. AlexNet [16], Vgg16 [37], and Vgg19 [38] are the shallowest among the tested architectures, being composed of 8, 16, and 19 layers, respectively. In all three cases, we extracted the features from the second-to-last fully connected layer (fc7), for a total of 4096 features. GoogLeNet [39], Inception-v3 [40], ResNet18, ResNet50, and ResNet101 [41] are much deeper, being composed of 100, 48, 18, 50, and 101 layers, respectively. In all of these cases, we extracted the features from the only fully connected layer, for a total of 1000 features. Finally, SeedNet is a novel and lightweight CNN proposed in [6] for seed image classification and retrieval; we extracted the features from its last fully connected layer, for a total of 23 features. CNNs are known to have sufficient representational power and generalisation ability to perform different visual recognition tasks [42]. Nevertheless, we fine-tuned the above CNNs on both data sets before the feature extraction in order to produce a fairer comparison with the standard machine learning classifiers trained with handcrafted features (a minimal extraction sketch is given after the list below). In particular, we adopted the following classification strategy for both data sets:
(i)
We split the data into 60% for training, 20% for validation, and 20% for test set;
(ii)
We fine-tuned the CNNs on the training set, using the validation set to avoid overfitting;
(iii)
We used 10-fold stratified cross-validation on the combined training and validation sets to train the five classification algorithms;
(iv)
We finally evaluated the classification performance on the test set.
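As a minimal example of the extraction step, the following PyTorch/torchvision sketch truncates an ImageNet-pretrained Vgg16 at the fc7 layer; in the actual experiments, the network would first be fine-tuned on the seed data as described above, and the torchvision weight names are an assumption of a recent library version.

```python
import torch
from torchvision import models, transforms

# Truncate Vgg16 after fc7 (the second-to-last fully connected layer),
# so each image yields a 4096-dimensional feature vector.
vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
vgg.classifier = torch.nn.Sequential(*list(vgg.classifier.children())[:-2])
vgg.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def fc7_features(pil_image):
    """Return the fc7 activations of one image as a NumPy vector."""
    x = preprocess(pil_image).unsqueeze(0)   # add the batch dimension
    return vgg(x).squeeze(0).numpy()         # shape: (4096,)
```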
The extracted features have been used as input to different classification algorithms in order to produce different classification models. The models considered are the following: k-Nearest Neighbours (kNN), Decision Tree (DT), Naive Bayes (NB), Ensemble (Ens), and Support Vector Machine (SVM). kNN assigns an object to a class through a voting strategy among its k nearest training examples in the data set. Decision trees create a model that predicts the value of a target variable by learning simple decision rules inferred from the characteristics of the data; the deeper the tree, the more complex the decision rules and the fitter the model. Naive Bayes classifiers are probabilistic models based on the application of Bayes' theorem with strong assumptions of independence between features. The Ensemble classifier is based on an ensemble of classifiers rather than a single one: each classifier in the ensemble predicts the class of each unseen instance, and their predictions are then combined using some form of voting system. Finally, SVM is a non-probabilistic binary linear classifier that assigns objects to a category, mapping the instances to points in space so as to maximise the width of the gap between categories.
To ensure the heterogeneity of the training set, and keeping in mind that we faced a class imbalance problem, we trained each classifier with 10-fold stratified cross-validation, which preserves the proportion of positive and negative examples in all folds so that every fold contains a representative ratio of each class. For each case, we selected the model with the largest area under the ROC curve (AUC). The primary hyperparameters characterising each classifier were tuned in order to obtain a model with optimal performance. Furthermore, to make the results reproducible, we specify the values of the hyperparameters chosen for each model. For the kNN classifier, the distance metric adopted is cityblock, and the number of nearest neighbours is 6, with a squared inverse distance weighting function. For the Decision Tree classifier, the maximum number of splits, which controls the depth of the trees, is 50. For the Naive Bayes classifier, the data are modelled with a kernel distribution and a normal kernel smoother. For the Ensemble classifier, we chose the Adaptive Boosting (AdaBoost) method for multiclass classification, with decision trees as learners. Finally, for the SVM classifier, we used a polynomial kernel function of order 2, with an automatic kernel scale parameter and a box constraint parameter equal to 1 to control the maximum penalty imposed on margin-violating observations and, therefore, to prevent overfitting. We evaluated the performance of each classifier using the same hyperparameters on both data sets.
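For reference, a rough scikit-learn counterpart of the five optimised models is sketched below; the settings above appear to follow MATLAB-style options, so the squared-inverse kNN weighting, the kernel-smoothed Naive Bayes, and the automatic SVM kernel scale are mapped to the closest available equivalents and are assumptions rather than exact reproductions.

```python
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import AdaBoostClassifier
from sklearn.svm import SVC
from sklearn.model_selection import StratifiedKFold, cross_val_score

classifiers = {
    # k = 6, cityblock (Manhattan) distance, squared-inverse weights
    "kNN": KNeighborsClassifier(n_neighbors=6, metric="manhattan",
                                weights=lambda d: 1.0 / (d ** 2 + 1e-12)),
    # at most 50 splits -> at most 51 leaves
    "DT": DecisionTreeClassifier(max_leaf_nodes=51),
    # Gaussian NB as the closest stand-in for the kernel-smoothed variant
    "NB": GaussianNB(),
    # AdaBoost with decision-tree learners
    "Ens": AdaBoostClassifier(n_estimators=100),
    # polynomial kernel of order 2, box constraint C = 1
    "SVM": SVC(kernel="poly", degree=2, C=1.0),
}

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
# Example: scores = cross_val_score(classifiers["SVM"], X, y, cv=cv, scoring="f1_macro")
```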

4. Results and Discussion

We report several results obtained from the experiments. First of all, four graphs show the general behaviour of the two sets of descriptors from two points of view. We report the best and average accuracies for both sets, as shown in Figure 3 and Figure 4; accuracy works as a general indicator of the effectiveness of the features used for the task. In addition, we pinpoint their performance in the multiclass scenario: Figure 5 and Figure 6 show the behaviour of the MFM computed for the different classifiers. Second, in Appendix A, we report Tables A1–A5, which detail the results of the individual descriptor categories obtained with each classifier.
The graphs in Figure 3 and Figure 4 show that each of the employed classifiers can achieve excellent classification accuracy on both data sets. However, from the results on the Canada data set, the Decision Tree classifier seems the least suitable for the task, especially when trained with CNN descriptors, being the only one below 90% on average in that case. Although the other four strategies achieve an accuracy above 90% in practically all cases, the Support Vector Machine seems the most appropriate in every experimental condition. It outperforms the others, averaging 98.38% and 99.49% on the Canada and Cagliari data sets, respectively, with a best of 99.58% and 99.73%, obtained with the features extracted from SeedNet, on the Canada and Cagliari data sets, respectively. In general, looking only at the accuracy, there seem to be no distinct performance differences between the two categories of descriptors that would justify one over the other.
However, the scenario changes considerably when observing the multiclass classification performance, which we evaluated using the F-measure computed for all the classes, as indicated in Equation (6). In particular, Figure 5 shows that the best MFMs are generally lower than the accuracy on both data sets, even though the SVM reaches more than 99% MFM on the Canada data set with ResNet50 and SeedNet descriptors and 96.07% with SeedNet-extracted features on the Cagliari data set. In the last graph, shown in Figure 6, the average performance obtained with the MFM again indicates the SVM trained with CNN descriptors as the most suitable choice for the task. Indeed, the SVM trained with CNN-extracted descriptors obtains 96.11% and 92.86% on the Canada and Cagliari data sets, respectively.
As a general rule, on the one hand, the results of the extensive experiments conducted show that the SVM trained with CNN-extracted features can accomplish the multiclass seed classification task with performance that outperforms every other combination of descriptors and classifiers analysed. Moreover, this solution seems robust, achieving the highest results in each comparative test. Among the deep features, those extracted from SeedNet performed excellently in all categories, proving very well suited to the task. On the other hand, the results produced using the HC descriptors are satisfactory, since they are generally comparable to the CNN ones, even if slightly lower than the best CNN-descriptor case of the SVM. In general, the Ensemble strategy turns out to be the most appropriate when using HC descriptors, reaching 98.76% and 99.42% as the best accuracy and 95.24% and 90.93% as the best MFM on the Canada and Cagliari data sets, respectively, and 96.50% and 98.84% as the average accuracy and 84.88% and 81.59% as the average MFM. Nevertheless, the different number of features in the two categories should also be considered: the handcrafted features are 64 if combined, while the deep features are 4096 in the worst cases of AlexNet, Vgg16, and Vgg19, and 1000 in all the remaining CNNs, with the exception of SeedNet, which has 23 features. Therefore, if we consider the number of discriminative features, the results obtained with the HC features are even more satisfactory and pave the way for possible combinations of heterogeneous features.
Since our investigation is the first attempt to study the problem of classifying fine-grained seed types with a large variety of different seed classes (up to 23), we leveraged known existing classification strategies that have been commonly used in works close to ours [22,24,26,43,44]. Specifically, we employed kNN, Decision Tree, Naive Bayes, the Ensemble classifier with the AdaBoost method, and SVM. SVM is the most suitable for this task, probably due to its excellent capacity for distinguishing classes with closely related elements. This condition is evident in the Canada data set, which contains seeds with heterogeneous shapes, colours, and textures. In contrast, the Cagliari data set is composed of similar classes, making the process more complex (see Astragalus, Medicago, and Melilotus as examples from Figure 2). For the same reasons, Decision Trees showed the most unsatisfactory results in this context because, with high probability, the features produced are insufficient to adequately represent all possible conditions of the internal nodes and realise an appropriate number of splits. Furthermore, as Figure 3 and Figure 4 show, the overall performance of the system in terms of accuracy indicates that the HC and CNN features are comparable and, in some cases, the former are better than the latter. This behaviour arises because, in general, both categories have high representational power for fine-grained seed classification [6], both in this context and on the same data sets. However, the accuracy metric does not capture the detail of the multiclass issue faced in this work. For this reason, we adopted the mean F-measure in order to have a more unambiguous indication of the most suitable features for the task, keeping the multiclass scenario in mind.
Considering that we addressed a multiclass classification problem, we provide Figure 7, Figure 8, Figure 9 and Figure 10, which represent the classwise MFM for both feature categories. In detail, regarding the Cagliari data set, Figure 9 and Figure 10 show that the most difficult classes are Calicotome villosa (with F-measures of 77.78% with the SeedNet features and 65.38% with all the HC features) and Cytisus purgans (with F-measures of 82.93% with the SeedNet features and 68.35% with all the HC features), in both cases far below 90%. They are mostly misclassified as Hedysarum coronarium and Cytisus scoparius, respectively. In both cases, this is certainly due to their similar shapes, and in the latter case, certain seeds also have similar colours. As regards the Canada data set, Figure 8 shows that the Amaranthaceae class (F-measure of 66%) is mainly misclassified as Solanaceae, and vice versa, although to a lesser extent (F-measure of 86.95%). Even in this case, this is probably due to the similar shapes, but it must be remarked that the Amaranthaceae class contains only ten samples. The remaining four classes obtained an F-measure well above 95%. On the other hand, Figure 7 shows the excellent representational power of the ResNet50-extracted features, considering that the F-measure of all the classes is above 95% and, above all, that Amaranthaceae obtained 100%, overcoming the difficulty of the handcrafted features in discriminating it.
A final remark should be devoted to the execution time. We did not indicate the training time of the different CNNs employed because it is out of the scope of the work. However, we note that the training time was never less than 22 min on the Canada data set (AlexNet) and 21 min on the Cagliari data set (GoogLeNet) for the known architectures, while the SeedNet training lasted 4 and 12 min, respectively. Regarding the training time of the traditional classifiers, it was never above 1 min (the worst was Naive Bayes).
To sum up, the classification strategy based on the optimised SVM trained with SeedNet-extracted features is suitable for the seed classification task, even in a multiclass scenario. This work shows that SeedNet is not only a robust solution for classification [6] but also an outstanding feature extractor when coupled with the SVM classifier. The solution obtained here could also be more practical than using SeedNet alone, considering the quicker training time of the SVM once it is provided with the selected features.
While interesting results have been shown, our work suffers from some limitations. First, the best-performing solution relies entirely on one combination of descriptor and classifier, even though other categories of descriptors produced satisfactory results. Considering the properties of handcrafted features, combining them with deep features could improve the results, particularly in distinguishing the different classes of seeds more specifically. Second, every experimental condition assumed a prior preprocessing step, which needs to be tuned according to the data set employed. As a result, the trained classifier could have issues if applied to other data sets with different image conditions. Third, the training time of the best classification system strictly depends on the training time of the CNN adopted for the feature extraction; efforts should be made in this direction in order to obtain a real-time system for the task addressed. Fourth, the dimensionality of the feature vectors differs considerably between handcrafted and deep descriptors: the former have a maximum of 64 features, while the latter can have up to 4096. In this context, SeedNet is an excellent solution with only 23 features. A reasonable combination of heterogeneous descriptors could be investigated for possible improvements, possibly followed by feature reduction/selection (a minimal sketch of this idea follows). Fifth, as shown by the classwise performance, some classes are harder to distinguish because of their similar shapes and colours. In the case of the Cagliari data set, not even the deep features have overcome this issue. For this reason, the combination of heterogeneous descriptors could help recognise the most challenging classes.
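As a sketch of the fusion direction mentioned above (an assumption about future work, not an experiment reported here), handcrafted and deep descriptors could be concatenated and reduced before classification; the feature dimensions and the number of principal components are illustrative.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def fused_model(X_hc, X_deep, y, n_components=50):
    """Concatenate HC (n x 64) and deep (n x 4096) features, then apply
    PCA-based reduction before an SVM."""
    X = np.hstack([X_hc, X_deep])
    model = make_pipeline(StandardScaler(),   # common scale for heterogeneous features
                          PCA(n_components=n_components),
                          SVC(kernel="poly", degree=2))
    return model.fit(X, y)
```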

5. Conclusions

In this work, we mainly focused on the problem of seed image classification. In this context, we specifically addressed an unbalanced multiclass task with two heterogeneous seed data sets, using both handcrafted and deep features. Based on shape, texture, and colour, the handcrafted features are general and independent of the problem addressed, generating a feature vector with a maximum size of 64. Deep features were extracted from several known CNNs capable of performing different visual recognition tasks, generating a feature vector whose size is 1000 or 4096, except for SeedNet, which has 23. The features were then used to train five different classification algorithms: kNN, Decision Tree, Naive Bayes, SVM, and Ensemble. The experimental results show that the different feature categories perform best and comparably using SVM or Ensemble models for the Canada data set, with average accuracy values above 96.5%. The best model for the Cagliari data set is Ensemble for HC features and SVM for deep features. In both cases, the average accuracy values are above 99.4%. The MFM metric values give us essential information about how well the considered features can solve the unbalanced multiclass task. For both types of features, the Ensemble model achieves the best and comparable performance, with best values higher than 95.2% and 91%, respectively, for the Canada and Cagliari data sets. When comparing HC- and CNN-based features, especially when considering the size of the feature vector, HC descriptors outperformed deep descriptors in some cases, as they achieved similar performance but with significant computational savings. In general, the results of the extensive experiments indicate that the SVM trained with CNN-extracted features can perform the task of multiclass seed classification with a performance that outperforms any other combination of descriptors and classifiers analysed. Moreover, this solution seems robust, obtaining the highest results in each comparative test. Among the deep features, those extracted by SeedNet performed excellently in all categories, proving very well suited to the task and establishing SeedNet as a powerful tool for seed classification and feature extraction. It is also important to remark that the classwise performance highlighted that some classes are harder to distinguish because of their similar shapes and colours. For this reason, the combination of HC and deep descriptors could help recognise the most challenging classes.
In conclusion, SeedNet and CNNs in general have demonstrated their ability to offer convenient features for this task, achieving outstanding performance with both data sets. However, if we consider the size of the feature vectors as a computational cost, together with the training time involved in the initial process, the HC features performed satisfactorily, which is particularly desirable for a real-time framework.
As a future direction, we aim to further improve the results obtained by investigating the possibility of combining the HC and CNN features, particularly to overcome the difficulties in recognising some seed classes and a feature selection step to reduce the dimensionality of the features. Finally, we also plan to realise a complete framework that can manage all the steps involved in this task, from image acquisition to seed classification, broadening our approach to distinguishing between seeds’ genera and species.

Author Contributions

Conceptualisation, A.L. and C.D.R.; Methodology, A.L. and C.D.R.; Investigation, A.L. and C.D.R.; software, A.L. and C.D.R.; writing—original draft, A.L. and C.D.R.; writing—review and editing, A.L. and C.D.R. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

All the features extracted, the models, and the material produced in this study are available at the following URL: GitHub repository.

Acknowledgments

We acknowledge Elsevier for all the materials provided to this work, from the following work: Computers and Electronics in Agriculture; Volume 187; Andrea Loddo, Mauro Loddo, Cecilia Di Ruberto; “A novel deep learning based approach for seed image classification and retrieval”; Pages 106269; Copyright Elsevier (2021).

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
CNN: Convolutional Neural Network
HC: Handcrafted
RF: Random Forest
LDA: Linear Discriminant Analysis
GLCM: Gray Level Co-occurrence Matrix
TP: True Positive
TN: True Negative
FN: False Negative
FP: False Positive
Acc: Accuracy
Prec: Precision
Spec: Specificity
Rec: Recall
MAvG: Macro Average Geometric
MFM: Mean F-measure

Appendix A. Numerical Results

This appendix contains the numerical results of the experiments conducted, grouped by classifier. In particular, for every classifier, we report a table with its performance results: kNN (Table A1), Decision Tree (Table A2), Naive Bayes (Table A3), SVM (Table A4), and Ensemble (Table A5). More specifically, every table details the individual HC descriptors of Shape, Texture, and Colour alone. Then, we inserted all the possible combinations: Shape + Colour, Shape + Texture, Texture + Colour, and All, which represents the combination of all three categories. The Average HC row reports the information used in Figure 4 and Figure 6. Regarding the CNN descriptors, every row represents the results obtained using that ConvNet as feature extractor. Finally, we report the Average CNN row, which was also used in Figure 4 and Figure 6.
Table A1. Performance results obtained using HC descriptors and deep features with the kNN classifier on the Canada and Cagliari data sets. kNN was trained with k = 6 and the cityblock distance. DS indicates the data set.

DS | Descriptors | Acc | Prec | Spec | Rec | MAvG | MFM
Canada | Shape | 94.88 | 80.07 | 96.86 | 82.66 | 78.86 | 81.13
Canada | Texture | 96.90 | 81.37 | 98.22 | 92.07 | 72.30 | 82.11
Canada | Colour | 86.20 | 48.94 | 91.58 | 51.00 | 40.91 | 49.18
Canada | Shape + Colour | 97.21 | 83.14 | 98.39 | 91.85 | 77.38 | 84.25
Canada | Shape + Texture | 98.14 | 88.56 | 98.92 | 94.69 | 86.25 | 90.19
Canada | Texture + Colour | 95.35 | 76.02 | 97.31 | 82.12 | 67.85 | 76.70
Canada | All | 97.52 | 85.23 | 98.56 | 89.52 | 81.56 | 86.20
Canada | Average HC | 95.17 | 77.62 | 97.12 | 83.42 | 72.16 | 78.54
Canada | AlexNet | 96.34 | 91.16 | 97.62 | 90.58 | 91.04 | 90.56
Canada | Vgg16 | 94.01 | 82.45 | 96.19 | 85.57 | 82.04 | 83.52
Canada | Vgg19 | 97.28 | 94.53 | 98.20 | 94.30 | 94.38 | 94.28
Canada | GoogLeNet | 94.97 | 89.64 | 96.82 | 90.07 | 88.88 | 88.80
Canada | Inceptionv3 | 94.55 | 88.95 | 96.58 | 86.94 | 87.92 | 87.03
Canada | ResNet101 | 95.81 | 90.82 | 97.28 | 88.91 | 90.64 | 89.65
Canada | ResNet18 | 95.77 | 91.28 | 97.20 | 90.43 | 91.13 | 90.64
Canada | ResNet50 | 96.42 | 92.80 | 97.62 | 92.19 | 92.66 | 92.35
Canada | SeedNet | 97.66 | 94.85 | 98.51 | 93.27 | 94.66 | 93.84
Canada | Average CNN | 95.87 | 90.72 | 97.34 | 90.25 | 90.37 | 90.07
Cagliari | Shape | 95.43 | 39.32 | 97.56 | 45.07 | 5.59 | 41.07
Cagliari | Texture | 96.59 | 44.78 | 98.20 | 54.91 | 34.96 | 47.05
Cagliari | Colour | 98.24 | 71.93 | 99.07 | 76.72 | 66.01 | 72.75
Cagliari | Shape + Colour | 98.24 | 71.15 | 99.08 | 80.82 | 67.42 | 74.36
Cagliari | Shape + Texture | 97.04 | 53.67 | 98.44 | 63.48 | 42.65 | 55.95
Cagliari | Texture + Colour | 98.27 | 71.88 | 99.09 | 80.37 | 67.88 | 74.06
Cagliari | All | 98.47 | 74.70 | 99.20 | 83.79 | 70.90 | 77.53
Cagliari | Average HC | 97.47 | 61.06 | 98.66 | 69.31 | 50.77 | 63.25
Cagliari | AlexNet | 98.90 | 82.72 | 99.43 | 89.99 | 79.73 | 84.78
Cagliari | Vgg16 | 99.03 | 84.23 | 99.49 | 88.91 | 82.99 | 86.02
Cagliari | Vgg19 | 99.02 | 84.05 | 99.49 | 91.59 | 81.40 | 86.35
Cagliari | GoogLeNet | 99.23 | 87.78 | 99.60 | 93.59 | 85.70 | 89.55
Cagliari | Inceptionv3 | 98.78 | 79.96 | 99.36 | 87.74 | 77.00 | 82.46
Cagliari | ResNet101 | 98.80 | 81.17 | 99.37 | 87.78 | 76.49 | 82.88
Cagliari | ResNet18 | 99.33 | 89.26 | 99.65 | 93.54 | 88.36 | 90.86
Cagliari | ResNet50 | 99.15 | 86.25 | 99.55 | 91.01 | 84.56 | 87.86
Cagliari | SeedNet | 99.37 | 89.49 | 99.67 | 94.36 | 87.99 | 91.19
Cagliari | Average CNN | 99.07 | 84.99 | 99.51 | 90.95 | 82.69 | 86.88
Table A2. Performance results attained using HC descriptors and deep features by the Decision Tree classifier on the Canada and Cagliari data sets. The Decision Tree was trained with a maximum of 50 splits. DS indicates the data set.

DS | Descriptors | Acc | Prec | Spec | Rec | MAvG | MFM
Canada | Shape | 93.02 | 74.12 | 95.72 | 73.53 | 72.44 | 73.80
Canada | Texture | 97.21 | 86.78 | 98.33 | 87.32 | 85.48 | 86.99
Canada | Colour | 89.15 | 59.42 | 93.44 | 58.83 | 53.57 | 58.91
Canada | Shape + Colour | 96.59 | 85.84 | 97.94 | 85.38 | 84.97 | 85.51
Canada | Shape + Texture | 97.21 | 89.94 | 98.28 | 89.53 | 89.71 | 89.68
Canada | Texture + Colour | 97.98 | 88.43 | 98.83 | 88.82 | 87.02 | 88.59
Canada | All | 97.36 | 89.75 | 98.39 | 89.63 | 89.50 | 89.68
Canada | Average HC | 95.50 | 82.04 | 97.28 | 81.86 | 80.38 | 81.88
Canada | AlexNet | 87.47 | 71.81 | 91.72 | 67.98 | 71.01 | 69.41
Canada | Vgg16 | 91.01 | 71.69 | 94.23 | 70.58 | 69.30 | 70.74
Canada | Vgg19 | 90.43 | 76.58 | 93.68 | 75.86 | 76.54 | 76.16
Canada | GoogLeNet | 88.99 | 67.14 | 92.87 | 74.19 | 65.13 | 69.21
Canada | Inceptionv3 | 89.01 | 70.05 | 92.84 | 69.56 | 69.66 | 69.48
Canada | ResNet101 | 83.74 | 60.97 | 89.27 | 62.65 | 60.01 | 61.71
Canada | ResNet18 | 88.09 | 64.40 | 92.35 | 65.34 | 62.75 | 64.75
Canada | ResNet50 | 87.03 | 69.08 | 91.47 | 68.17 | 68.62 | 68.16
Canada | SeedNet | 87.59 | 70.16 | 91.84 | 72.18 | 69.64 | 70.80
Canada | Average CNN | 88.15 | 69.10 | 92.25 | 69.61 | 68.07 | 68.94
Cagliari | Shape | 97.45 | 63.29 | 98.65 | 67.89 | 55.55 | 63.47
Cagliari | Texture | 98.00 | 66.33 | 98.95 | 67.54 | 11.25 | 66.31
Cagliari | Colour | 97.25 | 57.82 | 98.55 | 60.71 | 48.11 | 58.34
Cagliari | Shape + Colour | 98.51 | 75.66 | 99.22 | 78.68 | 68.91 | 76.63
Cagliari | Shape + Texture | 98.43 | 73.43 | 99.18 | 76.09 | 65.88 | 73.95
Cagliari | Texture + Colour | 98.51 | 75.42 | 99.22 | 80.13 | 70.19 | 76.44
Cagliari | All | 98.58 | 76.33 | 99.25 | 79.00 | 70.46 | 77.06
Cagliari | Average HC | 98.10 | 69.75 | 99.00 | 72.86 | 55.76 | 70.31
Cagliari | AlexNet | 97.69 | 66.47 | 98.77 | 67.09 | 60.11 | 66.45
Cagliari | Vgg16 | 98.25 | 73.34 | 99.07 | 76.53 | 71.03 | 74.33
Cagliari | Vgg19 | 97.64 | 66.11 | 98.75 | 67.55 | 61.30 | 66.69
Cagliari | GoogLeNet | 97.85 | 68.66 | 98.87 | 68.31 | 64.17 | 68.33
Cagliari | Inceptionv3 | 97.35 | 61.45 | 98.60 | 63.57 | 57.11 | 62.14
Cagliari | ResNet101 | 96.63 | 52.77 | 98.21 | 54.59 | 8.45 | 53.09
Cagliari | ResNet18 | 97.70 | 66.63 | 98.78 | 66.79 | 63.46 | 66.44
Cagliari | ResNet50 | 97.27 | 60.49 | 98.55 | 60.41 | 55.13 | 60.18
Cagliari | SeedNet | 97.63 | 66.13 | 98.74 | 66.72 | 61.92 | 66.18
Cagliari | Average CNN | 97.56 | 64.67 | 98.70 | 65.73 | 55.85 | 64.87
Table A3. Performance results attained using HC descriptors and deep features by the Naive Bayes classifier on the Canada and Cagliari data sets. Naive Bayes was trained with a kernel distribution and a normal kernel type. DS indicates the data set.

DS | Descriptors | Acc | Prec | Spec | Rec | MAvG | MFM
Canada | Shape | 93.02 | 71.80 | 95.79 | 73.97 | 67.29 | 72.24
Canada | Colour | 85.12 | 47.48 | 91.03 | 47.86 | 0.10 | 45.55
Canada | Texture | 96.12 | 83.07 | 97.67 | 86.20 | 81.07 | 84.07
Canada | Shape + Colour | 95.19 | 77.51 | 97.12 | 78.89 | 72.71 | 77.61
Canada | Shape + Texture | 96.59 | 86.24 | 97.91 | 90.64 | 85.23 | 87.45
Canada | Texture + Colour | 95.04 | 78.80 | 97.02 | 81.47 | 76.62 | 79.61
Canada | All | 96.59 | 85.85 | 97.92 | 90.75 | 84.84 | 87.32
Canada | Average HC | 93.95 | 75.82 | 96.35 | 78.54 | 66.84 | 76.26
Canada | AlexNet | 96.74 | 90.62 | 97.94 | 92.72 | 90.48 | 91.35
Canada | Vgg16 | 92.26 | 81.50 | 95.08 | 80.43 | 80.93 | 80.20
Canada | Vgg19 | 96.12 | 88.22 | 97.60 | 89.89 | 88.11 | 88.85
Canada | GoogLeNet | 96.74 | 90.62 | 97.94 | 92.72 | 90.48 | 91.35
Canada | Inceptionv3 | 96.74 | 90.62 | 97.94 | 92.72 | 90.48 | 91.35
Canada | ResNet101 | 95.66 | 86.31 | 97.30 | 90.24 | 86.06 | 87.88
Canada | ResNet18 | 95.66 | 87.18 | 97.30 | 89.57 | 87.09 | 88.15
Canada | ResNet50 | 95.66 | 86.28 | 97.29 | 89.22 | 86.15 | 87.47
Canada | SeedNet | 95.81 | 89.38 | 97.37 | 88.91 | 89.11 | 89.02
Canada | Average CNN | 95.71 | 87.86 | 97.31 | 89.60 | 87.65 | 88.40
Cagliari | Shape | 95.93 | 45.89 | 97.85 | 50.57 | 7.17 | 45.88
Cagliari | Texture | 94.36 | 35.64 | 97.06 | 42.31 | 4.88 | 32.47
Cagliari | Colour | 97.36 | 65.72 | 98.61 | 65.45 | 59.72 | 62.95
Cagliari | Shape + Colour | 96.91 | 55.18 | 98.37 | 63.52 | 50.34 | 55.80
Cagliari | Shape + Texture | 96.10 | 43.95 | 97.94 | 54.91 | 36.74 | 46.04
Cagliari | Texture + Colour | 96.46 | 54.74 | 98.15 | 58.44 | 50.01 | 52.87
Cagliari | All | 96.41 | 47.67 | 98.11 | 59.18 | 41.31 | 49.39
Cagliari | Average HC | 96.22 | 49.83 | 98.01 | 56.34 | 35.74 | 49.34
Cagliari | AlexNet | 98.78 | 83.76 | 99.35 | 82.89 | 82.83 | 82.83
Cagliari | Vgg16 | 98.77 | 83.71 | 99.35 | 82.53 | 82.75 | 82.72
Cagliari | Vgg19 | 98.79 | 83.81 | 99.36 | 82.48 | 82.83 | 82.76
Cagliari | GoogLeNet | 98.78 | 83.94 | 99.35 | 82.68 | 82.91 | 82.85
Cagliari | Inceptionv3 | 98.77 | 83.71 | 99.35 | 82.50 | 82.79 | 82.65
Cagliari | ResNet101 | 98.77 | 83.40 | 99.35 | 82.48 | 82.45 | 82.46
Cagliari | ResNet18 | 98.79 | 83.84 | 99.36 | 82.77 | 82.90 | 82.88
Cagliari | ResNet50 | 98.76 | 83.68 | 99.34 | 82.18 | 82.71 | 82.46
Cagliari | SeedNet | 98.78 | 83.68 | 99.35 | 82.75 | 82.68 | 82.71
Cagliari | Average CNN | 98.78 | 83.73 | 99.35 | 82.58 | 82.76 | 82.70
Table A4. Performance results attained using HC descriptors and deep features by the SVM classifier on the Canada and Cagliari data sets. The SVM was trained with a polynomial kernel of order 2. DS indicates the data set.

DS | Descriptors | Acc | Prec | Spec | Rec | MAvG | MFM
Canada | Shape | 94.26 | 77.32 | 96.49 | 77.94 | 75.64 | 77.56
Canada | Colour | 92.87 | 69.54 | 95.80 | 69.36 | 61.89 | 68.89
Canada | Texture | 97.52 | 88.31 | 98.49 | 87.98 | 87.09 | 88.13
Canada | Shape + Colour | 96.74 | 83.51 | 98.08 | 83.72 | 81.24 | 83.54
Canada | Shape + Texture | 97.98 | 90.85 | 98.77 | 92.51 | 90.26 | 91.50
Canada | Texture + Colour | 98.60 | 91.89 | 99.19 | 91.99 | 91.19 | 91.92
Canada | All | 97.67 | 89.44 | 98.60 | 90.29 | 88.80 | 89.78
Canada | Average HC | 96.52 | 84.41 | 97.92 | 84.83 | 82.30 | 84.47
Canada | AlexNet | 98.94 | 97.77 | 99.32 | 97.54 | 97.74 | 97.57
Canada | Vgg16 | 95.72 | 87.94 | 97.22 | 89.50 | 87.82 | 88.55
Canada | Vgg19 | 98.75 | 97.60 | 99.16 | 97.34 | 97.57 | 97.45
Canada | GoogLeNet | 98.74 | 97.54 | 99.15 | 96.05 | 97.52 | 96.73
Canada | Inceptionv3 | 97.25 | 94.16 | 98.23 | 92.69 | 93.97 | 93.24
Canada | ResNet101 | 97.70 | 95.11 | 98.48 | 95.03 | 95.07 | 95.06
Canada | ResNet18 | 99.15 | 98.40 | 99.42 | 98.01 | 98.39 | 98.18
Canada | ResNet50 | 99.58 | 99.23 | 99.71 | 99.12 | 99.22 | 99.17
Canada | SeedNet | 99.58 | 99.07 | 99.73 | 99.08 | 99.05 | 99.06
Canada | Average CNN | 98.38 | 96.31 | 98.94 | 96.04 | 96.26 | 96.11
Cagliari | Shape | 94.60 | 31.25 | 97.11 | 37.06 | 22.99 | 33.13
Cagliari | Texture | 96.53 | 48.07 | 98.16 | 50.46 | 42.24 | 48.92
Cagliari | Colour | 98.51 | 76.47 | 99.21 | 79.79 | 74.36 | 77.48
Cagliari | Shape + Colour | 98.32 | 72.91 | 99.12 | 79.73 | 68.85 | 75.18
Cagliari | Shape + Texture | 96.68 | 47.97 | 98.25 | 54.33 | 39.48 | 50.06
Cagliari | Texture + Colour | 98.54 | 76.79 | 99.23 | 80.14 | 74.21 | 78.01
Cagliari | All | 98.49 | 73.97 | 99.21 | 81.18 | 70.01 | 76.74
Cagliari | Average HC | 97.38 | 61.06 | 98.61 | 66.10 | 56.02 | 62.79
Cagliari | AlexNet | 99.48 | 91.96 | 99.73 | 94.62 | 91.31 | 92.99
Cagliari | Vgg16 | 99.27 | 88.24 | 99.62 | 90.49 | 87.29 | 89.14
Cagliari | Vgg19 | 99.54 | 93.08 | 99.75 | 94.55 | 92.71 | 93.73
Cagliari | GoogLeNet | 99.55 | 93.08 | 99.76 | 94.16 | 92.52 | 93.52
Cagliari | Inceptionv3 | 99.40 | 90.81 | 99.68 | 92.31 | 90.29 | 91.47
Cagliari | ResNet101 | 99.25 | 88.64 | 99.61 | 91.48 | 86.93 | 89.62
Cagliari | ResNet18 | 99.67 | 95.08 | 99.83 | 96.82 | 94.80 | 95.83
Cagliari | ResNet50 | 99.53 | 92.94 | 99.75 | 93.94 | 92.43 | 93.34
Cagliari | SeedNet | 99.73 | 95.47 | 99.86 | 96.91 | 95.11 | 96.07
Cagliari | Average CNN | 99.49 | 92.14 | 99.73 | 93.92 | 91.49 | 92.86
Table A5. Performance results attained using HC descriptors and deep features by the Ensemble classifier on the Canada and Cagliari data sets. The Ensemble was trained with a bagging method and trees as learners. DS indicates the data set.

DS | Descriptors | Acc | Prec | Spec | Rec | MAvG | MFM
Canada | Shape | 95.04 | 77.84 | 97.02 | 80.53 | 74.78 | 78.87
Canada | Texture | 98.76 | 94.28 | 99.25 | 96.64 | 94.01 | 95.24
Canada | Colour | 89.92 | 59.61 | 93.93 | 62.59 | 53.60 | 60.34
Canada | Shape + Colour | 97.36 | 86.73 | 98.45 | 87.98 | 85.31 | 87.21
Canada | Shape + Texture | 98.14 | 90.49 | 98.90 | 93.00 | 89.77 | 91.50
Canada | Texture + Colour | 98.29 | 90.08 | 99.01 | 91.58 | 88.80 | 90.67
Canada | All | 97.98 | 89.25 | 98.80 | 92.18 | 87.97 | 90.31
Canada | Average HC | 96.50 | 84.04 | 97.91 | 86.36 | 82.03 | 84.88
Canada | AlexNet | 96.28 | 89.31 | 97.70 | 88.85 | 88.87 | 88.59
Canada | Vgg16 | 93.64 | 76.20 | 96.13 | 79.52 | 75.03 | 77.32
Canada | Vgg19 | 97.52 | 92.75 | 98.47 | 94.04 | 92.64 | 93.30
Canada | GoogLeNet | 94.73 | 83.92 | 96.76 | 85.62 | 83.64 | 84.68
Canada | Inceptionv3 | 95.19 | 84.70 | 97.13 | 86.80 | 83.98 | 85.12
Canada | ResNet101 | 93.95 | 81.66 | 96.31 | 85.07 | 81.29 | 82.84
Canada | ResNet18 | 95.81 | 88.52 | 97.39 | 90.18 | 88.30 | 89.10
Canada | ResNet50 | 96.59 | 91.46 | 97.84 | 90.14 | 91.35 | 90.69
Canada | SeedNet | 95.35 | 87.38 | 97.10 | 87.51 | 87.09 | 87.40
Canada | Average CNN | 95.45 | 86.21 | 97.20 | 87.53 | 85.80 | 86.56
Cagliari | Shape | 98.15 | 70.75 | 99.03 | 76.62 | 65.90 | 72.61
Cagliari | Texture | 98.56 | 72.96 | 99.25 | 77.29 | 66.68 | 73.96
Cagliari | Colour | 98.27 | 73.13 | 99.09 | 76.88 | 70.08 | 74.37
Cagliari | Shape + Colour | 99.29 | 88.35 | 99.63 | 91.68 | 87.01 | 89.55
Cagliari | Shape + Texture | 99.02 | 81.53 | 99.49 | 86.78 | 77.19 | 82.95
Cagliari | Texture + Colour | 99.15 | 85.14 | 99.56 | 89.20 | 83.67 | 86.74
Cagliari | All | 99.42 | 89.54 | 99.70 | 93.53 | 88.13 | 90.93
Cagliari | Average HC | 98.84 | 80.20 | 99.39 | 84.57 | 76.95 | 81.59
Cagliari | AlexNet | 99.02 | 82.69 | 99.49 | 91.15 | 79.29 | 85.05
Cagliari | Vgg16 | 99.06 | 84.87 | 99.50 | 87.16 | 83.79 | 85.73
Cagliari | Vgg19 | 99.07 | 83.55 | 99.51 | 91.11 | 80.73 | 85.89
Cagliari | GoogLeNet | 99.16 | 85.39 | 99.56 | 90.67 | 83.23 | 87.34
Cagliari | Inceptionv3 | 98.81 | 79.11 | 99.38 | 88.60 | 75.89 | 82.20
Cagliari | ResNet101 | 98.40 | 72.45 | 99.17 | 83.57 | 63.71 | 74.96
Cagliari | ResNet18 | 99.20 | 86.39 | 99.58 | 91.52 | 83.97 | 88.28
Cagliari | ResNet50 | 98.93 | 80.64 | 99.44 | 88.23 | 77.28 | 83.20
Cagliari | SeedNet | 99.21 | 85.58 | 99.59 | 92.22 | 83.08 | 87.63
Cagliari | Average CNN | 98.98 | 82.30 | 99.47 | 89.36 | 79.00 | 84.48

References

  1. Barman, U.; Choudhury, R.D.; Sahu, D.; Barman, G.G. Comparison of convolution neural networks for smartphone image based real time classification of citrus leaf disease. Comput. Electron. Agric. 2020, 177, 105661. [Google Scholar] [CrossRef]
  2. Di Ruberto, C.; Loddo, A.; Putzu, L. Detection of red and white blood cells from microscopic blood images using a region proposal approach. Comput. Biol. Med. 2020, 116, 103530. [Google Scholar] [CrossRef]
  3. Campanile, G.; Di Ruberto, C.; Loddo, A. An Open Source Plugin for Image Analysis in Biology. In Proceedings of the 2019 IEEE 28th International Conference on Enabling Technologies: Infrastructure for Collaborative Enterprises (WETICE), Napoli, Italy, 12–14 June 2019; pp. 162–167. [Google Scholar]
  4. Ahmad, N.; Asghar, S.; Gillani, S.A. Transfer learning-assisted multi-resolution breast cancer histopathological images classification. Vis. Comput. 2021, 1–20. [Google Scholar] [CrossRef]
  5. Sarigu, M.; Grillo, O.; Bianco, M.L.; Ucchesu, M.; d’Hallewin, G.; Loi, M.C.; Venora, G.; Bacchetta, G. Phenotypic identification of plum varieties (Prunus domestica L.) by endocarps morpho-colorimetric and textural descriptors. Comput. Electron. Agric. 2017, 136, 25–30. [Google Scholar] [CrossRef]
  6. Loddo, A.; Loddo, M.; Di Ruberto, C. A novel deep learning based approach for seed image classification and retrieval. Comput. Electron. Agric. 2021, 187, 106269. [Google Scholar] [CrossRef]
  7. Kamilaris, A.; Prenafeta-Boldú, F.X. Deep learning in agriculture: A survey. Comput. Electron. Agric. 2018, 147, 70–90. [Google Scholar] [CrossRef] [Green Version]
  8. Ucchesu, M.; Orrù, M.; Grillo, O.; Venora, G.; Usai, A.; Serreli, P.F.; Bacchetta, G. Earliest evidence of a primitive cultivar of Vitis vinifera L. during the Bronze Age in Sardinia (Italy). Veg. Hist. Archaeobot. 2015, 24, 587–600. [Google Scholar] [CrossRef]
  9. Ucchesu, M.; Orrù, M.; Grillo, O.; Venora, G.; Paglietti, G.; Ardu, A.; Bacchetta, G. Predictive method for correct identification of archaeological charred grape seeds: Support for advances in knowledge of grape domestication process. PLoS ONE 2016, 11, e0149814. [Google Scholar] [CrossRef] [Green Version]
  10. Ucchesu, M.; Sarigu, M.; Del Vais, C.; Sanna, I.; d’Hallewin, G.; Grillo, O.; Bacchetta, G. First finds of Prunus domestica L. in Italy from the Phoenician and Punic periods (6th–2nd centuries bc). Veg. Hist. Archaeobot. 2017, 26, 539–549. [Google Scholar] [CrossRef] [Green Version]
  11. Lo Bianco, M.; Grillo, O.; Cañadas, E.; Venora, G.; Bacchetta, G. Inter- and intraspecific diversity in Cistus L. (Cistaceae) seeds, analysed with computer vision techniques. Plant Biol. 2017, 19, 183–190. [Google Scholar] [CrossRef] [PubMed]
  12. Lo Bianco, M.; Grillo, O.; Escobar Garcia, P.; Mascia, F.; Venora, G.; Bacchetta, G. Morpho-colorimetric characterisation of Malva alliance taxa by seed image analysis. Plant Biol. 2017, 19, 90–98. [Google Scholar] [CrossRef]
  13. ImageJ. 2021. Available online: https://imagej.net/ImageJ (accessed on 7 July 2021).
  14. Landini, G. Advanced shape analysis with ImageJ. In Proceedings of the 2nd ImageJ User and Developer Conference, Luxembourg, 7–8 November 2008; pp. 6–7. [Google Scholar]
  15. Lind, R. Open source software for image processing and analysis: Picture this with ImageJ. In Open Source Software in Life Science Research; Harland, L., Forster, M., Eds.; Woodhead Publishing Series in Biomedicine; Woodhead Publishing: Sawston, UK, 2012; pp. 131–149. [Google Scholar]
  16. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet Classification with Deep Convolutional Neural Networks. In Advances in Neural Information Processing Systems 25, Proceedings of the 26th Annual Conference on Neural Information Processing Systems 2012, Lake Tahoe, NV, USA, 3–6 December 2012; Bartlett, P.L., Pereira, F.C.N., Burges, C.J.C., Bottou, L., Weinberger, K.Q., Eds.; Neural Information Processing Systems Foundation, Inc.: Lake Tahoe, NV, USA, 2012; pp. 1106–1114. [Google Scholar]
  17. Di Ruberto, C.; Cinque, L. Decomposition of Two-Dimensional Shapes for Efficient Retrieval. Image Vis. Comput. 2009, 27, 1097–1107. [Google Scholar] [CrossRef]
  18. Di Ruberto, C.; Fodde, G.; Putzu, L. Comparison of Statistical Features for Medical Colour Image Classification. In Computer Vision Systems; Nalpantidis, L., Krüger, V., Eklundh, J.O., Gasteratos, A., Eds.; Springer International Publishing: Cham, Switzerland, 2015; pp. 3–13. [Google Scholar]
  19. Loddo, A.; Di Ruberto, C.; Vale, A.; Ucchesu, M.; Soares, J.; Bacchetta, G. An effective and friendly tool for seed image analysis. arXiv 2021, arXiv:2103.17213. [Google Scholar]
  20. Gulzar, Y.; Hamid, Y.; Soomro, A.B.; Alwan, A.A.; Journaux, L. A convolution neural network-based seed classification system. Symmetry 2020, 12, 2018. [Google Scholar] [CrossRef]
  21. Przybylo, J.; Jablonski, M. Using Deep Convolutional Neural Network for oak acorn viability recognition based on color images of their sections. Comput. Electron. Agric. 2019, 156, 490–499. [Google Scholar] [CrossRef]
  22. Di Ruberto, C.; Putzu, L. A fast leaf recognition algorithm based on SVM classifier and high dimensional feature vector. In Proceedings of the 2014 International Conference on Computer Vision Theory and Applications (VISAPP), Lisbon, Portugal, 5–8 January 2014; Volume 1, pp. 601–609. [Google Scholar]
  23. Hall, D.; McCool, C.; Dayoub, F.; Sunderhauf, N.; Upcroft, B. Evaluation of Features for Leaf Classification in Challenging Conditions. In Proceedings of the 2015 IEEE Winter Conference on Applications of Computer Vision, Waikoloa, HI, USA, 5–9 January 2015; pp. 797–804. [Google Scholar]
  24. Putzu, L.; Di Ruberto, C.; Fenu, G. A Mobile Application for Leaf Detection in Complex Background Using Saliency Maps. Advanced Concepts for Intelligent Vision Systems. In Lecture Notes in Computer Science, Proceedings of the 17th International Conference, ACIVS 2016, Lecce, Italy, 24–27 October 2016; Blanc-Talon, J., Distante, C., Philips, W., Popescu, D.C., Scheunders, P., Eds.; Springer: Berlin/Heidelberg, Germany, 2016; Volume 10016, pp. 570–581. [Google Scholar]
  25. Mohanty, S.P.; Hughes, D.P.; Salathé, M. Using Deep Learning for Image-Based Plant Disease Detection. Front. Plant Sci. 2016, 7, 1419. [Google Scholar] [CrossRef] [Green Version]
  26. Zhu, H.; Yang, L.; Fei, J.; Zhao, L.; Han, Z. Recognition of carrot appearance quality based on deep feature and support vector machine. Comput. Electron. Agric. 2021, 186, 106185. [Google Scholar] [CrossRef]
  27. Sladojevic, S.; Arsenovic, M.; Anderla, A.; Culibrk, D.; Stefanovic, D. Deep Neural Networks Based Recognition of Plant Diseases by Leaf Image Classification. Comput. Intell. Neurosci. 2016, 2016. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  28. Amara, J.; Bouaziz, B.; Algergawy, A. A deep learning-based approach for banana leaf diseases classification. In Lecture Notes in Informatics (LNI); Proceedings—Series of the Gesellschaft fur Informatik (GI); Gesellschaft fur Informatik (GI): Bonn, Germany, 2017. [Google Scholar]
  29. Gajjar, R.; Gajjar, N.; Thakor, V.J.; Patel, N.P.; Ruparelia, S. Real-time detection and identification of plant leaf diseases using convolutional neural networks on an embedded platform. Vis. Comput. 2021, 1–16. [Google Scholar] [CrossRef]
  30. Howard, A.G.; Zhu, M.; Chen, B.; Kalenichenko, D.; Wang, W.; Weyand, T.; Andreetto, M.; Adam, H. MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications. arXiv 2017, arXiv:1704.04861. [Google Scholar]
  31. Junos, M.H.; Khairuddin, A.S.M.; Thannirmalai, S.; Dahari, M. Automatic detection of oil palm fruits from UAV images using an improved YOLO model. Vis. Comput. 2021, 1–15. [Google Scholar] [CrossRef]
  32. Chatfield, K.; Simonyan, K.; Vedaldi, A.; Zisserman, A. Return of the Devil in the Details: Delving Deep into Convolutional Nets. In Proceedings of the British Machine Vision Conference, BMVC 2014, Nottingham, UK, 1–5 September 2014; Valstar, M.F., French, A.P., Pridmore, T.P., Eds.; BMVA Press: London, UK, 2014. [Google Scholar]
  33. Canada Dataset. Available online: https://inspection.canada.ca/active/netapp/idseed/idseed_gallerye.aspx?itemsNum=-1&famkey=&family=&keyword=&letter=A (accessed on 13 August 2021).
  34. Vale, A.M.P.G.; Ucchesu, M.; Ruberto, C.D.; Loddo, A.; Soares, J.M.; Bacchetta, G. A new automatic approach to seed image analysis: From acquisition to segmentation. arXiv 2020, arXiv:2012.06414. [Google Scholar]
  35. Alejo, R.; Antonio, J.A.; Valdovinos, R.M.; Pacheco-Sánchez, J.H. Assessments Metrics for Multi-class Imbalance Learning: A Preliminary Study. In Pattern Recognition; Carrasco-Ochoa, J.A., Martínez-Trinidad, J.F., Rodríguez, J.S., di Baja, G.S., Eds.; Springer: Berlin/Heidelberg, Germany, 2013; pp. 335–343. [Google Scholar]
  36. Haralick, R.M.; Shanmugam, K.; Dinstein, I. Textural Features for Image Classification. IEEE Trans. Syst. Man Cybern. 1973, SMC-3, 610–621. [Google Scholar] [CrossRef] [Green Version]
  37. Russakovsky, O.; Deng, J.; Su, H.; Krause, J.; Satheesh, S.; Ma, S.; Huang, Z.; Karpathy, A.; Khosla, A.; Bernstein, M.; et al. Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 2015, 115, 211–252. [Google Scholar] [CrossRef] [Green Version]
  38. Simonyan, K.; Zisserman, A. Very Deep Convolutional Networks for Large-Scale Image Recognition. In Proceedings of the 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, 7–9 May 2015. [Google Scholar]
  39. Szegedy, C.; Liu, W.; Jia, Y.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A. Going deeper with convolutions. In Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015; pp. 1–9. [Google Scholar]
  40. Szegedy, C.; Vanhoucke, V.; Ioffe, S.; Shlens, J.; Wojna, Z. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 26 June–1 July 2016; pp. 2818–2826. [Google Scholar]
  41. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
  42. Putzu, L.; Piras, L.; Giacinto, G. Convolutional neural networks for relevance feedback in content based image retrieval. Mult. Tools Appl. 2020, 79, 26995–27021. [Google Scholar] [CrossRef]
  43. Vlasov, A.V.; Fadeev, A.S. A machine learning approach for grain crop’s seed classification in purifying separation. J. Phys. Conf. Ser. 2017, 803, 012177. [Google Scholar] [CrossRef] [Green Version]
  44. Agrawal, D.; Dahiya, P. Comparisons of classification algorithms on seeds dataset using machine learning algorithm. Compusoft 2018, 7, 2760–2765. [Google Scholar]
Figure 1. Seed samples for each family present in the Canada data set.
Figure 2. Seed samples for each species present in the Cagliari data set.
Figure 3. Accuracy trends with the different classifiers adopted. On the left, the best accuracies obtained on the Canada data set; on the right, on the Cagliari data set.
Figure 4. Average accuracy trends with the different classifiers adopted. On the left, the average accuracies obtained on the Canada data set; on the right, on the Cagliari data set.
Figure 5. Best MFM trends with the different classifiers adopted. On the left, the best MFM obtained on the Canada data set; on the right, on the Cagliari data set.
Figure 6. Average MFM trends with the different classifiers adopted. On the left, the average MFM obtained on the Canada data set; on the right, on the Cagliari data set.
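The MFM reported in Figures 5 and 6 (and classwise in Figures 7–10 below) is the mean F-measure, i.e., the F1 score averaged over classes with equal weight per class, which makes it informative under the imbalanced class distributions of Tables 3 and 4 [35]. A minimal sketch of its computation follows, using scikit-learn as an illustrative toolchain (the labels are hypothetical; this is not necessarily the implementation used for the experiments):

```python
# Minimal sketch: mean F-measure (MFM) = macro-averaged F1 over classes.
# The label arrays below are hypothetical stand-ins for true/predicted labels.
from sklearn.metrics import f1_score

y_true = ["Ama", "Api", "Api", "Ast", "Ama", "Ast", "Sol"]
y_pred = ["Ama", "Api", "Ast", "Ast", "Api", "Ast", "Sol"]

# average="macro" computes F1 per class and averages with equal weight,
# so minority classes contribute as much as majority ones.
mfm = f1_score(y_true, y_pred, average="macro")
print(f"MFM = {mfm:.4f}")
```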
Figure 7. Classwise MFM on the best model for the Canada data set, trained with deep features: SVM with ResNet50 features.
Figure 8. Classwise MFM on the best model for the Canada data set, trained with HC features: Ensemble with Texture features.
Figure 9. Classwise MFM on the best model for the Cagliari data set, trained with deep features: SVM with SeedNet features.
Figure 10. Classwise MFM on the best model for the Cagliari data set, trained with HC features: Ensemble trained with Shape + Texture + Colour (All) features.
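The best models of Figures 7 and 9 combine deep features, i.e., activations extracted from a late layer of a pretrained CNN, with a conventional ML classifier. A minimal sketch of this pipeline with a torchvision ResNet50 backbone and a scikit-learn SVM is given below; the input size, feature layer, and SVM settings are illustrative assumptions, not the exact configuration of the experiments:

```python
# Sketch: deep features from a pretrained ResNet50 feeding a linear SVM.
# Assumptions: 224x224 inputs and 2048-D features from the layer preceding
# the classification head; not the paper's exact configuration.
import torch
from torchvision import models, transforms
from PIL import Image
from sklearn.svm import SVC

backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
backbone.fc = torch.nn.Identity()  # drop the ImageNet classification head
backbone.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def deep_features(image_paths):
    """Return one 2048-D feature vector per image path."""
    batch = torch.stack([preprocess(Image.open(p).convert("RGB"))
                         for p in image_paths])
    return backbone(batch).numpy()

# Given a labelled seed image set (paths and class labels), training becomes:
# X = deep_features(train_paths)
# clf = SVC(kernel="linear").fit(X, train_labels)
# predictions = clf.predict(deep_features(test_paths))
```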
Table 1. Contributions of the current and our previous work in the context of seed image analysis.
Work / Contributions
Loddo et al. [6]:
- SeedNet CNN proposal
- classification based on HC + ML methods vs. CNN
- retrieval based on HC + ML methods vs. CNN
Loddo et al. [19]:
- seed image acquisition and preprocessing schemes
- open source ImageJ plugin for seed feature extraction
- open source ImageJ plugin for seed classification
- seed classification with HC + ML methods
- seed classification with CNNs only
This work:
- seed classification with HC + ML methods
- seed classification with deep features + ML algorithms
- study of parameter optimisation in ML methods
- comparison of the classification schemes for multiclass tasks
Table 3. Canada data set description: family name and number of samples.
Family: Samples
Amaranthaceae (Ama): 10
Apiaceae (Api): 56
Asteraceae (Ast): 49
Brassicaceae (Bra): 34
Plantaginaceae (Pla): 43
Solanaceae (Sol): 23
Table 4. Cagliari data set description: species name and number of samples.
Species: Samples
Amorpha fruticosa (Am.F): 51
Anagyris foetida (An.F): 29
Anthyllis barba jovis (An.BJ): 51
Anthyllis cytisoides (An.C): 29
Astragalus glycyphyllos (As.G): 50
Calicotome villosa (Ca.V): 32
Caragana arborescens (Ca.A): 36
Ceratonia siliqua (Ce.S): 45
Colutea arborescens (Co.A): 42
Cytisus purgans (Cy.P): 44
Cytisus scoparius (Cy.S): 65
Dorycnium pentaphyllum (Do.P): 42
Dorycnium rectum (Do.R): 236
Hedysarum coronarium (He.C): 208
Lathyrus aphaca (La.A): 52
Lathyrus ochrus (La.O): 46
Medicago sativa (Me.S): 116
Melilotus officinalis (Me.O): 176
Pisum sativum (Pi.S): 121
Senna alexandrina (Se.A): 194
Spartium junceum (Sp.J): 109
Trifolium angustifolium (Tr.A): 183
Vicia faba (Vi.F): 31
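The class distributions in Tables 3 and 4 are markedly imbalanced (10 to 56 samples per family in the Canada data set, 29 to 236 samples per species in the Cagliari one), which is why class-balanced measures such as the MFM are reported. When partitioning such data for training and evaluation, a stratified split preserves the per-class proportions in each partition; a minimal sketch with scikit-learn follows, in which the 80/20 ratio and the synthetic stand-in features are illustrative assumptions rather than the protocol of the paper:

```python
# Sketch: stratified train/test split preserving per-class proportions,
# appropriate for imbalanced data such as Tables 3 and 4. The 80/20 ratio
# and the random stand-in features are illustrative assumptions.
import numpy as np
from sklearn.model_selection import train_test_split

# Class labels reproducing the Canada data set counts from Table 3.
y = np.array(["Ama"] * 10 + ["Api"] * 56 + ["Ast"] * 49 +
             ["Bra"] * 34 + ["Pla"] * 43 + ["Sol"] * 23)
X = np.random.default_rng(0).normal(size=(len(y), 8))  # stand-in features

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)
# Each class keeps (close to) its original share in both partitions.
```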