Article

Weed Identification in Maize, Sunflower, and Potatoes with the Aid of Convolutional Neural Networks

by Gerassimos G. Peteinatos 1,*, Philipp Reichel 1, Jeremy Karouta 2, Dionisio Andújar 2 and Roland Gerhards 1
1 Institute of Phytomedicine, Department of Weed Science, University of Hohenheim, Otto-Sander-Straße 5, 70599 Stuttgart, Germany
2 Centre for Automation and Robotics, CSIC-UPM, Arganda del Rey, 28500 Madrid, Spain
* Author to whom correspondence should be addressed.
Remote Sens. 2020, 12(24), 4185; https://doi.org/10.3390/rs12244185
Submission received: 23 October 2020 / Revised: 30 November 2020 / Accepted: 15 December 2020 / Published: 21 December 2020
(This article belongs to the Special Issue Precision Weed Mapping and Management Based on Remote Sensing)

Abstract: Increasing public concern about food security and the stricter rules applied worldwide concerning herbicide use in the agri-food chain have reduced consumer acceptance of chemical plant protection. Site-Specific Weed Management can be achieved by applying a treatment only on the weed patches. The identification of crop plants and weeds is a necessary component of various aspects of precision farming, such as spot herbicide spraying, robotic weeding, and precision mechanical weed control. Many different methods have been proposed in recent years, yet further improvements are needed regarding the speed, robustness, and accuracy of the algorithms and recognition systems. Digital cameras and Artificial Neural Networks (ANNs) have developed rapidly in the past few years, providing new methods and tools also for agriculture and weed management. In the current work, images of Zea mays, Helianthus annuus, Solanum tuberosum, Alopecurus myosuroides, Amaranthus retroflexus, Avena fatua, Chenopodium album, Lamium purpureum, Matricaria chamomila, Setaria spp., Solanum nigrum, and Stellaria media gathered with an RGB camera were used to train Convolutional Neural Networks (CNNs). Three different CNNs, namely VGG16, ResNet–50, and Xception, were adapted and trained on a pool of 93,000 images. The training images contained plant material of only one species per image. A top-1 accuracy between 77% and 98% was obtained in plant detection and weed species discrimination on the testing images.


1. Introduction

Public concern about food security has increased in recent decades. Simultaneously, stricter rules have been applied worldwide regarding pesticide usage in the agri-food chain. These developments have reduced consumer acceptance of chemical plant protection. Site-Specific Weed Management can be achieved by applying a treatment only on the weed patches. The current application practice is to spread the herbicide over the whole field [1], which results in a portion of the herbicide being applied to non-target plants, as weeds have a variable and heterogeneous distribution over the field [2,3]. Thus, the current state of application technology usually has a low degree of treatment effectiveness while simultaneously leading to an unnecessary negative input into the environment [4]. Yet, reducing the spray rate is not advisable for agronomic reasons, since it can promote the emergence of resistant weed species and can also lead to a decrease in yield [5].
As Dyrmann et al. [6] predicted, the use of herbicides will become an increasingly challenging approach under growing political and social pressure. Therefore, the reduction of herbicides, insecticides, and fungicides is a major motivating force behind current agricultural expert systems [7,8] that aim to comply with EU Directive 2009/128/EC [9]. On that front, precision farming or smart farming has improved the potential for automation in agricultural applications [10,11,12], and considerable progress has been made in automating agricultural operations [11]. Due to this effort, robots applying pesticides at the level of individual plants might soon become standard application technology [13]. Crop and weed recognition foremost, and weed differentiation secondarily, are key elements for automating weed management technologies and achieving successful weed control [14]. They are a prerequisite for further expansion towards a more sustainable agriculture. Targeted treatments, chemical or mechanical, can achieve better results if the treatment is applied only to the weeds, and if the treatment can be differentiated (e.g., a different herbicide) based on the specific weed class [15]. A detailed plant mapping of the field can give valuable insight into the specific weed population and the coverage of each species, which can lead to more sophisticated weed management treatments and better informed weed management strategies.
Convolutional neural networks (CNNs)—first implemented by LeCun et al. [16]—can be used as a tool providing high accuracy in image classification and object detection, and can even be a valuable candidate for fine-grained classification [17]. Although the use of CNNs is relatively new, recent publications have reached high classification accuracies: 99.50% for classifying segments as crop, soil, grass, and broadleaf weeds [15], 96.10% for a blob-wise crop/weed classification [18], 94.38% between twelve different plant species [19], and 95.70% for eight Australian weed species [20]. CNNs can provide insights into image-related datasets that we have not yet understood, achieving identification accuracies that sometimes surpass human-level performance [21]. Given sufficient data, the deep learning approach generates and extrapolates new features without having to be explicitly told which features should be used and how they should be extracted [22,23,24]. One of the most important characteristics of using CNNs in image processing is that feature engineering becomes obsolete [25], as the CNN can obtain the essential features by itself [26] and can build and utilize more abstract concepts [27]. Therefore, with the utilization of deep learning procedures and CNNs in image processing, there is a reduced need to manually engineer the best features [25,28]. Furthermore, these self-learned features make the deep learning approach less sensitive to natural variations such as changes in illumination, shadows, skewed leaves, and occluded plants, provided that the methods have been trained partially or fully on a high variability of these different input variations [6]. Hence, state-of-the-art CNNs with a classification accuracy of over 95% on specific tasks are now quite close to human performance [29]. In agriculture, and specifically in weed identification, the ability of CNNs to learn and obtain features, in combination with their lower sensitivity to natural variations, makes these methods quite promising and able to achieve better classification results than other solutions.
According to Rawat and Wang [21], the deep learning renaissance was fueled by advanced hardware and improved algorithms. Although deep learning remains a relatively new topic in weed and plant classification—most publications appeared after 2016 [25]—it has swiftly emerged as a promising method in this field. The main reason for this late adoption is that considerable resources are required to create a large multi-class dataset for plant and weed classification [25,30,31], especially if the dataset is acquired under field conditions.
In 2016, Dyrmann et al. [6] proposed a state-of-the-art CNN-based approach to plant species classification. They built their own CNN and trained it using mini-batches of 200 images. Their network was able to classify 22 weed and crop species at BBCH 12–16 [32] with an accuracy of 86.2%. Their dataset underlines the importance of having an adequate number of images for each species, because if the amount of data is insufficient the recognition rate declines significantly. Elnemr [19] proposed a new, simple, self-built CNN architecture to classify twelve classes (three crops and nine weeds) during their early growth stages. The results indicate that the more classes the training dataset comprises, the more difficult it is to reach a good classification result. Nevertheless, the twelve-class CNN achieved an average test accuracy of 94.38%. Besides training and building a CNN from scratch, Ge et al. [33] proposed that when the dataset is limited it is better to take a network pretrained on a large dataset such as ImageNet and apply transfer learning [34] to reach a better performance and reduce overfitting. dos Santos Ferreira et al. [15] used a replication of AlexNet, pretrained on the ImageNet dataset, for their neural network. Four classes were distinguished—soil, Glycine max, grass, and broadleaf weeds—from drone data, and the network reached an average accuracy of 99.5%. Munz and Reiser [35] used several pretrained networks not only to separate pea from oat but also to estimate their coverage. Olsen et al. [20] trained multiple CNNs: Inception-v3 [36], based on GoogLeNet [37], and ResNet–50 [38], both pretrained with the ImageNet dataset. In addition to the comparison of different CNNs, they introduced the first large, multi-class weed species image dataset (DeepWeeds), comprising eight invasive weed species and collected entirely under field conditions. The networks were trained for 100 epochs and achieved average classification results of 95.1% (Inception-v3) and 95.7% (ResNet–50), respectively.
Each author either constructed their own network from scratch or used existing architectures modified for the respective dataset. In all cases, the authors demonstrated the potential of neural networks in the agricultural domain. Nevertheless, choosing a suitable network requires careful planning, as it must fit the task at hand [39]. Furthermore, the robustness of the trained network, along with the repeatability of training similar networks, has not been examined. In the context of weed and crop classification, supervised training with a prelabeled dataset is widely used to cope with the high variability in plant morphology caused by development stage and environmental influence, which can otherwise lead to poor classification accuracy [39]. Yet, the difficulty of acquiring multiple labeled instances of each plant at different development stages still poses an important academic and practical challenge [20]. The acquired datasets typically have a small number of labels and a huge variation between the classes, which enforces the use of an unbalanced dataset. In the current paper, three different networks, namely VGG16 [40], ResNet–50 [38], and Xception [41], were examined for their capability to identify twelve different plant species. Our aim was to demonstrate how fast those networks can be trained and how reliable this training is over multiple repetitions. Through the proposed methodology, a significant number of labeled images was acquired, which enabled the utilization of a balanced subset of the dataset for training and validation. Ten repetitions of each network were performed to examine, in a standardized and systematic way, whether the CNN training always converges to similar results. We therefore investigated whether this balanced dataset can achieve a better result in weed identification and plant classification than previously demonstrated, for an agronomically applicable number of classes.

2. Materials and Methods

2.1. Experimental Field

Images were gathered on a predefined experimental field at the Heidfeldhof research station of the University of Hohenheim, in southwest Germany (48°42′59.0″ N, 9°11′35.4″ E), in 2019. Twelve plots of 12.5 × 1 m were used, each seeded with the seeds of the respective plant species. Three crop species were used, namely maize (Zea mays L.), potato (Solanum tuberosum L.), and sunflower (Helianthus annuus L.), along with nine weed species, namely Alopecurus myosuroides Huds., Amaranthus retroflexus L., Avena fatua L., Chenopodium album L., Lamium purpureum L., Matricaria chamomila L., Setaria spp., Solanum nigrum L., and Stellaria media Vill. (Table 1). In the current work, "plant species" refers to both crop and weed data; otherwise, it is explicitly stated as crop or weed plants. Images were gathered every second day from the date of emergence for 45 days, until the plants had progressed to the 8th leaf stage or the beginning of tillering. Prior to seeding, the soil was cultivated in spring with a Rabe cultivator with a working width of 3 m, and the field was sterilized with a steam treatment to reduce the emergence of unwanted weeds and volunteer plants of previous crops. Furthermore, the experimental plots were cleaned by hand twice a week of weeds foreign to the intended species.

2.2. Image Acquisition

The pictures were captured at noon with a Sony Alpha 7R Mark 4 (ILCE-7RM4, Sony Corporation, Tokyo, Japan), a 61-megapixel RGB camera. The camera has a 35.7 × 23.8 mm back-illuminated full-frame CMOS sensor, and JPEG images were taken at a resolution of 9504 × 6336 pixels. A shutter speed of 1/2500 s was used, the ISO was calibrated automatically to achieve a good image quality under the changing lighting conditions during the measurements, and the aperture was adjusted on each recording day to an f-number between 7 and 11. The Zeiss Batis 25 mm—a fixed focal length lens—was used to achieve a better optical quality compared to a zoom lens. The camera was mounted on the “Sensicle” [42], a multisensor platform for precision farming experiments, at a height of 1.2 m. The driving speed was 4 km/h, and one picture was captured every second.

2.3. Image Preprocessing

From each plot, images were saved with information relevant to the plot and acquisition date. For each image, a binary image was created (Figure 1), using the Excess Green–Red Index as a thresholding mechanism to separate plant material from the soil [43,44]. Each connected pixel formation resulting from this thresholding procedure constituted a potential region of interest to be fed into the CNNs; it was separated and prelabeled, and the relevant bounding box was created based on the following rules:
  • Pixel formations of fewer than 400 pixels were discarded.
  • Bounding boxes smaller than 64 × 64 pixels were expanded symmetrically to this minimum size of 64 × 64 pixels.
  • Bounding boxes larger than 64 × 64 pixels were expanded by only 5 pixels in all directions; therefore, there was no limitation on the maximum box size.
  • If bounding boxes overlapped, a new bounding box was created, merging all the overlapping boxes.
  • Both the original and the merged bounding boxes were kept for labeling.
The above procedure ensures that all potential inputs provided for classification by our preprocessing method are available for labeling, while simultaneously reducing soil clusters and other noise interferences as much as possible. Labels with the respective European and Mediterranean Plant Protection Organization (EPPO) code of each plant were assigned automatically to each bounding box, based on the image information. These labels were examined by a human expert, who discarded possible wrong classifications or unwanted weeds (Figure 1).
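A minimal sketch of this segmentation step is given below, assuming OpenCV and NumPy (libraries not named by the authors); it applies the Excess Green minus Excess Red threshold and the bounding-box rules listed above, while the merging of overlapping boxes is omitted for brevity.

```python
import cv2
import numpy as np

def plant_boxes(bgr, min_area=400, min_size=64, margin=5):
    """Return expanded bounding boxes around plant pixel formations."""
    b, g, r = [c.astype(np.float32) / 255.0 for c in cv2.split(bgr)]
    exg = 2 * g - r - b                          # Excess Green
    exr = 1.4 * r - g                            # Excess Red
    mask = ((exg - exr) > 0).astype(np.uint8)    # plant vs. soil threshold
    n, _, stats, _ = cv2.connectedComponentsWithStats(mask)
    boxes = []
    for i in range(1, n):                        # label 0 is the soil background
        x, y, w, h, area = stats[i]
        if area < min_area:                      # rule 1: discard small formations
            continue
        # rules 2-3: expand to at least 64 x 64, otherwise pad by 5 px per side
        pad_w = max((min_size - w + 1) // 2, margin)
        pad_h = max((min_size - h + 1) // 2, margin)
        boxes.append((x - pad_w, y - pad_h, w + 2 * pad_w, h + 2 * pad_h))
    # rule 4 (merging overlapping boxes) is intentionally left out here
    return boxes
```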
Figure 1. Schematic presentation of the labeling workflow using the example of Helianthus annuus.
The final dataset was cross-examined a second time by a human expert for possible errors. It comprised 93,130 verified single plants (Table 1). Some example images used for training the networks are shown in Figure 2.

2.4. Neural Networks

Artificial Neural Networks, and specifically Convolutional Neural Networks (CNNs), are a powerful technique that can achieve successful plant and weed identification. The basic structure of a neural network comprises an input layer, multiple hidden layers, and an output layer. For the current study, CNNs that have demonstrated good and robust results across different disciplines were selected. Specifically, we used VGG16, ResNet–50, and Xception as our base networks, modifying the top layer architecture of each network.

2.4.1. VGG16

Simonyan and Zisserman [40] proposed VGG16 as a further development of AlexNet. VGG16 was one of the best performing networks at the 2014 ImageNet Large Scale Visual Recognition Challenge (ILSVRC), providing a 71.3% top-1 accuracy and a 90.1% top-5 accuracy. ImageNet is a labeled dataset of over 14 million images classified into 1000 different classes. VGG16 has been used for its robustness, since it can provide high performance and respective accuracies even when the image datasets are small [45]. The input of VGG16 is a three-channel RGB image of a fixed size of 224 × 224 pixels. The VGG16 architecture contains a total of 16 layers, comprising 13 convolutional (3 × 3) layers and three fully-connected layers (Table 2). Rectified linear units (ReLUs)—first presented by Krizhevsky et al. [46]—act as the activation function for each convolutional layer and for the first two fully-connected layers. VGG16 is one of the best performing networks of recent years, while simultaneously being simpler and less computationally intensive than other networks.
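As an illustration only, and assuming the TensorFlow/Keras setup named in Section 2.4.5, the pretrained VGG16 convolutional base used for transfer learning can be loaded and frozen as follows; this is a sketch, not the authors' exact code.

```python
from tensorflow.keras.applications import VGG16

base = VGG16(
    weights="imagenet",          # ImageNet-pretrained weights
    include_top=False,           # drop the original 1000-class classifier
    input_shape=(224, 224, 3),   # fixed three-channel 224 x 224 RGB input
)
base.trainable = False           # the 13 convolutional layers stay frozen
base.summary()                   # prints the 3 x 3 convolutional blocks
```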

2.4.2. ResNet–50

ResNet–50 (Residual Network 50) was first presented by He et al. [38]. Their architecture continued the trend of increasing layer depth. ResNet–50 has a similar architecture to VGG16, centered around 3 × 3 convolutional layers with a ReLU activation function, but 1 × 1 convolutional layers are placed before and after each 3 × 3 convolutional layer. Furthermore, only one pooling layer is used, batch normalization is implemented, and the final network structure comprises three times more layers than VGG16. It is comparable to the VGG16 network, apart from the fact that ResNet–50 has an additional identity mapping capability [45]. ResNet–50 can be trained much faster than VGG16, since it reduces the vanishing gradient problem by creating an alternative shortcut for the gradient to pass through. In practice this means that, even though the network is much deeper than VGG16, it can bypass a convolutional layer if it is not necessary. The proposed final network comprises 50 layers (Table 2) and reached first place at the ILSVRC 2015, outperforming the previous benchmark set by VGG16. The input of ResNet–50 is also a three-channel RGB image of a fixed size of 224 × 224 pixels. The residual mechanism that ResNet–50 provides makes this algorithm one of the best for training on new datasets.
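To make the shortcut idea concrete, the sketch below shows an illustrative ResNet-style bottleneck block in Keras (1 × 1, 3 × 3, 1 × 1 convolutions with an identity path added back); it is a simplified illustration of the principle, not the exact ResNet–50 implementation.

```python
from tensorflow.keras import layers

def bottleneck_block(x, filters):
    """Illustrative bottleneck: 1x1 -> 3x3 -> 1x1 plus an identity shortcut."""
    shortcut = x                                       # gradient can bypass the convolutions
    y = layers.Conv2D(filters, 1, activation="relu")(x)
    y = layers.BatchNormalization()(y)
    y = layers.Conv2D(filters, 3, padding="same", activation="relu")(y)
    y = layers.BatchNormalization()(y)
    y = layers.Conv2D(4 * filters, 1)(y)
    y = layers.BatchNormalization()(y)
    if shortcut.shape[-1] != 4 * filters:              # project shortcut if channels differ
        shortcut = layers.Conv2D(4 * filters, 1)(shortcut)
    return layers.Activation("relu")(layers.Add()([y, shortcut]))
```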

2.4.3. Xception

Xception [41] stands for Extreme version of Inception and is an adaptation of Inception, which changed how CNNs are designed. ResNet–50 tried to solve the image classification problem by increasing the depth of the network. The Inception architectures follow a different approach, increasing the width of the network. A generic Inception module calculates multiple different layers over the same input map in parallel and merges their results into the output. Three different convolutional layers and one max-pooling layer are activated in parallel, generating a wider CNN compared with the previous networks. Each output is then combined in a single concatenation layer. Therefore, for each module, Inception performs 5 × 5, 3 × 3, and 1 × 1 convolutional transformations and an additional max-pooling operation. The concatenation layer of the model then decides whether and how the information of each branch is used. In Xception, the Inception modules have been replaced with depthwise separable convolutions. These calculate the spatial correlations on each input channel independently of the others and then perform a 1 × 1 pointwise convolution to capture the cross-channel correlations. Xception also has a deep architecture, even deeper than ResNet–50, with a depth of 71 layers (Table 2). The input of Xception also differs from the two previous networks, as it is a three-channel RGB image of a fixed size of 299 × 299 pixels, compared to the 224 × 224 pixels used before. The width-based approach that Xception uses increases its degrees of freedom and can therefore better exploit the identification task at hand.
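The parameter saving of depthwise separable convolutions can be verified with a short Keras sketch (illustrative shapes only, not part of the authors' pipeline): a standard 3 × 3 convolution learns spatial and cross-channel filters jointly, whereas the separable variant factors them into a depthwise step and a 1 × 1 pointwise step.

```python
from tensorflow.keras import Input, Model, layers

inp = Input(shape=(299, 299, 32))
standard = Model(inp, layers.Conv2D(64, 3, padding="same")(inp))
separable = Model(inp, layers.SeparableConv2D(64, 3, padding="same")(inp))

print(standard.count_params())   # 3*3*32*64 + 64        = 18,496 weights
print(separable.count_params())  # 3*3*32 + 1*1*32*64 + 64 = 2,400 weights
```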

2.4.4. Dataset Normalization

All 93,130 images were separated into three distinct datasets for the training, validation, and testing of the networks. To achieve a uniform comparison between network repetitions and network architectures, the separation was done a priori, before any image enhancement or augmentation. For each separate class of labels, 70% of the images were used for training the networks, 15% were used for the validation performed during each training, and the remaining 15% constituted our testing subset, which was used for the final measurements and the demonstration of the achieved results.
In order to perform the training and validation of the networks on a normalized dataset, subsampling was performed. A balanced dataset avoids population bias, since the dataset contains some majority classes with a high number of labeled images and some minority classes with fewer images. The large number of images in our dataset enabled this subsampling, since even the minority classes had more than 1600 training images per class. Specifically, every five epochs, 1300 images per class from the training subset and 400 images per class from the validation subset were randomly chosen from their respective subsets. This resulted in 15,600 images being used in each epoch for training and another 4200 for validation. The testing was performed on the complete unbalanced testing subset, since this is a representative fraction of the labels actually identified inside the field.
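A minimal sketch of this balanced subsampling is shown below; the file-handling details (a dictionary mapping class names to labeled image paths) are hypothetical and only illustrate the redraw performed every five epochs.

```python
import random

def balanced_subsample(paths_by_class, n_per_class, rng=random):
    """paths_by_class: dict mapping class name -> list of labeled image paths."""
    sample = []
    for cls, paths in paths_by_class.items():
        # draw n_per_class images per class without replacement
        sample.extend((p, cls) for p in rng.sample(paths, n_per_class))
    rng.shuffle(sample)          # mix the classes before batching
    return sample

# Redrawn every five epochs, e.g.:
# train_pool = balanced_subsample(train_paths_by_class, 1300)
# val_pool = balanced_subsample(val_paths_by_class, 400)
```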

2.4.5. Network Training

The network experimentation was performed with Keras 2.4.3 in Python 3.6.8 using the TensorFlow (2.3.0) backend. Transfer learning was used [34]: all the aforementioned networks were initialized with the pretrained weights from the ImageNet dataset, and the layers of those networks were not trained during our experimentation. For each of the networks, the pretrained variant was used without the top classification layers for the ImageNet classification. Instead, two additional fully connected dense layers of 512 neurons each were included (Figure 3). A ReLU activation function was implemented on both of these layers, and a 50% neuron dropout was used during training. The networks were trained on a supercomputer cluster using an NVIDIA® Tesla V100 PCIe Tensor Core GPU with 12 GB of GPU memory (NVIDIA Corporation, Santa Clara, CA, USA). Instead of the stochastic gradient descent (SGD) algorithm, Adam, an adaptive learning rate algorithm, was implemented for Xception and ResNet–50 with a learning rate of 1 × 10⁻³ and a decay of 0.01/200. For VGG16, a smaller learning rate of 1 × 10⁻⁴ was chosen, but with the same decay. Each network was trained ten times, each run independent of the previous ones. For the training and validation subsets, data augmentation was also performed to avoid over-fitting and to overcome the highly variable nature of the target classification. This accounts for variation in parameters like rotation, scale, illumination, perspective, and color. Specifically, a rotation of up to 120 degrees, a brightness shift of ±20%, a channel shift of ±30%, and a zoom of ±20% were randomly applied, along with possible horizontal and vertical flips. A batch size of 32 images was selected. Each network was trained until the validation accuracy had not improved for 150 consecutive epochs. This ensured that the networks had converged to a maximum, while even for the majority classes the probability that each training image had been used at least once within these 150 epochs exceeded 99%. The maximum and minimum number of epochs among the ten repetitions of each network can be seen in Table 2. Table 2 also shows further information about the training, such as the mean time per epoch and the training parameters of each architecture.
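The sketch below assembles these pieces for one of the base networks (ResNet–50), assuming the Keras 2.4/TensorFlow 2.3 setup stated above; the global average pooling, the channel-shift scaling, and the fit call are assumptions made for illustration, not the authors' exact configuration.

```python
from tensorflow.keras import layers, models, optimizers
from tensorflow.keras.applications import ResNet50
from tensorflow.keras.callbacks import EarlyStopping
from tensorflow.keras.preprocessing.image import ImageDataGenerator

base = ResNet50(weights="imagenet", include_top=False, pooling="avg",
                input_shape=(224, 224, 3))
base.trainable = False                                   # ImageNet layers stay frozen

model = models.Sequential([
    base,
    layers.Dense(512, activation="relu"),                # two dense layers of 512 neurons
    layers.Dropout(0.5),
    layers.Dense(512, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(12, activation="softmax"),              # twelve plant classes
])
# decay keyword as supported in TF 2.3 optimizers
model.compile(optimizer=optimizers.Adam(learning_rate=1e-3, decay=0.01 / 200),
              loss="categorical_crossentropy", metrics=["accuracy"])

# Augmentation roughly matching the reported settings; the channel shift is
# expressed here in 8-bit intensity units (an assumption).
augment = ImageDataGenerator(rotation_range=120,
                             brightness_range=(0.8, 1.2),
                             channel_shift_range=0.3 * 255,
                             zoom_range=0.2,
                             horizontal_flip=True,
                             vertical_flip=True)

early_stop = EarlyStopping(monitor="val_accuracy", patience=150,
                           restore_best_weights=True)
# model.fit(augment.flow(x_train, y_train, batch_size=32),
#           validation_data=(x_val, y_val), epochs=2000, callbacks=[early_stop])
```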

2.5. Evaluation Metrics

In order to evaluate the performance of the respective network classification results, precision, recall, and f1-score were used, as proposed by Sokolova and Lapalme [47]. Precision refers to the conformity of the data labels with the positive labels assigned by the classifier and is calculated as:
\[ \text{Precision} = \frac{tp}{tp + fp} \quad (1) \]
where tp represents the true positive values, i.e., the plants belonging to a class that were identified as such, and fp represents the false positive values, i.e., the plants that do not belong to a class but were identified as such. Recall evaluates the sensitivity of the respective network and was calculated as:
\[ \text{Recall} = \frac{tp}{tp + fn} \quad (2) \]
where tp is defined as in Equation (1), and fn represents the false negative values, i.e., the plants that belong to the class but were not identified as such. The f1-score illustrates the ratio between precision and recall via a harmonic mean and was calculated as:
\[ \text{f1-score} = \frac{2 \times \text{Precision} \times \text{Recall}}{\text{Precision} + \text{Recall}} \quad (3) \]
where Precision is defined in Equation (1) and Recall is defined in Equation (2).
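For completeness, a minimal NumPy sketch of these per-class metrics is given below; it follows Equations (1)–(3) directly and is an illustration, not the evaluation code used by the authors.

```python
import numpy as np

def per_class_metrics(y_true, y_pred, n_classes=12):
    """Precision, recall, and f1-score per class, following Equations (1)-(3)."""
    precision, recall, f1 = [], [], []
    for c in range(n_classes):
        tp = np.sum((y_pred == c) & (y_true == c))
        fp = np.sum((y_pred == c) & (y_true != c))
        fn = np.sum((y_pred != c) & (y_true == c))
        p = tp / (tp + fp) if (tp + fp) else 0.0
        r = tp / (tp + fn) if (tp + fn) else 0.0
        precision.append(p)
        recall.append(r)
        f1.append(2 * p * r / (p + r) if (p + r) else 0.0)
    return precision, recall, f1
```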

3. Results

In the current paper, three different networks were tested to evaluate their performance on a balanced dataset of twelve different plant species. All three networks had their original layers pretrained on the ImageNet dataset, while additional fully connected layers were included as top layers of each pretrained network. Only these layers were trained to classify and identify our twelve plant classes. Ten repetitions of each network type were trained until the validation accuracy had not improved for 150 consecutive epochs.

3.1. Model Accuracy/Model Loss

The mean training and validation loss, along with the training and validation accuracy per network type, can be seen in Figure 4. The training accuracy increases rapidly in the first 100 epochs and then slowly approaches an optimum, depending on the network, at around 300 to 700 epochs for VGG16 (Figure 4a), 350 to 850 epochs for ResNet–50 (Figure 4c), and 400 to 800 epochs for Xception (Figure 4e). The model accuracy did not differ much between the ten repetitions of each network. The biggest difference between the maximum and the minimum accuracy occurred in the VGG16 training. All repetitions of a given network achieved a similar result of around 81% for VGG16 and more than 97% for ResNet–50 and Xception, both in the training and validation accuracy, which was also similar to the testing accuracy measured after the finalization of the training (Figure 4, Table 6). Specifically, the top-1 accuracy ranged from 81% to 82.7% for VGG16 and from 97.2% to 97.7% for ResNet–50, while the highest accuracy was achieved with Xception at 97.5% to 97.8%.
A loss function is used to improve and evolve the accuracy of a neural network model. Loss functions map the parameters of the network onto a single scalar value that indicates how well those parameters perform the task the network is intended to do. The loss value therefore indicates how poorly or how well the model behaves after each iteration of optimization. In our case, the model loss is visualized via the sum of errors made for each example in the training or validation set, respectively. The training and validation loss behaved similarly in all networks, with the validation loss presenting higher fluctuations and differences than the training loss. The validation loss decreases steadily until it reaches its lowest value, depending on the neural network, at similar epochs to those at which the maximum accuracy was achieved: around 300 to 700 epochs for VGG16 (Figure 4b), 350 to 850 epochs for ResNet–50 (Figure 4d), and 400 to 800 epochs for Xception (Figure 4f). The highest fluctuations between the minimum and the maximum loss were observed for the Xception training, while the lowest fluctuations were observed for the ResNet–50 training.

3.2. Classification Performance

In our dataset, twelve different classes were used to separate the plants into the equivalent plant species. The distribution of how well the three networks identified each plant species can be seen in Table 3, Table 4 and Table 5. These are the results from the testing part of the dataset. All images kept as the testing part were fed into the finalized network, the relevant metrics were calculated, and the corresponding confusion matrices were created. Since 15% of each plant species was separated and kept as the testing part of the dataset, this result derives from an unbalanced dataset, which represents the availability of our input data. Each row represents the actual plant species of the tested plant, while each column shows which plant species was the most prevalent decision of the neural network (top-1 accuracy). In order to make the data comparable and more comprehensible, all data are presented as percentages relative to the number of actual plants per species; therefore, the sum of each row is 100.
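The row-normalization used in Tables 3–5 can be sketched as follows; class order and variable names are illustrative, not taken from the authors' code.

```python
import numpy as np

def confusion_percentages(y_true, y_pred, n_classes=12):
    """Row-normalized confusion matrix: each row (actual species) sums to 100."""
    cm = np.zeros((n_classes, n_classes), dtype=float)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1.0
    return 100.0 * cm / cm.sum(axis=1, keepdims=True)
```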
The mean values of the ten VGG16 confusion matrices are shown in Table 3a, while the standard deviation of those ten matrices is shown in Table 3b. The best identification was achieved for M. chamomila, where 93.36% of the M. chamomila weeds were identified as such, while the worst identification was achieved for C. album, with 55.01% correct identifications. Since VGG16 performed the worst of the three networks, this is also reflected in the confusion matrix: many plants were misclassified as Setaria spp., S. nigrum, and S. tuberosum, while many S. media and C. album weeds were not identified as such. It should be noted that, especially concerning C. album, whenever the network was unclear its second choice was S. nigrum. Concerning the three crop species included in the dataset, 82–85% of them were correctly identified as their relevant plant species, while if the misclassifications as other crop plants are added to each crop classification, maize, potato, and sunflower were identified as a crop in 88.0%, 84.6%, and 95.0% of cases, respectively. The ten different networks performed similarly, but for the aforementioned problematic classifications the standard deviation between the different networks was the highest, ranging between 0.5% and 2.0%.
The mean values of the ten ResNet–50 confusion matrices are shown in Table 4a, while the standard deviation of those ten matrices is shown in Table 4b. ResNet–50 achieved an accuracy of around 97%, which is also reflected in the relevant confusion matrices. All plant species had more than 90% correct identification. The best identification was achieved for A. retroflexus, followed by M. chamomila, where 99.66% and 99.54% of the respective weeds were identified as such. S. media, which was one of the worst performers for the VGG16 network, was the third most correctly identified species with 99.41%. The worst identification was achieved for S. nigrum and C. album, with 90.33% and 91.52% correct identifications, respectively. The misclassification between S. nigrum and C. album also exists in this network, but with a smaller degree of uncertainty compared to VGG16. Concerning the three crop species included in the dataset, 97–99% of them were correctly identified as their relevant plant species, while if the misclassifications as other crop plants are added to each crop classification, maize, potato, and sunflower were identified as a crop in 98.9%, 98.9%, and 99.0% of cases, respectively. The ten different networks performed similarly, but for the aforementioned problematic classifications the standard deviation between the different networks was the highest, ranging between 0.5% and 0.8%.
The mean values of the ten Xception confusion matrices are shown in Table 5a, while the standard deviation of those ten matrices is shown in Table 5b. Xception achieved the best accuracy of around 98%. All plant species had more than 92% correct identification. The best identification was achieved for M. chamomila, followed by S. media, where 99.74% and 99.61% of the respective weeds were identified as such. The worst identification was achieved for S. nigrum and C. album, with 91.49% and 92.46% correct identifications, respectively. The misclassification between S. nigrum and C. album exists in this network as well, but with a smaller degree of uncertainty compared to the other networks. Concerning the three crop species included in the dataset, 97–99% of them were correctly identified as their relevant plant species, while if the misclassifications as other crop plants are added to each crop classification, maize, potato, and sunflower were identified as a crop in 99.2%, 98.9%, and 99.0% of cases, respectively. The ten different networks performed similarly, but for the aforementioned problematic classifications the standard deviation between the different networks was the highest, ranging between 0.6% and 0.9%, even higher than for the ResNet–50 networks.

3.3. Precision/Recall

In all three neural networks, the average precision and recall are similar to the identified top-1 accuracy (Table 6). Even though Table 6 shows only the result of the first trained network, there is some fluctuation between the absolute values per species, but the averages are the same or almost the same. VGG16 has an average precision of 0.75 and a recall of 0.79, while both ResNet–50 and Xception have a recall of 0.97 and a precision of 0.96 for ResNet–50 and 0.96 or 0.97 for different implementations of Xception. It should be noted that, both for the averages and per species, the majority of the cases show a higher recall than precision for VGG16 and Xception. In ResNet–50, more instances show a higher precision than recall, resulting in almost 50% of the cases showing higher precision and the rest higher recall. Species like H. annuus, M. chamomila, and S. media show the best results, in many cases achieving a perfect score of 1.00 for precision or recall. On the other end, C. album achieved the worst result in all neural networks, followed by S. nigrum.

4. Discussion

Image recognition with the aid of neural networks is a relatively new topic in the domain of plant and weed classification, as most publications were published after 2016 [25], yet it shows high potential. All networks used in this experiment were able to train on our dataset and achieve significant discrimination results in all repetitions. ResNet–50 and Xception performed better than VGG16, achieving accuracies of 97% and 98%, respectively. Recent publications like dos Santos Ferreira et al. [15], Potena et al. [18], Tang et al. [4], Sharpe et al. [39], and Elnemr [19] have also achieved classification results of over 90%. Yet in the majority of these cases, a low number of classes was used (2–4), or the datasets were only sufficient to prove the researched hypothesis but not to transfer the results into the complexity of the real world. Potena et al. [18] and Sharpe et al. [39] used only two classes, while Tang et al. [4] and dos Santos Ferreira et al. [15] used four. Such a small number of classes is not sufficient to describe specific local weed populations and the coverage of each species [2]. They cannot be used for weed management applications like, for example, precision spraying or mechanical weed control [6]. The selection of a limited number of classes for classification is mainly due to the fact that the more classes are considered, the less accurate the result [19]. In our case we managed to achieve quite a high classification accuracy, exceeding 97% for two of our networks, with twelve different classes representing three summer crops and some of their representative weeds, both grasses and broadleaved weeds.
For distinguishing between many classes, a large and robust dataset is required, which is time-consuming to create [25,30,31,48]. In cases where authors have tried to achieve multi-class weed and plant classification, their classification accuracy dropped below 90% [6,49]. This can be attributed to their limited amount of training data and the associated unbalanced dataset, which can make it hard for the network to generalize [21] or can create a bias towards the majority class [50,51]. Therefore, the results should always be interpreted in light of potential dataset limitations [52], which generally encompass the scale of the dataset, the number of distinguished classes, and the distribution between the classes, as well as potential dataset biases. With the methodology that we used, we managed to acquire a significant number of plant and weed images, while simultaneously making the labeling of these images easier. Olsen et al. [20] demonstrated one of the most robust datasets, as it was collected at different locations under field conditions, was balanced with 1000 images per plant, and even included a negative class in the training process. In our case, the images were gathered at only one location, with a homogeneous soil type and a specific camera, at a similar time of day, and with the optimum image settings chosen by the specific camera software. This data uniformity could pose a problem for the robust application of the network, but based on the proposed methodology more data can be acquired and integrated into the current dataset. Even so, the acquisition of images at different dates and growth stages makes the dataset per species more representative, while the images were also acquired under different soil conditions (e.g., wet, normal, and quite dry soil). The dataset comprises single plants, overlapping plants of the same species, plant sections, leaf fragments, and damaged leaf surfaces, which results in a high variability within the classes but simultaneously allows generalization to possible new images of the specific species. Even though there are images of various qualities inside the dataset, we did not notice any significant problems or systematic errors with the images used. The task of a neural network is to generalize and overcome influences on its data input, concerning both image irregularities and unwanted background objects. Images with different weed species overlapping also need to be examined and included. With our dataset and our methodology, the goal is to further improve the standard for plant and weed recognition set by Olsen et al. [20], as our dataset comprises a total of 93,130 labeled single plant images based on nine weed species and three crops, which is sufficient for choosing the appropriate herbicide treatment.
Our dataset comprised images of the plant species during various development stages, from first emergence until tillering. The capability of both ResNet–50 and Xception to achieve an f1-score of at least 0.89, but typically between 0.95 and 0.98 per plant species, should be noted, since each plant shows differences in its morphological structure, especially in leaf shape, leaf surface texture, and the total number of leaves. This high variation in the acquired images typically constrains successful identification, particularly in the period between emergence and youth development, which is also the most favorable time for successful weed control [3,6,53]. In our training, S. nigrum and C. album performed the worst, showing a high misclassification rate between these two weed species. This can be attributed to the similar morphological characteristics of these two plants, especially during the 0- and 2-leaf stages, where they can be discriminated only through their texture and color. Pérez-Ortiz et al. [1] also pointed towards classification problems due to the morphological similarities between different species, especially at the time of emergence. Moreover, as overlapping of individual leaves can occur, it becomes even more difficult to distinguish between individual weed and crop species. In our case, plant overlapping also existed, but only between plants of the same species. For the three crop species included in the dataset, 97–99% of them were correctly identified as their relevant plant species, and if we pool the respective crop misclassifications these numbers rise to 99.2%, 98.9%, and 99.0% for maize, potato, and sunflower, respectively. This fact encourages the use of these networks for crop-related applications. Due to our high average classification result, especially in the early development stages of Z. mays, S. tuberosum, and H. annuus, where weed interference can significantly reduce the yield [54], weed-specific herbicide applications can be executed.
VGG16 used the smallest amount of time per epoch to train, but simultaneously had the poorest outcome compared with the other network architectures. Its simpler architecture and lower total number of parameters make it a good candidate for online systems, where processing power can be a restricting factor. Unfortunately, with an accuracy of 82%, even though VGG16 can be a viable alternative to other methods used until now [55], it still lacks the robustness needed for an online application. Xception had the best performance of all networks. Its greater depth and complexity enabled it to adapt and generalize better than the other two networks [1], but it outperformed ResNet–50 only slightly. Yet, this complexity and the highest total number of parameters made Xception the slowest network in terms of training and validation speed, and afterwards during testing. ResNet–50 achieved results similar to Xception, but due to its architecture and its slightly lower number of layers and total parameters, it trained and validated much faster than Xception. Its high accuracy, combined with the smallest computation time, suggests it as the most viable candidate of the three for an online application.
The images and the dataset were acquired near the ground, but all images had to be adapted to the input of the neural network (224 or 299 pixel input dimension). For small plants this meant enlarging the plants, but for bigger plants it effectively meant shrinking them. The lower resolution used for bigger plants gives this method the potential to be implemented on Unmanned Aerial Vehicles (UAVs). Pena et al. [56], using OBIA, could separate between sunflowers and weeds at a later stage with high accuracy (77–91%). Being able to capture enough pixels for a robust recognition is in the majority of cases the limiting factor. As technology improves and pixel resolutions increase, this hurdle can be overcome. Pflanz et al. [57] used a UAV at a low flight altitude (1 to 6 m) to achieve good results in discriminating between Matricaria recutita L., Papaver rhoeas L., Viola arvensis M., and winter wheat. Such a low altitude flight cannot be exploited on practical agricultural farms, but it definitely shows the potential of such a system. As resolutions increase, a flight altitude of 15–20 m can be used for plant classification and can become commercially applicable. In all cases, neural networks need at least some pixels per plant to be able to recognize and classify it.
The presented weed identification algorithm can be used in combination with site-specific weed control methods for more precise herbicide applications and mechanical treatments. It can be used to control a sprayer or a mechanical hoe in real time. Weed classification and monitoring can enable more sophisticated and complex Decision Support Systems. Such tools can also be used by farmers, agronomists, and consultants during weed scouting and vegetation surveys. However, two practical limitations need to be addressed first: a more robust and diverse dataset, combined with better hardware. For practical use, the data need to be collected and processed simultaneously over the entire sprayer boom at a certain frame rate per second. At the same time, it is important to correctly recognize a heterogeneous plant stock; therefore, a diverse and robust dataset is imperative. As the results show, more complex neural networks are required to increase the classification accuracy, but this is accompanied by an increase in required computing power. The trade-off between improved accuracy and speed needs to be further explored, since in our case the increase in accuracy provided by Xception cannot justify its increase in computational time. Similarly to Integrated Pest Management, where the balance between pest management, sustainability, and food security is explored, we need to investigate how high the accuracy needs to be for practical applications.

5. Conclusions

In the current paper, we have presented the results of plant identification using Convolutional Neural Networks. A methodology for improving the image acquisition and the generation of the dataset has been proposed, which makes the acquisition of these images easier, along with their labeling and utilization in neural network training and testing. ResNet–50 and Xception achieved a quite high top-1 testing accuracy (>97%), outperforming VGG16, yet there were systematic misclassifications between S. nigrum and C. album. More work needs to be done to improve the robustness and usability of the dataset, with more diverse images of the currently classified plants and more plant species. Bigger datasets can enable us to test even more detailed classification schemes, such as per plant species, growth stage, or crop variety. The current work demonstrates a functional approach for porting this knowledge and classification routine to online, in-field weed identification and management.

Author Contributions

All authors contributed extensively to this manuscript. G.G.P. and R.G. conceptualized the experiment, while G.G.P., R.G. and D.A. set up the methodology. P.R. executed the experiment, while G.G.P. and J.K. created the software for analysis. G.G.P. and P.R. wrote the original draft, while all authors helped in the reviewing and editing of the paper. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by EIT FOOD as project# 20140 DACWEED: Detection and ACtuation system for WEED management. EIT FOOD is the innovation community on Food of the European Institute of Innovation and Technology (EIT), an EU body under Horizon 2020, the EU Framework Programme for Research and Innovation.

Acknowledgments

The authors would like to thank all the people who helped in the realization of this dataset. We would like to thank the technicians at the research station Heidfeldhof for the field preparations and the technicians Jan Roggenbuck, Alexandra Heyn, and Cathrin Brechlin of the Department of Weed Science for their aid during the field work.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
ANN Artificial Neural Networks
CNN Convolutional Neural Networks
EPPO European and Mediterranean Plant Protection Organization
MDPI Multidisciplinary Digital Publishing Institute
UAV Unmanned Aerial Vehicle
VGG Visual Geometry Group

References

  1. Pérez-Ortiz, M.; Peña, J.M.; Gutiérrez, P.A.; Torres-Sánchez, J.; Hervás-Martínez, C.; López-Granados, F. Selecting patterns and features for between- and within- crop-row weed mapping using UAV-imagery. Expert Syst. Appl. 2016, 47, 85–94. [Google Scholar] [CrossRef] [Green Version]
  2. Oerke, E.C.; Gerhards, R.; Menz, G.; Sikora, R.A. (Eds.) Precision Crop Protection—The Challenge and Use of Heterogeneity, 1st ed.; Springer: Dordrecht, The Netherlands; Heidelberg, Germany; London, UK; New York, NY, USA, 2010; Volume 1. [Google Scholar] [CrossRef]
  3. Fernández-Quintanilla, C.; Peña, J.M.; Andújar, D.; Dorado, J.; Ribeiro, A.; López-Granados, F. Is the current state of the art of weed monitoring suitable for site-specific weed management in arable crops? Weed Res. 2018, 58, 259–272. [Google Scholar] [CrossRef]
  4. Tang, J.; Wang, D.; Zhang, Z.; He, L.; Xin, J.; Xu, Y. Weed identification based on K-means feature learning combined with convolutional neural network. Comput. Electron. Agric. 2017, 135, 63–70. [Google Scholar] [CrossRef]
  5. Dyrmann, M.; Christiansen, P.; Midtiby, H.S. Estimation of plant species by classifying plants and leaves in combination. J. Field Robot. 2017, 35, 202–212. [Google Scholar] [CrossRef]
  6. Dyrmann, M.; Karstoft, H.; Midtiby, H.S. Plant species classification using deep convolutional neural network. Biosyst. Eng. 2016, 151, 72–80. [Google Scholar] [CrossRef]
  7. Pantazi, X.E.; Moshou, D.; Bravo, C. Active learning system for weed species recognition based on hyperspectral sensing. Biosyst. Eng. 2016. [Google Scholar] [CrossRef]
  8. Sabzi, S.; Abbaspour-Gilandeh, Y.; García-Mateos, G. A fast and accurate expert system for weed identification in potato crops using metaheuristic algorithms. Comput. Ind. 2018, 98, 80–89. [Google Scholar] [CrossRef]
  9. European Parliament; Council of the EU. Directive 2009/128/EC of the European Parliament and of the Council of 21st October 2009 establishing a framework for Community action to achieve the sustainable use of pesticides (Text with EEA relevance). Off. J. Eur. Union 2009, L 309, 71–86. [Google Scholar]
  10. Machleb, J.; Peteinatos, G.G.; Kollenda, B.L.; Andújar, D.; Gerhards, R. Sensor-based mechanical weed control: Present state and prospects. Comput. Electron. Agric. 2020, 176, 105638. [Google Scholar] [CrossRef]
  11. Tyagi, A.C. Towards a Second Green Revolution. Irrig. Drain. 2016, 65, 388–389. [Google Scholar] [CrossRef]
  12. Peteinatos, G.G.; Weis, M.; Andújar, D.; Rueda Ayala, V.; Gerhards, R. Potential use of ground-based sensor technologies for weed detection. Pest Manag. Sci. 2014, 70, 190–199. [Google Scholar] [CrossRef] [PubMed]
  13. Lottes, P.; Hörferlin, M.; Sander, S.; Stachniss, C. Effective Vision-based Classification for Separating Sugar Beets and Weeds for Precision Farming. J. Field Robot. 2016, 34, 1160–1178. [Google Scholar] [CrossRef]
  14. Zheng, Y.; Zhu, Q.; Huang, M.; Guo, Y.; Qin, J. Maize and weed classification using color indices with support vector data description in outdoor fields. Comput. Electron. Agric. 2017, 141, 215–222. [Google Scholar] [CrossRef]
  15. Dos Santos Ferreira, A.; Freitas, D.M.; da Silva, G.G.; Pistori, H.; Folhes, M.T. Weed detection in soybean crops using ConvNets. Comput. Electron. Agric. 2017, 143, 314–324. [Google Scholar] [CrossRef]
  16. LeCun, Y.; Boser, B.; Denker, J.S.; Henderson, D.; Howard, R.E.; Hubbard, W.; Jackel, L.D. Backpropagation Applied to Handwritten Zip Code Recognition. Neural Comput. 1989, 1, 541–551. [Google Scholar] [CrossRef]
  17. Razavian, A.S.; Azizpour, H.; Sullivan, J.; Carlsson, S. CNN Features Off-the-Shelf: An Astounding Baseline for Recognition. In Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition Workshops, Columbus, OH, USA, 23–28 June 2014; IEEE: Piscataway, NJ, USA, 2014. [Google Scholar] [CrossRef] [Green Version]
  18. Potena, C.; Nardi, D.; Pretto, A. Fast and Accurate Crop and Weed Identification with Summarized Train Sets for Precision Agriculture. In Intelligent Autonomous Systems 14; Springer International Publishing: Cham, Switzerland, 2017; pp. 105–121. [Google Scholar] [CrossRef] [Green Version]
  19. Elnemr, H.A. Convolutional Neural Network Architecture for Plant Seedling Classification. Int. J. Adv. Comput. Sci. Appl. 2019, 10. [Google Scholar] [CrossRef]
  20. Olsen, A.; Konovalov, D.A.; Philippa, B.; Ridd, P.; Wood, J.C.; Johns, J.; Banks, W.; Girgenti, B.; Kenny, O.; Whinney, J.; et al. DeepWeeds: A Multiclass Weed Species Image Dataset for Deep Learning. Sci. Rep. 2019, 9. [Google Scholar] [CrossRef]
  21. Rawat, W.; Wang, Z. Deep Convolutional Neural Networks for Image Classification: A Comprehensive Review. Neural Comput. 2017, 29, 2352–2449. [Google Scholar] [CrossRef]
  22. Milioto, A.; Lottes, P.; Stachniss, C. Real-time blob-wise sugar beets vs weeds classification for monitoring fields using convolutional neural networks. ISPRS Ann. Photogramm. Remote. Sens. Spat. Inf. Sci. 2017, IV-2/W3, 41–48. [Google Scholar] [CrossRef] [Green Version]
  23. Lee, S.H.; Chan, C.S.; Mayo, S.J.; Remagnino, P. How deep learning extracts and learns leaf features for plant classification. Pattern Recognit. 2017, 71, 1–13. [Google Scholar] [CrossRef] [Green Version]
  24. Fuentes-Pacheco, J.; Torres-Olivares, J.; Roman-Rangel, E.; Cervantes, S.; Juarez-Lopez, P.; Hermosillo-Valadez, J.; Rendón-Mancha, J.M. Fig Plant Segmentation from Aerial Images Using a Deep Convolutional Encoder-Decoder Network. Remote Sens. 2019, 11, 1157. [Google Scholar] [CrossRef] [Green Version]
  25. Kamilaris, A.; Prenafeta-Boldú, F.X. Deep learning in agriculture: A survey. Comput. Electron. Agric. 2018, 147, 70–90. [Google Scholar] [CrossRef] [Green Version]
  26. Xinshao, W.; Cheng, C. Weed seeds classification based on PCANet deep learning baseline. In Proceedings of the 2015 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA), Hong Kong, China, 16–19 December 2015; IEEE: Piscataway, NJ, USA, 2015. [Google Scholar] [CrossRef]
  27. Hoeser, T.; Kuenzer, C. Object Detection and Image Segmentation with Deep Learning on Earth Observation Data: A Review-Part I: Evolution and Recent Trends. Remote Sens. 2020, 12, 1667. [Google Scholar] [CrossRef]
  28. McCool, C.; Perez, T.; Upcroft, B. Mixtures of Lightweight Deep Convolutional Neural Networks: Applied to Agricultural Robotics. IEEE Robot. Autom. Lett. 2017, 2, 1344–1351. [Google Scholar] [CrossRef]
  29. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet classification with deep convolutional neural networks. Commun. ACM 2017, 60, 84–90. [Google Scholar] [CrossRef]
  30. Zhu, X.; Wu, X. Class Noise vs. Attribute Noise: A Quantitative Study. Artif. Intell. Rev. 2004, 22, 177–210. [Google Scholar] [CrossRef]
  31. McLaughlin, N.; Rincon, J.M.D.; Miller, P. Data-augmentation for reducing dataset bias in person re-identification. In Proceedings of the 2015 12th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS), Karlsruhe, Germany, 25–28 August 2015; IEEE: Piscataway, NJ, USA, 2015. [Google Scholar] [CrossRef] [Green Version]
  32. Meier, U. Growth Stages of Mono- and Dicotyledonous Plants: BBCH Monograph; Open Agrar Repositorium: Göttingen, Germany, 2018. [Google Scholar] [CrossRef]
  33. Ge, Z.; McCool, C.; Sanderson, C.; Corke, P. Subset feature learning for fine-grained category classification. In Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Boston, MA, USA, 7–12 June 2015; IEEE: Piscataway, NJ, USA, 2015. [Google Scholar] [CrossRef] [Green Version]
  34. Pan, S.J.; Yang, Q. A Survey on Transfer Learning. IEEE Trans. Knowl. Data Eng. 2010, 22, 1345–1359. [Google Scholar] [CrossRef]
  35. Munz, S.; Reiser, D. Approach for Image-Based Semantic Segmentation of Canopy Cover in Pea–Oat Intercropping. Agriculture 2020, 10, 354. [Google Scholar] [CrossRef]
  36. Szegedy, C.; Vanhoucke, V.; Ioffe, S.; Shlens, J.; Wojna, Z. Rethinking the Inception Architecture for Computer Vision. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 2818–2826. [Google Scholar] [CrossRef] [Green Version]
  37. Szegedy, C.; Liu, W.; Jia, Y.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A. Going deeper with convolutions. In Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015; pp. 1–9. [Google Scholar] [CrossRef] [Green Version]
  38. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; IEEE: Piscataway, NJ, USA, 2016. [Google Scholar] [CrossRef] [Green Version]
  39. Sharpe, S.M.; Schumann, A.W.; Boyd, N.S. Detection of Carolina Geranium (Geranium carolinianum) Growing in Competition with Strawberry Using Convolutional Neural Networks. Weed Sci. 2018, 67, 239–245. [Google Scholar] [CrossRef]
  40. Simonyan, K.; Zisserman, A. Very Deep Convolutional Networks for Large-Scale Image Recognition. arXiv 2014, arXiv:1409.1556. [Google Scholar]
  41. Chollet, F. Xception: Deep Learning with Depthwise Separable Convolutions. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 1800–1807. [Google Scholar] [CrossRef] [Green Version]
  42. Keller, M.; Zecha, C.; Weis, M.; Link-Dolezal, J.; Gerhards, R.; Claupein, W. Competence center SenGIS—Exploring methods for multisensor data acquisition and handling for interdisciplinary research. In Proceedings of the 8th European Conference on Precision Agriculture 2011, Prague, Czech Republic, 11–14 July 2011; Czech Centre for Science and Society: Prague, Czech Republic, 2011; pp. 491–500. [Google Scholar]
  43. Mink, R.; Dutta, A.; Peteinatos, G.; Sökefeld, M.; Engels, J.; Hahn, M.; Gerhards, R. Multi-Temporal Site-Specific Weed Control of Cirsium arvense (L.) Scop. and Rumex crispus L. in Maize and Sugar Beet Using Unmanned Aerial Vehicle Based Mapping. Agriculture 2018, 8, 65. [Google Scholar] [CrossRef] [Green Version]
  44. Meyer, G.E.; Neto, J.C.; Jones, D.D.; Hindman, T.W. Intensified fuzzy clusters for classifying plant, soil, and residue regions of interest from color images. Comput. Electron. Agric. 2004, 42, 161–180. [Google Scholar] [CrossRef] [Green Version]
  45. Theckedath, D.; Sedamkar, R.R. Detecting Affect States Using VGG16, ResNet50 and SE-ResNet50 Networks. SN Comput. Sci. 2020, 1. [Google Scholar] [CrossRef] [Green Version]
  46. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet classification with deep convolutional neural networks. In Proceedings of the 26th Annual Conference on Neural Information Processing Systems, Lake Tahoe, NV, USA, 3–6 December 2012; pp. 1097–1105. [Google Scholar]
  47. Sokolova, M.; Lapalme, G. A systematic analysis of performance measures for classification tasks. Inf. Process. Manag. 2009, 45, 427–437. [Google Scholar] [CrossRef]
  48. Chang, T.; Rasmussen, B.; Dickson, B.; Zachmann, L. Chimera: A Multi-Task Recurrent Convolutional Neural Network for Forest Classification and Structural Estimation. Remote Sens. 2019, 11, 768. [Google Scholar] [CrossRef] [Green Version]
  49. Teimouri, N.; Dyrmann, M.; Nielsen, P.; Mathiassen, S.; Somerville, G.; Jørgensen, R. Weed Growth Stage Estimator Using Deep Convolutional Neural Networks. Sensors 2018, 18, 1580. [Google Scholar] [CrossRef] [Green Version]
  50. López, V.; Fernández, A.; García, S.; Palade, V.; Herrera, F. An insight into classification with imbalanced data: Empirical results and current trends on using data intrinsic characteristics. Inf. Sci. 2013, 250, 113–141. [Google Scholar] [CrossRef]
  51. Batista, G.E.A.P.A.; Prati, R.C.; Monard, M.C. A study of the behavior of several methods for balancing machine learning training data. ACM SIGKDD Explor. Newsl. 2004, 6, 20–29. [Google Scholar] [CrossRef]
  52. Barbedo, J.G.A. A review on the main challenges in automatic plant disease identification based on visible range images. Biosyst. Eng. 2016, 144, 52–60. [Google Scholar] [CrossRef]
  53. Gerhards, R.; Christensen, S. Real-time weed detection, decision making and patch spraying in maize, sugar beet, winter wheat and winter barley. Weed Res. 2003, 43, 385–392. [Google Scholar] [CrossRef]
  54. Tursun, N.; Datta, A.; Sakinmaz, M.S.; Kantarci, Z.; Knezevic, S.Z.; Chauhan, B.S. The critical period for weed control in three corn (Zea mays L.) types. Crop Prot. 2016, 90, 59–65. [Google Scholar] [CrossRef]
  55. Sökefeld, M.; Gerhards, R.; Oebel, H.; Therburg, R.D. Image acquisition for weed detection and identification by digital image analysis. In Proceedings of the 6th European Conference on Precision Agriculture (ECPA), Skiathos, Greece, 3–6 June 2007; Wageningen Academic Publishers: Wageningen, The Netherlands, 2007; Volume 6, pp. 523–529. [Google Scholar]
  56. Peña, J.; Torres-Sánchez, J.; Serrano-Pérez, A.; de Castro, A.; López-Granados, F. Quantifying Efficacy and Limits of Unmanned Aerial Vehicle (UAV) Technology for Weed Seedling Detection as Affected by Sensor Resolution. Sensors 2015, 15, 5609–5626. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  57. Pflanz, M.; Nordmeyer, H.; Schirrmann, M. Weed Mapping with UAS Imagery and a Bag of Visual Words Based Image Classifier. Remote Sens. 2018, 10, 1530. [Google Scholar] [CrossRef] [Green Version]
Figure 2. Examples of the crop and weed images used for the training of the networks. (a,b) Alopecurus myosuroides, (c,d) Amaranthus retroflexus, (e,f) Avena fatua, (g,h) Chenopodium album, (i,j) Helianthus annuus, (k,l) Lamium purpureum, (m,n) Matricaria chamomila, (o,p) Setaria spp., (q,r) Solanum nigrum, (s,t) Solanum tuberosum, (u,v) Stellaria media, (w,x) Zea mays.
Figure 3. Schematic representation of the top layers and the modifications applied to the ResNet–50 CNN. The placeholder (?) denotes the batch dimension, which depends on the batch size used; in this study, a batch size of 32 was used for training, validation, and testing.
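Figure 3 describes the adapted top layers only schematically. As a rough illustration, the following Keras sketch (not the authors' code) shows one way such a transfer-learning head could be attached to a frozen, ImageNet-pretrained ResNet–50 for the twelve classes used here; the global-average-pooling layer, the 512-unit dense layer, and the Adam optimiser are assumptions made for the example.

```python
# Minimal sketch (not the authors' code): adapting ResNet-50 for 12 crop/weed
# classes via transfer learning, as outlined in Figure 3 and Table 2.
# The exact head (pooling + dense sizes) is an assumption for illustration.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 12          # 3 crops + 9 weed species
IMG_SIZE = (224, 224)     # ResNet-50 input size from Table 2
BATCH_SIZE = 32           # batch size used in the study

base = tf.keras.applications.ResNet50(
    weights="imagenet", include_top=False, input_shape=IMG_SIZE + (3,))
base.trainable = False    # keep the ImageNet features frozen; train only the new head

inputs = layers.Input(shape=IMG_SIZE + (3,))
x = tf.keras.applications.resnet50.preprocess_input(inputs)
x = base(x, training=False)
x = layers.GlobalAveragePooling2D()(x)
x = layers.Dense(512, activation="relu")(x)   # assumed intermediate layer
outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)

model = models.Model(inputs, outputs)
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```

The same pattern applies to VGG16 and Xception, with the input size changed to 299 × 299 pixels for the latter (Table 2).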
Figure 4. Minimum and maximum training and validation accuracy (a,c,e) and the corresponding training and validation loss (b,d,f) over the ten repetitions performed for the (a,b) VGG16, (c,d) ResNet–50, and (e,f) Xception Convolutional Neural Networks.
Table 1. Plant species used in the training of the CNNs and the relevant EPPO code, along with the number of labelled images per plant species in total and in the training, validation, and testing subsets.
| Plant Species | EPPO Code | Total Images | Train Images | Validation Images | Testing Images |
|---|---|---|---|---|---|
| Alopecurus myosuroides Huds. | ALOMY | 7423 | 5196 | 1113 | 1114 |
| Amaranthus retroflexus L. | AMARE | 5274 | 3691 | 791 | 792 |
| Avena fatua L. | AVEFA | 12,409 | 8686 | 1861 | 1862 |
| Chenopodium album L. | CHEAL | 2690 | 1882 | 403 | 405 |
| Helianthus annuus L. | HELAN | 16,426 | 11,498 | 2463 | 2465 |
| Lamium purpureum L. | LAMPU | 7603 | 5322 | 1140 | 1141 |
| Matricaria chamomila L. | MATCH | 15,159 | 10,611 | 2273 | 2275 |
| Setaria spp. L. | SETSS | 2378 | 1664 | 355 | 359 |
| Solanum nigrum L. | SOLNI | 2979 | 2085 | 446 | 448 |
| Solanum tuberosum L. | SOLTU | 2742 | 1919 | 411 | 412 |
| Stellaria media Vill. | STEME | 6941 | 4858 | 1041 | 1042 |
| Zea mays L. | ZEAMX | 11,106 | 7774 | 1665 | 1667 |
| SUM | | 93,130 | 65,186 | 13,962 | 13,982 |
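The per-species counts in Table 1 correspond to an approximate 70/15/15 split of the 93,130 images. The exact splitting procedure is not reproduced here; the snippet below is only an illustrative sketch of how such a stratified split could be produced, and the loader `list_images_per_species` is a hypothetical placeholder.

```python
# Illustrative sketch only: one way to obtain the roughly 70/15/15 per-species
# split shown in Table 1 using a stratified split. The file layout and the
# helper `list_images_per_species` are hypothetical placeholders.
from sklearn.model_selection import train_test_split

image_paths, labels = list_images_per_species("dataset/")  # hypothetical loader

# First split off ~70% for training, stratified by species (EPPO code).
train_x, rest_x, train_y, rest_y = train_test_split(
    image_paths, labels, train_size=0.70, stratify=labels, random_state=42)

# Split the remaining ~30% evenly into validation and testing subsets.
val_x, test_x, val_y, test_y = train_test_split(
    rest_x, rest_y, test_size=0.50, stratify=rest_y, random_state=42)

print(len(train_x), len(val_x), len(test_x))  # approximately 65,000 / 14,000 / 14,000 images
```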
Table 2. General information about the three neural networks used.
| | VGG16 | ResNet–50 | Xception |
|---|---|---|---|
| Mean time per epoch (s) | 164 | 164 | 274 |
| Minimum epochs used | 469 | 511 | 538 |
| Maximum epochs used | 864 | 979 | 945 |
| Minimum top-1 test accuracy [%] | 81.0 | 97.2 | 97.5 |
| Maximum top-1 test accuracy [%] | 82.7 | 97.7 | 97.8 |
| Minimum final validation loss | 0.524 | 0.077 | 0.085 |
| Maximum final validation loss | 0.560 | 0.089 | 0.097 |
| Network depth (layers) | 16 | 50 | 71 |
| Total network parameters | 27,829,068 | 24,905,612 | 22,179,380 |
| Trained network parameters | 13,114,380 | 1,371,020 | 1,372,428 |
| Input image size (pixels) | 224 × 224 | 224 × 224 | 299 × 299 |
| Batch size | 32 | 32 | 32 |
| Train images per epoch | 15,600 | 15,600 | 15,600 |
| Validation images per epoch | 4200 | 4200 | 4200 |
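A training loop consistent with Table 2 draws 15,600 training and 4,200 validation images per epoch in batches of 32. The sketch below is a hedged illustration built on the model from the previous example; the `train_dataset` and `val_dataset` pipelines are assumed to exist, and the early-stopping criterion is purely an assumption, since Table 2 only shows that the number of epochs varied between repetitions.

```python
# Hedged sketch of a training run consistent with Table 2 (batch size 32,
# 15,600 training and 4,200 validation images per epoch). The stopping
# criterion is an assumption made for this example.
from tensorflow.keras.callbacks import EarlyStopping

STEPS_PER_EPOCH = 15_600 // 32       # ~487 batches of training images per epoch
VALIDATION_STEPS = 4_200 // 32       # ~131 batches of validation images per epoch

history = model.fit(
    train_dataset,                   # assumed tf.data pipeline of (image, label) batches
    validation_data=val_dataset,     # assumed validation pipeline
    steps_per_epoch=STEPS_PER_EPOCH,
    validation_steps=VALIDATION_STEPS,
    epochs=1000,                     # upper bound; runs in Table 2 ended between 469 and 979 epochs
    callbacks=[EarlyStopping(monitor="val_loss", patience=50,
                             restore_best_weights=True)])
```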
Table 3. Confusion matrix of the mean crop and weed identification for the VGG16 Convolutional Neural Network. The prediction reached between 81.0% and 82.7% top-1 accuracy on the test dataset. The values are the percentage of identified labels for each plant species (each row sums to 100%). (a) Mean values for the ten training runs; cell values above 2% are highlighted. (b) Standard deviation for the ten training runs; values above 0.5% are highlighted.
(a) Mean Values of VGG16

| | ALOMY | AMARE | AVEFA | CHEAL | HELAN | LAMPU | MATCH | SETSS | SOLNI | SOLTU | STEME | ZEAMX |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| ALOMY | 86.94 | 0.16 | 5.09 | 0.51 | 0.00 | 0.19 | 0.36 | 4.95 | 0.41 | 0.87 | 0.06 | 0.46 |
| AMARE | 0.74 | 92.12 | 0.06 | 0.53 | 0.00 | 0.76 | 1.26 | 3.08 | 0.47 | 0.62 | 0.16 | 0.19 |
| AVEFA | 14.46 | 0.34 | 66.57 | 1.24 | 0.17 | 0.41 | 4.26 | 2.99 | 1.83 | 1.08 | 0.19 | 6.45 |
| CHEAL | 0.00 | 4.05 | 1.51 | 55.01 | 0.47 | 6.74 | 1.80 | 2.79 | 21.60 | 3.65 | 1.78 | 0.59 |
| HELAN | 0.29 | 0.03 | 0.90 | 1.35 | 85.69 | 0.54 | 0.55 | 0.23 | 0.93 | 4.04 | 0.17 | 5.28 |
| LAMPU | 0.17 | 1.17 | 0.43 | 4.96 | 0.17 | 75.83 | 1.82 | 1.01 | 7.98 | 4.80 | 1.59 | 0.08 |
| MATCH | 1.28 | 0.89 | 0.84 | 0.15 | 0.00 | 1.01 | 93.36 | 0.88 | 0.05 | 0.44 | 0.70 | 0.40 |
| SETSS | 5.52 | 7.05 | 3.59 | 3.79 | 0.00 | 0.89 | 0.78 | 74.57 | 0.50 | 1.95 | 1.11 | 0.25 |
| SOLNI | 0.36 | 2.88 | 1.90 | 15.63 | 0.22 | 8.08 | 1.25 | 0.80 | 64.26 | 2.63 | 1.47 | 0.51 |
| SOLTU | 0.44 | 0.73 | 1.21 | 1.97 | 1.26 | 4.64 | 3.25 | 0.19 | 2.14 | 82.33 | 0.85 | 1.00 |
| STEME | 0.02 | 4.99 | 0.04 | 3.11 | 0.00 | 4.54 | 2.86 | 1.90 | 2.22 | 1.48 | 78.69 | 0.15 |
| ZEAMX | 0.56 | 0.11 | 6.66 | 0.97 | 3.88 | 0.22 | 1.33 | 0.47 | 1.13 | 1.93 | 0.52 | 82.21 |

(b) Standard Deviation of VGG16

| | ALOMY | AMARE | AVEFA | CHEAL | HELAN | LAMPU | MATCH | SETSS | SOLNI | SOLTU | STEME | ZEAMX |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| ALOMY | 1.23 | 0.09 | 0.91 | 0.10 | 0.00 | 0.09 | 0.17 | 0.78 | 0.13 | 0.19 | 0.06 | 0.15 |
| AMARE | 0.17 | 1.50 | 0.06 | 0.17 | 0.00 | 0.28 | 0.28 | 0.91 | 0.22 | 0.16 | 0.11 | 0.06 |
| AVEFA | 1.46 | 0.17 | 2.11 | 0.20 | 0.06 | 0.12 | 0.61 | 0.43 | 0.21 | 0.18 | 0.07 | 0.88 |
| CHEAL | 0.00 | 0.71 | 0.49 | 3.00 | 0.26 | 1.20 | 0.44 | 0.48 | 2.55 | 0.56 | 0.62 | 0.12 |
| HELAN | 0.02 | 0.02 | 0.11 | 0.17 | 0.69 | 0.11 | 0.07 | 0.04 | 0.14 | 0.45 | 0.07 | 0.51 |
| LAMPU | 0.06 | 0.30 | 0.13 | 0.69 | 0.05 | 1.46 | 0.34 | 0.17 | 1.28 | 0.37 | 0.48 | 0.03 |
| MATCH | 0.28 | 0.16 | 0.18 | 0.05 | 0.01 | 0.16 | 0.52 | 0.21 | 0.05 | 0.11 | 0.13 | 0.17 |
| SETSS | 1.50 | 1.47 | 0.70 | 0.97 | 0.00 | 0.45 | 0.41 | 2.24 | 0.24 | 0.45 | 0.51 | 0.15 |
| SOLNI | 0.15 | 0.59 | 0.36 | 2.19 | 0.00 | 1.70 | 0.20 | 0.36 | 2.94 | 0.70 | 0.29 | 0.10 |
| SOLTU | 0.18 | 0.24 | 0.24 | 0.17 | 0.26 | 1.15 | 0.36 | 0.15 | 0.40 | 1.73 | 0.29 | 0.17 |
| STEME | 0.04 | 0.94 | 0.05 | 0.93 | 0.00 | 1.14 | 0.24 | 0.26 | 0.63 | 0.46 | 2.80 | 0.05 |
| ZEAMX | 0.17 | 0.04 | 0.84 | 0.27 | 0.53 | 0.11 | 0.28 | 0.18 | 0.18 | 0.28 | 0.17 | 1.31 |
Table 4. Confusion matrix of the mean crop and weed identification for the ResNet–50 Convolutional Neural Network. The prediction reached between 97.2% and 97.7% top-1 accuracy on the test dataset. The values are the percentage of identified labels for each plant species (each row sums to 100%). (a) Mean values for the ten training runs; cell values above 2% are highlighted. (b) Standard deviation for the ten training runs; values above 0.5% are highlighted.
(a) Mean Values of ResNet–50

| | ALOMY | AMARE | AVEFA | CHEAL | HELAN | LAMPU | MATCH | SETSS | SOLNI | SOLTU | STEME | ZEAMX |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| ALOMY | 98.23 | 0.00 | 0.85 | 0.03 | 0.00 | 0.01 | 0.00 | 0.78 | 0.04 | 0.00 | 0.00 | 0.06 |
| AMARE | 0.00 | 99.66 | 0.00 | 0.08 | 0.00 | 0.00 | 0.13 | 0.01 | 0.06 | 0.00 | 0.00 | 0.06 |
| AVEFA | 3.26 | 0.01 | 95.14 | 0.15 | 0.11 | 0.13 | 0.17 | 0.01 | 0.35 | 0.22 | 0.00 | 0.45 |
| CHEAL | 0.00 | 0.44 | 0.55 | 91.52 | 0.05 | 0.69 | 0.00 | 0.74 | 4.97 | 0.82 | 0.00 | 0.22 |
| HELAN | 0.00 | 0.00 | 0.42 | 0.17 | 97.09 | 0.20 | 0.21 | 0.00 | 0.01 | 0.64 | 0.00 | 1.25 |
| LAMPU | 0.00 | 0.00 | 0.00 | 1.11 | 0.01 | 97.20 | 0.00 | 0.01 | 0.99 | 0.55 | 0.14 | 0.00 |
| MATCH | 0.04 | 0.42 | 0.00 | 0.00 | 0.00 | 0.00 | 99.54 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| SETSS | 2.91 | 0.09 | 0.03 | 0.90 | 0.00 | 0.06 | 0.06 | 95.54 | 0.00 | 0.00 | 0.34 | 0.06 |
| SOLNI | 0.02 | 0.37 | 0.92 | 5.58 | 0.02 | 2.01 | 0.02 | 0.10 | 90.33 | 0.62 | 0.00 | 0.00 |
| SOLTU | 0.19 | 0.00 | 0.08 | 0.11 | 0.00 | 0.57 | 0.05 | 0.00 | 0.00 | 98.84 | 0.11 | 0.05 |
| STEME | 0.00 | 0.17 | 0.00 | 0.15 | 0.00 | 0.00 | 0.03 | 0.15 | 0.09 | 0.00 | 99.41 | 0.00 |
| ZEAMX | 0.07 | 0.00 | 0.87 | 0.04 | 0.33 | 0.00 | 0.00 | 0.05 | 0.02 | 0.09 | 0.01 | 98.51 |

(b) Standard Deviation of ResNet–50

| | ALOMY | AMARE | AVEFA | CHEAL | HELAN | LAMPU | MATCH | SETSS | SOLNI | SOLTU | STEME | ZEAMX |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| ALOMY | 0.32 | 0.00 | 0.39 | 0.04 | 0.00 | 0.03 | 0.00 | 0.15 | 0.04 | 0.00 | 0.00 | 0.04 |
| AMARE | 0.00 | 0.20 | 0.00 | 0.13 | 0.00 | 0.00 | 0.10 | 0.04 | 0.06 | 0.00 | 0.00 | 0.06 |
| AVEFA | 0.76 | 0.02 | 0.87 | 0.07 | 0.04 | 0.03 | 0.08 | 0.02 | 0.07 | 0.07 | 0.00 | 0.14 |
| CHEAL | 0.00 | 0.16 | 0.19 | 0.86 | 0.10 | 0.16 | 0.00 | 0.23 | 0.61 | 0.35 | 0.00 | 0.14 |
| HELAN | 0.00 | 0.01 | 0.08 | 0.05 | 0.30 | 0.04 | 0.02 | 0.01 | 0.02 | 0.10 | 0.00 | 0.18 |
| LAMPU | 0.00 | 0.00 | 0.00 | 0.14 | 0.03 | 0.26 | 0.00 | 0.03 | 0.19 | 0.12 | 0.07 | 0.00 |
| MATCH | 0.04 | 0.10 | 0.01 | 0.00 | 0.00 | 0.00 | 0.10 | 0.00 | 0.00 | 0.00 | 0.01 | 0.00 |
| SETSS | 0.48 | 0.19 | 0.09 | 0.26 | 0.00 | 0.12 | 0.12 | 0.64 | 0.00 | 0.00 | 0.18 | 0.12 |
| SOLNI | 0.07 | 0.15 | 0.45 | 0.41 | 0.07 | 0.54 | 0.07 | 0.11 | 1.00 | 0.31 | 0.00 | 0.00 |
| SOLTU | 0.10 | 0.00 | 0.16 | 0.12 | 0.00 | 0.11 | 0.10 | 0.00 | 0.00 | 0.25 | 0.12 | 0.10 |
| STEME | 0.00 | 0.06 | 0.00 | 0.13 | 0.00 | 0.00 | 0.05 | 0.05 | 0.07 | 0.00 | 0.15 | 0.00 |
| ZEAMX | 0.02 | 0.00 | 0.12 | 0.05 | 0.12 | 0.00 | 0.00 | 0.03 | 0.03 | 0.05 | 0.02 | 0.19 |
Table 5. Confusion matrix of the mean crop and weed identification for the Xception Convolutional Neural Network. The prediction reached between 97.5% and 97.8% top-1 accuracy on the test dataset. The values are the percentage of identified labels for each plant species (each row sums to 100%). (a) Mean values for the ten training runs; cell values above 2% are highlighted. (b) Standard deviation for the ten training runs; values above 0.5% are highlighted.
(a) Mean Values of Xception

| | ALOMY | AMARE | AVEFA | CHEAL | HELAN | LAMPU | MATCH | SETSS | SOLNI | SOLTU | STEME | ZEAMX |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| ALOMY | 97.63 | 0.03 | 1.21 | 0.00 | 0.00 | 0.00 | 0.01 | 1.11 | 0.01 | 0.00 | 0.00 | 0.01 |
| AMARE | 0.01 | 99.26 | 0.00 | 0.01 | 0.00 | 0.00 | 0.51 | 0.10 | 0.11 | 0.00 | 0.00 | 0.00 |
| AVEFA | 1.67 | 0.00 | 96.84 | 0.05 | 0.11 | 0.11 | 0.27 | 0.05 | 0.26 | 0.15 | 0.00 | 0.48 |
| CHEAL | 0.00 | 0.33 | 0.22 | 92.46 | 0.33 | 0.71 | 0.25 | 0.96 | 4.36 | 0.05 | 0.11 | 0.22 |
| HELAN | 0.00 | 0.00 | 0.48 | 0.09 | 97.37 | 0.19 | 0.12 | 0.00 | 0.03 | 0.37 | 0.00 | 1.34 |
| LAMPU | 0.00 | 0.04 | 0.02 | 0.59 | 0.13 | 98.15 | 0.00 | 0.01 | 0.61 | 0.38 | 0.07 | 0.00 |
| MATCH | 0.01 | 0.16 | 0.00 | 0.00 | 0.01 | 0.00 | 99.74 | 0.03 | 0.00 | 0.00 | 0.04 | 0.00 |
| SETSS | 2.10 | 0.06 | 0.15 | 0.56 | 0.00 | 0.00 | 0.09 | 96.69 | 0.00 | 0.06 | 0.25 | 0.03 |
| SOLNI | 0.15 | 0.25 | 0.97 | 4.64 | 0.07 | 2.08 | 0.05 | 0.00 | 91.49 | 0.17 | 0.02 | 0.10 |
| SOLTU | 0.22 | 0.00 | 0.00 | 0.30 | 0.59 | 0.43 | 0.08 | 0.08 | 0.00 | 98.14 | 0.11 | 0.05 |
| STEME | 0.00 | 0.07 | 0.00 | 0.05 | 0.00 | 0.00 | 0.03 | 0.14 | 0.09 | 0.01 | 99.61 | 0.00 |
| ZEAMX | 0.09 | 0.00 | 0.62 | 0.09 | 0.40 | 0.00 | 0.00 | 0.03 | 0.00 | 0.01 | 0.02 | 98.75 |

(b) Standard Deviation of Xception

| | ALOMY | AMARE | AVEFA | CHEAL | HELAN | LAMPU | MATCH | SETSS | SOLNI | SOLTU | STEME | ZEAMX |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| ALOMY | 0.33 | 0.04 | 0.26 | 0.00 | 0.00 | 0.00 | 0.03 | 0.16 | 0.03 | 0.00 | 0.00 | 0.03 |
| AMARE | 0.04 | 0.24 | 0.00 | 0.04 | 0.00 | 0.00 | 0.16 | 0.12 | 0.11 | 0.00 | 0.00 | 0.00 |
| AVEFA | 0.64 | 0.00 | 0.65 | 0.05 | 0.05 | 0.04 | 0.06 | 0.05 | 0.04 | 0.04 | 0.00 | 0.10 |
| CHEAL | 0.00 | 0.12 | 0.18 | 0.89 | 0.23 | 0.27 | 0.00 | 0.36 | 0.87 | 0.10 | 0.17 | 0.18 |
| HELAN | 0.01 | 0.00 | 0.06 | 0.03 | 0.24 | 0.04 | 0.06 | 0.01 | 0.03 | 0.08 | 0.00 | 0.24 |
| LAMPU | 0.00 | 0.04 | 0.04 | 0.20 | 0.08 | 0.21 | 0.00 | 0.03 | 0.17 | 0.21 | 0.07 | 0.00 |
| MATCH | 0.02 | 0.06 | 0.00 | 0.00 | 0.02 | 0.00 | 0.08 | 0.04 | 0.00 | 0.01 | 0.04 | 0.00 |
| SETSS | 0.59 | 0.12 | 0.19 | 0.35 | 0.00 | 0.00 | 0.19 | 0.53 | 0.00 | 0.12 | 0.24 | 0.09 |
| SOLNI | 0.11 | 0.16 | 0.26 | 0.63 | 0.11 | 0.45 | 0.09 | 0.00 | 0.84 | 0.18 | 0.07 | 0.15 |
| SOLTU | 0.08 | 0.00 | 0.00 | 0.22 | 0.36 | 0.10 | 0.11 | 0.11 | 0.00 | 0.51 | 0.12 | 0.10 |
| STEME | 0.00 | 0.06 | 0.00 | 0.07 | 0.00 | 0.00 | 0.05 | 0.05 | 0.03 | 0.03 | 0.12 | 0.00 |
| ZEAMX | 0.04 | 0.00 | 0.15 | 0.06 | 0.12 | 0.00 | 0.00 | 0.03 | 0.00 | 0.02 | 0.06 | 0.31 |
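The row-normalised percentages in Tables 3–5 (each row summing to 100%) can be obtained from raw prediction counts as sketched below with scikit-learn; `y_true` and `y_pred` are assumed arrays of species labels for the test images and are not provided here.

```python
# Sketch of how a row-normalised confusion matrix (in percent, rows summing to
# 100 as in Tables 3-5) can be computed with scikit-learn.
# y_true, y_pred: assumed lists of EPPO codes for the test images.
import numpy as np
from sklearn.metrics import confusion_matrix

species = ["ALOMY", "AMARE", "AVEFA", "CHEAL", "HELAN", "LAMPU",
           "MATCH", "SETSS", "SOLNI", "SOLTU", "STEME", "ZEAMX"]

cm = confusion_matrix(y_true, y_pred, labels=species)        # raw counts
cm_percent = 100 * cm / cm.sum(axis=1, keepdims=True)        # each row sums to 100%
print(np.round(cm_percent, 2))
```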
Table 6. Precision, recall, and f1-score for the first repetition of the three neural networks tested. These results derive from testing the networks only on the testing proportion of the dataset.
| | Plants per category | VGG16 precision | VGG16 recall | VGG16 f1-score | ResNet–50 precision | ResNet–50 recall | ResNet–50 f1-score | Xception precision | Xception recall | Xception f1-score |
|---|---|---|---|---|---|---|---|---|---|---|
| ALOMY | 1114 | 0.77 | 0.86 | 0.81 | 0.93 | 0.99 | 0.96 | 0.94 | 0.98 | 0.96 |
| AMARE | 792 | 0.81 | 0.93 | 0.86 | 0.98 | 1.00 | 0.99 | 0.99 | 0.99 | 0.99 |
| AVEFA | 1862 | 0.85 | 0.69 | 0.76 | 0.98 | 0.95 | 0.97 | 0.98 | 0.95 | 0.97 |
| CHEAL | 405 | 0.47 | 0.57 | 0.51 | 0.87 | 0.92 | 0.89 | 0.90 | 0.93 | 0.92 |
| HELAN | 2465 | 0.97 | 0.86 | 0.91 | 1.00 | 0.97 | 0.98 | 0.99 | 0.97 | 0.98 |
| LAMPU | 1141 | 0.80 | 0.78 | 0.79 | 0.98 | 0.97 | 0.98 | 0.98 | 0.99 | 0.98 |
| MATCH | 2275 | 0.91 | 0.94 | 0.93 | 1.00 | 1.00 | 1.00 | 0.99 | 1.00 | 1.00 |
| SETSS | 359 | 0.53 | 0.77 | 0.63 | 0.96 | 0.95 | 0.96 | 0.93 | 0.96 | 0.95 |
| SOLNI | 448 | 0.51 | 0.63 | 0.57 | 0.92 | 0.90 | 0.91 | 0.93 | 0.92 | 0.92 |
| SOLTU | 412 | 0.57 | 0.82 | 0.67 | 0.93 | 0.99 | 0.96 | 0.95 | 0.98 | 0.96 |
| STEME | 1042 | 0.94 | 0.77 | 0.84 | 1.00 | 0.99 | 0.99 | 1.00 | 0.99 | 1.00 |
| ZEAMX | 1667 | 0.83 | 0.84 | 0.83 | 0.97 | 0.99 | 0.98 | 0.97 | 0.99 | 0.98 |
| accuracy | 13,982 | | | 0.82 | | | 0.97 | | | 0.98 |
| macro avg | 13,982 | 0.75 | 0.79 | 0.76 | 0.96 | 0.97 | 0.96 | 0.96 | 0.97 | 0.97 |
| weighted avg | 13,982 | 0.83 | 0.82 | 0.82 | 0.98 | 0.97 | 0.97 | 0.98 | 0.98 | 0.98 |
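The per-species precision, recall, and f1-score in Table 6, together with the accuracy, macro average, and weighted average rows, follow the standard definitions of these measures (cf. [47]). A report in this layout can be produced as sketched below, again assuming that `y_true` and `y_pred` hold the labels of the 13,982 test images for one repetition.

```python
# Hedged sketch: reproducing the layout of Table 6 with scikit-learn.
# y_true and y_pred are assumed lists of EPPO codes for the test images.
from sklearn.metrics import classification_report

species = ["ALOMY", "AMARE", "AVEFA", "CHEAL", "HELAN", "LAMPU",
           "MATCH", "SETSS", "SOLNI", "SOLTU", "STEME", "ZEAMX"]

# Prints precision, recall, f1-score and support per class, plus the
# accuracy, macro-average and weighted-average rows.
print(classification_report(y_true, y_pred, labels=species, digits=2))
```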
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
