Article

A New Vegetation Segmentation Approach for Cropped Fields Based on Threshold Detection from Hue Histograms

1 Department of Geomatics Engineering, University of Calgary, 2500 University Dr NW, Calgary, AB T2N1N4, Canada
2 Leica Geosystems Ltd., 245 Aero Way NE, Calgary, AB T2E6K2, Canada
* Author to whom correspondence should be addressed.
Sensors 2018, 18(4), 1253; https://doi.org/10.3390/s18041253
Submission received: 27 February 2018 / Revised: 30 March 2018 / Accepted: 13 April 2018 / Published: 18 April 2018

Abstract

Over the last decade, the use of unmanned aerial vehicle (UAV) technology has evolved significantly across different applications, as it provides a special platform that combines the benefits of terrestrial and aerial remote sensing. This technology has therefore been established as an important source of data for different precision agriculture (PA) applications, such as crop health monitoring and weed management. Generally, these PA applications depend on a vegetation segmentation process as an initial step, which aims to detect the vegetation objects in images of agricultural fields. The main result of the vegetation segmentation process is a binary image, where vegetation is presented in white and the remaining objects are presented in black. Such a process can easily be performed using different vegetation indexes derived from multispectral imagery. Recently, to expand the use of UAV imagery systems for PA applications, it became important to reduce the cost of such systems by using low-cost RGB cameras. Developing vegetation segmentation techniques for RGB images is thus a challenging problem. This paper introduces a new vegetation segmentation methodology for low-cost UAV RGB images based on the hue color channel. The proposed methodology follows the assumption that the colors in any agricultural field image can be divided into vegetation and non-vegetation colors. Four main steps are developed to detect five different threshold values using the hue histogram of the RGB image; these thresholds are capable of discriminating the dominant color, either vegetation or non-vegetation, within the agricultural field image. The achieved results show that the proposed methodology generates accurate and stable vegetation segmentation performance, with a mean accuracy of 87.29% and a standard deviation of 12.5%.

1. Introduction

The agriculture industry is a major source of livelihood around the world. Its importance is reinforced by its role in supplying food, clothing, medicine, and employment opportunities to all human beings. Agriculture also has an important ability to enhance environmental conditions through providing different biological products [1]. Due to such importance, it became vital to transform the agriculture industry into a knowledge-based industry to enhance the benefits of its outputs. A major step toward this goal is to apply a management system that can achieve economically and environmentally sustainable development of the agricultural process. Meanwhile, it is important to avoid the side effects that different agricultural inputs, such as water, chemical herbicides, fertilizers, and machine fuel, have on the surroundings. For such needs, a special management system was needed to distribute the different inputs based on the needs of each spot in the agricultural field, avoiding any misuse of resources and thus any harmful effects on the surrounding environment. Such a management system is known as smart farming or precision agriculture (PA), which is defined as using the right technology to perform the right agricultural activity at the right time and in the right location [2].
Over the last three decades, the use of PA as a management system for the agriculture industry has become one of the top ten revolutions in agriculture [3]. Generally, PA distributes the inputs in a customized manner based on the needs of each part of the field. Therefore, a PA management system depends on different technologies to collect suitable information, detect the needs of each zone in the field, and control the distribution of these inputs. A leading example of these technologies is remote sensing (RS), which can provide valuable information for PA activities. Common platforms such as satellites and airplanes give RS technology the ability to acquire images covering large areas within a short time [4]. Such images are used to monitor crop growth and crop stress and to predict crop yield. Despite the valuable information that space-borne and airborne remote sensing systems can provide, these systems have shortcomings that limit the use of RS technology for PA. This can be seen by comparing the adoption of GNSS technology with that of RS technology: different survey reports show that GNSS technology is far more widely practiced for PA than RS technology [5].
Generally, space-borne and airborne RS systems are limited by low temporal and spatial resolution compared to the needs of farmers [6]. On the other hand, terrestrial RS systems are capable of increasing the spatial resolution while controlling the temporal resolution [7]. However, such systems have other limitations, especially the time needed to cover large agricultural fields. Therefore, unmanned aerial vehicles (UAVs) have been introduced and utilized as an alternative solution to fill the gap between aerial and terrestrial RS systems.
Over the last few decades, UAVs have shown great potential as an imagery system platform for different applications [8]. UAV imagery systems are considered a low-cost alternative to regular remote sensing systems such as aerial and satellite-based systems. They provide higher spatial resolution, as they fly at lower altitudes, and such spatial resolution can provide the information needed for different agricultural applications such as weed detection and crop health monitoring. Moreover, UAV systems can provide high temporal resolution and are capable of real-time and flexible imagery acquisition [9,10,11,12,13]. Therefore, UAV imagery systems have been implemented for different PA applications, including crop health monitoring [14], weed management [15,16], and crop row detection [17].
The aforementioned PA applications mainly depend on computer vision or image processing approaches to extract the information needed to achieve their goals. Generally, for these applications, the vegetation extraction, or vegetation segmentation, process is an initial step toward the objective of each application [7,16,18,19]. The vegetation extraction process aims to classify the pixels in the acquired images into two classes: a vegetation class and a non-vegetation class. The vegetation class includes all plant objects in the images, such as crops and weeds, while the remaining objects belong to the non-vegetation class. To avoid any misclassification that can affect the quality of the targeted application, the vegetation extraction, vegetation segmentation, or background removal process should be performed in an appropriate way. Usually, vegetation extraction can easily be performed using vegetation indexes to detect the vegetation objects. These vegetation indexes, such as NDVI, Normalized Red (NR), Normalized Green (NG), Difference Vegetation Index (DVI), and many more, are applied to multispectral images, taking advantage of the availability of different spectral channels, especially the NIR, R, and G channels [2].
Unfortunately, the use of multispectral cameras in UAV imagery systems is considered a disadvantage due to the high cost of these sensors compared to low-cost off-the-shelf RGB cameras. Such cost is a major obstacle to expanding the use of UAVs for PA applications [4]. Thus, there was a need to develop the ability to perform the vegetation extraction process using RGB images, and different vegetation extraction methods were developed for this purpose. These methods can be classified into two general approaches: the combination of color index-based and threshold index-based approaches, and machine learning-based approaches [19].

1.1. Color Index-Based Segmentation

In computer vision and image processing applications, color is an important factor due to its ability to discriminate between objects. Consequently, several researchers have used color characteristics to distinguish green vegetation objects from soil or background for different agricultural applications. Generally, color index-based approaches convert the RGB values into a modified grayscale image using a color index. The generated grayscale image is then used to perform the vegetation extraction process with a suitable threshold value. This combination is preferred for two main advantages: simplicity of implementation and high computational efficiency. These advantages are crucial for any real-time or near real-time agricultural application, which is a needed characteristic of any system provided to farmers.
Generally, within any RGB color space image, the color of each pixel is represented using the combination of three values: R, G, and B. Transforming the pixel color from the three values of R, G, and B into one grayscale value while keeping the ability to discriminate the vegetation objects is therefore a challenging procedure. Thus, different researchers have developed multiple color index-based techniques that generate different RGB vegetation indexes to create the modified grayscale images. These vegetation indexes use different formulas, changing the weights of the R, G, and B values, as shown in Equations (1)–(4):
$$\mathrm{NDI} = 128 \times \left( \frac{G - R}{G + R} + 1 \right) \tag{1}$$
$$\mathrm{ExG} = 2 \times G - R - B \tag{2}$$
$$\mathrm{ExR} = 1.3 \times R - G \tag{3}$$
$$\mathrm{CIVE} = 0.441 \times R - 0.811 \times G + 0.385 \times B + 18.78745 \tag{4}$$
Thus, the modified grayscale images provide the ability to discriminate the vegetation objects, as these have grayscale values different from other objects. This approach was followed by different researchers proposing different vegetation extraction indexes, such as the Normalized Difference Index (NDI) [20], the Excess Green Index (ExG) [21], the Excess Red Index (ExR) [22], and the Color Index of Vegetation Extraction (CIVE) [23]. Other researchers combined different vegetation indexes to create a modified one. Examples of such techniques are the Excess Green minus Excess Red Index (ExGR) [24], Combined Indices 1 (COM1) [25], and Combined Indices 2 (COM2) [26], as shown in Equations (5)–(7):
$$\mathrm{ExGR} = \mathrm{ExG} - \mathrm{ExR} \tag{5}$$
$$\mathrm{COM1} = \mathrm{ExG} + \mathrm{CIVE} + \mathrm{ExGR} + \mathrm{VEG} \tag{6}$$
$$\mathrm{COM2} = 0.36 \times \mathrm{ExG} + 0.47 \times \mathrm{CIVE} + 0.17 \times \mathrm{VEG} \tag{7}$$
Finally, it is important to note that color index-based techniques need a threshold value to convert the modified grayscale image into a vegetation binary image, where the vegetation objects are white while the remaining objects are black.
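To make the preceding definitions concrete, the following minimal Python/NumPy sketch computes several of the indexes of Equations (1)–(5) for one RGB image. It is our illustration, not code from the cited works; the small constant added to the NDI denominator is an assumption made to avoid division by zero.

```python
import numpy as np

def color_indices(rgb):
    """Return modified grayscale images from an HxWx3 RGB array (Eqs. 1-5)."""
    r, g, b = [rgb[..., i].astype(float) for i in range(3)]
    ndi = 128.0 * ((g - r) / (g + r + 1e-9) + 1.0)       # Eq. (1), epsilon avoids /0
    exg = 2.0 * g - r - b                                 # Eq. (2)
    exr = 1.3 * r - g                                     # Eq. (3)
    cive = 0.441 * r - 0.811 * g + 0.385 * b + 18.78745   # Eq. (4)
    exgr = exg - exr                                      # Eq. (5)
    return {"NDI": ndi, "ExG": exg, "ExR": exr, "CIVE": cive, "ExGR": exgr}
```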

1.2. Threshold Index-Based Segmentation

As mentioned, color index-based approaches need a threshold value to generate the vegetation binary image as the main output of the vegetation segmentation process. Therefore, different threshold detection approaches have been proposed for this purpose. Among these, Otsu's threshold [27] is considered one of the leading thresholding techniques. Otsu's method is implemented in many segmentation applications, and many researchers have used it for the vegetation segmentation process [28,29]. However, due to the importance of the threshold value for the binarization process, further studies were carried out to produce other threshold detectors.
For example, a suitable threshold value was introduced by calculating the local homogeneity of the grayscale image. Researchers introduced an algorithm for vegetation segmentation that converts the RGB images into grayscale and then generates local homogeneity images to detect a homogeneity threshold [30,31]. The authors empirically set the threshold value to 0.97 for the vegetation segmentation process. Moreover, the histogram entropy was used to produce a threshold detection method applied to a weighted grayscale image [32]. Later, another threshold detection method was introduced based on the assumption that the colors in the image follow two Gaussian distributions [33]. The authors assumed that the distribution with the higher mean represents the vegetation, while the one with the lower mean represents the non-vegetation. A similar threshold detection methodology was introduced later, where the authors assumed that there are two groups of pixels and that the threshold value is the one that minimizes the sum of the variances of the two groups [34].
The main limitation of the different proposed threshold techniques is the stability of the binarization accuracy, as any mis-segmentation is generally caused by error in the detected threshold. If the threshold is not appropriately estimated, the generated segmentation will be highly affected. Moreover, different threshold-based methodologies assume an equal presence of vegetation and non-vegetation objects in the collected images; such an assumption is not valid throughout the different growth stages. On the other side, color index-based techniques that convert the RGB values into grayscale do not always produce a well-discriminating grayscale image, mainly due to the effect of illumination conditions on the RGB values. Therefore, despite the simplicity of color index-based and threshold index-based approaches and the techniques developed around them, they have limitations, such as the effect of changing illumination conditions on the achieved vegetation segmentation results, particularly in sunny and overcast conditions. More research was therefore needed to investigate other alternatives, avoid these limitations, and achieve better results.
To avoid the illumination effects, the use of different color spaces such as HSV or LAB instead of RGB was considered. The main motivation for using these color spaces is their ability to describe an object's color separately from its illumination. The RGB color space describes the object color using a combination of three values, R, G, and B, which together encode both the color content and the illumination content without discriminating between the two. On the other hand, the HSV color space describes the object's color content separately from the illumination content. The HSV color space is composed of three channels: hue, saturation, and value. The hue channel represents the color of the object, while the other two channels, S and V, describe the illumination content. Hence, the H value provides a description of the object's color that is independent of illumination, and the proposed vegetation segmentation technique therefore adopts HSV as the color space of the images.
Generally, the proposed approach adopts the main assumption that any image of an agricultural field contains two main colors, or classes. The first class comprises the vegetation objects with green color, while the second class is the background soil of the field with yellow or brown color. Therefore, this paper proposes a vegetation segmentation methodology following the threshold index-based approach to discriminate between these two main classes, or colors. In the following section, the proposed methodology is discussed. Then, a description of the data acquisition process is provided. The subsequent section provides detailed steps to test the proposed vegetation segmentation approach. Later, an analysis of the achieved results is presented. Finally, the paper's main conclusions are presented.

2. Methodology

As discussed in previous sections, the vegetation segmentation process is an initial and vital step for different agricultural applications such as weed detection and crop row detection, especially when working with RGB images. Due to the importance of this step, different solutions and approaches have been proposed to perform the vegetation segmentation as accurately and quickly as possible. The main motivation for targeting a higher-accuracy vegetation segmentation process is to improve the results of the targeted agricultural application, while fast performance is essential due to the importance of time for the different PA applications. The proposed vegetation segmentation technique targets RGB images, as they are the main output of low-cost UAV imagery systems. The proposed methodology follows the procedure of threshold index-based approaches to achieve simplicity of implementation and low processing time.
As shown in Figure 1, the proposed vegetation segmentation technique has four main steps. First, the acquired image's color space is converted from RGB into HSV. Then, a filtered hue histogram graph is generated for each image, along with a second graph representing the Gaussian curve that fits the hue histogram. These two graphs are used in the threshold detection process in the third step. Finally, the detected threshold value is applied to the hue grayscale image to generate the vegetation binary image. The following subsections provide a full description of the proposed technique.

2.1. HSV Color Space

Generally, the images acquired from a low-cost UAV imagery system are RGB images, where the color of any object is represented by the combined values of the R, G, and B channels. The main problem with such a color representation is the impact of changes in illumination conditions on the objects' colors. So, there was a need to use a different color space that describes the objects' colors independently of the illumination effect.
For such needs, HSV can be used as the color space of the images instead of RGB. The main difference between these color spaces is the color representation. The HSV color space represents the object's color using three parameters: hue (H), saturation (S), and value (V). The H channel represents the color of the object, while the S and V values represent the illuminance state of the object's color. Such a description makes it possible to separate the color from the illuminance and therefore avoids the effect of illumination changes on the object color. Therefore, the proposed vegetation segmentation technique adopts the HSV color space.
Hence, as shown in Figure 1, the first step of the proposed methodology is to convert the image’s color space from the RGB into HSV. Generally, the transformation process of the pixel’s color value from RGB into HSV can be performed using the following equations [35]:
$$R' = \frac{R}{255}, \quad G' = \frac{G}{255}, \quad B' = \frac{B}{255} \tag{8}$$
$$M = \max(R', G', B'), \quad m = \min(R', G', B'), \quad C = M - m \tag{9}$$
$$H = \begin{cases} 0^{\circ} & \text{if } C = 0 \\ 60^{\circ} \times \left( \dfrac{G' - B'}{C} \bmod 6 \right) & \text{if } M = R' \\ 60^{\circ} \times \left( \dfrac{B' - R'}{C} + 2 \right) & \text{if } M = G' \\ 60^{\circ} \times \left( \dfrac{R' - G'}{C} + 4 \right) & \text{if } M = B' \end{cases} \tag{10}$$
$$S = \begin{cases} 0 & \text{if } M = 0 \\ \dfrac{C}{M} & \text{if } M \neq 0 \end{cases} \tag{11}$$
$$V = M \tag{12}$$
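For illustration, the following NumPy sketch implements Equations (8)–(12) directly; it is our reconstruction, not the authors' implementation, and libraries such as OpenCV provide equivalent built-in conversions.

```python
import numpy as np

def rgb_to_hsv(rgb):
    """Convert an HxWx3 uint8 RGB array to H (degrees), S, and V arrays."""
    rgb = rgb.astype(float) / 255.0                        # Eq. (8)
    m_max = rgb.max(axis=-1)                               # M, Eq. (9)
    c = m_max - rgb.min(axis=-1)                           # chroma C = M - m
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    safe_c = np.where(c == 0, 1.0, c)                      # avoid division by zero
    h = np.zeros_like(c)
    h = np.where(m_max == r, ((g - b) / safe_c) % 6.0, h)  # Eq. (10), M = R'
    h = np.where(m_max == g, (b - r) / safe_c + 2.0, h)    #           M = G'
    h = np.where(m_max == b, (r - g) / safe_c + 4.0, h)    #           M = B'
    h = np.where(c == 0, 0.0, 60.0 * h)                    # degrees; gray -> 0
    safe_m = np.where(m_max == 0, 1.0, m_max)
    s = np.where(m_max == 0, 0.0, c / safe_m)              # Eq. (11)
    return h, s, m_max                                     # V = M, Eq. (12)
```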

2.2. Image Hue Histogram

After converting the image's color space into HSV, the hue (H) image is used to detect the threshold value that discriminates between vegetation and non-vegetation objects. As discussed, the hue image describes the color content of the objects separately from the illuminance state. The hue image, as shown in Figure 2, therefore provides a suitable grayscale image that can be used to classify objects based on their color content. The following step of the proposed vegetation segmentation technique is threshold detection using the hue image and its histogram.
Generally, the histogram of a hue image represents the relationship between each hue value and the number of pixels having that hue value, as shown in Figure 3. The use of the image histogram for threshold detection was introduced by different techniques [27,32,36]. These techniques were generally applied to the vegetation segmentation process under the assumption that there are two dominant classes, vegetation and non-vegetation, whose colors follow the normal, or Gaussian, distribution. Based on probability theory, the threshold value that discriminates between these two classes is the value that discriminates between the two Gaussian distributions. This concept is followed by many threshold index-based techniques as well as by the proposed methodology.
Figure 4 presents three images of a canola crop field at three different growth stages. These images show that the presence of the vegetation and non-vegetation classes is not equal across the different growth stages, as can be seen in the hue histogram of each image. Moreover, the color distribution of one class, either vegetation or non-vegetation, might be barely noticeable in the image. Thus, the assumption that the colors of these two classes are distributed as two separate Gaussian distributions is not always valid.
Therefore, the proposed threshold detection technique assumes that there is at least one main class, either vegetation or non-vegetation, whose color content follows a Gaussian distribution. Under this assumption, the proposed threshold detection process is based on detecting the limits of the main Gaussian distribution in the image histogram. The values of these limits, or borders, represent the threshold values that can discriminate the dominant class, either vegetation or non-vegetation, from the other class in the image. To detect a suitable threshold, two different graphs are generated from the hue histogram of the collected field image to discriminate the hue color distribution of the dominant class. These graphs are a filtered hue histogram and a fitted Gaussian curve for the hue histogram, as shown in Figure 3.
In step two, the proposed technique filters the hue image histogram to remove any hue value that has a small presence in the image. This filtration is performed based on the number of pixels for each hue: if a hue value is represented by less than 0.001% of the image's pixels, the process classifies it as a small-presence hue and removes it from the hue histogram. The main motivation for this step is to act as an outlier removal procedure that discards low-importance hue values. A sketch of this step is given below.
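The following is a minimal sketch of the filtering step under our reading of the text; binning hue into integer degrees and zeroing rare bins are our assumptions.

```python
import numpy as np

def filtered_hue_histogram(hue_deg):
    """Histogram over integer hue bins (0-359) with low-presence bins removed."""
    counts, _ = np.histogram(hue_deg, bins=360, range=(0, 360))
    min_pixels = 0.001 / 100.0 * hue_deg.size   # 0.001% of all image pixels
    counts[counts < min_pixels] = 0             # drop rare hues as outliers
    return counts
```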
The following step is to fit the filtered hue histogram with a normal distribution, or Gaussian curve. The basic concept of curve fitting is to find the fitting parameters (a, b, c) of the Gaussian Equation (13) using the hue histogram (x, y) data. For this purpose, different approaches can be used to estimate these parameters, such as the Gauss-Newton algorithm or its modification, the Levenberg-Marquardt algorithm [37]:
$$y = \sum_{i=1}^{n} a_i \times e^{-\left( \frac{x - b_i}{c_i} \right)^2} \tag{13}$$
where a is the amplitude, b is the centroid, c is related to the peak width, and n is the targeted number of peaks. As the main assumption is the existence of two main Gaussian distributions representing the two main classes, the number of terms (n) is set to 2, as in the sketch below.
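A hedged sketch of fitting Equation (13) with n = 2 using SciPy's least-squares solver (which defaults to a Levenberg-Marquardt-style routine); the initial guesses are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def two_term_gauss(x, a1, b1, c1, a2, b2, c2):
    """Equation (13) with n = 2."""
    return (a1 * np.exp(-((x - b1) / c1) ** 2)
            + a2 * np.exp(-((x - b2) / c2) ** 2))

def fit_hue_histogram(counts):
    x = np.arange(counts.size, dtype=float)    # hue bins in degrees
    y = counts.astype(float)
    peak = x[np.argmax(y)]                     # seed near the dominant hue
    p0 = [y.max(), peak, 10.0, y.max() / 2.0, (peak + 90.0) % 360.0, 20.0]
    params, _ = curve_fit(two_term_gauss, x, y, p0=p0, maxfev=10000)
    return params                              # (a1, b1, c1, a2, b2, c2)
```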

2.3. Threshold Detection Process

The third step of the proposed vegetation segmentation technique aims to detect the suitable threshold value. As explained, the proposed technique assumes that the threshold value lies beyond the limits, or borders, of the dominant class's color distribution, as shown in Figure 5. Therefore, the proposed technique detects several possible threshold values and derives from them the final threshold for the image binarization process. These values aim to discriminate the dominant class's colors from the remaining colors, and the two generated graphs, the filtered hue histogram and the fitted Gaussian curve, are used to detect them. The following parts explain these steps.
The threshold detection process starts with detecting the dominant hue value of the dominant class. This hue value can be detected as the mean of the dominant class's Gaussian curve (mean_dominant) or as the most represented hue value in the filtered hue histogram (main_Hue). This value is important for classifying the dominant class as vegetation or non-vegetation and for detecting the direction in which to search for the threshold values, either to the right or to the left of the mean value, as shown in Figure 5. The (mean_dominant) is used for the classification of the dominant class with 60° as the threshold value discriminating between vegetation and non-vegetation. This value was chosen because the hue of green, the main color of vegetation objects, ranges from yellow at 60° to cyan at 180° [38]. This range can be used to classify the main hue value of the dominant class, but it is not suitable for classifying every hue value: the boundary between yellow and green is affected by different factors, so using one specific value to discriminate between yellow and green is not valid for all images. Hence, the suggested range is used to classify the main hue value only.
After classifying the dominant class, or color, in the hue image, the threshold search direction can be determined. In addition, by detecting the number of peaks, or Gaussian curves, in the fitted Gaussian graph, the system classifies the image into one of four main case types (a sketch of this classification follows the list below). These case types are:
  • Case (1): the image has one Gaussian peak and the dominant class is non-vegetation.
  • Case (2): the image has one Gaussian peak and the dominant class is vegetation.
  • Case (3): the image has more than one Gaussian peak and the dominant class is non-vegetation.
  • Case (4): the image has more than one Gaussian peak and the dominant class is vegetation.
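An illustrative sketch of the case assignment; the function and parameter names are ours, and the rule that hues above 60° indicate vegetation follows the classification of (mean_dominant) described above.

```python
def classify_case(num_peaks, mean_dominant_deg):
    """Assign one of the four case types from peak count and dominant hue."""
    vegetation = mean_dominant_deg > 60.0   # green hues start near 60 degrees
    if num_peaks == 1:
        return 2 if vegetation else 1       # Cases (2) / (1)
    return 4 if vegetation else 3           # Cases (4) / (3)
```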
Then, using the two hue graphs, different candidate threshold values are detected. These values offer different descriptions of the borders of the dominant class, where the threshold values are assumed to be local minima within these borders. First, the fitted hue Gaussian graph introduces two possible threshold values, (th_1) and (th_2). The first threshold, (th_1), describes the border of the dominant class's Gaussian curve as the sum of the (mean_dominant) value and a suitable confidence interval (σ, 2σ, or 3σ) of the Gaussian curve, as shown in Figure 6. It is important to note that the process rejects (th_1) if the confidence interval is equal to or larger than the distance (S2), which is the difference between the end of the fitted Gaussian curve, on the side opposite the threshold search direction, and the mean hue value (mean_dominant). This condition serves as an indication that the hue histogram is fitted with the Gaussian curve correctly.
Theoretically, the Gaussian curve is symmetric around its mean value. Therefore, in the hue histogram, the difference between the mean and the confidence interval cannot be a negative value, as this would indicate that the suggested Gaussian curve did not fit the hue histogram correctly. Thus, if the confidence interval is larger than the distance (S2), the first threshold value (th_1) is rejected.
The second threshold value derived from the fitted hue Gaussian graph is (th_2). This value is detected only if there is more than one Gaussian curve in the fitted graph, i.e., for cases (3) and (4). Generally, (th_2) represents the valley point between the dominant class's Gaussian curve and the following Gaussian curve, based on the detected search direction. This point is a potential description of the dominant class's Gaussian curve border, or a local minimum. A sketch of both thresholds is given below.
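A sketch of (th_1) and (th_2) under the stated assumptions; it reuses two_term_gauss from the fitting sketch above, converts the width parameter c of Equation (13) to a standard deviation via σ = c/√2 (since the equation omits the usual factor of 2 in the exponent), and approximates the inter-peak valley numerically. All names are ours.

```python
import numpy as np

def th1_th2(params, hist_end_deg, direction=+1, k=3):
    """th_1 from the dominant Gaussian border, th_2 from the inter-peak valley."""
    a1, b1, c1, a2, b2, c2 = params
    sigma = abs(c1) / np.sqrt(2.0)
    th_1 = b1 + direction * k * sigma        # mean_dominant +/- k*sigma
    if k * sigma >= abs(hist_end_deg - b1):  # S2 check: reject a poor fit
        th_1 = None
    th_2 = None
    if a2 > 0:                               # a second peak exists (cases 3, 4)
        grid = np.linspace(b1, b2, 256)
        curve = two_term_gauss(grid, *params)
        th_2 = float(grid[np.argmin(curve)])  # valley between the two peaks
    return th_1, th_2
```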
The second graph used for threshold detection is the filtered hue histogram, from which three potential threshold values are detected: (th_3), (th_4), and (th_5). These thresholds provide three different approaches for describing the local minimum that discriminates the dominant color, or class, in the agricultural field image from the remaining colors. First, (th_3) represents the first valley point that is smaller than the following valley in the filtered hue histogram, as shown in Figure 7. This threshold will be referred to as the first lowest valley following the main hue value. In more detail, (th_3) is detected by finding the valleys that follow the main hue value in the hue histogram. If the number of pixels in the first valley is lower than the number of pixels in the second valley, the first valley is taken as (th_3). If this condition is not met, the comparison moves on to the following two valleys, until the system reaches the suitable valley for (th_3).
The second threshold value detected in the filtered hue histogram is (th_4), which represents the first valley point followed by two successive rises, as shown in Figure 8. Finally, the third detected threshold is (th_5). This threshold is determined by detecting the first peak point that is smaller, in number of pixels, than the following peak. After detecting this peak, the preceding and following valleys are compared, and the smaller valley, also based on the number of pixels, is taken as (th_5), as shown in Figure 9. Finally, it is important to note that not all five thresholds exist in every image; some of the described thresholds were not detected in some cases, as will be explained in the results discussion. A sketch of the valley search is given below.
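The following sketch locates candidate valleys with SciPy and applies the (th_3) rule; it paraphrases the description above rather than reproducing the authors' code, and the handling of the search direction is our assumption.

```python
import numpy as np
from scipy.signal import find_peaks

def candidate_valleys(counts, start_bin, direction=+1):
    """Valley bins (local minima of the histogram) after the main hue."""
    segment = counts[start_bin:] if direction > 0 else counts[:start_bin][::-1]
    valleys, _ = find_peaks(-segment.astype(float))   # minima of the counts
    if direction > 0:
        return start_bin + valleys
    return start_bin - 1 - valleys                    # map back to original bins

def first_lowest_valley(counts, valleys):
    """th_3: first valley whose count is lower than the next valley's count."""
    for v, nxt in zip(valleys, valleys[1:]):
        if counts[v] < counts[nxt]:
            return v
    return valleys[-1] if len(valleys) else None      # fallback: last valley
```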

2.4. Image Vegetation Binarization

Finally, the generated threshold values are used to generate the vegetation binary image, where the vegetation objects are presented in white and the remaining objects in black. As the thresholds were extracted from the hue channel, the proposed technique applies the detected threshold value to the grayscale hue image to generate the vegetation binary image. Since the binarization is performed using one threshold value, the final threshold value has to be derived from the detected candidate values.
It is therefore challenging to find the suitable threshold value among the different detected thresholds. Thus, it was important to collect different RGB images using a low-cost UAV imagery system. These images provide different cases with which to test the different thresholds and evaluate their performance. Based on the performance analysis of these thresholds, the suitable threshold value for the proposed methodology could be determined.
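For completeness, a minimal sketch of the binarization step: hue pixels on the vegetation side of the final threshold become white and the rest black. The function name and the direction flag are our assumptions.

```python
import numpy as np

def binarize_vegetation(hue_deg, threshold_deg, vegetation_above=True):
    """Return a binary mask: 255 for vegetation pixels, 0 otherwise."""
    if vegetation_above:
        mask = hue_deg >= threshold_deg   # vegetation hues beyond the threshold
    else:
        mask = hue_deg <= threshold_deg
    return mask.astype(np.uint8) * 255
```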

3. Methodology Implementation

As shown in Figure 10, an Inspire 1 UAV from DJI (Shenzhen, China) equipped with a low-cost DJI Zenmuse X3 RGB camera was used to collect images over three different agricultural fields located in the Scandia (AB, Canada) area. Table 1 summarizes the specifications of the X3 RGB camera. The targeted crops were canola, soybean, and organic beans, and the collected images were used to evaluate the proposed methodology. The images for the three fields were collected at two or three different growth stages. The first crop, canola, emerged on 18 May 2017; for this crop, three data collection sessions were performed, on 3, 15 and 23 June 2017. At each session, the UAV was flown at heights of 40, 80, and 120 m. Using the same flying heights, one flight session was performed for the second crop, soybean, which emerged on 26 May 2017; the session took place on 15 June 2017. For the third crop, organic beans, two flight sessions were performed, on 23 June and 1 July 2017. The flying heights for the first session were 20, 80, and 120 m, while for the second session they were 20, 40, 80, and 120 m.
The acquired RGB images present different cases of crops, illumination conditions, growth stages, and flight heights. These cases provide different scenarios with which to test and evaluate the proposed methodology. From each flight session, different images were selected to test the proposed methodology. In total, 34 images were selected and divided into two main groups. The first group contains 12 images, with a focus on the low flight height images, as listed in Table 2. This group was used to resolve some challenging problems in the proposed methodology, as will be explained in the following parts. The remaining 22 images, as listed in Table 3, were then used to evaluate the segmentation accuracy of the methodology.
It is important to note that all the used images are raw images acquired from the imagery sensor without any modification or preprocessing. Moreover, for clarification, the expression "different growth stages" is used to indicate the change of the vegetation canopy, or vegetation density, in the field; all the collected images were taken during the vegetative phenological stage. Finally, the proposed methodology was tested on the acquired images individually rather than on a mosaic generated from them. The main reason for this choice was to make the use of UAV imagery systems for PA applications more computationally efficient.

4. Experimental Results

As discussed, the proposed methodology aims to detect a suitable threshold that can be used for the vegetation binarization process. The proposed procedure detects five different thresholds using two graphs derived from the hue image. These thresholds can be used to generate the vegetation binary images as the main output of the vegetation segmentation process, so it is important to generate ground truth vegetation binary images for the accuracy evaluation of the binarization and threshold detection.
The ground truth binary images were generated using GIMP, a free and open-source graphics editor. Like other image manipulation programs, GIMP provides the user with the ability to convert the image's colors. So, for each RGB image, the vegetation objects were manually selected and their colors changed to white, while the remaining colors were converted to black. To quantify the accuracy of the vegetation binarization process, each binary image generated with the proposed methodology was compared pixel by pixel to its ground truth binary image generated with GIMP, as sketched below.
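A sketch of the pixel-by-pixel accuracy measure described above: the share of pixels on which the generated mask agrees with the GIMP ground truth. The function name is ours.

```python
import numpy as np

def segmentation_accuracy(predicted, ground_truth):
    """Both inputs are binary masks of identical shape (0 or 255)."""
    agree = (predicted > 0) == (ground_truth > 0)
    return 100.0 * np.count_nonzero(agree) / agree.size   # percent correct
```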
Before presenting the results, a few challenges faced when implementing the proposed methodology had to be solved. First, the threshold detection process provides five thresholds which can take different values, so it is important to determine the threshold value that achieves a sufficient binarization accuracy. Second, as discussed, th_1 depends on computing the confidence interval of the dominant Gaussian distribution of the dominant hue color, so it is important to check which confidence interval value (σ, 2σ, or 3σ) achieves the highest binarization accuracy.
The third challenge is related to th_3, th_4, and th_5. These three thresholds aim to detect the local minimum point that indicates the end of the dominant hue color distribution in the hue histogram. The challenge for these thresholds is caused by fluctuations within the hue histogram, which could easily cause a wrong selection of the threshold point. Therefore, for each of these three thresholds, the vegetation binarization process was performed using different values, representing the successive candidate points that fit the description of the threshold. For example, th_3 is defined as the first lowest valley, as described in Figure 7. Is it suitable to detect the first lowest valley and stop? Should the system continue to the second lowest valley, or to further successive candidate points? The challenge here is to find the suitable stopping point when searching for th_3, and similarly for th_4 and th_5. Based on the accuracy computed for each of the possible values of each threshold, these questions could be answered.
Before addressing the first challenge, which concerns the selection of the suitable threshold among the five proposed thresholds, it is important to solve the challenges related to each of them. Therefore, the following part addresses the second and third challenges first, using 12 different RGB images from the described flight sessions. These images, listed in Table 2, cover different crops, flight heights, growth stages, and illumination conditions. To select the suitable confidence interval, which relates to the computation of th_1, each image was binarized using th_1 alone, computed with each of the three confidence intervals σ, 2σ, and 3σ. The achieved accuracies are listed in Table 4.
Similarly, to solve the third challenge, related to th_3, th_4, and th_5, each threshold was evaluated separately. As it is important to detect the suitable stopping point when searching for the local minimum of each threshold, the accuracy of the binarization process was computed for each detected local minimum. These accuracies for the 12 RGB images are given in Table 5, Table 6 and Table 7. The main purpose of these results is to find the suitable stopping point for each threshold.
The following parts provide an analysis of the results presented in the previous tables, targeted at solving the mentioned challenges. The main challenge of choosing the suitable threshold value from the proposed five thresholds can then be solved by comparing the accuracy performance of each threshold. Moreover, the proposed methodology was compared with three other color index-based approaches, namely ExG, ExGR, and NGRDI. These three vegetation indexes are used to generate a suitable grayscale image from the input RGB image, which is then binarized using automatic Otsu threshold detection. The application of Otsu threshold detection to the hue image was used as a fourth approach. Finally, the accuracies of these four approaches (i.e., ExG + Otsu, ExGR + Otsu, NGRDI + Otsu, and Hue + Otsu) were used to evaluate the performance of the proposed methodology against well-known approaches that have shown accurate segmentation performance in survey studies [19]. A sketch of one such baseline is given below.
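For illustration, a minimal sketch of one baseline (ExG followed by Otsu thresholding), assuming OpenCV's Python bindings are available; this is our reconstruction of the comparison pipeline, not the original code. Note that cv2.threshold with THRESH_OTSU requires an 8-bit grayscale input, so the ExG image is rescaled first.

```python
import cv2
import numpy as np

def exg_otsu_mask(rgb):
    """ExG + Otsu baseline: vegetation binary mask from an HxWx3 RGB array."""
    r, g, b = [rgb[..., i].astype(float) for i in range(3)]
    exg = 2.0 * g - r - b                                  # Eq. (2)
    gray = cv2.normalize(exg, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return mask
```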

5. Results Analysis

In this section, a discussion of the acquired results, as shown in Table 4, Table 5, Table 6 and Table 7, is provided to solve the mentioned challenges related to the selection of the suitable confidence interval of the standard deviation for th_1 and the suitable stopping points for detecting th_3, th_4, and th_5. Later, by comparing the performance of the five proposed thresholds, the suitable threshold value for the vegetation binarization is derived. Finally, the performance of the proposed methodology is analyzed by comparing its segmentation accuracies with those of four other accurate methodologies.

5.1. Confidence Interval of the Dominant Gaussian

As discussed, detecting the suitable confidence interval is a challenging problem for detecting a suitable th_1. So, it was important to compare the different intervals σ, 2σ, and 3σ for th_1. Based on the achieved accuracies, as shown in Table 4 and Figure 11, the binarization accuracy generally increases as the confidence interval increases. The majority of the tested images showed higher binarization accuracy with the highest allowable confidence interval; the only exception was image 8, where the 2σ confidence interval showed higher accuracy. Thus, the most suitable choice of confidence interval for th_1 is the highest allowable one. Finally, it is worth mentioning that some of the confidence intervals failed to satisfy the discussed (S2) condition. Therefore, some of the images did not have a threshold at every confidence interval, and for image (9) the thresholds failed at all confidence intervals.

5.2. Local Minima Searching Limit

In the proposed methodology, the selection of a suitable local minimum as a threshold to delimit the dominant color histogram is a challenging problem. Therefore, Table 5, Table 6 and Table 7 provide the achieved accuracies for the different local minima points that fit the descriptions of th_3, th_4, and th_5. These tables were used to find an approach for selecting the suitable stopping limit when searching for these thresholds.
For th_3, the accuracies in Table 5 show no clear trend across the different stopping points from which to select the suitable stopping limit. In some cases, as in images 1, 2, 3, and 4, the first detected stop for th_3 provides the highest accuracy, while for the remaining images the highest accuracy was obtained at intermediate stops. Therefore, computing the average may be considered a reasonable choice of threshold based on the achieved accuracies.
Moreover, the achieved accuracies showed that threshold values larger than 70° or smaller than 30° provide lower accuracies than the remaining values. Therefore, for detecting a suitable th_3, the methodology should remove any threshold higher than 70° or lower than 30°. To that end, the suitable th_3 is the average of the detected values that fit the th_3 description after removing any value larger than 70° or smaller than 30°. This approach was validated, as shown for the different tested images in Figure 12, which compares the accuracies achieved by computing th_3 through four different approaches: the average of all detected th_3 points; the average of all stops after removing the values larger than 70°; the average of all stops after removing the values smaller than 30°; and the average of all stops after removing the values smaller than 30° and larger than 70°.
Similarly, for the th_4 and th_5 accuracies, Table 6 and Table 7 showed that the threshold values between 30° and 70° provide the highest accuracies. This conclusion can be drawn from Figure 13 and Figure 14, which show that the binarization accuracy for any image is higher when the values larger than 70° and smaller than 30° are removed from the detected thresholds.
Finally, the previous analysis of the accuracies for the different threshold cases yielded a suitable solution for each challenge related to th_1, th_3, th_4, and th_5. These solutions are used for the main challenge of evaluating each of the five proposed thresholds: each threshold is used to generate the binary image, and the binarization accuracy of the threshold is computed against the ground truth images.

5.3. Final Threshold Value

The main target of the proposed methodology is to generate a vegetation binary image. In the proposed methodology, this image is generated using the hue image and a suitable threshold value. As discussed, the methodology introduces five different threshold values from the fitted Gaussian graph and the filtered hue histogram, and the suitable threshold value must be determined from these five candidates. As shown in Table 8, no single threshold provides the best accuracy in all cases. Generally, th_1 shows the highest accuracy, but it is not detected in all cases; th_3 is detected in all cases, but it does not provide the highest accuracy. Moreover, as shown in Table 8, some of the thresholds failed to be detected in some cases, as can be seen for images (1, 2, 3, 4, 7, 8 and 9). Thus, the average of the detected threshold values is considered a satisfactory solution for the final threshold value, as sketched below.
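A sketch of combining the candidates as concluded above: keep the detected thresholds, drop values outside the 30°–70° band identified for th_3, th_4, and th_5, and average whatever remains. Applying the band to all five candidates is our assumption.

```python
import numpy as np

def final_threshold(candidates, low=30.0, high=70.0):
    """candidates: iterable of detected threshold values, None if undetected."""
    valid = [t for t in candidates if t is not None and low <= t <= high]
    return float(np.mean(valid)) if valid else None
```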

5.4. Proposed Methodology Performance Evaluation

As mentioned, different researchers have developed different vegetation segmentation methodologies for agricultural applications such as weed detection, using different accuracy evaluation approaches. Generally, these approaches use a benchmark image: the vegetation segmentation accuracy is evaluated by comparing the generated binary image with the benchmark, and the segmentation accuracy is measured as the ratio of correctly classified pixels.
The benchmark is generated through different techniques. For example, some researchers manually generated the benchmark image using Photoshop [7,39], while others generated it using other vegetation segmentation methodologies [25]. Moreover, a review article on vegetation extraction and segmentation techniques evaluated different vegetation segmentation techniques against a benchmark image generated using Photoshop [19]. Therefore, the proposed vegetation segmentation methodology was evaluated using a similar process, generating the benchmark image with the GIMP software, which is free software similar to Photoshop.
In this subsection, the segmentation performance of the proposed methodology is evaluated by comparing the binarization accuracy achieved by the proposed methodology and by other well-known approaches. As mentioned, these methods are generally color index-based approaches, which generate special grayscale images from the input RGB images; the vegetation binary images are then generated using Otsu threshold detection. The used color index methods are ExG, ExGR, NGRDI, and hue, which were selected based on their accurate performance [19]. This comparison was performed using the two groups of images described in Table 2 and Table 3.
These 34 RGB images were used to evaluate the proposed methodology along with the mentioned ExG + Otsu, ExGR + Otsu, NGRDI + Otsu, and Hue + Otsu methods. The results of segmenting the 34 RGB images showed reasonably stable and consistent binarization performance, as shown in Figure 15. This performance stability is evaluated by comparing the mean and standard deviation of the proposed method with those of the other methods, as shown in Figure 16.
The acquired accuracies showed a stable and accurate segmentation process, even with changes in illumination conditions, crop type, or growth stage. The stability of the segmentation process was assessed using the standard deviation of the accuracies over the 34 RGB images. The lower segmentation accuracies were generally caused by the low resolution of the collected RGB images. This low-resolution effect can be noticed by comparing the achieved accuracies when changing the flight height for the same crop and growth stage, as shown for the binarization of images (29), (30), (31), and (32) in Table 9.

6. Conclusions

This paper introduced a new vegetation segmentation approach that generates vegetation binary images from RGB images acquired by a low-cost UAV imagery system. Such vegetation binary images can be used for weed management, crop row detection, and crop health monitoring applications, all of which depend on discriminating vegetation from non-vegetation objects as an initial step. Generally, the proposed approach detects a suitable threshold value using the hue histogram of the RGB image. The detected threshold value is then used to binarize the grayscale hue image, and the generated binary image is the output of the vegetation segmentation process. To evaluate the proposed methodology, different UAV flight sessions over different agricultural fields were performed to collect RGB images. These images, as explained, provided a variety of cases with which to evaluate the vegetation segmentation performance of the proposed methodology.
Based on the achieved results, the proposed methodology was capable of generating vegetation binary images that discriminate the vegetation objects from non-vegetation objects in different cases, involving different flight heights, illumination conditions, crop types, and growth stages. The mean segmentation accuracy over all tested images reached 87.29%, with a standard deviation of 12.5%. The main advantage of the proposed methodology is its ability to detect vegetation objects regardless of the flight height and illumination conditions. This ability is reflected in the stability of the segmentation accuracy in terms of its standard deviation, while other well-known approaches such as ExG + Otsu, ExGR + Otsu, and NGRDI + Otsu showed standard deviations ranging from 16.54% to 25.54%, as shown in Figure 16.
Moreover, it is recommended to use the proposed methodology for segmenting images collected at flight heights of 40 m or less, as images from higher flights were strongly affected by the resolution of the used camera. This recommendation applies if the targeted PA application is crop row detection or crop health monitoring. On the other hand, if the targeted application is weed management, the results from images flown at 120 m showed the ability to represent the weed spots, as shown by the segmentation accuracies in Table 9 for images (19), (21), and (32). Therefore, this segmentation approach can readily be used to develop different weed patch detection and weed management applications. This recommendation is based on the ability of the proposed segmentation approach to handle RGB images acquired at high altitude, which allows the agricultural field to be covered faster and with fewer images than current UAV weed management techniques, which recommend collecting the images at 40 m [16].
Finally, it is important to note that the proposed method provided higher segmentation accuracy than the Hue + Otsu approach, as shown in Figure 16. The main reason the overall accuracy of the proposed methodology was better than that of the Hue + Otsu approach is the low accuracy achieved on three particular images, images 29, 33, and 34, which is due to the special situation of the field: these images were taken of the organic bean field while it was being irrigated, so their color spectra differed from those of other images of the same field.
Therefore, the proposed vegetation segmentation methodology can be considered a stable methodology, especially as it provided the highest segmentation accuracy in many different cases, as shown for images 18, 19, 21, 22, 24, 25, 29, 33 and 34. Also, the lowest segmentation accuracy of the proposed method is 53.73%, while that of the Hue + Otsu approach is 16.74%, based on the results summarized in Table 9.

Acknowledgments

The authors would like to thank Darren Nikkel and Scott Gillespie for providing the fields used for the data acquisition process. This research was conducted under the supervision of Naser El-Sheimy and funded by the NSERC and Canada Research Chairs programs.

Author Contributions

Mohamed Hassanein developed the methodology, designed and performed the experiments, analyzed the data, and wrote the paper; Zahra Lari reviewed the methodology, designed the experiments, reviewed the analyzed data, and revised the paper; Naser El-Sheimy reviewed the methodology, verified the results, and revised the paper.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Swain, K.C.; Jayasuriya, H.P.; Salokhe, V.M. Suitability of low-altitude remote sensing images for estimating nitrogen treatment variations in rice cropping for precision agriculture adoption. J. Appl. Remote Sens. 2007, 1, 1–11.
  2. Mulla, D.J. Twenty five years of remote sensing in precision agriculture: Key advances and remaining knowledge gaps. Biosyst. Eng. 2012, 114, 358–371.
  3. Crookston, R.K. A top 10 list of developments and issues impacting crop management and ecology during the past 50 years. Crop Sci. 2006, 46, 2253–2262.
  4. Zhang, C.; Kovacs, J.M. The application of small unmanned aerial systems for precision agriculture: A review. Precis. Agric. 2012, 13, 693–712.
  5. Erickson, B.; Widmar, D.A. Precision Agricultural Services Dealership Survey Results; Purdue University: West Lafayette, IN, USA, 2015.
  6. Stafford, J.V. Implementing precision agriculture in the 21st century. J. Agric. Eng. Res. 2000, 76, 267–275.
  7. Burgos-Artizzu, X.P.; Ribeiro, A.; Guijarro, M.; Pajares, G. Real-time image processing for crop/weed discrimination in maize fields. Comput. Electron. Agric. 2011, 75, 337–346.
  8. Nex, F.; Remondino, F. UAV for 3D mapping applications: A review. Appl. Geomat. 2014, 6, 1–15.
  9. Lelong, C.C.D.; Burger, P.; Jubelin, G.; Roux, B.; Labbé, S.; Baret, F. Assessment of unmanned aerial vehicles imagery for quantitative monitoring of wheat crop in small plots. Sensors 2008, 8, 3557–3585.
  10. Nebiker, S.; Annen, A.; Scherrer, M.; Oesch, D. A light-weight multispectral sensor for micro UAVs: Opportunities for very high resolution airborne remote sensing. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2008, XXXVII, 1193–1200.
  11. Xiang, H.; Tian, L. Method for automatic georeferencing aerial remote sensing (RS) images from an unmanned aerial vehicle (UAV) platform. Biosyst. Eng. 2011, 108, 104–113.
  12. Hardin, P.J.; Hardin, T.J. Small-scale remotely piloted vehicles in environmental research. Geogr. Compass 2010, 4, 1297–1311.
  13. Hunt, E.R., Jr.; Cavigelli, M.; Daughtry, C.S.T.; McMurtrey, J.E., III; Walthall, C.L. Evaluation of digital photography from model aircraft for remote sensing of crop biomass and nitrogen status. Precis. Agric. 2005, 6, 359–378.
  14. McCabe, M.F.; Houborg, R.; Rosas, J. The potential of unmanned aerial vehicles for providing information on vegetation health. In Proceedings of the 21st International Congress on Modelling and Simulation, Gold Coast, Australia, 29 November–4 December 2015; pp. 1399–1405.
  15. Hassanein, M.; El-Sheimy, N. Efficient weed detection using low-cost UAV system. In Proceedings of the 10th International Conference for Mobile Mapping Technology, Cairo, Egypt, 6–11 May 2017.
  16. Peña Barragán, J.M.; Torres-Sánchez, J.; Serrano Pérez, A.; Castro, A.I.D.; López Granados, F. Quantifying efficacy and limits of unmanned aerial vehicle (UAV) technology for weed seedling detection as affected by sensor resolution. Sensors 2015, 15, 5609–5626.
  17. Comba, L.; Gay, P.; Primicerio, J.; Ricauda Aimonino, D. Vineyard detection from unmanned aerial systems images. Comput. Electron. Agric. 2015, 114, 78–87.
  18. Slaughter, D.C.; Giles, D.K.; Downey, D. Autonomous robotic weed control systems: A review. Comput. Electron. Agric. 2008, 61, 63–78.
  19. Hamuda, E.; Glavin, M.; Jones, E. A survey of image processing techniques for plant extraction and segmentation in the field. Comput. Electron. Agric. 2016, 125, 184–199.
  20. Woebbecke, D.M.; Meyer, G.E.; Von Bargen, K.; Mortensen, D.A. Plant species identification, size, and enumeration using machine vision techniques on near-binary images. Opt. Agric. For. 1993, 1836, 208–219.
  21. Woebbecke, D.M.; Meyer, G.E.; Von Bargen, K.; Mortensen, D.A. Color indices for weed identification under various soil, residue, and lighting conditions. Trans. ASAE 1995, 38, 259–269.
  22. Meyer, G.E.; Hindman, T.W.; Lakshmi, K. Machine vision detection parameters for plant species identification. In Proceedings of the SPIE, Bellingham, WA, USA, 1–6 November 1998.
  23. Kataoka, T.; Kaneko, T.; Okamoto, H.; Hata, S. Crop growth estimation system using machine vision. In Proceedings of the 2003 IEEE/ASME International Conference on Advanced Intelligent Mechatronics, Kobe, Japan, 20–24 July 2003; IEEE: New York, NY, USA, 2003; Volume 2, pp. b1079–b1083.
  24. Meyer, G.E.; Neto, J.C.; Jones, D.D.; Hindman, T.W. Intensified fuzzy clusters for classifying plant, soil, and residue regions of interest from color images. Comput. Electron. Agric. 2004, 42, 161–180.
  25. Guijarro, M.; Pajares, G.; Riomoros, I.; Herrera, P.J.; Burgos-Artizzu, X.P.; Ribeiro, A. Automatic segmentation of relevant textures in agricultural images. Comput. Electron. Agric. 2011, 75, 75–83.
  26. Guerrero, J.M.; Pajares, G.; Montalvo, M.; Romeo, J.; Guijarro, M. Support vector machines for crop/weeds identification in maize fields. Expert Syst. Appl. 2012, 39, 11149–11155.
  27. Otsu, N. A threshold selection method from gray-level histograms. IEEE Trans. Syst. Man Cybern. 1979, 9, 62–66.
  28. Shrestha, D.S.; Steward, B.L.; Birrell, S.J. Video processing for early stage maize plant detection. Biosyst. Eng. 2004, 89, 119–129.
  29. Ling, P.P.; Ruzhitsky, V.N. Machine vision techniques for measuring the canopy of tomato seedling. J. Agric. Eng. Res. 1996, 65, 85–95.
  30. Gebhardt, S.; Schellberg, J.; Lock, R.; Kühbauch, W. Identification of broad-leaved dock (Rumex obtusifolius L.) on grassland by means of digital image processing. Precis. Agric. 2006, 7, 165–178.
  31. Gebhardt, S.; Kühbauch, W. A new algorithm for automatic Rumex obtusifolius detection in digital images using colour and texture features and the influence of image resolution. Precis. Agric. 2007, 8, 1–13.
  32. Tellaeche, A.; Burgos-Artizzu, X.P.; Pajares, G.; Ribeiro, A. A vision-based method for weeds identification through the Bayesian decision theory. Pattern Recognit. 2008, 41, 521–530.
  33. Kirk, K.; Andersen, H.J.; Thomsen, A.G.; Jørgensen, J.R.; Jørgensen, R.N. Estimation of leaf area index in cereal crops using red–green images. Biosyst. Eng. 2009, 104, 308–317.
  34. Jeon, H.Y.; Tian, L.F.; Zhu, H. Robust crop and weed segmentation under uncontrolled outdoor illumination. Sensors 2011, 11, 6270–6283.
  35. Riaz, M.; Kang, G.; Kim, Y.; Pan, S.; Park, J. Efficient image retrieval using adaptive segmentation of HSV color space. In Proceedings of the 2008 International Conference on Computational Sciences and Its Applications, Perugia, Italy, 30 June–3 July 2008; pp. 491–496.
  36. Park, D.S.; Park, J.S.; Kim, T.Y.; Han, J.H. Image indexing using weighted color histogram. In Proceedings of the 10th International Conference on Image Analysis and Processing, Venice, Italy, 27–29 September 1999; pp. 909–914.
  37. Moré, J.J. The Levenberg-Marquardt algorithm: Implementation and theory. In Numerical Analysis; Springer: Berlin, Germany, 1978; pp. 105–116.
  38. Joblove, G.H.; Greenberg, D. Color spaces for computer graphics. In ACM SIGGRAPH Computer Graphics; ACM: New York, NY, USA, 1978; Volume 12, pp. 20–25.
  39. Torres-Sánchez, J.; López-Granados, F.; Peña, J.M. An automatic object-based method for optimal thresholding in UAV images: Application for vegetation detection in herbaceous crops. Comput. Electron. Agric. 2015, 114, 43–52.
Figure 1. Main steps of the proposed methodology.
Figure 2. (a) RGB image of an organic-bean field; (b) hue image of the same field.
Figure 3. (a) Hue histogram; (b) the fitted Gaussian curve of the hue histogram.
Figure 4. RGB images and hue histograms for a canola crop at three different growth stages. (a,c,e) show RGB images at 17, 29, and 37 days after plant emergence; (b,d,f) are the corresponding hue histograms.
Figure 5. Location of possible thresholds (marked within ellipses). (a) the case where non-vegetation is the dominant class; (b) the case where vegetation is the dominant class.
Figure 6. Flowchart for detecting th_1 and th_2.
Figure 7. Description of th_3.
Figure 8. Description of th_4.
Figure 9. Description of th_5.
Figure 10. (a) UAV imagery system from DJI; (b) Inspire 1 drone; (c) X3 RGB camera.
Figure 11. Binarization accuracies for different confidence intervals.
Figure 12. Achieved accuracies for different modifications of th_3.
Figure 13. Achieved accuracies for different modifications of th_4.
Figure 14. Achieved accuracies for different modifications of th_5.
Figure 15. Generated vegetation binary images from the proposed methodology. Each image is labeled with its image number; (a) shows the acquired RGB image and (b) the generated binary image.
Figure 16. Performance comparison between the proposed methodology and other approaches. Standard deviations are represented by error bars.
Table 1. DJI RGB X3 sensor specs.

Camera Specs
Sensor size | 6.17 × 4.55 mm
Operating temperature | 0 °C to 40 °C
Focal length | ~4 mm
Image format (pixels) | 4000 × 3000
Sensor type | Sony EXMOR CMOS
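The GSD values reported in Tables 2 and 3 follow directly from these specifications. A quick arithmetic check, assuming the nominal ~4 mm focal length and the 6.17 mm sensor width from Table 1:

```python
SENSOR_WIDTH_MM = 6.17
IMAGE_WIDTH_PX = 4000
FOCAL_LENGTH_MM = 4.0                          # nominal value from Table 1

def gsd_cm(flight_height_m: float) -> float:
    """Ground sampling distance: pixel pitch scaled by height over focal length."""
    pixel_pitch_mm = SENSOR_WIDTH_MM / IMAGE_WIDTH_PX
    gsd_mm = pixel_pitch_mm * (flight_height_m * 1000.0) / FOCAL_LENGTH_MM
    return gsd_mm / 10.0                       # mm to cm

for h in (20, 40, 80, 120):
    print(f"{h:>3} m -> ~{gsd_cm(h):.2f} cm")  # ~0.77, ~1.54, ~3.09, ~4.63
```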
Table 2. List of group 1 of images used for solving the methodology challenges.

Image # | Crop | Flight Height (m) | Date | GSD (cm)
(1) | Soybean | 40 | 15 June | ~1.54
(2) | Soybean | 40 | 15 June | ~1.54
(3) | Canola | 40 | 3 June | ~1.54
(4) | Canola | 40 | 3 June | ~1.54
(5) | Canola | 40 | 15 June | ~1.54
(6) | Canola | 40 | 15 June | ~1.54
(7) | Canola | 40 | 23 June | ~1.54
(8) | Canola | 40 | 23 June | ~1.54
(9) | Organic Bean | 20 | 23 June | ~0.77
(10) | Organic Bean | 20 | 23 June | ~0.77
(11) | Organic Bean | 20 | 1 July | ~0.77
(12) | Organic Bean | 20 | 1 July | ~0.77
Table 3. List of group 2 of images used for evaluating the methodology.

Image # | Crop | Flight Height (m) | Date | GSD (cm)
(13) | Soybean | 40 | 15 June | ~1.54
(14) | Soybean | 40 | 15 June | ~1.54
(15) | Soybean | 80 | 15 June | ~3.1
(16) | Soybean | 120 | 15 June | ~4.63
(17) | Canola | 40 | 3 June | ~1.54
(18) | Canola | 40 | 15 June | ~1.54
(19) | Canola | 80 | 15 June | ~3.1
(20) | Canola | 80 | 15 June | ~3.1
(21) | Canola | 120 | 15 June | ~4.63
(22) | Canola | 40 | 23 June | ~1.54
(23) | Canola | 40 | 23 June | ~1.54
(24) | Canola | 80 | 23 June | ~3.1
(25) | Canola | 80 | 23 June | ~3.1
(26) | Canola | 120 | 23 June | ~4.63
(27) | Organic Bean | 20 | 23 June | ~0.77
(28) | Organic Bean | 80 | 23 June | ~3.1
(29) | Organic Bean | 20 | 1 July | ~0.77
(30) | Organic Bean | 40 | 1 July | ~1.54
(31) | Organic Bean | 80 | 1 July | ~3.1
(32) | Organic Bean | 120 | 1 July | ~4.63
(33) | Organic Bean | 20 | 1 July | ~0.77
(34) | Organic Bean | 20 | 1 July | ~0.77
Table 4. Binarization accuracies for different confidence intervals.

Image # | Accuracy % per σ-based confidence interval
(1) | 90.9 | 91.1 | N/A
(2) | 89.4 | N/A | N/A
(3) | 68.7 | 90.4 | 96.7
(4) | 62.2 | 88.0 | 94.9
(5) | 80.4 | 90.0 | N/A
(6) | 83.6 | 92.1 | 97.0
(7) | 87.8 | 96.8 | 99.5
(8) | 96.1 | 97.3 | 93.2
(9) | N/A | N/A | N/A
(10) | 90.1 | N/A | N/A
(11) | 80.6 | 97.1 | N/A
(12) | 83.6 | 92.9 | 97.7
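Each column in Table 4 corresponds to keeping the hue values inside an interval of mu ± k·sigma around the fitted Gaussian mean; the specific k values behind the three columns are treated as unknown here. A hedged sketch of such interval-based binarization:

```python
import numpy as np

def binarize_by_interval(hue: np.ndarray, mu: float, sigma: float, k: float) -> np.ndarray:
    """Mark pixels whose hue lies within mu +/- k*sigma as the dominant class (255)."""
    inside = (hue >= mu - k * sigma) & (hue <= mu + k * sigma)
    return inside.astype(np.uint8) * 255
```

Whether the interval captures vegetation or non-vegetation depends on which class dominates the image, as illustrated in Figure 5.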
Table 5. Binarization accuracies for different stops for th_3. Each entry lists the candidate th_3 values at stops 1–11 (left to right) and the accuracy achieved at each stop.

(1)  th_3:       45, 73
     Accuracy %: 87.2, 78.2
(2)  th_3:       45, 73
     Accuracy %: 92.5, 75.4
(3)  th_3:       43, 51, 61, 74
     Accuracy %: 95.2, 89.9, 87.5, 87.1
(4)  th_3:       43, 51, 55, 61, 74
     Accuracy %: 96.6, 91.0, 89.7, 88.4, 87.8
(5)  th_3:       73, 67, 61, 55, 51, 48, 45, 43, 41, 25
     Accuracy %: 75.1, 86.6, 91.6, 97.0, 98.7, 97.5, 94.7, 92.5, 89.6, 71.6
(6)  th_3:       73, 67, 61, 55, 51, 48, 45, 43, 41, 38, 21
     Accuracy %: 78.2, 89.8, 94.9, 99.8, 96.7, 94.6, 91.8, 89.7, 86.7, 80.9, 60.1
(7)  th_3:       73, 61, 51, 43, 28
     Accuracy %: 85.0, 98.3, 98.6, 97.1, 95.5
(8)  th_3:       61, 57, 51, 45, 43, 41, 25, 13
     Accuracy %: 98.7, 98.8, 96.6, 94.8, 94.1, 93.5, 90.8, 90.7
(9)  th_3:       43, 51, 55, 61, 67, 74, 78, 82
     Accuracy %: 99.9, 79.2, 72.3, 62.5, 57.1, 51.5, 48.9, 47.4
(10) th_3:       43, 48, 51, 55, 61, 67, 74, 78, 82
     Accuracy %: 84.3, 98.5, 95.8, 89.2, 79.3, 72.3, 62.6, 57.3, 53.8
(11) th_3:       66, 61, 48, 45, 33
     Accuracy %: 75.4, 85.5, 89.6, 84.4, 65.0
(12) th_3:       67, 61, 55, 48, 43, 40, 38
     Accuracy %: 76.7, 87.1, 94.9, 93.0, 84.9, 79.1, 74.4
Table 6. Binarization accuracies for different stops for th_4. Each entry lists the candidate th_4 values at stops 1–10 (left to right) and the accuracy achieved at each stop.

(1)  th_4:       58
     Accuracy %: 77.1
(2)  th_4:       45, 48, 66
     Accuracy %: 93.4, 91.1, 87.2
(3)  th_4:       45, 48, 66
     Accuracy %: 94.7, 92.2, 87.9
(4)  th_4:       77, 76, 68, 58, 50, 25, 24, 14, 13, 7
     Accuracy %: 43.9, 44.8, 53.5, 65.2, 80.6, 78.7, 78.7, 78.6, 78.6, 78.6
(5)  th_4:       47, 55, 58, 67, 68, 73
     Accuracy %: 67.5, 61.3, 59.2, 54.7, 54.6, 52.4
(6)  th_4:       68, 51, 41, 38, 37
     Accuracy %: 94.7, 98.6, 96.7, 96.2, 96.1
(7)  th_4:       68, 51, 50, 41, 38, 23, 22, 21
     Accuracy %: 94.0, 96.6, 96.3, 93.5, 92.5, 90.8, 90.8, 90.8
(8)  th_4:       45, 48, 57, 61, 67, 68
     Accuracy %: 93.6, 84.6, 69.5, 62.5, 57.1, 57.1
(9)  th_4:       45, 48, 61, 67, 68
     Accuracy %: 90.5, 98.5, 79.3, 72.3, 72.3
(10) th_4:       66, 43
     Accuracy %: 75.4, 80.1
(11) th_4:       58, 51, 43, 38
     Accuracy %: 92.9, 97.0, 84.9, 74.4
(12) th_4:       58
     Accuracy %: 77.1
Table 7. Binarization accuracies for different stops for th_5. Each entry lists the candidate th_5 values at stops 1–9 (left to right) and the accuracy achieved at each stop.

(5)  th_5:       67, 61, 55, 55, 48, 48, 43, 40
     Accuracy %: 86.6, 91.6, 97.0, 97.0, 97.5, 97.5, 92.5, 88.1
(6)  th_5:       78, 67, 55, 55, 48, 48, 43, 40, 19
     Accuracy %: 64.3, 89.8, 99.8, 99.8, 94.6, 94.6, 89.7, 85.1, 60.1
(7)  th_5:       61, 53, 50, 45
     Accuracy %: 98.3, 99.1, 98.4, 97.5
(8)  th_5:       71, 63, 61, 53, 45, 38, 25
     Accuracy %: 88.6, 97.8, 98.7, 97.3, 94.8, 92.5, 90.8
(9)  th_5:       40, 51, 55, 71
     Accuracy %: 89.8, 79.2, 72.3, 53.8
(10) th_5:       40, 62, 66, 71
     Accuracy %: 75.1, 79.3, 72.9, 66.8
(11) th_5:       61, 55, 51, 48, 40
     Accuracy %: 85.5, 98.8, 94.6, 89.6, 75.1
(12) th_5:       61, 55, 55, 48, 48, 40
     Accuracy %: 87.1, 94.9, 94.9, 93.0, 93.0, 79.1
Table 8. Binarization accuracies (%) for different threshold types.

Image # | th_1 | th_2 | th_3 | th_4 | th_5
(1) | 91.1 | N/A | 87.2 | 0.0 | N/A
(2) | 89.4 | N/A | 92.5 | 77.1 | N/A
(3) | 96.7 | N/A | 89.9 | 89.3 | N/A
(4) | 94.9 | N/A | 90.3 | 90.3 | N/A
(5) | 90.0 | 91.6 | 98.4 | 97.5 | 98.5
(6) | 97.0 | 97.1 | 96.0 | 94.6 | 96.7
(7) | 99.5 | N/A | 98.8 | 97.8 | 99.1
(8) | 93.2 | N/A | 96.3 | 96.3 | 96.8
(9) | N/A | N/A | 72.1 | 67.6 | 84.5
(10) | 92.4 | 91.7 | 89.2 | 84.5 | 88.9
(11) | 97.1 | 89.7 | 94.6 | 98.8 | 94.6
(12) | 97.7 | 79.4 | 97.0 | 93.0 | 97.7
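The accuracies in Tables 4–8 measure the agreement between each generated binary image and a reference segmentation. A minimal sketch of a pixel-wise agreement score (the authors' exact evaluation protocol may differ in detail):

```python
import numpy as np

def binarization_accuracy(predicted: np.ndarray, reference: np.ndarray) -> float:
    """Percentage of pixels on which two binary masks agree."""
    assert predicted.shape == reference.shape
    return 100.0 * float(np.mean((predicted > 0) == (reference > 0)))
```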
Table 9. Vegetation segmentation accuracy (%) for the proposed methodology compared to other approaches.

Image # | Proposed Methodology | ExG + Otsu | ExGR + Otsu | NGRDI + Otsu | Hue + Otsu
(13) | 65.70 | 85.95 | 64.78 | 64.76 | 98.70
(14) | 73.82 | 95.71 | 67.60 | 67.93 | 97.71
(15) | 54.86 | 79.66 | 54.55 | 54.55 | 90.84
(16) | 53.73 | 83.47 | 53.45 | 53.45 | 91.62
(17) | 91.10 | 36.93 | 73.36 | 77.83 | 84.71
(18) | 97.16 | 86.83 | 37.52 | 76.08 | 93.72
(19) | 88.45 | 77.00 | 25.37 | 61.78 | 85.13
(20) | 70.84 | 86.38 | 42.18 | 70.67 | 92.19
(21) | 91.55 | 79.91 | 32.78 | 53.40 | 84.02
(22) | 99.53 | 22.44 | 20.28 | 26.38 | 93.87
(23) | 96.02 | 24.64 | 23.80 | 26.44 | 97.26
(24) | 98.79 | 26.80 | 43.03 | 44.84 | 94.73
(25) | 98.28 | 21.10 | 35.12 | 45.21 | 74.25
(26) | 97.84 | 24.32 | 14.67 | 41.12 | 99.39
(27) | 88.72 | 92.31 | 64.61 | 72.17 | 98.58
(28) | 80.30 | 79.50 | 61.97 | 72.62 | 86.57
(29) | 95.40 | 70.51 | 31.64 | 62.63 | 30.64
(30) | 69.31 | 70.80 | 41.48 | 57.73 | 84.38
(31) | 75.21 | 77.12 | 40.14 | 51.10 | 87.30
(32) | 88.16 | 90.36 | 61.75 | 64.50 | 95.80
(33) | 86.92 | 57.01 | 18.28 | 53.22 | 16.74
(34) | 95.59 | 70.12 | 29.64 | 65.50 | 27.65
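The baselines in Table 9 pair standard RGB vegetation indices with Otsu thresholding [27]. A sketch of the ExG + Otsu baseline, using the usual excess-green definition on chromatic coordinates [21]; the ExGR, NGRDI, and hue variants follow the same pattern with a different index image:

```python
import cv2
import numpy as np

def exg_otsu_mask(bgr: np.ndarray) -> np.ndarray:
    """Excess green index (ExG = 2g - r - b) binarized with Otsu's method [27]."""
    b, g, r = [c.astype(float) for c in cv2.split(bgr)]
    total = b + g + r + 1e-6                   # guard against all-black pixels
    exg = (2.0 * g - r - b) / total            # chromatic-coordinate ExG

    # Rescale to 8 bits so OpenCV's Otsu implementation can be applied
    exg8 = cv2.normalize(exg, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    _, mask = cv2.threshold(exg8, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return mask
```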
