Assessment of Texture Features for Bermudagrass (Cynodon dactylon) Detection in Sugarcane Plantations

Sugarcane products contribute significantly to the Brazilian economy, generating U.S. $12.2 billion in revenue in 2018. Identifying and monitoring factors that induce yield reduction, such as weed occurrence, is thus imperative. The detection of Bermudagrass in sugarcane crops using remote sensing data, however, is a challenge considering their spectral similarity. To overcome this limitation, this paper aims to explore the potential of texture features derived from images acquired by an optical sensor onboard an unmanned aerial vehicle (UAV) to detect Bermudagrass in sugarcane. Aerial images with a spatial resolution of 2 cm were acquired from a sugarcane field in Brazil. The Green-Red Vegetation Index and several texture metrics derived from the gray-level co-occurrence matrix were calculated to perform an automatic classification using a random forest algorithm. Adding texture metrics to the classification process improved the overall accuracy from 83.00% to 92.54%, and this improvement was greater for larger window sizes, since they represented a texture transition between two targets. Production losses induced by Bermudagrass presence reached 12.1 tons × ha−1 in the study site. This study not only demonstrated the capacity of UAV images to overcome the well-known limitation of detecting Bermudagrass in sugarcane crops, but also highlighted the importance of texture for high-accuracy quantification of weed invasion in sugarcane crops.

Table S1. Error matrices for experiment 1 (without texture) and experiments 2 to 8 (texture, with window sizes ranging from 3 × 3 to 15 × 15).
Reference – without texture

Classified as    BG    ST    SC    BS    DO
BG              466     0     2     0   291
ST                0   500     0     4     0
SC               19     0   498     0    27
BS               15     0     0   496    67
DO                0     0     0     0   115

Reference – with texture (3 × 3 window size)

Classified as    BG    ST    SC    BS    DO
BG              474     0    12     0   265
ST                0   500     0     4     0
SC               16     0   488     0    26
BS               10     0     0   496    57
DO                0     0     0     0   152

Reference – with texture (5 × 5 window size)

Classified as    BG    ST    SC    BS    DO
BG              468     0     1     0   210
ST                0   500     1     4     0
SC               27     0   498     6     5
BS                5     0     0   490    28
DO                0     0     0     0   257

Reference – with texture (7 × 7 window size)

Classified as    BG    ST    SC    BS    DO
BG              461     0     6     0   192
ST                0   500     0     4     0
SC               37     0   494     8     2
BS                2     0     0   488    35
DO                0     0     0     0   271

Reference – with texture (9 × 9 window size)

Classified as    BG    ST    SC    BS    DO
BG              471     0     1     0   163
ST                0   500     0     4     0
SC               28     0   499     8     1
BS                1     0     0   488    29
DO                0     0     0     0   307

Reference – with texture (11 × 11 window size)

Classified as    BG    ST    SC    BS    DO
BG              473     0     0     0   159
ST                0   500     0     4     0
SC               22     0   500    11     1
BS                5     0     0   485    22
DO                0     0     0     0   318

Reference – with texture (13 × 13 window size)

Classified as    BG    ST    SC    BS    DO
BG              462     0     3     0   125
ST                0   500     0     4     0
SC               32     0   497    12     5
BS                6     0     0   484    18
DO                0     0     0     0   352

Reference – with texture (15 × 15 window size)

Classified as    BG    ST    SC    BS    DO
BG              462     0     3     0   106
ST                0   500     0     4     0
SC               27     0   497    12     8
BS               11     0     0   484    16
DO                0     0     0     0   370

BG – Bermudagrass; ST – Straw; SC – Sugarcane; BS – Bare Soil; DO – Dark Objects.


Introduction
Sugarcane is mainly cultivated in tropical and subtropical regions around the world, and is one of the most important sources of sugar and raw material for ethanol production [1]. Brazil is the world's leading producer of sugarcane, with an estimated raw production of 641 million tons in 2018 [2]. This production supports the manufacturing of 38.6 million tons of sugar-related products and 27.8 billion liters of ethanol [2]. These numbers can be translated into annual exportation revenue of 12.2 billion dollars for the Brazilian economy [2]. Over the last 20 years, the planted area of sugarcane in Brazil has increased from 4.8 to 10.2 million hectares. Plantations in São Paulo state account for 55% of the entire Brazilian production [3]. To maintain high productivity of this crop, it is critical to constantly monitor climatic (e.g., rainfall and temperature), environmental (e.g., soil quality), and biological (e.g., weeds) factors that may reduce production.
Weed detection and control is especially important for achieving high crop yields. Weeds compete with sugarcane plants for water, nutrients, and sunlight, negatively impacting sugarcane growth. Abnormal weed growth can also interfere with agricultural practices, including soil management and the use of herbicides, which can add extra labor and increase the overall cost of production [4]. Weeds are also known hosts for sugarcane diseases and can contribute to pest infestations, such as the ground rat (Rattus sordidus), the cane beetle (Dermolepida albohirtum), and several nematode species [4][5][6][7][8].
There is a huge variety of weeds capable of infesting sugarcane crops. Some are dicotyledonous (usually called broadleaf weeds), but most are monocotyledonous and belong to the Poaceae family (commonly known as grasses) [4]. Weeds such as Cynodon dactylon (bermudagrass), Sorghum halepense (johnsongrass), Eleusine indica (goosegrass), Panicum maximum (guineagrass), and Brachiaria reptans (creeping panic grass) are highlighted as the most common species affecting sugarcane crops around the world [4][9][10][11]. In an experimental analysis of sugarcane crops without weed control, weeds affected up to 60% of the planted area, representing yield losses of up to 45 tons × ha−1 [4]. Bermudagrass is responsible for yield losses that range from 6% to 14% of production (depending on the chemical control used), and is more critical in the early growth stages of a culture, as it shades emerging sugarcane shoots [9].
One way to effectively monitor weeds is using remote sensing data, especially those obtained by unmanned aerial vehicles (UAVs). These images are suitable for mapping areas where weeds have escaped control due to management errors or herbicide resistance [12]. UAV images capture the spectral reflectance of vegetation and other objects from above, reducing the time required for, and the number of, field campaigns necessary to investigate weed occurrence. Each object spotted by an airborne sensor has specific reflectance characteristics at each wavelength of the electromagnetic spectrum, which allows distinguishing objects with image processing techniques. The spectral reflectance detected by each sensor channel (or band) is the baseline for applications involving weed mapping and airborne sensors [12,13].
Besides the spectral reflectance, image processing techniques provide other reliable products to improve weed detection, such as vegetation indices (VIs) [14]. VIs were developed to decrease data dimensionality and highlight objects with green vegetation. For airborne sensors, which usually have only red, green, and blue (RGB) bands, VIs can be calculated based on the green (500–570 nm) and red (620–700 nm) regions of the electromagnetic spectrum [14]. It is known that the overall accuracy of weed mapping can be improved by using VIs derived from remote sensing data [15][16][17][18]. These VIs are capable of distinguishing vegetation parameters such as color, dry matter (%), nitrogen (%), chlorophyll (A and B), and carotenoid content among several varieties of grasses, such as Lolium perenne (perennial ryegrass) and Poa pratensis (Kentucky bluegrass) [12]. However, for Bermudagrass, spectral reflectance and VIs alone have not been enough to estimate these parameters precisely [12,13].
Potentially, the use of texture metrics derived from remote sensing images can improve weed detection [19,20]. A methodology using not only VIs but also texture features was developed in order to map weeds in maize and sunflower crops [15]. The authors used a 1.4-cm spatial resolution UAV image obtained from an RGB camera. After orthomosaic generation, they evaluated two classification strategies, the first using only the RGB values and the second considering features such as VIs and texture. They used the support vector machine (SVM) algorithm for classification, and feature selection was performed with the "Information Gain" method in the WEKA software [21,22]. For weed detection in sunflower crops, the overall accuracy (OA) using only the RGB band values was 61.6%, but when the second strategy was used, the accuracy increased to 95.5%. The accuracy of weed detection in maize crops, on the other hand, had a lower performance, reaching a maximum OA value of 79.0% [15].
The combination of VIs with texture features has also improved the OA of weed detection in eggplant plantations [16]. Using 5-cm spatial resolution images and the SVM classifier, the authors could not distinguish weeds from eggplants using nine different VIs (OA of 58.4%). By adding texture features to the dataset, these authors increased the OA of their classification to 75.4%. Unlike Reference [15], no feature selection method was applied to the dataset.
Another example was weed detection in bean and spinach crops [17]. The authors used a 0.35-cm UAV image obtained from an RGB sensor and compared the performance of three different classifiers (SVM, random forest [23], and convolutional neural networks (CNNs) [24]) with and without texture features. Similarly to References [15] and [16], the combined use of spectral values, VIs, and texture features obtained better results. For weed detection in bean crops, random forest outperformed the SVM classifier, with an OA of 70.1% against 60.6%. However, the best classification result was achieved by using CNNs, with an OA of 94.8%. For spinach crops, the random forest algorithm had better results than the other classifiers, with an OA of 96.9%.
An approach using image segmentation and object-based image analysis (OBIA) to discriminate crops and weeds was used by References [15] and [16]. Several segmentation approaches, such as the ones used by those authors, are based on spectral similarity (usually a region-growing algorithm) to generate the polygons used to separate the classes. On the other hand, weeds and crops are hard to discriminate using only spectral information, considering their strong similarities [12,13,17]. Thus, analyses of how texture influences weed detection can be performed with a pixel-by-pixel strategy, showing how the texture window size affects classification results.
The use of texture and UAV images has the potential to improve the detection of weeds on a large variety of crops. Nevertheless, this combination has not yet been tested for sugarcane crops. For sugarcane, RGB UAV images have been used to identify Digitaria insularis (sourgrass) and Tridax procumbens (tridax daisy) [1]. The authors used statistical descriptors for pixel groups, such as mean, standard deviation, and variance, to classify three different classes (sourgrass, tridax daisy, and sugarcane plants). The OA using the random forest classifier and a cross-validation method was 77.9% and 85.5% for sourgrass and tridax daisy, respectively. When the error matrix was analyzed, sourgrass was misclassified as both sugarcane and tridax daisy, resulting in a commission error of 23.7%. For tridax daisy, this error was 17.4%, and it was also misclassified as sugarcane.
A similar methodology has been used to identify sugarcane plants, soil patches, Lepidium virginicum (peppergrass), Nicandra physalodes (shoo-fly plant), and Brachiaria decumbens (signalgrass) [25]. The authors, using the same statistical descriptors as Reference [1], evaluated three different classifiers, among which random forest and artificial neural networks (ANNs) obtained the best results [26]. The OA obtained by random forest was 90.9%, and for the ANNs it was 91.6%, with the main source of error being the misclassification between sugarcane and shoo-fly plants. However, no validation was described by the authors, and their results had higher OA values compared to Reference [1].
Multispectral UAV images (with 13 bands) have been used to map sugarcane plants affected by the mosaic virus (Potyviridae family), which is responsible for stunting plant growth by mottling the laminar region of the plants and causing the discoloration of leaves [27]. The authors also mapped spots with Digitaria horizontalis (crabgrass) and guineagrass. An automatic classification was performed using the spectral information divergence (SID) algorithm, which uses only spectral reflectance [28]. The OA of the study was 92.5%, and hyperspectral images could separate healthy from infected plants. Regarding weed mapping, the authors mentioned it was difficult to identify these weeds once the sugarcane plants were in an advanced growth stage and usually covered the weeds. Once again, VIs and texture features were not used for classification.
Lastly, RGB UAV images have been used to map the level of infestation of Bermudagrass in vineyards [18]. Using spectral reflectance and VIs with a region-based classification algorithm, the authors obtained an OA of 83.3%. However, they suggested that texture features may improve the results.
The potential of UAV images to detect Bermudagrass in sugarcane plantations has not been fully explored before, since other applications have not used both vegetation indices and texture features. In this context, the aim of this paper is to explore the potential of texture features and UAV images to detect Bermudagrass occurrence in sugarcane crops in Brazil.

Study Site and Image Acquisition

The study site was in the Iracema Mill, located in São Paulo State, Brazil (Figure 1a). The Iracema Mill has been manufacturing sugarcane products such as sugar and ethanol for over 70 years, processing an average of 3 million tons of raw material per year in an area of 30,000 ha, spread across different municipalities of São Paulo State. Our study site was located in the Iracemápolis municipality (Figure 1b), which had 8,834 ha of planted crops, with sugarcane representing 95% of this area and being the most important source of income [3].

The area mapped was a heterogeneous sugarcane field (Figure 1d), with several planting failures and sugarcane plants in different growing stages (crown diameter ranging from 0.5 m to 2.0 m). The majority of the soil area was covered by sugarcane straw, but some spots were left uncovered. There were also a few flooded soil spots in small depressions, probably caused by the use of heavy machinery, and areas infested by Bermudagrass (Figure 1c).

Images were acquired on 16 February 2016, using a multirotor GYRO 200 OCTA 355 UAV (octocopter) equipped with a 20.1-megapixel Sony RX100M3 camera with a CMOS Exmor R® sensor (RGB) (Figure 2). A gimbal was used to guarantee the acquisition of images with a nadir configuration, minimizing the effects related to variations in the platform attitude.

Two flights were carried out in the study site, one at 10:00 and the other at 11:50, obtaining 153 and 135 images, respectively. These flight times were chosen because of the cloud-free sky. All images were acquired with an overlap of 80% (longitudinal) and 60% (side), at a flight height of 80 m. This configuration resulted in a pixel size of 2 cm. After image acquisition, an orthomosaic was generated using the Pix4D software. Furthermore, the image was cropped to avoid low-overlap areas on the borders, resulting in the image presented in Figure 1d.

Feature Extraction
Using the orthomosaic, the Green-Red Vegetation Index (GRVI) was calculated [14]. The GRVI is used to highlight vegetation pixels and is computed as

GRVI = (DN_green − DN_red) / (DN_green + DN_red)

The GRVI ranges from −1 to +1, and the digital number (DN) values of the green and red bands were used for the index calculation. This index was selected, among others, because of its capability of detecting early-phase or leaf green-up, which suited the plant heterogeneity of the study site, and also because it is suitable for different ecosystem types [13][14][15][16].
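As a minimal sketch (assuming the two bands are available as NumPy arrays of raw DN values), the index can be computed per pixel:

```python
import numpy as np

def grvi(green: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Green-Red Vegetation Index from raw DN values, in [-1, +1]."""
    g = green.astype(np.float64)
    r = red.astype(np.float64)
    denom = g + r
    # Avoid division by zero where both bands are 0 (e.g., masked pixels)
    return np.where(denom > 0, (g - r) / denom, 0.0)

# Vegetation pixels (green > red) yield positive values
print(grvi(np.array([120, 60]), np.array([80, 100])))  # [ 0.2  -0.25]
```

Positive values indicate pixels where the green response exceeds the red one, which is the behavior expected of green vegetation.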
The texture features were calculated for each RGB band and also for the GRVI image (7 textures per layer, i.e., 28 features). These features were computed from the gray-level co-occurrence matrix (GLCM), which is a second-order histogram where each entry reports the joint probability of finding a pair of gray-level values at a certain distance and direction from each other over a predefined window [19,20]. This window is shifted around the image, calculating the texture for each center pixel. The GLCM can be calculated for 4 different directions (0°, 45°, 90°, and 135°). In this study, the 0° direction was chosen, considering our image characteristics and class distribution [20]. This direction explores the texture from left to right, parallel to the horizontal axis, and captures the heterogeneity between crop rows, where weeds are usually located [9]. The window sizes used were 3 × 3, 5 × 5, 7 × 7, 9 × 9, 11 × 11, 13 × 13, and 15 × 15 pixels. These features were also used in other weed detection applications [15][16][17]. The texture metrics were calculated using the equations described below.
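The idea can be illustrated with a simplified, self-contained sketch: a non-symmetric GLCM at 0° (distance 1) for a single window, normalized into probabilities, from which a few of the usual metrics are derived. The gray-level quantization and the metric subset are illustrative choices, not the paper's exact configuration:

```python
import numpy as np

def glcm_0deg(window: np.ndarray, levels: int = 32) -> np.ndarray:
    """Normalized GLCM at 0 degrees (distance 1, left-to-right) for one window."""
    # Quantize 8-bit DNs to fewer gray levels to keep the matrix small
    q = (window.astype(np.int64) * levels) // 256
    V = np.zeros((levels, levels), dtype=np.float64)
    # Count co-occurrences of horizontally adjacent pixel pairs
    for i, j in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        V[i, j] += 1
    total = V.sum()
    return V / total if total > 0 else V  # P_ij = V_ij / sum(V)

def texture_metrics(P: np.ndarray) -> dict:
    i, j = np.indices(P.shape)
    mean = (i * P).sum()
    return {
        "contrast": ((i - j) ** 2 * P).sum(),
        "energy": (P ** 2).sum(),
        "homogeneity": (P / (1.0 + (i - j) ** 2)).sum(),
        "mean": mean,
        "variance": ((i - mean) ** 2 * P).sum(),
    }

# A perfectly uniform 5x5 window has zero contrast and maximal energy
P = glcm_0deg(np.full((5, 5), 128, dtype=np.uint8))
m = texture_metrics(P)
print(m["contrast"], m["energy"])  # 0.0 1.0
```

In practice this computation is repeated for every center pixel of each layer (R, G, B, GRVI) at every window size, producing the 28 texture features per experiment.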
P_i,j is the normalized value for cell i,j; N is the number of rows or columns; and µ_i,j is the GLCM mean.

Every GLCM generated is normalized to express the results as a close approximation of a probability table, according to the following equation [19,20]:

P_i,j = V_i,j / Σ V_i,j

where V_i,j is the value for cell i,j of the matrix.

Classification
The classification algorithm used was random forest [23], which has presented good results in other applications using UAV images for weed detection [1,17,25]. The random forest algorithm is also considered computationally efficient and less sensitive to noisy data, and the generation of multiple trees with the bootstrap technique can also help avoid overfitting [23,29]. The classification was performed pixel by pixel, and the algorithm was trained on 2000 samples for each of the five classes presented in Table 1.
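A sketch of this setup with scikit-learn (the training data below is a random stand-in; the actual features are the RGB DNs, the GRVI, and the GLCM textures):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Random stand-in features: rows = pixels, columns = the 32 features of
# experiments 2-8 (R, G, B, GRVI + 28 GLCM textures). The paper trained on
# 2000 samples per class; 100 per class is used here to keep the sketch fast.
rng = np.random.default_rng(0)
X_train = rng.random((500, 32))
y_train = np.repeat(np.arange(5), 100)  # 0=BG, 1=ST, 2=SC, 3=BS, 4=DO

clf = RandomForestClassifier(
    n_estimators=1000,    # number of trees used in the paper
    max_depth=None,       # unpruned trees (unlimited depth)
    max_features="sqrt",  # ~sqrt(32) ~ 6 random features per node
    n_jobs=-1,
    random_state=0,
)
clf.fit(X_train, y_train)

# Per-pixel prediction over new feature vectors
pred = clf.predict(rng.random((5, 32)))
```

Pixel-by-pixel classification then amounts to calling `predict` on the per-pixel feature stack of the whole orthomosaic.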
Table 1. The five classes used in this study.

Class                 Description
Weed (Bermudagrass)   Dark green gramineous weed that grows between and within crop rows.
Straw                 Soil covered with sugarcane straw (light beige). Occurrence between and within crop rows.
Sugarcane             Green sugarcane plants, crown diameter ranging from 0.5 m to 2.0 m.
Bare soil             Dark brown soil with no cover and some points not well drained. Occurrence is between and within crop rows.
The random forest algorithm has 3 parameters that need calibration: the tree depth, the number of random features selected at each node, and the number of trees. The trees were generated without pruning (tree depth is unlimited), the number of random features at each node was set to the square root of the total number of features, rounded to the nearest integer, and the number of trees was set to 1000 [22,23].

A total of 8 experiments were realized to evaluate how texture contributes to weed detection. The first was performed using only 4 features: the DN values for each RGB band and the GRVI. After that, for each window size, 28 texture features were calculated. For experiment 2, for example, the red, green, blue, GRVI, and 7 textures for each layer were used, resulting in a total of 32 features (4 + 7 × 4). The window size used for experiment 2 was 3 × 3. The same procedure was realized for further experiments, considering greater window sizes. For example, experiment 3 used 28 textures calculated from the 5 × 5 window together with the RGB bands and the GRVI. This extended until experiment 8, which used textures calculated from the 15 × 15 window. Experiments 2 to 8 used a total of 32 features each.
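The feature bookkeeping and the per-node feature rule can be checked with a few lines (the √ rule is assumed to be the conventional random forest default; the feature names are illustrative):

```python
import math

# Feature sets per experiment (names are illustrative)
base = ["red", "green", "blue", "grvi"]               # experiment 1: 4 features
glcm_per_layer = 7                                     # 7 GLCM metrics per layer

n_exp1 = len(base)                                     # 4
n_exp2_to_8 = len(base) + glcm_per_layer * len(base)   # 4 + 7*4 = 32

# Random features drawn at each node: sqrt of the total, rounded
mtry = round(math.sqrt(n_exp2_to_8))                   # sqrt(32) ~ 5.66 -> 6
print(n_exp1, n_exp2_to_8, mtry)  # 4 32 6
```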
Model validation was performed by the random selection of 2500 ground truth points (500 of each class). Each of these points was visually interpreted on the RGB image, and a respective ground truth class was assigned. Subsequently, this ground truth class was compared to the class assigned in the classified image, in order to generate error matrices for each experiment. Pixels on the roads crossing the sugarcane field were used neither for training nor for validation.
For these ground truth points, a spectral analysis was performed, considering the raw digital number (DN) values obtained for each RGB channel for each class [30,31]. Error matrices and the metrics of overall accuracy (OA), producer accuracy (PA), and user accuracy (UA) were employed to interpret the results:

OA = (TP + TN) / (TP + TN + FP + FN)
PA = TP / (TP + FN)
UA = TP / (TP + FP)

TP is true positive, FP is false positive, TN is true negative, and FN is false negative. All of these values are obtained from the error matrix.
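These metrics can be computed directly from an error matrix; using the without-texture matrix from Table S1 reproduces the reported OA of 83.00%:

```python
import numpy as np

# Error matrix from Table S1 (no texture): rows = classified as, cols = reference
# Class order: BG, ST, SC, BS, DO
cm = np.array([
    [466,   0,   2,   0, 291],
    [  0, 500,   0,   4,   0],
    [ 19,   0, 498,   0,  27],
    [ 15,   0,   0, 496,  67],
    [  0,   0,   0,   0, 115],
])

tp = np.diag(cm).astype(float)
oa = tp.sum() / cm.sum()     # overall accuracy over all 2500 points
pa = tp / cm.sum(axis=0)     # producer accuracy = TP / (TP + FN), per class
ua = tp / cm.sum(axis=1)     # user accuracy     = TP / (TP + FP), per class
print(f"{oa:.2%}")  # 83.00%
```

Note that with this row/column convention, PA divides by the reference (column) totals and UA by the classified (row) totals.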

Results
The results obtained by the proposed method are divided into three sections. Initially, the spectral analysis for each class is presented (Section 3.1), followed by the results of the evaluation metrics and the error matrices (Section 3.2). Classified images are in Section 3.3, considering the experiment without texture and the best-case scenario (highest OA) when texture was used.

Spectral Analysis
The spectral analysis is illustrated in Figure 3. The RGB band centers and widths were obtained from the sensor user's manual [32]. The raw DN values ranged from 0 to 255, and the average values of the 500 ground truth points for each class at each band were calculated. Error bars were generated considering one standard deviation, and each class was plotted separately.
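This summary step can be sketched as follows (the DN values below are randomly generated stand-ins for one class's 500 ground truth points):

```python
import numpy as np

# Hypothetical stand-in: (n_points, 3) raw DN values (R, G, B) for one class
rng = np.random.default_rng(1)
dn = rng.integers(0, 256, size=(500, 3))

band_mean = dn.mean(axis=0)  # average DN per band, plotted as the bar height
band_std = dn.std(axis=0)    # one standard deviation, plotted as the error bar
```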

Error Matrix and Validation Metrics
Table 2 shows the OA for the validation dataset and the eight experiments. The OA with no texture features in the dataset was 83.00%, while the best result, an OA of 92.54%, was achieved by using textures with a 15 × 15 window size. The producer (PA) and user (UA) accuracies for each of the five classes in the eight experiments are shown in Figure 4. The error matrices (in pixel counts) for the experiment without texture and for the experiment with the best OA (textures generated by a 15 × 15 window) are presented in Table 3. Complementary error matrices are presented as Supplementary Materials.

Classified Images
Figure 5 presents the classified images from experiments 1 and 8, for which the error matrices are presented in Table 3. Three sample regions were selected and zoomed in on for a better visualization of the results.

Discussion
The first analysis was based on the spectral characteristics presented in Figure 3. The spectral responses of the Bermudagrass and sugarcane classes were similar and presented characteristics of targets that contained green vegetation [33]. These characteristics included lower digital number (DN) values for the blue band when compared to the respective values of the green and red bands. This could be justified by the absorption of electromagnetic energy by leaf components such as chlorophyll A and B, carotenoids, and xanthophylls [33,34]. For the green band, this absorption was reduced, resulting in more electromagnetic energy reflected and thus higher DN values. For the red band, the absorption of electromagnetic energy increased once again, especially by leaf components such as chlorophyll A and B, resulting in higher DN values when compared to the blue band, but usually lower DN values than the green band [33,34].
The sugarcane and Bermudagrass classes also presented higher standard deviation values when compared to the other classes. The explanation for such variance in pixel DN values is the multiple scattering of electromagnetic energy on multilayer canopies [33]. When an electromagnetic energy beam reaches the first level of a canopy (e.g., a sugarcane leaf on the top of the canopy), the energy can be transmitted, absorbed, or reflected. The transmitted energy hits targets on a second level (under the first leaf), such as a soil patch or another sugarcane leaf. On this second level, the remaining electromagnetic energy is partially reflected back and hits the bottom of the first sugarcane leaf, where it is absorbed, reflected back, or transmitted to the atmosphere. This process repeats for targets on other levels. Different targets on the second level caused differences in the first-level DN values and hence a greater standard deviation for the Bermudagrass and sugarcane classes when compared to classes that had no canopy, such as bare soil and straw [33][34][35][36].
The classes of straw and bare soil had spectral responses different from the two previous classes. The class of straw contained parts of vegetation (mostly leaves) derived from previous sugarcane plantations. However, the leaf components related to absorption of electromagnetic energy, such as chlorophyll A and B, carotenoids, and xanthophylls, were dead, leading to almost no absorption by this kind of land cover [33]. This led to the highest DN values obtained in this study. On the other hand, the bare soil class presented increasing DN values for greater wavelengths in the visible region (400-700 nm), characterizing an exposed soil class [37,38]. As mentioned in Table 1, some samples of the bare soil class were not well drained, resulting in high soil moisture, which contributed to a reduction in the DN values when compared to samples that were dry [33,37]. This also contributed to a higher standard deviation in the bare soil class when compared to more homogeneous classes, such as straw or dark objects. The dark objects were shadows generated by overlapping sugarcane plants, and they had low DN values for all bands.

After the spectral analysis, the overall accuracy (OA) values presented in Table 2 were analyzed. Experiment 1, which used only the RGB bands and the GRVI, had an OA of 83.00%. In all experiments with texture features, the OA value was always superior in comparison to the dataset with no texture features. For greater window sizes, higher values of OA were observed, reaching 92.54% for the best-case scenario (experiment 8, with textures from the 15 × 15 window). The explanation for this lies in the "edge effect", where the window used for texture calculation overlaps the border between distinct objects in the image (Figure 6). A window that is partly over one natural texture and partly over another (along with the edge between them) will, very likely, be less ordered than a window entirely inside a patch of similar texture [20].
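For reference, the GRVI used alongside the RGB bands in experiment 1 is computed per pixel as (G − R)/(G + R); the snippet below is a generic sketch, not the authors' implementation:

```python
import numpy as np

def grvi(green, red, eps=1e-9):
    """Green-Red Vegetation Index: (G - R) / (G + R).

    Positive where green reflectance dominates (vegetation), negative
    where red dominates (e.g., bare soil); eps avoids division by zero.
    """
    g = np.asarray(green, dtype=float)
    r = np.asarray(red, dtype=float)
    return (g - r) / (g + r + eps)

# Two hypothetical pixels: vegetation-like (G > R) and soil-like (R > G)
green = np.array([120.0, 80.0])
red = np.array([60.0, 140.0])
print(grvi(green, red).round(3))   # [ 0.333 -0.273]
```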
Some textures, such as contrast, dissimilarity, entropy, and homogeneity, are highly affected by the "edge effect" and can be more useful in discriminating between patches that have two different textures. On the other hand, textures such as mean and variance are more useful in describing the interior of objects, where edges do not occur. Considering that our dataset had more features that could describe the occurrence of two different textures in a window, which translated into two different classes for this application, a better OA was expected for greater window sizes.
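The "edge effect" can be demonstrated numerically with a simplified gray-level co-occurrence computation (single horizontal offset, symmetric and normalized); the window contents below are hypothetical, but the pattern matches the argument above: a window straddling two textures yields higher contrast and entropy than one fully inside a homogeneous patch.

```python
import numpy as np

def glcm_features(window, levels=4):
    """Contrast and entropy from a simplified GLCM: horizontal offset of
    one pixel, symmetric, normalized to co-occurrence probabilities."""
    m = np.zeros((levels, levels))
    for i in range(window.shape[0]):
        for j in range(window.shape[1] - 1):
            m[window[i, j], window[i, j + 1]] += 1
    m = m + m.T                               # symmetric GLCM
    p = m / m.sum()                           # co-occurrence probabilities
    ii, jj = np.indices(p.shape)
    contrast = float((p * (ii - jj) ** 2).sum())
    nz = p[p > 0]
    entropy = float(-(nz * np.log(nz)).sum() + 0.0)
    return contrast, entropy

# Window entirely inside one texture: a single gray level, perfectly ordered
flat = np.full((5, 5), 1, dtype=int)

# Window straddling a transition between two textures (e.g., soil vs. straw)
edge = np.hstack([np.full((5, 2), 0), np.full((5, 3), 3)])

c_flat, e_flat = glcm_features(flat)          # both 0: no disorder at all
c_edge, e_edge = glcm_features(edge)
print(round(c_edge, 3), round(e_edge, 3))     # 2.25 1.213
```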
Neither of the previously reviewed works stated how texture window size affected OA, so these applications were compared by analyzing the best OA each obtained. Considering weed detection applications in other crops, the OA obtained for Bermudagrass detection in sugarcane crops was better than those obtained for eggplants (75.4%), maize (79.0%), and vineyards (83.3%) [15,16,18]. Furthermore, an OA of 92.5% was close to the best performances found in the literature: weed detection in spinach (96.9%), sunflowers (95.5%), and beans (94.8%) [15,17]. A review of these studies is listed in Table 4.
The high OA values obtained by the applications in spinach and beans may have been related to the classifier used, convolutional neural networks (CNNs) [17,24]. CNNs are a deep learning approach and have been successfully applied in several remote sensing applications [39,40]. Classification using CNNs in aerial images is usually performed pixel by pixel, and the algorithm is capable of learning representations of the data on multiple levels of abstraction [24,41]. These representations start as simple image features, such as borders, edges, or colors, but evolve into image patterns and pattern associations [24], providing a powerful tool for applications involving weed detection [17]. When comparing our results to other sugarcane applications that did not use texture features, the detection of Bermudagrass was more effective than the detection of sourgrass (77.9%) and tridax daisy (85.5%) [1]. The classification of peppergrass, the shoo-fly plant, and signalgrass, described by Reference [25], obtained an OA of 91.6%. However, the authors did not perform any type of validation, and it is possible that this value was overfitted to the dataset used to train the neural network used for classification. A similar OA (92.5%) was obtained by Reference [27] for the classification of sugarcane plants affected by the mosaic virus. However, the UAV image used by Reference [27] had 13 bands and a greater potential to discriminate classes using only spectral information. It is clear that the use of texture in UAV images is capable of improving weed detection in sugarcane crops, but further discussion may be developed by analyzing the outputs of the error matrix and the user (UA) and producer (PA) accuracies (Table 3 and Figure 4, respectively).
For the experiment that did not use texture features, the error matrix showed misclassification errors between the class of dark objects and both Bermudagrass and bare soil. These errors translated into low values of PA for the dark objects (23.0%) and of UA for Bermudagrass (61.4%) and for bare soil (85.1%). These classes, especially Bermudagrass and dark objects, had a similar spectral behavior, and the use of features that only represented spectral reflectance was not enough to discriminate between them. A similar problem was found by Reference [13] when trying to discriminate between different weed types using only spectral information. On the other hand, when texture features were used, the PA of dark objects improved to 74.0%, and the UA of Bermudagrass and bare soil improved to 80.9% and 94.7%, respectively. This is explained by the fact that the greater window sizes could detect two different textures between classes, as described by the "edge effect". This difference may be visually observed in region 1, highlighted in Figure 5, where the classification without texture barely detected points of dark objects, represented in small red regions.

Figure 1 .
Figure 1. (a) Location of the study site in São Paulo State, Brazil; (b) Iracemápolis municipality and the location where the image was acquired; (c) zoom representing the infestation of Bermudagrass (highlighted with red arrows) in a sugarcane field (gramineous weed between and within crop rows); (d) red, green, and blue (RGB) unmanned aerial vehicle (UAV) image of true color composition and 2-cm spatial resolution. Images were acquired on 16 February 2016, using a multirotor GYRO 200 OCTA 355 UAV (octocopter) equipped with a Sony RX100M3 camera of 20.1 megapixels and a CMOS Exmor R® sensor.

Straw
Soil covered with sugarcane straw (light beige). Occurrence between and within crop rows.

Dark objects
Dark objects (shadows) generated by overlapping sugarcane plants.

Figure 3 .
Figure 3. Analysis considering the raw digital number (DN) average values obtained for 500 ground truth points for each class: (a) Bermudagrass; (b) straw; (c) sugarcane; (d) bare soil; (e) dark objects.The RGB band wavelengths, considering sensor specifications, are highlighted in the respective colored squares.The error bars represent one standard deviation.
The OA values for the experiments with texture increased with window size, including 88.52%, 88.56%, 90.60%, 91.04%, 91.80%, and 92.54% (Table 2); no texture represents the classification with only the RGB band values and the Green-Red Vegetation Index (GRVI).

Figure 4 .
Figure 4. (a) Producer accuracy (PA) and (b) user accuracy (UA) for each class in the eight experiments realized.

Figure 5 .
Figure 5. Images considering (a) the experiment without texture and (b) the experiment that used texture calculated from a 15 × 15 window.Three different regions were selected and zoomed in on for better visualization of the original RGB composition and the classification results for images (a) and (b).

Figure 6 .
Figure 6.Explanation of the "edge effect": (a) A sample pixel on an RGB composition and a hypothetical class transition between bare soil (brown pixels) and straw (light beige pixels) in the study site; (b) a 3 × 3 window used for texture calculation provides an example where the "edge effect" does not occur, since the window is entirely located in one class; (c) a 15 × 15 window used for texture calculation capturing the texture transition between these two classes and representing the "edge effect".

Table 1 .
The five classes used in this study.

Table 2 .
Overall accuracy (OA) (%) for the eight experiments realized.No texture represents the classification with only the RGB band values and the Green-Red Vegetation Index (GRVI).

Table 3 .
Matrices for experiment 1, without texture, and for experiment 8, which used texture with a 15 × 15 window.

Table 4 .
Several weed detection applications considering factors such as overall accuracy (OA) and the respective classifier and types of features used.