A Plant-by-Plant Method to Identify and Treat Cotton Root Rot Based on UAV Remote Sensing

Cotton root rot (CRR), caused by the fungus Phymatotrichopsis omnivora, is a destructive cotton disease that mainly affects the crop in Texas. Flutriafol fungicide applied at or soon after planting has been proven effective at protecting cotton plants from being infected by CRR. Previous research has indicated that CRR tends to recur in the same regions of a field as in past years. CRR-infected plants can be detected with aerial remote sensing (RS). As unmanned aerial vehicles (UAVs) have been introduced into agricultural RS, the spatial resolution of farm images has increased significantly, making plant-by-plant (PBP) CRR classification possible. An unsupervised classification algorithm based on the Superpixel concept, referred to as PBP classification, was developed to delineate CRR-infested areas at roughly the single-plant level. Five-band multispectral data were collected with a UAV to test these methods. The results indicated that the single-plant-level classification achieved an overall accuracy as high as 95.94%. Compared to regional classifications, PBP classification performed better in overall accuracy, kappa coefficient, errors of commission, and errors of omission. The single-plant fungicide application was also effective in preventing CRR.


Introduction
The United States (U.S.) produced 20.9 million 218-kg (480-lb) bales of cotton in the 2017-2018 season with a production value of $7.2 billion (USD), ranking third after India and China, and it is the largest cotton-exporting country in the world [1]. The state of Texas produced 9.5 million bales, approximately 44% of U.S. cotton production, ranking first in the U.S. [1]. While Texas is by far the largest producing state, a major obstacle to cotton production in Texas is a disease called cotton root rot (CRR) or Texas root rot. The disease is caused by the soilborne fungus Phymatotrichopsis omnivora, a destructive plant pathogen throughout the southwestern U.S. The first documented study of CRR was in the 19th century by Pammel [2]. The disease rots the root, disrupting the vascular system and preventing water and nutrients from being transported from the roots to the rest of the plant, eventually killing the plant. An infected cotton plant usually dies within 10 days.
RGB (red, green, and blue) and other multispectral sensors are commonly used, but the frequently used normalized difference vegetation index (NDVI) requires the NIR band. Without the NIR band, the normalized difference photosynthetic vigor ratio (NDPVR) and the visible atmospherically resistant index (VARI) can be calculated, and they have been used to estimate crop yield [28,29]. Both RGB and other multispectral images have been used for rice growth and yield estimation. However, the vegetation indices (VIs) derived from multispectral images including NIR correlate better with grain yield than VIs derived from RGB images [29]. Albetis et al. used the support vector machine (SVM) classifier on UAV images to differentiate diseased from non-diseased areas of vineyards. Due to the high spatial resolution, they could distinguish grapevine vegetation from bare soil, shadow, and inter-row vegetation. A high classification accuracy of 97-99% was achieved in four vineyards [30].
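The two indices named above have standard definitions, NDVI = (NIR − R)/(NIR + R) and VARI = (G − R)/(G + R − B). A minimal NumPy sketch follows; the reflectance values are illustrative only and are not drawn from this study's data:

```python
import numpy as np

def ndvi(nir, red):
    """Normalized difference vegetation index; requires the NIR band."""
    nir, red = np.asarray(nir, float), np.asarray(red, float)
    return (nir - red) / (nir + red + 1e-12)  # small epsilon avoids 0/0

def vari(red, green, blue):
    """Visible atmospherically resistant index; an RGB-only alternative."""
    red, green, blue = (np.asarray(b, float) for b in (red, green, blue))
    return (green - red) / (green + red - blue + 1e-12)

# Toy reflectance values for a vigorous-canopy pixel
print(round(float(ndvi(0.45, 0.05)), 3))
print(round(float(vari(0.05, 0.12, 0.04)), 3))
```

Both functions accept whole reflectance bands as arrays, so an index image for a full scene is computed with the same one-line expression.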
Furthermore, artificial neural networks (ANNs) have been used to estimate water potential in vineyards based on UAV data [31]. While a great deal of recent agricultural research has involved UAV-based remote sensing, there is scant research about UAV-based remote sensing for the delineation of CRR.
In general, classification procedures are a way to categorize data according to various characteristics. Classification in RS means categorizing or mapping an image into different classes depending on the features of the data such as tone, texture, pattern, etc. Unsupervised and supervised classification are common and differ according to whether human-guided training is involved in classifying the data.
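As a toy illustration of the distinction, the sketch below clusters four synthetic two-band "pixels" with no labels (unsupervised k-means) and fits an SVM on the same pixels with human-supplied labels (supervised); the band values and labels are invented for illustration:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

# Toy two-band "pixels": two vegetation-like and two soil-like samples
X = np.array([[0.10, 0.40], [0.12, 0.45], [0.50, 0.20], [0.55, 0.25]])
y = np.array([0, 0, 1, 1])  # labels are only available to the supervised method

# Unsupervised: k-means groups the pixels without any training labels
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

# Supervised: the SVM learns a decision boundary from the labeled samples
svm = SVC(kernel="linear").fit(X, y)

print(km.labels_)                    # cluster assignments (label order arbitrary)
print(svm.predict([[0.11, 0.42]]))   # predicts class 0 (vegetation-like)
```

Note that k-means returns arbitrary cluster numbers that a human must still interpret, whereas the SVM's outputs carry the meaning of the training labels directly.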
The Superpixel algorithm segments images into many multi-pixel pieces (superpixels) based on shape, color, texture, etc. In essence, the Superpixel method converts images from pixel-level to district-level, and thus belongs to the image segmentation category in image processing. The Superpixel algorithm keeps the main features of the aggregated pixels, resulting in a sharp reduction in the number of data-containing units. As a result, it improves the image processing speed significantly. Sultani et al. used the Superpixel algorithm to detect objects in pavement images [33]. Different shaped objects such as patches, maintenance hole covers, and markers could be detected efficiently. After dividing the images into many small segments, features like histogram of oriented gradients (HOG), co-occurrence matrix (COOC), intensity histogram (IH), and mean intensity (MI) of each superpixel were calculated. HOG and COOC are texture and shape characteristics, while IH and MI are spectral intensity variations. Then, SVM was used to generate classifications based on each feature. The Superpixel algorithm has also been used to detect disease in agricultural crops. Zhang et al. developed a new method based on the Superpixel algorithm to detect cucumber diseases [34,35]. Leaf images were divided into superpixels, and then the expectation maximization (EM) method was applied to obtain lesion images. After feature extraction, SVM was used to detect the disease. The result indicated that the proposed method had the highest recognition rate and fastest processing speed compared to four other methods that have been used for cucumber disease recognition. Zhang et al. later proposed a new leaf recognition method based on the Superpixel algorithm, k-means, and pyramid of histograms of orientation gradients (PHOG) algorithms [34,35]. First, the RGB leaf image was divided into segments with the Superpixel algorithm. 
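The data-reduction effect described above can be demonstrated with the SLIC implementation in scikit-image; the synthetic image and segment count below are stand-ins, not parameters from any of the cited studies:

```python
import numpy as np
from skimage.segmentation import slic

# Synthetic 120x120 RGB image: a dark "plant" blob on a bright "soil" background
img = np.full((120, 120, 3), 0.8)
img[40:80, 40:80] = (0.1, 0.4, 0.1)

# Segment into roughly 50 superpixels; each retains the mean character
# of the pixels it aggregates
segments = slic(img, n_segments=50, compactness=10, start_label=0)

n_units_before = img.shape[0] * img.shape[1]  # 14,400 pixels
n_units_after = int(segments.max()) + 1       # on the order of 50 superpixels
print(n_units_before, n_units_after)
```

The roughly two-orders-of-magnitude drop in the number of data-containing units is what makes downstream feature extraction and classification so much faster.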
Then, k-means clustering was applied to segment the lesion section of the leaf. PHOG features were extracted and used to recognize the disease. Three apple and three cucumber leaf diseases were used to assess the method. The result indicated that the proposed method was effective and usually achieved the highest recognition rate compared to other methods that had been used for cucumber disease recognition.
Conventional CRR identification methods developed for 1-m resolution aerial images can only detect the CRR-infested area at the regional level, leading to the application of a large amount of fungicide to field areas that do not need it. UAV remote sensing makes high-resolution data collection possible, meaning that fungicide treatments could conceivably be applied at the level of individual plants. To take advantage of these high-resolution data, a novel high-precision CRR identification method is proposed to enable high-precision CRR detection and treatment. The objectives of this research were thus to (1) develop and evaluate a plant-by-plant (PBP) CRR detection and classification method; (2) compare the PBP classification method to common regional classification methods; (3) examine the effectiveness of PBP fungicide treatment to validate the necessity for the method.

Data Collection
A fixed-wing UAV (Tuffwing Mapper, Tuffwing LLC, Boerne, TX, USA; Figure 2a) was used to acquire image data of all the fields and regions on a cloud-free day, August 20, 2017. This UAV is equipped by the manufacturer with a global navigation satellite system (GNSS) receiver and an inertial measurement unit (IMU). A multispectral camera (RedEdge, Micasense, Seattle, WA, USA; Figure 2b) mounted on the UAV collected images at 120 m above ground level (AGL). The images had a pixel resolution of 7.64 cm and contained five spectral bands: blue (≈475-500 nm), green (≈550-565 nm), red (≈665-675 nm), red edge (≈715-725 nm), and NIR (≈825-860 nm). The images were taken between 11:00 and 13:00 local time with an optimized fixed exposure. Eight ground control points (GCPs) were used during each flight to improve the geographical accuracy of the mosaicked image of each field area. The GCPs were placed in each field at the four corners and four midpoints of each side. Ground-truth data were collected on August 25, 2017, and involved using a GPS receiver to record boundary locations of some CRR zones (Figure 3). A total of about 20 plants from these zones were removed and evaluated for the presence of the fungus on the roots in order to validate the presence of CRR in the zone.

Data Preprocessing
A 0.95-ha area was covered in each image with the AGL and camera used. An 80% forward overlap and 70% side overlap flight plan was used for image acquisition. The raw images were collected in TIFF format with data from the GNSS receiver and IMU stored as image metadata. Image mosaicking was conducted with Pix4D software (Pix4D S.A., Lausanne, Switzerland). A Geoexplorer 6000 (Trimble, Sunnyvale, CA, USA) GNSS receiver was used in the fields to collect the coordinates of the GCP centroids for geo-referencing of the images, also conducted in Pix4D. The centers of the GCPs in each raw image were manually linked to the corresponding ground-truth GNSS coordinates during the geo-referencing process.
Three spectrally flat reference tiles were used for radiometric calibration: dark gray (≈3% reflectance), medium gray (≈20% reflectance), and light gray (≈45% reflectance). The actual reflectance spectra of the calibration tiles were collected with a portable spectroradiometer (PSR+ 3500, Spectral Evolution, Haverhill, MA, USA; Figure 4). A linear relationship between known reflectance and image-pixel digital numbers (DNs) was established for each band. All DN images were converted to reflectance images in ENVI software (Harris Geospatial Solutions, Boulder, CO, USA) based on these relationships.
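The per-band linear calibration can be sketched as follows. Only the nominal tile reflectances (3%, 20%, 45%) come from the text; the DN values extracted over each tile are hypothetical:

```python
import numpy as np

# Known tile reflectances from the text (3%, 20%, 45%)
tile_reflectance = np.array([0.03, 0.20, 0.45])

# Mean image DNs over each tile for one band (hypothetical numbers)
tile_dn = np.array([2100.0, 13800.0, 30900.0])

# Least-squares fit of the DN -> reflectance line (gain, offset) for this band
gain, offset = np.polyfit(tile_dn, tile_reflectance, 1)

def dn_to_reflectance(dn):
    """Convert a DN (scalar or array) in this band to surface reflectance."""
    return gain * dn + offset

print(round(float(dn_to_reflectance(15000.0)), 3))
```

In practice one such (gain, offset) pair is fitted per spectral band, and the conversion is applied to every pixel of that band's mosaicked image.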

Regional Classification
Previous studies have used regional classification for CRR detection [4,5,21,24,36,37]. In a related prior study involving UAV remote sensing of CRR [38], each CRR-infested zone was identified as a region of plants rather than individual plants. The image data were classified into healthy and CRR-infested regions with unsupervised, semi-supervised, and supervised classification methods. The unsupervised and semi-supervised methods were based on k-means clustering and included one two-class method (unsupervised) and three multi-class (3, 5, and 10 classes) methods that combined more classes to form two classes based on user knowledge and judgment (semi-supervised). The supervised methods, which required selection of training data by a human operator, were two-class methods and included support vector machine (SVM), minimum distance (MD), maximum likelihood (ML), and Mahalanobis distance (MHD) classifiers. The CIR images of four of the five regions described above (the other region in the current study was used for a fungicide application test as described below) were used to evaluate the classification methods by comparing their overall accuracies, errors of omission and commission, and kappa coefficients. These images covered 5.68, 0.15, 0.42, and 0.34 ha of field regions CH1, CH2, SH, and WP, respectively (Figure 5). In the current study, the regional classification results from the related prior study were used for comparison with plant-by-plant classification.
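The semi-supervised merge step can be sketched as below. The one-dimensional NDVI values are hypothetical stand-ins for whatever band or index values the classifier actually ingested, and where the prior study relied on analyst judgment to combine clusters, this sketch simply takes the lowest-vigor cluster as CRR:

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical NDVI values for image pixels (CRR-infected plants show low vigor)
ndvi = np.array([0.15, 0.18, 0.22, 0.55, 0.60, 0.72, 0.78, 0.81]).reshape(-1, 1)

# Unsupervised step: cluster into three classes with no labels
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(ndvi)

# Semi-supervised step: inspect the cluster means and merge the higher-vigor
# clusters into "healthy", keeping the low-vigor cluster as "CRR"
means = km.cluster_centers_.ravel()
crr_cluster = int(np.argmin(means))
two_class = np.where(km.labels_ == crr_cluster, 1, 0)  # 1 = CRR, 0 = healthy
print(two_class)
```

The merge step is where human expertise enters: the clustering itself is automatic, but deciding which clusters represent CRR is not.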

Plant-by-Plant Classification
The high spatial resolution of UAV RS images makes it possible to detect CRR infection at a plant-by-plant (PBP) level of precision. The 7.64-cm resolution in the current study provides roughly 120 pixels per plant zone at full canopy cover, assuming 76-cm (30-in.) row spacing and an average seeding distance of 12 cm (4.6 in.) at 45,000 seeds per 0.405 ha (1.0 ac.). This number of pixels per plant should be adequate for identifying specific features to enable discrimination of plants based on spectral and spatial information. A new PBP classification method based on the Superpixel and k-means algorithms was thus proposed. This combination of algorithms was selected because the Superpixel algorithm, used appropriately, should be capable of identifying single plant zones, and k-means has been demonstrated to distinguish CRR-infected from healthy plants.
The simple linear iterative clustering (SLIC) Superpixel algorithm [39,40] is based on visual color converted to the three-dimensional (3D) spherical CIE-Lab color space. CIE-Lab expresses colors in numeric terms and deals with the issue that colorimetric distance in measurements does not correspond with the color difference perceived by humans. In this study, CIR images were used because healthy plants are more easily differentiated visually from unhealthy plants in CIR than in visual-color images. In CIR images, the spectral bands are converted from green to blue, red to green, and NIR to red for visual display. Other researchers have used NIR to enhance the SLIC Superpixel algorithm results based on RGB [41], but in this study the SLIC Superpixel algorithm was used on the RGB-channel outputs of the CIR images, so CIR was directly converted to an artificial CIE-Lab by way of a CIR-based XYZ color space.
Figure 6 is the flowchart of the PBP classification algorithm. The SLIC Superpixel algorithm was first applied to a CIR image, and a number (k) of superpixel "seeds" were generated and distributed uniformly across the image. The imported image was then divided into superpixels (small, rather homogeneous areas in the image) based on spectral and shape information around each "seed." The mean of the DN values within each superpixel was calculated and assigned to the superpixel so that it had a single DN value. The number (k) of superpixels was user-determined and provided to the algorithm based on the field planting rate; i.e., the number of superpixels was expected to be similar to the number of cotton plants in the image, multiplied by a scaling factor to account for bare soil areas in the image. K-means clustering was then applied to the superpixel image to achieve a two-class regional classification, with "1" and "0" representing CRR and healthy zones, respectively.
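The segment-then-cluster sequence can be sketched with off-the-shelf tools (scikit-image's SLIC and scikit-learn's k-means). The image, band values, and segment count below are synthetic stand-ins, not the study's data or parameters:

```python
import numpy as np
from skimage.segmentation import slic
from sklearn.cluster import KMeans

# Synthetic CIR-like image (NIR, R, G as display channels): healthy canopy is
# bright in NIR; a CRR-like patch (bottom-right quadrant) is darker
cir = np.ones((100, 100, 3)) * np.array([0.7, 0.3, 0.2])
cir[60:, 60:] = (0.25, 0.35, 0.25)
cir = np.clip(cir + np.random.default_rng(0).normal(0, 0.01, cir.shape), 0, 1)

# 1) SLIC segmentation; k would be chosen from the expected plant count
k = 150
segments = slic(cir, n_segments=k, compactness=10, start_label=0)

# 2) Assign each superpixel the mean value of its member pixels
n = int(segments.max()) + 1
means = np.array([cir[segments == i].mean(axis=0) for i in range(n)])

# 3) Two-class k-means on the superpixel means: "0" healthy, "1" CRR
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(means)
labels = km.labels_
# Relabel so that cluster "1" is the low-NIR (CRR-like) cluster
if km.cluster_centers_[1, 0] > km.cluster_centers_[0, 0]:
    labels = 1 - labels
region_map = labels[segments]  # per-pixel two-class regional classification
```

Because k-means numbers its clusters arbitrarily, the relabeling step (here keyed to the NIR-like channel) stands in for the user's interpretation of which cluster is CRR.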
At the same time, planting rows were detected by calculating the gradient of the raw image. A binary plant row image was generated, with "1" and "0" representing plant rows and the gaps between them, respectively. The centroids of each superpixel were identified and marked as potential cotton plant centroids, with "1" and "0" representing cotton plant and bare soil, respectively. The centroids of superpixels located in the CRR zone (as determined by k-means) and within the planting row were regarded as locations of CRR-infected plants. The centroids of superpixels located in healthy zones and within the planting row were regarded as locations of healthy plants. The classifier logic was applied to all image pixels and can be expressed with the following equation:

C = Z_n, if S_n = 1 and P_n = 1; C = 2, otherwise

where Z_n is the regional classification based on k-means, in which value "0" = healthy zone, and value "1" = CRR zone; S_n is the status of the pixel as a superpixel centroid location or not, in which value "0" = not a superpixel centroid location, and value "1" = superpixel centroid location; P_n is the status of planting row or gap between rows, in which value "0" = gap, and value "1" = row; and C = { ψ|ψ : R → {0, 1, 2}} is an overall class containing all pixel classes.
When C = 0, the superpixel centroid location is marked as an individual healthy cotton plant. When C = 1, the superpixel centroid location is marked as a CRR-infected plant. C = 2 represents no superpixel centroid at this location, regardless of whether the pixel is classified as healthy or infested.
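One reading of the classifier logic above, consistent with the class descriptions, is the piecewise rule sketched here; it is a simplification, and the study's exact implementation may differ:

```python
def pbp_class(z, s, p):
    """Per-pixel PBP classifier logic as described in the text.

    z: regional class from k-means (0 = healthy zone, 1 = CRR zone)
    s: 1 if the pixel is a superpixel centroid location, else 0
    p: 1 if the pixel lies on a planting row, else 0
    Returns 0 (healthy plant), 1 (CRR-infected plant), or 2 (no plant here).
    """
    if s == 1 and p == 1:
        return z   # a centroid on a row is a plant; it takes the regional class
    return 2       # otherwise there is no plant at this pixel

assert pbp_class(0, 1, 1) == 0  # healthy plant
assert pbp_class(1, 1, 1) == 1  # CRR-infected plant
assert pbp_class(1, 0, 1) == 2  # no centroid -> no plant, even in a CRR zone
```

Applied over a whole image, this rule converts the raster regional classification into a sparse set of per-plant points, which is what enables plant-level treatment maps.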

Accuracy Assessment
Accuracy assessment involves specific means of evaluating the performance of classifications [42,43]. A ground-truth map was drawn manually on the original high-resolution UAV images. In this process, the ground-truth data collected on August 25, 2017, were used as a visual reference when applying the following protocol. Zones in the field with more than approximately 10 immediately adjacent infected plants were categorized as CRR-infested zones. In larger CRR-infested zones, more than 10 immediately adjacent healthy plants were categorized as healthy zones. The regional classification maps were resampled to the higher resolution of the ground-truth map and compared to it on a pixel-by-pixel basis. This method is common for accuracy assessment of raster-based classification maps. On the other hand, the PBP classification maps, which were vector point maps, were compared to the ground-truth map at only the locations of the superpixel centroids; i.e., the classification of a superpixel was compared to the pixel at its centroid location on the ground-truth map. This method was selected to enable comparison to the ground-truth map at the plant level instead of the pixel level. It should be noted that the number of comparisons between a regional classification and the ground-truth map was much higher than the number for PBP classification. However, the methods used are considered reasonable for the type of data being evaluated; e.g., the regional classification maps did not have adequate resolution for plant-level comparison. Confusion matrices were developed based on the individual comparisons within these zones (Table 1).
The kappa coefficient, which indicates the agreement between the "predicted" and "true" values, was calculated from the derived confusion matrices with the following formula:

kappa = (N Σ_i t_i,i − Σ_i G_i P_i) / (N² − Σ_i G_i P_i)

where N is the total number of pixels; i is the class number; t_i,i is the number of correctly classified pixels in Class i; G_i is the total number of pixels classified as Class i in the ground-truth data; and P_i is the total number of pixels classified as Class i in the predicted data.
A kappa value of 1 indicates that the classification has perfect agreement with the true value, and a value of 0 indicates no agreement between the classification and ground truth (Figure 7). The errors of commission, representing a measure of false-positives, and errors of omission, representing a measure of false-negatives, were also calculated to evaluate the classifiers. Regional classification methods including k-means, SVM, MD, ML, and MHD were compared to the PBP classification method based on overall classification accuracy, the kappa coefficient, and errors of commission and omission.
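These metrics can be computed from a confusion matrix as follows; the matrix entries are invented for illustration and do not reproduce the study's results:

```python
import numpy as np

# Hypothetical 2x2 confusion matrix: rows = predicted, columns = ground truth
#                truth: healthy  CRR
cm = np.array([[9000,  100],    # predicted healthy
               [ 300, 1600]])   # predicted CRR

N = cm.sum()
overall_accuracy = np.trace(cm) / N

# Kappa from the marginal totals, following the formula above
G = cm.sum(axis=0)  # ground-truth totals per class
P = cm.sum(axis=1)  # predicted totals per class
kappa = (N * np.trace(cm) - (G * P).sum()) / (N**2 - (G * P).sum())

# Error of commission (false positives among CRR predictions) and
# error of omission (false negatives among true CRR) for the CRR class
commission = cm[1, 0] / cm[1].sum()
omission = cm[0, 1] / cm[:, 1].sum()
print(round(overall_accuracy, 4), round(kappa, 4),
      round(commission, 4), round(omission, 4))
```

Note that kappa discounts the agreement expected by chance from the class marginals, which is why a classifier can show high overall accuracy but a much lower kappa when one class dominates.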

Test of PBP Fungicide Treatment in the Field
An in-furrow, at-planting spray application is the most common way to apply the Topguard Terra (FMC Agricultural Solutions, Philadelphia, PA, USA) fungicide (flutriafol) that is licensed for treatment of CRR. The continuous application of the fungicide over the top of seeds as they are planted treats not only soil close to the seed, but also a length of soil between seeds that may not need treatment. This process may result in applying more product than necessary, so it is important to determine whether applying the fungicide to individual seeds or plants is effective. To test whether PBP fungicide treatment is effective in protecting cotton plants from CRR infection, a stem-drench treatment, which has also been proven in research trials to be an effective application method, was used in place of the at-planting application method. Specifically, the fungicide spray solution was applied to the stem of the cotton plant and a small amount of soil surrounding the stem. An 18.3 × 30.5 m (60 × 100 ft) test plot in field region PL was used to conduct this experiment.
Four treatments were applied: (1) a conventional at-planting treatment of in-furrow continuous spray over the top of planted seeds, applied with a tractor-pulled planter; (2) a stem-drench continuous spray, applied manually with a backpack sprayer; (3) a stem-drench pulsed spray on individual plants, applied manually with a backpack sprayer; and (4) a no-fungicide control. The experiment had 24 rows (Figure 8).

Plant-by-Plant Classification
One example (Region CH2) of the progression of the image processing results from each step in the PBP classification method is shown in Figure 10. The raw image (Figure 10a) gradient was calculated to identify the planting rows (Figure 10b). SLIC Superpixel segmentation was applied to the gradient map to determine possible locations of individual plants (Figure 10c). The original pixels of the raw image were aggregated into larger superpixels (Figure 10d) to identify individual plant locations. The k-means algorithm was applied to the superpixels to generate a two-class regional classification (Figure 10e). The final result of PBP classification is shown in Figure 10f, in which each individual healthy plant is marked with a yellow point, and each CRR-infected plant is marked with a blue point.

The accuracy assessment of PBP classification showed that it is a highly accurate method of differentiating between healthy and CRR-infected plants at the individual-plant level. In Region CH2, the PBP classification had the highest overall accuracy of 95.94%, as well as the highest kappa coefficient of 0.8617, which indicated very strong agreement between classification and ground-truth data (Table 2). Table 2 is the confusion matrix for PBP classification applied to Region CH2. Over 11,000 plants identified by the PBP algorithm in CH2 were evaluated, and about 82% of those were identified as healthy according to ground-truth data. About 13.1% of the plants classified as CRR-infected were actually healthy (overclassification), while about 9.5% of the actually CRR-infected plants were classified as healthy (underclassification). The PBP classifications were also generated for CH1, WP, and SH. In CH1, the PBP algorithm achieved an overall accuracy of 93.5%, a kappa coefficient of 0.7848, an error of commission of 16.1%, and an error of omission of 18.6%.
In SH, the PBP algorithm also had a high accuracy of 90.6% with a kappa coefficient of 0.7494. The errors of commission and omission were 12.2% and 8.5%, respectively. Compared to the other regions, WP had the lowest accuracy of 88.4%, but even this level would typically be acceptable for field application. The kappa coefficient, error of commission, and error of omission for WP were 0.6048, 20.9%, and 26.8%, respectively.
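The metrics used throughout this assessment can be computed directly from a confusion matrix. In the sketch below, the 2x2 counts are illustrative values chosen only to roughly match the CH2 figures, not the study's actual data; the formulas for overall accuracy, Cohen's kappa, and the errors of commission and omission are standard.

```python
import numpy as np

# Rows = ground truth (healthy, CRR); columns = prediction (healthy, CRR).
# Counts are illustrative, not the study's data.
cm = np.array([[9000,  270],
               [ 190, 1790]])

n = cm.sum()
overall_accuracy = np.trace(cm) / n

# Cohen's kappa: agreement beyond what chance alone would produce
pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2
kappa = (overall_accuracy - pe) / (1 - pe)

# For the CRR class (index 1):
commission = cm[0, 1] / cm[:, 1].sum()  # predicted CRR that was actually healthy
omission = cm[1, 0] / cm[1, :].sum()    # actual CRR missed as healthy

print(f"OA={overall_accuracy:.4f} kappa={kappa:.4f} "
      f"commission={commission:.3f} omission={omission:.3f}")
```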

Comparison to Regional Classifications
Thirty-six confusion matrices were generated from the results of the nine classification methods as applied to the four field regions. These confusion matrices are summarized in Table 3. The two-class k-means classifier identified CRR-infected cotton plants in the image automatically, but the overall accuracy averaged only 77.5%. The kappa coefficient of 0.491 also indicated relatively weak agreement between the classification and ground-truth data. The error of commission of 46% indicated that almost half the plants classified as CRR-infected were overclassified. Manually combining three-class, five-class, and 10-class k-means classifications improved the overall accuracy to 83.5%, 84.4%, and 84.1%, respectively. The kappa coefficients also increased to 0.547, 0.552, and 0.576, respectively, indicating moderate agreement between classification and ground truth. However, it must be noted that combining classes required both expertise and manual implementation from the user, so the goal of fully automated processing was not realized.
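The manual class-combination step amounts to a simple relabeling: the user inspects the k clusters and assigns each one to the healthy or CRR class. In the sketch below, the five-class label array and the merge table are hypothetical, standing in for the user's judgment.

```python
import numpy as np

# Toy 5-class k-means output (one label per pixel or superpixel)
labels5 = np.array([0, 1, 2, 3, 4, 2, 1, 0])

# User-specified merge (assumed): clusters 0-1 look healthy, 2-4 look infected
binary = np.where(np.isin(labels5, [0, 1]), "healthy", "CRR")
print(binary)
```

Because this mapping is chosen by inspection, the combined classifier is only semi-supervised: the clustering is automatic, but the merge is not.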
Using supervised classifiers including SVM, MD, ML, and MHD increased the overall accuracy to 86.3%, 85.7%, 86.5%, and 87.7%, respectively. All of these classifiers performed significantly better (α = 0.05) than two-class k-means in overall accuracy. The kappa coefficients for these classifiers were 0.659, 0.636, 0.667, and 0.786, respectively. While supervised classifiers performed better than the unsupervised and semi-supervised classifiers, they did not perform as well as the PBP classifier, and it must be noted that these also need human intervention, specifically for selection of training data based on subjective judgment.
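As one illustration of the supervised approach, a minimum-distance (MD) classifier assigns each pixel to the class whose training-mean spectrum is nearest in Euclidean distance. The five-band reflectance values below are synthetic stand-ins, not the study's training data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic 5-band training spectra (assumed values) for the two classes
healthy_train = rng.normal([0.05, 0.08, 0.05, 0.45, 0.30], 0.02, (50, 5))
crr_train = rng.normal([0.15, 0.18, 0.20, 0.25, 0.22], 0.02, (50, 5))
means = np.stack([healthy_train.mean(0), crr_train.mean(0)])  # shape (2, 5)

# Classify unseen healthy-like pixels by nearest class mean
pixels = rng.normal([0.05, 0.08, 0.05, 0.45, 0.30], 0.02, (100, 5))
dists = np.linalg.norm(pixels[:, None, :] - means[None, :, :], axis=2)
pred = dists.argmin(axis=1)  # 0 = healthy, 1 = CRR
print("fraction classified healthy:", (pred == 0).mean())
```

The need to supply labeled training spectra is exactly the human intervention noted above.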
The PBP classification method averaged 92.1% overall accuracy, by far the best among all classifiers considered. This accuracy level was significantly higher (α = 0.05) than that of the unsupervised and combined unsupervised classifiers. The average kappa coefficient was 0.786, indicating strong agreement between the classifications and ground truth, and this value was significantly better (α = 0.05) than that of all the other classifiers considered. The average errors of commission and omission, 15.56% and 15.85%, were also the lowest in the overall comparison group.
A comparison chart of the errors of commission and omission is shown in Figure 11. Theoretically, the ideal classifier, which has 100% accuracy and thus no errors of commission or omission, would be located at the origin of this coordinate system. The PBP classifier is by far the closest to the origin, indicating that it clearly performed the best overall. It is worth noting that in the aforementioned related prior study [38], two methods proposed to take advantage of the high resolution of UAV images, k-means plus support vector machine (KMSVM) and k-means segmentation (KMSEG), were evaluated with only regions CH1, WP, and SH. The two methods had approximately 22% and 16% error of commission and 18% and 11% error of omission, respectively. The KMSEG classifier, which is a fully automated regional classifier, generated results similar to those of the PBP classifier on a somewhat different data set. Both KMSEG and KMSVM are regional classifiers that were designed to take advantage of the morphological information available in high-resolution UAV images. Thus, like the PBP classifier, they were meant as improvements over traditional regional classifiers. The added advantage of the PBP classifier is that it is designed to classify individual plants, a tremendous advantage when subsequently applying fungicide on a PBP basis.

Figure 11. Comparison of the errors of commission and omission among classifiers.

Test of Method of Fungicide Application for CRR Control
In the study on fungicide application methods, the application of Topguard Terra generally reduced the incidence of CRR compared to the control (Table 4), as expected. The manually pulsed stem-drench treatment had the lowest plant mortality among the treatments, but the difference from the other two fungicide treatments was not significant. While the portion of the field (PL) used in this experiment approaches 100% mortality from CRR in most years, the dry weather in 2018, the first year of the study, resulted in low CRR severity. At a 5% significance level, there was no statistically significant difference among the treatments. However, the pulsed stem-drench (i.e., PBP) treatment would be considered significantly better than the no-spray control if a 15% significance level were assumed. While this is an uncommonly weak significance level, it is reasonable to believe that all three methods of applying fungicide, including the PBP method, offered some protection against CRR. Because conditions in 2019 were even drier than in 2018, no CRR development was observed during the experiment, and efficacy could not be assessed that year.

Discussion
In this study, the errors of commission represent the percentage of plants over-classified into the CRR category, and the errors of omission represent the percentage of CRR-infected plants misclassified as healthy. From an economic perspective, omission matters more than commission for CRR detection, because over-spraying of fungicide caused by over-classification would likely cost less than the loss of CRR-infected plants that could have been protected. Zones with weeds growing on bare soil, quite possibly next to a dead cotton plant, contributed to the errors of omission. While not necessarily critical, errors of commission were commonly observed in zones where bare soil was evident even though no CRR-infected plant was present. Mixed pixels of soil, plant leaves, and plant shadow, which were common at the boundaries between healthy and infested zones, also caused a large number of errors of commission and omission with the regional classifiers, because a mixed pixel does not represent the reflectance of a single object.
The homogeneity of the field could also affect classification accuracy. Regions CH1 and CH2 were from the same cotton field, and their images came from the same flight mission. The planting and disease patterns as well as the reflectance information were similar, and the lighting conditions were identical. The main difference between the regions was that CH2 was smaller than CH1. The classification results in CH2 were better than in CH1 in most cases, especially for unsupervised classification. One reason is that unsupervised classification clusters data into classes based on the dissimilarity of the data. As the imaged area becomes larger, more diverse data beyond healthy and infected cotton enter the field of view, such as a concrete road, power line, pond, or other objects. All of this "noise" can reduce classifier accuracy. A prerequisite for accurate automated classification is therefore an image consisting of only rows of cotton plants. All the classifiers had relatively low accuracies on Region WP compared to the other three regions, possibly because the planter mis-seeded during planting, creating a long, narrow "dead zone" consisting mainly of bare soil. Manual postprocessing can correct the misclassification caused by mis-seeding, but it violates the intent of automated classification. Morphological image processing tools such as erosion and dilation could be introduced in the future to differentiate mis-seeding from cotton root rot while maintaining automation [38].
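The erosion-and-dilation idea can be sketched as a morphological opening, which removes small speckles while preserving larger connected zones. The NumPy implementation below uses a 3x3 cross-shaped structuring element built from array shifts rather than any specific library API; the mask size and the speckle position are illustrative.

```python
import numpy as np

def shift_or(mask):
    """Dilation with a 3x3 cross: OR each pixel with its 4 neighbors."""
    out = mask.copy()
    out[1:, :] |= mask[:-1, :]
    out[:-1, :] |= mask[1:, :]
    out[:, 1:] |= mask[:, :-1]
    out[:, :-1] |= mask[:, 1:]
    return out

def erode(mask):
    # Erosion is dilation of the complement, complemented
    return ~shift_or(~mask)

def opening(mask):
    # Opening = erosion followed by dilation
    return shift_or(erode(mask))

mask = np.zeros((20, 20), dtype=bool)
mask[5:15, 5:15] = True   # large CRR-like zone: should survive opening
mask[2, 18] = True        # isolated one-pixel speckle: should be removed
cleaned = opening(mask)
print(cleaned[10, 10], cleaned[2, 18])
```

A thin mis-seeding strip behaves like the speckle here: too narrow to survive erosion, so opening suppresses it without erasing the larger infested zone.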
Comparing regional and PBP classifications is challenging because regional classification is based on pixels while PBP classification is based on individual plants. To make the comparison even more convincing, the classifications should ideally be evaluated with the same protocols. Comparing pixel differences between all classifications (PBP and regional classifications) and the same ground-truth map is a fair way to evaluate and compare classifications, but it is not readily done when the classifiers produce different types of maps as results. The PBP classifier output a vector point map, whereas the regional classifiers output raster maps, so the comparisons to the ground-truth map had to be done with different methods appropriate to each form of data.
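The point-versus-raster evaluation can be illustrated by sampling the ground-truth raster at each predicted plant location and comparing labels. The coordinates, labels, and raster below are synthetic.

```python
import numpy as np

# Synthetic ground-truth raster: 0 = healthy, 1 = CRR-infected zone
gt = np.zeros((50, 50), dtype=int)
gt[10:30, 10:30] = 1

# Vector output of a PBP-style classifier: (row, col, predicted_label)
points = [(5, 5, 0), (15, 15, 1), (25, 25, 1), (40, 40, 0), (12, 40, 1)]

# Sample the raster at each point and score the agreement
correct = sum(gt[r, c] == lab for r, c, lab in points)
accuracy = correct / len(points)
print(f"point-level accuracy: {accuracy:.2f}")
```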
In the comparison of fungicide application methods, the results should be validated by further study, as the disease pressure was low in both years due to dry weather conditions. However, the experiment did substantially support the concept of pulse application to reduce fungicide use. In 2018, the manual application of the pulse spray method was inefficient and actually resulted in the application of a greater amount of fungicide than the continuous spray, but in 2019, an average of 43% less fungicide was used because of improvements in the application technique. If efficacy of the fungicide holds up with that method and application rate, the overall concept of PBP detection, mapping, and fungicide application will be validated.
Considering computational requirements, the PBP algorithm required more computing time than the regional classification methods, because segmenting and locating seed positions is computationally intensive. For the 0.15-ha CH2 image, about 30 s was required to generate the classification on a 2016 MacBook Pro computer with an Intel i7-6920HQ central processing unit (CPU) and a Radeon Pro 460 graphics processing unit (GPU). The PBP classification algorithm is slower than conventional regional classification methods, but it is still acceptably fast. While a larger field might require a few hours to complete the classification, these classifications do not need to be done in real time; rather, they can be performed between growing seasons.
As discussed previously, UAVs provide much higher-resolution (decimeter- to centimeter-level) remote sensing data than manned aircraft or satellites (meter level). This study explored how to make use of such high-resolution data for CRR detection and the creation of PBP prescription maps. The application of fungicide at the PBP level is clearly possible from a technological standpoint. Wilkerson et al. tested a seed-specific in-furrow fungicide application system and found that it could achieve up to 95% accuracy for seed-specific treatment in cotton [44]. However, identifying, predicting, locating, and treating the disease at the PBP level remain the obstacles to high-precision treatment of CRR, as well as of other plant diseases. This study shows that CRR-infected plants in the current season can be individually identified with high accuracy, and PBP fungicide treatment appears to be effective in controlling CRR. The remaining challenges to be investigated are (1) whether the precise locations of individual CRR-infected plants are predictive for the following year, and (2) whether previously developed precision-spray technology [44] enables fungicide to be practically applied at these locations on a seed-by-seed basis.
There is no evidence to suggest that CRR can be cured once a plant is infected; the fungicide must be applied prior to disease development. Thus, PBP application of fungicide requires previous years' data to predict CRR-infested areas in future years. The entire PBP classification process can be conducted automatically if an appropriate seeding rate is known beforehand. Management of other crop diseases that can be treated during the growing season could potentially benefit from this type of high-resolution classifier.

Conclusions
This study involved the development and evaluation of a PBP classifier that can automatically detect CRR-infected plants at the single-plant level. The PBP classifier is based mainly on the Superpixel segmentation and k-means clustering algorithms. Eight conventional regional classification methods and the PBP method, applied to UAV remote sensing image mosaics of cotton, were evaluated in four field sections with a history of CRR. Among the regional classification methods, the unsupervised two-class k-means classifier achieved an overall accuracy of 77.48%, lower than the semi-supervised (83.47-84.12%) and supervised (85.68-87.72%) classifiers but requiring less human involvement. Compared to these conventional regional classification methods, the PBP method achieved the highest overall accuracy of 92.1%, the highest kappa coefficient of 0.786, the lowest error of commission of 15.6%, and the second-lowest error of omission of 15.9%. Furthermore, the PBP method is able to classify the image mosaics automatically. The PBP-based fungicide treatment in the field appeared to be effective in controlling CRR infection. These results generally validate the idea of plant-level CRR treatment and suggest the likelihood of major advances in high-resolution precision agriculture practices in the future.