1. Introduction
The bird’s-eye view that aerial imagery provides of target areas offers an objective assessment and a cost-effective method to monitor large areas compared to ground-based scouting [1]. With recent technological advances, high-resolution aerial images can be acquired by coupling different kinds of sensors [2,3] to a variety of platforms, including satellites, manned aircraft and unmanned aircraft systems. Several studies have utilized aerial images to monitor plant health in agricultural and forest lands [1,4,5]. While the aerial images obtained from sensors differ with sensor type and resolution, they are all derived from light energy of varying wavelengths that is distinctly recorded by the sensor aboard a platform. Furthermore, the utility of the various imaging platforms also largely depends on the sensor’s capabilities, the question the user intends to address, and whether the platform–sensor combination is economical. Of course, the value of the answer depends on its accuracy, and there is often a tradeoff between accuracy and cost. The optimal platform–sensor combination will balance the spatial, spectral, radiometric, and temporal resolution requirements along with processing requirements to maximize accuracy and minimize costs.
Alfalfa (Medicago sativa L.) is an important perennial forage legume with high nutritive value, making it an ideal choice for cattle feed, especially for dairy cattle. Of the 86.7 million tons of hay and haylage produced in the US in 2017, alfalfa accounted for 51% of the total production [6]. However, alfalfa production in southern Oklahoma is severely limited by Phymatotrichopsis root rot (PRR) disease (also referred to as cotton root rot, Texas root rot, Phymatotrichum root rot or Ozonium root rot), which is caused by Phymatotrichopsis omnivora, a soil-borne ascomycete fungus. In addition to alfalfa, the pathogen has a very broad host range affecting dicotyledonous hosts (including cotton), but not monocotyledonous plants [7]. Symptoms of PRR disease are visible during mid-to-late summer, when diseased plants begin to wilt and then rapidly die. The leaves remain firmly attached to the plant but turn brown, leaving a clear outline of dead plants at the disease front. In the field, the disease manifests as numerous circular infested areas spreading in a centrifugal fashion, gradually coalescing and enlarging during the growing season [8,9]. As the disease circles expand, the increasing area of bare ground provides an opportunity for the encroachment of weeds. Some alfalfa plants inside the circles recover from the disease, and these survivors reestablish by developing a large number of lateral roots below the crown [10].
Monitoring PRR disease movement and assessing the extent of alfalfa stand loss at the ground level pose challenges to producers due to the increasing number of disease circles combined with the emergence of survivors and weeds in the resulting bare ground. Aerial imaging enables landscape-scale assessments and is therefore an effective method to monitor and map PRR-diseased areas. In fact, aerial photography taken from an aircraft to visualize PRR disease in cotton dates back to 1929 and likely represents the first such report in the study of plant diseases [11,12]. This was later followed by the documentation of PRR-infested areas using color infrared photography and multispectral video imagery, but these images were not further analyzed due to a lack of sophisticated image processing techniques [13,14,15].
During the past decade, with the availability of the latest high-end sensors and image analysis software, considerable research has been done to detect and map PRR-infested areas in cotton fields. For example, a manned aircraft that acquired multispectral and hyperspectral images with more than one-meter resolution was successfully utilized to distinguish PRR-infested areas from non-infested areas [15]. Likewise, the utility of six supervised and two unsupervised classification techniques for the accurate classification of multispectral images of PRR-infested cotton fields has also been explored [16]. Further, Yang et al. [17] compared multispectral images of PRR-infested cotton fields taken at 10-year intervals and showed how historical images can be used to make site-specific management recommendations. The current literature shows extensive use of multispectral images acquired using manned aircraft to study PRR disease progression in cotton fields. However, no comparable knowledge of PRR disease progression or mapping is available for a perennial forage crop system such as alfalfa. Aerial imaging in alfalfa provides an additional opportunity to monitor PRR disease not only within a growing season, but also across years, spanning different alfalfa stand ages. This enables the tracking of disease initiation and cessation across multiple growing seasons.
Applications of unmanned aircraft systems (UAS) in agriculture have been increasing, ranging from mapping weed infestations and nutrient and drought stress to sampling plant pathogen spores from the lower atmosphere [18,19,20,21]. High-resolution multispectral aerial images obtained from UAS have also been shown to successfully detect diseases such as Huanglongbing in a citrus orchard and powdery mildew on opium poppy plants [22,23]. Similar research involving the use of UAS has not been reported for the study of PRR disease in alfalfa, thereby presenting an opportunity for further investigation.
In the current study, high-resolution red/green/blue (RGB) aerial images of a PRR-infested alfalfa field were obtained at different times during two crop growing seasons using either manned or unmanned aircraft platforms with the following objectives: (1) develop a workflow to produce PRR disease maps from sets of high-resolution RGB images acquired from two different platforms; and (2) assess the feasibility of using these PRR disease maps to monitor disease progression in alfalfa fields.
2. Materials and Methods
2.1. Study Site
The study was conducted on a PRR-infested, 24.8 ha, semi-circular commercial alfalfa hay production field under a center-pivot irrigation system located at the Noble Research Institute’s Red River Farm, Burneyville, Oklahoma (Figure 1; 33°52′35″ N, 97°15′28″ W, 210 m elevation). The study site was previously a pecan orchard that was converted into a production field in 2005. Soybeans, rye, wheat, triticale and oats were grown on this site before it was planted with America’s Alfalfa Alfagraze 600 RR in the autumn of 2011. During May and June of 2015, this location received 856 mm of rainfall (nearly 90% of the average annual precipitation) and the Red River escaped its banks, flooding portions of the study site and resulting in the loss of 3.6 ha of alfalfa from the field.
2.2. Data Collection and Processing
Two different imaging platforms (Supplementary Figure S1) were utilized to acquire a total of six aerial images of the study site at different solar times during the growing seasons of 2014 and 2015 (Table 1). A Vireo fixed-wing unmanned aircraft system (UAS) with a 10 megapixel (MP) RGB camera was flown in June and August 2014 by the Farm Intelligence Company (USA). The flights occurred at an altitude of 120 m above ground level (AGL) between 11:00 and 15:00 (local time) under sunny to mostly sunny conditions.
In October 2014 and during the 2015 growing season, aerial imagery was obtained by CloudStreet AirBorne Survey (USA) using a 22 MP Canon EOS 5D Mark III mounted on a piloted Dragonfly sport utility aircraft flown at 300 m AGL. An aerial survey system (Track’Air, Hengelo, The Netherlands) was used to automatically trigger the camera as the pilot maneuvered the aircraft over each point in the flight plan grid. A laser range finder used as an altimeter recorded the aircraft’s altitude AGL at high frequency, and a global positioning system (GPS) receiver recorded the coordinates of the aircraft. Time stamps logged from the altimeter and GPS were matched with the trigger time logged by the camera to determine the altitude and location of the aircraft at the moment each image was captured [24,25,26].
Images acquired from the manned aircraft platform were converted from the RAW file format to the TIFF file format. The UAS images were collected in JPEG format and were not converted. Irrespective of the image source, the images were loaded into the Agisoft PhotoScan software. The images underwent a series of workflow steps that included image alignment, building a dense cloud, developing a mesh and creating an orthomosaic, as described in the software’s user manual. The final orthomosaic images were brought into the ArcMap 10.3.1 software package for additional computation. A minimum of 24 white reflective 0.09 m2 metal square plates were fixed on the ground at known locations around and within the field. These plates could be manually identified in the aerial images and served as ground control points for image registration. In order to maintain the map projection and accuracy, the images were geo-rectified with a spline transformation and projected to the Universal Transverse Mercator (UTM), World Geodetic Survey 1984 (WGS-84), Zone 14 North coordinate system. The polygon selection tool was then used to delineate the flooded section and field boundary in all six aerial images.
The RGB images collected using the manned and unmanned aircraft had ground sample distances that ranged from 0.018 m to 0.064 m (Table 1). To ensure standardization when analyzing images across different time intervals, all the images were resampled to a coarser resolution of 0.10 m. Image resampling was performed with the ‘Resample’ tool using the nearest neighbor assignment resampling technique.
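The resampling itself was done with ArcMap’s Resample tool; purely as an illustration of what nearest neighbor assignment does, a minimal NumPy sketch (the function name and array sizes are ours, not ArcMap’s) might look like:

```python
import numpy as np

def resample_nearest(band: np.ndarray, src_res: float, dst_res: float) -> np.ndarray:
    """Resample a single image band to a coarser resolution using
    nearest-neighbor assignment: each output cell takes the value of
    the input cell nearest to its center."""
    scale = dst_res / src_res
    rows = int(band.shape[0] / scale)
    cols = int(band.shape[1] / scale)
    # Map each output cell center back to the nearest input index.
    r_idx = np.minimum((np.arange(rows) * scale + scale / 2).astype(int), band.shape[0] - 1)
    c_idx = np.minimum((np.arange(cols) * scale + scale / 2).astype(int), band.shape[1] - 1)
    return band[np.ix_(r_idx, c_idx)]

# Example: a 10x10 band at 0.02 m resampled to 0.10 m yields a 2x2 band.
band = np.arange(100).reshape(10, 10)
coarse = resample_nearest(band, 0.02, 0.10)
```

Nearest neighbor assignment is the natural choice here because it never invents new pixel values, which matters when the bands will later feed a per-pixel classifier.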
2.3. Image Classification
PRR has a distinctive disease pattern that is discernible from the damage caused by other pests and abiotic factors [9,16]. Multispectral or hyperspectral images make it possible to detect diseased-plant conditions that are not visible to the human eye and can also identify plant stress or the severity of damage. Our datasets, however, included only the visible R, G, and B bands, which cannot differentiate such nuanced conditions; this was acceptable because our objective was to monitor PRR disease spread rather than to identify disease. Ground truthing revealed PRR as the dominant factor contributing to alfalfa stand loss at the study site during the 2014–2015 period. The study site was therefore categorized into two classes: alfalfa and bare ground (treated as a ‘soil’ class when performing supervised image classification). However, during the course of the study, weeds emerged in some of the bare portions of the diseased areas. Hence, weed was also included as a third class in the image classification process (Figure 2).
The non-availability of ground reference data limited our ability to generate the training and validation datasets required to perform supervised classification. As an alternative approach, a researcher experienced with field knowledge of PRR disease at this site visually inspected the georectified, resampled images to generate training and validation data. For each image, one hundred polygons encompassing all three classes (alfalfa (40), soil (40), and weed (20)) and uniformly spread throughout the study area were selected in silico (Figure 3). Of these, 60 (alfalfa (25), soil (25), and weed (10)) were randomly selected as the training dataset and the remaining 40 (alfalfa (15), soil (15), and weed (10)) served for validation purposes. This ensured that the training and validation datasets were independent of each other. The total numbers of pixels utilized to generate the training and validation datasets are enumerated in Table 2. Each image (except the August 2014 and September 2015 images) was classified using each of three spectral signatures derived from the training samples, specific to: (a) the image; (b) the UAS platform; and (c) the manned aircraft platform. The UAS-platform-specific and manned-aircraft-platform-specific spectral signatures were developed from the August 2014 and September 2015 RGB images, respectively. Therefore, for the August 2014 image, the image-specific and UAS-platform-specific spectral signatures were the same; similarly, for the September 2015 image, the image-specific and manned-aircraft-platform-specific spectral signatures were identical. Our choice of these particular RGB images for developing the platform-specific signatures was based on the fact that they were taken during the mid-growing season, a period when PRR disease symptoms are pronounced in the field.
A color model conversion function in the ArcMap software was employed to convert the aerial images from RGB to the hue, saturation and value (HSV) color space. Unlike the RGB channels, the HSV color space channels are less correlated with each other. This conversion sets the hue, saturation, and value channel values between 0 and 240, 0 and 255, and 0 and 255, respectively. Analysis of the spectral signatures generated from the HSV images revealed that the hue channel provided greater contrast between the three classes in the study, but with higher variation than the saturation and value channels. Since hue is a polar dimension, with red hues mapped to values near both 0 and 240, we rotated the hue values by 60 units to produce a new Hrot60SV image, adding 60 units to hue values less than 180 and subtracting 180 from values between 180 and 240 (e.g., 0 and 240 both become 60, while 180 becomes 0). This conversion was performed in the R software, version 3.4.2, using the raster package. Thus, for each aerial image we had three variants to evaluate: RGB, HSV and Hrot60SV (Figure 4). Based on the spectral signatures unique to each variant image, maximum likelihood supervised classification was performed, categorizing each image into three classes (alfalfa, soil and weed). The maximum likelihood classification algorithm assigns a pixel to a user-defined class based on a Bayes decision rule.
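The hue rotation is simple to express in code. The original conversion was run in R with the raster package; the following NumPy sketch (the function name is ours) applies the same rule on ArcMap’s 0–240 hue scale:

```python
import numpy as np

def rotate_hue_60(hue: np.ndarray) -> np.ndarray:
    """Rotate an ArcMap-style hue channel (0-240 scale) by 60 units so
    that reddish hues near the 0/240 wrap-around map to one contiguous
    range: values below 180 gain 60; values from 180 to 240 lose 180."""
    return np.where(hue < 180, hue + 60, hue - 180)

# 0 -> 60, 60 -> 120, 180 -> 0, 240 -> 60: the two reddish extremes
# (0 and 240) now land on the same value instead of opposite ends.
hue = np.array([0, 60, 180, 240])
rotated = rotate_hue_60(hue)
```

The effect is to move the discontinuity of the polar hue axis away from the reddish soils, so that the soil class occupies a compact interval rather than straddling both ends of the scale.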
2.4. Model Accuracy
An accuracy assessment of all the classified images was performed using the 40 validation polygons generated for each image, as described above. One pixel was randomly selected from each validation polygon and compared with the corresponding pixel class in the classified image. This process was repeated 1000 times to compute the mean overall accuracy and balanced accuracy values for the alfalfa, soil and weed pixel classes. As the weed class was underrepresented compared to the alfalfa and soil classes, balanced accuracy values were estimated for each class, thereby accounting for the imbalanced dataset. All analyses were performed in the R software, version 3.4.2, using the raster and caret packages [27,28].
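As a sketch of the per-class balanced accuracy metric used above (caret reports it as the mean of sensitivity and specificity in a one-vs-rest setting), assuming Python in place of the R/caret pipeline actually used in the study:

```python
import numpy as np

def balanced_accuracy(truth, pred, cls) -> float:
    """One-vs-rest balanced accuracy for a single class: the mean of
    sensitivity (recall on the class) and specificity (recall on all
    other classes), as in caret's confusionMatrix output."""
    truth = np.asarray(truth)
    pred = np.asarray(pred)
    pos = truth == cls
    sensitivity = float(np.mean(pred[pos] == cls))
    specificity = float(np.mean(pred[~pos] != cls))
    return (sensitivity + specificity) / 2

# Toy validation sample mirroring the 15/15/10 class split (labels invented):
truth = ["alfalfa"] * 15 + ["soil"] * 15 + ["weed"] * 10
pred = ["alfalfa"] * 14 + ["soil"] * 16 + ["weed"] * 8 + ["alfalfa"] * 2
# Weed: sensitivity 8/10 = 0.8, specificity 30/30 = 1.0, so BA = 0.9.
weed_ba = balanced_accuracy(truth, pred, "weed")
```

Because the weed class contributes only 10 of the 40 validation pixels, plain overall accuracy would barely register weed errors; balanced accuracy weights the minority class equally.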
2.5. Agreement between Two Classified Images
The classified images generated for an RGB aerial image using two different spectral signatures were compared by pairing all pixels across the three classes from both images, thereby creating nine class pairs (alfalfa–alfalfa, soil–soil, weed–weed, alfalfa–soil, alfalfa–weed, soil–alfalfa, soil–weed, weed–alfalfa and weed–soil). The consistent class pairs, where both classified pixels agreed (alfalfa–alfalfa, soil–soil, weed–weed), were not considered for further analysis. To estimate the true accuracy of the six inconsistent class pairs among the remaining pixels, 20 pixels were sampled from each class pair in a stratified random manner, and we visually inspected each pixel in the RGB image to classify it manually. All analyses were performed in the R software, version 3.4.2, using the raster package.
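The nine-way pairing of two classified rasters can be sketched compactly: with the three classes coded as integers, the expression 3*a + b yields a unique code for each of the nine class pairs. The coding below is illustrative only, not the actual R implementation:

```python
import numpy as np

# Two classified rasters, coded 0 = alfalfa, 1 = soil, 2 = weed
# (an invented coding for illustration).
a = np.array([[0, 0, 1],
              [2, 1, 2]])
b = np.array([[0, 1, 1],
              [2, 2, 2]])

# 3*a + b gives a distinct code per pair: 0, 4, 8 are the consistent
# pairs (alfalfa-alfalfa, soil-soil, weed-weed); the rest disagree.
pairs = 3 * a + b
consistent = a == b
inconsistent = pairs[~consistent]  # candidates for stratified manual checks
```

Stratified sampling of 20 pixels from each inconsistent code then reduces to grouping pixel coordinates by their pair code and drawing at random within each group.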
2.6. Post-Processing of Classified Images
Visual inspection of the classified images revealed many misclassified isolated pixels. To remove this noise, the images underwent a series of post-classification processing steps: filtering to remove isolated pixels from the classified images, smoothing class boundaries, and reclassifying small isolated regions (pixel counts less than 100) to the closest surrounding cell values. All of these steps were accomplished using the generalization tools (Majority Filter, Boundary Clean, Region Group, Set Null and Nibble) in the ArcMap 10.3.1 software package. After post-classification processing, the number of pixels belonging to each class was calculated for each image to assess the area of alfalfa stand loss due to PRR disease.
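As an illustration of the filtering step, a minimal majority filter over 3x3 neighborhoods is sketched below in NumPy. This is a simplification of ArcMap’s Majority Filter (which, among other options, can require a strict majority and preserves ties); here the modal class always wins and edge pixels are left untouched:

```python
import numpy as np

def majority_filter(classed: np.ndarray, n_classes: int) -> np.ndarray:
    """Replace each interior pixel by the most frequent class in its
    3x3 neighborhood; edge pixels keep their original value. A
    simplified analogue of ArcMap's Majority Filter tool."""
    out = classed.copy()
    rows, cols = classed.shape
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            window = classed[r - 1:r + 2, c - 1:c + 2].ravel()
            counts = np.bincount(window, minlength=n_classes)
            out[r, c] = np.argmax(counts)
    return out

# A lone 'weed' pixel (class 2) inside alfalfa (class 0) is removed,
# which is exactly the speckle-suppression behavior described above.
img = np.zeros((5, 5), dtype=int)
img[2, 2] = 2
filtered = majority_filter(img, 3)
```

Isolated regions larger than a single pixel survive this filter, which is why the workflow additionally groups regions and nibbles away those under 100 pixels.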
4. Discussion
We applied an aerial imaging approach to better understand PRR disease spread and to map PRR-infested areas in an alfalfa field. Earlier research on PRR disease focused on multispectral or hyperspectral aerial images acquired using a manned aircraft platform; these studies were conducted in cotton, another important host for P. omnivora [15,17,29]. To our knowledge, this is the first study to use multiple high-resolution RGB aerial images of a PRR-infested alfalfa field spanning two growing seasons (2014–2015). Unlike cotton, alfalfa is a perennial forage crop and is cut several times within a growing season, providing a unique opportunity to study PRR disease progression under such intense management practices. In addition, continuous host availability across years influences pathogen movement and survival, thereby affecting stand yields. We used RGB images because we were interested in monitoring PRR disease spread rather than PRR detection; for the latter, we would have resorted to multi- or hyperspectral sensors capable of capturing spectra not perceived by the human eye. The major focus of this study was to develop a workflow for analyzing RGB images collected using UAS and manned aircraft platforms, and to discern the utility of these data for the study of PRR disease progression in alfalfa.
Regardless of the platform, remote sensing images are subject to optical and perspective distortions that arise during image acquisition and the representation of a three-dimensional scene in a two-dimensional format. Previous research has indicated that small-focal-length sensors (28 mm) produce more such distortions and that these distortions may need to be corrected during processing in order to measure geometric quantities correctly [30]. In the current study, the UAS image dataset was developed from vendor-generated proprietary JPEG images without any changes. For the images acquired with the manned aircraft platform, we used the Unidentified Flying Raw (UFRaw) application to convert the RAW file format to the TIFF file format, with the camera white balance option selected for color corrections. As earlier research has indicated higher perspective distortion with small-focal-length (50 mm) sensors compared to 85 or 105 mm sensors, the manned aircraft image dataset in the current study was collected with an 85 mm focal length lens to minimize optical and perspective distortions [31].
Initially, we examined the possibility of developing a single spectral signature to classify all the images, which would reduce processing time and computing resources. Our data indicated that a single spectral signature is not applicable to the analysis of images collected from both the UAS and manned aircraft platforms. Hence, each image was classified using three spectral signatures: (a) image-specific; (b) UAS-platform-specific; and (c) manned-aircraft-platform-specific. We identified that the UAS-platform-specific spectral signature can be used to classify images acquired from the UAS platform, yielding higher accuracy results (Table 3, Supplementary Tables S1 and S2). However, the same approach does not hold for the manned aircraft images: the data in Table 3 and Supplementary Tables S1 and S2 indicate increased accuracy estimates when the manned aircraft images were classified using image-specific spectral signatures.
The acquisition of the RGB images using different sensors at various flight times (Table 1) may explain why a single spectral signature could not be utilized for images from both platforms. The images also had different spatial resolutions; to balance this effect, all images were resampled to a coarser resolution (0.10 m), ensuring uniform comparison. The flights for the two platforms occurred at different times of day, resulting in images with varying degrees of lighting in addition to shadowing effects. To account for differences in luminance between the images and to accurately segment pixels into the alfalfa, soil and weed classes, we investigated the utility of color space conversion for our dataset.
The images in our study were captured in the traditional RGB (red, green, blue) color space. Several studies have shown better separation of image features by weighting each channel differently when transforming the RGB color space. Such changes are expected to yield diverse color distributions in each model, as most of the transformations are non-linear [32,33]. For example, in a study where 11 different color spaces (RGB, normalized rgb, XYZ, L*a*b*, L*u*v*, HSV, HLS, YCrCb, YUV, I1I2I3 and TSL) were compared for segmenting lettuce plants and soil from a set of images, the L*a*b* color space achieved superior classification, with 99.2% accuracy [34]. While there is no single optimum color space for image classification, we chose to transform our RGB dataset into the HSV color model, as this model has been shown to be robust to illumination variations and shadow effects [35,36,37]. After performing the HSV transformation, we observed high variation in the pixel values of the hue channel, especially for the soil class, since soils have a reddish hue with values just above 0 and just below 240. To minimize this variation, the hue channel pixel values were rotated by 60 units to yield the Hrot60SV transformation.
We expected improved classification accuracies with the HSV and Hrot60SV color spaces compared to the RGB color space (Table 3, Supplementary Tables S1 and S2), but found no support for this hypothesis in the data and subsequently rejected it. Although the balanced accuracy estimates for the soil class were above 92% for the RGB color space images classified using the different spectral signatures (the UAS-platform-based signature for UAS-acquired images and the image-specific spectral signature for manned-aircraft-acquired images), a major factor contributing to the differences in the overall accuracy estimates appears to be the alfalfa and weed classes. In addition to color space conversions, we investigated the effect of the spectral angle mapper (SAM) algorithm when performing supervised classification on a subset of our dataset. Unlike the maximum likelihood classification algorithm used in this study, SAM does not require any assumptions about the statistical distribution of the data and is not affected by solar illumination and shading effects [38]. However, SAM classification resulted in accuracies of less than 85%, lower than those of the maximum likelihood classification algorithm (data not shown). This contrasts with Yang et al. [16], who detected PRR in cotton fields using SAM with more than 95% accuracy; however, their dataset comprised multispectral images, unlike the RGB images used in the current study. While it is a recognized challenge to accurately segment crops from weeds, other studies have mitigated this hurdle using artificial neural network algorithms and other machine-learning-based approaches; assessing their utility for our dataset is beyond the scope of this study [39,40].
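For reference, the quantity at the core of SAM is simply the angle between a pixel vector and a class reference spectrum; the class with the smallest angle wins. A minimal sketch (the function name is ours; in practice the reference spectra would come from the training signatures):

```python
import numpy as np

def spectral_angle(pixel: np.ndarray, reference: np.ndarray) -> float:
    """Angle (radians) between a pixel vector and a class reference
    spectrum. Because the angle ignores vector length, SAM is
    insensitive to overall brightness differences."""
    cos = np.dot(pixel, reference) / (np.linalg.norm(pixel) * np.linalg.norm(reference))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

# Brightness-scaled versions of the same RGB vector have angle ~0,
# which is why SAM is robust to illumination and shading effects.
p = np.array([60.0, 120.0, 40.0])
assert spectral_angle(p, 2.5 * p) < 1e-6
```

With only three RGB bands, however, the angular separation between alfalfa, soil and weed spectra is far smaller than in a multispectral cube, which is consistent with the lower SAM accuracies we observed.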
Post-classification processing steps were performed to minimize the speckled effect in the images, i.e., to remove misclassified isolated pixels and small regions of fewer than 100 pixels. This process either maintained or improved the overall accuracy for five of the six images analyzed (Table 6). For June 2014, the overall mean accuracy decreased from 0.901 to 0.898. This is also the image for which we determined that the UAS-platform-specific spectral signature performed better than the image-specific spectral signature: we estimated the congruency between the classified images generated by the two signatures, compared them with manually classified RGB pixels, and found the UAS-platform-specific signature to be more accurate. We observed lower balanced accuracy values for the weed class for this image, a result of significantly fewer weeds and the consequently smaller training and validation datasets (Table 2). We hypothesize that the smaller number of training pixels may have contributed to the misclassification of some pixels as weed instead of alfalfa, or vice versa.
The classified RGB aerial images used to estimate the extent of alfalfa stand loss due to PRR showed wide fluctuations in the area under the three classes, especially the soil class. Weeds increasingly occupied the newly bare soil areas created by the PRR disease, leading to a reduction in the area under soil. Nevertheless, by the end of the study period (October 2015), about 10 ha was recorded for the soil class, compared to 5.8 ha at the start of the study period (June 2014). This can be attributed to the formation of new diseased areas as well as the expansion of existing ones. Similar effects of PRR have been observed within a single cotton growing season using multispectral images, where the percentage of root-rot-infected area increased from 5.4% to 13.2% and from 21.6% to 26.8% in two fields in Edroy, TX, and from 27.0% to 37.8% and from 21.4% to 50.6% in two fields in San Angelo, TX [29].
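For readers converting between classified pixel counts and the areas reported above, the arithmetic at the resampled 0.10 m resolution is straightforward (the helper name is ours):

```python
# Each resampled pixel covers 0.10 m x 0.10 m = 0.01 m^2,
# and 1 ha = 10,000 m^2.
PIXEL_AREA_M2 = 0.10 * 0.10

def pixels_to_hectares(n_pixels: int) -> float:
    """Convert a per-class pixel count from a classified 0.10 m
    raster into hectares."""
    return n_pixels * PIXEL_AREA_M2 / 10_000

# The ~10 ha recorded for the soil class in October 2015 thus
# corresponds to roughly 10 million classified soil pixels.
soil_ha_oct_2015 = pixels_to_hectares(10_000_000)
```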