Comparing Interpretation of High-Resolution Aerial Imagery by Humans and Artificial Intelligence to Detect an Invasive Tree Species

Timely, accurate maps of invasive plant species are critical for making appropriate management decisions to eliminate emerging target populations or contain infestations. High-resolution aerial imagery is routinely used to map, monitor, and detect invasive plant populations. While conventional image interpretation involving human analysts is straightforward, it can demand substantial time and resources to produce useful intelligence. We compared the performance of human analysts with a custom RetinaNet-based deep convolutional neural network (DNN) for detecting individual miconia (Miconia calvescens DC) plants, using high-resolution unmanned aerial system (UAS) imagery collected over lowland tropical forests in Hawai'i. Human analysts (n = 38) examined imagery at three linear scrolling speeds (100, 200, and 300 px/s), achieving miconia detection recalls of 74 ± 3%, 60 ± 3%, and 50 ± 3%, respectively. The DNN achieved 83 ± 3% recall and completed the image analysis in 1% of the time of the fastest scrolling speed tested. Human analysts could discriminate large miconia leaf clusters better than isolated individual leaves, while DNN detection efficacy was independent of leaf cluster size. Optically, the contrast in the red and green color channels and all three (i.e., red, green, and blue) signal to clutter ratios (SCR) were significant factors for human detection, while only the red channel contrast and the red and green SCRs were significant factors for the DNN. A linear cost analysis estimated the operational use of a DNN to be more cost effective than human photo interpretation once the cumulative search area exceeds a minimum area. For invasive species like miconia, which can stochastically spread propagules across thousands of ha, the DNN provides a more efficient option for detecting incipient, immature miconia across large expanses of forested canopy.
Increasing operational capacity for large-scale surveillance with a DNN-based image analysis workflow can provide more rapid comprehension of invasive plant abundance and distribution in forested watersheds and may become strategically vital to containing these invasions.


Introduction
Invasive species are one of the main threats to native ecosystems worldwide, altering plant community structure and function, i.e., reducing biodiversity and compromising ecosystem services [1][2][3][4]. Invasive species detection and control programs typically consume a significant portion of natural resource management budgets and provide fertile ground for technological innovations to reduce costs by increasing efficiency in protecting large landscapes [5,6]. A compelling example of this management challenge can be found in the state of Hawai'i, where conservation land managers are confronted by a multitude of invasive species threats to critical habitats and native ecosystems. Research and development of emerging technologies has become an institutional component of invasive species management strategies in Hawai'i, as a measure to gain advantages on large, often expensive, problems [7]. Strategically, early detection and rapid response (EDRR) offers the last opportunity to consider an aggressive small-scale eradication program [8]. Beyond that, naturalized invasive species populations often become established beyond feasible eradication and are consequently relegated to containment strategies attempting to confine populations to their occupied areas [9]. Regardless of the management strategy, a majority of resources to combat invasive species are dedicated to reconnaissance and surveillance [10,11]. There have been many technological advancements in this effort, starting with the advent of civilian GPS, geographical information systems (GIS), and remote sensing, leading to better spatial and temporal tracking of dynamic species invasions [12][13][14][15].
Miconia (Miconia calvescens DC) is a high-priority invasive plant target for the Hawai'i Invasive Species Council, with over US$1M invested annually in search and control efforts [16][17][18][19][20]. It is a mid-story canopy tree native to South and Central America and was originally introduced as an ornamental specimen to the Big Island of Hawai'i in 1961. It is presently invading more than 100,000 ha of forested watersheds across the Hawaiian archipelago. Technological innovations in herbicide application, data collection, and search strategy have enhanced control and containment efforts, but miconia continues to spread [21][22][23][24][25][26].
Miconia is an autogamous species that is prone to passive long-distance dispersal by frugivores [27] and capable of establishing isolated founder populations stochastically spread over large areas [28,29]. Moreover, seeds from several Miconia species are reported to exhibit extended physiological dormancy that results in latent germination and persistent recruitment from a deposited seedbank over several decades [30,31], a trait also specifically observed in miconia [32,33]. Miconia can germinate in low light conditions and can remain visually hidden under a multi-tiered forest canopy for several years [24,31]. These life-history traits make miconia a highly problematic and aggressive invasive species, despite arduous search efforts and long-term intervention schedules [32].
Surveillance programs from manned aerial platforms (e.g., helicopter) have been described as random search efforts, which predictably translate to imperfect detection [23,[34][35][36]. The randomness affecting miconia detection can be inferred from several factors. Importantly, an on-board observer's experience, acuity, and stamina are observed factors in miconia detection [23]. Visual discrimination of individual species in a diverse, heavily vegetated, wet forest is difficult, and may also be explained by color contrast and signal to clutter ratio [37,38]. Miconia was imported to Hawai'i and other locations as a striking ornamental; its leaves are large, elliptic to obovate (e.g., up to 80 cm in length) with three acrodromous veins, and dark green dorsal and reddish-purple ventral surfaces [39,40]. These prominent features also assist with its detectability. Random (imperfect) search efforts often follow an exponential function where the probability of detection is dependent on the cumulative amount of search effort applied uniformly to an area [36,41]. Thus, the only practical option for increasing the probability of detection is to compensate with repeated search efforts, usually at great expense dictated by the terms of helicopter service contracts, often well over US$1000 h−1.
The availability of high-resolution aerial imagery derived from unmanned aerial systems (UAS) has dramatically increased over the past decade due to reduced costs, reduced regulatory barriers, and technological advancements in flight endurance, GPS-precision flight planning, image sensors, post-processing algorithms, and cloud-based computing [42][43][44][45][46][47][48]. Mapping applications with UAS have become economical and routine, resulting in a growing demand for high spatial and temporal resolution data from a wide range of industries and services [49][50][51][52]. Adoption of this technology for invasive species surveillance is still being cultivated through proof-of-concept demonstrations and protocol development [53][54][55][56]. Many industrial applications pertain to small-scale site inspections, while natural-area conservation, including invasive species management, is more likely to be complicated by the enormity and remoteness of the locations involved. While resolution and comprehension are desirable features of remotely sensed data, operational and post-process workflow efficiencies inherently dictate usability and adoption by practitioners.
Increased use of UAS can create a backlog of large aerial image data sets. Artificial intelligence and deep neural network (DNN) algorithms provide a means of automating image analysis for object detection, following investments in image collection, annotation, and model training [57,58]. Early adopters of neural networks with UAS imagery have focused on agricultural systems [59][60][61][62], but a growing number of studies are now exploring ways to detect and map invasive plant species in more complex forest environments [63][64][65].
Detecting and mapping these invasive plants falls within the domain of object detection. Convolutional neural networks have numerous variations but consist of convolutional and pooling layers grouped into modules, with a final layer outputting the class label [66,67]. Convolutional neural network based object detection models may be separated into two categories: two-stage and one-stage approaches [68,69]. In the two-stage approach, such as Fast R-CNN [70], Faster R-CNN [71], Mask R-CNN [72], and Feature Pyramid Network [73], object detection is separated into an initial region proposal phase, during which regions where the object may exist are identified, and a detection phase, where candidate regions are classified into different classes. One-stage detectors, such as YOLO [74], SSD [75], and RetinaNet [76], use anchors, which are sets of pre-defined bounding boxes of varying scales and ratios, for initial region proposals, and the detector classifies these pre-defined regions. One-stage detectors are typically faster but have reduced accuracy compared to two-stage detectors [69].
Here, we present a convolutional deep neural network (DNN) based on a one-stage RetinaNet model [77] specific to detection of miconia in wet, heavily vegetated tropical forests and compare its performance to that of experienced human image analysts. RetinaNet was selected due to its improved performance over other networks for tree crown detection [78,79].
While advances in automated remote sensing classification techniques are rapidly evolving, human interpretation of high-resolution imagery continues to play an important role in forestry and conservation [80][81][82][83], including human-in-the-loop applications [84]. Trained human analysts can readily detect cryptic understory species such as miconia in high-resolution imagery, but distraction and fatigue become factors of concern when processing large numbers of images [85,86]. The motivation of this study was to advance the adoption of UAS technologies in invasive plant species surveillance, which requires understanding the efficacies and efficiencies of human- and DNN-based image analyses. Here we report on a study that compares the performance of human analysts against a custom DNN for image scanning and detection of miconia in high-resolution imagery derived from UAS. We measured human performance under a controlled experimental setting using three linear image scrolling speeds and compared those detection recalls against a customized miconia detection DNN algorithm. We further examined the importance of nine different factors relating to miconia canopy geometry, size, and visual characteristics on detection recall. We also compared simple linear cost models based on the workflows in field mapping and image analyses performed manually by human analysts versus a semi-automatic computational approach using the DNN algorithm.

Image Collection
We collected aerial images over miconia-infested areas on the island of Hawai'i with a small multirotor UAS (Inspire 2, SZ DJI Technology Co., Ltd., Shenzhen, China) equipped with an RGB camera with a 4/3 sensor (Zenmuse X5S with 15 mm MFT lens, SZ DJI Technology Co., Ltd.). Flight surveys were conducted at an altitude of 50 m above ground level with a groundspeed of 5 m s−1 on parallel flight paths, with the camera oriented in the nadir position and automatic settings for focus and white balance (Figure 1, Table 1). These surveys captured images (5280 × 3956 pixels) with a ground sampling distance of approximately 1.1 cm px−1. No geometric correction, radiance correction, or reprojection was performed on the aerial images. Three different locations known to have miconia were surveyed (sites A-C; Figure 1). We collected a total of 649 images from these flights. We selected six individual images for interpretation, representing a range of miconia abundances from sparse to densely infested. In each image, we carefully outlined all contiguous miconia leaf canopy and manually digitized the outlines into vector polygon features using QGIS (v. 3.4.15), with two separate analysts spending at least 30 min per image for quality assurance (Figure 2). The six images from all three sites contained a total of 150 feature polygons. As a final step, we created a 53-pixel buffer, calculated as 25% of the average characteristic dimension, around each feature polygon to accommodate human errors with hand-eye coordination when marking the feature polygons, thereby reducing false-positive scoring, especially for the smaller features. The characteristic dimension is equal to the square root of the enclosed area. A single feature polygon may not necessarily designate an individual tree. Instead, several polygons could have been derived from a single plant, e.g., where background tree canopy overlapped with a large, sprawling miconia canopy. Alternatively, some polygons might correspond to multiple individual plants in close proximity.
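The buffer calculation described above can be sketched as follows. Only the square-root definition of the characteristic dimension and the 25% fraction come from the text; the polygon areas below are hypothetical values chosen to reproduce a buffer near the reported 53 px.

```python
import math

def characteristic_dimension(area_px: float) -> float:
    """Characteristic dimension of a polygon: square root of its enclosed area."""
    return math.sqrt(area_px)

def buffer_radius(areas_px, fraction=0.25) -> float:
    """Buffer radius as a fraction of the average characteristic dimension."""
    mean_dim = sum(characteristic_dimension(a) for a in areas_px) / len(areas_px)
    return fraction * mean_dim

# Hypothetical polygon areas (px^2); a 53 px buffer implies an average
# characteristic dimension of about 212 px across the 150 study polygons.
areas = [45000.0, 44900.0, 45100.0]
print(round(buffer_radius(areas)))  # 53 for these assumed areas
```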

Human Analyst Trials
We wanted to formally compare the performance of trained human analysts, operating at three different scrolling speeds, against a custom DNN for the detection of miconia. To do this, we needed to generate baseline data for human detection performance. Institutional Review Board approval for the human experiments was obtained from the University of Hawai'i Human Studies Program (IRB protocol no. 2017-00863). We recruited forty test participants from a pool of local volunteers with professional experience in identifying miconia but unfamiliar with the specific regions of interest in this study. Participants first answered a questionnaire to provide relevant information on their experience with identifying miconia and other background information that might impact their ability to visually detect miconia. We screened participants for visual acuity, using a LogMAR chart [87], and color-blindness, using the Ishihara test [88]. If a participant demonstrated visual acuity below normal, as defined by the visual acuity measurement standard [89], or a color vision deficiency, we removed them from the pool. In total, 38 participants were ultimately included in the human analyst trials.
We developed a custom Python script to display image sections within a 500 × 500 px field of view, continuously scrolling through each test image at one of three fixed linear speeds (100, 200, or 300 px/s). The fixed scrolling speed ensured a uniform viewing time for the image, reducing the fixation in search effort that may come from a static view of each section. The viewing monitors were 24-inch diagonals with 1920 × 1200 px resolution (P2419H, Dell Inc., Round Rock, TX, USA) and were calibrated to ensure consistent color displays (Datacolor Spyder5, Lawrenceville, NJ, USA). Participants were seated approximately 50 cm from the screen with a 30° viewing angle for a field of view encompassing the entire screen. The optical mouse was located on the right or left side based on subject preference. Participants were instructed to mark each suspected miconia plant distinguished as a contiguous leaf canopy cluster. The marking procedure was performed by placing the mouse cursor over the suspected feature and clicking to secure a reference point on the image section. They were further instructed that a mark anywhere within the contiguous leaf canopy cluster would be recorded as a successful detection, while multiple clicks within the same contiguous area would not affect total detection counts. Participants were presented with a total of three images randomly selected from the pool of six, as described above. Each image corresponded to one of the three speeds, assigned randomly. Points created by each participant were saved separately as a comma-separated values (CSV) file accompanying the image section. The experiment was administered for each participant within a 10-min period, to eliminate fatigue as a factor. Points were classified as true positives when contained inside the buffered polygons. Points outside of the buffered polygons were classified as false positives. Buffered polygons with no points occurring inside the polygon were classified as false negatives.
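The scoring rules above (a click anywhere inside a buffered polygon counts once as a true positive regardless of repeat clicks; unmarked polygons are false negatives; clicks outside all polygons are false positives) can be sketched in pure Python. The ray-casting helper and all coordinates below are illustrative, not the study's actual scoring script.

```python
def point_in_polygon(pt, poly):
    """Ray-casting test: count edge crossings of a horizontal ray from pt."""
    x, y = pt
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > x:
                inside = not inside
    return inside

def score_clicks(clicks, buffered_polygons):
    """Classify analyst clicks against buffered miconia polygons.

    Multiple clicks inside the same polygon count as one true positive;
    polygons with no clicks are false negatives; clicks outside all
    polygons are false positives.
    """
    hit = [False] * len(buffered_polygons)
    false_pos = 0
    for pt in clicks:
        matched = False
        for i, poly in enumerate(buffered_polygons):
            if point_in_polygon(pt, poly):
                hit[i] = True
                matched = True
                break
        if not matched:
            false_pos += 1
    tp = sum(hit)
    fn = len(buffered_polygons) - tp
    return tp, false_pos, fn

# Two square "buffered polygons" and three clicks (hypothetical coordinates)
polys = [[(0, 0), (10, 0), (10, 10), (0, 10)],
         [(20, 20), (30, 20), (30, 30), (20, 30)]]
clicks = [(5, 5), (6, 6), (50, 50)]  # two in the first polygon, one miss
print(score_clicks(clicks, polys))  # (1, 1, 1)
```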

Deep Convolutional Neural Network Searches
Convolutional neural networks have been successfully used to identify invasive plants in UAS imagery [67,90]. However, to our knowledge, no one has developed a DNN for the detection of miconia in nadir aerial imagery or performed a rigorous comparison of DNN performance against trained human analyst trials. The miconia detection algorithms developed for this study were based on RetinaNet [77] with a ResNet-101 [91] backbone pretrained on ImageNet, designed for fast and accurate detection of densely packed objects within images [92]. The model was pre-trained to be capable of general image understanding by using a transfer learning technique with the large ImageNet database (i.e., 14 million images) consisting of ground-level photos of common objects. The final DNNs were obtained by freezing the first 80 layers of the ResNet backbone before performing specialized transfer learning with a custom miconia dataset.
We used cross-fold validation to train 10 models on 3636 miconia annotations spread across 86 training images taken from all three sites. These training images were cropped 1000 × 1000 px subsections of the original images. Two skilled human analysts sequentially annotated and verified each image section with bounding boxes. A 10-fold cross validation was performed with the 86 training images by splitting them into ten roughly even folds, with six folds having nine images and four having eight. Each model was trained using a unique nine-fold combination, with the remaining fold used as a validation set. Hyperparameters used during training are provided in Table 2. Images were preprocessed with the standard ImageNet preprocessing as described by [77]. To select the final model for each fold, we chose the model with the lowest validation loss across all epochs. The six images used in the human analyst trials were withheld from our DNN training and validation sets and processed by each of the ten developed DNNs. This methodology is depicted in Figure 3. The output of each DNN consisted of a set of generated bounding boxes for detected targets. Vector points were created from the centroids of each bounding box and assessed for recall against the miconia feature polygons described above. Algorithm development, calibration, and validation procedures were performed on a computer workstation with a Titan RTX graphical processing unit (NVIDIA Corp., Santa Clara, CA, USA) and an i9-9900K CPU (Intel, Santa Clara, CA, USA).
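The 10-fold split described above (86 images into six folds of nine and four folds of eight, each model training on nine folds and validating on the tenth) can be sketched as follows; the file names and random seed are placeholders.

```python
import random

def make_folds(items, k=10, seed=0):
    """Split items into k roughly even folds (86 images -> six folds of
    nine and four folds of eight, from divmod(86, 10) = (8, 6))."""
    shuffled = list(items)
    random.Random(seed).shuffle(shuffled)
    base, extra = divmod(len(shuffled), k)
    folds, start = [], 0
    for i in range(k):
        size = base + (1 if i < extra else 0)
        folds.append(shuffled[start:start + size])
        start += size
    return folds

folds = make_folds([f"img_{i:03d}.png" for i in range(86)])
print([len(f) for f in folds])  # [9, 9, 9, 9, 9, 9, 8, 8, 8, 8]

# Each model i trains on the other nine folds and validates on fold i:
for i in range(10):
    train = [img for j, f in enumerate(folds) if j != i for img in f]
    val = folds[i]
    assert len(train) + len(val) == 86
```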

Recall and Effect of Optical and Search Properties
The recall of image interpretation in this study was measured as the aggregate probability that each feature polygon would be correctly identified as a true positive by human participants or DNN models [93]. This decision was based on the combination of data types involved in the study (point data from the human analysts and the DNN, and polygons for the miconia reference data) and our treatment of detection counts, where multiple marks within the same contiguous area did not affect the totals, to allow a fair comparison between the results of the DNN and human interpretation. Because true positives and false negatives were defined based on detections within a polygon, false positives, i.e., detections outside of the defined polygons, lacked an equivalent definition for aggregation, preventing the calculation of precision. Therefore, the false positive rate, the number of false positive detections divided by the total number of detections, was used in lieu of precision [94].
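The two metrics above reduce to simple ratios; a minimal sketch, with hypothetical counts for one trial:

```python
def recall(true_pos: int, false_neg: int) -> float:
    """Fraction of annotated miconia polygons marked at least once."""
    return true_pos / (true_pos + false_neg)

def false_positive_rate(false_pos: int, total_detections: int) -> float:
    """Share of all detection points falling outside every buffered polygon;
    used in lieu of precision, which the point-vs-polygon scoring cannot define."""
    return false_pos / total_detections

# Hypothetical trial: 25 polygons, 20 marked, 30 detection points, 10 misses
print(recall(20, 5))                 # 0.8
print(false_positive_rate(10, 30))   # 0.333...
```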
We examined the effects of nine factors relating to optical and search properties on the detection recall for the human analysts and the DNN algorithms.Geometric factors (n = 2) included the relative size of the miconia leaf clusters and distance to nearest miconia neighbors.Optical factors (n = 6) included miconia contrast to surrounding background and signal to clutter ratio (SCR) for each of the three color channels.Image scrolling speeds during human analyst trials (n = 1) were also examined.
The relative size of the plant was quantified as the number of pixels contained in the delimiting miconia polygon (NM).The distance to the nearest miconia, D, was determined using a nearest neighbor analysis that measured the shortest distance between two polygons within an image using the NNJoin plugin (QGIS 3.4.15).
Optical characteristics were analyzed in square regions centered on each miconia feature polygon. The sides of these squares were twice as long as the characteristic dimension of the bounded polygon, which in turn was calculated as the square root of the total number of contained pixels within the polygon (Figure 2). Within each square region, pixels belonging to miconia were denoted with a subscript M and pixels belonging to the background (i.e., not belonging to the miconia pixels under consideration) were denoted with a subscript B.
Contrast values between miconia and the local background (CM) were calculated for each color channel as the sum of the squared differences between the digital value of each miconia pixel for that channel, $y_s$ for $s \in M$, and the mean digital value of the local background pixels, $\mu_B$ [95]:

$$C_M = \sum_{s \in M} (y_s - \mu_B)^2$$

where $\mu_B$ is the mean of the digital values for the background pixels ($s \in B$):

$$\mu_B = \frac{1}{N_B} \sum_{s \in B} y_s$$

The SCR for each color channel in the image was calculated as the ratio of the difference between the mean digital values of miconia and background pixels to the variance of the local background pixels for the corresponding color [96]:

$$\mathrm{SCR} = \frac{\mu_M - \mu_B}{\sigma_B^2}$$

where $\mu_M$ is the mean of the digital values for the miconia pixels ($s \in M$) and $\sigma_B^2$ is the variance of the background pixel values. Contrast and SCR calculated for each channel were designated by a subscript for the corresponding channel: red (R), green (G), or blue (B), respectively.
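Following the definitions above (contrast as the sum of squared differences between each miconia pixel value and the mean background value; SCR as the mean miconia-background difference divided by the background variance), a per-channel sketch with hypothetical 8-bit pixel values:

```python
from statistics import mean, pvariance

def contrast(plant_px, background_px):
    """C_M: sum of squared differences between each miconia pixel value
    and the mean background value, for one color channel."""
    mu_b = mean(background_px)
    return sum((y - mu_b) ** 2 for y in plant_px)

def scr(plant_px, background_px):
    """Signal-to-clutter ratio: mean miconia-background difference
    over the (population) variance of the background pixels."""
    return (mean(plant_px) - mean(background_px)) / pvariance(background_px)

# Hypothetical green-channel values inside and around one feature polygon
plant = [180, 175, 185, 190]
background = [90, 110, 100, 95, 105]
print(contrast(plant, background))        # 27350
print(round(scr(plant, background), 2))   # 1.65
```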
Statistical analysis of the effects of the nine factors (CP and SCR for each color band, size NM, nearest neighbor distance D, and image scrolling speed S) on detection was conducted with a multivariate analysis of variance (MANOVA) for each factor using R software [97]. Eta-squared (η2) was used to measure the effect size for each MANOVA factor. All dependent variables were determined to be normally distributed based on the Shapiro-Wilk test (p > 0.05). Mean separation was performed with Tukey's Honest Significant Difference test for interpretation by humans at the various scrolling speeds and by the DNN.

Cost Model
Cost models were constructed for workflows using human analysts or a DNN to perform invasive species detection from aerial imagery with the following base equation:

$$C_T = (C_V)_{FS} + (C_V)_S + C_{FX}$$

where $C_T$ is the total cost, $(C_V)_{FS}$ is the variable cost to conduct a UAS flight survey, $(C_V)_S$ is the variable cost to analyze the resulting imagery, and $C_{FX}$ is the fixed cost. The fixed cost for conducting UAS flight operations largely pertains to the upfront investment in and continued maintenance of aircraft and sensors, estimated here to be $6000 USD, which applies to both human and DNN interpretation. The DNN interpretation has added fixed costs for the human labor associated with generating image annotations (approximately 240 h at $25 h−1) and for the investment in and maintenance of computer workstations capable of performing automated image analyses, for a total of $15,000 USD. The variable cost to conduct a UAS flight was calculated as

$$(C_V)_{FS} = \frac{A_S}{I_W \cdot GSD \cdot (1 - O_S) \cdot v} \, C_L$$

where $A_S$ is the survey area, $I_W$ is the image width, $GSD$ is the ground sampling distance, $O_S$ is the proportional side overlap, $v$ is the flight speed, and $C_L$ is the cost of labor, set at $25 USD per hour. The variable cost to conduct image analyses with human analysts (HA) was calculated as

$$(C_V)_{S,HA} = \frac{A_S}{FOV_H \cdot GSD^2 \cdot S} \, C_L$$

where $FOV_H$ is the height of the field of view, equal to 500 pixels in these trials, and $S$ is the linear scrolling speed. The variable cost to conduct a DNN analysis was calculated as

$$(C_V)_{S,DNN} = \frac{A_S \, t_I}{I_W \cdot I_H \cdot GSD^2 \cdot (1 - O_S)} \, C_L$$

where $I_H$ is the image height and $t_I$ is the time to process an image, calculated based on the average time to process six images. While the DNN process is automated, we assume here that a human performs routine tasks and oversight until completion and therefore include the cost of labor ($C_L$) as well. Additional parameters are described in Table 3.
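The three variable-cost terms described above can be sketched as functions. The unit conventions (areas in m², GSD in m px−1, labor in USD h−1) and all example inputs other than the 500 px FOV, the 1.1 cm px−1 GSD, and the $25 h−1 labor rate are assumptions; the side overlap and per-image inference time t_I are placeholders for the Table 3 values.

```python
def flight_cost(area_m2, image_w_px, gsd_m, side_overlap, speed_m_s, labor_hr):
    """(C_V)_FS: labor cost to fly a survey, from effective swath and speed."""
    swath_m = image_w_px * gsd_m * (1 - side_overlap)
    hours = area_m2 / (swath_m * speed_m_s) / 3600
    return hours * labor_hr

def human_analysis_cost(area_m2, fov_h_px, gsd_m, scroll_px_s, labor_hr):
    """(C_V)_S,HA: area scanned per second is FOV width x scroll speed (in m)."""
    rate_m2_s = (fov_h_px * gsd_m) * (scroll_px_s * gsd_m)
    return area_m2 / rate_m2_s / 3600 * labor_hr

def dnn_analysis_cost(area_m2, image_w_px, image_h_px, gsd_m,
                      side_overlap, t_image_s, labor_hr):
    """(C_V)_S,DNN: per-image inference time over the unique area per image."""
    area_per_image = image_w_px * image_h_px * gsd_m ** 2 * (1 - side_overlap)
    n_images = area_m2 / area_per_image
    return n_images * t_image_s / 3600 * labor_hr

# Example: 100 ha (1,000,000 m^2) at the 78 px/s break-even scrolling speed,
# versus a DNN with an assumed 30% side overlap and 2 s per image.
print(round(human_analysis_cost(1_000_000, 500, 0.011, 78, 25), 2))
print(round(dnn_analysis_cost(1_000_000, 5280, 3956, 0.011, 0.3, 2.0, 25), 2))
```

For a given area, comparing these variable costs plus the respective fixed costs yields the break-even areas reported in the results.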

Effect of Optical and Search Properties on Recall
Seven of the nine factors significantly affected miconia detection by human analysts (p < 0.05; Table 4). The two factors with the greatest effects (based on magnitudes of η2, as indicated by asterisks) [98] were scrolling speed and relative size, which exhibited a non-linear association and an inverse relationship, respectively (Figure 5). Red and green contrasts (CM) and all three SCRs also had significant effects on human detection recall. Only three factors were significant for DNN detection efficacy: CM,R, SCRR, and SCRG (p < 0.05; magnitudes of η2 in Table 5); these were also significant for the human detections.

Cost Comparisons between Human and DNN Image Analyses
A simple exponential model, recall = 100 e−kS, was fit (R2 = 0.995) to the human analyst recall results to estimate the scrolling speed needed to match the DNN recall of 83.3%. This scrolling speed was determined to be 78 px s−1 and is equivalent to a search effort of 0.23 s m−2 with a FOV of 500 px and a GSD of 1.1 cm px−1. The existing DNN was determined to be more cost effective than a human search conducted at a scrolling speed of 78 px/s once the cumulative area searched exceeds 617.3 ha (Figure 6). If recall is sacrificed and search speeds of 100, 200, and 300 px/s are used, the cumulative area searched must exceed 793.6, 1606.6, and 2439.9 ha, respectively, for the DNN to be more cost effective.
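Assuming the simple exponential form recall = 100·exp(−kS) (the model form and the mean recalls come from the text; the fitting method below is one plausible choice, and it recovers a break-even speed close to the reported 78 px/s):

```python
import math

def fit_k(speeds, recalls):
    """Least-squares fit of k in recall = 100*exp(-k*S), via log-linear
    regression through the origin: log(recall/100) = -k*S."""
    num = sum(s * math.log(r / 100) for s, r in zip(speeds, recalls))
    den = sum(s * s for s in speeds)
    return -num / den

def speed_for_recall(k, target_recall):
    """Invert the model: S = -ln(target/100) / k."""
    return -math.log(target_recall / 100) / k

k = fit_k([100, 200, 300], [74, 60, 50])  # mean recalls from the human trials
s = speed_for_recall(k, 83.3)
print(round(k, 5), round(s))  # approx. 75-78 px/s depending on fitting choices
```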

Discussion
We tested the ability of human analysts and a custom DNN to detect the invasive miconia plant in visible-wavelength UAS imagery collected over complex canopy forest and found that the DNN outperformed the human analysts. While similar results have been reported in other image classification studies [91,99], we are not aware of any prior studies involving rigorous time-controlled human trials for detecting invasive species in high-resolution UAS imagery. The significance of optical contrast and SCR as factors for human recall agrees with previous studies [100], and recent work has further associated poor accuracies with low SCR for both humans and DNNs [101,102]. The lowland tropical forest canopy imaged in this study is a complex community of diverse species and functional groups, creating fully vegetated and highly cluttered backgrounds for detecting even a highly conspicuous plant such as miconia. The DNN's relatively high recall may be diminished in imagery collected in regions whose species composition differs from that of the training dataset. Therefore, additional training data for the miconia detection DNN, particularly image training sets with low red contrast and low red and green SCRs, would likely improve the robustness of the DNN. Additionally, alternative DNN architectures, such as EfficientDet [103,104], YOLOv5 [105], and Mask R-CNN [72,106,107], should be considered to improve accuracy and inference speed.
Developing a DNN with perfect accuracy in miconia detection is improbable and may not be necessary, based on the biology and life-history traits of the species. Strategically, management interventions must outpace invasion by eliminating miconia before it reaches maturity within 3-4 years of germination [32]. Seed bank germination is asynchronous, seeds can survive for multiple decades [32,33], and miconia can remain cryptic in the understory for some time before becoming visible from above. Thus, even with the capability of 100% target detection, there is an inherent commitment to repeated surveillance of an area to ensure extinction. However, UAS surveillance platforms integrated with an automated detection workflow could greatly enhance our ability to detect miconia and other species, as well as our understanding of biological invasions, with a constant stream of data providing posterior updates that improve predictability in forecasting management outcomes [108].
Comprehensive field intelligence on invasive species abundance and distribution is anything but routine. In reality, management programs struggle with budgetary decisions on how to proportionally allocate resources between detection and intervention. Miconia programs in Hawai'i often combine surveillance and intervention operations by treating targets as they are found. However, this remains operationally insufficient to meet the demands of gathering intelligence across large, remote landscapes with repeated measures, even with the most cost-effective options [32]. The parallel advancements of UAS mapping capability and artificial intelligence are surpassing human capacity in surveillance and may move invasive species management towards a better comprehension of the invasion problem, with rapid deliverables and more precise and effective interventions. We recognize that different species may require different amounts of training data to produce similar results. Recognizing those limitations, we believe this study establishes that DNN interpretation of aerial imagery provides a more effective path for invasive species detection at the landscape level than manual image interpretation, freeing up valuable human resources for management interventions and other activities.

Conclusions
UAS imagery can provide valuable intelligence for natural resource managers, but the current bottleneck of time and human resources required to exhaustively search through these images reduces the scalability of this approach. Automated classification of miconia with a deep neural network exhibited a higher degree of recall than any of our tested human search speeds and did not exhibit the bias toward large plants seen in human searches. This makes deep neural networks particularly appropriate for detection of incipient miconia populations, which tend to consist of small, sporadically located plants. In FY20, the Hawai'i Invasive Species Council funded projects to search 792,368 acres across the Hawaiian Islands. Implementation of a deep neural network for invasive plant detection can result in cost savings due to the substantially faster processing time compared to human searches of UAS imagery. Further improvements to the deep neural network, through advancements in architectures or the incorporation of additional training data, along with applications to other invasive species, will further advance the utility of UAS in natural resource management.

Figure 2 .
Figure 2. Example of human search showing true positive (green) and false positive (red) marked points in image. Yellow polygon delimits contiguous area of visible miconia (treated as a "plant"). Green buffer corresponds to area of positive detection based on average characteristic length of polygons. Blue squares are buffers with sides equal to twice the characteristic length (square root of the enclosed area) of the enclosed polygon. Pixels within the yellow boundary correspond to the plant subset, P, while pixels within the blue boundary but outside of the yellow boundary correspond to the background subset, B.

Figure 3 .
Figure 3. Flow chart of experimental methodology. Following image acquisition, images used for training and validation of the DNN were annotated with bounding boxes of individual miconia leaves, while the test set used for final evaluation was annotated with polygons delimiting contiguous areas of miconia.

Figure 4 .
Figure 4. Frequency distribution of recall (portion of total miconia polygons in analyzed imagery identified within individual recall % bins of human subjects or DNN models) for each search type, with mean recall values (vertical dashed lines). An increase in the search speed of human searches results in a smoothing of the frequency distribution as recall diminishes. The deep neural network has a bimodal distribution, either identifying the plant with high recall or failing to identify it completely. Inset: Results of Tukey's Honest Significant Difference test.

Figure 5 .
Figure 5. Non-linear regression of relationship between detection recall by human participants and relative size of miconia plant and search speed.

Figure 6 .
Figure 6. Linear cost analysis of implementing deep neural network (DNN) and human searches at varied search speeds for miconia.

Table 1 .
Conditions during image acquisition at sites A-C.

Table 2 .
Hyperparameters for RetinaNet training used in this study.

Table 3 .
Values for linear cost analysis of human and deep neural network searches of UAS imagery.

Table 4 .
ANOVA table of factors (NM, size; S, speed of search; CP,R, contrast in red channel; CP,G, contrast in green channel; CP,B, contrast in blue channel; SCRR, signal to clutter ratio in red channel; SCRG, signal to clutter ratio in green channel; SCRB, signal to clutter ratio in blue channel; D, distance to nearest miconia plant) affecting human detection recall for miconia. * indicates significant factors with largest values of η2.

Table 5 .
ANOVA table of factors (NM, size; S, speed of search; CP,R, contrast in red channel; CP,G, contrast in green channel; CP,B, contrast in blue channel; SCRR, signal to clutter ratio in red channel; SCRG, signal to clutter ratio in green channel; SCRB, signal to clutter ratio in blue channel; D, distance to nearest miconia plant) affecting DNN detection recall for miconia. * indicates significant factors with largest values of η2.