Mapping temperate forest phenology using tower, UAV, and terrestrial-based sensors

Phenology is one of the most ubiquitous fingerprints of climate change on our ecosystems, making monitoring of the spatiotemporal patterns of vegetation phenology critical. A wide range of sensors have been used to monitor vegetation phenology, and sensor point of view and resolution can potentially impact estimates of phenology. We compared three different sensors from three different remote sensing platforms (a UAV-mounted RGB camera; an under-canopy, upward-facing hemispherical camera with red, green, and NIR bands; and a tower-mounted RGB PhenoCam) to estimate the spring phenological transition in a mixed-species temperate forest in central Virginia, USA. Our study had two objectives: 1) to compare above- and below-canopy inference of canopy greenness (green chromatic coordinate and normalized difference vegetation index) and canopy structural attributes (leaf area and gap fraction) by matching under-canopy hemispherical photos with high-spatial-resolution (0.03 m) drone imagery to find the appropriate spatial coverage and resolution for comparison; and 2) to compare how each sensor performed in estimating the timing of the spring phenological transition. We find that a spatial buffer of 20 m radius for UAV imagery is most closely comparable to under-canopy imagery in this system. Sensors and platforms agree within +/- 5 days on when canopy greenness stabilizes from the spring phenophase into the growing season. This work has implications for pairing UAV imagery with both tower-based observation platforms and plot-based studies (e.g. long-term monitoring, existing research networks, permanent plots).


Introduction
Spring phenology in temperate forests serves as a leading biological indicator of the near-term impacts of climate change [1], exerting strong controls on photosynthesis [2] and thus driving primary productivity and carbon cycling. Warming trends have resulted in both an advancement of the onset of the spring vegetation activity phenophase, or leaf-out, and an extension of the growing season in many locations [3,4]. Quantifying changes in the timing of phenological patterns that accompany anthropogenic climate change is necessary to constrain uncertainties in modelling the earth system [5]. Near-surface optical remote sensing methods are well suited for phenological observation, the most widely used being RGB cameras, i.e. cameras that capture energy in the red, green, and blue bands. RGB cameras used for phenological observation can view the canopy from above (from the air, via unoccupied aerial vehicles, UAVs), from below (tripod- or terrestrial-based, facing upwards into the canopy), or at oblique view angles (often tower mounted, looking across the top of the canopy). Comparisons among optical sensors have been made and are robust [6][7][8][9][10], but cross-platform validation of how view angle or observation perspective influences canopy-level phenological observation is necessary to inform scaling and synergistic efforts. UAV-based sensors offer substantial potential to upscale observations to the stand or landscape level at finer resolutions than spaceborne platforms (e.g. MODIS, Landsat), but this potential must be informed by in situ measurements to be fully realized: remote sensing methods are incredibly powerful, but are most useful with ground validation.
UAV-mounted cameras are becoming more common in phenological studies [9,11], but very few studies have focused on forested systems [8,12-15]. Forest-based UAV phenological studies have enabled analysis of tree-level phenology in temperate [13] and tropical [15] systems. UAV phenology studies bridge an important gap in spatial resolution and extent, while offering substantially higher temporal resolution than can realistically be garnered with airborne mapping. UAV imagery can be collected over areas on the order of hectares in just a few hours, providing an advantage in spatial extent over stationary RGB imagery [16,17]. UAVs do, however, require licensed operators, may be restricted by local ordinances or laws, and must almost universally remain within line-of-sight. They are also subject to weather and battery constraints. The nadir (downward-facing) view angle of UAV-based sensors may introduce obfuscation and errors that are not associated with the oblique angle of PhenoCam data. Despite these limitations, multi-temporal UAV-based studies in forest systems can provide detailed insight into the spatial drivers of broad-scale phenological phenomena and must be evaluated in a range of ecosystems.
RGB cameras are the most prevalent near-surface remote sensing method used to observe forest phenology. Indices of plant greenness, such as the Green Chromatic Coordinate (GCC), can be derived from standard RGB imagery [18]. This methodology is implemented widely by the PhenoCam network, a network of over 400 cameras across North America (as of 2020), each uploading images to a server every 30 minutes [19]. Data from these cameras are made publicly available free of charge, as are open-source software solutions to interpret and analyze them. Repeat RGB imagery, such as that of the PhenoCam network [19], has the advantage of consistency and high temporal frequency. The images collected can readily be compared to one another, and end-users know the data always cover the same areas of the canopy from image to image. However, PhenoCam data provide snapshots of only one area of the canopy, which limits the ability to assess site heterogeneity.
Below-canopy observation is often conducted with RGB cameras outfitted with hemispherical lenses, mounted on tripods, and facing upwards into the canopy [20]. This method has the advantage of isolating the canopy, which removes interference from the ground or understory vegetation. These camera-based methods of capturing phenology are limited by user mobility, image quality, sun angle, and sky conditions; on clear-sky days, early morning or late evening hours are preferable for optimal sampling. The upward-facing nature of the camera, and its isolation of the canopy, allows for estimation of additional forest structure metrics such as leaf area index (LAI) and canopy gap fraction, structural metrics that are not inferable from the other sensors surveyed [21]. Cameras with bands in the near-infrared (NIR) allow for direct estimates of the normalized difference vegetation index (NDVI), a common index in spaceborne remote sensing that relates the red and NIR channels of imagery to estimate plant greenness and vitality [22,23].
A major difference between below-canopy imagery collected with tripod-based hemispherical cameras and many other remote sensing platforms is that tripod-based cameras capture a point measurement of a limited area of the canopy. While providing additional information about canopy structure not directly available from either PhenoCams or UAV imagery, tripod-based cameras are limited in space and time, with images that capture on the order of 10 to 100 m² of canopy area, compared to 1000s of m² or more in the case of tower-mounted PhenoCams, or the near-continuous imagery on the order of hectares that can be provided by UAVs [8]. The additional canopy structural information provided by tripod-based cameras, if well correlated with other platforms when co-located imagery is taken, could be scaled from the plot to the stand via UAV imagery. Given the differences in spatial extent, resolution, and perspective, however, it is necessary to quantify the resolution at which UAV imagery is most comparable to hemispherical imagery. Where pairing systems is possible, the benefits are additive.
The purpose of this study is to assess three approaches to estimating phenology: 1) a terrestrial-based, upward-facing DSLR camera commercially designed for vegetation studies; 2) a UAV-mounted RGB camera; and 3) a tower-mounted RGB camera specifically designed for phenological study as part of the PhenoCam network. Each sensor has unique advantages and disadvantages based on canopy perspective, spatial extent, and operating conditions, as outlined above. Comparisons among these sensors are necessary to understand what elements of phenology are represented from any specific perspective, but also how combining these measurements may provide more representative and synergistic phenological estimates. We tested each of these sensors in a mixed temperate forest across the spring phenophase transition to full canopy closure in 2018 to answer the following questions: 1. What is the appropriate spatial extent at which to compare below-canopy imagery (tripod-based, hemispherical photography) to above-canopy imagery (UAV-based imagery)? 2. Do above- and below-canopy measures estimate similar phenological transitions as compared to continuous PhenoCam, MODIS, and Landsat phenological observations?

Site Description
The Pace Estate (37.9229, -78.2739) is a mixed temperate secondary forest located near Palmyra, Virginia, approximately 20 miles east of Charlottesville. The forest has an average stem density of 1813 trees ha⁻¹ and is populated by Acer rubrum, Quercus alba, Fagus grandifolia, Pinus virginiana, and Nyssa sylvatica (Chen, 2011). Precipitation averages 1241 mm yr⁻¹ with a mean annual temperature of 13.9°C [24,25].

Above-Canopy Measurements
We used a Mavic Pro (DJI; Shenzhen, China) outfitted with the stock 12.3 MP RGB camera, with a 35 mm equivalent lens, an ISO range of 100-1600, and <1.5% distortion, to capture orthoimagery (Fig. 1). We used DroneDeploy (DroneDeploy; San Francisco, CA) to plan the flight path around the tower with 88% front and side overlap, at a flight altitude of 100 m. The white balance of the camera on the drone was set to a fixed color temperature of 6000 K. After data collection, we used PhotoScan (Agisoft; St. Petersburg, Russia) to create orthomosaic images. These data were rasterized and clipped to a rectangular bounding box of approximately 5.5 ha around the plots (37.9225, -78.276 to 37.9325, -78.275). We derived Green Chromatic Coordinate (GCC) values from the red, green, and blue channels of the UAV orthoimagery. GCC is calculated using the following equation (Browning et al. 2017):

GCC = DN_G / (DN_R + DN_G + DN_B)

where DN represents the digital number (0 to 255) for the red (DN_R), green (DN_G), and blue (DN_B) channels. GCC is less sensitive to differences in camera properties or canopy illumination [26]. Orthoimage reconstruction was not fully successful for the May 7, 2018 (DOY 127) orthoimage, resulting in approximately half of the cropped scene being removed (including 4 camera plots).
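The per-pixel GCC computation can be sketched as follows. This is an illustrative Python/NumPy sketch with names of our own choosing; the study's orthomosaic processing was done in dedicated software.

```python
import numpy as np

def gcc(dn_r, dn_g, dn_b):
    """Green Chromatic Coordinate: GCC = DN_G / (DN_R + DN_G + DN_B).

    Accepts scalars or arrays of 8-bit digital numbers (0-255) and
    returns NaN wherever all three channels are zero.
    """
    r, g, b = (np.asarray(c, dtype=float) for c in (dn_r, dn_g, dn_b))
    total = r + g + b
    safe = np.where(total > 0, total, 1.0)  # avoid dividing by zero
    return np.where(total > 0, g / safe, np.nan)

# A pure-green pixel has GCC = 1; a neutral grey pixel has GCC = 1/3.
print(float(gcc(0, 255, 0)))      # 1.0
print(float(gcc(128, 128, 128)))  # 0.3333...
```

Because GCC normalizes the green channel by total brightness, a uniformly brighter or darker exposure of the same scene yields the same value, which is the robustness to illumination noted above.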

Below-Canopy Measurements
Preprints (www.preprints.org) | NOT PEER-REVIEWED | Posted: 12 July 2020 doi:10.20944/preprints202007.0273.v1

We used a 24 megapixel Sony 6000 DSLR Compact 2571 camera (Regent Instruments; Quebec, QC, Canada) with a 180° hemispherical lens and a maximum field-of-view of 90° to capture hemispherical canopy imagery (Fig. 1). The blue channel of the camera is replaced with a near-infrared channel, which allows direct calculation of NDVI:

NDVI = (DN_NIR - DN_R) / (DN_NIR + DN_R)

The camera was mounted on a self-leveling tripod with the lens 1 m from the ground, facing upwards into the canopy. Five images were taken at each of 8 plot locations (at plot center, and 10 m off center in each cardinal direction) approximately weekly from late April until late May. For the first week of June, only images taken at plot center are available. We used image sets for analysis only during periods where UAV-based imagery was taken concurrently. We estimated leaf area index (LAI), NDVI, and gap fraction for each image using WinSCANOPY (Regent Instruments; Quebec, QC, Canada) with a hemispherical image radius of 1925 px (total image size is 6000 x 4000 px), based on color-based pixel classification, an internal software algorithm more tolerant of variations in sky conditions that can be used on images with dark blue sky or cloud cover. Color-based classification palettes were established for each measurement period based on user selection.
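The NDVI calculation enabled by the modified camera follows the same per-pixel pattern as GCC. The sketch below is illustrative Python/NumPy; in this study WinSCANOPY handled the image processing.

```python
import numpy as np

def ndvi(nir, red):
    """NDVI = (NIR - Red) / (NIR + Red) per pixel; NaN where both are zero."""
    nir, red = np.asarray(nir, dtype=float), np.asarray(red, dtype=float)
    denom = nir + red
    safe = np.where(denom > 0, denom, 1.0)  # avoid dividing by zero
    return np.where(denom > 0, (nir - red) / safe, np.nan)

# Healthy foliage reflects strongly in the NIR relative to the red band,
# so green canopy pixels sit well above zero.
print(float(ndvi(200, 50)))  # 0.6
```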

Tower-Based Measurements
For oblique-angle imagery of the canopy, we used the RGB camera mounted on the Pace eddy covariance tower (Fig. 1). We accessed PhenoCam data with the phenocamr package in R 3.6.2 (R Core Team) for site "pace", downloading and analyzing 3-day time interval data in accordance with the phenocamr documentation, using the suggested package workflow: 1) data expansion; 2) outlier detection; 3) smoothing; 4) phenophase calculation. Full details on the process are available in the phenocamr vignette included in the package. This workflow creates estimates of the "rising" phenophase based on PhenoCam-derived GCC values (GCCPC).

Satellite-based NDVI
To provide a baseline against large-scale phenological studies, we incorporated two common satellite-based NDVI products into our analysis: MODIS and Landsat 8. For each, we extracted NDVI from analysis-ready composite datasets available on Google Earth Engine, specifically the MOD13Q1 V6 Terra Vegetation Indices 16-Day Global 250 m product and the Landsat 8 Collection 1 Tier 1 8-Day NDVI Composite. We compiled all available scenes for both products as a time series of the mean and standard deviation of pixel values within the study extent. Statistics for edge pixels were computed with an area-based weighting of pixel values partially within the study extent.
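The area-weighted statistics for edge pixels can be sketched as below. This is an illustrative Python/NumPy sketch with made-up values; in the study these statistics were computed by Earth Engine's weighted reducers.

```python
import numpy as np

def weighted_stats(values, weights):
    """Mean and standard deviation of pixel values, each weighted by the
    fraction of that pixel's area falling inside the study extent."""
    v, w = np.asarray(values, dtype=float), np.asarray(weights, dtype=float)
    mean = np.sum(w * v) / np.sum(w)
    var = np.sum(w * (v - mean) ** 2) / np.sum(w)
    return mean, np.sqrt(var)

# Two interior NDVI pixels (weight 1) and one edge pixel half inside (0.5).
mean_ndvi, sd_ndvi = weighted_stats([0.8, 0.7, 0.2], [1.0, 1.0, 0.5])
print(round(mean_ndvi, 4))  # 0.64
```

Down-weighting edge pixels keeps partially covered cells (which often mix forest with road or field) from biasing the extent-level mean.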

Statistical Analysis
We compared UAV data to below-canopy imagery using two different approaches. For the first approach, only images taken at plot center for each period were used; these data are noted as center in all text, tables, and figures. For the second approach, we used all available images per plot, per measurement period, averaged to make a plot-level mean of NDVI, LAI, and gap fraction; these are denoted as composite in all text, tables, and figures. Note that this means no June imagery is used in composite analyses. For each measurement period, we averaged GCCUAV at increasing buffer sizes, starting at a 5 m radius from plot center and increasing by 1 m for each iteration to a maximum of 50 m radius. The range of 5 to 50 m radius was chosen based on assumptions of how much canopy the hemispherical camera can see given its focal length. Standard forestry inventory plots rarely exceed 20 m in radius, making this range of comparison reasonable, even when considering that the canopies of trees whose boles are situated within a plot can extend up to 5 m outside the plot boundaries [20]. This was done for each of the 8 plots, for each of the 5 measurement periods, resulting in a total of 1147 clipped raster grid cell plots of mean GCCUAV. Note that for May 07, 2018 (DOY 127), only 5 plots were used due to the orthoimage error (see Above-Canopy Measurements). Linear regression was used to evaluate relationships between buffer size of GCCUAV and below-canopy estimates of NDVI, LAI, and gap fraction, with goodness-of-fit determined from the coefficient of determination (R²), directionality and internal calibration from the slope, and uncertainty quantified as root mean square error (RMSE).
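The buffer sweep and per-buffer regression can be sketched as follows. This is an illustrative Python/NumPy sketch with toy data; the study's analysis was done in R, and the function and variable names here are our own.

```python
import numpy as np

def circular_mean(raster, res, cx, cy, radius):
    """Mean of raster cells whose centers fall within `radius` metres of
    the plot center (cx, cy); `res` is the cell size in metres."""
    ny, nx = raster.shape
    rows, cols = np.mgrid[0:ny, 0:nx]
    dist = np.hypot((cols + 0.5) * res - cx, (rows + 0.5) * res - cy)
    return raster[dist <= radius].mean()

def fit_stats(x, y):
    """Slope, R^2, and RMSE for an ordinary least squares line y ~ x."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    return slope, 1.0 - resid.var() / y.var(), np.sqrt(np.mean(resid ** 2))

# Toy 12 x 12 m GCC raster at 0.03 m resolution; in the study, radii
# from 5 to 50 m (1 m steps) were swept around each of 8 plot centers.
gcc_raster = np.full((400, 400), 0.40)
mean_gcc = circular_mean(gcc_raster, 0.03, 6.0, 6.0, 5.0)

# Regress a below-canopy metric (e.g. NDVI) on the per-buffer mean GCC
# and track R^2, slope, and RMSE as the buffer grows.
slope, r2, rmse = fit_stats([0.30, 0.35, 0.40, 0.45], [0.2, 0.4, 0.6, 0.8])
```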
95% confidence intervals were determined using bootstrapping via the "boot" package in R [27,28]. The statistical analysis of buffer size for both LAI and gap fraction was then used to inform the scaling of both to the entire UAV scene, with the appropriate averaging resolution for UAV data taken from the statistical models deemed most appropriate for relating LAI and gap fraction to GCCUAV, and resolution scaling done with bilinear sampling via resample using the rgdal [29] and raster [30] packages in R 3.6.2 [31]. This was not done for NDVI, as GCCUAV and NDVI are already similar and the intent is to test the certainty with which LAI and gap fraction may be inferred from UAV imagery.
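A percentile bootstrap for a regression slope, analogous to what R's boot package produces, can be sketched like this. It is a Python/NumPy sketch with synthetic data, and the case-resampling scheme is an assumption on our part.

```python
import numpy as np

def bootstrap_slope_ci(x, y, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap 95% CI for an OLS slope via case resampling."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    rng = np.random.default_rng(seed)
    n = len(x)
    slopes = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, size=n)  # resample rows with replacement
        slopes[i] = np.polyfit(x[idx], y[idx], 1)[0]
    return np.quantile(slopes, [alpha / 2.0, 1.0 - alpha / 2.0])

# Synthetic GCC-like predictor with a true slope of 2 plus small noise.
x = np.linspace(0.30, 0.45, 40)
y = 2.0 * x + np.random.default_rng(1).normal(0.0, 0.01, size=40)
lo, hi = bootstrap_slope_ci(x, y)  # interval should bracket roughly 2
```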
To evaluate how representative plot averages of GCCUAV were of the entire forest, means of whole scenes were compared against plot-level means using root mean square error. We used breakpoint analysis with the segmented package [32] in R 3.6.2 to estimate when each method achieved canopy closure.
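A minimal stand-in for segmented's breakpoint estimate is a grid search over candidate breakpoints of a continuous two-segment linear model. The Python/NumPy sketch below uses synthetic greenness data; segmented itself uses an iterative estimator rather than this brute-force search.

```python
import numpy as np

def one_breakpoint(doy, value):
    """Breakpoint (day of year) minimizing the residual sum of squares of
    a continuous piecewise-linear model with one slope change."""
    doy, value = np.asarray(doy, dtype=float), np.asarray(value, dtype=float)
    best_bp, best_rss = None, np.inf
    for bp in np.unique(doy)[1:-1]:  # interior candidates only
        # Design matrix: intercept, slope, and slope change after bp.
        X = np.column_stack([np.ones_like(doy), doy,
                             np.clip(doy - bp, 0.0, None)])
        beta, rss, rank, _ = np.linalg.lstsq(X, value, rcond=None)
        rss = rss[0] if rss.size else np.sum((value - X @ beta) ** 2)
        if rss < best_rss:
            best_bp, best_rss = bp, rss
    return best_bp

# Synthetic GCC rising steadily until DOY 124, then flat (canopy closure).
doy = np.arange(110, 140)
gcc = np.where(doy < 124, 0.34 + 0.01 * (doy - 110), 0.48)
print(one_breakpoint(doy, gcc))  # 124.0
```

The recovered breakpoint is the estimated date at which greenness stabilizes, which is how canopy closure is identified from each sensor's time series.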

Above-and Below-Canopy Comparisons
Linear regression results showed that NDVI explained a large portion of the variance in GCCUAV for both center and composite below-canopy imagery, with R² values at 20 m of 0.88 for center imagery and 0.92 for composite imagery (Fig. 2a). R² for both center and composite imagery continued to rise with increasing buffer size before stabilizing around 35 m. RMSE values dropped consistently with buffer size as well, with RMSE at the 20 m buffer size of 0.12 for center imagery and 0.08 for composite imagery (Fig. 2b). Regression slopes were consistently positive and also rose with increasing buffer size (Fig. 2c). While composite imagery showed increased explanatory power, confidence intervals show this effect is small to insignificant, though confidence intervals do become more constrained after ~15 to 20 m buffer sizes.
There were only small differences in LAI between center and composite imagery, though confidence intervals were consistently large as buffer size increased. At 20 m, R² was 0.80 for both center and composite imagery, while RMSE was 0.56 for center imagery and 0.53 for composite imagery. Slopes for both were nearly identical and increased steadily with buffer size. Peak agreement between above- and below-canopy imagery, based on R², occurred at the 14 to 16 m buffer sizes, showing that increasing the averaging area of above-canopy GCCUAV did not add additional information as it did in the case of NDVI. For scaling purposes, the linear regression fit at the 15 m buffer size was chosen as the scaling function. We have included the iterative linear regression statistics for gap fraction (Fig. 2), but for scaling analysis chose to fit a non-linear function that better fit the data using the nls function in R 3.6.2. We chose the 20 m buffer size to scale GCCUAV from the plot to the entire scene, with the resulting equation:

gap fraction = 65354.32 * exp(-21.55 * z)

where z is the 20 m mean GCCUAV. The residual standard error from the nls fit is 3.20, compared to 4.87 from the linear regression, demonstrating the better fit of the nls approach.
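Because the fitted curve is a pure exponential and gap fraction is strictly positive, the coefficients can be recovered, illustratively, by ordinary least squares on the log scale. The Python/NumPy sketch below uses noise-free toy data generated from the reported fit; the study itself used R's nls, which fits on the original scale.

```python
import numpy as np

# Toy gap-fraction data generated exactly from the reported fit,
# gap = 65354.32 * exp(-21.55 * z), over a plausible GCC range.
z = np.linspace(0.30, 0.45, 20)
gap = 65354.32 * np.exp(-21.55 * z)

# log(gap) = log(a) - b * z is linear, so OLS recovers both coefficients.
slope, intercept = np.polyfit(z, np.log(gap), 1)
b_hat = -slope             # decay rate, ~21.55
a_hat = np.exp(intercept)  # scale term, ~65354.32
```

On real, noisy data a log-scale fit weights errors multiplicatively, which is one reason to prefer an nls fit on the original scale, as used in the study.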

Estimating canopy closure
The transition from spring phenophase into canopy closure based on breakpoint analysis largely agrees among sensors, other than Landsat 8 and MODIS. GCCUAV (full scene), GCCUAV (20 m buffer), NDVI, gap fraction, and LAI all show canopy closure occurring around DOY 124, with standard errors that overlap among the variables, indicating no operational difference. GCCPhenoCam, however, shows canopy closure occurring on DOY 131, nearly a week after the above- and below-canopy sensors.

Discussion
Here, by comparing three different but similar sensors (a UAV-based RGB camera, a tower-based PhenoCam, and upward-facing NDVI hemispherical imagery), we show strong, positive correlations in the rate and amount of canopy greening that are consistent across all sensors, regardless of the perspective from which they view the forest. We show there are important differences in the spatial resolution at which the canopy structural variables derived from below-canopy hemispherical imagery (LAI and gap fraction) can be estimated using UAV imagery, but we demonstrate that, given these considerations, these structural attributes can be inferred in temperate broadleaf systems with a high degree of confidence. Our analyses show that all three sensor platforms we considered approximate phenological indicators well, but above- and below-canopy methods estimate canopy closure to occur earlier than tower-mounted PhenoCams and later than MODIS-derived NDVI.

Scaling above-and below-canopy imagery
We find there are optimal, but differing, area sizes at which GCCUAV best approximates below-canopy NDVI and the structural attributes LAI and gap fraction. As the averaging area for GCCUAV increases, increasing variance in the NDVI/GCCUAV relationship is resolved, as evidenced by increasing values of R² with an asymptote at around 35 m buffer size, accompanied by a marked reduction in the size of the confidence intervals around 20 m. Composite imagery consistently outperformed center-only imagery, though the effect size appears to be small. For LAI, however, there was differing spatial agreement, with the 14 to 16 m buffer sizes resolving the most variance, followed by a decrease at greater buffer sizes. There is also no apparent advantage to composite imagery over center-only imagery, indicating that center-only imagery is enough to scale LAI via UAV imagery within these systems, if the sampling number is high enough. For the 5.5 ha area we sampled, the eight plots sampled repeatedly at one- to two-week intervals resolve a large portion of the variance. The strong agreement between GCCUAV and both LAI and gap fraction is sufficient to scale these findings via the UAV-based metric. Like Keenan et al. [33], we observed non-linear relationships between GCCUAV and gap fraction (S1), which indicates saturation of greenness at high levels of leaf area and low levels of gap fraction. Comparing our late April to early June measurements, we see consistent greening lead to a consistent decrease in gap fraction. These relationships are strong for this forest type, but it is unknown how a more heterogeneous forest (e.g. patchy disturbance, greater species diversity) would compare. Given our interpretation of the data, we suggest that LAI would be more variable than gap fraction, or even NDVI/GCC, in a more variable forest than our mesic, mixed forest.
We also show that each sensor approximates the dormant-to-growing-season transition to full canopy closure within +/- 2 days, which is in line with the MODIS estimate (though the MODIS date is slightly earlier); PhenoCam estimates are 5 days later, a finding concurrent with other work [33]. Landsat 8 provides the latest canopy closure date, nearly 15 days later than the other sensors. Understanding and quantifying the magnitude of differences in both the signal and the noise among the various platforms and sensors used to remotely sense forest structure is imperative in order to make comparisons among studies and systems. We show that a 20 m radius buffer is suitable for comparison between above- and below-canopy measures of phenology, and potentially structural attributes as well. This is an important consideration for pairing UAV data with plot-based studies.
The potential for UAVs to scale plot-based research from the micro-to the mesoscale is enormous, and this work adds even more support to a growing, robust body of research that underlines this point [16,17,34].
NDVI and GCC tell similar stories about the canopy, but we acknowledge they are different variables. The periods of greatest disagreement between GCC and NDVI typically occur at the end of the growing season, when GCC tends to decline sooner than NDVI due to changes in pigment concentrations [33,35]. This analysis may be particularly useful for applications where plot-based research can be augmented by UAV data. While there are geometrical considerations that may play a part when comparing hemispherical imagery to UAV imagery (e.g. circular plot areas, including how buffer size and area were calculated), the agreement we see among sensors in this system is encouraging for scaling from below-canopy, to UAV, and potentially even to satellite imagery.
There are often many different methods that can be employed to estimate a given variable of interest. For example, in forests, leaf area is a structural parameter of biophysical importance that can be estimated in many different ways, including optically (hemispherical camera, surface reflectance, light absorption), from active remote sensing (lidar) [36-39], or empirically [40]. The methods used to estimate leaf area can give variable results even within the same stand or plot [41]. These differences arise from methodological assumptions, such as statistical artifacts when leaf area is estimated using light absorption methods, which saturate at higher ranges [42,43], or interference when parsing leaf and wood values [44,45]. Many methods are built on estimating gap fraction, but different assumptions based on the Beer-Lambert law can change estimates. Depending on sensor type and perspective, methodological assumptions can similarly impact estimates in phenological studies.

Temporal Resolution
Temporal resolution was directly affected by the autonomy of data collection in this study: from highest to lowest observation frequency, these were PhenoCam, terrestrial camera, UAV, and satellite data. The high temporal resolution of PhenoCams makes these data uniquely powerful for examining long-term patterns and trends. For example, the PhenoCam located at the Pace Estate was installed on March 9, 2017 and had collected 43,254 images as of June 19, 2020. Further, there are over 400 PhenoCams collecting similarly large datasets [19]. Neither terrestrial nor UAV-mounted cameras can approach these temporal resolutions, largely because each of those platforms requires an operator, whereas PhenoCams do not. Even when UAVs can be deployed at high frequency, data volume, orthorectification, and post-processing become major bottlenecks in the processing pipeline. Additionally, terrestrial-based methods, such as tripod-mounted cameras, must be physically moved from location to location during a sampling period, adding additional effort and time. Satellite imagery partially mitigates these issues but is limited by repeat time, a problem that may be solved with the development of CubeSat constellations capturing daily, high-resolution imagery.

Spatial Resolution
Spatial resolution was highest over the largest extent for the UAV, followed by the PhenoCam, Landsat, terrestrial camera, and MODIS. UAVs provide the greatest spatial coverage of any of the sensors we surveyed and do so at high resolution; the UAV imagery we examined had a spatial resolution of 0.03 m and a spatial extent of ~105 ha. The higher resolution and greater spatial extent of UAV imagery allow for greater consideration of structural heterogeneity in the system than either terrestrial imagery or PhenoCams. PhenoCams are fixed and take the same image repeatedly, but the spatial extent of that image depends on the height of the camera, its azimuth, and, in part, the system it is targeting: cameras mounted on towers above forests tend to have greater spatial coverage than cameras focused on prairies, grasslands, or similar low-stature ecosystems [18]. Extending coverage would require additional cameras and additional infrastructure at a site. For operator-dependent platforms, the number of individual images that can be acquired is limited by cost, travel time, safety, terrain, and weather. We have focused on using derived values from the phenocamr package to estimate rates and amounts of green-up in this analysis, as this is a fairly plug-and-play, off-the-shelf approach with high-quality output that can be employed by many researchers, even those with limited remote sensing experience. However, additional means of analysis (see the phenopix package in R [46]) can be used to examine within-scene variance, which can increase the utility of these data and may alleviate some of these concerns.

Trends, Transition dates
Transition times agree within +/- 5 days among the sensors we surveyed. We are limited in some inferences from our dataset, as we do not have earlier canopy imagery from either our UAV or terrestrial sensors. We also acknowledge that piecewise regression may inform these results differently than the sigmoidal curves typically fit to these data; given the absence of earlier (e.g. March/April) data, we could not use that approach. Thus, we can only assess when the spring transition stabilizes, rather than estimating true start-of-season dates. We assume that the rather homogeneous nature of the forest canopy observed contributed to this convergence. It is likely that in forests that are more heterogeneous, whether due to age, disturbance, or species composition, the uncertainty around this convergence would be inflated.

Future UAV applications for high-resolution phenology
While spaceborne remote sensing drastically altered how we perceive and quantify the earth system, UAVs are continuing this revolution by democratizing remote sensing. As both UAV and sensor technology decrease in price and increase in capability, the adoption and application of UAV-based remote sensing will broaden, providing even researchers of modest means with powerful tools. We envision daily, automated drone data collection that can provide high-resolution imagery, mapping phenology and structure at high temporal and spatial resolution. For field sites with extensive infrastructure (e.g. LTER, NEON, AmeriFlux sites), UAVs offer unique potential to expand the impact of existing research.

Conclusions
While we conclude that these three near-surface approaches (UAV, terrestrial camera, and PhenoCam) provide similar estimates of canopy greening, each sensor has distinct advantages and disadvantages. As in many instances in science, the question determines the instrument. We surveyed three sensors on three different platforms that afforded three different views of the canopy. UAV sensors provide superior spatial coverage and resolution, but require special training and permitting, and often proprietary software. PhenoCams provide high temporal resolution, are networked through the PhenoCam network, have consistent protocols and analysis pipelines, and are comparable across multiple environments. They do, however, require existing site infrastructure (internet access, tower mounts), and their stationary nature sacrifices spatial coverage. In forests, the oblique canopy view angle of PhenoCams largely avoids image interference from the ground or sky, isolating the canopy for analysis. Terrestrial-based cameras allow for a degree of canopy isolation that the other sensors do not, which gives the user the ability to infer additional structural information beyond greenness alone; they are limited by small spatial coverage and a high degree of required user attention. Each sensor nonetheless provides robust estimates of canopy phenology with broad utility in ecology and remote sensing studies. The three sensors surveyed complement one another well, together providing additional information about canopy phenology and, to some degree, canopy structure. UAVs offer a means to democratize remote sensing for many researchers, scientists, and practitioners.