1. Introduction
Identification of spatial variation in leaf canopy density is important for crop management and for accurate biomass estimation. Within viticulture specifically, recognizing such disparities gives vineyard managers the opportunity to examine and address spatial variability by adjusting the management scheme, with the potential of improving the crop [1]. Vine canopy density is vital to the protection and production of high-quality winegrapes. Moderate canopy density is typically desired, depending on the time of the growing season, specific location, and grapevine varietal [2]. Passive remote sensing datasets such as aerial and satellite imagery of vineyard canopy can successfully identify such variability in canopy density, and subsequent crop health, within vineyard blocks [3–6]. Calculated vegetation indices, namely the normalized difference vegetation index (NDVI; [7]), correlate strongly with changes in canopy density as measured by leaf area index (LAI; the ratio of leaf surface area to ground surface area, following [8]). More recently, other datasets, such as those provided by active remote sensors, are beginning to play a role in such viticultural research.
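For reference, NDVI contrasts near-infrared and red reflectance on a per-pixel basis; the sketch below (with illustrative band values, not data from the cited studies) shows the standard computation:

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Normalized difference vegetation index: (NIR - Red) / (NIR + Red)."""
    nir = nir.astype(float)
    red = red.astype(float)
    denom = nir + red
    # Guard against division by zero over non-reflective pixels.
    return np.where(denom == 0, 0.0, (nir - red) / denom)

# Dense canopy reflects strongly in the NIR and absorbs red light,
# so vigorous vegetation pushes NDVI toward 1; bare soil sits near 0.
nir = np.array([0.50, 0.40, 0.10])
red = np.array([0.08, 0.10, 0.09])
print(ndvi(nir, red))
```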
Unlike imagery, both terrestrial and airborne discrete-return light detection and ranging (lidar) systems provide a third dimension of information (Z) for height and volumetric analysis. Terrestrial lidar has been successfully implemented to explore biophysical properties of vines [9–13]. Keightley et al. [10] measured uprooted grapevine trunk biomass with a stationary terrestrial lidar scanner. Rosell et al. [9] utilized a tractor-mounted lidar sensor to create three-dimensional (3D) scenes of vineyards and fruit orchards; these lidar data correlated strongly with field measurements and therefore portrayed the entire crop structure (trunks, canopy, and trellis systems if present) with high accuracy. Similarly, Llorens et al. [12] generated whole-vineyard 3D canopy structure maps with a lidar sensor mounted on a tractor moving between vine rows. Llorens et al. [11] modeled leaf area and accurately gauged ideal pesticide amounts for vineyards and orchards. Sanz-Cortiella et al. [13] used a tractor-mounted lidar system to study pear tree leaf density and found that the sensor provided an accurate 3D representation of leaf area but was highly affected by the height and angle of the sensor. Rosell et al. [9] suggested that lidar data may be used to explore relationships with LAI. Llorens et al. [12], in turn, reported a moderate, positive correlation between the number of lidar returns and the measured LAI of a given portion of canopy. Similarly, the high total leaf area of juvenile trees has been shown to correlate directly with the point density of the terrestrial lidar point cloud [14]. In all of these cases, the collected terrestrial lidar point clouds exist in a Cartesian coordinate system, requiring a highly accurate location-tracking global positioning system (GPS) mounted on the lidar sensor platform (tractor or otherwise) for proper georectification [12].
To a much lesser extent, airborne lidar datasets have proven useful for visualizing vine canopy and vineyard structure, leading to accurate delineation of vineyard parcels [15]. Although not specifically applied to viticulture, airborne lidar datasets can confidently predict LAI and other biophysical characteristics of tree vegetation through the calculation of several height-based metrics [16–19]. Yet another method, statistically-based modeling, was implemented by [20] to examine single-vine canopy and explore potential light interception for different grapevine varietals. For the sake of practicality and cost, however, airborne and terrestrial lidar datasets have proven difficult to acquire [21], and repeat acquisitions are usually cost-prohibitive. As a result, alternative ways to gather similar datasets have emerged, such as Structure from Motion (SfM) [22]. Most recently, successful vineyard canopy modeling has been completed by way of SfM, primarily for visualization [23,24]. Across an entire vineyard, Turner et al. [23] compared pre-growth and full-growth point clouds of vineyard canopy in natural color by way of SfM. At a more reduced scale, another SfM-based vineyard analysis accurately classified vine structures (grapes, canopy, trellis, and other hardware) along portions of a vine row [24].
SfM is a computer vision technique based heavily on the principles of photogrammetry, wherein a large number of photographs taken from different, overlapping perspectives are combined to recreate an environment (by keypoint matching of features across images). SfM stems from a number of works, namely [25,26], which document the development of the Bundler algorithm now employed by the most well-known SfM platform, Microsoft PhotoSynth. Although SfM was first intended for ground-based applications, it has since been used from aerial platforms and for geographic applications [22,27–33]. For use in such geographic applications, the SfM output, which is expressed in an internally consistent but arbitrary coordinate system, must be transformed to real-world coordinates. Accordingly, georeferenced SfM datasets are similar to lidar datasets, consisting of a set of data points (the keypoints generated during SfM product creation) with X, Y, and Z information, known in its entirety as a point cloud, plus additional color information (red, green, and blue [RGB] spectral values) from the photographs. The cost to collect SfM point clouds remains very low compared to lidar; hence, there is great interest in using such methods for 3D modeling.
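The transformation from SfM's arbitrary frame to real-world coordinates is commonly a seven-parameter similarity (Helmert) transform estimated from matched ground control points. A minimal sketch using the Umeyama least-squares solution (a standard approach, not necessarily the procedure of the cited studies):

```python
import numpy as np

def similarity_transform(src: np.ndarray, dst: np.ndarray):
    """Estimate scale s, rotation R, translation t so that dst ≈ s * R @ src_i + t.

    src, dst: (N, 3) matched coordinates, e.g. SfM keypoints at ground
    control targets vs. their surveyed real-world positions.
    Closed-form least-squares solution per Umeyama (1991).
    """
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_s, dst - mu_d
    cov = dst_c.T @ src_c / len(src)
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1  # guard against a reflection solution
    R = U @ S @ Vt
    var_src = (src_c ** 2).sum() / len(src)
    s = np.trace(np.diag(D) @ S) / var_src
    t = mu_d - s * R @ mu_s
    return s, R, t
```

With three or more well-distributed control points the seven parameters are overdetermined, and residuals at the control points give a direct check on georeferencing quality.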
SfM-based 3D models have been used extensively to recreate urban and cultural features [26,27,31,34] and, to a lesser extent, topography and other surface features [30,33] such as vegetation [23,28]. The accuracy of the SfM approach, however, is often less trusted than that of comparable datasets provided by airborne or terrestrial lidar systems. Despite this, a number of studies report that SfM point clouds are in fact comparable to, if not more accurate than, lidar point clouds [22,28,33]. Unfortunately, comparison of such datasets is difficult unless both are collected for the same research purpose and at similar point densities.
The SfM approach has proven more difficult with vegetation than with urban and other features because of vegetation's more complex and discontinuous structure [21,28]. Keypoint matching is considerably more difficult for vegetative features because of leaf gaps, repeating structures of the same color, and inconsistent, seemingly random geometries. The resulting SfM point cloud can therefore be less uniform in its spatial coverage [28]. Despite this, satisfactory results for vegetation modeling (canopy height) with SfM have been reported [28]. Placement of colored field markers, modification of the SfM algorithms, increasing the number of photographs captured, and capturing images at higher altitudes are a few of the suggestions offered to improve vegetation modeling with the SfM approach [28].
Aside from [23,24], no studies have reported specifically modeling vineyard vegetation with SfM. More importantly, no studies have explored the relationship between SfM point clouds and in situ LAI measurements as has been done with lidar data. Consequently, this study uses SfM to create a 3D vineyard point cloud to visualize vineyard vegetation as well as to predict vine LAI from information derived from that point cloud. A number of metrics (point heights, number of points, etc.) are calculated from points extracted from the SfM point cloud and compared to LAI measurements to explore how LAI relates to these metrics.
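The kind of comparison described can be sketched as follows: summarize the extracted points per vine into a few metrics, then regress measured LAI on those metrics. The specific metrics and the simple linear model below are illustrative assumptions, not the study's exact procedure:

```python
import numpy as np

def canopy_metrics(points: np.ndarray) -> np.ndarray:
    """Summary metrics for the extracted points over one vine:
    point count, mean height, and maximum height (Z is the third column)."""
    z = points[:, 2]
    return np.array([len(z), z.mean(), z.max()])

def fit_lai_model(metric_rows: np.ndarray, lai: np.ndarray):
    """Ordinary least squares: LAI ≈ X @ beta (with intercept).
    Returns the coefficients and the coefficient of determination R^2."""
    X = np.column_stack([np.ones(len(lai)), metric_rows])
    beta, *_ = np.linalg.lstsq(X, lai, rcond=None)
    pred = X @ beta
    ss_res = ((lai - pred) ** 2).sum()
    ss_tot = ((lai - lai.mean()) ** 2).sum()
    return beta, 1.0 - ss_res / ss_tot
```

The R^2 of such a fit is precisely the kind of summary statistic this study reports when relating LAI to point-cloud-derived metrics.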
5. Conclusions
This study presented several visualizations of vine canopy, from the whole-vineyard to the single-vine scale, based on an SfM-derived point cloud. The model of vine canopy was created from 201 aerial photographs captured with a digital camera mounted on a kitewing UAV. The SfM point cloud was then classified into ground and non-ground points, with non-ground points representing vegetation. This method quickly, practically, and inexpensively recreated the vineyard environment at the study site, including the vine canopy. Using points extracted from this point cloud, the study achieved moderate success in relating measured LAI of vine canopy to SfM-derived metrics, with an R2 value of 0.567.
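The ground/non-ground split can be sketched as a simple local-minimum filter (a common baseline for point-cloud classification, not necessarily the exact classifier used here): points within a small height tolerance of the lowest elevation in their grid cell are labeled ground, and the remainder vegetation.

```python
import numpy as np

def classify_ground(points: np.ndarray, cell: float = 1.0, tol: float = 0.15) -> np.ndarray:
    """Label each point ground (True) or non-ground (False).

    A point counts as ground if its Z lies within `tol` of the minimum
    Z in its (cell x cell) XY grid cell -- a crude local-minimum filter.
    """
    ij = np.floor(points[:, :2] / cell).astype(int)
    labels = np.zeros(len(points), dtype=bool)
    # Group point indices by grid cell, then compare each point's
    # elevation to that cell's minimum elevation.
    cells = {}
    for idx, key in enumerate(map(tuple, ij)):
        cells.setdefault(key, []).append(idx)
    for idxs in cells.values():
        zmin = points[idxs, 2].min()
        labels[idxs] = points[idxs, 2] <= zmin + tol
    return labels
```

Production ground filters (e.g. progressive TIN densification) handle sloped terrain far better; the grid-minimum version only illustrates the idea.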
More work utilizing this rapidly developing SfM methodology is necessary, especially for vegetation-related studies given the added difficulty they present. At this stage, modeling vegetation with SfM remains highly experimental and only moderately successful, as shown by this and other studies [28]. The reasonable success of the method at such an early stage suggests that the technique can be improved upon. The practical and inexpensive nature of SfM-based 3D modeling makes it highly attractive to researchers and practitioners in a variety of fields.
Future work using SfM for vegetation should employ colored targets to aid in keypoint matching. Likewise, higher point density is always desirable and can be obtained by acquiring more images, although this prolongs the processing time needed to generate the point cloud. Implementation of this SfM method to predict LAI of other vegetation types, particularly in forestry, would be worth exploring. SfM point clouds could also be used to estimate volumetric variables like biomass. Within viticulture, using this method at and between each phenological phase (budbreak, flowering, veraison, and harvest) to quickly generate whole-vineyard 3D maps of vine growth, both for visualization and for LAI, would be useful for vineyard managers assessing spatial variation in the size and density of vine canopy. It would also be worth exploring variability in LAI prediction across phenological phases, where fuller or less dense canopies may affect the accuracy of LAI prediction.