Article

Individual Tree Crown Delineation from UAS Imagery Based on Region Growing and Growth Space Considerations

Department of Natural Resources & the Environment, University of New Hampshire, 56 College Rd, Durham, NH 03824, USA
*
Author to whom correspondence should be addressed.
Remote Sens. 2020, 12(15), 2363; https://doi.org/10.3390/rs12152363
Received: 24 June 2020 / Revised: 20 July 2020 / Accepted: 21 July 2020 / Published: 23 July 2020
(This article belongs to the Special Issue Remote Sensing Models of Forest Structure, Composition, and Function)

Abstract

The development of unmanned aerial systems (UAS) equipped with various sensors (e.g., Lidar, multispectral sensors, and/or cameras) has provided the capability to “see” the individual trees in a forest. Individual tree crowns (ITCs) are the building blocks of precision forestry, because this knowledge allows users to analyze, model, and manage the forest at the individual tree level by combining multiple data sources (e.g., remote sensing data and field surveys). Trees in the forest compete with other vegetation, especially neighboring trees, for limited resources to grow into the available horizontal and vertical space. Based on this assumption, this research developed a new region growing method that began with treetops as the initial seeds and then segmented the ITCs, considering the growth space between each tree and its neighbors. The growth space was allocated by Euclidean distance and adjusted based on the crown size. Results showed that the over-segmentation accuracy (Oa), under-segmentation accuracy (Ua), and quality rate (QR) reached 0.784, 0.766, and 0.382, respectively, when the treetops were detected with a variable window filter based on an allometric equation for crown width. The Oa and Ua increased to 0.811 and 0.853, and the QR decreased to 0.296, when the treetops were manually adjusted. Treetop detection accuracy therefore has a great impact on ITC delineation accuracy. The uncertainties and limitations of this research, including the interpretation error and the choice of accuracy measures, were also analyzed and discussed, and a unified framework for assessing segmentation accuracy is strongly recommended.
Keywords: individual tree crown; segmentation; unmanned aerial system; region growing; growth space

1. Introduction

In recent years, the rapid development of unmanned aerial systems (UAS) equipped with various sensors such as digital cameras and multispectral, hyperspectral, or Lidar sensors has provided 3D data and/or higher spatial resolution images with greater flexibility and reliability, and for a much lower cost compared to traditional platforms such as manned aircraft or satellites [1,2,3]. These technologies have provided the capability to clearly “see” the individual trees in a forest, which has brought opportunities for precision forestry applications such as disease mapping, invasive species mapping, and fire monitoring [4]. Individual tree crowns (ITCs) are the building blocks of precision forestry, serving as a bridge that connects multi-dimensional measurements of trees (e.g., size, crown shape) from multiple data sources (e.g., remote sensing data and field surveys) for studying the forest at an individual tree level [1,5]. ITCs also enable foresters to perform better forest management and field inventory, such as selective cuts, silviculture treatments, or biodiversity assessments [6].
The ITC delineation procedure treats each tree crown as a single object, with a separate boundary from the background or other vegetation [6]. Image segmentation, the process of grouping spatially neighboring pixels into homogeneous regions that represent meaningful objects (i.e., image objects) [7,8], is an effective way to describe geographical objects from remote sensing data [9]. Segmentation algorithms are characterized by how they partition an image into these objects and include pixel-based, edge-based, region-based, and hybrid methods [7,10]. Many researchers have applied or optimized these methods to delineate ITCs from various UAS image products, such as orthomosaics, canopy height models (CHM), point clouds, or any combination of these. For example, Huang, Li and Chen [5] used the marker-controlled watershed algorithm to segment Osmanthus and Podocarpus trees from UAS imagery, filtered by bias field estimation to reduce within-canopy spectral heterogeneity. La et al. [11] employed an edge detection technique to extract single tree crowns by fusing hyperspectral imagery and Lidar data. Carr and Slyder [12] segmented deciduous trees from a leaf-off photogrammetric point cloud, to capture sub-canopy structure. Despite these advances, segmenting ITCs in broadleaved or dense forests remains a challenge [13,14,15].
Region growing, because of its robust and simple characteristics, has been widely used to extract objects from remotely sensed imagery [16,17]. This approach selects a set of seed points across the image, and then builds objects by iteratively adding spatially neighboring pixels to each seed, based on a user-defined similarity (e.g., spectral similarity) [18]. However, directly applying this algorithm to UAS-derived imagery is likely to be problematic for the following reasons. First, the goal of the segmentation is a one-to-one relationship between tree crown objects and actual tree crowns, where each object generated contains the crown of only one tree. However, the high spectral variability within a crown (e.g., pixels belonging to branches, shaded leaves, or background) can prevent the algorithm from grouping pixels that actually belong to the same tree crown, fracturing the crown across multiple objects, a problem known as over-segmentation. Second, the high spectral similarity between neighboring trees, especially when canopies overlap with each other, can lead the algorithm to include pixels belonging to more than one tree, resulting in an under-segmented tree crown [19].
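The classic region growing loop described above can be sketched as follows; this is a minimal, single-seed version with a fixed spectral threshold, included for illustration only (it is not the improved method developed in this paper):

```python
from collections import deque

import numpy as np


def region_grow(image, seed, threshold):
    """Classic region growing: starting from a seed pixel, iteratively add
    4-connected neighbors whose value is within `threshold` of the seed.

    `image` is a 2D array of pixel values; returns a boolean mask of the
    grown region.
    """
    rows, cols = image.shape
    seed_value = image[seed]
    mask = np.zeros_like(image, dtype=bool)
    mask[seed] = True
    frontier = deque([seed])
    while frontier:
        r, c = frontier.popleft()
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):  # 4-connectivity
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and not mask[nr, nc]:
                if abs(image[nr, nc] - seed_value) <= threshold:
                    mask[nr, nc] = True
                    frontier.append((nr, nc))
    return mask
```

This sketch makes the two failure modes concrete: a threshold too small fractures a spectrally variable crown (over-segmentation), while a threshold too large lets the region bleed into a spectrally similar neighbor (under-segmentation).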
Many researchers have studied region growing algorithms for ITC delineation [7]. For example, Erikson [20] improved region growing by adding fuzzy rules to segment ITCs based on aerial photographs, reaching a 73% accuracy. Zhen et al. [21] developed an agent-based region growing algorithm to delineate ITCs from airborne laser scanner (ALS) data, where the accuracy reached 91.1%. Zhen et al. [22] studied the impacts of homogeneity criteria, crown size, shape, and growing order on the region growing algorithm for delineating ITCs from ALS data. Generally, the key elements for improving a region growing algorithm are the initial seeds, the similarity measure, and the threshold selection [22,23]. A great deal of work has gone into developing these improvements. Jain and Susan [24] chose the center of the image as the initial seed, and the boundary pixels labeled in the previous growing period as the next seeds. Image clusters or blocks have also been generated as prior information to identify the seeds [25,26]. Jun et al. [27] developed an adaptive region growing algorithm using Otsu thresholding [28] to differentiate disease spots from crop leaves. Jianping et al. [29] improved region growing by combining color contrast and edges as criteria.
Trees in the forest compete with other vegetation, especially neighboring trees, for limited resources, such as water and sunlight, in order to grow in the horizontal and/or vertical space available [30]. Therefore, this research assumed that a larger tree crown takes up more horizontal space. Additionally, the greater the distance between a tree and its neighbor in a specific direction, the more space that tree will have to grow its branches and leaves in that direction. Thus, this research included “growth space” as an additional variable in the region growing algorithm, as a means of further controlling segment growth and improving the accuracy of ITC delineation. This paper is organized as follows. Section 2 describes the details of the improved region growing method developed by the authors, primarily focusing on how to define the growth space. Section 3 provides two examples of applying this algorithm to a mixed forested area using very high-spatial resolution natural-color imagery collected via UAS. Section 4 and Section 5 present the results and discussion, respectively, in which both the strengths and limitations of this method were analyzed. The major conclusions are highlighted in the last section.

2. Methods

2.1. Workflow

Figure 1 summarizes the workflow of the improved region growing method proposed in this research. First, treetops were detected with a local window filter whose size varied based on an allometric equation for crown width (Figure 1a); the detected treetops were regarded as the region growing seeds. Second, the growth space of a tree was modeled by an adjusted Euclidean allocation, which considered the relative tree crown size and the distance to its neighboring trees (Figure 1b). Third, the growth space of a tree was divided into eight smaller sections according to Euclidean direction (Figure 1c), and region growing was performed within each section by combining the spectral and spatial distance to the treetop. The second and third steps were performed iteratively until all detected trees were segmented. Finally, a segmentation accuracy assessment was implemented by comparing the segmented ITCs to reference crowns chosen by random sampling (Figure 1d).

2.2. Individual Treetop Detection

Individual treetops were detected by running a variable window filter algorithm, developed by Popescu and Wynne [31], on a CHM. To start, each pixel in the CHM with a height greater than 5 m was treated as a potential treetop; previous land cover mapping efforts have defined forest as any land with trees taller than 5 m [32,33], so pixels with a height less than 5 m were masked out. Tree crowns were assumed to be circular; thus, a local circular window filter centered at each pixel (x, y) was generated. The size of the filter was determined by means of an allometric equation that estimates crown width (CW) from tree height (Equation (1)) [31,34].
CW = 2.51503 + 0.00901H²
In Equation (1), H represents the height in meters at (x, y). The estimated crown width (CW) at pixel (x, y) was transformed into a window size based on the spatial resolution of the CHM. The allometric equation (Equation (1)) was derived by Popescu and Wynne [31] from 424 deciduous and coniferous trees and is considered representative of the eastern United States.
The filtering algorithm assumes the treetop is the highest point within the crown. A pixel (x, y) is chosen as a treetop only if it is the highest within the local filter. Each detected tree was assigned a unique ID ranging from 1 to N, where N is the total number of trees. Figure 1a shows an example of treetops detected by this algorithm.
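The variable window filter can be sketched as follows; this is a simplified, brute-force version assuming a metric CHM raster, not the authors' implementation:

```python
import numpy as np


def detect_treetops(chm, resolution, min_height=5.0):
    """Variable window filter in the spirit of Popescu and Wynne:
    a pixel is a treetop if it is the highest within a circular window
    whose radius is half the crown width predicted from its height by
    Equation (1). `chm` is a 2D canopy height model in meters;
    `resolution` is the pixel size in meters. Returns (row, col) treetops.
    """
    rows, cols = chm.shape
    treetops = []
    for r in range(rows):
        for c in range(cols):
            h = chm[r, c]
            if h < min_height:  # mask out non-forest pixels (< 5 m)
                continue
            cw = 2.51503 + 0.00901 * h ** 2          # crown width in meters
            radius = max(1, int(round((cw / 2) / resolution)))  # in pixels
            r0, r1 = max(0, r - radius), min(rows, r + radius + 1)
            c0, c1 = max(0, c - radius), min(cols, c + radius + 1)
            yy, xx = np.ogrid[r0:r1, c0:c1]
            circle = (yy - r) ** 2 + (xx - c) ** 2 <= radius ** 2
            if h >= chm[r0:r1, c0:c1][circle].max():  # local maximum test
                treetops.append((r, c))
    return treetops
```

On a synthetic CHM with a single smooth peak, the function returns only the peak pixel; a vectorized or windowed implementation would be needed for production-scale rasters.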

2.3. Adjusted Euclidean Allocation

A tree’s growth space was modeled based on its crown size and distance to its neighboring trees (Figure 1b) using an adjusted Euclidean distance (AED), defined here as the Euclidean distance divided by twice the crown width (Equation (2)). A pixel (x, y) was assigned to the jth tree’s allocation only if it had the minimum AED to that tree.
AED = √((x − x_j)² + (y − y_j)²) / (2 × CW_j)
In Equation (2), (x, y) represents any pixel whose allocation needs to be determined and (x_j, y_j) indicates the jth detected tree. Additionally, CW_j is the crown width of the jth tree.
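The allocation can be sketched as follows, interpreting Equation (2) as a Euclidean distance scaled down by twice the crown width, so that larger crowns claim proportionally more space; a minimal raster sketch, not the authors' implementation:

```python
import numpy as np


def allocate_growth_space(shape, treetops, crown_widths):
    """Assign every pixel to the tree with the minimum adjusted Euclidean
    distance (AED = Euclidean distance / (2 * crown width)).

    `treetops` is a list of (row, col) seeds; `crown_widths` holds the
    matching crown widths in pixel units. Returns an integer label image
    with tree IDs 1..N.
    """
    rows, cols = shape
    yy, xx = np.mgrid[0:rows, 0:cols]
    best_aed = np.full(shape, np.inf)
    labels = np.zeros(shape, dtype=int)
    for tree_id, ((tr, tc), cw) in enumerate(zip(treetops, crown_widths), start=1):
        # Equation (2): distance to this treetop, normalized by 2 * CW_j.
        aed = np.sqrt((yy - tr) ** 2 + (xx - tc) ** 2) / (2.0 * cw)
        closer = aed < best_aed
        best_aed[closer] = aed[closer]
        labels[closer] = tree_id
    return labels
```

With two treetops of unequal crown width, the allocation boundary shifts toward the smaller tree, which is the intended effect of the adjustment.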

2.4. Euclidean Direction and Region Growing

Each tree’s growth space was further divided into eight 45°-wide quadrants, where the north quadrant ranged from −22.5° to 22.5°, and so on (Figure 1c). The region growing was performed within each quadrant of the growth space. The algorithm used the detected treetops as the initial seeds and calculated the similarity between the treetop and a candidate crown pixel as the spectral distance (SD) between those pixels (Equation (3)) multiplied by one minus a distance decay function (f(d)) (Equation (4)). Any candidate pixel (x, y) located within a growth space quadrant was included as part of that tree’s crown if the similarity was less than a global threshold (θ) (Equation (5)).
SD = (S − μ_k)ᵀ(S − μ_k)
f(d) = e^(−d²/(2h²))
SD × (1 − f(d)) ≤ θ
In Equation (3), SD is the spectral distance between a treetop and a candidate pixel, where S represents the spectral vector of the candidate pixel and T denotes the vector transpose. All bands (R, G, and B) were used to calculate the spectral distance. The detected treetops (initial seeds) may fall on noise pixels (e.g., branches); therefore, the average spectral vector (μ_k) of each seed’s neighboring 5 × 5 pixels was used to smooth the spectra. The f(d) in Equation (4) denotes the exponential distance decay function, where d represents the candidate pixel’s Euclidean distance to the seed and h is defined as the maximum distance between the seed and the Euclidean boundary within that growth space quadrant. As d increases, 1 − f(d) increases, meaning that a candidate pixel farther from the seed (i.e., the treetop) must have a lower spectral distance (SD) to be included as part of the crown. This improved region growing algorithm used 4-connectivity to describe neighboring pixels. It is worth noting that a candidate pixel beyond the Euclidean boundary of the growth space quadrant (i.e., with d greater than h) can still be included as part of a tree, provided it has a sufficiently low SD.
Note that as the candidate pixel moves farther away from the seed, (1 − f(d)) increases and the spectral distance (SD) also tends to increase, because the pixel may fall on another, spectrally different tree crown. Therefore, the algorithm automatically converges to a final segmentation. However, like other threshold-based approaches [22,23], the segmentation results vary with the chosen global threshold. To determine the best threshold for segmentation, global threshold (θ) values between 10 and 20 were tested, increasing the threshold by 1 after each segmentation. The chosen segmentation was the one with the highest accuracy relative to the reference tree crowns (Section 2.6.2). It is also noteworthy that dividing both sides of the inequality (Equation (5)) by (1 − f(d)) yields a dynamic local threshold (θ/(1 − f(d))) on SD within each growth space quadrant.
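The combined criterion of Equations (3)–(5) can be sketched as a small helper; the function name and arguments are illustrative, and a pixel is accepted when the returned score is at most the global threshold θ:

```python
import numpy as np


def similarity(candidate_rgb, seed_mean_rgb, d, h):
    """Growing criterion from Equations (3)-(5): spectral distance
    SD = (S - mu_k)^T (S - mu_k), damped by the Gaussian distance decay
    f(d) = exp(-d^2 / (2 h^2)). A candidate joins the crown when
    SD * (1 - f(d)) <= theta.

    `candidate_rgb` is the candidate pixel's spectral vector,
    `seed_mean_rgb` the 5 x 5 smoothed seed spectrum, `d` the pixel's
    Euclidean distance to the seed, and `h` the maximum seed-to-boundary
    distance within the growth space quadrant.
    """
    diff = np.asarray(candidate_rgb, float) - np.asarray(seed_mean_rgb, float)
    sd = float(diff @ diff)                    # squared spectral distance
    f_d = np.exp(-d ** 2 / (2.0 * h ** 2))     # 1 at the seed, -> 0 far away
    return sd * (1.0 - f_d)
```

At the seed itself (d = 0) the score is exactly 0, so the seed always passes; for a fixed spectral difference, the score rises with distance, which is what forces far pixels to be spectrally closer to the seed.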

2.5. Hole Filling

The segmented result may contain holes within an ITC caused by, for example, branch or background pixels that were not included as part of the crown. A morphological flood-fill operation, based on dilation, complementation, and intersection [35,36], was therefore applied to fill the holes within each tree crown.
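The dilation/complement/intersection flood fill can be sketched in pure NumPy as follows; an illustrative re-implementation, not the authors' code:

```python
import numpy as np


def fill_holes(mask):
    """Fill interior holes in a binary crown mask: grow the background
    inward from the image border by repeated 4-connected dilation,
    intersecting with the mask's complement at each step; any background
    the flood cannot reach is an enclosed hole and joins the crown."""
    background = ~mask
    reach = np.zeros_like(mask)
    # Seed the flood with the border background pixels.
    reach[0, :], reach[-1, :] = background[0, :], background[-1, :]
    reach[:, 0], reach[:, -1] = background[:, 0], background[:, -1]
    while True:
        grown = reach.copy()                 # one 4-connected dilation step
        grown[1:, :] |= reach[:-1, :]
        grown[:-1, :] |= reach[1:, :]
        grown[:, 1:] |= reach[:, :-1]
        grown[:, :-1] |= reach[:, 1:]
        grown &= background                  # intersect with the complement
        if (grown == reach).all():           # converged: flood is complete
            break
        reach = grown
    return mask | ~reach                     # crown plus unreachable holes
```

In practice the equivalent `scipy.ndimage.binary_fill_holes` would be used; the explicit loop is shown only to mirror the morphological description above.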

2.6. Accuracy Assessment

2.6.1. Individual Tree Detection Accuracy Assessment

The individual tree detection (ITD) accuracy assessment was performed by comparing a sample of the trees detected by the proposed method with manually interpreted reference trees. The results were presented as an error matrix (Table 1). In the error matrix, TP (true positive) represents the number of trees that were correctly detected. FP (false positive) represents the number of detected trees that did not match a reference tree, also known as commission error. FN (false negative) indicates the number of undetected reference trees, also known as omission error. TN (true negative) occurs where no tree was detected and there was also no reference tree; TN is not needed to calculate the accuracy measures. The recall (r), precision (p), and F-score (F) were estimated from the error matrix values [37].
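The three measures can be computed directly from the error matrix counts; a simple helper, with illustrative counts in the test (not the paper's values):

```python
def detection_scores(tp, fp, fn):
    """Recall, precision, and F-score from ITD error matrix counts
    (TN is not required). Returns the three measures as percentages."""
    recall = tp / (tp + fn)                  # share of reference trees found
    precision = tp / (tp + fp)               # share of detections that are real
    f_score = 2 * recall * precision / (recall + precision)  # harmonic mean
    return 100 * recall, 100 * precision, 100 * f_score
```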

2.6.2. Segmentation Accuracy Assessment

A total of n segmented tree crowns whose treetops were detected correctly were randomly selected for the segmentation accuracy assessment. The corresponding reference tree crowns were manually interpreted and digitized by combining the natural color image with the CHM data. Each segmented tree crown automatically has a “one-to-one” relationship with its reference tree crown, because both contain the same treetop. The segmentation accuracy was reported using the over-segmentation accuracy (Oa), under-segmentation accuracy (Ua), and quality rate (QR) (Equations (6)–(8), respectively) [38,39].
Oa = (1/n) Σᵢ₌₁ⁿ [area(r_i ∩ s_i) / area(r_i)]
Ua = (1/n) Σᵢ₌₁ⁿ [area(r_i ∩ s_i) / area(s_i)]
QR = (1/n) Σᵢ₌₁ⁿ [1 − area(r_i ∩ s_i) / area(r_i ∪ s_i)]
In Equations (6)–(8), the symbol ∩ represents the intersection of the reference and segmented crown polygons, while ∪ denotes their union. r_i represents the ith reference crown, s_i indicates the segmented tree crown corresponding to r_i, and n is the sample size.
Both Oa and Ua are based on the intersection of the segmented tree crown with its reference crown. QR combines the intersection and union regions and considers the geometrical similarity [40]. A higher Oa or Ua indicates greater segmentation accuracy, while a higher QR indicates lower accuracy [39].
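Given boolean raster masks for the matched crown pairs, the three measures can be computed as follows; a sketch assuming the reference and segmented polygons have been rasterized onto a common grid:

```python
import numpy as np


def segmentation_scores(ref_masks, seg_masks):
    """Mean Oa, Ua, and QR from Equations (6)-(8), using boolean raster
    masks for each reference crown r_i and its matched segment s_i."""
    oa, ua, qr = [], [], []
    for r, s in zip(ref_masks, seg_masks):
        inter = float(np.logical_and(r, s).sum())   # area(r_i ∩ s_i)
        union = float(np.logical_or(r, s).sum())    # area(r_i ∪ s_i)
        oa.append(inter / r.sum())                  # over-segmentation accuracy
        ua.append(inter / s.sum())                  # under-segmentation accuracy
        qr.append(1.0 - inter / union)              # quality rate (lower is better)
    return np.mean(oa), np.mean(ua), np.mean(qr)
```

A perfect match yields Oa = Ua = 1 and QR = 0, which matches the direction of the measures stated above.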

3. Experiments and Analysis

Intuitively, the treetop detection error would impact the segmentation accuracy. Therefore, two comparative experiments were conducted to examine how treetop detection error propagates through the ITC delineation. Experiment #1 followed the entire workflow described in Section 2.2 through Section 2.6. However, for experiment #2, the results of treetop detection (Section 2.2) were manually adjusted to significantly improve the detection accuracy prior to performing the rest of the methods as described.

3.1. Experiment #1

The study site of experiment #1 was located in the College Woods Natural Area (CWNA, Figure 2) in Durham, NH, U.S.A. CWNA is owned and managed by the University of New Hampshire; its central longitude and latitude are 70°56′51.339″W and 43°8′7.935″N, respectively. CWNA is a mixed forest dominated by white pine (Pinus strobus), eastern hemlock (Tsuga canadensis), American beech (Fagus grandifolia), and oaks (Quercus spp.). The study site is a spatial subset of the CWNA covering an area of 400 × 400 m, of which nearly 60% is occupied by coniferous species.
A total of 1961 raw images were collected on July 11th, 2018, with a senseFly eBee Plus carrying a S.O.D.A. (Sensor Optimized for Drone Application) camera, which captures natural color imagery. The flying height was 120 m above the ground, and the imagery was acquired with a forward and side overlap of 85%. All the raw images were PPK post-processed with senseFly’s Flight Data Manager built into the eMotion software, using a nearby CORS station (Site ID: NHUN) [41]. The images were further processed in Agisoft PhotoScan Pro (v.1.6.2) [42] to create a natural color orthomosaic and digital surface model (DSM), using the processing settings suggested by Fraser and Congalton [43]. The original spatial resolutions of the orthomosaic and DSM were 2.31 cm and 12.10 cm, respectively. The CHM was created by subtracting a digital terrain model (DTM) from the DSM. The DTM was generated from Lidar data collected in the winter and spring of 2011 and downloaded from the GRANIT Lidar Distribution Site [44]. Based on the size of the study site and knowledge of its land-use history, the age of the DTM relative to the UAS missions would introduce little, if any, error. All data were converted to the same projection, coordinate system, and horizontal and vertical datum. Both the natural color orthomosaic and the CHM were resampled to a common 12 cm resolution to resolve the inconsistency in spatial resolution.
For validation of the ITD, the whole study site was divided into 256 plots, each of which covered a 25 × 25 m square area. A total of 60 plots were randomly selected. The individual tree detection error matrix was created by compiling the error analysis from each plot. A total of 262 trees were randomly selected from the correctly detected trees, and their reference tree crowns were manually interpreted for segmentation accuracy assessment.

3.2. Experiment #2

Experiment #2 was conducted to evaluate the impact of accurate treetop detection on the analysis. However, manually adjusting all detected treetops from experiment #1 to reach a detection accuracy of 100% would be arduous and nearly impossible. Therefore, only a subsample, namely the treetops corresponding to the 262 reference tree crowns in experiment #1 and their neighboring treetops, was manually interpreted. Consequently, experiment #2 omitted the detection accuracy assessment (Section 2.6.1), since the treetops were manually identified.

4. Results

4.1. Individual Treetop Detection Results

A total of 4164 trees were detected in experiment #1, with an average height of 23.35 m and an average crown diameter of 7.43 m. A total of 971 trees were identified within the 60 randomly selected plots and used for the error analysis. The tree detection error matrix is presented in Table 2. The recall (r), precision (p), and F-score (F) were 74.85%, 90.42%, and 81.90%, respectively. Figure 3 shows four examples of detected treetops. The red dots indicate TPs, while the yellow and blue dots represent FPs and FNs, respectively. The FP treetops tend to occur where the branches of tall coniferous trees overhang deciduous canopies (Figure 3a), over background (Figure 3a,c), or where multiple treetops were detected within a single deciduous crown (Figure 3b). FN treetops are more likely to be found on the boundary between deciduous and coniferous trees (Figure 3d), or where smaller trees are adjacent to larger ones (Figure 3c).

4.2. ITC Delineation Results

To determine the best global threshold (θ) for segmentation, the threshold value was systematically varied from 10 to 20. Only the threshold values from the top five segmentation results based on QR for experiments #1 and #2 are shown in Figure 4. In experiment #1, Oa increased while Ua decreased as the global threshold rose from 10 to 14. The best segmentation, based on QR, occurred at a global threshold of 13, where Oa, Ua, and QR reached 0.784, 0.766, and 0.382, respectively. The Oa and Ua in experiment #2 exhibited the same pattern as in experiment #1 as the global threshold grew from 15 to 19. The best ITC delineation occurred at a global threshold of 17, where Oa, Ua, and QR reached 0.811, 0.853, and 0.296, respectively.
Figure 5 shows eight examples of the ITC delineation results from experiment #1 (θ = 13) and experiment #2 (θ = 17), along with their corresponding reference crowns. It is worth noting that the sub-figures in Figure 5 cannot be shown at the same scale due to the variable crown sizes. Figure 5a–d show examples of deciduous trees, while Figure 5e–g are examples of coniferous trees. There is no clear distinction between the results for deciduous trees and those for coniferous trees. The accuracy is highly dependent on whether a tree’s neighboring trees are accurately detected (e.g., Figure 5c vs. Figure 5f). A tree’s crown tends to be under-segmented in a certain direction if its neighbor in that direction is not detected (Figure 5a–d,f). Conversely, a tree’s crown is likely to be over-segmented if multiple treetops are detected within an individual tree crown (Figure 5e). Generally, the results of experiment #2 are visually much better than those of experiment #1, due to the improvement in treetop detection.

5. Discussion

This research looked to improve upon ITC delineation using region growing segmentation, by considering the growth space between a tree and its neighbors. The improved algorithm was tested with UAS imagery collected over a mixed forest. Two experiments were conducted. Experiment #1 utilized treetops detected with a variable window filter as initial seeds, while Experiment #2 utilized manually delineated treetops as initial seeds. The Ua, Oa, and QR metrics, widely accepted for validating segmentation, were employed as accuracy measures in this study, and used to compare the results of the experiments.
The best results of experiment #1 were achieved with a global threshold of 13. At this threshold, the Oa, Ua, and QR reached 0.784, 0.766, and 0.382, respectively. After the treetops were manually adjusted and, thus, assumed to be more accurate, experiment #2 achieved its best results with a global threshold of 17, with the Oa and Ua increasing to 0.811 and 0.853 and the QR decreasing to 0.296. The accuracy of experiment #1 is higher than that of Erikson [20], who also sought to improve the region growing algorithm by adding fuzzy rules to segment ITCs based on aerial photographs, achieving a final accuracy of 73%. The accuracy of experiment #2 is comparable to that achieved by Zhen, Quackenbush, Stehman, and Zhang [21], who reached an accuracy of 91.1% with an agent-based region growing algorithm delineating ITCs from ALS data using manual treetops as seeds. However, Zhen, Quackenbush, Stehman, and Zhang [21] employed the relative error of the crown area as an accuracy measure, which is quite different from the accuracy measures used in this research. Thematic accuracy assessment of remote sensing classification has been well researched [45]; however, segmentation accuracy assessment still lacks unified sampling methods and accuracy measures [46]. As the spatial resolution of remotely sensed data has increased, the user community has transitioned from pixel-based image analysis towards geographic object-based image analysis (GEOBIA) [7,47]. Therefore, a unified framework for validating segmentation is strongly recommended.
A comparison of experiment #2 with experiment #1 indicates that the region growing algorithm developed in this research is highly dependent on treetop detection accuracy, which is reasonable, given that the algorithm uses a tree’s neighbors to define its growth space. The treetop detection recall (r), precision (p), and F-score (F) were 74.85%, 90.42%, and 81.90%, respectively. A lower recall indicates a greater number of undetected trees. When a tree goes undetected, the growth space of one or more of its neighboring trees is allowed to expand, resulting in under-segmentation, as a neighbor can now grow beyond its actual crown. One underlying reason for this lower recall could be that the allometric equation is not local, but based on field inventory data collected in the Piedmont physiographic province of Virginia [31]. The average crown width predicted by this allometric equation, 11.63 m, is 1.09 m higher than the average width derived from the reference tree crowns. This overestimation produced a wider local window filter and the under-detection of treetops. However, there are no published allometric equations for New England and, therefore, the equation used was the best available. Another reason could be that the tree detection algorithm assumes that tree canopies possess a mountain-like shape, where the treetop is the locally highest point in the CHM and the crown edges are lower in elevation [14,48]. This assumption works for coniferous trees, but becomes less effective for deciduous trees, which often have wide, flat crowns (Figure 3d) [49,50,51]. Additionally, a CHM produced through photogrammetry and the structure from motion (SfM) algorithm tends to underestimate the height of the upper canopy layers and overestimate the height of the lower layers [52,53]. The result is a smoother DSM with less vertical variability, making it harder to detect the edges between crowns. The combination of these factors highlights the importance of analyzing the error budget of treetop detection products based on the allometric equation [54].
The reference data for validating the segmentation results are another source of uncertainty. Manually interpreting ITCs from the UAS imagery to construct the reference dataset is not only arduous but also full of uncertainties, especially for deciduous trees whose crowns heavily overlap (Figure 5d). The same problem occurs when measuring tree variables in the field. Many researchers have used the concept of “ground truth” to validate UAS data [55,56]. However, data collected on the ground can never truly be 100% accurate and, thus, cannot be considered the “truth” [45]. As UAS has become a common means of collecting reference data, determining whether such data can be used as a reference, and how much uncertainty they contain, becomes extremely important and needs further research.
The region growing algorithm developed in this research can be easily applied to other data sources, such as multispectral or point cloud data. However, there are limitations to this algorithm that require further research. First, this study used a sequential growing order: a pixel claimed by an earlier seed cannot be claimed by subsequent seeds [22]. Although a unique allocation was assigned to each detected tree to avoid its space being taken by other trees, a simultaneous growing strategy could achieve better accuracies [22]. Second, this algorithm did not consider species information. A local competition scheme for space exists between different tree species [30], and the relationship between crown width and height may vary from one species to another [31,34]. Adding information on competition, together with species-specific allometric equations, could have helped to improve the tree detection and growth space allocation in this study. Third, although the region growing algorithm generally outperforms watershed and valley-following algorithms for segmentation [57], it is time-consuming. Experiment #1 took 8 h and 26 min on a laptop workstation with a 6-core E-2176M processor and 32 GB of memory. How to choose the best features from the data and combine them with the growth space concept requires further research.

6. Conclusions

This study developed a region growing algorithm that takes into consideration the growth space between neighboring trees while segmenting ITCs. The algorithm was implemented on natural color imagery derived from UAS. Results showed that the treetop detection accuracies (recall, precision, and F-score) were 74.85%, 90.42%, and 81.90%, respectively. The segmentation accuracies (Oa, Ua, and QR) reached 0.784, 0.766, and 0.382, respectively. The Oa and Ua increased to 0.811 and 0.853, and the QR decreased to 0.296, when the treetops were manually updated to improve the detection accuracy. The segmentation accuracy is thus highly dependent on the treetop detection accuracy. The sources of uncertainty, such as the allometric equation utilized, the accuracy measures, and the manual interpretation, were analyzed, and a unified framework for validating segmentation is strongly recommended. The region growing algorithm developed in this research can be easily applied to other data sources to achieve higher accuracy. Limitations, including the growing order, species-specific growth models, and ecological competition schemes of trees, were also addressed.

Author Contributions

J.G., H.G., and R.G.C. conceived and designed the experiments. J.G. performed the experiments and analyzed the data with guidance from R.G.C. J.G. wrote the paper. H.G. and R.G.C. edited and finalized the manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

Partial funding was provided by the New Hampshire Agricultural Experiment Station. This is Scientific Contribution Number: #2858. This work was supported by the USDA National Institute of Food and Agriculture McIntire Stennis Project #NH00095-M (Accession #1015520).

Acknowledgments

The authors would like to acknowledge Benjamin Fraser, Vincent Pagano, and Hannah Stewart for their assistance with the UAS and reference data collection.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Navarro, J.A.; Algeet, N.; Fernández-Landa, A.; Esteban, J.; Rodríguez-Noriega, P.; Guillén-Climent, M.L. Integration of UAV, Sentinel-1, and Sentinel-2 Data for Mangrove Plantation Aboveground Biomass Monitoring in Senegal. Remote Sens. 2019, 11, 77.
  2. Ok, A.O.; Ozdarici-Ok, A. 2-D delineation of individual citrus trees from UAV-based dense photogrammetric surface models. Int. J. Digit. Earth 2018, 11, 583–608.
  3. Nevalainen, O.; Honkavaara, E.; Tuominen, S.; Viljanen, N.; Hakala, T.; Yu, X.W.; Hyyppa, J.; Saari, H.; Polonen, I.; Imai, N.N.; et al. Individual Tree Detection and Classification with UAV-Based Photogrammetric Point Clouds and Hyperspectral Imaging. Remote Sens. 2017, 9, 185.
  4. Shin, J.-I.; Seo, W.-W.; Kim, T.; Park, J.; Woo, C.-S. Using UAV Multispectral Images for Classification of Forest Burn Severity—A Case Study of the 2019 Gangneung Forest Fire. Forests 2019, 10, 1025.
  5. Huang, H.Y.; Li, X.; Chen, C.C. Individual Tree Crown Detection and Delineation from Very-High-Resolution UAV Images Based on Bias Field and Marker-Controlled Watershed Segmentation Algorithms. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 2253–2262.
  6. Wan Mohd Jaafar, W.S.; Woodhouse, I.H.; Silva, C.A.; Omar, H.; Abdul Maulud, K.N.; Hudak, A.T.; Klauberg, C.; Cardil, A.; Mohan, M. Improving Individual Tree Crown Delineation and Attributes Estimation of Tropical Forests Using Airborne LiDAR Data. Forests 2018, 9, 759.
  7. Hossain, M.D.; Chen, D. Segmentation for Object-Based Image Analysis (OBIA): A review of algorithms and challenges from remote sensing perspective. ISPRS J. Photogramm. Remote Sens. 2019, 150, 115–134.
  8. Chen, G.; Weng, Q.; Hay, G.J.; He, Y. Geographic object-based image analysis (GEOBIA): Emerging trends and future opportunities. GISci. Remote Sens. 2018, 55, 159–182.
  9. Blaschke, T. Object based image analysis for remote sensing. ISPRS J. Photogramm. Remote Sens. 2010, 65, 2–16.
  10. Poblete-Echeverria, C.; Olmedo, G.F.; Ingram, B.; Bardeen, M. Detection and Segmentation of Vine Canopy in Ultra-High Spatial Resolution RGB Imagery Obtained from Unmanned Aerial Vehicle (UAV): A Case Study in a Commercial Vineyard. Remote Sens. 2017, 9, 268.
  11. La, H.P.; Eo, Y.D.; Chang, A.; Kim, C. Extraction of individual tree crown using hyperspectral image and LiDAR data. KSCE J. Civil Eng. 2015, 19, 1078–1087.
  12. Carr, J.C.; Slyder, J.B. Individual tree segmentation from a leaf-off photogrammetric point cloud. Int. J. Remote Sens. 2018, 39, 5195–5210.
  13. Yin, D.; Wang, L. Individual mangrove tree measurement using UAV-based LiDAR data: Possibilities and challenges. Remote Sens. Environ. 2019, 223, 34–49.
  14. Ke, Y.; Quackenbush, L.J. A review of methods for automatic individual tree-crown detection and delineation from passive remote sensing. Int. J. Remote Sens. 2011, 32, 4725–4747.
  15. Zhen, Z.; Quackenbush, J.L.; Zhang, L. Trends in Automatic Individual Tree Crown Detection and Delineation—Evolution of LiDAR Data. Remote Sens. 2016, 8, 333.
  16. Ma, J.; Du, K.; Zhang, L.; Zheng, F.; Chu, J.; Sun, Z. A segmentation method for greenhouse vegetable foliar disease spots images using color information and region growing. Comput. Electron. Agric. 2017, 142, 110–117.
  17. Soltani-Nabipour, J.; Khorshidi, A.; Noorian, B. Lung tumor segmentation using improved region growing algorithm. Nucl. Eng. Technol. 2020.
  18. Merzougui, M.; Allaoui, A.E. Region growing segmentation optimized by evolutionary approach and Maximum Entropy. Procedia Comput. Sci. 2019, 151, 1046–1051.
  19. Milas, A.S.; Arend, K.; Mayer, C.; Simonson, M.A.; Mackey, S. Different colours of shadows: Classification of UAV images. Int. J. Remote Sens. 2017, 38, 3084–3100.
  20. Erikson, M. Segmentation of individual tree crowns in colour aerial photographs using region growing supported by fuzzy rules. Can. J. Forest Res. 2003, 33, 1557–1563.
  21. Zhen, Z.; Quackenbush, L.J.; Stehman, S.V.; Zhang, L. Agent-based region growing for individual tree crown delineation from airborne laser scanning (ALS) data. Int. J. Remote Sens. 2015, 36, 1965–1993.
  22. Zhen, Z.; Quackenbush, J.L.; Zhang, L. Impact of Tree-Oriented Growth Order in Marker-Controlled Region Growing for Individual Tree Crown Delineation Using Airborne Laser Scanner (ALS) Data. Remote Sens. 2014, 6, 555–579.
  23. Falah, R.K.; Bolon, P.; Cocquerez, J.P. A region-region and region-edge cooperative approach of image segmentation. In Proceedings of the 1st International Conference on Image Processing, Austin, TX, USA, 13–16 November 1994; pp. 470–474.
  24. Jain, P.K.; Susan, S. An adaptive single seed based region growing algorithm for color image segmentation. In Proceedings of the Annual IEEE India Conference (INDICON), Mumbai, India, 13–15 December 2013; pp. 1–6.
  25. Wang, Z.; Jensen, J.R.; Im, J. An automatic region-based image segmentation algorithm for remote sensing applications. Environ. Model. Softw. 2010, 25, 1149–1165.
  26. Cui, W.; Guan, Z.; Zhang, Z. An Improved Region Growing Algorithm for Image Segmentation. In Proceedings of the International Conference on Computer Science and Software Engineering, Wuhan, China, 12–14 December 2008; pp. 93–96.
  27. Jun, P.; Bai, Z.; Jun-chen, L.; Li, S. Automatic segmentation of crop leaf spot disease images by integrating local threshold and seeded region growing. In Proceedings of the International Conference on Image Analysis and Signal Processing, San Francisco, CA, USA, 21–23 October 2011; pp. 590–594.
  28. Sezgin, M.; Sankur, B. Survey over image thresholding techniques and quantitative performance evaluation. J. Electron. Imaging 2004, 13, 146–165.
  29. Jianping, F.; Yau, D.K.Y.; Elmagarmid, A.K.; Aref, W.G. Automatic image segmentation by integrating color-edge extraction and seeded region growing. IEEE Trans. Image Process. 2001, 10, 1454–1466.
  30. Grebner, D.L.; Bettinger, P.; Siry, J.P. Chapter 6—Ecosystem Services. In Introduction to Forestry and Natural Resources; Academic Press: San Diego, CA, USA, 2013; pp. 147–165.
  31. Popescu, S.; Wynne, R. Seeing the Trees in the Forest: Using Lidar and Multispectral Data Fusion with Local Filtering and Variable Window Size for Estimating Tree Height. Photogramm. Eng. Remote Sens. 2004, 70, 589–604.
  32. Bontemps, S.; Defourny, P.; Van Bogaert, E.; Arino, O.; Kalogirou, V.; Perez, J.R. GlobCOVER 2009 Products Description and Validation Report; UCLouvain and ESA: Paris, France, 2010.
  33. Congalton, R.; Gu, J.; Yadav, K.; Thenkabail, P.; Ozdogan, M. Global Land Cover Mapping: A Review and Uncertainty Analysis. Remote Sens. 2014, 6, 12070–12093.
  34. Panagiotidis, D.; Abdollahnejad, A.; Surový, P.; Chiteculo, V. Determining tree height and crown diameter from high-resolution UAV imagery. Int. J. Remote Sens. 2017, 38, 2392–2410.
  35. Chudasama, D.; Patel, T.; Joshi, S.; Prajapati, G.I. Image segmentation using morphological operations. Int. J. Comput. Appl. 2015, 117.
  36. Bhargava, N.; Trivedi, P.; Toshniwal, A.; Swarnkar, H. Iterative Region Merging and Object Retrieval Method Using Mean Shift Segmentation and Flood Fill Algorithm. In Proceedings of the Third International Conference on Advances in Computing and Communications, Cochin, India, 29–31 August 2013; pp. 157–160.
  37. Mohan, M.; Silva, C.A.; Klauberg, C.; Jat, P.; Catts, G.; Cardil, A.; Hudak, A.T.; Dia, M. Individual Tree Detection from Unmanned Aerial Vehicle (UAV) Derived Canopy Height Model in an Open Canopy Mixed Conifer Forest. Forests 2017, 8, 340.
  38. Möller, M.; Lymburner, L.; Volk, M. The comparison index: A tool for assessing the accuracy of image segmentation. Int. J. Appl. Earth Obs. Geoinform. 2007, 9, 311–321.
  39. Chen, Y.Y.; Ming, D.P.; Zhao, L.; Lv, B.R.; Zhou, K.Q.; Qing, Y.Z. Review on High Spatial Resolution Remote Sensing Image Segmentation Evaluation. Photogramm. Eng. Remote Sens. 2018, 84, 629–646.
  40. Weidner, U. Contribution to the assessment of segmentation quality for remote sensing applications. Int. Arch. Photogramm. Remote Sens. 2008, 37, 479–484.
  41. SenseFly User-Manuals. Available online: https://www.sensefly.com/my-sensefly/user-manuals/ (accessed on 21 June 2020).
  42. Iizuka, K.; Yonehara, T.; Itoh, M.; Kosugi, Y. Estimating Tree Height and Diameter at Breast Height (DBH) from Digital Surface Models and Orthophotos Obtained with an Unmanned Aerial System for a Japanese Cypress (Chamaecyparis obtusa) Forest. Remote Sens. 2018, 10, 13.
  43. Fraser, T.B.; Congalton, G.R. Evaluating the Effectiveness of Unmanned Aerial Systems (UAS) for Collecting Thematic Map Accuracy Assessment Reference Data in New England Forests. Forests 2019, 10, 24.
  44. GRANIT LiDAR Distribution Site. Available online: http://lidar.unh.edu/map/ (accessed on 21 June 2020).
  45. Congalton, R.G.; Green, K. Assessing the accuracy of remotely sensed data: Principles and practices. Photogramm. Rec. 2019, 25.
  46. Ye, S.; Pontius, R.G.; Rakshit, R. A review of accuracy assessment for object-based image analysis: From per-pixel to per-polygon approaches. ISPRS J. Photogramm. Remote Sens. 2018, 141, 137–147.
  47. Dronova, I. Object-Based Image Analysis in Wetland Research: A Review. Remote Sens. 2015, 7, 6380–6413.
  48. Li, Z.; Hayward, R.; Zhang, J.; Liu, Y. Individual Tree Crown Delineation Techniques for Vegetation Management in Power Line Corridor. In Proceedings of the Digital Image Computing: Techniques and Applications, Canberra, Australia, 1–3 December 2008; pp. 148–154.
  49. Vauhkonen, J.; Ene, L.; Gupta, S.; Heinzel, J.; Holmgren, J.; Pitkänen, J.; Solberg, S.; Wang, Y.; Weinacker, H.; Hauglin, K.M.; et al. Comparative testing of single-tree detection algorithms under different types of forest. For.: Int. J. For. Res. 2011, 85, 27–40.
  50. Kwak, D.-A.; Lee, W.-K.; Lee, J.-H.; Biging, G.S.; Gong, P. Detection of individual trees and estimation of tree height using LiDAR data. J. Forest Res. 2007, 12, 425–434.
  51. Larsen, M.; Eriksson, M.; Descombes, X.; Perrin, G.; Brandtberg, T.; Gougeon, F.A. Comparison of six individual tree crown detection algorithms evaluated under varying forest conditions. Int. J. Remote Sens. 2011, 32, 5827–5852.
  52. Jayathunga, S.; Owari, T.; Tsuyuki, S. Evaluating the Performance of Photogrammetric Products Using Fixed-Wing UAV Imagery over a Mixed Conifer-Broadleaf Forest: Comparison with Airborne Laser Scanning. Remote Sens. 2018, 10, 187.
  53. Vastaranta, M.; Wulder, M.A.; White, J.C.; Pekkarinen, A.; Tuominen, S.; Ginzler, C.; Kankare, V.; Holopainen, M.; Hyyppä, J.; Hyyppä, H. Airborne laser scanning and digital stereo imagery measures of forest structure: Comparative results and implications to forest mapping and inventory update. Can. J. Remote Sens. 2013, 39, 382–395.
  54. Habib, A.; Bang, K.I.; Kersting, A.P.; Lee, D.-C. Error budget of LiDAR systems and quality control of the derived data. Photogramm. Eng. Remote Sens. 2009, 75, 1093–1108.
  55. Pla, M.; Duane, A.; Brotons, L. Potential of UAV images as ground-truth data for burn severity classification of Landsat imagery: Approaches to an useful product for post-fire management. Rev. Teledetección 2017, 49.
  56. Hassan, M.A.; Yang, M.; Fu, L.; Rasheed, A.; Zheng, B.; Xia, X.; Xiao, Y.; He, Z. Accuracy assessment of plant height using an unmanned aerial vehicle for quantitative genomic analysis in bread wheat. Plant Methods 2019, 15, 37.
  57. Ke, Y.; Quackenbush, L.J. Comparison of individual tree crown detection and delineation methods. In Proceedings of the ASPRS Annual Conference, Portland, OR, USA, 28 April–2 May 2008.
Figure 1. Workflow of the method: (a) Tree detection; (b) Adjusted Euclidean allocation; (c) Euclidean direction and region growing; (d) Result and accuracy assessment.
Figure 2. Study site at College Woods, New Hampshire, U.S.A.
Figure 3. Examples of treetop detection: (a–d) are four examples of detected treetops showing TP, FN, and FP.
Figure 4. Segmentation accuracy of individual tree crowns (ITC) under different global thresholds.
Figure 5. Comparison of the results of experiments #1 and #2 with the reference crowns for eight individual trees (a–h).
Table 1. Tree detection error matrix (TP = true positive, FN = false negative, FP = false positive, TN = true negative).

                        Reference Data
                        Positive    Negative
Detected    Positive    TP          FP
            Negative    FN          TN

r = TP / (TP + FN)
p = TP / (TP + FP)
F = (2 × r × p) / (r + p)

Table 2. Tree detection error matrix.

                        Reference Data
                        Positive    Negative
Detected    Positive    878         93
            Negative    295         NA
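As a consistency check, the recall, precision, and F-score reported in the conclusions can be recomputed from the counts in Table 2 using the formulas under Table 1 (the function name below is ours, for illustration only):

```python
def detection_metrics(tp, fn, fp):
    """Recall r, precision p, and F-score from tree detection counts (Table 1)."""
    r = tp / (tp + fn)            # r = TP / (TP + FN)
    p = tp / (tp + fp)            # p = TP / (TP + FP)
    f = (2 * r * p) / (r + p)     # harmonic mean of recall and precision
    return r, p, f

# Counts from Table 2: TP = 878, FN = 295, FP = 93
r, p, f = detection_metrics(878, 295, 93)
print(round(100 * r, 2), round(100 * p, 2), round(100 * f, 2))  # → 74.85 90.42 81.9
```

These values reproduce the 74.85% recall, 90.42% precision, and 81.90% F-score reported in the conclusions.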