Article

Object-Based Approach Using Very High Spatial Resolution 16-Band WorldView-3 and LiDAR Data for Tree Species Classification in a Broadleaf Forest in Quebec, Canada

Centre D’enseignement et de Recherche en Foresterie de Sainte-Foy (CERFO), 2440 Ch. Ste-Foy, Quebec City, QC G1V 1T2, Canada
*
Author to whom correspondence should be addressed.
Remote Sens. 2020, 12(18), 3092; https://doi.org/10.3390/rs12183092
Submission received: 18 August 2020 / Revised: 16 September 2020 / Accepted: 18 September 2020 / Published: 21 September 2020
(This article belongs to the Special Issue Mapping Tree Species Diversity)

Abstract
Species identification in Quebec, Canada, is usually performed by photo-interpretation at the stand level, and often lacks the precision needed for forest management. Very high spatial resolution imagery, such as WorldView-3, and Light Detection and Ranging (LiDAR) data have the potential to overcome this issue. The main objective of this study is to map 11 tree species at the tree level using an object-based approach. For modeling, 240 variables were derived from WorldView-3 with pixel-based and arithmetic feature calculation techniques. A global approach (11 species) was compared to a hierarchical approach at two levels: (1) tree type (broadleaf/conifer) and (2) individual broadleaf (five) and conifer (six) species. Five modeling techniques were compared: support vector machine (SVM), classification and regression tree (CART), random forest (RF), k-nearest neighbors (k-NN), and linear discriminant analysis (LDA). Each model was assessed using variables derived from all 16 bands or from the first 8 bands only, with the results indicating higher accuracy for the RF technique. Higher accuracies were found using 16-band instead of 8-band derived variables for the global approach (overall accuracy (OA): 75% vs. 71%; Kappa index of agreement (KIA): 0.72 vs. 0.67) and at the tree type level (OA: 99% vs. 97%; KIA: 0.97 vs. 0.95). For individual broadleaf species, higher accuracy was found using the first 8 bands (OA: 70% vs. 68%; KIA: 0.63 vs. 0.60). No difference was found for individual conifer species (OA: 94%, KIA: 0.93). This paper demonstrates that a hierarchical classification approach gives better results for conifer species and that the first 8 bands of WorldView-3 are sufficient.

Graphical Abstract

1. Introduction

Forest characterization in Quebec, Canada, is usually based on photo-interpretation of three-dimensional appearance. This approach has been in use since the last century and is still employed for forest planning and forest composition analysis [1]. New techniques, such as image enhancement, have been developed over the years using aerial imagery and user-friendly software, and the information provided has been well accepted by and proven useful for foresters [2,3]. However, species identification with these newer methods still lacks precision and varies among photo-interpreters, mainly because this characterization is made at the stand level, as species identification at the tree level would be time consuming and expensive [3,4]. Recently, very high spatial resolution satellite imagery has become more widely available and could be used to classify tree species at the tree level across different biomes [5,6,7]. In addition, with an airborne laser scanner, or LiDAR (light detection and ranging), an infrared laser can scan the surface of the earth, generating a 3D point cloud that can be used to analyze tree structure [8,9].
Furthermore, LiDAR data allow a forest to be characterized at the tree level, which can lead to better estimation of timber volume and hence better planning by foresters [10]. Individual tree crown (ITC) segmentation has received growing attention [11,12,13,14]. Forest segmentation can be done with two different techniques: (1) point cloud-based and (2) raster-based, using the canopy height model (CHM) [8,15,16]. The first technique generally gives good results, but it is time consuming, complex and requires advanced LiDAR sensors [17]. The second technique has been studied much more, both at the stand level [18,19] and at the tree level [20,21,22], as a variety of algorithms provide rapid ITC segmentation with satisfactory results [14,16,23].
Many studies have investigated tree species mapping at the tree level in a forest environment; however, they usually process hyperspectral data [24,25,26,27,28]. Fewer studies have tried to map tree species with satellite imagery at the tree level [6,29,30,31]. Pham et al. [32] combined imagery, LiDAR and GIS topography indices, which led to better results than using a single data source. In previous projects, we used aerial hyperspectral data fused with LiDAR and GIS data to map individual tree species [4,33,34]. The results showed overall accuracies of over 93% for classifying ash and spruce against 14 other species, and accuracies of 62% and higher for classifying seven species, in an urban and in a forest environment, respectively. In the latter case, yellow birch and hemlock were the species identified with the best accuracy (mean accuracies of 77% and 83%, respectively). Both studies were carried out using an experimental hyperspectral sensor. While aerial hyperspectral data give interesting results, the complexity of the processing, as well as the high acquisition costs over large areas, must be taken into account [35].
The use of satellite multispectral imagery is relevant for tree species mapping. Indeed, satellite data have been widely used to classify tree species [30,36,37,38,39], but before the launch of the DigitalGlobe multispectral sensors, none offered very high spatial resolution (<2 m) imagery as detailed as WorldView-3. Moreover, the eight new bands in the Short-Wave Infrared (SWIR) may improve tree species classification [7,40]. In remote areas, such precise satellite images can become an alternative to aerial photography [41]. These images provide more spectral bands for analysis at a relatively competitive acquisition cost. Some studies have also combined satellite imagery with LiDAR and demonstrated that the combination can significantly increase classification accuracy [19,38,42,43], but they essentially worked at the stand level. Others have used fused data to classify tree species at the tree level using high spatial resolution sensors [32,43,44]. More recently, Li et al. [45] worked on tree species classification with WorldView-3 and LiDAR at the tree level in an urban context with isolated trees. Nevertheless, the number of tree species in those studies was limited, usually fewer than ten.
Recently, the use of machine learning techniques, including support vector machine (SVM), classification and regression tree (CART), random forest (RF), k-nearest neighbors (k-NN) and linear discriminant analysis (LDA), for classifying forest characteristics has been gaining popularity. These techniques have been widely used in remote sensing for species classification [46,47], vegetation health assessment [48,49,50,51], biomass mapping [52,53,54], wetland mapping [55,56,57] and landslide risk evaluation [58,59]. He et al. [40] also used RF in a hierarchical approach to classify tree species. However, few studies have evaluated the use of multiple techniques in a hierarchical approach at the tree level [60].
The SVM algorithm, initially suggested by Vapnik [61], maximizes the margin around the hyperplane that separates features into different domains [62]. For classes that are not linearly separable, the SVM uses a kernel function, reducing a nonlinear problem to a linear problem based on a radial basis function or Gaussian kernels. The penalty parameter (C) and the kernel parameter gamma (γ) for the radial basis function kernel should be optimized, and can heavily impact the classification accuracy when using SVM models [63]. It is C that determines the trade-off between margin maximization and training error minimization [64], while the γ parameter defines how far the influence of a single training example reaches, with low values meaning ‘far’ and high values meaning ‘close’ [65,66,67].
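The interplay of C and γ described above can be sketched with a cross-validated grid search. This is only an illustration on synthetic data, using scikit-learn as a stand-in for the R implementation cited in the text:

```python
# Illustrative grid search for the SVM penalty (C) and RBF kernel width (gamma).
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Synthetic multi-class data standing in for crown-level spectral variables
X, y = make_classification(n_samples=200, n_features=10, n_classes=3,
                           n_informative=5, random_state=0)

param_grid = {"C": [0.1, 1, 10, 100],         # margin vs. training-error trade-off
              "gamma": [0.001, 0.01, 0.1, 1]}  # reach of each training example
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5)
search.fit(X, y)
print(search.best_params_)  # combination maximizing cross-validated accuracy
```

The retained (C, γ) pair is simply the one with the highest cross-validated accuracy, mirroring the tuning strategy described in Section 2.5.2.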
The CART approach operates by recursively splitting the data until the ending points, or terminal nodes, are achieved using pre-set criteria [68]. The CART therefore begins by analyzing all explanatory variables and determining which binary division of a single explanatory variable best reduces deviance in the response variable [69]. The main elements of the CART, and of any decision tree algorithm, are: (1) rules for splitting data at a node based on the value of one variable; (2) stopping rules for deciding when a branch is terminal and cannot be split anymore; and (3) a prediction for the target variable in each terminal node.
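The three elements above (splitting rules, stopping rules, terminal-node predictions) can be seen in a minimal decision-tree sketch. Here scikit-learn's `DecisionTreeClassifier` is used as an illustration, not the exact implementation used in the study; `min_samples_leaf` acts as a stopping rule and `ccp_alpha` plays a role analogous to the complexity parameter (cp) discussed later:

```python
# Minimal CART sketch: recursive binary splitting with explicit stopping rules.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
tree = DecisionTreeClassifier(min_samples_leaf=5,   # branch is terminal below this size
                              ccp_alpha=0.01,       # cost-complexity pruning (cp analog)
                              random_state=0)
tree.fit(X, y)
print(tree.get_n_leaves(), "terminal nodes")  # each carries a class prediction
```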
Introduced by Breiman [70], RF is a classifier that evolves from decision trees; it consists of many CARTs. Each tree is trained with a randomly selected subset of the training samples and variables based on bootstrap sampling, and the final classification of a new instance is decided by a majority vote among the trees in the forest [71]. Although the classifier was originally developed within the machine learning community, interest in RF has, thanks to its accuracy, grown rapidly in ecology [72] and in the classification of remotely-sensed imagery [73].
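A minimal sketch of this ensemble, with scikit-learn standing in for the R randomForest package used in the study; `n_estimators` and `max_features` correspond to the ntree and mtry parameters tuned later, and the out-of-bag (OOB) samples give a built-in accuracy estimate:

```python
# Random forest sketch: many CARTs on bootstrap samples, majority vote at prediction.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=300, n_features=15, n_informative=6,
                           n_classes=4, random_state=0)
rf = RandomForestClassifier(n_estimators=500,   # ntree: number of trees to grow
                            max_features=3,     # mtry: variables tried at each split
                            oob_score=True, random_state=0)
rf.fit(X, y)
print(f"OOB accuracy: {rf.oob_score_:.2f}")     # error estimate without a test set
```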
The k-NN [74] algorithm is a non-parametric method that assigns to an unseen point the dominant class among its k-nearest neighbors within the training set. Unlike most other methods of classification, k-NN falls under lazy learning, which means that there is no explicit training phase before the classification. The classification with k-NN is carried out by following three steps: (1) compute a distance value between the item to be classified and every other item in the training data set; (2) choose the k-closest data points (the items with the k-lowest distances); and (3) conduct a “majority vote” among those data points to decide the final classification.
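The three steps above can be written out directly. This is a didactic numpy sketch with made-up feature values, not the R implementation used in the study:

```python
# k-NN in three steps: distances, k nearest, majority vote.
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x_new, k=3):
    # (1) distance from the new item to every training item
    dists = np.linalg.norm(X_train - x_new, axis=1)
    # (2) indices of the k-closest training points
    nearest = np.argsort(dists)[:k]
    # (3) majority vote among their labels
    return Counter(y_train[nearest]).most_common(1)[0][0]

X_train = np.array([[0.0, 0.0], [0.1, 0.2], [5.0, 5.0], [5.2, 4.9]])
y_train = np.array(["conifer", "conifer", "broadleaf", "broadleaf"])
print(knn_predict(X_train, y_train, np.array([0.2, 0.1])))  # -> conifer
```

Because there is no training phase, all the computation happens at prediction time, which is what makes k-NN a "lazy" learner.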
LDA has been widely used in various tree species classification studies [75,76,77,78]. LDA projects the original features onto a lower dimensional space by means of three steps [79]: (1) calculate the separability between different classes, called the between-class variance; (2) calculate the distance between the mean and sample of each class, called the within-class variance; and (3) construct the lower dimensional space which maximizes the between-class variance and minimizes the within-class variance.
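The projection described above can be sketched in a few lines, using scikit-learn as a stand-in for the SAS Stepdisc/Discrim procedures used in this study. With c classes, LDA yields at most c − 1 discriminant axes:

```python
# LDA sketch: project features onto axes that maximize between-class
# and minimize within-class variance.
from sklearn.datasets import load_iris
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

X, y = load_iris(return_X_y=True)          # 4 features, 3 classes
lda = LinearDiscriminantAnalysis(n_components=2)
X_proj = lda.fit_transform(X, y)           # 4 features -> 2 discriminants
print(X_proj.shape)                        # (150, 2)
```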
The main objective of this study is to map 11 tree species using an object-based approach with WorldView-3 imagery and LiDAR data. In contrast to the pixel-based approach [80], object-based image analysis groups homogeneous pixels into meaningful objects based on their spectral values, which can then be analyzed by their shape, size, texture and contextual information [19]. We implemented modeling techniques in a global and a hierarchical approach. More specifically, this study aims to (1) delineate ITCs using fused data (WorldView-3 imagery and LiDAR data); (2) compare models at each classification level (global and hierarchical); (3) evaluate classification improvement using 16-band instead of 8-band WorldView-3; and (4) apply the best models to map tree species over the study areas. This implies the ability to delineate ITCs in order to extract spectral signatures, and to assign a species class to each object. For ITC segmentation, we used three different techniques. We propose an ITC segmentation using fused data (CHM and satellite imagery) to refine the delineation of tree crowns [32]. The classification of species is divided into three parts, on two levels: (1) tree types and (2) broadleaf and conifer tree species. In the present study we used five different models (SVM, CART, RF, k-NN and LDA) to overcome the uncertainty derived from the use of a single model, given that the results can vary depending on the modeling technique.

2. Data and Methods

2.1. Study Areas and Data

2.1.1. Study Areas

The study was conducted on the Kenauk Nature property, which is located in the south-west region of the province of Quebec, Canada (N 45°52′1″–45°39′36″, W 74°58′22″–74°44′7″) (Figure 1). This private property is over 250 km2, has a mean elevation of 226 m and is composed of a diverse broadleaf forest including more than 25 tree species [81]. Medium slopes characterize its topography. For the purposes of the project and due to time limitations, three study areas totaling 26.1 km2 were selected to collect field data, train and apply the models. Those areas contain mature forest stands composed of dominant tree species with diverse structural stands and topography.

2.1.2. Imagery and Airborne Laser Scanner Data

Very high spatial resolution 16-band WorldView-3 imagery was acquired for the study areas on 26 August 2016 with a nadir view angle of 12.9° and a solar elevation of 54.3° (Table 1). Two cloud-free images were collected and preprocessed. The images were geometrically corrected with Rational Polynomial Coefficients (RPCs), radiometrically calibrated (radiance to reflectance) and then pansharpened using the least squares algorithm [82]. No atmospheric correction or haze reduction was applied, since many artifacts were produced, and this preprocessing has proven to be less effective than pansharpening for tree classification purposes using high spatial resolution imagery [6,30,82]. The orthorectification was done using a 5 m LiDAR Digital Elevation Model with OrthoEngine of PCI Geomatica [6,7]. For the purpose of the study, all sixteen bands were rescaled to 30 cm, similarly to Li et al. [45]. Finally, the images were mosaicked with a bundle adjustment and then fitted with the CHM using 100 tie points to reduce the offset in the canopy [83]. A second-order polynomial regression was used to create the final mosaic, with a root mean square error of 0.97 m. Inspired by Zhou and Qiu [84] and Hartling et al. [31], deep shadow was extracted from the mosaic using a maximum likelihood classification with a shadow index [85]. The Bhattacharyya index showed a separability of 1.94 for detecting deep shadow against other elements. This result was used to mask the mosaic for subsequent processes. These preprocessing steps were carried out using PCI Geomatica (version 2016) and ENVI (version 5.4).
LiDAR data were acquired on 17 June 2015, with a point density of 10 pts/m2. The sensor used was a Riegl Q-780 system with a pulse repetition frequency of 400 kHz and a laser wavelength of 1064 nm. For the acquisition, a Cessna 172 airplane was flown at a mean height of 1200 m above ground level with a flight speed of 185 km/h. The absolute accuracy in xyz was 30 cm. The point cloud was classified by the provider into five classes: Unassigned, Ground, High vegetation, Building and Water. We used the Ground class (lowest points) to create a 50-cm Digital Elevation Model (DEM) and the High vegetation class (highest points) to produce a 50-cm Digital Surface Model (DSM). We subtracted the DEM from the DSM to obtain the 50-cm CHM. This procedure was performed using the LAS Dataset To Raster function and the Raster Calculator tool in ArcGIS Desktop 10.6.
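The CHM computation is a simple raster subtraction, CHM = DSM − DEM. The study used ArcGIS raster tools; the following numpy sketch on a toy 2 × 2 grid with made-up elevations only illustrates the operation:

```python
# CHM = DSM - DEM on a toy grid (illustration of the ArcGIS raster calculation).
import numpy as np

dem = np.array([[210.0, 211.0],    # ground elevation (m)
                [212.0, 213.0]])
dsm = np.array([[228.5, 211.0],    # top-of-canopy surface (m)
                [231.0, 230.5]])

chm = dsm - dem                    # canopy height above ground (m)
chm[chm < 0] = 0                   # clamp any negative noise to ground level
print(chm)
```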

2.2. Field Survey and Data Collection

A total of 515 dominant trees were identified and positioned in the study areas using a high precision Trimble Pro6H GPS with a mean precision of 1 m after post-processing. We targeted individual trees or groups of trees presenting the same species with a minimum crown diameter of 5 m. Geographical coordinates and physical parameters, such as tree heights and crown sizes, were measured or interpreted from WorldView-3 and aerial images [1]. Based on GPS positions, a manual delineation was made by photo-interpretation to fit the crown to the correct tree on the WorldView-3 images (Figure 2) [6,7,86]. Thus, 185 broadleaf and 153 coniferous tree samples (total of 338) remained after this manual delineating exercise, as only visible and identifiable crowns were kept (Table 2). Mean crown sizes vary between species (22–85 m2) and the mean height is over 16 m in all cases.

2.3. Derived Variables

WorldView-3 imagery data were used to extract, calculate and adapt variables based on the available literature. The details of these variables are given in Appendix A. The spectral variables were computed from the 16 bands so as to cover diverse spectral elements (vegetation, wetness, openness, etc.). Spectral variables of reflectance values were calculated by pixel-based [6] and object-based methods using R [87], SAS software [88] and eCognition Developer. Object-based indices consist of a series of customized arithmetic features calculated using the mean of pixel values within an object for specific bands [16,19,80]. Arithmetic features were also calculated for the 95th percentile highest pixels within each object in order to use the brightest (sunlit) parts of each crown. Although they may be correlated with pixel-based indices, arithmetic features are quick to calculate, which is a significant advantage when working with massive datasets. Textural variables were extracted using eCognition Developer 9.2.

2.4. Tree Crown Segmentation from Fused Data

The first step of the object-based approach is to segment the territory into contextual objects, such as single tree crowns. ITC segmentation using CHM is challenging in complex forest stands [89]. We decided to analyze trees over 17 m in height to reduce the effect of understory on canopy gaps [29,90] and as a trade-off based on the field survey (see Table 2). A 2 m buffer around mature trees (>17 m) was incorporated to keep pixels that are part of the same crown but that have smaller heights; those less than 7 m in height were eliminated and used as a mask for further processing [16,29]. Preprocessed CHM is usually utilized for ITC segmentation [15,21,91,92,93]. Various spatial filters with different window sizes (two or four pixels) and shapes (rectangular or circular) were tested, and the one that best fit the tree crowns visually was selected. We then combined the original and filtered CHMs with the Jakubowski et al. [16] method to keep both the edges of the original CHM and the smooth central crown of the filtered CHM. Topological operations were also undertaken to merge small crowns [93].
Three CHMs were evaluated in this study: original, filtered and corrected. Similarly to Pham et al. [32] and Koch et al. [15], an inverse watershed segmentation algorithm was used within eCognition Developer. This algorithm uses a CHM to find local maxima and grow regions according to the heights of neighboring pixels until they touch another object [94]. Several parameters were tested in order to select the optimal values for the height differences between local maxima and minima and for object areas, using trial and error with various combinations. A neighborhood of eight pixels was used in order to produce disconnected objects. Considering the high resolution of the CHM (50 cm), three thresholds were selected: (1) an overflow area of 25 pixels (6.25 m²); (2) an overflow depth (difference between maximum and minimum height) of 0.5 m; and (3) an overflow volume of 15 m³. Objects below those thresholds were merged with their neighboring objects.
The approach based on the CHM depends on height variation detection and is therefore related to the shape of the crowns. However, neighboring trees of different species may have overlapping branches, which can produce confusion when using imagery data to classify tree species. We added a multiresolution segmentation to refine the objects generated with the CHM by using imagery at a sublevel [5,6]. This segmentation is a bottom-up region merging technique. Combining CHM and imagery for ITC segmentation allows pure single-species objects to be delineated. The multiresolution segmentation algorithm was used with the most significant bands identified through statistical analysis, as suggested by Koch et al. [15] (Section 2.5.1). This algorithm requires several empirical parameters, including: (1) the weight of each selected band; (2) a color/shape weight associated with spectral/shape homogeneity; (3) a compactness/smoothness weight according to object shape; and (4) a scale parameter referring to the average size of objects. In this study, to integrate non-correlated visible and infrared information, and by trial and error, four bands (3, 5, 6 and 7) were selected and assigned the same weight (1). The weights of color and compactness were set to 0.3 and 0.7, respectively, in order to emphasize spectral variation while retaining compact objects such as tree crowns. A scale parameter of 100 was selected with visual assessments in order to limit under-segmentation. It has been shown that under-segmentation affects the classification accuracy more than over-segmentation [95]. Those parameters were empirically chosen to ensure the best results for ITC segmentation [19]. The performance of ITC segmentation based on CHM alone and then with imagery was assessed using 30 random non-isolated crowns with photo-interpretation.
We first evaluated if a segment represented a single crown. Considering the offset between the data and the complexity of the forest structure, a single crown was identified when the segment contained at least 75% of the corresponding tree [15]. Then, we evaluated the number of species encompassed within the segment. The ITC segmentation technique which gave the best accuracy was retained and used for classification over the study areas.

2.5. Classification Models

2.5.1. Variable Selection

In accordance with the literature, a total of 240 variables were used as predictor variables, including 64 band statistics (mean, skewness, standard deviation), 112 spectral indices and 64 textures (detailed in Appendix A). Variable selection is a crucial procedure before modeling. Using a large number of predictor variables is time consuming, requires high computing capacity, reduces reproducibility, and makes the results difficult to interpret. Furthermore, a large number of predictor variables does not necessarily produce the best results, since model performance can vary widely according to the variables used.
As a first step, outliers were removed from the dataset, identified based on the work of Brillinger [96]. We first calculated the interquartile range (IQR) of each variable. Only values falling between the median minus 2.5 times the IQR and the median plus 2.5 times the IQR were kept. A sample was removed if it was flagged as an outlier for more than eight variables.
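The outlier rule above can be sketched with numpy. This is a hedged illustration on synthetic data, not the original implementation: flag values outside median ± 2.5 × IQR per variable, then drop any sample flagged for more than eight variables:

```python
# Median +/- 2.5*IQR outlier screen, applied per variable, then per sample.
import numpy as np

def outlier_flags(X):
    q1, q3 = np.percentile(X, [25, 75], axis=0)
    iqr = q3 - q1
    med = np.median(X, axis=0)
    lower, upper = med - 2.5 * iqr, med + 2.5 * iqr
    return (X < lower) | (X > upper)        # boolean matrix: sample x variable

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))              # 100 samples, 20 variables
X[0, :12] = 50.0                            # corrupt one sample on 12 variables
keep = outlier_flags(X).sum(axis=1) <= 8    # drop if flagged on > 8 variables
print(keep[0], int(keep.sum()))             # sample 0 is dropped
```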
Secondly, to reduce the dimensionality of the data we proceeded with a correlation analysis to avoid redundant information. Pairs of variables with correlation coefficients greater than 0.85 were considered to be highly correlated, and the variable presenting the highest mean correlation with the remaining predictor variables was discarded from further analysis [97]. To simplify the analysis, those procedures were applied to parametric and non-parametric models. For all models, except LDA, further procedures were applied. At the end of the correlation analysis, this limited number of variables were introduced as input variables in the Boruta algorithm [98]. Boruta is a wrapper algorithm that seeks to capture all the important, interesting features in a dataset with respect to a variable outcome. The 15 most significant variables were selected. Finally, as machine learning algorithms do not always retain the relevant variables [99,100], we created a loop among the 15 selected variables from Boruta to evaluate the performance of models created by all possible combinations of a number of variables (3, 4, 5, 6, 7, 8, 9 and 10). For the LDA, variable selection was done with the Stepdisc procedure in SAS using the stepwise selection method and a variable entry and staying significance level of 0.005. Those iterations were processed within the training dataset. We then retained the combination that gave the best performance for each classification technique. Those procedures were independently done for 16-band and 8-band WorldView-3 derived variables.
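The correlation-pruning step can be sketched as follows. This is an illustration with pandas on made-up variables (`v1`–`v3` are hypothetical names): among pairs with |r| > 0.85, the member with the highest mean correlation to the remaining variables is discarded:

```python
# Pairwise-correlation pruning: drop one variable from each highly correlated pair.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
a = rng.normal(size=200)
df = pd.DataFrame({"v1": a,
                   "v2": a + rng.normal(scale=0.05, size=200),  # ~ duplicate of v1
                   "v3": rng.normal(size=200)})                 # independent

corr = df.corr().abs()
to_drop = set()
for i, vi in enumerate(corr.columns):
    for vj in corr.columns[i + 1:]:
        if corr.loc[vi, vj] > 0.85 and vi not in to_drop and vj not in to_drop:
            # discard whichever member is, on average, more correlated with the rest
            to_drop.add(vi if corr[vi].mean() > corr[vj].mean() else vj)

kept = [c for c in corr.columns if c not in to_drop]
print(kept)   # one of v1/v2 is dropped; v3 survives
```

The surviving variables would then feed the Boruta step described above.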

2.5.2. Modeling Process

We began by attempting to model all 11 tree species, five broadleaf and six coniferous, in a global approach. Next, we used a hierarchical approach with group classifications, similarly to Wessel et al. [60]. In the first step, we attempted to separate the two tree types (broadleaf and conifer trees). In the second step, we proceeded to the modeling of individual tree species belonging to each type. For the classification, four modeling procedures were implemented under R: RF [101], SVM [62], k-NN [102], CART [102,103] and one under SAS: LDA [76,77].
To avoid overfitting of the classification models, independent validation was conducted by dividing the available reference data into two sets, where 80% of the total samples were used for model calibration [25] and the remaining samples were used as test set (Table 2). Tuning was applied differently on each model.
To find the best gamma and penalty parameters for SVM, we used a grid search over a supplied parameter range, and the combination of parameters that maximized model performance was retained. The CART model was tuned with different complexity parameter (cp) values, tested using cross-validation; the best cp was defined as the one that maximized the cross-validation accuracy. For the RF model, we selected the number of trees to grow (ntree) as the number from which the error converged and remained stable based on the out-of-bag (OOB) error. Careful selection of this parameter is key, as we want enough trees to stabilize the error but not so many that the ensemble becomes over-correlated. The number of variables randomly sampled as candidates at each split (mtry) was selected as the value that minimized the OOB error and maximized model performance. The neighborhood parameter (k) for the k-NN algorithm was selected as the value achieving the best accuracy when varying k. For the LDA, selected variables were included using the Discrim procedure in SAS with a parametric method based on the multivariate normal distribution within each class to derive a linear discriminant function [88]. Five models were developed in order to compare their performance using various combinations of predictor variables and tuning parameters. To determine whether the eight extra bands of WorldView-3 imagery (SWIR 1 to 8) would improve classification accuracy, those procedures were first applied with variables derived from 8-band WorldView-3 and then repeated with variables derived from 16-band WorldView-3. Once the best-performing model was identified, it was used to map the tree species' distribution throughout the study areas.
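The evaluation protocol (80/20 split, then per-model tuning on the training set only) can be sketched as follows, with scikit-learn standing in for the R/SAS implementations and k-NN used as the tuned model:

```python
# 80/20 stratified split, then cross-validated tuning of k for k-NN on the
# training set only; the held-out 20% is touched once, for the final score.
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=300, n_features=10, n_classes=3,
                           n_informative=5, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                          stratify=y, random_state=0)

search = GridSearchCV(KNeighborsClassifier(),
                      {"n_neighbors": [1, 3, 5, 7, 9]}, cv=5)
search.fit(X_tr, y_tr)                          # tuning uses training data only
print(search.best_params_,
      f"test OA: {search.score(X_te, y_te):.2f}")
```

Keeping the test set out of the tuning loop is what makes the reported accuracies an honest estimate rather than an artifact of overfitting.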

2.6. Model Performance

The classification performance of each model was assessed based on confusion matrices computed using the reference data (20%) selected as a test dataset. Comparing the observed and predicted data allowed us to assess producer and user accuracies. Overall accuracy (OA) was calculated as the proportion of correctly classified samples across all classes [104]. We calculated Cohen's Kappa index of agreement (KIA) to evaluate the possibility of an agreement occurring simply by chance [105]. The KIA is a robust statistic useful for reliability testing. Similar to correlation coefficients, it can range from −1 to +1, where 0 represents the amount of agreement expected from random chance, and 1 represents perfect agreement [106,107]. We compared all the models and selected the optimal ones, offering the highest OA and KIA.
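OA and Cohen's kappa follow directly from the confusion matrix. The numpy sketch below uses a made-up 3-class matrix purely to show the arithmetic:

```python
# OA and Cohen's kappa computed explicitly from a confusion matrix.
import numpy as np

cm = np.array([[50, 3, 2],     # rows: reference classes
               [4, 45, 6],     # columns: predicted classes
               [1, 5, 44]])

n = cm.sum()
oa = np.trace(cm) / n                                  # overall accuracy
pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2    # chance agreement
kia = (oa - pe) / (1 - pe)                             # Cohen's kappa
print(f"OA = {oa:.2f}, KIA = {kia:.2f}")
```

Producer and user accuracies would come from the same matrix as row-wise and column-wise ratios, respectively.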

3. Results

3.1. Individual Tree Crown Segmentation and Assessment

Three CHMs were used in the ITC segmentation: the original CHM, a filtered CHM and a corrected CHM. Figure 3 presents a 3D profile for each CHM. It shows that the original CHM has a high z range and the filtered CHM smooths the z variation. The corrected CHM is composed of both original local variety and filtered CHM smoothness.
As a second step, imagery was added to the ITC segmentation. The assessment shows similar results for filtered and corrected CHMs (Table 3). They produced better delineation than the original CHM for single crown (63% vs. 40%) and single species (73% vs. 70%) segmentation. Our results indicate that the use of CHMs alone for ITC segmentation can lack precision, especially when different species’ crowns can be interlaced or when their neighbors are at the same height. The combination of filtered CHM and imagery showed the best result for single crown delineation (68%). Combinations of original and corrected CHMs with imagery indicated that 56% and 64% of the objects fitted a single crown, respectively, showing over-segmented crowns. ITC segmentation assessment showed that the best results for single species were found by combining a corrected CHM with imagery. For this combination, 82% of the objects represented a single species, in contrast to original (74%) and filtered (75%) combinations. For the residual objects (18%), a majority showed a single species for 50% of their area. Figure 4 shows an example of ITC segmentation using a corrected CHM combined with imagery, the combination chosen for tree mapping.

3.2. Classification and Assessment

For a visual analysis, the mean spectral values were calculated for each tree species of the training dataset (Figure 5). The species are best discriminated in the NIR region. Conifers (Figure 5B) are more separable than broadleaf species (Figure 5A), as their curves are more distinguishable.
Variable selection was conducted prior to the classification. From the 240 original derived variables, 50 and 75 were not correlated using the first eight bands and using all the bands, respectively, and were processed in the Boruta algorithm. The 15 most significant variables were then selected and used to run all possible combinations for all models except LDA. Among those 15 variables, three were especially notable: (A) the ARI_mean_95pc_higher index (Anthocyanin Reflectance Index); (B) the GLCM_Entropy_Band_7; and (C) the GMI2_mean index (Simple NIR/Red-edge Ratio) (Figure 6). The ARI (Anthocyanin Reflectance Index) was calculated with the 95th percentile highest pixels and allows an estimation of the anthocyanin (water-soluble vacuolar pigments) accumulation in intact stressed and senescing leaves [108]. Broadleaf trees have higher values than conifers, which is consistent with the fact that leaves, in contrast to needles, accumulate more anthocyanin to protect them from sunlight [109]. The GLCM (Grey Level Co-occurrence Matrix) entropy texture index was calculated using band 7 (832.5 nm); this index has a high value when all pixels are of similar magnitude [110,111]. Its values show that the Sugar Maple (SM) species presents the most uniform pattern. A crown with small bumps and shallow cavities, with an appearance similar to that of broccoli, is indeed a characteristic used in photo-interpretation to identify this species [112]. The GMI2 (Gitelson and Merzylak Index) index is a simple ratio that allows chlorophyll content estimation across a wide range of pigment variation using insensitive (B7: near-infrared) and sensitive (B6: red-edge) bands, and could be considered an improved Normalized Difference Vegetation Index (NDVI) [113]. This index shows values that vary between conifer species. White pine (WP) is the species with the highest value, indicating a higher chlorophyll content than the other conifer species [114].
For the global modeling approach, RF (ntree: 2000; mtry: 3) was selected as the best model based on the performance assessment, giving an OA of 75% and a KIA of 0.72 when using 16-band derived variables (Table 4). This performance was achieved with nine variables. SVM also performed well, with an OA of 71% and a KIA of 0.68. The k-NN, CART and LDA approaches offered less precise classifications, with OAs below 61%. The performance of all models declined or remained stable when 8-band instead of 16-band derived variables were used, except for LDA, which increased by 5% (61% to 66%).
For the hierarchical modeling approach, RF (ntree: 2000; mtry: 2) presented the best performance, with an OA of 99% and a KIA of 0.97 for tree type (broadleaf/conifer) modeling (Table 5). This performance was achieved using four variables derived from the 16-band WorldView-3 imagery. When only 8-band derived variables were used, RF and k-NN achieved the best performances (OA: 97%, KIA: 0.95); the performance of all models declined or remained stable. For broadleaf and conifer species modeling, RF (ntree: 2000; mtry: 2) gave the best performances using 8-band derived variables, with OAs of 70% and 94% obtained with six and seven variables, respectively (Table 6). For broadleaf modeling with RF, the KIA value (0.63) indicated moderate agreement on the scale presented by McHugh [106]. For conifer species, the KIA (0.93) was greater than 0.90, which is considered almost perfect agreement [106]. For RF, using 16-band instead of 8-band derived variables did not increase OAs for broadleaves (68% with 16 bands vs. 70% with 8 bands) or conifers (94% in both cases).
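The two-level logic (tree type first, then species within each type) can be sketched as follows. To keep the example dependency-free, a toy nearest-centroid classifier stands in for the random forest used in the study, and the crown features and labels are synthetic:

```python
import numpy as np

class NearestCentroid:
    """Toy stand-in for the random forest classifiers used in the study."""
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.centroids_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        return self

    def predict(self, X):
        d = np.linalg.norm(X[:, None, :] - self.centroids_[None, :, :], axis=2)
        return self.classes_[d.argmin(axis=1)]

rng = np.random.default_rng(1)
tree_type = rng.integers(0, 2, 300)                 # 0 = broadleaf, 1 = conifer
species = tree_type * 2 + rng.integers(0, 2, 300)   # four toy species, two per type
X = rng.normal(scale=0.3, size=(300, 4))            # synthetic crown features
X[:, 0] += 2.0 * tree_type                          # feature separating tree types
X[:, 1] += 2.0 * (species % 2)                      # feature separating species in a type

# Level 1: broadleaf vs. conifer.
level1 = NearestCentroid().fit(X, tree_type)
# Level 2: one species model per tree type, trained only on that type's crowns.
level2 = {t: NearestCentroid().fit(X[tree_type == t], species[tree_type == t])
          for t in (0, 1)}

def predict_hierarchical(X):
    t_hat = level1.predict(X)                       # first decide the tree type...
    out = np.empty(len(X), dtype=species.dtype)
    for t in (0, 1):
        mask = t_hat == t
        if mask.any():
            out[mask] = level2[t].predict(X[mask])  # ...then the species within it
    return out

preds = predict_hierarchical(X)
```

Each level can use its own variables and its own best-performing model, which is the main point of the hierarchical design.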
Excluding the global approach, the three selected models (tree type, broadleaf species and conifer species) used a total of 16 variables: nine spectral indices, three single bands, one standard deviation and three textures (Table 7). Of the 16 WorldView-3 bands, nine were used in the selected models (Table 8).
The following error matrices come from the best models (RF): 16-band derived variables for the global approach and tree type, and 8-band derived variables for individual species. Although the RF model presented an OA of 75% for the global approach, its per-species precision fluctuated widely: user's accuracy in the confusion matrix varied between 58% for SM and 100% for Big Tooth Aspen (BT), Eastern Hemlock (HK) and White Spruce (WS), while producer's accuracy varied between 33% for WS and 100% for HK and WP (Table 9). The tree type model (Table 10) had almost perfect results (OA: 99%), with a single error: one conifer classified as a broadleaf. For broadleaf species identification (OA: 70%) (Table 11), the user's accuracy of all species was over 70%, except Red Oak (RO) and Yellow Birch (YB) (67%); producer's accuracy was over 60% for all species. For conifer species (OA: 94%) (Table 12), all species were perfectly classified (100%), except WP (83%) for user's accuracy and Eastern White Cedar (EC) and Red Pine (RP) (80%) for producer's accuracy.
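OA, user's accuracy, producer's accuracy and KIA are all derived from the error matrix. A minimal sketch follows; the two-class counts are illustrative, echoing the single broadleaf/conifer error, not the exact matrix of Table 10:

```python
import numpy as np

def accuracy_metrics(cm):
    """cm[i, j] = crowns of reference class i predicted as class j."""
    cm = np.asarray(cm, dtype=float)
    total = cm.sum()
    oa = np.trace(cm) / total                 # overall accuracy
    producers = np.diag(cm) / cm.sum(axis=1)  # per reference class (omission view)
    users = np.diag(cm) / cm.sum(axis=0)      # per predicted class (commission view)
    # Kappa index of agreement: chance-corrected accuracy.
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / total**2
    kia = (oa - pe) / (1.0 - pe)
    return oa, producers, users, kia

# Toy 2-class matrix: rows = reference (broadleaf, conifer), one conifer
# crown labeled as broadleaf. Counts are invented for illustration.
cm = [[50, 0],
      [1, 49]]
oa, prod, user, kia = accuracy_metrics(cm)
```
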
The RF models were used to map tree types and individual broadleaf and conifer species over the study areas. Figure 7 illustrates the classification map on an island containing both tree types. Tall WP trees are especially visible on the edge of the island, while the interior is mainly composed of SM and YB. Smaller trees (<17 m) were not mapped.

4. Discussion

This study compares five different models to successfully map 11 tree species in a natural North American forest based on WorldView-3 imagery and LiDAR data. The proposed method stands out in three main aspects: (1) an object-based segmentation technique combining imagery and LiDAR; (2) a hierarchical classification approach with more than ten species; and (3) model iterations for optimal selection.
ITC segmentation is usually implemented when mapping species at the tree level, and studies have often used LiDAR data [13,119] or imagery [6,11,120]. Using only LiDAR or only imagery at the tree level results in objects with merged tree crowns [121], especially in a mature broadleaf forest like the one in the present study, so both data types can be combined to limit this effect. As an example, Heinzel and Koch [121] delineated ITCs using a pixel-based classification within the objects to avoid neighbor-tree errors. While Alonso-Benito et al. [39] used LiDAR and imagery for segmentation, they did not classify at the tree level. Koukoulas and Blackburn [83] also used both data types, but with a succession of complex GIS procedures to find treetops. The ITC segmentation proposed here follows a watershed algorithm [122] applied to LiDAR data, similarly to Weinacker et al. [93] and Koch et al. [26]. Significant bands for tree types (broadleaf and conifer) were then used to refine the segmentation with a multiresolution algorithm, as suggested by Pham et al. [32] and Koukoulas and Blackburn [83]. This approach resembles multiscale approaches for separating species in a dense and complex forest. Indeed, raster-based ITC segmentation approaches do not allow object overlaps, yet they offer a more realistic representation of a broadleaf natural forest [16]. As shown in Table 3, the results indicate that a filtered or corrected CHM delineates single crowns and species better than the original CHM (accuracy increased by at least 20% for single crowns and 3% for single species). However, adding imagery to the ITC segmentation leads to over-segmentation, creating many objects for large crowns compared to their manually delineated counterparts, which can reduce single-crown delineation accuracy.
In such a situation, one option would be to merge similar small objects [24] using spectral difference as a second step [80], although over-segmentation is generally preferred to under-segmentation [40,95]. For this assessment, no isolated tree crowns were used, which could be another reason why single-crown delineation accuracies did not exceed 70%. For single-species delineation, accuracy improved with imagery: by up to 9% for the corrected CHM, and slightly (2–4%) for the filtered and original CHMs. This could be related to the fact that ITC segmentation using the filtered CHM alone produced bigger objects, which were then divided into smaller parts that were no longer covered by a single species over at least 75% of their area.
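The watershed-style ITC idea above (treetops as CHM maxima, crowns as the pixels belonging to them, a 17 m height mask) can be sketched as follows. The cone-shaped CHM and the steepest-ascent implementation are illustrative stand-ins for the actual watershed algorithm [122], not the study's code:

```python
import numpy as np

def segment_crowns(chm, min_height=17.0):
    """Toy ITC segmentation: each pixel above `min_height` climbs its
    steepest-ascent path; pixels reaching the same local maximum form one
    crown. This mimics a watershed applied to the inverted CHM."""
    rows, cols = chm.shape
    labels = np.zeros_like(chm, dtype=int)
    peak_ids = {}

    def climb(r, c):
        # Follow the highest neighbor until a local maximum is reached.
        while True:
            best, br, bc = chm[r, c], r, c
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    rr, cc = r + dr, c + dc
                    if 0 <= rr < rows and 0 <= cc < cols and chm[rr, cc] > best:
                        best, br, bc = chm[rr, cc], rr, cc
            if (br, bc) == (r, c):
                return r, c
            r, c = br, bc

    for r in range(rows):
        for c in range(cols):
            if chm[r, c] < min_height:
                continue                      # 17 m mask: skip small trees
            peak = climb(r, c)
            labels[r, c] = peak_ids.setdefault(peak, len(peak_ids) + 1)
    return labels

# Two synthetic cone-shaped crowns on a 20 x 20 CHM (heights in meters).
yy, xx = np.mgrid[0:20, 0:20]
chm = np.maximum(25 - 2 * np.hypot(yy - 5, xx - 5),
                 25 - 2 * np.hypot(yy - 14, xx - 14))
labels = segment_crowns(chm, min_height=17.0)
```
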
The Kenauk Nature property is composed of a complex mixed forest with more than 25 tree species. A number of studies have used high spatial resolution sensors to map tree species in a natural forest environment at the tree level, but they recognized relatively few species [6,29,32,43,44,93,121]. For example, Immitzer et al. [7] classified 10 species while concentrating on pure stands for reference data, where spectral variability could be limited. The high number of species in our study area forced us to survey only the dominant ones (11). Misclassification could therefore be influenced by the complex forest environment, which made it difficult to target suitable reference data. For this reason, we manually delineated tree samples by stereo photo-interpretation to obtain reliable data, as suggested by Immitzer et al. [7].
Previous studies generally limited their classification to a global approach without newer machine learning techniques such as SVM and RF. For example, Waser et al. [6] classified seven species with an OA of 83% using LDA in a global approach. The present study demonstrates that a hierarchical classification approach is an effective procedure to classify and map 11 tree species. This approach preserves the integrity of the tested algorithms by first classifying tree types and then the individual species within each type. Our results show that the hierarchical approach performs better than a single global approach, especially for conifers, which is consistent with other studies [123]. However, the hierarchical approach needed more variables (16) than the global approach (nine). The hierarchical approach presented here also shows that testing multiple modeling techniques allows the best model, which may differ from one level to another, to be selected at each level. This approach can therefore reduce the unbalanced accuracies between species reported by studies using a global approach [7]. In our case, RF was the best model at all levels, followed by SVM. This observation contrasts with other studies working with coarser imagery, such as Sentinel-2 [60].
Another interesting element is that this approach allows relevant variables and a specific model technique to be selected for each classification level. The variables selected were not the same for broadleaf and conifer species; for example, broadleaf species are more distinguishable using texture variables because their branch structures are much more varied (Figure 6(B)). A similar technique was used by Krahwinkler and Rossmann [124], who built a binary decision tree hierarchical structure classifying each single species. Our approach offers a simpler way to classify species by type with satisfying results, and limits the hierarchical structure to two levels. Moreover, instead of using only SVM, we tried five different models to optimize the accuracy. It is worth noting that SVM and RF were generally the best algorithms according to their OA. For tree species classification, SVM is generally recognized to be more effective when working with a small number of samples and an imbalanced sample distribution [45]. It should be pointed out that ancillary variables (topographic position index, topographic wetness index, water proximity, etc.) could also improve classification accuracy [32,53,125], although it would be important to collect stratified samples evenly distributed among those variables.
Mixed improvements were observed when comparing 16-band and 8-band WorldView-3 derived variables. The additional eight bands (SWIR 1 to 8) slightly enhanced the global approach (OA: 75% vs. 71%, KIA: 0.72 vs. 0.67) and tree type classification (OA: 99% vs. 97%, KIA: 0.97 vs. 0.95), but did not improve individual species classification. This is partially consistent with other studies that observed improved classification accuracy when adding new bands, especially with a large number of tree species [7,40]. For example, Ferreira et al. [126] simulated WorldView-3 bands for tree species discrimination and found that incorporating SWIR bands significantly increased the average accuracy. Despite their low spatial resolution compared to the other multispectral bands (5.25 × 7.5 m vs. 0.84 × 1.2 m), the SWIR bands carried spectral information that was significant for certain inter-species separability; nevertheless, they should be integrated with caution when mapping smaller trees, whose crowns may be covered by only a few pixels. Adding the SWIR bands also made it possible to integrate spectral indices developed in hyperspectral studies [127,128,129,130]. Finally, the small accuracy improvement shows that it may be sufficient to use only 8-band derived variables to simplify the method.
The model iteration procedure for optimal selection is an important contribution of this study compared to similar studies, which generally integrate all variables without an oriented variable selection or use complicated methods such as linear mixed-effects modeling and genetic algorithms [32,45,131,132,133]. This selection step is essential to ensure reproducibility for operational purposes [14]. Moreover, our results showed that using fewer variables could actually improve the classification. We proposed a simple method: eliminate the inter-correlated variables, similarly to Budei et al. [14], select the 15 most significant remaining variables with the Boruta algorithm [98], and then compute all combinations to determine the one that obtains the best results with the fewest possible variables.
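The selection procedure (correlation filter, then an exhaustive search over small variable subsets) can be sketched as follows. The separability score, the 0.8 threshold and the toy data are assumptions for illustration, and the Boruta ranking step is replaced here by a plain candidate list:

```python
import itertools
import numpy as np

def drop_correlated(X, thresh=0.8):
    """Greedily drop variables whose |r| with an already-kept one exceeds thresh."""
    corr = np.abs(np.corrcoef(X, rowvar=False))
    keep = []
    for j in range(X.shape[1]):
        if all(corr[j, k] <= thresh for k in keep):
            keep.append(j)
    return keep

def best_combination(X, y, candidates, score, max_vars=4):
    """Try every subset of the candidate variables (the study used up to 15
    after Boruta) and keep the best score; strict > keeps the smaller set on ties."""
    best = (-np.inf, None)
    for size in range(1, max_vars + 1):
        for combo in itertools.combinations(candidates, size):
            s = score(X[:, list(combo)], y)
            if s > best[0]:
                best = (s, combo)
    return best

# Toy data: y depends on columns 0 and 2 only; column 1 duplicates column 0.
rng = np.random.default_rng(2)
X = rng.normal(size=(200, 5))
X[:, 1] = X[:, 0] + 0.01 * rng.normal(size=200)   # near-duplicate variable
y = (X[:, 0] + X[:, 2] > 0).astype(int)

def score(Xs, y):
    # Accuracy of the naive rule "class 1 if the selected variables sum > 0".
    return ((Xs.sum(axis=1) > 0).astype(int) == y).mean()

kept = drop_correlated(X)
best_score, best_combo = best_combination(X, y, kept, score)
```
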
Spectral variable calculation techniques are also an important aspect of this procedure. Most recent studies use a pixel-based calculation technique to derive spectral variables [6,44,45]. We used two different calculation techniques: pixel-based, and arithmetic feature (mean of all pixels, or of the 95th percentile highest pixels, within each object). For example, a tree crown can have a different NDVI value depending on whether it is calculated from the means of the red and near-infrared bands (arithmetic feature) or as the mean of the per-pixel NDVI. The arithmetic feature allows spectral variables to be calculated rapidly, while the pixel-based technique makes it possible, for example, to calculate textural variables. Indeed, an arithmetic feature has the advantage of creating variables rapidly within R or SAS instead of adding a new raster band each time, which would make massive data management difficult. Additionally, using the 95th percentile of the highest pixel values allowed us to keep the sunlit parts of crowns and thereby limit the shadow effects that could affect classification accuracy [7].
Machala et al. [19] raised concerns about using maximum values in features where objects are heterogeneous (e.g., high and low trees), but this is not the case in our study, since ITC segmentation aims for homogeneous objects. When testing correlations between both calculation techniques, we obtained high coefficients for many variables. For the arithmetic feature of NDVI (mean of all pixels within each object), we found correlations of 0.99 and 0.93 with the corresponding pixel-based and 95th-percentile variables, respectively. This approach allowed more variables to be implemented in the modeling process.
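The difference between the two calculation techniques can be made concrete for NDVI; the reflectance values below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
red = rng.uniform(0.03, 0.08, 400)   # toy crown pixels, red band
nir = rng.uniform(0.40, 0.60, 400)   # toy crown pixels, NIR band

# Pixel-based: compute NDVI for each pixel, then average over the object.
ndvi_pixel = np.mean((nir - red) / (nir + red))

# Arithmetic feature: average each band first, then apply the formula once.
ndvi_arith = (nir.mean() - red.mean()) / (nir.mean() + red.mean())

# 95th-percentile variant: keep only the brightest (sunlit) NIR pixels.
top = nir >= np.quantile(nir, 0.95)
ndvi_95 = np.mean((nir[top] - red[top]) / (nir[top] + red[top]))
```

For a homogeneous crown the two values are nearly identical, which is consistent with the high correlations (0.99 and 0.93) reported above.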
Although the proposed approach is robust for identifying 11 tree species, three main limitations were identified. The first is that the unevenly distributed samples among the 11 species made it difficult to use machine learning models such as RF correctly. This limitation was also identified by Tao et al. [134] and Farquad and Bose [135]. An unbalanced training dataset is known to favor the prediction accuracy of the dominant classes, which implies lower accuracies for the less represented classes [60]. To limit this impact, new samples could balance the dataset, but this simple solution is also the most expensive, involving additional field surveys and photo-interpretation. As suggested by Farquad and Bose [135], another solution could be to automatically over- or under-sample the dataset [136].
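Random over-sampling, one of the automatic balancing options mentioned above, can be sketched as follows (toy class sizes, not the study's sample counts):

```python
import numpy as np

def random_oversample(X, y, rng):
    """Duplicate samples of minority classes until every class matches the
    largest class size (a cheap alternative to new field surveys)."""
    classes, counts = np.unique(y, return_counts=True)
    target = counts.max()
    idx = []
    for c in classes:
        members = np.flatnonzero(y == c)
        idx.append(members)                      # keep every original sample
        extra = target - members.size
        if extra > 0:                            # resample minority classes
            idx.append(rng.choice(members, size=extra, replace=True))
    idx = np.concatenate(idx)
    return X[idx], y[idx]

rng = np.random.default_rng(4)
X = rng.normal(size=(130, 3))
y = np.repeat([0, 1, 2], [100, 20, 10])          # strongly imbalanced toy classes
Xb, yb = random_oversample(X, y, rng)
```
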
The second limitation concerns the spatial and spectral resolutions and the georeferencing of the imagery. The research presented here was based on 16-band WorldView-3 imagery. Firstly, the spectral quality could have been affected by rescaling. WorldView-3 bands have various spatial resolutions, from 0.21 m for the panchromatic band up to 7.5 m for the shortwave infrared bands. The panchromatic band ranges from 450 to 800 nm, covering the first seven bands. Although the nine other bands were out of this range, all bands were rescaled and pansharpened for methodological purposes. Those last nine bands could have been degraded, which may have affected the modeling and the reproducibility of the method [137]; to limit this impact, they should not be used for texture variables. Secondly, despite preprocessing, an offset between the imagery and the LiDAR CHM persisted (RMS: 0.97 m) and could affect the ITC segmentation and classification modeling. The alignment at ground level was almost perfect, but the misalignment of the crowns sometimes exceeded 3 m. Tilted tree crowns in the image can be used for stereo-reconstruction when at least two images are available [78,138], but using a single image led to complex situations where segmented LiDAR crowns did not match their corresponding trees in the WorldView-3 image. A digital surface model derived from LiDAR could also be used to orthorectify the image [25,29]; however, after several tests, we decided not to use this technique because it created many artefacts with high spatial resolution images such as WorldView-3. In this study, where mature trees were present throughout the area, manual points were collected to fit the CHM and thereby reduce this offset. To further limit this effect, a threshold of 17 m was set as a mask so that only tall, large trees were analyzed. The imagery was also integrated into the ITC segmentation to divide objects containing more than one species, mitigating the offset between data sources.
The third limitation concerns the fact that the territory contains more than 11 species. Because the species modeling does not cover the full diversity, a marginal species will be classified into one of the 11 modeled species classes. Also, small trees were not mapped, as a 17 m height threshold was applied. It would be interesting to integrate more species classes into the modeling, considering groups of age or height [26]. Although more species would make the model more complex, functional groups could be tested in the hierarchical approach [139], multi-temporal imagery could be used [40,41,45], or more advanced algorithms such as deep learning techniques could be applied [31]. Li et al. [45] argued that using bi-temporal WorldView imagery could improve the classification by 10.7% on average. He et al. [40] found their best results when combining late-spring, mid-summer and early-fall images. Hartling et al. [31] demonstrated that deep learning techniques could improve broadleaf species classification by at least 30% compared to RF and SVM. Adding other variables such as LiDAR metrics or topological measures could also improve the classification [8,14,16,22,39,131]. Finally, an expert procedure could be implemented to cap the number of variables selected from each category to limit over-representation [136]. For example, this procedure would avoid automatically selecting only LiDAR variables and instead allow a mix of LiDAR, spectral indices, topological variables, etc.

5. Conclusions

This study proposes a method to map individual tree species in a complex natural North American forest at the tree level, using machine learning techniques with very high spatial resolution imagery (WorldView-3). An object-based approach at multiple scales was conducted, and we found that adding spectral information to the CHM improved ITC segmentation. We successfully classified five broadleaf species and six conifer species using a hierarchical approach, with OAs of 70% and 94%, respectively. This hierarchical approach yielded better accuracies for conifers than the global approach (OA: 75%). Only sixteen variables, corresponding to nine spectral bands, were used in the three models (tree type, broadleaf and conifer). Among the five tested machine learning techniques, RF provided the best results in all cases. The method could be applied on a large scale with limited manipulations, and the resulting maps represent a valuable tool for analyzing forest composition and guiding forest planners. ITC segmentation could nevertheless be enhanced with automatic evaluation techniques allowing additional iterations. Ancillary variables such as topographic and hydrographic indices could also improve the classification accuracy. The approach could be further enriched by balancing the dataset and adding expert procedures, by integrating LiDAR metrics and multi-temporal imagery, or by combining other sensors such as hyperspectral sensors.

Author Contributions

Conceptualization, M.V.; methodology, M.V., G.J. and B.C.; software, M.V., G.J. and B.C.; validation, M.V.; formal analysis, B.C. and G.J.; writing—original draft preparation, M.V.; writing—review and editing, M.V., G.J. and B.C.; supervision, M.V.; project administration, M.V.; funding acquisition, M.V. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Canadian Wood Fiber Centre, which is part of the Canadian Forest Service in Natural Resources Canada [CWFC1718-017], in partnership with the Forest Innovation Program, and by the Ministère de l’Éducation et de l’Enseignement Supérieur du Québec.

Acknowledgments

The authors gratefully acknowledge Antoine Cullen, Martin Dupuis, Anne-Marie Dubois and Philippe Bournival for their support during the project.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Table A1. Variables’ details. For pixel-based spectral indices, the abbreviation is “variable_mean_pixel”, for spectral indices calculated with 95th percentile highest pixels, the abbreviation is “variable_mean_95pc_higher”, for arithmetic feature calculated for spectral indices, the abbreviation is “variable_mean”. Band numbers are described in Table 1.
| Type | Abbreviation | Spectral Variable | Description/Adapted Formula | Reference | Pixel-Based | Higher 95% | Arithmetic Feature | Total Number |
|---|---|---|---|---|---|---|---|---|
| Calculated on band | Band_X_mean | Band 1 to 16 | Arithmetic mean value of band X of the pixels forming the object | | | | x | 16 |
| | Standard_deviation_Band_X | Band 1 to 16 | Standard deviation of band X of the pixels forming the object | | | | x | 16 |
| | Skewness_Band_X | Band 1 to 16 | Skewness of band X of the pixels forming the object | | | | x | 16 |
| | Band_X_mean_95pc_highest | Band 1 to 16 | Arithmetic mean of the 5% highest pixel values of the object | | | x | | 16 |
| Spectral indices | ARI | Anthocyanin Reflectance Index | 1/B3 − 1/B6 | [115] | x | x | x | 3 |
| | ARI2 | Anthocyanin Reflectance Index | 1/B3 − 1/B5 | [115] | x | x | x | 3 |
| | CI | Carter Index | B7/B5 | [128] | x | x | x | 3 |
| | CRI | Carotenoid Reflectance Index | B7 * (1/B2 − 1/B3) | [140] | x | x | x | 3 |
| | CRI2 | Carotenoid Reflectance Index | 1/B2 − 1/B5 | [140] | x | x | x | 3 |
| | DRI | Datt Reflectance Index | (B7 − B14)/(B7 + B14) | [141] | x | x | x | 3 |
| | DWSI | Disease Water Stress Index | B7/B10 | [130] | x | x | x | 3 |
| | GMI1 | Simple NIR/Red-edge Ratio | B8/B6 | [113] | x | x | x | 3 |
| | GMI2 | Simple NIR/Red-edge Ratio | B7/B6 | [113] | x | x | x | 3 |
| | MSI | Moisture Stress Index | B10/B7 | [142] | x | x | x | 3 |
| | MSISR | Ratio MSI/simple ratio | (B10/B7)/(B8/B5) | [143] | x | x | x | 3 |
| | NDII | Normalized Difference Infrared Index | (B7 − B11)/(B7 + B11) | [144] | x | x | x | 3 |
| | NDLI | Normalized Difference Lignin Index | [log(1/B12) − log(1/B11)]/[log(1/B12) + log(1/B11)] | [145] | x | x | x | 3 |
| | NDNI | Normalized Difference Nitrogen Index | [log(1/B10) − log(1/B11)]/[log(1/B10) + log(1/B11)] | [145] | x | x | x | 3 |
| | NDVI1 | Normalized Difference Vegetation Index | (B7 − B5)/(B7 + B5) | [128] | x | x | x | 3 |
| | NDVI2 | Normalized Difference Vegetation Index | (B8 − B5)/(B8 + B5) | [127] | x | x | x | 3 |
| | NDWI | Normalized Difference Water Index | (B7 − B9)/(B7 + B9) | [146] | x | x | x | 3 |
| | NDWI2130 | Normalized Difference Water Index | (B7 − B13)/(B7 + B13) | [147] | x | x | x | 3 |
| | NMDI | Normalized Multi-Band Drought Index | [B7 − (B11 − B13)]/[B7 + (B11 − B13)] | [148] | x | x | x | 3 |
| | PBI | Plant Biochemical Index | B7/B3 | [129] | x | x | x | 3 |
| | PRI1 | Normalized difference Physiological Reflectance Index | (B2 − B3)/(B2 + B3) | [149,150] | x | x | x | 3 |
| | PRI2 | Normalized difference Physiological Reflectance Index | (B3 − B4)/(B3 + B4) | [149,150] | x | x | x | 3 |
| | PSRI1 | Plant Senescence Reflectance Index | (B5 − B2)/B7 | [150] | x | x | x | 3 |
| | PSRI2 | Plant Senescence Reflectance Index | (B5 − B2)/B6 | [150] | x | x | x | 3 |
| | R5R7 | Ratio of Landsat TM band 5 to band 7 | B11/B14 | [151] | x | x | x | 3 |
| | RENDVI | Red-edge Normalized Difference Vegetation Index | (B7 − B6)/(B7 + B6) | [127] | x | x | x | 3 |
| | RGR1 | Simple Red/Green ratio | B5/B2 | [150] | x | x | x | 3 |
| | SIPI | Structure Insensitive Pigment Index | (B7 − B2)/(B7 + B5) | [116,150] | x | x | x | 3 |
| | Sredgreen | Simple Red/Green ratio | B5/B3 | [117] | x | x | x | 3 |
| | SRWI | Simple Ratio Water Index | B7/B9 | [152] | x | x | x | 3 |
| | TCP_brightness | Tasseled Cap—Brightness | (B2 * 0.3029) + (B3 * 0.2786) + (B5 * 0.4733) + (B7 * 0.5599) + (B10 * 0.508) + (B14 * 0.1872) | [118] | x | x | x | 3 |
| | TCP_greeness | Tasseled Cap—Green Vegetation Index | (B2 * −0.2941) + (B3 * −0.243) + (B5 * −0.5424) + (B7 * 0.7276) + (B10 * 0.0713) + (B14 * −0.1608) | [118] | x | x | x | 3 |
| | TCP_wetness | Tasseled Cap—Wetness | (B2 * 0.1511) + (B3 * 0.1973) + (B5 * 0.3283) + (B7 * 0.3407) + (B10 * −0.7117) + (B14 * −0.4559) | [118] | x | x | x | 3 |
| | VARI | Visible Atmospherically Resistant Index | (B3 − B5)/(B5 + B3 − B2) | [140] | x | x | x | 3 |
| | Vigreen | Visible Atmospherically Resistant Indices Green | (B3 − B5)/(B5 + B3) | [140] | x | x | x | 3 |
| | WBI | Water Band Index | B7/B8 | [153] | x | x | x | 3 |
| | IHS_Hue_Band_5_3_2 | Intensity, hue, saturation (IHS) transformation | Hue calculated with B5, B3 and B2 as red, green and blue | [94] | | | x | 1 |
| | IHS_Hue_Band_7_3_2 | Intensity, hue, saturation (IHS) transformation | Hue calculated with B7, B3 and B2 as red, green and blue | [94] | | | x | 1 |
| | IHS_Sat_Band_5_3_2 | Intensity, hue, saturation (IHS) transformation | Saturation calculated with B5, B3 and B2 as red, green and blue | [94] | | | x | 1 |
| | IHS_Sat_Band_7_3_2 | Intensity, hue, saturation (IHS) transformation | Saturation calculated with B7, B3 and B2 as red, green and blue | [94] | | | x | 1 |
| Textures | GLCM_Contrast_Band_X | Band 1 to 16 | Contrast calculated with the pixels forming an object | [111] | x | | | 16 |
| | GLCM_Dissimilarity_Band_X | Band 1 to 16 | Dissimilarity calculated with the pixels forming an object | [111] | x | | | 16 |
| | GLCM_Entropy_Band_X | Band 1 to 16 | Entropy calculated with the pixels forming an object | [111] | x | | | 16 |
| | GLCM_Homogeneity_Band_X | Band 1 to 16 | Homogeneity calculated with the pixels forming an object | [111] | x | | | 16 |
| Total | | | | | | | | 232 |

References

  1. Leboeuf, A.; Vaillancourt, É. Guide de Photo-Interprétation des Essences Forestières du Québec Méridional—Édition 2015; Direction des Inventaires Forestiers du MFFP: Québec, QC, Canada, 2015; p. 72. [Google Scholar]
  2. Berger, J.-P. Norme de Stratification Écoforestière-Quatrième Inventaire Écoforestier; Comité Permanent de la Stratification Forestière de la Direction des Inventaires Forestiers du MRNFQ et Forêt Québec: Québec, QC, Canada, 2008; p. 64. ISBN 978-2-550-73857-2. [Google Scholar]
  3. Wulder, M.A.; White, J.C.; Hay, G.J.; Castilla, G. Towards automated segmentation of forest inventory polygons on high spatial resolution satellite imagery. For. Chron. 2008, 84, 221–230. [Google Scholar] [CrossRef]
  4. Varin, M.; Joanisse, G.; Ménard, P.; Perrot, Y.; Lessard, G.; Dupuis, M. Utilisation D’images Hyperspectrales en Vue de Générer une Cartographie des Espèces Forestières de Façon Automatisée; Centre D’enseignement et de Recherche en Foresterie de Sainte-Foy Inc. (CERFO): Québec, QC, Canada, 2016; p. 68. [Google Scholar]
  5. Cho, M.A.; Malahlela, O.; Ramoelo, A. Assessing the utility WorldView-2 imagery for tree species mapping in South African subtropical humid forest and the conservation implications: Dukuduku forest patch as case study. Int. J. Appl. Earth Obs. Geoinf. 2015, 38, 349–357. [Google Scholar] [CrossRef]
  6. Waser, L.T.; Küchler, M.; Jütte, K.; Stampfer, T. Evaluating the potential of worldview-2 data to classify tree species and different levels of ash mortality. Remote Sens. 2014, 6, 4515–4545. [Google Scholar] [CrossRef] [Green Version]
  7. Immitzer, M.; Atzberger, C.; Koukal, T. Tree species classification with Random forest using very high spatial resolution 8-band worldView-2 satellite data. Remote Sens. 2012, 4, 2661–2693. [Google Scholar] [CrossRef] [Green Version]
  8. Maltamo, M.; Vauhkonen, J.; Næsset, E. Forestry Applications of Airborne Laser Scanning-Concepts and Case Studies; Springer: Dordrecht, The Netherlands, 2014; Volume 32, ISBN 978-94-017-8663-8. [Google Scholar]
  9. Eid, T.; Gobakken, T.; Næsset, E. Comparing stand inventories for large areas based on photo-interpretation and laser scanning by means of cost-plus-loss analyses. Scand. J. For. Res. 2004, 19, 512–523. [Google Scholar] [CrossRef]
  10. Tompalski, P.; Coops, N.C.; White, J.C.; Wulder, M.A. Simulating the impacts of error in species and height upon tree volume derived from airborne laser scanning data. For. Ecol. Manag. 2014, 327, 167–177. [Google Scholar] [CrossRef]
  11. Gougeon, F.A.; Cormier, R.; Labrecque, P.; Cole, B.; Pitt, D.; Leckie, D. Individual Tree Crown (ITC) delineation on Ikonos and QuickBird imagery: The Cockburn Island Study. In Proceedings of the 25th Canadian Symposium on Remote Sensing, Montreal, QC, Canada, 14–16 October 2003; pp. 14–16. [Google Scholar]
  12. Ahmad Zawawi, A.; Shiba, M.; Jemali, N.J.N. Accuracy of LiDAR-based tree height estimation and crown recognition in a subtropical evergreen broad-leaved forest in Okinawa, Japan. For. Syst. 2015, 24. [Google Scholar] [CrossRef]
  13. Barnes, C.; Balzter, H.; Barrett, K.; Eddy, J.; Milner, S.; Suárez, J. Individual Tree Crown Delineation from Airborne Laser Scanning for Diseased Larch Forest Stands. Remote Sens. 2017, 9, 231. [Google Scholar] [CrossRef] [Green Version]
  14. Budei, B.C.; St-Onge, B.; Audet, F.-A.; Hopkinson, C. Identifying the genus or species of individual trees using a three-wavelength airborne lidar system. Remote Sens. Environ. 2018, 204, 632–647. [Google Scholar] [CrossRef]
  15. Koch, B.; Heyder, U.; Weinacker, H. Detection of Individual Tree Crowns in Airborne Lidar Data. Photogramm. Eng. Remote Sens. 2006, 72, 357–363. [Google Scholar] [CrossRef] [Green Version]
  16. Jakubowski, M.; Li, W.; Guo, Q.; Kelly, M. Delineating Individual Trees from Lidar Data: A Comparison of Vector- and Raster-based Segmentation Approaches. Remote Sens. 2013, 5, 4163–4186. [Google Scholar] [CrossRef] [Green Version]
  17. Rana, P.; Prieur, J.-F.; Budei, B.C.; St-Onge, B. Towards a Generalized Method for Tree Species Classification Using Multispectral Airborne Laser Scanning in Ontario, Canada. In Proceedings of the IGARSS 2018-2018 IEEE International Geoscience Remote Sensing Symposium, Valencia, Spain, 22–27 July 2018; pp. 5–8749. [Google Scholar]
  18. Diedershagen, O.; Koch, B.; Weinacker, H. Automatic segmentation and characterisation of forest stand parameters using airborne lidar data, multispectral and fogis data. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2004, 36, 208–212. [Google Scholar]
  19. Machala, M.; Zejdová, L. Forest Mapping Through Object-based Image Analysis of Multispectral and LiDAR Aerial Data. Eur. J. Remote Sens. 2014, 47, 117–131. [Google Scholar] [CrossRef]
  20. Hyyppa, J.; Kelle, O.; Lehikoinen, M.; Inkinen, M. A segmentation-based method to retrieve stem volume estimates from 3-D tree height models produced by laser scanners. IEEE Trans. Geosci. Remote Sens. 2001, 39, 969–975. [Google Scholar] [CrossRef]
  21. Gulbe, L. Identification and delineation of individual tree crowns using Lidar and multispectral data fusion. In Proceedings of the 2015 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Milano, Italy, 26–31 July 2015; pp. 3294–3297. [Google Scholar]
  22. Sačkov, I.; Sedliak, M.; Kulla, L.; Bucha, T. Inventory of Close-to-Nature Forests Based on the Combination of Airborne LiDAR Data and Aerial Multispectral Images Using a Single-Tree Approach. Forests 2017, 8, 467. [Google Scholar] [CrossRef] [Green Version]
  23. Hamraz, H.; Contreras, M.A.; Zhang, J. A robust approach for tree segmentation in deciduous forests using small-footprint airborne LiDAR data. Int. J. Appl. Earth Obs. Geoinf. 2016, 52, 532–541. [Google Scholar] [CrossRef] [Green Version]
  24. Bunting, P.; Lucas, R. The delineation of tree crowns in Australian mixed species forests using hyperspectral Compact Airborne Spectrographic Imager (CASI) data. Remote Sens. Environ. 2006, 101, 230–248. [Google Scholar] [CrossRef]
  25. Dalponte, M.; Orka, H.O.; Gobakken, T.; Gianelle, D.; Naesset, E. Tree Species Classification in Boreal Forests With Hyperspectral Data. Geosci. Remote Sens. IEEE Trans. 2013, 51, 2632–2645. [Google Scholar] [CrossRef]
  26. Ghosh, A.; Fassnacht, F.E.; Joshi, P.K.; Koch, B. A framework for mapping tree species combining hyperspectral and LiDAR data: Role of selected classifiers and sensor across three spatial scales. Int. J. Appl. Earth Obs. Geoinf. 2014, 26, 49–63. [Google Scholar] [CrossRef]
  27. Jones, T.G.; Coops, N.C.; Sharma, T. Assessing the utility of airborne hyperspectral and LiDAR data for species distribution mapping in the coastal Pacific Northwest, Canada. Remote Sens. Environ. 2010, 114, 2841–2852. [Google Scholar] [CrossRef]
  28. Matsuki, T.; Yokoya, N.; Iwasaki, A. Hyperspectral Tree Species Classification of Japanese Complex Mixed Forest With the Aid of Lidar Data. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2015, 8, 2177–2187. [Google Scholar] [CrossRef]
  29. Verlic, A.; Duric, N.; Kokalj, Z.; Marsetic, A.; Simoncic, P.; Ostir, K. Tree species classification using worldview-2 satellite images and laser scanning data in a natural urban forest. Šumarski List 2014, 138, 477–488. [Google Scholar]
  30. Fassnacht, F.E.; Latifi, H.; Stereńczak, K.; Modzelewska, A.; Lefsky, M.; Waser, L.T.; Straub, C.; Ghosh, A. Review of studies on tree species classification from remotely sensed data. Remote Sens. Environ. 2016, 186, 64–87. [Google Scholar] [CrossRef]
  31. Hartling, S.; Sagan, V.; Sidike, P.; Maimaitijiang, M.; Carron, J. Urban tree species classification using a worldview-2/3 and liDAR data fusion approach and deep learning. Sensors 2019, 19, 1284. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  32. Pham, L.T.H.; Brabyn, L.; Ashraf, S. Combining QuickBird, LiDAR, and GIS topography indices to identify a single native tree species in a complex landscape using an object-based classification approach. Int. J. Appl. Earth Obs. Geoinf. 2016, 50, 187–197. [Google Scholar] [CrossRef]
  33. Varin, M.; Joanisse, G.; Dupuis, M.; Perrot, Y.; Gadbois-Langevin, R.; Brochu, J.; Painchaud, L.; Chalghaf, B. Identification Semi-Automatisée D’essences Forestières à Partir D’images Hyperspectrales, Cas du Témiscamingue; Centre D’enseignement et de Recherche en Foresterie de Sainte-Foy Inc. (CERFO): Québec, QC, Canada, 2019; p. 10. [Google Scholar]
  34. Varin, M.; Gadbois-Langevin, R.; Joanisse, G.; Chalghaf, B.; Perrot, Y.; Marcotte, J.-M.; Painchaud, L.; Cullen, A. Approche Orientée-Objet pour Cartographier le Frêne et L’épinette en Zone Urbaine; Centre D’enseignement et de Recherche en Foresterie de Sainte-Foy Inc. (CERFO): Québec, QC, Canada, 2018; p. 8. [Google Scholar]
  35. Dalponte, M.; Bruzzone, L.; Gianelle, D. Tree species classification in the Southern Alps based on the fusion of very high geometrical resolution multispectral/hyperspectral images and LiDAR data. Remote Sens. Environ. 2012, 123, 258–270. [Google Scholar] [CrossRef]
  36. Arenas-Castro, S.; Fernández-Haeger, J.; Jordano-Barbudo, D. Evaluation and Comparison of QuickBird and ADS40-SH52 Multispectral Imagery for Mapping Iberian Wild Pear Trees (Pyrus bourgaeana, Decne) in a Mediterranean Mixed Forest. Forests 2014, 5, 1304–1330. [Google Scholar] [CrossRef]
  37. Dube, T.; Mutanga, O.; Elhadi, A.; Ismail, R. Intra-and-Inter Species Biomass Prediction in a Plantation Forest: Testing the Utility of High Spatial Resolution Spaceborne Multispectral RapidEye Sensor and Advanced Machine Learning Algorithms. Sensors 2014, 14, 15348–15370. [Google Scholar] [CrossRef] [Green Version]
  38. van Ewijk, K.Y.; Randin, C.F.; Treitz, P.M.; Scott, N.A. Predicting fine-scale tree species abundance patterns using biotic variables derived from LiDAR and high spatial resolution imagery. Remote Sens. Environ. 2014, 150, 120–131. [Google Scholar] [CrossRef]
  39. Alonso-Benito, A.; Arroyo, L.; Arbelo, M.; Hernández-Leal, P. Fusion of WorldView-2 and LiDAR Data to Map Fuel Types in the Canary Islands. Remote Sens. 2016, 8, 669. [Google Scholar] [CrossRef] [Green Version]
  40. He, Y.; Yang, J.; Caspersen, J.; Jones, T. An Operational Workflow of Deciduous-Dominated Forest Species Classification: Crown Delineation, Gap Elimination, and Object-Based Classification. Remote Sens. 2019, 11, 2078. [Google Scholar] [CrossRef] [Green Version]
  41. van Deventer, H.; Cho, M.A.; Mutanga, O. Improving the classification of six evergreen subtropical tree species with multi-season data from leaf spectra simulated to WorldView-2 and RapidEye. Int. J. Remote Sens. 2017, 38, 4804–4830. [Google Scholar] [CrossRef]
  42. Mutlu, M.; Popescu, S.C.; Stripling, C.; Spencer, T. Mapping surface fuel models using lidar and multispectral data fusion for fire behavior. Remote Sens. Environ. 2008, 112, 274–285. [Google Scholar] [CrossRef]
  43. Ke, Y.; Quackenbush, L.J.; Im, J. Synergistic use of QuickBird multispectral imagery and LIDAR data for object-based forest species classification. Remote Sens. Environ. 2010, 114, 1141–1154. [Google Scholar] [CrossRef]
  44. Fang, F.; McNeil, B.E.; Warner, T.A.; Maxwell, A.E. Combining high spatial resolution multi-temporal satellite data with leaf-on LiDAR to enhance tree species discrimination at the crown level. Int. J. Remote Sens. 2018, 39, 1–19. [Google Scholar] [CrossRef]
  45. Li, D.; Ke, Y.; Gong, H.; Li, X. Object-Based Urban Tree Species Classification Using Bi-Temporal WorldView-2 and WorldView-3 Images. Remote Sens. 2015, 7, 16917–16937. [Google Scholar] [CrossRef] [Green Version]
  46. Kukunda, C.B.; Duque-Lazo, J.; González-Ferreiro, E.; Thaden, H.; Kleinn, C. Ensemble classification of individual Pinus crowns from multispectral satellite imagery and airborne LiDAR. Int. J. Appl. Earth Obs. Geoinf. 2018, 65, 12–23. [Google Scholar] [CrossRef]
  47. Maxwell, A.E.; Warner, T.A.; Fang, F. Implementation of machine-learning classification in remote sensing: An applied review. Int. J. Remote Sens. 2018, 39, 2784–2817. [Google Scholar] [CrossRef] [Green Version]
  48. Hawryło, P.; Bednarz, B.; Wężyk, P.; Szostak, M. Estimating defoliation of Scots pine stands using machine learning methods and vegetation indices of Sentinel-2. Eur. J. Remote Sens. 2018, 51, 194–204. [Google Scholar] [CrossRef] [Green Version]
  49. Nay, J.; Burchfield, E.; Gilligan, J. A machine-learning approach to forecasting remotely sensed vegetation health. Int. J. Remote Sens. 2018, 39, 1800–1816. [Google Scholar] [CrossRef]
  50. Vaughn, R.N.; Asner, P.G.; Brodrick, G.P.; Martin, E.R.; Heckler, W.J.; Knapp, E.D.; Hughes, F.R. An Approach for High-Resolution Mapping of Hawaiian Metrosideros Forest Mortality Using Laser-Guided Imaging Spectroscopy. Remote Sens. 2018, 10, 502. [Google Scholar] [CrossRef] [Green Version]
  51. Wu, C.; Chen, W.; Cao, C.; Tian, R.; Liu, D.; Bao, D. Diagnosis of Wetland Ecosystem Health in the Zoige Wetland, Sichuan of China. Wetlands 2018, 38, 469–484. [Google Scholar] [CrossRef]
  52. Anderson, K.E.; Glenn, N.F.; Spaete, L.P.; Shinneman, D.J.; Pilliod, D.S.; Arkle, R.S.; McIlroy, S.K.; Derryberry, D.R. Estimating vegetation biomass and cover across large plots in shrub and grass dominated drylands using terrestrial lidar and machine learning. Ecol. Indic. 2018, 84, 793–802. [Google Scholar] [CrossRef]
  53. Matasci, G.; Hermosilla, T.; Wulder, M.A.; White, J.C.; Coops, N.C.; Hobart, G.W.; Zald, H.S.J. Large-area mapping of Canadian boreal forest cover, height, biomass and other structural attributes using Landsat composites and lidar plots. Remote Sens. Environ. 2018, 209, 90–106. [Google Scholar] [CrossRef]
  54. Zhang, C.; Denka, S.; Cooper, H.; Mishra, D.R. Quantification of sawgrass marsh aboveground biomass in the coastal Everglades using object-based ensemble analysis and Landsat data. Remote Sens. Environ. 2018, 204, 366–379. [Google Scholar] [CrossRef]
  55. Franklin, S.E.; Skeries, E.M.; Stefanuk, M.A.; Ahmed, O.S. Wetland classification using Radarsat-2 SAR quad-polarization and Landsat-8 OLI spectral response data: A case study in the Hudson Bay Lowlands Ecoregion. Int. J. Remote Sens. 2018, 39, 1615–1627. [Google Scholar] [CrossRef]
  56. Liu, T.; Abd-Elrahman, A.; Morton, J.; Wilhelm, V.L. Comparing fully convolutional networks, random forest, support vector machine, and patch-based deep convolutional neural networks for object-based wetland mapping using images from small unmanned aircraft system. GISci. Remote Sens. 2018, 55, 243–264. [Google Scholar] [CrossRef]
  57. Whyte, A.; Ferentinos, K.P.; Petropoulos, G.P. A new synergistic approach for monitoring wetlands using Sentinels-1 and 2 data with object-based machine learning algorithms. Environ. Model. Softw. 2018, 104, 40–54. [Google Scholar] [CrossRef] [Green Version]
  58. Ada, M.; San, B.T. Comparison of machine-learning techniques for landslide susceptibility mapping using two-level random sampling (2LRS) in Alakir catchment area, Antalya, Turkey. Nat. Hazards 2018, 90, 237–263. [Google Scholar] [CrossRef]
  59. Kalantar, B.; Pradhan, B.; Naghibi, S.A.; Motevalli, A.; Mansor, S. Assessment of the effects of training data selection on the landslide susceptibility mapping: A comparison between support vector machine (SVM), logistic regression (LR) and artificial neural networks (ANN). Geomat. Nat. Hazards Risk. 2018, 9, 49–69. [Google Scholar] [CrossRef]
  60. Wessel, M.; Brandmeier, M.; Tiede, D. Evaluation of different machine learning algorithms for scalable classification of tree types and tree species based on Sentinel-2 data. Remote Sens. 2018, 10, 1419. [Google Scholar] [CrossRef] [Green Version]
  61. Vapnik, V.N. The Nature of Statistical Learning Theory; Springer: Berlin, Germany, 1995; ISBN 0-387-94559-8. [Google Scholar]
  62. Bennett, K.P.; Campbell, C. Support Vector Machines: Hype or Hallelujah? SIGKDD Explor. Newsl. 2000, 2, 1–13. [Google Scholar] [CrossRef]
  63. Huang, C.-L.; Wang, C.-J. A GA-based feature selection and parameters optimization for support vector machines. Expert Syst. Appl. 2006, 31, 231–240. [Google Scholar] [CrossRef]
  64. Scholkopf, B.; Smola, A.J. Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond; MIT Press: Cambridge, MA, USA, 2001; p. 644. ISBN 0262194759. [Google Scholar]
  65. Pedregosa, F.; Varoquaux, G.; Gramfort, A.; Michel, V.; Thirion, B.; Grisel, O.; Blondel, M.; Prettenhofer, P.; Weiss, R.; Dubourg, V.; et al. Scikit-learn: Machine Learning in Python. J. Mach. Learn. Res. 2011, 12, 2825–2830. [Google Scholar]
  66. Melgani, F.; Bruzzone, L. Classification of hyperspectral remote sensing images with support vector machines. IEEE Trans. Geosci. Remote Sens. 2004, 42, 1778–1790. [Google Scholar] [CrossRef] [Green Version]
  67. Camps-Valls, G.; Bruzzone, L. Kernel-based methods for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2005, 43, 1351–1362. [Google Scholar] [CrossRef]
  68. Lawrence, R.L.; Wright, A. Rule-based classification systems using classification and regression tree (CART) analysis. Photogramm. Eng. Remote. Sens. 2001, 67, 1137–1142. [Google Scholar]
  69. Breiman, L.; Friedman, J.; Stone, C.J.; Olshen, R.A. Classification and Regression Trees; Taylor & Francis: Abingdon, UK, 1984; p. 368. ISBN 9780412048418. [Google Scholar]
  70. Breiman, L. Random Forests. Mach. Learn. 2001, 45, 5–32. [Google Scholar] [CrossRef] [Green Version]
  71. Mao, W.; Wang, F.-Y. (Eds.) Chapter 8-Cultural Modeling for Behavior Analysis and Prediction. In New Advances in Intelligence and Security Informatics; Academic Press: Boston, MA, USA, 2012; pp. 91–102. ISBN 978-0-12-397200-2. [Google Scholar]
  72. Cutler, D.R.; Edwards, T.C.; Beard, K.H.; Cutler, A.; Hess, K.T.; Gibson, J.; Lawler, J.J. Random forests for classification in ecology. Ecology 2007, 88, 2783–2792. [Google Scholar] [CrossRef]
  73. Kavzoglu, T. Chapter 33-Object-Oriented Random Forest for High Resolution Land Cover Mapping Using Quickbird-2 Imagery. In Handbook of Neural Computation; Samui, P., Sekhar, S., Balas, V.E., Eds.; Academic Press: Cambridge, MA, USA, 2017; pp. 607–619. ISBN 978-0-12-811318-9. [Google Scholar]
  74. Altman, N.S. An Introduction to Kernel and Nearest-Neighbor Nonparametric Regression. Am. Stat. 1992, 46, 175–185. [Google Scholar] [CrossRef] [Green Version]
  75. Meerdink, S.K.; Roberts, D.A.; Roth, K.L.; King, J.Y.; Gader, P.D.; Koltunov, A. Classifying California plant species temporally using airborne hyperspectral imagery. Remote Sens. Environ. 2019, 232, 111308. [Google Scholar] [CrossRef]
  76. Pu, R.; Landry, S.; Yu, Q. Assessing the potential of multi-seasonal high resolution Pléiades satellite imagery for mapping urban tree species. Int. J. Appl. Earth Obs. Geoinf. 2018, 71, 144–158. [Google Scholar] [CrossRef]
  77. Ferreira, M.P.; Zortea, M.; Zanotta, D.C.; Shimabukuro, Y.E.; de Souza Filho, C.R. Mapping tree species in tropical seasonal semi-deciduous forests with hyperspectral and multispectral data. Remote Sens. Environ. 2016, 179, 66–78. [Google Scholar] [CrossRef]
  78. Immitzer, M.; Stepper, C.; Böck, S.; Straub, C.; Atzberger, C. Use of WorldView-2 stereo imagery and National Forest Inventory data for wall-to-wall mapping of growing stock. For. Ecol. Manag. 2016, 359, 232–246. [Google Scholar] [CrossRef]
  79. Tharwat, A.; Gaber, T.; Ibrahim, A.; Hassanien, A.E. Linear discriminant analysis: A detailed tutorial. AI Commun. 2017, 30, 169–190. [Google Scholar] [CrossRef] [Green Version]
  80. Hidayat, S.; Matsuoka, M.; Baja, S.; Rampisela, D.A. Object-based image analysis for sago palm classification: The most important features from high-resolution satellite imagery. Remote Sens. 2018, 10, 1319. [Google Scholar] [CrossRef] [Green Version]
  81. Gosselin, J. Guide de Reconnaissance des Types Écologiques de la Région Écologique 2a–Collines de la Basse-Gatineau; Ministère des Ressources Naturelles, de la Faune et des Parcs, Forêt Québec, Direction des Inventaires Forestiers, Division de la Classification Écologique et Productivité des Stations: Québec, QC, Canada, 2004; p. 184. ISBN 2-551-22454-3. [Google Scholar]
  82. Lin, C.; Wu, C.-C.; Tsogt, K.; Ouyang, Y.-C.; Chang, C.-I. Effects of atmospheric correction and pansharpening on LULC classification accuracy using WorldView-2 imagery. Inf. Process. Agric. 2015, 2, 25–36. [Google Scholar] [CrossRef] [Green Version]
  83. Koukoulas, S.; Blackburn, G.A. Mapping individual tree location, height and species in broadleaved deciduous forest using airborne LIDAR and multi-spectral remotely sensed data. Int. J. Remote Sens. 2005, 26, 431–455. [Google Scholar] [CrossRef]
  84. Zhou, Y.; Qiu, F. Fusion of high spatial resolution WorldView-2 imagery and LiDAR pseudo-waveform for object-based image analysis. ISPRS J. Photogramm. Remote Sens. 2015, 101, 221–232. [Google Scholar] [CrossRef]
  85. Azevedo, S.C.; Silva, E.A.; Pedrosa, M. Shadow detection improvement using spectral indices and morphological operators in urban areas in high resolution images. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2015, XL-7/W3, 587–592. [Google Scholar] [CrossRef] [Green Version]
  86. Mora, B.; Wulder, M.A.; White, J.C. Identifying leading species using tree crown metrics derived from very high spatial resolution imagery in a boreal forest environment. Can. J. Remote Sens. 2010, 36, 332–344. [Google Scholar] [CrossRef]
  87. R Development Core Team. R: A Language and Environment for Statistical Computing; R Foundation for Statistical Computing: Vienna, Austria, 2018. [Google Scholar]
  88. SAS Institute Inc. (Ed.) SAS-STAT User’s Guide: Release 9.4 Edition; SAS Institute Inc.: Cary, NC, USA, 2013; ISBN 1-55544-094-0. [Google Scholar]
  89. Hill, R.A.; Thomson, A.G. Mapping woodland species composition and structure using airborne spectral and LiDAR data. Int. J. Remote Sens. 2005, 26, 3763–3779. [Google Scholar] [CrossRef]
  90. Leckie, D.; Gougeon, F.; Hill, D.; Quinn, R.; Armstrong, L.; Shreenan, R. Combined high-density lidar and multispectral imagery for individual tree crown analysis. Can. J. Remote Sens. 2003, 29, 633–649. [Google Scholar] [CrossRef]
  91. McCombs, J.W.; Roberts, S.D.; Evans, D.L. Influence of Fusing Lidar and Multispectral Imagery on Remotely Sensed Estimates of Stand Density and Mean Tree Height in a Managed Loblolly Pine Plantation. For. Sci. 2003, 49, 457–466. [Google Scholar] [CrossRef]
  92. Holmgren, J.; Persson, Å.; Söderman, U. Species identification of individual trees by combining high resolution LiDAR data with multi-spectral images. Int. J. Remote Sens. 2008, 29, 1537–1552. [Google Scholar] [CrossRef]
  93. Weinacker, H.; Koch, B.; Heyder, U.; Weinacker, R. Development of filtering, segmentation and modelling modules for lidar and multispectral data as a fundament of an automatic forest inventory system. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2004, 36, 1682–1750. [Google Scholar]
  94. Trimble Germany GmbH. Trimble Germany GmbH eCognition® Developer 9.3.2 for Windows Operating System: Reference Book; Trimble Germany GmbH: Munich, Germany, 2018; p. 510. [Google Scholar]
  95. Wang, L.; Sousa, W.P.; Gong, P. Integration of object-based and pixel-based classification for mapping mangroves with IKONOS imagery. Int. J. Remote Sens. 2004, 25, 5655–5668. [Google Scholar] [CrossRef]
  96. Brillinger, D.R. International Encyclopedia of Political Science; SAGE Publications Inc.: Thousand Oaks, CA, USA, 2011; p. 4032. [Google Scholar]
  97. Doherty, M.C. Automating the process of choosing among highly correlated covariates for multivariable logistic regression. In Proceedings of the SAS Conference Proceedings: Western Users of SAS Software 2008, Los Angeles, CA, USA, 5–7 November 2008; p. 7. [Google Scholar]
  98. Kursa, M.B.; Rudnicki, W.R. Feature Selection with the Boruta Package. J. Stat. Softw. 2010, 11. [Google Scholar] [CrossRef] [Green Version]
  99. Chen, Y.-W.; Lin, C.-J. Combining SVMs with Various Feature Selection Strategies. In Feature Extraction: Foundations and Applications; Guyon, I., Nikravesh, M., Gunn, S., Zadeh, L.A., Eds.; Springer: Berlin, Germany, 2006; pp. 315–324. ISBN 978-3-540-35488-8. [Google Scholar]
  100. Prasad, A.M.; Iverson, L.R.; Liaw, A. Newer Classification and Regression Tree Techniques: Bagging and Random Forests for Ecological Prediction. Ecosystems 2006, 9, 181–199. [Google Scholar] [CrossRef]
  101. Liaw, A.; Wiener, M. Classification and Regression by randomForest. R News 2002, 2, 18–22. [Google Scholar]
  102. Venables, W.N.; Ripley, B.D. Modern Applied Statistics with S; Springer: New York, NY, USA, 2002. [Google Scholar]
  103. Therneau, T.M.; Atkinson, E.J. An Introduction to Recursive Partitioning Using the RPART Routines; Springer: New York, NY, USA, 2015. [Google Scholar]
  104. Congalton, R.G. A review of assessing the accuracy of classifications of remotely sensed data. Remote Sens. Environ. 1991, 37, 35–46. [Google Scholar] [CrossRef]
  105. Monserud, R.A.; Leemans, R. Comparing global vegetation maps with the Kappa statistic. Ecol. Model. 1992, 62, 275–293. [Google Scholar] [CrossRef]
  106. McHugh, M.L. Interrater reliability: The kappa statistic. Biochem. Med. 2012, 22, 276–282. [Google Scholar] [CrossRef]
  107. Cohen, J. A Coefficient of Agreement for Nominal Scales. Educ. Psychol. Meas. 1960, 20, 37–46. [Google Scholar] [CrossRef]
  108. Henrich, V.; Jung, A.; Götze, C.; Sandow, C.; Thürkow, D.; Gläßer, C. Development of an online indices database: Motivation, concept and implementation. In Proceedings of the 6th EARSeL Imaging Spectroscopy SIG Workshop Innovative Tool for Scientific and Commercial Environment Applications, Tel Aviv, Israel, 16–18 March 2009. [Google Scholar]
  109. Hughes, N.M.; Smith, W.K. Seasonal Photosynthesis and Anthocyanin Production in 10 Broadleaf Evergreen Species. Funct. Plant Biol. 2007, 34, 1072–1079. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  110. Baatz, M.; Schape, A. Multiresolution segmentation—An optimization approach for high quality multi-scale image segmentation. Angew. Geogr. Inf. 2000, 12, 12–23. [Google Scholar]
  111. Haralick, R.M. Statistical image texture analysis. In Handbook of Pattern Recognition and Image Processing; Young, T.Y., Ed.; Elsevier Science: New York, NY, USA, 1986; pp. 247–279. [Google Scholar]
  112. Leboeuf, A.; Vaillancourt, É.; Morissette, A.; Pomerleau, I.; Roy, V. Photographic Interpretation Guide for Forest Species in Southern Québec; Direction des Inventaires Forestiers: Québec, QC, Canada, 2015; ISBN 9782550727873. [Google Scholar]
  113. Gitelson, A.A.; Merzlyak, M.N. Remote estimation of chlorophyll content in higher plant leaves. Int. J. Remote Sens. 1997, 18, 2691–2697. [Google Scholar] [CrossRef]
  114. Minocha, R.; Martinez, G.; Lyons, B.; Long, S. Development of a standardized methodology for quantifying total chlorophyll and carotenoids from foliage of hardwood and conifer tree species. Can. J. For. Res. 2009, 39, 849–861. [Google Scholar] [CrossRef]
  115. Gitelson, A.A.; Merzlyak, M.N.; Zur, Y.; Stark, R.H.; Gritz, U. Non-Destructive and Remote Sensing Techniques for Estimation of Vegetation Status. In Proceedings of the 3rd European Conference on Precision Agriculture, Montpelier, France, 8–11 July 2001; pp. 205–210, 273. [Google Scholar]
  116. Peñuelas, J.; Gamon, J.A.; Fredeen, A.L.; Merino, J.; Field, C.B. Reflectance indices associated with physiological changes in nitrogen- and water-limited sunflower leaves. Remote Sens. Environ. 1994, 48, 135–146. [Google Scholar] [CrossRef]
  117. Gamon, J.A.; Surfus, J.S. Assessing leaf pigment content and activity with a reflectometer. New Phytol. 1999, 143, 105–117. [Google Scholar] [CrossRef]
  118. Kauth, R.J.; Thomas, G.S. The tasselled cap-A graphic description of the spectral-temporal development of agricultural crops as seen by Landsat. In Proceedings of the Symposium on Machine Processing of Remotely Sensed Data, West Lafayette, IN, USA, 29 June–1 July 1976; Institute of Electrical and Electronics Engineers: Piscataway, NJ, USA, 1976; pp. 4B-41–4B-51. [Google Scholar]
  119. Pham, A.T.; De Grandpre, L.; Gauthier, S.; Bergeron, Y. Gap dynamics and replacement patterns in gaps of the northeastern boreal forest of Quebec. Can. J. For. Res. 2004, 34, 353–364. [Google Scholar] [CrossRef]
  120. Coops, N.C.; Wulder, M.A.; Culvenor, D.S.; St-Onge, B. Comparison of forest attributes extracted from fine spatial resolution multispectral and lidar data. Can. J. Remote Sens. 2004, 30, 855–866. [Google Scholar] [CrossRef]
  121. Heinzel, J.; Koch, B. Investigating multiple data sources for tree species classification in temperate forest and use for single tree delineation. Int. J. Appl. Earth Obs. Geoinf. 2012, 18, 101–110. [Google Scholar] [CrossRef]
  122. Soille, P. Morphological Image Analysis: Principles and Applications, 2nd ed.; Springer: Berlin, Germany, 2003; ISBN 3540429883. [Google Scholar]
  123. Lebourgeois, V.; Dupuy, S.; Vintrou, É.; Ameline, M.; Butler, S.; Bégué, A. A combined random forest and OBIA classification scheme for mapping smallholder agriculture at different nomenclature levels using multisource data (simulated Sentinel-2 time series, VHRS and DEM). Remote Sens. 2017, 9, 259. [Google Scholar] [CrossRef] [Green Version]
  124. Krahwinkler, P.; Rossmann, J. Tree species classification and input data evaluation. Eur. J. Remote Sens. 2013, 46, 535–549. [Google Scholar] [CrossRef] [Green Version]
  125. Kim, M.; Xu, B.; Madden, M. Object-based Vegetation Type Mapping from an Orthorectified Multispectral IKONOS Image using Ancillary Information. In Proceedings of the GEOBIA 2008—GEOgraphic Object Based Image Analysis for the 21st Century, Calgary, AB, Canada, 6 August 2008; pp. 1682–1777. [Google Scholar]
  126. Ferreira, M.P.; Zortea, M.; Zanotta, D.C.; Feret, J.B.; Souza Filho, C.R. On the use of shortwave infrared for tree species discrimination in tropical semideciduous forest. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2015, XL-3/W3. [Google Scholar] [CrossRef] [Green Version]
  127. Ferreira, M.P.; Zanotta, D.C.; Zortea, M.; Körting, T.S.; Fonseca, L.M.G.; Shimabukuro, Y.E.; Filho, C.R.S. Automatic tree crown delineation in tropical forest using hyperspectral data. In Proceedings of the 2014 IEEE Geoscience and Remote Sensing Symposium, Quebec City, QC, Canada, 13–18 July 2014; pp. 784–787. [Google Scholar]
  128. Cho, M.A.; Sobhan, I.; Skidmore, A.K.; Leeuw, J. Discriminating species using hyperspectral indices at leaf and canopy scales. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2008, 37, 369–376. [Google Scholar]
  129. Rama Rao, N.; Garg, P.K.; Ghosh, S.K.; Dadhwal, V.K. Estimation of leaf total chlorophyll and nitrogen concentrations using hyperspectral satellite imagery. J. Agric. Sci. 2008, 146, 65–75. [Google Scholar] [CrossRef]
  130. Apan, A.; Held, A.; Phinn, S.; Markley, J. Detecting sugarcane ‘orange rust’ disease using EO-1 Hyperion hyperspectral imagery. Int. J. Remote Sens. 2004, 25, 489–498. [Google Scholar] [CrossRef] [Green Version]
  131. Hovi, A.; Korhonen, L.; Vauhkonen, J.; Korpela, I. LiDAR waveform features for tree species classification and their sensitivity to tree- and acquisition related parameters. Remote Sens. Environ. 2016, 173, 224–237. [Google Scholar] [CrossRef]
  132. Korpela, I.; Ole Ørka, H.; Maltamo, M.; Tokola, T.; Hyyppä, J. Tree species classification using airborne LiDAR-effects of stand and tree parameters, downsizing of training set, intensity normalization, and sensor type. Silva Fenn. 2010, 44, 319–339. [Google Scholar] [CrossRef] [Green Version]
  133. Shi, L.; Wan, Y.; Gao, X.; Wang, M. Feature Selection for Object-Based Classification of High-Resolution Remote Sensing Images Based on the Combination of a Genetic Algorithm and Tabu Search. Comput. Intell. Neurosci. 2018. [Google Scholar] [CrossRef] [PubMed]
  134. Tao, Q.; Wu, G.-W.; Wang, F.-Y.; Wang, J. Posterior Probability SVM for Unbalanced Data. IEEE Trans. Neural Netw. 2005, 16, 1561–1573. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  135. Farquad, M.A.H.; Bose, I. Preprocessing unbalanced data using support vector machine. Decis. Support Syst. 2012, 53, 226–233. [Google Scholar] [CrossRef]
  136. Chalghaf, B.; Varin, M.; Joanisse, G. Cartographie Fine des Essences Individuelles par une Approche de Modélisation de type «Random Forest», à partir du lidar et de RapidEye; Rapport 2019-04; Centre D’enseignement et de Recherche en Foresterie de Sainte-Foy Inc. (CERFO): Québec, QC, Canada, 2019; p. 29. [Google Scholar]
  137. Zhang, Y. Problems in the Fusion of Commercial High-Resolution Satellite Images as Well as LANDSAT 7 Images and Initial Solutions. In Proceedings of the commission IV Symposium on Geospatial Theory, Processing and Applications, Ottawa, ON, Canada, 9–12 July 2002; pp. 587–592. [Google Scholar]
  138. St-Onge, B.; Grandin, S. Estimating the Height and Basal Area at Individual Tree and Plot Levels in Canadian Subarctic Lichen Woodlands Using Stereo WorldView-3 Images. Remote Sens. 2019, 11, 248. [Google Scholar] [CrossRef] [Green Version]
  139. Paquette, A.; Joly, S.; Messier, C. Explaining forest productivity using tree functional traits and phylogenetic information: Two sides of the same coin over evolutionary scale? Ecol. Evol. 2015, 5, 1774–1783. [Google Scholar] [CrossRef]
  140. Gitelson, A.A.; Kaufman, Y.J.; Stark, R.; Rundquist, D. Novel algorithms for remote estimation of vegetation fraction. Remote Sens. Environ. 2002, 80, 76–87. [Google Scholar] [CrossRef] [Green Version]
  141. Datt, B. A New Reflectance Index for Remote Sensing of Chlorophyll Content in Higher Plants: Tests using Eucalyptus Leaves. J. Plant Physiol. 1999, 154, 30–36. [Google Scholar] [CrossRef]
  142. Hunt, E.R.; Rock, B.N. Detection of changes in leaf water content using Near- and Middle-Infrared reflectances. Remote Sens. Environ. 1989, 30, 43–54. [Google Scholar] [CrossRef]
  143. Colombo, R.; Meroni, M.; Marchesi, A.; Busetto, L.; Rossini, M.; Giardino, C.; Panigada, C. Estimation of leaf and canopy water content in poplar plantations by means of hyperspectral indices and inverse modeling. Remote Sens. Environ. 2008, 112, 1820–1834. [Google Scholar] [CrossRef]
  144. Serrano, L.; Ustin, S.L.; Roberts, D.A.; Gamon, J.A.; Peñuelas, J. Deriving Water Content of Chaparral Vegetation from AVIRIS Data. Remote Sens. Environ. 2000, 74, 570–581. [Google Scholar] [CrossRef]
  145. Serrano, L.; Peñuelas, J.; Ustin, S.L. Remote sensing of nitrogen and lignin in Mediterranean vegetation from AVIRIS data. Remote Sens. Environ. 2002, 81, 355–364. [Google Scholar] [CrossRef]
  146. Gao, B. NDWI—A normalized difference water index for remote sensing of vegetation liquid water from space. Remote Sens. Environ. 1996, 58, 257–266. [Google Scholar] [CrossRef]
  147. Chen, D.; Huang, J.; Jackson, T.J. Vegetation water content estimation for corn and soybeans using spectral indices derived from MODIS near-and short-wave infrared bands. Remote Sens. Environ. 2005, 98, 225–236. [Google Scholar] [CrossRef]
  148. Wang, L.; Qu, J.J. NMDI: A normalized multi-band drought index for monitoring soil and vegetation moisture with satellite remote sensing. Geophys. Res. Lett. 2007, 34. [Google Scholar] [CrossRef]
  149. Peñuelas, J.; Filella, I.; Gamon, J.A. Assessment of photosynthetic radiation-use efficiency with spectral reflectance. New Phytol. 1995, 131, 291–296. [Google Scholar] [CrossRef]
  150. Sims, D.A.; Gamon, J.A. Relationships between leaf pigment content and spectral reflectance across a wide range of species, leaf structures and developmental stages. Remote Sens. Environ. 2002, 81, 337–354. [Google Scholar] [CrossRef]
  151. Elvidge, C.D.; Lyon, R.J.P. Estimation of the vegetation contribution to the 1.65/2.22 μm ratio in airborne thematic-mapper imagery of the Virginia Range, Nevada. Int. J. Remote Sens. 1985, 6, 75–88. [Google Scholar] [CrossRef]
  152. Zarco-Tejada, P.J.; Ustin, S.L. Modeling canopy water content for carbon estimates from MODIS data at land EOS validation sites. In Proceedings of the IEEE 2001 International Geoscience and Remote Sensing Symposium, Sydney, Australia, 9–13 July 2001; pp. 342–344. [Google Scholar]
  153. Penuelas, J.; Pinol, J.; Ogaya, R.; Filella, I. Estimation of plant water concentration by the reflectance Water Index WI (R900/R970). Int. J. Remote Sens. 1997, 18, 2869–2875. [Google Scholar] [CrossRef]
Figure 1. Study areas (delineated in white) at the Kenauk Nature property. The background image displays WorldView-3 in false colors (infrared, green and blue).
Figure 2. Tree crowns manually delineated by photo-interpretation and their corresponding GPS points collected during the field campaign. The background image displays WorldView-3 in false colors (infrared, green and blue). The species’ abbreviations are described in Table 2.
Figure 3. The three canopy height models (CHMs) used for individual tree crown (ITC) segmentation. (a) original CHM; (b) filtered CHM; and (c) corrected CHM.
Figure 4. Individual tree crown segmentation based on LiDAR (canopy height model) and WorldView-3 imagery. The background image displays WorldView-3 in true colors.
Figure 5. Spectral signatures for selected broadleaf (A) and conifer (B) species with 16-band WorldView-3 (WV3). The points are the mean spectral values of the training samples. The spectral signatures are separated into four parts: visible, Red-edge (RE), Near-infrared (NIR) and Short-Wave Infrared (SWIR). The species’ abbreviations and the number of training samples used to calculate the mean are described in Table 2.
Figure 6. Boxplots of the (A) ARI_mean_95pc_higher index; (B) GLCM_Entropy_Band_7; and (C) GMI2_mean index for tree types, broadleaves and conifers, respectively. The species’ abbreviations and the number of training samples used to calculate the mean are described in Table 2. The variables’ abbreviations are described in Appendix A.
Figure 7. Tree species classification map. The background image displays WorldView-3 in true colors. The species’ abbreviations are described in Table 2.
Table 1. WorldView-3 channel characteristics.
| Band | Spectrum | Wavelength Range (nm) | Wavelength Center (nm) | Spatial Resolution (m) |
|---|---|---|---|---|
| 0 | Panchromatic | 450–800 | 625 | 0.31 |
| 1 | Coastal | 400–450 | 425 | 1.26 |
| 2 | Blue | 450–510 | 480 | 1.26 |
| 3 | Green | 510–580 | 545 | 1.26 |
| 4 | Yellow | 585–625 | 605 | 1.26 |
| 5 | Red | 630–690 | 660 | 1.26 |
| 6 | Red-edge | 705–745 | 725 | 1.26 |
| 7 | Near-infrared #1 | 770–895 | 832.5 | 1.26 |
| 8 | Near-infrared #2 | 860–1040 | 950 | 1.26 |
| 9 | Short-Wave Infrared #1 | 1195–1225 | 1210 | 3.89 |
| 10 | Short-Wave Infrared #2 | 1550–1590 | 1570 | 3.89 |
| 11 | Short-Wave Infrared #3 | 1640–1680 | 1660 | 3.89 |
| 12 | Short-Wave Infrared #4 | 1710–1750 | 1730 | 3.89 |
| 13 | Short-Wave Infrared #5 | 2145–2185 | 2165 | 3.89 |
| 14 | Short-Wave Infrared #6 | 2185–2225 | 2205 | 3.89 |
| 15 | Short-Wave Infrared #7 | 2235–2285 | 2260 | 3.89 |
| 16 | Short-Wave Infrared #8 | 2295–2365 | 2330 | 3.89 |
Table 2. Distribution of the different tree species samples (train and test) and tree crown statistics for each species. BL: Broadleaf, CN: Conifer.
| Species | Acronym | Type | Mean Size (m²) | Mean Height (m) | SD Height (m) | Train | Test | Total |
|---|---|---|---|---|---|---|---|---|
| American Beech | AB | BL | 32 | 21 | 4 | 31 | 10 | 41 |
| Big Tooth Aspen | BT | BL | 42 | 25 | 4 | 13 | 5 | 18 |
| Red Oak | RO | BL | 60 | 24 | 3 | 24 | 10 | 34 |
| Sugar Maple | SM | BL | 85 | 24 | 3 | 37 | 9 | 46 |
| Yellow Birch | YB | BL | 63 | 22 | 4 | 36 | 10 | 46 |
| Balsam Fir | BF | CN | 22 | 16 | 3 | 13 | 3 | 16 |
| Eastern White Cedar | EC | CN | 31 | 21 | 5 | 16 | 5 | 21 |
| Eastern Hemlock | HK | CN | 39 | 23 | 3 | 29 | 9 | 38 |
| Red Pine | RP | CN | 59 | 28 | 3 | 15 | 5 | 20 |
| White Pine | WP | CN | 64 | 26 | 4 | 38 | 10 | 48 |
| White Spruce | WS | CN | 35 | 20 | 5 | 7 | 3 | 10 |
| Total | | | | | | 259 | 79 | 338 |
Table 3. Individual tree crown assessment comparing segmented objects produced with original, filtered or corrected canopy height model (CHM) in combination with imagery.
| | Original CHM | Original CHM + Imagery | Filtered CHM | Filtered CHM + Imagery | Corrected CHM | Corrected CHM + Imagery |
|---|---|---|---|---|---|---|
| Single crown | 40% | 56% | 60% | 68% | 63% | 64% |
| Single species | 70% | 74% | 73% | 75% | 73% | 82% |
Table 4. Summary of the performance assessment using training and test sets with five modelling techniques (RF, SVM, k-NN, CART and LDA) for global modeling. The modeling technique with the highest overall accuracy (OA) is highlighted in dark. KIA: Kappa index of agreement.
| Technique | Bands | No. of Variables | Training OA | Training KIA | Test OA | Test KIA |
|---|---|---|---|---|---|---|
| RF | 8-band | 8 | 100% | 1.00 | 71% | 0.67 |
| RF | 16-band | 9 | 100% | 1.00 | 75% | 0.72 |
| SVM | 8-band | 10 | 93% | 0.93 | 70% | 0.66 |
| SVM | 16-band | 10 | 98% | 0.98 | 71% | 0.68 |
| k-NN | 8-band | 9 | 72% | 0.68 | 41% | 0.34 |
| k-NN | 16-band | 10 | 78% | 0.76 | 48% | 0.42 |
| CART | 8-band | 8 | 74% | 0.70 | 53% | 0.48 |
| CART | 16-band | 10 | 71% | 0.68 | 53% | 0.48 |
| LDA | 8-band | 11 | 96% | 0.95 | 66% | 0.61 |
| LDA | 16-band | 11 | 95% | 0.94 | 61% | 0.56 |
Table 5. Summary of the performance assessment using training and test sets for the five modelling techniques (RF, SVM, k-NN, CART and LDA) for the tree type (broadleaf/conifer) modeling approach. For tree mapping, the selected modeling technique with the highest overall accuracy (OA) is highlighted in dark. KIA: Kappa Index of agreement.
| Technique | Bands | No. of Variables | Training OA | Training KIA | Test OA | Test KIA |
|---|---|---|---|---|---|---|
| RF | 8-band | 4 | 100% | 1.00 | 97% | 0.95 |
| RF | 16-band | 4 | 100% | 1.00 | 99% | 0.97 |
| SVM | 8-band | 10 | 100% | 1.00 | 94% | 0.87 |
| SVM | 16-band | 6 | 100% | 1.00 | 97% | 0.95 |
| k-NN | 8-band | 6 | 100% | 1.00 | 97% | 0.95 |
| k-NN | 16-band | 4 | 100% | 1.00 | 97% | 0.95 |
| CART | 8-band | 2 | 97% | 0.93 | 92% | 0.85 |
| CART | 16-band | 4 | 98% | 0.96 | 92% | 0.85 |
| LDA | 8-band | 3 | 97% | 0.93 | 96% | 0.92 |
| LDA | 16-band | 4 | 100% | 0.99 | 97% | 0.95 |
Table 6. Summary of the performance assessment using training and test sets for the five modelling techniques (RF, SVM, k-NN, CART and LDA) for broadleaf and conifer species modeling. For tree mapping, the selected modeling techniques with the highest overall accuracy (OA) for each model are highlighted in dark. KIA: Kappa Index of agreement.
| Model | Technique | Bands | No. of Variables | Training OA | Training KIA | Test OA | Test KIA |
|---|---|---|---|---|---|---|---|
| Broadleaf | RF | 8-band | 6 | 100% | 1.00 | 70% | 0.63 |
| Broadleaf | RF | 16-band | 6 | 100% | 1.00 | 68% | 0.60 |
| Broadleaf | SVM | 8-band | 10 | 96% | 0.95 | 59% | 0.49 |
| Broadleaf | SVM | 16-band | 10 | 95% | 0.94 | 68% | 0.60 |
| Broadleaf | k-NN | 8-band | 7 | 79% | 0.73 | 52% | 0.39 |
| Broadleaf | k-NN | 16-band | 6 | 83% | 0.78 | 36% | 0.03 |
| Broadleaf | CART | 8-band | 6 | 75% | 0.68 | 45% | 0.31 |
| Broadleaf | CART | 16-band | 6 | 77% | 0.70 | 59% | 0.49 |
| Broadleaf | LDA | 8-band | 10 | 94% | 0.93 | 64% | 0.53 |
| Broadleaf | LDA | 16-band | 9 | 93% | 0.91 | 61% | 0.51 |
| Conifer | RF | 8-band | 7 | 100% | 1.00 | 94% | 0.93 |
| Conifer | RF | 16-band | 7 | 100% | 1.00 | 94% | 0.93 |
| Conifer | SVM | 8-band | 10 | 96% | 0.95 | 89% | 0.85 |
| Conifer | SVM | 16-band | 9 | 100% | 1.00 | 83% | 0.78 |
| Conifer | k-NN | 8-band | 9 | 89% | 0.86 | 83% | 0.79 |
| Conifer | k-NN | 16-band | 9 | 88% | 0.84 | 89% | 0.85 |
| Conifer | CART | 8-band | 5 | 79% | 0.73 | 77% | 0.71 |
| Conifer | CART | 16-band | 7 | 81% | 0.75 | 69% | 0.60 |
| Conifer | LDA | 8-band | 8 | 99% | 0.99 | 80% | 0.74 |
| Conifer | LDA | 16-band | 9 | 100% | 1.00 | 71% | 0.63 |
Table 7. Description of the sixteen variables used in the selected models for the hierarchical approach. For spectral indices calculated with 95th percentile highest pixels, the abbreviation is “variable_mean_95pc_higher”, for arithmetic feature calculated for spectral indices, the abbreviation is “variable_mean”. Band numbers are described in Table 1.
| Abbreviation | Vegetation Index | Adapted Formula | Models | References |
|---|---|---|---|---|
| ARI_mean | Anthocyanin Reflectance Index | 1/B3_mean − 1/B6_mean | Conifer | [115] |
| ARI_mean_95pc_higher | Anthocyanin Reflectance Index | Arithmetic mean of the 5% highest pixel values of the object with ARI | Tree type | [115] |
| Band_1_mean | Layer values | Mean value of band 1 of the pixels forming the object | Broadleaf | [6] |
| Band_12_95pc_higher | Layer values | Arithmetic mean of the 5% highest pixel values of the object using band 12 | Tree type | [6] |
| Band_5_95pc_higher | Layer values | Arithmetic mean of the 5% highest pixel values of the object using band 5 | Broadleaf | [6] |
| GLCM_Entropy_Band_7 | Texture values | Entropy calculated with the value of band 7 of the pixels forming an object | Broadleaf; Conifer | [110,111] |
| GLCM_Homogeneity_Band_3 | Texture values | Homogeneity calculated with the value of band 3 of the pixels forming an object | Conifer | [110,111] |
| GLCM_Homogeneity_Band_4 | Texture values | Homogeneity calculated with the value of band 4 of the pixels forming an object | Conifer | [110,111] |
| GMI2_mean | Simple NIR/Red-edge Ratio | B7_mean/B6_mean | Conifer | [113] |
| IHS_Hue_Band_7_3_2 | Intensity, hue, saturation (IHS) transformation | Hue calculated with B7, B3 and B2 as red, green and blue | Conifer | [94,110] |
| PRI2_mean | Normalized difference Physiological Reflectance Index | (B3_mean − B4_mean)/(B3_mean + B4_mean) | Broadleaf | [116] |
| PRI2_mean_95pc_higher | Normalized difference Physiological Reflectance Index | Arithmetic mean of the 5% highest pixel values of the object with PRI2 | Broadleaf | [116] |
| Sredgreen_mean | Simple Red/Green Ratio | B5_mean/B3_mean | Conifer | [117] |
| Sredgreen_mean_95pc_higher | Simple Red/Green Ratio | Arithmetic mean of the 5% highest pixel values of the object with Sredgreen | Tree type | [117] |
| Standard_deviation_Band_3 | Layer values | Standard deviation of band 3 of the pixels forming the object | Broadleaf | [6] |
| TCP_greeness_mean | Tasselled Cap Green Vegetation Index | (B2_mean × −0.2941) + (B3_mean × −0.243) + (B5_mean × −0.5424) + (B7_mean × 0.7276) + (B10_mean × 0.0713) + (B14_mean × −0.1608) | Tree type | [118] |
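The features in Table 7 combine simple band arithmetic with a "95pc_higher" variant (the mean of the 5% highest pixel values in a crown object). A minimal sketch of these calculations, assuming hypothetical per-crown pixel arrays keyed by WorldView-3 band number (band numbering as in Table 1); array names and values are illustrative, not from the paper's code:

```python
import numpy as np

def mean_95pc_higher(pixels):
    """Arithmetic mean of the 5% highest pixel values of an object."""
    pixels = np.asarray(pixels, dtype=float)
    threshold = np.percentile(pixels, 95)
    return pixels[pixels >= threshold].mean()

def crown_indices(bands):
    """Spectral indices of Table 7 from per-band means of one crown object.

    `bands` maps a WorldView-3 band number to that band's pixel values
    inside the crown (a hypothetical data structure for this sketch).
    """
    m = {b: float(np.mean(v)) for b, v in bands.items()}  # band -> mean value
    return {
        "ARI_mean": 1.0 / m[3] - 1.0 / m[6],               # green, red-edge
        "GMI2_mean": m[7] / m[6],                          # NIR1 / red-edge
        "PRI2_mean": (m[3] - m[4]) / (m[3] + m[4]),        # green, yellow
        "Sredgreen_mean": m[5] / m[3],                     # red / green
        "TCP_greeness_mean": (m[2] * -0.2941 + m[3] * -0.243
                              + m[5] * -0.5424 + m[7] * 0.7276
                              + m[10] * 0.0713 + m[14] * -0.1608),
    }
```

The same `mean_95pc_higher` helper can be applied either to a raw band or to a per-pixel index image to obtain the `*_95pc_higher` variants.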
Table 8. The number of times WorldView-3 bands were used with the hierarchical approach for tree mapping.
| Band | B1 | B2 | B3 | B4 | B5 | B6 | B7 | B12 | B14 |
|---|---|---|---|---|---|---|---|---|---|
| Times used | 1 | 2 | 10 | 3 | 4 | 3 | 5 | 1 | 1 |
Table 9. Confusion matrix of the RF model for the global approach. AB: American Beech, BT: Big Tooth Aspen, RO: Red Oak, SM: Sugar Maple, YB: Yellow Birch, BF: Balsam Fir, EC: Eastern White Cedar, HK: Eastern Hemlock, RP: Red Pine, WP: White Pine, WS: White Spruce. OA: Overall accuracy, KIA: Kappa Index of agreement.
(Rows: prediction; columns: reference.)

| Prediction | AB | BT | RO | SM | YB | BF | EC | HK | RP | WP | WS | User's Accuracy (%) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| AB | 7 | 0 | 0 | 0 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | 78% |
| BT | 0 | 4 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 100% |
| RO | 0 | 0 | 5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 83% |
| SM | 1 | 0 | 3 | 7 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 58% |
| YB | 1 | 0 | 2 | 1 | 7 | 0 | 0 | 0 | 0 | 0 | 0 | 64% |
| BF | 0 | 0 | 0 | 0 | 0 | 2 | 0 | 0 | 0 | 0 | 1 | 67% |
| EC | 1 | 1 | 0 | 0 | 0 | 0 | 4 | 0 | 0 | 0 | 0 | 67% |
| HK | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9 | 0 | 0 | 0 | 100% |
| RP | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 3 | 0 | 1 | 60% |
| WP | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 | 10 | 0 | 77% |
| WS | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 100% |
| Producer's accuracy (%) | 70% | 80% | 50% | 78% | 70% | 67% | 80% | 100% | 60% | 100% | 33% | |

OA: 75%, KIA: 0.72
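The OA and KIA values in the confusion matrices follow the standard formulas: observed agreement (the diagonal fraction) and Cohen's Kappa, which discounts chance agreement from the row and column marginals. Recomputing them from the Table 9 matrix as a check:

```python
import numpy as np

# Table 9 confusion matrix (rows = predictions, columns = reference samples).
M = np.array([
    [7, 0, 0, 0, 2, 0, 0, 0, 0, 0, 0],   # AB
    [0, 4, 0, 0, 0, 0, 0, 0, 0, 0, 0],   # BT
    [0, 0, 5, 1, 0, 0, 0, 0, 0, 0, 0],   # RO
    [1, 0, 3, 7, 1, 0, 0, 0, 0, 0, 0],   # SM
    [1, 0, 2, 1, 7, 0, 0, 0, 0, 0, 0],   # YB
    [0, 0, 0, 0, 0, 2, 0, 0, 0, 0, 1],   # BF
    [1, 1, 0, 0, 0, 0, 4, 0, 0, 0, 0],   # EC
    [0, 0, 0, 0, 0, 0, 0, 9, 0, 0, 0],   # HK
    [0, 0, 0, 0, 0, 0, 1, 0, 3, 0, 1],   # RP
    [0, 0, 0, 0, 0, 1, 0, 0, 2, 10, 0],  # WP
    [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1],   # WS
])

n = M.sum()                                          # 79 test crowns
po = np.trace(M) / n                                 # observed agreement (OA)
pe = (M.sum(axis=1) * M.sum(axis=0)).sum() / n**2    # chance agreement
kappa = (po - pe) / (1 - pe)                         # Cohen's Kappa (KIA)
print(f"OA = {po:.0%}, KIA = {kappa:.2f}")           # OA = 75%, KIA = 0.72
```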
Table 10. Confusion matrix of the RF model for the hierarchical approach: binary classification of tree type (broadleaf/conifer). OA: Overall accuracy, KIA: Kappa Index of agreement.
(Rows: prediction; columns: reference.)

| Prediction | Broadleaf | Conifer | User's Accuracy (%) |
|---|---|---|---|
| Broadleaf | 44 | 1 | 98% |
| Conifer | 0 | 34 | 100% |
| Producer's accuracy (%) | 100% | 97% | |

OA: 99%, KIA: 0.97
Table 11. Confusion matrix of the RF model for the hierarchical approach: broadleaf species classification. AB: American Beech, BT: Big Tooth Aspen, RO: Red Oak, SM: Sugar Maple, YB: Yellow Birch. OA: Overall accuracy, KIA: Kappa Index of agreement.
(Rows: prediction; columns: reference.)

| Prediction | AB | BT | RO | SM | YB | User's Accuracy (%) |
|---|---|---|---|---|---|---|
| AB | 6 | 0 | 1 | 0 | 1 | 75% |
| BT | 1 | 5 | 0 | 0 | 1 | 71% |
| RO | 0 | 0 | 6 | 3 | 0 | 67% |
| SM | 1 | 0 | 1 | 6 | 0 | 75% |
| YB | 2 | 0 | 2 | 0 | 8 | 67% |
| Producer's accuracy (%) | 70% | 60% | 100% | 60% | 67% | |

OA: 70%, KIA: 0.63
Table 12. Confusion matrix of the RF model for the hierarchical approach: conifer species classification. BF: Balsam Fir, EC: Eastern White Cedar, HK: Eastern Hemlock, RP: Red Pine, WP: White Pine, WS: White Spruce. OA: Overall accuracy, KIA: Kappa Index of agreement.
(Rows: prediction; columns: reference.)

| Prediction | BF | EC | HK | RP | WP | WS | User's Accuracy (%) |
|---|---|---|---|---|---|---|---|
| BF | 3 | 0 | 0 | 0 | 0 | 0 | 100% |
| EC | 0 | 4 | 0 | 0 | 0 | 0 | 100% |
| HK | 0 | 0 | 9 | 0 | 0 | 0 | 100% |
| RP | 0 | 0 | 0 | 4 | 0 | 0 | 100% |
| WP | 0 | 1 | 0 | 1 | 10 | 0 | 83% |
| WS | 0 | 0 | 0 | 0 | 0 | 3 | 100% |
| Producer's accuracy (%) | 100% | 80% | 100% | 80% | 100% | 100% | |

OA: 94%, KIA: 0.93
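At prediction time, the hierarchical approach evaluated in Tables 10–12 chains its two levels: a crown is first labelled broadleaf or conifer, then passed to the matching species model. A minimal routing sketch, where the three callables are hypothetical stand-ins for the trained RF classifiers:

```python
def classify_crown(features, type_model, broadleaf_model, conifer_model):
    """Return the species label for one segmented crown.

    Level 1 decides the tree type; level 2 applies the species model
    trained only on that type (see Tables 10-12).
    """
    tree_type = type_model(features)        # level 1: 'BL' or 'CN'
    if tree_type == "BL":
        return broadleaf_model(features)    # level 2: AB, BT, RO, SM, YB
    return conifer_model(features)          # level 2: BF, EC, HK, RP, WP, WS
```

Because level-2 models never see crowns of the other type, a level-1 error (rare here, per Table 10) propagates to the final species label.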

Share and Cite

MDPI and ACS Style

Varin, M.; Chalghaf, B.; Joanisse, G. Object-Based Approach Using Very High Spatial Resolution 16-Band WorldView-3 and LiDAR Data for Tree Species Classification in a Broadleaf Forest in Quebec, Canada. Remote Sens. 2020, 12, 3092. https://doi.org/10.3390/rs12183092
