Evaluating the Potential of WorldView-3 Data to Classify Different Shoot Damage Ratios of Pinus yunnanensis

Tomicus yunnanensis Kirkendall and Faccoli and Tomicus minor Hartig have caused serious shoot damage in Yunnan pine (Pinus yunnanensis Franch.) forests in the Yunnan province of China. However, very few remote sensing studies have been conducted to detect different shoot damage ratios at the individual tree level. The aim of this study was to evaluate the suitability of eight-band WorldView-3 satellite imagery for detecting four shoot damage classes ("healthy", "slightly", "moderately", and "severely" infested). An object-based supervised classification method was used. Tree crowns were delineated on a 0.3 m pan-sharpened WorldView-3 image as reference data. Besides the original eight bands, normalized two-band indices were derived as spectral variables. Three classifiers were evaluated and compared for classifying individual trees: multinomial logistic regression (MLR), stepwise linear discriminant analysis (SDA), and random forest (RF). Results showed that the SDA classifier based on all spectral variables had the highest classification accuracy (78.33%, Kappa = 0.712). Compared to the original eight WorldView-3 bands, the normalized two-band indices improved the overall accuracy. Furthermore, the shoot damage ratio was a good indicator for distinguishing levels of damage in individual trees. We conclude that WorldView-3 satellite data are suitable for classifying different levels of damaged trees. The best map of damaged trees, predicted with the best classification model, can help forest managers take appropriate measures to decrease shoot beetle damage in Yunnan pine forests.


Introduction
Tomicus spp. are the main cause of tree mortality in the Yunnan pine forests in Southwest China. Over the past 20 years, 1.5 million hectares of Yunnan pine forests have been infested [1][2][3][4]. In the last few years, the damage rate of Yunnan pine in Dali Yunnan has rapidly increased.
In the Yunnan province, the life cycle of Tomicus yunnanensis Kirkendall and Faccoli and Tomicus minor Hartig can be roughly divided into two stages: the shoot damage stage and the trunk damage stage. In the shoot damage stage, shoot beetles attack the fresh shoots of Yunnan pines from May to November. The color of attacked shoot needles gradually changes from green to yellow and red, and the chlorophyll content and leaf moisture of damaged shoot needles change during this stage. In the trunk damage stage, beginning in December, the shoot beetles transfer from the shoots to the trunks for reproduction. Newly emerged adults attack the shoots from May of the next year [5][6][7][8][9][10]. In view of the increasing beetle damage, it is urgent to develop an effective approach to monitor Tomicus spp.
Traditional monitoring methods require foresters to investigate infested trees in the field, which is too expensive and unsuitable for monitoring large areas. Compared to traditional monitoring methods, remote sensing techniques are considered useful approaches for detecting infested pines. Several studies have used multiple remote sensing data sources to detect insect disturbance, most of which focused on the detection of mountain pine beetles (Dendroctonus ponderosae Hopkins), especially at the "red-attack" and "grey-attack" stages [11][12][13][14][15][16][17][18].
Aerial photography is used operationally for red attack surveys in British Columbia's strategic beetle management plan (British Columbia Ministry of Forests, 2003b) and by the United States Forest Service (USDA 2012a), and it has often served as reference data for lower-spatial-resolution imagery over large areas [19][20][21][22]. In addition, aerial photography with high spatial resolution can detect damage at the single-tree level [23,24]. With the improvement of image resolution, beetle damage has been successfully identified using multiple satellite imageries. Coops et al. [25] used QuickBird multispectral imagery to detect red attack damage caused by mountain pine beetle in British Columbia and found that the red-green index (RGI) was the most successful spectral index for separating non-attack crowns from red attack crowns. White et al. [26] used IKONOS imagery to detect red attack damage at low and medium levels; the results indicated that detection of red attack was most effective for larger tree crowns (diameter > 1.5 m) located less than 11 m from other red attack trees. Although QuickBird and IKONOS images, with spatial resolutions finer than 5 × 5 m per pixel, improved the ability to detect the red-attack stage, their resolution was still too limited to detect individual tree damage reliably. In recent years, satellite imagery with higher spatial resolution, such as WorldView-2 (0.5 m for the panchromatic band and 2 m for the multispectral bands), has been shown to have the potential for detecting individual damaged trees. Furthermore, WorldView-2 imagery also has a red edge band, which is useful for early detection of beetle damage [27][28][29][30][31][32][33][34][35]. Immitzer et al. [36] used WorldView-2 satellite images to detect bark beetle infestations and reported an overall accuracy of about 70% in separating green-attack trees from "healthy" trees.
Mullen [37] used WorldView-2 satellite imagery to detect the early stage of mountain pine beetles (MPB) damage with an overall accuracy of about 75% in distinguishing green-attack trees from nonattack trees.
Although most studies have detected tree mortality and green attack trees caused by trunk damage, these approaches do not address the shoot damage stage. To our knowledge, there have been no previous studies using high-resolution optical satellite data to detect the shoot damage ratio (SDR) at the individual tree level. To address this gap, the aim of this study was to evaluate the suitability of WorldView-3 (WV-3) imagery for detecting four different levels of damaged trees in a P. yunnanensis forest: healthy tree (0% ≤ SDR < 5%), slightly infested tree (5% ≤ SDR < 20%), moderately infested tree (20% ≤ SDR ≤ 50%), and severely infested tree (50% < SDR < 80%). Therefore, we explored the following: (1) the use of the eight bands of the WV-3 image; (2) the use of normalized two-band indices derived from the WV-3 image; (3) the classification accuracy and kappa of three classifiers (multinomial logistic regression, stepwise linear discriminant analysis, and random forest); and (4) the output of different shoot damage maps.

Study Area
The Yunnan pine forest in Pupeng town (25°18′ N, 100°18′ E) of Dali City in the Yunnan Province was selected as the study area, covering 5248 ha (Figure 1). The selection of the study area was based on a previous study mapping different SDRs in the area [38]. T. yunnanensis and T. minor have caused severe damage to Yunnan pine in this area since 2013. The elevation ranges from 1720 m (Yunlichang Valley) to 2745 m (Yingge Mountains). The mean annual precipitation is 783.7 mm, and the average annual temperature is 14.7 °C. The average monthly maximum and minimum temperatures are 27 °C and 7.6 °C, respectively.

Remote Sensing Data
High-spatial-resolution digital imagery was acquired on 6 April 2017 from the WV-3 satellite, which provides very high spatial resolution data with eight spectral bands. Details about the sensor are shown in Table 1. Radiometric calibration and atmospheric correction (FLAASH in ENVI 5.3) were applied to the multispectral image to convert digital number (DN) values into sensor radiance and surface reflectance. The FLAASH settings were: Atmospheric Model: Mid-Latitude Summer; Aerosol Model: Rural; Initial Visibility: 60 km. The multispectral image was then pan-sharpened with the panchromatic band using Gram-Schmidt pan sharpening. Finally, the 0.3 m pan-sharpened multispectral image was orthorectified using a 5 m digital terrain model (DTM). Studies have shown that pan-sharpened images are highly correlated with the original multispectral image [39]. In addition, fusing the coarser multispectral data with the higher-resolution panchromatic data to enhance the spatial resolution of the eight multispectral bands was beneficial for delineating tree crowns. The pan-sharpened WV-3 image was therefore used for analysis and classification.
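As an illustrative sketch of the radiometric pre-processing step (the study itself used FLAASH in ENVI 5.3 for the full atmospheric correction; the calibration coefficients below are placeholders, not actual WV-3 metadata values), the standard DN-to-top-of-atmosphere-reflectance conversion can be written as:

```python
import math

def dn_to_toa_reflectance(dn, gain, offset, esun, d_au, sun_elev_deg):
    """Convert a raw digital number to top-of-atmosphere reflectance.

    gain/offset: absolute calibration coefficients (from image metadata),
    esun: band-averaged exoatmospheric solar irradiance (W m-2 um-1),
    d_au: Earth-Sun distance in astronomical units,
    sun_elev_deg: sun elevation angle in degrees.
    """
    radiance = gain * dn + offset                   # at-sensor spectral radiance
    sun_zenith = math.radians(90.0 - sun_elev_deg)  # zenith = 90 deg - elevation
    return (math.pi * radiance * d_au ** 2) / (esun * math.cos(sun_zenith))

# Placeholder inputs for illustration only; real values come from the
# delivered image metadata.
rho = dn_to_toa_reflectance(dn=350, gain=0.3, offset=0.0,
                            esun=1550.0, d_au=1.0, sun_elev_deg=60.0)
```

FLAASH then goes one step further than this top-of-atmosphere sketch, modeling atmospheric scattering and absorption to retrieve surface reflectance.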


Reference Data
Based on the classification standard (Standard of Forest Pests Occurrence and Disaster LY/T 1681-2006) and ground surveys, we divided the damaged Yunnan pines into four damage levels (Figure 2, Table 2). The shoot damage ratio of each tree was calculated as

SDR = n_i / N_i × 100% (1)

where SDR is the single-tree damage ratio, n_i is the number of damaged shoots, and N_i is the total number of shoots.

Field investigations were conducted from March to April 2017. The WV-3 0.3 m pan-sharpened multispectral image (false color composite) was brought to the field to locate tree crowns. The SDRs of sample trees were calculated based on Equation (1). Within the study area, several damaged shoots were dissected to look for beetles or invasion holes, confirming that the shoot damage was caused by the bark beetles T. yunnanensis and T. minor. In this study, the crowns of 193 healthy trees, 183 slightly infested trees, 175 moderately infested trees, and 186 severely infested trees were delineated on the WV-3 0.3 m pan-sharpened multispectral image and converted to a polygon vector file (Figure 3).
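Equation (1) and the four class thresholds can be sketched in a few lines (a minimal illustration; the function names are ours, not from the study):

```python
def shoot_damage_ratio(n_damaged, n_total):
    """Equation (1): SDR = n_i / N_i * 100 (percent)."""
    return n_damaged / n_total * 100.0

def damage_level(sdr):
    """Map an SDR (percent) onto the four classes used in the study."""
    if sdr < 5:
        return "healthy"        # 0% <= SDR < 5%
    elif sdr < 20:
        return "slightly"       # 5% <= SDR < 20%
    elif sdr <= 50:
        return "moderately"     # 20% <= SDR <= 50%
    else:
        return "severely"       # SDR > 50%

# Example: 12 damaged shoots out of 40 gives SDR = 30%, a moderately
# infested tree.
level = damage_level(shoot_damage_ratio(12, 40))
```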


Tree Crown Extraction
We used the pan-sharpened WV-3 image to detect different levels of shoot-damaged trees with an object-based method (Figure 4). The pan-sharpened WV-3 image was subdivided into image objects through multiresolution segmentation using eCognition Developer 9.0 [40,41]. The best segmentation parameters were evaluated against the tree crown polygons of the reference data. To keep the segmented polygons consistent with the tree crowns, several levels of segmentation detail were tried iteratively while adapting the shape and compactness parameters. The final scale parameter was 30, with homogeneity criteria of 0.7 for both shape and compactness.

Shadowed image polygons were removed to ensure that only sunlit tree crowns were used in the classification. The sunlit areas were separated using intensity values I(RGB) from a hue, saturation, and intensity (HSI) transformation of the RGB bands (5-3-2); objects with values smaller than 0.0055 were assigned to shadow. This threshold was determined by visual inspection to separate shadowed and sunlit areas. Next, vegetation was extracted from the non-shadow area with an NDVI threshold, determined with a stepwise approximation method; objects with NDVI values greater than 0.55 were masked as vegetation. The polygon vector file of tree crowns was then transferred to the WV-3 segmentation image.
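The two-step shadow and vegetation masking can be sketched per pixel as follows (a simplified illustration using the paper's thresholds; the actual processing was object-based in eCognition):

```python
def is_sunlit_vegetation(red, green, blue, nir,
                         shadow_thresh=0.0055, ndvi_thresh=0.55):
    """Return True for pixels kept by the two-step mask in the study:
    (1) drop shadow via the intensity term of an HSI transform of the
        RGB composite (bands 5-3-2),
    (2) keep vegetation via an NDVI threshold.
    Threshold values follow the paper; inputs are reflectances."""
    intensity = (red + green + blue) / 3.0  # intensity component of HSI
    if intensity < shadow_thresh:
        return False                        # shadowed pixel, discarded
    ndvi = (nir - red) / (nir + red)
    return ndvi > ndvi_thresh               # keep sunlit vegetation only

# A bright conifer-like pixel passes both tests.
keep = is_sunlit_vegetation(red=0.05, green=0.08, blue=0.04, nir=0.35)
```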

Spectral Variables
Reflectance values were extracted for all eight WV-3 spectral bands based on the tree crown segment objects; the mean value for each crown object was calculated and used in further analysis. The eight bands of the pan-sharpened WV-3 image were used as the basic spectral variables. In addition, normalized two-band indices were calculated from all possible band pairs with the following equation:

NDI(x, y) = (Ry − Rx) / (Ry + Rx) (2)

where Ry is the reflectance of band y and Rx is the reflectance of band x. In our study, the eight spectral bands plus the 28 normalized two-band indices gave a total of 36 spectral variables.
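A minimal sketch of this variable construction (the band names and NDI labels below are our own; the study computed the indices from the crown mean reflectances):

```python
from itertools import combinations

def normalized_two_band_indices(reflectances):
    """Compute NDI(x, y) = (Ry - Rx) / (Ry + Rx) for every band pair.

    `reflectances` maps band name -> mean crown reflectance; 8 bands give
    C(8, 2) = 28 indices, which together with the 8 raw bands yields the
    36 spectral variables used in the study."""
    indices = {}
    for x, y in combinations(sorted(reflectances), 2):
        rx, ry = reflectances[x], reflectances[y]
        indices[f"NDI({x},{y})"] = (ry - rx) / (ry + rx)
    return indices

# Toy crown mean reflectances for the eight WV-3 bands (values illustrative).
bands = {"coastal": 0.04, "blue": 0.05, "green": 0.08, "yellow": 0.09,
         "red": 0.06, "red_edge": 0.20, "nir1": 0.35, "nir2": 0.33}
ndis = normalized_two_band_indices(bands)
n_variables = len(bands) + len(ndis)   # 8 raw bands + 28 indices = 36
```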

Variable Selection and Data Mining
In this study, three statistical approaches were used to compare the spectral responses of the level 1-4 tree crowns. In the first approach, multinomial logistic regression (MLR) was chosen because the model can be built with several spectral variables or with a single variable. MLR is an extension of binomial logistic regression and has been successfully applied in many studies [42][43][44][45][46][47]; it allows more than two categories of the response variable, such as the four damage levels in our study. A stepwise MLR method was used to minimize the models based on Akaike's information criterion (AIC).
To reduce the dimensionality of the variable space and the intercorrelation among the variables, a stepwise linear discriminant analysis (SDA) was used. In the SDA model, variables are entered step by step, selecting or removing them so as to minimize Wilks' Lambda according to their importance; variables are selected until no unimportant variables remain to be removed from the discriminant. The SDA model has been successfully applied in many studies [37,[48][49][50][51].
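As a rough illustration of the selection criterion (a univariate sketch with toy numbers; the actual SDA operates on the multivariate Wilks' Lambda, and was run in SPSS):

```python
def wilks_lambda(groups):
    """Univariate Wilks' Lambda for one candidate variable:
    within-group sum of squares divided by total sum of squares across
    the class groups (here, the four damage levels). Values near 0
    indicate strong class separation; stepwise DA adds the variable
    that minimizes Lambda at each step."""
    all_vals = [v for g in groups for v in g]
    grand_mean = sum(all_vals) / len(all_vals)
    ss_total = sum((v - grand_mean) ** 2 for v in all_vals)
    ss_within = 0.0
    for g in groups:
        m = sum(g) / len(g)
        ss_within += sum((v - m) ** 2 for v in g)
    return ss_within / ss_total

# Toy reflectances over three classes: the first variable separates the
# classes well (Lambda near 0), the second barely at all (Lambda near 1).
lam_good = wilks_lambda([[0.1, 0.12], [0.3, 0.32], [0.6, 0.62]])
lam_weak = wilks_lambda([[0.4, 0.5], [0.45, 0.52], [0.41, 0.51]])
```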
Lastly, the random forest (RF) classifier, which is frequently used for tree classification with remote sensing data [52][53][54][55][56], was applied. RF can handle both discrete and continuous data and uses bootstrap aggregation to estimate classification accuracy. It provides a ranking of variable importance based on the mean decrease in accuracy (MDA); a higher MDA indicates that a spectral variable is more important in the RF classification.
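The MDA idea can be illustrated with a permutation sketch (the threshold rule below is a toy stand-in, not the study's random forest, which was fitted with the R package "randomForest"):

```python
import random

def accuracy(classify, X, y):
    """Fraction of rows whose predicted label matches the true label."""
    return sum(classify(row) == label for row, label in zip(X, y)) / len(y)

def mean_decrease_accuracy(classify, X, y, feat, n_rep=20, seed=0):
    """Permutation-style importance, analogous to random forest MDA:
    shuffle one feature column and record the average drop in accuracy.
    A larger drop means the classifier relies more on that feature."""
    rng = random.Random(seed)
    base = accuracy(classify, X, y)
    drops = []
    for _ in range(n_rep):
        col = [row[feat] for row in X]
        rng.shuffle(col)
        X_perm = [row[:feat] + [v] + row[feat + 1:] for row, v in zip(X, col)]
        drops.append(base - accuracy(classify, X_perm, y))
    return sum(drops) / n_rep

# Toy data: the label depends only on feature 0; feature 1 is pure noise,
# so permuting it should cost no accuracy at all.
X = [[0.1, 0.9], [0.2, 0.1], [0.8, 0.5], [0.9, 0.2], [0.15, 0.6], [0.85, 0.8]]
y = [0, 0, 1, 1, 0, 1]
rule = lambda row: int(row[0] > 0.5)    # stand-in for a trained classifier
mda_f0 = mean_decrease_accuracy(rule, X, y, feat=0)  # large drop
mda_f1 = mean_decrease_accuracy(rule, X, y, feat=1)  # no drop
```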

Classification and Validation
In this study, the trees of each damage level were randomly split into 50% training and 50% evaluation data sets. This random training/evaluation split was repeated ten times to classify and evaluate the different levels of damaged trees. Classification accuracy was evaluated from the models' confusion matrices, including user's accuracy, producer's accuracy, overall accuracy, and Cohen's kappa coefficient. The values in the final confusion matrix for each model are the means of the ten confusion matrices. MLR was run in R 3.5.1 (RStudio, Inc., Boston, MA, USA) with the R package "nnet" and RF with the R package "randomForest", while stepwise linear discriminant analysis and one-way analysis of variance (ANOVA) were done in SPSS® v. 24 (SPSS Inc., Chicago, IL, USA).
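A minimal sketch of this evaluation scheme (the study used R and SPSS; the oracle "classifier" below simply copies the test labels and is a placeholder for any fitted model):

```python
import random

def cohens_kappa(y_true, y_pred):
    """Cohen's kappa from paired label lists: observed agreement (po)
    corrected for chance agreement (pe)."""
    n = len(y_true)
    labels = sorted(set(y_true) | set(y_pred))
    po = sum(t == p for t, p in zip(y_true, y_pred)) / n
    pe = sum((y_true.count(c) / n) * (y_pred.count(c) / n) for c in labels)
    return (po - pe) / (1 - pe)

def repeated_holdout(samples, labels, fit_predict, n_rep=10, seed=42):
    """Ten random 50/50 training/evaluation splits, as in the study;
    returns the mean overall accuracy and mean kappa over repetitions.
    `fit_predict(train_X, train_y, test_X)` stands in for any classifier."""
    rng = random.Random(seed)
    accs, kappas = [], []
    idx = list(range(len(samples)))
    for _ in range(n_rep):
        rng.shuffle(idx)
        half = len(idx) // 2
        tr, te = idx[:half], idx[half:]
        preds = fit_predict([samples[i] for i in tr], [labels[i] for i in tr],
                            [samples[i] for i in te])
        truth = [labels[i] for i in te]
        accs.append(sum(t == p for t, p in zip(truth, preds)) / len(truth))
        kappas.append(cohens_kappa(truth, preds))
    return sum(accs) / n_rep, sum(kappas) / n_rep

# Toy demonstration: "samples" carry their own label, and the oracle
# classifier echoes the test features, so accuracy and kappa are both 1.
toy_labels = [0, 0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 2]
mean_oa, mean_kappa = repeated_holdout(list(toy_labels), toy_labels,
                                       lambda tr_x, tr_y, te_x: list(te_x))
```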

Spectral Signatures
The spectral signatures of the level 1-4 damaged Yunnan pines based on the WV-3 image are shown in Figure 5. In the visible bands, the spectral values changed markedly with increasing SDR, with an increase in red band reflectance and a drop in green band reflectance. The largest distinctions between the four damage levels appeared in the red edge, NIR1, and NIR2 bands, where the classes "healthy" and "slightly" showed higher reflectance values than the classes "moderately" and "severely". The differences in the NIR1 and NIR2 bands between the classes "healthy" and "slightly" were small, as were those between "moderately" and "severely". However, the standard deviations of the red edge, NIR1, and NIR2 bands were larger than those of the visible bands.

Figure 5. Reflectance values of the four damage levels "healthy", "slightly", "moderately", and "severely" based on the WV-3 image.

Classification Accuracies
The overall accuracy (OA) and Kappa statistics for each classifier and set of spectral variables are shown in Figure 6. With the eight bands, the overall accuracy of RF was significantly lower than that of the other two classifiers. With all variables, the overall accuracy of SDA was significantly higher than that of the other two classifiers. The normalized two-band indices improved some accuracies in the detection of the different damage levels; using all spectral variables, all three classifiers reached their maximum accuracies, including the model with the highest overall accuracy (SDA; OA = 78.33%; Kappa = 0.712), followed by the MLR classifier (OA = 76.56%; Kappa = 0.687) and the RF classifier (OA = 75.46%; Kappa = 0.673). With the variables selected by the SDA classifier, there was no significant difference between MLR and RF. Overall, the SDA classifier performed much better than the other two classifiers.


The confusion matrix in Table 3 summarizes the best results for classifying damage levels 1-4 using the SDA model based on all spectral variables. The best agreement was obtained for damage level 4 (87.5%), followed by level 1 (79.5%) and level 3 (75.4%). Level 2 (71.0%) had the lowest agreement, as it was mostly confused with the adjacent damage levels. The greatest confusion existed between healthy trees and slightly damaged trees: the omission error from misclassification between healthy and slightly damaged trees was almost 20%, and the omission error from misclassifying slightly damaged trees as moderately damaged was almost 10%.

Table 3. Confusion matrix for the level 1-4 classification with the overall accuracy, kappa, and producer's and user's accuracies based on the stepwise linear discriminant analysis (SDA) classifier with all spectral variables. All values presented are means of a cross-validation of ten classifications.


Predictive Mapping
The best mapping result for shoot-damaged trees is shown in Figure 7. The SDA classifier using all spectral variables was used for the SDR mapping, and a moderately infested forest stand was selected for mapping. Figure 7a shows the 0.3 m WV-3 pan-sharpened false color image, and Figure 7b shows the resulting SDR map produced with the SDA classifier.



Discussion
Previous studies have mostly focused on MPB, which attacks only the trunks of pines. After an MPB attack, changes in foliage are classified into three stages [57][58][59][60][61][62]. The first stage is the green attack (GA); in this stage, the moisture content of the foliage decreases but there is no visible color change in the tree crown. The second stage is the green-yellow-red attack; with changes in green chlorophyll pigments, yellow carotenes, and red anthocyanin pigments, the canopy color changes from green to yellow and red. The third stage is the gray attack; in this stage, the needles fall from the canopy until the damaged trees are completely defoliated. Early studies focused on the red attack, while in recent years the early detection of bark beetle damage (GA) has become a research focus. In contrast to MPB, Tomicus spp. have a shoot damage stage, in which the needles of attacked shoots gradually change from green to yellow and red. At the canopy level, the ratio of damaged shoots increases during the shoot damage stage. Unlike MPB-damaged trees, Yunnan pines damaged by Tomicus spp. do not have a green attack stage; the initial damage stage is shoot damage, and the visible change in the tree crown is the color change of the damaged shoots. In our study, the damage caused by Tomicus spp. was classified into four levels based on SDR (healthy, slightly infested, moderately infested, and severely infested). Consequently, based on the life cycle of Tomicus spp. and the color changes of the canopy, we chose SDR as an indicator of the different levels of individual tree damage.
In this study, an object-based supervised classification method was used to detect different levels of damaged trees. The tree crowns were automatically segmented by software; however, the tree crowns manually delineated from the WV-3 0.3 m pan-sharpened multispectral image were not always consistent with the segment objects, which were often larger than the manually delineated crown polygons. Despite the high spatial resolution of the WV-3 imagery, the crown borders of segment objects inevitably mixed with other pixels, including bare soil, fallen leaves, and neighboring tree crowns, which affected the classification accuracy of the different levels of damaged trees. In future experiments, more advanced tree crown extraction techniques should be used to improve classification accuracy.

Classification Accuracies and Variable Importance
In our study, the different levels of damaged trees were assigned to the classes "healthy", "slightly infested", "moderately infested", and "severely infested" by three classification models, with overall accuracies ranging from 75% to 80%. The best model was obtained with SDA using all spectral variables, achieving an overall accuracy of 78.33%. The overall accuracy was limited by damage level 2, which was often misclassified as the adjacent damage levels, a result similar to that reported by Lars W et al. [63]. Numerous studies have indicated that vegetation indices can be better indicators of tree stress than single-band reflectance because they combine information from multiple bands [36,37]. In our study, green band reflectance decreased with increasing damage level while red band reflectance increased, so a combination of two bands with opposite change directions, for example a green-red index, emphasized the differences between damage levels. Furthermore, a set of uncorrelated variables selected by SDA was input to the MLR and RF classifiers, and the accuracies using all variables or SDA-selected variables were not significantly different between MLR and RF. In the future, other approaches that reduce variable space dimensionality and intercorrelation could be used to find the important variables for detecting different SDRs. Overall, the classifiers using all spectral variables had higher overall accuracies than those using only the WV-3 bands in this study.
We classified the damaged trees into four damage levels, whereas previous studies mostly focused on the differences between healthy trees and red attack or gray attack trees. Marx and Alexander [64] used multitemporal RapidEye imagery to separate reddish-colored deteriorating or dead tree groups with a high overall accuracy of 97%. Garrity et al. [65] used QuickBird and WorldView-2 images to classify live and dead classes and found overall accuracies above 95%. Meddens et al. [20], using 2.4 m aggregated aerial imagery to classify three levels of tree mortality (green, red, and gray trees), herbaceous cover, and bare soil, found a highest accuracy of 90%. In general, when damaged trees were not separated into different levels, overall accuracies were usually higher.
In recent years, several studies have emphasized the early detection of bark beetles, focusing on the difference between green attack trees and healthy trees. Because of the small spectral differences between these two classes, the overall accuracies are generally not high. For example, Immitzer and Atzberger [36] and Mullen [37] used WV-2 images to detect the green attack stage of bark beetles, classifying GA damage from healthy trees with overall accuracies around 75%. In our study, the accuracy in separating the slightly infested class from the healthy class was not higher than in the above research. The reason might be that the changes in chlorophyll and moisture of the whole canopy were small when only a few shoots were damaged, so the corresponding spectral changes were also subtle. In future research, finer subdivision of the shoot damage ratios should be considered to determine a threshold at which slightly infested trees can be reliably separated from healthy trees.
In general, although we could not reliably detect slightly infested trees, our methods demonstrated the usefulness of WV-3 imagery for detecting severely infested trees, which is more useful for evaluating the SDRs of Yunnan pine forests.

The Performance of WorldView-3 Imagery
The WV-3 satellite data were suitable for classifying different levels of damaged trees, with high accuracy and Kappa in our study. WV-3 is a commercial satellite providing very high spatial resolution data (0.3 m for the panchromatic band and 1.2 m for the eight multispectral bands). The high spatial and spectral resolutions provide more information for mapping individual trees. Furthermore, the red edge band might play an important role in separating different levels of damaged trees. Another issue is the selection of the image acquisition date. We used a WV-3 image from 6 April; at this time, the needles of the damaged shoots had all turned red after the shoot damage of the previous year. This strongly influences the spectral reflectance and facilitates detection with remote sensing data; in other words, it is relatively easy to visually distinguish the different levels of damaged trees at this date. Lin et al. [66] used UAV-based hyperspectral imagery and LiDAR to detect shoot damage in pine forests. They investigated the SDRs in September, during the shoot damage stage. In that period, some shoots had already been attacked but had not yet turned red, which reduced the differences between damage levels; tree crown SDRs greater than 30% were underestimated in their hyperspectral approach. Although some beetles still damage shoots from December to April of the next year, the majority of shoots are damaged from May to November, and the SDRs reach an approximate maximum in November to December. In future work, data from earlier in the infestation should be included so that the forest department can take measures earlier.

Conclusions
In our study, we evaluated the potential of WV-3 satellite data to classify different levels of damaged trees based on the original WV-3 bands and normalized two-band indices. We draw four major conclusions: (1) Normalized two-band indices can improve some accuracies in the detection of different levels of damaged trees. (2) The SDA classifier performed well in classifying different levels of damaged trees; the best model was the SDA classifier based on all spectral variables. (3) The WV-3 satellite data were suitable for classifying different levels of damaged trees. (4) SDR was a good indicator for detecting different levels of individual damaged trees. These results, especially the SDR mapping output, are important for understanding the spatial distribution of damaged trees. In further study, forest structure, topography, landscape context, regional variables (precipitation, temperature, etc.), and other variables should be combined to explore the spatial and temporal spread pattern of Tomicus spp., which would help forest managers implement operations that effectively reduce the probability of beetle outbreaks and the economic losses induced by beetle attacks.