Combining GF-2 and Sentinel-2 Images to Detect Tree Mortality Caused by Red Turpentine Beetle during the Early Outbreak Stage in North China

In recent years, the red turpentine beetle (RTB) (Dendroctonus valens LeConte) has invaded the northern regions of China. Because the invasion is recent, tree mortality from the outbreak remains at a low damage level. Information on tree mortality, provided by remote sensing at both the single-tree and forest stand scales, is needed for forest management in the early stages of an outbreak. To detect RTB-induced tree mortality at the single-tree scale, we evaluated the classification accuracies of Gaofen-2 (GF2) imagery at two spatial resolutions (1 and 4 m) using a pixel-based method; we also applied an object-based method to the 1 m pan-sharpened images. At the larger forest stand scale, we used Sentinel-2 (S2) imagery at two resolutions (10 and 20 m) to detect RTB-induced tree mortality and compared the resulting classification accuracies. Three machine learning algorithms were applied and compared in this study: the classification and regression tree (CART), the random forest (RF), and the support vector machine (SVM). The results showed that the 1 m resolution GF2 images achieved the highest classification accuracy with the pixel-based method and the SVM algorithm (overall accuracy = 77.7%). Classification of three damage percentages within S2 pixels (0%, <15%, and 15% < x < 50%) was not successful at the forest stand scale. However, the 10 m resolution S2 images supported effective binary classification (<15%: overall accuracy = 74.9%; 15% < x < 50%: overall accuracy = 81.0%). Our results indicate that tree mortality caused by RTB can be identified at both the single-tree and forest stand scales by combining GF2 and S2 images, and they are useful for future exploration of the spatial and temporal patterns of insect pest spread at different spatial scales.


Introduction
Since 1998, outbreaks of the red turpentine beetle (RTB) have caused serious tree mortality in several provinces in northern China [1-3]. In 2004, the area of RTB occupancy was determined to be more than 5 × 10^5 ha, with 6 million pine trees dead, indirectly causing economic losses of ¥8.1 billion [4,5]. In recent years, following the distribution of pine forest, RTB has spread to the more northern provinces of Inner Mongolia and Liaoning [6]. Because of the short time since introduction, the outbreak of Chinese pine (Pinus tabuliformis) mortality caused by RTB is considered to be in its early stage and has not reached a large scale at the landscape level. The invasion of RTB has brought great losses to the affected areas, including timber loss, diminished recreational value, and altered ecosystem function. The large area of forest land increases the time required for traditional field investigations; by the time damage to the forest is discovered, the spread of RTB is often already serious, making prevention and control work difficult. It is therefore very important to monitor damage to pine forests and implement control measures in a timely manner.
RTB can damage multiple Pinus species, including Chinese pine (Pinus tabuliformis Carr.), Mongolian pine (Pinus sylvestris var. mongolica Litv.), and lacebark pine (Pinus bungeana Zucc.) [7]. After being successfully attacked by RTB, Chinese pines go through a series of damage stages similar to the infestation process of the mountain pine beetle and spruce beetle [8,9]. RTB attacks pines in late spring or late summer (twice, corresponding to the feathering and flying periods) and, in the following few months, the needles remain green, with the pines in a "green attack" stage [9]. Within 1-2 years after RTB infestation, the color of the needles changes from green to yellow-green, and then to red, known as the "red attack" stage [9,10]. After 3-4 years of infestation by RTB, all the needles fall off, and the pines enter the "gray attack" stage [9]. The different spectral responses caused by these changes in canopy color make it possible to monitor RTB using remote sensing technology. However, remote sensing studies on bark beetles have concentrated on the mountain pine beetle and spruce beetle [8,11-16], and few remote sensing studies have monitored tree mortality caused by RTB.
Considering the importance of cross-scale interactions in the outbreak dynamics of beetles [17], monitoring bark beetles at both the single-tree and forest stand scales can provide important information on the spatiotemporal dynamics of beetle infestation [8]. Mountain pine beetle and spruce beetle damage has been successfully detected by remote sensing platforms with fine- to medium-resolution sensors, such as aerial photography, QuickBird, GeoEye-1, and Landsat. Meddens et al. [18] used multispectral aerial data to detect mountain pine beetle infestation and found that the highest accuracy (overall accuracy = 90%) was obtained when the image was resampled to the average crown size (2.4 m) of lodgepole pine. Coops et al. [12] reported a good correlation between red attack crowns recorded in field investigations and red attack crowns classified from QuickBird imagery. Dennison et al. [19] found that red- and gray-attack tree crowns extracted from GeoEye-1 images correlated strongly with those of the actual survey (red attack: R^2 = 0.77; gray attack: R^2 = 0.70). At the stand scale, Landsat imagery has often been employed to detect bark beetle outbreaks and has achieved high classification accuracies [8,10,20,21] because of its spectral resolution, large spatial and spectral band range, and free access.
A shortcoming of the research above is that it offers limited understanding of the capability of imagery of different resolutions to detect beetle-caused tree mortality. Most of these studies mapped tree mortality at high damage levels. Large, interconnected patches of red attack trees reduce spectral variability and increase classification accuracy, because homogeneous regions are easily classified and mapped [20]. Few published papers have evaluated the capability of remote sensing images to monitor tree mortality at low damage levels. White et al. [11] used IKONOS images to monitor mountain pine beetle red attack at a low level of attack (less than 5% of the forest stand damaged). Their results indicated that the accuracy of red attack tree detection was 71% when a 4 m buffer was created around the pixels at 4 m resolution; however, this buffer setting led to inaccuracy in the location of individual damaged trees. Meddens et al. [22] investigated the capacity of Landsat images to quantify tree mortality caused by mountain pine beetle at different levels and showed that pixels with fewer than 40% of pines in the red stage could not be detected at acceptable accuracy. Whether at the single-tree or the forest stand scale, higher resolution imagery is needed to improve classification accuracy at low levels of tree damage.
Monitoring RTB at the single-tree scale using high spatial resolution imagery poses a new methodological challenge, because the spectral response of an individual tree is affected by changes in canopy illumination and background effects [23], which often result in a noisy "salt-and-pepper" effect [24]. The object-oriented classification method provides a new approach to high spatial resolution image classification. In Bavarian Forest National Park, object-oriented image analysis yielded 91.5% classification accuracy for the detection of dead spruce caused by spruce bark beetle [25]. In western Canada, Coggins et al. [26] explored the invasion pattern of mountain pine beetle using high spatial resolution aerial images processed with an object-based method, achieving an average accuracy of 80.2%. However, the object-based and pixel-based methods have rarely been compared for extracting trees damaged by bark beetles.
To address these gaps, our study had two main objectives: (1) to investigate the efficacy of GF2 satellite imagery for detecting individual trees damaged by RTB at a low damage level (stands with around 5% damage), including green trees and red- and gray-attack trees. We compared the classification accuracy of GF2 images at the pan-sharpened 1 m resolution and the 4 m multispectral resolution. For the pan-sharpened 1 m imagery, we applied both object-based and pixel-based methods to map individual damaged trees and compared their accuracies. (2) To investigate the efficacy of S2 satellite imagery for detecting tree mortality caused by RTB at the forest stand scale. We evaluated the classification accuracy of different spatial resolutions (10 and 20 m) for detecting the percentage of tree mortality in the early stage of invasion (less than 50% damage within pixels). In addition, feature selection strategies were examined, and three machine learning algorithms, the classification and regression tree (CART), the random forest (RF), and the support vector machine (SVM), were applied and compared at both scales to improve classification accuracy.


Study Area
The study areas were located in Dahebei Town of Chaoyang City in Liaoning Province (Figure 1). The total area of the two study areas (red boxes in Figure 1c) was about 100 hectares. The areas were chosen because the Chinese pine mortality caused by RTB there was at a low damage level. According to the four grading criteria of White et al. [11], infested susceptible stands with a low damage level contain 1%-5% red attacked trees, and the infestation intensities of the two stands (Figure 1c) in the study area were around 5%. The areas were dominated by pure forest of Chinese pine; larch (Larix principis-rupprechtii Mayr) and some broad-leaved species were also present. The mean annual precipitation of this area is 450.9 mm, and the mean annual temperature is 8.6 °C. The average monthly maximum and minimum temperatures are 15.5 and 2.3 °C [27], respectively. The elevation ranges from 428 m (Dahebei Town) to 1018 m (Southdawa Mountains).

Reference Data
The unmanned aerial vehicle (UAV) images were collected in August 2018. The platform was an Inspire 2, manufactured by Dà-Jiàng Company (Shenzhen, China), equipped with a high-resolution RGB (red, green, and blue) camera, a Zenmuse X5S with an adjusted focal length of 15 mm. The flight lines covered the entire study areas and generated three sets of images. Considering the height of the mountains, we performed flights at a height of approximately 220 m above the ground, with 90% frontal overlap and 85% side overlap. Image mosaicking, texture and mesh generation, and orthophoto generation were completed in Agisoft PhotoScan Professional [28]. According to the Agisoft PhotoScan processing report, the resolution of the UAV imagery was 3.8 cm and the reprojection error was 2 pixels.
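As a sanity check on the reported ground resolution, the nadir ground sample distance (GSD) can be estimated from the flight parameters. The Python sketch below assumes a nominal ~3.3 µm pixel pitch for the camera sensor (an assumption, not a value from the paper); with the 15 mm focal length and 220 m flight height this gives roughly 4.8 cm per pixel, the same order as the reported 3.8 cm.

```python
def ground_sample_distance(flight_height_m, pixel_pitch_um, focal_length_mm):
    """Nadir GSD in metres per pixel: H * pixel_pitch / focal_length."""
    return flight_height_m * (pixel_pitch_um * 1e-6) / (focal_length_mm * 1e-3)

# Pixel pitch of ~3.3 um is an assumed, illustrative value for the sensor.
gsd = ground_sample_distance(220, 3.3, 15)  # ~0.048 m (4.8 cm) per pixel
```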
To assist with the visual interpretation of the UAV images, four plots (30 × 30 m) were set up in the study areas in 2018 (Figure 1c). In each plot, we recorded the diameter at breast height (DBH) and height of trees with a DBH ≥ 7.5 cm. In addition, we randomly measured the crown diameters of 34 trees located in the plots; the average crown diameter of Chinese pine was 3.4 m. The damage stage of each tree was assigned based on a visual assessment of canopy color, the percentage of needles remaining on the tree, and the presence of beetle boreholes at the base of the trunk (Table 1, Figure 2). Finally, the canopies of individual trees at different attack stages were delineated on the UAV images (Figure 1d). In the two study areas, the total number of delineated crowns was 538 (green trees: 199, red trees: 199, gray trees: 73).


Satellite Image Preparation
A GF2 image (Table 2) was acquired for mapping individual trees at different damage stages caused by RTB. GF2, China's new-generation high-resolution Earth observation satellite, was launched in August 2014. Using the calibration parameters given on the official website [29], we carried out radiometric calibration and atmospheric correction (FLAASH) on the GF2 images (processing level: L1A) to convert the original digital number (DN) values to radiance and reflectance. The satellite scenes were orthorectified using 30 m resolution ASTER GDEM data. Pan-sharpened 1 m resolution GF2 images were created by fusing the 4 m multispectral GF2 bands with the 1 m panchromatic band using the nearest-neighbor diffusion (NNDiffuse) algorithm.
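The DN-to-radiance step of the radiometric calibration is a simple linear transform per band. The sketch below illustrates it with a placeholder gain and offset; the real per-band coefficients come from the official GF2 calibration parameters [29].

```python
import numpy as np

def dn_to_radiance(dn, gain, offset):
    """Radiometric calibration: L = gain * DN + offset.
    gain/offset are per-band coefficients from the official GF2
    calibration file; the values used below are placeholders."""
    return gain * np.asarray(dn, dtype=np.float64) + offset

dn = np.array([[120, 455], [803, 1023]])       # raw digital numbers
radiance = dn_to_radiance(dn, gain=0.163, offset=0.0)  # hypothetical gain
```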
For detecting tree mortality at the forest stand scale, one S2 scene was obtained from the European Space Agency [30]. S2 is a multispectral platform with 13 bands at resolutions of 10, 20, and 60 m; detailed band information is given in Table 2. The S2 L1C image was processed to level 2A (the 60 m resolution bands were excluded) through atmospheric and terrain correction using the Sen2Cor processor [31].
To accurately locate the damaged trees delineated from the UAV images within the satellite images, the GF2 and S2 images were geographically registered, with an RMSE of less than 3, based on clear ground control points selected from the UAV images. The GF2 and S2 images were acquired on 23 June 2018 and 17 September 2018, respectively, with no cloud cover, close to the UAV image acquisition time.

Extraction of Tree Mortality at a Single-Tree Scale
We classified the GF2 images to compare the accuracy of tree mortality detection between 1 m resolution images, processed with both object-based and pixel-based methods, and 4 m resolution images (Figure 3). Multiresolution segmentation was used to segment the GF2 1 m resolution images into objects in the eCognition Developer software [32,33]. The best segmentation parameters were assessed against the scale of the tree crowns; we iteratively optimized the segmentation at multiple levels of detail and adjusted the shape and compactness parameters. The final scale parameter was 10, and the homogeneity criteria of shape and compactness were set to the software defaults. For the GF2 1 m resolution images, shadow and herbaceous image objects were removed to ensure that only tree crowns exposed to sunlight were used in the subsequent classification. We developed a stepwise masking system and used histograms to find the optimal thresholds, i.e., those producing the highest matching accuracy between the masked areas and the reference data. Objects with a normalized difference vegetation index (NDVI) value greater than 0.58 were masked as vegetation. Next, the sunlit areas of vegetated objects were separated using the intensity value I[RGB] from the hue, saturation, and intensity (HSI) transformation of the RGB bands [34], where values smaller than 0.0061 were assigned to shadow. Lastly, the masked sunlit areas were separated into tree canopy and non-tree canopy (herbaceous) using an excess green vegetation index [35] threshold (≤291) for the GF2 pan-sharpened images. In the original multispectral GF2 images, tree crowns and shadows were mixed, so we masked only the herbaceous area, using an excess green [35] threshold of ≤276. The mask thresholds differed only slightly between the two study areas.
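The stepwise masking described above can be sketched as a few array operations. This is an illustrative Python/NumPy version, not the authors' implementation: the thresholds are exposed as parameters because the reported values (NDVI > 0.58, intensity < 0.0061, ExG ≤ 291) apply to specific image scalings, and we read the ExG ≤ threshold side as herbaceous.

```python
import numpy as np

def stepwise_mask(red, green, blue, nir,
                  ndvi_thr=0.58, shadow_thr=0.0061, exg_thr=291.0):
    """Stepwise masking sketch: (1) NDVI > thr -> vegetation;
    (2) HSI intensity <= thr -> shadow (removed); (3) excess green
    above thr -> tree canopy, otherwise herbaceous."""
    ndvi = (nir - red) / (nir + red + 1e-9)
    vegetation = ndvi > ndvi_thr
    intensity = (red + green + blue) / 3.0       # I of the HSI transform
    sunlit = vegetation & (intensity > shadow_thr)
    exg = 2.0 * green - red - blue               # excess green index
    tree_canopy = sunlit & (exg > exg_thr)
    herbaceous = sunlit & ~tree_canopy
    return tree_canopy, herbaceous
```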
A total of 20 features were extracted from the GF2 images (Table 3). The basic spectral information included the mean and ratio values of each band and the three indices of the HSI transform of the RGB bands. Indices such as the NDVI and the red-green index (RGI) were tested because they have been successfully used to identify pest disturbances in previous research [12-14,18]. Six types of textural information were obtained from the gray level co-occurrence matrix (GLCM) [36]. We did not use geometric information in the object-based method because Latifi et al. [37] showed that spatial metrics did not play a major role in characterizing infested stands. CART can be used to filter features according to their importance ranking in regression [38,39]. Among the three feature selection methods studied by Li et al. [40], the features selected by CART gave the highest accuracy for tree species classification, and CART took the shortest time to select features. We therefore used CART to select subsets of features (contribution rate greater than or equal to 10%) prior to classification, to reduce redundancy and intercorrelation among the candidate variables presented in Table 3.
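The CART-based selection step (keep features whose contribution rate is at least 10%) can be approximated with a single decision tree's normalized feature importances. A minimal scikit-learn sketch, standing in for the authors' R workflow:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def cart_select(X, y, feature_names, min_contrib=0.10):
    """Keep features whose CART importance (contribution rate) >= 10%.
    Illustrative stand-in for the paper's R-based selection step."""
    tree = DecisionTreeClassifier(random_state=0).fit(X, y)
    importances = tree.feature_importances_   # sums to 1 over all features
    return [name for name, w in zip(feature_names, importances)
            if w >= min_contrib]
```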

Extraction of the Percentage of Tree Mortality at a Forest Stand Scale
We compared the classification accuracy of the original 10 m resolution bands with that of the 20 m resolution bands of the S2 images for detecting the percentage of tree mortality caused by RTB (Figure 3). Before extracting tree mortality from the S2 images, we used the same method as for the GF2 4 m resolution images to remove non-forest land and herbaceous areas, employing a histogram to find the best threshold. Previous studies have used the proportion of corresponding reference classification pixels (aerial images) within superpixels (30 m) and the point-counting method [46,47]. We randomly and evenly sampled 60 and 40 pixels in the 10 and 20 m resolution S2 images, respectively, and counted the damaged tree crowns within each pixel according to the reference UAV images. The average number of tree crowns was 8 per 10 m S2 pixel and 25 per 20 m S2 pixel. According to the number of tree crowns within the S2-sized pixels, we defined three damage percentage classes: 0%, <15%, and 15% < x < 50%. Long et al. [47] found that Landsat images could not reliably distinguish a 5% pixel damage percentage from 0%. Therefore, to improve the classification accuracy of the S2 20 m resolution images, we relabeled pixels containing only one damaged tree as healthy (0%). As for the GF2 images, CART was used to select important feature variables for S2 image classification; the difference was that we additionally included the MSI and NDMI indices in feature selection for the 20 m resolution images, because these indices have been reported to be sensitive to beetle infestation [22,48,49].
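The pixel-labelling rule described above (three damage classes, with 20 m pixels containing a single damaged crown relabelled as healthy) can be written as a small function. A sketch under the stated class boundaries; pixels with more than 50% damage did not occur at this early outbreak stage:

```python
def damage_class(n_damaged, n_total, is_20m=False):
    """Assign a pixel to the 0%, <15%, or 15% < x < 50% damage class
    from crown counts. For 20 m pixels, a single damaged crown is
    relabelled as healthy, following the text."""
    if is_20m and n_damaged == 1:
        return "0%"
    fraction = n_damaged / n_total
    if fraction == 0:
        return "0%"
    elif fraction < 0.15:
        return "<15%"
    else:
        # early-outbreak pixels in this study stayed below 50% damage
        return "15%<x<50%"
```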
We compared the ability of S2 superpixels (10 and 20 m resolution) to detect the three damage percentage classes using the three machine learning algorithms (described in Section 2.6). We then used the best-performing algorithm to determine the classification accuracy of a binary (damaged/live) characterization of the different damage percentages.

Classification
We applied three machine learning algorithms, CART, RF, and SVM, to map damaged individual trees and the percentage of tree mortality. We used the statistical computing program R for data analysis, with the packages C50, e1071, and randomForest. RF and SVM have been widely used and exhibit good performance in land cover classification, while CART can be easily implemented and explained by explicit rules [50,51]. CART is a binary recursive partitioning algorithm based on tree nodes generated from training data [38]. RF is an improvement on the traditional decision tree and consists of a large number of decision trees; new data are classified by majority vote over the classification results of all constructed trees [52]. We set the mtry and ntree parameters of RF to their default values, i.e., the square root of the number of features and 500 trees, respectively. The aim of the SVM algorithm is to find the optimal hyperplane as a decision function in a high-dimensional space and to classify input vectors into different classes [53]. In our research, a linear kernel was chosen and the cost parameter was set to 10^2 in the SVM.
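A minimal sketch of the three classifiers with the stated parameters (RF: 500 trees, mtry equal to the square root of the number of features; SVM: linear kernel, cost = 100). The paper used the R packages C50, e1071, and randomForest; the scikit-learn equivalents below are illustrative only, and scikit-learn's CART differs from C5.0 in detail.

```python
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC

def build_classifiers():
    """Illustrative scikit-learn stand-ins for the R classifiers used
    in the paper, with the parameter choices reported in the text."""
    return {
        "CART": DecisionTreeClassifier(),
        # ntree = 500, mtry = sqrt(number of features)
        "RF": RandomForestClassifier(n_estimators=500, max_features="sqrt"),
        # linear kernel, cost parameter C = 10**2
        "SVM": SVC(kernel="linear", C=100.0),
    }
```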
Objects or pixels of each class were randomly split into 50% training and 50% evaluation datasets. The accuracies in identifying individual damaged trees and the percentage of tree mortality were then evaluated based on the overall accuracy (OA), producer's accuracy (PA), user's accuracy (UA), and Kappa coefficient derived from confusion matrices produced with cross-validation samples [54]. The Kappa coefficient is a conservative estimate of classification accuracy because it penalizes agreement that may occur by chance. A Kappa value is suggested to represent a degree of agreement from "poor" (<0.4) to "moderate" (0.4-0.8) to "strong" (>0.8) [18,55]. We randomly selected training and validation data and repeated the sampling 10 times to compute the average of the 10 confusion matrix metrics. For the GF2 and S2 images, differences in classification performance (OA) among combinations of resolutions and algorithms were analyzed using one-way analysis of variance (ANOVA), and pairwise differences were evaluated using t-tests.
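All four accuracy metrics can be derived from a single confusion matrix. A NumPy sketch of OA, PA, UA, and the Kappa coefficient as used here:

```python
import numpy as np

def accuracy_metrics(cm):
    """OA, per-class producer's/user's accuracy, and Cohen's Kappa from a
    confusion matrix cm[i, j] = count(reference class i, predicted class j)."""
    cm = np.asarray(cm, dtype=np.float64)
    n = cm.sum()
    oa = np.trace(cm) / n
    pa = np.diag(cm) / cm.sum(axis=1)   # producer's accuracy (per reference class)
    ua = np.diag(cm) / cm.sum(axis=0)   # user's accuracy (per predicted class)
    # chance agreement from row/column marginals
    pe = (cm.sum(axis=1) * cm.sum(axis=0)).sum() / n**2
    kappa = (oa - pe) / (1 - pe)
    return oa, pa, ua, kappa
```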

Comparisons of Classification on a Single-Tree Scale
According to the CART classifier, the feature variables contributing 10% or more to the GF2 classification are listed in Table 4. The selected features were applied in the subsequent classifications, which combined the different resolutions and algorithms.
There were highly significant differences among the combinations of resolutions and algorithms (p < 0.001). (1) For images of the same resolution and method, there was a significant difference between CART and the other two algorithms, and no significant difference between SVM and RF, except for the 1 m resolution images based on the pixel method. However, regardless of the resolution used, SVM always had the highest classification accuracy and CART the lowest (Figure 4). (2) For the same classification algorithm, there was no significant difference among the different resolutions and methods. As a result, the combination of GF2 1 m resolution images, the pixel-based method, and the SVM algorithm gave the highest classification accuracy, as shown in Figure 5. The overall accuracy and Kappa coefficient of this combination were 77.7% and 0.58, respectively (Table 5). The PA and UA of red trees were 80.3% and 74.8%, respectively. However, the PA and UA of gray trees were only 1.3% and 18.4%, respectively; most gray trees were mistakenly classified as either red or green trees.

Comparisons of Classification at a Forest Stand Scale
Prior to classification, 9 and 11 feature variables were selected for the 10 and 20 m resolution S2 images, respectively (Table 4).
There were highly significant differences among the combinations of resolutions and algorithms (p < 0.001) for the S2 images. (1) For images of the same resolution, the classification results revealed that the SVM and RF classifiers performed much better than CART, with no significant difference between SVM and RF (Figure 6). (2) For the same classification algorithm, there was a significant difference in the classification performance of the 10 and 20 m resolution images, with the 10 m resolution images performing better (Figure 6). However, no matter which combination of resolution and algorithm was used, classification was poor at this low level of tree mortality, with an OA below 60% (Figure 6).
In further research, we carried out binary classification (live/damaged) for the S2 10 and 20 m resolution images with the SVM algorithm. In addition to OA, we also paid special attention to the accuracy of the damage class. The accuracy assessment is shown in Table 6. The >15% damage percentage of the 10 m resolution images had the highest classification accuracy, with an OA and Kappa coefficient of 81.0% and 0.62, respectively. The OA and Kappa of the <15% damage percentage of the 10 m resolution images were 74.9% and 0.49, respectively. The binary classification accuracies of the 10 m resolution images were significantly higher than those of the 20 m resolution images (Table 6).

Extraction of Tree Mortality at a Single-Tree Scale
In the early outbreak stage of tree mortality, the GF2 1 m resolution images produced the highest PA (80.3%) with the pixel-based method. This result is higher than that of White et al. [11], in which the accuracy of red attack detection was 70.1% for areas of low infestation; the difference might be related to the use of optimal variables and appropriate algorithms. In addition, according to the research of Meddens et al. [18], the classification accuracy should be higher when GF2 images are resampled to a 3.4 m pixel resolution, consistent with the average tree crown size.
Compared with previous studies [8,18], the detection accuracy of the gray attack stage was very low in our research. Beyond the damage level itself (detection accuracy is higher at high damage levels than at low ones [11]), the main reason might be the structural characteristics of the host tree species. The branches of lodgepole pine and spruce are dense, as shown in the research of Meddens et al. [18] and Hart et al. [8], and the crown shape remains compact after the needles fall in the gray attack stage, so these trees can still be easily detected in remote sensing images. However, the host tree species in this study, Chinese pine, has scattered branches and a loose crown shape after needle fall (Figure 2c), causing gray trees to mix with ground vegetation and to be confused with green trees and red attack trees during classification.
In our research, we could not properly compare the object-based classification method with the pixel-based method. The spatial resolution of the GF2 images was not fine enough, so image segmentation could not effectively follow the tree crowns (Figure 7a). The classification objects in the reference data were mixed with other classes, which made them more heterogeneous. In contrast, finer pixels could fall entirely within a reference tree crown (Figure 7b), making the classified pixels more homogeneous. The higher classification accuracy of the pixel-based method was expected, because more homogeneous pixels often result in better separability of classes [18,47]. Improved classification of finer-resolution images, such as WorldView-2, may be achieved with object-based classification. [Figure 7 caption: (b) based on the pixel method. Blue polygons correspond to trees delineated from the UAV images; dark green indicates green trees; purple, red trees; white, gray trees; olive green, tree canopy areas; cyan, herbaceous areas; and black, shadow.]

In general, although we could not reliably detect gray attack Chinese pines, our methods demonstrated the usefulness of GF2 imagery for detecting red attack trees, which were more prevalent in the study areas when stands were at a low damage level.

Extraction of Tree Mortality at a Forest Stand Scale
We explored the ability to identify the percentage of tree mortality within S2-sized pixels at a low level of tree mortality. The result, unsatisfactory for forest managers, was similar to that of Meddens et al. [22], who reported that when the pixel damage percentage was less than 40%, the classification accuracy was less than 50%. Those authors studied two classes, the undisturbed stand and the damaged stand, whereas we studied three classes, in which the <15% damage class was seriously mixed with both the healthy class and the 15% < x < 50% damage class (Table 7). When we conducted only a binary classification (healthy class and damage class), the classification accuracy increased significantly. In Meddens' research [22], the red stage class accuracies for the <15% and 15% < x < 50% damage percentages within Landsat-scale superpixels were about 5% and 30%, respectively. A comparison of the results for the 10 and 20 m resolution images of the S2 satellite (Table 6) supported the idea that, as the resolution of medium-resolution satellite images increases, the monitoring accuracy of tree mortality at a stand scale also increases. The authors also reported that multidate classification accuracy was higher than single-date classification accuracy at lower levels of tree mortality. However, multitemporal images are not always available due to the presence of clouds. It would be a good choice to use a single S2 10 m resolution image to monitor the early outbreak of tree mortality caused by RTB. The disadvantage was that, due to the scarcity of damage pixels, we could only divide the damage percentage into three categories according to the distribution of the damage pixels; it could not be divided into multiple damage stages at 10% intervals at a low damage level, as studied by Meddens et al. [22].
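The binning of per-pixel damage percentage into these classes can be sketched as follows. This is an illustrative NumPy sketch, assuming a fine-resolution dead-crown mask aggregated to S2-sized pixels; the function name, argument names, and toy mask are ours:

```python
import numpy as np

def damage_classes(dead_mask, factor, low=0.15, high=0.50):
    """Aggregate a fine-resolution dead-crown mask (1 = dead) into
    coarse factor x factor pixels and bin the damage fraction:
    0 = healthy (0%), 1 = below `low`, 2 = between `low` and `high`."""
    h2 = dead_mask.shape[0] - dead_mask.shape[0] % factor
    w2 = dead_mask.shape[1] - dead_mask.shape[1] % factor
    frac = dead_mask[:h2, :w2].reshape(
        h2 // factor, factor, w2 // factor, factor).mean(axis=(1, 3))
    classes = np.zeros(frac.shape, dtype=int)
    classes[(frac > 0) & (frac < low)] = 1
    classes[(frac >= low) & (frac <= high)] = 2
    return frac, classes

# Toy 20 x 20 "1 m" mask -> 2 x 2 grid of "10 m" pixels.
mask = np.zeros((20, 20))
mask[0, :5] = 1        # 5/100 dead crown pixels  -> 5%  -> class 1
mask[0:2, 10:20] = 1   # 20/100 dead crown pixels -> 20% -> class 2
frac, classes = damage_classes(mask, 10)
```

With sparser damage, most coarse pixels fall into class 0 or 1, which illustrates why finer interval schemes (e.g., 10% steps) run out of training pixels at a low damage level.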

Feature Variables and Classification Algorithm
In previous studies, the RGI and NDVI indices were used to extract bark-beetle-caused tree mortality [8,13,14,18,22]. For GF2 images, we found that the hue channel of the transformed HSI, the ratio of the visible bands, and the mean value of the red band played an important role in classification at a single-tree scale (Table 4). Compared with spectral information, texture information played a minor role.
For S2 images, the mean values of the red band, the VEG1 band, and the transformed HSI ranked highly in the importance ranking of feature variables, while the textural information played a secondary role. Fassnacht et al. [56], who monitored tree mortality caused by the European bark beetle using aerial hyperspectral images, considered that their selected spectral regions agreed fairly well with the spectral bands of S2. Our research was consistent with the findings of those authors. However, among the selected spectral regions, 2190 nm, corresponding to the SWIR2 band of S2 images, did not play an important role in classification. In addition, the spectral index generated from the shortwave infrared band of the 20 m resolution images failed to contribute to classification (Table 4). The reason might be that the shortwave infrared band is not sensitive at a low damage level. Spectral information was more important than index and texture information at both the single-tree and forest stand scales.
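These feature variables follow directly from the definitions listed in Table 3. A single-pixel sketch with made-up reflectance values (the hue uses the standard arccos form of the RGB-to-HSI transform):

```python
import numpy as np

# Toy single-pixel reflectances; values are illustrative only.
red, green, blue, nir = 0.10, 0.08, 0.05, 0.40

# NDVI: (NIR - RED) / (NIR + RED)
ndvi = (nir - red) / (nir + red)

# Ratio feature: band mean divided by the sum of all (visible) bands
red_ratio = red / (red + green + blue)

# HSI hue channel from the RGB bands (arccos form of the RGB -> HSI transform)
num = 0.5 * ((red - green) + (red - blue))
den = np.sqrt((red - green) ** 2 + (red - blue) * (green - blue))
theta = np.arccos(num / den)
hue = theta if blue <= green else 2 * np.pi - theta
```

In practice these would be computed per object or per pixel over whole bands, and the `den` term needs guarding against zero for achromatic (gray) pixels.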
As for the three machine learning algorithms, CART had the worst classification accuracy. The classification accuracies of RF and SVM were similar, but in most cases the SVM accuracy was higher than that of RF. SVM, which has been widely recognized as particularly adept at dealing with small training samples [8,57], was demonstrated to be successful here. RF does have an advantage, though, when the data contain numerous weak explanatory variables, as previously established [52,58].
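A comparison of this kind can be sketched with scikit-learn; this is a minimal illustration on synthetic, well-separated data, not the study's features, samples, or hyperparameters:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier        # CART-style tree
from sklearn.ensemble import RandomForestClassifier    # RF
from sklearn.svm import SVC                            # SVM with RBF kernel
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# 120 samples x 5 "spectral/texture" features, two well-separated classes
X = np.vstack([rng.normal(0.0, 1.0, (60, 5)),
               rng.normal(2.5, 1.0, (60, 5))])
y = np.array([0] * 60 + [1] * 60)

results = {}
for name, clf in [("CART", DecisionTreeClassifier(random_state=0)),
                  ("RF", RandomForestClassifier(n_estimators=100, random_state=0)),
                  ("SVM", SVC(kernel="rbf", C=1.0))]:
    # 5-fold cross-validated mean accuracy for each algorithm
    results[name] = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: {results[name]:.3f}")
```

On real, noisier spectral data the ranking can differ, which is why repeated classifications with reported standard deviations (as in Figures 4 and 6) are needed before comparing algorithms.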

Figure 1. (a) Liaoning Province in China; (b) true-color Gaofen-2 (GF2) image from 2018; (c) the study areas and field plots in the right and left corners of the GF2 image; (d) unmanned aerial vehicle (UAV) images in one of the study areas.


Figure 2. Examples of green trees (a), red trees (b), and gray trees (c) in UAV images.

Feature definitions from Table 3 (fragment): … in objects/pixels of each band [41]; Ratio: band mean divided by the sum of all bands [41]; Transformed HSI: the RGB bands color-transformed to HSI channels hue (H), saturation (S), and intensity (I) [34]; NDVI: normalized difference vegetation index, (NIR − RED)/(NIR + RED) [42].

Figure 3. Flowchart of operations used to classify GF2 and S2 images.

Figure 4. Overall accuracy (OA) of classification for different combinations of resolutions and algorithms using GF2 images, for which the significance level was 0.05. a: 1 m resolution images based on the object method; b: 1 m resolution images based on the pixel method; c: 4 m resolution images. Error bars indicate the standard deviations of 10 classifications. A-D: pairwise differences among combinations of resolutions and algorithms in classification performance.


Figure 5. Detailed example of the image classification of the study areas. (a) 1 m resolution image based on the pixel method and SVM algorithm; (b) 1 m resolution image based on the pixel method and RF algorithm; (c) 1 m resolution image based on the pixel method and C50 algorithm; (d) 1 m resolution image based on the object method and SVM algorithm; (e) 4 m resolution image based on the SVM algorithm; (f) GF2 true-color image; (g) UAV image.



Figure 6. OA of classification for combinations of resolutions and algorithms using S2 images. a: 10 m resolution images; b: 20 m resolution images. Error bars indicate the standard deviations of ten classifications. A-C: pairwise differences among combinations of resolutions and algorithms in classification performance.

Figure 7. Subset of selected samples from GF2 1 m resolution images: (a) based on the object method; (b) based on the pixel method. Blue polygons correspond to delineated trees from UAV images; dark green corresponds to green trees, purple to red trees, white to gray trees, olive green to tree canopy areas, cyan to herbaceous vegetation, and black to shadow.


Table 1. Three stages of individual trees according to the sample plot survey.
1. Green tree: live or current beetle attack; needles green.
2. Red tree: beetle attack of about two years; needles orange or red.
3. Gray tree: beetle attack of more than three years; no needles.
Forests 2020, 11, 172


Table 3. All features extracted from GF2 and S2 images.


Table 5. Confusion matrix of GF2 1 m resolution images based on the pixel method and support vector machine (SVM) algorithm.
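The accuracy measures reported throughout (overall accuracy and producer's accuracy) follow directly from such a confusion matrix; a minimal sketch with made-up counts, not the values of Table 5:

```python
import numpy as np

# Illustrative 3 x 3 confusion matrix (rows = reference, cols = classified);
# the counts are invented for demonstration only.
cm = np.array([[50, 5, 2],
               [8, 40, 6],
               [3, 7, 30]])

# Overall accuracy: correctly classified samples / all samples
overall_accuracy = np.trace(cm) / cm.sum()
# Producer's accuracy: correct / reference total, per class (omission errors)
producers_accuracy = np.diag(cm) / cm.sum(axis=1)
# User's accuracy: correct / classified total, per class (commission errors)
users_accuracy = np.diag(cm) / cm.sum(axis=0)
```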

Table 6. Accuracy assessment of binary classification for S2 images using the SVM algorithm; 15% < the second damage percentage of each resolution image < 50%.

Table 7. Confusion matrix for the three-class classification with the SVM algorithm using S2 10 m resolution images; 15% < the third class < 50%.