Automated Extraction of Forest Burn Severity Based on Light and Small UAV Visible Remote Sensing Images

Abstract: Identification of forest burn severity is essential for fire assessments and a necessary procedure in modern forest management. Given the low efficiency and labor intensity of the post-fire field survey prescribed in China's forestry standards, the limited temporal resolution of satellite imagery, and the poor objectivity of manual interpretation, a new method for the automatic extraction of forest burn severity from visible-light images acquired by a light, small unmanned aerial vehicle (UAV) is proposed. Taking forest fires that occurred in Anning City, Yunnan Province, in 2019 as the study object, post-fire imagery was obtained by a small multi-rotor near-ground UAV. Image recognition indices reflecting the variation in chlorophyll loss across differently damaged forests were developed from the customized spectral features A and C, the texture features mean, standard deviation, and homogeneity, and the length-width-ratio shape index. An object-oriented method is used to determine the optimal segmentation scale for forest burn severity, and a multilevel rule-based classification and extraction model is established to achieve automatic identification and mapping. The results show that the method can recognize four classes of forest burn severity: unburned, damaged, dead, and burnt. The overall accuracy is 87.76% and the Kappa coefficient is 0.8402, which implies that a small visible-light UAV can substitute for the current forest burn severity survey standards. This research is of great practical significance for improving the efficiency and precision of forest fire investigation, expanding the application of small UAVs in forestry, and developing an alternative method for forest fire loss assessment in the forestry industry.


Introduction
As an essential ecological factor in forest ecosystems, forest fires play a crucial role in promoting the succession and renewal of fire-dependent ecosystems and in maintaining biodiversity. Nonetheless, uncontrolled forest fires can become disasters, causing significant losses of forest vegetation and ecological environment and even casualties. Consequently, the prevention and control of forest and grassland fires have become a worldwide concern [1,2]. After a fire, heterogeneous forest patches are shaped, consisting of trees with varying amounts of damage (e.g., burnt, dead, damaged, and unburned). According to the extent of fire damage, the burned areas can be divided, sampled, surveyed, and mapped, which is an essential part of fire investigations and assessments and is also vital in modern forest management. Prompt and accurate acquisition of forest burn severity is necessary to provide scientific assessments of the direct economic losses, which is of great practical significance for establishing fire archives, addressing firefighting emergencies, and planning and constructing fire containment infrastructure. It is also meaningful for forestry judicial appraisals and insurance compensation [3][4][5].
In this study, image recognition indices for burnt, dead, damaged, and unburned forests were developed from a customized spectrum, particularly features A and C, the texture features mean, standard deviation, and homogeneity, and the length-width-ratio shape index. Then, an object-oriented method is presented to extract the different types of forest burn severity based on optimal segmentation and a multilevel rule classification recognition model. Finally, the performance of small visible-light UAVs with object-oriented methods in identifying forest burn severity, as an alternative to the field survey in the national forestry standards, is evaluated and discussed through precision validation and comparison with the widely used SVM.

Study Area
The fires are located in Wuyue Village, Bajie Town, Anning City (Figure 1). The study area is part of the fire region and represents all of the forest burn severity types in the forestry standards mentioned [8]. Over 2500 people fought for nearly 72 h to suppress all the wildfires. According to the on-site survey by the local forestry bureau, this forest fire was a crown fire accompanied by surface fire, with a total woodland fire acreage of approximately 386.7 hm². The primary vegetation type at the research site is coniferous and broad-leaved mixed forest, and the main dominant tree species are Pinus yunnanensis, Pinus armandi, and Cyclobalanopsis glaucoides. Furthermore, adjacent to Kunming, the dense population and frequent human activities give rise to high ignition rates, and the monsoon climate provides a warm, dry winter and a moderately hot, humid summer. The mountainous and rugged topography causes the climate to vary with elevation, and transportation is underdeveloped, so this location experiences frequent fire occurrences that are difficult to suppress. The local fire-prohibition period runs from December until the onset of the rainy season, usually at the end of May. Once the rainy season arrives, the vegetation in the burned area quickly renews and undergoes succession.

Data Acquisition and Processing
Images of the burned areas were acquired using a DJI Phantom 4 Pro quad-rotor light and small UAV. This type of UAV has high-precision positioning, and the aircraft can return autonomously during flight [39], making it suitable for operation under complex terrain conditions in the field. The UAV images were captured in July 2019 at a flying altitude of 270 m, with the lateral and longitudinal overlaps set to 60% and 75%, respectively. A total of 430 single photos were taken. Image data processing was conducted with Pix4DMapper 3.2 stitching software. First, the single images and the corresponding POS (position and orientation system) data, after color correction and screening, were imported into Pix4DMapper 3.2 to generate an orthophoto of the post-fire forest by image stitching. The subsequent steps were as follows: complete the aerial triangulation calculations automatically, densify the point cloud, mosaic, and output a visible-light digital orthophoto map (DOM). Finally, the stitched image was processed in ArcGIS 10.5 software, and the most typical sample image in the interior of the experimental forest fire area was obtained (Figure 2).

Analysis of Image Characteristics of Forest Fire Damage
After a forest fire, tree death results from a long process affected by the combined effects of canopy, cambium, and root burns, climate, and pests. The proportion of the canopy burnt is the most crucial indicator for predicting the survival of trees after a disaster and can be used to categorize damage. According to the forest burn severity field survey standards issued by the Chinese State Forestry Bureau [8], the image segmentation types are defined in four categories: burnt, dead, damaged, and unburned. Burnt forest refers to trees whose crowns are entirely burnt and whose trunks are so heavily burned that they cannot be used as timber after logging. The reflection spectrum of burnt forest shows the disappearance of the small green reflection peak and a minor rise in the red band, which appears as a gray tone in the RGB color image. Dead forest is forest in which over 2/3 of the crown is burned; such trees cannot recover but can still be used as timber. The spectral reflectance of dead forest is enhanced in both the red and blue bands, the overall color is mainly light brown, and the crown shapes are relatively complete; however, the chlorophyll reflection effect is significantly weakened and green appears only occasionally. Damaged forest means that 1/4 to half of the crown is damaged, with branches and leaves burned; its spectral reflectance presents a decrease in the green band, and the crowns appear green mixed with brown. Unburned forest consists of trees whose crowns, trunks, and bases are not burned, and its spectral characteristics are the same as those of undamaged wood. The image interpretation symbols and characteristics are shown in Table 1. According to the image coverage of the study area, four auxiliary segmentation categories are added: reservoir, bared land, cement building surfaces, and shallow water.

Forest Fire Loss Recognition Model
The characteristics reflected by image objects are the key elements for identifying ground objects in remote sensing images, and the spectral, geometric, and texture features of the images are generally used as the basis for identification. Trees with different degrees of damage show spectral differences in the visible RGB bands during chlorophyll loss, so customized classification spectral features were established based on this spectral specificity. The geometric characteristics are described by the object's length-width ratio, area, and compactness. Texture characteristics are analyzed using the mean, standard deviation, homogeneity, contrast, dissimilarity, entropy, angular second moment, and correlation of the Grey Level Co-occurrence Matrix (GLCM). Meanwhile, to highlight the target features in the image and increase the distinguishability of different ground objects, the vegetation index and the HIS transform components of the UAV visible-light image are introduced. Here, H (Hue), I (Intensity), and S (Saturation) represent the color class of the ground objects, the brightness, and the color saturation, respectively, which can be used as a basis for classifying ground objects with apparent color differences [40].
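To make the texture computation concrete, the GLCM statistics listed above can be sketched in plain NumPy. This is a minimal illustration for a single horizontal offset with a symmetric, normalized matrix; the function and dictionary key names are ours, not the paper's, and a production pipeline would average over several offsets and angles.

```python
import numpy as np

def glcm_features(patch, levels=256):
    """GLCM texture features (horizontal neighbor, symmetric, normalized)
    for a 2-D uint8 image patch, computed with plain NumPy."""
    a, b = patch[:, :-1].ravel(), patch[:, 1:].ravel()   # horizontal pixel pairs
    glcm = np.zeros((levels, levels))
    np.add.at(glcm, (a, b), 1)
    glcm = glcm + glcm.T                                 # make symmetric
    p = glcm / glcm.sum()                                # normalize to probabilities
    i, j = np.indices(p.shape)
    mean = (i * p).sum()
    return {
        "mean": mean,
        "std": np.sqrt(((i - mean) ** 2 * p).sum()),
        "homogeneity": (p / (1.0 + (i - j) ** 2)).sum(),
        "contrast": ((i - j) ** 2 * p).sum(),
        "entropy": -(p[p > 0] * np.log2(p[p > 0])).sum(),
        "asm": (p ** 2).sum(),                           # angular second moment
    }
```

A perfectly uniform patch, for example, yields homogeneity 1, contrast 0, and entropy 0, while textured crowns produce lower homogeneity and higher entropy, which is what the classification rules exploit.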
The RGB spectral characteristics of unburned forest, damaged forest, dead forest, burnt forest, bared land, cement surfaces, and water bodies observed in the UAV images after the forest fire were analyzed. With the help of the remote sensing software ENVI 5.3, more than 30 sample areas of each type of ground object were collected, and the spectral mean and luminance mean of each class were computed. The spectral curves of all ground object types are shown in Figure 3.

Burnt forest
Ground surveys show that burnt forest generally presents in three states: (1) the main body of the tree remains, the trunk is severely burnt, and the crown is entirely burnt; (2) the trunk cambium is burnt and broken, and branches and trunks are scattered on the ground; (3) the main body of the tree has burned away, leaving charred stumps, and the crown has disappeared.
The image shows that burnt forest has no canopy features; the tone is gray, and the texture is irregular, spotted, or granular. The green spots within the area are shrubs and grasses recovering on the ground; they are not considered because the recovery is limited.

Dead forest
Ground surveys show that the tree crowns of dead forest are in different states due to different flame intensities: (1) 2/3 or more of the trunk cambium is dead, and the crown has turned tan; (2) scorched (dark brown) and dead (tan) portions of the crown are distributed in different proportions; (3) 2/3 or more of the crown is scorched, and the less severely burned part of the crown remains green.
The image shows that, due to differences in flame intensity, the leaves in the dead forest area are dehydrated, resulting in irregular changes in image tone, mostly tan or patchy with a small amount of green, and locally uneven texture characteristics.

Damaged forest
Ground surveys show that 1/4 to half of the tree crown has burned branches and leaves, appearing tan with a slight green tone; the vegetation can continue to grow after a period of restoration.
The image shows that the crown of the damaged forest area presented brown and yellow-green patches.

Unburned forest
Ground surveys show that the trunk cambium and bases of the trees are not injured and the crowns are not burned. The image shows that the crowns are in a normal state, with rich and complex textures.
After determining the feature categories based on the UAV visible-light remote sensing images, we analyzed the distribution of the three RGB characteristic parameters for each type of ground object. Based on the spectral statistics, the diversity of the RGB parameters across ground object types was studied; the results are shown in Figure 4. The figure shows that the pixel distributions of burnt and dead forest differ significantly in the R band, and the pixel distribution of unburned forest differs from those of burnt and dead forest in both the G and R bands. Therefore, a vegetation index combining the G and R bands can be constructed to distinguish the corresponding ground object types, while bared land and cement surfaces can be differentiated by an index built on the difference between the G and B bands. This research uses the ESP (estimation of scale parameter) method proposed by Drǎguţ [41] to calculate the optimal segmentation scale. ESP introduces the local variance (LV), the average standard deviation within the segmentation results, and uses the rate of change (ROC) of LV to detect scale transitions; a pronounced peak indicates a candidate optimal scale. The ROC formula is as follows:

ROC = ((LV_n − LV_{n−1}) / LV_{n−1}) × 100%

where LV_n is the average standard deviation of the objects at the target level n, and LV_{n−1} is the average standard deviation at the level n−1 below the target level. Through continuous segmentation experiments, the optimal segmentation scales for the various features in the original image were determined, as shown in (Figure 5a).
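As a worked illustration of the ROC computation, the following sketch derives the ROC series from a sequence of LV values and reports the scales at local ROC peaks. The LV numbers in the usage line are invented for the example; they are not the study's measurements.

```python
import numpy as np

def roc_of_lv(lv):
    """Rate of change of local variance across increasing scale levels:
    ROC_n = (LV_n - LV_{n-1}) / LV_{n-1} * 100."""
    lv = np.asarray(lv, dtype=float)
    return (lv[1:] - lv[:-1]) / lv[:-1] * 100.0

def candidate_scales(scales, lv):
    """Return the ROC series and the scales at local ROC peaks,
    which are candidate optimal segmentation scales."""
    roc = roc_of_lv(lv)
    peaks = [scales[k + 1] for k in range(1, len(roc) - 1)
             if roc[k] > roc[k - 1] and roc[k] > roc[k + 1]]
    return roc, peaks

# Hypothetical LV values measured at five trial scales:
roc, peaks = candidate_scales([10, 20, 30, 40, 50], [1.0, 1.1, 1.5, 1.55, 1.6])
print(peaks)  # → [30]  (LV jumps most sharply between scales 20 and 30)
```

In practice the peak list from ESP still needs to be cross-checked by visual inspection of the segmentation results, as the paper does with its repeated segmentation experiments.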
Considering that the spectral information at the edges of undamaged tree canopies is not uniform, there are many small pixel spots and large numbers of forest seam gaps. The image object segmentation process is easily affected by this irrelevant information, which results in fragmentation. This research therefore applies mathematical morphology filtering with a 3 × 3 structural element to simplify the UAV images, removing noise and eliminating some redundancy [42]. Subsequently, the peak of the LV-ROC broken line (Figure 5b) was used to determine the first-rank segmentation scale.
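The 3 × 3 morphological simplification can be sketched with SciPy's grayscale morphology. The pairing of an opening (removes small bright specks) with a closing (fills small dark gaps) is our illustrative choice; the paper specifies only the 3 × 3 structuring element.

```python
import numpy as np
from scipy import ndimage

def morph_smooth(band, size=3):
    """Grayscale opening followed by closing with a size x size structuring
    element; suppresses isolated specks smaller than the element."""
    opened = ndimage.grey_opening(band, size=(size, size))
    return ndimage.grey_closing(opened, size=(size, size))

# A lone bright pixel (e.g. a sunlit leaf speck) is removed by the opening:
band = np.zeros((9, 9))
band[4, 4] = 10.0
print(morph_smooth(band).max())  # → 0.0
```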

Object-Oriented Feature Information Extraction
Referring to the excess green index (EXG) and the visible-band difference vegetation index (VDVI) [43] proposed by predecessors, vegetation indices were extracted from the visible-light UAV remote sensing images. The formulas are as follows:

EXG = 2ρ_green − ρ_red − ρ_blue

VDVI = (2ρ_green − ρ_red − ρ_blue) / (2ρ_green + ρ_red + ρ_blue)

where ρ_green, ρ_red, and ρ_blue represent the average pixel values of the green, red, and blue bands, respectively. Figure 3 shows the characteristics and differences of the spectral response curves of the various ground object types. Unburned forest shows stronger reflection in the green band and stronger absorption in the blue band. Dead forest and bared land exhibit strong reflection in the red band and substantial absorption in the blue band. Burnt forest and water bodies exhibit stronger reflection in the blue band and stronger absorption in the red band. To highlight the characteristics of the various ground object types, customized spectral features are further constructed.
where A and C represent the customized spectral features A and C, respectively, and ρ_green, ρ_red, and ρ_blue represent the average pixel values of the green, red, and blue bands, respectively.
To distinguish ground objects with significant color differences, the F and N spectral characteristics [44] proposed by predecessors were adopted to identify unburned forest, burnt forest, dead forest, and reservoir areas.
where F and N represent the customized spectral features F and N, respectively, and ρ_green, ρ_red, and ρ_blue represent the average pixel values of the green, red, and blue bands, respectively.
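The two published indices, EXG and VDVI, can be computed directly from the RGB bands; a minimal sketch follows. The customized A, C, F, and N features are defined by the paper's own equations and are therefore not reproduced here.

```python
import numpy as np

def visible_indices(rgb):
    """EXG and VDVI from an RGB array of shape (H, W, 3), values in [0, 1].
    EXG  = 2G - R - B
    VDVI = (2G - R - B) / (2G + R + B)"""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    exg = 2.0 * g - r - b
    # Guard the denominator against an all-zero (black) pixel:
    vdvi = (2.0 * g - r - b) / np.maximum(2.0 * g + r + b, 1e-6)
    return exg, vdvi

# A pure-green pixel maximizes both indices; a pure-red pixel is negative:
pix = np.array([[[0.0, 1.0, 0.0]], [[1.0, 0.0, 0.0]]])
exg, vdvi = visible_indices(pix)
print(exg.ravel())   # → [ 2. -1.]
print(vdvi.ravel())  # → [ 1. -1.]
```

This sign behavior is what allows green (unburned or damaged) crowns to be separated from gray burnt areas and tan dead crowns in the rule set.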

Support Vector Machine Extraction of Forest Fire Damage Information
Forest burn severity is also extracted by the support vector machine (SVM) method to verify the performance of the object-oriented method. The SVM method is based on statistical learning theory. For the multi-class situation involved in this study, a combined model containing multiple binary classifiers can be constructed. Commonly used schemes are one-to-one modeling and one-to-many modeling [45]. The one-to-one scheme constructs N(N − 1)/2 classifiers for an N-class problem; it requires many classifiers and calculations and suffers from sample mixing [46]. The one-to-many scheme constructs N classifiers for an N-class problem; it requires relatively few classifiers, has a simple structure, and can achieve the same classification effect [47]. Therefore, this study used the one-to-many approach for forest burn severity information extraction.
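The one-to-many (one-vs-rest) scheme can be illustrated with scikit-learn. The feature table below is a synthetic stand-in for the per-object spectral and texture features; the cluster means and all parameter values are illustrative, not the study's data.

```python
import numpy as np
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import SVC

# Hypothetical feature table: one row per segmented object (e.g. band means
# and GLCM statistics), one burn-severity label per row.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=m, scale=0.3, size=(40, 4))
               for m in (0.0, 1.5, 3.0, 4.5)])
y = np.repeat(["unburned", "damaged", "dead", "burnt"], 40)

# One binary RBF-kernel SVM per class, as in the one-to-many scheme:
clf = OneVsRestClassifier(SVC(kernel="rbf", C=1.0, gamma="scale"))
clf.fit(X, y)
print(len(clf.estimators_))  # → 4 (N classifiers for N = 4 classes)
```

For these well-separated synthetic clusters the training accuracy is essentially perfect; on real segmented objects the paper reports 76.69% overall accuracy for the SVM approach.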

Multiscale Segmentation Classification Hierarchy
Through continuous segmentation experiments, the eCognition 9.1 tools (Definiens Imaging GmbH, Munich, Germany) were combined with the ESP evaluation model to obtain the optimal segmentation scales for the different ground feature types. Following the principle of segmenting from large to small scales [48], a multi-scale segmentation classification hierarchy for the different ground features was established (Table 2). Table 2 shows that the optimal segmentation parameters differ by feature type. Burnt forest is distributed over a large, relatively concentrated area with good homogeneity, so it is suitable for large-scale segmentation; the best effect is obtained with a scale of 368 and shape and compactness factors of 0.3 and 0.6. Dead forest mostly appears as tan patches that are easily segmented and extracted compared with damaged forest; it is best segmented at an intermediate scale of 335 with shape and compactness factors of 0.3 and 0.7. The optimal segmentation scale for unburned forest is 452, with shape and compactness factors of 0.4 and 0.6. Damaged forest patches contain tan, green, and black areas; their composition is complex, and they are not easy to divide compared with the small areas of bared land, cement surface, and shallow water. When damaged forest and other small-area ground features are segmented together, priority should be given to the small-scale features; the experiments show that with segmentation parameters of 260, 0.4, and 0.7, the small-area features are well delineated without over-segmentation. In contrast, the physical composition of the reservoir is relatively uniform and its boundary is clear; a segmentation scale of 438 with a shape factor of 0.7 and a compactness factor of 0.6 is optimal.

Extraction of Forest Fire Loss Information
Based on the optimal segmentation scales, the spectral and texture characteristic parameters are selected for the different ground features, and the thresholds for each ground object are determined on the eCognition software platform through human-computer interaction. Hierarchical extraction of the forest fire damage areas is then performed according to the different classification rules.
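The hierarchical rule logic can be sketched as a cascade in which each level claims the objects satisfying its rules and passes the rest downward, mirroring the LEVEL 1-5 order described below. Every threshold in this sketch is an illustrative placeholder chosen for the example; the study's actual thresholds were tuned interactively in eCognition and are given in its rule table.

```python
def classify_object(feat):
    """Toy multilevel rule cascade over per-object features (a dict of
    band means, VDVI, texture homogeneity, and intensity, all in [0, 1]
    except VDVI). Thresholds are illustrative, not the study's values."""
    if feat["vdvi"] > 0.05 and feat["homogeneity"] < 0.7:
        return "unburned"    # LEVEL 1: green, richly textured crowns
    if feat["blue_mean"] > 0.5 and feat["intensity"] < 0.3:
        return "reservoir"   # LEVEL 2: dark, blue-dominant water body
    if feat["blue_mean"] > feat["red_mean"]:
        return "burnt"       # LEVEL 3: gray tone, blue-band reflection
    if feat["red_mean"] > feat["blue_mean"] and feat["homogeneity"] < 0.9:
        return "dead"        # LEVEL 4: tan crowns, red-band reflection
    return "damaged"         # LEVEL 5: remaining mixed green/brown objects
```

The "unclassified remainder becomes damaged forest" fallback in the last line reflects the paper's LEVEL 5 design, where damaged forest is defined as whatever the earlier levels did not claim.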
Through continuous information extraction tests, we first focus on the large-area unburned forest in the first-layer, morphology-filtered image. Unburned forest contains shrubs and grass, various tree species, forest gaps, and crown-edge shadows. Although it is spectrally distinguishable from the other ground features, its distribution is complex, so the undamaged trees are extracted as three subtypes. Unburned forest 1 mainly includes shrubs and grass, with bright green tree crowns. Unburned forest 2 consists of dark green forest and grass areas. Unburned forest 3 consists of areas with a wide variety of forest gaps. Additionally, the spectral response curves of unburned and damaged forest are similar, and misclassification occurs when only spectral characteristics are used. Therefore, the GLCM correlation component of the texture features and the HIS components are used as important discriminating features, with the band ratios, the length-width ratio, and the band means as auxiliary attributes. The specific classification rules are shown in Table 3. The first layer is followed by extraction of the other ground features from the original UAV image; the first-layer classification results are merged and inherited at the LEVEL 2 layer. According to the segmentation scale results, the second layer mainly acquires the reservoir information. As Figure 3 shows, the reflectivity of the water body is highest in band 3; preliminary extraction uses the customized spectral feature A, and feature F and component I then separate the reservoir from the other objects based on differences in color and brightness. The LEVEL 2 segmentation results are then merged and inherited at the LEVEL 3 layer, which principally extracts the burnt forest areas.
The value ranges of wholly burnt forest and the other ground objects do not overlap in bands 1, 2, and 3, so this information is easily differentiated; burnt forest reflects more strongly in the blue band and absorbs significantly in the red band, and large-area extraction can combine the A, C, and F features. The extraction results were confused to some extent between the shadows of forest gaps and damaged forest crowns. However, most forest gap shadows retain green tones and can be distinguished using the G to R and G to B band ratios and the VDVI, while the remaining confusion can be resolved by combining the mean and contrast components of the texture features. The segmentation results are again merged and inherited at the lower layer. LEVEL 4 is mainly designed for forest that died after the disaster. Dead forest has strong reflection in the red band and strong absorption in the blue band; hence, large-area extraction can be performed by combining the A, C, and N features. Figure 3 shows that the spectral curve of dead forest is similar to that of bared land, leading to partial mis-extraction. Because the bared land has high brightness and homogeneity, the homogeneity component of the texture features is used to remove the misclassified bare ground. In addition, a small amount of sporadic green canopy exists in some dead forest areas, and a combination of the luminance mean, component H, and feature N is used to recover the omitted parts. Similarly, the segmentation results are merged and inherited at the lower layer.
The remaining features include bared land, cement surfaces, shallow water, and damaged forest. Because LEVEL 5 uses the smallest segmentation scale, there is partial over-segmentation in the damaged forest areas. Accordingly, the three better-segmented feature types are extracted first. Comprehensive analysis demonstrated favorable texture separability for bared land, cement surfaces, and shallow water, and these three types are easily distinguished by combining features N and A. Finally, the unclassified area is defined as damaged forest.
Based on the above analysis, the classification rules for the different levels and features are established (Table 3). The ground feature types extracted at each segmentation layer are synchronized to the same layer, and a thematic map of the forest fire damage types is obtained (Figure 6); the method also extracts the shallow water areas, and the classification effect is significantly better than that of the thematic map obtained by the support vector machine method (Figure 7). Figure 6 illustrates that the burned area shows a flower-like spatial pattern of forest damage. In accordance with the topography and the distribution of combustibles, the burnt forest is distributed over the middle and upper parts of the mountain slope; the transition to the downslope area consists of dead forest, and the low-humidity gully bottom holds the damaged forest, revealing a ring structure, while irregular unburned forest appears in the valleys and on the ridges, chiefly related to the dampness of the valleys, strong winds on the ridges, the scarcity of combustible material, and other factors.

Forest Burn Severity Information Extraction Accuracy
The object-oriented multi-scale, multi-level classification method and the support vector machine method were used to obtain forest damage information for the research area. We combined 515 randomly selected image points with field survey samples to build the classification error matrices and accuracies of the two methods (Tables 4 and 5). The tables show that the overall accuracy of the object-oriented multi-scale, multi-level classification method is 87.76% with a Kappa coefficient of 0.8402, while the overall accuracy of the support vector machine method is 76.69% with a Kappa coefficient of 0.6964. The object-oriented method therefore outperforms the support vector machine method, and its information extraction accuracy is high. The results show that light and small UAVs can be used to identify the impact of fire on forests. Among the damage types, the user accuracy for burnt forest is as high as 97.87%; burnt forest differs significantly from the other damage types in its spectral characteristics because it has no crowns. Second, the user accuracy for unburned forest is 93.54%; unburned forest occupies the margins of the image, adjoins only the damaged forest, and is separated from the dead and burnt forest, so there are no interlaced areas. Meanwhile, both damaged and unburned forest show green features, so their spectral signatures are close to each other; as a result, some damaged forest information was inevitably omitted when extracting the unburned forest information.
Table 4. Accuracy evaluation of the object-oriented classification.
On the other hand, the accuracies for damaged forest and dead forest are close, approximately 79.5%. Their relatively low classification accuracy is related to the fact that dead and damaged forest differ somewhat in spectral characteristics but show few differences in texture and shape. The yellowish-green and tan features of the damaged forest are visible in the image, so the differences in internal characteristics between local patches of damaged forest and those of unburned and dead forest are relatively insignificant. Furthermore, the calculation of characteristic values such as color and texture also affects the distinction between damage types. At the same time, damaged forest is located adjacent to unburned and dead forest, and dead and burnt forest are close to each other. Because the various objects burn to different degrees in a forest fire, the patches of different types are broken and crisscrossed, and their connectivity is poor. Therefore, both damaged and dead forest cause misclassifications, and many adjacent patches appear in the extraction. It is difficult to separate these completely during image segmentation, and erroneous classification of spots within zones results in lower classification accuracy.
According to the statistics, the total area of damaged, dead, and burnt forest is 54.4 hm². The total acreage of damaged vegetation determined from visual interpretation is 55.24 hm². The error rate relative to the visual interpretation is 1.52%, which indicates that the extraction accuracy of the overall damaged area is higher than that of previous UAV surveys. In addition, it meets the 95% accuracy requirement for special surveys and design in the forestry industry. This conclusion reveals that visible-light UAV images are appropriate for investigating forest fires.
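The overall accuracy and Kappa coefficient reported above follow the standard confusion-matrix definitions, which can be sketched as follows. The 2 × 2 matrix in the usage line is a toy example, not the study's error matrix.

```python
import numpy as np

def overall_accuracy_and_kappa(cm):
    """Overall accuracy and Cohen's Kappa from a square confusion matrix
    (rows = reference classes, columns = mapped classes)."""
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    po = np.trace(cm) / n                                  # observed agreement
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n ** 2  # chance agreement
    return po, (po - pe) / (1.0 - pe)

oa, kappa = overall_accuracy_and_kappa([[9, 1], [1, 9]])
print(round(oa, 2), round(kappa, 2))  # → 0.9 0.8
```

Applied to the study's 515-point error matrices, these formulas yield the reported 87.76% / 0.8402 for the object-oriented method and 76.69% / 0.6964 for the SVM.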

Discussion
Visible-light high-resolution remote sensing images obtained by the UAV contain only the three RGB bands. The limited spectral information in these bands is not conducive to distinguishing damaged forest, unburned forest, dead forest, and bare land areas with similar spectral characteristics. However, object-oriented multi-scale, multi-level rule classification methods can integrate the geometric, textural, and spectral characteristics of objects to extract ground features, which benefits forest burn severity identification. At the same time, establishing feature rules for the corresponding ground objects in the fire area realizes the extraction of different forest burn severity degrees and can serve as an alternative to the forest damage level assessment in the national forestry industry standards. However, the complexity of the rules remains a practical obstacle to efficient application. Multispectral UAV images, or the integration of multiple remote sensing data sources, could introduce near-infrared or red-edge bands, which are sensitive to vegetation, and strengthen the investigation of forest resource loss from crown fires. For instance, UAV multispectral images have been used to classify the severity of forest fires with high accuracy [28]. Regarding methods, machine learning has become a vital driving force in the development of artificial intelligence; it provides algorithms that learn rules from data and use them to predict unknown data automatically. Compared with traditional classification methods, machine learning algorithms are well suited to classifying datasets too large for manual processing [49][50][51]. For example, Bui D T et al.
applied multiple adaptive regression splines and a machine learning method based on differential flower pollination optimization to predict the spatial patterns of forest fires in Lao Cai, Vietnam [52]. A UAV-based forest fire detection method using a convolutional neural network was proposed by Chen Y et al. [53], who verified the effectiveness of the fire detection algorithm using flame simulations on an indoor laboratory bench. Therefore, machine learning algorithms are a primary prospect for acquiring highly accurate forest fire damage categories, which is of great significance to the sustainable development of forests.
Determining the optimal segmentation scale is crucial for obtaining high-precision classifications in multi-scale segmentation: both over-segmentation and under-segmentation lead to classification errors [54]. However, the optimum scale is a relative range that is usually determined by repeated segmentation experiments, and judging the first-rank segmentation scale from a single group of experiments is time-consuming. Thus, we adopted the ESP scale parameter model, in which peak points with noticeable ROC changes are identified and the optimal segmentation scale parameters are confirmed as efficiently as possible [55,56]. During the classification process, the operator must participate in the selection of scale parameters and the establishment of feature rules, which places significant demands on the operator, who must determine segmentation parameters and establish feature rules from empirical knowledge to segment the targets clearly.
Evaluating the accuracy of forest fire damage information currently relies on visual interpretation of remote sensing images and randomly selected verification sample points, an approach that inevitably involves subjective and uncertain factors. Future work should therefore increase the number of field-collected verification samples or authoritative ground sites used to assess and verify classification accuracy. Additionally, although high-resolution UAV images have a strong ability to recognize ground features, images affected by cloudy weather and the sun's illumination angle contain areas obscured by clouds and hilltop shadows. In visible-light images, the spectral features of these shaded areas resemble those of burnt forest. Reducing the influence of shadow areas on information extraction, for example through terrain-based spectral restoration of the imagery, is a way to further improve accuracy.
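The overall accuracy and Kappa coefficient reported in this study are both derived from the validation confusion matrix. A minimal sketch of that standard computation; the 4 × 4 matrix below is hypothetical, not the study's validation data:

```python
def accuracy_and_kappa(cm):
    """Overall accuracy and Cohen's Kappa from a confusion matrix.

    cm[i][j]: number of validation samples of reference class i
    assigned to class j by the classifier.
    """
    n = sum(sum(row) for row in cm)
    diag = sum(cm[i][i] for i in range(len(cm)))
    oa = diag / n                      # observed agreement
    # Chance agreement expected from row/column marginals.
    pe = sum(sum(cm[i]) * sum(row[i] for row in cm)
             for i in range(len(cm))) / n ** 2
    return oa, (oa - pe) / (1 - pe)

# Hypothetical counts for the four classes
# (unburned, damaged, dead, burnt).
cm = [[40,  2,  1,  0],
      [ 3, 35,  4,  1],
      [ 1,  3, 38,  2],
      [ 0,  1,  2, 42]]
oa, kappa = accuracy_and_kappa(cm)
```

A Kappa above 0.8 is conventionally read as almost perfect agreement, which is why the study's value of 0.8402 supports replacing the field survey.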
Despite their high resolution and maneuverability, small, light UAVs are better suited to investigating forest damage over small areas. Low- and medium-resolution satellite images can monitor large-scale fires in real time, but their low spatial resolution makes it impossible to extract tree-level damage information in fire-stricken areas [57,58]. High-resolution satellite images, with their high definition and abundant spatial information, can compensate for the pixel-mixing shortcomings of low- and medium-resolution imagery when classifying tree damage levels, but they are expensive, difficult to obtain, and not suitable for direct processing [59,60]. Therefore, for large damaged areas, fire investigations can combine UAV and satellite remote sensing data: UAV images, with their high spatial and temporal resolution, serve as point-scale surveys, while low- and medium-resolution satellite images provide large-area inspections. Merging such point and surface data is essential to efficiently assess forest fire damage severity over large areas.

Conclusions
This research adopts an object-oriented multilevel rule classification approach that combines spectral, geometric, and texture indicators, the comprehensive visible-band indices A and C, and other image features. UAV visible-light remote sensing images were used to extract information on the degree of forest fire damage. The accuracy analysis shows that the overall accuracy reaches 87.76% and the Kappa coefficient is 0.8402, indicating that UAV remote sensing is suitable for investigating fire damage to forests and can replace traditional visual interpretation of satellite imagery or field survey methods.
(1) Burnt and dead forests can be extracted effectively from UAV visible-light remote sensing images using the customized comprehensive spectral indices A and C, which reflect the weakening or decline of the chlorophyll effect in fire areas.
(2) The object-oriented multilevel rule classification method for forest burn severity, combined with morphological filtering of the image, improves the segmentation effect and classification accuracy. Moreover, the results are represented at the object or patch level, which has greater practical significance.
(3) Compared with traditional survey methods, UAV remote sensing extracts forest burn severity more efficiently, provides a vital supplement to large-area satellite remote sensing, and offers an alternative to the forest loss assessment in national industry standards.
In brief, this research is of great practical significance for improving the efficiency and precision of forest fire investigation, expanding applications of small UAVs in forestry, and developing an alternative for forest fire loss assessments in the forestry industry. However, further effort is needed to identify forest burn severity at large scales by combining satellite remote sensing with UAVs, and more machine-learning-based extraction methods deserve attention.

Patents
Invention Title: Extraction method of forest fire damage degree based on light and small UAV; Patent Number: 202010015720.0.
Author Contributions: Conceptualization, methodology, software, investigation and writing-original draft preparation, J.Y. and Z.C.; validation, visualization, writing-review and editing, J.Y. and Q.L.; supervision, project administration and funding acquisition, J.Y. and F.Z. All authors have read and agreed to the published version of the manuscript.