
High-Resolution Estimates of Fire Severity—An Evaluation of UAS Image and LiDAR Mapping Approaches on a Sedgeland Forest Boundary in Tasmania, Australia

School of Science, RMIT University, Melbourne, VIC 3001, Australia
School of Geography, Planning, and Spatial Sciences, University of Tasmania, Hobart, TAS 7001, Australia
Bushfire and Natural Hazards Cooperative Research Centre, East Melbourne, VIC 3004, Australia
Author to whom correspondence should be addressed.
Current address: School of Science, RMIT University, Melbourne, VIC 3001, Australia.
Received: 3 February 2021 / Revised: 13 March 2021 / Accepted: 14 March 2021 / Published: 18 March 2021
(This article belongs to the Special Issue Bushfire in Tasmania)


With an increase in the frequency and severity of wildfires across the globe, and resultant changes to long-established fire regimes, the mapping of fire severity is a vital part of monitoring ecosystem resilience and recovery. The emergence of unoccupied aircraft systems (UAS) and compact sensors (RGB and LiDAR) provides new opportunities to map fire severity. This paper compares metrics derived from UAS Light Detection and Ranging (LiDAR) point clouds and UAS image-based products for classifying fire severity. A workflow is developed that derives novel metrics describing vegetation structure and fire severity from UAS remote sensing data, fully utilising the vegetation information available in both data sources. UAS imagery and LiDAR data were captured pre- and post-fire over a 300 m × 300 m study area in Tasmania, Australia. The study area featured a vegetation gradient from sedgeland vegetation (e.g., button grass, 0.2 m) to forest (e.g., Eucalyptus obliqua and Eucalyptus globulus, 50 m). To classify the vegetation and fire severity, a comprehensive set of variables describing structural, textural and spectral characteristics was derived from the UAS image and UAS LiDAR datasets. A recursive feature elimination process was used to identify the subsets of variables to be included in random forest classifiers. The classifiers were then used to map vegetation and severity across the study area. The results indicate that UAS LiDAR provided overall accuracy similar to the UAS image and combined (UAS LiDAR and UAS image predictor variables) data streams for classifying vegetation (UAS image: 80.6%; UAS LiDAR: 78.9%; and combined: 83.1%) and severity in areas of forest (UAS image: 76.6%; UAS LiDAR: 74.5%; and combined: 78.5%) and areas of sedgeland (UAS image: 72.4%; UAS LiDAR: 75.2%; and combined: 76.6%). These results indicate that UAS structure from motion (SfM) and LiDAR point clouds can be used to assess fire severity at very high spatial resolution.

1. Introduction

Many of the world’s ecosystems have co-evolved with specific regimes of fire [1,2,3,4], which includes the frequency, extent, season, intensity and subsequent severity of fire. Fire severity is a critical element of the fire regime because it can predicate the ecosystem response [5]. Fire severity was quantitatively defined by Keeley [6] as the change in vegetative biomass following fire. In the broader literature, measures of severity are informed by change indicators such as crown volume scorch, percentage fuel consumption and tree mortality [7,8,9,10,11].
Fire severity assessments can be completed using techniques ranging from traditional field-based visual assessments through to established and emerging remotely-sensed assessments. Remote sensing methods that measure fire severity have typically used passive sensors to capture imagery from satellite or fixed-wing platforms [12,13,14,15]. Satellite sensors provide large area coverage and can generally capture a complete view of large wildfires with the benefit of lower associated costs [12,16]. Satellite sensors are limited by the frequency of observations and the spatial resolution of the sensor in categorising fire severity. In contrast, fixed wing aerial capture has greater flexibility in deployment for capturing on-demand imagery with higher spatial resolution, albeit at significantly higher cost. Fire severity classifications have been derived from single-date and multitemporal captures using spectral indices [12,17,18]. Indices are generally selected to be sensitive to the changes in vegetation health and condition often caused by fires [19,20,21,22]. A threshold at the sampling resolution of the sensor can be implemented to characterise fire severity classes for field validation or aerial photo interpretation. It should be noted that aerial photo interpretation can be completed independently of spectral index implementation [23].
High-resolution imagery captured using unoccupied aircraft systems (UAS, also referred to as drones or unmanned aerial vehicles (UAVs)) has been used in conjunction with supervised classifications (algorithms that learn from a labelled dataset and whose accuracy can be evaluated against labelled data) to map fire severity [24,25,26,27,28,29]. Image capture from UAS presents a potential improvement in temporal and spatial resolution over airborne and satellite sensors for small areas, e.g., several hectares to a few square kilometres. UAS imagery has previously been used to monitor vegetation health and condition, forest condition, soil conditions and ecological planning [30,31,32,33]. High-resolution pre- and post-fire imagery has been used to derive difference burn ratios [24,34,35]. For example, McKenna et al. [24] applied the Excess Green Index, Excess Green Index Ratio and Modified Excess Green Index to derive fire severity maps with results comparable to multispectral satellite data using difference NDVI and difference NBR [21]. Arkin et al. [25] achieved an accuracy of 89.5% ± 1.5% at 5 m and 85.4% ± 1.5% at 1 m when applying a supervised classification to post-fire UAS imagery, employing textural and structural metrics as predictor variables to produce fire severity and land cover maps.
UAS LiDAR systems provide a means to collect high-resolution 3D data. The high density data collected from UAS platforms have been used to derive metrics of tree height, canopy and density [36,37,38,39]. Recently, UAS LiDAR has also been used to detect fine-scale vegetation which would contribute to fire behaviour beneath the canopy [40], with the active sensor allowing for penetration through the canopy to resolve below-canopy vegetation. The applicability of this technology for detecting structural change has predominantly been demonstrated in forestry contexts [41,42]. Jaakkola et al. [41] demonstrated the ability of UAS LiDAR to detect changes within the canopy after branches and leaf material were physically removed. Wallace et al. [42] produced similar results, with UAS LiDAR point clouds successfully showing change from pruning in a Eucalyptus stand.
Few studies have investigated the link between multi-temporal vegetation structural characteristics and wildfire severity [43,44,45,46]. Prior research has shown the utility of UAS point clouds for measuring disturbance, and of UAS imagery for measuring fire severity [24,25,26,27,28,29]. To the authors' knowledge, no studies have evaluated pre- and post-fire UAS LiDAR variables to map land cover and fire severity across a sedgeland–forest boundary. There is an unresolved debate about the importance of fire, soil or both factors in maintaining these boundaries [47]. The objective of this study was to evaluate the effectiveness of structural metrics derived from UAS LiDAR for predicting fire severity. The first stage of the study applied a supervised classification to pre-fire UAS imagery and UAS LiDAR variables to map land cover. The second stage classified fire severity within each land cover class to map fire severity across the study area. The study provides a comparison of accuracy between image-only, LiDAR-only and combined LiDAR and image predictor variables for mapping land cover and fire severity.

2. Materials and Methods

2.1. Study Area and Fire

The Weld River study area is located approximately 50 km southwest of Hobart in Tasmania, Australia (Figure 1). The study area consists of a 300 m × 300 m plot that captures a sedgeland–forest boundary; vegetation types vary from Gymnoschoenus sphaerocephalus (button grass) plains in the north of the plot to Melaleuca squamea and Eucalyptus nitida approximately 4 m high in the intermediate zone, grading to a tall forest that at this site is dominated by Eucalyptus obliqua and Eucalyptus globulus. The dominant understorey species within the tall forest were Monotoca glauca and Pomaderris apetala. There are significant variations in topography, ranging from 40 m to 68 m above mean sea level, with gullies present throughout the study area. The Weld River bisects the southwest corner of the study area.
Pre-fire datasets were acquired in September 2018. Following this data acquisition, a wildfire (Riveaux Road fire complex) occurred in January 2019 [48]. Post-fire datasets were acquired in May 2019.

2.2. Data Collection and Pre-Processing

2.2.1. Ground Control

To co-register the data derived from the respective sensors pre- and post-fire, ten Propeller Aeropoints ground control targets were distributed throughout the plot at locations that provided clear-sky views and allowed for a strong network geometry. The position of each target was calculated through the onboard GNSS receiver and post-processed against base stations from a Continuously Operating Reference Station (CORS) network. A base station was also set up on the northwestern edge of the plot, which remained running for the duration of both surveys—approximately 5 h each time. This base station was used to provide correction information for the positioning unit integrated in the UAS LiDAR system.

2.2.2. UAS LiDAR

LiDAR data were captured with two separate sensor systems pre- and post-fire event. Pre-fire data were captured with a custom-built UAS developed at the University of Tasmania, Australia. This system consisted of a DJI M600 platform, a Velodyne Puck (VLP-16) LiDAR scanner and an Advanced Navigation Spatial Dual coupled GNSS and IMU sensor. The VLP-16 scanner features 16 scan layers with a 30° vertical Field Of View (FOV), which equates to a 15° forward and backward distribution of the scan lines in the flight direction (+15° to −15° from nadir) and scan lines that are separated by approximately 2°. A maximum of two laser returns per pulse are collected with 300,000 pulses per second for the full 360° view of the scanner. The scan angle was limited to −40° to +40° in the across-track direction (80° field-of-view), resulting in approximately 60,000 pulses per second. The scanner has a horizontal beam divergence of 0.18° (3 mrad) and a vertical beam divergence of 0.07° (1.2 mrad). Data were processed using in-house software developed specifically for the University of Tasmania UAS LiDAR system, which has also been used for the production of point clouds in [49,50,51].
Post-fire data were captured with a RIEGL miniVUX-1 LiDAR scanner integrated with the APX-15 IMU sensor onboard a DJI M600 platform. The miniVUX-1 is a rotating mirror scanner with a 360° FOV. A maximum of five returns per pulse are collected with 100,000 pulses per second and a beam divergence of 1.6 × 0.5 mrad. Data were processed using the RIEGL UAS workflow by firstly adjusting the trajectory of the flight lines using the on-board IMU and GNSS with local corrections using POSPac software. The trajectory of the flight lines was then adjusted in RiProcess, with segments of the flight lines trimmed to cover the plot and scan angles reduced to the same parameters as the pre-fire dataset. Lastly, point clouds were extracted to LAS format and merged in CloudCompare v2.12 [52].
The flying height and flight pattern were identical between the two captures with flights completed 20 m above the highest canopy element and the overlap between flight strips being approximately 50%. Both point clouds were filtered to only include first returns.

2.2.3. UAS SfM

Images were captured using a DJI Phantom 4 Pro using the integrated RGB camera, which has an 8.8 mm nominal focal length and a 25 mm CMOS 20-megapixel sensor with 2.41 × 2.41 μm nominal pixel size [53]. The UAS was flown at a flying height of 60 m above ground level. Nadir imagery was captured within two separate flights with 90% forward and sidelap. Due to changes in lighting conditions across the plot, camera settings were manually set to balance exposure of captured surfaces. This meant that, while the flight path was the same pre- and post-fire, the camera settings used were different (pre-fire: ISO 400, shutter speed 1/500 s, f/3.2; post-fire: ISO 320, shutter speed 1/400 s, f/2.8).
Images were downloaded from the UAS and processed to form a point cloud using Agisoft Metashape Professional v1.5.0 software [54]. A sparse point cloud was generated using the high-quality alignment setting, in which common features were found within the image set. Images were then aligned based on an iterative bundle adjustment to estimate the 3D positions of the matched features. Ground control targets were then identified within the images to georeference the point clouds, in turn facilitating direct comparison to point clouds derived from laser scanning. The high-quality setting and mild depth filtering were then applied to generate a dense point cloud. Finally, orthophotos with a ground sampling distance of 0.1 m were created within the Metashape software. Manual noise removal was completed to remove spurious points beneath the ground.
The RGB colour space was then converted to LAB space. The L*a*b*, or CIELab, colour space is an international standard for colour measurement and was preferred over RGB space due to its stronger differentiation of red and green [55]. L is the luminance or lightness component, which ranges from 0 to 100, and parameters A (from green to red) and B (from blue to yellow) are the two chromatic components, which range from −120 to 120 [56].
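The conversion can be reproduced with scikit-image's `rgb2lab` (a sketch; the paper does not name the conversion library):

```python
import numpy as np
from skimage import color

# One pure-red pixel as a float RGB image in [0, 1]
rgb = np.array([[[1.0, 0.0, 0.0]]])
lab = color.rgb2lab(rgb)
L, a, b = lab[0, 0]
# L (lightness) lies in 0-100; a is strongly positive (red side) for this pixel
```

Note that scikit-image follows the usual CIELab convention of roughly −128 to 127 for the chromatic channels, slightly wider than the −120 to 120 range quoted above.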

2.2.4. Reference Data

To generate reference data for the model, a desktop assessment of vegetation type and severity was completed utilising a methodology similar to those described by McKenna et al. [24] and Arkin et al. [25]. The plot was first tiled into 10 m × 10 m squares. Within each tile, two randomly generated points were assigned. These points were split into two unique collections of approximately 250 points each and given to two separate groups of assessors. Each group consisted of three assessors. For each point, a visual assessment of the orthophoto was undertaken to determine the vegetation class and severity. Once all assessments had been completed, points were summarised to form a final training dataset for each collection of points. A point was included in the training dataset if two or more assessors agreed on the vegetation class and severity assessment. Once the training dataset was finalised, a spatial join was completed to assign the assessed vegetation and severity value to a segment. Two stages of random forest (RF) classification were run to emulate the process which assessors completed: first to develop a vegetation classification using only metrics derived from pre-fire products, and subsequently to assess fire severity.
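The agreement rule (a point is retained only when at least two of the three assessors concur) can be sketched as follows; the tuple labels are illustrative, not the study's actual coding:

```python
from collections import Counter

def consensus(labels):
    """Majority (class, severity) label among assessors; None if no label reaches 2 votes."""
    label, count = Counter(labels).most_common(1)[0]
    return label if count >= 2 else None

# Two of three assessors agree -> point is kept with the majority label
kept = consensus([('forest', 'severe'), ('forest', 'severe'), ('sedgeland', 'unburnt')])
# All three disagree -> point is discarded
dropped = consensus([('forest', 'severe'), ('sedgeland', 'unburnt'), ('water', 'unburnt')])
```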

2.3. Data Co-Registration

Pre to Post Point Clouds

Point clouds were first clipped to ensure the same geographic area was being analysed and compared (this included removing the watercourse and all areas south of the river from both point clouds). Datasets were aligned using a two-step process. The first utilised the position information collected using the on-board position and orientation sensors. GNSS data were post-processed using software systems designed for the respective platforms/control targets. A second stage of alignment was completed through ground surface matching in open areas on rocks and road features. Care was taken to focus upon matching in areas that were likely to be undisturbed by the fire, due to likely structural deformation/slumping of the surface in fire-impacted areas of the plot.

2.4. Point Cloud Processing

For post-fire datasets, ground points were identified in the UAS LiDAR and UAS image-based point clouds using the Cloth Simulation Filter (CSF) outlined in Serifoglu Yilmaz et al. [57]. To parameterise the CSF, several areas that were easily identifiable as ground were extracted from the post-fire datasets. The filter was optimised by minimising the RMSE between the reference bare ground and the generated surface (resulting in a cloth resolution of 0.1 m, a class threshold of 0.05 m, a rigidity of 1, a time step of 0.5 and 1000 iterations). Once identified, the ground points were processed to form a Triangular Irregular Network (TIN). The height of the TIN facet at the centre of each cell was then attributed to a 0.02 m Digital Terrain Model (DTM).
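The TIN-to-DTM step can be sketched with SciPy, whose linear interpolator builds the Delaunay triangulation internally; the 0.02 m cell size follows the text, while the function name and demo data are ours:

```python
import numpy as np
from scipy.interpolate import LinearNDInterpolator

def tin_dtm(ground, cell=0.02):
    """Linear (TIN) interpolation of classified ground returns onto a regular DTM grid."""
    interp = LinearNDInterpolator(ground[:, :2], ground[:, 2])
    xs = np.arange(ground[:, 0].min(), ground[:, 0].max() + cell, cell)
    ys = np.arange(ground[:, 1].min(), ground[:, 1].max() + cell, cell)
    gx, gy = np.meshgrid(xs, ys)
    return interp(gx, gy)

# Demo: four corner returns of a planar slope z = x + y
ground = np.array([[0, 0, 0], [1, 0, 1], [0, 1, 1], [1, 1, 2]], dtype=float)
dtm = tin_dtm(ground, cell=0.5)
```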
The point cloud was normalised based on each point’s height above the DTM, thereby providing a representation of the point cloud in relation to the ground. The point cloud was normalised in density using a 0.02 m voxel size in order to account for differences in point density across the plot.
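Both steps can be sketched in NumPy; the helper name, and the assumption that a ground height has already been sampled from the DTM at each point's x, y position, are ours:

```python
import numpy as np

def normalise_and_thin(points, ground_height, voxel=0.02):
    """Convert z to height above the DTM, then keep one point per occupied voxel."""
    pts = points.copy()
    pts[:, 2] = pts[:, 2] - ground_height          # height above ground
    idx = np.floor(pts / voxel).astype(np.int64)   # integer voxel index per point
    _, keep = np.unique(idx, axis=0, return_index=True)
    return pts[np.sort(keep)]

# Demo: two near-coincident returns collapse into a single voxel
pts = np.array([[0.001, 0.001, 1.001],
                [0.004, 0.004, 1.004],
                [1.0, 1.0, 2.0]])
thinned = normalise_and_thin(pts, ground_height=np.zeros(3))
```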
For pre-fire LiDAR and SfM point clouds where there was minimal bare earth to optimise ground filter settings, the ground surface was taken from the post-fire dataset and used to normalise both the pre- and post-fire datasets. An assumption was made that the ground surface derived from the post-fire point clouds was more accurate than the surface which could be derived from the pre-fire dataset.
Finally, a Canopy Height Model (CHM) was created with the same resolution (0.10 m) and extent as the imagery. Each cell in the CHM was attributed with the above-ground height of the highest point that fell within its boundary. As none of the point clouds contained points for every cell, interpolation was undertaken to fill in the missing cells. A Gaussian smoothing kernel (σ = 1.2) was applied to the entire CHM. Neighbouring missing cell values were ignored in the calculation of the central kernel value. This smoothed version of the CHM was then used to fill missing pixels in the original version.
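The gap-filling strategy (a Gaussian smooth that ignores missing cells, used only to fill holes in the original CHM) can be sketched as follows; the function name is ours:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fill_chm(chm, sigma=1.2):
    """Fill missing (NaN) CHM cells with a Gaussian smooth that ignores NaN neighbours."""
    mask = np.isfinite(chm)
    # Normalised convolution: smooth the valid values and the validity mask separately
    num = gaussian_filter(np.where(mask, chm, 0.0), sigma)
    den = gaussian_filter(mask.astype(float), sigma)
    smooth = np.divide(num, den, out=np.zeros_like(num), where=den > 0)
    out = chm.copy()
    out[~mask] = smooth[~mask]   # only the gaps take the smoothed value
    return out

# Demo: a single missing cell inside a uniform 2 m-high canopy patch
chm = np.full((5, 5), 2.0)
chm[2, 2] = np.nan
filled = fill_chm(chm)
```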

2.5. Fire Severity Classification

A workflow consisting of area segmentation, segment description and classification was used to generate vegetation and fire severity classification maps. This workflow aimed to divide the study area into four vegetation classes: forest (areas with tall trees and greater than 30% cover), sedgeland, water and bare earth (Table 1). Within each vegetated cover class, the workflow also aimed to encompass three levels of fire severity. The levels of fire severity follow McKenna et al. [24] and are described in Table 2.
To facilitate comparison between sensors, the workflow implemented here was completed for three streams of input data: LiDAR-only, image-only and a combined stream (Table 3). For the LiDAR-only and image-only stream, only data available from that sensor were used at each step, whilst in the combined stream the segmentation of the data was based on the ortho image and all features from both the LiDAR and imaging sensors were included in the workflow.

2.5.1. Segmentation

A superpixel approach aggregates regions of similar pixels [58]. Superpixels are often used to capture redundancy in the image and reduce the complexity of subsequent large image processing tasks [58]. The Simple Linear Iterative Clustering (SLIC) algorithm implementation in scikit-image was used [59]. The RGB pre-fire image and canopy height model (aligned to the same grid) generated from the pre-fire LiDAR capture were used as separate inputs into the SLIC segmentation algorithm.
The SLIC segmentation algorithm performs K-means clustering on the image data. The number of seeds was kept consistent between the two input data sources. The number of segments was chosen so that segments were approximately the same size as those used for the manual validation (3.14 m²). This size was consistent with prior studies that also used validation plots for training a random forest classifier [25] and was also deemed large enough to determine vegetation classification and severity. The compactness and sigma parameters were optimised visually to provide segments consisting of only a single vegetation class and to reduce slivers and sharp angles (image: compactness = 20, sigma = 5; LiDAR CHM: compactness = 22, sigma = 10). These settings resulted in a mean area of 3.19 ± 0.47 m² for the image-derived segments and 3.21 ± 0.35 m² for the CHM-derived segments.
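The scikit-image call looks as follows, with the image-stream compactness and sigma settings from above; the demonstration image and n_segments value are placeholders rather than the study's inputs:

```python
import numpy as np
from skimage.segmentation import slic

# Placeholder input: a two-tone 50 x 50 RGB image standing in for the ortho mosaic
img = np.zeros((50, 50, 3))
img[:, 25:, 0] = 1.0

# K-means-based SLIC clustering into superpixels
segments = slic(img, n_segments=10, compactness=20, sigma=5, start_label=1)
```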

2.5.2. Image-Based Features

For each segment derived from the imagery pre- and post-fire and CHM, several descriptors were calculated based on the ortho image and the CHM (Table 4).
For each segment, the means of L, A and B components were calculated. Additionally, the LAB space has been shown to provide stronger severity delineation of vegetation elements in comparison to RGB imagery [60,61].
A further technique to differentiate between regions within the study area was implemented to analyse the texture of the ortho image and CHM. Gonzalez et al. [62] described texture as measures of smoothness, coarseness and regularity of an image region, which can be calculated using structural or statistical techniques. The Grey Level Co-occurrence Matrix (GLCM) method [63] was utilised in this study to describe the texture features within each segment. As per Kayitakire et al. [64] and Rao et al. [65], six texture features were extracted describing angular second moment (ASM), contrast, variance, homogeneity, correlation and entropy. Similar to Gini et al. [66], the GLCM calculations were performed on one channel only (the L channel) to reduce data redundancy. The differences between pre- and post-fire values of the respective L, A, B and texture variables were generated to be used as predictor variables.

2.5.3. Point Cloud Features

Structural properties were extracted from the point cloud for each segment and its adjacent neighbours (Table 4). Point cloud properties were extracted for the segment and the respective neighbours to reduce the chance of a segment being misclassified (e.g., a segment that fell in a canopy gap may be misclassified as a segment with sedgeland vegetation). The area was clipped out of the point cloud and the Wilkes et al. [67] algorithm was applied to calculate the number of layers and layer locations above 0.1 m. The parameterisation of this model utilised the default settings (α = 0.3). The vertical distance between the first and second layers was also calculated.
Percentile heights were calculated (10th, 50th and 90th percentile) within each of the segments. The total volume of points within each segment was also calculated. Difference metrics were also calculated between each of the respective structural variables pre- and post-fire.
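The percentile heights and the pre/post difference metrics reduce to a few NumPy calls; the function names are illustrative, and the volume metric is omitted as its exact definition is not given:

```python
import numpy as np

def segment_percentiles(z):
    """10th/50th/90th percentile heights for the normalised points in one segment."""
    p10, p50, p90 = np.percentile(z, [10, 50, 90])
    return {'h10': p10, 'h50': p50, 'h90': p90}

def difference_metrics(pre, post):
    """Post-minus-pre change for each structural metric."""
    return {k: post[k] - pre[k] for k in pre}

# Demo: heights halve between captures, so every difference metric is negative
pre = segment_percentiles(np.arange(101.0))
post = segment_percentiles(np.arange(101.0) * 0.5)
diff = difference_metrics(pre, post)
```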

2.5.4. Random Forests Classification

A RF classifier was used to investigate the relationship between image, texture and structural metrics and the vegetation and severity classifications. This model was deemed appropriate because it offers good predictive capacity without overfitting, and RF classifiers have been used previously in ecological studies for the classification of discrete severity types [24,25,26,68,69,70].
The RF classifier used 1000 trees, splitting one set of the assessment data and associated metrics randomly into 70% training segments and 30% test segments. The training and test data segments were kept consistent across all streams of processing. Data inputs varied depending on the classification: for the vegetation classification, predictor variables were taken from pre-fire datasets, whereas the severity assessment utilised pre- and post-fire predictor variables. We implemented a feature selection method that firstly removed correlated variables (correlation > 0.75) and secondly conducted a Recursive Feature Elimination (RFE) process to determine the optimum set of predictor variables from the initial selection. RFE utilises a backward selection of predictors by firstly building a model on the entire set of predictors and computing an importance score and support for each predictor [71,72]. The least important predictor is then removed, the model is re-built and importance scores are computed again. A consideration when running RFE is determining the optimum number of features; this was calculated by beginning the loop with all features and progressively removing the least important feature in the dataset. The optimum model was selected based on the highest overall accuracy on the test data. The remaining set of assessment data was used as validation of the model.
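The two-step selection (dropping one of each pair of variables with correlation above 0.75, then recursive feature elimination around a random forest) can be sketched with scikit-learn; the synthetic data, tree count and target feature count here are placeholders, and the paper's loop over candidate feature counts is not shown:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE

def drop_correlated(X, thresh=0.75):
    """Indices of columns kept after removing one of each highly correlated pair."""
    corr = np.abs(np.corrcoef(X, rowvar=False))
    upper = np.triu(corr, k=1)          # consider each pair once
    return [j for j in range(X.shape[1]) if not np.any(upper[:, j] > thresh)]

# Placeholder data: 10 features plus a duplicate of the first (correlation = 1)
X, y = make_classification(n_samples=200, n_features=10, n_informative=5,
                           n_redundant=0, random_state=0)
X = np.column_stack([X, X[:, 0]])

keep = drop_correlated(X)               # the duplicate column (index 10) is removed
rf = RandomForestClassifier(n_estimators=100, random_state=0)
selector = RFE(rf, n_features_to_select=5, step=1).fit(X[:, keep], y)
```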
The vegetation classification was completed first to identify the vegetation features at each site. The RF classifier was subsequently run to classify the severity of segments that were assigned the land cover class of ’forest’ and separately to classify the severity of segments that were assigned the land cover class of ’sedgeland vegetation’ (as defined in Table 1).
The results of the RF classification were summarised based on the accuracy of the test data from assessment group 1 and the complete group of assessment 2, using confusion matrices from the RandomForestClassifier within the Scikit-Learn Python package [73]. A vegetation and severity classification map was produced to show the classification across the plot. The user's and producer's accuracies were calculated for each of the data streams for the vegetation and severity classifications [74]. To capture the difference in errors made by the models, McNemar's test was completed between each pair of the three models. McNemar's is a nonparametric test based on a standardised normal test statistic calculated from the error matrices of the two classifiers as follows (Equation (1)) [75,76,77]:
Z = (n00 − n01) / √(n00 + n01)   (1)
where n00 denotes the number of samples that are misclassified by the first RF model but correctly classified by the second RF model, and n01 denotes the number of samples that are correctly classified by the first RF model but misclassified by the second. The Z value can be referred to tables of the chi-squared distribution with one degree of freedom [78]. McNemar's test can therefore be expressed using a chi-squared statistic computed as follows (Equation (2)):
X² = (n00 − n01)² / (n00 + n01)   (2)
If the statistic X² estimated from Equation (2) is greater than the chi-squared table value of 3.84 at the 5% level of significance, the models perform significantly differently.
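Equation (2) reduces to a one-line computation; the discordant counts below are illustrative, and 3.84 is the chi-squared critical value at one degree of freedom:

```python
def mcnemar_chi2(n00, n01):
    """Chi-squared McNemar statistic from the two discordant counts
    (n00: misclassified only by model 1; n01: misclassified only by model 2)."""
    return (n00 - n01) ** 2 / (n00 + n01)

# e.g., 20 segments misclassified only by model 1 and 5 only by model 2
stat = mcnemar_chi2(20, 5)   # 9.0 > 3.84, so the two models differ at the 5% level
```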

3. Results

3.1. Vegetation Classification

Vegetation maps produced by each of the three processed data streams (Section 2.5) demonstrated that the area classified as forest and sedgeland varied by no more than 2% (Figure 2). The combined stream classified 54.9% of the study area as forest and 43.9% as sedgeland, in comparison to 54.1% as forest and 42.5% as sedgeland for the image-only stream and 53.2% as forest and 44.2% as sedgeland for the LiDAR-only data stream.
A similar overall classification accuracy was achieved by all data streams (Table 5, Table 6 and Table 7). This is also indicated by McNemar's test, which showed no significant differences in the performance of each stream (p > 0.5). Furthermore, producer's and user's accuracies for the classification of forest and sedgeland areas were within 10% of each other across all three data streams (Table 5, Table 6 and Table 7).
The correlation removal and RFE approach resulted in eight predictor variables being used in the image-only stream, five predictor variables in the LiDAR-only stream and ten predictor variables in the combined stream (see Appendix A). In the image-only stream, structural variables (describing the 90th percentile height and the distance between the top two layers), image variables (describing the LAB_A mean and LAB_B mean) and texture metrics derived from the CHM (homogeneity and entropy) and ortho image (contrast and correlation) were all used. The LiDAR-only stream also used variables describing structure (layer count and 10th, 50th and 90th percentile heights) as well as texture metrics (correlation and homogeneity) derived from the CHM. The combined stream used a greater number of variables, utilising structure and texture variables derived from the SfM and LiDAR point clouds and CHM, respectively, as well as variables derived from the ortho image.

3.2. Fire Severity Classification

The predominant differences between the severity maps were observed within areas of unburnt riparian vegetation and in areas of vegetation experiencing a green flush post-fire (Figure 3). This resulted in small differences in the total area classified as unburnt (combined: 2.4%; image-only: 3.0%; and LiDAR-only: 0.9%), not severe (combined: 10.7%; image-only: 13.9%; and LiDAR-only: 11.1%) and severe (combined: 84.7%; image-only: 79.8%; and LiDAR-only: 84.5%) (Figure 3). McNemar's test highlighted that the streams featuring predictor variables derived from image products (the image-only and combined streams) had similar classification errors (X² = 0.88). However, McNemar's test demonstrated differences in performance between the LiDAR-only stream and both the image-only and combined streams (image-only vs. LiDAR-only: X² = 4.89; combined vs. LiDAR-only: X² = 9.28).

3.2.1. Classification of Severity within Sedgeland Segments

In areas of sedgeland, the combined stream produced the highest overall accuracy compared to the reference data set (76.6%) followed by the LiDAR-only and image-only data streams (LiDAR: 75.2%; and Image: 72.4%). All data streams had higher producer’s and user’s accuracy for the severe reference segments in comparison to the non-severe reference segments (Table 8, Table 9 and Table 10). The highest producer’s and user’s accuracy for unburnt areas was observed in the image-only stream.
The feature selection approach resulted in 14 predictor variables being used in the image-only stream, 16 predictor variables for the LiDAR-only stream and 24 predictor variables for the combined data stream (Appendix B). Using the given training data, variables describing both the pre- and post-fire condition were used in the RF classifier. When applied to the image-only stream, the feature selection resulted in eight variables derived from the post-fire capture, five variables describing the difference between pre- and post-fire captures and one variable from the pre-fire capture being used to determine severity in areas classified as sedgeland. This is in contrast to the LiDAR-only stream, for which feature selection resulted in six variables from the pre-fire capture, three variables from the post-fire capture and seven variables describing the difference between pre- and post-fire captures. When applied to the combined stream, the feature selection resulted in ten variables derived from the post-fire capture, four variables derived from the pre-fire capture and ten variables describing the difference between pre- and post-fire captures.
When considering the variables selected through the feature selection process, variables derived from the texture of the canopy height model or direct structure estimates (height, layer count and volume) were used in the RF classifiers in all streams. However, different structural variables were selected across the streams. We found that in the combined and LiDAR-only streams, multi-temporal variables describing difference in CHM texture and structure metrics were selected to describe severity in sedgeland areas. This is in contrast to the image-only stream, which used only the difference in texture metrics of the CHM. The combined and image-only stream used variables that described the reflectance characteristics as well as the texture of the ortho image.

3.2.2. Classification of Severity within Forest Segments

The accuracy of the severity classification within forest was within 4% across all three data streams (image-only: 76.6%; LiDAR-only: 74.5%; and combined: 78.5%) (Figure 3).
Similar to the classification of severity in sedgeland segments, all data streams had high producer’s and user’s accuracy for segments classified as severe in comparison to those classified as not severe (Table 11, Table 12 and Table 13). Producer’s and user’s accuracy for severe segments were within 6% across all streams. Not-severe segment user’s and producer’s accuracy was highest with the LiDAR-only stream whilst unburnt producer’s and user’s accuracy were highest in the image-only and combined data streams.
The predictor variables used in the modelling of severity in forest areas comprised structural, texture and ortho image metrics (Appendix B). The image-only stream used six variables, whilst the LiDAR-only and combined streams each used 14 predictor variables (Appendix B). When applied to the training data, feature selection for the image-only stream retained four post-fire variables and two variables describing the difference between pre- and post-fire captures. In contrast, the LiDAR-only stream used one variable derived from the post-fire capture, seven variables from the pre-fire capture and six variables describing the difference between pre- and post-fire captures. The combined stream used five variables from the post-fire capture, two variables from the pre-fire capture and six variables describing the difference between pre- and post-fire captures.
When analysing the variables selected, all streams used variables describing the volume of the point cloud either pre- or post-fire. As with the derivation of severity in sedgeland areas, the predictor variables were not consistent across the three streams. Predictor variables describing the texture of the CHM and ortho image were selected in all three streams. The LiDAR-only stream was the only stream to use variables describing pre-fire height and the relative change in height and volume between data captures. Further, metrics describing changes in the texture variables of the CHM were only used in the LiDAR-only and combined data streams.

3.3. Change in Vertical Structure as a Mechanism for Describing Fire Severity

Visual inspection of the point clouds showed a varying capability of each respective technology to describe the vertical profile of the vegetation pre and post fire. UAS LiDAR point clouds appear to represent the canopy and below-canopy elements most comprehensively, with UAS image-based point clouds providing only partial reconstruction, especially in the post-fire capture.

3.3.1. Forest and Severe Fire Impact

In areas classified as forest with severe fire impact, the LiDAR segments showed an increase in the mean 10th and 50th percentile height values (0.48 m and 0.39 m, respectively) from the pre-fire values (Figure 4 and Table 14). The 50th percentile height of the UAS SfM point cloud increased by 0.30 m (Figure 5). The UAS SfM point clouds showed a decrease in the 10th percentile height of 3.81 m between pre- and post-fire captures (Table 14). Inspection of the point clouds highlights an example of this variation in structural representation (Figure 4 and Figure 5). Both the UAS SfM and LiDAR point clouds showed decreases in the 90th percentile heights, of 0.39 m and 1.44 m, respectively.
The layer counts in both the LiDAR and SfM point clouds decreased post-fire (Table 14). This difference was greatest in the SfM point clouds, with a mean decrease of 1.33 layers. A decrease was also observed in the volume estimates, of 0.45 m³ for the UAS LiDAR point clouds and a larger 6.61 m³ for the UAS SfM point clouds.
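The structural metrics compared throughout this section (percentile heights, layer counts and occupied volume) can be illustrated with a minimal sketch on synthetic point clouds. The bin height, point threshold and voxel size below are assumptions for illustration, not the study's parameters.

```python
# Sketch of per-segment structural metrics from normalised point clouds.
import numpy as np

def percentile_heights(z, percentiles=(10, 50, 90)):
    """Height percentiles of the normalised point heights z (metres)."""
    return {p: float(np.percentile(z, p)) for p in percentiles}

def layer_count(z, bin_height=0.5, min_points=5):
    """Count occupied vertical bins, a simple proxy for vegetation layers."""
    bins = np.floor(z / bin_height).astype(int)
    counts = np.bincount(bins[bins >= 0])
    return int(np.sum(counts >= min_points))

def voxel_volume(xyz, voxel=0.5):
    """Occupied-voxel volume (m^3): unique occupied voxels times voxel volume."""
    idx = np.floor(xyz / voxel).astype(int)
    return len(np.unique(idx, axis=0)) * voxel ** 3

rng = np.random.default_rng(1)
pre = rng.uniform([0, 0, 0], [10, 10, 20], size=(5000, 3))   # dense pre-fire cloud
post = rng.uniform([0, 0, 0], [10, 10, 15], size=(3000, 3))  # sparser post-fire cloud

for name, cloud in (("pre", pre), ("post", post)):
    ph = percentile_heights(cloud[:, 2])
    print(name, ph, layer_count(cloud[:, 2]), round(voxel_volume(cloud), 1))
```

Differencing these metrics between the pre- and post-fire captures yields the change variables (e.g., volume difference, 10th percentile difference) reported in Table 14.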

3.3.2. Forest and Not Severe Fire Impact

In areas classified as forest with not-severe fire effects, a decrease in the mean 90th percentile heights was seen in both the UAS SfM and LiDAR point clouds (Table 14, Figure 4 and Figure 5). The SfM point clouds showed a decrease in the 10th and 50th percentile heights, in contrast to the UAS LiDAR point clouds, which increased in height in these layers. The layer count showed a mean decrease in both the UAS LiDAR and SfM point clouds. There was a greater loss of volume in the UAS SfM point clouds than in the LiDAR point clouds.

3.3.3. Sedgeland and Severe Fire Impact

Within the areas classified as sedgeland, structural change in severely burnt segments was observed in both the LiDAR and SfM point clouds as a decrease in all percentile heights (Table 14). The layer counts showed a mean decrease in both the UAS LiDAR and SfM point clouds. The mean volume also decreased, with the largest reduction occurring in the SfM point clouds (UAS SfM: 6.09 m³; UAS LiDAR: 1.01 m³).

3.3.4. Sedgeland and Not Severe Fire Impact

In segments classified as sedgeland that did not burn severely, a mean decrease in all percentile heights was observed (Table 14). The SfM point clouds showed a greater mean decrease in the percentile heights than the LiDAR point clouds, especially in the 50th and 90th percentiles. The layer counts showed a mean decrease in both the UAS SfM and UAS LiDAR point clouds. Whilst both technologies showed a decrease in volume, the UAS SfM showed the greater decrease, of 2.90 m³.

4. Discussion

This study presented an evaluation of UAS LiDAR and image-based point cloud derived variables using a supervised classification to produce maps of land cover and fire severity. Temporally coincident observations were captured across a range of structurally diverse vegetation communities, allowing for a direct comparison between the two data sources and processing streams. Furthermore, the area was captured both pre- and post-fire, allowing for a two-stage classification: firstly classifying land cover and secondly classifying the severity within each land cover type, providing a testbed to explore the changes resulting from fire. Prior work from fixed-wing and satellite remote sensing platforms has demonstrated the utility of imagery and supervised classifications to estimate fire severity across an area [43,44,46,68,69,70,79]. McKenna et al. [24], Simpson et al. [26] and Carvajal-Ramírez et al. [27] demonstrated the utility of UAS SfM image-derived variables from pre- and post-fire point clouds to map fire severity at local scales across areas with limited structural diversity (open grassland, woodland and peatland). Similarly, Arkin et al. [25] utilised UAS SfM workflows to derive image and structural variables captured post-fire, in combination with a supervised classification, to map fire severity across a burnt forested area (Douglas fir, hybrid white spruce and lodgepole pine). This study extends this research by comparing the utility of LiDAR-only, image-only and combined data streams separately, to classify vegetation and severity in a structurally diverse study area.
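The two-stage design described above (land cover first, then severity within each land cover class) can be sketched as follows. The data, class labels and random forest settings are synthetic illustrations, not the study's trained models.

```python
# Sketch of a two-stage classification: one land cover model for all segments,
# then a separate severity model fitted per land cover class. Synthetic data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(7)
X = rng.normal(size=(300, 6))                       # segment-level predictors
land_cover = rng.choice(["forest", "sedgeland"], size=300)
severity = rng.choice(["severe", "not-severe"], size=300)

# Stage 1: land cover across all segments.
lc_model = RandomForestClassifier(n_estimators=50, random_state=0)
lc_model.fit(X, land_cover)

# Stage 2: a separate severity classifier per land cover class.
sev_models = {}
for cls in np.unique(land_cover):
    mask = land_cover == cls
    sev_models[cls] = RandomForestClassifier(
        n_estimators=50, random_state=0).fit(X[mask], severity[mask])

# Prediction chains the stages: predicted land cover selects the severity model.
lc_pred = lc_model.predict(X[:5])
sev_pred = [sev_models[c].predict(X[i:i + 1])[0] for i, c in enumerate(lc_pred)]
print(list(zip(lc_pred, sev_pred)))
```

Fitting severity separately per class mirrors the paper's observation that different predictor subsets were selected for forest and sedgeland areas.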

4.1. Land Cover Accuracy

Confusion matrices showed similar (within 5%) overall, producer's and user's accuracies for the land cover classification across the three processing streams. Consistent with Goodbody et al. [80] and Feng et al. [81], analysis of the variables used to map land cover in each of the three data streams demonstrated that all streams utilised texture metrics to identify different land cover classes. Whilst the workflow presented here classified land cover into four categories, land cover transition zones were noted by assessors as being difficult to classify through visual assessment. More broadly, the combined use of imagery and point cloud data provides new opportunities to classify land cover in a way that takes into consideration a more diverse array of factors than human interpretation is able to achieve.
Visual inspection of the UAS LiDAR and UAS image-based point clouds captured pre-fire demonstrated that both technologies were able to adequately describe the vertical profile of the vegetation (Figure 4 and Figure 5). Whilst this reconstruction of below-canopy vegetation supports prior research that demonstrated the ability of UAS LiDAR point clouds to represent forest structure in a variety of forest types [38,39,42,82], it is in contrast to previous studies showing that UAS image-based point clouds were not able to represent information beneath the canopy accurately [40,42]. Potential reasons for the increased vegetation representation beneath the canopy in our dataset were the environmental conditions at the time of capture, with low wind and good lighting beneath the canopy allowing for strong contrast between the ground and trees, assisting the point cloud reconstruction. Additionally, in the canopy areas of the plot, a greater amount of vegetation in the mid-storey/elevated layers in comparison to the post-fire capture is likely to have aided the depth reconstruction by providing extra features for the depth matching process. Prior research has demonstrated the greater reliability of UAS LiDAR in generating point clouds, as the active nature of the sensor makes it less sensitive to illumination conditions [36,40].

4.2. Severity Accuracy

In all data streams, and for both forest and sedgeland classes, classification of severe segments was more accurate than that of not-severe segments. This trend was also shown by McKenna et al. [24], who highlighted that high severity classes attained higher accuracy than low severity and unburnt classes. This potentially indicates an underlying bias in the dataset used in this analysis: the majority of the plot was severely burnt, making severe segments more numerous and more obvious to detect. A further reason for misclassification of not-severe segments may be the obscuration of fire-affected layers by a taller canopy. Hyper-emergence of individual trees is common in wet forests, and in this scenario it may have limited observations of areas with minimal fire impact [83].
Validating remotely sensed metrics of vegetation classification and fire severity with ground observations at the point scale is considered best practice. However, it is challenging to implement over large areas and requires ecological expertise. The visual interpretation of high-resolution ortho images for the determination of severity has been shown to be strongly correlated with field-based measures of severity [84,85]. It is acknowledged that visual interpretation limits the assessment of fire severity to what is visible in the imagery and excludes variables such as stem scorch and understorey loss in areas of closed canopies. Previous research utilising UAS ortho imagery for the determination of fire severity has utilised visual interpretation as a reference for classification accuracy [24,25]. To further ensure a high level of precision in the severity assessment in this manuscript, at least two assessors were required to agree on each severity assessment. The classification of severity using broad user-defined scales potentially limits the degree to which fire severity can be classified. Further work could investigate the ability of UAS-derived variables and machine learning processes to deal with multiple classes, such as those used by Collins et al. [69] and Tran et al. [86]. However, we acknowledge that there is a likely trade-off between the number of categories and the ability of interpreters to accurately distinguish between them. The timing of the post-fire capture is also important to consider in the context of severity accuracy. Post-fire rainfall at the study area led to a flush of growth, which is likely to have decreased the accuracy of the classification, with areas assessed as high severity confounded by spectral characteristics similar to pre-fire vegetation.
Predictor variables derived from point clouds were used in all streams for mapping fire severity, either directly from percentile heights, layer count and volume estimates or indirectly through the production of canopy height models. Analysis of the predictor variables used in each classifier demonstrates that there was no consistent set of structural predictor variables used across all streams. Variables describing differences in texture between pre- and post-fire were selected for mapping severity across the plot in all streams. It was hypothesised that the improved vegetation representation from LiDAR would mean that predictor variables describing height or layer count differences between pre- and post-fire would be used in the prediction of severity, particularly in areas of forest. This would support Hu et al. [43], Hoe et al. [44] and Skowronski et al. [46], who demonstrated the effectiveness of describing changes to structural characteristics such as profile area and LiDAR return proportions 2 m above ground, pre-fire 95% heights and pre-fire return proportions 2 m above ground. However, the structural variables generated in this research showed only a small change between pre- and post-fire (Table 14). This may indicate that the variables commonly used to assess structure are not suitable for fire-induced impact assessments in the forest types observed in this study unless there is full tree loss. Whilst large amounts of fine fuel are consumed during a fire, the Eucalypt forests surveyed in this study have structure that persists after fire [87,88]. For the metrics utilised in this research to be selected through the feature selection process, it is predicted that more significant structural change is needed, such as tree fall, as is observed in some North American forests [89].
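A pre/post texture-difference predictor of the kind selected in all streams can be illustrated with a minimal sketch. For simplicity, GLCM-style contrast is computed here directly as the mean squared difference of horizontally adjacent quantised pixels (equivalent to GLCM contrast at distance 1, angle 0); the quantisation settings and synthetic CHMs are illustrative assumptions.

```python
# Sketch of a CHM texture-difference metric: contrast drops when fire removes
# the rough canopy and leaves a smooth surface. Synthetic CHMs only.
import numpy as np

def chm_contrast(chm, levels=16):
    """GLCM-style contrast: mean squared difference of adjacent quantised pixels."""
    q = np.floor(chm / (chm.max() + 1e-9) * levels).clip(0, levels - 1)
    return float(np.mean((q[:, 1:] - q[:, :-1]) ** 2))

rng = np.random.default_rng(0)
pre_chm = rng.uniform(0, 30, size=(64, 64))           # rough, intact canopy
post_chm = 2.0 + rng.normal(0, 0.1, size=(64, 64))    # burnt, near-uniform surface

contrast_difference = chm_contrast(post_chm) - chm_contrast(pre_chm)
print(contrast_difference)
```

The strongly negative difference for a smoothed post-fire surface is the kind of signal the "Contrast difference (CHM)" predictors in Appendix B capture.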
The models in each stream estimating severity in forest and sedgeland areas utilise variables derived from both pre- and post-fire captures. The image-only and combined streams used a greater number of post-fire variables than the LiDAR-only stream when predicting severity in forest areas. Variables describing the difference between the captures were also utilised, which supports prior work highlighting the effectiveness of bi-temporal observations for assessing severity [24,44,84,90]. Further work should investigate the relative contribution of pre- and post-fire predictor variables in estimating fire severity. Recent work by Hoe et al. [44] and Skowronski et al. [46] explicitly links pre-fire fuel loading with fire severity, representing an opportunity to improve potential fire predictions across landscapes when combined with modelled weather conditions. This further work should also consider the findings of Arkin et al. [25], who used only post-fire variables in the mapping of fire severity, an approach that would enhance the usability of the workflow where pre-fire data is unavailable.
The overall classification accuracy of severity in forest areas (image-only: 76.6%; LiDAR-only: 74.5%; and combined: 78.5%) and sedgeland areas (image-only: 72.4%; LiDAR-only: 75.2%; and combined: 76.6%) in this study was achieved with very high-resolution (0.02 m) data. Comparatively, satellite-derived assessments of fire severity are completed at regional scales, where pixel values describe areas between 3 m and 500 m [21,91,92]. Previous studies have demonstrated that high-resolution satellite imagery is capable of severity classification accuracies between 50% and 95% [21]. Similar accuracy is achievable from imagery captured from manned aircraft; however, these data face similar issues in capturing understorey change, especially in dense canopy environments [85,93]. Point cloud information derived from UAS SfM workflows has been shown to provide information describing changes in the understorey of the open-canopy forests present in this study and previous work [24].
LiDAR captured from manned aircraft goes some way toward addressing this issue, with greater capacity to describe changes in below-canopy vegetation structure; severity classification accuracies between 51% and 54.9% have been shown in mixed-conifer, oak woodland and hardwood-evergreen forests [44]. Point cloud information can be used to detect changes in structure from fire at the tree, sub-tree and shrub level (Figure 4) [42]. Additionally, UAS may be flown at the time desired by the operators. This is particularly useful where there is an opportunity to collect information prior to the passing of a fire and/or in diverse or transitional ecosystems, where the post-fire vegetation condition must be captured within a few days (e.g., grasslands and tropical savannas, 5–6 days) [94] or weeks (e.g., dry sclerophyll forests of southern Australia) [95] to enable severity to be accurately characterised. In these scenarios, high spatial and temporal resolution products derived from UAS may be particularly useful to validate lower-resolution but wide-area satellite or airborne derived products [96].

4.3. Vertical Profile

Consistent with Hillman et al. [40] and Wallace et al. [36], visual inspection showed that the pre- and post-fire UAS LiDAR point clouds provide a complete representation of the vertical profile and allow for a description of forest structure in all strata. Similarly, the pre-fire UAS SfM point clouds appeared to provide a complete representation of the vertical profile. This is in contrast to the UAS SfM point clouds derived from the post-fire capture, which provide limited reconstruction of the vegetation, as demonstrated by large decreases in the mean 50th and 90th percentile heights. The lack of information content in the post-fire UAS SfM point clouds could be due to factors similar to those observed by Hillman et al. [40]: poor contrast between burnt ground and vegetation, and wind conditions at the time of capture. These factors can confound the image matching process, resulting in limited vegetation reconstruction, and have the potential to inaccurately represent structural change.
In contrast, the UAS LiDAR point clouds derived from the post-fire capture showed an increase in the 10th and 50th percentile heights. Whilst vegetation heights are expected not to have increased post-fire, the increase in these percentile heights is likely to be due to an increased penetration of the sensor beam through a sparser canopy. Whilst the differences in LiDAR sensors used between the pre- and post-fire data capture campaigns may contribute to small discrepancies in the height estimations, this is not believed to be a consideration that would influence the accuracy, as only the first returns from each sensor were used. This highlights an opportunity for further work to consider the use of all returns when deriving structural measurements with the potential that more information may be yielded.
Despite the greater information content present in the UAS LiDAR point clouds, the feature selection process utilised fewer direct structural variables in the final mapping of land cover and severity. Further work should look to develop metrics that maximise the distinct information content contained within the UAS LiDAR point clouds. One area that may yield new insights is the characterisation of ladder fuels and vertical connectivity. Wilkes et al. [67], for example, derived the number of layers in each segment, providing an indication of the presence and absence of vegetation in the point cloud. Approaches for deriving metrics that describe the vegetation and/or fuel properties over the vertical profile could be used to quantify the presence, change and consumption of ladder fuels. Approaches to quantify structure and arrangement in the vertical profile in previous studies have typically combined qualitative and quantitative measures of fuels [97,98,99], with some preliminary studies utilising remote sensing to measure canopy base height, percentage cover below canopy or fuel gaps [100,101,102,103,104]. Fuel strata gap, as proposed by Cruz et al. [105], is one such method that could be applied to leverage the available information content. However, this method, whilst effective in North American forest types, may not be as successful in Eucalypt forest types where the arrangement of fuel is multi-layered and complex. Similar to the work presented by Skowronski et al. [104] and the approach implemented by Hillman et al. [40], this may allow for the identification and quantitative representation of ladder fuels independent of forest type.

4.4. Operational Applicability

UAS are being increasingly used in forest and fire management to measure landscape condition and for real-time emergency observations [106,107,108,109,110,111]. The versatility of UAS is that they can be deployed quickly and efficiently post-fire to collect severity information. Careful consideration of the purpose of the assessment should be made so that the sensor payload matches the desired information outputs. For example, this research demonstrated that UAS SfM point clouds cannot be relied upon to represent structural change from fire. Conversely, UAS LiDAR point clouds provided a more complete representation of vegetation structure pre- and post-fire. Whilst both technologies had difficulty in discerning not-severe areas from severe, the high accuracy in the severe category alone allows land managers to identify priority areas of treatment, without the need for costly airborne image capture.
The capacity to accurately map fire severity will enhance land managers' understanding of ecosystem response. Given the reliability of detecting below-canopy vegetation structure in UAS LiDAR point clouds, this technology provides the greatest opportunity to measure post-fire vegetation traits in complex wet-eucalypt forest ecosystems. Utilising high-resolution measurements from UAS LiDAR facilitates the precise estimation of foliar change from fire. When high-resolution UAS-derived estimates of fire severity are considered as part of an ensemble approach to measuring fire severity from satellite, fixed-wing, ground-based and remotely-piloted platforms, these inputs can be used to train models of severity and hazard over much larger areas, such as those presented in [69,112,113]. When combined with pre-fire fuel hazard information, UAS LiDAR point clouds may allow us to untangle the effect of fuel hazard and structure on flammability and fire severity, which is poorly understood in wet forest systems [114,115,116]. High-resolution fire severity assessments can also be used to evaluate and inform treatment practices (e.g., prescribed fire and timber harvesting) [116,117,118,119]. With an accurate understanding of how comprehensively the vegetation has been affected by fire, more accurate fuel accumulation curves can also be developed, which is critical for future fire management. Additionally, as fire behaviour modelling is enhanced through the use of physics-based approaches, accurate 3D descriptions of on-ground fuel properties will allow fire managers to generate more accurate fire behaviour simulations, effectively deploy first responders and implement fuel management practices [120,121,122,123].

5. Conclusions

With an increasing frequency and severity of fires, there is a growing need to understand fire severity and the associated recovery of vegetation post-fire. To the authors' knowledge, there have been no prior studies utilising UAS LiDAR-derived variables with supervised classification to map land cover type and fire severity. This research contributes to this gap in knowledge and demonstrates the utility of metrics derived from UAS LiDAR point clouds captured pre- and post-fire to map vegetation and severity. Through a feature selection process, we selected subsets of predictor variables to build classifiers that used a small number of variables for the classification of land cover and fire severity. UAS LiDAR-derived variables were compared against image-only and combined (UAS LiDAR and UAS image predictor values) data streams. The results indicate that UAS LiDAR provided similar overall accuracy to the UAS image and combined data streams for classifying severity in areas of forest with canopy dominance (UAS image: 76.6%; UAS LiDAR: 74.5%; and Combined: 78.5%) and areas of sedgeland (UAS image: 72.4%; UAS LiDAR: 75.2%; and Combined: 76.6%). Analysis of structural variables, in combination with visual inspection of the image-based and LiDAR point clouds, highlighted a greater level of vegetation reconstruction in the LiDAR point clouds. This observation is significant for mapping fire severity. Despite the feature selection process and subsequent accuracy analysis highlighting the similar capacity of each technology to classify fire severity, large differences in information content indicate that the metrics derived for describing structural change in this study area were not suitable to represent the consumption of fine fuel. Future work should investigate the capacity of UAS-derived products to represent fine fuel and develop metrics that are able to represent this change of vegetation beneath the canopy.
The analysis presented in this paper demonstrates the capacity of UAS LiDAR point clouds to map land cover and severity from which land managers can make key decisions for identifying high priority areas post fire.

Author Contributions

Conceptualisation, S.H., B.H. and L.W.; Data curation, S.H., B.H., L.W., D.T., A.L. and K.R.; Formal analysis, S.H., B.H. and L.W.; Funding acquisition, K.R. and S.J.; Investigation, S.H., B.H., L.W., D.T. and A.L.; Methodology, S.H., B.H., L.W. and A.L.; Project administration, L.W. and K.R.; Resources, D.T., A.L. and S.J.; Software, S.H., B.H. and L.W.; Supervision, L.W., K.R. and S.J.; Validation, S.H., B.H. and L.W.; Visualisation, S.H., B.H. and L.W.; Writing—original draft, S.H.; and Writing—review and editing, S.H., B.H., L.W., D.T., A.L., K.R. and S.J. All authors have read and agreed to the published version of the manuscript.


Funding

This research was funded by the Bushfire and Natural Hazards CRC (CON/2017/01377).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author. The data are not publicly available due to ongoing research and development using these datasets.


Acknowledgments

The support of the Commonwealth of Australia through the Bushfire and Natural Hazards Cooperative Research Centre and the Australian Postgraduate Award is acknowledged. The University of Tasmania and the TerraLuma research group are gratefully acknowledged for providing their equipment, lab and expertise.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Predictor Variables Used in Land Cover Calculation

Table A1. Summary of accuracy derived from the test and validation datasets for each of the three streams of data and predictor variables used in each stream.

Variables used (Image-only stream): 90th percentile height; Distance between top 2 layers (SfM); A (Green-red) mean (Ortho); B (Blue-yellow) mean (Ortho); Homogeneity (CHM-SfM); Entropy (CHM-SfM); Contrast (Ortho); Correlation (Ortho).

Variables used (LiDAR-only stream): Layer count; 10th percentile height; 50th percentile height; Correlation (CHM); Homogeneity (CHM).

Variables used (Combined stream): 50th percentile height (SfM); 10th percentile height (LiDAR); 90th percentile height (LiDAR); Distance between top 2 layers (LiDAR); A (Green-red) mean (Ortho); B (Blue-yellow) mean (Ortho); Sum of squares variance (CHM-SfM); Homogeneity (CHM-SfM); Contrast (Ortho); Correlation (Ortho).
Table A2. Summary of accuracy derived from the validation dataset and predictor variables used in the Image-only stream.

Variables used (Forest severity): Volume (Post); A (Green-red) mean (Post); B (Blue-yellow) mean (Post); Correlation (CHM-Post); Correlation difference (CHM); A (Green-red) mean difference (Ortho).

Variables used (Sedgeland severity): Volume (Post); 10th percentile height (Post); A (Green-red) mean (Post); B (Blue-yellow) mean (Post); A (Green-red) mean (Pre); Correlation (CHM-Post); Sum of squares variance (CHM-Post); Correlation (Ortho-Post); Homogeneity (Ortho-Post); Contrast difference (CHM); Homogeneity difference (CHM); Homogeneity difference (Ortho); A (Green-red) mean difference; B (Blue-yellow) mean difference.

Appendix B. Predictor Variables Used in Severity Classification from Pre and Post-Fire Calculation

Table A3. Summary of accuracy derived from the validation dataset and predictor variables used in the LiDAR-only stream.

Variables used (Forest severity): Volume (Pre); 10th percentile height (Pre); 50th percentile height (Pre); Entropy (CHM-Post); Contrast (CHM-Pre); Correlation (CHM-Pre); Sum of squares variance (CHM-Pre); Homogeneity (CHM-Pre); Volume difference; 10th percentile difference; Angular second moment difference (CHM); Contrast difference (CHM); Correlation difference (CHM); Sum of squares variance difference (CHM).

Variables used (Sedgeland severity): 10th percentile height (Post); Volume (Pre); 10th percentile height (Pre); 90th percentile height (Pre); Contrast (CHM-Post); Entropy (CHM-Post); Contrast (CHM-Pre); Correlation (CHM-Pre); Sum of squares variance (CHM-Pre); Volume difference; 10th percentile difference; 50th percentile difference; Angular second moment difference (CHM); Contrast difference (CHM); Correlation difference (CHM); Sum of squares variance difference (CHM).
Table A4. Summary of accuracy derived from the validation dataset and predictor variables used in the Combined stream.

Variables used (Forest severity): Volume (SfM-Post); A (Green-red) mean (Post); B (Blue-yellow) mean (Post); B (Blue-yellow) mean (Pre); Correlation (CHM-Post); Correlation (Ortho-Post); Homogeneity (Ortho-Post); Contrast (Ortho-Pre); Angular second moment difference (SfM-CHM); Correlation difference (SfM-CHM); Angular second moment difference (LiDAR-CHM); Correlation difference (LiDAR-CHM); Contrast difference (Ortho); A (Green-red) mean difference.

Variables used (Sedgeland severity): Volume (LiDAR-Post); Volume (SfM-Post); A (Green-red) mean (Post); B (Blue-yellow) mean (Post); A (Green-red) mean (Pre); Correlation (LiDAR-CHM-Post); Homogeneity (LiDAR-CHM-Post); Sum of squares variance (LiDAR-CHM-Pre); Sum of squares variance (LiDAR-CHM-Post); Homogeneity (SfM-CHM-Post); Correlation (SfM-CHM-Pre); Homogeneity (SfM-CHM-Pre); Correlation (Ortho-Post); Homogeneity (Ortho-Post); Volume difference (LiDAR); 50th percentile height difference (LiDAR); Angular second moment difference (CHM-SfM); Contrast difference (CHM-SfM); Contrast difference (CHM-LiDAR); Homogeneity difference (CHM-LiDAR); Angular second moment difference (Ortho); Homogeneity difference (Ortho); A (Green-red) mean difference; B (Blue-yellow) mean difference.


Figure 1. (a) The location of the site in Tasmania, Australia; (b) the location of the Riveaux Road Fire; (c) an image of the study area plot captured before the Riveaux Road fire complex (September 2018); and (d) an image of the study area plot captured post fire (May 2019).
Figure 2. Ortho images of the study area and two focused areas. Maps demonstrating the vegetation classification of the image-only data stream, LiDAR-only data stream and combined data stream.
Figure 3. Fire severity maps produced from image-only, LiDAR-only and combined data streams.
Figure 4. Differences in UAS LiDAR point cloud information pre- and post-fire within areas classified as forest.
Figure 5. Differences in UAS SfM point cloud information pre- and post-fire within areas classified as forest.
Table 1. Descriptions of vegetation classification.
Vegetation Class | Definition | Example Species
Forest (tall) | Vegetation greater than 3 m in height | Eucalyptus obliqua, Eucalyptus globulus
Sedgeland (short) | Vegetation beneath 3 m in height | Gymnoschoenus sphaerocephalus, Melaleuca squamea, Eucalyptus nitida
Non-vegetation | Water and bare earth | N/A
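The 3 m rule in Table 1 maps directly onto a canopy height model (CHM). A minimal sketch, assuming a small hypothetical CHM raster in metres; in practice a separate mask would be needed to exclude the non-vegetation class (water and bare earth) before thresholding.

```python
import numpy as np

# Hypothetical 3x3 canopy height model, values in metres above ground.
chm = np.array([[0.2, 0.1, 4.5],
                [12.0, 2.9, 0.0],
                [50.0, 3.0, 0.4]])

# Table 1 rule: strictly greater than 3 m -> forest (tall),
# otherwise sedgeland (short). A cell at exactly 3.0 m is sedgeland.
labels = np.where(chm > 3.0, "forest", "sedgeland")
```

This per-cell rule is the height component only; the paper's classifier combines it with spectral and textural predictors rather than thresholding alone.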
Table 2. Descriptions of fire severity classification based upon land cover classifications.
Impact | With Forest Vegetation Present | With Sedgeland Vegetation Present
Severe | >50% crown scorch | Grass combusted (>80%), exposing bare soil, white or black ash
Not-severe | <50% crown scorch | Patchy, incomplete burn of grass and litter
Unburnt | Unburnt | Unburnt grass, or unchanged conditions
Table 3. Segmentation description and metric sources for each of the three processing data streams.
Segmentation basis: Stream 1 (Image-Only): pre-fire image; Stream 2 (LiDAR-Only): Canopy Height Model (CHM); Stream 3 (Combined): pre-fire image.
Metric sources:
- Ortho image metrics
- Ortho image texture metrics
- Point cloud metrics (UAS SfM)
- Point cloud metrics (UAS LiDAR)
- CHM texture metrics (UAS SfM)
- CHM texture metrics (UAS LiDAR)
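The image and CHM texture metrics listed above (and used throughout Table A4) are grey-level co-occurrence, i.e. Haralick, statistics. The sketch below computes three of them (angular second moment, contrast, homogeneity) from scratch on a tiny hypothetical 4-level image; a production workflow would more likely call `skimage.feature.graycomatrix` / `graycoprops`, and the image, offset, and level count here are illustrative assumptions.

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=4):
    """Grey-level co-occurrence matrix for one pixel offset, normalised to probabilities."""
    P = np.zeros((levels, levels))
    h, w = img.shape
    for i in range(h - dy):
        for j in range(w - dx):
            P[img[i, j], img[i + dy, j + dx]] += 1  # count the co-occurring pair
    return P / P.sum()

def haralick(P):
    """Three common Haralick texture statistics from a normalised GLCM."""
    i, j = np.indices(P.shape)
    return {
        "ASM": float((P ** 2).sum()),                      # angular second moment
        "contrast": float((P * (i - j) ** 2).sum()),
        "homogeneity": float((P / (1.0 + (i - j) ** 2)).sum()),
    }

img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [2, 2, 3, 3],
                [2, 2, 3, 3]])
stats = haralick(glcm(img))
```

For this blocky image the horizontal GLCM has six equally likely pairs, giving ASM = 1/6, contrast = 1/3, and homogeneity = 5/6; smoother textures push homogeneity up and contrast down.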
Table 4. Metrics derived from image-based and point cloud products for classification of vegetation and fire severity.
Image-Based Metrics | Image Stream Bands | LiDAR | Description
Mean | L, A, B | N/A | Metric for each band calculated separately within the segment
Texture (ASM; sum of squares: variance) | L | CHM | Texture calculated from the single-channel lightness (L) image within the segment

Point cloud metrics (computed from the RGB (SfM) point cloud and the LiDAR point cloud; analysis was conducted for the segment and the 2nd level of adjacency to the central segment):
- Percentiles (10th, 50th, 90th)
- Number of layers
- Distance between 1st and 2nd layer
- Volume of points
- Difference in percentile heights
- Difference in number of layers
- Difference in volume
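The point cloud metrics above reduce to simple statistics over height-normalised points. The sketch below uses synthetic pre- and post-fire heights; the occupied-voxel definition of "volume of points", the 0.5 m bin size, and the 25 m² segment area are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical normalised point heights (m above ground) for one segment.
pre_z = rng.uniform(0.0, 30.0, size=2000)   # intact canopy up to ~30 m
post_z = rng.uniform(0.0, 18.0, size=2000)  # canopy reduced after fire

def height_metrics(z, voxel=0.5, area_m2=25.0):
    """Percentile heights plus an occupied-voxel proxy for 'volume of points'."""
    p10, p50, p90 = np.percentile(z, [10, 50, 90])
    occupied_bins = len(np.unique(np.floor(z / voxel)))
    return {"p10": p10, "p50": p50, "p90": p90,
            "volume": occupied_bins * voxel * area_m2}

pre = height_metrics(pre_z)
post = height_metrics(post_z)
diff_p50 = post["p50"] - pre["p50"]          # difference in percentile heights
diff_volume = post["volume"] - pre["volume"]  # difference in volume
```

Negative pre/post differences in percentile height and volume are exactly the kind of structural change signal the severity classifier draws on.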
Table 5. Confusion matrix for image-only data stream describing vegetation classification.
(rows: classified data; columns: reference data)
Class | Bare Earth | Forest | Sedgeland | Water | User's Accuracy
Bare Earth | 1 | 0 | 1 | 0 | 50.0%
Producer's Accuracy | 25.0% | 80.8% | 85.7% | 50.0% | Overall: 80.6%
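The user's, producer's, and overall accuracies reported in Tables 5 to 12 all follow directly from the confusion matrix. A minimal sketch: only the bare-earth row and its column total are chosen to reproduce Table 5's 50.0% user's and 25.0% producer's accuracy for that class; every other count is hypothetical.

```python
import numpy as np

# Confusion matrix: rows = classified data, columns = reference data.
# Class order: bare earth, forest, sedgeland, water. Counts are illustrative.
cm = np.array([[1,  0,  1, 0],
               [0, 42,  5, 1],
               [2,  8, 60, 3],
               [1,  2,  4, 4]])

overall = np.trace(cm) / cm.sum()            # fraction of all segments correct
users = np.diag(cm) / cm.sum(axis=1)         # per-class, over each classified row
producers = np.diag(cm) / cm.sum(axis=0)     # per-class, over each reference column
```

User's accuracy answers "if the map says bare earth, how often is it right?", while producer's accuracy answers "of the true bare-earth segments, how many did the map find?"; the two can diverge sharply for rare classes, as the bare-earth rows in Tables 5 to 7 show.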
Table 6. Confusion matrix for LiDAR-only data stream describing vegetation classification.
(rows: classified data; columns: reference data)
Class | Bare Earth | Forest | Sedgeland | Water | User's Accuracy
Bare Earth | 1 | 0 | 1 | 0 | 50.0%
Producer's Accuracy | 25.0% | 80.5% | 86.7% | 21.4% | Overall: 78.9%
Table 7. Confusion matrix for Combined data stream describing vegetation classification.

Classified Data \ Reference Data | Bare Earth | Forest | Sedgeland | Water | User’s Accuracy
Bare Earth | 0 | 1 | 1 | 0 | 0.0%
Producer’s Accuracy | 0.0% | 84.0% | 88.8% | 44.4% | Overall: 83.1%
Table 8. Confusion matrix for image-only data stream describing severity of sedgeland segments.

Classified Data (Pre and Post Variables) \ Reference Data | Not-Severe | Severe | Unburnt | User’s Accuracy
Producer’s Accuracy | 16.7% | 86.8% | 42.1% | Overall: 72.4%
Table 9. Confusion matrix for LiDAR-only data stream describing severity of low vegetation segments.

Classified Data (Pre and Post Variables) \ Reference Data | Not-Severe | Severe | Unburnt | User’s Accuracy
Producer’s Accuracy | 27.3% | 92.3% | 13.0% | Overall: 75.2%
Table 10. Confusion matrix for Combined stream describing severity of sedgeland segments.

Classified Data (Pre and Post Variables) \ Reference Data | Not-Severe | Severe | Unburnt | User’s Accuracy
Producer’s Accuracy | 23.8% | 92.2% | 26.3% | Overall: 76.6%
Table 11. Confusion matrix for image-only stream describing severity of forest segments.

Classified Data (Pre and Post Variables) \ Reference Data | Not-Severe | Severe | Unburnt | User’s Accuracy
Producer’s Accuracy | 28.6% | 89.2% | 47.4% | Overall: 76.6%
Table 12. Confusion matrix for LiDAR-only stream describing severity of forest segments.

Classified Data (Pre and Post Variables) \ Reference Data | Not-Severe | Severe | Unburnt | User’s Accuracy
Producer’s Accuracy | 29.5% | 88.9% | 30.4% | Overall: 74.5%
Table 13. Confusion matrix for combined stream describing severity of forest segments.

Classified Data (Pre and Post Variables) \ Reference Data | Not-Severe | Severe | Unburnt | User’s Accuracy
Producer’s Accuracy | 19.0% | 94.1% | 42.1% | Overall: 78.5%
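The user’s, producer’s, and overall accuracies reported in these confusion matrices follow the standard definitions: user’s accuracy divides each diagonal count by its row (classified) total, producer’s accuracy divides it by its column (reference) total, and overall accuracy is the diagonal sum over the grand total. A minimal sketch, using a made-up two-class matrix rather than the study’s data:

```python
def accuracies(matrix):
    """matrix[i][j]: rows = classified class i, columns = reference class j.

    Returns (user's accuracies, producer's accuracies, overall accuracy).
    """
    n = len(matrix)
    total = sum(sum(row) for row in matrix)
    correct = sum(matrix[i][i] for i in range(n))
    users = [matrix[i][i] / sum(matrix[i]) for i in range(n)]
    producers = [matrix[i][i] / sum(matrix[j][i] for j in range(n))
                 for i in range(n)]
    return users, producers, correct / total

m = [[8, 2],  # hypothetical 2-class example
     [1, 9]]
u, p, oa = accuracies(m)
print(u, oa)  # user's accuracies [0.8, 0.9], overall 0.85
```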
Table 14. Vertical structure change separated by the classification of the vegetation and severity type. Difference is the change in the mean (post − pre); heights and height differences in m, volume in m³ (— = value unavailable).

LiDAR

Vegetation | Severity | Metric | Pre Mean | Pre Std Dev | Pre Skew | Pre Kurtosis | Post Mean | Post Std Dev | Post Skew | Post Kurtosis | Difference
Forest | Severe | 10th % height (m) | 6.57 | 7.17 | 1.87 | 3.80 | — | — | — | — | —
Forest | Severe | 50th % height (m) | 20.57 | 10.81 | −0.10 | −1.07 | 20.96 | 10.96 | −0.29 | −0.90 | 0.39
Forest | Severe | 90th % height (m) | 27.32 | 9.57 | −0.23 | −0.80 | 26.94 | 10.73 | −0.63 | −0.09 | −0.39
Forest | Severe | Layer Count | 4.86 | 1.90 | 0.58 | 0.96 | 4.19 | 2.18 | 0.30 | −0.04 | −0.68
Forest | Severe | Volume (m³) | 2.65 | 1.27 | 0.38 | — | — | — | — | — | −0.45
Forest | Not-Severe | 10th % height (m) | 7.02 | 6.94 | 1.54 | 2.15 | 8.99 | 8.24 | 0.59 | −0.77 | 1.97
Forest | Not-Severe | 50th % height (m) | 23.81 | 10.27 | −0.38 | −0.76 | 24.05 | 10.22 | −0.51 | −0.58 | 0.24
Forest | Not-Severe | 90th % height (m) | 30.15 | 9.64 | −0.42 | −0.61 | 30.04 | 10.22 | −0.68 | 0.03 | −0.10
Forest | Not-Severe | Layer Count | 4.98 | 1.91 | 0.41 | −0.22 | 4.63 | 2.07 | 0.25 | −0.09 | −0.35
Forest | Not-Severe | Volume (m³) | 3.64 | 1.30 | 0.10 | 0.01 | 3.36 | 1.52 | 0.43 | 0.19 | −0.27
Sedgeland | Severe | 10th % height (m) | 0.48 | 0.80 | 11.19 | 249.66 | 0.23 | 1.49 | 10.86 | 125.90 | −0.25
Sedgeland | Severe | 50th % height (m) | 2.30 | 4.24 | 4.58 | 23.34 | 1.94 | 5.20 | 3.75 | 14.66 | −0.36
Sedgeland | Severe | 90th % height (m) | 5.52 | 6.96 | — | — | — | — | — | — | −1.33
Sedgeland | Severe | Layer Count | 1.57 | 1.53 | 1.43 | 3.99 | 0.83 | 1.20 | 2.19 | 6.41 | −0.75
Sedgeland | Severe | Volume (m³) | 1.64 | 1.21 | −0.17 | −1.05 | 0.64 | 0.82 | 1.53 | 4.50 | —
Sedgeland | Not-Severe | 10th % height (m) | — | — | — | — | — | — | — | — | −0.54
Sedgeland | Not-Severe | 50th % height (m) | 3.37 | 4.47 | 3.88 | 18.53 | 3.45 | 5.46 | 3.22 | 11.66 | 0.08
Sedgeland | Not-Severe | 90th % height (m) | 7.20 | 7.51 | 1.82 | 3.31 | 6.82 | 7.75 | 1.91 | 3.73 | −0.38
Sedgeland | Not-Severe | Layer Count | 1.87 | 1.68 | 1.24 | 1.79 | 1.36 | 1.26 | 1.60 | 4.05 | −0.51
Sedgeland | Not-Severe | Volume (m³) | 1.69 | 1.14 | −0.09 | −0.04 | 1.45 | 1.19 | 0.47 | −0.01 | −0.24

SfM

Vegetation | Severity | Metric | Pre Mean | Pre Std Dev | Pre Skew | Pre Kurtosis | Post Mean | Post Std Dev | Post Skew | Post Kurtosis | Difference
Forest | Severe | 10th % height (m) | — | — | — | — | — | — | — | — | −3.81
Forest | Severe | 50th % height (m) | 15.80 | 10.51 | 0.45 | −0.84 | 16.10 | 11.34 | 0.02 | −1.36 | 0.30
Forest | Severe | 90th % height (m) | 26.16 | 10.03 | −0.51 | −0.47 | 24.72 | 11.91 | −0.62 | −0.59 | −1.44
Forest | Severe | Layer Count | 4.49 | 1.92 | — | — | — | — | — | — | −1.33
Forest | Severe | Volume (m³) | 11.66 | 4.13 | −0.35 | 0.85 | 5.05 | 3.48 | 0.90 | 1.06 | −6.61
Forest | Not-Severe | 10th % height (m) | 5.03 | 7.28 | 2.72 | 7.50 | 2.27 | 3.80 | 3.01 | 10.21 | −2.76
Forest | Not-Severe | 50th % height (m) | 18.28 | 10.45 | 0.24 | −0.83 | 19.76 | 9.71 | −0.32 | −0.95 | 1.49
Forest | Not-Severe | 90th % height (m) | 28.89 | 10.53 | −0.69 | −0.12 | 29.23 | 10.28 | −0.78 | 0.06 | 0.34
Forest | Not-Severe | Layer Count | 4.79 | 1.94 | 0.06 | 0.04 | 3.93 | 1.83 | 0.18 | −0.32 | −0.86
Forest | Not-Severe | Volume (m³) | 13.95 | 4.67 | −0.76 | 1.19 | 9.50 | 4.55 | 0.73 | 0.60 | −4.45
Sedgeland | Severe | 10th % height (m) | 0.84 | 1.05 | 13.98 | 365.31 | 0.15 | 0.89 | 15.46 | 274.95 | −0.69
Sedgeland | Severe | 50th % height (m) | 2.20 | 2.93 | 6.25 | 50.41 | 1.52 | 4.22 | 4.52 | 22.66 | −0.68
Sedgeland | Severe | 90th % height (m) | 5.62 | 6.48 | 2.38 | 6.23 | 3.72 | 6.53 | 2.68 | 7.87 | −1.90
Sedgeland | Severe | Layer Count | 1.84 | 1.25 | 1.35 | 2.98 | 0.74 | 1.03 | 2.30 | 9.40 | −1.10
Sedgeland | Severe | Volume (m³) | — | — | — | — | — | — | — | — | −6.02
Sedgeland | Not-Severe | 10th % height (m) | 1.22 | 1.16 | 3.19 | 29.90 | 0.49 | 1.43 | 12.25 | 189.11 | −0.74
Sedgeland | Not-Severe | 50th % height (m) | 2.86 | 3.03 | 5.21 | 44.98 | 2.42 | 4.00 | 4.19 | 22.01 | −0.44
Sedgeland | Not-Severe | 90th % height (m) | 6.74 | 6.96 | 1.99 | 4.37 | 5.21 | 6.59 | 2.38 | 6.94 | −1.53
Sedgeland | Not-Severe | Layer Count | 1.84 | 1.44 | 1.10 | 1.54 | — | — | — | — | −0.77
Sedgeland | Not-Severe | Volume (m³) | 6.82 | 4.08 | −0.36 | −0.71 | 3.92 | 3.84 | 0.64 | 0.15 | −2.90
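The mean, standard deviation, skew, and kurtosis columns of Table 14 summarise the per-class height (and volume) distributions. The sketch below uses the sample standard deviation and moment-based skewness and excess kurtosis; the exact estimators used in the paper are not stated here, so treat these as assumptions:

```python
import math

def moments(xs):
    """Mean, sample std dev, moment skewness, and excess kurtosis."""
    n = len(xs)
    mean = sum(xs) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in xs) / (n - 1))
    # Central moments (population form) for skew and kurtosis.
    m2 = sum((x - mean) ** 2 for x in xs) / n
    m3 = sum((x - mean) ** 3 for x in xs) / n
    m4 = sum((x - mean) ** 4 for x in xs) / n
    skew = m3 / m2 ** 1.5
    kurt = m4 / m2 ** 2 - 3.0  # excess kurtosis: 0 for a normal distribution
    return mean, sd, skew, kurt

# A right-skewed sample (one tall outlier) gives a positive skew,
# mirroring the large positive skews of the sedgeland height rows.
mean, sd, skew, kurt = moments([1.0, 2.0, 2.0, 3.0, 10.0])
```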
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Hillman, S.; Hally, B.; Wallace, L.; Turner, D.; Lucieer, A.; Reinke, K.; Jones, S. High-Resolution Estimates of Fire Severity—An Evaluation of UAS Image and LiDAR Mapping Approaches on a Sedgeland Forest Boundary in Tasmania, Australia. Fire 2021, 4, 14.