1. Introduction
Recent advances in remote sensing technology have ushered in a rapid increase in the number of earth observation (EO) satellites launched, mission longevity, and the spatial, spectral, and temporal resolution captured by the employed sensors. Since the 1970s, the average number of EO satellites launched per year in each decade has increased from 2 to 12, and the spatial resolution of multispectral sensors has increased from 80 m to less than 1 m [1,2]. The rate of increase in EO satellites launched is expected to accelerate in the coming decades. Technological advancements have also allowed spaceborne active sensors, such as synthetic aperture radar (SAR) and LiDAR, to be launched, with miniaturisation into CubeSats facilitating reduced costs for targeted missions [3]. Rapid advancements in unmanned aerial vehicle (UAV) technology and lower costs have initiated an ongoing increase in the use of very-high-spatial-resolution (sub-cm) UAV data in earth observation applications [4,5]. While the benefits of higher-spatial- and -spectral-resolution remote sensing data include a greater volume of, and detail in, the EO information captured, increasingly fine resolution can create challenges for some applications, requiring more advanced image analysis techniques for the accurate classification of high-resolution imagery.
Pixel-wise techniques are commonly used in the classification of satellite imagery, where the radiometric properties of individual pixels are treated as independent of the surrounding pixels [6,7]. However, pixel-wise classification approaches have inherent limitations when applied to features with heterogeneous landscape patterns or when a feature may be smaller or larger than a pixel. Similarly, the type of imagery may also influence the decision to use pixel-wise methods. For example, finer-resolution imaging increases the number of adjacent pixels that may need to be clustered to represent an object of interest [7]. Furthermore, certain sensor characteristics may also increase misclassification when using pixel-wise methods. For example, SAR imagery is characterised by ‘salt and pepper’ speckle noise due to coherent interference from multiple scatterers in each resolution cell [8]; therefore, the classification of single pixels is best avoided in favour of spatial averaging or object-based approaches. Contextual information from the spatial association and radiometric comparison of neighbouring pixels may improve classification capacity for high-resolution optical and radar imagery and for classification problems in heterogeneous landscapes.
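As a minimal illustration of the spatial averaging mentioned above, the following Python sketch applies a simple boxcar (multi-look) mean filter to a SAR intensity image before any pixel-wise analysis; the function name and window size are illustrative choices, not a prescription from the literature.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def boxcar_despeckle(intensity, window=5):
    """Suppress speckle by averaging SAR intensity over a
    window x window neighbourhood (a simple boxcar/multi-look filter).

    Averaging N roughly independent looks reduces speckle variance by
    approximately a factor of N, at the cost of spatial resolution.
    """
    # Filter in the linear intensity domain, not in dB, so that the
    # local mean remains radiometrically meaningful.
    return uniform_filter(intensity.astype(np.float64), size=window)
```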
Texture is an innate property of all surfaces and can be used in automated feature extraction to classify objects or regions of interest affected by the limits of classical pixel-wise image-processing methods. Image texture analysis aims to reduce the number of resources required to accurately describe a large set of complex data [9]. Particular characteristics of landscape patterns may benefit from image texture analysis techniques. Texture may add vegetation structural information to estimates of forest and woodland composition via spectral vegetation indices, thereby helping to discern vegetation community types [10,11,12,13]. The accuracy in classifying environmental phenomena with a strong element of spatial contagion, such as floods, fires, smoke, and the spread of disease, may also improve through image texture analysis, as nearby pixels tend to belong to the same class or to classes with a functional association [8,14].
Various statistical measures of image texture can be derived using different approaches, with the Grey-Level Co-occurrence Matrix (GLCM; [15]) method being the most commonly used approach in remote sensing. A GLCM represents the frequency of the occurrence of pairs of grey levels (intensity values) for combinations of pixels at specified positions relative to each other within a defined neighbourhood (e.g., a 5 × 5-pixel window or kernel). Within the neighbourhood, texture consists of three elements: the tonal difference between pixels, the distance over which tonal difference is measured, and directionality. The central pixel of the moving window is recoded with the chosen texture statistic, generating a single raster layer that may be used as an input in further analysis [9,15,16]. Texture-based statistics include first-order measures, such as mean and variance, which do not include information on the directional relationships between pixels, and second-order (co-occurrence) measures, such as contrast, homogeneity, correlation, dissimilarity, and entropy [16]. With many options available, selecting appropriate texture metrics may require systematic comparative assessment and is likely to vary depending on the application.
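As a concrete sketch of this moving-window procedure (assuming scikit-image ≥ 0.19, where the relevant functions are named graycomatrix and graycoprops; the window size, grey-level count, and default metric are illustrative choices, not the settings used in this study), each pixel can be recoded with a second-order GLCM statistic computed over its neighbourhood:

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_texture(image, metric="contrast", window=5, levels=32, distance=1):
    """Recode each pixel with a GLCM statistic (e.g., contrast,
    homogeneity, dissimilarity, correlation) computed over a
    window x window neighbourhood; edge pixels are left as NaN."""
    # Quantise to a small number of grey levels so the co-occurrence
    # matrix stays compact (levels x levels).
    bins = np.linspace(image.min(), image.max(), levels)
    quantised = (np.digitize(image, bins) - 1).astype(np.uint8)
    half = window // 2
    out = np.full(image.shape, np.nan, dtype=np.float32)
    # Average over four directions for a rotation-tolerant measure.
    angles = [0, np.pi / 4, np.pi / 2, 3 * np.pi / 4]
    for row in range(half, image.shape[0] - half):
        for col in range(half, image.shape[1] - half):
            patch = quantised[row - half:row + half + 1,
                              col - half:col + half + 1]
            glcm = graycomatrix(patch, [distance], angles,
                                levels=levels, symmetric=True, normed=True)
            out[row, col] = graycoprops(glcm, metric).mean()
    return out
```

First-order measures such as the local mean and variance ignore pixel pairings and can be computed far more cheaply with a simple convolution; only the second-order measures require building the co-occurrence matrix itself.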
In the broad-scale mapping of fire severity, which is defined as the immediate post-fire loss of or change in organic matter caused by fire [17], remotely sensed imagery is predominantly processed using pixel-wise image differencing techniques that determine the difference between pre- and post-fire images [17,18,19,20,21,22,23]. Numerous reflectance indices have been derived and compared for applications in the remote sensing of fire severity, including the differenced Normalised Burn Ratio (dNBR; [20]), the Relativised dNBR (RdNBR; [24]), the soil-adjusted vegetation index [25,26], the burned area index [27], Tasselled-cap brightness and greenness transformations [28], and sub-pixel unmixing estimates of photosynthetic cover [19,29,30,31]. The supervised classification of multiple indices of fire severity has recently been the focus of research employing machine learning and pixel-wise approaches based on Landsat imagery [32] as well as higher-resolution (10 m pixel size) Sentinel 2 imagery [31]. However, fire severity is an ecological phenomenon with inherent spatial contagion and image texture properties that vary between severity classes; image texture becomes increasingly homogeneous as fire severity increases. Thus, fire severity mapping may benefit from advanced data fusion techniques combining complementary information from multiple input types (i.e., pixel-based and texture-based indices).
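For reference, the dNBR and RdNBR cited above can be written out explicitly. The sketch below assumes atmospherically corrected near-infrared (NIR) and shortwave-infrared (SWIR) reflectance arrays (for Sentinel 2, bands B8A and B12 are a common pairing), and the unscaled RdNBR form follows the standard relativisation in [24]; band choices and the small epsilon guard are illustrative assumptions.

```python
import numpy as np

def nbr(nir, swir):
    """Normalised Burn Ratio: (NIR - SWIR) / (NIR + SWIR)."""
    return (nir - swir) / (nir + swir + 1e-10)  # epsilon avoids division by zero

def dnbr(nir_pre, swir_pre, nir_post, swir_post):
    """Differenced NBR: pre-fire minus post-fire NBR.
    Larger values indicate greater fire-induced change."""
    return nbr(nir_pre, swir_pre) - nbr(nir_post, swir_post)

def rdnbr(nir_pre, swir_pre, nir_post, swir_post):
    """Relativised dNBR: dNBR scaled by the square root of the
    absolute pre-fire NBR, making severity estimates more comparable
    across sites with different pre-fire vegetation cover."""
    pre = nbr(nir_pre, swir_pre)
    delta = dnbr(nir_pre, swir_pre, nir_post, swir_post)
    return delta / np.sqrt(np.abs(pre) + 1e-10)
```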
During the bushfire crisis of 2019–2020 in south-eastern Australia, the approach outlined in Gibson et al. (2020) was rapidly operationalised to map fires in near-real time using Sentinel 2 data [31]. The mapping helped fire management agencies in New South Wales to understand the evolving situation and prioritise response actions. However, smoke from the extensive active fires in the surrounding landscape significantly limited the selection of suitable clear imagery for rapid response mapping. Rapid fire extent and severity mapping could benefit from the all-weather, cloud- and smoke-penetrating capability of SAR. Furthermore, active sensors such as SAR have a greater potential to add information on the third dimension of biophysical structure compared to the more traditional two-dimensional optical remotely sensed data [33,34].
Recent studies that investigated the sensitivity of different SAR frequencies for fire severity applications found that both shorter- (C-band, ~5.4 cm) and longer-wavelength (L-band, ~24 cm) SAR data show some potential. For example, Tanase et al. (2010) observed an increase in co-polarised backscatter and a decrease in cross-polarised backscatter in the X-, C-, and L-bands in a burnt pine forest in the Ebro valley, Spain, due to a decrease in volume scattering from the canopy [35]. Interferometric SAR coherence and full polarimetric SAR have also facilitated the discrimination of fire severity classes [36,37]. Furthermore, a progressive burned-area-monitoring capacity has been demonstrated using Sentinel 1 [38], capturing most burnt areas with the exception of some low-severity areas without structural change.
Several recent studies have included SAR-derived texture metrics in burnt-area-mapping applications, the majority concerning the detection of post-fire burnt areas. For example, Lestari et al. (2021) demonstrated improved classification of burnt and unburnt areas through the joint classification of Sentinel 1 (including GLCM features) and Sentinel 2 data [39]. Sentinel 1 GLCM texture measures of entropy, homogeneity, and contrast demonstrated high variability in separating burnt and unburnt areas in Victoria, Australia [40]. The impact of topography on backscatter, and hence the better separation of burnt flat areas, was also highlighted by Tariq et al. (2023), who found that similarity, entropy, homogeneity, and contrast were most useful for separating burnt and unburnt areas [41]. Tariq et al. (2023) also noted the importance of window size, lag distance, and quantisation level when calculating GLCM texture, and that further studies were needed to better understand the sensitivities with changing resolution. To date, no studies have evaluated the use of SAR texture metrics for fire severity mapping.
Through a systematic comparison of classification accuracy and the visual interpretation of classified fire extent and severity maps, this study tested (1) whether image texture indices improved the accuracy of fire extent and severity mapping based on Sentinel 1 SAR data, (2) whether they did the same for Sentinel 2 optical data, and (3) whether the most suitable neighbourhood window size and texture metrics varied with sensor and application (fire extent vs. fire severity).
5. Conclusions
Choosing which set of remotely sensed data to use in ecological studies and land management is a function of what is needed and what is possible. With rapidly changing needs and ever-expanding information resources, advanced image analysis and data fusion techniques are expanding the possibilities for extracting detailed information from high-resolution imagery to describe forest structures, functions, and ecosystem processes. In this study, we compared the performance of Sentinel 1 (radar) and Sentinel 2 (optical) data with respect to fire severity and extent mapping over a diverse range of forests and topographies in NSW, Australia. The study has contributed new information on the use of SAR-derived GLCM texture metrics for fire severity mapping, an application that has received little attention in the scientific literature. The inclusion of texture indices alongside standard pixel-based metrics was found to increase classification accuracies for both sensor types. The greatest improvements were observed in the higher-severity class when using SAR data and in the moderate-severity class when using optical data in the target-trained models. Sentinel 1 texture indices, including mean and variance, contributed the most to fire severity and extent mapping. The mean dNBR and dFCBare featured prominently in the Sentinel 2 severity models, while a higher number of texture indices contributed to Sentinel 2 extent mapping. Smaller window sizes (5 × 5 or 7 × 7 pixels) were suitable for Sentinel 2, while a larger window size (11 × 11 pixels) was optimal for the computation of texture indices using Sentinel 1 data, although this may vary with forest canopy density and topographic complexity.
The influences of dense cover, high biomass, and steep terrain were more evident in the Sentinel 1 models, with the short wavelength of the C-band limiting the detection of burnt areas and severity. Multi-sensor performance was demonstrated using a novel approach to accuracy assessment based on target-trained and cross-validation strategies. Given the high variability in the radar backscattering response to burnt areas, we demonstrated that the use of local training data that capture the difference in relative intensity at a given location and time is important. Our cross-validation results indicate that Sentinel 2 has greater potential to map the fire extent and severity of novel fires compared to Sentinel 1. Future monitoring scenarios will likely continue to focus on the use of optical sensor data for fire extent and severity estimation. Currently, C-band SAR may be useful in instances wherein cloud and smoke limit optical observations, offering the potential to capture most of the severely impacted area. The combined potential of future generations of multi-frequency SAR for severity estimation and for use in analysing specific types of land cover (e.g., heath and grasslands) is the subject of future work. An integrated fire-mapping system that incorporates both active and passive remote sensing for detecting and monitoring changes in vegetation cover and structure would be a valuable resource for future fire management.