Article

Evaluation of Key Remote Sensing Features for Bushfire Analysis

1 School of Computer Science, Faculty of Engineering and IT, University of Technology Sydney, Sydney, NSW 2007, Australia
2 RIKEN Center for Advanced Intelligence Project, Disaster Resilience Science Team, Tokyo 103-0027, Japan
3 Research Computing Center, University of Chicago, Chicago, IL 60637, USA
* Author to whom correspondence should be addressed.
Remote Sens. 2025, 17(11), 1823; https://doi.org/10.3390/rs17111823
Submission received: 14 April 2025 / Revised: 9 May 2025 / Accepted: 19 May 2025 / Published: 23 May 2025
(This article belongs to the Special Issue Advances in Spectral Imagery and Methods for Fire and Smoke Detection)

Abstract

This study evaluates remote sensing features to resolve problems associated with feature redundancy, low efficiency, and insufficient input feature analysis in bushfire detection. It calculates spectral features, remote sensing indices, and texture features from Sentinel-2 data for the Blue Mountains region of New South Wales, Australia. Feature separability was evaluated with three measures: J-M distance, discriminant index, and mutual information, leading to an assessment of the best remote sensing features. The results show that for post-fire smoke detection, the best features are the normalized difference vegetation index (NDVI), the B1 band, and the angular second moment (ASM) in the B1 band, with respective scores of 0.900, 0.900, and 0.838. For burned land detection, the best features are NDVI, the B2 band, and correlation (Corr) in the B5 band, with corresponding scores of 1.000, 0.9436, and 0.9173. These results demonstrate the effectiveness of NDVI, the B1 and B2 bands, and specific texture features in the post-fire analysis of remote sensing data. These findings provide valuable insights for the monitoring and analysis of bushfires and offer a solid foundation for future model construction, fire mapping, and feature interpretation tasks.

1. Introduction

Bushfires are prevalent in Australia and typically occur in undeveloped natural areas such as bushes, sparse forests, and grasslands [1,2]. Bushfires are characterized by their rapid spread, extended duration, and high destructiveness. They endanger human lives and property, while also consuming crops and woods, threatening wildlife, and causing extensive structural damage [3,4]. In recent years, rising temperatures and increasing aridity in Australia have contributed to numerous bushfires, including the catastrophic events of “Black Saturday” and “Black Summer” [5,6,7]. The “Black Summer” bushfire, the most extensive in the past five years (2019–2020), burned approximately 17.1 million hectares of land, claimed 34 lives, and caused the deaths of billions of animals, significantly affecting Australia’s biological ecosystems [8].
The rapid development of remote sensing technology has enabled the development of long-range and large-scale fire detection methods and fire analysis [9,10]. These methods play a crucial role in identifying the onset and progression of fires, thereby helping to mitigate fire impacts and reduce associated losses [11].
Two principles in fire detection and analysis that utilize remote sensing imagery are image classification and change detection. Change detection is the comparison and analysis of remotely sensed images acquired at different times to identify and detect changes in surface features [12]. Image classification is the technique of partitioning each pixel in a remote sensing image into land-cover types, either automatically or manually, according to various characteristics of the image [13]. Typically, spectral features, texture features, remote sensing indices, and scattering characteristics are used as input variables in image classification and change detection with remote sensing images [14,15]. These characteristics encode the attribute information of remote sensing images [16,17] and can be applied in many analyses and interpretations concerning different properties of surface features.
Throughout the 20th century, surface change detection with remote sensing imagery relied primarily on composite overlays, image differencing, band ratioing, and principal component analysis (PCA). Image classification techniques were largely limited to maximum likelihood classification (MLC) and the simpler minimum distance classification [18,19]. All of these depended on typical remote sensing indices such as the normalized difference vegetation index (NDVI) and selected spectral bands as input variables [20].
Li et al. [21] applied atmospheric and geometric corrections to multitemporal images across seven spectral bands and performed PCA to build a multidimensional vector for extracting land-use change information. Similarly, Sader et al. [22] combined spectral bands with NDVI and other common vegetation indices from National Oceanic and Atmospheric Administration (NOAA), Landsat 5, and Satellite Pour l’Observation de la Terre (SPOT) satellite data, applying the maximum likelihood and minimum distance classification methods for effective classification of land-cover types in tropical forests. In addition, Vogelmann et al. [23] analyzed 1984 and 1988 Landsat 5 remote sensing images of the deciduous forest regions in southern Vermont and northwestern Massachusetts; the green, red, near-infrared (NIR), and shortwave infrared (SWIR) bands were used to build a maximum likelihood classifier for a detailed classification of forest damage levels in the study area.
In the 21st century, researchers began incorporating texture features into change detection and image classification and showed that they improve detection accuracy [24]. For instance, Erener et al. [25] applied texture features to the detection of land-use changes, using black-and-white aerial photographs and modern satellite panchromatic images. Their results indicated a considerable enhancement of classification accuracy with the incorporation of texture features, especially in differentiating buildings from other land-use types.
During this period, spectral bands, remote sensing indices, and texture features together formed a wider array of features used in change detection and image classification. Classification methods also became more diverse, combining PCA with MLC, decision trees, support vector machines (SVMs), and other machine-learning algorithms [26,27].
For example, spectral bands from the Hyperspectral Digital Imagery Collection Experiment (HYDICE) and Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) hyperspectral datasets were used by Rodarmel et al. [28]. After carrying out PCA to reduce the dimensionality of the hundreds of spectral bands for principal component feature extraction to be used in subsequent classification, they then used the maximum likelihood method to attain high-accuracy image classification [28]. In a similar way, Pal et al. [29] used spectral bands from Landsat-7 and Digital Airborne Imaging Spectrometer 791 (DAIS 791) hyperspectral data. They applied SVM, MLC, and artificial neural networks (ANN) to classify land-cover types [29].
As microwave remote sensing has continually improved in recent years, scattering characteristics have gradually gained recognition as key features of image classification. Current common methods of classification are based on machine learning and deep learning [30]. For example, the spectral and scattering features of the Pavia Center and University of Pavia datasets were used by Zhao et al. [31] to develop the SSFC deep learning model for urban land-cover types.
While combining spectral and scattering features can enhance classification performance, another common approach is to input only spectral features. Deep learning models are capable of automatically extracting relevant features during training [32,33], making this method particularly attractive. This approach has been widely adopted in wildfire detection applications. For instance, Park et al. [34] utilized multiple spectral bands from Sentinel-2 remote sensing imagery—including midwave infrared (MIR), SWIR, NIR, and visible light (RGB)—to train deep learning models for wildfire detection. They integrated them with U-Net, ResNet, and Long Short-Term Memory (LSTM) models for forest fire detection [34]. Another approach is to feed remote sensing indices only as input features. For instance, NDVI and other remote sensing indices were computed from Sentinel-2 imagery by Singh et al. [35] in a Geographic Information System (GIS)-based multicriteria decision support system to classify wildfire risk areas.
Some studies have combined spectral bands and remote sensing indices with scattering features as input variables [36,37,38]. For example, Chen et al. [36] computed Synthetic Aperture Radar (SAR) backscattering features (vertical transmit, vertical receive/vertical transmit, horizontal receive (VV/VH) polarization features) from Sentinel-1B imagery and combined them with NDVI and SWIR bands from Sentinel-2A as input features. They established a wildfire detection model using random forest, SVM, and convolutional neural network (CNN) models and found that combining scattering features with optical data substantially improved detection accuracy.
In other cases, spectral bands and remote sensing indices are combined with texture features as auxiliary inputs. For instance, Gibson et al. [39] incorporated image texture and contrast entropy as texture features alongside remote sensing indices such as NDVI, significantly enhancing the accuracy of fire extent mapping and severity classification. Their wildfire detection method thus combines texture features with optical data.
Moreover, other studies use multispectral input and integrate multiple features. A burned area detection model was developed by Yilmaz et al. [40] using a CNN applied to Sentinel-2 satellite imagery; the researchers used the red, red-edge, NIR, SWIR 1, and SWIR 2 channels along with five remote sensing vegetation indices, including NDVI [40]. Another study by Fan et al. [41] proposed a new wildfire detection index called the Vegetation Anomaly Spectral Texture Index (VASTI), which combines spectral bands, vegetation and burned-area changes extracted with remote sensing indices, and texture features to eliminate misdetection along fire boundaries. They used raw spectral reflectance data to enhance the separability of land-cover categories, applied vegetation and burned-area indices to estimate changes in vegetation and burned areas, respectively, and applied textural indices to minimize misclassification errors at the fire boundaries.
Remote sensing features serve as a fundamental input for analyzing natural phenomena, monitoring environmental changes, and carrying out different interpretation and classification tasks. The discriminative power of the features directly determines the clarity and reliability of the analytical results. In the context of wildfire detection, understanding the intrinsic separability of each individual feature is crucial—not only for enhancing interpretability but also for supporting downstream tasks such as mapping, visualization, or model construction. Previous work has concentrated more on model-driven optimization of features than on systematically assessing the individual discriminatory power of remote sensing features prior to modeling. This leaves a gap in which feature redundancy accumulates and the standalone effectiveness of individual variables remains unexamined.
Therefore, this study aims to independently evaluate the capability of individual remote sensing features to discriminate between burned land and post-fire smoke. By examining feature behavior prior to modeling, it offers a complementary perspective intended to support the design of more efficient, interpretable, and physically meaningful fire detection systems. Rather than optimizing feature combinations for predictive modeling, we analyze the intrinsic separability of each feature. Texture features and remote sensing indices were extracted from satellite imagery for this purpose, and separability was assessed using three distance-based statistical measures. The results highlight which features, on their own, show strong potential for distinguishing fire-affected areas—thus informing future feature selection strategies.

2. Materials and Methods

2.1. Study Area

New South Wales is located at 34°S latitude and 151°E longitude, in the southeastern region of Australia, along the South Pacific Ocean. It covers an area of 801,600 square kilometers, accounting for 10.4% of Australia’s total land area. The state borders the Pacific Ocean to the east, Queensland to the north, and Victoria to the south [42]. For this study, the Blue Mountains region of New South Wales has been selected as the study area. This region spans 1.03 million hectares and is situated 104 km west of Sydney, Australia [43]. The Blue Mountains are characterized by sandstone plateaus, cliffs, and canyons, and are predominantly covered by temperate eucalyptus forests [43]. The region is home to 114 plant species with distinct regional characteristics, as well as 120 nationally rare and endangered plant species [43]. Figure 1 shows the location and imagery of the study area. The image representing burned land was captured on 6 December 2019, while the image representing post-fire smoke was acquired on 26 December 2019. Both images were taken during the “Black Summer” bushfire period in Australia.

2.2. Methodology

This study initially selected Sentinel-2 images capturing two distinct post-fire conditions—burned land and post-fire smoke—followed by standard preprocessing steps. Post-fire smoke refers to lingering smoke plumes or haze that remain in the atmosphere after the active fire front has passed. Detecting post-fire smoke is important because it can obscure land surface features in satellite imagery, complicate the accurate identification of burn scars, and impact air quality monitoring and public health assessments.
Following image selection and preprocessing, texture features and remote sensing indices were extracted and integrated with the original images. Subsequently, labels for burned land, smoke, and vegetation were generated and visualized. Finally, based on these labels and the processed images, multiple distance metrics were calculated, and the optimal remote sensing characteristics for identifying burned land and smoke were selected. The flowcharts for both cases are presented in Figure 2.

2.2.1. Image Selection and Preprocessing

This study utilized data from two bushfire-impacted regions within the Blue Mountains, accessed through Google Earth Engine (GEE) using the Sentinel-2 Surface Reflectance Harmonized (S2_SR_HARMONIZED) dataset. (The central wavelengths of the Sentinel-2 bands B1–B12 are as follows: B1: 443 nm, B2: 490 nm, B3: 560 nm, B4: 665 nm, B5: 705 nm, B6: 740 nm, B7: 783 nm, B8: 842 nm, B8A: 865 nm, B9: 945 nm, B11: 1610 nm, B12: 2190 nm.) The preprocessing steps already applied to this dataset are atmospheric correction via the Sen2Cor processor (a tool developed by ESA for converting Level-1C top-of-atmosphere reflectance to Level-2A bottom-of-atmosphere surface reflectance), radiometric calibration, and geometric alignment. The dataset offers spectral bands at spatial resolutions of 10 m, 20 m, and 60 m across the visible (blue, green, red), NIR, and SWIR regions. These spectral bands are important for vegetation and fire analysis, as they help characterize surface conditions and changes in spectral properties both before and after bushfire occurrences.

2.2.2. Calculation of Texture Features

Before calculating each texture feature, it is essential to derive the Gray Level Co-occurrence Matrix (GLCM) to analyze the image texture characteristics. The GLCM is constructed by calculating the co-occurrence frequency of gray levels between paired pixels at a specified spatial distance and direction. The GLCM captures the complexity of grayscale distribution and the degree of texture variation within an image, making it a critical tool for texture analysis. After constructing the GLCM, various texture features are calculated. This research selects ten texture features: angular second moment (ASM), contrast, correlation (Corr), variance (Var), inverse difference moment (IDM), sum average (SAVG), sum variance (SVAR), sum entropy (SENT), entropy (Ent), and dissimilarity (Diss) [44,45,46,47]. The specific calculation formulas for these features are as follows:
$p(i,j) = \dfrac{C(i,j)}{N}$

$\mathrm{ASM} = \sum_i \sum_j p(i,j)^2$

$\mathrm{Contrast} = \sum_n n^2 \left[ \sum_{|i-j|=n} p(i,j) \right]$

$\mathrm{Corr} = \dfrac{\sum_i \sum_j (i - \mu_i)(j - \mu_j)\, p(i,j)}{\sigma_i \sigma_j}$

$\mathrm{Var} = \sum_i \sum_j (i - \mu)^2\, p(i,j)$

$\mathrm{IDM} = \sum_i \sum_j \dfrac{1}{1 + (i-j)^2}\, p(i,j)$

$\mathrm{SAVG} = \sum_i i \left[ \sum_j p(i,j) + \sum_j p(j,i) \right]$

$\mathrm{SVAR} = \sum_i (i - \mathrm{SAVG})^2 \left[ \sum_j p(i,j) + \sum_j p(j,i) \right]$

$\mathrm{SENT} = -\sum_i \left[ \sum_j p(i,j) + \sum_j p(j,i) \right] \log \left[ \sum_j p(i,j) + \sum_j p(j,i) \right]$

$\mathrm{Ent} = -\sum_i \sum_j p(i,j) \log p(i,j)$

$\mathrm{Diss} = \sum_i \sum_j |i-j|\, p(i,j)$
Among them, $C(i,j)$ denotes the co-occurrence frequency of pixel pairs with gray levels $i$ and $j$, and $N$ represents the total number of pixel pairs in the image, ensuring that $p(i,j)$ is a normalized probability. $p(i,j)$ is the element in the $i$-th row and $j$-th column of the GLCM, indicating the probability that two pixels with gray levels $i$ and $j$ co-occur in the image under a specific spatial relationship; $n$ denotes the absolute difference between the gray levels $i$ and $j$; $\mu_i$ and $\mu_j$ are the means of the rows and columns of the GLCM; $\sigma_i$ and $\sigma_j$ are the standard deviations of the rows and columns of the GLCM; and $\mu$ is the overall mean of the GLCM.
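These GLCM steps can be sketched in Python. The following is a minimal NumPy illustration using a toy 4-level image, a single horizontal offset, and a subset of the ten features; the function names and quantization level are illustrative choices, not part of the study's actual processing pipeline:

```python
import numpy as np

def glcm(img, levels, offset=(0, 1)):
    """Normalized gray-level co-occurrence matrix p(i, j)
    for one non-negative (row, col) offset."""
    di, dj = offset
    C = np.zeros((levels, levels))
    rows, cols = img.shape
    for i in range(rows - di):
        for j in range(cols - dj):
            C[img[i, j], img[i + di, j + dj]] += 1  # C(i, j)
    return C / C.sum()  # p(i, j) = C(i, j) / N

def texture_features(p):
    """A subset of the texture features defined above."""
    i, j = np.indices(p.shape)
    nz = p > 0  # skip empty cells so the entropy term avoids log(0)
    return {
        "ASM": (p ** 2).sum(),
        "Contrast": ((i - j) ** 2 * p).sum(),
        "IDM": (p / (1 + (i - j) ** 2)).sum(),
        "Ent": -(p[nz] * np.log(p[nz])).sum(),
        "Diss": (np.abs(i - j) * p).sum(),
    }

# Toy patch already quantized to 4 gray levels
patch = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [2, 2, 3, 3],
                  [2, 2, 3, 3]])
feats = texture_features(glcm(patch, levels=4))
```

In practice (e.g., on the GEE platform) these statistics are computed per spectral band over a moving window rather than over a whole patch, and typically averaged across several offsets/directions.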

2.2.3. Calculation of Remote Sensing Indices

This study selects seven remote sensing indices that effectively reflect fire occurrence for screening, including NDVI, the difference index (DI), normalized burn ratio (NBR), normalized difference water index (NDWI), enhanced vegetation index (EVI), soil-adjusted vegetation index (SAVI), and burned area index (BAI) [48,49,50,51,52,53,54]. The specific calculation formulas and their explanations are as follows:
(1)
NDVI is a widely used remote sensing index that indicates vegetation health and coverage. Its calculation formula is as follows:
$NDVI = \dfrac{NIR - RED}{NIR + RED}$
Among the variables, N I R represents the reflectivity of the NIR band, R E D represents the reflectivity of the red light band, and the NDVI value ranges from −1 to 1. A higher NDVI score indicates greater vegetation density.
(2)
DI measures the relative intensity between the SWIR and NIR bands, making it a valuable tool for monitoring vegetation health and assessing soil moisture. The formula for its calculation is as follows:
$DI = \dfrac{SWIR}{NIR}$
where S W I R represents the reflectance of the SWIR band, and N I R represents the reflectance of the NIR band. A DI value greater than 1 suggests the presence of exposed soil or areas with carbide residues, while a value between 0 and 1 indicates vigorous vegetation or moist soil in the vicinity.
(3)
NBR is a widely used remote sensing index for fire detection, capable of indicating the extent of damage caused by fire to healthy vegetation. Its calculation formula is as follows:
$NBR = \dfrac{NIR - SWIR}{NIR + SWIR}$
where NIR represents the reflectance of the NIR spectrum, while SWIR represents the reflectance in the SWIR spectrum. NBR values range from −1 to 1; a lower value indicates a higher probability of the area being a burned region or a water body.
(4)
NDWI is used to identify the distribution of water bodies and assess soil wetness by analyzing the difference between the NIR and SWIR bands. Its calculation formula is as follows:
$NDWI = \dfrac{G - NIR}{G + NIR}$
where G represents the reflectance value of the green band, while N I R denotes the reflectance value of the NIR band. The NDWI values range from −1 to +1. Positive values generally indicate the presence of water bodies, which exhibit high reflectance in the green band and low reflectance in the near-infrared spectrum. In contrast, non-aquatic surfaces, such as vegetation and bare soil, display higher reflectance in the near-infrared spectrum, resulting in negative NDWI values.
(5)
EVI is an advanced vegetation index designed to improve upon the NDVI. EVI provides a more precise assessment of vegetation health and coverage. The formula for calculating EVI is as follows:
$EVI = C_3 \times \dfrac{NIR - RED}{NIR + C_1 \times RED - C_2 \times BLUE + L}$
where R E D denotes the reflectance of the red light spectrum, N I R represents the reflectance of the NIR spectrum, B L U E signifies the reflectance of the blue light spectrum, C 1 and C 2 are atmospheric correction coefficients, commonly set at 6 and 7.5, respectively, while C 3 is the gain factor, typically assigned a value of 2.5. L is a canopy background adjustment factor, generally set to 1. EVI values below zero typically indicate non-vegetative surfaces such as built structures or water bodies. Positive values, on the other hand, correspond to healthy vegetation, with higher values reflecting greater vegetation density and vitality.
(6)
SAVI is designed to mitigate the influence of soil background on vegetation monitoring, thereby enhancing the accuracy and stability of vegetation analysis, particularly in areas with low vegetation coverage. It is widely used in ecological and environmental research. The calculation formula for SAVI is as follows:
$SAVI = \dfrac{NIR - RED}{NIR + RED + L} \times (1 + L)$
where N I R represents the reflectance in the NIR band, RED refers to the reflectance in the red light band (visible spectrum), distinguishing it from the NIR band. L is the soil-adjustment factor, typically assigned a value of 0.5 to account for soil brightness. A negative SAVI value suggests the absence of vegetation in the observed area.
(7)
BAI is a simple yet effective index for detecting burned areas after wildfires. It measures the spectral distance between a given pixel and the characteristic reflectance values of burned surfaces in the red and NIR bands. A higher BAI value indicates a higher likelihood of fire-affected land. The formula for BAI is as follows:
$BAI = \dfrac{1}{(R - 0.1)^2 + (NIR - 0.06)^2}$
where N I R represents the reflectance of the NIR band, R represents the reflectance of the red light band. A higher BAI indicates a more severe fire impact on vegetation, whereas a lower BAI suggests healthier or unaffected vegetation.
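The indices above reduce to simple band arithmetic. The following NumPy sketch illustrates a few of them; the reflectance values are invented for illustration only and are not measurements from the study area:

```python
import numpy as np

def ndvi(nir, red):
    # (NIR - RED) / (NIR + RED): high for dense vegetation
    return (nir - red) / (nir + red)

def nbr(nir, swir):
    # (NIR - SWIR) / (NIR + SWIR): drops after burning
    return (nir - swir) / (nir + swir)

def savi(nir, red, L=0.5):
    # Soil-adjusted variant with soil-adjustment factor L
    return (nir - red) / (nir + red + L) * (1 + L)

def bai(nir, red):
    # Spectral distance to the reference burned-surface point (0.1, 0.06)
    return 1.0 / ((red - 0.1) ** 2 + (nir - 0.06) ** 2)

# Invented reflectance samples: healthy vegetation vs. a burned surface
veg = {"nir": 0.45, "red": 0.05, "swir": 0.20}
burn = {"nir": 0.15, "red": 0.12, "swir": 0.35}

ndvi_veg = ndvi(veg["nir"], veg["red"])
ndvi_burn = ndvi(burn["nir"], burn["red"])
```

Because the functions are pure arithmetic, they apply elementwise to whole band arrays just as well as to scalars.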

2.2.4. Making Labels

In this study, ArcGIS software (Arcmap 10.4.1) was used to create the labels for burned land, post-fire smoke, and vegetation to ensure that regions of interest were accurately and precisely defined. Specifically, the burned land label was generated by visually comparing pre-fire, fire-period, and post-fire Sentinel-2 imagery. Areas that appeared darkened or blackened in the fire-period imagery and showed clear spectral changes compared to pre-fire conditions were manually delineated as burned land. This process ensured a high level of spatial accuracy and thematic relevance. All target regions were digitized manually, and the shapefiles were edited and refined to ensure consistency and precision (The resulting annotated labels are illustrated in Figure 3). These vector shapefiles were subsequently used as the basis for sample extraction and feature separability analysis. This labeling method allowed the incorporation of reliable spatial data into the study while maintaining compatibility with the processing workflows on the GEE platform.

2.2.5. Distance-Based Feature Screening Method

The distance-based feature screening method evaluates the separability of remote sensing characteristics by analyzing statistical distances between classes. This study employs three measures to comprehensively screen characteristics from remote sensing data: the J-M distance, the discriminant index, and mutual information. Together, these measures provide a multi-perspective evaluation for choosing the best set of characteristics. The three measures are described in detail below.
  • J-M distance
There are numerous methods to calculate sample separability, including the J-M distance [55], Bhattacharyya distance [56], and others. Among these, the J-M distance offers an accurate measure of separability between samples. The calculation is defined as follows:
$D_{is} = 2\left(1 - e^{-B}\right)$
where D i s represents the J-M distance for a specific feature, and the range for the J-M distance is between 0 and 2, e is the base of the natural logarithm (Euler’s number), and B represents the Bhattacharyya distance, which quantifies the statistical separability between two classes. If the J-M distance is greater than 1, it indicates that the feature has strong separability between the two classes; if it is less than 1, it suggests that the feature has poor separability between the two classes.
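Under the common assumption that class samples are normally distributed, the Bhattacharyya distance B has a closed form and the J-M distance follows directly. The sketch below covers the univariate Gaussian case; the closed form for B is standard, but its use here is our illustration rather than the study's stated implementation:

```python
import numpy as np

def bhattacharyya_gaussian(m1, v1, m2, v2):
    """Bhattacharyya distance B between two univariate Gaussians,
    given their means (m1, m2) and variances (v1, v2)."""
    vm = 0.5 * (v1 + v2)
    return 0.125 * (m1 - m2) ** 2 / vm + 0.5 * np.log(vm / np.sqrt(v1 * v2))

def jm_distance(m1, v1, m2, v2):
    """Jeffries-Matusita distance D_is = 2(1 - exp(-B)), bounded in [0, 2]."""
    return 2.0 * (1.0 - np.exp(-bhattacharyya_gaussian(m1, v1, m2, v2)))
```

Identical class distributions give 0, while well-separated classes approach the upper bound of 2, consistent with the "greater than 1 means strong separability" rule of thumb above.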
  • Mutual Information
Mutual information [57,58] is a statistical measure that analyzes the correlation between two random variables. It quantifies the amount of information shared by the two variables and is primarily used in feature selection, natural language processing, and other fields. The specific calculation formula is as follows:
$I(X;Y) = \sum_{x \in X} \sum_{y \in Y} p(x,y) \ln \dfrac{p(x,y)}{p(x)\,p(y)}$
where $p(x,y)$ is the joint probability distribution of the random variables $X$ and $Y$, and $p(x)$ and $p(y)$ are the marginal probability distributions of $X$ and $Y$, respectively.
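For continuous band values, mutual information is typically estimated from a binned joint histogram. A minimal sketch follows; the bin count and this particular plug-in estimator are our assumptions, not details given in the study:

```python
import numpy as np

def mutual_information(x, y, bins=8):
    """Histogram plug-in estimate of I(X;Y) in nats."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()                      # joint p(x, y)
    px = pxy.sum(axis=1, keepdims=True)   # marginal p(x), column vector
    py = pxy.sum(axis=0, keepdims=True)   # marginal p(y), row vector
    nz = pxy > 0                          # skip empty cells (0 log 0 := 0)
    return (pxy[nz] * np.log(pxy[nz] / (px * py)[nz])).sum()
```

Independent samples yield values near zero (up to a small positive histogram bias), while strongly dependent samples yield large values.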
  • Discriminant Index
The discriminant index is commonly used to analyze the separability (or difference) between two sets of data [59]. It is primarily used for feature selection, category separability analysis, and other related tasks. The specific calculation formula is as follows:
$DI = \dfrac{|\mu_1 - \mu_2|}{\sqrt{\sigma_1^2 + \sigma_2^2}}$
where μ 1 and μ 2 are the means of the two datasets, and σ 1 2 and σ 2 2 are their respective variances.
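The discriminant index is straightforward to compute per feature from two class samples. A short sketch (the sample arrays below are illustrative, not data from the study):

```python
import numpy as np

def discriminant_index(a, b):
    """Absolute mean difference scaled by the pooled spread of two samples."""
    return abs(a.mean() - b.mean()) / np.sqrt(a.var() + b.var())

# Illustrative class samples: well separated vs. identical
a = np.array([0.0, 1.0, 2.0])
b = np.array([10.0, 11.0, 12.0])
di_separated = discriminant_index(a, b)
```

A large value means the class means differ by many pooled standard deviations; identical samples score zero.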

2.2.6. Comprehensive Selection of the Best Characteristics and Indices

To thoroughly evaluate the separability of remote sensing features, this study integrates three widely used statistical measures: J-M distance, the discriminant index, and mutual information. A weighted average approach is employed to combine these metrics into one single separability score for each feature. Although no universal standard exists for assigning weights to these measures, the distribution used here is based on the statistical properties and practical performance of each metric.
The J-M distance is given the highest weight (0.7) because it accounts for both the mean difference and the variance between classes, effectively capturing class overlap. It also incorporates the Bhattacharyya distance, making it particularly robust in situations with overlapping class distributions. This robustness has been widely recognized in remote sensing classification studies, where the J-M distance consistently demonstrates strong performance in evaluating feature separability [60,61].
The discriminant index is assigned a moderate weight (0.2), as it primarily emphasizes the difference in class means. Although it is less effective under non-Gaussian or overlapping conditions, it serves as a valuable complement to the J-M distance, particularly in cases where classes are linearly separable [62].
Mutual information receives a lower weight (0.1), as it measures the statistical dependency between features and class labels. While mutual information is capable of capturing nonlinear relationships, it is also sensitive to redundancy in high-dimensional feature spaces and can result in biased evaluations [63]. Therefore, mutual information is applied as a secondary criterion, particularly for the detection of redundant features. To maintain consistency in the interpretation of all metrics, reverse normalization is applied to mutual information scores—where lower values indicate stronger class separability due to greater independence.
Although the specific weight values are not derived from an established standard, they reflect the theoretical contributions and practical strengths of each metric. This structured weighting scheme enables the balanced integration of distance-based, mean-difference-based, and information-theoretic perspectives in feature evaluation. Furthermore, it offers a robust and interpretable framework that can be extended to other remote sensing feature selection tasks.
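The weighting scheme described above (0.7 for J-M distance, 0.2 for the discriminant index, 0.1 for reverse-normalized mutual information) could be combined as follows. Min-max normalization across features is our assumption for putting the three metrics on a common scale, and the per-feature score arrays are illustrative:

```python
import numpy as np

def combined_score(jm, di, mi, w=(0.7, 0.2, 0.1)):
    """Weighted separability score per feature (higher = more separable)."""
    def minmax(v):
        v = np.asarray(v, dtype=float)
        return (v - v.min()) / (v.max() - v.min())
    # Reverse-normalize mutual information: lower MI -> stronger separability
    return w[0] * minmax(jm) + w[1] * minmax(di) + w[2] * (1.0 - minmax(mi))

# Illustrative scores for three candidate features
jm = np.array([0.59, 0.08, 0.02])
di = np.array([1.18, 0.33, 0.19])
mi = np.array([0.0083, 0.0046, 0.0035])
scores = combined_score(jm, di, mi)
```

The feature ranking first on J-M distance and the discriminant index dominates the combined score even when it carries the highest mutual information, reflecting the weighting rationale above.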

3. Results

3.1. Results of Various Indicators for Post-Fire Smoke

The analysis of post-fire smoke reveals the usefulness of various remote sensing indices, spectral bands, and texture features. This part reports the separability, mutual information, and discriminant scores of these metrics, providing a clear comparison of their performance in detecting post-fire smoke.

3.1.1. J-M Distance Analysis of Remote Sensing Indices and Spectral Bands for Post-Fire Smoke

As shown in Figure 4a, the J-M distance analysis of remote sensing indices for post-fire smoke reveals significant differences in the separation capability of various indices. Among them, NDVI (0.5935) demonstrates the highest separation and is identified as the most effective remote sensing index for distinguishing post-fire smoke. Following this, the J-M distances for SAVI (0.0755) and NBR (0.0180) indicate a moderate ability to distinguish post-fire smoke. In contrast, DI, BAI, and NDWI exhibit low J-M distances, providing limited support in specific scenarios. Finally, EVI (0.0039) shows the weakest discrimination ability and is not recommended for post-fire smoke detection.
As shown in Figure 4b, the J-M distance analysis of the spectral bands reveals significant variation in their ability to distinguish smoke and vegetation samples. The B1 band demonstrates the highest J-M distance (0.7084), indicating strong separability. The B2 and B3 bands exhibit moderate performance, with J-M distances of 0.5604 and 0.2887, respectively. However, the J-M distances for bands B4 to B12 are relatively low; B7 shows the poorest performance, with a J-M distance of 0.0038, which limits its effectiveness in accurately separating the target samples.
Figure 4c shows the J-M distance results for the texture features of each spectral band. Among these, the J-M distance of the Corr feature in the B7 band is the highest (1.0384), indicating its strong sensitivity to smoke occlusion and vegetation response. Conversely, the J-M distance of Ent in the B2 band is the lowest (0.00005), demonstrating its poor ability to differentiate between smoke and vegetation.

3.1.2. Mutual Information Analysis of Remote Sensing Indices and Spectral Bands for Post-Fire Smoke

The mutual information index was used to analyze the remote sensing indices of smoke and vegetation after the fire (Figure 5a). The results revealed variations in how different indices described the independence between the two types of samples. BAI (0.0035) exhibited the lowest mutual information value, indicating the highest independence between smoke and vegetation area, making it the most suitable index for mutual information analysis. DI (0.0046) and EVI (0.0051) showed moderate independence, but their values were still higher than BAI by more than 0.001. In contrast, NDVI (0.0083) had the highest mutual information value, suggesting weaker independence between the two sample types, possibly due to distribution overlap, which may limit its classification effectiveness.
Figure 5b shows the mutual information analysis of spectral bands, revealing the B8 band has the lowest mutual information value (0.0043), indicating that smoke and vegetation are most independent in this band. In contrast, the B1 band exhibits the highest mutual information value (0.0154), suggesting weaker independence between the two types of samples and a greater amount of shared information.
In the mutual information analysis of texture features, the results show that the IDM (0.0002) in the B6 band has the lowest mutual information value, indicating the strongest distribution independence between categories, making it the most effective feature in the mutual information analysis. Following this, Contrast (0.0003) in the B6 band and Diss (0.0003) in the B5 band also demonstrate high independence. In contrast, features with higher mutual information values, such as ASM (0.0154) in the B1 band and Corr (0.01191) in the B6 band, exhibit strong distribution correlation between categories, suggesting that their ability to distinguish between the two types is limited. Figure 5c shows the mutual information scores ( M i ) of different textures for post-fire smoke.
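The mutual information scores ( M i ) between a feature and the class labels can be estimated with a simple histogram approach. A minimal sketch (the bin count and variable names are illustrative assumptions; the study's exact estimator may differ):

```python
import numpy as np

def mutual_information(feature, labels, bins=32):
    """Histogram-based mutual information (in nats) between a feature and
    binary class labels. Lower values indicate more independent class
    distributions, which the study treats as better separability."""
    joint, _, _ = np.histogram2d(feature, labels, bins=[bins, 2])
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal over feature bins
    py = pxy.sum(axis=0, keepdims=True)   # marginal over classes
    nz = pxy > 0                          # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(1)
labels = rng.integers(0, 2, 10000).astype(float)
independent = rng.normal(0.0, 1.0, 10000)            # shares no information
dependent = labels + 0.01 * rng.normal(size=10000)   # fully determined by class
```

Under this convention, a feature whose distribution is unrelated to the class labels scores near zero, while a feature fully determined by the labels approaches the label entropy (ln 2 for balanced binary classes).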

3.1.3. Discriminant Index Analysis of Remote Sensing Indices and Spectral Bands for Post-Fire Smoke

The remote sensing indices were analyzed using the discriminant index, and significant differences were observed across the values of various indices (Figure 6a). The discriminant index of NDVI was the highest (1.1825), indicating the largest mean difference between smoke and vegetation samples. This may be related to the extent of vegetation cover change after the fire. The discriminant index of SAVI followed at 0.3269, roughly 0.86 below that of NDVI. The discriminant indices of NBR, NDWI, and BAI were relatively low (0.1904, 0.1903, and 0.1640, respectively), suggesting that the mean differences between smoke and vegetation were small, limiting their ability to distinguish between the two types. Finally, the discriminant index of EVI was the lowest (0.0362), indicating that it almost failed to capture the mean difference between smoke and vegetation.
The analysis results of the spectral bands are shown in Figure 6b. The discriminant indices of the different bands vary significantly. Among them, the B1 band has the highest value (0.7244), indicating the strongest discrimination ability at the mean level. The discriminant indices of the B2 band (0.6563) and the B3 band (0.4614) are also relatively high, suggesting more distinct mean differences between smoke and vegetation samples. In contrast, the discriminant indices of the B6 and B7 bands are the lowest, at 0.0282 and 0.0611, respectively.
By analyzing the texture features of each band, the results show that the discriminant index of Corr in the B5 band is the highest (0.7359), indicating the largest mean difference between smoke and vegetation samples for this texture feature. Next, the discriminant index of ASM in the B1 band is 0.7243, which also reflects a significant mean difference. In contrast, the discriminant index of Diss in the B3 band is the lowest, at only 0.0053, suggesting that the Diss feature in this band has limited ability to distinguish between smoke and vegetation after a fire. Figure 6c shows the discriminant index of different textures for post-fire smoke.
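The discriminant index compares classes at the mean level. One common formulation, consistent with the mean-and-variance description in Section 4.4 (the study's exact normalisation is an assumption here), is:

```python
import numpy as np

def discriminant_index(x, y):
    """Mean-level separability: absolute mean difference normalised by the
    summed standard deviations (one common formulation; the paper's exact
    normalisation may differ)."""
    return abs(np.mean(x) - np.mean(y)) / (np.std(x) + np.std(y))

# Hypothetical reflectance samples: a one-unit mean shift at small spread
a = np.array([0.0, 0.1, 0.2, 0.3])
b = a + 1.0
```

Identical samples score 0, and the score grows as the mean gap widens relative to the within-class spread; note that it is blind to variance-only differences, a limitation revisited in Section 4.4.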

3.2. Results of Various Indicators of Burned Land

The assessment of indicators of burned land demonstrates the discriminative capacity of the remote sensing indices, spectral bands, and texture features in differentiating burned land from vegetation samples. This section therefore compares the performance of these metrics under the different evaluation criteria, with a view to their practical application in post-fire analysis.

3.2.1. J-M Distance Analysis of Remote Sensing Indices and Spectral Bands for Burned Land

The J-M distance analysis of the remote sensing indices shows that NDVI (1.9856) has the highest J-M distance, indicating the strongest ability to distinguish burned land from vegetation, with the most significant category distribution difference between the two. NBR (0.0897) also reflects a noticeable distribution difference between the two types of targets. In contrast, EVI (0.4507) has the lowest J-M distance, suggesting the weakest separation ability and a poor discrimination performance. Figure 7a shows the J-M distance of different remote sensing indices for burned land.
The J-M distance analysis of each spectral band reveals the following results: the B1 band (0.4575) has the highest J-M distance, indicating the greatest separability among all spectral bands. Bands such as B10 (0.1272), B3 (0.1142), and B7 (0.1122) fall within the middle range, indicating moderate separation capabilities. In comparison, the B11 band (0.0033) has the lowest J-M distance, showing the weakest separability. Figure 7b shows the J-M distance results for the different bands for burned land.
The J-M distance analysis of each texture feature (Figure 7c) shows the following results: the J-M distance of Corr (0.4953) in the B5 band is the largest, indicating the highest separability between the two types of samples. The J-M distances of Var (0.4612) in the B6 band and ASM (0.4575) in the B1 band, while slightly smaller than that of Corr in the B5 band, still indicate strong separability. In contrast, the J-M distance of Sent (0.0001) in the B4 band is the lowest, demonstrating the poorest separability between the two types of samples.

3.2.2. Mutual Information Analysis of Remote Sensing Indices and Spectral Bands for Burned Land

The mutual information calculation results for various remote sensing indices of burned land are as follows: NDVI (0.0678) has the lowest mutual information value, indicating that the two sample types calculated using this index have the strongest distribution independence and the best discrimination ability. In contrast, EVI (0.3834) and SAVI (0.3837) exhibit the highest mutual information values, suggesting a higher statistical dependence between the two sample types, with smaller distribution differences and relatively weaker distinction capabilities. Figure 8a shows the mutual information score ( M i ) of the different remote sensing indices for burned land.
The mutual information calculation results for each spectral band are shown in Figure 8b. While the results for each band show slight variation, some trends can still be observed. Among them, the mutual information value of the B1 band (0.4051) is the highest, indicating that there is significant shared information between the burned land and vegetation samples, with relatively low category independence and weaker discrimination ability. Conversely, the mutual information value of the B2 band (0.3799) is the lowest, suggesting stronger distribution independence and slightly better discrimination performance.
Figure 8c shows the mutual information calculation results for each texture feature across different spectral bands. The mutual information value of Ent (0.0333) in the B1 band is the lowest, indicating strong distribution independence between the two sample types and suggesting higher potential for class distinction. In contrast, SAVG (0.4699) in the B1 band has the highest mutual information value, indicating that this feature shares more information between the two sample types, with a strong class-distribution correlation and relatively poor discrimination ability.

3.2.3. Discriminant Index Analysis of Remote Sensing Indices and Spectral Bands for Burned Land

Figure 9a shows the calculation results of the discriminant index for each of the remote sensing indices applied to burned land. The discriminant index of NDVI is the highest (4.4364), significantly exceeding the other indices, indicating a large mean difference between the two sample types. This may be due to NDVI’s high sensitivity to changes in vegetation cover, effectively capturing the spectral differences between burned land areas and vegetation samples. Next, NBR and NDWI both have a discriminant index of 1.9916, indicating a good performance in distinguishing burned land from vegetation at the mean level. Finally, the discriminant index values of EVI, SAVI, and BAI are relatively low (0.2420, 0.2487, and 0.3247, respectively), indicating that the mean difference between the two sample types is small.
As shown in Figure 9b, the discriminant index for each spectral band varies considerably. Among them, the discriminant index of the B1 band is the highest (0.2152), indicating the largest mean difference between the two sample types in this band. The discriminant index of the B2 band (0.2096) is slightly lower than that of the B1 band, but it still shows a strong mean difference between the two sample types. In contrast, the discriminant index of the B11 band is the lowest (0.0063), indicating a very small mean difference between the two sample types, suggesting it may not effectively reflect the spectral differences between them.
The results of the discriminant index for each texture feature in each spectral band (Figure 9c) show that Corr in the B5 band has the highest value (0.2237), indicating the largest mean difference between the two sample types. Texture features such as Var (0.2177) in the B6 band and ASM (0.2152) in the B1 band also demonstrate a significant distinguishing effect on the two sample types at the mean level. However, SVAR and Sent in the B4 band have the lowest values (nearly 0.0005), indicating the weakest discrimination ability at the mean level.

3.3. Comprehensive Selection

The results indicate that, among remote sensing indices, NDVI achieves the highest comprehensive score (0.900) for distinguishing post-fire smoke, significantly outperforming other indices (Figure 10a). This is attributed to NDVI’s notable J-M distance (0.5936) and high discriminant index (1.1825). Despite its relatively high mutual information value (0.0083), NDVI demonstrates a strong capability to separate post-fire smoke and vegetation samples, making it the most effective index in this study. Conversely, indices such as BAI and DI have low comprehensive scores, with EVI scoring the lowest (0.0664). This highlights EVI’s poor separation ability, rendering it unsuitable for distinguishing post-fire smoke.
Among the spectral bands, the B1 band achieves the highest comprehensive score (0.900), with its J-M distance (0.7084) and discriminant index (0.7244) being the highest among all bands. Although its mutual information value (0.0154) is the highest among the bands, its strength on the other two measures makes it the best performing spectral band in this study (Figure 10b).
Among the texture features, ASM in the B1 band achieves the highest comprehensive score (0.8380). Its discriminant index (0.7243) is among the highest of all texture features, and although its mutual information value (0.0154) is comparatively high, its overall balance across the three measures indicates that ASM in the B1 band has the strongest ability to distinguish between post-fire smoke and vegetation samples, making it the best-performing texture feature in this study (Figure 10c).
For burned land, in terms of remote sensing indices, NDVI achieves the highest comprehensive score (1.0000), with a J-M distance (1.9856) and discriminant index (4.4364) significantly surpassing those of other indices (Figure 11a). Additionally, it has the smallest mutual information value (0.0678). These results demonstrate that NDVI has the strongest separation ability for distinguishing burned land from vegetation, making it the primary remote sensing index for burned land identification.
In terms of spectral bands, the B2 band achieves the highest comprehensive score (0.9436), with a J-M distance of 0.4245, a discriminant index of 0.2096, and a mutual information value of 0.3799. This band excels in separating burned land from vegetation, combining a high J-M distance and discriminant index with low mutual information, and is consequently identified as the most effective spectral band for this purpose (Figure 11b).
In terms of texture features (Figure 11c), Corr in the B5 band achieves the highest comprehensive score (0.9173), with a J-M distance of 0.4953, a discriminant index of 0.2237, and a mutual information value of 0.3938. This feature demonstrates a well-balanced performance across all indicators, exhibiting the strongest separation ability. It is identified as the most effective texture feature for distinguishing burned land traces from vegetation.
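The comprehensive scores combine the three measures onto a common [0, 1] scale. Since the exact weighting is not restated in this section, the sketch below is a hedged reconstruction: min-max normalise each metric across the candidate features, invert the mutual-information column (lower is better), and average.

```python
import numpy as np

def comprehensive_score(jm, di, mi):
    """Hedged reconstruction of a combined score: min-max normalise each
    metric across features, invert mutual information (lower = better),
    then take the unweighted mean. Not necessarily the paper's weighting."""
    def norm(v):
        v = np.asarray(v, dtype=float)
        span = v.max() - v.min()
        return (v - v.min()) / span if span > 0 else np.zeros_like(v)
    return (norm(jm) + norm(di) + (1.0 - norm(mi))) / 3.0
```

Under this scheme, a feature that dominates all three normalised metrics scores exactly 1.0, mirroring NDVI's comprehensive score of 1.0000 for burned land.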

4. Discussion

4.1. Post-Fire Smoke

The separability, mutual information, and discriminant index results differ across features in the analysis of post-fire smoke. NDVI is noted to be the best index for analyzing post-fire smoke, consistent with findings from Zhang et al. and Colkesen et al. [41,64], who reported that NDVI offered the highest contrast between burned and unburned areas in smoke-affected environments. This superiority is due to the strong vegetation response to fire incidents, which creates a sharp spectral contrast between unscathed vegetation and smoke-affected areas. NDVI makes this contrast highly visible, and such sensitivity could enable rapid notice of smoke occurrence [64]. The other indices, EVI and NDWI, show poor performances in terms of the J-M distance and discriminant index because they are less sensitive to the fine spectral variations induced by smoke obstruction. For instance, high-aerosol environments restrict the utility of EVI because smoke dispersion affects both the red and NIR bands [65]. Moreover, the moisture sensitivity of NDWI is not straightforwardly affected by post-fire smoke [66].
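The NDVI referenced throughout is the standard normalised difference of NIR and red reflectance; for Sentinel-2 these correspond to bands B8 and B4. A minimal sketch (the array names are illustrative):

```python
import numpy as np

def ndvi(nir, red):
    """NDVI = (NIR - Red) / (NIR + Red); for Sentinel-2, NIR is band B8
    and Red is band B4. Returns 0 where both inputs are 0."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    denom = nir + red
    out = np.zeros_like(denom)
    np.divide(nir - red, denom, out=out, where=denom != 0)
    return out
```

Healthy canopy yields strongly positive values, while char, soil, and smoke-obscured surfaces push NDVI toward or below zero, producing the sharp contrast described above.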
Spectral band analysis shows that the B1 band (coastal aerosol) plays the most important role in differentiating smoke from other classes, as indicated by its high J-M distance and discriminant index values [67]. The performance of B1 aligns with previous studies by Vandansambuu et al. [68], who demonstrated the utility of coastal aerosol bands in detecting thin smoke layers. The B1 band's sensitivity to fine aerosol particles probably explains its excellent performance. Other bands, such as B2 (blue) and B3 (green), also show fairly good results, probably because of their sensitivity to the scattering effect caused by smoke. Longer-wavelength bands (for example, B7 and B12) are not effective because they are less sensitive to aerosols and more sensitive to surface reflectance. Texture features are also very important for the analysis of post-fire smoke. Features such as Corr and ASM in the B1 band show better discrimination capability and are well suited to detecting the spatial heterogeneity caused by smoke occlusion. For instance, the high J-M distance of Corr in B7 confirms that smoke disrupts the spatial coherence of vegetation, producing strong texture differentiation.

4.2. Burned Land

For burned land, NDVI again proves to be the most efficient remote sensing index. Its high J-M distance and discriminant index reflect its sensitivity to vegetation-cover changes, which are particularly pronounced in burned areas. The stark contrast in NDVI values between burned land and surrounding vegetation stems from the significant loss of chlorophyll and changes in surface reflectance properties. Thus, NDVI is an excellent criterion for detecting boundaries in burned land and assessing fire effects [69]. Other indices, such as NBR and NDWI, show a moderate ability to differentiate burned land. NBR is specifically designed for post-fire analysis because it exploits the contrasting NIR and SWIR reflectance responses of burned areas. While its performance is not as strong as NDVI's, it offers additional information that could help to improve burned land detection in multi-index analysis. Indices such as EVI and SAVI are less effective, as they respond more to soil background effects and less to the spectral changes caused by burning [70].
Spectral band analysis shows that the B2 (blue) and B1 (coastal aerosol) bands are the most effective in discriminating burned land. The strong performance of the B2 band likely arises from its sensitivity to charred surfaces, which exhibit distinctive reflectance in the blue spectrum. In contrast, longer-wavelength bands such as B11 (SWIR) show poor discrimination because their reflectance is less influenced by burn severity and more related to background soil and vegetation. Texture features further refine the analysis of burned land. For instance, Corr in the B5 band and Var in the B6 band exhibit strong discriminative power, effectively highlighting the spatial irregularities caused by fire-induced surface damage. These texture features are particularly sensitive to abrupt changes in vegetation structure and canopy loss, which are characteristic of burned areas. This observation aligns with findings from Gitas et al. [71], who reported that texture metrics derived from middle-infrared bands significantly enhance fire scar detection by capturing spatial heterogeneity.
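The texture features discussed here (ASM, Contrast, Corr, Var, and so on) are Haralick statistics of the grey-level co-occurrence matrix (GLCM). A minimal sketch of two of them, for a single horizontal offset (the quantisation level and offset are illustrative; production code would typically use `skimage.feature.graycomatrix`):

```python
import numpy as np

def glcm_features(img, levels=8):
    """Build a GLCM over horizontal neighbour pairs and return two Haralick
    features used in the paper: ASM and Contrast. Minimal sketch only."""
    img = np.asarray(img, dtype=float)
    # Quantise the image to `levels` grey levels
    if img.max() > 0:
        q = np.floor(img / img.max() * (levels - 1)).astype(int)
    else:
        q = np.zeros(img.shape, dtype=int)
    glcm = np.zeros((levels, levels))
    for i, j in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        glcm[i, j] += 1          # count co-occurring grey-level pairs
    p = glcm / glcm.sum()        # normalise to a joint probability table
    idx = np.arange(levels)
    asm = float(np.sum(p ** 2))  # angular second moment: high for uniform texture
    contrast = float(np.sum((idx[:, None] - idx[None, :]) ** 2 * p))
    return asm, contrast
```

A flat patch gives ASM = 1 and Contrast = 0, while a checkerboard gives low ASM and high Contrast; fire scars introduce exactly this kind of spatial irregularity into otherwise homogeneous canopy.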

4.3. Final Feature Selection and Implications

The final feature selection process integrates results from all three evaluation metrics—J-M distance, mutual information, and discriminant index—to identify the most robust characteristics for post-fire smoke and burned land analysis. For post-fire smoke, NDVI, B1, and ASM (in B1) emerge as the top characteristics, corresponding to phenomena such as the vegetation response (NDVI), aerosol scattering (B1), and the spatial heterogeneity caused by smoke (ASM). For burned land, the selected characteristics include NDVI, B2, and Corr (in B5). NDVI's strong sensitivity to vegetation loss, B2's ability to detect charred surfaces, and Corr's capacity to capture spatial irregularities make them highly effective in delineating burned land boundaries. These findings underscore the importance of combining spectral, index-based, and texture-based features for comprehensive post-fire analysis. Each feature captures a different aspect of fire impact, from vegetation response to surface texture changes, enabling a multi-perspective understanding of fire effects. This approach not only improves the accuracy of fire damage assessment but also provides insights into the underlying physical and ecological processes associated with fire events.

4.4. Discrepancies Between Discriminant Index and Other Measures

Some discrepancies were observed when the separability of the remote sensing features was evaluated using the J-M distance, discriminant index, and mutual information: the discriminant index results sometimes differ from those of the J-M distance and mutual information. These discrepancies arise because each metric evaluates feature separability in a fundamentally different way.
The discriminant index relies on the mean and variance differences between classes, assuming Gaussian-like distributions and linear separability, whereas real-world post-fire data, particularly for post-fire smoke and partially burned land, are often skewed or overlapping, making mean-based comparisons insufficient. In contrast, the J-M distance incorporates the Bhattacharyya distance, which accounts for both mean and variance (and, in the multivariate case, covariance structure), making it more robust for complex distributions. Mutual information, by contrast, is a non-parametric measure that reflects statistical dependence rather than distance and is often more sensitive to overlapping classes with different entropy profiles.
For instance, in the case of EVI and NDWI for post-fire smoke, the discriminant index showed fairly low discrimination, whereas their mutual information values suggested greater independence. This may be due to non-Gaussian distributions in those features, since smoke creates subtle or even nonlinear spectral effects that mean differences cannot capture. These contradictions underscore the need for multiple complementary metrics: integrating the J-M distance (distance-based) with the discriminant index (mean-variance-based) and mutual information (information-theoretic) provides a more balanced and reliable feature evaluation.
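The failure mode described above can be demonstrated on synthetic data: two classes with identical means but different variances are invisible to a mean-based discriminant index yet clearly separable by the J-M distance (the numbers below are purely illustrative, not the study's data):

```python
import numpy as np

rng = np.random.default_rng(42)
# Two hypothetical feature distributions: same mean, very different spread
tight = rng.normal(0.0, 0.1, 20000)
broad = rng.normal(0.0, 2.0, 20000)

# Mean-based discriminant index (|mean diff| / summed stds) sees almost nothing
di = abs(tight.mean() - broad.mean()) / (tight.std() + broad.std())

# The J-M distance, built on the Bhattacharyya distance, also compares
# variances and therefore flags the classes as separable
v1, v2 = tight.var(), broad.var()
b = (0.125 * (tight.mean() - broad.mean()) ** 2 / ((v1 + v2) / 2)
     + 0.5 * np.log((v1 + v2) / (2 * np.sqrt(v1 * v2))))
jm = 2 * (1 - np.exp(-b))
```

Here `di` is near zero while `jm` clearly exceeds 1, mirroring the kind of EVI/NDWI disagreement noted above and motivating the use of all three metrics together.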

4.5. Justification for Not Using Model-Based Feature Selection Methods

In this study, we intentionally avoided model-based feature selection methods such as random forest, XGBoost, Lasso regression, or Recursive Feature Elimination (RFE). This decision aligns with the primary objective of the study: to evaluate the intrinsic, model-independent separability of individual remote sensing features for bushfire-related analysis. In the following paragraphs, we explain the rationale behind this methodological choice.
First, feature importance measures derived from machine-learning models are inherently model-dependent. They reflect the marginal contribution of a feature within the context of a specific model structure and input feature set. As a result, their outcomes can vary significantly across different algorithms and are often sensitive to feature redundancy, interactions, and even hyperparameter tuning. The rankings produced by these methods indicate the potential of features to enhance model performance, but they do not necessarily reflect a feature’s standalone ability to discriminate between classes.
Second, model-driven feature importance scores are typically intended to explain model predictions, not to assess the inherent separability or discriminative power of individual variables. In other words, such methods are designed for post-model interpretation, whereas our study focuses on the pre-modeling analysis of feature behavior. In contrast, the distance-based metrics used in this study—the J-M distance, discriminant index, and mutual information—allow us to evaluate individual features independently of any model structure or inter-feature dependencies. These metrics assess features purely based on statistical class separability and information relevance.
Third, our goal is to identify physically meaningful and transferable factors whose separability arises from their inherent spectral, index-based, and textural responses to fire events. Machine-learning methods are often treated as black-box models, and their feature importance scores typically lack a direct connection to physical interpretation or generalizability across different scenarios. A feature that ranks highly in one model trained on a specific dataset may not demonstrate the same relevance in other contexts—particularly if its “importance” is driven by complex interactions rather than intrinsic properties. Therefore, employing model-based feature selection would not align with the central objective of this study, which is to provide a clear, quantitative assessment of single-variable separability prior to modeling. While we fully acknowledge the value of machine-learning–based feature importance methods—particularly for building predictive models—such approaches are more appropriate for future work where classification or regression performance is the primary goal.

4.6. Limitations and Future Directions

This study provides valuable insights into the separability of remote sensing features for bushfire analysis but also has several limitations, which we acknowledge and plan to address in future work.
First, the study does not use field-validated ground truth datasets due to the logistical and temporal challenges of collecting adequate reference data over a large and diverse region like the Blue Mountains. Furthermore, since this study focuses on pre-modeling, feature-level evaluation rather than supervised classification, the need for detailed ground truth is reduced. Instead, we employed expert-guided manual labeling, supported by visual cross-checking of post-fire satellite imagery—a well-established practice in remote sensing when field data are unavailable. Nevertheless, incorporating standardized and validated reference datasets in future work would enhance the statistical rigor and generalizability of the findings.
Second, the five-day revisit time of Sentinel-2 limits the ability to capture active fires at the moment they occur. However, detecting active fires is not the goal of this study. Rather, we aim to analyze how remote sensing features behave in smoke-affected and burned areas following a fire event. From this perspective, the high spatial and spectral resolution of Sentinel-2 imagery makes it a suitable and widely used source for post-fire environmental analysis.
Third, the selection of sample areas—burned land, vegetation, and post-fire smoke—was based on visual interpretation of pre- and post-fire images, supplemented by auxiliary data and fire event records. While we represented a range of fire impacts, we did not conduct a detailed measurement of fire severity. Future studies will incorporate severity classifications (e.g., low, moderate, and high) to more clearly differentiate the effects of varying fire intensities.
Finally, this study does not aim to develop classification or detection models but instead offers a complementary perspective by evaluating feature behavior prior to modeling. The outcomes are intended to inform the design of more efficient, interpretable, and physically meaningful feature sets for future fire mapping and detection systems.
Future research could explore integrating the identified features into structured predictive frameworks that extend beyond independent evaluations. For example, models such as flow-based spatiotemporal predictors [72] or diffusion models with deterministic priors [73] present principled approaches to capturing temporal dynamics and uncertainty in environmental processes.

5. Conclusions

This study conducted a detailed evaluation of the separability of remote sensing features—including spectral bands, remote sensing indices, and texture features—for analyzing post-fire smoke and burned land using Sentinel-2 imagery. Three complementary separability measures—J-M distance, discriminant index, and mutual information—were employed to systematically assess the discriminative capability of each feature type.
The results show that different feature types capture distinct aspects of fire-related surface changes. Spectral bands reflect variations in surface reflectance; remote sensing indices enhance the interpretation of vegetation condition and burn signals; and texture features capture spatial heterogeneity. Our evaluation identified a small subset of highly separable features from each category that effectively distinguished fire-affected areas from vegetation, demonstrating their potential utility in fire mapping and post-disaster assessment.
This multi-metric separability analysis not only facilitates the identification of the most informative features but also helps to reduce redundancy and improve the efficiency of feature selection in fire detection models. The proposed methodology can be extended to other environmental monitoring applications that require robust and interpretable feature evaluation.

Author Contributions

Conceptualization, Z.Y. and H.A.-N.; methodology, Z.Y. and H.A.-N.; software, Z.Y.; validation, Z.Y. and H.A.-N.; formal analysis, Z.Y.; investigation, Z.Y., H.A.-N. and B.K.; resources, H.A.-N. and G.B.; writing—original draft preparation, Z.Y.; writing—review and editing, H.A.-N., B.K., M.Z. and N.U.; visualization, Z.Y., H.A.-N. and B.K.; supervision, H.A.-N., G.B. and B.K.; project administration, H.A.-N.; funding acquisition, B.K. and N.U. All authors have read and agreed to the published version of the manuscript.

Funding

This research is supported by the RIKEN Center for Advanced Intelligence Project (AIP), Tokyo, Japan.

Data Availability Statement

Processed data were derived at University of Technology Sydney, Australia. Derived data supporting the findings of this study are available from the project’s administration on request.

Acknowledgments

The authors would like to thank the School of Computer Science, Faculty of Engineering and Information Technology, University of Technology, Sydney, and the RIKEN Centre for Advanced Intelligence Project (AIP), Tokyo, Japan, for providing all facilities during this research.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Sharples, J.J.; Cary, G.J.; Fox-Hughes, P.; Mooney, S.; Evans, J.P.; Fletcher, M.S.; Baker, P. Natural hazards in Australia: Extreme bushfire. Clim. Change 2016, 139, 85–99. [Google Scholar] [CrossRef]
  2. Cheney, N.P. Bushfire disasters in Australia, 1945–1975. Aust. For. 1976, 39, 245–268. [Google Scholar] [CrossRef]
  3. Atedhor, G.O.; Atedhor, C.N. Contexts of sustainable development risk in climate change and anthropogenic-induced bushfire in Nigeria. Sci. Afr. 2024, 25, e02271. [Google Scholar] [CrossRef]
  4. Robinne, F.N.; Secretariat, F. Impacts of Disasters on Forests, in Particular Forest Fires. UNFFS Background Paper. 2021. Available online: https://www.un.org/esa/forests/wp-content/uploads/2021/08/UNFF16-Bkgd-paper-disasters-forest-fires_052021.pdf (accessed on 11 October 2024).
  5. Udy, D.G.; Vance, T.R.; Kiem, A.S.; Holbrook, N.J.; Abram, N. Australia’s 2019/20 Black Summer fire weather exceptionally rare over the last 2000 years. Commun. Earth Environ. 2024, 5, 317. [Google Scholar] [CrossRef]
  6. Steffen, W.; Hughes, L.; Pearce, A. The Heat Is On: Climate Change, Extreme Heat, and Bushfires in Western Australia. Climate Council of Australia. 2015. Available online: https://researchers.mq.edu.au/files/72529918/72529888.pdf (accessed on 14 October 2024).
7. Mu, M.; Sabot, M.E.; Ukkola, A.M.; Rifai, S.W.; De Kauwe, M.G.; Hobeichi, S.; Pitman, A.J. Examining the role of biophysical feedback on simulated temperature extremes during the Tinderbox Drought and Black Summer bushfires in Southeast Australia. Weather Clim. Extrem. 2024, 45, 100703.
8. Baranowski, K.; Faust, C.; Eby, P.; Bharti, N. Quantifying the impact of severe bushfires on biodiversity to inform conservation. Res. Sq. 2020, preprint, version 1.
9. Allison, R.S.; Johnston, J.M.; Craig, G.; Jennings, S. Airborne optical and thermal remote sensing for wildfire detection and monitoring. Sensors 2016, 16, 1310.
10. Wang, G.; Li, H.; Ye, S.; Zhao, H.; Ding, H.; Xie, S. RFWNet: A Multi-scale Remote Sensing Forest Wildfire Detection Network with Digital Twinning, Adaptive Spatial Aggregation, and Dynamic Sparse Features. IEEE Trans. Geosci. Remote Sens. 2024, 62, 4708523.
11. Veraverbeke, S.; Dennison, P.; Gitas, I.; Hulley, G.; Kalashnikova, O.; Katagis, T.; Stavros, N. Hyperspectral remote sensing of fire: State-of-the-art and future perspectives. Remote Sens. Environ. 2018, 216, 105–121.
12. Lu, D.; Mausel, P.; Brondizio, E.; Moran, E. Change detection techniques. Int. J. Remote Sens. 2004, 25, 2365–2401.
13. Zhang, Z.; Zhu, L. A review on unmanned aerial vehicle remote sensing: Platforms, sensors, data processing methods, and applications. Drones 2023, 7, 398.
14. Zhou, T.; Li, Z.; Pan, J. Multi-feature classification of multi-sensor satellite imagery based on dual-polarimetric Sentinel-1A, Landsat-8 OLI, and Hyperion images for urban land-cover classification. Sensors 2018, 18, 373.
15. Jin, H.; Mountrakis, G.; Stehman, S.V. Assessing integration of intensity, polarimetric scattering, interferometric coherence and spatial texture metrics in PALSAR-derived land cover classification. ISPRS J. Photogramm. Remote Sens. 2014, 98, 70–84.
16. Guo, A.; Huang, W.; Ye, H.; Dong, Y.; Ma, H.; Ren, Y.; Ruan, C. Identification of wheat yellow rust using spectral and texture features of hyperspectral images. Remote Sens. 2020, 12, 1419.
17. Tuominen, S.; Pekkarinen, A. Performance of different spectral and textural aerial photograph features in multi-source forest inventory. Remote Sens. Environ. 2005, 94, 256–268.
18. Bolstad, P.; Lillesand, T.M. Rapid maximum likelihood classification. Photogramm. Eng. Remote Sens. 1991, 57, 67–74.
19. Wacker, A.G.; Landgrebe, D.A. Minimum distance classification in remote sensing. LARS Tech. Rep. 1972, 25. Available online: https://docs.lib.purdue.edu/cgi/viewcontent.cgi?article=1024&context=larstech (accessed on 21 November 2024).
20. Mouat, D.A.; Mahin, G.G.; Lancaster, J. Remote sensing techniques in the analysis of change detection. Geocarto Int. 1993, 8, 39–50.
21. Li, X.; Yeh, A.G.O. Principal component analysis of stacked multi-temporal images for the monitoring of rapid urban expansion in the Pearl River Delta. Int. J. Remote Sens. 1998, 19, 1501–1518.
22. Sader, S.; Stone, T.; Joyce, A. Remote sensing of tropical forests—An overview of research and applications using non-photographic sensors. Photogramm. Eng. Remote Sens. 1990, 56, 1343–1351.
23. Vogelmann, J.E.; Rock, B.N. Use of Thematic Mapper data for the detection of forest damage caused by the pear thrips. Remote Sens. Environ. 1989, 30, 217–225.
24. Walter, V. Object-based classification of remote sensing data for change detection. ISPRS J. Photogramm. Remote Sens. 2004, 58, 225–238.
25. Erener, A.; Düzgün, H.S. A methodology for land use change detection of high-resolution pan images based on texture analysis. Ital. J. Remote Sens. 2009, 41, 47–59.
26. Pal, M. Factors Influencing the Accuracy of Remote Sensing Classifications: A Comparative Study. Ph.D. Thesis, University of Nottingham, Nottingham, UK, 2002. Available online: https://www.academia.edu/download/3460778/full_phd.pdf (accessed on 29 November 2024).
27. Du, P.; Chen, Y. Some key techniques on updating spatial data infrastructure by satellite remote sensing imagery. In Proceedings of the ISPRS Workshop on Service and Application of Spatial Data Infrastructure; ISPRS: Hangzhou, China, 2007; Volume XXXVI(4/W6), pp. 209–220.
28. Rodarmel, C.; Shan, J. Principal component analysis for hyperspectral image classification. Surv. Land Inf. Sci. 2002, 62, 115–122.
29. Pal, M.; Mather, P.M. Support vector machines for classification in remote sensing. Int. J. Remote Sens. 2005, 26, 1007–1011.
30. Zhu, L.; Zhang, Y.; Wang, J.; Tian, W.; Liu, Q.; Ma, G.; Chu, Y. Downscaling snow depth mapping by fusion of microwave and optical remote sensing data based on deep learning. Remote Sens. 2021, 13, 584.
31. Zhao, W.; Du, S. Spectral–spatial feature extraction for hyperspectral image classification: A dimension reduction and deep learning approach. IEEE Trans. Geosci. Remote Sens. 2016, 54, 4544–4554.
32. Lang, P.; Fu, X.; Dong, J.; Yang, H.; Yin, J.; Yang, J.; Martorella, M. Recent advances in deep learning-based SAR image targets detection and recognition. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2025, 18, 6884–6915.
33. Niu, K.; Wang, C.; Xu, J.; Liang, J.; Zhou, X.; Wen, K.; Yang, C. Early forest fire detection with UAV image fusion: A novel deep learning method using visible and infrared sensors. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2025, 18, 6617–6629.
34. Park, S.; Im, J. Deep Learning-Based Wildfire Detection and Transferability Verification. In Proceedings of the Remote Sensing for Agriculture, Ecosystems, and Hydrology XXVI; SPIE: Bellingham, WA, USA, 2024; Volume 13191, p. 131910W. Available online: https://www.spiedigitallibrary.org/conference-proceedings-of-spie/13191/131910W/Deep-learning-based-wildfire-detection-and-transferability-verification/10.1117/12.3031320.short (accessed on 27 February 2025).
35. Singh, H.; Srivastava, S.K. Identification of forest fire-prone region in Lamington National Park using GIS-based multicriteria technique: Validation using field and Sentinel-2-based observations. Geocarto Int. 2025, 40, 2462484.
36. Chen, X.; Zhang, Y.; Wang, S.; Zhao, Z.; Liu, C.; Wen, J. Comparative study of machine learning methods for mapping forest fire areas using Sentinel-1B and 2A imagery. Front. Remote Sens. 2024, 5, 1446641.
37. Zhang, P.; Hu, X.; Ban, Y.; Nascetti, A.; Gong, M. Assessing Sentinel-2, Sentinel-1, and ALOS-2 PALSAR-2 data for large-scale wildfire-burned area mapping: Insights from the 2017–2019 Canada Wildfires. Remote Sens. 2024, 16, 556.
38. Zhang, P.; Ban, Y.; Nascetti, A. Learning U-Net without forgetting for near real-time wildfire monitoring by the fusion of SAR and optical time series. Remote Sens. Environ. 2021, 261, 112467.
39. Gibson, R.K.; Mitchell, A.; Chang, H.C. Image texture analysis enhances classification of fire extent and severity using Sentinel 1 and 2 satellite imagery. Remote Sens. 2023, 15, 3512.
40. Yilmaz, E.O.; Kavzoglu, T. Burned area detection with Sentinel-2A data: Using deep learning techniques with explainable artificial intelligence. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2024, X-5-2024, 251–257.
41. Fan, J.; Yao, Y.; Tang, Q.; Zhang, X.; Xu, J.; Yu, R.; Zhang, L. A hybrid index for monitoring burned vegetation by combining image texture features with vegetation indices. Remote Sens. 2024, 16, 1539.
42. Powell, J.M. An Historical Geography of Modern Australia: The Restive Fringe; Cambridge University Press: Cambridge, UK, 1998. Available online: https://books.google.com/books?hl=en&lr=&id=NU5-E-suFwcC&oi=fnd&pg=PR7 (accessed on 24 December 2024).
43. Bryant, A.; Sheppard, C. The Blue Mountains: Exploring Landscapes, Geology, and Ecology; New South Wales Press: Sydney, Australia, 2013. Available online: https://www.tandfonline.com/doi/abs/10.1080/03122417.2020.1823086 (accessed on 22 December 2024).
44. Haralick, R.M.; Shanmugam, K.; Dinstein, I. Textural features for image classification. IEEE Trans. Syst. Man Cybern. 1973, SMC-3, 610–621.
45. Zubair, A.R.; Alo, O.A. Grey level co-occurrence matrix (GLCM) based second order statistics for image texture analysis. arXiv 2024, arXiv:2403.04038. Available online: https://arxiv.org/abs/2403.04038 (accessed on 15 December 2024).
46. Fonseca, A.; Marshall, M.T.; Salama, S. Enhanced detection of artisanal small-scale mining with spectral and textural segmentation of Landsat time series. Remote Sens. 2024, 16, 1749.
47. Fonseca Gomez, A. Detecting Artisanal Small-Scale Gold Mines with LandTrendr Multispectral and Textural Features at the Tapajós River Basin, Brazil. Master’s Thesis, University of Twente, Enschede, The Netherlands, 2021. Available online: http://essay.utwente.nl/88728/1/fonsecagomez.pdf (accessed on 16 December 2024).
48. Rouse, J.W.; Haas, R.H.; Schell, J.A.; Deering, D.W. Monitoring vegetation systems in the Great Plains with ERTS. NASA Spec. Publ. 1973, 351, 309–317. Available online: https://books.google.com/books?hl=en&lr=&id=cl42FB2_UEcC&oi=fnd&pg=PA309 (accessed on 11 December 2024).
49. Chuvieco, E.; Congalton, R.G. Application of remote sensing and geographic information systems to forest fire hazard mapping. Remote Sens. Environ. 1989, 29, 147–159.
50. Escuin, S.; Navarro, R.; Fernández, P. Fire severity assessment by using NBR (Normalized Burn Ratio) and NDVI (Normalized Difference Vegetation Index) derived from LANDSAT TM/ETM images. Int. J. Remote Sens. 2008, 29, 1053–1073.
51. Gao, B.C. NDWI—A normalized difference water index for remote sensing of vegetation liquid water from space. Remote Sens. Environ. 1996, 58, 257–266.
52. Huete, A.; Didan, K.; Miura, T.; Rodriguez, E.P.; Gao, X.; Ferreira, L.G. Overview of the radiometric and biophysical performance of the MODIS vegetation indices. Remote Sens. Environ. 2002, 83, 195–213.
53. Huete, A.R. A soil-adjusted vegetation index (SAVI). Remote Sens. Environ. 1988, 25, 295–309.
54. Matsushita, B.; Yang, W.; Chen, J.; Onda, Y.; Qiu, G. Sensitivity of the enhanced vegetation index (EVI) and normalized difference vegetation index (NDVI) to topographic effects: A case study in high-density cypress forest. Sensors 2007, 7, 2636–2651.
55. Dokmanic, I.; Parhizkar, R.; Ranieri, J.; Vetterli, M. Euclidean distance matrices: Essential theory, algorithms, and applications. IEEE Signal Process. Mag. 2015, 32, 12–30.
56. Choi, E.; Lee, C. Feature extraction based on the Bhattacharyya distance. Pattern Recognit. 2003, 36, 1703–1713.
57. Mahjabeen, W.; Alam, S.; Hassan, U.; Zafar, T. Difficulty index, discrimination index and distractor efficiency in multiple choice questions. Ann. PIMS-Shaheed Zulfiqar Ali Bhutto Med. Univ. 2017, 13, 9–12.
58. Shannon, C.E. A mathematical theory of communication. Bell Syst. Tech. J. 1948, 27, 379–423.
59. Kraskov, A.; Stögbauer, H.; Grassberger, P. Estimating mutual information. Phys. Rev. E 2004, 69, 066138.
60. Colkesen, I.; Kavzoglu, T.; Sefercik, U.G.; Ozturk, M.Y. Automated mucilage extraction index (AMEI): A novel spectral water index for identifying marine mucilage formations from Sentinel-2 imagery. Int. J. Remote Sens. 2023, 44, 105–141.
61. Zhang, X.; Lin, X.; Fu, D.; Wang, Y.; Sun, S.; Wang, F.; Shi, Y. Comparison of the applicability of JM distance feature selection methods for coastal wetland classification. Water 2023, 15, 2212.
62. Morales, G.; Sheppard, J.W.; Logan, R.D.; Shaw, J.A. Hyperspectral dimensionality reduction based on inter-band redundancy analysis and greedy spectral selection. Remote Sens. 2021, 13, 3649.
63. Bennasar, M.; Hicks, Y.; Setchi, R. Feature selection using joint mutual information maximisation. Expert Syst. Appl. 2015, 42, 8520–8532.
64. Horton, D.; Johnson, J.T.; Baris, I.; Jagdhuber, T.; Bindlish, R.; Park, J.; Al-Khaldi, M.M. Wildfire threshold detection and progression monitoring using an improved radar vegetation index in California. Remote Sens. 2024, 16, 3050.
65. Huete, A.; Justice, C.; Van Leeuwen, W. MODIS vegetation index (MOD13). Algorithm Theor. Basis Doc. 1999, 3, 295–309. Available online: https://www.cen.uni-hamburg.de/en/icdc/data/land/docs-land/modis-collection6-vegetation-index-atbd-mod13-v03-1.pdf (accessed on 26 December 2024).
66. Johnson, L.; Cooke, B.; Ramos, V.H.; Easson, G. Use of NASA Satellite Assets for Predicting Wildfire Potential for Forest Environments in Guatemala. Preliminary Report; University of Mississippi Geoinformatics Center, 2008. Available online: https://www.researchgate.net/profile/Greg-Easson/publication/265154859_Use_of_NASA_Satellite_Assets_for_Predicting_Wildfire_Potential_for_Forest_Environments_in_Guatemala/links/54466d170cf2d62c304dbd14/Use-of-NASA-Satellite-Assets-for-Predicting-Wildfire-Potential-for-Forest-Environments-in-Guatemala.pdf (accessed on 28 December 2024).
67. Lyapustin, A.; Wang, Y.; Korkin, S.; Huang, D. MODIS collection 6 MAIAC algorithm. Atmos. Meas. Tech. 2018, 11, 5741–5765.
68. Vandansambuu, B.; Gantumur, B.; Wu, F.; Byambasuren, O.; Bayarsaikhan, S.; Chantsal, N.; Jimseekhuu, M.E. Assessment of burn severity and monitoring of the wildfire recovery process in Mongolia. Fire 2023, 6, 373.
69. Moharir, K.; Singh, M.; Pande, C.; Singh, S.K.; Gelete, G. Mapping forest fire-affected areas using advanced machine learning techniques in Damoh District of Central India. Knowl. Based Eng. Sci. 2024, 5, 62–80.
70. Singgalen, Y.A. Spatio-temporal analysis through NDVI, NDBI, and SAVI using Landsat 8/9 OLI. J. Comput. Syst. Inform. 2024, 5, 815–831.
71. Gitas, I.Z.; Mitri, G.H.; Ventura, G. Object-based image classification for burned area mapping of Creus Cape, Spain, using NOAA-AVHRR imagery. Remote Sens. Environ. 2004, 92, 409–413.
72. Zand, M.; Etemad, A.; Greenspan, M. Flow-based spatio-temporal structured prediction of motion dynamics. IEEE Trans. Pattern Anal. Mach. Intell. 2023, 45, 13523–13535.
73. Zand, M.; Etemad, A.; Greenspan, M. Diffusion models with deterministic normalizing flow priors. OpenReview 2024, preprint. Available online: https://openreview.net/forum?id=ACMNVwcR6v (accessed on 18 May 2025).
Figure 1. Overview of the study area: (a) Australia, (b) enlarged view of Greater Sydney, in which the red box highlights the locations of the burned land and post-fire smoke cases, (c) Sentinel-2 image of the post-fire smoke case, and (d) Sentinel-2 image of the burned land case.
Figure 2. Proposed methodology.
Figure 3. Visualization of manually prepared fire-related labels: (a) vegetation and post-fire smoke label distribution for the post-fire smoke case, and (b) vegetation and burned land label distribution for the burned land case.
Figure 4. J-M distance of (a) different remote sensing indices, (b) different bands, and (c) different textures for post-fire smoke.
Figure 5. Mutual information score (M_i) of (a) different remote sensing indices, (b) different bands, and (c) different textures for post-fire smoke.
Figure 6. Discriminant index of (a) different remote sensing indices, (b) different bands, and (c) different textures for post-fire smoke.
Figure 7. J-M distance of (a) different remote sensing indices, (b) different bands, and (c) different textures for burned land.
Figure 8. Mutual information score (M_i) of (a) different remote sensing indices, (b) different bands, and (c) different textures for burned land.
Figure 9. Discriminant index of (a) different remote sensing indices, (b) different bands, and (c) different textures for burned land.
Figure 10. Comprehensive evaluation results of (a) different remote sensing indices, (b) different bands, and (c) different textures for post-fire smoke.
Figure 11. Comprehensive evaluation results of (a) different remote sensing indices, (b) different bands, and (c) different textures for burned land.
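As a rough illustration of the separability measures reported in the figures above, the sketch below computes a Jeffries–Matusita (J-M) distance under a Gaussian class assumption and a histogram-based mutual information score for two synthetic one-dimensional feature samples (e.g. NDVI over vegetation versus burned land). The synthetic data, bin count, and the NumPy-only estimators are illustrative assumptions, not the paper's exact implementation or score normalization.

```python
import numpy as np

def jm_distance(x1, x2):
    """J-M distance between two 1-D feature samples, assuming each class
    is Gaussian. Ranges from 0 (identical) to 2 (fully separable)."""
    m1, m2 = x1.mean(), x2.mean()
    v1, v2 = x1.var(), x2.var()
    # Bhattacharyya distance for two univariate Gaussians
    b = (m1 - m2) ** 2 / (4 * (v1 + v2)) \
        + 0.5 * np.log((v1 + v2) / (2 * np.sqrt(v1 * v2)))
    return 2 * (1 - np.exp(-b))

def mutual_information(feature, labels, bins=32):
    """Histogram-based mutual information (in nats) between a continuous
    feature and binary class labels."""
    joint, _, _ = np.histogram2d(feature, labels, bins=(bins, 2))
    p = joint / joint.sum()                      # joint distribution
    px = p.sum(axis=1, keepdims=True)            # feature marginal
    py = p.sum(axis=0, keepdims=True)            # label marginal
    nz = p > 0
    return float((p[nz] * np.log(p[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(0)
veg = rng.normal(0.7, 0.05, 5000)    # hypothetical NDVI over vegetation
burn = rng.normal(0.2, 0.05, 5000)   # hypothetical NDVI over burned land
print(jm_distance(veg, burn))        # well-separated classes: close to 2
feat = np.concatenate([veg, burn])
lab = np.concatenate([np.zeros(5000), np.ones(5000)])
print(mutual_information(feat, lab))
```

A mutual information near ln 2 ≈ 0.69 nats indicates the feature carries almost all the information in a balanced binary label, which is consistent with the near-perfect NDVI scores reported for burned land.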

Cite as: Yang, Z.; Al-Najjar, H.; Beydoun, G.; Kalantar, B.; Zand, M.; Ueda, N. Evaluation of Key Remote Sensing Features for Bushfire Analysis. Remote Sens. 2025, 17, 1823. https://doi.org/10.3390/rs17111823
