Article

TSSA-NBR: A Burned Area Extraction Method Based on Time-Series Spectral Angle with Full Spectral Shape

1 State Key Laboratory of Earth Surface Processes and Hazards Risk Governance, Beijing 100875, China
2 Faculty of Geographical Science, Beijing Normal University, Beijing 100875, China
* Author to whom correspondence should be addressed.
Remote Sens. 2025, 17(13), 2283; https://doi.org/10.3390/rs17132283
Submission received: 14 May 2025 / Revised: 26 June 2025 / Accepted: 1 July 2025 / Published: 3 July 2025
(This article belongs to the Section Ecological Remote Sensing)

Abstract

Wildfires threaten ecosystems, biodiversity, and human livelihood while exacerbating climate change. Accurate identification and monitoring of burned areas (BA) are critical for effective post-fire recovery and management. Although satellite multi-spectral imagery offers a practical solution for BA monitoring, existing methods often prioritize specific spectral bands while neglecting full spectral shape information, which encapsulates overall spectral characteristics. This limitation compromises adaptability to diverse vegetation types and environmental conditions, particularly across varying spatial scales. To address these challenges, we propose the time-series spectral-angle-normalized burn index (TSSA-NBR). This unsupervised BA extraction method integrates normalized spectral angle and normalized burn ratio (NBR) to leverage full spectral shape and temporal features derived from Sentinel-2 time-series data. Seven globally distributed study areas with diverse climatic conditions and vegetation types were selected to evaluate the method’s adaptability and scalability. Evaluations compared Sentinel-2-derived BA with moderate-resolution products and high-resolution PlanetScope-derived BA, focusing on spatial scale and methodological performance. TSSA-NBR achieved a Dice Coefficient (DC) of 87.81%, with commission (CE) and omission errors (OE) of 8.52% and 15.58%, respectively, demonstrating robust performance across all regions. Across diverse land cover types, including forests, grasslands, and shrublands, TSSA-NBR exhibited high adaptability, with DC values ranging from 0.53 to 0.97, CE from 0.03 to 0.27, and OE from 0.02 to 0.61. The method effectively captured fire scars and outperformed band-specific and threshold-dependent approaches by integrating spectral shape features with fire indices, establishing a data-driven framework for BA detection. These results underscore its potential for fire monitoring and broader applications in detecting surface anomalies and environmental disturbances, advancing global ecological monitoring and management strategies.

1. Introduction

Since 2000, global wildfire activity has intensified, with record-breaking wildfires occurring almost annually [1,2]. These wildfires have caused extensive vegetation loss and significant carbon emissions, reaching historical highs [3,4]. Burned area (BA) monitoring quantifies the ecological damage and carbon loss caused by wildfires, providing critical evidence for post-fire ecological restoration and the development of effective fire-management strategies [5,6]. Therefore, monitoring BA is crucial for the management of ecosystems.
Remote sensing technology offers unique advantages in identifying BA, providing critical support for fire monitoring, analysis, and post-disaster recovery assessment [7]. Depending on the sensor, satellite imagery can supply multi-spectral reflectance data at moderate to high spatial resolution and with revisit times sufficient to track fire-induced changes in space and time [8,9]. For example, the widely used normalized burn ratio (NBR) and its differential form (dNBR) can effectively distinguish between affected and unaffected areas by using the reflectance in the near-infrared (NIR) and shortwave infrared (SWIR) bands [10,11].
The Multi-Spectral Imager (MSI) on Sentinel-2 combines 10 m spatial resolution with near-infrared and short-wave infrared bands, providing the spectral detail necessary for accurate BA identification [12,13]. Although each pass is an instantaneous snapshot, Sentinel-2 satellites revisit any given location roughly every 5 days, supplying the multi-temporal imagery needed for time-resolved fire assessment and capturing post-fire dynamics such as vegetation degradation and recovery, thereby providing richer temporal data for assessing fire impact [14,15]. With the development of analytical methods, the application of multi-temporal data in fire monitoring has increased [16,17]. Multi-date imagery improves detection of small, patchy burns that can be overlooked in a single-date scene; combined with 10 m spatial detail, this temporal depth increases overall mapping accuracy [18]. The synergy of dense temporal coverage and fine spatial detail makes Sentinel-2 well suited both for immediate BA delineation and for tracking long-term post-fire ecological recovery [19,20].
While existing fire monitoring methods are widely used due to their simplicity, indices such as NBR require selecting an appropriate threshold to distinguish between burned and unburned areas. This reliance on fixed thresholds poses challenges, as the spectral variability across different regions and vegetation types often limits their adaptability [21,22]. As a result, determining an optimal threshold becomes a complex and labor-intensive task, compromising the robustness of such methods in varied ecological settings [22,23]. Specifically, in environments with complex vegetation, changes in vegetation cover before and after a fire alter the spectral reflectance of the BA [24]. Threshold-based empirical methods have yet to fully leverage the benefits of multi-temporal spectral data, making it difficult to accurately differentiate between burned and unburned pixels [25]. The limitations of threshold setting often result in adaptability issues, prompting researchers to investigate unsupervised classification methods to adaptively address fire monitoring challenges in various environmental conditions [26].
To improve method adaptability, recent studies have shifted from single-date indices to time-series analyses that track the abrupt spectral changes induced by fire [27]. Although these approaches avoid fixed thresholds by detecting temporal breakpoints, they still rely on indices derived from specific bands, which can misclassify surfaces with similar spectral responses [28]. The spectral shape, represented as the complete waveform of reflectance [29], provides a comprehensive depiction of spectral changes by leveraging the full range of spectral information rather than being confined to individual bands. This full spectral shape characterization demonstrates dual advantages: (1) enhancing fire disturbance identification through preserving inter-band covariation patterns, and (2) improving methodological adaptability across diverse land covers by reducing sensitivity to specific spectral bands [30]. The Spectral Angle Mapper (SAM) algorithm [31] exemplifies the concept of analyzing full spectral shape by calculating the angle between spectral vectors to measure their similarity, enabling the identification of land cover categories [32]. In contrast to traditional methods that rely on specific spectral bands, SAM leverages the full spectral shape, offering a broader perspective that reduces reliance on specific band characteristics. This study employs SAM as a similarity metric for spectral shape quantification. However, despite SAM’s established analytical capabilities, existing implementations remain constrained by manual selection of static reference spectra [33], failing to incorporate temporal spectral dynamics and consequently limiting operational applicability. Disturbances caused by fires lead to changes in spectral reflectance across various bands [34]. The incorporation of time-series data facilitates the automatic selection of reference spectra [35], thus enabling unsupervised classification methods based on spectral angle mapping to utilize full spectral shape information.
Therefore, this paper aims to introduce a novel unsupervised approach for extracting BA, utilizing temporal spectral information to improve the adaptability to various vegetation types and classification accuracy. The specific objectives are as follows: (1) To investigate the full spectral shape characteristics of remote sensing data, construct spectral angle indices, and combine normalization differences to create enhanced indices for the accurate assessment of fire burn conditions. (2) To identify and analyze changes in spectral shape over time, emphasizing the importance of spectral shape dynamics in capturing the processes of fire spread and enabling BA mapping even under unknown fire event conditions. (3) To leverage Sentinel-2 time-series spectral information for achieving adaptability and classification accuracy across different land cover types and spatial scales, while balancing both accuracy and data availability. This research is important for monitoring and evaluating fires, supporting post-fire ecological recovery, and formulating post-fire management strategies, while also providing new insights for developing monitoring methods suitable for other surface-related anomalies and disasters.

2. Materials and Methods

2.1. Study Area and Datasets

This study evaluated the proposed fire detection method across seven geographically and climatically diverse regions worldwide to ensure its robustness and generalizability. The selected regions represented a wide range of landscapes, vegetation types and fire characteristics, providing a comprehensive basis for assessing BA mapping accuracy.
The seven regions represented in Figure 1 include the USA (regions A and G), Argentina (region B), Portugal (region C), the Republic of Angola (region D), China (region E), and Brazil (region F). These regions were carefully selected to represent various environmental conditions, which aids in developing and validating fire detection methods across different vegetation types. Region A in California’s Big Sur area was affected by the Dolan Fire in September 2020 and lies within the Santa Lucia Mountains, which typically endure prolonged dry seasons [36]. Region B, San Martín County in Argentina, lies within a humid subtropical zone and is dominated by agricultural fields and natural vegetation, where fires are frequently driven by extreme droughts [37]. Region C, located in Portugal, features a Mediterranean climate with dry summers and a complex vegetation cover consisting of forests and shrublands, presenting particular challenges for fire detection due to the varied vegetation types [38]. Region D, in the savanna region of Angola, experiences seasonal fires with rapid burn dynamics, providing a unique setting for fire detection [39]. Region E, Xichang in China, represents a subtropical mountainous environment with dense forests, which makes it particularly challenging for accurate fire mapping [40]. Region F, Altamira in Pará, Brazil, falls in an equatorial, tropical rainforest climate characterized by year-round high humidity and continuous evergreen canopy cover, where intense biomass accumulation and cloud interference pose challenges for optical fire detection. Region G, Fairbanks North Star Borough in interior Alaska, lies near the polar region with a subarctic climate marked by short, cool summers, long, frigid winters, and a mosaic of boreal forest and tundra [41]. The significant geographical differences and varying climate and vegetation types make these regions ideal for testing and validating fire detection methods and ensuring their applicability across diverse environments.
For this study, Sentinel-2 MSI Level 2A imagery, obtained from the Google Earth Engine collection COPERNICUS/S2_SR_HARMONIZED and generated with the Sen2Cor processor, served as the primary data source for BA detection. The process began by selecting a target date within the fire period, which defined a two-month window around that date, incorporating images one month before and one month after. This initial date selection set the temporal window for analysis. Fire events were detected by analyzing the sliding changes in the time-series spectral angle-normalized burn index (TSSA-NBR) within this time window. The method exploited temporal variations in spectral characteristics to flag fire-affected pixels and identified the event date from shifts in TSSA-NBR trends. This sliding window approach allows for the flexible identification of fire events across different dates, without requiring fixed pre- and post-fire imagery. To ensure spatial precision, the Sentinel-2 bands were resampled to 10 m resolution, with atmospheric correction and orthorectification applied for consistent results.
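As a concrete illustration of this data-collection step, the sketch below assembles such a two-month window with the Earth Engine Python API. It is a minimal sketch under stated assumptions: the rectangle geometry, target date, and 20% cloud threshold are illustrative placeholders rather than values used in the study.

```python
import datetime as dt
import ee

ee.Initialize()

# Illustrative assumptions: a study-area polygon and a candidate fire date.
region = ee.Geometry.Rectangle([-121.8, 35.8, -121.3, 36.3])  # hypothetical extent
target_date = dt.date(2020, 9, 20)

# Two-month window: one month before and one month after the target date.
start = (target_date - dt.timedelta(days=30)).isoformat()
end = (target_date + dt.timedelta(days=30)).isoformat()

# Harmonized Sentinel-2 L2A surface reflectance, lightly cloud-filtered.
s2 = (
    ee.ImageCollection("COPERNICUS/S2_SR_HARMONIZED")
    .filterBounds(region)
    .filterDate(start, end)
    .filter(ee.Filter.lt("CLOUDY_PIXEL_PERCENTAGE", 20))
    .sort("system:time_start")
)

print("Scenes in window:", s2.size().getInfo())
```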
A hierarchical validation framework was employed to ensure the reliability of BA mapping results, integrating high-resolution imagery, field-measured data, and widely used medium-resolution BA products. The validation framework in the Portugal study area included field-measured BA data provided by the Institute for Conservation of Nature and Forests (ICNF), which was recognized for its high accuracy [38]. The ICNF data served as the gold standard for evaluating fire mapping results. The close agreement between ICNF measurements and the results from PlanetScope-derived maps validated the latter as a reliable surrogate for ground truth.
PlanetScope imagery, with a spatial resolution of 3 m, was used as the primary reference for most regions in this study [42]. A burned-area mask was first generated using Maximum Likelihood Classification (MLC) and then manually refined against the high-resolution PlanetScope imagery to produce the final truth layer, termed PlanetScope_BA. This iterative process established a robust foundation for validation and analysis.
For large-scale evaluation, we compared BA results from the proposed method and PlanetScope references with two widely recognized medium-resolution products: MCD64A1 and FireCCI. MCD64A1, a MODIS product with a spatial resolution of 500 m, is effective for large-scale mapping, but its resolution limits the detection of small or heterogeneous BA. The FireCCI suite, including FireCCI51 with a resolution of 250 m [18] and FireCCIS311 with a resolution of 300 m [43], was critical for benchmarking the method’s performance across diverse fire scenarios and geographic conditions.
This study employed the Dynamic World global 10 m near-real-time land-cover product [44], which is generated from Sentinel-2 imagery via deep learning [45], to obtain an initial land-cover map for the study area. Although Dynamic World delivers fast and reasonably accurate global coverage, its post-fire performance is limited because the model is trained largely on unburned scenes, which causes recently burned surfaces to be frequently misclassified as bare ground or cropland. To mitigate these issues, we reclassified the Dynamic World output with MLC and refined the result through visual interpretation, retaining only vegetation classes (forest, grassland, shrub, and crop) and masking non-vegetated categories for subsequent fire-impact analysis.
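For illustration, the following sketch shows how a vegetation-only mask could be derived from Dynamic World with the Earth Engine Python API. It is a hedged sketch, not the study’s exact procedure: the asset ID GOOGLE/DYNAMICWORLD/V1 and its "label" band are standard, but the date range and geometry are placeholders, the class codes noted in the comment are assumptions, and the MLC reclassification and visual refinement described above are not reproduced here.

```python
import ee

ee.Initialize()

region = ee.Geometry.Rectangle([-121.8, 35.8, -121.3, 36.3])  # hypothetical extent

# Mode composite of Dynamic World labels over an illustrative pre-fire month.
dw = (
    ee.ImageCollection("GOOGLE/DYNAMICWORLD/V1")
    .filterBounds(region)
    .filterDate("2020-08-20", "2020-09-20")
    .select("label")
    .reduce(ee.Reducer.mode())
)

# Assumed class codes: 1 trees, 2 grass, 4 crops, 5 shrub_and_scrub (vegetated classes).
vegetation_mask = dw.eq(1).Or(dw.eq(2)).Or(dw.eq(4)).Or(dw.eq(5))

# Keep only vegetated pixels for the subsequent fire-impact analysis.
veg_only = dw.updateMask(vegetation_mask)
```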

2.2. Method

2.2.1. Burned Indices and Unsupervised Classification Method

Burned indices are extensively utilized in fire monitoring and severity assessment to detect BA by capturing fire-induced spectral changes. These indices are generally derived from the reflectance values in key spectral bands such as NIR and SWIR. The NIR bands are typically sensitive to vegetation changes, making them useful for detecting fire-induced vegetation loss, while the SWIR bands are more responsive to moisture content and can enhance the ability to differentiate between burned and unburned areas. NBR, based on the difference between NIR and SWIR bands [46], is a well-established index for assessing post-wildfire vegetation recovery, with high values in healthy vegetation and significant decreases after burning [47]. NBRSWIR, developed for Landsat-8 OLI data, uses SWIR1 and SWIR2 bands to reduce the influence of moisture variation and enhance fire detection accuracy [28]. The MIRBI [48] utilizes SWIR1 and SWIR2 bands and is highly sensitive to spectral changes caused by burning, with reduced susceptibility to noise, making it particularly suitable for post-fire surface analysis. BAIS2 combines red-edge information with band ratios to enhance sensitivity to fire-induced spectral changes [49]. Based on the distinct characteristics of these indices, this study selected NBR, NBRSWIR, MIRBI, and BAIS2. By calculating the results of each index and applying MLC to the visually interpreted samples, we assessed fire detection accuracy across different land cover types and their adaptability to diverse vegetation. The specific formulas and references for these indices are provided in Table 1.
In addition to these indices, we also applied an unsupervised classification method, fuzzy C-means clustering (FCM) [51]. Because preliminary trials revealed only minor performance differences among several unsupervised algorithms, FCM was selected as a representative approach for displaying BA results [52], allowing a meaningful comparison with the fire indices. To make the method fully data-driven, we coupled it with a Grey Wolf Optimizer (GWO), which adaptively tunes the cluster-membership thresholds and centroids, thereby yielding an optimized GWO-FCM variant [53]. To provide a deep-learning benchmark, we adopted U-Net, a convolutional neural network that combines an encoder and a decoder linked by skip connections, thereby preserving fine spatial detail while integrating multi-scale context [54,55]. U-Net is widely applied to remote sensing problems such as land-cover mapping, surface-water extraction, road and building delineation, and burned-area detection. In this study, it was trained on Sentinel-2 reflectance composites using the PlanetScope_BA mask as labels, allowing a direct comparison with the optimized unsupervised classifier and the index-based approach.
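To make the clustering baseline concrete, the minimal numpy sketch below implements plain fuzzy C-means on a burn-index raster. It omits the GWO parameter tuning and the U-Net benchmark; the two-cluster setup, the random placeholder raster, and the assumption that burned pixels fall in the lower-index cluster are illustrative choices, not the study’s configuration.

```python
import numpy as np

def fuzzy_cmeans(X, c=2, m=2.0, max_iter=100, tol=1e-5, seed=0):
    """Minimal fuzzy C-means: X is (n_samples, n_features); returns (centroids, memberships)."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    u = rng.random((c, n))
    u /= u.sum(axis=0, keepdims=True)            # memberships sum to 1 per sample
    for _ in range(max_iter):
        um = u ** m
        centroids = um @ X / um.sum(axis=1, keepdims=True)
        # Euclidean distance of every sample to every centroid, kept strictly positive.
        dist = np.linalg.norm(X[None, :, :] - centroids[:, None, :], axis=2) + 1e-12
        u_new = 1.0 / (dist ** (2.0 / (m - 1.0)))
        u_new /= u_new.sum(axis=0, keepdims=True)
        if np.abs(u_new - u).max() < tol:
            u = u_new
            break
        u = u_new
    return centroids, u

# Usage sketch: cluster pixels of an NBR-like raster into two fuzzy classes.
index_image = np.random.rand(100, 100)             # placeholder for a burn-index image
features = index_image.reshape(-1, 1)
centroids, memberships = fuzzy_cmeans(features, c=2)
burned_cluster = int(np.argmin(centroids[:, 0]))   # assumption: burned pixels have lower values
burned_mask = (memberships.argmax(axis=0) == burned_cluster).reshape(index_image.shape)
```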

2.2.2. TSSA-NBR: A Time-Series Spectral Angle-Based Enhancement with Full Spectral Shape of the NBR Index

Utilizing the full spectral shape characteristics of Sentinel-2 and the NBR index, we developed a time-series spectral angle normalized NBR index (TSSA-NBR) for BA extraction. This study’s workflow primarily includes three components: (1) Image data collection: Time-series imagery is collected within a specified time window (e.g., two months) using Sentinel-2 MSI L2A, along with active fire products and ancillary data, to support the subsequent validation of BA accuracy. (2) BA extraction using the TSSA-NBR method: The TSSA-NBR method is constructed by combining time-series SAM and NBR to derive BA results. (3) Validation and comparison: The reliability and effectiveness of the TSSA-NBR were evaluated across different spatial scales, using various indices and unsupervised classification methods, while considering different land cover types. The complete workflow is illustrated in Figure 2.
Unlike traditional single-band analysis methods, spectral shape characterization integrates reflectance information across the entire spectrum, from visible to shortwave infrared, capturing a more comprehensive spectral response of land cover types. To quantify spectral shape similarity, this study employs the classical SAM, a widely used yet computationally efficient tool for measuring spectral similarity based on the angular difference between reference and target spectra. Despite its simplicity, SAM effectively characterizes spectral shape variations, making it well-suited for unsupervised fire detection. In fire monitoring, selecting an appropriate reference spectrum is crucial, as it directly affects spectral angle calculations. To address this, we construct a time-series spectral trajectory and introduce an automated reference selection mechanism based on historical spectral baselines. This approach ensures that the reference spectrum is representative of pre-fire conditions while minimizing the impact of seasonal variability. Research indicates that the median burn duration is 29 days [56], suggesting that land cover spectral characteristics remain relatively stable within a one-month period. Therefore, the first image within a two-month time window is selected as the reference spectrum, striking a balance between capturing fire-induced spectral changes and maintaining temporal consistency. By leveraging full spectral shape information and a systematic reference selection strategy, this approach enhances the robustness and adaptability of unsupervised burned-area detection.
SAM and NBR Index
The first image in the time-series is selected as the reference image, with $R_{i,j}$ representing the spectral vector of the pixel in this image, where $i$ and $j$ indicate the pixel’s row and column positions. The spectral angle between the spectral vector $S_{i,j,t}$ of pixel $p_{i,j}$ at time $t$ and the reference spectral vector $R_{i,j}$ is then calculated and denoted as $\theta_{i,j,t}$, as shown in Equation (1).
$$\theta_{i,j,t} = \cos^{-1}\!\left(\frac{R_{i,j}\cdot S_{i,j,t}}{\lVert R_{i,j}\rVert\,\lVert S_{i,j,t}\rVert}\right) \tag{1}$$
For pixel $p_{i,j}$, the NIR and SWIR reflectance values at time $t$ are denoted as $NIR_{i,j,t}$ and $SWIR_{i,j,t}$, respectively. The calculated NBR index result is represented as $NBR_{i,j,t}$, as shown in Equation (2).
$$NBR_{i,j,t} = \frac{NIR_{i,j,t} - SWIR_{i,j,t}}{NIR_{i,j,t} + SWIR_{i,j,t}} \tag{2}$$
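The two quantities in Equations (1) and (2) can be computed per pixel and per date with a few numpy operations, as in the hedged sketch below; the array shapes, the random reflectance values, and the band indices assumed for NIR and SWIR are placeholders rather than the study’s actual data layout.

```python
import numpy as np

def spectral_angle(reference, stack):
    """Spectral angle (radians) between a reference image (H, W, B) and a
    time-series stack (T, H, W, B), per pixel and per date (Equation 1)."""
    dot = np.einsum("hwb,thwb->thw", reference, stack)
    norms = np.linalg.norm(reference, axis=-1)[None] * np.linalg.norm(stack, axis=-1)
    cos_theta = np.clip(dot / (norms + 1e-12), -1.0, 1.0)
    return np.arccos(cos_theta)

def nbr(nir, swir):
    """Normalized burn ratio (Equation 2) for arrays of NIR and SWIR reflectance."""
    return (nir - swir) / (nir + swir + 1e-12)

# Usage sketch with random reflectance; band indices 3 (NIR) and 5 (SWIR) are assumptions.
T, H, W, B = 12, 64, 64, 6
stack = np.random.rand(T, H, W, B).astype(np.float32)
reference = stack[0]                             # first image in the window as reference spectrum
theta = spectral_angle(reference, stack)         # shape (T, H, W)
nbr_series = nbr(stack[..., 3], stack[..., 5])   # shape (T, H, W)
```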
Data Screening Based on Sequential SAM Fitting Residuals
Remote sensing platforms are susceptible to the influence of atmospheric interference, noise, and inherent errors in the sensors. Spectral angles may therefore vary even when the land cover type remains unchanged [57]. To better observe temporal trend changes, especially when a ground object undergoes a combustion-induced mutation, each pixel’s time-series spectral angles and NBR results are subjected to normalization processing to amplify trend information. For normal vegetation areas, the spectral angles are small when undisturbed; if these angles are normalized directly, minor differences may be exaggerated, so that the results combined with NBR are not enhanced and normal vegetation areas may be misjudged as fires.
To eliminate trend-related errors, the original time-series is first subjected to a least-squares linear fit, with time $t$ as the independent variable and $\theta_{i,j,t}$ as the dependent variable; the residuals between the observed values and the fitted trend line, together with their standard deviation, are then examined before normalization.
$$\mu_{i,j} = \frac{1}{n}\sum_{t=1}^{n}\epsilon_{i,j,t} \tag{3}$$
$$\sigma_{\epsilon_{i,j}} = \sqrt{\frac{1}{n-1}\sum_{t=1}^{n}\left(\epsilon_{i,j,t}-\mu_{i,j}\right)^{2}} \tag{4}$$
$$\epsilon_{i,j,t}-\mu_{i,j} > \sigma_{\epsilon_{i,j}} \tag{5}$$
In Equation (3), $\epsilon_{i,j,t}$ denotes the residual sequence, calculated as the difference between the observed time-series value $\theta_{i,j,t}$ and its fitted counterpart. The mean $\mu_{i,j}$ (Equation (3)) and standard deviation $\sigma_{\epsilon_{i,j}}$ (Equation (4)) of the residual sequence are calculated. When the residual is greater than one standard deviation above the mean (Equation (5)), it is considered that there may be a potential fire occurrence, and thus, pixels that have not changed due to a fire are preliminarily filtered out. Generally, a point is considered an outlier if the absolute value of its residual exceeds a certain threshold. Standard thresholds are 2, 2.5, or 3 times the standard deviation from the mean residual [58]. In this study, a threshold of one standard deviation was chosen to avoid excluding too many pixels during the preliminary screening stage. This one-standard-deviation criterion is common in remote sensing anomaly detection, where it helps retain weak yet meaningful disturbances, while larger multiples tend to omit low-intensity signals and reduce overall accuracy [59,60].
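A minimal single-pixel sketch of this screening step, using numpy’s least-squares polynomial fit, is shown below; the synthetic spectral-angle series is a placeholder, and the one-standard-deviation rule follows Equations (3)–(5).

```python
import numpy as np

def residual_screen(theta_series, times):
    """Flag dates whose detrended spectral angle rises more than one standard
    deviation above the mean residual (Equations 3-5). theta_series: (T,) array."""
    # Least-squares linear fit of the spectral-angle series against time.
    slope, intercept = np.polyfit(times, theta_series, deg=1)
    fitted = slope * times + intercept
    residuals = theta_series - fitted       # epsilon_{i,j,t}
    mu = residuals.mean()                   # Equation (3)
    sigma = residuals.std(ddof=1)           # Equation (4)
    return residuals - mu > sigma           # Equation (5)

# Usage sketch on a synthetic series with an abrupt post-fire jump.
times = np.arange(12, dtype=float)
theta_series = np.concatenate([np.full(8, 0.05), np.full(4, 0.40)])
candidate = residual_screen(theta_series, times)
print(candidate)  # True where a potential fire-induced change is flagged
```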
Index Normalization and Difference Calculation
After preliminary screening, the time-series spectral angle results and time-series NBR results are normalized, denoted as $n\theta_{i,j,t}$ (Equation (6)) and $nNBR_{i,j,t}$ (Equation (7)).
$$n\theta_{i,j,t} = \frac{\theta_{i,j,t}-\min(\theta_{i,j})}{\max(\theta_{i,j})-\min(\theta_{i,j})} \tag{6}$$
$$nNBR_{i,j,t} = \frac{NBR_{i,j,t}-\min(NBR_{i,j})}{\max(NBR_{i,j})-\min(NBR_{i,j})} \tag{7}$$
The TSSA-NBR method, similar to the Difference Vegetation Index (DVI), highlights BA by emphasizing the contrasting post-fire trends of an increasing normalized SAM and a decreasing normalized NBR, effectively capturing fire-induced spectral changes. The TSSA-NBR index, denoted as $\Delta_{i,j,t}$, represents the difference between the normalized SAM and normalized NBR for a given pixel $p_{i,j}$ at time $t$, capturing the spectral divergence caused by fire events (Equation (8)).
$$\Delta_{i,j,t} = n\theta_{i,j,t} - nNBR_{i,j,t} \tag{8}$$
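The normalization and differencing of Equations (6)–(8) reduce to a few array operations, as in the sketch below; the random inputs stand in for the spectral-angle and NBR series produced earlier.

```python
import numpy as np

def minmax_normalize(series, axis=0):
    """Min-max normalize a per-pixel time-series along the time axis (Equations 6-7)."""
    lo = series.min(axis=axis, keepdims=True)
    hi = series.max(axis=axis, keepdims=True)
    return (series - lo) / (hi - lo + 1e-12)

# theta and nbr_series are (T, H, W) arrays, e.g. from the earlier SAM/NBR sketch.
T, H, W = 12, 64, 64
theta = np.random.rand(T, H, W)
nbr_series = np.random.rand(T, H, W)

n_theta = minmax_normalize(theta)      # normalized spectral angle
n_nbr = minmax_normalize(nbr_series)   # normalized NBR
delta = n_theta - n_nbr                # TSSA-NBR difference, Equation (8), in [-1, 1]
```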
Fire Detection Method Based on Trend Judgment
When no fire occurs, the spectral state of surface features remains stable. The normalized SAM approaches zero during this period, while the normalized NBR exhibits high values. After a fire event, the burning of surface features leads to significant spectral changes, causing a rapid increase in normalized SAM values towards 1 and a marked decrease in normalized NBR values towards 0. The TSSA-NBR method leverages the difference between normalized SAM and normalized NBR, which results in a new value range from −1 to 1. This range allows for simple and effective classification based on positive and negative changes. The core of the TSSA-NBR method lies in constructing this new range and evaluating the trend of changes in these values over time to identify BA.
However, inherent noise in imagery introduces uncertainty when relying solely on a single feature for fire detection. To address this limitation, the study incorporates the continuity of time-series characteristics as a criterion. Specifically, a fire is identified if the TSSA-NBR difference index exhibits negative values for two consecutive time points, followed by positive values for the next two. This criterion filters out interference caused by single-point or random fluctuations, ensuring that fire detection is based on stable trend changes within the time-series. A conditional formula (Equation (9)) is introduced to determine whether a pixel satisfies the constraints for fire detection based on continuous changes.
$$\Psi_{p_{i,j}}(t) = \begin{cases} 1, & \Delta_{i,j,t-2}<0 \;\wedge\; \Delta_{i,j,t-1}<0 \;\wedge\; \Delta_{i,j,t}>0 \;\wedge\; \Delta_{i,j,t+1}>0 \\ 0, & \text{else} \end{cases} \tag{9}$$
In Equation (9), $\Psi_{p_{i,j}}(t)$ is a Boolean function, where a value of 1 indicates that pixel $p_{i,j}$ at time $t$ meets the fire detection criterion, and a value of 0 indicates it does not. The method evaluates all possible time points within the time-series, ensuring adaptability across different temporal spans.
Using these criteria, the study further defines the temporal and spatial extent of fire events. The initiation time and corresponding spatial location of a fire are determined using Equation (10).
$$\exists\, t \in T,\; \Psi_{i,j}(t) = 1 \;\Rightarrow\; t_{\mathrm{start}}(p_{i,j}) = t-1 \tag{10}$$
According to Equation (10), if pixel $p_{i,j}$ in time-series $T$ satisfies condition $\Psi_{i,j}(t)=1$ at time $t$, the fire is considered to have occurred at $t-1$.
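A compact sketch of the trend test in Equations (9) and (10) is given below; it scans each pixel’s TSSA-NBR difference series and records the earliest date satisfying the two-negative/two-positive pattern, with random values used as a stand-in for real data.

```python
import numpy as np

def detect_fire_onset(delta):
    """Trend test of Equations (9)-(10): a fire is flagged when the TSSA-NBR
    difference is negative at t-2 and t-1 and positive at t and t+1.
    delta: (T, H, W) array; returns an (H, W) array of onset indices (t-1), or -1."""
    T = delta.shape[0]
    onset = np.full(delta.shape[1:], -1, dtype=int)
    for t in range(2, T - 1):
        psi = (delta[t - 2] < 0) & (delta[t - 1] < 0) & (delta[t] > 0) & (delta[t + 1] > 0)
        # Keep the earliest qualifying date for each pixel (Equation 10: onset = t - 1).
        newly = psi & (onset == -1)
        onset[newly] = t - 1
    return onset

# Usage sketch with the delta array from the normalization step.
T, H, W = 12, 64, 64
delta = np.random.uniform(-1, 1, size=(T, H, W))
onset = detect_fire_onset(delta)
burned_mask = onset >= 0
```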
Figure 3 illustrates the application of the TSSA-NBR method for fire detection using time-series remote sensing imagery, emphasizing its ability to identify BA through spectral and temporal changes. Figure 3a visualizes the two-month sliding-window strategy that underpins the TSSA-NBR workflow when the ignition date is unknown. For each candidate center image within the chronologically ordered stack of cloud-free Sentinel-2 scenes, the algorithm constructs a 60-day window that spans the 30 days preceding the candidate image and the 30 days following it. After processing, the window shifts forward by one Sentinel-2 revisit interval, which is roughly five days, so consecutive windows overlap by about fifty-five days. This substantial overlap ensures that every calendar day is examined multiple times, while also avoiding situations in which the first image of a window already contains burned pixels and would otherwise serve as an unsuitable reference spectrum. Figure 3b presents the spectral and index trends for a single pixel. The left sub-panel shows a spectral waterfall plot, visualizing the reflectance changes across bands over time. The right sub-panel illustrates the time-series calculations of the spectral angle (SA) and NBR, with the red line marking the fire event date. Distinct trends are observed in SA and NBR, showing opposing temporal trends before and after the fire event, underscoring the utility of combining these indices for fire detection. Finally, Figure 3c displays the normalized TSSA-NBR results derived from the time-series imagery, where the index values are scaled to a range of −1 to 1 to facilitate unsupervised classification. The fire occurrence date is identified by sliding the two-month time window based on changes in the TSSA-NBR index trends. Together, these panels demonstrate the effectiveness of the TSSA-NBR method in leveraging temporal and spectral information to detect fire events robustly.
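The sliding-window scheduling described for Figure 3a can be sketched as a simple generator; the 60-day window length and 5-day step follow the text, while the season start and end dates are illustrative.

```python
import datetime as dt

def sliding_windows(start, end, window_days=60, step_days=5):
    """Yield (window_start, window_end) pairs that advance by roughly one
    Sentinel-2 revisit interval, so consecutive 60-day windows overlap by ~55 days."""
    current = start
    while current + dt.timedelta(days=window_days) <= end:
        yield current, current + dt.timedelta(days=window_days)
        current += dt.timedelta(days=step_days)

# Usage sketch over an illustrative fire season.
for w_start, w_end in sliding_windows(dt.date(2020, 7, 1), dt.date(2020, 11, 1)):
    pass  # fetch the Sentinel-2 scenes for [w_start, w_end] and run TSSA-NBR on them
```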

2.3. Accuracy Assessment

In this study, the MLC method initially generated ground truth pixels for BA. These results were compared with Planet imagery, followed by manual interpretation and correction to refine the truth map, represented by PlanetScope_BA. These corrected truth maps served as the reference for assessing the performance of the TSSA-NBR method. To evaluate the performance of TSSA-NBR, a quantitative analysis was conducted using evaluation metrics such as Omission Error (OE), Commission Error (CE), and Dice Coefficient (DC) [39]. The results from TSSA-NBR were compared not only with several widely used burned indices and unsupervised classification methods, but also through a scale-dependent comparison of products and reference results across different spatial resolutions. This allowed us to assess the performance of the TSSA-NBR method in relation to both traditional fire detection indices and unsupervised classification methods, considering the effects of spatial resolution and product characteristics on BA accuracy.
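Under the standard definitions of these metrics, the evaluation reduces to pixel counts over binary masks, as in the sketch below; the random masks are placeholders for a detected burned-area map and the PlanetScope_BA reference.

```python
import numpy as np

def burned_area_metrics(pred, truth):
    """Dice coefficient, commission error, and omission error for binary masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()
    fp = np.logical_and(pred, ~truth).sum()
    fn = np.logical_and(~pred, truth).sum()
    dc = 2 * tp / (2 * tp + fp + fn + 1e-12)   # Dice coefficient
    ce = fp / (tp + fp + 1e-12)                # commission error
    oe = fn / (tp + fn + 1e-12)                # omission error
    return dc, ce, oe

# Usage sketch comparing a detected mask with the PlanetScope_BA reference mask.
pred = np.random.rand(64, 64) > 0.5
truth = np.random.rand(64, 64) > 0.5
dc, ce, oe = burned_area_metrics(pred, truth)
print(f"DC={dc:.3f}, CE={ce:.3f}, OE={oe:.3f}")
```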
To ensure that the evaluation accurately reflected each product’s ability to detect small-scale fires, which dominate the Portuguese ICNF records, Region C was expanded to include the surrounding landscape. This larger extent provided a sufficient number of small burned patches and guaranteed spatial overlap with all moderate and coarse-resolution BA products. Because field measurements are available for the expanded area, the corresponding PlanetScope imagery could be used directly, without additional manual correction, as an auxiliary benchmark for validating the reference map. Instead of relying on the total burned area as an accuracy metric, we compared the spatial correspondence between each BA product and the ICNF polygons within the enlarged extent so that resolution-driven bias was avoided.

3. Result

3.1. Scale-Dependent Comparison of Products and Reference Results

In Region C, we detected 180 burned-area patches across various size categories, with each detection method demonstrating distinct capabilities. The classification of fires into size categories (0.4–4, 4–62.5, 62.5–250, and >250 hectares) was based on typical pixel area sizes commonly used for fire detection in remote sensing studies. Fires smaller than 0.4 hectares were excluded from the analysis, as detecting fires of this size is particularly challenging in this region due to the limitations of the available satellite data [38].
For the 0.4–4 ha class, MCD64A1 and FireCCI51/FireCCIS311 missed virtually every event (57 and 56 omissions, respectively), registering only 0 and 1 true detections and producing 3 and 25 commission errors. Sentinel-2_BA generated by the TSSA-NBR method detected 34 fires (23 omissions, 73 commissions), while PlanetScope_BA achieved the highest sensitivity with 39 true detections and 18 omissions, albeit at the cost of 89 commissions. Performance improved for the 4–62.5 ha category. Although MCD64A1 and FireCCI51/FireCCIS311 still left 94 and 90 omissions and detected only 4 and 8 fires, Sentinel-2_BA identified 20 true fires with 12 omissions and 12 commissions, and PlanetScope_BA matched the 20 true detections while reducing omissions to 3 fires (23 commissions). In the 62.5–250 ha range, Sentinel-2_BA and PlanetScope_BA mapped all 20 fires with zero omissions or commissions. FireCCI51/FireCCIS311 detected 13 fires (7 omissions, 20 commissions), whereas MCD64A1 detected only 3 fires (17 omissions, 2 commissions). For the largest fires (>250 ha, 5 reference patches), PlanetScope_BA and Sentinel-2_BA again showed near-perfect performance, each recording seven true detections without omissions or commissions. By contrast, MCD64A1 detected only one fire (four omissions, one commission), and FireCCI51/FireCCIS311 detected four fires (one omission, three commissions). These detailed results confirm that Sentinel-2_BA offers the best balance between sensitivity and reliability across all size classes, substantially outperforming the coarse-resolution products, while PlanetScope_BA provides the greatest sensitivity for very small fires albeit with higher commission rates. Figure 4a visualizes the spatial distribution of BA detected by all methods, and Figure 4b presents the corresponding statistics in a stacked-bar format.
In the context of BA mapping, smaller BAs are often challenging to detect with moderate-resolution satellite imagery, even when they are clearly visible in higher-resolution scenes. PlanetScope’s superior spatial resolution has shown a better ability to detect smaller BA than other products. In accuracy comparisons with field data from the ICNF, PlanetScope-derived BA consistently outperformed other products, including MCD64A1, FireCCI51 or FireCCIS311, and Sentinel-2-derived BA, particularly in the detection of smaller BA. As a result, for validation in other study areas, PlanetScope_BAs are used as the reference ground truth, with manual adjustments made based on visual interpretation during the validation process.
Building on PlanetScope_BA as the reference ground truth, the performance of MCD64A1, FireCCI51 or FireCCIS311, and Sentinel-2_BA was assessed across seven study areas using DC, CE, and OE metrics. The results in Figure 5 demonstrate that Sentinel-2_BA, combined with the TSSA-NBR method, consistently achieved higher accuracy than the lower-resolution products. In Region D, Sentinel-2_BA attained a DC of 99.99%, with CE and OE values at 0.00%, reflecting its strong alignment with the reference. Similarly, in Region A and B, Sentinel-2_BA maintained high DC values of 85.64% and 94.08%, respectively, showcasing its ability to provide reliable fire detection even in diverse geographic conditions. Robust performance was also evident in Regions F and G, where Sentinel-2_BA achieved DC values of 97.38% and 90.98%, while the corresponding CE values were 3.36% and 7.33% and the OE values were 5.21% and 8.17%.
In addition to achieving high DC values, Sentinel-2_BA consistently reduced CE and OE across most regions, outperforming MCD64A1 and FireCCI51. For example, in Region C, Sentinel-2_BA lowered OE to 28.97%, a significant improvement compared to 81.58% and 63.59% for MCD64A1 and FireCCI51, respectively. This trend underscores the robustness of the TSSA-NBR approach in leveraging Sentinel-2’s spectral and temporal data, achieving accuracies comparable to PlanetScope_BA while maintaining efficiency and applicability for large-scale monitoring efforts.

3.2. Comparison of BA Indices and Unsupervised Classification Method

This study compares several methods for BA detection to assess their performance. Figure 6 presents the results of various algorithms, including TSSA-NBR, NBR, NBRSWIR, BAIS2, MIRBI, GWO-FCM, and U-Net, across seven regions, showing the post-fire image, ground truth, and detection outcomes. The results indicate that TSSA-NBR performed well across all regions, effectively identifying fire areas.
The performance of different BA detection methods was evaluated across seven regions, using PlanetScope imagery as a reference. Table 2 provides a summary of the performance metrics derived from the results in Figure 6, including true positives (TP), true negatives (TN), false positives (FP), false negatives (FN), OE, CE and DC.
Among the methods tested for detecting BA, TSSA-NBR performed the best with a DC of 87.81%, indicating its high accuracy in identifying burned regions. It also exhibited low OE at 15.58% and CE at 8.52%, suggesting it effectively distinguished between burned and non-burned areas. In comparison, GWO-FCM showed the lowest performance with a DC of 74.58%, accompanied by higher OE (18.09%) and CE (31.54%), indicating lower classification accuracy. While NBRSWIR improved over NBR with a DC of 84.38%, it still had a relatively high CE (9.48%). Other methods, like BAIS2 and MIRBI, also performed moderately, with DC values of 83.47% and 86.18%, but suffered from notable CE. Although MIRBI showed the lowest CE at 6.54%, it tended to underestimate the burned area, classifying fewer regions as burned, which limits its effectiveness in accurately capturing all BA. U-Net achieved a DC of 87.55%, nearly matching TSSA-NBR, while maintaining CE at 9.31% and the lowest OE among learning-based methods (14.35%). The actual BA for validation, based on PlanetScope, was 919.29 km2, providing the ground truth for comparison.

3.3. Results of BA Extraction for Different Land Cover Types

Although existing fire indices are easy to use and generally produce similar outlines in BA extraction (Figure 7), significant differences remain in finer details. These variations highlight that different methods exhibit varying adaptability across regions and land cover types.
Figure 7 presents the BA extraction results across various land cover types using different methods. The subplots were selected through an automated sliding window approach, guided by land cover type data, with specific regions randomly chosen for each land cover type to objectively describe the qualitative differences in each method across different land cover types. Red circles mark significant performance differences, highlighting the distinct responses of each method in finer detail. In fire detection across diverse land cover types, the TSSA-NBR method performs significantly better, providing more accurate and detailed BA extractions. In contrast, NBR, NBRSWIR, and BAIS2 struggle with capturing fine details and exhibit significant omissions, particularly in dense canopy and shrubland. MIRBI, while performing well in some regions, shows notable misclassification in the dense canopy. The GWO-FCM method, owing to its binary division of fire versus non-fire, offers only coarse delineations and thus performs poorly in fine-scale BA extraction. Meanwhile, U-Net delivers overall satisfactory results; however, it still suffers from noticeable omission errors in dense-forest areas, where complex canopy structures obscure subtle burn signatures. The TSSA-NBR method demonstrates high detection accuracy across diverse vegetation covers, surpassing other methods. The results presented in Figure 7 are derived from Area B, which is characterized by diverse vegetation covers and is located within a humid subtropical zone, where agricultural fields and natural vegetation dominate. The region’s varied vegetation covers, along with a sufficiently large sample size, provide a robust basis for comparing the performance of different BA extraction methods. Although Figure 7 serves as a qualitative illustration, the corresponding quantitative evidence, which includes DC values up to 0.97 and CE values as low as 0.02 for TSSA-NBR, is provided in Figure 8 and thereby substantiates the visual observations. It is important to note that the results shown in Figure 7 are intended to illustrate the qualitative differences in BA extraction across different vegetation covers, rather than to specifically evaluate the performance of the algorithms in this particular region.
The quantitative performance of the various methods across land-cover classes is illustrated in Figure 8. For every combination of method, land cover and region, the figure employs an incremental stacked bar that places the DC, OE and CE values in layers rather than adding them, which allows a direct visual comparison of their magnitudes. The table beneath the chart shows that the reference BA samples comprise approximately 562 km2 of forest, 258 km2 of shrubland, 87 km2 of grassland and only about 1 km2 of cropland. This distribution reflects the fact that large wildfires overwhelmingly occur in forest and shrub vegetation, whereas cropland rarely burns and therefore contributes few reference pixels. Because each land-cover class is evaluated independently, the imbalance does not bias the accuracy metrics for grassland, shrubland or cropland, but it does highlight that the cropland results are based on a much smaller sample pool. Within this sampling framework, TSSA-NBR demonstrates high stability across CE, OE and DC, particularly in forests, grassland and shrublands.
In forest regions, TSSA-NBR consistently outperformed other methods, achieving the highest DC values and low CE. For example, in Region A, TSSA-NBR obtained a DC of 0.88 for forests, exceeding NBR (0.79) and NBRSWIR (0.82) while maintaining the lowest CE of 0.02. Similarly, in Region D, TSSA-NBR achieved a DC of 0.95 for forests, showcasing its robustness in forested areas. The tropical forests of Region F and the boreal forests of Region G yielded DCs of 0.93 and 0.90 with corresponding CE values of 0.06 and 0.08, further confirming cross-biome robustness. For grasslands, TSSA-NBR achieved remarkable performance as well, with a DC of 0.97 and a CE of 0.06 in Region A, far surpassing GWO-FCM and the burned indices. Grasslands in Region F and Region G recorded DCs of 0.95 and 0.97, with CE values of 0.04 and 0.17, demonstrating reliable performance under both equatorial and sub-arctic climates. TSSA-NBR excelled in shrublands, achieving a DC of 0.94 in Region F and 0.90 in Region G, significantly outperforming other methods that struggled with higher CE and OE.
For crop-dominated areas, TSSA-NBR presented the highest classification accuracy among all methods, with the lowest CE values, such as 0.17 in Region A and 0.22 in Region B. The OE, however, remained comparatively high because the number of burned cropland samples was limited and the spectral signatures of burned and unburned fields are similar; Region A recorded an OE of 0.61, Region B 0.38, Region D 0.14 and Region G 0.26. Region F contained almost no cropland fires, which prevented a valid score, whereas Region G still achieved a DC of 0.73 with the CE and OE reported above, surpassing every other method by at least 0.10 in DC. Despite these challenges, TSSA-NBR remains the most reliable method for BA detection in crops, as its balanced performance across error metrics highlights its potential for accurate classification under complex conditions. The method’s ability to achieve superior results for forests, grasslands, and shrubs while maintaining competitive crop performance underscores its adaptability and reliability across diverse land cover types.

4. Discussion

4.1. Scale Comparison of Products and PlanetScope_BA

In this study, we compared the performance of MCD64A1, FireCCI51 or FireCCIS311, Sentinel-2_BA, and PlanetScope_BA in detecting BA across various regions in Portugal, leveraging ground truth data from ICNF. Although PlanetScope_BA still has limitations in capturing details of small-scale fires, its overall performance remains superior across different fire scales due to its ability to provide high-resolution data. Therefore, PlanetScope_BA was chosen as a reference to validate results from other regions.
In contrast, Sentinel-2, which is widely available and offers a balance between spatial and temporal resolution, performed remarkably well when used with the TSSA-NBR method. Its ability to capture fire-affected areas with precision is evident, particularly in regions like Region D, where its DC reached 99.99%, surpassing MCD64A1 (67.76%) and FireCCI51 (71.68%) (Figure 5). The low overall OE of Sentinel-2_BA (14.77%) further emphasizes its reliability, especially in detecting smaller fires, compared to MCD64A1 (42.37%) and FireCCI51 (32.64%). For example, in Region C, Sentinel-2_BA’s OE was 28.97%, while MCD64A1’s much higher OE (81.58%) indicates difficulties with small-scale fire detection. The CE analysis showed that Sentinel-2_BA had an overall CE of 9.25%, significantly outperforming MCD64A1 (29.08%) and FireCCI51 (24.94%). This suggests that Sentinel-2’s temporal and spatial resolution allows for more accurate detection, minimizing overestimation in burned areas, a common issue with coarser resolution products like MCD64A1.
This scale-dependent analysis highlights the critical need to balance spatial resolution, accuracy, and data availability. While PlanetScope provides high resolution, making it an excellent reference for validation, its limited data accessibility restricts its broader application [61,62]. In contrast, with their coarser resolution, products like MCD64A1, FireCCI51, and FireCCIS311 often struggle to accurately detect small-scale fires, limiting their overall effectiveness for BA mapping. On the other hand, the availability and consistency of Sentinel-2 make it an effective tool for large-scale fire detection. The TSSA-NBR method, when applied to Sentinel-2, proves to be a robust and scalable solution, offering reliable fire monitoring even in regions where high-resolution data is challenging to obtain or use.

4.2. BA Detection in Different Land Cover Types

The performance of the TSSA-NBR method varied across different vegetation types, with notable differences observed between forested areas and grasslands or shrublands. Across the full Sentinel-2 evaluation, TSSA-NBR attained a DC of 87.81%, a CE of 8.52% and an OE of 15.58%, whereas the best competing index (NBRSWIR) reached only 84.38% DC with 9.48% CE and 20.98% OE, and NBR lagged at 78.93% DC with 19.08% CE. As seen in Figure 8, the forest bar chart shows a greater dispersion in classification results, than the more consistent results for grasslands and shrublands. This pattern aligns with the finer details shown in Figure 7, where forests display more intricate fire-related changes. This variation suggests that traditional methods like NBR, BAIS2, and NBRSWIR struggle with detecting fine details in forested regions. The primary reason for this challenge lies in the complexity of forest canopies, where fires often only affect parts of the canopy or the ground vegetation, leaving much of the forest still green. When a fire only impacts these areas, the spectral angle changes in forests are minimal, and reflectance changes in the NIR and SWIR bands are insignificant [63]. Since NBR relies on these bands to characterize the degree of vegetation damage, it struggles to construct an index that can reveal a clear signal during low-intensity fires. As a result, NBR and similar indices often confuse areas with moderate- to low-intensity fires and unaffected regions, leading to lower accuracy and higher confusion errors [64]. In contrast, TSSA-NBR leverages temporal spectral changes and the full spectral shape, which allows it to more effectively capture subtle changes in forests, even when only parts of the canopy are affected. A deep-learning baseline based on U-Net achieves a similar overall DC of 87.55% and a CE of 9.31%, yet its omission error in dense forests remains higher than TSSA-NBR because the network requires thousands of annotated samples for effective training, which limits rapid deployment in new regions [65]. The optimization-based GWO-FCM approach eliminates the need for training data but attains only 74.58% DC, with a CE of 31.54%, since its binary clustering oversimplifies heterogeneous burn patterns and overlooks many fine-scale scars.
In forest regions, TSSA-NBR outperformed other methods, achieving higher DC and lower CE. For example, in Region A, TSSA-NBR reached a DC of 0.88, surpassing NBR (0.79) and NBRSWIR (0.82), with a low CE of 0.02. Similarly, in Region D, TSSA-NBR achieved a DC of 0.95, demonstrating its robustness in forested areas. This is crucial in regions with low to moderate fire intensity, where traditional methods fail to detect subtle spectral changes. By normalizing time-series spectral angles, TSSA-NBR improves accuracy, overcoming the limitations of conventional indices in distinguishing between partially damaged and unaffected areas.
In grasslands and shrublands, TSSA-NBR also performed well, with DC values of 0.97 for grasslands in Region A and 0.89 in shrublands across Regions A and E. For crop-dominated areas, TSSA-NBR presented the highest classification accuracy among all methods, with the lowest CE values, such as 0.17 in Region A and 0.22 in Region B. The OE, however, remained comparatively high because the limited number of burned-cropland samples constrains reliable index calibration [66], and the spectral signatures of burned and unburned fields are highly similar; Region A recorded an OE of 0.61, Region B 0.38, Region D 0.14 and Region G 0.26, as shown in Figure 8. Independent evaluations show that global burned-area products likewise suffer omission errors in croplands for the same reasons [67,68]. To reduce these errors, future work could integrate Sentinel-1 SAR time-series data, which can discriminate burned croplands under cloudy conditions, and fuse thermal-anomaly active-fire detections to provide near-real-time alerts and lower omission rates [69,70]. Such multi-sensor approaches, combined with augmented sample collections, are expected to enhance TSSA-NBR’s cropland performance while also shortening the detection interval that is critical for emergency response. Despite these challenges, TSSA-NBR remains reliable for detecting burned areas in croplands, and further research can focus on improving the method for agricultural areas with more sample data.

4.3. Potential and Limitations

The TSSA-NBR method advances BA detection by addressing critical limitations of traditional threshold-dependent techniques. By integrating the conventional SAM algorithm with burn indices, TSSA-NBR demonstrates that spectral shape characterization combined with time-series analysis achieves high-precision BA extraction. Through an unsupervised and threshold-free framework, this study further explores the simplicity and interpretability of the SAM algorithm, extending its adaptability beyond conventional application scenarios. Notably, the adoption of SAM in this work establishes an efficient methodological framework for spectral shape analysis, where the core methodology is transferable to other shape characterization techniques. For instance, SAM could be substituted with alternative approaches such as Dynamic Time Warping (DTW) for aligning temporal spectral trajectories or Euclidean distance-based spectral shape similarity metrics to address broader application requirements [71].
Unlike index-based approaches requiring land cover-specific thresholds to mitigate spectral variability, TSSA-NBR employs time-series spectral data to streamline fire pixel identification. Temporal analysis of spectral angles and NBR differences eliminates dependence on complex statistical assumptions, enabling robust classification across diverse vegetation and environmental conditions. Comparative experiments confirm TSSA-NBR’s superiority over traditional burn indices and supervised methods in accuracy and computational efficiency. Beyond fire detection, the method’s flexibility supports diverse environmental monitoring applications.
A key application is post-fire vegetation recovery monitoring. When vegetation regrows, spectral differences transition from positive to negative values and gradually approximate reference spectra levels. Selecting an appropriate reference spectrum and extending the temporal window enhances the understanding of recovery dynamics. TSSA-NBR also quantifies fire intensity: abrupt changes in the index over short periods correlate with fire severity, analogous to the dNBR threshold method. For rapid response, real-time calculation of normalized spectral angles and NBR differences detects sudden index shifts indicative of active fires, facilitating timely management. Furthermore, replacing NBR with indices like the Normalized Difference Water Index (NDWI) enables unsupervised flood mapping via the TSSA-NDWI variant. By synthesizing spectral shape features, domain-specific indices, and temporal dynamics, TSSA-NBR establishes a generalized framework for detecting anomalies with distinct spectral-index signatures, advancing unsupervised solutions for multi-hazard monitoring.
Despite the strong performance and broad application potential of the TSSA-NBR method, several limitations remain. First, while the two-month sliding window confers robustness by leveraging temporal continuity, data quality plays a crucial role in the method’s effectiveness. Factors such as cloud cover and atmospheric conditions can lead to data gaps [72], thereby reducing the accuracy and reliability of TSSA-NBR. Prolonged, multi-week cloud cover can dampen the temporal signal and delay detection, even though post-fire recovery is typically slow enough that the burned-area signature persists once clouds clear. Future work will therefore quantify how varying cloud-induced data-loss rates affect detection latency and accuracy, and will test gap-filling or time-series reconstruction strategies to mitigate this limitation. Moreover, emergency response requires information within days. Future studies should evaluate TSSA-NBR with shorter windows and assess sensor-fusion schemes that combine high-revisit optical or SAR data to balance speed and accuracy. Furthermore, the method cannot precisely determine the exact date of fire occurrence; instead, it provides only an approximate temporal window. Second, the accuracy of fire detection is highly dependent on the selection of the reference spectrum. In regions where the timing of fire events is unknown, if a fire has already occurred at the beginning of the time-series, the reference spectrum may reflect the characteristics of the burned surface. This results in minimal spectral angle variation, potentially causing the initial land cover classification to remain unchanged and leading to the failure to detect the fire event accurately. Third, sudden changes in land cover, such as deforestation, can interfere with the TSSA-NBR method [73]. These abrupt transitions alter the spectral angle and NBR values, increasing the risk of misclassification [16]. For example, deforestation reduces vegetation cover, exposing bare soil that significantly alters spectral properties. This change can be mistaken for fire-induced spectral variations, resulting in false positives [74]. Fourth, TSSA-NBR faces challenges in detecting small-scale fires. For fires smaller than 4 hectares, changes in spectral angle and NBR values may not be sufficiently pronounced to distinguish burned areas effectively. These small fire patches are particularly susceptible to mixed-pixel effects, where multiple surface features within a single pixel complicate the spectral signal and obscure fire characteristics. This complexity increases the likelihood of OE, as the subtle spectral differences make it difficult to detect minor fire events accurately [75].

5. Conclusions

This study introduces TSSA-NBR, an unsupervised BA extraction method that leverages time-series spectral information, effectively enhancing the traditional NBR index. This approach addresses the limitations of conventional fire monitoring methods that rely on specific spectral bands and threshold values, enabling efficient and accurate BA detection across diverse environmental conditions.
This study yields three principal conclusions: (1) TSSA-NBR delivers high-precision burned-area maps over heterogeneous landscapes, as demonstrated by DC values between 0.53 and 0.97, CE from 0.03 to 0.27, and OE from 0.02 to 0.61 across seven climatically and structurally diverse regions, confirming the robustness and scale invariance of the method. (2) Sentinel-2 imagery can effectively substitute for high-resolution datasets in regional burned-area monitoring, because the strong agreement between Sentinel-2-based TSSA-NBR outputs and the PlanetScope_BA product shows that medium-resolution data provide comparable accuracy while remaining more accessible and cost-efficient. (3) Integrating time-series spectral-angle information with the NBR index enables TSSA-NBR to outperform conventional indices such as NBR and BAIS2, as well as several unsupervised classifiers, thereby overcoming the band-specific and threshold-dependent limitations of traditional approaches and yielding more precise fire-scar delineations across forests, grasslands, and shrublands.
This study demonstrates that the TSSA-NBR method significantly enhances BA detection capabilities, offering an efficient and robust solution for rapidly and accurately extracting burned regions. The method shows broad application potential in fire monitoring and recovery management and provides new research insights for diagnosing and monitoring other surface anomalies.

Author Contributions

All authors made significant contributions to the work. Specific contributions include research design: D.L. and Y.Q.; data collection and analysis: D.L. and X.Y.; manuscript preparation: D.L., Y.Q. and Q.Z.; funding acquisition: Y.Q. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (No. 42192581).

Data Availability Statement

Data will be made available on request.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Bowman, D.; Williamson, G.J.; Abatzoglou, J.T.; Kolden, C.A.; Cochrane, M.A.; Smith, A.M.S. Human exposure and sensitivity to globally extreme wildfire events. Nat. Ecol. Evol. 2017, 1, 58. [Google Scholar] [CrossRef] [PubMed]
  2. Andela, N.; Morton, D.C.; Giglio, L.; Chen, Y.; van der Werf, G.R.; Kasibhatla, P.S.; DeFries, R.S.; Collatz, G.J.; Hantson, S.; Kloster, S.; et al. A human-driven decline in global burned area. Science 2017, 356, 1356–1362. [Google Scholar] [CrossRef] [PubMed]
  3. Cunningham, C.X.; Williamson, G.J.; Bowman, D.M.J.S. Increasing frequency and intensity of the most extreme wildfires on Earth. Nat. Ecol. Evol. 2024, 8, 1420–1425. [Google Scholar] [CrossRef]
  4. Kolden, C.A.; Abatzoglou, J.T.; Jones, M.W.; Jain, P. Wildfires in 2023. Nat. Rev. Earth Environ. 2024, 5, 238–240. [Google Scholar] [CrossRef]
  5. Liu, W.-Y.; Liu, C.-L.; Wang, Y.-R. Multi-Index Remote Sensing for Post-Fire Damage Assessment: Accuracy, Carbon Loss, and Conservation Implications. Front. For. Global Change 2025, 8, 1577612. [Google Scholar] [CrossRef]
  6. Kurbanov, E.; Vorobev, O.; Lezhnin, S.; Sha, J.; Wang, J.; Li, X.; Cole, J.; Dergunov, D.; Wang, Y. Remote Sensing of Forest Burnt Area, Burn Severity, and Post-Fire Recovery: A Review. Remote Sens. 2022, 14, 4714. [Google Scholar] [CrossRef]
  7. Campagnolo, M.L.; Oom, D.; Padilla, M.; Pereira, J.M.C. A patch-based algorithm for global and daily burned area mapping. Remote Sens. Environ. 2019, 232, 111288. [Google Scholar] [CrossRef]
  8. Pacheco, A.d.P.; da Silva Junior, J.A.; Ruiz-Armenteros, A.M.; Henriques, R.F.F.; de Oliveira Santos, I. Analysis of spectral separability for detecting burned areas using Landsat-8 OLI/TIRS images under different biomes in Brazil and Portugal. Forests 2023, 14, 663. [Google Scholar] [CrossRef]
  9. Suresh Babu, K.; Singh, S.; Gulzhiyan, K.; Kabzhanova, G.; Baktybekov, G. Burned area mapping based on KazEOSat 1 satellite datasets. Environ. Sci. Proc. 2024, 29, 82. [Google Scholar] [CrossRef]
  10. Miller, J.D.; Thode, A.E. Quantifying burn severity in a heterogeneous landscape with a relative version of the delta Normalized Burn Ratio (dNBR). Remote Sens. Environ. 2007, 109, 66–80. [Google Scholar] [CrossRef]
  11. Epting, J.; Verbyla, D.; Sorbel, B. Evaluation of remotely sensed indices for assessing burn severity in interior Alaska using Landsat TM and ETM+. Remote Sens. Environ. 2005, 96, 328–339. [Google Scholar] [CrossRef]
  12. van der Meer, F.D.; van der Werff, H.M.A.; van Ruitenbeek, F.J.A. Potential of ESA’s Sentinel-2 for geological applications. Remote Sens. Environ. 2014, 148, 124–133. [Google Scholar] [CrossRef]
  13. Fernández-Manso, A.; Fernández-Manso, O.; Quintano, C. SENTINEL-2A red-edge spectral indices suitability for discriminating burn severity. Int. J. Appl. Earth Obs. Geoinf. 2016, 50, 170–175. [Google Scholar] [CrossRef]
  14. Filipponi, F.; Manfron, G. Observing Post-Fire Vegetation Regeneration Dynamics Exploiting High-Resolution Sentinel-2 Data. Proceedings 2019, 18, 10. [Google Scholar] [CrossRef]
  15. Pádua, L.; Guimarães, N.; Adão, T.; Sousa, A.; Peres, E.; Sousa, J.J. Effectiveness of Sentinel-2 in Multi-Temporal Post-Fire Monitoring When Compared with UAV Imagery. ISPRS Int. J. Geo-Inf. 2020, 9, 225. [Google Scholar] [CrossRef]
  16. Bright, B.C.; Hudak, A.T.; Kennedy, R.E.; Braaten, J.D.; Henareh Khalyani, A. Examining post-fire vegetation recovery with Landsat time series analysis in three western North American forest types. Fire Ecol. 2019, 15, 8. [Google Scholar] [CrossRef]
  17. Priya, R.S.; Vani, K. Vegetation change detection and recovery assessment based on post-fire satellite imagery using deep learning. Sci. Rep. 2024, 14, 12611. [Google Scholar] [CrossRef]
  18. Lizundia-Loiola, J.; Otón, G.; Ramo, R.; Chuvieco, E. A spatio-temporal active-fire clustering approach for global burned area mapping at 250 m from MODIS data. Remote Sens. Environ. 2020, 236, 111493. [Google Scholar] [CrossRef]
  19. Suwanprasit, C.; Shahnawaz. Mapping burned areas in Thailand using Sentinel-2 imagery and OBIA techniques. Sci. Rep. 2024, 14, 9609. [Google Scholar] [CrossRef]
  20. Taylor, A.; Dronova, I.; Sigona, A.; Kelly, M. Using Sentinel-2 Imagery to Measure Spatiotemporal Changes and Recovery across Three Adjacent Grasslands with Different Fire Histories. Remote Sens. 2024, 16, 2232. [Google Scholar] [CrossRef]
  21. Sulova, A.; Arsanjani, J.J. Exploratory Analysis of Driving Force of Wildfires in Australia: An Application of Machine Learning within Google Earth Engine. Remote Sens. 2021, 13, 10. [Google Scholar] [CrossRef]
  22. Sismanis, M.; Chadoulis, R.-T.; Manakos, I.; Drosou, A. An unsupervised burned area mapping approach using sentinel-2 images. Land 2023, 12, 379. [Google Scholar] [CrossRef]
  23. Oliveira, E.R.; Disperati, L.; Alves, F.L. A New Method (MINDED-BA) for Automatic Detection of Burned Areas Using Remote Sensing. Remote Sens. 2021, 13, 5164. [Google Scholar] [CrossRef]
  24. Chuvieco, E.; Martín, M.P.; Palacios, A. Assessment of different spectral indices in the red-near-infrared spectral domain for burned land discrimination. Int. J. Remote Sens. 2002, 23, 5103–5110. [Google Scholar] [CrossRef]
  25. Negri, R.G.; Luz, A.E.O.; Frery, A.C.; Casaca, W. Mapping Burned Areas with Multitemporal–Multispectral Data and Probabilistic Unsupervised Learning. Remote Sens. 2022, 14, 5413. [Google Scholar] [CrossRef]
  26. Luz, A.E.O.; Negri, R.G.; Massi, K.G.; Colnago, M.; Silva, E.A.; Casaca, W. Mapping Fire Susceptibility in the Brazilian Amazon Forests Using Multitemporal Remote Sensing and Time-Varying Unsupervised Anomaly Detection. Remote Sens. 2022, 14, 2429. [Google Scholar] [CrossRef]
  27. Giglio, L.; Boschetti, L.; Roy, D.P.; Humber, M.L.; Justice, C.O. The Collection 6 MODIS burned area mapping algorithm and product. Remote Sens. Environ. 2018, 217, 72–85. [Google Scholar] [CrossRef]
  28. Liu, S.C.; Zheng, Y.J.; Dalponte, M.; Tong, X.H. A novel fire index-based burned area change detection approach using Landsat-8 OLI data. Eur. J. Remote Sens. 2020, 53, 104–112. [Google Scholar] [CrossRef]
  29. Carlotto, M.J. Spectral Shape Classification of Landsat. Photogramm. Eng. Remote Sens. 1998, 64, 905–913. [Google Scholar] [CrossRef]
  30. Lim, S.L.; Sreevalsan-Nair, J.; Daya Sagar, B. Multispectral data mining: A focus on remote sensing satellite images. Wiley Interdiscip. Rev. Data Min. Knowl. Discov. 2024, 14, e1522. [Google Scholar] [CrossRef]
  31. Kruse, F.A.; Lefkoff, A.B.; Boardman, J.W.; Heidebrecht, K.B.; Shapiro, A.T.; Barloon, P.J.; Goetz, A.F.H. The Spectral Image-Processing System (Sips)—Interactive Visualization and Analysis of Imaging Spectrometer Data. Remote Sens. Environ. 1993, 44, 145–163. [Google Scholar] [CrossRef]
  32. Petropoulos, G.P.; Vadrevu, K.P.; Xanthopoulos, G.; Karantounias, G.; Scholze, M. A Comparison of Spectral Angle Mapper and Artificial Neural Network Classifiers Combined with Landsat TM Imagery Analysis for Obtaining Burnt Area Mapping. Sensors 2010, 10, 1967–1985. [Google Scholar] [CrossRef] [PubMed]
  33. Shin, J.I.; Seo, W.W.; Kim, T.; Park, J.; Woo, C.S. Using UAV Multispectral Images for Classification of Forest Burn Severity-A Case Study of the 2019 Gangneung Forest Fire. Forests 2019, 10, 1025. [Google Scholar] [CrossRef]
  34. Zhu, Z.; Qiu, S.; Ye, S. Remote sensing of land change: A multifaceted perspective. Remote Sens. Environ. 2022, 282, 113266. [Google Scholar] [CrossRef]
  35. Kulinan, A.S.; Cho, Y.; Park, M.; Park, S. Rapid wildfire damage estimation using integrated object-based classification with auto-generated training samples from Sentinel-2 imagery on Google Earth Engine. Int. J. Appl. Earth Obs. Geoinf. 2024, 126, 103628. [Google Scholar] [CrossRef]
  36. Oseghae, I.; Bhaganagar, K.; Mestas-Nuñez, A.M. The Dolan Fire of Central Coastal California: Burn Severity Estimates from Remote Sensing and Associations with Environmental Factors. Remote Sens. 2024, 16, 1693. [Google Scholar] [CrossRef]
  37. Smichowski, H.; Contreras, F.I. Application of Google Earth Engine in the preliminary analysis of fire severity in the Iberá National Park and Reserve, Argentina. Rev. UDCA Actual. Divulg. Científica 2024, 27, 2464. [Google Scholar] [CrossRef]
  38. Liu, P.; Liu, Y.; Guo, X.; Zhao, W.; Wu, H.; Xu, W. Burned area detection and mapping using time series Sentinel-2 multispectral images. Remote Sens. Environ. 2023, 296, 113753. [Google Scholar] [CrossRef]
  39. Stroppiana, D.; Sali, M.; Busetto, L.; Boschetti, M.; Ranghetti, L.; Franquesa, M.; Pettinari, M.L.; Chuvieco, E. Sentinel-2 sampling design and reference fire perimeters to assess accuracy of Burned Area products over Sub-Saharan Africa for the year 2019. ISPRS J. Photogramm. Remote Sens. 2022, 191, 223–234. [Google Scholar] [CrossRef]
  40. Cao, X.; He, K.; Hu, X.; Luo, G.; Zhou, Y.; Zhou, R.; Yang, Y.; Jin, T. Combined InSAR and optical dataset unravelling the characteristics of hillslope erosion in burned areas in Xichang, China. Catena 2024, 242, 108123. [Google Scholar] [CrossRef]
  41. Smith, C.W.; Panda, S.K.; Bhatt, U.S.; Meyer, F.J.; Badola, A.; Hrobak, J.L. Assessing Wildfire Burn Severity and Its Relationship with Environmental Factors: A Case Study in Interior Alaska Boreal Forest. Remote Sens. 2021, 13, 1966. [Google Scholar] [CrossRef]
  42. Cheng, Y.; Vrieling, A.; Fava, F.; Meroni, M.; Marshall, M.; Gachoki, S. Phenology of short vegetation cycles in a Kenyan rangeland from PlanetScope and Sentinel-2. Remote Sens. Environ. 2020, 248, 112004. [Google Scholar] [CrossRef]
  43. Lizundia-Loiola, J.; Franquesa, M.; Khairoun, A.; Chuvieco, E. Global burned area mapping from Sentinel-3 Synergy and VIIRS active fires. Remote Sens. Environ. 2022, 282, 113298. [Google Scholar] [CrossRef]
  44. Brown, C.F.; Brumby, S.P.; Guzder-Williams, B.; Birch, T.; Hyde, S.B.; Mazzariello, J.; Czerwinski, W.; Pasquarella, V.J.; Haertel, R.; Ilyushchenko, S.; et al. Dynamic World, Near real-time global 10 m land use land cover mapping. Sci. Data 2022, 9, 251. [Google Scholar] [CrossRef]
  45. Avcıoğlu, A.; Akbaş, A.; Görüm, T.; Yetemen, Ö. The compound effect of topography, weather, and fuel type on the spread and severity of the largest wildfire in NW of Turkey. Nat. Hazards 2024, 121, 3219–3237. [Google Scholar] [CrossRef]
  46. García, M.J.L.; Caselles, V. Mapping burns and natural reforestation using thematic Mapper data. Geocarto Int. 1991, 6, 31–37. [Google Scholar] [CrossRef]
  47. Schepers, L.; Haest, B.; Veraverbeke, S.; Spanhove, T.; Vanden Borre, J.; Goossens, R. Burned Area Detection and Burn Severity Assessment of a Heathland Fire in Belgium Using Airborne Imaging Spectroscopy (APEX). Remote Sens. 2014, 6, 1803–1826. [Google Scholar] [CrossRef]
  48. Trigg, S.; Flasse, S. An evaluation of different bi-spectral spaces for discriminating burned shrub-savannah. Int. J. Remote Sens. 2001, 22, 2641–2647. [Google Scholar] [CrossRef]
  49. Filipponi, F. BAIS2: Burned Area Index for Sentinel-2. Proceedings 2018, 2, 364. [Google Scholar] [CrossRef]
  50. Kashnitskii, A.V. Method for automatic detection of burned areas by wildfires using Landsat and Sentinel-2 satellite data. Sovrem. Probl. Distantsionnogo Zondirovaniya Zemli Iz Kosmosa 2022, 19, 29–38. [Google Scholar] [CrossRef]
  51. Ghaffarian, S.; Ghaffarian, S. Automatic histogram-based fuzzy C-means clustering for remote sensing imagery. ISPRS J. Photogramm. Remote Sens. 2014, 97, 46–57. [Google Scholar] [CrossRef]
  52. Mallinis, G.; Koutsias, N. Comparing ten classification methods for burned area mapping in a Mediterranean environment using Landsat TM satellite data. Int. J. Remote Sens. 2012, 33, 4408–4433. [Google Scholar] [CrossRef]
  53. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey wolf optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef]
  54. Ronneberger, O.; Fischer, P.; Brox, T. U-net: Convolutional networks for biomedical image segmentation. In Proceedings of the Medical Image Computing and Computer-Assisted Intervention–MICCAI 2015: 18th International Conference, Munich, Germany, 5–9 October 2015; Proceedings, Part III 18. pp. 234–241. [Google Scholar]
  55. Yan, C.; Fan, X.; Fan, J.; Wang, N. Improved U-Net remote sensing classification algorithm based on Multi-Feature Fusion Perception. Remote Sens. 2022, 14, 1118. [Google Scholar] [CrossRef]
  56. Melchiorre, A.; Boschetti, L. Global Analysis of Burned Area Persistence Time with MODIS Data. Remote Sens. 2018, 10, 750. [Google Scholar] [CrossRef]
  57. Bonannella, C.; Chirici, G.; Travaglini, D.; Pecchi, M.; Vangi, E.; D’Amico, G.; Giannetti, F. Characterization of Wildfires and Harvesting Forest Disturbances and Recovery Using Landsat Time Series: A Case Study in Mediterranean Forests in Central Italy. Fire 2022, 5, 68. [Google Scholar] [CrossRef]
  58. Brown, S.D.; Tauler, R.; Walczak, B. (Eds.) Comprehensive Chemometrics: Chemical and Biochemical Data Analysis, 1st ed.; Elsevier: Boston, MA, USA, 2009. [Google Scholar]
  59. Swetnam, T.L.; Yool, S.R.; Roy, S.; Falk, D.A. On the Use of Standardized Multi-Temporal Indices for Monitoring Disturbance and Ecosystem Moisture Stress across Multiple Earth Observation Systems in the Google Earth Engine. Remote Sens. 2021, 13, 1448. [Google Scholar] [CrossRef]
  60. White, H.J.; Gaul, W.; Sadykova, D.; León-Sánchez, L.; Caplat, P.; Emmerson, M.C.; Yearsley, J.M. Quantifying large-scale ecosystem stability with remote sensing data. Remote Sens. Ecol. Conserv. 2020, 6, 354–365. [Google Scholar] [CrossRef]
  61. Acharki, S. PlanetScope contributions compared to Sentinel-2, and Landsat-8 for LULC mapping. Remote Sens. Appl. Soc. Environ. 2022, 27, 100774. [Google Scholar] [CrossRef]
  62. Vizzari, M. PlanetScope, Sentinel-2, and Sentinel-1 data integration for object-based land cover classification in Google Earth Engine. Remote Sens. 2022, 14, 2628. [Google Scholar] [CrossRef]
  63. Chatzopoulos-Vouzoglanis, K.; Reinke, K.J.; Soto-Berelov, M.; Jones, S.D. Are fire intensity and burn severity associated? Advancing our understanding of FRP and NBR metrics from Himawari-8/9 and Sentinel-2. Int. J. Appl. Earth Obs. Geoinf. 2024, 127, 103673. [Google Scholar] [CrossRef]
  64. Farhadi, H.; Ebadi, H.; Kiani, A. BADI: A Novel Burned Area Detection Index for Sentinel-2 Imagery Using Google Earth Engine Platform. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2023, 10, 179–186. [Google Scholar] [CrossRef]
  65. Zhang, P.; Ban, Y.; Nascetti, A. Learning U-Net without forgetting for near real-time wildfire monitoring by the fusion of SAR and optical time series. Remote Sens. Environ. 2021, 261, 112467. [Google Scholar] [CrossRef]
  66. Seydi, S.T.; Akhoondzadeh, M.; Amani, M.; Mahdavi, S. Wildfire damage assessment over Australia using Sentinel-2 imagery and MODIS land cover product within the google earth engine cloud platform. Remote Sens. 2021, 13, 220. [Google Scholar] [CrossRef]
  67. Hall, J.V.; Argueta, F.; Giglio, L. Validation of MCD64A1 and FireCCI51 cropland burned area mapping in Ukraine. Int. J. Appl. Earth Obs. Geoinf. 2021, 102, 102443. [Google Scholar] [CrossRef]
  68. Zhu, C.; Kobayashi, H.; Kanaya, Y.; Saito, M. Size-dependent validation of MODIS MCD64A1 burned area over six vegetation types in boreal Eurasia: Large underestimation in croplands. Sci. Rep. 2017, 7, 4181. [Google Scholar] [CrossRef] [PubMed]
  69. Zhang, P.; Ban, Y.; Nascetti, A. Total-variation regularized U-Net for wildfire burned area mapping based on Sentinel-1 C-Band SAR backscattering data. ISPRS J. Photogramm. Remote Sens. 2023, 203, 301–313. [Google Scholar] [CrossRef]
  70. Jiao, L.; Bo, Y. Near real-time mapping of burned area by synergizing multiple satellites remote-sensing data. GIScience Remote Sens. 2022, 59, 1956–1977. [Google Scholar] [CrossRef]
  71. Petitjean, F.; Inglada, J.; Gançarski, P. Satellite image time series analysis under time warping. IEEE Trans. Geosci. Remote Sens. 2012, 50, 3081–3095. [Google Scholar] [CrossRef]
  72. Zhang, P.Z.; Nascetti, A.; Ban, Y.F.; Gong, M.G. An implicit radar convolutional burn index for burnt area mapping with Sentinel-1 C-band SAR data. ISPRS J. Photogramm. Remote Sens. 2019, 158, 50–62. [Google Scholar] [CrossRef]
  73. Barmpoutis, P.; Papaioannou, P.; Dimitropoulos, K.; Grammalidis, N. A Review on Early Forest Fire Detection Systems Using Optical Remote Sensing. Sensors 2020, 20, 6442. [Google Scholar] [CrossRef] [PubMed]
  74. Zhang, Y.J.; Wang, L.; Zhou, Q.; Tang, F.; Zhang, B.; Huang, N.; Nath, B. Continuous Change Detection and Classification-Spectral Trajectory Breakpoint Recognition for Forest Monitoring. Land 2022, 11, 504. [Google Scholar] [CrossRef]
  75. Hawbaker, T.J.; Vanderhoof, M.K.; Schmidt, G.L.; Beal, Y.-J.; Picotte, J.J.; Takacs, J.D.; Falgout, J.T.; Dwyer, J.L. The Landsat Burned Area algorithm and products for the conterminous United States. Remote Sens. Environ. 2020, 244, 111801. [Google Scholar] [CrossRef]
Figure 1. Geographical locations and remote-sensing imagery of seven fire events. For each study region, panel x1 (a1 to g1) shows the municipality-level map, whereas panel x2 (a2 to g2) displays the corresponding post-fire remote-sensing image. Region A, represented by panels a1 and a2, is Big Sur, California, USA; Region B, represented by panels b1 and b2, is San Martin County, Corrientes Province, Argentina; Region C, represented by panels c1 and c2, is Viana do Castelo, Portugal; Region D, represented by panels d1 and d2, is Huíla Province, the Republic of Angola; Region E, represented by panels e1 and e2, is Xichang, Sichuan, China; Region F, represented by panels f1 and f2, is Fairbanks North Star Borough, Alaska, USA; and Region G, represented by panels g1 and g2, is Altamira, Pará, Brazil.
Figure 2. Workflow of the TSSA-NBR Method for BA extraction, including (Step 1) image data collection, (Step 2) BA extraction using TSSA-NBR, and (Step 3) validation and comparison.
Figure 3. TSSA-NBR index construction and illustration of positive and negative changes in the normalized difference: (a) Time-series remote-sensing imagery: a pending image selected at an arbitrary time, combined with imagery from one month before and one month after, forms the time series. (b) Multispectral waterfall plots, together with time-series plots of SAM, NBR, and the constructed TSSA-NBR index, illustrate the methodology from a pixel-based perspective. Each polyline in the waterfall plots corresponds to the spectral reflectance profile measured on a specific acquisition date, with the series stacked chronologically to make the temporal spectral evolution of the pixel evident. (c) TSSA-NBR sequences classified without supervision for fire detection across different pixels and times.
Figure 4. (a) BA detection results across MCD64A1, FireCCI51 or FireCCIS311, Sentinel-2_BA, and PlanetScope_BA in expanded Region C, showcasing the spatial distribution of BA across the products. (b) Cumulative stacked bar chart comparing BA detection results in expanded Region C across 4 products, with orange representing true detections, green for commission, and purple for omission.
Figure 5. Scale-dependent comparison of different BA detection products and PlanetScope_BA across study regions, evaluated by DC, CE, and OE.
Figure 6. Predictions of the different methods across the seven study areas.
Figure 7. Mapping results of BA for different land cover types using various burned indices and unsupervised classification methods, with red circles indicating regions of notable differences in performance among the methods.
Figure 8. Incremental stacked bar chart comparing the seven methods (TSSA-NBR, NBR, NBRSWIR, MIRBI, BAIS2, GWO-FCM, U-Net) across the seven study regions A–G and four land-cover classes (forest, grassland, cropland, shrubland). Columns denote regions, and rows denote the evaluation metrics DC, OE, and CE. Within every bar, color indicates the method, and stacking is used only for compact display. The table beneath the chart lists reference burned-area extents (km²) for each region and land-cover combination, with values obtained from manually refined PlanetScope imagery.
Table 1. Reference burned indices and their corresponding formulas.

| BA Index | Formula | Reference |
|----------|---------|-----------|
| NBR | $\mathrm{NBR} = \dfrac{B_{8A} - B_{12}}{B_{8A} + B_{12}}$ | [46] |
| NBRSWIR | $\mathrm{NBRSWIR} = \dfrac{B_{12} - B_{11} - 0.02}{B_{12} + B_{11} + 0.1}$ | [50] |
| MIRBI | $\mathrm{MIRBI} = 10\,B_{12} - 9.8\,B_{11} + 2$ | [48] |
| BAIS2 | $\mathrm{BAIS2} = \left(1 - \sqrt{\dfrac{B_{6}\,B_{7}\,B_{8A}}{B_{4}}}\right)\left(\dfrac{B_{12} - B_{8A}}{\sqrt{B_{12} + B_{8A}}} + 1\right)$ | [49] |
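For reference, the indices in Table 1 can be computed from Sentinel-2 surface reflectance as in the short Python sketch below. The band values used in the example are hypothetical, and the BAIS2 expression follows the formulation of Filipponi [49]; this is an illustrative sketch rather than the paper's processing code.

```python
import numpy as np

def nbr(b8a, b12):
    """Normalized Burn Ratio [46]."""
    return (b8a - b12) / (b8a + b12)

def nbr_swir(b11, b12):
    """NBRSWIR [50]."""
    return (b12 - b11 - 0.02) / (b12 + b11 + 0.1)

def mirbi(b11, b12):
    """Mid-Infrared Burn Index [48]."""
    return 10.0 * b12 - 9.8 * b11 + 2.0

def bais2(b4, b6, b7, b8a, b12):
    """Burned Area Index for Sentinel-2 [49]."""
    return (1.0 - np.sqrt((b6 * b7 * b8a) / b4)) * ((b12 - b8a) / np.sqrt(b12 + b8a) + 1.0)

# Hypothetical post-fire surface-reflectance values (unitless, 0 to 1).
b4, b6, b7, b8a, b11, b12 = 0.12, 0.14, 0.15, 0.16, 0.30, 0.34
print(nbr(b8a, b12), nbr_swir(b11, b12), mirbi(b11, b12), bais2(b4, b6, b7, b8a, b12))
```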
Table 2. Performance comparison of different BA detection methods over the seven regions, using accuracy metrics and the total burned area detected in Sentinel-2.

| Method | Sentinel-2 BA (km²) | TP (km²) | FP (km²) | FN (km²) | TN (km²) | DC (%) | CE (%) | OE (%) |
|--------|--------------------|----------|----------|----------|----------|--------|--------|--------|
| TSSA-NBR | 894.43 | 818.21 | 76.22 | 150.97 | 5072.79 | 87.81 | 8.52 | 15.58 |
| NBR | 922.60 | 746.62 | 175.98 | 222.57 | 4973.03 | 78.93 | 19.08 | 22.96 |
| NBRSWIR | 846.07 | 765.87 | 80.20 | 203.31 | 5068.81 | 84.38 | 9.48 | 20.98 |
| BAIS2 | 862.59 | 764.46 | 98.12 | 204.72 | 5050.88 | 83.47 | 11.38 | 21.12 |
| MIRBI | 829.13 | 774.87 | 54.26 | 194.31 | 5094.75 | 86.18 | 6.54 | 20.05 |
| GWO-FCM | 1159.66 | 793.89 | 365.78 | 175.30 | 4783.23 | 74.58 | 31.54 | 18.09 |
| U-Net | 876.51 | 820.10 | 46.51 | 141.08 | 5102.49 | 87.55 | 9.31 | 14.35 |
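The accuracy metrics in Table 2 are consistent with the conventional pixel-count definitions assumed in the sketch below, DC = 2·TP/(2·TP + FP + FN), CE = FP/(TP + FP), and OE = FN/(TP + FN), which reproduce the TSSA-NBR row (areas in km²); the sketch is illustrative and not the paper's evaluation code.

```python
def dice(tp, fp, fn):
    """Dice coefficient: 2*TP / (2*TP + FP + FN)."""
    return 2 * tp / (2 * tp + fp + fn)

def commission_error(tp, fp):
    """Commission error: FP / (TP + FP)."""
    return fp / (tp + fp)

def omission_error(tp, fn):
    """Omission error: FN / (TP + FN)."""
    return fn / (tp + fn)

# TSSA-NBR row of Table 2 (areas in km^2).
tp, fp, fn = 818.21, 76.22, 150.97
print(f"DC = {dice(tp, fp, fn):.2%}")           # -> 87.81%
print(f"CE = {commission_error(tp, fp):.2%}")   # -> 8.52%
print(f"OE = {omission_error(tp, fn):.2%}")     # -> 15.58%
```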