Article

Assessment of Dual-Polarization Sentinel-1 SAR Data for Improved Wildfire Burned Area Mapping: A Case Study of the Palisades Region, USA

Department of Geoinformatics Z_GIS, University of Salzburg, 5020 Salzburg, Austria
*
Author to whom correspondence should be addressed.
Geomatics 2026, 6(2), 28; https://doi.org/10.3390/geomatics6020028
Submission received: 4 February 2026 / Revised: 15 March 2026 / Accepted: 17 March 2026 / Published: 19 March 2026

Abstract

Wildfires have become more frequent and intense worldwide due to climate change and anthropogenic activities, making accurate and timely burned area mapping essential for estimating damage and for effective post-fire recovery planning. Synthetic Aperture Radar (SAR) data, acquired under all weather conditions and day-night cycles, offer a reliable source for burned area mapping. In this context, several studies have explored the use of dual-polarization SAR imagery and machine learning, yet the influence of multi-date, dual-orbit-pass data and texture features has remained underexplored. This study therefore assesses Sentinel-1 acquisition configurations varying in temporal depth and orbital direction for wildfire burned area mapping, using the recent Palisades wildfire event as a study area. A comparative study was conducted across different scenarios to evaluate the effectiveness of single-date versus multi-date SAR imagery, the integration of ascending and descending orbit passes, and the contribution of Grey-Level Co-occurrence Matrix texture features. The performance of Random Forest (RF) and Extreme Gradient Boosting classifiers was analyzed across these scenarios. The single-date configuration using RF achieved an accuracy of 82.34%, F1-score of 81.43%, precision of 83.07%, recall of 80.84%, and ROC-AUC of 90.88%, whereas the multi-date approach reached 85.78%, 85.15%, 86.45%, 84.56%, and 93.28%, respectively. Our study highlights the importance of acquisition configuration and texture information for reliable SAR-based wildfire burned area assessment.

1. Introduction

According to the Integrated Fire Management Voluntary Guidelines published by the Food and Agriculture Organization (FAO), wildfires are defined as uncontrolled fires ignited naturally, intentionally, or accidentally, which result in significant biomass burning and adversely affect socio-economic and environmental aspects of the affected landscape [1]. Published statistics [2] show a substantial increase in both the frequency and severity of wildfires worldwide, while the World Resources Institute reported that 2024 was an extreme year for wildfires, with 13.5 million hectares of forest burned globally [3]. Research has also found that between 2001 and 2024, wildfires destroyed nearly 152 million hectares of global tree cover [4]. The damage caused by wildfires extends beyond forest loss; they have ramifications for natural habitats, public health, infrastructure, and the built environment [5]. For instance, more than 1.5 million deaths globally have been attributed to illnesses linked to wildfire-related pollution [6]. Furthermore, studies have revealed that wildfires contribute significantly to global carbon emissions; the 2023 Canadian forest fires, for example, released approximately 647 TgC of carbon into the atmosphere [7]. Similarly, a study conducted in China reported total CO2 emissions of 693 Tg, with an annual average of 31.5 Tg attributed to wildfires [8]. In addition, these fire events degrade thousands of hectares of fertile soil, disrupting several important soil properties [9]. Given these negative impacts, mapping the extent of burned areas efficiently and accurately is critical for coordinating emergency response efforts, estimating damage, and supporting post-fire recovery planning [10].
In established practice, satellite-based optical imagery has constituted the primary tool for mapping wildfire-burned areas [11,12,13,14,15]. Nevertheless, its usability is limited during fire events by thick smoke and cloud cover, which often obstruct the view [16,17]. To address these challenges, Earth observation studies [16,18,19,20] have exploited datasets acquired by Synthetic Aperture Radar (SAR) sensors for mapping and monitoring such events. The underlying physical principle is that fire diminishes the amount of vegetation capable of reflecting emitted microwaves back to the sensor, producing a detectable change in SAR backscatter: a noticeable decrease in the backscatter coefficient over burned areas compared to unburned areas [21]. Still, non-fire-related factors such as variations in soil and vegetation moisture content, flooding, or landslides can influence the radar signal reflected to the SAR sensor in a similar way, making it challenging to reliably identify burned areas based solely on the backscatter coefficient [22,23,24]. This has led to the increased use of machine learning techniques, which, unlike traditional thresholding-based methods relying solely on predefined values [25], can learn complex patterns and relationships, allowing them to distinguish fire-induced changes from other factors affecting backscatter. In this way, the accuracy and reliability of burned area detection using SAR imagery has been improved.
In order to overcome the limitations of individual sensors, Lestari et al. [26] compared burned area mapping using optical data, SAR data, and their combination with Random Forest (RF), Multi-Layer Perceptron, and Convolutional Neural Network classifiers in Central Kalimantan, Indonesia. Using a single pre-fire and post-fire C-band SAR image with features such as spectral indices, polarization difference, and ratios, they found that optical data gives the best result in cloud-free areas, whereas fusing SAR and optical data enhances classification accuracy under cloudy conditions. However, their reliance on single-date SAR acquisitions limits the assessment of multi-temporal effects. Building on this multi-sensor approach, De Luca et al. [27] achieved high-accuracy burned area maps by combining multitemporal Sentinel-1 SAR and Sentinel-2 optical images with an RF classifier, demonstrating the benefits of temporal composites while highlighting the need for careful feature preparation and data integration.
The effectiveness of SAR-based burned area mapping depends heavily on the selection and computation of appropriate radar indices and features. For that, Hosseini and Lim [28] conducted a study in Kangaroo Island, Australia, using Sentinel-1 imagery with an RF classifier, selecting three SAR images before the event and four images after for analysis. They computed indices such as Radar Burn Difference (RBD), Radar Burn Ratio (RBR), and Delta Radar Vegetation Index (ΔRVI) as features and obtained precision, accuracy, and kappa index values of 94%, 94%, and 0.87, respectively. Even though their analysis outperformed the MODIS MCD64 products, it was limited to single-orbit direction and did not assess texture features. Addressing the latter, the integration of radar backscattering intensity and texture characteristics of Sentinel-1 imagery for an RF-based model was investigated in [29], achieving a high accuracy of 87.12% with strong agreement with Sentinel-2 reference data. Yet, their primary focus remained on intensity and texture integration without evaluating multi-temporal acquisition strategies or orbit configuration effects. Similarly, De Luca et al. [30] developed a workflow for unsupervised (K-means clustering) burned area detection using Sentinel-1 data in two Mediterranean forest sites (Portugal and Italy), considering radar indices (RBD, LogRBR, ΔRVI, ΔDPSVI) and Grey-Level Co-occurrence Matrix (GLCM) texture features. This approach achieved acceptable performance with F-score values of ~0.80 for Portugal and ~0.85 for Italy when validated against Sentinel-2 ΔNBR reference maps, confirming the suitability of SAR data for burned area mapping in heterogeneous landscapes.
Addressing the performance of SAR-derived mapped areas under challenging environmental conditions, the authors in [20] used VV and VH channels of Sentinel-1 SAR data (dVV and dVH) over the Brazilian Pantanal and compared them with optical Differenced Normalized Burn Ratio (dNBR) imagery using the RF algorithm. The study found that while SAR slightly overestimated the burned areas compared to dNBR, it showed strong spatial consistency with the optical data. Importantly, the SAR products correctly differentiated water bodies, which were sometimes misclassified as burned areas in the dNBR imagery, demonstrating the potential of SAR for reliable burned area mapping under challenging conditions. However, this study was limited to only two SAR features, which may not capture the full variability of burned areas across different vegetation types and landscape conditions. In [31], Sentinel-1 imagery was also used to map burned areas from forest fires in Penajam Paser Utara, Indonesia, demonstrating promising accuracy of 85.58% (using RBR), 80.47% (using RBD), and 86.05% (combined), signifying the capability of SAR imagery when optical imagery is limited by clouds and smoke. In a comparative assessment, Abujayyab [32] evaluated SAR- and optical-derived indices for wildfire mapping in Mediterranean forests in Turkey, finding that SAR achieved slightly lower accuracy (69.2%) than optical (97.4%) indices but provided perfect precision (1.0), effectively avoiding false positives and demonstrating once again its reliability in cloud-prone regions.
Another aspect to consider in machine-learning-based methodologies is that performance is influenced by how features are engineered and selected, as well as by the choice of classification algorithm. Chen et al. [33] conducted a comparative assessment of classifiers using combinations of spectral, vegetation index, and radar backscatter features for forest fire area mapping, demonstrating that classification performance varies significantly with feature selection and algorithm choice. In [34], the authors applied multiple machine learning models (LightGBM, RF and U-Net) to detect burned areas using Sentinel-2 imagery and found that accuracy varied with different combinations of input variables, highlighting the influence of feature selection on burned area detection performance.
All the above-mentioned studies have shown that machine learning models have strong potential to map burned areas, yet a main research gap remains in evaluating how different SAR acquisition strategies, feature configurations, and input variables influence classification accuracy. In particular, the combined effects of single-date versus multi-date acquisitions, orbit directions, and texture-based features have not been fully assessed, limiting the development of optimized and operationally feasible workflows. Identifying the most informative and operationally efficient features and classifiers is essential to support more scalable and feasible wildfire mapping frameworks in real-world settings. While deep learning approaches have shown superior performance, they require extensive training data, high computational resources, and careful tuning, which limit their practicality in time-critical or data-scarce scenarios. In contrast, machine learning classifiers, such as Random Forest and Extreme Gradient Boosting (XGBoost), remain particularly valuable for operational wildfire mapping tasks due to their robustness, efficiency, and lower data dependency [35].
As a consequence, this study assesses the potential of dual-polarization Sentinel-1 SAR data for improved wildfire burned area mapping, with a systematic evaluation of different acquisition and feature configurations to optimize classification outcomes efficiently. The Palisades wildfire in Los Angeles, USA, has been selected as a case study, with Sentinel-1A images acquired before and after the fire event.
From a high-level perspective, our study offers insights to support the design of scalable, SAR-based burned area mapping frameworks that can support timely and reliable wildfire monitoring and impact assessment in operational environments. Hence, our main contributions can be summarized as follows:
  • Demonstrating the impact of single-date versus multi-date Sentinel-1 SAR acquisitions on burned area mapping accuracy and robustness.
  • Quantifying the added value of combining ascending and descending orbit images for improved classification performance.
  • Assessing the contribution of Grey-Level Co-occurrence Matrix texture features in enhancing SAR-based burned area detection.
  • A comparative performance analysis of Random Forest and XGBoost classifiers to identify an operationally feasible burned area mapping workflow.
The rest of the paper comprises four sections. The study area description, data used, and methodological approach are presented in Section 2. Section 3 and Section 4 are dedicated to results and discussion, respectively. Finally, conclusions, along with future perspectives, are outlined in Section 5.

2. Methods

2.1. Study Area

The Palisades region, as shown in Figure 1, located in Southern California in the USA, was selected as the study area. It has been highly susceptible to wildfires due to its steep terrain, chaparral vegetation, dry Mediterranean climate, and frequent strong winds. These factors collectively create a fire-prone landscape, making the region particularly vulnerable during the dry season. A major wildfire event occurred between 7 and 11 January 2025, marking one of the most destructive wildfires in California’s history [36]. The fire rapidly spread across rugged terrain, fueled by dense vegetation and driven by high wind speeds, causing extensive ecological and infrastructural damage [36]. This event was selected because of its recent occurrence and its impact on several land covers, including wild lands and urban areas, making it a relevant case for analyzing wildfire impacts where natural and built environments intersect.

2.2. Data Used

Table 1 shows the detailed specifications of the data used in this study. Six dual-polarized SAR images acquired by Sentinel-1 on the dates 9 December 2024, 21 December 2024, and 2 January 2025 (before the event), as well as the dates 14 January 2025, 26 January 2025, and 7 February 2025 (after the event), were selected for download from the Alaska Satellite Facility Data search portal. Both ascending and descending passes were considered to account for geometric variability in the analysis. This study utilized the IW (Interferometric Wide Swath) and GRD (Ground Range Detected) products, which contain only the amplitude (and intensity) information of the complex-valued SAR signal. For reference map generation purposes, Sentinel-2 Level 2A analysis-ready images, acquired on 2 January 2025 (before the event) and 12 January 2025 (after the event), have been downloaded from the Copernicus browser.

2.3. Methodological Approach

The methodology adopted in this study is shown in Figure 2.
The workflow consists of several steps, beginning with pre-processing of the Sentinel-1 images to determine the samples and labels, followed by feature extraction for all considered schemes. The main steps then encompass the training, parameter tuning, and testing of both selected classifier models (RF and XGBoost). The final step includes the evaluation and assessment of the models’ performance and, eventually, the inference of burned area maps.

2.3.1. Data Preparation

Properly prepared data ensures that the input to the model is clean, consistent, and representative of the phenomena under analysis. Datasets were prepared through two main phases, which are discussed below:
(a)
Pre-processing the sample space data, which reduces noise and corrects for sensor and/or geometric distortions, allows the model to learn from relevant patterns rather than artifacts. For this purpose, the ESA Sentinel Application Platform (SNAP) version 11 was used to preprocess the Sentinel-1 image datasets, following the standard workflow described by [37] and schematized in Figure 3. Ascending and descending image sets were processed separately for each timestamp. In the first step, a subset of the area of interest was extracted, followed by the orbit file application (default settings, i.e., Sentinel Precise orbit state vectors and a polynomial degree of 3) to achieve sub-meter geolocation accuracy by incorporating updated satellite position and velocity information, which is essential for multi-temporal analysis and terrain correction. In the next step, thermal noise removal (with default parameters) was implemented to reduce noise effects in the inter-sub-swath texture, normalizing the backscatter signal within the entire Sentinel-1 scene and reducing discontinuities between sub-swaths for scenes in multi-swath acquisition modes. Subsequently, border noise removal (with default parameters) was applied to remove low-intensity noise and invalid data at the scene edges. Following this, radiometric calibration was executed to convert digital pixel values into sigma nought backscatter values, which is essential for quantitative SAR analysis and comparability across different sensors or orbits. All six images were then co-registered using the co-registration tool, with the standard bilinear interpolation method for smooth resampling, the product geolocation method for precise initial offset estimation, and the minimum output extent to ensure consistent spatial coverage across the time series.
Then, single-product speckle filtering was applied using the Lee sigma filter with a 7 × 7 analysis window and a 3 × 3 output window, which are the default settings in widely used SAR processing toolkits and provide a practical compromise between speckle reduction and preservation of spatial detail in heterogeneous areas. A larger window enhances noise reduction but may induce excessive smoothing, obscuring important features, whereas a 7 × 7 window offers a compromise suitable for heterogeneous areas. The 3 × 3 output window ensures that the localized statistics used in filtering are centered around each pixel, enabling fine-grained noise reduction while preserving edge information and structural details in the images. After speckle filtering, Range-Doppler Terrain Correction was performed using the SRTM 1Sec HGT Digital Elevation Model, with the default coordinate reference system (WGS 84) and pixel spacing, and without masking out areas with missing elevation, to derive precise geolocation information by correcting the distortion caused by the side-looking geometry. SRTM 1Sec HGT was selected as the DEM for this relatively small study area because it provides good spatial (30 m) and vertical resolution for accurate terrain correction, offering a practical balance between geolocation accuracy and computational efficiency. GLCM texture properties were also computed for both VV and VH polarizations using a 5 × 5 window and a probabilistic quantizer set to 8 quantization levels, a choice that balances capturing local heterogeneity and preserving important texture patterns without over-smoothing, while minimizing noise and computational complexity. This yields 20 layers in total: 10 texture properties for each of the two polarizations. The final products (backscatter and GLCM) were exported in GeoTIFF format.
As a final step, these layers were resampled to 10 m and reprojected to UTM zone 11N (EPSG:32611) using QGIS to maintain spatial accuracy and consistency for regional analysis.
(b)
Pre-processing the label space data was undertaken in order to generate a reference map. For that, the Differenced Normalized Burn Ratio was calculated according to the following:
dNBR = NBR_prefire − NBR_postfire  (1)
where NBR_prefire and NBR_postfire denote, respectively, the Normalized Burn Ratio for the pre-fire and post-fire acquisitions, computed from the near-infrared (NIR) and shortwave infrared (SWIR) bands of Sentinel-2 with the following equation:
NBR = (NIR − SWIR) / (NIR + SWIR)  (2)
The NIR (Band 8) and SWIR (Band 12) bands have different spatial resolutions (10 m and 20 m, respectively), requiring the SWIR band to be resampled to 10 m using bilinear interpolation. The downloaded dataset was already provided in the projected coordinate system (EPSG:32611), requiring no further reprojection. dNBR was selected to generate the reference map, as it is a widely used index for burn severity mapping [15,38,39,40], while authoritative burned-area products such as MODIS MCD64A1 have much coarser spatial resolution, which is not compatible with our 10 m analysis scale. For this reason, cloud-free Sentinel-2 imagery was used to generate proxy reference maps.
The Otsu thresholding method (via skimage.filters.threshold_otsu from the scikit-image library, version 0.25.5, with Python version 3.12) was applied to separate burned from unburned areas; it provides a fully reproducible threshold selection process, avoiding the subjectivity of manual annotation [15]. The obtained threshold value was 0.17, which can be cross-verified against the histogram displayed in Figure 4a. The histogram exhibits a bimodal distribution, characterized by two prominent peaks, suggesting the presence of two dominant classes (burned and non-burned), each with distinct statistical properties. The separation between the peaks reflects the contrast between these two classes, leading to the binary reference map of the area shown in Figure 4b (0 for the non-burned area and 1 for the burned area).

2.3.2. Feature Extraction

Feature extraction highlights meaningful information in the pre-processed data, improving the model’s ability to distinguish between the two classes. In this study, a total of eight schemes were designed, of which the first four are based on single-date pre-fire and post-fire images, while the remaining four are based on the average of multi-date pre-fire and post-fire backscatter values. The extracted features for the different schemes, along with the corresponding information (image acquisition and pass mode) used for training the models, are laid out in Table 2. The corresponding formulas for the different features are shown in Table 3. In addition, the average VV and VH backscatter values of three images were computed separately for the pre-event and post-event periods from the multi-date stacked images, for both ascending and descending pass datasets.
P_event^composite = (P_event,1 + P_event,2 + P_event,3) / 3  (3)
where P accounts for VV and VH polarization bands or GLCM properties, while event denotes pre-event or post-event.
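The composite above amounts to a per-pixel mean over three co-registered acquisitions; a minimal numpy sketch (the function name is illustrative):

```python
import numpy as np

def temporal_composite(p1, p2, p3):
    """Per-pixel mean of three co-registered backscatter or GLCM layers."""
    return (p1 + p2 + p3) / 3.0

# e.g. three pre-event VH backscatter layers (constant toy values)
layers = [np.full((4, 4), v) for v in (0.1, 0.2, 0.3)]
composite = temporal_composite(*layers)
```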
The indices presented in Table 3 were selected based on their contribution to improving burned area detection.
(a)
Radar Burn Ratio, defined as the ratio between post-fire average backscatter and pre-fire average backscatter values [41], is a simple yet effective index. It is calculated for VV and VH polarizations separately, as shown in Equations (4) and (5), respectively. The rationale behind using RBR is that vegetation and surface structures typically exhibit changes in radar backscatter after being burned. A decrease in VH backscatter is often observed due to the loss of volume scattering from vegetation, leading to a significant change in the RBR values as an indication of potentially burned areas.
(b)
Radar Burn Difference is a change detection index that quantifies the absolute difference in radar backscatter before and after a fire event [41]. It is calculated for VV and VH polarizations separately, as indicated by Equations (6) and (7), respectively. A negative RBD value, especially in VH polarization, typically reflects a loss in vegetation structure and volume scattering due to fire. In contrast, positive values may occur in areas with increased surface roughness or residual moisture changes post-fire.
(c)
Delta Radar Vegetation Index (ΔRVI), which is a dual-polarimetric index, is calculated as the difference in RVI before and after the fire event according to Equation (9). Beforehand, RVI is computed separately for post-fire and pre-fire using both VV and VH polarization, as expressed by Equation (8). As a SAR-derived metric, ΔRVI captures the structural and volumetric scattering properties of vegetation. It is particularly effective for monitoring vegetation cover and changes in biomass, making it suitable for post-fire assessments where vegetation structure is altered. Higher RVI values are generally associated with dense vegetation due to increased volume scattering, while lower or negative values indicate sparser or bare surfaces.
(d)
GLCM texture properties were computed for both VV and VH polarization channels in order to capture spatial variation and structural patterns associated with burned areas. For each image acquisition date (pre-event and post-event), ten GLCM texture properties were derived that include: Contrast, Dissimilarity, Homogeneity, ASM, Energy, MAX, Entropy, GLCM Mean, GLCM Variance, and GLCM Correlation. For that, SNAP software version 12 was used, and the results were exported. The change in each GLCM texture property noted as ΔGLCM before and after the fire was also obtained using Equation (11). The calculation resulted in a total of 20 ΔGLCM features (10 for VV and 10 for VH) per pixel, capturing post-fire changes in textural characteristics. With the aim of reducing dimensionality and eliminating redundancy and multicollinearity among the texture variables, Principal Component Analysis (PCA) was applied to the stacked 20-band ΔGLCM dataset. It captures the majority of the variance in an optimal layer, speeding up the calculation process [30]. PCA was performed on a pixel-wise basis across the full study area using the PCA method from the scikit-learn Python library version 1.7.0 (sklearn.decomposition.PCA), transforming the correlated texture features into a new set of orthogonal components ranked by explained variance. The first three principal components were retained and used as input features in schemes 4 and 8. For the single-date images, Principal Components 1, 2, and 3 explained 74.78%, 17.09%, and 4.96% of the variance, respectively, capturing a total of 96.83% of the variance. For the multi-date images, Principal Components 1, 2, and 3 explained 71.34%, 24.10%, and 2.76% of the variance, respectively, capturing a total of 98.21% of the variance. These results indicate that using three components effectively reduces dimensionality while retaining almost all of the original texture information for both single-date and multi-date datasets.
(e)
Combining ascending and descending features: Features generated for the ascending-orbit schemes (1, 5) and descending-orbit schemes (2, 6) were fused on a per-pixel basis by calculating their arithmetic mean (Equation (10)), as the datasets were already spatially aligned in the pre-processing step. This operation produced a single composite feature layer that integrates complementary scattering information from both orbit directions while maintaining the original spatial resolution.
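The index and fusion computations in (a)–(e) can be sketched as follows. The RVI formula shown assumes the common dual-pol form 4·VH/(VV + VH) (the authoritative formulas are those in Table 3), and `compress_dglcm` is an illustrative helper for the pixel-wise PCA step, not the authors' code:

```python
import numpy as np
from sklearn.decomposition import PCA

def rbr(post, pre):
    """Radar Burn Ratio: post-fire over pre-fire backscatter (per polarization)."""
    return post / pre

def rbd(post, pre):
    """Radar Burn Difference: post-fire minus pre-fire backscatter."""
    return post - pre

def rvi(vv, vh):
    """Dual-pol Radar Vegetation Index (assumed common 4*VH/(VV+VH) form)."""
    return 4.0 * vh / (vv + vh)

def orbit_mean(asc, desc):
    """Per-pixel arithmetic mean fusing ascending and descending features."""
    return 0.5 * (asc + desc)

def compress_dglcm(dglcm_stack, n_components=3):
    """Pixel-wise PCA on a (bands, H, W) delta-GLCM stack.

    Returns (H, W, n_components) component scores and the
    explained-variance ratios of the retained components."""
    n_bands, h, w = dglcm_stack.shape
    flat = dglcm_stack.reshape(n_bands, -1).T     # (pixels, bands)
    pca = PCA(n_components=n_components)
    scores = pca.fit_transform(flat)              # orthogonal components
    return scores.reshape(h, w, n_components), pca.explained_variance_ratio_
```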

2.3.3. Model Training and Evaluation Setup

After data preparation and feature extraction, the samples and label data spaces were well defined and prepared for model training. An appropriate dataset split was applied in order to ensure fair and reliable evaluation of the model’s performance, avoiding overfitting, and ensuring generalizability to unseen data. The study area was divided into 100 × 100 pixel tiles. Training and testing tiles were selected manually to ensure representation of different land cover and burn conditions across the entire study area. To prevent spatial overlap and reduce spatial autocorrelation, each test tile was selected to be at least one grid cell away from any training tile, ensuring that no test tile directly borders or touches a training tile. As shown in Figure 5, green tiles represent the training set, while blue tiles indicate the test set. In total, 33 train tiles (330,000 training samples) and 15 test tiles (130,000 test samples) were used.
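The tiling and the one-cell buffer between training and test tiles can be expressed programmatically. Note that in this study the tiles were selected manually; `tile_ids` and `buffered` below are illustrative helpers for assigning tile membership and checking the adjacency constraint, not part of the original workflow:

```python
import numpy as np

def tile_ids(height, width, tile=100):
    """Assign every pixel of an (height x width) raster to a tile id
    on a regular (tile x tile) grid."""
    rows = np.arange(height) // tile
    cols = np.arange(width) // tile
    n_tile_cols = -(-width // tile)               # ceil(width / tile)
    return rows[:, None] * n_tile_cols + cols[None, :]

def buffered(test_tiles, train_tiles, n_tile_cols):
    """True if no test tile is 8-adjacent to (or equal to) a train tile,
    i.e. the one-grid-cell buffer described in the text holds."""
    train_rc = {(t // n_tile_cols, t % n_tile_cols) for t in train_tiles}
    for t in test_tiles:
        r, c = t // n_tile_cols, t % n_tile_cols
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                if (r + dr, c + dc) in train_rc:
                    return False
    return True
```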
The next step after train and test tile selection consists of tuning hyperparameters, which are important in machine learning for achieving higher accuracy [42]. Automated hyperparameter optimization techniques can efficiently tune machine learning models and are particularly effective when a large number of parameters needs to be tuned [43]. In our study, hyperparameter optimization was conducted using the Optuna optimization framework, which efficiently searches for optimal model settings by exploring the hyperparameter space with adaptive algorithms and pruning unpromising trials. This approach supports dynamic search spaces and advanced optimization strategies, improving efficiency and reducing computational cost compared to grid or random search methods.
For the Random Forest classifier, the following hyperparameters were optimized: maximum tree depth (8–20), minimum samples per leaf (1–6), minimum samples required for splitting (2–20), maximum feature fraction per split (0.5–0.9), and class weighting strategy (balanced, balanced_subsample). Note that the number of trees was fixed at 100 to maintain consistent computational cost. Similarly, for the XGBoost classifier, the optimized hyperparameters included maximum tree depth (5–20), learning rate (0.01–0.1, log-uniform), subsample ratio (0.6–1.0), column sampling ratio per tree (0.6–1.0), minimum child weight (1–10), minimum loss reduction parameter gamma (0–5), and L1 and L2 regularization coefficients (0–5). The number of boosting iterations was set to 1000, with early stopping applied if validation performance did not improve for 10 consecutive rounds.
As the input variables were different for each scheme, the tuning was done separately with a total of 10 optimization trials conducted for each model. GroupKFold-based spatial cross-validation with 3 splits was implemented, ensuring that samples from the same spatial tile were not shared between training and validation folds. Model performance was assessed using the mean macro F1-score across validation folds. The hyperparameter configuration yielding the highest mean validation F1-score was selected as the optimal setting. This procedure ensured that each model was optimally tuned for its specific input data while minimizing spatial overfitting and information leakage.
For training, the XGBoost classifier from its dedicated Python library and the RF classifier from the scikit-learn library were used. RF, a supervised algorithm developed by Leo Breiman and Adele Cutler [26], consists of multiple independent classifiers (decision trees) whose individual results are combined into a single output. It is efficient when dealing with large datasets and also mitigates overfitting by managing the noise within the data [44]. XGBoost, introduced by Chen and Guestrin [45], advances Gradient Boosting through a regularization method, loss function, and gain function, reducing overfitting while improving generalization and performance.
To assess the models’ performance, evaluation metrics were calculated using the scikit-learn library, namely accuracy, precision, recall, F1-score, and the area under the Receiver Operating Characteristic curve (AUC-ROC). Precision, recall, and F1-score were computed using the macro-averaging method, where each metric is first calculated for each class independently and then averaged without weighting by class frequency. Accuracy measures the overall correctness of the model as the proportion of correct predictions among all predictions. Precision quantifies the proportion of true positive predictions among all positive predictions, indicating the model’s ability to avoid false positives. Recall, also referred to as sensitivity, represents the proportion of true positive predictions among all actual positive samples, reflecting the model’s capability to identify relevant instances. The F1-score, the harmonic mean of precision and recall, provides a balanced measure, especially useful in scenarios with class imbalance. Finally, the ROC-AUC evaluates the model’s ability to distinguish between classes across all classification thresholds, offering insight into its overall discriminatory power.
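The macro-averaged metrics can be computed with scikit-learn as follows, using a small hand-made prediction set purely for illustration:

```python
from sklearn.metrics import (accuracy_score, f1_score, precision_score,
                             recall_score, roc_auc_score)

# Toy binary labels: 1 = burned, 0 = non-burned
y_true = [0, 0, 1, 1, 1, 0, 1, 0]
y_pred = [0, 1, 1, 1, 0, 0, 1, 0]
y_prob = [0.1, 0.6, 0.9, 0.8, 0.4, 0.2, 0.7, 0.3]  # predicted P(burned)

acc = accuracy_score(y_true, y_pred)
prec = precision_score(y_true, y_pred, average="macro")  # per-class, unweighted
rec = recall_score(y_true, y_pred, average="macro")
f1 = f1_score(y_true, y_pred, average="macro")
auc = roc_auc_score(y_true, y_prob)  # threshold-free discriminatory power
```

Note that ROC-AUC is computed from the predicted class probabilities rather than the hard labels, which is what makes it independent of any single classification threshold.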

3. Results

Our analysis comprises three parts. The first is a qualitative evaluation of the generated classified maps, mainly those displayed in Figure 6. The binary outcome for burned and non-burned pixels in Figure 6a was obtained using a single-date image (schemes 1, 2, 3, and 4) for both classifiers. The maps in the first and second rows, representing the first two schemes, show a higher density of isolated points classified as burnt outside the actual burnt region. In contrast, the maps generated by schemes 3 and 4 in the third and fourth rows show far fewer such points beyond the affected zone. This implies that integrating images from both ascending and descending passes leads to more accurate burned area mapping. Similar outcomes can be observed from visual inspection of Figure 6b, which shows the maps generated using a multi-date image composite (schemes 5, 6, 7, and 8) for both classifiers. Outliers are more noticeable in schemes 5 and 6 than in schemes 7 and 8, further indicating that combining ascending and descending pass images enhances the accuracy.
The second part of our analysis validates these observations quantitatively. Table 4 and Table 5 present the evaluation metrics for the XGBoost and RF classifiers, respectively, comparing single-date and multi-date schemes across the performance indicators described in the previous section. The evaluation of the XGBoost classifier across the two configurations reveals performance trends consistent with our initial insight. For the single-date schemes, schemes 1 (ascending) and 2 (descending) achieve test accuracies of 0.7661 and 0.7771, with macro F1-scores of 0.7434 and 0.7568, respectively. This indicates that using only one acquisition pass limits classification effectiveness, although descending passes slightly outperform ascending passes. When both ascending and descending passes are combined in scheme 3, test accuracy improves to 0.8163 and macro F1-score to 0.8045, demonstrating that integrating both orbital directions enhances the model’s ability to correctly identify burned areas. Incorporating GLCM texture features in scheme 4 further increases test accuracy to 0.8214 (+0.0051) and macro F1-score to 0.8102 (+0.0057) compared to scheme 3, highlighting the benefit of texture-based information.
A comparable pattern is observed in the multi-date configurations, where schemes 5 (ascending) and 6 (descending) reach test accuracies of 0.8033 and 0.8017, with macro F1-scores of 0.7875 and 0.7854, respectively. Scheme 7, which combines both passes, further improves results, achieving a test accuracy of 0.8383 and an F1-score of 0.8272. By adding texture features, scheme 8 produces the best performance, with a test accuracy of 0.8509 (+0.0126) and F1-score of 0.8418 (+0.0146), a further gain over scheme 7. Overall, these metrics indicate a clear improvement in classification performance, with the model exhibiting a stronger ability to accurately identify burned areas while minimizing misclassifications.
In a similar manner, the Random Forest classifier shows improved performance when shifting from single-date to multi-date configurations and when incorporating texture features. For the single-date schemes, schemes 1 (ascending) and 2 (descending) achieve test accuracies of 0.7814 and 0.7913, with macro F1-scores of 0.7686 and 0.7776, respectively. These results indicate that using only a single pass provides moderate classification performance, with descending passes again slightly outperforming ascending passes. Combining both passes in scheme 3 improves the test accuracy to 0.8210 and the macro F1-score to 0.8124, demonstrating once again that integrating both passes enhances the model’s ability to detect burned areas. Adding GLCM texture features in scheme 4 further increases test accuracy to 0.8234 (+0.0024) and macro F1-score to 0.8143 (+0.0019) compared to scheme 3, confirming the positive contribution of texture information to classification performance.
A similar pattern is observed for the multi-date schemes, where schemes 5 and 6 achieve test accuracies of 0.8183 and 0.8289, with burned-area F1-scores of 0.8086 and 0.8209, respectively. Combining passes in scheme 7 increases test accuracy to 0.8514 and F1-score to 0.8443, while adding texture in scheme 8 further improves performance to 0.8578 (+0.0064) for accuracy and 0.8515 (+0.0072) for F1-score. These trends indicate that temporal depth, data from both orbital passes, and texture features contribute to improved burned area detection, supporting the robustness of the Random Forest classifier for wildfire burned area mapping.
Combining ascending and descending orbital information in single-date schemes (schemes 3 and 4) resulted in a higher score than the multi-date schemes (schemes 5 (ascending only) and 6 (descending only)) for XGBoost, though not for Random Forest. This particular finding highlights the importance of considering orbital configurations alongside temporal sampling when designing feature schemes. An additional investigation consisting of calculating the detailed confusion matrices for all experimental schemes was conducted, and their respective outcomes are provided in Table A1 in Appendix A. The table presents the classification results of Random Forest and XGBoost, reporting the number of correctly and incorrectly classified burnt and unburnt samples for each scheme.
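Per-scheme confusion matrices like those in Table A1 can be obtained with scikit-learn's confusion_matrix; the predictions below are hypothetical, with unburnt = 0 and burnt = 1:

```python
import numpy as np
from sklearn.metrics import confusion_matrix

# Hypothetical ground truth and predictions for ten test pixels.
y_true = np.array([0, 0, 1, 1, 1, 0, 1, 0, 1, 0])
y_pred = np.array([0, 1, 1, 1, 0, 0, 1, 0, 1, 0])

# Rows are true classes, columns predicted classes (same layout as Table A1).
cm = confusion_matrix(y_true, y_pred, labels=[0, 1])
tn, fp, fn, tp = cm.ravel()  # true neg, false pos, false neg, true pos
print(cm)  # [[4 1]
           #  [1 4]]
```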
Taken together, the quantitative findings confirm our visual interpretation: incorporating multi-temporal data provides richer contextual information and substantially improves the model’s robustness and reliability in wildfire burned area mapping.
The final analysis focuses on evaluating the contribution of individual features to the RF model’s performance. Feature importance was derived using the mean decrease in impurity (MDI), which measures the average reduction in node impurity (Gini index) contributed by each feature across all trees in the forest. The outcomes for scheme 4 (single-date) and scheme 8 (multi-date) are shown in Figure 7a and Figure 7b, respectively. In the single-date scheme, the RBR derived from VH polarization dominates, alone contributing nearly 0.4 of the total importance, reflecting the cross-polarization channel’s high sensitivity to vegetation structure loss caused by fire. The second most influential feature, RBD from VH, adds around 0.3, while all remaining features each contribute less than 0.15, indicating a narrow but effective input set. In contrast, the multi-date scheme results in a more evenly distributed feature contribution, where ΔRVI becomes the most important feature (~0.40), capturing vegetation change over time. RBR from the VH channel remains significant (~0.30), and RBD from VV gains importance (~0.20), showing that temporal dynamics and multi-polarization inputs enrich the model’s understanding. These results underline once again the value of multi-temporal data in reducing over-reliance on individual features, enhancing the classifier’s robustness and accuracy in wildfire burned area mapping.
A complementary feature importance analysis was conducted for the XGBoost classifier, where importance was quantified by gain, i.e., the average improvement in model performance contributed by each feature across all boosting iterations. The feature importance obtained for scheme 4 (single-date) and scheme 8 (multi-date) is shown in Figure 7c and Figure 7d, respectively. For the single-date scheme, the order of feature importance matches that of the RF, but the model’s predictions rely heavily on RBR_VH (~0.70); for the multi-date scheme, the two most influential features identified by XGBoost are the same as those highlighted by the RF model. It is worth noting that the three principal components derived from GLCM textures exhibit low importance in almost all cases.
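Both importance measures can be extracted as sketched below; the data are synthetic, the feature names merely mirror the paper's inputs, and scikit-learn's GradientBoostingClassifier stands in for XGBoost (whose gain-based ranking would come from booster.get_score(importance_type="gain")):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier

# Hypothetical feature names mirroring the paper's single-date inputs.
names = ["RBR_VH", "RBR_VV", "RBD_VH", "RBD_VV", "dRVI"]
X, y = make_classification(n_samples=1000, n_features=5, n_informative=3,
                           random_state=0)

# Mean decrease in impurity (MDI): average Gini reduction per feature
# across all trees; values sum to 1.
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
mdi = dict(zip(names, rf.feature_importances_))

# Gain-style importance from a boosted-tree stand-in for XGBoost.
gb = GradientBoostingClassifier(random_state=0).fit(X, y)
gain_like = dict(zip(names, gb.feature_importances_))

print(sorted(mdi, key=mdi.get, reverse=True))  # features ranked by MDI
```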

4. Discussion

Regarding classification performance, both the RF and XGBoost algorithms achieved high overall accuracy, macro F1, and ROC-AUC, confirming their strong ability to discriminate burned and unburned classes. RF consistently showed slightly higher macro-average F1-scores than XGBoost (differences of ~1–2%), indicating a modest improvement in balanced class performance. Nevertheless, when considering ROC-AUC, the differences between the classifiers are even smaller (less than 1%), suggesting that both models have nearly equivalent discriminatory ability between burned and unburned classes. So, both classifiers perform comparably, and the observed differences do not represent a substantial improvement in predictive capability.
Both models demonstrated excellent performance for newly burned areas, but their performance was comparatively lower in built-up areas and regions that had burned previously. Figure 8 illustrates a false color composite in which the red, green, and blue channels are filled by B12, B8, and B4, representing the SWIR, NIR, and red spectral bands, respectively. The left side of the figure depicts the area before the wildfire event, while the right side shows the same area afterwards. The burnt-area maps generated by both classifiers align closely with the reference map for all schemes (see Figure 6 and Figure 7, in comparison with Figure 8), and the majority of burned areas falling within wildlands were detected accurately. Nonetheless, both algorithms underperformed in detecting burned areas in the south-eastern parts, highlighted by the yellow box in Figure 8. These areas contain sparse buildings that were affected by the fire without collapsing, which could explain the inability of RF and XGBoost to detect them properly, given the complex backscatter patterns of built-up environments. There, double-bounce scattering (from vertical and horizontal surfaces meeting at near-right angles) constitutes the dominant mechanism, and it tends to remain stable before and after a fire because the concrete buildings, walls, and paved ground in this area were not significantly altered by burning. As a result, SAR backscatter from double-bounce interactions appeared similar in the burned and unburned images: the fire affected only vegetation and surface material but left the structural geometry intact. This shortcoming is therefore attributable not to a malfunction of the models but, most likely, to the limited information content of the data itself.
Furthermore, it can be seen from Figure 8 that the region highlighted by the blue box already had a burned area before the event under study. Yet, the algorithms still classified these areas as newly burned areas, which is inaccurate. This misclassification is more visible in multi-date composite-based image classification (see Figure 6b) compared to single-date-based classification images (see Figure 6a). The underlying cause for this is related to the difficulty in distinguishing between old and new burn signatures in multi-date composites, especially when past burn scars maintain low backscatter values over time. Without proper temporal differentiation, the classifier may interpret these persistent signals as indicators of recent fire activity.

5. Conclusions

This study has evaluated the effectiveness of single-date and multi-date SAR-based classification schemes for wildfire burned area mapping using dual-polarization Sentinel-1 data and two ensemble classifiers: Random Forest and XGBoost. A comprehensive evaluation method that encompasses visual interpretation, quantitative accuracy metrics, and feature-level analysis was utilized to assess the influence of acquisition strategy and temporal information on classification performance. The results from our study have demonstrated that fusion of multi-temporal SAR data, particularly the combination of ascending and descending orbit passes, substantially enhanced burned area detection accuracy and model robustness.
Visual interpretation of the classified burned area maps showed that single-date schemes produced a higher density of isolated false positives outside the fire perimeter, whereas multi-date schemes achieved better spatial coherence, indicating that multi-date approaches better capture fire-induced surface changes. Quantitative evaluation validated this interpretation, with XGBoost test accuracy increasing from 0.7661–0.8214 in single-date schemes to a maximum of 0.8509 in our proposed scheme 8, with a corresponding F1-score of 0.8418. Random Forest achieved slightly better performance, with single-date test accuracies ranging from 0.7814 to 0.8234 and a strong performance in the proposed scheme 8, reaching a test accuracy of 0.8578 and an F1-score of 0.8515. Feature importance analysis further indicated that multi-temporal inputs promote a more balanced use of polarization features compared to single-date inputs.
While challenges remain in complex urban environments in differentiating recent burns from historical scars due to SAR’s inherent backscatter characteristics, the proposed approach offers a strong foundation for operational fire monitoring. A limitation of this study is the potential spatial correlation in the data splitting, which could slightly inflate performance metrics. To address this, future work may implement a block-based spatial splitting approach to reduce spatial dependency and provide a more robust evaluation of model performance. In addition, future research may investigate temporal differencing strategies, change-trajectory analysis, and time-series learning frameworks, including deep learning-based classifiers capable of modeling sequential SAR observations. Such approaches may provide a more explicit representation of temporal evolution and further enhance burned-area detection performance across diverse fire regimes and environmental conditions.

Author Contributions

R.T.: conceptualization, methodology, software, validation, formal analysis, writing—original draft, and visualization. K.H.-R.: supervision and writing—review and editing. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Raw Sentinel-1 and Sentinel-2 data are freely accessible from the Copernicus Open Access Hub (https://scihub.copernicus.eu/), and additional pre-processing steps are described in the Methods section. The dataset after the pre-processing steps using SNAP and QGIS software, used in this study, is publicly available from the Mendeley Data repository at https://doi.org/10.17632/d8r89ykgyd.1 [46], and the Machine Learning script is openly available at https://github.com/rabinatwayana/SAR-Burnt-Area-Mapping-ML (accessed on 16 March 2026).

Acknowledgments

This work was carried out while Rabina Twayana was supported by an Erasmus+ scholarship within the Copernicus Master in Digital Earth (CDE), an Erasmus Mundus Joint Master (EMJM) degree co-funded by the European Union (Grant 101128006, CDE Erasmus Mundus Joint Master (EMJM), 2023–2029).

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Table A1 summarizes the confusion matrices for the Random Forest and XGBoost models under eight different schemes. Each scheme reports the distribution of true positives, true negatives, false positives, and false negatives for the burnt and unburnt classes.
Table A1. Confusion matrix comparison of Random Forest and XGBoost.

| Scheme | True Class | RF: Pred. Unburnt | RF: Pred. Burnt | XGBoost: Pred. Unburnt | XGBoost: Pred. Burnt |
|---|---|---|---|---|---|
| #1 | Unburnt | 76,257 | 9681 | 79,745 | 6193 |
| #1 | Burnt | 23,104 | 40,958 | 28,899 | 35,163 |
| #2 | Unburnt | 77,948 | 7990 | 79,929 | 6009 |
| #2 | Burnt | 23,313 | 40,749 | 27,427 | 36,635 |
| #3 | Unburnt | 77,686 | 8252 | 79,654 | 6284 |
| #3 | Burnt | 18,592 | 45,470 | 21,276 | 42,786 |
| #4 | Unburnt | 78,311 | 7627 | 79,829 | 6109 |
| #4 | Burnt | 18,866 | 45,196 | 20,676 | 43,386 |
| #5 | Unburnt | 78,287 | 7651 | 80,674 | 5264 |
| #5 | Burnt | 19,604 | 44,458 | 24,244 | 39,818 |
| #6 | Unburnt | 78,056 | 7882 | 80,795 | 5143 |
| #6 | Burnt | 17,781 | 46,281 | 24,608 | 39,454 |
| #7 | Unburnt | 79,784 | 6154 | 81,939 | 3999 |
| #7 | Burnt | 16,143 | 47,919 | 20,253 | 43,809 |
| #8 | Unburnt | 79,861 | 6077 | 81,810 | 4128 |
| #8 | Burnt | 15,246 | 48,816 | 18,240 | 45,822 |

References

  1. Food and Agriculture Organization of the United Nations. Integrated Fire Management Voluntary Guidelines—Principles and Strategic Actions, 2nd ed.; Forestry Working Paper; Food and Agriculture Organization of the United Nations: Rome, Italy, 2024. [Google Scholar]
  2. Li, X.; Jin, H.; He, R.; Wang, H.; Sun, L.; Luo, D.; Huang, Y.; Li, Y.; Chang, X.; Wang, L.; et al. Impact of Wildfire on Soil Carbon and Nitrogen Storage and Vegetation Succession in the Nanweng’he National Natural Wetlands Reserve, Northeast China. CATENA 2023, 221, 106797. [Google Scholar] [CrossRef]
  3. MacCarthy, J.; Richter, J.; Tyukavina, S.; Harris, N. The Latest Data Confirms: Forest Fires Are Getting Worse. Available online: https://www.wri.org/insights/global-trends-forest-fires (accessed on 21 January 2026).
  4. Soontha, L.; Bhat, M.Y. Global Firestorm: Igniting Insights on Environmental and Socio-Economic Impacts for Future Research. Environ. Dev. 2026, 57, 101362. [Google Scholar] [CrossRef]
  5. Pirotti, F.; Adedipe, O.; Leblon, B. Sentinel-1 Response to Canopy Moisture in Mediterranean Forests before and after Fire Events. Remote Sens. 2023, 15, 823. [Google Scholar] [CrossRef]
  6. Monash University New Study: More than 1.5 Million Die Each Year from Wild/Bush Fire Pollution. Available online: https://www.monash.edu/medicine/news/latest/2024-articles/new-study-more-than-1.5-million-die-each-year-from-wildbush-fire-pollution (accessed on 21 January 2026).
  7. Byrne, B.; Liu, J.; Bowman, K.W.; Pascolini-Campbell, M.; Chatterjee, A.; Pandey, S.; Miyazaki, K.; van der Werf, G.R.; Wunch, D.; Wennberg, P.O.; et al. Carbon Emissions from the 2023 Canadian Wildfires. Nature 2024, 633, 835–839. [Google Scholar] [CrossRef]
  8. Gong, X.; Liu, Z.; Tian, J.; Wang, Q.; Li, G.; An, Z.; Han, Y. Global Carbon Emission Accounting: National-Level Assessment of Wildfire CO2 Emission—A Case Study of China. EGUsphere 2024, 2024, 1–23. [Google Scholar] [CrossRef]
  9. Salgado, L.; Alvarez, M.G.; Díaz, A.M.; Gallego, J.R.; Forján, R. Impact of Wildfire Recurrence on Soil Properties and Organic Carbon Fractions. J. Environ. Manag. 2024, 354, 120293. [Google Scholar] [CrossRef]
  10. Giglio, L.; Loboda, T.; Roy, D.P.; Quayle, B.; Justice, C.O. An Active-Fire Based Burned Area Mapping Algorithm for the MODIS Sensor. Remote Sens. Environ. 2009, 113, 408–420. [Google Scholar] [CrossRef]
  11. Laneve, G.; Di Fonzo, M.; Pampanoni, V.; Bueno Morles, R. Progress and Limitations in the Satellite-Based Estimate of Burnt Areas. Remote Sens. 2023, 16, 42. [Google Scholar] [CrossRef]
  12. Martins, V.S.; Roy, D.P.; Huang, H.; Boschetti, L.; Zhang, H.K.; Yan, L. Deep Learning High Resolution Burned Area Mapping by Transfer Learning from Landsat-8 to PlanetScope. Remote Sens. Environ. 2022, 280, 113203. [Google Scholar] [CrossRef]
  13. Roy, D.P.; Huang, H.; Boschetti, L.; Giglio, L.; Yan, L.; Zhang, H.H.; Li, Z. Landsat-8 and Sentinel-2 Burned Area Mapping—A Combined Sensor Multi-Temporal Change Detection Approach. Remote Sens. Environ. 2019, 231, 111254. [Google Scholar] [CrossRef]
  14. Jiao, L.; Bo, Y. Near Real-Time Mapping of Burned Area by Synergizing Multiple Satellites Remote-Sensing Data. GISci. Remote Sens. 2022, 59, 1956–1977. [Google Scholar] [CrossRef]
  15. Zhang, S.; Bai, M.; Wang, X.; Peng, X.; Chen, A.; Peng, P. Remote Sensing Technology for Rapid Extraction of Burned Areas and Ecosystem Environmental Assessment. PeerJ 2023, 11, e14557. [Google Scholar] [CrossRef] [PubMed]
  16. Ban, Y.; Zhang, P.; Nascetti, A.; Bevington, A.R.; Wulder, M.A. Near Real-Time Wildfire Progression Monitoring with Sentinel-1 SAR Time Series and Deep Learning. Sci. Rep. 2020, 10, 1322. [Google Scholar] [CrossRef] [PubMed]
  17. Philipp, M.B.; Levick, S.R. Exploring the Potential of C-Band SAR in Contributing to Burn Severity Mapping in Tropical Savanna. Remote Sens. 2019, 12, 49. [Google Scholar] [CrossRef]
  18. Engelbrecht, J.; Theron, A.; Vhengani, L.; Kemp, J. A Simple Normalized Difference Approach to Burnt Area Mapping Using Multi-Polarisation C-Band SAR. Remote Sens. 2017, 9, 764. [Google Scholar] [CrossRef]
  19. Shama, A.; Zhang, R.; Wang, T.; Liu, A.; Bao, X.; Lv, J.; Zhang, Y.; Liu, G. Forest Fire Progress Monitoring Using Dual-Polarisation Synthetic Aperture Radar (SAR) Images Combined with Multi-Scale Segmentation and Unsupervised Classification. Int. J. Wildland Fire 2024, 33, WF23124. [Google Scholar] [CrossRef]
  20. Marra, A.B.; Galo, M.D.L.B.T.; Sano, E.E. Contribution of SAR/Sentinel-1 Images in the Detection of Burnt Areas in the Natural Vegetation of the Brazilian Pantanal Biome. Bol. Ciênc. Geod. 2024, 30, e2024005. [Google Scholar] [CrossRef]
  21. Belenguer-Plomer, M.A.; Tanase, M.A.; Fernandez-Carrillo, A.; Chuvieco, E. Burned Area Detection and Mapping Using Sentinel-1 Backscatter Coefficient and Thermal Anomalies. Remote Sens. Environ. 2019, 233, 111345. [Google Scholar] [CrossRef]
  22. Belenguer-Plomer, M.Á.; Chuvieco, E.; Tanase, M.A. Sentinel-1 Based Algorithm to Detect Burned Areas. In Proceedings of the 11th EARSeL Forest Fires SIG, Chania, Greece, 25–27 September 2017. [Google Scholar]
  23. Hrysiewicz, A.; Holohan, E.P.; Donohue, S.; Cushnan, H. SAR and InSAR Data Linked to Soil Moisture Changes on a Temperate Raised Peatland Subjected to a Wildfire. Remote Sens. Environ. 2023, 291, 113516. [Google Scholar] [CrossRef]
  24. Lasko, K. Incorporating Sentinel-1 SAR Imagery with the MODIS MCD64A1 Burned Area Product to Improve Burn Date Estimates and Reduce Burn Date Uncertainty in Wildland Fire Mapping. Geocarto Int. 2021, 36, 340–360. [Google Scholar] [CrossRef]
  25. Suwanprasit, C. Shahnawaz Mapping Burned Areas in Thailand Using Sentinel-2 Imagery and OBIA Techniques. Sci. Rep. 2024, 14, 9609. [Google Scholar] [CrossRef]
  26. Lestari, A.I.; Rizkinia, M.; Sudiana, D. Evaluation of Combining Optical and SAR Imagery for Burned Area Mapping Using Machine Learning. In Proceedings of the 2021 IEEE 11th Annual Computing and Communication Workshop and Conference (CCWC), Las Vegas, NV, USA, 27–30 January 2021; IEEE: New York, NY, USA, 2021; pp. 0052–0059. [Google Scholar]
  27. De Luca, G.; Silva, J.M.N.; Modica, G. Regional-Scale Burned Area Mapping in Mediterranean Regions Based on the Multitemporal Composite Integration of Sentinel-1 and Sentinel-2 Data. GISci. Remote Sens. 2022, 59, 1678–1705. [Google Scholar] [CrossRef]
  28. Hosseini, M.; Lim, S. Burned Area Detection Using Sentinel-1 SAR Data: A Case Study of Kangaroo Island, South Australia. Appl. Geogr. 2023, 151, 102854. [Google Scholar] [CrossRef]
  29. Shama, A.; Zhang, R.; Zhan, R.; Wang, T.; Xie, L.; Bao, X.; Lv, J. A Burned Area Extracting Method Using Polarization and Texture Feature of Sentinel-1A Images. IEEE Geosci. Remote Sens. Lett. 2023, 20, 1–5. [Google Scholar] [CrossRef]
  30. De Luca, G.; Silva, J.M.N.; Modica, G. A Workflow Based on Sentinel-1 SAR Data and Open-Source Algorithms for Unsupervised Burned Area Detection in Mediterranean Ecosystems. GISci. Remote Sens. 2021, 58, 516–541. [Google Scholar] [CrossRef]
  31. Puspitaningrum, L.A.; Bioresita, F. Utilization of Sentinel-1 Imagery for Burnt Area Detection and Impact of Forest Fires in Penajam Paser Utara Regency 2019. IOP Conf. Ser. Earth Environ. Sci. 2025, 1551, 012002. [Google Scholar] [CrossRef]
  32. Abujayyab, S.K.M. A Comparative Assessment of Sentinel-1 SAR with Optical Indices for Cloud-Resilient Wildfire Mapping. Eur. J. For. Eng. 2025, 11, 95–105. [Google Scholar] [CrossRef]
  33. Chen, X.; Zhang, Y.; Wang, S.; Zhao, Z.; Liu, C.; Wen, J. Comparative Study of Machine Learning Methods for Mapping Forest Fire Areas Using Sentinel-1B and 2A Imagery. Front. Remote Sens. 2024, 5, 1446641. [Google Scholar] [CrossRef]
  34. Lee, C.; Park, S.; Kim, T.; Liu, S.; Md Reba, M.N.; Oh, J.; Han, Y. Machine Learning-Based Forest Burned Area Detection with Various Input Variables: A Case Study of South Korea. Appl. Sci. 2022, 12, 10077. [Google Scholar] [CrossRef]
  35. Alkhatib, R.; Sahwan, W.; Alkhatieb, A.; Schütt, B. A Brief Review of Machine Learning Algorithms in Forest Fires Science. Appl. Sci. 2023, 13, 8275. [Google Scholar] [CrossRef]
  36. Caramela, S. Pacific Palisades Wildfire Officially Most Destructive in Los Angeles History. Available online: https://www.vice.com/en/article/pacific-palisades-wildfire-officially-most-destructive-in-los-angeles-history/ (accessed on 20 August 2025).
  37. Filipponi, F. Sentinel-1 GRD Preprocessing Workflow. In Proceedings of the 3rd International Electronic Conference on Remote Sensing, Online, 22 May–5 June 2018; MDPI: Basel, Switzerland, 2019; Volume 3. [Google Scholar]
  38. Zahabnazouri, S.; Belmont, P.; David, S.; Wigand, P.E.; Elia, M.; Capolongo, D. Detecting Burn Severity and Vegetation Recovery After Fire Using dNBR and dNDVI Indices: Insight from the Bosco Difesa Grande, Gravina in Southern Italy. Sensors 2025, 25, 3097. [Google Scholar] [CrossRef]
  39. Arellano-Pérez, S.; Ruiz-González, A.D.; Álvarez-González, J.G.; Vega-Hidalgo, J.A.; Díaz-Varela, R.; Alonso-Rego, C. Mapping Fire Severity Levels of Burned Areas in Galicia (NW Spain) by Landsat Images and the dNBR Index: Preliminary Results About the Influence of Topographical, Meteorological and Fuel Factors on the Highest Severity Level; Viegas, D.X., Ed.; Imprensa da Universidade de Coimbra: Coimbra, Portugal, 2018; pp. 1053–1060. [Google Scholar]
  40. Al-hasn, R.; Almuhammad, R. Burned Area Determination Using Sentinel-2 Satellite Images and the Impact of Fire on the Availability of Soil Nutrients in Syria. J. For. Sci. 2022, 68, 96–106. [Google Scholar] [CrossRef]
  41. Rokhmatuloh; Ardiansyah; Indratmoko, S.; Riyanto, I.; Margatama, L.; Arief, R. Burnt-Area Quick Mapping Method with Synthetic Aperture Radar Data. Appl. Sci. 2022, 12, 11922. [Google Scholar] [CrossRef]
  42. Elgeldawi, E.; Sayed, A.; Galal, A.R.; Zaki, A.M. Hyperparameter Tuning for Machine Learning Algorithms Used for Arabic Sentiment Analysis. Informatics 2021, 8, 79. [Google Scholar] [CrossRef]
  43. Hossain, R.; Timmer, D. Machine Learning Model Optimization with Hyper Parameter Tuning Approach. Glob. J. Comput. Sci. Technol. D Neural Artif. Intell. 2021, 21, 7–13. [Google Scholar]
  44. Parmar, A.; Katariya, R.; Patel, V. A Review on Random Forest: An Ensemble Classifier. In International Conference on Intelligent Data Communication Technologies and Internet of Things (ICICI) 2018; Hemanth, J., Fernando, X., Lafata, P., Baig, Z., Eds.; Lecture Notes on Data Engineering and Communications Technologies; Springer International Publishing: Cham, Switzerland, 2019; Volume 26, pp. 758–763. [Google Scholar]
  45. Chen, T.; Guestrin, C. XGBoost: A Scalable Tree Boosting System. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, 13–17 August 2016; ACM: New York, NY, USA, 2016; pp. 785–794. [Google Scholar]
  46. Twayana, R. Pre-Processed Dual-Pol Sentinel-1 SAR Dataset for Machine Learning-Based Burned Area Mapping. Available online: https://data.mendeley.com/datasets/d8r89ykgyd/1 (accessed on 26 August 2025).
Figure 1. Map of the study area. Data sources: state administrative boundaries—OpenDataSoft; California counties—California Open Data Portal; Palisades perimeter—NIFC FIRIS.
Figure 2. Workflow of our methodological approach.
Figure 3. SAR data preprocessing.
Figure 4. (a) Histogram of dNBR values and the (b) generated reference map.
Figure 5. Train and test tile selection.
Figure 6. Burned area maps using XGBoost (left) and Random Forest (right) across eight experimental configurations: (a) single-date schemes (1–4) and (b) multi-date schemes (5–8).
Figure 7. Feature importance for Random Forest (a,b) and XGBoost (c,d) classifiers in single-date and multi-date schemes.
Figure 8. Sentinel-2 image composite using B12, B8, and B4 for red, green, and blue channels, respectively, of (a) pre-fire and (b) post-fire acquisitions.
Table 1. Specifications of data used.

| Sensor | Acquisition Date | Product Type | Polarization Mode/Bands | Spatial Resolution | Orbit |
|---|---|---|---|---|---|
| Sentinel-2 Multispectral Imager | Before event (2 January 2025); after event (12 January 2025) | Level-2A | B8 (NIR); B12 (SWIR2) | 10 m (B8); 20 m (B12) | Descending |
| Sentinel-1 (C-band SAR) | Before event (9 December 2024, 21 December 2024, 2 January 2025); after event (14 January 2025, 26 January 2025, 7 February 2025) | GRDH | VV, VH | 10 m | Ascending, Descending |
Table 2. Image acquisition pass and features for different schemes.

| Dates | Scheme | Acquisition Pass | Features |
|---|---|---|---|
| Single-date | #1 | Ascending | RBR_VV, RBR_VH, RBD_VV, RBD_VH, ΔRVI |
| Single-date | #2 | Descending | RBR_VV, RBR_VH, RBD_VV, RBD_VH, ΔRVI |
| Single-date | #3 | Ascending + Descending | RBR_VV, RBR_VH, RBD_VV, RBD_VH, ΔRVI |
| Single-date | #4 | Ascending + Descending | RBR_VV, RBR_VH, RBD_VV, RBD_VH, ΔRVI, 3 GLCM texture difference principal components |
| Multi-date | #5 | Ascending | RBR_VV, RBR_VH, RBD_VV, RBD_VH, ΔRVI |
| Multi-date | #6 | Descending | RBR_VV, RBR_VH, RBD_VV, RBD_VH, ΔRVI |
| Multi-date | #7 | Ascending + Descending | RBR_VV, RBR_VH, RBD_VV, RBD_VH, ΔRVI |
| Multi-date | #8 | Ascending + Descending | RBR_VV, RBR_VH, RBD_VV, RBD_VH, ΔRVI, 3 GLCM texture difference principal components |
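The three GLCM texture-difference principal components listed for schemes 4 and 8 can be sketched as follows; the window size, the single (0, 1) co-occurrence offset, and the three texture properties are illustrative assumptions, not the exact configuration used in the study:

```python
import numpy as np
from sklearn.decomposition import PCA

def glcm(img, levels=8):
    """Normalised grey-level co-occurrence matrix for a horizontal (0, 1) offset."""
    lo, span = img.min(), np.ptp(img) + 1e-12
    q = np.floor((img - lo) / span * (levels - 1)).astype(int)
    m = np.zeros((levels, levels))
    np.add.at(m, (q[:, :-1].ravel(), q[:, 1:].ravel()), 1.0)
    return m / m.sum()

def texture_props(img):
    """Contrast, homogeneity, and energy from the GLCM of one window."""
    p = glcm(img)
    i, j = np.indices(p.shape)
    return np.array([
        (p * (i - j) ** 2).sum(),           # contrast
        (p / (1.0 + np.abs(i - j))).sum(),  # homogeneity
        np.sqrt((p ** 2).sum()),            # energy
    ])

rng = np.random.default_rng(0)
win = 9  # window size (assumption)
# Random stand-ins for pre- and post-fire backscatter, 40 stacked windows per pol.
pre = {pol: rng.normal(size=(win * 40, win)) for pol in ("VV", "VH")}
post = {pol: rng.normal(size=(win * 40, win)) for pol in ("VV", "VH")}

rows = []
for k in range(40):  # texture differences (post - pre) per window, VV and VH
    sl = slice(k * win, (k + 1) * win)
    feats = [texture_props(post[pol][sl]) - texture_props(pre[pol][sl])
             for pol in ("VV", "VH")]
    rows.append(np.concatenate(feats))
X = np.vstack(rows)                         # (40 windows, 6 texture differences)
pcs = PCA(n_components=3).fit_transform(X)  # keep 3 principal components
print(pcs.shape)  # (40, 3)
```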
Table 3. Formulas used for the calculation of the extracted features.

| Index | Formula | Equation |
|---|---|---|
| Radar Burn Ratio for VV polarization | RBR_VV = VV_post-fire / VV_pre-fire | (4) |
| Radar Burn Ratio for VH polarization | RBR_VH = VH_post-fire / VH_pre-fire | (5) |
| Radar Burn Difference for VV polarization | RBD_VV = VV_post-fire − VV_pre-fire | (6) |
| Radar Burn Difference for VH polarization | RBD_VH = VH_post-fire − VH_pre-fire | (7) |
| Radar Vegetation Index | RVI = 4 × VH / (VV + VH) | (8) |
| Radar Vegetation Index Difference | ΔRVI = RVI_post-fire − RVI_pre-fire | (9) |
| Combining ascending and descending features | feat_combined = (feat_asc + feat_desc) / 2, where feat = RBR, RBD, ΔRVI, ΔGLCM | (10) |
| GLCM texture properties | ΔGLCM_feat,xy = GLCM_feat,xy,post − GLCM_feat,xy,pre, where xy is VV or VH and feat is a GLCM texture property | (11) |
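A minimal NumPy sketch of Equations (4)–(10); the backscatter arrays are random placeholders, and linear power units are assumed for the ratio:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical pre- and post-fire backscatter (linear power units assumed).
vv_pre, vh_pre = rng.uniform(0.05, 0.3, (2, 100, 100))
vv_post, vh_post = rng.uniform(0.05, 0.3, (2, 100, 100))

def rvi(vv, vh):
    """Radar Vegetation Index, Eq. (8): 4 * VH / (VV + VH)."""
    return 4.0 * vh / (vv + vh)

rbr_vv = vv_post / vv_pre  # Eq. (4)
rbr_vh = vh_post / vh_pre  # Eq. (5)
rbd_vv = vv_post - vv_pre  # Eq. (6)
rbd_vh = vh_post - vh_pre  # Eq. (7)
d_rvi = rvi(vv_post, vh_post) - rvi(vv_pre, vh_pre)  # Eq. (9)

# Eq. (10): per pixel, ascending and descending features are averaged:
#   feat_combined = (feat_asc + feat_desc) / 2

features = np.stack([rbr_vv, rbr_vh, rbd_vv, rbd_vh, d_rvi], axis=-1)
print(features.shape)  # (100, 100, 5)
```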
Table 4. Evaluation metrics of XGBoost.

| Dates | Scheme | Accuracy | F1-Score | Precision | Recall | ROC-AUC |
|---|---|---|---|---|---|---|
| Single-date | Scheme 1 | 0.7661 | 0.7434 | 0.7921 | 0.7384 | 0.8682 |
| Single-date | Scheme 2 | 0.7771 | 0.7568 | 0.8018 | 0.7509 | 0.8632 |
| Single-date | Scheme 3 | 0.8163 | 0.8045 | 0.8306 | 0.7974 | 0.9044 |
| Single-date | Scheme 4 | 0.8214 | 0.8102 | 0.8354 | 0.8031 | 0.9094 |
| Multi-date | Scheme 5 | 0.8033 | 0.7875 | 0.8261 | 0.7802 | 0.8984 |
| Multi-date | Scheme 6 | 0.8017 | 0.7854 | 0.8256 | 0.7780 | 0.8871 |
| Multi-date | Scheme 7 | 0.8383 | 0.8272 | 0.8591 | 0.8187 | 0.9234 |
| Multi-date | Scheme 8 | 0.8509 | 0.8418 | 0.8675 | 0.8336 | 0.9335 |
Table 5. Evaluation metrics of Random Forest classifier.

| Dates | Scheme | Accuracy | F1-Score | Precision | Recall | ROC-AUC |
|---|---|---|---|---|---|---|
| Single-Date | Scheme 1 | 0.7814 | 0.7686 | 0.7881 | 0.7633 | 0.8691 |
| Single-Date | Scheme 2 | 0.7913 | 0.7776 | 0.8029 | 0.7716 | 0.8704 |
| Single-Date | Scheme 3 | 0.8210 | 0.8124 | 0.8266 | 0.8069 | 0.9057 |
| Single-Date | Scheme 4 | 0.8234 | 0.8143 | 0.8307 | 0.8084 | 0.9088 |
| Multi-Date | Scheme 5 | 0.8183 | 0.8086 | 0.8265 | 0.8025 | 0.8985 |
| Multi-Date | Scheme 6 | 0.8289 | 0.8209 | 0.8345 | 0.8154 | 0.9047 |
| Multi-Date | Scheme 7 | 0.8514 | 0.8443 | 0.8590 | 0.8382 | 0.9271 |
| Multi-Date | Scheme 8 | 0.8578 | 0.8515 | 0.8645 | 0.8456 | 0.9328 |
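The scores in Tables 4 and 5 are the standard binary-classification metrics available in scikit-learn. As a minimal sketch of how one such table row could be produced, here run on synthetic stand-in data (not the study's Sentinel-1 features) with a Random Forest classifier:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import (accuracy_score, f1_score, precision_score,
                             recall_score, roc_auc_score)
from sklearn.model_selection import train_test_split

# synthetic stand-in for a per-pixel feature table (burned = 1, unburned = 0);
# the five columns play the role of RBR_VV, RBR_VH, RBD_VV, RBD_VH, dRVI
rng = np.random.default_rng(42)
X = rng.normal(size=(600, 5))
y = (X[:, 0] + 0.5 * X[:, 4] + rng.normal(scale=0.5, size=600) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

y_hat = clf.predict(X_te)
scores = {
    "accuracy": accuracy_score(y_te, y_hat),
    "f1": f1_score(y_te, y_hat),
    "precision": precision_score(y_te, y_hat),
    "recall": recall_score(y_te, y_hat),
    # ROC-AUC is computed from class probabilities, not hard labels
    "roc_auc": roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]),
}
```

Note that ROC-AUC uses the classifier's probability output, which is why it can exceed the threshold-based metrics in the same row, as it does throughout Tables 4 and 5.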


Twayana, R.; Hadj-Rabah, K. Assessment of Dual-Polarization Sentinel-1 SAR Data for Improved Wildfire Burned Area Mapping: A Case Study of the Palisades Region, USA. Geomatics 2026, 6, 28. https://doi.org/10.3390/geomatics6020028
