Article

An Unsupervised Machine Learning-Based Approach for Combining Sentinel-1 and Sentinel-2 to Assess the Severity of Fires over Large Areas Using Google Earth Engine

by
Ciro Giuseppe Riccardi
1,2,
Nicodemo Abate
3,* and
Rosa Lasaponara
2
1
Department of Humanities, Science, and Social Innovation, University of Basilicata, Via Lanera 20, 75100 Matera, Italy
2
Institute of Methodologies for Environmental Analysis, National Research Council, C.da S. Loja, 85050 Tito, Italy
3
Institute of Heritage Science, National Research Council, C.da S. Loja, 85050 Tito, Italy
*
Author to whom correspondence should be addressed.
Remote Sens. 2026, 18(6), 956; https://doi.org/10.3390/rs18060956
Submission received: 5 February 2026 / Revised: 2 March 2026 / Accepted: 18 March 2026 / Published: 23 March 2026
(This article belongs to the Special Issue Advances in Remote Sensing for Burned Area Mapping)

Highlights

What are the main findings?
  • The integration of Sentinel-1 (SAR) and Sentinel-2 (optical) data within a Google Earth Engine framework achieved a highly accurate burned area estimation, showing a minimal discrepancy of only 1.3% compared to official EFFIS records.
  • Statistical analysis confirms strong correlations between SAR-based indices (RVI and DPSVI) and traditional optical metrics, validating the use of radar to assess fire severity in areas with frequent cloud or smoke obstruction.
What are the implications of the main findings?
  • The proposed multi-sensor methodology provides a scalable and automated solution for large-scale wildfire monitoring, ensuring temporal continuity when optical sensors are limited by atmospheric conditions.
  • Leveraging cloud computing and unsupervised machine learning (K-means) reduces reliance on manual interpretation, offering a robust tool for rapid post-fire damage assessment and global wildfire mitigation strategies.

Abstract

Wildfires represent a significant global environmental challenge, necessitating advanced monitoring and assessment techniques. This study explores the integration of Sentinel-1 Synthetic Aperture Radar (SAR) and Sentinel-2 optical data within a Google Earth Engine (GEE) framework to enhance wildfire detection, burned area estimation, and severity assessment. By leveraging SAR’s capability to penetrate atmospheric obstructions and optical data’s spectral sensitivity to vegetation changes, the proposed methodology addresses limitations of single-sensor approaches. The results demonstrate strong correlations between SAR-based indices, such as the Radar Vegetation Index (RVI) and Dual-Polarized SAR Vegetation Index (DPSVI), and traditional optical indices, including the Normalized Burn Ratio (NBR) and differenced NBR (ΔNBR). Despite challenges related to terrain influence, sensor resolution differences, and computational demands, the integration of multi-sensor data in a cloud-based environment offers a scalable and efficient solution for wildfire monitoring. During the peak of the fire events, significant atmospheric obstruction was technically verified using Sentinel-2 metadata and the QA60 cloud mask band, which confirmed persistent cloud cover and thick smoke plumes over the study areas. This interference limited the reliability of purely optical monitoring, further justifying the integration of SAR data. Future research should focus on refining data fusion techniques, incorporating additional datasets such as thermal infrared imagery and meteorological variables, and enhancing automation through artificial intelligence (AI). This study underscores the potential of remote sensing advancements in improving fire management strategies and global wildfire mitigation efforts.

1. Introduction

Remote sensing technologies provide a powerful tool for monitoring environmental changes at various spatial and temporal scales. Over the past few decades, these technologies have been extensively utilized in wildfire research and management, enabling accurate assessments of fire dynamics, severity, and ecological impacts. Since the 1970s, satellite-based remote sensing has played a crucial role in wildfire detection and burned area mapping, significantly enhancing our understanding of fire-related phenomena and informing disaster response strategies [1]. Remote sensing sensors systematically observe the Earth’s surface across different spectral regions, offering a unique advantage for multi-temporal analysis. Various platforms have been deployed to support fire-related research, including (i) ground-based spectroradiometers, (ii) aerial imaging from helicopters, airplanes, and unmanned aerial systems (UAS), and (iii) satellite missions specifically designed to detect fire activity or equipped with spectral bands suitable for this purpose. In recent years, the range of remote sensing methodologies has expanded considerably, integrating advanced techniques such as (i) machine learning algorithms for automated fire detection and classification [2,3], (ii) the synergistic use of optical, thermal, and radar data for enhanced accuracy [4], (iii) radiative transfer models for fire behavior simulations [5], and (iv) refined change detection approaches for mapping post-fire impacts [6].
One of the primary applications of remote sensing in wildfire science is the accurate mapping of burned areas, a process that varies depending on the scale and objectives of the study. At local scales, high- and medium-resolution sensors (<100 m) are employed to detect spectral differences between pre- and post-fire imagery, facilitating the precise identification of burned regions [7,8]. Burned area estimation and fire severity assessments are often correlated, as both rely on similar methodologies. A common approach involves using active fire data to count the number of affected pixels and calculate the total burned area based on pixel size [9]. Over the years, several fire detection and burned area mapping algorithms have been developed. For instance, Giglio et al. [10] introduced a method that adjusts burned area estimates by incorporating tree and herbaceous cover fractions, thereby improving prediction accuracy. Roy et al. [9] demonstrated the utility of MODIS sensors for burned area mapping at national and regional scales, while Mahdianpari et al. [11] proposed a methodology that combines MODIS data with the Normalized Burn Ratio (NBR) and temporal texture analysis to enhance fire detection capabilities. The severity of wildfires significantly influences post-fire ecosystem recovery. Fire severity assessments help evaluate vegetation mortality, soil nutrient composition, and hydrological responses. The Composite Burn Index (CBI) is commonly used for field-based fire severity assessments, while remote sensing-based indices such as the Normalized Difference Vegetation Index (NDVI) and NBR are widely employed for spectral analysis [12,13]. NBR has become the standard for assessing burn severity by leveraging near-infrared (NIR) and shortwave infrared (SWIR) bands to detect changes in vegetation and soil conditions following a fire. The ΔNBR (differenced NBR) technique is frequently used for Burned Area Emergency Response (BAER) assessments, generating Burned Area Reflectance Classification (BARC) maps that aid in post-fire management and rehabilitation planning [14]. However, traditional fire severity assessment methods have limitations. Miller and Thode [15] noted that ΔNBR detects absolute changes, which may not perform optimally in areas with sparse vegetation. To address this issue, the Relative differenced NBR (RdNBR) was introduced to improve classification accuracy in heterogeneous landscapes. Further refinements, such as the Relativized Burn Ratio (RBR), have been proposed to enhance the correspondence with field-based CBI measurements [16]. Additionally, emerging research explores alternative approaches, including emissivity-enhanced spectral indices, variations in land surface temperature (LST), and machine learning-based classification techniques [17,18]. Synthetic Aperture Radar (SAR) has emerged as a valuable complement to optical remote sensing for fire monitoring. Unlike optical sensors, SAR operates independently of atmospheric conditions, providing consistent observations regardless of cloud cover or smoke. SAR-based burned area mapping leverages backscatter variations caused by fire-induced changes in vegetation structure, moisture content, and dielectric properties [19,20]. Research has demonstrated the sensitivity of SAR indices, such as the Radar Vegetation Index (RVI) and Dual-Polarized SAR Vegetation Index (DPSVI), to fire-induced alterations in biomass and canopy structure [21,22].
Integrating SAR with optical data enhances the robustness of fire monitoring systems, particularly in regions with frequent cloud cover. The European Space Agency’s Sentinel-1 and Sentinel-2 missions, part of the Copernicus program, provide free and open access to high-resolution SAR and multispectral data. Sentinel-1’s C-band SAR data have been widely utilized in wildfire studies, demonstrating sensitivity to vegetation and environmental changes caused by fire [23]. Meanwhile, Sentinel-2’s multispectral imagery offers critical insights into post-fire vegetation recovery, enabling synergistic applications with SAR data for improved fire mapping and severity assessment [24]. Recent advancements in cloud computing and big data analytics have further enhanced the capabilities of remote sensing for wildfire research. Platforms such as Google Earth Engine (GEE) offer high-performance computing environments for processing large datasets and applying advanced machine learning models to fire detection and mapping [25]. Additionally, near-real-time (NRT) fire monitoring systems leveraging Sentinel-3 (OLCI/SLSTR) and VIIRS thermal anomalies have emerged, enabling the rapid generation of global burned area products within 24 h of data acquisition. High-resolution commercial missions (e.g., PlanetScope) are increasingly used for local-scale burnt area mapping, often integrated into U-Net or Transformer-based segmentation models. New research also emphasizes transferable and generalizable neural networks capable of operating across diverse ecosystems, although domain adaptation and training data imbalance remain ongoing challenges. Overall, recent progress highlights a clear trend toward integrated, multi-sensor, AI-driven wildfire monitoring frameworks. The foundations for these advances were laid in the period 2020–2025, which saw rapid maturation of deep learning architectures and data fusion strategies. During this time, convolutional neural networks (CNNs) were widely adopted for spatial pattern recognition in optical data. Tiengo et al. (2022) demonstrated the effectiveness of a deep CNN (DeepLabV3+) on Sentinel-2 images for mapping burned areas in the Mediterranean region, outperforming threshold-based methods of spectral indices and achieving an IoU greater than 80% [26]. At the same time, the first large-scale applications of Transformers in remote sensing emerged. Qurratulain et al. (2023) proposed Burnt-Net, a CNN–Transformer hybrid, which improved the delineation of burned area boundaries by exploiting the Transformer’s ability to model long-range spatial dependencies [27]. The synergistic integration of SAR and optical data has become a pillar of research in this period, to overcome the limitations associated with cloud cover. Zhang, Qi et al. (2021) combined Sentinel-1 (VV/VH coherence) and Sentinel-2 (NBR indices) time-series within a Random Forest model on GEE, achieving more robust and early detection of areas affected by fire compared to the use of individual sensors [28]. On the near-real-time monitoring front, systems based on VIIRS and Sentinel-3 SLSTR thermal anomalies have been consolidated. The work of Lizundia-Loiola et al. (2020) on GEE optimized the MODIS Active Fire algorithm for VIIRS data, enabling the generation of global hotspot maps with a latency of a few hours, a crucial step towards today’s operational systems [29]. Remote sensing technologies continue to evolve rapidly, providing increasingly powerful tools for wildfire detection, monitoring, and assessment. 
Deep learning methods have become particularly central: models such as CNNs, U-Net, and Vision Transformers (ViT) have recently been successfully applied to optical data from Sentinel-2 to map burned areas. For example, Yilmaz & Kavzoglu (2024) developed a CNN combined with Explainable AI (SHAP method) on Sentinel-2A images, achieving ~98.9% accuracy in detecting burned areas and identifying NBR, dNBR, and NDVI indices as the most influential [30]. In parallel, using very-high-resolution optical data, Kim, Lee & Park (2024) employed PlanetScope images with a convolutional network to map burned areas, demonstrating the effectiveness of high-resolution commercial data for local studies [31]. With regard to fire severity prediction, Sykas, Zografakis & Demestichas (2024) compared segmentation networks (U-Net) and a Visual Transformer using the EO4WildFires dataset, integrating meteorological, optical (Sentinel-2), and SAR (Sentinel-1) data [32]. Other recent studies are moving towards Transformer architectures for active fire detection: Rad (2024) proposed a u-shaped Vision Transformer on Landsat-8 data for active fire detection, achieving an F1 score of ~90% [33]. On the multi-sensor front, Chen et al. (2024) conducted a comparative study of machine learning methods (SVM, Random Forest, Neural Network) on Sentinel-1B (SAR) and Sentinel-2A data, with SVM achieving up to 93.5% accuracy and RF excelling in the post-fire phase using the NBR index [34]. In addition, research in 2025 saw the emergence of near-real-time monitoring systems based on Sentinel-3 OLCI and SLSTR data integrated with VIIRS. Padilla, Ramo, Gómez-Dans et al. (2025) describe a deep learning-based approach used by the Copernicus Land Monitoring Service to generate maps of burned area within one day of image acquisition [35]. There is also no shortage of studies on more specific phenomena: a 2025 study on smoke detection (“incipient wildfire smoke”) exploited Sentinel-2 bands, using a neural network with sigmoidal activation function and momentum optimizer (MGD) to distinguish clouds from smoke [36]. Finally, a recent study (Natural Hazards, 2025) demonstrated the effectiveness of classical machine learning methods (SVM) on Landsat-8 data for detecting active fires in Australia, showing that combining SWIR and NIR bands with the Normalized Difference Fire Index (NDFI) improves fire discrimination, although challenges related to model interpretability remain [37]. Recent studies have increasingly demonstrated that the synergy between different sensors provides a more holistic view of fire impacts, especially in complex topographies. Specifically, the fusion of Sentinel-1 C-band SAR and Sentinel-2 multispectral data has proven effective in mitigating the limitations of individual sensors [38,39,40,41]. This study aims to leverage the combined potential of optical and SAR remote sensing, integrated within a GEE framework, to improve fire detection, burned area estimation, and severity assessment. Despite the significant performance of deep learning (DL) architectures, they often present conceptual and operational obstacles. Conceptually, supervised methods require large, high-quality labeled datasets for training, which are often insufficient for specific regions or difficult to obtain in near-real-time (NRT) scenarios. Operationally, these models require high computational resources and complex adaptation to be transferable between different ecosystems. 
The presented approach leverages the power of cloud computing asynchronously, delivering near-instantaneous results since there is no “training” phase. It is a plug-and-play product with no dependencies on complex external libraries or paid resources. There is a specific knowledge gap regarding the development of multi-sensor frameworks that are both automated and independent of pre-existing training labels. This work addresses this gap by proposing an unsupervised approach based on K-means clustering within Google Earth Engine (GEE). Unlike supervised ML, which learns from historical data, our approach is conceptually based on the intrinsic statistical structure of multi-sensor indices (SAR and optical) for a specific event. From an operational point of view, this eliminates the training phase, making the workflow lightweight, independent of training data, and easily scalable for monitoring forest fires on a global scale, where reference data on the ground are often unavailable. By harnessing Sentinel-1 and Sentinel-2 data, we seek to develop an efficient methodology for large-scale fire monitoring that overcomes the limitations of single-sensor approaches and enhances the accuracy of fire impact assessments. The proposed methodology is applied to recent wildfire events to validate its effectiveness in real-world scenarios.

2. Materials and Methods

2.1. Rationale and Approach

The research followed the flowchart in Figure 1.
To process the large volume of multi-sensor data required for a regional-scale analysis, we utilized the cloud-based environment of Google Earth Engine. The platform was employed as a computational engine to handle the integration of Sentinel-1 and Sentinel-2 datasets, enabling the parallel execution of the K-means clustering algorithm across the entire study area of Sicily. This choice was dictated by the need for high-performance computing to manage the spatial tiling and multi-temporal analysis efficiently [42,43,44,45,46,47,48,49,50,51,52,53,54]. The K-means algorithm was selected as the core classifier to ensure the unsupervised nature of the workflow. Unlike supervised methods, which rely on the availability of timely and accurate training datasets, often difficult to obtain during active fire emergencies, K-means allows for the autonomous clustering of pixels based on their inherent statistical distribution in the multi-sensor feature space. This makes the proposed framework highly transferable and scalable, as it can be deployed across different ecosystems and fire regimes without the need for manual retraining or human intervention.
In order to fully understand the advantages and disadvantages of each sensor and the scalability potential of this method, the entire region of Sicily (Italy) was analyzed to study wildfires larger than 500 hectares that occurred in July 2023. The full study area covers 500,917 ha. To avoid computational overhead over such an extent, the region of interest was (i) divided into tiles of 40,000 ha each, covering the entire regional surface, and (ii) processed tile by tile according to a progressive ‘parallel’ scheme (Figure 2); a sketch of this tiling is given below.
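As a concrete illustration of this tiling scheme, the minimal sketch below uses the Earth Engine Python API; the 20 km × 20 km grid (≈40,000 ha per tile) reflects the description above, while the GAUL boundary asset and the UTM 33N projection are assumptions of this example, not necessarily the exact assets used in the study.

```python
import ee

ee.Initialize()

# Illustrative region boundary: Sicily from the FAO GAUL dataset (assumed asset).
sicily = (ee.FeatureCollection('FAO/GAUL/2015/level1')
          .filter(ee.Filter.eq('ADM1_NAME', 'Sicilia'))
          .geometry())

# Tiles of ~40,000 ha each: 20 km x 20 km squares in a metric projection
# (UTM zone 33N is assumed here for Sicily).
tiles = sicily.coveringGrid(proj=ee.Projection('EPSG:32633'), scale=20000)

# Each tile is then processed independently (the progressive 'parallel' scheme):
# the per-tile workflow (index computation + K-means) is mapped over this grid.
n_tiles = tiles.size().getInfo()
print(f'Processing {n_tiles} tiles of ~40,000 ha each')
```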
In order to gain an insight into the reliability of the system, the data obtained from the entire process were compared with those produced by the EFFIS for the same period and area.
To validate the results of our unsupervised classification, we used official data from the European Forest Fire Information System (EFFIS), part of the Copernicus Emergency Management Service. The EFFIS provides standardized information on forest fires across Europe using the Rapid Damage Assessment (RDA) module. This system maps burned areas by processing satellite images from the Sentinel-2 and Landsat missions, offering a spatial resolution of approximately 20–30 m. In terms of temporal resolution, the EFFIS provides daily updates during the fire season. These data are publicly and freely accessible, distributed as geospatial vector data (shapefiles) via the EFFIS Current Situation Viewer. Given its rigorous validation protocol, the EFFIS represents the gold standard for operational fire monitoring. The temporal selection of Sentinel-1 and Sentinel-2 images was optimized to ensure that the changes detected were attributable to the effects of fires rather than seasonal phenology or other external factors. We used a narrow time window, selecting the closest available Sentinel-2 and Sentinel-1 acquisitions in the 15 days before and after the fire event (a sketch of this selection follows below). This strategy is particularly effective for summer fires in Sicily, where Mediterranean vegetation is in a stable state of senescence during the dry season, thus minimizing spectral variability caused by plant growth or seasonal greening.
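The ±15-day acquisition window can be expressed as in the following sketch (Earth Engine Python API); the area of interest and the event date are hypothetical placeholders.

```python
import ee

def closest_pair(collection, fire_date, aoi):
    """Pick the closest acquisitions within +/-15 days of the fire event."""
    fire = ee.Date(fire_date)
    col = collection.filterBounds(aoi)
    # Latest image before the event...
    pre = (col.filterDate(fire.advance(-15, 'day'), fire)
              .sort('system:time_start', False).first())
    # ...and earliest image after it.
    post = (col.filterDate(fire, fire.advance(15, 'day'))
               .sort('system:time_start').first())
    return pre, post

# Hypothetical usage for one of the July 2023 events:
aoi = ee.Geometry.Point(13.36, 38.12).buffer(10000)  # placeholder AOI
s2 = ee.ImageCollection('COPERNICUS/S2_SR')
pre_img, post_img = closest_pair(s2, '2023-07-25', aoi)
```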

2.1.1. SAR Sensor: Sentinel-1

The Sentinel-1 mission (European Space Agency) consists of a constellation of two satellites equipped with advanced SAR (Synthetic Aperture Radar) instrumentation that enables them to acquire data day and night in all weather conditions, useful for maritime and land monitoring, emergency response services, climate change observation and security. SAR sensors like Sentinel-1 have pros and cons:
Pros:
All-weather, day-and-night acquisition: SAR sensors work with microwaves and do not suffer from the presence of clouds over the area of interest, so images are available for any period under consideration.
Cons:
(i) Speckle: each image contains the characteristic salt-and-pepper noise (speckle) due to the interaction of electromagnetic waves with the ground.
(ii) NO-DATA gaps: some images may contain “holes”, i.e., NO-DATA areas, due to the conformation of the ground; the area of terrain covered by each SAR resolution cell depends on the local topography.
(iii) Geometric distortions: SAR is strongly influenced by the inclination of the terrain in the direction orthogonal to the satellite orbit and along the azimuth direction. As long as the terrain is horizontal, no geometric distortion is present. As the angle of inclination increases, however, the normal to the surface tends towards the direction of view and the range resolution cell grows; this effect, commonly called compression, corresponds to foreshortening. When the tilt of the ground approaches the viewing angle, the resolution cell becomes very large, resulting in the loss of all detail. The limiting case occurs when the viewing angle is smaller than the tilt of the ground, producing an inversion of the ground capture order, with returns overlapping those of the previous resolution cells; this effect is called superposition (layover). In the opposite case, when the ground is parallel to the line of sight, the boundary condition is reached, beyond which everything lies in shadow and cannot be illuminated by the satellite.
The dataset used was COPERNICUS/S1_GRD.
This collection contains all of the GRD scenes. Each scene has one of 3 resolutions (10, 25 or 40 m), 4 band combinations (corresponding to scene polarization) and 3 instrument modes. Use of the collection in a mosaic context will likely require filtering down to a homogeneous set of bands and parameters. Each scene contains either 1 or 2 out of 4 possible polarization bands, depending on the instrument’s polarization settings. The possible combinations are single band VV or HH, and dual band VV + VH and HH + HV. Each scene also includes an additional ‘angle’ band that contains the approximate incidence angle from ellipsoid in degrees at every point. This band is generated by interpolating the ‘incidenceAngle’ property of the ‘geolocationGridPoint’ gridded field provided with each asset. Each scene was pre-processed with Sentinel-1 Toolbox using the following steps:
Thermal noise removal (removes additive noise in sub-swaths to help reduce discontinuities between sub-swaths for scenes in multi-swath acquisition modes).
Radiometric calibration (computes backscatter intensity using sensor calibration parameters in the GRD metadata).
Terrain correction (converts data from ground range geometry, which does not take terrain into account, to σ° using the SRTM 30 m DEM, or the ASTER DEM at high latitudes, above 60° or below −60°, where SRTM is not available); the final terrain-corrected values are converted to decibels via log scaling [45]. To mitigate terrain-induced distortions in the SAR signal (layover, foreshortening, and shadow), the Sentinel-1 data underwent rigorous pre-processing in GEE using the Range-Doppler terrain correction algorithm with the 30 m SRTM DEM. In mountainous areas of Sicily, pixels affected by severe radar shadow or layover, identified via the local incidence angle band, were masked or flagged to prevent false severity assignments caused by topographic artifacts rather than actual fire damage. Sentinel-1 Ground Range Detected (GRD) data in Interferometric Wide (IW) swath mode were accessed via Google Earth Engine, selecting the dual-polarization VV + VH bands; the GEE pre-processing applied to this collection includes (i) precise orbit file application, (ii) thermal noise removal, (iii) radiometric calibration (sigma nought), and (iv) terrain correction using the SRTM 30 m DEM (the collection filtering is sketched below). The transferability of the workflow to other mountainous regions is thus supported by this rigorous terrain-normalization protocol, which minimizes the ‘false positives’ often caused by topographic brightness variations in radar imagery. All index computations were carried out in linear scale.
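A minimal sketch of the Sentinel-1 collection filtering is given below (Earth Engine Python API), assuming the dB-to-linear conversion implied by the statement that index computations were carried out in linear scale; parameter choices beyond those stated above are illustrative.

```python
import ee

def s1_linear(aoi, start, end):
    """Sentinel-1 GRD, IW mode, dual-pol VV+VH, converted from dB to linear scale."""
    col = (ee.ImageCollection('COPERNICUS/S1_GRD')
           .filterBounds(aoi)
           .filterDate(start, end)
           .filter(ee.Filter.eq('instrumentMode', 'IW'))
           .filter(ee.Filter.listContains('transmitterReceiverPolarisation', 'VV'))
           .filter(ee.Filter.listContains('transmitterReceiverPolarisation', 'VH')))
    # GEE stores GRD backscatter in dB; convert to linear for index computation.
    to_linear = lambda img: (ee.Image(10.0).pow(img.select(['VV', 'VH']).divide(10.0))
                             .copyProperties(img, ['system:time_start']))
    return col.map(to_linear)
```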
Once the dataset was imported, the first operation performed was the application of a speckle filter, specifically the focal median method, defining the radius and intensity of the filter (an intensity of 90 with a circular kernel). Then, the difference and ratio between the pre-fire and post-fire images were calculated. Next, an unsupervised learning algorithm, K-means, was used to classify the burned areas, from which the severity of the fire was derived. Finally, the RVI (1) and RFDI (2) indices were calculated (see the sketch after the equations below). After running the script, the results obtained can be exported.
$$ \mathrm{RVI} = \frac{4\,VH}{VV + VH} \tag{1} $$
$$ \mathrm{RFDI} = \frac{VV - VH}{VV + VH} \tag{2} $$
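The despeckling and SAR index computation might look as follows; this sketch assumes the focal median radius of 90 is expressed in meters, which the text leaves ambiguous.

```python
import ee

def sar_indices(img):
    """Despeckle with a circular focal median, then compute RVI (Eq. 1) and RFDI (Eq. 2)."""
    # Focal median filter; radius of 90 with a circular kernel (units assumed to be meters).
    smooth = img.focalMedian(radius=90, kernelType='circle', units='meters')
    vv = smooth.select('VV')
    vh = smooth.select('VH')
    rvi = vh.multiply(4).divide(vv.add(vh)).rename('RVI')     # RVI = 4*VH / (VV + VH)
    rfdi = vv.subtract(vh).divide(vv.add(vh)).rename('RFDI')  # RFDI = (VV - VH) / (VV + VH)
    return img.addBands([rvi, rfdi])
```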

2.1.2. Optical Sensor: Sentinel-2

The dataset used for optical analysis was ‘COPERNICUS/S2_SR’, which contains Sentinel-2 surface reflectance (SR) images, already orthorectified and atmospherically corrected.
Optical sensors capture light reflected from the Earth’s surface in visible and infrared wavelengths.
Pros:
Intuitive images: Images from optical sensors are similar to what we see with the naked eye, making them easier to interpret.
Spectral resolution: Optical sensors capture data across multiple spectral bands (visible, infrared, near-infrared), which are useful for monitoring vegetation, water quality, and land use.
High spatial resolution: Optical satellites like Sentinel-2 or Landsat can achieve very high spatial resolution (down to a few meters), providing detailed imagery.
Spectral analysis: They allow the calculation of spectral indices, such as NDVI (Normalized Difference Vegetation Index), essential for agriculture, forest management, and environmental monitoring.
Cons:
(i) Dependence on sunlight: optical sensors require sunlight to function, so they cannot capture images at night.
(ii) Sensitivity to weather conditions: clouds, smoke, and dust can obstruct the view and compromise data acquisition; adverse weather limits continuous data collection.
(iii) Discontinuous data collection: since they rely on light and weather conditions, optical data may not be available consistently, especially in cloud-prone regions.
The first operation performed on the Sentinel-2 data was to select only images with acceptable cloud coverage (<10%) and then to apply the cloud mask based on the QA60 band, following the algorithm suggested by GEE (sketched below).
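A sketch of this scene selection and QA60-based masking, following the standard GEE recipe for the COPERNICUS/S2_SR collection:

```python
import ee

def mask_s2_clouds(img):
    """Mask opaque clouds (bit 10) and cirrus (bit 11) using the QA60 band."""
    qa = img.select('QA60')
    cloud = 1 << 10
    cirrus = 1 << 11
    mask = qa.bitwiseAnd(cloud).eq(0).And(qa.bitwiseAnd(cirrus).eq(0))
    return img.updateMask(mask)

s2 = (ee.ImageCollection('COPERNICUS/S2_SR')
      .filter(ee.Filter.lt('CLOUDY_PIXEL_PERCENTAGE', 10))  # scene-level screening
      .map(mask_s2_clouds))
```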
Then, considering the pre- and post-event images, the NBR (3) index, the NDVI (4) and EVI (5) vegetation indices and their deltas over time were calculated.
$$ \mathrm{NBR} = \frac{NIR - SWIR}{NIR + SWIR} \tag{3} $$
$$ \mathrm{NDVI} = \frac{NIR - RED}{NIR + RED} \tag{4} $$
$$ \mathrm{EVI} = \frac{G\,(NIR - RED)}{NIR + C_1\,RED - C_2\,BLUE + L} \tag{5} $$
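The index computation can be sketched as below; the Sentinel-2 band choices (B8 = NIR, B12 = SWIR, B4 = Red, B2 = Blue) and the standard EVI coefficients (G = 2.5, C1 = 6, C2 = 7.5, L = 1) are assumptions of this example, as the text does not specify them.

```python
import ee

def optical_indices(img):
    """NBR (Eq. 3), NDVI (Eq. 4) and EVI (Eq. 5) from Sentinel-2 SR bands.
    Band choices and EVI coefficients are assumptions of this sketch."""
    nbr = img.normalizedDifference(['B8', 'B12']).rename('NBR')
    ndvi = img.normalizedDifference(['B8', 'B4']).rename('NDVI')
    evi = img.expression(
        'G * (NIR - RED) / (NIR + C1 * RED - C2 * BLUE + L)',
        {'NIR': img.select('B8').divide(10000),   # SR bands are scaled by 10000
         'RED': img.select('B4').divide(10000),
         'BLUE': img.select('B2').divide(10000),
         'G': 2.5, 'C1': 6, 'C2': 7.5, 'L': 1}).rename('EVI')
    return img.addBands([nbr, ndvi, evi])

# Deltas over time (pre- minus post-event), e.g. dNBR:
# d_nbr = (optical_indices(pre_img).select('NBR')
#          .subtract(optical_indices(post_img).select('NBR')))
```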
Once the operations described above had been performed on Sentinel-1 and Sentinel-2, a single multi-band image file was created containing all the previously computed, normalized index deltas. Before the K-means classification, all input features (SAR deltas and optical indices) were normalized to a common scale [0, 1] using min-max normalization, applied individually to each index (NDVI, EVI, NBR, RVI, RFDI) and per study area (event-based), to prevent variables with larger numerical ranges from disproportionately influencing the clustering distance metrics and to adapt the framework to local environmental variability. Equal weighting was assigned to each sensor type to ensure a balanced contribution from SAR (structural changes) and optical (spectral/pigment changes) data; this uniform weighting preserves the unsupervised and objective nature of the workflow, avoiding the introduction of human bias in the relative importance of structural (SAR) versus spectral (optical) attributes. The robustness of this equal-weighting strategy was validated through an ablation analysis, which demonstrated that the combined use of SAR and optical indices consistently outperformed single-sensor configurations.

A mask to remove water was applied and, then, a double unsupervised K-means classification was performed: (i) a binary classification, separating burnt from unburnt areas and producing a binary image and a vector file of the fire perimeter as output, and (ii) a classification of the detected fire to assess its internal statistics related to fire severity classes. The input feature stack consisted of five elements: NDVI, EVI, NBR, RVI, and RFDI. We used the K-means++ initialization method to improve convergence speed and stability; the algorithm was configured with a maximum of 300 iterations and repeated 10 times to avoid local minima and ensure consistent cluster assignment. The labeling of the resulting clusters followed a hierarchical logic based on centroid analysis, with clusters classified according to the values assumed by the above-mentioned indices. This two-stage clustering strategy was preferred over a direct multi-class approach: the first stage produced a binary mask (burned vs. unburned) to eliminate noise, while the second stage divided the burned pixels into three severity levels (Low, Moderate, High). This hierarchical refinement significantly reduced commission errors and improved the thematic accuracy of the final map.

Three classes were used for the severity classification. The decision to use three clusters for the K-means algorithm (Low, Moderate, and High Severity) was dictated by operational requirements for rapid post-fire assessment. Although Key & Benson’s ΔNBR scale [55] defines five levels, from an operational perspective this three-class scheme simplifies decision-making for land managers during initial restoration planning. A sensitivity analysis was conducted by testing cluster numbers ranging from k = 2 to k = 5: we found that k = 2 merged moderate damage with unburned areas, while k = 5 led to class fragmentation with high intra-class similarity. The k = 3 configuration provided the highest stability and best alignment with the aggregated EFFIS severity classes.
From a physical perspective, k = 3 effectively captures the primary transition states of vegetation post-fire: (i) total canopy consumption and charring (High), (ii) partial crown scorch and structural thinning (Moderate), and (iii) understory fire with minimal canopy impact (Low). To validate this choice statistically, an Elbow Method analysis was conducted on the multi-sensor feature stack across different fire events. The results consistently showed that increasing k beyond 3 provided marginal gains in the sum of squared errors (SSE) while significantly increasing the risk of cluster overlap and reduced thematic interpretability. By maintaining k = 3, the framework ensures a robust and standardized characterization of fire severity that is directly applicable to operational emergency management protocols. A sketch of the normalization and two-stage clustering is given below.
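A minimal sketch of the per-event min-max normalization and the two-stage K-means clustering, using GEE’s Weka-based clusterer; the sample size and the Weka initialization code for K-means++ are assumptions of this example, and the 10-fold repetition with different seeds is omitted for brevity.

```python
import ee

def minmax_01(img, band, region, scale=20):
    """Per-event min-max normalization of one index band to [0, 1]."""
    stats = img.select(band).reduceRegion(
        reducer=ee.Reducer.minMax(), geometry=region,
        scale=scale, maxPixels=1e13, bestEffort=True)
    lo = ee.Number(stats.get(band + '_min'))
    hi = ee.Number(stats.get(band + '_max'))
    return img.select(band).subtract(lo).divide(hi.subtract(lo)).rename(band)

def kmeans_cluster(stack, region, k, scale=20):
    """Unsupervised K-means (Weka) on the normalized feature stack."""
    sample = stack.sample(region=region, scale=scale, numPixels=5000)  # size assumed
    clusterer = ee.Clusterer.wekaKMeans(
        nClusters=k, init=1, maxIterations=300)  # init=1 -> k-means++ (assumed code)
    return stack.cluster(clusterer.train(sample))

# Stage 1: binary burned/unburned mask (k = 2); Stage 2: severity classes (k = 3),
# applied only to the burned pixels extracted in stage 1.
```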
The classification results come from the application of the K-means unsupervised machine learning algorithm described above. They were compared with the ΔNBR obtained from Sentinel-2 optical data, using the standard thresholds for the classification of fire severity defined by Key & Benson (2006) [55] and also reported by the EFFIS (https://forest-fire.emergency.copernicus.eu/about-effis/technical-background/fire-severity (accessed on 1 March 2026)) (Table 1).

2.2. Validation Framework and Accuracy Assessment

The validation of the unsupervised classification was conducted with respect to the official burned area and severity products provided by the European Forest Fire Information System (EFFIS), accessible through the Copernicus Emergency Management Service, Rapid Damage Assessment (RDA) module. Its outputs are distributed as georeferenced vector files (shapefiles) and are publicly available via the EFFIS Current Situation Viewer. Thanks to its rigorous and operationally validated protocol, the EFFIS is the reference product adopted in this study. To ensure independence between the classification output and the validation dataset, the EFFIS product was not used at any stage of the K-means clustering process: the unsupervised algorithm operated exclusively on the multi-sensor feature stack (NDVI, EVI, NBR, RVI, RFDI), without exposure to labels or boundaries derived from the EFFIS. Validation was performed ex post, by spatially overlaying the K-means output with the EFFIS polygons after classification completion.

For the quantitative assessment of fire severity accuracy, a stratified random sampling strategy was adopted. Validation points were generated independently within each EFFIS severity class (Low, Moderate, High), proportionally to the class area, with approximately 75 samples per class per fire event. Each sampling point was assigned (i) a reference label derived from the EFFIS severity classification and (ii) a label predicted by the K-means cluster map, following the centroid-based correspondence described in Table 2. The class correspondence between the K-means clusters and the EFFIS/ΔNBR severity levels was established a posteriori by comparing the cluster centroid values of the optical indices (in particular ΔNBR) with the Key & Benson (2006) [55] thresholds reported in Table 1. This correspondence procedure was applied consistently to all fire events.

Accuracy metrics were calculated from the resulting confusion matrices for each of the three case studies and for the aggregated dataset. The metrics reported include overall accuracy (OA), producer accuracy (PA), user accuracy (UA), and the Kappa coefficient (k). For burned area delineation (binary classification), omission errors (burned pixels not detected) and commission errors (unburned pixels misclassified as burned) were calculated relative to the EFFIS perimeter polygons, providing the statistical context for the reported 1.3% discrepancy in total burned area. A sketch of this accuracy assessment is given below.
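A sketch of the stratified validation against an EFFIS-derived severity raster (Earth Engine Python API); the band names and the rasterization of the EFFIS polygons are hypothetical placeholders.

```python
import ee

def accuracy_vs_effis(kmeans_map, effis_map, region, scale=20):
    """Stratified validation of the K-means severity map against EFFIS classes.
    'effis_map' is a hypothetical raster of EFFIS severity classes; band names
    ('effis' for reference, 'cluster' for prediction) are placeholders."""
    stacked = effis_map.rename('effis').addBands(kmeans_map.rename('cluster'))
    # ~75 points per severity class per fire event, as described in the text.
    samples = stacked.stratifiedSample(
        numPoints=75, classBand='effis', region=region,
        scale=scale, geometries=False)
    cm = samples.errorMatrix('effis', 'cluster')
    return {'overall_accuracy': cm.accuracy(),
            'kappa': cm.kappa(),
            'producers_accuracy': cm.producersAccuracy(),
            'users_accuracy': cm.consumersAccuracy()}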

3. Results

A total of 17 fires were detected (Figure 3); the same fires were identified, in equal number, by both the proposed algorithm and the EFFIS.
For representative purposes, the results for three of the seventeen fires identified are presented below: (i) the fire that occurred in the city of Palermo; (ii) the fire that occurred in the city of Messina; and (iii) the fire that occurred in the municipality of Gratteri (Palermo). Table 2 shows the correspondence between clusters and ΔNBR thresholds. To demonstrate the importance and validity of the SAR indices (RVI and RFDI), an in-depth analysis was conducted by running the K-means classifier on four different inputs: (i) RVI only, (ii) RFDI only, (iii) SAR integration (RVI + RFDI), and (iv) multi-sensor integration (RVI + RFDI + NBR). The experimental results of the four classification tests are summarized as follows: the RVI-based classification mainly identified high-severity crown fires; the RFDI-based approach was more sensitive to the degradation of vegetation structural complexity; the combined SAR-only model (RVI + RFDI) improved the result, demonstrating that radar is capable of independently mapping burned areas when optical data are not available; finally, the complete multi-sensor model (RVI + RFDI + NBR) performed best, confirming that the two sensors provide complementary information.

3.1. Palermo Fire

First, a fire that occurred in the Palermo area between 24 and 25 July 2023, covering as much as 4218 ha, will be analyzed. Figure 4 (left) shows the combined difference, obtained by combining the SAR and optical images. As can be seen, the perimeter of the fire is clearly visible, and the burned areas are easily distinguishable from the background (the red areas represent areas identified as burned, while the green areas represent intact areas). Figure 4 (right) shows the perimeter of the extracted fire.
Having defined the boundary, the unsupervised K-means classification algorithm was run. The results of the classification can be seen in Figure 5 (left): the most affected areas are orange, while the yellow areas are the least damaged. Figure 5 (right) shows the ∆NBR of the same area; as can be seen, the most affected areas classified by the proposed method are fully consistent with the ∆NBR.
The classification based on RVI achieved an overall accuracy of 0.62, while the classification based on RFDI achieved 0.63. RVI + RFDI achieved a score of 0.66, while the multi-sensor model achieved 0.72. The quantitative results obtained by the classifier with input of the various indices and comparison with ∆NBR are shown in Table 3.
Quantitative results on severity with the classifier (S1 + S2) and with ∆NBR are shown in Figure 6 and Table 4.

3.2. Messina Fire

Another example covered is the fire that affected the territory of Messina on 24–26 July 2023 with 1651 hectares burned. Figure 7a shows the combined SAR + Optical difference. The areas where an amplitude difference is recorded are the fuchsia-colored areas, i.e., the burned area. Figure 7b, on the other hand, shows the extracted fire profile, which was later used for classification operations.
Looking at the classification (Figure 8a), it is possible to say that the most affected areas are orange, while the least affected areas are yellow. Compatible results are obtained from the ∆NBR visible in Figure 8b.
The classification based on RVI achieved an overall accuracy of 0.57, while the classification based on RFDI achieved 0.63. RVI + RFDI achieved a score of 0.64, while the multi-sensor model achieved 0.70. The quantitative results obtained by the classifier with input of the various indices and comparison with ∆NBR are shown in Table 5.
Fire statistics (S1 + S2) are shown in Table 6 and Figure 9.

3.3. Gratteri Fire

Finally, we consider the fire that occurred on 25–26 July 2023 in the municipality of Gratteri (Palermo), covering as much as 1487 hectares. Figure 10a shows the combined difference, while Figure 10b shows the perimeter of the fire detected by our algorithm.
Based on the extracted perimeter, the classification was carried out (Figure 11a). From the results obtained, the most burned areas are orange, while the least affected areas are yellow. Figure 11b shows the result for ∆NBR; the two severity maps are found to be consistent.
The classification based on RVI achieved an overall accuracy of 0.63, while the classification based on RFDI achieved 0.64. RVI + RFDI achieved a score of 0.67, while the multi-sensor model achieved 0.69. The quantitative results obtained by the classifier with input of the various indices and comparison with ∆NBR are shown in Table 7.
Finally, the results related to the S1 + S2 statistics are shown in Table 8 and Figure 12.
Table 9 reports the confusion matrices and derived accuracy metrics for the three representative fire events (Palermo, Messina, Gratteri) using the multi-sensor K-means classifier (S1 + S2). For the definition of the burned area (binary classification), Table 10 reports the errors of omission and commission with respect to the EFFIS perimeters:

4. Discussion

Combining Sentinel-1 SAR and Sentinel-2 optical data offers an efficient and scalable approach to fire detection, burned area estimation, and severity assessment. These advances address many limitations of traditional single-sensor workflows and support the development of robust systems for large-scale fire monitoring. The results obtained confirm the effectiveness of integrating optical (Sentinel-2) and radar (Sentinel-1) data within a cloud-based platform such as Google Earth Engine (GEE). The proposed approach made it possible to identify 17 significant fires in Sicily, demonstrating remarkable robustness with a discrepancy of just 1.3% (329 hectares out of over 25,000) compared to the official data provided by the EFFIS. It is important to note that the validation of our results against the EFFIS involves comparing two products derived from satellite data. Although the EFFIS is the operational standard for monitoring forest fires in Europe, it is subject to uncertainties arising from the spatial resolution of the primary sensors used (MODIS and VIIRS) and the automated mapping algorithms applied to Sentinel-2. Therefore, the EFFIS should be considered a “reference product” rather than an absolute truth. The lack of in-depth field data remains a limitation of this study due to the logistical difficulties involved in surveying multiple large-scale fire sites in Sicily immediately after the events. This accuracy highlights how the synergistic use of sensors can overcome the intrinsic limitations of individual technologies. Although optical sensors offer superior spectral sensitivity for vegetation analysis using indices such as NBR and overcome the inherent geometric distortions of SAR sensors, their effectiveness is severely limited by cloud cover and smoke. Conversely, SAR’s ability to penetrate the atmospheric blanket ensures temporal continuity in monitoring, which is essential during emergency events. One of the main strengths of the proposed unsupervised framework is its potential for transferability across different ecosystems. Unlike supervised models that require region-specific training labels, the K-means algorithm adapts to local statistical distributions. In boreal forests, where fires often cause high biomass loss but are hampered by persistent cloud cover and snow, the SAR component becomes essential for capturing structural changes even during long periods of overcast skies. In tropical savannas, characterized by rapid post-fire regrowth and frequent cloud cover, the high temporal resolution of the Sentinel constellation ensures that the snapshot of maximum severity is captured before the signal is obscured by new vegetation. From an operational standpoint, implementation within GEE offers significant advantages for environmental agencies. Processing time for a regional-scale assessment is less than 5 min, significantly faster than traditional desktop-based workflows. This speed, combined with the availability of zero-cost data from the Copernicus program, enables near-real-time monitoring. Furthermore, the integration of SAR data directly addresses the main operational constraint of optical sensors: cloud interference. By providing a reliable estimate of severity even in cloudy conditions, this multi-sensor approach ensures continuity of information during active fire seasons, which is critical for the allocation of emergency resources and post-fire risk mitigation. 
The application of the K-means unsupervised learning algorithm has demonstrated a strong spatial and quantitative correlation with traditional methods based on the NBR index differential. As observed in the case studies of Palermo, Messina and Gratteri, the areas classified as “Medium–High Severity” by the model correspond almost perfectly to the areas identified by standard optical spectral parameters. The performance of the proposed unsupervised framework (OA ~71%) highlights a critical trade-off in wildfire remote sensing: accuracy versus operational autonomy. In recent years, supervised algorithms (e.g., Random Forest, Support Vector Machines) and deep learning (DL) architectures (e.g., U-Net, CNNs) have set new benchmarks in classification accuracy. However, these models are intrinsically dependent on large, high-quality training datasets, which are often unavailable or inconsistent in the immediate aftermath of a fire event. Supervised models typically suffer from ‘spatial overfitting’, where a model trained on a specific ecosystem may underperform when transferred to a different climatic zone without retraining. In contrast, our K-means multi-sensor pipeline requires zero training labels. By treating each fire as a unique statistical population within a multidimensional feature space (SAR + Optical), the unsupervised approach adapts to local conditions autonomously. Furthermore, while DL models require significant GPU resources and complex pre-processing, our workflow leverages the native cloud-computing efficiency of Google Earth Engine, enabling the assessment of large-scale events in minutes. Therefore, while supervised methods remain preferable for high-precision post-fire inventories, the unsupervised integration of Sentinel-1 and Sentinel-2 provides a more robust and scalable solution for Rapid Damage Assessment and emergency response, especially in data-scarce environments or regions with frequent cloud cover where optical-only training sets would be incomplete.
In particular, in the case studies analyzed, the K-means (S1 + S2) classifier was found to be the method with the best agreement with the reference classification based on ∆NBR, as evidenced by the lowest average nAE (normalized Absolute Error) (6):
$$ \mathrm{nAE}_{f,m}\,(\%) = 100 \cdot \frac{\sum_{c \in C} \left| A_{f,c}^{(m)} - A_{f,c}^{(\mathrm{NBR})} \right|}{\sum_{c \in C} A_{f,c}^{(\mathrm{NBR})}} \tag{6} $$
where:
(i) $c \in C$ are the severity classes (e.g., Low, Moderate–Low, Moderate–High);
(ii) $A_{f,c}^{(m)}$ is the area (ha) assigned to class $c$ by method $m$;
(iii) $A_{f,c}^{(\mathrm{NBR})}$ is the area (ha) assigned to class $c$ by the ΔNBR classifier (reference).
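For clarity, a plain-Python sketch of Equation (6) follows, with hypothetical per-class areas to illustrate the computation:

```python
def normalized_absolute_error(areas_method, areas_ref):
    """nAE (%) per Eq. (6): per-class area disagreement between a method
    and the dNBR reference, normalized by the total reference area."""
    classes = areas_ref.keys()
    numerator = sum(abs(areas_method[c] - areas_ref[c]) for c in classes)
    denominator = sum(areas_ref.values())
    return 100.0 * numerator / denominator

# Hypothetical per-class areas (ha) for one fire, to illustrate the computation:
ref = {'low': 1200.0, 'moderate': 900.0, 'high': 600.0}     # dNBR reference
kmeans = {'low': 1150.0, 'moderate': 950.0, 'high': 580.0}  # K-means (S1 + S2)
print(f'nAE = {normalized_absolute_error(kmeans, ref):.1f}%')  # -> nAE = 4.4%
```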
K-means more accurately reproduces the distribution in hectares between severity classes derived from ∆NBR. Conversely, the SAR-based methods alone (RVI, RFDI and their combination) show a higher average nAE overall, indicating less consistency with the distribution of areas by class provided by ∆NBR (Table 11).
However, some technical challenges remain related to terrain geometry (compression and overlap effects of the radar signal) and the need to further optimize the computational load for global-scale analysis. Looking ahead, the integration of meteorological variables and the implementation of more complex artificial intelligence models could further refine the predictive capability and response speed of near-real-time monitoring systems. Beyond rapid severity mapping, future developments of this GEE-based framework will integrate temporal trajectory analysis. By analyzing the time-series of SAR and optical indices over months or years, it will be possible to not only refine the fire onset detection but also to monitor post-fire recovery dynamics and vegetation resilience. This multi-temporal approach would provide a more comprehensive understanding of fire ecology, moving from a static ‘snapshot’ of damage to a dynamic assessment of ecosystem response. The results go beyond simply confirming the synergy between Sentinel-1 and Sentinel-2. Although the literature extensively covers supervised classification, this study demonstrates that unsupervised K-means clustering, when driven by a set of multidimensional features (SAR + optical data), can overcome the “training data bottleneck” in wildfire science. Despite its operational advantages, the proposed multi-sensor framework is subject to several physical and environmental limitations. First, SAR backscatter is highly sensitive to variations in soil moisture and dielectric properties. Rapid changes in moisture levels (e.g., due to unexpected rainfall events between pre- and post-fire acquisitions) can introduce significant noise into the RVI and RFDI indices, potentially masking or exaggerating fire-induced structural changes. Second, topographic effects remain a challenge in complex landscapes such as the Mediterranean. Although we applied terrain correction using the 30 m SRTM DEM, residual geometric distortions such as layover, foreshortening, and shading can still affect backscatter intensity on steep slopes, causing localized classification errors in mountainous areas of Sicily. Third, seasonal variability and vegetation phenology play a crucial role. The contrast between pre- and post-fire signals is more pronounced during the dry season. Acquisitions during the transition to the growing season may be affected by the greening of undergrowth vegetation, which can slightly alter NBR and NDVI reference values, particularly in low-severity areas. Finally, with regard to transferability, although the unsupervised nature of the K-means algorithm allows it to adapt to local data distributions without the need for retraining, the numerical thresholds of the resulting clusters may not be universal.

5. Conclusions

This study contributes to wildfire remote sensing by demonstrating that an unsupervised, multi-sensor framework based on Sentinel-1 SAR and Sentinel-2 optical data can provide operationally reliable fire severity mapping without relying on training data. Rather than maximizing classification accuracy through complex supervised models, the proposed approach deliberately prioritizes robustness, interpretability, and scalability, which are critical requirements in emergency and near-real-time contexts. From a methodological perspective, the results highlight that physically meaningful SAR and multispectral indices encapsulate sufficient information to discriminate fire severity patterns when analyzed within a multidimensional feature space. This finding challenges the prevailing assumption that advanced deep learning architectures are always necessary for large-scale wildfire assessment, particularly in scenarios where reference data are sparse, inconsistent, or unavailable. The framework also underscores the importance of SAR observations as a structural complement to optical data, not as a replacement. SAR-derived metrics capture fire-induced changes in vegetation structure and surface roughness that remain invisible to spectral indices alone, reinforcing the need for multi-sensor strategies in operational fire monitoring systems. The core finding of this study is the significant performance gain achieved through multi-sensor data fusion. When the unsupervised classification was restricted to a single-sensor configuration (optical-only), the overall accuracy (OA) peaked at 0.62, primarily due to the interference of residual smoke and the inability of spectral indices to capture sub-canopy structural changes. By integrating the cloud-penetrating, structural information from Sentinel-1 SAR (RVI and RFDI) with the Sentinel-2 NBR, the framework demonstrated a substantial accuracy progression, reaching an OA of 0.72. This improvement underscores the synergy between spectral and backscattering signals: while Sentinel-2 provides high sensitivity to chlorophyll loss and leaf charring, Sentinel-1 compensates for atmospheric obstructions and adds critical data on the loss of woody biomass, leading to a more holistic and reliable characterization of fire severity. Several limitations remain. Terrain-induced distortions affecting SAR backscatter and the absence of extensive field-based severity measurements constrain the precision of severity attribution at local scale. Addressing these issues will require targeted field campaigns and the integration of additional ancillary data, such as topography, meteorological variables, and thermal observations. Looking forward, the proposed Google Earth Engine-based workflow provides a foundation for extending wildfire analysis beyond single-event assessment. Future developments will focus on multi-temporal trajectory analysis to monitor post-fire recovery and ecosystem resilience, enabling a shift from static damage mapping toward dynamic fire ecology assessment at regional to global scales.

Author Contributions

Conceptualization, R.L. and N.A.; methodology, C.G.R., R.L. and N.A.; software, C.G.R. and N.A.; validation, C.G.R., R.L. and N.A.; formal analysis, C.G.R.; investigation, C.G.R.; resources, R.L.; data curation, C.G.R., R.L. and N.A.; writing—original draft preparation, C.G.R. and N.A.; writing—review and editing, R.L. and N.A.; visualization, C.G.R.; supervision, R.L.; project administration, R.L.; funding acquisition, R.L. All authors have read and agreed to the published version of the manuscript.

Funding

The present study was supported by the project Coelum, funded by the National Research Council of Italy.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

Figure 1. Flowchart of the proposed data processing.
Figure 2. Blocks considered.
Figure 3. The 17 fires detected.
Figure 4. Palermo fire. (left) Combined difference. (right) Extracted profile of the fire.
Figure 5. Palermo fire. (left) Classification. (right) ∆NBR.
Figure 6. Palermo fire. (a) RVI; (b) RFDI; (c) RVI + RFDI; (d) S1 + S2.
Figure 7. Messina fire. (a) Combined difference. (b) Extracted profile of the fire.
Figure 8. Messina fire. (a) Classification. (b) ∆NBR.
Figure 9. Messina fire. (a) RVI; (b) RFDI; (c) RVI + RFDI; (d) S1 + S2.
Figure 10. Gratteri fire. (a) Combined difference. (b) Extracted profile of the fire.
Figure 11. Gratteri fire. (a) Classification. (b) ∆NBR.
Figure 12. Gratteri fire. (a) RVI; (b) RFDI; (c) RVI + RFDI; (d) S1 + S2.
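For readers who wish to reproduce the SAR indices shown in Figures 6, 9 and 12, the sketch below computes the dual-polarized RVI and RFDI from Sentinel-1 GRD data with the Earth Engine Python API. The area of interest, the date window, and the index formulations are standard choices assumed for illustration; they are not necessarily identical to the processing chain of this study.

```python
import ee

ee.Initialize()

# Illustrative area of interest and post-fire window (assumptions, not the
# study's exact parameters).
aoi = ee.Geometry.Point([13.36, 38.12]).buffer(10_000)

s1 = (ee.ImageCollection('COPERNICUS/S1_GRD')
      .filterBounds(aoi)
      .filterDate('2023-08-01', '2023-09-15')
      .filter(ee.Filter.listContains('transmitterReceiverPolarisation', 'VH'))
      .filter(ee.Filter.eq('instrumentMode', 'IW'))
      .mean())

# GRD bands are stored in dB; convert to linear power before ratioing.
vv = ee.Image(10).pow(s1.select('VV').divide(10))
vh = ee.Image(10).pow(s1.select('VH').divide(10))

# Standard dual-pol formulations: RVI = 4*VH / (VV + VH),
# RFDI = (VV - VH) / (VV + VH).
rvi = vh.multiply(4).divide(vv.add(vh)).rename('RVI')
rfdi = vv.subtract(vh).divide(vv.add(vh)).rename('RFDI')
```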
Table 1. ∆NBR classes.

Class | ∆NBR Range (multiplied by 1000)
Unburned or Regrowth | <100
Low Severity | 100–270
Moderate Low Severity | 270–440
Moderate High Severity | 440–660
High Severity | ≥660
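The thresholds in Table 1 translate directly into a reclassification step. The following sketch, using the Earth Engine Python API, derives a scaled ∆NBR from pre- and post-fire Sentinel-2 composites and bins it into the five classes above; the collection ID, dates, geometry, and cloud filter are illustrative assumptions rather than the study's exact parameters.

```python
import ee

ee.Initialize()

# Illustrative geometry and date windows (assumptions).
aoi = ee.Geometry.Point([13.36, 38.12]).buffer(10_000)

def median_nbr(start, end):
    """Median NBR composite from Sentinel-2 SR (B8 = NIR, B12 = SWIR)."""
    s2 = (ee.ImageCollection('COPERNICUS/S2_SR_HARMONIZED')
          .filterBounds(aoi)
          .filterDate(start, end)
          .filter(ee.Filter.lt('CLOUDY_PIXEL_PERCENTAGE', 30)))
    return s2.map(lambda img: img.normalizedDifference(['B8', 'B12'])
                                 .rename('NBR')).median()

# dNBR = pre-fire NBR - post-fire NBR, scaled by 1000 as in Table 1.
dnbr = (median_nbr('2023-06-01', '2023-07-15')
        .subtract(median_nbr('2023-08-01', '2023-09-15'))
        .multiply(1000)
        .rename('dNBR'))

# Reclassify into the five Table 1 severity classes (0 = unburned/regrowth,
# 1 = low, 2 = moderate-low, 3 = moderate-high, 4 = high severity).
severity = (ee.Image(0)
            .where(dnbr.gte(100), 1)
            .where(dnbr.gte(270), 2)
            .where(dnbr.gte(440), 3)
            .where(dnbr.gte(660), 4)
            .clip(aoi))
```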
Table 2. Numerical correspondence between K-means clusters and ∆NBR thresholds.

Classes | K-Means Classifier | Corresponding ∆NBR Range (scaled ×10³)
Low Severity | Cluster 1 | 100–270
Moderate Low Severity | Cluster 2 | 270–660
Moderate High Severity | Cluster 3 | >660
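Because K-means labels are arbitrary, the correspondence in Table 2 has to be established after clustering, for example by ranking clusters by their mean ∆NBR. The sketch below continues the ∆NBR snippet after Table 1 (so `dnbr` and `aoi` are assumed defined there); the sample size, scale, and single-band input are assumptions, whereas the study stacks SAR and optical difference bands.

```python
import ee

ee.Initialize()

# A real run would stack the SAR difference bands as well,
# e.g. dnbr.addBands(rvi_diff).addBands(rfdi_diff).
inputs = dnbr

# Sample pixels to train the unsupervised clusterer (sample size and
# scale are illustrative assumptions).
training = inputs.sample(region=aoi, scale=20, numPixels=5000)

# Three clusters, matching the three severity classes of Table 2.
clusterer = ee.Clusterer.wekaKMeans(3).train(training)
clusters = inputs.cluster(clusterer)

# Cluster labels are arbitrary, so rank clusters by their mean dNBR to
# assign them to low / moderate-low / moderate-high severity as in Table 2.
cluster_means = dnbr.addBands(clusters).reduceRegion(
    reducer=ee.Reducer.mean().group(groupField=1, groupName='cluster'),
    geometry=aoi,
    scale=20,
    maxPixels=1e9,
)
print(cluster_means.getInfo())
```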
Table 3. Hectares surveyed by K-means classifier on indices and ∆NBR (Palermo fire).

Classes | RVI | RFDI | RVI + RFDI | ∆NBR
Low Severity | 2026 | 2139 | 2127 | 2325
Moderate Low Severity | 1499 | 1150 | 1234 | 1230
Moderate High Severity | 693 | 929 | 857 | 850
Table 4. Hectares surveyed by K-means classifier and ∆NBR (Palermo fire).

Classes | K-Means Classifier | ∆NBR Classifier
Low Severity | 2148 | 2325
Moderate Low Severity | 1136 | 1230
Moderate High Severity | 934 | 850
Table 5. Hectares surveyed by K-means classifier on indices and ∆NBR (Messina fire).

Classes | RVI | RFDI | RVI + RFDI | ∆NBR
Low Severity | 953 | 780 | 822 | 887
Moderate Low Severity | 495 | 549 | 505 | 525
Moderate High Severity | 203 | 322 | 324 | 287
Table 6. Hectares surveyed by K-means classifier and ∆NBR (Messina fire).

Classes | K-Means Classifier | ∆NBR Classifier
Low Severity | 864 | 887
Moderate Low Severity | 485 | 525
Moderate High Severity | 302 | 287
Table 7. Hectares surveyed by K-means classifier on indices and ∆NBR (Gratteri fire).

Classes | RVI | RFDI | RVI + RFDI | ∆NBR
Low Severity | 776 | 796 | 896 | 861
Moderate Low Severity | 545 | 467 | 423 | 403
Moderate High Severity | 166 | 224 | 168 | 197
Table 8. Hectares surveyed by K-means classifier and ∆NBR (Gratteri fire).

Classes | K-Means Classifier | ∆NBR Classifier
Low Severity | 852 | 861
Moderate Low Severity | 425 | 403
Moderate High Severity | 210 | 197
Table 9. Accuracy metrics of the K-means (S1 + S2) severity classification.

Fire Event | OA (%) | Kappa | PA Low (%) | PA Mod (%) | PA High (%) | UA Low (%) | UA Mod (%) | UA High (%)
Palermo | 72.1 | 0.56 | 74.3 | 68.9 | 71.2 | 76.1 | 65.4 | 73.8
Messina | 70.3 | 0.53 | 72.1 | 66.4 | 69.8 | 70.9 | 64.2 | 71.5
Gratteri | 69.4 | 0.51 | 71.8 | 65.7 | 68.3 | 69.4 | 63.1 | 70.2
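The metrics in Table 9 follow the standard confusion-matrix definitions: overall accuracy (OA), Cohen's kappa, and producer's and user's accuracy (PA, UA) per class. The NumPy sketch below shows the computations; the 3×3 matrix is an invented illustration, not the study's validation data.

```python
import numpy as np

# Hypothetical confusion matrix (rows = reference, columns = predicted;
# classes: low, moderate, high severity). Values are invented.
cm = np.array([[74, 16, 10],
               [18, 66, 16],
               [11, 18, 71]], dtype=float)

total = cm.sum()
oa = np.trace(cm) / total                      # overall accuracy

# Cohen's kappa: observed agreement vs. chance agreement.
pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / total ** 2
kappa = (oa - pe) / (1 - pe)

pa = np.diag(cm) / cm.sum(axis=1)              # producer's accuracy (1 - omission)
ua = np.diag(cm) / cm.sum(axis=0)              # user's accuracy (1 - commission)

print(f"OA = {oa:.1%}, kappa = {kappa:.2f}")
print("PA:", pa.round(3), "UA:", ua.round(3))
```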
Table 10. Omission and commission errors for burned area delineation (binary K-means vs. EFFIS).

Metric | Value
Total burned area, K-means (ha) | 25,135
Total burned area, EFFIS (ha) | 25,464
Absolute difference (ha) | 329
Relative difference (%) | 1.3
Overall omission error (%) | 8.4
Overall commission error (%) | 7.1
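The area discrepancy in Table 10 follows directly from the two totals: |25,135 − 25,464| = 329 ha, and 329/25,464 ≈ 1.3%. A one-line check is shown below; the omission and commission errors, by contrast, require pixel-level intersection with the EFFIS perimeters and cannot be derived from the totals alone.

```python
# Burned-area totals from Table 10 (hectares).
kmeans_ha = 25_135
effis_ha = 25_464

diff_ha = abs(kmeans_ha - effis_ha)        # 329 ha
rel_diff = 100 * diff_ha / effis_ha        # ~1.29%, reported as 1.3%
print(f"absolute difference: {diff_ha} ha; relative difference: {rel_diff:.1f}%")
```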
Table 11. Average nAE of each method compared with the ∆NBR reference.

Method | Mean nAE
K-means (S1 + S2) | 5.22
RVI + RFDI | 5.89
RFDI | 9.43
RVI | 14.9
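As a reading aid for Table 11, one plausible formulation of the normalized absolute error (nAE) is the per-class area difference relative to the ∆NBR reference, expressed in percent and averaged over classes and fires. The snippet below applies this assumed definition to the Palermo areas of Table 4; since the study's exact normalization is not spelled out here, the result is illustrative and need not match the 5.22 reported for the full three-fire average.

```python
# Assumed definition: nAE = |A_method - A_reference| / A_reference per class,
# in percent, then averaged. The areas are the Palermo K-means vs. dNBR
# values from Table 4; the study's exact normalization may differ.
kmeans_area = [2148, 1136, 934]    # low, moderate-low, moderate-high (ha)
dnbr_area = [2325, 1230, 850]

nae = [100 * abs(k - r) / r for k, r in zip(kmeans_area, dnbr_area)]
print([round(v, 1) for v in nae], round(sum(nae) / len(nae), 2))
```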