One of the primary applications of remote sensing in wildfire science is the accurate mapping of burned areas, a process that varies depending on the scale and objectives of the study. At local scales, high- and medium-resolution sensors (<100 m) are employed to detect spectral differences between pre- and post-fire imagery, facilitating the precise identification of burned regions [
7,
8]. Burned area estimation and fire severity assessment are closely linked, as both rely on similar methodologies. A common approach uses active fire data to count the number of affected pixels and derives the total burned area from the pixel size [9].
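To illustrate this pixel-counting approach, a minimal Google Earth Engine (Python API) sketch is shown below; the mask asset, study-area geometry, and 20 m scale are hypothetical placeholders rather than values prescribed by the cited studies.

```python
import ee

ee.Initialize()

# Hypothetical inputs: a binary burned mask (1 = burned) and a study-area
# polygon; both are placeholders for illustration only.
burned_mask = ee.Image('users/example/burned_mask').rename('burned')
region = ee.Geometry.Rectangle([14.0, 37.5, 14.5, 38.0])

# Total burned area = sum of per-pixel areas (m^2) over the burned pixels.
stats = (burned_mask.selfMask()
         .multiply(ee.Image.pixelArea())
         .reduceRegion(reducer=ee.Reducer.sum(),
                       geometry=region,
                       scale=20,  # assumed nominal pixel size (m)
                       maxPixels=1e9))

burned_ha = ee.Number(stats.get('burned')).divide(1e4)  # m^2 -> hectares
print('Burned area (ha):', burned_ha.getInfo())
```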
Over the years, several fire detection and burned area mapping algorithms have been developed. For instance, Giglio et al. [
10] introduced a method that adjusts burned area estimates by incorporating tree and herbaceous cover fractions, thereby improving prediction accuracy. Roy et al. [
9] demonstrated the utility of MODIS sensors for burned area mapping at national and regional scales, while Mahdianpari et al. [
11] proposed a methodology that combines MODIS data with the Normalized Burn Ratio (NBR) and temporal texture analysis to enhance fire detection capabilities. The severity of wildfires significantly influences post-fire ecosystem recovery. Fire severity assessments help evaluate vegetation mortality, soil nutrient composition, and hydrological responses. The Composite Burn Index (CBI) is commonly used for field-based fire severity assessments, while remote sensing-based indices such as the Normalized Difference Vegetation Index (NDVI) and NBR are widely employed for spectral analysis [
12,
13]. NBR has become the standard for assessing burn severity by leveraging near-infrared (NIR) and shortwave infrared (SWIR) bands to detect changes in vegetation and soil conditions following a fire. The ΔNBR (differenced NBR) technique is frequently used for Burned Area Emergency Response (BAER) assessments, generating Burned Area Reflectance Classification (BARC) maps that aid in post-fire management and rehabilitation planning [
14]. However, traditional fire severity assessment methods have limitations. Miller and Thode [
15] noted that ΔNBR measures absolute change and may therefore perform poorly in sparsely vegetated areas. To address this issue, the relative differenced NBR (RdNBR) was introduced to improve classification accuracy in heterogeneous landscapes. Further refinements, such as the Relativized Burn Ratio (RBR), have been proposed to strengthen the correspondence with field-based CBI measurements [16].
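To make these index definitions concrete, the sketch below computes NBR, ΔNBR, RdNBR, and RBR from Sentinel-2 composites in the Google Earth Engine Python API. The date windows, area of interest, cloud filter, and median compositing are illustrative assumptions, not the processing chains of the cited studies.

```python
import ee

ee.Initialize()

region = ee.Geometry.Rectangle([14.0, 37.5, 14.5, 38.0])  # placeholder AOI

def nbr_composite(start, end):
    """Median Sentinel-2 composite -> NBR = (NIR - SWIR2) / (NIR + SWIR2)."""
    img = (ee.ImageCollection('COPERNICUS/S2_SR_HARMONIZED')
           .filterBounds(region)
           .filterDate(start, end)
           .filter(ee.Filter.lt('CLOUDY_PIXEL_PERCENTAGE', 20))
           .median())
    return img.normalizedDifference(['B8', 'B12']).rename('NBR')  # B8 = NIR, B12 = SWIR2

pre_nbr = nbr_composite('2023-06-01', '2023-07-01')   # assumed pre-fire window
post_nbr = nbr_composite('2023-08-01', '2023-09-01')  # assumed post-fire window

# dNBR: absolute change; positive values indicate burning.
dnbr = pre_nbr.subtract(post_nbr).rename('dNBR')

# RdNBR: dNBR relativized by pre-fire NBR (unscaled form, NBR in [-1, 1]).
rdnbr = dnbr.divide(pre_nbr.abs().sqrt()).rename('RdNBR')

# RBR: dNBR / (preNBR + 1.001); the offset avoids division by zero.
rbr = dnbr.divide(pre_nbr.add(1.001)).rename('RBR')
```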
Additionally, emerging research explores alternative approaches, including emissivity-enhanced spectral indices, variations in land surface temperature (LST), and machine learning-based classification techniques [
17,
18]. Synthetic Aperture Radar (SAR) has emerged as a valuable complement to optical remote sensing for fire monitoring. Unlike optical sensors, SAR operates independently of atmospheric conditions, providing consistent observations regardless of cloud cover or smoke. SAR-based burned area mapping leverages backscatter variations caused by fire-induced changes in vegetation structure, moisture content, and dielectric properties [
19,
20]. Research has demonstrated the sensitivity of SAR indices, such as the Radar Vegetation Index (RVI) and Dual-Polarized SAR Vegetation Index (DPSVI), to fire-induced alterations in biomass and canopy structure [21,22].
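As an illustration, the following sketch derives the dual-polarized RVI, computed in linear power units as RVI = 4·VH / (VV + VH), from Sentinel-1 GRD data in the GEE Python API; the filters, dates, and area of interest are demonstration assumptions, and DPSVI would be formed analogously from the same VV/VH bands.

```python
import ee

ee.Initialize()

region = ee.Geometry.Rectangle([14.0, 37.5, 14.5, 38.0])  # placeholder AOI

# Mean Sentinel-1 IW composite; GRD backscatter is stored in dB.
s1_db = (ee.ImageCollection('COPERNICUS/S1_GRD')
         .filterBounds(region)
         .filterDate('2023-06-01', '2023-07-01')
         .filter(ee.Filter.eq('instrumentMode', 'IW'))
         .filter(ee.Filter.listContains('transmitterReceiverPolarisation', 'VV'))
         .filter(ee.Filter.listContains('transmitterReceiverPolarisation', 'VH'))
         .select(['VV', 'VH'])
         .mean())

# Convert dB to linear power before forming ratios.
vv = ee.Image(10).pow(s1_db.select('VV').divide(10))
vh = ee.Image(10).pow(s1_db.select('VH').divide(10))

# Dual-pol Radar Vegetation Index: RVI = 4 * VH / (VV + VH).
rvi = vh.multiply(4).divide(vv.add(vh)).rename('RVI')
```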
Integrating SAR with optical data enhances the robustness of fire monitoring systems, particularly in regions with frequent cloud cover. The European Space Agency’s Sentinel-1 and Sentinel-2 missions, part of the Copernicus program, provide free and open access to high-resolution SAR and multispectral data. Sentinel-1’s C-band SAR data have been widely utilized in wildfire studies, demonstrating sensitivity to vegetation and environmental changes caused by fire [
23]. Meanwhile, Sentinel-2’s multispectral imagery offers critical insights into post-fire vegetation recovery, enabling synergistic applications with SAR data for improved fire mapping and severity assessment [
24]. Recent advancements in cloud computing and big data analytics have further enhanced the capabilities of remote sensing for wildfire research. Platforms such as Google Earth Engine (GEE) offer high-performance computing environments for processing large datasets and applying advanced machine learning models to fire detection and mapping [
25]. Additionally, near-real-time (NRT) fire monitoring systems leveraging Sentinel-3 (OLCI/SLSTR) and VIIRS thermal anomalies have emerged, enabling the rapid generation of global burned area products within 24 h of data acquisition. High-resolution commercial missions (e.g., PlanetScope) are increasingly used for local-scale burned area mapping, often integrated into U-Net or Transformer-based segmentation models. New research also emphasizes transferable and generalizable neural networks capable of operating across diverse ecosystems, although domain adaptation and training data imbalance remain ongoing challenges. Overall, recent progress highlights a clear trend toward integrated, multi-sensor, AI-driven wildfire monitoring frameworks. The foundations for these advances were laid in the period 2020–2025, which saw rapid maturation of deep learning architectures and data fusion strategies. During this time, convolutional neural networks (CNNs) were widely adopted for spatial pattern recognition in optical data. Tiengo et al. (2022) demonstrated the effectiveness of a deep CNN (DeepLabV3+) on Sentinel-2 images for mapping burned areas in the Mediterranean region, outperforming threshold-based spectral-index methods and achieving an IoU greater than 80% [
26]. At the same time, the first large-scale applications of Transformers in remote sensing emerged. Qurratulain et al. (2023) proposed Burnt-Net, a CNN–Transformer hybrid, which improved the delineation of burned area boundaries by exploiting the Transformer’s ability to model long-range spatial dependencies [
27]. The synergistic integration of SAR and optical data became a pillar of research in this period, largely to overcome the limitations associated with cloud cover. Zhang, Qi et al. (2021) combined Sentinel-1 (VV/VH coherence) and Sentinel-2 (NBR indices) time series within a Random Forest model on GEE, achieving more robust and earlier detection of fire-affected areas than either sensor alone [
28]. On the near-real-time monitoring front, systems based on VIIRS and Sentinel-3 SLSTR thermal anomalies matured. The work of Lizundia-Loiola et al. (2020) on GEE optimized the MODIS Active Fire algorithm for VIIRS data, enabling the generation of global hotspot maps with a latency of a few hours, a crucial step towards today’s operational systems [
29]. Remote sensing technologies continue to evolve rapidly, providing increasingly powerful tools for wildfire detection, monitoring, and assessment. Deep learning methods have become particularly central: models such as CNNs, U-Net, and Vision Transformers (ViT) have recently been applied successfully to optical data from Sentinel-2 to map burned areas. For example, Yilmaz & Kavzoglu (2024) developed a CNN combined with Explainable AI (the SHAP method) on Sentinel-2A images, achieving ~98.9% accuracy in detecting burned areas and identifying the NBR, dNBR, and NDVI indices as the most influential inputs [
30]. In parallel, using very-high-resolution optical data, Kim, Lee & Park (2024) employed PlanetScope images with a convolutional network to map burned areas, demonstrating the effectiveness of high-resolution commercial data for local studies [
31]. With regard to fire severity prediction, Sykas, Zografakis & Demestichas (2024) compared segmentation networks (U-Net) and a Vision Transformer using the EO4WildFires dataset, integrating meteorological, optical (Sentinel-2), and SAR (Sentinel-1) data [
32]. Other recent studies are moving towards Transformer architectures: Rad (2024) proposed a U-shaped Vision Transformer for active fire detection on Landsat-8 data, achieving an F1 score of ~90% [
33]. On the multi-sensor front, Chen et al. (2024) conducted a comparative study of machine learning methods (SVM, Random Forest, Neural Network) on Sentinel-1B (SAR) and Sentinel-2A data, with SVM achieving up to 93.5% accuracy and RF excelling in the post-fire phase using the NBR index [
34]. In addition, 2025 saw the emergence of near-real-time monitoring systems based on Sentinel-3 OLCI and SLSTR data integrated with VIIRS. Padilla, Ramo, Gómez-Dans et al. (2025) describe a deep learning-based approach used by the Copernicus Land Monitoring Service to generate burned area maps within one day of image acquisition [
35]. Studies have also addressed more specific phenomena: a 2025 study on incipient wildfire smoke detection exploited Sentinel-2 bands, using a neural network with a sigmoidal activation function and a momentum gradient descent (MGD) optimizer to distinguish smoke from clouds [
36]. Finally, a recent study (Natural Hazards, 2025) demonstrated the effectiveness of classical machine learning methods (SVM) on Landsat-8 data for detecting active fires in Australia, showing that combining SWIR and NIR bands with the Normalized Difference Fire Index (NDFI) improves fire discrimination, although challenges related to model interpretability remain [
37]. Recent studies have increasingly demonstrated that the synergy between different sensors provides a more holistic view of fire impacts, especially in complex topographies. Specifically, the fusion of Sentinel-1 C-band SAR and Sentinel-2 multispectral data has proven effective in mitigating the limitations of individual sensors [
38,
39,
40,
41]. This study aims to leverage the combined potential of optical and SAR remote sensing, integrated within a GEE framework, to improve fire detection, burned area estimation, and severity assessment. Despite their strong performance, deep learning (DL) architectures often present conceptual and operational obstacles. Conceptually, supervised methods require large, high-quality labeled datasets for training, which are often insufficient for specific regions or difficult to obtain in near-real-time (NRT) scenarios. Operationally, these models demand substantial computational resources and complex adaptation to transfer between different ecosystems. The approach presented here leverages the power of cloud computing, delivering near-instantaneous results because there is no training phase. It is a plug-and-play workflow with no dependencies on complex external libraries or paid resources. A specific knowledge gap concerns the development of multi-sensor frameworks that are both automated and independent of pre-existing training labels. This work addresses that gap by proposing an unsupervised approach based on K-means clustering within GEE. Unlike supervised ML, which learns from historical data, our approach relies on the intrinsic statistical structure of multi-sensor indices (SAR and optical) for a specific event. Operationally, this eliminates the training phase, making the workflow lightweight, independent of training data, and easily scalable for monitoring forest fires at a global scale, where ground reference data are often unavailable. By harnessing Sentinel-1 and Sentinel-2 data, we seek to develop an efficient methodology for large-scale fire monitoring that overcomes the limitations of single-sensor approaches and enhances the accuracy of fire impact assessments. The proposed methodology will be applied to recent wildfire events to validate its effectiveness in real-world scenarios.
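To ground this description, a minimal sketch of the unsupervised workflow is given below: an optical ΔNBR band and SAR log-ratio change bands are stacked and clustered with K-means in the GEE Python API. The area of interest, date windows, two-cluster setting, and sample size are illustrative assumptions indicating the general pattern, not the exact parameterization adopted in this study.

```python
import ee

ee.Initialize()

region = ee.Geometry.Rectangle([14.0, 37.5, 14.5, 38.0])  # placeholder AOI
pre, post = ('2023-06-01', '2023-07-01'), ('2023-08-01', '2023-09-01')

def s2_nbr(start, end):
    """Median Sentinel-2 composite -> NBR (B8 = NIR, B12 = SWIR2)."""
    img = (ee.ImageCollection('COPERNICUS/S2_SR_HARMONIZED')
           .filterBounds(region).filterDate(start, end).median())
    return img.normalizedDifference(['B8', 'B12'])

def s1_linear(start, end):
    """Mean Sentinel-1 IW VV/VH composite, converted from dB to linear power."""
    db = (ee.ImageCollection('COPERNICUS/S1_GRD')
          .filterBounds(region).filterDate(start, end)
          .filter(ee.Filter.eq('instrumentMode', 'IW'))
          .filter(ee.Filter.listContains('transmitterReceiverPolarisation', 'VH'))
          .select(['VV', 'VH']).mean())
    return ee.Image.cat(ee.Image(10).pow(db.select('VV').divide(10)),
                        ee.Image(10).pow(db.select('VH').divide(10))).rename(['VV', 'VH'])

# Multi-sensor feature stack: optical dNBR + SAR log-ratio change (dB).
dnbr = s2_nbr(*pre).subtract(s2_nbr(*post)).rename('dNBR')
sar_change = (s1_linear(*post).divide(s1_linear(*pre))
              .log10().multiply(10).rename(['dVV', 'dVH']))
features = dnbr.addBands(sar_change)

# Unsupervised K-means: sample the stack, train without labels, map clusters.
sample = features.sample(region=region, scale=20, numPixels=5000, seed=42)
clusters = features.cluster(ee.Clusterer.wekaKMeans(2).train(sample))
# The burned cluster can then be identified, e.g., as the one with the higher mean dNBR.
```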