Review

Deep Learning Approaches for Wildland Fires Using Satellite Remote Sensing Data: Detection, Mapping, and Prediction

Perception, Robotics and Intelligent Machines (PRIME), Department of Computer Science, Université de Moncton, Moncton, NB E1A 3E9, Canada
* Author to whom correspondence should be addressed.
Fire 2023, 6(5), 192; https://doi.org/10.3390/fire6050192
Submission received: 4 April 2023 / Revised: 26 April 2023 / Accepted: 4 May 2023 / Published: 7 May 2023
(This article belongs to the Special Issue Advances in Forest Fire Behaviour Modelling Using Remote Sensing)

Abstract:
Wildland fires are one of the most dangerous natural risks, causing significant economic damage and loss of lives worldwide. Every year, millions of hectares are lost, and experts warn that the frequency and severity of wildfires will increase in the coming years due to climate change. To mitigate these hazards, numerous deep learning models have been developed to detect and map wildland fires, estimate their severity, and predict their spread. In this paper, we provide a comprehensive review of recent deep learning techniques for detecting, mapping, and predicting wildland fires using satellite remote sensing data. We begin by introducing remote sensing satellite systems and their use in wildfire monitoring. Next, we review the deep learning methods employed for these tasks, including fire detection and mapping, severity estimation, and spread prediction. We further present the popular datasets used in these studies. Finally, we address the challenges these models face in accurately predicting wildfire behaviors, and suggest future directions for developing reliable and robust wildland fire models.

1. Introduction

Wildland fires are a natural disaster resulting in losses of property, life, and homes, as well as heavy damage to natural resources such as soil, forests, biodiversity, and wildlife. For example, in Canada, the year 2022 was among the most intense fire seasons, with 5449 fires burning 1,610,216 hectares [1]. Researchers have introduced adequate and effective wildfire detection systems using aerial images as well as ground images captured from terrestrial positions. A plethora of wildfire detection systems have been developed over the years [2,3,4]. Satellite systems were also applied to address the wildland fire problem due to their reliability, high availability, fast execution, and capacity to monitor very large areas [5]. They acquire data on wildfires using numerous sensors: thermal and optical sensors that detect physical characteristics of light such as wavelength, intensity, and polarization; vision sensors that collect visual data from the environment; and radar sensors that produce accurate surface information. In addition, researchers have used satellite data to study the behaviors of forest fires, as well as their impact on a worldwide scale, providing important information such as the number of wildfires, their rate of spread, their size, and their evolution. Moreover, to monitor wildland fires, satellite remote sensing data have been used in a wide variety of applications related to fire research and management, such as fire danger assessment [6], fuel moisture content [7], fuel types [8], active fire [9], fire effects [10], post-fire analysis [11], and fire propagation [12]. The detection and analysis of active fires can be applied in many fields, such as identifying the source of pollution in air quality analysis, locating the initial burned areas, and predicting the spread and growth of fires. As a result, satellite remote sensing data have played a very important role in advancing wildfire research and management, and in helping to develop efficient strategies to reduce the impacts of wildland fires.
Deep learning (DL)-based approaches have recently been used and have shown promising results in satellite remote sensing applications such as damage mapping [13], anomaly detection [14], classification [15], water segmentation [16], weather forecasting [17], cloud cover detection [18], and forest disturbance segmentation [19]. In addition, DL models have demonstrated their ability, compared to machine learning (ML) methods, to detect wildfires by identifying their presence, mapping their extent and location, and predicting their behavior and potential impacts using satellite data. To the best of our knowledge, there is a noticeable lack of comprehensive reviews covering recent DL models for wildland fire applications using satellite data; existing surveys differ in scope. Barmpoutis et al. [20] presented a comprehensive review of fire detection systems, including ground, airborne, and spaceborne systems. They also illustrated the DL and classical ML models adopted to detect fire and smoke in each system. Ghali et al. [21] reviewed recent DL models used for wildland fire recognition, detection, and segmentation using ground and aerial images. They also presented popular datasets such as CorsicanFire, FLAME, DeepFire, and the FD-dataset, as well as major challenges related to these techniques, such as data collection and labeling. Mohapatra and Trinh [22] provided a review of recent trends and advancements in technologies (ground sensors, cameras, drones, and satellites) proposed for wildfire monitoring and firefighting tasks. Akhloufi et al. [23] reviewed the development of unmanned aerial vehicles (UAVs) for wildfires, highlighting wildland fire datasets; fire detection, segmentation, geolocation, and modeling methods; and cooperative autonomous systems for wildland fires. Therefore, in this paper, we conduct a detailed analysis of DL models for wildland fire detection, mapping, and prediction using satellite remote sensing data. We also present the most commonly used datasets for these tasks, as well as the main challenges and limitations associated with these models.
The main contributions of this review are as follows:
  • We provide a comprehensive analysis of recent (between 2018 and 2022) deep learning models used for wildland fire detection, mapping, and damage and spread prediction using satellite data.
  • We review the most popular datasets used for wildland fire detection, mapping, and prediction tasks, providing an overview of their attributes.
  • We discuss the challenges associated with these tasks, including data preprocessing (i.e., filtering, cleaning, and normalizing data), and the interpretability of deep learning models for each of these tasks.
The remainder of the review is organized as follows: Section 2 provides an overview of satellite systems. Section 3, Section 4, Section 5, and Section 6 respectively review the recent deep learning models utilized for fire detection, mapping, and damage and spread prediction using satellite data. Section 7 presents the most commonly used datasets for these tasks. Section 8 discusses the challenges of deep learning models related to these tasks, including data preprocessing and model interpretability. Finally, Section 9 concludes the review, highlighting future research directions.

2. Satellite Systems

Spaceborne systems use satellites in space to provide telecommunication services. Compared to terrestrial systems, they cover a very large area and provide a connection that is not affected by physical and weather obstacles. They are employed for many applications, such as tracking the position of ships, sending and receiving data, collecting data about the earth's surface, and monitoring and analyzing a variety of environmental changes.
Recently, satellite systems have been adopted as a solution for detecting, monitoring, and mapping wildland fires, as well as for supporting firefighting on the earth's surface in near-real-time. Their thermal, optical, vision, and radar sensors produce accurate information on temperature, humidity, vegetation, atmospheric conditions, meteorology, topography, historical fires, and human activities, as well as on the location and intensity of fires. Optical sensors can detect changes in vegetation and land cover that may indicate the presence of smoke and fire. Thermal sensors can detect heat associated with smoke and flames and provide information on the temperature and intensity of fires. Radar sensors transmit and receive signals that penetrate smoke, darkness, and clouds to generate high-resolution images of the land surface, even at night. These data are then processed using mathematical models or artificial intelligence techniques, such as ML and DL models, to detect and monitor potential wildland fire activity. In addition, the information obtained from satellite remote sensing systems can be employed to: (1) support evacuation efforts by providing real-time information about the extent and location of a wildfire, which can be used to ensure the safety of nearby human populations; (2) predict wildfire behaviors by estimating and tracking fire spread rates, which helps to allocate firefighting resources and to develop efficient firefighting strategies; (3) identify fire perimeters by detecting the boundaries of a wildfire, which can be used to generate wildfire maps and to provide firefighting operations with better situational awareness; and (4) assess the impact of wildfires by determining the damage caused by a wildfire and estimating the severity of burned areas, which can be used to plan post-fire recovery efforts and to protect ecosystems. Based on their orbit, these systems can be divided into three categories: geostationary orbit (GEO), low-earth orbit (LEO), and polar sun-synchronous orbit (SSO), as shown in Table 1.
  • Geostationary orbit (GEO) circles the earth above the equator, following the earth's rotation, at an altitude of 35,786 km (a worked orbital-period check is given after this list). A satellite in GEO appears to be stationary above a fixed point on the earth's surface, thus providing continuous coverage of the same area. Widely used systems include the Geostationary Operational Environmental Satellites (GOES) [24], often complemented by polar-orbiting missions such as Landsat [25] and Sentinel [26]. GEO satellites have a high temporal resolution; GOES in particular offers high spatial, temporal, and spectral resolution, providing accurate weather information and real-time weather monitoring. However, spatial resolution can be low, and the complementary polar-orbiting missions have long revisit times, for example, eight days for Landsat-8 and 2 to 3 days for Sentinel-2B. GEO systems allow for the detection of the size, intensity, and locations of wildfires. They provide information on wind direction and speed, which can help in estimating the spread of wildfires and in firefighting operations.
  • Low-earth orbit (LEO) is an earth-centered orbit with an altitude of 2000 km or less and an orbital period of less than one day. It is well suited to observation and communication as it is closer to the earth, providing high-resolution imagery, low communication latency, and high bandwidth. However, LEO satellites have a limited lifetime due to their low altitude. LEO systems can be used to detect wildland fires, as well as their locations and behaviors, which helps firefighting operators devise accurate strategies for wildfire prevention.
  • Polar sun-synchronous orbit (SSO) is a near-polar orbit in which the satellite passes over any given point of the earth's surface at the same local solar time, with an altitude of between 200 and 1000 km, which allows continuous coverage of a precise zone at the same time and place every day. Numerous instruments operate in SSO, such as MODIS (Moderate Resolution Imaging Spectroradiometer) [27], AVHRR (Advanced Very-High-Resolution Radiometer) [28], and VIIRS (Visible Infrared Imaging Radiometer Suite) [29]. SSO satellites are used for monitoring the climate and forecasting the weather. They are also capable of detecting and monitoring wildland fires, providing the size, location, and intensity of wildfires, as well as their spread based on weather information. However, their lifetime is very limited due to their low altitude.
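As a quick sanity check of the geostationary altitude quoted above, Kepler's third law gives the orbital period directly (a worked check using the standard gravitational parameter μ ≈ 3.986 × 10⁵ km³/s² and an equatorial earth radius of about 6378 km):

```latex
T = 2\pi\sqrt{\frac{a^{3}}{\mu}}, \qquad
a = R_{E} + h \approx 6378~\mathrm{km} + 35{,}786~\mathrm{km} = 42{,}164~\mathrm{km},
\\[4pt]
T \approx 2\pi\sqrt{\frac{(42{,}164~\mathrm{km})^{3}}{3.986\times 10^{5}~\mathrm{km^{3}/s^{2}}}}
  \approx 86{,}164~\mathrm{s} \approx 23.93~\mathrm{h},
```

which is one sidereal day: the satellite completes one orbit exactly as the earth completes one rotation, so it appears fixed over a point on the equator.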
Table 1. Overview of satellite systems used for fire detection, mapping, and prediction.
| System | Advantage | Disadvantage |
| --- | --- | --- |
| Geostationary orbit (GEO) | Altitude of 35,786 km; provides a consistent view of the same zone; high temporal resolution | Limited coverage area; large revisit time |
| Low-earth orbit (LEO) | Altitude less than or equal to 2000 km; high-resolution imagery; low communication latency; high bandwidth | Limited lifetime |
| Polar sun-synchronous orbit (SSO) | Altitude between 200 and 1000 km; continuous coverage of a precise zone at the same time and place every day | Limited lifetime |

3. Deep Learning-Based Approaches for Fire Detection Using Satellite Data

To detect and monitor fires in remote sensing satellite images, DL-based fire segmentation and detection methods have been developed in recent years. Both have shown interesting results compared to traditional ML methods, and both are very useful for efficient fire management. Fire detection focuses on identifying the presence of fire (smoke, flame, or both) and classifying it (see Figure 1), while fire segmentation groups similar pixels of smoke or flame in an input satellite image based on characteristics such as color, shape, and texture, and outputs the result as a mask (see Figure 2).
DL models were used to analyze smoke ignition and to automatically detect the presence of fires. They are capable of recognizing patterns in satellite imagery corresponding to smoke plumes and fires, and of using this information to identify fire instances as they appear. Numerous DL models were proposed to detect and segment fires using satellite data, as shown in Table 2. The CNN (Convolutional Neural Network) is a popular approach for detecting smoke in satellite images. CNNs are designed to identify patterns in visual data, and to recognize smoke plumes and other fire-related features. Kang et al. [30] developed a CNN, which consists of three 3 × 3 convolutional layers, each followed by a ReLU activation function and a max pooling layer, and two fully connected layers, to detect forest fires in geostationary satellite data. Using 2157 Himawari-8 images with 93,270 samples without fire and 7795 samples with fire, the proposed CNN showed superior performance, achieving an F1-score of 74% compared to the random forest method. Azami et al. [31] evaluated CNN models (MiniVGGNet, ShallowNet, and LeNet) in detecting and recognizing wildfires in images collected by the KITSUNE CubeSat. Using 715 forest fire images, MiniVGGNet, ShallowNet, and LeNet achieved accuracies of 98%, 95%, and 97%, respectively. Kalaivani and Chanthiya [32] proposed a custom optimized CNN, which integrates an ALO (Antlion Optimization) method inside a PReLU activation function, to detect forest fires; an accuracy of 60.87% was achieved using Landsat satellite images. Seydi et al. [33] presented a deep learning-based active forest fire detection method, called Fire-Net, which consists of residual, point/depth-wise convolutional, and multiscale convolutional blocks. Fire-Net was trained using 578 Landsat-8 images, and tested on 144 images of the Australian forest, Central African forest, Brazilian forest, and Chernobyl areas, achieving F1-scores of 97.57%, 80.52%, 97.00%, and 97.24%, respectively. Palacio and Ian [34] used two deep learning models, MobileNet v2 [35] and ResNet v2 [36], pretrained on the ImageNet dataset [37], to predict wildfire smoke in satellite imagery of the California regions. Using fire perimeters, fire information (date, area, longitude, and altitude), and a historical fire map collected from Landsat 7 and 8, MobileNet v2 obtained the best accuracy of 73.3%. Zhao et al. [38] investigated the impact of using the IR (infrared) band in detecting smoke. They proposed a lightweight CNN, called VIB_SD (Variant Input Bands for Smoke Detection), which integrates the inception structure, an attention method, and residual learning. VIB_SD was trained using 1836 multispectral images based on Landsat 8 OLI and Landsat 5 TM data, divided into three classes ("Clear", "Smoke", and "Other_aerosol"), with horizontal/vertical flips as data augmentation. The experimental results showed that adding an NIR band improved the performance by 5.96% compared to using only the RGB bands (an accuracy of 86.45%). Wang et al. [39] proposed a novel smoke detection method named DC-CMEDA (Deep Convolution and Correlated Manifold Embedded Distribution Alignment), consisting of a deep CNN (ResNet-50) and CMEDA, an improved MEDA (Manifold Embedded Distribution Alignment). First, ResNet-50 extracted the smoke features of the source and target domain data (satellite and video images).
Then, CMEDA was employed to reduce the bias in the source domain and make it more similar to the target domain. Finally, the presence or absence of smoke was generated as the output. A total of 200 satellite images and 200 RGB images were utilized in DC-CMEDA training, each including 100 smoke and 100 non-smoke images. In transferring from satellite images to video images, DC-CMEDA achieved an accuracy of 93.0%, surpassing the state-of-the-art methods, Filonenko's method [40] and DC-ILSTM [41], by 2.5% and 1.5%, respectively. In transferring from video images to satellite images, DC-CMEDA also reached a high accuracy of 89.5%, surpassing Filonenko's method and DC-ILSTM by 6.5% and 4.0%, respectively. Higa et al. [42] explored object detection methods such as PAA [43], VFNET [44], ATSS [45], SABL [46], RetinaNet [47], and Faster R-CNN [48] to detect and locate active fires and smoke in the Brazilian Pantanal regions. Using 775 images downloaded from the CBERS (China-Brazil Earth Resources Satellite) 04A WFI dataset [49], VFNET achieved the highest F1-score of 81%. Ba et al. [50] proposed a CNN-based smoke detection method, SmokeNet, using MODIS data. They presented a new dataset, called USTC_SmokeRS [51], comprising 6255 satellite images of smoke and of classes visually very close to smoke, such as haze, clouds, and fog. SmokeNet is a modified AttentionNet that merges spatial and channel-based attention with residual blocks. It achieved an accuracy of 92.75%, outperforming VGG [52], ResNet [36], DenseNet [53], AttentionNet [54], and SE-ResNet [55]. Phan et al. [56] proposed a 3D CNN model to locate wildfires using satellite images collected from the GOES satellite GOES-16, integrating spatial and spectral information at the same time. The 3D CNN contains three 3D convolutional layers, each followed by a ReLU activation function and batch normalization, a fully connected layer, and a softmax function. Imagery data and weather information were used as inputs to detect the presence of forest fires. Using 384 satellite images, an F1-score of 94% was achieved, outperforming baseline models such as MODIS-Terra [5], AVHRR-FIMMA [57], VIIRS-AFP [58], and GOES-AFP [59]. Hong et al. [60] designed a new CNN, FireCNN, to detect fires in Himawari-8 satellite images. FireCNN consists of three convolutional blocks and a fully connected layer; each convolutional block consists of two convolutional layers, each followed by a ReLU activation and a max pooling layer. FireCNN was tested on a dataset containing 3646 non-fire spots and 1823 fire spots [61], obtaining an accuracy of 99.9%, higher than traditional ML methods (thresholding, SVM (Support Vector Machine), and random forest). Wang et al. [62] employed the Swin transformer [63], which adopts an attention mechanism to model local and global dependencies, in detecting smoke and flame. A set of 5773 images obtained from FASDD (Flame and Smoke Detection Dataset) [64] was used to train this model, obtaining a mAP (mean Average Precision) of 53.01%.
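Most of the patch-classification detectors above share the pattern described for Kang et al. [30]: a few convolution + ReLU + max-pooling stages followed by fully connected layers that output fire/no-fire scores. The following PyTorch sketch illustrates that pattern; the layer counts mirror the description of [30], while the channel widths, band count, and 32 × 32 patch size are our own illustrative assumptions.

```python
import torch
import torch.nn as nn

class FirePatchCNN(nn.Module):
    """Minimal fire/no-fire patch classifier in the spirit of Kang et al. [30]:
    three 3x3 conv layers, each followed by ReLU and max pooling, then two
    fully connected layers. Channel widths and input size are assumptions."""

    def __init__(self, in_bands: int = 6, patch: int = 32):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_bands, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        flat = 128 * (patch // 8) * (patch // 8)  # three 2x poolings -> /8
        self.classifier = nn.Sequential(
            nn.Linear(flat, 64), nn.ReLU(),
            nn.Linear(64, 2),  # logits for {no fire, fire}
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(x.flatten(1))

# Example: a batch of four 6-band 32x32 patches.
logits = FirePatchCNN()(torch.randn(4, 6, 32, 32))
print(logits.shape)  # torch.Size([4, 2])
```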
FCN (Fully Convolutional Network) and encoder–decoder models are the most widely used types of CNNs in image segmentation tasks. The FCN was the first CNN developed for pixel-level classification. It employs a series of convolutional and pooling layers to extract features from the input image, and then generates a binary mask as the output. Larsen et al. [65] proposed an FCN to predict smoke in satellite images acquired by Himawari-8 and the NOAA (National Oceanic and Atmospheric Administration) Visible Infrared Imaging Radiometer Suite in near-real-time [66]. The FCN consists of four convolution layers, three max pooling layers, three transposed convolution layers, batch normalization, and ReLU activation functions. It was trained on 975 satellite images, attaining an accuracy of 99.5%. The encoder–decoder architecture contains two parts. The encoder applies convolutional and pooling layers to extract high- and low-level features, while the decoder employs transposed convolutions to upsample the compressed feature map and produce a segmentation mask as output. U-Net [67] is a popular encoder–decoder architecture for image segmentation. It also employs skip connections to combine features from the encoder and the decoder to better capture finer details and produce more accurate results. Khryashchev et al. [68] applied U-Net with ResNet-34 as the backbone to detect and segment wildfire areas. Numerous data augmentation techniques (rotation, shift, flip, mirroring, and random chromatic distortion in HSV color) and 1850 satellite RGB images were used to train and test this model, achieving a Dice coefficient of 81.2%. Pereira et al. [69] adopted a modified U-Net architecture, adding dropout layers to avoid overfitting and batch normalization after each convolutional layer, to detect and segment active fire areas in Landsat-8 imagery. The modified U-Net was trained using a large dataset, called the Landsat-8 Active Fire Detection (LAFD) dataset [69,70], which contains 8194 wildfire images and their corresponding binary masks. An accuracy of 87.2% was achieved, surpassing traditional machine learning methods for active fires [71,72,73]. Rashkovetsky et al. [74] also explored the semantic segmentation method U-Net to detect wildfires in images collected by multisensor satellites such as Sentinel-1, -2, -3, Terra, and Aqua. A total of 1324 records of fire perimeters, which occurred between 1950 and 2019 in California [75], and 38,897 satellite images (Sentinel-1: 2619 images, Sentinel-2: 1892 images, Sentinel-3: 15,514 images, and Terra and Aqua: 18,872 images) were used. In cloudless conditions (clear weather), U-Net obtained F1-scores of 83% and 87% using Sentinel-2 data and the fusion of Sentinel-2 and Sentinel-3 data, respectively. Under cloudy conditions, U-Net achieved F1-scores of 67% and 72% using Sentinel-3 data and the merged Sentinel-1 and Sentinel-2 data, respectively. Rostami et al. [76] developed an encoder–decoder, MultiScale-Net, which includes dilated convolutional layers with different dilation rates and convolutional kernels of varying sizes, to detect active fire. The LAFD dataset (144 Landsat-8 images) and data augmentation (horizontal and vertical flips) were used to train MultiScale-Net. Based on the input bands, three scenarios (B1: AFI; B2: SWIR2 + SWIR1 + Blue; and B4: SWIR2 + SWIR1 + Blue + AFI) were evaluated, providing F1-scores of 89.09%, 90.53%, and 90.58%, respectively.
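To make the encoder–decoder and skip-connection ideas concrete, the sketch below shows a deliberately shallow U-Net-style segmenter. It is not any specific model from the cited papers; the depth, channel widths, and three-band input are illustrative assumptions.

```python
import torch
import torch.nn as nn

def conv_block(cin: int, cout: int) -> nn.Sequential:
    # Two 3x3 convolutions with ReLU, the basic U-Net building block.
    return nn.Sequential(
        nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(),
        nn.Conv2d(cout, cout, 3, padding=1), nn.ReLU(),
    )

class TinyUNet(nn.Module):
    """Shallow U-Net-style encoder-decoder with one skip connection."""

    def __init__(self, in_bands: int = 3):
        super().__init__()
        self.enc1 = conv_block(in_bands, 32)
        self.down = nn.MaxPool2d(2)
        self.enc2 = conv_block(32, 64)
        self.up = nn.ConvTranspose2d(64, 32, kernel_size=2, stride=2)
        self.dec1 = conv_block(64, 32)                # 64 = upsampled 32 + skip 32
        self.head = nn.Conv2d(32, 1, kernel_size=1)   # per-pixel fire logit

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        s1 = self.enc1(x)                       # full-resolution features
        bottleneck = self.enc2(self.down(s1))   # half-resolution features
        u = self.up(bottleneck)                 # upsample back to full resolution
        u = torch.cat([u, s1], dim=1)           # skip connection from the encoder
        return self.head(self.dec1(u))          # (N, 1, H, W) mask logits

mask_logits = TinyUNet()(torch.randn(1, 3, 64, 64))
print(mask_logits.shape)  # torch.Size([1, 1, 64, 64])
```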
Shirvan et al. [77] proposed a DL-based approach to detect and segment woodland fire zones in Mozambique regions. Two models were employed to identify small fires and detect fire events: AUNet, a U-Net modified by adding attention gate units, and RAUNet, which integrates both attention gates and residual blocks into the U-Net architecture. Sentinel-1 and -2 data, Google Earth images, MODIS fire products, and field observation data were used to train and evaluate these approaches. RAUNet achieved a high accuracy of 98.53%, outperforming U-Net and AUNet by 0.74% and 0.55%, respectively. Sun [78] designed a deep CNN method to generate pixel-level binary fire masks from Landsat 8 images of South American regions, using the SWIR (Short-Wave Infrared) and Green bands. First, an encoder–decoder (called FCN), a U-Net modified by adding three upsampling and downsampling convolutional blocks, was used to generate the binary mask. Then, K-Means clustering was performed to determine the levels of cirrus cloud contamination. Finally, a CNN, which contains four convolutional layers, each followed by a ReLU activation and batch normalization, four max pooling layers, and a fully connected layer, was applied three times with three different data features (only SWIR data, SWIR and raw cirrus data, and SWIR and segmented cirrus data) to determine the influence of simplified features on model performance and training time. The FCN obtained a precision of 87.8% on the test data (14,274 fire images and 10,685 non-fire images) [79]. Wang et al. [80] presented a smoke segmentation model, Smoke-UNet, to detect forest smoke in multispectral Landsat-8 data. Smoke-UNet is an improved U-Net architecture that integrates a residual block to improve feature extraction and an attention mechanism to remove irrelevant and invalid information transmitted by the skip connections. Several data augmentation techniques, such as cropping and horizontal/vertical mirroring, were applied to augment the training data, yielding 47 multispectral wildland smoke images composed of SWIR, RGB, mid-infrared, and NIR bands. The training results showed that Smoke-UNet reached the best accuracy of 92.3%, outperforming UNet, Res-UNet, Attention Res-UNet, FCN [81], PSPNet [82], and SegNet [83] by 12.2%, 11.8%, 8.8%, 7.8%, 7.8%, and 9.2%, respectively. Wang et al. [84] developed a semantic segmentation method, AOSVSSNET (Attention-Guided Optical Satellite Video Smoke Segmentation Network), to detect and segment smoke areas. AOSVSSNET modifies UNet++ [85] by adding a convolutional attention module to suppress noisy and irrelevant characteristics. AOSVSSNET reached IoUs of 72.84% and 70.51% using real satellite smoke images (200 images) and synthetic images (10,000 images), respectively, surpassing UNet, DeepLabv3+ [86], and FCN. Knopp et al. [87] employed a CNN-based U-Net for burnt area segmentation on Sentinel-2 data. New learning data were created and collected from three sources: burned areas processed at the DLR (German Aerospace Center), the fire incident dataset of CALFIRE (the California Department of Forestry & Fire Protection), and the burned area data of the Portuguese ICNF (Institute for Nature Conservation and Forests). The dataset includes burned areas acquired between 2017 and 2018, covering different seasons and biomes, and consists of 2637 satellite images and their burned area masks, divided into training, validation, and testing sets.
This model was trained using data augmentation techniques (random brightness, rotation, shift, contrast, and scale), which tripled the learning data, achieving an accuracy of 98%.
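As the studies above illustrate, flips, rotations, shifts, and photometric jitter are the de facto augmentation recipe for satellite fire segmentation. The sketch below shows a paired image/mask pipeline using torchvision; the transform choices and parameter ranges are illustrative assumptions, not the exact settings of any cited paper.

```python
import random

import torch
import torchvision.transforms.functional as TF

def augment(image: torch.Tensor, mask: torch.Tensor):
    """Apply the same random geometric transform to an RGB satellite patch
    (3, H, W) and its fire mask (1, H, W); brightness jitter is applied to
    the image only, so the mask labels are preserved."""
    if random.random() < 0.5:                          # horizontal flip
        image, mask = TF.hflip(image), TF.hflip(mask)
    if random.random() < 0.5:                          # vertical flip
        image, mask = TF.vflip(image), TF.vflip(mask)
    angle = random.choice([0.0, 90.0, 180.0, 270.0])   # right-angle rotation
    image, mask = TF.rotate(image, angle), TF.rotate(mask, angle)
    image = TF.adjust_brightness(image, 0.8 + 0.4 * random.random())
    return image, mask

img, msk = augment(torch.rand(3, 256, 256), torch.zeros(1, 256, 256))
print(img.shape, msk.shape)  # torch.Size([3, 256, 256]) torch.Size([1, 256, 256])
```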
Table 2. Deep learning models for fire detection and segmentation using satellite data.
| Ref. | Methodology | Dataset | Results (%) |
| --- | --- | --- | --- |
| [30] | Simple CNN | 2157 Himawari-8 images, with 93,270 samples without fire and 7795 samples with fire | F1-score = 74.00 |
| [31] | MiniVGGNet | 715 wildfire images collected by the KITSUNE CubeSat | Accuracy = 98.00 |
| [32] | Custom optimized CNN | Landsat satellite images | Accuracy = 60.87 |
| [33] | Fire-Net | 722 Landsat-8 images | F1-score = 97.57 |
| [34] | MobileNet v2 | Fire perimeters, fire information (date, area, longitude, and altitude), and historical fire map collected from Landsat 7 and 8 | Accuracy = 73.30 |
| [38] | VIB_SD | 1836 multispectral images based on Landsat 8 OLI and Landsat 5 TM data | Accuracy = 92.41 |
| [39] | DC-CMEDA | 200 satellite images and 200 RGB images, each including 100 smoke and 100 non-smoke images | Accuracy = 96.00 |
| [42] | VFNET | CBERS 04A WFI dataset: 775 images | F1-score = 81.00 |
| [50] | SmokeNet | USTC_SmokeRS: 6255 satellite images | Accuracy = 92.75 |
| [56] | 3D CNN | Weather data and imagery data (384 images) | F1-score = 94.00 |
| [60] | FireCNN | Himawari-8 satellite images: 3646 non-fire spots and 1823 fire spots | Accuracy = 99.90 |
| [62] | Swin transformer | FASDD dataset: 5773 images | mAP = 53.01 |
| [65] | FCN | 975 satellite images acquired by Himawari-8 and the NOAA Visible Infrared Imaging Radiometer Suite | Accuracy = 99.50 |
| [68] | U-Net with ResNet-34 | 1850 satellite RGB images | Dice = 81.20 |
| [69] | Modified U-Net | LAFD dataset: 8194 wildfire images and their corresponding binary masks | Precision = 87.20 |
| [74] | U-Net | 1324 records of fire perimeters and 38,897 satellite images (Sentinel-1: 2619; Sentinel-2: 1892; Sentinel-3: 15,514; Terra and Aqua: 18,872) | F1-score = 87.00 |
| [76] | MultiScale-Net | LAFD dataset: 144 Landsat-8 images | F1-score = 90.58 |
| [77] | RAUNet | Sentinel-1 and -2 data, Google Earth images, MODIS fire products, and field observation data | Accuracy = 98.53 |
| [78] | FCN | LAFD dataset: 14,274 fire images and 10,685 non-fire images | Precision = 87.80 |
| [80] | Smoke-UNet | 47 Landsat-8 images | Accuracy = 92.30 |
| [84] | AOSVSSNET | 200 real satellite smoke images; 10,000 synthetic satellite images | IoU = 72.84; IoU = 70.51 |
| [87] | U-Net | 2637 satellite images collected from Sentinel-2 and their burned area masks | Accuracy = 94.00 |

4. Deep Learning-Based Approaches for Fire Mapping Using Satellite Data

Similarly to fire detection, fire mapping provides maps to visualize the intensity and location of detected wildland fires: it is the process of generating maps showing the locations and extents of wildland fires. Such maps are used for a wide variety of purposes, including fire damage reporting, planning firefighting and evacuation efforts, and wildland fire management. Many DL approaches were adopted for the fire mapping task, as summarized in Table 3. Belenguer-Plomer et al. [88] investigated CNN performance in detecting and mapping burned areas using Sentinel-1 and/or Sentinel-2 data downloaded from the Copernicus Open Access Hub. The proposed CNN comprises two convolutional layers, each followed by a ReLU activation function and a max pooling layer, two fully connected layers, and a sigmoid function that predicts the probability of burned or unburned areas. It reached Dice coefficients of 57% and 70% using Sentinel-1 and Sentinel-2 data, respectively. Abid et al. [89] developed an unsupervised deep learning model to map burnt forest zones in Sentinel-2 imagery of Australia. First, a pretrained VGG16 was used to extract features from the input data. Then, K-means clustering and thresholding methods were used to cluster inputs with similar features. This method achieved an F1-score of 87% using real-time Sentinel-2 data as the learning data. Hu et al. [90] explored numerous semantic segmentation networks, such as U-Net, Fast-SCNN [91], DeepLab v3+ [86], and HRNet [92], for mapping burned areas using multispectral images of the boreal forests in Sweden and Canada and of Mediterranean regions (Portugal, Spain, and Greece). Sentinel-2 and Landsat-8 images and data augmentation methods (resizing, mirroring, rotation, aspect, cropping, and color jitter) were used to train these DL models. The testing results demonstrated that the DL-based semantic segmentation models outperformed ML methods (LightGBM, KNN, and random forest) and thresholding methods based on the NBR (Normalized Burn Ratio) and dNBR (delta NBR); the index definitions are given at the end of this section. For example, with Sentinel-2 images, U-Net achieved the best Kappa coefficient of 90% on a Mediterranean fire site in Greece, and Fast-SCNN performed better, with a Kappa coefficient of 82%, on a boreal forest fire in Sweden. With Landsat-8 images, HRNet reached the best Kappa coefficient of 78% on a Swedish forest fire. Cho et al. [93] also employed U-Net as a semantic segmentation method to map burned areas. Their learning data included satellite images with a resolution of 3 m per pixel collected from the PlanetScope dataset [94] and their ground truth masks, the dissimilarity obtained from the GLCM (Gray-Level Co-occurrence Matrix), the NDVI (Normalized Difference Vegetation Index), and land cover map data; topographic normalization of each image was applied to reduce the effect of shadow, and a data augmentation technique (rotation, mirroring, and horizontal/vertical flip) was used to train U-Net, achieving F1-scores of 93.0%, 93.8%, and 91.8% in the Andong, Samcheok, and Goseong study areas, respectively. Brunt and Manandhar [95] also used U-Net to map burned areas in Sentinel-2 images of Indonesia and Central African regions. U-Net obtained F1-scores of 82%, 91%, and 92% using the Indonesia test data, the Central Africa test data, and the test data of both regions, respectively. Seydi et al. [96] developed a DL method, Burnt-Net, to map burned areas from post-fire Sentinel-2 images.
Burnt-Net is an encoder–decoder architecture, including convolutional layers, ReLU functions, max pooling layers, batch normalization layers, residual multi-scale blocks, morphological operators (dilation and erosion), and transposed convolutional layers. Burnt-Net was trained with Sentinel-2 images of wildland fires in Spain, France, and Greece, and tested using wildland fires located in Portugal, Turkey, Cyprus, and Greece, obtaining accuracies of 98.08%, 97.38%, 95.68%, and 95.51%, respectively, superior to the accuracy of U-Net and the Landsat burned area product. Prabowo et al. [97] also employed U-Net to map burned areas. They collected a new dataset comprising 227 satellite images of 512 × 512 pixels acquired by the Landsat-8 satellite over Indonesian regions, together with their corresponding binary masks. Using a data augmentation method (rotation), U-Net obtained a Jaccard index of 93%. U-Net was also evaluated by Colomba et al. [98] to map burned areas. It was trained and evaluated with 73 images downloaded from the satellite burned area dataset [98,99] and data augmentation techniques (rotation, shear, and vertical/horizontal flip), obtaining an accuracy of 94.3%. Zhang et al. [100] applied a deep residual U-Net, which adopts the ResNet model as a feature extractor, to map wildfires using the Sentinel-2 MSI time series and Sentinel-1 SAR data. The deep residual U-Net reached F1-scores of 78.07% and 84.23% on the Chuckegg Creek fire data acquired in 2019 and on the Sydney fire data collected between 2019 and 2020, respectively. Pinto et al. [101] proposed a deep learning approach, BA-Net (Burned Areas Neural Network), based on daily sequences of multi-spectral images, to identify and map burned areas. BA-Net is an encoder–decoder with five connections between the encoder and decoder. The encoder comprises ST-Conv3 modules and spatial convolutions; each ST-Conv3 consists of two 3D convolution layers, followed by batch normalization with a ReLU activation function and a 3D dropout layer. The decoder contains UpST-Conv3 modules and 3D transposed convolutions; each UpST-Conv3 module includes two 3D transposed convolution layers, followed by batch normalization with the ReLU activation function and a 3D dropout layer. Several datasets covering five regions (California, Brazil, Mozambique, Portugal, and Australia) were used to train and test this approach: VIIRS Level 1B data [102], VIIRS Active Fires data [58], MCD64A1C6 [103], FireCCI51 [104], 53 Landsat-8 scenes [105], the FireCCISFD11 dataset [106], the MTBS dataset [107], TERN AusCover data [108], and ICNF Burned Areas [109]. BA-Net showed an excellent result (a Dice coefficient of 92%) in dating and mapping burned areas, outperforming the FireCCI51 simulators, thus confirming its ability to determine the spatiotemporal relations of active fires and the daily surface reflectances. Seydi et al. [110] presented a new method, DSMNN-Net (Deep Siamese Morphological Neural Network), to map burned areas using PRISMA (PRecursore IperSpettrale della Missione Applicativa) and Sentinel-2 multispectral images of the Australian continent. Two deep feature extractors, which adopt 3D/2D convolutional layers and a morphological layer based on dilation and erosion, were employed to extract features from the pre-fire and post-fire datasets.
Two scenarios were investigated: in the first, the pre- and post-fire datasets were both collected from Sentinel-2; in the second, the pre-fire dataset was downloaded from Sentinel-2 and the post-fire dataset was obtained from PRISMA. The numerical results showed that DSMNN-Net achieved accuracies of 90.24% and 97.46% in the first and second scenarios, respectively, outperforming existing state-of-the-art methods such as the CNN proposed by Belenguer-Plomer et al. [88], random forest [111,112], and SVM [113].
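For reference, the index-based baselines mentioned above (e.g., in [90]) are computed directly from spectral bands rather than learned. The Normalized Burn Ratio contrasts near-infrared and shortwave-infrared reflectances (for Landsat-8, bands 5 and 7; for Sentinel-2, typically B8A and B12), and burn severity is assessed from its pre/post-fire difference:

```latex
\mathrm{NBR} = \frac{\rho_{\mathrm{NIR}} - \rho_{\mathrm{SWIR}}}{\rho_{\mathrm{NIR}} + \rho_{\mathrm{SWIR}}},
\qquad
\mathrm{dNBR} = \mathrm{NBR}_{\text{pre-fire}} - \mathrm{NBR}_{\text{post-fire}}
```

Healthy vegetation has a high NBR and freshly burned surfaces a low one, so larger dNBR values indicate more severely burned pixels; thresholding dNBR yields the classical, non-learned burned area maps against which the deep models in this section are compared.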

5. Deep Learning-Based Approaches for Fire Susceptibility Using Satellite Data

Deep learning models were applied to estimate fire severity and susceptibility using vegetation data, meteorological data, topographic data, historical fires, and human activities by providing information on the locations and intensities of fires, as shown in Figure 3. The severity level refers to the degree of vegetation damage caused by the wildland fire, and can be classified as very low, low, moderate, high, or very high, according to the severity of the damage.
Table 4 presents the deep learning methods used to predict the damage level caused by fires. Zhang et al. [114] proposed a spatial prediction model based on a CNN for wildfire susceptibility modeling in China (Yunnan province). This CNN includes three convolutional layers, a ReLU activation function, two pooling layers, and three fully connected layers. The authors used numerous data sources, including the interpretation of satellite images and historical fire reports from 2002 to 2010 (7675 fires), to generate a wildfire event map as the output. They also used fourteen fire influencing factors, divided into three categories: vegetation-related (distance to roads, distance to rivers, NDVI, and forest coverage ratio), climate-related (average temperature, specific humidity, average precipitation, average wind speed, precipitation rate, and maximum temperature) [115], and topography-related (aspect, slope, and elevation) [116]. A high accuracy of 95.81% was achieved, outperforming four benchmark models: multilayer perceptron neural networks, random forests, kernel logistic regression, and SVM. Prapas et al. [117] proposed a DL method, named ConvLSTM, for forest fire danger forecasting in the regions of Greece. The Datacube dataset [118] was used to train and test this model. It contains burned areas and fire information (climate data, human activity, and vegetation information) between 2009 and 2020, daily weather data, satellite data collected from MODIS (LAI (Leaf Area Index), Fpar (Fraction of Photosynthetically Active Radiation), NDVI, EVI (Enhanced Vegetation Index), and day/night LST (Land Surface Temperature)), road density, land cover information, and topography data (aspect, slope, and elevation). ConvLSTM reached a precision of 83.2%, better than random forest and LSTM (Long Short-Term Memory) models. Zhang et al. [119] studied the ability of CNNs to predict forest fire susceptibility maps divided into five levels (very high, high, moderate, low, and very low). Based on the processing cell type (grid and pixel), they proposed two CNNs: CNN-1D based on pixel cells and CNN-2D based on grid cells. CNN-1D consists of two 1D convolutional layers, three fully connected layers, ReLU activation functions, and a sigmoid function. CNN-2D contains two 2D convolutional layers, each followed by a ReLU activation and a max pooling layer, and three fully connected layers, the first two followed by ReLU activation functions and the last by a sigmoid function. Various data were employed to train the CNNs: daily dynamic fire behaviors, individual fire characteristics, and estimated day-of-burn information collected from the GFA (Global Fire Atlas) product from 2003 to 2016; meteorological data obtained from the GLDAS (NASA Global Land Data Assimilation System), including average temperature, specific humidity, accumulated precipitation, soil moisture, soil temperature, and standardized precipitation index; and vegetation data (LAI and NDVI) downloaded from the USGS (United States Geological Survey) website. Testing results showed that CNN-2D achieved accuracies of 91.08%, 89.61%, 93.18%, and 94.88% for the four seasons (March–May, June–August, September–November, and December–February, respectively), surpassing the accuracies of CNN-1D and multilayer perceptron models. Le et al. [120] developed a deep neural computing model, Deep-NC, which includes three ReLU activation functions and a sigmoid function, to predict wildfire danger in Gia Lai province, Vietnam.
In total, 2530 historical fire locations from 2007 to 2016 and 2530 non-forest fire data points, together with slope, land use, NDWI (Normalized Difference Water Index), aspect, elevation, NDMI (Normalized Difference Moisture Index), curvature, NDVI, wind speed, relative humidity, temperature, and rainfall information, were assessed to remove noise and used as inputs [121]. In the training step, four optimizers (SGD (Stochastic Gradient Descent), Adam (Adaptive Moment Estimation), RMSProp (Root Mean Square Propagation), and Adadelta) were employed to optimize Deep-NC's weights. Deep-NC with the Adam optimizer reached an accuracy of 81.50%. Omar et al. [122] employed a DL method consisting of LSTM layers, fully connected layers, dropout, and a regression function to predict forest fires. In total, 536 instances with twelve features, including temperature, relative humidity, rain, wind, ISI (Initial Spread Index), DMC (Duff Moisture Code), FWI (Fire Weather Index), FFMC (Fine Fuel Moisture Code), and BUI (Buildup Index), were used to train the proposed model, obtaining an RMSE (Root Mean Squared Error) of 0.021 and surpassing machine learning methods (decision tree, linear regression, SVM, and a neural network). Zhang et al. [123] developed a hybrid deep neural network (CNN2D-LSTM) to predict the global burned areas of wildfires based on satellite burned area products and historical time series predictors. CNN2D-LSTM includes two convolutional layers, three fully connected layers, ReLU functions, two max pooling layers, and two LSTM layers. An RMSE of 4.74 was obtained using monthly burned area data between 1997 and 2016, collected from the GFED dataset, and temporally dynamic predictors (monthly maximum/minimum temperatures, average temperature, specific humidity, accumulated precipitation, soil moisture, soil temperature, standardized precipitation index, LAI, and NDVI) that affect forest fires. Shoa et al. [124] proposed a DL model, which includes linear layers, batch normalization layers, and LeakyReLU activation functions, to predict the occurrence of wildfires in China. To train the model, they used historical fire data (96,594 wildfire points collected from MODIS between 2001 and 2019) for China's regions, climatic data (daily maximum temperature, average temperature, daily average ground surface temperature, average air pressure, maximum wind speed, sunshine hours, daily average relative humidity, and average wind speed), vegetation data (fractional vegetation cover), topographic data (slope, aspect, and elevation), and human activity data (population, gross domestic product, special holidays, residential areas, and roads). The testing results showed that the proposed DL model reached an accuracy of 87.4%. Shams-Eddin et al. [125] proposed a 2D/3D CNN method to predict wildfire danger. The 2D CNN branch processed static inputs such as a digital elevation model, slope, distance to roads, population density, and distance to waterways, while the 3D CNN branch processed dynamic inputs, including temperature, day/night land surface temperature, soil moisture index, relative humidity, wind speed, 2 m temperature, NDVI, surface pressure, 2 m dewpoint temperature, and total precipitation. Two LOADE (Location-Aware Adaptive Denormalization) blocks were also integrated into the 3D CNN to modulate the dynamic features based on the static features.
Using the FireCube [126] and NDWS (Next Day Wildfire Spread) [79] datasets, the 2D/3D CNN showed a high performance (an accuracy of 96.48%), outperforming baseline methods such as random forest, XGBoost, LSTM, and ConvLSTM. Jamshed et al. [127] adopted an LSTM method to predict the occurrence of wildland fires from 2022 to 2025. Historical wildfire data and burned area data from Pakistan between 2012 and 2021 were used as training data, providing an accuracy of 95%. Naderpour et al. [128] designed a spatial method for wildfire risk assessment in the Northern Beaches region of Sydney. This method consists of two steps. In the first step, twelve influential wildfire factors (NDVI, slope, precipitation, temperature, land use, elevation, road density, distance to rivers, distance to roads, wind speed, humidity, and annual temperature) [129,130] were fed into a deep NN (Neural Network) with more than three hidden layers as a susceptibility model to determine the weight of each factor, and an FbSP (supervised fuzzy logic) method was then used to optimize the results generated by the deep NN. In the second step, the AHP (Analytical Hierarchy Process) method was adopted as the vulnerability model to generate the physical and social vulnerability index using social and physical parameters such as population density, age, employment rate, housing, land use, high density, and high value [131,132]. Finally, a risk function was used to calculate the wildfire risk map, giving the inventory of fire events (very low, low, medium, high, and very high). The proposed method obtained a Kappa coefficient of 94.3%. Nur et al. [133] proposed the hybrid models CNN-ICA and CNN-GWO, which combine a CNN with a metaheuristic method (ICA: Imperialist Competitive Algorithm [134]; GWO: Grey Wolf Optimization [135]), to assess wildland fire susceptibility divided into five classes (very low, low, moderate, high, and very high). First, the DPM (Damage Proxy Map) method was adopted to identify burned forest areas in Sentinel-1 SAR (Synthetic Aperture Radar) images from 2016 to 2020 of the Plumas National Forest regions, and to generate an inventory dataset. Then, the inventory data and 16 wildfire conditioning factors, including topographic factors (aspect, altitude, slope, and plan curvature), meteorological factors (precipitation, maximum temperature, solar radiance, and wind speed), environmental factors (distance to stream, drought index, soil moisture, NDVI, and TWI (Topographic Wetness Index)), and anthropological factors (land use, distance use, and distance to settlement), were used to train and test these models. Finally, the CNN hyperparameters were optimized using the ICA and GWO methods, and forest fire likelihoods were produced. The obtained results revealed that CNN-GWO (an RMSE of 0.334) performed better than CNN-ICA (an RMSE of 0.351). Bjånes et al. [136] designed an ensemble learning model based on two CNN architectures, namely CNN-1 and multi-input CNN, to predict forest fire susceptibility, split into five classes (very low, low, moderate, high, and very high), using satellite data from the Biobio and Nuble regions. CNN-1 modifies Zhang's CNN [114] by adding batch normalization in the first convolutional layer and dropout in the fully connected layers; multi-input CNN is a simple CNN proposed by San et al. for flower grading [136].
A large dataset was used for training, consisting of fifteen fire influencing factors and fire history data from 2013 to 2019 (including 18,734 fires). The fire influencing factors were grouped into four categories: climatic data collected from the TerraClimate dataset [137] (minimum/maximum temperature, precipitation, wind speed, climatic water deficit, and actual evapotranspiration), anthropogenic data (distance to urban zones and distance to roads), vegetation data (NDVI, distance to rivers, and land cover type), and topographic data (aspect, surface roughness, slope, and elevation). The proposed model showed an F1-score of 88.77%, surpassing CNN-1, multi-input CNN, and traditional methods such as XGBoost and SVM.
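Despite their differences, the susceptibility studies above share one supervised setup: each location is described by a vector of conditioning factors (topographic, meteorological, vegetation, and anthropogenic variables) and labeled from a historical fire inventory. The sketch below illustrates a per-pixel classifier in the spirit of the CNN-1D of Zhang et al. [119]; the factor count, layer widths, and five-class output are illustrative assumptions.

```python
import torch
import torch.nn as nn

class Susceptibility1DCNN(nn.Module):
    """Per-pixel susceptibility classifier: a 1D CNN over a vector of
    fire-conditioning factors, in the spirit of the CNN-1D in [119].
    Factor count, widths, and the 5 output classes are assumptions."""

    def __init__(self, n_factors: int = 14, n_classes: int = 5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(32 * n_factors, 64), nn.ReLU(),
            nn.Linear(64, n_classes),  # very low ... very high
        )

    def forward(self, factors: torch.Tensor) -> torch.Tensor:
        # factors: (batch, n_factors) -> add a channel axis for Conv1d.
        return self.net(factors.unsqueeze(1))

# Example: 8 locations, each described by 14 normalized conditioning factors.
logits = Susceptibility1DCNN()(torch.rand(8, 14))
print(logits.shape)  # torch.Size([8, 5])
```

Run over every pixel of a study area, such a classifier produces the five-level susceptibility maps described above.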
Deep learning methods were also employed to map burn severity as a multi-class semantic segmentation task. Huot et al. [138] studied four deep learning models: a convolutional autoencoder, residual U-Net, convolutional autoencoder with convolutional LSTM, and residual U-Net with convolutional LSTM, to predict wildfires. Several datasets were used to train these models: historical wildfire data [139] since 2000, collected from the MOD14A1 V6 daily fire mask composites at 1 km resolution; vegetation data [140] obtained from the Suomi National Polar-Orbiting Partnership (S-NPP) NASA VIIRS Vegetation Indices (VNP13A1) dataset, containing vegetation indices since 2012 sampled at 500 m resolution; topography data [141] obtained from the SRTM (Shuttle Radar Topography Mission), sampled at 30 m resolution; drought data [142]; and weather data (temperature, humidity, and wind) [143] collected from GRIDMET (Gridded Surface Meteorological) at 4 km resolution since 1979. The residual U-Net achieved the best accuracy of 83%, showing a great ability to detect zones of high fire likelihood. Farasin et al. [144] proposed a novel supervised learning method, called Double-Step U-Net, to estimate the severity level of affected areas after wildfires using Sentinel-2 satellite data, giving each sub-area of the wildland fire area a numerical severity level between 0 and 4, where 0, 1, 2, 3, and 4 represent an unburned area, negligible damage, moderate damage, high damage, and areas destroyed by fire, respectively. First, a binary classification U-Net was employed to identify each sub-area as unburned or burned. Then, a regression U-Net was applied to determine the severity level of the burned areas only. Two sources of information were used: Copernicus EMS (Emergency Management Service), which offers the damage severity maps of five burned regions (Spain, France, Portugal, Sweden, and Italy) affected by past fires, used as ground truth maps, and Sentinel-2, which offers the satellite imagery. Using data augmentation techniques (horizontal/vertical flip, rotation, and shear), Double-Step U-Net achieved an F1-score of 95% for the binary classification U-Net and a low RMSE for the regression U-Net, outperforming the U-Net and dNBR (delta Normalized Burn Ratio) [145] methods. Monaco et al. [146] also studied the ability of Double-Step U-Net with varying loss functions (Binary Cross-Entropy (BCE) and Intersection over Union loss) to generate the damage severity map on manually labeled data collected by Copernicus EMS. The experimental results showed that the Double-Step U-Net with BCE loss achieved the best MSE of 0.54. Monaco et al. [147] also developed a two-step CNN solution to detect burned areas and predict their damage on satellite data. First, a binary semantic segmentation CNN was used to detect burned areas, and then a regression CNN was applied to predict their damage severity between 0 (no damage) and 4 (completely destroyed). Four semantic segmentation methods (U-Net, U-Net++, SegUNet, and attention U-Net [148]) were employed as backbones to extract wildfire features. Using satellite images collected from Copernicus EMS, the DS-UNet and DS-UNet++ models with BCE loss showed higher IoUs of 75% and 74%, respectively, in delineating the burnt areas compared to DS-AttU and DS-SegU; DS-AttU, DS-UNet, and DS-UNet++ performed better in predicting the damage severity levels of burned areas, obtaining RMSEs of 2.429, 1.857, and 1.857, respectively. Monaco et al.
[149] also used DS-UNet to detect wildfires and to predict the damage severity level, from 0 (no damage) to 5 (completely destroyed), on Sentinel-2 images. DS-UNet achieved an average RMSE of 1.08, outperforming baseline methods such as DS-UNet++, DS-SegU, UNet++, PSPNet, and SegU-Net. Hu et al. [150] also investigated various deep learning-based multi-class semantic segmentation methods, such as U-Net, U2-Net [151], UNet++, UNet3+ [152], attention U-Net, Deeplab v3 [153], Deeplab v3+, SegNet, and PSPNet, in mapping burn severity into five classes: unburned, low, moderate, high, and non-processing area/cloud. A large-scale dataset, named MTBS (Monitoring Trends in Burn Severity), was developed to train these models. It includes post-fire and pre-fire top-of-atmosphere Landsat images, dNBR images, perimeter masks, RdNBR (relative dNBR) images, and thematic burn severity from 2010 to 2019 (more than 7000 fires). Five loss functions (cross-entropy, focal, Dice, Lovász softmax, and OHEM losses) and data augmentation techniques (vertical/horizontal flip) were used to evaluate these models. Attention U-Net achieved the best Kappa coefficient of 88.63%. Ding et al. [154] designed a deep learning method based on U-Net, called WLF-UNet, to identify wildfire location and intensity (no-fire, low-intensity, and high-intensity) in Himawari-8 satellite data. More than 5000 images captured by the Himawari-8 satellite between November 2019 and February 2020 over the Australian regions were employed as training data, achieving an accuracy greater than 80%. Prapas et al. [155] also applied U-Net++ as a global wildfire forecasting method. Using the SeasFire cube dataset [156], which includes fire-related variables such as historical burned areas and fire emissions between 2001 and 2021, climate, vegetation, oceanic indices, and human-related data, U-Net++ reached an F1-score of 50.7%.
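The double-step designs above ([144,146,147,149]) share one inference pattern: a binary network first delineates burned pixels, and a regression network then assigns a severity level that is kept only inside the predicted burned mask. The following is a hedged sketch of that composition; both stage networks are small stand-ins, not the published architectures.

```python
import torch
import torch.nn as nn

def tiny_fcn() -> nn.Module:
    # Stand-in fully convolutional stage: 3 input bands, 1 output channel,
    # same spatial size in and out. Real systems use U-Net variants here.
    return nn.Sequential(
        nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 1, 3, padding=1),
    )

class DoubleStepSeverity(nn.Module):
    """Two-stage severity estimation in the spirit of Double-Step U-Net [144]:
    stage 1 predicts a burned/unburned mask; stage 2 regresses a severity
    level in [0, 4] that is zeroed outside the predicted burned area."""

    def __init__(self, binary_net: nn.Module, regression_net: nn.Module):
        super().__init__()
        self.binary_net = binary_net          # (N, 1, H, W) mask logits
        self.regression_net = regression_net  # (N, 1, H, W) severity scores

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        burned = torch.sigmoid(self.binary_net(x)) > 0.5        # hard burned mask
        severity = 4.0 * torch.sigmoid(self.regression_net(x))  # map to [0, 4]
        return severity * burned              # unburned pixels stay at severity 0

model = DoubleStepSeverity(tiny_fcn(), tiny_fcn())
severity_map = model(torch.randn(1, 3, 64, 64))
print(severity_map.shape)  # torch.Size([1, 1, 64, 64])
```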
Table 4. Deep learning models for fire damage prediction using satellite data.
| Ref. | Methodology | Dataset | Results |
| --- | --- | --- | --- |
| [114] | Simple CNN | Climate data, vegetation data, topography data, and historical forest fire points (2002–2010) | Accuracy = 95.81 |
| [117] | ConvLSTM | Datacube dataset: burned areas and fire driver information between 2009 and 2020, daily weather data, satellite data, road density, land cover information, and topography data | Precision = 83.20 |
| [119] | CNN-2D | Daily dynamic fire behaviors, individual fire characteristics, estimated day of burn information, average temperature, specific humidity, accumulated precipitation, soil moisture, soil temperature, standardized precipitation index, LAI, and NDVI | Accuracy = 94.88 |
| [120] | Deep-NC | 2530 historical fire locations from 2007 to 2016, 2530 non-forest fire data points, slope, land use, NDWI, aspect, elevation, NDMI, curvature, NDVI, wind speed, relative humidity, temperature, and rainfall | Accuracy = 81.50 |
| [122] | LSTM | 536 instances and 12 features including temperature, relative humidity, rain, wind, ISI, DMC, FWI, FFMC, and BUI | RMSE = 0.021 |
| [123] | CNN2D-LSTM | Monthly burned area data between 1997 and 2016, monthly maximum/minimum temperature, average temperature, specific humidity, accumulated precipitation, soil moisture, soil temperature, standardized precipitation index, LAI, and NDVI | RMSE = 4.74 |
| [124] | Simple CNN | Historical fire data from 2001 to 2019 for regions in China, climatic data (daily maximum temperature, average temperature, daily average ground surface temperature, average air pressure, maximum wind speed, sunshine hours, daily average relative humidity, and average wind speed), vegetation data (fractional vegetation cover), topographic data (slope, aspect, and elevation), human activities (population, gross domestic product, special holiday, residential area, and road) | Accuracy = 87.40 |
| [125] | 2D/3D CNN | FireCube and NDWS datasets | Accuracy = 96.48 |
| [127] | LSTM | Historical wildfire data and burned data from Pakistan | Accuracy = 95.00 |
| [128] | Deep NN, FbSP, and risk function | NDVI, slope, precipitation, temperature, land use, elevation, road density, distance to river, distance to road, wind speed, humidity, annual temperature, and social and physical parameters such as population density, age, employment rate, housing, high density, high value, etc. | Kappa = 94.30 |
| [133] | CNN-GWO | Sentinel-1 SAR images from 2016 to 2020, aspect, altitude, slope, plan curvature, precipitation, maximum temperature, solar radiance, wind speed, distance to stream, drought index, soil moisture, NDVI, TWI, land use, distance use, and distance to settlement | RMSE = 0.334 |
| [136] | Ensemble learning | Fire history data from 2013 to 2019 from the Biobio and Nuble regions; climatic data (minimum and maximum temperature, precipitation, wind speed, climatic water deficit, and actual evapotranspiration); anthropogenic data (distance to urban zones and distance to roads); vegetation data (NDVI, distance to rivers, and land cover type); topographic data (aspect, surface roughness, slope, and elevation) | F1-score = 88.77 |
| [138] | Residual U-Net | Historical wildfire data, topography data, drought data, vegetation data, and weather data | Accuracy = 83.00 |
| [144] | Double-Step U-Net | Burned maps of five regions (Spain, France, Portugal, Sweden, and Italy) affected by past fires and satellite images collected from Sentinel-2 | F1-score = 95.00 |
| [146] | Double-Step U-Net | Satellite imagery data collected from Sentinel-2 | MSE = 0.54 |
| [147] | DS-AttU; DS-UNet; DS-UNet++ | Satellite imagery and data collected from Sentinel-2 | RMSE = 2.42; RMSE = 1.85; RMSE = 1.85 |
| [149] | DS-UNet | Sentinel-2 data | RMSE = 1.08 |
| [150] | Attention U-Net | MTBS dataset: post-fire and pre-fire top-of-atmosphere Landsat images, dNBR images, perimeter masks, RdNBR images, and thematic burn severity from 2010 to 2019 (more than 7000 fires) | Kappa = 88.63 |
| [154] | WLF-UNet | More than 5000 images captured by the Himawari-8 satellite between November 2019 and February 2020 in the Australian regions | Accuracy = 80.00 |
| [155] | U-Net++ | SeasFire cube dataset: historical burned areas and fire emissions between 2001 and 2021, climate, vegetation, oceanic indices, and human-related data | F1-score = 50.70 |

6. Deep Learning-Based Approaches for Fire Spread Prediction Using Satellite Data

The fire spread approach estimates fire danger by representing the variable and fixed factors that affect the rate of fire spread and the difficulty of controlling fires, thereby predicting how fires move and develop over time. Wildfire risk is mainly influenced by factors such as weather (e.g., wind and temperature), fuel information (e.g., fuel type and fuel moisture), topographic data (e.g., slope, elevation, and aspect), and fire behaviors. Several systems were developed to estimate fire spread, area, behavior, and perimeter; for example, the Canadian FFBP (Forest Fire Behavior Prediction) system [157]. Over the years, numerous research studies have proposed methods to predict fire spread. In this review, we report only the methods based on deep learning, as presented in Table 5. Stankevich [158] describes an intelligent system for predicting wildfire spread that addresses state-of-the-art challenges such as low forecast performance, high computational cost and time, and limited functionality under uncertain and unsteady conditions. Various data sources were used as inputs: fire propagation data obtained from the NASA FIRMS resource management system [159]; environmental data including air temperature, wind speed, and humidity; forest vegetation data obtained from the European Space Agency Climate Change Initiative's global annual Land Cover Map [160]; and weather data from Ventusky InMeteo [161]. Four CNNs, each consisting of convolutional layers, ReLU activation functions, max pooling layers, and fully connected layers, were used. First, a simple CNN was adopted to recognize objects in the forest fire data. Then, three CNNs were employed to estimate the environmental variables: air temperature 2 m above the ground, wind speed at a height of 10 m above the ground, and relative air humidity. Finally, an auto-encoder generated the fire forecast. Radke et al. [162] proposed a novel model, FireCast, which integrates two CNNs to predict fire growth. Each CNN includes convolutional layers, one average pooling layer, three dropout layers, ReLU activation functions, two max pooling layers, and a sigmoid function. Given an initial fire perimeter, atmospheric data, and location characteristics as inputs, FireCast predicts the areas of the current fires that are expected to burn over the next 24 h. It achieved an accuracy of 87.7%, outperforming the Farsite simulator [163] and a random prediction baseline, using geospatial information such as Landsat 8 satellite imagery [25], elevation data, the GeoMAC dataset for fire perimeter data, and atmospheric and weather data collected from NOAA. Bergado et al. [164] proposed a deep learning method, AllConvNet, which includes convolutional layers, max pooling layers, and downsampling layers, to estimate the probabilities of wildland fire burn over the next seven days. A heterogeneous dataset [165,166,167,168] was used as input, consisting of historical forest fire burn data from the Victoria, Australia, region between 2006 and 2017, topography data (slope, elevation, and aspect), weather data (rainfall, humidity, wind direction, wind speed, temperature, solar radiation, and lightning flash density), proximity to anthropogenic interfaces (distance to power lines and distance to roads), and fuel information (fuel moisture, fuel type, and emissivity).
The experimental study reported that AllConvNet reached an accuracy of 58.23%, outperforming baseline methods such as SegNet (56.03%), logistic regression (51.54%), and multilayer perceptron (50.48%). Hodges et al. [169] developed a DCIGN (Deep Convolutional Inverse Graphics Network) to determine wildland fire spread and to predict the burned zone up to 24 h ahead. DCIGN consists of two convolutional layers, ReLU activation functions, two max pooling layers, one fully connected layer followed by TanH (hyperbolic tangent) activation functions, and one transpose convolutional layer. Various data are used as input, such as vegetation information (canopy height, canopy cover, and crown ratio), fuel model, moisture information (100-h moisture, 10-h moisture, 1-h moisture, live woody moisture, and live herbaceous moisture), wind information (north wind and east wind), elevation, and the initial burn map. DCIGN was trained to predict homogeneous and heterogeneous fire spread using 9000 and 2215 simulations, respectively, achieving an F1-score of 93%. Liang et al. [170] proposed an ensemble learning model, which includes a BPNN (Backpropagation Neural Network), LSTM, and RNN (Recurrent Neural Network), to predict the scale of forest fires. They used fire data for the Alberta region between 1990 and 2018, obtained from the Canada National Fire Database. They also employed eleven meteorological variables (minimum temperature, mean temperature, maximum temperature, cooling degree days, total rain, total precipitation, heating degree days, total snow, speed of maximum wind gust, snow on ground, and direction of maximum wind gust) as input. The testing results showed that this method is suitable for estimating the size of the burned area and the duration of the wildfire, with a high accuracy of 90.9%. Khennou et al. [171] developed FU-NetCast, a deep learning model based on U-Net, to determine wildfire spread over 24 h and to predict newly burned areas. FU-NetCast showed excellent potential in predicting forest fire spread using satellite images, atmospheric data, digital elevation models (DEMs) [66], fire perimeter data, and climate data (temperature, humidity, wind, pressure, etc.) [171]. Khennou and Akhloufi [172] also developed FU-NetCastV2 to predict the next burned area over a 24 h horizon. Using GeoMAC data (400 fire perimeters) from 2013 to 2019, FU-NetCastV2 achieved a high accuracy of 94.60%, outperforming FU-NetCast by 1.87%. Allaire et al. [173] developed a deep learning model to identify fires and to determine their spread. This model is a deep CNN with two types of inputs: scalar inputs and spatial fields describing the surrounding landscape. It consists of four convolutional layers, each followed by a batch normalization layer, a ReLU activation function, and an average pooling layer, and various dense layers, followed by batch normalization and ReLU activation functions. A MAPE (Mean Absolute Percentage Error) of 32.8% was reached using large training data, which include a data map of Corsica (land cover and elevation fields) and various environmental data: fuel moisture content (FMC), wind speed, terrain slope, ignition point coordinates, heat of combustion perturbation, particle density perturbation, fuel load perturbations, fuel height perturbations, and surface–volume ratio perturbations, confirming the potential of this method for estimating fire spread in a wide range of environments. McCarthy et al.
[174] illustrated a deep learning model based on U-Net to downscale GEO satellite multispectral imagery and to monitor and estimate fire progression. An excellent performance (a precision of 90%) was obtained, showing the effectiveness of this method in determining fire evolution at high spatiotemporal resolution (375 m) using quasi-static features (terrain, land, and vegetation information) and dynamic features obtained from GEO satellite imagery.
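Most of these spread models share a common input/output shape: a stack of co-registered raster channels (current fire mask, weather, elevation, fuel) goes in, and a next-day burn probability map comes out. The following is a minimal PyTorch sketch of that pattern, loosely inspired by FireCast-style setups; the channel list, layer sizes, and grid resolution are illustrative assumptions rather than details of the cited architectures.

```python
import torch
import torch.nn as nn

class SpreadCNN(nn.Module):
    """Toy next-day fire spread model: raster stack in, burn probability map out.

    Assumed input channels (hypothetical): current fire mask, elevation,
    wind u/v components, temperature, humidity, and a fuel-type layer.
    """
    def __init__(self, in_channels: int = 7):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, kernel_size=1),   # per-pixel burn logit
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.net(x))      # probability of burning in the next 24 h

# Example: one 64 x 64 km tile at 1 km resolution, as in NDWS-style setups.
model = SpreadCNN()
tile = torch.randn(1, 7, 64, 64)
next_day_prob = model(tile)                    # shape (1, 1, 64, 64)
```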

7. Datasets

Finding a large and reliable dataset for training and testing deep learning models is a critical challenge, as the dataset is the main factor in helping to build accurate models and in benchmarking multiple developed methods. However, for fire detection and mapping, as well as fire severity and spread prediction, there is no baseline dataset, which makes the comparison of models a critical issue. In this section, we present the most popular datasets used for fire detection and mapping, and in predicting fire spread and damage severity (see Table 6 and Table 7).
  • The GeoMAC (Geospatial Multi-Agency Coordination) database [176,177] stores fire perimeter data collected since August 2000. It is distributed as a United States Geological Survey (USGS) data series product. It contains wildland fire perimeter information obtained from wildfire incidents, evaluated for accuracy and completeness, and collected via intelligence sources such as IR (infrared) imagery and GPS. It is publicly accessible via the GeoMAC Web application [176], and the data are archived by year and state.
  • Landsat-8 satellite imagery is visual imagery collected from GloVis [25] every few months. Each image has a resolution of 30 m, where each pixel corresponds to a 30 × 30 m area on the ground.
  • Weather and atmospheric data are collected from the National Oceanic and Atmospheric Administration (NOAA) [66]. These include atmospheric pressure, wind direction, temperature (Celsius), precipitation, dew point, relative humidity, and wind speed for each wildfire location.
  • Digital Elevation Models (DEMs) [66] represent the elevation of the terrain surface relative to the reference surface used by scientists and geodesists. They are generated from remotely sensed data collected by drones, satellites, and planes, with spatial resolutions of 20 m or finer, using various remote sensing methods such as SAR interferometry, LiDAR, stereo photogrammetry, and digitized contour lines. DEM data were collected from the USGS National Map [188] for each fire location.
  • VIIRS (Visible Infrared Imaging Radiometer Suite) Level 1B data [102] are developed by NASA (National Aeronautics and Space Administration) and generated using SIPS (NASA VIIRS Science Investigator-led Processing Systems). VIIRS flies on two satellite platforms: the SNPP (Suomi National Polar-orbiting Partnership) and the JPSS (Joint Polar Satellite System) satellites. VIIRS Level 1B data contain calibrated radiance values and uncertainty indices across three products: image resolution, day–night band, and moderate resolution. They provide geolocation products and calibrated radiances.
  • VIIRS Active Fire [58] is a fire monitoring product generated by FIRMS (Fire Information for Resource Management System) from MODIS (Moderate Resolution Imaging Spectroradiometer) and VIIRS. It includes near-real-time (within 3 h of satellite observation) and real-time (only in the US and Canada) fire locations.
  • The GlobFire (Global wildfire) dataset [178] is a public dataset generated by a data mining process applied to MCD64A1 (the MODIS burned area product, Collection 6). It was developed and made available under the GWIS (Global Wildfire Information System) platform. It provides detailed information about each fire, such as the initial date, final date, perimeter, and burned area, which helps to determine fire evolution.
  • The Wildfires dataset [179,180] is a public dataset obtained from the CWFIS (Canadian Wildland Fire Information System) [189]. It contains diverse data related to weather (land surface temperature), ground conditions (NDVI), burned areas, and wildfire indicators (thermal anomalies) collected from MODIS. The burned areas cover various regions that differ in burning period, size, extent, and burn date. The dataset contains 804 instances, divided into 386 wildfire instances and 418 non-wildfire instances.
  • MCD64A1 (Collection 6) [103] is a burned area data product that maps the approximate date and spatial extent of burned areas at a spatial resolution of 500 m using MODIS surface reflectance imagery. It includes the burn date, quality assurance, burn date uncertainty, and the first and last days of the year, for reliable change detection.
  • The LANDFIRE 2.0.0 database [186] consists of public data for Puerto Rico, Alaska, the continental United States, and Hawaii. It contains fuel and vegetation data collected from various existing information resources such as the USGS National Gap Analysis Program (GAP), NPS Inventory and Monitoring, State Inventory Data, and USFS Vegetation and Fuel Plot Data. It also includes landscape disturbances and changes such as wildland fire, storm damage, fuel and vegetation treatments, insects, disease, and invasive plants.
  • The USTC_SmokeRS dataset [50,51] is a public dataset for smoke detection tasks collected from MODIS satellites. It consists of 6225 satellite images with a spatial resolution of 1 km and an image size of 256 × 256 pixels, saved in ".tif" format. It includes six classes: smoke (1016 images), seaside (1007 images), land (1027 images), haze (1002 images), dust (1009 images), and cloud (1164 images).
  • The Sentinel-2 dataset [147,183] includes data for 73 areas of interest collected in various regions of Europe by the Copernicus EMS, used to delineate forest fires and to predict the damage level. Each area of interest is represented by an image with a resolution of 5000 × 5000 × 12 (the twelve channels acquired via satellite remote sensing) and classified according to the wildfire damage level, ranging from 0 (no damage) through 1 (negligible damage), 2 (moderate damage), and 3 (high damage) to 4 (completely destroyed).
  • The Landsat-8 Active Fire Detection (LAFD) dataset [69,70] is a public dataset developed for active fire detection. It contains 8194 satellite images (256 × 256 pixels) of wildfires collected by Landsat-8 around the world in August 2020; 146,214 image patches (256 × 256 pixels) consisting of 10-band spectral images with associated results produced by three hand-crafted active fire detection methods [71,72,73]; and 9044 image patches extracted from thirteen Landsat-8 images captured in September 2020, together with their corresponding masks, manually annotated by a human specialist.
  • The WildfireDB dataset [181,182] is an open-source dataset for wildfire propagation tasks collected from the VIIRS thermal anomalies/active fire database. It presents historical wildfire occurrences over 2012–2017, together with vegetation (the maximum, median, sum, minimum, mode, and count values of canopy base density, as well as canopy height, canopy cover, canopy base height, and existing vegetation height and cover), topography (slope and elevation), and weather (total precipitation; maximum, average, and minimum temperatures; relative wind speed; and average atmospheric pressure) variables. It contains 17,820,835 data points collected from a large area covering 8,080,464 square kilometers of the continental United States.
  • The TerraClimate dataset [137,184] is a high-resolution global dataset of monthly climate and climatic water balance from 1958 to the present. It provides the following monthly climate variables: minimum and maximum temperature, precipitation, solar radiation, wind speed, climatic water deficit, vapor pressure, and reference evapotranspiration.
  • The Datacube dataset [118] includes nineteen features collected from MODIS, grouped into dynamic and static attributes. The thirteen dynamic attributes are the max and min 2 m temperature, precipitation, LAI, Fpar, day and night LST, EVI, NDVI, and the min and max u-/v-components of wind. The six static attributes are CLC (Corine Land Cover), slope, elevation, aspect, population, and road density.
  • The GEODATA DEM-9S dataset [165] refers to the Digital Elevation Model Version 3 and Flow Direction Grid 2008. It is a public dataset presenting ground-level elevation points for all of Australia, with a grid spacing of nine seconds in longitude and latitude (approximately 250 m) in the GDA94 coordinate system. It was resampled to 500 m resolution using bilinear interpolation to generate the elevation, slope, and the aspect with its sine and cosine components.
  • The Vicmap data [166,167] show the distance to anthropogenic interfaces in Victoria, including distance to roads and distance to power lines.
  • The dynamic land cover dataset [168] was developed by the Australian Bureau of Agricultural and Resource Economics and Sciences and Geoscience Australia. It reports the land cover, vegetation cover, and land use information of Australia.
  • The MTBS (Monitoring Trends in Burn Severity) dataset [107,150] is a large-scale public database developed to determine trends in burn severity. It provides burn severity and burned area delineation data for the entire United States land area between 1984 and 2021. It includes fire occurrence data and burned area boundary data, providing various fire-related products such as post-fire and pre-fire Landsat top-of-atmosphere images, dNBR (delta Normalized Burn Ratio) images (a worked dNBR computation is sketched after this list), perimeter masks, RdNBR (relative dNBR) images, thematic fire severity from 1984 to 2021, and fire locations obtained from various remote sensing satellites such as Landsat TM, Landsat ETM+, Landsat OLI, Sentinel-2A, and Sentinel-2B.
  • The CALFIRE (California Fire Perimeter Database) dataset [75] was developed by the Fire and Resource Assessment Program. It contains the records of perimeters of forest fires collected in the state of California between 1950 and 2019.
  • The GFED (Global Fire Emissions Database, Version 4.1) dataset [185] includes the estimated monthly burned area, fractional contributions of different fire types, and monthly emissions, as well as 3-hourly and daily fields that allow scaling the monthly emissions to higher temporal resolutions. Additionally, it provides monthly biosphere flux data. The spatial resolution of the data is 0.25° latitude by 0.25° longitude, and the time period covered is between 1995 and 2016. The emissions data comprise a variety of substances, such as carbon, carbon monoxide, methane, dry matter, nitrogen oxides, total particulate matter, etc. These data are presented annually by region, globally, and by fire source for each area.
  • The MapBiomas Fire dataset [175] is a public dataset of burned areas for Brazil between 1985 and 2020. Maps of the burned area are available in various temporal domains that are monthly, annual, and accumulated periods, as well as fire frequency. They are combined with annual land cover and land use to show the zones affected by the fires over the last 36 years.
  • The PlanetScope dataset [94] was developed by the Planet Labs corporation. It includes high-resolution images with a spatial resolution of 3 m per pixel, collected by a constellation of 130 CubeSat 3U satellites named Dove.
  • The Indonesian burned area dataset was developed by Prabowo et al. [94,97] to train and evaluate deep learning models for burned area mapping tasks. It comprises 227 images with a resolution of 512 × 512 pixels (in GeoTIFF format) collected by the Landsat-8 satellite over regions of Indonesia between 2019 and 2021, together with their corresponding manually annotated ground truth images.
  • The satellite burned area dataset [98,99] is a public dataset for burned area detection tasks based on semantic segmentation. It includes 73 forest fire images with a resolution of up to 10 m per pixel, collected by the Sentinel-2 L2A satellite from 2017 to 2019 over European regions, and their binary masks. It also contains annotations of five damage severity levels, ranging from undamaged to completely destroyed, generated by the Copernicus Emergency Management Service.
  • The FASDD (Flame and Smoke Detection Dataset) [62,64] is a very large public dataset consisting of flame and smoke images collected from multiple sources, such as satellites and vision cameras. It includes 310,280 remote sensing images with resolutions between 1000 × 1000 and 2200 × 2200 pixels, obtained by Landsat-8 at 30 m resolution and Sentinel-2 at 10 m resolution, covering numerous regions such as Canada (5764 images), America (8437 images), Brazil and Bolivia (6977 images), Greece and Bulgaria (10,725 images), South Africa (9537 images), China (624 images), Russia (2111 images), and Australia (266,069 images). Among these remote sensing images, 5773 images were labeled via human-computer interaction in multiple annotation formats, including JSON, XML, and text.
  • SeasFire Cube [156] is an open-access dataset developed under the SeasFire project, funded by the ESA (European Space Agency). It contains fire data between 2001 and 2021 at a 0.25° grid resolution and 8 day temporal resolution, including historical burned areas and wildfire emissions, meteorological data (humidity, wind direction, wind speed, average/max/min temperature, solar radiation, total precipitation, etc.), human-related variables (population density), oceanic indices, vegetation data (LAI, land cover, etc.), and drought data.
  • The NDWS (Next Day Wildfire Spread) dataset [79,187] is a public, large-scale, multivariate remote sensing dataset covering the continental United States between 2012 and 2020. It comprises 2D fire data with numerous variables, such as vegetation (NDVI), population density, weather (wind direction, wind speed, humidity, precipitation, and maximum/minimum temperature), topography (elevation), drought index, and an energy release component. It includes 18,455 fire samples; each represents a 64 km × 64 km region at 1 km resolution at the time and location of a fire, together with the fire mask at time t and the fire mask at time t + 1 day.
  • The FireCube dataset [126] is a daily datacube for the modeling and analysis of wildfires in Greece. It includes numerous variables between 2009 and 2021 on a daily 1 km × 1 km grid: average (avg)/maximum (max)/minimum (min) 2 m dewpoint temperature, avg/max relative humidity, avg/max/min surface pressure, avg/max/min total precipitation, avg/max/min 10 m V wind component, avg/max/min 10 m U wind component, avg/max/min 2 m temperature, 8 day evapotranspiration, FPAR (Fraction of Absorbed Photosynthetically Active Radiation), FWI (Forest Fire Weather Index), rasterized ignition points, LAI, day/night land surface temperature, direction of max wind, max wind speed, daily number of fires, soil moisture index, soil moisture index anomaly, aspect, elevation, population density (2009–2021), distance from roads, roughness, slope, distance from waterways, etc.
  • The CBERS 04A WFI dataset is a public dataset developed by Higa et al. [42,49] to map active fires. It contains 775 RGB images collected by the Wide Field Imager (WFI) sensor on board the CBERS 04A remote sensing satellite between May 2020 and August 2020 in the Brazilian Pantanal areas.
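As referenced in the MTBS entry above, burn severity products such as dNBR follow directly from pre- and post-fire reflectance. The sketch below shows the standard computation from NIR and SWIR bands in numpy; the array names and the example severity thresholds are illustrative (operational thresholds are calibrated per ecosystem).

```python
import numpy as np

def nbr(nir: np.ndarray, swir: np.ndarray, eps: float = 1e-9) -> np.ndarray:
    """Normalized Burn Ratio: (NIR - SWIR) / (NIR + SWIR)."""
    return (nir - swir) / (nir + swir + eps)

# Hypothetical pre- and post-fire reflectance bands (e.g., Landsat NIR/SWIR2).
pre_nir, pre_swir = np.random.rand(256, 256), np.random.rand(256, 256)
post_nir, post_swir = np.random.rand(256, 256), np.random.rand(256, 256)

# dNBR (delta NBR): higher values indicate more severe burning.
dnbr = nbr(pre_nir, pre_swir) - nbr(post_nir, post_swir)

# Example thematic classification; bin edges are illustrative only.
bins = [0.1, 0.27, 0.44, 0.66]
labels = np.digitize(dnbr, bins)   # 0 = unburned ... 4 = high severity
```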

8. Discussion

In this section, we discuss the data preprocessing methods used before training the deep learning models for fire detection and mapping, as well as for the fire spread and damage severity prediction tasks. In addition, we analyze the performances of the deep learning models for each of these tasks.

8.1. Data Preprocessing

The availability of satellite datasets is crucial for developing reliable and accurate DL models for detecting and mapping wildland fires, as well as for predicting their damage and spread. However, wildland fire data are rarely publicly accessible because they often include sensitive information such as fire locations and damage severity. Moreover, several challenges arise from factors including the size of the data, the noise in the data, the variability of weather and environmental conditions, and the complexity of the images. Table 7 lists the datasets commonly used for wildland fire detection, mapping, and prediction using satellite remote sensing data. There is no standard dataset for training and testing these models. Numerous datasets were designed for these tasks, such as PlanetScope [94], GFED [185], MapBiomas Fire [175], FASDD [62,64], NDWS [79,187], FireCube [126], WildfireDB [181,182], USTC_SmokeRS [50,51], and Wildfires [179,180]. The wildland fire data cover a wide range of fire influencing factors, such as vegetation data (canopy height, canopy cover, NDVI, fuel moisture, fuel type, etc.), topography data (slope, aspect, surface roughness, and elevation), weather data (precipitation, temperature, wind speed, wind direction, pressure, solar radiation, vapor pressure, etc.), and proximity to anthropogenic interfaces (distance to power lines and roads), as well as historical fires and satellite images. Data preprocessing plays a crucial role in producing reliable and accurate performance for wildland fire detection and mapping, as well as for predicting fire damage severity and spread. Many preprocessing steps were employed to remove, clean, and correct the data to ensure that DL models were trained on compatible and accurate learning data. These steps include: (1) data filtering to remove information that is not relevant to wildland fires; (2) data cleaning to remove noise such as cloud cover or smoke, and to correct or remove data anomalies resulting from sensor malfunction or other errors; and (3) data normalization to adjust the range of inputs to a similar scale, ensuring that DL models are trained on comparable and coherent data. On the other hand, data augmentation techniques were used to increase the size and diversity of wildland fire datasets, to overcome overfitting, and to improve the robustness of DL models for fire detection, mapping, and prediction, as shown in Table 8. Several augmentation techniques were used for this purpose, including mosaic data augmentation, image occlusion methods, photometric transformations, and geometric transformations. For example, Zhao et al. [38] employed random vertical and horizontal flips to diversify the training data. Khryashchev et al. [68] used multiple techniques, such as rotation, mirroring, shifting, and random chromatic distortion in the HSV color format, to augment the number of input images, resulting in eight times more images than the original training and testing data (1850 images). Colomba et al. [98] used four data augmentation methods during the training phase, including rotation, shear, and vertical/horizontal flips, to improve the robustness of their proposed model for fire severity forecasting. In [149], three data augmentation techniques (rotation, vertical/horizontal flips, and shear) were used to increase the data variability, especially for unbalanced classes.
In [80], cropping and horizontal/vertical mirroring methods were used, resulting in 47 multispectral smoke images. Hu et al. [90] also employed six data augmentation techniques (resize, mirror, rotation, aspect, crop, and color jitter) in training their fire mapping model.
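As a concrete illustration of steps (1)-(3) and the flip-based augmentation mentioned above, the following is a minimal numpy sketch for multispectral patches; the band count, scaling scheme, and cloud-mask convention are assumptions for the example, not properties of any specific dataset cited here.

```python
import numpy as np

def preprocess(patch: np.ndarray, cloud_mask: np.ndarray) -> np.ndarray:
    """Clean and normalize one (bands, H, W) multispectral patch.

    cloud_mask: boolean (H, W) array, True where pixels are cloud-covered.
    """
    patch = patch.astype(np.float32)
    patch[:, cloud_mask] = np.nan                      # (2) flag contaminated pixels
    band_min = np.nanmin(patch, axis=(1, 2), keepdims=True)
    band_max = np.nanmax(patch, axis=(1, 2), keepdims=True)
    patch = (patch - band_min) / (band_max - band_min + 1e-9)  # (3) per-band scaling
    return np.nan_to_num(patch, nan=0.0)               # fill flagged pixels

def augment(patch: np.ndarray, mask: np.ndarray, rng: np.random.Generator):
    """Random vertical/horizontal flips applied jointly to image and label mask."""
    if rng.random() < 0.5:
        patch, mask = patch[:, ::-1, :], mask[::-1, :]   # vertical flip
    if rng.random() < 0.5:
        patch, mask = patch[:, :, ::-1], mask[:, ::-1]   # horizontal flip
    return patch.copy(), mask.copy()

rng = np.random.default_rng(0)
x = preprocess(np.random.rand(12, 256, 256), np.zeros((256, 256), dtype=bool))
x_aug, y_aug = augment(x, np.zeros((256, 256), dtype=np.int64), rng)
```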

8.2. Discussion of Model Results

The performances of the DL models are measured by their ability to accurately detect and map fires, as well as to predict damage severity and fire spread. Evaluating a fire detection model depends on assessing its ability to correctly identify wildland fires. Similarly, in a fire mapping task, the reliability of DL models can be evaluated based on their success in correctly mapping and detecting burned areas on the input map. In addition, the performance of DL models on damage severity and fire spread prediction tasks is evaluated based on their effectiveness in determining the level of fire damage and estimating fire spread using various influencing factors. Table 2, Table 3, Table 4 and Table 5 present the results obtained by the DL models used for detecting and mapping wildland fires, and for predicting the level of damage and spread caused by the fires. Comparing the DL models is challenging due to the use of various metrics and datasets for evaluation. In general, deep learning models demonstrated remarkable performances in detecting and mapping wildland fires, as well as in predicting the severity and spread of fires using satellite remote sensing data, outperforming traditional machine learning models. In fire detection and mapping, satellite data were utilized to identify smoke plumes and fires. Convolutional Neural Networks (CNNs), which are commonly composed of convolutional layers, pooling layers, and fully connected layers, are frequently used for fire detection. For instance, in [31], a deep CNN, MiniVGGNet, demonstrated excellent accuracy, at 98.00%. In [56], a 3D CNN was proposed to identify fire using GOES satellite images, achieving a superior F1-score of 94.0% compared to traditional methods such as GOES-AFP, AVHRR-FIMMA, and VIIRS-AFP. Additionally, a deep CNN, named FireCNN, was designed to detect fire in Himawari-8 satellite data, showing excellent performance with an accuracy of 99.90% and surpassing machine learning methods such as SVM, random forest, and thresholding methods. The CNN model FireCast showed a high accuracy of 87.7% in predicting fire spread, which is superior to the Farsite simulator by 24.1% and 19.9% using dry and wet fuel moisture, respectively [162]. CNN models were also used to predict fire damage severity, with promising results. For example, a 2D/3D CNN method [125] was employed to analyze multiple fire influencing factors, such as temperature, day/night land surface temperature, soil moisture index, relative humidity, wind speed, NDVI, surface pressure, and precipitation, obtaining an accuracy of 96.48%, better than machine learning methods such as random forest and XGBoost. On the other hand, DL models based on the encoder–decoder architecture were utilized to detect fire or smoke areas and to map burned areas as a segmentation task. For instance, in [80], an encoder–decoder model called Smoke-UNet was proposed to detect smoke areas using multispectral satellite images, showing a high accuracy of 92.3% and outperforming existing semantic segmentation methods such as UNet, FCN, PSPNet, and SegNet. In [87], UNet was used to detect fire areas, demonstrating a high accuracy of 98.0%. In [96], Burnt-Net, an encoder–decoder model, was developed to accurately map the extent of burned areas using Sentinel-2 images. It achieved an accuracy of 98.08%, surpassing U-Net by 1.15%.
Numerous multi-class semantic segmentation techniques based on the U-Net architecture, including the Double-Step U-Net [144,146], WLF-UNet [154], and DS-AttU [147], were also used to predict the severity of damage. These methods demonstrated their effectiveness in accurately predicting the level of damage severity.
The transfer learning technique was also employed to reuse a DL model pretrained on a large dataset, such as ImageNet [37], for detecting and mapping wildfires using satellite data. The main idea is to adapt and fine-tune the pretrained model's parameters, avoiding the overfitting caused by the limited amount of training data available for these tasks. Pretrained DL models showed great potential in accurately detecting and mapping wildfires in satellite data. For instance, in [34], a pretrained MobileNet v2 was employed to detect wildfires, achieving an accuracy of 73.3%. In [89], a pretrained VGG16 was used as a feature extractor to map fires using real-time Sentinel-2 data, achieving an impressive F1-score of 87.0%.
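A typical recipe for this kind of fine-tuning is sketched below in PyTorch with a torchvision MobileNetV2 backbone: freeze the ImageNet-pretrained feature extractor and retrain only a small classification head on the fire imagery. The two-class setup and learning rate are illustrative assumptions, not details taken from the cited studies.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load MobileNetV2 pretrained on ImageNet and freeze its feature extractor.
model = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.IMAGENET1K_V1)
for param in model.features.parameters():
    param.requires_grad = False

# Replace the classifier head for a binary fire / no-fire task.
model.classifier[1] = nn.Linear(model.last_channel, 2)

# Only the new head's parameters are updated during fine-tuning.
optimizer = torch.optim.Adam(model.classifier.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch of RGB satellite patches.
images = torch.randn(4, 3, 224, 224)
labels = torch.randint(0, 2, (4,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```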
To predict fire damage severity and fire spread, historical fire data were analyzed using an LSTM network, a type of RNN designed to capture the temporal dependencies and patterns in time series data. For example, in [122], an LSTM was used to analyze dynamic predictors such as temperature, relative humidity, rain, wind, ISI, etc., achieving a good RMSE of 0.021 compared to other methods, including decision tree, linear regression, SVM, and NN. LSTM and RNN models also achieved a high accuracy of 90.9% when used to predict fire spread from historical fire data between 1990 and 2018 and eleven meteorological variables [170]. This suggests that these models were able to learn the patterns and relationships between historical wildfire data and fire influencing factors to provide reliable predictions.
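The sequence-to-one setup used in such studies can be summarized as follows: a window of daily weather observations feeds an LSTM whose last hidden state is regressed onto a fire indicator (e.g., burned area). The sketch below is a minimal PyTorch version; the feature count, window length, and hidden size are assumptions for illustration.

```python
import torch
import torch.nn as nn

class FireLSTM(nn.Module):
    """Sequence-to-one regressor: daily weather window -> fire indicator."""
    def __init__(self, n_features: int = 11, hidden_size: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time_steps, n_features)
        _, (h_n, _) = self.lstm(x)        # h_n: (1, batch, hidden_size)
        return self.head(h_n[-1])         # (batch, 1) predicted fire indicator

# Example: 30-day windows of 11 meteorological variables.
model = FireLSTM()
weather_window = torch.randn(8, 30, 11)
predicted = model(weather_window)         # shape (8, 1)
```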
The attention mechanism was also used to analyze satellite data and to address the wildland fire problem. It allows models to capture global dependencies and to focus on relevant features in the input satellite data, thereby improving the performance of DL models in detecting and mapping wildland fires. In [150], attention U-Net, which adopts the attention gate mechanism to remove noisy and irrelevant features transmitted by skip connections, showed promising results (a Kappa coefficient of 88.63%), outperforming several DL models, including U-Net, U2-Net, U-Net++, DeepLab v3, DeepLab v3+, and FCN by 0.81%, 0.83%, 1.48%, 11.65%, 8.26%, and 7.39%, respectively, in detecting the burn severity level. AOSVSSNET, which also integrates the attention mechanism in its skip connections, demonstrated a high IoU of 72.84%, outperforming FCN, UNet, and DeepLab v3+ by 43.21%, 7.17%, and 0.7%, respectively, in detecting smoke using satellite data [84]. Moreover, the Swin Transformer, which adopts the attention mechanism as its main component, showed promising results (mAP of 53.01%), better than Yolo v5 (mAP of 41.39%) and Faster R-CNN (mAP of 32.05%), in detecting flame and smoke using a large satellite dataset (5773 images) [62].
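The attention gate referenced above can be written in a few lines: a decoder signal g re-weights the encoder skip features x before concatenation, suppressing irrelevant regions. The sketch below follows the commonly used additive formulation; the channel sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    """Additive attention gate: decoder signal g re-weights encoder skip features x."""
    def __init__(self, g_channels: int, x_channels: int, inter_channels: int):
        super().__init__()
        self.w_g = nn.Conv2d(g_channels, inter_channels, kernel_size=1)
        self.w_x = nn.Conv2d(x_channels, inter_channels, kernel_size=1)
        self.psi = nn.Conv2d(inter_channels, 1, kernel_size=1)

    def forward(self, g: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
        # g and x are assumed to share spatial size here (upsample g otherwise).
        alpha = torch.sigmoid(self.psi(torch.relu(self.w_g(g) + self.w_x(x))))
        return x * alpha                  # noisy skip features are down-weighted

# Example: gate a 64-channel skip connection with a 128-channel decoder signal.
gate = AttentionGate(g_channels=128, x_channels=64, inter_channels=32)
g = torch.randn(1, 128, 64, 64)
skip = torch.randn(1, 64, 64, 64)
gated_skip = gate(g, skip)                # shape (1, 64, 64, 64)
```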
In summary, DL models showed strong performances in detecting and mapping wildland fires using satellite remote sensing data as input. They automatically extract features from the input data and detect smoke and fires more accurately than classical ML models. In addition, for predicting the damage and spread of wildland fires, DL models also showed promising results using various fire influencing factors such as temperature, wind speed, humidity, topography, etc. However, comparing DL models on these tasks is challenging due to the use of various metrics and datasets for training and testing. Therefore, it is important to develop standard evaluation metrics and datasets in future research to enable solid comparisons and to facilitate the development of more reliable fire models.
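To make such cross-paper comparisons concrete, the common mask-level metrics can all be derived from a single confusion matrix, as in this short numpy sketch (a binary fire/no-fire mask is assumed for simplicity):

```python
import numpy as np

def mask_metrics(pred: np.ndarray, truth: np.ndarray) -> dict:
    """Precision, recall, F1, and IoU for binary fire masks."""
    tp = np.sum((pred == 1) & (truth == 1))   # correctly detected fire pixels
    fp = np.sum((pred == 1) & (truth == 0))   # false alarms
    fn = np.sum((pred == 0) & (truth == 1))   # missed fire pixels
    precision = tp / (tp + fp + 1e-9)
    recall = tp / (tp + fn + 1e-9)
    f1 = 2 * precision * recall / (precision + recall + 1e-9)
    iou = tp / (tp + fp + fn + 1e-9)
    return {"precision": precision, "recall": recall, "f1": f1, "iou": iou}

scores = mask_metrics(np.random.randint(0, 2, (256, 256)),
                      np.random.randint(0, 2, (256, 256)))
```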

9. Conclusions

This paper presented a comprehensive literature review of recent deep learning models proposed for detecting, mapping, and predicting wildland fires’ damage severity and spread using satellite remote sensing data. The proposed deep learning models demonstrated their potential with accurate and reliable performance, even when faced with challenges such as large data sizes, noisy data, environmental variability, and image complexity. We also illustrated the most commonly used datasets for these tasks. Finally, we discussed the challenges associated with data preprocessing and model interpretation for these tasks.
Our discussion reveals that deep learning models outperform traditional methods, confirming their effectiveness and potential in detecting, mapping, and predicting wildland fires using satellite data that include numerous fire influencing factors such as vegetation, topography, weather, and historical fire data. As an example, FireCNN and FCN showed strong performances in wildland fire detection, reaching accuracies of 99.90% and 99.50%, respectively. For the wildland fire mapping task, Burnt-Net demonstrated high performance, with an accuracy of 98.08%. In the wildland fire susceptibility task, a 2D/3D CNN achieved an excellent accuracy of 96.48%. Additionally, FU-NetCastV2 showed a great result, with an accuracy of 94.60%, in estimating wildland fire spread. However, several challenges remain for future research, including the scarcity of labeled satellite datasets, which are essential for training and testing wildfire models. The detection, mapping, and forecasting of wildfires require processing different types of data, such as satellite imagery, meteorological data, etc. These data can be noisy and may contain artifacts, which affect the quality of the results. In addition, wildfires progress rapidly, thus requiring real-time and continuous data processing to accurately detect and map them. One potential solution is the use of 3D virtual simulation platforms to generate satellite wildfire data with annotations, which can facilitate the training of deep learning models. In addition, combining terrestrial, aerial (drone), and satellite wildland fire data with vision Transformer models could provide reliable and real-time information for wildland fire monitoring and management.

Author Contributions

Conceptualization, M.A.A. and R.G.; methodology, R.G. and M.A.A.; validation, R.G. and M.A.A.; formal analysis, R.G. and M.A.A.; writing—original draft preparation, R.G.; writing—review and editing, M.A.A.; funding acquisition, M.A.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research was enabled in part by support provided by the Natural Sciences and Engineering Research Council of Canada (NSERC), funding reference number RGPIN-2018-06233.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
DL: Deep learning
ML: Machine learning
CNN: Convolutional Neural Network
ReLU: Rectified Linear Unit
SWIR: Short-Wave Infrared
NIR: Near-Infrared
MODIS: Moderate Resolution Imaging Spectroradiometer
FCN: Fully Convolutional Network
IR: Infrared
GEO: Geostationary orbit
LEO: Low-earth orbit
SSO: Sun-synchronous orbit
AVHRR: Advanced Very-High-Resolution Radiometer
VIIRS: Visible Infrared Imaging Radiometer Suite
LAI: Leaf Area Index
NDVI: Normalized Difference Vegetation Index
Fpar: Fraction of Photosynthetically Active Radiation
EVI: Enhanced Vegetation Index
LST: Land Surface Temperature
SVM: Support Vector Machine
NDWI: Normalized Difference Water Index
NDMI: Normalized Difference Moisture Index
SGD: Stochastic Gradient Descent
Adam: Adaptive Moment Estimation
RMSProp: Root Mean Square Propagation
ISI: Initial Spread Index
DMC: Duff Moisture Code
FWI: Forest Fire Weather Index
FFMC: Fine Fuel Moisture Code
BUI: Buildup Index
RMSE: Root Mean Squared Error
GFED: Global Fire Emissions Database
CALFIRE: California Fire Perimeter Database
dNBR: delta Normalized Burn Ratio
RdNBR: relative dNBR
NN: Neural Network
SAR: Synthetic Aperture Radar
TWI: Topographic Wetness Index
S-NPP: Suomi National Polar-orbiting Partnership
EMS: Emergency Management Service
MTBS: Monitoring Trends in Burn Severity
NOAA: National Oceanic and Atmospheric Administration
LSTM: Long Short-Term Memory
RNN: Recurrent Neural Network
MAPE: Mean Absolute Percentage Error
PRISMA: PRecursore IperSpettrale della Missione Applicativa

References

  1. Natural Resources Canada. National Wildland Fire Situation Report. 2023. Available online: https://cwfis.cfs.nrcan.gc.ca/report (accessed on 5 March 2023).
  2. Ghali, R.; Akhloufi, M.A.; Jmal, M.; Souidene Mseddi, W.; Attia, R. Wildfire Segmentation Using Deep Vision Transformers. Remote Sens. 2021, 13, 3527. [Google Scholar] [CrossRef]
  3. Ghali, R.; Akhloufi, M.A.; Mseddi, W.S. Deep Learning and Transformer Approaches for UAV-Based Wildfire Detection and Segmentation. Sensors 2022, 22, 1977. [Google Scholar] [CrossRef] [PubMed]
  4. Ghali, R.; Jmal, M.; Souidene Mseddi, W.; Attia, R. Recent Advances in Fire Detection and Monitoring Systems: A Review. In Proceedings of the 18th International Conference on Sciences of Electronics, Technologies of Information and Telecommunications (SETIT’18), Hammamet, Tunisia, 20–22 December 2018; Volume 1, pp. 332–340. [Google Scholar]
  5. Giglio, L.; Schroeder, W.; Justice, C.O. The collection 6 MODIS active fire detection algorithm and fire products. Remote Sens. Environ. 2016, 178, 31–41. [Google Scholar] [CrossRef] [PubMed]
  6. Chuvieco, E.; Aguado, I.; Salas, J.; García, M.; Yebra, M.; Oliva, P. Satellite remote sensing contributions to wildland fire science and management. Curr. For. Rep. 2020, 6, 81–96. [Google Scholar] [CrossRef]
  7. Ruffault, J.; Martin-StPaul, N.; Pimont, F.; Dupuy, J.L. How well do meteorological drought indices predict live fuel moisture content (LFMC)? An assessment for wildfire research and operations in Mediterranean ecosystems. Agric. For. Meteorol. 2018, 262, 391–401. [Google Scholar] [CrossRef]
  8. Sánchez Sánchez, Y.; Martínez-Graña, A.; Santos Francés, F.; Mateos Picado, M. Mapping Wildfire Ignition Probability Using Sentinel 2 and LiDAR (Jerte Valley, Cáceres, Spain). Sensors 2018, 18, 826. [Google Scholar] [CrossRef]
  9. Lin, Z.; Chen, F.; Niu, Z.; Li, B.; Yu, B.; Jia, H.; Zhang, M. An active fire detection algorithm based on multi-temporal FengYun-3C VIRR data. Remote Sens. Environ. 2018, 211, 376–387. [Google Scholar] [CrossRef]
  10. Ba, R.; Song, W.; Li, X.; Xie, Z.; Lo, S. Integration of Multiple Spectral Indices and a Neural Network for Burned Area Mapping Based on MODIS Data. Remote Sens. 2019, 11, 326. [Google Scholar] [CrossRef]
  11. Matasci, G.; Hermosilla, T.; Wulder, M.A.; White, J.C.; Coops, N.C.; Hobart, G.W.; Bolton, D.K.; Tompalski, P.; Bater, C.W. Three decades of forest structural dynamics over Canada’s forested ecosystems using Landsat time-series and lidar plots. Remote Sens. Environ. 2018, 216, 697–714. [Google Scholar] [CrossRef]
  12. Ott, C.W.; Adhikari, B.; Alexander, S.P.; Hodza, P.; Xu, C.; Minckley, T.A. Predicting Fire Propagation across Heterogeneous Landscapes Using WyoFire: A Monte Carlo-Driven Wildfire Model. Fire 2020, 3, 71. [Google Scholar] [CrossRef]
  13. Wu, C.; Zhang, F.; Xia, J.; Xu, Y.; Li, G.; Xie, J.; Du, Z.; Liu, R. Building Damage Detection Using U-Net with Attention Mechanism from Pre- and Post-Disaster Remote Sensing Datasets. Remote Sens. 2021, 13, 905. [Google Scholar] [CrossRef]
  14. Yu, Q.; Wang, S.; He, H.; Yang, K.; Ma, L.; Li, J. Reconstructing GRACE-like TWS anomalies for the Canadian landmass using deep learning and land surface model. Int. J. Appl. Earth Obs. Geoinf. 2021, 102, 102404. [Google Scholar] [CrossRef]
  15. Roy, S.K.; Kar, P.; Hong, D.; Wu, X.; Plaza, A.; Chanussot, J. Revisiting Deep Hyperspectral Feature Extraction Networks via Gradient Centralized Convolution. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–19. [Google Scholar] [CrossRef]
  16. Yuan, K.; Zhuang, X.; Schaefer, G.; Feng, J.; Guan, L.; Fang, H. Deep-Learning-Based Multispectral Satellite Image Segmentation for Water Body Detection. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 14, 7422–7434. [Google Scholar] [CrossRef]
  17. Herruzo, P.; Gruca, A.; Lliso, L.; Calbet, X.; Rípodas, P.; Hochreiter, S.; Kopp, M.; Kreil, D.P. High-resolution multi-channel weather forecasting—First insights on transfer learning from the Weather4cast Competitions 2021. In Proceedings of the IEEE International Conference on Big Data, Orlando, FL, USA, 15–18 December 2021; pp. 5750–5757. [Google Scholar] [CrossRef]
  18. Francis, A.; Sidiropoulos, P.; Muller, J.P. CloudFCN: Accurate and Robust Cloud Detection for Satellite Imagery with Deep Learning. Remote Sens. 2019, 11, 2312. [Google Scholar] [CrossRef]
  19. Kislov, D.E.; Korznikov, K.A.; Altman, J.; Vozmishcheva, A.S.; Krestov, P.V. Extending deep learning approaches for forest disturbance segmentation on very high-resolution satellite images. Remote Sens. Ecol. Conserv. 2021, 7, 355–368. [Google Scholar] [CrossRef]
  20. Barmpoutis, P.; Papaioannou, P.; Dimitropoulos, K.; Grammalidis, N. A Review on Early Forest Fire Detection Systems Using Optical Remote Sensing. Sensors 2020, 20, 6442. [Google Scholar] [CrossRef]
  21. Ghali, R.; Akhloufi, M.A. Deep Learning Approaches for Wildland Fires Remote Sensing: Classification, Detection, and Segmentation. Remote Sens. 2023, 15, 1821. [Google Scholar] [CrossRef]
  22. Mohapatra, A.; Trinh, T. Early Wildfire Detection Technologies in Practice: A Review. Sustainability 2022, 14, 12270. [Google Scholar] [CrossRef]
  23. Akhloufi, M.A.; Couturier, A.; Castro, N.A. Unmanned Aerial Vehicles for Wildland Fires: Sensing, Perception, Cooperation and Assistance. Drones 2021, 5, 15. [Google Scholar] [CrossRef]
  24. NOAA Office of Satellite and Product Operations. GOES Satellite. 2023. Available online: https://www.ospo.noaa.gov/Operations/GOES/transition.html (accessed on 5 March 2023).
  25. United States Geological Survey (USGS). GloVis. 2023. Available online: https://glovis.usgs.gov/app?fullscreen=0 (accessed on 5 March 2023).
  26. European Space Agency. Sentinel Satellite. 2023. Available online: https://sentinels.copernicus.eu/web/sentinel/home (accessed on 5 March 2023).
  27. NASA Office. MODIS Satellite. 2023. Available online: https://modis.gsfc.nasa.gov/about/ (accessed on 5 March 2023).
  28. Earth Data Website. AVHRR Satellite. 2023. Available online: https://www.earthdata.nasa.gov/sensors/avhrr (accessed on 5 March 2023).
  29. Earth Data Website. VIIRS Satellite. 2023. Available online: https://www.earthdata.nasa.gov/learn/find-data/near-real-time/viirs (accessed on 5 March 2023).
  30. Kang, Y.; Jang, E.; Im, J.; Kwon, C. A deep learning model using geostationary satellite data for forest fire detection with reduced detection latency. Gisci. Remote Sens. 2022, 59, 2019–2035. [Google Scholar] [CrossRef]
  31. Azami, M.H.b.; Orger, N.C.; Schulz, V.H.; Oshiro, T.; Cho, M. Earth Observation Mission of a 6U CubeSat with a 5-Meter Resolution for Wildfire Image Classification Using Convolution Neural Network Approach. Remote Sens. 2022, 14, 1874. [Google Scholar] [CrossRef]
  32. Kalaivani, V.; Chanthiya, P. A novel custom optimized convolutional neural network for a satellite image by using forest fire detection. Earth Sci. Inform. 2022, 15, 1285–1295. [Google Scholar] [CrossRef]
  33. Seydi, S.T.; Saeidi, V.; Kalantar, B.; Ueda, N.; Halin, A.A. Fire-Net: A deep learning framework for active forest fire detection. J. Sens. 2022, 2022, 8044390. [Google Scholar] [CrossRef]
  34. Maria Jose Lozano, P.; MacFarlane, I. Predicting California Wildfire Risk with Deep Neural Networks. In Proceedings of the CS230: Deep Learning, Winter 2018; Stanford University: Stanford, CA, USA, 2018; pp. 1–6. [Google Scholar]
  35. Sandler, M.; Howard, A.; Zhu, M.; Zhmoginov, A.; Chen, L.C. MobileNetV2: Inverted Residuals and Linear Bottlenecks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–23 June 2018; pp. 4510–4520. [Google Scholar]
  36. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
  37. Deng, J.; Dong, W.; Socher, R.; Li, L.J.; Li, K.; Fei-Fei, L. ImageNet: A large-scale hierarchical image database. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; pp. 248–255. [Google Scholar]
  38. Zhao, L.; Liu, J.; Peters, S.; Li, J.; Oliver, S.; Mueller, N. Investigating the Impact of Using IR Bands on Early Fire Smoke Detection from Landsat Imagery with a Lightweight CNN Model. Remote Sens. 2022, 14, 3047. [Google Scholar] [CrossRef]
  39. Wang, Y.; Liu, X.; Li, M.; Di, W.; Wang, L. Deep Convolution and Correlated Manifold Embedded Distribution Alignment for Forest Fire Smoke Prediction. Comput. Inform. 2020, 39, 318–339. [Google Scholar] [CrossRef]
  40. Filonenko, A.; Kurnianggoro, L.; Jo, K.H. Smoke detection on video sequences using convolutional and recurrent neural networks. In Proceedings of the Computational Collective Intelligence (ICCCI 2017), Nicosia, Cyprus, 27–29 September 2017; pp. 558–566. [Google Scholar]
  41. Wei, X.; Wu, S.; Wang, Y. Forest fire smoke detection model based on deep convolution long short-term memory network. J. Comput. Appl. 2019, 39, 2883–2887. [Google Scholar] [CrossRef]
  42. Higa, L.; Marcato, J., Jr.; Rodrigues, T.; Zamboni, P.; Silva, R.; Almeida, L.; Liesenberg, V.; Roque, F.; Libonati, R.; Gonçalves, W.N.; et al. Active Fire Mapping on Brazilian Pantanal Based on Deep Learning and CBERS 04A Imagery. Remote Sens. 2022, 14, 688. [Google Scholar] [CrossRef]
  43. Kim, K.; Lee, H.S. Probabilistic Anchor Assignment with IoU Prediction for Object Detection. In Proceedings of the Computer Vision—ECCV 2020, Glasgow, UK, 23–28 August 2020; pp. 355–371. [Google Scholar]
  44. Zhang, H.; Wang, Y.; Dayoub, F.; Sunderhauf, N. VarifocalNet: An IoU-Aware Dense Object Detector. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 20–25 June 2021; pp. 8514–8523. [Google Scholar]
  45. Zhang, S.; Chi, C.; Yao, Y.; Lei, Z.; Li, S.Z. Bridging the Gap between Anchor-Based and Anchor-Free Detection via Adaptive Training Sample Selection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020; pp. 9759–9768. [Google Scholar]
  46. Wang, J.; Zhang, W.; Cao, Y.; Chen, K.; Pang, J.; Gong, T.; Shi, J.; Loy, C.C.; Lin, D. Side-aware boundary localization for more precise object detection. In Proceedings of the Computer Vision–ECCV 2020, Glasgow, UK, 23–28 August 2020; pp. 403–419. [Google Scholar]
  47. Lin, T.Y.; Goyal, P.; Girshick, R.; He, K.; Dollar, P. Focal Loss for Dense Object Detection. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 2980–2988. [Google Scholar]
  48. Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. In Proceedings of the Advances in Neural Information Processing Systems, Montreal, QC, Canada, 7–12 December 2015; pp. 91–99. [Google Scholar]
  49. Higa, L.; Marcato Junior, J.; Rodrigues, T.; Zamboni, P.; Silva, R.; Almeida, L.; Liesenberg, V.; Roque, F.; Libonati, R.; Gonçalves, W.N.; et al. Active Fire Detection (CBERS 4A—RGB). 2023. Available online: https://sites.google.com/view/geomatics-and-computer-vision/home/datasets (accessed on 5 March 2023).
  50. Ba, R.; Chen, C.; Yuan, J.; Song, W.; Lo, S. SmokeNet: Satellite Smoke Scene Detection Using Convolutional Neural Network with Spatial and Channel-Wise Attention. Remote Sens. 2019, 11, 1702. [Google Scholar] [CrossRef]
  51. Ba, R.; Chen, C.; Yuan, J.; Song, W.; Lo, S. USTC_SmokeRS Dataset. 2023. Available online: https://pan.baidu.com/s/1GBOE6xRVzEBV92TrRMtfWg (accessed on 5 March 2023).
  52. Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. In Proceedings of the 3rd International Conference on Learning Representations, ICLR, San Diego, CA, USA, 7–9 May 2015; pp. 1–14. [Google Scholar]
  53. Huang, G.; Liu, Z.; van der Maaten, L.; Weinberger, K.Q. Densely Connected Convolutional Networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 4700–4708. [Google Scholar]
  54. Wang, F.; Jiang, M.; Qian, C.; Yang, S.; Li, C.; Zhang, H.; Wang, X.; Tang, X. Residual Attention Network for Image Classification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 3156–3164. [Google Scholar]
  55. Hu, J.; Shen, L.; Sun, G. Squeeze-and-Excitation Networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–22 June 2018; pp. 7132–7141. [Google Scholar]
  56. Phan, T.C.; Nguyen, T.T. Remote Sensing meets Deep Learning: Exploiting Spatio-Temporal-Spectral Satellite Images for Early Wildfire Detection. 2023. Available online: https://infoscience.epfl.ch/record/270339 (accessed on 5 March 2023).
  57. Li, Z.; Kaufman, Y.J.; Ichoku, C.; Fraser, R.; Trishchenko, A.; Giglio, L.; Jin, J.; Yu, X. A review of AVHRR-based active fire detection algorithms: Principles, limitations, and recommendations. In Global and Regional Vegetation Fire Monitoring from Space, Planning and Coordinated International Effort; Kugler Publications: Amsterdam, The Netherlands, 2001; pp. 199–225. [Google Scholar]
  58. Schroeder, W.; Oliva, P.; Giglio, L.; Csiszar, I.A. The New VIIRS 375 m active fire detection data product: Algorithm description and initial assessment. Remote Sens. Environ. 2014, 143, 85–96. [Google Scholar] [CrossRef]
  59. Xu, W.; Wooster, M.; Roberts, G.; Freeborn, P. New GOES imager algorithms for cloud and active fire detection and fire radiative power assessment across North, South and Central America. Remote Sens. Environ. 2010, 114, 1876–1895. [Google Scholar] [CrossRef]
  60. Hong, Z.; Tang, Z.; Pan, H.; Zhang, Y.; Zheng, Z.; Zhou, R.; Ma, Z.; Zhang, Y.; Han, Y.; Wang, J.; et al. Active Fire Detection Using a Novel Convolutional Neural Network Based on Himawari-8 Satellite Images. Front. Environ. Sci. 2022, 10, 102. [Google Scholar] [CrossRef]
  61. Japan Aerospace Exploration Agency. Himawari-8 Dataset. 2023. Available online: https://www.eorc.jaxa.jp/ptree/userguide.html (accessed on 5 March 2023).
  62. Wang, M.; Jiang, L.; Yue, P.; Yu, D.; Tuo, T. FASDD: An Open-access 100,000-level Flame and Smoke Detection Dataset for Deep Learning in Fire Detection. Earth Syst. Sci. Data Discuss. 2022, 2022, 1–22. [Google Scholar] [CrossRef]
  63. Zhu, X.; Su, W.; Lu, L.; Li, B.; Wang, X.; Dai, J. Deformable DETR: Deformable Transformers for End-to-End Object Detection. arXiv 2020, arXiv:2010.04159. [Google Scholar]
  64. Wang, M.; Jiang, L.; Yue, P.; Yu, D.; Tuo, T. Flame and Smoke Detection Dataset (FASDD). 2023. Available online: https://www.scidb.cn/en/detail?dataSetId=ce9c9400b44148e1b0a749f5c3eb0bda (accessed on 5 March 2023).
  65. Larsen, A.; Hanigan, I.; Reich, B.J.; Qin, Y.; Cope, M.; Morgan, G.; Rappold, A.G. A deep learning approach to identify smoke plumes in satellite imagery in near-real time for health risk communication. Expo. Sci. Environ. Epidemiol. 2021, 31, 170–176. [Google Scholar] [CrossRef]
  66. National Centers for Environmental Information. National Oceanic and Atmospheric Administration (NOAA). 2023. Available online: https://www.usgs.gov/programs/national-geospatial-program/national-map (accessed on 5 March 2023).
  67. Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional networks for biomedical image segmentation. In Proceedings of the Medical Image Computing and Computer-Assisted Intervention—MICCAI, Munich, Germany, 5–9 October 2015; pp. 234–241. [Google Scholar]
  68. Khryashchev, V.; Larionov, R. Wildfire Segmentation on Satellite Images using Deep Learning. In Proceedings of the Moscow Workshop on Electronic and Networking Technologies (MWENT), Moscow, Russia, 11–13 March 2020; pp. 1–5. [Google Scholar]
  69. de Almeida Pereira, G.H.; Fusioka, A.M.; Nassu, B.T.; Minetto, R. Active fire detection in Landsat-8 imagery: A large-scale dataset and a deep-learning study. ISPRS J. Photogramm. Remote Sens. 2021, 178, 171–186. [Google Scholar] [CrossRef]
  70. de Almeida Pereira, G.H.; Fusioka, A.M.; Nassu, B.T.; Minetto, R. Active Fire Detection in Landsat-8 Imagery. 2023. Available online: https://github.com/pereira-gha/activefire (accessed on 5 March 2023).
  71. Schroeder, W.; Oliva, P.; Giglio, L.; Quayle, B.; Lorenz, E.; Morelli, F. Active fire detection using Landsat-8/OLI data. Remote Sens. Environ. 2016, 185, 210–220. [Google Scholar] [CrossRef]
  72. Kumar, S.S.; Roy, D.P. Global operational land imager Landsat-8 reflectance-based active fire detection algorithm. Int. J. Digit. Earth 2018, 11, 154–178. [Google Scholar] [CrossRef]
  73. Murphy, S.W.; de Souza Filho, C.R.; Wright, R.; Sabatino, G.; Correa Pabon, R. HOTMAP: Global hot target detection at moderate spatial resolution. Remote Sens. Environ. 2016, 177, 78–88. [Google Scholar] [CrossRef]
  74. Rashkovetsky, D.; Mauracher, F.; Langer, M.; Schmitt, M. Wildfire Detection From Multisensor Satellite Imagery Using Deep Semantic Segmentation. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 14, 7001–7016. [Google Scholar] [CrossRef]
  75. California Department of Forestry and Fire Protection’s Fire and Resource Assessment Program (FRAP). CAL FIRE Dataset. 2023. Available online: https://frap.fire.ca.gov/frap-projects/fire-perimeters/ (accessed on 5 March 2023).
  76. Rostami, A.; Shah-Hosseini, R.; Asgari, S.; Zarei, A.; Aghdami-Nia, M.; Homayouni, S. Active Fire Detection from Landsat-8 Imagery Using Deep Multiple Kernel Learning. Remote Sens. 2022, 14, 992. [Google Scholar] [CrossRef]
  77. Shirvani, Z.; Abdi, O.; Goodman, R.C. High-Resolution Semantic Segmentation of Woodland Fires Using Residual Attention UNet and Time Series of Sentinel-2. Remote Sens. 2023, 15, 1342. [Google Scholar] [CrossRef]
  78. Sun, C. Analyzing Multispectral Satellite Imagery of South American Wildfires Using Deep Learning. In Proceedings of the 2022 International Conference on Applied Artificial Intelligence (ICAPAI), Halden, Norway, 5 May 2022; pp. 1–6. [Google Scholar]
  79. Huot, F.; Hu, R.L.; Goyal, N.; Sankar, T.; Ihme, M.; Chen, Y.F. Next Day Wildfire Spread: A Machine Learning Dataset to Predict Wildfire Spreading From Remote-Sensing Data. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–13. [Google Scholar] [CrossRef]
  80. Wang, Z.; Yang, P.; Liang, H.; Zheng, C.; Yin, J.; Tian, Y.; Cui, W. Semantic Segmentation and Analysis on Sensitive Parameters of Forest Fire Smoke Using Smoke-UNet and Landsat-8 Imagery. Remote Sens. 2022, 14, 45. [Google Scholar] [CrossRef]
  81. Long, J.; Shelhamer, E.; Darrell, T. Fully Convolutional Networks for Semantic Segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015; pp. 3431–3440. [Google Scholar]
  82. Zhao, H.; Shi, J.; Qi, X.; Wang, X.; Jia, J. Pyramid Scene Parsing Network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 2881–2890. [Google Scholar]
  83. Kamal, U.; Tonmoy, T.I.; Das, S.; Hasan, M.K. Automatic Traffic Sign Detection and Recognition Using SegU-Net and a Modified Tversky Loss Function with L1-Constraint. IEEE Trans. Intell. Transp. Syst. 2020, 21, 1467–1479. [Google Scholar] [CrossRef]
  84. Wang, T.; Hong, J.; Han, Y.; Zhang, G.; Chen, S.; Dong, T.; Yang, Y.; Ruan, H. AOSVSSNet: Attention-Guided Optical Satellite Video Smoke Segmentation Network. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2022, 15, 8552–8566. [Google Scholar] [CrossRef]
  85. Zhou, Z.; Rahman Siddiquee, M.M.; Tajbakhsh, N.; Liang, J. UNet++: A Nested U-Net Architecture for Medical Image Segmentation. In Proceedings of the Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support, Granada, Spain, 20 September 2018; pp. 3–11. [Google Scholar]
  86. Chen, L.C.; Zhu, Y.; Papandreou, G.; Schroff, F.; Adam, H. Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 801–818. [Google Scholar]
  87. Knopp, L.; Wieland, M.; Rättich, M.; Martinis, S. A Deep Learning Approach for Burned Area Segmentation with Sentinel-2 Data. Remote Sens. 2020, 12, 2422. [Google Scholar] [CrossRef]
  88. Belenguer-Plomer, M.A.; Tanase, M.A.; Chuvieco, E.; Bovolo, F. CNN-based burned area mapping using radar and optical data. Remote Sens. Environ. 2021, 260, 112468. [Google Scholar] [CrossRef]
89. Abid, N.; Malik, M.I.; Shahzad, M.; Shafait, F.; Ali, H.; Ghaffar, M.M.; Weis, C.; Wehn, N.; Liwicki, M. Burnt Forest Estimation from Sentinel-2 Imagery of Australia using Unsupervised Deep Learning. In Proceedings of the Digital Image Computing: Techniques and Applications (DICTA), Gold Coast, Australia, 29 November–1 December 2021; pp. 1–8. [Google Scholar]
  90. Hu, X.; Ban, Y.; Nascetti, A. Uni-Temporal Multispectral Imagery for Burned Area Mapping with Deep Learning. Remote Sens. 2021, 13, 1509. [Google Scholar] [CrossRef]
91. Poudel, R.P.; Liwicki, S.; Cipolla, R. Fast-SCNN: Fast Semantic Segmentation Network. In Proceedings of the 30th British Machine Vision Conference (BMVC), Cardiff, UK, 9–12 September 2019; pp. 9–12. [Google Scholar]
  92. Wang, J.; Sun, K.; Cheng, T.; Jiang, B.; Deng, C.; Zhao, Y.; Liu, D.; Mu, Y.; Tan, M.; Wang, X.; et al. Deep High-Resolution Representation Learning for Visual Recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2021, 43, 3349–3364. [Google Scholar] [CrossRef]
93. Cho, A.Y.; Park, S.E.; Kim, D.J.; Kim, J.; Li, C.; Song, J. Burned Area Mapping Using Unitemporal PlanetScope Imagery with a Deep Learning Based Approach. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2023, 16, 242–253. [Google Scholar] [CrossRef]
  94. PlanetLabs Team. PlanetScope Dataset. 2023. Available online: https://developers.planet.com/docs/data/planetscope/ (accessed on 5 March 2023).
  95. Brand, A.; Manandhar, A. Semantic segmentation of burned areas in satellite images using a U-Net based convolutional neural network. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2021, XLIII-B3-2021, 47–53. [Google Scholar] [CrossRef]
  96. Seydi, S.T.; Hasanlou, M.; Chanussot, J. Burnt-Net: Wildfire burned area mapping with single post-fire Sentinel-2 data and deep learning morphological neural network. Ecol. Indic. 2022, 140, 108999. [Google Scholar] [CrossRef]
  97. Prabowo, Y.; Sakti, A.D.; Pradono, K.A.; Amriyah, Q.; Rasyidy, F.H.; Bengkulah, I.; Ulfa, K.; Candra, D.S.; Imdad, M.T.; Ali, S. Deep Learning Dataset for Estimating Burned Areas: Case Study, Indonesia. Data 2022, 7, 78. [Google Scholar] [CrossRef]
  98. Colomba, L.; Farasin, A.; Monaco, S.; Greco, S.; Garza, P.; Apiletti, D.; Baralis, E.; Cerquitelli, T. A Dataset for Burned Area Delineation and Severity Estimation from Satellite Imagery. In Proceedings of the 31st ACM International Conference on Information & Knowledge Management, Atlanta, GA, USA, 17–21 October 2022; pp. 3893–3897. [Google Scholar]
99. Colomba, L.; Farasin, A.; Monaco, S.; Greco, S.; Garza, P.; Apiletti, D.; Baralis, E.; Cerquitelli, T. Satellite Burned Area Dataset. 2023. [Google Scholar] [CrossRef]
  100. Zhang, P.; Ban, Y.; Nascetti, A. Learning U-Net without forgetting for near real-time wildfire monitoring by the fusion of SAR and optical time series. Remote Sens. Environ. 2021, 261, 112467. [Google Scholar] [CrossRef]
  101. Pinto, M.M.; Libonati, R.; Trigo, R.M.; Trigo, I.F.; DaCamara, C.C. A deep learning approach for mapping and dating burned areas using temporal sequences of satellite images. ISPRS J. Photogramm. Remote Sens. 2020, 160, 260–274. [Google Scholar] [CrossRef]
  102. NASA Visible Infrared Imaging Radiometer Suite Level-1B Product User Guide. VIIRS Level-1B Products. 2023. Available online: https://ladsweb.modaps.eosdis.nasa.gov/missions-and-measurements/science-domain/viirs-L0-L1/ (accessed on 5 March 2023).
  103. Giglio, L.; Boschetti, L.; Roy, D.P.; Humber, M.L.; Justice, C.O. The Collection 6 MODIS burned area mapping algorithm and product. Remote Sens. Environ. 2018, 217, 72–85. [Google Scholar] [CrossRef]
  104. Chuvieco, E.; Lizundia-Loiola, J.; Pettinari, M.L.; Ramo, R.; Padilla, M.; Tansey, K.; Mouillot, F.; Laurent, P.; Storm, T.; Heil, A.; et al. Generation and analysis of a new global burned area product based on MODIS 250 m reflectance bands and thermal anomalies. Earth Syst. Sci. Data 2018, 10, 2015–2031. [Google Scholar] [CrossRef]
  105. Rodrigues, J.A.; Libonati, R.; Pereira, A.A.; Nogueira, J.M.; Santos, F.L.; Peres, L.F.; Santa Rosa, A.; Schroeder, W.; Pereira, J.M.; Giglio, L.; et al. How well do global burned area products represent fire patterns in the Brazilian Savannas biome? An accuracy assessment of the MCD64 collections. Int. J. Appl. Earth Obs. Geoinf. 2019, 78, 318–331. [Google Scholar] [CrossRef]
  106. Roteta, E.; Bastarrika, A.; Padilla, M.; Storm, T.; Chuvieco, E. Development of a Sentinel-2 burned area algorithm: Generation of a small fire database for sub-Saharan Africa. Remote Sens. Environ. 2019, 222, 1–17. [Google Scholar] [CrossRef]
  107. Hu, X.; Zhang, P.; Ban, Y. MTBS Dataset. 2023. Available online: https://www.mtbs.gov/direct-download (accessed on 5 March 2023).
  108. Goodwin, N.R.; Collett, L.J. Development of an automated method for mapping fire history captured in Landsat TM and ETM+ time series across Queensland, Australia. Remote Sens. Environ. 2014, 148, 206–221. [Google Scholar] [CrossRef]
109. Institute for the Conservation of Nature and Forests (ICNF). ICNF Burned Areas. 2023. Available online: https://www.icnf.pt/ (accessed on 5 March 2023).
  110. Seydi, S.T.; Hasanlou, M.; Chanussot, J. DSMNN-Net: A Deep Siamese Morphological Neural Network Model for Burned Area Mapping Using Multispectral Sentinel-2 and Hyperspectral PRISMA Images. Remote Sens. 2021, 13, 5138. [Google Scholar] [CrossRef]
  111. Ngadze, F.; Mpakairi, K.S.; Kavhu, B.; Ndaimani, H.; Maremba, M.S. Exploring the utility of Sentinel-2 MSI and Landsat 8 OLI in burned area mapping for a heterogenous savannah landscape. PLoS ONE 2020, 15, e0232962. [Google Scholar] [CrossRef] [PubMed]
  112. Roy, D.P.; Huang, H.; Boschetti, L.; Giglio, L.; Yan, L.; Zhang, H.H.; Li, Z. Landsat-8 and Sentinel-2 burned area mapping - A combined sensor multi-temporal change detection approach. Remote Sens. Environ. 2019, 231, 111254. [Google Scholar] [CrossRef]
  113. Syifa, M.; Panahi, M.; Lee, C.W. Mapping of Post-Wildfire Burned Area Using a Hybrid Algorithm and Satellite Data: The Case of the Camp Fire Wildfire in California, USA. Remote Sens. 2020, 12, 623. [Google Scholar] [CrossRef]
  114. Zhang, G.; Wang, M.; Liu, K. Forest fire susceptibility modeling using a convolutional neural network for Yunnan province of China. Int. J. Disaster Risk Sci. 2019, 10, 386–403. [Google Scholar] [CrossRef]
  115. NCAR Research Data Archive (RDA). Data for Climate & Weather Research. 2023. Available online: https://rda.ucar.edu/ (accessed on 5 March 2023).
  116. NASA Earth Observation Data. Earth Data. 2023. Available online: https://search.earthdata.nasa.gov/search (accessed on 5 March 2023).
  117. Prapas, I.; Kondylatos, S.; Papoutsis, I.; Camps-Valls, G.; Ronco, M.; Fernández-Torres, M.; Guillem, M.P.; Carvalhais, N. Deep Learning Methods for Daily Wildfire Danger Forecasting. arXiv 2021, arXiv:2111.02736. [Google Scholar]
118. Prapas, I.; Kondylatos, S.; Papoutsis, I. A Datacube for the Analysis of Wildfires in Greece. 2023. [Google Scholar] [CrossRef]
  119. Zhang, G.; Wang, M.; Liu, K. Deep neural networks for global wildfire susceptibility modelling. Ecol. Indic. 2021, 127, 107735. [Google Scholar] [CrossRef]
  120. Le, H.V.; Hoang, D.A.; Tran, C.T.; Nguyen, P.Q.; Tran, V.H.T.; Hoang, N.D.; Amiri, M.; Ngo, T.P.T.; Nhu, H.V.; Hoang, T.V.; et al. A new approach of deep neural computing for spatial prediction of wildfire danger at tropical climate areas. Ecol. Inform. 2021, 63, 101300. [Google Scholar] [CrossRef]
  121. Le, H.V.; Bui, Q.T.; Bui, D.T.; Tran, H.H.; Hoang, N.D. A Hybrid Intelligence System Based on Relevance Vector Machines and Imperialist Competitive Optimization for Modelling Forest Fire Danger Using GIS. J. Environ. Inform. 2018, 36, 43–57. [Google Scholar] [CrossRef]
  122. Omar, N.; Al-zebari, A.; Sengur, A. Deep Learning Approach to Predict Forest Fires Using Meteorological Measurements. In Proceedings of the 2nd International Informatics and Software Engineering Conference (IISEC), Ankara, Turkey, 16–17 December 2021; pp. 1–4. [Google Scholar]
  123. Zhang, G.; Wang, M.; Liu, K. Dynamic prediction of global monthly burned area with hybrid deep neural networks. Ecol. Appl. 2022, 32, e2610. [Google Scholar] [CrossRef]
  124. Shao, Y.; Wang, Z.; Feng, Z.; Sun, L.; Yang, X.; Zheng, J.; Ma, T. Assessment of China’s forest fire occurrence with deep learning, geographic information and multisource data. J. For. Res. 2022, 1–14. [Google Scholar] [CrossRef]
  125. Shams-Eddin, M.H.; Roscher, R.; Gall, J. Location-aware Adaptive Denormalization: A Deep Learning Approach for Wildfire Danger Forecasting. arXiv 2022, arXiv:2212.08208. [Google Scholar]
  126. Prapas, I.; Kondylatos, S.; Papoutsis, I. FireCube: A Daily Datacube for the Modeling and Analysis of Wildfires in Greece. 2023. [Google Scholar] [CrossRef]
  127. Jamshed, M.A.; Theodorou, C.; Kalsoom, T.; Anjum, N.; Abbasi, Q.H.; Ur-Rehman, M. Intelligent computing based forecasting of deforestation using fire alerts: A deep learning approach. Phys. Commun. 2022, 55, 101941. [Google Scholar] [CrossRef]
  128. Naderpour, M.; Rizeei, H.M.; Ramezani, F. Forest Fire Risk Prediction: A Spatial Deep Neural Network-Based Framework. Remote Sens. 2021, 13, 2513. [Google Scholar] [CrossRef]
  129. Australian Government, Bureau of Meteorology. Meteorology Data. 2023. Available online: http://www.bom.gov.au/ (accessed on 5 March 2023).
130. NSW Government Website. Land Cover Data. 2023. Available online: https://data.nsw.gov.au/ (accessed on 5 March 2023).
  131. Geoscience Australia’s New Website. Elvis—Elevation and Depth—Foundation Spatial Data. 2023. Available online: https://elevation.fsdf.org.au/ (accessed on 5 March 2023).
  132. Demographic Resource Centre. Social Data. 2023. Available online: https://profile.id.com.au/northern-beaches (accessed on 5 March 2023).
  133. Nur, A.S.; Kim, Y.J.; Lee, C.W. Creation of Wildfire Susceptibility Maps in Plumas National Forest Using InSAR Coherence, Deep Learning, and Metaheuristic Optimization Approaches. Remote Sens. 2022, 14, 4416. [Google Scholar] [CrossRef]
  134. Atashpaz-Gargari, E.; Lucas, C. Imperialist competitive algorithm: An algorithm for optimization inspired by imperialistic competition. In Proceedings of the IEEE Congress on Evolutionary Computation, Singapore, 25–28 September 2007; pp. 4661–4667. [Google Scholar]
  135. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey Wolf Optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef]
  136. Bjånes, A.; De La Fuente, R.; Mena, P. A deep learning ensemble model for wildfire susceptibility mapping. Ecol. Inform. 2021, 65, 101397. [Google Scholar] [CrossRef]
  137. Abatzoglou, J.T.; Dobrowski, S.Z.; Parks, S.A.; Hegewisch, K.C. TerraClimate, a high-resolution global dataset of monthly climate and climatic water balance from 1958–2015. Sci. Data 2018, 5, 170191. [Google Scholar] [CrossRef]
  138. Huot, F.; Hu, R.L.; Ihme, M.; Wang, Q.; Burge, J.; Lu, T.; Hickey, J.; Chen, Y.; Anderson, J.R. Deep Learning Models for Predicting Wildfires from Historical Remote-Sensing Data. arXiv 2020, arXiv:2010.07445. [Google Scholar]
  139. NASA Earth Observation Data. MOD14A1—MODIS/Terra Thermal Anomalies/Fire Daily L3 Global 1 km SIN Grid. 2023. Available online: https://ladsweb.modaps.eosdis.nasa.gov/missions-and-measurements/products/MOD14A1 (accessed on 5 March 2023).
  140. NASA Earth Observation Data. VIIRS/NPP Vegetation Indices 16-Day L3 Global 500 m SIN Grid V001. 2023. Available online: https://cmr.earthdata.nasa.gov/search/concepts/C1392010616-LPDAAC_ECS.html (accessed on 5 March 2023).
  141. Farr, T.G.; Rosen, P.A.; Caro, E.; Crippen, R.; Duren, R.; Hensley, S.; Kobrick, M.; Paller, M.; Rodriguez, E.; Roth, L.; et al. The shuttle radar topography mission. Rev. Geophys. 2007, 45, 1–33. [Google Scholar] [CrossRef]
  142. Abatzoglou, J.T.; Rupp, D.E.; Mote, P.W. Seasonal Climate Variability and Change in the Pacific Northwest of the United States. J. Clim. 2014, 27, 2125–2142. [Google Scholar] [CrossRef]
  143. Abatzoglou, J.T. Development of gridded surface meteorological data for ecological applications and modelling. Int. J. Climatol. 2013, 33, 121–131. [Google Scholar] [CrossRef]
  144. Farasin, A.; Colomba, L.; Garza, P. Double-Step U-Net: A Deep Learning-Based Approach for the Estimation of Wildfire Damage Severity through Sentinel-2 Satellite Data. Appl. Sci. 2020, 10, 4332. [Google Scholar] [CrossRef]
  145. Miller, J.D.; Thode, A.E. Quantifying burn severity in a heterogeneous landscape with a relative version of the delta Normalized Burn Ratio (dNBR). Remote Sens. Environ. 2007, 109, 66–80. [Google Scholar] [CrossRef]
  146. Monaco, S.; Pasini, A.; Apiletti, D.; Colomba, L.; Garza, P.; Baralis, E. Improving Wildfire Severity Classification of Deep Learning U-Nets from Satellite Images. In Proceedings of the IEEE International Conference on Big Data (Big Data), Atlanta, GA, USA, 10–13 December 2020; pp. 5786–5788. [Google Scholar]
  147. Monaco, S.; Greco, S.; Farasin, A.; Colomba, L.; Apiletti, D.; Garza, P.; Cerquitelli, T.; Baralis, E. Attention to Fires: Multi-Channel Deep Learning Models for Wildfire Severity Prediction. Appl. Sci. 2021, 11, 11060. [Google Scholar] [CrossRef]
  148. Oktay, O.; Schlemper, J.; Folgoc, L.L.; Lee, M.C.H.; Heinrich, M.P.; Misawa, K.; Mori, K.; McDonagh, S.G.; Hammerla, N.Y.; Kainz, B.; et al. Attention U-Net: Learning Where to Look for the Pancreas. arXiv 2018, arXiv:1804.03999. [Google Scholar]
  149. Monaco, S.; Pasini, A.; Apiletti, D.; Colomba, L.; Farasin, A.; Garza, P.; Baralis, E. Double-Step deep learning framework to improve wildfire severity classification. In Proceedings of the EDBT/ICDT Workshops, Nicosia, Cyprus, 23 March 2021; pp. 1–6. [Google Scholar]
  150. Hu, X.; Zhang, P.; Ban, Y. Large-scale burn severity mapping in multispectral imagery using deep semantic segmentation models. ISPRS J. Photogramm. Remote Sens. 2023, 196, 228–240. [Google Scholar] [CrossRef]
  151. Qin, X.; Zhang, Z.; Huang, C.; Dehghan, M.; Zaiane, O.R.; Jagersand, M. U2-Net: Going deeper with nested U-structure for salient object detection. Pattern Recognit. 2020, 106, 107404. [Google Scholar] [CrossRef]
  152. Huang, H.; Lin, L.; Tong, R.; Hu, H.; Zhang, Q.; Iwamoto, Y.; Han, X.; Chen, Y.W.; Wu, J. UNet 3+: A Full-Scale Connected UNet for Medical Image Segmentation. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Barcelona, Spain, 4–8 May 2020; pp. 1055–1059. [Google Scholar]
  153. Chen, L.; Papandreou, G.; Schroff, F.; Adam, H. Rethinking Atrous Convolution for Semantic Image Segmentation. arXiv 2017, arXiv:1706.05587. [Google Scholar]
  154. Ding, C.; Zhang, X.; Chen, J.; Ma, S.; Lu, Y.; Han, W. Wildfire detection through deep learning based on Himawari-8 satellites platform. Int. J. Remote Sens. 2022, 43, 5040–5058. [Google Scholar] [CrossRef]
  155. Prapas, I.; Ahuja, A.; Kondylatos, S.; Karasante, I.; Panagiotou, E.; Alonso, L.; Davalas, C.; Michail, D.; Carvalhais, N.; Papoutsis, I. Deep Learning for Global Wildfire Forecasting. arXiv 2022, arXiv:2211.00534. [Google Scholar]
156. Alonso, L.; Gans, F.; Karasante, I.; Ahuja, A.; Prapas, I.; Kondylatos, S.; Papoutsis, I.; Panagiotou, E.; Michail, D.; Cremer, F.; et al. SeasFire Cube: A Global Dataset for Seasonal Fire Modeling in the Earth System. 2023. [Google Scholar] [CrossRef]
  157. Natural Resources Canada. Canadian Forest Fire Behavior Prediction (FBP) System. 2023. Available online: https://cwfis.cfs.nrcan.gc.ca/background/summary/fbp (accessed on 5 March 2023).
  158. Stankevich, T.S. Development of an Intelligent System for Predicting the Forest Fire Development Based on Convolutional Neural Networks. In Proceedings of the Advances in Artificial Systems for Medicine and Education III, Moscow, Russia, 1–3 October 2019; pp. 3–12. [Google Scholar]
  159. NASA Earth Observation Data. FIRMS (Fire Information for Resource Management System). 2023. Available online: https://firms.modaps.eosdis.nasa.gov/map/#d:24hrs;@0.0,0.0,2z (accessed on 5 March 2023).
  160. European Space Agency. Land Cover Map ESA/CCI. 2023. Available online: http://maps.elie.ucl.ac.be/CCI/viewer/ (accessed on 5 March 2023).
  161. Ventusky InMeteo. Ventusky InMeteo Data. 2023. Available online: https://www.ventusky.com/ (accessed on 5 March 2023).
  162. Radke, D.; Hessler, A.; Ellsworth, D. FireCast: Leveraging Deep Learning to Predict Wildfire Spread. In Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI), Macao, China, 10–16 August 2019; pp. 4575–4581. [Google Scholar]
  163. Finney, M.A. FARSITE, Fire Area Simulator–Model Development and Evaluation; US Forest Service: Ogden, UT, USA, 1998.
  164. Bergado, J.R.; Persello, C.; Reinke, K.; Stein, A. Predicting wildfire burns from big geodata using deep learning. Saf. Sci. 2021, 140, 105276. [Google Scholar] [CrossRef]
165. Hutchinson, M.; Stein, J.; Stein, J.; Anderson, H. GEODATA 9 Second DEM and D8: Digital Elevation Model Version 3 and Flow Direction Grid 2008. 2023. Available online: http://pid.geoscience.gov.au/dataset/ga/66006 (accessed on 5 March 2023).
  166. Department of Environment, Land, Water and Planning of Victoria. Road Network—Vicmap Transport. 2023. Available online: https://services.land.vic.gov.au/SpatialDatamart/dataSearchViewMetadata.html?anzlicId=ANZVI0803002595&extractionProviderId=1 (accessed on 5 March 2023).
  167. State Government of Victoria. Vicmap Features of Interest. 2023. Available online: http://services.land.vic.gov.au/catalogue/metadata?anzlicId=ANZVI0803003646&publicId=guest&extractionProviderId=1#tab0 (accessed on 5 March 2023).
  168. Lymburner, L.; Tan, P.; Mueller, N.; Thackway, R.; Thankappan, M.; Islam, A.; Lewis, A.; Randall, L.; Senarath, U. Dynamic Land Cover Dataset. 2023. Available online: https://ecat.ga.gov.au/geonetwork/srv/eng/catalog.search#/metadata/71069 (accessed on 5 March 2023).
  169. Hodges, J.L.; Lattimer, B.Y. Wildland fire spread modeling using convolutional neural networks. Fire Technol. 2019, 55, 2115–2142. [Google Scholar] [CrossRef]
  170. Liang, H.; Zhang, M.; Wang, H. A Neural Network Model for Wildfire Scale Prediction Using Meteorological Factors. IEEE Access 2019, 7, 176746–176755. [Google Scholar] [CrossRef]
  171. Khennou, F.; Ghaoui, J.; Akhloufi, M.A. Forest fire spread prediction using deep learning. In Geospatial Informatics XI; SPIE: Bellingham, WA, USA, 2021; pp. 106–117. [Google Scholar]
  172. Khennou, F.; Akhloufi, M.A. Predicting wildland fire propagation using deep learning. In Proceedings of the 1st International Congress on Fire in the Earth System: Humans and Nature (fEs2021), Valencia, Spain, 4–8 November 2021; p. 104. [Google Scholar]
  173. Allaire, F.; Mallet, V.; Filippi, J.B. Emulation of wildland fire spread simulation using deep learning. Neural Netw. 2021, 141, 184–198. [Google Scholar] [CrossRef]
  174. McCarthy, N.F.; Tohidi, A.; Aziz, Y.; Dennie, M.; Valero, M.M.; Hu, N. A Deep Learning Approach to Downscale Geostationary Satellite Imagery for Decision Support in High Impact Wildfires. Forests 2021, 12, 294. [Google Scholar] [CrossRef]
  175. MapBiomas Website. MapBiomas Fire Dataset. 2023. Available online: https://mapbiomas.org/ (accessed on 5 March 2023).
  176. United States Geological Survey (USGS). Geospatial Multi-Agency Coordination (GeoMAC). 2023. Available online: https://wildfire.usgs.gov/geomac/GeoMACTransition.shtml (accessed on 5 March 2023).
  177. Walters, S.P.; Schneider, N.J.; Guthrie, J.D. Geospatial Multi-Agency Coordination (GeoMAC) Wildland Fire Perimeters, 2008. US Geol. Surv. Data Ser. 2011, 612, 6. [Google Scholar]
  178. Artés, T.; Oom, D.; De Rigo, D.; Durrant, T.H.; Maianti, P.; Libertà, G.; San-Miguel-Ayanz, J. A global wildfire dataset for the analysis of fire regimes and fire behaviour. Sci. Data 2019, 6, 296. [Google Scholar] [CrossRef]
  179. Sayad, Y.O.; Mousannif, H.; Al Moatassime, H. Predictive modeling of wildfires: A new dataset and machine learning approach. Fire Saf. J. 2019, 104, 130–146. [Google Scholar] [CrossRef]
  180. Sayad, Y.O.; Mousannif, H.; Al Moatassime, H. Wildfires Dataset. 2023. Available online: https://github.com/ouladsayadyounes/Wildfires (accessed on 5 March 2023).
  181. Singla, S.; Mukhopadhyay, A.; Wilbur, M.; Diao, T.; Gajjewar, V.; Eldawy, A.; Kochenderfer, M.; Shachter, R.; Dubey, A. WildfireDB: An Open-Source Dataset Connecting Wildfire Spread with Relevant Determinants. In Proceedings of the 35th Conference on Neural Information Processing Systems (NeurIPS 2021) Track on Datasets and Benchmarks, Virtual, 6–14 December 2021; pp. 1–10. [Google Scholar]
  182. Singla, S.; Mukhopadhyay, A.; Wilbur, M.; Diao, T.; Gajjewar, V.; Eldawy, A.; Kochenderfer, M.; Shachter, R.; Dubey, A. WildfireDB Dataset. 2023. Available online: https://wildfire-modeling.github.io/ (accessed on 5 March 2023).
  183. Monaco, S.; Greco, S.; Farasin, A.; Colomba, L.; Apiletti, D.; Garza, P.; Cerquitelli, T.; Baralis, E. Sentinel-2 Data. 2023. Available online: https://github.com/dbdmg/rescue (accessed on 5 March 2023).
  184. Abatzoglou, J.T.; Dobrowski, S.Z.; Parks, S.A.; Hegewisch, K.C. TerraClimate Dataset. 2023. Available online: https://data.nkn.uidaho.edu/dataset/monthly-climate-and-climatic-water-balance-global-terrestrial-surfaces-1958-2015 (accessed on 5 March 2023).
  185. Randerson, J.; Van Der Werf, G.; Giglio, L.; Collatz, G.; Kasibhatla, P. GFEDv4 Dataset. 2023. Available online: https://daac.ornl.gov/cgi-bin/dsviewer.pl?ds_id=1293 (accessed on 5 March 2023).
186. U.S. Department of the Interior, Geological Survey. LANDFIRE 2.0.0 Database. 2023. Available online: https://landfire.gov/lf_remap.php (accessed on 5 March 2023).
  187. Huot, F.; Hu, R.L.; Goyal, N.; Sankar, T.; Ihme, M.; Chen, Y.F. Next Day Wildfire Spread Dataset. 2023. Available online: https://www.kaggle.com/datasets/fantineh/next-day-wildfire-spread (accessed on 5 March 2023).
  188. National Center for Environmental Information. Climate Data Online. 2023. Available online: https://www.ncdc.noaa.gov/cdo-web/ (accessed on 5 March 2023).
  189. Government of Canada. The Canadian Wildland Fire Information System (CWFIS). 2023. Available online: https://cwfis.cfs.nrcan.gc.ca/ (accessed on 5 March 2023).
Figure 1. Fire detection based on DL model.
Figure 2. Fire segmentation based on DL model.
Figure 3. Fire severity damage prediction based on DL model.
Table 3. Deep learning models for fire mapping using satellite data.

| Ref. | Methodology | Dataset | Results (%) |
|---|---|---|---|
| [88] | Simple CNN | Sentinel-1 data; Sentinel-2 data | Dice = 57.00; Dice = 70.00 |
| [89] | VGG16, K-means, and thresholding methods | Sentinel-2 imagery data of Australia | F1-score = 87.00 |
| [90] | U-Net; Fast-SCNN; HRNet | Sentinel-2 and Landsat-8 images | Kappa = 90.00; Kappa = 82.00; Kappa = 78.00 |
| [93] | U-Net | PlanetScope dataset, dissimilarity, NDVI, and land cover map data | F1-score = 93.80 |
| [95] | U-Net | Sentinel-2 imagery data of the Indonesia and Central Africa regions | F1-score = 92.00 |
| [96] | Burnt-Net | Post-fire Sentinel-2 images | Accuracy = 98.08 |
| [97] | U-Net | 227 satellite images and their corresponding binary masks | Jaccard = 93.00 |
| [98] | U-Net | Satellite burned area dataset (73 images) | Accuracy = 94.30 |
| [100] | Deep residual U-Net | Sentinel-2 MSI time series and Sentinel-1 SAR data | F1-score = 84.23 |
| [101] | BA-Net | VIIRS Active Fires data, VIIRS Level 1B data, MCD64A1 C6, FireCCI51, 53 Landsat-8 scenes, FireCCISFD11 dataset, MTBS dataset, TERN AusCover data, ICNF Burned Areas | Dice = 92.00 |
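As Table 3 shows, U-Net and its variants dominate burned area mapping. To make the encoder–decoder idea concrete, the following is a minimal U-Net-style network for binary burned/unburned segmentation in PyTorch; the depth, channel widths, and 10-band input (e.g., Landsat-8 patches) are illustrative assumptions rather than the configuration of any cited model.

```python
# Minimal U-Net-style encoder-decoder for binary burned-area segmentation.
# Sketch only: depth, channel widths, and the 10-band input are assumptions.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions with batch norm and ReLU, as in the original U-Net [67].
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
    )

class MiniUNet(nn.Module):
    def __init__(self, in_bands=10, n_classes=1):
        super().__init__()
        self.enc1 = conv_block(in_bands, 32)
        self.enc2 = conv_block(32, 64)
        self.enc3 = conv_block(64, 128)
        self.pool = nn.MaxPool2d(2)
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec2 = conv_block(128, 64)
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = conv_block(64, 32)
        self.head = nn.Conv2d(32, n_classes, 1)   # per-pixel burned/unburned logit

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        e3 = self.enc3(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(e3), e2], dim=1))  # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # skip connection
        return self.head(d1)

model = MiniUNet()
patch = torch.randn(1, 10, 256, 256)       # one 256 x 256 multispectral patch
mask_logits = model(patch)                  # (1, 1, 256, 256) burned-area logits
print(mask_logits.shape)
```

The skip connections are the key design choice: they carry fine spatial detail from the encoder to the decoder, which is what makes U-Net-style models effective for delineating irregular burn perimeters.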
Table 5. Deep learning models for fire spread prediction using satellite data.

| Ref. | Methodology | Dataset | Results (%) |
|---|---|---|---|
| [162] | FireCast | GeoMAC, Landsat data, and atmospheric and weather data | Accuracy = 87.70 |
| [164] | AllConvNet | Wildfire burn data from the Victoria, Australia region between 2006 and 2017; topography data (slope, elevation, and aspect); weather data (rainfall, humidity, wind direction, wind speed, temperature, solar radiation, and lightning flash density); proximity to anthropogenic interfaces (distance to power lines and distance to roads); and fuel information (fuel moisture, fuel type, and emissivity) | Accuracy = 58.23 |
| [169] | DCIGN | Vegetation information (canopy height, canopy cover, and crown ratio); fuel model; moisture information (100-h, 10-h, and 1-h moisture, live woody moisture, and live herbaceous moisture); wind information (north wind and east wind); elevation; and initial burn map | F1-score = 93.00 |
| [170] | BPNN, RNN, and LSTM | Eleven meteorological variables: minimum, mean, and maximum temperature, cooling degree days, total rain, total precipitation, heating degree days, total snow, speed of maximum wind gust, snow on ground, and direction of maximum wind gust | Accuracy = 90.90 |
| [171] | FU-NetCast | Fire perimeters, Landsat data, DEM, and climate data | Accuracy = 92.73 |
| [172] | FU-NetCastV2 | GeoMAC data: 400 fire perimeters from 2013 to 2019 | Accuracy = 94.60 |
| [173] | Deep CNN | Environmental data and map data of Corsica | MAPE = 32.80 |
| [174] | U-Net | LANDFIRE 2.0.0 database, GEO satellite imagery, and fire perimeter data | Precision = 90.00 |
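The spread prediction models in Table 5 typically stack the current fire mask with weather, terrain, and fuel rasters and predict the fire mask at the next observation time. A minimal sketch of this formulation, in the spirit of the Next Day Wildfire Spread setup [79,138], is shown below; the layer sizes, the 12 input channels, and the positive class weight are illustrative assumptions.

```python
# Sketch of next-day fire-spread prediction as an image-to-image task:
# today's fire mask plus covariate rasters in, tomorrow's fire mask out.
# Channel count and layer sizes are illustrative assumptions.
import torch
import torch.nn as nn

class SpreadCNN(nn.Module):
    def __init__(self, in_ch=12):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, 1),   # logit for "burning tomorrow" per grid cell
        )

    def forward(self, x):
        return self.net(x)

model = SpreadCNN()
# 11 covariates (wind, temperature, NDVI, elevation, ...) + previous fire mask
inputs = torch.randn(4, 12, 64, 64)
target = (torch.rand(4, 1, 64, 64) > 0.95).float()   # sparse fire labels
# Fire pixels are rare, so a positively weighted BCE is a common choice.
loss_fn = nn.BCEWithLogitsLoss(pos_weight=torch.tensor(20.0))
loss = loss_fn(model(inputs), target)
loss.backward()
print(float(loss))
```

Class imbalance is the recurring difficulty in this task: unburned cells vastly outnumber burned ones, which is one reason accuracy figures in Table 5 can be misleading compared with F1-score or Dice.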
Table 6. Overview of datasets used for wildland fire detection.

| Ref. | Data Name | Data Type | Spatial Resolution | Patch Size |
|---|---|---|---|---|
| [50,51] | USTC_SmokeRS | 6225 satellite images | 1 km | 256 × 256 |
| [69,70] | LAFD | 8194 satellite images of wildfires collected by Landsat-8 around the world in August 2020; 146,214 image patches, consisting of 10-band spectral images and associated results; and 9044 image patches extracted from thirteen Landsat-8 images captured in September 2020, as well as their corresponding masks | 30 m; 10 m | 256 × 256 |
| [42,49] | CBERS 04A WFI | 775 RGB images collected by the WFI sensor on board the CBERS 04A satellite between May 2020 and August 2020 in the Brazilian Pantanal areas | 5 m | 256 × 256 |
| [62,64] | FASDD | 310,280 images covering numerous regions: Canada (5764 images), America (8437 images), Brazil and Bolivia (6977 images), Greece and Bulgaria (10,725 images), South Africa (9537 images), China (624 images), Russia (2111 images), and Australia (266,069 images); 5773 labeled images (JSON, XML, and text formats) | 10 m; 30 m | 1000 × 1000; 2200 × 2200 |
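Since the detection datasets in Table 6 are distributed as fixed-size patches (e.g., 256 × 256), a common preprocessing step is tiling full scenes into such patches. The following is a minimal sketch assuming an in-memory (bands, H, W) array; a real pipeline would typically read georeferenced scenes with a library such as rasterio.

```python
# Sketch of tiling a large multispectral scene into the fixed-size patches
# listed in Table 6. Array shapes are illustrative assumptions.
import numpy as np

def tile_scene(scene, patch=256):
    """Split a (bands, H, W) array into non-overlapping (bands, patch, patch)
    tiles, discarding incomplete border tiles."""
    bands, h, w = scene.shape
    tiles = []
    for top in range(0, h - patch + 1, patch):
        for left in range(0, w - patch + 1, patch):
            tiles.append(scene[:, top:top + patch, left:left + patch])
    return np.stack(tiles)

scene = np.random.rand(10, 1024, 1024).astype(np.float32)  # fake 10-band scene
patches = tile_scene(scene)
print(patches.shape)   # (16, 10, 256, 256)
```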
Table 7. Overview of datasets used for wildland fire mapping and prediction.

| Ref. | Data Name | Data Type | Spatial Resolution | Labeling Type |
|---|---|---|---|---|
| [25] | Landsat-8 satellite imagery | Imagery data | 15 m; 30 m | Mapping |
| [66] | DEM | Elevation | 20 m | Mapping |
| [102] | VIIRS Level 1B | Imagery data, fire location | 375 m; 750 m | Mapping |
| [58] | VIIRS Active Fire | Imagery data, fire location | 375 m | Mapping |
| [175] | MapBiomas Fire | Maps of burned areas for Brazil between 1985 and 2020, annual land cover and land use | 30 m | Mapping |
| [94] | PlanetScope | Images collected from 130 CubeSat 3U satellites | 3 m | Mapping |
| [103] | MCD64A1 C6 | Burn date, quality assurance, burn data uncertainty, and the first and last days of the year for reliable change detection | 500 m | Mapping |
| [94,97] | Burned areas in Indonesia | 227 images with a resolution of 512 × 512 pixels collected from Indonesia's regions between 2019 and 2021, and their corresponding ground truth images | 30 m | Mapping |
| [66] | Weather and atmospheric data | Atmospheric pressure, wind direction, temperature, precipitation, dew point, relative humidity, and wind speed | 1 km | Mapping, Prediction |
| [98,99] | Satellite burned area | 73 wildfire images collected by the Sentinel-2 L2A satellite from 2017 to 2019 in European regions, their binary masks, and annotations of five severity damage levels | 10 m | Mapping, Prediction |
| [176,177] | GeoMAC | Fire perimeter, fire location | NS | Prediction |
| [178] | GlobFire | Initial date of fire, final date of fire, fire perimeter, and burned area | NS | Prediction |
| [179,180] | Wildfires | Weather data (land surface temperature), ground condition (NDVI), burned areas, and wildfire indicators (thermal anomalies) | 250 m; 500 m; 1 km | Prediction |
| [181,182] | WildfireDB | Historical wildfire occurrences from 2012 to 2017; vegetation data (the maximum, median, sum, minimum, mode, and count values of canopy base density, canopy height, canopy cover, canopy base height, and existing vegetation height and cover); topography data (slope and elevation); weather data (total precipitation; maximum, average, and minimum temperature; relative wind speed; and average atmospheric pressure) | 30 m; 375 m | Prediction |
| [147,183] | Sentinel-2 | Various samples collected in various regions of Europe by Copernicus EMS with a resolution of 5000 × 5000 × 12 and classified according to the wildfire damage level | 10 to 60 m | Prediction |
| [137,184] | TerraClimate | Climatic data from 1958 to the present, including minimum and maximum temperature, precipitation, solar radiation, wind speed, climatic water deficit, vapor pressure, and reference evapotranspiration | <5 km | Prediction |
| [118] | Datacube | 19 features: max and min 2 m temperature, precipitation, LAI, FPAR, day and night LST, EVI, NDVI, min and max u-/v-components of wind, CLC, slope, elevation, aspect, and population and road density | 1 km | Prediction |
| [165] | GEODATA DEM-9S | Ground-level elevation points for all of Australia: slope, elevation, and aspect data | 1 km | Prediction |
| [166,167] | Vicmap | Distance to anthropogenic interfaces in Victoria: distance to roads and distance to power lines | 250 m | Prediction |
| [168] | Dynamic land cover | Land cover, vegetation cover, and land use information of Australia | 250 m | Prediction |
| [107,150] | MTBS | Post-fire and pre-fire Landsat Top of Atmosphere images, dNBR images, perimeter masks, RdNBR images, thematic fire severity from 1984 to 2021, and fire location | 30 m | Prediction |
| [75] | CAL FIRE | Records of perimeters of forest fires collected in the state of California between 1950 and 2019 | 10 m | Prediction |
| [185] | GFEDv4 | Estimated monthly burned area, fractional contributions of different fire types, monthly emissions, 3-hourly or daily fields, and monthly biosphere fluxes | 27.8 km | Prediction |
| [186] | LANDFIRE 2.0.0 | Fuel data, vegetation data, and landscape disturbances and changes (wildland fire, storm damage, fuel and vegetation treatments, insects, disease, and invasive plants) | 30 m | Prediction |
| [156] | SeasFire Cube | Historical burned area and wildfire emissions between 2001 and 2021; meteorological data (humidity, wind direction, wind speed, average/max/min temperature, solar radiation, total precipitation, etc.); human-related variables (population density); oceanic indices; vegetation data (LAI, land cover, etc.); and drought data | 27.8 km | Prediction |
| [79,187] | NWDS | 18,455 fire samples during 2012–2020; previous fire mask; fire mask; and 2D fire data with numerous variables such as vegetation (NDVI), population density, weather (wind direction, wind speed, humidity, precipitation, maximum/minimum temperature), topography (elevation), drought index, and energy release component | 1 km | Prediction |
| [126] | FireCube | Avg/min/max 2 m dewpoint temperature; avg/max relative humidity; avg/max/min surface pressure; avg/max/min total precipitation; avg/max/min 10 m U and V wind components; avg/max/min 2 m temperature; 8-day evapotranspiration; FPAR; FWI; rasterized ignition points; LAI; day/night land surface temperature; wind direction of max wind; max wind speed; daily number of fires; soil moisture index; soil moisture index anomaly; aspect; elevation; population density (2009–2021); distance from roads; roughness; slope; and distance from waterways | 1 km | Prediction |

NS refers to not specified.
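Several of the severity-oriented products in Table 7 (e.g., MTBS) grade burn severity with the delta Normalized Burn Ratio (dNBR), computed from pre- and post-fire NIR and SWIR reflectance [145]. The sketch below uses synthetic arrays standing in for, e.g., Landsat-8 band 5 (NIR) and band 7 (SWIR2); the severity cut points follow common USGS-style practice but vary across studies, so they should be treated as an assumption.

```python
# Sketch of the delta Normalized Burn Ratio (dNBR) used to grade burn
# severity: NBR = (NIR - SWIR) / (NIR + SWIR), dNBR = NBR_pre - NBR_post.
# Arrays are synthetic stand-ins for real reflectance rasters.
import numpy as np

def nbr(nir, swir, eps=1e-6):
    return (nir - swir) / (nir + swir + eps)

pre_nir, pre_swir = np.random.rand(2, 512, 512)     # pre-fire reflectance
post_nir, post_swir = np.random.rand(2, 512, 512)   # post-fire reflectance
dnbr = nbr(pre_nir, pre_swir) - nbr(post_nir, post_swir)

# Illustrative thresholding into severity classes (0 = unburned ... 4 = high);
# the cut points are a common convention, not universal.
severity = np.digitize(dnbr, bins=[0.1, 0.27, 0.44, 0.66])
print(dnbr.shape, severity.min(), severity.max())
```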
Table 8. Overview of data augmentation techniques.

| Task | Ref. | Data Augmentation Techniques |
|---|---|---|
| Fire detection | [38] | Horizontal/vertical flip |
| Fire detection | [68] | Rotation, shift, flip, mirroring, and random chromatic distortion in HSV color |
| Fire detection | [76] | Horizontal/vertical flip |
| Fire detection | [80] | Crop and horizontal/vertical mirroring |
| Fire detection | [87] | Brightness, rotation, shift, contrast, and scale |
| Fire mapping | [90] | Resize, mirror, rotation, aspect, crop, and color jitter |
| Fire mapping | [93] | Rotation, mirror, and horizontal/vertical flip |
| Fire mapping | [97] | Rotation |
| Fire mapping | [98] | Rotation, shear, and vertical/horizontal flip |
| Fire damage prediction | [144] | Horizontal/vertical flip, rotation, and shear |
| Fire damage prediction | [150] | Vertical/horizontal flip |
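For segmentation tasks, the geometric transforms in Table 8 must be applied identically to an image patch and its ground truth mask so the labels stay aligned. The following is a minimal sketch of paired random flips and 90° rotations in PyTorch; it is an illustrative implementation of the common techniques in the table, not the cited authors' code.

```python
# Sketch of paired geometric augmentation (flips and 90-degree rotations)
# for an image patch and its mask, as commonly used in Table 8.
import random
import torch

def augment(image, mask):
    """Randomly flip and rotate a (C, H, W) image tensor and its (H, W) mask
    with the same transform, preserving pixel-label alignment."""
    if random.random() < 0.5:                        # horizontal flip
        image, mask = image.flip(-1), mask.flip(-1)
    if random.random() < 0.5:                        # vertical flip
        image, mask = image.flip(-2), mask.flip(-2)
    k = random.randint(0, 3)                         # rotation by k * 90 degrees
    image = torch.rot90(image, k, dims=(-2, -1))
    mask = torch.rot90(mask, k, dims=(-2, -1))
    return image, mask

img = torch.randn(10, 256, 256)                      # multispectral patch
msk = (torch.rand(256, 256) > 0.9).float()           # binary fire mask
aug_img, aug_msk = augment(img, msk)
print(aug_img.shape, aug_msk.shape)
```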
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
