Article

Assessing Burned Area Detection in Indonesia Using the Stacking Ensemble Neural Network (SENN): A Comparative Analysis of C- and L-Band Performance

by Dodi Sudiana 1,2,*, Anugrah Indah Lestari 3, Mia Rizkinia 1,2, Indra Riyanto 2,4, Yenni Vetrita 3, Athar Abdurrahman Bayanuddin 5, Fanny Aditya Putri 5, Tatik Kartika 3, Argo Galih Suhadha 3, Atriyon Julzarika 6, Shinichi Sobue 7, Anton Satria Prabuwono 8 and Josaphat Tetuko Sri Sumantyo 9,10

1 Department of Electrical Engineering, Faculty of Engineering, Universitas Indonesia, Depok 16424, Indonesia
2 Artificial Intelligence and Data Engineering (AIDE) Research Center, Faculty of Engineering, Universitas Indonesia, Depok 16424, Indonesia
3 Research Center for Geoinformatics, National Research and Innovation Agency, Bandung 40135, Indonesia
4 Department of Electrical Engineering, Faculty of Engineering, Universitas Budiluhur, Jakarta 12260, Indonesia
5 Directorate of Laboratory Management, Research Facilities, and Science and Technology Park, National Research and Innovation Agency, Jakarta 10340, Indonesia
6 Research Center for Limnology and Water Resources, National Research and Innovation Agency, Cibinong 16915, Indonesia
7 Japan Aerospace Exploration Agency (JAXA), Tsukuba 305-8505, Japan
8 Department of Computing, Faculty of Science, Management & Computing, Universiti Teknologi Petronas, Bandar Seri Iskandar 32610, Malaysia
9 Center for Environmental Remote Sensing and Research Institute of Disaster Medicine, Chiba University, Chiba 263-8522, Japan
10 Department of Electrical Engineering, Faculty of Engineering, Universitas Sebelas Maret, Surakarta 57126, Indonesia
* Author to whom correspondence should be addressed.
Computers 2025, 14(8), 337; https://doi.org/10.3390/computers14080337
Submission received: 29 June 2025 / Revised: 7 August 2025 / Accepted: 14 August 2025 / Published: 18 August 2025
(This article belongs to the Special Issue Advanced Image Processing and Computer Vision (2nd Edition))

Abstract

Burned area detection plays a critical role in assessing the impact of forest and land fires, particularly in Indonesia, where both peatland and non-peatland areas are increasingly affected. Optical remote sensing has been widely used for this task, but its effectiveness is limited by persistent cloud cover in tropical regions. Synthetic Aperture Radar (SAR) offers a cloud-independent alternative for burned area mapping. This study investigates the performance of a Stacking Ensemble Neural Network (SENN) model using polarimetric features derived from both C-band (Sentinel-1) and L-band (Advanced Land Observing Satellite—Phased Array L-band Synthetic Aperture Radar (ALOS-2/PALSAR-2)) data. The analysis covers three representative sites in Indonesia: peatland areas in (1) Rokan Hilir and (2) Merauke, and a non-peatland area in (3) Bima and Dompu. Validation is conducted using high-resolution PlanetScope imagery (Planet Labs PBC, San Francisco, CA, USA). The results show that the SENN model consistently outperforms conventional artificial neural network (ANN) approaches across most evaluation metrics. L-band SAR data yield a superior performance to the C-band, particularly in peatland areas, with overall accuracy reaching 93–96% and precision between 92 and 100%. The method achieves 76% accuracy and 89% recall in non-peatland regions. Performance is lower in dry, hilly savanna landscapes. These findings demonstrate the effectiveness of the SENN, especially with L-band SAR, in improving burned area detection across diverse land types, supporting more reliable fire monitoring efforts in Indonesia.

1. Introduction

Forest and land fires are catastrophic disasters that occur in Indonesia every year. Several studies show that these fires have multiple consequences, including long-term PM2.5 air pollution [1], changes in soil nutrient content [2], and carbon dioxide emissions [3]. In 2019, the burned area in Indonesia reached 1.6 million hectares [4]. Peatland areas accounted for 30–40% of the forest and land fire incidents in 2019 [5]; peatland fires are difficult to extinguish and can burn for long periods [6,7]. The smoke from these fires can have severe impacts, comparable to the 2015 wildfires, which were among Indonesia’s largest forest and land fire incidents [8,9].
Forest and land fires in Indonesia are primarily caused by deforestation and land-cover conversion [10]. These activities involve changing secondary forests into shrubs or converting them into plantations. When El Niño is active, its effects contribute to these fires for most of the year [11]. Anthropogenic activities, particularly in mixed-production agricultural lands and degraded peatlands, show a high probability of fire [12,13]. By October 2023, approximately 267,900 hectares of land had burned since January, surpassing the burned area in 2022, according to data from the Moderate Resolution Imaging Spectroradiometer (MODIS) sensor on NASA’s Terra satellite [14].
Remote sensing techniques are extensively employed for forest fire monitoring, which is crucial in Indonesia. Both optical remote sensing satellite imagery [15,16,17,18,19,20] and Synthetic Aperture Radar (SAR) data [21,22,23] have been investigated to map burned areas. The Republic of Indonesia’s Ministry of Forestry (MoF) utilizes optical remote sensing data to provide up-to-date information on burned areas in Indonesia. However, mapping burned areas can be challenging due to clouds or fog. SAR data, known for its ability to penetrate clouds and operate day or night, is a powerful tool for accurately mapping burned areas when combined with machine learning algorithms.
SAR sensor data rely on the backscatter coefficient, which represents the signal reflected or backscattered from an object to the sensor [24]. Backscatter intensity is influenced by various factors, including frequency, polarization, incidence angle, surface roughness, and relative permittivity [25,26]. In C-band SAR data with VV polarization, the backscatter intensity decreases over land cover such as shrubs and open forests due to reduced soil moisture, whereas the backscatter intensity of L-band SAR data remains relatively stable in certain land covers [27]. For burned areas, the sensitivity of radar data to burn severity decreases as the frequency increases, across the X-band, C-band, and L-band; the C-band and L-band with cross-polarization exhibit the highest sensitivity to burn severity. In forest land cover, the cross-polarization backscatter of burned areas is generally lower than that of unburned areas across all frequencies [28]. The polarimetric properties and SAR geometry, such as the incidence angle, also affect SAR’s sensitivity in mapping burn severity: steep-incidence SAR acquisitions make it harder to differentiate burn severity levels with the C-band than with the L-band because of polarimetric properties related to volume scattering [29]. Comparative studies of burned area identification using SAR data remain relatively scarce, particularly across peatland and non-peatland areas. Furthermore, the frequency of wildfires in Indonesia has been shifting from peatland to non-peatland areas, and this shift warrants investigation.
Machine learning techniques have been employed to acquire geospatial information, including identifying burned areas. The Iterative Self-Organizing Data Analysis Technique (ISODATA) unsupervised classification approach utilizes Radar Burn Difference (RBD) and Radar Burn Ratio (RBR) parameters derived from Sentinel-1 data to detect fire severity [30]. By employing unsupervised K-means clustering with the fire index and texture features from the gray level co-occurrence matrix (GLCM) of Sentinel-1 data, burned areas in the Mediterranean were mapped with precision and recall values of approximately 80% [31]. Supervised classification with the Support Vector Machine (SVM) method has been used to identify burned areas from Advanced Land Observing Satellite—Phased Array L-band Synthetic Aperture Radar (ALOS-PALSAR; Japan Aerospace Exploration Agency (JAXA), Tokyo, Japan) data while considering the influence of topography on classification results [32]. The Random Forest (RF) method has been used to detect burned areas in SAR data [23,33]. It offers advantages such as a reduced sensitivity to training data quality, a lower risk of overfitting, and simpler hyperparameter settings compared to other shallow machine learning methods [34,35]. Furthermore, the Explainable Artificial Intelligence (XAI) framework can potentially improve the efficiency and transparency of the RF model in assessing burned area severity, thereby assisting stakeholder comprehension [36]. Deep learning techniques, specifically artificial neural networks (ANNs), have also been utilized to identify burned areas. Mithal et al. [37] applied ANN-based approaches to map burned areas using MODIS data in the Iberian Peninsula, outperforming the MCD45A1 product in terms of commission and omission errors. Another study demonstrates improved average user’s accuracy for burned area mapping with a neural network classifier based on MODIS data [38]. ANNs have also been widely used for burned area detection [39,40]. Compared to convolutional neural networks and shallow machine learning, the ANN requires less computational power, allowing it to generate information on a regional scale [41]. In addition, the ANN often requires less data than other models and can effectively deal with incomplete or missing data [42]. Furthermore, the ANN scales relatively well to high-dimensional datasets, including remote sensing spectral indices and texture features, can solve non-linear problems, and is more fault tolerant [43]. However, an artificial neural network is prone to overfitting [44].
One strategy to overcome this problem is the stacking ensemble method; however, no research has yet applied the stacking ensemble method to burned area detection with SAR data [45]. Several studies have shown that stacking ensembles can enhance model performance compared to their base learners [46,47,48]. Stacking integrates heterogeneous base learners through a meta-learning approach to improve prediction accuracy and has been used to derive geospatial information. Comparisons among stacking, bagging, and boosting show that stacking can outperform the others, for example in surface soil moisture mapping [49] and landslide susceptibility mapping [50]. This study developed and evaluated a Stacking Ensemble Neural Network model for burned area classification using C-band and L-band SAR remote sensing data to enhance the performance of burned area detection. The data used in the experiments encompassed peatland and non-peatland areas across Indonesia, exhibiting various land cover types.

2. Materials and Methods

2.1. Research Area

We selected three areas of interest (AOIs) for this research (Figure 1). Figure 1A shows the Rokan Hilir Regency, in the northern part of Riau Province, Sumatra Island’s eastern coastal region. Most of the area consists of lowlands and swamps, particularly along the Rokan River, with an elevation ranging from 0 to 50 m above sea level [51]. The Rokan Hilir Regency experiences a tropical climate, with average air temperatures ranging from 22 °C to 35 °C. The dry season in this area typically spans from February to August, while the rainy season occurs between September and January [44]. The land cover within the first AOI is predominantly characterized by estate crops, followed by mixed dry agriculture, secondary swamp forests, and wet shrubs, as shown in Figure 1A.
Figure 1B represents the peatland area in the Merauke Regency (South Papua Province). The topography is predominantly flat and swampy along the coastline, with a 0–3% slope and generally flat terrain between 0 and 60 m above sea level [52]. The Regency has a very distinct climate between the rainy and dry seasons. According to [53], the area falls within agroclimate Zone C, featuring a wet period lasting 5–6 months [52]. The land cover is dominated by savannas, followed by wet shrubs, primary mangrove forests, and secondary swamp forests, as illustrated in Figure 1B.
Figure 1C shows a non-peatland area in the Bima and Dompu Regencies (West Nusa Tenggara Province). The Bima Regency predominantly features highland terrain with mountainous and steep conditions, constituting 70% of its landscape, while the remaining 30% is lowland. The area experiences a relatively short rainfall duration, with an average precipitation of 79 mm per month, so it is classified as a dry region throughout the year. This condition adversely affects the local water supply and contributes to the aridity of most rivers. Peak rainfall typically occurs in December, January, and February [54]. The Dompu Regency is characterized by undulating to hilly topography, with some areas being flat to sloped. Dompu experiences a tropical climate, with the primary rainy season occurring from October to April [55]. The land cover within the third AOI is more diverse than in the other regions, featuring dryland forests, seasonal crops, savannas, settlement areas, paddy fields, bare ground, fishponds, and open water, as shown in Figure 1C.
We chose these AOIs because peatland has become a topic of global interest, as smoke from peatland fires contributes significantly to global carbon emissions [56]. Meanwhile, non-peatland areas have reportedly dominated the burned area in Indonesia recently [6], and there is currently insufficient research on burned area identification in Indonesia’s savannas [57].

2.2. Research Data

2.2.1. Remote Sensing Data (SAR, Sentinel-2, Planetscope, and IMERG)

This research used C-band SAR data from Sentinel-1 and L-band data from ALOS-2/PALSAR-2. We obtained Sentinel-1 Ground Range Detected (GRD) products through the Google Earth Engine (GEE), a cloud-based geospatial platform that allows users to acquire, process, and analyze satellite images [58]. The ALOS-2/PALSAR-2 Level 2.1 data were obtained from the Japan Aerospace Exploration Agency; this level has been orthorectified from Level 1.1 data using a digital elevation model. Sentinel-1 operates at a center frequency of 5.405 GHz, whereas ALOS-2/PALSAR-2 operates at 1.2 GHz. Both sensors were used at the same spatial resolution (10 m) and in ascending orbit. Sentinel-1 data have VH (Vertical–Horizontal) and VV (Vertical–Vertical) polarization modes, and ALOS-2/PALSAR-2 data have HV and HH polarization modes. Table 1 shows the Sentinel-1 and ALOS-2/PALSAR-2 data acquisitions employed in this research.
Sentinel-2 Multi-Spectral Instrument (MSI) Level-2A images acquired from GEE, representing pre- and post-fire conditions, were used to generate reference points for distinguishing burned and unburned classes. The dataset provided an orthorectified surface reflectance across 13 spectral bands in the visible, near-infrared, and shortwave-infrared spectrum, with a spatial resolution ranging from 10 to 60 m.
PlanetScope optical imagery with a spatial resolution of 3 m was used as independent data for generating reference testing points to evaluate model accuracy. The standard surface reflectance product—with sufficient geolocation accuracy—was selected. Its resolution is nearly seven times higher than Sentinel-2 and about three times higher than ALOS-2/PALSAR-2, enabling assessment across different spatial resolutions. The PlanetScope has four bands (red, green, blue, and near-infrared) and was used to generate burned area maps. The datasets were accessed from the Center for Data and Information, National Research and Innovation Agency (BRIN).
Table 1 summarizes the data acquisition dates for Sentinel-1 and ALOS-2/PALSAR-2, Sentinel-2, and PlanetScope imagery used across AOIs. Sentinel-2 and PlanetScope optical data supported the development of training and testing datasets for the radar-based burned area model. Given that optical imagery more clearly reflects land surface changes visible to the human eye, it was used as a reference. Freely available multitemporal Sentinel-2 images were used to create paired pre- and post-fire composites, with pre-fire scenes representing vegetation conditions and post-fire scenes selected as the latest cloud-free acquisitions within the defined period.
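As an illustration of this compositing step, the following minimal sketch uses the Google Earth Engine Python API to assemble median pre- and post-fire Sentinel-2 Level-2A composites. The AOI rectangle, date windows, and cloud-cover threshold are illustrative placeholders, not the exact values used in this study.

```python
import ee

ee.Initialize()

# Hypothetical AOI and date windows (not the exact study values).
aoi = ee.Geometry.Rectangle([100.7, 1.6, 101.3, 2.2])  # rough Rokan Hilir extent, for illustration
pre_window = ('2019-05-01', '2019-06-30')   # pre-fire: vegetated conditions
post_window = ('2019-10-01', '2019-10-31')  # post-fire: latest cloud-free scenes

def s2_composite(start, end):
    """Median composite of Sentinel-2 Level-2A surface reflectance with low cloud cover."""
    return (ee.ImageCollection('COPERNICUS/S2_SR')
            .filterBounds(aoi)
            .filterDate(start, end)
            .filter(ee.Filter.lt('CLOUDY_PIXEL_PERCENTAGE', 20))
            .median()
            .clip(aoi))

pre_fire = s2_composite(*pre_window)
post_fire = s2_composite(*post_window)
```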
In addition, the Integrated Multi-satellite Retrievals for the Global Precipitation Measurement (IMERG) [59] product provides global surface precipitation rate estimates at a spatial resolution of 0.1° (about 10 km) and a temporal resolution of 30 min. The data were retrieved through NASA Giovanni “https://giovanni.gsfc.nasa.gov (accessed on 24 July 2024)”. This research used the product from January to December 2019 to obtain monthly rainfall rates in every AOI, which allowed us to infer soil moisture conditions in each AOI; these are used for further analysis in the Discussion section.

2.2.2. Ancillary Data (Peatland Map, Active Fire, Land Cover Map, and Burned Area Map)

This research also used additional data to support model building and validation, including a peatland map, active fire data, a land cover map, and a burned area map, described in the following paragraphs. The peatland map from the Ministry of Energy and Mineral Resources, Republic of Indonesia, at a scale of 1:250,000, was necessary to determine the peatland and non-peatland areas in the three AOIs. Geologically, the peatlands lie on old and young swamp deposit formations in the Merauke Regency and on an old alluvium formation in the Rokan Hilir Regency. The peatland map is a geological dataset representing landforms developed over millennia. Peatland boundaries are geologically stable features, and their spatial extent does not change significantly over short timescales (50–200 years) in the absence of major anthropogenic or natural disturbances. We verified that no large-scale geomorphological events (e.g., major canal construction or land reclamation projects) that could significantly alter the peat dome hydrology and boundaries were reported in our AOIs between 2016 and the study period. Therefore, the map remains a reliable source for delineating peat and non-peat areas.
We used active fire data from the National Aeronautics and Space Administration (NASA) “https://firms2.modaps.eosdis.nasa.gov/ (accessed on 20 May 2024)” as an additional dataset to confirm burned areas beyond those visually identified. The data were derived from the Moderate-Resolution Imaging Spectroradiometer (MODIS) and the Visible Infrared Imaging Radiometer Suite (VIIRS) instruments, with 1 km and 375 m spatial resolution, respectively. The data were collected in alignment with the pre- and post-fire date range of each radar dataset for each AOI (see Table 1).
We obtained both the Land Use/Cover (LUC) map and the official burned area map for 2019 from Indonesia’s MoF to identify land cover types in the training and testing datasets; the burned area map was also one of the sources for producing the testing dataset. The annual LUC map was produced by the MoF, primarily through visual interpretation of Landsat imagery, using annual composite images available up to mid-2019. Accordingly, the LUC status referenced in this study represents pre-fire LUC conditions, which aligns well with the objectives of this research. The burned area product was derived from monthly Landsat mapping, with acquisition dates closely matching those of the Sentinel-2 imagery used in this study.

2.3. Methodology

The research method comprised several main stages: pre-processing, feature extraction, training and testing data generation, model development using the Stacking Ensemble Neural Network method, and model performance evaluation. In the pre-processing stage, we filtered the Sentinel-1 and ALOS-2/PALSAR-2 data using a Boxcar speckle filter with an 11 × 11 pixel window in Google Earth Engine. The speckle filter is essential because it influences the classification result [60]. The Boxcar filter was selected for its computational efficiency, making it particularly suitable for developing classification models over large regions such as Indonesia. The 11 × 11 window size was selected based on a previous study by Hasan et al. [61], which indicates that the Boxcar filter with this window size performs best in classifying land cover types, including urban, bare soil, vegetation, water, high wet soil, low wet soil, lake water, and mixed soil, using SAR data. This choice is therefore appropriate for this research, as the burned and unburned samples are located in diverse land cover types. We then extracted the polarimetric features used as classifier input. We built the burned area classification model using the Stacking Ensemble Neural Network classifier with logistic regression as a meta-learner. The model was developed on a system with the following specifications: an Intel Core i7-8750H processor at 2.2 GHz (Intel Corporation, Taipei, Taiwan), 16 GB of RAM (Western Digital Corporation, Pasir Gudang, Malaysia), and an NVIDIA GeForce GTX 1050 Ti 4 GB (NVIDIA Corporation, Taipei, Taiwan). Model classification was implemented in the Python 3.7 programming language. The SENN classifier was developed using the TensorFlow and Keras libraries, while the performance evaluation was conducted using the Scikit-learn library. The Geospatial Data Abstraction Library (GDAL) was used for handling satellite images, including reading satellite images, importing raster data, importing vector data, and converting data. The overall research approach is presented in Figure 2.
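As a minimal sketch of the speckle-filtering step, the snippet below applies an 11 × 11 moving-average (Boxcar) filter to a backscatter array. The study performed this step in Google Earth Engine; this local equivalent uses SciPy and synthetic data purely for illustration.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def boxcar_filter(sigma0_linear: np.ndarray, window: int = 11) -> np.ndarray:
    """Boxcar (moving-average) speckle filter: each pixel is replaced by the
    mean of its window x window neighbourhood. Filtering is done on
    linear-scale backscatter; convert to dB afterwards if needed."""
    return uniform_filter(sigma0_linear, size=window, mode="reflect")

# Example with synthetic speckled backscatter data.
sigma0 = np.random.gamma(shape=4.0, scale=0.02, size=(512, 512))
filtered_db = 10.0 * np.log10(boxcar_filter(sigma0, window=11))
```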

2.4. Polarimetric Features

This research uses the sigma-nought backscatter coefficient ($\sigma^0$), the reflected radar signal per unit area of the ground plane, in decibel units [62]. For the Sentinel-1 data features, we used the Radar Burn Difference (RBD) [30], Radar Burn Ratio (RBR) [31], Radar Vegetation Index (RVI) [63], and Dual-Polarization SAR Vegetation Index (ΔDPSVI) [31] as burned area classification input indices. For the ALOS-2/PALSAR-2 data, we used the Radar Ratio Vegetation Index (RRVI) [64] and the Radar Normalized Difference Vegetation Index (RNDVI) [64]. In those studies, the radar indices from Sentinel-1 are derived by multi-temporal image averaging; however, we used a single image for each pre-fire and post-fire event, so the indices were modified as shown in Table 2.
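For illustration, the snippet below implements commonly published forms of the indices named above on linear-scale backscatter. The exact single-image modifications adopted in this study are those defined in Table 2, so these definitions should be read as approximations rather than the study's formulas; ΔDPSVI is omitted because its published form is more involved.

```python
import numpy as np

# Commonly published radar index forms, computed on linear-scale sigma-nought.

def rbd(post, pre):
    """Radar Burn Difference: post-fire minus pre-fire backscatter."""
    return post - pre

def rbr(post, pre):
    """Radar Burn Ratio: post-fire divided by pre-fire backscatter."""
    return post / pre

def rvi(vv, vh):
    """Radar Vegetation Index for dual-pol C-band (Sentinel-1)."""
    return 4.0 * vh / (vv + vh)

def rrvi(hh, hv):
    """Radar Ratio Vegetation Index for L-band (ALOS-2/PALSAR-2)."""
    return hv / hh

def rndvi(hh, hv):
    """Radar Normalized Difference Vegetation Index for L-band."""
    return (hh - hv) / (hh + hv)
```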

2.5. Stacking Ensemble Neural Network (SENN)

Wolpert [65] introduced the stacking ensemble method to improve the predictive performance by combining the outputs of multiple base learners through a meta-learning strategy. The strategy combines several base learners (level-0), which results in several models, and uses a meta-learner (level-1) to retrieve the final prediction. Figure 3 illustrates the stacking ensemble method.
This research uses the ANN as the base learner, so the method is called a Stacking Ensemble Neural Network (SENN). The ANN was inspired by the Hebbian learning law, which states that when the neurons connected by a synapse are stimulated simultaneously and repeatedly, the synapse’s contribution weight increases [66]. This can be expressed by the mathematical operation in Equation (1) as follows [67]:
$$y_k = \sigma\!\left(\sum_{j=1}^{M} w_{jk}\, x_j + b_k\right) \quad (1)$$
where $w_{jk}$ is the weight of the connection between neurons $j$ and $k$, $x_j$ is the output of the $j$-th neuron in the preceding layer, $b_k$ is a bias, and $\sigma$ is the activation function applied to produce the output $y_k$. An ANN architecture commonly comprises three layers: an input layer, a hidden layer, and an output layer.
This research used three ANN base learner models (ANN-1, ANN-2, and ANN-3), each consisting of two hidden layers. The key differences among the three ANN models lie in the number of hidden-layer nodes and the dropout rates. The number of nodes was chosen based on several considerations, including those of Stathakis et al. [68] and Rachmatullah et al. [69] for the first layer, taking into account that the number of input features for model development is 10 for both the Sentinel-1 and ALOS-2/PALSAR-2 cases. All models shared the same architecture (two hidden layers with ReLU activation and a sigmoid output) but differed in the number of nodes and dropout rates as follows:
  • ANN-1: First layer: 8 nodes; second layer: 4 nodes; and dropout of 0.3 after the second layer.
  • ANN-2: First layer: 8 nodes; second layer: 7 nodes; and dropout of 0.5 after the first layer and 0.3 after the second layer.
  • ANN-3: First layer: 9 nodes; second layer: 5 nodes; and dropout of 0.3 after the second layer.
A Rectified Linear Unit (ReLU) was used in each hidden layer, whereas a sigmoid activation function was used in the output layer. The ReLU was chosen because it helps the model converge faster during training and mitigates the vanishing gradient problem [70]; our previous empirical work also found that the ReLU helps neural networks learn faster. In addition, a kernel regularizer and an activity regularizer with a value of 0.0001 were applied to the first and second hidden layers to prevent overfitting. The learning rate was set to 0.001 and the batch size to 32. For the meta-learner, we chose logistic regression, as it is widely used for binary decisions [71] and reduces the overfitting risk [72].
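The sketch below illustrates how the SENN described above could be assembled with the TensorFlow/Keras and Scikit-learn libraries named in Section 2.3. The Adam optimizer, the L2 form of the regularizers, the number of epochs, and training the meta-learner on the base learners' training-set outputs (rather than out-of-fold predictions) are assumptions not specified in the text.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers, regularizers
from sklearn.linear_model import LogisticRegression

N_FEATURES = 10  # polarimetric input features (see Table 2)

def build_ann(nodes1, nodes2, drop1=None, drop2=0.3):
    """Base learner: two ReLU hidden layers with kernel/activity regularizers,
    optional dropout, and a sigmoid output for the burned/unburned decision."""
    reg = regularizers.l2(1e-4)  # L2 assumed; the text only states the value 0.0001
    model = keras.Sequential([keras.Input(shape=(N_FEATURES,))])
    model.add(layers.Dense(nodes1, activation="relu",
                           kernel_regularizer=reg, activity_regularizer=reg))
    if drop1 is not None:
        model.add(layers.Dropout(drop1))
    model.add(layers.Dense(nodes2, activation="relu",
                           kernel_regularizer=reg, activity_regularizer=reg))
    model.add(layers.Dropout(drop2))
    model.add(layers.Dense(1, activation="sigmoid"))
    model.compile(optimizer=keras.optimizers.Adam(learning_rate=0.001),  # optimizer assumed
                  loss="binary_crossentropy", metrics=["accuracy"])
    return model

# ANN-1, ANN-2, and ANN-3 as configured in the text.
base_learners = [
    build_ann(8, 4, drop1=None, drop2=0.3),  # ANN-1
    build_ann(8, 7, drop1=0.5, drop2=0.3),   # ANN-2
    build_ann(9, 5, drop1=None, drop2=0.3),  # ANN-3
]

def stack_features(X):
    """Collect the base learners' predicted probabilities as meta-features."""
    return np.column_stack([ann.predict(X, verbose=0).ravel() for ann in base_learners])

def fit_senn(X_train, y_train, epochs=100):  # number of epochs not stated in the paper
    """Train the base ANNs (level-0), then fit a logistic-regression meta-learner (level-1)."""
    for ann in base_learners:
        ann.fit(X_train, y_train, batch_size=32, epochs=epochs, verbose=0)
    meta_learner = LogisticRegression()
    meta_learner.fit(stack_features(X_train), y_train)
    return meta_learner

def predict_senn(meta_learner, X):
    """Final burned (1) / unburned (0) prediction from the stacked model."""
    return meta_learner.predict(stack_features(X))
```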

2.6. Training and Testing Dataset Preparation

This research constructed the training datasets using Sentinel-2 MSI images as the reference, as described in Table 1. The training dataset comprised unburned and burned classes. In addition, the land cover variety based on the 2019 land cover maps was considered in the training dataset production. To ensure the reliability of the training dataset, we assessed it using the Separability Index (SI) [73] of the spectral index most widely used to identify burned areas, the Normalized Burn Ratio (NBR) [74], as expressed in Equation (2).
$$SI = \frac{|\mu_b - \mu_u|}{\sigma_b + \sigma_u} \quad (2)$$
where $\mu_b$ and $\mu_u$ are the mean values of the index for burned and unburned pixels, respectively, and $\sigma_b$ and $\sigma_u$ are the corresponding standard deviations.
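A minimal sketch of this assessment is given below; it computes the NBR from Sentinel-2 near-infrared and shortwave-infrared reflectance and the SI as reconstructed in Equation (2), using synthetic samples purely for illustration.

```python
import numpy as np

def nbr(nir: np.ndarray, swir2: np.ndarray) -> np.ndarray:
    """Normalized Burn Ratio from NIR (Sentinel-2 B8) and SWIR-2 (B12) reflectance."""
    return (nir - swir2) / (nir + swir2)

def separability_index(burned: np.ndarray, unburned: np.ndarray) -> float:
    """Separability Index between burned and unburned samples of an index (Equation (2))."""
    return abs(burned.mean() - unburned.mean()) / (burned.std() + unburned.std())

# Example with synthetic NBR samples: SI > 1 indicates a good degree of separation.
rng = np.random.default_rng(0)
si = separability_index(rng.normal(-0.3, 0.10, 1000), rng.normal(0.4, 0.15, 1000))
print(f"SI = {si:.2f}")
```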
The training dataset was in the form of polygons. There were 205, 126, and 296 training polygons for the AOIs in the Rokan Hilir, Merauke, and Bima and Dompu Regencies, respectively. In pixel units, these polygons were equivalent to 10,125, 5798, and 13,535 pixels for the respective AOIs. In addition, we ensured a relatively balanced sample between the burned and unburned classes: 3956 unburned and 6169 burned pixels for the Rokan Hilir Regency; 2957 unburned and 2841 burned pixels for the Merauke Regency; and 8559 unburned and 4976 burned pixels for the Bima and Dompu Regencies.
For each AOI, we selected a training dataset for burned and unburned classes following its land cover. Not all AOIs have training datasets for each land cover because it depends on the landscape. The training dataset in the Rokan Hilir Regency is mostly in estate crops, followed by wet shrubs, seasonal crops, secondary swamp forests, and settlement areas. The training dataset in the Merauke Regency is dominated by savannas, followed by secondary swamp forests. In contrast, the training dataset in Bima and Dompu Regencies consists of savannas, seasonal crops, dryland forests, dry shrubs, paddy fields, settlement areas, and open water. The number of training datasets is different for each land cover type in the AOI, as it depends on the amount of each land cover type.
For testing dataset production, burned area maps derived from PlanetScope imagery were used as reference testing points, labeled as burned and unburned. ISODATA unsupervised classification was employed to delineate burned and unburned areas. Studies have shown that ISODATA can outperform other unsupervised classification methods, such as k-means and chain clusters, particularly in identifying spatially and spectrally dominant classes [75,76]. While its accuracy may be lower than that of supervised methods, ISODATA can leverage abundant unlabeled data to improve classification accuracy when labeled data is limited. We selected ISODATA to reduce reliance on manual labeling and ensure objectivity in the labeling of test data.
The testing dataset consisted of points comprising burned and unburned classes, with a balanced proportion between the two. We used stratified random sampling to collect points spread over each area of interest. Also, because the SAR and PlanetScope images differ in spatial resolution (10 m vs. 3 m), three PlanetScope pixels had to belong to the same class to ensure that a collected point did not represent a mixed class. In addition, the collected points had to be a minimum of 30 m apart and agree with the burned area map obtained from Indonesia’s MoF.
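As an illustration of the minimum-distance rule, the sketch below greedily discards candidate test points that fall within 30 m of an already retained point. It is a simplified stand-in for the actual sampling procedure and assumes projected coordinates in metres.

```python
import numpy as np
from scipy.spatial import cKDTree

def enforce_min_distance(points_xy: np.ndarray, min_dist: float = 30.0) -> np.ndarray:
    """Return indices of candidate points that are at least min_dist metres apart.

    points_xy: (N, 2) array of projected coordinates in metres, in candidate order.
    """
    keep = []
    tree = None
    for i, p in enumerate(points_xy):
        # Keep the point only if no already-retained point lies within min_dist.
        if tree is None or not tree.query_ball_point(p, r=min_dist):
            keep.append(i)
            tree = cKDTree(points_xy[keep])  # rebuild the search tree with retained points
    return np.array(keep)
```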
The number of testing points was 118 (67 unburned and 51 burned), 182 (99 unburned and 83 burned), and 199 (99 unburned and 100 burned) for the AOIs in the Rokan Hilir, Merauke, and Bima and Dompu Regencies, respectively. The Rokan Hilir Regency’s testing dataset covers estate crops, seasonal crops, wet shrubs, and secondary swamp forests. The testing dataset in the Merauke Regency consists of savannas and wet shrubs, while savannas, seasonal crops, dryland forests, dry shrubs, and paddy fields dominate in the Bima and Dompu Regencies. There are differences in savanna conditions between Merauke and the Bima and Dompu Regencies: the savanna is both wet and dry in the Merauke Regency and dry in the Bima and Dompu Regencies.

2.7. Performance Metrics

We used several metrics to evaluate the classification model performance, namely overall accuracy (OA), precision, recall, F1-score, and Cohen’s Kappa (K), as shown in Equations (3)–(7). In this research, the overall accuracy is the proportion of correctly predicted points out of all points, computed from the true positives (TP), true negatives (TN), false positives (FP), and false negatives (FN). Precision is the fraction of points predicted as burned that are truly burned. Recall is the percentage of the burned class that is correctly predicted relative to the reference data (i.e., the ground truth). The F1-score is the harmonic mean of precision and recall. Cohen’s Kappa measures the inter-observer agreement between the predicted results and the reference data, where $p_e$ in Equation (7) is the probability of chance agreement between the predicted results and the reference data.
$$\mathrm{Overall\ Accuracy\ (OA)} = \frac{TP + TN}{TP + TN + FP + FN} \quad (3)$$
$$\mathrm{Precision} = \frac{TP}{TP + FP} \quad (4)$$
$$\mathrm{Recall} = \frac{TP}{TP + FN} \quad (5)$$
$$\mathrm{F1\text{-}Score} = \frac{2 \times \mathrm{Precision} \times \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}} \quad (6)$$
$$\mathrm{Cohen's\ Kappa} = \frac{\mathrm{OA} - p_e}{1 - p_e} \quad (7)$$
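These metrics can be computed with the Scikit-learn library mentioned in Section 2.3; the snippet below is a minimal example, assuming binary labels with the burned class encoded as 1.

```python
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, cohen_kappa_score)

def evaluate(y_true, y_pred):
    """Metrics of Equations (3)-(7), with the burned class as the positive label (1)."""
    return {
        "overall_accuracy": accuracy_score(y_true, y_pred),
        "precision": precision_score(y_true, y_pred, pos_label=1),
        "recall": recall_score(y_true, y_pred, pos_label=1),
        "f1_score": f1_score(y_true, y_pred, pos_label=1),
        "cohen_kappa": cohen_kappa_score(y_true, y_pred),
    }

# Example: compare predictions against reference labels derived from PlanetScope imagery.
print(evaluate([1, 0, 1, 1, 0, 0], [1, 0, 1, 0, 0, 1]))
```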

3. Results

3.1. Separability Index of the Training Dataset

This research reports the SI value of the Normalized Burn Ratio (NBR) obtained from the training dataset for each AOI. According to the criterion of Kaufman and Remer [73], the selected training datasets are reasonably representative of burned and unburned samples: the NBR yields a high SI (SI > 1) for the Merauke Regency (3.09) as well as the Bima and Dompu Regencies (1.13), indicating a good degree of separation. The lowest SI value is found in the Rokan Hilir Regency (0.68), because the training data there were obtained from fire incidents spanning several months that could still be visually recognized in the optical imagery; consequently, a large amount of training data is needed to implement the method in this AOI.

3.2. Performance Evaluation of Burned Area Detection Using Stacking Ensemble Neural Network (SENN)

3.2.1. Peatland Area

The performance evaluation of burned area classification in the Rokan Hilir Regency using the ANN-1, ANN-2, ANN-3, and SENN methods on C-band SAR data is displayed in Figure 4a. With the SENN method, the highest OA, precision, F1-score, and Cohen’s Kappa were 0.88, 0.80, 0.87, and 0.76, respectively, while the ANN-2 and ANN-3 methods had the same recall value of 1.00. The SENN increased the OA by 4.24% to 10.17% relative to the base learners. According to Landis and Koch’s categorization of Cohen’s Kappa [77], all tested methods indicated substantial agreement. ANN-2 performed the worst in terms of the OA, precision, F1-score, and Cohen’s Kappa.
Figure 4b shows the performance of burned area classification using the ANN-1, ANN-2, ANN-3, and SENN methods on L-band SAR data. Of all the methods, the highest OA, precision, F1-score, and Cohen’s Kappa values were achieved using the SENN method, at 0.93, 0.92, 0.92, and 0.86, respectively, whereas the highest recall (0.94) was obtained with the ANN-2 method. The SENN method increased the OA by 1.70% to 6.78%. Cohen’s Kappa scores for the four methods ranged from 0.72 to 0.86, indicating substantial to almost perfect agreement. The lowest OA came from the ANN-1 and ANN-3 methods, which gave the same value (0.86). The ANN-1 method performed worst for recall, the F1-score, and Cohen’s Kappa, although not for precision.
A comparison of the C-band and L-band utilization for burned area classification in Figure 4a and 4b shows that the highest overall accuracy, precision, F1-score, and Cohen’s Kappa are achieved using the SENN method in L-band data with scores of 0.93, 0.92, 0.92, and 0.86, respectively. It indicated that the SENN method on L-band SAR data outperformed other methods on the C-band SAR in decreasing commission errors. Regarding recall, the ANN-2 and ANN-3 methods provided the highest score (1.00) when using C-band SAR data. This result proved that the ANN-2 and ANN-3 methods helped to decrease omission errors in C-band SAR data. In this case, the SENN method provides results balanced between precision and recall values, thus maintaining the performance of both metrics.
Figure 4c reveals the performance of burned area classification for the Merauke Regency derived from the compared methods of C-band SAR data. The highest OA, F1-score, and Cohen’s Kappa values were achieved using the SENN method; they were 0.96, 0.95, and 0.91, respectively. Both the SENN and ANN-3 methods gave the highest recall score at 0.96. The highest precision was 0.95 using the ANN-1 method. SENN methods increased the OA, which varied from 1.09% to 3.29% compared to the ANN-2 and ANN-3 methods. For Cohen’s Kappa score, using all methods indicated an almost perfect agreement, which varies from 0.84 to 0.91. The lowest OA, 0.92, occurred when using the ANN-2 method. This finding shows that the ANN-2 method performs worst for OA, precision, recall, the F1-score, and Cohen’s Kappa values.
The performance evaluation of burned area classification in the Merauke Regency for the tested methods on L-band SAR data is shown in Figure 4d. The graph illustrates that the highest OA, precision, F1-score, and Cohen’s Kappa were at 0.97, 0.93, 0.96, and 0.93, respectively, which were obtained using the SENN method. Meanwhile, the ANN-1, ANN-2, ANN-3, and SENN methods gave the same recall value at 1.00. The OA value increase ranged from 1.10% to 3.84% between the SENN and other methods. Based on Cohen’s Kappa score, all methods indicated an almost perfect agreement. The ANN-1 performed worst regarding OA, precision, recall, the F1-score, and Cohen’s Kappa metrics.
The result from C-band and L-band utilization for burned area classification in Figure 4c and 4d exhibits that the highest OA, precision, F1-score, and Cohen’s Kappa were achieved using SENN in L-band data with a score of 0.94, 0.94, 0.93, and 0.88, respectively. These results imply that the SENN methods on L-band SAR data outperformed other C-band SAR methods in decreasing the AOI commission error in the Merauke Regency. Regarding recall, both ANN-3 and SENN methods reached the highest score (0.96) when using C-band SAR data. This result shows that using ANN-3 and SENN methods on C-band SAR data decreased omission errors. However, it is important to interpret some of the performance metrics, particularly recall, with caution. For instance, the recall values reaching 1.00 in Merauke indicate that all reference burn points were correctly identified. This may reflect the dominance of extensive, homogeneous burned regions in the area, but it could also signal potential overfitting or optimistic classification in certain landscape contexts. Although the SENN model consistently outperformed the base learners, we did not conduct statistical significance testing (e.g., paired t-tests) to confirm whether these improvements were statistically robust due to the limited number of AOIs and fixed acquisition dates. Additionally, while SENN integrates predictions from three ANN models, we did not track or quantify the disagreement among them or the specific role of the meta-learner in resolving classification conflicts. These limitations are acknowledged and will be addressed in future studies to improve interpretability and statistical validation.

3.2.2. Non-Peatland Area

Figure 5a depicts the evaluation of burned area classification in the Bima and Dompu Regencies for the tested methods using C-band SAR data. The highest OA, recall, F1-score, and Cohen’s Kappa values were obtained with the SENN method, at 0.72, 0.72, 0.72, and 0.45, respectively, while the highest precision was obtained by ANN-1 at 0.78. The SENN method increased the OA by 5.02% to 8.04%. The SENN method’s Cohen’s Kappa score was categorized as moderate agreement, whereas the other three methods resulted in fair agreement. The lowest OA, 0.64, occurred when using the ANN-1 method. The ANN-2 method performed worst for the OA, precision, recall, F1-score, and Cohen’s Kappa values.
Figure 5b depicts the evaluation of burned area classification for the compared methods on L-band SAR data in the Bima and Dompu Regencies. The graph shows that the SENN method achieved the highest recall and F1-score, at 0.89 and 0.79, respectively. Furthermore, ANN-3 and SENN achieved the highest OA, with a value of 0.76, while the highest Cohen’s Kappa value (0.53) occurred with the ANN-3 method. The SENN method raised the OA by approximately 6% compared with the other methods. Based on Cohen’s Kappa scores, the ANN-3 and SENN methods indicated moderate agreement, while ANN-1 and ANN-2 were categorized as fair agreement. The ANN-1 method had the worst precision score, while the ANN-2 method gave the worst performance for the OA, recall, F1-score, and Kappa.
The comparison between C-band and L-band utilization for burned area classification in Figure 5a and 5b shows that the highest OA, recall, and F1-score occurred using SENN in L-band data with scores of 0.76, 0.89, and 0.79, respectively. It indicated that using SENN methods on L-band SAR data outperformed the other methods on C-band SAR in decreasing omission errors. Regarding precision, the ANN-1 reached the highest score (0.78) when using C-band SAR data. In addition, the highest Cohen’s Kappa occurred using ANN-3 (0.53) on L-band SAR data.

3.3. Burned Area Map Using the SENN Method

3.3.1. Peatland Area

Burned area classification maps using C-band and L-band SAR data with the SENN method for the Rokan Hilir and Merauke Regencies are depicted in Figure 6 and Figure 7, respectively. As clouds cover the area for most of each month, SAR data is essential to detect burned areas in the Rokan Hilir Regency, as shown in Figure 6a–e. It can be seen from the Sentinel-2 image that post-fire events were covered by clouds in several areas, so burned area detection cannot be optimal using optical data alone (Figure 6e).
Fortunately, SAR data (Figure 6c,d) can reveal surface conditions obscured by clouds in the optical data. As seen in the figures, burned areas are indicated in yellow and orange. The result is verified by PlanetScope data acquired on 2 November 2019, several days after the SAR acquisition date (Figure 6f). Visually, the classification results for the Rokan Hilir Regency show that many misclassifications occur in regions that should be categorized as unburned, mainly when using C-band SAR data (Figure 6a), indicating false positives. This finding corresponds to the precision value obtained using C-band SAR data, which is lower than that of L-band SAR data. In addition, the L-band SAR data exhibit a more coherent burned area pattern than the C-band SAR data, as depicted in Figure 6b. This research also showed that misclassifications in burned area detection mainly occurred due to bare land areas among estate crops in the Rokan Hilir Regency.
Figure 7 displays the classification map of the burned area using the SENN method in the Merauke Regency. Both C-band and L-band SAR data can identify burned areas, as shown in Figure 7a,b. Misclassification of burned areas that should be classified as unburned mainly occurs in C-band SAR data, as depicted in the solid-line polygon in Figure 7a; this is confirmed by the optical image data shown in Figure 7e (brown color). L-band SAR data appear better able to detect the pattern of burned areas in the dashed-line polygon than the C-band SAR data. Examining these two peatland areas shows that the C-band SAR data tend to overestimate the burned areas more than the L-band SAR data, as described by Carreiras et al., in whose work Sentinel-1 increased the burned area estimate [78].

3.3.2. Non-Peatland Area

The classification map of burned areas using C-band and L-band SAR data with the SENN method for the Bima and Dompu Regencies is shown in Figure 8. Both bands’ SAR data could detect the burned areas that were categorized as medium fires. Both bands’ SAR data could identify dryland forests as unburned areas, as shown in Figure 8c,d in blue and Figure 8e,f in green. However, misclassifications still occurred in the bare land and seasonal crop areas, as depicted by the black polygon, where unburned areas were falsely identified as burned. This occurs when most seasonal crops have reached the harvest season, as illustrated in Figure 8f, allowing the SAR signal to interact directly with the soil. This research also shows that incorporating only two images in pre- and post-fire events is insufficient for accurately identifying the unburned class in seasonal crops due to temporal variations in the backscatter. Using both pre-fire and post-fire SAR imagery with an acquisition time close to the fire incident is also recommended. In addition to land cover types, topographical conditions contributed to the misclassification in this area, as the topography contributed to the distortion of the SAR image.
Figure 9 presents the backscatter distributions of three land cover types—bare land, burned area, and vegetation—derived from post-fire SAR data using Sentinel-1 (C-band) and ALOS-2/PALSAR-2 (L-band) in various polarization modes. Across all polarizations, vegetation consistently exhibits higher backscatter values, with Sentinel-1 VH (~ −15 dB) and ALOS-2 HV (~−12 dB) showing the most prominent separation. In contrast, both bare land and burned areas exhibit lower backscatter values, typically between −20 dB and −25 dB, with considerable overlap, making them difficult to distinguish based solely on intensity. This overlap indicates that, post-fire, the structural differences between bare and burned surfaces are minimal regarding the SAR response. Sentinel-1 VV and ALOS-2 HH polarizations show similar patterns, though with slightly reduced contrast among classes compared to their cross-polarized counterparts. Overall, cross-polarizations (VH and HV) demonstrate a greater sensitivity to vegetation cover, while L-band data (ALOS-2/PALSAR-2) offer deeper penetration, making it potentially more reliable in detecting structural fire damage beneath the sparse canopy.

4. Discussion

To evaluate the SENN method’s contribution to improving the burned area prediction results, we compared the results for each AOI. Our experiments show that most performance metrics achieve their highest scores with the SENN method in all AOIs (the Rokan Hilir, Merauke, and Bima and Dompu Regencies). These results demonstrate that the SENN method is acceptably effective for detecting burned areas in peatland and non-peatland areas with C-band and L-band SAR data. This result supports several studies that evaluate the use of the stacking ensemble method and report performance gains, such as for wheat grain yield prediction [47], multi-type flood probabilities [79], and forest aboveground biomass estimation [48]. As there has been no application of the stacking ensemble method to burned area detection until now [45], this research demonstrates a new approach using the stacking ensemble method with SAR data for burned area detection.
Because SAR data are not influenced by weather or clouds, burned area detection using SAR data has been investigated with several machine learning methods that yield acceptable accuracy, such as the k-means clustering algorithm [31], Random Forest with an accuracy of 87% in a forest landscape [80], and a deep convolutional neural network [81]. These studies used a single machine learning method to identify burned areas. This research provides a reference for detecting burned areas using the Stacking Ensemble Neural Network method in peatland and non-peatland areas, with an obtained accuracy ranging from 76% to 96%.
We investigated the characteristics of the burned area prediction results using SAR data in AOIs with different peatland and non-peatland conditions. For the Rokan Hilir Regency, we found that L-band SAR data combined with the SENN method provide the highest overall accuracy. The monthly rainfall rate was in the low to heavy category [82] during fire events, as shown in Figure 10, which significantly affects soil moisture in this area [83]. Soil moisture contributes to the burned area classification result because it influences backscatter values, mainly in co-polarization [84]. We recommend using time-series images to observe burned areas in estate crop land due to phenological changes. In addition, other methods, such as a recurrent neural network with long short-term memory (LSTM), should be explored for this land cover type, as they can capture information over a certain period and handle sequential data [85].
The highest overall accuracy was achieved using L-band SAR data with the SENN method in the Merauke Regency because this area’s land cover type is a wet savanna; the land condition is therefore strongly affected by soil moisture due to high relative permittivity, mainly in HH polarization. In addition, L-band SAR data have a higher sensitivity than C-band SAR data to fire effects due to their longer wavelength [29]. Moreover, L-band SAR data can retrieve soil properties after fire events better than C-band SAR data [86], which correlates with soil moisture [87]. This finding implies that L-band SAR data are promising for identifying burned areas, particularly in peatland areas. Besides soil moisture, other factors may enhance the overall accuracy in peatlands such as Merauke and Rokan Hilir. Peat soil contains organic matter that tends to sustain changes in dielectric properties. In addition, peatland fires often spread to the subsurface when a surface fire lasts long enough with sufficient fuel, making them more persistent, so L-band SAR data, with their longer wavelength, can produce a more strongly contrasting backscatter. In contrast, fires in non-peatland areas occur on the surface, resulting in less visible structural changes. The continued development of L-band SAR satellites, including the launch of ALOS-4 in 2024, the forthcoming NASA-ISRO (NISAR) satellite in 2025, and the development of the ROSE-L satellite by the European Space Agency, makes it possible to monitor burned areas continually.
This research finds the lowest classification accuracy in the Bima and Dompu Regencies. Land cover and topography strongly influence burned area classification in this region, which consists of dry savannas and grasses, unlike the savanna type found in the Merauke Regency. One study shows that post-fire temporal backscatter exhibits no significant change in the VH polarization of C-band SAR, whereas VV polarization changes due to the soil moisture contribution [88]. However, Figure 10 shows that the rainfall rate during fire events (July–October 2019) was light (around 20 mm/month) in the Bima and Dompu Regencies, indicating a small contribution of soil moisture to the VV polarization value in C-band SAR. This burned area classification result is also similar to that of Belenguer-Plomer et al. [89], in whose study burned area classification performs poorly in grassland and crop areas when assessed using only Sentinel-1, resulting in a high omission error (low recall). Like C-band SAR, HH-polarized L-band SAR shows only a slight change in backscatter after fire events due to land humidity and remains stable in HV polarization in grasslands [27]. Topography is another challenge in burned area classification because it leads to misclassification in steep and hilly areas due to the shadow effect [28].

5. Conclusions

This research provides a new approach using a Stacking Ensemble Neural Network (SENN) for burned area detection with C-band and L-band SAR polarimetric features. Burned areas can be detected using the SENN method, which offers the highest overall accuracy with both C-band and L-band SAR data in peatland and non-peatland areas compared to the base models. This research shows that L-band SAR data outperform C-band SAR data in detecting burned areas using the SENN method. The use of L-band SAR data results in an increased overall accuracy (93–96%) and precision (92–100%) for estate crop and wet savanna types in peatland areas, and it increases the overall accuracy (76%) and recall (89%) in dry savannas and grasses in the non-peatland areas. Additionally, with the launch of ALOS-4 in 2024, the forthcoming NASA-ISRO (NISAR) satellite in 2025, and the development of the ROSE-L satellite by the European Space Agency, the use of L-band SAR data is promising for the continuous identification of burned areas. The worst performance for burned area detection using the SENN method occurred in dry savanna or grassland and hilly areas. While burned area detection using the SENN method is feasible in both peatland and non-peatland areas, this research highlights that specific land cover types and topographical features significantly influence the model’s performance.
In future studies, as fire beneath the surface in peatland areas is a challenging issue, it will be essential to explore the ability of L-band SAR data to detect fire in the subsurface zone. It is also important to test the Stacking Ensemble Neural Network model in other regions with similar landscapes to evaluate its robustness, and to study the combination of C-band and L-band SAR data to learn the benefits of combining the two SAR sensors. Furthermore, future work should consider incorporating statistical significance testing and a deeper analysis of model behavior, such as base learner disagreement, to enhance interpretability and support a more comprehensive evaluation. Further work should also include performance evaluations across individual land cover types to better understand class-specific errors and improve the method’s applicability in heterogeneous landscapes such as dry savannas.

Author Contributions

Conceptualization, D.S., A.I.L. and M.R.; methodology, D.S., A.I.L., M.R. and Y.V.; software, A.I.L. and I.R.; validation, A.I.L., Y.V. and A.G.S.; formal analysis, A.I.L., I.R., Y.V. and T.K.; investigation, A.I.L., A.A.B., F.A.P., T.K., A.G.S. and A.J.; resources, A.J.; data curation, A.I.L. and A.A.B.; writing—original draft preparation, A.I.L., A.A.B. and F.A.P.; writing—review and editing, D.S., M.R. and Y.V.; visualization, A.I.L., Y.V., A.A.B. and F.A.P.; supervision, S.S., A.S.P. and J.T.S.S.; project administration, D.S.; funding acquisition, D.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research is funded by the Directorate of Research and Development, Universitas Indonesia, under Hibah PUTI Q2 2024 (Grant No. NKB-708/UN2.RST/HKP.05.00/2024).

Acknowledgments

The authors sincerely thank the Japan Aerospace Exploration Agency (JAXA) for providing ALOS-2/PALSAR-2 images and the National Research and Innovation Agency of the Republic of Indonesia for supplying PlanetScope images. The authors acknowledge Indonesia’s Ministry of Forestry and the Ministry of Energy and Mineral Resources for providing Burned Area/Land Cover maps and Peatland maps, respectively.

Conflicts of Interest

The authors declare no conflicts of interest.

Figure 1. Area of interest (AOI) with varying land cover types in: (A) Rokan Hilir, (B) Bima and Dompu, and (C) Merauke Regencies.
Figure 2. Research approaches.
Figure 3. Illustration of a stacking ensemble method.
Figure 4. Comparison of burned area classification performance metric values using C-band and L-band SAR data: (a,b) Rokan Hilir Regency; (c,d) Merauke Regency.
Figure 5. Comparison of burned area classification performance metric values using: (a) C-band and (b) L-band SAR data in Bima and Dompu Regencies.
Figure 6. Burned area detection in Rokan Hilir Regency using the SENN method for (a) Sentinel-1 and (b) ALOS-2/PALSAR-2, with visual references: (c) Sentinel-1 (R = VHpre-fire event − VHpost-fire event, G = VVpre-fire event, B = VHpost-fire event) and (d) ALOS-2/PALSAR-2 (R = HVpre-fire event − HVpost-fire event, G = HHpre-fire event, B = HVpost-fire event); (e) Sentinel-2 post-fire RGB composite (B12-B8A-B4); and (f) PlanetScope post-fire composite (B3-B2-B1).
Figure 7. Burned area detection in Merauke Regency using the SENN method for (a) Sentinel-1 and (b) ALOS-2/PALSAR-2, with visual references: (c) Sentinel-1 (R = VHpre-fire event − VHpost-fire event, G = VVpre-fire event, B = VHpost-fire event), (d) ALOS-2/PALSAR-2 (R = HVpre-fire event − HVpost-fire event, G = HHpre-fire event, B = HVpost-fire event), and (e) Sentinel-2 post-fire RGB composite (B12-B8A-B4). Two boxes highlight areas where Sentinel-1 (C-band) detects more BA than ALOS-2 (L-band), indicating overestimation (solid line) and lower detail or more fragmentation (dashed line).
Figure 8. Burned area detection in Bima and Dompu Regencies using the SENN method for (a) Sentinel-1 and (b) ALOS-2/PALSAR-2, with visual references: (c) Sentinel-1 (R = VHpre-fire event − VHpost-fire event, G = VVpre-fire event, B = VHpost-fire event), (d) ALOS-2/PALSAR-2 (R = HVpre-fire event − HVpost-fire event, G = HHpre-fire event, B = HVpost-fire event), and (e) Sentinel-2 pre-fire RGB composite (B4-B3-B2); (f) Sentinel-2 post-fire RGB composite (B4-B3-B2).
Figure 9. Boxplot comparison of SAR backscatter values (in dB) for different land cover types—bare land, burned area, and vegetation—based on post-fire imagery. (a) Sentinel-1 VV polarization, (b) Sentinel-1 VH polarization, (c) ALOS-2/PALSAR-2 HV polarization, and (d) ALOS-2/PALSAR-2 HH polarization.
Figure 10. Monthly rainfall rate for each AOI in 2019.
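Figure 3 illustrates the stacking idea behind the SENN: several base neural networks are trained on the polarimetric features, and a meta-learner combines their predictions. The snippet below is a minimal, hypothetical sketch of such a stack using scikit-learn; the number of base learners, the hidden-layer sizes, the logistic-regression meta-learner, and the synthetic data are illustrative assumptions, not the exact configuration reported in this study.

```python
# Hypothetical SENN-style stacking sketch (assumes scikit-learn is available).
import numpy as np
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Placeholder data: rows are pixels, columns are polarimetric features,
# labels mark burned (1) vs. unburned (0). Replace with real feature stacks.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 8))
y = (X[:, 0] + X[:, 3] > 0).astype(int)

base_learners = [
    ("ann1", MLPClassifier(hidden_layer_sizes=(32,), activation="relu",
                           max_iter=1000, random_state=1)),
    ("ann2", MLPClassifier(hidden_layer_sizes=(64, 32), activation="relu",
                           max_iter=1000, random_state=2)),
]

# The meta-learner is trained on out-of-fold base predictions (cv=5).
senn = StackingClassifier(
    estimators=base_learners,
    final_estimator=LogisticRegression(max_iter=1000),
    cv=5,
    stack_method="predict_proba",
)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
senn.fit(X_train, y_train)
print("Held-out accuracy:", senn.score(X_test, y_test))
```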
Table 1. Details of SAR and optical data acquisition used in 2019.

AOI | Sensor | Pre-Fire Event | Post-Fire Event
Rokan Hilir Regency, Riau Province | Sentinel-1 (C-Band SAR) | 10 June | 20 October (** −13 days)
Rokan Hilir Regency, Riau Province | ALOS-2/PALSAR-2 (L-Band SAR) | 11 June | 29 October (** −4 days)
Rokan Hilir Regency, Riau Province | Sentinel-2 | From 1 May to 30 July | 17 August, 1 September, and 1 October
Rokan Hilir Regency, Riau Province | PlanetScope | * | 2 November
Merauke Regency, Papua Province | Sentinel-1 (C-Band SAR) | 16 May | 20 August (** −12 days)
Merauke Regency, Papua Province | ALOS-2/PALSAR-2 (L-Band SAR) | 21 May | 27 August (** −5 days)
Merauke Regency, Papua Province | Sentinel-2 | From 1 June to 30 June | From 15 August to 31 August
Merauke Regency, Papua Province | PlanetScope | * | 1 September
Bima and Dompu Regencies, West Nusa Tenggara Province | Sentinel-1 (C-Band SAR) | 11 June | 21 October (** −5–8 days)
Bima and Dompu Regencies, West Nusa Tenggara Province | ALOS-2/PALSAR-2 (L-Band SAR) | 8 June | 26 October (** −3 days)
Bima and Dompu Regencies, West Nusa Tenggara Province | Sentinel-2 | From 1 June to 15 June | From 15 to 31 October
Bima and Dompu Regencies, West Nusa Tenggara Province | PlanetScope | * | 26–29 October
* No pre-fire images are available. ** Different acquisition dates between radar images and PlanetScope (reference point testing).
Table 2. Polarimetric features used in classification.

Sensor | Polarimetric Feature | Acronym | Equation
Sentinel-1 (C-Band) | Radar post-fire events on VH polarization | VH_post-fire events | $10\log_{10}(DN^2) + K$
Sentinel-1 (C-Band) | Radar post-fire events on VV polarization | VV_post-fire events | $10\log_{10}(DN^2) + K$
Sentinel-1 (C-Band) | Radar Vegetation Index on post-fire events | RVI_post-fire events | $\frac{4\,VH}{VV + VH}$
Sentinel-1 (C-Band) | Dual-Polarization SAR Vegetation Index on post-fire events | DPSVI_post-fire events | $\frac{VV + VH}{VV}$
Sentinel-1 (C-Band) | Difference Radar Vegetation Index | DRVI | RVI_post-fire events − RVI_pre-fire events
Sentinel-1 (C-Band) | Difference Dual-Polarization SAR Vegetation Index | DDPSVI | DPSVI_post-fire events − DPSVI_pre-fire events
Sentinel-1 (C-Band) | Radar Burn Ratio on VH and VV polarizations | RBR_VH and RBR_VV | $\frac{\text{Post-fire backscatter}_{xy}}{\text{Pre-fire backscatter}_{xy}}$, where xy denotes the polarization
Sentinel-1 (C-Band) | Radar Burn Difference on VH and VV polarizations | RBD_VH and RBD_VV | $\text{Post-fire backscatter}_{xy} - \text{Pre-fire backscatter}_{xy}$, where xy denotes the polarization
ALOS-2/PALSAR-2 (L-Band) | Radar post-fire events on HV polarization | HV_post-fire events | $10\log_{10}(DN^2) + CF$, where CF = −83.0
ALOS-2/PALSAR-2 (L-Band) | Radar post-fire events on HH polarization | HH_post-fire events | $10\log_{10}(DN^2) + CF$, where CF = −83.0
ALOS-2/PALSAR-2 (L-Band) | Radar Ratio Vegetation Index on post-fire events | RRVI_post-fire events | $\frac{HH}{HV}$
ALOS-2/PALSAR-2 (L-Band) | Radar Normalized Difference Vegetation Index on post-fire events | RNDVI_post-fire events | $\frac{HV - HH}{HV + HH}$
ALOS-2/PALSAR-2 (L-Band) | Difference Radar Ratio Vegetation Index | DRRVI | RRVI_post-fire events − RRVI_pre-fire events
ALOS-2/PALSAR-2 (L-Band) | Difference Radar Normalized Difference Vegetation Index | DRNDVI | RNDVI_post-fire events − RNDVI_pre-fire events
ALOS-2/PALSAR-2 (L-Band) | Radar Burn Ratio on HV and HH polarizations | RBR_HV and RBR_HH | $\frac{\text{Post-fire backscatter}_{xy}}{\text{Pre-fire backscatter}_{xy}}$, where xy denotes the polarization
ALOS-2/PALSAR-2 (L-Band) | Radar Burn Difference on HV and HH polarizations | RBD_HV and RBD_HH | $\text{Post-fire backscatter}_{xy} - \text{Pre-fire backscatter}_{xy}$, where xy denotes the polarization
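As a companion to Table 2, the sketch below shows one way to compute the calibration and index features with NumPy, assuming per-pixel arrays of pre- and post-fire backscatter. The function and array names are placeholders, and the choice of applying the indices directly to the supplied values (rather than first converting between dB and linear units) follows the table as written and is an assumption, not a prescription from this study.

```python
# Minimal sketch of the Table 2 feature computations (assumed names and units).
import numpy as np

def dn_to_db(dn, cal_const):
    """Backscatter in dB: 10*log10(DN^2) + calibration constant
    (K for Sentinel-1, CF = -83.0 for ALOS-2/PALSAR-2)."""
    dn = np.asarray(dn, dtype=np.float64)
    return 10.0 * np.log10(dn ** 2) + cal_const

def c_band_features(vv_pre, vh_pre, vv_post, vh_post):
    """Sentinel-1 (C-band) features: RVI, DPSVI, their differences, RBR, RBD."""
    rvi = lambda vv, vh: 4.0 * vh / (vv + vh)
    dpsvi = lambda vv, vh: (vv + vh) / vv
    return {
        "RVI_post": rvi(vv_post, vh_post),
        "DPSVI_post": dpsvi(vv_post, vh_post),
        "DRVI": rvi(vv_post, vh_post) - rvi(vv_pre, vh_pre),
        "DDPSVI": dpsvi(vv_post, vh_post) - dpsvi(vv_pre, vh_pre),
        "RBR_VH": vh_post / vh_pre,
        "RBR_VV": vv_post / vv_pre,
        "RBD_VH": vh_post - vh_pre,
        "RBD_VV": vv_post - vv_pre,
    }

def l_band_features(hh_pre, hv_pre, hh_post, hv_post):
    """ALOS-2/PALSAR-2 (L-band) features: RRVI, RNDVI, their differences, RBR, RBD."""
    rrvi = lambda hh, hv: hh / hv
    rndvi = lambda hh, hv: (hv - hh) / (hv + hh)
    return {
        "RRVI_post": rrvi(hh_post, hv_post),
        "RNDVI_post": rndvi(hh_post, hv_post),
        "DRRVI": rrvi(hh_post, hv_post) - rrvi(hh_pre, hv_pre),
        "DRNDVI": rndvi(hh_post, hv_post) - rndvi(hh_pre, hv_pre),
        "RBR_HV": hv_post / hv_pre,
        "RBR_HH": hh_post / hh_pre,
        "RBD_HV": hv_post - hv_pre,
        "RBD_HH": hh_post - hh_pre,
    }
```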
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
