1. Introduction
Remote sensing has emerged as a privileged means of observing and studying natural disasters. The capabilities of spaceborne sensors and platforms allow more accurate and frequent observations. Optical imagery, usually preferred for its accessibility, is often unusable because of the poor weather conditions accompanying the extreme events that induce natural disasters. As evidenced by the increasing number of satellites launched between 2007 and 2019 [1], synthetic aperture radar (SAR) imagery is becoming more and more popular. The numerous images produced offer new opportunities in the field of natural hazards. Radar images are less affected than optical imagery by cloud cover, and their active sensor system allows night-time acquisitions [2].
Processing chains are algorithms developed for the automated extraction of information from more or less raw data. Freely shared and documented, such algorithms help to democratize access to and use of SAR images. Automation is a way to deliver end-products that are easy to handle for users with no expertise in SAR imagery, which is often considered extremely complex. Developing and sharing such processing chains and products to study damage related to natural disasters is a powerful means of educating exposed populations, informing the affected territories, and supporting risk assessment, crisis management, and planning.
Multiple long-term climate analyses predict stagnation or a slight decrease in the number of tropical cyclones worldwide, with a probable poleward shift of their trajectories [3,4]. The South West Indian Ocean (SWIO) experienced nine cyclones per year on average during the last six decades, of which 0.8 per year made landfall on the Mozambican coast. During a comparable period, 5.1 events per year made landfall on North Atlantic coasts [5]. No significant change in frequency could be highlighted for either region during this period, but future impacts on populations could increase, as there is a broad global consensus on cyclone intensity predictions (maximum intensity reached by a cyclone, number of cyclones reaching high intensities, frequency of landfalls of high-intensity cyclones) [6]. The recent emergence of category 5 cyclones in the SWIO, even though no significant increase in frequency has been established, is a major concern for populations and economies, as damage could increase significantly [7].
Floods are among the most common and strongest impacts of hurricanes. According to Revilla-Romero et al. [8], they are globally among the most catastrophic natural disasters in terms of impact on both human life and the economy. Because of the large areas impacted and the need for rapid availability of geographical products for crisis management and field actions, remote sensing is a well-suited method for flood detection. Floods are widely studied and mapped using different remote sensing techniques.
Optical sensors are commonly used because of their straightforward interpretability, but they are strongly affected by atmospheric disturbance; cloud cover during the days around an event is usually significant. Synthetic aperture radar (SAR) sensors are far less sensitive to cloud cover, allowing a better image frequency [9]. Hence, SAR is widely used for mapping water and flooded areas [8,10,11,12,13,14]. New satellites such as the Sentinel constellation, particularly Sentinel-1 A and B (launched in 2014 and 2016, respectively), provide a revisit frequency well suited to measuring the impact of natural disasters and studying the resilience of environments through time series. Thus, change detection based on multi-temporal SAR images is increasingly used for disaster monitoring [10,15,16,17], especially for flood events [18].
Different flood detection methods are based on SAR imagery; we distinguish between textural analysis, spectral analysis, and change detection. Zhang et al. [19] proposed a learning-based textural approach which is efficient but requires a training set. Gong et al. [20] used Gabor filters for SAR classification, as they have strong discriminating power in classification problems. However, Gabor-based approaches have the disadvantages of being complex and highly sensitive to the noise in SAR images, and selecting proper features is usually difficult and time consuming. Sghaier et al. [21] used textural and data-fusion analysis on SAR time series (Radarsat-2 and Sentinel-1) to map floods in Canada, Greece, and Iran. They compared their results with other studies and showed that their texture-based approach achieves better accuracy for flood detection than methods based on change detection and thresholding. This approach is, however, much more complicated and time consuming to implement than change detection approaches.
Spectral analysis has been used in many previous studies of Sentinel-1 images, with pixel- or object-based methods, to map surface water and floods. Twele et al. [22] developed a fully automatic chain for processing Sentinel-1 ground range detected (GRD) backscatter, based on thresholding, fuzzy-logic-based classification refinement, and final classification including auxiliary data, with dissemination of the results through a web-based platform. Li et al. [23] used a change detection approach on Sentinel-1 GRD data to develop an efficient method for rapid flood mapping with low processing times. Although fully automated, this method depends on the selection of a reference image, and the authors underline that this step can be time consuming (several hours might be required). Amitrano et al. [24] provided an unsupervised framework for Sentinel-1 flood mapping combining a textural analysis with a fuzzy classification on GRD amplitudes and a change detection approach, designed for end users and decision makers. Zhang et al. [25] estimated the impact of Hurricane Irma on Florida in terms of floods, using both spectral analysis and InSAR from Sentinel-1 data to quantify the extent of the floods and water-level variations. In Bayik et al. [26], flooded areas were extracted with thresholding, random forest, and deep learning approaches on Sentinel-1 time series at the border of Turkey and Greece.
Algebraic methods are simple combinations of pre- and post-event images and polarizations. Difference, introduced by Weismiller et al. [27], is the most straightforward algebraic method. Ratio methods are commonly used in SAR imagery for change detection; according to Rignot and van Zyl [28], the ratio method is more effective at minimizing speckle noise. Normalized difference change detection (NDCD), introduced by Gianinetto and Villa [29], demonstrates good performance for flood detection. Based on normalized difference reflectance (NDR), the NDCD can be used with different products and polarization combinations. More complex methods, such as image transformations (principal component analysis (PCA) [30], multivariate alteration detection (MAD) [31]) or different pixel- or object-based classification methods [32,33,34], have also been developed for change detection with SAR data.
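The algebraic indices above can be sketched in a few lines. The following is a minimal illustration, not the paper's implementation: the toy backscatter values and the threshold of −0.3 are assumptions chosen for the example, and the exact NDCD formulation of Gianinetto and Villa may differ in detail.

```python
# Sketch of algebraic change-detection indices on two co-registered SAR
# backscatter images (pre- and post-event). Images are flattened to 1-D
# lists of linear intensities for brevity; values are illustrative.

def difference(pre, post):
    """Pixel-wise difference (Weismiller et al.): post - pre."""
    return [p2 - p1 for p1, p2 in zip(pre, post)]

def ratio(pre, post):
    """Pixel-wise ratio (Rignot and van Zyl), less sensitive to speckle."""
    return [p2 / p1 for p1, p2 in zip(pre, post)]

def ndr(pre, post):
    """Normalized difference, bounded in [-1, 1].
    Negative values indicate a smoother post-event surface (e.g. open
    water after a flood), positive values a rougher one."""
    return [(p2 - p1) / (p2 + p1) for p1, p2 in zip(pre, post)]

def flood_mask(pre, post, threshold=-0.3):
    """Flag pixels whose NDR drops below an empirical threshold
    (the -0.3 default is an assumption for this example)."""
    return [v < threshold for v in ndr(pre, post)]

# Toy 1-D "images": two pixels become specular (flooded) after the event.
pre_event  = [0.40, 0.35, 0.42, 0.38]
post_event = [0.41, 0.10, 0.43, 0.09]

print(flood_mask(pre_event, post_event))  # [False, True, False, True]
```

The bounded range of the NDR is what makes a single empirical threshold transferable between scenes, whereas the raw difference depends on absolute backscatter levels.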
In order to automate SAR data processing, the normalized difference method was chosen for its simplicity, robustness, and speed. It is easily reproducible and simple to transfer to non-expert users. It was applied to Sentinel-1 images to take advantage of their high spatial resolution (10 m) and short revisit time (12 days).
This paper will introduce the step-by-step construction of a processing chain dedicated to flood detection using Sentinel-1 SAR multi-temporal data, from the downloading of the data to the production of a map of flooded areas based on a change detection approach.
4. Discussion
Remote sensing has become an essential tool for disaster monitoring and management. Meteorological events make the use of optical images difficult, as cloud cover can be considerable. The use of radar images, largely insensitive to weather conditions, thus becomes more relevant. New platforms and sensors provide frequent and highly spatially resolved images. The processing chain presented in this study has been developed around the freely available Sentinel-1 imagery. It provides an easy-access tool for processing radar data, often considered very complex, in order to detect and map flooded areas in a cyclonic context. The use of S1-Tiling, designed to pre-process large amounts of Sentinel-1 data, enabled us to focus on a long time period for flood monitoring and resilience.
The Bahamas case study was used as a calibration site. Highly impacted by Hurricane Dorian in September 2019, the island of Great Abaco is an ideal study site, especially thanks to the considerable media coverage that took place after the event. These mainly photographic and video data, used as ground truth, enabled the threshold value for the NDR to be set empirically. However, the flood surface detected by the EMS Rapid Mapping products and by the proposed method is quite different, with more area detected by NDR. This difference can be explained by several factors. First, the imagery data differ: very high-resolution optical data for EMS Rapid Mapping against 10-m-resolution SAR for the proposed method. Moreover, the EMS imagery dates from four days after the event, compared to one day for NDR. While the results differ, they are not totally inconsistent, with two-thirds of the EMS Rapid Mapping total area also detected by NDR.
Aware of this, we applied the same threshold value to the region of Beira, highly impacted by Cyclone Idai. In this case, as for the Bahamas, the most visible impacts were the large flooded areas, which were of interest to map and follow. Here, our results are fairly similar to those of EMS Rapid Mapping; indeed, the same post-event Sentinel-1 image was used for both methods. Many differences in detection are due to the presence of semi-permanent water zones in the area, most of which were probably not present in the pre-event optical image used by EMS. These semi-permanent water zones can locally produce over-detections of the flooded area if the reference image is taken long before the event, during a drier season for example. One of the advantages of the NDR is that it uses two Sentinel-1 radar products for the pre- and post-event references, which can closely surround the date of the cyclone, thus excluding from detection areas already under water. Nonetheless, the purposes of the proposed methodology and of EMS Copernicus Rapid Mapping are very different. EMS Copernicus creates products to support crisis management and therefore operates under a strong time constraint. The product we propose will be used to assess the costs of flooding and environmental resilience over the longer term.
In both study sites, speckle noise remains noticeable in the data, impacting automatic flood detection by the algorithm. In the current state of the methodology, it is difficult to discriminate between background noise and actual floods for the smaller areas. One of the objectives of this work was to assess the capacity of S1-Tiling for time-series processing in multiple environments and locations, without any changes to the algorithm. S1-Tiling uses the Quegan multi-temporal speckle filter to produce the final denoised images. One prospect for improving our methodology will be to compare the results with different advanced multi-temporal speckle filters [48,49], either a modified Quegan filter [50] or the Lee sigma multi-temporal filter, both of which give interesting results.
Several studies have aimed to create a processing chain based on Sentinel-1 for the detection of change or water surfaces. Most are water-surface classifications using algorithms of varying complexity. Huang et al. [51] and Pham et al. [52] use the random forest method and a neural network, respectively. Both algorithms are trained with Landsat images, combined with the SRTM permanent water surface in Huang et al. [51]. These methods therefore depend on optical images for training the models. Bioresita et al. [9] use a classification method that is apparently accurate but also very complex, with numerous pre-processing, modeling, and post-processing steps. Furthermore, their accuracy assessment using the F-measure (F-score) cannot be compared with our results because it does not take true negatives into account. The studies of Li et al. [23] and Amitrano et al. [24] offer comparisons with other methods and globally achieved good water detection. Although the first method is fully automated, its authors underline that the selection of the reference image can be time consuming (several hours might be required). The second provided an unsupervised framework for Sentinel-1 flood mapping combining a textural analysis with a fuzzy classification on GRD amplitudes and an object-based image analysis. Even if the detection seems particularly accurate, the reproducibility of an object-oriented method in other geographical areas can be questioned. These studies are undoubtedly interesting, and a future comparison with other methods, such as those presented in Amitrano et al. [24], would be constructive.
The studies by Twele et al. [22] and Muro et al. [53] are more comparable to our work, but for different reasons. Like us, Twele et al. [22] offer a fully functional automatic processing chain, but one still based on single-image classifications. The proposed chain and the work of Muro et al. [53] are very similar, even if the algorithms differ: what is really at stake in both is change detection, not water-surface classification. Indeed, our work aims to calculate an indicator based on the comparison of two images taken over a short period of time, one just before the event and the other just after. Their algorithm is based on S1-omnibus, a variance-analysis method, which can be considered robust in terms of results and its use of a statistical indicator. However, as reported in their paper, this method detects any type of change without being able to define its nature. While the NDR does not define the real nature of the change either, it can provide information on its direction: a smoother or a rougher surface.
Although improvements can still be made, this actively developed processing chain offers an innovative and robust solution compared with other interesting chains developed elsewhere. It has the benefits of being based on an unsupervised method, being easy to use for non-expert users, and relying on free Sentinel-1 images.
5. Conclusions
The algorithm presented in this paper processes Sentinel-1 data for the detection of flood impacts, from download to final product, using S1-Tiling for pre-processing and NDR for flood detection.
Cyclonic seasons give rise to clouds and atmospheric disturbance. Less affected by these than optical imagery, Sentinel-1 data are a reliable solution for flood detection during such periods. Even when cloud cover is considerable, SAR images can be mobilized and potentially exploited sooner after an event than optical images. This reactivity allows flooding to be assessed closer to its maximum, when the impacted area is largest.
NDR is a simple method for implementing automated flood detection, enabling accurate detection in different areas. By choosing pre-event images as close as possible to the event date, it is possible to avoid detecting objects that were already under water, such as reservoirs, basins, or paddy fields. Only the true impacts of the event are detected, leading to greater accuracy.
In this study, the automated chain showed its effectiveness at two study sites, for specific events and during a short period after the cyclones. However, it can be transposed to any place in the world, making it possible to quickly and efficiently process large time series of pre- and post-event images in order to evaluate the impacts on, and resilience of, a study site.
The processing algorithms are still under active development, as multiple improvements could still be made. One short-term prospect is the integration of other multi-temporal despeckling methods into the chain for better noise filtering, as speckle is still noticeable in the results. Finally, the algorithms and source code of the entire processing chain will be published online under an open-source policy.