Article

Detection of Temporary Flooded Vegetation Using Sentinel-1 Time Series Data

1 Department of Geography, Ludwig Maximilian University of Munich, Luisenstr. 37, 80333 Munich, Germany
2 German Aerospace Center (DLR), German Remote Sensing Data Center (DFD), Oberpfaffenhofen, 82234 Wessling, Germany
* Author to whom correspondence should be addressed.
Remote Sens. 2018, 10(8), 1286; https://doi.org/10.3390/rs10081286
Submission received: 15 July 2018 / Revised: 5 August 2018 / Accepted: 12 August 2018 / Published: 15 August 2018

Abstract

The C-band Sentinel-1 satellite constellation enables the continuous monitoring of the Earth’s surface within short revisit times. It thus provides Synthetic Aperture Radar (SAR) time series data that can be used to detect changes over time regardless of daylight or weather conditions. Within this study, a time series classification approach is developed for the extraction of the flood extent with a focus on temporary flooded vegetation (TFV). This method is based on Sentinel-1 data, as well as auxiliary land cover information, and combines a pixel-based and an object-oriented approach. Multi-temporal characteristics and patterns are used to generate novel time series features, which form the basis of the developed approach. The method is tested on a study area in Namibia characterized by a large flood event in April 2017. Sentinel-1 time series data were used for the period between September 2016 and July 2017. It is shown that supplementing the temporary open water areas with TFV areas prevents an underestimation of the flood area and allows the derivation of the entire flood extent. Furthermore, a quantitative evaluation of the generated flood mask was carried out using optical Sentinel-2 images, showing that the overall accuracy increased by 27% after the inclusion of TFV.

1. Introduction

Flooding affects societies, economies, and ecosystems worldwide and can have devastating impacts. For the development of flood risk mitigation plans or disaster relief services, information about the flood extent in affected regions is an important source of information for the many institutions involved in crisis management, such as relief organizations, governmental authorities, and insurance companies [1,2].
Numerous current studies focus on flood mapping using various algorithms that enable the detection of temporary open water [3,4,5]. Besides these areas, the extraction of temporary flooded vegetation (TFV) areas is essential for mapping the entire flood extent. TFV can be described as areas where water bodies occur temporarily underneath vegetation [6]. Disregarding these areas can lead to an underestimation of the whole flood extent. Ground-based observations or aerial vehicles are not always sufficient for the mapping of large-scale floods due to weather conditions, technical limitations, or high risks. Satellite remote sensing, especially Synthetic Aperture Radar (SAR), allows one to overcome these challenges: it covers extensive areas of the Earth’s surface, enables near-real-time detection of large-scale flood events independent of daylight and weather conditions [7,8], and is suitable for the detection of water underneath vegetation.
Smooth open water surfaces act as specular reflectors and are therefore characterized by low SAR backscatter values. These areas can be well differentiated from non-water regions, which show higher backscatter values. In comparison, TFV shows highly complex backscatter behaviour due to double- or multi-bounce interactions between the smooth open water surface and the vertical structures of the vegetation, such as trunks or stems. When water is present underneath vegetated areas, the SAR backscatter intensity can increase significantly [9,10,11].
SAR-based techniques for the detection of TFV include simple approaches like manual thresholding [12,13] or automatic thresholding [11,14,15], distance-based classification methods [16,17], clustering algorithms [18,19], decision trees [7,20], or rule-based classification [21,22]. Furthermore, more advanced approaches based on machine learning [23,24,25], fuzzy logic [11,26], Markov Random Field modelling [14], or Wishart classifications [27,28,29] are applied depending on the task, the availability of polarization modes and phase information, as well as the spatial or temporal resolution [6].
For the extraction of TFV, most approaches focus on a single image, which represents only a snapshot of the current flood situation [12,30,31]. Other classification approaches use change detection techniques for the extraction of TFV, which allow one to detect potential changes in backscatter intensities [25,26,32,33,34,35]. Thereby, the selection of the scenes acquired under dry and flood conditions highly influences the classification results. Furthermore, the increase of backscatter intensities induced by TFV may not be clearly separable from the backscatter intensities of dry vegetation, depending on its phenological stage [36]. Time series approaches can be more appropriate for the extraction of flood-related classes by allowing the inclusion of seasonal or annual fluctuations of the backscatter. Furthermore, as a consequence of multiple observations of the same area, multi-temporal data can improve the reliability of mapping TFV areas. The shape of multi-temporal radar backscatter profiles can be used to derive multi-temporal features, enabling a more detailed extraction of flood-related classes [6,7]. Only a few studies address the detection of TFV based on multi-temporal satellite imagery, and these either depend on the chronological order and regularity of satellite images [19,37] or use a limited number of SAR time steps [11,34]. Furthermore, the application of multi-temporal approaches was limited in the past by the low availability of suitable SAR data. The systematic, long-term SAR data acquisition with short revisit times by satellite missions such as Sentinel-1 finally enables the use of multi-temporal data.
Although the pixel-based approach is the most commonly used method for the extraction of TFV [6], object-based approaches are emerging as an effective method for complex image classification [18,30,38,39]. On the one hand, objects are suitable for mapping heterogeneous classes, such as TFV, if the spatial resolution of the SAR data is finer than the objects on the ground [6]. On the other hand, the grouping of pixels into objects with similar backscatter values in SAR images allows one to reduce speckle noise [10,40]. Furthermore, objects enable the consideration of semantic or spatial contextual relationships, which helps to cope with heterogeneous features such as TFV [6,11]. Several segmentation techniques, which group pixels into meaningful or perceptual regions (objects) for the extraction of TFV, can be found in the literature, including multiresolution segmentation [24,41], Mallat’s discrete wavelet transform [42], Markov Random Fields [14,43], and clustering approaches [10,11]. The combination of segmentation techniques with SAR time series data can be used to discover multi-temporal characteristics and patterns, allowing for the extraction of useful information from enormous and complex data sets [44] and enabling the derivation of time series features with the advantages of an object-based approach.
In contrast to methods for the extraction of open water surfaces for flood detection [4,45,46], there is little research on the detection of the entire flood extent comprising TFV in addition to open water areas [11,25,29]. An example of automatic flood detection of open water in near-real time (NRT) with Sentinel-1 data is given by Twele et al. [47]. In the framework of the Center for Satellite Based Crisis Information (ZKI) located at the German Aerospace Center (DLR), this method is designed to provide a fully automatic web-based Sentinel-1 Flood Service (S-1FS) for the rapid provision of open flood extent information for humanitarian relief activities and civil security issues worldwide [48,49]. This S-1FS is to be extended and improved by the addition of TFV areas, which are derived with the time series approach developed in this paper.
This article introduces a time series approach for the derivation of an entire flood extent with a focus on the extraction of temporary open water (TOW) and temporary flooded vegetation (TFV) using multi-temporal S-1 imagery. The objectives of this work are:
  • To investigate the characteristics and patterns of SAR time series data and to show their potential regarding the detection of TOW and TFV;
  • To derive time series features, which are used as a basis for a time series approach focusing on the detection of TOW and TFV;
  • To classify the entire flood extent, including TOW and TFV in the analysed study area;
  • To improve the results of an external approach, DLR’s S-1FS [47], by supplementing temporary open water with TFV areas.

2. Materials

2.1. Study Area

The study area is located in the Caprivi Strip in the north-eastern part of Namibia, bordered by Zambia, Zimbabwe, and Botswana (Figure 1). The focus is on the Chobe-Sambesi flood plain, which is formed by the Zambezi and Chobe rivers and is influenced by seasonal floods in March and April [33,50]. Large areas of high grasslands dominate the landscape [51]. During flooding, the area is therefore characterized not only by TOW areas but also by extensive TFV areas. The extent of the study area is indicated by the red rectangle in Figure 1.

2.2. Data Sets

A time series of 25 Sentinel-1B (S-1) images with identical orbit configuration (same image geometry) was used to derive the flood-related classes. The Sentinel-1 mission consists of two satellites (Sentinel-1A/B) equipped with C-band (wavelength λ = 5.6 cm) sensors that allow the Earth’s surface to be monitored at a repeat frequency of six days. The characteristics of the used images and the acquisition dates are listed in Table 1. Only images with an interval of 12 days could be used for the study area in the analysed period, since only one sensor (Sentinel-1B) provided both VV and VH polarizations of the Interferometric Wide Swath (IW) mode and was already processed by ESA as Ground Range Detected High Resolution (GRDH) products. The flood event was covered by three S-1 images, whereby the scene acquired on 6 April 2017 was used as the flood image for further analysis and classification. This flood scene was chosen due to its temporal proximity to the reference data. The validation of the developed approach was carried out on the basis of a 27 km² extent in the study area. Figure 2a shows the validation extent of the preprocessed S-1 image at the flood event (6 April 2017) for VV polarization. The reference data were generated by visual interpretation and manual digitisation of a high-resolution optical Sentinel-2 (S-2) image, which was recorded on 8 April 2017. When the backscatter image and the optical data were compared, no changes in the flood extent could be observed within the span of two days. The digitised reference mask is shown in Figure 2b.
Urban areas and TFV are both characterized by strong double-bounce and multiple scattering effects, which makes their separation considerably more difficult. For the identification of urban areas, the Global Urban Footprint (GUF), which was derived on the basis of TerraSAR-X and TanDEM-X SAR data [52], was therefore used as additional information. The spatial resolution of the GUF is 0.4 arcseconds (~12 m). Figure 3a shows the GUF for the study area. In the developed approach, the GUF layer is used as an exclusion layer. Since the focus of the study is on the derivation of TFV areas, urban areas are excluded from the flood analysis and are not considered in the methodology.
All regions that lie less than 20 m above the nearest water network were defined as flood-prone in order to prevent misclassifications in elevated areas. For this purpose, the ‘Height Above Nearest Drainage’ (HAND) index [53] was integrated as a binary mask to exclude areas above the mentioned threshold of 20 m before classification [47]. The HAND index, with a resolution of about 90 m, is based on the height and flow direction information provided by the HydroSHEDS product [54]. Figure 3b shows the HAND mask for the study area in Namibia.
The separation between permanent open water surfaces and TOW areas was achieved using the SRTM Water Body Data (SWBD), which was applied as a permanent water mask. This product was created by the National Geospatial-Intelligence Agency (NGA) [55] on the basis of data from the Shuttle Radar Topography Mission (SRTM30), which took place in February 2000, and it is freely available as a vector data set. SWBD was chosen for its resolution and the global availability of SRTM data. Figure 3c shows the SWBD mask for the study area.

3. Methods

The process chain (Figure 4) of the time series approach for the derivation of the flood-related classes consists of pixel-based (deep orange arrows) and segment-based (bright orange arrows) parts. The pixel-based steps include the generation of time series features and their normalisation based on the VV- and VH-polarisation layer stacks (see Section 3.2). Simultaneously, the object generation takes place using a clustering approach based on the combination of the VV and VH time series layer stacks (see Section 3.3). In the next step, the normalised time series features are combined with the generated cluster image to produce cluster-based time series features. These features are the basis for a thresholding approach (see Section 3.4) that enables the derivation of TOW and TFV. It can be performed using pixel-based or object-based time series features, while the latter also requires the generation of the pixel-based normalized time series features. The exclusion layers, comprising the GUF and the HAND index, allow one to mask out urban areas and to consider topographical information before the extraction of time series features and the generation of objects are performed. The preprocessing is not shown in Figure 4; however, it is essential for the creation of the time series stacks. The implemented preprocessing steps are described in Section 3.1.

3.1. Image Preprocessing

An automated preprocessing of all used S-1 images was performed in a Python script using SNAP Toolbox (version 4.0.0) [56] components. It includes the radiometric calibration to sigma naught, a Range Doppler terrain correction using the SRTM digital terrain model [57], the co-registration of the individual scenes, and speckle filtering. The co-registration of the images is necessary to perform an analysis based on time series data: it is essential that the images match in their position with pixel accuracy, otherwise the backscatter values over time would give a distorted picture of the desired classes. The spatial refined Lee filter with a window size of 7 × 7 was used to reduce the noise in the SAR data [58,59]. This filter preserves the structure in the image by filtering homogeneous surfaces while preserving edges, as well as flood-related temporal characteristics and patterns. These preprocessing steps were carried out for both the VV and VH polarizations, resulting in two independent time series. The result is a multi-temporal layer stack for each polarization.
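For illustration, a minimal sketch of such a per-scene preprocessing chain is given below, assuming the SNAP Python bindings (snappy) are available; the operator names and parameters shown (Calibration, Speckle-Filter, Terrain-Correction) correspond to the steps described above, while the co-registration of the scenes is omitted for brevity.

# Sketch of the per-scene preprocessing, assuming the SNAP Python bindings (snappy).
from snappy import GPF, HashMap, ProductIO

def preprocess_scene(path):
    product = ProductIO.readProduct(path)

    # Radiometric calibration to sigma naught
    cal_params = HashMap()
    cal_params.put('outputSigmaBand', True)
    calibrated = GPF.createProduct('Calibration', cal_params, product)

    # Refined Lee speckle filtering (7 x 7 window)
    sf_params = HashMap()
    sf_params.put('filter', 'Refined Lee')
    sf_params.put('filterSizeX', 7)
    sf_params.put('filterSizeY', 7)
    filtered = GPF.createProduct('Speckle-Filter', sf_params, calibrated)

    # Range Doppler terrain correction using the SRTM DEM
    tc_params = HashMap()
    tc_params.put('demName', 'SRTM 3Sec')
    return GPF.createProduct('Terrain-Correction', tc_params, filtered)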

3.2. Derivation of Time Series Features

On the basis of the preprocessed SAR time series data, an analysis of the multi-temporal profiles was carried out to determine the characteristics and patterns of the flood-related classes (TOW and TFV) (see Section 4.1). Accordingly, the decrease and/or increase of the backscatter values in the SAR time series data at the analysed date of the flood event for each polarization and polarization ratio is essential information, which can be used to derive TOW and TFV. The extraction of flood-relevant time series characteristics is based on these multi-temporal characteristics and patterns.
The backscatter values are influenced by different environmental conditions, whereby the intensity of the backscatter decrease or increase can vary within the desired classes. Influencing conditions such as varying land-cover classes, wind, water depth, or heterogeneity of the different vegetation types and their phenological stages can cause these variations [18,60,61]. Therefore, it is not always possible to compare the increase or decrease of the absolute backscatter values over time. In order to ensure the comparability, absolute backscatter values at the analysed date of the flood event were normalized over the time series for each pixel. This is achieved by the Z-transform of backscatter values and has been implemented by the following formula:
Z = (x − µ)/σ
Hereby, x corresponds to the backscatter value at the analysed date of the flood event, µ represents the mean, and σ the standard deviation of the backscatter values of a pixel over the time series. The transformed backscatter values allow spatial comparability of the data and are referred to hereinafter as Z-Score images. Owing to this normalized description of the decrease or increase of the backscatter values, the Z-Scores of the respective polarizations, their ratio (VV/VH), and their combinations (VV + VH, VV − VH) represent the essential time series features for the derivation of TOW and TFV. The mean value, median, and standard deviation over the time series are considered as auxiliary information for the calculation of the Z-Score or as additional information in the later classification.
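As an illustration, the per-pixel Z-transform can be computed directly on the preprocessed layer stacks; the following sketch assumes the time series is available as a numpy array with shape (dates, rows, columns) and that flood_idx marks the analysed flood date.

import numpy as np

def z_score_image(stack, flood_idx):
    # Per-pixel mean and standard deviation over the time series
    mu = stack.mean(axis=0)
    sigma = stack.std(axis=0)
    # Normalised backscatter at the analysed flood date (Z-Score image)
    return (stack[flood_idx] - mu) / sigma

Applied to the VV and VH stacks as well as to the ratio and combination stacks (VV/VH, VV + VH, VV − VH), this yields the Z-Score images used as time series features.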
A quantitative analysis of the time series features was carried out using the Random Forest (RF) method to investigate their potential for deriving TOW and TFV. For this purpose, training data for the two flood-related classes and a third class representing Dry Land areas were defined based on the reference data and analysed by RF. RF is an ensemble learning method based on the construction of a large number of decision trees using training data. The RF classifier is relatively robust against outliers and noise, which is important for spatially variable SAR data [62]. Besides performing a supervised classification, the application of the RF algorithm allows one to derive the importance of different features. By determining their importance, one can identify which time series feature contributes most to the classification results or is most reliable for the derivation of TOW and/or TFV. In addition, RF was used to analyse whether a single time series feature or a combination of time series features, and which one, allows for high classification accuracy for the two target classes. As a result, redundant information is sorted out, and the information essential for the classification is identified.
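A minimal sketch of this feature-importance analysis with scikit-learn is given below; the placeholder training arrays and feature names stand in for the Z-Score features sampled at the reference-based training pixels, while the number of estimators (2000) and the maximum tree depth (500) follow the settings reported in Section 4.2.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Placeholder training data standing in for the sampled Z-Score features
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))        # one row per training pixel, one column per feature
y = rng.integers(0, 3, size=1000)     # 0 = TOW, 1 = TFV, 2 = Dry Land

feature_names = ['Z-Score VV', 'Z-Score VH', 'Z-Score VV/VH',
                 'Z-Score VV+VH', 'Z-Score VV-VH']

rf = RandomForestClassifier(n_estimators=2000, max_depth=500, random_state=0)
rf.fit(X, y)

# Relative contribution of each time series feature to the classification
for name, importance in zip(feature_names, rf.feature_importances_):
    print(f'{name}: {importance:.1%}')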

3.3. Clustering Approach for Segment Generation

The derivation of TOW and TFV was carried out based on both pixels and segments. Among other advantages, such as a reduction of speckle, the usage of segments allows for the reduction of intra-class variability, which can be caused by the high spatial level of detail of the S-1 data. Clustering represents a common method to perform image segmentation under consideration of multi-spectral and multi-temporal data [63]. Cluster algorithms have already been used as segmentation methods for the derivation of TFV [64,65,66]. The grouping of pixels into segments was implemented for the study area using the k-means clustering algorithm. k-means is an iterative, unsupervised method that assigns each pixel to a cluster using the minimum distance. This technique is widely used due to its simple implementation. In addition, k-means produces relatively high-quality clusters with low computational effort [67,68,69].
Segmentation by k-means was carried out in several steps. First, multi-temporal clustering was performed using a SAR time series data stack containing the VV and VH polarizations. This allows one to integrate the temporal component and both polarizations for the generation of clusters. Second, a spatial component was integrated by means of spatial clustering based on the multi-polarised SAR image at the analysed date of the flood event. In order to combine the multi-temporal and spatial information, both cluster images, multi-temporal and spatial, were intersected with each other.
In contrast to other segmentation methods such as multiresolution segmentation [24,41], k-means clustering requires the definition of only a single parameter: the number of clusters. In order to determine this parameter for both the multi-temporal and the spatial cluster image, a range for the number of clusters between 5 and 100 was defined, and all possible combinations of cluster numbers for multi-temporal and spatial clustering were tested. The combination of 10 multi-temporal clusters and 5 spatial clusters turned out to be the best in terms of classification accuracy and computational efficiency. Finally, the combined clusters were examined for their spatial independence and split into their spatially connected segments. The resulting segmented image was used, besides the pixel image, as a basis for the classification of TOW and TFV.
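A condensed sketch of this segmentation is given below, assuming the combined VV/VH time series stack and the two-band flood-date image are available as numpy arrays; the cluster counts correspond to the best combination found above, and scikit-learn and scipy are used purely for illustration rather than as the published implementation.

import numpy as np
from sklearn.cluster import KMeans
from scipy import ndimage

def segment_image(ts_stack, flood_img, n_temporal=10, n_spatial=5):
    n_layers, rows, cols = ts_stack.shape

    # Multi-temporal clustering: each pixel is described by its full VV/VH time series
    temporal = KMeans(n_clusters=n_temporal, n_init=10, random_state=0)
    temporal_labels = temporal.fit_predict(ts_stack.reshape(n_layers, -1).T).reshape(rows, cols)

    # Clustering of the multi-polarised image at the analysed flood date
    spatial = KMeans(n_clusters=n_spatial, n_init=10, random_state=0)
    spatial_labels = spatial.fit_predict(flood_img.reshape(flood_img.shape[0], -1).T).reshape(rows, cols)

    # Intersect both cluster images and split the result into spatially connected segments
    combined = temporal_labels * n_spatial + spatial_labels
    segments = np.zeros((rows, cols), dtype=np.int32)
    offset = 0
    for value in np.unique(combined):
        labelled, count = ndimage.label(combined == value)
        segments[labelled > 0] = labelled[labelled > 0] + offset
        offset += count
    return segments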

3.4. Hierarchical Thresholding Approach

The last step in the classification process chain (Figure 4) is the hierarchical thresholding approach, which is divided into three consecutive steps. First of all, the permanent open water surfaces are identified using the SWBD mask (see Section 2) and excluded from further analysis. In addition, the GUF and HAND masks (see Section 2) were used as exclusion layers before the next step. The other two steps are applied to the remaining unclassified image elements using the derived time series features (see Section 3.2). The threshold values for the corresponding time series features are automatically generated by a decision tree classifier [62] using reference-based training data (see Section 3.2). The automatic threshold definition represents a great advantage, since manually determining thresholds, especially for the separation of TFV from other classes, can be a complex process [6]. With the aid of the aforementioned time series features and corresponding threshold values, the class TOW was derived first. Analogously, the remaining unclassified image elements were separated into TFV and Dry Land in the last step.
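The following sketch illustrates this hierarchy under simplifying assumptions: a depth-one decision tree supplies one threshold per step, z_vv and z_vv_vh denote the Z-Score VV and Z-Score VV/VH images, and excluded marks permanent water, GUF, and HAND pixels; the direction of each threshold test reflects the backscatter behaviour discussed in Section 4 and is not the published implementation.

import numpy as np
from sklearn.tree import DecisionTreeClassifier

def derive_threshold(samples, labels):
    # Fit a depth-1 decision tree on one feature and return its split value
    tree = DecisionTreeClassifier(max_depth=1).fit(samples.reshape(-1, 1), labels)
    return tree.tree_.threshold[0]

def hierarchical_classification(z_vv, z_vv_vh, excluded, thr_tow, thr_tfv):
    # 0 = excluded / permanent water, 1 = TOW, 2 = TFV, 3 = Dry Land
    result = np.zeros(z_vv.shape, dtype=np.uint8)
    remaining = ~excluded

    # Step 2: strong decrease of Z-Score VV at the flood date -> temporary open water
    tow = remaining & (z_vv < thr_tow)
    result[tow] = 1
    remaining &= ~tow

    # Step 3: strong increase of Z-Score VV/VH -> temporary flooded vegetation
    tfv = remaining & (z_vv_vh > thr_tfv)
    result[tfv] = 2
    result[remaining & ~tfv] = 3
    return result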

4. Results

4.1. Multi-Temporal Characteristics and Patterns of Backscatter Intensities

The analysis of multi-temporal characteristics and patterns was performed based on the preprocessed SAR time series data (see Section 3.1). For this purpose, pixels of the time series layer stacks were combined into objects using the clustering methods (see Section 3.3). Objects are less susceptible to noise than pixels, and they can be used to create spatial contextual time series profiles, which allow one to derive object-related characteristics and patterns. Examples of object-based time series profiles for TOW and TFV, which were identified from the training data set acquired on 8 April 2017, are shown in Figure 5 and Figure 6. These objects contain 1148 pixels for TOW and 1054 pixels for TFV, respectively. The box plots represent the range of the backscatter values within the segment for each date. The time series are shown for the two single polarizations VV (Figure 5a and Figure 6a) and VH (Figure 5b and Figure 6b), and for their ratio VV/VH (Figure 5c and Figure 6c) for the same segment. In addition, the multi-temporal behaviour of the NDVI and NDWI values for the same time period is displayed in Figure 5d and Figure 6d and in Figure 5e and Figure 6e, which serve as a comparison to the SAR time series data. These indices are not integrated into the methodology as a data set. The analysed flood event image (6 April 2017) is indicated by a blue bar.
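Such an object-based profile can be produced, for example, by collecting the backscatter values of all pixels of one segment per acquisition date and displaying them as box plots; the sketch below assumes a numpy time series stack, the segment image from Section 3.3, and a list of acquisition dates as datetime objects, and uses matplotlib purely for illustration.

import matplotlib.pyplot as plt

def plot_segment_profile(stack, segments, segment_id, dates):
    # Backscatter values of the segment for each acquisition date
    per_date = [stack[t][segments == segment_id] for t in range(stack.shape[0])]

    fig, ax = plt.subplots(figsize=(10, 4))
    ax.boxplot(per_date, labels=[d.strftime('%Y-%m-%d') for d in dates])
    ax.set_xlabel('Acquisition date')
    ax.set_ylabel('Backscatter [dB]')
    ax.tick_params(axis='x', rotation=90)
    fig.tight_layout()
    plt.show()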
Figure 5 shows the multi-temporal characteristics and patterns of the backscatter values for TOW. Compared to all other dates in the time series, a significant decrease of the backscatter values can be observed at the analysed date during the flood event (blue bar) (Figure 5a,b). The ratio of the two polarizations (Figure 5c) shows no change at the analysed date of the flood event. This indicates a similar or equal change in both polarizations. As a comparison to the SAR time series data, NDVI values were used, which were derived from the S-2 data sets. Despite the cloud-related data gap between November 2016 and March 2017, there is a comparable reduction in the NDVI values for the analysed date at the flood event. In combination with the decrease of the backscatter values, this confirms the occurrence of water at the analysed date. In addition, the NDWI shows an increase in the values at the date of flooding, which indicates the occurrence of water.
Figure 6 shows an example of multi-temporal characteristics and patterns for TFV areas. Compared to TOW, TFV is characterized by an increase in the backscatter values at the analysed date in VV polarization. In contrast to the increase of the backscatter values in VV polarisation, a decrease of the backscatter values occurs in VH polarisation during the flood event (Figure 6b). The ratio between these two polarizations shows an increase for TFV at the analysed date (Figure 6c). As a comparison to the SAR time series data, NDVI values for TFV were derived from the S-2 data and are displayed for the same period (Figure 6d). At the analysed date, there is only a very slight change in the NDVI values. In addition, NDWI shows a slight increase in values, which indicates the occurrence of water in the vegetated areas. Nevertheless, the NDWI values remain in the negative range, and it seems that there are no open water areas but a mixture of standing water and vegetation. Accordingly, it can be confirmed that the increase in backscatter values in VV polarization is caused by flooding and not by any phenological changes. Overall, the increase in backscatter values for TFV in VV polarization is significantly lower compared to the decrease in the backscatter values for TOW areas.

4.2. Relevant Time Series Features

Relevant time series features were determined for the derivation of the flood-related classes based on training data, which were taken from the reference data. The total number of training pixels for the three classes is 15,559 (TOW), 15,563 (TFV), and 11,550 (Dry Land). The determination of the importance of the time series features for each class was implemented with RF (see Section 3.2). The analysed time series features include the Z-Score for the VV and VH polarizations, the Z-Score for the ratio between VV and VH, and the two Z-Score combinations VV + VH and VV − VH.
The training data were used to visualize the distribution and separability of the classes with respect to the derived time series features in histograms. Figure 7 shows the histogram distributions of the training data for each aforementioned class and time series feature. The separability between TOW and the other classes is clearly recognizable for Z-Score VV and Z-Score VV + VH. The overlap between the histograms of the classes TOW and Dry Land is strongly pronounced for Z-Score VV + VH and Z-Score VV/VH. In addition, there is an overlap with TFV when using Z-Score VH. The separability of TFV from the other classes can be observed for Z-Score VV, Z-Score VV − VH, and Z-Score VV/VH. In particular, Z-Score VV/VH appears to be a suitable time series feature to distinguish TFV from the other classes.
The RF algorithm was used to quantify the importance of the time series features for the TOW and TFV classes (see Section 3.2). Thereby, the contribution of each time series feature to the derivation of the respective target class was determined. Taking into account the training data, 2000 estimators and a maximum tree depth of 500 were defined to identify the importance; these parameters were chosen to minimize the influence of randomness on the results. Figure 8 shows the importance of the time series features for the classes TOW and TFV. It can be observed that, for TOW, Z-Score VV and Z-Score VV + VH are the time series features with the highest contributions of 34.9% and 34.4%, respectively. The time series feature with the highest contribution (43.1%) for the TFV class is Z-Score VV/VH.
The time series feature with the highest contribution does not necessarily yield the highest possible classification accuracy. Therefore, it was analysed whether a single feature or a combination of multiple features results in the highest possible classification accuracy for the respective target class. For this purpose, a Random Forest classification was performed based on the aforementioned training data. The use of the time series feature with the highest contribution, and of the two features with the highest contributions, resulted in an overall accuracy (OA) of approximately 98.0% for the TOW class. In comparison, the combination of all time series features resulted in only 93.1%. Equivalently, for the TFV class, the time series feature with the highest contribution achieved an OA of 98.2%, whereas the combination of two or of all time series features resulted in lower accuracies (97.5% and 96.2%).
The RF feature importance shows that Z-Score VV is the most reliable time series feature and the feature with the highest contribution for the extraction of TOW. For the extraction of TFV, Z-Score VV/VH is the most reliable time series feature and the feature with the highest contribution. The analysis of the OA with the RF algorithm also showed that, for the classes TOW and TFV, the combination of all time series features is less accurate than the classification based on the time series feature with the highest contribution. Based on these findings, the time series features Z-Score VV and Z-Score VV/VH were used to derive the classes TOW and TFV by means of the time series approach.

4.3. Classification Results

The classification of the S-1 image (6 April 2017) at the date of the flood event was performed based on an S-1 time series layer stack for the period between 2 September 2016 and 23 July 2017. The validation of the classification was performed using the S-2-based reference flood mask.
Figure 9a shows the pixel-based classification result of the time series approach for the investigated area in Namibia, which includes the classes permanent open water, TOW, TFV, and Dry Land, while Figure 9b represents the object-based classification result. For visual comparison, the validation mask is shown in Figure 2b. A visual comparison of the two classification images with the validation mask reveals the clear similarity of the area extent for each class. As expected, the pixel-based classification appears to be slightly noisier in comparison to the object-based classification.
Figure 10 shows the intersection between the validation data and the results of the pixel-based (a) and object-based (b) classification, respectively. Areas of correspondence, as well as misclassified areas, can be identified from this intersection. The orange areas, which are marked as TFV in the classification but represent Dry Land in the validation data, can be visually identified as the largest misclassification areas. Another significant misclassification is indicated by the yellow areas. These areas represent the Dry Land class in the classification image and the TFV class in the validation data, and they are slightly larger in the pixel-based classification compared to the object-based classification. Also recognisable are bright blue areas, which can be found in the transition zone between Dry Land and TOW. While these areas represent Dry Land in the reference data, they are classified as TOW. The visual comparison between the pixel- and object-based classification shows that the extent of the bright blue areas is similar in both classifications; this is also the case for all other remaining small misclassified areas. A further difference between the pixel- and object-based classification can be observed in the red areas, which represent Dry Land in the classification and TOW in the validation data. Overall, the comparison between the pixel- and object-based classification shows that fewer misclassified areas occur in the object-based classification.
The quantification of the classification accuracy of TOW, TFV, and Dry Land was performed using the OA, the producer accuracy (PA), the user accuracy (UA), and the Kappa index (K). The accuracies for the pixel-based and object-based classification are shown in Figure 11. The accuracy values (UA and PA) for the TOW class are 85.1% and 82.6% for the pixel-based classification, and 85.8% (UA) and 85.5% (PA) for the object-based classification, respectively. In comparison, the accuracy of the class TFV is lower, with 67.9% (UA) and 86.32% (PA) for the pixel-based classification, and 76.1% (UA) and 91.2% (PA) for the object-based classification. For the class Dry Land, the UA and PA are 74.8% and 59.0% for the pixel-based classification, and 80.8% (UA) and 67.0% (PA) for the object-based classification, respectively. The improvement was mainly achieved for the UA of TFV (by 8.1%) and the PA of Dry Land (by 8.0%). Overall, the OA of the object-based classification is about 5.0% higher compared to the pixel-based classification. The Kappa index is also 0.08 higher for the object-based classification. In addition, a confidence interval was calculated for each of the accuracy values [70]. For the pixel-based classification, the largest confidence interval, 0.38%, occurs for the Dry Land producer accuracy. For the object-based classification, the largest confidence interval is 0.37% for the same class and accuracy measure.
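For reference, these measures can be derived from a class-by-class confusion matrix; the sketch below uses scikit-learn with placeholder label arrays standing in for the reference and classified pixels of the validation extent, and the confidence interval shown is a simple normal approximation, which may differ from the formulation of [70].

import numpy as np
from sklearn.metrics import confusion_matrix, cohen_kappa_score

# Placeholder labels standing in for the validation extent (0 = TOW, 1 = TFV, 2 = Dry Land)
rng = np.random.default_rng(0)
reference = rng.integers(0, 3, size=10000)
classified = rng.integers(0, 3, size=10000)

cm = confusion_matrix(reference, classified)   # rows: reference, columns: classification
oa = np.trace(cm) / cm.sum()                   # overall accuracy
pa = np.diag(cm) / cm.sum(axis=1)              # producer accuracy per class
ua = np.diag(cm) / cm.sum(axis=0)              # user accuracy per class
kappa = cohen_kappa_score(reference, classified)

# 95% confidence interval of the OA (normal approximation)
ci = 1.96 * np.sqrt(oa * (1 - oa) / cm.sum())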
In addition to the classification of the validation extent, the classification of the entire study area was generated based on the same time series features and the corresponding threshold values for the classes TOW and TFV. Figure 12a shows the pixel-based and Figure 12b the object-based classification. A difference between the two classifications can be observed: the pixel-based classification appears noisier, particularly in the Dry Land areas, and shows a slightly higher overestimation of all classes, especially Dry Land. Both figures show that disregarding the TFV areas would result in an enormous underestimation of the flood extent.

4.4. Improvement of the Sentinel-1 Flood Service

Figure 13a shows the classification result of the S-1FS (see Section 1) for the validation area in Namibia, which initially only contains the open flood water areas [47]. For the extension and improvement of the S-1FS, the open flood areas were supplemented by TOW and TFV of the time series approach, in order to detect the entire flood extent. The object-based classification result was used as a supplement, since higher accuracy could be achieved compared to pixel-based classification results.
An accuracy assessment was carried out both for the classification results of the S-1FS and for the supplemented classification results of the S-1FS. The classes TFV and TOW, both in the validation data and in the supplemented classification of the S-1FS, were combined into a single class, flood, which represents the entire flood extent. The modified reference data are shown in Figure 13d. The merging of the classes TOW and TFV provides a more comprehensive coverage of the flood extent in the reference data compared to the results of the S-1FS (Figure 13a). Figure 13b shows the classification result of the S-1FS supplemented by TOW, and Figure 13c shows the S-1FS classification result supplemented by both TOW and TFV of the time series approach. The validation of the S-1FS and the two supplemented S-1FS classification results was performed using the OA, PA, UA, and Kappa index (Figure 14). A significant improvement in classification accuracy is observed for the PA of Flood and the UA of Dry Land, which improve by 57.1% and 31.0%, respectively. This improvement is mainly achieved by supplementing the S-1FS classification results with the TFV areas. Thereby, the OA increases by 27.0% and the Kappa coefficient increases from 0.24 to 0.69.

4.5. Number of Images

Besides the sensor characteristics and environmental conditions, the number of images used for the developed time series approach is another important element that can influence the accuracy of the classification result. A total of 25 S-1 images were used to derive the flood-related classes for the study area in Namibia. The time series was used to obtain the statistical multi-temporal distribution of the backscatter values for the flood-related classes (see Section 3.2). It was therefore analysed whether a lower number of images in the time series has an effect on the classification results. Starting from the original classification based on the full set of 24 additional images, the number of images in the time series was reduced in increments of one, whereby the analysed flood date image was included in each classification run. Figure 15 shows the OAs and Kappa coefficients for the 24 classification results, each based on a time series with a different number of images. In addition, a confidence interval was calculated for each OA. The use of a single image together with the image taken at the time of the flood results in the lowest accuracy; the same classification also resulted in the largest confidence interval of 0.19%. Overall, it can be observed that the fewer images are used in the time series, the lower the OA and Kappa coefficient.
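The experiment can be sketched as a simple loop over successively shorter time series; the helper run_classification below is hypothetical and stands for the full chain of feature extraction, thresholding, and validation, and the choice of dropping the oldest non-flood acquisitions first is an assumption rather than the published protocol.

def image_count_experiment(full_stack, flood_idx, run_classification):
    # full_stack: numpy array (dates, rows, cols) with all 25 acquisitions
    # run_classification(stack, flood_pos): hypothetical wrapper returning (OA, Kappa)
    other_idx = [i for i in range(full_stack.shape[0]) if i != flood_idx]
    results = []
    for n_other in range(len(other_idx), 0, -1):
        # Keep the flood scene plus the n_other most recent remaining acquisitions (assumption)
        keep = sorted(other_idx[-n_other:] + [flood_idx])
        oa, kappa = run_classification(full_stack[keep], keep.index(flood_idx))
        results.append((n_other + 1, oa, kappa))
    return results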

5. Discussion

5.1. Multi-Temporal Characteristics and Time Series Features

The analysed characteristics and patterns of the SAR time series data clearly show a decrease in backscatter values for TOW and an increase in backscatter values for TFV at the analysed date of the flood event (Figure 5 and Figure 6). The decrease indicates the occurrence of open water surfaces, which are characterized by low backscatter values because the emitted sensor energy is reflected away by the specular water surface [71]. The increase can be explained by the double- or multi-bounce interactions between the specular water surface and the vertical structures of the vegetation [9,10,11]. Many previous studies describe similar behaviour of the backscatter values for the classes TOW and TFV [11,20,25]. These characteristics and patterns in the backscatter values are a prerequisite for the detection of the flood-related classes. The duration of the decrease or increase of the backscatter values depends on the duration of the flood, its spatial extent, and the revisit time of the satellite.
It should also be noted that backscatter values can be strongly influenced by the environmental conditions (see Section 4.2). For example, wind or heavy rainfall can roughen the water surface and reduce or even erase the typical decrease in backscatter values during the flood event [4,9]. The increase in backscatter at the date of the flood can also be strongly influenced by the interaction between sensor characteristics (wavelength, angle of incidence, and polarisation) [6] and environmental conditions, such as aboveground biomass [72,73,74] or the relation between water level and plant height [10]. Such external environmental conditions can have a limiting impact on the methodology presented here.
In previous studies, absolute backscatter values are applied for the extraction of flood-related classes [10,37]. This can lead to an underestimation or overestimation of the TFV areas and limits the use of the method. These disadvantages can be compensated by the normalisation of the backscatter values over the time series, which ensures the comparability of the increase or decrease in backscatter values for each image element independent of the different types of phenological development of the vegetation. Moreover, the developed method enables simple handling of SAR time series data without the use of extensive methods to describe the seasonality or the typical vegetation conditions. In comparison to other studies [19,37], which depend on the sequential order of the multi-temporal SAR data, the technique presented in this paper can deal with non-sequential or irregular time series data. Sentinel-1 scenes acquired within a period of two years containing the flood event made it possible to distinguish the natural range of variation of the analysed vegetation from the changes induced by a flood event.
The decrease and the increase of the backscatter values in the SAR time series data were used for the extraction of the time series features. For this purpose, not only the polarizations VV and VH were applied but also their combinations. By combining polarizations, effects such as the different sensitivities of the polarisations to objects can be intensified, e.g., by the mathematically induced stretching of the data [75,76,77]. The intensity of this amplification was quantified using the RF method (Figure 8). For the extraction of TOW, Z-Score VV was determined by the RF classifier as the most reliable time series feature, having the highest contribution in comparison to the VH-based time series features. While this finding matches the results of other studies [4,47], some studies [25,37] prefer VH as a basis for the classification. This ambivalence can probably be explained by the different sensor characteristics and environmental conditions in the studies.
The time series feature Z-Score VV/VH proved to be the most suitable for the derivation of TFV. This can be explained by the different sensitivities of VV and VH to the backscattering mechanisms [78], which cause the backscatter values to increase for VV and decrease for VH. In the case of TFV, the increase of the backscatter values can usually be detected in VV polarization due to the double-bounce effect. The double-bounce effect cannot be detected by VH polarization because of its depolarizing property, so an increase of the backscatter values is not expected in VH. Through the difference or ratio between these two polarisations, the increase in VV is intensified, allowing a more effective derivation of TFV compared to the simple use of VV polarisation.
In the example of Namibia, VV seems to be influenced by the double-bounce-effect indicating the occurrence of TFV [60,79], whereby VH is not influenced by the interaction between water surface and vegetation. Instead, VH is dominated by the specular reflection because of flooded soil in between the vegetation, which consequently results in a decrease in backscatter values. According to the difference in the backscatter between VV and VH for the analysed flood date, the time series feature Z-Score VV/VH was successfully used for the extraction of the TFV.

5.2. Classification Results

The comparison of the time series approach results with the reference mask provides good correspondence of 75.0% (OA, pixel-based) and 80.5% (OA, object-based) for the whole flood extent, comprising both classes TOW and TFV. Cazals et al. [37] achieved a similar OA (82.0%) using backscatter intensities of S-1 time series data for the detection of flooded areas, comprising TOW and TFV. However, a sequential time series is necessary for this method. Furthermore, the comparability of the validation results is limited by the differences between the study areas, including the vegetation types, the sizes of the individual classes, and the ratio in size between these classes and environmental conditions.
In the following, the classification results are discussed in comparison to the validation image (Figure 10). It can be observed that the areas of the correctly classified classes (TOW = blue, TFV = green, Dry Land = brown) predominate; however, misclassifications are present. The misclassifications occur mostly for TFV in the classification image, which is marked as Dry Land in the validation image. In these areas, the VV and VH polarisations differ significantly during the flood event, so the time series feature Z-Score VV/VH leads to a classification as TFV; nevertheless, these areas are marked as Dry Land in the validation image. This contradiction can be explained by the fact that the interpretation of optical data with respect to TFV is challenging, and water under the vegetation cover cannot always be clearly identified in the validation image. In addition, the temporal shift between the flood image (6 April 2017) and the validation image (8 April 2017) might also be a reason for the aforementioned misclassifications. Even though no change in the flood extent could be observed, a receding flood extent (Figure 5) until the date of the S-2 acquisition is possible. Thus, the misclassified areas can indeed represent Dry Land; however, they can also constitute TFV areas in the S-1 flood image. Another significant misclassification is represented by the yellow areas. For these areas, the interaction between the water surface and the vegetation may not be strong enough to cause an increase in the backscatter values in VV polarisation [10,80,81]. Accordingly, there is no increase in the Z-Score VV/VH time series feature, so these areas were classified as Dry Land. The light blue areas represent confusion between TOW in the classification results and Dry Land in the validation image. These misclassifications may be caused by an insufficient decrease of the backscatter values in VV polarisation at the analysed date of the flood event, which could not be captured by the generated threshold value. The transitional zones between TOW and dry areas have always been challenging to detect [82] because of the rapidly changing conditions during a flood event. Despite the misclassifications, the results show that accuracies of 85.8% (UA) and 85.5% (PA) for TOW and 76.1% (UA) and 91.2% (PA) for TFV could be achieved.
In addition, it was shown that the object-based classification, compared to the pixel-based classification, can help to detect the areas affected by the flood more precisely. This can be explained by the fact that the heterogeneity of the TFV in the S-1 imagery is reduced by grouping pixels into objects, providing a depiction of real objects on the ground. In the TOW areas, misclassifications as Dry Land could also be partially reduced; these may have been caused by environmental conditions such as wind, which only affected a few pixels in an object. It also seems that speckle in the SAR data could be reduced by using objects for the classification (Figure 10). However, it should be noted that an additional step to calculate the objects must be performed. The results of previous studies also confirm that object-based classification reaches higher accuracies in comparison to pixel-based classifications [27,41].
The validation of the S-1FS results, which were supplemented with the results of the developed SAR time series approach, shows that this addition led to an improvement of the classification results of the S-1FS. Although the results of the fully automatic near-real time S-1FS have been significantly improved, it should be noted that the developed SAR time series approach requires training data for its initialization and that several images (a time series) are used to derive the TFV. Thereby, the preprocessing is more time-consuming compared to the S-1FS and depends on the number of images used in the time series, the classification basis applied (pixel- or object-based), and the time needed for the extraction of training data by an expert. Apart from the above-mentioned steps, the actual classification process only takes a few seconds. The minimum number of data sets in the time series depends on the duration of the flood and the number of S-1 images that were acquired during the flood. The fewer non-flooded dates used in the time series, the lower the classification accuracy. If available, it is recommended to include at least one vegetation cycle in order to consider the entire seasonality or phenology. Thus, the statistical distribution of the backscatter values over the entire vegetation cycle is considered [19]. Considering the above-mentioned prerequisites for the SAR time series approach, it was possible to achieve a significant improvement in the classification accuracy of the S-1FS for the PA of Flood and the UA of Dry Land. This was accomplished by supplementing the TOW of the S-1FS with the TFV areas (Figure 13).

6. Conclusions

While most methods are aimed at the detection of temporary open water (TOW) areas, the developed method especially focuses on the detection of temporary flooded vegetation (TFV), which allows the detection of flood areas without underestimating the flood extent. The developed method is based on Sentinel-1 C-band time series data and on auxiliary information on permanent water, topography, and urban areas, and it uses a hierarchical thresholding approach to combine this information with the backscatter intensity features. This approach was used to map a flood event that occurred in spring 2017 in the Chobe-Sambesi flood plain in Namibia.
Normalized time series features were derived based on VV and VH polarisation and their combinations. The time series feature using VV polarisation performed better for the derivation of TOW than the combination of all derived time series features. The time series feature using the ratio between VV and VH performed best for the derivation of TFV. This demonstrates that both polarizations, and especially their combination, are relevant for the detection of TFV. The classification results showed that the developed SAR time series approach is well suited to map floods in vegetated areas. By supplementing the TOW areas with the TFV areas, the accuracy of the classification results was significantly improved, and the entire flood extent could be detected. This highlights the importance of extracting TFV areas in addition to TOW for floodplain monitoring.
In the developed approach, urban areas were not included; however, these will be integrated in the future as a refinement of the methodology. The developed method can be used to monitor future long-term flooding and flood dynamics. Furthermore, the classification results can serve as supplementary information in the evaluation of ground observations or as input to hydrological models. In combination with the method developed here, the growing Sentinel-1 archive will be used in the future to analyse different TFV types, aiming at the continued improvement of the derivation of the entire inundation area during flood events.

Author Contributions

V.T. designed the structure and contents of this article, acquired the data, carried out the analysis and interpretation, developed the approach, and wrote all sections of this paper. The co-authors contributed to the reflection of the ideas, the critical review of the methodology, the structure of the paper, and the review of orthographic and grammatical correctness. The sequence of authors reflects their level of contribution.

Funding

This research was funded by the Federal Ministry for Economic Affairs and Energy (BMWi), grant number 50 EE1338.

Acknowledgments

The authors would like to thank the anonymous reviewers for their helpful comments and constructive suggestions.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Smith, D.I. Flood Damage Estimation—A Review of Urban Stage Damage Curves and Loss Functions. Water SA 1994, 20, 231–238. [Google Scholar]
  2. Moel, H.D.; van Alphen, J.; Aerts, J.C.J.H. Flood maps in Europe–methods, availability and use. Nat. Hazards Earth Syst. Sci. 2009, 9, 289–301. [Google Scholar] [CrossRef] [Green Version]
  3. Li, Y.; Martinis, S.; Plank, S.; Ludwig, R. An automatic change detection approach for rapid flood mapping in Sentinel-1 SAR data. Int. J. Appl. Earth Obs. Geoinf. 2018, 73, 123–135. [Google Scholar] [CrossRef]
  4. Clement, M.A.; Kilsby, C.G.; Moore, P. Multi-temporal synthetic aperture radar flood mapping using change detection. J. Flood Risk Manag. 2017, 39, 130. [Google Scholar] [CrossRef]
  5. Dasgupta, A.; Grimaldi, S.; Ramsankaran, R.A.A.J.; Pauwels, V.R.N.; Walker, J.P. Towards operational SAR-based flood mapping using neuro-fuzzy texture-based approaches. Remote Sens. Environ. 2018, 215, 313–329. [Google Scholar] [CrossRef]
  6. Tsyganskaya, V.; Martinis, S.; Marzahn, P.; Ludwig, R. SAR-based detection of flooded vegetation—A review of characteristics and approaches. Int. J. Remote Sens. 2018, 39, 2255–2293. [Google Scholar] [CrossRef]
  7. Betbeder, J.; Rapinel, S.; Corpetti, T.; Pottier, E.; Corgne, S.; Hubert-Moy, L. Multitemporal Classification of TerraSAR-X Data for Wetland Vegetation Mapping. J. Appl. Remote Sens. 2014, 8, 83648. [Google Scholar] [CrossRef]
  8. Klemas, V. Remote Sensing of Emergent and Submerged Wetlands: An Overview. Int. J. Remote Sens. 2013, 34, 6286–6320. [Google Scholar] [CrossRef]
  9. Moser, L.; Schmitt, A.; Wendleder, A.; Roth, A. Monitoring of the Lac Bam Wetland Extent Using Dual-Polarized X-Band SAR Data. Remote Sens. 2016, 8, 302. [Google Scholar] [CrossRef]
  10. Pulvirenti, L.; Chini, M.; Pierdicca, N.; Guerriero, L.; Ferrazzoli, P. Flood Monitoring using Multi-Temporal COSMO-SkyMed Data: Image segmentation and signature interpretation. Remote Sens. Environ. 2011, 115, 990–1002. [Google Scholar] [CrossRef]
  11. Pulvirenti, L.; Pierdicca, N.; Chini, M.; Guerriero, L. Monitoring Flood Evolution in Vegetated Areas Using COSMO-SkyMed Data: The Tuscany 2009 Case Study. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2012, 6, 1807–1816. [Google Scholar] [CrossRef]
  12. Chapman, B.; McDonald, K.; Shimada, M.; Rosenqvist, A.; Schroeder, R.; Hess, L. Mapping Regional Inundation with Spaceborne L-Band SAR. Remote Sens. 2015, 7, 5440–5470. [Google Scholar] [CrossRef] [Green Version]
  13. Voormansik, K.; Praks, J.; Antropov, O.; Jagomagi, J.; Zalite, K. Flood Mapping with TerraSAR-X in Forested Regions in Estonia. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2014, 7, 562–577. [Google Scholar] [CrossRef]
  14. Martinis, S.; Twele, A. A Hierarchical Spatio-Temporal Markov Model for Improved Flood Mapping Using Multi-Temporal X-Band SAR Data. Remote Sens. 2010, 2, 2240–2258. [Google Scholar] [CrossRef] [Green Version]
  15. Pulvirenti, L.; Pierdicca, N.; Chini, M.; Guerriero, L. An Algorithm for Operational Flood Mapping from Synthetic Aperture Radar (SAR) Data using Fuzzy Logic. Nat. Hazards Earth Syst. Sci. 2011, 11, 529–540. [Google Scholar] [CrossRef] [Green Version]
  16. Brisco, B.; Schmitt, A.; Murnaghan, K.; Kaya, S.; Roth, A. SAR Polarimetric Change Detection for Flooded Vegetation. Int. J. Digit. Earth 2011, 6, 103–114. [Google Scholar] [CrossRef]
  17. Li, J.; Chen, W. A rule-based method for mapping Canada’s wetlands using optical, radar and DEM data. Int. J. Remote Sens. 2005, 26, 5051–5069. [Google Scholar] [CrossRef]
  18. Hess, L.L.; Melack, J.M. Remote Sensing of Vegetation and Flooding on Magela Creek Floodplain (Northern Territory, Australia) with the SIR-C Synthetic Aperture Radar. Hydrobiologia 2003, 500, 65–82. [Google Scholar] [CrossRef]
  19. Schlaffer, S.; Chini, M.; Dettmering, D.; Wagner, W. Mapping Wetlands in Zambia Using Seasonal Backscatter Signatures Derived from ENVISAT ASAR Time Series. Remote Sens. 2016, 8, 402. [Google Scholar] [CrossRef]
  20. Martinez, J.; Le Toan, T. Mapping of Flood Dynamics and Spatial Distribution of Vegetation in the Amazon Floodplain using Multitemporal SAR Data. Remote Sens. Environ. 2007, 108, 209–223. [Google Scholar] [CrossRef]
  21. Evans, T.L.; Costa, M.; Tomas, W.M.; Camilo, A.R. Large-Scale Habitat Mapping of the Brazilian Pantanal Wetland: A synthetic aperture radar approach. Remote Sens. Environ. 2014, 155, 89–108. [Google Scholar] [CrossRef]
  22. Hess, L.L.; Melack, J.M.; Affonso, A.G.; Barbosa, C.; Gastil-Buhl, M.; Novo, E.M.L.M. Wetlands of the Lowland Amazon Basin: Extent, Vegetative Cover, and Dual-season Inundated Area as Mapped with JERS-1 Synthetic Aperture Radar. Off. Sch. J. Soc. Wetland Sci. 2015, 35, 745–756. [Google Scholar] [CrossRef]
  23. Bourgeau-Chavez, L.; Lee, Y.; Battaglia, M.; Endres, S.; Laubach, Z.; Scarbrough, K. Identification of Woodland Vernal Pools with Seasonal Change PALSAR Data for Habitat Conservation. Remote Sens. 2016, 8, 490. [Google Scholar] [CrossRef]
  24. Robertson, L.D.; King, D.J.; Davies, C. Object-Based Image Analysis of Optical and Radar Variables for Wetland Evaluation. Int. J. Remote Sens. 2015, 36, 5811–5841. [Google Scholar] [CrossRef]
  25. Zhao, L.; Yang, J.; Li, P.; Zhang, L. Seasonal inundation monitoring and vegetation pattern mapping of the Erguna floodplain by means of a RADARSAT-2 fully polarimetric time series. Remote Sens. Environ. 2014, 152, 426–440. [Google Scholar] [CrossRef]
  26. Pierdicca, N.; Chini, M.; Pulvirenti, L.; Macina, F. Integrating Physical and Topographic Information Into a Fuzzy Scheme to Map Flooded Area by SAR. Sensors 2008, 8, 4151–4164. [Google Scholar] [CrossRef] [Green Version]
  27. Chen, Y.; He, X.; Wang, J.; Xiao, R. The Influence of Polarimetric Parameters and an Object-Based Approach on Land Cover Classification in Coastal Wetlands. Remote Sens. 2014, 6, 12575–12592. [Google Scholar] [CrossRef] [Green Version]
  28. Morandeira, N.; Grings, F.; Facchinetti, C.; Kandus, P. Mapping Plant Functional Types in Floodplain Wetlands: An Analysis of C-Band Polarimetric SAR Data from RADARSAT-2. Remote Sens. 2016, 8, 174. [Google Scholar] [CrossRef]
  29. Plank, S.; Jüssi, M.; Martinis, S.; Twele, A. Mapping of flooded vegetation by means of polarimetric Sentinel-1 and ALOS-2/PALSAR-2 imagery. Int. J. Remote Sens. 2017, 38, 3831–3850. [Google Scholar] [CrossRef]
  30. Arnesen, A.S.; Silva, T.S.F.; Hess, L.L.; Novo, E.M.L.M.; Rudorff, C.M.; Chapman, B.D.; McDonald, K.C. Monitoring flood extent in the lower Amazon River floodplain using ALOS/PALSAR ScanSAR images. Remote Sens. Environ. 2013, 130, 51–61. [Google Scholar] [CrossRef]
  31. Melack, J.M.; Wang, Y. Delineation of flooded area and flooded vegetation in Balbina Reservoir (Amazonas, Brazil) with synthetic aperture radar. J. SIL Proc. 1998, 26, 2374–2377. [Google Scholar] [CrossRef]
  32. Frappart, F.; Seyler, F.; Martinez, J.-M.; León, J.G.; Cazenave, A. Floodplain water storage in the Negro River basin estimated from microwave remote sensing of inundation area and water levels. Remote Sens. Environ. 2005, 99, 387–399. [Google Scholar] [CrossRef] [Green Version]
  33. Long, S.; Fatoyinbo, T.E.; Policelli, F. Flood Extent Mapping for Namibia using Change Detection and Thresholding with SAR. Environ. Res. Lett. 2014, 3, 1–9. [Google Scholar] [CrossRef]
  34. Pulvirenti, L.; Pierdicca, N.; Chini, M. Analysis of COSMO-SkyMed observations of the 2008 flood in Myanmar. Ital. J. Remote Sens. 2010, 42, 79–90. [Google Scholar] [CrossRef]
  35. Pulvirenti, L.; Chini, M.; Pierdicca, N.; Boni, G. Use of SAR Data for Detecting Floodwater in Urban and Agricultural Areas: The Role of the Interferometric Coherence. IEEE Trans. Geosci. Remote Sens. 2016, 54, 1532–1544. [Google Scholar] [CrossRef]
  36. Martinis, S.; Rieke, C. Backscatter Analysis Using Multi-Temporal and Multi-Frequency SAR Data in the Context of Flood Mapping at River Saale, Germany. Remote Sens. 2015, 7, 7732–7752. [Google Scholar] [CrossRef] [Green Version]
  37. Cazals, C.; Rapinel, S.; Frison, P.-L.; Bonis, A.; Mercier, G.; Mallet, C.; Corgne, S.; Rudant, J.-P. Mapping and Characterization of Hydrological Dynamics in a Coastal Marsh Using High Temporal Resolution Sentinel-1A Images. Remote Sens. 2016, 8, 570. [Google Scholar] [CrossRef]
  38. Costa, M.P.F. Use of SAR Satellites for Mapping Zonation of Vegetation Communities in the Amazon Floodplain. Int. J. Remote Sens. 2004, 25, 1817–1835. [Google Scholar] [CrossRef]
  39. Evans, T.L.; Costa, M.; Telmer, K.; Silva, T.S.F. Using ALOS/PALSAR and RADARSAT-2 to Map Land Cover and Seasonal Inundation in the Brazilian Pantanal. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2010, 3, 560–575. [Google Scholar] [CrossRef]
  40. Hess, L. Dual-Season Mapping of Wetland Inundation and Vegetation for the Central Amazon Basin. Remote Sens. Environ. 2003, 87, 404–428. [Google Scholar] [CrossRef]
  41. Na, X.D.; Zang, S.Y.; Wu, C.S.; Li, W.L. Mapping Forested Wetlands in the Great Zhan River Basin through Integrating Optical, Radar, and Topographical Data Classification Techniques. Environ. Monit. Assess. 2015, 187, 696. [Google Scholar] [CrossRef] [PubMed]
  42. Maillard, P.; Alencar-Silva, T.; Clausi, D.A. An Evaluation of Radarsat-1 and ASTER Data for Mapping Veredas (Palm Swamps). Sensors 2008, 8, 6055–6076. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  43. Cremon, É.H.; Rossetti, D.D.F.; Zani, H. Classification of Vegetation over a Residual Megafan Landform in the Amazonian Lowland Based on Optical and SAR Imagery. Remote Sens. 2014, 6, 10931–10946. [Google Scholar] [CrossRef] [Green Version]
  44. Aghabozorgi, S.; Seyed, S.A.; Ying Wah, T. Time-series clustering—A decade review. Inf. Syst. 2015, 53, 16–38. [Google Scholar] [CrossRef]
  45. Schlaffer, S.; Matgen, P.; Hollaus, M.; Wagner, W. Flood Detection from Multi-Temporal SAR data using Harmonic Analysis and Change Detection. Int. J. Appl. Earth Obs. Geoinf. 2015, 38, 15–24. [Google Scholar] [CrossRef]
  46. Martinis, S.; Kersten, J.; Twele, A. A fully automated TerraSAR-X based flood service. ISPRS J. Photogramm. Remote Sens. 2015, 104, 203–212. [Google Scholar] [CrossRef]
  47. Twele, A.; Cao, W.; Plank, S.; Martinis, S. Sentinel-1-based flood mapping: A fully automated processing chain. Int. J. Remote Sens. 2016, 45, 2990–3004. [Google Scholar] [CrossRef]
  48. Voigt, S.; Kemper, T.; Riedlinger, T.; Kiefl, R.; Scholte, K.; Mehl, H. Satellite Image Analysis for Disaster and Crisis-Management Support. IEEE Trans. Geosci. Remote Sens. 2007, 45, 1520–1528. [Google Scholar] [CrossRef]
  49. Martinis, S.; Twele, A.; Plank, S.; Zwenzner, H.; Danzeglocke, J.; Strunz, G.; Lüttenberg, H.-P.; Dech, S. The International Charter ‘Space and Major Disasters’: DLR’s Contributions to Emergency Response Worldwide. PFG–J. Photogramm. Remote Sens. Geoinf. Sci. 2017, 85, 317–325. [Google Scholar] [CrossRef]
  50. Burke, J.; Pricope, N.; Blum, J. Thermal Imagery-Derived Surface Inundation Modeling to Assess Flood Risk in a Flood-Pulsed Savannah Watershed in Botswana and Namibia. Remote Sens. 2016, 8, 676. [Google Scholar] [CrossRef]
  51. Namibia Nature Foundation. Wetland Habitats in the Chobe-Zambezi River System. Available online: http://www.nnf.org.na/RARESPECIES/InfoSys/IMAGES/WetlandGrazers/fig10habitatsChobeZam.gif (accessed on 5 January 2018).
  52. Esch, T.; Taubenböck, H.; Roth, A.; Heldens, W.; Felbier, A.; Thiel, M.; Schmidt, M.; Müller, A.; Dech, S. TanDEM-X mission—New perspectives for the inventory and monitoring of global settlement patterns. J. Appl. Remote Sens. 2012, 6, 061702. [Google Scholar] [CrossRef]
  53. Rennó, C.D.; Nobre, A.D.; Cuartas, L.A.; Soares, J.V.; Hodnett, M.G.; Tomasella, J.; Waterloo, M.J. HAND, a new terrain descriptor using SRTM-DEM: Mapping terra-firme rainforest environments in Amazonia. Remote Sens. Environ. 2008, 112, 3469–3481. [Google Scholar] [CrossRef]
  54. Lehner, B.; Verdin, K.; Jarvis, A. New Global Hydrography Derived from Spaceborne Elevation Data. Eos Trans. AGU 2008, 89, 93. [Google Scholar] [CrossRef]
  55. Farr, T.G.; Rosen, P.A.; Caro, E.; Crippen, R.; Duren, R.; Hensley, S.; Kobrick, M.; Paller, M.; Rodriguez, E.; Roth, L.; et al. The Shuttle Radar Topography Mission. Rev. Geophys. 2007, 45, 1485. [Google Scholar] [CrossRef]
  56. ESA. Sentinel-1 Toolbox (S1TBX): Version 4.0.0. Available online: https://sentinel.esa.int/web/sentinel/toolboxes/sentinel-1 (accessed on 27 February 2018).
  57. Jarvis, A.; Reuter, H.I.; Nelson, A.; Guevara, E. Hole-filled SRTM for the globe Version 4. Available online: http://srtm.csi.cgiar.org (accessed on 27 February 2018).
  58. Lee, J.-S. Refined filtering of image noise using local statistics. Comput. Graph. Image Process. 1981, 15, 380–389. [Google Scholar] [CrossRef] [Green Version]
  59. Lee, J.-S.; Pottier, E. Polarimetric Radar Imaging: From basics to applications. In Optical Science and Engineering; CRC Press: Boca Raton, FL, USA, 2009; Volume 142. [Google Scholar]
  60. Hess, L.L.; Melack, J.M.; Simonett, D.S. Radar Detection of Flooding Beneath the Forest Canopy: A review. Int. J. Remote Sens. 1990, 11, 1313–1325. [Google Scholar] [CrossRef]
  61. Schumann, G.J.-P.; Moller, D.K. Microwave Remote Sensing of Flood Inundation. Phys. Chem. Earth 2015, 83–84, 84–95. [Google Scholar] [CrossRef]
  62. Breiman, L. Random Forests. Mach. Learn. 2001, 45, 5–32. [Google Scholar] [CrossRef] [Green Version]
  63. Fu, K.S.; Mui, J.K. A survey on image segmentation. Pattern Recognit. 1981, 13, 3–16. [Google Scholar] [CrossRef]
  64. Allen, T.; Wang, Y.; Gore, B. Coastal wetland mapping combining multi-date SAR and LiDAR. J. Geocarto Int. 2013, 28, 616–631. [Google Scholar] [CrossRef]
  65. Mwita, E.; Menz, G.; Misana, S.; Nienkemper, P. Detection of Small Wetlands with Multi Sensor Data in East Africa. ARS 2012, 1, 64–73. [Google Scholar] [CrossRef]
  66. Pope, K.O.; Rey-Benayas, J.M.; Paris, J.F. Radar remote sensing of forest and wetland ecosystems in the Central American tropics. Remote Sens. Environ. 1994, 48, 205–219. [Google Scholar] [CrossRef]
  67. Napoleon, D.; Ramaraj, E. An Efficient Segmentation of Remote Sensing Images for the Classification of Satellite Data Using K-Means Clustering Algorithm. IJIRST–Int. J. Innov. Res. Sci. Technol. 2014, 1, 314–319. [Google Scholar]
  68. Xu, E.; Jia, Z.; Wang, L.; Hu, Y.; Yang, J. Remote Sensing Image Segmentation Model Based on the Otsu Rule and K-means Clustering Algorithm. Inf. Technol. J. 2014, 13, 690–696. [Google Scholar] [CrossRef]
  69. Rekik, A.; Zribi, M.; Benjelloun, M.; Hamida, A.B. A k-Means Clustering Algorithm Initialization for Unsupervised Statistical Satellite Image Segmentation. In Proceedings of the 2006 1ST IEEE International Conference on E-Learning in Industrial Electronics, Hammamet, Tunisia, 18–20 December 2007; pp. 11–16. [Google Scholar]
  70. Richards, J.A. Remote Sensing Digital Image Analysis: An Introduction, 5th ed.; Springer: Berlin, Germany, 2012. [Google Scholar]
  71. Ulaby, F.T.; Fung, A.K.; Moore, R.K. Microwave Remote Sensing: Active and Passive. Volume II: Radar Remote Sensing and Surface Scattering and Emission Theory; Remote Sensing Artech House: Norwood, MA, USA, 1986. [Google Scholar]
  72. Kasischke, E.S.; Smith, K.B.; Bourgeau-Chavez, L.L.; Romanowicz, E.A.; Brunzell, S.; Richardson, C.J. Effects of Seasonal Hydrologic Patterns in South Florida Wetlands on Radar Backscatter Measured from ERS-2 SAR Imagery. Remote Sens. Environ. 2003, 88, 423–441. [Google Scholar] [CrossRef]
  73. Costa, M.P.F.; Niemann, O.; Novo, E.; Ahern, F. Biophysical properties and mapping of aquatic vegetation during the hydrological cycle of the Amazon floodplain using JERS-1 and Radarsat. Int. J. Remote Sens. 2002, 23, 1401–1426. [Google Scholar] [CrossRef]
  74. Yu, Y.; Saatchi, S. Sensitivity of L-Band SAR Backscatter to Aboveground Biomass of Global Forests. Remote Sens. 2016, 8, 522. [Google Scholar] [CrossRef]
  75. Ulaby, F.T.; Long, D.G. Microwave Radar and Radiometric Remote Sensing; Artech House: Norwood, MA, USA, 2015. [Google Scholar]
  76. Schmitt, A.; Wendleder, A.; Hinz, S. The Kennaugh Element Framework for Multi-Scale, Multi-Polarized, Multi-Temporal and Multi-Frequency SAR Image Preparation. ISPRS J. Photogramm. Remote Sens. 2015, 102, 122–139. [Google Scholar] [CrossRef]
  77. Moser, L.; Schmitt, A.; Wendleder, A. Automated Wetland Delineation from Multi-Frequency and Multi-Polarized SAR Images in High Temporal and Spatial Resolution. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2016, III-8, 57–64. [Google Scholar] [CrossRef]
  78. Marti-Cardona, B.; Lopez-Martinez, C.; Dolz-Ripolles, J.; Bladè-Castellet, E. ASAR polarimetric, multi-incidence angle and multitemporal characterization of Doñana wetlands for flood extent monitoring. Remote Sens. Environ. 2010, 114, 2802–2815. [Google Scholar] [CrossRef]
  79. Chini, M.; Papastergios, A.; Pulvirenti, L.; Pierdicca, N.; Matgen, P.; Parcharidis, I. SAR coherence and polarimetric information for improving flood mapping. In Proceedings of the IEEE International Geoscience & Remote Sensing Symposium, Beijing, China, 10–15 July 2016; pp. 7577–7580. [Google Scholar]
  80. Bourgeau-Chavez, L.L.; Kasischke, E.S.; Brunzell, S.M.; Mudd, J.P.; Smith, K.B.; Frick, A.L. Analysis of Space-Borne SAR data for Wetland Mapping in Virginia Riparian Ecosystems. Int. J. Remote Sens. 2001, 22, 3665–3687. [Google Scholar] [CrossRef]
  81. Sang, H.; Zhang, J.; Lin, H.; Zhai, L. Multi-Polarization ASAR Backscattering from Herbaceous Wetlands in Poyang Lake Region, China. Remote Sens. 2014, 6, 4621–4646. [Google Scholar] [CrossRef] [Green Version]
  82. Malinowski, R.; Groom, G.; Schwanghart, W.; Heckrath, G. Detection and Delineation of Localized Flooding from WorldView-2 Multispectral Data. Remote Sens. 2015, 7, 14853–14875. [Google Scholar] [CrossRef]
Figure 1. Location map of the study area (red rectangle) in Namibia.
Figure 2. Validation extent of the preprocessed S-1 image at the flood event (6 April 2017) for VV polarization (a). Digitized validation mask based on S-2 data (8 April 2017) (b).
Figure 3. Auxiliary data for the study area in Namibia: Global Urban Footprint (GUF) (a), exclusion mask (threshold of 20 m) derived from the Height above Nearest Drainage (HAND) index (b), and SRTM Water Body Data (SWBD) (c).
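The exclusion layer in Figure 3b follows directly from thresholding the HAND index at 20 m. A minimal sketch is given below; the file names and the use of rasterio/NumPy are illustrative assumptions rather than the toolchain used in this study.

```python
# Minimal sketch: derive the HAND-based exclusion mask of Figure 3b by
# thresholding the HAND index at 20 m. Pixels lying more than 20 m above the
# nearest drainage are flagged as excluded from the flood search space.
# File names and the rasterio/NumPy toolchain are illustrative assumptions.
import numpy as np
import rasterio

HAND_THRESHOLD_M = 20.0

with rasterio.open("hand_index.tif") as src:      # hypothetical input raster
    hand = src.read(1).astype(np.float32)
    profile = src.profile

exclusion_mask = (hand > HAND_THRESHOLD_M).astype(np.uint8)   # 1 = excluded

profile.update(dtype=rasterio.uint8, count=1)
with rasterio.open("hand_exclusion_mask.tif", "w", **profile) as dst:
    dst.write(exclusion_mask, 1)
```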
Figure 4. Classification process chain for the extraction of temporary open water and temporary flooded vegetation based on SAR time series data. The deep orange arrows represent the pixel-based part of the process chain, while the bright orange arrows show the object-based part. * Global Urban Footprint (GUF), Height above Nearest Drainage (HAND) index, and SRTM Water Body Data (SWBD) serve as exclusion layers.
Figure 5. Multi-temporal behaviour of the backscatter intensity over TOW areas for VV (a), VH (b), and the ratio VV/VH (c), as well as of the NDVI (d) and NDWI (e) values. The blue bars mark the analysed acquisition date during the flood event.
Figure 6. Multi-temporal behaviour of the backscatter intensity over TFV areas for VV (a), VH (b), and the ratio VV/VH (c), as well as of the NDVI (d) and NDWI (e) values. The blue bars mark the analysed acquisition date during the flood event.
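The optical indices shown in Figures 5 and 6 follow their standard definitions (NDVI after Rouse et al., NDWI after McFeeters); the short sketch below restates them. The Sentinel-2 band assignment (B3 = green, B4 = red, B8 = near infrared) is given here as a common convention, not as a detail taken from this study.

```python
# Standard optical index definitions, restated for reference to Figures 5 and 6.
# Inputs are reflectance arrays; for Sentinel-2, green = B3, red = B4, nir = B8
# (assumed band assignment).
import numpy as np

def ndvi(red, nir, eps=1e-6):
    """Normalized Difference Vegetation Index."""
    return (nir - red) / (nir + red + eps)

def ndwi(green, nir, eps=1e-6):
    """Normalized Difference Water Index (McFeeters)."""
    return (green - nir) / (green + nir + eps)
```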
Figure 7. Histogram distributions of training data for the classes TOW, TFV, and Dry Land for individual time series features: Z-score VV (a), Z-score VH (b), Z-score VV + VH (c), Z-score VV − VH (d), and Z-score VV/VH (e).
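One plausible formulation of such Z-score features is sketched below: for each pixel, the flood-date backscatter is compared against the mean and standard deviation of the remaining time series, for VV, VH, and the channel combinations named in Figure 7. This is a hedged illustration, not necessarily the exact definition used in this study; in particular, whether the channels are combined in dB or in linear units is an assumption.

```python
# Illustrative sketch of per-pixel Z-score time series features (cf. Figure 7).
# The flood-date value is standardized against the mean and standard deviation
# of the remaining acquisitions. Not necessarily the authors' exact definition.
import numpy as np

def zscore_feature(stack, flood_index, eps=1e-6):
    """stack: (t, rows, cols) backscatter time series; flood_index: event date."""
    reference = np.delete(stack, flood_index, axis=0)   # all dates except the event
    mean = reference.mean(axis=0)
    std = reference.std(axis=0)
    return (stack[flood_index] - mean) / (std + eps)

def build_features(vv, vh, flood_index):
    """vv, vh: (t, rows, cols) arrays; channel combinations as named in Figure 7."""
    return {
        "Z-score VV":      zscore_feature(vv, flood_index),
        "Z-score VH":      zscore_feature(vh, flood_index),
        "Z-score VV + VH": zscore_feature(vv + vh, flood_index),
        "Z-score VV - VH": zscore_feature(vv - vh, flood_index),
        "Z-score VV/VH":   zscore_feature(vv / vh, flood_index),
    }
```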
Figure 8. Importance of the time series features as determined by the Random Forest algorithm for TOW (blue bars) and TFV (green bars). The features are sorted from highest to lowest importance.
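A ranking of this kind can be reproduced, for instance, with scikit-learn's Random Forest implementation, whose impurity-based feature importances yield an ordering analogous to Figure 8. The sample layout and parameter values in the sketch below are assumptions for illustration only.

```python
# Sketch: feature-importance ranking with a Random Forest (cf. Figure 8).
# X holds one row per training pixel and one column per time series feature;
# y holds the class labels (TOW, TFV, Dry Land). Parameter values are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def rank_features(X, y, feature_names, n_trees=500, seed=42):
    rf = RandomForestClassifier(n_estimators=n_trees, random_state=seed)
    rf.fit(X, y)
    order = np.argsort(rf.feature_importances_)[::-1]   # highest importance first
    return [(feature_names[i], float(rf.feature_importances_[i])) for i in order]
```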
Figure 9. Pixel-based classification result (a) and object-based classification result (b) for the study area in Namibia.
Figure 10. Areas of correspondence and misclassification for the classes TOW, TFV, and Dry Land for the pixel-based (a) and object-based (b) classification, generated by intersecting the classification results with the validation data.
Figure 11. Accuracy assessment results for the pixel-based and object-based classifications generated by means of the time series approach.
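The reported metrics can be derived from a confusion matrix of predicted versus reference labels. The sketch below computes the overall accuracy and the Kappa coefficient under the assumption that classification and validation labels are available as flat arrays on the same grid.

```python
# Sketch: overall accuracy and Kappa coefficient from a confusion matrix
# (cf. Figures 11 and 14). Assumes flat arrays of predicted and reference labels.
import numpy as np
from sklearn.metrics import confusion_matrix

def accuracy_metrics(y_true, y_pred):
    cm = confusion_matrix(y_true, y_pred).astype(float)
    n = cm.sum()
    overall_accuracy = np.trace(cm) / n
    chance_agreement = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2
    kappa = (overall_accuracy - chance_agreement) / (1.0 - chance_agreement)
    return overall_accuracy, kappa
```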
Figure 12. Pixel-based (a) and object-based (b) classification results for the entire study area in Namibia.
Figure 13. Classification result of the S-1FS (a), the S-1FS supplemented by TOW areas from the time series approach (b), the S-1FS supplemented by TOW and TFV areas from the time series approach (c), and the validation image with merged flood areas (TOW + TFV) (d).
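Supplementing the S-1FS result with the classes derived from the time series approach amounts to a per-pixel union of the individual flood masks. A minimal sketch is given below; boolean masks on a common grid are an assumed input format.

```python
# Sketch: merge the S-1FS flood mask with TOW and TFV masks from the time
# series approach (cf. Figure 13b,c). A pixel counts as flooded if any mask flags it.
import numpy as np

def merge_flood_masks(s1fs_mask, tow_mask, tfv_mask=None):
    merged = np.logical_or(s1fs_mask, tow_mask)      # S-1FS + TOW (Figure 13b)
    if tfv_mask is not None:
        merged = np.logical_or(merged, tfv_mask)     # S-1FS + TOW + TFV (Figure 13c)
    return merged
```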
Figure 14. Accuracy assessment results for the S-1FS and its improved variants (S-1FS + TOW and S-1FS + TOW + TFV).
Figure 15. Overall Accuracy (OA) and Kappa coefficient for each classification run based on a decreasing number of images.
Table 1. Characteristics and acquisition dates of the S-1 satellite data used. All S-1 scenes were acquired under the same orbit conditions. The scene acquired on 6 April 2017 was used within this study as the flood event image.

Characteristics of the used Sentinel-1 data:
Wavelength: 5.6 cm
Mode: Interferometric Wide Swath (IW)
Polarization: VV, VH
Frequency: C-band (5.405 GHz)
Resolution: 20 × 22 m (ground range and azimuth)
Pixel spacing: 10 × 10 m
Incidence angle: 30.4°–46.2°
Pass direction: Ascending
Relative orbit: 116
Product level: Level-1 (Ground Range Detected High Resolution (GRDH))

Acquisition dates:
No.  Date                 No.  Date
1    2 September 2016     14   17 February 2017
2    26 September 2016    15   1 March 2017
3    8 October 2016       16   13 March 2017
4    20 October 2016      17   25 March 2017
5    1 November 2016      18   6 April 2017
6    13 November 2016     19   18 April 2017
7    25 November 2016     20   30 April 2017
8    7 December 2016      21   12 May 2017
9    19 December 2016     22   24 May 2017
10   31 December 2016     23   5 June 2017
11   12 January 2017      24   29 June 2017
12   24 January 2017      25   23 July 2017
13   5 February 2017
