Article

Detection and Delineation of Localized Flooding from WorldView-2 Multispectral Data

by Radosław Malinowski 1,*, Geoff Groom 2, Wolfgang Schwanghart 3 and Goswin Heckrath 1

1 Department of Agroecology, Aarhus University, Blichers Allé 20, Postboks 50, DK-8830 Tjele, Denmark
2 Department of Bioscience, Aarhus University, Grenaavej 14, DK-8410 Roende, Denmark
3 Institute of Earth and Environmental Science, University of Potsdam, Karl-Liebknecht-Str. 24-25, 14476 Potsdam-Golm, Germany
* Author to whom correspondence should be addressed.
Remote Sens. 2015, 7(11), 14853-14875; https://doi.org/10.3390/rs71114853
Submission received: 11 June 2015 / Revised: 26 October 2015 / Accepted: 29 October 2015 / Published: 6 November 2015
(This article belongs to the Special Issue Remote Sensing in Flood Monitoring and Management)

Abstract
Remote sensing technology serves as a powerful tool for analyzing geospatial characteristics of flood inundation events at various scales. However, the performance of remote sensing methods depends heavily on the flood characteristics and landscape settings. Difficulties might be encountered in mapping the extent of localized flooding with shallow water on riverine floodplain areas, where patches of herbaceous vegetation are interspersed with open water surfaces. To address the difficulties in mapping inundation in areas with complex water and vegetation compositions, a high spatial resolution dataset has to be used to reduce the problem of mixed pixels. The main objective of our study was to investigate the possibilities of using a single-date WorldView-2 image of very high spatial resolution and supporting data to analyze spatial patterns of localized flooding on a riverine floodplain. We used a decision tree algorithm with various combinations of input variables, including the spectral bands of the WorldView-2 image, selected spectral indices dedicated to mapping water surfaces and vegetation, and topographic data. The overall accuracies of the twelve flood extent maps derived with the decision tree method, applied to both pixels and image objects, ranged between 77% and 95%. The highest overall mapping accuracy was achieved with a method that utilized all available input data and object-based image analysis. Our study demonstrates the possibility of using single-date WorldView-2 data for analyzing flooding events at high spatial detail despite the absence of spectral bands from the shortwave-infrared region that are frequently used in water-related studies. Our study also highlights the importance of topographic data in inundation analyses. The greatest difficulties were encountered in mapping water surfaces under dense-canopy herbaceous vegetation, due to limited water surface exposure and the dominance of vegetation reflectance.

Graphical Abstract

1. Introduction

Remote sensing has been shown to provide a powerful and inexpensive means of analyzing the spatial extent and duration of flooding events [1,2,3,4,5]. Such flood mapping is predominately based on detection of water surfaces, which in the case of extreme flood events with large open water areas tend to be major features within the remotely-sensed scene. In such cases, the use of image data from passive optical systems or active microwave systems assures the detectability of water surfaces by exploiting, respectively, the open water phenomena of low reflectance in the visible spectrum and high absorption in the infrared spectrum, and specular reflection of microwave energy [6,7]. However, the use of these phenomena is greatly limited for flood extent mapping on vegetated riverine floodplain areas, since partially immersed and inundated vegetation changes the spectral signature of water surfaces, hindering their detection using multispectral data [8]. Inundated vegetation also increases surface roughness, reducing the specular reflection from water associated with SAR data [4]. Such problems are typical of wetlands and many floodplains that are affected by frequent localized inundation, as well as transitional zones between open water and dry areas that are subject to inundation during larger flooding events and are covered by various forms of herbaceous and woody terrestrial vegetation. Better knowledge of the spatial extents and temporal durations of localized flooding of these areas represents valuable environmental information for decision-makers, which is necessary for effective land management and spatial planning [9,10]. In addition, such knowledge, when collected and analyzed over longer periods, supplies ecologists and hydrologists with an effective means of monitoring the water cycle and its variation, improving our understanding of its influence upon the whole ecosystem [11,12].
Imagery from various high and medium spatial resolution remote sensing systems has frequently been used for mapping areas of inundated vegetation [5,13,14,15,16]. However, these data have limited applicability for localized flooding events. Inundation of floodplains with herbaceous vegetation is often associated with high local heterogeneity with open water interspersed with vegetation patches of various shapes and sizes (Figure 1). This results in a large number of mixed pixels in such image data [17,18], making high and medium spatial resolution remote sensing image data an unsatisfactory source of information. Thus, while these data provide an adequate level of information for mapping flooding patterns on a regional scale [3,5,19], they are usually too coarse for performing analyses at a local scale [20]. Schumann et al. [21] noted that only data with a spatial resolution finer than 5 m provide adequate information for deriving the precise extent of flood inundation. Such data have not been widely used for spatially accurate flood mapping because of the low coverage repetition rates of the relevant satellite systems and the transitory nature of many flood events. With very high spatial resolution (VHSR) multispectral image data becoming more readily available, our overall aim was to explore their use for mapping inundation patterns in areas with partially immersed vegetation. The specific objectives of our methodological study were to analyze VHSR WorldView-2 (WV2) image data (Digital Globe Corp.) for a flooding event of an agricultural area with a complex pattern of open water and emergent herbaceous vegetation and to develop methods for accurately mapping the spatial extent of such localized flooding.
Figure 1. Example of localized flooding in the Nørreå river valley at Vejrumbro (Denmark), picturing interspersed vegetation and water patches.
Our approach has been inspired by the study of Davranche et al. [13] who mapped wetland flood regimes with seasonal multispectral SPOT5 (SPOT/Programme ISIS, Copyright CNES) image data by analyzing the spectral reflectance of inundated vegetation from multi-temporal data. The basis for the method of Davranche et al. [13] was that the observed spectral signatures are conditioned by the presence of free water under the canopy, the plant water content and photosynthetic activity, and that it was possible to infer the presence of water from the time series of multispectral image data. We have followed their assumptions and adapted their methods to our area of interest and the characteristics of the WV2 image, notwithstanding the differences between the image data used in the two studies. First, the spectral bands of WV2 image data have a spatial resolution that is five times finer than the SPOT5 image data. Second, we used a single date scene, while Davranche et al. [13] applied a seasonal time series of image data. Finally, unlike the SPOT5 imagery, the eight spectral bands of the WV2 image do not include a band from the shortwave-infrared (SWIR) spectrum, which is known to be useful for analyzing water surfaces [22,23,24,25] and plant water content [26,27]. Thus, the key question addressed in our study was to what degree the VHSR and additional spectral bands (cf. Section 2.2) of the WV2 image may compensate for the lack of the SWIR band in mapping flood inundation on vegetated floodplains. We also wanted to investigate whether the classification method found to be efficient by Davranche et al. [13] using multi-temporal image data could be applied to the mapping of local inundation with a single date VHSR image dataset.

2. Materials and Methods

2.1. Study Area

The area investigated in our study is located in Central Jutland, Denmark (56°26′0″N, 9°32′0″E) and covers approx. 3.3 km2 of a rural landscape along a 3-km reach of the River Nørreå (Figure 2). Although the catchment area of the Nørreå represents an undulating post-glacial moraine landscape, the river valley itself has a very small longitudinal gradient. Agriculture is a dominant land use in the study area, with mostly improved or semi-improved grasslands (meadows and pasture) intermixed with patches of trees and some arable fields located on the more elevated parts of the floodplain.
The Nørreå floodplain is regularly temporarily inundated with shallow water during periods of high water stages in the river, occurring predominately in the second half of the calendar year. Inundation takes the form of localized flooding, with the water surface largely covered by emergent herbaceous vegetation.
Figure 2. Project study area covering a part of the Nørreå river valley, Denmark.

2.2. Remote Sensing Data

The WV2 scene used in this study was acquired on 29 September 2012 and was provided as a radiometrically corrected product (Standard Imagery Products [28]). The image consists of one panchromatic and eight relatively narrow multispectral bands (Table 1). The section of the image covering the study area is cloud and cloud-shadow free and was geo-referenced (UTM/ETRS89 zone 32N) with a first order polynomial transformation and nearest neighbor resampling, providing an accuracy of 1/3 of a pixel of the panchromatic band.
The applied flood mapping procedure also utilized a national digital terrain model (DK-DEM) [29] based on airborne LiDAR data (acquired in 2006–2007) with an original grid cell size of 1.6 m, resampled to 2 m to correspond to the ground sample distance (GSD) of the WV2 multispectral imagery. The underlying LiDAR data were acquired in an early spring period and thus represent dry conditions in the Nørreå valley. Additionally, a terrain slope raster was calculated from the DK-DEM (2 m cell size).
Finally, a local set of airborne LiDAR point cloud data acquired on 23 September 2012 was used to derive a 2 m cell size normalized digital surface model (nDSM), also called vegetation height model. The nDSM was derived as the difference between a terrain model (DK-DEM) and a digital surface model (DSM). The DSM was calculated as the highest elevation from LiDAR points falling in each 2 × 2 m cell. The LiDAR data acquired a week before the WV2 image data acquisition represent up-to-date information on vegetation cover within the study area.
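The nDSM computation described above reduces to a per-cell raster difference. A minimal NumPy sketch, assuming the DSM and DTM grids are already co-registered at 2 m (the array names and toy values are illustrative, not from the study data):

```python
import numpy as np

def normalized_dsm(dsm, dtm):
    """Vegetation height model (nDSM) as the per-cell difference DSM - DTM,
    clipped at zero so terrain/sensor noise cannot yield negative heights."""
    return np.clip(dsm - dtm, 0.0, None)

# Toy 2 x 2 grids standing in for the 2 m rasters
dsm = np.array([[12.4, 13.1], [12.0, 11.9]])
dtm = np.array([[12.0, 12.0], [12.1, 11.9]])
heights = normalized_dsm(dsm, dtm)
```

In practice the rasters would be read from file (e.g., with a GIS library) rather than constructed in memory, but the arithmetic is the same.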
Table 1. Characteristics of the WorldView-2 image data [28].

Band | Ground Sample Distance (GSD) (m) | Spectral Range (nm)
Panchromatic (PAN) | 0.5 | 447–808
Coastal Blue | 2 | 396–458
Blue | 2 | 442–515
Green | 2 | 506–586
Yellow | 2 | 584–632
Red | 2 | 624–694
Red Edge | 2 | 699–749
Near-infrared 1 (NIR-1) | 2 | 765–901
Near-infrared 2 (NIR-2) | 2 | 856–1043
Simultaneously with the local LiDAR point cloud data, RGB aerial image data were acquired with a DigiCAM-60 camera and delivered as orthoimages with a 0.1 m pixel size. This dataset was used for collecting training data and for accuracy assessment of the classification results.
Figure 3. Example of a vertical image taken from a tripod used for collecting training data of the flooded classes: (a) tripod in a location with emergent grass and patches of water; (b) vertical image. The four white sticks with colored endings used for georeferencing the vertical images are visible in both pictures.

2.3. Field Data Collection

During the five days preceding the WV2 image acquisition, 28 representative photographs of surfaces in the study area, each covering an area measuring 6–8 m2, were captured using a standard RGB camera (Panasonic Lumix DMC-LX3) mounted on a tripod. The camera lens was facing perpendicularly to the ground (Figure 3a). All of these photographs, in the following referred to as “vertical images”, were geo-referenced utilizing four sticks that were placed within the field of view of the camera and whose positions were measured by a real time kinematic differential GPS (RTK/DGPS). The resulting GSD of the geo-referenced images was circa 1 mm, providing very detailed information about the floodplain wetness within the pictured area. These images represented different combinations of mixed water and vegetation, including different vegetation types (Figure 3b) and were used to derive training data for the flooding classification procedure.

2.4. Image Pre-Processing

The WV2 image was atmospherically corrected before performing the analysis. Initially we used ATCOR-2 software [30], which transformed the radiometrically corrected pixel values to surface reflectance. A post-hoc analysis of the reflectance image showed, however, unexpected data patterns for water-covered areas. The reflectance values of pixels representing water were in many cases much higher in the near-infrared wavelengths than in the visible region for both permanent and temporary (floodwater) water bodies. In principle, clear water has the highest reflectance in the green wavelengths, which decreases with increasing wavelength, reaching a reflectance close to zero in the near-infrared region [7,31]. Although deviations from that pattern are expected, as water reflectance in specific wavelengths changes depending on, for example, water turbidity, the relationship in most cases remains the same (Figure 4). This principle has been the basis for the development of spectral indices such as the Normalized Difference Water Index (NDWI) [32]. However, the reflectance values of the ATCOR-2 atmospherically corrected image deviated considerably from the expected pattern, thus calling their use for calculation of the NDWI, and other spectral indices (cf. Section 2.5), into question, pending further analysis. As there might be different reasons for the observed pattern, and since analysis of the atmospheric correction process was not an aim of this study, the remaining data processing was done using image pixel values converted to top-of-atmosphere spectral reflectance [33] and processed with the dark pixel subtraction (DPS) technique. The DPS process reduces the atmospheric effects originating from the upwelling path radiance.
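Dark pixel subtraction itself is a simple per-band operation. A hedged sketch, assuming a (bands, rows, cols) array of top-of-atmosphere reflectance; here the dark value is taken as the band minimum, although implementations also use a low percentile or a histogram-based estimate:

```python
import numpy as np

def dark_pixel_subtraction(toa):
    """Subtract each band's darkest value as an estimate of the additive
    path-radiance component, clipping negative results to zero."""
    dark = toa.min(axis=(1, 2), keepdims=True)   # per-band dark value
    return np.clip(toa - dark, 0.0, None)

toa = np.array([[[0.10, 0.20], [0.30, 0.40]]])   # one toy band
corrected = dark_pixel_subtraction(toa)
```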
Figure 4. Spectral reflectance characteristics of clear and turbid water within the visible and near-infrared spectral range (0.5–1.0 μm) (Adapted from Davis et al. [31].)

2.5. Training Data

The preparation of representative training data is a crucial step for performing reliable supervised classifications of the land cover (LC) classes under investigation. After a visual inspection of the area of interest in the WV2 image and consideration of the knowledge gained from frequent field visits, ten LC classes were distinguished as important for mapping flood extent in the study area. The LC classes were subsequently grouped into a set of non-flooded classes that comprised asphalt, buildings, grass, maize, shadow, soil and high vegetation (bushes and trees), and a set of flooded classes that comprised open water (O-W), mixed water and vegetation (W-V) and water under vegetation canopy (WuV). The class O-W referred to areas fully covered by water, while W-V represented areas of intermixed water and vegetation objects within the area of a single WV2 pixel (2 × 2 m) (Figure 5a). The WuV class represented inundated areas with a dense cover of emergent floodplain vegetation (excluding bushes and trees) (Figure 5b). Samples for all classes from the non-flooded group and for the open water class were collected by means of visual interpretation of the aerial orthoimagery (cf. Section 2.2). Samples for the remaining two classes (W-V, WuV) were collected based on the vertical images and the aerial orthoimagery, supported by field notes. Two hundred samples were collected for each class except the WuV class, for which only 70 samples could be obtained due to limitations in the field data collected for that class. Each sample corresponded to a single WV2 multispectral image pixel.
Figure 5. Examples of land cover representing flooded classes: (a) mixed water and vegetation class (W-V); (b) water under vegetation canopy class (WuV).
Mapping inundation under a vegetation canopy with dense and long leaves (e.g., reed beds) is a challenging task when using optical image data [13]. Accordingly, we decided to work with two scenarios. The first scenario aimed at mapping inundation under any herbaceous and wetland vegetation (reeds, sedges, rushes) found on a riverine floodplain. It used training data composed of all the ten LC classes described above, hereafter referred to as the 3WET training data (Figure 6a). In the second scenario the focus was shifted towards mapping inundation on areas with at least some water surface visible among plants, which is represented by the class W-V. This approach was based on training data referred to as 2WET, in which the samples representing the WuV class were reclassified to the class of grass due to the similar spectral characteristics of these two classes (Figure 6b).
Figure 6. The class sets and class hierarchies of the two analysis scenarios, 3WET (a) and 2WET (b), used in the study. The figure shows that in the 2WET scenario the flooded class WuV from the 3WET scenario was merged with the non-flooded class grass.

2.6. Spectral Indices and Additional Classification Inputs

The eight spectral bands of the WV2 image (Table 1) are the core data set used in the classifications made in our study. In addition, several spectral indices that have previously been used to characterize vegetation or water were calculated (Table 2 [13,26,34,35,36,37,38,39,40,41,42]) and, where necessary, adapted to the spectral bands of the WV2 image. In cases where the SWIR band was applied in the original index, it was replaced by the WV2 band NIR-2. For the indices where a near-infrared band was required, two indices were calculated, applying respectively WV2 band NIR-1 and WV2 band NIR-2. Finally, a principal component analysis (PCA) performed on the WV2 scene provided eight additional images that were included in the analysis.
Table 2. List of multispectral indices derived from the spectral bands of the WorldView-2 image data, used for the decision tree classifications in this study.

Index | Formula | Reference
DVI *—differential vegetation index | NIR − RED | Richardson and Everitt [34]
DVW **—difference between vegetation and water | NDVI − NDWI | Gond et al. [35]
IFW *—index of free water | NIR − GREEN | Adell and Puech [36]
NDWI *—normalized difference water index | (GREEN − NIR)/(GREEN + NIR) | McFeeters [32]
NDWI-G—normalized difference water index of Gao | (NIR1 − NIR2)/(NIR1 + NIR2) | Gao [26]
NDVI *—normalized difference vegetation index | (NIR − RED)/(NIR + RED) | Tucker [37]
OSAVI *—optimized SAVI | (NIR − RED)/(NIR + RED + 0.16) | Rondeaux et al. [38]
SAVI *—soil adjusted vegetation index | 1.5 (NIR − RED)/(NIR + RED + 0.5) | Huete [39]
SR *—simple ratio | RED/NIR | Pearson and Miller [40]
WI *—water index | NIR2/BLUE | Davranche et al. [13]
WII *—water impoundment index | NIR2/RED | Caillaud et al. [41]
VI *—vegetation index | NIR/RED | Lillesand and Kiefer [42]
* Two indices calculated for NIR1 and NIR2, respectively; ** two indices calculated for NDVI and NDWI derived with NIR1 and NIR2.
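Several of the indices in Table 2 are simple band ratios or normalized differences. A sketch of three of them with NumPy (the reflectance values are purely illustrative, and the band names follow Table 1):

```python
import numpy as np

def ndwi(green, nir):
    """McFeeters NDWI: positive over open water, negative over vegetation."""
    return (green - nir) / (green + nir)

def ndvi(nir, red):
    """Normalized difference vegetation index (Tucker)."""
    return (nir - red) / (nir + red)

def dvw(nir, red, green):
    """Difference between vegetation and water (Gond et al.): NDVI - NDWI."""
    return ndvi(nir, red) - ndwi(green, nir)

# Illustrative reflectances: first pixel water-like, second grass-like
green = np.array([0.08, 0.10])
red   = np.array([0.05, 0.08])
nir1  = np.array([0.02, 0.40])
w = ndwi(green, nir1)
```

Applied to whole bands, these functions operate element-wise, so each index image is produced in one call per band combination.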

2.7. Classification

Twelve decision tree (DT) based supervised flood mapping methods, differing from each other with respect to their input data, training dataset and analysis approach (Table 3), were applied and assessed for their mapping accuracy. All the methods provided binary division into flooded and non-flooded classes.
Decision trees are nonparametric classification methods that handle both categorical and continuous data and require no assumption about the data distribution [43]. DTs are composed of hierarchical rules built automatically using a set of features or variables provided with the training data. Those features are analyzed at each tree node and the feature that provides the best split of the data between the analyzed classes is selected. The eCognition (v. 9.0, Trimble Germany GmbH, Munich, Germany, 2014) software used in this study utilizes the CART algorithm (classification and regression tree) that performs splits using the Gini index [44,45]. The Gini index is a measure of impurity of the node being analyzed and reaches a value of zero when all objects in the node belong to the same class. The DT classifier has been shown to be efficient in mapping inundation in various landscapes [5,13]. All classifications performed in our study, including pixel-based (PB) and object-based approaches, used the following settings for DT parameters: maximum tree depth: 10, minimum number of samples per node for splitting: 10, number of cross-validation folds: 3.
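The same CART setup can be reproduced outside eCognition. A minimal sketch with scikit-learn's CART implementation, using the parameter settings listed above; the feature matrix X (training samples by input variables) and labels y are random placeholders, not the study's training data:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.random((200, 10))        # placeholder: e.g., 8 WV2 bands + 2 indices
y = rng.integers(0, 2, 200)      # placeholder flooded / non-flooded labels

clf = DecisionTreeClassifier(
    criterion="gini",            # Gini impurity splits, as in CART
    max_depth=10,                # maximum tree depth used in the study
    min_samples_split=10,        # minimum samples per node for splitting
)
scores = cross_val_score(clf, X, y, cv=3)   # 3 cross-validation folds
clf.fit(X, y)
```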
Methods 1 and 7 were PB classifications adapted from Davranche et al. [13]. These approaches were applied using the WV2 spectral bands and spectral indices with the 3WET and 2WET training data, respectively. Methods 2 and 8 used similar classification settings to methods 1 and 7, respectively; however, they were performed on all available data, including PCA and topographic data.
As object-based image analysis (OBIA) has been found to be more appropriate and efficient than PB methods for mapping with VHSR image data [46,47,48], we also used methods that work on the object level (methods 3–6 and 9–12, Table 3) and tested whether OBIA improves the mapping of localized inundation. To do so, multiresolution segmentation [49] was run with a scale parameter of 10 and values of 0.1 and 0.5 for the shape and compactness parameters; the scale factor was chosen heuristically to ensure objects had a size that addressed the high heterogeneity and complexity of the river floodplain area. Segmentation was run on the WV2 NIR-2 image data that had been pan-sharpened using the Gram-Schmidt transformation [50] and the WV2 PAN band. The pan-sharpening was performed to ensure the highest possible level of detail. The NIR-2 band was used because it represents the most water-sensitive waveband of the WV2 image, as the spectral range of the higher spatial resolution PAN band does not cover the wavelength of the NIR-2 band (Table 1).
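To illustrate the component-substitution idea behind such pan-sharpening, here is a simplified ratio-based sketch; this is not the Gram-Schmidt transformation used in the study, and the array names and 4:1 resolution ratio (2 m multispectral to 0.5 m panchromatic) are assumptions for illustration:

```python
import numpy as np

def ratio_pansharpen(ms_band, pan, ratio=4):
    """Inject pan-band spatial detail into one MS band: upsample the MS
    band, then scale it by pan divided by a block-averaged (MS-resolution)
    version of pan, preserving the MS radiometry on average per block."""
    h, w = ms_band.shape
    up = np.kron(ms_band, np.ones((ratio, ratio)))      # nearest-neighbour upsample
    pan_low = pan.reshape(h, ratio, w, ratio).mean(axis=(1, 3))
    pan_low_up = np.kron(pan_low, np.ones((ratio, ratio)))
    return up * pan / np.maximum(pan_low_up, 1e-12)

# Toy example: one 2 m MS cell and its 4 x 4 block of 0.5 m pan pixels
ms = np.array([[2.0]])
pan = np.full((4, 4), 3.0)
sharp = ratio_pansharpen(ms, pan)
```

When the pan block is spatially uniform, as in the toy example, the output simply reproduces the upsampled MS value; spatial structure in the pan band modulates the result around that value.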
Table 3. Specification of the classification methods developed and used in this study.

Method # | Analysis Approach | WV2 Spectral Bands | Spectral Indices | PCA | DTM and Slope Raster | Pre-Classification of High Vegetation and Shadows | Training Data
Method 1 | PB | + | + | | | | 3WET
Method 2 | PB | + | + | + | + | | 3WET
Method 3 | OBIA | + | + | + | | + | 3WET
Method 4 | OBIA | + | + | + | | | 3WET
Method 5 | OBIA | + | + | + | + | + | 3WET
Method 6 | OBIA | + | + | + | + | | 3WET
Method 7 | PB | + | + | | | | 2WET
Method 8 | PB | + | + | + | + | | 2WET
Method 9 | OBIA | + | + | + | | + | 2WET
Method 10 | OBIA | + | + | + | | | 2WET
Method 11 | OBIA | + | + | + | + | + | 2WET
Method 12 | OBIA | + | + | + | + | | 2WET
PB—pixel-based approach; OBIA—object-based image analysis.
Four of the eight OBIA methods applied in the study (methods 3, 5, 9 and 11) used a pre-classification of areas of high vegetation and solar illumination shadows. The high vegetation areas were pre-classified and excluded from further analysis because inundation of high vegetation was out of the scope of this study. They were classified as having nDSM values greater than 2 m and positive NDVI values. The shadowed areas were pre-classified because they are commonly misclassified as water-covered areas [19,51,52,53] due to the low reflectance of both classes. Shadows were found using a shadow fraction map, derived by sub-pixel analysis [54], with the assumption that shadowed areas are adjacent to high vegetation objects. The remaining shadows (e.g., building shadows) were accounted for by the DT classification. Both high vegetation and shadow areas were pre-classified using OBIA performed on the above-mentioned segmented pan-sharpened WV2 NIR-2 image data. Table 3 provides all details of the twelve classification methods.
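The high-vegetation rule above (nDSM greater than 2 m with positive NDVI) translates directly into a boolean raster mask. A minimal sketch, with illustrative arrays standing in for the real rasters:

```python
import numpy as np

def high_vegetation_mask(ndsm, ndvi):
    """Flag cells taller than 2 m in the vegetation height model that
    also have positive NDVI as high vegetation (bushes and trees)."""
    return (ndsm > 2.0) & (ndvi > 0.0)

ndsm = np.array([[0.3, 5.2], [8.0, 1.0]])    # heights in metres
ndvi = np.array([[0.6, 0.7], [-0.1, 0.5]])   # illustrative NDVI values
mask = high_vegetation_mask(ndsm, ndvi)
```

Note how the second cell of the bottom row (8 m tall but negative NDVI, e.g. a building) is correctly excluded, which is the point of combining the two criteria.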

2.8. Accuracy Assessment

An assessment of the classification results was carried out by comparing the mapped class with its classification in the reference data. The reference data comprised 114 randomly selected samples whose wetness conditions were checked in the field within the five days preceding the satellite image acquisition, and 186 randomly selected samples within the study area that were checked by visual interpretation of the aerial orthoimagery acquired six days before the satellite image (cf. Section 2.2). The labels of the reference data were adjusted to follow the class sets of the two training data scenarios, 3WET or 2WET. The overall accuracy (OA), correctness and completeness of the flooded class, and the quantity (QD) and allocation (AD) disagreement were calculated as accuracy measures for all 12 maps generated. The OA represents the ratio between the correctly classified samples from all classes and the total number of samples, providing a general view of the mapping performance. The completeness measure represents the level of false negative classifications (under-estimation of the flooded class), and the correctness shows the level of false positives (over-estimation of the flooded class). The quantity disagreement expresses the amount of difference between the reference observations and the classified (predicted) ones, while the allocation disagreement shows the proportion of misplaced categories from the classified map as compared with the spatial allocations in the reference data [55,56]. The sum of the quantity and allocation disagreement makes up the total disagreement between the reference and classified maps.
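All of these measures can be derived from the 2 × 2 flooded/non-flooded error matrix. A sketch for the binary case, following the quantity/allocation decomposition of total disagreement [55] and checked here against the method 1 matrix reported in Table 4:

```python
import numpy as np

def flood_map_accuracy(conf):
    """Accuracy measures for a binary flooded/non-flooded error matrix.
    conf[i][j] = samples classified as class i whose reference class is j,
    with rows and columns ordered (flooded, non-flooded)."""
    conf = np.asarray(conf, dtype=float)
    n = conf.sum()
    oa = np.trace(conf) / n                        # overall accuracy
    completeness = conf[0, 0] / conf[:, 0].sum()   # 1 - false-negative rate
    correctness = conf[0, 0] / conf[0, :].sum()    # 1 - false-positive rate
    # Binary case: quantity disagreement is the class-total mismatch,
    # allocation disagreement is the remainder of total disagreement
    qd = abs(conf[0, :].sum() - conf[:, 0].sum()) / n
    ad = (conf[0, 1] + conf[1, 0]) / n - qd
    return oa, completeness, correctness, qd, ad

# Method 1 error matrix (flooded row: 107, 41; non-flooded row: 28, 124)
oa, comp, corr, qd, ad = flood_map_accuracy([[107, 41], [28, 124]])
```

The same function applied to the method 11 matrix ([[100, 5], [10, 185]]) reproduces the 95% overall accuracy reported in Section 3.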

3. Results

Table 4 shows the results of an accuracy assessment performed on the derived maps of localized flooding. These accuracy measures were calculated from data with randomly selected locations and thus less frequent classes may be under-represented. The map resulting from method 1 (Figure 7b), the method most similar to the approach used by Davranche et al. [13], received the lowest OA of 77% and the highest quantity and allocation disagreement, reaching 23% of total disagreement. In general, the methods that used the 3WET training data (methods 1–6) achieved lower OA scores (77%–92%) than their equivalent methods that classified only two flooded classes (methods 7–12). The methods that utilized the 2WET training data resulted in an OA of between 88% and 95%, with method 11 achieving the highest OA (95%) and correctness scores (0.95) of all the methods. The high accuracy of this method is also confirmed by the lowest quantity and allocation disagreement percentages, giving a total disagreement of 5%.
Table 4. Error matrices and the accuracy assessment measures derived for the classification methods used in this study.

Method # | Classification | Reference F | Reference N-F | Total | OA (%) | Comp. (%) | Corr. (%) | QD (%) | AD (%)
1 | F | 107 | 41 | 148 | 77 | 79 | 72 | 4.3 | 18.7
  | N-F | 28 | 124 | 152
  | Total | 135 | 165 | 300
2 | F | 123 | 17 | 140 | 90 | 91 | 88 | 1.7 | 8.0
  | N-F | 12 | 148 | 160
  | Total | 135 | 165 | 300
3 | F | 114 | 29 | 143 | 83 | 84 | 80 | 2.7 | 14.0
  | N-F | 21 | 136 | 157
  | Total | 135 | 165 | 300
4 | F | 110 | 33 | 143 | 81 | 81 | 77 | 2.7 | 16.7
  | N-F | 25 | 132 | 157
  | Total | 135 | 165 | 300
5 | F | 127 | 16 | 143 | 92 | 94 | 89 | 2.7 | 5.3
  | N-F | 8 | 149 | 157
  | Total | 135 | 165 | 300
6 | F | 127 | 19 | 146 | 91 | 94 | 87 | 3.7 | 5.3
  | N-F | 8 | 146 | 154
  | Total | 135 | 165 | 300
7 | F | 97 | 24 | 121 | 88 | 88 | 80 | 3.7 | 8.7
  | N-F | 13 | 166 | 179
  | Total | 110 | 190 | 300
8 | F | 97 | 18 | 115 | 90 | 88 | 84 | 1.7 | 8.7
  | N-F | 13 | 172 | 185
  | Total | 110 | 190 | 300
9 | F | 94 | 6 | 100 | 93 | 85 | 94 | 3.3 | 4.0
  | N-F | 16 | 184 | 200
  | Total | 110 | 190 | 300
10 | F | 91 | 18 | 109 | 88 | 83 | 83 | 0.3 | 12.0
  | N-F | 19 | 172 | 191
  | Total | 110 | 190 | 300
11 | F | 100 | 5 | 105 | 95 | 91 | 95 | 1.7 | 3.3
  | N-F | 10 | 185 | 195
  | Total | 110 | 190 | 300
12 | F | 97 | 7 | 104 | 93 | 88 | 93 | 2.0 | 4.7
  | N-F | 13 | 183 | 196
  | Total | 110 | 190 | 300
AD: allocation disagreement; Comp.: completeness; Corr.: correctness; F: flooded; N-F: non-flooded; OA: overall accuracy; QD: quantity disagreement.
Figure 7. WorldView-2 image of the study area presented in natural colors (a); and selected maps of localized flooding resulting from the classification methods developed in the study (b–f) and described in Table 3. The flooding maps are superimposed on the WV2 panchromatic band.
For the methods using the two groups of training data (3WET and 2WET) the accuracy assessment showed superior results for mapping performed with object-based analysis. The method 2 map, representing a pixel-based approach, had slightly poorer accuracy scores (OA, AD, completeness) than its equivalent based on OBIA, the method 6 map. A similar pattern can be observed for the maps produced using methods 8 (PB) and 12 (OBIA) for the 2WET training data group. The differences were even greater between the PB maps (2 and 8) and their equivalents with object-based pre-classification of vegetation and shadows (maps 5 and 11, respectively).
Figure 7b–f illustrates the extent of flooding in the analyzed area for selected maps using the methods described in Table 3, which give the most important information about the performance of the various DT classifications used.
The map of method 1 (Figure 7b) shows many areas misclassified as flooded (false positives) throughout the study area, and the misclassification mainly takes the form of a large number of small isolated patches. Relatively similar results with large numbers of incorrectly classified flooded areas may also be seen on the maps of methods 3 (Figure 7d) and 4 (not illustrated). However, fragmentation of the map of method 1, the PB method, appears stronger than that of the maps of the OBIA methods 3 and 4. These three maps are characterized by the lowest accuracy scores of all the methods, and areas that are classified as flooded on these maps cover most of the LC classes from the study area.
The second group of maps with a high similarity is produced using methods 2 (Figure 7c), 5 and 6. For this group flood inundation covers a wide and compact strip located in the central part of a river valley with only sparse patches of non-inundated areas. The maps have no areas classified as flooded outside the valley, likely because of inclusion of DTM data, but they apparently overestimate flooding inside it.
The methods trained with the 2WET data (methods 7–12) in general resulted in maps with less extensive inundation than the maps based on the 3WET data. All the 2WET maps show relatively similar spatial arrangements, with little variation in the inundated areas. The main differences are a higher level of false positive misclassification outside the valley in the maps of methods 7 (Figure 7e), 9 and 10, whereas flooding is limited to the valley bottom in the maps of methods 8, 11 (Figure 7f) and 12. The overestimation in the 2WET group of maps is most apparent with method 7.

4. Discussion

We investigated the potential of using VHSR WV2 image data to delineate the extent of localized flooding on riverine floodplains. Most of the flood classification methods developed gave very high accuracy scores. However, our analyses also highlighted a number of issues that must be addressed before the method could be considered operational.

4.1. Major Misclassification Issues

A comparison of the method specifications (Table 3) and resulting maps indicated that the widespread occurrence of incorrectly classified flooded areas (false positives) was characteristic of the methods that did not use elevation data (e.g., methods 1 and 7), and that also aimed to map the WuV class (e.g., methods 1, 3 and 4). Thus, in the maps derived with methods that used elevation data, the extent of flooded areas was mostly limited to the area of the river valley bottom (e.g., Figure 7c,f).
Since PB methods analyze each pixel separately, the use of VHSR image data characterized by high spectral complexity and variability of individual features may result in highly fragmented maps [57,58]. This effect is enhanced by pixels representing a mixture of spectral signatures of various LC classes (mixed pixels) whose spectral characteristics in our case were often similar to those representing flooded areas. These problems are often reduced by applying OBIA, because image objects are created using, in general, more than one criterion of homogeneity (e.g., spectral value, shape) [46]. By this process, isolated pixels with a value differing from their neighbors are often joined to larger objects and classified together, thereby reducing the fragmentation effect.
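The de-fragmentation effect described above can be illustrated with a minimal sketch. The example below is not the OBIA segmentation used in the study (which relies on eCognition's multiresolution segmentation [49]); it merely emulates the outcome with a simple 3 × 3 majority vote, in which an isolated misclassified pixel adopts the class of its neighbors while a coherent flooded patch survives:

```python
import numpy as np

def majority_filter(mask, iterations=1):
    """Smooth a binary classification map with a 3x3 majority vote,
    approximating the de-fragmentation effect of grouping pixels
    into objects (isolated pixels adopt their neighbours' class)."""
    m = mask.astype(np.int32)
    for _ in range(iterations):
        padded = np.pad(m, 1, mode="edge")
        # Sum of the 3x3 neighbourhood of every pixel
        votes = sum(padded[i:i + m.shape[0], j:j + m.shape[1]]
                    for i in range(3) for j in range(3))
        m = (votes >= 5).astype(np.int32)  # majority of the 9 pixels
    return m

# A single misclassified 'flooded' pixel on dry land disappears,
# while the interior of a solid flooded block survives.
demo = np.zeros((7, 7), dtype=np.int32)
demo[1, 1] = 1            # isolated false positive
demo[3:6, 3:6] = 1        # coherent flooded patch
smoothed = majority_filter(demo)
```

True object-based segmentation additionally uses shape and context criteria, so this filter only captures the spectral-neighborhood aspect of the fragmentation reduction.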
The higher number of false positive misclassifications of flooded areas associated with the methods that used the 3WET training data (i.e., relatively low correctness values; Table 4) resulted from the fact that the spectral signature of the class WuV reflects mostly the signature of healthy green vegetation, such as grass or trees. Such vegetation was found across the whole study area. This illustrates the problem of using optical data for mapping inundated vegetation with a closed canopy, represented by the WuV class (methods 1–6). Certainly, the presence of water influences the spectral reflectance of inundated vegetation [8]. However, the strength of that influence strongly depends on the structure and density of the vegetation as well as the relative size of the area covered by vegetation and water [8,59]. For example, with a vegetation cover greater than 80% or with more than 60% of biomass standing above water, the surface reflectance within the range of the visible and near-infrared spectra (326–1055 nm) represents the typical spectral response curve of green non-flooded vegetation, regardless of species [60,61]. Therefore, detecting inundation in areas with dense vegetation cover, such as represented by the WuV class, is constrained when using a single date WV2 image. This may to some degree be addressed by applying multi-temporal analysis [13], which allows vegetation states under different flooding conditions to be compared. A multi-temporal approach can work well for wetland inundation monitoring [13], because most wetland vegetation species respond in a similar way to water abundance or deficit. Such analysis is, however, very complex in areas with both terrestrial and aquatic vegetation, because they react differently to various wetness states.
While a water deficit influences both groups of vegetation species negatively, the prolonged abundance of water supports the growth of aquatic (wetland) plants but results in the wilting of terrestrial species that are not tolerant to long inundation periods. Therefore, the responses of aquatic and terrestrial plants to water abundance would need to be analyzed separately within each group rather than between groups. An analysis is required in which the spectral response of the same type of vegetation is monitored in both dry and wet conditions.
Methods 2, 5 and 6 (Table 4), which were based on the 3WET training data and also used elevation data, achieved relatively high overall accuracy scores and confined the area mapped as flooded to the river valley; nevertheless, they overestimated the extent of flooding within the floodplain (Figure 7c). This overestimation probably originates from a combination of the broad spectral range of the WuV class and the use of elevation data, which caused almost the entire area falling within the elevation range of the flooded training classes to be classified as flooded. As shown in Figure 7f, this overestimation does not appear in method 11, which uses the same classification rules as method 5 but applied the 2WET training data, in which the flooded classes have narrower spectral ranges. We therefore suggest that methods 1–6, which mapped the WuV class as flooded, are unreliable, because the spectral response of that class represents vegetation rather than water and introduces classification error. As stated earlier in this subsection, we also suggest that the analysis of inundated areas represented by the WuV class cannot be performed reliably without multi-temporal data. Considering the accuracy measures (Table 4) and our knowledge of the wetness conditions in the study area, we find that methods 11 and 12 yield the most reliable results achievable with a single date image.

4.2. Application of SWIR Data

Based on the work of Baret et al. [62], Davranche et al. [13] suggested that the SWIR band, which is not available in the WV2 image data, is sensitive to soil moisture under the vegetation canopy and may help in mapping wetland inundation. Baret et al. [62], however, performed their analysis on agricultural crops, which differ considerably in structure, height and growing pattern from most floodplain plants. Thus, the spectral response from floodplain vegetation (both aquatic and terrestrial) in wet conditions may also differ from that observed by Baret et al. [62]. Moreover, Byrd et al. [59] found that the influence of water inundation on the reflectance of marsh vegetation is unclear in the SWIR region of the spectrum and requires more advanced studies. Therefore, considering that Davranche et al. [13] also encountered difficulties in mapping water under dense canopy, even though they used SWIR data, we would argue that there is no clear evidence that the SWIR band facilitates the detection of water inundation under a vegetation canopy. On the other hand, we do consider that application of SWIR spectral bands, such as those from the WorldView-3 sensor (launched August 2014), could improve the detectability of areas of open water and classes similar to our W-V class. This consideration is based on the stronger absorption by free water surfaces in the SWIR region than in the near-infrared and the possibility of calculating additional spectral indices dedicated to mapping open water surfaces, such as the Automated Water Extraction Index [23] or the modified NDWI [25]. In our study, however, the 2WET flooding classes were mapped relatively well with the single date WV2 image data; therefore, the focus of future research should be on whether SWIR data could be used to map land covers similar to our WuV class, which, as our results indicate, remains problematic.
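The two SWIR-based indices mentioned above are simple per-pixel band combinations. The sketch below states them as published (modified NDWI [25]; AWEI, no-shadow variant [23]) with hypothetical surface-reflectance values; the point is that strong SWIR absorption by open water drives both indices positive for water and negative for green vegetation:

```python
def mndwi(green, swir1):
    """Modified NDWI (Xu, 2006): (Green - SWIR1) / (Green + SWIR1)."""
    return (green - swir1) / (green + swir1)

def awei_nsh(green, nir, swir1, swir2):
    """Automated Water Extraction Index, no-shadow variant
    (Feyisa et al., 2014)."""
    return 4.0 * (green - swir1) - (0.25 * nir + 2.75 * swir2)

# Hypothetical surface reflectances: water absorbs strongly in SWIR,
# vegetation reflects strongly in NIR and moderately in SWIR.
water = dict(green=0.06, nir=0.03, swir1=0.01, swir2=0.005)
grass = dict(green=0.08, nir=0.40, swir1=0.20, swir2=0.12)

water_mndwi = mndwi(water["green"], water["swir1"])   # positive
grass_mndwi = mndwi(grass["green"], grass["swir1"])   # negative
water_awei = awei_nsh(**water)                        # positive
grass_awei = awei_nsh(**grass)                        # negative
```

Since WV2 carries no SWIR band, neither index could be computed in this study; they become available with WorldView-3 or Landsat-class sensors.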

4.3. Application of Topographic Data

An important issue indicated by our analyses is the application of terrain elevation data. In all the maps that were created using a DTM, the flooding extent is limited to the river valley bottom, from where the training data were derived and which generally has a low absolute elevation. The use of elevation data in flood or water surface mapping is highly beneficial [63,64,65,66] and fully justified, because water location, water movement in the landscape and water stage in a watercourse are primarily determined by topography. Therefore, applying elevation data or its derivatives (e.g., terrain slope) in the mapping procedure reduces the extent of the area where floodwaters can occur on the map, for example excluding slopes. An additional advantage is the potential for fusing elevation data with spectral data to aid the distinction of types of water surface. Using spectral data alone, water surfaces may be differentiated from the other LC types, but it is often difficult to discriminate between floodwater and a permanent water body, or even rainwater accumulated temporarily on the ground. Here, elevation data help to identify non-flooding water bodies by comparing the elevation of river or floodwater surfaces with other types of water surfaces.
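The fusion step can be reduced to a simple conceptual sketch (this is an illustration of the principle, not the decision tree rules actually learned in the study): a spectrally derived flood mask is intersected with a DTM condition so that pixels above the elevation range observed in the flooded training samples are rejected.

```python
import numpy as np

def mask_by_elevation(spectral_flood, dtm, max_flood_elev):
    """Keep 'flooded' only where the terrain lies at or below the
    elevation range represented by the flooded training samples."""
    return spectral_flood & (dtm <= max_flood_elev)

# Hypothetical 2x2 scene: all but one pixel look flooded spectrally,
# but one of them sits well above the valley bottom.
spectral = np.array([[True, True],
                     [True, False]])
dtm = np.array([[10.2, 14.9],
                [10.5, 10.1]])        # metres a.s.l.
flooded = mask_by_elevation(spectral, dtm, max_flood_elev=11.0)
```

The spectral false positive on the 14.9 m pixel is removed, mirroring how the DTM confined flooding to the valley bottom in the maps of methods 2, 5, 6, 8, 11 and 12.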
However, using absolute elevation data, as was done in the current study, reduces the transferability of the training samples to other areas, or even to other reaches of the same river. The elevation values collected as training data on a certain river reach remain valid only as long as the terrain elevation does not change considerably. The crisp thresholds estimated for the DT would fail, however, on other sections of the river, because the terrain drops in line with the river gradient. Areas upstream and downstream of the sampled location would thus be subject to under- and over-estimation, respectively. To improve the method further, the absolute elevation data should be replaced with other estimates so that this problem is eliminated and the method becomes transferable to other locations. This requires the application of one or more DTM derivatives that provide estimates adjustable to local conditions.
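One such derivative is elevation relative to the local river water surface rather than absolute elevation. The following sketch (with hypothetical stage values; the study itself did not implement this) detrends a DTM by subtracting the river surface elevation interpolated along the channel, so that a single relative threshold remains valid despite the downstream terrain drop:

```python
import numpy as np

def height_above_river(dtm, dist_downstream, stage_dist, stage_elev):
    """Detrend elevations by subtracting the river water surface
    elevation interpolated at each pixel's along-channel distance."""
    river_surface = np.interp(dist_downstream, stage_dist, stage_elev)
    return dtm - river_surface

# Hypothetical river stage sampled at three points along the channel;
# the water surface falls 1 m per km with the river gradient.
stage_dist = np.array([0.0, 1000.0, 2000.0])   # m along channel
stage_elev = np.array([12.0, 11.0, 10.0])      # m a.s.l.

# Three floodplain pixels at increasing downstream distance, each
# lying 0.4 m above the local water surface in absolute terms.
dtm = np.array([12.4, 11.4, 10.4])
dist = np.array([0.0, 1000.0, 2000.0])
rel = height_above_river(dtm, dist, stage_dist, stage_elev)
```

An absolute threshold (e.g., DTM ≤ 11.0 m) would treat these three equivalent pixels inconsistently, whereas one relative threshold (e.g., ≤ 0.5 m above the water surface) classifies them alike along the whole reach.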

5. Conclusions

Our study demonstrates an application of a single date, very high spatial resolution optical image from the WorldView-2 sensor to delineate the spatial extent of localized flooding on densely vegetated riverine floodplain areas. An application of such detailed data has previously not been tested in depth for flood mapping, although it provides highly relevant information for flood management and wetland inundation analysis. The multispectral image served as a primary data source for developing various classification methods, the performance of which was verified with robust reference data. Our study shows that the single date WorldView-2 data enabled investigation of the patterns of floodplain inundation and accurate delineation of its spatial extent (overall accuracy, OA, above 90%). It also revealed that the spatial and spectral properties of the WorldView-2 image suffice to address the heterogeneity of floodplain areas under flooding conditions. We showed that this was possible despite the WorldView-2 data lacking the shortwave-infrared spectral band that is often of great use in water related studies. Moreover, for the conditions met in our study, where the greatest mapping problem was inundation under herbaceous vegetation, we argue that multi-temporal analysis would not necessarily improve classification considerably. This statement, however, requires further verification.
The pixel-based methods developed in our study, when used with training data representing water under dense vegetation, resulted in considerable misclassification (77% OA) and fragmentation of areas classified as flooded and are thus considered unreliable. The most reliable and accurate method (95% OA), method 11, used a decision tree classifier that worked on image objects instead of separate pixels. This reduced misclassification and fragmentation of flooded areas. The performance of this method was additionally improved by applying topographic data and by excluding shadows and areas of high vegetation in a pre-classification step. However, the flooding maps derived from the most reliable methods often failed to include areas of inundated, dense vegetation, i.e., the water under vegetation class (completeness of 91%). Therefore, accurate mapping of this class remains a bottleneck of methods based on data from passive optical systems.
For further improvements, we recommend the development of detailed object-based image analysis methods with hierarchical structuring that would classify each individual flood-related class separately. Additionally, we suggest using WorldView-3 image data and testing the applicability of its wide range of shortwave-infrared bands for detailed flood studies. Moreover, the WorldView-3 image data, having, at present, the finest spatial resolution among civil satellite RS systems, should improve the detection of small water surface patches among floodplain herbaceous vegetation.

Acknowledgments

The funding of this work for Radosław Malinowski, Geoff Groom and Goswin Heckrath, by a research grant from the Danish AgriFish Agency is gratefully acknowledged (grant number: 923063).
Wolfgang Schwanghart acknowledges the support by the Potsdam Research Cluster for Georisk Analysis, Environmental Change and Sustainability (PROGRESS) for his contribution to this work.

Author Contributions

All authors contributed to developing the overall concept of the analysis and the preparation and collection of the field data. Radosław Malinowski processed and analyzed the geospatial data and developed the classification methods with the close assistance of Geoff Groom. Most parts of the manuscript were written by Radosław Malinowski with support from all the remaining authors.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Chignell, S.; Anderson, R.; Evangelista, P.; Laituri, M.; Merritt, D. Multi-temporal independent component analysis and Landsat 8 for delineating maximum extent of the 2013 Colorado front range flood. Remote Sens. 2015, 7, 9822–9843. [Google Scholar] [CrossRef]
  2. Pierdicca, N.; Pulvirenti, L.; Chini, M.; Guerriero, L.; Candela, L. Observing floods from space: Experience gained from COSMO-SkyMed observations. Acta Astronaut. 2013, 84, 122–133. [Google Scholar] [CrossRef]
  3. Thomas, R.F.; Kingsford, R.T.; Lu, Y.; Hunter, S.J. Landsat mapping of annual inundation (1979–2006) of the Macquarie Marshes in semi-arid Australia. Int. J. Remote Sens. 2011, 32, 4545–4569. [Google Scholar] [CrossRef]
  4. Smith, L.C. Satellite remote sensing of river inundation area, stage, and discharge: A review. Hydrol. Process. 1997, 11, 1427–1439. [Google Scholar] [CrossRef]
  5. Ward, D.P.; Petty, A.; Setterfield, S.A.; Douglas, M.M.; Ferdinands, K.; Hamilton, S.K.; Phinn, S. Floodplain inundation and vegetation dynamics in the Alligator Rivers region (Kakadu) of northern Australia assessed using optical and radar remote sensing. Remote Sens. Environ. 2014, 147, 43–55. [Google Scholar] [CrossRef]
  6. Jensen, J.R. Remote Sensing of the Environment: An Earth Resource Perspective., 2nd ed.; Pearson Prentice Hall: Upper Saddle River, NJ, USA, 2007; p. 592. [Google Scholar]
  7. Lillesand, T.M.; Kiefer, R.W.; Chipman, J.W. Remote Sensing and Image Interpretation, 6th ed.; John Wiley & Sons: Hoboken, NJ, USA.
  8. Silva, T.F.; Costa, M.F.; Melack, J.; Novo, E.L.M. Remote sensing of aquatic vegetation: Theory and applications. Environ. Monit. Assess. 2008, 140, 131–145. [Google Scholar] [CrossRef] [PubMed]
  9. European Communities (EC). Directive 2007/60/EC of the European Parliament and of the Council of 23 October 2007 on the assessment and management of flood risks. Off. J. European Communities 2007, 1, L288/27. [Google Scholar]
  10. European Communities (EC). COUNCIL REGULATION (EC) No 73/2009 of 19 January 2009 establishing common rules for direct support schemes for farmers under the common agricultural policy and establishing certain support schemes for farmers, amending Regulations (EC) No 1290/2005, (EC) No 247/2006, (EC) No 378/2007 and repealing Regulation (EC) No 1782/2003. Off. J. European Communities 2009, L 30, 16–19. [Google Scholar]
  11. Jones, K.; Lanthier, Y.; van der Voet, P.; van Valkengoed, E.; Taylor, D.; Fernández-Prieto, D. Monitoring and assessment of wetlands using Earth Observation: The GlobWetland project. J. Environ. Manage. 2009, 90, 2154–2169. [Google Scholar] [CrossRef] [PubMed]
  12. Matthews, G.V.T. The Ramsar Convention on Wetlands: Its History and Development; Ramsar Convention Bureau: Gland, Switzerland, 1993. [Google Scholar]
  13. Davranche, A.; Poulin, B.; Lefebvre, G. Mapping flooding regimes in Camargue wetlands using seasonal multispectral data. Remote Sens. Environ. 2013, 138, 165–171. [Google Scholar] [CrossRef] [Green Version]
  14. Davranche, A.; Lefebvre, G.; Poulin, B. Wetland monitoring using classification trees and SPOT-5 seasonal time series. Remote Sens. Environ. 2010, 114, 552–562. [Google Scholar] [CrossRef] [Green Version]
  15. Huang, C.; Peng, Y.; Lang, M.; Yeo, I.-Y.; McCarty, G. Wetland inundation mapping and change monitoring using Landsat and airborne LiDAR data. Remote Sens. Environ. 2014, 141, 231–242. [Google Scholar] [CrossRef]
  16. Zhao, X.; Stein, A.; Chen, X.-L. Monitoring the dynamics of wetland inundation by random sets on multi-temporal images. Remote Sens. Environ. 2011, 115, 2390–2401. [Google Scholar] [CrossRef]
  17. Klemas, V. Using remote sensing to select and monitor wetland restoration sites: An overview. J. Coast. Res. 2013, 29, 958–970. [Google Scholar] [CrossRef]
  18. Ozesmi, S.L.; Bauer, M.E. Satellite remote sensing of wetlands. Wetl. Ecol. Manag. 2002, 10, 381–402. [Google Scholar] [CrossRef]
  19. Mallinis, G.; Gitas, I.Z.; Giannakopoulos, V.; Maris, F.; Tsakiri-Strati, M. An object-based approach for flood area delineation in a transboundary area using ENVISAT ASAR and LANDSAT TM data. Int. J. Digit. Earth 2013, 6, 124–136. [Google Scholar] [CrossRef]
  20. Robertson, L.D.; Douglas, J.K.; Davies, C. Spatial analysis of wetlands at multiple scales in Eastern Ontario using remote sensing and GIS. In Proceedings of 32nd Canadian Symposium on Remote Sensing, Sherbrooke, QC, Canada, 13–16 June 2011.
  21. Schumann, G.J.P.; Neal, J.C.; Mason, D.C.; Bates, P.D. The accuracy of sequential aerial photography and SAR data for observing urban flood dynamics, a case study of the UK summer 2007 floods. Remote Sens. Environ. 2011, 115, 2536–2546. [Google Scholar] [CrossRef]
  22. Frazier, P.S.; Page, K.J. Water body detection and delineation with Landsat TM data. Photogramm. Eng. Remote Sens. 2000, 66, 1461–1467. [Google Scholar]
  23. Feyisa, G.L.; Meilby, H.; Fensholt, R.; Proud, S.R. Automated Water Extraction Index: A new technique for surface water mapping using Landsat imagery. Remote Sens. Environ. 2014, 140, 23–35. [Google Scholar] [CrossRef]
  24. Li, W.; Du, Z.; Ling, F.; Zhou, D.; Wang, H.; Gui, Y.; Sun, B.; Zhang, X. A Comparison of land surface water mapping using the normalized difference water index from TM, ETM+ and ALI. Remote Sens. 2013, 5, 5530–5549. [Google Scholar] [CrossRef]
  25. Xu, H.Q. Modification of normalised difference water index (NDWI) to enhance open water features in remotely sensed imagery. Int. J. Remote Sens. 2006, 27, 3025–3033. [Google Scholar] [CrossRef]
  26. Gao, B.-C. NDWI—A normalized difference water index for remote sensing of vegetation liquid water from space. Remote Sens. Environ. 1996, 58, 257–266. [Google Scholar] [CrossRef]
  27. Tucker, C.J. Remote sensing of leaf water content in the near infrared. Remote Sens. Environ. 1980, 10, 23–32. [Google Scholar] [CrossRef]
  28. DigitalGlobe. DigitalGlobe Core Imagery Product Guide; DigitalGlobe Inc.: Longmont, CO, USA, 2014. [Google Scholar]
  29. KMS. Produktspecification. Danmarks Højdemodel, DHM/Terræn. Data Version 1.0; National Survey and Cadastre: Copenhagen, Denmark, 2012. [Google Scholar]
  30. Richter, R.; Schläpfer, D. Atmospheric/Topographic Correction for Satellite Imagery; DLR: Wessling, Germany, 2014. [Google Scholar]
  31. Davis, S.M.; Landgrebe, D.A.; Phillips, T.L.; Swain, P.H.; Hoffer, R.M.; Lindenlaub, J.C.; Silva, L.F. Remote Sensing: The Quantitative Approach; McGraw-Hill International Book Co.: New York, NY, USA, 1978. [Google Scholar]
  32. McFeeters, S.K. The use of the Normalized Difference Water Index (NDWI) in the delineation of open water features. Int. J. Remote Sens. 1996, 17, 1425–1432. [Google Scholar] [CrossRef]
  33. Updike, T.; Comp, C. Radiometric Use of WorldView-2 Imagery; Digital Globe Inc.: Longmont, CO, USA, 2010; pp. 1–17. [Google Scholar]
  34. Richardson, A.J.; Everitt, J.H. Using spectral vegetation indices to estimate rangeland productivity. Geocarto Int. 1992, 7, 63–69. [Google Scholar] [CrossRef]
  35. Gond, V.; Bartholomé, E.; Ouattara, F.; Nonguierma, A.; Bado, L. Surveillance et cartographie des plans d’eau et des zones humides et inondables en régions arides avec l’instrument VEGETATION embarqué sur SPOT-4. Int. J. Remote Sens. 2004, 25, 987–1004. [Google Scholar] [CrossRef]
  36. Adell, C.; Puech, C. Will the spatial analysis of water maps extracted by remote satellite detection allow locating the footprints of hunting activity in the Camargue? Bull. Soc. Fr. Photogramm. Teledetec. 2003, 76–86. [Google Scholar]
  37. Tucker, C.J. Red and photographic infrared linear combinations for monitoring vegetation. Remote Sens. Environ. 1979, 8, 127–150. [Google Scholar] [CrossRef]
  38. Rondeaux, G.; Steven, M.; Baret, F. Optimization of soil-adjusted vegetation indices. Remote Sens. Environ. 1996, 55, 95–107. [Google Scholar] [CrossRef]
  39. Huete, A.R. A soil-adjusted vegetation index (SAVI). Remote Sens. Environ. 1988, 25, 295–309. [Google Scholar] [CrossRef]
  40. Pearson, R.L.; Miller, L.D. Remote mapping of standing crop biomass for estimation of the productivity of the short-grass Prairie, Pawnee National Grasslands, Colorado. In Proceedings of the Eighth International Symposium on Remote Sensing of Environment, Ann Arbor, MI, USA, 2–6 October 1972; pp. 1357–1381.
  41. Caillaud, L.; Guillaumont, B.; Manaud, F. Essai de discrimination des modes d’utilisation des marais maritimes par analyse multitemporelle d’images SPOT, application aux marais maritimes du Centre Ouest; IFREMER: Brest, France, 1991; p. 45. [Google Scholar]
  42. Lillesand, T.M.; Kiefer, R.W. Remote Sensing and Image Interpretation, 2nd ed.; John Wiley and Sons: New York, NY, USA, 1987. [Google Scholar]
  43. Friedl, M.A.; Brodley, C.E. Decision tree classification of land cover from remotely sensed data. Remote Sens. Environ. 1997, 61, 399–409. [Google Scholar] [CrossRef]
  44. Breiman, L.; Friedman, J.; Stone, C.J.; Olshen, R.A. Classification and Regression Trees; Wadsworth International Group: Belmont, CA, USA, 1984. [Google Scholar]
  45. Trimble. eCognition Developer 9.0 - Reference Book; Trimble Germany GmbH: Munich, Germany, 2014. [Google Scholar]
  46. Blaschke, T. Object based image analysis for remote sensing. ISPRS J. Photogramm. Remote Sens. 2010, 65, 2–16. [Google Scholar] [CrossRef]
  47. Blaschke, T.; Hay, G.J.; Kelly, M.; Lang, S.; Hofmann, P.; Addink, E.; Queiroz Feitosa, R.; van der Meer, F.; van der Werff, H.; van Coillie, F.; et al. Geographic Object-Based Image Analysis—Towards a new paradigm. ISPRS J. Photogramm. Remote Sens. 2014, 87, 180–191. [Google Scholar] [CrossRef] [PubMed]
  48. Blaschke, T.; Strobl, J. What’s wrong with pixels? Some recent developments interfacing remote sensing and GIS. Was ist mit den Pixeln los? Neue Entwicklungen zur Integration von Fernerkundung und GIS 2001, 14, 12–17. [Google Scholar]
  49. Baatz, M.; Schäpe, A. Multiresolution segmentation: an optimization approach for high quality multi-scale image segmentation. Angew. Geogr. Inf. 2000, 12, 12–23. [Google Scholar]
  50. Laben, C.A.; Brower, B.V. Process for enhancing the spatial resolution of multispectral imagery using pan-sharpening. U.S. Patent No. 6,011,875, 4 January 2000. [Google Scholar]
  51. Herrera-Cruz, V.; Koudogbo, F.; Herrera, V. TerraSAR-X Rapid Mapping for Flood Events. In Proceedings of the International Society for Photogrammetry and Remote Sensing (Earth Imaging for Geospatial Information), Hannover, Germany, 2009; pp. 170–175.
  52. Jain, S.K.; Singh, R.D.; Jain, M.K.; Lohani, A.K. Delineation of flood-prone areas using remote sensing techniques. Water Resour. Manag. 2005, 19, 333–347. [Google Scholar] [CrossRef]
  53. Lu, S.L.; Wu, B.F.; Yan, N.N.; Wang, H. Water body mapping method with HJ-1A/B satellite imagery. Int. J. Appl. Earth Obs. 2011, 13, 428–434. [Google Scholar] [CrossRef]
  54. ERDAS. IMAGINE Subpixel Classifier User’s Guide; ERDAS, Inc.: Norcross, GA, USA, 2009. [Google Scholar]
  55. Pontius, R.G.; Millones, M. Death to Kappa: Birth of quantity disagreement and allocation disagreement for accuracy assessment. Int. J. Remote Sens. 2011, 32, 4407–4429. [Google Scholar] [CrossRef]
  56. Pontius, R.G.; Santacruz, A. Quantity, exchange, and shift components of difference in a square contingency table. Int. J. Remote Sens. 2014, 35, 7543–7554. [Google Scholar] [CrossRef]
  57. Arroyo, L.A.; Johansen, K.; Armston, J.; Phinn, S. Integration of LiDAR and QuickBird imagery for mapping riparian biophysical parameters and land cover types in Australian tropical savannas. For. Ecol. Manag. 2010, 259, 598–606. [Google Scholar] [CrossRef]
  58. Grenier, M.; Demers, A.M.; Labrecque, S.; Benoit, M.; Fournier, R.A.; Drolet, B. An object-based method to map wetland using RADARSAT-1 and Landsat ETM images: Test case on two sites in Quebec, Canada. Can. J. Remote Sens. 2007, 33, S28–S45. [Google Scholar] [CrossRef]
  59. Byrd, K.B.; O’Connell, J.L.; Di Tommaso, S.; Kelly, M. Evaluation of sensor types and environmental controls on mapping biomass of coastal marsh emergent vegetation. Remote Sens. Environ. 2014, 149, 166–180. [Google Scholar] [CrossRef]
  60. Beget, M.E.; Di Bella, C.M. Flooding: The effect of water depth on the spectral response of grass canopies. J. Hydrol. 2007, 335, 285–294. [Google Scholar] [CrossRef]
  61. Jakubauskas, M.; Kindscher, K.; Fraser, A.; Debinski, D.; Price, K.P. Close-range remote sensing of aquatic macrophyte vegetation cover. Int. J. Remote Sens. 2000, 21, 3533–3538. [Google Scholar] [CrossRef]
  62. Baret, F.; Guyot, G.; Begue, A.; Maurel, P.; Podaire, A. Complementarity of middle-infrared with visible and near-infrared reflectance for monitoring wheat canopies. Remote Sens. Environ. 1988, 26, 213–225. [Google Scholar] [CrossRef]
  63. Brivio, P.A.; Colombo, R.; Maggi, M.; Tomasoni, R. Integration of remote sensing data and GIS for accurate mapping of flooded areas. Int. J. Remote Sens. 2002, 23, 429–441. [Google Scholar] [CrossRef]
  64. Pierdicca, N.; Chini, M.; Pulvirenti, L.; Macina, F. Integrating physical and topographic information into a fuzzy scheme to map flooded area by SAR. Sensors 2008, 8, 4151–4164. [Google Scholar] [CrossRef] [Green Version]
  65. Pulvirenti, L.; Pierdicca, N.; Chini, M.; Guerriero, L. An algorithm for operational flood mapping from Synthetic Aperture Radar (SAR) data using fuzzy logic. Nat. Hazards Earth Syst. Sci. 2011, 11, 529–540. [Google Scholar] [CrossRef]
  66. Zwenzner, H.; Voigt, S. Improved estimation of flood parameters by combining space based SAR data with very high resolution digital elevation data. Hydrol. Earth Syst. Sci. 2009, 13, 567–576. [Google Scholar] [CrossRef]
