Article

Identification of Water Body Extent Based on Remote Sensing Data Collected with Unmanned Aerial Vehicle

Institute of Geodesy and Geoinformatics, Wroclaw University of Environmental and Life Sciences, 50-375 Wroclaw, Poland
* Author to whom correspondence should be addressed.
Water 2019, 11(2), 338; https://doi.org/10.3390/w11020338
Submission received: 27 November 2018 / Revised: 12 January 2019 / Accepted: 13 February 2019 / Published: 16 February 2019
(This article belongs to the Special Issue Applications of Remote Sensing and GIS in Hydrology)

Abstract

The paper presents an efficient methodology for water body extent estimation based on remotely sensed data collected with a UAV (Unmanned Aerial Vehicle). The methodology covers data collection with selected sensors and the processing of the remotely sensed data into accurate geospatial products that are finally used to estimate water body extent. Three sensors were investigated: an RGB (Red Green Blue) camera, a thermal infrared camera, and a laser scanner. The platform used to carry each of these sensors was an Aibot X6, a multirotor UAV. Test data was collected at six sites containing different types of water bodies, including four river sections, an old river bed, and a part of a lake shore. The processing of the collected data resulted in 2.5-D and 2-D geospatial products that were subsequently used for water body extent estimation. Depending on the sensor, the created geospatial product, and the type of water body and land cover, three strategies employing image processing tools were developed to estimate the water body range. The obtained results were assessed in terms of classification accuracy (distinguishing the water body from the land) and the planar accuracy of the water body extent. The product identified as the most suitable for water body detection was the four-band RGB+TIR (Thermal InfraRed) ortho mosaic. It allowed achieving an average kappa coefficient of water body identification above 0.9. The planar accuracy of the water body extent varied depending on the sensor, the geospatial product, and the test site conditions, but it was comparable with results obtained in similar studies.

1. Introduction

The problem of identifying water body extent with sufficient reliability occurs in many applications. As a part of the hydromorphological characterization of river valleys, information about the water body can serve as a factor in the assessment of hydrology, morphology, and flow continuity [1]. This information is also used for river management and restoration [2]. Up-to-date knowledge about water body extent is very important in applications related to flood hazards. In general, the activities associated with flood hazards are of two kinds: protection and emergency response. The protection rules for European Union countries were stated by directive 2007/60/EC [3], which requires river basin management plans to be developed for each river basin. In the United States, FEMA (Federal Emergency Management Agency) was established by the Presidential Reorganization Plan No. 3 of 1978 to coordinate the response to disasters that overwhelm the resources of local and state authorities. These and similar regulations and institutions in other countries aim at ensuring safety in flood-prone areas. Flood zones, defined by FEMA as areas with significant flood risk, i.e., areas with a 1% annual chance of flood occurrence, as well as areas of moderate flood hazard, i.e., between the limits of the 100-year and 500-year floods, partly overlap urbanized areas. According to the Atlas of the Human Planet 2017, around 1 billion people in 155 countries are exposed to floods [4].
The shape of the water line can be used in flood-hazard mapping and, as calibration and validation data, in hydrodynamic modelling. This line, as well as the digital elevation model obtained at the same time, may serve as the basis for determining the boundary conditions of this modelling. Indirectly, it can also serve for surface roughness parametrization or the delineation between the river bed and the floodplain [5,6,7]. If the mapping of flood extent is executed with a high temporal resolution, it can be used to model flood dynamics.
The identification of the water body range plays an extremely important role in emergency activities. The actual location of the water body extent during a flood impacts subsequent actions, such as the reinforcement of embankments or the evacuation of people. In this case, a high spatial and temporal resolution of the flood extent is required.
The identification of water bodies using airborne and satellite remote sensing data is a well-investigated issue with an extensive publication record. A recent review paper by Wang and Xie [8] emphasizes this fact. Typically, information about the water bodies of large rivers is obtained by means of aerial (e.g., Reference [9]) or satellite imagery (e.g., References [10,11,12]). However, both of these methods have limitations. For example, satellite images may not be available during or shortly after the occurrence of the hazard. Some satellite systems can deliver images even daily [13], but during a flood, updates of its extent should be more frequent, possibly even hourly. Moreover, satellite images are not useful if collected on overcast days. Besides technical limitations, the costs of such data are relatively high. Further aspects related to the identification of water bodies in satellite imagery have been discussed in the review by Nath and Deb [14].
The abovementioned limitations can be partially removed by exploiting SAR (Synthetic Aperture Radar) data. This remote sensing technique is considered an all-weather technique. In addition, SAR data is easily accessible since the European Space Agency Sentinel-1 satellites reached their operational stage. However, due to its limited resolution, SAR data can be utilized for water body delineation at regional and global scales (e.g., References [15,16,17]) rather than for mapping a small water reservoir or stream. At a local scale, a reasonable alternative seems to be mapping executed with other sensors from a lower altitude using manned or unmanned aircraft. For readers interested in SAR remote sensing of water bodies, we recommend the most recent survey papers: References [8,18].
Airborne mapping is economically justified if performed for large areas or for areas of high importance, such as large cities. Typical airborne mapping is performed with airplanes equipped with photogrammetric cameras that require specific conditions for use. One of the critical conditions is the flight altitude. It has not only an upper limit but also a lower limit resulting from the required minimal ground overlap of consecutive images. An increase of the flight altitude lowers the resolution of the collected data [19], making it difficult to appropriately estimate the flood extent, especially if the flood occurs in areas with a complex structure, i.e., covered by higher vegetation.
Since airborne LiDAR (Light Detection and Ranging) data became widely available in the last two decades, point clouds have been utilized to estimate water body extent. This technique delivers accurate 3-D geometrical information on the terrain topography but lacks it for water areas. This is caused by the low reflectance of the typically used infrared laser beam from water surfaces, which is, however, beneficial for estimating the borderline between water and dry land (e.g., References [20,21]). The LiDAR point cloud density strongly determines the accuracy of this borderline estimation. To improve this accuracy, Wu et al. [22] proposed a method that relies on the integration of aerial images and LiDAR point clouds.
However, all the abovementioned remote sensing data acquisition techniques meet their limitations in local scale applications. For this reason, the use of UAVs (Unmanned Aerial Vehicles) in water body extent mapping seems promising.
Several applications of UASs (Unmanned Aerial Systems) for flood forecasting, prevention, detection, monitoring, and flood damage assessment can be found in the literature. For instance, Casado et al. [23] presented a UAV-based framework for the identification of hydromorphological features in high-resolution RGB (Red Green Blue) aerial images using an artificial neural network classifier. The same classifier was used in Reference [24] for the remote detection of flooded areas in UAV data. In the work of Casado et al. [1], aerial images of various resolutions were tested in a similar task. Milani et al. [25] used UAV images to quantify spatial and temporal floodplain dynamics. Witek et al. [26] proposed a real-time flood warning system in which UAS data is used for the verification of the overbank flow prognosis. River bank monitoring by means of UAV photogrammetry and an improved point cloud filtering method are the subjects of the most recent study by Tan et al. [27]. Water level measurement from UAV platforms was investigated in Reference [28]. Van Iersel et al. [29] used time series of UAV RGB images to classify land cover, including water areas, and to classify vegetation in a river floodplain by means of the random forests algorithm. The potential of UAS in flood risk analysis was also highlighted in the work of Wang [30].
In contrast to the abovementioned research related to UAS, this investigation focuses on assessing the usefulness of various UAS remote sensing geospatial products in the recognition of water bodies. We assumed that this recognition should be fast so that it can potentially serve in flood emergency applications. For that reason, the processing should be as simple as possible and should avoid time-consuming classification algorithms (e.g., random forests). Moreover, all the cited publications use RGB images only; however, passive remote sensing in the visual part of the electromagnetic spectrum is often limited, and thus, other sensors were also investigated in this work. In particular, three different sensors were tested: an RGB camera, a TIR (Thermal InfraRed) camera, and a laser scanner. The platform used to carry each of these sensors was a multirotor UAV. At the data processing stage, three semiautomatic strategies based on image processing methods were developed and tested to identify areas covered by water.

2. Materials and Methods

2.1. Test Sites and Data Collection

The test data was collected at six test sites, which were chosen to cover various water bodies with respect to topography, vegetation, and water flow. We selected four sections of small or medium rivers, one old river bed, and a part of a lake shore. More detailed characteristics of the test sites, including maps (ortho mosaics), are given in Table 1.
The unmanned platform used for data collection was the Aibot X6 V2 hexacopter. The UAS data for each site was collected with three sensors:
  • RGB camera: Nikon D800 + Nikkor AF-S 24-85 mm f/3.5–4.5G ED VR (Nikon Corporation, Shinagawa, Tokyo, Japan),
  • TIR camera: Optris PI Lightweight 450 (Optris GmbH, Berlin, Germany) and
  • Laser scanner (LiDAR): Velodyne HDL-32E (Velodyne LiDAR, San Jose, CA, USA).
RGB images were collected with a DSLR (Digital Single Lens Reflex) camera equipped with a standard zoom lens that was fixed during data acquisition to a 24 mm focal length to obtain the widest possible angle of view. The spectral range of the TIR sensor is 7.5–13 μm, and it allows mapping temperatures from −20 to 900 °C. During image preprocessing, the temperatures were encoded into 8-bit gray values based on the range of the actual temperatures on the site. The GSD (Ground Sampling Distance) of the TIR images ranged from about 12 to about 25 cm depending on the flight plan parameters for a particular site. Note that the GSD of the RGB images was about sixteen times smaller. In addition, GCPs (Ground Control Points) were measured with the GNSS (Global Navigation Satellite System) RTK (Real Time Kinematic) technique for the further georeferencing of both types of imagery data. These measurements were performed using a Leica GS14 receiver (Leica Geosystems, Heerbrugg, Switzerland). GCPs were signalized in the field as cardboard targets where one part was covered by a printed black and white marker and the other by aluminum foil. The black and white and the aluminum markers were used for RGB and TIR image georeferencing, respectively. Note that aluminum foil has a very low emissivity [31]; thus, it was clearly visible in the TIR images even if the square target occupied only a few pixels in the image. The Velodyne HDL-32E is a pulsed laser scanner equipped with 32 laser diodes emitting near infrared laser beams. Although the used laser scanner is capable of detecting two returns from a single emitted pulse, tests showed that only a few points were created from the second echo; thus, the return number parameter was not used during further point cloud processing. For the LiDAR data georeferencing, geodetic grade on-board GNSS and inertial navigation data as well as GNSS ground station data were collected. More detailed information on the collected data is given in Table 2. Examples of the data (images and point cloud) collected with each sensor are shown in Figure 1.
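The paper does not detail how the temperature-to-gray-value encoding was implemented; the following minimal sketch (in Python with NumPy, our choice of language throughout) shows one plausible realization as a linear stretch of the site's actual temperature range to 8-bit values:

```python
import numpy as np

def encode_temperatures(temp: np.ndarray) -> np.ndarray:
    """Scale a raster of scene temperatures (deg C) to 8-bit gray values.

    The paper states that temperatures were encoded based on the range
    actually observed on each site; a linear min-max stretch is assumed
    here as one plausible realization (not confirmed by the authors).
    """
    t_min, t_max = np.nanmin(temp), np.nanmax(temp)
    scaled = (temp - t_min) / (t_max - t_min)       # 0..1 over the site range
    return np.round(scaled * 255).astype(np.uint8)  # 8-bit gray values
```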

2.2. Data Preprocessing

Remotely sensed data collected with a UAV requires preprocessing before the creation of accurate geospatial products that can finally be used to estimate the water body range. The type of products and the preprocessing methodology depend on the sensor used. Because UAS mapping is similar to traditional airborne mapping [32], a similar processing methodology may be applied.
The processing of RGB and TIR images was executed in two stages. The goal of the first stage is to find the position and orientation of the images at the time of data acquisition. This task is solved in the process known in photogrammetry as AT (Aero Triangulation) [33]. AT typically uses GCPs, so the AT results as well as the subsequent products are georeferenced. The second stage aims at the creation of 3-D, 2.5-D, and 2-D geospatial products. The primary product is a 3-D point cloud created by an image dense matching algorithm (e.g., Reference [34]). It should be noted that image dense matching algorithms are not versatile and that points cannot be created under certain circumstances. For example, for the collected RGB images, points were usually not created for high vegetation (trees) and water areas, except where the water was shallow and the bottom of the water body was visible (a small part of the embankment at the Mietkow Lake site and the bottom of the river bed at the Bobr site) or where the water body was covered by duckweed (the Old river bed site). In the case of the lower resolution TIR images, some points were created on the water surface, though they were noise points with wrong heights. The created point clouds were not used directly in the water range detection process but served as input for other raster products. The first product created from the point cloud was a 2.5-D point cloud ortho image. In this case, the value of a raster pixel (cell) indicates the height. These heights were not interpolated; instead, each pixel received the maximal height of all points falling into the cell (empty cells may also occur). The second created product was a 2-D ortho mosaic. It is also a raster product, but its pixel values are the same as in the original RGB or TIR images. The ortho mosaic is created in the orthorectification process, where images having a perspective projection are rectified to an orthogonal projection based on the DSM (Digital Surface Model) and then mosaicked [33]. The processing of RGB and TIR images was executed using the Agisoft Photoscan software (Agisoft LLC, St. Petersburg, Russia), except for the creation of the point cloud ortho image, which was performed using the CloudCompare software (open source project, https://www.danielgm.net/cc/).
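For illustration, the 2.5-D point cloud ortho image described above (maximal point height per cell, no interpolation, empty cells allowed) can be sketched as follows; the gridding details (cell origin, row order) are our assumptions, not taken from the paper:

```python
import numpy as np

def point_cloud_ortho_image(xyz: np.ndarray, cell: float) -> np.ndarray:
    """Rasterize a georeferenced point cloud into a 2.5-D ortho image:
    each cell stores the maximum point height falling into it; cells that
    receive no points stay NaN (empty), as described in the text.

    xyz: (N, 3) array of points; cell: pixel size in metres.
    """
    x, y, z = xyz[:, 0], xyz[:, 1], xyz[:, 2]
    col = ((x - x.min()) / cell).astype(int)
    row = ((y.max() - y) / cell).astype(int)   # row 0 = northern edge
    img = np.full((row.max() + 1, col.max() + 1), np.nan)
    # np.fmax treats NaN as "no value yet", so each cell ends up
    # holding the maximum z of all points mapped to it
    np.fmax.at(img, (row, col), z)
    return img
```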
One of the assumptions of the project was to investigate the effectiveness of the water extent estimation using the data (product) obtained from each sensor individually. However, apart from the tests of the individual products, an attempt at image data fusion was also executed. It is well known that image classification may be improved by extending the spectral range of information through data fusion (e.g., Reference [35]). To investigate whether the fusion of RGB and TIR data brings a significant improvement of the results, the ortho mosaics created from both types of images were combined into a four-band ortho mosaic using the ArcGIS software. Since the RGB and TIR products have different spatial resolutions, they required resampling to a uniform resolution. A GSD equal to 10 cm and bicubic resampling were used.
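The fusion itself was performed in ArcGIS; a rough NumPy/SciPy equivalent of the described steps (bicubic resampling of both mosaics to a common 10 cm GSD, then stacking into four bands) might look like this, assuming the mosaics are already co-registered and cover the same extent:

```python
import numpy as np
from scipy.ndimage import zoom

def fuse_rgb_tir(rgb: np.ndarray, rgb_gsd: float,
                 tir: np.ndarray, tir_gsd: float,
                 target_gsd: float = 0.10) -> np.ndarray:
    """Combine co-registered RGB and TIR ortho mosaics into one 4-band raster.

    rgb: (H, W, 3) float array; tir: (h, w) float array; GSDs in metres.
    Bicubic resampling = spline interpolation of order 3.
    """
    rgb_res = zoom(rgb, (rgb_gsd / target_gsd, rgb_gsd / target_gsd, 1), order=3)
    tir_res = zoom(tir, (tir_gsd / target_gsd, tir_gsd / target_gsd), order=3)
    # Crop to the common shape to absorb off-by-one rounding of zoom factors
    h = min(rgb_res.shape[0], tir_res.shape[0])
    w = min(rgb_res.shape[1], tir_res.shape[1])
    return np.dstack([rgb_res[:h, :w, :], tir_res[:h, :w]])
```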
In addition, a DSM interpolated from the point cloud was initially considered as a product that might be useful for water extent identification, but initial experiments showed that its usefulness is limited, and it was omitted from further analysis.
The processing of UAS LiDAR data differs from image processing. Laser scanners such as the Velodyne produce the point cloud in a scanner-local coordinate system that changes its global position and orientation during the flight. In order to georeference this point cloud (transformation from the local to the global coordinate system, e.g., Reference [36]), the trajectory of the UAV in 6 degrees of freedom (position and orientation) needs to be reconstructed. This task is solved by integrating on-board rover and ground base station GNSS data with on-board inertial data (linear accelerations and angular velocities) using an Extended Kalman Filter [37]. The use of high-grade on-board navigation sensors, especially the IMU (Inertial Measurement Unit), has a critical impact on the accuracy of the reconstructed trajectory and, consequently, on the accuracy of the created point cloud [36]. In this work, the georeferenced LiDAR point cloud was obtained using vendor-provided software (NovAtel Inertial Explorer for trajectory reconstruction and Leica Pegasus AutoP for point cloud coordinate transformations).
In contrast to the point clouds created from RGB image dense matching, LiDAR points were created for high vegetation. However, points for the bottom of the water body were not created even if the water was shallow and transparent. This is because the near infrared wavelength used by the Velodyne laser scanner is strongly absorbed by water. Points on the water surface were created only if other objects were floating on it (e.g., duckweed at the Old river bed site). Based on the georeferenced point cloud, further geospatial products can be created. In this work, only a 2.5-D ortho image could be created. This product was created in the same manner as the corresponding product created from imagery data.
Since the coordinates of the GCPs and the LiDAR points were given in the same coordinate system, the products created from the data collected with the three different sensors did not require additional co-registration. The list of created geospatial products with respect to the sensor, together with their ground resolutions, is given in Table 3. This table also shows the density of the point clouds used to create the subsequent products.

2.3. Strategies of Water Body Range Estimation

Since the created 2-D and 2.5-D products were converted to raster images, image analysis tools can be used to estimate the water body extent. This extent is determined as the border between pixels recognized as water and non-water areas. Note that fast methods are preferred in this study; thus, simple image (raster) processing methods are favored.
Three semiautomatic strategies were developed and tested to identify water and non-water areas. These strategies are based on
  • supervised classification,
  • thresholding of pixel values, and
  • image transforms.
The developed strategies are shown as a diagram in Figure 2 and described in more detail in the following subsections. Numerical experiments of the classification were performed using the ArcGIS 10.1 software (Esri, Redlands, CA, USA) and were evaluated using the authors' own software.

2.3.1. Strategy 1: Supervised Classification

This strategy may be applied to both considered products—point cloud ortho image and ortho mosaic—as well as their combination.
The first step is optional data resampling. Since the UAS products may have ground resolutions different from the desired (optimal) ones, down-sampling or up-sampling may be required to match the user-specified resolution. The bicubic resampling method is used in the experiments. Next, an image mask is created based on the thresholding of pixel values. The threshold value is selected manually by analyzing the histogram of the pixel values. This stage is optional and aims at reducing the image area subjected to further processing in order to speed up the process. For example, in an RGB ortho mosaic, ground without vegetation has high intensities (bright colors) and the water surface has low intensities (dark colors); thus, the high intensities can be removed. Similarly, in a point cloud ortho image, a water area has a low intensity (low height) and tree crowns have high intensities.
The next step is supervised classification using the maximum likelihood method. Supervised classification of the image means grouping pixels into classes specified by the user, who provides training samples of each class in order to train the algorithm. The maximum likelihood method takes into account the statistical parameters of the pixel distribution in each class. The probabilistic parameters (a priori probability, mean values, and covariance matrices) are calculated from the training data. The classification was executed assuming that the feature density follows the multidimensional normal distribution:
$$ f_j(x) = \frac{1}{(2\pi)^{k/2}\,|E_j|^{1/2}}\,\exp\!\left[-0.5\,(x - m_j)^T E_j^{-1} (x - m_j)\right] $$
where $m_j$ is the vector of mean values, $E_j$ is the feature covariance matrix, and $k$ is the number of features.
Training samples were selected manually as parts of the image by coarse digitization of the areas belonging to the two classes (Figure 3a). It was assumed that the training set for each of the two classes (Water and Non-water) should include at least 40% of the raster pixels belonging to that class. The result of the classification is shown in Figure 3b. To obtain a vector representation of the water body range, an automatic conversion of the class Water from the raster to the vector format is performed. Since the classifier may produce small areas classified as water that are in fact non-water areas, the results of the classification need to be further filtered. This was executed by modal filtering of the raster classification results using a 5 × 5 pixel mask [38]. Figure 3c shows the class Water after applying the modal filter. After the conversion to the vector format, additional filtering (using a Structured Query Language (SQL) query) can be performed to remove non-relevant (small) areas classified as water. The user may specify an area threshold below which regions should not be classified as the class Water. The final result after applying this filter with a threshold of 300 m2 is shown in Figure 3d. The boundary of the class Water is the water body range.
Because the memory size of the products created from the high-resolution Nikon images is very large, this strategy was tested for two cases: the original GSD of the products and after applying the optional down-sampling to a GSD of 10 cm. The comparison of the classification results obtained for both resolutions showed that this resampling does not have a significant impact on the estimation of the water body range. In addition, the bicubic down-sampling acted similarly to a low-pass filter, reducing the noise and possible data gaps that may occur during product creation.
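Although the classification was run in ArcGIS, the maximum likelihood rule defined by the density above is straightforward to sketch directly; in the example below (ours, with assumed function names), per-class priors, means, and covariances are estimated from the training pixels, and each pixel is assigned to the class with the highest log-likelihood plus log prior:

```python
import numpy as np

def train_max_likelihood(samples: dict) -> dict:
    """Estimate per-class parameters from training pixels.

    samples: {class_name: (n, k) array of feature vectors}, e.g., the
    manually digitized Water / Non-water samples. Returns, per class, the
    a priori probability, the mean vector m_j, and the covariance E_j
    used in the density f_j(x) above.
    """
    total = sum(len(s) for s in samples.values())
    return {name: (len(s) / total, s.mean(axis=0), np.cov(s, rowvar=False))
            for name, s in samples.items()}

def classify_max_likelihood(pixels: np.ndarray, model: dict) -> np.ndarray:
    """Assign each pixel (row of an (m, k) feature array) to the class with
    the highest log-likelihood plus log prior; class indices follow the
    insertion order of `model` (e.g., 0 = Water, 1 = Non-water)."""
    scores = []
    for prior, m_j, E_j in model.values():
        diff = pixels - m_j
        _, log_det = np.linalg.slogdet(E_j)
        # Mahalanobis distance of every pixel to the class mean
        maha = np.einsum('ij,jk,ik->i', diff, np.linalg.inv(E_j), diff)
        scores.append(np.log(prior) - 0.5 * (log_det + maha))
    return np.argmax(np.stack(scores, axis=1), axis=1)
```

To classify a four-band raster of shape (H, W, 4), reshape it to (-1, 4), call classify_max_likelihood, and reshape the returned indices back to (H, W); the 5 × 5 modal filtering and small-area removal described above are then applied to the resulting class raster.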

2.3.2. Strategy 2: Thresholding of Pixel Values

This strategy is similar to the previously described strategy based on supervised classification and may be applied to the point cloud ortho image only if a specific condition is met: the point cloud must contain points lying on or below the water surface, e.g., on the bottom of the river or on stones protruding from the water surface, as at the Bobr test site. In this case, the point cloud ortho image has significantly lower intensities in areas covered by water because the intensity of the point cloud ortho image is related to the height. One of the maxima in the intensity histogram then reflects the water surface height, and the threshold value can be easily selected. Consequently, water areas can be distinguished by thresholding the image values, making supervised classification unnecessary. The next steps, i.e., modal filtering, conversion to the vector format, and the SQL query, are the same as in the supervised classification strategy.
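A minimal sketch of this thresholding step (our illustration, not the authors' code): the threshold is read off the height histogram, e.g., just above the maximum corresponding to the water surface, and applied directly to the point cloud ortho image:

```python
import numpy as np

def water_mask_by_threshold(ortho_image: np.ndarray,
                            z_threshold: float) -> np.ndarray:
    """Binary water mask for a point cloud ortho image whose pixel values
    are heights; z_threshold is picked manually from the height histogram."""
    valid = ~np.isnan(ortho_image)            # cells that received points
    return valid & (ortho_image <= z_threshold)

# Inspecting the histogram to locate the water-surface maximum:
# counts, edges = np.histogram(img[~np.isnan(img)], bins=100)
```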

2.3.3. Strategy 3: Image Transforms

This strategy may be applied to the point cloud ortho image if the previously described strategies cannot be used. This is the case when points were not created (collected) for areas covered by water, which is typical for both acquisition techniques: image dense matching (products obtained from RGB or TIR images) as well as laser scanning. Because the image texture of water surfaces is insufficient for the dense matching process, matching points are not created for such areas. In the case of LiDAR sensors, the lack of points on the water surface is caused by the strong absorption of near infrared wavelengths by water. Consequently, reflections are not detected unless a laser beam hits an object floating on the water. The lack of points for the water area, or a very small number of them, eliminates the possibility of applying the supervised classification strategy because it is impossible to calculate the statistical parameters of the pixel distribution in the class Water for the maximum likelihood method. Similarly, the lack of pixel values precludes the thresholding strategy. In this case, water areas can be identified using a sequence of contextual and discrete image transforms.
The first step of this strategy, data resampling, is the same as before. The next step is the clipping of the image to the area covered by data using a concave hull algorithm based on point (pixel) aggregation. This step is executed to distinguish areas without image or laser scan coverage from areas where the lack of points is caused by other reasons, e.g., a water surface. The following operations are executed inside the concave hull shape only. Next, pixel binarization (reclassification) is performed. This operation assigns new binary values (0 or 1) to pixels depending on their previous values: areas without points (e.g., water) get the value 1 (white) and areas with points get the value 0 (black) regardless of the point height (pixel value). An exemplary result of the binarization is shown in Figure 4a. The next steps, i.e., modal filtering (Figure 4b), conversion to the vector format, and the SQL query (Figure 4c), are the same as in the supervised classification strategy.
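The two raster steps specific to this strategy, binarization of no-data cells inside the data hull and the modal filtering, can be sketched as below; computing the concave hull itself is assumed to be done elsewhere (e.g., with an alpha-shape tool) and enters as a boolean mask:

```python
import numpy as np
from scipy.ndimage import generic_filter

def water_from_missing_points(ortho_image: np.ndarray,
                              hull_mask: np.ndarray) -> np.ndarray:
    """Strategy 3 core: inside the data hull (hull_mask True), cells without
    points (NaN) become 1 (candidate water) and cells with points become 0,
    regardless of height; a 5 x 5 modal (majority) filter then removes
    salt-and-pepper artifacts, as in the other strategies."""
    binary = (np.isnan(ortho_image) & hull_mask).astype(np.uint8)
    modal = generic_filter(
        binary, lambda w: np.bincount(w.astype(int)).argmax(), size=5)
    return modal.astype(bool) & hull_mask
```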

2.3.4. Evaluation of Identification

The evaluation of the identification of areas covered by water was performed by comparing the reference water area with the water areas identified using the investigated strategies. The reference water area was obtained by manual digitization executed on the high-resolution ortho mosaic created from RGB images. Due to the high resolution of the RGB images and their indirect georeferencing, the horizontal accuracy of this product is the highest among all products created from the tested sensors. Note that the obtained accuracy of the RGB image block adjustment for all sites was better than 4.5 cm and 6.5 cm in the horizontal and vertical directions, respectively. The quantitative evaluation of the classification was based on the confusion matrix (error matrix), which gives the number of objects belonging to each class in the reference (actual) and classified (predicted) data sets. In the case of two classes, or when analyzing a single class, the confusion matrix contains four values. Based on the confusion matrix, the following accuracy parameters can be calculated:
  • User accuracy $u_w$ for the class Water:
    $$ u_w = \frac{TP}{TP + FP} $$
  • Producer accuracy $p_w$ for the class Water:
    $$ p_w = \frac{TP}{TP + FN} $$
  • Overall accuracy $d$:
    $$ d = \frac{TP + TN}{TP + FN + FP + TN} $$
  • Kappa coefficient $\kappa$:
    $$ \kappa = \frac{d - P_e}{1 - P_e} $$
    where $P_e = \frac{(TP + FN)\,(TP + FP) + (FP + TN)\,(FN + TN)}{(TP + FN + FP + TN)^2}$, TP is the number of analyzed objects correctly recognized by the classifier, FN is the number of analyzed objects incorrectly recognized by the classifier as remaining objects, FP is the number of remaining objects incorrectly recognized by the classifier as analyzed objects, and TN is the number of remaining objects correctly recognized by the classifier.
All accuracies have a maximum value of 1, meaning no errors in the classification (100% of all objects are classified correctly). Similarly, the maximum value of the kappa coefficient also equals 1. Note that in image classification, the number of objects means the number of pixels. The above parameters are invariant to image resampling, since all values in the confusion matrix are scaled proportionally if resampling is applied.
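A direct implementation of these four measures (ours, for illustration; the evaluation in the paper was done with the authors' own software):

```python
def classification_accuracy(tp: int, fn: int, fp: int, tn: int) -> dict:
    """Accuracy measures for the class Water from a two-class confusion
    matrix, following the formulas above."""
    n = tp + fn + fp + tn
    user = tp / (tp + fp)            # u_w
    producer = tp / (tp + fn)        # p_w
    overall = (tp + tn) / n          # d
    p_e = ((tp + fn) * (tp + fp) + (fp + tn) * (fn + tn)) / n ** 2
    kappa = (overall - p_e) / (1 - p_e)
    return {"user": user, "producer": producer,
            "overall": overall, "kappa": kappa}

# Example: tp=90, fn=10, fp=5, tn=95 gives overall 0.925, P_e 0.5, kappa 0.85.
```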

2.3.5. Geometrical Accuracy Assessment

The accuracy parameters provided in the previous section (user, producer, and overall accuracy and the kappa coefficient) are global values that characterize the quality of the classification. However, these parameters do not provide any information about the planar accuracy of the contour line that determines the water body extent. Meanwhile, this aspect seems to be crucial, especially in the context of flood extent determination. To assess this geometrical accuracy, distances were measured between the reference contour line, obtained from the high-resolution RGB ortho mosaic by means of manual digitization, and the water contour line received as a result of the water body identification using one of the strategies described earlier. Each contour line was represented by a polyline. The vertices of the analyzed contour line were projected orthogonally onto the reference polyline, and the distances between them (the vertices and their projections) were calculated. These distances were treated as residuals to the reference line; thus, accuracy parameters could be calculated. Two parameters were calculated: the mean value of the residuals and their standard deviation. This analysis was performed for representative sections of the identified contour line. The representative section for most of the sites was one riverbank, and the analyzed lines contained at least 200 vertices. Note that the calculated planar accuracy parameters are influenced by two factors: the geometrical accuracy of the geospatial product and the accuracy of the water edge detection executed using one of the strategies described earlier.
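The paper does not name a tool for this step; one possible realization, sketched here with Shapely (an assumption on our part), uses the fact that the shortest distance from a vertex to the reference polyline equals the distance to its orthogonal projection (or to the nearest polyline endpoint where no perpendicular foot exists):

```python
import numpy as np
from shapely.geometry import LineString, Point

def contour_accuracy(analyzed: LineString, reference: LineString):
    """Mean and standard deviation of the residuals between the vertices
    of the analyzed water contour line and the reference polyline."""
    residuals = np.array([Point(v).distance(reference)
                          for v in analyzed.coords])
    return residuals.mean(), residuals.std()
```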

3. Results and Discussion

3.1. Identification of Water Body Range

Experiments of water body range estimation were executed for all products at all test sites. The identification strategy for each experiment was selected based on the test site features (Table 1) and the product. The identification of the water class on an ortho mosaic can be executed using only the supervised classification strategy. Because this strategy is the most time-consuming and requires the manual selection of training samples, the identification of the water class on point cloud ortho images was executed with the image transforms strategy for the sites and point cloud ortho images where points were not created on or below the water surface. This strategy was used even for the Old river bed site because most of the water area belongs to the Mala Panew river, which was not covered by duckweed. The image transforms strategy could not be used for the point cloud ortho images created from the RGB and TIR images for the Bobr site because points were created on the water area. In this case, the thresholding of pixel values and the supervised classification strategies were used for the point cloud ortho images created from the RGB and TIR images, respectively. The use of supervised classification for the TIR point cloud ortho image at this site was necessary because of the significant height noise of the points created on the water surface. The selected strategies are shown in Table 4.
The results of the executed classification experiments and the reference class Water are visualized in Table 5, while the quality parameters of the classification are shown in Figure 5.
The qualitative analysis of the results (Table 5) as well as the quantitative analysis (Figure 5) show high consistency between the identified and reference class Water for most of the experiments. The overall accuracy and kappa coefficient are greater than 0.8 for all experiments that use the RGB and RGB+TIR ortho mosaics. According to Landis and Koch [39], such values indicate almost perfect agreement between the classification results and the reference data. Also, the point cloud ortho images obtained from RGB as well as from LiDAR data allowed achieving substantial agreement (kappa in the range of 0.6–0.8) for most of the test sites. This proves the efficiency of the proposed strategies. For a single-sensor product, the best performance was achieved by the strategy based on supervised classification applied to the ortho mosaic obtained from RGB images. In contrast, the same strategy applied to the same product created from TIR images resulted in one of the lowest accuracies. These results could probably be improved if the pixels had the original intensities of the TIR band instead of temperatures, because the Optris camera records intensities at 16-bit depth, while the temperatures were coded at only 8 bits. The strategy based on image transforms applied to the point cloud ortho image created from Velodyne data resulted in a sufficient quality; however, it was noticeably worse than the supervised classification strategy applied to an ortho mosaic. On average, the best results were obtained for the combined, four-band ortho mosaic created from RGB and TIR images. This confirms the current knowledge that automatic classification executed on visual spectral bands, such as RGB, should be supported with a spectral band from outside of the visual range. Note that visual spectral bands are usually strongly correlated. An additional infrared band reduces the possibility of a wrong classification. Moreover, additional bands remove the need for the thresholding of RGB values, improving the classification and reducing the processing time.

3.2. Geometrical Accuracy of Identified Water Body Range

The results of the geometrical accuracy analysis of the identified water body range are shown in Table 6.
The mean value reveals how far, on average, the water contour line obtained for a particular sensor and product lies from the reference contour line. This value stays within a few decimeters for all sensors and geospatial products, except in five cases. Lower accuracies, up to 3.4 m, were usually obtained for TIR products. This may be explained by the worse geometrical quality of the TIR sensor, the lower resolution of the TIR products, and, most likely, the worse quality of the water body range identification on TIR products. Surprisingly, a large error of 2.53 m was obtained for the best single-sensor product (the RGB ortho mosaic) at the Mietkow Lake site. It can be explained by the test site conditions, i.e., a low water level at the edge of the reservoir and high water transparency. Because of that, the bottom of the water reservoir in such places looked similar to dry land in the images and was classified as non-water. A similar tendency can be observed for the standard deviation values, which represent the variation of the water body contour line around the mean value. The best results for all sensors and products were achieved at the Swornica test site because the riverbank was almost free from high vegetation during data acquisition. In contrast, the heavily vegetated Old river bed site, where part of the water area was covered with duckweed, was challenging for all sensors. All accuracy parameters are significantly degraded under such environmental conditions.
To the authors' knowledge, this methodology of assessing the geometrical accuracy of water body extent determined using UAS geospatial products has not been applied previously. Because of that, we cannot directly compare our results with other studies; however, we can refer to the geometrical accuracy of the DSM at the water body margin. The most recent results provided by de Castro Vitti et al. [40] show an accuracy similar to our best results, i.e., the accuracy parameters obtained for the RGB ortho mosaic (Table 6). Note that the DSM geometrical accuracy described there is not affected by the accuracy of water edge identification.

3.3. Potential of LiDAR Sensor in Vegetated Areas

The goal of this study was to evaluate the potential of particular UAV mapping sensors in the task of water body extent estimation performed in a possibly fast manner, as it may potentially be used in an emergency situation, e.g., during a flood. Although better classification accuracy was obtained for RGB image products than for LiDAR products, the laser scanner should not be excluded from future investigations for two reasons. The first is the time needed to obtain the georeferenced point cloud. LiDAR points are created directly during data acquisition and need only georeferencing, which is much faster than the AT and dense image matching necessary for passive sensors (cameras). The second reason is the ability of the LiDAR sensor to get some reflections from the ground even in densely vegetated areas. Note that this is possible even for a low-cost laser scanner that usually gives only one return from an emitted pulse, because the high density of points causes some of the laser beams to hit the ground between the foliage. The raw point cloud can be subsequently filtered to extract only ground points, which allows estimating the water body extent in vegetated areas more reliably than with products created from images. Obviously, an appropriate filtering algorithm should be developed, since the algorithms available for typical airborne LiDAR point clouds do not perform well if the point cloud density is very high. The comparison between the point clouds created from RGB images and from LiDAR data for a riverbank covered with vegetation is shown in Figure 6b,c, respectively, while the corresponding ground points extracted semiautomatically are shown in Figure 6d,e, respectively. It can be clearly seen that the LiDAR point cloud contains more ground points, allowing the actual water range to be estimated more reliably. Note that the data was collected during summer when the vegetation was at its highest growth; thus, laser beams could not penetrate the densest vegetation, and some parts of the area have no ground points.
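To make the idea concrete, the sketch below implements a deliberately naive grid-based lowest-point filter that keeps points lying close to the lowest return of their grid cell. The cell size and tolerance are our assumptions, and this is only a stand-in for the dedicated high-density filtering the authors leave for future work (cf. the progressive morphological filter of Reference [27]):

```python
import numpy as np

def lowest_point_ground_filter(xyz: np.ndarray, cell: float = 0.5,
                               tol: float = 0.15) -> np.ndarray:
    """Naive ground extraction for a very dense point cloud: keep points
    within `tol` metres of the lowest point of their grid cell.
    Returns a boolean mask of ground candidates."""
    col = ((xyz[:, 0] - xyz[:, 0].min()) / cell).astype(int)
    row = ((xyz[:, 1] - xyz[:, 1].min()) / cell).astype(int)
    z_min = np.full((row.max() + 1, col.max() + 1), np.inf)
    np.minimum.at(z_min, (row, col), xyz[:, 2])   # per-cell minimum height
    return xyz[:, 2] <= z_min[row, col] + tol
```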

4. Conclusions

The presented investigation showed that products created from data collected with various UAS sensors can be used for a fast and reliable identification of the water body extent at a local scale. The developed strategies showed that a water area can be effectively identified on 2-D or 2.5-D raster products without the need for complex processing of 3-D data (e.g., point clouds). In contrast to most other works related to UAVs and water regions, this study proves that besides RGB cameras, TIR cameras are also useful, especially when they support the RGB bands. The planar accuracy of the water body range typically increased when the water body was identified on the combined RGB+TIR four-band ortho mosaic. Excluding the identified gross errors, the obtained planar accuracy of the identified water body range for most of the test sites equals a few decimeters and is sufficient for many applications. The worst results were obtained for highly vegetated areas where the water surface was covered with duckweed. Such conditions seem to be the most challenging because stagnant water tends to be covered by duckweed, causing points on the water surface to be created from the data of all investigated sensor types. For that reason, distinguishing between water and non-water areas is extremely difficult. Moreover, the vegetation growing near the embankment occludes the water body edge, making it invisible in the created geospatial products; however, the given results and discussion show the potential of UAS active remote sensing, in the form of laser scanning, in vegetated areas. The identification of water areas covered by high and medium vegetation may be possible for LiDAR products created from a filtered point cloud, i.e., all points that are laser reflections from the vegetation should be removed prior to product creation. In contrast to cameras, this is possible because some of the laser rays may go through the foliage and reach the ground. This method requires the development of reference data based on field measurements and will be investigated in the future.

Author Contributions

P.T. performed the classification, including the accuracy assessment, and took the lead in writing the manuscript. G.J. and A.B. designed and organized the experiment. G.J., A.B., A.W., and M.K. performed the UAS data acquisition and carried out the data processing. P.T., G.J., and A.B. analyzed the results. G.J. and A.B. helped to improve the text of the manuscript.

Funding

This research was funded by EIT Climate-KIC under the Pathfinder project "UAS remote sensing for flood extent estimation", agreement number 27/1-M/2016.

Conflicts of Interest

The authors declare no conflict of interest. The funding sponsor had no role in the creation of this manuscript.

References

  1. Casado, M.R.; Gonzales, R.B.; Wright, R.; Bellamy, P. Quantifying the Effect of Aerial Imagery Resolution in Automated Hydromorphological River Characterisation. Remote Sens. 2016, 8, 650.
  2. Vaughan, I.P.; Diamond, M.; Gurnell, A.M.; Hall, K.A.; Jenkins, A.; Milner, N.J.; Naylor, L.A.; Sear, D.A.; Woodward, G.; Ormerod, S.J. Integrating ecology with hydromorphology: A priority for river science and management. Aquat. Conserv. Mar. Freshw. Ecosyst. 2009, 19, 113–125.
  3. European Commission. Directive 2007/60/EC of the European Parliament and of the Council of 23 October 2007 on the Assessment and Management of Flood Risks. Available online: http://eur-lex.europa.eu/legal-content/EN/TXT/?uri=celex:32007L0060 (accessed on 15 February 2019).
  4. Pesaresi, M.; Ehrlich, D.; Kemper, T.; Siragusa, A.; Florczyk, A.; Freire, S.; Corbane, C. Atlas of the Human Planet 2017: Global Exposure to Natural Hazards; Joint Research Centre, Publications Office of the European Union: Luxembourg, 2017.
  5. Tymków, P.; Borkowski, A. Land cover classification using airborne laser scanning data and photographs. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2008, 37, 185–190.
  6. Tymków, P.; Borkowski, A. Vegetation modelling based on TLS data for roughness coefficient estimation in river valley. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2010, 38, 309–313.
  7. Tymków, P.; Karpina, M.; Borkowski, A. 3D GIS for flood modelling in river valleys. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2016, 175–178.
  8. Wang, X.; Xie, H. A review of applications of remote sensing and geographic information systems (GIS) in water resources and flood risk management. Water 2018, 10, 608.
  9. Feng, Q.; Liu, J.; Gong, J. Urban Flood Mapping Based on Unmanned Aerial Vehicle Remote Sensing and Random Forest Classifier—A Case of Yuyao, China. Water 2015, 7, 1437–1455.
  10. Jiang, H.; Feng, M.; Zhu, Y.; Lu, N.; Huang, J.; Xiao, T. An Automated Method for Extracting Rivers and Lakes from Landsat Imagery. Remote Sens. 2014, 6, 5067–5089.
  11. Lu, S.; Wu, B.; Wang, H. Water body mapping method with HJ-1A/B satellite imagery. Int. J. Appl. Earth Observ. Geoinf. 2011, 13, 428–434.
  12. Wang, K.; Zhu, Y. Recognition of water bodies from remotely sensed imagery by using neural network. Int. J. Image Process. 2010, 3, 265–384.
  13. Toth, C.; Jóźków, G. Remote sensing platforms and sensors: A survey. ISPRS J. Photogramm. Remote Sens. 2016, 115, 22–36.
  14. Nath, R.K.; Deb, S.K. Water-Body Area Extraction from High Resolution Satellite Images—An Introduction, Review, and Comparison. Int. J. Image Process. 2010, 3, 353–372.
  15. Xie, C.; Zhang, J.; Huang, G.; Zhao, Z.; Wang, J. Water Body Information Extraction from High Resolution Airborne SAR Image with Technique of Imaging in Different Directions and Object-Oriented. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2008, 37, 165–168.
  16. Frappart, F.; Bourrel, L.; Brodu, N.; Riofrío Salazar, X.; Baup, F.; Darrozes, J.; Pombosa, R. Monitoring of the spatio-temporal dynamics of the floods in the Guayas Watershed (Ecuadorian Pacific Coast) using global monitoring ENVISAT ASAR images and rainfall data. Water 2017, 9, 12.
  17. Prasad, N.R.; Vaibhav, G.; Praveen, K.T. Role of SAR data in water body mapping and reservoir sedimentation assessment. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2018, IV-5, 151–158.
  18. Musa, Z.N.; Popescu, I.; Mynett, A. A review of applications of satellite SAR, optical, altimetry and DEM data for surface water modelling, mapping and parameter estimation. Hydrol. Earth Syst. Sci. 2015, 19, 3755–3769.
  19. Karpina, M.; Jarząbek-Rychard, M.; Tymków, P.; Borkowski, A. UAV-based automatic tree growth measurement for biomass estimation. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2016, XLI-B8, 685–688.
  20. Hoefle, B.; Vetter, M.; Pfeifer, N.; Mandlburger, G. Water surface mapping from airborne laser scanning using signal intensity and elevation data. Earth Surf. Processes Landf. 2009, 34, 1635–1649.
  21. Smeeckaert, J.; Mallet, C.; David, N.; Chehata, N.; Ferraz, A. Large-scale classification of water areas using airborne topographic lidar data. Remote Sens. Environ. 2013, 138, 134–148.
  22. Wu, H.; Liu, C.; Zhang, Y.; Sun, W.; Li, W. Building a water feature extraction model by integrating aerial image and lidar point clouds. Int. J. Remote Sens. 2013, 34, 7691–7705.
  23. Casado, M.R.; Gonzales, R.B.; Kriechbaumer, T.; Veal, A. Automated Identification of River Hydromorphological Features Using UAV High Resolution Aerial Imagery. Sensors 2015, 15, 27969–27989.
  24. Popescu, D.; Ichim, L.; Stoican, F. Unmanned Aerial Vehicle Systems for Remote Estimation of Flooded Areas Based on Complex Image Processing. Sensors 2017, 17, 446.
  25. Milani, G.; Volpi, M.; Tonolla, D.; Doering, M.; Robinson, C.; Kneubühler, M.; Schaepman, M. Robust quantification of riverine land cover dynamics by high-resolution remote sensing. Remote Sens. Environ. 2018, 217, 491–505.
  26. Witek, M.; Jeziorska, J.; Niedzielski, T. An experimental approach to verifying prognoses of floods using an unmanned aerial vehicle. Meteorol. Hydrol. Water Manag. Res. Oper. Appl. 2014, 2, 3–11.
  27. Tan, Y.; Wang, S.; Xu, B.; Zhang, J. An improved progressive morphological filter for UAV-based photogrammetric point clouds in river bank monitoring. ISPRS J. Photogramm. Remote Sens. 2018, 146, 421–429.
  28. Ridolfi, E.; Manciola, P. Water Level Measurements from Drones: A Pilot Case Study at a Dam Site. Water 2018, 10, 297.
  29. Van Iersel, W.; Straatsma, M.; Middelkoop, H.; Addink, E. Multitemporal classification of river floodplain vegetation using time series of UAV images. Remote Sens. 2018, 10, 1144.
  30. Wang, Y. Advances in Remote Sensing of Flooding. Water 2015, 7, 6404–6410.
  31. Turner, D.; Lucieer, A.; Malenovský, Z.; King, D.H.; Robinson, S.A. Spatial co-registration of ultra-high resolution visible, multispectral and thermal images acquired with a micro-UAV over Antarctic moss beds. Remote Sens. 2014, 6, 4003–4024.
  32. Grejner-Brzezinska, D.A.; Toth, C.K.; Jóźków, G. On sensor georeferencing and point cloud generation with sUAS. In Proceedings of the ION Pacific PNT Meeting, Honolulu, HI, USA, 20–23 April 2015; Institute of Navigation: Manassas, VA, USA, 2015; pp. 839–848.
  33. Kraus, K. Photogrammetry—Geometry from Images and Laser Scans, 2nd ed.; Walter de Gruyter: Berlin, Germany, 2011.
  34. Haala, N. Multiray photogrammetry and dense image matching. In Proceedings of the Photogrammetric Week 2011; Wichmann Verlag: Berlin/Offenbach, Germany, 2011; pp. 185–195.
  35. Chen, C. Signal and Image Processing for Remote Sensing; CRC Press: Boca Raton, FL, USA, 2012.
  36. Jóźków, G.; Toth, C.; Grejner-Brzezinska, D. UAS topographic mapping with Velodyne LiDAR sensor. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2016, 201–208.
  37. Wendel, J.; Metzger, J.; Moenikes, R.; Maier, A.; Trommer, G.F. A performance comparison of tightly coupled GPS/INS navigation systems based on extended and sigma point Kalman filters. Navigation 2006, 53, 21–31.
  38. Burrough, P.A.; McDonnell, R. Principles of Geographical Information Systems; Oxford University Press: Oxford, UK, 1998.
  39. Landis, J.R.; Koch, G.G. The measurement of observer agreement for categorical data. Biometrics 1977, 33, 159–174.
  40. De Castro Vitti, D.W.; Junior, A.M.; Guimarães, T.T.; Koste, E.C.; Inocencio, L.C.; Veronez, M.R.; Mauad, F.F. Geometry accuracy of DSM in water body margin obtained from an RGB camera with NIR band and a multispectral sensor embedded in UAV. Eur. J. Remote Sens. 2018.
Figure 1. Examples of raw images collected with (a) an RGB camera and (b) a TIR camera and of a point cloud collected with (c) a laser scanner. Figures are not to scale or in the same projection.
Figure 2. The strategies of water body range identification: blue shows the products suitable for the strategy and green shows the main processing steps.
Figure 3. An example of the processing according to the supervised classification strategy: (a) the training samples for classification, (b) the classification results, (c) the class Water after applying the modal filter for the raster format and the conversion into a vector format, and (d) the final results of the classification (blue area) filtered using an SQL query and projected over the input image.
Figure 4. Examples of the processing according to the image transforms strategy: (a) the binarization results, (b) the class Water after applying the modal filter for the raster format and the conversion into a vector format, and (c) the final results of the classification (blue area) filtered using the SQL query and projected over the input image.
Figure 5. The quality of the class Water identification with respect to the test site and sensor-product (x axis): the bar colors are blue for user accuracy, red for producer accuracy, green for overall accuracy, and violet for kappa.
Figure 6. Point clouds collected for a densely vegetated riverbank: (a) the location of the sample on the Embankment site, (b) the raw point cloud created from RGB images, (c) the raw LiDAR point cloud, (d) the ground points extracted from the raw RGB image point cloud, and (e) the ground points extracted from the raw LiDAR point cloud. For Figure 6b–e, the heights are coded with colors.
Table 1. Descriptions of the test sites. Each description is accompanied by a site map (ortho mosaic).
Swornica
  • A small river with embankments on both sides
  • The tributary of the Mala Panew river
  • Low and medium vegetation next to the water body
  • High vegetation behind the embankments
  • Medium water level and slow water flow during data acquisition
Old river bed
  • Old river bed on a flat terrain
  • High vegetation next to the water body
  • The surface of the water is covered with duckweed.
  • High water level and no water flow during data acquisition
  • The test area contains a small part of the Mala Panew river.
Mala Panew
  • A medium river with embankments on both sides and a weir on the test section
  • Low to high vegetation close to the water body
  • Small parts of the riverbank formed by gravel
  • Medium water level and fast water flow during data acquisition
Embankment
  • Another section of the Mala Panew river with similar characteristics but without the weir and gravelly parts of the riverbank
  • The mouth of the small river Swornica
  • A small wooden platform at the riverbank and the road bridge across the river
  • The mapping focused on a single riverbank and embankment.
Bobr
  • A small mountain river
  • Low vegetation and single trees next to the water body
  • Very low water level and medium speed of water flow during data acquisition
  • Many stones in the river protruding from the water surface
  • River bottom partly covered by the vegetation and visible through the water
Mietkow Lake
  • A sandy shore of the lake without vegetation next to the water body
  • Very mild slope of the shore
  • The site includes a part of the concrete dam.
  • Concrete paths descending to the water
  • Boats on the shore
  • A steel platform floating on the water
  • Mild water waves during data acquisition
Table 2. The collected data.
Site | Flying height with cameras/laser scanner (m) | GSD for RGB/TIR images (mm) | Number of RGB/TIR images | Number of GCPs | Number of LiDAR points
Swornica | 40/40 | 8/125 | 94/175 | 8 | 75 M
Old river bed | 40/40 | 8/125 | 117/117 | 8 | 95 M
Mala Panew | 80/40 | 16/250 | 101/103 | 6 | 50 M
Embankment | 80/40–60 | 16/250 | 121/134 | 7 | 66 M
Bobr | 50/50 | 10/156 | 110/109 | 9 | 71 M
Mietkow Lake | 40/40 | 8/125 | 156/302 | 12 | 62 M
Note: GSD—Ground Sampling Distance; RGB—Red Green Blue; TIR—Thermal Infrared; GCP—Ground Control Point; LiDAR—Light Detection and Ranging.
Table 3. The created geospatial products.
Product | Point Cloud | Point Cloud Ortho Image | Ortho Mosaic
Dimension | 3-D | 2.5-D (raster) | 2-D (raster)
Ground resolution | Average point density (pts/m2) | Pixel size (cm) | Pixel size (cm)
RGB | 1400–3900 | 2–4 | 1–1.5
LiDAR | 540–860 | 4–6 | –
TIR | 20–90 | 10–20 | 10–20
RGB+TIR combination | – | – | 10
Note: The LiDAR sensor does not provide images; thus, an ortho mosaic could not be created.
Table 4. The identification strategies selected with respect to product and test site.
Data Type | Product | Swornica | Old River Bed | Mala Panew | Embankment | Bobr | Mietkow Lake
RGB | Point cloud ortho image | 3 | 3 | 3 | 3 | 2 | 3
RGB | Ortho mosaic | 1 | 1 | 1 | 1 | 1 | 1
LiDAR | Point cloud ortho image | 3 | 3 | 3 | 3 | 3 | 3
TIR | Point cloud ortho image | 3 | 3 | 3 | 3 | 1 | 3
TIR | Ortho mosaic | 1 | 1 | 1 | 1 | 1 | 1
RGB+TIR | Ortho mosaic | 1 | 1 | 1 | 1 | 1 | 1
Note: strategy 1 = supervised classification, 2 = thresholding of pixel values, 3 = image transforms.
Table 5. The comparison of the reference class Water with the classification results obtained for different sensor-products and test sites.
* A product of different coverage due to TIR data processing issues, requiring a separate reference class Water.
Table 6. The geometrical accuracy characteristics. All values are in meters; "mean" stands for the mean value and "std" for the standard deviation.
Site | Parameter | RGB Point Cloud Ortho Image | RGB Ortho Mosaic | LiDAR Point Cloud Ortho Image | TIR Point Cloud Ortho Image | TIR Ortho Mosaic | RGB+TIR Ortho Mosaic
Swornica | mean | 0.20 | 0.51 | 0.43 | 0.85 | 0.64 | 0.47
Swornica | std | 2.08 | 0.46 | 0.35 | 0.75 | 0.53 | 0.41
Old river bed | mean | 1.05 | 0.41 | 2.40 | 1.20 | 2.73 | 0.51
Old river bed | std | 2.00 | 0.39 | 1.82 | 3.09 | 1.77 | 0.40
Mala Panew | mean | 0.67 | 0.04 | 0.25 | 2.20 | 1.26 | 0.38
Mala Panew | std | 1.40 | 1.31 | 1.39 | 2.00 | 2.37 | 1.64
Embankment | mean | 0.53 | 0.37 | 0.93 | 3.40 | 0.62 | 0.60
Embankment | std | 1.45 | 0.94 | 0.76 | 1.61 | 0.60 | 0.75
Bobr | mean | 1.46 | 0.40 | 0.28 | 1.74 | 1.14 | 0.45
Bobr | std | 1.73 | 0.26 | 1.35 | 1.19 | 0.96 | 0.36
Mietkow Lake | mean | 0.52 | 2.53 | 0.57 | 3.48 | 0.72 | 0.88
Mietkow Lake | std | 0.37 | 1.96 | 1.95 | 1.72 | 2.47 | 0.79
