Quantifying the Effect of Aerial Imagery Resolution in Automated Hydromorphological River Characterisation

1 School of Energy, Environment and Agrifood, Cranfield University, Cranfield MK43 0AL, UK
2 Regional Centre of Water Research (UCLM), Ctra. de las Peñas km 3.2, Albacete 02071, Spain
3 National Fisheries Services, Environment Agency, Threshelfords Business Park, Inworth Rd., Feering, Essex CO6 1UD, UK
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Academic Editors: Farid Melgani, Francesco Nex, Norman Kerle and Prasad S. Thenkabail
Remote Sens. 2016, 8(8), 650; https://doi.org/10.3390/rs8080650
Received: 5 June 2016 / Revised: 23 July 2016 / Accepted: 3 August 2016 / Published: 10 August 2016
(This article belongs to the Special Issue Recent Trends in UAV Remote Sensing)

Abstract

Existing regulatory frameworks aiming to improve the quality of rivers place hydromorphology as a key factor in the assessment of hydrology, morphology and river continuity. The majority of available methods for hydromorphological characterisation rely on the identification of homogeneous areas (i.e., features) of flow, vegetation and substrate. For that purpose, aerial imagery is used to identify existing features through either visual observation or automated classification techniques. There is evidence that success in feature identification relies on the resolution of the imagery used. However, little effort has yet been made to quantify the uncertainty in feature identification associated with the resolution of the aerial imagery. This paper contributes to addressing this gap by contrasting results in automated hydromorphological feature identification from unmanned aerial vehicle (UAV) aerial imagery captured at three resolutions (2.5 cm, 5 cm and 10 cm) along a 1.4 km river reach. The results show that resolution plays a key role in the accuracy and variety of features identified, with larger identification errors observed for riffles and side bars. This in turn has an impact on the ecological characterisation of the river reach. The research shows that UAV technology could be essential for unbiased hydromorphological assessment.
Keywords: unmanned aerial vehicle; photogrammetry; resolution; comparison; hydromorphology; river management

1. Introduction

In the last few decades, there has been a strong legislative effort to improve the quality of freshwater ecosystems at both European and international level [1,2]. Within Europe, the Water Framework Directive [3] dictates the parameters to be monitored for the assessment of the ecological status of a given water body, where a water body is the basic unit used to assess the quality of the water environment and used as reference for environmental improvements. The ecological quality parameters sampled draw upon five different domains: biological, physico-chemical, chemical, specific pollutants and supporting elements [3]. In the particular case of supporting elements, hydromorphology plays a key role in the assessment of hydrology (i.e., quantity and dynamics of water flow and connection to groundwater bodies), morphology (i.e., river depth and width variation, structure and substrate of the river and structure of the riparian zone) and river continuity [4]. Hydromorphological characterisation has also been recognised to be key for river management and restoration [5,6] and essential in the understanding of biological metrics [6,7].
The list of methods available to assess hydromorphology for regulatory purposes is vast and varied. Belletti et al. [4] compared over 100 EU hydromorphological assessment methods and classified them into four broad categories based on the key features surveyed: physical habitat, riparian habitat, morphological and hydrological regime alteration. The majority of methods compared rely on physical habitat identification and require the recognition of in-channel features that define homogeneous areas of flow, vegetation and substrate [8] from either aerial imagery (e.g., [9]) or extensive field based work (e.g., [10,11]). An example is the UK assessment of flow types describing the lentic-lotic character of a reach, which is routinely used at a national scale to assess river quality [7,8,12]. For aerial imagery based methods, the success of feature identification relies heavily on the resolution of the imagery used [13]. High resolution remote sensing products are expected to provide better feature identification accuracy than commercially available imagery. This has been proved true by several authors in a range of different research disciplines [14,15,16,17]. However, some of the approaches used to compare the accuracy in feature identification rely on resolutions obtained by averaging adjacent pixels from a ground truth orthoimage captured at fine resolution [16]. These approaches are incomplete because they fail to take into account the uncertainty arising from the differing flight planning parameters (e.g., number of frames, flying heights, image footprint) required to obtain the desired geomatic products. Methods that contrast metrics derived from remote sensing products obtained from data captured at different resolutions [15,18] should therefore be preferred. This is particularly relevant in river research projects where unmanned aerial vehicles (UAVs) are being used to capture high resolution aerial imagery for automated feature identification [19,20].
These automated classification techniques, based on object-based image analysis, unsupervised or supervised methods [21], provide feature identification results that match those obtained through visually based approaches and have been suggested as potential cost-effective means to satisfy national scale river hydromorphological assessment requirements [9,19].
Generally, coarse resolution imagery (12 cm–25 cm) has been used for the purposes outlined above. The imagery is captured from manned aircraft that are able to cover wide areas using standard remote sensing cameras. Higher resolution imagery can be obtained with aircraft equipped with more specialised and expensive equipment, with the highest resolutions (finer than 1 cm) being obtained from either rotary or fixed wing UAVs [14,22]. The imagery resolution from aircraft and UAVs is the result of a combination of both camera specifications and pre-selected flying height [17,23].
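The relationship between camera specifications and flying height can be made concrete with a short sketch of the standard ground sample distance (GSD) calculation. The function and the sensor values below are illustrative assumptions, not the survey parameters reported in this paper:

```python
def ground_sample_distance(sensor_width_mm, image_width_px,
                           focal_length_mm, flying_height_m):
    """Ground sample distance (m per pixel) of a nadir frame:
    the pixel pitch on the sensor scaled by height / focal length."""
    pixel_pitch_mm = sensor_width_mm / image_width_px
    return pixel_pitch_mm * flying_height_m / focal_length_mm

# Illustrative only: a 23.5 mm wide APS-C sensor imaging 6000 px across,
# behind a 16 mm lens flown at 100 m, yields a pixel of roughly 2.4 cm.
gsd = ground_sample_distance(23.5, 6000, 16.0, 100.0)
```

Halving the flying height (or doubling the focal length) halves the GSD, which is why different camera/height combinations were needed to hit each target resolution.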
The choice between aircraft and UAVs depends upon the capability of the platform to integrate the sensor requirements (e.g., camera payload; see [24] for a review), followed by the cost-effectiveness (low weight, slow flight speed and extended range) and safety of the overall mission. From a financial perspective, fuel consumption is generally the limiting factor for aircraft deployment, whereas local deployment of UAVs is a cost-effective method for timely and on-demand data collection when considering platform acquisition at fixed cost [22]. From a safety perspective, aircraft are more stable than UAVs, in particular under gusty and wet conditions, and do not present drawbacks for flights planned over congested areas. UAV reliability in flight is still a safety concern, although recent platforms incorporate a set of failsafe options that ensure the recovery of the UAV in emergency situations (e.g., GPS failure). Both aircraft and UAVs require compliance with national and international airspace regulatory frameworks, which are currently not standardised between countries for UAV usage. In terms of data acquisition, UAVs are able to fly under cloud cover and, although limited by battery life endurance and visual line of sight (500 m) regulatory constraints, enable data capture at higher resolutions than aircraft alternatives. UAV platforms range from inexpensive and light options such as blimps, kites or balloons to more sophisticated and expensive vertical take-off-and-landing (VTOL) and fixed wing platforms. VTOLs and fixed wings are preferred over light options due to their better manoeuvrability [22]. The selection of a specific UAV platform depends on the capability of its gimbal to integrate the payload, the extent of the survey area to be covered and the deployment and flight plan logistics. VTOL UAVs are able to hover over a point and provide high resolution still imagery, whereas fixed wing platforms enable wide area surveying [22].
Rapid advances in technology suggest that UAVs will be the preferred option for environmental data collection in the not too distant future [14]. For example, Sankaran et al. [25] identified UAVs as a low-cost, rapid and flexible alternative to airborne LIDAR for geomorphological mapping in general and highlighted the added benefit of using UAVs to test models, forecast and understand the evolution of environmental processes; and Woodget et al. [26] believe that photogrammetric methodologies based on high resolution UAV aerial imagery could become the tool of choice for routine and reliable assessment of physical river habitat in the future. Within this context, little effort has been made to assess the added benefit of using UAV high resolution aerial imagery for hydromorphological assessment; in particular, it is unclear whether the expected increase in feature identification accuracy justifies its use. Previous work has focused on the effect of spatial (i.e., 1 m and 2.5 m pixel size), spectral (i.e., hyper- and multi-spectral) and radiometric (i.e., 8 bit and 11 bit) resolution on the identification of in-stream habitats by comparing hyperspectral imagery to simulated multispectral data [27]. In [27], the spectral resolution was more relevant than the spatial or radiometric resolution for the automated identification of in-channel features (riffles, glides, pools and eddy drop zones). Anker et al. [28] focused on the effectiveness of aerial digital photography-spectral analysis (ADP-SP) to assess macrophyte cover when compared to ground-level data and hyperspectral imagery. ADP-SP presented a better spatial resolution than the ground-level and hyperspectral data (4 cm vs. 1 m and 10 cm) and enabled the differentiation between emergent and submerged vegetation. In [29], the effectiveness of digital airborne imagery at two different resolutions (3 cm and 10 cm) for estimating median surface grain sizes was compared; the study showed that the 10 cm resolution imagery was unsuitable for grain size estimation.
Here, we contrast three orthoimages derived from UAV high resolution aerial imagery captured specifically for this purpose. The aim is to quantify the uncertainty in automated river hydromorphological feature identification when using aerial imagery of different resolutions (i.e., 2.5 cm, 5 cm and 10 cm). This work builds on previous research carried out by the authors [19] on the development of a framework for the integration of UAV high resolution imagery and Artificial Neural Networks (ANNs) for the automatic classification of hydromorphological features. This framework has been shown to identify hydromorphological features with an averaged success rate of over 81% and is used to address the aim of this paper through the following three interdependent objectives:
1. To quantify the performance of an ANN based operational framework [19] for hydromorphological feature identification at different aerial imagery resolutions.
2. To identify the optimal aerial imagery resolution required for robust automated hydromorphological assessment.
3. To assess the implications of results obtained from (1) and (2) in a regulatory context.

2. Methodology

2.1. Study Site

The study site was a 1.4 km reach along the upper catchment of the river Dee, Wales, UK (Figure 1). The site is located 30 km from the river source and was selected for the variety of hydromorphological features present (Table 1). The substrate is primarily gravel, with silt deposition present in areas with low to non-perceptible flow. The UAV aerial imagery was collected on 14 July 2015 under low flow conditions and a constant volumetric flow rate of 6.98 m3·s−1. The weather conditions during the UAV flight, based on the Shawbury meteorological aerodrome report (METAR), included surface winds of speeds between 1.5 m·s−1 and 3 m·s−1 and directions varying from 120° to 350°, with prevailing visibility up to 10,000 m and clouds scattered at 1800 ft (549 m) and 2800 ft (854 m) AMSL.

2.2. Sampling Design and Data Collection

A total of 100 Ground Control Points (GCPs) were uniformly distributed along the banks of the study site to ensure correct external orientation following [19] (Figure 2 and Figure 3 step 1–2). The GCPs were white PVC squares of 1 m × 1 m with opposite facing triangles painted in black [19]. Each GCP was pinned to the ground with metallic pegs through four eyelets. In general, accurate orthoimages can be achieved with a lower number of GCPs. However, the hilly configuration of the terrain and the difficulties accessing points close to the banks justified a larger number of GCPs than usual, following recommendations by [30]. The terrain configuration and the meandering nature of the river at the site also resulted in different numbers of GCPs allocated to each bank, with 53 and 47 points on the right and left banks, respectively (Figure 1). The locations of the GCP centroids were obtained from a Leica GS14 Base and Rover Real Time Kinematic (RTK) GPS (Figure 2). A subset of 25 GCPs was selected for validation purposes. Hereafter, these are referred to as Check Points (XPs).
The primary data set consists of UAV aerial imagery in the visible spectrum collected with a QuestUAV Q-200 with surveyor Q-Pod (QuestUAV Limited, Northumberland, UK) fixed wing platform equipped with an integrated camera and a Pulse Electronics W4000 series GPS receiver (Figure 2). The QuestUAV is a 2.8 kg high density expanded polypropylene (EPP) platform with a 2 m wingspan capable of flying under wind speeds of 17 m·s−1. The UAV was deployed three times, once for each of the resolutions compared (i.e., 2.5 cm, 5 cm and 10 cm). Each flight was completed in under 40 min. A fixed wing platform was preferred over a vertical take-off-and-landing configuration because it reduced flight time, thus ensuring the imagery for the three resolutions was collected under similar weather and light conditions. The flight plan was defined by a longitudinal (along the river) multi-pass trajectory with frame capture at defined waypoints that ensured 80% overlap both across and along the track. Each waypoint represented the centre of a frame and had associated GPS coordinates as well as yaw, pitch and roll information.
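The spacing between waypoints follows directly from the frame footprint and the requested overlap. The sketch below is an illustrative calculation under assumed values (the 4000 px along-track dimension is hypothetical, not taken from the paper):

```python
def waypoint_spacing(gsd_m, image_px, overlap):
    """Distance between consecutive frame centres that yields the
    requested forward (or side) overlap, where the ground footprint
    of a frame is GSD multiplied by the image dimension in pixels."""
    footprint_m = gsd_m * image_px
    return footprint_m * (1.0 - overlap)

# Illustrative: a 2.5 cm GSD frame spanning 4000 px along-track with the
# 80% forward overlap used here requires a new frame roughly every 20 m.
spacing = waypoint_spacing(0.025, 4000, 0.80)
```

The same relation applies across-track to set the lateral separation of the multi-pass flight lines.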
Two different cameras, a Sony NEX 7 E-mount SELP1650 (Sony Europe Limited, Weybridge, Surrey, UK) and a Panasonic Lumix DMC-LX7 (Panasonic, Berkshire, UK), were used to gather the data (Table 2, Figure 3 step 2). The Sony NEX 7 E-mount SELP1650 was used to collect the imagery at 2.5 cm and 5 cm resolution whereas the Panasonic Lumix DMC-LX7 was used for the imagery at 10 cm resolution. The integration of different camera sensors was required to ensure the target resolutions were achieved under the existing UK Civil Aviation Authority (CAA) regulatory framework CAP393 and CAP722, with extended permission to fly at altitudes up to 800 ft (244 m) and distances up to 1000 m from the operator. The UAV was controlled by a fully qualified RPQ-s (Small UAV Remote Pilot Qualification) pilot following CAA legislation.
An ancillary point data set with 480 RTK GPS records identifying the exact location of key river features was collected during the UAV survey for validation purposes. These measurements were in addition to the 100 RTK GPS measurements obtained at each of the GCPs and XPs: these enabled the estimation of the error at each of the XPs through the comparison of coordinates obtained from the processed imagery (i.e., orthoimage) and the GPS.
The data set was complemented with just over 100 documented colour photographs and manual river maps showing the extent of each of the features of interest. This information was used, in combination with the orthoimage at 2.5 cm resolution, to describe the river features visually identified at each of the points defined by a 2 m × 2 m regular grid overlaid along the full extent of the 1.4 km reach. A total of 40,270 depth and water velocity measurements were obtained using a SonTek RiverSurveyor M9 Acoustic Doppler Current Profiler (ADCP) mounted on an ArcBoat radio control platform (Figure 2). A zigzag pattern was followed to capture the spatial variability of both depth and water velocity within the channel.

2.3. Photogrammetry and Image Classification

The geomatic products (i.e., orthoimage, digital terrain model (DTM) and point cloud) for each of the resolutions under study were independently obtained with Photoscan Pro version 1.1.6 (Agisoft LLC, St. Petersburg, Russia). The effect of lens distortion was accounted for within the overall data processing. The absence of high contrast and the high image quality provided by the two cameras ensured negligible chromatic aberration. The ISO and shutter speed were adjusted to the illuminance conditions at the time of the flight to minimise noise. The photogrammetric analysis required is described by the workflow in Figure 3. In brief, for each flight a set of frames was selected based on their quality and spatial coverage, and georeferenced (scaled, translated and rotated) into the World Geodetic System (WGS84) using the GCP coordinates (Figure 3 step 3). The GCPs were manually identified for all the images collected. The centroid of each GCP could be clearly identified at all resolutions, since the GCPs were always larger than the pixel unit. Therefore, the uncertainty in visual GCP centroid identification corresponded to the pixel size of the imagery. A more detailed explanation of the photogrammetric process is provided in [19].
The difference between the XP coordinates obtained from the imagery and those obtained from the RTK GPS constituted the image coregistration error (XP errors). The errors in x, y and z for each of the GCPs were derived from the Agisoft Photoscan GCP table. Equation (1) was used to estimate a combined measure of error for x, y and z.
RMSE = \sqrt{\frac{\sum_{j=1}^{N} \left[ (\hat{x}_j - x_j)^2 + (\hat{y}_j - y_j)^2 + (\hat{z}_j - z_j)^2 \right]}{N}}    (1)
where RMSE is the Root Mean Squared Error; \hat{x}_j, \hat{y}_j and \hat{z}_j are the image derived coordinates at location j; x_j, y_j and z_j are the associated RTK GPS positions of the XPs; and N is the number of points assessed.
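Equation (1) can be sketched in a few lines of pure Python; the two check points below are toy values, not the survey data:

```python
import math

def rmse_3d(image_xyz, gps_xyz):
    """Combined x, y, z RMSE over the check points (Equation (1))."""
    n = len(image_xyz)
    total = sum((xh - x) ** 2 + (yh - y) ** 2 + (zh - z) ** 2
                for (xh, yh, zh), (x, y, z) in zip(image_xyz, gps_xyz))
    return math.sqrt(total / n)

# Two toy check points with offsets of 1 m and 2 m along a single axis.
err = rmse_3d([(1.0, 0.0, 0.0), (0.0, 2.0, 0.0)],
              [(0.0, 0.0, 0.0), (0.0, 0.0, 0.0)])
```

Because the squared x, y and z residuals are pooled before averaging, a single poorly registered axis can dominate the combined figure.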
The automated feature identification was obtained using the Leaf Area Index Calculation (LAIC) software (Figure 3 step 4). LAIC applies a supervised classification technique based on an ANN approach that segments the spectral domain of the RGB imagery into areas that can be associated with features of interest. This method requires a training process by which representative samples of hydromorphological features are identified using a k-means clustering algorithm; these are then used as reference features to classify the entire orthophoto [19].
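The two-stage workflow (cluster RGB training samples, then classify every pixel against the labelled references) can be illustrated with a minimal pure-Python sketch. This is not the LAIC implementation: the ANN classifier is replaced here by a simple nearest-centroid rule for brevity, and all sample values and class labels are illustrative:

```python
def dist2(a, b):
    """Squared Euclidean distance between two RGB triples."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans_rgb(pixels, k, iters=20):
    """Plain k-means on RGB triples; seeds with the first k pixels
    (illustrative only; real implementations use better seeding)."""
    centroids = list(pixels[:k])
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in pixels:
            groups[min(range(k), key=lambda c: dist2(p, centroids[c]))].append(p)
        centroids = [
            tuple(sum(p[d] for p in g) / len(g) for d in range(3)) if g
            else centroids[j]
            for j, g in enumerate(groups)
        ]
    return centroids

def classify(pixel, labelled_centroids):
    """Assign a pixel the class of its nearest labelled reference centroid
    (stand-in for the ANN classification step)."""
    return min(labelled_centroids, key=lambda lc: dist2(pixel, lc[1]))[0]

# Toy training pixels: two bright (bar-like) and two dark (shadow-like).
pixels = [(250, 250, 250), (255, 255, 255), (10, 10, 10), (0, 0, 0)]
cents = kmeans_rgb(pixels, 2)
labelled = list(zip(["shadow", "side bar"], sorted(cents)))
label = classify((240, 240, 240), labelled)
```

In the real workflow the labelled clusters come from operator-selected training areas, and the trained network is then applied to every pixel of the orthoimage.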

2.4. Comparison of Resolutions

The accuracy of automated feature identification with LAIC was assessed for each of the resolutions. For this purpose, the 10,943 points defined by the 2 m × 2 m regular grid were used as the ground truth data set (Figure 3 step 5). Confusion matrices were used to estimate the accuracy (AC) in classification outputs for each resolution (Equation (2), Figure 3 step 6). The true positive ratio (TPR), true negative ratio (TNR), false positive ratio (FPR) and false negative ratio (FNR) were also estimated following [19]. These ratios were calculated as follows:
AC = \frac{\sum_{i=1}^{I} (TN_i + TP_i)}{\sum_{i=1}^{I} (TN_i + TP_i + FN_i + FP_i)}    (2)
TPR_i = \frac{TP_i}{FN_i + TP_i}    (3)
TNR_i = \frac{TN_i}{TN_i + FP_i}    (4)
FNR_i = \frac{FN_i}{FN_i + TP_i}    (5)
FPR_i = \frac{FP_i}{TN_i + FP_i}    (6)
where TPi (True Positives) is the number of points correctly identified as class i, FNi (False Negatives) is the number of points incorrectly rejected as class i, TNi (True Negatives) is the number of points correctly rejected as class i, FPi (False Positives) is the number of points incorrectly identified as class i, and I is the total number of classes identified.
TPRi, TNRi, FNRi and FPRi are estimated for each of the features of interest whereas AC is a single value of overall classification performance. AC, as well as all the other ratios, ranges from 0 to 1. TPRi and TNRi quantify the power of LAIC at classifying features correctly when compared to the ground truth, whereas FNRi and FPRi show misclassification rates.
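Equations (2)–(6) reduce to a few arithmetic operations on the confusion-matrix counts. A minimal sketch, with toy counts for a single class:

```python
def per_class_rates(tp, tn, fp, fn):
    """TPR, TNR, FNR and FPR for a single class (Equations (3)-(6))."""
    return {
        "TPR": tp / (fn + tp),
        "TNR": tn / (tn + fp),
        "FNR": fn / (fn + tp),
        "FPR": fp / (tn + fp),
    }

def overall_accuracy(per_class_counts):
    """AC (Equation (2)): correct decisions pooled over all classes;
    per_class_counts is a list of (tp, tn, fp, fn) tuples."""
    num = sum(tp + tn for tp, tn, fp, fn in per_class_counts)
    den = sum(tp + tn + fp + fn for tp, tn, fp, fn in per_class_counts)
    return num / den

# Toy class: 8 hits, 2 misses, 10 false alarms, 90 correct rejections.
rates = per_class_rates(tp=8, tn=90, fp=10, fn=2)
acc = overall_accuracy([(8, 90, 10, 2)])
```

Note that TPR + FNR = 1 and TNR + FPR = 1 by construction, so reporting all four is redundant but convenient for reading Table 5.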
A complementary measure of accuracy, the kappa (k) statistic, was also estimated. Although the use of k as a concordance measure has been criticised by several authors [32,33] and considered not to reveal information different from the rates already defined [34], it is still regarded as a vital accuracy assessment measure [35]. k relates the total accuracy (AC) to the hypothetical expected probability of agreement under a given set of baseline constraints and ranges from zero to one, with a maximum value of one when feature classification perfectly matches the ground truth data set. k values above 0.75 are considered excellent, values between 0.40 and 0.75 fair to good, and values below 0.40 poor [36]. Following [34], additional measures of quantity and allocation disagreement were estimated (Figure 3 step 6). Quantity disagreement (C) is the amount of difference between the ground truth classification and that obtained for a given resolution due to mismatch in class proportions. Allocation disagreement (Q) is the amount of difference due to incorrect spatial allocation of pixels in the classification. For a given feature class g, the quantity disagreement cg and the allocation disagreement qg are calculated as follows:
c_g = \left| \sum_{i=1}^{J} p_{ig} - \sum_{j=1}^{J} p_{gj} \right|    (7)
q_g = 2 \min\left[ \sum_{i=1}^{J} p_{ig} - p_{gg}, \; \sum_{j=1}^{J} p_{gj} - p_{gg} \right]    (8)
where pig is the estimated proportion of the study area that is class i in the ground truth map and class g in the selected resolution map; pgj is the estimated proportion of the study area that is class g in the ground truth map and class j in the selected resolution map; and pgg is the estimated proportion of the study area that is class g in both the ground truth map and the selected resolution map. The overall quantity disagreement (C) and allocation disagreement (Q) can therefore be derived as shown in Equations (9) and (10). For both C and Q, a larger value of disagreement indicates a larger mismatch between maps.
C = \frac{1}{2} \sum_{g=1}^{J} c_g    (9)
Q = \frac{1}{2} \sum_{g=1}^{J} q_g    (10)
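Equations (7)–(10) can be computed directly from the cross-tabulated proportion matrix. A minimal sketch, assuming the matrix convention of the text (rows are ground truth classes, columns are comparison-map classes); the 2 × 2 matrix is a toy example:

```python
def disagreement(p):
    """Overall quantity (C) and allocation (Q) disagreement
    (Equations (7)-(10)).  p[i][g] is the proportion of the study area
    that is class i in the ground truth map and class g in the
    comparison map."""
    J = len(p)
    C = Q = 0.0
    for g in range(J):
        comparison_total = sum(p[i][g] for i in range(J))  # column sum
        truth_total = sum(p[g][j] for j in range(J))       # row sum
        C += abs(comparison_total - truth_total)           # c_g
        Q += 2 * min(comparison_total - p[g][g],
                     truth_total - p[g][g])                # q_g
    return C / 2, Q / 2

# Toy matrix: 10% of the area is ground-truth class 0 but mapped as
# class 1; everything else agrees, so all disagreement is quantity.
C, Q = disagreement([[0.4, 0.1], [0.0, 0.5]])
```

C and Q sum to the total disagreement (one minus overall agreement), which is what makes the decomposition in Figure 4 interpretable.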
Statistically significant differences in classification between resolutions were estimated using Cochran's Q-test [37], a non-parametric alternative to one-way repeated measures analysis of variance. The test was implemented independently for each hydromorphological feature identified. This allowed us to determine for which river features the selection of a particular resolution was key to accurate identification. The analysis was carried out using the per-pixel classification, where the classification obtained for the 2.5 cm resolution imagery was considered the ground truth data set. Per-pixel feature values were extracted from the 5 cm and 10 cm resolution data sets at each of the 2.5 cm × 2.5 cm pixels.
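The Cochran Q statistic for one feature can be sketched in pure Python. Each row below is a pixel and each column a resolution, coded 1 if the feature was assigned; the four-pixel example is a toy, not the study data:

```python
def cochrans_q(blocks):
    """Cochran's Q statistic for k matched binary outcomes per subject.
    Under H0 (no difference between the k classifications) Q follows a
    chi-squared distribution with k - 1 degrees of freedom."""
    k = len(blocks[0])
    col = [sum(row[j] for row in blocks) for j in range(k)]  # per-resolution totals
    rows = [sum(row) for row in blocks]                      # per-pixel totals
    grand = sum(rows)
    num = (k - 1) * (k * sum(c * c for c in col) - grand * grand)
    den = k * grand - sum(r * r for r in rows)
    return num / den

# Toy example: presence/absence of one feature at three resolutions
# over four pixels.
q = cochrans_q([[1, 1, 0], [1, 0, 0], [1, 1, 1], [1, 0, 0]])
```

The computed Q would then be compared against the chi-squared critical value with k − 1 degrees of freedom (here 2) to obtain the p-values reported in Table 6.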

3. Results

The coregistration errors for the GCPs increase in x, y and z as the imagery resolution coarsens, with errors below 3 cm for the finer resolutions and up to 31 cm for the coarsest resolution. A similar pattern is observed for the XPs, where the coregistration error in x, y and z reaches values of nearly 3 cm for the orthoimage at 2.5 cm resolution and up to 83 cm for the coarsest resolution. The increase in coregistration error is more evident when estimating the RMSE, which is ≈4.5 cm for the finer scales but larger than 3 m for the 10 cm resolution orthoimage (Table 3). The elapsed time required to process the imagery to obtain the geomatic products decreases as the resolution coarsens because the number of frames to be processed diminishes (Table 3). Here, the elapsed time refers to the time required to generate the geomatic products on a computer with an Intel Core i7-5820K 3.30 GHz processor, 32 GB RAM and two graphics cards (NVIDIA GeForce GTX 980 and NVIDIA Quadro K2200). An extra 7 h, 5.5 h and 3 h were required to locate the GCPs manually for the imagery at 2.5 cm, 5 cm and 10 cm resolution, respectively.
The total accuracy (AC) in automated feature classification is ≈65%, with a slight decrease in feature identification power as the resolution coarsens (Table 4). LAIC enables the allocation of multiple classifications to a given point or pixel. For example, a given point falling in an area where both riffles and submerged vegetation are present will be simultaneously classified under the feature classes riffle and vegetation. If the effect of multiple classifications is taken into account, the overall accuracy ranges between 67% (at 10 cm resolution) and 76% (at 2.5 cm resolution) (Table 4). These results are consistent with those obtained for the k-statistic, which shows that at 2.5 cm resolution the classification is good but deteriorates to fair and poor [36] at 5 cm and 10 cm resolutions, respectively. The measures of disagreement C and Q show that the disagreement between the ground truth point data set and the classifications obtained for each of the three resolutions is primarily due to the spatial allocation of feature classes (Q) rather than the proportion (C) represented by each class (Figure 4).
At feature level, the performance in classification measured through TPR and TNR (Table 5) decreases as the resolution coarsens for side bars, riffles and vegetation. Both shadows and erosion are detectable only at 2.5 cm resolution (Figure 4) and are omitted from the classifications obtained at coarser resolutions. For deep and shallow water, the pattern of change with resolution is not as clear or relevant as for the other feature classes.
When multiple classifications are taken into account, the power in feature identification increases for riffles, deep waters and shallow waters, with some of the features reaching TPR values larger than 94% (Table 5). The numbers of single, double and triple feature allocations are 10,360, 453 and 6 at 2.5 cm resolution; 10,420, 436 and 2 at 5 cm; and 10,629, 205 and 1 at 10 cm. The power to detect multiple classifications is larger at finer resolutions than at coarser ones. For example, at 2.5 cm resolution, LAIC is able to identify points with submerged vegetation present within riffle and shallow water categories, whereas this ability disappears at 10 cm resolution, where submerged vegetation and riffles are barely identifiable (Figure 4). At 2.5 cm and 5 cm resolution, the numbers of double classifications assigned are comparable (453 and 436, respectively). However, at 10 cm resolution the value drops to 205. The most frequent combinations of double classification are: (i) shallow water with deep water (162, 292 and 104 points for each of the three resolutions from finer to coarser); and (ii) shallow water with riffle (94, 58 and 36 points). Triple classifications are sporadic at all resolutions considered.
The FNR values (Table 5) increase as the resolution coarsens, reaching 100% for erosion and shadow. The large FNR values indicate that these features fail to be identified at resolutions coarser than 2.5 cm. The FNR for riffles increases from 53% at the finest resolution to 87% at the coarsest, whereas the FPR for deep and shallow water increases as the resolution coarsens. This is because riffle features are systematically replaced by shallow and deep water classifications as the imagery resolution coarsens. A similar pattern is observed for side bars, for which the FNR increases from 14% at 2.5 cm resolution to 99% at 10 cm resolution.
The per-pixel analysis shows that, based on a total of 67,446,624 pixels (2.5 cm × 2.5 cm), the dominant classes within the reach are deep water and vegetation, followed closely by shallow water (Figure 5). The proportion of area allocated to the riffle and side bar feature classes decreases as the resolution coarsens, with erosion and shadow being present only on the 2.5 cm resolution imagery. By contrast, the proportion allocated to shallow and deep water is larger at coarser resolutions, with incremental changes from 5 cm to 10 cm resolutions.
The outputs of the Cochran test (Table 6) show that the mismatch in feature allocation at the per-pixel level between the 2.5 cm resolution and the coarser resolutions is statistically significant (p < 0.001) for all features. Excluding erosion and shadows, riffle is the feature class with the largest percentage of pixels mismatched, followed by side bar and shallow water (Figure 4 and Figure 5). Figure 6 shows where the per-pixel misclassification occurs. For vegetation, mismatching pixels are primarily observed over submerged vegetation or side bars. For side bars, mismatch occurs on the edges, with coarser resolutions being unable to accurately detect the limit of the feature or the feature itself. For deep and shallow water, mismatch occurs in areas of transition between the two features or around banks and submerged vegetation, whereas in the specific case of riffles, misclassification occurs across the class as a whole.

4. Discussion

This study focused on three core objectives: (i) to quantify the performance of an ANN based framework for hydromorphological feature identification for a set of aerial imagery resolutions; (ii) to identify the optimal aerial resolution required for robust automated hydromorphological feature identification; and (iii) to assess the implication of results obtained from (i) and (ii) in a regulatory context.
Where the first two objectives are concerned, the aerial imagery resolution plays a key role in the number, accuracy and variety of features automatically identified with LAIC. The coarser the resolution, the lower the number of features mapped within a river reach and the larger the bias in the detection of their extent. This is clearly visible for the specific case of riffle, side bar and erosion features, which are absent or barely identifiable at resolutions coarser than 5 cm. The patterns generated by the unbroken and broken standing waves that characterise riffles are not identifiable in the coarser (10 cm) imagery and are confused with more general classes, such as shallow and deep water. Similarly, at coarser resolutions submerged vegetation as well as vegetation on side bars fail to be identified properly. The power to delineate and identify side bars also decreases when imagery of resolution coarser than 2.5 cm is used. In addition, the distinction between deep and shallow waters cannot be drawn, and misclassification occurs near submerged vegetation and banks.
The per-pixel analysis shows that these differences are statistically significant for all features and resolutions. However, these statistically significant differences need to be interpreted from a hydro-ecological perspective since these features can be key to the structuring of freshwater biotic communities [6]. The failure to identify both riffles and submerged vegetation will have an impact on the overall assessment of the reach, not just from the hydromorphological point of view but also from a biological context, as these features define key habitats for freshwater ecosystem species. For example, several authors [38,39,40,41] have identified riffles where gravels are present as the preferred areas for salmonid species to spawn. Underestimation of the area suitable for spawning will occur whenever the combination riffle–shallow water is not adequately identified. Likewise, flow type diversification over space and time, pool–riffle sequences and morphological impairment directly relate to habitat-scale interpretation and the invertebrate community [7]. Failure to characterise riffles, deep and shallow waters will directly impact upon the assessment of the reach suitability for macroinvertebrates. Failure to estimate submerged and emergent vegetation abundance and coverage will directly impact on the estimates of the area available for key habitats such as refuge, feeding, spawning and nesting [42].
Gurnell [43] reviews recent research on the geomorphological influence of vegetation within fluvial systems: emergent biomass modifies the flow field and retains sediment, whereas submerged biomass affects the hydraulics and the mechanical properties of the substrate. A biased characterisation of the vegetation present within a reach will therefore result, for example, in biased estimates of erosion susceptibility. The uncertainty generated by the lack of power to identify side bars will also add to the bias in the estimation of erosion risk and increase the difficulty of detecting erosion and deposition patterns. This in turn will impair the accurate estimation of temporal changes in river reach boundaries, width and bank location (e.g., [44,45]).
The automated classification of hydromorphological features works in a similar way to visual identification. The algorithms LAIC applies are based on the RGB properties observed in the imagery, similar to what a field surveyor would perceive when generating maps of homogeneous features in the field or from aerial imagery. In [19], the LAIC classification was shown to be more accurate than visual feature identification; for example, LAIC was able to identify the water between tree branches along the river bank. Assuming the classification from the 2.5 cm resolution to be a true representation of the hydromorphological variability within the reach, it can be inferred that visual identification of features from coarse resolution imagery will incur the same errors as those identified here with LAIC.
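The paper does not detail LAIC's internals beyond its operating on RGB properties (the workflow in Figure 3 mentions an artificial neural network). Purely to illustrate the idea of a per-pixel RGB-to-class mapping, here is a minimal nearest-centroid stand-in; the class centroids and pixel values are invented and are not LAIC's actual parameters.

```python
import numpy as np

# Hypothetical mean RGB value per feature class (codes as in Table 1).
CLASS_RGB = {
    "SB": (180, 170, 150),   # side bar: pale exposed sediment
    "RI": (200, 205, 210),   # riffle: broken white water
    "DW": (40, 60, 70),      # deep water: dark
    "SW": (90, 120, 130),    # shallow water: lighter
    "VG": (60, 110, 50),     # vegetation: green
}
names = list(CLASS_RGB)
centroids = np.array([CLASS_RGB[n] for n in names], dtype=float)

def classify(pixels):
    """Assign each RGB pixel in an (n, 3) array to the nearest class centroid."""
    d = np.linalg.norm(pixels[:, None, :] - centroids[None, :, :], axis=2)
    return [names[i] for i in d.argmin(axis=1)]

labels = classify(np.array([[45, 62, 68], [195, 200, 215], [62, 108, 48]]))
# -> ["DW", "RI", "VG"]
```

The key point of the passage survives in the sketch: whatever the classifier, it sees only the radiometric content of each pixel, so coarsening the resolution blends the RGB signatures of adjacent features and degrades both automated and visual identification alike.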
Whether an optimal resolution for hydromorphological characterisation can be proposed is difficult to ascertain, since there is a strong trade-off between accuracy and area surveyed. If accuracy is the main concern, a fine resolution of 2.5 cm is required; if wide area coverage is of interest, coarser resolutions such as 10 cm would be preferred. For regional or even national assessment of river hydromorphology, automated identification of river features from high resolution UAV aerial imagery could be integrated into existing GIS frameworks (e.g., [46,47,48,49]) that operate with coarser resolutions and alternative remote sensing data supports (e.g., satellite imagery). Gurnell et al. [50] recognise the benefits of using high resolution UAV aerial imagery to open further possibilities for improving the ability of frameworks to generate highly informative outputs in shorter time intervals and at lower financial cost than at present. These enhancements will enable validation and calibration of current methods, frameworks and models for hydromorphological characterisation and intercalibration. The adoption of high resolution aerial imagery for hydromorphological characterisation at the national level will significantly increase the demand for UAV-based data capture. In the UK, this could be provisioned through the deployment of platforms at the local level via the existing network of 1557 CAA-qualified providers [51], if detailed guidelines on data requirements are specified.
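The accuracy-coverage trade-off is governed by the standard pinhole relation between ground sample distance (GSD), flight height, sensor pixel pitch and focal length. The sketch below uses illustrative sensor parameters (a 16 mm lens with a 4 µm pixel pitch, not the values of the cameras in Table 2):

```python
def flight_height_for_gsd(gsd_m, focal_length_m, pixel_pitch_m):
    """Flight height (m) needed for a target GSD, from GSD = H * pitch / f."""
    return gsd_m * focal_length_m / pixel_pitch_m

# Hypothetical sensor: 16 mm focal length, 4 micron pixel pitch
for gsd_cm in (2.5, 5.0, 10.0):
    h = flight_height_for_gsd(gsd_cm / 100.0, 0.016, 4e-6)
    print(f"{gsd_cm:4.1f} cm GSD -> fly at ~{h:.0f} m")
```

For a given camera, halving the resolution doubles the required flight height and roughly quadruples the footprint per image; the flight heights in Table 3 do not scale exactly this way because two different cameras were flown.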
Another factor to take into account when selecting the UAV imagery resolution is the coregistration error associated with the geomatic products generated. These errors are considerable for 10 cm resolution orthoimages, with 19 cm error in x, 83 cm in y and 55 cm in z, and up to 3 m combined error in x, y and z. If UAV data collection is undertaken on a regular basis to assess temporal changes in hydromorphological characteristics, the errors identified may prevent the accurate spatial collocation of river features and restrict the accuracy of the change metrics estimated (e.g., river width, volume of sediment deposited). In addition, results indicated that the disagreement in feature classification, when using the 2 m × 2 m grid as ground truth, is primarily dominated by allocation disagreement (i.e., location of features) rather than quantity disagreement (i.e., proportion of area allocated to each feature), and increases as the resolution coarsens. The combination of coregistration error and allocation disagreement highlights the ineffectiveness of coarse resolutions for accurate hydromorphological characterisation.
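Quantity and allocation disagreement follow Pontius and Millones [34] and can be computed directly from a confusion matrix. A short sketch with an invented two-class example:

```python
import numpy as np

def quantity_allocation_disagreement(cm):
    """Quantity and allocation disagreement of a confusion matrix [34].

    cm[i, j]: pixels of reference class i assigned to class j
    (rows: reference, columns: classification).
    """
    p = np.asarray(cm, dtype=float)
    p /= p.sum()
    ref = p.sum(axis=1)   # reference class proportions
    cls = p.sum(axis=0)   # classified class proportions
    quantity = 0.5 * np.abs(ref - cls).sum()
    # allocation disagreement is the remainder of the total disagreement
    allocation = (1.0 - np.trace(p)) - quantity
    return quantity, allocation

# Invented 2-class confusion matrix: 70 of 100 pixels classified correctly
q, a = quantity_allocation_disagreement([[50, 10], [20, 20]])
# quantity = 0.10, allocation = 0.20
```

In this toy example most of the 30% total disagreement comes from misplaced (allocation) rather than over- or under-predicted (quantity) area, the same pattern the study reports for the coarser resolutions.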
Where the third objective is concerned, the study clearly demonstrates that commercially available aerial imagery at resolutions of 10 cm or coarser does not provide sufficient robustness for unbiased hydromorphological assessment. Use of these supports results in biased feature coverage estimates, incorrect heterogeneity metric estimates and uncertain in-channel feature characterisation. This is consistent with the results obtained by Carbonneau et al. [29], who found that aerial imagery at coarse resolution (10 cm) was unsuitable for estimating grain size reliably. The selection of a given support will depend upon the objective the assessment needs to fulfil and will be conditioned by the trade-off between data acquisition costs and assessment uncertainty.
Outcomes from this research raise concerns about current practice in hydromorphological assessment and the need for the generation of uncertainty estimates. To the authors' knowledge, little work has been carried out to assess the uncertainty that the use of a specific support adds to the overall hydromorphological characterisation. This is particularly relevant for policy and regulatory implementation, as well as for restoration management. Within the context of the Water Framework Directive (WFD), the intercalibration process aims to obtain comparability of ecological status boundaries and national assessment methods across Europe [52]. Intercalibration has focused on harmonising the position of the high/good and good/moderate boundaries for specific ecological quality parameters, but has not looked at the precision of these estimates [4,53], which, as shown in this paper, will change according to the data support (i.e., resolution) used. This work demonstrates that, for consistent and unbiased aerial imagery based hydromorphological assessment across EU Member States, it is paramount to standardise the support used so that comparable ecological quality parameter estimates are obtained. Failure to do so could have consequences for the management practice of water authorities (e.g., penalty payments) in terms of the measures and efforts undertaken to accomplish the WFD [10]. In addition, results from the intercalibration process [4] identify the spatial scale (a few hundred metres) of physical-habitat based hydromorphological methods as inadequate and highlight the need for remote sensing based methods that enable detailed site-specific data collection so that their application can be expanded to a large number of water bodies. This study demonstrates that both shortcomings can be overcome through the application of the framework presented here.

5. Conclusions

The identification of hydromorphological features has been recognised as essential in river management, restoration and river quality regulatory frameworks. Some of the existing methods for feature identification rely on commercially available aerial imagery at resolutions coarser than 10 cm. The imagery is used to identify homogeneous hydromorphological features through either visual identification or automated classification algorithms. This study has shown that, for an already tested automated hydromorphological classification framework, resolution has a statistically significant effect on both the number of features identified and the uncertainty in feature identification. Resolutions coarser than 5 cm present difficulties for the accurate identification of riffles, side bars and submerged vegetation. Use of coarse imagery (>5 cm) could result in biased feature coverage estimates, incorrect heterogeneity metrics and uncertain in-channel feature characterisation. Failure to identify and delineate these features accurately could have an impact on the overall assessment of the reach from both a hydromorphological and an ecological point of view, which translates into wider implications for management, restoration and regulatory applications. To the authors' knowledge, high resolution aerial imagery finer than 5 cm for wide-area river mapping can only be captured effectively through recent advances in UAV technology, making UAV-based technology essential to the development of unbiased frameworks for hydromorphological characterisation such as the one presented here. Further research should identify the uncertainty thresholds acceptable for each of the purposes for which hydromorphology is assessed, looking specifically at the precision and accuracy of the estimates used for the implementation of regulatory frameworks.

Acknowledgments

We would like to thank the Environment Agency and EPSRC for funding this project under EPSRC Industrial CASE Studentship voucher number 08002930. The underlying data are confidential and cannot be shared. Special thanks go to SkyCap and Natural Resources Wales for their help and support with data collection. The authors acknowledge financial support from the Castilla-La Mancha Regional Government (Spain) under the Postdoctoral Scholarship Programa Operativo 2007–2013 de Castilla-La Mancha.

Author Contributions

M.R.C. is the principal investigator and corresponding author. She led and supervised the overall research and field data collection. M.R.C. structured and wrote the paper in collaboration with R.B.G. R.B.G. helped with the flight planning and overall data collection. B.G. processed the UAV imagery and helped with the interpretation of the results. R.W. highlighted research priorities, helped structure the research and facilitated fieldwork arrangements. R.W. helped manage the financial aspects of the project. P.B. contributed to the design of the statistical analysis and reviewed the robustness of the analytical techniques applied. All authors were key for the interpretation of the results.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Australian and New Zealand Environment and Conservation Council. National Water Quality Management Strategy: Australian and New Zealand Guidelines for Fresh and Marine Water Quality; Australian and New Zealand Environment and Conservation Council and Agriculture and Resource Management Council of Australia and New Zealand: Canberra, Australia, 2000.
  2. U.S. Environmental Protection Agency. Clean Water Act. In Federal Water Act of 1972 (Codified as Amended at 33 U.S.C.); U.S. Environmental Protection Agency: Washington, DC, USA, 2006.
  3. European Commission. Directive 2000/60/EC of the European Parliament and of the Council of 23 October 2000 Establishing a Framework for Community Action in the Field of Water Policy. Available online: http://eur-lex.europa.eu/resource.html?uri=cellar:5c835afb-2ec6-4577-bdf8-756d3d694eeb.0004.02/DOC_1&format=PDF (accessed on 1 June 2016).
  4. Belletti, B.; Rinaldi, M.; Buijse, A.D.; Gurnell, A.M.; Mosselman, E. A review of assessment methods for river hydromorphology. Environ. Earth Sci. 2014, 73, 2079–2100.
  5. Newson, M.D.; Large, A.R.G. “Natural” rivers, “hydromorphological quality” and river restoration: A challenging new agenda for applied fluvial geomorphology. Earth Surf. Processes Landf. 2006, 31, 1606–1624.
  6. Vaughan, I.P.; Diamond, M.; Gurnell, A.M.; Hall, K.A.; Jenkins, A.; Milner, N.J.; Naylor, L.A.; Sear, D.A.; Woodward, G.; Ormerod, S.J. Integrating ecology with hydromorphology: A priority for river science and management. Aquat. Conserv. Mar. Freshw. Ecosyst. 2009, 19, 113–125.
  7. Buffagni, A.; Armanini, D.G.; Erba, S. Does the lentic-lotic character of rivers affect invertebrate metrics used in the assessment of ecological quality? J. Limnol. 2009, 68, 92–105.
  8. Padmore, C.L. Biotopes and their hydraulics: A method for defining the physical component of freshwater quality. In Freshwater Quality: Defining the Indefinable; Boon, P.J., Howell, D.L., Eds.; Scottish Natural Heritage, Edinburgh Stationery Office: Edinburgh, UK, 1997; pp. 251–257.
  9. Gilvear, D.J.; Davids, C.; Tyler, A.N. The use of remotely sensed data to detect channel hydromorphology; River Tummel, Scotland. River Res. Appl. 2004, 20, 795–811.
  10. Scheifhacken, N.; Haase, U.; Gram-Radu, L.; Kozovyi, R.; Berendonk, T.U. How to assess hydromorphology? A comparison of Ukrainian and German approaches. Environ. Earth Sci. 2011, 65, 1483–1499.
  11. Raven, P.J.; Holmes, N.T.H.; Charrier, P.; Dawson, F.H.; Naura, M.; Boon, P.J. Towards a harmonized approach for hydromorphological assessment of rivers in Europe: A qualitative comparison of three survey methods. Aquat. Conserv. Mar. Freshw. Ecosyst. 2002, 12, 405–424.
  12. Raven, P.J.; Holmes, N.T.H.; Dawson, F.H.; Fox, P.J.A.; Everard, M.; Fozzard, I.R.; Rouen, K.J. River Habitat Quality: The Physical Character of Rivers and Streams in the UK and Isle of Man; River Habitat Survey Report; Environment Agency: Bristol, UK, 1998.
  13. MacVicar, B.J.; Piégay, H.; Henderson, A.; Comiti, F.; Oberlin, C.; Pecorari, E. Quantifying the temporal dynamics of wood in large rivers: Field trials of wood surveying, dating, tracking, and monitoring techniques. Earth Surf. Processes Landf. 2009, 34, 2031–2046.
  14. Gómez-Candón, D.; De Castro, A.I.; López-Granados, F. Assessing the accuracy of mosaics from unmanned aerial vehicle (UAV) imagery for precision agriculture purposes in wheat. Precis. Agric. 2014, 15, 44–56.
  15. Garcia-Ruiz, F.; Sankaran, S.; Maja, J.M.; Lee, W.S.; Rasmussen, J.; Ehsani, R. Comparison of two aerial imaging platforms for identification of Huanglongbing-infected citrus trees. Comput. Electron. Agric. 2013, 91, 106–115.
  16. Baker, B.A.; Warner, T.A.; Conley, J.F.; McNeil, B.E. Does spatial resolution matter? A multi-scale comparison of object-based and pixel-based methods for detecting change associated with gas well drilling operations. Int. J. Remote Sens. 2013, 34, 1633–1651.
  17. Torres-Sánchez, J.; López-Granados, F.; De Castro, A.I.; Peña-Barragán, J.M. Configuration and Specifications of an Unmanned Aerial Vehicle (UAV) for Early Site Specific Weed Management. PLoS ONE 2013, 8.
  18. Kavvadias, A.; Psomiadis, E.; Chanioti, M.; Gala, E.; Michas, S. Precision Agriculture—Comparison and Evaluation of Innovative Very High Resolution (UAV) and LandSat Data. In Proceedings of the 7th International Conference on Information and Communication Technologies in Agriculture, Food and Environment (HAICTA 2015), Kavala, Greece, 17–20 September 2015; pp. 376–386.
  19. Rivas Casado, M.; Ballesteros Gonzalez, R.; Kriechbaumer, T.; Veal, A. Automated Identification of River Hydromorphological Features Using UAV High Resolution Aerial Imagery. Sensors 2015, 15, 27969–27989.
  20. Woodget, A.S.; Carbonneau, P.E.; Visser, F.; Maddock, I.P. Quantifying submerged fluvial topography using hyperspatial resolution UAS imagery and structure from motion photogrammetry. Earth Surf. Processes Landf. 2015, 40, 47–64.
  21. Richards, J.A. Remote Sensing Digital Image Analysis; Springer-Verlag: Berlin/Heidelberg, Germany, 2013.
  22. Zhang, C.; Kovacs, J.M. The application of small unmanned aerial systems for precision agriculture: A review. Precis. Agric. 2012, 13, 693–712.
  23. Ballesteros, R.; Ortega, J.F.; Hernández, D.; Moreno, M.A. Applications of georeferenced high-resolution images obtained with unmanned aerial vehicles. Part I: Description of image acquisition and processing. Precis. Agric. 2014, 15, 579–592.
  24. Sankaran, S.; Khot, L.R.; Espinoza, C.Z.; Jarolmasjed, S.; Sathuvalli, V.R.; Vandemark, G.J.; Miklas, P.N.; Carter, A.H.; Pumphrey, M.O.; Knowles, N.R.; et al. Low-altitude, high-resolution aerial imaging systems for row and field crop phenotyping: A review. Eur. J. Agron. 2015, 70, 112–123.
  25. Tarolli, P. High-resolution topography for understanding Earth surface processes: Opportunities and challenges. Geomorphology 2014, 216, 295–312.
  26. Woodget, A.S.; Visser, F.; Maddock, I.P.; Carbonneau, P.E. The Accuracy and Reliability of Traditional Surface Flow Type Mapping: Is it Time for a New Method of Characterizing Physical River Habitat? River Res. Appl. 2016.
  27. Legleiter, C.J.; Marcus, A.; Lawrence, R.L. Effects of Sensor Resolution on Mapping In-Stream Habitats. Photogramm. Eng. Remote Sens. 2002, 68, 801–807.
  28. Anker, Y.; Hershkovitz, Y.; Ben Dor, E.; Gasith, A. Application of aerial digital photography for macrophyte cover and composition survey in small rural streams. River Res. Appl. 2014, 30, 925–937.
  29. Carbonneau, P.E.; Lane, S.N.; Bergeron, N.E. Catchment-scale mapping of surface grain size in gravel bed rivers using airborne digital imagery. Water Resour. Res. 2004, 40, 1–11.
  30. Vericat, D.; Brasington, J.; Wheaton, J.; Cowie, M. Accuracy assessment of aerial photographs acquired using lighter-than-air blimps: Low-cost tools for mapping river corridors. River Res. Appl. 2009, 25, 985–1000.
  31. Environment Agency. River Habitat Survey in Britain and Ireland; Environment Agency: Bristol, UK, 2003.
  32. Foody, G.M. Harshness in image classification accuracy assessment. Int. J. Remote Sens. 2008, 29, 3137–3158.
  33. Allouche, O.; Tsoar, A.; Kadmon, R. Assessing the accuracy of species distribution models: Prevalence, kappa and true skill statistics (TSS). J. Appl. Ecol. 2006, 43, 1223–1232.
  34. Pontius, R.G.; Millones, M. Death to Kappa: Birth of quantity disagreement and allocation disagreement for accuracy assessment. Int. J. Remote Sens. 2011, 32, 4407–4429.
  35. Congalton, R.G.; Green, K. Assessing the Accuracy of Remotely Sensed Data: Principles and Practices, 2nd ed.; CRC Press: Boca Raton, FL, USA, 2009.
  36. Fleiss, J.L.; Levin, B.; Paik, M.C. Statistical Methods for Rates and Proportions; Wiley Series in Probability and Statistics; John Wiley & Sons, Inc.: Hoboken, NJ, USA, 2003.
  37. Sokal, R.R.; Rohlf, F.J. Biometry: The Principles and Practice of Statistics in Biological Research; W.H. Freeman: New York, NY, USA, 2012.
  38. Armstrong, J.D.; Kemp, P.S.; Kennedy, G.J.A.; Ladle, M.; Milner, N.J. Habitat requirements of Atlantic salmon and brown trout in rivers and streams. Fish. Res. 2003, 62, 143–170.
  39. Kondolf, G.M.; Wolman, M.G. The sizes of salmonid spawning gravels. Water Resour. Res. 1993, 29, 2275–2285.
  40. Elliott, S.R.; Coe, T.A.; Helfield, J.M.; Naiman, R.J. Spatial variation in environmental characteristics of Atlantic salmon (Salmo salar) rivers. Can. J. Fish. Aquat. Sci. 1998, 55, 267–280.
  41. Crisp, D.T. The environmental requirements of salmon and trout in fresh water. Freshw. Forum 1993, 3, 176–202.
  42. Petr, T. Interactions between Fish and Aquatic Macrophytes in Inland Waters: A Review; FAO: Rome, Italy, 2000.
  43. Gurnell, A. Plants as river system engineers. Earth Surf. Processes Landf. 2014, 39, 4–25.
  44. Parker, C.; Clifford, N.J.; Thorne, C.R. Automatic delineation of functional river reach boundaries for river research and applications. River Res. Appl. 2012, 28, 1708–1725.
  45. Mckay, P.; Blain, C.A. An automated approach to extracting river bank locations from aerial imagery using image texture. River Res. Appl. 2014, 30, 1048–1055.
  46. Schmitt, R.; Bizzi, S.; Castelletti, A. Characterizing fluvial systems at basin scale by fuzzy signatures of hydromorphological drivers in data scarce environments. Geomorphology 2014, 214, 69–83.
  47. Marcus, W.A.; Legleiter, C.J.; Aspinall, R.J.; Boardman, J.W.; Crabtree, R.L. High spatial resolution hyperspectral mapping of in-stream habitats, depths, and woody debris in mountain streams. Geomorphology 2003, 55, 363–380.
  48. Roux, C.; Alber, A.; Bertrand, M.; Vaudor, L.; Piégay, H. “FluvialCorridor”: A new ArcGIS toolbox package for multiscale riverscape exploration. Geomorphology 2014, 242, 29–37.
  49. Leviandier, T.; Alber, A.; Le Ber, F.; Piégay, H. Comparison of statistical algorithms for detecting homogeneous river reaches along a longitudinal continuum. Geomorphology 2012, 138, 130–144.
  50. Gurnell, A.M.; Rinaldi, M.; Buijse, A.D.; Brierley, G.; Piégay, H. Hydromorphological frameworks: Emerging trajectories. Aquat. Sci. 2016, 78, 135–138.
  51. Civil Aviation Authority. CAP1361: CAA Approved Commercial Small Unmanned Aircraft (SUA) Operators. Available online: http://publicapps.caa.co.uk/modalapplication.aspx?appid=11&mode=detail&id=7078 (accessed on 13 May 2016).
  52. Poikane, S.; Zampoukas, N.; Borja, A.; Davies, S.P.; van de Bund, W.; Birk, S. Intercalibration of aquatic ecological assessment methods in the European Union: Lessons learned and way forward. Environ. Sci. Policy 2014, 44, 237–246.
  53. Poikane, S.; Birk, S.; Böhmer, J.; Carvalho, L.; de Hoyos, C.; Gassner, H.; Hellsten, S.; Kelly, M.; Lyche Solheim, A.; Olin, M.; et al. A hitchhiker’s guide to European lake ecological assessment and intercalibration. Ecol. Indic. 2015, 52, 533–544.
Figure 1. Site study area along the river Dee, near Bala (UK) with Ground Control Points (GCPs) and Check Points (XPs) distributed equidistantly along the 1.4 km reach. The close-up views (a–e) show the diversity of hydromorphological features within the reach; the map of England and Wales (f) shows the location of the river Dee and highlights the case study area.
Figure 2. Detailed imagery showing the study site and the equipment used: (a) view of a side bar; (b) Quest UAV Q-200 with surveyor Q-Pod; (c) Leica GS14 Base and Rover Real Time Kinematic GPS; (d) SonTek RiverSurveyor M9 Acoustic Doppler Current Profiler (ADCP) mounted on an ArcBoat radio control platform; (e) Ground Control Point; (f) general view of the site; (g) eroding cliffs; and (h,i) general view.
Figure 3. Workflow followed in this paper from flight planning to data interpretation. GSD, GCP, XP, RTK and ANN stand for Ground Sample Distance, Ground Control Point, Check Point, Real Time Kinematic and Artificial Neural Network, respectively.
Figure 4. Detailed view of the feature classification obtained with LAIC for four sections within the 1.4 km reach for each of the resolutions: (a1–a4) detailed view of a section with submerged vegetation; (b1–b4) detailed view of a section with riffles; (c1–c4) detailed view of a section with erosion; and (d1–d4) detailed view of a section with side bars. The images on the left show the orthoimage at 2.5 cm resolution.
Figure 5. Area falling under each feature class for each of the resolutions considered. The number of pixels per resolution was 67,446,624, 17,301,453 and 5,706,057 for 2.5 cm, 5 cm and 10 cm, respectively. The codes used for each of the features are as follows: side bar (SB), erosion (ER), riffle (RI), deep water (DW), shallow water (SW), shadow (SH) and vegetation (VG).
Figure 6. Images showing the mismatch in feature class identification between the 2.5 cm resolution and the 5 cm and 10 cm resolutions. Pixels highlighted in pink show those locations where the 2.5 cm resolution classification identified: (a,b) vegetation; (c,d) side bar; (e,f) vegetation; (g,h) deep water; (i,j) side bar; and (k,l) shallow water, but the coarser resolutions did not match.
Table 1. Hydromorphological features identified within the study area in the river Dee, near Bala, UK based on [19,31].
Category | Feature | Description
Substrate features | Side bars | Consolidated river bed material along the margins of a reach which is exposed at low flow.
Substrate features | Erosion | Predominantly derived from eroding cliffs, which are vertical or undercut banks, with a minimum height of 0.5 m and less than 50% vegetation cover.
Water features | Riffle | Area within the river channel presenting shallow and fast-flowing water. Generally over gravel, pebble or cobble substrate with disturbed (rippled) water surface (i.e., waves can be perceived on the water surface). The average depth is 0.5 m with an average velocity ≈1 m·s^−1.
Water features | Deep water (glides and pools) | Deep glides are deep homogeneous areas within the channel with visible flow movement along the surface. Pools are localised deeper parts of the channel created by scouring. Both present fine substrate and non-turbulent, slow flow. The average depth is 1.3 m and the average velocity ≈0.3 m·s^−1.
Water features | Shallow water | Includes any slow flowing and non-turbulent areas. The average depth is 0.8 m with an average velocity of ≈0.5 m·s^−1.
Vegetation features | Vegetation | This includes trees obscuring the aerial view of the river channel, side bars presenting plant cover, vegetated banks, plants rooted on the riverbed with either submerged leaves (submerged free floating vegetation) or floating leaves on the water surface (emergent free floating vegetation) and grass present along the bank.
Vegetation features | Shadows | Includes shading of channel and overhanging vegetation.
Table 2. Key characteristics of the cameras used to collect the aerial imagery for each of the resolutions compared. The Sony NEX 7 E-mount SELP1650 was used to collect the imagery at 2.5 cm and 5 cm resolution. The Panasonic Lumix DMC-LX7 was used to obtain the imagery at 10 cm resolution.
Characteristic | Sony NEX 7 E-Mount SELP1650 | Panasonic Lumix DMC-LX7
Sensor (type) | APS-C CMOS sensor | APS-C CMOS sensor
Sensor diameter (mm) | 23.5 × 15.6 | 7.6 × 5.7
Million effective pixels | 24.3 | 10.1
Pixel size (mm) | 0.04 | 0.0018
Range of focal length (mm) | 24–75 (35) | 24–90 (35)
Focal length applied (mm) | 24 (35) | 24 (35)
Maximum resolution (MP) | 24.3 | 10.10
Table 3. Parameters describing the flight height, coregistration error and processing times for each of the resolutions compared. GCP, XP and RMSE stand for Ground Control Point, Check Point and Root Mean Squared Error, respectively. The processing time is the computer time required to generate the geomatic products based on the performance of a computer with an Intel Core i7-5820K 3.30 GHz processor, 32 GB RAM and two graphics cards (NVIDIA GeForce GTX 980 and NVIDIA Quadro K2200).
Parameter | 2.5 cm | 5 cm | 10 cm
Flight height (m) | 116 | 133 | 259
Total GCP error in x (m) | 0.0136 | 0.0132 | 0.1863
Total GCP error in y (m) | 0.0134 | 0.0112 | 0.2399
Total GCP error in z (m) | 0.0223 | 0.0295 | 0.3107
Total XP error in x (m) | 0.0139 | 0.0162 | 0.1872
Total XP error in y (m) | 0.0135 | 0.0195 | 0.8336
Total XP error in z (m) | 0.0260 | 0.0521 | 0.5521
XP RMSE (m) | 0.0451 | 0.1574 | 3.0574
Processing time (h) | 16 | 12 | 10
Camera | Sony NEX-7 | Sony NEX-7 | Panasonic Lumix
Table 4. Summary of overall hydromorphological feature identification performance per resolution. Accuracy (AC), Kappa (κ), quantity disagreement (C) and allocation disagreement (Q). Values in brackets take into account the effect of multiple classification, considering as correctly classified those points that are simultaneously identified as: (i) riffle and deep water; or (ii) riffle and shallow water.
Resolution (cm) | AC (%) | κ | C | Q
2.5 | 68.4 (75.8) | 0.62 | 0.064 | 0.264
5 | 64.8 (72.4) | 0.48 | 0.113 | 0.233
10 | 62.8 (66.6) | 0.38 | 0.091 | 0.276
Table 5. Results obtained for each of the hydromorphological features identified per resolution. TNR, TPR, FPR and FNR stand for true negative ratio, true positive ratio, false positive ratio and false negative ratio, respectively. The codes used for each of the features are as follows: side bar (SB), erosion (ER), riffle (RI), deep water (DW), shallow water (SW), shadow (SH) and vegetation (VG). Values in brackets take into account the effect of multiple classification, considering as correctly classified those points that are simultaneously identified as: (i) riffle and deep water; or (ii) riffle and shallow water.
2.5 cm

| Feature | TPR | TNR | FNR | FPR |
|---|---|---|---|---|
| SB | 85.7 | 61.9 | 14.3 | 6.5 |
| ER | 13.6 | 63.4 | 86.4 | 0.4 |
| RI | 47.1 (94.9) | 65.5 | 52.9 | 6.7 |
| DW | 78.8 (87.5) | 58.8 | 21.2 | 16.3 |
| SW | 41.5 (56.9) | 71.5 | 58.5 | 6.4 |
| SH | 0 | 63.8 | 82.2 | 1.5 |
| VG | 80.8 | 55.1 | 19.1 | 4.2 |

5 cm

| Feature | TPR | TNR | FNR | FPR |
|---|---|---|---|---|
| SB | 73.6 | 59.7 | 26.4 | 6.8 |
| ER | 0 | 60.9 | 100 | 0 |
| RI | 34.9 (94.6) | 64.4 | 64.1 | 4.1 |
| DW | 82.5 (86.8) | 54.5 | 17.5 | 23.4 |
| SW | 40.6 (50.6) | 68.1 | 59.4 | 9.2 |
| SH | 0 | 61.1 | 100 | 0 |
| VG | 78.2 | 52.4 | 21.7 | 4.2 |

10 cm

| Feature | TPR | TNR | FNR | FPR |
|---|---|---|---|---|
| SB | 1.4 | 54.8 | 98.6 | 5.6 |
| ER | 0 | 54.3 | 100 | 0 |
| RI | 13.2 (97.3) | 60.4 | 86.8 | 0.6 |
| DW | 80.1 (81.5) | 46.9 | 19.9 | 24.0 |
| SW | 40.4 (43.2) | 59.1 | 59.6 | 16.8 |
| SH | 0 | 54.5 | 100 | 0 |
| VG | 73.4 | 44.3 | 26.6 | 10.8 |
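For reference, the per-feature ratios in Table 5 follow from one-vs-rest counts of true/false positives and negatives. The sketch below shows the standard definitions on hypothetical labels; note that because the paper allows multiple classification per point, its tabulated rows need not satisfy TPR + FNR = 100, unlike this simplified single-label version.

```python
import numpy as np

def one_vs_rest_rates(y_true, y_pred, label):
    """Standard single-label TPR, TNR, FNR, FPR (percent) for one class."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    pos = y_true == label                     # reference points of this class
    tp = np.sum(pos & (y_pred == label))      # correctly detected
    fn = np.sum(pos & (y_pred != label))      # missed
    tn = np.sum(~pos & (y_pred != label))     # correctly rejected
    fp = np.sum(~pos & (y_pred == label))     # false alarms
    tpr = 100 * tp / (tp + fn)
    tnr = 100 * tn / (tn + fp)
    return tpr, tnr, 100 - tpr, 100 - tnr
```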
Table 6. Percentage of pixels at 5 cm and 10 cm resolution that present hydromorphological classification equal to that identified for the 2.5 cm resolution orthoimage. The codes used for each of the features are as follows: side bar (SB), erosion (ER), riffle (RI), deep water (DW), shallow water (SW), shadow (SH) and vegetation (VG).
| Feature | Total Pixels (2.5 cm) | Matching Pixels 5 cm (%) | Matching Pixels 10 cm (%) | Cochran Test Q | p-Value |
|---|---|---|---|---|---|
| SB | 6,751,738 | 66.84 | 35.01 | 5,674,126 | <0.001 |
| ER | 359,621 | 0 | 0 | - | - |
| RI | 8,117,895 | 26.82 | 4.91 | 12,076,900 | <0.001 |
| DW | 20,708,273 | 83.34 | 78.94 | 4,988,222 | <0.001 |
| SW | 13,353,493 | 61.43 | 56.92 | 7,582,029 | <0.001 |
| SH | 376,473 | 0 | 0 | - | - |
| VG | 19,748,122 | 88.62 | 87.40 | 3,275,528 | <0.001 |
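The Cochran test statistics in Table 6 come from Cochran's Q test for related binary samples, here applied per feature to whether each pixel's classification matches the 2.5 cm result across resolutions. A minimal generic sketch, on a small hypothetical 0/1 match matrix rather than the paper's pixel data:

```python
import numpy as np
from scipy.stats import chi2

def cochran_q(x):
    """Cochran's Q test for k related binary samples.
    x: (n_subjects, k) array of 0/1 outcomes, e.g. one row per pixel and
    one column per resolution (1 = matches the 2.5 cm classification)."""
    x = np.asarray(x)
    n, k = x.shape
    col = x.sum(axis=0)           # successes per treatment (resolution)
    row = x.sum(axis=1)           # successes per subject (pixel)
    q = k * (k - 1) * np.sum((col - col.mean()) ** 2) \
        / (k * row.sum() - np.sum(row ** 2))
    p = chi2.sf(q, k - 1)         # Q is asymptotically chi-squared, df = k-1
    return q, p
```

With the very large pixel counts in Table 6, even small differences in matching proportions yield the huge Q values and p < 0.001 reported.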