Remote Sensing of Boreal Wetlands 2: Methods for Evaluating Boreal Wetland Ecosystem State and Drivers of Change

The following review is the second part of a two-part series on the use of remotely sensed data for quantifying wetland extent and inferring or measuring condition, for monitoring drivers of change in wetland environments. In the first part, we provide policy makers and non-users of remotely sensed data with a feasibility guide on how such data can be used. In the current review, we explore the more technical aspects of remotely sensed data processing and analysis using case studies from the literature. Here we describe: (a) current technologies used for wetland assessment and monitoring; (b) the latest algorithmic developments for wetland assessment; (c) new technologies; and (d) a framework for wetland sampling in support of remotely sensed data collection. Results illustrate that high or fine spatial resolution pixels (≤10 m) are critical for identifying wetland boundaries and extent, and wetland class, form and type, but are not required for all wetland sizes. Accuracies can be up to 11% better, on average, than those of medium-resolution (11–30 m) data pixels when compared with field validation. Wetland size is also a critical factor, such that large wetlands may be almost as accurately classified using medium-resolution data (average = 76% accuracy, stdev = 21%). Decision-tree and machine learning algorithms provide the most accurate wetland classification methods currently available; however, these also require sampling of all permutations of variability. Hydroperiod accuracy, which depends on instantaneous water extent for single-time-period datasets, does not vary greatly with pixel resolution when compared with field data (average = 87% and 86% for high- and medium-resolution pixels, respectively). The results of this review provide users with a guideline for optimal use of remotely sensed data and suggested field methods for boreal and global wetland studies.
Remote Sens. 2020, 12, 1321; doi:10.3390/rs12081321 www.mdpi.com/journal/remotesensing


Introduction
The boreal zone comprises approximately one-quarter of the world's wetlands [1]. In Canada, wetlands cover between 18% and 25% of the Canadian boreal region (ECCC, 2016) and are primarily peatlands including bogs and fens [2]. In comparison, the proportion of wetlands per surface area varies globally, with the highest proportion found in Asia (31.8%) and the smallest proportion found in Oceania (2.9%) [3]. Boreal peatlands (bogs and fens) are characterised by a thick (exceeding 40 cm) organic soil layer of brown mosses and graminoid vegetation [4]. Boreal peatlands can also have numerous forms, indicative of structural attributes including open, shrubby and treed forms. Remaining wetlands (swamp, marsh and shallow open water with minimal peat depth) are typically underlain by mineral soils and comprise graminoid (marsh) and treed/shrub (swamp) forms [5]. The formation and maintenance of northern wetlands and peatlands requires relatively cool climates such that precipitation exceeds potential evapotranspiration during most years. Despite this, changes in climate during the most recent period could shift these ecosystems towards increasing rates of terrestrialization [6]. Air temperature is expected to increase by 1.5 to 3 °C in the boreal zone associated with a 1.5 °C increase in global mean surface temperature compared to today's mean annual average [7,8], and with this, changes in precipitation patterns are expected to occur. The IPCC [7] predicts a conservative increase in precipitation of 5–10% associated with a 1.5 °C increase in global mean surface temperature. However, [9] suggest that an increase in precipitation exceeding 15% is required for every 1 °C of warming to maintain the moisture dynamics of the boreal landscape.
Wetland self-regulation is strongly coupled to local hydro-climatology, especially precipitation, evapotranspiration, soil water storage and ground water recharge [10,11], as well as numerous complex autogenic (within wetland) feedbacks that either amplify or dampen external hydro-climate driving mechanisms [12,13]. Therefore, small changes in water balance may result in large changes to wetlands in areas where potential evapotranspiration exceeds precipitation or during periods when dry climatic cycles are longer than wet climatic cycles [14,15]. Widespread increases in precipitation have not yet been observed in western Canadian boreal regions [7].
Vitousek et al. [16] and Foody [17] suggest that land cover change (anthropogenic and/or climate mediated) is the single most important variable that affects ecosystem processes and condition. Therefore, our ability to predict the implications of land-use changes in response to future environmental and climate change scenarios, and vice versa, depends significantly on our ability to monitor and quantify landscape changes in the first place [18]. An accurate understanding of the spatial distribution of wetland/peatland ecosystems in areas that are rapidly changing is therefore fundamental for quantifying rates of change, proportional representativity, ecological trajectories associated with environmental driving mechanisms and how these changes affect ecosystems and ecosystem services [1,17]. Remote sensing technologies provide a means to infer, measure and monitor information regarding ecosystem type, distribution, proximal influences and change over time, both locally and regionally. While remote sensing does not provide measurements of the broad spectrum of complex processes afforded by field measurements within wetland environments (Part 1), the fusion of passive and active remote sensing technologies can provide useful estimates of the cumulative effects of land surface characteristics as proxy indicators of more complex processes. For example, [19] monitored boreal discontinuous permafrost-wetland succession over time using time series airborne lidar data and found that variable rates of wetland expansion were related to spatial variations in incident radiation and underlying hydrological processes. These cumulative effects can be related to functional wetland derivatives including vegetation species, structure, productivity and habitat [20][21][22][23]. Others include indicators of instantaneous water extent, coarse temporal estimates associated with hydroperiod (surface water extent changes over time), soil moisture and water chemistry [24][25][26].
Topographical variations provide important metrics for wetland zone identification, characterisation, hydrology and connectivity [27][28][29][30][31].
In Part 2 of this review compendium, we provide a synthesis of remote sensing tools and methodologies used to better understand, quantify and scale wetland functions and services within an evaluation context [1,4,32]. This review provides an analysis of remote sensing tools and technologies aimed at stakeholders interested in individual wetlands (e.g., communities, industry), wetlands across regions (industry, non-governmental organisations, provinces and territories) and at provincial to national levels (provincial and federal government stakeholders). Here we discuss the state of the art of remote sensing of boreal (and similar) wetlands based on a review of 248 journal articles, each with results comparable against geographically located field validation, and an additional 116 articles that provide examples of applications (sometimes without validation, Part 1). In this second part of our literature review, we address four objectives. We (1) identify remote sensing technologies that have been and are currently used for wetland assessment and (2) apply the feasibility results provided in Part 1 of this compendium to describe wetland processes that can be either directly or indirectly observed using a variety of remote sensing tools, along with their benefits and limitations. We focus on technologies used to identify wetland structure and condition, as opposed to technologies that may be used to infer changes in broad-area wetland probability (e.g., passive microwave and gravimetric methods). In this section, remote sensing methods are grouped into wetland processes of importance to the Ramsar Convention on Wetlands [1], which include the broad range of ecosystem services provided by inland wetlands and, in particular, boreal region wetlands. These processes include wetland classification and extent for inventory and monitoring; hydrological regime and water cycling; biogeochemical processes and the maintenance of wetland function; and carbon cycling and its relationship to biological productivity.
We also provide a summary of the accuracies that can be expected from remotely sensed data products. We then (3) identify promising new and future technologies for wetland observation and management and (4) provide recommendations for field sampling and the costs of wetland attribute measurement for validating remote sensing wetland classification and extent data products. While this literature review focuses on case studies from boreal region wetlands, examples are included across the broad range of global inland (and sometimes coastal) wetland types when boreal examples could not be found. The overall goal of this review is to better understand connections between individual wetland attributes and processes, end user needs and corresponding remote sensing data products for wetland monitoring and the 'wise use of wetlands' identified in the Ramsar Convention framework.

Objective 1: Remote Sensing for Individual Wetlands and Wetland Density Across Regions
Remote sensing of wetlands has proliferated since the early 2000s due to the accessibility of moderate resolution, time-series Landsat data [33,34] and the development of a variety of airborne and space-borne technologies, in correspondence with improvements to computer processing, analysis and data storage (see Part 1). Numerous sensors exist with specific functionalities, whereby the most common remote sensing platforms used for wetland mapping are typically variations of passive optical imagers (e.g., multispectral and hyperspectral), followed by active remote sensing technologies: synthetic aperture radar (SAR) and airborne lidar technology. Optical imagery remains at the forefront of detecting key wetland characteristics including wetland type [1], class and form [35] attribution. Additional information can be obtained using lidar and SAR, including open water/wet areas, flooded vegetation, topographical variability and vegetation structure. In addition, unmanned aerial vehicles and the development of structure-from-motion point clouds are showing promise for characterising species and structural attributes of wetlands. Table 1 provides a summary of 46 common airborne and satellite remote sensing technologies used for quantifying wetland attributes (of >900 historical, operational and future airborne and satellite systems [36]), including recent additions such as multi-spectral lidar (0.5–5 m, varying with spot spacing; available on demand since 2014) [191][192][193][194].

Wetland Classification and Extent for Inventory and Monitoring
Classification is critical for quantifying the distribution and area extent of wetlands across the region (wetland inventory) [195]. Changes in wetland class and extent are monitored by comparing in situ measurements with changes in the absorption, reflection, emission and transmission of energy sensed by remote sensing technologies through time, often associated with changing ground cover/vegetation characteristics. Monitoring and management of many wetlands over broad areas requires the use of remotely sensed data, with validation from field surveys, to determine individual wetland class (wetland classification of bogs, fens, marshes, swamps and shallow open water) and type (e.g., treed fen, shrub fen, open fen; hydroperiod). Despite the need to classify and inventory wetlands, accurate wetland classification and boundary delineation can be difficult [195]. Vegetation and geomorphological gradients vary across the boreal region, and within many global regions where wetlands exist, due to variations in soil moisture and soil organic layer thickness. Environmental gradients cause blending between wetland edges, also known as perimeters or transition zones into adjacent land cover types, resulting in considerable natural variability [196] and blurring the boundaries between species communities [17]. For example, Mayner et al. [197] examined the characteristics of black spruce (Picea mariana) bog-upland transitions using field-based vegetation assessments across a range of hydrogeological settings based on surficial geology and predominant sediment textures in the Boreal Plains ecozone, Canada. They found a wide range of bog-upland ecotone widths ranging from sharp transitions (0 m, no ecotone) through to wide margin or transitional ecotones (max. width 60 m) with an average width of 12 m. 
There were no significant differences in ecotone widths across hydrogeologic settings, except that bogs on fine-textured deposits had significantly greater margin-area to total-peatland-area ratios, likely due to the gentle slopes and generally larger (more expansive) size of these peatlands [197].
The blending of boundaries between wetlands and adjacent land covers does not necessarily improve when high spatial resolution remotely sensed data are used. Lower-resolution optical multi-spectral imagery (e.g., SPOT, Sentinel-2) may be used to integrate the spectral characteristics of transition zones within pixels, such that homogeneous land cover patches [198] can be characterised and classified. Alternatively, pixels can be grouped into objects (i.e., clusters of spectrally similar pixels) using segmentation methods (Figure 1) [199,200].
Two broad methods are used to delineate boundaries between wetlands and adjacent land cover classes based on differences in the reflection of energy from vegetation characteristic of the wetland environment. Accurate classification of the wetland extent, including the transition zones between land cover types, is required for monitoring changes in wetland characteristics and extent over time. Classification methods broadly include (a) pixel-based methods, which classify data on a pixel-by-pixel basis; and (b) object-based image analysis, which classifies spatially continuous pixel clusters, where each pixel in a cluster has some similarity with those immediately adjacent to it [200,201]. Traditionally, multi-spectral optical imagery tends to be the best candidate for such analysis [202][203][204] due to the diversity of information available through different image bands, which allows for better characterization of individual objects/pixel clusters. For most land cover and wetland classification and inventory scenarios, object-based approaches tend to yield better overall accuracies than pixel-based approaches when validated against independent data, due to noise reduction [47,205,206].

Figure 1. (a) Identifiable shrubs along the transition zone between the wetland and riparian zone are visually observed in the images; (b) visible colour composite from Worldview-2 for the same bog. Pixel resolution is 1.4 m; smaller shrubs fall within pixels and are averaged, making the boundary between shrubs and riparian area easier to discern; (c) Sentinel-2 data (visible colour composite), where the transition zone is integrated with the riparian and forest zones and the wetland edge can be identified. Red outlines represent object-oriented segmentation associated with spectral reflectance differences of vegetation and soil moisture, riparian and forested zones.
Pixel-based supervised classifications (e.g., maximum likelihood classification), historically favoured and still widely used, have been applied with relative success throughout the history of wetland (and land cover) classification from remote sensing data. However, pixel-based supervised classifications often require training data to represent all possible characteristics or groups of characteristics of the wetland (and proximal land cover) environment to maximize classification accuracies. Pixels that exhibit properties not described by the training dataset are often misclassified into other spectrally similar classes. Furthermore, classifiers should minimize data dimensionality (e.g., through the use of principal components analysis) where possible to include only the most informative attributes from the training data, thereby minimizing the introduction of noise within the classifier [207]. Land cover accuracies of wetland class and, to some degree, type using supervised classifications are typically between 75% and 95%. For example, Wei and Chow-Fraser [99] utilized a supervised maximum likelihood classification to classify open water among four other vegetated land covers at two sites in Canada's Georgian Bay with overall accuracies between 85% and 90% when compared with an independent data source. MacAlister and Mahaxay [208] similarly used the maximum likelihood classification to separate wetlands from non-wetlands across five sites with overall accuracies ranging from 77% to 93%. In another study, Franklin et al. [128] used a data conflation (also known as 'fusion') approach with Radarsat-2 and Landsat Operational Land Imager (OLI) to classify bog and fen wetlands, yielding an overall accuracy of 79%. Lower resolution imagery may be useful for national to global mapping of wetland vs.
non-wetland classes (described in Part 1); however, classification accuracy is typically reduced due to the inability to capture small wetlands within the spatial fidelity of the system and difficulty identifying wetland transition zones [45]. Some studies have compared unsupervised and supervised classifiers for a variety of land cover and wetland mapping tasks, where the majority conclude that the latter method yields superior results [209,210].
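The per-pixel maximum likelihood approach discussed above can be sketched as follows: fit a Gaussian model (mean vector and covariance matrix) per class from training spectra, then assign each new pixel to the class with the highest log-likelihood. All band values and class names below are invented for demonstration; real training samples would come from geolocated field plots.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic training spectra (rows = pixels, columns = bands) for two
# illustrative classes; means and spreads are arbitrary demonstration values.
train = {
    "water": rng.normal([0.05, 0.03, 0.02], 0.01, size=(200, 3)),
    "fen": rng.normal([0.08, 0.20, 0.35], 0.03, size=(200, 3)),
}

# Fit a Gaussian per class: mean vector, inverse covariance, log-determinant.
models = {}
for name, samples in train.items():
    mean = samples.mean(axis=0)
    cov = np.cov(samples.T)
    models[name] = (mean, np.linalg.inv(cov), np.log(np.linalg.det(cov)))

def classify(pixels):
    """Label each pixel with the class of highest Gaussian log-likelihood."""
    names, scores = list(models), []
    for name in names:
        mean, inv_cov, log_det = models[name]
        d = pixels - mean
        # Log-likelihood up to a constant: -0.5 * (Mahalanobis^2 + log|cov|)
        scores.append(-0.5 * (np.einsum("ij,jk,ik->i", d, inv_cov, d) + log_det))
    return [names[i] for i in np.argmax(scores, axis=0)]

print(classify(np.array([[0.05, 0.03, 0.02], [0.09, 0.21, 0.33]])))
```

In practice, this is what a maximum likelihood classifier in remote sensing software computes for every pixel of every class, which is why representative training data for all classes are so important.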
Another form of pixel-based classification (also suitable for object-based analysis) employs decision tree methods for identifying wetland class and type based on multiple spatially continuous datasets (Part 1). Here, decisions are made based on a set of defined characteristics or 'rules' that define a particular wetland environment or class (a bottom-up approach) [84,184]. Alternatively, the defined ruleset can be used to successively partition (or split) the images (or feature space) to be classified into smaller subsets or groupings of areas with similar characteristics. Decision trees are easily interpreted by users due to their expressivity, which is based on a series of logical decisions; however, this also often results in a tendency to overfit models [211]. Within the decision tree ruleset, each split of the feature space creates a 'node' or decision based on the characteristics of the land surface, with the goal of reducing confusion between each land cover class, wetland type, etc., such that classes become more 'pure' (or homogeneous). Purity is determined using impurity measures and thresholds [212]. After each split, the decision to halt further splitting of the feature space is reviewed against the impurity threshold: if class impurity is less than the defined threshold, splitting stops and the node is labelled as a leaf (terminus); otherwise, splitting continues. Once a decision tree is formulated, external (non-training) data are run through the tree, adhering to its splitting criteria at each node until a 'leaf' is reached, thereby yielding a class prediction.
A variety of decision tree algorithms such as Classification Tree Analysis, Stochastic Gradient Boosting and Classification and Regression Tree [213][214][215] have been applied to numerous remote sensing land cover applications, including wetlands [216][217][218][219][220]. Baker et al. [213] noted Stochastic Gradient Boosting to be preferable to Classification Tree Analysis for mapping wetland, non-wetland and riparian land cover classes. In another study, Tulbure et al. [215] obtained an overall accuracy of 96% when classifying water bodies from other land cover types. Pantaleoni et al. [214] noted Classification and Regression Tree was better able to classify three wetland classes from upland land cover types with 73% overall accuracy compared with validation data. In one study, even though Classification and Regression Tree provided promising results, it was concluded that it did not yield high enough accuracies to replace wetland mapping methods based on feature extraction in high resolution image data [214].
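The node-splitting and impurity mechanics described above can be sketched with a single split. The Gini impurity measure and the NDVI-like feature values below are illustrative choices for demonstration, not taken from any of the cited studies:

```python
import numpy as np

def gini(labels):
    """Gini impurity of a label array: 0 means a perfectly 'pure' node."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p**2)

def best_split(values, labels):
    """Find the threshold on one feature minimising weighted child impurity."""
    best = (None, np.inf)
    for t in np.unique(values)[:-1]:  # candidate thresholds
        left, right = labels[values <= t], labels[values > t]
        impurity = (len(left) * gini(left) + len(right) * gini(right)) / len(labels)
        if impurity < best[1]:
            best = (t, impurity)
    return best

# Toy feature: NDVI-like values for open water (low) vs. treed fen (high).
ndvi = np.array([0.02, 0.05, 0.08, 0.55, 0.62, 0.70])
labels = np.array(["water", "water", "water", "fen", "fen", "fen"])
threshold, impurity = best_split(ndvi, labels)
print(threshold, impurity)  # threshold 0.08 separates the classes perfectly
```

A full decision tree repeats this search recursively on each child node, stopping when the impurity threshold (or another stopping rule) is met, at which point the node becomes a leaf.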
Machine learning methods are broad, ranging from simple nearest neighbour algorithms to complex decision tree ensemble methods. Such algorithms typically rely on a reference or training dataset in order to learn and are therefore usually supervised classifiers, or are used for spatial imputation beyond the extent of the user's reference data. This also means that while such algorithms are capable of handling large datasets with high data dimensionality (or many spatial data information layers), a reduction in the latter is often beneficial with respect to improved overall classification accuracies [221,222]. A simple machine learning method is k-Nearest Neighbour, which takes the modal classification of the k closest samples within the reference dataset. This technique is a non-parametric (no assumption of model form) classifier and has been utilized for wetland classification [223]. However, k-Nearest Neighbour methods can result in significantly lower overall accuracies than equivalent results from more sophisticated algorithms such as random forest [223,224]. The random forest algorithm [225] is a non-parametric ensemble classifier consisting of multiple parallel decision trees, where each tree is trained from a random subset of a parent dataset, utilizing the 'bagging' concept [226]. In boreal regions, random forest methods have been used with varying degrees of success for classifying wetland class and type, ranging from 70-99% accuracy compared with validation data [128,205,206,221,224,[227][228][229][230]. Other studies, such as Mahdavi et al. [231] and Amani et al. [232], combined random forest-based approaches with object-based methods (described below), achieving wetland class accuracies of between 86% and 96% when compared with reference data. A common alternative to random forest for wetland mapping is the (non-parametric) Support Vector Machine algorithm [233].
This method subsets the feature space much like random forest; however, it calls upon hyperplanes (linear boundaries that separate the feature space) to increase data purity. A wetland application of Support Vector Machine is given by Li et al. [234], who classified rice fields from all other land cover types in rural China using SAR data. A number of data product combinations were utilized to drive the Support Vector Machine, resulting in overall accuracies ranging from 71% to 93% [234]. Mack et al. [109] also demonstrated success using Support Vector Machine for mapping raised bogs using optical RapidEye data (95% accuracy), while Mahdianpari et al. [110] had somewhat lower success mapping wetlands using Support Vector Machine (74% accuracy). In the context of wetland classification, the random forest algorithm typically yields the greatest overall classification accuracies when compared to other machine learning methods [224].
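A minimal sketch comparing the two supervised classifiers discussed above, using the scikit-learn library on synthetic "spectral" samples; the class names, feature values and hold-out split are invented for demonstration and do not reproduce any cited study:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic three-band samples for three illustrative wetland classes; real
# features would be image bands, SAR backscatter, indices, terrain, etc.
means = {"bog": [0.1, 0.3, 0.2], "fen": [0.2, 0.5, 0.3], "marsh": [0.4, 0.6, 0.1]}
X = np.vstack([rng.normal(m, 0.05, size=(150, 3)) for m in means.values()])
y = np.repeat(list(means), 150)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)

# Random forest: a bagged ensemble of decision trees.
rf = RandomForestClassifier(n_estimators=100, random_state=1).fit(X_tr, y_tr)
# Support Vector Machine with a radial basis function kernel.
svm = SVC(kernel="rbf").fit(X_tr, y_tr)

print(f"random forest: {rf.score(X_te, y_te):.2f}, SVM: {svm.score(X_te, y_te):.2f}")
```

On real wetland data, the accuracy gap between the two depends heavily on how completely the training samples capture each class's variability, as noted above.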
Deep learning methods are an emerging subset of machine learning and adhere to general machine learning functionality. Basic machine learning methods improve progressively at a given task but still require guidance from additional data; that is, if an inaccurate prediction is returned, external intervention is required in the form of a manual fix, or the addition of more training data and rerunning of the model. Conversely, deep learning algorithms identify inaccurate predictions autonomously and attempt to correct them. Deep learning methods learn through a layered algorithmic structure called an artificial neural network. These have demonstrated consistently superior results when compared to random forest wetland classifications [110]; however, they are often computationally expensive and non-trivial to set up. As a result, the application of deep learning for classifying wetland class, type and form, as well as extent, remains limited. Despite this, a subset of studies indicates that deep learning often outperforms other classifiers and demonstrates paradigm-shifting potential for the future of machine learning [235][236][237].
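As a highly simplified illustration of the layered artificial neural network structure mentioned above (far smaller than any practical deep learning model), the following trains a single-hidden-layer network by gradient descent on synthetic two-band data; the class setup, layer sizes and learning rate are all arbitrary demonstration choices:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic two-band samples for two classes (e.g., water = 0, vegetated = 1).
X = np.vstack([rng.normal(0.1, 0.05, (100, 2)), rng.normal(0.6, 0.05, (100, 2))])
y = np.repeat([0.0, 1.0], 100).reshape(-1, 1)

# One hidden layer of 8 units: a minimal "layered structure".
W1, b1 = rng.normal(0.0, 0.5, (2, 8)), np.zeros(8)
W2, b2 = rng.normal(0.0, 0.5, (8, 1)), np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(2000):  # gradient descent on binary cross-entropy loss
    h = np.tanh(X @ W1 + b1)                   # hidden-layer activations
    p = sigmoid(h @ W2 + b2)                   # predicted probability of class 1
    grad_out = (p - y) / len(X)                # output-layer error signal
    grad_h = (grad_out @ W2.T) * (1.0 - h**2)  # backpropagated to hidden layer
    W2 -= lr * h.T @ grad_out; b2 -= lr * grad_out.sum(axis=0)
    W1 -= lr * X.T @ grad_h;   b1 -= lr * grad_h.sum(axis=0)

accuracy = ((p > 0.5) == (y > 0.5)).mean()
print(f"training accuracy: {accuracy:.2f}")
```

Deep networks used for wetland mapping (e.g., convolutional architectures) stack many such layers and learn spatial features directly from imagery, which is the source of both their accuracy and their computational cost.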
Object-based image analysis groups or 'segments' pixels into objects based on shape, size, colour (spectral response) and pixel topography parameters. Parameters vary as a function of the landscape being segmented and often require trial and error or optimization based on landscape characteristics. For example, Rokitnicki-Wojcik et al. [103] developed a ruleset for regional application of an object-based approach using optical IKONOS imagery, achieving an accuracy of 77% when mapping complex wetlands and vegetation classes. Transferring the ruleset resulted in a minimal loss of accuracy of 5.7%, illustrating the importance of ruleset transferability to broader regions when applying this methodology. Despite the utility of high spatial resolution optical imagery, the use of remotely sensed data with small pixel sizes (e.g., 1 m-5 m) does not necessarily improve segmentation-based classification results. For instance, shaded and sunlit trees can confound the classification by producing additional objects due to differences in spectral reflectance between them, despite both belonging to the 'forest' class. Berhane et al. [88] applied object-based image analysis approaches to segment high spatial resolution Quickbird imagery into various wetland classes, with 90% accuracy, whereas Frohn et al. [47] applied similar methods to lower spatial resolution Landsat-7 (ETM+) imagery (Table 1), achieving an accuracy of 95%. However, as noted in Frohn et al. [47], wetlands <0.2 ha were not easily resolved within 30 m Landsat pixels (Table 1). Therefore, there is a trade-off between spatial fidelity of pixel resolution, wetland size and wetland edge detection. Indeed, highly biodiverse cryptic swamp wetlands, which are difficult to classify but provide important ecosystem services [238], may often not be included in a lower spatial resolution image classification.
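A crude stand-in for the segmentation step in object-based image analysis can be sketched in two steps: group pixels by spectral similarity (a simple threshold here, where operational software uses dedicated segmentation algorithms and shape/size parameters) and then merge spatially contiguous pixels into labelled objects. The synthetic image and threshold below are invented for demonstration:

```python
import numpy as np
from scipy import ndimage

# Synthetic single-band "image": two bright wetland patches on a dark upland.
img = np.zeros((20, 20))
img[2:8, 3:9] = 0.8      # patch 1 (6 x 6 pixels)
img[12:18, 11:17] = 0.7  # patch 2 (6 x 6 pixels)
img += np.random.default_rng(1).normal(0, 0.02, img.shape)  # sensor noise

# Step 1: spectral grouping via a simple reflectance threshold.
mask = img > 0.4
# Step 2: spatial grouping via connected-component labelling into objects.
objects, n_objects = ndimage.label(mask)
sizes = ndimage.sum(mask, objects, index=list(range(1, n_objects + 1)))
print(n_objects, sizes)  # two objects of 36 pixels each
```

Each labelled object could then be attributed with mean spectra, area and shape metrics and classified as a unit, which is the noise-reduction advantage of object-based over pixel-based approaches noted earlier.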
Overall, decision-tree classifications with multiple datasets generally provide the most accurate classifications for wetland existence and class (average = 81.6%), ranging from 73-96% when compared with geographically located field measurements of wetland class. However, such classifications of wetland class (bog, fen, etc.) and form (treed, non-treed, permanence) (e.g., used in the Alberta Wetland Classification System) are typically local- to regionally-based and over-parameterised, and thus are often not easily transferrable to other regions with the same level of accuracy [84]. Machine learning imputation and Support Vector Machine learning methods for land cover and wetland class, form and type have average accuracies of 80% and 79%, respectively, ranging from 72-99% (random forest) and 73-90% (Support Vector Machine). However, these methods require that training data capture the full variability of each class identified by the classifier [221,239]. Segmentation approaches are 77% accurate compared with field data, on average, with accuracies as high as 86% (when datasets acquired during winter are removed) [57]. This finding illustrates that consistency in the timing of data collection is required, though segmentation also requires significant parameterisation and user intervention, similar to decision-tree methods. Finally, pixel-based classifications and clustering methods, such as maximum likelihood classification, are accurate on average 73% of the time, ranging from 57 to 92% for wetland classes observed in the literature (Part 1). Accuracy is reduced when lower resolution imagery is used at the local level [45], and transitional edges can be problematic as they are often not discerned within the fidelity of low-resolution pixels (>10-20 m or more). However, for broad (national/global) area mapping of wetland vs.
non-wetland land cover types and wetland classes, freely available moderate resolution remote sensing data such as Landsat and Sentinel-2 provide exceptional coverage and good fidelity of classification, given national-level data and computing constraints. These may be improved via local high-resolution image sampling using hyperspectral, multi-spectral and/or lidar data and parameterised using other important geospatial attributes, such as surficial geology.

Wetland Water Extent, Level and Hydroperiod
Wetlands occur at the elevation at which the water table intersects with the ground surface. The rate of water movement is often slow and therefore there tend to be zones of surface water and ground water interaction and storage. The movement of water through wetland ecosystems is therefore dependent on the characteristics of the underlying soil matrix, wetland connectivity and pathways for water cycling [10,240,241]. Hydroperiod provides an index of cumulative hydrological inputs and outputs from wetlands [242,243], and is inextricably linked to wetland biogeochemistry, productivity and wetland function [195], and numerous wetland ecosystem services [1].
Single polarization SAR data have demonstrated success in the mapping of water body extents [163,187,[244][245][246][247][248][249][250][251]. Single polarization SAR transmits and receives waveforms that are horizontally (HH) or vertically (VV) polarized, where the first letter denotes the transmitted polarized waveform and the second the received polarized waveform. The backscatter mechanism of the emitted radio waves results in a weak to non-existent return signal from water surfaces due to specular reflection away from the sensor, such that water surfaces appear darker than other terrestrial surfaces [199]. Thus, SAR has been casually nicknamed "the water seeker" due to the ability of radar technologies to observe standing water based on this scattering property at a 'snapshot' in time, and its sensitivity to a target's water content because of the high dielectric constant of water [30,187]. In addition, the long wavelengths emitted by SAR allow this technology to be used during cloudy conditions, during rainfall and at night.
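The dark appearance of still water in single-polarization SAR imagery suggests a simple thresholding sketch of water detection. The backscatter values, scene layout and threshold below are synthetic and purely illustrative; operational workflows choose thresholds per scene (e.g., automatically from the image histogram):

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic sigma-nought backscatter image (dB): the lower half is open
# water, which returns little energy due to specular reflection away
# from the sensor, so its backscatter is strongly negative.
sigma0 = rng.normal(-8.0, 1.5, size=(50, 50))           # land/vegetation
sigma0[25:, :] = rng.normal(-20.0, 1.5, size=(25, 50))  # still open water

threshold_db = -14.0  # scene-dependent; illustrative value only
water_mask = sigma0 < threshold_db
print(f"{water_mask.mean():.0%} of pixels classified as water")
```

As the surrounding text notes, this simple rule breaks down when wind-roughened water scatters diffusely and brightens, which is one motivation for adding optical imagery or multi-polarization data.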
Detection of water using SAR is due to the ability to polarimetrically discriminate signal information, where the definition of polarization follows the strict physics definition (i.e., restricting the transverse vibration of an electromagnetic wave to one direction). The most common SAR polarizations are 'horizontal' and 'vertical'. Horizontal polarization means that the wave oscillates at 0° from the horizontal plane perpendicular to the direction of travel of the emitted radiation; vertical polarization means that the wave oscillates at 90° from that plane, orthogonal to the horizontal [252]. The horizontal (H) and vertical (V) signal components are recorded by unique antenna components and stored in isolation by the system's electronics. Use of single polarization data does not always yield a reduced backscatter signal (i.e., appearing darker in the image) from water, however. In some cases, diffuse scattering may produce an increased backscatter signal (i.e., appearing brighter in the image), which can result in water surfaces being misidentified [250]. Specular and diffuse scattering mechanisms are both common from open water surfaces: specular scattering occurs from still water, while diffuse scattering is more common when the water surface is disturbed by wind and wave action [152,154,253]. The ability to detect water is improved by supplementing single polarization SAR with optical imagery and/or dual or quad polarization data [254]. With regard to vegetation, detection can occur through both double bounce and volumetric scattering, such that different information is returned to the sensor. Phase information in dual or quad-polarization SAR allows for decomposition to differentiate between different scattering mechanisms (double bounce vs. volumetric) [255,256].
Double bounce occurs when two smooth surfaces create a right angle that deflects the incoming radar signal from both surfaces, such that most of the energy is returned to the sensor, sometimes indicative of emergent and flooded vegetation. Volumetric scattering occurs when the signal is backscattered in multiple directions from taller vegetation features, commonly observed in the transition zone or perimeter of wetlands where there is shrubby vegetation or tall cattails [257] (Figure 2). The use of steep incidence angles from nadir (e.g., using Radarsat-2) also enhances the ability to map sub-canopy hydrological features through greater canopy penetration and the probabilistic reduction of double-bounce scattering [258][259][260][261]. Figure 2 illustrates changes in water extent and different wetland class types, including aquatic and inundated vegetation, over different years using coherence statistics from volumetric and double bounce scattering mechanisms applied to a wetland complex. Dual-polarization SAR improves the ability and accuracy of water detection and includes combinations of transmitted and received polarizations in the form of HH, HV, VH and VV. For dual-polarized data, only two of the four listed combinations are recorded from the transmission of H and V polarized wavelengths. Of the available polarizations, HH and/or HV are best suited to open water mapping [262]. HH polarization is often the best choice for reducing the effect of small vertical displacements caused by waves and provides greater differences in backscatter between land and water surfaces [175,263]. HV provides improved water detection when high wind conditions or water surface roughness are present, as there is less response in the backscatter compared to HH [262,264,265].
Dual- or quad-polarized (transmission of H and V, and reception of all four combinations of HH, HV, VH and VV) data also provide superior results for mapping flooded vegetation compared with single-polarization data [250,266] and have been employed for mapping open water and flooded vegetation [147,224,231,[267][268][269][270][271][272][273][274][275], required for accurate water extents and estimates of hydroperiod over time [250,276,277] (Figure 3). While SAR can be used to determine water extents, the temporal periodicity of data collections may not capture the full range of hydrological variability associated with rapid changes in measured hydroperiod. Multi-polarization data are common products of the latest satellite SAR missions [278], whereas single-polarization data were utilized more commonly in early SAR systems but have since been recognized as somewhat limited with respect to wetland classification. Based on the literature presented in Table 1, the average accuracy of water body detection is 89% (stdev = 3.9%). Further, water body classification may not consider the accuracy of edge detection [111] and may be over-inflated when comparing large binary land covers (water, no water), a potential issue for any large waterbody classification. For example, because the proportion of pure water pixels greatly exceeds that of water edge/mixed pixels, classification of open water will appear highly accurate, whereas detection at the water's edge may be less accurate. Overall, the high proportion of water pixels will mask inaccuracies at these transition zones, depending on the size of water bodies within wetlands relative to pixel resolution [154].
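The backscatter-based water detection described above can be sketched programmatically. The following illustrative Python fragment applies a simple dB threshold to a toy grid of calibrated backscatter values; the -18 dB cutoff and the toy scene are assumptions for demonstration only, as operational thresholds are normally derived per scene (e.g., from the backscatter histogram):

```python
# Minimal sketch: water masking by thresholding SAR backscatter (sigma0, dB).
# The -18 dB cutoff is an illustrative assumption; operational thresholds are
# typically derived per scene (e.g., from the backscatter histogram).

WATER_THRESHOLD_DB = -18.0  # assumed cutoff between specular (water) and land

def water_mask(sigma0_db):
    """Return a binary mask (1 = water) for a 2-D grid of backscatter values."""
    return [[1 if px < WATER_THRESHOLD_DB else 0 for px in row]
            for row in sigma0_db]

def water_fraction(mask):
    """Proportion of pixels flagged as water."""
    flat = [px for row in mask for px in row]
    return sum(flat) / len(flat)

# Toy HH backscatter scene: low (dark) values over still water,
# higher values over land and wind-roughened water.
scene = [[-22.5, -21.0, -8.0],
         [-20.3, -12.0, -6.5],
         [-19.1,  -7.2, -5.9]]

mask = water_mask(scene)
print(mask)
print(water_fraction(mask))
```

In practice, such thresholding is combined with speckle filtering and, as noted above, with optical imagery or multi-polarization data to resolve wind-roughened water surfaces.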
Hydroperiod is also mapped using optical imagery [279]; however, unlike SAR, challenges arise when acquiring images with suitable cloud conditions, sufficiently fine spatial resolution and appropriate timing between acquisitions. For these reasons, optical remote sensing may not capture all changes in water extent between images and is therefore not recommended. Monitoring hydroperiod using other technologies (i.e., lidar or hyperspectral imagery) is challenging because acquisition of repeat-pass data is cost-prohibitive [28,280], especially for airborne configurations. However, a recent study inferred hydroperiod regimes for small depressional wetlands via a single lidar acquisition [190], an alternate approach to inference via repeat data acquisitions [280]. When available, water extent and hydroperiod average accuracy using optical imagery is 86% (stdev = 12%), and improves when high-resolution data are used (average = 90%, stdev = 10%).

Inferring Soil Moisture and Hydrological Connectivity in Wetlands Using Remote Sensing
Sources of water input to wetlands and hydrological connectivity can be used to indicate wetland type (e.g., ombrogenous bogs) and the potential for nutrient fluxes [4]. SAR is not only sensitive to shallow open water areas, but can also be used to estimate surface soil moisture. Numerous sensors (e.g., Radarsat, ALOS, CosmoSkyMed, etc.), wavelengths (C, L, X) and techniques (empirical, semi-empirical, physical models) have been used to infer spatial variations in soil moisture within a variety of environments. In many cases, methods are being actively developed for agricultural landscapes [281][282][283] with fewer applications in boreal peatland and wetland environments. Millard and Richardson [24] assessed several different polarimetric SAR parameters across different dates and found varying relationships with soil moisture based on variations in daily wetness of the ground surface. However, the low predictive strength of soil moisture models was only evident through a process of model cross-validation (bivariate regression R² ranged from 0.14 to 0.66 for fitted models and 0.05 to 0.41 for independently cross-validated models). Millard and Richardson [24] also compared the influence of vegetation density derived from airborne lidar data on backscattered signals from SAR and found that vegetation density influences C-band signals. To mitigate this, soil moisture was predicted and compared within those sites that were not densely vegetated, yielding much higher predictive strength (R² improved from 0.11 to 0.71 within the least vegetated sites). In another study, Millard et al. [284] used linear mixed effects models to monitor temporal dynamics of soil moisture in a peatland using remotely sensed imagery over one year. The purpose of the study was to determine the predictive accuracy of the combined remote sensing and modelling approach on alternative moisture periods outside of the time series.
A time series of seven Moderate Resolution Imaging Spectroradiometer (MODIS) and SAR images were collected along with concurrent field measurements of soil moisture over one growing season. Linear mixed effects models allowed repeated measures (temporal autocorrelation) to be accounted for at individual sampling sites, as well as soil moisture differences associated with peatland classes. Covariates provided a large amount of explanatory power in models; however, SAR data contributed only a moderate improvement to soil moisture predictions (marginal R² = 0.07; conditional R² = 0.7; independently validated R² = 0.36).
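The gap between fitted and independently cross-validated R² reported in these studies can be illustrated with a minimal sketch: fitting an ordinary least squares line to synthetic, noisy 'SAR parameter vs. soil moisture' data and comparing the training R² against a leave-one-out cross-validated R². All values here are synthetic, not from [24] or [284]:

```python
# Sketch of why fitted R^2 overstates predictive strength relative to
# independently cross-validated R^2. Data are synthetic, not from the
# cited studies.
import random

def fit_line(xs, ys):
    """Ordinary least squares slope/intercept for a single predictor."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(xs, ys))
             / sum((xi - mx) ** 2 for xi in xs))
    return slope, my - slope * mx

def r_squared(ys, preds):
    my = sum(ys) / len(ys)
    ss_res = sum((y - p) ** 2 for y, p in zip(ys, preds))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return 1 - ss_res / ss_tot

random.seed(42)
# Weak, noisy relationship between a SAR parameter (dB) and soil moisture.
x = [random.uniform(-15, -5) for _ in range(20)]
y = [0.03 * xi + random.gauss(0.4, 0.08) for xi in x]

b, a = fit_line(x, y)
fitted = r_squared(y, [b * xi + a for xi in x])

# Leave-one-out cross-validation: refit without each site, then predict it.
loo_preds = []
for i in range(len(x)):
    bi, ai = fit_line(x[:i] + x[i+1:], y[:i] + y[i+1:])
    loo_preds.append(bi * x[i] + ai)
cv = r_squared(y, loo_preds)

print(round(fitted, 3), round(cv, 3))  # cross-validated R^2 is lower
```

The cross-validated value is always lower for ordinary least squares with noisy data, which is why independent validation is essential before trusting a fitted soil moisture model.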

Topographical Indicators of Potential for Moist to Saturated Soil Conditions
Areas of increased soil moisture, soil saturation and standing water may be observed or inferred using high spatial resolution digital elevation models (DEMs) of the ground surface [196]. Data of ground surface elevation can thus be used to determine where local topographic depressions exist in the land surface, and where water may accumulate. This approach therefore indicates where surface water may accumulate, whereas optical and active remote sensing are used to determine where surface water is. Despite the probability of water accumulating in depressions, moisture is not measured directly (unless additional datasets, such as SAR, are used), and the presence of surface soil moisture may be complicated by hydraulic conductivity, gravitational water movement and underlying geology [10,285,286]. Connectivity between hydrological features such as wetlands may be estimated using high point density lidar data and UAV structure from motion. Connectivity is critical to understanding the movement of water and nutrients to downstream ecosystems and may be an indicator of resilience vs. sensitivity to watershed influences. In the Boreal Plains, Alberta, Canada, lake resilience to drought improved in areas with more wetlands, which provide water to lakes during dry periods [287]. Further, discrete features within mineral soils, such as gullies, can be determined with relative accuracy using lidar DEMs and variations in the intensity of laser returns [282]. For example, Evans and Lindsay [186] were able to quantify gully depth to an accuracy of 92%, while errors increased when using lidar to determine gully width. Determining the connectivity of wetland environments using DEMs becomes difficult in peatland environments, where surface topography may be unrelated to the hydraulic gradient within organic soils [10].
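A minimal sketch of the depression-detection idea follows, flagging DEM cells that sit lower than all eight neighbours as candidate water-accumulation areas. The elevations are toy values; operational tools use more robust sink detection and filling:

```python
# Sketch: flag local depressions (cells lower than all eight neighbours) in a
# small DEM grid, i.e., places where surface water may accumulate. Real
# workflows use dedicated sink-detection/filling tools; this only illustrates
# the idea on toy elevations (metres).

def find_depressions(dem):
    """Return (row, col) of interior cells lower than every neighbour."""
    sinks = []
    rows, cols = len(dem), len(dem[0])
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            neighbours = [dem[r + dr][c + dc]
                          for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                          if (dr, dc) != (0, 0)]
            if dem[r][c] < min(neighbours):
                sinks.append((r, c))
    return sinks

dem = [[5.0, 5.1, 5.2, 5.3],
       [5.1, 4.2, 5.0, 5.2],
       [5.2, 5.0, 4.9, 5.1],
       [5.3, 5.2, 5.1, 5.0]]

print(find_depressions(dem))
```

As the surrounding text notes, a detected depression indicates only where water *may* accumulate; whether it holds moisture depends on conductivity, geology and drainage.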
Airborne lidar provides the most accurate, high spatial resolution estimate of land surface elevation of any remote sensing platform when applied to a variety of different land surfaces, because of the ability to emit and receive laser pulses through vegetation canopies to the ground surface. Lidar vertical accuracies range from ≤0.05 m to ≤0.20 m on non-vegetated surfaces and from ≤0.15 m to ≤0.60 m on vegetated surfaces [288], and are improved during leaf-off conditions when there is little leafy biomass to interact with laser pulses. Lidar DEM vertical accuracy is also significantly related to laser return density [20,289], and to the classification of laser reflections or 'returns' into those that reflect from the ground and those that reflect from non-ground surfaces. Raber et al. [290] decimated initial return densities of approximately 1 return per 1.5 m to 1 return per 10.8 m to determine if return density affected the accuracy of a DEM and modelled flood extent. They found no significant effect of return density on DEM accuracy. However, they did find that flood extent was sensitive to return density, and their results may not apply to extremely low gradient deltaic floodplain environments. This is an important consideration for estimating the cost of lidar surveys, where high point densities have a higher cost of acquisition because they require lower flying heights, slower flying speeds and/or narrower scan-lines. However, most contemporary lidar data collections include at least one return per square meter where vegetation cover permits [20].
Lidar point clouds are typically classified into ground and non-ground returns using specialised software (e.g., LAStools, rapidlasso GmbH, Germany, or TerraScan, Terrasolid Ltd., Finland) and then rasterised or interpolated into a DEM. The classification of ground returns is the most critical step required for the derivation of a high-quality DEM (reviewed in [291]). Liu [291] suggests that slope-based filters (e.g., [292], TerraScan) work best in areas of flat terrain, typical of many boreal wetland environments, but become increasingly less accurate with increasing variability of terrain [293,294]. Other filters can also be used, including interpolation filters based on an approximation of the surface with a least-squares assessment, where positive and negative residuals are classified as non-ground and ground, respectively [295,296]. Morphological filters identify abrupt changes in the grey-scale surface morphology: returns from features such as building sides and trees have higher elevations, and are therefore shaded differently from their surroundings and classified as non-ground returns [297].
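The behaviour of a slope-based ground filter can be sketched as follows; the threshold parameters are illustrative assumptions, and production implementations (e.g., in TerraScan) are considerably more sophisticated:

```python
# Sketch of a slope-based ground filter: a return is flagged non-ground if it
# rises too steeply above any nearby lower return. The search radius and
# maximum slope are illustrative assumptions, not recommended values.
import math

MAX_SLOPE = 0.3   # rise/run; terrain steeper than this is treated as an object
RADIUS = 5.0      # metres; neighbourhood within which slopes are checked

def classify_returns(points):
    """points: list of (x, y, z). Returns a 'ground'/'non-ground' label each."""
    labels = []
    for x, y, z in points:
        is_ground = True
        for x2, y2, z2 in points:
            run = math.hypot(x - x2, y - y2)
            if 0 < run <= RADIUS and z > z2 and (z - z2) / run > MAX_SLOPE:
                is_ground = False  # rises too steeply above a nearby return
                break
        labels.append("ground" if is_ground else "non-ground")
    return labels

# Flat ground with one return off a shrub canopy ~1.5 m above its surroundings.
pts = [(0, 0, 10.0), (2, 0, 10.1), (4, 0, 10.05), (3, 1, 11.6)]
print(classify_returns(pts))
```

This illustrates why slope-based filtering suits flat boreal wetlands: on genuinely steep terrain, legitimate ground returns also exceed the slope threshold and are misclassified.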
There are also several different methods for rasterization of lidar ground returns. Triangular irregular network (TIN) gridding methods are the simplest and most efficient to use, but can introduce errors, especially if return density is sparse, such that micro-topographic features are not accounted for or included in the raster dataset [291]. Interpolation methods estimate DEM grid cells based on the influence of proximal return elevations within a given area, assuming that proximal returns are highly correlated and continuous. Liu [291] reviews numerous interpolation methods and suggests that, for data with low return density, kriging provides greater accuracy against validation data than the inverse distance weighting method. Liu et al. [298] found that accuracy improves when inverse distance weighting is applied to datasets with high return densities. Spline-based methods tend to miss local topographic variability, including ridges and troughs [299]. Töyra et al. [180] found that the root mean square error (RMSE) of the DEM was lowest when using kriging and inverse distance to a power rasterisation methods (average RMSE = 0.08 m) compared with validation data in a boreal wetland environment. Errors increased to 0.32 m (on average) using a TIN method, which retains the integrity of each laser pulse return. Bater and Coops [300] found that a natural neighbour rasterization method provided the most accurate representation of the ground surface using a DEM when compared with ground-truth data from a forested environment. Accuracy also improved when interpolating at a higher resolution of 0.5 m as opposed to 1.0 or 1.5 m, due to the ability to represent the ground surface in greater detail (also described in [291]). However, the appropriate resolution must be decided for a given application, as higher spatial resolution can result in significant requirements for data storage.
Further, the interpolation procedure should produce a model at a resolution no finer than the return density supports, where more returns may be included in the interpolation in low-relief environments and fewer returns in high-relief environments [291].
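As a concrete illustration of one of the interpolation choices above, a minimal inverse distance weighting (IDW) estimator for a single DEM node might look like this; the power of 2 is the conventional default rather than a recommendation from the cited studies:

```python
# Sketch: inverse distance weighting (IDW) of classified ground returns onto
# one DEM grid node. Power = 2 is the conventional default; the exact-hit
# shortcut returns the coincident return's elevation directly.
import math

def idw(returns, x, y, power=2.0):
    """Estimate elevation at (x, y) from (xi, yi, zi) ground returns."""
    num = den = 0.0
    for xi, yi, zi in returns:
        d = math.hypot(x - xi, y - yi)
        if d == 0:
            return zi              # grid node coincides with a return
        w = 1.0 / d ** power
        num += w * zi
        den += w
    return num / den

# Four toy ground returns (x, y, z in metres) around one 0.5 m grid node.
ground = [(0.0, 0.0, 4.00), (1.0, 0.0, 4.10), (0.0, 1.0, 4.05), (1.0, 1.0, 4.20)]
print(round(idw(ground, 0.5, 0.5), 4))
```

A full DEM simply repeats this estimate for every grid node, usually restricting the search to returns within a fixed radius or a fixed number of nearest neighbours.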
Other errors in lidar DEMs are associated with under- or over-estimation of ground surface elevation within the ground classification. For example, [182,183] found that laser returns from the ground surface may be prone to both artefact depressions and real feature depressions, where it is difficult to separate the two. Artefacts can create pit or depression errors in DEMs, which are especially problematic for hydrological modelling. Lindsay [183] suggests useful approaches for removing depressions in DEMs, though noting that only in situ observation can determine whether a depression is real. To filter ground depressions, they suggest using a Monte Carlo approach, whereby the likelihood of a depression is determined based on the variability of the proximal ground surface elevation. A depression is less likely to be real if its depth falls within the broader topographic variability.
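The Monte Carlo reasoning described above can be sketched by repeatedly perturbing elevations with random error and recording how often a candidate depression persists. The error magnitude, trial count and elevations below are illustrative assumptions, not values from [183]:

```python
# Sketch of the Monte Carlo depression test: perturb the DEM with random
# elevation error and record how often a candidate depression persists.
# Sigma, trial count and elevations are illustrative assumptions.
import random

def depression_probability(cell_z, neighbour_zs, sigma=0.15, trials=2000, seed=1):
    """Fraction of error realizations in which the cell stays a depression."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        z = cell_z + rng.gauss(0, sigma)
        nbrs = [n + rng.gauss(0, sigma) for n in neighbour_zs]
        if z < min(nbrs):
            hits += 1
    return hits / trials

# A 0.5 m deep pit, well beyond the assumed error budget: almost certainly real.
p_real = depression_probability(4.5, [5.0, 5.0, 5.1, 5.0])
# A 0.05 m dip within lidar vertical error: plausibly an artefact.
p_artefact = depression_probability(4.95, [5.0, 5.0, 5.1, 5.0])
print(round(p_real, 2), round(p_artefact, 2))
```

Depressions whose persistence probability stays low across realizations are candidates for removal, while high-probability depressions are retained for field verification.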
Unmanned aerial vehicle (UAV) photogrammetry using structure from motion methods provides a point cloud similar to lidar data and may be used to estimate ground surface elevation at high vertical accuracies. Structure from motion datasets are derived from overlapping photographs, which are used to create point clouds of the same features found in more than one photograph. In order to perform structure from motion, aerial photographs must be collected with high overlap (e.g., 80% is recommended both laterally and in the flight direction). Increasing overlap in the flight direction is simply a matter of decreasing the time between photo acquisitions, or decreasing the flying speed. To increase photo overlap laterally, flight lines need to be carefully planned, taking into account flying height and image footprint. In addition, depending on platform configuration, UAV photogrammetry can require the positioning of ground control points for image georeferencing. These are optimally determined from independent data sources, such as a ground survey of targets using a Global Navigation Satellite System (GNSS, which includes the United States Global Positioning System and the Russian GLObal NAvigation Satellite System) or lidar data [301]. Despite their importance, the use of ground control points requires a person to physically place the object within the study area, which may be difficult in some wetland environments, though this requirement is diminishing with kinematic GNSS on UAVs.
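The overlap planning described above reduces to simple footprint arithmetic. The following sketch computes the photo interval and flight line spacing for 80% forward and side overlap; the camera geometry, altitude and speed are assumed example values, not a specific platform:

```python
# Sketch of overlap planning arithmetic for UAV structure from motion.
# Camera and flight figures below are illustrative assumptions.

def footprint(sensor_size_mm, focal_mm, altitude_m):
    """Ground footprint (m) of one image dimension at a given altitude."""
    return sensor_size_mm / focal_mm * altitude_m

def photo_interval(footprint_along_m, speed_ms, forward_overlap):
    """Seconds between exposures for the requested forward overlap."""
    return footprint_along_m * (1 - forward_overlap) / speed_ms

def line_spacing(footprint_across_m, side_overlap):
    """Distance (m) between adjacent flight lines for the side overlap."""
    return footprint_across_m * (1 - side_overlap)

# Assumed setup: 13.2 x 8.8 mm sensor, 8.8 mm lens, 100 m a.g.l., 8 m/s.
across = footprint(13.2, 8.8, 100)   # across-track footprint (m)
along = footprint(8.8, 8.8, 100)     # along-track footprint (m)
print(round(photo_interval(along, 8.0, 0.80), 2))  # seconds between photos
print(round(line_spacing(across, 0.80), 1))        # metres between lines
```

The same arithmetic shows why higher flying heights relax the exposure interval at the cost of ground sample distance, a trade-off relevant to the accuracy figures cited below.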
In urban areas or landscapes where there are defined objects and boundaries, only a few targets are required because, in addition to targets, the algorithm can easily identify these invariant objects in multiple images. However, in areas such as wetlands where colours and features are similar across large areas, it may be difficult for the algorithm to reliably detect the same object in multiple images, and it will need to rely on the targets for matching. Each image is tagged with a single GNSS location, which is used to determine where each pixel in the image is located and to create point clouds and orthophotos, validated using ground control points on surface target features. For example, Uysal et al. [302] demonstrate ground elevation accuracies similar to a differentially corrected GNSS survey (average accuracy = 0.02 m) when using ground control points. Küng et al. [303] observed elevation accuracies ranging from 0.05 m to 0.20 m at survey altitudes varying between 130 and 900 m above ground level (a.g.l.). Similar accuracies were also found at a flying altitude of 150 m a.g.l. by Vallet et al. [304], while Rock et al. [305] demonstrate that ground accuracies from UAV structure from motion point clouds vary on average from 0.02-0.05 m (at flying heights of <100 m a.g.l.) to 0.5-0.7 m (flying heights approaching 600 m a.g.l.).
An alternate solution to the use of ground control points is an on-board differentially corrected GNSS (either post-processed kinematic or real-time kinematic) recorded whenever a photo is taken, or more frequently. This enables the scale invariant feature transform algorithm to know where the camera was located when each photo was taken, resulting in high precision point clouds. Additionally, some systems use a camera on a gimbal (i.e., one that can rotate with UAV roll, pitch and yaw); if the gimbal orientation can be captured, some structure from motion processing software can use it to determine more precisely where the camera was located and how it was oriented. For example, Kalacska et al. [306] compared UAV with lidar data of ground surface elevation and found average elevation offsets of 0.27 m compared with located ground control points within a flat tidal marsh containing mostly short vegetation (spring survey). Lidar vertical accuracies were between 0.07 and 0.21 m when compared with the same ground control points. Flener et al. [307] compared a mobile lidar with UAV point clouds and found average differences of up to 0.5 m from the UAV data, compared with 0.01 m from the mobile lidar system. Further, Dandois et al. [308] note that penetration into a forest canopy is possible when the canopy is sunlit; however, penetration into the canopy (and the accuracy of vegetation height) decreases significantly when UAV data are collected on cloudy days or when forward overlap of photographs is minimized. The deployment of ground control points and the requirement to correct positional accuracies due to geometric distortion from UAVs can be onerous, as noted in Rock et al. [305], though this will improve with the development of lighter platform-based orientation systems and improved methods of correction.
Point clouds derived from overlapping photographs are characterised by high point density achievable at low cost, though requiring significant post-processing time, and potential uncertainties caused by shadows and overlying vegetation.

Inferring Biogeochemical Properties of Wetlands Using Optical and Active Remote Sensing
Biogeochemical properties within the water column of shallow open water wetlands provide an indicator of the cumulative biological processes occurring within wetlands. Spatial and temporal variations of some chemical constituents can be inferred using multi- and hyper-spectral optical sensors, laser induced fluorescence and, to some degree, SAR [309]. These sensors may improve estimates of trophic status over broad and difficult to access areas [122]. Trophic status indicators typically observed using optical remote sensing include chlorophyll-a [129,310], turbidity (Secchi disk depth), total phosphorus [126] and coloured dissolved organic carbon (DOC) or matter (DOM) [127,146].
To determine concentrations of chemicals and nutrients in the water column, remotely sensed data are used to examine the sensitivity of absorption and reflection of radiation within the visible wavelengths (blue, green, red) in comparison to the absorption and reflection characteristics of the water column [122]. Variations in the sensitivity of absorption and reflection of wavelengths are a proxy indicator of the concentration of different constituents, but are not a direct measure. For example, absorption of electromagnetic radiation indicating higher concentrations of coloured DOM occurs between the wavelengths of 275 nm and 295 nm in Cao et al. [146], who used Medium Resolution Imaging Spectrometer (MERIS) low resolution multi-spectral remote sensing (Table 1). In addition, red and blue wavelengths from the Landsat series of satellites were the most accurate indicators of boreal wetland trophic status [122]. They found accuracies of approximately 80% (chlorophyll-a), 90% (turbidity) and (-)70% (Secchi disk depth) when compared with field data. They also note that red is least influenced by the atmosphere and therefore provides more stability than the red/blue wavelength ratio. Similarly, Olmanson et al. [127] found that the combination of Landsat green, red and near infrared wavelengths provided proxy indicators of water eutrophication, dissolved organic matter, chlorophyll-a, total suspended solids and DOC. Isenstein et al. [126] found that red and middle-infrared wavelengths provided the best indicator of total phosphorus (R² = 0.63) compared with measured values, whereas all wavelengths except red could be used to infer total nitrogen (R² = 0.77) within the water column. In addition, Metternicht et al. [156] demonstrate the utility of spaceborne SAR for detecting surface salinity based on variations in the relative dielectric constants of soil and vegetation. Application of a fuzzy overlay model based on user-defined values resulted in 81% accuracy in the detection of saline vs. alkaline soils.
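Empirical water quality retrievals of the kind cited above typically regress field measurements against band ratios and then apply the fitted relationship per pixel. A minimal sketch with hypothetical coefficients (not those of [122], [126] or [127]) follows:

```python
# Sketch of the empirical band-ratio approach for water column constituents:
# a log-log regression of a proxy (here chlorophyll-a) on a red/blue
# reflectance ratio. Coefficients A and B are hypothetical, for illustration.
import math

# Hypothetical calibration: ln(chl-a) = A + B * ln(R_red / R_blue)
A, B = 2.0, 1.4

def chla_proxy(r_red, r_blue):
    """Chlorophyll-a proxy (ug/L) from red and blue surface reflectance."""
    return math.exp(A + B * math.log(r_red / r_blue))

# Toy reflectances for clear vs. productive water.
print(round(chla_proxy(0.02, 0.04), 1))  # low red/blue ratio -> lower proxy
print(round(chla_proxy(0.06, 0.03), 1))  # high red/blue ratio -> higher proxy
```

In an operational workflow the coefficients would be fitted to concurrent field samples, and the relationship re-validated per lake district, since these empirical models rarely transfer between optically different waters.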
Mapping of underwater aquatic vegetation such as macrophytes in shallow open water using active underwater acoustics has developed considerably over the last decade due to advances in GPS/GNSS positioning and data processing. Unlike satellite and aerial imagery, hydroacoustics are not impacted by atmospheric transmissivity, water surface variations or water turbidity [311]. Fortin et al. [312] illustrated the utility of hydroacoustic imaging for quantifying aquatic vegetation structure based on echo timing in a shallow lake, mimicking vegetation structure retrievals similar to early profiling lidar systems over terrestrial vegetation. When compared with field validation data of aquatic vegetation (macrophyte) biomass, Vis et al. [313] found that underwater acoustic methods were accurate 55% to 63% of the time, while optical remote sensing methods were influenced by numerous environmental factors, illustrating the promise of these systems for monitoring and mapping wetland aquatic vegetation structure and biomass.

Water Contamination from Mining and Mine Spill Detection; Contaminants Affecting Wetland Function
The detection of mine spills, overland flow of contaminants from mining operations and leaks from oil pipelines is required for mitigating possible impacts to ground water/surface water quality, effects on wetland species (flora and fauna) and spatial extent, among others. At the simplest level, oil spills can be identified with videography and photographs using airborne platforms such as UAVs. Other sensors include SAR, optical remote sensing and laser fluorosensors. Prominent optical properties of petroleum occur in wavelengths ranging from the ultraviolet to the near infrared. Fingas and Brown [314], reviewing remote sensing of oil on water, noted that oil has a higher surface reflectance than water in the visible wavelengths between 400 and 700 nm, but does not exhibit specific absorption and reflection features at particular wavelengths. Further, while sheen from oil spills can be easily detected, it can also be confused with sun glint when differentiating between oil and water surfaces. Therefore, unlike optical remote sensing of vegetation species, methods that separate specific spectral signatures at different wavelengths do not increase the ability to detect oil [314]. Spectral unmixing of hyperspectral remote sensing image data across (up to) hundreds of bands has shown promise for detecting large oil spills [76]. Thermal infrared detection is also an area of active research, due to the absorption of solar radiation by oil and its emission as thermal energy at longer wavelengths (8 to 14 µm). Increased thickness of the oil spill results in greater thermal infrared emission, which may be identified and classified [315].
Spectroscopic analysis of AVIRIS data collected at different flying altitudes has been used to identify absorption features, centred around 1700 and 2300 nm and representing carbon-hydrogen bonds in oil, within canopies damaged by overland flow of oil [76,77], though different oils reflect and absorb at different wavelengths. The extent of the oil spill along Gulf of Mexico coastal wetlands between July 31 and October 4, 2010 was classified with between 89% and 91% accuracy compared with in situ data. Kokaly et al. [76] also demonstrated that the use of lower resolution data, such as Landsat, significantly reduces the accuracy of oil spill detection. Other detection methods include using vegetation stress as a proxy indicator of oil spill extent [78].
Another remote sensing method involves active emission of laser pulses using laser fluorosensors (laser induced fluorescence) [316]. Jha et al. [317] note that these (e.g., the Scanning Laser Environmental Airborne Fluorosensor) are among the more useful methods for detecting oil spills. Sensing is based on the detection of compounds (e.g., aromatic hydrocarbons) that exist in petroleum. These become electronically excited upon absorption of laser light emitted by the fluorosensor at wavelengths between 308 nm and 355 nm ([318], referred to in [314]). Fluorescence peaks of crude oil occur between 400 nm and 650 nm [314]. The excitation is released through a process of fluorescence emission, which occurs in the visible region of the spectrum and is detected by the sensor optics [316]. The emitted spectra can be used to detect oil on various surfaces including water, soil, ice and snow [319], and at different thicknesses. Further, few naturally occurring substances fluoresce at these wavelengths, thereby improving the detection of oil. A thorough review by Fingas and Brown [314] of oil detection in water notes that different types of oil also have slightly different fluorescent intensities and spectral signatures; it is therefore possible to determine the class of oil, given ideal conditions.
With regards to the contaminant and reclamation status of wetland areas affected by mining operations, hyperspectral remote sensing from spaceborne, airborne and UAV platforms shows continuing promise. For example, Champagne et al. [320] and White et al. [321] used Landsat (in the earlier study) and Hyperion to examine the effects of proximal airborne constituent and particulate emissions on soil surfaces during the 1970s, followed by replanting and remediation, in the Sudbury, Ontario area. They found that Hyperion could be used to assess changes in leaf area with distance from smelters and tall smoke stacks. Water contamination from gold mining in Nova Scotia, examined by Percival et al. [322], demonstrates the application of hyperspectral imaging for identifying trace minerals, including carbonates and sulphides, within tailings ponds, especially within the visible to short-wave infrared regions of the spectrum. Hyperspectral instrumentation on UAVs has also been used to detect barium, iron, contaminants and various mineral concentrations in northern lakes based on visible and near infrared reflectance, shortwave infrared response and longwave infrared response in Robinson and Kinghan [323].

Species Identification Using Remote Sensing
Wetland vegetation species and structures are indicative of the transitional and successional stages of the ecosystem, and measurement of changes in biological productivity is a central component of monitoring. Accumulation of organic matter reduces periodic flooding, while maintaining flood-tolerant vegetation [195]. Alternatively, species may be adapted to the environment through processes of peat formation, terrestrialization and paludification [324,325]. Northern peatlands (boreal bogs and fens) follow paludification trajectories, whereby changes in hydrology result in the accumulation of runoff and waterlogging of soils, further altering hydrology, nutrient mineralisation rates and biogeochemistry [326]. This leads to a transition to anaerobic soils, reduced organic matter decomposition and a decline in nutrient cycling, promoting the growth of hydrophytic vegetation and the mortality of flood-intolerant vegetation such as jack pine (Pinus banksiana). Tree mortality and initial decomposition of woody materials in the anoxic zone further enhance accumulation of the peat layer [327]. At intermediate stages of succession, Nwaishi et al. [328] noted increases in the productivity of Carex and high emissions of methane from aerenchymatous tissues, while humification of peat, increased height of peat mounds and changes in catotelm peat thickness reduce groundwater interactions, shifting the peatland to nutrient-poor conditions and reduced nutrient cycling. Mitsch and Wilson [329] noted that while long-term monitoring of reclaimed ecosystem trajectories (including species, biomass, hydrology, etc.) is important, it is also expensive due to the length of time required to monitor the long-term sustainability of ecosystem function and natural self-design. Remote sensing has not yet been proven viable for monitoring paludification processes of gradual peat accumulation and sub-surface change.
This is due to differences in the length of the satellite/airborne records compared with long-term peatland succession. Despite this, remotely sensed data can be used to infer autogenic processes across the broader landscape by tracking long-term ecosystem trajectories [115]. The combination of autogenic and allogenic processes that occur within wetland environments is complex and varies with successional stage [195].
Within mineral wetlands (e.g., marshes and shallow open water), vegetation distribution is driven primarily by water availability. Submersed and/or floating aquatic species occupy the deepest part of a shallow open water wetland basin (up to 2 m deep). A deep wetland vegetation zone surrounds the shallow open water within the basin and exclusively supports graminoids, such as rushes and cattails, that are tolerant of prolonged flooding. Deep wetland zones are surrounded by a shallow wetland vegetation zone, often representing the transition from marsh to swamp, which supports vegetation adapted to seasonal flooding, primarily narrow-leaved graminoids [32]. Beyond the shallow wetland zone exist wetland meadows, which support water tolerant graminoids and forbs that are adapted to periodic flooding or saturated conditions. The transitional areas between mineral wetlands and upland terrestrial vegetation are often characterised by shrubs and trees. The Ramsar Convention on Wetlands [1] highlights the importance of wetland classification and inventory monitoring methods that identify wetland successional stages, changes in condition and ecosystem services. Wetland succession is a natural process that occurs over time as ponds transition to fens and eventually bogs; however, successional phases can also be altered and possibly reversed by changes in climate and disturbance, including herbivory and faunal alterations, wildfire and anthropogenic disturbances. Changes in vegetation provide a quantitative measure of wetland stability, ecosystem services and value within the broader region.
A variety of remote sensing technologies and methods are used to identify changes in vegetation productivity and structure, though inferring peat depth using remote sensing remains a complex problem due to variations in hydrology (e.g., floating peat mats) and the inability of most sensors (with the exceptions of long-wavelength SAR, ground penetrating radar, electrical resistivity tomography and seismic survey) to sense beneath the Earth's surface.
Despite this, identifying ground covers such as Sphagnum mosses is important because mosses are especially sensitive to changes in hydrology and are thus good indicators of changes in moisture availability and overall wetland condition. For example, Bubier et al. [69] used hyperspectral AVIRIS and CASI spectroradiometers to identify various moss species, including feather mosses and lichens (forest), brown mosses (rich fens) and Sphagnum species (boreal bogs and poor fens), and their separability within boreal peatland and forest environments. They found that vascular broadleaf plants are most reflective in the near infrared (700-1300 nm), and that reflectance in near infrared and shortwave infrared wavelengths (1300-2500 nm) decreased with increasing water content within species. Bubier et al. [69] also found that reflectance peaks shift noticeably among Sphagnum and moss species, such that green, red and brown Sphagnum species, feather mosses and brown mosses can be characterised by separate green peaks and near infrared plateaus. While hyperspectral remote sensing is useful for identifying specific absorption and reflection bands indicative of plant biochemistry, Belluco et al. [59] noted that much of the data may be redundant. Despite this, the information gained from using hyperspectral imagery for species identification provides significant utility, especially when applied at high spatial resolution. Lower resolution hyperspectral imagers, including HyMap and Hyperion (3.5-10 m and 30 m, respectively, Table 1), have species community identification accuracies ranging from 51% to 93% (average = 66%; n = 5) [62][63][64][65]81], though these require spectral 'unmixing' of vegetation community, ground and shadow influences to understand per-pixel proportional variations. Regardless, hyperspectral data provide more accurate classifications of species type than multi-spectral imagery.
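The spectral 'unmixing' step mentioned above can be sketched as a least-squares inversion against known endmember spectra. The following is a minimal illustration only: the four-band endmember spectra and pixel values are invented for the example, and operational unmixing uses many hyperspectral bands and fully constrained solvers.

```python
import numpy as np

# Illustrative endmember spectra (rows: vegetation, bare ground, shadow)
# across four hypothetical bands; real studies use library or
# image-derived endmembers across many hyperspectral bands.
E = np.array([
    [0.05, 0.08, 0.45, 0.30],  # vegetation
    [0.20, 0.25, 0.30, 0.35],  # bare ground
    [0.02, 0.02, 0.03, 0.03],  # shadow
]).T  # shape: (bands, endmembers)

pixel = np.array([0.10, 0.13, 0.33, 0.28])  # observed mixed spectrum

# Unconstrained least-squares abundance estimate, then clip and
# renormalise as a simple stand-in for fully constrained unmixing.
abund, *_ = np.linalg.lstsq(E, pixel, rcond=None)
abund = np.clip(abund, 0, None)
abund /= abund.sum()
print(abund)  # approximate per-pixel fraction of each endmember
```

The clip-and-renormalise step is a shortcut; dedicated unmixing methods enforce the non-negativity and sum-to-one constraints directly within the solver.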
Specific species spectral characteristics may be further identified within a classification to identify spatial variations in species coverage across broader parts of the hyperspectral image cube. For example, Belluco et al. [59] found that a supervised maximum likelihood classification outperformed other classification methods for detecting halophytic species within an intertidal salt marsh. Accuracies for species detection using hyperspectral data also improve with spatial resolution, where airborne systems including CASI, MIVIS, SASI and ROSIS (0.25 m to variable resolutions, Table 1) have accuracies ranging from 75% to 100% (average = 89%) when compared with field data [59,70,72,73,86].
New satellite sensors, such as Sentinel-2 and the Worldview series, have capitalised on the information observed in the narrow wavelengths identified by studies that used hyperspectral imagery to determine species distribution. These wavelengths contain considerable information, without the cost or spatial limitations of an (airborne) hyperspectral imager. For example, both incorporate red-edge reflectance (705-745 nm, Worldview-3; 694.4-713.4 nm, Sentinel-2) and numerous near infrared and shortwave infrared wavelengths for identifying tree species. Shoko and Mutanga [86] compared Worldview-2, Sentinel-2 and Landsat OLI for identifying C3 and C4 grass species. Worldview-2 had the highest accuracy (95.7%) when compared with field measurements, while Sentinel-2 demonstrated a slightly reduced but still high accuracy (90.4%) for species detection of grasses. These differences are associated with narrow wavelengths, including the red-edge bands of both Worldview and Sentinel-2, which allow the sensors to detect shifts in red-edge reflectance characteristics among species. While the Worldview series benefits from the high spatial resolution required to detect species mixtures, Sentinel-2 data are freely available, have global coverage and are multi-temporal, with a relatively short revisit time of five days due to its constellation of two identical satellites (i.e., Sentinel-2A and B), but exhibit coarser spatial resolution (Table 1). Therefore, complex heterogeneous ecosystem species community structures may be best detected using Worldview data, while homogeneous wetland species communities and change detection can be quantified using Sentinel-2 data. There is therefore a trade-off between cost of acquisition ($29/km for Worldview data in 2018; http://www.landinfo.com/LAND_INFO_Satellite_Imagery_Pricing.pdf) and the accuracy required for species identification.
If observations need to be highly accurate over a small area, then hyperspectral imaging or Worldview (average = 90%) provide accuracies most similar to field data collection. Among multi-spectral remote sensing data without red-edge detection, Shoko and Mutanga [86] found reduced accuracy (75%) when comparing Landsat OLI with field data, while average accuracies for multi-spectral systems varying in spatial resolution from IKONOS to RapidEye are 77% (stdev = 17%).
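The red-edge bands discussed above can be used in the same normalised-difference form as NDVI. A minimal sketch follows; the band centres approximate Sentinel-2 bands 4, 5 and 8, and the reflectance values are illustrative only, not taken from [86]:

```python
def normalised_difference(a, b):
    """Generic normalised-difference index: (a - b) / (a + b)."""
    return (a - b) / (a + b)

# Illustrative canopy reflectances (assumed values).
red      = 0.04   # ~665 nm (Sentinel-2 band 4)
red_edge = 0.15   # ~705 nm (Sentinel-2 band 5)
nir      = 0.45   # ~842 nm (Sentinel-2 band 8)

ndvi = normalised_difference(nir, red)       # broadband greenness
ndre = normalised_difference(nir, red_edge)  # red-edge variant, more
                                             # sensitive to chlorophyll shifts
print(round(ndvi, 2), round(ndre, 2))
```

Because reflectance along the red edge responds strongly to chlorophyll content, the red-edge variant shifts between species where a saturated broadband NDVI may not.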
In addition to satellite-based systems, UAVs mounted with imaging systems have shown promise for classifying species types within a few wetland-based studies, so long as images are accurately located using ground control points. Systems mounted with cameras can provide ultra-high-resolution images that can exceed decimetre-resolution pixels, though this varies greatly with the camera used, flying height and skill of the operator. While UAVs are now commonly used platforms for collecting data remotely, the specific conditions under which they are operated can result in varying levels of precision and accuracy, and therefore varying ability to sense different aspects of a wetland. Generally, UAVs are used to collect air photos for two purposes: (1) to create orthomosaics and (2) to generate 3D point clouds. Recently, some scientists have substituted cameras with a lidar sensor to produce point clouds, but these can be expensive and the associated literature is not yet extensive (though growing quickly). In the case of aerial photos, as the flying height of a UAV increases, the sensor views a larger area on the ground and each pixel in the acquired image represents a larger ground area.
With regards to the use of UAVs for measuring structure and growth/mortality multiple times over a longer time period, the accuracy of the ground elevation is critical for estimating vegetation height. Target locations should be known to a high degree of accuracy and precision (e.g., recorded with a differentially corrected GPS). For example, Knoth et al. [330] used UAV data to characterise wetland species classes and structures (average accuracy = 91%; species accuracies ranged from 84% to 95%). Similarly, Kalacska et al. [331] identified tussock cover at an ombrotrophic bog in southern Ontario with 96% accuracy using optical videography. Despite the increasing use of UAVs with mounted imaging systems, aviation transportation guidelines can be stringent and are changing quickly. Guidelines also vary significantly between countries, and in some countries UAVs are prohibited (e.g., Kenya).
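The dependence of pixel footprint on flying height noted above follows directly from camera geometry. A minimal sketch, in which the pixel pitch, focal length and flying height are assumed illustrative values rather than any particular system:

```python
def ground_sample_distance(pixel_pitch_um, flying_height_m, focal_length_mm):
    """Approximate ground sample distance (m/pixel) for a nadir frame
    camera: GSD = pixel pitch x flying height / focal length. Real
    surveys must also account for terrain relief and platform tilt."""
    return (pixel_pitch_um * 1e-6) * flying_height_m / (focal_length_mm * 1e-3)

# e.g., 4.4 um pixels and a 24 mm lens flown at 80 m (assumed values):
print(round(ground_sample_distance(4.4, 80, 24), 3))  # ~0.015 m per pixel
```

Doubling the flying height doubles the ground sample distance, which is why operating conditions, not just the camera, determine whether decimetre or centimetre pixels are achieved.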

Wetland Vegetation Productivity: Optical Remote Sensing Monitoring Growth and Mortality
At the simplest level, multi-spectral vegetation indices such as the normalised difference vegetation index (NDVI) are used with varying success to quantify ratios of absorption and reflection in red and near infrared wavelengths. Originally intended to separate green biomass from the ground surface in the Sahel region of North Africa [332], NDVI is used as an indicator of green biomass, which varies spatially and temporally with climate, ecosystem characteristics, terrain morphology and soil properties [333,334]. Further, trends in NDVI on a per-pixel basis can indicate change over time when there is differentiation between the ground surface and vegetation and when appropriate data normalisation and corrections are applied, though despite widespread use, NDVI saturation is problematic in areas of closed canopies [115] or where ground and canopy are poorly differentiated, including within boreal environments. For example, Feilhauer et al. [66] observed change in measured leaf mass per area as a correlate for growth rate or drought stress related to photosynthetic decline observed using NDVI derived from HyMap imagery. Similarly, Mo et al. [79] and Khanna et al. [80] used various vegetation indices to determine vegetation stress following an oil spill along a coastal wetland. Khanna et al. [80] found that high spatial resolution (WorldView series, RapidEye; 1.4 m, 5 m) NDVI data products were better suited to detecting stress within the oil spill environment than lower resolution imagery (e.g., Landsat), whereas Mo et al. [79] used Landsat NDVI and AVIRIS data to examine the extent of the impact of the Deepwater Horizon oil spill, and also required higher resolution imagery. Limited success (62% accuracy compared with measured) was achieved by Ghioca-Robrecht et al. [92] when using QuickBird image bands (Table 1) combined with NDVI to identify species community phenological changes of invasive Phragmites australis and Typha spp., indicating that the timing of peak phenology is critical for accurate species mapping and comparisons over time.
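NDVI as used in the studies above is a simple band ratio. A minimal per-pixel sketch follows, with reflectance values invented for illustration rather than taken from any cited dataset:

```python
import numpy as np

def ndvi(nir, red):
    """NDVI = (NIR - Red) / (NIR + Red), with a zero-denominator guard."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    denom = nir + red
    safe = np.where(denom == 0, 1.0, denom)
    return np.where(denom == 0, 0.0, (nir - red) / safe)

# Illustrative reflectances: open water, senescent graminoids, dense canopy.
red = np.array([0.05, 0.20, 0.04])
nir = np.array([0.02, 0.30, 0.50])
print(ndvi(nir, red))  # low/negative over water, high over dense canopy
```

The near-1 value over the dense canopy illustrates the saturation problem noted above: once a canopy closes, additional biomass produces little further change in the index.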
At broader scales, Myneni et al. [138] and many others [333,[335][336][337][338] have tracked greening and increased plant biomass growth across the northern high latitudes. Despite widespread use, NDVI is reduced in partially vegetated canopies as soil brightness increases, all else being equal [339]. This potential for error has motivated the development of a variety of other vegetation indices that reduce the influence of soils, including the Soil Adjusted Vegetation Index [339], the Modified Soil Adjusted Vegetation Index [340] and others (reviewed in Dorigo et al. [341]).
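The soil adjustment can be illustrated with a two-component mixed pixel. Under the invented vegetation and soil reflectances below, NDVI shifts substantially with soil brightness, while SAVI (with the customary L = 0.5) shifts far less:

```python
def ndvi(nir, red):
    return (nir - red) / (nir + red)

def savi(nir, red, L=0.5):
    """Soil Adjusted Vegetation Index [339]: the brightness term L
    (typically 0.5) damps the soil contribution to the index."""
    return (nir - red) / (nir + red + L) * (1 + L)

veg = {"red": 0.04, "nir": 0.45}                        # illustrative vegetation
soils = {"dark": (0.10, 0.15), "bright": (0.25, 0.30)}  # (red, nir), assumed
f = 0.3                                                 # fractional cover

for name, (s_red, s_nir) in soils.items():
    red = f * veg["red"] + (1 - f) * s_red  # linear mixing within the pixel
    nir = f * veg["nir"] + (1 - f) * s_nir
    print(name, round(ndvi(nir, red), 2), round(savi(nir, red), 2))
```

With these values, NDVI varies by roughly 0.19 between the dark and bright soils while SAVI varies by only about 0.06, despite identical vegetation cover.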

Wetland (and Forestland) Vegetation Biogeochemistry via Optical Properties
Vegetation structure and foliage cellular properties also influence the fraction of photosynthetically active radiation absorbed by green vegetation and the efficiency with which vegetation uses these wavelengths for biomass production (also known as light use efficiency [342]). Light use efficiency is highly variable due to environmental drivers including temperature, water and nutrient availability, and therefore varies over space and through time [343]. Hyperspectral remote sensing can provide an indicator of light use efficiency through the photochemical reflectance index, which relates xanthophyll absorption of electromagnetic radiation at 531 nm to a reference wavelength at 570 nm [344,345]; xanthophyll cycle activity is associated with the dissipation of excess radiation as heat and a reduction of photosynthesis [346]. The index has been applied to monitor wetland carbon sequestration efficiency for productivity [68] with limited success (e.g., S. capillifolium, accuracy = 13%) due to heterogeneous mixed pixels occurring at variable pixel resolution (MODIS 250-1000 m; hyperspectral imagers <3 m). The photochemical reflectance index was applied by Hilker et al. [343] using the low-resolution MODIS spectroradiometer and compared with light use efficiency estimated from eddy covariance methods and a tower-based multi-angular spectroradiometer within two mature forest environments (accuracy = 54% and 63%). Hilker et al. [343] note the importance of shadowing within pixels and pixel mixtures, as also identified in Kalacska et al. [68]. In another example, Kross et al. [133] compared the low spatial resolution MODIS light use efficiency-based gross primary productivity data product with eddy covariance estimates of gross primary productivity within four peatlands and found that between 68% and 89% of the variability in gross primary productivity was explained by the MODIS data product. Kross et al.
[23] also demonstrated the utility of MODIS data as an input into broader net ecosystem productivity models for peatland environments. They indicate that while the model is complex at single sites, there is potential to implement the MODIS gross primary productivity data product for regional to national applications. New developments in multi-spectral lidar are also producing NDVI-like signals of canopy attributes [194], which may improve species classification, biomass, leaf area index and foliage and wood partitioning in treed wetland environments.
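The photochemical reflectance index referenced above is another narrow-band normalised difference. A minimal sketch, with reflectance values invented for illustration rather than drawn from the cited studies:

```python
def pri(r531, r570):
    """Photochemical reflectance index: (R531 - R570) / (R531 + R570),
    using 531 nm (xanthophyll-sensitive) against a 570 nm reference."""
    return (r531 - r570) / (r531 + r570)

# Illustrative canopy reflectances (assumed values):
relaxed  = pri(0.050, 0.048)   # little xanthophyll de-epoxidation
stressed = pri(0.043, 0.050)   # 531 nm absorption deepens under stress
print(round(relaxed, 3), round(stressed, 3))
```

The sign change between relaxed and stressed canopies is the signal of interest; as the studies above note, shadowing and mixed pixels can easily overwhelm such small reflectance differences at coarse resolution.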

Wetland Vegetation Structure: Quantifying Changes in Productivity, Growth and Mortality
Vegetation productivity can also be determined from changes in vegetation canopy structure and height, which act as an indicator (though not a direct measure) of net ecosystem production [347], using lidar data or UAV structure-from-motion data. Further, Hopkinson et al. [347] suggested that the residual difference between the change in cumulative biomass over time measured by lidar and net ecosystem production may indicate organic soil and belowground biomass and necromass accumulation, which are both difficult to measure using field methods. Though not applied to wetland environments, Hopkinson et al. [347] demonstrate the linkage between allometrically derived plot measurements of tree species biomass, calibration of 3D lidar-based vegetation height derivatives, and the accumulation of biomass over a period of time compared with net carbon use for photosynthesis and maintenance (net ecosystem production) (Figure 4). Several studies have explored the use of lidar for estimating biomass in forested environments [348][349][350], with biomass accuracies for tree species ranging from 79% [350] to 93% [349]. Hopkinson et al. [351] found that a red pine plantation in Southern Ontario grew at a rate of 0.4 m per year (stdev = 0.5 m), suggesting a temporal frequency of three years for detecting changes in tree height growth assuming a target uncertainty of 10%. In other words, the change in vegetation structure over time needs to exceed the vertical error within the lidar data. In Chasmer et al. [48], acquisition repeat times for high-resolution aerial photography used for monitoring changes in the transition between permafrost plateaus and wetlands in the southern Northwest Territories require that the spatial extent of ecosystem changes be greater than the pixel resolution, due to spectral mixing and geospatial accuracy.
Therefore, the timing of repeat remotely sensed data depends on the average rate of change of wetland environments within a given representative spatial area. Acquisition frequency can be reduced when ecosystems are relatively stable and/or remote sensing pixel resolutions (at wetland boundaries) are moderate to low. Within mixed pixels, changes in spectral signature may indicate cumulative changes in vegetation biomass, all else being equal; however, for spectral imagery, these require appropriate normalisation of pixels and image atmospheric correction [352,353]. Chasmer et al. [48] found that high-resolution concurrent aerial photography had a combined positional and delineation uncertainty estimated at 8-10% (pixel resolution = ~0.2 m) compared with measured and tie point orthorectification models. IKONOS imagery (pixel resolution = 4 m) had a positional and classification uncertainty of 26%, on average. These variations in the positioning of spectral imagery can be used to estimate a minimum acquisition repeat period of ~4-6 years for high-resolution aerial photography and >10 years using IKONOS data, based on rates of change presented in [19]. However, because wetlands are expanding into forested permafrost plateaus and thereby increasing tree mortality, the rate of decomposition also needs to be considered, such that trees no longer have 'tree-like' structural characteristics that will influence the classification (e.g., trees are leaning, or partially submerged into the wetland, and no longer 'look' like trees).
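The repeat-timing principle above, that detectable change must exceed measurement error, can be expressed as a simple rule of thumb. The error value and confidence multiplier below are assumptions for illustration, not values prescribed by the cited studies:

```python
def min_repeat_years(rate_m_per_yr, vertical_error_m, k=2.0):
    """Minimum years between surveys such that the expected structural
    change exceeds k times the per-survey vertical error (k = 2 is an
    illustrative confidence multiplier, not one given by the sources)."""
    return k * vertical_error_m / rate_m_per_yr

# e.g., canopy growth of 0.4 m/yr with an assumed 0.5 m height error:
print(min_repeat_years(0.4, 0.5))  # 2.5 -> round up to ~3-year revisits
```

The same logic applies to horizontal change detection: the repeat period must allow boundary migration to exceed the combined pixel and positional uncertainty of the imagery.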

Identifying Wetland Habitats: Faunal Forms of Biomass
The condition of wetlands as habitats and the maintenance of faunal populations within them add considerable quantitative value to the wetland environment. Habitats are typically defined by plant communities, which are used to determine the structure of the environment and therefore have a significant influence on animal species [354,355]. Tews et al. [355] noted that vegetation structure and spatial scale influence habitat suitability, whereby habitats can vary significantly depending on one species' perception of fragmentation relative to another's. A habitat suitability index [195] may be used to determine the baseline condition and suitability of wetland habitats for aquatic and terrestrial animals. Assessing how habitats and habitat suitability indices will change over the next 50 and 100 years under current rates and planned scenarios will be important for identifying habitats at risk and mitigating those risks. A recent paper examined the potential impacts of climate change on the habitat of boreal woodland caribou (Rangifer tarandus caribou), a threatened species in Canada. Boreal caribou tend to forage in bogs and fens, which are also thought to act as refugia for caribou despite changes in vegetation partially associated with changing hydrology. Compared to peatlands, upland habitat was considered susceptible to climate fluctuations and wildfires [356]. Another recent study recommended targeting the restoration of wet seismic lines (i.e., in peatlands) to change vegetation composition, structure and height, restoring ecosystem function for caribou and other boreal species [357].
Habitat preference depends on the abundance of a species over a defined area and period of time [358], whereby individual behaviour defines the smallest spatial scale of diversity. At larger spatial scales, species diversity depends on the number of individuals in the regional pool and on evolutionary history (reviewed in [355]). Habitat mapping using remote sensing therefore tends to focus on the scale of diversity and species abundance. Remote sensing systems require sufficient spatial fidelity to detect slight changes in vegetation structure and composition. These often include high resolution optical imagery from UAVs [359], the Worldview series [85], aerial photography [40] and lidar [85,184], and some lower resolution (4 m) systems, e.g., IKONOS [101,104]. Tews et al. [355] suggest that habitat studies focus on structural elements, richness and count, as well as continuous variables including structural extent, differences between sites, and vegetation structure, height and coverage as important indicators. Many such attributes can be observed using accurate classification methods to identify vegetation community and surface characteristics. For example, Halls and Costin [85] used a supervised classification of Worldview data to determine benthic and emergent habitats, with accuracies exceeding 80% compared with field measurements. Goodale et al. [184] compared unsupervised, supervised and decision-tree classifications for identifying piping plover (Charadrius melodus) habitats along the south coast of Nova Scotia using morphological, structural and textural characteristics of the ground and vegetation, respectively. The decision-tree classification provided the most accurate method for characterising piping plover habitats and local environmental characteristics (90% accuracy).
Supervised maximum likelihood classification accuracy of breeding habitat for whimbrel (Numenius phaeopus) along the outer Mackenzie Delta, Northwest Territories, Canada using IKONOS imagery was 69% in Pirie et al. [101]. They noted that small patches could be identified in the IKONOS imagery as areas of potentially suitable breeding habitat for whimbrel. Over small areas, UAVs provide high resolution imagery and structure-from-motion point clouds capable of providing evidence of species impacts, such as those of beavers [359], while potentially reducing the need for human presence in the environment.

Identifying Realistic Accuracies from Remotely Sensed Data of Wetland Class and Function
The accuracy of remotely sensed data products reviewed from the literature and based on comparisons with field data (described in Part 1) is summarised in Table 2. This provides decision-makers and data users with an expectation of accuracy based on the spatial resolution of the data, using all methods of analysis available (including a combination of traditional and more advanced methods). For example, the expected accuracy of water extent using high resolution optical or SAR data is 87% on average compared with field data, though accuracies may be as high as 98%. The typical accuracy for land cover classification is 85% (±12% stdev), while detailed estimates of vegetation foliage biochemistry and productivity are more difficult to determine accurately from remotely sensed data (Table 2). Classification accuracies tend to decrease for wetland characteristics that do not emit, transmit or reflect active or passive wavelengths, that are complicated by other environmental factors (e.g., when observing water chemistry), or that occur below the fidelity of the pixels. Further, medium and lower resolution datasets are also characterised by lower accuracies when compared with field data, often because small wetlands fall below the spatial fidelity of the data. However, in some cases, differences between high and medium resolution data products are not significant (e.g., water extent and water chemistry). The required measurement accuracy, and the need to balance accuracy against spatially continuous coverage, can be guided using Table 2 as a basis for decision-making at an appropriate spatial scale.
Table 2. Average and standard deviation of data product accuracy compared with measurements collected in the field and accurately located using Global Navigation Satellite System (GNSS), from combined remotely sensed data based on pixel resolution. Bold numbers illustrate the pixel resolution range of highest average accuracy; n = the number of comparison results from the literature (205 examples in total); and NA represents applications where remotely sensed data products are not available for wetlands at that pixel resolution.

Objective 3: Promising New Technologies for Wetland Inventory and Monitoring
There is a significant need to improve understanding and monitoring of storage and discharge, the roles of source-water land cover types (e.g., wetlands, lakes and rivers) in mitigating water-related hazards, and the implications of development for water resources during the current period of climatic change [176,360,361]. Remote sensing therefore provides the opportunity for long-term monitoring of hydrological and ecological proxies over vast land surface areas. To this end, several initiatives for surface water mapping are showing significant promise.

Surface Water and Ocean Topography (SWOT)
Estimates of the number of lakes and water bodies of 0.1 ha or larger within an area are of critical interest for water resources; however, quantification of the extent and amount of water contained within these is highly uncertain [362]. In a collaboration between the US National Aeronautics and Space Administration (NASA), the Centre National d'Études Spatiales (CNES), the Canadian Space Agency (CSA) and the United Kingdom Space Agency (UKSA), the Surface Water and Ocean Topography (SWOT) mission is a wide-swath altimeter mission designed to observe surface water elevations for the whole continental-estuary-ocean continuum. The hydrologic scientific objectives are to provide a global inventory of terrestrial surface water bodies (lakes, reservoirs, wetlands) with surface areas greater than 250 m by 250 m and rivers wider than 100 m, along with estimates of river discharge at sub-monthly to annual time scales. SWOT will measure surface elevations using Ka-band radar with an estimated vertical accuracy of ~10 cm when averaging over a water area of 1 km², and 25 cm when averaging over areas between 250 m × 250 m and 1 km² [363].
SWOT pre-launch activities include airborne and field campaigns, which include several study sites in the Canadian prairie and arctic regions [364]. A key tool used in preparation for the SWOT launch, planned at the time of writing for fall 2021, is AirSWOT, an airborne analogue to SWOT, with the purposes of better understanding Ka-band backscattering at SWOT-like incidence angles and serving as a calibration and validation tool for SWOT. AirSWOT InSAR flights and ground-based observations collected during the summer of 2015 in the Yukon Flats basin of Alaska show promise in measuring surface water elevations and slopes [365,366]. Image processing and analysis are ongoing for the AirSWOT flights over Canadian lakes/wetlands (Peace-Athabasca Delta in northern Alberta) funded through the 2017 NASA Arctic-Boreal Vulnerability Experiment (ABoVE; https://above.nasa.gov/), ahead of the planned September 2021 launch.

Radarsat Constellation Mission (RCM)
The Radarsat Constellation Mission (RCM) is an initiative led by the Canadian Space Agency (CSA) under the Radarsat project and is the successor to the Radarsat-1 and -2 missions. RCM, launched on 12 June 2019, consists of three Earth observation satellites, each possessing a C-band SAR, thereby providing data continuity to existing Radarsat users [367]. At the time of writing, no data have been acquired by the sensors, but RCM is expected to offer a variety of imaging modes, from 100 m low resolution to 3 m high resolution via spotlight mode; for a full list of RCM imaging modes see Thompson [368]. Data will primarily be acquired through dual-polarization compact polarimetry, which realizes many (but not all) of the benefits of quad-polarized data without its restricted swath width [367]; RCM is expected to offer swath widths of up to 350 km in some imaging modes. RCM compact polarimetry is achieved by simultaneous transmission from the H and V antennas, allowing the transmission of electromagnetic radiation with circular polarization, and reception of H and V polarizations [368]. RCM provides repeat-pass data every four days (considering all three satellites), as opposed to the 24-day repeat pass associated with previous missions in the Radarsat program.
A number of studies have assessed the efficacy of RCM data (simulated from Radarsat-2) for wetland applications. White et al. [369] investigated RCM capabilities for separating peatlands, concluding that there was little notable difference between the use of simulated RCM data and quad-pol Radarsat-2 data. Similarly, Mahdianpari et al. [155] demonstrated the utility of simulated RCM compact polarimetry to map six wetland, upland and urban classes in Newfoundland, Canada. Whilst simulated RCM compact-pol data were unable to match the classification accuracies of Radarsat-2 quad-pol data (76% compared to 84%, respectively), it was noted that RCM has potential for large-scale wetland mapping. Importantly, however, both of these studies used compact polarimetric data simulated from Radarsat-2 fully polarimetric data, and it is still uncertain what the actual configuration of RCM data will be (e.g., White et al. [369] tested different simulated noise floors, but the true noise floor of RCM data is still unknown). Therefore, the pre-launch simulated data may not be an equal substitute for the data the sensors will actually acquire. Based on the available literature, and should future RCM observations match simulated data, RCM is expected to establish itself as a strong candidate for wetland monitoring, and its use for such purposes is encouraged, especially as data are anticipated to be open-access.

NASA-ISRO Synthetic Aperture Radar (NISAR)
The NASA-ISRO Synthetic Aperture Radar (NISAR) mission is a joint venture between NASA and the Indian Space Research Organization (ISRO) that is currently scheduled for launch in 2020 [370]. NISAR will exist as a single satellite housing both L- and S-band SAR sensors for observing the Earth's surface [371,372]. Both sensors are expected to provide wide-swath (>240 km) data, with spatial resolutions between 2 m and 6 m for the S-band sensor and between 2 m and 30 m for the L-band sensor [373]. The shorter wavelength S-band data will offer single-, dual- and compact-polarizations, as well as quasi quad-polarization data (i.e., HH/HV and VH/VV), while L-band data will be available in single-, dual-, compact- and quad-polarizations [374]. Both sensors will be based on the same platform, which is expected to offer a 12-day sampling and repeat orbit [370]. Of key importance is NISAR's expected capability for wetland mapping applications, including wetland classification and monitoring hydroperiod regimes. Although no recorded or simulated NISAR data are currently available, the current NISAR baseline plan, which characterizes spatial coverage, sensor frequency/polarization modes, resolution and data latency, is already proposed to meet the technical requirements for a variety of wetland mapping applications [370]. In fact, it is expected that the use of S-band SAR data in conjunction with L-band data will enhance wetland classification accuracies [370].

Multi-Spectral Airborne Lidar
The relatively recent integration of two lasers and three laser wavelengths within a single lidar system enables simultaneous collection of dense laser reflections within a point cloud. This provides not only 3D structural and textural information on vegetation and the ground surface [191][192][193][194], but also spectral information relative to the scattering properties of foliage [193,194], the top of water, and bathymetry for shallow water bodies [192]. The Teledyne Optech Inc. Titan lidar emits laser pulses at three wavelengths: 1550 nm (shortwave infrared, 3.5 degrees forward-looking), 1064 nm (near infrared, nadir) and 532 nm (green, 7 degrees forward-looking) [192]. While multi-spectral lidar has not yet been used within a wetland environment to identify species, it has the potential to discriminate different moss and understory vegetation beneath tree canopies. Characterisation of understory wetland vegetation will enable more accurate classification of wetland type and form [375], though this requires the development of methods, testing and analysis. In peatlands and forested environments that have recently burned, Chasmer et al. [189] used a Titan multi-spectral lidar to identify variable burn severity using an active normalised burn ratio ((1064 nm − 1550 nm)/(1064 nm + 1550 nm)) within an upland/peatland environment, compared with field data and depth of burn from multi-temporal lidar data. They found that patterns of burn severity identified using the active normalised burn ratio closely mimicked depth of burn, providing evidence that the multi-spectral lidar-based burn ratio could detect severity without requiring pre- and post-wildfire lidar surveys.
Figure 5 illustrates variations in gridded intensity of three channels within a burned and unburned boreal peatland/forest environment, and a false colour composite of the three bands illustrating differences in reflectance between charred surfaces and healthy, green vegetation. Within a forested environment, Budei et al. [193] were able to characterize 10 different tree species with an overall accuracy of 75%, illustrating that key boreal wetland species often used as indicators of wetland class (e.g., tamarack) can be accurately identified using this technology. Other more typically used optical indices, such as NDVI, may also be used to estimate foliage amount and productivity, bearing in mind that laser pulse reflections at different wavelengths do not originate from exactly the same location [194].
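The active normalised burn ratio used by Chasmer et al. [189] can be computed directly from the two infrared intensity channels. In this sketch, the intensity values are invented for illustration:

```python
import numpy as np

def active_nbr(i_1064, i_1550):
    """Active normalised burn ratio from lidar return intensities:
    (I1064 - I1550) / (I1064 + I1550), per the ratio given in the text."""
    i_1064 = np.asarray(i_1064, dtype=float)
    i_1550 = np.asarray(i_1550, dtype=float)
    return (i_1064 - i_1550) / (i_1064 + i_1550)

# Illustrative gridded intensities (assumed): char reflects weakly at
# 1064 nm, while green foliage reflects strongly there.
print(active_nbr(120.0, 90.0))   # charred surface: lower ratio
print(active_nbr(220.0, 60.0))   # healthy vegetation: higher ratio
```

In practice the two channels would first be gridded to a common raster, since the 1064 nm (nadir) and 1550 nm (forward-looking) pulses do not sample identical ground locations.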

Objective 4: Recommendations for Field Validation of Remote Sensing Data for Classification and Inventory
Field-acquired data are the most reliable form of validation for wetland class, type, condition and other biotic and abiotic attributes that are observed as proxy indicators or measured using remote sensing data. Here we provide an example framework for field data collection suitable for validating passive optical, lidar and SAR remote sensing data. The framework will require some adjustment for pixel resolution, because pixel size strongly affects plot spacing and the plot size of interest. In addition, depending on the size and morphology of different wetlands, plot spacing and size should be considered on a case-by-case basis. All data collected for validation of remote sensing data need to be geographically located: a survey-grade, differentially corrected GNSS base and rover system is required for high resolution (<5 m) datasets, whereas a handheld GPS (positional accuracy of ±5 m) may be adequate for locating validation data where the pixel resolution is >10 m. At the outset, boreal wetlands should be distinguished from other land cover types and identified as either peatland or mineral wetland based on soil type and pH.
Sampling of wetlands for validation of remotely sensed classifications of wetland class, type, form and extent, as required for spatial inventory, should incorporate one or preferably more transects of geographically located plots that represent both the resolution of the data and the transition zone from an adjacent land cover type into the wetland. This provides not only information on wetland class and type/form, but also the characteristics of the transition zone, which can be monitored over time. Transects should preferably be located on different sides of the wetland (e.g., north side, east side, etc.) so as to monitor proximal influences surrounding the wetland, though this may not be necessary because, presumably, validated remotely sensed data should be able to identify these changes. Each transect may be 50 or more metres in length, depending on the size of the wetland, to ensure that the transition zones from upland into riparian areas, wet meadow and emergent zones to open water (shallow open water or bog), or from upland into fen peatland, are adequately represented. The start and end points of each transect should be located using a differentially corrected GNSS base and rover system for sub-decimetre accuracy, if possible. Vegetation plots located along transects may vary in size, from 1 m × 1 m ground cover and short vegetation plots every 2 to 5 m (used to validate high resolution remotely sensed data, or in areas that are changing rapidly (e.g., permafrost plateau to bog peatland)), up to 5 m × 5 m or 10 m × 10 m plots located every 5 to 10 m or more along the transect; plots used for lower resolution satellite imagery (e.g., Landsat) may be located using a handheld GPS if measurements occur at a snapshot in time and are not to be repeated. Plot centres (1 m plots) or corners (5 m, 10 m, etc. plots) should be located using tape measure and bearing (with a level for lidar data) or, preferably, differentially corrected GNSS data (Figure 6). Depending on the application, short vegetation plot measurements (Figure 6a) should capture local vegetation community species [375][376][377] and structural and compositional characteristics. These may include dominant and sub-dominant vascular and moss species types, fractional cover (using the point-intercept method) or leaf area index using a LI-COR LAI-2000 plant canopy analyser (LI-COR, Lincoln, NE, USA) [378], and height within each plot. In addition, spectroradiometer measurements in the field or laboratory may be used to acquire pure spectral endmembers needed to understand the combined influences of vegetation cover/structure, species type and other factors on within-pixel spectral mixing [121], preferably at the time of overflight (though this may not be possible). Consideration of the timing of plot measurements is also important: these need to be acquired within the same developmental (long-term) and phenological (seasonal) stage as the platform overpass, otherwise measured vs. remotely sensed vegetation characteristics may be vastly different [379]. Further, measurements should occur in the days following the sensor overpass, so that vegetation structures and communities are not disturbed by field sampling before appearing in the remotely sensed data. Vegetation structural measurements, including height and canopy cover, and species measurements within plots [20] are useful for validating lidar point clouds and for understanding volumetric scattering and double bounce in SAR data [111,377]. The spatial variability of vegetation community composition and structural characteristics provides an indicator of the characteristics of the wetland and of the transition zone between the wetland and upland environment.
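Locating plot centres by tape measure and bearing from the surveyed transect start amounts to simple trigonometry. The following is a minimal sketch, assuming projected (easting/northing) coordinates in metres; the function name and coordinate values are illustrative only:

```python
import math

def plot_positions(start_e, start_n, bearing_deg, spacing_m, n_plots):
    """Easting/northing of plot centres spaced along a straight transect.

    start_e, start_n : surveyed transect start (projected coordinates, m)
    bearing_deg      : compass bearing of the transect (degrees from north)
    spacing_m        : along-transect spacing between plot centres (m)
    """
    theta = math.radians(bearing_deg)
    # Bearing is measured clockwise from north, so east = sin, north = cos.
    return [(start_e + i * spacing_m * math.sin(theta),
             start_n + i * spacing_m * math.cos(theta))
            for i in range(n_plots)]

# Fifteen plot centres every 2 m along a due-east (090 degrees) transect.
centres = plot_positions(500000.0, 6100000.0, 90.0, 2.0, 15)
```

In practice each computed centre would still be checked or re-surveyed with the GNSS rover where sub-decimetre accuracy is required.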
In addition, elevation measurements along transects and within short vegetation plots can be used to validate lidar and geographically registered UAV ground surface elevations [20]. Installing and measuring one 30 m transect with fifteen 1 m² plots, including GNSS surveying of the transect and plots, takes approximately 7.5 person-hours. Assuming a wage of 20 USD per hour, each plot costs approximately 10 USD to install and measure (excluding equipment rental and the time and cost required to travel to the wetland, etc.). Transportation to and from wetlands in remote areas (e.g., requiring helicopter, boat, etc.) will greatly increase the cost of fieldwork and varies depending on distance, fuel costs and service provider.
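The per-plot cost arithmetic above can be made explicit. A trivial sketch using the figures from the text (7.5 person-hours, 20 USD per hour, fifteen plots); the function name is ours:

```python
def transect_cost(person_hours, wage_per_hour, n_plots):
    """Labour cost of installing and measuring one transect of plots."""
    total = person_hours * wage_per_hour
    return total, total / n_plots

# One 30 m transect, fifteen 1 m-square plots.
total_usd, per_plot_usd = transect_cost(7.5, 20.0, 15)
```

The same arithmetic scales to other plot counts and wage assumptions when budgeting a multi-wetland campaign.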
Tall vegetation such as trees and tall shrubs should be measured within larger plots along transects, representing pixel sizes (e.g., 5 m × 5 m, 10 m × 10 m, 20 m × 20 m) or forestry permanent sample plots (11.3 m radius). These may be divided into quadrants (e.g., 10 m × 10 m quadrants within a larger 20 m × 20 m plot, useful for validating at different pixel resolutions) [380]. Spatial sampling and the time required should also be considered, as larger plots can take considerably more time to install and measure than small plots. For example, one transect of large plots per wetland, applied across many wetlands, may be more effective than installing more than one transect within a single wetland. Tall vegetation measurements (Figure 6b) should include species, density, tree height, crown depth and stem diameter at breast height (useful for estimating biomass via allometry), and canopy cover or leaf area index [381]. Representative small plots of understory characteristics should also be included within larger plots [380], providing measurements of understory and ground cover attributes that may contribute to mixed-pixel optical characteristics. Vegetation structural and species composition measurements are also useful for validating lidar point cloud data and raster surfaces within the same phenological period as the platform overflight. Time of measurement can vary significantly depending on tree and shrub density; however, a 400 m² tall vegetation boreal forest plot [191] typically takes between 7 and 14 person-hours (between 140 and 280 USD at the same wage) to install and measure. Vegetation characteristics can then be used to determine wetland class and form. Type requires additional measurements, including biochemistry, which may be inferred from vegetation types [382].
In addition to vegetation characteristics, water extent from SAR or optical imagery may also be validated using field data by surveying along the water's edge using GNSS in kinematic survey mode within a day or so of overflight [111]. A kinematic survey using a pole-based rover (with a nearby base station) along the water's edge (or other discrete land covers such as permafrost plateaus) provides x and y coordinates, which can be compared with SAR or high resolution optical imagery, and elevation (z), used to validate lidar, UAV structure-from-motion or interferometric SAR data (Figure 6c). These measurements are the least time consuming and may be completed within 0.5 to 1 h under open sky conditions, though they may be prone to signal loss and multi-path effects when working under canopy. Another option is to use higher resolution multispectral imagery acquired near the time of SAR acquisition [154]. UAVs are becoming very popular for providing such high resolution ground reference data [306]. To acquire 3D positions of water lines, the water extent can be digitized from high resolution imagery as "ground reference" and then intersected with a high resolution digital elevation model. While this framework provides one example of how wetland vegetation and water extents could be measured for validation of remotely sensed data, we suggest that standardization of plot measurements is needed to improve comparisons between field data collection and remote sensing products.
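Intersecting a digitized water line with a digital elevation model can be sketched as follows. This is a minimal illustration, not a published workflow: the function name, the simple (x_origin, y_origin, pixel_size) transform convention and the nearest-cell lookup are our assumptions; production code would typically use a raster library (e.g., rasterio) and bilinear interpolation.

```python
import numpy as np

def waterline_3d(vertices_xy, dem, transform):
    """Attach DEM elevations to digitized water-extent vertices.

    vertices_xy : (N, 2) array of x, y vertices in map coordinates
    dem         : 2D elevation array
    transform   : (x_origin, y_origin, pixel_size) of the DEM's
                  upper-left corner, assuming north-up square pixels
    """
    x0, y0, px = transform
    # Map coordinates -> raster row/col indices (nearest cell).
    cols = ((vertices_xy[:, 0] - x0) / px).astype(int)
    rows = ((y0 - vertices_xy[:, 1]) / px).astype(int)
    z = dem[rows, cols]
    return np.column_stack([vertices_xy, z])

# Toy 4 x 4 DEM with 1 m pixels, upper-left corner at x=0, y=4.
dem = np.arange(16, dtype=float).reshape(4, 4)
waterline = np.array([[0.5, 3.5], [2.5, 1.5]])
points_3d = waterline_3d(waterline, dem, (0.0, 4.0, 1.0))
```

The resulting x, y, z water-line points can then be compared against kinematic GNSS surveys or used to validate lidar and structure-from-motion surfaces.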

Conclusions and Recommendations
This review, which is part two of a two-part series, examines the use of remote sensing of primarily Canadian boreal wetlands for policy and management decision-making. It provides a synthesis of the current state of the art in remote sensing for identifying, measuring or inferring proxy indicators associated with wetland class and extent, condition and processes required for inventory and monitoring within the Ramsar framework [1]. While this summary and integration of the literature identifies applications for boreal wetlands in particular, other inland and coastal wetlands have been considered as well. Unsurprisingly, as wetland attributes become more complex (e.g., biochemistry) and sensor resolution decreases, remotely sensed estimates represent field data less consistently and accurately; however, lower sensor resolution may be most appropriate for national-level wetland assessment and monitoring [232,383]. High resolution imagery and data fusion methods most frequently represent field-validated land cover type (up to 97% accuracy), while wetland class and form may be classified with up to 92% and 88% accuracy, respectively, when using multiple remote sensing data types within a fusion or conflation framework.
Effective classification methods, such as decision trees and machine learning, provide the most accurate means of spatialising wetland class and form, while conflation of optical, SAR and lidar data within an object-oriented and machine learning framework is the current state of the art. Other wetland characteristics that are critical for the assessment of wetland ecosystem services are also included within the accuracy assessment from the literature (Table 2). These summarized results are important because the accuracies observed within the literature (343 comparisons with accurately located field measurements) may be compared with the range of acceptable error required for monitoring. We describe a framework for validating remotely sensed wetland class, form and water extent using plot measurements along transects, such that fuzzy transition zones and wetland boundaries, which are notoriously difficult to characterize, are included within a remote sensing based monitoring framework. Standardized implementation of plot measurements (and their associated costs) and remotely sensed data would provide the comparability between sites, regions, and national and international levels required to better understand spatial changes in wetland condition. This would ensure that proximal influences are monitored over time, supporting the wise use of wetlands described as a critical need within the Ramsar Convention on Wetlands [1].