Wetland Mapping with Landsat 8 OLI, Sentinel-1, ALOS-1 PALSAR, and LiDAR Data in Southern New Brunswick, Canada

Mapping wetlands with high spatial and thematic accuracy is crucial for the management and monitoring of these important ecosystems. Wetland maps in New Brunswick (NB) have traditionally been produced by the visual interpretation of aerial photographs. In this study, we used an alternative method to produce a wetland map for southern New Brunswick, Canada, by classifying a combination of Landsat 8 OLI, ALOS-1 PALSAR, Sentinel-1, and LiDAR-derived topographic metrics with the Random Forests (RF) classifier. The images were acquired in three seasons (spring, summer, and fall) with different water levels and during leaf-off/on periods. The resulting map has eleven wetland classes (open bog, shrub bog, treed bog, open fen, shrub fen, freshwater marsh, coastal marsh, shrub marsh, shrub wetland, forested wetland, and aquatic bed) plus various non-wetland classes. We achieved an overall classification accuracy of 97.67%. We compared 951 in-situ validation sites to the classified image and to both the 2016 and 2019 reference maps available through Service New Brunswick. Both reference maps were produced by photo-interpretation of RGB-NIR digital aerial photographs, but the 2019 NB reference map also included information from LiDAR-derived surface and ecological metrics. Of these 951 sites, 94.95% were correctly identified on the classified image, while only 63.30% and 80.02% were correctly identified on the 2016 and 2019 NB reference maps, respectively. If only the 489 wetland validation sites are considered, 96.93% were correctly identified as a wetland on the classified image, compared to only 58.69% and 62.17% on the 2016 and 2019 NB reference maps, respectively.


Introduction
Wetlands are defined as lands that are saturated with water long enough to cause the formation of hydric soils and the growth of hydrophytic or water-tolerant plants [1]. Wetlands are found in almost all regions of the world, from the tundra to the tropics, and are a critical part of the natural environment [2,3]. They have high biological diversity and offer critical habitats for numerous flora and fauna species [3]. Wetlands can also provide valuable services to humans, such as flood reduction by temporarily storing and gradually releasing stormwater [2]. Wetlands are complex ecological systems.

Table 1. Comparison of the best overall accuracy (%) achieved in previous studies that used Random Forests for wetland mapping, as a function of the number of classes and the input datasets.

Overall Accuracy (%) | Number of Classes | Input Data | Region | Authors

Maximum Likelihood
60.1 | 7 | Radarsat-2 C-polSAR | Manitoba, Canada | [21]
87.2 | 3 | Landsat 5 TM | Ontario, Canada | [26]
89.2 | 2 | Landsat 7 ETM+, ERS-2 (C-VV) | Spain | [59]
93.9 | 5 | Radarsat-2 C-simulated compact polSAR | Manitoba, Canada | [22]
94.0 | 3 | Landsat 5 TM, ERS-1 (C-VV), JERS-1 (L-HH), Radarsat-1 (C-HH) | Great Lakes, North America | [60]

Rule-based classifier
85.7 | 4 | Airborne (C-HH, C-HV, C-VV, C-VH) | Ontario, Canada | [
 | | | Maryland, USA | [35]
80.8 | 4 | Sentinel-1 C-VH, Sentinel-2, SRTM-DEM | Alberta, Canada | [64]
88.0 | 5 | RapidEye, Radarsat-2 (C-HH, C-HV, C-VV, C-VH), Sentinel-1 (C-VH, C-VV), ALOS-1/2 PALSAR (L-HH, L-HV) | Newfoundland and Labrador, Canada | [69]
94.0 | 2 | Sentinel-1 (C-VV, C-VH), Sentinel-2 | Turkey | [70]

Our study objective was to assess the accuracy of wetland mapping for southern New Brunswick using freely available optical and SAR satellite imagery. As in many Canadian studies, the resulting map included at least the five wetland classes defined in the Canadian Wetland Classification System (CWCS): bog, fen, marsh, swamp, and shallow open water of less than 2 m in depth [1]. Given that combining SAR and optical images has already been demonstrated to be the most useful approach for wetland mapping (Tables 1 and 2), and to keep the methodology low-cost, our study used freely available optical and SAR satellite imagery, i.e., Sentinel-1 C-band SAR, ALOS-1 PALSAR L-band SAR, and optical Landsat 8 OLI imagery. The images were combined with topographic metrics extracted from LiDAR data, as these have been shown to be useful for wetland mapping [37,38]. The images were classified using Random Forests, which has been successfully used for wetland mapping with optical and SAR data (Table 1). Following Jahncke et al. [37] and LaRocque et al. [38], the satellite images were acquired at different times of the year (including during flood periods) to capture the effects of seasonal (leaf-off/on) and water level variations on the SAR backscatter and optical data.
The accuracy of the resulting map and of the 2016 and 2019 NB reference maps was validated by comparison to 951 GPS field wetland and non-wetland sites that were visited during the summer and fall of 2016. This comparison allowed the identification of the sources of error for misidentified wetland sites on both NB reference maps and on the classified image.

Study Area
The study area in southern New Brunswick encompasses approximately 30,000 km² between 45° and 46° Lat. N and 66° and 67° Long. W (Figure 1). The area is dominated by extensive floodplains of the Saint John River and its tributaries. The elevation ranges from 0 m ASL at Grand Lake to 470 m ASL in the Central Uplands Ecoregion in the northwest part of the study area. There is also a hilly region in the southeast part of the study area known as the Caledonia Ecodistrict (Figure 1). The study area was chosen primarily due to its recognition as a priority wetland conservation area by the Eastern Habitat Joint Venture, the availability of recent LiDAR data, and a high number of easily accessible wetland sites for ground-truthing. While most of the study area still has natural landscapes, it is facing increased pressure from agriculture, urban development, and climate change. The study area experiences warm summers and cold snowy winters. Precipitation is quite evenly distributed throughout the year, ranging from around 1000 mm per year in the central north to 1300 mm on the southern coast. Snowfall is abundant, with 1-3 m of snow per year [71].
According to the New Brunswick Ecological Working Group, the area is dominated by a mixture of well-drained soils to the west of the Saint John River and poorly drained soils to the east of the Saint John River [71]. The vegetation in the area is mainly a mixture of hardwood species, e.g., red maple (Acer rubrum), sugar maple (Acer saccharum), beech (Fagus grandifolia), yellow birch (Betula alleghaniensis), white birch (Betula spp.), balsam poplar (Populus balsamifera), and trembling aspen (Populus tremuloides), and softwood species, like balsam fir (Abies balsamea), black spruce (Picea mariana), white spruce (Picea glauca), red spruce (Picea rubens), jack pine (Pinus banksiana), red pine (Pinus resinosa), and white pine (Pinus strobus). The area has the five wetland classes (i.e., bog, fen, marsh, swamp or forested wetland, and shallow/open water) of the Canadian Wetland Classification System (CWCS) [11,41,72], some of which were subdivided as a function of the vegetation cover. In total, the study considered the following 11 wetland classes: open bog, shrub bog, treed bog, open fen, shrub fen, freshwater marsh, coastal marsh, shrub marsh, shrub wetland, forested wetland, and aquatic bed. The 11 non-wetland classes included: unvegetated, sparse vegetation, crop field, pasture/hayfield, grassland, softwood forest, hardwood forest, mixed wood forest, shrubland, forest clear-cut, and deep water.

Satellite Imagery
The study used optical imagery and two types of SAR imagery. The first SAR imagery was acquired by the Japanese Advanced Land Observing Satellite using its Phased Array type L-band Synthetic Aperture Radar (ALOS-1 PALSAR). The ALOS-1 PALSAR sensor operates in the L-band (1270 MHz, 23.62 cm wavelength) with a pixel resolution of 12.5 m and a swath of 70 km. Incidence angles ranged between 36° and 38°. The images correspond to the ascending orbit, which is northeast-looking. The images were acquired in 2010 during three seasons, which correspond to different water levels in the wetlands (low, moderate, or high) (Table 3). We produced one mosaic for each season to cover the full study area.
The second SAR imagery was acquired by the European Sentinel-1 satellite using the Terrain Observation with Progressive Scanning SAR (TOPSAR) sensor in high-resolution (HR) Interferometric Wide (IW) swath mode. It operates in the C-band (5.405 GHz, 5.6 cm wavelength) with a pixel resolution of 10 m and a swath of 250 km. The incidence angles ranged between 29.1° and 46°. We used the C-HH and C-HV polarized images as well as the C-VH and C-VV images. The C-HH and C-HV images correspond to the descending orbit, which is northwest-looking; the C-VH and C-VV images correspond to the ascending orbit, which is northeast-looking. The S1A Level-1 Ground Range Detected (GRD) data products are already multi-looked and projected to ground range on the WGS84 coordinate system using an Earth ellipsoid model, hence further multi-looking was unnecessary. The Sentinel-1 images were acquired in 2017 according to the season (spring, summer, and fall), which corresponds to the water level in the wetlands (flood, high, or low) (Table 4). For each season, while the descending image covered the whole study area, it was necessary to mosaic the two ascending images.

Table 3. Characteristics of the ALOS-1 PALSAR L-HH and L-HV images used in the study as a function of the season and the water levels.

The optical imagery was acquired by the Landsat 8 Operational Land Imager (OLI) sensor (Table 5). We used the radiometrically calibrated and orthorectified Landsat 8 OLI imagery from the Level-1 (L1TP) Collection in a GeoTIFF Data Product, which is the highest Level-1 product and the most suitable imagery for time series analysis [73]. Among the spectral bands of the sensor, we used the following eight bands: Band 1-Ultra Blue (0.43-0.45 µm), Band 2-Blue (0.45-0.51 µm), Band 3-Green (0.53-0.59 µm), Band 4-Red (0.64-0.67 µm), Band 5-Near-Infrared or NIR (0.85-0.88 µm), Band 6-Shortwave Infrared 1 or SWIR-1 (1.57-1.65 µm), Band 7-Shortwave Infrared 2 or SWIR-2 (2.11-2.29 µm), and Band 8-Panchromatic (0.50-0.68 µm). These images have a pixel resolution of 30 m (15 m for the panchromatic band) and a swath of 185 km. We selected images without any cloud or snow on the ground. They were downloaded from the archives of the USGS Global Visualization Viewer website (https://glovis.usgs.gov). The images were acquired in three seasons (spring, summer, and fall), which represent different water levels in the wetlands (flood, high, low, or moderate) (Table 4). Covering the whole study area required mosaicking four images, and one mosaic was produced for each season. The mosaics could include images acquired in different years to avoid cloud cover, but a visual inspection did not reveal major changes in land cover between the years.

Field Data
Between mid-July and October 2016, 1979 GPS ground-truthing sites were visited throughout the study area (Figure 2). Sites were chosen to be easily accessible, well distributed over the study area, and representative of the various classes. Following [73], a site was defined as a wetland when the water table was close to (less than 10 cm below) or at the surface, or when we found indicator plants, soil hydrology, or other signs that the area is very often saturated with water. Some wetland sites were previously found by the interpretation of aerial photographs or were mapped as a wetland on the NB reference map. At each site, the following measurements were made: GPS location, elevation, and site class based on the class descriptions of Table 6. Ground photographs were also taken, showing the vegetation cover and the soil moisture status. Of the 1979 sites, 1028 were used to delineate the training areas, while the remaining 951 sites were used as validation data (Table 7). The validation sites were also compared to both the 2016 and 2019 NB reference maps. Among the 1028 training sites, 530 were non-wetland or water sites and 498 were wetland sites. Among the validation sites, 462 were non-wetland or water sites and 489 were wetland sites.

Lidar Metrics
The tiles were mosaicked in PCI Geomatica into a 15 m mosaic (Figure 1) to match the satellite imagery resolution. Using the same method as [38], we derived several topographic metrics from the DTM to model how and where water will flow, since these metrics are related to landscape shape and position. They include: (i) slope; (ii) profile curvature; (iii) plan curvature; (iv) topographic position index (TPI); (v) topographic wetness index #1; and (vi) topographic wetness index #2. Terrain morphology influences water flow across landscapes, hence it plays a major role in defining where wetlands can develop [11,38]. All the metrics were computed using the System for Automated Geoscientific Analysis (SAGA, v7.4.0 x64), a free open-source software package designed for spatial data analysis [75]. The software only needs the input files and the grid system to generate the metrics. We also computed the canopy height model (CHM) by subtracting the DTM from the DSM in order to consider the vegetation height in the classification. All metric outputs were then exported to ArcGIS 10.6.1 in GeoTIFF format.
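As a rough illustration (not the SAGA implementation used in the study), the sketch below derives slope, a simple topographic wetness index TWI = ln(a / tan β), and the CHM from gridded elevation data with NumPy; the flow-accumulation grid is assumed to come from a separate flow-routing step.

```python
import numpy as np

def slope_degrees(dtm, cell=15.0):
    """Slope from a DTM via central differences (steepest-descent magnitude)."""
    dz_dy, dz_dx = np.gradient(dtm, cell)  # rise over run in y and x
    return np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))

def topographic_wetness_index(flow_acc, slope_deg, cell=15.0):
    """TWI = ln(a / tan(beta)); a = specific catchment area per unit contour width,
    approximated from an upstream cell count produced by any flow-routing routine."""
    a = (flow_acc + 1.0) * cell  # (cell count * cell area) / contour width
    tan_beta = np.tan(np.radians(np.maximum(slope_deg, 0.1)))  # floor avoids division by zero on flats
    return np.log(a / tan_beta)

def canopy_height_model(dsm, dtm):
    """CHM = DSM - DTM, with negative values (sensor noise) clipped to zero."""
    return np.clip(dsm - dtm, 0.0, None)
```

Production workflows would use SAGA or a dedicated hydrology library for the flow routing and curvature metrics; this sketch only shows the relationships among the layers.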

WA, Deepwater: deep water with no vegetation present (e.g., lake).

Image Processing and Classification
The image processing was performed mainly in PCI Geomatica 2018, except where otherwise noted. The ALOS-1 PALSAR imagery was terrain-corrected and georeferenced using the Alaska Satellite Facility's MapReady toolkit [76] and the DTM. The imagery was then exported to PCI Geomatica as a GeoTIFF file. The images for each of the water level/surface conditions were then mosaicked together. Pre-classification processing of the Sentinel-1 GRD data included updating the orbit metadata, noise removal, and terrain correction with the SNAP toolbox [77]; more details about Sentinel-1 data processing can be found in [78]. This imagery was then also exported to PCI Geomatica as a GeoTIFF file. The C-VV and C-VH images were mosaicked, while the C-HH and C-HV images covered the whole study area. The Landsat 8 OLI imagery was already calibrated and georeferenced. The images were atmospherically corrected using the ATCOR2 program of PCI Geomatica 2018, which uses the algorithm of [79]. Such a correction removes some of the atmospheric interference and converts the image digital numbers into reflectance values. The original multispectral images had a spatial resolution of 30 m and were pan-sharpened to 15 m using the PANSHARP program of PCI Geomatica 2018, which applies the method of Zhang [80]. All the input datasets (Landsat 8 OLI, ALOS-1 PALSAR, Sentinel-1, and LiDAR variables) were then re-projected into the New Brunswick Double Stereographic NAD83 (CanNBnad83) datum with a 15 m pixel resolution. The re-projected data were then used as input to a supervised classifier, which requires the delineation of training areas for each class. We delineated a total of 1028 training areas of 40 Landsat pan-sharpened pixels of 15 m GSD that were randomly distributed throughout the study area (Table 7).
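PCI's PANSHARP implements Zhang's method, which is not reproduced here; as a rough illustration of what pan-sharpening does, the following is a minimal Brovey-style intensity-substitution sketch. It assumes `ms` is the multispectral stack already resampled to the 15 m panchromatic grid.

```python
import numpy as np

def brovey_pansharpen(ms, pan):
    """Brovey-style pan-sharpening: scale each multispectral band by the ratio
    of the panchromatic band to the mean intensity of the MS bands.
    ms: array of shape (bands, H, W), resampled to the pan grid; pan: (H, W)."""
    intensity = ms.mean(axis=0)
    ratio = pan / np.maximum(intensity, 1e-6)  # guard against divide-by-zero
    return ms * ratio  # broadcasting applies the ratio to every band
```

The sharpened output keeps the relative band ratios of the low-resolution pixels while injecting the spatial detail of the panchromatic band.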
The training areas were used to calculate the class spectral signatures for the Landsat 8 OLI B2 to B6 images acquired during the three seasons. The class spectral signatures were then used to assess the spectral separability between class pairs with the Jeffries-Matusita (J-M) distance, as defined in [81]. The J-M distance ranges between 0.0 and 2.0, a value of 2.0 indicating 100% class spectral separability. The training areas were then used in the Random Forests (RF) classifier, which was originally developed by Breiman and Cutler [82,83] and has recently been successfully employed in several wetland mapping studies (Table 1). RF is a non-parametric, decision-tree-based supervised classifier that can handle both Gaussian and non-Gaussian data because it makes no assumptions about the data distribution. The classification was performed with all input features. Indeed, we compared classification performances using different groups of input features in previous wetland mapping studies [37,38] and concluded that the best case is using all topographic data, SAR imagery, and optical imagery acquired in different seasons. The code used in this study was developed in the R programming language [84]. We used the all-polygon version, which has the advantage of taking into account the actual class size and outperforms the sub-polygon version [85]. The classifier was set to a forest of 500 independent decision trees with the default mtry variable [86], so that a random subset of the input features is considered as split candidates at each node of every tree. RF is relatively insensitive to noise and over-fitting and produces a "Mean Decrease in Accuracy" plot that gives the importance of the individual input variables in the classification [87-90].
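For Gaussian class signatures, the J-M distance is obtained from the Bhattacharyya distance B as JM = 2(1 - exp(-B)). A minimal sketch, assuming mean vectors and covariance matrices estimated from the training pixels of each class:

```python
import numpy as np

def jeffries_matusita(m1, c1, m2, c2):
    """J-M distance between two classes modelled as Gaussians.
    m1, m2: mean spectral vectors; c1, c2: covariance matrices.
    Returns a value in [0, 2]; 2.0 indicates complete separability."""
    cm = (c1 + c2) / 2.0
    diff = m1 - m2
    # Bhattacharyya distance: a mean-separation term plus a covariance term
    b = 0.125 * diff @ np.linalg.solve(cm, diff) \
        + 0.5 * np.log(np.linalg.det(cm)
                       / np.sqrt(np.linalg.det(c1) * np.linalg.det(c2)))
    return 2.0 * (1.0 - np.exp(-b))
```

Identical class distributions give a distance of 0; widely separated means drive the distance asymptotically toward 2, matching the saturation behaviour described above.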

Accuracy Assessment
The classification accuracy was first assessed by comparing independent training areas with the equivalent classified land use in the imagery. This comparison was performed using a "confusion matrix" or "error matrix", where each cell gives the number of pixels assigned to a particular class against the class defined by the training areas [91]. The confusion matrix allows for computing the average and overall accuracies as well as the individual class User's and Producer's accuracies and their related errors (omission and commission), as described in [91]. However, this classification accuracy is based on the training areas and does not give a good assessment of the actual map accuracy. A more robust and independent accuracy assessment is to compare the resulting classified image with an independent set of field observation data acquired over the GPS validation sites that were not used as training areas. If the image returns the same class as the one observed at a validation site, then the pixel related to this validation site is assigned a value of 1; otherwise, the value is 0. A percentage of correct identifications can then be computed as a function of the total number of validation sites. This comparison was done using 951 GPS sites related to both wetland and non-wetland classes (Table 7).

Table 8 presents the Jeffries-Matusita (J-M) distances between the class pairs, computed using the reflectance values of the Landsat 8 OLI Band 2 to Band 6 images. The mean J-M distance is 1.971, indicating a good spectral separability between the classes. The minimum separability of 1.706 occurred between the shrub marsh (SM) and shrub wetland (SW) classes. The highest separability values of 2.000 were obtained between the deep water (WA) class and all other classes except the aquatic bed (AB) class.
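The accuracy measures described above can be computed directly from the matrix; here is a minimal sketch, with rows taken as the reference classes and columns as the classified classes (the paper's tables may use the opposite orientation):

```python
import numpy as np

def accuracy_summary(cm):
    """Accuracies from a confusion matrix (rows = reference, cols = classified).
    Returns overall accuracy, per-class producer's accuracy (100% minus the
    omission error), and per-class user's accuracy (100% minus the commission
    error), all in percent."""
    cm = np.asarray(cm, dtype=float)
    overall = 100.0 * np.trace(cm) / cm.sum()
    producers = 100.0 * np.diag(cm) / cm.sum(axis=1)  # correct / reference total
    users = 100.0 * np.diag(cm) / cm.sum(axis=0)      # correct / classified total
    return overall, producers, users
```

For a perfectly diagonal matrix all three measures are 100%; off-diagonal counts lower the producer's accuracy of the reference class (omission) and the user's accuracy of the classified class (commission).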
Table 9 shows the confusion matrix (and associated classification accuracies) comparing the training areas with the classified image for all 22 land-cover classes when RF was applied to the whole dataset. The resulting classified image is presented in Figure 3. We achieved an overall accuracy (OA) of 97.67%, indicating an excellent classification accuracy (Table 9).

Table 9. Confusion matrix (in pixels) and associated accuracies when the RF classifier is applied to all the images and the LiDAR-derived metrics (*). Classes: UV, SV, AC, AP, AG, SF, HF, MF, SL, CC, OB, SB, TB, OF, TF, FM, CM, SM, SW, FW, AB.

The RF classifier produces variable importance rank scores (expressed as a percentile), which give the importance of each variable in the overall classification (Figure 4). For the classification, we used all the input data since, in a previous study [39], we had already performed a detailed analysis of classification performances by generating wetland maps using different groups of variables and showed that the best classification was produced using all variables. The most important variable was the digital terrain model (DTM), whose removal from the model resulted in a decrease in model accuracy of 59%. Four of the LiDAR-derived topographic metrics (TWI02, TPI, TWI01, and CHM) were among the top 10 variables. The other variables among the ten most important in the classification were the Landsat 8 OLI B1 (ultra-blue) images acquired in summer and spring and the Landsat 8 OLI B2 (blue) image acquired in spring. The most important SAR variable was the Sentinel-1 C-VV imagery, followed by the Sentinel-1 C-HH and Sentinel-1 C-VH images acquired in summer (Figure 4).

Comparison with Independent GPS Sites and the 2016 and 2019 NB Reference Maps
The classified image was validated by comparison to the 951 in-situ GPS field sites. Table 10 shows the confusion matrix (and associated accuracies) comparing the field sites to the classified image. The validation has an overall accuracy of 94.95%. For the upland and water classes, the highest UA and PA occurred with the crop field (AC) and deep water (WA) classes (100%). Forest clear-cut (CC) has the highest error of commission (19.23%) because of confusion with softwood forest (SF) and hardwood forest (HF). The softwood forest (SF) class has the highest error of omission (33.33%), which is due to confusion with the treed bog (TB), mixed wood forest (MF), forest clear-cut (CC), and shrub wetland (SW) classes. For most of the wetland classes, the UA and PA were above 91.30%, the exceptions being the PA for treed bog (TB) and the UA for freshwater marsh (FM).

Among the wetland classes, the highest UA and PA occurred with the aquatic bed (AB) class. The treed bog (TB) class has the highest error of commission (21.28%), mainly because of confusion with softwood forests (SF) and forested wetlands (FW). The highest error of omission was for the freshwater marsh (FM) class (8.70%), because of confusion with pasture/hayfield (AP) and grasslands (AG). We also compared the in-situ GPS wetland validation sites to the 2016 provincial wetland reference map (Table 11). For this comparison, given that the 2016 NB reference map has only wetland classes, all non-wetland classes were combined into a single class. The overall accuracy (63.30%) was well below that obtained with the classified image produced in this study, and the associated UA and PA are generally lower, except for coastal marsh. We then compared the in-situ GPS validation sites to the 2019 NB reference map (Table 12).
Given that the 2019 NB reference map does not have individual wetland classes, all the in-situ GPS validation sites were grouped into two categories: wetland and non-wetland. While the 2019 NB reference map gives a higher overall accuracy (80.02%) than the 2016 NB reference map (63.30%), this newer map still has a lower accuracy than the map produced in this study (94.95%). The 2019 NB reference map does not provide individual wetland classes, which limits its practical use for several applications, including ecological studies. When the classified image and the two NB reference maps were compared only to the 489 wetland validation sites, 96.93% of these sites were classified in a wetland class on the classified image, while only 58.69% and 62.17% of these sites were identified as a wetland on the 2016 and 2019 NB reference maps, respectively.

The confusion matrices of Tables 10-12 give a global accuracy assessment of the classified image produced in this study, the 2016 NB reference map, and the 2019 NB reference map; it is then necessary to further investigate the sources of error for each map. Table 13 compares the 2016 NB reference map and the classified image in terms of the number and percentage of the 489 in-situ GPS wetland validation sites that were correctly identified in a wetland class or in the correct wetland class. While 97.34% of the GPS validation sites were correctly identified in a wetland class on the classified image, this percentage dropped to 59.92% in the case of the 2016 NB reference map. Similarly, 94.27% of the GPS validation sites were identified in the correct wetland class on the classified image, while only 29.86% were identified in the correct wetland class on the 2016 NB reference map.
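The wetland-only comparison described above can be sketched as follows; the function name is illustrative (not from the study's R code), and the class codes follow the abbreviations used in the paper's tables.

```python
# Wetland class codes as abbreviated in the paper's tables
WETLAND_CLASSES = {"OB", "SB", "TB", "OF", "TF", "FM", "CM", "SM", "SW", "FW", "AB"}

def wetland_agreement(site_classes, map_classes):
    """For validation sites whose field class is a wetland, return the percentage
    mapped in (a) any wetland class and (b) the exact field class."""
    pairs = [(s, m) for s, m in zip(site_classes, map_classes)
             if s in WETLAND_CLASSES]
    n = len(pairs)
    any_wetland = sum(m in WETLAND_CLASSES for _, m in pairs)  # right category
    exact_class = sum(m == s for s, m in pairs)                # right class
    return 100.0 * any_wetland / n, 100.0 * exact_class / n
```

Separating the two percentages mirrors the distinction drawn in Table 13 between sites missed entirely (mapped as upland) and sites placed in the wrong wetland class.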
To determine which wetland classes in the classified map or in the 2016 NB reference map were best or most poorly identified (by comparison with the GPS wetland validation sites), we calculated the number and percentage of correctly identified wetland validation sites for each wetland class (Table 14). All the classes were better identified in the classified image than in the 2016 NB reference map. For the classified image, the open fen (OF) and coastal marsh (CM) classes had the highest identification accuracy (100.00%), while the treed bog (TB) class had the lowest (78.72%). For the 2016 NB reference map, the open bog (OB) class had the highest accuracy (65.38%), while the treed fen (TF) and shrub marsh (SM) classes could not be identified at all (0.00%). For the classified image, about half of the incorrectly identified in-situ GPS wetland validation sites were not in the right wetland class and the other half were not identified as a wetland class (Table 15). For the 2016 NB reference map, about two-thirds of the incorrectly identified in-situ GPS wetland validation sites were not in the right wetland class and the other third were not identified as a wetland class (Table 15). On the classified image, misidentifications were mainly due to wetlands being identified in the wrong wetland classes (Table 16). On the 2016 NB reference map, the sites were identified either in the wrong wetland class (Table 16) or in an upland class (Table 17). This last case is particularly true for the treed bogs, the coastal marshes, the shrub wetlands, and the forested wetlands (Table 17).

Discussion
In this study, we produced a map with 10 non-wetland classes, 11 wetland classes, and one water class by applying the Random Forests classifier to a combination of LiDAR topographic metrics and Landsat 8 OLI, Sentinel-1, and ALOS-1 PALSAR images acquired in three seasons and under different water level conditions. We achieved a mean Jeffries-Matusita (J-M) distance of 1.971, computed between the class pairs using the reflectance values of the Landsat 8 OLI Band 2 to Band 6 images. Such a J-M distance indicates a good mean spectral separability between the classes. The lowest J-M distance (1.706) was obtained for the SM and SW classes, which are hardly distinguishable from each other using optical bands. Both classes have shrub-type vegetation and only differ in the amount of shrub coverage, which the optical images cannot readily distinguish. It has already been shown that shrub wetlands are the least separable classes [24,48,53,92,93]. The highest J-M distances were obtained between the deep water (WA) class and the other classes except for the aquatic bed (AB) class. This can be attributed to the very distinct reflectance of the deep water and aquatic bed classes. Other studies [24,57,92] have also found that open/shallow water is the most separable class from other wetland classes.
The overall classification accuracy obtained (97.67%) was much higher than those obtained in the Random Forests-based wetland mapping studies listed in Table 1. Most of these studies only used a combination of C-band SAR, optical, and DEM data. Some studies also included L-band imagery [41,50,51,56,68], given that the L-band, with its longer wavelength, is more penetrating, especially in high-density canopy areas where C-band and optical signals are strongly attenuated and cannot reach the forest floor. However, even when L-band imagery was included, the corresponding overall accuracies were slightly lower than in this study. The lower classification accuracies of previous studies could be attributed to the fact that few of them considered the seasonality of vegetation and water level in imagery selection, as was done in this study, where images were selected over three different seasons to account for the influence of leaf-off/on periods and water level. In addition, some images for this study were acquired in spring during a flooding event, which aided the delineation of wetlands: in flooding conditions, most wetlands are filled with water, which helps the classification [38,51]. In comparison to previous studies [38,49] that also considered seasonality, our classification accuracies were slightly higher because of the use of Landsat 8 OLI optical imagery, which has a better spatial resolution than the Landsat 5 TM imagery used in those studies. Landsat 8 OLI images are advantageous for wetland mapping because they have more bands and a better radiometric resolution than Landsat 5 TM. Similar to [37], we used several LiDAR-derived topographic metrics that can provide much-needed information related to water flow. Our better overall accuracy can also be attributed to the use of pan-sharpened Landsat 8 OLI images and of high-resolution Sentinel-1 and ALOS-1 PALSAR images.
High spatial resolution optical images were already known to be advantageous for detecting wetland boundaries and species with Random Forests [19]. Another reason is the use of Random Forests itself, which has been shown to be superior to other classifiers such as maximum likelihood, rule-based, neural network, hierarchical decision tree, and support vector machine classifiers in several previous studies [7,47,66,94], as is evident from a comparison of Tables 1 and 2. One of the studies in Table 2 [70] achieved an overall accuracy of 94% by applying a support vector machine classifier, but the resulting map had only two wetland classes, whereas our map included 11 wetland classes.
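The pixel-wise Random Forests workflow discussed above can be sketched in a few lines with scikit-learn. Everything in this sketch is illustrative: the feature stack, label construction, and hyperparameters (500 trees is a common RF choice for wetland mapping) are stand-ins, not the study's actual data or settings:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical stand-in for the study's feature stack: rows are training
# pixels, columns are the stacked optical, SAR, and LiDAR-derived variables.
n_pixels, n_features = 600, 8
X = rng.normal(size=(n_pixels, n_features))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # toy two-class labels

# Bootstrap-aggregated decision trees; oob_score=True gives a built-in
# accuracy estimate from the out-of-bag samples of each tree.
rf = RandomForestClassifier(n_estimators=500, oob_score=True, random_state=0)
rf.fit(X, y)
print(round(rf.oob_score_, 3))
```

In practice the fitted model would then be applied to every pixel of the co-registered image stack to produce the classified map.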
Four of the LiDAR-derived topographic metrics (TWI02, TPI, TWI01, and CHM) were among the top 10 variables. These metrics seem to be very important when mapping both upland and wetland land covers; topography was also shown to be very important when mapping wetlands in Ontario [39], Nova Scotia [37], and Newfoundland and Labrador [54]. The high rank of the LiDAR-derived topographic metrics can be explained by the influence of topography on the way water flows across or into a wetland. Wetlands can develop in a variety of landscapes, but topography influences the distribution of surplus water and consequently the location of wetlands [73]. The other variables among the ten most important in the classification were the Landsat 8 B1 (ultra-blue) band acquired in summer and spring and the B2 (blue) band acquired in spring. In numerous previous studies, optical imagery was found to be suitable for mapping wetlands, particularly open wetlands with short vegetation [20,23,48–50]. Most variables derived from SAR imagery were less important than the optical imagery, except for the Sentinel-1 C-VV summer imagery (Figure 4). This result agrees with previous studies, which found that optical imagery acquired during high to medium water levels (spring and summer) was more important than SAR imagery when classifying wetlands [37,38,51]. The most important SAR variable was the Sentinel-1 C-band VV, followed by HH and VH, acquired in summer. C-VV imagery has proved useful for delineating low-density marshes [30], similar to the coastal and shrub marshes of our study area. In addition, co-polarizations (HH and VV) were found to give a higher backscatter contrast between swamps and dry forest than cross-polarizations [31].
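Variable rankings like the one discussed above come directly from the fitted forest. The sketch below, on synthetic data whose labels are deliberately driven by the four "topographic" variables (the variable names echo the study's inputs but the data are invented), shows how scikit-learn's mean-decrease-in-impurity importances produce such a ranking:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)

# Illustrative variable names echoing the study's inputs (not the real data).
names = ["TWI02", "TPI", "TWI01", "CHM", "B1_summer", "B1_spring",
         "B2_spring", "C_VV_summer"]
X = rng.normal(size=(500, len(names)))
# Make the first four (topographic) variables drive the toy labels so they
# should dominate the importance ranking.
y = (X[:, :4].sum(axis=1) > 0).astype(int)

rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)

# Importances sorted from most to least important.
ranking = sorted(zip(names, rf.feature_importances_),
                 key=lambda t: t[1], reverse=True)
for name, score in ranking:
    print(f"{name:12s}{score:.3f}")
```

Impurity-based importances are convenient but can favor high-cardinality variables; permutation importance is a common cross-check.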
C-band imagery has been useful for detecting standing water under short vegetation [20,31] and was suitable in some wooded wetlands with low-density canopies or under leaf-off conditions [31–34]. While longer-wavelength L-band imagery should be expected to penetrate forest canopies and detect standing water, the ALOS-1 PALSAR L-band imagery was less important than the Sentinel-1 C-band imagery. Similar results were reported for mapping wetlands in Yukon, Canada [50]. One probable reason in our case is the time gap between the ALOS-1 PALSAR L-band imagery and the other imagery and data: the ALOS-1 PALSAR imagery was acquired in 2010, whereas the other data (LiDAR, Sentinel-1, Landsat 8 OLI) were acquired between 2013 and 2018. Future mapping efforts and studies would be assisted by obtaining all data sources in the same years. Among all the ALOS-1 PALSAR L-band imagery, the L-HV fall imagery was ranked higher than the other L-band images. These results are inconsistent with Bourgeau-Chavez et al. [49], who reported that L-HH spring bands were the most important among the L-bands.
The classified image produced by this study had an overall validation accuracy of 94.95%. This is somewhat lower than the validation accuracy (98.6%) obtained previously for a study in central New Brunswick [38], which could be attributed to the fact that the previous study used Radarsat-2 C-band images acquired during flooding events, while in this study the Sentinel-1 C-band imagery was acquired at high water levels but not during flooding events. In addition, the number of validation sites in [38] was lower. The accuracy in the current study was higher than the one we obtained previously in Nova Scotia (88.5%), also using L-band imagery and imagery acquired during flooding events [38]. While the confusion matrices obtained by comparing the classified image and the two reference maps to the validation GPS sites cannot be compared directly, given the different numbers of classes, the overall accuracy was lower for both the 2016 and 2019 NB reference maps than for the classified image produced in this study. Both reference maps were produced by photo-interpretation of digitized aerial photography, a method that has already been shown to be unreliable for mapping forested wetlands, particularly in regions with dense tree cover such as most of New Brunswick [13–15,37]. The 2019 NB reference map gave slightly better accuracies than the 2016 map because it also included information from LiDAR-derived 1 m topographic data; Figure 4 likewise indicates that LiDAR-derived topographic data are very important for land cover mapping and that adding such information to the analysis is beneficial. Figure 5 compares our classified image, the 2016 NB reference map, and the 2019 NB reference map for a forested wetland site for which a ground photograph is given. Only the classified image properly mapped the site as a forested wetland.
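The validation accuracies quoted above follow the standard recipe: tally the class observed at each GPS site against the class read off the map at that location, then take the diagonal of the confusion matrix over its grand total. A minimal sketch, with entirely invented class codes and labels (not the study's 951 sites):

```python
import numpy as np
from sklearn.metrics import confusion_matrix

# Hypothetical labels: class observed at each GPS validation site vs. the
# class read off a map at the same location (codes are illustrative).
observed = ["FW", "FW", "TB", "UP", "FM", "FW", "TB", "UP"]
mapped   = ["FW", "UP", "TB", "UP", "FM", "FW", "UP", "UP"]

labels = sorted(set(observed) | set(mapped))
cm = confusion_matrix(observed, mapped, labels=labels)

# Overall accuracy = correctly identified sites / total sites
# (the trace of the confusion matrix over its grand total).
overall_accuracy = np.trace(cm) / cm.sum()
print(overall_accuracy)  # 6 of 8 sites agree -> 0.75
```

The off-diagonal cells of `cm` are what reveal which classes (e.g., treed wetlands) absorb most of the misidentifications.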
On both the 2016 and 2019 NB reference maps, the site was mapped as an upland site. Our detailed analysis of the sources of error for each map is consistent with our previous work in central NB, which found that most misidentifications of GPS validation sites occurred for treed wetland classes [38].

Conclusions
Traditional mapping of wetlands by visual interpretation of air photos can be costly and time-consuming. This study has demonstrated the potential of applying the Random Forests (RF) classifier to freely available Landsat 8 OLI, ALOS-1 PALSAR, and Sentinel-1 images combined with LiDAR-derived topographic metrics to produce a highly accurate wetland map of southern New Brunswick. The resulting overall classification accuracy (97.67%) and validation accuracy (94.95%) showed that the combination of optical, SAR, and LiDAR data acquired in three seasons with varying water levels in the wetlands is highly effective for mapping wetlands in forested landscapes, as already shown in recent studies. Our confusion matrices showed that the main sources of error in wetland mapping occurred for the treed wetland classes, especially the treed bog (TB). A comparison with the GPS field validation sites indicated that the wetland map produced in this study was more accurate than both the 2016 and 2019 NB reference maps.
Compared to previous studies on wetland mapping using optical, SAR, and LiDAR data (Tables 1 and 2), exceptionally high classification and validation accuracies were obtained despite the time gap between when the Landsat 8 OLI, SAR, and LiDAR data were collected. Some land-cover changes would have occurred within this period. Further work is needed to assess whether this time gap has a significant effect on the mapping accuracy and whether better accuracy can be obtained using data acquired within the same year. This study used Landsat 8 OLI optical imagery; future work would benefit from testing newly available free optical images such as those of Sentinel-2.
