
Land Cover Classification in SubArctic Regions Using Fully Polarimetric RADARSAT-2 Data

by Yannick Duguay 1,2,*,†, Monique Bernier 1,2,†, Esther Lévesque 2,3 and Florent Domine 2,4
1 Centre Eau Terre Environnement, Institut national de la recherche scientifique (INRS), 490 de la Couronne, Québec City, QC G1K 9A9, Canada
2 Centre d’études nordiques, Laval University, Pavillon Abitibi-Price, 2405 rue de la Terrasse, Local 1202, Québec City, QC G1V 0A6, Canada
3 Université du Québec à Trois-Rivières, 3351 boul. des Forges, Trois-Rivières, QC G9A 5H7, Canada
4 Takuvik Joint International Laboratory, Laval University (Canada) and CNRS (France), Pavillon Alexandre Vachon, 1045 avenue de la Médecine, Québec City, QC G1V 0A6, Canada
* Author to whom correspondence should be addressed.
Academic Editors: Zhong Lu, Richard Gloaguen and Prasad S. Thenkabail
Remote Sens. 2016, 8(9), 697; https://doi.org/10.3390/rs8090697
Received: 26 May 2016 / Revised: 25 July 2016 / Accepted: 9 August 2016 / Published: 24 August 2016

Abstract

The expansion of shrub vegetation in Arctic and sub-Arctic environments observed in the past decades can have significant effects on northern ecosystems. There is a need for efficient tools to monitor those changes, not only in terms of the spatial coverage of shrubs, but also their vertical growth. The objective of the current paper is to evaluate the performance of polarimetric C-band SAR datasets for land cover classification in sub-Arctic environments. A series of RADARSAT-2 quad-pol images were acquired between October 2011 and April 2012. The Support Vector Machine (SVM) classification scheme was used on three sets of features: the elements of the polarimetric coherency matrix [T], the parameters extracted from a polarimetric decomposition based on the eigenvalues and eigenvectors of [T] and the parameters extracted from a model-based decomposition. Using a single image, the results show that the best classification accuracies (75%) are obtained using the [T] matrix with the October images. When adding a second image to the feature set, either from two different dates or two incidence angles, the classification accuracy is improved and reaches 90.1% with two images from October 2011 and April 2012 at 27° incidence. The results show that C-band polarimetric SAR imagery is an adequate tool to map shrub vegetation in sub-Arctic environments.
Keywords: SAR; polarimetry; sub-Arctic; classification; support vector machine

1. Introduction

The expansion of shrub vegetation in Arctic and sub-Arctic environments, or Arctic greening, is a phenomenon that has received much attention in the past few decades [1,2,3,4]. This process is characterized by an increase in shrub abundance [3,5,6], generally to the detriment of lichens [7,8]. While the increase in vertical and radial growth of shrub vegetation has been shown to correlate with the rising summer air temperatures observed in northern environments [9,10], multiple other mechanisms affect the phenomenon. For instance, snow cover protects shrubs and keeps soil temperatures relatively warm during the winter, favouring biological processes and the availability of nutrients [2,11]. Shrubs also capture wind-drifted snow, which creates a positive feedback by providing enhanced conditions for shrub growth [1,2,12]. The replacement of lichens with shrubs can have major impacts on these ecosystems, and rapid changes in land cover can also affect the climate through changes in albedo and in the output of greenhouse gases [13,14,15]. The monitoring of shrub vegetation is therefore important for the understanding of the ongoing changes in northern environments. Current methods rely either on field sampling or on aerial and satellite imagery in the visible and infrared spectrum to assess changes in shrub coverage or growth [6,8,16,17,18,19,20]. However, these methods have certain limitations: field sampling can be very costly and does not provide high spatial coverage, while satellite imagery in the visible and infrared spectrum is affected by the presence of clouds, which can be persistent in northern regions [16,21]. Previous studies have shown that SAR imagery can be a suitable tool to detect, quantify and map shrub vegetation, but mostly in arid or semi-arid environments [22,23,24,25].
More recently, it has been demonstrated that C- and X-band SAR backscattering is sensitive to shrub height in sub-Arctic environments [26]. In particular, it was shown that C-band SAR backscattering is most sensitive to shrub height when the stands are shorter than one meter and is most sensitive to changes in shrub coverage when it is below 20%. This indicates that C-band SAR would be most sensitive to the early stages of shrub growth and a good tool to study the expansion of shrub vegetation in Arctic and sub-Arctic environments. The production of land cover classifications using satellite imagery is an effective tool to produce useful maps of the studied environment and to facilitate the monitoring of temporal changes, especially in the complex environments found in sub-Arctic regions [8]. To our knowledge, however, no attempt has been made to use SAR imagery to classify and map out these types of environments. The objective of this paper is to demonstrate the potential of C-band polarimetric SAR imagery for land cover classification of sub-Arctic environments, in particular of the shrub vegetation cover. Polarimetric decompositions are widely used methods in SAR polarimetry to extract information on the physical nature of natural targets from scattered electromagnetic waves [27]. These decompositions can be used to enhance the response of various targets of interest in order to provide a better separability of the different classes and improve classification accuracies. As a secondary objective, two widely-used decomposition algorithms, one based on scattering models [28] and another based on the eigenvalues and eigenvectors of the coherency matrix [27,29], will be used as input features of the classification scheme in order to assess their individual performance for the classification of sub-Arctic environments.

2. Methodology

2.1. Study Area

The study area is a 60-km² region situated in the vicinity of the Inuit community of Umiujaq (56.55°N, 76.55°W) on the eastern shore of Hudson Bay, Nunavik (Northern Quebec, QC, Canada; see Figure 1). The area has been used in many studies and was described in a prior paper by Duguay et al. [26]. It lies in a discontinuous permafrost zone at the northern tree line, forming a transition between the forest tundra to the south and the shrub tundra to the north. The geomorphology of the area is characterized by a cuesta formation sloping gently eastward from Hudson Bay for nearly 5 km, up to an altitude of 330 m, at which point it forms steep, mainly east-facing cliffs. At the foot of these cliffs lie Tasiapik Valley to the north and Guillaume-Delisle Lake to the southeast. The vegetation cover in the coastal portion of the study area is made up of a range of tundra plant communities dominated by graminoids, forbs, prostrate dwarf shrubs, lichens and mosses. Areas covered with erect shrubs are relatively common, mostly composed of dwarf birch (Betula glandulosa Michx.), green alder (Alnus viridis (Chaix) DC. subsp. crispa (Ait.) Turrill) and willow species (Salix argyrocarpa Andersson, S. glauca L. var. cordifolia (Pursh) Dorn, S. planifolia Pursh, S. vestita Pursh). Scattered black spruce (Picea mariana (Mill.) BSP) krummholz can also be found. Tasiapik Valley is mainly erect shrub tundra dominated by dwarf birch mixed with a few willows (mainly Salix planifolia), Labrador tea (Rhododendron groenlandicum (Oeder) Kron and Judd) and green alder. Prostrate dwarf shrub-lichen tundra is found on lithalsa summits and at higher valley-side elevations. The lichen cover found in the valley is dominated by Cladonia stellaris ((Opiz) Pouzar & Vezda, 1971), which used to cover a greater portion of the area according to the local population, an observation confirmed by Provencher-Nolet et al. [8].
Clusters of black spruce are found in the upper part of the valley, while extended patches are present in the lowermost portion of the valley. Small wetlands and thermokarst ponds are also scattered in the Tasiapik Valley. Some peatlands and wetlands are also found on plateaus to the northeast of the Tasiapik Valley. Figure 1 shows a GeoEye-1 satellite image of the area, overlaid with polygons of the training and validation areas, as well as an overview of the region. Vascular plant nomenclature follows the Database of Vascular Plants of Canada (VASCAN) by Brouillet et al., 2010 [30].

2.2. Satellite, GIS and In Situ Datasets

A series of RADARSAT-2 Single-Look Complex (SLC) Fine Quad-pol (FQ) scenes (HH, HV, VH, VV polarization) were acquired over the study area between October 2011 and April 2012. RADARSAT-2 operates at the C-band with a frequency of 5.4 GHz; the nominal resolution for the FQ beam is 5.2 m × 7.6 m (slant range × azimuth). All of the acquisitions were made on descending orbits with two incidence angle modes, one at low incidence with θ ≈ 27° and one at high incidence with θ ≈ 38° (Table 1). The choices for the orbit and incidence modes were made in order to maximize the coverage of the study area while capturing a good range of incidence angles. Unfortunately, no images from the summer months were available for the study, so only fall and winter images were used.
A Digital Elevation Model (DEM) of the area was created by combining a high resolution LiDAR DEM (1-meter horizontal resolution) with topographic data generated from the Shuttle Radar Topography Mission (SRTM) to fill the areas that were not covered by the LiDAR DEM. The produced DEM was used to perform terrain corrections on the SAR images. A high-resolution GeoEye-1 multispectral image (1.65-m resolution), as well as a mosaic of aerial photographs (0.15-m resolution) were used to select the training and validation areas for the classification.
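The DEM merging step can be sketched as a simple gap fill, assuming both rasters have already been resampled to a common grid and that areas not covered by the LiDAR DEM are flagged as NaN (the paper does not specify the nodata convention or resampling workflow; the function name is ours):

```python
import numpy as np

def fill_dem_voids(lidar, srtm):
    """Fill gaps in a high-resolution LiDAR DEM with values from a
    coarser DEM (e.g., SRTM) resampled to the same grid.
    Assumes NaN marks LiDAR voids; both arrays must share one shape."""
    lidar = np.asarray(lidar, dtype=float)
    srtm = np.asarray(srtm, dtype=float)
    return np.where(np.isnan(lidar), srtm, lidar)
```

In practice the resampling and reprojection would be done with GIS tooling before this elementwise merge.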
In situ measurements of vegetation characteristics were collected during the summer of 2009 on a total of 238 circular plots (10 m in diameter). Each species of shrub and tree was identified; the percentage of ground covered by each species within the plot was assessed visually using abundance classes (Table 2); their heights were assessed using height classes (Table 2); and up to three height measurements were made for each species. Note that the height and percent coverage intervals of the classes are not equally distributed. The type of soil, its moisture conditions and the topographic position were also documented.

2.2.1. SAR Processing

Polarimetric SAR images contain the full polarization state of a scattered electromagnetic wave through the 2 × 2 complex scattering matrix [S]. In the case of RADARSAT-2, the complex elements of [S] are in the linear basis (H and V) and expressed as the combination of transmitted and received polarizations in the form $S_{HH}$, $S_{HV}$, $S_{VH}$, $S_{VV}$. Since RADARSAT-2 is a monostatic system, reciprocity is assumed, and the cross-polarized scatterings are considered equal ($S_{HV} = S_{VH}$). The scattering matrix is used to represent coherent targets; however, natural targets are generally incoherent, and a statistical representation is needed in order to characterize these types of random media. In this case, the second-order statistics of [S] are generated by averaging over a number of independent samples and represented with the covariance $\langle [C] \rangle$ or coherency $\langle [T] \rangle$ matrices, where $\langle \cdot \rangle$ denotes ensemble averaging.
Averages are provided through the multi-looking process, which is described below. The covariance matrix results from the outer product of the target vector $k_L$, which is based on a lexicographic reordering of [S], and is expressed as follows:
$$[C] = \left\langle \begin{bmatrix} |S_{HH}|^2 & \sqrt{2}\, S_{HH} S_{HV}^* & S_{HH} S_{VV}^* \\ \sqrt{2}\, S_{HV} S_{HH}^* & 2 |S_{HV}|^2 & \sqrt{2}\, S_{HV} S_{VV}^* \\ S_{VV} S_{HH}^* & \sqrt{2}\, S_{VV} S_{HV}^* & |S_{VV}|^2 \end{bmatrix} \right\rangle$$
The diagonal elements correspond to the backscattering coefficients in the different polarization channels. For natural environments, the reflection symmetry of the target is generally assumed [27,31], which means that $\langle S_{HH} S_{HV}^* \rangle = \langle S_{VV} S_{HV}^* \rangle \approx 0$.
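As an illustration, the single-look covariance matrix above can be formed from the three scattering amplitudes of one pixel. This is a minimal NumPy sketch (the function and variable names are ours, not part of the paper's processing chain), with the ensemble average $\langle \cdot \rangle$ left to a later multi-looking step:

```python
import numpy as np

def covariance_matrix(S_hh, S_hv, S_vv):
    """Single-look 3x3 lexicographic covariance matrix [C] for one pixel.

    Monostatic reciprocity (S_hv == S_vh) is assumed, so the target
    vector is k_L = [S_hh, sqrt(2)*S_hv, S_vv]^T; [C] is its outer
    product with its own conjugate."""
    k_L = np.array([S_hh, np.sqrt(2) * S_hv, S_vv], dtype=complex)
    return np.outer(k_L, k_L.conj())
```

Averaging such per-pixel matrices over a neighbourhood then yields the incoherent $\langle [C] \rangle$ used for natural targets.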
The coherency matrix results from the outer product of the target vector $k_P$, which is based on a linear combination of the Pauli matrices, and is expressed as follows:
$$[T] = \frac{1}{2} \left\langle \begin{bmatrix} |S_{HH}+S_{VV}|^2 & (S_{HH}+S_{VV})(S_{HH}-S_{VV})^* & 2 (S_{HH}+S_{VV}) S_{HV}^* \\ (S_{HH}-S_{VV})(S_{HH}+S_{VV})^* & |S_{HH}-S_{VV}|^2 & 2 (S_{HH}-S_{VV}) S_{HV}^* \\ 2 S_{HV} (S_{HH}+S_{VV})^* & 2 S_{HV} (S_{HH}-S_{VV})^* & 4 |S_{HV}|^2 \end{bmatrix} \right\rangle$$
This representation provides an interpretation that is more closely related to the physical properties of the scattered wave. Generally speaking, $T_{11}$ is linked to Bragg-type surface scattering; $T_{22}$ is linked to scattering from a dihedral; and $T_{33}$ is related to volume scattering. The coherency and covariance matrices are Hermitian and contain nine independent real parameters.
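The corresponding single-pixel coherency matrix can be sketched the same way from the Pauli target vector. Again a minimal illustration with our own naming; the conventional $1/\sqrt{2}$ normalization of $k_P$ is used so that the trace of [T] equals the total backscattered power (span):

```python
import numpy as np

def coherency_matrix(S_hh, S_hv, S_vv):
    """Single-look 3x3 Pauli-basis coherency matrix [T] for one pixel.

    k_P = (1/sqrt(2)) * [S_hh + S_vv, S_hh - S_vv, 2*S_hv]^T
    (monostatic case); [T] is its outer product with its conjugate.
    The 1/sqrt(2) factor makes trace([T]) equal the span."""
    k_P = np.array([S_hh + S_vv, S_hh - S_vv, 2 * S_hv],
                   dtype=complex) / np.sqrt(2)
    return np.outer(k_P, k_P.conj())
```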
The PolSARpro software Version 5.0 [32] was used to read the single-look complex RADARSAT-2 images and to perform the subsequent polarimetric analyses. The covariance and coherency matrices were extracted, and a first multi-look processing was applied during this operation by averaging the values of 2 pixels in the azimuth direction of the SAR image. In order to generate the first and second order statistics necessary for the production of the covariance and coherency matrices, a second multi-looking step was applied through the speckle filtering procedure. The improved Lee sigma [33] polarimetric speckle filter was applied with a 5 × 5 window, which brings the total number of looks to 50 before geo-corrections. As described in [34], the variance of the eigenvalues of the coherency matrix decreases with the number of looks and tends to become relatively stable at around 50 looks, where the mean values become relatively close to the true eigenvalues. A larger number of looks brings little benefit in terms of reducing the variance and approaching the true eigenvalues, and it also produces stronger spatial smoothing, which could blur the edges between classes, especially in a heterogeneous environment.
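The ensemble averaging itself can be illustrated with a plain boxcar mean over a 5 × 5 window of per-pixel coherency matrices. Note this is only a stand-in for the improved Lee sigma filter actually used in the paper, which averages adaptively to preserve edges:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def boxcar_multilook(T_img, win=5):
    """Boxcar ensemble average <.> of per-pixel coherency matrices.

    T_img: (rows, cols, 3, 3) complex array of single-look [T] matrices.
    Returns the 'valid' region averaged over win x win neighbourhoods,
    i.e. shape (rows - win + 1, cols - win + 1, 3, 3)."""
    # Window view shape: (rows-win+1, cols-win+1, 3, 3, win, win)
    w = sliding_window_view(T_img, (win, win), axis=(0, 1))
    return w.mean(axis=(-2, -1))
```

An adaptive filter such as improved Lee sigma replaces the uniform mean with a selective average over statistically homogeneous pixels, trading some look count for edge preservation.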
The filtered covariance images were re-projected from slant range to ground range and orthorectified using the Alaska Satellite Facility (ASF) MapReady software [35]. The process simulates an SAR image using a DEM and co-registers the SAR images on the simulated image. The pixel localization accuracy was on the order of 0.5 pixels on average (≈4–5 m). The polarimetric parameters were then extracted from the co-registered images.

2.2.2. Polarimetric Decompositions

The scatterers represented by the diagonal elements of the coherency matrix [T] are theoretical, purely geometric objects and cannot adequately represent the complexity of natural targets and their associated scattering mechanisms. Polarimetric decompositions are therefore used to extract information on naturally occurring scattering mechanisms from fully-polarimetric images. Two types of decompositions were considered for this study, one based on a scattering model and another based on the eigenvalues and eigenvectors of the coherency matrix. The model-based decomposition used is the one developed by Yamaguchi et al. [28], which decomposes the covariance matrix into four scattering mechanisms:
$$P_t = P_s + P_d + P_v + P_c$$
where $P_t$ is the total scattering power (span), $P_s$ is the surface scattering power generated by ground or water surfaces, $P_d$ is the double-bounce scattering power generated by double reflections of the radar signal on the ground and tree trunks or sufficiently large rock boulders, $P_v$ is the volume scattering power generated by the randomly-oriented branches of the vegetation canopy and $P_c$ is the helix scattering power, which arises when the reflection symmetry condition does not apply to a target ($\langle S_{HH} S_{HV}^* \rangle \neq 0$ and $\langle S_{VV} S_{HV}^* \rangle \neq 0$).
The eigenvalue-based decomposition considered for this study is the one developed by Cloude and Pottier [27,29]. This decomposition method extracts information on the nature of the scattering mechanisms found within a pixel through the use of the eigenvalues and eigenvectors of the coherency matrix. It introduces the concept of scattering entropy to take into account the randomness of the scattering mechanisms found within a given target. Three main parameters can be extracted from this analysis, namely entropy (H), anisotropy (A) and the alpha angle (α). Entropy is calculated from the logarithmic sum of the eigenvalues:
$$H = \sum_{i=1}^{n} -P_i \log_n(P_i)$$
$$P_i = \frac{\lambda_i}{\sum_{j=1}^{n} \lambda_j}$$
where n = 3 in the case of a monostatic system and $P_i$ can be referred to as the pseudo-probability of a given eigenvalue $\lambda_i$. Entropy ranges between 0 and 1, where H = 0 represents a pure target, which can be described in its entirety by the first eigenvalue and eigenvector, while H = 1 indicates that all of the eigenvalues are equal and that the target generates completely random polarization. Anisotropy is complementary to the entropy and describes the relative importance of $\lambda_2$ and $\lambda_3$:
$$A = \frac{\lambda_2 - \lambda_3}{\lambda_2 + \lambda_3}$$
Anisotropy is useful in cases where entropy is high to determine the contribution of secondary mechanisms. It ranges from 0 to 1, where A = 0 indicates that $\lambda_2$ and $\lambda_3$ have the same value, while A = 1 indicates that there are only two scattering mechanisms contributing to the signal and that all of the information is contained in the first and second eigenvalues.
The alpha angle (α) is extracted from the eigenvectors of [T] and identifies the nature of the scattering mechanism. The mean of the α angles from the three eigenvectors, weighted by the pseudo-probabilities, is used to estimate the dominant scattering mechanism through the relation:
$$\bar{\alpha} = P_1 \alpha_1 + P_2 \alpha_2 + P_3 \alpha_3$$
The value of $\bar{\alpha}$ ranges between 0° and 90°, where $\bar{\alpha} = 0°$ represents scattering from a Bragg-type surface; $\bar{\alpha} = 90°$ represents double-bounce scattering; and $\bar{\alpha} = 45°$ is considered to be volume scattering from a cloud of randomly-oriented dipoles.
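Combining the relations above, the three parameters can be computed from the eigen-decomposition of a 3 × 3 coherency matrix roughly as follows. This is a sketch of the standard Cloude-Pottier relations (using the arccosine of the magnitude of the first eigenvector component for each α angle), not the paper's PolSARpro implementation:

```python
import numpy as np

def h_a_alpha(T):
    """Entropy H, anisotropy A and mean alpha (degrees) from a
    3x3 Hermitian coherency matrix [T]."""
    lam, vec = np.linalg.eigh(T)            # eigh returns ascending order
    lam = np.clip(lam[::-1].real, 1e-12, None)  # descending, guard zeros
    vec = vec[:, ::-1]
    P = lam / lam.sum()                     # pseudo-probabilities
    H = -np.sum(P * np.log(P) / np.log(3))  # entropy, log base n = 3
    A = (lam[1] - lam[2]) / (lam[1] + lam[2])  # anisotropy
    # alpha_i = arccos(|first component of i-th eigenvector|)
    alpha = np.arccos(np.clip(np.abs(vec[0, :]), 0.0, 1.0))
    mean_alpha = np.degrees(np.sum(P * alpha))
    return H, A, mean_alpha
```

For a pure surface-like target (all power in $T_{11}$), this returns H ≈ 0 and $\bar{\alpha}$ ≈ 0°, as expected from the definitions above.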

2.3. Classification

For the purpose of the classification, nine different types of environments were identified. These environments are typically found in sub-Arctic areas and were chosen mainly by considering the scale and resolution of the images, as well as the inherent capabilities of radar systems. The final classes were inspired by the Earth Observation for Sustainable Development of Forests (EOSD) land cover classification legend [36] with some adaptations (Table 3). An example of each class is shown in aerial photographs in Figure 2. The main differences between the EOSD legend and the classes used for this study are found within the shrub-sparse and wetland-low vegetation classes. The former represents land covered with a mixture of lichens and herbs with the presence of a sparse shrub cover regardless of height, which would be a combination of the herbs and bryoids classes from EOSD. The wetland-low vegetation class is mostly composed of peatlands with some herbs and prostrate shrubs, which did not fit the definition of any of the EOSD classes. Those types of environments are similar to the shrub-sparse class in terms of the physical structure of the vegetation, but they differ in the hydrological regime, which highly affects SAR backscattering. The creation of this class was therefore necessary to distinguish it from the shrub-sparse class during the classification process, since the SAR signatures of the two should differ due to the differences in the dielectric properties of the ground.
The Support Vector Machine (SVM) method is a widely-used algorithm for pattern recognition and classification [37] and has demonstrated its capabilities for the classification of remote sensing imagery in a variety of applications [38,39,40]. The basic principle behind the SVM classifier is to find an optimal separating hyperplane that will divide the training data points from two distinct classes. When the elements are not linearly separable, a kernel function is applied to the datasets, which maps them to a higher dimensional space in order to find a linear separating hyperplane in this higher dimensional space. The original SVM algorithm was designed for binary classifications; to apply the method to a multi-class case, some adaptations were made through various techniques, two of the most common being one-versus-all or one-versus-one algorithms.
SAR datasets generally have high variations due to speckle or to different types of scattering mechanisms, so the ability of SVM classifications to delimit non-linearly separable classes is well adapted for the classification of SAR data [41]. The SVM method is also a non-parametric approach, which does not rely on the assumption that the dataset follows a specific statistical distribution; this makes it well adapted to polarimetric SAR data, which can have different distributions depending on the studied target and the polarimetric parameter [42]. It has demonstrated its potential for land cover classification using SAR imagery [41,43,44,45] and has been used for various types of applications, such as the classification of rice crops [46], for the delimitation and mapping of snow and sea ice [47,48,49], as well as forest vegetation classification [43,50,51].
The choice of features for the classification is one of the most important parts of the methodology in order to provide the best separability between classes and yield higher classification accuracies. The advantage of having fully-polarimetric images lies in the possibility to retrieve information from the polarization state of the electromagnetic wave that is well adapted to the studied target. However, it might be difficult to choose the right parameters for a specific application. This study looks at three different sets of parameters to compare their efficiency for the distinction and classification of the different types of land cover found in sub-Arctic environments. The first feature set is composed of the full coherency matrix (all 9 elements of the [T] matrix); the second feature set is composed of the scattering powers from the model-based decomposition; and the third feature set is composed of the parameters from the eigenvalue-based decomposition.
Combining multiple images with sufficient differences in acquisition parameters can also provide further separability between classes and improve the classification accuracy. For the current study, the two distinct incidence angles, as well as acquisition dates spanning two seasons, were expected to provide sufficient variability in the acquisition conditions to enhance class separability. Incidence angle has a significant effect on SAR backscattering and affects the various scattering mechanisms differently. Lower incidence angles generally produce stronger backscattering, regardless of the scattering mechanism considered. However, surface scattering mechanisms tend to show stronger responses at low incidence angles relative to volume scattering from the vegetation [52]. On the other hand, the relative importance of volume scattering from vegetation compared to surface scattering from the ground generally increases at higher incidence angles [52]. By combining two images with sufficiently different incidence angles, acquired at a short time interval so that ground conditions remain similar, it is possible to obtain a better separability between classes dominated by surface scattering (e.g., water, exposed land) and classes dominated by volume scattering (e.g., shrub-tall, wetland-shrub).
Similarly, by combining images acquired at different dates, in particular from different seasons, it is possible to enhance the separability between classes and to provide better classification accuracies [53]. For the studied environment, some classes will react differently to changes in temperatures during the transition from fall to winter. For example, the rock/rubble class will have very little variations in dielectric properties from fall to winter, while exposed land will experience a drop in its dielectric constant as the ground water freezes. Classes with shrub coverage tend to retain more snow, as demonstrated in [1,2,12], which causes ground temperatures to be warmer during the winter, enabling a better differentiation from classes with little or no shrub coverage. The volume scattering component from the vegetation is also affected by snow cover, as snow tends to attenuate the scattering from the vegetation due to the lower dielectric contrast between the shrub branches and the snow [26].
The Orfeo Toolbox [54] was used to perform the SVM classifications; it relies on the LIBSVM library [55] as a back end for the SVM learning tools. The kernel function used for the classification was the radial basis function, as it proved to be the most reliable for the current application compared to the linear, polynomial and sigmoid kernel functions. For the multi-class case, the one-versus-one method was used; the cost parameter C and the γ kernel parameter were optimized using cross-validation to provide the best possible classification accuracy; and the probabilities for each class were estimated [56] to produce a confidence map of the classification. The training and validation pixels were sampled randomly within the training polygons (Table 3), with half of the pixels assigned to training and half to validation. The size of the smallest training areas was used to limit the number of samples selected in the classes with larger training areas, so the number of training pixels per class ranged between 1165 and 1240.
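The classifier setup described above (RBF kernel, one-versus-one multi-class handling, cross-validated C and γ, class probabilities) can be approximated in scikit-learn, which wraps the same LIBSVM back end; the grid values below are illustrative only, as the actual search ranges are not given in the text:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV

def train_svm(X_train, y_train):
    """RBF-kernel SVM with one-vs-one multi-class handling and
    cross-validated C / gamma. X_train: (n_samples, n_features)
    polarimetric feature vectors; y_train: class labels."""
    grid = {"C": [1, 10, 100], "gamma": [0.01, 0.1, 1.0]}  # illustrative
    svm = SVC(kernel="rbf", decision_function_shape="ovo",
              probability=True)  # probabilities feed a confidence map
    search = GridSearchCV(svm, grid, cv=3)
    search.fit(X_train, y_train)
    return search.best_estimator_
```

For an operational run, the feature vectors would be the per-pixel [T] elements (or decomposition parameters) sampled from the training polygons, and `predict_proba` would supply the per-class probabilities for the confidence map.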

3. Results

3.1. Classification with a Single Image

The first series of classifications was performed on the three sets of polarimetric parameters extracted from each SAR acquisition listed in Table 1. The accuracies and kappa coefficients (κ) were calculated for each classification (Table 4). The highest overall classification accuracies were generally obtained using the full coherency matrix ([T]), with the best accuracy achieved with the 22 October 2011 image at 27° incidence. The classification accuracies achieved with the decompositions are generally 5–15 percentage points lower than with [T], and there is little difference between the two types of decomposition.
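For reference, the overall accuracy and kappa coefficient reported in Table 4 follow the usual definitions computed from a confusion matrix; a minimal sketch:

```python
import numpy as np

def accuracy_and_kappa(conf):
    """Overall accuracy and Cohen's kappa from a square confusion
    matrix (rows = reference classes, columns = classified classes)."""
    conf = np.asarray(conf, dtype=float)
    n = conf.sum()
    p_o = np.trace(conf) / n                        # observed agreement
    p_e = (conf.sum(axis=0) * conf.sum(axis=1)).sum() / n ** 2  # chance
    return p_o, (p_o - p_e) / (1.0 - p_e)
```

Kappa corrects the overall accuracy for the agreement expected by chance, which is why it is reported alongside the raw accuracy.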
Looking at the producer’s and user’s accuracies from the classifications generated with each feature set with the 22 October 2011 image at 27° (Table 5), as well as the confusion matrix for the classification created with the [T] matrix (Table 6), we can see that the class with the lowest accuracy is the wetland-shrub class, with significantly more omission errors than commission errors. Most of the confusion happens with the classes that have significant amounts of double bounce, such as the coniferous-open, the shrub-tall and, to a lesser extent, the shrub-low classes.

3.2. Classification with Multiple Images

The addition of an image from a different date, or with a different incidence angle acquired at a close date, significantly increases the overall accuracy. The results for the classifications with two images at different incidence angles are detailed in Table 7, and the results for the classifications with two images at different dates are detailed in Table 8. The accuracies presented only include the classifications using the full coherency matrix ([T]); classifications using multiple images from the model-based decomposition were tested, but the accuracies were lower. The best results for each type of combination (multi-angle and multi-date) were obtained by combining two October images for the multi-angle case, and by combining an image from October and an image from April at 27° incidence for the multi-date case. The producer’s and user’s accuracies for each class from these two classifications are shown in Table 9.
The best results with two images at different incidence angles were obtained with the images acquired in October 2011, with an overall accuracy of 89%. The accuracy decreases during the winter when the ground and vegetation are frozen and the presence of snow within the vegetation reduces the sensitivity of the SAR signal to variations in vegetation cover [26].
The confusion matrix of the classification using the two images from October 2011 is displayed in Table 10, where it can be seen that the classes with the lower accuracies are those containing taller and denser vegetation cover: the shrub-tall and wetland-shrub classes. The errors found within these two classes are generally due to confusion with other shrub classes. The wetland-shrub class has more omission errors than commission errors, while the opposite is true for the shrub-tall class, and the main reason stems from the confusion between the two classes.
The use of two images from different months acquired at the same incidence angle provided results similar to those obtained with two incidence angles in terms of overall accuracy. The best result is obtained when using an image from October 2011 combined with an image from April 2012 at θ = 27°, but there is very little difference in the accuracies between the classifications using the October 2011 images. The accuracies steadily decrease when using combinations of images acquired in late fall and during the winter. The classifications using images at the higher incidence angle (θ = 38°) generally result in slightly lower accuracies than those using images at the lower incidence angle. The confusion matrix for the classification using the images from 22 October 2011 and 7 April 2012 (θ = 27°) is detailed in Table 11. Compared to the confusion matrix from the classification using two images at different incidence angles (Table 10), it is possible to see an improvement in the classification accuracy of the wetland classes.
A map of the classification produced with the combination providing the best accuracy, using the 22 October and 7 April images at θ = 27°, is presented in Figure 3. Figure 3b shows an aerial photograph of a subset area in the Tasiapik Valley where a good variety of environment types can be found. The lighter areas are representative of the shrub-sparse class, which is dominated by a lichen cover. These areas are surrounded by relatively short shrubs representative of the shrub-low class, as well as some small ponds in the northwest and southeast of the area, which are generally surrounded by shrubs and are representative of the wetland-shrub class. The area in darker green in the north of the image is representative of the shrub-tall class. Figure 3d is the result of the classification of this area, and Figure 3e represents the confidence map overlaid on the classification, where the darker areas indicate lower confidence in the classification results.

4. Discussion

4.1. Classification with a Single Image

Overall, the classifications using the elements of the [T] matrix provide better results than the polarimetric decompositions. While polarimetric decompositions are generally used to emphasize or better represent the scattering mechanisms found in natural environments, they are general models that may not apply to the specific classes used for this study. These findings are consistent with the conclusions from [43], which considered the coherency matrix optimal for classification purposes using the SVM algorithm, as the addition of other polarimetric parameters did not provide significant improvements. The findings in [53] also show that combining all of the parameters from the various polarimetric decompositions adds little benefit to the overall accuracy of a classification using the random forest algorithm. The [T] matrix contains the full polarimetric information enclosed within the signal, so it seems that even if the decompositions emphasize the main scattering mechanisms, the SVM scheme is better able to pick out the small differences in the signal that characterize the different classes. For example, the double-bounce scattering power from the model-based decomposition shows little differentiation between classes compared to the $T_{22}$ element of the coherency matrix, which is generally associated with the double-bounce mechanism (Figure 4). The violin plots in Figure 4 illustrate the distribution of the $T_{22}$ and $P_d$ values found within each class. Since SVM classifications do not assume any statistical distribution for the input data or the different classes, they are well suited to extracting the variations of the signal within the [T] matrix that characterize each class. However, the classes with the lowest user’s and producer’s accuracies are the wetland-shrub, shrub-low, coniferous-open and shrub-tall, which have significant overlap in double-bounce scattering, as seen in Figure 4.
These classes also display similar distributions for the T33 element (Figure 5).
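As a concrete illustration of how the [T]-matrix feature set can feed an SVM classifier, the sketch below builds nine real-valued features per pixel from the Pauli scattering vector and trains a small RBF-kernel SVM on synthetic samples. This is a minimal sketch, assuming scikit-learn rather than the Orfeo Toolbox/LIBSVM chain used in the paper; the function names and synthetic classes are illustrative, not the authors' implementation.

```python
import numpy as np
from sklearn.svm import SVC

def coherency_features(s_hh, s_hv, s_vv):
    """Per-pixel features from the polarimetric coherency matrix [T].

    The Pauli scattering vector k = (1/sqrt(2)) [Shh+Svv, Shh-Svv, 2*Shv]
    gives T = k k^H (3x3 Hermitian).  Its nine independent real values are
    the classification features: the diagonal powers T11 (surface),
    T22 (double bounce) and T33 (volume), plus the real and imaginary
    parts of the off-diagonal terms.
    """
    k = np.array([s_hh + s_vv, s_hh - s_vv, 2.0 * s_hv]) / np.sqrt(2.0)
    T = np.outer(k, k.conj())
    return np.array([
        T[0, 0].real, T[1, 1].real, T[2, 2].real,
        T[0, 1].real, T[0, 1].imag,
        T[0, 2].real, T[0, 2].imag,
        T[1, 2].real, T[1, 2].imag,
    ])

# Toy example: two synthetic "classes" with different dominant mechanisms.
rng = np.random.default_rng(0)

def sample(mech, n=50):
    X = []
    for _ in range(n):
        hh, hv, vv = rng.normal(size=3) + 1j * rng.normal(size=3)
        if mech == "surface":    # Shh ~ Svv -> power concentrates in T11
            vv = hh + 0.1 * vv
        else:                    # volume -> strong cross-pol, power in T33
            hv = 3.0 * hv
        X.append(coherency_features(hh, hv, vv))
    return np.array(X)

X = np.vstack([sample("surface"), sample("volume")])
y = np.array([0] * 50 + [1] * 50)
clf = SVC(kernel="rbf", gamma="scale").fit(X, y)
```

No distributional assumption about the features is made by the RBF-kernel SVM, which matches the argument above for working directly on the [T]-matrix elements.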

4.2. Classification with Multiple Images

Using two incidence angles with a sufficient difference between them produces distinct responses in both the surface scattering from the underlying ground and the volume scattering from the shrub vegetation, which improves the separability of these classes and, as a consequence, the classification accuracy. As an example, the T11 parameter, which is related to surface scattering, exhibits different responses for certain classes during the fall (October 2011) depending on the incidence angle (Figure 6). Classes dominated by surface scattering, such as rock/rubble, exposed land and wetland-low vegetation, display greater differences in scattered power between the two angles than the other classes. It should be noted, however, that the difference in scattering within the water class is not solely due to the difference in incidence angles, but also to differences in wind conditions between the two dates: the stronger winds on 19 October produced more wavelets, which increased the surface scattering.
The changes in the dielectric properties of the different soil types and the presence of snow accumulations in areas of denser shrub vegetation during the winter [26] help the classification algorithm separate the classes. The increase in classification accuracy when combining two images acquired at different dates, and the fact that the best results are obtained when the two images come from different seasons, are consistent with the results of [53]. Looking again at the variations of the T11 element of the coherency matrix within individual classes, this time for each acquisition date at θ = 27° (Figure 7), it can be observed that the response varies from one class to the other. The shrub classes display a steady increase in T11 power, which is mostly associated with surface scattering and is correlated with the decreasing sensitivity of the SAR signal to the volume scattering of shrub vegetation during the winter [26]. The wetland classes also display some variations, but to a lesser degree, and the variations are even smaller in classes dominated by ground scattering, such as rock/rubble and exposed land. The strong increase in scattering from the water class is due to the formation of ice, which increases the T11 power as well as the span of the SAR signal as a whole.
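The multi-date (and multi-angle) classifications simply widen the per-pixel feature space: the polarimetric features of the two co-registered images are concatenated before training, so each pixel is described by, for instance, both its October and its April signature. A minimal sketch of that stacking step, with hypothetical array names and random placeholder values:

```python
import numpy as np

# Hypothetical per-pixel feature arrays (n_pixels x 9 coherency-matrix
# features each), one per co-registered acquisition -- e.g. the
# 22 October 2011 and 7 April 2012 images, both at a 27 degree incidence
# angle.  Random values stand in for the real SAR features here.
rng = np.random.default_rng(0)
n_pixels = 1000
feats_oct = rng.normal(size=(n_pixels, 9))
feats_apr = rng.normal(size=(n_pixels, 9))

# The images must share the same pixel grid; stacking then just
# concatenates the feature vectors, giving the SVM 18 features per pixel
# instead of 9.
stacked = np.hstack([feats_oct, feats_apr])
assert stacked.shape == (n_pixels, 18)
```

The classifier itself is unchanged; only the dimensionality of its input grows, which is why seasonally contrasting acquisitions add discriminating power.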
Furthermore, the improvement in the classification accuracy of the wetland classes compared to the multi-angle classification can be explained by the significant changes in surface scattering between October and April due to the freezing of the water-saturated ground and shallow ponds. There is also a slight increase in the classification accuracy of the shrub classes, due in part to reduced confusion with the wetland classes. The exposed land and water classes, which are dominated by surface scattering, have slightly lower accuracies than in the classification using two incidence angles, but the differences are very minor.
Looking at the details of the classification map in Figure 3, the first thing that stands out is the high spatial variability of the environments found in the sub-area, which is reflected in the confidence map. The wetland-shrub class has the lowest accuracy, and it incurs both commission errors (areas that are not wetlands classified as wetlands) and omission errors (areas that should be classified as wetlands but are not). Even so, the confidence remains relatively high in areas where these errors occur, such as the northern part of the area, where a few ponds and their surroundings are classified as rock/rubble. This could be because both classes are predominantly surface scatterers (water surface and rock surface) with relatively low surface roughness. One pond in the western part of the image, however, is classified as a combination of many classes, and this confusion is reflected in the confidence map, which is very dark in this area, meaning that there is very little confidence in the classified pixels. This suggests that this specific environment has a polarimetric signature that could be associated with many classes and probably lies near the hyperplanes separating those classes in the SVM algorithm.
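A confidence map like the one in Figure 3e can in principle be derived from the per-class probabilities that LIBSVM [55] estimates by pairwise coupling [56]: the confidence of a pixel is the probability of its winning class, so pixels lying near an SVM separating hyperplane receive low values. A hedged sketch using scikit-learn's wrapper around the same LIBSVM mechanism (the data here are synthetic clusters, not the RADARSAT-2 features):

```python
import numpy as np
from sklearn.svm import SVC

# Three well-separated synthetic "classes" standing in for land cover
# samples; SVC(probability=True) fits pairwise-coupled probability
# estimates (the Wu-Lin-Weng scheme implemented in LIBSVM).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=m, scale=0.5, size=(40, 2))
               for m in ((-2.0, 0.0), (2.0, 0.0), (0.0, 3.0))])
y = np.repeat([0, 1, 2], 40)
clf = SVC(kernel="rbf", probability=True, random_state=0).fit(X, y)

proba = clf.predict_proba(X)        # shape: (n_pixels, n_classes)
confidence = proba.max(axis=1)      # winning-class probability per pixel
labels = clf.classes_[proba.argmax(axis=1)]
```

Rendering `confidence` over the classified image would then darken exactly the ambiguous pixels, such as the multi-class pond discussed above.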
Many areas were also classified as coniferous-open even though there is no black spruce in those areas. This is consistent with the confusion matrix (Table 11), which shows that this class is the third worst in terms of commission errors, after the wetland-shrub and shrub-tall classes. The coniferous-open class generates volume scattering from the canopy of the black spruce trees, but because the cover is sparse, a significant amount of surface scattering also occurs. Some confusion is therefore expected with wetland-shrub, which can generate a similar response, but also with the shrub-low and shrub-tall classes, which likewise have a significant amount of surface scattering coming from the ground underneath the shrubs. However, all of the images used to produce the classifications were acquired during the fall and winter seasons, so it would be interesting to see the results with images acquired during the summer, when the shrubs still have their leaves. This would probably affect the penetration of the SAR signal through the canopy and change the contribution of surface scattering in the shrub classes.
The results are also consistent with our previous study [26], which concluded that SAR backscattering is most sensitive to the early stages of shrub growth, when the vegetation is shorter than one meter and the shrub density is lower than 20%. The shrub-sparse class is defined as covered with less than 50% shrub vegetation; it consistently has the highest accuracy of the shrub classes, and one of the highest accuracies overall.

5. Conclusions

The main objective of this paper was to assess the capabilities of polarimetric C-band SAR data for land cover classification in sub-Arctic environments. Of particular interest is the classification and mapping of shrub vegetation to study the Arctic greening phenomenon. A secondary objective was to assess the usefulness, for classification, of widely used polarimetric decomposition algorithms, which are generally employed to enhance the scattering mechanisms found in SAR scenes. The results show that overall classification accuracies of 75% can be achieved with the data contained directly in the polarimetric coherency matrix ([T]) of images acquired in early fall. The polarimetric decompositions used independently yielded lower classification accuracies. It was also demonstrated that using two images acquired at different times, or two images acquired at different incidence angles within a few days of each other, provides a substantial increase in classification accuracy over classifications performed with single images. The best result was achieved using two images from October 2011 and April 2012 at an incidence angle of 27°, which yielded an overall classification accuracy of 90.1% and a κ index of 0.89. These results are comparable in accuracy to those of a previous study in the area, which used high resolution aerial photographs and object-oriented classification algorithms [8]. This points to the possibility of classifying land cover, and in particular shrub vegetation, in sub-Arctic environments using polarimetric C-band SAR imagery, depending on the spatial scale of the study. The advantage of this method over classifications using satellite or aerial imagery acquired in the visible and infrared range of the electromagnetic spectrum is the ability to acquire data under any lighting or meteorological conditions.
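For reference, the reported overall accuracy and κ index follow the standard definitions computed from a confusion matrix, as do the producer's and user's accuracies of Table 5. The sketch below uses an illustrative two-class matrix, not data from the paper, and assumes the convention rows = reference classes, columns = classified pixels.

```python
import numpy as np

def accuracy_and_kappa(cm):
    """Overall accuracy and Cohen's kappa from a confusion matrix.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed agreement
    (the overall accuracy) and p_e is the agreement expected by chance,
    computed from the row and column marginals.
    """
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    p_o = np.trace(cm) / n
    p_e = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n ** 2
    return p_o, (p_o - p_e) / (1.0 - p_e)

def producers_users(cm):
    """Per-class accuracies (rows = reference, columns = classified).

    Producer's accuracy (1 - omission error) divides the diagonal by the
    reference totals (row sums); user's accuracy (1 - commission error)
    divides it by the classified totals (column sums).
    """
    cm = np.asarray(cm, dtype=float)
    return np.diag(cm) / cm.sum(axis=1), np.diag(cm) / cm.sum(axis=0)

# Illustrative two-class example (not data from the paper).
cm = [[50, 10],
      [10, 30]]
acc, kappa = accuracy_and_kappa(cm)   # acc = 0.80, kappa = 0.28/0.48
```

A κ of 0.89 alongside a 90.1% overall accuracy indicates that the agreement is far above what the class marginals would produce by chance.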
As discussed above, the results are consistent with our previous study [26], which found SAR backscattering most sensitive to the early stages of shrub growth, and the shrub-sparse class (defined by less than 50% shrub cover) consistently achieved the highest accuracy among the shrub classes. This points to the possibility of classifying shrub vegetation in other sub-Arctic regions, but further testing would be necessary, especially in environments where wetlands are more prevalent, as this type of land cover is very complex and caused the most problems in the current study. Furthermore, the results shown in [22,23,24,25] suggest that polarimetric SAR data could be used to classify shrub vegetation cover in arid and semi-arid environments with a similar method. In these cases, however, the effects of seasonality might not be as apparent as in the current study, and the temporal contrasts might be more related to rainfall and soil moisture cycles.
While it would be possible to enhance the classification results through classification merging methods, this is outside the scope of this study, and it could prove impractical and costly to acquire multiple SAR images for every classification, especially in the context of long-term monitoring of shrub vegetation. Overall, the method yields relatively good classification results and could provide a useful tool for shrub monitoring in sub-Arctic environments. Further research will focus on the estimation of snow mass accumulations within shrub vegetation using SAR data. It has already been demonstrated that snow accumulations are affected by shrub height [57], and the classification results could provide a good basis for mapping snow accumulations in sub-Arctic environments.

Acknowledgments

The authors would like to acknowledge the Canadian Space Agency (CSA) for providing the RADARSAT-2 imagery through the Science and Operational Applications Research-Education (SOAR-E) Initiative, SOAR-E project 5014: Évaluation des paramètres de la neige en milieu subarctique à l'aide de la polarimétrie et de l'interférométrie radar. Funding for this project has been provided by ArcticNet and the Natural Sciences and Engineering Research Council of Canada (NSERC). Funding for the field work was provided by the Northern Scientific Training Program (NSTP) of the Canadian Polar Commission. The authors would also like to thank the Centre d'études nordiques for access to their infrastructure, the Umiujaq community for their support during the field campaigns, Benoit Tremblay (Project Manager, Ministère du Développement durable, de l'Environnement et de la Lutte contre les changements climatiques) for his help with the interpretation of in situ data and André Beaudoin (Research Scientist, Canadian Forest Service, Natural Resources Canada) for his advice on vegetation classification.

Author Contributions

Yannick Duguay: design of the study, ground truth assessment, image and in situ data analysis, results' interpretation and manuscript writing. Monique Bernier: design of the study, ground truth assessment, results' interpretation and paper review. Esther Lévesque: ground measurements of vegetation characteristics, results' interpretation and paper review. Florent Domine: design of the study, ground truth assessment, results' interpretation and paper review.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Sturm, M.; Holmgren, J.; McFadden, J.P.; Liston, G.E.; Chapin, F.S.; Racine, C.H. Snow-Shrub Interactions in Arctic Tundra: A Hypothesis with Climatic Implications. J. Clim. 2001, 14, 336–344. [Google Scholar] [CrossRef]
  2. Sturm, M.; Schimel, J.; Michaelson, G.; Welker, J.; Oberbauer, S.; Liston, G.; Fahnestock, J.; Romanovsky, V.E. Winter biological processes could help convert Arctic tundra to shrubland. BioScience 2005, 55, 17–26. [Google Scholar] [CrossRef]
  3. Myers-Smith, I.H.; Forbes, B.C.; Wilmking, M.; Hallinger, M.; Lantz, T.; Blok, D.; Tape, K.D.; Macias-Fauria, M.; Sass-Klaassen, U.; Lévesque, E.; et al. Shrub expansion in tundra ecosystems: Dynamics, impacts and research priorities. Environ. Res. Lett. 2011, 6, 045509. [Google Scholar] [CrossRef]
  4. Elmendorf, S.C.; Henry, G.H.R.; Hollister, R.D.; Bjork, R.G.; Boulanger-Lapointe, N.; Cooper, E.J.; Cornelissen, J.H.C.; Day, T.A.; Dorrepaal, E.; Elumeeva, T.G.; et al. Plot-scale evidence of tundra vegetation change and links to recent summer warming. Nat. Clim. Chang. 2012, 2, 453–457. [Google Scholar] [CrossRef]
  5. Sturm, M.; Racine, C.; Tape, K. Climate change: Increasing shrub abundance in the Arctic. Nature 2001, 411, 546–547. [Google Scholar] [CrossRef] [PubMed]
  6. Tremblay, B.; Lévesque, E.; Boudreau, S. Recent expansion of erect shrubs in the Low Arctic: Evidence from Eastern Nunavik. Environ. Res. Lett. 2012, 7, 035501. [Google Scholar] [CrossRef]
  7. Cornelissen, J.H.C.; Callaghan, T.V.; Alatalo, J.M.; Michelsen, A.; Graglia, E.; Hartley, A.E.; Hik, D.S.; Hobbie, S.E.; Press, M.C.; Robinson, C.H.; et al. Global change and Arctic ecosystems: is lichen decline a function of increases in vascular plant biomass? J. Ecol. 2001, 89, 984–994. [Google Scholar] [CrossRef]
  8. Provencher-Nolet, L.; Bernier, M.; Lévesque, E. Quantification des changements récents à l’écotone forêt-toundra à partir de l’analyse numérique de photographies aériennes. Écoscience 2014, 21, 419–433. [Google Scholar] [CrossRef]
  9. Forbes, B.C.; Fauria, M.M.; Zetterberg, P. Russian Arctic warming and "greening" are closely tracked by tundra shrub willows. Glob. Chang. Biol. 2009, 16, 1542–1554. [Google Scholar] [CrossRef]
  10. Hallinger, M.; Manthey, M.; Wilmking, M. Establishing a missing link: Warm summers and winter snow cover promote shrub expansion into alpine tundra in Scandinavia. New Phytol. 2010, 186, 890–899. [Google Scholar] [CrossRef] [PubMed]
  11. Buckeridge, K.M.; Grogan, P. Deepened snow alters soil microbial nutrient limitations in Arctic birch hummock tundra. Appl. Soil Ecol. 2008, 39, 210–222. [Google Scholar] [CrossRef]
  12. Schimel, J.P.; Bilbrough, C.; Welker, J.M. Increased snow depth affects microbial activity and nitrogen mineralization in two Arctic tundra communities. Soil Biol. Biochem. 2004, 36, 217–227. [Google Scholar] [CrossRef]
  13. Olthof, I.; Pouliot, D. Treeline vegetation composition and change in Canada’s western SubArctic from AVHRR and canopy reflectance modeling. Remote Sens. Environ. 2010, 114, 805–815. [Google Scholar] [CrossRef]
  14. Chasmer, L.; Kenward, A.; Quinton, W.; Petrone, R. CO2 exchanges within zones of rapid conversion from permafrost plateau to bog and fen land cover types. Arct. Antarct. Alp. Res. 2012, 44, 399–411. [Google Scholar] [CrossRef]
  15. Fraser, R.H.; Lantz, T.C.; Olthof, I.; Kokelj, S.V.; Sims, R.A. Warming-Induced Shrub Expansion and Lichen Decline in the Western Canadian Arctic. Ecosystems 2014, 17, 1151–1168. [Google Scholar] [CrossRef]
  16. Stow, D.A.; Hope, A.; McGuire, D.; Verbyla, D.; Gamon, J.; Huemmrich, F.; Houston, S.; Racine, C.; Sturm, M.; Tape, K.; et al. Remote sensing of vegetation and land-cover change in Arctic Tundra Ecosystems. Remote Sens. Environ. 2004, 89, 281–308. [Google Scholar] [CrossRef]
  17. Blok, D.; Schaepman-Strub, G.; Bartholomeus, H.; Heijmans, M.M.P.D.; Maximov, T.C.; Berendse, F. The response of Arctic vegetation to the summer climate: Relation between shrub cover, NDVI, surface albedo and temperature. Environ. Res. Lett. 2011, 6, 035502. [Google Scholar] [CrossRef]
  18. Boelman, N.T.; Gough, L.; McLaren, J.R.; Greaves, H. Does NDVI reflect variation in the structural attributes associated with increasing shrub dominance in Arctic tundra? Environ. Res. Lett. 2011, 6, 035501. [Google Scholar] [CrossRef]
  19. McManus, K.M.; Morton, D.C.; Masek, J.G.; Wang, D.; Sexton, J.O.; Nagol, J.R.; Ropars, P.; Boudreau, S. Satellite-based evidence for shrub and graminoid tundra expansion in northern Quebec from 1986 to 2010. Glob. Chang. Biol. 2012, 18, 2313–2323. [Google Scholar] [CrossRef]
  20. Ropars, P.; Boudreau, S. Shrub expansion at the forest-tundra ecotone: Spatial heterogeneity linked to local topography. Environ. Res. Lett. 2012, 7, 015501. [Google Scholar] [CrossRef]
  21. Hope, A.S.; Pence, K.R.; Stow, D.A. NDVI from low altitude aircraft and composited NOAA AVHRR data for scaling Arctic ecosystem fluxes. Int. J. Remote Sens. 2004, 25, 4237–4250. [Google Scholar] [CrossRef]
  22. Musick, H.; Schaber, G.S.; Breed, C.S. AIRSAR studies of woody shrub density in Semiarid Rangeland: Jornada del Muerto, New Mexico. Remote Sens. Environ. 1998, 66, 29–40. [Google Scholar] [CrossRef]
  23. Svoray, T.; Shoshany, M.; Curran, P.J.; Foody, G.M.; Perevolotsky, A. Relationship between green leaf biomass volumetric density and ERS-2 SAR backscatter of four vegetation formations in the semi-arid zone of Israel. Int. J. Remote Sens. 2001, 22, 1601–1607. [Google Scholar] [CrossRef]
  24. Patel, P.; Srivastava, H.S.; Panigrahy, S.; Parihar, J.S. Comparative evaluation of the sensitivity of multi-polarized multi-frequency SAR backscatter to plant density. Int. J. Remote Sens. 2006, 27, 293–305. [Google Scholar] [CrossRef]
  25. Monsivais-Huertero, A.; Chenerie, I.; Sarabandi, K. Sahelian-grassland parameter estimation from backscattered radar response. IEEE Int. Geosci. Remote Sens. Symp. 2008, 3, 1119–1122. [Google Scholar]
  26. Duguay, Y.; Bernier, M.; Lévesque, E.; Tremblay, B. Potential of C and X band SAR for shrub growth monitoring in sub-arctic environments. Remote Sens. 2015, 7, 9410–9430. [Google Scholar] [CrossRef]
  27. Cloude, S.R.; Pottier, E. A review of target decomposition theorems in radar polarimetry. IEEE Trans. Geosci. Remote Sens. 1996, 34, 498–518. [Google Scholar] [CrossRef]
  28. Yamaguchi, Y.; Moriyama, T.; Ishido, M.; Yamada, H. Four-component scattering model for polarimetric SAR image decomposition. IEEE Trans. Geosci. Remote Sens. 2005, 43, 1699–1706. [Google Scholar] [CrossRef]
  29. Cloude, S.R.; Pottier, E. An entropy based classification scheme for land applications of polarimetric SAR. IEEE Trans. Geosci. Remote Sens. 1997, 35, 68–78. [Google Scholar] [CrossRef]
  30. Brouillet, L.; Coursol, F.; Meades, S.J.; Favreau, M.; Anions, M.; Bélisle, P.; Desmet, P. VASCAN, Database of Vascular Plants of Canada. Available online: http://data.canadensys.net/vascan (accessed on 8 June 2016).
  31. Freeman, A.; Durden, S.L. A three-component scattering model for polarimetric SAR data. IEEE Trans. Geosci. Remote Sens. 1998, 36, 963–973. [Google Scholar] [CrossRef]
  32. PolSARpro Version 5.0. Available online: https://earth.esa.int/web/polsarpro (accessed on 8 June 2016).
  33. Lee, J.S.; Wen, J.H.; Ainsworth, T.L.; Chen, K.S.; Chen, A.J. Improved Sigma Filter for Speckle Filtering of SAR Imagery. IEEE Trans. Geosci. Remote Sens. 2009, 47, 202–213. [Google Scholar]
  34. López-Martínez, C.; Pottier, E.; Cloude, S.R. Statistical Assessment of eigenvector-based target decomposition theorems in radar polarimetry. IEEE Trans. Geosci. Remote Sens. 2005, 43, 2058–2074. [Google Scholar] [CrossRef][Green Version]
  35. MapReady. Available online: https://www.asf.alaska.edu/data-tools/mapready (accessed on 8 June 2016).
  36. Wulder, M.; Nelson, T. EOSD Land Cover Classification Legend Report; Natural Resources Canada, Canadian Forest Service, Pacific Forestry Centre: Victoria, BC, Canada, 2003. [Google Scholar]
  37. Burges, C. A tutorial on support vector machines for pattern recognition. Data Min. Knowl. Discov. 1998, 2, 121–167. [Google Scholar] [CrossRef]
  38. Huang, C.; Davis, L.S.; Townshend, J.R.G. An assessment of support vector machines for land cover classification. Int. J. Remote Sens. 2002, 23, 725–749. [Google Scholar] [CrossRef]
  39. Foody, G.M.; Mathur, A. A relative evaluation of multiclass image classification by support vector machines. IEEE Trans. Geosci. Remote Sens. 2004, 42, 1335–1343. [Google Scholar] [CrossRef]
  40. Mountrakis, G.; Im, J.; Ogole, C. Support vector machines in remote sensing: A review. ISPRS J. Photogramm. Remote Sens. 2011, 66, 247–259. [Google Scholar] [CrossRef]
  41. Fukuda, S.; Hirosawa, H. Support vector machine classification of land cover: Application to polarimetric SAR data. IEEE Int. Geosci. Remote Sens. Symp. 2001, 1, 187–189. [Google Scholar]
  42. Lee, J.S.; Hoppel, K.W.; Mango, S.A.; Miller, A.R. Intensity and phase statistics of multilook polarimetric and interferometric SAR imagery. IEEE Trans. Geosci. Remote Sens. 1994, 32, 1017–1028. [Google Scholar]
  43. Lardeux, C.; Frison, P.L.; Tison, C.; Souyris, J.C.; Stoll, B.; Fruneau, B.; Rudant, J.P. Support vector machine for multifrequency SAR polarimetric data classification. IEEE Trans. Geosci. Remote Sens. 2009, 47, 4143–4152. [Google Scholar] [CrossRef]
  44. Zhang, L.; Zou, B.; Zhang, J.; Zhang, Y. Classification of Polarimetric SAR Image Based on Support Vector Machine Using Multiple-component Scattering Model and Texture Features. EURASIP J. Adv. Signal Process. 2010, 2010, 1–9. [Google Scholar] [CrossRef]
  45. Longepe, N.; Rakwatin, P.; Isoguchi, O.; Shimada, M.; Uryu, Y.; Yulianto, K. Assessment of ALOS PALSAR 50 m orthorectified FBD data for regional land cover classification by support vector machines. IEEE Trans. Geosci. Remote Sens. 2011, 49, 2135–2150. [Google Scholar] [CrossRef]
  46. Tan, C.P.; Koay, J.Y.; Lim, K.S.; Ewe, H.T.; Chuah, H.T. Classification of multi-temporal SAR images for rice crops using combined entropy decomposition and support vector machine technique. Prog. Electromagn. Res. 2007, 71, 19–39. [Google Scholar] [CrossRef]
  47. Longépé, N.; Shimada, M.; Allain, S.; Pottier, E. Capabilities of full-polarimetric PALSAR/ALOS for snow extent mapping. IEEE Int. Geosci. Remote Sens. Symp. 2008, 4, 1026–1029. [Google Scholar]
  48. Li, Z.; Huang, L.; Chen, Q.; Tian, B. Glacier snowline detection on a polarimetric SAR image. IEEE Geosci. Remote Sens. Lett. 2012, 9, 584–588. [Google Scholar]
  49. Liu, H.; Guo, H.; Zhang, L. SVM-Based Sea Ice Classification Using Textural Features and Concentration From RADARSAT-2 Dual-Pol ScanSAR Data. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2015, 8, 1601–1613. [Google Scholar] [CrossRef]
  50. Lardeux, C.; Frison, P.L.; Tison, C.; Souyris, J.C.; Stoll, B.; Fruneau, B.; Rudant, J.P. Classification of Tropical Vegetation Using Multifrequency Partial SAR Polarimetry. IEEE Geosci. Remote Sens. Lett. 2011, 8, 133–137. [Google Scholar] [CrossRef]
  51. Neumann, M.; Saatchi, S.; Ulander, L.M.H.; Fransson, J.E.S. Assessing performance of L- and P-band polarimetric interferometric SAR data in estimating boreal forest above-ground biomass. IEEE Trans. Geosci. Remote Sens. 2012, 50, 714–726. [Google Scholar] [CrossRef]
  52. Ulaby, F.T.; Moore, R.K.; Fung, A.K. From theory to applications. In Microwave Remote Sensing Active and Passive; Addison-Wesley: Reading, MA, USA, 1986; Volume 3. [Google Scholar]
  53. De Almeida Furtado, L.F.; Silva, T.S.F.; de Moraes Novo, E.M.L. Dual-season and full-polarimetric C band SAR assessment for vegetation mapping in the Amazon várzea wetlands. Remote Sens. Environ. 2016, 174, 212–222. [Google Scholar] [CrossRef]
  54. Orfeo Toolbox. Available online: https://www.orfeo-toolbox.org (accessed on 8 June 2016).
  55. Chang, C.C.; Lin, C.J. LIBSVM: A library for support vector machines. ACM Trans. Intell. Syst. Tech. 2011, 2. [Google Scholar] [CrossRef]
  56. Wu, T.F.; Lin, C.J.; Weng, R.C. Probability Estimates for Multi-class Classification by Pairwise Coupling. J. Mach. Learn. Res. 2004, 5, 975–1005. [Google Scholar]
  57. Duguay, Y.; Bernier, M. The use of RADARSAT-2 and TerraSAR-X data for the evaluation of snow characteristics in sub-Arctic regions. In Proceedings of the 2012 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Munich, Germany, 22–27 July 2012; pp. 3556–3559.
Figure 1. The regional image tiles (left) are courtesy of MapQuest; portions courtesy of NASA/JPL-Caltech and the U.S. Department of Agriculture, Farm Service Agency. RADARSAT-2 polarimetric span image of the study area (right) acquired on 19 October 2011, overlaid with the training/validation polygons for each class.
Figure 2. Aerial photographs acquired in the summer of 2010 of some of the training areas for the nine land cover classes: (a) water; (b) rock/rubble; (c) exposed land; (d) shrub-tall; (e) shrub-low; (f) sparse shrubs; (g) coniferous-sparse; (h) wetland-shrub; (i) wetland-low vegetation.
Figure 3. Example of a classified image using the [T] matrix from two images at θ = 27°, one from 22 October 2011 and the other from 7 April 2012. Image (a) is a GeoEye-1 satellite image from 24 September 2009; image (b) is an aerial photograph from 12 August 2010 of a smaller area in the Tasiapik valley, representing a heterogeneous environment typical of the area; image (c) is the classification result for the region, covering the same extent as image (a); image (d) is the classified image of the same sub-area as image (b); and image (e) is the classified image overlaid with the confidence map, where darker areas indicate lower confidence in the classification results.
Figure 4. Violin plots of the values of the T22 element from the coherency matrix and the double-bounce component of the model-based decomposition from the 22 October image (θ = 27°), extracted from the training areas of each class.
Figure 5. Violin plots of the values of the T33 element from the coherency matrix and the volume component of the model-based decomposition from the 22 October image (θ = 27°), extracted from the training areas of each class.
Figure 6. Violin plots of the values of T11 extracted from the training areas of each class for two different incidence angles. The data on the left are from the image acquired on 22 October 2011 at θ = 27°, and the data on the right are from the image acquired on 19 October 2011 at θ = 38°.
Figure 7. Violin plots of the values of T11 extracted from the training areas of each class for each SAR acquisition date at θ = 27°.
Table 1. Acquisition dates and characteristics of the SAR images. Date format: yyyy/mm/dd.
Date         Sensor       Polarizations   Incidence Angle (θ)
2011/10/19   RADARSAT-2   quad-pol        38°
2011/10/22   RADARSAT-2   quad-pol        27°
2011/11/12   RADARSAT-2   quad-pol        38°
2011/11/15   RADARSAT-2   quad-pol        27°
2011/12/06   RADARSAT-2   quad-pol        38°
2011/12/09   RADARSAT-2   quad-pol        27°
2012/03/11   RADARSAT-2   quad-pol        38°
2012/03/14   RADARSAT-2   quad-pol        27°
2012/04/04   RADARSAT-2   quad-pol        38°
2012/04/07   RADARSAT-2   quad-pol        27°
Table 2. Classes used for sampling vegetation height (m) and vegetation coverage (%) during field measurements.
Class          0    1        2           3        4       5         6       7
Height (m)     0    0–0.25   0.25–0.50   0.50–1   1–1.5   1.5–2.5   2.5–5   >5
Coverage (%)   0    0–5      5–15        15–25    25–50   50–75     75–90   90–100
Table 3. Definition of the classes used, adapted from the Earth Observation for Sustainable Development of Forests (EOSD) land cover classification legend. The symbol for each class, the number of training polygons used for the classifier and the total area of these polygons (in m²) are also presented.
Class (Symbol): Number of Training Polygons, Total Area of Training Polygons (m²). Description

Water (W): 89 polygons, 26,900 m². Lakes, rivers and ponds larger than 3 × 3 pixels (27 × 27 m).
Rock/Rubble (R): 83 polygons, 17,900 m². Exposed bedrock, block field or rubble.
Exposed Land (EL): 36 polygons, 204,800 m². Exposed soil, mostly sand.
Shrub-Tall (ST): 14 polygons, 216,800 m². Covered with at least 50% shrub; average shrub height greater than or equal to 1 m.
Shrub-Low (SL): 33 polygons, 194,500 m². Covered with at least 50% shrub; average shrub height less than 1 m.
Shrub-Sparse (SS): 16 polygons, 248,800 m². Covered with less than 50% shrub, regardless of shrub height; lichen and herbaceous vegetation cover at least 50% of the ground.
Coniferous-Open (CO): 9 polygons, 409,000 m². 25%–50% crown closure; coniferous trees make up 75% or more of the stands.
Wetland-Shrub (WS): 31 polygons, 230,000 m². Land with a water table near, at or above the soil surface for enough time to promote wetland or aquatic processes; the vegetation is composed mostly of low or tall shrubs. This can also include small ponds (less than 3 × 3 pixels) surrounded by shrubs.
Wetland-Low Vegetation (WL): 36 polygons, 210,100 m². Land with a water table near, at or above the soil surface for enough time to promote wetland or aquatic processes; the vegetation is composed mostly of mosses, herbs and some prostrate shrubs. This is generally composed of peatlands with small ponds (less than 3 × 3 pixels).
Table 4. Classification accuracies for each SAR image with each feature set: the full [T] matrix, the model-based decomposition scattering powers and the eigenvalue-based parameters. Date format: yyyy/mm/dd.
[ T ] MatrixModel-BasedEigenvalue-Based
DateIncidence AngleAccuracy κ Accuracy κ Accuracy κ
2011/10/1938 74.9%0.7266.8%0.6365.1%0.61
2011/10/2227 74.9%0.7267.2%0.6366.9%0.63
2011/11/1238 70.0%0.6663.9%0.5962.3%0.58
2011/11/1527 66.9%0.6358.2%0.5356.7%0.51
2011/12/0638 64.4%0.6056.8%0.5154.4%0.49
2011/12/0927 65.9%0.6255.9%0.5055.4%0.50
2012/03/1138 58.4%0.5344.3%0.3749.0%0.43
2012/03/1427 54.6%0.4943.1%0.3644.7%0.38
2012/04/0438 60.7%0.5646.5%0.4050.0%0.44
2012/04/0727 58.9%0.5445.6%0.3946.2%0.39
Table 5. Producer's and user's accuracies for each class from the classifications generated using the image from 22 October 2011 at 27° for each feature set: the full [T] matrix, the model-based decomposition scattering powers and the eigenvalue-based parameters.

| Class | [T] Matrix Producer's | [T] Matrix User's | Model-Based Producer's | Model-Based User's | Eigenvalue-Based Producer's | Eigenvalue-Based User's |
|---|---|---|---|---|---|---|
| Water | 98.7% | 95.8% | 99.1% | 94.9% | 75.6% | 80.3% |
| Rock/Rubble | 79.4% | 75.7% | 72.8% | 71.7% | 25.9% | 41.3% |
| Exposed Land | 81.2% | 86.8% | 77.4% | 89.7% | 60.4% | 59.2% |
| Shrub-Tall | 77.2% | 69.3% | 70.2% | 57.0% | 53.9% | 48.3% |
| Shrub-Low | 60.4% | 68.2% | 52.7% | 49.9% | 45.3% | 41.8% |
| Shrub-Sparse | 80.0% | 81.4% | 75.6% | 79.0% | 69.2% | 53.9% |
| Coniferous-Open | 70.1% | 62.2% | 61.9% | 52.4% | 66.7% | 55.8% |
| Wetland-Shrub | 47.6% | 58.7% | 19.8% | 38.5% | 25.2% | 38.4% |
| Wetland-Low Vegetation | 79.7% | 75.5% | 75.4% | 67.1% | 66.8% | 62.6% |
Table 6. Confusion matrix for the classification using the image from 22 October 2011 at 27°. The classes are identified as: W, Water; R, Rock/Rubble; EL, Exposed Land; ST, Shrub-Tall; SL, Shrub-Low; SS, Shrub-Sparse; CO, Coniferous-Open; WS, Wetland-Shrub; WL, Wetland-Low vegetation.

| Reference \ Predicted | W | R | EL | ST | SL | SS | CO | WS | WL | Total |
|---|---|---|---|---|---|---|---|---|---|---|
| W | 1163 | 0 | 13 | 0 | 0 | 1 | 0 | 0 | 1 | 1178 |
| R | 2 | 955 | 89 | 5 | 3 | 34 | 39 | 6 | 70 | 1203 |
| EL | 49 | 93 | 967 | 1 | 4 | 71 | 0 | 1 | 5 | 1191 |
| ST | 0 | 0 | 0 | 929 | 68 | 0 | 88 | 118 | 0 | 1203 |
| SL | 0 | 7 | 2 | 118 | 711 | 28 | 130 | 93 | 88 | 1177 |
| SS | 0 | 78 | 42 | 1 | 28 | 949 | 1 | 0 | 87 | 1186 |
| CO | 0 | 24 | 0 | 92 | 59 | 1 | 860 | 163 | 28 | 1227 |
| WS | 0 | 13 | 0 | 194 | 121 | 1 | 261 | 562 | 29 | 1181 |
| WL | 0 | 91 | 1 | 1 | 49 | 81 | 4 | 14 | 947 | 1188 |
| Total | 1214 | 1261 | 1114 | 1341 | 1043 | 1166 | 1383 | 957 | 1255 | 10,734 |
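The overall accuracy, κ index, and producer's/user's accuracies reported in the tables are all derived from confusion matrices of this kind. As a minimal illustrative sketch (the 3 × 3 matrix below is made up for demonstration and is not data from this study), the standard computations are:

```python
import numpy as np

# Illustrative 3x3 confusion matrix (rows = reference, columns = predicted);
# the values are invented for this example, NOT taken from the study.
cm = np.array([[50, 5, 2],
               [4, 60, 8],
               [1, 7, 40]], dtype=float)

n = cm.sum()
overall = np.trace(cm) / n                           # overall accuracy
pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2  # expected chance agreement
kappa = (overall - pe) / (1 - pe)                    # Cohen's kappa index
producers = np.diag(cm) / cm.sum(axis=1)             # diagonal over row totals
users = np.diag(cm) / cm.sum(axis=0)                 # diagonal over column totals
```

Producer's accuracy divides each diagonal element by its row (reference) total, user's accuracy by its column (predicted) total, matching the row/column layout of the confusion matrices above.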
Table 7. Classification accuracies for pairs of images from two different incidence angles with the full [T] matrix. Date format: yyyy/mm/dd.

| Dates | Incidence Angles | Accuracy | κ |
|---|---|---|---|
| 2011/10/19 + 2011/10/22 | 27° + 38° | 89.0% | 0.88 |
| 2011/11/12 + 2011/11/15 | 27° + 38° | 85.8% | 0.84 |
| 2011/12/06 + 2011/12/09 | 27° + 38° | 80.2% | 0.78 |
| 2012/03/11 + 2012/03/14 | 27° + 38° | 74.8% | 0.72 |
| 2012/04/04 + 2012/04/07 | 27° + 38° | 78.1% | 0.75 |
Table 8. Classification accuracies for pairs of images from two different dates using the full [T] matrix. Only the year and month of the image acquisitions are displayed to simplify the table. Date format: yyyy/mm.

| First Date | Second Date | Accuracy (θ = 27°) | κ (θ = 27°) | Accuracy (θ = 38°) | κ (θ = 38°) |
|---|---|---|---|---|---|
| 2011/10 | 2011/11 | 89.4% | 0.88 | 87.0% | 0.85 |
| 2011/10 | 2011/12 | 88.2% | 0.87 | 86.6% | 0.85 |
| 2011/10 | 2012/03 | 88.4% | 0.87 | 87.3% | 0.86 |
| 2011/10 | 2012/04 | 90.1% | 0.89 | 88.2% | 0.87 |
| 2011/11 | 2011/12 | 86.9% | 0.85 | 85.0% | 0.83 |
| 2011/11 | 2012/03 | 84.4% | 0.82 | 85.8% | 0.84 |
| 2011/11 | 2012/04 | 84.7% | 0.83 | 86.7% | 0.85 |
| 2011/12 | 2012/03 | 84.3% | 0.82 | 84.0% | 0.82 |
| 2011/12 | 2012/04 | 86.3% | 0.85 | 83.7% | 0.82 |
| 2012/03 | 2012/04 | 79.2% | 0.77 | 79.9% | 0.77 |
Table 9. Producer's and user's accuracies for each class from the two classifications generated using the image pairs with the best κ index: the combination of the two October images at two different incidence angles, and the combination of the October and April images at 27° incidence.

| Class | Multi-Angle Producer's | Multi-Angle User's | Multi-Date Producer's | Multi-Date User's |
|---|---|---|---|---|
| Water | 99.7% | 99.5% | 98.4% | 97.6% |
| Rock/Rubble | 90.8% | 88.7% | 87.5% | 89.0% |
| Exposed Land | 90.4% | 95.0% | 85.2% | 91.4% |
| Shrub-Tall | 90.9% | 82.1% | 93.4% | 85.0% |
| Shrub-Low | 82.7% | 88.2% | 86.1% | 89.4% |
| Shrub-Sparse | 92.2% | 90.9% | 93.8% | 92.1% |
| Coniferous-Open | 87.7% | 89.8% | 90.4% | 88.2% |
| Wetland-Shrub | 75.7% | 80.0% | 82.0% | 86.1% |
| Wetland-Low Vegetation | 90.9% | 87.6% | 93.6% | 92.3% |
Table 10. Confusion matrix for the classification using the combined images of 22 October 2011 at 27° and 19 October 2011 at 38°. The classes are identified as: W, Water; R, Rock/Rubble; EL, Exposed Land; ST, Shrub-Tall; SL, Shrub-Low; SS, Shrub-Sparse; CO, Coniferous-Open; WS, Wetland-Shrub; WL, Wetland-Low vegetation.

| Reference \ Predicted | W | R | EL | ST | SL | SS | CO | WS | WL | Total |
|---|---|---|---|---|---|---|---|---|---|---|
| W | 1175 | 0 | 3 | 0 | 0 | 0 | 0 | 0 | 0 | 1178 |
| R | 1 | 1080 | 39 | 17 | 1 | 17 | 4 | 1 | 30 | 1190 |
| EL | 5 | 64 | 1072 | 1 | 1 | 32 | 0 | 1 | 10 | 1186 |
| ST | 0 | 0 | 0 | 1093 | 32 | 0 | 19 | 59 | 0 | 1203 |
| SL | 0 | 1 | 1 | 49 | 968 | 22 | 24 | 66 | 40 | 1171 |
| SS | 0 | 26 | 10 | 0 | 11 | 1099 | 1 | 1 | 44 | 1192 |
| CO | 0 | 1 | 0 | 48 | 14 | 0 | 1081 | 88 | 0 | 1232 |
| WS | 0 | 0 | 0 | 122 | 61 | 0 | 74 | 887 | 28 | 1172 |
| WL | 0 | 45 | 4 | 2 | 10 | 39 | 1 | 6 | 1074 | 1181 |
| Total | 1181 | 1217 | 1129 | 1332 | 1098 | 1209 | 1204 | 1109 | 1226 | 10,705 |
Table 11. Confusion matrix for the classification using the combined images of October 2011 and April 2012 at 27°. The classes are identified as: W, Water; R, Rock/Rubble; EL, Exposed Land; ST, Shrub-Tall; SL, Shrub-Low; SS, Shrub-Sparse; CO, Coniferous-Open; WS, Wetland-Shrub; WL, Wetland-Low vegetation.

| Reference \ Predicted | W | R | EL | ST | SL | SS | CO | WS | WL | Total |
|---|---|---|---|---|---|---|---|---|---|---|
| W | 1162 | 0 | 14 | 4 | 0 | 1 | 0 | 0 | 0 | 1181 |
| R | 0 | 1047 | 63 | 34 | 7 | 9 | 6 | 3 | 28 | 1197 |
| EL | 28 | 80 | 1007 | 7 | 5 | 47 | 0 | 2 | 6 | 1182 |
| ST | 0 | 1 | 0 | 1127 | 18 | 0 | 19 | 41 | 0 | 1206 |
| SL | 0 | 3 | 0 | 41 | 1001 | 9 | 38 | 55 | 15 | 1162 |
| SS | 0 | 8 | 11 | 0 | 9 | 1112 | 3 | 3 | 39 | 1185 |
| CO | 0 | 5 | 0 | 39 | 20 | 5 | 1111 | 48 | 1 | 1229 |
| WS | 0 | 8 | 1 | 74 | 52 | 0 | 75 | 978 | 4 | 1192 |
| WL | 0 | 25 | 6 | 0 | 8 | 24 | 7 | 6 | 1115 | 1191 |
| Total | 1190 | 1177 | 1102 | 1326 | 1120 | 1207 | 1259 | 1136 | 1208 | 10,725 |