Article

Woodland Extraction from High-Resolution CASMSAR Data Based on Dempster-Shafer Evidence Theory Fusion

1
Chinese Academy of Surveying and Mapping, Lianhuachi West Road, No 28, Beijing 100830, China
2
National Quality Inspection and Testing Center for Surveying and Mapping Products, Lianhuachi West Road, No 28, Beijing 100830, China
3
Chinese University of Geosciences, Lumo Road, No 388, Wuhan 430074, China
*
Author to whom correspondence should be addressed.
Remote Sens. 2015, 7(4), 4068-4091; https://doi.org/10.3390/rs70404068
Submission received: 17 September 2014 / Revised: 15 March 2015 / Accepted: 26 March 2015 / Published: 7 April 2015
(This article belongs to the Special Issue Remote Sensing Dedicated to Geographical Conditions Monitoring)

Abstract
Mapping and monitoring of woodland resources is necessary, since woodland is vital for the natural environment and human survival. The intent of this paper is to propose a fusion scheme for woodland extraction with different frequency (P- and X-band) polarimetric synthetic aperture radar (PolSAR) and interferometric SAR (InSAR) data. In the study area of Hanjietou, China, a supervised complex Wishart classifier based on the initial polarimetric feature analysis was first applied to the PolSAR data and achieved an overall accuracy of 88%. An unsupervised classification based on elevation threshold segmentation was then applied to the InSAR data, with an overall accuracy of 90%. After Dempster-Shafer (D-S) evidence theory fusion processing for the PolSAR and InSAR classification results, the overall accuracy of the fusion result reached 95%. It was found that the proposed fusion method facilitates the reduction of polarimetric and interferometric SAR classification errors, and is suitable for the extraction of large areas of land cover with a uniform texture and height. The woodland extraction accuracy for the study area was sufficiently high (producer’s accuracy of 96% and user’s accuracy of 96%) that the woodland map generated from the fusion result can meet the demands of forest resource mapping and monitoring.

Graphical Abstract

1. Introduction

Woodland is defined in Chinese woodland management regulations as low-density forest with an understory of shrubs and herbaceous plants. Differing from forest with higher densities and areas of trees, woodland generally refers to low-density arbor and shrub forest with a smaller canopy closure [1]. Mapping and monitoring of woodland resources have been the subject of significant attention, as it offers an effective means for the measurement of carbon storage, which is highly relevant to climate change [2]. SAR, as an active sensor, offers all-weather, day-and-night imaging capability, and it can also yield information on the underlying structure of land cover, particularly woodland and grassland. Many studies have been conducted using SAR techniques, e.g., woodland classification, extraction, and mapping [3,4,5,6,7,8]. Currently, an increasing number of spaceborne and airborne SAR systems are being launched, providing abundant multi-frequency (X, C, S, L, P), quad-polarization, multi-interferometric-mode data for woodland resource research.
Polarization selection plays a key role in forest recognition and classification. Knowlton and Hoffer [9] analyzed the identification ability of HH and HV polarizations for various forest cover types, and their analysis showed that deciduous and coniferous forest cover types were more easily separated in the HH image than the HV image. Drieman et al. [10] reported interpretation results with significant differences between the VV and HV polarization images, and concluded from the visual analysis of SAR imagery that VV polarization data are often more effective for discriminating between a variety of forest types than HV polarization data. Liesenberg and Gloaguen [11] evaluated forest classification accuracy using single, interferometric dual, and quad-polarization mode L-band data, and the experimental results demonstrated that polarimetric features extracted from the quad-polarization L-band data increased classification accuracy when compared to single and dual-polarization data alone.
Frequency selection has a great impact on the discrimination and classification of forest cover types. Longer wavelengths (L- and P-band) generally perform better at extracting woody structural parameters than shorter wavelengths (C- and X-band) [12], because of their better foliage penetration and subsequent interaction with the largest structural features, e.g., the trunk and large branches. Meanwhile, leaves and branches generally contribute to the shorter wavelength (X- and C-band), as the shorter wavelength radar wave barely penetrates the canopy into the trunk or foliage [13]. From the perspective of forest classification, primary forest and logged forest in high-resolution X- and C-band images can be easily distinguished from other land-cover types because they display particularly distinctive textural patterns, and a single L- or P-band channel enables accurate classification of non-forest cover types [14].
SAR interferometry has become increasingly important in forest classification and mapping. SAR interferometry has also been found to be an effective tool for tree height estimation in forest areas [15]. The additional information of interferometric coherence is also important for forest mapping, to distinguish the different forest types [16]. Wegmuller and Werner [17] found that coherence enables coniferous, deciduous and mixed forest stands to be distinguished when repeat-orbit ERS-SAR data are used. Simard et al. [18] reported that an interferometrically derived forest map could be generated with an accuracy of around 90% for the mapping of forest versus non-forest.
As a result, the combination of multi-frequency, quad-polarization, and multi-interferometric mode SAR data has become a major trend in woodland research. Data fusion offers an effective way to use multi-source data, and can be performed at three different levels: pixel, feature, and decision. With the characteristics of high openness and fault-tolerance, the decision-level approaches have the potential to improve classification accuracy [19,20,21,22]. The popular decision-level approaches include the Bayesian methods [23], Dempster-Shafer (D-S) evidence theory [24], and fuzzy logic [25]. Among these approaches, D-S evidence theory is considered the most promising for the application of land-cover discrimination [26]. Compared to the Bayesian approaches, D-S evidence theory represents both imprecision and uncertainty, and is more flexible and general. Another of its advantages is its ability to consider not only individual classes, but also unions of classes [27]. The traditional classification approaches, such as the maximum likelihood classifier, basically differ from the D-S procedure in that the characteristics of each classification category are automatically selected from the training sites (as with multispectral data) and are not based on expert knowledge [28]. A great deal of classification research based on D-S evidence theory has been successfully implemented. For instance, Mascle [27] performed a separate unsupervised classification on each image, and the results were fused using a strategy based on belief functions, resulting in a 20% improvement in identification rate for corn compared with two other simple data fusion methods. Milisavljević [29] fused classification results coming from several sets of polarimetric parameters following different strategies, and the results indicated that D-S evidence theory outperformed the other methods in most cases.
Yang and Moon [30] combined multi-frequency (L- and P-band) polarimetric SAR and optical data for land cover classification using D-S evidence theory and obtained noticeably better land-cover classification results than the methods not involving fusion. Borja and Maria [31] combined multiple decision indices using D-S theory to determine the locations of swimming pools, and they achieved an overall detection accuracy of 99.86%.
For the purpose of woodland resource mapping and monitoring, highly accurate woodland extraction is an essential prerequisite for the subsequent processing, e.g., tree species recognition and classification. As described above, a number of alternative schemes for woodland extraction can be undertaken with multi-frequency, quad-polarization, multi-interferometric mode SAR datasets. CASMSAR, as China’s first airborne SAR mapping system, has made significant achievements in mapping science and technology [32]. The high-resolution SAR system onboard CASMSAR includes a dual-antenna X-band interferometer and quad-polarimetric SAR at the P-band. It provides abundant backscatter information for the different bands, as well as interferometric phase and polarimetric features, all of which can be acquired at the same time. The aim of this study was to accurately extract woodland in the chosen study area. According to the woodland characteristics of the study area, the woodland extraction could be deemed a binary forest/non-forest classification. Considering the two kinds of available datasets, PolSAR and InSAR classifications were first implemented, and then D-S evidence theory was employed to fuse the two classification results. The classification accuracies of the overall classification results were evaluated with land-cover reference data, and the woodland was extracted to generate a woodland map.

2. Materials

2.1. Study Area

The study area was located in Hanjietou (34.39°N to 34.40°N and 113.09°E to 113.11°E, see Figure 1), Henan province, in the middle of China, covering a south-eastern part of the city of Dengfeng of about 2.1 km². The elevation values (i.e., heights above the geoid) range from 220 m to 310 m. It should be noted that there were hills in the east and north of the study area, which could hinder the woodland extraction due to topographic effects. The study area mainly comprised woodland, farmland, buildings, and bare land. The main tree species in the study area were pine and northern larch, categorized as woodland (low-density needleleaf forest), which facilitated a binary forest/non-forest classification.
Figure 1. The optical image of the study area (copyright Google Earth).

2.2. CASMSAR Datasets

In this study, a pair of X-band interferometric images were simultaneously acquired by the dual-antenna interferometer, and a single P-band quad-polarization image was acquired by the fully polarimetric SAR sensor in June 2011. Prior to and during the period of SAR data acquisition, the weather was cloudy and no precipitation occurred in the study area. The main characteristics of the CASMSAR data used in this study are presented in Table 1.
Table 1. CASMSAR dataset description.
Parameters                     P-Band         X-Band
Acquired                       2011-06-23     2011-06-23
Polarization                   HH/HV/VH/VV    HH
Pixel size (m²)                0.6 × 0.6      0.25 × 0.25
Wavelength (m)                 0.5            0.03
Spatial resolution (m)         1              0.5
Central frequency              600 MHz        9.6 GHz
Number of looks                1              1
Flight altitude (m)            3155           3155
Aircraft ground speed (m/s)    115.5          115.5
Incidence angle (°)            43             50
Incidence angle interval (°)   10             13
Slant range swath start (m)    3496.31        2690.64
Slant range swath end (m)      12,097.91      12,418.64

2.3. Pre-Processing

In the pre-processing stage, polarimetric and geometric calibration was first implemented with the aid of trihedral and dihedral corner reflectors deployed in the study area. For the polarimetric SAR pre-processing, a Lee polarimetric filter was employed with the P-band PolSAR image so as to reduce the impact of speckling [33], with the size of filter window being 5 × 5 pixel. For the interferometric pre-processing, the InSAR processing chain was undertaken to generate the height map with the X-band InSAR images. The InSAR processing consists of the following main processing steps: (1) accurate co-registration of the focused SAR data; (2) calculation of the interferogram associated with multi-look processing of four looks; (3) interferogram flattening; (4) interferogram filtering with a 5 × 5 pixel window; (5) phase unwrapping; and (6) unwrapped phase to height conversion. The P- and X-band datasets were then georeferenced to the same reference system (WGS84 ellipsoid and Transverse Mercator (TM) projection) with a grid size of 1 m. Finally, an orthorectified PolSAR image and an InSAR elevation map (see Figure 2) were generated for the following fusion.
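As an illustration of steps (1)–(2), interferogram formation with four-look (2 × 2) multilooking can be sketched in a few lines of NumPy. This is a hypothetical sketch, not the software actually used for the CASMSAR processing chain; the function name and interface are illustrative.

```python
import numpy as np

# Sketch of interferogram formation from two co-registered single-look
# complex (SLC) images, followed by 2 x 2 multilooking (four looks).
def form_interferogram(s1, s2, looks_az=2, looks_rg=2):
    ifg = s1 * np.conj(s2)                        # complex interferogram
    h, w = ifg.shape
    h2, w2 = h // looks_az, w // looks_rg
    ifg = ifg[:h2 * looks_az, :w2 * looks_rg]     # crop to a multiple of the looks
    # Average complex values within each look window, then take the phase
    ml = ifg.reshape(h2, looks_az, w2, looks_rg).mean(axis=(1, 3))
    return np.angle(ml)                           # wrapped phase in (-pi, pi]
```

The remaining steps (flattening, filtering, unwrapping, phase-to-height conversion) depend on the imaging geometry and are omitted here.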
Figure 2. (a) P-band color-synthesized map of three Pauli basis polarizations (HH+VV, HV, HH-VV); (b) X-band elevation map generated by interferometric processing.

2.4. Land-Cover Data

The reference land cover selected for the study was a 1:10,000 land-cover product produced by the local surveying department of Henan in 2007. This product was manually produced by a photointerpreter via a field investigation of the different land-cover types, and its accuracy was not less than 90%. The land cover was translated to the reference system (WGS84 ellipsoid and TM projection) with a grid size of 1 m, in agreement with the georeferenced X- and P-band SAR images, using a small number of ground control points. However, the land cover was subject to some classification errors, and classes may have changed between 2007 and the SAR image acquisition. Landsat TM images taken from 2007 to 2011 were therefore used for woodland change detection. It was found that the majority of the woodland had not changed between 2007 and 2011, except for a small area of newly planted woodland, as shown in Figure 3. Therefore, the reference land cover could be considered a reliable data source for the accuracy validation.
Figure 3. 1:10,000 land-cover map of the study area.

3. Methods and Processing

3.1. Single-Band SAR Data Processing

For the purpose of the fusion of the different frequency PolSAR and InSAR data for the woodland extraction, a supervised complex Wishart classification based on the initial polarimetric features analysis, and an unsupervised classification based on elevation threshold segmentation were, respectively, applied to the P-band PolSAR and X-band InSAR data. The detailed processing is described in the following.

3.1.1. PolSAR Image Classification

The supervised complex Wishart classifier was used for the P-band PolSAR classification because it makes full use of the polarimetric coherency matrix [34]. Prior to the classification, the parameter of the span representing the total scattering power was first calculated, as shown in Figure 4a [35]. The polarimetric parameters of the alpha angle (α), the entropy (H), and the anisotropy (A) were analyzed because these parameters characterize the representative polarimetric properties of different land-cover types, as well as the invariant attribute of the polarimetric span, which is independent of the radiometric variation associated with topography [36,37]. The three polarimetric parameters were computed with a 5 × 5 sliding window, as shown in Figure 4b–d.
The span map displays the total power from −39 dB to 10 dB, which reveals buildings and woodland with higher radar returns, due to double-bounce and volume scattering, and farmland with lower radar returns, probably due to the inhomogeneity of crops. For the polarimetric feature analysis, the representative 2-D segmented planes of H-α and H/A can be depicted according to the computed parameters of alpha angle, entropy, and anisotropy. Based on the unsupervised classification scheme proposed by Cloude and Pottier [36], the 3-D H/α/A segmentation space that represents all the random scattering mechanisms is often employed in PolSAR classification. In this study, the two planes of the 3-D space were utilized: the H-α and H/A planes. They were, respectively, segmented into nine and six different zones. Referring to the different land-cover types of the land-cover product, the segmentation zones were then categorized into four classes. As shown in Figure 5, woodland is characterized by high-entropy vegetation scattering in zones 1 and 2 of the H-α plane, and low anisotropy corresponding to random scattering in zone 2 of the H/A plane. Buildings are characterized by low-entropy multiple scattering in zone 7 of the H-α plane, and higher anisotropy corresponding to two scattering mechanisms in zones 3 and 5 of the H/A plane. Farmland is characterized by mixed double-bounce and volume scattering with high entropy and a high α value, which leads to confusion between farmland/forest and farmland/building; however, its lower span of less than −30 dB can be used to distinguish this class from the others. Bare land is characterized by Bragg surface scattering corresponding to low entropy with an α value of less than 42.5° in zone 9 of the H-α plane.
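The parameter computation described above can be sketched for a single windowed coherency matrix using the generic Cloude-Pottier eigen-decomposition; the function name and interface below are illustrative, not taken from the paper.

```python
import numpy as np

# Sketch: H/alpha/A from a 3x3 Hermitian coherency matrix T (e.g., averaged
# over a 5x5 sliding window), via eigen-decomposition.
def h_alpha_a(T):
    w, v = np.linalg.eigh(T)                 # eigenvalues in ascending order
    w = np.clip(w.real, 1e-12, None)
    p = w / w.sum()                          # pseudo-probabilities
    H = -(p * np.log(p)).sum() / np.log(3)   # entropy (log base 3)
    # Alpha angle of each scattering mechanism from the first component
    # of the corresponding eigenvector
    alphas = np.degrees(np.arccos(np.abs(v[0, :])))
    mean_alpha = (p * alphas).sum()          # mean alpha angle (degrees)
    # Anisotropy from the two smaller eigenvalues (lambda2, lambda3)
    A = (w[1] - w[0]) / (w[1] + w[0])
    return H, mean_alpha, A
```

Fully random scattering (three equal eigenvalues) gives H = 1 and A = 0, which corresponds to the high-entropy vegetation zones described above.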
Figure 4. (a) Span, (b) alpha angle, (c) entropy, and (d) anisotropy of the study area.
Figure 5. 2-D segmented planes: (a) H-α segmented plane and (b) H/A segmented plane.
The supervised complex Wishart classification followed the feature analysis. The supervised complex Wishart classification scheme can be represented as a three-step process, described in the following.
(1) By virtue of the feature analysis of the study area, we selected four clusters of training samples from the PolSAR image.
(2) According to the complex Wishart distribution, we established the maximum likelihood estimation function for each cluster.
(3) We learned the different statistical quantities from the training samples and assigned each pixel of the PolSAR image to one of the clusters defined in step (1) by the minimum-distance measure.
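The three-step scheme can be sketched as follows for a single pixel, using the standard Wishart distance d(T, V_m) = ln|V_m| + Tr(V_m⁻¹T) between a pixel's coherency matrix T and a cluster-mean matrix V_m learned from the training samples. The names and interfaces are illustrative, not the authors' implementation.

```python
import numpy as np

# Wishart distance between a pixel coherency matrix T and a cluster mean V
def wishart_distance(T, V):
    Vinv = np.linalg.inv(V)
    return np.log(np.abs(np.linalg.det(V))) + np.trace(Vinv @ T).real

# Assign the pixel to the cluster with the minimum Wishart distance
def classify_pixel(T, cluster_means):
    d = [wishart_distance(T, V) for V in cluster_means]
    return int(np.argmin(d))
```

Iterating this assignment over all pixels, with cluster means re-estimated from the training samples, yields the supervised Wishart classification map.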

3.1.2. InSAR Elevation Map Classification

The interferometric elevation map of the X-band image is actually the digital surface model (DSM) of the study area, so we aimed to use the elevation feature to distinguish the various land-cover types. Unsupervised threshold segmentation was applied to the InSAR elevation map classification. However, topography can have an impact on the classification result. For example, buildings positioned on sloping terrain can have a similar elevation to woodland on flat ground, and the two objects could then be categorized into the same class. Therefore, in order to avoid topographic effects, we needed to separate the ground points from the object points (i.e., generate the digital elevation model (DEM) from the DSM) prior to the classification.
The DEM was extracted from the DSM by applying a local minimum filter to remove non-terrain objects [38]. The DEM was produced by first applying a filter to determine the local height minima, and then smoothing with a mean filter. The minimum and mean filter sizes were, respectively, 3 × 3 and 35 × 35. Because both very small and very large mean-filter windows typically produce relatively large errors in the DEM, a mid-range filter window size of 35 × 35 was selected. Once the DEM data were subtracted from the DSM data, the normalized DSM (i.e., nDSM) image was obtained by the method described in [39], eliminating the influence of topography. Figure 6 presents the rendered DEM and nDSM images of the study area. The relationships between the DSM, DEM, and nDSM [40] are shown in Figure 7: the solid line represents the DEM, which is the outline of the terrain; the dashed line represents the DSM, which is the outline of both the terrain and the non-terrain objects; and the combined solid and dotted line represents the nDSM, which is the outline of the non-terrain objects, e.g., woodland and buildings.
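A minimal sketch of this DEM/nDSM generation, assuming the DSM is a 2-D float array on the 1 m grid; the SciPy-based implementation and function name are illustrative, not the authors' software, and the window sizes follow the text (3 × 3 minimum filter, 35 × 35 mean filter).

```python
import numpy as np
from scipy.ndimage import minimum_filter, uniform_filter

# DSM -> (DEM, nDSM): local minima suppress non-terrain objects, a mean
# filter smooths the result into a bare-earth surface, and the difference
# gives object heights above the terrain.
def dsm_to_ndsm(dsm, min_win=3, mean_win=35):
    local_min = minimum_filter(dsm, size=min_win)   # local height minima
    dem = uniform_filter(local_min, size=mean_win)  # smoothed bare-earth DEM
    ndsm = dsm - dem                                # normalized DSM
    return dem, ndsm
```

On a synthetic flat scene with a raised block, the nDSM is near zero on bare terrain and recovers most of the object height over the block.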
Figure 6. The DEM and nDSM of the study area. (a) DEM. (b) nDSM.
Figure 7. The relationships between the DSM, DEM and nDSM.
As shown in Figure 6, the height difference between the farmland and bare land classes is so small that it is difficult to distinguish the two classes. Compared with the four classes of the PolSAR image, the nDSM image is considered suitable for three-class segmentation processing. The threshold segmentation process applied in this research is specifically described as follows.
(1) Determine the initial peaks in the statistical histogram of the nDSM with respect to the three classes in the previous PolSAR image classification.
(2) Search for the optimal threshold for the two-category classification between one peak and its adjacent peak using the Otsu threshold selection method [41]. If the elevation interval of the nDSM image $f(i, j)$ is $[0, L]$, we select initial thresholds $\{t_1, t_2\}$ to divide the pixels of $f(i, j)$ into three groups, $c_1$, $c_2$, and $c_3$.
$$\begin{cases} c_1:\ f(i,j) \in [0, t_1], & \text{class probability } w_1,\ \text{mean elevation } m_1,\ \text{variance } \sigma_1^2 \\ c_2:\ f(i,j) \in (t_1, t_2], & \text{class probability } w_2,\ \text{mean elevation } m_2,\ \text{variance } \sigma_2^2 \\ c_3:\ f(i,j) \in (t_2, L], & \text{class probability } w_3,\ \text{mean elevation } m_3,\ \text{variance } \sigma_3^2 \end{cases}$$
Thus, the average elevation of the nDSM image $f(i, j)$ is

$$m = \sum_{i=1}^{3} w_i m_i$$

The within-group variance is

$$\sigma_w^2 = \sum_{i=1}^{3} w_i \sigma_i^2$$

The between-group variance is

$$\sigma_B^2 = \sum_{i=1}^{3} w_i (m_i - m)^2$$
Clearly, the smaller $\sigma_w^2$ is, the more similar the pixels within each group are, while the greater $\sigma_B^2$ is, the more the groups differ. The ratio $\sigma_B^2 / \sigma_w^2$ should therefore be as large as possible, so the thresholds $\{t_1, t_2\}$ are varied until $\sigma_B^2 / \sigma_w^2$ reaches its maximum. Threshold segmentation was then implemented with the optimal thresholds obtained in step (2), dividing the nDSM image into three areas. Figure 8 shows the threshold segmentation result of the X-band interferometric elevation map. The InSAR elevation map classification result was obtained by assigning the segments to land-cover classes, with respect to the PolSAR image classification result.
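The two-threshold search of step (2) can be sketched as a brute-force maximization of σ_B²/σ_w² over candidate threshold pairs; this is an illustrative implementation (the candidate grid of 64 levels is an assumption, not from the paper).

```python
import numpy as np

# Exhaustively vary (t1, t2) over a grid of candidate levels and keep the
# pair that maximizes the between/within variance ratio sigma_B^2 / sigma_w^2.
def otsu_two_thresholds(ndsm, levels=64):
    vals = ndsm.ravel()
    edges = np.linspace(vals.min(), vals.max(), levels)
    best, best_t = -np.inf, (edges[1], edges[2])
    for a in range(1, levels - 1):
        for b in range(a + 1, levels - 1):
            t1, t2 = edges[a], edges[b]
            groups = [vals[vals <= t1],
                      vals[(vals > t1) & (vals <= t2)],
                      vals[vals > t2]]
            if any(g.size == 0 for g in groups):
                continue                         # every group must be non-empty
            w = np.array([g.size / vals.size for g in groups])
            m = np.array([g.mean() for g in groups])
            var_w = (w * np.array([g.var() for g in groups])).sum()
            var_b = (w * (m - vals.mean()) ** 2).sum()
            if var_w > 0 and var_b / var_w > best:
                best, best_t = var_b / var_w, (t1, t2)
    return best_t
```

On well-separated elevation clusters, the returned thresholds fall between the clusters, reproducing the three-way segmentation described above.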
Figure 8. The threshold segmentation result of the InSAR elevation map.

3.2. Dual-Band SAR Data Fusion Processing

In order to achieve accurate woodland extraction, the P-band and X-band classification results are fused, thereby combining the respective advantages of the polarimetric and interferometric information. In this study, the P-band PolSAR and X-band InSAR data are considered as two independent sources of information. The combination of the classification results (respectively, generated from the two-band SAR data) is performed by D-S theory fusion.

3.2.1. High-Level Fusion Using Dempster-Shafer Evidence Theory

During the realization of multi-source data fusion based on D-S evidence theory, the most difficult part is the construction of the mass function. To date, there is still a lack of systematic methods to quantitatively describe the mass function definition. In this paper, we utilize the two classification results to obtain the evidence for fusion. In other words, after the classification, we extract the corresponding characteristic parameters of each hypothesis through the statistics for all the clusters, and we then build a model based on the statistical parameters to obtain the mass function.
According to D-S evidence theory, Θ is the set of hypotheses about the pixel classes. Thus, we construct the “frame of discernment” as follows, which is based on the results of the classification described in the previous section. Note that the X-band InSAR image is classified into three classes, in order to fuse all the categories in the framework of D-S evidence theory. The bare land class is added to the X-band InSAR classification result, with respect to the bare land class imposed on the P-band PolSAR classification result.
Θ = { woodland, buildings, farmland, bare land }
After the single-band SAR data classification, we obtain the statistics to get the cluster center and variance of each class in the P- and X-band SAR images as initial inputs for the following fusion. We denote { A 1 , A 2 , A 3 , A 4 } and { B 1 , B 2 , B 3 , B 4 } as the two sets of four classes (i.e., woodland, buildings, farmland, bare land), respectively, from the P-band and the X-band data sources. The fusion algorithm deals with the set, which is of non-empty intersections between the classes from the different sources.
$$\{\, A_i \cap B_j \ \text{ such that } \ A_i \cap B_j \neq \Phi, \quad i \in [1, 4],\ j \in [1, 4] \,\}$$
Under the assumption of the data source (PolSAR and InSAR data) being characterized by a Gaussian distribution, we use a Gaussian distribution function to calculate the membership degree P ( x | A i ) , P ( x | B j ) , which is defined as follows.
$$P(x \mid A_i) = \frac{1}{\sqrt{2\pi\sigma_i^2}}\, e^{-\frac{(x - \mu_i)^2}{2\sigma_i^2}}, \qquad P(x \mid B_j) = \frac{1}{\sqrt{2\pi\sigma_j^2}}\, e^{-\frac{(x - \mu_j)^2}{2\sigma_j^2}}$$
From expression (7), $P(x \mid A_i)$ (respectively, $P(x \mid B_j)$) represents the conditional probability of pixel x belonging to cluster $A_i$ in the P-band PolSAR image (respectively, cluster $B_j$ in the X-band InSAR image). $\mu_i$ (respectively, $\mu_j$) indicates the cluster center of $A_i$ (i.e., the mean of class $A_i$), and $\sigma_i$ (respectively, $\sigma_j$) denotes the standard deviation of cluster $A_i$ (i.e., the square root of the class variance).
Then, for each pixel x, we define the non-empty sets of the mass functions based on the statistical parameters above, as expressed in (8). For $H = A_i \cap B_j$ such that $A_i \cap B_j \neq \Phi$:

$$m_{A_i}(H) = m_{A_i}(x) = z_a\, P(x \mid A_i), \qquad m_{B_j}(H) = m_{B_j}(x) = z_b\, P(x \mid B_j)$$
where $z_a$ and $z_b$ are normalization terms, introduced to ensure that the mass functions always lie in the interval [0, 1] and that $\sum m_{A_i} = 1$ (respectively, $\sum m_{B_j} = 1$). If we denote by $n_i$ (respectively, $n_j$) the number of non-empty intersections of cluster $A_i$ (respectively, $B_j$), with $i \in [1, 4]$ (respectively, $j \in [1, 4]$), then the normalization terms can be expressed as

$$z_a = 1 \Big/ \sum_{i=1}^{4} \left(2^{n_i} - 1\right) P(x \mid A_i), \qquad z_b = 1 \Big/ \sum_{j=1}^{4} \left(2^{n_j} - 1\right) P(x \mid B_j)$$
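The Gaussian memberships and their normalization into mass functions can be sketched as follows. The per-class means, standard deviations, and intersection counts n_i are hypothetical placeholders for the statistics learned from the single-band classifications, and the weight (2**n − 1) follows our reading of the normalization term above.

```python
import math

# Gaussian membership degree of Eq. (7)
def gaussian_membership(x, mu, sigma):
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

# Mass functions of Eqs. (8)-(9): memberships scaled by a normalization
# term built from the number of non-empty intersections per class.
def mass_functions(x, class_stats, n_intersections):
    """class_stats: list of (mu_i, sigma_i) per class; n_intersections: n_i per class."""
    p = [gaussian_membership(x, mu, s) for mu, s in class_stats]
    z = 1.0 / sum((2 ** n - 1) * pi for n, pi in zip(n_intersections, p))
    return [z * pi for pi in p]
```

With one intersection per class (n_i = 1), the weights reduce to 1 and the masses sum exactly to one.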

3.2.2. Combination and Decision

After calculating the mass functions for each class, basic probability assignment (BPA) for the total classes, i.e., m(H), can be obtained in terms of Dempster’s combination rule [24]. m(H) (also called the orthogonal sum) is represented as follows.
For $H = A_i \cap B_j$ such that $A_i \cap B_j \neq \Phi$:

$$m(H) = m_1 \oplus m_2 = \begin{cases} 0, & H = \Phi \\[4pt] \dfrac{\sum_{A_i \cap B_j = H} m_1(A_i)\, m_2(B_j)}{1 - K}, & H \neq \Phi \end{cases} \qquad \text{where } K = \sum_{A_i \cap B_j = \Phi} m_1(A_i)\, m_2(B_j)$$
where K is called the normalization factor, representing the extent of the conflict between the different evidences.
As an example, we take pixel x from the P-band image and pixel x′ from the X-band image, both at row one and column one, and combine the evidence with the orthogonal sum m(H). Table 2 shows the mass function values of the four classes (i.e., woodland, buildings, farmland, bare land) from the two sources, in terms of definition (8).
Table 2. Mass function values of the four classes.
Source         Woodland          Buildings         Farmland          Bare land
P-band data    m_A1(x) = 0.6     m_A2(x) = 0.0     m_A3(x) = 0.2     m_A4(x) = 0.1
X-band data    m_B1(x′) = 0.2    m_B2(x′) = 0.0    m_B3(x′) = 0.3    m_B4(x′) = 0.2
where $m_{A_1}(x)$ represents the probability of pixel x being classified as woodland in the P-band image (respectively, $m_{B_1}(x')$ for the X-band image).
Table 3 summarizes the BPA combination results from the two sources based on the orthogonal sum. The empty set indicates that there are no ambiguities between clusters discriminated by the different data sources. In addition, we denote by $C_i$, $i \in [1, 2]$, the complement of {woodland, buildings, farmland, bare land}, under the condition $m_{A_1}(x) + m_{A_2}(x) + m_{A_3}(x) + m_{A_4}(x) < 1$ (respectively, for the $m_{B_j}(x')$). For the pixel X positioned at the first row and first column of the fusion image, we use the maximum of the belief function Bel(A) as the decision criterion to determine its class:

$$X \in A_i \quad \text{if} \quad \mathrm{Bel}(A_i) = \max_k \mathrm{Bel}(A_k)$$
Table 3. Combination and decision results of four classes.
                      P-band
X-band                m_A1(x) = 0.6    m_A2(x) = 0     m_A3(x) = 0.2    m_A4(x) = 0.1     C_1 = 0.1
m_B1(x′) = 0.2        Woodland 0.12    Φ 0             Φ 0.04           Φ 0.02            Woodland 0.02
m_B2(x′) = 0          Φ 0              Buildings 0     Φ 0              Φ 0               Buildings 0
m_B3(x′) = 0.3        Φ 0.18           Φ 0             Farmland 0.06    Φ 0.03            Farmland 0.03
m_B4(x′) = 0.2        Φ 0.12           Φ 0             Φ 0.04           Bare land 0.02    Bare land 0.02
C_2 = 0.3             Woodland 0.18    Buildings 0     Farmland 0.06    Bare land 0.03    C 0.03

Σ m_A(x)·m_B(x):      Woodland 0.32    Buildings 0.00  Farmland 0.15    Bare land 0.07    C 0.03
1 − K = 1 − Σ_{A_i ∩ B_j = Φ} m_A(x)·m_B(x) = 0.57
m_1(x) ⊕ m_2(x):      0.32/0.57 = 0.56   0.00/0.57 = 0.00   0.15/0.57 = 0.26   0.07/0.57 = 0.12
Bel(X):               Bel(W) = 0.56      Bel(B) = 0.00      Bel(F) = 0.26      Bel(O) = 0.12
W = woodland, B = buildings, F = farmland, O = bare land.
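The combination for this example pixel can be reproduced with a short sketch of Dempster's rule. Here the complementary masses C1 and C2 are treated as mass assigned to the full frame of discernment Θ, a simplifying assumption consistent with the intersections shown in Table 3; the implementation is illustrative.

```python
from itertools import product

CLASSES = ("woodland", "buildings", "farmland", "bare land")
THETA = frozenset(CLASSES)   # full frame of discernment

def dempster_combine(m1, m2):
    """Combine two mass functions (dicts frozenset -> mass) by Dempster's rule."""
    combined = {}
    conflict = 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        w = wa * wb
        if not inter:
            conflict += w                     # mass lost to the empty set (K)
        else:
            combined[inter] = combined.get(inter, 0.0) + w
    # Normalize the surviving masses by 1 - K
    return {h: w / (1.0 - conflict) for h, w in combined.items()}, conflict

# Mass values from Table 2; C1 = 0.1 and C2 = 0.3 close each assignment to 1
m_p = {frozenset({"woodland"}): 0.6, frozenset({"buildings"}): 0.0,
       frozenset({"farmland"}): 0.2, frozenset({"bare land"}): 0.1, THETA: 0.1}
m_x = {frozenset({"woodland"}): 0.2, frozenset({"buildings"}): 0.0,
       frozenset({"farmland"}): 0.3, frozenset({"bare land"}): 0.2, THETA: 0.3}

m, K = dempster_combine(m_p, m_x)
```

The combined singleton masses match the bottom rows of Table 3 (woodland 0.32/0.57 ≈ 0.56, farmland 0.15/0.57 ≈ 0.26), so the pixel is assigned to woodland by the maximum-belief criterion.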

4. Results

For the comparison between the classification results of the different polarizations and frequencies, a classification of the HH polarization of the P-band data was obtained with a supervised maximum likelihood method. The different polarization and frequency classification results are shown in Figure 9a–c. In addition to this comparison, a K-means fusion result with the datasets (backscatter, interferometry, H-α Wishart classification) was also obtained to evaluate the D-S fusion result. The K-means and D-S fusion results with the PolSAR and InSAR images are shown in Figure 9d,e.
From the point of view of visualization, the advantages and disadvantages of the single-band SAR image classification results are apparent. The P-band classification result exhibits the complete class information, i.e., four different land-cover types. However, due to the effect of speckling, the result is not satisfactory, with many isolated pixels (see Figure 9a,c). On the other hand, in the X-band interferometric elevation map, the edges of the large areas of land cover are continuous and the texture is uniform, e.g., the woodland and farmland, but some classes are hardly present in the classification result, e.g., buildings, and bare land is excluded from the result (see Figure 9b). The K-means and D-S fusion results appear relatively complete (see Figure 9d,e), and combine the respective advantages of the polarimetric and interferometric information.
Figure 9. The classification results of the CASMSAR data. (a) The supervised classification result of the P-band PolSAR image. (b) The unsupervised classification result of the X-band InSAR elevation map. (c) The supervised classification result of the P-HH polarized image. (d) The K-means fusion classification result. (e) The D-S fusion classification result.
For the quantitative evaluation, random samples were selected from the 1:10,000 land-cover map as the ground truth. The sample selection procedure was as follows.
(1) The two-date Landsat TM data (2007 and 2011) were used to undertake post-classification change detection [42]. More specifically, since the spectral features are sensitive to the land-cover types, a simple classification method based on threshold segmentation was employed for the two TM images, and the difference was obtained by comparing the two classification results. We thereby detected some changed areas of newly planted woodland, as shown in Figure 3.
(2) We extracted the regions from the 1:10,000 land-cover map that did not contain land-cover errors according to step (1).
(3) We selected samples from the extracted regions using stratified random sampling. More specifically, the study area was divided into 32 columns and 26 rows, resulting in 832 cells, and a 50 × 50 m site was randomly sampled from each cell.
(4) A number of requirements were also met: the samples were selected from the cells in proportion to the area of each class in the land-cover map, and the sample pixels were located in the central zone of each class, in order to avoid pixels near the edges between classes.
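The stratified sampling of steps (3) and (4) can be sketched as follows. The grid dimensions (32 × 26) and site size (50 × 50 m) come from the text; the cell size in metres is a hypothetical placeholder, since the study-area extent is not restated here:

```python
import random

random.seed(0)

N_COLS, N_ROWS = 32, 26        # grid from the paper: 832 cells
CELL_W = CELL_H = 400.0        # hypothetical cell size in metres
SITE = 50.0                    # 50 x 50 m sample site

# One randomly placed 50 x 50 m site per grid cell (top-left corners).
sites = []
for r in range(N_ROWS):
    for c in range(N_COLS):
        x = c * CELL_W + random.uniform(0.0, CELL_W - SITE)
        y = r * CELL_H + random.uniform(0.0, CELL_H - SITE)
        sites.append((x, y))

print(len(sites))   # 832
```

Proportional allocation per class and the central-zone constraint of step (4) would be applied afterwards, when intersecting the sites with the land-cover polygons.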
The proportion of each class's area to the total area and the sample number of each class are listed in Table 4.
Table 4. The sample parameters.

Land-Cover Class    Proportion (%)    Sample Number
Woodland            31                258
Buildings           4                 34
Farmland            56                465
Bare land           9                 75
A confusion matrix and a Z-test were employed to evaluate the classification results. Five statistical measures were calculated for the validation: the global kappa coefficient (κ) [43], the overall accuracy (OA), the producer's accuracy (PA), the user's accuracy (UA) [44], and the Z-score of the Z-test [45]. Tables 5–9 show the accuracy evaluations of the individual classification and fusion results. It can be seen that the result with the P-HH polarized image has the lowest classification accuracy, due to the large amount of speckling, and some of the buildings and bare land are wrongly classified as other classes (see Table 5). The result with the P-band image has a higher classification accuracy, and all the classes are classified correctly (see Table 6). The result with the X-band InSAR image also shows a high classification accuracy for the woodland and farmland classes; however, the accuracy for the building class is rather low, due to the uneven distribution of the buildings in the X-band interferometric elevation map (see Table 7). The D-S fusion result has the highest classification accuracy, all the classes are distinguished, and the accuracies of all four classes are improved compared to the other results (Table 9). In the K-means result, the accuracies for the woodland and bare land classes are high, while the lower accuracies for the other classes arise because these classes are not sensitive to certain features (Table 8). In the Z-test, for a given significance level α = 5%, the critical value of Z is 1.96. The Z-test shows that the two fusion results and the P-band classification result are consistent with the land-cover product, since their Z-scores are all less than the critical value of 1.96. However, the P-HH result has a higher Z-score, so it is regarded as inconsistent with the land-cover product.
Note that calculating a Z-score for the X-band result is not meaningful, since its number of classes does not conform to the land-cover product.
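The OA, kappa, PA, and UA measures can be computed from a confusion matrix as sketched below (the kappa Z-test of [45] is omitted; the example matrix is illustrative, not data from the study):

```python
def accuracy_measures(cm):
    """OA, kappa, and per-class PA/UA from a square confusion matrix
    (rows = classified, columns = ground truth)."""
    n = sum(sum(row) for row in cm)
    diag = [cm[i][i] for i in range(len(cm))]
    rows = [sum(r) for r in cm]               # classified totals
    cols = [sum(c) for c in zip(*cm)]         # ground-truth totals
    oa = sum(diag) / n                        # observed agreement
    pe = sum(r * c for r, c in zip(rows, cols)) / n ** 2   # chance agreement
    kappa = (oa - pe) / (1 - pe)
    pa = [d / c for d, c in zip(diag, cols)]  # producer's accuracy
    ua = [d / r for d, r in zip(diag, rows)]  # user's accuracy
    return oa, kappa, pa, ua

# Illustrative 2-class matrix:
oa, kappa, pa, ua = accuracy_measures([[45, 5], [5, 45]])
print(round(oa, 2), round(kappa, 2))   # 0.9 0.8
```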
Table 5. The accuracy evaluation of the supervised classification result with the P-HH polarized image.

Land-Cover Class    Ground Truth (%)                                    UA (%)
                    Woodland    Buildings    Farmland    Bare land
Woodland            75.89       77.49        3.66        18.90          81.15
Buildings           0.00        6.55         0.01        0.00           98.00
Farmland            18.16       13.36        83.22       43.82          76.00
Bare land           5.96        2.61         13.11       37.28          34.05

OA = 72.43%, Kappa = 0.55, Z = 3.43
Table 6. The accuracy evaluation of the supervised classification result with the P-band PolSAR image.

Land-Cover Class    Ground Truth (%)                                    UA (%)
                    Woodland    Buildings    Farmland    Bare land
Woodland            84.07       8.29         1.98        6.57           94.26
Buildings           4.60        89.32        0.27        0.13           61.11
Farmland            1.45        0.00         90.28       1.87           98.02
Bare land           9.88        2.39         7.48        91.42          61.75

OA = 87.95%, Kappa = 0.82, Z = 1.90
Table 7. The accuracy evaluation of the unsupervised classification result with the X-band InSAR image.

Land-Cover Class    Ground Truth (%)                        UA (%)
                    Woodland    Buildings    Farmland
Woodland            92.41       26.39        4.36           92.12
Buildings           0.69        5.14         1.25           17.11
Farmland            6.83        68.34        94.36          90.11

OA = 90.08%, Kappa = 0.80
Table 8. The accuracy evaluation of the fusion result based on the K-means classifier.

Land-Cover Class    Ground Truth (%)                                    UA (%)
                    Woodland    Buildings    Farmland    Bare land
Woodland            95.06       9.23         1.97        5.17           95.26
Buildings           3.74        88.38        5.64        0.00           43.17
Farmland            0.17        0.00         82.98       0.72           99.57
Bare land           1.03        2.39         9.41        94.11          72.15

OA = 89.84%, Kappa = 0.84, Z = 1.88
Table 9. The accuracy evaluation of the fusion result based on D-S evidence theory.

Land-Cover Class    Ground Truth (%)                                    UA (%)
                    Woodland    Buildings    Farmland    Bare land
Woodland            96.14       9.89         2.44        0.06           96.01
Buildings           0.02        90.05        0.08        2.88           88.92
Farmland            2.19        0.00         94.82       5.42           96.85
Bare land           1.61        0.00         2.41        91.62          86.11

OA = 94.77%, Kappa = 0.92, Z = 1.80
We also found that each modality had different sensitivities to the different sets of classes, which is related to the selected features. To evaluate the choice of classes, a distinguishing capability analysis for each modality was implemented. As shown in Table 10, the P-band PolSAR supervised classification, the X-band InSAR unsupervised classification, and the D-S evidence theory fusion classification with the P-band PolSAR and X-band InSAR images were assigned as modalities a, b, and c, respectively. The distinguishing capability for a class depends on its classification accuracy (i.e., PA). Three levels, low (PA < 60%), medium (60% ≤ PA < 90%), and high (PA ≥ 90%), were employed as indicators to evaluate the distinguishing capability of the different modalities. For modality a, the polarimetric parameters H, α, and A and the scattering power parameter span were selected as the feature parameters, following the polarimetric parameter analysis described in Section 3.1.1. The classification accuracies for the four classes (i.e., woodland, buildings, farmland, and bare land) were all more than 80%, which demonstrates that modality a has great potential to distinguish these classes. As for modality b, elevation as the only feature has the capability to distinguish three classes (i.e., woodland, buildings, and farmland). Woodland and farmland could be well distinguished (classification accuracies of more than 90%), but the distinguishing capability for buildings was at a low level. Modality c combines all the features of modalities a and b, and its distinguishing capability for the four classes can be considered high, since the classification accuracy of each class was more than 90%.
From the point of view of the classes, woodland consists of random highly anisotropic scattering elements characterized by single scattering from a cloud of anisotropic needle-like particles, or multiple scattering from a cloud of low-loss symmetric particles. The range of the features H and α in modality a can thus be determined by random highly anisotropic scattering elements, i.e., H > 0.9, α = 45°, and A < 0.5. Due to the short wavelength of the X-band data, the height of the top of the canopy can be obtained from the nDSM, i.e., elevation > 10 m. H, α, A, and the elevation are therefore good features for distinguishing woodland from the other classes. Buildings consist of isolated dielectric and metallic dihedral elements characterized by low-entropy double- (or even-) bounce scattering. The range of the features H and α in modality a can be determined by the dihedral elements, i.e., H < 0.5, α > 47.5°, and A > 0.5. Elevation is not a good feature for distinguishing buildings from the other classes, since the radar echoes are reflected from multiple directions, e.g., the wall surfaces and roofs of the buildings; H, α, and A are the good features for this class. Farmland is characterized by mixed double-bounce and volume scattering, which leads to confusion between farmland/forest and farmland/buildings; however, a span of less than −30 dB can distinguish this class from the others, and a reasonable explanation is that the dielectric constant of the farmland is slightly lower than that of the other classes. Elevation can also distinguish farmland, because its elevation is much lower than that of the other classes, i.e., elevation < 5 m. Span and elevation are thus good features for distinguishing farmland. Bare land is characterized by Bragg surface scattering, so the range of the features H and α in modality a can be determined by a Bragg surface model, i.e., H < 0.5 and α < 40°.
The elevation of bare land is similar to that of farmland and buildings, so elevation is not selected as a distinguishing feature for this class; H and α are good features for distinguishing bare land from the other classes.
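The feature ranges quoted above can be collected into simple decision rules. This is only a sketch of the quoted thresholds, not the Wishart classifier or elevation segmentation actually used in the paper; the rule ordering, the fallback returns, and the handling of the 5–10 m elevation band are simplifications introduced here:

```python
def polsar_rules(H, alpha, A, span_db):
    """Rule-of-thumb polarimetric ranges quoted in the text.
    alpha is in degrees, span_db in dB."""
    if H > 0.9 and A < 0.5:
        return "woodland"      # random highly anisotropic scattering
    if H < 0.5 and alpha > 47.5 and A > 0.5:
        return "buildings"     # low-entropy double-bounce scattering
    if span_db < -30:
        return "farmland"      # low span separates farmland
    if H < 0.5 and alpha < 40:
        return "bare land"     # Bragg surface scattering
    return "unclassified"

def insar_rules(ndsm_height):
    """Elevation (nDSM) thresholds quoted in the text, in metres."""
    if ndsm_height > 10:
        return "woodland"      # canopy-top height
    if ndsm_height < 5:
        return "farmland"
    return "buildings"         # 5-10 m band: a simplification
```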
For the woodland extraction application, PA and UA values for the woodland class were obtained with the supervised and unsupervised classification schemes on the PolSAR and InSAR images: PAs of 84.07% and 92.41% and UAs of 94.26% and 92.12%, respectively. An improved accuracy was obtained with the D-S fusion result, with a PA of 96.14% and a UA of 96.01%. The fusion result can thus support high-accuracy woodland extraction mapping. The woodland map of the study area was obtained from the fusion image and is presented in Figure 10.
Table 10. The distinguishing capability analysis for each modality.

Land-Cover Class    Modality    Selected Feature        Classification Accuracy (%)    Distinguishing Capability
Woodland            a           H, α, A                 84.07                          Medium
                    b           elevation               92.41                          High
                    c           H, α, A, elevation      96.14                          High
Buildings           a           H, α, A                 89.32                          Medium
                    b           elevation               5.14                           Low
                    c           H, α, A, elevation      90.05                          High
Farmland            a           span                    90.28                          High
                    b           elevation               94.36                          High
                    c           span, elevation         94.82                          High
Bare land           a           H, α                    91.42                          High
                    b           ×                       ×                              ×
                    c           H, α                    91.62                          High
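The three-level indicator used in Table 10 is a direct mapping from PA to a level, which can be written as:

```python
def distinguishing_capability(pa):
    """Three-level indicator from the text:
    low (PA < 60%), medium (60% <= PA < 90%), high (PA >= 90%)."""
    if pa >= 90:
        return "high"
    if pa >= 60:
        return "medium"
    return "low"

print(distinguishing_capability(84.07))   # medium (woodland, modality a)
print(distinguishing_capability(5.14))    # low    (buildings, modality b)
print(distinguishing_capability(96.14))   # high   (woodland, modality c)
```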
Figure 10. The extracted woodland map of the study area.

5. Discussion

From the point of view of both visualization and quantitative evaluation, the D-S fusion result with the PolSAR and InSAR images obtained the highest classification accuracy, and all the classes were classified correctly. For the woodland extraction application, the highest accuracy was again obtained by the D-S fusion result. To analyze the woodland extraction accuracy thoroughly, we extracted three partial regions (A, B, and C in Figure 9e) from the same areas of the classification and fusion results, as shown in Figure 11a–c. The woodland extraction accuracies of the three regions are given in Table 11. As can be seen from the P-band classification result for region A in Figure 11 and Table 11, there are many isolated pixels due to the effect of SAR speckling, and some of the pixels of the woodland class have been mistakenly classified as buildings or bare land. On the other hand, the edge of the woodland area is continuous and its texture is uniform in the X-band classification result, so the best woodland extraction accuracy is achieved there. The K-means and D-S fusion results also achieve an improved woodland extraction accuracy in region A, as they incorporate the advantages of the X-band classification. The Z-test shows that the fusion and X-band results of the woodland extraction in region A are consistent with the land-cover product, since the Z values of these three results are all less than the critical value of 1.96; for the P-band woodland extraction result, the contrary conclusion is drawn. In region B, the P-band classification result can distinguish woodland, except for some isolated pixels, as in region A. However, the X-band classification result has a lower UA than the P-band result, as some of the pixels belonging to the farmland class have been mistakenly classified as woodland.
This can be explained by the fact that the X-band InSAR segmentation error leads to the X-band classification error in this region. The two fusion results partly make up for the deficiencies of the individual P- and X-band classification results, and achieve the best woodland extraction accuracy. Accordingly, by the Z-test in region B, the two fusion results of the woodland extraction are considered to be consistent with the land-cover product. In region C, most of the woodland is correctly distinguished in the P-band classification result, although some of the pixels belonging to the buildings class are mistakenly classified as woodland. In the X-band classification result, almost all of the pixels belonging to the buildings class are mistakenly classified as woodland, which directly results in the lowest UA of all the methods.
Figure 11. Comparison of the land cover, the classification results of the P- and X-band images, and the two fusion results. (a) Region A. (b) Region B. (c) Region C.
Overall, the D-S fusion takes full advantage of the P- and X-band classifications and improves the classification accuracy of the final result. In addition, the consistency between the D-S fusion and K-means fusion results corroborates the D-S fusion result.
Table 11. The accuracy evaluation of the woodland extraction results with the different methods.

Woodland Extraction Method            Region    PA (%)    UA (%)    Z
P-band supervised classification      A         85.78     93.69     2.80
                                      B         84.54     94.98     2.51
                                      C         83.68     75.89     2.92
X-band unsupervised classification    A         94.83     95.62     1.87
                                      B         93.51     81.09     2.65
                                      C         86.46     57.31     3.35
K-means fusion                        A         93.77     95.50     1.88
                                      B         88.24     90.39     1.93
                                      C         86.16     70.74     3.02
D-S fusion                            A         93.52     94.00     1.95
                                      B         89.93     96.23     1.89
                                      C         95.68     61.52     3.00

6. Conclusions

This paper has explored the woodland extraction capability of Dempster-Shafer evidence theory fusion with P- and X-band polarimetric SAR (PolSAR) and interferometric SAR (InSAR) data. The two classification methods address different sets of classes: the P-band supervised classification is capable of four-category classification, while the X-band unsupervised classification is only sensitive to three of the four categories. The fusion method utilizes the respective advantages of the polarimetric and interferometric features of the PolSAR and InSAR images. It can reduce PolSAR image classification errors caused by SAR speckling, as well as InSAR classification errors caused by inaccurate segmentation of the InSAR elevation map. The classification accuracy of the fusion result is improved when compared to the individual PolSAR and InSAR classification results, and the fusion result is consistent with both the K-means fusion result and the land-cover product. Dempster-Shafer evidence theory fusion has thus been shown to be suitable for large areas of land cover with a uniform texture and height, e.g., woodland. A high woodland extraction accuracy was achieved by the decision-level fusion of the PolSAR and InSAR classification results: the achieved PA of 96% exceeds the 85% attribute accuracy required of all classes for land-cover mapping in China [46]. The woodland map obtained in this article could therefore be applied in the investigation and monitoring of forest resources. As for the extraction of other classes, such as buildings and grasslands, more features would need to be introduced.
For instance, texture is a notable characteristic of buildings, which differ greatly from the other land-cover classes in this respect [47], and interferometric coherence is an important indicator for distinguishing different vegetation types [48]. Our follow-up research will mainly focus on SAR data fusion using more features that are sensitive to the different land-cover types in multi-frequency PolInSAR datasets.

Acknowledgments

This work was supported in part by the National Administration of Surveying, Mapping and Geoinformation (B1402). This work was also supported by the National Natural Science Foundation of China (Grant No. 41401530) and the Special Fund for Surveying and Mapping and Geoinformation Research in the Public Interest (201412010).

Author Contributions

The paper was written by Lijun Lu and Wenjun Xie, and data processing and statistical analyses were carried out by Lijun Lu, Wenjun Xie and Qiwei Li. The data acquisition work was carried out by Jixian Zhang and Guoman Huang. The field work was conducted by Zheng Zhao.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. The Central People’s Government of People’s Republic of China. Available online: http://www.gov.cn/flfg/2005-09/27/content_70635.htm (accessed on 22 January 2014).
  2. Gibbs, H.K.; Brown, S.; Niles, J.O.; Foley, J.A. Monitoring and estimating tropical forest carbon stocks: Making REDD a reality. Environ. Res. Lett. 2007, 2, 1–13. [Google Scholar]
  3. Saatchi, S.S.; Rignot, E. Classification of boreal forest cover types using SAR images. Remote Sens. Environ. 1997, 60, 270–281. [Google Scholar] [CrossRef]
  4. Maghsoudi, Y.; Collins, M.J.; Leckie, D.G. Radarsat-2 Polarimetric SAR Data for boreal forest classification using SVM and a wrapper feature selector. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2013, 6, 1531–1537. [Google Scholar] [CrossRef]
  5. Xiao, W.S.; Wang, X.Q.; Ling, F.L. The application of ALOS PALSAR data on mangrove forest extraction. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2010, 25, 91–96. [Google Scholar]
  6. Wang, H.P.; Ouchi, K.; Jin, Y.Q. Extraction of typhoon-damaged forests from multi-temporal high-resolution polarimetric SAR images. In Proceedings of the IGARSS 2010 Symposium, Honolulu, HI, USA, 25–30 July 2010.
  7. Quegan, S.; Le Toan, T.; Yu, J.J.; Ribbes, F.; Floury, N. Multitemporal ERS SAR analysis applied to forest mapping. IEEE Trans. Geosci. Remote Sens. 2000, 38, 741–753. [Google Scholar] [CrossRef]
  8. Rignot, E.J.M.; Williams, C.L.; Way, J.; Viereck, L.A. Mapping of forest types in Alaskan boreal forests using SAR imagery. IEEE Trans. Geosci. Remote Sens. 1994, 32, 1051–1059. [Google Scholar] [CrossRef]
  9. Knowlton, D.J.; Hoffer, R.M. Radar imagery for forest cover mapping. In Proceedings of the Machine Processing of Remotely Sensed Data Symposium, West Lafayette, IN, USA, 23–26 June 1981.
  10. Drieman, J.A.; Ahern, F.J.; Corns, I.G.W. Visual interpretation results of multipolarization C-SAR imagery of Alberta boreal forest. In Proceedings of IGARSS'89, the 12th Canadian Symposium on Remote Sensing, Vancouver, BC, Canada, 10–14 July 1989.
  11. Liesenberg, V.; Gloaguen, R. Evaluating SAR polarization modes at L-band for forest classification purposes in Eastern Amazon, Brazil. Int. J. Appl. Earth Obs. Geoinf. 2013, 21, 122–135. [Google Scholar] [CrossRef]
  12. Dobson, M.C.; Ulaby, F.T.; Letoan, T.; Beaudoin, A.; Kasischke, E.S.; Christensen, N. Dependence of radar backscatter on coniferous forest biomass. IEEE Trans. Geosci. Remote Sens. 1992, 30, 412–415. [Google Scholar] [CrossRef]
  13. Hirosawa, H.; Matsuzaka, Y.; Kobayashi, O. Measurement of microwave backscatter from a cypress with and without leaves. IEEE Trans. Geosci. Remote Sens. 1989, 27, 698–701. [Google Scholar] [CrossRef]
  14. Van der Sanden, J.J.; Hoekman, D.H. Potential of airborne radar to support the assessment of land cover in a tropical rain forest environment. Remote Sens. Environ. 1999, 68, 26–40. [Google Scholar] [CrossRef]
  15. Ulander, L.; Dammert, P.B.G.; Hagberg, J.O. Measuring tree height with ERS-1 SAR interferometry. In Proceedings of the IGARSS’95 Symposium, Florence, Italy, 10–14 July 1995.
  16. Takeuchi, S.; Oguro, Y. A comparative study of coherence patterns in C-band and L-band interferometric SAR from tropical rain forest areas. Adv. Space Res. 2003, 32, 2305–2310. [Google Scholar] [CrossRef]
  17. Wegmuller, U.; Werner, C.L. SAR interferometric signatures of forest. IEEE Trans. Geosci. Remote Sens. 1995, 33, 1153–1161. [Google Scholar] [CrossRef]
  18. Wegmiiller, U.; Werner, C.L. Retrieval of vegetation parameters with SAR interferometry. IEEE Trans. Geosci. Remote Sens. 1997, 35, 18–24. [Google Scholar] [CrossRef]
  19. Simard, M.; Saatchi, S.S.; de Grandi, G. The use of decision tree and multiscale texture for classification of JERS-1 SAR data over tropical forest. IEEE Trans. Geosci. Remote Sens. 2000, 38, 2310–2321. [Google Scholar] [CrossRef]
  20. Chen, C.T.; Chen, K.S.; Lee, J.S. The use of fully polarimetric information for the fuzzy neural classification of SAR images. IEEE Trans. Geosci. Remote Sens. 2003, 41, 2089–2100. [Google Scholar] [CrossRef]
  21. Waske, B.; Linden, S. Classifying multilevel imagery from SAR and optical sensors by decision fusion. IEEE Trans. Geosci. Remote Sens. 2008, 46, 1457–1466. [Google Scholar] [CrossRef]
  22. Seetharaman, K.; Palanivel, N. Texture characterization, representation, description, and classification based on full range Gaussian Markov random field model with Bayesian approach. Int. J. Image Data Fusion 2013, 4, 342–362. [Google Scholar] [CrossRef]
  23. Berger, O.B. Statistical Decision Theory and Bayesian Analysis. Springer Series in Statistics; Springer-Verlag: New York, NY, USA, 1985. [Google Scholar]
  24. Shafer, G. A Mathematical Theory of Evidence; Princeton University Press: Princeton, NJ, USA, 1976. [Google Scholar]
  25. Dubois, D.; Prade, H.; Yager, R. Merging fuzzy information. In Handbook of Fuzzy Sets Series, Approximate Reasoning and Information Systems; Springer: Boston, MA, USA, 1999; pp. 335–401. [Google Scholar]
  26. Lein, J.K. Applying evidential reasoning methods to agricultural land cover classification. Int. J. Remote Sens. 2003, 24, 4161–4180. [Google Scholar] [CrossRef]
  27. Mascle, S.L.H.; Bloch, I.; Madjar, D.V. Application of Dempster-Shafer evidence theory to unsupervised classification in multisource remote sensing. IEEE Trans. Geosci. Remote Sens. 1997, 35, 1018–1031. [Google Scholar] [CrossRef]
  28. Cayuela, L.; Golicher, J.D.; Salas Rey, J.; Rey Benayas, J.M. Classification of a complex landscape using Dempster-Shafer theory of evidence. Int. J. Remote Sens. 2006, 27, 1951–1971. [Google Scholar] [CrossRef]
  29. Milisavljević, N.; Bloch, I.; Alberga, V.; Satalino, G. Three strategies for fusion of land cover classification results of polarimetric SAR data. In Sensor and Data Fusion; I-Tech: Vienna, Austria, 2009; pp. 277–298. [Google Scholar]
  30. Yang, M.S.; Moon, W.M. Decision level fusion of multi-frequency polarimetric SAR and optical data with Dempster-Shafer evidence theory. In Proceedings of the IGARSS'03 Symposium, Toulouse, France, 21–25 July 2003.
  31. Rodríguez-Cuenca, B.; Alonso, M.C. Semi-automatic detection of swimming pools from aerial high-resolution images and LiDAR data. Remote Sens. 2014, 6, 2628–2646. [Google Scholar] [CrossRef]
  32. Zhang, J.X.; Zhao, Z.; Huang, G.M.; Lu, Z. CASMSAR: An Integrated Airborne SAR Mapping system. Photogramm. Eng. Remote Sens. 2012, 78, 1110–1114. [Google Scholar]
  33. Lee, J.S.; Grunes, M.R.; Grandi, G. Polarimetric SAR speckle filtering and its implication for classification. IEEE Trans. Geosci. Remote Sens. 1999, 37, 2363–2373. [Google Scholar] [CrossRef]
  34. Lee, J.S.; Grunes, M.R.; Kwok, R. Classification of multi-look polarimetric SAR imagery based on the complex Wishart distribution. Int. J. Remote Sens. 1994, 15, 2299–2311. [Google Scholar] [CrossRef]
  35. Chan, C.Y. Studies on the Power Scattering Matrix of Radar Targets. Master’s Thesis, University of Illinois, Chicago, IL, USA, 1981. [Google Scholar]
  36. Cloude, S.R.; Pottier, E. An entropy based classification scheme for land applications of polarimetric SAR. IEEE Trans. Geosci. Remote Sens. 1997, 35, 68–78. [Google Scholar] [CrossRef]
  37. Hajnsek, I.; Pottier, E.; Cloude, S.R. Inversion of surface parameters from polarimetric SAR. IEEE Trans. Geosci. Remote Sens. 2003, 41, 727–744. [Google Scholar] [CrossRef]
  38. Rowland, C.S.; Balzter, H. Data fusion for reconstruction of a DTM, under a woodland canopy, from airborne L-band InSAR. IEEE Trans. Geosci. Remote Sens. 2007, 45, 1154–1163. [Google Scholar] [CrossRef] [Green Version]
  39. Dash, J.; Steinle, E.; Singh, R.P.; Bähr, H.P. Automatic building extraction from laser scanning data: An input tool for disaster management. Adv. Space Res. 2004, 33, 317–322. [Google Scholar] [CrossRef]
  40. Zhang, J.; Duan, M.; Yan, Q.; Lin, X. Automatic vehicle extraction from airborne LiDAR data using an object-based point cloud analysis method. Remote Sens. 2014, 6, 8405–8423. [Google Scholar] [CrossRef]
  41. Otsu, N. A Threshold Selection Method from Gray-Level Histograms. Available online: http://159.226.251.229/videoplayer/otsu1979.pdf?ich_u_r_i=e503462fdc95647446b6e497574284e6&ich_s_t_a_r_t=0&ich_e_n_d=0&ich_k_e_y=1545038930750163512478&ich_t_y_p_e=1&ich_d_i_s_k_i_d=4&ich_u_n_i_t=1 (accessed on 17 September 2014).
  42. Jensen, J.R. Digital change detection. In Introductory Digital Image Processing: A Remote Sensing Perspective; Prentice-Hall: New Jersey, NJ, USA, 2004; pp. 467–494. [Google Scholar]
  43. Rosenfield, G.H.; Fitzpatrick-Lins, K. A coefficient of agreement as a measure of thematic classification accuracy. Photogramm. Eng. Remote Sens. 1986, 52, 223–227. [Google Scholar]
  44. Congalton, R.G.; Green, K. Assessing the Accuracy of Remotely Sensed Data: Principles and Practices; Lewis: Boca Raton, FL, USA, 1999. [Google Scholar]
  45. Loveland, J.L. Mathematical Justification of Introductory Hypothesis Tests and Development of Reference Materials. Available online: http://digitalcommons.usu.edu/gradreports/14/ (accessed on 17 September 2014).
  46. Huang, Z.D. Western land cover mapping investigation. Geomatics Technol. Equip. 2009, 11, 38–39. [Google Scholar]
  47. Zhao, L.L.; Yang, J.; Li, P.X.; Zhang, L.; Shi, L.; Lang, F. Damage assessment in urban areas using post-earthquake airborne PolSAR imagery. Int. J. Remote Sens. 2013, 34, 8952–8966. [Google Scholar] [CrossRef]
  48. Santoro, M.; Wegmüller, U.; Askne, J.I.H. Signatures of ERS-ENVISAT interferometric SAR coherence and phase of short vegetation: An analysis in the case of Maize fields. IEEE Trans. Geosci. Remote Sens. 2010, 48, 1702–1713. [Google Scholar] [CrossRef]
