Open Access Article

Land Use/Land Cover Mapping Using Multitemporal Sentinel-2 Imagery and Four Classification Methods—A Case Study from Dak Nong, Vietnam

1 Department of Forest Resources and Environment Management (FREM), Faculty of Agriculture and Forestry, Tay Nguyen University, Le Duan Str. 567, Buon Ma Thuot City 63000, Daklak Province, Vietnam
2 Department of Electronics and Nanoengineering, School of Electrical Engineering, Aalto University, P.O. Box 11000, 00076 Aalto, Finland
3 Department of Forest Sciences, University of Helsinki, Latokartanonkaari 7, P.O. Box 27, FI-00014 Helsinki, Finland
4 Raspberry Ridge Analytics, 15111 Elmcrest Avenue North, Hugo, MN 55038, USA
* Author to whom correspondence should be addressed.
Remote Sens. 2020, 12(9), 1367; https://doi.org/10.3390/rs12091367
Received: 23 March 2020 / Revised: 15 April 2020 / Accepted: 24 April 2020 / Published: 26 April 2020
(This article belongs to the Special Issue Advances in Remote Sensing for Global Forest Monitoring)

Abstract

Information on land use and land cover (LULC) including forest cover is important for the development of strategies for land planning and management. Satellite remotely sensed data of varying resolutions have been an unmatched source of such information that can be used to produce estimates with a greater degree of confidence than traditional inventory estimates. However, use of these data has always been a challenge in tropical regions owing to the complexity of the biophysical environment, clouds, haze, and atmospheric moisture content, all of which impede accurate LULC classification. We tested a parametric classifier (logistic regression) and three non-parametric machine learning classifiers (improved k-nearest neighbors, random forests, and support vector machine) for classification of multi-temporal Sentinel-2 satellite imagery into LULC categories in Dak Nong province, Vietnam. A total of 446 images, 235 from the year 2017 and 211 from the year 2018, were pre-processed to gain high quality images for mapping LULC in the 6516 km2 study area. The Sentinel-2 images were tested and classified separately for four temporal periods: (i) dry season, (ii) rainy season, (iii) the entirety of the year 2017, and (iv) the combination of dry and rainy seasons. Eleven different LULC classes were discriminated of which five were forest classes. For each combination of temporal image set and classifier, a confusion matrix was constructed using independent reference data and pixel classifications, and the area on the ground of each class was estimated. Across temporal periods and classifiers, overall accuracy ranged from 63.9% to 80.3%, and the Kappa coefficient ranged from 0.611 to 0.813. Area estimates for individual classes ranged from 70 km2 (1% of the study area) to 2200 km2 (34% of the study area) with greater uncertainties for smaller classes.
Keywords: classification; Sentinel 2; land use land cover; improved k-NN; logistic regression; random forest; support vector machine

1. Introduction

1.1. Motivation

Most Vietnamese forests are classified as tropical with natural forest accounting for more than 70% of the total forest area [1]. Dak Nong province has the most abundant natural forest resources in Vietnam. The great diversity of this resource is primarily owing to a wide variety of environmental and climatic factors, most of which are governed by latitude and topography [2]. However, Dak Nong’s natural forests are being lost at an alarming rate owing to factors that include expanding agriculture, conversion to commercial and plantation forest types, and increasing human population. For many years, the Highland Plateau, which includes Dak Nong, has been a major “hot spot” for conversion of forest to agriculture in Vietnam. During the 1990s and early 2000s, forest was lost at an average rate of 15,000 ha per year [3], with forest cover declining from 75% in 1985 to 60% in 2009. During this time, the annual rate of deforestation in the Highland Plateau was the greatest of all regions, accounting for 46.3% of the entire national forest area lost.
The Highland Plateau is characterized by a complex topography with mountains, highlands, valleys, deltas, and diversified soil types. Approximately 1.3 million ha are fertile soils, rich in organic matter and nutrients, that facilitate development of high value industrial perennial crops such as coffee, rubber, pepper, cashew, and fruit trees. Additionally, the distinct rainy and dry seasons in the south of Vietnam cause differences in the rates of plant growth. Finally, climatic differences from north to south in Vietnam cause vegetation to vary in physiognomy and lead to morphological differences among land cover types, particularly between semi-evergreen and deciduous Dipterocarp forests.
Current, accurate, and detailed land cover information that reflects these unique topographic and climatic conditions, particularly for natural forest types, is crucial for land managers, decision makers, and policy makers tasked with developing forest management strategies and policies [4,5,6]. Forest resource decision-making is characterized by a large degree of uncertainty regarding the outcomes of alternative choices. The result is a wide variety of opinions regarding the different options that impedes agreement on a clear way forward. Although there is usually agreement on general objectives such as sustainable forest use, biodiversity conservation, and the alleviation of rural poverty, conflicts among stakeholders over the best course of action for achieving these objectives almost always arise. New issues or new actors may appear and influence discussions, external events may unexpectedly require the revision of agreed policy proposals, and deadlocks can exist for long periods, all continuing until pressing circumstances lead to settlements and decisions [7].

1.2. Remotely Sensed Data

Remote sensing offers a unique environmental capability for monitoring extensive geographical areas in a cost-efficient manner, while simultaneously producing information related to the Earth’s land, atmosphere, and oceans [8]. Land cover mapping represents one of the most common uses of remotely sensed data [9,10,11], with satellite imagery serving as one of the most important data sources [11].
As noted previously, Dak Nong presents unique challenges for the construction of accurate remote sensing-based land use land cover (LULC) maps [12]. The variation in vegetation owing to the rainy/dry seasonal variation affects the spectral reflectance properties of vegetation. For example, deciduous dipterocarp forests have spectral properties in the dry season that are similar to those of other cover types such as industrial coffee and rubber crops, whereas the respective spectral properties are quite different in the rainy season. Only a few studies have accommodated this kind of seasonal variation when constructing satellite image-based land cover classifications. Sothe et al. (2017) [13] combined multi-spectral fall and spring season images when mapping land cover with Landsat-8 data and found that the inclusion of the additional band data considerably improved classifications when compared with the use of fall spectral bands alone. For both classifiers used by Sothe et al. (2017) [13], there were meaningful increases in classification accuracy, by 4.8% and 2.9% for the random forests and support vector machine classifiers, respectively, when the “spring” spectral bands were added.
Dak Nong’s seasonal growth variation, varying vegetation spectral signatures, and varying topography suggest that Sentinel-2 satellite spectral data, with its fine spatial resolution (10–60 m), fine temporal resolution (five days), and fine spectral resolution (13 spectral bands), may be particularly well-suited for land cover classification purposes in the province. Although data from the Sentinel-2 sensor have been investigated for a variety of vegetation monitoring [14,15], terrestrial monitoring [16], and forest classification [16] applications, only a few studies have used Sentinel-2 for land cover mapping [17,18,19]. Therefore, additional studies that evaluate the utility of this imagery for land cover classification for regions with extremely diverse conditions such as those in Dak Nong are well-justified [20].

1.3. Classification Techniques

Factors that affect classification accuracy include sensor type, sources of training and accuracy assessment data, the number of classes, and the classification method [21,22,23]. Of these factors, the selection of a suitable algorithm that achieves acceptable classification accuracy with minimal processing time can be crucial [24]. Many methods have been proposed for constructing satellite image-based land cover maps [25,26], including both unsupervised and supervised methods and both parametric and non-parametric methods. Although unsupervised algorithms such as IsoData and K-Means clustering have been widely used for many years, general purpose clustering algorithms are cumbersome and difficult to develop [27]. Parametric supervised algorithms such as linear discriminant analysis [28,29,30,31] and multinomial logistic regression (MLR) have also been broadly used [32,33] and are often considered standards for comparison purposes [29,30,34]. In the last decade, non-parametric methods including support vector machine (SVM) [35,36,37], k-nearest neighbors (k-NN) [38,39], and random forests (RF) [40,41,42] have gained attention for remote sensing-based land cover classification. However, both SVM and RF require the selection of values for multiple parameters that affect their efficacy, and both are computationally intensive [6,35]. For k-NN, Naidoo et al. (2012) [43] reported difficulty in selecting the optimal value of k and that the genetic algorithms recommended for optimization can be computationally intensive [44,45]. Finally, object-based classification has been shown to be an effective method for classifying fine resolution imagery [46,47]. Object-based methods have been used with both fuzzy sets [48,49] and neural networks [48,50] to map land cover using satellite imagery. 
Although object-based classification methods have been shown to increase accuracy for some land cover mapping applications, fine spatial resolution remotely sensed imagery remains the most frequently used data source for these applications [51].
Because of the unique features of each study and study area including definitions, sample sizes, and numbers and characteristics of the classes, comparisons of methods with respect to accuracy among studies are difficult. Even so, little effort has been devoted to comparing methods with respect to accuracy for diverse tropical forest regions such as Dak Nong. Meyfroidt et al. (2013) [52] used MLR with Landsat data to assess classes of forest change and reported land cover classification accuracies of 0.64 to 0.69. Use of RF for land cover classification has been reported for multiple studies in Vietnam. Bourgoin et al. (in press) [53] used RF with both Landsat and Sentinel-2 data for multiple land cover and land cover change classes and reported an overall accuracy of 0.81. Nguyen et al. (2018) [6] used RF and Landsat data for 10 classes including multiple forest classes in Vietnam; overall accuracy was approximately 0.90. Ha et al. (2018) [54] used RF and Landsat data for seven land cover classes including forest land and reported overall accuracies greater than 0.90. Finally, Phan and Kappas (2018) [20] reported that SVM was more accurate than RF for classifying six land cover types including one forest class in the North of Vietnam (Red River Delta) using Sentinel-2 data. In summary, although only a few studies using only a few methods have addressed the classification of forest land in Vietnam, the reported accuracies are relatively high. Thus, there is merit in a more comprehensive evaluation of classification methods, particularly for diverse tropical regions such as Dak Nong province, Vietnam, with their distinct seasonal effects.

1.4. Objectives

The overall objective was to evaluate the utility of multi-seasonal Sentinel-2 spectral data for land cover classification and mapping in Dak Nong province, Vietnam. A subordinate objective was to compare the parametric MLR and non-parametric ik-NN, SVM, and RF classification methods with respect to both overall and class-level accuracies and with respect to whether the methods exploited the beneficial effects of the multi-seasonal Sentinel-2 data. Google Earth Engine was used for collecting and pre-processing both training and accuracy assessment data. A second subordinate objective was rigorous statistical estimation of the ground area of each land cover class.

2. Materials and Methods

2.1. Overview

The structure of this section has multiple components. First, the Dak Nong study area is described in Section 2.2, the Sentinel-2 satellite imagery and its separation into temporal periods are described in Section 2.3.1, and the land cover data from multiple sources and their separation into training and validation subsets are described in Section 2.3.2. Next, brief descriptions of the four classifiers are provided in Section 2.4, including descriptions of their statistical properties, details on their required input parameters, and procedures for optimizing their performance. Finally, in Section 2.5, the two analytical components used to compare all combinations of the four temporal image periods and the four classifiers are described. The first component focuses on map accuracy assessment, while the second component focuses on estimating LULC class areas and their corresponding uncertainties. The overall research approach is summarized in Figure 1.

2.2. Study Area

The study was conducted in Dak Nong Province in the Central Highlands of Vietnam (Figure 2). The average elevation is between 600 and 700 m above sea level, and the mean temperature is 24 °C. The province has a subequatorial tropical monsoon climate with humid tropical highland characteristics and is affected by dry, hot southwest monsoons. There are two distinct annual seasons: the rainy season starts in April and ends in November, and the dry season starts in December and ends in March of the following year. The average annual rainfall is 2500 mm, of which 90% occurs during the rainy season. The study area extends over 6516 km2 and is characterized by substantial fragmentation, thereby making LULC classification particularly challenging. Natural forest consists of patches of natural evergreen broad-leaved, mixed bamboo, deciduous dipterocarp, and semi-deciduous forest with different levels of disturbance, mainly human in origin.

2.3. Data

2.3.1. Sentinel-2 Imagery

Sentinel-2 MSI (multi-spectral instrument, Level-1C) remotely sensed data were used for LULC classification. The Sentinel-2 mission was developed by the European Space Agency (ESA) as a part of the Copernicus Programme [55]. The mission’s wide swath, fine spatial resolution (10 m–60 m), multi-spectral features (13 spectral bands), and frequent revisit time (10 days at the equator with one satellite, and 5 days with two satellites) support monitoring vegetation changes within a growing season, forest monitoring, land cover change detection, and natural disaster management [56]. The spectrum characteristics of the Sentinel 2 images are described in Table 1.
The difference in solar illumination geometry during image acquisition between the two seasons was considered in the present study. Although vegetation in the study area presents reduced climatic and phenological seasonality, the observed reflectance varies by season owing to changes in the solar illumination geometry caused by the Earth’s translation movement [13]. Therefore, seasonal image datasets were separately classified to evaluate these influences. Accordingly, the scenes of interest included the following: (i) a collection of Sentinel-2 MSI scenes in the study area during the dry season of 2017 and 2018 (1 January 2017 to 31 March 2017, and 1 December 2017 to 31 March 2018), designated imagery 1 (IMG 1) (Table 2); (ii) a collection of Sentinel-2 MSI scenes during the rainy season of 2017 and 2018 (from 1 April 2017 to 30 November 2017 and 1 April 2018 to 30 June 2018), designated IMG 2; (iii) a collection of Sentinel-2 MSI scenes for all months of 2017 (from 1 January 2017 to 31 December 2017), designated IMG 3; and (iv) a combination of all bands for the dry and rainy seasons (combination of IMG 1 and IMG 2), designated IMG 4.
For each temporal period, the analyses were based on the per-pixel median value of the image collection. The multi-spectral bands in the study included Blue (B2), Green (B3), Red (B4), Red Edge 1 (B5), Red Edge 2 (B6), Red Edge 3 (B7), near infrared (NIR) (B8), Red Edge 4 (B8A), short-wave infrared (SWIR) 1 (B11), and SWIR 2 (B12). In addition to these spectral bands, the normalized difference vegetation index (NDVI) and a digital elevation model (DEM) were added to the seasonal image data (IMG 1–4) with the objective of increasing classification accuracy, as reported from previous studies [22,57]. These bands, including NDVI [58,59] and DEM, were resampled to 10 m resolution. Image information is described in Table 2 below.
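As a concrete illustration of the added NDVI band, here is a minimal Python sketch (not the study's GEE implementation) computing NDVI from the Red (B4) and NIR (B8) reflectances:

```python
import numpy as np

def ndvi(red, nir):
    """Normalized difference vegetation index: (NIR - Red) / (NIR + Red)."""
    red = np.asarray(red, dtype=float)
    nir = np.asarray(nir, dtype=float)
    # Guard against division by zero on water/no-data pixels.
    denom = nir + red
    return np.where(denom == 0, 0.0, (nir - red) / np.where(denom == 0, 1.0, denom))

# Example: a healthy-vegetation pixel has low Red and high NIR reflectance.
print(ndvi(0.05, 0.45))  # ~0.8
```

The same formula applies element-wise to whole band arrays, so the resulting NDVI layer can be stacked with the spectral bands as an additional predictor.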
To conduct the analyses, the JavaScript API Code Editor in Google Earth Engine (GEE) was used to collect data for a large number of images. GEE provides access to most freely available image data and an application programming interface (API) to analyze and visualize the data [60,61]. Surface reflectance (SR) images for 2017 were not available, and for 2018 images, approximately 50% of the study area was covered by clouds. Hence, a top of atmosphere (TOA) dataset acquired for 2017 and 2018 was used for the study. The set of collected images was pre-processed to reduce the effects of topography and the bidirectional reflectance distribution function (BRDF). At the same time, cloud areas were masked out and shadows were removed during this process.
All images underwent pixel-wise cloud and cloud shadow masking using the Google cloudScore algorithm for clouds and the temporal dark outlier mask (TDOM) for cloud shadows, both of which build on ideas from the Landsat TDOM and cloudScore algorithms. The original concept was written by Carson Stam, adapted by Ian Housman, currently documented in [60], and described and evaluated in a forthcoming paper [62]. This study corrected reflectance spectral values for BRDF effects based on the method described by Roy et al. (2017a,b) [63,64]. Topographic correction accounts for differences in illumination caused by slope, aspect, and elevation effects. The sun-canopy-sensor + C (SCS+C) correction method based on the C-correction, as described by Soenen et al. [65], was applied for topographic correction in this study. The median function was then applied to create a single image representing the per-pixel median value of all images in the filtered collection [66,67]. The median lies close to the majority of values, is insensitive to extreme values, and has exactly half the values smaller and half greater, as applied by [68]. The post-processed images were then resampled to a spatial resolution of 10 m using the nearest neighbor method [69] and subsetted to the study area. The entire pre-processing was implemented on GEE based on the script available on “Open Geo Blog - Tutorials, Code snippets and examples to handle spatial data” [61,70].
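The role of the median composite in suppressing residual clouds and shadows can be illustrated with a small numpy sketch (toy reflectance values, not study data):

```python
import numpy as np

# Stack of co-registered reflectance images (time, rows, cols); cloud-contaminated
# pixels show anomalously bright values, cloud shadows anomalously dark ones.
stack = np.array([
    [[0.10, 0.12]],   # clear
    [[0.11, 0.13]],   # clear
    [[0.95, 0.13]],   # cloud over first pixel
    [[0.10, 0.02]],   # shadow over second pixel
])

# The per-pixel median over time approximates a cloud-free composite because
# the median is insensitive to the extreme values that clouds and shadows produce.
composite = np.median(stack, axis=0)
print(composite)  # [[0.105 0.125]]
```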

2.3.2. Training and Validation Data

Within the study area, 11 LULC classes were distinguished: (1) dense evergreen broadleaved forest (the forest has been slightly impacted); (2) open evergreen broadleaved forest (the forest has been moderately to heavily disturbed); (3) semi-evergreen forest (the forest that consists of a mixture of evergreen and deciduous dipterocarp tree species); (4) deciduous dipterocarp forest; (5) plantation forest; (6) mature rubber (≥3 years old); (7) perennial industrial plants; (8) croplands (annual crop land); (9) residential area; (10) water surface; and (11) other lands including, but not limited to, other types of grassland, shrubs, bare land, and abandoned land.
Acquiring adequate training and validation data is often challenging in tropical regions. Sothe et al. (2017) [13] and Teluguntla et al. (2018) [71] both obtained good results using sample data from a combination of sources including field investigations, very fine spatial resolution Google Earth imagery, current Landsat and Sentinel imagery, and other sources such as maps. A similar approach was used for this study for which three sets of sample data were acquired in 2017 and 2018: (1) field observations for a purposive sample of size 232; (2) visual interpretations of fine and very fine resolution imagery from sources that included Google Earth for a purposive sample of size 214; and (3) visual interpretations of fine and very fine resolution imagery from sources that included Google Earth and Sentinel-2A imagery for a simple random sample of size 800. For the latter sample dataset, field observations and data from the 2016 Dak Nong Forest Inventory Map were used to clarify and refine interpretations for the LULC classes such as semi-evergreen forest, plantation forest, and some perennial industrial crops that were difficult to distinguish in the imagery.
To obtain the probability sample necessary for validation, a systematic sample of the probability-based third dataset was selected. The plots in the third dataset were first sorted by their east and north coordinates, and then a systematic sample was selected from within each LULC class. For each class, the proportion selected was arbitrary, but was guided by the desire for a minimum sample size of 15, where possible, while retaining a sufficient sample size for training purposes. For the eleven LULC classes, the proportions were, in order, as follows: 0.20, 0.20, 0.50, 0.67, 1.00, 0.50, 0.11, 0.50, 0.67, 0.50, 0.20. Because the third dataset was selected as a simple random sample and was then systematically subsampled, each subsample can also be considered a simple random sample and, therefore, can be used for validation. The remaining portion of the third dataset was used for training purposes. The result was a sample size of 1036 for training and a sample size of 208 for validation. The number of validation plots by LULC category was considered sufficient and generally complied with the recommendation of Särndal et al. (1992) [72]. A summary of the training and validation datasets is shown in Table 3 with the spatial distribution of sample locations shown in Figure 2.
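The subsampling scheme described above can be sketched in Python; the plot data and the `systematic_subsample` helper are hypothetical, but the logic (sort by coordinates, then take every 1/p-th plot within a class) follows the text:

```python
def systematic_subsample(plots, proportion):
    """plots: list of (east, north, lulc_class) tuples for ONE class.
    Returns (validation, training) after sorting by coordinates."""
    ordered = sorted(plots)            # sort by east, then north coordinate
    step = round(1 / proportion)       # e.g. proportion 0.20 -> every 5th plot
    validation = ordered[::step]
    training = [p for p in ordered if p not in validation]
    return validation, training

# Ten hypothetical cropland plots along an east-west transect.
plots = [(e, 0, "cropland") for e in range(10)]
val, train = systematic_subsample(plots, 0.20)
print(len(val), len(train))  # 2 8
```

Because the parent sample is a simple random sample, the systematic subsample inherits that property, which is what justifies its use for design-based validation.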

2.4. Classifiers

The MLR, ik-NN, RF, and SVM supervised classification algorithms were used to classify the satellite image data described above. The training areas for each LULC type were selected based on Google Earth imagery, field data, and prior knowledge. The models were used as supervised classifiers to classify pixels based on their spectral signatures.

2.4.1. Multinomial Logistic Regression (MLR)

With MLR, the probability of class c for the ith plot, c = 1, ..., C, is estimated as follows:

$$p(y_i = c \mid \mathbf{x}_i) = \frac{\exp(\boldsymbol{\beta}_c \mathbf{x}_i)}{1 + \sum_{m=1}^{C-1} \exp(\boldsymbol{\beta}_m \mathbf{x}_i)} + \varepsilon_i, \quad \text{for } c = 1, \ldots, C-1$$

and

$$p(y_i = C \mid \mathbf{x}_i) = \frac{1}{1 + \sum_{m=1}^{C-1} \exp(\boldsymbol{\beta}_m \mathbf{x}_i)} + \varepsilon_i,$$

where C is the number of LULC classes, $\mathbf{x}_i$ is the vector of predictor variable observations for the ith population unit, and $\boldsymbol{\beta}_c$ is the vector of regression coefficients associated with LULC class c. The class with the greatest probability is selected as the prediction for the ith population unit. Optimal estimates for $\{\boldsymbol{\beta}_c : c = 1, \ldots, C-1\}$ can be obtained using any of multiple statistical software packages.
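A minimal numpy sketch of the MLR class probabilities with hypothetical coefficients; the reference class C receives 1/(1 + Σ exp(β_m·x)) and the remaining classes exp(β_c·x) over the same denominator:

```python
import numpy as np

def mlr_probabilities(x, betas):
    """Class probabilities under multinomial logistic regression.
    betas: (C-1, p) coefficient vectors; the C-th class is the reference."""
    scores = np.exp(betas @ x)    # exp(beta_c . x) for c = 1, ..., C-1
    denom = 1.0 + scores.sum()
    # Classes 1..C-1 followed by the reference class C.
    return np.append(scores / denom, 1.0 / denom)

# Two predictors, three classes (hypothetical coefficients).
betas = np.array([[0.5, -0.2],
                  [0.1, 0.3]])
p = mlr_probabilities(np.array([1.0, 2.0]), betas)
print(p, p.sum())                 # probabilities sum to 1
predicted_class = int(np.argmax(p))  # class with the greatest probability
```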

2.4.2. Improved k-Nearest Neighbors (ik-NN)

In the terminology of nearest neighbors techniques, the auxiliary or predictor variables are designated feature variables and the space defined by the feature variables is designated the feature space; the set of sample units for which observations of both response and feature variables are available is designated the reference set; and the set of population units for which predictions of response variables are desired is designated the target set (Chirici et al., 2016) [73]. All population units for both the reference and target sets are assumed to have complete sets of observations for all feature variables.
For the ith target unit, all forms of nearest neighbors algorithms entail selecting the k nearest or most similar neighbors, $\{y_{j_i} : j = 1, 2, \ldots, k\}$, from the reference set with respect to a distance metric, d, formulated as a function of the feature variables. For categorical response variables such as land cover classes, the prediction, $\hat{y}_i$, for the ith target unit is the most heavily weighted class among the k nearest neighbors: a weighted median or mode in the case of ordinal scale variables, or a mode in the case of nominal variables. Implementation of nearest neighbors techniques requires multiple selections: (i) the distance metric, d, to assess nearness or similarity in the feature space; (ii) a scheme for weighting the predictor variables in the distance metric; (iii) a scheme for weighting individual neighbors when calculating predictions; and (iv) a number, k, of nearest neighbors [73].
For this study, the distance metric was a simplified version of the metric proposed by Tomppo and Halme (2004) [44], as used in the operational Finnish multi-source national forest inventory (MS-NFI),
$$d_{ij} = \sum_{m=1}^{p} w_m^2 (x_{im} - x_{jm})^2,$$

where i denotes a target unit; j denotes a reference unit; $d_{ij}$ is the distance between units i and j; m indexes the feature variables; $x_{im}$ and $x_{jm}$ are observations of the mth feature variable for the ith target unit and jth reference unit, respectively; and $w_m$ is a feature variable weight. Neighbor weights are typically formulated as powers, $t \in [0, 2]$, of distances between target and reference units. Often, the selections necessary to implement a nearest neighbors algorithm are made arbitrarily, whereas improved k-NN (ik-NN) entails optimized selection of the weights, $w_m$, using a technique such as genetic algorithms [44,45,74].
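The weighted distance metric and neighbor weighting can be sketched in Python (toy reference data; in practice the weights $w_m$ would come from the genetic-algorithm optimization):

```python
import numpy as np
from collections import Counter

def iknn_predict(x_target, X_ref, y_ref, w, k=3, t=1.0):
    """Weighted k-NN prediction with the feature-weighted distance
    d_ij = sum_m w_m^2 (x_im - x_jm)^2 and neighbor weights 1/d^t."""
    d = ((w**2) * (X_ref - x_target)**2).sum(axis=1)
    nearest = np.argsort(d)[:k]
    votes = Counter()
    for j in nearest:
        votes[y_ref[j]] += 1.0 / max(d[j], 1e-12)**t  # closer neighbors weigh more
    return votes.most_common(1)[0][0]   # most heavily weighted class

# Hypothetical reference data: two feature variables, two classes.
X_ref = np.array([[0.1, 0.2], [0.1, 0.3], [0.9, 0.8], [0.8, 0.9]])
y_ref = ["forest", "forest", "cropland", "cropland"]
w = np.array([1.0, 1.0])
print(iknn_predict(np.array([0.15, 0.25]), X_ref, y_ref, w))  # forest
```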

2.4.3. Support Vector Machine (SVM)

The principle behind the SVM classifier is a hyperplane that separates the data for different classes. The main focus is construction of the hyperplane by maximizing the distance from the hyperplane to the nearest data point of either class. These nearest data points are known as support vectors [75].
According to Huang et al. (2002) [35] (p. 734), by mapping the input data into a high-dimensional space, the kernel function converts non-linear boundaries in the original data space into linear boundaries in the high-dimensional space, which can then be located using an optimization algorithm. Therefore, selection of the kernel function and appropriate values for corresponding kernel parameters, referred to as the kernel configuration, can affect the performance of the SVM.
The radial basis function (RBF) kernel is one of the most popular kernels used to implement the support vector machine algorithm and was used for this study. Based on the squared Euclidean distance, the RBF kernel can construct completely non-linear hyperplanes. It is commonly used and has performed well; polynomial kernels, especially high-order kernels, require far more training time than RBF kernels [35].
Meyer et al. (2002) [76] stated that, for classification tasks, C-classification is most likely used with the RBF kernel because of its good general performance and the small number of parameters (only two, C and γ). Therefore, the two parameters that must be defined for this classification algorithm are the cost parameter (C) and the kernel width parameter (γ). According to Knorn et al. (2009) [77] (p.960), C is a regularization parameter that controls the trade-off between maximizing the margin and minimizing the training error. C is a preset penalty value for misclassification errors, while γ describes the kernel width, which affects the smoothing of the shape of the class-dividing hyperplane.
The authors of LIBSVM suggest trying small and large values for C, such as 1 to 1000, then using cross-validation to decide which is optimal for the data, and finally trying several values of γ for the optimal C. A small C-value tends to emphasize the margin while ignoring the outliers in the training data, while a large C-value may overfit the training data [77] (p. 960). Optimal values of C and γ were selected by testing C in the range 2^−1 to 2^8 and γ in the range 0.1 to 2.0.
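A sketch of the parameter grid in Python; the γ step size of 0.1 is an assumption (the text gives only the 0.1–2.0 range), and `rbf_kernel` is written out to show the role of γ:

```python
import numpy as np
from itertools import product

def rbf_kernel(x1, x2, gamma):
    """RBF kernel k(x, x') = exp(-gamma * ||x - x'||^2); gamma sets the kernel width."""
    return np.exp(-gamma * np.sum((np.asarray(x1) - np.asarray(x2))**2))

# Candidate grid: C in 2^-1 .. 2^8, gamma in 0.1 .. 2.0 (assumed step 0.1).
C_grid = [2.0**e for e in range(-1, 9)]
gamma_grid = [round(0.1 * i, 1) for i in range(1, 21)]

# A cross-validation loop would score every (C, gamma) pair and keep the best;
# here we only enumerate the candidate pairs.
n_pairs = sum(1 for _ in product(C_grid, gamma_grid))
print(n_pairs)  # 10 C values x 20 gamma values = 200 pairs
```

Larger γ values make the kernel more local (the kernel value decays faster with distance), which sharpens the class-dividing hyperplane at the risk of overfitting.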

2.4.4. Random Forests (RF)

The RF classifier developed by Breiman (2001) [78] requires selection of three parameters: ntree (the number of trees to grow), mtry (the number of variables used to split each node), and variable importance (a measure of how much each variable/band influences model performance). Liaw and Wiener (2002) [79] recommend using the square root of the number of input variables as the default value for mtry. A large value for ntree produces a stable result for variable importance, which is estimated using two indicators: (i) mean decrease accuracy (MDA) and (ii) mean decrease Gini (MDG). MDA is the decrease in accuracy associated with each predictor variable based on the out-of-bag (OOB) error rate. Gini impurity is a measure of how often a randomly chosen element from the set would be incorrectly labeled if it were randomly labeled according to the distribution of labels in the subset. For this study, MDA values were investigated to determine the importance of variables. Nguyen et al. (2018) [6] indicated that within the range 1 ≤ ntree ≤ 500, ntree = 300 produced the best fit. In addition, Breiman (2001) [78] stated that using more than the required number of trees may be unnecessary, albeit not harmful, because the relationship between accuracy and ntree is asymptotic. The ‘randomForest’ package in the R environment developed by Liaw and Wiener (version 4.6-14, 2018) was used in the present study. Optimal values of mtry, ntree, and variable importance were selected based on the smallest OOB error. The optimal variable importance depended on the MDA value and the accuracy of the model.
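The Gini impurity measure and the default mtry rule can be made concrete with a short Python sketch (hypothetical label set and band count):

```python
import math
from collections import Counter

def gini_impurity(labels):
    """Gini impurity 1 - sum(p_k^2): the probability that a randomly chosen
    element is mislabeled when labeled at random from the subset's own
    label distribution."""
    n = len(labels)
    counts = Counter(labels)
    return 1.0 - sum((c / n)**2 for c in counts.values())

# A pure node has impurity 0; a 50/50 node has the two-class maximum, 0.5.
print(gini_impurity(["forest"] * 4))                        # 0.0
print(gini_impurity(["forest", "forest", "crop", "crop"]))  # 0.5

# Liaw & Wiener's default mtry: square root of the number of predictor variables.
n_bands = 12          # e.g. 10 spectral bands + NDVI + DEM (hypothetical count)
mtry = round(math.sqrt(n_bands))
print(mtry)  # 3
```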

2.5. Analyses

2.5.1. Accuracy Assessment

Accuracy assessment is an important step before accepting a classification result [21]. The classification accuracy of a map product is estimated by constructing a confusion matrix between reference and classified pixels. Classification accuracy was assessed using criteria such as overall accuracy (OA), Kappa coefficient (K), producer’s accuracy (PA), and user’s accuracy (UA). Congalton and Green (1999) [80] assert that analysis of the causes of differences in the confusion matrix can be one of the most important and interesting steps in the construction of a map from remotely sensed data.
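The accuracy criteria can be computed directly from a confusion matrix; below is a numpy sketch with a hypothetical two-class matrix (rows = map classes, columns = reference classes):

```python
import numpy as np

def accuracy_metrics(cm):
    """OA, producer's accuracy, user's accuracy, and Kappa from a confusion
    matrix with rows = classified (map) classes, columns = reference classes."""
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    oa = np.trace(cm) / n                       # overall accuracy
    pa = np.diag(cm) / cm.sum(axis=0)           # producer's accuracy (per column)
    ua = np.diag(cm) / cm.sum(axis=1)           # user's accuracy (per row)
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2  # chance agreement
    kappa = (oa - pe) / (1 - pe)
    return oa, pa, ua, kappa

cm = [[45, 5],
      [10, 40]]
oa, pa, ua, kappa = accuracy_metrics(cm)
print(round(oa, 3), round(kappa, 3))  # 0.85 0.7
```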
The objectives of the study included comparing the performance of classifiers as well as assessing the effects of Sentinel-2 satellite images for different seasons, as described in Table 2. The number of seasonal bands used with the four classifiers is reported in Table 4.

2.5.2. Land Cover Class Area Estimation

For each land cover class, an estimate of the class area and the corresponding standard error were calculated using a combination of confusion matrices and stratified estimators [81,82]. For each class, C, a confusion matrix was constructed for two classes: (i) class C and (ii) the aggregation of all other classes into a single class designated ~C (Table 5). Using estimates of proportions and corresponding variances as indicated in Table 5, the stratified estimate of the area of class C was as follows:
\hat{A}_C = A_{tot} \left( wt_1 \, \hat{p}_1 + wt_2 \, \hat{p}_2 \right),
with standard error,
SE(\hat{A}_C) = A_{tot} \sqrt{ wt_1^2 \, \widehat{Var}(\hat{p}_1) + wt_2^2 \, \widehat{Var}(\hat{p}_2) },
where $wt_1$ is the proportion of the total map area in class C, $wt_2 = 1 - wt_1$, and $A_{tot}$ is the total area of interest. Approximate 95% confidence intervals for the class areas can be estimated as follows:
\hat{A}_C \pm 2 \, SE(\hat{A}_C).
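The stratified estimators can be sketched as follows (Python for illustration; the input proportions and variances below are hypothetical stand-ins for the values derived from the Table 5 confusion matrices):

```python
import math

def stratified_area(a_tot, wt1, p1_hat, p2_hat, var_p1, var_p2):
    """Stratified class-area estimate and standard error: wt1 is the map
    proportion of class C, p1_hat and p2_hat are the estimated proportions
    of class C within the C and ~C map strata, and var_p1, var_p2 are the
    corresponding variance estimates."""
    wt2 = 1.0 - wt1
    area = a_tot * (wt1 * p1_hat + wt2 * p2_hat)
    se = a_tot * math.sqrt(wt1 ** 2 * var_p1 + wt2 ** 2 * var_p2)
    ci = (area - 2.0 * se, area + 2.0 * se)   # approximate 95% CI
    return area, se, ci

# Hypothetical inputs: 6516 km2 total area, 10% of the map in class C,
# 90% of mapped-C reference pixels truly C, 2% of mapped-~C pixels truly C.
area, se, ci = stratified_area(6516.0, 0.10, 0.90, 0.02, 1e-4, 1e-5)
```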

3. Results

3.1. Classifiers

3.1.1. Multinomial Logistic Regression (MLR)

The parameters of the multinomial logistic regression model (Equations (1) and (2)) were estimated using the multinom function in R [83]. All variables (the spectral values of all bands of the image set in the analysis) were included in the model. The log-likelihood stabilized after 100 iterations. Variable importance was quite similar among the different Sentinel-2 datasets. For the dry season image (IMG 1) and the all-month image (IMG 3), the most important variables were Blue and SWIR 2; the remaining importance values were approximately equal. For the rainy season image (IMG 2), the results were similar, although the Blue and SWIR 2 importance values were slightly smaller than for IMG 1 and IMG 3. Differences among importance values were small for the two-season image (IMG 4).
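A minimal multinomial (softmax) logistic regression fitted by gradient ascent can be sketched as follows; this Python implementation with toy data is purely illustrative and stands in for the R multinom function used in the study:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)      # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def fit_mlr(X, y, n_classes, lr=0.1, n_iter=200):
    """Fit a multinomial (softmax) logistic regression by batch gradient
    ascent on the log-likelihood. Returns the coefficient matrix W of
    shape (n_features + 1, n_classes); column 0 of Xb is the intercept."""
    Xb = np.hstack([np.ones((X.shape[0], 1)), X])
    W = np.zeros((Xb.shape[1], n_classes))
    Y = np.eye(n_classes)[y]                  # one-hot class labels
    for _ in range(n_iter):
        P = softmax(Xb @ W)
        W += lr * Xb.T @ (Y - P) / len(y)     # average log-likelihood gradient
    return W

# Hypothetical toy data: rows are pixels, columns are band values.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 4))
y = (X[:, 0] > 0).astype(int)                 # toy 2-class labels
W = fit_mlr(X, y, n_classes=2)
P = softmax(np.hstack([np.ones((200, 1)), X]) @ W)
acc = (P.argmax(axis=1) == y).mean()          # training accuracy
```

The rows of P are per-class membership probabilities that sum to one; the most probable class per pixel gives the classification.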

3.1.2. Improved k-NN (ik-NN)

The improved k-NN (ik-NN) algorithm was applied as described in [45], except that only overall accuracy was used in the fitness function. A value of k = 10 was used. For the genetic algorithm, the number of generations was 60, the population and medi-population sizes were 50, and the maximum number of random iterations was 4000. Otherwise, the genetic algorithm parameters were as reported by Tomppo et al. (2009) [45]. Because a genetic algorithm is a heuristic optimization method that may converge to a local optimum, several trial runs were used to find a near-optimal solution. Pixel-level estimates can be readily calculated with ik-NN once the weights of the variables have been optimized. Variable importance was similar for ik-NN and MLR. However, for predictor variables with large correlations, caution should be used when drawing conclusions with either method.
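The prediction step of a feature-weighted k-NN can be sketched as follows (Python with toy data; the band weights here are arbitrary stand-ins for the genetic-algorithm-optimized weights of ik-NN):

```python
import numpy as np

def weighted_knn_classify(X_train, y_train, X_query, weights, k=10):
    """Majority-vote k-NN with a feature-weighted Euclidean distance:
    the prediction step of ik-NN once band weights have been fixed
    (in the study, the weights were optimized by a genetic algorithm)."""
    preds = []
    for q in X_query:
        # Weighted squared band differences define the distance metric
        d = np.sqrt((weights * (X_train - q) ** 2).sum(axis=1))
        nn = np.argsort(d)[:k]                # indices of the k nearest pixels
        preds.append(np.bincount(y_train[nn]).argmax())
    return np.array(preds)

rng = np.random.default_rng(2)
X_train = rng.normal(size=(100, 3))           # toy 3-band training pixels
y_train = (X_train[:, 0] > 0).astype(int)
# Up-weighting the informative band sharpens neighbor selection.
pred = weighted_knn_classify(X_train, y_train, X_train[:10],
                             weights=np.array([4.0, 1.0, 1.0]), k=10)
```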

3.1.3. Support Vector Machine (SVM)

With the SVM algorithm using the RBF kernel, determination of the optimal cost (C) and gamma (γ) parameters is important. Following Qian et al. (2015) [84] and using our actual dataset, the R function ‘tune()’ was used to select these two SVM parameters. The optimal cost (C) value was determined from the values 2⁻¹, 2⁰, 2¹, 2², 2³, 2⁴, 2⁵, 2⁶, 2⁷, and 2⁸, and the gamma (γ) value was a free parameter set from 0.1 to 2. The optimal parameters were determined based on classification error. Figure 3 describes the performance of the SVM model for the different cost and gamma parameters; the darker the blue area, the better the model performance.
For the IMG 2-SVM and IMG 4-SVM combinations, the optimal C of 2³ with gamma of 0.1 produced classification errors of 0.2283 and 0.1525, respectively, while for the IMG 1-SVM and IMG 3-SVM combinations, the optimal C values of 2⁵ and 2⁶, both with gamma of 0.1, produced classification errors of 0.2212 and 0.2145, respectively.
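The grid search over C and gamma can be sketched with scikit-learn’s GridSearchCV as a stand-in for the R tune() function (Python, with hypothetical toy data in place of the band values):

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Hypothetical toy data standing in for band values and class labels.
rng = np.random.default_rng(3)
X = rng.normal(size=(200, 4))
y = (X[:, 0] ** 2 + X[:, 1] ** 2 > 1.0).astype(int)   # non-linear boundary

# Same search space as in the study: C over powers of two, gamma in [0.1, 2].
param_grid = {"C": [2.0 ** e for e in range(-1, 9)],
              "gamma": [0.1, 0.5, 1.0, 2.0]}
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5).fit(X, y)
best_C = search.best_params_["C"]
best_gamma = search.best_params_["gamma"]
error = 1.0 - search.best_score_              # cross-validated classification error
```

The pair (best_C, best_gamma) minimizing the cross-validated classification error plays the role of the optimum reported in Figure 3.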

3.1.4. Random Forests (RF)

The three RF parameters, ntree, mtry, and variable importance, play important roles in classification. The algorithm assesses the importance of each variable in the classification process by means of a specific measure. The ‘importance()’ and ‘varImpPlot()’ functions were used to determine the MDA values and to select the variables actually needed for the optimal RF models. Figure 4 shows the variable importance ranked in order of decreasing MDA from right to left for the four seasonal images. Variables were selected based on MDA using the backward selection method [85], in which the algorithm starts with all predictor variables and sequentially removes variables until the greatest accuracy is achieved. Accordingly, the least MDAs were attributed to two bands each for IMG 1, 2, and 3: B6 and B8 for IMG 1, and B6 and B7 for both IMG 2 and IMG 3. For IMG 4, five bands had the least MDAs: B5a, B2a, B2b, B7a, and B8b. In addition, the NIR band reduced the accuracy for all images.
The number of variables used for splitting at each node (mtry) was determined using the tuneRF function based on the variable importance and the number of trees (ntree). On the basis of the OOB error estimation, the optimum ntree and mtry parameters were chosen for the models.
Figure 5 shows the OOB errors when the model was run with ntree ranging from 1 to 500 trees. The smallest OOB errors were obtained for ntree = 500 trees for all seasonal image combinations.
The OOB errors associated with different mtry values are shown in Figure 6. The smallest OOB error was obtained with mtry = 3 for the IMG 2/RF and IMG 3/RF combinations, with mtry = 6 for the IMG 1/RF combination, and with mtry = 4 for the IMG4/RF combination.
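The OOB-based selection of mtry can be sketched as follows (Python with scikit-learn as a stand-in for the R tuneRF procedure; the data are hypothetical, and note that scikit-learn reports Gini-based rather than MDA importances):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(4)
X = rng.normal(size=(300, 10))                # 10 band-like predictors
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# Select mtry (max_features) by the smallest out-of-bag error, analogous
# to the tuneRF procedure in the R 'randomForest' package.
best_mtry, best_oob = None, float("inf")
for mtry in (2, 3, 4, 6):
    rf = RandomForestClassifier(n_estimators=300, max_features=mtry,
                                oob_score=True, random_state=0).fit(X, y)
    oob_error = 1.0 - rf.oob_score_
    if oob_error < best_oob:
        best_mtry, best_oob = mtry, oob_error

# Note: scikit-learn reports Gini-based (MDG-like) importances, not MDA.
importances = rf.feature_importances_
```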

3.2. Analyses

3.2.1. Accuracy of Classification Results

OA, K, PA, and UA for each class for the different combinations of image groups and classifiers are reported in Table 6, and a comparison of the results is reported in Figure 7. In general, relatively large accuracies were found, with OA >60% and K >0.6. Of interest, the IMG 4 composite of rainy and dry images produced the greatest accuracies for all classifiers. By contrast, the rainy season IMG 2 images produced the smallest accuracies. Classification accuracies for IMG 1 and IMG 3 were less than for IMG 4, but greater than for IMG 2.
The greatest accuracy was achieved for the composite two-season IMG 4 using the SVM classifier, specifically OA = 80.3% and K = 0.813. The smallest accuracy was for IMG 2 with the MLR classifier. The differences between the greatest and smallest accuracies were 16.4 percentage points for OA and 0.202 for K. With respect to the classification algorithms, the differences between the greatest and smallest accuracies for the four algorithms were 14.4 percentage points for OA and 0.202 for K; for SVM, the differences were 11.9 percentage points for OA and 0.096 for K; for ik-NN, the differences were 10 percentage points for OA and 0.108 for K. The final LULC map was constructed using the classification for the most accurate IMG 4/SVM combination and is shown in Figure 8.
Average UA and PA estimates were greater than 60%, apart from a few cases, but differed by LULC class. For the forest cover classes, dense forest (1) had the greatest accuracy, while open forest (2) had the smallest accuracies for the four methods for most seasons (Figure 9).

3.2.2. Land Cover Class Area Estimates

Land cover class area estimates with corresponding standard errors are shown by class for the 16 seasonal image and prediction technique combinations in Table 6. Class area estimates ranged from slightly greater than 70 km2 for class 10 for the IMG 4 and MLR combination to slightly greater than 2200 km2 for class 7 for the IMG 4 and SVM combination. Standard errors ranged from approximately 6 km2 to approximately 230 km2, with larger standard errors associated with larger area estimates (Figure 10), although smaller ratios of standard errors to area estimates were associated with larger area estimates. Regardless of the seasonal image and prediction technique combination, the three classes with the greatest areas, in order, were class 7: perennial industrial plants, class 2: open evergreen broadleaved forest, and class 1: dense evergreen broadleaved forest. For IMG 1, IMG 2, and IMG 3, the sums of the estimated areas for these three classes as proportions of the total area ranged from slightly more than 0.50 to approximately 0.63, but with larger estimates for IMG 4.

4. Discussion

Errors are present in any classification, estimation, or prediction [21,86,87,88]. Comparison of the results of this study and those of earlier studies is not straightforward because the numbers and definitions of the vegetation classes differ by study. Thus, optimality differs by study and user [21,86,87,88]. There are also no generally accepted limits on how accurate a classification should be to be characterized as reliable, because different users may have different concerns about accuracy. They may, for example, be interested in the accuracy for a specific class or in accuracy for areal estimates [89]. In addition, multiple factors influence classification accuracy: image quality, classifier, image composition, number and details of classes, and sample size.
Anderson et al. (1976) [90] recommended 85% as an acceptable accuracy for mapping land cover. However, as Foody (2008) [91] noted, for many contemporary mapping applications the challenge may be more difficult than that assessed by Anderson et al. (1976) [90], particularly when attempting to distinguish among a large number of relatively detailed classes at a relatively local, large cartographic scale. Consequently, in such applications, the 85% target suggested by Anderson et al. (1976) [90] may be inappropriate, as it may be unrealistically large.
Indeed, many studies have been conducted to select the most accurate classifier, either among those simultaneously evaluated or relative to classifiers evaluated in other studies. Such works reach no consensus, because the performance of a classifier always depends on the specific site characteristics, on the type and quality of the remotely sensed data, and on the number and general aspects of the classes of interest [13]. Using the RF, SVM, maximum likelihood, and neural network classification algorithms to discriminate among four individual land cover classes based on two Landsat-8 OLI scenes, Lowe and Kulkarni (2015) [40] reported overall classification accuracies of 96.25%, 86.88%, 83.13%, and 76.87%, respectively. Kennedy et al. (2015) [41] used RF to classify Landsat time-series data from 1198 training patches for four classes (agriculture, forest, urbanization, and stream) and reported OA greater than 80%, most successfully for the numerically well-represented forest management class. Meanwhile, Franco-Lopez et al. (2001) [38] used k-NN to map 13 types of land cover using Landsat TM and achieved OA = 64%. Tomppo et al. (2008) [92] reported OA between 70% and 80% for classifying dominant tree species in one boreal forest test site in Finland when using two adjacent Landsat 7 ETM+ scenes and the ik-NN method. Pelletier et al. (2016) [18] used the RF and SVM algorithms to classify high-resolution satellite image time series (SITS) from SPOT-4 and Landsat-8 in southern France, reporting an OA of 83.3% for RF and 77.1% for SVM. Phan and Kappas (2018) [20] showed different results among RF, SVM, and k-NN classifiers used to discriminate six types of LULC using Sentinel-2 image data in the Red River Delta of Vietnam; SVM produced the greatest OA (95.29%) with the least sensitivity to training sample size, followed by RF (94.44%) and k-NN (94.13%).
These results indicate that no standard of accuracy is appropriate for all cases, because accuracy relevance depends on both the objective and the user.
Spatial information, including remotely sensed data, has been an excellent source of information for decision makers in forest management, albeit best used in conjunction with an understanding of classification uncertainties, so that the probabilities of non-optimal and infeasible decisions are reduced. For this study, OA ranged from 63.9% to 80.3% (Figure 7) when using Sentinel 2 data to classify 11 LULC classes, with SVM producing the greatest accuracies. The difference between accuracies for the most accurate SVM classifier and the least accurate MLR classifier was approximately 14.4 percentage points. Although the results for SVM and RF were relatively similar, some authors recommend RF because training is less time-consuming and parameter selection is easier [18], a recommendation that was confirmed in our study.
Producer’s and user’s accuracies among the 11 LULC classes differed considerably (Figure 9). In general, the open evergreen forest classes were confused more than the other forest cover classes. This result is attributed to the heterogeneous conditions of natural tropical forests. In addition, forests in the study area have been disturbed to different degrees [21]. Among the forest classes, deciduous dipterocarp and semi-evergreen forest are considered the most challenging for remote sensing classification because of the seasonal deciduous characteristics of these forest types in the dry season [93]. However, this problem may be solved by using the combination of dry and rainy season images, as investigated in the present study.
The Sentinel-2 images acquired for different seasons (plant growth stages) produced different results. The greatest accuracies were for the composite rainy and dry season IMG 4; by contrast, the smallest accuracies were for the rainy season IMG 2. The observed reflectance varied by season owing to changes in the solar illumination geometry caused by the Earth’s orbital motion around the Sun. In addition, the vegetation in the study area varies depending on the season, owing to the substantial rainfall differences between the two seasons. Sothe et al. (2017) [13] assert that differences in classification accuracies for the dry and rainy seasons can be attributed to the differences in solar illumination geometry between the two seasons. For images acquired in the dry season, the incident solar radiation arrives in a direction more perpendicular to the Earth’s surface, thus reducing the shadow effect caused by topography and variations in forest canopy height and leading to greater pixel illumination. For the current study, there was a substantial increase in classification accuracies when using a composite of dry and rainy season Sentinel 2 images (IMG 4). For the ik-NN, RF, and SVM classifiers, the greatest accuracies were obtained for the combined rainy and dry IMG 4 relative to the rainy or dry season alone (Table 6). The accuracy increase for the composite image may be explained by the fact that different seasons contain different information for the same kind of land cover (e.g., dipterocarp forest is deciduous in the dry season and green in the rainy season). Combining the two seasons’ image bands captures additional information on land cover.
Among all combinations of images, classification algorithms, and land classes, the smallest SE for area estimates was for the water surface class owing to its stability, whereas the largest SE was for the industrial plant class. In fact, because cultivation characteristics of industrial plants in the study area are quite complex with a variety of species such as coffee, pepper, and cashew, all with uneven ages, large SEs are inevitable. This complexity also explains the large difference among area estimates for this class, ranging from 1643.45 km2 to 2223.87 km2, or from 25% to 34% of the total area (Figure 10).
Although classification accuracies for vegetation classes were not particularly large, the classifications are still useful for complex tropical rain forests that have been disturbed to different degrees, such as in the Central Highlands of Vietnam. The area estimates and spatial distributions of the LULC classes produced from the current study will assist local authorities, managers, and other stakeholders in decision-making and planning regarding forest land cover and uses. The usual practice is for the Forest Inventory and Planning Institute (FIPI) to conduct a forest inventory and construct a forest map every five years. Local forest units such as Dak Nong receive the maps and update them manually. However, the accuracy of the maps has usually not been announced, and inaccuracies and errors have been detected only by local forest staff when patrolling in the field. Moreover, LULC changes, particularly for industrial land, occur quickly and easily owing to factors such as unstable crop markets and an increasing population resulting from migration. Thus, the results of this study will not only provide authorities with updated information on current conditions, but will also serve as a recommendation regarding methods for updating LULC maps in a timely and cost-effective manner. Specifically, timely updated maps assist authorities by serving as a basis for formulating suitable solutions and policies for managing LULC, including forest cover.

5. Conclusions

This research showed the utility of combining Sentinel-2, multi-spectral, and dry and rainy season band data when mapping LULCs in Dak Nong Province, Vietnam. The greatest accuracies were achieved for the composite IMG 4 obtained by combining dry and rainy season image sets using the SVM classifier.
Among the classifiers, SVM produced the greatest accuracies, although RF, which had similar accuracies, was simpler to train and apply and was less computationally intensive. For IMG 4, the greatest accuracies with SVM were OA = 80.3% and K = 0.813; for RF, the greatest accuracies were OA = 80.0% and K = 0.802. Thus, the combination of dry and rainy season imagery used with SVM or RF may contribute to potentially new ways of classifying the complex tropical forests of Vietnam and similar areas. The area estimates and spatial distributions of the LULC classes produced from the current study will assist local authorities, managers, and other stakeholders in decision-making and planning regarding forest land cover and uses.
In conclusion, the two-season, multi-spectral Sentinel-2 images provided useful data for classifying LULC classes in areas with substantial fragmentation, especially for natural forests that have been disturbed and degraded at different levels, such as in Dak Nong, Vietnam. The SVM and RF machine learning algorithms were both accurate classifiers when used with the Sentinel 2 imagery. The methods developed for this study are applicable to boreal and temperate forests with different classes in addition to the tropical forests of the current study. However, the model parameters always need to be re-estimated for each application.

Author Contributions

Conceptualization, H.T.T.N., E.T., and R.E.M.; methodology, H.T.T.N., E.T., and R.E.M.; software, H.T.T.N., T.M.D., E.T., and R.E.M.; validation, H.T.T.N., E.T., R.E.M., and T.M.D.; formal analysis, H.T.T.N., R.E.M., E.T., and T.M.D.; investigation, T.M.D. and H.T.T.N.; resources, H.T.T.N.; data curation, H.T.T.N. and T.M.D.; writing—original draft preparation, H.T.T.N. and T.M.D.; writing—review and editing, R.E.M. and E.T.; visualization, T.M.D.; supervision, H.T.T.N.; project administration, H.T.T.N.; funding acquisition, H.T.T.N. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by UNITED STATES AGENCY FOR INTERNATIONAL DEVELOPMENT, grant number AID-OAA-A-11-00012.

Acknowledgments

This work is part of the research project under the PEER program (Partnerships for Enhanced Engagement in Research), a U.S. government program to fund scientific research in developing countries. The program is sponsored by USAID in partnership with several other U.S. Government agencies and administered by the U.S. National Academy of Sciences (NAS). The authors would like to thank all of the people involved in collecting field data for classification and validation. The authors also thank the editor and three anonymous reviewers for their constructive comments.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Ministry of Agriculture and Rural Development. Decision on the Declaration of Forest Status of the Country in 2018; Decision No. 911/QD-BNN-LN; Ministry of Agriculture and Rural Development: Hanoi, Vietnam, 2019.
  2. Thai, V.T. Vietnamese Forest Vegetation, 1st ed.; Science and Technique Publishing House: Hanoi, Vietnam, 1978. [Google Scholar]
  3. Hoang, M.H.; Do, T.H.; van Noordwijk, M.; Pham, T.T.; Palm, M.; To, X.P.; Doan, D.; Nguyen, T.X.; Hoang, T.V.A. An Assessment of Opportunities for Reducing Emissions from All Land Uses–Vietnam Preparing for REDD. Final National Report; ASB Partnership for the Tropical Forest Margins: Nairobi, Kenya, 2010; p. 85. [Google Scholar]
  4. Burkhard, B.; Kroll, F.; Nedkov, S.; Müller, F. Mapping ecosystem service supply, demand and budgets. Ecol. Indic. 2012, 21, 17–29. [Google Scholar] [CrossRef]
  5. Gebhardt, S.; Wehrmann, T.; Ruiz, M.A.M.; Maeda, P.; Bishop, J.; Schramm, M.; Kopeinig, R.; Cartus, O.; Kellndorfer, J.; Ressl, R.; et al. MAD-MEX: Automatic wall-to-wall land cover monitoring for the Mexican REDD-MRV program using all Landsat data. Remote Sens. 2014, 6, 3923–3943. [Google Scholar] [CrossRef]
  6. Nguyen, T.T.H.; Doan, M.T.; Volker, R. Applying random forest classification to map land use/land cover using landsat 8 OLI. Int. Soc. Photogramm. Remote Sens. 2018, XLII-3/W4, 363–367. [Google Scholar] [CrossRef]
  7. Arnold, F.E.; van der Werf, N.; Rametsteiner, E. Strengthening Evidence-Based Forest Policy-Making: Linking Forest Monitoring With National Forest Programmes; Forestry Policy and Institutions Working; FAO: Rome, Italy, 2014; p. 33. [Google Scholar]
  8. Khatami, R.; Mountrakis, G.; Stehman, S.V. A meta-analysis of remote sensing research on supervised pixel-based land cover image classification processes: General guidelines for practitioners and future research. Remote Sens. Environ. 2016, 177, 89–100. [Google Scholar] [CrossRef]
  9. Jensen, J.R.; Cowen, D.C. Remote sensing of urban/suburban infrastructure and socioeconomic attributes. Photogramm. Eng. Remote Sens. 1999, 65, 611–622. [Google Scholar]
  10. Deka, J.; Tripathi, O.P.; Khan, M.L. Study on land use/land cover change dynamics through remote sensing and GIS–A case study of Kamrup District, North East India. J. Remote Sens. GIS 2014, 5, 55–62. [Google Scholar]
  11. Topaloğlu, H.R.; Sertel, E.; Musaoğlu, N. Assessment of classification accuracies of SENTINEL-2 and LANDSAT-8 data for land cover/use mapping. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2016, XLI-B8, 1055–1059. [Google Scholar]
  12. Gomez, C.; White, J.C.; Wulder, M.A. Optical remotely sensed time series data for land cover classification: A review. Int. Soc. Photogramm. 2016, 116, 55–72. [Google Scholar] [CrossRef]
  13. Sothe, C.; Almeida, C.M.; Liesenberg, V.; Schimalski, M.B. Evaluating sentinel-2 and landsat-8 data to map sucessional forest stages in a subtropical forest in Southern Brazil. Remote Sens. 2017, 9, 838. [Google Scholar] [CrossRef]
  14. Addabbo, P.; Focareta, M.; Marcuccio, S.; Votto, C.; Ullo, S.L. Contribution of sentinel-2 data for applications in vegetation monitoring. Acta IMEKO 2016, 5, 44–54. [Google Scholar] [CrossRef]
  15. Song, X.; Yang, C.; Wu, M.; Zhao, C.; Yang, G.; Hoffmann, W.C.; Huang, W. Evaluation of sentinel-2a satellite imagery for mapping cotton root rot. Remote Sens. 2017, 9, 906. [Google Scholar] [CrossRef]
  16. Li, J.; Roy, D.P. A global analysis of sentinel-2a, sentinel-2b and landsat-8 data revisit intervals and implications for terrestrial monitoring. Remote Sens. 2017, 9, 902. [Google Scholar]
  17. Pirotti, F.; Sunar, F.; Piragnolo, M. Benchmark of machine learning methods for classification of a sentinel-2 image. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2016, XLI-B7, 335–340. [Google Scholar] [CrossRef]
  18. Pelletier, C.; Valero, S.; Inglada, J.; Champion, N.; Dedieu, G. Assessing the robustness of Random Forests to map land cover with high resolution satellite image time series over large areas. Remote Sens. Environ. 2016, 187, 156–168. [Google Scholar] [CrossRef]
  19. Sharma, R.C.; Hara, K.; Tateishi, R. High-resolution vegetation mapping in japan by combining sentinel-2 and landsat 8 based multi-temporal datasets through machine learning and cross-validation approach. Land 2017, 6, 50. [Google Scholar] [CrossRef]
  20. Phan, T.N.; Kappas, M. Comparison of random forest, k-nearest neighbor, and support vector machine classifiers for land cover classification using sentinel-2 imagery. Sensors 2018, 18, 20. [Google Scholar]
  21. Nguyen, T.T.H. Forestry Remote Sensing: Multi-Source Data in Natural Evergreen Forest Inventory in the Central Highlands of Vietnam, 1st ed.; Lambert Academic Publishing: Saarbruecken, Germany, 2011; p. 165. [Google Scholar]
  22. Manandhar, R.; Odeh, I.O.A.; Ancev, T. Improving the accuracy of land use and land cover classification of landsat data using post-classification enhancement. Remote Sens. 2009, 1, 330–344. [Google Scholar] [CrossRef]
  23. Heydari, S.S.; Mountrakis, G. Effect of classifier selection, reference sample size, reference class distribution and scene heterogeneity in per-pixel classification accuracy using 26 Landsat sites. Remote Sens. Environ. 2018, 204, 648–658. [Google Scholar] [CrossRef]
  24. Lu, D.; Weng, Q.A. Survey of image classification methods and techniques for improving classification performance. Int. J. Remote Sens. 2007, 28, 823–870. [Google Scholar] [CrossRef]
  25. Waske, B.; Braun, M. Classifier ensembles for land cover mapping using multitemporal SAR imagery. ISPRS J. Photogramm. Remote Sens. 2009, 64, 450–457. [Google Scholar] [CrossRef]
  26. Li, C.; Wang, J.; Wang, L.; Hu, L.; Gong, P. Comparison of classification algorithms and training sample sizes in urban land classification with Landsat Thematic Mapper imagery. Remote Sens. 2014, 6, 964–983. [Google Scholar] [CrossRef]
  27. Abbas, A.W.; Minallh, N.; Ahmad, N.; Abid, S.A.R.; Khan, M.A.A. K-means and ISODATA clustering algorithms for landcover classification using remote sensing. Sindh Univ. Res. J. (Sci. Ser.) 2016, 48, 315–318. [Google Scholar]
  28. Paola, J.D.; Schowengerdt, R.A. A detailed comparison of backpropagation neural network and maximum-likelihood classifiers for urban land use classification. IEEE Trans. Geosci. Remote Sens. 1995, 33, 981–996. [Google Scholar] [CrossRef]
  29. Shafri, H.Z.M.; Suhaili, A.; Mansor, S. The performance of maximum likelihood, spectral angle mapper, neural network and decision tree classifiers in hyperspectral image analysis. J. Comput. Sci. 2007, 3, 419–423. [Google Scholar] [CrossRef]
  30. Santos, J.A.; Ferreira, C.D.; Torres, R.D.S.; Gonalves, M.A.; Lamparelli, R.A.C. A relevance feedback method based on genetic programming for classification of remote sensing images. Inf. Sci. 2011, 181, 2671–2684. [Google Scholar] [CrossRef]
  31. Ahmad, A.; Quegan, S. Analysis of maximum likelihood classification on multispectral data. Appl. Math. Sci. 2012, 6, 6425–6436. [Google Scholar]
  32. McRoberts, R.E. A two-step nearest neighbors algorithm using satellite imagery for predicting forest structure within species composition classes. Remote Sens. Environ. 2009, 113, 532–545. [Google Scholar] [CrossRef]
  33. McRoberts, R.E. Satellite image-based maps: Scientific inference or pretty pictures? Remote Sens. Environ. 2011, 115, 714–724. [Google Scholar] [CrossRef]
  34. Pal, M.; Mather, P.M. Support vector classifiers for land cover classification. In Proceedings of the Map India Conference, New Delhi, India, 28–31 January 2003. [Google Scholar]
  35. Huang, C.; Davis, L.S.; Townshend, J.R.G. An assessment of support vector machines for land cover classification. Int. J. Remote Sens. 2002, 23, 725–749. [Google Scholar] [CrossRef]
  36. Bahari, N.I.S.; Ahmad, A.; Aboobaider, B.M. Application of support vector machine for classification of multispectral data. IOP Conf. Ser. Earth Environ. Sci. 2014, 20. [Google Scholar] [CrossRef]
  37. Balcik, F.B.; Kuzucu, A.K. Determination of land cover/land use using spot 7 data with supervised classification methods. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2016, XLII-2/W1, 143–146. [Google Scholar] [CrossRef]
  38. Franco-Lopez, H.; Ek, A.R.; Bauer, M.E. Estimation and mapping of forest stand density, volume, and cover type using the k-nearest neighbors method. Remote Sens. Environ. 2001, 77, 251–274. [Google Scholar] [CrossRef]
  39. Yu, S.; Backer, S.; Scheunders, P. Genetic feature selection combined with composite fuzzy nearest neighbor classifiers for hyperspectral satellite imagery. Pattern Recognit. Lett. 2002, 23, 183–190. [Google Scholar] [CrossRef]
  40. Lowe, B.; Kulkarni, A. Multispectral image analysis using random forest. Int. J. Soft Comput. (IJSC) 2015, 6, 14. [Google Scholar] [CrossRef]
  41. Kennedy, R.E.; Yang, Z.; Braaten, J.; Copass, C.; Antonova, N.; Jordan, C.; Nelson, P. Attribution of disturbance change agent from Landsat time-series in support of habitat monitoring in the Puget Sound region, USA. Remote Sens. Environ. 2015, 166, 271–285. [Google Scholar] [CrossRef]
  42. Basten, K. Classifying Landsat Terrain Images via Random Forests. Bachelor thesis Computer Science; Radboud University: Nijmegen, The Netherlands, 2016. [Google Scholar]
  43. Naidoo, L.; Cho, M.A.; Mathieu, R.; Asner, G. Classification of savanna tree species, in the Greater Kruger National Park region, by integrating hyperspectral and LiDAR data in a random forest data mining environment. ISPRS J. Photogramm. Remote Sens. 2012, 69, 167–179. [Google Scholar] [CrossRef]
  44. Tomppo, E.; Halme, M. Using coarse scale forest variables as ancillary information and weighting of variables in k-NN estimation: A genetic algorithm approach. Remote Sens. Environ. 2004, 92, 1–20. [Google Scholar] [CrossRef]
  45. Tomppo, E.; Gagliano, C.; De Natale, F.; Katila, M.; McRoberts, R. Predicting categorical forest variables using an improved k-nearest neighbor estimator and Landsat imagery. Remote Sens. Environ. 2009, 113, 500–517. [Google Scholar] [CrossRef]
  46. Dharamvir. Object oriented model classification of satellite image. CDQM 2013, 16, 46–54. [Google Scholar]
  47. Machala, M.; Zejdová, L. Forest mapping through object-based image analysis of multispectral and lidar aerial data. Eur. J. Remote Sens. 2014, 47, 117–131. [Google Scholar] [CrossRef]
  48. Mora, A.; Santos, T.M.A.; Łukasik, S.; Silva, J.M.N.; Falcão, A.J.; Fonseca, J.M.; Ribeiro, R.A. Land cover classification from multispectral data using computational intelligence tools: A comparative study. Information 2017, 8, 147. [Google Scholar] [CrossRef]
  49. Sowmya, B.; Sheelarani, B. Land cover classification using reformed fuzzy C-means. Sadhana 2011, 36, 153–165. [Google Scholar] [CrossRef]
  50. Apte, K.S.; Patravali, D.S. Development of back propagation neutral network model for ectracting the feature from a satellite image using curvelet transform. Int. J. Eng. Res. Gen. Sci. 2015, 3, 226–236. [Google Scholar]
  51. Ma, L.; Li, M.; Ma, X.; Cheng, L.; Du, P.; Liu, Y. A review of supervised object-based land-cover image classification. ISPRS J. Photogramm. Remote Sens. 2017, 130, 277–293. [Google Scholar] [CrossRef]
  52. Meyfroidt, P.; Lambin, E.F.; Erb, K.H.; Hertel, T.W. Globalization of land use: Distant drivers of land change and geographic displacement of land use. Curr. Opin. Environ. Sustain. 2013, 5, 438–444. [Google Scholar] [CrossRef]
  53. Bourgoin, C.; Oszwald, J.; Bourgoin, J.; Gond, V.; Blanc, L.; Dessard, H.; Phan, T.V.; Sist, P.; Läderach, P.; Reymondin, L.; et al. Assessing the ecological vulnerability of forest landscape to agricultural frontier expansion in the Central Highlands of Vietnam. Int. J. Appl. Earth Obs. Geoinf. 2020, 84, 13. [Google Scholar] [CrossRef]
  54. Ha, T.V.; Tuohy, M.; Irwin, M.; Tuan, P.T. Monitoring and mapping rural urbanization and land use changes using Landsat data in the northeast subtropical region of Vietnam. Egypt. J. Remote Sens. Space Sci. 2020, 23, 11–19. [Google Scholar] [CrossRef]
  55. Drusch, M.; Del Bello, U.; Carlier, S.; Colin, O.; Fernandez, V.; Gascon, F.; Hoersch, B.; Isola, C.; Laberinti, P.; Martimort, P.; et al. Sentinel-2: ESA’s optical high-resolution mission for GMES operational services. Remote Sens. Environ. 2012, 120, 25–36. [Google Scholar] [CrossRef]
  56. Vuolo, F.; Zółtak, M.; Pipitone, C.; Zappa, L.; Wenng, H.; Immitzer, M.; Weiss, M.; Baret, F.; Atzberger, C. Data service platform for Sentinel-2 surface reflectance and value-added products: System use and examples. Remote Sens. 2016, 8, 938. [Google Scholar] [CrossRef]
  57. Yacouba, D.; Guangdao, H.; Xingping, W. Assessment of land use cover changes using NDVI and DEM in Puer and Simao Counties, Yunnan Province, China. World Rural Obs. 2009, 1, 1–11. [Google Scholar]
  58. Defries, R.S.; Townshend, J.R.G. NDVI-derived land cover classifications at a global scale. Int. J. Remote Sens. 1994, 15, 3567–3586. [Google Scholar] [CrossRef]
  59. Pettorelli, N.; Ryan, S.; Mueller, T.; Bunnefeld, N.; Jedrzejewska, B.; Lima, M.; Kausrud, K. The Normalized Difference Vegetation Index (NDVI): Unforeseen successes in animal ecology. Clim. Res. 2011, 46, 15–27. [Google Scholar] [CrossRef]
  60. Housman, I.W.; Chastain, R.A.; Finco, M.V. An evaluation of forest health insect and disease survey data and satellite-based remote sensing forest change detection methods: Case studies in the United States. Remote Sens. 2018, 10, 21. [Google Scholar] [CrossRef]
  61. Gorelick, N.; Hancher, M.; Dixon, M.; Ilyushchenko, S.; Thau, D.; Moore, R. Google earth engine: Planetary-scale geospatial analysis for everyone. Remote. Sens. Environ. 2017, 202, 18–27. [Google Scholar] [CrossRef]
  62. Housman, I.; Hancher, M.; Stam, C. A quantitative evaluation of cloud and cloud shadow masking algorithms available in Google Earth Engine. Unpublished manuscript, in preparation.
  63. Roy, D.P.; Li, Z.; Zhang, H.K. Adjustment of sentinel-2 multi-spectral instrument (msi) red-edge band reflectance to nadir BRDF adjusted reflectance (NBAR) and quantification of red-edge band BRDF effects. Remote Sens. 2017, 9, 1325. [Google Scholar]
  64. Roy, D.P.; Li, J.; Zhang, H.K.; Yan, L.; Huang, H. Examination of sentinel-2A multi-spectral instrument (MSI) reflectance anisotropy and the suitability of a general method to normalize MSI reflectance to nadir BRDF adjusted reflectance. Remote Sens. Environ. 2017, 199, 25–38. [Google Scholar] [CrossRef]
  65. Soenen, S.A.; Peddle, D.R.; Coburn, C.A. SCS+ C: A modified sun-canopy-sensor topographic correction in forested terrain. IEEE Trans. Geosci. Remote Sens. 2005, 43, 2148–2159. [Google Scholar] [CrossRef]
  66. Google Earth Engine. Developer’s Guide. ImageCollection Reductions. Available online: https://developers.google.com/earth-engine/reducers_image_collection (accessed on 29 October 2017).
  67. De Alban, J.D.T.; Connette, G.M.; Oswald, P.; Webb, E.L. Combined landsat and L-band SAR data improves land cover classification and change detection in dynamic tropical landscapes. Remote Sens. 2018, 10, 306. [Google Scholar] [CrossRef]
  68. Gilat, D.; Hill, T.P. Quantile-locating functions and the distance between the mean and quantiles. Stat. Neerl. 1993, 47, 279–283. [Google Scholar] [CrossRef]
  69. Google Earth Engine. Developer’s Guide. Scale. Available online: https://developers.google.com/earth-engine/scale#scale-of-analysis (accessed on 29 October 2017).
  70. Open Geo Blog–Tutorials, Code Snippets and Examples to Handle Spatial Data. Available online: https://mygeoblog.com/ (accessed on 28 October 2018).
  71. Teluguntla, P.; Thenkabail, P.S.; Oliphant, A.; Xiong, J.; Gumma, M.K.; Congalton, R.G.; Kamini Yadav, K.; Huete, A. A 30-m landsat-derived cropland extent product of Australia and China using random forest machine learning algorithm on Google Earth Engine cloud computing platform. ISPRS J. Photogramm. Remote Sens. 2018, 144, 325–340. [Google Scholar] [CrossRef]
  72. Särndal, C.-E.; Swensson, B.; Wretman, J. Model Assisted Survey Sampling; Springer: New York, NY, USA, 1992. [Google Scholar]
  73. Chirici, G.; Mura, M.; McInerney, D.; Py, N.; Tomppo, E.O.; Waser, L.T.; McRoberts, R.E. A meta-analysis and review of the literature on the k-Nearest Neighbors technique for forestry applications that use remotely sensed data. Remote Sens. Environ. 2016, 176, 282–294. [Google Scholar] [CrossRef]
  74. McRoberts, R.E.; Domke, G.M.; Chen, Q.; Næsset, E.; Gobakken, T. Using genetic algorithms to optimize k-Nearest Neighbors configurations for use with airborne laser scanning data. Remote Sens. Environ. 2016, 184, 387–395. [Google Scholar] [CrossRef]
  75. Cortes, C.; Vapnik, V. Support-vector networks. Mach. Learn. 1995, 20, 273–297. [Google Scholar] [CrossRef]
  76. Meyer, D.; Leisch, F.; Hornik, K. Benchmarking Support Vector Machines; Report Series No. 78; Vienna University of Economics and Business Administration: Vienna, Austria, 2002. [Google Scholar]
  77. Knorn, J.; Rabe, A.; Radeloff, V.C.; Kuemmerle, T.; Kozak, J.; Hostert, P. Land cover mapping of large areas using chain classification of neighboring Landsat satellite images. Remote. Sens. Environ. 2009, 113, 957–964. [Google Scholar] [CrossRef]
  78. Breiman, L. Random forests. Mach. Learn. 2001, 45, 5–32. [Google Scholar] [CrossRef]
  79. Liaw, A.; Wiener, M. Classification and regression by randomForest. R News 2002, 2, 18–22. [Google Scholar]
  80. Congalton, R.G.; Green, K. Assessing the Accuracy of Remotely Sensed Data: Principles and Practices; Lewis Publishers: Boca Raton, FL, USA, 1999. [Google Scholar]
  81. Cochran, W.G. Sampling Techniques, 3rd ed.; Wiley: New York, NY, USA, 1977. [Google Scholar]
  82. Olofsson, P.; Foody, G.M.; Herold, M.; Stehman, S.V.; Woodcock, C.E.; Wulder, M.A. Good practices for estimating area and assessing accuracy of land change. Remote Sens. Environ. 2014, 148, 42–57. [Google Scholar] [CrossRef]
  83. R Core Team. R: A Language and Environment for Statistical Computing; R Foundation for Statistical Computing: Vienna, Austria, 2019; Available online: https://www.R-project.org/ (accessed on 23 March 2020).
  84. Qian, Y.; Zhou, W.; Yan, J.; Li, W.; Han, L. Comparing machine learning classifiers for object-based land cover classification using very high resolution imagery. Remote Sens. 2015, 7, 153–168. [Google Scholar] [CrossRef]
  85. Sutter, J.M.; Kalivas, J.H. Comparison of forward selection, backward elimination, and generalized simulated annealing for variable selection. Microchem. J. 1993, 47, 60–66. [Google Scholar] [CrossRef]
  86. Brown, J.F.; Loveland, T.R.; Ohlen, D.O.; Zhu, Z. The global land-cover characteristics database: The user’s perspective. Photogramm. Eng. Remote Sens. 1999, 65, 1069–1074. [Google Scholar]
  87. Lark, R.M. Components of accuracy of maps with special reference to discriminant analysis on remote sensor data. Int. J. Remote Sens. 1995, 16, 1461–1480. [Google Scholar] [CrossRef]
  88. Salovaara, K.J.; Thessler, S.; Malik, R.N.; Tuomisto, H. Classification of Amazonian primary rain forest vegetation using Landsat ETM+ satellite imagery. Remote Sens. Environ. 2005, 97, 39–51. [Google Scholar] [CrossRef]
  89. Foody, G.M. Status of land cover classification accuracy assessment. Remote Sens. Environ. 2002, 80, 185–201. [Google Scholar] [CrossRef]
  90. Anderson, J.R.; Hardy, E.E.; Roach, J.T.; Witmer, R.E. A Land Use and Land Cover Classification System for Use with Remote Sensor Data; U.S. Government Publishing Office: Washington, DC, USA, 1976.
  91. Foody, G.M. Harshness in image classification accuracy assessment. Int. J. Remote Sens. 2008, 29, 3137–3158. [Google Scholar] [CrossRef]
  92. Tomppo, E.; Katila, M.; Makisara, K.; Perasaari, J. Multi-source National Forest Inventory: Methods and applications; Springer: Dordrecht, The Netherlands, 2008. [Google Scholar]
  93. Xie, Z.; Chen, Y.; Lu, D.; Li, G.; Chen, E. Classification of land cover, forest, and tree species classes with ziyuan-3 multispectral and stereo data. Remote Sens. 2019, 11, 164. [Google Scholar] [CrossRef]
Figure 1. Research approach as a flowchart. The Sentinel-2 2017 and 2018 data were collected for different seasons: dry, rainy, whole year, and rainy and dry composited images. The MLR, ik-NN, SVM, and RF classifiers were tested with the resulting uncertainty assessments used as criteria for comparing combinations of seasonal datasets and classifiers. OA, overall accuracy; K, Kappa coefficient; PA, producer's accuracy; UA, user's accuracy.
Figure 2. The study area in Dak Nong province, Vietnam, with sample unit locations.
Figure 3. Parameter tuning of the support vector machine: (a) IMG 1-SVM; (b) IMG 2-SVM; (c) IMG 3-SVM; (d) IMG 4-SVM.
Figure 4. Ranking of variable (band) importance: (a) IMG 1-RF; (b) IMG 2-RF; (c) IMG 3-RF; (d) IMG 4-RF.
Figure 5. Out-of-bag (OOB) error rate versus ntree values: (a) IMG 1-RF; (b) IMG 2-RF; (c) IMG 3-RF; (d) IMG 4-RF.
Figure 6. The OOB error of the model as a function of the mtry parameter: (a) IMG 1-RF; (b) IMG 2-RF; (c) IMG 3-RF; (d) IMG 4-RF.
Figure 7. Overall accuracy for all combinations.
Figure 8. Land use and land cover (LULC) map produced for the most accurate classification combination.
Figure 9. User's and producer's accuracies by class for the four classifiers using IMG 4, the combination of rainy- and dry-season Sentinel 2 imagery.
Figure 10. Standard errors (SE) (km2) of class area estimates versus area estimates (km2).
Table 1. Basic characteristics of the Sentinel 2 multi-spectral instrument (MSI).

| Name | Min | Max | Scale | Resolution | Wavelength | Description |
|------|-----|-----|-------|------------|------------|-------------|
| B1 | 0 | 10,000 | 0.0001 | 60 m | 443 nm | Aerosols |
| B2 | 0 | 10,000 | 0.0001 | 10 m | 490 nm | Blue |
| B3 | 0 | 10,000 | 0.0001 | 10 m | 560 nm | Green |
| B4 | 0 | 10,000 | 0.0001 | 10 m | 665 nm | Red |
| B5 | 0 | 10,000 | 0.0001 | 20 m | 705 nm | Red Edge 1 |
| B6 | 0 | 10,000 | 0.0001 | 20 m | 740 nm | Red Edge 2 |
| B7 | 0 | 10,000 | 0.0001 | 20 m | 783 nm | Red Edge 3 |
| B8 | 0 | 10,000 | 0.0001 | 10 m | 842 nm | Near infrared (NIR) |
| B8a | 0 | 10,000 | 0.0001 | 20 m | 865 nm | Red Edge 4 |
| B9 | 0 | 10,000 | 0.0001 | 60 m | 940 nm | Water vapor |
| B10 | 0 | 10,000 | 0.0001 | 60 m | 1375 nm | Cirrus |
| B11 | 0 | 10,000 | 0.0001 | 20 m | 1610 nm | Short-wave infrared (SWIR) 1 |
| B12 | 0 | 10,000 | 0.0001 | 20 m | 2190 nm | SWIR 2 |
| QA10 | | | | 10 m | | Always empty |
| QA20 | | | | 20 m | | Always empty |
| QA60 | | | | 60 m | | Cloud mask |
Table 2. Seasonal satellite images used in the classification.

| Image Name | Time | Acquisition Dates | Number of Images Involved | Number of Bands |
|------------|------|-------------------|---------------------------|-----------------|
| IMG 1 | Dry season, 2017–2018 | 01/01/2017–03/31/2017 and 12/01/2017–03/31/2018 | 169 | 10 |
| IMG 2 | Rainy season, 2017–2018 | 04/01/2017–11/30/2017 and 04/01/2018–06/30/2018 | 277 | 10 |
| IMG 3 | All of year 2017 | 01/01/2017–12/31/2017 | 265 | 10 |
| IMG 4 | Combination of all bands for both 2017 and 2018 (IMG 1 + IMG 2) | Dry season 2017–2018 + rainy season 2017–2018 | 446 | 20 |
Table 3. Training and validation data (number of sample units per land cover class).

| Dataset | Use | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | Total |
|---------|-----|---|---|---|---|---|---|---|---|---|----|----|-------|
| 1 | Training | 77 | 6 | 15 | 13 | 29 | 34 | 0 | 13 | 32 | 4 | 9 | 232 |
| 2 | Training | 6 | 8 | 52 | 33 | 11 | 14 | 19 | 21 | 20 | 4 | 25 | 213 |
| 3 | Training | 99 | 97 | 22 | 9 | 0 | 17 | 234 | 20 | 8 | 11 | 74 | 591 |
| Total | Training | 182 | 111 | 89 | 55 | 40 | 65 | 253 | 54 | 60 | 19 | 108 | 1036 |
| 3 | Validation | 25 | 25 | 22 | 17 | 7 | 17 | 28 | 20 | 16 | 12 | 19 | 208 |
Table 4. Classifiers and seasonal bands. ik-NN, improved k-nearest neighbors; MLR, multinomial logistic regression; SVM, support vector machine; RF, random forests.

| Classification Algorithm | Image Set | Number of Bands |
|--------------------------|-----------|-----------------|
| ik-NN | IMG 1 | 10 |
| ik-NN | IMG 2 | 10 |
| ik-NN | IMG 3 | 10 |
| ik-NN | IMG 4 | 20 |
| MLR | IMG 1 | 10 |
| MLR | IMG 2 | 10 |
| MLR | IMG 3 | 10 |
| MLR | IMG 4 | 20 |
| SVM | IMG 1 | 10 |
| SVM | IMG 2 | 10 |
| SVM | IMG 3 | 10 |
| SVM | IMG 4 | 20 |
| RF | IMG 1 | 10 |
| RF | IMG 2 | 10 |
| RF | IMG 3 | 10 |
| RF | IMG 4 | 20 |
Table 5. Confusion matrix for a single map class C; ~C denotes all other classes.

| Map Class | Reference C | Reference ~C | Total | UA * | p̂_h | Var̂(p̂_h) |
|-----------|-------------|--------------|-------|------|------|------------|
| C | n_11 | n_12 | n_1. = n_11 + n_12 | UA_1 = n_11 / n_1. | p̂_1 = n_11 / n_1. | Var̂(p̂_1) = p̂_1 (1 − p̂_1) / n_1. |
| ~C | n_21 | n_22 | n_2. = n_21 + n_22 | UA_2 = n_22 / n_2. | p̂_2 = n_21 / n_2. | Var̂(p̂_2) = p̂_2 (1 − p̂_2) / n_2. |
| Total | n_.1 = n_11 + n_21 | n_.2 = n_12 + n_22 | | | | |
| PA * | PA_1 = n_11 / n_.1 | PA_2 = n_22 / n_.2 | | | | |

* UA = user's accuracy, * PA = producer's accuracy.
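The quantities in Table 5 follow directly from the four cell counts. A minimal sketch in Python illustrates the arithmetic; the function and variable names (accuracy_measures, n11, ...) are ours for illustration, not from the paper:

```python
def accuracy_measures(n11, n12, n21, n22):
    """User's/producer's accuracies and per-stratum proportion variances
    for a 2x2 confusion matrix of one map class C versus all others (~C)."""
    n1_dot, n2_dot = n11 + n12, n21 + n22    # row totals (map classes)
    n_dot1, n_dot2 = n11 + n21, n12 + n22    # column totals (reference classes)
    ua1, ua2 = n11 / n1_dot, n22 / n2_dot    # user's accuracies
    pa1, pa2 = n11 / n_dot1, n22 / n_dot2    # producer's accuracies
    p1, p2 = n11 / n1_dot, n21 / n2_dot      # estimated proportions p-hat
    var1 = p1 * (1 - p1) / n1_dot            # Var-hat(p-hat_1)
    var2 = p2 * (1 - p2) / n2_dot            # Var-hat(p-hat_2)
    return {"ua": (ua1, ua2), "pa": (pa1, pa2),
            "p": (p1, p2), "var_p": (var1, var2)}

# Hypothetical counts: 80 of 100 map-C units verified as C;
# 10 of 100 map-~C units are reference C.
m = accuracy_measures(80, 20, 10, 90)
```

Here UA_1 = 80/100 = 0.8 while PA_1 = 80/90, the usual asymmetry between commission and omission error.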
Table 6. Accuracy estimates: OA = overall accuracy (%), Kappa = Kappa coefficient, PA = producer's accuracy (%), UA = user's accuracy (%), Â = class area estimate (km2), SE(Â) = standard error of class area estimate (km2).

| Image | Classifier | OA | Kappa | Metric | Class 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 |
|-------|-----------|----|-------|--------|---------|---|---|---|---|---|---|---|---|----|----|
| IMG 1 | MLR | 68.3 | 0.657 | PA | 97.60 | 35.50 | 70.00 | 39.40 | 0.00 | 47.70 | 91.40 | 47.10 | 48.60 | 100.00 | 89.30 |
| | | | | UA | 58.50 | 66.70 | 66.70 | 80.00 | 0.00 | 68.80 | 75.90 | 70.00 | 100.00 | 85.70 | 61.50 |
| | | | | Â | 821.72 | 932.59 | 502.16 | 378.25 | 241.30 | 456.51 | 1923.88 | 502.16 | 202.17 | 91.30 | 469.56 |
| | | | | SE(Â) | 104.35 | 156.52 | 97.82 | 97.82 | 84.78 | 110.87 | 195.65 | 123.91 | 45.65 | 13.04 | 71.74 |
| | ik-NN | 72.1 | 0.732 | PA | 98.20 | 55.40 | 81.90 | 45.10 | 32.50 | 38.30 | 93.60 | 28.20 | 31.60 | 100.00 | 87.70 |
| | | | | UA | 80.00 | 65.20 | 80.80 | 80.00 | 60.00 | 84.60 | 68.60 | 91.70 | 91.70 | 92.30 | 62.10 |
| | | | | Â | 886.94 | 808.68 | 404.34 | 541.29 | 202.17 | 463.04 | 1878.23 | 456.51 | 189.13 | 97.82 | 593.47 |
| | | | | SE(Â) | 78.26 | 143.48 | 78.26 | 123.91 | 71.74 | 130.43 | 208.69 | 123.91 | 52.17 | 6.52 | 104.35 |
| | RF | 67.7 | 0.670 | PA | 92.20 | 64.00 | 62.10 | 42.10 | 46.90 | 18.90 | 89.90 | 38.40 | 42.00 | 100.00 | 87.80 |
| | | | | UA | 77.40 | 63.00 | 85.70 | 81.80 | 66.70 | 100.00 | 59.50 | 85.70 | 91.70 | 92.30 | 58.10 |
| | | | | Â | 886.94 | 763.03 | 502.16 | 515.21 | 182.61 | 697.81 | 1689.10 | 397.82 | 215.21 | 104.35 | 567.38 |
| | | | | SE(Â) | 104.35 | 123.91 | 104.35 | 123.91 | 58.69 | 163.04 | 215.21 | 97.82 | 52.17 | 6.52 | 104.35 |
| | SVM | 73.2 | 0.748 | PA | 94.90 | 54.30 | 73.60 | 47.10 | 26.80 | 36.20 | 94.40 | 42.80 | 46.50 | 100.00 | 84.80 |
| | | | | UA | 76.70 | 64.00 | 87.00 | 100.00 | 60.00 | 78.60 | 70.60 | 92.30 | 92.90 | 92.30 | 63.00 |
| | | | | Â | 880.42 | 886.94 | 404.34 | 456.51 | 189.13 | 397.82 | 1956.49 | 463.04 | 169.56 | 104.35 | 613.03 |
| | | | | SE(Â) | 91.30 | 163.04 | 84.78 | 110.87 | 71.74 | 117.39 | 215.21 | 104.35 | 52.17 | 6.52 | 110.87 |
| IMG 2 | MLR | 63.9 | 0.611 | PA | 67.80 | 37.70 | 71.80 | 33.70 | 8.80 | 96.30 | 85.30 | 39.50 | 41.40 | 100.00 | 86.20 |
| | | | | UA | 54.50 | 58.80 | 66.70 | 53.30 | 10.00 | 84.20 | 67.90 | 70.00 | 90.90 | 92.30 | 64.30 |
| | | | | Â | 854.33 | 978.24 | 469.56 | 371.73 | 189.13 | 228.26 | 1813.01 | 645.64 | 280.43 | 104.35 | 586.95 |
| | | | | SE(Â) | 150.00 | 189.13 | 71.74 | 84.78 | 71.74 | 26.09 | 215.21 | 136.95 | 65.22 | 6.52 | 104.35 |
| | ik-NN | 64.3 | 0.673 | PA | 90.90 | 36.50 | 61.20 | 44.90 | 56.30 | 42.40 | 84.50 | 38.80 | 38.10 | 86.00 | 82.00 |
| | | | | UA | 74.20 | 80.00 | 62.10 | 85.70 | 83.30 | 85.70 | 51.20 | 81.30 | 92.30 | 91.70 | 63.00 |
| | | | | Â | 854.33 | 1317.37 | 417.38 | 404.34 | 104.35 | 365.21 | 1643.45 | 436.95 | 182.61 | 91.30 | 704.33 |
| | | | | SE(Â) | 104.35 | 202.17 | 91.30 | 84.78 | 39.13 | 110.87 | 228.26 | 110.87 | 58.69 | 13.04 | 123.91 |
| | RF | 67.5 | 0.712 | PA | 86.30 | 39.40 | 65.90 | 66.30 | 53.10 | 58.60 | 85.00 | 42.00 | 28.00 | 100.00 | 80.20 |
| | | | | UA | 78.60 | 68.80 | 75.00 | 91.70 | 62.50 | 87.50 | 58.30 | 100.00 | 91.70 | 85.70 | 56.70 |
| | | | | Â | 913.03 | 1180.41 | 404.34 | 319.56 | 104.35 | 293.47 | 1760.84 | 547.82 | 195.65 | 91.30 | 717.38 |
| | | | | SE(Â) | 110.87 | 202.17 | 84.78 | 45.65 | 39.13 | 84.78 | 228.26 | 117.39 | 65.22 | 13.04 | 136.95 |
| | SVM | 68.4 | 0.717 | PA | 83.10 | 38.60 | 66.00 | 52.10 | 41.80 | 66.00 | 85.90 | 43.80 | 42.10 | 100.00 | 89.40 |
| | | | | UA | 67.70 | 64.70 | 72.00 | 81.80 | 80.00 | 88.20 | 63.60 | 100.00 | 92.90 | 85.70 | 64.30 |
| | | | | Â | 815.20 | 1180.41 | 436.95 | 345.65 | 130.43 | 247.82 | 1871.70 | 534.77 | 136.95 | 91.30 | 723.90 |
| | | | | SE(Â) | 104.35 | 208.69 | 91.30 | 65.22 | 45.65 | 78.26 | 228.26 | 123.91 | 52.17 | 13.04 | 117.39 |
| IMG 3 | MLR | 64.2 | 0.611 | PA | 73.20 | 36.00 | 69.70 | 29.40 | 6.90 | 96.20 | 84.70 | 50.80 | 48.40 | 100.00 | 85.40 |
| | | | | UA | 54.50 | 58.80 | 66.70 | 53.30 | 10.00 | 84.20 | 67.90 | 70.00 | 90.90 | 92.30 | 64.30 |
| | | | | Â | 939.11 | 971.72 | 430.43 | 384.78 | 221.74 | 254.34 | 1663.01 | 717.38 | 319.56 | 117.39 | 502.16 |
| | | | | SE(Â) | 156.52 | 182.61 | 71.74 | 97.82 | 78.26 | 26.09 | 202.17 | 136.95 | 71.74 | 6.52 | 97.82 |
| | ik-NN | 66.9 | 0.684 | PA | 88.80 | 40.30 | 87.70 | 67.50 | 38.90 | 18.40 | 90.10 | 38.50 | 37.10 | 86.80 | 87.80 |
| | | | | UA | 75.90 | 60.00 | 69.00 | 78.60 | 100.00 | 54.50 | 60.00 | 81.30 | 85.70 | 91.70 | 70.80 |
| | | | | Â | 919.55 | 965.20 | 319.56 | 280.43 | 169.56 | 723.90 | 1754.32 | 443.47 | 182.61 | 97.82 | 658.68 |
| | | | | SE(Â) | 117.39 | 182.61 | 45.65 | 45.65 | 58.69 | 182.61 | 228.26 | 117.39 | 58.69 | 13.04 | 104.35 |
| | RF | 69.5 | 0.721 | PA | 90.90 | 51.20 | 84.80 | 56.10 | 46.80 | 13.70 | 91.40 | 39.60 | 45.50 | 100.00 | 100.00 |
| | | | | UA | 82.10 | 61.50 | 74.10 | 90.90 | 100.00 | 80.00 | 60.50 | 85.70 | 92.90 | 92.30 | 67.90 |
| | | | | Â | 913.03 | 893.46 | 313.04 | 352.17 | 176.08 | 743.46 | 1754.32 | 502.16 | 150.00 | 104.35 | 613.03 |
| | | | | SE(Â) | 104.35 | 163.04 | 45.65 | 78.26 | 52.17 | 176.08 | 221.74 | 117.39 | 45.65 | 6.52 | 78.26 |
| | SVM | 71.2 | 0.743 | PA | 92.20 | 42.60 | 86.00 | 68.40 | 50.60 | 25.80 | 94.50 | 37.10 | 43.20 | 88.90 | 97.70 |
| | | | | UA | 77.40 | 59.10 | 83.30 | 92.30 | 100.00 | 85.70 | 65.80 | 81.30 | 100.00 | 91.70 | 66.70 |
| | | | | Â | 886.94 | 1043.46 | 358.69 | 280.43 | 136.95 | 658.68 | 1819.53 | 469.56 | 169.56 | 117.39 | 586.95 |
| | | | | SE(Â) | 104.35 | 189.13 | 45.65 | 39.13 | 45.65 | 150.00 | 208.69 | 117.39 | 52.17 | 13.04 | 78.26 |
| IMG 4 | MLR | 65.9 | 0.611 | PA | 69.30 | 39.40 | 78.70 | 26.60 | 1.20 | 98.70 | 85.50 | 31.30 | 58.80 | 100.00 | 83.30 |
| | | | | UA | 54.50 | 58.80 | 66.70 | 53.30 | 10.00 | 84.20 | 67.90 | 70.00 | 90.90 | 92.30 | 64.30 |
| | | | | Â | 854.33 | 1030.42 | 560.86 | 345.65 | 195.65 | 456.51 | 1799.97 | 449.99 | 273.91 | 71.74 | 489.12 |
| | | | | SE(Â) | 156.52 | 189.13 | 84.78 | 84.78 | 71.74 | 45.65 | 215.21 | 130.43 | 45.65 | 6.52 | 97.82 |
| | ik-NN | 74.3 | 0.781 | PA | 91.10 | 52.60 | 69.50 | 44.90 | 39.30 | 55.00 | 94.70 | 35.20 | 47.00 | 100.00 | 87.20 |
| | | | | UA | 75.00 | 81.30 | 90.00 | 92.90 | 57.10 | 87.50 | 67.60 | 93.30 | 92.90 | 92.30 | 70.80 |
| | | | | Â | 867.38 | 1036.94 | 449.99 | 449.99 | 123.91 | 280.43 | 2034.75 | 384.78 | 143.48 | 97.82 | 639.12 |
| | | | | SE(Â) | 110.87 | 176.08 | 84.78 | 130.43 | 52.17 | 84.78 | 228.26 | 104.35 | 45.65 | 6.52 | 104.35 |
| | RF | 80.0 | 0.802 | PA | 89.40 | 61.90 | 78.40 | 68.90 | 46.00 | 77.20 | 95.10 | 34.20 | 33.20 | 100.00 | 98.10 |
| | | | | UA | 85.20 | 69.20 | 83.30 | 92.30 | 62.50 | 100.00 | 82.10 | 83.30 | 100.00 | 92.30 | 69.20 |
| | | | | Â | 945.63 | 952.16 | 391.30 | 280.43 | 130.43 | 189.13 | 2204.31 | 482.60 | 176.08 | 91.30 | 678.25 |
| | | | | SE(Â) | 110.87 | 163.04 | 58.69 | 52.17 | 52.17 | 32.61 | 195.65 | 136.95 | 58.69 | 13.04 | 84.78 |
| | SVM | 80.3 | 0.813 | PA | 89.40 | 63.30 | 77.80 | 70.20 | 42.20 | 81.00 | 95.10 | 39.60 | 39.30 | 100.00 | 97.50 |
| | | | | UA | 82.10 | 73.90 | 90.50 | 93.30 | 71.40 | 100.00 | 80.60 | 86.70 | 92.30 | 85.70 | 69.20 |
| | | | | Â | 932.59 | 971.72 | 391.30 | 358.69 | 123.91 | 234.78 | 2223.87 | 710.86 | 378.25 | 97.82 | 639.12 |
| | | | | SE(Â) | 113.48 | 163.04 | 58.69 | 104.35 | 52.17 | 39.13 | 208.69 | 182.61 | 143.48 | 13.04 | 84.78 |
* Class 1: dense evergreen; 2: open evergreen; 3: semi-evergreen; 4: dipterocarp; 5: plantation; 6: rubber; 7: industrial plants; 8: crop land; 9: residential; 10: water surface; 11: other land.
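The class area estimates Â and standard errors SE(Â) in Table 6 come from stratified estimation based on the confusion matrix, in the spirit of the good-practice estimators of Olofsson et al. [82]. The following is a minimal, simplified sketch of such an estimator; the confusion matrix and mapped areas in the example are illustrative, not the study's data, and the variance formula actually used in the paper may differ in detail:

```python
import math

def stratified_area_estimates(conf, map_areas):
    """Stratified area estimates and standard errors from a confusion matrix.

    conf[i][j]   -- sample count with map class i and reference class j
    map_areas[i] -- mapped area of class i (km^2); map classes act as strata
    Returns (a_hat, se): lists indexed by reference class j.
    """
    total_area = sum(map_areas)
    weights = [a / total_area for a in map_areas]   # stratum weights W_i
    row_tot = [sum(row) for row in conf]            # n_i. (sample size per stratum)
    k = len(conf)
    a_hat, se = [], []
    for j in range(k):
        # p-hat_j = sum_i W_i * n_ij / n_i.
        p = sum(weights[i] * conf[i][j] / row_tot[i] for i in range(k))
        # variance of p-hat_j from within-stratum binomial variances
        v = sum(weights[i] ** 2
                * (conf[i][j] / row_tot[i]) * (1 - conf[i][j] / row_tot[i])
                / (row_tot[i] - 1)
                for i in range(k))
        a_hat.append(total_area * p)
        se.append(total_area * math.sqrt(v))
    return a_hat, se

# Illustrative two-class example: 50 validation units per map class.
a_hat, se = stratified_area_estimates([[45, 5], [10, 40]], [4000.0, 2516.0])
```

By construction the estimates Â_j sum to the total mapped area, which is why the Table 6 area columns for each classifier partition the roughly 6516 km2 study area.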