Object-Level Double Constrained Method for Land Cover Change Detection

Land cover change detection based on remote sensing has become increasingly important for protecting the ecological environment. Object-level methods can extract the spatial features of images. However, the computational complexity is high when many features are used to detect land cover change. Meanwhile, single-constrained change detection (SCCD) methods produce subjective and inaccurate results. Therefore, we propose an object-level double constrained change detection (ODCD) method for land cover change detection. First, spectral and spatial features were calculated based on multi-scale segmentation results. Second, using the significant difference test (SDT), feature differences among all categories were calculated, and the features with the most significant differences were selected as the optimal features. Third, the maximum Kappa coefficient was used as the criterion for determining the optimal change intensity and correlation coefficient thresholds. Finally, the ODCD was validated using GF-1 satellite images acquired in March 2016 and February 2017 over northern Beiqijia Town, Beijing. Through optimal feature selection, the feature dimension was reduced from 26 to 12. Compared with SCCD methods, the ODCD result was more reliable and accurate: its overall accuracy was 10% higher, its overall error was 27% lower, and its Kappa coefficient was 0.22 higher. In conclusion, the ODCD is effective for land cover change detection and can improve computational efficiency.


Introduction
Owing to the rapid increase of urban populations and the rapid expansion of urban areas, many ecological and environmental problems, such as reduced vegetation cover and increased surface runoff, have gradually become more serious [1]. As the core of ecological environment change monitoring, land cover change detection has become a hot topic in environmental science and ecology [2]. Remote sensing technology has the advantages of being macroscopic, comprehensive, dynamic, and rapid, and it is the most economical and effective means of detecting land cover changes [3]. Various remote-sensing methods have been applied to this problem: Yuan et al. used Principal Component Analysis (PCA) to identify land cover changes based on multi-temporal Landsat5 TM images [4]; Johnson et al. used Change Vector Analysis (CVA) to detect land cover changes in Landsat5 TM multispectral images [5]; Zhou and Yang used ratios of different images to detect changes in Anshun City, Guizhou Province [6]; and Li and Ye used PCA to detect changes in Dongguan, Zhujiang Delta [7]. However, these methods all operate at the pixel level; thus, they cannot use the spatial characteristics of images and are prone to the serious "salt-and-pepper" phenomenon [8]. The GF-1 satellite is a high-resolution Earth observation remote sensing satellite launched by China on 26 April 2013. It is equipped with two cameras (PMS) with a 2-m resolution panchromatic band and an 8-m resolution multispectral band, as well as four 16-m resolution multispectral wide-format cameras (WFV1-WFV4). Its multispectral sensors have four bands: blue, green, red, and near-infrared. The revisit period is 4 days. GF-1 satellite data are characterized by high resolution, wide swath width, and a short revisit period, and they can be widely used in agricultural remote sensing, environmental monitoring, and other fields [25,26].

Methods
This study used object-level CVA and the correlation coefficient to achieve land cover change detection. To reduce data redundancy and improve the quality of the selected features, the significant difference test (SDT) was performed to select the features with the most significant differences as the optimal features. When determining the optimal change threshold, the optimal thresholds of the change intensity and the correlation coefficient were selected based on the maximum Kappa coefficient. The Kappa coefficient is a consistency test method proposed by Cohen in 1960 to evaluate the classification results of remote sensing images [27,28]. It is calculated according to Equations (1) and (2):

k = (P_0 - P_e) / (1 - P_e) (1)

P_0 = (Σ_{i=1}^{n} P_ii) / N, P_e = (Σ_{i=1}^{n} a_i b_i) / N^2 (2)

where k is the Kappa coefficient; P_0 is the proportion of units in which the judges agreed; P_e is the proportion of units for which agreement is expected by chance; n is the number of classification types; N is the total number of samples; P_ii is the number of correctly classified samples of type i; and a_i and b_i are the actual and predicted numbers of samples of type i, respectively. The flow chart for the ODCD is shown in Figure 2, which includes the following steps. First, atmospheric correction, geometric correction, orthorectification, and image registration were performed to reduce noise and improve image quality. Second, multi-scale segmentation was used to obtain highly homogeneous objects and the initial land cover categories, such as vegetation, bare land, residential, and waterbody. Third, common spectral, shape, and texture features of the objects were calculated based on the multi-scale segmentation results. Fourth, using the SDT, the differences of the above features amongst all the categories were calculated, and the features with the most significant differences were considered the optimal features. Fifth, the change intensity was calculated via CVA using the optimal features, and the correlation coefficients between corresponding objects in the GF-1 images of 2016 and 2017 were calculated. Sixth, based on the change intensity and the correlation coefficient calculated in the previous step, the maximum Kappa coefficient was used as the criterion for determining the optimal thresholds of the change intensity and the correlation coefficient. Seventh, using these optimal thresholds, the results of land cover change detection were obtained. Finally, the overall accuracy, the Kappa coefficient, and the overall error were selected as the accuracy evaluation indexes.
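To make Equations (1) and (2) concrete, the Kappa coefficient can be computed directly from two label sequences, as in the illustrative sketch below (the function name `cohens_kappa` is ours, not from the paper):

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa for two equal-length label sequences (Equations (1)-(2))."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    # P_0: observed proportion of agreement between the two label sets
    p0 = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # P_e: agreement expected by chance, from the marginal label frequencies
    count_a, count_b = Counter(labels_a), Counter(labels_b)
    pe = sum(count_a[c] * count_b[c] for c in count_a) / (n * n)
    return (p0 - pe) / (1 - pe)
```

Perfect agreement yields a kappa of 1, while chance-level agreement yields 0.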

Multi-Scale Segmentation
Multi-scale segmentation is a widely used image segmentation method that exhibits good results for object-level remote sensing image analysis. It comprehensively considers the spectral and spatial features of remote sensing images, and it uses a bottom-up iterative merging algorithm to segment an image into objects with high homogeneity [29]. The homogeneity of objects is calculated as the standard deviation of the objects' internal pixels, whilst the heterogeneity includes the spectral and shape heterogeneity of objects [30]. In this study, both images underwent multi-scale segmentation simultaneously to obtain the same segmentation objects. This ensured that a homogeneous object in one image did not correspond to multiple or mixed objects in the other image, which could occur if only a single image were used for segmentation. Furthermore, the segmentation scale should be set so that the segmentation results are not too fragmented while the differences between neighboring objects remain evident. The objects should have high internal homogeneity and be consistent with the actual boundaries of features. The main parameters of multi-scale segmentation are the segmentation scale and the homogeneity factor. The homogeneity factor comprises spectral features and shape factors, and the sum of their weights is 1. Spectral features are typically the most important factors for image object generation. When the weight of the shape factor is higher than 0.5, the generated polygons are too regular, have no practical meaning, and therefore do not conform to the actual features of the objects [31]. Thus, the weight of the spectral features should be greater than 0.6 [32,33].
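The merging criterion described above can be illustrated with a simplified, single-band sketch. Here `merge_cost` approximates only the spectral part of the heterogeneity increase (size-weighted standard deviation); the shape term used in full multiresolution segmentation is omitted, and both function names are ours:

```python
import math

def spectral_heterogeneity(values):
    # size-weighted standard deviation of an object's pixel values:
    # h = n * sigma, a simplified single-band homogeneity measure
    n = len(values)
    mean = sum(values) / n
    sigma = math.sqrt(sum((v - mean) ** 2 for v in values) / n)
    return n * sigma

def merge_cost(obj1, obj2, w_spectral=0.8):
    # increase in heterogeneity caused by merging two objects; merging
    # proceeds only while this cost stays below the scale parameter.
    # The shape term of full multiresolution segmentation is omitted here.
    merged = obj1 + obj2
    delta = (spectral_heterogeneity(merged)
             - spectral_heterogeneity(obj1)
             - spectral_heterogeneity(obj2))
    return w_spectral * delta
```

With the spectral weight of 0.8 used in this study, two identical flat objects merge at zero cost, while dissimilar objects incur a positive cost.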

Feature Construction
Object-level change detection can utilize not only the spectral features, but also the spatial features of images, including texture and shape features, to obtain more descriptive information. In traditional change detection methods, spectral features are the most important factor because they directly express the visual information of the image [34]. In this study, four commonly used spectral features, namely the mean, standard deviation, normalized difference vegetation index (NDVI), and normalized difference water index (NDWI), were selected; their calculation formulas are shown in Table 1.

Table 1. Spectral features.
- Mean: µ = (Σ V_i) / N, where V_i is the value of pixel i and N is the total number of pixels in one object.
- Standard deviation: σ = sqrt((Σ (V_i − µ)^2) / N), where V_i is the value of each pixel in the object and µ is the mean of the object.
- NDVI = (ρ_NIR − ρ_R) / (ρ_NIR + ρ_R), where ρ_NIR is the reflectance of the near-infrared band and ρ_R is the reflectance of the red band.
- NDWI = (ρ_Green − ρ_NIR) / (ρ_Green + ρ_NIR), where ρ_Green is the reflectance of the green band and ρ_NIR is the reflectance of the near-infrared band.
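The spectral features in Table 1 are straightforward to compute per object; a minimal sketch (function names are ours) assuming per-band reflectance values are already available:

```python
def object_mean(pixel_values):
    # mean gray value of one segmented object
    return sum(pixel_values) / len(pixel_values)

def ndvi(nir, red):
    # NDVI = (rho_NIR - rho_R) / (rho_NIR + rho_R)
    return (nir - red) / (nir + red)

def ndwi(green, nir):
    # NDWI = (rho_Green - rho_NIR) / (rho_Green + rho_NIR)
    return (green - nir) / (green + nir)
```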
Texture features can reflect the regional characteristics of a remote sensing image. In 1973, Haralick proposed the characteristic parameters for analyzing the Gray Level Co-occurrence Matrix (GLCM) [35]. The GLCM is a widely used method for calculating texture features. In this study, three common features, including correlation, dissimilarity, and energy were selected, and the calculation formulas are shown in Table 2.

Table 2. Texture features.
- Correlation: COR = Σ_i Σ_j [(i − u_i)(j − u_j) P(i, j)] / (S_i S_j), where i is the gray value of any point in the image, j is the gray value of another point at a fixed offset from it, P(i, j) is the frequency of occurrence of the gray pair in the gray level co-occurrence matrix, u_i and u_j are the mean values in the row and column directions, respectively, and S_i and S_j are the standard deviations in the row and column directions, respectively. Correlation reflects the consistency of the image texture and measures the similarity of co-occurrence matrix elements in the row or column direction.
- Dissimilarity: DIS = Σ_i Σ_j |i − j| P(i, j), where P(i, j) is the frequency of occurrence of the gray pair in the gray level co-occurrence matrix. The higher the local contrast, the higher the dissimilarity.
- Energy: ENE = Σ_i Σ_j P(i, j)^2, where P(i, j) is the frequency of occurrence of the gray pair in the gray level co-occurrence matrix. Energy is also called the "angular second moment." When the image is a homogeneous area with a consistent texture, its energy is greater.

Shape features can reflect the shape information of an object in a remote sensing image and describe the assemblage of its shape characteristics, which helps to avoid the phenomenon of "same object with different spectra, different objects with same spectrum" [36]. The area, length, width, shape index, and aspect ratio are generally used to describe shape features. In this study, the area, shape index, and aspect ratio were selected. The calculation formulas are shown in Table 3.

Table 3. Shape features.

- Area: A = Σ x_i, where x_i is the area of pixel i. This describes the size of the object. For non-georeferenced data, the area of each pixel is 1, so A equals the number of pixels.
- Aspect ratio: γ = l / w, where l is the length and w is the width of each object, derived from the covariance matrix S composed of the coordinates of the points after object vectorization.
- Shape index: s = p / (4 √A), where p is the perimeter of the image object and A is the area of the image object. This describes the compactness of an object; the smaller the shape index, the more compact the object and the more similar its shape is to a square.
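The GLCM-based texture features described above (correlation, dissimilarity, and energy, Table 2) can be sketched as follows. This is a simplified implementation for a single pixel offset, assuming a small integer-valued image; production code would typically use an optimized library instead:

```python
import numpy as np

def glcm(image, levels, dx=1, dy=0):
    """Normalized gray-level co-occurrence matrix for one pixel offset."""
    P = np.zeros((levels, levels))
    h, w = image.shape
    for y in range(h - dy):
        for x in range(w - dx):
            P[image[y, x], image[y + dy, x + dx]] += 1
    return P / P.sum()

def energy(P):
    # sum of squared matrix entries ("angular second moment")
    return (P ** 2).sum()

def dissimilarity(P):
    # expected absolute gray-level difference of co-occurring pairs
    i, j = np.indices(P.shape)
    return (np.abs(i - j) * P).sum()

def correlation(P):
    # normalized covariance of the row and column gray levels
    i, j = np.indices(P.shape)
    ui, uj = (i * P).sum(), (j * P).sum()
    si = np.sqrt(((i - ui) ** 2 * P).sum())
    sj = np.sqrt(((j - uj) ** 2 * P).sum())
    return (((i - ui) * (j - uj) * P).sum()) / (si * sj)
```

A perfectly uniform image has energy 1 and dissimilarity 0, while a checkerboard has high dissimilarity and negative correlation at a one-pixel offset.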

Feature Selection
Considering the spectral, texture, and shape features of objects and the multi-band characteristics, data redundancy is inevitable. To reduce this redundancy and improve feature quality, it is necessary to select the features that effectively describe the information of each object. The SDT is used to test whether the difference between an experimental group and a control group, or between the effects of two different treatments, is significant [37]. Therefore, we used the SDT to calculate the significance of the differences between land cover types for the selected category features in each band to optimize the features. The greater the difference in a feature, the more significant the band feature is, and the more suitable it is for selection as an optimal feature. Variance analysis was applied to perform the SDT in this study. The significant difference was calculated according to Equations (3)-(10):

F = MS_b / MS_w (3)
MS_b = SS_b / V_b (4)
MS_w = SS_w / V_w (5)
SS_b = Σ_{i=1}^{k} b (X̄_i − X̄)^2 (6)
SS_w = Σ_{i=1}^{k} Σ_{j=1}^{b} (X_ij − X̄_i)^2 (7)
SS = SS_b + SS_w (8)
V_b = k − 1 (9)
V_w = N − k (10)

where F is the statistic calculated by the analysis of variance, MS_b is the variance between groups, MS_w is the variance within groups, SS_b is the sum of squared deviations between groups, SS_w is the sum of squared deviations within groups, SS is the total sum of squared deviations, X_ij is the j-th sample value of the i-th group, X̄_i is the mean of the i-th group, X̄ is the grand mean, N is the total number of samples, b is the number of samples in each group, V_b is the degree of freedom between groups, V_w is the degree of freedom within groups, and k is the total number of groups. The F distribution is F~F(v_b, v_w), as in Reference [38]. According to the F threshold table, F_α(v_b, v_w) can be found. Generally, the value of α is 0.05 or 0.01. By comparing F with F_α(v_b, v_w), the significance of the difference can be determined. For example, when F > F_0.01(v_b, v_w), the difference is extremely significant.
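The one-way analysis of variance behind Equations (3)-(10) reduces to a short computation. The sketch below (function name ours) returns the F statistic for a list of groups; a feature difference is judged significant when F exceeds the tabulated critical value, e.g. the F_0.05(1, 9) = 5.1174 used in this study:

```python
def f_statistic(groups):
    """One-way ANOVA F statistic (Equation (3)): F = MS_b / MS_w."""
    all_values = [v for g in groups for v in g]
    grand_mean = sum(all_values) / len(all_values)
    k = len(groups)
    # between-group sum of squares and degrees of freedom
    ss_b = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    v_b = k - 1
    # within-group sum of squares and degrees of freedom
    ss_w = sum(sum((v - sum(g) / len(g)) ** 2 for v in g) for g in groups)
    v_w = len(all_values) - k
    return (ss_b / v_b) / (ss_w / v_w)
```

For example, `f_statistic([[1, 2, 3, 4, 5], [3, 4, 5, 6, 7]])` yields F = 4.0, which is below 5.1174, so this difference would not be considered significant.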

Change Vector Analysis
CVA was first proposed by Malila in 1980 [39]. It expresses multiple characteristics of each object using an n-dimensional vector, where n is the number of features of the image. In this study, because the dimensions and magnitudes of the features used were quite different, standardization [40] was performed for each feature prior to its use in change detection. The change vector contains all the change information for a given object between two images. The feature vector of object i at time t is expressed as Equation (11):

X_i(t) = (x_i^1(t), x_i^2(t), ..., x_i^n(t))^T (11)

where the feature vectors of the object at times t_1 and t_2 are X_i(t_1) and X_i(t_2), respectively, n is the number of features, and x_i^k(t) represents the normalized value of the k-th feature of object i at time t. The change intensity can be calculated using the Euclidean distance, as in Equation (12):

ΔG = sqrt(Σ_{k=1}^{n} (x_i^k(t_2) − x_i^k(t_1))^2) (12)

where ΔG characterizes all the feature differences between the two remote sensing images. The larger ΔG is, the more likely the object has changed. Changed and unchanged objects can be detected by setting a change threshold on the change intensity. By determining the change threshold, the change area can be determined from the change intensity map easily and accurately.
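Equations (11) and (12) amount to the Euclidean norm of the difference between two standardized feature vectors; a minimal sketch (function names ours, threshold value illustrative):

```python
import math

def change_intensity(x_t1, x_t2):
    """Euclidean norm of the change vector (Equation (12)).
    x_t1, x_t2: standardized optimal-feature vectors of one object."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x_t1, x_t2)))

def is_changed(x_t1, x_t2, threshold):
    # an object is flagged as changed when delta-G exceeds the threshold
    return change_intensity(x_t1, x_t2) > threshold
```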

Correlation Coefficient Calculation
According to pattern recognition theory, multiple unrelated constraints should be used for recognition to avoid the limitations of a single constraint [41]. In the traditional land cover change detection method, only the change intensity is utilized to determine the changed and unchanged areas, and its accuracy is not ideal. This study introduced the correlation coefficient and combined it with the change intensity to determine the change area. Based on the multi-scale segmentation results, we calculated the correlation coefficient between corresponding objects. When an object changes, the correlation coefficient is low; when it does not change, the correlation coefficient is high. The correlation coefficient is calculated using Equation (13):

r_i = Σ_{k=1}^{n} (x_i^k(t_1) − x̄_i(t_1))(x_i^k(t_2) − x̄_i(t_2)) / sqrt(Σ_{k=1}^{n} (x_i^k(t_1) − x̄_i(t_1))^2 · Σ_{k=1}^{n} (x_i^k(t_2) − x̄_i(t_2))^2) (13)

where n is the number of bands, x_i^k(t) represents the average gray value of all the pixels of object i in band k in the t-phase image, and x̄_i(t) represents the mean gray value of object i over the n bands in the t-phase image.
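Equation (13) is the Pearson correlation between an object's band-mean vectors at the two dates; a direct sketch (function name ours):

```python
import math

def object_correlation(g_t1, g_t2):
    """Pearson correlation between the band-mean vectors of one object
    at the two dates (Equation (13)); the vector length is the band count."""
    n = len(g_t1)
    m1, m2 = sum(g_t1) / n, sum(g_t2) / n
    cov = sum((a - m1) * (b - m2) for a, b in zip(g_t1, g_t2))
    s1 = math.sqrt(sum((a - m1) ** 2 for a in g_t1))
    s2 = math.sqrt(sum((b - m2) ** 2 for b in g_t2))
    return cov / (s1 * s2)
```

An unchanged object yields a correlation near 1, while a strongly changed object yields a low or negative correlation.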

Optimal Threshold Determination for Change Detection
According to the research of Tung Fung et al. [42], the Kappa coefficient based on a confusion matrix may be the most appropriate index for determining the optimal change threshold. Therefore, in this study, changed and unchanged samples were selected, and the change intensity and correlation coefficient thresholds with the maximum Kappa coefficient were selected as the optimal change thresholds. The two thresholds were then applied to detect the land cover changes, and the change detection result image was generated. The confusion matrix [43] for the land cover change detection results is shown in Table 4. The misjudgment error, omission error, detection accuracy, overall accuracy, and Kappa coefficient are calculated according to Equations (14)-(18):

Misjudgment error = N_nc / N_nt (14)
Omission error = N_cn / N_ct (15)
Detection accuracy = N_cc / N_ct (16)
Overall accuracy = (N_nn + N_cc) / N (17)
k_hat = [N(N_nn + N_cc) − (N_nt N_tn + N_ct N_tc)] / [N^2 − (N_nt N_tn + N_ct N_tc)] (18)

where N_nn represents the number of samples detected as unchanged that have not changed in practice, N_cn represents the number of changed samples incorrectly identified as unchanged, N_tn is the total number of unchanged samples in the test results, N_nc represents the number of unchanged samples incorrectly identified as changed, N_cc represents the number of samples detected as changed that have changed in practice, N_tc represents the total number of changed samples in the test results, N_nt represents the total number of unchanged samples in practice, N_ct represents the total number of changed samples in practice, N represents the total number of samples, and k_hat is the Kappa coefficient.
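The accuracy indexes above can be checked with a small helper that takes the four confusion-matrix cells (the symbol convention follows the definitions given with Table 4; the function name is ours):

```python
def change_detection_metrics(n_nn, n_nc, n_cn, n_cc):
    """Accuracy indexes from the change-detection confusion matrix
    (Equations (14)-(18)). First index: actual class, second: detected
    class (n = unchanged, c = changed)."""
    n = n_nn + n_nc + n_cn + n_cc
    n_nt, n_ct = n_nn + n_nc, n_cn + n_cc   # actual totals
    n_tn, n_tc = n_nn + n_cn, n_nc + n_cc   # detected totals
    misjudgment = n_nc / n_nt               # false alarms / actual unchanged
    omission = n_cn / n_ct                  # misses / actual changed
    detection_acc = n_cc / n_ct
    overall = (n_nn + n_cc) / n
    p_e = (n_nt * n_tn + n_ct * n_tc) / n ** 2   # chance agreement
    kappa = (overall - p_e) / (1 - p_e)
    return misjudgment, omission, detection_acc, overall, kappa
```

For instance, with 40 correct unchanged, 10 false alarms, 5 misses, and 45 correct changed samples, the overall accuracy is 0.85 and the Kappa coefficient is 0.7.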

Multi-Scale Segmentation
The optimal parameters for multi-scale segmentation of the remote sensing images were determined by comparison experiments. The segmentation results for different parameters are shown in Figures 3-5. Figure 3 shows that when the segmentation scale is 15, the result is too fine, which makes the result complex, and when the segmentation scale is 35, the result is too rough and does not show clear differences. A segmentation scale of 25 can segment the image into patches with high internal homogeneity. Figure 4 shows that when the weights of the shape and spectral features are 0.2 and 0.8, respectively, the result shows clearer differences and segments the image into patches with high internal homogeneity. Figure 5 shows that when the weight of compactness is 0.5, the result is too fine, which makes the result complex, and when the weight of compactness is 0.9, the result is too rough and does not show clear differences. A compactness weight of 0.7 can segment the image into patches with high internal homogeneity. Therefore, the segmentation scale was 25, the spectral and shape weights were 0.8 and 0.2, and the smoothness and compactness weights were 0.3 and 0.7, respectively.
The results of the multi-scale segmentation are shown in Figure 6. The scale and weights set for the multi-scale segmentation were reasonable, since they avoided excessive fragmentation of the segmentation results and effectively reflected the differences between the different patches, which were consistent with the feature boundaries. Therefore, the results were satisfactory.

Optimal Feature Selection
Some of the differences within features, e.g., the differences in the mean grey value and the energy of each band, are shown as follows. Figure 7 shows the differences in the mean grey value of each band. F_0.05(1,9) was the limiting value, equal to 5.1174. Bands 1, 2, 3, and 4 represent the blue, green, red, and near-infrared bands, respectively. Generally, when F was higher than F_0.05(1,9), the corresponding difference between the two categories was more significant. Comparing the differences in the mean of the spectral features (Figure 7), the differences between residential and bare land in band 1 (Figure 8) and between vegetation and residential land in bands 3 and 4 (Figure 9) were not significant, because F was much lower than F_0.05(1,9). The differences between all the categories in band 2 were more significant, as F was higher than F_0.05(1,9).

Compared with the differences in the energy of the texture features (Figure 10), the differences between residential and bare land were not significant in bands 2, 3, and 4, because F was much lower than F_0.05(1,9). All the differences between the categories in band 1 were significant, as F was higher than F_0.05(1,9).
Using the SDT, the most distinctive features among the categories were finally selected (Figure 11). The most distinctive spectral features were the mean of band 2, the variance of band 1, the NDVI, and the NDWI. The most distinctive texture features were the correlation of bands 3 and 4, the dissimilarity of bands 1 and 3, and the energy of band 1. The shape features were the length-width ratio, the area, and the shape index. In total, 12 features were selected as the optimal features for land cover change detection.

Figure 10. Comparison of the differences in the energy texture feature.
Figure 11. The processes of selecting the optimal features.

Change Intensity and the Correlation Coefficient
According to the multi-scale segmentation results, we calculated the change intensity and correlation coefficient of the objects. The change intensity map for the GF-1 images in 2016 and 2017 is shown in Figure 12. When a patch was changed, its change intensity value was greater and its color was brighter; when a patch was unchanged, the change intensity value was smaller and the color was darker. The correlation coefficient map for the GF-1 images in 2016 and 2017 is shown in Figure 13. When a patch was changed, its correlation coefficient was smaller and its color was brighter; when a patch was unchanged, the correlation coefficient was larger and the color was darker.

Land Cover Change Detection
Using visual interpretation, 213 samples were selected for model training, including 87 changed samples and 126 unchanged samples. These training samples were used to analyze the changed binary map generated during the loop computation. By calculating the Kappa coefficient, the optimal change intensity threshold and the optimal correlation coefficient threshold with the maximum Kappa coefficient were determined. The ODCD and SCCD methods were both used for land cover change detection.
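The double-constrained threshold search described above can be sketched as an exhaustive grid search over candidate thresholds, scoring each pair by the Kappa coefficient on the training samples. The decision rule "changed when intensity > t1 and correlation < t2" follows the ODCD description; the function name and code are our illustrative sketch:

```python
def best_thresholds(intensity, correlation, changed, t1_candidates, t2_candidates):
    """Grid search for the change-intensity threshold t1 and correlation
    threshold t2 that maximize the Kappa coefficient on training samples."""
    def kappa(pred, truth):
        n = len(truth)
        p0 = sum(p == t for p, t in zip(pred, truth)) / n
        pe = (pred.count(True) * truth.count(True)
              + pred.count(False) * truth.count(False)) / n ** 2
        return (p0 - pe) / (1 - pe) if pe < 1 else 0.0

    best = (None, None, -1.0)
    for t1 in t1_candidates:
        for t2 in t2_candidates:
            # double constraint: changed if intensity high AND correlation low
            pred = [g > t1 and r < t2 for g, r in zip(intensity, correlation)]
            k = kappa(pred, changed)
            if k > best[2]:
                best = (t1, t2, k)
    return best
```

On a training set where changed objects have both high change intensity and low correlation, the search recovers the threshold pair that separates them perfectly (Kappa = 1).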

Results from SCCD
When SCCD was applied for land cover change detection, only the change intensity was used. According to the training samples, the maximum value of the Kappa coefficient was 0.80 and the optimal threshold of change intensity was 0.30. That is, the object changed when the change intensity value was greater than 0.30. Based on the optimal threshold of change intensity, a final brighter. When the patch was unchanged, the correlation coefficient was larger and the color was darker.

Land Cover Change Detection
Using visual interpretation, 213 samples were selected for model training, including 87 changed samples and 126 unchanged samples. These training samples were used to analyze the changed binary map generated during the loop computation. By calculating the Kappa coefficient, the optimal change intensity threshold and the optimal correlation coefficient threshold with the maximum Kappa coefficient were calculated. The ODCD and SCCD methods were both used for land change detection.

Results from SCCD
When SCCD was applied for land cover change detection, only the change intensity was used. According to the training samples, the maximum value of the Kappa coefficient was 0.80 and the optimal threshold of change intensity was 0.30. That is, the object changed when the change intensity value was greater than 0.30. Based on the optimal threshold of change intensity, a final

Land Cover Change Detection
Using visual interpretation, 213 samples were selected for model training, including 87 changed samples and 126 unchanged samples. These training samples were used to analyze the changed binary map generated during the loop computation. By calculating the Kappa coefficient, the optimal change intensity threshold and the optimal correlation coefficient threshold with the maximum Kappa coefficient were calculated. The ODCD and SCCD methods were both used for land change detection.

Results from SCCD
When SCCD was applied for land cover change detection, only the change intensity was used. According to the training samples, the maximum value of the Kappa coefficient was 0.80 and the optimal threshold of change intensity was 0.30. That is, the object changed when the change intensity value was greater than 0.30. Based on the optimal threshold of change intensity, a final binary map was acquired as shown in Figure 14. The maximum Kappa coefficient corresponding to the confusion matrix is shown in Table 5. By overlaying the land cover change map on the GF-1 image in 2017, as shown in Figure 15, it was clear that most of the changed features involved changes from bare land and vegetation to residential, and small parts involved changes from waterbody to bare land. SCCD change detection was sensitive to seasonal changes in vegetation, spectral changes in the roofs of buildings, and spectral changes in bare land, so they are easily misclassified as changes by SCCD.
Sensors 2019, 19, x 14 of 21 binary map was acquired as shown in Figure 14. The maximum Kappa coefficient corresponding to the confusion matrix is shown in Table 5. By overlaying the land cover change map on the GF-1 image in 2017, as shown in Figure 15, it was clear that most of the changed features involved changes from bare land and vegetation to residential, and small parts involved changes from waterbody to bare land. SCCD change detection was sensitive to seasonal changes in vegetation, spectral changes in the roofs of buildings, and spectral changes in bare land, so they are easily misclassified as changes by SCCD.   binary map was acquired as shown in Figure 14. The maximum Kappa coefficient corresponding to the confusion matrix is shown in Table 5. By overlaying the land cover change map on the GF-1 image in 2017, as shown in Figure 15, it was clear that most of the changed features involved changes from bare land and vegetation to residential, and small parts involved changes from waterbody to bare land. SCCD change detection was sensitive to seasonal changes in vegetation, spectral changes in the roofs of buildings, and spectral changes in bare land, so they are easily misclassified as changes by SCCD.
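The threshold selection described above, scanning candidate change intensity thresholds and keeping the one that maximizes the Kappa coefficient on the training samples, can be sketched as follows. The grid step and the toy training data are assumptions for illustration, not the paper's actual samples.

```python
# Minimal sketch of Kappa-maximizing threshold selection; the step size and
# the training data below are illustrative assumptions.

def kappa(tp, fn, fp, tn):
    """Cohen's Kappa from a 2x2 changed/unchanged confusion matrix."""
    n = tp + fn + fp + tn
    po = (tp + tn) / n  # observed agreement (overall accuracy)
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / (n * n)  # chance agreement
    return (po - pe) / (1 - pe)

def best_threshold(intensities, labels, step=0.01):
    """Grid-search the change intensity threshold that maximizes Kappa."""
    best_t, best_k = None, -1.0
    t = 0.0
    while t <= 1.0:
        pred = [v > t for v in intensities]  # object flagged changed above threshold
        tp = sum(p and l for p, l in zip(pred, labels))
        fn = sum((not p) and l for p, l in zip(pred, labels))
        fp = sum(p and (not l) for p, l in zip(pred, labels))
        tn = sum((not p) and (not l) for p, l in zip(pred, labels))
        k = kappa(tp, fn, fp, tn)
        if k > best_k:
            best_t, best_k = t, k
        t += step
    return best_t, best_k

# Hypothetical training objects: change intensities with changed (True) labels
intensities = [0.10, 0.15, 0.22, 0.45, 0.55, 0.60]
labels = [False, False, False, True, True, True]
t, k = best_threshold(intensities, labels)
# Any threshold between 0.22 and 0.45 separates these samples perfectly (Kappa = 1)
```

The ODCD extends the same search to two dimensions, jointly scanning change intensity and correlation coefficient thresholds.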

Results from the ODCD
When the ODCD was applied for land cover change detection, the change intensity and correlation coefficient were both used. According to the training samples, the maximum Kappa coefficient was 0.87, and the optimal thresholds of the change intensity and correlation coefficient were 0.26 and 0.94, respectively. That is, an object was considered changed when its change intensity was greater than 0.26 and its correlation coefficient was less than 0.94. Using these optimal thresholds, the change binary image was acquired, as shown in Figure 16. The land cover change map overlaid on the GF-1 image in 2017 is shown in Figure 17, and some typical change examples are shown in Figure 18. The confusion matrix corresponding to the maximum Kappa coefficient is shown in Table 6; it is clear that the overall accuracy and the Kappa coefficient of the ODCD are higher than those of SCCD. The ODCD compensates for the seasonal sensitivity of SCCD and improves the accuracy of change detection.
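The resulting double-constrained decision rule, using the optimal thresholds reported above, can be written compactly; the example object values are hypothetical.

```python
# Sketch of the ODCD decision rule with the reported optimal thresholds;
# the example intensity/correlation values below are hypothetical.

T_INTENSITY = 0.26    # optimal change intensity threshold
T_CORRELATION = 0.94  # optimal correlation coefficient threshold

def is_changed(intensity, correlation):
    """An object is flagged as changed only if BOTH constraints agree."""
    return intensity > T_INTENSITY and correlation < T_CORRELATION

# A seasonally varying vegetation patch may show high change intensity but
# still highly correlated feature vectors, so the single constraint would
# misclassify it while the double constraint keeps it unchanged.
print(is_changed(0.35, 0.97))  # False: intensity high, but correlation too high
print(is_changed(0.35, 0.80))  # True: both constraints satisfied
```

Requiring both conditions is what suppresses the seasonal false alarms that SCCD produces.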


Among all the above examples, the ODCD efficiently detected the changed areas. The typical change detection results in Figure 18a,b show changes from bare land to residential; the results in Figure 18c show changes from waterbody to bare land; the results in Figure 18d show changes from vegetation to bare land; and the results in Figure 18e show changes from vegetation to residential.
Owing to the resolution of the images, some small details were detected as changed areas, such as shadows between buildings.

Precision Comparison
To further confirm the validity and accuracy of the ODCD, a total of 333 samples, excluding the samples used to determine the optimal change thresholds, were selected for validation, including 133 changed samples and 200 unchanged samples. We compared the change detection results of the ODCD and SCCD using three accuracy evaluation indexes: overall accuracy, the Kappa coefficient, and overall error. The quantitative evaluation results are shown in Tables 7 and 8, and the accuracy comparison between the ODCD and SCCD is shown in Table 9. The overall accuracy of the ODCD was 92.19%, about 10% higher than that of SCCD (81.98%). The Kappa coefficient of the ODCD was 0.84, about 0.22 higher than that of SCCD (0.62). The misjudgment and omission errors of the ODCD were also lower than those of SCCD, by 15% and 12%, respectively, and the total error of the ODCD was 27% lower. To test whether these differences in accuracy were statistically significant, we carried out a hypothesis test (t-test) on an additional 121 validation samples, including 89 changed samples and 32 unchanged samples. The quantitative evaluation results are shown in Tables 10-12. In the t-test, α is the significance level, generally set to 0.05; when the p-value calculated by the t-test is less than α, the differences can be concluded to be statistically significant [44].
The results are shown in Table 13. Given the p-value was less than 0.05, the differences were statistically significant. The validity and accuracy of the ODCD could be confirmed.
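The evaluation indexes above can all be computed from a 2x2 changed/unchanged confusion matrix, as sketched below. The counts in the usage example are hypothetical (the actual matrices are in Tables 7 and 8) but are chosen to be consistent with the 333 validation samples and the reported ODCD accuracy.

```python
# Sketch of the accuracy evaluation indexes; the confusion-matrix counts in
# the example are hypothetical, chosen only to be consistent with the
# reported totals (333 samples, ~92.19% overall accuracy, Kappa ~0.84).

def accuracy_metrics(tp, fn, fp, tn):
    """Overall accuracy, Kappa, omission error, and misjudgment error
    from a 2x2 changed/unchanged confusion matrix."""
    n = tp + fn + fp + tn
    oa = (tp + tn) / n                                  # overall accuracy
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / (n * n)
    kappa = (oa - pe) / (1 - pe)                        # Cohen's Kappa
    omission = fn / (tp + fn)       # changed objects that were missed
    misjudgment = fp / (fp + tn)    # unchanged objects flagged as changed
    return oa, kappa, omission, misjudgment

oa, k, om, mis = accuracy_metrics(tp=120, fn=13, fp=13, tn=187)
print(round(oa * 100, 2), round(k, 2))  # 92.19 0.84
```

The overall error reported in the comparison tables is simply 1 minus the overall accuracy, while the misjudgment and omission errors break that error down by class.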

Conclusions
The ODCD proposed in this study for land cover change detection resolved the problem of high computational complexity caused by excessive feature variables in feature extraction. It also addressed the problem that determining the change threshold by empirical judgment makes the results less objective and less accurate: the ODCD employs a more objective threshold determination algorithm that simultaneously determines the thresholds of the change intensity and the correlation coefficient by maximizing the Kappa coefficient. It is thus a more effective method for land cover change detection. The major conclusions are as follows.
(1) By combining change vector analysis with correlation coefficients at the object level, the ODCD can reduce the seasonal sensitivity of SCCD and improve the accuracy of land cover change detection. The ODCD's overall accuracy was 92.19%, 10% higher than that of SCCD; at the same time, its overall error was 20%, 27% lower than that of SCCD.
(2) The ODCD can be used to reduce the number of features and improve computational efficiency.
The SDT is an effective feature optimization method. Using optimal feature selection, the feature dimensions were reduced from 26 to 12, which increased the calculation speed.
Therefore, the ODCD can provide a useful reference for ecological environment assessment and land use planning. At the same time, the ODCD can provide an effective reference for remote sensing in land cover change detection.
Future studies on the ODCD should involve a greater number of different object types, such as objects related to residential change detection. In addition, the image resolution in this study caused some small details, such as shadows between buildings, to be detected as changed areas; therefore, further studies should also utilize images of different resolutions.