Article

Change Detection of High Spatial Resolution Images Based on Region-Line Primitive Association Analysis and Evidence Fusion

Jiru Huang, Yang Liu, Min Wang, Yalan Zheng, Jie Wang and Dongping Ming

1 School of Geography, Nanjing Normal University, Nanjing 210023, China
2 Key Laboratory of Virtual Geographic Environment (Nanjing Normal University), Ministry of Education, Nanjing 210023, China
3 Jiangsu Center for Collaborative Innovation in Geographical Information Resource Development and Application, Nanjing 210023, China
4 State Key Laboratory Cultivation Base of Geographical Environment Evolution (Jiangsu Province), Nanjing 210023, China
5 School of Information Engineering, China University of Geosciences (Beijing), Beijing 100083, China
* Author to whom correspondence should be addressed.
Remote Sens. 2019, 11(21), 2484; https://doi.org/10.3390/rs11212484
Submission received: 26 July 2019 / Revised: 18 October 2019 / Accepted: 23 October 2019 / Published: 24 October 2019

Abstract

Change detection (CD) remains an important issue in remote sensing applications, especially for high spatial resolution (HSR) images, but it has yet to be fully resolved. This work proposes a novel object-based change detection (OBCD) method for HSR images that is based on region–line primitive association analysis and evidence fusion. In the proposed method, bitemporal images are separately segmented, and the segmentation results are overlapped to obtain the temporal region primitives (TRPs). The temporal line primitives (TLPs) are obtained by straight line detection on the bitemporal images. In the initial CD stage, Dempster–Shafer evidence theory fuses multiple pieces of evidence from the TRPs' spectral, edge, and gradient changes to obtain the initial changed areas. In the refining CD stage, the association between the TRPs and their contacting TLPs in the unchanged areas is established on the basis of the region–line primitive association framework, and the TRPs' main line directions (MLDs) are calculated. Some changed TRPs omitted in the initial CD stage are recovered through their MLD changes, thereby refining the initial CD results. Different from common OBCD methods, the proposed method considers the change evidence of the TRPs' internal and boundary information simultaneously via information complementation between TRPs and TLPs. The proposed method significantly reduces missed alarms while maintaining a low level of false alarms in OBCD, thereby improving total accuracy. In our experiments, the proposed method is superior to common CD methods, including change vector analysis (CVA), PCA-k-means, and iterative reweighted multivariate alteration detection (IRMAD), in terms of overall accuracy, missed alarms, and Kappa coefficient.


1. Introduction

Change detection (CD) identifies differences in an object or phenomenon by observing it at different times [1]. CD is an important technical step in many applications, such as damage assessment, deforestation and disaster monitoring, urban expansion, land management, and land-use/land-cover monitoring [2,3,4,5,6]. On the basis of its analytical units, CD is classified into pixel-based change detection (PBCD) and object-based change detection (OBCD). In PBCD, pixels or windows of pixels function as analytical units. In OBCD, pixels are first grouped into objects, i.e., segments, by image segmentation, and the succeeding feature extraction and analysis are all based on the objects [4].
PBCD is commonly sensitive to registration errors and radiation differences when dealing with multitemporal images; it is easily influenced by image noise, and it produces fragmented results when processing high spatial resolution (HSR) images [7,8,9,10]. By contrast, OBCD has the advantages of rich object features, enhanced treatment of image noise, and naturally formed multiscale analysis. Therefore, OBCD is recognized as suitable for HSR images with obvious spectral confusion and image noise among and within ground objects [3,4,11,12,13,14].
According to its implementations, OBCD can be further classified into feature-based change detection (FOCD), which involves comparing objects’ feature vectors among different times directly; classification-based change detection (COCD), which involves comparing different temporal classification maps; and hybrid change detection (HCD), which is a combination of classification and feature extraction techniques [3,4,15]. FOCD is an unsupervised CD method in which object features and similarity measurements can be selected flexibly. This approach is easy to implement and use in practice, and it has been widely investigated in OBCD research [13,15,16,17]. Unlike the FOCD method, COCD can determine the change type of image objects; however, its CD accuracy is highly dependent on the accuracy of classification, which is nontrivial in practical scenarios [18,19,20]. HCD makes full use of the classification and feature extraction techniques for object detection. HCD has the advantages of the first two methods and can achieve relatively high CD accuracy. However, HCD is complex and time consuming, and it is mainly adopted in specific applications [15]. In the present study, object-based FOCD (hereinafter referred to as FOCD for brevity) is investigated.
The specific steps of FOCD are (1) image preprocessing, which involves image registration and radiation correction; (2) image segmentation, in which objects are obtained by image segmentation and used in succeeding analyses; and (3) changed feature analysis, through which the changed objects are obtained by comparing the differences in their temporal features [3,4,21]. In changed feature analysis, commonly used methods include change vector analysis (CVA) [22], principal component analysis (PCA) [23], multivariate alteration detection (MAD) [24], and iterative reweighted multivariate alteration detection (IRMAD) [25].
FOCD methods using multiple features have been proposed to deal with HSR imagery with many ground details and spectral confusion. Du et al. [26] proposed a synthetic OBCD method using features such as normalized difference vegetation index and textures. Tang et al. [27] conducted CD for urban buildings with morphological-based building indexes, spectra, and shapes. Chen and Chen [5] conducted OBCD by combining multifeature object-based image analysis (OBIA) and CVA. Chen et al. [28] proposed a genetic particle swarm optimization-based feature selection algorithm for feature selection in multifeature OBCD. Zhang et al. [29] conducted multifeature fusion-based OBCD using support vector machine. Peng and Zhang [15] proposed an OBCD method by combining segmentation optimization and multifeature fusion. In their methods, Dempster–Shafer (D-S) evidence theory and the expectation–maximization algorithm are used in multifeature fusion and change analysis. Cai et al. [30] proposed an OBCD method that fuses multiple features to detect changes on the basis of improved fuzzy c-means. Wang et al. [31] proposed an OBCD scheme using multiple features and ensemble learning for urban areas. In addition, Luo et al. [32] proposed an urban CD method for very high resolution images that fuses the results from different methods, including CVA, IRMAD, and iterative slow feature analysis (ISFA) through D–S evidence theory. Bitemporal HSR images were stacked for image segmentation. CVA, IRMAD, and ISFA were applied to obtain three candidate change maps. Then, these change maps were fused on the basis of D–S theory to obtain the final CD result. This method explores the uncertainty among different change results and generates a change map that is more accurate than that produced by individual methods.
The existing research verifies the feasibility and application value of OBCD. However, OBCD only considers objects’ (segments’) “internal” information, and may thus lose important CD evidence along object boundaries. Typically, changes in artificial objects, e.g., roads and buildings, are often related to the disappearance or appearance of structural straight lines that strongly denote the objects’ shapes and configurations. Some lines are located on object boundaries and are highly informative in delineating objects when their tones are similar to the surrounding background. Thus, using such structural information, i.e., straight lines, is helpful in OBCD.
In previous studies [33,34], Wang et al. proposed the region–line primitive association framework (RLPAF), which uses structural lines in OBIA. In this technical framework, straight edge lines act as additional object primitives, i.e., line primitives (LPs), for succeeding feature extraction and analysis. The RLPAF conducts information complementation between segments, i.e., region primitives (RPs) and LPs, expands OBIA’s feature set, and enhances OBIA performance. It has been validated in the extraction of several spatial objects, e.g., raft cultivation areas and photovoltaic panels, from HSR images [35,36].
In the present study, a novel OBCD method originating from the idea of the RLPAF is proposed. In this method, pieces of change evidence of temporal region primitives (TRPs) are fused on the basis of evidence theory for the initial CD. The change evidence of the temporal line primitives (TLPs) inside the TRPs and that along the TRPs’ boundaries are then supplemented such that they offer auxiliary CD evidence for the “weakly” changed TRPs omitted in the initial CD. The experiments show that the proposed method greatly reduces missed alarms while maintaining low false alarm rates, and is thus obviously superior to other methods.
The remainder of this paper is organized as follows. Section 2 introduces the technical route of multitemporal region–line association modeling and feature extraction, which originates from and extends the RLPAF, and describes the implementation of the proposed method. Section 3 presents the experiments, including method analysis and comparison. Section 4 discusses the results. Finally, Section 5 summarizes this study and introduces future work.

2. Methodology

As shown in Figure 1, the proposed method is composed of two main stages: the creation of TRPs and TLPs, and CD based on RPs and LPs. The CD stage is further composed of substages of initial detection based on evidence fusion and CD refinement based on the RLPAF.
The bitemporal images are segmented separately, and the segmentation results are overlapped to obtain TRPs. The initial detection fuses the change evidence of an object’s spectrum, edge, and gradient features without consideration of TLPs; hence, this process may lose weakly changed regions. Thus, in the refining CD stage, the RLPAF-based relationships between the TRPs and the TLPs are established within the unchanged regions of the initial CD. Then, missed alarms are revised on the basis of the RLPAF rules, and the entire CD process is then completed.

2.1. Object Primitive Creation and Change Feature Extraction

As shown in Figure 2, the bitemporal images are first segmented separately using the hard boundary-constrained segmentation method [37]. Then, the bitemporal RPs are overlapped into a single layer, and the bitemporal spectral, gradient, edge, and shape features are calculated to obtain the TRPs. In implementing OBCD, the TRPs' bitemporal spectral and gradient features are quantized into frequency histograms, that is, the change feature vectors of the TRPs. The histogram of edge pattern distribution describes the frequencies of different edge pixel distributions [38] and forms the TRPs' change feature vectors of edges. For multiband images, the feature vectors of the individual bands are stacked to obtain the whole feature vectors. Meanwhile, Burns' straight line detection method [39], which can extract low-contrast lines, is used to detect straight lines in the bitemporal images. The line direction, length, and intensity are then calculated to obtain the TLPs.
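As an illustration of this feature construction, the following minimal Python sketch quantizes each band's pixel values inside an object mask into a normalized frequency histogram and stacks the per-band histograms into one change feature vector; the function names, bin count, value range, and band-last array layout are our own illustrative assumptions rather than the authors' exact implementation.

```python
import numpy as np

def object_histogram(band, mask, bins=32, value_range=(0, 255)):
    # Frequency histogram of one band's pixels inside the object (TRP) mask.
    values = band[mask]
    hist, _ = np.histogram(values, bins=bins, range=value_range)
    return hist / max(values.size, 1)  # normalize counts to frequencies

def change_feature_vector(image, mask, bins=32):
    # Stack per-band histograms into one vector (image is H x W x bands).
    return np.concatenate(
        [object_histogram(image[..., b], mask, bins) for b in range(image.shape[-1])]
    )

# Usage: X and Y are then the TRP's bitemporal feature vectors to be compared.
# X = change_feature_vector(image_t1, trp_mask)
# Y = change_feature_vector(image_t2, trp_mask)
```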

2.2. Initial Change Detection

For the evidence fusion-based OBCD, the structural similarity (SSIM) measures [40] of the spectral, gradient, and edge features of the TRPs at the two time points are calculated first. These SSIM measures, namely, the change evidence, are then fused via D-S evidence theory to obtain the initial changes.

2.2.1. Feature Similarity Measure

SSIM comprehensively considers the mean, variance, and covariance of two variables and is defined as:

$$\mathrm{SSIM}(X,Y)=\frac{(2\mu_X\mu_Y+C_1)(2\sigma_{XY}+C_2)}{(\mu_X^2+\mu_Y^2+C_1)(\sigma_X^2+\sigma_Y^2+C_2)},\tag{1}$$

where $X$ and $Y$ are two variables; $\mu_X$ and $\mu_Y$, $\sigma_X$ and $\sigma_Y$, $\sigma_X^2$ and $\sigma_Y^2$, and $\sigma_{XY}$ are their means, standard deviations, variances, and covariance, respectively; and $C_1$ and $C_2$ are constants that prevent instability when the denominator is close to 0. A TRP's spectral SSIM is calculated using Equation (1), where X and Y are the bitemporal spectral feature vectors of the TRP. The TRP's gradient and edge SSIMs are obtained in the same way, with X and Y being the bitemporal gradient and edge feature vectors, respectively.
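As a worked illustration, the sketch below evaluates Equation (1) for two feature vectors; it is a minimal implementation assuming X and Y are 1-D NumPy arrays, with C1 and C2 defaulting to the constants reported in Section 3.1.

```python
import numpy as np

def ssim(x, y, c1=0.3, c2=0.7):
    # Structural similarity of two feature vectors, per Equation (1).
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = np.mean((x - mu_x) * (y - mu_y))
    return ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)
    )
```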

2.2.2. Change Detection by Evidence Fusion

D-S evidence theory is a tool for modeling inaccurate and uncertain information [41]. We define a non-empty set $U$, i.e., the discrimination framework, that consists of a series of mutually exclusive and exhaustive elements. A proposition $A$ in the question domain belongs to the power set $2^U$. We define the basic probability assignment function (BPAF) $m: 2^U \to [0,1]$ on $2^U$ and let:

$$m(\Phi) = 0, \qquad \sum_{A \subseteq U} m(A) = 1,\tag{2}$$

where $m(A)$ represents the confidence in subset $A$ of $U$ given the current evidence. If $A \subset U$, then $m(A)$ denotes a determined belief in $A$; if $A = U$, then $m(A)$ denotes an uncertain assignment; if $A \subseteq U$ and $m(A) > 0$, then $A$ is called a focal element of $m$. D-S evidence theory combines different pieces of evidence with an orthogonal sum.
We let $m_1, m_2, \ldots, m_n$ be $n$ BPAFs on $2^U$; their orthogonal sum is expressed as:

$$m = m_1 \oplus m_2 \oplus \cdots \oplus m_n,\tag{3}$$

and is further defined as:

$$m(A) = \frac{\sum_{\cap A_i = A} \prod_{1 \le j \le n} m_j(A_i)}{1 - k} \quad (A \subseteq U),\tag{4}$$

where:

$$k = \sum_{\cap A_i = \Phi} \prod_{1 \le j \le n} m_j(A_i)$$

and $k$ is the conflict degree of the evidence. Equation (4) is called Dempster's combination rule.
The BPAFs are constructed on the basis of the spectral, gradient, and edge SSIMs. The discrimination framework $U$ for CD is defined as:

$$U = \{Y, N\},$$

where $Y$ represents the changed classes and $N$ represents the unchanged classes. On this basis, the BPAF is specifically defined as:

$$m_i(\{Y\}) = (1.0 - S_i) \times \alpha_i, \quad m_i(\{N\}) = S_i \times \alpha_i, \quad m_i(\{Y, N\}) = 1.0 - \alpha_i, \quad i = 1, 2, 3,\tag{5}$$

where $S_i$ is the SSIM of a feature, and $\alpha_i$ is the trust degree of the evidence in the discrimination frame.

The spectral (BPAF1), edge (BPAF2), and gradient (BPAF3) BPAFs are calculated using Equation (5). Then, BPAF1, BPAF2, and BPAF3 are fused via Equation (4) to obtain the TRP's fused BPAF, namely, m({Y}), m({N}), and m({Y, N}). If a TRP's BN, i.e., the value of m({N}), is less than a prespecified threshold, the TRP is labeled as changed in the initial CD stage.
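The following sketch shows one way to implement the BPAF construction of Equation (5) and Dempster's combination rule (Equation (4)) on the two-class frame U = {Y, N}; the dictionary representation and the sequential pairwise fusion are our own assumptions, not the RSFinder implementation.

```python
Y, N, U = frozenset({"Y"}), frozenset({"N"}), frozenset({"Y", "N"})

def bpaf(s, alpha):
    # Equation (5): s is a feature's SSIM, alpha is its trust degree.
    return {Y: (1.0 - s) * alpha, N: s * alpha, U: 1.0 - alpha}

def dempster_combine(m1, m2):
    # Orthogonal sum of two BPAFs (Equation (4)); k is the conflict degree.
    combined = {Y: 0.0, N: 0.0, U: 0.0}
    k = 0.0
    for a1, v1 in m1.items():
        for a2, v2 in m2.items():
            inter = a1 & a2
            if inter:
                combined[inter] += v1 * v2
            else:
                k += v1 * v2  # mass assigned to conflicting (empty) intersections
    return {a: v / (1.0 - k) for a, v in combined.items()}

# Fuse spectral, edge, and gradient evidence with the trust degrees of
# Section 3.1, then threshold BN = m({N}) to label the TRP as changed:
# m = dempster_combine(dempster_combine(bpaf(s_spectral, 0.35),
#                                       bpaf(s_edge, 0.85)),
#                      bpaf(s_gradient, 0.65))
# changed = m[N] < T   # T is the change threshold of Table 1
```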

2.3. Change Detection Refinement Using RLPAF

The initial CD stage might miss some changed TRPs. These TRPs are rechecked on the basis of their RLPAF relationships with the TLPs they contact.
Given a region $Q$ and a straight line $L$ that contacts it, we define the direction operator set as:

$$Dir(Q, L) = \{Neg(Q, L), Zero(Q, L), Pos(Q, L)\},$$

whose operators denote the subsets of $Q$ located above, on, or below line $L$, as illustrated in Figure 3a. The topology operator set is:

$$Top(L, Q) = \{In(L, Q), Touch(L, Q), Out(L, Q), Proj(L, Q)\},$$

where the first three operators denote the subsets of $L$ that are contained in, touching, or outside $Q$. On the basis of these definitions, the region–line mutual conversion operators are defined; they include region-to-region, region-to-line, and line-to-region conversions [33,34].
Given a region, the region-to-line conversion in this study is defined as extracting its contained and touched straight lines whose directions fall in the main line direction (MLD). To obtain the MLDs, we divide [−π/2, π/2] into 16 intervals and allocate the lines to them. We then take the average angles of the two sectors containing the most lines as the first and second main line directions. Bitemporal MLDs are extracted for the TRPs to compare the changes in line direction. Given that touched lines are involved in the MLD calculation, change evidence located inside objects and on object boundaries is comprehensively considered in the OBCD.
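A minimal sketch of this MLD computation follows; it assumes line directions are given in radians within [−π/2, π/2] and averages the angles of the two most populated sectors, leaving tie-breaking and any length weighting unspecified, as these details are not stated in the text.

```python
import numpy as np

def main_line_directions(angles, n_sectors=16):
    # angles: directions (radians in [-pi/2, pi/2]) of the region's
    # contained and touched straight lines.
    angles = np.asarray(angles, dtype=float)
    if angles.size == 0:
        return None, None
    edges = np.linspace(-np.pi / 2, np.pi / 2, n_sectors + 1)
    sectors = np.clip(np.digitize(angles, edges) - 1, 0, n_sectors - 1)
    counts = np.bincount(sectors, minlength=n_sectors)
    first, second = np.argsort(counts)[::-1][:2]  # two most populated sectors
    mld1 = angles[sectors == first].mean()
    mld2 = angles[sectors == second].mean() if counts[second] else None
    return mld1, mld2
```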
Some changed TRPs are marked as unchanged in the initial stage because their fused change evidence is weak, which indicates that change evidence drawn only from the TRPs might be insufficient. To this end, we consider the TRPs' MLD changes for method refinement. Given that the line detection and directional sector division might be inaccurate, the refining stage relaxes the CD threshold to recover possibly changed TRPs. This processing is more robust than a direct MLD comparison.
Algorithm 1 presents the pseudocode of the two CD stages. The spectral, gradient, and edge BPAFs are calculated and fused via D-S theory to obtain the BN of each TRP (P). If P's BN is less than threshold T, then P is included in the output results {PC}; otherwise, P's MLD1 and MLD2 are calculated. If MLD1 is not equal to MLD2, then threshold T is relaxed to T1, and P is re-evaluated against T1. These steps are repeated until all the elements in {P} have been processed, and {PC} is then output as the final CD result.
Algorithm 1. Two-stage change detection
Input: TRPs {P}, TLPs {L1} and {L2}, change threshold T, scaling factor S
Output: Changed TRPs {PC}
For each P in {P}:
  Calculate P's spectral BPAF, gradient BPAF, and edge BPAF, and fuse them to obtain BN
  If P's BN < T, put P into {PC}
  Else:
    Obtain P's bitemporal MLD1 and MLD2 using its contacted lines extracted from {L1} and {L2}
    If MLD1 is not equal to MLD2:
      Relax threshold T to T1 (T × S)
      If BN < T1, put P into {PC}
Return {PC}
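Under the assumption that each TRP carries its fused BN value and its two bitemporal MLDs (attribute names of our own choosing), Algorithm 1 can be sketched in Python as follows; comparing MLDs by their sector indices rather than exact angles would be the natural implementation, but the listing keeps the pseudocode's direct inequality.

```python
def two_stage_change_detection(trps, T, S):
    # trps: iterable of TRP objects exposing bn, mld1, and mld2 (assumed names).
    changed = []
    T1 = T * S  # relaxed threshold for the refining stage
    for p in trps:
        if p.bn < T:              # initial detection by evidence fusion
            changed.append(p)
        elif p.mld1 != p.mld2:    # main line direction changed
            if p.bn < T1:         # recheck against the relaxed threshold
                changed.append(p)
    return changed
```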

3. Experimental Results and Analysis

The proposed method was implemented and integrated into the RSFinder software system [34], which was developed by the authors on the basis of Microsoft Visual C++ 2010 and the Geospatial Data Abstraction Library (GDAL) 1.8. Experimental images captured by different sensors were selected for method validation and comparison to verify the characteristics and advantages of the proposed method.

3.1. Experimental Procedure

Table 1 lists the experimental areas, including the sensors, imaging dates, locations, and some method inputs. Experimental areas 1 and 2 were covered by Advanced Land Observation Satellite (ALOS) images. The ALOS satellite was launched on 24 January 2006 and includes three sensors: the Panchromatic Remote-Sensing Instrument for Stereo Mapping (PRISM), the Advanced Visible and Near-Infrared Radiometer 2 (AVNIR-2), and the Phased-Array L-band Synthetic Aperture Radar. PRISM's resolution is 2.5 m. AVNIR-2's resolution is 10 m, and it contains four channels covering 0.42–0.89 μm, namely, blue (0.42–0.50 μm), green (0.52–0.60 μm), red (0.61–0.69 μm), and near infrared (0.76–0.89 μm). Experimental area 3 was covered by China's Gaofen-2 satellite, launched on 19 August 2014, which carries two 1-m panchromatic and two 4-m multispectral cameras. The multispectral cameras have four channels covering 0.45–0.89 μm, namely, blue (0.45–0.52 μm), green (0.52–0.59 μm), red (0.63–0.69 μm), and near infrared (0.77–0.89 μm). Figure 4 shows the original bitemporal images of the three experimental areas.
For ALOS, the single-band panchromatic and pansharpened multispectral images using the Gram–Schmidt method [42] were used for method validation. For Gaofen-2, the pansharpened multispectral images were used. For image segmentation, the scale inputs for each experimental area were varied to minimize oversegmentation errors and avoid undersegmentation errors. For straight line detection, the gradient difference was fixed to 1, and the line length was 10.
The parameters for the initial CD were determined by a few sample areas. Change threshold T was slightly different according to the image resolutions and ground field distributions, as shown in Table 1. Other parameters were fixed, and they included the following:
(1) Trust degrees α1, α2, and α3. These parameters weighted each feature during evidence fusion and were set to 0.35, 0.85, and 0.65, respectively.
(2) Scaling factor S. This parameter affected the extraction of the weakly changed region during the refining CD stage. An excessively small value cannot detect weakly changed areas, whereas excessively large ones will bring obvious false alarms. The parameter was tuned and fixed to 1.5.
(3) The constants C1 and C2 of SSIMs were default values in [43], which were 0.3 and 0.7, respectively.
Three well-known CD methods, namely, CVA, PCA-k-means, and IRMAD, were selected for comparison. CVA detects changes according to the magnitude and direction of the multitemporal spectral vector changes. The CD threshold of CVA was manually tuned to 0.6 in our experiments, at which the optimal results were obtained. PCA-k-means has two parameters: the size H of the nonoverlapping blocks (H = 3 in our experiments) and the dimension S of the eigenvector space (S = 3 in our experiments). IRMAD is an iterative, multivariate CD method; its maximum number of iterations was fixed to 50.
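As a point of reference for these baselines, a minimal pixel-wise CVA sketch is given below; normalizing the change-vector magnitude to [0, 1] before thresholding at 0.6 is our simplification for illustration, not necessarily the exact variant used in the comparison.

```python
import numpy as np

def cva_change_map(img1, img2, threshold=0.6):
    # Magnitude of the spectral change vector at each pixel (band-last arrays).
    diff = img2.astype(np.float64) - img1.astype(np.float64)
    magnitude = np.linalg.norm(diff, axis=-1)
    peak = magnitude.max()
    if peak > 0:
        magnitude /= peak          # scale magnitudes to [0, 1]
    return magnitude > threshold   # boolean change map
```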
To conduct method evaluation, we used four common indicators: false alarm (FA), missed alarm (MA), overall accuracy (OA), and Kappa coefficient:

$$MA=\frac{FN}{FN+TP},\quad FA=\frac{FP}{FP+TN},\quad OA=\frac{TP+TN}{TP+FP+TN+FN},\quad Kappa=\frac{P_o-P_e}{1-P_e},$$

where:

$$P_o=\frac{TP+TN}{TP+FP+TN+FN},\qquad P_e=\frac{(TP+FP)(TP+FN)+(FN+TN)(FP+TN)}{(TP+FP+TN+FN)^2}.$$
TP is the number of changed image objects correctly detected; FP is the number of unchanged image objects incorrectly detected as changed; TN is the number of unchanged image objects correctly detected; and FN is the number of changed image objects incorrectly detected as unchanged.
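These indicators can be computed directly from the confusion counts; the short helper below follows the formulas above (a sketch for clarity, with Po equal to OA by definition).

```python
def accuracy_metrics(tp, fp, fn, tn):
    # MA, FA, OA, and Kappa from change/no-change confusion counts.
    total = tp + fp + fn + tn
    ma = fn / (fn + tp)
    fa = fp / (fp + tn)
    oa = (tp + tn) / total  # Po equals OA
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / total ** 2
    kappa = (oa - pe) / (1 - pe)
    return ma, fa, oa, kappa

# Example usage with the refined detection counts of area 1 (Table 2):
# ma, fa, oa, kappa = accuracy_metrics(54, 11, 19, 1208)
```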

3.2. Method Performance Analysis

Reference maps were obtained by visual interpretation of the changed TRPs in the experimental areas. Manmade object changes that could be visually identified were included in the reference maps. Increases and decreases in vegetation, water, and other natural objects that were visually identifiable (normally with changed areas larger than 100 pixels) were also marked as changes. TRPs with changed areas larger than 40% were classified as changed segments. The same unified standards were adopted in all the experiments. Two sets of statistical results were obtained by counting segments or pixels, as shown in Table 2, Table 3 and Table 4. Overall, the results were similar; therefore, the succeeding analyses are mainly based on the segment-based results.

3.2.1. Experimental Area 1

Figure 5 shows the results of experimental area 1. From this figure, we can see that the proposed method had the lowest missed alarms and false alarms, and that it obtained the best results in experimental area 1. This evaluation was further verified, as shown in Table 2.
As shown in Figure 5a,b, changes mainly occurred in the building and road sections in this area. CVA produced a large number of false alarms and missed alarms, as shown in Figure 5g. PCA-k-means and IRMAD had fewer false alarms but more serious missed alarms than CVA, as shown in Figure 5h,i. The proposed method had the fewest false alarms and missed alarms, as presented in Figure 5l. The multifeature evidence fusion made the method robust against false alarms in the initial CD, and the region–line association analysis detected some weak changes in the refining stage, resulting in the method's superior performance.
As shown in Table 2, CVA’s OA and Kappa were the lowest. PCA-k-means and IRMAD had fair FAs, but their MAs were high. IRMAD was the best among the three methods used for comparison, with its OA, FA, and Kappa values being equal to 93.65%, 4.13%, and 0.47, respectively. These measures of the proposed method were 97.68%, 0.87%, and 0.77, and were thus better than those of IRMAD. CVA had the best MA of 28.77% among the three methods, whereas the proposed method’s MA was 26.03%, which was better than that of CVA. In general, the proposed method had the highest OA and Kappa and lowest MA and FA among all the methods, indicating its superior performance.
Figure 5k illustrates the result of directly relaxing the change threshold without considering MLD changes. Direct relaxation reduced missed alarms but caused many false alarms, as observed in Figure 5. As shown in Table 2, direct threshold relaxation led to an obvious decline of OA and Kappa in experimental area 1: OA dropped from 97.52% to 97.06%, and Kappa dropped from 0.74 to 0.72.

By contrast, the MLD-based refining significantly reduced MA and prevented a significant FA increase. As shown in Table 2, MA dropped from 32.88% to 26.03%, whereas FA only increased from 0.63% to 0.87%. Accordingly, OA and Kappa increased from 97.52% to 97.68% and from 0.74 to 0.77, respectively. All these results indicate that the MLD constraint effectively prevented possible false alarms and increased the overall method accuracy relative to the case of direct threshold relaxation.

3.2.2. Experimental Area 2

Figure 6 shows the results of experimental area 2, and Table 3 presents the detection precision in area 2. From Figure 6 and Table 3, we can see that the proposed method also obtained the best results in this experimental area.
In this area, CVA performed the worst with serious missed alarms and false alarms, as shown in Figure 6. PCA-k-means had few missed alarms, but serious false alarms. IRMAD had few false alarms, but serious missed alarms. Similar to the results in area 1, IRMAD was the best among the three methods used for comparison in this area, with the OA, FA, and Kappa values being equal to 82.64%, 10.99%, and 0.22, respectively. PCA-k-means had the best MA of 62.66% among the three methods.
The false alarms of the proposed method increased in this area, but it still performed the best among all the methods, with its OA reaching 95.32%, its MA and FA respectively reaching 12.53% and 3.30%, and its Kappa reaching 0.79, indicating its superior performance.
Figure 6k illustrates the result of directly relaxing the change threshold without considering MLD changes. As shown in Figure 6 and Table 3, direct relaxation reduced missed alarms but caused many false alarms: OA dropped from 94.39% to 87.65%, and Kappa dropped from 0.71 to 0.58 in area 2.

By comparison, the MLD-based refining significantly reduced MA and prevented a significant FA increase. In area 2, MA decreased by 21.41%, whereas FA only increased by 1.77%. Accordingly, OA and Kappa increased from 94.39% to 95.32% and from 0.71 to 0.79, respectively. All these results indicate that the MLD constraint effectively prevented possible false alarms and increased the overall method accuracy.

3.2.3. Experimental Area 3

Figure 7 and Table 4 are the results and detection precision of experimental area 3, respectively.
In this area, changes occurred in the buildings and some water sections. CVA and IRMAD had serious missed alarms and false alarms, as shown in Figure 7 and Table 4. PCA-k-means had few false alarm errors, but had obvious missed alarms. The proposed method still obtained the best results among all the methods.
Figure 7k illustrates the result of directly relaxing the change threshold without considering MLD changes. Similar to the results of areas 1 and 2, direct relaxation reduced missed alarms but caused many false alarms. By contrast, the MLD-based refining significantly reduced MA and prevented a significant FA increase. Compared with the result of the initial detection in this area, the refinement caused the segment-based OA to decrease slightly from 94.91% to 94.58%, whereas the pixel-based OA increased from 96.08% to 96.16%; OA in this area thus varied only slightly. Kappa significantly increased from 0.55 to 0.62 (by segments) and 0.67 (by pixels). All these results indicate that the MLD constraint effectively prevented possible false alarms and increased the overall method accuracy relative to the case of direct threshold relaxation.

3.2.4. Further Analyses

Figure 8, Figure 9 and Figure 10 show some enlarged areas, which illustrate the characteristics of the proposed method. Initial CD caused some missed alarms in the three experimental areas.
As shown in Figure 8, the buildings and surroundings in the panchromatic image were difficult to distinguish, resulting in missed alarms in the initial CD. However, the MLD changed significantly from vertical to near horizontal with the emergence of buildings. Thus, the missed building was recovered in the refining stage, as shown in Figure 8g,h.
Missed and false alarms existed in all the comparative methods, as shown in Figure 9. Given that the bitemporal gradient and edge features were similar in some changed areas, the proposed method lost some changes in the initial CD stage, as shown in Figure 9g. However, because many horizontal lines emerged along with the building construction, the MLD changed significantly in the second image. Thus, many missed alarms were compensated for in the refining stage, as shown in Figure 9h.
Figure 10 shows a changed area from bare land to buildings. Tones were close between the bitemporal images, easily causing errors in the initial CD stage. However, lines were cluttered in the first image. In the second image, MLD was approximately horizontal along with the construction of buildings. Given the obvious MLD changes, the missed TRPs were recovered in the refining stage, as shown in Figure 10g,h.
Meanwhile, CVA had obvious false alarms in Figure 8 and Figure 9, whereas obvious missed alarms were found in Figure 10 with the same CD settings. PCA-k-means had obvious false alarms in Figure 9 and missed alarms in Figure 8 and Figure 10. IRMAD had obvious missed alarms in Figure 8 and obvious false and missed alarms in Figure 9 and Figure 10. The proposed method showed the most stable performance among all the methods.

4. Discussion

In this section, method efficiency, parameters, and possible improvements are further discussed.

4.1. Method Efficiency

The main steps of the method include TRP and TLP creation, feature similarity calculation, CD by evidence fusion, and CD refinement using RLPAF. The time consumption of each step is shown in Table 5. The experiments were carried out on a 64-bit Windows 7 machine with an Intel Core i7-4790 CPU (3.60 GHz), 8 GB of RAM, and an NVIDIA GT 630 GPU (2 GB).

TRP and TLP creation took the most time among these steps because the bitemporal images needed to be segmented separately and straight lines were detected at both times. The modeling and calculation of MLDs in the CD refinement step took the second-longest amount of time. Moreover, image complexity influenced the method's efficiency. Compared with the single-band image in area 1, the feature similarity calculation for the multiband images in areas 2 and 3 took significantly longer. Meanwhile, the time consumption of the CD refinement step in area 3 was significantly longer than those in areas 1 and 2 because the TLPs in area 3 were more densely distributed.

4.2. Parameter Influence

To analyze the influence of the parameters on method accuracy, we varied the change threshold, the scaling factor, and the trust degrees of the spectral, edge, and gradient features one at a time while keeping the other parameters fixed.

Figure 11 shows the influence of changing the trust degrees. Kappa increased gradually and reached its maximum when the spectral trust degree increased from 0.20 to 0.35, as shown in Figure 11(a1–a3). The results indicate that the spectral feature improved the method accuracy when its weight (trust degree) was within a certain range but reduced the accuracy when the weight was disproportionately large. The gradient and edge features showed the same behavior, as illustrated in Figure 11b,c. This evidence indicates that none of these individual features can be fully trusted and that using the features in combination is thus necessary.

Spectral features were generally more vulnerable to temporal changes than gradient and edge features. Kappa reached its largest values when the trust degrees of the spectral, gradient, and edge features were 0.35, 0.85, and 0.65, respectively, as shown in Figure 11. These results indicate that the spectral features played a less important role than the gradient and edge features in the proposed method and that weighting them accordingly helped improve the method's robustness.
Figure 12 shows the method accuracy curves obtained when modifying the change threshold and scaling factor. Kappa first increased and then decreased as the change threshold increased, as shown in Figure 12(a1–a3). Many missed alarms existed when the change threshold was small, and clear false alarms appeared with large thresholds. Similar Kappa increases and decreases were found as the scaling factor increased, as shown in Figure 12(b1–b3). In the stage of CD refinement using RLPAF, the change threshold was relaxed to detect the weakly changed TRPs. However, unchanged TRPs were considered changed when the change threshold was excessively relaxed, resulting in additional false alarms and reduced method accuracy. In this study, the method inputs were tuned using small training areas and were validated as appropriate.

4.3. Further Discussion on Method Performance

In CD engineering applications, missed alarms must be recovered by visually comparing the entire bitemporal image pair, whereas false alarms only need to be checked within the detected changed areas. Thus, in practice, preventing missed alarms takes priority over preventing false alarms. On the basis of the RLPAF, straight line changes were introduced as new OBCD evidence, and weakly changed TRPs were extracted. Thus, missed alarms were significantly reduced while low levels of false alarms were maintained, thereby improving the method's practicability.

However, the MLD-based constraint still left a small number of false alarms in the natural areas, typically in vegetation, because trivial changes in vegetation appearance also caused line distribution changes. Further determining the specific types of changes may be useful. These issues could be solved by integrating image classification techniques, which could separate manmade and natural objects and classify change types, into the proposed method. To this end, deep learning-based techniques, which can perform automatic feature learning and feature expression, have shown better accuracy than traditional methods. Hence, deep learning-based techniques will be considered for a classification-involved OBCD in our future work.

5. Conclusions

In this study, a novel OBCD method for HSR images is proposed on the basis of region–line primitive association analysis and evidence fusion. The steps include obtaining TRPs via multitemporal image segmentation, acquiring TLPs via multitemporal straight line detection, conducting the multifeature initial OBCD using evidence theory, calculating MLD features through the RLPAF, and refining the CD results with MLD change analysis. The proposed method conducts OBCD on TRPs and TLPs and is thus distinct from common OBCD methods. Its advantages include the following aspects:
(1) Multifeature fusion in the initial CD stage obtains fair method accuracy. Tone differences between bitemporal images generally remain after image radiation correction. Through multifeature fusion, the proposed method reduces this influence by introducing change evidence that is more robust than image tones, resulting in few false alarms and improved method accuracy.
(2) The RLPAF extends the OBIA feature set with, for example, the feature subsets of lines and region–line associations [33,34], and thus offers new and effective information for OBCD. In common OBCD methods, feature extraction and analysis are all based on region primitives, that is, segments. In such technical scenarios, change clues along object boundaries are easily ignored and might cause missed detections. In the proposed method, the internal and tangent TLPs related to the TRPs are synthetically involved; therefore, change clues from both the object's interior and its boundary are considered to improve detection accuracy.
(3) In the refinement stage, CD is limited to the areas with distinctive MLD changes. Such synthetic region–line analysis significantly reduces missed alarms and prevents a clear increase in false alarms. In real applications, preventing missed alarms matters more than preventing false alarms; these characteristics thus improve the method's application value.
In all the experiments, the proposed method obtained the best accuracy measures, verifying that its performance is the most stable among all the methods. However, a few false alarms remain, typically in natural objects and vegetation areas. In the future, we will continue to refine the method to improve its accuracy and practicability. Furthermore, the proposed method focuses on the change analysis of bitemporal images. Future studies will concentrate on analyzing changes in serial images to investigate the distribution patterns of changes in time and space, describe and explain the change rules, and predict future changes.

Author Contributions

J.H. wrote the manuscript and was responsible for the entire research. Y.L. participated in the algorithm implementation and partial manuscript writing. M.W. put forward the initial research ideas, gave comments, and proofread the article. The other authors participated in developing the research ideas or the experiments.

Funding

This work is jointly supported by the National Key Research and Development Program of China [2017YFB0503902], the National Natural Science Foundation of China [41671341, 41671369], and the Major Science and Technology Program for Water Pollution Control and Treatment [2017ZX07302003].

Acknowledgments

We would like to sincerely thank the editors and four anonymous reviewers for their great efforts and valuable comments to improve this manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:

CD	change detection
HSR	high spatial resolution
OBCD	object-based change detection
TRP	temporal region primitive
TLP	temporal line primitive
MLD	main line direction
PBCD	pixel-based change detection
COCD	classification-based change detection
FOCD	feature-based change detection
HCD	hybrid change detection
CVA	change vector analysis
PCA	principal component analysis
MAD	multivariate alteration detection
IRMAD	iterative reweighted multivariate alteration detection
OBIA	object-based image analysis
D-S	Dempster–Shafer
ISFA	iterative slow feature analysis
RLPAF	region–line primitive association framework
LP	line primitive
RP	region primitive
SSIM	structural similarity
BPAF	basic probability assignment function
GDAL	Geospatial Data Abstraction Library
ALOS	Advanced Land Observation Satellite
PRISM	Panchromatic Remote-Sensing Instrument for Stereo Mapping
AVNIR-2	Advanced Visible and Near-Infrared Radiometer 2
FA	false alarm
MA	missed alarm
OA	overall accuracy

References

  1. Singh, A. Digital change detection techniques using remotely sensed data. Int. J. Remote Sens. 1989, 10, 989–1003.
  2. Lu, D.; Mausel, P.; Brondízio, E.; Moran, E. Change detection techniques. Int. J. Remote Sens. 2004, 25, 2365–2401.
  3. Chen, G.; Hay, G.J.; Carvalho, L.M.T.; Wulder, M.A. Object-based change detection. Int. J. Remote Sens. 2012, 33, 4434–4457.
  4. Hussain, M.; Chen, D.; Cheng, A.; Wei, H.; Stanley, D. Change detection from remotely sensed images: From pixel-based to object-based approaches. ISPRS J. Photogramm. Remote Sens. 2013, 80, 91–106.
  5. Chen, Q.; Chen, Y. Multi-Feature Object-Based Change Detection Using Self-Adaptive Weight Change Vector Analysis. Remote Sens. 2016, 8, 549.
  6. Lamine, S.; Petropoulos, G.P.; Singh, S.K.; Szabó, S.; Bachari, N.; Srivastava, P.K.; Suman, S. Quantifying Land Use/Land Cover Spatio-Temporal Landscape Pattern Dynamics from Hyperion Using SVMs Classifier and FRAGSTATS®. Geocarto Int. 2017, 33, 862–878.
  7. Bruzzone, L.; Prieto, D.F. Automatic analysis of the difference image for unsupervised change detection. IEEE Trans. Geosci. Remote Sens. 2000, 38, 1171–1182.
  8. Nemmour, H.; Chibani, Y. Multiple support vector machines for land cover change detection: An application for mapping urban extensions. ISPRS J. Photogramm. Remote Sens. 2006, 61, 125–133.
  9. Bovolo, F.; Bruzzone, L. A Theoretical Framework for Unsupervised Change Detection Based on Change Vector Analysis in the Polar Domain. IEEE Trans. Geosci. Remote Sens. 2007, 45, 218–236.
  10. Dai, Q.; Liu, J.; Liu, S. Remote sensing image change detection using particle swarm optimization algorithm. Acta Geod. Cartogr. Sin. 2012, 41, 847–850.
  11. Su, J.; Liu, D. An object-level change detection algorithm for remote sensing images. Acta Photonica Sin. 2007, 36, 1764–1768.
  12. Wang, W.; Zhao, Z.; Zhu, H. Object-oriented multi-feature fusion change detection method for high resolution remote sensing image. In Proceedings of the 17th International Conference on Geoinformatics, Fairfax, VA, USA, 12–14 August 2009.
  13. Wu, J.; Yan, W.; Ni, W.; Hui, B. Object-Level Change Detection Based on Image Fusion and Multi-Scale Segmentation. Electron. Opt. Cont. 2013, 20, 51–55.
  14. Lv, X.; Ming, D.; Lu, T.; Zhou, K.; Wang, M.; Bao, H. A new method for region-based majority voting CNNs for very high resolution image classification. Remote Sens. 2018, 10, 1946.
  15. Peng, D.; Zhang, Y. Object-based change detection from satellite imagery by segmentation optimization and multi-features fusion. Int. J. Remote Sens. 2017, 38, 3886–3905.
  16. Sun, K.; Chen, Y. The Application of Objects Change Vector Analysis in Object-level Change Detection. In Proceedings of the 3rd International Conference on Computational Intelligence and Industrial Application (PACIIA 2010), Wuhan, China, 6–7 November 2010; pp. 383–389.
  17. Wang, C.; Xu, M.; Wang, X.; Zheng, S.; Ma, Z. Object-oriented change detection approach for high-resolution remote sensing images based on multiscale fusion. J. Appl. Remote Sens. 2013, 7, 073696.
  18. Hazel, G.G. Object-level change detection in spectral imagery. IEEE Trans. Geosci. Remote Sens. 2001, 39, 553–561.
  19. Su, X.; Wu, W.; Li, H.; Han, Y. Land-Use and Land-Cover Change Detection Based on Object-Oriented Theory. In Proceedings of the 2011 International Symposium on Image and Data Fusion, Tengchong, China, 9–11 August 2011.
  20. Ming, D.; Zhou, W.; Xu, L.; Wang, M.; Ma, Y. Coupling relationship among scale parameter, segmentation accuracy and classification accuracy in GeOBIA. Photogramm. Eng. Remote Sens. 2018, 84, 681–693.
  21. Zhang, L.; Wu, C. Advance and Future Development of Change Detection for Multi-temporal Remote Sensing Imagery. Acta Geod. Cartogr. Sin. 2017, 46, 1447–1459.
  22. Malila, W.A. Change vector analysis: An approach for detecting forest changes with Landsat. In Proceedings of the 1980 Machine Processing of Remotely Sensed Data Symposium, West Lafayette, IN, USA, 3–6 June 1980; pp. 326–335.
  23. Celik, T. Unsupervised Change Detection in Satellite Images Using Principal Component Analysis and k-Means Clustering. IEEE Geosci. Remote Sens. Lett. 2009, 6, 772–776.
  24. Nielsen, A.A.; Conradsen, K.; Simpson, J.J. Multivariate Alteration Detection (MAD) and MAF Postprocessing in Multispectral, Bitemporal Image Data: New Approaches to Change Detection Studies. Remote Sens. Environ. 1998, 64, 1–19.
  25. Nielsen, A.A. The Regularized Iteratively Reweighted MAD Method for Change Detection in Multi- and Hyperspectral Data. IEEE Trans. Image Process. 2007, 16, 463–478.
  26. Du, X.; Zhang, C.; Yang, J.; Su, W. A New Multi-Feature Approach to Object-Oriented Change Detection Based on Fuzzy Classification. Intell. Autom. Soft Comput. 2012, 18, 1063–1073.
  27. Tang, Y.; Huang, X.; Zhang, L. Fault-Tolerant Building Change Detection from Urban High-Resolution Remote Sensing Imagery. IEEE Geosci. Remote Sens. Lett. 2013, 10, 1060–1064.
  28. Chen, Q.; Chen, Y.; Jiang, W. Genetic Particle Swarm Optimization-Based Feature Selection for Very-High-Resolution Remotely Sensed Imagery Object Change Detection. Sensors 2016, 16, 1204.
  29. Zhang, B.; Lu, J.; Guo, H.; Xu, J.; Zhao, C. Object-oriented change detection for multi-source images using multi-feature fusion. In Proceedings of the 2016 Third International Conference on Artificial Intelligence and Pattern Recognition (AIPR), Lodz, Poland, 19–21 September 2016.
  30. Cai, L.; Shi, W.; Hao, M.; Zhang, H.; Gao, L. A Multi-Feature Fusion-Based Change Detection Method for Remote Sensing Images. J. Indian Soc. Remote Sens. 2018, 46, 2015–2022.
  31. Wang, X.; Liu, S.; Du, P.; Liang, H.; Xia, J.; Li, Y. Object-Based Change Detection in Urban Areas from High Spatial Resolution Images Based on Multiple Features and Ensemble Learning. Remote Sens. 2018, 10, 276.
  32. Luo, H.; Liu, C.; Wu, C.; Guo, X. Urban Change Detection Based on Dempster–Shafer Theory for Multitemporal Very High-Resolution Imagery. Remote Sens. 2018, 10, 980.
  33. Wang, M.; Wang, J. A region-line primitive association framework for object-based remote sensing image analysis. Photogramm. Eng. Remote Sens. 2016, 82, 149–159.
  34. Wang, M.; Xing, J.; Wang, J.; Lv, G. Technical design and system implementation of region-line primitive association framework. ISPRS J. Photogramm. Remote Sens. 2017, 130, 202–216.
  35. Wang, M.; Cui, Q.; Wang, J.; Ming, D.; Lv, G. Raft cultivation area extraction from high resolution remote sensing imagery by fusing multi-scale region-line primitive association features. ISPRS J. Photogramm. Remote Sens. 2017, 123, 104–113.
  36. Wang, M.; Cui, Q.; Sun, Y.; Wang, Q. Photovoltaic Panel Extraction from Very High-Resolution Aerial Imagery Using Region-Line Primitive Association Analysis and Template Matching. ISPRS J. Photogramm. Remote Sens. 2018, 141, 100–111.
  37. Wang, M.; Li, R. Segmentation of high spatial resolution remote sensing imagery based on hard-boundary constraint and two-stage merging. IEEE Trans. Geosci. Remote Sens. 2014, 52, 5712–5725.
  38. Wang, M.; Zhang, X. Change detection using high spatial resolution remotely sensed imagery by combining evidence theory and structural similarity. J. Remote Sens. 2010, 14, 558–570.
  39. Burns, J.B.; Hanson, A.R.; Riseman, E.M. Extracting straight lines. IEEE Trans. Pattern Anal. Mach. Intell. 1986, 8, 425–455.
  40. Wang, Z.; Lu, L.; Bovik, A.C. Video quality assessment based on structural distortion measurement. Signal Process. Image Commun. 2004, 19, 121–132.
  41. Ruthven, I.; Lalmas, M. Using Dempster-Shafer's Theory of Evidence to Combine Aspects of Information Use. J. Intell. Inf. Syst. 2002, 19, 267–301.
  42. Laben, C.A.; Brower, B.V. Process for Enhancing the Spatial Resolution of Multispectral Imagery Using Pan-Sharpening. U.S. Patent 6,011,875, 4 January 2000.
  43. Mattoccia, S.; Tombari, F.; Stefano, L.D. Efficient template matching for multi-channel images. Pattern Recogn. Lett. 2011, 32, 694–700.
Figure 1. Technical framework.
Figure 2. Creation of object primitives via image segmentation and map overlap.
Figure 3. Region–line primitive association framework (RLPAF) modeling of temporal region primitives and temporal line primitives. (a) RLPAF modeling. (b) A line touching the region. (c) Regions with apparent line direction changes.
Figure 4. Three experimental areas. (a,b) Original bitemporal images of area 1. (c,d) Original bitemporal images of area 2. (e,f) Original bitemporal images of area 3.
Figure 5. Experimental area 1. (a,b) Original bitemporal images. (c) Temporal region primitives (TRPs). (d,e) Temporal line primitives (TLPs). (f) Visual interpretation result. (g) Result of change vector analysis (CVA). (h) Result of principal component analysis (PCA)-k-means. (i) Result of iterative reweighted multivariate alteration detection (IRMAD). (j) Result of initial detection. (k) Result of direct threshold relaxation. (l) Result after refinement.
Figure 6. Experimental area 2. (a,b) Original bitemporal images. (c) TRPs. (d,e) TLPs. (f) Visual interpretation result. (g) Result of CVA. (h) Result of PCA-k-means. (i) Result of IRMAD. (j) Result of initial detection. (k) Result of direct threshold relaxation. (l) Result after refinement.
Figure 7. Experimental area 3. (a,b) Original bitemporal images. (c) TRPs. (d,e) TLPs. (f) Visual interpretation result. (g) Result of CVA. (h) Result of PCA-k-means. (i) Result of IRMAD. (j) Result of initial detection. (k) Result of direct threshold relaxation. (l) Result after refinement.
Figure 8. Enlarged area 1. (a,b) Original images. (c) TRPs and TLPs. (d) Result of CVA. (e) Result of PCA-k-means. (f) Result of IRMAD. (g) Result of initial detection. (h) Result after refinement.
Figure 9. Enlarged area 2. (a,b) Original images. (c) TRPs and TLPs. (d) Result of CVA. (e) Result of PCA-k-means. (f) Result of IRMAD. (g) Result of initial detection. (h) Result after refinement.
Figure 10. Enlarged area 3. (a,b) Original images. (c) TRPs and TLPs. (d) Result of CVA. (e) Result of PCA-k-means. (f) Result of IRMAD. (g) Result of initial detection. (h) Result after refinement.
Figure 11. Influences of different trust degrees of features on method accuracy. (a1), (a2), and (a3) Influences of spectral trust degree in three experimental areas; (b1), (b2), and (b3) gradient trust degree; (c1), (c2), and (c3) edge trust degree.
Figure 12. Influences of change thresholds and scaling factor. (a1), (a2), and (a3) Influences of the change threshold on Kappa in the three experimental areas; (b1), (b2), and (b3) influences of the scaling factors.
Table 1. Experimental data. ALOS: Advanced Land Observation Satellite.

Experimental Area | Image Type | Resolution | Imaging Date 1 | Imaging Date 2 | Image Size | Location | Segmentation Scale | Change Threshold
Area 1 | ALOS panchromatic image | 2.5 m | 2007.02 | 2008.01 | 999 × 963 | 31°52′45.40″N–31°54′4.47″N, 118°46′2.58″E–118°47′35.97″E | 50 | 0.3
Area 2 | ALOS fused image | 2.5 m | 2007.02 | 2008.01 | 885 × 748 | 31°39′35.516″N–31°40′42.018″N, 119°2′4.997″E–119°3′33.694″E | 50 | 0.4
Area 3 | Gaofen-2 fused image | 1 m | 2016.02 | 2017.01 | 1576 × 1219 | 31°34′22.535″N–31°34′55.646″N, 118°50′56.279″E–118°51′43.373″E | 150 | 0.44
Table 2. Precision in area 1. TP: the number of changed image objects correctly detected; FP: the number of unchanged image objects incorrectly detected as changed; FN: the number of changed image objects incorrectly detected as unchanged; TN: the number of unchanged image objects correctly detected; FA: false alarm; MA: missed alarm; OA: overall accuracy.

Type | Method | TP | FP | FN | TN | OA (%) | MA (%) | FA (%) | Kappa
Segment-based | CVA | 52 | 219 | 21 | 1000 | 81.42 | 28.77 | 20.82 | 0.23
Segment-based | IRMAD | 41 | 50 | 32 | 1169 | 93.65 | 43.84 | 4.13 | 0.47
Segment-based | PCA-k-means | 44 | 78 | 29 | 1141 | 91.72 | 39.73 | 6.58 | 0.41
Segment-based | Initial detection | 49 | 8 | 24 | 1211 | 97.52 | 32.88 | 0.63 | 0.74
Segment-based | Direct threshold relaxation | 54 | 19 | 19 | 1200 | 97.06 | 26.03 | 1.52 | 0.72
Segment-based | Refined detection | 54 | 11 | 19 | 1208 | 97.68 | 26.03 | 0.87 | 0.77
Pixel-based | CVA | 20,106 | 69,808 | 10,659 | 861,464 | 91.64 | 34.65 | 7.92 | 0.30
Pixel-based | IRMAD | 11,567 | 8102 | 19,198 | 923,170 | 97.16 | 62.40 | 0.87 | 0.44
Pixel-based | PCA-k-means | 13,116 | 15,622 | 17,649 | 915,650 | 96.54 | 57.37 | 1.68 | 0.42
Pixel-based | Initial detection | 24,433 | 1632 | 6332 | 929,640 | 99.17 | 20.58 | 0.17 | 0.86
Pixel-based | Direct threshold relaxation | 27,712 | 8520 | 3053 | 922,752 | 98.80 | 9.92 | 0.90 | 0.82
Pixel-based | Refined detection | 27,712 | 5480 | 3053 | 925,792 | 99.11 | 9.92 | 0.57 | 0.86
Table 3. Detection precision in area 2.

Type | Method | TP | FP | FN | TN | OA (%) | MA (%) | FA (%) | Kappa
Segment-based | CVA | 89 | 674 | 294 | 2060 | 68.94 | 76.76 | 31.36 | −0.01
Segment-based | IRMAD | 125 | 283 | 258 | 2451 | 82.64 | 67.36 | 10.99 | 0.22
Segment-based | PCA-k-means | 143 | 832 | 240 | 1902 | 65.61 | 62.66 | 40.68 | 0.04
Segment-based | Initial detection | 253 | 45 | 130 | 2689 | 94.39 | 33.94 | 1.53 | 0.71
Segment-based | Direct threshold relaxation | 352 | 354 | 31 | 2380 | 87.65 | 8.09 | 12.96 | 0.58
Segment-based | Refined detection | 335 | 98 | 48 | 2636 | 95.32 | 12.53 | 3.30 | 0.79
Pixel-based | CVA | 13,096 | 55,495 | 60,401 | 533,873 | 82.52 | 82.18 | 10.15 | 0.09
Pixel-based | IRMAD | 20,313 | 42,380 | 53,184 | 546,988 | 85.58 | 72.36 | 7.47 | 0.22
Pixel-based | PCA-k-means | 30,419 | 142,396 | 43,078 | 446,972 | 72.02 | 58.61 | 29.83 | 0.11
Pixel-based | Initial detection | 45,567 | 8452 | 27,930 | 580,916 | 94.51 | 38.00 | 1.35 | 0.69
Pixel-based | Direct threshold relaxation | 65,405 | 56,377 | 8092 | 532,991 | 90.27 | 11.01 | 9.42 | 0.62
Pixel-based | Refined detection | 62,352 | 22,070 | 11,145 | 567,298 | 94.99 | 15.16 | 3.51 | 0.76
Table 4. Detection precision in area 3.

Type | Method | TP | FP | FN | TN | OA (%) | MA (%) | FA (%) | Kappa
Segment-based | CVA | 123 | 1018 | 154 | 2710 | 70.74 | 55.60 | 35.93 | 0.07
Segment-based | IRMAD | 16 | 419 | 261 | 3309 | 83.02 | 94.22 | 12.60 | −0.04
Segment-based | PCA-k-means | 145 | 342 | 132 | 3386 | 88.16 | 47.65 | 9.69 | 0.32
Segment-based | Initial detection | 138 | 65 | 139 | 3663 | 94.91 | 50.18 | 1.71 | 0.55
Segment-based | Direct threshold relaxation | 221 | 289 | 56 | 3439 | 91.39 | 20.22 | 7.90 | 0.52
Segment-based | Refined detection | 199 | 139 | 78 | 3589 | 94.58 | 28.16 | 3.67 | 0.62
Pixel-based | CVA | 36,450 | 242,847 | 68,541 | 1,573,306 | 85.35 | 65.28 | 12.98 | 0.14
Pixel-based | IRMAD | 12,600 | 283,041 | 92,391 | 1,533,112 | 80.46 | 87.99 | 18.31 | −0.02
Pixel-based | PCA-k-means | 56,017 | 70,690 | 48,974 | 1,745,463 | 93.77 | 46.65 | 3.92 | 0.45
Pixel-based | Initial detection | 49,976 | 20,372 | 55,015 | 1,795,781 | 96.08 | 52.40 | 1.10 | 0.55
Pixel-based | Direct threshold relaxation | 89,191 | 80,469 | 15,800 | 1,735,684 | 94.99 | 15.05 | 4.41 | 0.62
Pixel-based | Refined detection | 81,419 | 50,193 | 23,572 | 1,765,960 | 96.16 | 22.45 | 2.72 | 0.67
Table 5. Method efficiency (unit: seconds).

Area | TRP and TLP Creation | Feature Similarity Calculation | CD by Evidence Fusion | CD Refinement Using RLPAF
Area 1 | 84.78 | 7.70 | 1.06 | 13.99
Area 2 | 89.84 | 23.27 | 1.19 | 24.25
Area 3 | 138.09 | 29.56 | 1.49 | 117.93
