Article

Unsupervised SAR Image Change Detection Based on Structural Consistency and CFAR Threshold Estimation

1 Key Laboratory of Technology in Geo-Spatial Information Processing and Application System, Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing 100190, China
2 Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing 100094, China
3 School of Electronic, Electrical and Communication Engineering, University of Chinese Academy of Sciences, Beijing 101408, China
* Author to whom correspondence should be addressed.
Remote Sens. 2023, 15(5), 1422; https://doi.org/10.3390/rs15051422
Submission received: 20 January 2023 / Revised: 22 February 2023 / Accepted: 27 February 2023 / Published: 3 March 2023
(This article belongs to the Special Issue SAR Images Processing and Analysis)

Abstract: Despite the remarkable progress made in recent years, the automatic detection of changes in synthetic aperture radar (SAR) images remains a difficult task because of speckle noise. This inherent multiplicative noise tends to increase false alarms and misdetections. As a solution, we developed an unsupervised method that detects SAR changes by analyzing structural differences. In this method, the spatial structure cues of a pixel are represented by a set of similarity weight vectors calculated at the non-local scale of the pixel. The difference image (DI) is then derived by measuring the structural consistency of the corresponding pixels. A new statistical distance that is insensitive to speckle noise was used to measure the similarity weights between patches in order to obtain an accurate structure. It was derived by applying the Nakagami–Rayleigh distribution to a statistical test and customizing the approximation for change detection. The CFAR threshold estimator in conjunction with the Rayleigh hypothesis was then employed to attenuate the effect of the unimodal histogram of the DI. The results indicated that the proposed method reduces the false alarm rate and improves the Kappa and F1-scores, while providing satisfactory visual results.

1. Introduction

Change detection (CD) is the process of detecting changes from remote sensing images taken at different times over the same geographic area. SAR imagery has been used less often for change detection tasks than optical imagery, but it has always been attractive to scholars because of its independence from atmospheric and sunlight conditions. In recent decades, it has received widespread attention in applications including urban studies [1], environmental monitoring [2], disaster assessment [3], crop management [4], and land cover monitoring [5]. However, the inherent multiplicative speckle noise has always restricted the analysis and application of SAR in change detection tasks [6].
Change detection can be classified from various perspectives: (1) by data source, into monomodal change detection (MNCD) and multi-modal change detection (MMCD); (2) by detection strategy, into direct change detection and post-classification comparison; (3) by scale, into pixel-based change detection (PBCD) and object-based change detection (OBCD); (4) by labeling, into unsupervised, semi-supervised, and supervised change detection. Despite the diverse categories, the basic steps typically involve three stages [7]: (1) pre-processing; (2) generation of the difference image (DI); (3) analysis of the difference image to generate the change map (CM).
The first step usually encompasses geometric correction, co-registration, and denoising; its purpose is to geographically align the bi-temporal images and to suppress noise. In the second step, the dual-temporal images are compared pixel by pixel to obtain the difference image (DI), which represents the change level. The third step discriminates the DI into two groups: “changed” and “unchanged”. The DI generation step and the CM generation step receive the most emphasis in the literature.
In the context of SAR change detection, the suppression of speckle noise by the ratio-based method is an important research trend. In the DI generation step, the main approaches are the ratio detector (RD) [8] and the log-ratio detector (LRD) [9]. For isolated noise pixels, a mean-ratio detector (MRD) [10] incorporating neighborhood information was developed to find changes by the ratio of the mean values of local patches. Additionally, other advanced ratio-based detectors, such as the neighborhood-based ratio detector [11] and the region likelihood ratio detector [12], have been developed to address specific situations.
Later, the combined difference image (CDI) was proposed to enhance detection by fusing the LRD and the MRD. It is based on the consensus that different detectors target different differences. In the work of Ma et al. [13], the authors performed wavelet transforms on the LRD and the MRD and then extracted their high- and low-frequency components, respectively. The final DI was reconstructed by fusing the LL, LH, HL, and HH subbands under a neighborhood-based fusion rule. Zheng et al. [14] fused the subtraction DI and the log-ratio DI by experimental trial and error, allowing region consistency and edge information to be retained. In [15], the authors proposed a novel fusion strategy based on the discrete wavelet transform (DWT). The study explored the rule of selecting weight averaging for the low-frequency coefficients and the maximum local contrast for the high-frequency coefficients. Experiments showed that the CDI fusing the log-ratio DI and the Gauss-log ratio DI enhances the difference intensity of the changed area. Later, the work by Gong et al. [16] extended the fusion techniques by constructing intensity–texture difference images (ITIDs). The ITID includes the intensity DI (IDI) and the texture DI (TDI), which are generated by the log-ratio detector and Gabor filter banks, respectively. Innovatively, the multivariate generalized Gaussian distribution (MGGD) is employed to fit the joint probability of (IDI, TDI), which is used as a prior term in the graph cut. Recently, Wang et al. [17] fused superpixel-level affinities and pixel-level heterogeneous affinities to enhance the multi-scale properties for SAR change detection.
Despite the success of ratio-based methods on CD tasks, they are limited by the properties of first-order statistics and may still fail to detect changes when the average intensity of local neighborhoods remains constant [10]. One possible solution is to use sliding-window-based statistical analysis to consider higher-order neighborhood characteristics. In this mode, changes are detected by measuring the difference of the statistical distribution of each pixel between dual-temporal images. A classical approach is the assumption of a Gaussian distribution under the KL divergence [18]. Subsequently, a local image statistical series expansion method based on cumulants was proposed by Inglada and Mercier [10] to substitute the Gaussian assumption. Immediately afterwards, Zheng and You [19] proposed a projection-based Jeffrey detector (PJD) by replacing the KL divergence with the Jeffrey divergence. More similarity-based CD methods can be found in the work of Alberga [20]. However, since the changes are not modeled, it is difficult to select the window size, which affects the misdetections and overdetections.
In recent years, some scholars have obtained difference images through feature transformation and representation. In this paradigm, the amplitudes are replaced by abstract and reliable features. In [21], the interaction between feature points representing local amplitude attributes was defined as structure features, which were encoded by a graph model based on the ratio similarity. Then, the change can be indicated by the consistency between the graph model constructed on bi-temporal SAR images. In the work of Wan et al. [22], a sort histogram was created for each pixel to implicitly express the local spatial layout, which weakens the defects of the PBCD method. Besides, a pairwise pixel-structure-representing approach was proposed by Touati et al. [23] for the heterogeneous CD task. It models the observation field through each image’s own pixel pairs, which is imaging modality-invariant. Recently, a parametric contraction mapping strategy based on spatial fractal decomposition was proposed in [24] to make dual-temporal images comparable in the presence of intensity heterogeneity. This mapping is based on the consensus that any satellite data can be approximately encoded with their own appropriate spatial transformation part. Moreover, in recent studies, Sun et al. [25] proposed the NLPG method to build a K-nearest graph and a cross-mapping graph for each pixel, whose discrepancy can reflect the change information. Later, an extended version of NLPG, named IRG-McS [26], was put forward based on superpixels and iteration.
The DI analysis step can be viewed as a binary partitioning problem. A decision threshold is usually determined to divide the DI into two categories. The Kittler–Illingworth (K&I) threshold [27], the expectation maximization (EM) threshold, and Otsu [28] are several widely used estimation methods; the first two work under a class-conditional distribution assumption, whereas Otsu requires no prior distribution assumption. Besides, empirical threshold selection, called “trial and error” (TAE), is also a common approach. Clustering methods such as PCAKmeans [29], FCM, and FLICM [30,31,32], operating in an unsupervised segmentation manner, automatically aggregate pixels into “changed” and “unchanged” classes without modeling the statistical distribution. Meanwhile, the recently proposed threshold method HFEM outperforms Otsu and FCM in the case of only a few changed areas [33].
This research focused on improving the performance of the pixel-based change detector in the presence of speckle noise. To this end, we propose an effective method based on the self-similarity of non-local structures. Speckle noise can produce a large amplitude difference even between pixel pairs that have not changed, even in a monomodal CD task. However, the unchanged areas share the same spatial structure [25]. Inspired by Sun et al. [34,35], we can, therefore, determine whether a pixel has changed by measuring the structural consistency of the corresponding pixels. Unlike them, we focused on SAR change detection. The acquired structural features were dense, i.e., each patch in the non-local scale was taken into account. The spatial structure information of pixels is represented by our proposed non-local structure weight (NLSW) feature and a sorted version (SNLSW). The structural consistency can then be measured by the similarity/dissimilarity of the NLSW features under the $L_2$-norm.
Each component in the NLSW/SNLSW feature, as the weight strength of the mutual interaction between the center pixel and non-local spatial pixel, is a function of the similarity of their neighborhood patches. The Euclidean distance used in the NLM algorithm is not suitable for SAR due to its inability to accurately construct the strength of the structural weights. To overcome this problem, we derived and approximated a similarity metric suitable for the multiplicative noise model based on our previous work [36], where a dedicated statistical distribution of SAR, i.e., the Nakagami–Rayleigh distribution, was assumed. The derived similarity metric was then used to build the weight components. As a result, pixels associated with changes presented larger differences between the corresponding NLSW/SNLSW features compared to those unchanged pixels.
The conversion of the difference image (DI) to the change map (CM) can be regarded as a threshold segmentation procedure. The constant false alarm rate (CFAR) was employed to automatically extract the change areas with an imposed Rayleigh statistical distribution assumption to avoid the difficulty of estimating the decision threshold using histogram-based methods when the change indicator exhibits a unimodal histogram. Besides, the Otsu threshold, K&I threshold, and TAE were also implemented in the DI analysis step.
The main contributions are as follows:
(1)
We propose an unsupervised framework for automated SAR change detection. In the DI generation step, we extract the pixelwise non-local structure weight (NLSW) features and detect changes by spatial structure cues. Eventually, the comparability of bi-temporal SAR images (BTSIs) was enhanced.
(2)
The mathematical proof is given to illustrate the uncertainty of the similarity between patches in the expectation sense when the Euclidean distance is used in multiplicative noise. For this reason, a patch similarity metric, dedicated to the proposed NLSW change detector for BTSIs, was derived and approximated by considering the statistical properties of SAR.
(3)
The CFAR under the Rayleigh distribution assumption was applied to the DI to generate the final change map, thus avoiding the unimodal histogram problem. The experiments on simulated and real bi-temporal SAR images demonstrated the effectiveness and stability of the proposed method.
The rest of this paper is organized as follows. Section 2 presents the relevant theories and the proposed framework. Section 3 presents the experimental results, the parameters’ analysis, and the discussion. The conclusion is provided in Section 4.

2. Materials and Methods

Speckle noise is a key factor affecting accurate change detection in SAR. Due to the fluctuating amplitude, even pixels that have not changed may exhibit significant radiometric differences, and purely amplitude-based methods may fail to highlight the change area. However, self-similar structures can always be found in any image (common to optical and SAR images); that is, the image can be (approximately) formed of properly transformed parts of itself [24]. One way of constructing self-similar structural information is by applying the idea of non-local means (NLMs) [37]. However, in this work, we were primarily interested in the similarity weights between non-local patches rather than the denoising process of NLMs, because the structural information can be well quantified by measuring the similarity between patch patterns. Therefore, we built the non-local structure weight (NLSW) feature for each pixel of the bi-temporal images by using a set of measured weights in the non-local search window. Figure 1 depicts the construction process of the NLSW feature: we first compute the weight $G(N_p, N_q)$ (defined in Section 2.2) and then organize all the weights in a certain way (one case is shown by the yellow curve). By assigning the NLSW feature to each pixel, a projection from the unstable amplitude space to the stable structural space can be implemented. Then, a simple pixelwise difference between the bi-temporal SAR images can better show the degree of change.

2.1. Theoretical Background

2.1.1. Non-local Means Method

The non-local means (NLM) is a prevalent image denoising technique that was pioneered by Buades et al. [38]. The NLM obtains a weighted average of pixels by identifying similar pixels within a non-local spatial search window and assigning larger weights to those with similar neighboring patterns [39]. In the context of SAR images, the NLM has been shown to be effective at suppressing speckle noise while preserving intricate structural details, thereby facilitating many applications such as SAR segmentation [36], guided patchwise despeckling [40], and OPT/SAR image fusion [41]. Furthermore, the NLM can also improve the resolution and sharpness of SAR images to enhance target detection and identification [42].
In this section, the NLM method described in [43] is briefly introduced. Let us consider a single-band image $u = (u(x))_{x \in \Omega}$ defined over a bounded domain $\Omega \subset \mathbb{R}^2$, with $u(x) \in \mathbb{R}^+$. $u(x)$ is the observed amplitude, i.e., the true amplitude contaminated by noise, at pixel $x \in \Omega$. Now, we consider a pixel p; then, the NLM is defined as
$$\tilde{u}(p) = \frac{1}{Z(p)} \sum_{q \in B(r,p)} w\left(N_p, N_q\right) u(q)$$
where $N_p$ and $N_q$ represent the neighborhood patches centered at pixel p and pixel q, respectively. $B(r,p)$ denotes the non-local squared search window centered at pixel p with a size of $(2r+1) \times (2r+1)$. Limited by the computational cost, q is only chosen within the r-box radius of p. $Z(p)$ is a normalizing constant, i.e., $Z(p) = \sum_{q \in B(r,p)} w(N_p, N_q)$. The weight $w(N_p, N_q)$ expresses the similarity between the vectorized patches $N_p$ and $N_q$ and is defined as
$$w\left(N_p, N_q\right) = \exp\left(-\frac{\left\| V_s(N_p) - V_s(N_q) \right\|_{2,\sigma}^2}{h^2}\right)$$
where σ denotes the standard deviation of the Gaussian kernel, which is used to account for the distance between the central pixel and the other pixels in the patch. h is the smoothing parameter, which is usually set empirically. $V_s(N_p)$ indicates the vectorization of the patch $N_p$ with a size of $s \times s$, and $\| \cdot \|_{2,\sigma}$ denotes the Gaussian-weighted $\ell_2$-norm.
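As a minimal illustration of Equation (2), the following sketch computes the NLM weight between two patches of a grayscale image; the Gaussian kernel weighting inside the patch is omitted, and the parameter values are hypothetical:

```python
import numpy as np

def nlm_weight(img, p, q, s=1, h=10.0):
    """Illustrative NLM weight w(N_p, N_q) from Equation (2).

    img : 2-D numpy array (grayscale image)
    p, q: (row, col) centers of the two patches
    s   : patch radius, so the patch size is (2s+1) x (2s+1)
    h   : smoothing parameter (chosen empirically)
    """
    # Vectorize the two patches V_s(N_p) and V_s(N_q)
    Np = img[p[0]-s:p[0]+s+1, p[1]-s:p[1]+s+1].astype(float).ravel()
    Nq = img[q[0]-s:q[0]+s+1, q[1]-s:q[1]+s+1].astype(float).ravel()
    # Squared Euclidean distance between the vectorized patches
    # (the Gaussian weighting of in-patch positions is omitted in this sketch)
    d2 = np.sum((Np - Nq) ** 2)
    return np.exp(-d2 / h**2)
```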

2.1.2. Euclidean Measurement of Inter-Patches’ Similarity and Problem on SAR Image

The proposed NLSW detector is a structural change indicator based on the self-similarity property, and its accuracy is directly affected by the stability of the weight $w(N_p, N_q)$ in Equation (2). Generally, Equation (2) can provide a fairly reliable estimate of the similarity of patches in optical images (and, likewise, in natural images). Azzabou et al. [44] asserted that the Gaussian-weighted similarity estimation (Euclidean distance) is robust for additive noise, but hardly effective for multiplicative noise (SAR speckle). With reference to [45], we introduce a theoretical proof of the validity of the Euclidean distance for optical images and its invalidity for SAR images.
Now, a classical additive noise model is considered, i.e., $Y = X + V$, where Y is the observed image, X is the noise-free image, and V is zero-mean Gaussian noise with standard deviation $\sigma_v$. For the patches $Y_{N_p}$ and $Y_{N_q}$ centered at the pth and qth pixels, respectively, the distance $\Delta Y_{<p,q>}$ can be written as
$$\Delta Y_{<p,q>} = \left\| Y_{N_p} - Y_{N_q} \right\|_2^2 = \sum_{k=1}^{O} \left( Y_{N_p}(k) - Y_{N_q}(k) \right)^2$$
where O denotes the number of pixels in the patch, then the expectation is
$$\begin{aligned} E\left[\Delta Y_{<p,q>}\right] &= E\left[\sum_{k=1}^{O} \left( Y_{N_p}(k) - Y_{N_q}(k) \right)^2\right] \\ &= \sum_{k=1}^{O} E\left[\left( X_{N_p}(k) - X_{N_q}(k) + V_{N_p}(k) - V_{N_q}(k) \right)^2\right] \\ &= \sum_{k=1}^{O} E\left[\left( \Delta X(k) + \Delta V(k) \right)^2\right] \\ &= \sum_{k=1}^{O} \left( \Delta X(k) \right)^2 + E\left[\left( \Delta V(k) \right)^2\right] \end{aligned}$$
where $\Delta X(k) = X_{N_p}(k) - X_{N_q}(k)$, and $\Delta V(k)$ is the difference of two Gaussian variables, i.e., $\Delta V \sim N(0, 2\sigma_v^2)$; then, we can obtain
$$E\left[\Delta Y_{<p,q>}\right] = \left\| X_{N_p} - X_{N_q} \right\|_2^2 + O \cdot 2\sigma_v^2$$
Equation (5) means that the Euclidean distance between the noise-polluted patches $Y_{N_p}$ and $Y_{N_q}$ is related to the similarity between the noise-free patches $X_{N_p}$ and $X_{N_q}$, as well as to the noise variance $\sigma_v^2$. One can consider that the order of similarity between noise-free patches is preserved in the sense of the distance expectation [45]. That is, if two patches are similar in the absence of noise, they will remain similar in the presence of noise.
From the above evidence, one can see that, for optical images under the additive Gaussian noise model, the Euclidean distance can effectively measure the inter-patch similarity. However, the SAR image is subject to multiplicative speckle noise, and therefore, Equation (2) no longer fits the SAR image. A multiplicative noise model is assumed for the SAR image, that is $Y = X \cdot V$, where Y denotes the noise-polluted image, X denotes the noise-free image, and V denotes coherent noise with mean $\mu_v = 1$ and variance $\sigma_v^2$. For two patches $Y_{N_p}$ and $Y_{N_q}$, we have the following distance:
$$\Delta Y_{<p,q>} = \left\| Y_{N_p} - Y_{N_q} \right\|_2^2 = \sum_{k=1}^{O} \left( Y_{N_p}(k) - Y_{N_q}(k) \right)^2$$
then
$$\begin{aligned} E\left[\Delta Y_{<p,q>}\right] &= E\left[\left\| Y_{N_p} - Y_{N_q} \right\|_2^2\right] \\ &= \sum_{k=1}^{O} E\left[\left( X_{N_p}(k) V_{N_p}(k) - X_{N_q}(k) V_{N_q}(k) \right)^2\right] \\ &= \sum_{k=1}^{O} E\left[ X_{N_p}^2(k) V_{N_p}^2(k) - 2 X_{N_p}(k) V_{N_p}(k) X_{N_q}(k) V_{N_q}(k) + X_{N_q}^2(k) V_{N_q}^2(k) \right] \\ &= \sum_{k=1}^{O} \left( X_{N_p}^2(k) (\sigma_v^2 + 1) + X_{N_q}^2(k) (\sigma_v^2 + 1) - 2 X_{N_p}(k) X_{N_q}(k) \right) \\ &= \sum_{k=1}^{O} \left( X_{N_p}(k) - X_{N_q}(k) \right)^2 + \sum_{k=1}^{O} \sigma_v^2 \left( X_{N_p}^2(k) + X_{N_q}^2(k) \right) \\ &= \left\| X_{N_p} - X_{N_q} \right\|_2^2 + \sigma_v^2 \left( \left\| X_{N_p} \right\|_2^2 + \left\| X_{N_q} \right\|_2^2 \right) \end{aligned}$$
The second term in Equation (7) indicates that the similarity between the observed patches $Y_{N_p}$ and $Y_{N_q}$ is linked to the noiseless amplitude. This means that the distance depends upon the local area where the patches reside: the larger the amplitude of the local area, the larger the distance between the patches. As a result, the structural information about SAR obtained from Equation (2) is unstable. To accurately calculate the NLSW features, a distance metric tailored to SAR images should be used.
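The amplitude dependence in Equation (7) can also be checked numerically. The toy Monte Carlo sketch below (not part of the proposed method; the noise level and patch size are illustrative) places two identical noise-free patches in a dark and a bright area and shows that the expected Euclidean distance grows with the local amplitude even though the underlying structure is unchanged:

```python
import numpy as np

rng = np.random.default_rng(0)
O, trials, sigma_v = 9, 100_000, 0.3   # patch size, Monte Carlo runs, noise std

def expected_distance(amplitude):
    # Identical noise-free patches X_Np = X_Nq, so the "true" structural distance is 0
    X = np.full(O, float(amplitude))
    # Multiplicative noise with unit mean: V = 1 + sigma_v * n
    Vp = 1.0 + sigma_v * rng.standard_normal((trials, O))
    Vq = 1.0 + sigma_v * rng.standard_normal((trials, O))
    return np.mean(np.sum((X * Vp - X * Vq) ** 2, axis=1))

print(expected_distance(10))   # ~ sigma_v^2 * (|X_p|^2 + |X_q|^2) = 0.09 * 1800 ≈ 162
print(expected_distance(100))  # ~ 0.09 * 180000 ≈ 16200, i.e. far larger for the same structure
```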

2.2. A Novel Similarity Measurement for SAR Patches

Precisely projecting the amplitude domain to the structure information domain is the key factor in determining whether the NLSW detector works in the change detection task. It is, of course, inappropriate for SAR to use Equation (2) to calculate the similarity weights. Quite a few distance measurements have been developed in recent years to calculate the similarity of SAR patches. Statistical tests on SAR statistical distributions are a widely discussed topic, such as Deledalle's proposal in [46], where a series of distance metrics was summarized based on the statistical properties of SAR images. Wan et al. [39] applied the Nakagami–Rayleigh distribution to the Bayesian formulation and derived a new statistical distance. Building on their work, we combined the Nakagami–Rayleigh distribution and the generalized likelihood ratio (GLR) to develop a non-logarithmic ratio-based distance in our previous work [36]. This paper introduces this similarity metric into the change detection task due to its decent performance. Significantly, to meet the practical requirements and ensure computational robustness, we made the following modification.
As a basic theoretical multilook amplitude model for SAR, the Nakagami–Rayleigh distribution was adopted to model the SAR amplitude image [47]. The L-look Nakagami distribution is represented as
$$P(X \mid R) = \frac{2 L^L}{R^L \, \Gamma(L)} X^{2L-1} \exp\left( -\frac{L X^2}{R} \right)$$
The radar reflectivity, i.e., the noise-free pixel amplitude, is represented by R, and the actual observed amplitude is represented by X. $\Gamma(\cdot)$ is the Gamma function. For the paired patches $(N_p, N_q)$, the similarity (or distance) can be statistically regarded as the comparison of two hypotheses, $H_0$ and $H_1$:
$$\begin{aligned} H_0 &: R_p = R_q = R_{12} \quad (\mathrm{null\ hypothesis}) \\ H_1 &: R_p \neq R_q \quad (\mathrm{alternative\ hypothesis}) \end{aligned}$$
where the null hypothesis ( H 0 ) means that both patches ( N p , N q ) obey the same statistical distribution, i.e., R p = R q = R 12 (parametric constraint). The alternative hypothesis ( H 1 ) denotes no parametric constraint of R. In this case, the likelihood ratio test (LRT) is the optimal statistical test, which can approximate the ratio of two hypotheses:
$$\mathcal{L}\left( V_s(N_p), V_s(N_q) \right) = \frac{P\left( V_s(N_p), V_s(N_q) \mid R_p = R_q = R_{12}, H_0 \right)}{P\left( V_s(N_p), V_s(N_q) \mid R_p = R_1, R_q = R_2, H_1 \right)}$$
where $V_s(N_p)$ and $V_s(N_q)$ are the observed vectorized patches, which take values in $\Theta$, and s is the radius of the patches. $R_1$, $R_2$, and $R_{12}$ are the true backscatter values. Since $R = \{ R_1, R_2, R_{12} \}$ is unknown, the generalized likelihood ratio test (GLRT) extends Equation (10) by using the maximum likelihood estimates (MLE) of the unknown parameters as if they were known:
$$G\left( V_s(N_p), V_s(N_q) \right) = \sup_{R \in \Theta} \mathcal{L}\left( V_s(N_p), V_s(N_q); R \right)$$
Note that $G\left( V_s(N_p), V_s(N_q) \right) \in [0, 1]$; the larger the value, the greater the confidence in accepting the hypothesis $H_0$, which means a higher probability of the paired patches $(N_p, N_q)$ obeying the same statistical distribution.
To obtain an analytical solution, we imposed the strict assumption that the paired patches are uncorrelated and the paired pixels are independent. Thus, Equation (11) can be reformulated as
$$G\left( V_s(N_p), V_s(N_q) \right) = \prod_{k=1}^{O} \phi_{GLR}\left( x_{p,k}, x_{q,k} \right)$$
where $O = (2s+1) \times (2s+1)$, $x_{p,k} \in N_p$, and $x_{q,k} \in N_q$. $\phi_{GLR}(x_{p,k}, x_{q,k})$ is given by
$$\phi_{GLR}\left( x_{p,k}, x_{q,k} \right) = \frac{\sup_{R_{12}} P\left( x_{p,k}, x_{q,k} \mid R_p = R_q = R_{12}, H_0 \right)}{\sup_{R_1} P\left( x_{p,k} \mid R_p = R_1, H_1 \right) \, \sup_{R_2} P\left( x_{q,k} \mid R_q = R_2, H_1 \right)}$$
To obtain the maximum likelihood estimate of the numerator in Equation (13), we define the joint probability $P\left( x_{p,k}, x_{q,k} \mid H_0 \right)$ by applying Equation (8):
$$P\left( x_{p,k}, x_{q,k} \mid H_0 \right) = \left( \frac{2}{\Gamma(L)} \right)^2 \left( \frac{L}{R_{12}} \right)^{2L} \left( x_{p,k} \, x_{q,k} \right)^{2L-1} \exp\left( -\frac{L}{R_{12}} \left( x_{p,k}^2 + x_{q,k}^2 \right) \right)$$
As for the parameter $R_{12}$, we determined its maximum likelihood estimate $\hat{R}_{12}$ by differentiating the log-likelihood function $H(R_{12}) = \sum_{m=1}^{M} \ln P\left( x_{p,k}^m, x_{q,k}^m \mid R_{12} \right)$ and setting $\frac{d H(R_{12})}{d R_{12}} = 0$; then, we obtain
$$\hat{R}_{12} = \frac{1}{2M} \sum_{m=1}^{M} \left( \left( x_{p,k}^m \right)^2 + \left( x_{q,k}^m \right)^2 \right)$$
Considering that only one pixel can be observed at each position, i.e., $M = 1$, we obtain the following estimate:
$$\hat{R}_{12} = \frac{1}{2} \left( x_{p,k}^2 + x_{q,k}^2 \right)$$
For the denominators $P\left( x_{p,k} \mid R_1, H_1 \right)$ and $P\left( x_{q,k} \mid R_2, H_1 \right)$ in Equation (13), the same estimation process is performed; then, we obtain
$$\hat{R}_1 = x_{p,k}^2, \qquad \hat{R}_2 = x_{q,k}^2$$
Next, $R = \{ R_1, R_2, R_{12} \}$ in Equation (13) is replaced by the maximum likelihood estimates $\hat{R} = \{ \hat{R}_1, \hat{R}_2, \hat{R}_{12} \}$; then, we have
$$\phi_{GLR}\left( x_{p,k}, x_{q,k} \right) = \frac{\left( \dfrac{2}{\Gamma(L)} \right)^2 \dfrac{(2L)^{2L} \left( x_{p,k} \, x_{q,k} \right)^{2L-1}}{\left( x_{p,k}^2 + x_{q,k}^2 \right)^{2L}} \exp(-2L)}{\dfrac{2 L^L}{\Gamma(L)} x_{p,k}^{-2L} \, x_{p,k}^{2L-1} \exp(-L) \cdot \dfrac{2 L^L}{\Gamma(L)} x_{q,k}^{-2L} \, x_{q,k}^{2L-1} \exp(-L)}$$
After simplification,
$$\phi_{GLR}\left( x_{p,k}, x_{q,k} \right) = \left( \frac{2 \, x_{p,k} \, x_{q,k}}{x_{p,k}^2 + x_{q,k}^2} \right)^{2L}$$
Finally, we can measure the similarity of paired patches ( N p , N q ) by
$$G\left( N_p, N_q \right) = \prod_{k=1}^{O} \phi_{GLR}\left( x_{p,k}, x_{q,k} \right) = \prod_{k=1}^{O} \left( \frac{2 \, x_{p,k} \, x_{q,k}}{x_{p,k}^2 + x_{q,k}^2} \right)^{2L}$$
The structure weights can be measured by Equation (20). The statistical distance takes the form of a ratio, so it can suppress speckle noise effectively. However, in the change detection task, we cannot use the continued product form of Equation (20), because it is computationally intractable and produces unstable values: a patch containing zero-amplitude pixels would yield zero similarity, which is undesirable. Our solution to this problem, which also prevents the loss of contrast associated with the logarithmic form, is to approximate the continued product by a summed form:
$$G\left( N_p, N_q \right) \approx \sum_{k=1}^{O} \left( \frac{2 \, x_{p,k} \, x_{q,k}}{x_{p,k}^2 + x_{q,k}^2} \right)^{2L}$$
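A minimal sketch of the summed Nakagami–GLR similarity in Equation (21) is given below; the array handling and the small epsilon guard against zero-amplitude pixels are our own assumptions:

```python
import numpy as np

def glr_similarity(patch_p, patch_q, L=1, eps=1e-12):
    """Summed Nakagami-GLR similarity between two amplitude patches (Equation (21)).

    patch_p, patch_q : numpy arrays of equal shape holding SAR amplitudes
    L                : equivalent number of looks
    """
    xp = patch_p.astype(float).ravel()
    xq = patch_q.astype(float).ravel()
    # Per-pixel GLR term (Equation (19)); eps avoids division by zero for dark pixels
    phi = (2.0 * xp * xq / (xp**2 + xq**2 + eps)) ** (2 * L)
    # Summed form used instead of the continued product
    return np.sum(phi)
```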

2.3. The Non-Local Structure Weight Difference Detector

2.3.1. The Non-Local Structure Weight Feature

Consider two registered SAR images acquired at times $t_1$ and $t_2$ over the same geographical area with a size of $H \times W$, which are represented as $\mathbf{X} = \{ x_1, x_2, \ldots, x_N \}$ and $\mathbf{Y} = \{ y_1, y_2, \ldots, y_N \}$, respectively, where N is the total number of pixels. $N_p$ refers to the patch centered at the pth pixel with a size of $s \times s$. We define $W_p$ as a non-local search window of size $Q = (2w+1) \times (2w+1)$. $\mathcal{P} = \left\{ N_q \,:\, |p_r - q_r| \le w,\ |p_c - q_c| \le w \right\}$ indicates the set of non-local patches in the search window of pixel p, where $(r, c)$ denotes the position of a pixel. For the pth pixel in image $\mathbf{X}$, the non-local structure weight feature (SWF) is
$$\mathbf{SWF}_p^X = \left[ swf_{p,1}^X, \, swf_{p,2}^X, \, \ldots, \, swf_{p,q}^X, \, \ldots, \, swf_{p,Q-1}^X, \, swf_{p,Q}^X \right]$$
where each element $swf_{p,q}^X$ of the Q-dimensional $\mathbf{SWF}_p^X$ encodes the distance between the central patch $N_p$ and the corresponding candidate patch $N_q$ in the set $\mathcal{P}$. According to the modified distance metric in Equation (21), $swf_{p,q}^X$ can be calculated by
$$swf_{p,q}^X = G\left( N_p, N_q \right)$$
Traversing each pixel of the bi-temporal SAR images, we can obtain the non-local structure weight feature maps of X and Y :
$$\mathbf{SWF}^X = \left[ \mathbf{SWF}_1^X, \ldots, \mathbf{SWF}_p^X, \ldots, \mathbf{SWF}_N^X \right], \qquad \mathbf{SWF}^Y = \left[ \mathbf{SWF}_1^Y, \ldots, \mathbf{SWF}_p^Y, \ldots, \mathbf{SWF}_N^Y \right]$$
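A loop-based sketch of the feature construction in Equations (22)–(24) is given below; it reuses the glr_similarity helper from the previous sketch, assumes the image has been padded so that every patch exists, and is written for clarity rather than speed:

```python
import numpy as np

def nlsw_feature(img, p, s=1, w=3, L=1):
    """Non-local structure weight (NLSW) feature of the pixel at p (Equation (22)).

    img : 2-D amplitude image, assumed reflect-padded so all patches exist
    p   : (row, col) of the center pixel
    s   : patch radius; w : non-local search window radius; L : number of looks
    """
    pr, pc = p
    center = img[pr-s:pr+s+1, pc-s:pc+s+1]
    feats = []
    for qr in range(pr - w, pr + w + 1):
        for qc in range(pc - w, pc + w + 1):
            cand = img[qr-s:qr+s+1, qc-s:qc+s+1]
            feats.append(glr_similarity(center, cand, L=L))  # weight G(N_p, N_q)
    return np.asarray(feats)   # length Q = (2w+1)^2
```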

2.3.2. Generation of Change Difference Intensity

As soon as the NLSW features of the bi-temporal SAR images are obtained, the mapping from the amplitude space to the stable multidimensional structure feature space is completed. Therefore, we can detect the changes between the pre- and post-images using robust structure information. Theoretically, traditional pixel-based change detection (PBCD) methods can be used. Furthermore, distance criteria for measuring the difference between feature descriptors have been studied extensively in the field of feature matching. The most typical is the Minkowski criterion $L_p$; others include the $L_1$-norm distance, the $L_2$-norm distance, and variants such as the cumulative Euclidean distance and the cumulative Manhattan distance. Here, we adopted the simple $L_2$-norm on the NLSW features to generate the difference image (DI).
$$DI_p = \frac{1}{Q} \sum_{k=1}^{Q} \left| swf_{p,k}^X - swf_{p,k}^Y \right|^2$$
where Q is the number of elements in each NLSW feature. After traversing all pixels, we normalize $DI$ to obtain $DI_{norm}$:
$$DI_{norm} = \frac{DI}{\max_p DI_p}$$
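As a minimal sketch of Equations (25) and (26), assuming the NLSW feature maps are stored as H × W × Q arrays (a layout of our own choosing), the difference image can be computed as:

```python
import numpy as np

def difference_image(swf_x, swf_y):
    """Pixelwise difference between NLSW feature maps (Equations (25)-(26)).

    swf_x, swf_y : arrays of shape (H, W, Q) holding the NLSW features of the
                   pre- and post-event images.
    """
    Q = swf_x.shape[-1]
    di = np.sum(np.abs(swf_x - swf_y) ** 2, axis=-1) / Q   # Equation (25)
    return di / di.max()                                   # normalization, Equation (26)
```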
Inspired by [22,25], a variant of the NLSW called the SNLSW was developed. We used the following sorted non-local structure weight (SNLSW) feature in the actual experiments:
$$\mathbf{SWF}_p^{X,S} = \mathrm{Sort}\left( \mathbf{SWF}_p^X \right), \qquad \mathbf{SWF}_p^{Y,S} = \mathrm{Sort}\left( \mathbf{SWF}_p^Y \right)$$
In the experimental part, we analyzed the impact of the NLSW and SNLSW features and also explored the impact of the SNLSW feature length (percentage of Q) on change detection.

2.4. Generation of Change Map

The generation of the change map (CM) from the difference image (DI) can be regarded as a binary segmentation or classification problem.
$$CM(i,j) = \begin{cases} 1, & \text{if } DI(i,j) \geq T \\ 0, & \text{if } DI(i,j) < T \end{cases}$$
where the changed class is labeled 1, the unchanged class is labeled 0, and T is the threshold. Common methods include statistical-model-based, clustering-based, and threshold-based methods. This paper applied the CFAR method, which is widely used in radar detection systems. In addition to being simple to calculate, CFAR threshold estimation offers a constant false alarm rate and adaptive threshold determination, among other benefits. It also handles unimodal histograms while keeping the false alarm area under control. We must first determine a clutter statistical model for the DI before performing CFAR. Here, the Rayleigh distribution was selected, and its PDF is
$$f(x) = \begin{cases} \dfrac{x}{b^2} \exp\left( -\dfrac{x^2}{2 b^2} \right), & \text{if } x > 0 \\ 0, & \text{otherwise} \end{cases}$$
where the expectation and variance are $\mu = \sqrt{\frac{\pi}{2}} \, b$ and $\sigma^2 = \left( 2 - \frac{\pi}{2} \right) b^2$, respectively. After setting the false alarm rate $P_{fa}$, the CFAR threshold $T_{CFAR}$ can be calculated by
$$T_{CFAR} = \frac{\sqrt{-2 \ln P_{fa}} - \sqrt{\frac{\pi}{2}}}{\sqrt{2 - \frac{\pi}{2}}} \, \sigma + \mu$$
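A minimal sketch of Equations (28)–(30), assuming the mean and standard deviation of the Rayleigh-distributed clutter are estimated from the whole difference image (a simplification of a CFAR detector):

```python
import numpy as np

def cfar_change_map(di, p_fa=1e-3):
    """Rayleigh-CFAR threshold (Equation (30)) applied to the difference image.

    di   : normalized difference image (2-D array)
    p_fa : user-chosen false alarm rate
    """
    mu, sigma = di.mean(), di.std()   # simplification: statistics taken over the whole DI
    t_cfar = (np.sqrt(-2.0 * np.log(p_fa)) - np.sqrt(np.pi / 2.0)) \
             / np.sqrt(2.0 - np.pi / 2.0) * sigma + mu
    return (di >= t_cfar).astype(np.uint8)   # 1 = changed, 0 = unchanged (Equation (28))
```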
In addition, the other two commonly used automatic threshold estimation methods and an empirical selection method were also employed to extract the changed regions: (1) Otsu [28] threshold; (2) K&I threshold; (3) trial and error (TAE).
The flowchart of the proposed method is given in Figure 2. It consists of three steps: (1) projection from the amplitude feature to the NLSW feature (if the SNLSW feature is used, the NLSW needs to be sorted); (2) generation of the change difference image (DI); (3) estimation of the threshold and segmentation to obtain the change map (CM).
The CFAR method is the only one described here, and the other methods are analogous. Besides, the fine-tuning processing for the CM is not included in the flowchart, even though the post-processing is widely used for improving the detection accuracy.

3. Experiments, Results, and Discussion

3.1. Datasets, Evaluation Criteria, and Comparison

To validate the performance of the NLSW/SNLSW detector, four pairs of bi-temporal SAR images were used:
(1) D a t a s e t s # 1 / 2 : The first two datasets were taken by Radarsat-2 in the Yellow River Estuary area of China in 2008 and 2009, with a size of 257 × 289 and 306 × 291 , respectively, shown in Figure 3. The huge disparity of the speckle noise between t 1 and t 2 may aggravate the difficulties met in the process of change detection.
(2) D a t a s e t # 3 : The third dataset was taken by Radarsat-1 in 1998 and 1999 with a resolution of 10 m and a size of 627 × 619 . The change regions shown in the third column of Figure 3 were caused by the different water content of soil.
(3) D a t a s e t # 4 : The fourth dataset was taken by Radarsat-2 at Vancouver airport in 2008 and 2009 with a resolution of 3 m and a size of 908 × 532 . The narrow and long structure of the change area makes the detection difficult. Noticeably, we downsampled the bi-temporal images to reduce the impact of the registration error.
For comparison, we implemented several representative algorithms, including SH [22], MRFFCM [32], HHG [17], the log-ratio detector (LR) [9], and NLPG [25]. MRFFCM processes the DI generated by the LR detector to obtain the change map. For SH, LR, and NLPG, the Otsu [28], CFAR, PCAKMeans [29], and FLICM [31] methods were used to produce the change maps. For the proposed method, we used the CFAR estimator in addition to the K&I [27] threshold, Otsu, and trial and error (TAE).
The performance of the proposed method was evaluated by eight widely employed measurements: the false alarm rate (FAR), the missed rate (MR), the overall error (OE), the overall accuracy (OA), the precision (PRE), the recall (RC), the F1-score (F1), and the Kappa coefficient (KC). The NLSW/SNLSW performed better when the FAR, MR, and OE were small and the OA, PRE, RC, F1, and KC were large. Furthermore, the quality of the DIs generated under the various parameters was evaluated using the ROC curve, in which the positive detection rate is regarded as a function of the false alarm rate (FAR). As an additional quantitative indicator, the area under the curve (AUC) was calculated for the DI; the greater the AUC, the better the discriminability of the DI.
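For reference, these measurements can be computed from the binary confusion matrix as in the sketch below (a standard formulation with a hypothetical helper name; the exact definitions used in the experiments may differ slightly):

```python
import numpy as np

def cd_metrics(cm_pred, cm_true):
    """FAR, MR, OE, OA, PRE, RC, F1 and Kappa from binary change maps (1 = changed)."""
    pred, true = cm_pred.astype(bool).ravel(), cm_true.astype(bool).ravel()
    tp = np.sum(pred & true); fp = np.sum(pred & ~true)
    fn = np.sum(~pred & true); tn = np.sum(~pred & ~true)
    n = tp + fp + fn + tn
    far = fp / (fp + tn)                 # false alarm rate
    mr = fn / (fn + tp)                  # missed rate
    oe = (fp + fn) / n                   # overall error
    oa = (tp + tn) / n                   # overall accuracy
    pre = tp / (tp + fp)                 # precision
    rc = tp / (tp + fn)                  # recall
    f1 = 2 * pre * rc / (pre + rc)
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2   # chance agreement
    kappa = (oa - pe) / (1 - pe)
    return dict(FAR=far, MR=mr, OE=oe, OA=oa, PRE=pre, RC=rc, F1=f1, KC=kappa)
```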

3.2. Experimental Result and Quantitative Comparison

3.2.1. Noise Sensitivity Analysis of the Simulation

Speckle noise has been one of the bottlenecks of SAR change detection tasks. We, therefore, conducted the noise sensitivity experiment on the simulation of the SAR images to validate the effectiveness of the proposed method in suppressing speckle noise and representing structure information. The validity was verified in two ways: (1) for the unchanged bi-temporal images, the changes should not be detected regardless of the degree of noise interference; (2) for the changed bi-temporal images, the changed area should be detected irrespective of the noise interference.
Figure 4a,b show the T1 and T2 images. First, 6-look speckle noise was added to the T2 image. Second, 2-look, 4-look, and 6-look speckle noise was added to the T1 image, giving an image set $NoiseMap = \{ \text{T1 with 2-look noise}, \text{T1 with 4-look noise}, \text{T1 with 6-look noise} \}$. Figure 4c gives the DI matrix. According to the first row ((A),(B),(C)), it is obvious that the difference between the dual-temporal images can be detected under different numbers of looks of noise, implying that the proposed NLSW has good feasibility. Even though the noise intensity in the 2-look and 6-look images differed greatly, the NLSW features were still able to distinguish changed and unchanged pixels, which indicates that the NLSW is robust to speckle noise. Note that the only difference between the images in $NoiseMap$ was the number of looks (noise level); therefore, ideally, no change should be detected between them. (D), (E), and (F) (shown in Figure 4c), representing the DIs between any two images of $NoiseMap$, confirmed this. Specifically, (D) and (F) showed almost no possible change area, while (E) showed extremely slight possible changes. This is because the huge difference in speckle noise level between the 2-look and 6-look images complicates the change detection.
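For readers who wish to reproduce this kind of simulation, multi-look speckle can be synthesized as below (a common Gamma-based multiplicative speckle model with a hypothetical helper name; the exact simulator used here is not specified in the text, so this is only an assumed equivalent):

```python
import numpy as np

def add_speckle(img, looks=4, seed=0):
    """Multiply a noise-free image by L-look multiplicative speckle (Gamma, unit mean)."""
    rng = np.random.default_rng(seed)
    # L-look speckle: Gamma(shape=L, scale=1/L), so E[V] = 1
    v = rng.gamma(shape=looks, scale=1.0 / looks, size=img.shape)
    return img * v

# e.g. t1_2look = add_speckle(t1_clean, looks=2); t1_6look = add_speckle(t1_clean, looks=6)
```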

3.2.2. Result on Dataset #1

For Dataset #1, we set the parameters as $w_1 = 2$, $w_2 = 7$, $k = 0.1$, and $L = 3$. The segmentation thresholds were set as $T_{TAE} = 0.5$, $T_{Otsu} = 0.39$, $T_{K\&I} = 0.42$, and $T_{CFAR} = 0.45$, respectively. The default parameters were used for the comparison CD methods. The change maps of the mentioned methods are shown in Figure 5. In comparison to the other methods, MRFFCM and LR-FLICM gave more false alarms due to inaccurate amplitude information. By exploiting the pixel features, PCAKmeans (Figure 5i) suppressed the noise for a better result. The results of NLPG under two thresholds (Figure 5j,k) were evidently different, which indicates the instability of the DI generated by NLPG for SAR images. Affected by its sliding-window detection mode, SH produced many false alarms and missed blocks. HHG yielded a high number of false alarms. Our proposed methods presented the best visual results; only a few isolated noise pixels appeared in our results, in contrast with the other methods. Additionally, the NLSW detector showed consistent results (Figure 5b–e) under the four thresholds, suggesting that the DI generated by the NLSW is stable for SAR images.
Table 1 reports the quantitative assessment of the different algorithms. NLPG with CFAR had the lowest FAR, but also the most serious MR of 0.76. PCAKmeans and HHG obtained competitive FARs of 0.0351 and 0.0947, but were much inferior to our methods in terms of the MR. In the overall evaluation, the OA and Kappa also demonstrated that the proposed NLSW indicator could detect change trends correctly.

3.2.3. Result on Dataset #2

Figure 6 shows the CD results on Dataset #2. We set $w_1 = 2$, $w_2 = 7$, $k = 0.1$, and $L = 1$ and the thresholds $T_{TAE} = 0.47$, $T_{Otsu} = 0.34$, $T_{K\&I} = 0.29$, and $T_{CFAR} = 0.43$.
While the LR with MRFFCM and FLICM had the highest detection resolution, the false alarms were quite serious due to speckle noise interference. Similarly, the CMs of NLPG showed extreme false alarms and misdetections. SH was almost invalid on Dataset #2. The visual performance of the NLSW detector under the different threshold methods (Figure 6b–e) was, as expected, more satisfactory. Compared to the other structure-based detectors (NLPG, SH, and HHG), the proposed NLSW detector not only found more changed pixels, but also weakened the influence of speckle noise.
Table 2 provides the results of the quantitative analysis. Our methods supplied the best FAR (0.0122), OA (0.9833), and Kappa (0.8570). An FAR of 0.0057 was achieved under the manual threshold setting (see TAE), which means the DI generated by the NLSW detector can adequately reflect the change level. According to the Kappa coefficient and F1-score, the NLSW detector with CFAR performed better than PCAKmeans, scoring 0.8570 and 0.8659 versus 0.7527 and 0.7703, respectively.

3.2.4. Result on Dataset #3

For Dataset #3, we set $w_1 = 3$, $w_2 = 7$, $k = 0.1$, $L = 7$, and $L = 1$ and the thresholds $T_{TAE} = 0.32$, $T_{Otsu} = 0.20$, $T_{K\&I} = 0.082$, and $T_{CFAR} = 0.27$. Figure 7 shows the binary change maps, and Table 3 lists the corresponding quantitative results. Generally, the amplitude-based methods, including MRFFCM and LR-FLICM (see Figure 7g,h), exhibited serious false alarm pixels, as expected, due to fluctuating amplitudes. NLPG showed a quite high false alarm rate and many misdetections due to the huge disparity between the statistical distribution of the DI generated by NLPG and the statistical assumptions in both threshold estimations. The SH method still barely worked. While HHG produced a better result, there were still large blocks of false alarms. Benefiting from the structured strategy, our methods (Figure 7b,e) outperformed the other methods and yielded satisfactory visual results. The quantitative analysis shown in Table 3 confirmed the visual results: our proposed NLSW detector won on most indicators.

3.2.5. Result on Dataset #4

On Dataset #4, we set $w_1 = 2$, $w_2 = 12$, and $L = 7$ and the thresholds $T_{TAE} = 0.42$, $T_{Otsu} = 0.25$, $T_{K\&I} = 0.29$, and $T_{CFAR} = 0.34$. Figure 8 provides the CM results obtained by the different CD methods. The fragmented change area of Dataset #4, along with speckle noise, made it difficult for MRFFCM and LR-FLICM (Figure 8g,h) to produce a clean binary CM. SH is sensitive to scattered noise, and many changed pixels were left undetected; it can be confirmed from Table 4 that the MR of SH was the highest (0.7230). The red circle (see Figure 8b–f,i–k) indicates that our proposed NLSW detector was capable of retaining small structures. Moreover, HHG also yielded a remarkable visual result.
Table 4 reports the quantitative analysis on Dataset #4. The SNLSW with CFAR was slightly higher than HHG on the recall, which was 87.32% and 86.72%, respectively. For the other indicators, HHG outperformed our approach. However, by adjusting the threshold, our method (SNLSW with TAE) achieved similar results to HHG.

3.3. Parameter Analysis and Discussion

3.3.1. Selection of Analysis Window

In this section, we investigated the sensitivity of the NLSW detector to its parameters: the local patch size w 1 and the non-local search window size w 2 . We carried out the experiments on Dataset #1 and Dataset #2, shown in Figure 3. w 1 was set as [ 1 , 2 , 3 , 4 , 5 ] , and w 2 was set from 1 to 20. The AUC curves of the NLSW detector with different w 1 and w 2 for both datasets are plotted in Figure 9a,b.
Figure 9a gives the curve on Dataset #1. We can see that the AUC curve rose at first, then gradually declined before staying steady with w 2 from 1 to 8. With w 2 further increasing, the AUC ascended slowly and remained stable. Figure 9b shows the result on Dataset #2.
Overall, the AUC curves on Dataset #2 appeared relatively flat. Concave points occurred in the five curves near $w_2 = 7$, and they had the highest AUC values for $w_2$ between 12 and 15. All AUC curves exhibited fluctuations, but the fluctuations were not significant. Dataset #1 showed a low inter-curve fluctuation of less than 2% and a low intra-curve fluctuation of less than 1%. In particular, the curve of $w_1 = 1$ showed the greatest fluctuation, perhaps because a small patch reduces the robustness of the NLSW detector. The inter-curve and intra-curve fluctuations on Dataset #2 were less than 1% and 0.5%, respectively. We speculate that a small non-local search window results in less robust and imprecise structure information. A large analysis window, however, can result in redundancy and interference, which reduces the discrimination of the NLSW features. The peak points in the AUC curves indicate where the robustness and discriminativeness of the NLSW features achieve a balance. In conclusion, the slight fluctuation indicates that the NLSW detector is insensitive to and stable with respect to the analysis window.

3.3.2. Performance Analysis of NLSW and SNLSW

This section aimed at analyzing the performance of the NLSW feature and the sorted NLSW (SNLSW) feature. We carried out the experiments on Datasets #1/2/3. The ROC curve and AUC were used to qualitatively and quantitatively evaluate the performance of the DI generated by the NLSW and SNLSW, respectively. ( w 1 , w 2 ) was set as { ( 2 , 3 ) , ( 2 , 5 ) , ( 2 , 7 ) } . Figure 10 plots the ROC curves of the NLSW and SNLSW under different parameters. As shown in Figure 10a–c, the SNLSW had a slightly higher ROC than the NLSW in the interval with a small false alarm rate. As the false alarm rate (X-axis) increased, the ROC curves crossed over and showed the opposite. We can note that the NLSW and SNLSW had different performance in different false alarm rate intervals. In general, the NLSW and SNLSW performed similarly. It is possible to select the NLSW and SNLSW based on the FAR requirements for CD tasks.
Table 5, Table 6 and Table 7 report the quantitative results of both detectors on Datasets #1/2/3. There is no significant difference between the numerical results in the tables, which is consistent with the findings from the ROC curves. It is worth mentioning that we only sorted the NLSW to obtain the SNLSW features without reducing the number of feature elements. This suggests that the SNLSW features may be redundant: in practice, not all elements in the SNLSW features contribute to the detection of changes. In Section 3.3.1, it was mentioned that a long SNLSW feature may reduce discrimination.

3.3.3. Sensitivity to the Length K of SNLSW

In the experiments, we found that the length of the SNLSW feature plays an important role in the CD task. This section analyzes the sensitivity of the SNLSW to the length K, where K refers to the feature length of the SNLSW. Two aspects were considered: (1) Interference: If K is small, the robustness of the SNLSW is not sufficient to suppress speckle noise, because the SNLSW structure feature will be dominated by weights corresponding to contaminated patches, which leads to a larger error in the DI. (2) Redundancy: A large K means more patches are included, along with more redundant patches. This causes the discrepancy between the SNLSW features to be averaged out, thus impairing the discriminative ability.
We define K m a x as the maximum number of elements in the SNLSW, i.e., the length of the SNLSW. K is defined as the actual length of the SNLSW. In the SNLSW features, K m a x is directly related to the radius w 2 as follows:
$$K_{max} = \left( 2 w_2 + 1 \right)^2 - 1$$
To explore the impact of K, we selected Datasets #1/2/3 as the analysis object. The ROC curve and AUC were used as the qualitative and quantitative indicators, respectively. In the experiments, we set w 2 = [ 5 , 9 , 15 ] , i.e., K m a x took three values. Since K varies with K m a x , it is inappropriate to set a series of fixed K. Therefore, we set K as a percentage of K m a x , as follows:
$$K = \left\{ k \mid k = \left\lceil (i+1) \times \Delta s \times K_{max} \right\rceil, \; i = 0, 1, \ldots, 9 \right\}$$
where $\lceil \cdot \rceil$ is the ceiling operator and $\Delta s$ denotes the incremental step, which was set as $\Delta s = 0.1$ in the experiments.
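For illustration, the candidate length set of Equation (32) can be generated as in the short snippet below (assuming Δs = 0.1 as in the experiments; the helper name is ours):

```python
import math

def candidate_lengths(w2, delta_s=0.1):
    """Candidate SNLSW lengths K as fractions of K_max (Equations (31)-(32))."""
    k_max = (2 * w2 + 1) ** 2 - 1
    return [math.ceil((i + 1) * delta_s * k_max) for i in range(10)]

# e.g. candidate_lengths(5) -> [12, 24, 36, 48, 60, 72, 84, 96, 108, 120]
```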
Figure 11 plots the ROC curves of the SNLSW with different K on Datasets #1/2/3. Some regularities can be observed in the ROC curves on Datasets #1/2 (see Figure 11a–f). With a low false alarm rate, the SNLSW detector had a higher ROC curve when K was smaller. As the false alarm rate rose, the curves crossed and became reversed. As the size of the non-local search window increased, the ROC curves with a small K value dominated in every interval. However, the AUC curves with different K on Datasets #1/2 (shown in Figure 12a,b) reflected that the small K was associated with the better performance of the SNLSW. We considered two extremes of change detection in the application based on the ROC curve and AUC curve. First, the accuracy was as high as possible to ensure the detected area had actually changed. Second, we preferred a high recall so that all of the changed areas were marked as much as possible, with less emphasis on the false alarm rate. On the ROC curve, the left and right intervals (X-axis) correspond to these two cases, respectively. Therefore, it is possible to select K according to both requirements mentioned above. On Dataset #3, it was apparent that the ROC curves of a small K were better, which illustrated that the SNLSW detector with a small K can obtain a better DI.
The AUC curves on Datasets #1/2/3 demonstrated in Figure 12 showed a consistent trend, i.e., the smaller the length K, the higher the value of the AUC. Combining the ROC curves and AUC curves, we believe that the sorted NLSW (SNLSW) detector with small K can often obtain a better change detection result.

3.3.4. Effect of Changed Region Size on NLSW Detector

To analyze the effect of the size of the changed regions on the NLSW detector, one approach is to use simulated data with known changed regions of different sizes. This allows for a controlled comparison of performance across different changed-region sizes. First, based on the pre-time image (Figure 13a), we generated five post-time images with different sizes of changed regions corresponding to 2.5%, 4.5%, 10.6%, 22.2%, and 30% of the total image ($256 \times 256$). Single-look simulated speckle noise with the Gamma distribution was added to the simulated images, as depicted in Figure 13. Then, we ran the algorithm on the five dual-temporal pairs with the parameter settings $\{ w_1 = 1, w_2 = 5 \}$ and $\{ w_1 = 2, w_2 = 7 \}$.
Figure 14a,b show the ROC curves under the two parameter settings, respectively, and Figure 14c presents the AUC curves under four groups of parameters. We observed that the optimal size of the changed regions may fall in the range of 4–10% of the image. Our findings revealed that the size of the changed regions had some impact on the performance of the NLSW detector: smaller or larger region sizes resulted in degraded performance, corresponding to false alarms and misdetections, respectively. However, we can cope with this sensitivity by adjusting the parameters; for example, a larger neighborhood $w_1$ can be selected to improve the effectiveness when the changed region is relatively large.

3.3.5. Sensitivity to Distance Metric

A reliable non-local structure feature is essential to the effectiveness of the NLSW detector. To illustrate the superiority, in terms of reliability, of the NLSW detector obtained with our developed distance metric, we compared it with four other distance metrics: (1) the Euclidean distance; (2) the Euclidean distance applied to the log-transformed SAR image, the “Log-Euclidean Distance”; (3) the Bayesian distance under the assumption of the Nakagami–Rayleigh distribution proposed by Wan et al. [39], the “Nakagami-Bayesian Distance”; (4) a modified version of (3), derived from the log-transformed SAR image [36], the “Log-Nakagami-Bayesian Distance”; and (5) ours (Equation (20)), the “Nakagami-GLR Distance”. The comparison was performed on Datasets #2/3, and the parameters were set as $\{ w_1 = 2, w_2 = 7 \}$.
The experimental results, as depicted by the ROC curves in Figure 15a,b, clearly showed that the detector using our distance metric outperformed the one using the Euclidean distance, indicating that it was more effective at capturing structural information and properly reflecting the trend of change. Our finding underscores the critical importance of selecting an appropriate similarity metric when building a non-local structure weight detector. For SAR images, the Euclidean distance is not an optimal option, and the intrinsic statistical properties should be fully considered.

4. Conclusions

In this study, we mainly addressed the generation of the difference image (DI) and proposed an unsupervised change detection strategy specifically for analyzing SAR images. This method avoids the problem of amplitude instability caused by speckle noise in SAR by finding the changes based on the spatial structural consistency and discrepancy of pixels. By encoding the similarity weights between patches in non-local search windows, we assigned each pixel a non-local structure weight feature, called the NLSW, and a sorted version, the SNLSW. As a measure of patch similarity, we introduced and modified a statistical distance that is insensitive to speckle noise in order to capture structure cues and calculate the NLSW features accurately. It was derived by combining the multiplicative noise model, i.e., the Nakagami–Rayleigh distribution, with a non-log-transformed generalized likelihood ratio. Then, the binary change map was automatically generated by thresholding the DI with the CFAR technique. We validated the effectiveness of the NLSW detector and its extended version, the SNLSW, on simulated and real SAR images. The experimental results indicated that the proposed method can correctly describe the difference trends between dual-temporal SAR images, providing satisfactory results in both visual and quantitative analyses.
The advantages of our method are summarized as follows: (1) The proposed CD framework is unsupervised and does not require a priori knowledge. (2) The SAR amplitude attribute is discarded, and the image is represented by stable structure information. (3) A statistical distance suitable for SAR is adopted, which effectively suppresses the noise. In future work, we will focus on the enhancement of the DI and the analysis of change difference intensity maps.

Author Contributions

Conceptualization, J.Z.; methodology, J.Z.; software, J.Z.; validation, J.Z.; formal analysis, J.Z.; investigation, J.Z.; resources, J.Z.; data curation, J.Z.; writing—original draft preparation, J.Z.; writing—review and editing, J.Z., F.W., and H.Y.; visualization, J.Z.; supervision, J.Z.; project administration, F.W. and H.Y.; funding acquisition, F.W. and H.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Key Research Program of Frontier Sciences, Chinese Academy of Science, under Grant ZDBS-LY-JSC036.

Acknowledgments

The authors would like to thank the Reviewers for their valuable suggestions and comments. Finally, we are grateful to Sun Yuli for providing us with the NLPG algorithm program.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Colin Koeniguer, E.; Nicolas, J.M. Change Detection Based on the Coefficient of Variation in SAR Time-Series of Urban Areas. Remote Sens. 2020, 12, 2089.
2. Simpson, M.D.; Marino, A.; de Maagt, P.; Gandini, E.; Hunter, P.; Spyrakos, E.; Tyler, A.; Telfer, T. Monitoring of Plastic Islands in River Environment Using Sentinel-1 SAR Data. Remote Sens. 2022, 14, 4473.
3. Sousa, J.J.; Liu, G.; Fan, J.; Perski, Z.; Steger, S.; Bai, S.; Wei, L.; Salvi, S.; Wang, Q.; Tu, J.; et al. Geohazards Monitoring and Assessment Using Multi-Source Earth Observation Techniques. Remote Sens. 2021, 13, 4269.
4. Shang, J.; Liu, J.; Poncos, V.; Geng, X.; Qian, B.; Chen, Q.; Dong, T.; Macdonald, D.; Martin, T.; Kovacs, J.; et al. Detection of crop seeding and harvest through analysis of time-series Sentinel-1 interferometric SAR data. Remote Sens. 2020, 12, 1551.
5. De Alban, J.D.T.; Connette, G.M.; Oswald, P.; Webb, E.L. Combined Landsat and L-band SAR data improves land cover classification and change detection in dynamic tropical landscapes. Remote Sens. 2018, 10, 306.
6. Lee, J.S.; Pottier, E. Polarimetric Radar Imaging: From Basics to Applications; CRC Press: Boca Raton, FL, USA, 2017.
7. You, Y.; Cao, J.; Zhou, W. A survey of change detection methods based on remote sensing images for multi-source and multi-objective scenarios. Remote Sens. 2020, 12, 2460.
8. Rignot, E.J.; Van Zyl, J.J. Change detection techniques for ERS-1 SAR data. IEEE Trans. Geosci. Remote Sens. 1993, 31, 896–906.
9. Bazi, Y.; Bruzzone, L.; Melgani, F. Automatic identification of the number and values of decision thresholds in the log-ratio image for change detection in SAR images. IEEE Geosci. Remote Sens. Lett. 2006, 3, 349–353.
10. Inglada, J.; Mercier, G. A new statistical similarity measure for change detection in multitemporal SAR images and its extension to multiscale change analysis. IEEE Trans. Geosci. Remote Sens. 2007, 45, 1432–1445.
11. Gong, M.; Cao, Y.; Wu, Q. A neighborhood-based ratio approach for change detection in SAR images. IEEE Geosci. Remote Sens. Lett. 2011, 9, 307–311.
12. Shuai, Y.M.; Xu, X.; Sun, H.; Xu, G. Change detection based on region likelihood ratio in multitemporal SAR images. In Proceedings of the 2006 8th International Conference on Signal Processing, Beijing, China, 16–20 November 2006; Volume 2.
13. Ma, J.; Gong, M.; Zhou, Z. Wavelet fusion on ratio images for change detection in SAR images. IEEE Geosci. Remote Sens. Lett. 2012, 9, 1122–1126.
14. Zheng, Y.; Zhang, X.; Hou, B.; Liu, G. Using combined difference image and k-means clustering for SAR image change detection. IEEE Geosci. Remote Sens. Lett. 2013, 11, 691–695.
15. Hou, B.; Wei, Q.; Zheng, Y.; Wang, S. Unsupervised change detection in SAR image based on Gauss-log ratio image fusion and compressed projection. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2014, 7, 3297–3317.
16. Gong, M.; Li, Y.; Jiao, L.; Jia, M.; Su, L. SAR change detection based on intensity and texture changes. ISPRS J. Photogramm. Remote Sens. 2014, 93, 123–135.
17. Wang, J.; Zhao, T.; Jiang, X.; Lan, K. A Hierarchical Heterogeneous Graph for Unsupervised SAR Image Change Detection. IEEE Geosci. Remote Sens. Lett. 2022, 19, 1–5.
18. Sofiane, H.; Ferdaous, C. Comparison of change detection indicators in SAR images. In Proceedings of the 8th European Conference on Synthetic Aperture Radar, Online, 7–10 June 2010; pp. 1–4.
19. Zheng, J.; You, H. A new model-independent method for change detection in multitemporal SAR images based on Radon transform and Jeffrey divergence. IEEE Geosci. Remote Sens. Lett. 2012, 10, 91–95.
20. Alberga, V. Similarity measures of remotely sensed multi-sensor images for change detection applications. Remote Sens. 2009, 1, 122–143.
21. Pham, M.T.; Mercier, G.; Michel, J. Change detection between SAR images using a pointwise approach and graph theory. IEEE Trans. Geosci. Remote Sens. 2015, 54, 2020–2032.
22. Wan, L.; Zhang, T.; You, H. Multi-sensor remote sensing image change detection based on sorted histograms. Int. J. Remote Sens. 2018, 39, 3753–3775.
23. Touati, R.; Mignotte, M.; Dahmane, M. Multimodal change detection in remote sensing images using an unsupervised pixel pairwise-based Markov random field model. IEEE Trans. Image Process. 2019, 29, 757–767.
24. Mignotte, M. A fractal projection and Markovian segmentation-based approach for multimodal change detection. IEEE Trans. Geosci. Remote Sens. 2020, 58, 8046–8058.
25. Sun, Y.; Lei, L.; Li, X.; Sun, H.; Kuang, G. Nonlocal patch similarity based heterogeneous remote sensing change detection. Pattern Recognit. 2021, 109, 107598.
26. Sun, Y.; Lei, L.; Guan, D.; Kuang, G. Iterative robust graph for unsupervised change detection of heterogeneous remote sensing images. IEEE Trans. Image Process. 2021, 30, 6277–6291.
27. Kittler, J.; Illingworth, J. Minimum error thresholding. Pattern Recognit. 1986, 19, 41–47.
28. Otsu, N. A threshold selection method from gray-level histograms. IEEE Trans. Syst. Man Cybern. 1979, 9, 62–66.
29. Celik, T. Unsupervised Change Detection in Satellite Images Using Principal Component Analysis and k-Means Clustering. IEEE Geosci. Remote Sens. Lett. 2009, 6, 772–776.
30. Lou, X.; Jia, Z.; Yang, J.; Kasabov, N. Change detection in SAR images based on the ROF model semi-implicit denoising method. Sensors 2019, 19, 1179.
31. Gong, M.; Zhou, Z.; Ma, J. Change detection in synthetic aperture radar images based on image fusion and fuzzy clustering. IEEE Trans. Image Process. 2011, 21, 2141–2151.
32. Gong, M.; Su, L.; Jia, M.; Chen, W. Fuzzy clustering with a modified MRF energy function for change detection in synthetic aperture radar images. IEEE Trans. Fuzzy Syst. 2013, 22, 98–109.
33. Zhang, K.; Lv, X.; Chai, H.; Yao, J. Unsupervised SAR image change detection for few changed area based on histogram fitting error minimization. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–19.
34. Sun, Y.; Lei, L.; Li, X.; Tan, X.; Kuang, G. Patch similarity graph matrix-based unsupervised remote sensing change detection with homogeneous and heterogeneous sensors. IEEE Trans. Geosci. Remote Sens. 2020, 59, 4841–4861.
35. Sun, Y.; Lei, L.; Li, X.; Tan, X.; Kuang, G. Structure consistency-based graph for unsupervised change detection with homogeneous and heterogeneous remote sensing images. IEEE Trans. Geosci. Remote Sens. 2021, 60, 1–21.
36. Zhu, J.; Wang, F.; You, H. SAR Image Segmentation by Efficient Fuzzy C-Means Framework with Adaptive Generalized Likelihood Ratio Nonlocal Spatial Information Embedded. Remote Sens. 2022, 14, 1621.
37. Buades, A.; Coll, B.; Morel, J.M. Non-local means denoising. Image Process. Line 2011, 1, 208–212.
38. Buades, A.; Coll, B.; Morel, J.M. A non-local algorithm for image denoising. In Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05), San Diego, CA, USA, 20–26 June 2005; Volume 2, pp. 60–65.
39. Wan, L.; Zhang, T.; Xiang, Y.; You, H. A robust fuzzy c-means algorithm based on Bayesian nonlocal spatial information for SAR image segmentation. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 896–906.
40. Vitale, S.; Cozzolino, D.; Scarpa, G.; Verdoliva, L.; Poggi, G. Guided Patchwise Nonlocal SAR Despeckling. IEEE Trans. Geosci. Remote Sens. 2019, 57, 6484–6498.
41. Fu, S.; Xu, F.; Jin, Y.Q. Reciprocal translation between SAR and optical remote sensing images with cascaded-residual adversarial networks. Sci. China Inf. Sci. 2021, 64, 1–15.
42. Zhang, T.; Zhang, Z.; Yang, H.; Guo, W.; Yang, Z. Ship Detection of Polarimetric SAR Images Using a Nonlocal Spatial Information-Guided Method. IEEE Geosci. Remote Sens. Lett. 2022, 19, 1–5.
43. Kervrann, C.; Boulanger, J.; Coupé, P. Bayesian non-local means filter, image redundancy and adaptive dictionaries for noise removal. In Proceedings of the International Conference on Scale Space and Variational Methods in Computer Vision, Ischia, Italy, 30 May–2 June 2007; Springer: Berlin/Heidelberg, Germany, 2007; pp. 520–532.
44. Azzabou, N.; Paragios, N.; Guichard, F.; Cao, F. Variable bandwidth image denoising using image-based noise models. In Proceedings of the 2007 IEEE Conference on Computer Vision and Pattern Recognition, Minneapolis, MN, USA, 17–22 June 2007; pp. 1–7.
45. Feng, H.; Hou, B.; Gong, M. SAR image despeckling based on local homogeneous-region segmentation by using pixel-relativity measurement. IEEE Trans. Geosci. Remote Sens. 2011, 49, 2724–2737.
46. Deledalle, C.A.; Denis, L.; Tupin, F. How to compare noisy patches? Patch similarity beyond Gaussian noise. Int. J. Comput. Vis. 2012, 99, 86–102.
47. Deledalle, C.A.; Denis, L.; Tupin, F. Iterative weighted maximum likelihood denoising with probabilistic patch-based weights. IEEE Trans. Image Process. 2009, 18, 2661–2672.
Figure 1. Illustration of constructing the proposed non-local structure weight feature. Taking the pth pixel as an example, the non-local search window $W_p$ has the size $(2w_2+1) \times (2w_2+1)$.
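As a minimal illustration of the windowing described in Figure 1, the Python sketch below assembles a per-pixel weight vector over a $(2w_2+1) \times (2w_2+1)$ search window by comparing $(2w_1+1) \times (2w_1+1)$ patches. It uses a plain Gaussian kernel of the Euclidean patch distance instead of the Nakagami–Rayleigh-based similarity measure used in the paper, and the function name and smoothing parameter `h` are illustrative assumptions, not part of the original method.

```python
import numpy as np

def nonlocal_weight_vector(img, p_row, p_col, w1=2, w2=5, h=0.1):
    """Illustrative non-local weight vector for the pixel at (p_row, p_col).

    Each weight compares the (2*w1+1)^2 patch around the centre pixel with the
    patch around every pixel in the (2*w2+1)^2 search window.  A Gaussian
    kernel of the Euclidean patch distance is used purely for illustration.
    """
    pad = w1 + w2
    padded = np.pad(img.astype(float), pad, mode="reflect")
    r, c = p_row + pad, p_col + pad

    centre = padded[r - w1:r + w1 + 1, c - w1:c + w1 + 1]
    weights = []
    for dr in range(-w2, w2 + 1):
        for dc in range(-w2, w2 + 1):
            patch = padded[r + dr - w1:r + dr + w1 + 1,
                           c + dc - w1:c + dc + w1 + 1]
            d2 = np.mean((centre - patch) ** 2)        # mean squared patch distance
            weights.append(np.exp(-d2 / (h ** 2)))     # Gaussian kernel weight
    weights = np.asarray(weights)
    return weights / weights.sum()                     # normalised weight vector

# Example: a weight vector of length (2*5+1)^2 = 121 for the pixel at (10, 10)
img = np.random.rand(32, 32)
w = nonlocal_weight_vector(img, 10, 10)
print(w.shape)  # (121,)
```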
Figure 2. The framework of the proposed method. Note: the CFAR threshold segmentation method is employed in this diagram. In the experiments, the TAE, Otsu, and K&I threshold methods were also adopted.
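As a rough illustration of the CFAR thresholding step named in Figure 2, the sketch below derives the threshold in closed form under a Rayleigh hypothesis for the unchanged part of the DI: if the no-change values follow a Rayleigh law with scale $\sigma$, then $P(X>T)=\exp(-T^2/(2\sigma^2))$, so $T=\sigma\sqrt{-2\ln P_{fa}}$. Estimating $\sigma$ over the whole DI, as done here, is an assumption for illustration only; the paper's estimator may isolate the unchanged population differently, and the function name is hypothetical.

```python
import numpy as np

def rayleigh_cfar_threshold(di, pfa=1e-3):
    """Illustrative CFAR threshold for a difference image `di`, assuming the
    unchanged pixels follow a Rayleigh distribution with scale sigma.

    For Rayleigh clutter, P(X > T) = exp(-T^2 / (2*sigma^2)), hence
    T = sigma * sqrt(-2 * ln(Pfa)).  Here sigma is the maximum-likelihood
    estimate over the whole DI, a rough surrogate for estimating it from the
    unchanged background only.
    """
    x = np.asarray(di, dtype=float).ravel()
    sigma = np.sqrt(np.mean(x ** 2) / 2.0)       # Rayleigh ML scale estimate
    return sigma * np.sqrt(-2.0 * np.log(pfa))

# Example on a synthetic DI dominated by Rayleigh "no-change" values
rng = np.random.default_rng(0)
di = rng.rayleigh(scale=0.2, size=(256, 256))
t = rayleigh_cfar_threshold(di, pfa=1e-2)
change_map = di > t
print(t, change_map.mean())   # the empirical alarm rate should be close to 1e-2
```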
Figure 3. Four experimental bi-temporal SAR datasets. From the first column to the fourth column are Dataset #1, Dataset #2, Dataset #3, and Dataset #4, respectively. Each column shows, from top to bottom, the image at $t_1$, the image at $t_2$, and the ground truth.
Figure 4. Noise sensitivity test on the simulated SAR image. (a) T1 image. (b) T2 image. (c) Change difference intensity map matrix. We added 2-look, 4-look, and 6-look speckle noise to the T1 image and 6-look speckle noise to the T2 image.
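A common way to generate the multi-look speckle used in such simulations is to multiply the clean intensity image by unit-mean Gamma noise whose shape parameter equals the number of looks. The snippet below is a generic sketch of that standard model, offered under the assumption that it approximates the simulation protocol; it is not necessarily the exact procedure behind Figure 4.

```python
import numpy as np

def add_multilook_speckle(clean, looks, rng=None):
    """Multiply a clean intensity image by L-look speckle.

    L-look intensity speckle is commonly modelled as Gamma(shape=L, mean=1)
    multiplicative noise; larger `looks` means weaker speckle.
    """
    rng = np.random.default_rng() if rng is None else rng
    speckle = rng.gamma(shape=looks, scale=1.0 / looks, size=clean.shape)
    return clean * speckle

rng = np.random.default_rng(42)
clean = np.full((128, 128), 100.0)
noisy_2look = add_multilook_speckle(clean, looks=2, rng=rng)   # strong speckle
noisy_6look = add_multilook_speckle(clean, looks=6, rng=rng)   # weaker speckle
print(noisy_2look.std(), noisy_6look.std())  # std decreases as looks increase
```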
Figure 5. Change maps of the different methods on Dataset #1. (a) DI of the proposed method (heat map). (b) SNLSW-TAE (ours). (c) SNLSW-Otsu (ours). (d) SNLSW-K&I (ours). (e) SNLSW-CFAR (ours). (f) HHG. (g) MRFFCM. (h) LR-FLICM. (i) LR-PCAKmeans. (j) NLPG-Otsu. (k) NLPG-CFAR. (l) SH-Otsu.
Figure 6. Change maps of the different methods on Dataset #2. (a) DI of the proposed method (heat map). (b) SNLSW-TAE (ours). (c) SNLSW-Otsu (ours). (d) SNLSW-K&I (ours). (e) SNLSW-CFAR (ours). (f) HHG. (g) MRFFCM. (h) LR-FLICM. (i) LR-PCAKmeans. (j) NLPG-Otsu. (k) NLPG-CFAR. (l) SH-Otsu.
Figure 7. Change maps of the different methods on Dataset #3. (a) DI of the proposed method. (b) SNLSW-TAE (ours). (c) SNLSW-Otsu (ours). (d) SNLSW-K&I (ours). (e) SNLSW-CFAR (ours). (f) HHG. (g) MRFFCM. (h) LR-FLICM. (i) LR-PCAKmeans. (j) NLPG-Otsu. (k) NLPG-CFAR. (l) SH-Otsu.
Figure 8. Change maps of the different methods on Dataset #4. (a) DI of the proposed method. (b) SNLSW-TAE (ours). (c) SNLSW-Otsu (ours). (d) SNLSW-K&I (ours). (e) SNLSW-CFAR (ours). (f) HHG. (g) MRFFCM. (h) LR-FLICM. (i) LR-PCAKmeans. (j) NLPG-Otsu. (k) NLPG-CFAR. (l) SH-Otsu.
Figure 9. AUC curves with different sizes of the patch ($w_1$) and non-local search window ($w_2$): (a) Dataset #1. (b) Dataset #2.
Figure 10. The ROC curves of the NLSW and SNLSW detectors on Datasets #1/2/3. (a) ROC curves on Dataset #1. (b) ROC curves on Dataset #2. (c) ROC curves on Dataset #3.
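The ROC and AUC values reported in Figures 10–12 and 14 can, in principle, be reproduced by sweeping a threshold over the DI and comparing the result against the ground truth. The sketch below does this with scikit-learn on toy data; it is only an illustration of the evaluation protocol, and the helper name and synthetic inputs are assumptions rather than the paper's code.

```python
import numpy as np
from sklearn.metrics import roc_curve, auc

def di_roc(di, ground_truth):
    """ROC curve and AUC of a difference image against a binary ground truth.

    DI pixels are treated as detection scores; the ground truth marks changed
    pixels with 1 and unchanged pixels with 0.
    """
    scores = np.asarray(di, dtype=float).ravel()
    labels = np.asarray(ground_truth, dtype=int).ravel()
    fpr, tpr, _ = roc_curve(labels, scores)
    return fpr, tpr, auc(fpr, tpr)

# Toy example: changed pixels have larger DI values on average
rng = np.random.default_rng(1)
gt = rng.random((64, 64)) < 0.1
di = rng.rayleigh(0.2, size=gt.shape) + 0.5 * gt
fpr, tpr, auc_value = di_roc(di, gt)
print(f"AUC = {auc_value:.3f}")
```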
Figure 11. The ROC curves of the SNLSW detector with different K on Datasets #1/2/3. (ac) ROC curves on Dataset #1. (df) ROC curves on Dataset #2. (gi) ROC curves on Dataset #3.
Figure 12. The AUC curves under different K on Datasets #1/2/3. (a) AUC curves on Dataset #1. (b) AUC curves on Dataset #2. (c) AUC curves on Dataset #3.
Figure 13. Simulated bi-temporal SAR image pairs. (a) Before image. (b) After Image #1 (2.5%). (c) After Image #2 (4.5%). (d) After Image #3 (11.6%). (e) After Image #4 (22.2%). (f) After Image #5 (30%).
Figure 14. The evaluation curves on five simulated bi-temporal images. (a) ROC curves under the parameters $\{w_1 = 1, w_2 = 5\}$. (b) ROC curves under the parameters $\{w_1 = 2, w_2 = 7\}$. (c) AUC curves under four sets of parameters.
Figure 15. The ROC curves of NLSW detectors with the five different distance metrics. (a) ROC curves on Dataset #2. (b) ROC curves on Dataset #3.
Table 1. Quantitative evaluation of the different methods on Dataset #1. The SNLSW columns (TAE/Otsu/K&I/CFAR) correspond to the proposed method combined with the respective threshold estimators.

Method | MRFFCM | SH-Otsu | HHG    | LR-FLICM | LR-PCAKmeans | NLPG-Otsu | NLPG-CFAR | SNLSW-TAE | SNLSW-Otsu | SNLSW-K&I | SNLSW-CFAR
FAR    | 0.1027 | 0.2155  | 0.0947 | 0.3370   | 0.0351       | 0.1547    | 0.0117    | 0.0313    | 0.0784     | 0.0627    | 0.0507
MR     | 0.4541 | 0.4800  | 0.1416 | 0.1256   | 0.1983       | 0.1697    | 0.7605    | 0.1487    | 0.0739     | 0.0845    | 0.1003
OE     | 0.1662 | 0.2634  | 0.1032 | 0.2991   | 0.0646       | 0.1574    | 0.1471    | 0.0525    | 0.0776     | 0.0667    | 0.0597
OA     | 0.8338 | 0.7366  | 0.8968 | 0.7008   | 0.9354       | 0.8426    | 0.8529    | 0.9475    | 0.9224     | 0.9333    | 0.9403
Pre    | 0.5399 | 0.3475  | 0.6668 | 0.3639   | 0.8344       | 0.5423    | 0.8184    | 0.8572    | 0.7227     | 0.7631    | 0.7967
Recall | 0.5459 | 0.5200  | 0.8584 | 0.8744   | 0.8017       | 0.8303    | 0.2395    | 0.8513    | 0.9261     | 0.9155    | 0.8997
F1     | 0.5429 | 0.4166  | 0.7506 | 0.5139   | 0.8178       | 0.6561    | 0.3706    | 0.8542    | 0.8119     | 0.8324    | 0.8451
kappa  | 0.4413 | 0.2551  | 0.6868 | 0.3472   | 0.7785       | 0.5598    | 0.3144    | 0.8222    | 0.7639     | 0.7912    | 0.8083
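The metrics in Tables 1–4 follow from the confusion matrix of each change map against the ground truth. The sketch below uses the common definitions FAR = FP/(FP + TN), MR = FN/(TP + FN), OE = (FP + FN)/N, and OA = 1 − OE, together with Cohen's kappa; these definitions are consistent with the tabulated values but are stated here as assumptions, since the exact formulas are given in the main text rather than in this section.

```python
import numpy as np

def cd_metrics(change_map, ground_truth):
    """Common change-detection metrics from a binary change map and ground truth.

    Assumed definitions (they may differ slightly from the paper's):
      FAR = FP / (FP + TN), MR = FN / (TP + FN), OE = (FP + FN) / N, OA = 1 - OE.
    """
    cm = np.asarray(change_map, dtype=bool).ravel()
    gt = np.asarray(ground_truth, dtype=bool).ravel()
    tp = np.sum(cm & gt)
    tn = np.sum(~cm & ~gt)
    fp = np.sum(cm & ~gt)
    fn = np.sum(~cm & gt)
    n = tp + tn + fp + fn

    far = fp / (fp + tn)
    mr = fn / (tp + fn)
    oe = (fp + fn) / n
    oa = 1.0 - oe
    pre = tp / (tp + fp)
    rec = tp / (tp + fn)
    f1 = 2 * pre * rec / (pre + rec)
    # Cohen's kappa: agreement beyond chance
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / (n * n)
    kappa = (oa - pe) / (1.0 - pe)
    return dict(FAR=far, MR=mr, OE=oe, OA=oa, Pre=pre, Recall=rec, F1=f1, kappa=kappa)

# Example: evaluate a toy change map against a toy ground truth
rng = np.random.default_rng(0)
gt = rng.random((64, 64)) < 0.2
cm = gt ^ (rng.random((64, 64)) < 0.05)   # flip about 5% of the labels
print(cd_metrics(cm, gt))
```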
Table 2. Quantitative evaluation of the different methods on Dataset #2.

Method | MRFFCM | SH-Otsu | HHG    | LR-FLICM | LR-PCAKmeans | NLPG-Otsu | NLPG-CFAR | SNLSW-TAE | SNLSW-Otsu | SNLSW-K&I | SNLSW-CFAR
FAR    | 0.1337 | 0.1731  | 0.0239 | 0.3538   | 0.0338       | 0.1536    | 0.0161    | 0.0057    | 0.0308     | 0.0477    | 0.0122
MR     | 0.1332 | 0.4495  | 0.0729 | 0.0182   | 0.0370       | 0.0258    | 0.4288    | 0.1440    | 0.0334     | 0.0192    | 0.0886
OE     | 0.1336 | 0.1895  | 0.0268 | 0.3339   | 0.0340       | 0.1460    | 0.0405    | 0.0139    | 0.0310     | 0.0460    | 0.0167
OA     | 0.8664 | 0.8105  | 0.9732 | 0.6661   | 0.9660       | 0.8540    | 0.9595    | 0.9861    | 0.9690     | 0.9540    | 0.9833
Pre    | 0.2897 | 0.1667  | 0.7089 | 0.1486   | 0.6418       | 0.2852    | 0.6904    | 0.9044    | 0.6637     | 0.5641    | 0.8247
Recall | 0.8668 | 0.5505  | 0.9271 | 0.9818   | 0.9630       | 0.9742    | 0.5712    | 0.8560    | 0.9666     | 0.9808    | 0.9114
F1     | 0.4343 | 0.2559  | 0.8035 | 0.2582   | 0.7703       | 0.4412    | 0.6251    | 0.8795    | 0.7870     | 0.7162    | 0.8659
kappa  | 0.3792 | 0.1815  | 0.7894 | 0.1732   | 0.7527       | 0.3849    | 0.6039    | 0.8722    | 0.7709     | 0.6931    | 0.8570
Table 3. Quantitative evaluation of the different methods on Dataset #3.

Method | MRFFCM | SH-Otsu | HHG    | LR-FLICM | LR-PCAKmeans | NLPG-Otsu | NLPG-CFAR | SNLSW-TAE | SNLSW-Otsu | SNLSW-K&I | SNLSW-CFAR
FAR    | 0.1164 | 0.0757  | 0.0674 | 0.4866   | 0.0611       | 0.0938    | 0.0320    | 0.0074    | 0.0586     | 0.1846    | 0.0191
MR     | 0.2094 | 0.6782  | 0.0210 | 0.4202   | 0.0108       | 0.1869    | 0.4541    | 0.1424    | 0.0092     | 0.0002    | 0.0637
OE     | 0.1209 | 0.1044  | 0.0652 | 0.4835   | 0.0587       | 0.0983    | 0.0521    | 0.0138    | 0.0563     | 0.1758    | 0.0213
OA     | 0.8791 | 0.8956  | 0.9348 | 0.5165   | 0.9413       | 0.9017    | 0.9479    | 0.9862    | 0.9437     | 0.8242    | 0.9787
Pre    | 0.2535 | 0.1753  | 0.4208 | 0.0563   | 0.4476       | 0.3024    | 0.4607    | 0.8534    | 0.4581     | 0.2132    | 0.7099
Recall | 0.7906 | 0.3218  | 0.9790 | 0.5798   | 0.9892       | 0.8131    | 0.5459    | 0.8576    | 0.9908     | 0.9998    | 0.9363
F1     | 0.3839 | 0.2270  | 0.5886 | 0.1026   | 0.6163       | 0.4409    | 0.4997    | 0.8555    | 0.6266     | 0.3514    | 0.8075
kappa  | 0.3360 | 0.1762  | 0.5592 | 0.0172   | 0.5894       | 0.3991    | 0.4725    | 0.8482    | 0.6005     | 0.2961    | 0.7965
Table 4. Quantitative evaluation of the different methods on Dataset #4.

Method | MRFFCM | SH-Otsu | HHG    | LR-FLICM | LR-PCAKmeans | NLPG-Otsu | NLPG-CFAR | SNLSW-TAE | SNLSW-Otsu | SNLSW-K&I | SNLSW-CFAR
FAR    | 0.1761 | 0.0346  | 0.0105 | 0.3293   | 0.0156       | 0.0139    | 0.0125    | 0.0050    | 0.0737     | 0.0437    | 0.0208
MR     | 0.1099 | 0.7230  | 0.1328 | 0.0565   | 0.1473       | 0.3300    | 0.3545    | 0.2595    | 0.0723     | 0.0879    | 0.1268
OE     | 0.1736 | 0.0603  | 0.0151 | 0.3192   | 0.0205       | 0.0257    | 0.0252    | 0.0145    | 0.0737     | 0.0453    | 0.0248
OA     | 0.8264 | 0.9398  | 0.9849 | 0.6808   | 0.9795       | 0.9743    | 0.9748    | 0.9855    | 0.9263     | 0.9547    | 0.9752
Pre    | 0.1634 | 0.2362  | 0.7610 | 0.0997   | 0.6784       | 0.6509    | 0.6660    | 0.8509    | 0.3272     | 0.4467    | 0.6181
Recall | 0.8901 | 0.2770  | 0.8672 | 0.9435   | 0.8527       | 0.6700    | 0.6455    | 0.7405    | 0.9277     | 0.9121    | 0.8732
F1     | 0.2762 | 0.2550  | 0.8106 | 0.1803   | 0.7556       | 0.6603    | 0.6556    | 0.7919    | 0.4837     | 0.5997    | 0.7239
kappa  | 0.2276 | 0.2238  | 0.8028 | 0.1212   | 0.7450       | 0.6470    | 0.6425    | 0.7844    | 0.4537     | 0.5787    | 0.7113
Table 5. Quantitative analysis of the SNLSW and NLSW on Datasets #1/2/3 with $w_1 = 2$, $w_2 = 3$.

Method | Dataset #1 (AUC / OA / Kappa) | Dataset #2 (AUC / OA / Kappa) | Dataset #3 (AUC / OA / Kappa)
sort   | 0.9240 / 0.8932 / 0.6918     | 0.9917 / 0.9686 / 0.7684     | 0.9917 / 0.9809 / 0.7928
unsort | 0.9244 / 0.9182 / 0.7446     | 0.9924 / 0.9774 / 0.8153     | 0.9917 / 0.9793 / 0.7821
Table 6. Quantitative analysis of the SNLSW and NLSW on Datasets #1/2/3 with $w_1 = 2$, $w_2 = 5$.

Method | Dataset #1 (AUC / OA / Kappa) | Dataset #2 (AUC / OA / Kappa) | Dataset #3 (AUC / OA / Kappa)
sort   | 0.9186 / 0.9101 / 0.7327     | 0.9912 / 0.9698 / 0.7752     | 0.9907 / 0.9784 / 0.7733
unsort | 0.9202 / 0.9202 / 0.7536     | 0.9914 / 0.9740 / 0.7948     | 0.9784 / 0.9749 / 0.7515
Table 7. Quantitative analysis of the SNLSW and NLSW on Datasets #1/2/3 with $w_1 = 2$, $w_2 = 7$.

Method | Dataset #1 (AUC / OA / Kappa) | Dataset #2 (AUC / OA / Kappa) | Dataset #3 (AUC / OA / Kappa)
sort   | 0.9166 / 0.9120 / 0.7382     | 0.9910 / 0.9693 / 0.7723     | 0.9899 / 0.9773 / 0.7609
unsort | 0.9203 / 0.9203 / 0.7550     | 0.9909 / 0.9721 / 0.7834     | 0.9889 / 0.9723 / 0.7300
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
