Article

Multispectral Image Change Detection Based on Single-Band Slow Feature Analysis

1 College of Information Science and Engineering, Xinjiang University, Urumqi 830046, China
2 Institute of Image Processing and Pattern Recognition, Shanghai Jiao Tong University, Shanghai 200400, China
3 Knowledge Engineering and Discovery Research Institute, Auckland University of Technology, Auckland 1020, New Zealand
* Author to whom correspondence should be addressed.
Remote Sens. 2021, 13(15), 2969; https://doi.org/10.3390/rs13152969
Submission received: 8 June 2021 / Revised: 18 July 2021 / Accepted: 24 July 2021 / Published: 28 July 2021
(This article belongs to the Special Issue Advances in Optical Remote Sensing Image Processing and Applications)

Abstract

Due to differences in external imaging conditions, multispectral images taken at different periods are subject to radiation differences, which severely affect the detection accuracy. To solve this problem, a modified algorithm based on slow feature analysis is proposed for multispectral image change detection. First, single-band slow feature analysis is performed to process bitemporal multispectral images band by band. In this way, the differences between unchanged pixels in each pair of single-band images can be sufficiently suppressed to obtain multiple feature-difference images containing real change information. Then, the feature-difference images of each band are fused into a grayscale distance image using the Euclidean distance. After Gaussian filtering of the grayscale distance image, false detection points can be further reduced. Finally, the k-means clustering method is performed on the filtered grayscale distance image to obtain the binary change map. Experiments reveal that our proposed algorithm is less affected by radiation differences and has obvious advantages in time complexity and detection accuracy.

Graphical Abstract

1. Introduction

Remote sensing change detection is a technology that discovers changes of ground objects by analyzing remote sensing images taken at different periods [1]. It has been applied in many fields, including urban change analysis [2], environmental monitoring [3], land management [4], and natural disaster monitoring [5]. Multispectral remote sensing images are the most important data source in optical remote sensing change detection. Because multispectral images contain multiple bands spanning the visible to the infrared, they can reveal targets that single-channel images may miss. Compared with hyperspectral images, which contain a great deal of redundant information and noise, multispectral images have higher application value because of their lower preprocessing requirements. With the rapid development of satellite technology, the volume of remote sensing image data is growing exponentially, so remote sensing image processing needs faster processing speeds and higher precision. Therefore, it is of great practical significance to efficiently use the multiband information of multispectral images to quickly and objectively reflect changes of ground objects [6].
Generally, the discovery of change information is the most essential part of remote sensing change detection [7]. To identify changes easily, some methods, including image differences, image ratios, and change vector analysis (CVA) [8], aim to measure the gray difference between bitemporal images. Other image transformation methods, such as the Gram–Schmidt transformation [9], principal component analysis (PCA) [10], multivariate alteration detection (MAD) [11,12,13], and slow feature analysis (SFA) [14], map the input image data into the appropriate feature space to more easily separate the changed and unchanged pixels. Recently, some machine learning methods, such as neural networks [15,16] and dictionary learning [17], have also been applied to change detection.
Although most current methods focus on improving change detection accuracy, the results are often not satisfactory due to deviations. The most important deviation comes from the differences in external imaging conditions, such as the atmospheric and radiation conditions, solar angle, sensor calibration, and soil moisture. In these cases, even if the ground object does not change, bitemporal multispectral images will have different spectral values. That is, the difference between unchanged pixels in bitemporal images will not be zero. In addition, this deviation also leads to the following common problems in change detection: real changes and false changes are difficult to distinguish; subtle changes are difficult to identify. Therefore, how to reduce the false changes caused by imaging conditions and improve the detection accuracy of real changes is one of the most significant research topics of change detection.
In traditional methods, image transformation algorithms such as PCA and GS attempt to improve the separation between real and false changes by strengthening the change information. However, due to the diversity of real change information, it is difficult for these methods to find the most effective projection direction to achieve the desired accuracy. In contrast to the varied changed features, unchanged features show the same and relatively simple trend under radiation differences. Therefore, the separation between real and false changes can be improved by suppressing the radiation difference of unchanged features. Based on this idea, a slow feature analysis algorithm for multispectral remote sensing images was proposed [18]. Its experimental results show satisfactory change detection performance, but the disordered distribution of change information seriously affects the detection accuracy. On the basis of the SFA algorithm, Wu et al. [19] further proposed iterative slow feature analysis (ISFA). This method uses the chi-square distance and chi-square distribution to weight pixels so that unchanged pixels carry more weight in the calculation and changed pixels carry less. Although the experimental results show obvious advantages and application potential, its accuracy is still easily affected by the disordered distribution of change information, because feature bands with small eigenvalues do not necessarily contain rich variation information yet receive a large weight in the final chi-square distance. Consequently, the detection accuracy of ISFA is even inferior to that of SFA in some cases. A subsequent study [20] applied ISFA to stacked pixel- and object-level spectral features rather than only to image fragments or pixels.
However, automatically determining the optimal segmentation scale of object-level features for different images remains an open problem. With the development of machine learning, neural network methods have also been applied to change detection in multitemporal remote sensing images. In recent literature [21], a deep slow feature analysis (DSFA) method for pixel-level change detection was proposed. It combines a deep neural network with SFA, using the powerful nonlinear function approximation ability of the network to better represent the original remote sensing data, and then deploying the SFA module to suppress the spectral difference of unchanged pixels, thereby highlighting the change information. The algorithm randomly selects unchanged pixels from CVA predetection as training samples, which removes the need for a large amount of prior labeled data. However, the detection results are easily affected by the CVA predetection accuracy and the quality of the training samples.
At present, SFA has been proven to be a remote sensing change detection algorithm with good stability [22]. However, the disordered distribution of change information in multiple feature bands seriously affects the detection accuracy of this algorithm. In order to avoid this problem, a novel multispectral change detection algorithm is proposed in this paper. In the proposed algorithm, the idea of slow feature analysis is used to process the bitemporal multispectral image band by band. In this way, the radiation difference of each band can be effectively suppressed, and rich change information exists in each band. Since multispectral images can represent the features of ground objects in different bands, it is possible to more efficiently eliminate the differences in background areas by integrating the difference maps of different bands into a grayscale distance image. In this case, the obtained grayscale distance image can provide sufficient descriptions for the real changed areas.
Figure 1 shows a flow diagram of our proposed algorithm. Our algorithm is composed of five steps. First, multiple single-band image pairs are extracted from bitemporal multispectral images. In the second step, single-band slow feature analysis is performed on each pair of single-band images to obtain the optimal feature-difference image of each band. The third step is to fuse the feature-difference images of each band into a grayscale distance image through the Euclidean distance. The fourth step is filtering. Since the noise in multispectral images is mainly Gaussian noise, this paper chooses a Gaussian filter to denoise the grayscale distance image. Finally, binary clustering analysis of the filtered grayscale distance image is carried out using the k-means clustering method.
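The final binarization step of the pipeline can be sketched as a two-cluster 1-D k-means over the grayscale distance values. The following is a minimal NumPy sketch, not the authors' implementation; `distance_img` is a hypothetical input array, and the code assumes the image contains at least two distinct values:

```python
import numpy as np

def kmeans_binary(distance_img, n_iter=50):
    """Two-cluster 1-D k-means: label pixels as changed (1) or unchanged (0)."""
    d = distance_img.ravel().astype(float)
    # Initialize the two centers at the min and max distance values.
    centers = np.array([d.min(), d.max()])
    for _ in range(n_iter):
        # Assign each pixel to its nearest center.
        labels = (np.abs(d - centers[0]) > np.abs(d - centers[1])).astype(int)
        new_centers = np.array([d[labels == 0].mean(), d[labels == 1].mean()])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    # The cluster with the larger center corresponds to changed pixels.
    return labels.reshape(distance_img.shape)
```

Pixels assigned to the high-distance cluster form the changed class of the binary change map.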
The contributions of our work are as follows. First, the shortcomings of the SFA algorithm are analyzed, and a new change detection algorithm is proposed. The idea of minimizing the deviation of the unchanged region of each band is used to process bitemporal multispectral images band by band to effectively highlight the change information of each band. Second, the proposed algorithm is simple in calculation and avoids the high time consumption caused by the huge computation of iterative calculation in our previous study [23]. Last, a novel simulation dataset is presented to discuss the universality of the change detection algorithm. The characteristic of the dataset is that the simulated changes are more similar to the real ground changes on gray values.
The rest of this paper is organized as follows. Section 2 introduces related work. In Section 3, SFA theory and our proposed algorithm are introduced. Section 4 discusses the experimental results. The complexity and universality of our proposed algorithm are discussed in Section 5. Finally, Section 6 provides a conclusion.

2. Related Work

Many methods of remote sensing image change detection have been proposed. Depending on whether prior knowledge is required, these methods can be divided into supervised methods and unsupervised methods.

2.1. Supervised Methods

Supervised change detection methods are mainly developed from machine learning, including SVM-based methods [24], random forest-based methods [25], and deep learning-based methods. With the development of deep learning, techniques such as dictionary learning [17], convolutional neural networks (CNNs) [26], and generative adversarial networks (GANs) [27] have proven effective in detecting changes. Deep learning methods can automatically extract abstract features of complex images and are more robust to noise and other disturbances than other methods. However, due to the lack of training datasets, the complexity of network structures, and computational limitations, the application of deep learning to multispectral change detection is still in the exploratory stage.

2.2. Unsupervised Methods

Image algebra methods, the earliest unsupervised change detection algorithms, mainly include CVA and spectral angle mapping (SAM). CVA has received considerable attention because it retains rich change information and can extract different types of changes. Some advanced unsupervised methods for detecting multiple changes [28,29,30] are based on it. However, CVA is very sensitive to the radiation difference caused by external imaging conditions; when the radiation difference between bitemporal images is large, the detection accuracy is reduced.
Image transformation methods can often obtain higher change detection accuracy than image algebra methods because they project the original multispectral image into an appropriate feature space in which changed and unchanged objects are more easily separated. Popular image transformation methods include PCA, MAD, and SFA. The most commonly used is PCA, which can effectively eliminate the redundant information in multiband images and concentrate most of the change information. In addition, PCA can serve as a feature extraction method for other algorithms [10]. The MAD algorithm was proposed based on the idea of looking for the most relevant features of the image. Although this method has been widely studied because of its obvious advantages and latent application value, it also has a serious shortcoming: it has difficulty concentrating the change information effectively. To address this problem, Nielsen proposed the IR-MAD algorithm [31] by combining the expectation–maximization algorithm with MAD. Due to its good stability and excellent identification accuracy, IR-MAD is regarded as one of the most advanced change detection algorithms. Recently, slow feature analysis theory has been applied to detect ground changes in optical remote sensing images. This method is less limited by radiation differences and has potential application value, so it has also been widely considered. Some advanced algorithms, such as ISFA combined with Bayesian soft fusion for semantic scene change detection [19], the kernel-SFA algorithm for multivariate detection [32], and the SFA algorithm for automatic relative radiometric correction [33], are all based on this method.
It is worth mentioning that some hybrid methods, including the multimethod integration scheme [34], the multisource information fusion strategy [35], and the multifeature combination method [36], are often used to improve the quality of difference images and thus increase identification accuracy. The advantage of these methods is that they synthesize the strengths of various methods or data sources to obtain better results. However, for a specific application, it is difficult to choose an appropriate hybrid scheme and coordinate multisource data or multiple features to achieve the desired results. These problems may make the algorithms more complicated and less efficient.

3. Methodology

3.1. SFA Theory

SFA is an unsupervised algorithm for learning invariant features, which has been applied to human action recognition [37], text recognition [38], and fault detection [39]. Mathematically, SFA can be described as an optimization problem: for a given multidimensional input time signal $x(t) = [x_1(t), \dots, x_M(t)]^T$, it seeks a set of functions $[g_1(x), \dots, g_M(x)]$ that minimize the temporal variation of the output signals $z(t) = [g_1(x), \dots, g_M(x)]^T$. Because SFA was originally proposed for continuous signals, the temporal variation is usually expressed as the mean square of the first derivative of the output signals.
The optimization objective of SFA is expressed as Equation (1):
$$\Delta_j = \Delta(z_j) = \langle \dot{z}_j^2 \rangle_t, \quad j \in \{1, \dots, M\} \tag{1}$$
The constraints are written as:
$$\langle z_j \rangle_t = 0 \tag{2}$$
$$\langle z_j^2 \rangle_t = 1 \tag{3}$$
$$\forall\, i < j: \ \langle z_i z_j \rangle_t = 0 \tag{4}$$
Here, $\Delta$ measures the speed of signal change; $\dot{z}$ is the first derivative of the output signal; $\langle \cdot \rangle_t$ denotes averaging over time. Equations (2)–(4) are the zero-mean constraint, the unit-variance constraint, and the decorrelation constraint, respectively. The zero-mean constraint simplifies the computation and improves the computation speed; the unit-variance constraint avoids constant solutions so that each output signal carries a certain amount of information; the decorrelation constraint ensures that the output signals are uncorrelated with each other. After sorting, the first output signal is the slowest signal, the second one is the signal with the second-slowest change, and so on.
Assuming that the signal transformation function is linear, then the output signal can be expressed as Equation (5):
$$z_j(x) = w_j^T x \tag{5}$$
And Equations (1)–(4) should be reformulated as Equations (6)–(9):
$$\Delta(z_j) = \langle \dot{z}_j^2 \rangle_t = \langle (w_j^T \dot{x})^2 \rangle_t = w_j^T \langle \dot{x}\dot{x}^T \rangle_t\, w_j = w_j^T A w_j \tag{6}$$
$$\langle z_j \rangle_t = \langle w_j^T x \rangle_t = 0 \tag{7}$$
$$\langle z_j^2 \rangle_t = \langle (w_j^T x)(w_j^T x) \rangle_t = w_j^T \langle x x^T \rangle_t\, w_j = w_j^T B w_j = 1 \tag{8}$$
$$\langle z_i z_j \rangle_t = \langle (w_i^T x)(w_j^T x) \rangle_t = w_i^T \langle x x^T \rangle_t\, w_j = w_i^T B w_j = 0 \tag{9}$$
If Equation (8) is integrated into Equation (6), the optimization objective function can be rewritten as:
$$\Delta(z_j) = \frac{\langle \dot{z}_j^2 \rangle_t}{\langle z_j^2 \rangle_t} = \frac{w_j^T A w_j}{w_j^T B w_j} \tag{10}$$
Then, the SFA problem can be solved by the generalized eigenvalue problem:
$$A W = B W \Lambda \tag{11}$$
where $A$ is the mathematical expectation of the covariance matrix of the first derivative of the input signal; $B$ is the mathematical expectation of the covariance matrix of the input signal; $W$ is the eigenvector matrix; and $\Lambda = \mathrm{diag}(\lambda_1, \lambda_2, \dots, \lambda_M)$ is the diagonal matrix of generalized eigenvalues, where $\lambda_j = \Delta(z_j)$.
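The generalized eigenproblem in Equation (11) can be solved numerically once $A$ and $B$ are estimated. The sketch below is a NumPy illustration under the assumption that $B$ is positive definite; the function name `slow_features` is hypothetical:

```python
import numpy as np

def slow_features(A, B):
    """Solve the generalized eigenproblem A W = B W Lambda (Eq. 11).

    Returns the eigenvalues in ascending order (slowest feature first)
    and the eigenvector matrix W, normalized so that w^T B w = 1.
    """
    # Reduce to an ordinary eigenproblem on B^{-1} A.
    eigvals, W = np.linalg.eig(np.linalg.solve(B, A))
    order = np.argsort(eigvals)            # smallest Delta = slowest signal
    eigvals, W = eigvals[order].real, W[:, order].real
    # Enforce the unit-variance constraint w^T B w = 1 (Eq. 8).
    for j in range(W.shape[1]):
        W[:, j] /= np.sqrt(W[:, j] @ B @ W[:, j])
    return eigvals, W
```

Sorting the eigenvalues ascending mirrors the paper's convention that the first output signal is the slowest one.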
However, there is no such time series structure in bitemporal remote sensing images. To solve this problem, Wu et al. [18] reconstructed the SFA algorithm into the discrete case by using finite difference instead of the first derivative and applied it to change detection.
Given two centralized and standardized vectors $\hat{x}^k = [\hat{x}_1^k, \hat{x}_2^k, \dots, \hat{x}_N^k]^T$ and $\hat{y}^k = [\hat{y}_1^k, \hat{y}_2^k, \dots, \hat{y}_N^k]^T$, where $\hat{x}^k$ and $\hat{y}^k$ are the spectral vectors at the same position in the bitemporal remote sensing images $X$ and $Y$, $k$ is the serial number of the pixel, and $N$ is the number of bands, matrices $A$ and $B$ can be rewritten as Equations (12) and (13) for remote sensing change detection:
$$A = \frac{1}{P}\sum_{k=1}^{P}(\hat{x}^k - \hat{y}^k)(\hat{x}^k - \hat{y}^k)^T = \Sigma_\Delta \tag{12}$$
$$B = \frac{1}{2P}\left[\sum_{k=1}^{P}\hat{x}^k(\hat{x}^k)^T + \sum_{k=1}^{P}\hat{y}^k(\hat{y}^k)^T\right] = \frac{1}{2}(\Sigma_X + \Sigma_Y) \tag{13}$$
where $\Sigma_X$ and $\Sigma_Y$ are the covariance matrices of remote sensing images $X$ and $Y$, respectively; $\Sigma_\Delta$ is the covariance matrix of the original difference image; and $P$ is the number of pixels in a single image.
Substituting Equations (12) and (13) into Equation (11), the eigenvector matrix $W$ is obtained, and the feature-difference matrix is expressed as:
$$\mathrm{SFA} = W^T \hat{X} - W^T \hat{Y} \tag{14}$$
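Under this discrete formulation, the matrices $A$ and $B$ and the feature-difference image can be computed directly from the flattened band vectors. The following is a minimal NumPy sketch, not the authors' code; `X` and `Y` are hypothetical $N \times P$ arrays assumed to be already centralized and standardized:

```python
import numpy as np

def sfa_difference(X, Y):
    """Discrete SFA for change detection (Eqs. 12-14).

    X, Y : (N, P) arrays of centralized, standardized band vectors.
    Returns the (N, P) feature-difference matrix W^T X - W^T Y.
    """
    P = X.shape[1]
    D = X - Y
    A = D @ D.T / P                       # Sigma_Delta, Eq. (12)
    B = (X @ X.T + Y @ Y.T) / (2 * P)     # (Sigma_X + Sigma_Y) / 2, Eq. (13)
    # Generalized eigenproblem A W = B W Lambda, slowest features first.
    eigvals, W = np.linalg.eig(np.linalg.solve(B, A))
    order = np.argsort(eigvals)
    W = W[:, order].real
    return W.T @ X - W.T @ Y              # Eq. (14)
```

Rows of the result correspond to the feature-difference images discussed next, ordered by increasing $\lambda_j$.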
Figure 2 shows six feature-difference images obtained by the SFA algorithm on the Taizhou dataset [21]. These images are sorted by $\lambda_j$ value from low to high; the lower the ranking, the smaller the feature variance of the feature-difference image. That is, (a) is the feature-difference image with the smallest variance, and (f) is the one with the largest variance. In the output results, very dark or bright areas are called marked areas, which represent areas with a high absolute value of the feature difference. Theoretically, the higher the $\lambda_j$ value of a feature-difference image, the more change information it contains. It can be seen from Figure 2a–d that each image contains a certain marked area, but the amount of change information from (a) to (d) does not increase in sequence. Specifically, the marked areas in (b) are closest to the reference change image of the Taizhou dataset, followed by (a) and (d), and finally (c). The changed and unchanged areas of (e) and (f) are not effectively separated. Therefore, visual interpretation shows that SFA has difficulty concentrating the change information into a few feature bands in an effective and orderly way, and the $\lambda$ value cannot be used as a standard to measure the amount of change information.
To confirm this subjective opinion, the experiment in Table 1 was carried out. For each feature band combination, the Euclidean distance is used to generate a grayscale distance image, and k-means binary clustering analysis is then performed. Table 1 shows that the combination of feature bands 1, 2, and 4, which have clearly marked areas, performs best on PCC and kappa, while the combination of feature bands 1–6 gives the worst detection results. Based on the subjective evaluation and objective indexes, it can be inferred that the large amount of useless information contained in feature bands 5 and 6 has a negative impact on the detection accuracy, and that the change information contained in feature band 4 is more abundant than that in feature band 3. This means that the change information contained in each feature band cannot be measured only by the $\lambda_j$ value, and the SFA algorithm suffers from a disordered distribution of change information.

3.2. Proposed Method

The purpose of the SFA algorithm is to minimize radiation differences in the unchanged areas to highlight the real changes. However, the shortcoming of the disordered distribution of change information will seriously affect detection accuracy. In order to avoid this problem and further improve the accuracy of change detection, a single-band slow feature analysis method is proposed in this section.
The single-band slow feature analysis algorithm is introduced as follows. First, multiple single-band images are extracted from the multispectral images in order of wavelength. Second, the optimal projection vector of each single-band image is found by minimizing the variance of the feature-difference image of that band. Third, the optimal feature-difference images of each band are obtained. In this way, each optimal feature-difference image contains abundant change information. Finally, after the fusion of the feature-difference images, Gaussian filtering is used to obtain the final grayscale distance image. The detailed implementation steps are as follows.
First, multiple single-band image pairs are extracted from the bitemporal multispectral images. Specifically, single-band images $x_1, x_2, \dots, x_N$ are extracted from multispectral image $X$, and single-band images $y_1, y_2, \dots, y_N$ are extracted from multispectral image $Y$. The $i$-band image pair is denoted $x_i$ and $y_i$. By performing zero-centering on each single-band image, the effect of radiation differences can be initially reduced. The processed $i$-band image pair is represented as $\hat{x}_i = [\hat{x}_i^1, \dots, \hat{x}_i^P]$ and $\hat{y}_i = [\hat{y}_i^1, \dots, \hat{y}_i^P]$, where $P$ is the total number of pixels in a single-band image.
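This first step can be sketched as follows in NumPy. The helper name `band_pairs` and the $(H, W, N)$ image layout are illustrative assumptions, not part of the paper:

```python
import numpy as np

def band_pairs(X_img, Y_img):
    """Split bitemporal multispectral images into zero-centered band pairs.

    X_img, Y_img : (H, W, N) arrays. Yields (x_hat_i, y_hat_i) as
    length-P vectors (P = H*W) with each band's mean removed, which
    initially reduces the radiation difference of that band.
    """
    N = X_img.shape[2]
    for i in range(N):
        x = X_img[:, :, i].ravel().astype(float)
        y = Y_img[:, :, i].ravel().astype(float)
        yield x - x.mean(), y - y.mean()
```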
Second, the ith projection vector is found by minimizing the variance of the feature-difference image of the i-band, and then, the optimal feature-difference image of the i-band is obtained.
The primary objective of the optimization is expressed as Equation (15):
$$\Delta_i = \frac{1}{P}\sum_{k=1}^{P}\left(w_i \hat{x}_i^k - w_i \hat{y}_i^k\right)^2 = w_i^T A_i w_i \tag{15}$$
The constraints are written as:
$$\frac{1}{2P}\left[\sum_{k=1}^{P} w_i \hat{x}_i^k + \sum_{k=1}^{P} w_i \hat{y}_i^k\right] = 0 \tag{16}$$
$$\frac{1}{2P}\left[\sum_{k=1}^{P}(w_i \hat{x}_i^k)^2 + \sum_{k=1}^{P}(w_i \hat{y}_i^k)^2\right] = w_i^T B_i w_i = 1 \tag{17}$$
where A i and B i are as shown in Equations (18) and (19):
$$A_i = \frac{1}{P}\sum_{k=1}^{P}(\hat{x}_i^k - \hat{y}_i^k)(\hat{x}_i^k - \hat{y}_i^k)^T = \Sigma_{\Delta i} \tag{18}$$
$$B_i = \frac{1}{2P}\left[\sum_{k=1}^{P}\hat{x}_i^k(\hat{x}_i^k)^T + \sum_{k=1}^{P}\hat{y}_i^k(\hat{y}_i^k)^T\right] = \frac{1}{2}(\Sigma_{xi} + \Sigma_{yi}) \tag{19}$$
where $\Sigma_{\Delta i}$ is the variance matrix of the $i$-band difference image, and $\Sigma_{xi}$ and $\Sigma_{yi}$ are the variance matrices of each temporal $i$-band image.
When Equation (17) is integrated into Equation (15), the objective function can be rewritten as:
$$\Delta_i = \frac{w_i^T A_i w_i}{w_i^T B_i w_i} \tag{20}$$
With Equations (18) and (19), a new optimization objective can be further obtained:
$$\Delta_i = \frac{w_i^T \Sigma_{\Delta i}\, w_i}{w_i^T \left[\tfrac{1}{2}(\Sigma_{xi} + \Sigma_{yi})\right] w_i} \tag{21}$$
Since Equation (21) aims to obtain the optimal projection vector w i , it can be calculated through Equation (22):
$$\Sigma_{\Delta i}\, w_i = \frac{1}{2}(\Sigma_{xi} + \Sigma_{yi})\, w_i \Lambda_i \tag{22}$$
where $\Lambda_i = \Delta_i$ is the variance of the feature-difference image of the $i$-band.
The optimal feature-difference image matrix of the i-band is expressed as:
$$\mathrm{SFA}_i = w_i \hat{x}_i - w_i \hat{y}_i \tag{23}$$
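Because each band is a single feature, one reading of Equations (15)–(23) is that they collapse to scalars: $A_i$ becomes the variance of the band difference, $B_i$ the pooled band variance, and the constraint $w_i^2 B_i = 1$ fixes $w_i = 1/\sqrt{B_i}$. The sketch below illustrates this scalar reduction under that assumption (NumPy; inputs are the zero-centered band vectors):

```python
import numpy as np

def single_band_sfa(x_hat, y_hat):
    """Single-band SFA (Eqs. 15-23) for zero-centered band vectors.

    In one dimension the generalized eigenproblem collapses to scalars:
    A_i is the variance of the difference image, B_i the pooled band
    variance, and the constraint w_i^2 * B_i = 1 gives w_i = 1/sqrt(B_i).
    """
    P = x_hat.size
    A_i = np.sum((x_hat - y_hat) ** 2) / P                  # Eq. (18)
    B_i = (np.sum(x_hat ** 2) + np.sum(y_hat ** 2)) / (2 * P)  # Eq. (19)
    w_i = 1.0 / np.sqrt(B_i)
    sfa_i = w_i * x_hat - w_i * y_hat                       # Eq. (23)
    return sfa_i, A_i / B_i            # difference image and Lambda_i
```

The returned $\Lambda_i = A_i / B_i$ equals the variance of the optimal feature-difference image, as stated after Equation (22).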
Third, the $\mathrm{SFA}$ grayscale distance matrix is calculated. Repeating the previous steps, the optimal feature-difference image matrix of each band is obtained. After reshaping each difference matrix $\mathrm{SFA}_i$ into a column vector, the difference matrix $\mathrm{SFA}$ is expressed as:
$$\mathrm{SFA} = (\mathrm{SFA}_1, \mathrm{SFA}_2, \dots, \mathrm{SFA}_N) \tag{24}$$
$$D = \left(\sum_{k=1}^{N}(\mathrm{SFA}_k)^2\right)^{1/2} \tag{25}$$
Then, the difference matrix $\mathrm{SFA}$ is converted into the grayscale distance image matrix $D$ by Equation (25) (the Euclidean distance formula), which unifies the change intensity of pixels across bands.
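Equation (25) amounts to a per-pixel Euclidean norm across the band axis. A minimal NumPy sketch (the function name `fuse_bands` is illustrative):

```python
import numpy as np

def fuse_bands(sfa_stack):
    """Fuse per-band feature-difference images into one grayscale
    distance image via the Euclidean distance (Eq. 25).

    sfa_stack : (N, H, W) array of N feature-difference images.
    """
    return np.sqrt(np.sum(sfa_stack ** 2, axis=0))
```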
Finally, the grayscale distance image is processed by a Gaussian filter to further eliminate false changes.
To measure the impact of the Gaussian filter on the performance of the proposed algorithm, the experiments in Table 2 were carried out on the Taizhou dataset. In these experiments, the standard deviation of the Gaussian filter was set to 1. The results show that Gaussian filtering can effectively reduce FN and FP and improve the accuracy of change detection. The 7 × 7 filter window gives the highest precision. For simplicity, a Gaussian filter with a 7 × 7 window is used in the following experiments.
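The 7 × 7, σ = 1 smoothing step can be sketched with a separable Gaussian kernel in plain NumPy. This is an illustrative implementation, not the authors' code; the reflect-style edge padding is an assumption:

```python
import numpy as np

def gaussian_filter_7x7(img, sigma=1.0):
    """Separable 7x7 Gaussian smoothing of the grayscale distance image
    (the setting found best in Table 2), with reflect edge padding."""
    r = 3  # 7x7 window -> radius 3
    t = np.arange(-r, r + 1)
    k = np.exp(-t ** 2 / (2 * sigma ** 2))
    k /= k.sum()                                    # normalized 1-D kernel
    padded = np.pad(img.astype(float), r, mode="reflect")
    # Convolve rows, then columns (separability of the Gaussian kernel).
    rows = np.apply_along_axis(lambda v: np.convolve(v, k, mode="valid"), 1, padded)
    return np.apply_along_axis(lambda v: np.convolve(v, k, mode="valid"), 0, rows)
```

Separability reduces the cost from 49 to 14 multiplications per pixel, which matters at the image sizes used in Section 4.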

4. Experiments

To verify the advantages of our proposed algorithm, some popular pixel-level methods, such as CVA [8], PCA [10], MAD [12], IR-MAD [31], SBIW [23], SFA [18], ISFA [19], and DSFA [21], were selected for comparison. Specifically, the convergence thresholds of IR-MAD, SBIW, and ISFA were all set to 10^−6; the DSFA network used in the experiments has two hidden layers, each with 128 nodes. To ensure the comparability of results, the Gaussian filter was applied to the grayscale distance image generated by each algorithm, and the k-means clustering algorithm was used in all the above methods to obtain the binary change map. It should be noted that the DSFA algorithm was implemented in Python, and the other algorithms in MATLAB. The following experiments were performed on an Intel Core i5 1.6 GHz CPU with 8 GB RAM.
Experiments were carried out on three bitemporal remote sensing image datasets: two common multispectral change detection datasets and one simulation dataset. The first is the Taizhou dataset [40], whose two multispectral images were obtained on 17 March 2000 and 6 February 2003, respectively. The second dataset covers Bastrop County, Texas, before and after the 2011 fire [41]; the two images were acquired by the Landsat 7 and Landsat 5 satellites, respectively, and six bands (bands 1–5 and 7) with a spatial resolution of 30 m were selected for the experiment. The third dataset is a self-made simulation dataset, detailed in Section 4.3.
The advantages and disadvantages of these algorithms were analyzed from both objective and subjective aspects. The objective indexes include false negative (FN), false positive (FP), overall error (OE), percentage correct classification (PCC), kappa coefficient (KC), and time complexity. FN is the number of samples erroneously labeled as unchanged; FP is the number of samples erroneously labeled as changed; OE is the sum of FN and FP. The closer OE, FP, and FN are to 0, the better the change detection performance. PCC is the proportion of correctly labeled samples among all samples. The kappa coefficient is a parameter that accurately measures classification accuracy; the closer its value is to 1, the more accurate the classification. Time complexity is a very important indicator that measures the computational efficiency of an algorithm. To further reduce experimental errors, the time complexity in this paper refers to the average running time of an algorithm over 10 runs.
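The accuracy indexes above can be computed from a binary change map and a binary reference map. The following is a small NumPy sketch (the function name `change_metrics` is illustrative; kappa follows Cohen's standard two-class formula):

```python
import numpy as np

def change_metrics(pred, ref):
    """Compute FN, FP, OE, PCC, and the kappa coefficient from a binary
    change map `pred` and a binary reference map `ref` (1 = changed)."""
    pred, ref = pred.ravel().astype(bool), ref.ravel().astype(bool)
    n = pred.size
    tp = np.sum(pred & ref)
    tn = np.sum(~pred & ~ref)
    fn = np.sum(~pred & ref)       # changed pixels labeled unchanged
    fp = np.sum(pred & ~ref)       # unchanged pixels labeled changed
    oe = fn + fp
    pcc = (tp + tn) / n
    # Expected agreement by chance, then Cohen's kappa.
    pe = ((tp + fp) * (tp + fn) + (tn + fn) * (tn + fp)) / n ** 2
    kappa = (pcc - pe) / (1 - pe)
    return fn, fp, oe, pcc, kappa
```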

4.1. Experimental Dataset I

The first dataset comprises bitemporal multispectral remote sensing data from Taizhou City, Jiangsu Province. Two multispectral images were obtained on 17 March 2000, and 6 February 2003, respectively. The two images in Figure 3 are pseudo-color composite images of multispectral data of Taizhou, and each image contains 400 × 400 pixels. To objectively evaluate the proposed change detection algorithm, 21,390 test samples (13.4% of the image) provided by the dataset were used for quantitative analysis. The test samples include 4227 changed samples and 17,163 unchanged samples, as shown in Figure 4.
Binary change images obtained by eight comparison methods and the proposed method are shown in Figure 5. It is obvious that panels (a), (b), (c) and (f) have a large number of broken and small change areas, which indicates that the performances of CVA, DPCA, MAD, and SFA are not ideal for multispectral image change detection. Referring to the reference change sample, panel (h) seems to lose part of the change information. For panels (d), (e), (g) and (i), though there are some change points in the unsampled area, these points may represent the real changed pixels of the unsampled area. Therefore, it is impossible to visually judge which method has better performance.
In order to fairly evaluate the comparison methods and our proposed method, Table 3 and Table 4 show the objective evaluations of each algorithm with and without the Gaussian filter, respectively. According to Table 3 and Table 4, the detection performance of CVA is the worst, and the performances of MAD and SFA are also not ideal. The PCC and kappa performance of IR-MAD, ISFA, and SBIW are effectively improved through iterative weighting, but the time cost of these algorithms increases significantly. Without using a Gaussian filter, the OE of our proposed algorithm is the lowest, and the kappa coefficient of our proposed algorithm is higher than SFA, ISFA, and DSFA. Table 3 shows that the kappa of our proposed algorithm is 24.04% higher than that of SFA, 0.15% higher than that of ISFA, and 11.62% higher than that of DSFA. After Gaussian filtering for the grayscale distance image, the accuracy of most algorithms was improved to a certain extent, which proves that this filtering strategy can effectively improve the performance of change detection. Although the kappa of our algorithm was slightly lower than that of ISFA after adding the filter, our algorithm has more advantages in time cost. In summary, our proposed algorithm has obvious advantages in both kappa value and time cost, which indicates that our proposed algorithm is an effectively improved algorithm.

4.2. Experimental Dataset II

The second dataset is the Texas fire dataset. Figure 6a is the pseudo-color composite before the Texas fire, and Figure 6b is the pseudo-color composite after the fire. The image size is 1534 × 808. In panels (a) and (b), the red parts indicate the safe areas, and the black parts indicate the burning or burned areas. The difference between these pre-event and post-event pairs is only due to the burning of the forest. The reference map (c) is marked as changed and unchanged at the pixel level, in which the white parts (10.64% of the image) represent the changed areas caused by wildfire.
As can be seen from Table 5 and Figure 7, MAD and ISFA produce a large number of false change points, which means that their suppression of the radiation differences in the background area is not ideal. The FN values of DPCA and SFA are much higher than those of the other algorithms, indicating poor performance in detecting changes. Although our proposed algorithm is not the best in terms of FN or FP individually, its overall performance is well balanced: its OE is the lowest among all algorithms, and its PCC and kappa are the highest, at 0.9750 and 0.8690, respectively. Compared with SBIW, our proposed algorithm has a great advantage in time consumption. Considering both precision and time complexity, our algorithm delivers excellent change detection performance.

4.3. Experimental Dataset III

In this section, we adopt an artificial simulation dataset as dataset III. Compared with a real multispectral image dataset, an artificial simulation dataset provides a simpler way to obtain the reference change map and thus to analyze the experimental results objectively. In the artificial simulation dataset, the two images are multispectral images taken on different days in the same year and month, and artificial change areas are added to the second image to simulate real changes. The specific procedure is as follows. First, a vegetation pixel block of a certain size is extracted from the first-band image of the first phase and spliced into a region M1 of a specific shape. Second, a region N1 in the first-band image of the second phase is replaced by region M1 to simulate a changed region (M1 and N1 have the same shape and size but contain different ground feature information). Finally, the above steps are repeated for the remaining bands: a vegetation pixel block is extracted from the i-th band image and spliced into a region Mi, where the position of the extracted block corresponds to that used for the first band and the size of Mi is consistent with that of M1; then, the region Ni in the i-th band image of the second phase is replaced by Mi, at the same position as in the first band. In this way, the gray values of the simulated changes are more consistent with real ground changes. In addition, this simulation method conveniently yields an accurate reference map for objectively analyzing the detection results and testing the effectiveness of the algorithm.
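The band-by-band block-replacement procedure above can be sketched as follows. This is an illustrative implementation, not the authors' code; the function name, array layout, and coordinates are assumptions:

```python
import numpy as np

def add_simulated_change(img1, img2, src_yx, dst_yx, size):
    """Copy a pixel block from img1 into img2 at a fixed location, band by band.

    img1, img2 : arrays of shape (bands, H, W), the bitemporal images.
    src_yx     : top-left corner of the block taken from img1 (region M).
    dst_yx     : top-left corner of the region replaced in img2 (region N).
    size       : (height, width) of the block; M and N share the same shape
                 and position across all bands, as the procedure requires.
    Returns the modified second image and the binary reference change map.
    """
    img2 = img2.copy()                                 # leave the input intact
    (sy, sx), (dy, dx), (h, w) = src_yx, dst_yx, size
    reference = np.zeros(img1.shape[1:], dtype=bool)   # pixel-level reference map
    for b in range(img1.shape[0]):                     # repeat for every band
        img2[b, dy:dy + h, dx:dx + w] = img1[b, sy:sy + h, sx:sx + w]
    reference[dy:dy + h, dx:dx + w] = True
    return img2, reference
```

Because the reference map is produced by construction rather than by visual interpretation, the detection results can be scored exactly.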
Dataset III includes two multispectral images of Bayingoleng, Xinjiang, China, acquired by Landsat 5 on 13 September and 29 September 2007, respectively. Both images have seven bands, and the six reflective bands (bands 1–5 and band 7) were selected for the experiment. The artificial change region was added to the latter image to obtain the second-phase multispectral image to be detected. Figure 8a shows a pseudo-color display of the first multispectral image. Figure 8b shows a pseudo-color display of the second-phase multispectral image; the red area in this figure is the simulated change area. The binary reference map is shown in Figure 8c.
As shown in Figure 9 and Table 6, the FPs of CVA, MAD, and SFA are obviously high. Although IRMAD, ISFA, and DSFA produce fewer FPs, these improvements come at the cost of computation time. The experimental results of DPCA show good performance in time complexity, but its overall error needs to be further reduced. The SBIW algorithm and our proposed algorithm achieve the highest detection precision and kappa coefficient, because both methods fully utilize multiband information to detect the changed area in more detail. In terms of time complexity, however, our proposed algorithm outperforms the SBIW algorithm. Considering both time complexity and accuracy, our proposed algorithm is better than the other reference algorithms.

5. Discussion

All the aforementioned experiments show that our proposed algorithm can effectively suppress the background and noise information in multispectral images and achieves good change detection accuracy. However, the results also show that our algorithm does not preserve edge information well in regions of weak change intensity. As shown in Figure 5i, the road detected by our method is discontinuous. In addition, our proposed method uses only the spectral feature for change detection, so false change points may appear in water areas (as shown in Figure 7i).
To further verify the universality of our algorithm, 50 groups of simulated datasets were created using the method described in Section 4.3. Compared with marking changed areas based on personal experience, manually adding change areas makes it convenient to produce a reference change map for objective analysis. Therefore, simulated datasets can effectively expose the strengths and weaknesses of our algorithm and the reference algorithms. Due to space limitations, only the average change detection results are reported in Table 7.
Table 7 shows that the total number of error pixels of the proposed algorithm is 52, the second lowest among all algorithms, and that it has clear advantages in PCC and kappa coefficient. Specifically, the PCC of our proposed algorithm is 2.77% higher than that of SFA, and its kappa coefficient is 25.42%, 7.97%, and 9.45% higher than those of SFA, ISFA, and DSFA, respectively. Compared with SBIW, the proposed algorithm avoids the high time consumption caused by the large amount of iterative computation in previous work: while maintaining the same excellent detection accuracy, it greatly shortens the detection time. These experiments on 50 datasets fully verify the advantages and universality of the proposed algorithm.

6. Conclusions

The purpose of this research is to decrease the false changes caused by radiation differences, improve the accuracy of change detection, and reduce detection time. In this paper, a multispectral image change detection algorithm based on single-band slow feature analysis is proposed. First, the proposed single-band slow feature analysis algorithm processes the bitemporal multispectral images to obtain multiple optimal feature-difference images. In this way, the radiation differences of the unchanged regions of each band are effectively suppressed, and the change information of each band is further highlighted. Next, the Euclidean distance formula fuses the optimal feature-difference images into a grayscale distance image with equal weights. To reduce the negative impact of noise in the difference image on the change detection results, a Gaussian filter is applied to the generated grayscale distance image. Finally, the k-means algorithm marks the changed and unchanged regions. Experiments show that our algorithm has obvious advantages over the reference algorithms in terms of time complexity and detection accuracy. Future work will explore algorithms to further reduce the impact of noise on change detection.
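Assuming the per-band feature-difference images from the single-band SFA step are available, the remaining stages of the pipeline (equal-weight Euclidean fusion, Gaussian filtering, and k-means clustering) can be sketched as follows. All names are illustrative, and a minimal two-class 1-D k-means stands in for whatever clustering implementation the authors used:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def kmeans_1d(values, iters=20):
    """Minimal two-class 1-D k-means; returns True for the high-value cluster."""
    centers = np.array([values.min(), values.max()], dtype=float)
    for _ in range(iters):
        assign = np.abs(values[:, None] - centers[None, :]).argmin(axis=1)
        for k in (0, 1):
            if np.any(assign == k):
                centers[k] = values[assign == k].mean()
    return assign == centers.argmax()

def change_map(diff_images, sigma=1.0):
    """diff_images: (bands, H, W) feature-difference images, one per band."""
    # Equal-weight Euclidean distance across bands -> grayscale distance image
    distance = np.sqrt((diff_images ** 2).sum(axis=0))
    # Gaussian filtering suppresses isolated false-detection points
    distance = gaussian_filter(distance, sigma=sigma)
    # Cluster pixel distances into changed (high) and unchanged (low)
    return kmeans_1d(distance.ravel()).reshape(distance.shape)
```

The cluster with the larger mean distance is labeled as changed, which matches the intuition that fused difference magnitudes are large only where real change occurred.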

Author Contributions

Z.J. and Y.H. conceived and designed the experiments; Y.H. performed the experiments and wrote the paper; J.Y. and N.K.K. assisted by commenting on the manuscript and providing editorial oversight. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China (No. U1803261) and the International Science and Technology Cooperation Project of the Ministry of Education of the People’s Republic of China (No. 2016–2196).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Publicly available datasets were analyzed in this study. These data can be found here: Du et al. [21]; Volpi et al. [41].

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Singh, A. Review article: Digital change detection techniques using remotely-sensed data. Int. J. Remote Sens. 1989, 10, 989–1003.
  2. Tang, Y.Q.; Zhang, L.P. Urban Change Analysis with Multi-Sensor Multispectral Imagery. Remote Sens. 2017, 9, 252.
  3. Gxokwe, S.; Dube, T.; Mazvimavi, D. Multispectral Remote Sensing of Wetlands in Semi-Arid and Arid Areas: A Review on Applications, Challenges and Possible Future Research Directions. Remote Sens. 2020, 12, 4190.
  4. Panuju, D.R.; Paull, D.J.; Griffin, A.L. Change Detection Techniques Based on Multispectral Images for Investigating Land Cover Dynamics. Remote Sens. 2020, 12, 1781.
  5. Li, Z.; Jia, Z.; Liu, L.; Yang, J.; Kasabov, N. A method to improve the accuracy of SAR image change detection by using an image enhancement method. ISPRS J. Photogramm. Remote Sens. 2020, 163, 137–151.
  6. Ma, L.; Jia, Z.; Yu, Y.; Yang, J.; Kasabov, N.K. Multi-Spectral Image Change Detection Based on Band Selection and Single-Band Iterative Weighting. IEEE Access 2019, 7, 27948–27956.
  7. Zhang, J.; Yang, G. Automatic land use and land cover change detection with one temporary remote sensing image. J. Remote Sens. Beijing 2005, 9, 294.
  8. Yoon, G.-W.; Yun, Y.B.; Park, J.-H. Change vector analysis: Detecting of areas associated with flood using Landsat TM. In Proceedings of the IGARSS 2003—2003 IEEE International Geoscience and Remote Sensing Symposium (IEEE Cat. No. 03CH37477), Toulouse, France, 21–25 July 2003; pp. 3386–3388.
  9. Collins, J.B.; Woodcock, C.E. An assessment of several linear change detection techniques for mapping forest mortality using multitemporal Landsat TM data. Remote Sens. Environ. 1996, 56, 66–77.
  10. Celik, T. Unsupervised Change Detection in Satellite Images Using Principal Component Analysis and k-Means Clustering. IEEE Geosci. Remote Sens. Lett. 2009, 6, 772–776.
  11. Du, B.; Wang, Y.; Wu, C.; Zhang, L. Unsupervised Scene Change Detection via Latent Dirichlet Allocation and Multivariate Alteration Detection. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 4676–4689.
  12. Nielsen, A.A.; Müller, A.; Dorigo, W. Hyperspectral Data, Change Detection and the MAD Transformation. In Proceedings of the 12th Australasian Remote Sensing & Photogrammetry Association Conference, Fremantle, Australia, 18–22 October 2004.
  13. Nielsen, A.A.; Hecheltjen, A.; Thonfeld, F.; Canty, M.J. Automatic change detection in RapidEye data using the combined MAD and kernel MAF methods. In Proceedings of the 2010 IEEE International Geoscience and Remote Sensing Symposium, Honolulu, HI, USA, 25–30 July 2010; pp. 3078–3081.
  14. Wu, C.; Zhang, L.; Du, B. Hyperspectral anomaly change detection with slow feature analysis. Neurocomputing 2015, 151, 175–187.
  15. Daudt, R.C.; Le Saux, B.; Boulch, A.; Gousseau, Y. Urban Change Detection for Multispectral Earth Observation Using Convolutional Neural Networks. In Proceedings of the IGARSS 2018—2018 IEEE International Geoscience and Remote Sensing Symposium, Valencia, Spain, 22–27 July 2018.
  16. Mou, L.; Zhu, X.X. A Recurrent Convolutional Neural Network for Land Cover Change Detection in Multispectral Images. In Proceedings of the IGARSS 2018—2018 IEEE International Geoscience and Remote Sensing Symposium, Valencia, Spain, 22–27 July 2018.
  17. Lu, X.; Yuan, Y.; Zheng, X. Joint Dictionary Learning for Multispectral Change Detection. IEEE Trans. Cybern. 2017, 47, 884–897.
  18. Wu, C.; Du, B.; Zhang, L. Slow Feature Analysis for Change Detection in Multispectral Imagery. IEEE Trans. Geosci. Remote Sens. 2014, 52, 2858–2874.
  19. Wu, C.; Du, B.; Cui, X.; Zhang, L. A post-classification change detection method based on iterative slow feature analysis and Bayesian soft fusion. Remote Sens. Environ. 2017, 199, 241–255.
  20. Xu, J.; Zhao, C.; Zhang, B.; Lin, Y.; Yu, D. Hybrid Change Detection Based on ISFA for High-Resolution Imagery. In Proceedings of the 2018 IEEE 3rd International Conference on Image, Vision and Computing (ICIVC), Chongqing, China, 27–29 June 2018; pp. 76–80.
  21. Du, B.; Ru, L.; Wu, C.; Zhang, L. Unsupervised Deep Slow Feature Analysis for Change Detection in Multi-Temporal Remote Sensing Images. IEEE Trans. Geosci. Remote Sens. 2019, 57, 9976–9992.
  22. Yang, M.; Zhang, M.; Gu, Y. Two-Layer Slow Feature Analysis Network for Change Detection. In Proceedings of the 2019 10th Workshop on Hyperspectral Imaging and Signal Processing: Evolution in Remote Sensing (WHISPERS), Amsterdam, The Netherlands, 24–26 September 2019; pp. 1–4.
  23. Ma, L.; Jia, Z.; Yang, J.; Kasabov, N. Multi-spectral image change detection based on single-band iterative weighting and fuzzy C-means clustering. Eur. J. Remote Sens. 2020, 53, 1–13.
  24. de Morsier, F.; Tuia, D.; Borgeaud, M.; Gass, V.; Thiran, J. Semi-Supervised Novelty Detection Using SVM Entire Solution Path. IEEE Trans. Geosci. Remote Sens. 2013, 51, 1939–1950.
  25. Xu, L.; Jing, W.; Song, H.; Chen, G. High-Resolution Remote Sensing Image Change Detection Combined With Pixel-Level and Object-Level. IEEE Access 2019, 7, 78909–78918.
  26. Chen, J.-W.; Wang, R.; Ding, F.; Liu, B.; Jiao, L.; Zhang, J. A Convolutional Neural Network with Parallel Multi-Scale Spatial Pooling to Detect Temporal Changes in SAR Images. Remote Sens. 2020, 12, 1619.
  27. Gong, M.; Niu, X.; Zhang, P.; Li, Z. Generative Adversarial Networks for Change Detection in Multispectral Imagery. IEEE Geosci. Remote Sens. Lett. 2017, 14, 2310–2314.
  28. Liu, S.; Du, Q.; Tong, X.; Samat, A.; Bruzzone, L.; Bovolo, F. Multiscale Morphological Compressed Change Vector Analysis for Unsupervised Multiple Change Detection. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2017, 10, 4124–4137.
  29. Bovolo, F.; Marchesi, S.; Bruzzone, L. A Framework for Automatic and Unsupervised Detection of Multiple Changes in Multitemporal Images. IEEE Trans. Geosci. Remote Sens. 2012, 50, 2196–2212.
  30. Chen, J.; Chen, X.; Cui, X.; Chen, J. Change Vector Analysis in Posterior Probability Space: A New Method for Land Cover Change Detection. IEEE Geosci. Remote Sens. Lett. 2011, 8, 317–321.
  31. Nielsen, A.A. The regularized iteratively reweighted MAD method for change detection in multi- and hyperspectral data. IEEE Trans. Image Process. 2007, 16, 463–478.
  32. Wu, C.; Zhang, L.; Du, B. Kernel Slow Feature Analysis for Scene Change Detection. IEEE Trans. Geosci. Remote Sens. 2017, 55, 2367–2384.
  33. Zhang, L.; Wu, C.; Du, B. Automatic Radiometric Normalization for Multitemporal Remote Sensing Imagery With Iterative Slow Feature Analysis. IEEE Trans. Geosci. Remote Sens. 2014, 52, 6141–6155.
  34. Du, P.; Liu, S.; Gamba, P.; Tan, K.; Xia, J. Fusion of Difference Images for Change Detection Over Urban Areas. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2012, 5, 1076–1086.
  35. Du, P.; Liu, S.; Xia, J.; Zhao, Y. Information fusion techniques for change detection from multi-temporal remote sensing images. Inf. Fusion 2013, 14, 19–27.
  36. Du, P. Change detection from multi-temporal remote sensing images by integrating multiple features. J. Remote Sens. 2012, 16, 663–677.
  37. Zhang, Z.; Tao, D. Slow Feature Analysis for Human Action Recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2012, 34, 436–450.
  38. Huang, Y.; Zhao, J.; Tian, M.; Zou, Q.; Luo, S. Slow Feature Discriminant Analysis and its application on handwritten digit recognition. In Proceedings of the 2009 International Joint Conference on Neural Networks, Atlanta, GA, USA, 14–19 June 2009; pp. 1294–1297.
  39. Shang, C.; Yang, F.; Gao, X.Q.; Huang, X.L.; Suykens, J.A.K.; Huang, D.X. Concurrent monitoring of operating condition deviations and process dynamics anomalies with slow feature analysis. AIChE J. 2015, 61, 3666–3682.
  40. Zhang, W.; Lu, X. The Spectral-Spatial Joint Learning for Change Detection in Multispectral Imagery. Remote Sens. 2019, 11, 240.
  41. Volpi, M.; Camps-Valls, G.; Tuia, D. Spectral alignment of multi-temporal cross-sensor images with automated kernel canonical correlation analysis. ISPRS J. Photogramm. Remote Sens. 2015, 107, 50–63.
Figure 1. Flow diagram of the algorithm.
Figure 2. Feature-difference images obtained by the SFA algorithm. Panels (af) correspond to feature bands 1–6.
Figure 3. Pseudo-color composite images of multispectral data of Taizhou obtained in (a) 2000 and (b) 2003.
Figure 4. Test sample: (a) changed sample; (b) unchanged sample.
Figure 5. Binary change maps of Taizhou dataset using (a) CVA, (b) DPCA, (c) MAD, (d) IRMAD, (e) SBIW, (f) SFA, (g) ISFA, (h) DSFA, and (i) proposed method.
Figure 6. Pseudo-color composite images of multispectral data of Texas obtained (a) before the fire and (b) after the fire. (c) The binary reference map.
Figure 7. Binary change maps of Texas dataset using (a) CVA, (b) DPCA, (c) MAD, (d) IRMAD, (e) SBIW, (f) SFA, (g) ISFA, (h) DSFA, and (i) proposed method.
Figure 8. (a) Pseudo-color display of the first multispectral image in the Bayingoleng simulated dataset. (b) Pseudo-color display of the second-stage multispectral image in the Bayingoleng simulated dataset. The red parts are the added change areas. (c) The binary reference map.
Figure 9. Binary change maps using (a) CVA, (b) DPCA, (c) MAD, (d) IRMAD, (e) SBIW, (f) SFA, (g) ISFA, (h) DSFA, and (i) proposed method.
Table 1. Performance indexes of different band combinations.

| Multiband Selection | FN | FP | OE | PCC | KAPPA |
|---|---|---|---|---|---|
| Bands 1–6 | 567 | 2117 | 2684 | 0.8745 | 0.6524 |
| Bands 1–4 | 523 | 424 | 947 | 0.9557 | 0.8592 |
| Bands 1,2,3 | 511 | 676 | 1187 | 0.9445 | 0.8275 |
| Bands 1,2,4 | 550 | 357 | 907 | 0.9576 | 0.8639 |
| Bands 2,3,4 | 770 | 475 | 1245 | 0.9418 | 0.8115 |
Table 2. Performance indexes of Gaussian filter with different window sizes.

| Gaussian Filter | FN | FP | OE | PCC | KAPPA |
|---|---|---|---|---|---|
| Without filter | 633 | 57 | 690 | 0.9677 | 0.8928 |
| 3 × 3 | 525 | 47 | 572 | 0.9733 | 0.9119 |
| 5 × 5 | 501 | 51 | 552 | 0.9742 | 0.9152 |
| 7 × 7 | 494 | 51 | 545 | 0.9745 | 0.9164 |
Table 3. Performance indexes of different change detection algorithms without filtering.

| Method | FN | FP | OE | PCC | KAPPA | Time (s) |
|---|---|---|---|---|---|---|
| CVA | 2841 | 4384 | 7225 | 0.6622 | 0.0637 | 0.23 |
| DPCA | 244 | 1722 | 1966 | 0.9081 | 0.7439 | 0.28 |
| MAD | 507 | 2272 | 2779 | 0.8701 | 0.6460 | 0.39 |
| IRMAD | 563 | 960 | 1523 | 0.9288 | 0.7832 | 3.51 |
| SBIW | 971 | 10 | 981 | 0.9541 | 0.8418 | 6.4 |
| SFA | 567 | 2117 | 2684 | 0.8745 | 0.6524 | 0.43 |
| ISFA | 678 | 16 | 694 | 0.9676 | 0.8913 | 1.635 |
| DSFA | 1328 | 9 | 1337 | 0.9375 | 0.7766 | 59.4 |
| Proposed Method | 633 | 57 | 690 | 0.9677 | 0.8928 | 0.29 |
Table 4. Performance indexes of different change detection algorithms with filtering.

| Method | FN | FP | OE | PCC | KAPPA | Time (s) |
|---|---|---|---|---|---|---|
| CVA | 2979 | 4548 | 7527 | 0.6481 | 0.0265 | 0.37 |
| DPCA | 209 | 1136 | 1345 | 0.9371 | 0.8169 | 0.32 |
| MAD | 528 | 667 | 1195 | 0.9441 | 0.8260 | 0.62 |
| IRMAD | 1183 | 76 | 1259 | 0.9411 | 0.7941 | 4.25 |
| SBIW | 762 | 4 | 766 | 0.9642 | 0.8789 | 6.75 |
| SFA | 668 | 1022 | 1690 | 0.9210 | 0.7585 | 0.63 |
| ISFA | 511 | 14 | 525 | 0.9755 | 0.9190 | 1.84 |
| DSFA | 1119 | 2 | 1121 | 0.9476 | 0.8165 | 60.2 |
| Proposed Method | 494 | 51 | 545 | 0.9745 | 0.9164 | 0.47 |
Table 5. Performance indexes of different change detection algorithms.

| Method | FN | FP | OE | PCC | KAPPA | Time (s) |
|---|---|---|---|---|---|---|
| CVA | 35,849 | 11,354 | 47,203 | 0.9619 | 0.7819 | 0.23 |
| DPCA | 57,017 | 7382 | 64,399 | 0.9480 | 0.6724 | 0.34 |
| MAD | 31,468 | 50,613 | 82,081 | 0.9338 | 0.6726 | 0.84 |
| IRMAD | 29,902 | 7170 | 37,072 | 0.9700 | 0.8298 | 7.3 |
| SBIW | 6279 | 26,887 | 33,166 | 0.9732 | 0.8683 | 25.81 |
| SFA | 65,911 | 11,021 | 76,932 | 0.9379 | 0.6003 | 1.04 |
| ISFA | 5724 | 146,255 | 151,979 | 0.8774 | 0.5611 | 22.89 |
| DSFA | 26,740 | 14,328 | 41,068 | 0.9669 | 0.8182 | 163 |
| Proposed Method | 14,882 | 16,122 | 31,004 | 0.9750 | 0.8690 | 0.51 |
Table 6. Performance indexes of different change detection algorithms.

| Method | FN | FP | OE | PCC | KAPPA | Time (s) |
|---|---|---|---|---|---|---|
| CVA | 0 | 27,798 | 27,798 | 0.8262 | 0.0765 | 0.31 |
| DPCA | 13 | 7 | 20 | 0.9998 | 0.9928 | 0.33 |
| MAD | 0 | 4641 | 4641 | 0.9710 | 0.3688 | 0.55 |
| IRMAD | 5 | 266 | 271 | 0.9983 | 0.9111 | 2.56 |
| SBIW | 0 | 1 | 1 | 0.9999 | 0.9996 | 1.36 |
| SFA | 7 | 359 | 366 | 0.9977 | 0.8834 | 0.55 |
| ISFA | 7 | 266 | 273 | 0.9983 | 0.9104 | 1.42 |
| DSFA | 0 | 14 | 14 | 0.9999 | 0.9955 | 0.89 |
| Proposed Method | 0 | 1 | 1 | 0.9999 | 0.9996 | 0.53 |
Table 7. Performance indexes of different change detection algorithms.

| Method | FN | FP | OE | PCC | KAPPA | Time (s) |
|---|---|---|---|---|---|---|
| CVA | 34 | 20,845 | 20,879 | 0.9138 | 0.7409 | 0.46 |
| DPCA | 102 | 2600 | 2702 | 0.9891 | 0.9385 | 0.41 |
| MAD | 388 | 6344 | 6732 | 0.9706 | 0.7414 | 0.63 |
| IRMAD | 812 | 5952 | 6764 | 0.9708 | 0.6531 | 2.78 |
| SBIW | 40 | 8 | 48 | 0.9998 | 0.9894 | 2.2 |
| SFA | 550 | 5947 | 6477 | 0.9720 | 0.7344 | 0.68 |
| ISFA | 241 | 139 | 380 | 0.9983 | 0.9089 | 2.62 |
| DSFA | 280 | 904 | 1184 | 0.9950 | 0.8941 | 70.04 |
| Proposed Method | 4 | 48 | 52 | 0.9997 | 0.9886 | 0.72 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
