Article

Spaceborne SAR Time-Series Images Change Detection Based on SAR-SIFT-Logarithm Background Subtraction

Radar Monitoring Technology Laboratory, School of Information Science and Technology, North China University of Technology, Beijing 100144, China
* Author to whom correspondence should be addressed.
Remote Sens. 2023, 15(23), 5533; https://doi.org/10.3390/rs15235533
Submission received: 19 October 2023 / Revised: 21 November 2023 / Accepted: 25 November 2023 / Published: 28 November 2023
(This article belongs to the Special Issue Advances in Synthetic Aperture Radar Data Processing and Application)

Abstract

Synthetic Aperture Radar (SAR) image change detection aims to detect changes between images of the same area acquired at different times. It has wide applications in environmental monitoring, urban planning, and resource management. Traditional change detection methods for spaceborne SAR time-series images typically adopt a pairwise comparison strategy to obtain multi-temporal change information. However, this strategy tends to lose the overall change information and is time consuming. To address this problem, this paper proposes a new change detection algorithm for spaceborne SAR time-series data based on SAR-SIFT-Logarithm Background Subtraction, which combines SAR-SIFT image registration with Logarithm Background Subtraction. The method first preprocesses the input time-series data with steps such as noise reduction and radiometric calibration. The images are then coregistered by the SAR-SIFT step to avoid mismatch-induced degradation of detection performance. Next, the parts that remain unchanged throughout the time period are modeled with a median filter to obtain the static background. The change information is then obtained via background subtraction followed by CFAR detection and clustering. The proposed algorithm is validated using Sentinel-1 GRD and PAZ-1 time-series datasets. Experimental results demonstrate that the proposed method effectively detects the overall change information and reduces processing time compared to traditional pairwise comparison methods.

1. Introduction

Change detection is an important application in the remote sensing domain [1,2,3]. Spaceborne SAR time-series image data have emerged over the past two to three decades as an important tool for change detection, since they offer advantages such as repeat-pass observation, all-weather capability, and high resolution [4,5]. Currently, SAR image change detection is mainly based on differential images obtained by pairwise comparison, and the quality of the differential image, such as its sharpness and signal-to-noise ratio, has a significant impact on the final results [6]. Hence, researchers have focused on improving differential image quality to obtain accurate change information. Reference [7] proposed a Combined Difference Image (CDI) method; the CDI method is simple and fast to compute, but its weighting parameters must be set manually, and multiple trials are required to find the most appropriate values. Reference [8] proposed a neighborhood-based ratio (NR) operator; compared with the CDI method, the NR operator removes the manual parameters and achieves unsupervised detection. Reference [9] proposed a method of generating the differential image by Wavelet Fusion (WF). This method uses the complementary information of mean-ratio (MR) and log-ratio (LR) images to generate a differential image, combining the advantage of MR differential images in preserving overall information with that of LR differential images in preserving detail.
However, for multi-temporal data, such pairwise methods are not effective enough at capturing the overall change information. To address this issue, an effective approach is to introduce the Background Subtraction methods used in optical remote sensing [10,11,12]. In reference [13], the Background Subtraction idea was introduced into airborne SAR moving target detection, where it can effectively extract the motion trajectories of moving targets. That method is applied in spotlight mode, which can generate an image sequence from which to obtain the background. Moreover, thanks to the accurate position information of the platform, all images can be formed in a unified image coordinate system, i.e., no registration is needed. However, when a spaceborne SAR platform repeatedly observes the same scene, orbit offsets between acquisitions cause large geometric positioning errors in the images, so background extraction and change detection cannot be performed directly. In addition, there are relatively few studies of target-level change detection in high-resolution spaceborne SAR images. Reference [14] proposed a moving target monitoring method for high-frame-rate spaceborne SAR images. Reference [15] explored methods for urban change detection using multi-temporal spaceborne SAR data. Ye et al. developed an object-based change detection algorithm that can generate change maps at different scales [16]. Therefore, compared with the airborne case, introducing Background Subtraction into spaceborne SAR time-series imagery differs in several respects, which makes it a challenging task.
Based on the aforementioned analysis, this paper proposes an improved Logarithm Background Subtraction method, entitled SAR-SIFT-Logarithm Background Subtraction, for change detection using spaceborne SAR time-series data. The proposed method employs SAR-SIFT image registration [17,18,19] to obtain an accurately coregistered image sequence, which is followed by the Logarithm Background Subtraction algorithm for image change detection. The method first preprocesses the input time-series data with steps such as noise reduction and radiometric calibration. Then, the images are coregistered by a SAR-SIFT step to avoid mismatch-induced degradation of detection performance. Next, the parts that remain unchanged throughout the time period are modeled with a median filter to obtain the static background. The change information is then obtained via background subtraction followed by CFAR detection and clustering. The method is experimentally validated using Sentinel-1 and PAZ-1 time-series datasets along with detailed truth data.
The structure of this paper is as follows: Section 2 introduces the datasets used in the paper and outlines the experimental design; Section 3 presents the proposed SAR-SIFT-Logarithm Background Subtraction method; Section 4 presents the detailed experimental results; Section 5 discusses the experimental results; and Section 6 concludes the paper.

2. Dataset and Experiments

Sentinel-1 is a satellite mission developed by the European Space Agency (ESA) for Earth observation through radar imaging, and the sensor is equipped with a C-band SAR. It plays a pivotal role in monitoring and managing various environmental and geological applications [20]. Sentinel-1 offers all-weather, day-and-night imaging capabilities, making it an invaluable tool for applications such as disaster management, agriculture, forestry, and tracking changes in land and ocean surfaces [21]. Sentinel-1 can provide many types of data products, including multi-temporal SAR image sequences [22], SAR image mosaics [23], and SAR image fusion [24]. Among them, a multi-temporal SAR image sequence contains SAR images acquired over the same region at different times [25], enabling long-term monitoring of a specific area and the extraction of change information from it.
The PAZ satellite, launched on 22 February 2018, is owned and operated by Hisdesat. PAZ operates in the same orbit as the twin satellites TerraSAR-X and TanDEM-X and has the same ground swath and acquisition modes; the three satellites work together as a high-resolution SAR satellite constellation. PAZ is equipped with a side-looking X-band SAR based on active phased-array antenna technology, with an operational instantaneous bandwidth of up to 300 MHz. It has been designed to be very flexible and can operate in a wide array of configurations depending on the desired image performance.
This paper utilizes Sentinel-1 GRD products and PAZ-1 products as the experimental dataset. Table 1 contains the parameters of the Sentinel-1 dataset, and Table 2 contains the parameters of the PAZ-1 dataset.

2.1. Experimental Design

In this paper, a sequence of time-series images was selected from the Sentinel-1 dataset; the proposed method and the traditional method were then used to detect changes in the number of vehicles in a nearby parking lot and at the BEIJING-HYUNDAI AUTO Enterprise. The latitude and longitude of the nearby parking lot are 39.9214° and 116.1958°, respectively; those of the BEIJING-HYUNDAI AUTO Enterprise are 40.1047° and 116.6443°, respectively. For the nearby parking lot experiment, six sets of ground truth data were collected through field observations. The performance of the two methods was quantitatively evaluated using the root mean square error (RMSE) to validate the effectiveness and robustness of the proposed approach.
The PAZ-1 dataset is used to complete the change detection of the number of vehicles in the parking lot of the CCTV Tower, and the truth data are then used to verify the detection results. The latitude and longitude of this parking lot are 39.9171° and 116.3000°, respectively.

2.1.1. Dataset 1: Nearby Parking Lot

This experiment is based on the Sentinel-1 dataset. As shown in Figure 1, the change of the number of vehicles is detected in the area of interest indicated by the red box in the nearby parking lot. The sequence of time-series images spans from 5 March 2020 to 14 November 2022, and it contains a total of 82 images.
Six sets of ground truth data were acquired through on-site collection. Since Sentinel-1 is a sun-synchronous orbit satellite [26], it passes over a given area and acquires an image every 12 days. Therefore, by recording the number of parked vehicles every 12 days at the sensor's pass time, the required ground truth data can be obtained. Figure 2a shows the field situation during truth data collection, and Figure 2b shows a model of the truth data that reconstructs the vehicle distribution in the parking lot, which aids subsequent error analysis.
Table 3 lists six sets of truth data recorded from 15 September 2022 to 14 November 2022.
The Logarithm Background Subtraction method and the pairwise comparison method were used to detect the change in the number of vehicles in the parking lot, and the change curve of the number of vehicles was obtained. Finally, the collected truth data were used to quantitatively evaluate and analyze the errors of the proposed method and the traditional method.

2.1.2. Dataset 2: BEIJING-HYUNDAI AUTO Enterprise

This experiment is also based on the Sentinel-1 dataset. As shown in Figure 3, the change of the number of vehicles is detected in the parking lot of BEIJING-HYUNDAI AUTO Enterprise, which is indicated by a red box. The sequence of time-series images spans from 9 July 2019 to 25 September 2020 and contains a total of 38 images.
The Logarithm Background Subtraction method and the pairwise comparison method were separately employed to detect vehicle count changes in this parking lot, resulting in vehicle count change curves. The experimental results of the two methods were then compared.

2.1.3. Dataset 3: CCTV Tower Parking Lot

This experiment is based on the PAZ-1 dataset. As shown in Figure 4, the change in the number of vehicles is detected in the area of interest indicated by the red box in the CCTV Tower Parking Lot. The sequence of time-series images spans from 14 February 2023 to 31 August 2023 and contains a total of 12 images.
The PAZ-1 data are used to complete the change detection of the number of vehicles in the parking lot of the CCTV Tower, and the change curve of the number of vehicles is obtained. This experiment mainly uses RTK (real-time kinematic) equipment to collect accurate, centimeter-level vehicle positions as truth data while the satellite passes overhead. During the satellite transit, the RTK equipment records the latitude and longitude coordinates and a location map of the vehicles in the parking lot. Subsequently, by analyzing the RTK data, a vehicle distribution map is created that presents the distribution of vehicles in the parking lot in detail.

3. Proposed Algorithm

The processing flowchart of the proposed method is shown in Figure 5. The algorithm consists of the following steps:
Firstly, the logarithmic transformation technique is used to transform the time-series images into logarithmic images, which are then sorted by azimuth time to form a sequence of logarithmic images.
This sequence consists of n images, each of size M × N pixels, and can be represented by a 3D array $f\{\log[I]\}$ of dimensions M × N × n, where the k-th layer is the k-th image and contains M × N pixel values. It can be expressed as:

$$f\{\log[I]\} = \left\{ \log[I(i,j,k)],\; i = 1, 2, \ldots, M;\; j = 1, 2, \ldots, N;\; k = 1, 2, \ldots, n \right\}$$

In the formula, $\log[I(i,j,k)]$ represents the pixel value located in the i-th row and j-th column of the k-th image in the logarithmic image sequence.
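As a concrete illustration (not taken from the paper), a minimal numpy sketch of this stacking step might look as follows; the image list, the float conversion, and the small epsilon guard are assumptions:

```python
import numpy as np

def build_log_stack(images):
    """Stack n coregistered SAR intensity images (each M x N) into the
    3D logarithmic array f{log[I]} described above."""
    eps = 1e-10  # hypothetical guard against log(0) in zero-intensity pixels
    # Result has shape (M, N, n): element [i, j, k] is log I(i, j, k).
    return np.stack([np.log(np.asarray(img, dtype=np.float64) + eps)
                     for img in images], axis=-1)
```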
Next, the sequence of logarithmic images is preprocessed, including image registration, image filtering, and radiometric correction. Speckle noise in SAR images seriously affects the accuracy of change detection, so it must be suppressed; mean filtering can be used to filter out the speckle noise and thus improve image quality. Image registration aligns two images so that corresponding points have the same coordinates. Radiometric correction is a technique used to correct radiometric distortion in SAR images caused by beam illumination variation; its purpose is to make the values of corresponding pixels consistent across the image sequence so that the stable parts of all images can be obtained. Among these steps, SAR-SIFT image registration is the critical one.
The preprocessed logarithmic image sequence can be represented as follows:
$$f\{\log[\tilde{I}]\} = \left\{ \log[\tilde{I}(i,j,k)],\; i = 1, 2, \ldots, M;\; j = 1, 2, \ldots, N;\; k = 1, 2, \ldots, n \right\}$$

In the formula, $\log[\tilde{I}]$ represents the registered image sequence.
The steps of the SAR-SIFT image registration technique are detailed in Section 3.1.
Then, a median filter is applied along the image index dimension of the registered image sequence: for each pixel, the grayscale values across the images are sorted, and the middle value is taken as the corresponding grayscale value of the static background, yielding the unchanged part. Median filtering is the key concept of the Background Subtraction method and is used to extract the unchanged part (the background) from the registered image sequence. Since the image sequence consists of repeat-pass observations, unchanged parts such as buildings present stable pixel values with little variation along the image index. Meanwhile, pixels affected by changes, such as cars leaving or entering the frame, appear as high or low excursions on the otherwise stable pixel value curve. The median filter therefore removes these excursions and outputs the stable background, from which the unchanged parts in all images can be obtained.
$$\log[B(i,j)] = \operatorname*{median}_{k = 1, \ldots, n} \log[\tilde{I}(i,j,k)]$$

In the formula, $B(i,j)$ represents the pixel intensity value located in the i-th row and j-th column of the static background, and median refers to the operation of taking the median value.
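Under the same assumptions as the sketch above, the background extraction reduces to numpy's median along the temporal axis, which implements the per-pixel sorting-and-middle-value operation just described:

```python
import numpy as np

def extract_background(log_stack):
    """Per-pixel temporal median of the registered log-image stack.

    Transient bright/dark excursions (e.g., vehicles arriving or leaving)
    are filtered out, leaving the stable static background log B(i, j).
    """
    return np.median(log_stack, axis=-1)  # (M, N, n) -> (M, N)
```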
After obtaining the static background, the image sequence that contains the changed part is obtained by subtracting the static background from the original registered image sequence.
$$\log[F(i,j,k)] = \log[\tilde{I}(i,j,k)] - \log[B(i,j)]$$

In the formula, $\log[F(i,j,k)]$ represents the pixel intensity value located in the i-th row and j-th column of the k-th image containing the changed part.
Meanwhile, to exclude other interfering areas, a binary mask is applied on the image sequence to retain only the region of interest.
$$\log[\hat{F}(i,j,k)] = \log[F(i,j,k)] \times M(i,j)$$

In the formula, $\log[\hat{F}(i,j,k)]$ represents the pixel value located at the i-th row and j-th column of the k-th image after mask processing, and $M(i,j)$ denotes the pixel value at the i-th row and j-th column of the binary mask image, which can only be 0 or 1. In the element-wise multiplication, pixels where the mask is 0 set the corresponding image pixels to 0, while pixels where the mask is 1 leave them unchanged.
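The subtraction and masking steps then reduce to array broadcasting; a minimal sketch, with the binary mask assumed to be a 0/1 array of shape (M, N):

```python
import numpy as np

def masked_change_images(log_stack, log_background, roi_mask):
    """Subtract the static background from every image in the stack,
    then zero out everything outside the region of interest."""
    log_F = log_stack - log_background[:, :, None]   # broadcast over k
    return log_F * roi_mask[:, :, None]              # keep only the ROI
```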
After this, the Constant False Alarm Rate (CFAR) detection algorithm is applied on the image sequence to detect targets, and the number of pixels containing targets is extracted. The CFAR detection algorithm is a target detection technology commonly used in the field of radar signal processing [27]. Its main purpose is to effectively detect the area where the target exists while maintaining a constant false alarm rate. The basic idea is to adaptively adjust the detection threshold so that the system can adapt to changes in different backgrounds so as to achieve target detection while maintaining a certain false alarm rate [28].
Finally, the number of targets is obtained by multiplying the number of pixels containing targets by a proportional coefficient. In the experimental images, each pixel represents a 10 m × 10 m area on the ground. Through field inspection of the parking lots, the number of vehicles parked within each pixel can be counted to determine the true value and the proportional coefficient for the number of vehicles in a single pixel.
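Section 4.1 states that a Gaussian-distribution CFAR is used; below is a simplified, hedged sketch using a single global threshold. The false alarm rate `pfa` is an illustrative parameter, the threshold comes from the Gaussian inverse CDF, and the coefficient of 6 vehicles per pixel is the value the paper reports for the Sentinel-1 parking-lot data; a full implementation would estimate the background statistics in sliding windows around each pixel rather than globally.

```python
import numpy as np
from scipy.stats import norm

def cfar_vehicle_count(change_img, pfa=1e-3, coeff=6):
    """Gaussian CFAR detection on one background-subtracted image,
    followed by pixel-to-vehicle conversion."""
    mu, sigma = change_img.mean(), change_img.std()
    # Threshold chosen so a pure-background pixel exceeds it with prob. pfa.
    threshold = mu + sigma * norm.ppf(1.0 - pfa)
    detection_map = change_img > threshold           # binary target map
    return int(detection_map.sum()) * coeff          # estimated vehicle count
```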
The main processing steps are from logarithmic transformation to CFAR detection; their details are introduced as follows.

3.1. SAR-SIFT Image Registration Algorithm

The Scale-Invariant Feature Transform (SIFT) algorithm is a classic method for feature extraction and matching in images. It exhibits scale invariance, rotation invariance, and illumination invariance, which enables it to extract stable feature points under different scales, angles, and lighting conditions. However, due to the unique characteristics of SAR images, such as strong speckle noise, low contrast, and irregular reflections, the traditional SIFT algorithm cannot be applied directly to SAR images for feature extraction. The SAR-SIFT algorithm improves SIFT to adapt to these special properties; in particular, as detailed below, it replaces the difference-based gradient with a gradient computed from ratios of exponentially weighted averages (ROEWA), which is robust to multiplicative speckle noise, thereby achieving more accurate feature point extraction and matching.
The following is the specific operation process of the SAR-SIFT algorithm for registration:
(1) Establish the SAR-Harris scale space.
Calculate the gradient of the SAR image using the ROEWA operator. For orientation $\varphi$, the weighted averages of the two side windows are:

$$r_1(x, y \mid \varphi) = \sum_{x', y'} g_1(x', y')\, I(x + x', y + y'), \qquad r_2(x, y \mid \varphi) = \sum_{x', y'} g_2(x', y')\, I(x + x', y + y')$$

In the formula, $r_1$ and $r_2$ are local exponentially weighted averages, $I$ is the SAR image, $x$ and $y$ are image pixel coordinates, and $g_1$ and $g_2$ are the exponentially weighted filters on the two sides of the pixel:

$$g_1(x, y \mid \varphi) = \exp\left(-\frac{|x| + |y|}{\alpha}\right)\ \text{for}\ x\cos\varphi + y\sin\varphi > 0, \qquad g_2(x, y \mid \varphi) = \exp\left(-\frac{|x| + |y|}{\alpha}\right)\ \text{for}\ x\cos\varphi + y\sin\varphi < 0$$

$\varphi = 0^\circ$ and $\varphi = 90^\circ$ give the horizontal and vertical exponentially weighted filters, respectively. The gradient of the SAR image is computed from the ratio of the local exponentially weighted averages between the two sides:

$$G_{x,\alpha}(x, y) = \log\frac{r_1(x, y \mid \varphi = 90^\circ)}{r_2(x, y \mid \varphi = 90^\circ)}, \qquad G_{y,\alpha}(x, y) = \log\frac{r_1(x, y \mid \varphi = 0^\circ)}{r_2(x, y \mid \varphi = 0^\circ)}$$

$$G_{n,\alpha}(x, y) = \sqrt{G_{x,\alpha}(x, y)^2 + G_{y,\alpha}(x, y)^2}, \qquad G_{t,\alpha}(x, y) = \arctan\frac{G_{y,\alpha}(x, y)}{G_{x,\alpha}(x, y)}$$
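To make the ROEWA construction concrete, here is a hedged numpy/scipy sketch; the smoothing parameter alpha, the window half-size, and the epsilon guard are illustrative assumptions, and the φ = 90°/0° split follows the formulas above:

```python
import numpy as np
from scipy.ndimage import convolve

def roewa_gradient(img, alpha=2.0, half=9):
    """Ratio-of-exponentially-weighted-averages gradient of a SAR image."""
    ax = np.arange(-half, half + 1)
    # Separable exponential weights exp(-(|x| + |y|) / alpha) over the window.
    w2d = np.exp(-(np.abs(ax)[:, None] + np.abs(ax)[None, :]) / alpha)
    yy, xx = np.meshgrid(ax, ax, indexing='ij')
    eps = 1e-10

    def weighted_mean(side_mask):
        return convolve(img.astype(np.float64), w2d * side_mask,
                        mode='nearest') + eps

    # phi = 90 deg: window halves above/below the pixel (y sin(90) > 0 vs < 0)
    r1_90, r2_90 = weighted_mean(yy > 0), weighted_mean(yy < 0)
    # phi = 0 deg: window halves right/left of the pixel (x cos(0) > 0 vs < 0)
    r1_0, r2_0 = weighted_mean(xx > 0), weighted_mean(xx < 0)

    Gx = np.log(r1_90 / r2_90)          # convention of the formulas above
    Gy = np.log(r1_0 / r2_0)
    Gn = np.hypot(Gx, Gy)               # gradient magnitude
    Gt = np.arctan2(Gy, Gx)             # gradient orientation
    return Gx, Gy, Gn, Gt
```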
Then, the obtained gradient is used to establish the SAR-Harris scale space. The multi-scale SAR-Harris function is:

$$C_{SH}(x, y, \alpha) = g_{\sqrt{2}\alpha} * \begin{bmatrix} G_{x,\alpha}^2 & G_{x,\alpha} G_{y,\alpha} \\ G_{y,\alpha} G_{x,\alpha} & G_{y,\alpha}^2 \end{bmatrix}$$

In the formula, $g_{\sqrt{2}\alpha}$ represents a Gaussian kernel with a standard deviation of $\sqrt{2}\alpha$, and $C_{SH}(x, y, \alpha)$ represents the value of the SAR-Harris scale space at $(x, y, \alpha)$.
(2) Feature point detection and precise localization.
For each point in the multi-scale space, feature points are detected using the DoG operator, a scale-space transformation that can detect local extreme points at different scales. To obtain more robust extreme points, the following function is used to filter out low-contrast points among the initial extrema:

$$R_{SH}(x, y, \alpha) = \det\left(C_{SH}(x, y, \alpha)\right) - d \cdot \operatorname{tr}\left(C_{SH}(x, y, \alpha)\right)^2$$

In the formula, $\det$ denotes the determinant of the matrix, $\operatorname{tr}$ denotes the trace of the matrix, and $d$ is an adjustment parameter set to 0.04.
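Continuing the sketch, the SAR-Harris response can be computed from the ROEWA gradients; the Gaussian smoothing with standard deviation $\sqrt{2}\alpha$ and d = 0.04 follow the formulas above, while everything else is an assumption:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def sar_harris_response(Gx, Gy, alpha=2.0, d=0.04):
    """R_SH = det(C_SH) - d * tr(C_SH)^2, with C_SH the Gaussian-smoothed
    matrix of ROEWA gradient products."""
    s = np.sqrt(2.0) * alpha
    Axx = gaussian_filter(Gx * Gx, s)
    Axy = gaussian_filter(Gx * Gy, s)
    Ayy = gaussian_filter(Gy * Gy, s)
    det = Axx * Ayy - Axy ** 2           # determinant of C_SH
    tr = Axx + Ayy                       # trace of C_SH
    return det - d * tr ** 2             # candidate key points: local maxima
```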
(3) Assign the main direction.
Next, we used the ROEWA gradient to calculate the gradient histogram in the key point neighborhood; the direction corresponding to the highest peak of the histogram is the main direction.
(4) Generate descriptors.
The scale-dependent neighborhood around each key point is divided into sectors. We utilized the ROEWA gradient to calculate the gradient histograms within the key point neighborhood and then concatenated all histograms into a normalized vector to generate the SAR-SIFT feature descriptor.
(5) Key point matching and filtering of matched points.
After obtaining the descriptors of the key points, we used the nearest neighbor distance ratio (NNDR) to perform preliminary matching: for each key point, we chose the two closest feature points and judged whether their distance ratio meets a given threshold. The matched feature point pairs selected by the threshold were stored. The Euclidean distance between descriptors is calculated as follows:

$$d = \sqrt{\sum_{i=1}^{n} (x_i - y_i)^2}$$

The initial matched feature point pairs contain a large number of erroneous matches, which need to be removed. We retained the matched points that are consistent in space and angle using the RANSAC algorithm. After obtaining all the correctly matched point pairs, we used these pairs to calculate the transformation model parameters and obtain the transformation matrix.
(6) Image resampling.
We used the obtained affine transformation matrix to resample the image to be registered, adjusting its position and size to achieve accurate alignment. Through the above steps, the registered SAR image is obtained; a brief sketch of the matching and resampling steps follows, and the flowchart of the SAR-SIFT algorithm is shown in Figure 6.
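Pulling steps (5) and (6) together, a hedged OpenCV sketch is given below. OpenCV has no SAR-SIFT implementation, so the standard SIFT detector stands in for the SAR-adapted one; the NNDR ratio of 0.8 and the RANSAC reprojection threshold are illustrative values, and 8-bit grayscale inputs are assumed.

```python
import cv2
import numpy as np

def register_to_reference(ref_img, mov_img, ratio=0.8):
    """NNDR matching, RANSAC outlier rejection, and affine resampling."""
    sift = cv2.SIFT_create()
    kp_ref, des_ref = sift.detectAndCompute(ref_img, None)
    kp_mov, des_mov = sift.detectAndCompute(mov_img, None)

    # NNDR test: keep a match only if it is clearly better than the runner-up.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = [m for m, n in matcher.knnMatch(des_mov, des_ref, k=2)
            if m.distance < ratio * n.distance]

    src = np.float32([kp_mov[m.queryIdx].pt for m in good])
    dst = np.float32([kp_ref[m.trainIdx].pt for m in good])

    # RANSAC keeps spatially consistent pairs and fits the affine model.
    A, _ = cv2.estimateAffine2D(src, dst, method=cv2.RANSAC,
                                ransacReprojThreshold=3.0)

    h, w = ref_img.shape[:2]
    return cv2.warpAffine(mov_img, A, (w, h))  # resample onto reference grid
```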

4. Experimental Results and Analysis

Experiment 1 provides a controlled experimental environment: it has not only an actual spaceborne SAR dataset but also complete truth data, which can be used to evaluate the system and verify the performance of the algorithm. This experiment therefore provides a quantitative means of evaluating the effectiveness of the algorithm.
Experiment 2 expands the scope to a larger experimental area, the parking lot of the BEIJING-HYUNDAI AUTO Enterprise, set in a real application context during the COVID-19 pandemic. This experiment serves to present the outcomes in a practical application setting.
Experiment 3 also provides a controlled experimental environment, using RTK equipment to accurately record vehicle coordinates, and verifies the robustness of the proposed method with higher-resolution data.

4.1. Dataset 1: Nearby Parking Lot

For the detection of the change in the number of vehicles in the nearby parking lot, a total of 82 images acquired from 5 March 2020 to 14 November 2022 were selected. First, the experimental processing area was selected, with an image size of 376 × 376 pixels, as shown in Figure 7.
After applying median filtering and radiometric correction to the image sequence, image registration is required. There are two main methods for image registration: pixel-based registration and feature-based registration. Pixel-based registration methods minimize the differences between pixels in the images to achieve image registration. Commonly used pixel-based registration methods include cross-correlation registration, phase correlation-based registration, and wavelet transform-based registration. The principle behind these methods is to calculate the similarity between corresponding pixels in two images, find the translation that maximizes the similarity, and complete image registration. Feature-based registration methods achieve image registration by extracting feature points or regions from the images and calculating the relative positions of these feature points or regions. In this paper, SAR-SIFT registration and cross-correlation registration were applied to the image sequence to demonstrate the registration results using a scene as an example.
The SAR-SIFT algorithm was used to register the images. Matching points connection diagrams are shown in Figure 8. In order to visualize the matching effect, we use red lines to connect all detected matching pairs so that the registration results can be clearly observed.
Figure 9a shows the intensity superposition of the reference image and the image to be registered before registration, where gray represents areas of equal intensity, and green and magenta represent areas of different intensities. Figure 9b shows the superposition after SAR-SIFT registration, and Figure 9c shows the superposition after cross-correlation registration.
By comparing the intensity overlay images of the two registration methods, it can be seen that the SAR-SIFT algorithm accurately achieves the spatial alignment of the two images, while the result of cross-correlation registration is not ideal. Image registration is one of the key steps in this paper and is crucial for ensuring data quality and detection accuracy: low registration accuracy degrades subsequent target detection and ultimately increases the experimental errors. Therefore, the high-accuracy SAR-SIFT algorithm is used to register the time-series images in this experiment for subsequent processing.
After registration, the images need to be cropped for background subtraction, mainly because the changed areas occupy only a small fraction of the experimental images; cropping increases the proportion of changed regions in the image. The final cropped size is 50 × 50 pixels. Figure 10a shows the registration area, while Figure 10b and Figure 10c show the optical and SAR images of the parking lot, respectively.
Logarithm Background Subtraction is applied to the area marked by the red box in Figure 10, and the resulting image is shown in Figure 11. From left to right are the input image, the static background and the image that contains the changed part. The red box represents the parking area to be detected.
After obtaining the image that contains the changed part through Logarithm Background Subtraction, it is necessary to perform target detection on the image in order to obtain the change targets.
In this paper, a CFAR target detection algorithm based on the Gaussian distribution is used: the statistical information of the changed image is computed, and the Gaussian distribution is used to model the environmental background, from which the detection threshold is derived. The target detection result is presented in Figure 12. CFAR detection classifies pixels whose values exceed the threshold as targets and the remaining pixels as non-targets. Therefore, after CFAR detection, a binary image is formed, as shown on the left side of the figure, where the white areas represent target points and the black areas represent non-target points. In the right figure, the red dots correspond to the white pixels in the left figure, i.e., 'vehicle appeared' in the legend.
The advantage of the CFAR algorithm is that it determines the detection threshold adaptively, avoiding the sensitivity of a static threshold to noise and interference. Since the algorithm requires only local information to complete the detection task, its computational load is small and its real-time performance is good, making it suitable for the target detection task in this paper. By multiplying the number of pixels with detected targets by the proportional coefficient, the vehicle count for each scene image is calculated, yielding the vehicle count variation curves shown in Figure 13, where Figure 13a shows the Logarithm Background Subtraction method result and Figure 13b shows the result of the pairwise comparison method.

4.2. Dataset 2: BEIJING-HYUNDAI AUTO Enterprise

In order to carry out the change detection experiment on the BEIJING-HYUNDAI AUTO Enterprise, a total of 38 images acquired from 9 July 2019 to 25 September 2020 were selected. The images were cropped and the experimental processing area was identified. As shown in Figure 14, the area marked by the red box is the experimental processing area, and the image connected by the red line is its enlarged view together with the corresponding optical image. As shown in the enlarged image, the size of the experimental processing area is 128 × 128 pixels.
After basic denoising of the images, the SAR-SIFT algorithm was used to perform registration on the experimental area. Matching points connection diagrams are shown in Figure 15. In order to visualize the matching effect, we use red lines to connect all detected matching pairs so that the registration results can be clearly observed.
The registration effect is shown in Figure 16. After SAR-SIFT registration, the images were aligned, and further operations can be performed.
The Logarithm Background Subtraction result of the image sequence is demonstrated in Figure 17. From left to right are the input image, the static background and the image that contains the changed part. The red box represents the parking area to be detected.
Adding a binary mask to the image that contains the changed part leaves only the region of interest, which is convenient for subsequent CFAR detection and reduces the error caused by the irrelevant area. The region of interest is shown in Figure 18, where Figure 18a is an optical image, and the red box indicates the parking area of the enterprise, which is also the CFAR detection area. Figure 18b shows the binary mask.
After obtaining the image that contains the changed part through Logarithm Background Subtraction, it is necessary to perform target detection in the region of interest of the image. The detection result is shown in Figure 19.
The pairwise comparison method detection result is shown in Figure 20.
The red pixels in the image represent an increase in vehicle targets relative to the background or base image, while the green pixels represent a decrease relative to the base image. By multiplying the number of detected target pixels by the proportional coefficient, the number of vehicles in each image can be calculated, resulting in the vehicle count change curves shown in Figure 21, where Figure 21a shows the Logarithm Background Subtraction method result and Figure 21b shows the result of the pairwise comparison method.

4.3. Dataset 3: CCTV Tower Parking Lot

This experiment is based on the PAZ-1 dataset. The sequence of time-series images spans from 14 February 2023 to 31 August 2023 and contains a total of 12 images. Since this dataset reaches a resolution of 3 m × 3 m, RTK (real-time kinematic) equipment was used to accurately record the coordinates of vehicles in the parking lot and the trajectory map during data collection; a vehicle distribution map was then created by parsing the RTK data. Finally, the experimental results of the proposed method are compared with the truth data to further evaluate the accuracy of the algorithm.
As shown in Figure 22, the area marked by the red box is the parking lot area, and the image connected by the red line is the enlarged image of the parking lot and its corresponding optical image. As shown in the enlarged image, the size of the experimental processing area is 50 × 50 pixels.
The Logarithm Background Subtraction result of the image sequence is demonstrated in Figure 23. From left to right are the input image, the static background and the image that contains the changed part.
After obtaining the image that contains the changed part through Logarithm Background Subtraction, it is necessary to perform target detection in the region of interest of the image. The CFAR detection result is shown in Figure 24: the left image is the binary image obtained after CFAR detection, in which the white points are the targets, and the right image is the visualization result, where the red points correspond to the white points in the left image, i.e., the vehicle targets.

5. Discussion

5.1. Performance Comparison of Traditional and Proposed Method

The RMSE was used as a quantitative analysis metric for change detection in this experiment. RMSE is defined as the square root of the average of the squared differences between predicted values and true values and is commonly used to evaluate the accuracy of prediction models. The formula is as follows:
$$\mathrm{RMSE} = \sqrt{\frac{1}{n} \sum_{i=1}^{n} \left(y_i - \hat{y}_i\right)^2}$$

In the formula, $n$ is the number of predicted samples, $y_i$ is the true value, and $\hat{y}_i$ is the predicted value of the model.
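For completeness, a short numpy version of this metric; the example values are the truth and proposed-method counts from Table 4 and reproduce the RMSE of 8.534 reported in Section 5.1.1:

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root mean square error between true and predicted counts."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

# Truth vs. proposed-method counts from Table 4:
print(rmse([114, 134, 109, 140, 46, 44],
           [108, 126, 96, 150, 48, 36]))   # -> 8.534 (rounded)
```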

5.1.1. Dataset 1: Nearby Parking Lot

From 15 September 2022 to 14 November 2022, a total of 6 scenes of ground truth data were collected for this parking lot. Table 4 shows the absolute deviation between the detected and true vehicle counts for the proposed method and the traditional pairwise comparison method.
The graph of the deviation for detected cars by the traditional method and the proposed method is shown in Figure 25.
It can be seen from the graph that the deviation for detected cars by the proposed method is lower than that of the traditional method. The overall detection performance is evaluated with the RMSE metric.
Based on calculations, the experimental results obtained from the Logarithm Background Subtraction method yield an RMSE value of 8.534. From the perspective of quantitative analysis, the detection results contain certain errors, but these are within a reasonable range. Considering the low resolution of Sentinel-1, the method is expected to obtain more accurate results on higher-resolution images. Taking the data from 21 October 2022 as a case study, the following section presents an error analysis of the vehicle distribution in the parking lot.
By observing the parking lot on site, the distribution of vehicles in the parking lot was modeled. In this experiment, the number of parked cars in each pixel was counted to determine the single-pixel proportional coefficient of 6, i.e., the ground truth number of vehicles in a fully occupied pixel is 6. As can be seen from Figure 26a, not all pixels detected as target points contained 6 cars, and pixels with fewer than 3 cars were not detected as target points, resulting in missed detections.
After performing the calculations, the RMSE for the pairwise comparison method is determined to be 20.261. From a perspective of vehicle count change detection, the pairwise comparison method relies on the number of targets in the base image, which leads to cumulative errors in the detection results. The more targets there are in the base image, the higher the probability of false detections in the change detection results. On the other hand, the Background Subtraction method is not limited by the number of targets and only requires sufficient similarity between the background and detection images. Additionally, the pairwise difference method requires image registration and subtraction for each pair of images, which is computationally intensive and vulnerable to registration errors, leading to lower efficiency and accuracy. Compared to the pairwise comparison method, the Background Subtraction method reduces false detections caused by noise and illumination changes by modeling the static background.

5.1.2. Dataset 3: CCTV Tower Parking Lot

We take the image dated 31 August 2023 as an example to analyze the experimental result. The vehicle distribution map of the parking lot recorded by RTK equipment is shown in Figure 27a, and Figure 27b contains the enlarged image of CFAR detection result.
The true value is 53 vehicles, and the detection result is 57 vehicles. The experimental result shows that, compared with Dataset 1, Dataset 3 has higher resolution and yields more accurate detection results. However, only one set of true values was available for this experiment. In the future, more truth data will be collected to verify the detection performance of the proposed method on high-resolution spaceborne SAR images.

5.2. Limitations and Potential Usability of the Proposed Method

Despite the promising results, it is essential to acknowledge the limitations and areas for improvement encountered in this study. One limitation lies in the sensitivity of the logarithm background subtraction method to variations in image resolution, as indicated by the observed errors in the detection results. Additionally, the algorithm’s performance may be influenced by changes in environmental conditions, such as weather and lighting, which were not explicitly addressed in this research.

6. Conclusions

This paper combines Logarithm Background Subtraction with SAR-SIFT registration technology to form a change detection method suitable for spaceborne SAR platforms. Firstly, the logarithmic transformation technique is introduced to enhance the robustness of the Background Subtraction method. Then, the images are coregistered by SAR-SIFT to avoid mismatch-induced degradation of detection performance. Next, the static background is modeled by the median filter, and subtraction is carried out to extract the image that contains the changed part. Finally, change detection is performed on this image to obtain the overall change information. The experimental results show that the proposed method can effectively detect the overall change information in spaceborne SAR time-series images and that, compared with the traditional pairwise comparison method, the detection efficiency is greatly improved. Future work will focus on enhancing the algorithm's robustness and performance by incorporating information from all polarization channels.

Author Contributions

Methodology and experiments, W.S. and Y.J.; writing, W.S. and Y.J.; resources, Y.L. (Yang Li) and Y.L. (Yun Lin); supervision, Y.W.; project administration, Y.W.; funding acquisition, Y.W.; Y.W., Y.L. (Yang Li), Y.L. (Yun Lin), Z.B. and W.J. gave valuable advice on manuscript writing. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China under Grants 62201011 and 62131001, the R&D Program of the Beijing Municipal Education Commission under Grant KM202210009004, and North China University of Technology research funds 110051360023XN224-8 and 110051360023XN211.

Data Availability Statement

Data are available on request due to restrictions of privacy.

Acknowledgments

We thank the anonymous reviewers for their valuable advice and comments, which helped improve the quality of this paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Hu, H.; Ban, Y. Unsupervised change detection in multitemporal SAR images over large urban areas. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2014, 7, 3248–3261. [Google Scholar] [CrossRef]
  2. Liao, M.; Jiang, L.; Lin, H.; Huang, B.; Gong, J.Y. Urban change detection based on coherence and intensity characteristics of SAR imagery. Photogramm. Eng. Remote Sens. 2008, 74, 999–1006. [Google Scholar] [CrossRef]
  3. Marin, C.; Bovolo, F.; Bruzzone, L. Building change detection in multitemporal very high resolution SAR images. IEEE Trans. Geosci. Remote Sens. 2014, 53, 2664–2682. [Google Scholar] [CrossRef]
  4. White, R.G. Change detection in SAR imagery. Int. J. Remote Sens. 1991, 12, 339–360. [Google Scholar] [CrossRef]
  5. Bazi, Y.; Bruzzone, L.; Melgani, F. An unsupervised approach based on the generalized Gaussian model to automatic change detection in multitemporal SAR images. IEEE Trans. Geosci. Remote Sens. 2005, 43, 874–887. [Google Scholar] [CrossRef]
  6. Qin, R.; Tian, J.; Reinartz, P. 3D change detection—Approaches and applications. J. Photogramm. Remote Sens. 2016, 122, 41–56. [Google Scholar] [CrossRef]
  7. Zheng, Y.; Zhang, X.; Hou, B.; Liu, G. Using combined difference image and k-means clustering for SAR image change detection. IEEE Geosci. Remote Sens. Lett. 2013, 11, 691–695. [Google Scholar] [CrossRef]
  8. Gong, M.; Cao, Y.; Wu, Q. A neighborhood-based ratio approach for change detection in SAR images. IEEE Geosci. Remote Sens. Lett. 2012, 9, 307–311. [Google Scholar] [CrossRef]
  9. Ma, J.; Gong, M.; Zhou, Z. Wavelet fusion on ratio images for change detection in SAR images. IEEE Geosci. Remote Sens. Lett. 2012, 9, 1122–1126. [Google Scholar] [CrossRef]
  10. McIvor, A.M. Background subtraction techniques. Proc. Image Vis. Comput. 2000, 4, 3099–3104. [Google Scholar]
  11. Benezeth, Y.; Jodoin, P.M.; Emile, B.; Laurent, H.; Rosenberger, C. Comparative study of background subtraction algorithms. J. Electron. Imaging 2010, 19, 033003. [Google Scholar]
  12. Brutzer, S.; Höferlin, B.; Heidemann, G. Evaluation of background subtraction techniques for video surveillance. In Proceedings of the 2011 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Colorado Springs, CO, USA, 20–25 June 2011; pp. 1937–1944. [Google Scholar]
  13. Shen, W.; Lin, Y.; Zhao, Y.; Yu, L.; Hong, W. Initial result of single channel CSAR GMTI based on background subtraction. In Proceedings of the 2017 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Fort Worth, TX, USA, 23–28 July 2017; pp. 976–979. [Google Scholar]
  14. Chen, J.; Yang, W.; Wang, Y.; Li, C. Moving Target Monitoring Algorithm Based on High-frame-rate SAR Images. J. Radars 2022, 11, 1048–1060. [Google Scholar]
  15. Ban, Y.; Yousif, A.O. Multitemporal Spaceborne SAR Data for Urban Change Detection in China. IEEE Geosci. Remote Sens. Lett. 2012, 5, 1087–1094. [Google Scholar] [CrossRef]
  16. Ye, X.; Zhang, H.; Wang, C.; Zhang, B.; Wu, F.; Tang, Y. SAR image change detection based on object-based method. In Proceedings of the 2013 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Melbourne, Australia, 21–26 July 2013; pp. 208–2086. [Google Scholar]
  17. Paul, S.; Pati, U.C. SAR image registration using an improved SAR-SIFT algorithm and Delaunay-triangulation-based local matching. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2019, 12, 2958–2966. [Google Scholar] [CrossRef]
  18. Dellinger, F.; Delon, J.; Gousseau, Y.; Michel, J.; Tupin, F. SAR-SIFT: A SIFT-like algorithm for applications on SAR images. In Proceedings of the 2012 IEEE International Geoscience and Remote Sensing Symposium, Munich, Germany, 22–27 July 2012; pp. 3478–3481. [Google Scholar]
  19. Dubois, C.; Nascetti, A.; Thiele, A.; Crespi, M.; Hinz, S. SAR-SIFT for matching multiple SAR images and radargrammetry. PFG-J. Photogramm. Remote Sens. Geoinf. Sci. 2017, 85, 149–158. [Google Scholar] [CrossRef]
  20. Torres, R.; Snoeij, P.; Geudtner, D.; Bibby, D.; Davidson, M.; Attema, E.; Potin, P.; Rommen, B.; Floury, N.; Brown, M. GMES Sentinel-1 mission. Remote Sens. Environ. 2012, 120, 9–24. [Google Scholar] [CrossRef]
  21. Geudtner, D.; Torres, R.; Snoeij, P.; Davidson, M.; Rommen, B. Sentinel-1 system capabilities and applications. In Proceedings of the 2014 IEEE Geoscience and Remote Sensing Symposium, Quebec City, QC, Canada, 13–18 July 2014; pp. 1457–1460. [Google Scholar]
  22. Yagüe-Martínez, N.; Prats-Iraola, P.; Gonzalez, F.R.; Brcic, R.; Shau, R.; Geudtner, D.; De Zan, F.; Reigber, A.; Bamler, R. Interferometric processing of Sentinel-1 TOPS data. IEEE Trans. Geosci. Remote Sens. 2016, 54, 2220–2234. [Google Scholar] [CrossRef]
  23. Huang, L.; Liu, B.; Li, B.; Guo, W.; Yu, W.; Zhang, Z.; Yu, W. OpenSARShip: A dataset dedicated to Sentinel-1 ship interpretation. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2017, 11, 195–208. [Google Scholar] [CrossRef]
  24. Vreugdenhil, M.; Wagner, W.; Bauer-Marschallinger, B.; Pfeil, I.; Teubner, I.; Rüdiger, C.; Strauss, P. Sensitivity of Sentinel-1 backscatter to vegetation dynamics: An Austrian case study. Remote Sens. 2018, 10, 1396. [Google Scholar] [CrossRef]
  25. Veloso, A.; Mermoz, S.; Bouvet, A.; Le Toan, T.; Planells, M.; Dejoux, J.F.; Ceschia, E. Understanding the temporal behavior of crops using Sentinel-1 and Sentinel-2-like data for agricultural applications. Remote Sens. Environ. 2017, 199, 415–426. [Google Scholar] [CrossRef]
  26. Malenovský, Z.; Rott, H.; Cihlar, J.; Schaepman, M.E.; García-Santos, G.; Fernandes, R.; Berger, M. Sentinels for science: Potential of Sentinel-1, -2, and -3 missions for scientific observations of ocean, cryosphere, and land. Remote Sens. Environ. 2012, 120, 91–101. [Google Scholar] [CrossRef]
  27. Novak, L.M.; Owirka, G.J.; Netishen, C.M. Performance of a high-resolution polarimetric SAR automatic target recognition system. Linc. Lab. J. 1993, 6, 1. [Google Scholar]
  28. Gao, G.; Wang, X.; Niu, M.; Zhou, S. Modified log-ratio operator for change detection of synthetic aperture radar targets in forest concealment. J. Appl. Remote Sens. 2014, 8, 083583. [Google Scholar] [CrossRef]
Figure 1. Comparison of optical image and SAR image of the nearby parking lot. (a) Optical image from Google Earth. (b) SAR image acquired on 15 September 2022 by Sentinel-1A satellite.
Figure 2. The collection of truth data (21 October 2022). (a) Field situation. (b) Truth data modeling.
Figure 3. Comparison of optical image and SAR image of the BEIJING-HYUNDAI AUTO Enterprise. (a) Optical image from Google Earth. (b) SAR image acquired on 25 September 2020 by the Sentinel-1A satellite.
Figure 4. Comparison of optical image and SAR image of the CCTV Tower Parking Lot. (a) Optical image from Google Earth. (b) SAR image acquired on 31 August 2023 by the PAZ-1 satellite.
Figure 5. Overall flowchart of the algorithm.
Figure 6. Flowchart of the SAR-SIFT algorithm.
Figure 7. Comparison of optical image and SAR image of the experimental area. (a) Optical image from Google Earth. (b) SAR image acquired on 15 September 2022 by Sentinel-1A satellite.
Figure 8. Line segments connecting matching points (date of reference image: 15 September 2022; date of image to be registered: 21 October 2022).
Figure 9. Comparison image before and after SAR-SIFT registration. (a) Before SAR-SIFT registration. (b) After SAR-SIFT registration. (c) After cross-correlation registration.
Figure 10. Parking lot area for experiment. (a) Area for registration from Google Earth. (b) Optical image of parking lot area from Google Earth. (c) SAR image of parking lot area acquired on 15 September 2022 by Sentinel-1A satellite.
Figure 11. Logarithm Background Subtraction result. (a) Input image. (b) Background. (c) Change part.
Figure 12. CFAR detection results and labeling results. (a) CFAR's binary detection results of changed pixels (vehicles). (b) Detected changed pixels (vehicles) labeled on the original SAR image with red dots.
Figure 13. Curve of vehicle number. (a) Logarithm Background Subtraction method result. (b) Pairwise comparison method result.
Figure 14. Parking lot area for experiment. (a) SAR image acquired on 25 September 2020 by Sentinel-1A satellite. (b) Optical image of parking lot area from Google Earth. (c) SAR image of parking lot area acquired on 25 September 2020 by Sentinel-1A satellite.
Figure 15. Line segments connecting matching points (date of reference image: 25 September 2020; date of image to be registered: 24 December 2019).
Figure 16. Comparison image before and after SAR-SIFT registration. (a) Before SAR-SIFT registration. (b) After SAR-SIFT registration.
Figure 17. Logarithm Background Subtraction result. (a) Input image. (b) Background. (c) Change part.
Figure 18. Region of interest (BEIJING HYUNDAI AUTO Enterprise). (a) Optical image from Google Earth. (b) Generated binary mask for extracting the region of interest.
Figure 19. Logarithmic Background Subtraction method detection result.
Figure 20. Pairwise comparison method detection result.
Figure 21. Curve of vehicle number. (a) Logarithm Background Subtraction method result. (b) Pairwise comparison method result.
Figure 22. Parking lot area for experiment. (a) SAR image acquired on 31 August 2023 by PAZ-1 satellite. (b) Optical image of parking lot area from Google Earth. (c) SAR image of parking lot area acquired on 31 August 2023 by PAZ-1 satellite.
Figure 23. Logarithm Background Subtraction result. (a) Input image. (b) Background. (c) Change part.
Figure 24. The CFAR detection result.
Figure 25. Deviation for detected cars.
Figure 26. Comparison of actual distribution and detection result of vehicles in the parking lot. (a) Actual distribution. (b) Detection result.
Figure 27. Detection result. (a) Vehicle distribution map. (b) Enlarged image of CFAR detection result.
Table 1. Sentinel-1 experiment parameters.

Parameter Name | Values
Acq. Mode | IW
Pol. mode | HH
Spacing [m] | 10 × 10
Orbit Repeat Cycle | 12 days
Onboard sensors | C-Band
Table 2. PAZ-1 experiment parameters.

Parameter Name | Values
Acq. Mode | SM
Pol. mode | HH
Spacing [m] | 3 × 3
Orbit Repeat Cycle | 11 days
Onboard sensors | X-Band
Table 3. Six sets of truth data recorded.

Date of Image | 2022.9.15 | 2022.9.27 | 2022.10.09 | 2022.10.21 | 2022.11.02 | 2022.11.14
Truth data (number of vehicles) | 114 | 134 | 109 | 140 | 46 | 44
Table 4. The detection result and performance comparison of the proposed method and the traditional pairwise comparison method.

Date of Image | 2022.9.15 | 2022.9.27 | 2022.10.09 | 2022.10.21 | 2022.11.02 | 2022.11.14
Truth data (number of vehicles) | 114 | 134 | 109 | 140 | 46 | 44
Traditional method (number of vehicles) | 145 | 110 | 121 | 167 | 39 | 42
Deviation for detected cars (traditional method) | 31 | 24 | 12 | 27 | 7 | 2
Proposed method (number of vehicles) | 108 | 126 | 96 | 150 | 48 | 36
Deviation for detected cars (proposed method) | 6 | 8 | 13 | 10 | 2 | 8
