Article

Synthetic-Aperture Radar Radio-Frequency Interference Suppression Based on Regularized Optimization Feature Decomposition Network

School of Electronic Science, National University of Defense Technology, Changsha 410073, China
*
Author to whom correspondence should be addressed.
Remote Sens. 2024, 16(14), 2540; https://doi.org/10.3390/rs16142540
Submission received: 1 May 2024 / Revised: 5 July 2024 / Accepted: 8 July 2024 / Published: 10 July 2024

Abstract

Synthetic-aperture radar (SAR) can operate in all weather conditions and at all times, and satellite-borne SAR additionally offers a short revisit period and a wide imaging swath. Satellite-borne SAR has therefore been widely deployed, and its images are widely used in geographic mapping, radar interpretation, ship detection, and other fields. However, satellite-borne SAR is susceptible to various types of intentional or unintentional interference during imaging, and because such interference arrives as a direct wave, its power is much stronger than the wave reflected by targets. Radio-frequency interference, a common interference pattern, is present in various satellite-borne SAR systems and seriously degrades SAR image quality. To address this problem, this paper proposes a feature decomposition network based on regularization optimization to suppress interference. The contributions of this work are as follows: 1. By analyzing the performance limitations of existing methods, this work proposes a novel regularization method for radio-frequency interference suppression tasks. From the perspective of data distribution histograms and residual components, the proposed method eliminates the variable components introduced by common regularization, greatly reduces the difficulty of data mapping, and significantly improves robustness and performance. 2. This work proposes a feature decomposition network whose feature decomposition module contains two parts: one part represents only the interference signal, and the other represents only the radar signal. The neurons representing the interference signal are discarded, while the neurons representing the radar signal serve as input to the subsequent network. A cosine similarity constraint separates the interference from the network as much as possible. Finally, the method is validated on the MiniSAR and Sentinel-1A datasets.

1. Introduction

Synthetic-aperture radar (SAR) is a type of high-resolution imaging radar, and it also possesses imaging capability under all weather and all time conditions; thus, it has been widely deployed [1,2]. Common SAR satellites include the Sentinel satellites from Europe, RADARSAT satellites from Canada, and Haisi satellites from China. With the massive deployment of satellite-borne and airborne SAR systems, the imaging data have also been widely used in geodetic surveying [3], target recognition [4], ship detection [5,6], and super-resolution [7,8]. However, during the detection process of SAR satellites, various types of intentional or unintentional interference can cause SAR satellites to be blinded [9,10]. Among them, radio-frequency interference (RFI) is one of the most common interference patterns. RFI can be divided into narrowband interference and broadband interference. Usually, RFI is the direct wave, and its power is much stronger than the reflected wave of targets. Therefore, the coverage area of radio-frequency interference is wide and the interference intensity is high, resulting in the complete concealment of targets in the RFI-polluted areas. A common RFI-polluted image in Sentinel-1A is shown in Figure 1. As a countermeasure method, interference suppression aims to restore the clean image from the RFI-polluted image, laying the foundation for subsequent radar interpretation and other tasks, so it holds significant scientific research value and practical value.
The difficulty in interference suppression lies in filtering out interference signals while preserving radar signals as much as possible. Interference suppression algorithms can be roughly divided into four categories: non-parametric methods, parametric methods, semi-parametric methods, and deep learning methods [11]. Non-parametric methods include notch filters [12,13], adaptive filters [14], and subspace projection filters [15,16,17]. Although non-parametric methods are simple, they have significant limitations: they require a large difference between the interference signals and the radar signals, and when the distinguishing characteristics are not obvious, their performance is insufficient [11]. In addition, non-parametric methods filter out some radar signal along with the interference, which may cause target loss. Parametric methods need to model the echo signals [18,19], and their performance is limited by the model’s complexity; especially for complex interference scenarios, their performance is unclear. With the rise of compressed sensing and deep learning [20,21], sparse reconstruction methods and deep learning methods have gradually become mainstream. Unlike parametric methods, semi-parametric methods do not directly model radio-frequency interference. Instead, they rely primarily on iterative loss functions to achieve matrix decomposition, effectively separating interference from aliased signals. These approaches not only filter out interference but also preserve radar signals, resulting in superior performance. Numerous iterative models have been proposed, including sparse models [22,23], low-rank models [24,25,26], and joint sparse low-rank models [27,28,29,30,31,32]. A previous work [25] proposes a robust principal component analysis method that uses sparse constraints to iteratively solve for principal components in the time–frequency domain to separate the interference. Another work [33] proposes a weighted vector decomposition method that separates the interference from the aliased signals through low-rank and sparse constraints. The work in [34] proposes an interference suppression method that uses blind source separation with singular value and eigenvalue decomposition based on information entropy. Although the above methods achieve excellent performance, their computational complexity is high, and their behavior in complex combined interference scenarios remains unclear. A previous work [35] presented an advanced approach for suppressing range ambiguity using blind source separation, and another work [36] presented a novel interference suppression method using secondary compensation.
Owing to its good generalization ability, deep learning has been widely applied in SAR target interpretation [36,37,38,39,40], interference detection [41,42], etc. A previous work [43] proposed a feature decomposition and reconstruction learning method to improve facial expression recognition accuracy. Another work [44] used an encoder–decoder network as a feature decomposition network to decompose the feature maps of visible and infrared images into common and unique features based on their similarities and differences. In SAR target detection, a previous work [45] used a feature decomposition network to select ideal features and discard noise features, with the ideal features then used for target detection. Overall, feature decomposition networks have shown excellent performance across a variety of tasks.
Moreover, deep learning has gradually gained attention in the field of interference suppression. A previous work [46] proposed an interference suppression method combining interference detection and notch filtering: it first used the single-shot multi-box detector to detect interference and then used a traditional notch filter to separate it. Another work [47] proposed a domain-invariant feature interference suppression network, which optimizes the loss function so that the network focuses only on the interference region, achieving interference localization in cross-sensor experiments; it likewise used a traditional notch filter to separate the interference. A previous work [48] introduced inpainting networks into SAR interference suppression for the first time and achieved good results. Another work [49] embedded sparsity constraints on the interference and low-rankness constraints on the radar signals into the loss function, and its performance exceeds that of semi-parametric methods such as RPCA. In the time–frequency spectrogram, because variable interference components contribute negatively to image inpainting, a previous work [50] proposed a joint image segmentation and image inpainting network, which greatly improves image quality by separating variable interference components before inpainting.
Although the above deep learning methods achieve good performance, some shortcomings remain. First, in the time–frequency spectrogram, the interference intensity is uncertain, and the variable interference power can degrade network performance; although some methods attempt to address this, they do not reveal the underlying mechanism. Second, excessive variable interference components require the network to devote more neurons to handling them, which increases the difficulty of inpainting. To solve these problems, this paper first proposes a novel data regularization method, whose superiority is demonstrated by data distribution histograms and experiments, and second proposes a feature decomposition network that separates interference from the network and improves performance. The innovations of this paper are as follows:
  • By analyzing the performance limitations of existing methods, this paper proposes a new data regularization method for radio-frequency interference suppression tasks. From the perspective of data distribution histograms and residual components, the proposed method eliminates the variable components introduced by the common regularization method, and the mapping relationship between input and output data has been transformed from a Gaussian relationship to an approximately linear relationship. Data mapping is much simpler, so it can improve the network’s robustness and performance. In addition, this regularization method can be extended to other radar-interference suppression tasks.
  • This paper proposes a feature decomposition network, in which the feature decomposition block consists of two parts; one part only represents the interference signal, and the other part only represents the radar signal. The neurons representing the interference signal are discarded, and the neurons representing the radar signal are used as the input of the subsequent network. A cosine similarity constraint is added to the loss function to separate the interference from the network as much as possible, thereby improving its performance. Moreover, this module can also be extended to other abnormal datasets containing noise, interference, etc.
This paper is organized as follows. Section 2 introduces the principle of interference suppression in the signal domain. Section 3 introduces the proposed network, loss function, and regularization method. The experimental results are presented in Section 4, and Section 5 concludes the paper.

2. Interference Suppression Principle

The coverage of radio-frequency interference is wide, and its intensity is high; RFI-polluted targets are completely buried in the interference, making suppression difficult. This work starts from the theory of signal transformation and studies interference suppression algorithms in the time–frequency domain. This section is divided into two parts: the first focuses on interference suppression, and the second on the SAR imaging algorithm. The purpose of the first part is to recover clean echo signals from interfered echo signals, and the second part images the clean echoes obtained in the first part.

2.1. Interference Suppression in Signal Domain

For an SAR system, the single echo signal received by the radar can be expressed as follows:
$r(\tau) = s_r(\tau) + s_i(\tau) + n(\tau)$,  (1)
where $\tau$ is the time delay in range, $s_r(\tau)$ is the radar signal, $s_i(\tau)$ is the interference signal, and $n(\tau)$ is the noise signal. The radar signal can be expressed as follows:
$s_r = \mathrm{rect}\left(\frac{\tau}{t_r}\right)\exp\left(j2\pi f_c t + j\pi k_r \tau^2\right)$,  (2)
where $\mathrm{rect}(\cdot)$ is a rectangle function, $t_r$ is the radar pulse duration, $f_c$ is the carrier frequency, and $k_r$ is the chirp rate. The radio-frequency interference can be expressed as follows:
$s_i = \mathrm{rect}\left(\frac{\tau}{t_{rfi}}\right)\exp\left(j2\pi f_c t + j\pi k_{rfi} \tau^2\right)$,  (3)
where $t_{rfi}$ is the RFI pulse duration, $k_{rfi}$ is the RFI chirp rate, and $B_{rfi} = k_{rfi} t_{rfi}$ is the RFI bandwidth. Usually, when $B_{rfi} \le 30~\mathrm{MHz}$, the radio-frequency interference is narrowband interference, and when $B_{rfi} > 30~\mathrm{MHz}$, it is wideband interference. By comparing Formula (2) and Formula (3), it can be seen that, because $k_{rfi} \ne k_r$, the radar signal and the interference signal have a higher degree of distinguishability in the time–frequency domain, so most methods also focus on this domain. The time–frequency transform can be expressed as follows:
$Y(t, f_\tau) = \int r(\tau)\, w(\tau - t)\exp(-j2\pi f_\tau \tau)\, d\tau$,  (4)
where $w$ is a rectangle-pulse window. At this point, the transformed signal can be expressed as follows:
$Y(t, f_\tau) = X(t, f_\tau) + I(t, f_\tau) + N(t, f_\tau)$,  (5)
where $Y(t, f_\tau)$ is the RFI-polluted time–frequency spectrogram, $X(t, f_\tau)$ is the clean time–frequency spectrogram, $I(t, f_\tau)$ is the RFI time–frequency spectrogram, and $N(t, f_\tau)$ is the noise time–frequency spectrogram. Interference suppression aims to recover the clean time–frequency spectrogram from the corrupted one, which can be regarded as an image inpainting task. Deep learning has demonstrated significant performance advantages in inpainting, making it a natural candidate for SAR interference suppression. An image inpainting model based on maximum a posteriori probability can be expressed as follows:
$\max_X \log P(X \mid Y) \propto \log P(Y \mid X) + \log P(X)$,  (6)
As in Formula (5), $Y$ represents the RFI-polluted image, and $X$ represents the clean image. The optimization function can be expressed as follows:
$\min_X\ l(Y, X) + \lambda R(X)$,  (7)
where $l(Y, X)$ is the loss function (common choices include the $l_1$ and $l_2$ losses), $R(X)$ is a prior information term, and $\lambda$ is a hyperparameter. From Equation (7), it can be seen that there are two optimization schemes for deep learning methods: the first is to adopt a better model, and the second is to extract richer prior information [51,52]. Representing the network's mapping function as $H$, deep learning image inpainting can be expressed as follows:
$X = H(Y)$,  (8)
Equation (8) is a nonlinear problem, which can be solved iteratively by gradient descent. The iterative formula can be expressed as follows:
$H_{n+1} = H_n - \eta \nabla \| X - Y \|$,  (9)
where $H_n$ is the iterative variable and $\eta$ is the iterative step size. Using this iteration, the mapping $H$ can be solved for, and a clean time–frequency spectrogram can be obtained from the network. Finally, an inverse short-time Fourier transform is applied to transform the spectrogram back to an echo signal. The clean echo signal can then be expressed as follows:
$s_r(\tau) = \int X(t, f_\tau)\, w(f_\tau - t)\exp(j2\pi f_\tau \tau)\, df_\tau$,  (10)
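To make this pipeline concrete, the following Python sketch chains Equations (4), (8), and (10): forward STFT, network-based spectrogram repair, and inverse STFT. It is a minimal illustration under stated assumptions rather than the authors' released code; the trained network handle `model` and the choice to repair only the magnitude while reusing the polluted phase are assumptions.
```python
# A minimal sketch of the Section 2.1 pipeline, assuming a trained network
# `model` that maps an RFI-polluted time-frequency magnitude spectrogram to a
# clean one (the handle name and magnitude-only repair are assumptions).
import numpy as np
from scipy.signal import stft, istft

def suppress_rfi_pulse(r, fs, model, nperseg=256):
    """Recover a clean range line from an RFI-polluted echo r(tau)."""
    # Equation (4): short-time Fourier transform with a rectangle-pulse window w.
    f, t, Y = stft(r, fs=fs, window='boxcar', nperseg=nperseg)
    # Repair the magnitude with the network and reuse the polluted phase,
    # a common simplification when only |Y| is inpainted.
    mag, phase = np.abs(Y), np.angle(Y)
    X = model(mag) * np.exp(1j * phase)      # Equation (8): X = H(Y)
    # Equation (10): inverse STFT back to a clean echo signal.
    _, s_r = istft(X, fs=fs, window='boxcar', nperseg=nperseg)
    return s_r[: len(r)]
```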

2.2. SAR Imaging

In Section 2.1, the interference has been filtered out. In this subsection, we image the clean echo to obtain a clean SAR image. The equivalent model for SAR imaging is shown in Figure 2. The radar moves from point A to point B with a velocity of $v$, and during this motion, the radar performs equidistant sampling of the target. The synthetic-aperture length is $L_{AB}$, and the equivalent minimum distance between the radar and the target is $R_0$. Assuming that the starting time at point A is $\eta_c$, the distance equation can be expressed as follows:
$R(\eta) = \sqrt{R_0^2 + v^2(\eta + \eta_c)^2}$,  (11)
During the radar’s motion, the echo signal can be expressed as follows:
$s(\tau, \eta) = A_0\, w_r\left(\tau - \frac{2R(\eta)}{c}\right) w_a(\eta - \eta_c)\exp\left(-j4\pi f_c R(\eta)/c\right)\exp\left(j\pi k_r \left(\tau - \frac{2R(\eta)}{c}\right)^2\right)$,  (12)
where $\tau$ is the time delay in range, $\eta$ is the time delay in azimuth, $w_r$ and $w_a$ are rectangle-pulse envelopes, $R(\eta)$ is the radar’s distance equation, $c$ is the speed of light, $f_c$ is the carrier frequency, $k_r$ is the range chirp rate, $\eta_c$ is the Doppler center time, $v$ is the radar’s speed, and $R_0$ is the vertical distance between the target and the radar. After pulse compression in range, the signal can be expressed as follows:
$s_{rc}(\tau, \eta) = \int S_0(f_\tau, \eta)\, H(f_\tau)\exp(j2\pi f_\tau \tau)\, df_\tau = A_0\, p_r\left(\tau - \frac{2R(\eta)}{c}\right) w_a(\eta - \eta_c)\exp\left(-j4\pi f_c R(\eta)/c\right)$,  (13)
where $H(f_\tau)$ is the range matched filter, $f_\tau$ is the range frequency, and $p_r$ is a sinc-pulse envelope. After the azimuth Fourier transform, the signal can be represented as follows:
$S_1(\tau, f_\eta) = A_0\, p_r\left(\tau - \frac{2R(\eta)}{c}\right) W_a(f_\eta - f_{\eta c})\exp\left(-j4\pi f_c R_0/c\right)\exp\left(j\pi \frac{f_\eta^2}{k_a}\right)$,  (14)
where $f_\eta$ is the azimuth frequency, $f_{\eta c}$ is the Doppler center frequency, and $k_a$ is the azimuth chirp rate. After range cell migration correction (RCMC), the signal can be expressed as follows:
$S_2(\tau, f_\eta) = A_0\, p_r\left(\tau - \frac{2R_0}{c}\right) W_a(f_\eta - f_{\eta c})\exp\left(-j4\pi f_c R_0/c\right)\exp\left(j\pi \frac{f_\eta^2}{k_a}\right)$,  (15)
After azimuth matched filtering, the signal can be represented as follows:
$S_3(\tau, f_\eta) = S_2(\tau, f_\eta)\, H_{az}(f_\eta) = A_0\, p_r\left(\tau - \frac{2R_0}{c}\right) W_a(f_\eta - f_{\eta c})\exp\left(-j4\pi f_c R_0/c\right)$,  (16)
$H_{az}(f_\eta) = \exp\left(-j\pi \frac{f_\eta^2}{k_a}\right)$,  (17)
where $H_{az}(f_\eta)$ is the azimuth matched filter. Finally, the image can be acquired by the azimuth inverse Fourier transform. At this point, the signal is represented as follows:
$s_{ac}(\tau, \eta) = \int S_3(\tau, f_\eta)\exp(j2\pi f_\eta \eta)\, df_\eta = A_0\, p_r\left(\tau - \frac{2R_0}{c}\right) p_a(\eta)\exp\left(-j4\pi f_c R_0/c\right)\exp\left(j\pi f_{\eta c}\, \eta\right)$,  (18)
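As a compact companion to Equations (13)–(18), the following numpy sketch traces the range-Doppler imaging steps on a 2-D echo matrix. The precomputed matched-filter spectra `h_range` and `h_azimuth` are assumed inputs, and RCMC appears only as a placeholder, so this is an illustrative outline rather than a full imaging processor.
```python
# An illustrative numpy outline of the range-Doppler steps in Equations
# (13)-(18); the matched-filter spectra `h_range` (length = range bins) and
# `h_azimuth` (length = azimuth bins) are assumed precomputed, and RCMC is a
# placeholder, so this is a sketch rather than a full imaging processor.
import numpy as np

def rda_image(echo, h_range, h_azimuth):
    """echo: 2-D array (azimuth x range) of raw SAR samples."""
    # Equation (13): range pulse compression in the range-frequency domain.
    s_rc = np.fft.ifft(np.fft.fft(echo, axis=1) * h_range, axis=1)
    # Equation (14): azimuth FFT into the range-Doppler domain.
    s1 = np.fft.fft(s_rc, axis=0)
    # Equation (15): RCMC (interpolation along range) would be applied here.
    s2 = s1
    # Equations (16)-(17): azimuth matched filter H_az = exp(-j*pi*f_eta^2/k_a).
    s3 = s2 * h_azimuth[:, None]
    # Equation (18): azimuth inverse FFT yields the focused image.
    return np.fft.ifft(s3, axis=0)
```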
In summary, the above radio-frequency interference suppression process is shown in Table 1.

3. Methods

This section is primarily divided into three parts. The first part introduces the feature decomposition network, which covers CNN-based and transformer-based feature decomposition networks. The second part discusses the loss function, and the third part presents data regularization.

3.1. Feature Decomposition Network

In interference suppression, our goal is to restore the clean image from the corrupted one. Because the interference intensity is high, for traditional inpainting networks, it requires more neurons to represent and suppress the interference, and it degrades the algorithm’s performance. To address this issue, we propose a feature decomposition network whose core idea is to allow some neurons to only represent the interference signal or the radar signal. The neurons representing the radar signal serve as input of subsequent network, while the neurons representing interference are discarded; thus, it can separate the interference from the network. To validate the advantages of the proposed network, we select both a CNN and transformer as base models and embed the feature decomposition block into the above networks. Therefore, this work has developed two types of feature decomposition networks, one is the CNN-based feature decomposition network, as shown in Section 3.1.1, and the other is the transformer-based feature decomposition network, as shown in Section 3.1.2. In theory, the CNN-based feature decomposition network has higher computational efficiency, while the transformer-based feature decomposition network has better performance, and the above conclusion is verified in Section 4.1. In Section 4.2 and Section 4.3, we have chosen the transformer-based feature decomposition network as the proposed method. The proposed network is depicted in Figure 3, where Figure 3a shows the overall structure, and Figure 3b shows the feature decomposition block. In Figure 3, H is the image height, W is the image width, and C is the channel numbers. We conduct comparative experiments in the two networks: the CNN-based model is shown in Table 2, and the transformer-based model is illustrated in Figure 4. In Figure 3, in order to better balance performance and computational cost, we set the parameters as follows: H × W = 512 × 512 , C = 16 .
The proposed network is a U-shaped network consisting of an encoding block, a decoding block, and a feature decomposition block. In the encoding block, firstly, an input mapping layer, composed of CNNs with C channels, a kernel size of three, and a stride of one, is used to extract image features. Then, four encoding layers and four down-sampling layers are employed to further extract information. The down-sampling layers, composed of CNNs with a kernel size of four and a stride of two, double the number of channels and reduce the image size by half. In the feature decomposition block, the input signal is divided into two paths: one for the signal block and the other for the noise block. The signal path serves as input for the subsequent network, while the noise path is discarded. Meanwhile, a cosine similarity metric is used to separate the noise as much as possible. In the decoding block, four up-sampling layers and four decoding layers are utilized to gradually restore the clean image. The up-sampling layers, composed of deconvolutions with a kernel size of two and a stride of two, halve the number of channels and double the image size. Subsequently, an output projection layer, composed of CNNs with one channel and a stride of one, is employed to map the extracted features into a clean image. Finally, skip connections are added between the encoder and decoder to facilitate information flow. For better understanding, we further describe the network in Figure 3 with formulas. The formula for the input projection layer can be expressed as follows:
$X_0 = \mathrm{InputProjection}(Y)$,  (19)
The formula for the i-th encoder layer can be expressed as follows:
$E_i = \mathrm{Encoder}(\mathrm{DownSampling}(E_{i-1}))$,  (20)
The formula for the feature decomposition block can be expressed as follows:
$X_d = \mathrm{FeatureDecomposition}(E_4)$,  (21)
The formula for the i-th decoder layer can be expressed as follows:
$D_i = \mathrm{Decoder}(\mathrm{UpSampling}(D_{i+1}) + E_i)$,  (22)
The formula for the output projection layer can be expressed as follows:
$X = \mathrm{OutputProjection}(D_1) + Y$,  (23)
The above is the proposed feature decomposition network. The input of the network is the RFI-polluted image, and the output of the network is a clean image.
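To make the architecture tangible, the following PyTorch skeleton mirrors Figure 3 and Equations (19)–(23): an input projection, four encoder/down-sampling stages, a feature decomposition block whose noise path is discarded, four up-sampling/decoder stages with skip connections, and a residual output projection. It is a simplified sketch under stated assumptions: block internals are reduced to plain convolutions, whereas the actual blocks follow Table 2 or Figure 4.
```python
# A simplified PyTorch skeleton mirroring Figure 3 and Equations (19)-(23).
# Block internals are reduced to plain convolutions for brevity; the real
# encoder/decoder/signal/noise blocks follow Table 2 or Figure 4, so every
# name here is an illustrative assumption.
import torch
import torch.nn as nn

class ConvBlock(nn.Sequential):
    def __init__(self, ch):
        super().__init__(nn.Conv2d(ch, ch, 3, 1, 1), nn.LeakyReLU(0.2),
                         nn.Conv2d(ch, ch, 3, 1, 1), nn.LeakyReLU(0.2))

class FDNetSketch(nn.Module):
    def __init__(self, c=16, depth=4):
        super().__init__()
        self.inp = nn.Conv2d(1, c, 3, 1, 1)              # input projection
        self.enc = nn.ModuleList(ConvBlock(c * 2**i) for i in range(depth))
        self.down = nn.ModuleList(                       # kernel 4, stride 2
            nn.Conv2d(c * 2**i, c * 2**(i + 1), 4, 2, 1) for i in range(depth))
        cb = c * 2**depth
        self.signal, self.noise = ConvBlock(cb), ConvBlock(cb)  # FD block
        self.up = nn.ModuleList(                         # kernel 2, stride 2
            nn.ConvTranspose2d(c * 2**(i + 1), c * 2**i, 2, 2)
            for i in range(depth))
        self.dec = nn.ModuleList(ConvBlock(c * 2**i) for i in range(depth))
        self.out = nn.Conv2d(c, 1, 3, 1, 1)              # output projection

    def forward(self, y):
        e, skips = self.inp(y), []                       # Equation (19)
        for enc, down in zip(self.enc, self.down):       # Equation (20)
            e = enc(e); skips.append(e); e = down(e)
        si, no = self.signal(e), self.noise(e)           # Equation (21)
        d = si                                           # noise path discarded
        for up, dec, skip in zip(reversed(self.up), reversed(self.dec),
                                 reversed(skips)):       # Equation (22)
            d = dec(up(d) + skip)
        return self.out(d) + y, si, no                   # Equation (23)
```
On a 512 × 512 input with C = 16, the bottleneck of this sketch carries 256 channels at 32 × 32, consistent with the four-level structure described above.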

3.1.1. CNN-Based Feature Decomposition Network

In a CNN-based network, to better extract image features, we need to stack more convolutional layers. Because the encoder block, decoder block, signal block, and noise block in Figure 3 possess similar functionalities, to facilitate reuse, we adopt a unified structure for these blocks, with specific parameters detailed in Table 2.

3.1.2. Transformer-Based Feature Decomposition Network

In the transformer-based network, the encoder block, decoder block, signal block, and noise block in Figure 3 have similar functionalities. To facilitate reuse, we adopt a unified structure for these blocks, illustrated in Figure 4. To better extract global and local features, we divide the network into two parts: a global attention module that primarily captures global information, as shown in Figure 4a, and a convolutional module that specializes in extracting local information, as depicted in Figure 4b. In Figure 4a, the input and output data have the same dimension of $B \times nC \times \frac{H}{n} \times \frac{W}{n}$ ($B$ is the batch size, and $n$ is the ordinal number in the encoder or decoder block). To reduce computational cost, the network restricts the attention computation within a window, and the results from all windows are then concatenated. The window size is $M \times M$, and the number of windows is $\frac{BHW}{n^2 M^2}$. Each window primarily consists of a global feature extraction layer and a linear projection layer. In Figure 4b, the local information extraction module primarily comprises two linear mapping layers and a CNN layer with a kernel size of three and a stride of one. In Figure 4, in order to better balance performance and computational cost, we set the parameters as follows: $H \times W = 512 \times 512$, $M \times M = 8 \times 8$, $C = 16$, $n = 64$.
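The following PyTorch sketch isolates the window-partitioning idea of Figure 4a: the feature map is split into non-overlapping $M \times M$ windows, plain scaled dot-product attention is computed inside each window, and the windows are merged back. The linear projections and the convolutional branch of Figure 4b are omitted, so this is an illustrative fragment, not the full block.
```python
# An illustrative fragment of the window-restricted attention of Figure 4a:
# split into non-overlapping m x m windows, attend inside each window, merge
# back (linear projections and the convolutional branch are omitted).
import torch
import torch.nn.functional as F

def window_attention(x, m=8):
    """x: (B, C, H, W) with H and W divisible by the window size m."""
    B, C, H, W = x.shape
    # Partition into B * (H/m) * (W/m) windows of m*m tokens each.
    win = (x.reshape(B, C, H // m, m, W // m, m)
            .permute(0, 2, 4, 3, 5, 1)
            .reshape(-1, m * m, C))
    # Plain scaled dot-product attention inside each window.
    attn = F.softmax(win @ win.transpose(1, 2) / C**0.5, dim=-1)
    out = attn @ win
    # Merge the windows back to the original (B, C, H, W) layout.
    return (out.reshape(B, H // m, W // m, m, m, C)
               .permute(0, 5, 1, 3, 2, 4)
               .reshape(B, C, H, W))
```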

3.2. Loss Function

The loss function consists of three parts: the first is the reconstruction loss, the second is a regularization term, and the third is the feature decomposition loss, so the loss function can be expressed as follows:
$\mathrm{Loss} = l(\hat{X}, X) + \lambda_1 \| \hat{X} \|_F + \lambda_2 D_{loss}$,  (24)
where $\lambda_1$ and $\lambda_2$ are adjustable hyperparameters. The ultimate optimization goal of the network is to restore clean images from RFI-polluted images, with the expectation that the smaller the difference between $\hat{X}$ and $X$, the better. The first part is as follows:
$l(\hat{X}, X) = \sqrt{\| \hat{X} - X \|^2 + \varepsilon^2}$,  (25)
where $\hat{X}$ is the network output, $X$ is the label, and $\varepsilon$ is a hyperparameter. The feature decomposition loss function is equal to the absolute value of the cosine similarity, defined as follows:
$D_{loss} = \frac{1}{B} \sum_{n=1}^{B} \left| \frac{s_i \cdot n_o}{\| s_i \| \| n_o \|} \right|$,  (26)
where $B$ is the batch size, $s_i$ is the output of the signal block, and $n_o$ is the output of the noise block.
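A possible PyTorch realization of Equations (24)–(26) is sketched below. The Charbonnier-style reconstruction term, the Frobenius-norm prior, and the batch-averaged absolute cosine similarity follow the definitions above; the default weights are illustrative, and the prior weight defaults to zero in line with the ablation finding in Section 4.1.3.
```python
# A possible PyTorch form of Equations (24)-(26); weights are illustrative,
# and the prior weight defaults to zero per the ablation in Section 4.1.3.
import torch
import torch.nn.functional as F

def fd_loss(x_hat, x, si, no, lam1=0.0, lam2=1.0, eps=1e-3):
    # Equation (25): Charbonnier-style reconstruction loss.
    rec = torch.sqrt((x_hat - x).pow(2) + eps**2).mean()
    # Equation (24): Frobenius-norm prior on the network output.
    prior = torch.norm(x_hat, p='fro')
    # Equation (26): batch-averaged |cosine similarity| between the flattened
    # signal-block and noise-block features, pushing them toward orthogonality.
    b = si.shape[0]
    cos = F.cosine_similarity(si.reshape(b, -1), no.reshape(b, -1), dim=1)
    return rec + lam1 * prior + lam2 * cos.abs().mean()
```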

3.3. Data Normalization

In the interference suppression task, the interference power is unknown and variable. To enhance robustness to interference power, the training data must include various interference powers. In the constructed training dataset, the interference power distribution curve is shown in Figure 5: the blue curve represents the measured power values, while the red curve is the fitted value. As can be seen from Figure 5, the distribution of interference power approximately follows a normal distribution, and the range of interference power is sufficient to cover most scenarios. There are significant differences between SAR images and traditional optical images: SAR images contain a large number of strong scattering points, while the reflectivity of most uniform media is low, resulting in an extremely high dynamic range. This paper summarizes the common regularization methods in interference suppression tasks and proposes an improved method. The common regularization methods can be classified into three types: the first is adopted by PISNet [49], the second by FuSINet [50], and the third is the proposed method.

3.3.1. First Regularization

In PISNet, the input data and the output data are normalized separately, which is also the most common regularization approach for optical images. The normalization can be expressed as follows:
$Y_{norm} = \frac{Y}{\max(Y)} = Y/Y_{max}, \quad X_{norm} = \frac{X}{\max(X)} = X/X_{max}$,  (27)
The residual component that the network needs to learn can be expressed as follows:
$Res = Y_{norm} - X_{norm}$,  (28)
By combining Formulas (5) and (28), we can see the following:
$Res = Y_{norm} - X_{norm} = \left( \frac{1}{Y_{max}} - \frac{1}{X_{max}} \right) X + \frac{I + N}{Y_{max}}$,  (29)
From Formula (29), we can see that the residual component consists of two parts. The first part is a variable component, which causes significant difficulties in image restoration because it fluctuates randomly following the power distribution depicted in Figure 5. The second part is the interference component, which should be eliminated. Therefore, the first regularization has significant drawbacks in SAR interference suppression.

3.3.2. Second Regularization

In FuSINet, the interference component is first separated from the original data, and then the input data and the output data are normalized separately. In this case, data normalization can be expressed as follows:
$Y_{norm} = \frac{Y - (I + X_I)}{\max\left(Y - (I + X_I)\right)} \approx \frac{X - X_I + N}{X_{max}}, \quad X_{norm} = \frac{X}{\max(X)} = X/X_{max}$,  (30)
where $I + X_I$ is the RFI-polluted component, $I$ is the interference, and $X_I$ is the RFI-polluted target. Since the RFI-polluted component is separated out, the residual component that the network needs to learn can be expressed as follows:
$Res = Y_{norm} - X_{norm} = \frac{N - X_I}{X_{max}}$,  (31)

3.3.3. Third Regularization

The proposed normalization scales both the input data and the output data by the maximum value of the interfered data. In this case, data normalization can be expressed as follows:
$Y_{norm} = \frac{Y}{\max(Y)} = Y/Y_{max}, \quad X_{norm} = \frac{X}{\max(Y)} = X/Y_{max}$,  (32)
In this case, the residual component that the network needs to learn can be expressed as follows:
$Res = Y_{norm} - X_{norm} = \frac{I + N}{Y_{max}}$,  (33)
Firstly, in Formulas (29), (31), and (33), the residual component represents the part that the network needs to repair. The smaller the residual component, the less the network needs to learn, and therefore the better its performance. Secondly, in adversarial experiments, the power of the interference source is unknown and variable, so the network must generalize across interference powers; the training data must therefore include a large number of signals with different interference powers. Thirdly, as can be seen from Formula (29), in the traditional regularization method the residual component includes $Y_{max}$ and $X_{max}$, where $X_{max}$ represents the peak power of strongly scattering targets and varies across scenes, while $Y_{max}$ represents the peak power of the interference and varies with the interference parameters. Both exhibit significant randomness, which means the network requires more parameters to fit such random errors, degrading its performance. Moreover, the proposed method does not require segmenting the interference, so it is simpler, and it may provide insights for other radar tasks. The histogram comparison of the three regularization methods is shown in Figure 6, where the horizontal axis represents pixel values, the vertical axis represents the frequency, the blue curve represents the RFI-polluted histogram, and the red curve represents the clean histogram. Figure 6a–c correspond to the first, second, and third regularizations, respectively. As can be seen from Figure 6, for our method and the second method, the input and output data distributions are basically consistent, and most of the curves overlap completely, corresponding to smaller residual components; thus, our method significantly decreases the difficulty of data fitting.
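The following numpy sketch contrasts the residuals of the first and the proposed regularizations on a toy spectrogram (the second scheme is omitted because it additionally requires the segmented interference). The Rayleigh-distributed surrogate scene and the stripe-shaped narrowband RFI are assumptions chosen purely for illustration.
```python
# A toy numpy comparison of the residuals in Formulas (29) and (33); the
# Rayleigh scene and stripe-shaped narrowband RFI are illustrative assumptions
# (the second scheme is omitted since it needs the segmented interference).
import numpy as np

rng = np.random.default_rng(0)
X = rng.rayleigh(1.0, (64, 64))              # surrogate clean spectrogram
I = np.zeros_like(X); I[:, 30:34] = 50.0     # strong narrowband RFI stripe
N = 0.01 * rng.standard_normal(X.shape)
Y = X + I + N

# First regularization: separate maxima leave a variable scene-dependent term.
res1 = Y / Y.max() - X / X.max()             # Formula (29)
# Proposed regularization: a shared maximum leaves only (I + N) / Y.max().
res3 = Y / Y.max() - X / Y.max()             # Formula (33)
print(np.abs(res1).mean(), np.abs(res3).mean())  # res3 is smaller and simpler
```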

4. Experiments

This work conducted three experiments. The first is an ablation study, which mainly includes comparative experiments on different regularization methods, different feature decomposition networks, and different prior information terms. The second is the MiniSAR experiment, and the third is the Sentinel-1A experiment. In our experiments, the image size of the training data is 512 × 512. In Section 4.1 and Section 4.2, the training data come from MiniSAR, with a data volume of 3072 images, and in Section 4.3, the training data come from Sentinel-1, with a data volume of 2048 images. All training data are semi-physical simulation data. The optimizer is AdamW, the maximum number of epochs is 100, the initial learning rate is 0.002, and the weight decay is 0.02. Each group of testing data consists of 512 images, and all testing and training data come from different imaging scenarios.
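An illustrative training configuration matching the settings above is sketched below; the cosine learning-rate schedule and the `train_loader` yielding 512 × 512 spectrogram pairs are assumptions not stated in the paper, and `FDNetSketch` and `fd_loss` refer to the sketches in Section 3.
```python
# An illustrative training setup for the quoted settings (AdamW, 100 epochs,
# initial lr 0.002, weight decay 0.02); the cosine schedule and `train_loader`
# of 512 x 512 spectrogram pairs are assumptions, and FDNetSketch / fd_loss
# refer to the sketches in Section 3.
import torch

model = FDNetSketch()
opt = torch.optim.AdamW(model.parameters(), lr=2e-3, weight_decay=0.02)
sched = torch.optim.lr_scheduler.CosineAnnealingLR(opt, T_max=100)

for epoch in range(100):
    for y, x in train_loader:                # (RFI-polluted, clean) pairs
        x_hat, si, no = model(y)
        loss = fd_loss(x_hat, x, si, no)
        opt.zero_grad(); loss.backward(); opt.step()
    sched.step()
```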
A previous work [50] adopts PSNR, SSIM, and ME as evaluation metrics, and we select the same indicators to evaluate image quality. The selection of evaluation indicators varies across experiments for the following reasons. Firstly, MiniSAR data are semi-physical simulation data, while the Sentinel satellite data are measured RFI-polluted data. Secondly, PSNR and SSIM are relative evaluation metrics, used to measure the similarity between the filtered images and the labels, while ME is a self-evaluation metric. Thirdly, because MiniSAR data are semi-physical simulations, it is relatively easy to obtain both RFI-polluted data and RFI-free data. However, the Sentinel satellite data are measured RFI-polluted data: in a single measurement, we obtain either RFI-polluted data or RFI-free data, and it is difficult to obtain both simultaneously. Therefore, for the MiniSAR data, we use all three evaluation indicators, while for the Sentinel satellite data, we use only the third.
PSNR is usually used to evaluate the quality of images, and it is calculated as follows:
$\mathrm{PSNR}(X, \hat{X}) = 20 \log_{10} \frac{\mathrm{Max}(X)}{\sqrt{\mathrm{MSE}}}, \quad \mathrm{MSE}(X, \hat{X}) = \frac{1}{HW} \sum_{i=0}^{H-1} \sum_{j=0}^{W-1} \left( X_{i,j} - \hat{X}_{i,j} \right)^2$,  (34)
where $\hat{X}$ is the filtered image, $X$ is the label, MSE is the mean square error, and $H$ and $W$ represent the image height and width. PSNR is an evaluation index of the noise level; the larger the PSNR, the better the performance. The structural similarity (SSIM) can be expressed as follows:
$\mathrm{SSIM}(X, \hat{X}) = \frac{\left( 2\mu_X \mu_{\hat{X}} + c_1 \right)\left( 2\sigma_{X\hat{X}} + c_2 \right)}{\left( \mu_X^2 + \mu_{\hat{X}}^2 + c_1 \right)\left( \sigma_X^2 + \sigma_{\hat{X}}^2 + c_2 \right)}$,  (35)
where $\mu$ is the mean value, and $\sigma$ is the variance or covariance. Higher structural similarity indicates better performance. However, for the Sentinel-1 satellite, labels are lacking, so the aforementioned evaluation metrics are no longer applicable. To address this issue, this paper also adopts ME as an evaluation metric, defined as follows:
$\mathrm{ME} = \mathrm{Ent}(\hat{X}) \cdot \mathrm{Mean}(\hat{X})$,  (36)
where $\mathrm{Ent}(\hat{X})$ is the image entropy and $\mathrm{Mean}(\hat{X})$ is the mean value. A smaller entropy indicates that pixel values are concentrated in a smaller range, and a smaller mean value indicates less residual interference; therefore, a smaller ME indicates better performance.
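For reference, the three metrics can be sketched in numpy as follows; the single-window SSIM and the histogram-based entropy inside ME are simplifications of the windowed implementations typically used for published numbers.
```python
# Numpy sketches of Equations (34)-(36); the single-window SSIM and the
# histogram-based entropy in ME are simplifications of windowed versions.
import numpy as np

def psnr(x, x_hat):
    mse = np.mean((x - x_hat) ** 2)                      # Equation (34)
    return 20 * np.log10(x.max() / np.sqrt(mse))

def ssim_global(x, x_hat, c1=1e-4, c2=9e-4):
    mx, my = x.mean(), x_hat.mean()                      # Equation (35)
    cov = ((x - mx) * (x_hat - my)).mean()
    num = (2 * mx * my + c1) * (2 * cov + c2)
    den = (mx**2 + my**2 + c1) * (x.var() + x_hat.var() + c2)
    return num / den

def me(x_hat, bins=256):
    counts, _ = np.histogram(x_hat, bins=bins)           # Equation (36)
    p = counts[counts > 0] / counts.sum()
    ent = -(p * np.log2(p)).sum()                        # image entropy
    return ent * x_hat.mean()                            # smaller is better
```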

4.1. Ablation Study

To separately validate the various components of the proposed method, this section conducts three experiments. The first experiment verifies the impact of different regularization methods, the second verifies the feature decomposition network, and the third verifies the impact of prior information terms. To increase generality, the test data contain three types of interference: narrowband interference, sinusoidal modulated wideband interference, and chirp wideband interference. The bandwidth of narrowband interference is less than 30 MHz, the bandwidth of chirp wideband interference ranges from 30 MHz to 150 MHz, and the bandwidth of sinusoidal modulated wideband interference ranges from 30 MHz to 100 MHz.

4.1.1. Different Regularization Methods

This work conducts comparative experiments on the three regularization methods. To make the comparison more persuasive, we conduct the experiments on both UNet [53] and Uformer [54]: UNet is a U-shaped network composed of CNNs, while Uformer is a U-shaped network composed of transformers. The experimental results are shown in Figure 7, Figure 8 and Figure 9: Figure 7 shows the result under narrowband interference, Figure 8 under chirp wideband interference, and Figure 9 under sinusoidal modulated wideband interference. The test metrics are shown in Table 3. Comparing the first regularization method with ours, in UNet the improvement in PSNR is 4.75 dB and the improvement in SSIM is 3.06 percentage points; in Uformer, the improvement in PSNR is 3.72 dB and the improvement in SSIM is 2.38 percentage points. Compared with the second regularization method, our method still achieves optimal performance. These experimental results show that our method achieves significant performance improvements in both networks, consistent with the conclusions in Section 3.3. Additionally, compared to the second regularization method, our method does not require segmenting the interfered regions before image restoration, so it is much simpler.

4.1.2. Ablation Study on FDNet

This work integrates the feature decomposition block into CNN-based and transformer-based networks, and the experimental results are shown in Figure 10 and Table 4. Comparing CNN networks with transformer networks, transformers exhibit superior performance, with an improvement of 1.23 dB in PSNR and 0.7 percentage points in SSIM. Among CNN-based networks, compared to UNet, the proposed FD-UNet achieves an improvement of 0.21 dB in PSNR and 0.22 percentage points in SSIM. Compared to Uformer, FD-Uformer achieves an improvement of 0.25 dB in PSNR and 0.04 percentage points in SSIM. Therefore, we ultimately choose Uformer as the baseline network and embed the feature decomposition block into it, yielding the proposed FDNet. The network is illustrated in Figure 3, and its internal structure is depicted in Figure 4.

4.1.3. Ablation Study in Different Prior Information

From Equation (24), we can see that the loss function also incorporates a prior information term. In this work, we validated four prior information terms, namely L1, L2, TV-L1, and TV-L2 [47]. The L1 constraint tends to make the output sparser, while the L2 constraint tends to make the output smoother. The total variation (TV) term is designed to remove noise, and this work tests the combinations of TV with L1 and L2 separately. The experimental results are shown in Table 5. Because the time–frequency spectrogram does not exhibit significant sparse characteristics, these four prior information terms contribute little to the final network performance; therefore, the subsequent FDNet does not adopt these constraints.

4.2. MiniSAR Experiments

The FDNet is shown in Figure 3, and its internal structure is shown in Figure 4. This work selects DIFNet, PISNet, and FuSINet as comparative networks and conducts experiments on narrowband and wideband interference. DIFNet is a variant of the notch filter, which first segments the interference and then filters it out; because the network is less sensitive to noise power and other factors, this method performs better than CFAR or other notch filters. PISNet and FuSINet are image inpainting networks: PISNet adds low-rank constraints to the network, while FuSINet first segments the interference area and then repairs the segmented area.
The parameters of MiniSAR are shown in Table 6; the data were made public by Sandia National Laboratories. The radar operates in the X-band, its bandwidth is 1.5 GHz, and its polarization mode is HH. The resolution of the MiniSAR images is 0.1 m. The measured MiniSAR data do not contain radio-frequency interference, so to conduct experiments on MiniSAR data, we performed a semi-physical simulation, with the simulated interference parameters shown in Table 7. In the semi-physical simulation, we load the interference data into the true SAR echoes and then perform imaging processing on the RFI-polluted echoes to obtain the RFI-polluted SAR images. The simulated radio-frequency interference comprises narrowband interference, chirp modulation wideband interference, and sinusoidal modulation wideband interference. The bandwidth of narrowband interference is less than 30 MHz, the bandwidth of chirp modulation wideband interference is 30 MHz to 150 MHz, the bandwidth of sinusoidal modulation wideband interference is 30 MHz to 100 MHz, and the signal-to-interference ratio in all cases is −15 dB to 0 dB.

4.2.1. Narrowband Interference

The parameters of the narrowband interference are shown in Table 7. The bandwidth of the interference signal is less than 30 MHz, and the signal-to-interference ratio (SIR) ranges from −15 dB to 0 dB. The time–frequency spectrogram of narrowband interference is presented in Figure 11, and the SAR image is shown in Figure 12. From Figure 11 and Figure 12, it can be observed that FuSINet and FDNet achieve higher image restoration quality and preserve more details. The performance indicators are shown in Table 8. On the time–frequency spectrogram, compared to PISNet, FDNet achieves an improvement of 5.93 dB in PSNR and 1.77 percentage points in SSIM. Compared to FuSINet, FDNet improves PSNR by 1.45 dB and SSIM by 0.6 percentage points. On the SAR image, compared to PISNet, FDNet improves PSNR by 3.16 dB and SSIM by 2.22 percentage points and reduces ME by 0.07. Compared to FuSINet, FDNet improves PSNR by 0.92 dB and SSIM by 1.08 percentage points, and ME remains unchanged.

4.2.2. Wideband Interference

The image restoration results for wideband interference are shown in Figure 13 and Figure 14. From these figures, it can be observed that FuSINet and FDNet achieve higher quality and preserve more details. The performance indicators are presented in Table 9. On the time–frequency spectrogram, compared to PISNet, FDNet achieves an improvement of 6.18 dB in PSNR and 1.32 percentage points in SSIM. Compared to FuSINet, FDNet improves PSNR by 1.23 dB and SSIM by 1.1 percentage points. On the SAR image, compared to PISNet, FDNet improves PSNR by 4.37 dB and SSIM by 1.12 percentage points and reduces ME by 0.09. Compared to FuSINet, FDNet improves PSNR by 0.34 dB and SSIM by 0.58 percentage points and reduces ME by 0.01. From the experimental results under these two interference patterns, we can conclude that all four methods can eliminate interference, and the proposed method acquires the optimal results.

4.3. Sentinel-1A Experiments

The Sentinel-1 satellite is part of the Earth observation constellation within the Copernicus program of the European Space Agency, and its parameters are shown in Table 10. It works in the C-band, its resolution is 3 m in range and 14 m in azimuth, and it provides two polarization modes, VV and VH. During radar imaging, the satellites may encounter intentional and unintentional interference; a common example is shown in Figure 15, which was captured over Japan on 25 March 2021. Figure 15a is the RFI-polluted image, and Figure 15b is a selected candidate region for the validation experiments. Because the interference patterns and power are unknown, the test data are highly representative. In this work, PISNet, FuSINet, and FDNet are compared, and ME is chosen as the evaluation metric; the satellite data lack labels, so PSNR is not applicable.
The restored time–frequency spectrogram is shown in Figure 16. Figure 16a is the interfered time–frequency spectrogram, Figure 16b is the spectrogram restored by PISNet, Figure 16c is the spectrogram restored by FuSINet, and Figure 16d is the spectrogram restored by FDNet. From Figure 16a, we can see that the current interference is a kind of narrowband interference. It is evident that Figure 16b,c still contain some residual interference, while Figure 16d contains almost no residual interference. The filtered SAR images are presented in Figure 17. Figure 17a is the RFI-polluted image, Figure 17b is the SAR image restored by PISNet, Figure 17c is the SAR image restored by FuSINet, and Figure 17d is the SAR image restored by FDNet. From Figure 17, it can be seen that the current interference power is high, and the RFI-polluted targets are completely obscured by the interference. Nevertheless, all three methods eliminate most of the interference: Figure 17b,c still contain some residual components, Figure 17b shows blurry textures, and Figure 17d exhibits fewer residual components and clearer textures. The experimental metrics are shown in Table 11: compared to PISNet, the ME decreases by 0.08, and compared to FuSINet, the ME decreases by 0.03, showing that our method achieves the best result.

5. Conclusions

Synthetic-aperture radar, as a type of all-weather, all-time, high-resolution imaging radar, has been widely deployed. However, during imaging, SAR is susceptible to various intentional or unintentional interferences. Among them, radio-frequency interference is a common form: the interference signal is a direct wave, so its power is much higher than that of the wave reflected by the target, and it covers a large area, greatly reducing SAR image quality. To suppress interference, this paper proposes a feature decomposition network based on regularization optimization. In the proposed network, part of the neurons is dedicated to representing interference, while the other part represents clean signals. The neurons representing clean signals serve as input for the subsequent network, while those representing interference are discarded. Secondly, a novel data regularization is introduced, which significantly improves performance, as validated by data distribution histograms and experiments. On both the MiniSAR and Sentinel-1A datasets, the proposed method achieves excellent performance on narrowband and wideband interference. Moreover, the regularization method can be extended to other SAR tasks, and the proposed network can be transferred to other tasks involving disturbance components. Lastly, we find that the current algorithm does not perform well in cross-sensor experiments, so we believe that transfer learning and self-supervised learning deserve further investigation. In addition, RFI models include other types, such as AM, FM, and QPSK; interference suppression methods for these signals also deserve further research.

Author Contributions

Conceptualization, H.L. and F.F.; methodology, F.F.; software, F.F.; validation, S.X., W.M. and D.D.; writing—original draft preparation, F.F.; writing—review and editing, H.L., W.M. and S.X.; visualization, D.D.; project administration, D.D. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The Sentinel satellite data are openly available from the European Sentinel program via the Alaska Satellite Facility (https://search.asf.alaska.edu/#/, accessed 21 March 2021). Code: https://github.com/fangfuping/FDNet (accessed 21 March 2021).

Acknowledgments

Thank you for the open-source data from the European Sentinel satellite.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Moreira, A.; Prats-Iraola, P.; Younis, M.; Krieger, G.; Hajnsek, I.; Papathanassiou, K.P. A tutorial on synthetic aperture radar. IEEE Geosci. Remote Sens. Mag. 2013, 1, 6–43. [Google Scholar] [CrossRef]
  2. Leng, X.; Ji, K.; Zhou, S.; Xing, X. Fast shape parameter estimation of the complex generalized Gaussian distribution in SAR images. IEEE Geosci. Remote Sens. Lett. 2020, 17, 1933–1937. [Google Scholar] [CrossRef]
  3. Mondini, A.C.; Guzzetti, F.; Chang, K.-T.; Monserrat, O.; Martha, T.R.; Manconi, A. Landslide failures detection and mapping using Synthetic Aperture Radar: Past, present and future. Earth-Sci. Rev. 2021, 216, 103574. [Google Scholar] [CrossRef]
  4. Oveis, A.H.; Giusti, E.; Ghio, S.; Martorella, M. A survey on the applications of convolutional neural networks for synthetic aperture radar: Recent advances. IEEE Aerosp. Electron. Syst. Mag. 2021, 37, 18–42. [Google Scholar] [CrossRef]
  5. Zhou, Y.; Liu, H.; Ma, F.; Pan, Z.; Zhang, F. A sidelobe-aware small ship detection network for synthetic aperture radar imagery. IEEE Trans. Geosci. Remote Sens. 2023, 61, 5205516. [Google Scholar] [CrossRef]
  6. Zhang, X.; Huo, C.; Xu, N.; Jiang, H.; Cao, Y.; Ni, L.; Pan, C. Multitask learning for ship detection from synthetic aperture radar images. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 14, 8048–8062. [Google Scholar] [CrossRef]
  7. Mousa, A.; Badran, Y.; Salama, G.; Mahmoud, T. Regression layer-based convolution neural network for synthetic aperture radar images: De-noising and super-resolution. Vis. Comput. 2023, 39, 1295–1306. [Google Scholar] [CrossRef]
  8. Zhang, Q.; Zhang, Y.; Huang, Y.; Zhang, Y.; Pei, J.; Yi, Q.; Li, W.; Yang, J. TV-sparse super-resolution method for radar forward-looking imaging. IEEE Trans. Geosci. Remote Sens. 2020, 58, 6534–6549. [Google Scholar] [CrossRef]
  9. Li, N.; Lv, Z.; Guo, Z. Pulse RFI mitigation in synthetic aperture radar data via a three-step approach: Location, notch, and recovery. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5225617. [Google Scholar] [CrossRef]
  10. Li, N.; Lv, Z.; Guo, Z. Observation and mitigation of mutual RFI between SAR satellites: A case study between Chinese GaoFen-3 and European Sentinel-1A. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5112819. [Google Scholar] [CrossRef]
  11. Tao, M.; Su, J.; Huang, Y.; Wang, L. Mitigation of Radio Frequency Interference in Synthetic Aperture Radar Data: Current Status and Future Trends. Remote Sens. 2019, 11, 2438. [Google Scholar] [CrossRef]
  12. Cazzaniga, G.; Guarnieri, A.M. Removing RF interferences from P-band airplane SAR data. In Proceedings of the IGARSS’96. 1996 International Geoscience and Remote Sensing Symposium, Lincoln, NE, USA, 31 May 1996; pp. 1845–1847. [Google Scholar]
  13. Reigber, A.; Ferro-Famil, L. Interference suppression in synthesized SAR images. IEEE Geosci. Remote Sens. Lett. 2005, 2, 45–49. [Google Scholar] [CrossRef]
  14. Lord, R.T.; Inggs, M.R. Efficient RFI suppression in SAR using a LMS adaptive filter with sidelobe suppression integrated with the range-doppler algorithm. In Proceedings of the IEEE 1999 International Geoscience and Remote Sensing Symposium. IGARSS’99 (Cat. No. 99CH36293), Hamburg, Germany, 28 June–2 July 1999; pp. 574–576. [Google Scholar]
  15. Zhou, F.; Wu, R.; Xing, M.; Bao, Z. Eigensubspace-Based Filtering With Application in Narrow-Band Interference Suppression for SAR. IEEE Geosci. Remote Sens. Lett. 2007, 4, 75–79. [Google Scholar] [CrossRef]
  16. Yang, H.; Li, K.; Li, J.; Du, Y.; Yang, J. BSF: Block subspace filter for removing narrowband and wideband radio interference artifacts in single-look complex SAR images. IEEE Trans. Geosci. Remote Sens. 2021, 60, 5211916. [Google Scholar] [CrossRef]
  17. Zhou, F.; Tao, M.; Bai, X.; Liu, J. Narrow-band interference suppression for SAR based on independent component analysis. IEEE Trans. Geosci. Remote Sens. 2013, 51, 4952–4960. [Google Scholar] [CrossRef]
  18. Huang, Y.; Liao, G.; Zhang, Z.; Xiang, Y.; Li, J.; Nehorai, A. Fast narrowband RFI suppression algorithms for SAR systems via matrix-factorization techniques. IEEE Trans. Geosci. Remote Sens. 2018, 57, 250–262. [Google Scholar] [CrossRef]
  19. Huang, X.; Liang, D. Parametric methods of RFI suppression in UWB-SAR. Syst. Eng. Electron. 2000, 22, 94–97. [Google Scholar]
  20. Donoho, D.L. Compressed sensing. IEEE Trans. Inf. Theory 2006, 52, 1289–1306. [Google Scholar] [CrossRef]
  21. Candès, E.J.; Wakin, M.B. An introduction to compressive sampling. IEEE Signal Process. Mag. 2008, 25, 21–30. [Google Scholar] [CrossRef]
  22. Liu, H.; Li, D.; Zhou, Y.; Truong, T.-K. Joint wideband interference suppression and SAR signal recovery based on sparse representations. IEEE Geosci. Remote Sens. Lett. 2017, 14, 1542–1546. [Google Scholar] [CrossRef]
  23. Liu, H.; Li, D.; Zhou, Y.; Truong, T.-K. Simultaneous radio frequency and wideband interference suppression in SAR signals via sparsity exploitation in time–frequency domain. IEEE Trans. Geosci. Remote Sens. 2018, 56, 5780–5793. [Google Scholar] [CrossRef]
  24. Yang, H.; Tao, M.; Chen, S.; Xi, F.; Liu, Z. On the mutual interference between spaceborne SARs: Modeling, characterization, and mitigation. IEEE Trans. Geosci. Remote Sens. 2020, 59, 8470–8485. [Google Scholar] [CrossRef]
  25. Su, J.; Tao, H.; Tao, M.; Wang, L.; Xie, J. Narrow-band interference suppression via RPCA-based signal separation in time–frequency domain. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2017, 10, 5016–5025. [Google Scholar] [CrossRef]
  26. Tao, M.; Li, J.; Su, J.; Fan, Y.; Wang, L.; Zhang, Z. Interference mitigation for synthetic aperture radar data using tensor representation and low-rank approximation. In Proceedings of the 2020 XXXIIIrd General Assembly and Scientific Symposium of the International Union of Radio Science, Rome, Italy, 29 August–5 September 2020; pp. 1–4. [Google Scholar]
  27. Joy, S.; Nguyen, L.H.; Tran, T.D. Radio frequency interference suppression in ultra-wideband synthetic aperture radar using range-azimuth sparse and low-rank model. In Proceedings of the 2016 IEEE Radar Conference (RadarConf), Philadelphia, PA, USA, 2–6 May 2016; pp. 1–4. [Google Scholar]
  28. Huang, Y.; Liao, G.; Li, J.; Xu, J. Narrowband RFI suppression for SAR system via fast implementation of joint sparsity and low-rank property. IEEE Trans. Geosci. Remote Sens. 2018, 56, 2748–2761. [Google Scholar] [CrossRef]
  29. Nguyen, L.H.; Dao, M.D.; Tran, T.D. Joint sparse and low-rank model for radio-frequency interference suppression in ultra-wideband radar applications. In Proceedings of the 2014 48th Asilomar Conference on Signals, Systems and Computers, Pacific Grove, CA, USA, 2–5 November 2014; pp. 864–868. [Google Scholar]
  30. Lyu, Q.; Han, B.; Li, G.; Sun, W.; Pan, Z.; Hong, W.; Hu, Y. SAR interference suppression algorithm based on low-rank and sparse matrix decomposition in time–frequency domain. IEEE Geosci. Remote Sens. Lett. 2021, 19, 4008305. [Google Scholar] [CrossRef]
  31. Huang, Y.; Zhang, L.; Yang, X.; Chen, Z.; Liu, J.; Li, J.; Hong, W. An efficient graph-based algorithm for time-varying narrowband interference suppression on SAR system. IEEE Trans. Geosci. Remote Sens. 2021, 59, 8418–8432. [Google Scholar] [CrossRef]
Figure 1. A common RFI-polluted image from the Sentinel-1A satellite.
Figure 2. SAR imaging model.
Figure 3. Feature decomposition network: (a) the overall network; (b) the feature decomposition block.
Figure 4. Transformer-based network. (a) Global information extraction module. (b) Local information extraction module.
Figure 5. Histogram of the interference signal power.
Figure 6. Histograms of different regularization methods. (a) The first regularization. (b) The second regularization. (c) The third regularization.
Figure 7. Regularization experiment under narrowband interference. (a) RFI-polluted image. (b) Label. (c) The restored result by UNet-1st. (d) The restored result by Uformer-1st. (e) The restored result by FuSINet-2nd. (f) The restored result by UNet-3rd. (g) The restored result by Uformer-3rd.
Figure 8. Regularization experiment under chirp wideband interference. (a) RFI-polluted image. (b) Label. (c) The restored result by UNet-1st. (d) The restored result by Uformer-1st. (e) The restored result by FuSINet-2nd. (f) The restored result by UNet-3rd. (g) The restored result by Uformer-3rd.
Figure 9. Regularization experiment under sinusoidal modulation wideband interference. (a) RFI-polluted image. (b) Label. (c) The restored result by UNet-1st. (d) The restored result by Uformer-1st. (e) The restored result by FuSINet-2nd. (f) The restored result by UNet-3rd. (g) The restored result by Uformer-3rd.
Figure 10. FDNet experiment. (a) RFI-polluted image. (b) Label. (c) The restored result by UNet. (d) The restored result by FD-UNet. (e) The restored result by Uformer.
Figure 11. Time–frequency spectrograms under narrowband interference. (a) RFI-polluted image. (b) Label. (c) The restored result by the notch filter. (d) The restored result by PISNet. (e) The restored result by FuSINet. (f) The restored result by FDNet.
Figure 12. SAR images under narrowband interference. (a) RFI-polluted image. (b) Label. (c) The restored result by DIFNet. (d) The restored result by PISNet. (e) The restored result by FuSINet. (f) The restored result by FDNet.
Figure 13. Time–frequency spectrograms under wideband interference. (a) RFI-polluted image. (b) Label. (c) The restored result by the notch filter. (d) The restored result by PISNet. (e) The restored result by FuSINet. (f) The restored result by FDNet.
Figure 14. SAR images under wideband interference. (a) RFI-polluted image. (b) Label. (c) The restored result by DIFNet. (d) The restored result by PISNet. (e) The restored result by FuSINet. (f) The restored result by FDNet.
Figure 15. SAR image from the Sentinel-1A satellite: (a) global image; (b) local image.
Figure 16. Time–frequency spectrograms from the Sentinel-1A satellite. (a) RFI-polluted image. (b) The restored result by PISNet. (c) The restored result by FuSINet. (d) The restored result by FDNet.
Figure 17. Sentinel-1A satellite interference suppression results. (a) RFI-polluted image. (b) The restored result by PISNet. (c) The restored result by FuSINet. (d) The restored result by FDNet.
Table 1. SAR interference suppression.
Interference Suppression | Step
Signal transform | Short-time Fourier transform
Interference suppression | Image inpainting
Signal inverse transform | Inverse short-time Fourier transform
SAR imaging | Imaging by the RD algorithm
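For readers who want to reproduce the processing chain in Table 1, the sketch below outlines it in Python. It is a minimal illustration, not the authors' implementation: `inpaint_network` stands in for any trained restoration model, the STFT parameters are arbitrary choices, and the final range-Doppler (RD) focusing step is omitted because it depends on the radar geometry.

```python
import numpy as np
from scipy.signal import stft, istft

def suppress_rfi(raw_echo: np.ndarray, inpaint_network, fs: float) -> np.ndarray:
    """Sketch of the Table 1 pipeline for one complex range line."""
    # 1. Signal transform: short-time Fourier transform
    _, _, spec = stft(raw_echo, fs=fs, nperseg=256, return_onesided=False)

    # 2. Interference suppression: treat the time-frequency spectrogram
    #    as an image and let the trained network inpaint the RFI regions
    clean_spec = inpaint_network(spec)  # hypothetical restoration model

    # 3. Signal inverse transform: back to the raw-echo domain
    _, clean_echo = istft(clean_spec, fs=fs, nperseg=256, input_onesided=False)
    return clean_echo

# 4. SAR imaging: the restored echoes are then focused with the
#    range-Doppler (RD) algorithm (omitted here; geometry-dependent).
```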
Table 2. Composition units in the CNN-based network.
Layer | Name | Kernel | Stride
1 | CNN | 3 | 1
2 | LeakyReLU | / | /
3 | CNN | 3 | 1
4 | LeakyReLU | / | /
5 | CNN | 1 | 1
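As a concrete reading of Table 2, the block below implements the listed layers in PyTorch. The "same" padding and the channel width `ch` are assumptions, since the table specifies only kernel sizes and strides.

```python
import torch.nn as nn

def cnn_block(ch: int) -> nn.Sequential:
    """CNN composition unit of Table 2: two 3x3 convolutions with
    LeakyReLU activations, followed by a 1x1 convolution (stride 1)."""
    return nn.Sequential(
        nn.Conv2d(ch, ch, kernel_size=3, stride=1, padding=1),  # layer 1
        nn.LeakyReLU(),                                         # layer 2
        nn.Conv2d(ch, ch, kernel_size=3, stride=1, padding=1),  # layer 3
        nn.LeakyReLU(),                                         # layer 4
        nn.Conv2d(ch, ch, kernel_size=1, stride=1),             # layer 5
    )
```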
Table 3. Experimental results under different regularization methods.
Regularization | Network | PSNR/dB | SSIM
First | UNet-1st | 26.95 | 93.18%
First | Uformer-1st | 29.21 | 94.56%
Second | FuSINet-2nd | 32.88 | 96.43%
Third (ours) | UNet-3rd | 31.70 | 96.24%
Third (ours) | Uformer-3rd | 32.93 | 96.94%
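The PSNR and SSIM values reported here and in the later tables follow their standard definitions. A minimal way to compute them, assuming images scaled to [0, 1] and scikit-image for SSIM, is:

```python
import numpy as np
from skimage.metrics import structural_similarity

def psnr(ref: np.ndarray, est: np.ndarray, peak: float = 1.0) -> float:
    """Peak signal-to-noise ratio in dB: 10 * log10(peak^2 / MSE)."""
    mse = np.mean((ref - est) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

def ssim(ref: np.ndarray, est: np.ndarray) -> float:
    # data_range must match the image scale (here [0, 1])
    return structural_similarity(ref, est, data_range=1.0)
```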
Table 4. Comparative experimental results of feature decomposition networks.
Base Model | Network | PSNR/dB | SSIM
CNN-based | UNet | 31.70 | 96.24%
CNN-based | FD-UNet | 31.91 | 96.66%
Transformer-based | Uformer | 32.93 | 96.94%
Transformer-based | FD-Uformer | 33.18 | 96.98%
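The feature decomposition blocks behind FD-UNet and FD-Uformer split the features into a part representing the radar signal (kept) and a part representing the interference (discarded), with a cosine-similarity constraint pushing the two apart. The fragment below is only a schematic reading of that idea, assuming an even channel split; the actual block design is the one shown in Figure 3b.

```python
import torch
import torch.nn.functional as F

def decompose(feat: torch.Tensor):
    """Split features into a signal part (kept) and an interference
    part (discarded); the even channel split is an assumption."""
    signal, interference = feat.chunk(2, dim=1)
    # Penalizing the cosine similarity of the two halves encourages
    # them to encode disjoint information.
    loss = F.cosine_similarity(
        signal.flatten(1), interference.flatten(1), dim=1
    ).abs().mean()
    return signal, loss
```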
Table 5. Comparative experimental results of prior terms.
Prior | PSNR/dB
L1 | 32.86
L2 | 32.97
TV-L1 | 32.88
TV-L2 | 32.95
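The TV-L1 and TV-L2 priors in Table 5 penalize the spatial gradients of the restored output with an L1 or L2 norm, respectively. A minimal anisotropic implementation, assuming a 2-D tensor, looks like this:

```python
import torch

def tv_loss(x: torch.Tensor, norm: str = "l1") -> torch.Tensor:
    """Anisotropic total variation of an (H, W) tensor."""
    dh = x[1:, :] - x[:-1, :]   # vertical differences
    dw = x[:, 1:] - x[:, :-1]   # horizontal differences
    if norm == "l1":            # TV-L1
        return dh.abs().mean() + dw.abs().mean()
    return (dh ** 2).mean() + (dw ** 2).mean()  # TV-L2
```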
Table 6. The parameters of MiniSAR.
Parameter | Value
Source | Sandia National Laboratories
Band | X-band
Bandwidth | 1.5 GHz
Polarization | HH
Range × azimuth resolution | 0.1 m × 0.1 m
Table 7. Interference parameters.
Parameter | Narrowband | Chirp Broadband | Sinusoidal Broadband
Interference bandwidth | <30 MHz | 30~150 MHz | 30~100 MHz
SIR | −15~0 dB | −15~0 dB | −15~0 dB
Interference sources | 2 | 3~5 | 1
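The interference types in Table 7 can be synthesized directly for training-data generation. The sketch below is a simplified generator, not the paper's exact one: it produces a single narrowband tone or a single linear chirp and scales it to a target signal-to-interference ratio (SIR); the tone frequency and chirp sweep are illustrative values within the table's ranges.

```python
import numpy as np

def add_rfi(echo: np.ndarray, fs: float, sir_db: float,
            kind: str = "chirp") -> np.ndarray:
    """Add simulated RFI to a complex radar echo at a given SIR (dB)."""
    n = echo.size
    t = np.arange(n) / fs
    if kind == "narrowband":
        rfi = np.exp(2j * np.pi * 20e6 * t)      # single tone, <30 MHz
    else:
        k = 100e6 / t[-1]                        # chirp rate (Hz/s)
        rfi = np.exp(1j * np.pi * k * t ** 2)    # ~100 MHz linear sweep
    # Scale interference power so that 10*log10(P_signal/P_rfi) = sir_db
    p_sig = np.mean(np.abs(echo) ** 2)
    p_rfi = np.mean(np.abs(rfi) ** 2)
    rfi *= np.sqrt(p_sig / (p_rfi * 10 ** (sir_db / 10)))
    return echo + rfi
```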
Table 8. Image restoration metrics under narrowband interference.
Method | TF-spectrogram PSNR/dB | TF-spectrogram SSIM | SAR-image PSNR/dB | SAR-image SSIM | ME
DIFNet | / | / | 24.55 | 80.24% | 2.99
PISNet | 31.28 | 96.59% | 28.72 | 92.55% | 2.96
FuSINet | 35.76 | 97.76% | 30.96 | 93.69% | 2.89
FDNet | 37.21 | 98.36% | 31.88 | 94.77% | 2.89
Table 9. Image restoration metrics under wideband interference.
Method | TF-spectrogram PSNR/dB | TF-spectrogram SSIM | SAR-image PSNR/dB | SAR-image SSIM | ME
DIFNet | / | / | 23.69 | 78.76% | 2.98
PISNet | 29.68 | 95.93% | 26.94 | 92.88% | 2.97
FuSINet | 34.63 | 96.15% | 30.97 | 93.42% | 2.89
FDNet | 35.86 | 97.25% | 31.31 | 94.00% | 2.88
Table 10. The parameters of Sentinel-1A.
Parameter | Value
Source | Sentinel-1A satellite
Band | C-band
Bandwidth | 150 MHz
Polarization | VV/VH
Range × azimuth resolution | 3 m × 14 m
Table 11. Image restoration metrics in Sentinel-1A.
Metric | PISNet | FuSINet | FDNet
ME | 2.79 | 2.74 | 2.71