Article

Knowledge-Aided Multichannel SAR Clutter Suppression Algorithm in Complex Scenes

1 School of Electronics and Information Engineering, Harbin Institute of Technology, Harbin 150001, China
2 Nanjing Research Institute of Electronic Technology, Nanjing 210000, China
* Author to whom correspondence should be addressed.
Remote Sens. 2026, 18(6), 879; https://doi.org/10.3390/rs18060879
Submission received: 23 January 2026 / Revised: 4 March 2026 / Accepted: 11 March 2026 / Published: 12 March 2026

Highlights

What are the main findings?
  • Superpixel segmentation was applied to the imaging result of a single channel, combined with adaptive superpixel fusion, effectively achieving refined classification of different regions within complex scenes.
  • A two-step knowledge-aided clutter suppression algorithm combines multi-strategy clutter suppression preprocessing with residual clutter suppression to achieve excellent clutter suppression results while preserving the integrity of weak target echoes.
What are the implications of the main findings?
  • The proposed knowledge information extraction algorithm effectively addresses the inherent timeliness and compatibility issues of traditional knowledge information, advancing the development of knowledge-aided algorithms in SAR systems.
  • Knowledge-aided processing schemes provide an engineering solution for multichannel SAR clutter suppression in complex scenes, offering important insights for subsequent research.

Abstract

Multichannel synthetic aperture radar (SAR) achieves high-resolution imaging while significantly enhancing the spatial degrees of freedom of the SAR system. As SAR hardware performance continues to improve, observed scenes have increasingly shifted from simple to complex. The complex clutter components introduced by such scenes make clutter suppression increasingly challenging. Traditional multichannel clutter suppression algorithms usually assume that the observed scene, as a whole, satisfies the independent and identically distributed (IID) condition. In actual complex scenes, however, this assumption is often difficult to uphold, so achieving effective clutter suppression in complex scenes remains a challenge for SAR. In this paper, we propose a knowledge-aided (KA) multichannel SAR clutter suppression algorithm for complex scenes. First, the single-channel image is processed at the superpixel level, and a superpixel fusion algorithm is proposed that adaptively realizes the refined classification of the complex scene. Then, a two-step clutter suppression method is proposed that combines multi-strategy clutter suppression preprocessing with sparse Bayesian residual clutter suppression. This approach not only provides effective classification information for complex scenes but also exploits that information to achieve more efficient clutter suppression. Finally, the clutter suppression performance of the algorithm in complex scenes was validated through measured data.

1. Introduction

Synthetic aperture radar (SAR) [1] systems are based on an active coherent microwave imaging mechanism, which enables all-day, all-weather imaging [2,3,4]. SAR observations [5,6,7] have become an important source of information for global environmental monitoring, and effective clutter suppression methods are key to obtaining accurate target information. Depending on the number of channels, SAR systems can be categorized into single-channel and multichannel systems [8,9]. Clutter suppression in single-channel systems mainly exploits differences between the target and the clutter in the spectrum [10], reflected in the Doppler spectrum and the Doppler modulation frequency. Multichannel systems increase the spatial degrees of freedom by deploying multiple receiving antennas along the trajectory. Their main clutter suppression methods are the displaced phase center antenna (DPCA) [11] and space–time adaptive processing (STAP) [12,13,14]. Compared with STAP, the DPCA method does not exploit all the spatial degrees of freedom and is sensitive to noise, so subsequent research has mainly focused on STAP. The STAP method achieves clutter suppression by performing 2D adaptive filtering over all pulses and all received channels within the coherent processing interval (CPI). When the STAP method is used for clutter suppression, the processing performance depends on the accuracy of the clutter-plus-noise covariance matrix (CCM) estimation. For a homogeneous region with consistent statistical features, the CCM estimated from training samples at arbitrary locations can achieve effective clutter suppression over the scene, so the main difficulty of clutter suppression lies in the treatment of nonhomogeneous environments [15,16,17].
STAP methods for nonhomogeneous environments can be generally categorized into three groups: (1) STAP methods based on sample selecting, (2) STAP methods combined with sparse recovery (SR-STAP), and (3) STAP methods combined with KA (KA-STAP).
Algorithms based on nonhomogeneous detectors [18], such as the generalized inner product (GIP) algorithm [19] and the power-selected training (PST) algorithm [20], have been proposed to select training samples for covariance matrix estimation in nonhomogeneous environments. The GIP algorithm removes nonhomogeneous portions from the training samples and is suitable for nonhomogeneity caused by small terrain changes. The author in [21] applied the GIP method to the training samples to minimize the negative impact of contaminated samples. The PST algorithm extracts stronger clutter samples from the training samples and is suitable for nonhomogeneous regions contaminated by strong scattered clutter.
For SR-STAP, with the rapid development of compressive sensing (CS) [22,23], the sparsity of signals has gradually attracted the attention of many scholars [24]. As CS theory matured, sparse recovery techniques were introduced into the radar field [25,26], and the SR-STAP method was developed accordingly. Sparse recovery can be used to estimate the CCM, to estimate the space–time spectrum, and to reconstruct targets, among other tasks. The author in [27] proposed a data-dependent reduced-dimension STAP approach based on sparse recovery, which yields relatively accurate CCM estimates. The authors in [28] proposed a tensor-based SR-STAP scheme for large-scale dictionary applications; it reduces the complexity of large-scale dictionary operations while ensuring excellent clutter suppression performance. The author in [29] proposed STAP-GPSR to achieve clutter suppression as well as reconstruction of moving targets.
For KA-STAP, commonly used knowledge information includes digital terrain elevation models (DTEMs) [30], geographic information systems (GIS) [31], land-use and coverage data (LUCD) [32,33], radar system parameters, clutter models, and clutter spectral structure. Using the radar system parameters as knowledge information, the author in [34] proposed a linear combination of the a priori clutter covariance matrix with the sample covariance matrix, which leads to better clutter suppression performance. The author in [35] proposed a KA-STAP method based on QR decomposition; it utilizes digital topographic maps, land cover data, and the locations of man-made features as knowledge information, achieving superior clutter suppression performance compared to traditional STAP. The author in [36] proposed a 3D STAP method using a digital terrain database as knowledge information to achieve effective nonhomogeneous clutter suppression. The application of knowledge information can be either indirect or direct: indirect application generally guides the selection of training samples, while direct application directly estimates the CCM of clutter samples.
The author in [36] selected the training samples along the equal-Doppler line based on prior knowledge to suppress short-range nonstationary clutter. The author in [37] proposed a novel colored-loading factor optimization method based on prewhitening (PW) performance evaluation in the direct data domain. The proposed method improves the accuracy of the a priori information and obtains the color-loading factors for different range cells, effectively solving the problem that the a priori covariance matrix may differ across range cells. The author in [38] found a new intrinsic cyclic characteristic of the CCM and improved clutter suppression performance through cyclic CCM construction and iterative updating. The author in [39] proposed a knowledge-aided CCM estimation algorithm based on the symmetric alternating direction method of multipliers (KA-SADMM) to improve the processing performance of clutter suppression.
However, real complex scenes often contain multiple homogeneous and nonhomogeneous regions at the same time, and the statistical properties of different homogeneous regions differ significantly. As a result, processing designed for a specific condition is not universally applicable in complex scenes. A schematic diagram of the complex scene composition is shown in Figure 1.
For STAP methods based on training sample selection, how to apply the most appropriate processing method for a specific location is a problem to be solved. For the SR-STAP method, the mining of sparsity in system echoes is a problem to be explored. In addition, the specific strategy for combining sparse recovery with clutter suppression algorithms is also a direction that needs to be thoroughly investigated.
The source of knowledge information is the first problem that the KA-STAP method must face, and how to obtain that information is likewise an urgent problem. After the knowledge information is obtained, how to design its integration mechanism with the clutter suppression method becomes the core problem. Moreover, knowledge information also suffers from insufficient timeliness and poor compatibility. In practical applications, the SAR echo data are usually acquired first and the corresponding knowledge information is then matched to the data, so knowledge acquisition lags behind scene changes. When the scene changes in real time, such lagging information can hardly meet real-time demands. In addition, there is an inherent resolution difference between the knowledge information and the SAR echoes because they are acquired by different means. This mismatch between the knowledge information and the SAR echoes reduces the processing performance. An analysis of the limitations of current algorithms reveals several challenges. Modern multichannel high-resolution airborne SAR systems can acquire rich spatial information, with each channel independently capable of producing high-resolution images. Building on this capability, the single-channel image is used as the knowledge information to enable refined classification of complex scenes. This approach facilitates the differentiation between homogeneous and nonhomogeneous regions within the scene.
The single-channel image satisfies both timeliness and compatibility, and the refined classification scenes facilitate the implementation of different treatments for different situations. Based on the refined classification results, a two-step processing method for clutter suppression with multi-strategy clutter suppression preprocessing and sparse recovery residual clutter suppression is also proposed.
The multi-strategy clutter suppression preprocessing is guided by the refined classification results: conventional processing is applied to homogeneous regions, and location-specific optimized processing is applied to nonhomogeneous regions, solving the problems faced by STAP methods based on training sample selection.
In the sparse Bayesian residual clutter suppression algorithm, target reconstruction and residual clutter suppression are achieved by exploiting the sparsity of the target. Thanks to the preprocessing stage, complete weak target echoes can still be preserved during target reconstruction. The operational flow diagram of the proposed method is shown in Figure 2.
The main contributions of this paper are described as follows:
  • We propose to utilize the single-channel image as a source of knowledge information. The single-channel image is obtained from SAR echoes and thus has a good match with SAR echoes. Moreover, the imaging results are generated immediately after the echo acquisition, which is also guaranteed in terms of timeliness.
  • For knowledge information extraction, we propose a method that combines the knowledge information source with superpixel-level processing. During the superpixel fusion stage, a fusion algorithm is proposed to realize adaptive classification of the scene, enabling refined classification between homogeneous and nonhomogeneous regions. The results show that complex scenes contain a large number of homogeneous and nonhomogeneous regions, which validates the importance of research on their refined classification.
  • A two-step processing method combining multi-strategy clutter suppression preprocessing and sparse Bayesian residual clutter suppression is proposed based on the refined classification results in complex scenes. The processing effect of each step is analyzed separately using the measured data. The findings indicate that incorporating knowledge information enhances the effectiveness of the clutter suppression preprocessing stage and provides a flatter clutter background for residual clutter suppression. The effectiveness of the proposed algorithm for clutter suppression in complex scenes is verified.
The rest of this paper is organized as follows. In Section 2, we model the multichannel SAR echoes. In Section 3, the knowledge information is extracted by utilizing the superpixel-level processing method. An adaptive superpixel fusion algorithm is proposed in Section 3.2. Based on the extracted knowledge information, a two-step clutter suppression method combining multi-strategy clutter suppression preprocessing and sparse Bayesian residual clutter suppression is proposed in Section 4. Section 5 analyzes the experimental results of the proposed algorithm. Section 6 presents the discussion. Finally, Section 7 summarizes the paper.

2. Multichannel SAR Echo Model

This section introduces the multichannel SAR echo model, providing the theoretical basis for the derivations in the following sections. The geometric configuration of a multichannel SAR system with N channels is shown in Figure 3. The system working in front-side view mode is taken as an example for analysis.
The platform moves along the $X$-axis at a height $H$ above the ground with a constant velocity $V$. The spacing between the array elements is $d$. Assume that the radar pulse is transmitted from the first antenna $Q_1$ and received by all antennas $Q_n$ $(n = 1, 2, \ldots, N)$. When the SAR system transmits a linear frequency modulated (LFM) signal, the complex baseband signal obtained from the echo received by the $n$-th channel after detection and demodulation can be expressed as
$$s_{echo}(\hat{t}, t_m, n) = A_t\, w_a(t_m)\, w_r\!\left(\hat{t} - \frac{2R_n(t_m)}{c}\right) \exp\!\left[j\pi\gamma\left(\hat{t} - \frac{2R_n(t_m)}{c}\right)^{2}\right] \exp\!\left[-j\frac{4\pi}{\lambda}R_n(t_m)\right] \tag{1}$$
where $\hat{t}$, $t_m$, and $R_n(t_m)$ denote the fast time, the slow time, and the instantaneous slant range between the $n$-th channel and the target at moment $t_m$, respectively. Furthermore, $w_a$ denotes the azimuth envelope function, $w_r$ denotes the envelope function of the transmitted signal, and $\gamma$ represents the modulation frequency of the LFM signal. $R_n(t_m)$ can be expressed as
$$R_n(t_m) \approx R_B + v_r t_m + \frac{1}{2}a_r t_m^{2} + \frac{\left[(v_x - V)t_m + \frac{(n-1)d}{2}\right]^{2}}{2R_B} \tag{2}$$
where $R_B$ is the shortest slant range, $v_r$ is the radial velocity of the moving target, $a_r$ is the radial acceleration of the moving target, and $v_x$ is the along-track velocity of the moving target. When the envelope of the transmitted signal is a rectangular window function, the echo signal after range compression can be expressed as
$$s_{echo}(\hat{t}, t_m, n) = A_t\, w_a(t_m)\, \mathrm{sinc}\!\left[B_r\left(\hat{t} - \frac{2R_n(t_m)}{c}\right)\right] \exp\!\left[-j\frac{4\pi}{\lambda}R_n(t_m)\right] \tag{3}$$
where $B_r$ denotes the bandwidth of the transmitted signal. Substituting the instantaneous slant range (2) into (3) yields
$$\begin{aligned} s_{echo}(\hat{t}, t_m, n) = {} & A_t\, w_a(t_m)\, \mathrm{sinc}\!\left[B_r\left(\hat{t} - \frac{2R_n(t_m)}{c}\right)\right] \exp\!\left(-j\frac{4\pi}{\lambda}R_B\right) \exp\!\left[j\frac{2\pi}{\lambda}\frac{v_r^{2}R_B}{(V-v_x)^{2}+a_rR_B}\right] \\ & \times \exp\!\left[-j\frac{4\pi}{\lambda}\frac{v_r(V-v_x)(n-1)d}{(V-v_x)^{2}+a_rR_B}\right] \\ & \times \exp\!\left\{-j\pi\frac{2(V-v_x)^{2}+2a_rR_B}{\lambda R_B}\left[\frac{v_rR_B}{(V-v_x)^{2}+a_rR_B} + t_m - \frac{(V-v_x)(n-1)d}{(V-v_x)^{2}+a_rR_B}\right]^{2}\right\} \end{aligned} \tag{4}$$
In multichannel SAR echoes, stationary targets in the scene are regarded as clutter signals $(v_x = 0,\ v_r = 0,\ a_r = 0)$. Thus, the clutter echo can be expressed as
$$s_{clutter}(\hat{t}, t_m, n) = A_c\, w_a(t_m)\, \mathrm{sinc}\!\left[B_r\left(\hat{t} - \frac{2R_n(t_m)}{c}\right)\right] \exp\!\left[-j\pi\frac{2V^{2}}{\lambda R_B}\left(t_m - \frac{(n-1)d}{2V}\right)^{2}\right] \exp\!\left(-j\frac{4\pi}{\lambda}R_B\right) \tag{5}$$
From Equation (5), it can be seen that the antenna baseline introduces a slow-time delay between the clutter returns of different channels. To achieve better clutter suppression, the influence of the antenna baseline on the returns of different channels must be removed so that the clutter returns of all channels are consistent. Therefore, a slow-time delay compensation process is required, which is usually performed in the Doppler domain. The result of the delay compensation can be expressed as
$$s_{clutter\_comp}(\hat{t}, t_m, n) = s_{clutter}(\hat{t}, t_m + \Delta t_n, n) = A_c\, w_a(t_m)\, \mathrm{sinc}\!\left[B_r\left(\hat{t} - \frac{2R_n(t_m)}{c}\right)\right] \exp\!\left(-j\frac{4\pi}{\lambda}R_B\right) \exp\!\left(-j\pi\frac{2V^{2}}{\lambda R_B}t_m^{2}\right) \tag{6}$$
where $\Delta t_n$ denotes the slow-time delay compensation value for the $n$-th channel. The target echo signals of the different channels are also processed with the same time delay compensation, denoted as
$$\begin{aligned} s_{target\_comp}(\hat{t}, t_m, n) \approx {} & A_t\, w_a(t_m)\, \mathrm{sinc}\!\left[B_r\left(\hat{t} - \frac{2R_n(t_m)}{c}\right)\right] \exp\!\left(-j\frac{4\pi}{\lambda}R_B\right) \exp\!\left[j\frac{2\pi}{\lambda}\frac{v_r^{2}R_B}{(V-v_x)^{2}+a_rR_B}\right] \\ & \times \exp\!\left[-j\frac{4\pi}{\lambda}\frac{v_r(V-v_x)(n-1)d}{(V-v_x)^{2}+a_rR_B}\right] \exp\!\left\{-j\pi\frac{2(V-v_x)^{2}+2a_rR_B}{\lambda R_B}\left[\frac{v_rR_B}{(V-v_x)^{2}+a_rR_B} + t_m\right]^{2}\right\} \end{aligned} \tag{7}$$
Expanding the square term of $s_{target\_comp}(\hat{t}, t_m, n)$ obtained from the time delay compensation yields
$$\begin{aligned} s_{target\_comp}(\hat{t}, t_m, n) = {} & A_t\, w_a(t_m)\, \mathrm{sinc}\!\left[B_r\left(\hat{t} - \frac{2R_n(t_m)}{c}\right)\right] \exp\!\left(-j\frac{4\pi}{\lambda}R_B\right) \\ & \times \exp\!\left\{-j2\pi\left[\frac{2}{\lambda}\frac{v_r(V-v_x)(n-1)d}{(V-v_x)^{2}+a_rR_B} + \frac{2v_r}{\lambda}t_m + \frac{(V-v_x)^{2}+a_rR_B}{\lambda R_B}t_m^{2}\right]\right\} \\ = {} & A_t\, w_a(t_m)\, \mathrm{sinc}\!\left[B_r\left(\hat{t} - \frac{2R_n(t_m)}{c}\right)\right] \exp\!\left(-j\frac{4\pi}{\lambda}R_B\right) \exp\!\left[j\zeta_n(f_{dt})\right] \exp\!\left(-j2\pi f_{dt}t_m - j\pi\kappa_{dt}t_m^{2}\right) \end{aligned} \tag{8}$$
where $f_{dt} = 2v_r/\lambda$ represents the Doppler center frequency, $\kappa_{dt}$ is the Doppler chirp rate of the moving target, and $\zeta_n(f_{dt})$ is the interference phase caused by the radial velocity of the target. These quantities are written as
$$\kappa_{dt} = \frac{2(V-v_x)^{2} + 2a_rR_B}{\lambda R_B} \tag{9}$$
$$\zeta_n(f_{dt}) = -\frac{2\pi f_{dt}(n-1)d}{(V-v_x) + a_rR_B/(V-v_x)} \tag{10}$$
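As a quick numerical illustration of the moving-target Doppler parameters defined above, the sketch below computes $f_{dt}$, $\kappa_{dt}$, and $\zeta_n(f_{dt})$ for hypothetical platform and target parameters (the values are illustrative, not the system of this paper; the sign of $\zeta_n$ follows one self-consistent convention):

```python
import math

def doppler_params(wavelength, V, R_B, d, v_r, v_x, a_r, n):
    """Moving-target Doppler parameters for channel n (1-indexed):
    f_dt     - Doppler center frequency, 2*v_r/lambda
    kappa_dt - Doppler chirp rate, (2*(V-v_x)^2 + 2*a_r*R_B)/(lambda*R_B)
    zeta_n   - channel interference phase induced by the radial velocity
    (sign conventions here are one consistent illustrative choice)."""
    f_dt = 2.0 * v_r / wavelength
    kappa_dt = (2.0 * (V - v_x) ** 2 + 2.0 * a_r * R_B) / (wavelength * R_B)
    zeta_n = -2.0 * math.pi * f_dt * (n - 1) * d / ((V - v_x) + a_r * R_B / (V - v_x))
    return f_dt, kappa_dt, zeta_n

# Hypothetical X-band example: lambda = 0.03 m, V = 150 m/s, R_B = 10 km,
# d = 0.5 m, target with v_r = 3 m/s, v_x = 0, a_r = 0, second channel
f_dt, kappa_dt, zeta_2 = doppler_params(0.03, 150.0, 10e3, 0.5,
                                        v_r=3.0, v_x=0.0, a_r=0.0, n=2)
# f_dt = 200 Hz, kappa_dt = 150 Hz/s
```

Note that a nonzero $\zeta_n$ is exactly the channel-dependent phase that STAP later exploits to separate moving targets from clutter.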
The measured data used for experimental analysis in this paper were obtained from a three-channel airborne SAR, whose system parameters are shown in Table 1. After the slow time-delay compensation process, the clutter signal echoes between different channels are approximately the same, and the SAR echo data of any channel can be selected for imaging.
Based on the measured data obtained from the SAR system parameters in Table 1, the single-channel imaging results are shown in Figure 4. Due to the large scale of the entire scene, a representative section of the complex scene was selected for processing, as shown in Figure 4c.
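The slow-time delay compensation described in this section is typically implemented as a linear phase ramp in the Doppler domain. A minimal NumPy sketch follows (the PRF and the test signal are illustrative choices, not the parameters of Table 1):

```python
import numpy as np

def slow_time_delay_compensate(signal, prf, delta_t):
    """Advance an azimuth (slow-time) signal by delta_t seconds by applying
    the Fourier shift theorem in the Doppler domain:
        s(t + dt)  <->  S(f) * exp(+j 2 pi f dt)
    The shift is circular over the processed aperture."""
    n = signal.shape[-1]
    f = np.fft.fftfreq(n, d=1.0 / prf)          # Doppler frequency axis (Hz)
    spectrum = np.fft.fft(signal, axis=-1)
    return np.fft.ifft(spectrum * np.exp(2j * np.pi * f * delta_t), axis=-1)

# Delaying by exactly one pulse interval circularly shifts the sequence by one sample
prf = 1000.0
t = np.arange(64) / prf
s = np.exp(2j * np.pi * 50.0 * t)
s_comp = slow_time_delay_compensate(s, prf, 1.0 / prf)
```

Working in the Doppler domain allows sub-pulse (fractional-sample) delays, which is why the compensation is usually performed there rather than by integer sample shifts.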

3. The Extraction of Knowledge Information

To address the various problems faced by KA-STAP, this study proposes to utilize a single channel of the multichannel SAR as the source of knowledge information. The knowledge information is obtained from the imaging result of the single channel to realize the refined classification of the complex scene, which can guide the subsequent clutter suppression.

3.1. Superpixel Segmentation Algorithm for Refined Classification of Complex Scenes

One of the innovations of this paper is to use the SAR image as an auxiliary source of knowledge information and to extract knowledge information from it to realize the refined classification of the complex scene [40]. In the knowledge extraction step based on SAR images [41], the first operation is superpixel segmentation based on simple linear iterative clustering (SLIC) [42,43]. Superpixel segmentation searches for an optimal division of an image; its goal is to make the boundaries of the divided superpixels fit the boundaries of the objects in the image and to ensure that the pixels inside each superpixel have similar features.
Since SAR images are grayscale, the SLIC algorithm is applied in the 3D space $(l, x, y)$, where $l$ is the pixel intensity and $(x, y)$ are the pixel coordinates. The SLIC algorithm obtains initial cluster centers $C_h$ $(h = 1, 2, 3, \ldots)$ by sampling pixels at regular grid intervals. Subsequently, an iterative local k-means clustering process associates each pixel with the cluster center having the minimum distance measure $D$, calculated as
$$D(i, j) = \left[\frac{d_c(i, j)^{2}}{m^{2}} + \frac{d_s(i, j)^{2}}{S^{2}}\right]^{\frac{1}{2}} \tag{11}$$
where $d_c(i, j) = \sqrt{(l_i - l_j)^{2}}$ denotes the intensity similarity between pixels $i$ and $j$, with $l_i$ and $l_j$ the respective intensity values, and $d_s(i, j) = \sqrt{(x_i - x_j)^{2} + (y_i - y_j)^{2}}$ denotes the spatial distance [44] between pixels $i$ and $j$, with $(x_i, y_i)$ and $(x_j, y_j)$ their horizontal and vertical coordinates. $m$ and $S$ are regularization parameters.
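The distance measure $D$ above can be computed directly; in the sketch below the regularization parameters m and S are hypothetical values, since the paper does not fix them here:

```python
import numpy as np

def slic_distance(l_i, l_j, p_i, p_j, m=10.0, S=20.0):
    """SLIC distance measure D combining the intensity similarity d_c and
    the spatial distance d_s, each normalized by a regularization
    parameter (m: compactness weight, S: sampling grid interval).
    The m and S defaults are hypothetical."""
    d_c = abs(l_i - l_j)                                  # |l_i - l_j|
    d_s = np.hypot(p_i[0] - p_j[0], p_i[1] - p_j[1])      # Euclidean distance
    return float(np.sqrt((d_c / m) ** 2 + (d_s / S) ** 2))

# A pixel S pixels away with identical intensity scores the same as one
# at the same location whose intensity differs by m:
d_spatial = slic_distance(100.0, 100.0, (0, 0), (0, 20))    # -> 1.0
d_intensity = slic_distance(110.0, 100.0, (5, 5), (5, 5))   # -> 1.0
```

This normalization is what lets m trade off intensity homogeneity against spatial compactness of the resulting superpixels.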
Assume the SAR image $P$ to be processed is segmented into $K$ superpixels, each denoted as $X_k$. The segmented image must satisfy the following relationships:
$$\bigcup_{k=1}^{K} X_k = P \tag{12}$$

$$X_{k_1} \cap X_{k_2} = \varnothing, \quad k_1 \neq k_2 \tag{13}$$

3.2. Superpixel Fusion Algorithm for Refined Classification of Complex Scenes

After the superpixel segmentation is completed, the overall scene is divided into a large number of superpixels. The imaging results of the measured data and the corresponding superpixel segmentation results are shown in Figure 5a,b. Although the superpixels achieve refined segmentation of the scene, homogeneous regions satisfying the same statistical properties are also divided into numerous superpixels. Figure 5c shows the existence of this situation.
Traversing each superpixel for clutter suppression significantly increases the computational burden. It is therefore better to first fuse the superpixels that share the same statistical characteristics according to a specific criterion, and then perform clutter suppression on the fused slice-shaped regions. Traditional superpixel fusion methods usually only distinguish the target from the background and do not further refine the background region, which suits target analysis in simple scenes. When the background is complex, however, it is difficult for traditional methods to realize differentiated fusion of multiple background types.
To solve this problem, we propose a superpixel fusion algorithm that divides the complex scene into homogeneous and nonhomogeneous regions and provides accurate knowledge information for subsequent clutter suppression. The schematic diagram of the proposed algorithm is shown in Figure 6. For clarity, the corresponding processing results are displayed alongside each processing step in Figure 6. Since the proposed superpixel fusion method requires multiple iterations under a loop condition, only the section indicated by the red dashed line presents the final output of the superpixel fusion; the remaining sections show outputs of intermediate processing stages, all taken from the first iteration for illustrative purposes. Algorithm 1 provides the details of the proposed algorithm.
Algorithm 1: An adaptive superpixel fusion
Inputs: superpixel segmentation results;
Outputs: the refined classification results of the complex scene;
Initialization: starting superpixel Xstart, similarity label set simlabel, alternative set altlabel, dissimilarity set diffmeasure.
  (1) Search all superpixels in the neighborhood of Xstart and extract their labels to form the alternative set altlabel, card(altlabel) = nalt.
  (2) Calculate the statistical information (color information and texture information) for superpixel Xstart and for all superpixels corresponding to labels in the alternative set.
  (3) Based on the obtained statistical information, calculate the dissimilarity measure diffmeasure(j), j = 1,…, nalt between Xstart and each Xaltlabel(i), i = 1,…, nalt, generating the set diffmeasure = (diffmeasure(1), diffmeasure(2),…, diffmeasure(nalt)).
  (4) For the items in diffmeasure that do not exceed the set threshold, extract the corresponding superpixel labels to form the similarity label set simlabel, card(simlabel) = nsim.
  (5) Determine whether simlabel is the empty set. When nsim ≠ 0, perform step (6); otherwise perform step (7).
  (6) Fuse the superpixels corresponding to all labels in simlabel with the current Xstart. Update Xstart with the fusion result and repeat steps (1)–(5).
  (7) Determine whether all superpixels currently not participating in the fusion process have ever been used as a starting superpixel. If any unused superpixels exist, proceed to step (8); otherwise, the algorithm terminates.
  (8) Search for superpixels that have neither participated in fusion nor been used as a starting superpixel. Take the minimum label value among these superpixels, update Xstart with the corresponding superpixel, and repeat steps (1)–(5).
There are several points to note in the algorithm. Firstly, any superpixel within the superpixel segmentation result can be selected as the starting point (but the superpixels at the corners are usually chosen as the starting point to simplify the fusion process).
Secondly, the texture information is described by gray-level co-occurrence matrix (GLCM), which is commonly used in SAR images. The GLCM counts the frequency of occurrence of pixel pairs with a specific gray value in an image under a specific spatial relationship (direction and distance).
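As an illustration of this texture statistic, a minimal GLCM for a single displacement can be computed as follows. This is a simplified sketch; practical use typically aggregates several directions and distances and derives scalar features (e.g., contrast or entropy) from the matrix:

```python
import numpy as np

def glcm(image, dx=1, dy=0, levels=8):
    """Gray-level co-occurrence matrix for one displacement (dx, dy >= 0).

    mat[a, b] counts how often a pixel of gray level a has a neighbor of
    gray level b at offset (dx, dy). `image` must contain integer levels
    in [0, levels)."""
    img = np.asarray(image)
    rows, cols = img.shape
    mat = np.zeros((levels, levels), dtype=np.int64)
    ref = img[:rows - dy, :cols - dx]        # reference pixels
    nbr = img[dy:, dx:]                      # displaced neighbors
    np.add.at(mat, (ref.ravel(), nbr.ravel()), 1)
    return mat

tiny = np.array([[0, 0, 1],
                 [0, 0, 1],
                 [0, 2, 2]])
m = glcm(tiny, dx=1, dy=0, levels=3)
```

For quantized SAR amplitude images, comparing such matrices between superpixels captures texture differences that mean intensity alone misses.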
Thirdly, the design of the similarity label set simlabel is also one of the innovations of the algorithm in this section, as it realizes the adaptive generation of different homogeneous regions. During processing, when simlabel is empty, no superpixel in altlabel can participate in the fusion. At this point, a homogeneous region guided by the statistical properties of the current starting superpixel Xstart is generated. The next step is to search for the next superpixel that has not been used as a starting superpixel and to generate a homogeneous region guided by that superpixel. Once all labels have been searched, the fusion algorithm ends, and the different homogeneous regions have been adaptively generated. The superpixel fusion results obtained by the proposed algorithm are shown in the red dashed box in Figure 6.
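A compact sketch of Algorithm 1 is given below. For brevity the per-superpixel statistics are reduced to the mean intensity and the dissimilarity to an absolute difference, whereas the paper's measure also uses GLCM texture; the threshold is a hypothetical input:

```python
import numpy as np

def fuse_superpixels(labels, image, threshold):
    """Adaptive superpixel fusion (sketch of Algorithm 1).

    labels : integer superpixel label map (e.g., SLIC output)
    image  : grayscale image of the same shape
    Returns a dict mapping each superpixel label to a fused region id.
    """
    ids = np.unique(labels)
    # step (2): per-superpixel statistics (mean intensity stands in for
    # the color + GLCM texture statistics used in the paper)
    mean = {int(k): float(image[labels == k].mean()) for k in ids}
    # 4-connected superpixel adjacency graph
    adj = {int(k): set() for k in ids}
    for sl_a, sl_b in ((labels[:, :-1], labels[:, 1:]),
                       (labels[:-1, :], labels[1:, :])):
        for a, b in zip(sl_a.ravel(), sl_b.ravel()):
            if a != b:
                adj[int(a)].add(int(b))
                adj[int(b)].add(int(a))
    region_of, next_region = {}, 0
    for seed in map(int, ids):            # steps (7)-(8): next unused seed
        if seed in region_of:
            continue
        region, frontier = {seed}, {seed}
        while frontier:                   # steps (1)-(6): grow one region
            # step (1): unassigned neighbors of the region (alternative set)
            alt = set().union(*(adj[s] for s in frontier)) - region
            alt = {a for a in alt if a not in region_of}
            ref = np.mean([mean[r] for r in region])
            # steps (3)-(4): similarity set under the threshold
            sim = {a for a in alt if abs(mean[a] - ref) <= threshold}
            region |= sim                 # step (6): fuse and iterate
            frontier = sim
        for k in region:
            region_of[k] = next_region
        next_region += 1
    return region_of

labels = np.array([[0, 0, 1, 1],
                   [2, 2, 3, 3]])
image = np.array([[10., 10., 11., 11.],
                  [10., 10., 50., 50.]])
regions = fuse_superpixels(labels, image, threshold=5.0)
```

In this toy example, superpixels 0, 1, and 2 share similar statistics and fuse into one homogeneous region, while superpixel 3 remains separate, mirroring the adaptive region generation described above.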

4. Knowledge-Aided Clutter Suppression Method in Complex Scene

In Section 3, the introduction of knowledge information realizes the refined classification of homogeneous and nonhomogeneous regions in complex scenes. Since different regions lie in different clutter environments, processing the overall complex scene with a single sample CCM taken at an arbitrary location can hardly achieve the ideal clutter suppression effect.
In this case, a two-step processing method for clutter suppression is proposed in this section. Firstly, the knowledge information is used as a guideline and combined with a multi-strategy clutter suppression algorithm to preprocess the complex scene for clutter suppression. Then, the remaining clutter in the complex scene is further suppressed using the sparse Bayesian algorithm.

4.1. Multi-Strategy Clutter Suppression Preprocessing

To achieve clutter suppression aided by knowledge information, the clutter suppression preprocessing stage is performed in the image domain. After azimuth compression of Equations (6) and (7), the image-domain expression $I_{clutter}(\hat{t}, t_m, n)$ of the clutter echoes can be obtained. For the divided homogeneous and nonhomogeneous regions, a multi-strategy approach is used in the preprocessing stage. The preprocessing of the homogeneous regions is considered first: since a homogeneous region has a smooth clutter environment, preprocessing can be realized by estimating the CCM of each region separately for STAP.
The basic idea of the STAP technique is to carry out 2D adaptive processing of the signals according to an optimality criterion under the model of background clutter plus known signals, maximizing the output signal-to-clutter-plus-noise ratio (SCNR) to achieve clutter suppression.
In SAR, the Fourier transforms of the echo data at different Doppler frequencies are asymptotically independent due to the long synthetic aperture time, and the cross terms between different Doppler channels can be neglected, at which time the STAP can be approximated as the spatial adaptive processing.
The optimal weight vector $W_{opt}$ is obtained from the following linearly constrained optimization problem:
$$\min_{W}\ W^{H}RW \quad \text{s.t.} \quad W^{H}s_t = 1 \tag{14}$$
where $R$ is the CCM and $s_t$ is the steering vector.
Assume that the noise is a zero-mean complex Gaussian signal with power $\sigma_n^{2}$. Since the clutter and noise are independent of each other, the CCM becomes
$$R(\hat{t}, t_m) = E\left[I_{clutter}(\hat{t}, t_m)\, I_{clutter}^{H}(\hat{t}, t_m)\right] = R_c(\hat{t}, t_m) + R_n = R_c(\hat{t}, t_m) + \sigma_n^{2}I_{N \times N} \tag{15}$$
where $(\cdot)^{H}$ stands for the conjugate transpose, $R_c(\hat{t}, t_m)$ is the clutter covariance matrix, $I_{N \times N}$ is the identity matrix, and $I_{clutter}(\hat{t}, t_m)$ is the multichannel snapshot vector
$$I_{clutter}(\hat{t}, t_m) = \left[I_{clutter}(\hat{t}, t_m, 1), \ldots, I_{clutter}(\hat{t}, t_m, N)\right]^{T} \tag{16}$$
However, in practice $R_c(\hat{t}, t_m)$ cannot be obtained directly and needs to be estimated from neighboring cells. The maximum likelihood estimate of $R_c(\hat{t}, t_m)$ can be expressed as
$$\hat{R}_c(\hat{t}, t_m) = \frac{1}{L}\sum_{i=1}^{L} I_{clutter}(\hat{t}_i, t_m)\, I_{clutter}^{H}(\hat{t}_i, t_m) \tag{17}$$
where $L$ denotes the number of IID training samples selected in the neighborhood of the cell to be processed.
According to the Reed, Mallett, and Brennan (RMB) criterion [45], to keep the average SCNR loss within 3 dB of the optimal space–time filtering performance, the number of IID training samples should be at least $2N$.
From Equation (8), it can be seen that the difference between the echo signals of different channels is mainly determined by $\zeta_n(f_{dt})$. Therefore, the steering vector in STAP processing can be expressed as
$s_t = s_t(f_{dt}) = \left[ 1, \exp(j \zeta_1 f_{dt}), \ldots, \exp(j \zeta_N f_{dt}) \right]^{T}$
By solving the optimization problem in Equation (14), the optimal weight vector $W_{opt}$ can be obtained as
$W_{opt}(\hat{t}, t_m) = \dfrac{R^{-1}(\hat{t}, t_m)\, s_t(f_{dt})}{s_t^{H}(f_{dt})\, R^{-1}(\hat{t}, t_m)\, s_t(f_{dt})}$
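As an illustration, the sample CCM estimation and the adaptive weight computation above can be sketched as follows; the diagonal-loading term, array sizes, and test data are illustrative assumptions rather than part of the paper's method.

```python
import numpy as np

def adaptive_weights(training, steering, loading=1e-3):
    """Estimate the CCM from training snapshots and form the adaptive
    (MVDR-type) weights w = R^{-1} s / (s^H R^{-1} s).

    training: (L, N) clutter-only snapshots from neighboring range cells
              (L >= 2N per the RMB criterion for an average loss within 3 dB).
    steering: (N,) spatial steering vector s_t(f_dt).
    The diagonal-loading term is a common robustness tweak, not part of
    the paper's formulation.
    """
    L, N = training.shape
    R = training.conj().T @ training / L             # sample CCM estimate
    R += loading * np.trace(R).real / N * np.eye(N)  # diagonal loading
    Ri_s = np.linalg.solve(R, steering)
    return Ri_s / (steering.conj() @ Ri_s)

# Toy usage with white-noise snapshots and a 4-channel array.
rng = np.random.default_rng(0)
N = 4
snaps = (rng.standard_normal((200, N)) + 1j * rng.standard_normal((200, N))) / np.sqrt(2)
s = np.exp(1j * np.pi * 0.3 * np.arange(N))
w = adaptive_weights(snaps, s)
print(np.isclose(w.conj() @ s, 1.0))  # the distortionless constraint w^H s = 1 holds
```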
Secondly, for nonhomogeneous regions, the cause of the nonhomogeneity is key to choosing the specific processing method. Taking the nonhomogeneous scene in the measured data as an example, the different parts can be summarized as follows:
The first category involves a large number of superpixels concentrated in the center of the scene (the red rectangular portion in Figure 7, which is the bridge region). The large number of isolated strong scattering points in the bridge structure causes the clutter nonhomogeneity in this region. To address this type of clutter, the power selective training (PST) method is used to screen the higher-energy clutter training samples for estimating the CCM of the region and achieving clutter suppression.
The second category consists of a small number of concentrated superpixels occurring at arbitrary locations in the scene (the purple square portion of Figure 7), which exhibit nonhomogeneous clutter due to abrupt terrain changes. For this type of nonhomogeneity, a GIP-based training sample selection method is used to estimate the CCM for processing.
The third category is the isolated superpixels that are not fused into any homogeneous region (the blue triangular part in Figure 7; only the circled part is used as an example). Isolated, unfused superpixels are usually treated as parts of a suspected target, and for this category the CCM of the nonhomogeneous background region in which the current superpixel is located is used for processing. Because the suspected target is confined to the isolated, unfused superpixel, training samples from the target location are excluded from the estimation of the CCM of the surrounding nonhomogeneous region, which effectively prevents target self-cancelation during clutter suppression.
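As a sketch of the GIP screening used for the second category, the snippet below ranks candidate snapshots by the generalized inner product and keeps the most homogeneous fraction; the keep ratio, loading term, and data are illustrative choices, not values from the paper.

```python
import numpy as np

def gip_select(samples, keep_ratio=0.7):
    """GIP-based screening of candidate training samples.

    The generalized inner product x^H R^{-1} x flags snapshots that
    deviate from the bulk clutter statistics; the smallest-GIP fraction
    is kept as homogeneous training data. The keep ratio and the small
    loading term are illustrative assumptions.
    """
    L, N = samples.shape
    R = samples.conj().T @ samples / L + 1e-6 * np.eye(N)  # initial CCM
    Rinv = np.linalg.inv(R)
    gip = np.einsum('ln,nm,lm->l', samples.conj(), Rinv, samples).real
    order = np.argsort(gip)                                # ascending GIP
    return samples[order[: int(keep_ratio * L)]]

rng = np.random.default_rng(0)
X = rng.standard_normal((21, 4)) + 1j * rng.standard_normal((21, 4))
X[0] = 40.0                      # one strongly contaminated snapshot
kept = gip_select(X)
print(np.abs(kept).max() < 40)   # the contaminated snapshot is screened out
```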

4.2. Residual Clutter Suppression Combined with Sparse Bayesian Learning

Before preprocessing, the main components of the SAR echo are clutter signals, in which the target signal is submerged and cannot be extracted. After the refined clutter suppression preprocessing of the complex scene, the approximate position of the target can already be observed, and the overall clutter background has become flat compared with that before preprocessing, but the remaining clutter still adversely affects detection.
For any range cell, the target signal before preprocessing is buried in clutter and cannot be extracted. After preprocessing, the clutter amplitude is greatly reduced, yielding a significant increase in the target SCNR. At this point, the strong scattering characteristics of the target appear in only a small portion of the range cells, so the target scattering coefficient is sparse.
In this case, sparse reconstruction algorithms are considered to achieve accurate reconstruction of the target, and including an additive perturbation term in the model can further suppress the residual clutter.
As observed in Equation (8), the echo data s t a r g e t _ c o m p t ^ , t m , n in the range-Doppler domain is related to the echo’s scattering coefficient A t . Therefore, to suppress residual clutter, the processed results in the image domain must be transformed back to the range-Doppler domain.
Referring to Equation (8), the result of the refined clutter suppression preprocessing can be expressed as
s r c s p _ o u t t ^ , t m = A r c s p w a t m exp [ j ζ n f d t ] exp [ j 2 π f d t t m + j π κ d t t m 2 ] + c t m
where A r c s p denotes the amplitude of the signal after clutter suppression preprocessing. For any range cell, the result of s r c s p _ o u t t ^ , t m can be rewritten as s r c s p _ o u t t m .
Organizing s r c s p _ o u t t m into matrix form yields the sparse observation model as follows:
s M × 1 = Θ M × N a a N a × 1 + c M × 1
The measurement matrix is constructed as
$s_{M \times 1} = \left[ s_{rcsp\_out}(t_m) \right]_{t_m = t_1, \ldots, t_M}^{T}, \quad a_{N_a \times 1} = \left[ A_1, \ldots, A_{N_a} \right]^{T}$
$\Theta_{M \times N_a} = \left[ \theta\!\left(t_m + \frac{N_a/2}{PRF}\right), \ldots, \theta\!\left(t_m - \frac{i}{PRF}\right), \ldots, \theta\!\left(t_m - \frac{N_a/2 - 1}{PRF}\right) \right]$
$\theta\!\left(t_m - \frac{i}{PRF}\right) = \exp\!\left[ j 2\pi f_{dt}\left(t_m - \frac{i}{PRF}\right) + j\pi \kappa_{dt}\left(t_m - \frac{i}{PRF}\right)^2 \right]$
where s M × 1 denotes the observed signal of dimension M × 1 , which is the result after the preprocessing of refined clutter suppression, Θ M × N a denotes the observation matrix of dimension M × N a , c M × 1 denotes the additive perturbation term, and a N a × 1 denotes the target complex scattering sparse coefficient vector of dimension N a × 1 , which is the sparse signal to be reconstructed.
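A hedged sketch of assembling the observation matrix of Equations (22)-(24): the hypothetical `build_dictionary` helper evaluates Doppler-chirp atoms on a slow-time grid shifted in steps of 1/PRF, with all parameter values chosen for illustration only.

```python
import numpy as np

def build_dictionary(t_m, f_dt, k_dt, Na, prf):
    """Assemble the matrix of Doppler-chirp atoms on a slow-time grid
    shifted in steps of 1/PRF. All numerical values are placeholders.

    t_m : (M,) slow-time axis; f_dt, k_dt : Doppler centroid and rate;
    Na  : number of azimuth grid cells; prf : pulse repetition frequency.
    """
    shifts = (np.arange(Na) - Na // 2) / prf   # candidate shifts i/PRF
    tau = t_m[:, None] - shifts[None, :]       # (M, Na) shifted slow times
    return np.exp(1j * 2 * np.pi * f_dt * tau + 1j * np.pi * k_dt * tau**2)

M, Na, prf = 256, 64, 1000.0
t_m = np.arange(M) / prf
Theta = build_dictionary(t_m, f_dt=50.0, k_dt=200.0, Na=Na, prf=prf)
print(Theta.shape)  # (256, 64)
```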
In order to realize the sparse reconstruction of target scattering coefficients and the suppression of residual clutter, a Bayesian compressed sensing method based on Laplace prior distribution is adopted for the sparse characteristics of the target. The sparse Bayesian learning (SBL) method [46], also known as the BCS method, was originally proposed as a machine learning algorithm in 2001, and excelled in regression and classification tasks [47].
Subsequently, SBL was introduced to the field of sparse signal recovery and compressed sensing to convert the sparse signal reconstruction problem into a Bayesian linear regression problem [48]. Considering the observation model containing additive perturbations, all unknown signals are modeled as random variables, and a hierarchical sparse prior model is constructed under the framework of Relevance Vector Machine (RVM) theory [49] in order to make full use of the signal. The conditional distribution of the observed signal is established based on the distributional characteristics of the additive perturbation, and the posterior probability distribution of the signal is calculated through Bayesian inference by combining the prior distribution of the signal and the distribution of the observed signal.
Considering that the BCS reconstruction algorithm is based on a real-domain signal model and the SAR sparse observation model is complex-domain, it is necessary to transform the complex-domain observation model into its real-domain counterpart:
s ˜ 2 M × 1 = Θ ˜ 2 M × 2 N a a ˜ 2 N a × 1 + c ˜ 2 M × 1
where the specific form of each term in Equation (25) is shown as follows:
$\tilde{s}_{2M \times 1} = \begin{bmatrix} \operatorname{Re}(s) \\ \operatorname{Im}(s) \end{bmatrix}, \quad \tilde{\Theta}_{2M \times 2N_a} = \begin{bmatrix} \operatorname{Re}(\Theta) & -\operatorname{Im}(\Theta) \\ \operatorname{Im}(\Theta) & \operatorname{Re}(\Theta) \end{bmatrix}, \quad \tilde{a}_{2N_a \times 1} = \begin{bmatrix} \operatorname{Re}(a) \\ \operatorname{Im}(a) \end{bmatrix}, \quad \tilde{c}_{2M \times 1} = \begin{bmatrix} \operatorname{Re}(c) \\ \operatorname{Im}(c) \end{bmatrix}$
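The complex-to-real mapping of Equation (26) can be verified numerically; the snippet below checks that the stacked real-valued model reproduces the complex product exactly.

```python
import numpy as np

rng = np.random.default_rng(1)
M, Na = 8, 6
Theta = rng.standard_normal((M, Na)) + 1j * rng.standard_normal((M, Na))
a = rng.standard_normal(Na) + 1j * rng.standard_normal(Na)

# Real-domain equivalents: stack real parts over imaginary parts.
Theta_r = np.block([[Theta.real, -Theta.imag],
                    [Theta.imag,  Theta.real]])
a_r = np.concatenate([a.real, a.imag])

s = Theta @ a
s_r = Theta_r @ a_r
# The real model reproduces the complex product exactly.
print(np.allclose(s_r, np.concatenate([s.real, s.imag])))  # True
```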
The additive perturbation c ˜ is independent and Gaussian, with zero mean and variance equal to σ 2 . According to the linear Bayesian estimation theory, the observation signal s ˜ obeys a Gaussian distribution with a conditional probability density function of
$p(\tilde{s} \,|\, \tilde{a}, \sigma^2) = \mathcal{N}(\tilde{s} \,|\, \tilde{\Theta}\tilde{a}, \sigma^2 I) = \left(2\pi\sigma^2\right)^{-N_a} \exp\!\left(-\frac{1}{2\sigma^2}\left\|\tilde{s} - \tilde{\Theta}\tilde{a}\right\|^2\right)$
with a Gamma prior placed on β = 1 / σ 2 as follows:
$p(\beta) = \Gamma(\beta \,|\, a, b) = \frac{b^{a}}{\Gamma(a)}\beta^{a-1}\exp(-b\beta)$
Since the Gamma distribution is the conjugate prior for the inverse variance (precision) of a Gaussian distribution, it is often used as the prior of this quantity to simplify the analysis.
In Bayesian modeling, all unknown signals are treated as stochastic quantities with assigned probability distributions, and the unknown signal $\tilde{a}$ is given a sparse prior distribution $p(\tilde{a}\,|\,\gamma)$ as a constraint, which encodes our knowledge of the signal properties. Among sparse priors, the Laplace distribution is assigned to the coefficients $\tilde{a}$ because it strongly promotes sparsity and its logarithm is concave, which facilitates maximization. However, the Laplace prior is not the conjugate prior of the Gaussian likelihood function, so this direct formulation does not allow a tractable Bayesian analysis. To solve this problem, the RVM framework is introduced to construct a hierarchical sparse prior model [47]. The first stage of the hierarchical model specifies the following prior on $\tilde{a}$:
$p(\tilde{a} \,|\, \alpha) = \prod_{i=1}^{2N_a} \mathcal{N}(\tilde{a}_i \,|\, 0, \alpha_i)$
where $\alpha = \left[\alpha_1, \alpha_2, \ldots, \alpha_{2N_a}\right]$. In the second stage of the hierarchy, each $\alpha_i$ is assigned a Gamma distribution:
$p(\alpha_i \,|\, \eta) = \Gamma(\alpha_i \,|\, 1, \eta/2) = \frac{\eta}{2}\exp\!\left(-\frac{\eta \alpha_i}{2}\right), \quad \alpha_i \geq 0, \; \eta \geq 0$
Combining Equations (29) and (30), the following equation can be obtained:
$p(\tilde{a} \,|\, \eta) = \int p(\tilde{a} \,|\, \alpha)\, p(\alpha \,|\, \eta)\, d\alpha = \left(\frac{\sqrt{\eta}}{2}\right)^{2N_a} \exp\!\left(-\sqrt{\eta}\sum_i |\tilde{a}_i|\right)$
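The marginalization in Equation (31) can be checked numerically: integrating the per-coefficient Gaussian prior against its Gamma hyperprior should reproduce a Laplace density.

```python
import numpy as np
from scipy import integrate

# Integrate the per-coefficient Gaussian prior N(a|0, alpha) against the
# Gamma(alpha | 1, eta/2) hyperprior; the result should match the Laplace
# density (sqrt(eta)/2) * exp(-sqrt(eta) * |a|).
eta = 4.0
for a in (0.3, 1.0, 2.5):
    marginal, _ = integrate.quad(
        lambda al: np.exp(-a**2 / (2 * al)) / np.sqrt(2 * np.pi * al)
                   * (eta / 2) * np.exp(-eta * al / 2),
        0, np.inf)
    laplace = np.sqrt(eta) / 2 * np.exp(-np.sqrt(eta) * abs(a))
    print(abs(marginal - laplace) < 1e-6)  # True for each test point
```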
From Equation (31), it can be seen that a ˜ obeys the Laplace distribution, verifying the correctness of the hierarchical sparse prior model. In the last stage of the hierarchical structure, we model η as the realization of the following Gamma hyperprior:
p η   |   υ = Γ η   |   υ / 2 , υ / 2
The first two stages of the three-stage hierarchical structure result in a Laplace distribution $p(\tilde{a}\,|\,\eta)$, and the last stage is used to estimate $\eta$. Based on the hierarchical sparse prior model, the parameters $\tilde{a}, \alpha, \eta, \beta$ are solved via Bayesian inference. The posterior distribution of all parameters can be obtained using the Bayesian formula:
p a ˜ , α , η , β   |   s ˜ = p a ˜ , α , η , β , s ˜ / p s ˜
However, the posterior $p(\tilde{a}, \alpha, \eta, \beta \,|\, \tilde{s})$ is difficult to solve directly, since $p(\tilde{s}) = \int p(\tilde{a}, \alpha, \eta, \beta, \tilde{s})\, d\tilde{a}\, d\alpha\, d\eta\, d\beta$ cannot be calculated analytically. Therefore, the parameters are solved through a decomposition of the posterior distribution. According to the Bayesian formula, the inference procedure is based on the following decomposition:
p a ˜ , α , η , β   |   s ˜ = p a ˜   |   α , η , β , s ˜ p α , η , β   |   s ˜
Since $p(\tilde{a} \,|\, \alpha, \eta, \beta, \tilde{s}) \propto p(\tilde{a}, \alpha, \eta, \beta, \tilde{s})$, the distribution $p(\tilde{a} \,|\, \alpha, \eta, \beta, \tilde{s})$ is found to be a multivariate Gaussian distribution $\mathcal{N}(\tilde{a} \,|\, \mu, \Sigma)$ with parameters
$\mu = \beta\, \Sigma\, \tilde{\Theta}^{T} \tilde{s}$
$\Sigma = \left(\beta\, \tilde{\Theta}^{T} \tilde{\Theta} + \Lambda\right)^{-1}$
where Λ = d i a g 1 / α 1 , , 1 / α i , , 1 / α 2 N a . We then utilize p α , η , β | s ˜ to estimate the hyperparameters α , η , β .
According to the following equation,
$p(\alpha, \eta, \beta \,|\, \tilde{s}) = p(\alpha, \eta, \beta, \tilde{s}) / p(\tilde{s}) \propto p(\alpha, \eta, \beta, \tilde{s})$
The hyperparameters are solved through the joint distribution p α , η , β , s ˜ . By integrating out a ˜ from p a ˜ , α , η , β , s ˜ , we can obtain the following equation:
$p(\alpha, \eta, \beta, \tilde{s}) = \int p(\tilde{s} \,|\, \tilde{a}, \beta)\, p(\tilde{a} \,|\, \alpha)\, p(\alpha \,|\, \eta)\, p(\eta)\, p(\beta)\, d\tilde{a} = \left(2\pi\right)^{-N_a} \left|\beta^{-1} I + \tilde{\Theta}\Lambda^{-1}\tilde{\Theta}^{T}\right|^{-\frac{1}{2}} \exp\!\left(-\frac{1}{2}\tilde{s}^{T}\left(\beta^{-1} I + \tilde{\Theta}\Lambda^{-1}\tilde{\Theta}^{T}\right)^{-1}\tilde{s}\right) p(\alpha \,|\, \eta)\, p(\eta)\, p(\beta)$
where I is the identity matrix. In order to facilitate the analysis, we convert Equation (38) into logarithmic form:
$\log p(\alpha, \eta, \beta, \tilde{s}) = -\frac{1}{2}\log|J| - \frac{1}{2}\tilde{s}^{T} J^{-1} \tilde{s} + 2N_a \log\frac{\eta}{2} - \frac{\eta}{2}\sum_i \alpha_i + \frac{\upsilon}{2}\log\frac{\upsilon}{2} - \log\Gamma\!\left(\frac{\upsilon}{2}\right) + \left(\frac{\upsilon}{2} - 1\right)\log\eta - \frac{\upsilon}{2}\eta + \left(a_\beta - 1\right)\log\beta - b_\beta \beta$
where $J = \beta^{-1} I + \tilde{\Theta}\Lambda^{-1}\tilde{\Theta}^{T}$ and constant terms have been omitted.
For the estimation of the hyperparameters $\alpha, \eta, \beta$, algorithms such as the Expectation Maximization (EM) algorithm or Type II maximum likelihood estimation are utilized. These algorithms obtain the optimal estimates of the hyperparameters by maximizing the likelihood function of the hyperparameters, and the updates can be expressed as
$\alpha_i = -\frac{1}{2\eta} + \sqrt{\frac{1}{4\eta^2} + \frac{\langle \tilde{a}_i^2 \rangle}{\eta}}$
$\eta = \frac{2N_a - 1 + \upsilon/2}{\sum_i \alpha_i / 2 + \upsilon/2}$
$\beta = \frac{N_a + a_\beta}{\left\|\tilde{s} - \tilde{\Theta}\tilde{a}\right\|^2 / 2 + b_\beta}$
where $\langle \tilde{a}_i^2 \rangle = \mu_i^2 + \Sigma_{ii}$, with $\Sigma_{ii}$ the $i$th diagonal element of $\Sigma$. Finally, we can also estimate $\upsilon$ by maximizing Equation (41) with respect to $\upsilon$. This results in solving the following equation:
$\log\frac{\upsilon}{2} + 1 - \psi\!\left(\frac{\upsilon}{2}\right) + \log\eta - \eta = 0$
where $\psi(z) = d\log\Gamma(z)/dz$ is the digamma function. Since the mean $\mu$ and covariance $\Sigma$ are functions of the hyperparameters $\alpha$ and $\beta$, and the hyperparameters are in turn functions of $\mu$ and $\Sigma$, they must be updated iteratively until convergence is reached. The estimate of the sparse signal to be solved is the posterior mean $\mu$.
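A minimal sketch of this alternating iteration on a real-valued toy problem is given below; it implements the posterior moments of Equations (35) and (36) together with the closed-form $\alpha$ update, while $\eta$ and $\beta$ are held fixed for simplicity (the paper updates them as well). All problem sizes and values are illustrative.

```python
import numpy as np

def sbl_update(Phi, s, alpha, eta, beta):
    """One pass of the iteration: posterior moments of Eqs. (35)-(36),
    then the closed-form alpha update (positive root of
    eta*alpha_i^2 + alpha_i = <a_i^2>). eta and beta are held fixed here;
    the paper also updates them at each iteration."""
    Lam = np.diag(1.0 / alpha)
    Sigma = np.linalg.inv(beta * Phi.T @ Phi + Lam)  # posterior covariance
    mu = beta * Sigma @ Phi.T @ s                     # posterior mean
    a2 = mu**2 + np.diag(Sigma)                       # <a_i^2> = mu_i^2 + Sigma_ii
    alpha = -1 / (2 * eta) + np.sqrt(1 / (4 * eta**2) + a2 / eta)
    return mu, Sigma, alpha, a2

# Toy sparse recovery setup: 3 active coefficients out of 80.
rng = np.random.default_rng(2)
Phi = rng.standard_normal((40, 80))
x_true = np.zeros(80)
x_true[[5, 30, 61]] = [3.0, -2.0, 4.0]
s = Phi @ x_true + 0.05 * rng.standard_normal(40)

alpha = np.ones(80)
for _ in range(100):
    mu, Sigma, alpha, a2 = sbl_update(Phi, s, alpha, eta=1.0, beta=400.0)

# By construction, each alpha_i is a stationary point of its update equation.
print(np.allclose(alpha**2 + alpha, a2))  # True (eta = 1 here)
```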
The above algorithm must invert a matrix of size $2N_a \times 2N_a$ at each iteration, so the computational complexity per iteration is $O(8N_a^3)$. For SAR images, the large amount of data makes the algorithm computationally expensive and slow to converge. To address this problem and improve the convergence speed, a fast RVM algorithm based on the sparse Bayesian model is used in this section to achieve fast signal reconstruction.
According to the Woodbury matrix identity, the variance Σ can be organized as
$\Sigma = \Lambda^{-1} - \Lambda^{-1}\tilde{\Theta}^{T}\left(\beta^{-1} I + \tilde{\Theta}\Lambda^{-1}\tilde{\Theta}^{T}\right)^{-1}\tilde{\Theta}\Lambda^{-1} = \Lambda^{-1} - \Lambda^{-1}\tilde{\Theta}^{T} J^{-1} \tilde{\Theta}\Lambda^{-1}$
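The Woodbury rearrangement above can be confirmed numerically against direct inversion:

```python
import numpy as np

# Check that the Woodbury form of Sigma matches direct inversion.
rng = np.random.default_rng(3)
M, N = 10, 6
Phi = rng.standard_normal((M, N))
beta = 2.5
alpha = rng.uniform(0.5, 2.0, N)
Lam = np.diag(1.0 / alpha)      # Lambda = diag(1/alpha_i)
A = np.diag(alpha)              # Lambda^{-1}

direct = np.linalg.inv(beta * Phi.T @ Phi + Lam)
J = np.eye(M) / beta + Phi @ A @ Phi.T
woodbury = A - A @ Phi.T @ np.linalg.inv(J) @ Phi @ A
print(np.allclose(direct, woodbury))  # True
```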
To increase sparsity and reduce the computational effort, only one hyperparameter $\alpha_i$ is updated in each iteration. The fast reconstruction algorithm initializes an empty model ($\alpha = 0$), and basis elements are added to the model through successive iterations. Based on this idea, $J$ can be represented as
$J = \beta^{-1} I + \sum_j \alpha_j \tilde{\theta}_j \tilde{\theta}_j^{T} = J_{-i} + \alpha_i \tilde{\theta}_i \tilde{\theta}_i^{T}$
where $J_{-i}$ is $J$ with the contribution of the $i$th basis vector removed. Using the Woodbury identity, the inverse of $J$ can be represented as
$J^{-1} = J_{-i}^{-1} - \frac{J_{-i}^{-1}\tilde{\theta}_i \tilde{\theta}_i^{T} J_{-i}^{-1}}{1/\alpha_i + \tilde{\theta}_i^{T} J_{-i}^{-1}\tilde{\theta}_i}$
and using the determinant identity yields
$|J| = |J_{-i}|\left(1 + \alpha_i \tilde{\theta}_i^{T} J_{-i}^{-1}\tilde{\theta}_i\right)$
At this point, the equivalent logarithmic form of p α , η , β , s ˜ can be organized as
$\mathcal{L}(\alpha) = -\frac{1}{2}\left[\log|J_{-i}| + \tilde{s}^{T} J_{-i}^{-1} \tilde{s}\right] - \frac{\eta}{2}\sum_{j\neq i}\alpha_j + \frac{1}{2}\left[\log\frac{1}{1+\alpha_i s_i} + \frac{q_i^2 \alpha_i}{1+\alpha_i s_i} - \eta\alpha_i\right] = \mathcal{L}(\alpha_{-i}) + \ell(\alpha_i)$
where $\ell(\alpha_i) = \frac{1}{2}\left[\log\frac{1}{1+\alpha_i s_i} + \frac{q_i^2 \alpha_i}{1+\alpha_i s_i} - \eta\alpha_i\right]$, and $s_i = \tilde{\theta}_i^{T} J_{-i}^{-1} \tilde{\theta}_i$ and $q_i = \tilde{\theta}_i^{T} J_{-i}^{-1} \tilde{s}$ are the sparsity and quality factors of the $i$th basis. The derivative of $\mathcal{L}(\alpha)$ with respect to $\alpha_i$ can be expressed as
$\frac{d\mathcal{L}(\alpha)}{d\alpha_i} = \frac{d\ell(\alpha_i)}{d\alpha_i} = \frac{1}{2}\left[-\frac{s_i}{1+\alpha_i s_i} + \frac{q_i^2}{\left(1+\alpha_i s_i\right)^2} - \eta\right] = -\frac{\alpha_i^2 \eta s_i^2 + \alpha_i\left(s_i^2 + 2\eta s_i\right) + \eta + s_i - q_i^2}{2\left(1+\alpha_i s_i\right)^2}$
Note that the numerator is quadratic in $\alpha_i$ while the denominator is always positive; therefore, $d\mathcal{L}(\alpha)/d\alpha_i = 0$ is satisfied at
$\alpha_i = \frac{-s_i\left(s_i + 2\eta\right) \pm s_i\sqrt{\Delta}}{2\eta s_i^2}$
where $\Delta = \left(s_i + 2\eta\right)^2 - 4\eta\left(s_i - q_i^2 + \eta\right)$. Observe that if $q_i^2 - s_i \leq \eta$, then $\Delta \leq \left(s_i + 2\eta\right)^2$ and both solutions in (48) are non-positive; since $d\ell(\alpha_i)/d\alpha_i\,|_{\alpha_i = 0} < 0$ in this case, the maximum of $\mathcal{L}(\alpha)$ arises at $\alpha_i = 0$. On the other hand, if $q_i^2 - s_i > \eta$, then $d\ell(\alpha_i)/d\alpha_i\,|_{\alpha_i = 0} > 0$ and $d\ell(\alpha_i)/d\alpha_i\,|_{\alpha_i \to \infty} < 0$, so $\mathcal{L}(\alpha)$ attains its maximum at the positive solution. In summary, the estimate of $\alpha_i$ is expressed as
$\alpha_i = \begin{cases} \dfrac{s_i\sqrt{\left(s_i + 2\eta\right)^2 - 4\eta\left(s_i - q_i^2 + \eta\right)} - s_i\left(s_i + 2\eta\right)}{2\eta s_i^2}, & \text{if } q_i^2 - s_i > \eta \\ 0, & \text{otherwise} \end{cases}$
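The resulting per-basis update reduces to a few scalar operations; the helper below is a sketch with illustrative inputs.

```python
import numpy as np

def alpha_update(s_i, q_i, eta):
    """Per-basis hyperparameter update from the closed-form solution above.

    s_i, q_i are the sparsity and quality factors of basis i; eta is the
    Laplace rate. Returns the alpha_i that maximizes l(alpha_i), or 0
    (basis pruned) when q_i^2 - s_i <= eta. Inputs are illustrative.
    """
    if q_i**2 - s_i <= eta:
        return 0.0
    disc = (s_i + 2 * eta)**2 - 4 * eta * (s_i - q_i**2 + eta)
    return (s_i * np.sqrt(disc) - s_i * (s_i + 2 * eta)) / (2 * eta * s_i**2)

# A basis whose quality factor dominates is retained with alpha_i > 0 ...
print(alpha_update(s_i=1.0, q_i=4.0, eta=1.0) > 0)   # True
# ... while a weak basis is pruned.
print(alpha_update(s_i=1.0, q_i=1.0, eta=1.0) == 0)  # True
```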

5. Experimental Results

This section validates the effectiveness of the proposed algorithm through experiments on measured data. First, the knowledge extraction component is analyzed by fitting different segments of the classification results with classical models, validating both the accuracy of the extracted knowledge and the necessity of KA processing. Subsequently, for the proposed two-step clutter suppression algorithm, the performance of the preprocessing and residual clutter suppression steps is evaluated separately, confirming the effectiveness of the proposed clutter suppression algorithm. Finally, the proposed algorithm is compared with existing algorithms, further validating its superior processing performance in complex scenes.

5.1. Extraction of Knowledge Information

The extraction of knowledge information realizes the classification of homogeneous and nonhomogeneous regions in complex scenes, where different homogeneous regions (land, sea, etc.) may also follow different distributions.
In Section 5.1, the refined classification results of the complex scene based on knowledge information are analyzed; separate fittings are conducted for each homogeneous and nonhomogeneous region. The Weibull, Rayleigh, Lognormal, and Gamma distributions are taken from the classical models for comparison, and the goodness of fit of the different distributions is tested using the statistical measure $KS$. The probability density function (PDF) equations of the Weibull, Rayleigh, Lognormal, and Gamma distributions are shown below.
$f(z) = \frac{\chi}{\zeta}\left(\frac{z}{\zeta}\right)^{\chi - 1}\exp\!\left[-\left(\frac{z}{\zeta}\right)^{\chi}\right]$
$f(z) = \frac{z}{\kappa^2}\exp\!\left(-\frac{z^2}{2\kappa^2}\right), \quad z \geq 0$
$f(z) = \frac{1}{\sqrt{2\pi}\, z\, \sigma_l}\exp\!\left[-\frac{\left(\ln z - \mu_l\right)^2}{2\sigma_l^2}\right]$
$f(z) = \frac{\beta_g^{\alpha_g}}{\Gamma(\alpha_g)} z^{\alpha_g - 1} e^{-\beta_g z}, \quad z > 0$
where $\chi$ and $\zeta$ are the shape and scale parameters of the Weibull distribution, $\kappa$ is the Rayleigh factor, $\mu_l$ and $\sigma_l$ are the log-mean and log-standard deviation of the Lognormal distribution, $\alpha_g$ and $\beta_g$ are the shape and rate parameters of the Gamma distribution, and $\Gamma$ is the gamma function.
The K S test, which tests whether a given clutter sample obeys a particular distribution, has a test statistic that can be expressed as
$KS = \max_{-\infty < x < +\infty}\left|F_{empirical}(x) - F_{theoretical}(x)\right|$
where F e m p i r i c a l x is the cumulative distribution function (CDF) of the empirical distribution, and F t h e o r e t i c a l x is the CDF of the hypothesized distribution. A smaller value of K S indicates a better fit of the data to the current model.
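As an illustration of the goodness-of-fit comparison, the candidate models can be fitted and ranked by their KS statistics using SciPy; the samples below are synthetic, not the paper's measured clutter.

```python
import numpy as np
from scipy import stats

# Fit each candidate model to synthetic amplitude samples and compare the
# resulting KS statistics; the data here are simulated for illustration.
rng = np.random.default_rng(4)
samples = rng.lognormal(mean=0.0, sigma=0.5, size=2000)

candidates = {
    'weibull':   stats.weibull_min,
    'rayleigh':  stats.rayleigh,
    'lognormal': stats.lognorm,
    'gamma':     stats.gamma,
}
ks = {name: stats.kstest(samples, dist.cdf, args=dist.fit(samples)).statistic
      for name, dist in candidates.items()}
print(min(ks, key=ks.get))  # the lognormal model should fit these samples best
```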
Firstly, the homogeneous regions in the refined classification results are analyzed, and four homogeneous regions are arbitrarily selected for fitting. The extracted four homogeneous regions and their fitting results are shown in Figure 8. It can be seen that the intensity values in different homogeneous regions exhibit significant differences. To investigate the optimal fitting models for different homogeneous regions, numerical analysis is conducted using the measure K S .
Table 2 reports the $KS$ statistic for each of the four homogeneous regions. The statistical properties of homogeneous regions 1 and 4 are closer to a Lognormal distribution, while those of regions 2 and 3 are closer to a Gamma distribution. The experimental results show that different homogeneous regions in complex scenes may satisfy different distributional properties.
Secondly, the nonhomogeneous regions in the refined classification results are analyzed. Nonhomogeneous regions due to the presence of strong scattering points as well as abrupt terrain changes are extracted. The extracted nonhomogeneous regions and their fitting results are shown in Figure 9, and the results of the same fitting test measure K S are given in Table 3.
Based on the fitting results in Figure 9 and the $KS$ values in Table 3, it can be seen that the classical models all have difficulty fitting the nonhomogeneous regions. To better reflect the difference between homogeneous and nonhomogeneous regions, we introduce a statistical model that can uniformly describe clutter regions [50,51]. The PDF of the introduced model can be expressed as
$p(t \,|\, n, v) = \frac{1}{B(n, v)}\left(\frac{n}{v}\right)^{n} t^{\,n-1}\left(1 + \frac{nt}{v}\right)^{-(n+v)}$
where $B(n, v) = \Gamma(n)\Gamma(v)/\Gamma(n+v)$, $n$ represents the number of looks, and $v$ represents the texture parameter. A larger texture parameter $v$ indicates higher homogeneity; conversely, a smaller $v$ indicates stronger nonhomogeneity.
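Under our reading of the unified model (n looks, texture parameter v, Beta-function normalization), the density can be checked to integrate to one; the parameter values are illustrative.

```python
import numpy as np
from scipy import integrate, special

def texture_pdf(t, n, v):
    """Unified clutter model with n looks and texture parameter v;
    this PDF form is our reading of the model, with B(n, v) the Beta
    function."""
    B = special.gamma(n) * special.gamma(v) / special.gamma(n + v)
    return (n / v)**n * t**(n - 1) * (1 + n * t / v)**(-(n + v)) / B

# The density should integrate to one for representative parameters.
total, _ = integrate.quad(texture_pdf, 0, np.inf, args=(4.0, 10.0))
print(abs(total - 1) < 1e-6)  # True
```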
From Table 4, it can be seen that the homogeneous regions all have larger texture parameter values than the nonhomogeneous regions. The experimental results once again verify the correctness of the extracted knowledge of the complex scene.

5.2. Clutter Suppression Performance in Complex Scenes

In the analysis of the experimental results in Section 5.2, the refined clutter suppression preprocessing step of the proposed two-step algorithm is analyzed first. Comparisons are made with the clutter suppression results obtained using a CCM estimated from a single sample region.
Figure 10 presents the imaging results of the raw data, the clutter suppression results obtained using the single sample CCM, and the results after processing with the proposed refined clutter suppression preprocessing. Localized magnification was applied to the target regions in different experimental results, with target locations marked using white and orange rectangles.
In the imaging result of the raw data, only two targets can be identified within the white rectangle, while the targets within the orange rectangle are submerged in the background clutter and barely detectable. After processing the whole complex scene using a single-sample CCM, almost all targets can be seen in the white rectangle, albeit with low contrast, and two targets can be detected in the orange rectangle. A certain degree of clutter suppression can thus be achieved using single-sample CCM processing, but the effect is not satisfactory, because the CCM obtained from a single sample does not reflect the specific clutter characteristics of each part of the complex scene.
The results of the refined clutter suppression preprocessing demonstrate that targets within the white and orange rectangular regions are clearly distinguishable, with a pronounced contrast between the targets and the background. The proposed algorithm fully utilizes the knowledge information of echoes to achieve multi-strategy processing for different types of clutter.
After conducting a general analysis of the processing results, specific echo curves at designated locations were extracted for numerical analysis. The analysis first focused on the target locations, with randomly selected targets and their corresponding 1D data represented by the yellow dashed box and yellow dashed line in Figure 11a, respectively.
Figure 12a analyzes the curve passing through the coordinate point [1107, 867]. It can be observed that the clutter suppression effect of the single sample CCM is limited, reducing the echo intensity by only 3 to 4 dB. In contrast, the proposed multi-strategy clutter suppression preprocessing method further reduces the overall clutter background intensity by 10 dB. Furthermore, due to the application of the PST algorithm in the strong scattering region, the echo intensity of the strong scattering point is also attenuated by 10 dB.
Figure 12b analyzes the curve passing through the coordinate point [1047, 1865]. There are no strong scatter points in the data at this time. For the single-sample CCM processing, the clutter intensity is reduced by approximately 5 dB within the range cell from 0 to 1080. However, within the range cell from 1080 to 2400, the clutter intensity decreased by nearly 10 dB. This indicates that the CCM estimated from a single sample at this point better aligns with the clutter characteristics within the range cell from 1080 to 2400. The proposed algorithm, based on a single-sample CCM processing, further reduces clutter intensity by 5 dB and 8 dB within the range cell from 0 to 1080 and from 1080 to 2400, respectively.
The analysis then focuses solely on the portion containing only clutter background. Randomly selected 1D data is shown as the white dashed line in Figure 11b. The presence of complex scenes results in significant intensity differences among echoes from different azimuth cells. The results in Figure 12c,d, indicate that while single sample CCM processing can significantly suppress clutter echoes, it fails to achieve optimal results at all locations. In contrast, the proposed method delivers optimal processing performance across all positions.
After analyzing the preprocessing part of the proposed algorithm, we next analyze the residual clutter suppression stage based on the sparse Bayesian approach.
Due to the large number of targets in the scene, we analyze the processing effect by performing residual clutter suppression on targets located in different range cells. All the targets in Figure 10 are analyzed in four parts, as shown in Figure 13. To clearly demonstrate the suppression of residual clutter, only the interval of range cells containing the targets is processed in each of the four parts.
The residual clutter suppression results show that the two-step clutter suppression method combined with knowledge information can achieve effective clutter suppression in complex scenes. To quantitatively evaluate the clutter suppression performance of the proposed method, comparisons are made based on the average output signal-to-clutter-plus-noise ratio (SCNR), defined as
SCNR = 10 log 10 P s P c + n
where P s denotes the power of the signal, and P c + n represents the average power of clutter plus noise. A quantitative analysis of the experimental results for clutter suppression preprocessing and residual clutter suppression yields Table 5.
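A minimal helper for computing the average output SCNR defined above from an image and a target mask; the mask-based power estimates and the toy data are illustrative.

```python
import numpy as np

def output_scnr_db(image, target_mask):
    """Average output SCNR in dB: signal power over the target pixels
    divided by the average power of the remaining (clutter-plus-noise)
    pixels. Purely illustrative helper; masks are assumptions."""
    p_signal = np.mean(np.abs(image[target_mask])**2)
    p_cn = np.mean(np.abs(image[~target_mask])**2)
    return 10 * np.log10(p_signal / p_cn)

# Toy example: a target 100x the background power gives 20 dB.
img = np.ones((8, 8))
mask = np.zeros((8, 8), bool)
img[2, 3] = 10.0
mask[2, 3] = True
print(round(output_scnr_db(img, mask), 1))  # 20.0
```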

5.3. Comparison of the Proposed Algorithm with Existing Algorithms

In Section 5.3, the proposed KA-based algorithm is compared with traditional clutter suppression algorithms that do not introduce knowledge information.
As shown in Figure 14, compared with the traditional algorithm, the proposed algorithm achieves superior clutter suppression while fully preserving weak target echoes. The diversity of clutter environments in complex scenes affects both the effectiveness of sparse Bayesian methods in residual clutter suppression and the quality of target reconstruction. Incorporating knowledge information not only enables refined classification of complex scenes but also guides the clutter suppression preprocessing for such environments, achieving superior clutter suppression performance. To more clearly demonstrate the processing capability of the proposed algorithm, echo data curves at specific locations were extracted from the processing results for further comparison between the proposed and traditional algorithms. The specific locations of the extracted curves for the different partial targets are shown in Figure 15a–e, while comparisons of the echo curves between the different methods are presented in Figure 15e–h. The results demonstrate that the proposed algorithm achieves superior processing performance compared with the traditional algorithm.
Finally, Table 6 presents a quantitative analysis of the experimental results for clutter suppression, comparing the performance of the proposed algorithm with that of the traditional algorithm.

6. Discussion

This paper proposes a knowledge-aided multichannel SAR clutter suppression method for complex scenes. It performs superpixel-level processing on single-channel images and employs an adaptive superpixel fusion algorithm to achieve refined classification in complex scenes. Subsequently, a two-step clutter suppression processing method is introduced, comprising multi-strategy clutter suppression preprocessing and sparse Bayesian residual clutter suppression. This method not only provides effective classification information for complex scenes but also enables more efficient clutter suppression based on this classification.
Existing knowledge-aided multichannel SAR clutter suppression methods struggle to balance compatibility and timeliness in complex scenes. For instance, while LUCD and DTEMS data from the same geographic location can meet compatibility requirements, their non-real-time acquisition makes it difficult to satisfy timeliness demands. Compared to existing algorithms, the proposed method not only achieves effective clutter suppression in complex scenes but also advances the application of knowledge-aided approaches in multichannel SAR systems.
Limitations of this study include the following: during multichannel echo modeling, target echoes are modeled using traditional multichannel SAR echo model parameters. While these parameters satisfy the requirements of multichannel SAR moving platforms, a non-negligible phase term may exist in more complex motion scenes [52] (e.g., high-speed target movement). Future work could incorporate complex target motion into the modeling process to achieve more accurate echo modeling and effective clutter suppression in multichannel SAR under more challenging conditions.

7. Conclusions

This paper proposes a knowledge-aided solution for clutter suppression in multichannel SAR under complex scenes. First, knowledge information is extracted from single-channel imaging results to achieve refined classification in complex scenes. Subsequently, based on this knowledge, a two-step clutter suppression method is introduced, combining multi-strategy clutter suppression preprocessing with sparse Bayesian residual clutter suppression. Finally, a detailed analysis of each step in the algorithm was conducted using measured data. Comparisons with traditional algorithms that do not incorporate knowledge information demonstrated that the proposed method exhibits superior clutter suppression performance in complex scenes.
This KA approach overcomes limitations of conventional knowledge application, offering novel solutions for clutter suppression in complex scenes. Future work aims to extract knowledge information about clutter from the matrix dimension, integrating classification information from complex scenes with knowledge information from the clutter CCM. Tensor techniques will be employed to perform principal component analysis. By incorporating knowledge information from more dimensions to guide clutter suppression, the research scope of KA methods in complex scenes will be expanded.

Author Contributions

Conceptualization and methodology, N.K.; writing—original draft preparation, N.K.; writing—review and editing, N.K., Y.Z., Z.H., Q.H. and H.R.; supervision, Y.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China under Grant 62371170.

Data Availability Statement

Restrictions apply to the availability of the data, which were used under license for this study. Data are available from the authors with permission from the Key Laboratory of Marine Environmental Monitoring and Information Processing.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Brown, W.M. Synthetic Aperture Radar. IEEE Trans. Aerosp. Electron. Syst. 1967, AES-3, 217–229. [Google Scholar] [CrossRef]
  2. Pastina, D.; Turin, F. Exploitation of the COSMO-SkyMed SAR System for GMTI Applications. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2015, 8, 966–979. [Google Scholar] [CrossRef]
  3. Cerutti-Maori, D.; Klare, J.; Brenner, A.R.; Ender, J.H.G. Wide-Area Traffic Monitoring with the SAR/GMTI System PAMIR. IEEE Trans. Geosci. Remote Sens. 2008, 46, 3019–3030. [Google Scholar] [CrossRef]
  4. Song, C.; Wang, B.; Xiang, M.; Dong, Q.; Wang, Y.; Wang, Z.; Xu, W.; Wang, R. A General Framework for Slow and Weak Range-Spread Ground Moving Target Indication Using Airborne Multichannel High-Resolution Radar. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5113616. [Google Scholar] [CrossRef]
  5. Li, Y.; Chen, J.; Zhu, J. A New Ground Accelerating Target Imaging Method for Airborne CSSAR. IEEE Geosci. Remote Sens. Lett. 2024, 21, 4013305. [Google Scholar] [CrossRef]
  6. Li, H.L.; Chen, S.W. Polyhedral Corner Reflectors Multidomain Joint Characterization with Fully Polarimetric Radar. IEEE Trans. Antennas Propag. 2025, 73, 10679–10693. [Google Scholar] [CrossRef]
  7. Li, Y.; Liang, X.; Liang, J.; Chen, J. Image-Domain Signal Modeling and Refocusing of Air Moving Targets for MEO Multichannel SAR. IEEE Trans. Geosci. Remote Sens. 2024, 62, 5224514. [Google Scholar] [CrossRef]
  8. Zhang, Y.; Zhang, X.; Li, H.; Wang, Z.; Zhuang, Y. Detection and Imaging of Moving Objects with Multichannel SAR System. In Proceedings of the 2015 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Milan, Italy, 26–31 July 2015; IEEE: New York, NY, USA, 2015; pp. 2417–2420. [Google Scholar]
  9. Makhoul, E.; Baumgartner, S.V.; Jager, M.; Broquetas, A. Multichannel SAR-GMTI in Maritime Scenarios with F-SAR and TerraSAR-X Sensors. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2015, 8, 5052–5067. [Google Scholar] [CrossRef]
  10. Raney, R. Synthetic Aperture Imaging Radar and Moving Targets. IEEE Trans. Aerosp. Electron. Syst. 1971, AES-7, 499–505. [Google Scholar] [CrossRef]
  11. Cerutti-Maori, D.; Sikaneta, I. A Generalization of DPCA Processing for Multichannel SAR/GMTI Radars. IEEE Trans. Geosci. Remote Sens. 2013, 51, 560–572. [Google Scholar] [CrossRef]
  12. Ward, J. Space-Time Adaptive Processing for Airborne Radar. In Proceedings of the 1995 International Conference on Acoustics, Speech, and Signal Processing, Detroit, MI, USA, 9–12 May 1995; IEEE: New York, NY, USA, 1995; Volume 5, pp. 2809–2812. [Google Scholar]
  13. Ender, J.H.G. Space-Time Processing for Multichannel Synthetic Aperture Radar. Electron. Commun. Eng. J. 1999, 11, 29–38. [Google Scholar] [CrossRef]
  14. Li, Z.; Ye, H.; Liu, Z.; Sun, Z.; An, H.; Wu, J.; Yang, J. Bistatic SAR Clutter-Ridge Matched STAP Method for Nonstationary Clutter Suppression. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5216914. [Google Scholar] [CrossRef]
  15. Duan, K.; Xie, W.; Wang, Y. A New STAP Method for Nonhomogeneous Clutter Environment. In Proceedings of the 2010 the 2nd International Conference on Industrial Mechatronics and Automation, Wuhan, China, 30–31 May 2010; IEEE: New York, NY, USA, 2010; pp. 66–70. [Google Scholar]
  16. Sun, Y.; Yang, X.; Long, T.; Sarkar, T.K. Robust Sparse Bayesian Learning STAP Method for Discrete Interference Suppression in Nonhomogeneous Clutter. In Proceedings of the 2017 IEEE Radar Conference (RadarConf), Seattle, WA, USA, 8–12 May 2017; IEEE: New York, NY, USA, 2017; pp. 1003–1008. [Google Scholar]
  17. Wang, Y.; Chen, J. Robust STAP Approach in Nonhomogeneous Clutter Environments. In Proceedings of the 2001 CIE International Conference on Radar Proceedings (Cat No.01TH8559), Beijing, China; IEEE: New York, NY, USA, 2001; pp. 753–757. [Google Scholar]
  18. Rangaswamy, M.; Chen, P.; Michels, J.H.; Himed, B. A Comparison of Two Non-Homogeneity Detection Methods for Space-Time Adaptive Processing. In Proceedings of the Sensor Array and Multichannel Signal Processing Workshop Proceedings, Rosslyn, VA, USA, 6 August 2002; IEEE: New York, NY, USA, 2002; pp. 355–359. [Google Scholar]
  19. Guo, Q.; Liu, L.; Kaliuzhnyi, M.; Wang, Y.; Qi, L. STAP Training Samples Selection Based on GIP and Volume Cross Correlation. IEEE Geosci. Remote Sens. Lett. 2022, 19, 4028205. [Google Scholar] [CrossRef]
  20. Rabideau, D.J.; Steinhardt, A.O. Improved Adaptive Clutter Cancellation through Data-Adaptive Training. IEEE Trans. Aerosp. Electron. Syst. 1999, 35, 879–891. [Google Scholar] [CrossRef]
  21. Luo, C.; Zhang, F.; Fu, Y.; Zhang, W.; Yang, W.; Yu, R. Multichannel SAR Moving-Target Detection Based on HPD Manifold in Heterogeneous Clutter. IEEE Trans. Geosci. Remote Sens. 2025, 63, 5216119. [Google Scholar] [CrossRef]
  22. Prünte, L. GMTI from Multichannel SAR Images Using Compressed Sensing under Off-Grid Conditions. In Proceedings of the 2013 14th International Radar Symposium (IRS), Dresden, Germany, 19–21 June 2013; IEEE: New York, NY, USA, 2013. [Google Scholar]
  23. Prünte, L. Application of Distributed Compressed Sensing for GMTI Purposes. In Proceedings of the IET International Conference on Radar Systems (Radar 2012), Glasgow, UK, 22–25 October 2012; Institution of Engineering and Technology: Hertfordshire, UK, 2012; p. 21. [Google Scholar]
  24. Rani, M.; Dhok, S.B.; Deshmukh, R.B. A Systematic Review of Compressive Sensing: Concepts, Implementations and Applications. IEEE Access 2018, 6, 4875–4894. [Google Scholar] [CrossRef]
  25. Li, J.; Zhu, X.; Stoica, P.; Rangaswamy, M. High Resolution Angle-Doppler Imaging for MTI Radar. IEEE Trans. Aerosp. Electron. Syst. 2010, 46, 1544–1556. [Google Scholar] [CrossRef]
  26. Maria, S.; Fuchs, J.-J. Application of the Global Matched Filter to STAP Data: An Efficient Algorithmic Approach. In Proceedings of the 2006 IEEE International Conference on Acoustics, Speech and Signal Processing, Toulouse, France, 14–19 May 2006; IEEE: New York, NY, USA, 2006; Volume 4, pp. IV-1013–IV-1016. [Google Scholar]
  27. Zhang, W.; An, R.; He, N.; He, Z.; Li, H. Reduced Dimension STAP Based on Sparse Recovery in Heterogeneous Clutter Environments. IEEE Trans. Aerosp. Electron. Syst. 2020, 56, 785–795. [Google Scholar] [CrossRef]
  28. Cui, N.; Xing, K.; Yu, Z.; Duan, K. Tensor-Based Sparse Recovery Space-Time Adaptive Processing for Large Size Data Clutter Suppression in Airborne Radar. IEEE Trans. Aerosp. Electron. Syst. 2022, 59, 907–922. [Google Scholar] [CrossRef]
  29. Mu, H.; Zhang, Y.; Jiang, Y.; Yang, T. STAP-Based GMTI for Multichannel SAR with Sparse Sampling. In Proceedings of the 2017 IEEE Radar Conference (RadarConf), Seattle, WA, USA, 8–12 May 2017; IEEE: New York, NY, USA, 2017; pp. 1483–1487. [Google Scholar]
  30. Li, X.; Yang, Z.; Tan, X.; Li, J. A Robust KA-STAP Method for Terrain Clutter Suppression in Hybrid Baseline Radar Systems. IET Conf. Proc. 2023, 2022, 500–505. [Google Scholar] [CrossRef]
  31. Shi, J.; Zhang, W.; He, Z.; Deng, M.; Lu, X. Joint Design of Transmit Beamforming and STAP Filter in the Modified Phased Array Based on Prior Information. In Proceedings of the IGARSS 2022 IEEE International Geoscience and Remote Sensing Symposium, Kuala Lumpur, Malaysia, 17–22 July 2022; IEEE: New York, NY, USA, 2022; pp. 2995–2998. [Google Scholar]
  32. Asaro, F.; Prati, C.M.; Belletti, B.; Bizzi, S.; Carbonneau, P. Land Use Analysis Using a Compact Parametrization of Multi-Temporal SAR Data. In Proceedings of the IGARSS 2018 IEEE International Geoscience and Remote Sensing Symposium, Valencia, Spain, 22–27 July 2018; IEEE: New York, NY, USA, 2018; pp. 5823–5826. [Google Scholar]
  33. Ohki, M.; Shimada, M. Large-Area Land Use and Land Cover Classification with Quad, Compact, and Dual Polarization SAR Data by PALSAR-2. IEEE Trans. Geosci. Remote Sens. 2018, 56, 5550–5557. [Google Scholar] [CrossRef]
  34. Zhu, X.; Li, J.; Stoica, P. Knowledge-Aided Space-Time Adaptive Processing. IEEE Trans. Aerosp. Electron. Syst. 2011, 47, 1325–1336. [Google Scholar] [CrossRef]
  35. He, M.D.; Cao, J.S. Recursive KA-STAP Algorithm Based on QR Decomposition. In Proceedings of the 2013 International Workshop on Microwave and Millimeter Wave Circuits and System Technology, Chengdu, China, 24–25 October 2013; IEEE: New York, NY, USA, 2013; pp. 391–394. [Google Scholar]
  36. Xiong, Y.; Xie, W.; Wang, Y.; Chen, W.; Hou, M. Short-Range Nonstationary Clutter Suppression for Airborne KA-STAP Radar in Complex Terrain Environment. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2025, 18, 2766–2776. [Google Scholar] [CrossRef]
  37. Xiong, Y.; Xie, W.; Li, H.; Gao, X. Colored-Loading Factor Optimization for Airborne KA-STAP Radar. IEEE Sens. J. 2023, 23, 23317–23326. [Google Scholar] [CrossRef]
  38. Hu, J.; Li, J.; Li, H.; Li, K.; Liang, J. A Novel Covariance Matrix Estimation via Cyclic Characteristic for STAP. IEEE Geosci. Remote Sens. Lett. 2020, 17, 1871–1875. [Google Scholar] [CrossRef]
  39. Du, X.; Jing, Y.; Chen, X.; Cui, G.; Zheng, J. Clutter Covariance Matrix Estimation via KA-SADMM for STAP. IEEE Geosci. Remote Sens. Lett. 2024, 21, 3507505. [Google Scholar] [CrossRef]
  40. Li, H.L.; Liu, S.W.; Chen, S.W. PolSAR Ship Characterization and Robust Detection at Different Grazing Angles with Polarimetric Roll-Invariant Features. IEEE Trans. Geosci. Remote Sens. 2024, 62, 5225818. [Google Scholar] [CrossRef]
  41. Liu, B.; Hu, H.; Wang, H.; Wang, K.; Liu, X.; Yu, W. Superpixel-Based Classification with an Adaptive Number of Classes for Polarimetric SAR Images. IEEE Trans. Geosci. Remote Sens. 2013, 51, 907–924. [Google Scholar] [CrossRef]
  42. Chen, Z.; Zhong, Z.; Pan, X.; Xi, X. A Novel Improved SLIC Superpixel Segmentation Algorithm. In Proceedings of the 2022 IEEE 4th International Conference on Civil Aviation Safety and Information Technology (ICCASIT), Dali, China, 12–14 October 2022; IEEE: New York, NY, USA, 2022; pp. 1202–1206. [Google Scholar]
  43. Yin, J.; Wang, T.; Du, Y.; Liu, X.; Zhou, L.; Yang, J. SLIC Superpixel Segmentation for Polarimetric SAR Images. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5201317. [Google Scholar] [CrossRef]
  44. Feng, J.; Cao, Z.; Pi, Y. Amplitude and Texture Feature Based SAR Image Classification with a Two-Stage Approach. In Proceedings of the 2014 IEEE Radar Conference, Cincinnati, OH, USA, 19–23 May 2014; IEEE: New York, NY, USA, 2014; pp. 0360–0364. [Google Scholar]
  45. Reed, I.S.; Mallett, J.D.; Brennan, L.E. Rapid Convergence Rate in Adaptive Arrays. IEEE Trans. Aerosp. Electron. Syst. 1974, AES-10, 853–863. [Google Scholar] [CrossRef]
  46. Wu, Q.; Zhang, Y.D.; Amin, M.G.; Himed, B. Space–Time Adaptive Processing and Motion Parameter Estimation in Multistatic Passive Radar Using Sparse Bayesian Learning. IEEE Trans. Geosci. Remote Sens. 2016, 54, 944–957. [Google Scholar] [CrossRef]
  47. Babacan, S.D.; Molina, R.; Katsaggelos, A.K. Bayesian Compressive Sensing Using Laplace Priors. IEEE Trans. Image Process. 2010, 19, 53–63. [Google Scholar] [CrossRef]
  48. Ji, S.; Xue, Y.; Carin, L. Bayesian Compressive Sensing. IEEE Trans. Signal Process. 2008, 56, 2346–2356. [Google Scholar] [CrossRef]
  49. Tipping, M.E. Sparse Bayesian Learning and the Relevance Vector Machine. J. Mach. Learn. Res. 2001, 1, 211–244. [Google Scholar]
  50. Gierull, C.H.; Sikaneta, I.; Cerutti-Maori, D. Two-Step Detector for RADARSAT-2's Experimental GMTI Mode. IEEE Trans. Geosci. Remote Sens. 2013, 51, 436–454. [Google Scholar] [CrossRef]
  51. Cerutti-Maori, D.; Sikaneta, I.; Gierull, C.H. Optimum SAR/GMTI Processing and Its Application to the Radar Satellite RADARSAT-2 for Traffic Monitoring. IEEE Trans. Geosci. Remote Sens. 2012, 50, 3868–3881. [Google Scholar] [CrossRef]
  52. Li, H.L.; Chen, S.W. General Polarimetric Correlation Pattern: A Visualization and Characterization Tool for Target Joint-Domain Scattering Mechanisms Investigation. IEEE Trans. Geosci. Remote Sens. 2026, 64, 5200417. [Google Scholar] [CrossRef]
Figure 1. Complex scene compositions and their challenges.
Figure 2. Schematic diagram of the composition of a complex scene.
Figure 3. Multichannel SAR echo geometric configuration.
Figure 4. Imaging results obtained from measured data. (a) Result of range compression only. (b) Result after azimuth compression is applied to the range-compressed data. (c) Selected region of interest.
Figure 5. Superpixel segmentation processing. (a) Grayscale display of the imaging result. (b) Superpixel segmentation result. (c) Problems remaining after superpixel segmentation.
Figure 6. The schematic diagram of the proposed adaptive superpixel fusion algorithm.
Figure 7. Different components in a nonhomogeneous environment. (a) Appearance in the superpixel segmentation result. (b) Corresponding appearance in the imaging result.
Figure 8. Extraction and fitting of different homogeneous regions. (a) Extraction of homogeneous region 1. (b) Extraction of homogeneous region 2. (c) Extraction of homogeneous region 3. (d) Extraction of homogeneous region 4. (e) Fitting of homogeneous region 1. (f) Fitting of homogeneous region 2. (g) Fitting of homogeneous region 3. (h) Fitting of homogeneous region 4.
Figure 9. Extraction and fitting of different nonhomogeneous regions. (a) Extraction of nonhomogeneous region 1. (b) Extraction of nonhomogeneous region 2. (c) Extraction of nonhomogeneous region 3. (d) Fitting of nonhomogeneous region 1. (e) Fitting of nonhomogeneous region 2. (f) Fitting of nonhomogeneous region 3.
Figure 10. Overall performance comparison of multi-strategy clutter suppression preprocessing. (a) Imaging results of raw data. (b) Clutter suppression results of the single-sample CCM. (c) Clutter suppression results of multi-strategy clutter suppression preprocessing. (d) Localized magnification of the raw data imaging results. (e) Localized magnification of the single-sample CCM processing results. (f) Localized magnification of the multi-strategy clutter suppression preprocessing results.
Figure 11. Schematic diagram of specific data locations for numerical analysis. (a) Locations of the target curves. (b) Locations of the clutter background curves.
Figure 12. Numerical analysis of multi-strategy clutter suppression preprocessing. (a) Numerical analysis of target curve 1. (b) Numerical analysis of target curve 2. (c) Numerical analysis of clutter background curve 1. (d) Numerical analysis of clutter background curve 2.
Figure 13. Comparison of the target location before and after residual clutter suppression. (a) Clutter suppression preprocessing results for the targets in part one. (b) Clutter suppression preprocessing results for the targets in part two. (c) Clutter suppression preprocessing results for the targets in part three. (d) Clutter suppression preprocessing results for the targets in part four. (e) Residual clutter suppression processing results for the targets in part one. (f) Residual clutter suppression processing results for the targets in part two. (g) Residual clutter suppression processing results for the targets in part three. (h) Residual clutter suppression processing results for the targets in part four.
Figure 14. Comparison of the proposed algorithm with the traditional algorithm in clutter suppression. (a) Traditional algorithm processing results for part one targets. (b) Traditional algorithm processing results for part two targets. (c) Traditional algorithm processing results for part three targets. (d) Traditional algorithm processing results for part four targets. (e) Processing results of the proposed algorithm for part one targets. (f) Processing results of the proposed algorithm for part two targets. (g) Processing results of the proposed algorithm for part three targets. (h) Processing results of the proposed algorithm for part four targets.
Figure 15. Curve-level comparison between the proposed algorithm and the traditional algorithm in terms of clutter suppression. (a) Curve extraction position for part one targets. (b) Curve extraction position for part two targets. (c) Curve extraction position for part three targets. (d) Curve extraction position for part four targets. (e) Comparison of processing results for part one targets. (f) Comparison of processing results for part two targets. (g) Comparison of processing results for part three targets. (h) Comparison of processing results for part four targets.
Table 1. The system parameters of the airborne SAR.

Parameters                          Values
Platform velocity (m/s)             87
Altitude (km)                       7.5
Pulse repetition frequency (Hz)     800
Baseline (m)                        0.18
Sampling frequency (MHz)            88
Pulse width (μs)                    44
Bandwidth (MHz)                     420
Slant range (km)                    10
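Several quantities follow directly from the parameters in Table 1. The sketch below computes a few of them; these derived values are not stated in the paper, and a rounded speed of light of 3 × 10⁸ m/s is assumed:

```python
# Derived quantities from the Table 1 system parameters (assumed conventions).
C = 3e8  # speed of light (m/s), rounded

bandwidth = 420e6    # Hz
pulse_width = 44e-6  # s
velocity = 87.0      # m/s
prf = 800.0          # Hz

# Slant-range resolution of the compressed pulse: c / (2B)
range_resolution = C / (2 * bandwidth)

# Time-bandwidth product (pulse-compression ratio)
tbp = pulse_width * bandwidth

# Along-track platform displacement between pulses: v / PRF
azimuth_sample_spacing = velocity / prf

print(range_resolution, tbp, azimuth_sample_spacing)
```

With these parameters the slant-range resolution is roughly 0.36 m and the platform advances about 0.11 m per pulse.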
Table 2. KS results in homogeneous regions.

Regions          Gamma     Weibull   Rayleigh   Lognormal
Homo region 1    0.0575    0.0804    0.2037     0.0465
Homo region 2    0.0143    0.0319    0.1152     0.0333
Homo region 3    0.0178    0.0344    0.1365     0.0367
Homo region 4    0.0572    0.0730    0.1995     0.0266
Table 3. KS results in nonhomogeneous regions.

Regions             Gamma     Weibull   Rayleigh   Lognormal
Nonhomo region 1    0.0573    0.0730    0.8617     0.0580
Nonhomo region 2    0.2269    0.1552    0.7781     0.1752
Nonhomo region 3    0.0929    0.2037    0.7212     0.1817
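The KS statistics in Tables 2 and 3 measure the distance between each region's empirical amplitude distribution and a fitted candidate model, with smaller values indicating a better fit. A minimal sketch of how such statistics can be produced with SciPy, using synthetic Gamma-distributed samples in place of the paper's measured region data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic amplitude samples standing in for one homogeneous region
samples = rng.gamma(shape=4.0, scale=1.0, size=5000)

candidates = {
    "Gamma": stats.gamma,
    "Weibull": stats.weibull_min,
    "Rayleigh": stats.rayleigh,
    "Lognormal": stats.lognorm,
}

ks = {}
for name, dist in candidates.items():
    params = dist.fit(samples)  # maximum-likelihood parameter fit
    statistic, _ = stats.kstest(samples, dist.name, args=params)
    ks[name] = statistic

print(ks)  # smallest statistic identifies the best-fitting model
```

Since the synthetic data are Gamma distributed, the Gamma fit yields the smallest KS statistic, mirroring the pattern seen for most regions in Table 2.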
Table 4. The calculation results of the number of views and texture parameters for different regions.

Regions             n_v       v
Homogeneous 1       1.001     43.146
Homogeneous 2       1.237     42.117
Homogeneous 3       0.9576    37.166
Homogeneous 4       0.9831    46.245
Nonhomogeneous 1    0.8951    2.574
Nonhomogeneous 2    1.4235    1.485
Nonhomogeneous 3    1.3219    1.746
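The number of views n_v in Table 4 is conventionally estimated as the equivalent number of looks (ENL), i.e., the squared mean of the intensity divided by its variance. The paper's exact estimator may differ, so the following is only an illustrative sketch on synthetic multilook intensity data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic L-look intensity: Gamma(shape=L, scale=mu/L) has mean mu, var mu^2/L
L_true, mu = 4.0, 1.0
intensity = rng.gamma(shape=L_true, scale=mu / L_true, size=100_000)

# Method-of-moments estimate of the equivalent number of looks
enl = intensity.mean() ** 2 / intensity.var()
print(enl)  # close to the true look count of 4
```

Applied region by region, this kind of moment estimator yields values near 1 for single-look data, consistent with the n_v column of Table 4.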
Table 5. Quantitative analysis of clutter suppression preprocessing and residual clutter suppression (SCNR in dB).

Different Regions        Clutter Suppression Preprocessing   Residual Clutter Suppression   SCNR Improvement
Targets of part one      8.23                                27.82                          19.59
Targets of part two      6.87                                25.04                          18.17
Targets of part three    6.53                                26.27                          19.74
Targets of part four     5.48                                24.76                          19.28
Table 6. Quantitative analysis of the proposed algorithm compared to the traditional algorithm (SCNR in dB).

Different Regions        Traditional Algorithm   The Proposed Algorithm   SCNR Improvement
Targets of part one      19.68                   27.82                    8.14
Targets of part two      15.23                   25.04                    9.81
Targets of part three    12.38                   26.27                    13.89
Targets of part four     17.66                   24.76                    7.10
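The SCNR improvement columns in Tables 5 and 6 are simply the difference of the two preceding columns, since the values are already in dB. A quick consistency check on the Table 6 entries:

```python
# SCNR values (dB) from Table 6: (traditional, proposed, reported improvement)
table6 = {
    "part one":   (19.68, 27.82, 8.14),
    "part two":   (15.23, 25.04, 9.81),
    "part three": (12.38, 26.27, 13.89),
    "part four":  (17.66, 24.76, 7.10),
}

for region, (before, after, reported) in table6.items():
    # Improvement is the dB difference between the two processing results
    assert round(after - before, 2) == reported
```

The same relation holds for Table 5, e.g., 27.82 − 8.23 = 19.59 dB for the part one targets.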
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Zhang, Y.; Kang, N.; Huang, Z.; Hua, Q.; Ren, H. Knowledge-Aided Multichannel SAR Clutter Suppression Algorithm in Complex Scenes. Remote Sens. 2026, 18, 879. https://doi.org/10.3390/rs18060879

