Article

Airborne Radar STAP Method Based on Deep Unfolding and Convolutional Neural Networks

Bo Zou, Weike Feng and Hangui Zhu
1 Early Warning and Detection Department, Air Force Engineering University, Xi’an 710051, China
2 Electronic Information School, Wuhan University, Wuhan 430072, China
* Author to whom correspondence should be addressed.
Electronics 2023, 12(14), 3140; https://doi.org/10.3390/electronics12143140
Submission received: 12 June 2023 / Revised: 15 July 2023 / Accepted: 18 July 2023 / Published: 19 July 2023

Abstract: The lack of independent and identically distributed (IID) training range cells is one of the key factors limiting the performance of conventional space-time adaptive processing (STAP) methods for airborne radar. Sparse recovery (SR)-based and convolutional neural network (CNN)-based STAP methods can obtain high-resolution estimates of the clutter space-time spectrum from few IID training range cells, enabling effective clutter suppression. However, the performance of SR-STAP methods depends on the underlying SR algorithms, which suffer from difficult parameter setting, high computational complexity and low accuracy, while CNN-STAP methods place high demands on the nonlinear mapping capability of the CNN. To solve these problems, this paper combines the two approaches: a CNN is used to relax the parameter setting and iteration requirements of the SR algorithm and to increase its accuracy, and the clutter space-time spectrum obtained by SR is used to reduce the network scale of the CNN. Based on the idea of deep unfolding (DU), the SR algorithm is unfolded into a deep neural network whose optimal parameters are obtained by training, improving its convergence performance. On this basis, the SR network and the CNN are trained end-to-end to estimate the clutter space-time spectrum efficiently and accurately. Simulation and experimental results show that, compared to the SR-STAP and CNN-STAP methods, the proposed method improves clutter suppression performance at a lower computational complexity.

1. Introduction

Airborne radar usually faces complicated ground and/or sea clutter when detecting low-altitude moving targets. The clutter presents space-time coupling characteristics, and its power spectrum broadens severely. In general, one-dimensional methods based on Doppler filtering or spatial beamforming cannot achieve effective target detection. The space-time adaptive processing (STAP) method combines spatial and temporal two-dimensional information to suppress clutter in the space-time domain adaptively, improving the detection performance of moving targets for airborne radar [1,2]. However, conventional STAP methods need to use a certain number of independent and identically distributed (IID) training range cells to estimate the clutter plus noise covariance matrix (CNCM) of the range cell under test (RUT). It has been shown that the number of IID training range cells required by conventional STAP methods is at least twice the system’s degree of freedom to ensure that the loss of the output signal-to-clutter-plus-noise ratio (SCNR) is less than 3 dB compared to that of the optimal STAP [3]. In practice, non-ideal factors, such as non-stationary environment, non-homogeneous clutter characteristics and complicated platform motion, make this condition difficult to meet [4,5,6].
To reduce the requirement for IID training range cells, reduced-dimension STAP methods, reduced-rank STAP methods, direct data domain STAP methods and SR-based STAP methods have been proposed [7,8,9,10]. Among them, SR-STAP methods based on the focal under-determined system solver (FOCUSS), the alternating direction method of multipliers (ADMM) and sparse Bayesian learning (SBL) [11,12,13,14,15] can achieve a high-resolution estimation of the clutter space-time amplitude spectrum by using a small number of IID training range cells, after which the CNCM can be calculated to suppress clutter and detect targets. However, the SR algorithms used in SR-STAP methods usually suffer from difficult parameter setting, high computational complexity and low accuracy. In addition, under non-side-looking conditions, the clutter sparsity deteriorates and the performance of SR-STAP methods degrades severely [16].
Recently, based on the idea of image super-resolution reconstruction via deep learning [17,18], a new STAP method based on convolutional neural networks (CNNs) was proposed [19,20]. The CNN-STAP method trains a CNN offline on a constructed dataset that simulates the real clutter environment so that the network can reconstruct a high-resolution image from its low-resolution version. The trained CNN is then used online to process the low-resolution clutter space-time power spectrum obtained from a small number of IID training range cells, yielding a high-resolution estimate of the clutter space-time power spectrum from which the space-time filter for clutter suppression and target detection is constructed. CNN-STAP can obtain a high clutter suppression performance, and its online computational complexity is greatly reduced compared to the SR-STAP method. However, due to the small network scale and the relatively poor reconstruction capacity of the CNN constructed in [19], the clutter suppression performance of the CNN-STAP method needs to be further improved. Increasing the network scale can improve the reconstruction capacity of the CNN and the performance of CNN-STAP, but it inevitably increases the computational complexity.
To reduce the requirement for the network reconstruction capacity of the CNN-STAP method, the clutter space-time spectrum obtained by the SR algorithm can be used as the input of the CNN, i.e., the SR-STAP method and the CNN-STAP method can be cascaded. In such a case, due to the high quality of the input clutter space-time spectrum, a high-resolution clutter spectrum can be reconstructed using a small-scale CNN. Moreover, this cascade relaxes the parameter setting and iteration requirements of the SR-STAP method and improves its estimation accuracy: with reasonable iteration parameters and a small number of iterations, the SR algorithm can process a small amount of IID training range cell data to obtain a high-resolution clutter space-time spectrum that still contains some errors, and the CNN can then be used to improve the estimation accuracy. Furthermore, based on the idea of deep unfolding (DU) [21,22,23,24,25], the SR algorithm can be unfolded into a deep neural network whose optimal parameters under a given number of iterations are obtained via training, further improving the accuracy of the clutter space-time spectrum estimation before the CNN produces the final result.
Based on the ideas mentioned above, a DU-CNN-STAP method is proposed in this paper. First, the airborne radar signal model is established, and the SR-STAP and CNN-STAP methods are briefly introduced. Then, the processing framework, network structure, dataset construction and training methods of the DU-CNN-STAP method are presented in detail. Finally, the performance and advantages of the DU-CNN-STAP method are verified via simulation and experimental data. The results show that, compared to the SR-STAP and CNN-STAP methods, the proposed method improves the clutter suppression performance at a lower computational complexity.

2. Signal Model and STAP Methods

2.1. Signal Model

As shown in Figure 1, an airborne radar with a uniform linear array (ULA) moves along the y-axis at an altitude of $H$ with a constant speed of $v$. The number of elements in the ULA is $M$, and the spacing of adjacent elements is $d$. The angle between the ULA and the radar moving direction is $\theta_e$. The radar transmits and receives $N$ pulses in a coherent processing interval (CPI) with a pulse repetition interval of $T_r$.
Without considering range ambiguity, the clutter range cell on the ground/sea surface is assumed to consist of $N_c$ clutter blocks with mutually independent scattering coefficients. Thus, the clutter-plus-noise component of the radar space-time echoed signal $\mathbf{x}$ can be given as
$$\mathbf{x}_c + \mathbf{x}_n = \sum_{n=1}^{N_c}\sigma_{c;n}\,\mathbf{v}(f_{c;d,n}, f_{c;s,n}) + \mathbf{x}_n = \sum_{n=1}^{N_c}\sigma_{c;n}\left(\mathbf{v}_d(f_{c;d,n}) \otimes \mathbf{v}_s(f_{c;s,n})\right) + \mathbf{x}_n \quad (1)$$
where $\mathbf{x}_n$ is the complex Gaussian white noise signal with a mean of 0 and a variance of $\sigma_n^2$, $\otimes$ denotes the Kronecker product, $\sigma_{c;n}$ denotes the scattering coefficient of the $n$-th ($n = 1, 2, \ldots, N_c$) clutter block, and $\mathbf{v}_d(f_{c;d,n})$ and $\mathbf{v}_s(f_{c;s,n})$ denote the temporal and spatial steering vectors of the $n$-th clutter block, expressed as
$$\mathbf{v}_d(f_{c;d,n}) = \left[1, \exp(j2\pi f_{c;d,n}), \ldots, \exp(j2\pi(N-1)f_{c;d,n})\right]^T \in \mathbb{C}^{N\times 1}, \quad \mathbf{v}_s(f_{c;s,n}) = \left[1, \exp(j2\pi f_{c;s,n}), \ldots, \exp(j2\pi(M-1)f_{c;s,n})\right]^T \in \mathbb{C}^{M\times 1} \quad (2)$$
where $(\cdot)^T$ denotes the transpose, and $f_{c;d,n}$ and $f_{c;s,n}$ are the normalized Doppler frequency and spatial frequency of the $n$-th clutter block, given by
$$f_{c;d,n} = \frac{2vT_r}{\lambda}\cos\theta_n\cos\varphi_n, \qquad f_{c;s,n} = \frac{d}{\lambda}\cos(\theta_n + \theta_e)\cos\varphi_n \quad (3)$$
where $\varphi_n$ and $\theta_n$ denote the elevation angle and azimuth angle of the $n$-th clutter block, respectively, and $\lambda$ denotes the signal wavelength.
According to (1), assuming that clutter and noise are mutually independent, the CNCM can be expressed as
$$\mathbf{R}_I = \mathbf{R}_c + \mathbf{R}_n = E(\mathbf{x}_c\mathbf{x}_c^H) + E(\mathbf{x}_n\mathbf{x}_n^H) = \sum_{n=1}^{N_c}E\left(|\sigma_{c;n}|^2\right)\mathbf{v}(f_{c;d,n}, f_{c;s,n})\mathbf{v}^H(f_{c;d,n}, f_{c;s,n}) + \sigma_n^2\mathbf{I}_{NM} \quad (4)$$
where $E(\cdot)$ denotes the expectation, $(\cdot)^H$ denotes the conjugate transpose, and $\mathbf{I}_{NM}$ denotes the identity matrix with a dimension of $NM \times NM$.
The output of the STAP filter is the inner product of the space-time weighting vector $\mathbf{w}$ and the radar space-time echoed signal $\mathbf{x}$, expressed as
$$y = \mathbf{w}^H\mathbf{x} \quad (5)$$
To maintain the target power while minimizing the power of clutter and noise after filtering, the optimal weighting vector of the STAP filter can be calculated as follows:
$$\mathbf{w}_{\text{opt}} = \mathbf{R}_I^{-1}\mathbf{v}_t / \left(\mathbf{v}_t^H\mathbf{R}_I^{-1}\mathbf{v}_t\right) \in \mathbb{C}^{NM\times 1} \quad (6)$$
where $(\cdot)^{-1}$ denotes matrix inversion, and $\mathbf{v}_t$ denotes the space-time steering vector of the target.
In general, a certain number of training range cells that do not contain the target are needed to estimate the CNCM of the RUT. Under the condition that the training range cells and the RUT are IID, the CNCM of the RUT can be estimated via the sample matrix inversion (SMI) method [2] as
$$\hat{\mathbf{R}}_I = \frac{1}{L}\sum_{l=1}^{L}\mathbf{x}_l\mathbf{x}_l^H \quad (7)$$
where $L$ is the number of IID training range cells, and $\mathbf{x}_l$ denotes the space-time echoed signal of the $l$-th training range cell.
According to the RMB criterion [3], the output SCNR loss of the SMI method with respect to the optimal STAP method can be expressed as
$$\text{SCNR}_{\text{loss}} = \frac{L - O + 2}{L + 1} \quad (8)$$
where $O = MN$ denotes the system's degree of freedom (DOF).
Equation (8) demonstrates that, to ensure that the output SCNR loss is less than 3 dB, the number of IID training range cells required by the SMI method is at least about twice the system's DOF, i.e., $L \geq 2O - 3$, which is difficult to meet in a practical non-homogeneous and non-stationary clutter environment.
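As a quick numeric check of (8), the following snippet (illustrative, using the $M = N = 10$ setting of the later simulations) confirms that $L = 2O - 3$ training cells give exactly a 3 dB average loss:

```python
# Check of the RMB rule in (8): the average SCNR loss (L - O + 2)/(L + 1)
# reaches 0.5 (i.e., -3 dB) when L = 2*O - 3.
M, N = 10, 10          # values from the simulation setup in Section 4
O = M * N              # system degree of freedom (DOF)
L = 2 * O - 3          # minimum number of IID training range cells
loss = (L - O + 2) / (L + 1)
print(L, loss)         # 197 0.5
```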

2.2. SR-STAP Method

It can be seen from (1) that the clutter signal is a superposition of the space-time signals of the individual clutter blocks. Thus, the entire spatial frequency–Doppler frequency domain can be discretized into $N_s \times N_d$ grid points, where $N_s = \rho_s M$, $N_d = \rho_d N$ and $N_sN_d \gg NM$, to approximate the clutter component as
$$\mathbf{x}_c = \sum_{i=1}^{N_d}\sum_{j=1}^{N_s}\gamma_{i,j}\,\mathbf{v}(f_{d,i}, f_{s,j}) = \mathbf{\Phi}\boldsymbol{\gamma} \quad (9)$$
where $f_{d,i}$ is the $i$-th Doppler frequency ($i = 1, 2, \ldots, N_d$), $f_{s,j}$ is the $j$-th spatial frequency ($j = 1, 2, \ldots, N_s$), $\mathbf{v}(f_{d,i}, f_{s,j})$ is the space-time steering vector of the $(i,j)$-th grid point, $\gamma_{i,j}$ denotes the complex amplitude of the $(i,j)$-th grid point, $\boldsymbol{\gamma} = [\gamma_{1,1}, \gamma_{2,1}, \ldots, \gamma_{N_d,N_s}]^T \in \mathbb{C}^{N_sN_d\times 1}$ denotes the complex amplitude vector corresponding to all grid points, i.e., the clutter space-time amplitude spectrum, and $\mathbf{\Phi}$ denotes the dictionary of space-time steering vectors, given by
$$\mathbf{\Phi} = \left[\mathbf{v}(f_{d,1}, f_{s,1}), \mathbf{v}(f_{d,2}, f_{s,1}), \ldots, \mathbf{v}(f_{d,N_d}, f_{s,N_s})\right] \in \mathbb{C}^{NM\times N_sN_d} \quad (10)$$
Thus, the space-time echoed signal of the $l$-th training range cell without the target can be expressed as
$$\mathbf{x}_l = \mathbf{x}_{c,l} + \mathbf{x}_{n,l} = \mathbf{\Phi}\boldsymbol{\gamma}_l + \mathbf{x}_{n,l} \quad (11)$$
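For concreteness, a minimal NumPy sketch of the steering vectors in (1)-(2) and the dictionary construction in (10) is given below; the grid ranges follow Table 1, and the function names are illustrative:

```python
import numpy as np

def st_steering_vector(fd, fs, N=10, M=10):
    """Space-time steering vector v(fd, fs) = vd(fd) ⊗ vs(fs), per (1)-(2)."""
    vd = np.exp(1j * 2 * np.pi * fd * np.arange(N))   # temporal steering vector
    vs = np.exp(1j * 2 * np.pi * fs * np.arange(M))   # spatial steering vector
    return np.kron(vd, vs)                            # NM x 1

def build_dictionary(N=10, M=10, Nd=50, Ns=50):
    """Dictionary Φ of (10): Nd x Ns grid over [-0.5, 0.5]^2, Doppler index fastest."""
    fd_grid = np.linspace(-0.5, 0.5, Nd)
    fs_grid = np.linspace(-0.5, 0.5, Ns)
    cols = [st_steering_vector(fd, fs, N, M) for fs in fs_grid for fd in fd_grid]
    return np.stack(cols, axis=1)                     # NM x NdNs

Phi = build_dictionary()                              # shape (100, 2500) for Table 1
```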
Due to the space-time coupling property of clutter, its space-time amplitude spectrum is usually sparse. Based on this property, the SR-STAP method can estimate the clutter space-time amplitude spectrum by solving a constrained optimization problem, expressed as
$$\hat{\boldsymbol{\gamma}}_l = \arg\min_{\boldsymbol{\gamma}_l}\|\boldsymbol{\gamma}_l\|_0, \quad \text{s.t.}\ \|\mathbf{x}_l - \mathbf{\Phi}\boldsymbol{\gamma}_l\|_2 \leq \varepsilon \quad (12)$$
where $\|\cdot\|_0$ denotes the $L_0$ norm, $\|\cdot\|_2$ denotes the $L_2$ norm, and $\varepsilon$ denotes the noise level.
With $L$ training range cells, (12) can be extended to the multiple measurement vectors (MMV) model [14], expressed as
$$\hat{\mathbf{\Gamma}} = \arg\min_{\mathbf{\Gamma}}\|\mathbf{\Gamma}\|_{2,0}, \quad \text{s.t.}\ \|\mathbf{X} - \mathbf{\Phi}\mathbf{\Gamma}\|_F \leq \varepsilon \quad (13)$$
where $\mathbf{X} = [\mathbf{x}_1, \mathbf{x}_2, \ldots, \mathbf{x}_L] \in \mathbb{C}^{NM\times L}$, $\mathbf{\Gamma} = [\boldsymbol{\gamma}_1, \boldsymbol{\gamma}_2, \ldots, \boldsymbol{\gamma}_L] \in \mathbb{C}^{N_sN_d\times L}$, $\|\cdot\|_{2,0}$ denotes the $L_0$ norm of the column vector formed by the $L_2$ norms of the rows of a matrix, and $\|\cdot\|_F$ denotes the Frobenius norm.
By solving (12) or (13) with an SR algorithm, the CNCM can be estimated as
$$\hat{\mathbf{R}}_I = \frac{1}{L}\sum_{l=1}^{L}\sum_{i=1}^{N_d}\sum_{j=1}^{N_s}\left|\gamma_{l,i,j}\right|^2\mathbf{v}(f_{d,i}, f_{s,j})\mathbf{v}^H(f_{d,i}, f_{s,j}) + \sigma_n^2\mathbf{I}_{NM} \quad (14)$$
where $\gamma_{l,i,j}$ denotes the complex amplitude of the $(i,j)$-th grid point of the $l$-th training range cell.
Based on the estimated CNCM, according to (6), the weighting vector can be calculated as
$$\hat{\mathbf{w}}_{\text{opt}} = \hat{\mathbf{R}}_I^{-1}\mathbf{v}_t / \left(\mathbf{v}_t^H\hat{\mathbf{R}}_I^{-1}\mathbf{v}_t\right) \in \mathbb{C}^{NM\times 1} \quad (15)$$
SR-STAP methods can estimate the CNCM accurately by using far fewer IID training range cells than the system's DOF, i.e., $L \ll O$. However, given the clutter space-time amplitude spectrum estimation models shown in (12) and (13), the performance of SR-STAP methods usually depends on the employed SR algorithm. Although a series of SR algorithms have been proposed, problems such as difficult parameter setting, high computational complexity and low accuracy remain. In addition, under non-side-looking conditions, the clutter sparsity deteriorates, and thus the performance of SR-STAP methods degrades severely.
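The CNCM reconstruction in (14) and the weight computation in (15) can be sketched as follows; this is a single-training-cell NumPy illustration, not the authors' exact code:

```python
import numpy as np

def cncm_from_spectrum(gamma_hat, Phi, sigma_n2=1.0):
    """CNCM estimate (14) for L = 1, computed compactly as
    Φ diag(|γ|^2) Φ^H + σ_n^2 I instead of the explicit double sum."""
    NM = Phi.shape[0]
    return (Phi * np.abs(gamma_hat) ** 2) @ Phi.conj().T + sigma_n2 * np.eye(NM)

def stap_weights(R_hat, v_t):
    """Adaptive weighting vector (15); a linear solve replaces the explicit inverse."""
    Rinv_vt = np.linalg.solve(R_hat, v_t)
    return Rinv_vt / (v_t.conj() @ Rinv_vt)
```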

2.3. CNN-STAP Method

Based on the idea of deep-learning-based image super-resolution reconstruction, the CNN-STAP method [19,20] estimates a low-resolution clutter space-time power spectrum first based on a small number of IID training range cells. Then, the CNN is used to reconstruct a high-resolution clutter space-time power spectrum. Finally, the CNCM and the space-time adaptive weighting vector are calculated for clutter suppression and target detection. The specific steps can be summarized as follows.
Assuming there are $L$ training range cells ($L \ll O$) with the corresponding space-time echoed signal matrix $\mathbf{X}$, the low-resolution clutter space-time power spectrum $\mathbf{Y} \in \mathbb{R}^{N_d\times N_s}$ can be obtained via the Fourier-transform-based digital beamforming (DBF) algorithm, acting as the input data of the CNN, expressed as
$$\mathbf{Y}(i,j) = \left\|\mathbf{v}^H(f_{d,i}, f_{s,j})\,\mathbf{X}\right\|_2^2 / L \quad (16)$$
Then, based on the theoretical clutter covariance matrix $\mathbf{R}_c$, the minimum variance distortionless response (MVDR) spectrum estimation method [26] is used to obtain a high-resolution clutter space-time power spectrum $\mathbf{Z}_T \in \mathbb{R}^{N_d\times N_s}$, acting as the output label data of the CNN, expressed as
$$\mathbf{Z}_T(i,j) = \frac{1}{\mathbf{v}^H(f_{d,i}, f_{s,j})\,\mathbf{R}_c^{-1}\,\mathbf{v}(f_{d,i}, f_{s,j})} \quad (17)$$
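A possible NumPy realization of the input spectrum (16) and label spectrum (17) is sketched below; the reshape assumes the column ordering of $\mathbf{\Phi}$ in (10), and $\mathbf{R}_c$ is assumed invertible (a small diagonal loading term can be added otherwise):

```python
import numpy as np

def dbf_spectrum(X, Phi, Nd=50, Ns=50, L=1):
    """Low-resolution Fourier (DBF) power spectrum of (16); X is NM x L."""
    Y = np.sum(np.abs(Phi.conj().T @ X) ** 2, axis=1) / L
    return Y.reshape(Ns, Nd).T          # Nd x Ns, matching the column order of Φ

def mvdr_spectrum(R_c, Phi, Nd=50, Ns=50):
    """High-resolution MVDR label spectrum of (17) from the theoretical R_c."""
    Rinv_Phi = np.linalg.solve(R_c, Phi)                    # R_c^{-1} v per grid point
    denom = np.real(np.sum(Phi.conj() * Rinv_Phi, axis=0))  # v^H R_c^{-1} v
    return (1.0 / denom).reshape(Ns, Nd).T
```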
The process of reconstructing a high-resolution clutter space-time power spectrum from the low-resolution spectrum Y via a CNN can be expressed as [19]
$$\hat{\mathbf{Z}}_C = F_{\text{CNN}}(\mathbf{Y}, \Theta_C) \quad (18)$$
where $F_{\text{CNN}}(\cdot)$ denotes a nonlinear transform conducted on the clutter space-time power spectrum, acquiring a high-resolution spectrum from its low-resolution version, $\Theta_C = \{\mathbf{W}_1, \mathbf{W}_2, \ldots, \mathbf{W}_E, \mathbf{b}_1, \mathbf{b}_2, \ldots, \mathbf{b}_E\}$ denotes the network parameters of the CNN, $\mathbf{W}_e$ ($e = 1, 2, \ldots, E$) denotes the convolutional kernel with a dimension of $c_e\times f_e\times f_e\times n_e$, and $\mathbf{b}_e$ denotes the bias vector with a dimension of $n_e$.
By constructing a dataset to train the CNN, the optimal CNN parameters can be obtained as
$$\Theta_C^* = \arg\min_{\Theta_C}\frac{1}{P}\sum_{p=1}^{P}\left\|F_{\text{CNN}}(\mathbf{Y}_p, \Theta_C) - \mathbf{Z}_{T,p}\right\|_F^2 \quad (19)$$
where $\mathbf{Y}_p$ denotes the $p$-th low-resolution clutter space-time power spectrum input, $\mathbf{Z}_{T,p}$ denotes the $p$-th high-resolution clutter space-time power spectrum label, $\{\mathbf{Y}_p\}_{p=1}^P$ and $\{\mathbf{Z}_{T,p}\}_{p=1}^P$ form the training dataset for the CNN, and $P$ is the dataset size.
Finally, the actual space-time echoed signal matrix can be processed online by the trained CNN. The low-resolution clutter space-time power spectrum $\mathbf{Y}$ is transformed to obtain the high-resolution clutter space-time power spectrum $\hat{\mathbf{Z}}_C = F_{\text{CNN}}(\mathbf{Y}, \Theta_C^*)$, and thus the CNCM can be estimated as [19]
$$\hat{\mathbf{R}}_I = \sum_{i=1}^{N_d}\sum_{j=1}^{N_s}\hat{\mathbf{Z}}_C(i,j)\,\mathbf{v}(f_{d,i}, f_{s,j})\mathbf{v}^H(f_{d,i}, f_{s,j}) + \sigma_n^2\mathbf{I}_{NM} \quad (20)$$
Similar to the SR-STAP method, the CNN-STAP method can estimate $\hat{\mathbf{R}}_I$ accurately with a small number of IID training range cells. However, due to the poor quality (e.g., low resolution and high sidelobe level) of the clutter space-time power spectrum generated by the Fourier transform, the CNN-STAP method places high demands on the reconstruction capability of the CNN. Increasing the network scale can improve the reconstruction capability of the CNN and the performance of CNN-STAP, but the increase in computational complexity is inevitable.

3. DU-CNN-STAP

To solve the problems of SR-STAP and CNN-STAP simultaneously, the DU-CNN-STAP (deep unfolding and CNN-based STAP) method is proposed in this paper, as shown in Figure 2. The specific operations are summarized as follows.
Step 1. The space-time echoed signal $\mathbf{x}$ is input into the DU-CNN network (indicated by the dashed box in Figure 2), and the high-resolution clutter space-time power spectrum estimate $\hat{\mathbf{Z}}_{\text{DC}} \in \mathbb{R}^{N_d\times N_s}$ is obtained via the nonlinear transform of DU-CNN.
Step 2. By replacing $\hat{\mathbf{Z}}_C$ in (20) with $\hat{\mathbf{Z}}_{\text{DC}}$, the CNCM $\hat{\mathbf{R}}_I$ is estimated.
Step 3. The space-time adaptive weighting vector $\hat{\mathbf{w}}_{\text{opt}}$ is computed according to (15), based on which clutter suppression and target detection are conducted.
DU-CNN-STAP implements a nonlinear transform from the space-time echoed signal $\mathbf{x}$ directly to the high-resolution clutter power spectrum, i.e., $\hat{\mathbf{Z}}_{\text{DC}} = F_{\text{DU-CNN}}(\mathbf{x})$. The key of the method is the DU-CNN network, for which the following points should be noted. (1) ADMM-Net is a solving network for the SR problem in (12) with network parameters $\Theta_A$, which achieves fast acquisition of the clutter space-time amplitude spectrum $\hat{\boldsymbol{\gamma}} \in \mathbb{C}^{N_dN_s\times 1}$. (2) The squarer module $|\cdot|^2$ converts the space-time amplitude spectrum into the space-time power spectrum, and the transform $T(\cdot)$ converts the data dimension from $N_dN_s\times 1$ to $1\times N_d\times N_s$; the output can be expressed as $\hat{\mathbf{Y}}_S = T(|\hat{\boldsymbol{\gamma}}|^2) \in \mathbb{R}^{1\times N_d\times N_s}$. (3) The power normalization module normalizes the space-time spectrum data to improve the network convergence and produces the input data of the CNN, expressed as $\hat{\mathbf{Y}}_{S,N} = N(\hat{\mathbf{Y}}_S) = \hat{\mathbf{Y}}_S / P_S \in \mathbb{R}^{1\times N_d\times N_s}$, where $P_S = \|\hat{\mathbf{Y}}_S\|_F$. (4) The CNN module is the space-time power spectrum reconstruction network with parameters $\Theta_C$, which estimates the normalized high-resolution clutter space-time power spectrum $\hat{\mathbf{Z}}_{C,N} \in \mathbb{R}^{1\times N_d\times N_s}$. (5) The power restoring module restores the clutter power of $\hat{\mathbf{Z}}_{C,N}$; its output is $\hat{\mathbf{Z}}_C = R(\hat{\mathbf{Z}}_{C,N}) = P_S\hat{\mathbf{Z}}_{C,N} \in \mathbb{R}^{1\times N_d\times N_s}$. Finally, the network output $\hat{\mathbf{Z}}_{\text{DC}} \in \mathbb{R}^{N_d\times N_s}$ of DU-CNN is obtained.
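The following PyTorch-style sketch strings the five modules together; admm_net and cnn stand for the ADMM-Net and CNN defined in Section 3.1, and all names are illustrative:

```python
import torch

def du_cnn_forward(x, admm_net, cnn, Nd=50, Ns=50):
    """Sketch of the DU-CNN chain in Figure 2: ADMM-Net -> squarer |.|^2
    -> reshape T(.) -> normalization N(.) -> CNN -> power restoring R(.)."""
    gamma = admm_net(x)                          # amplitude spectrum, NdNs x 1
    Y_S = (gamma.abs() ** 2).reshape(Ns, Nd).T   # T(.): grid ordering of (10)
    Y_S = Y_S.unsqueeze(0)                       # 1 x Nd x Ns power spectrum
    P_S = torch.linalg.norm(Y_S)                 # Frobenius norm, the power P_S
    Y_SN = Y_S / P_S                             # N(.): power normalization
    Z_CN = cnn(Y_SN.unsqueeze(0)).squeeze(0)     # CNN, batch of one sample
    return (P_S * Z_CN).squeeze(0)               # R(.): restore power -> Nd x Ns
```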
With a small number of layers (i.e., iterations), ADMM-Net can obtain a high-resolution estimation of the clutter space-time spectrum, and the CNN can further improve the estimation accuracy. Thus, the DU-CNN-STAP method can effectively solve the problems of parameter setting difficulty, high computational complexity and low accuracy of the SR-STAP method. In addition, unlike the CNN-STAP method, which uses the low-resolution clutter space-time spectrum as the input of the reconstruction network, the input of the reconstruction network in the proposed DU-CNN-STAP method is the high-resolution clutter space-time spectrum. Thus, the DU-CNN-STAP method can reduce the requirements for the nonlinear transform capability, the network scale and the computational complexity of the reconstruction network.
In the presence of range ambiguity, the SR problems shown in (12) and (13) can still be established, and ADMM-Net can still be used [16]; thus, the proposed DU-CNN-STAP method remains applicable. In the following, the DU-CNN network in the DU-CNN-STAP method is introduced in detail, including its network structure, dataset construction and training method. It should be noted that this paper only considers the case of a single training range cell, i.e., $L = 1$; the processing of multiple training range cells can be implemented via a simple extension of the proposed method. Thus, the subscript $l$ is omitted in the following.

3.1. Network Structure

3.1.1. ADMM-Net

Because the $L_0$ norm is a discontinuous function, solving (12) directly has a high complexity. Thus, (12) is often solved by relaxing it into an $L_1$ convex optimization problem, expressed as
$$\hat{\boldsymbol{\gamma}} = \arg\min_{\boldsymbol{\gamma}}\|\boldsymbol{\gamma}\|_1, \quad \text{s.t.}\ \|\mathbf{x} - \mathbf{\Phi}\boldsymbol{\gamma}\|_2 \leq \varepsilon \quad (21)$$
Introducing an auxiliary variable $\mathbf{r}$, (21) can be transformed into
$$\{\hat{\boldsymbol{\gamma}}, \hat{\mathbf{r}}\} = \arg\min_{\boldsymbol{\gamma},\mathbf{r}}\|\boldsymbol{\gamma}\|_1 + \frac{1}{2\rho}\|\mathbf{r}\|_2^2, \quad \text{s.t.}\ \mathbf{\Phi}\boldsymbol{\gamma} + \mathbf{r} = \mathbf{x} \quad (22)$$
where $\rho > 0$ denotes the regularization factor.
The augmented Lagrange function of (22) is given by
$$\{\hat{\boldsymbol{\gamma}}, \hat{\mathbf{r}}, \hat{\boldsymbol{\lambda}}\} = \arg\min_{\boldsymbol{\gamma},\mathbf{r},\boldsymbol{\lambda}}\|\boldsymbol{\gamma}\|_1 + \frac{1}{2\rho}\|\mathbf{r}\|_2^2 - \left\langle\boldsymbol{\lambda}, \mathbf{\Phi}\boldsymbol{\gamma} + \mathbf{r} - \mathbf{x}\right\rangle + \frac{\beta}{2}\left\|\mathbf{\Phi}\boldsymbol{\gamma} + \mathbf{r} - \mathbf{x}\right\|_2^2 \quad (23)$$
which can be rewritten as $\{\hat{\boldsymbol{\gamma}}, \hat{\mathbf{r}}, \hat{\boldsymbol{\lambda}}\} = \arg\min_{\boldsymbol{\gamma},\mathbf{r},\boldsymbol{\lambda}}\|\boldsymbol{\gamma}\|_1 + \frac{1}{2\rho}\|\mathbf{r}\|_2^2 + \frac{\beta}{2}\left\|\mathbf{\Phi}\boldsymbol{\gamma} + \mathbf{r} - \mathbf{x} - \frac{\boldsymbol{\lambda}}{\beta}\right\|_2^2 - \frac{\|\boldsymbol{\lambda}\|_2^2}{2\beta}$, with $\boldsymbol{\lambda} \in \mathbb{C}^{NM\times 1}$ as the Lagrange multiplier and $\beta > 0$ as the quadratic penalty factor.
Given the initial values $\{\boldsymbol{\gamma}^{(0)}, \mathbf{r}^{(0)}, \boldsymbol{\lambda}^{(0)}\}$, the ADMM algorithm solves (23) by alternating the following three steps over $K$ iterations [27]:
$$\mathbf{r}^{(k)} = \arg\min_{\mathbf{r}}\frac{1}{2\rho}\|\mathbf{r}\|_2^2 + \frac{\beta}{2}\left\|\mathbf{\Phi}\boldsymbol{\gamma}^{(k-1)} + \mathbf{r} - \mathbf{x} - \frac{\boldsymbol{\lambda}^{(k-1)}}{\beta}\right\|_2^2$$
$$\boldsymbol{\gamma}^{(k)} = \arg\min_{\boldsymbol{\gamma}}\|\boldsymbol{\gamma}\|_1 + \frac{\beta}{2}\left\|\mathbf{\Phi}\boldsymbol{\gamma} + \mathbf{r}^{(k)} - \mathbf{x} - \frac{\boldsymbol{\lambda}^{(k-1)}}{\beta}\right\|_2^2$$
$$\boldsymbol{\lambda}^{(k)} = \boldsymbol{\lambda}^{(k-1)} - \beta\left(\mathbf{\Phi}\boldsymbol{\gamma}^{(k)} + \mathbf{r}^{(k)} - \mathbf{x}\right) \quad (24)$$
where $\mathbf{r}^{(k)}$, $\boldsymbol{\gamma}^{(k)}$ and $\boldsymbol{\lambda}^{(k)}$ denote the estimates of $\mathbf{r}$, $\boldsymbol{\gamma}$ and $\boldsymbol{\lambda}$ in the $k$-th ($k = 1, 2, \ldots, K$) iteration, respectively.
The solutions of the sub-problems in (24) are given by [27]
$$\mathbf{r}^{(k)} = \frac{\rho}{1 + \rho\beta}\left(\boldsymbol{\lambda}^{(k-1)} - \beta\mathbf{\Phi}\boldsymbol{\gamma}^{(k-1)} + \beta\mathbf{x}\right)$$
$$\boldsymbol{\gamma}^{(k)} = S\left(\boldsymbol{\gamma}^{(k-1)} + \frac{\tau}{\rho\beta}\mathbf{\Phi}^H\mathbf{r}^{(k)}, \frac{\tau}{\beta}\right)$$
$$\boldsymbol{\lambda}^{(k)} = \boldsymbol{\lambda}^{(k-1)} - \beta\left(\mathbf{\Phi}\boldsymbol{\gamma}^{(k)} + \mathbf{r}^{(k)} - \mathbf{x}\right) \quad (25)$$
where $S(\cdot, \cdot)$ denotes the soft threshold operator [28] and $\tau$ denotes the iteration step size.
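A direct NumPy implementation of (25) might look as follows (a sketch for reference, with the complex soft threshold written explicitly; the default parameters are those used in Section 4):

```python
import numpy as np

def soft_threshold(z, t):
    """Complex soft threshold S(z, t) [28]: shrink magnitudes by t."""
    mag = np.maximum(np.abs(z), 1e-12)
    return np.maximum(1.0 - t / mag, 0.0) * z

def admm_sr(x, Phi, rho=1.0, beta=0.01, tau=0.04, K=2000):
    """Iterations of (25); returns the amplitude spectrum estimate γ^(K)."""
    NM, Q = Phi.shape
    gamma = np.zeros(Q, dtype=complex)
    lam = np.zeros(NM, dtype=complex)
    for _ in range(K):
        r = rho / (1 + rho * beta) * (lam - beta * (Phi @ gamma) + beta * x)
        gamma = soft_threshold(gamma + tau / (rho * beta) * (Phi.conj().T @ r),
                               tau / beta)
        lam = lam - beta * (Phi @ gamma + r - x)
    return gamma
```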
The ADMM algorithm obtains a high-resolution estimate $\hat{\boldsymbol{\gamma}} = \boldsymbol{\gamma}^{(K)}$ after $K$ iterations with the iteration parameters $\rho$, $\beta$ and $\tau$, after which the CNCM and the adaptive weighting vector can be calculated according to (14) and (15). However, ADMM is a model-driven algorithm whose parameters must be given in advance, and setting them properly is generally difficult in practice. Improper parameter settings degrade the convergence of the ADMM algorithm, resulting in a high computational complexity when solving (12) and a low estimation performance for the clutter space-time amplitude spectrum. Even with proper settings, using the same parameters in every iteration does not guarantee the best convergence performance. To solve this problem, the ADMM algorithm with a specific number of iterations can be unfolded into a deep neural network based on the idea of deep unfolding, and a learning approach can then be used to obtain the optimal parameters for each iteration, improving the convergence performance of ADMM.
As shown in Figure 3, the ADMM algorithm with $K$ iterations is unfolded into a $K$-layer ADMM-Net, whose inputs are the space-time echoed signal $\mathbf{x} \in \mathbb{C}^{NM\times 1}$ and the space-time steering vector dictionary $\mathbf{\Phi}$, and whose parameters are $\Theta_A = \{\Theta_A^{(k)}\}_{k=1}^K = \{\rho_k, \beta_k, \tau_k\}_{k=1}^K$. The outputs of each layer are the Lagrange multiplier $\boldsymbol{\lambda}^{(k)} \in \mathbb{C}^{NM\times 1}$, the auxiliary variable $\mathbf{r}^{(k)} \in \mathbb{C}^{NM\times 1}$ and the space-time amplitude spectrum $\boldsymbol{\gamma}^{(k)} \in \mathbb{C}^{N_dN_s\times 1}$. The final output of ADMM-Net is $\hat{\boldsymbol{\gamma}} = \boldsymbol{\gamma}^{(K)}$, and the nonlinear function $F_k(\cdot)$ in (26) performs the same operations as (25).
$$\left\{\mathbf{r}^{(k)}, \boldsymbol{\gamma}^{(k)}, \boldsymbol{\lambda}^{(k)}\right\} = F_k\left(\mathbf{x}, \mathbf{\Phi}, \boldsymbol{\lambda}^{(k-1)}, \boldsymbol{\gamma}^{(k-1)}, \mathbf{r}^{(k-1)}, \Theta_A^{(k)}\right) \quad (26)$$
During data-driven network training, the $3K$ network parameters $\{\rho_k, \beta_k, \tau_k\}_{k=1}^K$ of ADMM-Net are adaptively tuned. This allows ADMM-Net to achieve a better convergence performance than the ADMM algorithm with the same number of iterations, thus improving the estimation accuracy of the clutter space-time amplitude spectrum.
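As an illustration, a minimal PyTorch sketch of such an unfolded network is given below; it makes $\{\rho_k, \beta_k, \tau_k\}$ learnable nn.Parameter tensors and is not the authors' exact implementation:

```python
import torch
import torch.nn as nn

class ADMMNet(nn.Module):
    """K-layer unfolding of (25) with learnable {ρ_k, β_k, τ_k} per layer."""
    def __init__(self, Phi, K=20, rho0=1.0, beta0=0.01, tau0=0.04):
        super().__init__()
        self.register_buffer("Phi", Phi)                   # complex NM x NdNs tensor
        self.rho = nn.Parameter(torch.full((K,), rho0))    # ρ_k, one per layer
        self.beta = nn.Parameter(torch.full((K,), beta0))  # β_k
        self.tau = nn.Parameter(torch.full((K,), tau0))    # τ_k
        self.K = K

    def forward(self, x):                                  # x: complex NM vector
        NM, Q = self.Phi.shape
        gamma = torch.zeros(Q, dtype=self.Phi.dtype, device=x.device)
        lam = torch.zeros(NM, dtype=self.Phi.dtype, device=x.device)
        for k in range(self.K):
            rho, beta, tau = self.rho[k], self.beta[k], self.tau[k]
            r = rho / (1 + rho * beta) * (lam - beta * (self.Phi @ gamma) + beta * x)
            z = gamma + tau / (rho * beta) * (self.Phi.conj().T @ r)
            shrink = torch.clamp(1 - (tau / beta) / z.abs().clamp_min(1e-12), min=0)
            gamma = shrink * z                             # soft threshold S(z, τ/β)
            lam = lam - beta * (self.Phi @ gamma + r - x)
        return gamma                                       # γ^(K)
```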

3.1.2. CNN

The space-time power spectrum reconstruction network in DU-CNN is a CNN with $E$ two-dimensional convolutional layers, as shown in Figure 4. The input of the CNN is the normalized clutter space-time power spectrum $\hat{\mathbf{Y}}_{S,N} = T(|\hat{\boldsymbol{\gamma}}|^2)/P_S$, obtained by transforming the output $\hat{\boldsymbol{\gamma}}$ of ADMM-Net, whose dimension is $N_dN_s\times 1$, into a tensor with 1 channel and a length × width of $N_d\times N_s$. The output of the CNN is the normalized high-resolution clutter space-time power spectrum $\hat{\mathbf{Z}}_{C,N}$. Because each convolutional layer uses a zero-padding operation, only the number of channels changes in the processing procedure, and the length × width remains constant. Thus, the dimension of $\hat{\mathbf{Z}}_{C,N}$ is also $1\times N_d\times N_s$.
The network parameters of the CNN are $\Theta_C = \{\mathbf{W}_e, \mathbf{b}_e\}_{e=1}^E$, where $\mathbf{W}_e$ denotes the convolutional kernel with a dimension of $c_e\times f_e\times f_e\times n_e$, $c_e$ is the number of input channels, $f_e$ is the length and width of the convolutional kernel, $n_e$ is the number of convolutional kernels (i.e., the number of output channels), and $\mathbf{b}_e$ is the bias vector with a dimension of $n_e$. With $*$ denoting the convolution operation, the operations of each convolutional layer in Figure 4 can be summarized as follows.
(1) Conv-(1) is used to extract features from the input clutter space-time power spectrum and adopts the ReLU activation function. The specific operation can be expressed as
$$\hat{\mathbf{Z}}_{C,N}^{(1)} = \max\left(0, \mathbf{W}_1 * \hat{\mathbf{Y}}_{S,N} + \mathbf{b}_1\right) \quad (27)$$
(2) Conv-(2)~Conv-(E−1) realize the nonlinear mapping of features, also with the ReLU activation function. The specific operation can be expressed as
$$\hat{\mathbf{Z}}_{C,N}^{(e)} = \max\left(0, \mathbf{W}_e * \hat{\mathbf{Z}}_{C,N}^{(e-1)} + \mathbf{b}_e\right), \quad e = 2, 3, \ldots, E-1 \quad (28)$$
(3) Conv-(E) is the image reconstruction layer, which outputs the high-resolution clutter space-time power spectrum. The specific operation can be expressed as
$$\hat{\mathbf{Z}}_{C,N} = \mathbf{W}_E * \hat{\mathbf{Z}}_{C,N}^{(E-1)} + \mathbf{b}_E \quad (29)$$
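A minimal PyTorch sketch of this CNN, using the $E = 5$ layer dimensions later specified in Section 4, could read:

```python
import torch.nn as nn

class SpectrumCNN(nn.Module):
    """E = 5 layers per (27)-(29); zero padding keeps the Nd x Ns size,
    so only the channel count changes through the network."""
    def __init__(self):
        super().__init__()
        # (c_e, n_e, f_e) per layer: (1,16,11), (16,8,9), (8,4,7), (4,2,5), (2,1,3)
        dims = [(1, 16, 11), (16, 8, 9), (8, 4, 7), (4, 2, 5), (2, 1, 3)]
        layers = []
        for i, (c_in, c_out, f) in enumerate(dims):
            layers.append(nn.Conv2d(c_in, c_out, kernel_size=f, padding=f // 2))
            if i < len(dims) - 1:          # Conv-(E) has no activation, see (29)
                layers.append(nn.ReLU())
        self.net = nn.Sequential(*layers)

    def forward(self, y):                  # y: batch x 1 x Nd x Ns
        return self.net(y)
```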

3.2. Dataset Construction

Similar to most deep neural networks, DU-CNN adopts the supervised learning method, i.e., it is trained based on the input data and its corresponding labels. In this paper, the following steps are used to construct a sufficient and complete dataset for DU-CNN to guarantee the clutter space-time spectrum estimation performance and the generalization capability of DU-CNN.
Step 1. The parameters including the airplane height $H$, the number of ULA elements $M$, the number of CPI pulses $N$, the element spacing $d$ and the wavelength $\lambda$ are assumed to be fixed. The non-side-looking angle, the elevation angle, the clutter ridge slope and the clutter-to-noise ratio (CNR) corresponding to different range cells all obey uniform random distributions, i.e., $\theta_e \sim U(\theta_{\min}, \theta_{\max})$, $\varphi \sim U(\varphi_{\min}, \varphi_{\max})$, $\beta = 2vT_r/\lambda \sim U(\beta_{\min}, \beta_{\max})$ and $\text{CNR} \sim U(\text{CNR}_{\min}, \text{CNR}_{\max})$. According to (1), $P$ radar space-time echoed signals $\{\mathbf{x}_p\}_{p=1}^P$ are generated as the input dataset for DU-CNN, with each range cell containing a total of $N_c$ clutter blocks. The clutter blocks are distributed uniformly in the azimuth range $[0, \pi]$ and have scattering coefficients that obey a complex Gaussian distribution, whose power is determined by the CNR.
Step 2. The spatial frequency range $[f_{s,\min}, f_{s,\max}]$ and Doppler frequency range $[f_{d,\min}, f_{d,\max}]$ are discretized into $N_s = \rho_s M$ and $N_d = \rho_d N$ grid points, respectively, and the dictionary of space-time steering vectors $\mathbf{\Phi}$ is constructed according to (10). Different combinations of iteration parameters are set according to the convergence conditions of the ADMM algorithm and, based on (12), all the space-time echoed signals $\{\mathbf{x}_p\}_{p=1}^P$ are processed by ADMM. The combination $\rho = \rho_0$, $\beta = \beta_0$, $\tau = \tau_0$ and $K = K_0$ with the best estimation performance is retained as the final ADMM parameters, and the estimated clutter space-time amplitude spectra $\{\hat{\boldsymbol{\gamma}}_p\}_{p=1}^P$ are obtained as the intermediate dataset for DU-CNN. Then, according to the theoretical clutter covariance matrix, the high-resolution clutter space-time power spectra $\{\mathbf{Z}_{T,p}\}_{p=1}^P$ are obtained based on (17) and used as the output label dataset for DU-CNN.
Step 3. Via the previous two steps, the datasets $\{\mathbf{x}_p\}_{p=1}^P$, $\{\hat{\boldsymbol{\gamma}}_p\}_{p=1}^P$ and $\{\mathbf{Z}_{T,p}\}_{p=1}^P$ are obtained. The input and output label datasets of DU-CNN are set as $\{\mathbf{x}_p, \mathbf{Z}_{T,p}\}_{p=1}^P$, those of ADMM-Net as $\{\mathbf{x}_p, \hat{\boldsymbol{\gamma}}_p\}_{p=1}^P$, and those of the CNN as $\{\hat{\boldsymbol{\gamma}}_p, \mathbf{Z}_{T,p}\}_{p=1}^P$. As shown in Figure 5, each of these three datasets is divided into a training dataset of size $P_{\text{train}}$ and a testing dataset of size $P_{\text{test}}$. The training dataset is used to train the network parameters, whereas the testing dataset is not involved in the training procedure and is used only to verify the performance of the trained network.
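As an illustration of Step 1, the following NumPy sketch draws one training sample according to (1), (3) and the distributions of Table 1; the normalization choices (unit noise power, per-block clutter power) are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_echo(N=10, M=10, d=0.3, lam=0.6, Nc=181):
    """One clutter range cell per (1) with the uniform distributions of Table 1."""
    theta_e = np.deg2rad(rng.uniform(-30, 30))            # non-side-looking angle
    phi = np.deg2rad(rng.uniform(5, 45))                  # elevation angle
    beta = rng.uniform(0.2, 5)                            # clutter ridge slope 2vTr/λ
    cnr = 10 ** (rng.uniform(30, 50) / 10)                # CNR, linear scale
    theta = np.linspace(0, np.pi, Nc)                     # azimuths of clutter blocks
    fd = beta * np.cos(theta) * np.cos(phi)               # normalized Doppler, (3)
    fs = d / lam * np.cos(theta + theta_e) * np.cos(phi)  # spatial frequency, (3)
    sigma = np.sqrt(cnr / (2 * Nc)) * (rng.standard_normal(Nc)
                                       + 1j * rng.standard_normal(Nc))
    x = sum(s * np.kron(np.exp(1j * 2 * np.pi * f_d * np.arange(N)),
                        np.exp(1j * 2 * np.pi * f_s * np.arange(M)))
            for s, f_d, f_s in zip(sigma, fd, fs))
    noise = (rng.standard_normal(N * M)
             + 1j * rng.standard_normal(N * M)) / np.sqrt(2)  # σ_n^2 = 1
    return x + noise
```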

3.3. Training Method

To avoid falling into a local optimum and to improve the convergence performance of the network, this paper adopts a "pre-training + fine-tuning" strategy to train DU-CNN. First, ADMM-Net and the CNN are independently pre-trained based on $\{\mathbf{x}_p, \hat{\boldsymbol{\gamma}}_p\}_{p=1}^P$ and $\{\hat{\boldsymbol{\gamma}}_p, \mathbf{Z}_{T,p}\}_{p=1}^P$, respectively. Then, DU-CNN is trained end-to-end based on $\{\mathbf{x}_p, \mathbf{Z}_{T,p}\}_{p=1}^P$, i.e., the network parameters of ADMM-Net and the CNN are fine-tuned jointly. The specific steps can be summarized as follows.
Step 1. According to the iteration parameter setting of the ADMM algorithm, the parameters of ADMM-Net are initialized as $\Theta_A = \{\rho_k = \rho_0, \beta_k = \beta_0, \tau_k = \tau_0\}_{k=1}^K$. The network loss function is defined as the mean square error (MSE) between the ADMM-Net output and the label data, which can be expressed as
$$L(\Theta_A) = \frac{1}{P_{\text{train}}}\sum_{p=1}^{P_{\text{train}}}\left\|\hat{\boldsymbol{\gamma}}^{(K)}(\mathbf{x}_p; \Theta_A) - \hat{\boldsymbol{\gamma}}_p\right\|_2^2 \quad (30)$$
where $\hat{\boldsymbol{\gamma}}^{(K)}(\mathbf{x}_p; \Theta_A)$ denotes the output of the $K$-th layer of ADMM-Net with $\mathbf{x}_p$ as the input and $\Theta_A$ as the parameters. The optimal parameters $\Theta_A^* = \{\rho_k^*, \beta_k^*, \tau_k^*\}_{k=1}^K$ of ADMM-Net can be obtained by minimizing the loss function through the back propagation method [29], expressed as
$$\Theta_A^* = \arg\min_{\Theta_A}L(\Theta_A) \quad (31)$$
Step 2. The Glorot method [30] is used to initialize the network parameters of the CNN. Similarly, the network loss function is defined as the MSE between the CNN output and the label data, expressed as
$$L(\Theta_C) = \frac{1}{P_{\text{train}}}\sum_{p=1}^{P_{\text{train}}}\left\|P_S\,\hat{\mathbf{Z}}_{C,N}(\hat{\mathbf{Y}}_{S,N,p}; \Theta_C) - \mathbf{Z}_{T,p}\right\|_F^2 \quad (32)$$
where $\hat{\mathbf{Z}}_{C,N}(\hat{\mathbf{Y}}_{S,N,p}; \Theta_C)$ denotes the output of the CNN with $\hat{\mathbf{Y}}_{S,N,p} = T(|\hat{\boldsymbol{\gamma}}_p|^2)/P_S$ as the input and $\Theta_C$ as the parameters. Similarly, the optimal parameters of the CNN can be obtained through the back propagation method by solving the following problem:
$$\Theta_C^* = \arg\min_{\Theta_C}L(\Theta_C) \quad (33)$$
Step 3. Based on the independent pre-training results of ADMM-Net and the CNN, DU-CNN is trained end-to-end to further improve its convergence performance. Here, the parameters of DU-CNN are initialized as $\Theta = \{\Theta_A^*, \Theta_C^*\}$ to avoid the local convergence problem that may occur when training DU-CNN directly. The network loss function is defined as
$$L(\Theta) = \frac{1}{P_{\text{train}}}\sum_{p=1}^{P_{\text{train}}}\left\|\hat{\mathbf{Z}}_{\text{DC}}(\mathbf{x}_p; \Theta) - \mathbf{Z}_{T,p}\right\|_F^2 \quad (34)$$
where $\hat{\mathbf{Z}}_{\text{DC}}(\mathbf{x}_p; \Theta)$ denotes the output of DU-CNN with $\mathbf{x}_p$ as the input and $\Theta = \{\Theta_A, \Theta_C\}$ as the parameters. Similarly, the optimal parameters of DU-CNN, $\Theta^* = \{\rho_k^*, \beta_k^*, \tau_k^*\}_{k=1}^K \cup \{\mathbf{W}_e^*, \mathbf{b}_e^*\}_{e=1}^E$, can be obtained as follows:
$$\Theta^* = \arg\min_{\Theta}L(\Theta) \quad (35)$$
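The three training steps can be sketched as follows in PyTorch; the learning rates follow Section 4, while the data loaders (assumed to yield one sample at a time) and the epoch count are illustrative, and du_cnn_forward is the sketch from Section 3:

```python
import torch

def train_du_cnn(admm_net, cnn, loader_A, loader_C, loader_E2E, epochs=200):
    """'Pre-training + fine-tuning' of Section 3.3, a minimal sketch."""
    mse = torch.nn.MSELoss()
    # Step 1: pre-train ADMM-Net on {x_p, γ̂_p}, minimizing (30)
    opt = torch.optim.Adam(admm_net.parameters(), lr=1e-4)
    for _ in range(epochs):
        for x, gamma_label in loader_A:
            loss = mse(torch.view_as_real(admm_net(x)),      # complex -> real pairs
                       torch.view_as_real(gamma_label))
            opt.zero_grad(); loss.backward(); opt.step()
    # Step 2: pre-train the CNN on {γ̂_p, Z_T,p}, minimizing (32)
    opt = torch.optim.Adam(cnn.parameters(), lr=5e-3)
    for _ in range(epochs):
        for y_norm, p_s, z_label in loader_C:                # y_norm: 1 x 1 x Nd x Ns
            loss = mse(p_s * cnn(y_norm).squeeze(0), z_label)
            opt.zero_grad(); loss.backward(); opt.step()
    # Step 3: fine-tune DU-CNN end-to-end on {x_p, Z_T,p}, minimizing (34)
    opt = torch.optim.Adam(list(admm_net.parameters()) + list(cnn.parameters()),
                           lr=2e-5)
    for _ in range(epochs):
        for x, z_label in loader_E2E:
            loss = mse(du_cnn_forward(x, admm_net, cnn), z_label)
            opt.zero_grad(); loss.backward(); opt.step()
```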

4. Simulation Results

In this section, the performance of the DU-CNN-STAP method is verified via simulations, together with a comparative analysis against the CNN-STAP and SR-STAP methods. Considering the computational complexity, dataset size and memory usage, the parameters shown in Table 1 are used to mimic an airborne radar in a clutter environment [14]. For the SR-STAP method, the iteration parameters of the ADMM algorithm are set as $\rho_0 = 1.0$, $\beta_0 = 0.01$, $\tau_0 = 0.04$ and $K_0 = 2000$. For the proposed method, the number of CNN layers in DU-CNN is $E = 5$, and the convolutional dimensions $\{c_e\times f_e\times f_e\times n_e\}_{e=1}^E$ are set as (1 × 11 × 11 × 16), (16 × 9 × 9 × 8), (8 × 7 × 7 × 4), (4 × 5 × 5 × 2) and (2 × 3 × 3 × 1). For the CNN-STAP method, the CNN is the same as that in the proposed method. In the training of ADMM-Net, the CNN and DU-CNN, the batch size is set as 128, all network parameters are optimized via the Adam optimizer, and the learning rates for the three networks are set as $10^{-4}$, $5\times10^{-3}$ and $2\times10^{-5}$, respectively.

4.1. Network Convergence Analysis

In this sub-section, the training convergences of ADMM-Net, the CNN and DU-CNN are analyzed, together with that of the CNN in the CNN-STAP method [19]. The training convergences of ADMM-Net with the number of network layers $K$ set as 20, 30 and 40 are given in Figure 6a. It can be seen that the loss of ADMM-Net decreases gradually and levels off after about 150 epochs; as the number of layers increases, the loss decreases and the convergence performance improves. Balancing computational complexity against convergence performance, the number of ADMM-Net layers is set as $K = 20$ in the following simulations. Figure 6b compares the convergences of the CNNs in the proposed method and in the CNN-STAP method. Under the same CNN network scale, the CNN in the proposed method converges to a lower loss, because its input is the high-resolution clutter space-time spectrum obtained by the ADMM algorithm, whereas the CNN-STAP method uses the Fourier-transform-based (i.e., DBF-based) low-resolution clutter space-time spectrum as its input. Figure 6c shows the convergences of the proposed DU-CNN and the CNNs in the CNN-STAP method, where CNN-New has 7 layers with convolutional dimensions (1 × 11 × 11 × 16), (16 × 9 × 9 × 12), (12 × 9 × 9 × 10), (10 × 7 × 7 × 8), (8 × 5 × 5 × 4), (4 × 5 × 5 × 2) and (2 × 3 × 3 × 1). It can be seen that (1) DU-CNN converges better than the CNN in the CNN-STAP method with the same parameters, and (2) increasing the network scale improves the convergence of the CNN in the CNN-STAP method to a result close to that of DU-CNN, but at an increased computational complexity.
In general, as the network layer number and scale increase, the nonlinear transform capability of DU-CNN becomes stronger, but the computing burden increases. In practice, to determine the appropriate layer number and scale of DU-CNN under different conditions, the following approach can be used: (1) conduct offline training of DU-CNN with different layer numbers and scales, ensuring that training converges; (2) find the point beyond which further increasing the layer number and scale no longer decreases the training loss significantly; and (3) balancing clutter suppression performance against computational complexity, choose the layer number and scale at that point for DU-CNN.

4.2. Clutter Suppression Performance

In this sub-section, the clutter suppression performance of DU-CNN-STAP is verified and compared with the CNN-STAP and SR-STAP methods. For convenience, the SR-STAP method still adopts the ADMM algorithm; hence, it is also named ADMM-STAP. The clutter space-time spectra estimated by the different methods are shown in Figure 7 and Figure 8, where Figure 7 corresponds to the side-looking case with $\varphi = 29.6°$, $\beta = 1$, $\theta_e = 0°$ and $\text{CNR} = 42.8$ dB, and Figure 8 corresponds to the non-side-looking case with $\varphi = 26.5°$, $\beta = 1.7$, $\theta_e = -5.7°$ and $\text{CNR} = 49.4$ dB.
It can be seen that the SR-STAP method obtains a high clutter space-time spectrum estimation performance under the side-looking condition, although the estimated clutter distribution is uneven. As the clutter sparsity worsens in the non-side-looking case, some interference components deviate from the clutter ridge, and the performance of the SR-STAP method deteriorates severely. Under both conditions, the CNN-STAP method can reconstruct the clutter space-time spectrum effectively: the clutter distribution is continuous, and there is less interference deviating from the clutter ridge. However, this method broadens the clutter ridge. The proposed DU-CNN-STAP method obtains a higher performance than both the CNN-STAP and SR-STAP methods, with clutter space-time spectrum estimates close to the theoretical ones.
Based on the clutter space-time spectra shown in Figure 7 and Figure 8, the clutter suppression performance of the different methods is shown in Figure 9, using the SCNR loss as the indicator. The spatial frequency of the target is set to 0, and its normalized Doppler frequency varies linearly in the range $[-0.5, 0.5]$. It can be seen that the SR-STAP and CNN-STAP methods can generate a deep null close to the zero frequency under the side-looking condition, providing good suppression of the clutter. However, the unevenly distributed clutter, the interference deviating from the clutter ridge and the broadened clutter ridge make the resulting nulls wider, reducing the slow-target detection performance. In contrast, the proposed DU-CNN-STAP method benefits from a higher clutter spectrum estimation performance; thus, the width and depth of the null are well controlled, yielding a high clutter suppression performance.

4.3. Computational Complexity Analysis

In this sub-section, the computational complexity of the proposed method is analyzed and compared with the SR-STAP and CNN-STAP methods. It should be emphasized that, because offline training and online application can be used, the analysis does not include the computational cost of network training. Using the number of multiplications as the indicator, the computational complexities of the Fourier-transform-based spectrum estimation method in (16), the ADMM algorithm in (25) and the CNN in (27)–(29) are shown in Table 2. Note that the trained ADMM-Net performs the same operations as the ADMM algorithm; thus, for the same number of iterations (network layers), their computational complexities are identical. According to Table 2, the computational complexities of the proposed method, the SR-STAP method and the CNN-STAP method for clutter space-time spectrum estimation are $C_A(K) + C_C$, $C_A(K_0)$ and $C_F + C_C$, respectively. The resulting computational complexities under the condition $M = N = N_d/5 = N_s/5 = 4{\sim}16$ are shown in Figure 10, where CNN-STAP-New corresponds to CNN-New in Figure 6c, whose network scale is increased to improve the performance of CNN-STAP. It can be seen that the computational complexities of the proposed method and the CNN-STAP method are much lower than that of the SR-STAP (i.e., ADMM-STAP) method, and the advantage grows with the dimensionality of the estimation problem. For larger numbers of array elements, the computational complexity of the proposed method is higher than that of the CNN-STAP method; however, to obtain a performance similar to that of the proposed method, the CNN-STAP method must increase its network scale and hence its computational complexity.
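For the simulation setting of Section 4 ($M = N = 10$, $N_d = N_s = 50$, $K_0 = 2000$, $K = 20$, $E = 5$), the formulas in Table 2 give the following concrete counts:

```python
# Worked evaluation of Table 2 for the Section 4 setting.
M = N = 10
Nd = Ns = 50
C_F = (M * N + 1) * Nd * Ns                        # Fourier (DBF): 252,500

def C_A(K):                                        # ADMM / ADMM-Net, K iterations
    return 2 * M * N * Nd * Ns * K

dims = [(1, 11, 16), (16, 9, 8), (8, 7, 4), (4, 5, 2), (2, 3, 1)]  # (c_e, f_e, n_e)
C_C = sum(c * f * f * n for c, f, n in dims) * Nd * Ns             # CNN: 35,225,000

print(f"SR-STAP (ADMM, K0 = 2000): {C_A(2000):.1e}")     # 1.0e+09
print(f"CNN-STAP (DBF + CNN):      {C_F + C_C:.1e}")     # 3.5e+07
print(f"DU-CNN-STAP (K = 20):      {C_A(20) + C_C:.1e}") # 4.5e+07
```

Even at this modest dimensionality, unfolding to $K = 20$ layers cuts the SR stage by two orders of magnitude, consistent with the trends in Figure 10.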

5. Measured Data Processing Results

In this section, the practical performance of the proposed DU-CNN-STAP method is verified using the Mountain-Top measured data [14]. A comparison with the SMI-STAP, SR-STAP and CNN-STAP methods is also provided, where the parameter settings for SR-STAP, CNN-STAP and the proposed DU-CNN-STAP method are the same as those in Section 4. In the Mountain-Top data, the numbers of array elements and pulses are 14 and 16, respectively, and the target, with a normalized Doppler frequency of 0.25, is located at the 147th range cell. To match the simulations, the data corresponding to 10 elements and 10 pulses are taken for processing. It is assumed that there is no array element error, and the number of guard range cells is set as 12. The SMI-STAP method takes 200 training range cells around the 147th range cell for estimation, and the other STAP methods take the data of the 154th range cell for estimation.
The clutter space-time spectrum estimation results obtained via the different STAP methods are shown in Figure 11, and the corresponding space-time filters are designed for clutter suppression, giving the target detection results shown in Figure 12. It can be seen that the proposed DU-CNN-STAP method obtains the best results on the measured data: its clutter space-time spectrum estimate is closest to the result of the SMI-STAP method, and its target detection performance is higher than that of the other two STAP methods.

6. Conclusions

In this study, the DU-CNN-STAP method is proposed for airborne radar clutter suppression and target detection, and its processing framework, network structure, dataset construction and training methods are described in detail. Simulation and experimental results under different conditions show that, compared with the SR-STAP method, the proposed method improves the clutter space-time spectrum estimation performance and reduces the computational complexity. Compared to the CNN-STAP method, the proposed method reduces the requirement for network reconstruction capability and obtains a higher clutter suppression performance. Future research will focus on improving the performance of the proposed method under non-ideal conditions (e.g., training range cells contaminated by moving targets), constructing multi-dimensional deep unfolding networks and optimizing the CNN network structure.

Author Contributions

Conceptualization, B.Z. and W.F.; methodology, B.Z. and W.F.; software, B.Z. and W.F.; validation, B.Z. and H.Z.; writing—original draft preparation, B.Z.; writing—review and editing, W.F. and H.Z.; funding acquisition, W.F. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the National Natural Science Foundation of China, No. 62001507; Young Talent fund of University Association for Science and Technology in Shaanxi, China, No. 20210106; China Postdoctoral Science Foundation, No. 2021MD703951; and Youth Talent Lifting Project of the China Association for Science and Technology, No. 2021-JCJQ-QT-018.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Guerci, J.R. Space-Time Adaptive Processing for Radar; Artech House: London, UK, 2014.
  2. Brennan, L.E.; Reed, L.S. Theory of adaptive radar. IEEE Trans. Aerosp. Electron. Syst. 1973, 2, 237–252.
  3. Reed, I.S.; Mallett, J.D.; Brennan, L.E. Rapid convergence rate in adaptive arrays. IEEE Trans. Aerosp. Electron. Syst. 1974, 6, 853–863.
  4. Melvin, W.L. A STAP overview. IEEE Aerosp. Electron. Syst. Mag. 2004, 19, 19–35.
  5. Li, Z.; Ye, H.; Liu, Z.; Sun, Z.; An, H.; Wu, J.; Yang, J. Bistatic SAR clutter-ridge matched STAP method for nonstationary clutter suppression. IEEE Trans. Geosci. Remote Sens. 2021, 60, 5216914.
  6. Wang, Y.; Chen, J.; Bao, Z.; Peng, Y. Robust space-time adaptive processing for airborne radar in nonhomogeneous clutter environments. IEEE Trans. Aerosp. Electron. Syst. 2003, 39, 70–81.
  7. Yang, Z.; Wang, Z.; Liu, W.; Lamare, R. Reduced-dimension space-time adaptive processing with sparse constraints on beam-Doppler selection. Signal Process. 2019, 157, 78–87.
  8. Peckham, C.D.; Haimovich, A.M.; Ayoub, T.F.; Goldstein, J.S.; Reid, I.S. Reduced-rank STAP performance analysis. IEEE Trans. Aerosp. Electron. Syst. 2000, 36, 664–676.
  9. Cristallini, D.; Burger, W. A robust direct data domain approach for STAP. IEEE Trans. Signal Process. 2012, 60, 1283–1294.
  10. Yang, Z.; Li, X.; Wang, H.; Jiang, W. On clutter sparsity analysis in space–time adaptive processing airborne radar. IEEE Geosci. Remote Sens. Lett. 2013, 10, 1214–1218.
  11. Sun, K.; Meng, H.; Wang, Y.; Wang, X. Direct data domain STAP using sparse representation of clutter spectrum. Signal Process. 2011, 91, 2222–2236.
  12. Feng, W.; Guo, Y.; Zhang, Y.; Gong, J. Airborne radar space time adaptive processing based on atomic norm minimization. Signal Process. 2018, 148, 31–40.
  13. Yang, Z.; Lamare, R.; Liu, W. Sparsity-based STAP using alternating direction method with gain/phase errors. IEEE Trans. Aerosp. Electron. Syst. 2017, 53, 2756–2768.
  14. Duan, K.; Wang, Z.; Xie, W.; Chen, H.; Wang, Y. Sparsity-based STAP algorithm with multiple measurement vectors via sparse Bayesian learning strategy for airborne radar. IET Signal Process. 2017, 11, 544–553.
  15. Liu, C.; Wang, T.; Zhang, S.; Ren, B. A fast space-time adaptive processing algorithm based on sparse Bayesian learning for airborne radar. Sensors 2022, 22, 2664.
  16. Zou, B.; Wang, X.; Feng, W.; Zhu, H.; Lu, F. DU-CG-STAP method based on sparse recovery and unsupervised learning for airborne radar clutter suppression. Remote Sens. 2022, 14, 3472.
  17. Dong, C.; Loy, C.C.; He, K.; Tang, X. Learning a Deep Convolutional Network for Image Super-Resolution. In European Conference on Computer Vision; Springer: Cham, Switzerland, 2014; pp. 184–199.
  18. Wang, Z.; Chen, J.; Hoi, S.C.H. Deep learning for image super-resolution: A survey. IEEE Trans. Pattern Anal. Mach. Intell. 2021, 43, 3365–3387.
  19. Duan, K.; Chen, H.; Xie, W.; Wang, Y. Deep learning for high-resolution estimation of clutter angle-Doppler spectrum in STAP. IET Radar Sonar Navig. 2022, 16, 193–207.
  20. Duan, K.; Li, X.; Xing, K.; Wang, Y. Clutter mitigation in space-based early warning radar using a convolutional neural network. J. Radars 2022, 11, 386–398.
  21. Hu, X.; Xu, F.; Guo, Y.; Feng, W.; Jin, Y. MDLI-Net: Model-driven learning imaging network for high-resolution microwave imaging with large rotating angle and sparse sampling. IEEE Trans. Geosci. Remote Sens. 2021, 60, 5212617.
  22. Monga, V.; Li, Y.; Eldar, Y. Algorithm unrolling: Interpretable, efficient deep learning for signal and image processing. IEEE Signal Process. Mag. 2021, 38, 18–44.
  23. Zhu, H.; Feng, W.; Feng, C.; Ma, T.; Zou, B. Deep unfolded gridless DOA estimation networks based on atomic norm minimization. Remote Sens. 2023, 15, 13.
  24. Yang, C.; Gu, Y.; Chen, B.; Ma, H.; So, H. Learning proximal operator methods for nonconvex sparse recovery with theoretical guarantee. IEEE Trans. Signal Process. 2020, 68, 5244–5259.
  25. Zhu, H.; Feng, W.; Feng, C.; Zou, B.; Lu, F. Deep unfolding based space-time adaptive processing method for airborne radar. J. Radars 2022, 11, 676–691.
  26. Capon, J. High-resolution frequency-wavenumber spectrum analysis. Proc. IEEE 1969, 57, 1408–1418.
  27. Boyd, S.; Parikh, N.; Chu, E.; Peleato, B.; Eckstein, J. Distributed optimization and statistical learning via the alternating direction method of multipliers. Found. Trends Mach. Learn. 2011, 3, 1–122.
  28. Beck, A.; Teboulle, M. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2009, 2, 183–202.
  29. LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444.
  30. Glorot, X.; Bengio, Y. Understanding the Difficulty of Training Deep Feedforward Neural Networks. In Proceedings of the International Conference on Artificial Intelligence and Statistics, Sardinia, Italy, 13–15 May 2010; pp. 249–256.
Figure 1. Airborne radar target detection model.
Figure 2. Processing framework of DU-CNN-STAP.
Figure 3. Network structure of ADMM-Net.
Figure 4. Network structure of CNN.
Figure 5. Datasets for ADMM-Net, CNN and DU-CNN.
Figure 6. Training convergences of different networks. (a) ADMM-Nets with different layer numbers. (b) CNNs with different inputs. (c) Different STAP networks.
Figure 7. Clutter spectrum estimation results with φ = 29.6°, β = 1, θe = 0° and CNR = 42.8 dB.
Figure 8. Clutter spectrum estimation results with φ = 26.5°, β = 1.7, θe = −5.7° and CNR = 49.4 dB.
Figure 9. SCNR loss curves of different methods.
Figure 10. Computing complexities of different methods under different conditions.
Figure 11. Clutter spectrum estimation results of Mountain-Top actual measured data.
Figure 12. Target detection results of Mountain-Top actual measured data.
Table 1. Simulation parameters.

| Parameter | Symbol | Value |
|---|---|---|
| Airplane height | $H$ | 3000 m |
| Element number in ULA | $M$ | 10 |
| Pulse number in CPI | $N$ | 10 |
| Element spacing | $d$ | 0.3 m |
| Signal wavelength | $\lambda$ | 0.6 m |
| Elevation angle | $\varphi$ | $U(5°, 45°)$ |
| Non-side-looking angle | $\theta_e$ | $U(-30°, 30°)$ |
| Clutter ridge slope | $\beta$ | $U(0.2, 5)$ |
| Clutter-to-noise ratio | CNR | $U(30, 50)$ dB |
| Number of clutter blocks | $N_c$ | 181 |
| Spatial frequency range | $[f_{s,\min}, f_{s,\max}]$ | $[-0.5, 0.5]$ |
| Doppler frequency range | $[f_{d,\min}, f_{d,\max}]$ | $[-0.5, 0.5]$ |
| Number of spatial frequencies | $N_s$ | 50 |
| Number of Doppler frequencies | $N_d$ | 50 |
| Number of training data | $P_{\text{train}}$ | 10,000 |
| Number of testing data | $P_{\text{test}}$ | 2000 |
Table 2. Computational complexities of different methods.

| Method | Symbol | Number of Multiplications |
|---|---|---|
| Fourier | $C_F$ | $(MN + 1)N_dN_s$ |
| ADMM | $C_A(K)$ | $2MNN_dN_sK$ |
| CNN | $C_C$ | $\sum_{e=1}^{E}c_ef_e^2n_eN_dN_s$ |