Article

High-Resolution ISAR Imaging and Autofocusing via 2D-ADMM-Net

1
Key Laboratory of Electronic Information Countermeasure and Simulation Technology of Ministry of Education, Xidian University, Xi’an 710071, China
2
National Lab of Radar Signal Processing, Xidian University, Xi’an 710071, China
*
Author to whom correspondence should be addressed.
Remote Sens. 2021, 13(12), 2326; https://doi.org/10.3390/rs13122326
Submission received: 7 May 2021 / Revised: 4 June 2021 / Accepted: 9 June 2021 / Published: 13 June 2021

Abstract

A deep-learning architecture, dubbed the 2D-ADMM-Net (2D-ADN), is proposed in this article. It provides effective high-resolution 2D inverse synthetic aperture radar (ISAR) imaging under scenarios of low SNRs and incomplete data by combining model-based sparse reconstruction and data-driven deep learning. Firstly, the mapping from ISAR images to their corresponding echoes in the wavenumber domain is derived. Then, a 2D alternating direction method of multipliers (ADMM) is unrolled and generalized to a deep network, where all adjustable parameters in the reconstruction layers, nonlinear transform layers, and multiplier update layers are learned by end-to-end training through back-propagation. Since the optimal parameters of each layer are learned separately, the 2D-ADN exhibits more representation flexibility and preferable reconstruction performance compared with model-driven methods. At the same time, owing to its simple structure and small number of adjustable parameters, it facilitates ISAR imaging with limited training samples better than data-driven methods. Additionally, benefiting from the good performance of the 2D-ADN, a random phase error estimation method is proposed, through which well-focused images can be acquired. Experiments demonstrate that, although trained by only a few simulated images, the 2D-ADN adapts well to measured data, and favorable imaging results with a clear background can be obtained in a short time.


1. Introduction

High-resolution inverse synthetic aperture radar (ISAR) imaging plays a significant role in space situation awareness and air target surveillance [1,2]. Under ideal observational environments with high signal-to-noise ratios (SNRs) and complete echo matrices, well-focused imaging can be acquired by classic techniques such as the range-Doppler (RD) algorithm and the polar formatting algorithm (PFA) [3]. For a target with a small radar cross section (RCS) or a long observation distance, however, the SNR of the received echoes is low due to limited transmitted power. In addition, the existence of strong jamming and the resource scheduling of cognitive radar may result in incomplete data along the range and/or azimuth directions. The complex observational environments discussed above, i.e., incomplete data and low SNRs, cause severe performance degradation or even invalidate the available imaging techniques. As ISAR images are generally sparse in the image domain, high-resolution ISAR imaging under complex observational environments based on the theory of sparse signal reconstruction has received intensive attention in the radar imaging community in recent years [4,5], in which the reconstruction of sparse images (i.e., the distribution of dominant scattering centers) from noisy or gapped echoes given the observation dictionary is sought.
In addition, the motion of the target can be decomposed into translational motion and rotational motion, where the former is not beneficial to imaging and needs to be compensated by range alignment and autofocusing. Traditional autofocusing algorithms can obtain satisfactory imaging results from complete radar echoes [6]. For complex observational environments, however, they cannot achieve good performance due to echo deficiency and low SNRs. In recent years, parametric autofocusing techniques [7] have been applied to sparse imaging. Although they are superior to traditional methods under sparse aperture conditions, their performance depends on imaging quality. Therefore, in turn, a better sparse reconstruction method is needed.
The available sparse ISAR imaging methods can be divided into three categories: (1) model-driven methods; (2) data-driven methods; and (3) combined model-driven and data-driven methods. Among them, model-driven methods construct the sparse observation model and obtain high-resolution images by $\ell_0$-norm or $\ell_1$-norm optimization. $\ell_0$-norm optimization, e.g., orthogonal matching pursuit (OMP) [8] and the smoothed $\ell_0$-norm method [9], cannot guarantee that the solution is the sparsest and may converge to a local minimum. $\ell_1$-norm optimization, e.g., the fast iterative shrinkage-thresholding algorithm (FISTA) [10,11] and the alternating direction method of multipliers (ADMM) [12,13], is the convex relaxation of the $\ell_0$-norm [14]. However, the regularization parameter directly affects performance, and how to determine its optimal value remains an open problem [4]. Additionally, vectorized optimization requires long operating times and a large memory storage space. To improve efficiency, methods based on matrix operations, such as 2D-FISTA [15] and 2D-ADMM [16], have been proposed.
Data-driven methods solve the nonlinear mapping from echoes to the 2D image by designing and training a deep network [17]. In the training process, the target echoes and the corresponding ISAR images are adopted as the inputs and the labels, respectively, and the loss function is the normalized mean square error (NMSE) between the network output and the label. To minimize the loss function (i.e., to obtain the optimal network parameters), the network parameters are randomly initialized and updated iteratively by gradient descent until convergence [18]. Then, the trained network is applied to generate focused images of an unknown target. Facilitated by off-line network training, such methods reconstruct multiple images rapidly; typical networks include the complex-value deep neural network (CV-DNN) [19]. Nevertheless, the subjective network design process lacks a unified criterion and theoretical support, which makes it difficult to analyze the influence of the network structure and parameter settings on reconstruction performance. In addition, the large number of unknown parameters requires massive training samples to avoid overfitting.
The combined model-driven and data-driven methods first expand a model-driven method into a deep network [20], then utilize only a few training samples to learn the optimal values of the adjustable parameters [21], and finally output the focused image of an unknown target by the trained network. Such methods effectively solve the difficulties in: (1) setting proper parameters for model-driven methods; (2) clearly explaining the physical meaning of the network; and (3) generating a large number of training samples to avoid overfitting for data-driven methods. A common technique to expand a model-driven method is unrolling [22], which utilizes a finite-layer hierarchical architecture to implement the iterations. As a typical imaging network, the deep ADMM network [23] requires measured data for effective training, which are usually limited by observation conditions, and it does not consider autofocusing for sparse aperture ISAR imaging.
To tackle the above-mentioned problems, this article proposes the 2D-ADMM-Net (2D-ADN) to achieve well-focused 2D imaging under complex observational environments, and its key contributions mainly include the following: (a) The mapping from ISAR images to echoes in the wavenumber domain is established, which is formulated as a 2D sparse reconstruction problem; the 2D-ADMM method is then provided with phase error estimation for focused imaging. (b) Based on the 2D-ADMM, the 2D-ADN is designed to include reconstruction layers, nonlinear transform layers, and multiplier update layers; the adjustable parameters are estimated by minimizing the loss function through back-propagation in the complex domain. (c) Simulation results demonstrate that the 2D-ADN, trained with a small number of samples generated from the point-scattering model, obtains the best reconstruction performance. For both complete and incomplete data with low SNRs, the 2D-ADN combined with random phase error estimation obtains better-focused images of measured aircraft data with a clearer background than the available methods.
The remainder of this article is organized as follows. Section 2 establishes the sparse observation model for high-resolution 2D imaging and provides the iterative formulae of 2D-ADMM with random phase error estimation. Section 3 introduces the construction of the 2D-ADN in detail. Section 4 gives the network loss function and derives the back-propagation formulae in the complex domain. Section 5 presents various experiments that verify the effectiveness of the 2D-ADN. In Section 6, we discuss the performance of the 2D-ADN, and Section 7 concludes the article with suggestions for future work.

2. Modeling and Solving

2.1. 2D Modeling

After translational motion compensation [24,25], the echoes $\mathbf{Y} \in \mathbb{C}^{P \times Q}$ in the wavenumber domain satisfy:
$$\mathbf{Y} = \boldsymbol{\Phi}_1 \mathbf{X} \boldsymbol{\Phi}_2 \mathbf{E} + \mathbf{N} \tag{1}$$
where $\boldsymbol{\Phi}_1 \in \mathbb{C}^{P \times U}$ is the over-complete range dictionary, $\mathbf{X} \in \mathbb{C}^{U \times V}$ is the 2D distribution of the scattering centers, $\boldsymbol{\Phi}_2 \in \mathbb{C}^{V \times Q}$ is the over-complete Doppler dictionary, $\mathbf{N} \in \mathbb{C}^{P \times Q}$ is the complex noise matrix, and $\mathbf{E} \in \mathbb{C}^{Q \times Q}$ is the diagonal random phase error matrix:
$$\mathbf{E} = \operatorname{diag}\left( e^{j\varphi_1}, \ldots, e^{j\varphi_q}, \ldots, e^{j\varphi_Q} \right) \tag{2}$$
In (2), $\varphi_q$ denotes the phase error of the $q$th echo.
For convenience, we vectorize (1) as:
$$\mathbf{y} = \left( (\boldsymbol{\Phi}_2 \mathbf{E})^{\mathrm{T}} \otimes \boldsymbol{\Phi}_1 \right) \mathbf{x} + \mathbf{n} = \boldsymbol{\Phi} \mathbf{x} + \mathbf{n} \tag{3}$$
where $\mathbf{y} \in \mathbb{C}^{PQ}$ is the vector form of $\mathbf{Y}$, $\otimes$ is the Kronecker product, $\boldsymbol{\Phi} = (\boldsymbol{\Phi}_2 \mathbf{E})^{\mathrm{T}} \otimes \boldsymbol{\Phi}_1 \in \mathbb{C}^{PQ \times UV}$, $\mathbf{x} \in \mathbb{C}^{UV}$ is the vector form of $\mathbf{X}$, and $\mathbf{n} \in \mathbb{C}^{PQ}$ is the vector form of $\mathbf{N}$.
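For concreteness, the following minimal NumPy sketch builds the observation model (1) and verifies its vectorized form (3). The paper's experiments were implemented in MATLAB; this Python illustration and all of its dimensions, dictionaries, and scatterer placements are assumptions of ours, not the paper's setup.

```python
import numpy as np

# Build the observation model (1) and verify the Kronecker-vectorized form (3).
# Dimensions and dictionaries are illustrative placeholders.
P, Q, U, V = 8, 8, 12, 12
rng = np.random.default_rng(0)

Phi1 = np.exp(-2j * np.pi * rng.random((P, U)))        # over-complete range dictionary
Phi2 = np.exp(-2j * np.pi * rng.random((V, Q)))        # over-complete Doppler dictionary
E = np.diag(np.exp(2j * np.pi * rng.random(Q)))        # diagonal random phase error matrix, Eq. (2)
X = np.zeros((U, V), dtype=complex)
X[rng.integers(0, U, 5), rng.integers(0, V, 5)] = 1.0  # sparse scattering centers

Y = Phi1 @ X @ Phi2 @ E                                # noise-free echoes, Eq. (1)

# vec(A X B) = (B^T kron A) vec(X) holds for column stacking, i.e., order='F'.
Phi = np.kron((Phi2 @ E).T, Phi1)
y = Phi @ X.flatten(order="F")
assert np.allclose(y, Y.flatten(order="F"))            # Eq. (3) without noise
```

In practice, the $PQ \times UV$ matrix $\boldsymbol{\Phi}$ is never formed explicitly; the matrix-form methods below operate on the separable model precisely to avoid its time and memory cost.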

2.2. The 2D-ADMM Method

Finding the optimal solution to (3) is a linear inverse problem, which can be converted into an unconstrained optimization by introducing a regularization term:
$$\min_{\mathbf{x}} \left\{ \frac{1}{2} \left\| \boldsymbol{\Phi}\mathbf{x} - \mathbf{y} \right\|_2^2 + \lambda \left\| \mathbf{x} \right\|_1 \right\} \tag{4}$$
where $\lambda$ is the regularization parameter.
According to the variable splitting technique [26], (4) is equivalent to:
$$\min_{\mathbf{x},\mathbf{z}} \left\{ \frac{1}{2} \left\| \boldsymbol{\Phi}\mathbf{x} - \mathbf{y} \right\|_2^2 + \lambda \left\| \mathbf{z} \right\|_1 \right\} \quad \text{s.t. } \mathbf{x} = \mathbf{z} \tag{5}$$
and the augmented Lagrangian function is:
$$\min_{\mathbf{x},\mathbf{z},\boldsymbol{\alpha}} L_{\rho}(\mathbf{x},\mathbf{z},\boldsymbol{\alpha}), \quad L_{\rho}(\mathbf{x},\mathbf{z},\boldsymbol{\alpha}) = \frac{1}{2} \left\| \boldsymbol{\Phi}\mathbf{x} - \mathbf{y} \right\|_2^2 + \lambda \left\| \mathbf{z} \right\|_1 + \left\langle \boldsymbol{\alpha}, \mathbf{x} - \mathbf{z} \right\rangle + \frac{\rho}{2} \left\| \mathbf{z} - \mathbf{x} \right\|_2^2 \tag{6}$$
where $\langle \cdot, \cdot \rangle$ is the inner product, $\boldsymbol{\alpha}$ is the vector of Lagrangian multipliers, and $\rho$ is the penalty parameter.
ADMM decomposes (6) into three sub-problems by minimizing $L_{\rho}(\mathbf{x},\mathbf{z},\boldsymbol{\alpha})$ with respect to $\mathbf{x}$, $\mathbf{z}$, and $\boldsymbol{\alpha}$, respectively:
$$\begin{cases} \mathbf{x}^{(n)} = \arg\min_{\mathbf{x}} \frac{1}{2} \left\| \boldsymbol{\Phi}\mathbf{x} - \mathbf{y} \right\|_2^2 + \left\langle \boldsymbol{\alpha}^{(n-1)}, \mathbf{x} - \mathbf{z}^{(n-1)} \right\rangle + \frac{\rho}{2} \left\| \mathbf{z}^{(n-1)} - \mathbf{x} \right\|_2^2 \\ \mathbf{z}^{(n)} = \arg\min_{\mathbf{z}} \lambda \left\| \mathbf{z} \right\|_1 + \left\langle \boldsymbol{\alpha}^{(n-1)}, \mathbf{x}^{(n)} - \mathbf{z} \right\rangle + \frac{\rho}{2} \left\| \mathbf{z} - \mathbf{x}^{(n)} \right\|_2^2 \\ \boldsymbol{\alpha}^{(n)} = \boldsymbol{\alpha}^{(n-1)} + \rho \left( \mathbf{x}^{(n)} - \mathbf{z}^{(n)} \right) \end{cases} \tag{7}$$
where $n$ is the iteration index. Let $\mathbf{b} = \boldsymbol{\alpha}/\rho$; the solutions to (7) satisfy:
$$\begin{cases} \mathbf{x}^{(n)} = \left( \mathbf{z}^{(n-1)} - \mathbf{b}^{(n-1)} \right) - \frac{1}{1+\rho} \boldsymbol{\Phi}^{\mathrm{H}} \left( \boldsymbol{\Phi} \left( \mathbf{z}^{(n-1)} - \mathbf{b}^{(n-1)} \right) - \mathbf{y} \right) \\ \mathbf{z}^{(n)} = S\left( \mathbf{x}^{(n)} + \mathbf{b}^{(n-1)}; \lambda/\rho \right) \\ \mathbf{b}^{(n)} = \mathbf{b}^{(n-1)} + \eta \left( \mathbf{x}^{(n)} - \mathbf{z}^{(n)} \right) \end{cases} \tag{8}$$
where $S(\cdot)$ is the shrinkage function [27] defined by $S(w;\tau) = \operatorname{sign}(w)\max\{|w| - \tau, 0\}$, $\tau$ is the threshold, and $\eta$ is the update rate of the Lagrangian multiplier.
The 2D-ADMM estimates the 2D image $\mathbf{X}^{(n)}$ by:
$$\mathbf{X}^{(n)} = \left( \mathbf{Z}^{(n-1)} - \mathbf{B}^{(n-1)} \right) - \frac{1}{1+\rho} \boldsymbol{\Phi}_1^{\mathrm{H}} \left( \boldsymbol{\Phi}_1 \left( \mathbf{Z}^{(n-1)} - \mathbf{B}^{(n-1)} \right) \boldsymbol{\Phi}_2 \mathbf{E} - \mathbf{Y} \right) \left( \boldsymbol{\Phi}_2 \mathbf{E} \right)^{\mathrm{H}} \tag{9}$$
In addition, the matrix forms of $\mathbf{z}$ and $\mathbf{b}$, i.e., $\mathbf{Z}$ and $\mathbf{B}$, are calculated by:
$$\mathbf{Z}^{(n)} = S\left( \mathbf{X}^{(n)} + \mathbf{B}^{(n-1)}; \lambda/\rho \right) \tag{10}$$
$$\mathbf{B}^{(n)} = \mathbf{B}^{(n-1)} + \eta \left( \mathbf{X}^{(n)} - \mathbf{Z}^{(n)} \right) \tag{11}$$
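A compact sketch of the matrix-form iterations (9)-(11) is given below; the helper names and all parameter values are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def soft_threshold(W, tau):
    """Complex shrinkage S(w; tau): shrinks each entry's magnitude by tau."""
    mag = np.abs(W)
    return np.where(mag > tau, (1.0 - tau / np.maximum(mag, 1e-12)) * W, 0.0)

def admm_2d(Y, Phi1, Phi2E, lam=0.05, rho=0.8, eta=1.0, n_iter=50):
    """Sketch of the 2D-ADMM updates (9)-(11); Phi2E denotes the product
    Phi2 @ E, and the parameter values here are illustrative."""
    X = np.zeros((Phi1.shape[1], Phi2E.shape[0]), dtype=complex)
    Z = np.zeros_like(X)
    B = np.zeros_like(X)
    for _ in range(n_iter):
        D = Z - B
        # Reconstruction step, Eq. (9): gradient step on the data-fidelity term
        X = D - (Phi1.conj().T @ (Phi1 @ D @ Phi2E - Y) @ Phi2E.conj().T) / (1.0 + rho)
        # Shrinkage step, Eq. (10)
        Z = soft_threshold(X + B, lam / rho)
        # Multiplier update, Eq. (11)
        B = B + eta * (X - Z)
    return X
```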

2.3. Phase Error Estimation and Algorithm Summary

The phase error is estimated by optimizing the following objective function [28]:
$$\hat{\varphi}_q = \arg\min_{\varphi_q} \left\| \mathbf{Y}(:,q) - e^{j\varphi_q} \left( \boldsymbol{\Phi}_1 \mathbf{X} \boldsymbol{\Phi}_2 \right)(:,q) \right\|_2^2 \tag{12}$$
where $\mathbf{Y}(:,q)$ is the $q$th column of $\mathbf{Y}$, and $(\boldsymbol{\Phi}_1 \mathbf{X} \boldsymbol{\Phi}_2)(:,q)$ is the $q$th column of $\boldsymbol{\Phi}_1 \mathbf{X} \boldsymbol{\Phi}_2$.
Setting the derivative of (12) with respect to $\varphi_q$ to zero yields:
$$\hat{\varphi}_q = \tan^{-1} \frac{\operatorname{Im}\left( \left( \left( \boldsymbol{\Phi}_1 \mathbf{X} \boldsymbol{\Phi}_2 \right)(:,q) \right)^{\mathrm{H}} \mathbf{Y}(:,q) \right)}{\operatorname{Re}\left( \left( \left( \boldsymbol{\Phi}_1 \mathbf{X} \boldsymbol{\Phi}_2 \right)(:,q) \right)^{\mathrm{H}} \mathbf{Y}(:,q) \right)} \tag{13}$$
where $\operatorname{Re}(\cdot)$ and $\operatorname{Im}(\cdot)$ represent the real and imaginary parts, respectively, and the phase error matrix $\hat{\mathbf{E}}$ is constructed by (2).
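The sketch below is one possible realization of (13); it uses the four-quadrant arctangent so the recovered angle is unambiguous, and the function name is ours.

```python
import numpy as np

def estimate_phase_errors(Y, Phi1, X, Phi2):
    """Sketch of Eq. (13): per-pulse phase error from the column-wise inner
    product between the model prediction and the received echoes."""
    A = Phi1 @ X @ Phi2                               # model echoes without phase error
    inner = np.sum(A.conj() * Y, axis=0)              # (A(:,q))^H Y(:,q) for each pulse q
    phi_hat = np.arctan2(inner.imag, inner.real)      # Eq. (13), four-quadrant form
    return np.diag(np.exp(1j * phi_hat))              # phase error matrix, Eq. (2)
```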
Algorithm 1 summarizes the high-resolution 2D ISAR imaging and autofocusing method based on 2D-ADMM and random phase error estimation, where $T$ denotes the number of outer autofocus iterations and $N$ the number of inner ADMM iterations. Analysis and experiments have shown that the choice of $\lambda$ and $\rho$ has a great influence on the imaging quality, and improper initialization may generate a defocused image with a noisy background. In addition, $\lambda$ and $\rho$ cannot be adaptively adjusted in each iteration, demonstrating a lack of flexibility.
Algorithm 1. Autofocusing by 2D-ADMM
1. Initialize $\hat{\mathbf{E}} = \mathbf{I}$, $\lambda$, and $\rho$
2. For $t = 1:T$
       For $n = 1:N$
           Update $\mathbf{X}_t^{(n)}$ by (9);
           Update $\mathbf{Z}_t^{(n)}$ by (10);
           Update $\mathbf{B}_t^{(n)}$ by (11);
       End
       Calculate $\hat{\mathbf{E}}$ by (13) and (2);
   End
3. Output $\mathbf{X}_T^{(N+1)}$ and $\hat{\mathbf{E}}$
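Algorithm 1 then amounts to alternating the two sketches above; the composition below is a hypothetical illustration that reuses the admm_2d and estimate_phase_errors helpers.

```python
import numpy as np

def autofocus_admm(Y, Phi1, Phi2, n_outer=10, **admm_kwargs):
    """Sketch of Algorithm 1, assuming the admm_2d and
    estimate_phase_errors helpers sketched above."""
    E_hat = np.eye(Y.shape[1], dtype=complex)             # step 1: initialize E = I
    for _ in range(n_outer):                              # step 2: outer loop t = 1..T
        X = admm_2d(Y, Phi1, Phi2 @ E_hat, **admm_kwargs)  # inner 2D-ADMM loop, Eqs. (9)-(11)
        E_hat = estimate_phase_errors(Y, Phi1, X, Phi2)   # refocus: Eqs. (13) and (2)
    return X, E_hat                                       # step 3: image and phase errors
```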

3. Structure of 2D-ADN

To tackle the aforementioned problems, we modified the 2D-ADMM and expanded it into the 2D-ADN. There are similarities between deep networks and iterative algorithms [22] such as 2D-ADMM. In particular, matrix multiplication is similar to the linear mapping of a deep network, the shrinkage function is similar to the nonlinear activation, and the adjustable parameters are similar to network parameters. Therefore, the 2D-ADMM algorithm can be unfolded into the 2D-ADN. As shown in Figure 1, the network has $N$ stages, and stage $n$, $n \in [1, N]$, represents the $n$th iteration described by Algorithm 1. Each stage consists of three layers, i.e., the reconstruction layer, the nonlinear transform layer, and the multiplier update layer, which correspond to (9), (10), and (11), respectively. The inputs of the 2D-ADN are echoes in the 2D wavenumber domain, and the output is the reconstructed 2D high-resolution image. Below, we derive the forward-propagation formulae of each layer.

3.1. Reconstruction Layer

As shown in Figure 2, the inputs of the reconstruction layer are $\mathbf{Z}^{(n-1)}$ and $\mathbf{B}^{(n-1)}$, and the output is:
$$\mathbf{X}^{(n)} = \left( \mathbf{Z}^{(n-1)} - \mathbf{B}^{(n-1)} \right) - \frac{1}{1+\rho^{(n)}} \boldsymbol{\Phi}_1^{\mathrm{H}} \left( \boldsymbol{\Phi}_1 \left( \mathbf{Z}^{(n-1)} - \mathbf{B}^{(n-1)} \right) \boldsymbol{\Phi}_2 \mathbf{E} - \mathbf{Y} \right) \left( \boldsymbol{\Phi}_2 \mathbf{E} \right)^{\mathrm{H}} \tag{14}$$
where the penalty parameter $\rho^{(n)}$ is the adjustable parameter.
For $n = 1$, $\mathbf{Z}^{(0)}$ and $\mathbf{B}^{(0)}$ are initialized to zero matrices, and thus the output is:
$$\mathbf{X}^{(1)} = \frac{1}{1+\rho^{(1)}} \boldsymbol{\Phi}_1^{\mathrm{H}} \mathbf{Y} \left( \boldsymbol{\Phi}_2 \mathbf{E} \right)^{\mathrm{H}} \tag{15}$$
For $n \in [1, N]$, the output serves as the input of the $\mathbf{Z}^{(n)}$ and $\mathbf{B}^{(n)}$ layers. For $n = N+1$, the output is adopted as the input of the loss function.

3.2. Nonlinear Transform Layer

As shown in Figure 3, the inputs of the nonlinear transform layer are $\mathbf{X}^{(n)}$ and $\mathbf{B}^{(n-1)}$, and the output is:
$$\mathbf{Z}^{(n)} = S_{\mathrm{PLF}}\left( \mathbf{X}^{(n)} + \mathbf{B}^{(n-1)}; \left\{ p_i, q_i^{(n)} \right\}_{i=1}^{N_c} \right) \tag{16}$$
To learn a more flexible nonlinear activation function, we substituted a piecewise linear function $S_{\mathrm{PLF}}(\cdot)$ for the shrinkage function [29]. Specifically, $S_{\mathrm{PLF}}(\cdot)$ is determined by $N_c$ control points $\{p_i, q_i^{(n)}\}_{i=1}^{N_c}$, where $p_i$ and $q_i^{(n)}$ denote the predefined position and the adjustable value of the $i$th point, respectively. In particular, $S_{\mathrm{PLF}}(\cdot)$ is applied to the real and imaginary parts of the complex signal separately.
For $n = 1$, $\mathbf{B}^{(0)}$ is initialized as a zero matrix and the output is:
$$\mathbf{Z}^{(1)} = S_{\mathrm{PLF}}\left( \mathbf{X}^{(1)}; \left\{ p_i, q_i^{(1)} \right\}_{i=1}^{N_c} \right) \tag{17}$$
The output of this layer serves as the input of the $\mathbf{X}^{(n+1)}$ and $\mathbf{B}^{(n)}$ layers.
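One possible realization of this activation is linear interpolation between the control points, as sketched below; np.interp clamps inputs outside the control-point range, which is an assumption of ours since the boundary treatment is not spelled out here.

```python
import numpy as np

def plf(W, positions, values):
    """Sketch of S_PLF: linear interpolation through the control points
    (p_i, q_i), applied to the real and imaginary parts separately."""
    return (np.interp(W.real, positions, values)
            + 1j * np.interp(W.imag, positions, values))

# Illustrative initialization: a soft-threshold function with tau = 1/20
# sampled at 101 equally spaced control points; the values q_i are trainable.
p = np.linspace(-1.0, 1.0, 101)
q = np.sign(p) * np.maximum(np.abs(p) - 1.0 / 20, 0.0)
```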

3.3. Multiplier Update Layer

As shown in Figure 4, the inputs of the multiplier update layer are $\mathbf{B}^{(n-1)}$, $\mathbf{X}^{(n)}$, and $\mathbf{Z}^{(n)}$, and the output is:
$$\mathbf{B}^{(n)} = \mathbf{B}^{(n-1)} + \eta^{(n)} \left( \mathbf{X}^{(n)} - \mathbf{Z}^{(n)} \right) \tag{18}$$
where the learning rate $\eta^{(n)}$ is an adjustable parameter.
For $n = 1$, $\mathbf{B}^{(0)}$ is initialized as a zero matrix and the output is:
$$\mathbf{B}^{(1)} = \eta^{(1)} \left( \mathbf{X}^{(1)} - \mathbf{Z}^{(1)} \right) \tag{19}$$
For $n \in [1, N-1]$, the output of this layer serves as the input of the $\mathbf{X}^{(n+1)}$, $\mathbf{Z}^{(n+1)}$, and $\mathbf{B}^{(n+1)}$ layers. For $n = N$, the output is adopted as the input of the final reconstruction layer.
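Putting the three layers together, one forward stage of the 2D-ADN could be sketched as follows, assuming the plf helper above; the per-stage quantities rho_n, q_n, and eta_n are the learnable parameters.

```python
import numpy as np

def adn_stage(Y, Z_prev, B_prev, Phi1, Phi2E, rho_n, p, q_n, eta_n):
    """Sketch of one 2D-ADN stage, composing Eqs. (14), (16), and (18)."""
    D = Z_prev - B_prev
    # Reconstruction layer, Eq. (14)
    X = D - (Phi1.conj().T @ (Phi1 @ D @ Phi2E - Y) @ Phi2E.conj().T) / (1 + rho_n)
    # Nonlinear transform layer, Eq. (16), with the learned activation
    Z = plf(X + B_prev, p, q_n)
    # Multiplier update layer, Eq. (18)
    B = B_prev + eta_n * (X - Z)
    return X, Z, B
```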

4. Training of 2D-ADN

4.1. Loss Function

In this article, the loss function $E(\Theta)$ is defined as the normalized mean square error (NMSE) between the network output $\hat{\mathbf{X}}$ and the label image $\mathbf{X}_{gt}$, i.e., the ground truth of the scattering center distribution:
$$E(\Theta) = \frac{1}{\gamma} \sum_{(\mathbf{Y}, \mathbf{X}_{gt}) \in \Gamma} \frac{\left\| \hat{\mathbf{X}}(\mathbf{Y}, \Theta) - \mathbf{X}_{gt} \right\|_F^2}{\left\| \mathbf{X}_{gt} \right\|_F^2} \tag{20}$$
where $\mathbf{Y}$ are the input echoes in the wavenumber domain defined by (1), $\Theta = \{\rho^{(n)}, q_i^{(n)}, \eta^{(n)}\}$ is the set of adjustable parameters, $\|\cdot\|_F$ is the Frobenius norm, and $\Gamma = \{(\mathbf{Y}, \mathbf{X}_{gt})\}$ is the training set with $\operatorname{card}\{\Gamma\} = \gamma$.
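A direct sketch of (20), with an illustrative function name:

```python
import numpy as np

def nmse_loss(X_hat_list, X_gt_list):
    """Sketch of the loss (20): the NMSE averaged over the training set."""
    errs = [np.linalg.norm(Xh - Xg, "fro") ** 2 / np.linalg.norm(Xg, "fro") ** 2
            for Xh, Xg in zip(X_hat_list, X_gt_list)]
    return float(np.mean(errs))
```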

4.2. Back-Propagation

We optimized the parameters of the 2D-ADN utilizing the gradient-based limited-memory Broyden–Fletcher–Goldfarb–Shanno (L-BFGS) algorithm. To this end, we computed the gradients of the loss function with respect to $\Theta = \{\rho^{(n)}, q_i^{(n)}, \eta^{(n)}\}$ through back-propagation over the deep architecture. Following the structures and writing styles of the available literature on deep networks [30] and the conjugate complex derivative of composite functions [31], we derived the back-propagation formulae of the three layers, respectively. To be consistent with the previous definitions, the gradients of the matrices are expressed in matrix form for convenience, and they are also calculated in matrix form for efficiency.
As shown in Figure 5, the gradient transferred to the reconstruction layer through $\mathbf{B}^{(n)}$ and $\mathbf{Z}^{(n)}$ for $n \in [1, N]$ satisfies:
$$\frac{\partial E}{\partial \mathbf{X}^{(n)\mathrm{T}}} = \operatorname{Re}\left( \frac{\partial E}{\partial \mathbf{Z}^{(n)\mathrm{T}}} \right) \operatorname{Re}\left( \frac{\partial \mathbf{Z}^{(n)}}{\partial \mathbf{X}^{(n)\mathrm{T}}} \right) + j \operatorname{Im}\left( \frac{\partial E}{\partial \mathbf{Z}^{(n)\mathrm{T}}} \right) \operatorname{Im}\left( \frac{\partial \mathbf{Z}^{(n)}}{\partial \mathbf{X}^{(n)\mathrm{T}}} \right) + \operatorname{Re}\left( \frac{\partial E}{\partial \mathbf{B}^{(n)\mathrm{T}}} \right) \operatorname{Re}\left( \frac{\partial \mathbf{B}^{(n)}}{\partial \mathbf{X}^{(n)\mathrm{T}}} \right) + j \operatorname{Im}\left( \frac{\partial E}{\partial \mathbf{B}^{(n)\mathrm{T}}} \right) \operatorname{Im}\left( \frac{\partial \mathbf{B}^{(n)}}{\partial \mathbf{X}^{(n)\mathrm{T}}} \right) \tag{21}$$
For $n = N+1$, the gradient of the loss function transferred to $\mathbf{X}^{(n)}$ equals:
$$\frac{\partial E}{\partial \mathbf{X}^{(n)\mathrm{T}}} = \frac{\hat{\mathbf{X}}(\mathbf{Y}, \Theta) - \mathbf{X}_{gt}}{\left\| \mathbf{X}_{gt} \right\|_F^2 \left\| \hat{\mathbf{X}}(\mathbf{Y}, \Theta) - \mathbf{X}_{gt} \right\|_F^2} \tag{22}$$
The gradient of $\rho^{(n)}$ is:
$$\frac{\partial E}{\partial \rho^{(n)}} = \sum \left( \operatorname{Re}\left( \frac{\partial E}{\partial \mathbf{X}^{(n)\mathrm{T}}} \right) \operatorname{Re}\left( \frac{\partial \mathbf{X}^{(n)}}{\partial \rho^{(n)}} \right) \right) + \sum \left( \operatorname{Im}\left( \frac{\partial E}{\partial \mathbf{X}^{(n)\mathrm{T}}} \right) \operatorname{Im}\left( \frac{\partial \mathbf{X}^{(n)}}{\partial \rho^{(n)}} \right) \right) \tag{23}$$
where:
$$\frac{\partial \mathbf{X}^{(n)}}{\partial \rho^{(n)}} = \frac{1}{\left(1+\rho^{(n)}\right)^2} \boldsymbol{\Phi}_1^{\mathrm{H}} \left( \boldsymbol{\Phi}_1 \left( \mathbf{Z}^{(n-1)} - \mathbf{B}^{(n-1)} \right) \boldsymbol{\Phi}_2 \mathbf{E} - \mathbf{Y} \right) \left( \boldsymbol{\Phi}_2 \mathbf{E} \right)^{\mathrm{H}} \tag{24}$$
and the gradients transferred to $\mathbf{Z}^{(n-1)}$ and $\mathbf{B}^{(n-1)}$ are calculated by:
$$\frac{\partial E}{\partial \mathbf{Z}^{(n-1)\mathrm{T}}} = \operatorname{Re}\left( \frac{\partial E}{\partial \mathbf{X}^{(n)\mathrm{T}}} \right) \operatorname{Re}\left( \frac{\partial \mathbf{X}^{(n)}}{\partial \mathbf{Z}^{(n-1)\mathrm{T}}} \right) + j \operatorname{Im}\left( \frac{\partial E}{\partial \mathbf{X}^{(n)\mathrm{T}}} \right) \operatorname{Im}\left( \frac{\partial \mathbf{X}^{(n)}}{\partial \mathbf{Z}^{(n-1)\mathrm{T}}} \right) \tag{25}$$
where:
$$\frac{\partial \mathbf{X}^{(n)}}{\partial \mathbf{Z}^{(n-1)\mathrm{T}}} = \mathbf{1} - \frac{\boldsymbol{\Phi}_1^{\mathrm{H}} \left( \boldsymbol{\Phi}_1 \mathbf{1} \boldsymbol{\Phi}_2 \mathbf{E} \right) \left( \boldsymbol{\Phi}_2 \mathbf{E} \right)^{\mathrm{H}}}{1+\rho^{(n)}} \tag{26}$$
in which $\mathbf{1}$ represents the all-ones matrix, and:
$$\frac{\partial E}{\partial \mathbf{B}^{(n-1)\mathrm{T}}} = \operatorname{Re}\left( \frac{\partial E}{\partial \mathbf{X}^{(n)\mathrm{T}}} \right) \operatorname{Re}\left( \frac{\partial \mathbf{X}^{(n)}}{\partial \mathbf{B}^{(n-1)\mathrm{T}}} \right) + j \operatorname{Im}\left( \frac{\partial E}{\partial \mathbf{X}^{(n)\mathrm{T}}} \right) \operatorname{Im}\left( \frac{\partial \mathbf{X}^{(n)}}{\partial \mathbf{B}^{(n-1)\mathrm{T}}} \right) \tag{27}$$
where:
$$\frac{\partial \mathbf{X}^{(n)}}{\partial \mathbf{B}^{(n-1)\mathrm{T}}} = -\frac{\partial \mathbf{X}^{(n)}}{\partial \mathbf{Z}^{(n-1)\mathrm{T}}} \tag{28}$$
As shown in Figure 6, the gradient transferred to the nonlinear transform layer through $\mathbf{B}^{(n)}$ and $\mathbf{X}^{(n+1)}$ satisfies:
$$\frac{\partial E}{\partial \mathbf{Z}^{(n)\mathrm{T}}} = \operatorname{Re}\left( \frac{\partial E}{\partial \mathbf{B}^{(n)\mathrm{T}}} \right) \operatorname{Re}\left( \frac{\partial \mathbf{B}^{(n)}}{\partial \mathbf{Z}^{(n)\mathrm{T}}} \right) + j \operatorname{Im}\left( \frac{\partial E}{\partial \mathbf{B}^{(n)\mathrm{T}}} \right) \operatorname{Im}\left( \frac{\partial \mathbf{B}^{(n)}}{\partial \mathbf{Z}^{(n)\mathrm{T}}} \right) + \operatorname{Re}\left( \frac{\partial E}{\partial \mathbf{X}^{(n+1)\mathrm{T}}} \right) \operatorname{Re}\left( \frac{\partial \mathbf{X}^{(n+1)}}{\partial \mathbf{Z}^{(n)\mathrm{T}}} \right) + j \operatorname{Im}\left( \frac{\partial E}{\partial \mathbf{X}^{(n+1)\mathrm{T}}} \right) \operatorname{Im}\left( \frac{\partial \mathbf{X}^{(n+1)}}{\partial \mathbf{Z}^{(n)\mathrm{T}}} \right) \tag{29}$$
The gradient calculations of $q_i^{(n)}$, $\mathbf{B}^{(n-1)}$, and $\mathbf{X}^{(n)}$ are consistent with the derivation of the piecewise linear function defined in [29].
As shown in Figure 7, the gradient transferred to the multiplier update layer through $\mathbf{X}^{(n+1)}$, $\mathbf{Z}^{(n+1)}$, and $\mathbf{B}^{(n+1)}$ for $n \in [1, N-1]$ satisfies:
$$\frac{\partial E}{\partial \mathbf{B}^{(n)\mathrm{T}}} = \operatorname{Re}\left( \frac{\partial E}{\partial \mathbf{B}^{(n+1)\mathrm{T}}} \right) \operatorname{Re}\left( \frac{\partial \mathbf{B}^{(n+1)}}{\partial \mathbf{B}^{(n)\mathrm{T}}} \right) + j \operatorname{Im}\left( \frac{\partial E}{\partial \mathbf{B}^{(n+1)\mathrm{T}}} \right) \operatorname{Im}\left( \frac{\partial \mathbf{B}^{(n+1)}}{\partial \mathbf{B}^{(n)\mathrm{T}}} \right) + \operatorname{Re}\left( \frac{\partial E}{\partial \mathbf{Z}^{(n+1)\mathrm{T}}} \right) \operatorname{Re}\left( \frac{\partial \mathbf{Z}^{(n+1)}}{\partial \mathbf{B}^{(n)\mathrm{T}}} \right) + j \operatorname{Im}\left( \frac{\partial E}{\partial \mathbf{Z}^{(n+1)\mathrm{T}}} \right) \operatorname{Im}\left( \frac{\partial \mathbf{Z}^{(n+1)}}{\partial \mathbf{B}^{(n)\mathrm{T}}} \right) + \operatorname{Re}\left( \frac{\partial E}{\partial \mathbf{X}^{(n+1)\mathrm{T}}} \right) \operatorname{Re}\left( \frac{\partial \mathbf{X}^{(n+1)}}{\partial \mathbf{B}^{(n)\mathrm{T}}} \right) + j \operatorname{Im}\left( \frac{\partial E}{\partial \mathbf{X}^{(n+1)\mathrm{T}}} \right) \operatorname{Im}\left( \frac{\partial \mathbf{X}^{(n+1)}}{\partial \mathbf{B}^{(n)\mathrm{T}}} \right) \tag{30}$$
For $n = N$, the gradient transferred from the last reconstruction layer to $\mathbf{B}^{(n)}$ is:
$$\frac{\partial E}{\partial \mathbf{B}^{(n)\mathrm{T}}} = \operatorname{Re}\left( \frac{\partial E}{\partial \mathbf{X}^{(n+1)\mathrm{T}}} \right) \operatorname{Re}\left( \frac{\partial \mathbf{X}^{(n+1)}}{\partial \mathbf{B}^{(n)\mathrm{T}}} \right) + j \operatorname{Im}\left( \frac{\partial E}{\partial \mathbf{X}^{(n+1)\mathrm{T}}} \right) \operatorname{Im}\left( \frac{\partial \mathbf{X}^{(n+1)}}{\partial \mathbf{B}^{(n)\mathrm{T}}} \right) \tag{31}$$
The gradient of $\eta^{(n)}$ is:
$$\frac{\partial E}{\partial \eta^{(n)}} = \sum \left( \operatorname{Re}\left( \frac{\partial E}{\partial \mathbf{B}^{(n)\mathrm{T}}} \right) \operatorname{Re}\left( \frac{\partial \mathbf{B}^{(n)}}{\partial \eta^{(n)}} \right) \right) + \sum \left( \operatorname{Im}\left( \frac{\partial E}{\partial \mathbf{B}^{(n)\mathrm{T}}} \right) \operatorname{Im}\left( \frac{\partial \mathbf{B}^{(n)}}{\partial \eta^{(n)}} \right) \right) \tag{32}$$
where:
$$\frac{\partial \mathbf{B}^{(n)}}{\partial \eta^{(n)}} = \mathbf{X}^{(n)} - \mathbf{Z}^{(n)} \tag{33}$$
and the gradients of this layer transferred to $\mathbf{B}^{(n-1)}$, $\mathbf{X}^{(n)}$, and $\mathbf{Z}^{(n)}$ are calculated by:
$$\frac{\partial E}{\partial \mathbf{B}^{(n-1)\mathrm{T}}} = \operatorname{Re}\left( \frac{\partial E}{\partial \mathbf{B}^{(n)\mathrm{T}}} \right) \operatorname{Re}\left( \frac{\partial \mathbf{B}^{(n)}}{\partial \mathbf{B}^{(n-1)\mathrm{T}}} \right) + j \operatorname{Im}\left( \frac{\partial E}{\partial \mathbf{B}^{(n)\mathrm{T}}} \right) \operatorname{Im}\left( \frac{\partial \mathbf{B}^{(n)}}{\partial \mathbf{B}^{(n-1)\mathrm{T}}} \right) \tag{34}$$
where:
$$\frac{\partial \mathbf{B}^{(n)}}{\partial \mathbf{B}^{(n-1)\mathrm{T}}} = \mathbf{1} \tag{35}$$
$$\frac{\partial E}{\partial \mathbf{X}^{(n)\mathrm{T}}} = \operatorname{Re}\left( \frac{\partial E}{\partial \mathbf{B}^{(n)\mathrm{T}}} \right) \operatorname{Re}\left( \frac{\partial \mathbf{B}^{(n)}}{\partial \mathbf{X}^{(n)\mathrm{T}}} \right) + j \operatorname{Im}\left( \frac{\partial E}{\partial \mathbf{B}^{(n)\mathrm{T}}} \right) \operatorname{Im}\left( \frac{\partial \mathbf{B}^{(n)}}{\partial \mathbf{X}^{(n)\mathrm{T}}} \right) \tag{36}$$
where:
$$\frac{\partial \mathbf{B}^{(n)}}{\partial \mathbf{X}^{(n)\mathrm{T}}} = \eta^{(n)} \mathbf{1} \tag{37}$$
$$\frac{\partial E}{\partial \mathbf{Z}^{(n)\mathrm{T}}} = \operatorname{Re}\left( \frac{\partial E}{\partial \mathbf{B}^{(n)\mathrm{T}}} \right) \operatorname{Re}\left( \frac{\partial \mathbf{B}^{(n)}}{\partial \mathbf{Z}^{(n)\mathrm{T}}} \right) + j \operatorname{Im}\left( \frac{\partial E}{\partial \mathbf{B}^{(n)\mathrm{T}}} \right) \operatorname{Im}\left( \frac{\partial \mathbf{B}^{(n)}}{\partial \mathbf{Z}^{(n)\mathrm{T}}} \right) \tag{38}$$
where:
$$\frac{\partial \mathbf{B}^{(n)}}{\partial \mathbf{Z}^{(n)\mathrm{T}}} = -\eta^{(n)} \mathbf{1} \tag{39}$$
The training stage can be summarized as follows:
Step 1: Define the NMSE between the network output and the label image as the loss function;
Step 2: Back-propagation. Calculate the gradients of the loss function with respect to the penalty parameter, the piecewise linear function, and the learning rate of each stage;
Step 3: Utilize the L-BFGS algorithm to update the network parameters according to their current values and gradients;
Step 4: Repeat Steps 2–3 until the difference between the loss functions of two adjacent iterations is less than $10^{-6}$.
For multiple training samples, we utilized the average gradient and loss function.
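As a rough illustration of Steps 1–4, the sketch below packs the per-stage parameters into one real vector and delegates the update to SciPy's L-BFGS-B. It assumes the adn_stage and nmse_loss helpers sketched earlier and len(p) == n_ctrl; SciPy's finite-difference gradient stands in for the analytic complex-domain gradients derived above, which is only practical for very small networks.

```python
import numpy as np
from scipy.optimize import minimize

def train_2d_adn(train_set, Phi1, Phi2E, n_stages, p, n_ctrl=101):
    """Hypothetical training loop: minimize the NMSE loss (20) over
    Theta = {rho^(n), q_i^(n), eta^(n)} with L-BFGS-B."""
    theta0 = np.concatenate([
        np.full(n_stages + 1, 0.2),                                  # rho^(n), incl. final layer
        np.tile(np.sign(p) * np.maximum(np.abs(p) - 1.0 / 20, 0.0),  # q_i^(n): soft-threshold init
                n_stages),
        np.ones(n_stages),                                           # eta^(n)
    ])

    def forward(theta, Y):
        rho = theta[:n_stages + 1]
        qs = theta[n_stages + 1:n_stages + 1 + n_stages * n_ctrl].reshape(n_stages, n_ctrl)
        eta = theta[-n_stages:]
        X = Z = B = np.zeros((Phi1.shape[1], Phi2E.shape[0]), dtype=complex)
        for n in range(n_stages):
            X, Z, B = adn_stage(Y, Z, B, Phi1, Phi2E, rho[n], p, qs[n], eta[n])
        D = Z - B                                                    # final reconstruction layer
        return D - (Phi1.conj().T @ (Phi1 @ D @ Phi2E - Y) @ Phi2E.conj().T) / (1 + rho[-1])

    def loss(theta):
        return nmse_loss([forward(theta, Y) for Y, _ in train_set],
                         [X_gt for _, X_gt in train_set])

    return minimize(loss, theta0, method="L-BFGS-B", options={"ftol": 1e-6}).x
```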

4.3. 2D High-Resolution ISAR Imaging Based on 2D-ADN

According to the above discussion, high-resolution 2D ISAR imaging based on the 2D-ADN includes the following steps:
Step 1: Training set generation. Initialize the phase error matrix $\mathbf{E} = \mathbf{I}$, construct $\boldsymbol{\Phi}_1$ and $\boldsymbol{\Phi}_2$ according to the radar parameters and the data missing pattern, generate randomly distributed scattering centers $\mathbf{X}_{gt}$ with Gaussian amplitudes, calculate $\mathbf{Y}$ according to (1), and obtain the data set $\Gamma = \{(\mathbf{Y}, \mathbf{X}_{gt})\}$.
Step 2: Network training. Initialize the adjustable parameters $\Theta = \{\rho^{(n)}, q_i^{(n)}, \eta^{(n)}\}$, and utilize $\{(\mathbf{Y}, \mathbf{X}_{gt})\}$ to train the network according to Section 4.2.
Step 3: Testing. For simulated data, feed the echoes in the wavenumber domain into the trained 2D-ADN and obtain the high-resolution image. For measured data with random phase errors, estimate the high-resolution image and the random phase errors by the trained 2D-ADN and (13) iteratively until convergence.
As the distribution and amplitudes of the simulated scattering centers mimic true ISAR targets, optimal network parameters suitable for measured data imaging can be learned after network training. By this means, the issue of insufficient measured training data is effectively tackled.
In the 2D-ADN, the number of stages $N$ determines the network depth. It is observed that the loss function first decreases rapidly and then tends to be stable as $N$ increases. Therefore, we choose $N$ according to the convergence condition given below:
$$\frac{\left| E_N(\Theta) - E_{N-1}(\Theta) \right|^2}{\left| E_{N-1}(\Theta) \right|^2} < \varepsilon \tag{40}$$
where $E_N(\Theta)$ is the loss function of the trained network with $N$ stages, $E_{N-1}(\Theta)$ is the loss function of the trained network with $N-1$ stages, and $\varepsilon$ is a threshold.
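A sketch of this selection rule is given below, with hypothetical train_fn and eval_loss helpers and an illustrative threshold eps.

```python
# Grow the network until the relative change of the trained loss between
# N-1 and N stages falls below the threshold, as in (40).
def choose_num_stages(train_fn, eval_loss, eps=1e-2, n_max=15):
    losses = []
    for n in range(1, n_max + 1):
        theta = train_fn(n)                    # train a 2D-ADN with n stages
        losses.append(eval_loss(theta, n))
        if n > 1 and abs(losses[-1] - losses[-2]) ** 2 / losses[-2] ** 2 < eps:
            return n                           # relative loss change below eps
    return n_max
```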
For a single iteration, the 2D-ADN and 2D-FISTA share the same computational complexity of $O(UPQ)$. As the number of stages in the 2D-ADN is much smaller than the number of iterations required for 2D-FISTA to converge, the computational time of the 2D-ADN is shorter.

5. Experimental Results

In this section, we will demonstrate the effectiveness of 2D-ADN by high-resolution ISAR imaging of complete data, incomplete range data, incomplete azimuth data, and 2D incomplete data. The SNR of the range-compressed echoes is set to 0 dB by adding Gaussian noise, and the loss rate of the incomplete data is 50%.
For network training, 40 samples are generated following Step 1 in Section 4.3, and a typical label image is shown in Figure 8a. Specifically, the first 20 samples constitute the training set and the rest constitute the test set. Later experiments will demonstrate that a small training set is adequate, as unfolded deep networks have the potential to develop efficient high-performance architectures from reasonably sized training sets [22].
In the training stage, the adjustable parameters are initialized as $\rho^{(n)} = 0.2$ and $\eta^{(n)} = 1$. In addition, the piecewise linear function is initialized as a soft-threshold function with $\tau = 1/20$, and the control points are equally spaced with $N_c = 101$. Then, the 2D-ADN is trained following Section 4.2, and the number of stages is set to 7 according to (40).
For the simulated test data, we fed the test samples into the trained 2D-ADN and calculated the NMSE, the peak signal-to-noise ratio (PSNR), the structural similarity index measure (SSIM), and the image entropy (ENT) from the output and the label for quantitative performance evaluation. In addition, we compared the imaging results of 2D-FISTA, the untrained 2D-ADN, UNet [32], and the trained 2D-ADN. In particular, as a data-driven method, UNet has many more trainable parameters than the 2D-ADN. To avoid overfitting, we generated 1000 samples as the simulated data set, where 800 samples constituted the training set and the rest constituted the test set. The network was trained for 28 epochs, and the training time was 22 min. For the 2D-ADN, training terminated when the relative error of the loss fell below $10^{-6}$, and the training time was 19 min. For 2D-FISTA, the parameter was initialized as $\lambda = 0.05$ and the algorithm terminated when the image NMSE between adjacent iterations fell below $10^{-6}$.
Additionally, we fed the measured data of a Yak-42 aircraft with random phase errors into the trained 2D-ADN and obtained high-resolution images under various observation conditions. The original RD image with complete data and high SNR is shown in Figure 8b. The algorithm terminated when the NMSE between adjacent iterations fell below $10^{-4}$.
The imaging results were obtained with unoptimized MATLAB code on an Intel i9-10920X 3.50-GHz computer with a 12-core processor.

5.1. Complete Data

For the test sample illustrated in Figure 8a, the imaging results are shown in Figure 9 and the corresponding metrics are listed in Table 1. It was observed that the images obtained by 2D-FISTA and the untrained 2D-ADN are noisy, with spurious scattering centers. On the contrary, the image obtained by the trained 2D-ADN has the smallest NMSE and ENT and the highest PSNR and SSIM, demonstrating its superior imaging performance. In addition, 2D-FISTA has the longest running time due to slow convergence. UNet obtains satisfactory denoising performance and has the shortest running time of only 0.01 s.
For the measured data of the Yak-42 aircraft, the imaging results are shown in Figure 10 and the corresponding entropies and running times are listed in Table 2. Compared with the available methods, the trained 2D-ADN obtained better-focused images with a clearer background. In addition, its running time increased because the 2D-ADN was implemented multiple times for phase error estimation. The untrained 2D-ADN had the longest running time since the strong background noise hindered fast convergence.

5.2. Incomplete Range Data

The data missing pattern of the incomplete range data is shown in Figure 8c, where the white bars denote the available echoes and the black ones denote the missing echoes.
For the same test sample, the reconstruction results are shown in Figure 11, and the metrics for quantitative comparison are shown in Table 3. Still, the trained 2D-ADN demonstrated the best reconstruction performance.
For the same measured data of the Yak-42 aircraft, the reconstruction results are shown in Figure 12, where the trained 2D-ADN generated better-focused images with a clearer background than the other methods. The corresponding entropies and running times are shown in Table 4, where 2D-FISTA has the longest running time.

5.3. Incomplete Azimuth Data

The data missing pattern of the incomplete azimuth data is shown in Figure 8d. For the same test sample, the reconstruction results are shown in Figure 13, and the metrics for quantitative comparison are shown in Table 5.
For the same measured data, the imaging results are shown in Figure 14, and the corresponding entropies and running times are shown in Table 6. Similarly, 2D-ADN achieved well-focused imaging with the shortest running time.

5.4. 2D Incomplete Data

The data missing pattern of the 2D incomplete data is shown in Figure 8e. For the same test sample, the reconstruction results are shown in Figure 15, and the metrics for the quantitative comparison are shown in Table 7.
For the same measured data, the reconstruction results are shown in Figure 16, and the corresponding metrics are shown in Table 8, which demonstrate the superiority of 2D-ADN over the other available methods under complex observation conditions.

6. Discussion

6.1. Influence of the Data Loss Rate and SNR

To further analyze the reconstruction performance of the proposed method, we designed more experiments using only 25% and 10% of the available data, where the SNRs were set to 0 dB, 5 dB, and 10 dB, respectively. The imaging results are shown in Figure 17 and Figure 18. The corresponding metrics are shown in Table 9 and Table 10.
It was observed that the imaging quality degraded heavily with the decrease of the available data at an SNR of 0 dB. When the SNR was raised to 5 dB or 10 dB, however, the imaging performance improved rapidly. Therefore, the SNR has a greater impact on imaging quality than the data loss rate. Furthermore, although it inherits the reconstruction performance of 2D-ADMM and utilizes a more flexible piecewise linear function as the denoiser, the 2D-ADN is still sensitive to low SNRs.

6.2. Choice of the Optimal Regularization Parameter

In 2D-ADMM, it is necessary to perform multiple manual adjustments of the regularization parameter $\lambda$ and the penalty parameter $\rho$ to obtain the best results. In addition, the adjustable parameters of 2D-ADMM are fixed during iteration, which lacks flexibility. On the contrary, the 2D-ADN learns the optimal parameters of each layer separately, thus having more flexibility and better reconstruction performance than 2D-ADMM.
For a data loss rate of 50% and an SNR of 0 dB, we obtained the optimal parameters with the minimum NMSE by manual tuning, i.e., $\lambda = 0.05$ and $\rho = 0.8$. The imaging results of the optimal 2D-ADMM are shown in Figure 19 and Table 11. It was observed that the quality of the 2D-ADN image was better than that of the 2D-ADMM image obtained by parameter tuning.

6.3. Difference Between 2D-ADMM and 2D-ADN

Through the use of the unrolling method, the 2D-ADN deviates from the original 2D-ADMM algorithm. Figure 20 shows the variation of NMSE for 2D-ADMM and 2D-ADN, respectively, and Figure 21 shows the outputs of each stage of the 2D-ADN. It was observed that the NMSE of 2D-ADMM gradually decreased, while the NMSE of the 2D-ADN fluctuated before reaching its minimum. Therefore, end-to-end training guarantees the rapid dropping of the NMSE, and the flexible network structure boosts reconstruction performance.

7. Conclusions

This article proposed the 2D-ADN for high-resolution 2D ISAR imaging and autofocusing under complex observational environments. Firstly, the 2D mapping from ISAR images to echoes in the wavenumber domain was established. Then, iteration formulae based on the 2D-ADMM were derived for high-resolution ISAR imaging and, combined with the phase error estimation method, an imaging and autofocusing method was proposed. On this basis, the 2D-ADMM was generalized and unrolled into an $N$-stage 2D-ADN, which consists of reconstruction layers, nonlinear transform layers, and multiplier update layers. The 2D-ADN effectively tackles the parameter adjustment problem of model-driven methods and possesses more interpretability than data-driven methods. Experiments have shown that, after end-to-end training with randomly generated samples off-line, the 2D-ADN achieves better-focused 2D imaging of measured data with random phase errors than the available methods while maintaining computational efficiency.
Future work will focus on designing network architectures that incorporate residual translational motion compensation and 2D imaging, and on designing noise- and jamming-robust network architectures in a Bayesian framework.

Author Contributions

X.L. proposed the method, designed the experiment, and wrote the manuscript; X.B. and F.Z. revised the manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded in part by the National Natural Science Foundation of China under Grants 61971332, 61801344, and 61631019.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Kang, L.; Luo, Y.; Zhang, Q.; Liu, X.-W.; Liang, B.-S. 3-D Scattering Image Sparse Reconstruction via Radar Network. IEEE Trans. Geosci. Remote Sens. 2020.
  2. Bai, X.R.; Zhou, X.N.; Zhang, F.; Wang, L.; Zhou, F. Robust pol-ISAR target recognition based on ST-MC-DCNN. IEEE Trans. Geosci. Remote Sens. 2019, 57, 9912–9927.
  3. Carrara, W.G.; Goodman, R.S.; Majewski, R.M. Spotlight Synthetic Aperture Radar: Signal Processing Algorithms; Artech House: Boston, MA, USA, 1995; Chapter 2.
  4. Zhao, L.F.; Wang, L.; Yang, L.; Zoubir, A.M.; Bi, G.A. The race to improve radar imagery: An overview of recent progress in statistical sparsity-based techniques. IEEE Signal Process. Mag. 2016, 33, 85–102.
  5. Bai, X.R.; Zhang, Y.; Zhou, F. High-resolution radar imaging in complex environments based on Bayesian learning with mixture models. IEEE Trans. Geosci. Remote Sens. 2019, 57, 972–984.
  6. Li, R.Z.; Zhang, S.H.; Zhang, C.; Liu, Y.X.; Li, X. Deep Learning Approach for Sparse Aperture ISAR Imaging and Autofocusing Based on Complex-Valued ADMM-Net. IEEE Sens. J. 2021, 21, 3437–3451.
  7. Shao, S.; Zhang, L.; Liu, H.W. High-Resolution ISAR Imaging and Motion Compensation With 2-D Joint Sparse Reconstruction. IEEE Trans. Geosci. Remote Sens. 2020, 58, 6791–6811.
  8. Kang, M.; Lee, S.; Lee, S.; Kim, K. ISAR imaging of high-speed maneuvering target using gapped stepped-frequency waveform and compressive sensing. IEEE Trans. Image Process. 2017, 26, 5043–5056.
  9. Hu, P.J.; Xu, S.Y.; Wu, W.Z.; Chen, Z.P. Sparse subband ISAR imaging based on autoregressive model and smoothed $\ell_0$ algorithm. IEEE Sens. J. 2018, 18, 9315–9323.
  10. Beck, A.; Teboulle, M. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imag. Sci. 2009, 2, 183–202.
  11. Li, S.; Amin, M.; Zhao, G.; Sun, H. Radar imaging by sparse optimization incorporating MRF clustering prior. IEEE Geosci. Remote Sens. Lett. 2020, 17, 1139–1143.
  12. Afonso, M.V.; Bioucas-Dias, J.M.; Figueiredo, M.A.T. Fast image recovery using variable splitting and constrained optimization. IEEE Trans. Image Process. 2010, 19, 2345–2356.
  13. Bai, X.R.; Zhou, F.; Hui, Y. Obtaining JTF-signature of space-debris from incomplete and phase-corrupted data. IEEE Trans. Aerosp. Electron. Syst. 2017, 53, 1169–1180.
  14. Bai, X.R.; Wang, G.; Liu, S.Q.; Zhou, F. High-Resolution Radar Imaging in Low SNR Environments Based on Expectation Propagation. IEEE Trans. Geosci. Remote Sens. 2020, 59, 1275–1284.
  15. Li, S.Y.; Zhao, G.Q.; Zhang, W.; Qiu, Q.W.; Sun, H.J. ISAR imaging by two-dimensional convex optimization-based compressive sensing. IEEE Sens. J. 2016, 16, 7088–7093.
  16. Hashempour, H.R. Sparsity-Driven ISAR Imaging Based on Two-Dimensional ADMM. IEEE Sens. J. 2020, 20, 13349–13356.
  17. Pu, W. Deep SAR Imaging and Motion Compensation. IEEE Trans. Image Process. 2021, 30, 2232–2247.
  18. Pu, W. Shuffle GAN with Autoencoder: A Deep Learning Approach to Separate Moving and Stationary Targets in SAR Imagery. IEEE Trans. Neural Netw. Learn. Syst. 2021.
  19. Hu, C.Y.; Wang, L.; Li, Z.; Sun, L.; Loffeld, O. Inverse synthetic aperture radar imaging using complex-value deep neural network. J. Eng. 2019, 2019, 7096–7099.
  20. Zhang, J.; Ghanem, B. ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 19–21 June 2018; pp. 1828–1837.
  21. Ramírez, J.M.; Torre, J.I.M.; Fuentes, H.A. LADMM-Net: An Unrolled Deep Network for Spectral Image Fusion from Compressive Data. 2021. Available online: https://arxiv.org/abs/2103.00940 (accessed on 10 June 2021).
  22. Monga, V.; Li, Y.; Eldar, Y.C. Algorithm Unrolling: Interpretable, Efficient Deep Learning for Signal and Image Processing. IEEE Signal Process. Mag. 2021, 38, 18–44.
  23. Hu, C.Y.; Li, Z.; Wang, L.; Guo, J.; Loffeld, O. Inverse synthetic aperture radar imaging using a Deep ADMM Network. In Proceedings of the 20th International Radar Symposium (IRS), Ulm, Germany, 26–28 June 2019; pp. 1–9.
  24. Qiu, W.; Zhao, H.; Zhou, J.; Fu, Q. High-resolution fully polarimetric ISAR imaging based on compressive sensing. IEEE Trans. Geosci. Remote Sens. 2014, 52, 6119–6131.
  25. Liu, L.; Zhou, F.; Tao, M.L.; Sun, P.G.; Zhang, Z.J. Adaptive translational motion compensation method for ISAR imaging under low SNR based on particle swarm optimization. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2015, 8, 5146–5157.
  26. Wang, Y.; Yang, J.; Yin, W.; Zhang, Y. A new alternating minimization algorithm for total variation image reconstruction. SIAM J. Imag. Sci. 2009, 1, 248–272.
  27. Combettes, P.; Wajs, V. Signal recovery by proximal forward-backward splitting. SIAM J. Multiscale Model. Simul. 2005, 4, 1168–1200.
  28. Zhao, L.; Wang, L.; Bi, G.A.; Yang, L. An autofocus technique for high-resolution inverse synthetic aperture radar imagery. IEEE Trans. Geosci. Remote Sens. 2014, 52, 6392–6403.
  29. Sun, J.; Li, H.B.; Xu, Z.B. Deep ADMM-Net for compressive sensing MRI. In Proceedings of the 30th International Conference on Neural Information Processing Systems, Barcelona, Spain, 5–10 December 2016; pp. 10–18.
  30. Yang, Y.; Sun, J.; Li, H.B.; Xu, Z.B. ADMM-CSNet: A deep learning approach for image compressive sensing. IEEE Trans. Pattern Anal. Mach. Intell. 2020, 42, 521–538.
  31. Petersen, K.B.; Pedersen, M.S. The Matrix Cookbook; Version 20121115; Technical University of Denmark: Copenhagen, Denmark, 2012; p. 24. Available online: http://www2.compute.dtu.dk/pubdb/pubs/3274-full.html (accessed on 10 June 2021).
  32. Yang, T.; Shi, H.Y.; Lang, M.Y.; Guo, J.W. ISAR imaging enhancement: Exploiting deep convolutional neural network for signal reconstruction. Int. J. Remote Sens. 2020, 41, 9447–9468.
Figure 1. Structure of the 2D-ADN.
Figure 2. Reconstruction layer.
Figure 3. Nonlinear transform layer.
Figure 4. Multiplier update layer.
Figure 5. Back-propagation of the reconstruction layer.
Figure 6. Back-propagation of the nonlinear transform layer.
Figure 7. Back-propagation of the multiplier update layer.
Figure 8. (a) Label image of a test sample. (b) RD image of the Yak-42 aircraft with complete data and high SNR. (c) Data missing pattern of the incomplete range data. (d) Data missing pattern of the incomplete azimuth data. (e) Data missing pattern of the 2D incomplete data.
Figure 9. Images of the complete data obtained by (a) 2D-FISTA, (b) untrained 2D-ADN, (c) UNet, and (d) trained 2D-ADN.
Figure 10. Yak-42 images of the complete data obtained by (a) 2D-FISTA, (b) untrained 2D-ADN, and (c) trained 2D-ADN.
Figure 11. Images of the incomplete range data with the data missing pattern shown in Figure 8c, obtained by (a) 2D-FISTA, (b) untrained 2D-ADN, (c) UNet, and (d) trained 2D-ADN.
Figure 12. Yak-42 images of the incomplete range data obtained by (a) 2D-FISTA, (b) untrained 2D-ADN, and (c) trained 2D-ADN.
Figure 13. Images of the incomplete azimuth data with the data missing pattern shown in Figure 8d, obtained by (a) 2D-FISTA, (b) untrained 2D-ADN, (c) UNet, and (d) trained 2D-ADN.
Figure 14. Yak-42 images of the incomplete azimuth data obtained by (a) 2D-FISTA, (b) untrained 2D-ADN, and (c) trained 2D-ADN.
Figure 15. Images of the 2D incomplete data with the data missing pattern shown in Figure 8e, obtained by (a) 2D-FISTA, (b) untrained 2D-ADN, (c) UNet, and (d) trained 2D-ADN.
Figure 16. Yak-42 images of the 2D incomplete data obtained by (a) 2D-FISTA, (b) untrained 2D-ADN, and (c) trained 2D-ADN.
Figure 17. Images generated using 25% of the available data with SNRs of (a) 0 dB, (b) 5 dB, and (c) 10 dB.
Figure 18. Images generated using 10% of the available data with SNRs of (a) 0 dB, (b) 5 dB, and (c) 10 dB.
Figure 19. Images obtained by (a) optimal 2D-ADMM and (b) trained 2D-ADN.
Figure 20. NMSE of (a) 2D-ADMM and (b) 2D-ADN.
Figure 21. Images of the 2D-ADN at different stages: (a) $n=1$, (b) $n=2$, (c) $n=3$, (d) $n=4$, (e) $n=5$, (f) $n=6$, (g) $n=7$, (h) $n=8$.
Table 1. Quantitative performance evaluation for the complete simulation data.

Method              NMSE     PSNR      SSIM     ENT      Time (s)
2D-FISTA            0.5273   33.3754   0.4312   3.4538   0.51
Untrained 2D-ADN    0.7454   30.3146   0.2548   4.1646   0.14
UNet                0.1323   45.3112   0.9844   0.3361   0.01
Trained 2D-ADN      0.1281   45.5825   0.9858   0.3194   0.14

Table 2. Quantitative performance evaluation for the complete measured data.

Method              ENT      Time (s)
2D-FISTA            2.8055   2.81
Untrained 2D-ADN    3.8244   20.47
Trained 2D-ADN      0.2764   2.15

Table 3. Quantitative performance evaluation for the incomplete range simulation data.

Method              NMSE     PSNR      SSIM     ENT      Time (s)
2D-FISTA            0.4575   34.6875   0.6935   2.0177   1.53
Untrained 2D-ADN    0.6922   30.9651   0.3633   3.6624   0.13
UNet                0.3606   36.6405   0.8836   1.3901   0.01
Trained 2D-ADN      0.2399   40.2297   0.9594   0.8104   0.14

Table 4. Quantitative performance evaluation for the incomplete range measured data.

Method              ENT      Time (s)
2D-FISTA            1.4823   43.82
Untrained 2D-ADN    3.3128   10.52
Trained 2D-ADN      0.6762   2.88

Table 5. Quantitative performance evaluation for the incomplete azimuth simulation data.

Method              NMSE     PSNR      SSIM     ENT      Time (s)
2D-FISTA            0.4468   34.9667   0.6989   2.0116   0.72
Untrained 2D-ADN    0.6822   31.0992   0.3684   3.6556   0.14
UNet                0.2993   38.2874   0.9178   1.1730   0.01
Trained 2D-ADN      0.2284   40.5901   0.9623   0.7856   0.14

Table 6. Quantitative performance evaluation for the incomplete azimuth measured data.

Method              ENT      Time (s)
2D-FISTA            1.9434   26.65
Untrained 2D-ADN    3.6184   8.40
Trained 2D-ADN      0.8041   3.21

Table 7. Quantitative performance evaluation for the 2D incomplete simulation data.

Method              NMSE     PSNR      SSIM     ENT      Time (s)
2D-FISTA            0.4438   35.0002   0.6856   2.0916   0.75
Untrained 2D-ADN    0.6397   31.6518   0.3739   3.6191   0.12
UNet                0.2876   38.5924   0.9315   0.8511   0.01
Trained 2D-ADN      0.1898   42.1850   0.9679   0.4976   0.12

Table 8. Quantitative performance evaluation for the 2D incomplete measured data.

Method              ENT      Time (s)
2D-FISTA            1.5333   13.22
Untrained 2D-ADN    3.2766   11.70
Trained 2D-ADN      0.4747   1.68

Table 9. Quantitative performance evaluations using 25% available data.

SNR (dB)   NMSE     PSNR      SSIM     ENT      Time (s)
0          0.2976   38.2745   0.9247   1.0796   0.08
5          0.1657   43.3618   0.9715   0.6683   0.08
10         0.0837   49.3105   0.9911   0.3664   0.08

Table 10. Quantitative performance evaluations using 10% available data.

SNR (dB)   NMSE     PSNR      SSIM     ENT      Time (s)
0          0.5571   32.8445   0.7836   1.5451   0.08
5          0.3256   37.5059   0.8807   1.4266   0.08
10         0.2073   41.4280   0.9291   1.3158   0.08

Table 11. Quantitative performance evaluation for different methods.

Method              NMSE     PSNR      SSIM     ENT      Time (s)
Optimal 2D-ADMM     0.3231   37.8502   0.9340   0.5334   13.24
Trained 2D-ADN      0.1898   42.1850   0.9679   0.4976   0.12