Article

High-Resolution ISAR Imaging Based on Plug-and-Play 2D ADMM-Net

1
Key Laboratory of Electronic Information Countermeasure and Simulation Technology of Ministry of Education, Xidian University, Xi’an 710071, China
2
National Lab of Radar Signal Processing, Xidian University, Xi’an 710071, China
*
Author to whom correspondence should be addressed.
Remote Sens. 2022, 14(4), 901; https://doi.org/10.3390/rs14040901
Submission received: 30 December 2021 / Revised: 28 January 2022 / Accepted: 9 February 2022 / Published: 14 February 2022

Abstract:
We propose a deep learning architecture, dubbed Plug-and-Play 2D ADMM-Net (PAN), which combines model-driven and data-driven deep networks for effective high-resolution 2D inverse synthetic aperture radar (ISAR) imaging under various signal-to-noise ratios (SNRs) and incomplete-data scenarios. First, a sparse observation model of 2D ISAR imaging is established and a 2D ADMM algorithm is presented. On this basis, using the plug-and-play (PnP) technique, PnP 2D ADMM is proposed by combining the 2D ADMM algorithm with the deep denoising network DnCNN. Then, we unroll and generalize PnP 2D ADMM into the PAN architecture, in which all adjustable parameters in the reconstruction layers, denoising layers, and multiplier update layers are learned by end-to-end training through back-propagation. Experimental results show that PAN with a single parameter set achieves noise-robust ISAR imaging with superior reconstruction performance on incomplete simulated and measured data under different SNRs.

Graphical Abstract

1. Introduction

High-resolution inverse synthetic aperture radar (ISAR) transmits a wideband signal to achieve a high-resolution range profile and synthesizes a virtual aperture through the motion of the target to achieve high resolution along the azimuth direction. Unlike optical imaging, ISAR works in all-day and all-weather environments and has therefore been used in various military applications, e.g., space situational awareness and air target surveillance [1,2]. Generally, well-focused ISAR images can be obtained from high signal-to-noise ratio (SNR) and complete echoes using the range-Doppler (RD) algorithm or the polar formatting algorithm (PFA) [3]. In practice, the long observation distance in a low-elevation-angle environment may produce low SNR. Additionally, in complex electromagnetic environments with active and passive interference, the non-cooperative nature of the target and radar resource scheduling [4] may result in incomplete echoes, and the available imaging algorithms based on Fourier analysis cannot obtain satisfactory results. Considering the sparse nature of the scattering centers in the image domain, high-resolution ISAR imaging based on sparse signal reconstruction has received intensive attention in recent years [5,6].
Sparse signal reconstruction methods convert the sparse ISAR imaging problem into a sparse signal reconstruction problem and then search for the optimal solution via $\ell_0$-norm or $\ell_1$-norm optimization. The $\ell_0$-norm optimization methods, such as orthogonal matching pursuit (OMP) [7] and smoothed $\ell_0$ (SL0) [8], are sensitive to noise and prone to local optima. For $\ell_1$-norm optimization, the fast iterative shrinkage-thresholding algorithm (FISTA) [9] and the alternating direction method of multipliers (ADMM) [10] can guarantee the sparsest solution. However, these methods usually require careful tuning of the regularization parameters, which remains an open problem. For 2D imaging, the vectorized optimization requires a long operational time and large memory storage. To tackle this problem, 2D SL0 [11], 2D FISTA [12], and 2D ADMM [13], all based on matrix operations, have been proposed. Although the abovementioned sparse signal reconstruction methods have clear physical significance and strong theoretical support, their performance degrades rapidly under improper parameter setup.
Apart from traditional optimization methods, deep networks have recently provided unprecedented performance gains in ISAR imaging. The available deep networks mainly include: (1) model-driven methods, and (2) data-driven methods. Model-driven methods are generally based on the unrolling technique, which was first proposed by expanding the iterative shrinkage-thresholding (ISTA) algorithm, to improve the computational efficiency of sparse coding algorithms through end-to-end training [14]. Specifically, in ISAR imaging, the model-driven methods expand the iterative steps of the sparse signal reconstruction method into a deep network with finite-layers, set the adjustable parameters as network parameters, and then obtain their optimal values by network training. Finally, they output the focused image of an unknown target from the trained network. Typical networks with model-driven methods include AF-AMPNet [15], 2D-ADMM-Net (2D-ADN) [16], and convolution iterative shrinkage-thresholding (CIST) [17], etc. Although model-driven methods have strong interpretability and satisfying reconstruction performance, the optimal parameters are sensitive to SNR. Therefore, to obtain a well-focused image under various SNRs, it is necessary to build a model set, which will increase the time and space complexity.
The data-driven methods directly learn the nonlinear mapping between the input (e.g., the RD image) and the label image to achieve high-resolution imaging by designing and training deep networks. Facilitated by off-line network training, these methods can reconstruct multiple images rapidly. Typical data-driven methods include the fully convolutional neural network (FCNN) [18] and UNet [19], etc. Data-driven methods generally adopt a hierarchical architecture composed of many layers and a large number of parameters (possibly millions); thus, they are capable of learning complicated mappings [14]. Specifically, in ISAR imaging, they have demonstrated robustness to various noise levels, i.e., a single trained network can achieve well-focused imaging of echoes with various SNRs. However, the subjective network design process lacks a unified criterion and theoretical support, which makes it difficult to analyze the influence of network structure and parameter settings on the reconstruction performance. In addition, they usually suffer from poor generalization performance and fail to obtain satisfying imaging results with incomplete data, as will be demonstrated later in Section 5.
Plug-and-play (PnP) [20] is a non-convex framework that combines proximal algorithms with advanced denoiser priors [21], e.g., block-matching and 3D filtering (BM3D) [22] or deep denoising network DnCNN [23]. Recently, PnP has achieved great empirical success in a large variety of imaging applications [24,25,26,27], owing to its effectiveness and flexibility, especially with the integration of a deep denoising network. Compared with the original iterative methods, PnP methods offer more promising imaging results, due to their powerful denoising performance. However, PnP is highly sensitive to parameter selection. In addition, parameter tuning requires several trials, which is cumbersome and time-consuming.
To tackle the abovementioned issues, this article proposes plug-and-play 2D ADMM-Net (PAN), for high-resolution 2D ISAR imaging in complex environments. The key contributions include the following:
  • A 2D ADMM algorithm for sparse ISAR imaging is derived. On this basis, PnP 2D ADMM is proposed by combining the deep denoising network DnCNN with the 2D ADMM algorithm, which significantly improves reconstruction performance.
  • To tackle the issues of parameter selection and tuning in PnP 2D ADMM, PAN, which is derived from PnP 2D ADMM, is designed using the ‘unrolling’ technique. Particularly, the adjustable parameters are estimated through end-to-end training. By this means, the sensitivity of the model-driven deep network to noise and the poor performance of the data-driven deep network for incomplete data are effectively addressed.
  • Although PAN is only trained by simulated data, experiments have shown that it can be generalized to measured data with different SNRs and obtain well-focused imaging.
The remainder of this article is organized as follows: Section 2 establishes the sparse ISAR observation model, provides the iterative formulae of 2D ADMM, and proposes PnP 2D ADMM for high-resolution 2D imaging. Section 3 details the structure of PAN, and Section 4 describes its training. Section 5 carries out various experiments to prove the effectiveness of PAN. Section 6 discusses the depth of PAN, and Section 7 concludes the article with suggestions for future work.

2. Modeling and Solving

2.1. Signal Model

In ISAR imaging, the instantaneous range of a target can be decomposed into translational and rotational motion, of which the former does not contribute to imaging and must be compensated by range alignment and autofocusing. Therefore, after translational motion compensation, the target is assumed to rotate uniformly.
The ISAR transmits a linear-frequency-modulated (LFM) signal with a high time-bandwidth product, which can be modeled as

s(\hat{t}, t_m) = \mathrm{rect}\!\left(\frac{\hat{t}}{T_p}\right) \exp\!\left( j2\pi\left( f_c t + \frac{1}{2}\gamma \hat{t}^2 \right) \right)    (1)

where $\mathrm{rect}(u) = \begin{cases} 1, & |u| \le 1/2 \\ 0, & |u| > 1/2 \end{cases}$ denotes the window function, and $f_c$, $T_p$, and $\gamma$ denote the carrier frequency, pulse width, and chirp rate, respectively. The full time $t = \hat{t} + t_m$ is expressed in terms of the fast time $\hat{t}$ and the slow time $t_m = m/\mathrm{PRF}$, where $m$ is the azimuth index and $\mathrm{PRF}$ represents the pulse repetition frequency. Supposing that the target motion only changes with the slow time and there are $P$ scatterers on the target, the received echoes can be expressed as

s_R(\hat{t}, t_m) = \sum_{p=1}^{P} A_p \, \mathrm{rect}\!\left(\frac{\hat{t} - 2R_p(t_m)/c}{T_p}\right) \exp\!\left( j2\pi\left( f_c\left(\hat{t} - \frac{2R_p(t_m)}{c}\right) + \frac{1}{2}\gamma \left(\hat{t} - \frac{2R_p(t_m)}{c}\right)^2 \right) \right)    (2)
where $A_p$ and $R_p(t_m)$ denote the backscattering coefficient of the $p$-th scatterer and its instantaneous range to the radar, respectively.
After dechirping, the echoes in the range frequency-slow time domain can be expressed as follows:

s(f, t_m) = \sum_{p=1}^{P} A_p \, \mathrm{rect}\!\left(\frac{f}{B}\right) \exp\!\left( -j\frac{4\pi}{c}(f_c + f) R_{ps}(t_m) \right)    (3)

where $f \in [-B/2, B/2]$ is the range frequency, $B$ is the bandwidth, and $R_{ps}$ is the instantaneous slant range between the $p$-th scattering center and the reference point.
For a target that rotates by a small angle during the imaging interval, or for echoes after correction of migration through range cells, we have

R_{ps} = x_p + y_p \omega_{rot} t_m    (4)

where $x_p$ and $y_p$ represent the location of the $p$-th scattering center in the reference coordinate system, and $\omega_{rot}$ is the angular rotation frequency of the turntable model.
For sparse band and azimuth observation, after translational motion compensation, the echo $s(f, t_m)$ in (3), with $R_{ps}$ given by (4), can be written after discretization in the matrix form $\mathbf{Y} \in \mathbb{C}^{M \times N}$:

\mathbf{Y} = \mathbf{\Phi}_1 \mathbf{X} \mathbf{\Phi}_2 + \mathbf{N}    (5)

where $\mathbf{\Phi}_1 \in \mathbb{C}^{M \times U}$ is the range dictionary with $M < U$, $\mathbf{X} \in \mathbb{C}^{U \times V}$ is the 2D distribution of scattering centers, i.e., the image to be reconstructed, $\mathbf{\Phi}_2 \in \mathbb{C}^{V \times N}$ is the Doppler dictionary with $N < V$, and $\mathbf{N} \in \mathbb{C}^{M \times N}$ is the complex noise matrix.
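As a concrete illustration, the observation model (5) can be set up numerically. The sketch below assumes partial-Fourier dictionaries and arbitrary sizes (a common choice for sparse-aperture ISAR, but not specified by this article); all names and values here are illustrative.

```python
import numpy as np

def partial_fourier(keep, size, rng):
    """Partial Fourier dictionary: keep a random subset of `keep` DFT rows."""
    k = np.sort(rng.choice(size, size=keep, replace=False))
    n = np.arange(size)
    return np.exp(-2j * np.pi * np.outer(k, n) / size) / np.sqrt(size)

rng = np.random.default_rng(0)
U = V = 32                                   # size of the image X (U x V)
M, N = 16, 16                                # incomplete samples, M < U, N < V

Phi1 = partial_fourier(M, U, rng)            # range dictionary, M x U
Phi2 = partial_fourier(N, V, rng).T          # Doppler dictionary, V x N

X = np.zeros((U, V), dtype=complex)          # sparse scattering-centre image
idx = rng.choice(U * V, size=5, replace=False)
X.flat[idx] = rng.standard_normal(5) + 1j * rng.standard_normal(5)

noise = 0.01 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))
Y = Phi1 @ X @ Phi2 + noise                  # observation model (5)
```

Because $M < U$ and $N < V$, recovering $\mathbf{X}$ from $\mathbf{Y}$ is underdetermined, which is where the sparsity prior of the next subsection comes in.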

2.2. The 2D ADMM Method

Solving $\mathbf{X}$ from Equation (5) is a typical linear inverse problem, which can be converted into an $\ell_1$-norm optimization in matrix form, i.e.,

\min_{\mathbf{X}} \left\{ \frac{1}{2} \| \mathbf{Y} - \mathbf{\Phi}_1 \mathbf{X} \mathbf{\Phi}_2 \|_F^2 + \lambda \| \mathbf{X} \|_1 \right\}    (6)

where $\lambda$ is the regularization parameter, which has a great impact on imaging performance.
As a commonly used $\ell_1$-norm optimization method, 2D ADMM has good reconstruction performance for linear inverse problems. By introducing an auxiliary matrix $\mathbf{Z}$, (6) can be expressed as

\min_{\mathbf{X}} \left\{ \frac{1}{2} \| \mathbf{Y} - \mathbf{\Phi}_1 \mathbf{X} \mathbf{\Phi}_2 \|_F^2 + \lambda \| \mathbf{Z} \|_1 \right\} \quad \mathrm{s.t.}\ \ \mathbf{X} - \mathbf{Z} = \mathbf{0}    (7)
To solve this constrained optimization problem, 2D ADMM utilizes the augmented Lagrangian method, where the augmented Lagrangian function is

\mathcal{L}(\mathbf{X}, \mathbf{Z}, \mathbf{A}) = \frac{1}{2} \| \mathbf{Y} - \mathbf{\Phi}_1 \mathbf{X} \mathbf{\Phi}_2 \|_F^2 + \lambda \| \mathbf{Z} \|_1 + \langle \mathbf{A}, \mathbf{X} - \mathbf{Z} \rangle + \frac{\rho}{2} \| \mathbf{X} - \mathbf{Z} \|_F^2    (8)

where $\mathbf{A}$ is the matrix of Lagrangian multipliers and $\rho$ is the penalty parameter, which should be carefully adjusted to obtain better imaging results.
According to the principles of the 2D ADMM algorithm, Equation (8) is decomposed into three sub-problems by minimizing $\mathcal{L}(\mathbf{X}, \mathbf{Z}, \mathbf{A})$ with respect to $\mathbf{X}$, $\mathbf{Z}$, and $\mathbf{A}$, respectively:

\begin{cases} \mathbf{X}^{(n)} = \arg\min_{\mathbf{X}} \frac{1}{2}\|\mathbf{Y} - \mathbf{\Phi}_1\mathbf{X}\mathbf{\Phi}_2\|_F^2 + \langle \mathbf{A}^{(n-1)}, \mathbf{X} - \mathbf{Z}^{(n-1)} \rangle + \frac{\rho}{2}\|\mathbf{X} - \mathbf{Z}^{(n-1)}\|_F^2 \\ \mathbf{Z}^{(n)} = \arg\min_{\mathbf{Z}} \lambda\|\mathbf{Z}\|_1 + \langle \mathbf{A}^{(n-1)}, \mathbf{X}^{(n)} - \mathbf{Z} \rangle + \frac{\rho}{2}\|\mathbf{X}^{(n)} - \mathbf{Z}\|_F^2 \\ \mathbf{A}^{(n)} = \mathbf{A}^{(n-1)} + \rho(\mathbf{X}^{(n)} - \mathbf{Z}^{(n)}) \end{cases}    (9)

where $n$ is the iteration index. Let $\mathbf{B} = \mathbf{A}/\rho$; then the solutions to (9) satisfy

\mathbf{X}^{(n)} = (\mathbf{Z}^{(n-1)} - \mathbf{B}^{(n-1)}) - \frac{1}{1+\rho} \mathbf{\Phi}_1^H \left( \mathbf{\Phi}_1 (\mathbf{Z}^{(n-1)} - \mathbf{B}^{(n-1)}) \mathbf{\Phi}_2 - \mathbf{Y} \right) \mathbf{\Phi}_2^H    (10)

\mathbf{Z}^{(n)} = S\!\left( \mathbf{X}^{(n)} + \mathbf{B}^{(n-1)};\ \lambda/\rho \right)    (11)

\mathbf{B}^{(n)} = \mathbf{B}^{(n-1)} + \eta \left( \mathbf{X}^{(n)} - \mathbf{Z}^{(n)} \right)    (12)

where $S(\cdot)$ is the shrinkage function and $\eta$ is the update rate for the Lagrangian multiplier.
It is worth mentioning that the optimal parameters of the 2D ADMM are sensitive to SNR. Therefore, several trials are required to obtain well-focused imaging, making this method impractical and time-consuming.
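The iterations (10)-(12) admit a direct matrix implementation. The NumPy sketch below is illustrative only; the parameter values and the complex soft-threshold form are our assumptions, not the paper's settings.

```python
import numpy as np

def soft(X, tau):
    """Complex soft-thresholding S(x; tau) = (x/|x|) * max(|x| - tau, 0)."""
    mag = np.abs(X)
    return np.where(mag > tau, (1 - tau / np.maximum(mag, 1e-12)) * X, 0)

def admm2d(Y, Phi1, Phi2, lam=0.05, rho=1.0, eta=1.0, n_iter=50):
    """2D ADMM for min 0.5 ||Y - Phi1 X Phi2||_F^2 + lam ||X||_1, via (10)-(12)."""
    U, V = Phi1.shape[1], Phi2.shape[0]
    Z = np.zeros((U, V), dtype=complex)
    B = np.zeros((U, V), dtype=complex)
    for _ in range(n_iter):
        W = Z - B
        # reconstruction step (10)
        X = W - Phi1.conj().T @ (Phi1 @ W @ Phi2 - Y) @ Phi2.conj().T / (1 + rho)
        Z = soft(X + B, lam / rho)        # shrinkage step
        B = B + eta * (X - Z)             # multiplier update
    return X
```

With unitary dictionaries the recursion contracts toward the sparse solution; in the incomplete-data setting the same code runs unchanged, but parameter tuning becomes critical, which is exactly the sensitivity discussed above.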

2.3. PnP 2D ADMM Method

The soft threshold function in Formula (11) can be treated as a denoising procedure and can henceforth be replaced by a pre-trained deep denoising network through the PnP framework:

\mathbf{Z}^{(n)} = D\!\left( \mathbf{X}^{(n)} + \mathbf{B}^{(n-1)} \right)    (13)

where $D(\cdot)$ represents the pre-trained deep denoising network. In this article, we chose DnCNN due to its satisfactory denoising performance.
Algorithm 1 summarizes the high-resolution 2D ISAR imaging method based on PnP 2D ADMM, where $N$ denotes the total number of iterations.

Algorithm 1: High-Resolution ISAR Imaging Based on PnP 2D ADMM.
1. Initialize $\rho > 0$, $\mathbf{Z}^{(0)}$, and $\mathbf{B}^{(0)}$;
2. For $n = 1 : N$
      Update $\mathbf{X}^{(n)}$ by (10);
      Update $\mathbf{Z}^{(n)}$ by (13);
      Update $\mathbf{B}^{(n)}$ by (12);
   End
3. Output $\mathbf{X}^{(N+1)}$ by (10).
Although the reconstruction performance and noise robustness of PnP 2D ADMM can be significantly improved by adopting DnCNN as the denoising prior, the choice of $\rho$ and $\eta$ still has a large impact on image quality, and improper initialization may generate defocused images.
To obtain well-focused imaging, the optimal parameters must be found manually through several trials. In addition, the internal parameters cannot be adjusted adaptively in each iteration, which lacks flexibility and leads to slow convergence.
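Algorithm 1 can be sketched as follows. The `denoiser` callable stands in for the pre-trained DnCNN; the magnitude-thresholding lambda at the bottom is only an illustrative placeholder, not the paper's network.

```python
import numpy as np

def pnp_admm2d(Y, Phi1, Phi2, denoiser, rho=1.0, eta=1.0, n_iter=30):
    """Algorithm 1: PnP 2D ADMM. `denoiser` replaces the shrinkage step by (13)."""
    U, V = Phi1.shape[1], Phi2.shape[0]
    Z = np.zeros((U, V), dtype=complex)
    B = np.zeros((U, V), dtype=complex)
    for _ in range(n_iter):
        W = Z - B
        X = W - Phi1.conj().T @ (Phi1 @ W @ Phi2 - Y) @ Phi2.conj().T / (1 + rho)
        Z = denoiser(X + B)        # plugged-in denoising prior (13)
        B = B + eta * (X - Z)      # multiplier update (12)
    # one final reconstruction step, cf. step 3 of Algorithm 1
    W = Z - B
    return W - Phi1.conj().T @ (Phi1 @ W @ Phi2 - Y) @ Phi2.conj().T / (1 + rho)

# stand-in denoiser (the article plugs in a pre-trained DnCNN here)
shrink = lambda V_: np.where(np.abs(V_) > 0.05, V_, 0)
```

Note that $\rho$, $\eta$, and the number of iterations remain fixed, hand-tuned inputs here; removing that manual tuning is precisely what the unrolled PAN of the next section is designed to do.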

3. Structure of PAN

To tackle the issue of optimal internal parameter selection in PnP 2D ADMM, we modify its structure and expand it into a learnable deep architecture, i.e., PAN, employing the unrolling technique. As shown in Figure 1, the network has $N$ stages, and Stage $n$, $n \in [1, N]$, represents the $n$-th iteration described by Table 1. Typically, one stage consists of three layers, i.e., the reconstruction layer $R(\cdot\,; \rho^{(n)})$; the denoising layer $D(\cdot\,; w^{(n)})$, where $w$ denotes the network parameters; and the multiplier update layer $U(\cdot\,; \eta^{(n)})$. The three layers correspond to (10), (13), and (12), respectively, and $(\cdot)$ represents the inputs of each layer. By these means, the penalty parameter and the update rate are trainable, and the internal parameters are adjustable in each iteration.

3.1. Reconstruction Layer

The inputs of the reconstruction layer are the outputs of the previous denoising layer $\mathbf{Z}^{(n-1)}$ and multiplier update layer $\mathbf{B}^{(n-1)}$, and the output of the reconstruction layer is

\mathbf{X}^{(n)} = (\mathbf{Z}^{(n-1)} - \mathbf{B}^{(n-1)}) - \frac{1}{1+\rho^{(n)}} \mathbf{\Phi}_1^H \left( \mathbf{\Phi}_1 (\mathbf{Z}^{(n-1)} - \mathbf{B}^{(n-1)}) \mathbf{\Phi}_2 - \mathbf{Y} \right) \mathbf{\Phi}_2^H    (14)

where $\rho^{(n)}$ is an adjustable network parameter.
For $n = 1$, $\mathbf{Z}^{(0)}$ and $\mathbf{B}^{(0)}$ are initialized to zero matrices and, thus, Equation (14) reduces to

\mathbf{X}^{(1)} = \frac{1}{1+\rho^{(1)}} \mathbf{\Phi}_1^H \mathbf{Y} \mathbf{\Phi}_2^H    (15)

For $n \in [1, N]$, the output of the reconstruction layer serves as an input of the denoising layer and the multiplier update layer. For $n = N+1$, the output of the reconstruction layer is the input of the loss function.

3.2. Denoising Layer

As shown in Figure 2, the inputs of the denoising layer are the output of the reconstruction layer $\mathbf{X}^{(n)}$ and that of the previous multiplier update layer $\mathbf{B}^{(n-1)}$, and the output of the denoising layer is

\mathbf{Z}^{(n)} = D\!\left( \mathbf{X}^{(n)} + \mathbf{B}^{(n-1)};\ w^{(n)} \right)    (16)

where the deep network $D(\cdot\,; w^{(n)})$ is DnCNN, without residual learning and batch normalization. As shown in Figure 2, the first three layers are convolution layers followed by the ReLU activation function, and each layer has 64 convolution kernels of size $3 \times 3$. In addition, we add a final convolutional layer with one $3 \times 3$ convolution kernel.
For $n = 1$, $\mathbf{B}^{(0)}$ is initialized as a zero matrix and Equation (16) reduces to

\mathbf{Z}^{(1)} = D\!\left( \mathbf{X}^{(1)};\ w^{(1)} \right)    (17)

The output of the denoising layer is an input of the next reconstruction layer and the multiplier update layer.
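A minimal PyTorch sketch of the denoising layer described above, assuming single-channel real-valued input (the real/imaginary handling is described in Section 4); the class name and construction details here are ours, not the authors' code.

```python
import torch
import torch.nn as nn

class DenoisingLayer(nn.Module):
    """DnCNN-style denoiser for the PAN denoising layer: `depth` 3x3 conv
    layers with 64 kernels, each followed by ReLU, plus a final 3x3 conv
    with a single kernel; no residual learning or batch normalization."""
    def __init__(self, depth=3, channels=64):
        super().__init__()
        layers = [nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 1):
            layers += [nn.Conv2d(channels, channels, 3, padding=1),
                       nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(channels, 1, 3, padding=1)]  # single output kernel
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)
```

The `padding=1` keeps the spatial size of $\mathbf{X}^{(n)} + \mathbf{B}^{(n-1)}$ unchanged through every convolution, so the layer can be dropped into the iteration without reshaping.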

3.3. Multiplier Update Layer

The inputs of the multiplier update layer are the output of the previous multiplier update layer $\mathbf{B}^{(n-1)}$, the reconstruction layer $\mathbf{X}^{(n)}$, and the denoising layer $\mathbf{Z}^{(n)}$, and its output is

\mathbf{B}^{(n)} = \mathbf{B}^{(n-1)} + \eta^{(n)} \left( \mathbf{X}^{(n)} - \mathbf{Z}^{(n)} \right)    (18)

where the update rate $\eta^{(n)}$ is an adjustable network parameter.
For $n = 1$, $\mathbf{B}^{(0)}$ is initialized as a zero matrix and Equation (18) becomes

\mathbf{B}^{(1)} = \eta^{(1)} \left( \mathbf{X}^{(1)} - \mathbf{Z}^{(1)} \right)    (19)

For $n \in [1, N-1]$, the output of the multiplier update layer is an input of the next reconstruction layer, the next denoising layer, and the next multiplier update layer. For $n = N$, the output of the multiplier update layer is an input of the next reconstruction layer.
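Putting the three layers together, one PAN stage can be sketched functionally as below. In the actual network, `rho_n`, `eta_n`, and the denoiser weights are learned per stage by back-propagation; the function signature and the placeholder denoiser are our assumptions.

```python
import numpy as np

def pan_stage(Y, Z_prev, B_prev, Phi1, Phi2, rho_n, eta_n, denoiser_n):
    """One PAN stage: reconstruction (14), denoising (16), multiplier
    update (18). Stage-wise parameters rho_n, eta_n, and denoiser_n are
    trainable in the real network."""
    W = Z_prev - B_prev
    X = W - Phi1.conj().T @ (Phi1 @ W @ Phi2 - Y) @ Phi2.conj().T / (1 + rho_n)
    Z = denoiser_n(X + B_prev)
    B = B_prev + eta_n * (X - Z)
    return X, Z, B
```

Chaining $N$ such stages, with zero initialization of $\mathbf{Z}^{(0)}$ and $\mathbf{B}^{(0)}$ and one extra reconstruction layer at the end, reproduces the forward pass of Figure 1.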

4. Training of PAN

In this article, we train PAN according to the forward-backward propagation method. By this means, the error can be back-propagated to the entire network, to directly update all the adjustable parameters, i.e., the penalty parameters in the reconstruction layers, the parameters of DnCNN in the denoising layers, and the update rate in the multiplier update layers. Below, we will discuss network training in detail.

4.1. Loss Function

In this article, the normalized mean square error (NMSE) is used as the loss function $L(\Theta)$, which satisfies

L(\Theta) = \frac{1}{\gamma} \sum_{(\mathbf{Y}, \mathbf{X}_{gt}) \in \Gamma} \frac{ \left\| \hat{\mathbf{X}}(\mathbf{Y}, \Theta) - \mathbf{X}_{gt} \right\|_F^2 }{ \left\| \mathbf{X}_{gt} \right\|_F^2 }    (20)

where $\hat{\mathbf{X}}$ is the network output, $\mathbf{Y}$ is the input echo in the wavenumber domain defined by (5), $\Theta = \{ \rho^{(n)}, w^{(n)}, \eta^{(n)} \}$ is the set of adjustable parameters, $\mathbf{X}_{gt}$ is the label image, $\|\cdot\|_F$ is the Frobenius norm, and $\Gamma = \{ (\mathbf{Y}, \mathbf{X}_{gt}) \}$ is the training set with $\mathrm{card}\{\Gamma\} = \gamma$.
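The loss (20) is straightforward to implement. The sketch below assumes the set $\Gamma$ is given as parallel lists of network outputs and label images; the function name is ours.

```python
import numpy as np

def nmse_loss(X_hat_list, X_gt_list):
    """Training loss (20): NMSE averaged over the training set."""
    total = 0.0
    for X_hat, X_gt in zip(X_hat_list, X_gt_list):
        total += (np.linalg.norm(X_hat - X_gt, 'fro') ** 2
                  / np.linalg.norm(X_gt, 'fro') ** 2)
    return total / len(X_gt_list)
```

Normalizing by $\|\mathbf{X}_{gt}\|_F^2$ keeps samples with strong and weak scatterers on an equal footing, so no sample dominates the gradient.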

4.2. Other Details

DataSet: Limited by the observation conditions, measured ISAR data is usually insufficient for network training in practice. In addition, it is not easy to generalize the network trained by a single category to other categories. To tackle this issue, we trained PAN with simulated data and tested it with both simulated and measured data. Implementation details for simulated data generation are described in Section 5.
Handling Complex Values: It should be noted that the forward and backward propagation formulae in PAN are implemented in the real domain. To deal with the complex-valued model, we concatenate the real and imaginary parts of the complex-valued matrices in the real domain [15]. Specifically, for complex-valued matrices $\mathbf{A} \in \mathbb{C}^{P \times Q}$, $\mathbf{\Phi} \in \mathbb{C}^{P \times U}$, and $\mathbf{B} \in \mathbb{C}^{U \times Q}$ with $\mathbf{A} = \mathbf{\Phi}\mathbf{B}$, the matrix multiplication can be decomposed and expressed as

\begin{bmatrix} \mathrm{Re}(\mathbf{A}) \\ \mathrm{Im}(\mathbf{A}) \end{bmatrix} = \begin{bmatrix} \mathrm{Re}(\mathbf{\Phi}) & -\mathrm{Im}(\mathbf{\Phi}) \\ \mathrm{Im}(\mathbf{\Phi}) & \mathrm{Re}(\mathbf{\Phi}) \end{bmatrix} \begin{bmatrix} \mathrm{Re}(\mathbf{B}) \\ \mathrm{Im}(\mathbf{B}) \end{bmatrix}    (21)

where $\mathrm{Re}(\cdot)$ and $\mathrm{Im}(\cdot)$ denote taking the real and imaginary parts, respectively. By this means, PAN can properly deal with the complex-valued matrix multiplications in (5).
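The decomposition (21) can be checked numerically. The helper below (a hypothetical name) performs a complex matrix product entirely in the real domain and agrees with the direct complex product.

```python
import numpy as np

def real_stack_matmul(Phi, B):
    """Compute A = Phi @ B via the real-valued stacked form (21)."""
    top = np.hstack([Phi.real, -Phi.imag])
    bot = np.hstack([Phi.imag,  Phi.real])
    M = np.vstack([top, bot])                 # 2P x 2U real matrix
    stacked = M @ np.vstack([B.real, B.imag]) # [Re(A); Im(A)], 2P x Q
    P = Phi.shape[0]
    return stacked[:P] + 1j * stacked[P:]

rng = np.random.default_rng(1)
Phi = rng.standard_normal((4, 3)) + 1j * rng.standard_normal((4, 3))
B = rng.standard_normal((3, 2)) + 1j * rng.standard_normal((3, 2))
assert np.allclose(real_stack_matmul(Phi, B), Phi @ B)
```

The top block of the product gives $\mathrm{Re}(\mathbf{\Phi})\mathrm{Re}(\mathbf{B}) - \mathrm{Im}(\mathbf{\Phi})\mathrm{Im}(\mathbf{B}) = \mathrm{Re}(\mathbf{\Phi}\mathbf{B})$ and the bottom block the imaginary part, so real-valued autograd frameworks can back-propagate through it directly.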
Implementation Details: The adjustable parameters of PAN were initialized as $\rho^{(n)} = 0.2$ and $\eta^{(n)} = 1$. The stage number $N$ was set to 4 and the depth of DnCNN was set to 3. The choice of these parameters is discussed in Section 6. Additionally, PAN was trained for 30 epochs. The initial learning rate was set to $2 \times 10^{-4}$ and then linearly decayed to $10^{-5}$. The Adam algorithm [28] was adopted to optimize the network parameters.

5. Experimental Results

In this section, we will demonstrate the effectiveness of PAN with high-resolution imaging of 2D incomplete data. The missing data pattern of the 2D incomplete data is shown in Figure 3a, where the white bars denote the available data and the black ones denote the missing data, and the loss rate is 50%.
For network training, 1000 image pairs ( Y , X g t ) were generated as the simulated data set, where each X g t includes randomly distributed scattering centers with Gaussian amplitude. The training set consisted of 800 samples and the test set consisted of 200 samples. The SNR of the range-compressed echoes of the training set ranged from 2 dB to 20 dB; while the SNR of the range-compressed echoes of the test set was set to 5 dB, 10 dB, and 15 dB, respectively. The label image of a typical noise-free test sample is shown in Figure 3b.
To evaluate the imaging and generalization performance of PAN on measured data, two additional test sets of airplanes were selected. For complete data and high SNR, their RD images are shown in Figure 3c and Figure 3d. Then, the SNR of the range-compressed echoes was set to 5 dB, 10 dB, and 15 dB, respectively, by adding Gaussian noise.
As a comparison, we provide the imaging results of 2D-ADN [16], UNet [19], and PnP 2D ADMM. According to the analysis given in Section 1, a model set had to be built for the noise-level-dependent, model-driven 2D-ADN, as the SNR varies among echoes. Therefore, we trained three 2D-ADNs for SNRs of 5 dB, 10 dB, and 15 dB, respectively.
For quantitative performance evaluation, we calculated the normalized mean square error (NMSE), the peak signal-to-noise ratio (PSNR), the structure similarity index measure (SSIM), entropy (ENT), and the running time.
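Of these metrics, image entropy is the least standardized; the sketch below uses the Shannon entropy of the normalized image intensity, a common convention in the ISAR literature, and is an assumption since the article does not spell out its exact formula.

```python
import numpy as np

def image_entropy(X):
    """Shannon entropy of the normalized image intensity. Lower entropy
    usually indicates a better-focused ISAR image."""
    p = np.abs(X) ** 2
    p = p / p.sum()
    p = p[p > 0]                      # ignore empty pixels (0 * log 0 = 0)
    return float(-(p * np.log(p)).sum())
```

An image with all energy in one pixel has entropy 0, while a uniform image of $n$ pixels has entropy $\ln n$, the maximum; focused reconstructions therefore sit near the low end of this range.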
The computing platform was an Intel i9-10920X 3.50-GHz computer with a 12-core processor and 64 GB RAM. In addition, UNet, PnP 2D ADMM, and PAN were implemented on an NVIDIA RTX 3090 GPU with 24 GB RAM using the PyTorch framework. The 2D-ADN was implemented on a CPU in MATLAB.

5.1. 2D Incomplete Data

For the test data illustrated in Figure 3b, the imaging results are shown in Figure 4, and for the test set, the average metrics of the 200 samples with SNRs of 5 dB, 10 dB, and 15 dB are shown in Table 1, Table 2 and Table 3, respectively. It can be observed that the data-driven model, i.e., UNet, gives unsatisfactory imaging results on incomplete 2D data, due to the lack of theoretical support and sparse constraints. On the contrary, the remaining methods are based on 2D ADMM and demonstrated better reconstruction performance on the incomplete data.
In addition, the numbers of parameters of the different methods are shown in Table 4. Although PAN has roughly 1/70 as many parameters as UNet, it achieved the highest PSNR and SSIM and the smallest NMSE (except for 2D-ADN trained for a single SNR), demonstrating its superior imaging performance and robustness to different noise levels. In addition, we adjusted the penalty parameters of PnP 2D ADMM manually to obtain better imaging results, which led to low efficiency. PnP 2D ADMM had the longest running time, as more iterations were required to obtain well-focused imaging results.
For the measured data illustrated in Figure 3c, the imaging results are shown in Figure 5, and the corresponding entropy (ENT) and running time are shown in Table 5. It is observed that although 2D-ADN had a lower entropy in low SNR scenarios, it had a poor generalization performance, as shown in Figure 5. On the contrary, PAN had a more complete structure, demonstrating its superior reconstruction performance on the measured data. In addition, UNet had artifacts, demonstrating its poor generalization performance.
For another set of measured data, illustrated in Figure 3d, the imaging results are shown in Figure 6, and the corresponding entropy and running time are listed in Table 6. It can be seen that PAN had the best imaging results, indicating its good generalization performance on the measured data.

5.2. Different Loss Rates

In order to further analyze the imaging performance of PAN under different loss rates, we designed additional experiments with only 25% and 10% of the data available; the missing-data patterns for the 75% and 90% loss rates are shown in Figure 7a and Figure 7b, respectively.
For the test data illustrated in Figure 3b, the imaging results are shown in Figure 8, and for the test set, the average metrics of the 200 samples with SNRs of 5 dB, 10 dB, and 15 dB are shown in Table 7, Table 8, and Table 9, respectively. It is observed that the test data with 90% loss rate had the worst imaging result under a 5 dB SNR. However, when we raised the SNR to 10 dB or 15 dB, the imaging performance improved rapidly. In addition, if we reduced the loss rate to 75% or 50%, the imaging performance also improved rapidly.
For the measured data illustrated in Figure 3c, the imaging results are shown in Figure 9, and the corresponding entropy (ENT) and running time are shown in Table 10. It is observed that the test data with 90% loss rate had a lot of false points and artifacts. However, if we reduced the loss rate to 75% or 50%, the imaging performance improved rapidly.
For another measured dataset, illustrated in Figure 3d, the imaging results are shown in Figure 10, and the corresponding entropy and running time are listed in Table 11. The test data with a 75% loss rate had a better imaging result than the test data with 90% loss rate, demonstrating that the loss rate had a significant impact on the imaging performance.

6. Discussion

To obtain the optimal imaging results, it is necessary to design the structure of PAN in two aspects: (a) the number of stages; and (b) the depth of DnCNN. The discussions below are based on the simulated data presented in Section 5.

6.1. The Number of Stages

For PAN, the number of stages corresponds to the number of iterations in PnP 2D ADMM. As the stage number increases, the runtime becomes longer. Table 12 presents the number of parameters for different stages, and Figure 11 shows the variation of NMSE, PSNR, and SSIM with the number of stages. It is observed that the curve is relatively stable; thus, we set the stage number to 4 in the experiments to reduce the model complexity.

6.2. The Depth of DnCNN

As a denoising layer in PAN, the DnCNN has the most adjustable parameters. When the number of stages is fixed to 4, the number of parameters for different depths are shown in Table 13. It is observed that compared with the stage number, the depth of DnCNN has a larger impact on the number of parameters. Figure 12 shows the variation of NMSE, PSNR, and SSIM with the network depth, where the reconstruction performance is stable when the depth of DnCNN is larger than 3.

7. Conclusions

This article proposed PAN for sparse ISAR imaging of 2D incomplete data with various SNRs. First, PnP 2D ADMM was proposed for 2D ISAR imaging by combining the data-driven deep denoising network DnCNN with the sparse signal reconstruction method 2D ADMM. On this basis, PnP 2D ADMM was generalized and unrolled into PAN, which is robust to noise and achieves satisfactory reconstruction performance for incomplete data. Experiments demonstrated that, compared with the available methods, PAN with a single parameter set could achieve better-focused imaging with higher computational efficiency for measured data under various SNRs.
To deal with the open problem of optimal parameter selection, future work will be focused on presenting tuning-free algorithms that can determine parameters automatically based on a deep network.

Author Contributions

Conceptualization, X.L. and Y.Z.; methodology, X.L.; validation, X.L. and Y.Z.; data curation, X.L.; writing—original draft preparation, X.L.; writing—review and editing, X.B. and F.Z.; visualization, X.L.; supervision, X.B.; project administration, X.B.; funding acquisition, F.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the National Natural Science Foundation of China under Grant 62131020, Grant 61971332 and Grant 61631019, and in part by the Fund for Foreign Scholars in University Research and Teaching Programs (the 111 Project) under Grant B18039.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Luo, Y.; Zhang, Q.; Yuan, N.; Zhu, F.; Gu, F. Three-dimensional precession feature extraction of space targets. IEEE Trans. Aerosp. Electron. Syst. 2014, 50, 1313–1329.
  2. Bai, X.R.; Zhou, X.N.; Zhang, F.; Wang, L.; Zhou, F. Robust pol-ISAR target recognition based on ST-MC-DCNN. IEEE Trans. Geosci. Remote Sens. 2019, 57, 9912–9927.
  3. Carrara, W.G.; Goodman, R.S.; Majewski, R.M. Spotlight Synthetic Aperture Radar: Signal Processing Algorithms; Artech House: Boston, MA, USA, 1995; Chapter 2.
  4. Wang, D.; Zhang, Q.; Luo, Y.; Liu, X.; Ni, J.; Su, L. Joint optimization of time and aperture resource allocation strategy for multi-target ISAR imaging in radar sensor network. IEEE Sens. J. 2021, 21, 19570–19581.
  5. Zhao, L.F.; Wang, L.; Yang, L.; Zoubir, A.M.; Bi, G.A. The race to improve radar imagery: An overview of recent progress in statistical sparsity-based techniques. IEEE Signal Process. Mag. 2016, 33, 85–102.
  6. Kang, L.; Luo, Y.; Zhang, Q.; Liu, X.W.; Liang, B.S. 3-D scattering image sparse reconstruction via radar network. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–14.
  7. Kang, M.; Lee, S.; Lee, S.; Kim, K. ISAR imaging of high-speed maneuvering target using gapped stepped-frequency waveform and compressive sensing. IEEE Trans. Image Process. 2017, 26, 5043–5056.
  8. Hu, P.J.; Xu, S.Y.; Wu, W.Z.; Chen, Z.P. Sparse subband ISAR imaging based on autoregressive model and smoothed ℓ0 algorithm. IEEE Sens. J. 2018, 18, 9315–9323.
  9. Li, S.; Amin, M.; Zhao, G.; Sun, H. Radar imaging by sparse optimization incorporating MRF clustering prior. IEEE Geosci. Remote Sens. Lett. 2020, 17, 1139–1143.
  10. Bai, X.R.; Zhou, F.; Hui, Y. Obtaining JTF-signature of space-debris from incomplete and phase-corrupted data. IEEE Trans. Aerosp. Electron. Syst. 2017, 53, 1169–1180.
  11. Qiu, W.; Zhao, H.; Zhou, J.; Fu, Q. High-resolution fully polarimetric ISAR imaging based on compressive sensing. IEEE Trans. Geosci. Remote Sens. 2014, 52, 6119–6131.
  12. Liu, Z.; Liao, X.; Wu, J. Image reconstruction for low-oversampled staggered SAR via HDM-FISTA. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–14.
  13. Hashempour, H.R. Sparsity-driven ISAR imaging based on two-dimensional ADMM. IEEE Sens. J. 2020, 20, 13349–13356.
  14. Monga, V.; Li, Y.; Eldar, Y.C. Algorithm unrolling: Interpretable, efficient deep learning for signal and image processing. IEEE Signal Process. Mag. 2021, 38, 18–44.
  15. Wei, S.; Liang, J.; Wang, M.; Shi, J.; Zhang, X.; Ran, J. AF-AMPNet: A deep learning approach for sparse aperture ISAR imaging and autofocusing. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–14.
  16. Li, X.; Bai, X.; Zhou, F. High-resolution ISAR imaging and autofocusing via 2D-ADMM-Net. Remote Sens. 2021, 13, 2326.
  17. Wei, S.; Liang, J.; Wang, M.; Zeng, X.F.; Shi, J.; Zhang, X.L. CIST: An improved ISAR imaging method using convolution neural network. Remote Sens. 2020, 12, 2641.
  18. Hu, C.; Wang, L.; Li, Z.; Zhu, D. Inverse synthetic aperture radar imaging using a fully convolutional neural network. IEEE Geosci. Remote Sens. Lett. 2020, 17, 1203–1207.
  19. Yang, T.; Shi, H.Y.; Lang, M.Y.; Guo, J.W. ISAR imaging enhancement: Exploiting deep convolutional neural network for signal reconstruction. Int. J. Remote Sens. 2020, 41, 9447–9468.
  20. Venkatakrishnan, S.V.; Bouman, C.A.; Wohlberg, B. Plug-and-play priors for model based reconstruction. In Proceedings of the IEEE Global Conference on Signal and Information Processing (GlobalSIP), Austin, TX, USA, 3–5 December 2013; pp. 945–948.
  21. Wei, K.; Avilés-Rivero, A.I.; Liang, J.; Fu, Y.; Schönlieb, C.; Huang, H. Tuning-free plug-and-play proximal algorithm for inverse imaging problems. In Proceedings of the 37th International Conference on Machine Learning, Vienna, Austria, 13–18 July 2020; pp. 10158–10169.
  22. Dabov, K.; Foi, A.; Katkovnik, V.; Egiazarian, K. Image denoising by sparse 3D transform-domain collaborative filtering. IEEE Trans. Image Process. 2007, 16, 2080–2095.
  23. Zhang, K.; Zuo, W.; Chen, Y.; Meng, D.; Zhang, L. Beyond a Gaussian denoiser: Residual learning of deep CNN for image denoising. IEEE Trans. Image Process. 2017, 26, 3142–3155.
  24. Dong, W.; Wang, P.; Yin, W.; Shi, G. Denoising prior driven deep neural network for image restoration. IEEE Trans. Pattern Anal. Mach. Intell. 2018, 41, 2305–2318.
  25. Ahmad, R.; Bouman, C.A.; Buzzard, G.T.; Chan, S.; Liu, S.; Reehorst, E.T.; Schniter, P. Plug-and-play methods for magnetic resonance imaging: Using denoisers for image recovery. IEEE Signal Process. Mag. 2020, 37, 105–116.
  26. He, J.; Yang, Y.; Wang, Y.; Zeng, D.; Bian, Z.; Zhang, H.; Sun, J.; Xu, Z.; Ma, J. Optimizing a Parameterized Plug-and-Play ADMM for Iterative Low-Dose CT Reconstruction. IEEE Trans. Med. Imaging. 2019, 38, 371–382. [Google Scholar] [CrossRef] [PubMed]
  27. Ryu, E.K.; Liu, J.; Wang, S.; Chen, X.; Wang, Z.; Yin, W. Plug-and-play methods provably converge with properly trained denoisers. arXiv 2019, arXiv:1905.05406. [Google Scholar]
  28. Kingma, D.P.; Ba, J. Adam: A method for stochastic optimization. arXiv 2014, arXiv:1412.6980. [Google Scholar]
Figure 1. Structure of the PAN.
Figure 2. Structure of the denoising layer.
Figure 3. Data set illustration. (a) Data missing pattern of the 2D incomplete data. (b) Label image of a test sample. (c) RD image of the measured data with complete data and high SNR. (d) RD image of another measured dataset with complete data and high SNR.
Figure 4. ISAR images of the simulated target in Figure 3b under different SNRs.
Figure 5. ISAR images of measured data with different SNRs.
Figure 6. ISAR images of another measured dataset with different SNRs.
Figure 7. Missing data patterns with (a) 75% and (b) 90% loss rates.
Figure 8. ISAR images of Figure 3b with different loss rates and different SNRs.
Figure 9. ISAR images of measured data with different loss rates and different SNRs.
Figure 10. ISAR images of another measured dataset with different loss rates and different SNRs.
Figure 11. Variation of (a) NMSE, (b) PSNR, and (c) SSIM with the number of stages.
Figure 12. Variation of (a) NMSE, (b) PSNR, and (c) SSIM with the number of layers for DnCNN.
Table 1. Performance evaluation for the simulated 2D incomplete data under the 5 dB condition.

Method       | NMSE   | PSNR    | SSIM   | Time (s)
UNet         | 0.4665 | 40.9056 | 0.9579 | 0.01
2D-ADN       | 0.1728 | 49.5218 | 0.9897 | 0.09
PnP 2D ADMM  | 0.2286 | 47.4149 | 0.9805 | 1.12
PAN          | 0.2035 | 48.1073 | 0.9847 | 0.01
Table 2. Performance evaluation for the simulated 2D incomplete data under the 10 dB condition.

Method       | NMSE   | PSNR    | SSIM   | Time (s)
UNet         | 0.4348 | 41.5182 | 0.9648 | 0.01
2D-ADN       | 0.0902 | 55.1677 | 0.9972 | 0.09
PnP 2D ADMM  | 0.1209 | 52.4912 | 0.9934 | 0.96
PAN          | 0.1104 | 53.4170 | 0.9957 | 0.01
Table 3. Performance evaluation for the simulated 2D incomplete data under the 15 dB condition.

Method       | NMSE   | PSNR    | SSIM   | Time (s)
UNet         | 0.4243 | 41.7323 | 0.9671 | 0.01
2D-ADN       | 0.0522 | 59.9237 | 0.9991 | 0.09
PnP 2D ADMM  | 0.0945 | 55.0804 | 0.9966 | 1.24
PAN          | 0.0872 | 55.4826 | 0.9973 | 0.01
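Tables 1–3 score reconstructions with NMSE, PSNR, and SSIM, whose formulas are not restated in this excerpt. The sketch below uses commonly assumed conventions (NMSE as the ratio of Frobenius norms of the error and the reference, PSNR against a unit peak value); SSIM is usually delegated to a library routine such as `skimage.metrics.structural_similarity` and is omitted here.

```python
import numpy as np

def nmse(ref, est):
    """Normalized mean-square error: ||est - ref||_F / ||ref||_F.

    An assumed convention -- the paper does not restate its exact formula
    in this excerpt."""
    return np.linalg.norm(est - ref) / np.linalg.norm(ref)

def psnr(ref, est, peak=1.0):
    """Peak signal-to-noise ratio in dB for images scaled to [0, peak]."""
    mse = np.mean((est - ref) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

# Toy check: a synthetic sparse "ISAR image" and a lightly corrupted estimate.
rng = np.random.default_rng(0)
ref = np.zeros((64, 64))
ref[rng.integers(0, 64, 20), rng.integers(0, 64, 20)] = 1.0
est = ref + 0.01 * rng.standard_normal(ref.shape)
print(f"NMSE = {nmse(ref, est):.4f}, PSNR = {psnr(ref, est):.2f} dB")
```

With these conventions, a lower NMSE and a higher PSNR both indicate a reconstruction closer to the label image.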
Table 4. Parameters of different methods.

Method        | UNet | 2D-ADN | PnP 2D ADMM | PAN
Parameter No. | 7.7M | 41     | 0.04M       | 0.11M
Table 5. Performance evaluation for the measured 2D incomplete data under different SNR conditions.

Method       | ENT (5 dB) | Time (s) | ENT (10 dB) | Time (s) | ENT (15 dB) | Time (s)
UNet         | 0.7613     | 0.01     | 0.7809      | 0.01     | 0.7880      | 0.01
2D-ADN       | 0.5543     | 0.09     | 1.1620      | 0.09     | 1.4854      | 0.09
PnP 2D ADMM  | 0.4938     | 1.16     | 0.4753      | 1.47     | 0.4754      | 1.93
PAN          | 1.4245     | 0.01     | 1.3236      | 0.01     | 1.3063      | 0.01
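For the measured data no ground-truth image exists, so Tables 5 and 6 fall back on image entropy (ENT), where a lower value indicates a better-focused image. The exact definition is not restated in this excerpt; a common energy-normalized version is sketched below.

```python
import numpy as np

def image_entropy(img):
    """Shannon entropy of a (possibly complex-valued) ISAR image.

    The energy-normalized pixel intensities are treated as a probability
    mass function. This is an assumed convention, since the ENT formula
    is not restated in this excerpt."""
    p = np.abs(img) ** 2
    p = p / p.sum()
    p = p[p > 0]  # drop zero bins to avoid log(0)
    return float(-(p * np.log(p)).sum())

# A focused (sparse) image has lower entropy than a defocused (spread) one.
focused = np.zeros((128, 128))
focused[10, 20] = 1.0
focused[50, 90] = 0.5
defocused = np.ones((128, 128))
print(image_entropy(focused), image_entropy(defocused))
```

Under this definition a single-scatterer image has entropy 0 and a uniform N × N image has the maximum entropy ln(N²), which is why sparser, better-focused reconstructions score lower in the tables.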
Table 6. Performance evaluation for another measured 2D incomplete dataset under different SNR conditions.

Method       | ENT (5 dB) | Time (s) | ENT (10 dB) | Time (s) | ENT (15 dB) | Time (s)
UNet         | 0.5918     | 0.01     | 0.5969      | 0.01     | 0.5969      | 0.01
2D-ADN       | 0.4115     | 0.09     | 0.7905      | 0.09     | 0.6836      | 0.09
PnP 2D ADMM  | 0.3854     | 1.18     | 0.3820      | 1.71     | 0.3763      | 1.82
PAN          | 1.0400     | 0.01     | 0.9465      | 0.01     | 0.9319      | 0.01
Table 7. Performance evaluation for the simulated data with different loss rates under the 5 dB condition.

Loss Rate | NMSE   | PSNR    | SSIM   | Time (s)
50%       | 0.2035 | 48.1073 | 0.9847 | 0.01
75%       | 0.3371 | 43.7168 | 0.9612 | 0.01
90%       | 0.6817 | 37.5979 | 0.8779 | 0.01
Table 8. Performance evaluation for the simulated data with different loss rates under the 10 dB condition.

Loss Rate | NMSE   | PSNR    | SSIM   | Time (s)
50%       | 0.1104 | 53.4170 | 0.9957 | 0.01
75%       | 0.2066 | 47.9707 | 0.9856 | 0.01
90%       | 0.5814 | 38.9823 | 0.9077 | 0.01
Table 9. Performance evaluation for the simulated data with different loss rates under the 15 dB condition.

Loss Rate | NMSE   | PSNR    | SSIM   | Time (s)
50%       | 0.0872 | 55.4826 | 0.9973 | 0.01
75%       | 0.1717 | 49.5959 | 0.9900 | 0.01
90%       | 0.5423 | 39.5876 | 0.9186 | 0.01
Table 10. Performance evaluation for the measured data with different loss rates under different SNR conditions.

Loss Rate | ENT (5 dB) | Time (s) | ENT (10 dB) | Time (s) | ENT (15 dB) | Time (s)
50%       | 1.4245     | 0.01     | 1.3236      | 0.01     | 1.3063      | 0.01
75%       | 1.5640     | 0.01     | 1.4885      | 0.01     | 1.4854      | 0.01
90%       | 1.9654     | 0.01     | 1.8782      | 0.01     | 1.8522      | 0.01
Table 11. Performance evaluation for another measured dataset with different loss rates under different SNR conditions.

Loss Rate | ENT (5 dB) | Time (s) | ENT (10 dB) | Time (s) | ENT (15 dB) | Time (s)
50%       | 1.0400     | 0.01     | 0.9465      | 0.01     | 0.9319      | 0.01
75%       | 1.4168     | 0.01     | 1.4174      | 0.01     | 1.4264      | 0.01
90%       | 1.8238     | 0.01     | 1.7711      | 0.01     | 1.7366      | 0.01
Table 12. Parameters of different stages.

Stage No.     | 3     | 4     | 5     | 6     | 7     | 8
Parameter No. | 0.08M | 0.11M | 0.15M | 0.19M | 0.23M | 0.27M
Table 13. Parameters of different DnCNN depths.

Layer No.     | 2    | 3     | 4     | 5     | 6     | 7
Parameter No. | 3658 | 0.11M | 0.23M | 0.34M | 0.45M | 0.56M
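Tables 4, 12, and 13 compare learnable-parameter counts. As a rough aid to reading them, the sketch below tallies a plain DnCNN denoiser under assumed settings (3×3 kernels, 64 feature maps, two input/output channels for the real and imaginary parts, batch norm in the middle layers); the exact per-stage composition is not given in this excerpt, so the absolute numbers need not match the tables. The point is the scaling: the count grows linearly with depth and quadratically with the number of feature maps.

```python
def dncnn_params(depth, feat=64, k=3, ch=2):
    """Approximate parameter count of a DnCNN with `depth` conv layers.

    Assumed layout (hypothetical, for scaling intuition only):
      - first layer:   k x k conv, ch -> feat, with bias
      - middle layers: k x k conv, feat -> feat, no bias, plus batch norm
                       (2 * feat learnable parameters per layer)
      - last layer:    k x k conv, feat -> ch, with bias
    """
    first = k * k * ch * feat + feat
    middle = max(depth - 2, 0) * (k * k * feat * feat + 2 * feat)
    last = k * k * feat * ch + ch
    return first + middle + last

# Parameter count versus network depth.
for d in range(2, 8):
    print(d, dncnn_params(d))
```

Each extra middle layer adds a fixed k²·feat² + 2·feat parameters, which is why the counts in Table 13 climb in nearly constant steps as layers are added.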