Article

CIST: An Improved ISAR Imaging Method Using Convolution Neural Network

School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China
* Author to whom correspondence should be addressed.
Remote Sens. 2020, 12(16), 2641; https://doi.org/10.3390/rs12162641
Submission received: 21 July 2020 / Revised: 13 August 2020 / Accepted: 14 August 2020 / Published: 16 August 2020

Abstract

Compressive sensing (CS) has been widely utilized in inverse synthetic aperture radar (ISAR) imaging, since ISAR measured data are generally incomplete in the cross-range direction and CS-based imaging methods can obtain high-quality imaging results from under-sampled data. However, traditional CS-based methods require pre-defined parameters and sparse transforms, which are difficult to hand-craft. Besides, these methods usually incur a heavy computational cost due to large matrix operations. In this paper, inspired by the adaptive parameter learning and rapid reconstruction of convolution neural networks (CNN), a novel imaging method, called the convolution iterative shrinkage-thresholding (CIST) network, is proposed for efficient ISAR sparse imaging. CIST is capable of learning the optimal parameters and sparse transforms throughout the CNN training process, instead of having them manually defined. Specifically, CIST replaces the linear sparse transform with non-linear convolution operations. This new transform and the essential parameters are learnable end-to-end across the iterations, which increases the flexibility and robustness of CIST. Compared with traditional state-of-the-art CS imaging methods, both simulation and experimental results demonstrate that the proposed CIST-based ISAR imaging method obtains imaging results of high quality while maintaining high computational efficiency; CIST-based ISAR imaging is tens of times faster than the other methods.


1. Introduction

Inverse synthetic aperture radar (ISAR) imaging is capable of imaging non-cooperative targets, such as aircraft, ships, and missiles, in all-day and all-weather conditions. Thus, ISAR has been widely applied in various fields, e.g., target detection and recognition, missile defense, and space surveillance [1,2]. Generally, ISAR achieves high range resolution by transmitting a wide-bandwidth signal, and high cross-range resolution through the target's relative rotational motion. Traditional ISAR imaging methods are mainly based on the Range-Doppler (RD) algorithm [3,4], i.e., the Fourier transform or matched filter. To achieve high cross-range resolution, they require the raw echo data to be complete; otherwise, the imaging quality in the cross-range direction degrades. However, since ISAR targets are mainly non-cooperative moving targets, the radar is likely to lose them during observation. Therefore, a high cross-range resolution ISAR imaging method that works with limited data is meaningful.
Compressive sensing (CS) has been successfully utilized to reconstruct sparse signals from limited measurements [5], so it has been widely used in ISAR sparse imaging [6]. Many CS-based ISAR imaging methods have been proposed in recent years [7,8,9,10]. Zhang et al. introduced compressed sensing into ISAR imaging and showed that CS-based imaging methods outperform RD-type methods in resolution [11]. Wang et al. proposed a greedy Kalman filter based sparse ISAR imaging method [7], which exploits sparsity in the wavelet domain to enhance the reconstruction. Liu et al. proposed a fully automated ISAR imaging algorithm based on sparse Bayesian learning, but it is restricted by its computational load [12]. In [13], Zhang et al. proposed a combination of a local sparsity constraint and nonlocal total variation (NLTV) to improve imaging quality. Zhang et al. used the alternating direction method of multipliers (ADMM) [14] to substitute for the matrix inversion in sparse Bayesian learning and therefore dramatically improved computational efficiency [10]; their method takes 2–4 s to reconstruct a 256 × 256 ISAR image.
Conventional CS-based ISAR imaging has made progress in recent years, since it compensates for a major flaw of RD-type ISAR imaging algorithms. However, CS-based ISAR imaging methods have several disadvantages: (1) Conventional CS-based imaging methods generally consume plenty of time, since they require substantial computing power for iterations and matrix inversion. (2) The optimization parameters (e.g., the regularization parameter and threshold) are usually hand-crafted before the imaging process and are quite challenging to pre-define because they vary across different types of data, yet these parameters are essential for imaging quality. (3) The sparse transform is pre-fixed. ISAR CS imaging mainly uses the Fourier transform, and although some work has utilized wavelets to improve the reconstruction [7], a fixed sparse transform cannot ensure the best performance for different types of data. These disadvantages restrict the applications of conventional CS-based ISAR imaging methods to a great extent.
On the other hand, deep-network-based methods have been utilized to recover sparse signals [15,16,17]; hence, a few deep-network-based ISAR CS imaging methods have been proposed recently. Hu et al. utilized a U-net-based network for ISAR imaging [8], which needs very few training samples compared to other network-based imaging methods, but it only operates in the image domain. Hu et al. also proposed a so-called deep ADMM network (DAN), constructed by unfolding the traditional ADMM optimization algorithm [9], which can use far fewer measurements than conventional CS-based imaging methods to reconstruct a high-quality ISAR image. Therefore, network-based ISAR CS imaging methods have become more feasible.
Among the typical conventional CS-based ISAR imaging algorithms, such as the Iterative Shrinkage-Thresholding Algorithm (ISTA) [18], Approximate Message-Passing (AMP) [19], Orthogonal Matching Pursuit (OMP) [20,21], ADMM [9,10], Sparse Bayesian Learning (SBL) [22,23], and the Sparsity Bayesian Recovery via Iterative Minimum (SBRIM) [24] algorithm, ISTA has the simplest structure. Accordingly, it can be easily modified into a convolutional network while maintaining its flexibility. To reduce the computational time and increase the robustness of conventional algorithms, we turn to deep networks for their powerful learning capability.
In this paper, we propose a convolution iterative shrinkage-thresholding (CIST)-based ISAR sparse imaging method. CIST is based on ISTA and is combined with a convolution neural network (CNN) to improve its robustness [25]. CIST unfolds the iterations of ISTA and replaces the usual sparse transform with convolution operations. In ISAR target imaging, CIST shows several advantages. Firstly, the essential parameters (e.g., the threshold and step size) are learned end-to-end across the iterative process. Secondly, an additional layer (convolution, Leaky Rectified Linear Unit (LReLU), and convolution) plays the role of a nonlinear sparse transform, which is updated adaptively through the iterations. In addition, we use LReLU as the activation function, since negative values are also essential in ISAR imaging. With learnable parameters and a self-adaptive nonlinear sparse transform, CIST has high flexibility and robustness. Furthermore, CIST has high computational efficiency: once trained, it takes less than one second to image an ISAR scene of size 1024 × 2048, which is tens of times faster than conventional algorithms.
The rest of this paper is organized as follows. Section 2 presents the geometry of ISAR imaging and the conventional ISTA-based ISAR sparse method. In Section 3, we introduce the architecture of the proposed CIST-based ISAR imaging method and the training strategy. In Section 4, simulated and measured experimental results and analysis are presented. In Section 5, we discuss the influence of the convolution part of CIST. The conclusions and future work are given in Section 6.

2. ISAR Sparse Imaging Methods

In this section, we first introduce the typical signal model of an ISAR imaging system. Subsequently, we briefly describe how ISTA works.

2.1. ISAR Signal Model

Figure 1 presents the ISAR imaging model. The non-cooperative target undergoes relative motion, including rotational and translational motion. The translational motion error is assumed to be well compensated through range alignment and phase adjustment [26,27]. $R_0$ denotes the distance from the radar to the target center $O$. Suppose that the radar transmits a linear frequency modulated pulse signal $s_T$, which can be expressed as:
$$ s_T(\tau) = A_T \cdot \operatorname{rect}\left(\frac{\tau}{T}\right) \cdot \exp\left[j2\pi\left(f_c + \frac{\gamma}{2}\tau\right)\tau\right], $$
where $\tau$, $T$, $A_T$, $f_c$, and $\gamma$ denote the fast time, pulse repetition period, signal amplitude, carrier frequency, and chirp rate, respectively; $\operatorname{rect}(\cdot)$ denotes the unit rectangular function, as follows:
$$ \operatorname{rect}\left(\frac{\tau}{T}\right) = \begin{cases} 1, & |\tau| \le T/2 \\ 0, & |\tau| > T/2 \end{cases} $$
During the coherent processing interval (CPI), the rotational angle changes by $\Delta\theta(t)$; the instantaneous distance $R(t)$ from $P(x, y)$ to the radar then becomes approximately:
$$ R(t) \approx R_0 + x\sin\Delta\theta(t) + y\cos\Delta\theta(t) \approx R_0 + x\Delta\theta(t) + y, $$
since the rotation angle change $\Delta\theta(t)$ is small during the CPI. Additionally, $\Delta\theta(t)$ can be expanded by a Taylor series as:
$$ \Delta\theta(t) = \omega t + \frac{1}{2}\alpha t^2 + o(t^3), $$
where $\omega$ denotes the rotation rate and $\alpha$ its acceleration. Subsequently, the radar return from $P(x, y)$ can be presented as:
$$ s_R(\tau, t) = A_R \cdot \operatorname{rect}\left(\frac{\tau - t_d}{T}\right)\operatorname{rect}\left(\frac{t}{T_a}\right) \cdot \exp\left\{j2\pi\left[f_c(\tau - t_d) + \frac{\gamma}{2}(\tau - t_d)^2\right]\right\}, $$
where $c$, $t$, $A_R$, and $T_a$ denote the speed of light, slow time, echoed signal amplitude, and observation duration, respectively. Additionally, $t_d = 2R(t)/c$ denotes the round-trip time delay between the radar and the target. After range compression, i.e., a Fourier transform along the range direction, the echoed signal can be expressed as:
$$ s(\tau, t) = A \cdot \operatorname{rect}\left(\frac{t}{T_a}\right)\operatorname{sinc}\left[\gamma T(\tau - t_d)\right] \cdot \exp\left(j2\pi f_c t_d\right), $$
where $A$ is the signal amplitude after range compression. ISAR targets are generally moving, so the time-varying $t_d$ introduces Doppler modulation. Substituting Equations (3) and (4) into Equation (6) yields:
$$ s(\tau, t) = A \cdot \operatorname{rect}\left(\frac{t}{T_a}\right)\operatorname{sinc}\left[\gamma T\left(\tau - \frac{2(R_0 + y)}{c}\right)\right] \cdot \exp\left(j\frac{4\pi(R_0 + y)}{\lambda}\right) \cdot \exp\left[j2\pi\left(f t + \frac{1}{2}\beta t^2\right)\right], $$
where $\lambda$ is the wavelength, $f = 2\omega x/\lambda$ denotes the Doppler frequency, and $\beta = 2\alpha x/\lambda$ denotes the Doppler rate. Suppose that the range cell at $\tau = 2(R_0 + y)/c$ contains $N$ scatterers at different cross-range locations; the returned signal in this range cell can then be expressed as:
$$ s(t) = \sum_{i=1}^{N} A_i \cdot \operatorname{rect}\left(\frac{t}{T_a}\right) \cdot \exp\left[j2\pi\left(f_i t + \frac{1}{2}\beta_i t^2\right)\right], $$
where we have neglected the constant phase term.
Since the rotational motion is assumed to be uniform in RD-type algorithms, the rotational acceleration $\alpha$ is zero, so Equation (8) can be simplified as:
$$ s(t) = \sum_{i=1}^{N} A_i \cdot \operatorname{rect}\left(\frac{t}{T_a}\right) \cdot \exp\left(j2\pi f_i t\right). $$
After applying the cross-range Fourier transform and ignoring the constant phase term, the final ISAR imaging result can be formulated as follows:
$$ s(f_d) = \sum_{i=1}^{N} A_i \cdot \operatorname{sinc}\left[T_a(f_d - f_i)\right], $$
where $f_d$ denotes the Doppler frequency. It can be seen from Equation (10) that the cross-range resolution is proportional to the CPI $T_a$. However, ISAR targets are usually non-cooperative, so the CPI is greatly limited, leading to low cross-range resolution for the RD imaging algorithm. As a result, ISAR CS imaging with limited data becomes more significant and practical.
When the CPI is very short and noise is taken into account, the echoed signal in a single range cell after range compression in Equation (9) can be rewritten as:
$$ s(t) = \sum_{i=1}^{M} A_i \cdot \operatorname{rect}\left(\frac{t}{T_a}\right) \cdot \exp\left(j2\pi f_i t\right) + n(t), $$
where $M$ is the total number of scattering centers, with $M < N$ since the observation time is shorter and some scattering centers are lost; $n(t)$ denotes independent and identically distributed complex Gaussian noise. Equation (11) can be formulated in matrix form as follows:
$$ \mathbf{s} = \mathbf{H}\mathbf{w} + \mathbf{n}, $$
where $\mathbf{w} \in \mathbb{C}^{N}$, $\mathbf{n} \in \mathbb{C}^{M}$, and $\mathbf{s} \in \mathbb{C}^{M}$ denote the weight vector, the Gaussian noise, and the observed data, respectively. The time and frequency resolutions are defined as $\Delta t$ and $\Delta f_d$. Supposing that the pulse repetition frequency is $f_r$, then $\Delta t = 1/f_r$ and $\Delta f_d = f_r/N$. Accordingly, the matrix $\mathbf{H} \in \mathbb{C}^{M \times N}$ can be presented as follows:
$$ \mathbf{H} = \begin{bmatrix} \varphi_{1,1} & \varphi_{1,2} & \cdots & \varphi_{1,N} \\ \varphi_{2,1} & \varphi_{2,2} & \cdots & \varphi_{2,N} \\ \vdots & \vdots & \ddots & \vdots \\ \varphi_{M,1} & \varphi_{M,2} & \cdots & \varphi_{M,N} \end{bmatrix}, $$
where $\varphi_{m,n} = \exp\left(j2\pi \cdot n\Delta t \cdot m\Delta f_d\right)$, $0 \le n \le N$, $0 \le m \le M$. After applying the Fourier transform along the cross-range direction, i.e., cross-range compression, the ISAR imaging result is as follows:
$$ \mathbf{s} = \mathbf{H}\mathbf{F}\mathbf{w} + \mathbf{n}, $$
where $\mathbf{F} \in \mathbb{C}^{N \times N}$ is the cross-range Fourier transform matrix. Equation (14) shows the linear relationship between the imaging result and the input echo data, which is crucial for constructing the CS-based ISAR imaging model.
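To make this linear model concrete, the following minimal NumPy sketch builds an under-sampled cross-range measurement for a single range cell, in the spirit of Equation (14) and the CS model introduced in the next subsection, using the 25% down-sampling setting of Section 4.1. The matrix construction, sparsity level, and noise scale are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Minimal sketch (not the authors' code): y = D F x + n for one range cell,
# with a random cross-range down-sampling matrix D and an orthonormal DFT matrix F.
rng = np.random.default_rng(0)

N, M = 2048, 512                                  # full / kept cross-range samples
F = np.fft.fft(np.eye(N), norm="ortho")           # cross-range Fourier matrix
rows = np.sort(rng.choice(N, size=M, replace=False))
D = np.eye(N)[rows]                               # random down-sampling matrix
Phi = D @ F                                       # measurement matrix (M x N)

x = np.zeros(N, dtype=complex)                    # sparse cross-range profile
idx = rng.choice(N, size=20, replace=False)
x[idx] = rng.standard_normal(20) + 1j * rng.standard_normal(20)

noise = (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)
y = Phi @ x + 0.01 * noise                        # under-sampled, noisy echo
print(Phi.shape, y.shape)                         # (512, 2048) (512,)
```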

2.2. ISTA Sparse Imaging

In general, given the fine ISAR image $\mathbf{x} \in \mathbb{C}^{N}$ in the cross-range direction, the linear measurements $\mathbf{y} \in \mathbb{C}^{M}$, $M < N$, and the measurement matrix $\boldsymbol{\Phi} \in \mathbb{C}^{M \times N}$, the CS-based ISAR imaging model can be presented as:
$$ \mathbf{y} = \boldsymbol{\Phi}\mathbf{x} + \mathbf{n}. $$
Specifically, $\mathbf{y}$ denotes the measurements in the data domain (which can be regarded as the echo data); the measurement matrix is constructed as $\boldsymbol{\Phi} = \mathbf{D}\mathbf{F}$, where $\mathbf{D} \in \mathbb{C}^{M \times N}$ and $\mathbf{F} \in \mathbb{C}^{N \times N}$ denote the down-sampling matrix and the Fourier transform matrix, respectively. To obtain the imaging result $\mathbf{x}$ in Equation (15), the regularized minimization under the CS theorem can be presented as follows:
$$ \hat{\mathbf{x}} = \arg\min_{\mathbf{x}} \frac{1}{2}\left\|\mathbf{y} - \boldsymbol{\Phi}\mathbf{x}\right\|_2^2 + \lambda\left\|\boldsymbol{\Psi}\mathbf{x}\right\|_1, $$
where $\hat{\mathbf{x}} \in \mathbb{C}^{N}$ denotes the ISAR scene to be imaged, $\lambda$ denotes the regularization coefficient, and $\boldsymbol{\Psi}\mathbf{x}$ denotes the transform coefficients of $\mathbf{x}$ with respect to the sparse transform $\boldsymbol{\Psi} \in \mathbb{C}^{M \times N}$. The sparsity of $\boldsymbol{\Psi}\mathbf{x}$ is constrained by the $\ell_1$ norm [28,29].
The sparse imaging problem in Equation (16) can be solved with ISTA via the following iterative steps:
$$ \begin{aligned} \mathbf{v}^{(k)} &= \mathbf{y} - \boldsymbol{\Phi}\mathbf{x}^{(k)} \\ \mathbf{z}^{(k)} &= \mathbf{x}^{(k)} + \gamma\boldsymbol{\Phi}^{T}\mathbf{v}^{(k)} \\ \mathbf{x}^{(k+1)} &= \arg\min_{\mathbf{x}} \frac{1}{2}\left\|\mathbf{x} - \mathbf{z}^{(k)}\right\|_2^2 + \lambda\left\|\boldsymbol{\Psi}\mathbf{x}\right\|_1. \end{aligned} $$
Here, $k$ denotes the ISTA iteration number, $\mathbf{v}^{(k)}$ denotes the residual measurement error at iteration $k$, and $\gamma$ is the step size. To solve the last step in Equation (17) (the so-called proximal mapping) [30,31], an efficient way is to use soft-thresholding shrinkage, as follows:
$$ \mathbf{x}^{(k+1)} = \eta_{st}\left(\mathbf{x}^{(k)} + \gamma\boldsymbol{\Phi}^{T}\mathbf{v}^{(k)}; \lambda\right), $$
where $\eta_{st}(\cdot)$ denotes the soft-thresholding shrinkage function and $\lambda$ is the shrinkage threshold. The $\eta_{st}(\cdot)$ function is defined element-wise as:
$$ \eta_{st}(r_j; \lambda) = \operatorname{sgn}(r_j)\max\left(|r_j| - \lambda, 0\right). $$
We let $\mathbf{z}^{(k)}$ denote the input of the $\eta_{st}(\cdot)$ function in Equation (18):
$$ \mathbf{z}^{(k)} = \mathbf{x}^{(k)} + \gamma\boldsymbol{\Phi}^{T}\mathbf{v}^{(k)} = \mathbf{x}^{(k)} + \gamma\boldsymbol{\Phi}^{T}\left(\mathbf{y} - \boldsymbol{\Phi}\mathbf{x}^{(k)}\right). $$
Hence, the second part of Equation (17) can be rewritten as:
$$ \mathbf{x}^{(k+1)} = \eta_{st}\left(\mathbf{z}^{(k)}, \lambda\right). $$
By iterating Equations (20) and (21), traditional ISTA can obtain a satisfactory imaging result. However, it requires extensive computation, and its parameters (e.g., the threshold $\lambda$, step size $\gamma$, and sparse transform $\boldsymbol{\Psi}$) need to be carefully pre-defined [32] to obtain satisfactory results, which is not easy to do optimally by hand.
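For reference, the following is a short NumPy sketch of the plain ISTA iterations in Equations (17), (20), and (21). It assumes the sparse transform $\boldsymbol{\Psi}$ is the identity (sparsity directly in the image domain) and uses the conjugate transpose of $\boldsymbol{\Phi}$ for complex-valued data; the parameter values are illustrative, not those tuned in the paper.

```python
import numpy as np

def soft_threshold(r, lam):
    # Complex soft-thresholding: shrink the magnitude, keep the phase.
    return np.exp(1j * np.angle(r)) * np.maximum(np.abs(r) - lam, 0.0)

def ista(y, Phi, lam=0.01, gamma=None, n_iter=200):
    """Plain ISTA for y = Phi x + n, assuming Psi = I (sketch only)."""
    if gamma is None:
        gamma = 1.0 / np.linalg.norm(Phi, 2) ** 2   # non-expansive step size
    x = np.zeros(Phi.shape[1], dtype=complex)
    for _ in range(n_iter):
        v = y - Phi @ x                             # residual, Eq. (17)
        z = x + gamma * (Phi.conj().T @ v)          # gradient step, Eq. (20)
        x = soft_threshold(z, lam)                  # shrinkage, Eq. (21)
    return x

# x_hat = ista(y, Phi)   # e.g., with y and Phi from the previous sketch
```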

3. Proposed CIST-Based Imaging Method

In the proposed CIST-based ISAR sparse imaging method, the iterations of ISTA are strictly mapped to a deep network, as shown in Figure 2. Each iteration corresponds to one phase of the ISTA operation, as illustrated in Figure 3. CIST unfolds the conventional ISTA, and its parameters are set to be learnable, which means that essential parameters (e.g., $\lambda$ and $\gamma$) can reach their optimal values automatically through the iterations. In addition, the linear transform $\boldsymbol{\Psi}$ is substituted by a more general nonlinear transform $\mathcal{T}(\cdot)$, which contains two convolution operations with an LReLU in between, as illustrated in Figure 2. To increase the capacity of the proposed method, the number of convolution filters is set to $N_f$ (32 by default), and the convolution kernel size is set to 3 × 3. Inspired by ResNet [33], a skip connection is also applied (from the start to the end of one phase, as the red line in Figure 3 shows) in order to avoid vanishing gradients.

3.1. Network Model

To map ISTA into a convolutional network, the linear transform $\boldsymbol{\Psi}$ is replaced by a nonlinear transform $\mathcal{T}(\cdot)$, where $\mathcal{T}(\mathbf{x}) = \mathbf{B} \otimes \mathrm{LReLU}(\mathbf{A} \otimes \mathbf{x})$, $\otimes$ denotes the convolution operation, and $\mathbf{A}$ and $\mathbf{B}$ denote the first and second convolutions, respectively. Subsequently, Equation (16) can be rewritten as:
$$ \hat{\mathbf{x}} = \arg\min_{\mathbf{x}} \frac{1}{2}\left\|\mathbf{y} - \boldsymbol{\Phi}\mathbf{x}\right\|_2^2 + \lambda\left\|\mathcal{T}(\mathbf{x})\right\|_1, $$
where $\mathcal{T}(\cdot)$ is shown in Figure 2, framed by the red dotted rectangle. By solving Equation (22) with ISTA and applying the new sparse transform $\mathcal{T}(\cdot)$ to $\mathbf{x}$, the proximal-mapping step in Equation (17) becomes:
$$ \mathbf{x}^{(k)} = \arg\min_{\mathbf{x}} \frac{1}{2}\left\|\mathbf{x} - \mathbf{z}^{(k)}\right\|_2^2 + \lambda\left\|\mathcal{T}(\mathbf{x})\right\|_1. $$
In CIST, Equations (20) and (21) are mapped into a new form. Firstly, the step size $\gamma$ is allowed to vary across iterations, so the first part of CIST is as follows:
$$ \mathbf{z}^{(k)} = \mathbf{x}^{(k-1)} - \gamma^{(k)}\boldsymbol{\Phi}^{T}\left(\boldsymbol{\Phi}\mathbf{x}^{(k-1)} - \mathbf{y}\right). $$
Secondly, $\mathbf{x}^{(k+1)}$ in Equation (21) is computed with the nonlinear transform $\mathcal{T}(\cdot)$, which can be presented in matrix form as:
$$ \mathcal{T}(\mathbf{x}) = \mathbf{B} \otimes \mathrm{LReLU}(\mathbf{A} \otimes \mathbf{x}) = \begin{cases} \mathbf{B}\cdot\mathbf{A}\mathbf{x}, & \mathbf{x} \ge 0 \\ \rho\,\mathbf{B}\cdot\mathbf{A}\mathbf{x}, & \mathbf{x} < 0 \end{cases} $$
where $\rho$ is the LReLU coefficient (set to 0.01 by default) and $\mathbf{A}$ and $\mathbf{B}$ can be any matrices. Then $E\left[\left\|\mathbf{x} - E[\mathbf{x}]\right\|_2^2\right]$ and $E\left[\left\|\mathcal{T}(\mathbf{x}) - E[\mathcal{T}(\mathbf{x})]\right\|_2^2\right]$ are linearly related, i.e., the linear relationship can be expressed as follows:
$$ \left\|\mathcal{T}(\mathbf{x}) - \mathcal{T}\left(\mathbf{z}^{(k)}\right)\right\|_2^2 \approx \alpha\left\|\mathbf{x} - \mathbf{z}^{(k)}\right\|_2^2, $$
where $\alpha$ is a scalar related only to $\mathcal{T}(\cdot)$. By applying the linear relationship in Equation (26) to Equation (23), we obtain:
$$ \mathbf{x}^{(k)} = \arg\min_{\mathbf{x}} \frac{1}{2}\left\|\mathcal{T}(\mathbf{x}) - \mathcal{T}\left(\mathbf{z}^{(k)}\right)\right\|_2^2 + \sigma^{(k)}\left\|\mathcal{T}(\mathbf{x})\right\|_1, $$
where $\sigma = \lambda\alpha$. Similar to Equation (21), the solution process of Equation (23) is as follows:
$$ \mathcal{T}\left(\mathbf{x}^{(k)}\right) = \eta_{st}\left(\mathcal{T}\left(\mathbf{z}^{(k)}\right), \sigma^{(k)}\right). $$
Here, the step size $\gamma^{(k)}$ and the regularization parameter $\sigma^{(k)}$ are treated as variables; they are updated after every iteration, which is more flexible than the traditional fixed setting.
To solve for $\mathbf{x}^{(k)}$ in Equation (28), a left inverse of $\mathcal{T}(\cdot)$ is needed, so we introduce $\widetilde{\mathcal{T}}(\cdot)$ such that $\widetilde{\mathcal{T}} \cdot \mathcal{T} = \mathbf{E}$, where $\mathbf{E}$ is the identity matrix. Applying $\widetilde{\mathcal{T}}(\cdot)$ to Equation (28), we obtain the final expression for $\mathbf{x}^{(k)}$:
$$ \mathbf{x}^{(k)} = \widetilde{\mathcal{T}}\left[\eta_{st}\left(\mathcal{T}\left(\mathbf{z}^{(k)}\right), \sigma^{(k)}\right)\right]. $$
Equations (24) and (29) are illustrated in Figure 2: every step of ISTA is mapped strictly into a phase of CIST, which guarantees the feasibility of CIST, while the learnable parameters and transforms increase its flexibility. A minimal sketch of one such phase is given below.
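The following PyTorch sketch illustrates one CIST phase under our reading of Equations (24) and (29). The real-valued 1-D cross-range layout, the class name, the filter counts, and the placement of the skip connection are assumptions made for illustration, not the authors' released implementation.

```python
import torch
import torch.nn as nn

class CISTPhase(nn.Module):
    """A sketch of one CIST phase (Eqs. (24) and (29)); layout and skip-connection
    placement are assumptions, not the released model."""
    def __init__(self, n_filters=32):
        super().__init__()
        self.gamma = nn.Parameter(torch.tensor(0.002))   # learnable step size
        self.sigma = nn.Parameter(torch.tensor(0.02))    # learnable threshold
        # T(.): conv -> LReLU -> conv; T_inv(.): an approximate left inverse
        self.T = nn.Sequential(
            nn.Conv2d(1, n_filters, 3, padding=1),
            nn.LeakyReLU(0.01),
            nn.Conv2d(n_filters, n_filters, 3, padding=1))
        self.T_inv = nn.Sequential(
            nn.Conv2d(n_filters, n_filters, 3, padding=1),
            nn.LeakyReLU(0.01),
            nn.Conv2d(n_filters, 1, 3, padding=1))

    def forward(self, x, y, Phi):
        # Gradient step, Eq. (24): z = x - gamma * Phi^T (Phi x - y)
        z = x - self.gamma * (Phi.t() @ (Phi @ x - y))
        z_map = z.view(1, 1, 1, -1)                      # vector -> feature map
        t = self.T(z_map)
        t = torch.sign(t) * torch.relu(torch.abs(t) - self.sigma)  # soft threshold
        out = self.T_inv(t).view_as(z)                   # Eq. (29)
        return out + z                                   # skip connection (assumed placement)
```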

3.2. Algorithm Flow

We cascade the structure of Figure 2 to complete the network, as illustrated in Figure 3. The number of cascades $P$ is set to six, which means that every input is processed by the structure in Figure 2 six times. The inputs are echo data down-sampled in the cross-range direction, which have been range compressed and well motion compensated. Note that the input measurements are in the data domain, while the imaging results are in the image domain. Since sparsity in the cross-range direction is more practical for ISAR imaging, in this paper we focus on non-completed data in the cross-range direction only.
For the initial reconstruction $\mathbf{x}_0$ denoted in Figure 3, we use least-squares estimation. Given the label and input pairs $\{\mathbf{x}_i, \mathbf{y}_i\}$, $i = 1, 2, \ldots, N_d$, where $N_d$ is the total number of training samples, the labels and inputs can be arranged as $\mathbf{X} = [\mathbf{x}_1, \mathbf{x}_2, \ldots, \mathbf{x}_{N_d}]$ and $\mathbf{Y} = [\mathbf{y}_1, \mathbf{y}_2, \ldots, \mathbf{y}_{N_d}]$, respectively. The initial reconstruction $\mathbf{x}_0$ can then be determined as follows:
$$ \mathbf{x}_0 = \mathbf{X}\mathbf{Y}^{T}\left(\mathbf{Y}\mathbf{Y}^{T}\right)^{-1}\mathbf{y}, $$
where $\mathbf{y}$ denotes any given input.
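A minimal sketch of this least-squares initialization, assuming the data have already been split into real values as described next in this subsection, is (column layout and function names are illustrative):

```python
import numpy as np

# Sketch of Equation (30): columns of X are training labels, columns of Y the
# corresponding inputs; Q maps any new input y to the initial reconstruction x0.
def init_operator(X, Y):
    return X @ Y.T @ np.linalg.inv(Y @ Y.T)

def initial_reconstruction(Q, y):
    return Q @ y        # x0 = X Y^T (Y Y^T)^(-1) y
```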
ISAR data are generally complex-valued; however, standard CNNs support real numbers only. As a result, the complex data and measurement matrix need to be separated into real and imaginary parts. According to [34], the complex multiplication $\boldsymbol{\beta} = \boldsymbol{\Phi}\boldsymbol{\alpha}$ can be expressed as:
$$ \begin{bmatrix} \Re(\beta_i) \\ \Im(\beta_i) \end{bmatrix} = \begin{bmatrix} \Re(\Phi_{ij}) & -\Im(\Phi_{ij}) \\ \Im(\Phi_{ij}) & \Re(\Phi_{ij}) \end{bmatrix} \cdot \begin{bmatrix} \Re(\alpha_j) \\ \Im(\alpha_j) \end{bmatrix}, $$
where $\boldsymbol{\beta}$ and $\boldsymbol{\alpha}$ are complex-valued vectors, $i$ and $j$ are the row and column indices, $\Re(\cdot)$ denotes the real part, and $\Im(\cdot)$ denotes the imaginary part. To process real ISAR data, we decompose the complex-valued data and measurement matrix before feeding them into the network, and recompose the outputs to generate the imaging results.
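The following NumPy sketch shows Equation (31) in code: a complex matrix-vector product carried out with real arithmetic only, which is the decomposition the network relies on (the function name is illustrative).

```python
import numpy as np

# Sketch of Equation (31): complex multiplication via stacked real/imaginary parts.
def complex_matvec_as_real(Phi, alpha):
    Phi_big = np.block([[Phi.real, -Phi.imag],
                        [Phi.imag,  Phi.real]])
    alpha_big = np.concatenate([alpha.real, alpha.imag])
    beta_big = Phi_big @ alpha_big
    m = Phi.shape[0]
    return beta_big[:m] + 1j * beta_big[m:]      # recompose the complex result

# Consistency check against direct complex multiplication:
# Phi = np.random.randn(4, 6) + 1j * np.random.randn(4, 6)
# a = np.random.randn(6) + 1j * np.random.randn(6)
# assert np.allclose(complex_matvec_as_real(Phi, a), Phi @ a)
```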

3.3. Loss Function

Given the characteristics of the input and output data, we need the relative error of every pixel of the reconstructed result; hence, the loss function for network training is designed as follows:
$$ \mathrm{loss} = \frac{1}{N_d}\sum_{i=1}^{N_d}\frac{\left\|\mathbf{x}_i^{p} - \mathbf{x}_i\right\|_2^2}{\left\|\mathbf{x}_i\right\|_2^2} + \mu\,\frac{1}{N_d}\sum_{i=1}^{N_d}\frac{\left\|\widetilde{\mathcal{T}}\left(\mathcal{T}\left(\mathbf{x}_i^{p}\right)\right) - \mathbf{x}_i\right\|_2^2}{\left\|\mathbf{x}_i\right\|_2^2}, $$
where $N_d$ denotes the total number of training samples, $\mathbf{x}_i^{p}$ denotes the imaging result after $p$ phases, and $\mathbf{x}_i$ denotes the corresponding label. The first term in Equation (32) is the error between the reconstructed signal and the label; the second term is the error between $\widetilde{\mathcal{T}}(\mathcal{T}(\mathbf{x}_i^{p}))$ and the label, which enforces the inverse-transform assumption $\widetilde{\mathcal{T}} \cdot \mathcal{T} = \mathbf{E}$. Besides, $\mu$ is a regularization parameter, set to 0.01 by default. The loss function is optimized using Adaptive Moment Estimation (Adam) [35].
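A hedged PyTorch sketch of this loss is given below; the batch layout and the name x_roundtrip (standing for the inverse-transform term in Equation (32)) are assumptions for illustration.

```python
import torch

def cist_loss(x_pred, x_label, x_roundtrip, mu=0.01):
    """Sketch of Equation (32): normalized reconstruction error plus a term that
    keeps the inverse transform close to identity; layout (batch, H, W) assumed."""
    eps = 1e-12
    denom = torch.sum(x_label ** 2, dim=(-2, -1)) + eps
    rec = torch.sum((x_pred - x_label) ** 2, dim=(-2, -1)) / denom
    inv = torch.sum((x_roundtrip - x_label) ** 2, dim=(-2, -1)) / denom
    return (rec + mu * inv).mean()

# optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)   # Adam, as in Section 4.1
```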

4. Experiments

To validate the performance of the proposed method, we use a large amount of simulated data for training and then test the network with both simulated data and real measured ISAR data. In addition, the results of several conventional CS methods, such as ISTA, AMP, and OMP, are presented for comparison. Several metrics are also introduced to quantitatively compare the CIST-based imaging method with the traditional CS-based methods.

4.1. Simulated Data

To match the size of the real measured ISAR data, the simulation scene size is set to $N_r \times N_a$, where $N_r = 1024$ is the range dimension and $N_a = 2048$ is the cross-range dimension. We generate 20 scenes with random points, i.e., the total number of training samples in the cross-range dimension is $N_d = 20 \times 1024 = 20{,}480$. The other parameters of the simulated radar signal, namely the carrier frequency, bandwidth, pulse width, and pulse repetition frequency, are set to 10 GHz, 600 MHz, 100 μs, and 200 Hz, respectively. The measurement matrix is constructed as $\boldsymbol{\Phi} = \mathbf{D}\mathbf{F}$, where $\mathbf{F} \in \mathbb{C}^{n \times n}$ is the Fourier transform matrix and $\mathbf{D} \in \mathbb{C}^{m \times n}$ is a random down-sampling matrix, i.e., $m = 512$, $n = 2048$ for a 25% down-sampling rate. In addition, Gaussian white noise is added to the echo data (e.g., at a signal-to-noise ratio (SNR) of 20 dB) to simulate different noise environments. All inputs and labels are split into real and imaginary parts before training and recomposed after reconstruction, as described in Section 3.2.
The details of the training process are as follows. During training, the parameters $\sigma$ and $\gamma$ are treated as trainable, initialized to 0.02 and 0.002, respectively. The iteration (phase) number is set to six and the mini-batch size is set to 64. The Adam optimizer with a learning rate of 0.0001 is used for training.
To quantitatively evaluate the performance of the different CS-based ISAR imaging methods, we introduce several quantitative metrics: the normalized mean square error (NMSE), false alarm rate (FA), image entropy (ENT), and target-to-clutter ratio (TCR), where NMSE and FA take the high-resolution result in Figure 4c as reference. Note that the evaluation results are computed after normalization. TCR is defined as follows:
$$ TCR = 20\log_{10}\frac{\left\|\mathbf{S}_t\right\|_2^2}{\left\|\mathbf{S} - \mathbf{S}_t\right\|_2^2}, $$
where $\mathbf{S}$ denotes the whole simulated imaging result and $\mathbf{S}_t$ denotes the target area within it. The target area is defined as the valid area in the labeled imaging result, which can be determined by a threshold. FA is defined as:
$$ FA = \frac{Num\left(\mathbf{S}_t \oplus \widehat{\mathbf{S}}_t\right)}{Num(\mathbf{S})} \times 100\%, $$
where the function $Num(\cdot)$ returns the number of elements of its input, $\widehat{\mathbf{S}}_t$ denotes the target area in the reconstructed imaging result, and $\oplus$ is the exclusive-OR operation. Note that the reference target area $\mathbf{S}_t$ is determined from the high-resolution ISAR imaging result (referred to as the label), as shown in Figure 4c. In addition, the computational times were collected on a platform with an Intel Core i7-7700K @ 4.20 GHz and an Nvidia 1080 Ti. A sketch of these metrics is given below.
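The following NumPy sketch computes these metrics on normalized magnitude images. The threshold used to define the target area and the particular image-entropy definition are assumptions, since the paper does not specify them.

```python
import numpy as np

# Sketch of the Section 4.1 metrics; threshold and entropy definition are assumed.
def evaluate(img, ref, thresh=0.1):
    img = np.abs(img) / np.abs(img).max()
    ref = np.abs(ref) / np.abs(ref).max()
    nmse = np.sum((img - ref) ** 2) / np.sum(ref ** 2)

    target = ref > thresh                                  # target area from the label
    tcr = 20 * np.log10(np.sum(img[target] ** 2) /
                        np.sum(img[~target] ** 2))         # Eq. (33)

    p = img ** 2 / np.sum(img ** 2)
    ent = -np.sum(p * np.log(p + 1e-12))                   # image entropy

    fa = np.count_nonzero((img > thresh) ^ target) / img.size * 100   # Eq. (34), XOR
    return nmse, tcr, ent, fa
```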
We use a model of an F35 plane as simulated data; the full-data echo and the high-resolution ISAR image are given in Figure 4. In addition, we validate the performance of CIST under different down-sampling ratios and SNRs. In this experiment, simulated echo data with down-sampling rates of 40%, 20%, and 12.5% are considered, and the SNR of each echo dataset is set to 20 dB and 0 dB, respectively. Figure 5 and Figure 6 give the imaging results of the four methods under the higher and lower SNR, respectively, and Table 1 and Table 2 present the corresponding quantitative results. In Figure 5 and Figure 6, the first column gives the echo data at the different random down-sampling rates; the second, third, fourth, and fifth columns give the ISAR imaging results of ISTA, AMP, OMP, and CIST, respectively; and the first, second, and third rows present random down-sampling rates of 40%, 20%, and 12.5%, respectively. As shown in Figure 5, compared with the traditional methods, the proposed CIST can obtain high-quality ISAR imaging results with a cleaner background. In addition, as the echo ratio decreases, the imaging results of the traditional ISTA become worse, while the results of CIST remain satisfactory. Furthermore, Table 1 gives the quantitative evaluation of these algorithms. Among the four methods, the proposed CIST-based ISAR imaging method obtains the lowest NMSE, highest TCR, lowest entropy, and lowest FA in most cases, except at the 20% down-sampling ratio, where OMP obtains a slightly lower entropy than CIST.
Furthermore, Figure 6 and Table 2 give the results under a stricter condition, in which the SNR is only 0 dB. From the imaging results, AMP, OMP, and CIST achieve satisfactory results, whereas traditional ISTA suffers from high side lobes. However, there are many "ghosts" in the results of AMP and OMP; CIST produces the best-focused image and the cleanest background. As demonstrated in Table 2, CIST achieves the lowest NMSE, ENT, and FA and the highest TCR in most cases, which indicates that CIST is more robust than the other three algorithms. In addition, while the other algorithms need tens or even hundreds of seconds for ISAR imaging, CIST takes less than one second to achieve satisfactory results.
Under different down-sampling ratios and SNRs, among the four CS-based imaging methods, the proposed CIST-based ISAR imaging method is capable of obtaining the highest-quality imaging results in less than one second for a data size of 1024 × 2048, while the other traditional algorithms generally need tens or even hundreds of seconds. This confirms its robustness and efficiency.

4.2. Measured Data

To test the network's performance realistically, we use two groups of real measured ISAR data of a plane (Yak-42) as test data (named data I and data II). The Yak-42 data were collected by a ground-based radar operating at C-band with a bandwidth of 400 MHz. Each full dataset consists of 2048 pulses in cross-range, and each pulse contains 1024 samples. Note that the echo data have been range compressed and well motion compensated. The high-resolution ISAR imaging results obtained by the RD algorithm with full data are presented in Figure 7. The range-compressed data are fed into CIST after being randomly down-sampled to ratios of 40%, 20%, and 12.5%.
Figure 8 gives the imaging results of data I for the four algorithms at different down-sampling ratios. The first column shows the input echo data at each down-sampling ratio; the other four columns present the imaging results. It can be seen that, as the sampling ratio decreases, the imaging quality of ISTA, AMP, and OMP clearly degrades. Specifically, ISTA and AMP lose the weakly reflective parts of the target, and OMP has the highest side lobes. In contrast, the results of CIST maintain a relatively complete target as well as a clean background. When the sampling ratio is as low as 12.5%, the results of ISTA and AMP are almost unusable, while CIST can still achieve a satisfactory imaging result, which demonstrates the robustness of CIST.
Since the ground truth of the imaging target, which is required to compute NMSE and FA, is unavailable, we use only TCR and ENT as the quantitative criteria for the different methods. From the evaluation of data I in Table 3, CIST achieves the highest TCR and lowest ENT at sampling ratios of 40% and 20%. In the special case where the ratio is 12.5%, the results of ISTA and AMP have the highest TCR and lowest ENT, but their imaging results are missing parts of the target, i.e., the wings and fuselage contain only the stronger points and lose some weak ones (around cross-range 1010–1040 and range 350–500), which leads to superficially good evaluation results. Setting these misleading cases aside, CIST still evaluates better than OMP. In addition, while the conventional methods generally take tens to hundreds of seconds for the imaging process, CIST takes less than one second (tens of times faster). Therefore, the measured experiments also demonstrate the robustness and high computational efficiency of CIST.
Figure 9 gives the imaging results of data II. The images obtained by ISTA, AMP, and OMP become defocused as the sampling ratio decreases, while CIST maintains fine imaging quality and a clean background under all conditions, which implies the superior performance of the proposed CIST imaging method. Furthermore, Table 4 gives the numerical evaluation for data II. It shows that CIST reaches the lowest ENT and highest TCR at every down-sampling ratio. Most importantly, CIST takes around 0.9 s for target imaging, which is much faster than the other algorithms, which take over 25 s at best and up to about two minutes. The better imaging quality and shorter computational time indicate the superior performance and high efficiency of the proposed CIST-based ISAR imaging method.

5. Discussion

5.1. Effectiveness of Convolution Layer

A suitable sparse transform is one of the key issues in CS problems, and the convolution layers in CIST play this essential role. Candes and Tao have proven that the Restricted Isometry Property (RIP) is a sufficient condition for perfect reconstruction [5]. For a given measurement matrix $\boldsymbol{\Phi}$ and a constant $\delta_k \in (0, 1)$, it should obey:
$$ (1 - \delta_k)\left\|\mathbf{x}\right\|_2^2 \le \left\|\boldsymbol{\Phi}\mathbf{x}\right\|_2^2 \le (1 + \delta_k)\left\|\mathbf{x}\right\|_2^2 $$
for all $k$-sparse signals $\mathbf{x}$. However, verifying whether a measurement matrix $\boldsymbol{\Phi}$ satisfies the RIP condition is NP-hard. Hence, the coherence $\mu(\boldsymbol{\Phi})$ is a more common criterion, defined as follows:
$$ \mu(\boldsymbol{\Phi}) = \max_{1 \le i \ne j \le N}\frac{\left|\left\langle \boldsymbol{\chi}_i, \boldsymbol{\chi}_j \right\rangle\right|}{\left\|\boldsymbol{\chi}_i\right\|_2\left\|\boldsymbol{\chi}_j\right\|_2}, $$
where $\boldsymbol{\chi}_i$ denotes the $i$th column of $\boldsymbol{\Phi}$. Conventional CS imaging methods generally use the Fourier transform, discrete cosine transform (DCT), wavelet transform [36], etc., as the sparse transform. The signal must be sparse enough in the chosen transform to be accurately reconstructed [37,38]. To be specific, the sparsity $K$ of a signal to be accurately reconstructed under $\ell_1$-regularization should satisfy:
$$ K < \frac{1 + \mu(\boldsymbol{\Phi})}{4\mu(\boldsymbol{\Phi})}, $$
where $\mu(\boldsymbol{\Phi})$ denotes the coherence of the measurement matrix $\boldsymbol{\Phi}$. A fixed sparse transform is therefore based on prior information and is not equally suitable for different types of data.
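The coherence in Equation (36) and the resulting sparsity bound of Equation (37) can be checked numerically; the following NumPy sketch is illustrative and assumes a generic complex or real matrix Phi.

```python
import numpy as np

# Sketch of Equation (36): mutual coherence of a measurement matrix Phi, i.e. the
# maximum normalized inner product between distinct columns.
def mutual_coherence(Phi):
    cols = Phi / np.linalg.norm(Phi, axis=0, keepdims=True)
    gram = np.abs(cols.conj().T @ cols)
    np.fill_diagonal(gram, 0.0)                  # exclude the i = j terms
    return gram.max()

# mu = mutual_coherence(Phi)
# K_max = (1 + mu) / (4 * mu)                    # Eq. (37) bound on the sparsity K
```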
One of the advantages of CIST is its convolution-based sparse transform, a crucial improvement over conventional ISTA: it is self-adaptive and learned from the characteristics of the data. To validate the effectiveness of the convolution layers, we compare CIST with the learned ISTA (LISTA) network [39] proposed by Gregor and LeCun, based on which we construct a simplified version of CIST so that the only difference between the two is the presence of the convolution layers. We train CIST and LISTA under the same conditions, where the step size, regularization parameter, iteration number, and learning rate are initialized as $\gamma_0 = 0.002$, $\sigma_0 = 0.02$, 6, and 0.0001, respectively. The training data are simulated echo signals at a down-sampling rate of 20%.
Figure 10 shows the NMSE over the training epochs. CIST has a lower NMSE throughout the training process. Especially at the beginning of training, CIST reaches an NMSE roughly one tenth of that of LISTA, and at the end of training the NMSE of CIST is still, on average, about one tenth of LISTA's. Besides, CIST converges faster: its NMSE reaches its lowest point after about 20 epochs, whereas LISTA needs around 30 epochs. In addition, Figure 11 shows the Yak-42 imaging results of CIST and LISTA. LISTA loses most of the target, while CIST retains fine imaging quality. As a result, we believe that the lower NMSE during training and the better imaging results of CIST demonstrate the effectiveness of the convolution-based sparse transform in CIST.

5.2. Prospect of Network-Based ISAR Sparse Imaging Methods

ISAR plays a crucial role in the detection and recognition of moving targets, but non-cooperative targets can be lost during observation. Accordingly, CS-based ISAR sparse imaging methods are meaningful. There are two main obstacles for conventional CS imaging methods: low computational efficiency and manually defined parameters. The heavy computational cost limits real-time applications of ISAR CS imaging to a large extent, and the essential parameters, which greatly affect imaging quality, need to be defined carefully, usually requiring several rounds of trial and error. Network-based ISAR imaging methods are highly promising for overcoming these limitations. Firstly, they generally have higher computational efficiency once they are well trained. For instance, CIST can obtain imaging results of fine quality in far less time than conventional CS imaging methods, which can meet the demand for real-time processing. Secondly, the parameters and the sparse transform are learnable, which means they can reach their optimal values through training. To obtain a fine imaging result with conventional ISTA, we had to tune its parameters several times, and every attempt took tens of seconds. In addition, as discussed in Section 5.1, the convolution-based sparse transform alone makes a great difference under the same conditions.
In a nutshell, network-based ISAR sparse imaging methods are more computationally efficient and more flexible for moving-target imaging.

6. Conclusions and Future Work

In this paper, we proposed a CIST-based ISAR imaging method. Because CIST combines the advantages of the convolution neural network and the traditional ISTA, it can learn the essential parameters automatically, end-to-end. Besides, CIST replaces the linear sparse transform with nonlinear convolution operations, which makes it more flexible and better suited to non-cooperative-target ISAR imaging with under-sampled or incomplete data. Furthermore, it takes CIST less than one second to image an ISAR scene of size 1024 × 2048, which is dozens of times faster than the three conventional algorithms compared. Experimental results based on both simulated and measured data indicate that, compared with state-of-the-art traditional CS-based methods, the proposed method obtains results of sound quality while maintaining high computational efficiency. In addition, considering that AMP is an improved version of ISTA (with faster convergence and better reconstruction) and that CIST has shown its advantages over the three conventional algorithms evaluated (ISTA, AMP, and OMP), developing a convolution-based version of AMP will be our future work.

Author Contributions

S.W. and J.L. proposed the ISAR imaging method; J.L. and M.W. performed the experiments; X.Z. (Xiangfeng Zeng), J.S. and X.Z. (Xiaoling Zhang) revised the manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the National Key R&D Program of China under Grant (2017-YFB0502700), the National Natural Science Foundation of China (61671113,61501098) and the High-Resolution Earth Observation Youth Foundation (GFZX04061502).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Chen, C.; Andrews, H.C. Target-Motion-Induced Radar Imaging. IEEE Trans. Aerosp. Electron. Syst. 1980, AES-16, 2–14. [Google Scholar] [CrossRef]
  2. Chen, V.C. Inverse Synthetic Aperture Radar Imaging: Principles, Algorithms and Applications; Institution of Engineering and Technology: Stevenage, UK, 2014. [Google Scholar]
  3. Chen, V.; Martorella, M. Inverse Synthetic Aperture Radar; Scitech Publishing: Raleigh, NC, USA, 2014. [Google Scholar]
  4. Xu, G.; Xing, M.; Zhang, L.; Duan, J.; Chen, Q.; Bao, Z. Sparse Apertures ISAR Imaging and Scaling for Maneuvering Targets. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2014, 7, 2942–2956. [Google Scholar] [CrossRef]
  5. Candes, E.J.; Tao, T. Decoding by linear programming. IEEE Trans. Inf. Theory 2005, 51, 4203–4215. [Google Scholar] [CrossRef] [Green Version]
  6. Ender, J.H.G. On Compressive Sensing Applied to Radar. Signal Process. 2010, 90, 1402–1414. [Google Scholar] [CrossRef]
  7. Wang, L.; Loffeld, O.; Ma, K.; Qian, Y. Sparse ISAR imaging using a greedy Kalman filtering approach. Signal Process. 2017, 138, 1–10. [Google Scholar] [CrossRef]
  8. Hu, C.; Wang, L.; Li, Z.; Zhu, D. Inverse Synthetic Aperture Radar Imaging Using a Fully Convolutional Neural Network. IEEE Geosci. Remote Sens. Lett. 2019, 17, 1–5. [Google Scholar] [CrossRef]
  9. Hu, C.; Li, Z.; Wang, L.; Guo, J.; Loffeld, O. Inverse Synthetic Aperture Radar Imaging Using a Deep ADMM Network. In Proceedings of the 20th International Radar Symposium (IRS), Ulm, Germany, 26–28 June 2019; pp. 1–9. [Google Scholar] [CrossRef]
  10. Zhang, S.; Liu, Y.; Li, X. Fast Sparse Aperture ISAR Autofocusing and imaging via ADMM based Sparse Bayesian Learning. IEEE Trans. Image Process. 2019, 29, 3213–3226. [Google Scholar] [CrossRef]
  11. Zhang, L.; Xing, M.; Qiu, C.; Li, J.; Bao, Z. Achieving Higher Resolution ISAR Imaging With Limited Pulses via Compressed Sampling. IEEE Geosci. Remote Sens. Lett. 2009, 6, 567–571. [Google Scholar] [CrossRef]
  12. Liu, H.; Jiu, B.; Liu, H.; Bao, Z. Superresolution ISAR Imaging Based on Sparse Bayesian Learning. IEEE Trans. Geosci. Remote Sens. 2014, 52, 5005–5013. [Google Scholar]
  13. Zhang, X.; Bai, T.; Meng, H.; Chen, J. Compressive Sensing-Based ISAR Imaging via the Combination of the Sparsity and Nonlocal Total Variation. IEEE Geosci. Remote Sens. Lett. 2014, 11, 990–994. [Google Scholar] [CrossRef]
  14. Boyd, S.; Parikh, N.; Chu, E.; Peleato, B.; Eckstein, J. Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers; Now Publishers Inc.: Delft, The Netherlands, 2011. [Google Scholar]
  15. Mousavi, A.; Baraniuk, R.G. Learning to invert: Signal recovery via Deep Convolutional Networks. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), New Orleans, LA, USA, 5–9 March 2017; pp. 2272–2276. [Google Scholar]
  16. Mousavi, A.; Patel, A.B.; Baraniuk, R.G. A deep learning approach to structured signal recovery. In Proceedings of the 53rd Annual Allerton Conference on Communication, Control, and Computing (Allerton), Monticello, IL, USA, 29 September–2 October 2015; pp. 1336–1343. [Google Scholar]
  17. Wang, M.; Wei, S.; Shi, J.; Wu, Y.; Qu, Q.; Zhou, Y.; Zeng, X.; Tian, B. CSR-Net: A Novel Complex-valued Network for Fast and Precise 3-D Microwave Sparse Reconstruction. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2020. [Google Scholar] [CrossRef]
  18. Rolfs, B.; Rajaratnam, B.; Guillot, D.; Wong, I.; Maleki, A. Iterative thresholding algorithm for sparse inverse covariance estimation. In Advances in Neural Information Processing Systems; Neural Information Processing Systems Foundation, Inc.: La Jolla, CA, USA, 2012; pp. 1574–1582. [Google Scholar]
  19. Donoho, D.L.; Maleki, A.; Montanari, A. Message-passing algorithms for compressed sensing. Proc. Natl. Acad. Sci. USA 2009, 106, 18914–18919. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  20. Cai, T.T.; Wang, L. Orthogonal Matching Pursuit for Sparse Signal Recovery with Noise; Institute of Electrical and Electronics Engineers: Piscataway, NJ, USA, 2011. [Google Scholar]
  21. Li, G.; Zhang, H.; Wang, X.; Xia, X. ISAR 2-D Imaging of Uniformly Rotating Targets via Matching Pursuit. IEEE Trans. Aerosp. Electron. Syst. 2012, 48, 1838–1846. [Google Scholar] [CrossRef]
  22. Zhang, Z.; Rao, B.D. Sparse Signal Recovery With Temporally Correlated Source Vectors Using Sparse Bayesian Learning. IEEE J. Sel. Top. Signal Process. 2011, 5, 912–926. [Google Scholar] [CrossRef] [Green Version]
  23. Xu, G.; Xing, M.; Xia, X.; Chen, Q.; Zhang, L.; Bao, Z. High-Resolution Inverse Synthetic Aperture Radar Imaging and Scaling With Sparse Aperture. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2015, 8, 4010–4027. [Google Scholar] [CrossRef]
  24. Tian, B.; Zhang, X.; Wei, S.; Ming, J.; Shi, J.; Li, L.; Tang, X. A Fast Sparse Recovery Algorithm via Resolution Approximation for LASAR 3D Imaging. IEEE Access 2019, 7, 178710–178725. [Google Scholar] [CrossRef]
  25. Zhang, J.; Ghanem, B. ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 1828–1837. [Google Scholar]
  26. Berizzi, F.; Martorella, M.; Haywood, B.; Dalle Mese, E.; Bruscoli, S. A survey on ISAR autofocusing techniques. In Proceedings of the International Conference on Image Processing, ICIP ’04, Singapore, 24–27 October 2004; Volume 1, pp. 9–12. [Google Scholar]
  27. Qiang, W.; Niu, W.; Du, K.; Wang, X.-D.; Yang, Y.-A.; Du, W.-B. ISAR autofocus based on image entropy optimization algorithm. In Proceedings of the IEEE Advanced Information Technology, Electronic and Automation Control Conference (IAEAC), Chongqing, China, 19–20 December 2015; pp. 1128–1131. [Google Scholar]
  28. Donoho, D.L. Compressed sensing. IEEE Trans. Inf. Theory 2006, 52, 1289–1306. [Google Scholar] [CrossRef]
  29. Candes, E.J.; Romberg, J.; Tao, T. Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information. IEEE Trans. Inf. Theory 2006, 52, 489–509. [Google Scholar] [CrossRef] [Green Version]
  30. Wright, S.J.; Nowak, R.D.; Figueiredo, M.A.T. Sparse reconstruction by separable approximation. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, Las Vegas, NV, USA, 31 March–4 April 2008; pp. 3373–3376. [Google Scholar]
  31. Zhang, J.; Zhao, D.; Jiang, F.; Gao, W. Structural Group Sparse Representation for Image Compressive Sensing Recovery. In Proceedings of the 2013 Data Compression Conference, Snowbird, UT, USA, 20–22 March 2013; pp. 331–340. [Google Scholar]
  32. Chambolle, A.; De Vore, R.A.; Lee, N.-Y.; Lucier, B.J. Nonlinear wavelet image processing: Variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 1998, 7, 319–335. [Google Scholar] [CrossRef] [Green Version]
  33. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
  34. Trabelsi, C.; Bilaniuk, O.; Zhang, Y.; Serdyuk, D.; Subramanian, S.; Santos, J.F.; Mehri, S.; Rostamzadeh, N.; Bengio, Y.; Pal, C. Deep Complex Networks. Neural and Evolutionary Computing. arXiv 2017, arXiv:1705.09792. [Google Scholar]
  35. Kingma, D.; Ba, J. Adam: A Method for Stochastic Optimization. arXiv 2014, arXiv:1412.6980. [Google Scholar]
  36. Mun, S.; Fowler, J.E. Block Compressed Sensing of Images Using Directional Transforms. In Proceedings of the Data Compression Conference, Cairo, Egypt, 7–10 November 2010; p. 547. [Google Scholar]
  37. Candes, E.J.; Romberg, J.; Tao, T. Stable signal recovery from incomplete and inaccurate measurements. Commun. Pure Appl. Math. 2006, 59, 1207–1223. [Google Scholar] [CrossRef] [Green Version]
  38. Candes, E.J.; Tao, T. Near-Optimal Signal Recovery From Random Projections: Universal Encoding Strategies? IEEE Trans. Inf. Theory 2006, 52, 5406–5425. [Google Scholar] [CrossRef] [Green Version]
  39. Gregor, K.; LeCun, Y. Learning Fast Approximations of Sparse Coding. In Proceedings of the 27th International Conference on International Conference on Machine Learning; Omnipress: Madison, WI, USA, 2010; pp. 399–406. [Google Scholar]
Figure 1. ISAR imaging model.
Figure 2. Convolution iterative shrinkage-thresholding (CIST) network in the Kth phase.
Figure 3. CIST network for inverse synthetic aperture radar (ISAR) imaging.
Figure 4. (a) Simulated F35 plane model, (b) full echo data, and (c) ideal ISAR image.
Figure 5. ISAR imaging results (SNR = 20 dB) of different methods at ratios of 40%, 20%, and 12.5%, respectively.
Figure 6. ISAR imaging results (SNR = 0 dB) of different methods at ratios of 40%, 20%, and 12.5%, respectively.
Figure 7. High-resolution RD imaging results with full echo of data (a) I and (b) II.
Figure 8. ISAR imaging results of data I of different methods at ratios of 40%, 20%, and 12.5%, respectively.
Figure 9. ISAR imaging results of data II of different methods at ratios of 40%, 20%, and 12.5%, respectively.
Figure 10. Convergence of CIST and LISTA over epochs (a) 0 to 80 and (b) 30 to 80.
Figure 11. Imaging results of (a) CIST and (b) LISTA.
Table 1. Evaluation of Simulated Experiments (SNR = 20 dB).

| Ratio | Method | NMSE | TCR (dB) | ENT | FA (%) | Time (s) |
|---|---|---|---|---|---|---|
| 40% | ISTA | 0.7426 | 20.3126 | 1.0056 | 7.9273 | 44.5487 |
| | AMP | 0.6599 | 45.8388 | 0.3912 | 8.2295 | 71.6221 |
| | OMP | 0.6977 | 39.3184 | 0.4570 | 3.8948 | 468.1091 |
| | CIST | 0.6313 | 50.8584 | 0.3834 | 2.8658 | 0.9031 |
| 20% | ISTA | 1.5125 | 7.4179 | 1.4951 | 13.8876 | 20.9817 |
| | AMP | 1.8814 | −1.8350 | 3.0402 | 37.5987 | 33.1211 |
| | OMP | 0.7171 | 20.5169 | 0.5231 | 4.4982 | 230.7727 |
| | CIST | 0.6730 | 33.5822 | 0.6981 | 2.0802 | 0.8641 |
| 12.5% | ISTA | 5.6054 | −10.9690 | 3.4825 | 40.2085 | 12.7870 |
| | AMP | 0.7158 | 14.4029 | 1.4040 | 10.0269 | 18.2701 |
| | OMP | 0.6977 | 37.3184 | 0.4570 | 3.8948 | 179.2718 |
| | CIST | 0.6596 | 38.7568 | 0.3838 | 1.0618 | 0.8732 |
Table 2. Evaluation of Simulated Experiments (SNR = 0 dB).

| Ratio | Method | NMSE | TCR (dB) | ENT | FA (%) | Time (s) |
|---|---|---|---|---|---|---|
| 40% | ISTA | 4.8811 | −5.4672 | 4.7616 | 79.4230 | 31.3632 |
| | AMP | 2.9544 | −3.7446 | 3.3459 | 41.8848 | 70.0527 |
| | OMP | 0.6778 | 9.4313 | 0.5125 | 4.4954 | 435.4453 |
| | CIST | 0.6116 | 26.7183 | 0.5187 | 3.8697 | 0.9025 |
| 20% | ISTA | 5.6621 | −12.2639 | 4.9115 | 75.0532 | 15.7180 |
| | AMP | 1.1017 | 1.9251 | 1.7633 | 17.7155 | 34.2185 |
| | OMP | 0.7073 | 2.4763 | 0.8527 | 5.5791 | 229.8903 |
| | CIST | 0.6844 | 23.7948 | 1.0959 | 4.8680 | 0.8723 |
| 12.5% | ISTA | 7.1987 | −16.9600 | 4.9618 | 65.9731 | 9.7665 |
| | AMP | 0.8076 | 23.0803 | 1.6008 | 15.0631 | 18.7310 |
| | OMP | 0.7447 | 20.5169 | 0.5231 | 4.4982 | 183.8871 |
| | CIST | 0.7443 | 24.3366 | 0.5012 | 1.3129 | 0.8852 |
Table 3. Evaluation for Measured Experiments of data I.

| Ratio | Method | TCR (dB) | ENT | Time (s) |
|---|---|---|---|---|
| 40% | ISTA | 22.2390 | 0.0383 | 28.1188 |
| | AMP | 18.0834 | 0.0749 | 41.7755 |
| | OMP | 8.8795 | 0.4223 | 423.0314 |
| | CIST | 22.3466 | 0.0222 | 0.8882 |
| 20% | ISTA | 23.2729 | 0.0314 | 14.2475 |
| | AMP | 23.1530 | 0.0229 | 44.3105 |
| | OMP | 10.4075 | 0.4142 | 206.7307 |
| | CIST | 23.4010 | 0.0206 | 0.8876 |
| 12.5% | ISTA | 23.3454 | 0.0191 | 9.0207 |
| | AMP | 29.5635 | 0.0074 | 27.7906 |
| | OMP | 11.1394 | 0.2510 | 44.4780 |
| | CIST | 22.5672 | 0.2418 | 0.8742 |
Table 4. Evaluation for Measured Experiments of data II.

| Ratio | Method | TCR (dB) | ENT | Time (s) |
|---|---|---|---|---|
| 40% | ISTA | 11.1156 | 0.3193 | 36.8382 |
| | AMP | 12.6834 | 0.5329 | 43.9997 |
| | OMP | 17.6741 | 0.2568 | 122.7069 |
| | CIST | 18.8741 | 0.2015 | 0.8863 |
| 20% | ISTA | 11.2704 | 0.2417 | 23.4636 |
| | AMP | 12.3927 | 0.4850 | 32.6397 |
| | OMP | 17.3148 | 0.2602 | 79.3270 |
| | CIST | 21.3082 | 0.1222 | 0.8754 |
| 12.5% | ISTA | 11.1156 | 0.3193 | 36.8382 |
| | AMP | 11.9356 | 0.3537 | 28.8230 |
| | OMP | 17.6741 | 0.2568 | 122.7069 |
| | CIST | 18.9328 | 0.2473 | 0.8692 |
