ISAR Resolution Enhancement Method Exploiting Generative Adversarial Network

Abstract: Deep learning has been used in inverse synthetic aperture radar (ISAR) imaging to improve resolution, but several problems remain: the loss of weak scattering points, over-smoothed imaging results, and limited universality and generalization. To address these problems, an ISAR resolution enhancement method exploiting a generative adversarial network (GAN) is proposed in this paper. We adopt a relativistic average discriminator (RaD) to enhance the ability of the network to describe target details. The proposed loss function is composed of feature loss, adversarial loss, and absolute loss: the feature loss preserves the main characteristics of the target, the adversarial loss ensures that the proposed GAN recovers more target details, and the absolute loss keeps the imaging results from being over-smoothed. Experiments based on simulated and measured data under different conditions demonstrate that the proposed method has good imaging performance. In addition, the universality and generalization of the proposed GAN are also well verified.


Introduction
With its rapid development, inverse synthetic aperture radar (ISAR) technology can acquire high-resolution radar images of non-cooperative targets under all-day and all-weather conditions, and such images are widely used in military and civil fields [1]. The high range resolution of ISAR comes from the wide bandwidth of the transmitted signal, while the high azimuth resolution is determined by the virtual aperture synthesized by the relative motion between the radar and the target. ISAR images contain a wealth of target feature information, which is vital for target recognition [2]. However, it is not easy to obtain satisfactory ISAR images in practice: because of the non-cooperation of targets and noise interference, actual ISAR images are often blurred and their resolution is limited. In addition, the radar echo of the target is often incomplete, which further degrades image quality. Therefore, it is important to find a method to improve the resolution of ISAR images.
Since non-cooperative targets can always be regarded as a combination of scattering points, sparse reconstruction algorithms based on compressive sensing (CS) are used to handle the imaging problem and have attracted the attention of many scholars in recent years [3][4][5]. The echo received by ISAR is physically sparse, so the CS method is well suited to ISAR imaging with sparse aperture (SA). Many well-known CS algorithms have been applied to ISAR imaging, such as the orthogonal matching pursuit (OMP) algorithm [6], the smoothed l0-norm (SL0) algorithm [7], and fast iterative thresholding algorithms.

To tackle the above problems, this paper proposes a GAN-based ISAR resolution enhancement method to obtain better ISAR images. The key novelties are as follows: (1) Inspired by [16] and [20], we adopt the GAN as our basic deep neural network structure. Compared with other networks, the GAN has a more powerful ability to describe target details. We adopt the relativistic average discriminator (RaD) to improve the resolution of the ISAR image, and the Residual-in-Residual Dense Block (RRDB) is used in the generator network. (2) The loss function of the proposed GAN is composed of feature loss, adversarial loss, and absolute loss: the feature loss maintains the main characteristics of ISAR images, the adversarial loss recovers the detailed features of weak scattering points, and the absolute loss keeps the ISAR images from being over-smoothed. The proposed loss function achieves superior reconstruction with resolution enhancement. (3) We train the proposed GAN only under the condition of no noise and full aperture; the trained network is then used to reconstruct ISAR images under low SNRs and SA, respectively. Simulated and measured data under different parameter conditions are used to verify the effectiveness and universality of the proposed GAN, and the results show better-focused performance.
The rest of this article is organized as follows. In Section 2, the ISAR imaging model is constructed. Section 3 introduces the proposed GAN in detail and gives the network loss function. Section 4 describes the details of data acquisition and testing strategy. In Section 5, various experiments are carried out to evaluate the performance of the proposed GAN. Section 6 draws a conclusion.

ISAR Imaging Model
After compensation of the target's translational motion, the model can be simplified to a classic turntable model. When the target is in uniform motion and the coherent processing interval (CPI) is short, the target motion can be treated as equivalent to uniform rotation. Here, we take a monostatic radar as an example. Taking the origin as the phase center, a point scatterer P(x_0, y_0) is situated on the target, as shown in Figure 1. The radar echo from the point scatterer can be expressed as:

E_s(k, φ) = A_0 exp(−j 2 \vec{k} \cdot \vec{r}_0)    (1)

where φ is the azimuth angle, A_0 is the scattering coefficient of P(x_0, y_0), \vec{k} is the vector wave number in the propagation direction, and \vec{r}_0 stands for the vector from the origin to the point scatterer P(x_0, y_0).
As for \vec{k}, it can be represented by the wave numbers in the x and y directions in Equation (2):

\vec{k} = k \hat{k} = k_x \hat{x} + k_y \hat{y}    (2)

where \hat{k}, \hat{x}, and \hat{y} stand for the unit vectors in the k, x, and y directions, respectively. So, the \vec{k} \cdot \vec{r}_0 term in Equation (1) can be reorganized to obtain:

\vec{k} \cdot \vec{r}_0 = k_x x_0 + k_y y_0    (3)

Based on this imaging model, the training of the proposed network can be formulated as the optimization problem in Equation (9):

\hat{\theta}_G = \arg\min_{\theta_G} L( G_{\theta_G}(I^{LR}_i), I^{HR}_i )    (9)

where G represents the generator network, L is the loss function of G, and \theta_G stands for the set of network parameters of G. To achieve this goal, the training procedure continues until the generator network G succeeds in fooling the discriminator network D. In this situation, the resolution of I^{SR}_i is improved, and G is the trained generator network that we require.
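As a toy illustration of the turntable model in Equations (1)-(3) (not the paper's simulation code), the sketch below sums the echoes of a few hypothetical point scatterers over a wavenumber-angle grid and forms a low-resolution image with a 2-D IFFT; all scatterer positions, grid sizes, and angle spans are illustrative assumptions:

```python
import numpy as np

c = 3e8
fc, B = 10e9, 400e6                      # carrier frequency and bandwidth
Nk, Nphi = 64, 64                        # samples in wavenumber k and azimuth angle phi
k = 2 * np.pi * (fc + np.linspace(-B / 2, B / 2, Nk)) / c
phi = np.deg2rad(np.linspace(-1.5, 1.5, Nphi))

# Hypothetical scatterers (x0, y0, A0); positions/amplitudes are illustrative.
scatterers = [(0.0, 0.0, 1.0), (3.0, -2.0, 0.6), (-4.0, 5.0, 0.3)]

K, PHI = np.meshgrid(k, phi, indexing="ij")
echo = np.zeros((Nk, Nphi), dtype=complex)
for x0, y0, A0 in scatterers:
    # Equation (1) with the phase term of Equation (3):
    # k_vec . r0_vec = k * (x0*cos(phi) + y0*sin(phi))
    echo += A0 * np.exp(-1j * 2 * K * (x0 * np.cos(PHI) + y0 * np.sin(PHI)))

# An LR ISAR image is obtained by a 2-D IFFT of the echo.
lr_image = np.fft.fftshift(np.abs(np.fft.ifft2(echo)))
print(lr_image.shape)
```

The point targets appear as peaks spread by sinc-like sidelobes, which is exactly the limited-resolution behavior the proposed GAN is trained to sharpen.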
To solve the optimization problem in (9), the standard discriminator D_st is adopted in [16], which is common for the GAN, as shown in Equation (10):

\min_{\theta_G} \max_{\theta_D} \; E_{I^{HR} \sim p_{train}(I^{HR})} [ \log D_{st}(I^{HR}) ] + E_{I^{LR} \sim p_G(I^{LR})} [ \log( 1 − D_{st}( G(I^{LR}) ) ) ]    (10)

where E[·] represents taking the average value, p_train(I^{HR}) is the distribution of HR ISAR images, and p_G(I^{LR}) is the distribution of LR ISAR images. According to this criterion, an adversarial loss is introduced into L_G, which improves the ability of G to recover weak scattering points correctly. Different from [16], we replace the standard discriminator with the relativistic average discriminator (RaD) [22], denoted D_Ra, to provide more target details. In general, the standard discriminator D_st describes whether the super-resolution (SR) ISAR image is real or fake, while the relativistic average discriminator D_Ra estimates the probability that an HR ISAR image is more realistic than an SR ISAR image. Equations (11) and (12) show their definitions, respectively:

D_{st}(I) = σ( C(I) )    (11)

D_{Ra}(I^{HR}, I^{SR}) = σ( C(I^{HR}) − E_{I^{SR}}[ C(I^{SR}) ] )    (12)

where σ is the sigmoid function, C(·) is the output of the non-transformed discriminator, and E_{I^{SR}}[·] represents taking the average value over a mini-batch.
So, the discriminator loss L_D^{Ra} can be expressed as:

L_D^{Ra} = −E_{I^{HR}} [ \log D_{Ra}(I^{HR}, I^{SR}) ] − E_{I^{SR}} [ \log( 1 − D_{Ra}(I^{SR}, I^{HR}) ) ]    (13)

Also, the adversarial loss for the generator is:

L_{ad}^{Ra} = −E_{I^{HR}} [ \log( 1 − D_{Ra}(I^{HR}, I^{SR}) ) ] − E_{I^{SR}} [ \log D_{Ra}(I^{SR}, I^{HR}) ]    (14)
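The relativistic average discriminator and the two losses above can be sketched numerically. The snippet below is a minimal NumPy illustration (not the paper's PyTorch implementation), assuming the raw non-transformed discriminator outputs C(·) are given as arrays of per-image scores for a mini-batch:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def rad(c_a, c_b):
    """Relativistic average discriminator of Eqs. (11)-(12): the probability
    that image a is more realistic than the mini-batch average of image b.
    c_a, c_b are raw (non-transformed) discriminator outputs C(.)."""
    return sigmoid(c_a - np.mean(c_b))

def d_loss(c_hr, c_sr, eps=1e-12):
    # Discriminator loss L_D^Ra of Eq. (13): HR should score as "more
    # realistic" than SR, and SR as "less realistic" than HR.
    return (-np.mean(np.log(rad(c_hr, c_sr) + eps))
            - np.mean(np.log(1.0 - rad(c_sr, c_hr) + eps)))

def g_adv_loss(c_hr, c_sr, eps=1e-12):
    # Generator adversarial loss L_ad^Ra of Eq. (14): the symmetric
    # counterpart, pushing SR outputs to look more realistic than HR.
    return (-np.mean(np.log(1.0 - rad(c_hr, c_sr) + eps))
            - np.mean(np.log(rad(c_sr, c_hr) + eps)))

# Toy raw scores for a mini-batch of 4 images: HR scores high, SR low.
c_hr = np.array([2.0, 1.5, 1.8, 2.2])
c_sr = np.array([-1.0, -0.5, -1.2, -0.8])
print(d_loss(c_hr, c_sr), g_adv_loss(c_hr, c_sr))
```

With a discriminator that already separates HR from SR, L_D^Ra is small while the generator's adversarial loss is large, which is the gradient signal that drives G to recover more realistic detail.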

Design of the Proposed GAN
ISAR images do not have rich edges and color information like optical images; their most prominent feature is image contrast, which is displayed in the form of bright spots. Skip connections in the residual network (Res-Net) can preserve the image contrast [16], so we take Res-Net as the basic structure of G. Inspired by [20], the architecture of G is shown in Figure 3a. To improve the detailed expression ability of the recovered ISAR images, the Residual-in-Residual Dense Block (RRDB) is adopted in the generator network. The RRDB contains a multi-level residual network and dense connections, which can describe the weak scatterers of ISAR images accurately and improve performance. Here, each RRDB contains three dense blocks, and the number of RRDBs is 11. A dense block is composed of four convolution layers and three leaky rectified linear unit (LeakyReLU) layers. Each convolution layer consists of 3 × 3 kernels and 64 feature maps with stride 1 [20], which improves network performance. Also, the batch normalization (BN) layers in the dense blocks are removed [23]. Besides, β is a residual scaling factor used to prevent instability, and its value is 0.2. Different from the usual image super-resolution task, our training data, namely the HR images and LR images, have the same size, and the LR images are not obtained by down-sampling in the data generation stage. So, the up-sampling module is removed to fit our task. The final convolution layer contains three output channels.
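A minimal structural sketch of the dense block and RRDB described above is given below. For illustration only, each "convolution" is modelled as a random channel-mixing matrix acting on a feature vector (the real network uses 3 × 3 convolutions over 64-channel feature maps); the channel count and weights here are toy assumptions, but the dense connectivity, the 4-conv/3-LeakyReLU layout, and the β = 0.2 residual scaling follow the text:

```python
import numpy as np

rng = np.random.default_rng(0)
BETA = 0.2          # residual scaling factor from the paper
C = 8               # toy channel count (64 in the paper)

def leaky_relu(x, a=0.2):
    return np.where(x > 0, x, a * x)

def dense_block(x, rng):
    """Toy dense block: four 'conv' stages, each fed the concatenation of
    all earlier feature maps, with LeakyReLU after the first three stages
    and a scaled local residual at the end."""
    feats = [x]
    for _ in range(3):
        w = rng.standard_normal((C, C * len(feats))) / np.sqrt(C * len(feats))
        feats.append(leaky_relu(w @ np.concatenate(feats)))
    w = rng.standard_normal((C, C * len(feats))) / np.sqrt(C * len(feats))
    return x + BETA * (w @ np.concatenate(feats))   # no activation on last conv

def rrdb(x, rng):
    """Residual-in-Residual Dense Block: three dense blocks chained,
    plus an outer residual connection scaled by beta."""
    y = x
    for _ in range(3):
        y = dense_block(y, rng)
    return x + BETA * y

x = rng.standard_normal(C)
y = rrdb(x, rng)
print(x.shape, y.shape)
```

The small β keeps each block's contribution a mild perturbation of its input, which is what stabilizes training when many such blocks (11 in the paper) are stacked.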

Since the generator G has a strong representation ability, the discriminator D needs a correspondingly strong structure, which is shown in Figure 3b. First, I^{SR} and I^{HR} pass through a convolution layer (3 channels and 64 feature maps) with a 3 × 3 kernel and stride 1, followed by a leaky ReLU with a slope of 0.2. Then, consecutive convolution, BN, and leaky ReLU blocks are used.
The number of filters starts at 64 and increases successively to 128, 256, and 512. Next, two dense layers are used with output channels 1024 and 1, with a leaky ReLU between them. Finally, a sigmoid activation function estimates whether I^{HR} is more realistic than I^{SR}.

Loss Function
The form of the loss function is vital for the generator to produce good SR ISAR images. In various networks, different loss designs have improved image quality, especially the Peak Signal-to-Noise Ratio (PSNR). These PSNR-oriented approaches often select the MSE as the loss function, which can be expressed as:

L_{MSE} = (1 / (XY)) \sum_{x=1}^{X} \sum_{y=1}^{Y} ( I^{HR}_{xy} − I^{SR}_{xy} )^2    (15)

where X and Y represent the height and width of the HR/SR ISAR images, respectively. However, the results of MSE optimization are often over-smoothed, which makes some weak scatterers disappear and degrades the overall quality of the SR ISAR images [20]. Instead of the MSE loss, we select the absolute loss L_1 to enhance the resolution of the SR ISAR images:

L_1 = (1 / (XY)) \sum_{x=1}^{X} \sum_{y=1}^{Y} | I^{HR}_{xy} − I^{SR}_{xy} |    (16)

At the same time, we introduce the feature loss L_feature into L_G to maintain the main characteristics of the scattering points. The feature loss is based on the ReLU layers of the pretrained VGG19 network. We use φ_{i,j} to represent the features obtained by the j-th convolution before the i-th max-pooling layer. In this article, we select φ_{5,4} to compute the feature loss [20]:

L_{feature} = \| φ_{5,4}(I^{HR}) − φ_{5,4}(I^{SR}) \|_2^2    (17)

In addition, the adversarial loss L_{ad}^{Ra} of Equation (14) is also introduced into L_G to recover the detailed features of weak scattering points. Therefore, L_G is defined as:

L_G = L_{feature} + λ L_{ad}^{Ra} + η L_1    (18)

where λ and η are coefficients that balance L_{ad}^{Ra} and L_1. In ISAR images, the amplitude of most areas is close to zero except for some strong scattering points, so the values of λ and η should be selected reasonably.
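The composition of L_G in Equation (18) can be sketched as follows. Note this is a hedged stand-in: the feature extractor below is a simple image-gradient function substituting for the pretrained VGG19 φ_{5,4} features, and the adversarial term is passed in as a precomputed scalar rather than evaluated by a real discriminator:

```python
import numpy as np

LAM, ETA = 5e-3, 1e-2   # balancing weights lambda and eta used in the paper

def l1_loss(sr, hr):
    # Absolute loss L_1 of Eq. (16): mean absolute error over all pixels.
    return np.mean(np.abs(sr - hr))

def feature_loss(sr, hr, phi):
    # Feature loss of Eq. (17): squared distance between feature maps.
    # In the paper, phi is the phi_{5,4} layer of a pretrained VGG19.
    return np.mean((phi(sr) - phi(hr)) ** 2)

def generator_loss(sr, hr, phi, adv_loss):
    # Total generator loss of Eq. (18): L_G = L_feature + lam*L_ad + eta*L_1.
    return feature_loss(sr, hr, phi) + LAM * adv_loss + ETA * l1_loss(sr, hr)

# Toy stand-in feature extractor (vertical image gradients) -- NOT VGG19.
grad_feat = lambda img: np.abs(np.diff(img, axis=0))

rng = np.random.default_rng(1)
hr = rng.random((16, 16))
sr = hr + 0.05 * rng.standard_normal((16, 16))
print(generator_loss(sr, hr, grad_feat, adv_loss=5.0))
```

The small λ and η keep the pixel-wise and adversarial terms from overwhelming the feature term, reflecting that most ISAR pixel amplitudes are near zero.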

Data Acquisition
First, we randomly generate some scatterers in a specified area to obtain the radar echo. As mentioned before, LR ISAR images can be obtained by taking the IFFT of the radar echo. The corresponding HR ISAR images can be acquired by convolving the target scattering function with the PSF. Here, the PSF is approximated by a 2-D Gaussian function instead of a sinc function, and its expression is shown in Equation (19):

h(x, y) = exp( −x^2 / (2 σ_x^2) − y^2 / (2 σ_y^2) )    (19)

where σ_x^2 and σ_y^2 control the azimuth and range resolution, respectively. Then, under the condition of no noise and full aperture, the LR ISAR images and HR ISAR images are the inputs and annotations of the proposed GAN, respectively.
In the generation stage of the training data, the related parameters are as follows: radar carrier frequency 10 GHz, bandwidth 400 MHz, pulse repetition frequency 400 Hz, pulse width 25.6 µs, and pulse number 256. Each pulse contains 256 samples, and 10-200 points are randomly generated in an area with a width of 50. The scattering coefficients obey the standard complex Gaussian distribution. We obtain the LR/HR ISAR images with MATLAB, and the image size is set to 256 × 256. The images are displayed in log magnitude with a dynamic range of 30 dB. A total of 10,000 LR/HR ISAR image pairs are used to train the proposed GAN. A randomly selected input and annotation image pair from the training process is shown in Figure 4.
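A toy sketch of the HR annotation generation, convolving a sparse scatterer map with the 2-D Gaussian PSF of Equation (19) via the FFT; the grid size, σ values, and scatterer count below are illustrative, not the paper's parameters:

```python
import numpy as np

# 2-D Gaussian PSF of Equation (19), centered on an N x N grid.
N = 64
sx, sy = 1.0, 1.5                        # sigma_x, sigma_y control resolution
x = np.arange(N) - N // 2
X, Y = np.meshgrid(x, x, indexing="ij")
psf = np.exp(-X**2 / (2 * sx**2) - Y**2 / (2 * sy**2))

# Sparse target scattering function: a few random point scatterers.
rng = np.random.default_rng(2)
scat = np.zeros((N, N))
idx = rng.integers(8, N - 8, size=(5, 2))
scat[idx[:, 0], idx[:, 1]] = rng.random(5) + 0.5

# HR annotation = scattering function convolved with the PSF.
# FFT-based (circular) convolution is good enough for this sketch.
hr = np.real(np.fft.ifft2(np.fft.fft2(scat) * np.fft.fft2(np.fft.ifftshift(psf))))
print(hr.shape)
```

Because the Gaussian PSF has no sidelobes, the HR annotation shows each scatterer as a compact spot, which is the sharp, sidelobe-free target the network learns to reproduce.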

The training process can be divided into two steps. First, we select only the absolute loss L_1 in Equation (16) as the loss function for initialization, which helps the generator obtain pleasing SR ISAR images. Then we use the L_G in Equation (18) to optimize the network with λ = 5 × 10^−3 and η = 1 × 10^−2 [20]. The training process is performed on an NVIDIA V100 GPU using PyTorch. Adam is adopted as the optimization algorithm with β_1 = 0.9 and β_2 = 0.999 [20]. The batch size is 4 and the learning rate is 1 × 10^−4 [20]. We alternately train the generator and discriminator networks of the proposed GAN for 5 epochs, which costs 6.5 h.
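The two-stage schedule above can be summarized by the following skeleton, where the three step functions are placeholders for the actual Adam updates (β_1 = 0.9, β_2 = 0.999, learning rate 1 × 10^−4, batch size 4 in the paper); the step counts and tags here are illustrative:

```python
def train_schedule(pretrain_steps, gan_steps, g_step_l1, g_step_full, d_step):
    """Stage 1: generator initialization with the absolute loss L_1 only.
    Stage 2: alternate generator (full loss L_G) and discriminator updates."""
    history = []
    for _ in range(pretrain_steps):          # stage 1: L_1 pretraining
        history.append(("G/L1", g_step_l1()))
    for _ in range(gan_steps):               # stage 2: alternate G and D
        history.append(("G/full", g_step_full()))
        history.append(("D", d_step()))
    return history

# Placeholder step functions standing in for real optimizer steps.
hist = train_schedule(2, 3,
                      g_step_l1=lambda: "l1",
                      g_step_full=lambda: "lg",
                      d_step=lambda: "d")
print([tag for tag, _ in hist])
```

The L_1-only warm-up gives the adversarial stage a generator that already produces plausible images, which avoids the early-training instability common in GANs.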

Testing Strategy
In the test part, [16] has already compared the performance of CV-CNN and Res-Net, so we select only three different GAN networks to compare with the method proposed in this paper, denoted GAN1, GAN2, and GAN3. The network structure of GAN1 and GAN2 is the same as that in [16], but they adopt different loss functions. In GAN1, the loss function is L_{G1} = L_{feature} + L_{ad}^{st} [24], where L_{ad}^{st} is the adversarial loss adopting the standard discriminator D_st. In GAN2, the loss function is L_{G2} = L_{feature} + λ L_{ad}^{Ra} + η L_1, as shown in Equation (18). As for GAN3, the network structure is the one proposed in this paper, but its loss function replaces the absolute loss L_1 in Equation (18) with the MSE loss. To quantitatively evaluate the performance of the different methods, we adopt three metrics, namely PSNR, structural similarity (SSIM), and image entropy (IE). Supposing that the annotation image is recorded as I = {I_{xy}}_{X×Y} and the reconstructed image is recorded as I' = {I'_{xy}}_{X×Y}, the definitions of these metrics are as follows [25]:

PSNR = 10 \log_{10} ( MAX^2 / ( (1/(XY)) \sum_x \sum_y ( I_{xy} − I'_{xy} )^2 ) )

SSIM = ( (2 μ_I μ_{I'} + c_1)(2 σ_{II'} + c_2) ) / ( (μ_I^2 + μ_{I'}^2 + c_1)(σ_I^2 + σ_{I'}^2 + c_2) )

IE = −\sum_x \sum_y ( |I'_{xy}|^2 / sum(I') ) \ln( |I'_{xy}|^2 / sum(I') ), with sum(I') = \sum_x \sum_y |I'_{xy}|^2

where MAX is the maximum pixel value of I, μ_I and μ_{I'} are the mean values of I and I', σ_I^2 and σ_{I'}^2 are the variances of I and I', σ_{II'} is the covariance of I and I', and c_1 and c_2 are small constants that stabilize the division. Among the above three metrics, a bigger PSNR, a bigger SSIM, and a smaller IE indicate a better reconstructed image.
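For reference, the three metrics can be computed as in the following sketch. The SSIM here uses a single global window (library implementations such as scikit-image average over sliding local windows), and the IE uses a common normalized-energy definition of image entropy; the constants and test images are illustrative:

```python
import numpy as np

def psnr(ref, rec):
    # PSNR = 10*log10(MAX^2 / MSE), MAX = peak value of the annotation image.
    mse = np.mean((ref - rec) ** 2)
    return 10 * np.log10(ref.max() ** 2 / mse)

def ssim_global(ref, rec, c1=1e-4, c2=9e-4):
    # Single-window (global) SSIM from means, variances, and covariance.
    mu1, mu2 = ref.mean(), rec.mean()
    v1, v2 = ref.var(), rec.var()
    cov = np.mean((ref - mu1) * (rec - mu2))
    return ((2 * mu1 * mu2 + c1) * (2 * cov + c2)) / \
           ((mu1**2 + mu2**2 + c1) * (v1 + v2 + c2))

def image_entropy(img, eps=1e-12):
    # Entropy of the normalized pixel energy: well-focused images
    # concentrate energy in few pixels and therefore give a smaller IE.
    p = np.abs(img) ** 2
    p = p / (p.sum() + eps)
    return -np.sum(p * np.log(p + eps))

rng = np.random.default_rng(3)
ref = rng.random((32, 32))
rec = ref + 0.01 * rng.standard_normal((32, 32))
print(psnr(ref, rec), ssim_global(ref, rec), image_entropy(ref))
```

A near-identical reconstruction gives a high PSNR and an SSIM near 1, while sharpening an image (concentrating its energy) lowers its IE, matching the "bigger PSNR, bigger SSIM, smaller IE" criterion above.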

Experiment Results and Analysis
In the experiment part, we select simulated data and measured data to verify the performance of the different methods. In most articles, the number of scattering points for a simulated aircraft is relatively small; in our experiment, however, the simulated aircraft consists of 276 scattering points. The relevant parameters of the simulated data are the same as those used to train the network. The measured data contain the Yak42 aircraft data and F-16 airplane model data. The carrier frequency and bandwidth of the Yak42 aircraft data are 5.52 GHz and 400 MHz, respectively. The F-16 airplane model is measured in the microwave chamber, with a frequency range from 34.2857 to 37.9428 GHz.
Under the conditions of no noise and full aperture, we first compare the performance of the different methods. Then we consider the influence of random Gaussian noise on ISAR imaging performance: the noise is added to the radar echo of the simulated and measured data, with corresponding SNRs of 2 dB and −4 dB. Next, the different methods are tested under the condition of sparse aperture, where the echo data of the LR ISAR images are under-sampled; here, we consider that only 224 pulses are recorded, and zero padding is used to obtain the test images. Finally, the universality and generalization of the proposed GAN are verified with the F-16 airplane model data.
As for the complexity of the different networks, the training of the proposed GAN and GAN3 costs 6.5 h, while GAN1 and GAN2 cost 3 h; this is because the proposed GAN adopts a more complex network structure. For the trained networks, the imaging time of the proposed GAN and GAN3 is 0.51 s, while that of GAN1 and GAN2 is 0.46 s.

Comparison of No Noise and Full Aperture
The imaging results of the simulated aircraft by the different methods under the condition of no noise and full aperture are shown in Figure 5, and the corresponding metrics are presented in Table 1. An LR ISAR image is shown in Figure 5b: its resolution is limited, and strong sidelobes submerge many weak scattering points. All four GAN-based methods achieve better imaging performance than the IFFT, which indicates the superiority of the GAN. The proposed GAN has the smallest IE and acceptable PSNR and SSIM. From Figure 5f, it is visually obvious that the proposed GAN has the best resolution performance and correctly reconstructs the most weak scattering points compared with the other methods; it describes the target details correctly, such as the tails of the simulated aircraft. Comparing Figure 5d and Figure 5f, we can see that the reconstructed result from GAN2 recovers some weak points incorrectly, which shows that the network structure proposed in this paper is better than that in [16]. Comparing Figure 5c and Figure 5d, we find that GAN2 achieves better performance than GAN1 precisely because it adopts the loss function proposed in this paper, which shows the effectiveness of the proposed loss function. In addition, the metrics of GAN3 are not bad, but Figure 5e has some unpleasant shadows around the scattering points, whereas the ISAR image in Figure 5f has sharp edges. So, it is verified that the L_1 loss performs better than the MSE loss for ISAR images. The imaging results of Yak42 by the different methods are shown in Figure 6, and the corresponding metrics in Table 2. As can be seen from Figure 6a, the imaging result of the traditional method is not focused and has many strong sidelobes. Compared with the other methods, the proposed GAN obtains a better-focused image and gets the smallest IE. At the same time, the proposed GAN does not produce many fake points, while the imaging results of GAN1 and GAN2 have some fake points in the background.
Just like the simulated aircraft experiment, GAN2 has a better imaging effect than GAN1, which further confirms the effectiveness of the proposed loss function. In addition, the imaging result of GAN3 is over-smoothed and has shadows in the background, which also confirms that the MSE loss is not suitable.

Comparison of Different SNRs
The imaging results of the simulated aircraft under different SNRs are shown in Figures 7 and 8, and the corresponding metrics in Tables 3 and 4. The proposed GAN gets the smallest IE, the highest SSIM, and an acceptable PSNR when the SNR is 2 dB and −4 dB. As shown in Figure 8f, the ISAR image formed by the proposed GAN remains focused with a clear background even when the SNR is −4 dB. The proposed GAN improves resolution significantly while recovering the target details as much as possible, although the loss of some scattering-point information is inevitable. For the other methods, some fake points appear in the background because of the strong noise, which proves the superiority of the proposed method. Similarly, GAN2 performs better than GAN1 because of the proposed loss function, and GAN3 has shadows around the target because of the MSE loss.
The imaging results of Yak42 under different SNRs are shown in Figures 9 and 10, and the corresponding metrics in Tables 5 and 6. The proposed GAN has the smallest IE. It recovers the target details correctly and improves the resolution of Yak42, and the image quality does not degrade significantly as the SNR decreases. The proposed GAN depicts the outline of Yak42 clearly, which is helpful for target recognition. In addition, it produces as few fake points as possible with a clean background, while for GAN1 and GAN2 some fake points still exist in the images. This shows the effectiveness of the proposed method. Moreover, the outline of Yak42 for GAN3 is blurred because of the MSE loss. The above analysis shows that the proposed GAN is not sensitive to low SNR.

Comparison of Sparse Aperture
The imaging results of the simulated aircraft under SA are shown in Figure 11, and the corresponding metrics in Table 7. It can be seen from Figure 11b that the ISAR image of the IFFT does not have good focusing performance: the scattering points are seriously defocused in the azimuth direction. The proposed GAN gets the smallest IE and the highest SSIM and PSNR. It improves resolution significantly and describes more target details, and weak scattering points are recovered as much as possible, although some of them inevitably vanish because of the SA. Compared with the other ISAR images, Figure 11f does not generate many fake points around the target, whereas for the other networks there are many fake points in the background. So, the proposed GAN achieves the best imaging performance. The imaging results of Yak42 under SA are shown in Figure 12, and the corresponding metrics in Table 8. The proposed GAN has the smallest IE. It recovers more target details than the other networks and improves the resolution performance of Yak42. However, some fake points appear around the target, which shows that the proposed GAN is sensitive to SA. Similarly, GAN2 has better imaging performance than GAN1 because of the proposed loss function, and the ISAR image of GAN3 has a blurred outline of Yak42, which is the result of using the MSE loss.


Universality and Generalization of the Proposed GAN
To validate the universality and generalization of the proposed GAN, an all-metal scaled model of an F-16 is measured in the microwave chamber, as shown in Figure 13a. From Figure 13b, we can see that the proposed GAN improves the resolution performance and yields a clear outline of the F-16, although it does not describe the scattering characteristics of the F-16 perfectly. Notably, the carrier frequency of this data differs from that of the training data and of the Yak42 used in the previous experiments, which further proves the universality and generalization of the proposed GAN.

Conclusions
A resolution enhancement method for ISAR imaging based on a GAN is proposed in this paper. We adopt a relativistic average discriminator (RaD) to improve the ability to describe target details, and the Residual-in-Residual Dense Block (RRDB) is used in the generator network. The loss function consists of feature loss, adversarial loss, and absolute loss: the feature loss maintains the main characteristics of ISAR images, the adversarial loss recovers the weak scattering points, and the absolute loss keeps ISAR images from being over-smoothed. Compared with other networks, the experiments show that the proposed GAN improves resolution performance significantly and describes the target details well, while producing as few fake points as possible. In addition, it works well under different SNRs. The proposed GAN is sensitive to sparse aperture, which will be improved in future work. Besides, the universality and generalization of the proposed GAN are also well verified.
