Article

Millimeter-Wave Image Deblurring via Cycle-Consistent Adversarial Network

1 Beijing Key Laboratory of Millimeter Wave and Terahertz Technology, School of Integrated Circuits and Electronics, Beijing Institute of Technology, Beijing 100081, China
2 Tangshan Research Institute of BIT, Tangshan 063007, China
* Author to whom correspondence should be addressed.
Electronics 2023, 12(3), 741; https://doi.org/10.3390/electronics12030741
Submission received: 26 December 2022 / Revised: 30 January 2023 / Accepted: 31 January 2023 / Published: 1 February 2023
(This article belongs to the Special Issue Recent Advances in Microwave and Terahertz Engineering)

Abstract:
Millimeter-wave (MMW) imaging has a tangible prospect in concealed weapon detection for security checks. Typically, a one-dimensional (1D) linear antenna array with mechanical scanning along the perpendicular direction is employed for MMW imaging. To achieve high-resolution imaging, the target under test needs to remain sufficiently steady during the mechanical scanning process, since even slight movement induces large phase variations at MMW frequencies, which results in a blurred image. However, when imaging a human body, this requirement is sometimes difficult to meet, especially for the elderly. Such blurred MMW images reduce the detection accuracy of concealed weapons. In this paper, we propose a deblurring method based on the cycle-consistent adversarial network (Cycle GAN). Specifically, the Cycle GAN learns the mapping between blurred MMW images and focused ones. To minimize the effect of shaking blur, we introduce an identity loss. Moreover, a mean squared error loss (MSE loss) is utilized to stabilize the training, so as to obtain more refined deblurred results. The experimental results demonstrate that the proposed method can efficiently suppress the blurring effect in MMW images.

1. Introduction

Millimeter-wave (MMW) imaging has been widely employed in concealed weapon detection due to its penetration ability [1]. To obtain higher resolutions, terahertz technology has also been studied for security inspection [2,3]. However, it is still not commercially applied because of the unstable performance of the circuits and the high manufacturing cost. In most deployed MMW systems, a one-dimensional (1D) linear antenna array with mechanical scanning along the perpendicular direction [4,5] is utilized for imaging. Since mechanical scanning is involved, the target under test, such as a human body, is required to remain sufficiently steady during the scanning process to avoid unwanted phase variation. In practice, however, slight movements such as the shaking or swaying of the body cannot be avoided, especially for the elderly, and they deteriorate the image quality. Since state-of-the-art detection and classification methods usually apply deep neural network architectures that rely on object edges and other image features, a blurred image with distorted edges and features will inevitably degrade the detection accuracy [6]. Target detection from MMW images exhibits the same characteristic. Thus, an efficient image deblurring approach is highly desired.
To achieve deblurring, Szeliski [7], Bar et al. [8] and Krishnan et al. [9] proposed non-blind deblurring methods for optical applications, in which the Lucy–Richardson algorithm and the Wiener filter were employed for deconvolution, assuming the blur kernels were known a priori. However, the blur kernels cannot be accurately estimated in practical situations. Thus, practicable deblurring algorithms should estimate both the blur kernels and the focused images simultaneously [10]. Modified blind deconvolution methods [11,12,13,14] have been developed that use different point spread functions (PSFs) or prior information to reconstruct the focused images, so that the blurring caused by the shaking of the optical lens can be eliminated. However, when the prior information is insufficient, it is difficult to determine the blur kernels. Thus, the classic non-blind and blind deblurring methods can hardly reconstruct satisfactory MMW images free of blurring effects.
In addition to the aforementioned classic methods, neural networks have also been employed for optical blind deblurring. Sun et al. [15] employed a convolutional neural network (CNN) to estimate the blur kernel at the patch level, where each patch is assigned a single motion label; however, this patch-wise assumption does not hold for real data. Nah et al. [16] and Noroozi et al. [17] proposed end-to-end deblurring methods without motion-blur kernels using multi-scale CNNs, but they can only handle mild Gaussian blur. Kupyn et al. [6] proposed a conditional adversarial network (cGAN) method for kernel-free blind motion deblurring. However, cGAN requires paired training samples, which are hard to obtain in practice, making it difficult to deal with real blurred samples. In addition, these methods all focus on optical motion deblurring and are not suitable for MMW image deblurring.
Unlike the blurring induced by the shaking of a lens, the MMW image blurring caused by the movement of a human body, which is not a rigid object, may involve different blur kernels for different body parts. Clearly, this is more complex than the deblurring of optical images.
In this paper, we introduce a method based on the cycle-consistent adversarial network (Cycle GAN) [18] to realize end-to-end deblurring of MMW images. The network learns the translation between blurred and focused MMW images without paired training samples. Different shaking behaviors, and the shaking of different parts of the human body, may produce different blur features. To address this, we collected MMW images of the human body under various shaking conditions for network training to improve generalization. In addition, we adopted a mean squared error loss (MSE loss) in lieu of the binary cross entropy loss (BCE loss) to stabilize the model training procedure [19]. We also introduced an identity mapping loss function [20] to prevent the loss of information: it encourages the mapping to keep the main information of the deblurred image consistent with that of the blurred one, which fits well with actual blurred MMW images. Additionally, we introduce a dropout operation during generator training to reduce the risk of overfitting. The final deblurred MMW images reconstructed by the proposed method show fine texture details with almost no artifacts.
The innovations of this paper are summarized as follows:
(a) To the best of our knowledge, we are the first to address the phenomenon of MMW image blurring caused by the shaking of the human body, and propose an end-to-end deblurring architecture based on Cycle GAN.
(b) The MSE loss and the identity loss are incorporated in Cycle GAN to ensure a more suitable model for the MMW deblurring task.
The rest of this paper is organized as follows. In Section 2, the model of the scattered waves is presented, as well as the imaging algorithm, to show the cause of the MMW image blurring. Then, the deblurring Cycle GAN architecture and the loss function are described in detail. In Section 3, the MMW imaging prototype is introduced. The deblurred results of the proposed method are shown in comparison with the classical blind deconvolution methods. In Section 4, the conclusions are drawn.

2. Formulation

2.1. Millimeter-Wave Imaging

The near-field imaging scheme using a linear monostatic array with mechanical scanning is shown in Figure 1, where “●TR” denotes the position of the transceiver element. The scattering coefficient of each scatterer is represented by $\sigma(x, y, z)$. The demodulated scattered waves received at the antenna position $(x', y')$ can be expressed as:

$$s(x', y', k) = \iiint \sigma(x, y, z)\, e^{-j 2 k R}\, \mathrm{d}x\, \mathrm{d}y\, \mathrm{d}z \tag{1}$$

where $R = \sqrt{(x - x')^2 + (y - y')^2 + (z + R_0)^2}$ is the distance between the target and the antenna (the factor of 2 in the exponent accounting for the round trip), $k = 2\pi f / c$ is the wavenumber, $f$ is the operating frequency, $c$ is the speed of light, and $R_0$ denotes the distance between the array aperture and the imaging plane.
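To make the echo model in (1) concrete, the following minimal NumPy sketch simulates the demodulated signal for a few ideal point scatterers. The grid sizes mirror the prototype described in Section 3, while the variable names and target positions are illustrative assumptions, not the authors' code.

```python
import numpy as np

c = 3e8                                   # speed of light (m/s)
f = np.linspace(32e9, 37e9, 64)           # frequency sweep over the prototype band
k = 2 * np.pi * f / c                     # wavenumber k = 2*pi*f/c
xa = np.arange(383) * 5e-3                # 383 virtual elements, 5 mm spacing
ya = np.arange(251) * 4e-3                # 251 mechanical scan steps, 4 mm spacing
R0 = 0.3                                  # aperture-to-target standoff (m)

# Hypothetical point scatterers: (x, y, z, sigma)
targets = [(0.95, 0.50, 0.00, 1.0), (1.05, 0.45, 0.02, 0.8)]

# Echo cube s(x', y', k) per (1): superposition over scatterers
s = np.zeros((len(xa), len(ya), len(k)), dtype=complex)
for (x, y, z, sigma) in targets:
    R = np.sqrt((x - xa[:, None, None])**2
                + (y - ya[None, :, None])**2 + (z + R0)**2)
    s += sigma * np.exp(-1j * 2 * k[None, None, :] * R)
```

A slight shift of the target positions between scan steps changes $R$, and hence the phase $2kR$, which is exactly the mechanism that blurs the reconstructed image.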
The reconstruction of the target using the range migration algorithm (RMA) [2] can be expressed as:

$$\sigma(x, y, z) = \mathrm{FT}_{3D}^{-1}\left\{ \mathrm{IN}_{1D}\left[ \mathrm{FT}_{2D}\left( s(x', y', k) \right) e^{j k_z R_0} \right] \right\} \tag{2}$$

where $\mathrm{FT}_{3D}^{-1}$ represents the three-dimensional (3D) inverse Fourier transform, $\mathrm{FT}_{2D}$ indicates the two-dimensional (2D) Fourier transform, and $\mathrm{IN}_{1D}$ denotes the 1D interpolation in the wavenumber domain, performed in a loop over the 2D spatial frequency domain. According to (1), if the human body shakes slightly, the antennas will collect signals containing information from different postures. This leads to blurring of the MMW images. As a result, a blurred object in an MMW image is difficult to recognize by the subsequent detection module.
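A simplified implementation of (2) might proceed as follows, assuming the echo cube `s` and grids from the previous sketch. This is a sketch of the Stolt-interpolation pipeline under idealized assumptions, not the production imaging code of the prototype.

```python
import numpy as np
from scipy.interpolate import interp1d

def rma_reconstruct(s, dx, dy, k, R0, nz=64):
    # Simplified RMA per (2): FT_2D over the aperture, reference phase
    # compensation, Stolt (IN_1D) interpolation onto a uniform kz grid,
    # then FT_3D^{-1} back to the spatial domain.
    nx, ny, _ = s.shape
    S = np.fft.fft2(s, axes=(0, 1))                        # FT_2D
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dy)
    KX, KY, K = np.meshgrid(kx, ky, k, indexing="ij")
    KZ = np.sqrt(np.maximum(4 * K**2 - KX**2 - KY**2, 0))  # dispersion relation
    S = S * np.exp(1j * KZ * R0)                           # e^{j kz R0}
    kz_u = np.linspace(KZ[KZ > 0].min(), KZ.max(), nz)     # uniform kz grid
    S_stolt = np.zeros((nx, ny, nz), dtype=complex)
    for i in range(nx):                                    # IN_1D, looped over (kx, ky)
        for j in range(ny):
            valid = KZ[i, j] > 0
            if valid.sum() > 1:
                interp = interp1d(KZ[i, j, valid], S[i, j, valid],
                                  bounds_error=False, fill_value=0.0)
                S_stolt[i, j] = interp(kz_u)
    return np.fft.ifftn(S_stolt)                           # FT_3D^{-1}

# e.g. sigma = rma_reconstruct(s, 5e-3, 4e-3, k, R0)
```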

2.2. Cycle GAN Methodology for MMW Image Deblurring

To remove the MMW image blurring, we utilized the end-to-end Cycle GAN method to translate blurred MMW radar images into focused ones, without requiring hard-to-obtain priors or blur kernels.
The goal of the MMW deblurring Cycle GAN is to learn the mapping functions between the blurred MMW image dataset $A$ and the focused MMW image dataset $B$, given training samples $\{a_i\}_{i=1}^{N}$ and $\{b_j\}_{j=1}^{M}$, where $a_i \in A$ and $b_j \in B$. We denote the MMW data distributions as $a \sim p_a$ and $b \sim p_b$, where $p_a$ and $p_b$ represent the probability distributions of $a$ and $b$, respectively. As Figure 2 shows, the MMW deblurring Cycle GAN contains two mappings: generators $G: A \rightarrow B$ and $F: B \rightarrow A$. In addition, two adversarial discriminators, $D_A$ and $D_B$, with the same structure are introduced. $D_A$ aims to distinguish between the real blurred MMW image $a$ and the synthetic image $F(b)$. Similarly, $D_B$ aims to distinguish between the real focused MMW image $b$ and the deblurred MMW image $G(a)$. The structure of the generators $G$ and $F$ is shown in Figure 3, where we mark the network with four modules: module I is the forward convolution module, module II is the stacked ResnetBlock module, module III is the transpose convolution module, and module IV is the convolution and reflection module. Differing from the classical Cycle GAN, during training of the MMW deblurring Cycle GAN we apply dropout [21] to each layer in the ResnetBlocks, randomly deactivating neurons with a probability of 0.5; this improves the generalization ability of the model and prevents overfitting. Neural networks fully activated by the sigmoid function have almost no sparsity and can miss key information in the image, while ReLU offers good sparsity due to its unilateral inhibition and can also alleviate overfitting; therefore, we chose ReLU as the activation function in this work. The discriminators $D_A$ and $D_B$ use the structure of classical PatchGANs [18]. A sketch of one such ResnetBlock appears below.
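As a concrete illustration, a single ResnetBlock of module II with the dropout described above might look like the following PyTorch sketch. The channel count and exact layer ordering are assumptions patterned on the classic Cycle GAN generator, not an exact reproduction of our implementation.

```python
import torch
import torch.nn as nn

class ResnetBlock(nn.Module):
    # One residual block: two 3x3 convolutions with reflection padding,
    # InstanceNorm, ReLU activation, and dropout with probability 0.5.
    def __init__(self, channels: int = 256, p_drop: float = 0.5):
        super().__init__()
        self.body = nn.Sequential(
            nn.ReflectionPad2d(1),
            nn.Conv2d(channels, channels, kernel_size=3),
            nn.InstanceNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Dropout(p_drop),            # randomly deactivates neurons, p = 0.5
            nn.ReflectionPad2d(1),
            nn.Conv2d(channels, channels, kernel_size=3),
            nn.InstanceNorm2d(channels),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.body(x)            # residual (skip) connection
```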
The loss function of the classic Cycle GAN includes two terms: (1) the adversarial loss of the GAN [22], which is used to update the parameters of the generator and discriminator; (2) the cycle consistency loss [18], which prevents semantic loss and mode collapse. In the proposed MMW deblurring Cycle GAN, we further employ the identity loss [20] to maintain detail information in the deblurred images, which is an incremental module compared with the classic Cycle GAN.
In regard to the adversarial loss, we employ MSE loss instead of BCE loss, because the former is more stable during the training procedure and generates better deblurred MMW images. The adversarial loss is denoted as follows:
$$\mathcal{L}_{adv}(G, D_B, A, B) = \mathbb{E}_{a \sim p_a}\left[ \left( D_B(G(a)) - 1 \right)^2 \right] + \mathbb{E}_{b \sim p_b}\left[ \left( D_B(b) - 1 \right)^2 \right] + \mathbb{E}_{a \sim p_a}\left[ D_B(G(a))^2 \right] \tag{3}$$
where $G$ aims to minimize the adversarial loss, while $D_B$ aims to maximize it, and $\mathbb{E}$ denotes the mathematical expectation of the inner function. The adversarial loss for the mapping function $F$ can be expressed in the same way as $\mathcal{L}_{adv}(F, D_A, B, A)$:

$$\mathcal{L}_{adv}(F, D_A, B, A) = \mathbb{E}_{b \sim p_b}\left[ \left( D_A(F(b)) - 1 \right)^2 \right] + \mathbb{E}_{a \sim p_a}\left[ \left( D_A(a) - 1 \right)^2 \right] + \mathbb{E}_{b \sim p_b}\left[ D_A(F(b))^2 \right] \tag{4}$$
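In PyTorch terms, replacing the BCE criterion with MSE amounts to the least-squares objectives sketched below for one direction of (3); the function names are chosen here for illustration.

```python
import torch

mse = torch.nn.MSELoss()

def discriminator_loss(D_B, real_b, fake_b):
    # D_B terms of (3): push D_B(b) toward 1 and D_B(G(a)) toward 0.
    pred_real = D_B(real_b)
    pred_fake = D_B(fake_b.detach())      # detach: do not backprop into G here
    return (mse(pred_real, torch.ones_like(pred_real))
            + mse(pred_fake, torch.zeros_like(pred_fake)))

def generator_adv_loss(D_B, fake_b):
    # G term of (3): push D_B(G(a)) toward 1.
    pred_fake = D_B(fake_b)
    return mse(pred_fake, torch.ones_like(pred_fake))
```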
The second term is the cycle consistency loss, which is used to reduce the space of possible mapping functions [18] and preserve the information of the original blurred MMW image. As shown in Figure 2, each blurred MMW image $a$ from domain $A$ should be brought back to the original blurred one by the forward deblurring translation cycle, i.e., $a \rightarrow G(a) \rightarrow a_{rec} \approx a$, where $a_{rec} = F(G(a))$. Similarly, for focused MMW images from domain $B$: $b \rightarrow F(b) \rightarrow b_{rec} \approx b$, where $b_{rec} = G(F(b))$. In the training process, we introduce the weights $\lambda_a$ and $\lambda_b$ for the forward and backward cycle consistency losses, respectively; they control the relative emphasis between blurred and focused MMW images. This behavior is represented by the cycle consistency loss:
$$\mathcal{L}_{cyc}(G, F) = \lambda_a \mathbb{E}_{a \sim p_a}\left[ \left\| F(G(a)) - a \right\|_1 \right] + \lambda_b \mathbb{E}_{b \sim p_b}\left[ \left\| G(F(b)) - b \right\|_1 \right] \tag{5}$$
To enhance the feature extraction ability of the network and preserve detail information during the transformation, we attach the identity loss [20] to the mapping function $G$. The identity loss can be expressed as:
$$\mathcal{L}_{identity}(G) = \mathbb{E}_{b \sim p_b}\left[ \left\| G(b) - b \right\|_1 \right] \tag{6}$$
Now, the total loss function of the MMW deblurring Cycle GAN can be expressed as:
$$\mathcal{L} = \mathcal{L}_{adv}(G, D_B, A, B) + \mathcal{L}_{adv}(F, D_A, B, A) + \mathcal{L}_{cyc}(G, F) + \lambda_{idt} \mathcal{L}_{identity}(G) \tag{7}$$
Finally, the MMW image deblurring can be achieved.
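Putting (5)–(7) together, the generator update might be assembled as in the sketch below, reusing generator_adv_loss from the previous snippet. The weights lambda_a, lambda_b and lambda_idt are illustrative placeholders, not the tuned values used in our experiments.

```python
import torch.nn.functional as nnf

lambda_a, lambda_b, lambda_idt = 10.0, 10.0, 5.0    # assumed weights

def total_generator_loss(G, F, D_A, D_B, a, b):
    fake_b, fake_a = G(a), F(b)                      # deblur / re-blur mappings
    adv = generator_adv_loss(D_B, fake_b) \
        + generator_adv_loss(D_A, fake_a)            # Eqs. (3) and (4), G/F terms
    cyc = (lambda_a * nnf.l1_loss(F(fake_b), a)      # forward cycle, Eq. (5)
           + lambda_b * nnf.l1_loss(G(fake_a), b))   # backward cycle, Eq. (5)
    idt = nnf.l1_loss(G(b), b)                       # identity term, Eq. (6)
    return adv + cyc + lambda_idt * idt              # total loss, Eq. (7)
```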

3. Results

In this section, we utilize an MMW imaging prototype for data collection and verify the efficiency of the proposed MMW image deblurring method. The working frequency was from 32 to 37 GHz. The prototype consists of a linear monostatic array with a horizontal mechanical scanning structure, as shown in Figure 4a. The transmit antennas worked in sequence, while the two neighboring receiving antennas collected the EM waves at the same time. This working pattern yields 383 virtual monostatic sampling points with an interval of 5 mm. The linear array scans horizontally to form a 2D aperture for 3D image reconstruction, with a scanning spacing of 4 mm over 251 moving steps. The subject under test stood in front of the array at a distance of 0.3 m carrying a concealed wrench, as shown in Figure 4. During the data collection, the subject either stood firmly or shook slightly.
In the experiments, we generated 577 blurred MMW images (dataset $A$) and 612 focused MMW images (dataset $B$) as the training dataset. The blurred data were collected while the subject shook slightly to varying degrees. The test dataset, collected in the same scenario, contains 80 blurred MMW images. The concealed objects (a knife and a wrench) hidden at the waist of the human body were all covered by clothing. The size of all the employed images is 512 × 256 × 1 (height × width × channel).
The original reconstructed MMW images are shown in Figure 5a,f,k, representing the subject (carrying a wrench) shaking perpendicular to the array, the subject (carrying a knife) shaking parallel to the array and the subject (no object concealed) shaking parallel to the array, respectively. Although the human body shook only slightly, the concealed objects and the human body are severely blurred, with invisible textures and coarse edges. We employ the classic blind deconvolution method [10], the original Cycle GAN [18] and the DeblurGAN [6] for comparison. For the blind deconvolution method, the blur kernel was first estimated from parts of the image and then employed to restore the whole image, as shown in Figure 5b,g,l. Clearly, the image quality still deteriorates significantly, with serious ringing effects. Although this method may be suitable for optical instruments, it fails when the target shakes in the MMW imaging scenario. The deblurred images from the original Cycle GAN are shown in Figure 5c,h,m. Note that some detailed information is lost; in particular, the arms of the human body become much longer than usual. More seriously, the original Cycle GAN distorts the semantic information of the human body and generates similar deblurred images for different blurred inputs, which indicates a risk of mode collapse. The deblurred images from the DeblurGAN are shown in Figure 5d,i,n. The results still show a heavy blurring effect, and there is considerable disturbing clutter in the background.
Finally, the deblurred images from the proposed method are shown in Figure 5e,j,o, which exhibit clearer and finer textures than those obtained by the other state-of-the-art deblurring methods. Compared with the results of the original Cycle GAN, this shows that by incorporating the MSE loss and the identity loss, the proposed network achieves a more stable performance. Moreover, the proposed method produces a more focused image with a clearer human outline and object texture than the blind deconvolution results in Figure 5b,g,l and the DeblurGAN results in Figure 5d,i,n.
Furthermore, the image entropy is calculated to provide a quantitative comparison among the different methods. As listed in Table 1, the entropy of the original Cycle GAN is the lowest; however, its deblurred results are distorted, as shown in Figure 5c,h,m. Excluding the original Cycle GAN, the entropy of the proposed method is the lowest in comparison with the classic blind deconvolution method and the DeblurGAN method, which further indicates the superiority of the proposed method. Compared with the original blurred images, the entropy of the deblurred images from the proposed method decreases by 16.83% on average.
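The paper does not spell out the entropy definition; a common choice, used here as an assumption, is the Shannon entropy of the normalized gray-level histogram, where a lower value indicates a more concentrated (better focused) image:

```python
import numpy as np

def image_entropy(img: np.ndarray, bins: int = 256) -> float:
    # Shannon entropy of the gray-level histogram (img assumed scaled to [0, 1]).
    hist, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()                 # normalize to a probability distribution
    p = p[p > 0]                          # drop empty bins to avoid log(0)
    return float(-np.sum(p * np.log2(p)))
```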
Next, we evaluate the computational complexity of the proposed method, measured in floating-point operations (FLOPs) [23]. A single operation, such as one addition or one multiplication, is counted as one FLOP. To do so, we divide the generative architecture into four modules, as shown in Figure 3, and present the computational complexity of the three dominant layer types: the convolutional layer, the transpose convolutional layer and InstanceNorm. The computational complexity of the convolutional layer is calculated as:
$$C_{conv} = H_{out} W_{out} K^2 c_{in} c_{out} \tag{8}$$

where $H_{out}$ and $W_{out}$ denote the height and width of the output image, respectively, $K$ denotes the size of the convolution kernel, and $c_{in}$ and $c_{out}$ denote the numbers of input and output channels, respectively. For example, given a convolution layer with a 7 × 7 kernel whose input and output sizes are 512 × 256 × 64 and 256 × 128 × 128, respectively, the computational cost from (8) is 13,153,337,344 FLOPs.
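A one-line check of (8), reproducing the worked example above (the function name is ours):

```python
def conv_flops(h_out: int, w_out: int, k: int, c_in: int, c_out: int) -> int:
    # Eq. (8): multiply-accumulate count of a standard convolution layer.
    return h_out * w_out * k**2 * c_in * c_out

assert conv_flops(256, 128, 7, 64, 128) == 13_153_337_344
```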
The computational complexity of the transpose convolutional layer is calculated as:
$$C_{convT} = H_{in} W_{in} K^2 c_{in} c_{out} \tag{9}$$

where $H_{in}$ and $W_{in}$ denote the height and width of the input image, respectively. For example, given a transpose convolution layer with a 3 × 3 kernel whose input and output sizes are 256 × 128 × 128 and 512 × 256 × 64, respectively, the computational cost is 2,415,919,104 FLOPs.
Finally, the InstanceNorm can be expressed as:
$$y = \frac{x - \mathrm{E}[x]}{\sqrt{\mathrm{Var}[x] + \varepsilon}}\, \gamma + \beta \tag{10}$$

where $x$ is the input data, $\mathrm{E}[x]$ is the expectation of $x$, $\mathrm{Var}[x]$ is the variance of $x$, $\varepsilon$ is a non-zero constant, and $\gamma$ and $\beta$ are learnable parameter vectors of the input size. The computational complexity of the InstanceNorm is then calculated as:

$$C_{IN} = 8 H_{in} W_{in} c_{in} \tag{11}$$

For example, given input data of size 512 × 256 × 64, the computational cost from (11) is 67,108,864 FLOPs.
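Equations (9) and (11) can be checked the same way against the two worked examples (again, the function names are ours):

```python
def conv_transpose_flops(h_in: int, w_in: int, k: int, c_in: int, c_out: int) -> int:
    # Eq. (9): transpose convolution cost, counted on the input grid.
    return h_in * w_in * k**2 * c_in * c_out

def instance_norm_flops(h_in: int, w_in: int, c_in: int) -> int:
    # Eq. (11): ~8 operations per element for mean, variance, scale and shift.
    return 8 * h_in * w_in * c_in

assert conv_transpose_flops(256, 128, 3, 128, 64) == 2_415_919_104
assert instance_norm_flops(512, 256, 64) == 67_108_864
```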
The ReflectionPad module only rearranges the data and therefore requires no FLOPs. In summary, the computational cost of each module (marked in Figure 3) of the proposed method is listed in Table 2; the overall computational cost is approximately 0.12 TFLOPs. The computational costs of the different methods used in this paper are listed in Table 3. The blind deconvolution runs on an Intel i7-11800H (maximum of 1.76 TFLOPs per second) without GPU acceleration, as it does not require GPU training. The proposed deblurring method, the original Cycle GAN and the DeblurGAN run on a Titan Xp (maximum of 12 TFLOPs per second). Hence, real-time image deblurring can be achieved. The processing times of the different methods are listed in Table 4. Clearly, the proposed method is the fastest of the compared methods.

4. Conclusions

In this paper, we propose an effective deblurring method for MMW images. A linear array with mechanical scanning may generate blurred reconstructed images due to slight shaking of the human body during data collection. We utilized an architecture based on the Cycle GAN to deal with the transformation from unpaired blurred MMW images to focused ones. The identity loss and the MSE loss are integrated to ensure the stable performance of the network. The experimental results demonstrate the superiority of the proposed deblurring method over both the classical blind deconvolution method and the original Cycle GAN without the introduced losses. In future work, we will prune the architecture of the proposed method into a lightweight version that can serve as a plugin module in the post-processing of MMW images.

Author Contributions

Conceptualization, H.L. and S.L.; methodology, H.L. and S.L.; formal analysis, H.L. and S.W.; resources, H.J.; investigation, S.W.; writing—original draft preparation, H.L.; writing—review and editing, S.L., G.Z. and H.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China, grant number 62071043.

Data Availability Statement

Data are available on request due to privacy and ethical restrictions.

Acknowledgments

The authors would like to thank the First Research Institute of the Ministry of Public Security of PRC, Beijing 100048, China, for providing the imaging prototype to implement the experiments.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Zoughi, R. Microwave Non-Destructive Testing and Evaluation; Non-Destructive Evaluation Series 4; Springer: Dordrecht, The Netherlands, 2000.
  2. Cooper, K.B.; Dengler, R.J.; Llombart, N.; Thomas, B.; Siegel, P.H. THz imaging radar for standoff personnel screening. IEEE Trans. Terahertz Sci. Technol. 2011, 1, 169–182.
  3. Shen, X.; Dietlein, C.R.; Grossman, E.; Popovic, Z.; Meyer, F.G. Detection and segmentation of concealed objects in terahertz images. IEEE Trans. Image Process. 2008, 17, 2465–2475.
  4. Sheen, D.; McMakin, D.; Hall, T. Near-field three-dimensional radar imaging techniques and applications. Appl. Opt. 2010, 49, E83–E93.
  5. Jing, H.; Li, S.; Miao, K.; Wang, S.; Cui, X.; Zhao, G.; Sun, H. Enhanced Millimeter-Wave 3-D Imaging via Complex-Valued Fully Convolutional Neural Network. Electronics 2022, 11, 147.
  6. Kupyn, O.; Budzan, V.; Mykhailych, M.; Mishkin, D.; Matas, J. DeblurGAN: Blind motion deblurring using conditional adversarial networks. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 8183–8192.
  7. Szeliski, R. Computer Vision: Algorithms and Applications, 1st ed.; Springer: New York, NY, USA, 2010.
  8. Bar, L.; Kiryati, N.; Sochen, N. Image Deblurring in the Presence of Impulsive Noise. Int. J. Comput. Vis. 2006, 70, 279–298.
  9. Krishnan, D.; Fergus, R. Fast Image Deconvolution Using Hyper-Laplacian Priors. Neural Inf. Process. Syst. 2009, 22, 1033–1041.
  10. Fergus, R.; Singh, B.; Hertzmann, A.; Roweis, S.T.; Freeman, W.T. Removing camera shake from a single photograph. ACM Trans. Graph. 2006, 25, 787–794.
  11. Xu, L.; Zheng, S.; Jia, J. Unnatural L0 Sparse Representation for Natural Image Deblurring. In Proceedings of the 2013 IEEE Conference on Computer Vision and Pattern Recognition, Portland, OR, USA, 23–28 June 2013; pp. 1107–1114.
  12. Babacan, S.D.; Molina, R.; Do, M.N.; Katsaggelos, A.K. Bayesian blind deconvolution with general sparse image priors. In Proceedings of the 12th European Conference on Computer Vision (ECCV), Florence, Italy, 7–13 October 2012; Springer: Berlin/Heidelberg, Germany, 2012.
  13. Wu, C.; Du, H.; Wu, Q.; Zhang, S. Image Text Deblurring Method Based on Generative Adversarial Network. Electronics 2020, 9, 220.
  14. Shin, C.J.; Lee, T.B.; Heo, Y.S. Dual Image Deblurring Using Deep Image Prior. Electronics 2021, 10, 2045.
  15. Sun, J.; Cao, W.; Xu, Z.; Ponce, J. Learning a convolutional neural network for non-uniform motion blur removal. In Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015; pp. 769–777.
  16. Nah, S.; Kim, T.H.; Lee, K.M. Deep multi-scale convolutional neural network for dynamic scene deblurring. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 257–265.
  17. Noroozi, M.; Chandramouli, P.; Favaro, P. Motion deblurring in the Wild. In Proceedings of the German Conference on Pattern Recognition, Basel, Switzerland, 13–15 September 2017.
  18. Zhu, J.Y.; Park, T.; Isola, P.; Efros, A.A. Unpaired image-to-image translation using cycle-consistent adversarial networks. In Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 2242–2251.
  19. Mao, X.; Li, Q.; Xie, H.; Lau, R.; Smolley, S.P. Least squares generative adversarial networks. In Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017.
  20. Taigman, Y.; Polyak, A.; Wolf, L. Unsupervised Cross-Domain Image Generation. arXiv 2016, arXiv:1611.02200.
  21. Srivastava, N.; Hinton, G.; Krizhevsky, A.; Sutskever, I.; Salakhutdinov, R. Dropout: A simple way to prevent neural networks from overfitting. J. Mach. Learn. Res. 2014, 15, 1929–1958.
  22. Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative adversarial nets. Adv. Neural Inf. Process. Syst. 2014, 27, 2672–2680.
  23. Cumming, I.G.; Wong, F.H. Digital Processing of Synthetic Aperture Radar Data: Algorithms and Implementation; Artech House: Boston, MA, USA, 2005.
Figure 1. MMW imaging scheme by a linear array with mechanical scanning.
Figure 2. The schematic diagram of the proposed MMW deblurring Cycle GAN method.
Figure 3. Generative architecture for the MMW deblurring Cycle GAN.
Figure 4. (a) MMW radar imaging prototype, (b) subject with a wrench, (c) the front view and (d) the side view of the measurement scene during MMW echo collection.
Figure 5. Comparison of various MMW images using different deblurring methods: (a–e) human body taking a wrench, (f–j) human body taking a knife and (k–o) human body taking nothing.
Table 1. Comparison of entropy of the results using different deblurring methods.

Objects      | Blurred Image | Blind Deconvolution | Original Cycle GAN | DeblurGAN | Proposed
Wrench (a–e) | 0.7633        | 0.9805              | 0.6090             | 0.9943    | 0.6704
None (f–j)   | 0.9675        | 0.9824              | 0.6150             | 0.9972    | 0.6374
Knife (k–o)  | 0.6914        | 0.9627              | 0.6145             | 0.9801    | 0.6623
Table 2. The computational cost of different modules of the proposed deblurring architecture (modules I–IV as marked in Figure 3).

Module | Computations (FLOPs)
I      | 27,099,402,788
II     | 87,257,097,216
III    | 5,140,119,552
IV     | 423,620,060
Table 3. Comparison of computational cost of different deblurring methods.

Methods             | Computations (FLOPs)
Blind deconvolution | 0.006 T
Original Cycle GAN  | 0.13 T
DeblurGAN           | 0.13 T
Proposed            | 0.12 T
Table 4. Comparison of processing time of different deblurring methods.

Method | Blind Deconvolution | Original Cycle GAN | DeblurGAN | Proposed
Time   | 2.367 s             | 0.338 s            | 0.322 s   | 0.299 s
