Open Access Article
Remote Sens. 2018, 10(6), 846; https://doi.org/10.3390/rs10060846

A Deep Convolutional Generative Adversarial Networks (DCGANs)-Based Semi-Supervised Method for Object Recognition in Synthetic Aperture Radar (SAR) Images

1 Electronic Information Engineering, Beihang University, Beijing 100191, China
2 Space Mechatronic Systems Technology Laboratory, Department of Design, Manufacture and Engineering Management, University of Strathclyde, Glasgow G1 1XJ, UK
3 Department of Informatics, University of Leicester, Leicester LE1 7RH, UK
* Author to whom correspondence should be addressed.
Received: 28 March 2018 / Revised: 25 May 2018 / Accepted: 25 May 2018 / Published: 29 May 2018
(This article belongs to the Section Remote Sensing Image Processing)

Abstract

Synthetic aperture radar automatic target recognition (SAR-ATR) has made great progress in recent years. Most established recognition methods are supervised and depend heavily on image labels; however, labeling radar images is expensive and time-consuming. In this paper, we present a semi-supervised learning method based on the standard deep convolutional generative adversarial networks (DCGANs). We double the discriminator used in DCGANs and train the two discriminators jointly. In this process, we introduce a noisy-data learning theory to reduce the negative impact of incorrectly labeled samples on network performance. We replace the last layer of the classic discriminator with the standard softmax function, which outputs a vector of class probabilities so that multiple object classes can be recognized, and we then modify the loss function to fit the revised network structure. In our model, the two discriminators share the same generator, and we average their values when computing the generator's loss function, which improves the training stability of DCGANs to some extent. We also select higher-quality images from the generated samples for training, further improving network performance. Our method achieves state-of-the-art results on the Moving and Stationary Target Acquisition and Recognition (MSTAR) dataset, and we show that training the networks with the generated images improves recognition accuracy when only a small number of labeled samples are available.
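Since the paper's code is not reproduced on this page, the minimal PyTorch sketch below only illustrates two of the ideas summarized in the abstract: a DCGAN-style discriminator whose final layer is a K-way softmax over target classes, and a generator update that averages the loss contributions of two discriminators sharing one generator. The framework choice, the 64 x 64 single-channel input size, the layer widths, the entropy-style generator objective, and names such as generator_step are illustrative assumptions, not the authors' implementation.

```python
# Sketch only (not the authors' code): softmax-output DCGAN discriminator and a
# generator update averaged over two discriminators, as described in the abstract.
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_CLASSES = 10   # e.g., ten MSTAR target types (assumption)
LATENT_DIM = 100

class Discriminator(nn.Module):
    """DCGAN-style discriminator whose last layer is a K-way class output
    (softmax applied by the caller) instead of a single real/fake sigmoid."""
    def __init__(self, num_classes=NUM_CLASSES):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 64, 4, 2, 1), nn.LeakyReLU(0.2, inplace=True),            # 64 -> 32
            nn.Conv2d(64, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.LeakyReLU(0.2, inplace=True),   # 32 -> 16
            nn.Conv2d(128, 256, 4, 2, 1), nn.BatchNorm2d(256), nn.LeakyReLU(0.2, inplace=True),  # 16 -> 8
            nn.Conv2d(256, 512, 4, 2, 1), nn.BatchNorm2d(512), nn.LeakyReLU(0.2, inplace=True),  # 8 -> 4
        )
        self.classifier = nn.Conv2d(512, num_classes, 4, 1, 0)                      # 4 -> 1

    def forward(self, x):
        return self.classifier(self.features(x)).flatten(1)   # (batch, num_classes) logits

class Generator(nn.Module):
    """Standard DCGAN generator producing 64x64 single-channel images."""
    def __init__(self, latent_dim=LATENT_DIM):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(latent_dim, 512, 4, 1, 0), nn.BatchNorm2d(512), nn.ReLU(True),
            nn.ConvTranspose2d(512, 256, 4, 2, 1), nn.BatchNorm2d(256), nn.ReLU(True),
            nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.ReLU(True),
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(True),
            nn.ConvTranspose2d(64, 1, 4, 2, 1), nn.Tanh(),
        )

    def forward(self, z):
        return self.net(z)

def generator_step(gen, disc_a, disc_b, opt_g, batch_size, device="cpu"):
    """One generator update. The two discriminators' losses on the same batch of
    generated images are averaged, which is one plausible reading of 'take the
    average value of them when computing the loss function of the generator'.
    The confidence (low-entropy) objective below is a stand-in for the paper's
    exact generator loss."""
    z = torch.randn(batch_size, LATENT_DIM, 1, 1, device=device)
    fake = gen(z)

    def confidence_loss(disc):
        p = F.softmax(disc(fake), dim=1)
        return -(p * p.clamp_min(1e-8).log()).sum(dim=1).mean()

    loss_g = 0.5 * (confidence_loss(disc_a) + confidence_loss(disc_b))
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()
    return loss_g.item()
```

A full training loop would alternate this step with supervised cross-entropy updates of each discriminator on the (possibly noisily) labeled SAR chips and adversarial updates on unlabeled and generated ones; the noisy-label handling and the selection of higher-quality generated images described in the abstract are omitted here for brevity.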
Keywords: SAR target recognition; semi-supervised; DCGANs; joint training
This is an open access article distributed under the Creative Commons Attribution (CC BY 4.0) License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
MDPI and ACS Style

Gao, F.; Yang, Y.; Wang, J.; Sun, J.; Yang, E.; Zhou, H. A Deep Convolutional Generative Adversarial Networks (DCGANs)-Based Semi-Supervised Method for Object Recognition in Synthetic Aperture Radar (SAR) Images. Remote Sens. 2018, 10, 846.


Note that from the first issue of 2016, MDPI journals use article numbers instead of page numbers.
