Article

A Deep Convolutional Generative Adversarial Networks (DCGANs)-Based Semi-Supervised Method for Object Recognition in Synthetic Aperture Radar (SAR) Images

1 Electronic Information Engineering, Beihang University, Beijing 100191, China
2 Space Mechatronic Systems Technology Laboratory, Department of Design, Manufacture and Engineering Management, University of Strathclyde, Glasgow G1 1XJ, UK
3 Department of Informatics, University of Leicester, Leicester LE1 7RH, UK
* Author to whom correspondence should be addressed.
Remote Sens. 2018, 10(6), 846; https://doi.org/10.3390/rs10060846
Submission received: 28 March 2018 / Revised: 25 May 2018 / Accepted: 25 May 2018 / Published: 29 May 2018
(This article belongs to the Section Remote Sensing Image Processing)

Abstract

Synthetic aperture radar automatic target recognition (SAR-ATR) has made great progress in recent years. Most of the established recognition methods are supervised and therefore depend heavily on image labels. However, obtaining the labels of radar images is expensive and time-consuming. In this paper, we present a semi-supervised learning method based on the standard deep convolutional generative adversarial networks (DCGANs). We double the discriminator used in DCGANs and utilize the two discriminators for joint training. In this process, we introduce a noisy data learning theory to reduce the negative impact of incorrectly labeled samples on the performance of the networks. We replace the last layer of the classic discriminator with the standard softmax function so that it outputs a vector of class probabilities, which allows us to recognize multiple object types, and we modify the loss function accordingly. In our model, the two discriminators share the same generator, and we take the average of the two when computing the loss function of the generator, which improves the training stability of DCGANs to some extent. We also select higher-quality generated images for training in order to further improve the performance of the networks. Our method achieves state-of-the-art results on the Moving and Stationary Target Acquisition and Recognition (MSTAR) dataset, and we show that using the generated images to train the networks can improve the recognition accuracy when only a small number of labeled samples are available.

1. Introduction

Synthetic Aperture Radar (SAR) can image non-cooperative moving objects, such as aircraft, ships, and celestial objects, over long distances, in all weather conditions and at any time of day, and it is now widely used in civil and military fields [1]. SAR images contain rich target information, but because of their different imaging mechanism, they are not as intuitive as optical images, and it is difficult for the human eye to recognize objects in SAR images accurately. Therefore, SAR automatic target recognition (SAR-ATR) has become an urgent need and a hot topic in recent years.
SAR-ATR mainly involves two aspects: target feature extraction and target recognition. At present, the target features reported in most studies include target size, peak intensity, center distance, and Hu moments. Target recognition methods include template matching, model-based methods, and machine learning methods [2,3,4,5,6,7,8,9,10,11,12,13,14,15]. Machine learning methods have attracted increasing attention because appropriate models can be learned from data. Machine learning methods commonly used for image recognition include support vector machines (SVM), AdaBoost, and Bayesian neural networks [16,17,18,19,20,21,22,23,24,25]. In order to obtain better recognition results, traditional machine learning methods require preprocessing of the images, such as denoising and feature extraction. Fu et al. [26] extracted Hu moments as the feature vectors of SAR images and used them to train an SVM, which achieved better recognition accuracy than training the SVM directly on SAR images. Huan et al. [21] used a non-negative matrix factorization (NMF) algorithm to extract feature vectors of SAR images, and combined SVM and Bayesian neural networks to classify the feature vectors. In these cases, however, how to select and combine features is a difficult problem, and the preprocessing scheme is rather complex. Therefore, although somewhat effective, these methods are not practical.
In recent years, deep learning has achieved great success in the field of object recognition in images. Its advantage lies in the ability to use a large amount of data to train the networks and learn the target features, which avoids complex preprocessing and can also achieve better results. Numerous studies have brought deep learning into the field of SAR-ATR [27,28,29,30,31,32,33,34,35,36,37,38,39]. The most popular and effective deep learning model is the convolutional neural network (CNN), which is based on supervised learning and requires a large number of labeled samples for training. However, in practical applications, only unlabeled samples can be obtained at first, and labeling them manually is costly. Semi-supervised learning enables the label prediction of a large number of unlabeled samples by training with a small number of labeled samples. Traditional semi-supervised methods in the field of machine learning include generative methods [40,41], semi-supervised SVM [42], graph-based semi-supervised learning [43,44], and disagreement-based methods [45]. With the introduction of deep learning, researchers have begun to combine classical statistical methods with deep neural networks to obtain better recognition results and to avoid complicated preprocessing. In this paper, we combine traditional semi-supervised methods with deep neural networks and propose a semi-supervised learning method for SAR automatic target recognition.
We intend to achieve two goals: one is to predict the labels of a large number of unlabeled samples by training with a small number of labeled samples and then to extend the labeled set; the other is to accurately classify multiple object types. To achieve the former, we build on the co-training approach [46]. In each training round, we utilize the labeled samples to train two classifiers, use each classifier to predict the labels of the unlabeled samples, and select those positive samples with high confidence from the newly labeled ones, adding them to the labeled set for the next round of training. We propose a stringent rule for selecting positive samples in order to increase the confidence of the predicted labels. To reduce the negative influence of wrongly labeled samples, we introduce the standard noisy data learning theory [47]. As training proceeds, the recognition performance of the classifier improves, and the number of positive samples selected in each round of training also increases. Since the training process is supervised, we choose a CNN as the classifier due to its high performance on many other recognition tasks.
The core of our proposed method is to extend the labeled sample set with newly labeled samples, and to ensure that the extended labeled sample set enables the classifier to perform better than its previous version. We turn to deep convolutional generative adversarial networks (DCGANs) [48], which have become very popular in recent years in the field of deep learning. The generator can produce fake images that are very similar to real images by learning the features of the real images. We expect to expand the sample set with high-quality fake images for data augmentation to better achieve our goals. A DCGAN contains a generator and a discriminator. We double the discriminator and use the two discriminators for joint training to complete the task of semi-supervised learning. Since the discriminator of DCGANs cannot be used to recognize multiple object types, some adjustments to the network structure are required. Salimans et al. [49] proposed replacing the last layer of the discriminator with the softmax function to output a vector of class probabilities. We draw on this idea and modify the classic loss function to achieve the adjustments. We also take the average value of the two discriminators when computing the loss function of the generator, which improves the training stability to some extent. We show that our method performs better, especially when the number of unlabeled samples is much greater than that of labeled samples (a common scenario). By selecting high-quality synthetically generated images for training, the recognition results are further improved.

2. DCGANs-Based Semi-Supervised Learning

2.1. Framework

The framework of our method is shown in Figure 1. There are two complete DCGANs in the framework, which share one generator and contain two discriminators in total. To recognize multiple object types, we replace the last layer of each discriminator with a softmax function, which outputs a vector of class probabilities. The last value in the vector represents the probability that the input sample is fake, while the others represent the probabilities that the input sample is real and belongs to a certain class. We modify the loss function of the discriminators to adapt to these adjustments, and take the average value of the two discriminators when computing the loss function of the generator. The process of semi-supervised learning is accomplished through joint training of the two discriminators, and the specific steps in each training round are as follows: we first utilize the labeled samples to train the two discriminators, then use each discriminator to predict the labels of the unlabeled samples. We select those positive samples with high confidence from the newly labeled ones, and finally add them to each other's labeled set for the next round of training when certain conditions are satisfied.
The datasets used for training are constructed according to the experiments, and there are two different cases: in the first, the original dataset is directly divided into a labeled sample set and an unlabeled sample set to verify the effectiveness of the proposed semi-supervised method; in the second, specific generated images of high quality are selected as the unlabeled sample set and a portion of the original dataset forms the labeled sample set, in order to verify the effect of using the generated fake images to train the networks.

2.2. MO-DCGANs

The generator is a deconvolutional neural network: its input is a random vector, and it outputs a fake image that is very close to a real image by learning the features of the real images. The discriminator of DCGANs is an improved convolutional neural network, and both fake and real images are fed into it. The output of the discriminator is a number in the range of 0 to 1, which is closer to 1 when the input is a real image and closer to 0 when the input is a fake image. Both the generator and the discriminator are strengthened during the training process.
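To make this structure concrete, the following is a minimal PyTorch sketch of a DCGAN-style generator and discriminator for 64 × 64 single-channel SAR chips (the chip size in Table 1). The layer widths and kernel settings are illustrative assumptions, not the authors' exact architecture; the discriminator here still has the single sigmoid output of standard DCGANs, before the k+1-way modification described next.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Deconvolutional generator: random vector z -> 64x64 fake SAR chip."""
    def __init__(self, z_dim=100, ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(z_dim, ch * 8, 4, 1, 0, bias=False),   # 1x1 -> 4x4
            nn.BatchNorm2d(ch * 8), nn.ReLU(True),
            nn.ConvTranspose2d(ch * 8, ch * 4, 4, 2, 1, bias=False),  # 4x4 -> 8x8
            nn.BatchNorm2d(ch * 4), nn.ReLU(True),
            nn.ConvTranspose2d(ch * 4, ch * 2, 4, 2, 1, bias=False),  # 8x8 -> 16x16
            nn.BatchNorm2d(ch * 2), nn.ReLU(True),
            nn.ConvTranspose2d(ch * 2, ch, 4, 2, 1, bias=False),      # 16x16 -> 32x32
            nn.BatchNorm2d(ch), nn.ReLU(True),
            nn.ConvTranspose2d(ch, 1, 4, 2, 1, bias=False),           # 32x32 -> 64x64
            nn.Tanh(),                                                # fake chip in [-1, 1]
        )

    def forward(self, z):
        return self.net(z.view(z.size(0), -1, 1, 1))

class Discriminator(nn.Module):
    """Convolutional discriminator with a single output close to 1 for real, 0 for fake."""
    def __init__(self, ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, ch, 4, 2, 1), nn.LeakyReLU(0.2, inplace=True),           # 64 -> 32
            nn.Conv2d(ch, ch * 2, 4, 2, 1), nn.LeakyReLU(0.2, inplace=True),      # 32 -> 16
            nn.Conv2d(ch * 2, ch * 4, 4, 2, 1), nn.LeakyReLU(0.2, inplace=True),  # 16 -> 8
            nn.Conv2d(ch * 4, 1, 8, 1, 0),                                        # 8x8 -> 1x1 logit
        )

    def forward(self, x):
        return torch.sigmoid(self.net(x)).view(-1)
```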
In order to recognize multiple object types, we enhance the discriminators. Inspired by Salimans et al. [49], we replace the output of the discriminator with a softmax function and make it a standard classifier for recognizing multiple object types. We name this model multi-output DCGANs (MO-DCGANs). Assume that the random vector $z$ has a uniform noise distribution $P_z(z)$, and $G(z)$ maps it to the data space of the real images; the input $x$ of the discriminator, which is assumed to have a distribution $P_{data}(x, y)$, is a real or fake image with label $y$. The discriminator outputs a $(k+1)$-dimensional vector of logits $l = \{l_1, l_2, \ldots, l_{k+1}\}$, which is finally turned into a $(k+1)$-dimensional vector of class probabilities $p = \{p_1, p_2, \ldots, p_{k+1}\}$ by the softmax function:

$$p_j = \frac{e^{l_j}}{\sum_{i=1}^{k+1} e^{l_i}}, \quad j \in \{1, 2, \ldots, k+1\} \quad (1)$$

A real image will be discriminated as one of the first $k$ classes, and a fake image will be discriminated as the $(k+1)$-th class.
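As a tiny numerical illustration of Equation (1), the sketch below turns a set of made-up logits into class probabilities; the last entry corresponds to the fake class, and the largest of the first $k$ entries gives the predicted real class.

```python
import numpy as np

def softmax(logits):
    e = np.exp(logits - logits.max())        # subtract the max for numerical stability
    return e / e.sum()

k = 10                                        # ten MSTAR vehicle classes
logits = np.array([2.1, 0.3, -1.0, 0.5, 0.0, 1.2, -0.4, 0.7, 0.1, -0.2, -2.5])  # l_1 .. l_{k+1}
p = softmax(logits)                           # p_1 .. p_{k+1}, sums to 1
p_fake = p[-1]                                # probability that the input is a generated image
predicted_class = int(np.argmax(p[:-1]))      # most likely real class if the image is judged real
```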
We formulate the loss function of MO-DCGANs as a standard minimax game:

$$L = \mathbb{E}_{x, y \sim P_{data}(x, y)} \{ D(y \mid x, y < k+1) \} + \mathbb{E}_{x \sim G(z)} \{ D(y \mid G(z), y = k+1) \} \quad (2)$$
We do not take the logarithm of $D(y \mid x)$ directly in Equation (2), because the number of output neurons of the discriminator in our model has increased from 1 to $k+1$, so $D(y \mid x)$ no longer represents the probability that the input is a real image but is instead a loss term corresponding to this more complicated condition. We adopt the cross-entropy form, and $D(y \mid x)$ is then computed as:

$$D(y \mid x) = \sum_{i} y_i \log(p_i) \quad (3)$$

where $y$ denotes the expected class, encoded as a one-hot vector with components $y_i$, and $p_i$ represents the predicted probability that the input sample belongs to class $i$. According to Equation (3), $D(y \mid x, y < k+1)$ can be further expressed as Equation (4) when the input is a real image:
$$D(y \mid x, y < k+1) = \sum_{i=1}^{k} y_i \log(p_i) \quad (4)$$

When the input is a fake image, $D(y \mid x, y = k+1)$ simplifies to:

$$D(y \mid x, y = k+1) = \log(p_{k+1}) \quad (5)$$
Assume that there are $m$ inputs for both the discriminator and the generator within each training iteration. The discriminator is updated by ascending its stochastic gradient:

$$\nabla_{\theta_d} \frac{1}{m} \sum_{i=1}^{m} \left[ D(y \mid x_i, y < k+1) + D(y \mid G(z_i), y = k+1) \right] \quad (6)$$

while the generator is updated by descending its stochastic gradient:

$$\nabla_{\theta_g} \frac{1}{m} \sum_{i=1}^{m} D(y \mid G(z_i), y = k+1) \quad (7)$$
The discriminator and the generator are updated alternately, and their networks are optimized during this process. Therefore, the discriminator can recognize the input sample more accurately, and the generator can make its output images look closer to the real images.
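To make the alternating updates of Equations (6) and (7) concrete, the following is a minimal PyTorch sketch, not the authors' implementation. Here `disc` is assumed to return the $k+1$ logits and `gen` a batch of fake images; since the optimizer minimizes, the ascending update of Equation (6) is realized by minimizing its negation.

```python
import torch
import torch.nn.functional as F

def discriminator_step(disc, gen, x_real, y_real, z, opt_d, k):
    """One update of Equation (6); y_real holds integer class labels in [0, k-1]."""
    log_p_real = F.log_softmax(disc(x_real), dim=1)                  # shape (m, k+1)
    d_real = log_p_real.gather(1, y_real.unsqueeze(1)).mean()        # D(y|x, y < k+1)
    log_p_fake = F.log_softmax(disc(gen(z).detach()), dim=1)
    d_fake = log_p_fake[:, k].mean()                                 # D(y|G(z), y = k+1)
    loss_d = -(d_real + d_fake)      # ascend Equation (6) by minimizing its negation
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

def generator_step(disc, gen, z, opt_g, k):
    """One update of Equation (7): push generated images away from the fake class."""
    log_p_fake = F.log_softmax(disc(gen(z)), dim=1)
    loss_g = log_p_fake[:, k].mean()  # descend Equation (7) directly
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```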

2.3. Semi-Supervised Learning

The purpose of semi-supervised learning is to predict the labels of the unlabeled samples by learning the features of the labeled samples, and to use these newly labeled samples for training to improve the robustness of the networks. The accuracy of the predicted labels has a great influence on the subsequent training results. Correctly labeled samples can be used to optimize the networks, while wrongly labeled samples will corrupt the networks and reduce the recognition accuracy. Therefore, improving the accuracy of the labels is the key to semi-supervised learning. We conduct semi-supervised learning by utilizing the two discriminators for joint training. During this process, the two discriminators learn the same features synchronously, but their network parameters always differ dynamically because their input samples in each round of training are randomly selected. We use these two classifiers with dynamic differences to sample and classify the same batch of samples, and to select a group of positive samples from the newly labeled sample set for training each other. The two discriminators promote each other and improve together. However, the samples labeled in this way have a certain probability of becoming noisy samples, which deteriorates the performance of the networks. In order to eliminate the adverse effects of these noisy samples on the networks as much as possible, we introduce a noisy data learning theory [47]. Two ways are proposed to extend the labeled sample set in our model: one is to label the unlabeled samples from the original real images; the other is to label the generated fake images. The next two parts describe the proposed semi-supervised learning method.

2.3.1. Joint Training

Numerous studies have shown that the DCGANs training process is not stable, which causes the recognition results to fluctuate. By doubling the discriminator in MO-DCGANs and taking the average value of the two discriminators when computing the loss function, the fluctuations can be effectively reduced. This is because the loss function of a single classifier may be subject to large deviations in the training process, while averaging over the two discriminators can cancel the positive and negative deviations, provided that the performance of the two classifiers is similar. Meanwhile, we can use the two discriminators to complete the semi-supervised learning task, which is inspired by the main idea of co-training. The two discriminators share the same generator, each forms an MO-DCGANs with the generator, and thus we have two complete MO-DCGANs in our model. Every fake image from the generator is fed into both discriminators. Let $D_1$ and $D_2$ represent the two discriminators; then Equation (7) becomes (8):

$$\nabla_{\theta_g} \frac{1}{m} \sum_{i=1}^{m} \sum_{j=1}^{2} D_j(y \mid G(z_i), y = k+1) \quad (8)$$
Let $L_1^t = \{(x_1, y_1), (x_2, y_2), \ldots, (x_m, y_m)\}$ and $L_2^t = \{(x_1, y_1), (x_2, y_2), \ldots, (x_m, y_m)\}$ represent the labeled sample sets of $D_1^t$ and $D_2^t$, respectively, and $U_1^t = \{x_1, x_2, \ldots, x_n\}$ and $U_2^t = \{x_1, x_2, \ldots, x_n\}$ the unlabeled sample sets in the $t$-th training round. It should be emphasized that the samples in $L_1^t$ and $L_2^t$ are the same but in different orders, as are those in $U_1^t$ and $U_2^t$. As shown in Figure 2, the specific steps of the joint training are as follows:
(1) utilize $L_1^t$ ($L_2^t$) to train $D_1^t$ ($D_2^t$);
(2) use $D_1^t$ ($D_2^t$) to predict the labels of the samples in $U_2^t$ ($U_1^t$); and
(3) let $D_1^t$ ($D_2^t$) select $p$ positive samples from the newly labeled samples according to certain criteria and add them to $L_2^t$ ($L_1^t$) for the next round of training.
Note that the newly labeled samples will still be regarded as unlabeled samples and will be added back to $U$ in the next round. Therefore, in each round, all of the original unlabeled samples are labeled again, and the selected positive samples differ from round to round. As the number of training rounds increases, the unlabeled samples are fully utilized, and the pool of positive samples grows and diversifies. $D_1$ and $D_2$ are independent of each other in the first two steps. Each time, they select different samples, so they maintain dynamic differences throughout the process. This difference gradually decreases after many rounds of training, once all of the unlabeled samples have been labeled and used to train $D_1$ and $D_2$, so that the labeled sets incorporate the complete feature information of the unlabeled samples. A sketch of one such joint-training round is given below.
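The sketch below organizes one round of steps (1)-(3); `train`, `predict_probs`, and `select_positive_samples` (the confidence rule of Equation (9), implemented further below) are hypothetical helpers, not the authors' code.

```python
def joint_training_round(D1, D2, L1, L2, U1, U2, alpha=2.0):
    """L1/L2: lists of (image, label); U1/U2: lists of images (same samples, different order)."""
    # (1) each discriminator is trained on its own labeled set
    train(D1, L1)
    train(D2, L2)
    # (2) each discriminator predicts labels for the other's unlabeled set
    pseudo_for_L2 = [(x, predict_probs(D1, x)) for x in U2]
    pseudo_for_L1 = [(x, predict_probs(D2, x)) for x in U1]
    # (3) confident positive samples are exchanged for the next round of training
    L2 = L2 + select_positive_samples(pseudo_for_L2, alpha)
    L1 = L1 + select_positive_samples(pseudo_for_L1, alpha)
    return L1, L2
```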
A standard rule is adopted when selecting the positive samples. If the probabilities output by the softmax function are very close to one another, it is not sensible to assign the label with the largest probability to the unlabeled input sample; but if the maximum probability is much larger than the average of all the remaining probabilities, it is reasonable to do so. Based on this, we propose a stringent judging rule: if the largest class probability $P_{\max}$ and the average of all the remaining probabilities satisfy Equation (9), then we determine that the sample belongs to the class corresponding to $P_{\max}$:

$$P_{\max} \geq \alpha \cdot \frac{\sum_{i=1}^{K} P_i - P_{\max}}{K - 1} \quad (9)$$

where $K$ is the total number of classes and $\alpha$ ($\alpha \geq 1$) is a coefficient that measures how much larger $P_{\max}$ must be than the remaining probabilities. The value of $\alpha$ is related to the performance of the networks: the better the network performance, the larger the value of $\alpha$, and the specific value can be adjusted during network training.
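Equation (9) can be implemented directly; the sketch below assumes each pseudo-labeled sample carries its vector of class probabilities over the $K$ real classes.

```python
import numpy as np

def select_positive_samples(pseudo_labeled, alpha=2.0):
    """pseudo_labeled: list of (sample, probs); keep a sample only if Equation (9) holds."""
    selected = []
    for x, probs in pseudo_labeled:
        probs = np.asarray(probs, dtype=float)
        K = probs.size
        p_max = probs.max()
        mean_rest = (probs.sum() - p_max) / (K - 1)   # average of the remaining probabilities
        if p_max >= alpha * mean_rest:                # Equation (9)
            selected.append((x, int(probs.argmax())))
    return selected
```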

2.3.2. Noisy Data Learning

In the process of labeling the unlabeled samples, we often meet wrongly labeled samples, which are regarded as noise and degrade the performance of the network. We follow the application shown in [45], which is based on the noisy data learning theory presented in [47], to reduce the negative effect of the noisy samples. According to the theory, if the labeled sample set $L$ has the probably approximately correct (PAC) property, then the sample size $m$ satisfies:

$$m = \frac{2\mu}{\varepsilon^2 (1 - 2\eta)^2} \ln\left(\frac{2N}{\delta}\right) \quad (10)$$

where $N$ is the size of the newly labeled sample set, $\delta$ is the confidence, $\varepsilon$ is the recognition error rate of the worst-case hypothesis, $\eta$ is an upper bound of the recognition noise rate, and $\mu$ is a constant introduced so that the equation holds.
Let $L^t$ and $L^{t-1}$ denote the samples labeled by the discriminator in the $t$-th and $(t-1)$-th training rounds, and let $|L \cup L^t|$ and $|L \cup L^{t-1}|$ be the sizes of the sample sets $L \cup L^t$ and $L \cup L^{t-1}$, respectively. Let $\eta_L$ denote the noise rate of the original labeled sample set, and $e_t$ the prediction error rate. Then, the total recognition noise rate of $L \cup L^t$ in the $t$-th training round is:

$$\eta_t = \frac{\eta_L |L| + e_t |L^t|}{|L \cup L^t|} \quad (11)$$
If the discriminator is refined by using $L^t$ to train the networks in the $t$-th training round, then $\varepsilon_t < \varepsilon_{t-1}$. In Equation (10), all of the parameters are constant except for $\varepsilon$ and $\eta$, so the equation can only continue to hold when $\eta_t < \eta_{t-1}$. Considering that $\eta_L$ is very small in Equation (11), $\eta_t < \eta_{t-1}$ is bound to be satisfied if $e_t |L^t| < e_{t-1} |L^{t-1}|$. Assuming that $0 \leq e_t, e_{t-1} < 0.5$, when $|L^t|$ is far bigger than $|L^{t-1}|$, we randomly subsample $L^t$ whilst guaranteeing $e_t |L^t| < e_{t-1} |L^{t-1}|$. It has been proved that if Equation (12) holds, where $s$ denotes the size of $L^t$ after subsampling, then $e_t |L^t| < e_{t-1} |L^{t-1}|$ is satisfied:

$$s = \left\lceil \frac{e_{t-1} |L^{t-1}|}{e_t} - 1 \right\rceil \quad (12)$$

To ensure that $|L^t|$ is still bigger than $|L^{t-1}|$ after subsampling, $L^{t-1}$ should satisfy:

$$|L^{t-1}| > \frac{e_t}{e_{t-1} - e_t} \quad (13)$$

Since it is hard to estimate $e_t$ on the unlabeled samples, we utilize the labeled samples to compute it. Assuming that the number of correctly labeled samples among the $m$ samples of the labeled set is $n$, then $e_t$ can be computed as:

$$e_t = 1 - \frac{n}{m} \quad (14)$$
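The bookkeeping of Equations (11)-(14) reduces to a few scalar formulas; the hedged helpers below mirror the notation above. The ceiling in Equation (12) and the disjointness of $L$ and $L^t$ in Equation (11) are our assumptions.

```python
import math

def error_rate(n_correct, m_total):
    """Equation (14): e_t = 1 - n/m, estimated on the labeled samples."""
    return 1.0 - n_correct / m_total

def noise_rate(eta_L, size_L, e_t, size_Lt):
    """Equation (11), assuming L and L^t are disjoint so |L U L^t| = |L| + |L^t|."""
    return (eta_L * size_L + e_t * size_Lt) / (size_L + size_Lt)

def subsample_size(e_prev, size_prev, e_t):
    """Equation (12): target size s of L^t after random subsampling."""
    return math.ceil(e_prev * size_prev / e_t - 1)

def prev_round_large_enough(e_prev, e_t, size_prev):
    """Equation (13); requires e_t < e_prev for the bound to be meaningful."""
    return size_prev > e_t / (e_prev - e_t)
```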
The proposed semi-supervised learning algorithm is presented in Algorithm 1. It should be emphasized that the process of the semi-supervised training is only related to the two discriminators, so the training part of the generator is omitted here.
Algorithm 1. Semi-supervised learning based on multi-output DCGANs (MO-DCGANs).
Inputs: original labeled training sets $L_1$ and $L_2$; original unlabeled training sets $U_1$ and $U_2$; the newly labeled (prediction) sample sets $l_1$ and $l_2$; the discriminators $D_1$ and $D_2$; the error rates $err_1$ and $err_2$; the update flags of the classifiers $update_1$ and $update_2$.
Outputs: two vectors of class probabilities $h_1$ and $h_2$.
1. Initialization: for $i = 1, 2$: $update_i \leftarrow \mathrm{True}$, $err'_i \leftarrow 0.5$, $l'_i \leftarrow \varnothing$.
2. Joint training: repeat until 400 epochs; for $i = 1, 2$:
(1) If $update_i = \mathrm{True}$, then $L_i \leftarrow L_i \cup l_i$.
(2) Use $L_i$ to train $D_i$ and obtain $h_i$.
(3) Allow $D_i$ to label $p_i$ positive samples in $U$ and add them to $l_i$.
(4) Allow $D_i$ to measure $err_i$ on $L_i$.
(5) If $|l'_i| = 0$, then $|l'_i| \leftarrow \lfloor err_i / (err'_i - err_i) + 1 \rfloor$.
(6) If $|l'_i| < |l_i|$ and $err_i |l_i| < err'_i |l'_i|$, then $update_i \leftarrow \mathrm{True}$.
(7) If $|l'_i| > err_i / (err'_i - err_i)$, then $l_i \leftarrow \mathrm{Subsample}(l_i, \lceil err'_i |l'_i| / err_i - 1 \rceil)$ and $update_i \leftarrow \mathrm{True}$.
(8) If $update_i = \mathrm{True}$, then $err'_i \leftarrow err_i$ and $l'_i \leftarrow l_i$.
3. Output: $h_1$, $h_2$.
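For one discriminator, the decision logic of steps (1)-(8) can be sketched as follows, reusing the hypothetical helpers from the previous sketches plus a hypothetical `measure_error`; this is an illustrative reading of Algorithm 1, not the authors' code, and it assumes $err < err'$ whenever the ratios are evaluated and that the subsample size does not exceed the number of newly labeled samples.

```python
import math
import random

def algorithm1_step(D, L, U, l_prev, err_prev, update, alpha=2.0):
    """One training epoch for one discriminator; l_prev/err_prev come from the last accepted round."""
    if update:
        L = L + l_prev                                              # (1) extend the labeled set
    train(D, L)                                                     # (2) supervised training
    l_new = select_positive_samples(
        [(x, predict_probs(D, x)) for x in U], alpha)               # (3) label positive samples
    err = measure_error(D, L)                                       # (4) estimate err_i on L_i
    update = False
    size_prev = len(l_prev) if l_prev else math.floor(err / (err_prev - err) + 1)   # (5)
    if size_prev < len(l_new) and err * len(l_new) < err_prev * size_prev:           # (6)
        update = True
    elif size_prev > err / (err_prev - err):                                          # (7)
        l_new = random.sample(l_new, math.ceil(err_prev * size_prev / err - 1))
        update = True
    if update:                                                      # (8) remember this round
        err_prev, l_prev = err, l_new
    return L, l_prev, err_prev, update
```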

3. Experiments and Discussions

3.1. MSTAR Dataset

We perform our experiments on the Moving and Stationary Target Acquisition and Recognition (MSTAR) database, which was jointly funded by the Defense Advanced Research Projects Agency (DARPA) and the U.S. Air Force Research Laboratory (AFRL). Ten classes of vehicle objects in the MSTAR database are chosen for our experiments, i.e., 2S1, ZSU234, BMP2, BRDM2, BTR60, BTR70, D7, ZIL131, T62, and T72. The SAR and corresponding optical images of each class are shown in Figure 3.

3.2. Experiments with Original Training Set under Different Unlabeled Rates

In the first experiment, we partition the original training set, which contains 2747 SAR target chips at a 17° depression angle, into labeled and unlabeled sample sets under different unlabeled rates: 20%, 40%, 60%, and 80%. We then use all 2425 SAR target chips at a 15° depression angle for testing. The training and test sets use different depression angles because the object features differ with depression angle, which helps verify the generalization ability of our model. Table 1 lists the detailed information of the target chips involved in this experiment, and Table 2 lists the specific numbers of labeled and unlabeled samples under different unlabeled rates. We use L to denote the labeled sample set, U the unlabeled sample set, and NDLT the noisy data learning theory. L+U represents the results obtained by joint training alone, while L+U+NDLT represents the results obtained by using joint training and the noisy data learning theory together. We first utilize the labeled samples for supervised training and obtain the supervised recognition accuracy (SRA). Then, we simultaneously use the labeled and unlabeled samples for semi-supervised training and obtain the semi-supervised recognition accuracy (SSRA). Finally, we calculate the improvement of SSRA over SRA. Both SRA and SSRA are calculated by averaging the accuracies of D1 and D2 over the 150th to 250th training rounds to reduce accuracy fluctuations. In this experiment, we take α = 2.0 in Equation (9). The experimental results are shown in Table 3.
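For reference, a partition such as the one in Table 2 can be produced by a simple stratified split; the per-class stratification and the loader format below are assumptions, not a description of the authors' pipeline.

```python
import random
from collections import defaultdict

def split_by_unlabeled_rate(samples, unlabeled_rate, seed=0):
    """samples: list of (chip, class_label); returns (labeled, unlabeled_without_labels)."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for chip, label in samples:
        by_class[label].append((chip, label))
    labeled, unlabeled = [], []
    for label, group in by_class.items():
        rng.shuffle(group)
        n_unlabeled = round(len(group) * unlabeled_rate)   # e.g., 80% of each class withheld
        unlabeled.extend(chip for chip, _ in group[:n_unlabeled])  # labels are withheld
        labeled.extend(group[n_unlabeled:])
    return labeled, unlabeled
```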
Comparing the results of L+U and L+U+NDLT, we can conclude that the recognition accuracy is improved after introducing the noisy data learning theory. This is because the noisy data degrade the network performance, and the noisy data learning theory reduces this negative effect and therefore brings better recognition results. Comparing the results of L and L+U+NDLT, it can be concluded that the networks learn more feature information after the unlabeled samples are used for training, so the results of L+U+NDLT are higher than those of L. We also observe that as the unlabeled rate increases, the average SSRA decreases, while the improvement over SRA becomes larger. It should be noted that the recognition results of the ten classes differ greatly. Some classes can achieve high recognition accuracy with only a small number of labeled samples; therefore, their recognition accuracy is not significantly improved after the unlabeled samples participate in training, such as 2S1, T62, and ZSU234. Their accuracy improvements under different unlabeled rates fall within 3%, but their SRAs and SSRAs are still over 98%. Other classes obtain a large accuracy improvement by utilizing a large number of unlabeled samples for semi-supervised learning, and the more unlabeled samples, the larger the improvement. Taking BTR70 as an example, its accuracy improvement is 13.94% under an 80% unlabeled rate, while its SRA and SSRA are only 84.94% and 96.78%, respectively.
To compare the experimental results directly, we plot the recognition accuracy curves of L, L+U, and L+U+NDLT for the individual unlabeled rates, as shown in Figure 4. The three curves in Figure 4a are very close, L+U and L+U+NDLT are gradually higher than L in (b,c), and both L+U and L+U+NDLT are clearly above L in (d). This indicates that the larger the unlabeled rate, the more accuracy improvement can be obtained. Since semi-supervised learning may produce incorrectly labeled samples, the newly labeled samples cannot perform as well as the original labeled samples, so the recognition is better with a lower unlabeled rate (i.e., a higher labeled rate). The experimental results show that the semi-supervised method proposed in this paper is most suitable for cases where the number of labeled samples is very small, which is in line with our expectation.

3.3. Quality Evaluation of Generated Samples

One important reason why we adopt DCGANs is that we hope to use the generated unlabeled images for network training in order to improve the performance of our model when there are only a small number of labeled samples. In this way, we not only make full use of the existing labeled samples, but also obtain better results than using the labeled samples alone for training. We analyze the quality of the generated samples before using them. We randomly select 20%, 30%, and 40% labeled samples (550, 824, and 1099 images, respectively) from the original training set for supervised training, then extract the images generated in the 50th, 150th, 250th, 350th, and 450th epochs. It should be noted that in this experiment, we want to extract as many high-quality generated images as possible during the training process to improve the network performance. Therefore, we do not limit the number of these high-quality images, and the unlabeled rates cannot be guaranteed to be 40%, 60%, and 80%, respectively. Figure 5a shows the original SAR images, and (b–d) show the images generated with 1099, 824, and 550 labeled samples, respectively. In (b,c), each group of images from left to right is generated in the 50th, 150th, 250th, 350th, and 450th epochs.
We can see that as the training epoch increases, the quality of the generated images gradually improves. In Figure 5b, objects in the generated images are already roughly outlined in the 250th epoch, and the generated images are very similar to the original images in the 350th epoch. In Figure 5c, objects in the generated images do not become clear until the 450th epoch. In Figure 5d, the quality of the generated images is still poor in the 450th epoch.
In order to confirm the observations described above, we select 1000 images from each group of the generated images shown in Figure 5b–d and input them into a well-trained discriminator, then count the total number of samples that satisfy the rule shown in Equation (9), as presented in Section 2.3.1. We still use α = 2.0 in this formula. We consider the samples that satisfy the rule to be of high quality and usable for training the model. The results listed in Table 4 are consistent with what we expect.

3.4. Experiments with Unlabeled Generated Samples under Different Unlabeled Rates

This experiment verifies the impact of the high-quality generated images on the performance of our model. We confirmed in Section 3.2 that the semi-supervised recognition method proposed in this paper leads to satisfactory results when the number of labeled samples is small, so this experiment focuses on that case. The labeled samples in this experiment are selected from the original training set, and the generated images are used as the unlabeled samples; the testing set is unchanged. According to the conclusions of Section 3.3, we select 1099, 824, and 550 labeled samples from the original training set for supervised training, then extract the high-quality generated samples in the 350th, 450th, and 550th epochs, respectively, and utilize them for semi-supervised training. It should be emphasized that since the number of selected high-quality images is uncertain, the total amount of labeled and unlabeled samples no longer remains at 2747. The experimental results are shown in Table 5.
It can be found that the average SSRA obtains a larger improvement with fewer labeled samples. Different objects vary greatly in accuracy improvement. The generated samples of some types can provide more feature information, so our model performs better after using these samples for training, and the recognition accuracy is also improved significantly, such as for BRDM2 and D7; their accuracy improvements increase significantly as the number of labeled samples decreases. Note that BRDM2 performs worse with 1099 labeled samples, but much better with 550 labeled samples. This is because the quality of the generated images is much worse than that of the real images; therefore, the generated images make the recognition worse when there is already a large number of labeled samples. When the number of labeled samples is very small, using a large number of generated samples can effectively improve the SSRA, but the SSRA cannot exceed the SRA obtained with somewhat more labeled samples, such as the SRA of 824 or 1099 labeled samples. However, the generated samples of some types become worse as the number of labeled samples decreases, and thus the improvement tends to be smaller, such as for BTR70. Meanwhile, some generated samples are not suitable for network training at all, such as those of ZIL131: its accuracy is reduced after the generated samples participate in training, and we believe that the overall accuracy would be improved by removing these generated images.
We have found that when the number of labeled samples is less than 500, there are almost no high-quality generated samples. Therefore, we do not consider using the generated samples for training in this case.

3.5. Comparison Experiment with Other Methods

In this part, we compare the performance of our method with several other semi-supervised learning methods, including label propagation (LP) [50], progressive semi-supervised SVM with diversity (PS3VM-D) [42], Triple-GAN [51], and improved-GAN [49]. LP establishes a similarity matrix and propagates the labels of the labeled samples to the unlabeled samples according to the degree of similarity. PS3VM-D selects reliable unlabeled samples to extend the original labeled training set. Triple-GAN consists of a generator, a discriminator, and a classifier, in which the generator and the classifier characterize the conditional distributions between images and labels, and the discriminator solely focuses on identifying fake image-label pairs. Improved-GAN adjusts the network structure of GANs, which enables the discriminator to recognize multiple object types. Table 6 lists the accuracies of each method under different unlabeled rates.
We can conclude from Table 6 that our method performs better than the other methods. There are two main reasons for this: one is that CNNs are used as the classifiers in our model, which can extract richer features than traditional machine learning methods such as LP and PS3VM-D, as well as GAN-based methods whose classifiers are not CNNs, such as Triple-GAN and improved-GAN; the other is that we have introduced the noisy data learning theory, which has been shown to reduce the negative effect of noisy data and therefore bring better recognition results. It can also be found that as the unlabeled rate increases, the performance of all methods degrades. Especially when the unlabeled rate increases to 80%, the recognition accuracy of LP and improved-GAN decreases to 73.17% and 87.52%, respectively, meaning that these two methods cannot cope with situations where there are few labeled samples. In contrast, PS3VM-D, Triple-GAN, and our method can achieve high recognition accuracy with a small number of labeled samples, and our method has the best performance at every unlabeled rate. In practical applications, labeled samples are often difficult to obtain, so a good semi-supervised method should be able to use a small number of labeled samples to obtain high recognition accuracy. In this sense, our method is promising.

4. Discussion

4.1. Choice of Parameter α

In this section, we further discuss the choice of the parameter α. The value of α places restrictions on the confidence of the predicted labels of the unlabeled samples: the larger the value of α, the higher the confidence. When using the generated images as the unlabeled samples, we can select the generated images of higher quality for network training by taking a larger value of α. Therefore, the value of α plays an important role in our method. According to the experimental results shown in Section 3.2, when the unlabeled rate is small, such as 20%, the unlabeled samples have little impact on the performance of the model; so, in this section, we only analyze the impact of α on the experimental results when the unlabeled rates are 40%, 60%, and 80%. We determine the best value of α by one-way analysis of variance (one-way ANOVA). For each unlabeled rate, we specify the values 1.0, 1.5, 2.0, 2.5, and 3.0 for α, and perform five sets of experiments with 100 rounds of training per set.
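The one-way ANOVA itself is a standard computation; a minimal SciPy sketch is given below, where each group would hold the 100 per-round recognition accuracies obtained with one value of α. The accuracy arrays here are placeholders, not the experimental data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
acc_by_alpha = {                          # placeholder accuracies, 100 training rounds per setting
    1.0: rng.uniform(0.95, 0.98, 100),
    1.5: rng.uniform(0.95, 0.98, 100),
    2.0: rng.uniform(0.96, 0.99, 100),
    2.5: rng.uniform(0.95, 0.98, 100),
}
f_stat, p_value = stats.f_oneway(*acc_by_alpha.values())
print(f"F = {f_stat:.2f}, Prob > F = {p_value:.4g}")   # a large F and a small p indicate alpha matters
```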
The ANOVA table is shown in Table 7. It should be noted that almost no unlabeled samples can be selected for training when α = 3, so we discard the corresponding experimental data. The columns of Table 7 report the source of the difference (intergroup or intragroup), the sum of squared deviations (SS), the degrees of freedom (df), the mean squared deviations (MS), the F-statistic (F), and the detection probability (Prob > F). It can be seen from Table 7 that the intergroup MS is far greater than the intragroup MS, indicating that the intragroup difference is small while the intergroup difference is large. Meanwhile, F is much larger than 1 and Prob is much less than 0.05, which also supports that the intergroup difference is significant. The intergroup difference is caused by the different values of α, so we can conclude that the value of α has a great influence on the experimental results.
To compare the experimental results for different values of α directly, we draw boxplots of the recognition accuracy under different unlabeled rates, as shown in Figure 6. We use red, blue, yellow, and green boxes to represent the recognition results when α is 1.0, 1.5, 2.0, and 2.5, respectively. It can be found from Figure 6 that the median line of the yellow box is higher than that of the other boxes at every unlabeled rate, showing that the average recognition accuracy is highest when α = 2. This is because, when the value of α is small, the confidence of the labels is not guaranteed and more wrongly labeled samples may be involved in the training; when the value is large, only a small number of high-confidence samples can be selected for training and the unlabeled samples are not fully utilized. In Figure 6a,b, the yellow boxes have smaller widths and heights, which indicates more concentrated experimental data and a more stable training process; in Figure 6c, the width and height of the yellow box are larger. Therefore, we choose α = 2 for all unlabeled rates to obtain satisfactory recognition results.

4.2. Performance Evaluation

4.2.1. ROC Curve

We have compared the recognition results of different methods on the MSTAR database. However, the comparison results alone cannot explain the generalization capability of our method. In this section, we compare the performance of the different methods through receiver operating characteristic (ROC) curves [52]. Following Section 4.1, we let α = 2 and plot the ROC curves of these methods for unlabeled rates of 40%, 60%, and 80%, as shown in Figure 7.
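One plausible way to obtain such curves from the multi-class discriminator is a one-vs-rest reduction with scikit-learn, sketched below; using the softmax probability of the target class as the score is our assumption, since the paper does not state how the outputs were reduced to a single score.

```python
import numpy as np
from sklearn.metrics import roc_curve, auc

def roc_for_class(y_true, prob_matrix, class_index):
    """y_true: integer labels; prob_matrix: (n_samples, k) softmax outputs over the real classes."""
    scores = prob_matrix[:, class_index]                          # one-vs-rest score for this class
    labels = (np.asarray(y_true) == class_index).astype(int)      # 1 for the target class, 0 otherwise
    fpr, tpr, _ = roc_curve(labels, scores)
    return fpr, tpr, auc(fpr, tpr)
```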
It can be found that our method achieves better performance compared with the other methods. In Figure 7a–c, the areas under the ROC curves of our method are close to 1, and the TPR values are greater than 0.8 while the FPR remains low. The areas under the ROC curves of the other methods are smaller than that of our method. We can also learn from Figure 7 that, as the unlabeled rate increases, the areas under the ROC curves of these methods decrease, and the smaller the unlabeled rate, the better the performance of our method. The experimental results confirm that our method has a better generalization capability.

4.2.2. Training Time

In our method, after each round of training, the newly labeled samples with high label confidence are selected for the next round. The network performance varies under different unlabeled rates, so the total number of selected newly labeled samples differs, and therefore the time for each round of training also differs. In this section, we analyze the training time of the proposed method [53]. We calculate the average training time from the 200th epoch to the 400th epoch under different unlabeled rates. The main configuration of the computer is: GPU: Tesla K20c, 705 MHz, 5 GB RAM; operating system: Ubuntu 16.04; running software: Python 2.7. The results are shown in Table 8.
It can be found that, as the unlabeled rate increases, the training time tends to decrease. This is because a larger unlabeled rate means fewer original labeled samples; the network performs better with more original labeled samples, so more newly labeled samples can be selected, which increases the training time. This conclusion is consistent with the previous analysis.

5. Conclusions

In this study, we presented a DCGANs-based semi-supervised learning framework for SAR automatic target recognition. In this framework, we doubled the discriminator of DCGANs and utilized the two discriminators for semi-supervised joint training. The last layer of the discriminator is replaced by a softmax function, and its loss function is also adjusted accordingly. Experiments on the MSTAR dataset have led to the following conclusions:
  • Introducing the noisy data learning theory into our method can reduce the adverse effect of wrongly labeled samples on the network and significantly improve the recognition accuracy.
  • Our method can achieve high recognition accuracy on the MSTAR dataset, and it performs especially well when there are a small number of labeled samples and a large number of unlabeled samples. When the unlabeled rate increases from 20% to 80%, the overall accuracy improvement increases from 0 to 5%, and the overall recognition accuracies remain over 95%.
  • The experimental results have confirmed that when the number of labeled samples is small, our model performs better after utilizing the high-quality generated images for network training. The fewer the labeled samples, the higher the accuracy improvement. However, when there are fewer than 500 labeled samples, the high-quality generated samples are too few to benefit the system.

Author Contributions

Conceptualization, F.G., H.Z. and Y.Y.; Methodology, F.G., Y.Y. and H.Z.; Software, F.G., Y.Y. and E.Y.; Validation, F.G., Y.Y., J.S. and H.Z.; Formal Analysis, F.G., Y.Y., J.S. and J.W.; Investigation, F.G., Y.Y., J.S., J.W. and H.Z.; Resources, F.G., Y.Y. and H.Z.; Data Curation, F.G., Y.Y., J.S. and H.Z.; Writing-Original Draft Preparation, F.G., Y.Y., H.Z. and E.Y.; Writing-Review & Editing, F.G., Y.Y., H.Z.; Visualization, F.G., Y.Y., H.Z.; Supervision, F.G., Y.Y., H.Z. and E.Y.; Project Administration, F.G., Y.Y.; Funding Acquisition, F.G., Y.Y.

Funding

This research was funded by the National Natural Science Foundation of China (61771027; 61071139; 61471019; 61171122; 61501011; 61671035). E. Yang was funded in part by the RSE-NNSFC Joint Project (2017–2019) (6161101383) with China University of Petroleum (Huadong). Huiyu Zhou was funded by UK EPSRC under Grant EP/N011074/1, and Royal Society-Newton Advanced Fellowship under Grant NA160342.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Wang, G.; Shuncheng, T.; Chengbin, G.; Na, W.; Zhaolei, L. Multiple model particle filter track-before-detect for range ambiguous radar. Chin. J. Aeronaut. 2013, 26, 1477–1487. [Google Scholar] [CrossRef]
  2. Dong, G.; Kuang, G.; Wang, N.; Zhao, L.; Lu, J. SAR Target Recognition via Joint Sparse Representation of Monogenic Signal. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2015, 8, 3316–3328. [Google Scholar] [CrossRef]
  3. Sun, Y.; Du, L.; Wang, Y.; Wang, Y.; Hu, J. SAR Automatic Target Recognition Based on Dictionary Learning and Joint Dynamic Sparse Representation. IEEE Geosci. Remote Sens. Lett. 2016, 13, 1777–1781. [Google Scholar] [CrossRef]
  4. Han, P.; Wu, J.; Wu, R. SAR Target feature extraction and recognition based on 2D-DLPP. Phys. Procedia 2012, 24, 1431–1436. [Google Scholar] [CrossRef]
  5. Zhao, B.; Zhong, Y.; Zhang, L. Scene classification via latent Dirichlet allocation using a hybrid generative/discriminative strategy for high spatial resolution remote sensing imagery. Remote Sens. Lett. 2013, 4, 1204–1213. [Google Scholar] [CrossRef]
  6. Zhong, Y.; Zhu, Q.; Zhang, L. Scene classification based on the multifeature fusion probabilistic topic model for high spatial resolution remote sensing imagery. IEEE Trans. Geosci. Remote Sens. 2015, 53, 6207–6222. [Google Scholar] [CrossRef]
  7. Zhu, Q.; Zhong, Y.; Zhang, L.; Li, D. Scene Classification Based on the Sparse Homogeneous-Heterogeneous Topic Feature Model. IEEE Trans. Geosci. Remote Sens. 2018, 56, 2689–2703. [Google Scholar] [CrossRef]
  8. Han, J.; Zhang, D.; Cheng, G.; Guo, L.; Ren, J. Object detection in optical remote sensing images based on weakly supervised learning and high-level feature learning. IEEE Trans. Geosci. Remote Sens. 2015, 53, 3325–3337. [Google Scholar] [CrossRef]
  9. Li, J.; Bioucas-Dias, J.M.; Plaza, A. Semi-supervised discriminative random field for hyperspectral image classification. In Proceedings of the 2012 4th Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing (WHISPERS), Shanghai, China, 4–7 June 2012; pp. 1–4. [Google Scholar]
  10. Zhong, P.; Wang, R. Learning conditional random fields for classification of hyperspectral images. IEEE Trans. Image Process. 2010, 19, 1890–1907. [Google Scholar] [CrossRef] [PubMed]
  11. Wang, Q.; Zhang, F.; Li, X. Optimal Clustering Framework for Hyperspectral Band Selection. IEEE Trans. Geosci. Remote Sens. 2018, 1–13. [Google Scholar] [CrossRef]
  12. Starck, J.L.; Elad, M.; Donoho, D.L. Image decomposition via the combination of sparse representations and a variational approach. IEEE Trans. Image Process. 2005, 14, 1570–1582. [Google Scholar] [CrossRef] [PubMed]
  13. Tang, Y.; Lu, Y.; Yuan, H. Hyperspectral image classification based on three-dimensional scattering wavelet transform. IEEE Trans. Geosci. Remote Sens. 2015, 53, 2467–2480. [Google Scholar] [CrossRef]
  14. Zhou, J.; Cheng, Z.S.X.; Fu, Q. Automatic target recognition of SAR images based on global scattering center model. IEEE Trans. Geosci. Remote Sens. 2011, 49, 3713–3729. [Google Scholar]
  15. Fauvel, M.; Benediktsson, J.A.; Chanussot, J.; Sveinsson, J.R. Spectral and spatial classification of hyperspectral data using SVMs and morphological profiles. IEEE Trans. Geosci. Remote Sens. 2008, 46, 3804–3814. [Google Scholar] [CrossRef]
  16. Hearst, M.A. Support Vector Machines; IEEE Educational Activities Department: Piscataway, NJ, USA, 1998; pp. 18–28. [Google Scholar]
  17. Friedman, J.; Hastie, T.; Tibshirani, R. Special Invited Paper. Additive Logistic Regression: A Statistical View of Boosting. Ann. Stat. 2000, 28, 337–374. [Google Scholar] [CrossRef]
  18. Chatziantoniou, A.; Petropoulos, G.P.; Psomiadis, E. Co-Orbital Sentinel 1 and 2 for LULC Mapping with Emphasis on Wetlands in a Mediterranean Setting Based on Machine Learning. Remote Sens. 2017, 9, 1259. [Google Scholar] [CrossRef]
  19. Guo, D.; Chen, B. SAR image target recognition via deep Bayesian generative network. In Proceedings of the IEEE International Workshop on Remote Sensing with Intelligent Processing, Shanghai, China, 19–21 May 2017; pp. 1–4. [Google Scholar]
  20. Ji, X.X.; Zhang, G. SAR Image Target Recognition with Increasing Sub-classifier Diversity Based on Adaptive Boosting. In Proceedings of the IEEE Sixth International Conference on Intelligent Human-Machine Systems and Cybernetics, Hangzhou, China, 26–27 August 2014; pp. 54–57. [Google Scholar]
  21. Ruohong, H.; Yun, P.; Mao, K. SAR Image Target Recognition Based on NMF Feature Extraction and Bayesian Decision Fusion. In Proceedings of the Second Iita International Conference on Geoscience and Remote Sensing, Qingdao, China, 28–31 August 2010; pp. 496–499. [Google Scholar]
  22. Wang, L.; Li, Y.; Song, K. SAR image target recognition based on GBMLWM algorithm and Bayesian neural networks. In Proceedings of the IEEE CIE International Conference on Radar, Guangzhou, China, 10–13 October 2017; pp. 1–5. [Google Scholar]
  23. Wang, Y.; Duan, H. Classification of Hyperspectral Images by SVM Using a Composite Kernel by Employing Spectral, Spatial and Hierarchical Structure Information. Remote Sens. 2018, 10, 441. [Google Scholar] [CrossRef]
  24. Wei, G.; Qi, Q.; Jiang, L.; Zhang, P. A New Method of SAR Image Target Recognition based on AdaBoost Algorithm. In Proceedings of the IEEE International Geoscience and Remote Sensing Symposium, Boston, MA, USA, 7–11 July 2008. [Google Scholar] [CrossRef]
  25. Xue, X.; Zeng, Q.; Zhao, R. A new method of SAR image target recognition based on SVM. In Proceedings of the 2005 IEEE International Geoscience and Remote Sensing Symposium, Seoul, Korea, 29–29 July 2005; pp. 4718–4721. [Google Scholar]
  26. Yan, F.; Mei, W.; Chunqin, Z. SAR Image Target Recognition Based on Hu Invariant Moments and SVM. In Proceedings of the IEEE International Conference on Information Assurance and Security, Xi’an, China, 18–20 August 2009; pp. 585–588. [Google Scholar]
  27. Huang, Z.; Pan, Z.; Lei, B. Transfer Learning with Deep Convolutional Neural Network for SAR Target Classification with Limited Labeled Data. Remote Sens. 2017, 9, 907. [Google Scholar] [CrossRef]
  28. Kim, S.; Song, W.-J.; Kim, S.-H. Double Weight-Based SAR and Infrared Sensor Fusion for Automatic Ground Target Recognition with Deep Learning. Remote Sens. 2018, 10, 72. [Google Scholar] [CrossRef]
  29. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet classification with deep convolutional neural networks. In Proceedings of the Advances in Neural Information Processing Systems, Lake Tahoe, NV, USA, 3–8 December 2012; Curran Associates Inc.: Nice, France, 2012; pp. 1097–1105. [Google Scholar]
  30. Liu, Y.; Zhong, Y.; Fei, F.; Zhu, Q.; Qin, Q. Scene Classification Based on a Deep Random-Scale Stretched Convolutional Neural Network. Remote Sens. 2018, 10, 444. [Google Scholar] [CrossRef]
  31. Ding, J.; Chen, B.; Liu, H.; Huang, M. Convolutional Neural Network with Data Augmentation for SAR Target Recognition. IEEE Geosci. Remote Sens. Lett. 2016, 13, 364–368. [Google Scholar] [CrossRef]
  32. Chen, S.; Wang, H.; Xu, F.; Jin, Y.Q. Target Classification using the Deep Convolutional Networks for SAR Images. IEEE Trans. Geosci. Remote Sens. 2016, 54, 4806–4817. [Google Scholar] [CrossRef]
  33. Masci, J.; Meier, U.; Ciresan, D.; Schmidhuber, J. Stacked Convolutional Auto-Encoders for Hierarchical Feature Extraction. In Artificial Neural Networks and Machine Learning, Proceedings of the ICANN 2011: 21st International Conference on Artificial Neural Networks, Espoo, Finland, 14–17 June 2011; Springer: Heidelberg, Germany, 2011; pp. 52–59. [Google Scholar]
  34. Zhang, Y.; Lee, K.; Lee, H.; EDU, U. Augmenting Supervised Neural Networks with Unsupervised Objectives for Large-Scale Image classification. In Proceedings of the Machine Learning Research, New York, NY, USA, 20–22 June 2016; Volume 48, pp. 612–621. [Google Scholar]
  35. Lin, Z.; Ji, K.; Kang, M.; Leng, X.; Zou, H. Deep Convolutional Highway Unit Network for SAR Target Classification with Limited Labeled Training Data. IEEE Geosci. Remote Sens. Lett. 2017, 14, 1091–1095. [Google Scholar] [CrossRef]
  36. Chen, Y.; Lin, Z.; Zhao, X.; Wang, G.; Gu, Y. Deep learning-based classification of hyperspectral data. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2014, 7, 2094–2107. [Google Scholar] [CrossRef]
  37. Ma, X.; Wang, H.; Geng, J. Spectral–Spatial Classification of Hyperspectral Image Based on Deep Auto-Encoder. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2016, 9, 4073–4085. [Google Scholar] [CrossRef]
  38. Zhong, Y.; Fei, F.; Liu., Y.; Zhao, B.; Jiao, H.; Zhang, P. SatCNN: Satellite Image Dataset Classification Using Agile Convolutional Neural Networks. Remote Sens. Lett. 2017, 8, 136–145. [Google Scholar] [CrossRef]
  39. Wang, Q.; Wan, J.; Yuan, Y. Deep Metric Learning for Crowdedness Regression. IEEE Trans. Circuits Syst. Video Technol. 2017. [Google Scholar] [CrossRef]
  40. Shahshahani, B.M.; Landgrebe, D.A. The effect of unlabeled samples in reducing the small sample size problem and mitigating the Hughes phenomenon. IEEE Trans. Geosci. Remote Sens. 1994, 32, 1087–1095. [Google Scholar] [CrossRef]
  41. Pan, Z.; Qiu, X.; Huang, Z.; Lei, B. Airplane Recognition in TerraSAR-X Images via Scatter Cluster Extraction and Reweighted Sparse Representation. IEEE Geosci. Remote Sens. Lett. 2017, 14, 112–116. [Google Scholar] [CrossRef]
  42. Persello, C.; Bruzzone, L. Active and Semisupervised Learning for the Classification of Remote Sensing Images. IEEE Trans. Geosci. Remote Sens. 2014, 52, 6937–6956. [Google Scholar] [CrossRef]
  43. Blum, A.; Chawla, S. Learning from Labeled and Unlabeled Data using Graph Mincuts. In Proceedings of the Eighteenth International Conference on Machine Learning, Williamstown, MA, USA, 28 June–1 July 2001; Morgan Kaufmann Publishers Inc.: San Francisco, CA, USA, 2001; pp. 19–26. [Google Scholar]
  44. Jebara, T.; Wang, J.; Chang, S.F. Graph construction and b-matching for semi-supervised learning. In Proceedings of the 26th International Conference on Machine Learning (ICML 2009), Montreal, QC, Canada, 14–18 June 2009; pp. 441–448. [Google Scholar]
  45. Zhou, Z.H.; Li, M. Tri-training: Exploiting unlabeled data using three classifiers. IEEE Trans. Knowl. Data Eng. 2005, 17, 1529–1541. [Google Scholar] [CrossRef]
  46. Blum, A.; Mitchell, T. Combining labeled and unlabeled data with co-training. In Proceedings of the Conference on Computational Learning Theory, Madison, WI, USA, 24–26 July 1998; pp. 92–100. [Google Scholar]
  47. Angluin, D.; Laird, P. Learning from noisy examples. Mach. Learn. 1988, 2, 343–370. [Google Scholar] [CrossRef]
  48. Radford, A.; Metz, L.; Chintala, S. Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks. arXiv, 2015; arXiv:1511.06434. [Google Scholar]
  49. Salimans, T.; Goodfellow, I.; Zaremba, W.; Cheung, V.; Radford, A.; Chen, X. Improved Techniques for Training GANs. arXiv, 2016; arXiv:1606.03498. [Google Scholar]
  50. Wang, F.; Zhang, C. Label Propagation through Linear Neighborhoods. IEEE Trans. Knowl. Data Eng. 2008, 20, 55–67. [Google Scholar]
  51. Li, C.; Xu, K.; Zhu, J.; Zhang, B. Triple Generative Adversarial Nets. arXiv, 2016; arXiv:1703.02291. [Google Scholar]
  52. Fawcett, T. Roc Graphs: Notes and Practical Considerations for Researchers; Technical Report HPL-2003-4; HP Labs: Bristol, UK, 2006. [Google Scholar]
  53. Senthilnath, J.; Sindhu, S.; Omkar, S.N. GPU-based normalized cuts for road extraction using satellite imagery. J. Earth Syst. Sci. 2014, 123, 1759–1769. [Google Scholar] [CrossRef]
Figure 1. Framework of the deep convolutional generative adversarial networks (DCGANs)-based semi-supervised learning method.
Figure 2. The process of joint training.
Figure 3. Optical images and corresponding Synthetic Aperture Radar (SAR) images of ten classes of objects in the Moving and Stationary Target Acquisition and Recognition (MSTAR) database.
Figure 4. Recognition accuracy curves of L, L+U and L+U+NDLT: (ad) correspond to 20%, 40%, 60%, and 80% unlabeled rate, respectively.
Figure 5. Original and generated SAR images: (a) Original SAR images; (b) 1099 original labeled images; (c) 824 original labeled images; and (d) 550 original labeled images. In (b,c), each group of images from left to right is generated in the 50th, 150th, 250th, 350th, and 450th epoch. Units of the coordinates are pixels.
Figure 6. Boxplots of recognition accuracy: (ac) correspond to unlabeled rate 40%, 60%, and 80%, respectively.
Figure 7. Receiver operating characteristic (ROC) curves of recognition accuracy: (ac) correspond to 40%, 60%, and 80% unlabeled rate, respectively.
Table 1. Detailed information of the MSTAR dataset used in our experiments.
Types       Class     Serial No.    Size (Pixels)    Training Set                Testing Set
                                                     Depression    No. Images    Depression    No. Images
Artillery   2S1       B_01          64 × 64          17°           299           15°           274
            ZSU234    D_08          64 × 64          17°           299           15°           274
Truck       BRDM2     E_71          64 × 64          17°           298           15°           274
            BTR60     K10YT_7532    64 × 64          17°           256           15°           195
            BMP2      SN_9563       64 × 64          17°           233           15°           195
            BTR70     C_71          64 × 64          17°           233           15°           196
            D7        92V_13015     64 × 64          17°           299           15°           274
            ZIL131    E_12          64 × 64          17°           299           15°           274
Tank        T62       A_51          64 × 64          17°           299           15°           273
            T72       #A64          64 × 64          17°           232           15°           196
Sum         —         —             —                —             2747          —             2425
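For readers re-tabulating the dataset, a quick sanity check confirms that the per-class counts above sum to the totals in the last row (and to the 2747 training samples reused in Table 2). The following lines are only a reader-side Python sketch that restates the column values; they are not part of the authors' code.

```python
# Sanity check: the per-class counts in Table 1 sum to the reported totals.
training = [299, 299, 298, 256, 233, 233, 299, 299, 299, 232]  # 2S1 ... T72
testing = [274, 274, 274, 195, 195, 196, 274, 274, 273, 196]
print(sum(training), sum(testing))  # 2747 2425
```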
Table 2. Specific number of the labeled and unlabeled samples under different unlabeled rates.
Unlabeled Rate    L       U       Total
20%               2197    550     2747
40%               1648    1099    2747
60%               1099    1648    2747
80%               550     2197    2747
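As a hypothetical illustration of how such a partition could be drawn (the function and variable names below are placeholders, not the authors' code), the labeled/unlabeled split at a given unlabeled rate can be produced by randomly sampling indices over the 2747 training chips; the paper's exact rounding (e.g., a per-class split) may differ from this global one by a sample or so.

```python
# Hypothetical sketch: drawing a labeled/unlabeled partition of the 2747 MSTAR
# training chips at a given unlabeled rate, as tabulated above.
import numpy as np

def split_labeled_unlabeled(num_samples: int, unlabeled_rate: float, seed: int = 0):
    """Return (labeled_indices, unlabeled_indices) for a random split."""
    rng = np.random.default_rng(seed)
    indices = rng.permutation(num_samples)
    num_unlabeled = int(num_samples * unlabeled_rate)
    return indices[num_unlabeled:], indices[:num_unlabeled]

labeled_idx, unlabeled_idx = split_labeled_unlabeled(2747, 0.80)
print(len(labeled_idx), len(unlabeled_idx))  # 550 2197, matching the 80% row of Table 2
```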
Table 3. Recognition accuracy (%) and relative improvements (%) of our semi-supervised learning method under different unlabeled rates. The best accuracies are indicated in bold in each column.
Objects     Unlabeled Rate 20%                               Unlabeled Rate 40%
            L        L+U               L+U+NDLT              L        L+U               L+U+NDLT
            SRA      SSRA     imp      SSRA     imp          SRA      SSRA     imp      SSRA     imp
2S1         99.74    99.76    0.02     99.56    −0.18        99.71    99.77    0.07     99.75    0.04
BMP2        97.75    96.62    −1.16    98.07    0.33         97.59    96.65    −0.97    98.36    0.79
BRDM2       96.32    96.04    −0.29    97.13    0.84         94.94    93.04    −2.00    98.61    3.87
BTR60       99.07    98.88    −0.19    98.88    −0.19        98.58    98.70    0.13     99.02    0.45
BTR70       96.31    96.45    0.13     96.40    0.08         94.28    95.27    1.05     97.05    2.93
D7          99.28    98.15    −1.14    99.38    0.10         98.88    98.48    −0.40    99.68    0.81
T62         98.90    99.46    0.57     98.79    −0.11        98.93    99.27    0.34     99.11    0.17
T72         98.53    99.06    0.54     98.93    0.41         97.95    98.40    0.45     99.28    1.35
ZIL131      98.86    97.10    −1.78    98.29    −0.57        97.62    97.20    −0.43    98.84    1.25
ZSU234      99.15    98.92    −0.23    99.45    0.30         98.60    99.49    0.90     99.70    1.11
Average     98.39    98.04    −0.35    98.49    0.10         97.71    97.63    −0.08    98.94    1.26

Objects     Unlabeled Rate 60%                               Unlabeled Rate 80%
            L        L+U               L+U+NDLT              L        L+U               L+U+NDLT
            SRA      SSRA     imp      SSRA     imp          SRA      SSRA     imp      SSRA     imp
2S1         99.36    99.69    0.33     99.83    0.47         99.23    99.82    0.59     99.85    0.62
BMP2        95.80    96.18    0.40     97.58    1.85         92.48    95.64    3.42     97.80    5.75
BRDM2       89.01    92.54    3.97     98.40    10.55        75.02    77.26    2.98     83.09    10.76
BTR60       98.67    98.89    0.21     99.20    0.54         95.48    98.02    2.66     98.94    3.62
BTR70       91.27    87.91    −3.69    94.57    3.61         84.94    87.51    3.02     96.78    13.94
D7          97.57    99.27    1.74     99.78    0.26         90.85    90.18    −0.74    98.83    8.79
T62         98.60    99.07    0.48     99.20    0.61         98.16    99.08    0.93     99.07    0.93
T72         95.88    98.84    3.08     99.27    3.53         91.76    94.79    3.30     98.95    7.84
ZIL131      92.75    96.94    4.52     97.96    5.62         86.93    82.41    −5.21    83.78    −3.63
ZSU234      98.63    99.64    1.03     99.67    1.06         97.23    99.72    2.55     99.69    2.53
Average     95.75    96.90    1.19     98.55    2.92         91.21    92.44    1.35     95.68    4.90
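The imp columns in Tables 3 and 5 appear consistent with a relative improvement over the supervised baseline, i.e., imp = (SSRA − SRA)/SRA × 100; this interpretation is an assumption inferred from the tabulated values, not stated in this section. The short Python check below (a reader-side sketch, not the authors' code) reproduces the BRDM2 entry at the 80% unlabeled rate.

```python
# Reader-side check: "imp" appears to be the relative improvement over the
# supervised baseline, imp = (SSRA - SRA) / SRA * 100 (an assumption).
def relative_improvement(sra: float, ssra: float) -> float:
    return (ssra - sra) / sra * 100.0

# BRDM2 at the 80% unlabeled rate, L+U+NDLT column: tabulated imp is 10.76.
print(round(relative_improvement(75.02, 83.09), 2))  # 10.76
```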
Table 4. The number of high-quality samples in 1000 generated samples from the 50th, 150th, 250th, 350th, and 450th epoch with different numbers of labeled samples.
Epoch    Number of Labeled Samples
         1099    824     550
50       0       0       0
150      0       0       0
250      23      0       0
350      945     447     6
450      969     874     551
Table 5. Recognition accuracy (%) and relative improvements (%) obtained with our semi-supervised learning method for different numbers of original labeled samples. The best accuracies are indicated in bold in each column.
Objects     Number of Original Labeled Samples
            1099                          824                           550
            SRA      SSRA     imp         SRA      SSRA     imp         SRA      SSRA     imp
2S1         99.54    99.94    0.19        99.31    99.71    0.40        99.06    99.82    0.76
BMP2        94.60    95.14    0.58        95.36    96.78    1.49        92.57    90.35    −2.39
BRDM2       93.07    88.67    −4.72       87.67    91.17    4.00        74.66    85.51    14.52
BTR60       99.11    98.57    −0.55       98.19    98.11    −0.07       95.95    97.16    1.26
BTR70       91.95    95.84    4.23        88.57    91.25    3.02        85.48    86.67    1.39
D7          99.18    99.78    0.60        96.36    98.72    2.45        91.68    96.48    5.23
T62         98.53    99.00    0.48        99.14    99.21    0.07        98.64    98.26    −0.39
T72         97.61    98.28    0.69        95.73    95.51    −0.23       90.67    94.33    4.05
ZIL131      95.43    94.04    −1.46       91.23    93.68    2.68        87.62    85.21    −2.74
ZSU234      98.71    98.53    −0.18       98.65    98.65    0.00        97.25    98.21    0.99
Average     96.77    96.76    0.00        95.02    96.28    1.33        91.36    93.20    2.01
Table 6. Recognition accuracy (%) of LP, PS³VM-D, Triple-GAN, Improved-GAN, and our method with different unlabeled rates.
Method          Unlabeled Rate
                20%      40%      60%      80%
LP              96.05    95.97    94.11    92.04
PS³VM-D         96.11    96.02    95.67    95.01
Triple-GAN      96.46    96.13    95.97    95.70
Improved-GAN    98.07    97.26    95.02    87.52
Our Method      98.14    97.97    97.22    95.72
Table 7. ANOVA table under different unlabeled rates.
Unlabeled Rate    Source        SS        df     MS        F        Prob > F
40%               Intergroup    0.00081   3      0.00027   33.62    2.23 × 10⁻¹⁹
                  Intragroup    0.00316   396    0.00001   —        —
                  Total         0.00397   399    —         —        —
60%               Intergroup    0.00055   3      0.00018   11.80    2.03 × 10⁻⁷
                  Intragroup    0.00619   396    0.00002   —        —
                  Total         0.00674   399    —         —        —
80%               Intergroup    0.00149   3      0.00050   27.11    5.76 × 10⁻¹⁶
                  Intragroup    0.00726   396    0.00002   —        —
                  Total         0.00875   399    —         —        —
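For readers who want to reproduce this kind of analysis, the degrees of freedom (3 between groups, 396 within, 399 total) are what a one-way ANOVA over four groups of 100 accuracy measurements each would yield. The following Python sketch illustrates the computation with scipy.stats.f_oneway on simulated accuracies; the group means and data are placeholders, not the measurements behind Table 7.

```python
# Hypothetical sketch of a one-way ANOVA like the one summarized in Table 7.
# Assumption: four method groups with 100 accuracy measurements each, so that
# df_between = 4 - 1 = 3 and df_within = 400 - 4 = 396, as in the table.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Placeholder accuracy samples for four methods (not the paper's measurements).
groups = [rng.normal(loc=m, scale=0.003, size=100) for m in (0.960, 0.962, 0.975, 0.957)]

f_stat, p_value = stats.f_oneway(*groups)
print(f"F = {f_stat:.2f}, Prob > F = {p_value:.2e}")

# The sums of squares broken out as in the ANOVA table:
all_vals = np.concatenate(groups)
grand_mean = all_vals.mean()
ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
print(f"SS_between = {ss_between:.5f} (df = 3), SS_within = {ss_within:.5f} (df = 396)")
```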
Table 8. Training time under different unlabeled rates.
Unlabeled Rate    Training Time (Sec/Epoch)    Total Epochs
20%               40.71                        200
40%               40.21                        200
60%               39.79                        200
80%               38.80                        200
