Article

Aspect-Aided Dynamic Non-Negative Sparse Representation-Based Microwave Image Classification

1 College of Communication Engineering, Chongqing University, Chongqing 400044, China
2 Department of Communication Commanding, Chongqing Communication Institute, Chongqing 400035, China
* Author to whom correspondence should be addressed.
Sensors 2016, 16(9), 1413; https://doi.org/10.3390/s16091413
Submission received: 29 June 2016 / Revised: 17 August 2016 / Accepted: 29 August 2016 / Published: 2 September 2016
(This article belongs to the Section Physical Sensors)

Abstract:
Classification of target microwave images is an important application in many areas such as security and surveillance. For the task of microwave image classification, a recognition algorithm based on aspect-aided dynamic non-negative least square (ADNNLS) sparse representation is proposed. Firstly, an aspect sector is determined, the center of which is the estimated aspect angle of the testing sample. The training samples in the aspect sector are divided into active atoms and inactive atoms by smooth self-representative learning. Secondly, for each testing sample, the corresponding active atoms are selected dynamically, thereby establishing a dynamic dictionary. Thirdly, the testing sample is represented with $\ell_1$-regularized non-negative sparse representation under the corresponding dynamic dictionary. Finally, the class label of the testing sample is identified by use of the minimum reconstruction error. Verification of the proposed algorithm was conducted using the Moving and Stationary Target Acquisition and Recognition (MSTAR) database, which was acquired by synthetic aperture radar. Experimental results validated that the proposed approach was able to capture the local aspect characteristics of microwave images effectively, thereby improving the classification performance.

1. Introduction

Microwave imaging sensors have gained extensive application in bio-tissue image sensing, concealed weapon detection, and aerial remote sensing [1,2,3], in which the first and foremost objective is target microwave image classification. The present study focuses on target classification in images collected by a microwave imaging radar. Microwave imaging of targets is fundamentally based on the electromagnetic scattering mechanism, so the images are usually not as detailed as optical images. Moreover, microwave imaging is very sensitive to the aspect angle and depression angle, so changes in these angles can cause significant changes in the images. Figure 1 illustrates the aspect angle and depression angle of a microwave imaging radar. In Figure 2, the images of a BMP2 target at varying aspect angles are given at a constant depression angle of 17°. As shown, the image content differs profoundly across aspect angles. These factors increase the difficulty of microwave image recognition [4,5].
There are two primary types of target classification technology for microwave imaging radar: the template-based method [6] and the model-based method [7]. The template-based method extracts features and compares them with stored templates generated from the training images, thereby identifying the unknown test image as the class of the matched template. The problem is that background clutter in microwave images interferes profoundly with template matching. The sensitivity to aspect angle and depression angle also causes difficulties in template matching. In the model-based method, SAR images are described using a statistical model based on scattering mechanisms, and the target's type is then determined from the maximum posterior probability of the model parameters. Because parameter estimation is difficult, this process can easily produce weak correlation between training and testing samples, causing the method to fail. In addition, these methods have much worse classification performance under extended operating conditions, because the working parameters of training samples and testing samples are significantly different. Other optional methods include discriminative tensor analysis, non-negative matrix factorization (NMF), etc. [8,9].
In recent years, sparse representation theory has been widely adopted in image processing and pattern recognition. The principle is to represent an observed signal using a linear combination of a series of known signals called atoms. Under a sparsity constraint on the representation coefficients (minimal $\ell_0$-norm), a unique solution is obtained, and the target type is then determined by minimizing the reconstruction error over every class of training samples. Sparse representation has been used extensively in various recognition tasks, such as facial recognition, speech recognition, and hyperspectral image classification [10,11,12]. Sparse representation-based recognition has undergone in-depth development according to the different requirements of image recognition. Fang et al. investigated a multi-task sparse learning algorithm [13]. In a previous study by Zhang et al., the kernel technique and sparse representation were combined into a kernel sparse representation model for non-linear processing, which was then used in facial recognition and hyperspectral image classification [14]. Meanwhile, $\ell_1$-regularized non-negative least square (NNLS) sparse representation was employed in the classification of gene data [15,16]. The NNLS sparse representation model possesses the advantages of clear physical meaning, high classification accuracy, and rapid calculation.
In microwave radar image recognition, several studies have focused on sparse representation-based classification. For example, Zhang et al. proposed a joint sparse representation target classification method by evaluating the correlation between multi-view microwave images [17]. On this basis, Chen et al. presented an improved joint sparse representation using low-rank matrix recovery [18]. Dong et al. studied monogenic signal joint sparse representation-based target recognition for microwave image sensors, and further developed sparse representation of monogenic signals on Grassmann manifolds [19,20]. Liu et al. investigated microwave image recognition via multiple sparse Dempster-Shafer fusion [21]. Liu et al. introduced fusion of sparse representation and support vector machines for microwave radar image target recognition [22]. Cao et al. proposed a target classification method based on joint sparse representation of multi-view SAR images over a locally adaptive dictionary [23]. Hou et al. investigated SAR image classification with hierarchical sparse representation and multisize patch features [24]. Recently, several techniques based on sparse representation have also been proposed for polarimetric SAR image classification [25,26,27,28].
In current microwave image recognition methods involving sparse representation theory, training samples of all aspect angles from 0° to 360° are taken as dictionary atoms. The implicit assumption is that all-aspect microwave images of one type of target lie in a single linear subspace. In the presence of large differences in aspect angle, this assumption is inappropriate due to the sensitivity of microwave images to aspect angle. Microwave images are, in essence, characterized by the electromagnetic scattering mechanism of the target. Accordingly, only microwave images with approximately the same aspect angle are strongly correlated, while those with different aspect angles are weakly correlated. This is in fact the aspect-dependent structural characteristic of microwave images. For example, images at a 6° aspect angle correlate closely with images in the 1°–11° aspect range, and the correlation with images at other aspect angles is weak, especially when the difference in aspect angle is large. Thus, the sparse representation of a testing image should only involve training samples with high aspect correlation to the testing image, rather than all SAR training images. Likewise, a dictionary matrix constituted by all 0°–360° training samples is not optimal: the excessive number of dictionary entries renders computation and storage unnecessarily taxing.
The main contribution of the current study is a microwave image recognition method based on aspect-aided dynamic non-negative least square (ADNNLS) sparse representation. Figure 3 shows a schematic view of the proposed algorithm. First, the aspect angle of a testing sample is estimated. Taking the estimated aspect angle as the center, a sector is then defined. Training samples of all targets in the sector are considered effective dictionary atoms; the others, not in the sector, are null atoms and are replaced with zero matrices. However, estimates of the aspect angle are subject to errors. To prevent actual dictionary atoms from falling outside the effective sector, the sector cannot be too small, so some atoms in the sector still have weak correlation with the testing sample. In order to further narrow down the range of effective atoms, they are divided into two groups via smooth self-representation clustering. The first group takes the testing sample as its clustering center, and the second takes the training sample furthest from the testing sample as its center. It is worth noting that the clustering includes the testing sample itself, not only the training samples in the sector; in this way, the selection of effective atoms is enhanced. Only the effective atoms in the testing sample's group are then preserved. These are here called active atoms. The remaining atoms are set to zero matrices. Hence, the corresponding dictionary atoms of different testing samples need to be updated dynamically, which is why the method is called aspect-dependent dynamic sparse representation. The size of the dictionary does not change: the active atoms keep their original values and the invalid atoms become zero matrices. Then $\ell_1$-regularized non-negative least square sparse representation is used to perform a regression representation of the testing sample.
Decision making for the testing sample is then performed based on the principle of minimum reconstruction error. The advantage of the proposed model is that the quantity of effective data in the training samples is reduced and the interference of invalid atoms with large differences in aspect angle is eliminated, thereby improving the recognition accuracy. Additionally, the large number of zero matrices in the dynamic dictionary helps reduce the computation time and data storage space required in sparse solving. Based on the Moving and Stationary Target Acquisition and Recognition (MSTAR) public database, several experiments were conducted to evaluate the performance of the proposed algorithm. The target microwave images in the database cover 0°–360° aspect angles, which enabled verification of the proposed aspect-aided dynamic non-negative least square sparse representation. The results were then compared to those of state-of-the-art algorithms.
This paper is organized as follows: Section 2 describes the algorithm for aspect-aided dynamic selection of active atoms. In Section 3, the non-negative least square sparse representation algorithm is reviewed, and the microwave image recognition algorithm based on aspect-aided dynamic non-negative least square sparse representation is presented. In Section 4, experiments carried out on the MSTAR database and comparisons between the proposed algorithm and current algorithms are described. Conclusions are given in Section 5.

2. Aspect-Aided Dynamic Active Atoms Selection

The characteristics of target microwave images obtained from a microwave imaging radar are closely related to the aspect angle. When the aspect angle changes severely, the pixel distribution in the target images also shows abrupt changes. In this study, only the influence of the aspect angle was analyzed, because it has the most dramatic influence on microwave image characteristics and the depression angle did not change much in the experimental data.

2.1. Estimation of the Testing Sample Aspect Angle

As noted above, the aspect angle of a testing sample needs to be estimated in the microwave image recognition algorithm based on ADNNLS sparse representation. It is also needed in template-matching microwave image recognition [6]. Current estimation methods, which rely only on the information in the testing sample image, mainly use techniques such as the Radon transform to estimate the target aspect angle [29]; they do not exploit training samples with known aspect angles. In practice, microwave image recognition tasks provide large numbers of training samples with known aspect angles, which can be used to estimate the testing sample's aspect angle. Thus, we present a simple but effective estimation method based on the training sample aspect angles; this is an innovation of the current study. The precondition of the method is that the training sample aspect angles are known and distributed evenly across 0°–360°. The principle is that the testing sample is closely correlated with training samples of similar aspect angle. Hence, the correlation coefficients between the testing sample and the training samples are calculated, and the aspect angle of the training sample with the largest correlation coefficient is taken as the estimate for the testing sample. Assuming that the testing sample is A and a certain training sample is B, the correlation coefficient r is calculated as follows:
$$r = \frac{\sum_{m}\sum_{n}\left(A_{mn}-\bar{A}\right)\left(B_{mn}-\bar{B}\right)}{\sqrt{\left(\sum_{m}\sum_{n}\left(A_{mn}-\bar{A}\right)^{2}\right)\left(\sum_{m}\sum_{n}\left(B_{mn}-\bar{B}\right)^{2}\right)}} \qquad (1)$$
Here, $\bar{A}$ represents the mean of A, $\bar{B}$ the mean of B, and $A_{mn}$, $B_{mn}$ the pixel values at row m and column n of A and B, respectively.
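As a concrete illustration, Equation (1) and the nearest-correlation rule can be sketched in a few lines of Python (the function and variable names are our own, not from the paper):

```python
import numpy as np

def correlation_coefficient(A, B):
    """2D correlation coefficient of Equation (1) between images A and B."""
    A = A - A.mean()
    B = B - B.mean()
    return (A * B).sum() / np.sqrt((A ** 2).sum() * (B ** 2).sum())

def estimate_aspect_angle(test_img, train_imgs, train_angles):
    """Return the aspect angle of the most correlated training sample."""
    r = [correlation_coefficient(test_img, t) for t in train_imgs]
    return train_angles[int(np.argmax(r))]
```

A testing image is simply compared against every training image, and the angle of the best match is returned.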
The microwave images of target BMP2-9563 were used for verification. Correlation analyses were performed between one testing image of target BMP2-9563 at 15° depression angle and 48.49° aspect angle and the training images of targets BMP2-9563, BTR70-C71, and T72-132 at 17° depression angle and all aspect angles. Figure 4 shows the variation curve of the correlation coefficients. As shown, the training sample with the largest correlation coefficient was the 34th training sample of BMP2-9563. The corresponding aspect angle was 46.49°, which was taken as the estimate of the testing sample aspect angle; this is an error of 2° relative to the true aspect angle. Figure 5 shows the errors for all testing samples of BMP2-9563. The error was within about 5° in most cases and slightly above 10° in some. Data analysis showed that the mean estimated aspect angle error for the BMP2-9563 testing samples was 0.19°, and the standard deviation was 2.76°. A similar trend was found for the other testing samples in the experiments.
This estimation method has the following advantages: (1) it fully utilizes the aspect angle information of the training samples; (2) the calculation is simple; and (3) it avoids the problem of distinguishing the target head from the tail when only the testing sample information is used.
Once the aspect angle of the testing sample has been estimated, it is taken as the center of a sector. For example, if the estimated aspect angle of the testing sample is 40°, a sector whose aspect angle range is 25°–55° can be determined. All types of training sample subsets in the sector are considered effective atoms; the other training samples, not in the sector, are set to zero matrices. A preliminary dictionary is composed of the effective atoms and the zero-matrix atoms.
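A minimal sketch of this sector selection, assuming aspect angles are given in degrees and handling the 0°/360° wrap-around explicitly (names are illustrative, not from the paper):

```python
import numpy as np

def sector_mask(train_angles, center_deg, half_width=15.0):
    """True for atoms whose aspect angle lies inside the sector centered at
    center_deg; the 0/360 degree wrap-around is handled by the modulo trick."""
    diff = np.abs((np.asarray(train_angles, float) - center_deg + 180.0) % 360.0 - 180.0)
    return diff <= half_width

def preliminary_dictionary(X, mask):
    """Keep effective atoms; replace out-of-sector columns with zeros so the
    dictionary keeps its original size."""
    return np.where(mask[np.newaxis, :], X, 0.0)
```

Zeroing the out-of-sector columns (rather than deleting them) preserves the column indexing, which is why the dictionary size stays fixed as the text describes.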

2.2. Enhanced Atoms Selection in Aspect Sector Based on Smooth Self-Representation

The corresponding aspect sector was determined based on the aspect angle estimate of the testing sample, and the effective atoms were chosen. However, as shown above, the maximum estimation error might be as high as 15°. In order to prevent the omission of any effective atoms, the sector size should be at least ±15°, i.e., covering 30°. Hence, the effective atom subsets contain redundancy. The smooth self-representation clustering method was adopted in this study to further refine these subsets. The clustering reported by Shafiee et al. was applied only to the training samples themselves [30]; here, clustering was performed over both the testing sample and the training samples. The aim of clustering was to divide the effective atoms into two groups: one containing the atoms closely correlated with the testing sample, and the other containing atoms with low correlation to the testing sample. Therefore, clustering was conducted with two centers: the current testing sample and the atom furthest from it.
The smooth self-representation clustering algorithm presented by Han et al. was used in this study [31]. In clustering, the grouping effect is very important. The enforced grouping effect conditions are introduced in the smooth self-representation algorithm. This is what allows the algorithm to achieve such a strong clustering effect.
Assume the dataset of the testing sample and effective training atoms is $X = [x_1, x_2, \ldots, x_n] \in \mathbb{R}^{m \times n}$, where each sample $x_i$, $i = 1, \ldots, n$, is arranged as a column vector; $x_1$ is the testing sample and $x_n$ is the atom farthest from the testing sample. The clustering process is as follows [31]:
First, the number of subspaces was set to d = 2, with the two clustering centers $x_1$ and $x_n$. Then a K-NN graph W was built, and the corresponding Laplacian matrix $\tilde{L}$ was calculated [31].
It is then necessary to calculate the self-representation matrix U, i.e., to solve the following optimization problem:
$$\min_{U} \; f(U) = \alpha \left\| X - XU \right\|_{F}^{2} + \mathrm{tr}\left( U \tilde{L} U^{T} \right) \qquad (2)$$
Here, $\alpha > 0$ is a control parameter, $\|\cdot\|_F$ represents the Frobenius norm, and $\mathrm{tr}(\cdot)$ is the trace of a matrix. Equation (2) is a smooth convex programming problem. According to [31], the optimal solution $U^*$ can be found by solving the Sylvester equation in Equation (3):
$$\alpha X^{T} X U^{*} + U^{*} \tilde{L} = \alpha X^{T} X \qquad (3)$$
Thereafter, the affinity matrix J was calculated using Equation (4):
$$J = \left( \left| U^{*} \right| + \left| U^{*T} \right| \right) / 2 \qquad (4)$$
Then, the final clustering result was obtained via the spectral clustering algorithm [31,32].
According to the clustering results, the atoms in the group centered on the testing sample were the refined effective atoms, here called active atoms. The atoms of the other group were replaced with zero vectors. Thus, the active atoms and the zero-vector atoms constitute the final dictionary.
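Under the assumption of a standard unnormalized k-NN graph Laplacian, the Sylvester-equation step of Equation (3) and the affinity matrix of Equation (4) can be sketched as follows (the final spectral clustering step [31,32] is omitted here; the helper names are our own):

```python
import numpy as np
from scipy.linalg import solve_sylvester

def knn_laplacian(X, k=4):
    """Unnormalized graph Laplacian of a symmetrized k-NN graph over the
    columns of X (one possible choice of Laplacian; [31] may differ)."""
    D2 = ((X[:, :, None] - X[:, None, :]) ** 2).sum(axis=0)
    W = np.zeros_like(D2)
    for i in range(D2.shape[0]):
        W[i, np.argsort(D2[i])[1:k + 1]] = 1.0   # k nearest neighbors, self excluded
    W = np.maximum(W, W.T)                        # symmetrize
    return np.diag(W.sum(axis=1)) - W

def smooth_self_representation(X, L, alpha=1.0):
    """Solve alpha*X^T*X*U + U*L = alpha*X^T*X (Equation (3)) via a Sylvester
    solver, then form the affinity matrix J of Equation (4)."""
    G = alpha * (X.T @ X)
    U = solve_sylvester(G, L, G)
    J = (np.abs(U) + np.abs(U.T)) / 2.0
    return U, J
```

The affinity matrix J would then be fed to an off-the-shelf spectral clustering routine with d = 2 clusters.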

3. Non-Negative Least Square Sparse Representation Classifier

3.1. Basic Sparse Representation Classifier

Lately, sparse representation has gained widespread application in image restoration, pattern recognition, and hyperspectral image classification [33]. The success of the sparse signal model mainly lies in representing an observed signal sparsely with a group of known observation samples (a dictionary). The optimal sparse representation can be solved via convex optimization [34,35].
Given sufficient training samples of k types (k = 1, …, K), with $X_k = [x_{k,1}, x_{k,2}, \ldots, x_{k,n_k}] \in \mathbb{R}^{m \times n_k}$, the samples are arranged as column vectors in the space $\mathbb{R}^m$. An unknown observation sample $y \in \mathbb{R}^m$ is represented as a linear combination of the k-th type training samples with the representation coefficient $\alpha_k = [\alpha_{k,1}, \alpha_{k,2}, \ldots, \alpha_{k,n_k}]^T \in \mathbb{R}^{n_k}$:
$$y = x_{k,1}\alpha_{k,1} + x_{k,2}\alpha_{k,2} + \cdots + x_{k,n_k}\alpha_{k,n_k} \qquad (5)$$
It is assumed that each type of sample forms an approximate linear subspace. Because the class label of the observation sample y is unknown, it is represented with all K types of training samples of the training set, $X = [X_1, X_2, \ldots, X_K] \in \mathbb{R}^{m \times n}$, where $n = \sum_{k=1}^{K} n_k$ is the total number of training samples. The linear representation of y is then written as follows:
$$y = X_1 \alpha_1 + X_2 \alpha_2 + \cdots + X_K \alpha_K = X\alpha \qquad (6)$$
Here, $\alpha = [\alpha_1, \alpha_2, \ldots, \alpha_K]^T \in \mathbb{R}^n$ is the representation coefficient vector. Theoretically, the optimal representation $\alpha$ is a coefficient vector in which the elements corresponding to the class of the observation sample y are non-zero and all other elements are zero.
When m < n, the solution of Equation (6) is not unique. A common remedy is to add a sparsity constraint on the representation coefficient vector, thereby searching for the sparsest representation as follows [36]:
$$\min_{\alpha} \left\| \alpha \right\|_{0} \quad \text{s.t.} \quad \left\| y - X\alpha \right\|_{2} \le \varepsilon \qquad (7)$$
Here, $\|\cdot\|_0$ represents the $\ell_0$-norm of α, and ε is the allowable error. Since minimization of the $\ell_0$-norm is NP-hard, Equation (7) can only be solved approximately with algorithms such as matching pursuit and orthogonal matching pursuit [37,38]. According to compressed sensing theory [39], Equation (7) can be relaxed to the $\ell_1$-norm minimization below.
$$\min_{\alpha} \left\| \alpha \right\|_{1} \quad \text{s.t.} \quad \left\| y - X\alpha \right\|_{2} \le \varepsilon \qquad (8)$$
Here, $\|\cdot\|_1$ is the $\ell_1$-norm of α. Since Equation (8) is a convex optimization problem, it can be solved with traditional optimization algorithms [40].
In the sparse representation classifier, after the optimal sparse representation has been obtained, the class label of y is determined based on the law of minimum reconstruction error:
$$\mathrm{identity}(y) = \arg\min_{k=1,\ldots,K} \left\| y - X_k \alpha_k \right\|_{2} \qquad (9)$$
The above is the commonly-used basic sparse representation classifier (SRC) [41].
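The greedy orthogonal matching pursuit mentioned above as an approximate solver for Equation (7) can be sketched as follows (an illustrative textbook version, not necessarily the exact variant of [38]):

```python
import numpy as np

def omp(y, X, sparsity):
    """Orthogonal matching pursuit: greedily pick the atom most correlated
    with the current residual, then re-fit the whole support by least squares."""
    residual, support = y.astype(float).copy(), []
    for _ in range(sparsity):
        support.append(int(np.argmax(np.abs(X.T @ residual))))
        coef, *_ = np.linalg.lstsq(X[:, support], y, rcond=None)
        residual = y - X[:, support] @ coef
    alpha = np.zeros(X.shape[1])
    alpha[support] = coef
    return alpha
```

The re-fitting step after each selection is what distinguishes OMP from plain matching pursuit.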

3.2. Non-Negative Least Square Sparse Representation Classifier

In SRC, the sparse representation coefficients might be either positive or negative, although negative values have no physical meaning in practical applications. Because of the presence of negative coefficients, different atoms might cancel each other out, which does not conform to the visual perception mechanism. Previous studies have suggested that the non-negativity property is more consistent with the human visual perception mechanism and therefore provides more effective representation [15,16]. Meanwhile, to a certain extent, non-negativity induces further sparsity. In terms of microwave image recognition, the non-negative constraint eliminates atoms from the sparse solution that are negatively correlated with the input image, which is more reasonable for microwave image recognition. In the present study, the $\ell_1$-regularized non-negative least square sparse representation classifier was adopted [16]:
$$\min_{\beta} \; \frac{1}{2}\left\| y - X\beta \right\|_{2}^{2} + \lambda\left\| \beta \right\|_{1} \quad \text{s.t.} \quad \beta \ge 0 \qquad (10)$$
Here, y is the testing sample, X is the dictionary of training samples, and λ is the control parameter. Equation (10) can be transformed into the following non-negative quadratic programming (NNQP) problem:
$$\min_{\beta} \; \frac{1}{2}\beta^{T} H \beta + w^{T}\beta \quad \text{s.t.} \quad \beta \ge 0 \qquad (11)$$
Here, $H = X^T X$ and $w = \lambda \mathbf{1} - X^T y$. The optimization problem (11) can be solved with the Active-Set algorithm [16].
After obtaining the non-negative sparse coefficient vector β, the law of minimal reconstruction error was used to identify the label of the testing sample:
$$\mathrm{identity}(y) = \arg\min_{k=1,\ldots,K} \left\| y - X_k \beta_k \right\|_{2} \qquad (12)$$
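A minimal sketch of the NNLS classifier of Equations (10)-(12). Note that for β ≥ 0 the ℓ1 penalty reduces to λ·Σβ, so a generic bound-constrained solver (L-BFGS-B) is used here in place of the Active-Set algorithm of [16]; all names are illustrative:

```python
import numpy as np
from scipy.optimize import minimize

def nnls_sparse_code(y, X, lam=0.01):
    """Solve Equation (10): min 0.5*||y - X b||^2 + lam*||b||_1, b >= 0.
    Since b >= 0 makes ||b||_1 = sum(b), the NNQP of Equation (11) can be
    handled by any bound-constrained quadratic solver."""
    n = X.shape[1]
    H, c = X.T @ X, X.T @ y
    res = minimize(lambda b: 0.5 * b @ H @ b - c @ b + lam * b.sum(),
                   np.zeros(n), jac=lambda b: H @ b - c + lam,
                   method="L-BFGS-B", bounds=[(0.0, None)] * n)
    return res.x

def nnls_classify(y, X, labels, lam=0.01):
    """Minimum class-wise reconstruction error rule of Equation (12)."""
    beta = nnls_sparse_code(y, X, lam)
    labels = np.asarray(labels)
    errs = {k: np.linalg.norm(y - X[:, labels == k] @ beta[labels == k])
            for k in np.unique(labels)}
    return min(errs, key=errs.get)
```

The class-wise error reuses the globally solved coefficient vector, restricted to the columns of each class, exactly as Equation (12) prescribes.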

3.3. Microwave Image Classification with ADNNLS Sparse Representation

In summary, the steps of microwave image recognition based on ADNNLS sparse representation are listed in Algorithm 1 as follows:
Algorithm 1: ADNNLS sparse representation for microwave image recognition
Input:
$X \in \mathbb{R}^{m \times n}$: all types of training samples
$Y \in \mathbb{R}^{m \times p}$: all testing samples
Output: the identity of each testing sample in $Y \in \mathbb{R}^{m \times p}$
Steps:
1) A testing sample $y_i$ is selected from $Y$, and its correlation coefficient vector $r_{y_i}$ with all training samples is calculated. The aspect angle $\theta_{y_i}$ of the testing sample is estimated as the aspect angle of the training sample with the largest correlation coefficient.
2) An aspect sector is determined from the estimated aspect angle, and all types of training samples in the sector are included in a set of effective atoms. Through smooth self-representation clustering, the set of active atoms is obtained.
3) A dynamic dictionary $X_{y_i}$ is created from the active atoms.
4) Using the non-negative least square sparse representation algorithm, the testing sample $y_i$ is represented over the dynamic dictionary $X_{y_i}$, yielding the representation coefficient $\beta_{y_i}$.
5) The class label of testing sample $y_i$ is obtained by the law of minimal reconstruction error.
6) If all testing samples have been classified, go to step 7); otherwise, go back to step 1).
7) End
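The steps above can be condensed into a single routine. This is a sketch under simplifying assumptions: the smooth self-representation refinement of step 2 is omitted, parameter values are illustrative, and L-BFGS-B stands in for the Active-Set solver:

```python
import numpy as np
from scipy.optimize import minimize

def adnnls_classify(y, X, angles, labels, half_width=15.0, lam=0.01):
    """Condensed sketch of Algorithm 1 (clustering refinement omitted)."""
    angles = np.asarray(angles, float)
    labels = np.asarray(labels)
    # Step 1: estimate the aspect angle from the most correlated atom.
    yc = y - y.mean()
    corr = np.array([(yc @ (x - x.mean())) /
                     (np.linalg.norm(yc) * np.linalg.norm(x - x.mean()) + 1e-12)
                     for x in X.T])
    theta = angles[int(np.argmax(corr))]
    # Steps 2-3: dynamic dictionary -- zero out atoms outside the sector.
    diff = np.abs((angles - theta + 180.0) % 360.0 - 180.0)
    D = np.where((diff <= half_width)[None, :], X, 0.0)
    # Step 4: l1-regularized non-negative least squares (Equation (10)).
    n = D.shape[1]
    H, c = D.T @ D, D.T @ y
    beta = minimize(lambda b: 0.5 * b @ H @ b - c @ b + lam * b.sum(),
                    np.zeros(n), jac=lambda b: H @ b - c + lam,
                    method="L-BFGS-B", bounds=[(0.0, None)] * n).x
    # Step 5: minimum class-wise reconstruction error (Equation (12)).
    errs = {k: np.linalg.norm(y - D[:, labels == k] @ beta[labels == k])
            for k in np.unique(labels)}
    return min(errs, key=errs.get)
```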

4. Experiments and Discussion

4.1. Description of the Dataset

The MSTAR database of microwave images obtained using synthetic aperture radar was used in the experiments. The microwave imaging radar sensor worked in the X band, and the image resolution was 0.3 × 0.3 m. The images of three types of targets, i.e., BMP2, BTR70, and T72, were adopted. Optical images of the three targets are shown in Figure 6, and several microwave images of them with nearby aspect angles are shown in Figure 7. Specifically, target BMP2 has three configuration types, BMP2-9563, BMP2-9566, and BMP2-C21; BTR70 has one type, BTR70-C71; and T72 has three configuration types, T72-132, T72-812, and T72-S7. Table 1 lists the numbers of all training and testing samples. The microwave images of the three targets at 17° depression angle were taken as training samples, and the images at 15° depression angle as testing samples. For each type of target, the images cover the 0°–360° aspect range. The sample type, depression angle, target type, and sample numbers are shown in Table 1. First, all training and testing samples were pre-processed by a logarithm transformation to compress the dynamic range and convert multiplicative speckle noise into additive noise. Because the original images contain background clutter, a 60 × 60 pixel sub-image centered at the original image center was extracted to reduce interference, with all targets located inside the sub-image.
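The pre-processing just described might look as follows (a sketch: log1p is used instead of a plain logarithm to avoid log(0) on empty pixels, and the 60-pixel crop size follows the text):

```python
import numpy as np

def preprocess(img, crop=60):
    """Logarithm transform to convert multiplicative speckle into additive
    noise and compress dynamic range, then a centered crop to suppress
    background clutter."""
    img = np.log1p(np.abs(np.asarray(img, float)))
    r0 = (img.shape[0] - crop) // 2
    c0 = (img.shape[1] - crop) // 2
    return img[r0:r0 + crop, c0:c0 + crop]
```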

4.2. Classification Results

The classification performance of the proposed ADNNLS algorithm was analyzed first. To illustrate the advantages of ADNNLS in microwave image classification, one testing sample each of BMP2-9563, BTR70-C71, and T72-132 was selected to examine the sparse coefficient distribution diagram and reconstruction error graph, which were then compared to those of SRC and NNLS. The dictionaries in SRC and NNLS comprised global-aspect training samples of all target types: 233 training samples of BMP2-9563, 232 of BMP2-9566, 233 of BMP2-C21, 233 of BTR70-C71, 232 of T72-132, 231 of T72-812, and 228 of T72-S7. In ADNNLS, the training samples were selected within the aspect sector after smooth self-representation clustering, so even though the dictionary was the same size as in SRC and NNLS, the number of effective non-zero atoms was much smaller. Sparse coding was carried out for the three testing samples with the corresponding dictionaries, producing three sparse coefficient vectors and corresponding reconstruction error graphs, as shown in Figure 8, Figure 9 and Figure 10. As shown in Figure 8b, the maximum sparse coefficient corresponded to BMP2-9563. Comparison of Figure 8c–e suggests that the minimal reconstruction error for SRC and NNLS corresponded to incorrect classification, while the minimal reconstruction error for ADNNLS identified the correct type. As shown in Figure 9c–e, although all three algorithms correctly identified the type, the reconstruction error of the BTR70-C71 sample in ADNNLS was far smaller than that of the other types. Similarly, Figure 10 shows the classification of a testing sample of T72-132: using SRC, it was incorrectly identified as BMP2-9563; using NNLS, it was incorrectly identified as BMP2-9566.
In traditional SRC and NNLS, without the assistance of the aspect angle, sparse decomposition is performed across the global aspect range of 0°–360°, i.e., using training samples at all aspect angles. For the ADNNLS algorithm, sparse decomposition and reconstruction are both performed within the local aspect sector, as described in detail in Algorithm 1. Therefore, because of the sensitivity of microwave images to aspect angle, ADNNLS captures more aspect-related information than SRC and NNLS and is not disturbed by training samples with large differences in aspect angle. These results indicate that the ADNNLS algorithm prevents interference from training samples with large aspect angle differences, thereby improving the classification performance.
In order to analyze the influence of sector size on ADNNLS performance, recognition experiments were carried out with different aspect sectors (30°, 60°, 90°, 180°, and 360°). With a 360° aspect sector, the algorithm reduces to the full-aspect NNLS algorithm. Figure 11 shows the recognition results. The recognition rate peaked at 99.63% for the 30° aspect sector. As the sector grew, the rate gradually decreased, owing to the increase of confusing atoms. Despite this, the recognition rate was still as high as 97.39% when the sector was 360°.
To assess the performance of the proposed algorithm, the recognition results of the ADNNLS algorithm were compared to full-aspect SRC, aspect-aided dynamic SRC (ADSRC), and full-aspect non-negative sparse representation (NNLS). A full-aspect algorithm means that the training samples across all aspects were included in the dictionary; an aspect-aided algorithm means that the proposed aspect-aided smooth self-representation clustering was performed during dictionary design. The recognition experiments were conducted at different dimensions of PCA features. The results are shown in Figure 12.
The performance of each algorithm increased with increasing dimension. At 800 dimensions, ADNNLS showed the largest recognition rate, 99.49%. The maximal recognition rates for SRC, NNLS, and ADSRC were 92.23%, 96.8%, and 96.97%, respectively. Moreover, ADNNLS demonstrated good recognition performance at low dimensions. The results validated the effectiveness of the proposed algorithm in microwave image recognition. In addition, the results of the presented algorithm were compared to those of the recently published IJSRC algorithm [18]; this article was chosen because it used the same dataset as the current study. The ADNNLS algorithm showed a recognition rate of 99.83% with a 30° aspect sector; only two out of 1365 testing samples were incorrectly identified. For IJSRC, the recognition rate was 98.53%, less than that of our algorithm. IJSRC uses multi-view joint sparse representation, which is different from ADNNLS.
It is worth noting that the convergence condition used in the four algorithms is an error tolerance rather than a sparsity level. To compare the performance of the four algorithms at the same level of sparsity, we conducted the following experiment. We fixed the sparsity value at 12, meaning the number of non-zero elements in the solved sparse coefficient vector is 12. One query sample of BMP2-SN9563 with aspect angle 171.49° was selected to compare the representation coefficient vectors from the four algorithms. From the correspondence between the training sample aspect angles and their index numbers, samples near index number 114 should be selected and should correspond to larger coefficient values. Figure 13 shows the four representation vectors corresponding to the four algorithms.
SRC and NNLS picked dispersed atoms without the aid of aspect angle information, as can be seen from Figure 13a,b. The atoms selected by ADSRC are more concentrated than those of SRC and NNLS. Moreover, from Figure 13d it can be observed that ADNNLS selects the most representative atoms near index number 114, corresponding to larger coefficient values. Figure 13e shows that ADNNLS has the minimum reconstruction error for this testing sample among the four algorithms. The advantage of ADNNLS is that it reduces the interference of invalid atoms with large differences in aspect angle, thereby improving the recognition accuracy.
We conducted another experiment to study whether OMP could pick dictionary atoms similar to those of ADNNLS when enforcing the same number of non-zero coefficients. One BMP2-SN9563 testing sample with a 213.49° aspect angle was selected. Ideally, an optimal representation algorithm would pick the training samples near 213.49° in the dictionary. From the correspondence between the aspect angles of the training samples and their index numbers, samples near index number 135 should be selected and should correspond to the larger coefficient values. We first solved the representation coefficient vector under ADNNLS, shown in Figure 14a; it contains 29 non-zero coefficients. The OMP representation vector, solved by enforcing 29 non-zero coefficients, is shown in Figure 14b. As Figure 14a shows, four atoms near index number 135 were picked, one of which corresponded to the largest value in the whole ADNNLS representation vector. Under OMP, however, only one atom near index number 135 was selected, and with a small coefficient value. The atoms selected by OMP are more dispersed than those picked by ADNNLS. Although OMP acquires a sparse solution, it cannot select the atoms that best represent a query sample, which results in a larger representation error. As Figure 14c shows, OMP had a larger reconstruction error than ADNNLS.
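For reference, OMP selects atoms greedily: at each step it adds the atom most correlated with the current residual, then re-fits all selected atoms by least squares. A minimal sketch with a fixed number of non-zero coefficients, as in the comparison above (this is a textbook OMP, not the exact implementation used in the experiments):

```python
import numpy as np

def omp(D, y, n_nonzero):
    """Greedy orthogonal matching pursuit: select `n_nonzero` atoms,
    re-fitting the coefficients by least squares after each pick."""
    residual = y.copy()
    support = []
    x = np.zeros(D.shape[1])
    for _ in range(n_nonzero):
        # Atom most correlated with the current residual.
        idx = int(np.argmax(np.abs(D.T @ residual)))
        if idx not in support:
            support.append(idx)
        # Orthogonal projection onto the span of the selected atoms.
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x[support] = coef
    return x
```

Unlike ADNNLS, nothing here encourages the selected atoms to share a common aspect angle, which is why the OMP support in Figure 14b is more dispersed.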

4.3. Robustness to Noise

Additionally, the anti-noise performance of the proposed algorithm was investigated. Gaussian noise at varying signal-to-noise ratios (SNRs) was added to the testing samples to evaluate the robustness of the algorithm. Figure 15 shows the SAR images at different SNRs, and Figure 16 shows the recognition performance curves of the four algorithms. As Figure 16 shows, the recognition rates of all algorithms increase with higher SNR; however, ADNNLS maintains a recognition rate above 90% over the broad range of 5–50 dB, while the other three algorithms all fall below 70% at 5 dB. The results show that ADNNLS is more robust to noise corruption than the other three algorithms.
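Corrupting a test image at a target SNR amounts to scaling white Gaussian noise so that the ratio of signal power to noise power matches the requested value in dB. A minimal sketch (the function name is hypothetical):

```python
import numpy as np

def add_noise_at_snr(image, snr_db, rng=None):
    """Add white Gaussian noise to `image` so that the resulting
    signal-to-noise ratio equals `snr_db` (in dB)."""
    if rng is None:
        rng = np.random.default_rng()
    signal_power = np.mean(image.astype(float) ** 2)
    # SNR_dB = 10 * log10(signal_power / noise_power)
    noise_power = signal_power / (10 ** (snr_db / 10.0))
    noise = rng.normal(0.0, np.sqrt(noise_power), size=image.shape)
    return image + noise
```

On a large image the empirical SNR of the output matches the requested value closely, since the sample variance of the noise concentrates around its target.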

4.4. Experiments Conducted with Depression Angle Variations

In addition to aspect angle changes, depression angle variations of the test samples can also significantly affect recognition performance. The four algorithms were therefore also evaluated under a large depression angle variation. Four targets, BRDM2, 2S1, SLICY, and ZSU23/4, were employed; their optical images are shown in Figure 17, and several SAR image samples of these targets are shown in Figure 18. The dataset used in the depression variation experiments is given in Table 2. Images collected at a 17° depression angle were used for training, while images taken at 30° and 45° were used for testing. The experimental results are shown in Table 3.
As can be seen from Table 3, with test samples at a 30° depression angle, ADNNLS achieved a high accuracy of 96.44%, which is 3.04%, 11.73%, and 12.43% better than ADSRC, NNLS, and SRC, respectively. When test samples at a 45° depression angle were used, the performance of all four algorithms declined. The reason is that the large change in depression angle from 17° to 45° introduces large differences between the training and test samples, so the over-fitting issue becomes much more serious for all algorithms. Nevertheless, ADNNLS still achieved an accuracy of 79.21%, which is 1.4%, 14.11%, and 19.39% better than ADSRC, NNLS, and SRC, respectively.
These experimental results verify that ADNNLS is the most robust of the four algorithms to depression variation. This is because the aspect constraint and the smooth self-representation reduce the confusion caused by large depression variation, which preserves the inference ability of the sparse representation.

5. Conclusions

In the present study, an aspect-aided dynamic non-negative least square sparse representation model was proposed to classify target images collected by microwave imaging radar sensors; it fully exploits the aspect-dependent characteristics of microwave images. In ADNNLS sparse representation, an aspect sector of active atoms is first determined based on the estimated aspect angle of the testing sample. A smooth self-representation strategy is then applied within the sector to choose the training samples most correlated with the testing sample, thereby constructing a dynamic dictionary. This strategy helps find effective support training samples for each testing sample and effectively prevents interference from training samples that are poorly aspect-correlated with it. Experiments on the MSTAR database showed that ADNNLS significantly outperformed SRC and NNLS and reached the recognition level of a state-of-the-art algorithm. Meanwhile, it showed promising anti-noise behavior.

Acknowledgments

This work was supported by the National Natural Science Foundation of China under Grant No. 61301224; the GaoFen Major Project Youth Innovation Fund under Grant No. GFZX04060103; and the National High-tech R&D Program of China (863 Program) under Grant No. 2015AA01A706. This work was also supported by the Basic and Advanced Research Project in Chongqing under Grant No. cstc2016jcyjA0134.

Author Contributions

X. Zhang conceived and designed the experiments, analyzed the data, and wrote the paper. Q. Yang and M. Liu performed the anti-noise experiments. Y. Jia, S. Liu and G. Li gave some revision suggestions about the paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Lindell, D.B.; Long, D.G. Multiyear Arctic Ice Classification Using ASCAT and SSMIS. Remote Sens. 2016, 8, 294. [Google Scholar] [CrossRef]
  2. Islam, T.; Rico-Ramirez, M.A.; Srivastava, P.K. CLOUDET: A Cloud Detection and Estimation Algorithm for Passive Microwave Imagers and Sounders Aided by Naive Bayes Classifier and Multilayer Perceptron. IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 2015, 8, 4296–4301. [Google Scholar] [CrossRef]
  3. Jun, D.; Bo, C.; Hongwei, L. Convolutional Neural Network with Data Augmentation for SAR Target Recognition. IEEE Geosci. Remote Sens. Lett. 2016, 13, 364–368. [Google Scholar]
  4. Liu, M.; Wu, Y.; Zhao, W.; Zhang, Q.; Li, M.; Liao, G. Dempster–Shafer Fusion of Multiple Sparse Representation and Statistical Property for SAR Target Configuration Recognition. IEEE Geosci. Remote Sens. Lett. 2014, 11, 1106–1109. [Google Scholar] [CrossRef]
  5. Liu, M.; Wu, Y.; Zhang, Q. Synthetic Aperture Radar Target Configuration Recognition Using Locality-preserving Property and the Gamma Distribution. IET Radar Sonar Navig. 2016, 10, 256–263. [Google Scholar] [CrossRef]
  6. Novak, L.; Owirka, G.; Netishen, C. Performance of a High-resolution Polarimetric SAR Automatic Target Recognition System. Lincoln Lab J. 1993, 6, 11–23. [Google Scholar]
  7. Ma, C.; Wen, G.J.; Ding, B.Y. Three-dimensional Electromagnetic Model-based Scattering Center Matching Method for Synthetic Aperture Radar Automatic Target Recognition by Combining Spatial and Attributed Information. J. Appl. Remote Sens. 2016, 10, 122–134. [Google Scholar] [CrossRef]
  8. Huang, X.; Qiao, H.; Zhang, B. SAR Target Configuration Recognition Using Tensor Global and Local Discriminant Embedding. IEEE Geosci. Remote Sens. Lett. 2016, 13, 222–226. [Google Scholar] [CrossRef]
  9. Cui, Z.Y.; Cao, Z.J.; Yang, J.Y. Target Recognition in Synthetic Aperture Radar Images via Non-negative Matrix Factorisation. IET Radar Sonar Navig. 2015, 9, 1376–1385. [Google Scholar] [CrossRef]
  10. Wright, J.; Yang, A.; Ganesh, A.; Sastry, S.S.; Ma, Y. Robust Face Recognition via Sparse Representation. IEEE Trans. Patt. Anal. Mach. Intell. 2009, 31, 210–227. [Google Scholar] [CrossRef] [PubMed]
  11. Luo, Y.; Bao, G.; Xu, Y.; Ye, Z. Supervised Monaural Speech Enhancement Using Complementary Joint Sparse Representations. IEEE Signal Process. Lett. 2016, 23, 237–241. [Google Scholar] [CrossRef]
  12. Li, C.; Ma, Y.; Mei, X.; Liu, C.; Ma, J. Hyperspectral Image Classification with Robust Sparse Representation. IEEE Geosci. Remote Sens. Lett. 2016, 13, 641–645. [Google Scholar] [CrossRef]
  13. Fang, L.Y.; Li, S.T. Face Recognition by Exploiting Local Gabor Features with Multitask Adaptive Sparse Representation. IEEE Trans. Instrum. Measur. 2015, 64, 2605–2615. [Google Scholar] [CrossRef]
  14. Zhang, L.; Zhou, W.D.; Chang, P.C.; Liu, J.; Yan, Z.; Wang, T.; Li, F.Z. Kernel Sparse Representation-Based Classifier. IEEE Trans. Signal Process. 2012, 60, 1624–1632. [Google Scholar] [CrossRef]
  15. Li, Y.; Ngom, A. Classification Approach Based on Non-Negative Least Squares. Neurocomputing 2013, 118, 41–57. [Google Scholar] [CrossRef]
  16. Li, Y.; Ngom, A. Nonnegative Least-Squares Methods for the Classification of High-Dimensional Biological Data. IEEE/ACM Trans. Comput. Biol. Bioinform. 2013, 10, 447–456. [Google Scholar] [CrossRef] [PubMed]
  17. Zhang, H.; Nasrabadi, N.M.; Zhang, Y.; Huang, T.S. Multi-view Automatic Target Recognition Using Joint Sparse Representation. IEEE Trans. Aerosp. Electron. Syst. 2012, 48, 2481–2497. [Google Scholar] [CrossRef]
  18. Cheng, J.; Li, L.; Li, H.S. SAR Target Recognition Based on Improved Joint Sparse Representation. EURASIP J. Adv. Signal Process. 2014, 2014, 1–12. [Google Scholar] [CrossRef]
  19. Dong, G.G.; Kuang, G.Y.; Wang, N. SAR Target Recognition via Joint Sparse Representation of Monogenic Signal. IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 2015, 8, 3316–3328. [Google Scholar] [CrossRef]
  20. Dong, G.G.; Kuang, G.Y. SAR Target Recognition via Sparse Representation of Monogenic Signal on Grassmann Manifolds. IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 2016, 9, 1308–1319. [Google Scholar] [CrossRef]
  21. Xing, X.W.; Ji, K.F.; Zou, H.X. Ship Classification in TerraSAR-X Images with Feature Space Based Sparse Representation. IEEE Geosci. Remote Sens. Lett. 2013, 10, 1562–1566. [Google Scholar] [CrossRef]
  22. Liu, H.C.; Li, S.T. Decision Fusion of Sparse Representation and Support Vector Machine for SAR Image Target Recognition. Neurocomputing 2013, 113, 97–104. [Google Scholar] [CrossRef]
  23. Cao, Z.; Xu, L.; Feng, J. Automatic Target Recognition with Joint Sparse Representation of Heterogeneous Multi-view SAR Images over a Locally Adaptive Dictionary. Signal Process. 2016, 126, 27–34. [Google Scholar] [CrossRef]
  24. Hou, B.; Ren, B.; Ju, G.; Li, H.; Jiao, L.; Zhao, J. SAR Image Classification via Hierarchical Sparse Representation and Multisize Patch Features. IEEE Trans. Geosci. Remote Sens. 2016, 13, 33–37. [Google Scholar] [CrossRef]
  25. Feng, J.; Cao, Z.; Pi, Y. Polarimetric Contextual Classification of PolSAR Images Using Sparse Representation and Superpixels. Remote Sens. 2014, 6, 7158–7181. [Google Scholar] [CrossRef]
  26. Zhang, L.M.; Sun, L.J.; Zou, B. Fully Polarimetric SAR Image Classification via Sparse Representation and Polarimetric Features. IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 2015, 8, 3923–3932. [Google Scholar] [CrossRef]
  27. Yang, F.; Gao, W.; Xu, B. Multi-Frequency Polarimetric SAR Classification Based on Riemannian Manifold and Simultaneous Sparse Representation. Remote Sens. 2015, 7, 8469–8488. [Google Scholar] [CrossRef]
  28. Xie, W.; Jiao, L.C.; Zhao, J. PolSAR Image Classification via D-KSVD and NSCT-Domain Features Extraction. IEEE Signal Process. Lett. 2016, 13, 227–231. [Google Scholar] [CrossRef]
  29. Principe, J.; Zhao, Q.; Xu, D. A Novel ATR Classifier Exploiting Pose Information. In Proceedings of the Image Understanding Workshop, Monterey, CA, USA, 30 October–5 November 1998; pp. 833–836.
  30. Shafiee, S.; Kamangar, F.; Ghandehari, L. Cluster-Based Multi-task Sparse Representation for Efficient Face Recognition. In Proceedings of the 2014 IEEE Southwest Symposium on Image Analysis and Interpretation, San Diego, CA, USA, 6–8 April 2014; pp. 125–128.
  31. Hu, H.; Lin, Z.; Feng, J.; Zhou, J. Smooth Representation Clustering. In Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 3834–3841.
  32. Von Luxburg, U. A Tutorial on Spectral Clustering. Stat. Comput. 2007, 17, 395–416. [Google Scholar] [CrossRef]
  33. Yang, M.; Zhang, L.; Yang, J.; Zhang, D. Robust Sparse Coding for Face Recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Colorado Springs, CO, USA, 20–25 June 2011; pp. 625–632.
  34. Donoho, D. For Most Large Underdetermined Systems of Linear Equations the Minimal ℓ1-Norm Solution Is Also the Sparsest Solution. Commun. Pure Appl. Math. 2006, 59, 797–829. [Google Scholar] [CrossRef]
  35. Sharon, Y.; Wright, J.; Ma, Y. Computation and Relaxation of Conditions for Equivalence between ℓ1 and ℓ0 Minimization; CSL Technical Report UILU-ENG-07-2208; University of Illinois: Champaign, IL, USA, 2007. [Google Scholar]
  36. Donoho, D.; Elad, M. Optimal Sparse Representation in General (Nonorthogonal) Dictionaries via ℓ1 Minimization. Proc. Nat. Acad. Sci. USA 2003, 100, 2197–2202. [Google Scholar] [CrossRef] [PubMed]
  37. Tibshirani, R. Regression Shrinkage and Selection via the LASSO. J. R. Stat. Soc. B 1996, 58, 267–288. [Google Scholar]
  38. Candes, E.; Romberg, J.; Tao, T. Stable Signal Recovery from Incomplete and Inaccurate Measurements. Commun. Pure Appl. Math. 2006, 59, 1207–1223. [Google Scholar] [CrossRef]
  39. Donoho, D. For Most Large Underdetermined Systems of Linear Equations the Minimal ℓ1-Norm Near-Solution Approximates the Sparsest Solution. Commun. Pure Appl. Math. 2006, 59, 907–934. [Google Scholar] [CrossRef]
  40. Boyd, S.; Vandenberghe, L. Convex Optimization; Cambridge University Press: London, UK, 2004. [Google Scholar]
  41. Yang, A.Y.; Sastry, S.S.; Ganesh, A.; Ma, Y. Fast ℓ1-minimization Algorithms and an Application in Robust Face Recognition: A Review. In Proceedings of the 17th IEEE International Conference on Image Processing, Hong Kong, China, 26–29 September 2010; pp. 1849–1852.
Figure 1. Description of aspect angle and depression angle in microwave imaging radar sensor.
Figure 2. Microwave images of BMP2 at different aspect angles (depression angle 17°).
Figure 3. Flowchart of microwave image recognition based on aspect-aided dynamic non-negative least square sparse representation.
Figure 4. Correlation coefficient curve between BMP2-9563 testing sample (depression angle 15°, aspect angle 48.49°) and all training samples of targets BMP2-9563, BTR70-C71, and T72-132.
Figure 5. Estimation error in aspect angle of all testing samples of BMP2-9563.
Figure 6. Optical images of three targets. (a) BMP2; (b) BTR70; (c) T72.
Figure 7. Microwave images of three targets at near aspect angles.
Figure 8. (a) Test sample BMP2-9563; (b) Sparse vector obtained based on ADNNLS algorithm; (c) Reconstruction error of ADNNLS algorithm; (d) Reconstruction error of SRC algorithm; (e) Reconstruction error of NNLS algorithm.
Figure 9. (a) Test sample BTR70-C71; (b) Sparse vector obtained based on ADNNLS algorithm; (c) Reconstruction error of ADNNLS algorithm; (d) Reconstruction error of SRC algorithm; (e) Reconstruction error of NNLS algorithm.
Figure 10. (a) Test sample T72-132; (b) Sparse vector obtained based on ADNNLS algorithm; (c) Reconstruction error of ADNNLS algorithm; (d) Reconstruction error of SRC algorithm; (e) Reconstruction error of NNLS algorithm.
Figure 11. Variation in the performance of ADNNLS algorithm with the size of aspect sector.
Figure 12. Recognition performance of the four algorithms with varying feature dimensions.
Figure 13. (a) Sparse coefficient vector obtained based on SRC algorithm; (b) Sparse coefficient vector obtained based on NNLS algorithm; (c) Sparse coefficient vector obtained based on ADSRC algorithm; (d) Sparse coefficient vector obtained based on ADNNLS algorithm; (e) Reconstruction error of four algorithms.
Figure 14. (a) Sparse coefficient vector obtained based on ADNNLS algorithm; (b) Sparse coefficient vector obtained based on OMP algorithm; (c) Reconstruction error of two algorithms.
Figure 15. SAR images with different SNRs: (a) original image; (b) 0 dB; (c) 10 dB; (d) 20 dB; (e) 30 dB; (f) 40 dB; (g) 50 dB.
Figure 16. Performance of the algorithm with varying SNRs.
Figure 17. Optical images of the four targets. (a) BRDM2; (b) 2S1; (c) ZSU234; (d) SLICY.
Figure 18. Example SAR images of four targets. (a) BRDM2; (b) 2S1; (c) ZSU234; (d) SLICY.
Table 1. Name and number of the training and testing samples.

Target No.              1        2        3        4       5       6       7
Training type (17°)     BMP2     BMP2     BMP2     BTR70   T72     T72     T72
Serial number           sn-9563  sn-9566  sn-c21   sn-c71  sn-132  sn-812  sn-s7
Training number         233      232      233      233     232     231     228
Testing type (15°)      BMP2     BMP2     BMP2     BTR70   T72     T72     T72
Serial number           sn-9563  sn-9566  sn-c21   sn-c71  sn-132  sn-812  sn-s7
Testing number          195      196      196      196     196     195     191
Table 2. Dataset used in the experiment.

                      BRDM2   2S1   ZSU234   SLICY
Training set (17°)    298     299   299      298
Testing set (30°)     298     288   288      288
Testing set (45°)     298     303   303      303
Table 3. Recognition rates under different depression angles with four classifiers.

Depression   SRC      NNLS     ADSRC    ADNNLS
30°          84.41%   84.71%   93.40%   96.44%
45°          59.82%   65.10%   77.81%   79.21%
