Technical Note

Target Recognition in SAR Images Using Complex-Valued Network Guided with Sub-Aperture Decomposition

1 School of Electronics and Information Engineering, Hebei University of Technology, Tianjin 300401, China
2 Hangzhou Institute of Technology, Xidian University, Hangzhou 311200, China
3 National Key Laboratory of Radar Signal Processing, Xidian University, Xi’an 710071, China
4 The Institute of Information and Navigation, Air Force Engineering University, Xi’an 710077, China
5 Collaborative Innovation Center of Information Sensing and Understanding, Air Force Engineering University, Xi’an 710077, China
* Author to whom correspondence should be addressed.
Remote Sens. 2023, 15(16), 4031; https://doi.org/10.3390/rs15164031
Submission received: 11 July 2023 / Revised: 12 August 2023 / Accepted: 13 August 2023 / Published: 14 August 2023
(This article belongs to the Topic Radar Signal and Data Processing with Applications)

Abstract

Synthetic aperture radar (SAR) images have special physical scattering characteristics owing to their unique imaging mechanism. Traditional deep learning algorithms usually extract features from real-valued SAR images in a purely data-driven manner, which may ignore some important physical scattering characteristics and sacrifice some useful target information in SAR images. This undoubtedly limits the improvement in performance for SAR target recognition. To take full advantage of the physical information contained in SAR images, a complex-valued network guided with sub-aperture decomposition (CGS-Net) for SAR target recognition is proposed. Based on the fact that different targets have different physical scattering characteristics at different angles, sub-aperture decomposition is used to improve accuracy with a multi-task learning strategy. Specifically, the proposed method includes a main task and an auxiliary task, and the performance of the main task is improved by learning and sharing useful information from the auxiliary task. Here, the main task is target recognition, and the auxiliary task is target reconstruction. In addition, a complex-valued network is used to extract features from the original complex-valued SAR images, which effectively utilizes both the amplitude and phase information in SAR images. The experimental results obtained using the MSTAR dataset illustrate that the proposed CGS-Net achieves an accuracy of 99.59% (without transfer learning or data augmentation) for ten-class targets, which is superior to other popular deep learning methods. Moreover, the proposed method has a lightweight network structure, which is suitable for SAR target recognition tasks because large amounts of labeled SAR data are usually unavailable. The experimental results obtained using small datasets further demonstrate the excellent performance of the proposed CGS-Net.

1. Introduction

As an active microwave imaging sensor, synthetic aperture radar (SAR) offers long operating ranges and can image at any time of day and in any weather [1,2,3,4,5]. It plays a vital role in the remote sensing field. Thus, SAR target recognition, as an important application, has become a key issue in recent research.
There are many algorithms for SAR target recognition in the current research, which mainly include two paradigms: non-deep learning and deep learning methods. Specifically, non-deep learning SAR target recognition methods generally include template matching [6] and model-based methods [7]. In the following, several of these methods and the method proposed in this study are introduced in brief.
The template matching method is a basic method for pattern recognition, which generates numerous templates from the targets in different images and recognizes targets by matching the templates with the region of interest (ROI) [8]. However, it not only needs many training samples to generate templates but also requires a large amount of computation. Consequently, it is difficult to apply this method in actual SAR recognition tasks.
The model-based method extracts features in a dataset from a physical or conceptual model of a target and predicts the attributes of the target under different attitudes and configurations [9,10]. It is more effective than template matching. The key to its success lies in developing accurate models and identifying relevant features. However, physical models are complicated and the simulation of a model is difficult, which seriously restricts the development of the model-based method in SAR recognition tasks.
In recent years, with the rapid development of deep learning, deep learning methods have prevailed in the SAR image and signal processing field [11,12,13,14,15]. These methods can extract target characteristics automatically rather than manually, as is done in traditional algorithms [16,17,18,19,20,21,22]. Compared with traditional methods, current deep learning methods are more robust, accurate, and efficient [23,24,25,26,27,28]. For example, Chen et al. [29] proposed an all-convolutional neural network (CNN) for SAR target recognition, which obtained better accuracy than traditional methods. The advantages of deep learning in the SAR field were further demonstrated in subsequent works [30,31,32]. However, deep learning methods typically require a large amount of labeled data for training, which restricts their application in SAR recognition tasks. To increase the accuracy of deep learning methods in the case of small datasets, Peng et al. [33] applied a discriminator with classification for SAR target recognition, which improved performance by adjusting the conditions of image generation and modifying the true–false discriminator. In reference [34], a Wasserstein deep convolutional generative adversarial network (W-GAN) was used for recognition, which obtained remarkable performance by improving the quality of generated images. Reference [35] introduced a task-driven domain adaptation (DA) method with transfer learning, which improved the performance of models in the case of small datasets. Subsequently, many related deep learning methods, including DA, GANs, deep neural networks (DNNs) [36], and others [37,38,39,40,41], have been proposed.
Although current deep learning methods have achieved satisfactory results in SAR target recognition tasks, they often ignore physical scattering characteristics [42,43]. In contrast to natural images, the physical essence of a SAR image is the coherent superposition of electromagnetic vectors after the electromagnetic waves interact with the scene or target. In the observation stage, the actual ‘small antenna’ of the SAR system is synthesized into an equivalent ‘large antenna’ to improve the imaging resolution. In fact, a SAR image is composed of multiple low-resolution echo signals with different imaging angles, which can be decomposed into multiple sub-aperture images using the sub-aperture decomposition algorithm [44,45]. Figure 1 shows a full-aperture image and several corresponding sub-aperture images. Although the resolution of sub-aperture images is lower than that of the full-aperture image, they contain abundant target features and electromagnetic information, which reflect the physical scattering characteristics of the target from different angles [46]. The scattering information of one target may differ between angles, so separability characteristics may exist at other angles when the target cannot be recognized from a specific angle. Hence, compared with the original full-aperture SAR images, sub-aperture images contain multi-angle target information, which increases the possibility of distinguishing different types of targets. However, current deep learning methods generally regard SAR images simply as grayscale images and ignore some important physical scattering characteristics. Thus, it is crucial to establish a recognition method that can fully utilize the physical characteristic information in SAR images.
Physical scattering characteristics are important parts of SAR images, which contain a lot of useful information for target recognition. To make full use of the multi-angle physical scattering characteristics of SAR images, Wang et al. [47] proposed a transfer learning method with sub-aperture decomposition (SD). The SD algorithm is used to obtain sub-aperture images, which can enrich target information and improve recognition accuracy. However, original SAR images and sub-aperture images are complex-valued, and directly applying the real-valued neural network to the SAR target recognition task may potentially sacrifice some useful information for target recognition [48]. To make full use of the target information in a complex-valued SAR image, Zeng et al. [49] proposed a multi-stream complex-valued network for target recognition. Although the multi-stream strategy can extract separability characteristics effectively, it also greatly increases the calculations and parameters in the networks. This will negatively affect performance in the case of small datasets. Subsequently, Liu et al. [50] applied the multilevel attributed scattering center (M-ASC) framework for SAR target recognition, which helps enhance the generalization ability of networks. However, the process of obtaining M-ASCs is complex and parameter optimization is difficult, which seriously restricts the application in actual SAR recognition tasks.
In order to extract the target separability characteristics effectively, a complex-valued network guided with sub-aperture decomposition (CGS-Net) for target recognition in SAR images is proposed in this paper. A multi-task learning strategy is used in the proposed method, which combines the physical scattering characteristics of complex SAR images for target recognition. It contains a main task and an auxiliary task. Specifically, the main task is the target recognition task, which produces the recognition result. The auxiliary task is the reconstruction task. Since one target exhibits different scattering information at different angles, the auxiliary task uses sub-aperture decomposition to guide the network to extract the separability features of targets, which fully utilizes the multi-angle target information to improve the performance of the proposed method. Since both the original SAR images and the sub-aperture images are complex-valued, the proposed CGS-Net has a complex-valued structure, which makes full use of the amplitude and phase information available in the complex SAR data. Significantly, the proposed method has a lightweight network structure, which is suitable for SAR target recognition tasks given the scarcity of labeled SAR data.
The main contributions of the proposed SAR target recognition method are summarized as follows.
(1) A novel SAR target recognition method based on a complex-valued network with a multi-task learning strategy is proposed in this paper. Multi-task learning is used to improve the performance of the main task by learning and sharing useful information from the auxiliary task. Specifically, the main task is the target recognition task, which produces the recognition results. As the auxiliary task, the reconstruction task guides the model to learn the separability characteristics of targets by reconstructing the sub-aperture images. In addition, a complex-valued structure is used to extract features from SAR images because the original SAR images are complex-valued.
(2) Multi-angle target information is mined for the SAR target recognition task using sub-aperture decomposition. Since different targets have different physical scattering characteristics at different angles, the sub-aperture images contain multi-angle target information, which increases the possibility of distinguishing different types of targets. Therefore, in this paper, sub-aperture decomposition is used to improve accuracy by guiding the model to learn the target separability characteristics.
The rest of this paper is organized as follows. The proposed CGS-Net for SAR target recognition is introduced in Section 2. The experimental results are presented in Section 3, and the results are discussed in Section 4. Section 5 concludes the paper.

2. Proposed Method

2.1. Overall CGS-Net Framework

A specific flowchart showing the CGS-Net framework is illustrated in Figure 2. It can be seen that the proposed CGS-Net mainly includes three parts: the base module, the recognition task, and the reconstruction task. In the base module, several complex-valued convolutional layers are used to extract features from SAR images. The features are used in the recognition task and the reconstruction task. Then, the recognition task is used to obtain the recognition results. Finally, the reconstruction task is used to guide the model to extract the separability features of targets by reconstructing the sub-aperture image, which takes full advantage of the information in the sub-aperture images to improve recognition performance. Notably, the reconstruction task only participates in the training stage as an assistant task. In the test stage, the final recognition results are obtained with the recognition task directly. These sub-structures are detailed in the following.
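To make the overall training flow concrete, a minimal PyTorch-style sketch is given below. It only illustrates the multi-task structure described above (a shared base module, a recognition head, and a training-only reconstruction head); the class and argument names are illustrative assumptions rather than the implementation used in this work, and the detailed modules are described in the following subsections.

```python
import torch.nn as nn

class CGSNetSketch(nn.Module):
    """Illustrative sketch of the CGS-Net flow: a shared complex-valued base
    module, a recognition head (main task), and a reconstruction head
    (auxiliary task used only during training)."""

    def __init__(self, base_module, recognition_head, reconstruction_head):
        super().__init__()
        self.base = base_module                  # complex-valued feature extractor
        self.recognize = recognition_head        # complex-valued classifier (main task)
        self.reconstruct = reconstruction_head   # complex transposed convolutions (auxiliary task)

    def forward(self, x, training=True):
        feats = self.base(x)                     # shared complex-valued features
        logits = self.recognize(feats)           # recognition result
        if training:
            sub_image = self.reconstruct(feats)  # reconstructed sub-aperture image
            return logits, sub_image
        return logits                            # test stage: recognition task only
```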

2.2. Base Module

(1) Complex-Valued Convolutional Layer
The real-valued convolution (RV-Conv) operation purely extracts features from the amplitude. Different from RV-Conv, complex-valued convolution (CV-Conv) uses both the amplitude and phase information in the complex data to extract features for target recognition. Hence, in SAR target recognition tasks, complex-valued convolution is superior to traditional real-valued convolution [48,51].
Similar to traditional RV-Conv, the essence of CV-Conv is that complex-valued arithmetic is combined with the convolution operation. When the convolution operation is extended to the complex field, the element-wise multiplications and summations are carried out according to the rules of complex arithmetic. To make CV-Conv easy to understand, the complex-valued features are separated into real and imaginary parts. Specifically, CV-Conv can be expressed as follows [48]:
$$F \ast Q = (F_R + iF_I) \ast (Q_R + iQ_I) = (F_R \ast Q_R - F_I \ast Q_I) + i(F_R \ast Q_I + F_I \ast Q_R) \tag{1}$$
where $F = F_R + iF_I$ represents the complex-valued feature layer, which contains real and imaginary parts, $Q = Q_R + iQ_I$ represents the CV-Conv kernel, and $\ast$ denotes the convolution operation. Both the input and the output feature layers are complex-valued.
In order to describe the process clearly, Figure 3 illustrates the specific difference between complex-valued and real-valued convolution. In Figure 3, black and red denote the real and imaginary parts of the complex-valued operation, respectively, $\ast$ is the convolution operator, and the kernel size is $K \times K$. In RV-Conv, $C_1$ is the number of input channels and $C_2$ is the number of output channels. In CV-Conv, the first $C_1/2$ and $C_2/2$ feature maps (black) are the real components, and the remaining feature maps (red) are the imaginary components. It can be shown that, under the same conditions (input and output channels, and kernel size), CV-Conv has fewer parameters than RV-Conv. Specifically, the number of parameters of CV-Conv is given by the following formula:
$$P = \left(\frac{C_1}{2} \times \frac{C_2}{2} \times K \times K\right) + \left(\frac{C_1}{2} \times \frac{C_2}{2} \times K \times K\right) = \frac{1}{2}\left(C_1 \times C_2 \times K \times K\right) \tag{2}$$
where $P$ is the number of parameters of CV-Conv. Under the same conditions, RV-Conv has $C_1 \times C_2 \times K \times K$ parameters. It is obvious that CV-Conv has fewer parameters than real-valued convolution, which makes it more suitable for small datasets.
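For illustration, Equation (1) can be implemented with two real-valued convolutions, as in the sketch below. Here the real and imaginary parts are kept as two separate tensors and the bias is omitted to match Equation (1); this is an illustrative assumption, not the authors' released code.

```python
import torch.nn as nn

class ComplexConv2d(nn.Module):
    """Complex-valued convolution built from two real-valued convolutions,
    following Equation (1): (F_R + iF_I) * (Q_R + iQ_I)."""

    def __init__(self, in_ch, out_ch, kernel_size, stride=1, padding=0):
        super().__init__()
        # in_ch and out_ch count *complex* channels, so each real convolution
        # operates on half of the equivalent real-valued channel count.
        self.conv_r = nn.Conv2d(in_ch, out_ch, kernel_size, stride, padding, bias=False)  # kernel Q_R
        self.conv_i = nn.Conv2d(in_ch, out_ch, kernel_size, stride, padding, bias=False)  # kernel Q_I

    def forward(self, x_r, x_i):
        out_r = self.conv_r(x_r) - self.conv_i(x_i)  # real part: F_R*Q_R - F_I*Q_I
        out_i = self.conv_r(x_i) + self.conv_i(x_r)  # imaginary part: F_R*Q_I + F_I*Q_R
        return out_r, out_i
```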
(2) Specific Structure of the Base Module
The base module is used to extract features from SAR images. Inspired by ResNet [52], the complex-valued residual structure is used in the base module. As shown in Figure 2, the base module mainly includes one complex-valued convolutional layer and four complex-valued residual modules, where each residual module contains two complex-valued convolutional layers and a shortcut connection. Compared with general deep learning target recognition networks, the proposed method has fewer parameters, mainly for the following reasons. First, the base module has a lightweight structure: it contains only nine complex-valued convolutional layers in total, far fewer than typical deep networks such as ResNet and VGG [53]. Second, CV-Conv has fewer parameters than traditional RV-Conv. Thus, the proposed method is well suited to SAR target recognition tasks, where labeled data are usually scarce.
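For illustration, one complex-valued residual module could be sketched as follows, reusing the ComplexConv2d sketch above. The activation (applied separately to the real and imaginary parts) and the absence of normalization layers are simplifying assumptions, not details reported in this paper.

```python
import torch.nn as nn

class ComplexResidualBlock(nn.Module):
    """Sketch of one complex-valued residual module: two complex-valued
    convolutions plus a shortcut, mirroring the real-valued ResNet block."""

    def __init__(self, channels, kernel_size=3):
        super().__init__()
        pad = kernel_size // 2
        self.conv1 = ComplexConv2d(channels, channels, kernel_size, padding=pad)
        self.conv2 = ComplexConv2d(channels, channels, kernel_size, padding=pad)
        self.act = nn.ReLU()  # applied to real and imaginary parts separately (assumption)

    def forward(self, x_r, x_i):
        y_r, y_i = self.conv1(x_r, x_i)
        y_r, y_i = self.act(y_r), self.act(y_i)
        y_r, y_i = self.conv2(y_r, y_i)
        return self.act(y_r + x_r), self.act(y_i + x_i)  # shortcut connection
```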

2.3. Reconstruction Task

(1) Sub-Aperture Decomposition Algorithm
In the SAR system, a SAR image is composed of low-resolution echo signals acquired at different azimuth angles. The scattered echo information differs between sub-looks, which are also called sub-aperture images, and this information can be obtained using the sub-aperture decomposition (SD) algorithm. The sub-aperture images are related to, but different from, each other [45,47]. They contain abundant electromagnetic scattering information on ground targets, such as geometry, material, and structure.
Figure 4 shows the specific process of the sub-aperture decomposition method. In order to explain the process clearly, the number of decomposed sub-aperture images is set to three here. Theoretically, a SAR image can be decomposed into any number of sub-aperture images, and the Doppler sub-spectra may or may not overlap.
The specific procedure for generating sub-aperture images using the SD algorithm is summarized as follows.
Step 1: The Doppler spectrum along the azimuth dimension is obtained using the fast Fourier transform (FFT).
Step 2: A window-removal process is applied to the Doppler spectrum, which is then divided into three equal parts.
Step 3: The inverse FFT (IFFT) is performed on each of the three parts of the Doppler spectrum to obtain the final sub-aperture images.
S1–S3 in Figure 4 are three sub-aperture images generated with the SD algorithm. Notably, in the SD procedure, the original SAR image is complex-valued, and the sub-aperture image is also complex-valued. Here, for visualization, only the real-valued image is shown in the figure.
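A minimal NumPy sketch of Steps 1–3 is given below for illustration. It omits the window-removal step and any centering of the Doppler spectrum, and the function name and band-splitting convention are assumptions; practical SD implementations usually de-weight and shift the spectrum before splitting.

```python
import numpy as np

def sub_aperture_decomposition(slc, num_subs=3, azimuth_axis=1):
    """Sketch of the SD procedure in Figure 4 for a complex-valued SAR image
    `slc`: azimuth FFT, split of the Doppler spectrum into equal bands, and
    IFFT of each band back to a complex-valued sub-aperture image."""
    n = slc.shape[azimuth_axis]
    spectrum = np.fft.fft(slc, axis=azimuth_axis)            # Step 1: azimuth FFT
    edges = np.linspace(0, n, num_subs + 1, dtype=int)       # Step 2: equal Doppler bands
    sub_images = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        band = np.zeros_like(spectrum)
        sel = [slice(None)] * slc.ndim
        sel[azimuth_axis] = slice(lo, hi)
        band[tuple(sel)] = spectrum[tuple(sel)]              # keep one band, zero the rest
        sub_images.append(np.fft.ifft(band, axis=azimuth_axis))  # Step 3: IFFT
    return sub_images
```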
(2) Guided Module
The guided module is used to steer network training so that the network fully extracts the separable features of the target and identifies different targets accurately. Owing to the special imaging characteristics of the SAR system, the sub-aperture images contain multi-angle target information. Different targets have different physical scattering characteristics at different angles, which increases the possibility of distinguishing different types of targets. Therefore, in the guided module, the sub-aperture image is used to guide the network to learn the separability characteristics of targets. Specifically, the guided module upsamples the features extracted by the base module to reconstruct a complex-valued image. Then, the parameters in the base module are updated according to the recognition loss and the guided loss, which is calculated between the upsampled results and the sub-aperture images. With the guided module, the base module pays more attention to the separability characteristics of targets, which helps the network recognize different targets efficiently.
Since the base module is a complex-valued network and the sub-aperture image itself is also complex-valued, the structure of the guided module must be complex-valued. The guided module contains several complex-valued transposed convolutional layers, which upsample the features to reconstruct the sub-aperture images. In this way, the sub-aperture image guides the network to learn which regions and features in the SAR image mainly determine the category of the target.
(3) Reconstruction Loss Function
Owing to the complex-valued structure of the reconstruction task, a complex-valued loss function is used to calculate the difference between the reconstruction results and sub-aperture images. Specifically, the reconstruction loss function is summarized as follows:
$$L_r = \sum_{k=1}^{n} \left| x_k - y_k \right| = \sum_{k=1}^{n} \left| \left(x_k^R - y_k^R\right) + i\left(x_k^I - y_k^I\right) \right| \tag{3}$$
where $x_k = x_k^R + i x_k^I$ and $y_k = y_k^R + i y_k^I$ are the $k$th complex-valued pixels of the reconstructed image and the sub-aperture image, respectively. Compared with a real-valued loss, this loss function highlights the significance of the complex-valued information, and both the real and imaginary parts are processed simultaneously during backpropagation.
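For illustration, Equation (3) could be computed as follows; the small epsilon added for numerical stability and the use of a plain sum (rather than a mean) are assumptions.

```python
import torch

def complex_reconstruction_loss(pred_r, pred_i, target_r, target_i, eps=1e-12):
    """Sketch of Equation (3): sum of the complex moduli of the pixel-wise
    differences between the reconstructed and sub-aperture images."""
    diff_r = pred_r - target_r
    diff_i = pred_i - target_i
    return torch.sqrt(diff_r ** 2 + diff_i ** 2 + eps).sum()
```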

2.4. Recognition Task

(1) Complex-Valued FC Layer
In the recognition task, a complex-valued fully connected layer is used to integrate the complex features to obtain the recognition output since the input features are complex-valued. The formula for the specific complex-valued fully connected layer is given as follows:
$$a_k = \sum_{j=1}^{n} W_{kj} x_j = \sum_{j=1}^{n} \left(W_{kj}^R + iW_{kj}^I\right)\left(x_j^R + ix_j^I\right) = \sum_{j=1}^{n} \left(W_{kj}^R x_j^R - W_{kj}^I x_j^I\right) + i\left(W_{kj}^R x_j^I + W_{kj}^I x_j^R\right) \tag{4}$$
where $a_k\ (k = 1, 2, \ldots, m)$ is the $k$th output neuron, $W = W^R + iW^I$ is the complex-valued weight, and $x = x^R + ix^I$ is the input neuron.
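Like the complex convolution, the complex-valued fully connected layer in Equation (4) can be sketched with two real-valued linear layers; the bias terms are omitted to match Equation (4), which is an assumption about the exact implementation.

```python
import torch.nn as nn

class ComplexLinear(nn.Module):
    """Sketch of the complex-valued fully connected layer in Equation (4)."""

    def __init__(self, in_features, out_features):
        super().__init__()
        self.fc_r = nn.Linear(in_features, out_features, bias=False)  # W^R
        self.fc_i = nn.Linear(in_features, out_features, bias=False)  # W^I

    def forward(self, x_r, x_i):
        out_r = self.fc_r(x_r) - self.fc_i(x_i)
        out_i = self.fc_r(x_i) + self.fc_i(x_r)
        return out_r, out_i
```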
(2) Recognition Loss Function
In order to highlight the importance of the imaginary part, a complex-valued loss function is used to integrate complex information, which processes both the real and imaginary parts using backpropagation simultaneously. The specific formula for the loss function is expressed as follows:
$$L_c = -\frac{1}{N} \sum_{j=1}^{N} \left[ y_j \log\left(f_j\left(x_r + ix_i\right)\right) + \left(1 - y_j\right)\log\left(1 - f_j\left(x_r + ix_i\right)\right) \right] \tag{5}$$
$$f\left(x_r + ix_i\right) = \sqrt{x_r^2 + x_i^2} \tag{6}$$
where x r and x i are the real and imaginary parts of the complex output, respectively, and y j is the label of SAR images.
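As an illustrative sketch of Equations (5) and (6), the complex outputs can be reduced to their moduli and passed to a cross-entropy-style loss, as below. Normalizing the moduli with a softmax so that they can be treated as probabilities, and the one-hot label format, are assumptions not stated in the equations.

```python
import torch

def complex_recognition_loss(out_r, out_i, labels_onehot, eps=1e-12):
    """Sketch of Equations (5)-(6): modulus of the complex outputs followed by
    a cross-entropy-style loss over N samples."""
    f = torch.sqrt(out_r ** 2 + out_i ** 2 + eps)     # Equation (6)
    p = torch.softmax(f, dim=1)                       # assumed normalization to (0, 1)
    loss = -(labels_onehot * torch.log(p + eps)
             + (1 - labels_onehot) * torch.log(1 - p + eps)).mean()
    return loss
```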

2.5. Specific Loss Function in the Proposed Method

Owing to the proposed main and auxiliary tasks, recognition loss and reconstruction loss are both contained in the loss function in the proposed method. Here, the specific loss function in the proposed method can be expressed as follows:
$$L = L_c + L_r \tag{7}$$
where $L$ is the overall loss function of the proposed method, and $L_c$ and $L_r$ are the recognition loss and the reconstruction loss, respectively, whose specific expressions are given in Equations (5) and (3).
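Using the two loss sketches above, the joint objective in Equation (7) could be assembled as follows; this is an illustrative composition, with the function names carried over from the earlier sketches.

```python
def total_loss(out_r, out_i, labels_onehot, recon_r, recon_i, sub_r, sub_i):
    """Sketch of Equation (7): L = L_c + L_r, used only during training;
    at test time only the recognition branch is evaluated."""
    l_c = complex_recognition_loss(out_r, out_i, labels_onehot)
    l_r = complex_reconstruction_loss(recon_r, recon_i, sub_r, sub_i)
    return l_c + l_r
```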

3. Experimental Results

3.1. Experimental Data

The experimental data used in this paper are from the moving and stationary target acquisition and recognition (MSTAR) dataset, which is the benchmark dataset for SAR target recognition tasks. It contains ground vehicle targets with different target types, depression angles, serial numbers, and aspect angles. Specifically, the dataset includes ten classes of targets with omnidirectional coverage over the 0–360° range [29]. Samples of the ten target classes and the corresponding optical images are displayed in Figure 5. It should be noted that the data used in the complex-valued networks are the original complex-valued MSTAR data, and the data used in the real-valued networks are also processed from the original complex-valued MSTAR data. The specific information on the MSTAR dataset is shown in Table 1.

3.2. Experimental Details

All experiments are conducted with the same training configuration: the number of iterations is 20,000, the optimizer is Adam, the initial learning rate is 1 × 10−3, and the MultiStepLR strategy is used to adjust the learning rate. The experimental platform is a personal computer with an NVIDIA RTX 2080 Ti GPU and an Intel(R) Xeon(R) Silver 4210 CPU running Ubuntu 18.04 Linux. The deep learning framework is PyTorch 1.2.
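The stated optimizer settings could be reproduced as in the sketch below; the placeholder model, the MultiStepLR milestones, and the gamma value are assumptions, since only the optimizer type, initial learning rate, scheduler type, and iteration count are reported above.

```python
import torch
import torch.nn as nn

model = nn.Conv2d(2, 10, 3)  # placeholder stand-in for CGS-Net
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)           # Adam, lr = 1e-3
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=[8000, 16000], gamma=0.1)                 # milestones assumed

for step in range(20000):                                           # 20,000 iterations
    optimizer.zero_grad()
    loss = model(torch.randn(1, 2, 64, 64)).mean()                  # dummy forward/loss
    loss.backward()
    optimizer.step()
    scheduler.step()
```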

3.3. Evaluation Criteria

To evaluate the experimental results scientifically, the following evaluation criteria are used in the experiment, which include precision, recall, F1-score, and accuracy. Specifically, the formulas for the evaluation criteria are as follows:
$$\mathrm{Precision} = \frac{TP}{TP + FP} \tag{8}$$
$$\mathrm{Recall} = \frac{TP}{TP + FN} \tag{9}$$
$$F_1 = \frac{2 \times \mathrm{Precision} \times \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}} \tag{10}$$
$$\mathrm{Accuracy} = \frac{\sum_{C} TP}{TP + FP} \tag{11}$$
where $C$ is the number of classes, $TP$ is the number of correctly recognized targets, $FP$ is the number of false alarms, and $FN$ is the number of missed targets.
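For reference, these criteria can be computed from a confusion matrix as sketched below; the per-class averaging convention is an assumption.

```python
import numpy as np

def classification_metrics(conf_matrix):
    """Per-class precision, recall, and F1-score, plus overall accuracy, from a
    confusion matrix whose rows are true classes and columns are predictions."""
    cm = np.asarray(conf_matrix, dtype=float)
    tp = np.diag(cm)
    fp = cm.sum(axis=0) - tp          # false alarms per predicted class
    fn = cm.sum(axis=1) - tp          # missed targets per true class
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    accuracy = tp.sum() / cm.sum()    # overall fraction of correctly recognized targets
    return precision, recall, f1, accuracy
```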

3.4. Results under Ten-Class Targets

A. Comparison with Classical Recognition Methods
In order to demonstrate the effectiveness of the proposed CGS-Net, several existing, widely used, real-valued deep learning recognition methods are selected and compared with CGS-Net. Specifically, they mainly include methods such as ResNet18, VGG16, Net4, ResNet10, etc. The specific experimental results are displayed in Table 2. Here, Net4 is a lightweight network, which only contains two convolutional and two fully connected layers. ResNet10 includes nine convolutional layers and a fully connected layer.
As shown in Table 2, it is obvious that CGS-Net is superior to the real-valued networks, which achieves an accuracy of more than 99.5%. This is mainly because of the following reasons. Compared with the typical real-valued convolutional networks, the proposed method utilizes physical scattering characteristics and complex information effectively. On the one hand, the complex-valued network mines the target information in complex SAR data effectively, which improves the performance of recognition. On the other hand, the guided module efficiently enhances the capacity of the model to extract the separability characteristics of different targets. In addition, SAR images usually lack sufficient labeled data. Here, the proposed method has fewer parameters than typical deep learning methods, which may be more suitable for SAR target recognition tasks.
In order to evaluate the experimental results scientifically and clearly, several evaluation criteria are used to further demonstrate the performance of the proposed method. Table 3 shows the precision, recall, and F1-score for the ten target classes, and the confusion matrix is displayed in Figure 6. The results demonstrate that the proposed method performs excellently for all target classes.
B. Comparison with Other Complex-Valued Networks
To independently demonstrate the performance of the guided module, several complex-valued methods including Complex net [48] and DH-RCCNNs [54] are selected to compare with CGS-Net. The specific recognition accuracies are compared in Table 4. Obviously, compared with other complex-valued networks, the proposed method still achieves the highest accuracy. This suggests that the proposed method can fully exploit the physical scattering characteristics. The reconstruction task can guide the network to learn how to identify the target accurately. In addition, the results further demonstrate the superior performance of CGS-Net.
C. Comparison with State-of-the-Art Methods
To further demonstrate the performance of CGS-Net, several related works proposed in the last two years are used in the experiments, which include FEC [55], CAE [43], and A-ConvNet [56]. Table 5 shows the specific experimental results. It can be seen that the proposed method is superior to other related methods.
D. Experimental Results with Limited Data
To demonstrate the universality and robustness of the proposed CGS-Net in the case of limited data, 40%, 50%, 60%, and 70% of the original training data are used as several new training datasets. Due to the small size of the training data, only the methods with a small number of parameters are selected to compare with the proposed method. The specific recognition accuracies are compared in Table 6.
As shown in Table 6, the proposed method still achieves higher accuracy than ResNet10 and the lightweight network (Net4) in the case of limited training data. Specifically, ResNet10 has the same number of layers (nine convolutional layers in the base module and one fully connected layer) as the proposed method, but has neither the guided module nor the complex-valued structure. It is obvious that the proposed method outperforms the other methods, and that the guided module and the complex-valued structure improve the accuracy of SAR target recognition. In addition, the experimental results for the different-sized training datasets further demonstrate the universality and robustness of the proposed method.
E. Ablation Experiments
In order to demonstrate the contribution of each part of the proposed method, ablation experiments are conducted in this paper. The specific experimental results are shown in Table 7. As shown in Table 7, Complex-ResNet10 performs better than ResNet10, which demonstrates that the complex-valued base module is helpful for recognition. In addition, the proposed method has the highest accuracy, mainly because it uses the physical scattering characteristics and the complex information effectively. This further proves the effectiveness of the reconstruction task.
F. Comparison with Different Numbers of Sub-Apertures
In order to demonstrate the influence of the number of sub-apertures on the proposed method, a comparison experiment with different numbers of sub-apertures is conducted in this paper. The experimental results are shown in Table 8. It is obvious that the proposed method obtains the highest accuracy when the number of sub-apertures is 3, which suggests that 3 is the optimal number. This is mainly because too few sub-apertures cannot provide sufficient scattering features, while too many sub-apertures lead to sub-aperture images with too low a resolution.

4. Discussion

From the experiments in Section 3, it is obvious that the proposed CGS-Net is superior to the state-of-the-art methods, mainly for the following reasons. Firstly, compared with typical real-valued convolutional networks, the proposed method utilizes the physical scattering characteristics and the complex information effectively. Secondly, the guided module efficiently enhances the capacity of the model to extract the separability characteristics of different targets. Finally, the proposed method has fewer parameters than typical real-valued deep learning methods. Hence, the proposed method may be more suitable for SAR target recognition tasks because SAR images usually lack sufficient labeled data. The experimental results obtained using the small dataset further prove that the proposed CGS-Net has excellent performance.
In addition, in the training stage, some design choices, e.g., the number of sub-apertures, are crucial for the performance of the model, and the running time also deserves discussion. From the experiments in Section 3, it is obvious that the optimal number of sub-apertures is 3. This is mainly because too few sub-apertures cannot provide sufficient scattering features, while too many sub-apertures lead to sub-aperture images with too low a resolution.
Regarding the running time, the proposed method indeed requires a longer operating time than the classical methods, because complex-valued computation is not yet well optimized in the deep learning framework. However, the proposed method has fewer parameters and FLOPs than typical deep learning methods, which demonstrates that it has a lightweight structure. In addition, because the scene in SAR target recognition tasks is usually very small compared with the large scenes in detection tasks, recognition algorithms can still run with high efficiency and speed.
Although the proposed method obtained good performance in the SAR target recognition task, it also has the following limitations.
(1) The proposed method is only applicable to SAR images. It relies on sub-aperture decomposition, which stems from the unique imaging mechanism of the SAR system. Therefore, it cannot be extended to other fields, such as optical remote sensing and natural images, and its application is limited.
(2) The proposed method has not been verified using a large-scale dataset. In contrast to some state-of-the-art methods, such as LW-CMDANet [57], the data used in this paper are complex-valued SAR data. Although the SAR image itself is complex-valued, there is currently no public large-scale complex-valued SAR dataset comparable to ImageNet [58] in the natural image field; the complex-valued SAR data currently available are generally the MSTAR and MiniSAR datasets. Therefore, we have not verified the proposed method with a large-scale dataset.
(3) Whether the proposed method can be extended to other tasks in the SAR field has not been verified. We have not applied the proposed method to other tasks, such as target detection, so its extensibility has not been thoroughly explored.
Based on the above analysis, the proposed method may have the limitation that it is only suitable for the SAR field. To address the above limitations, we will further improve the proposed method in future work.

5. Conclusions

Traditional deep learning algorithms generally treat SAR images simply as grayscale images and usually extract features from the real-valued SAR images in a purely data-driven manner. This may ignore the physical scattering characteristics of SAR images and sacrifice some useful target information. This is undoubtedly a huge barrier for SAR target recognition tasks and seriously restricts the development of deep learning methods. In order to fully exploit the physical information in SAR images, a complex-valued network guided with sub-aperture decomposition for target recognition in SAR images is proposed in this paper. A multi-task learning strategy is used in the proposed method, which combines the physical scattering characteristics of complex SAR images for target recognition. Specifically, sub-aperture decomposition is used to guide the network to learn the separability characteristics of targets as an auxiliary task, which mines the multi-angle target information in the SAR images for target recognition. Here, since both the original SAR images and sub-aperture images are complex-valued, the proposed CGS-Net has a complex-valued structure, which makes full use of amplitude and phase information efficiently. The experimental results demonstrate the outstanding performance of the proposed method on the MSTAR dataset.
In future work, we will mainly focus on the following two directions. One is to further improve the performance of the recognition method by combining the proposed method with transformers and semi-supervised learning, especially for complex scenes and limited data. The other is to extend the sub-aperture decomposition-guided strategy to the SAR target detection task.

Author Contributions

Conceptualization, R.W. and Z.W.; methodology, R.W. and Z.W.; software, R.W. and Y.C.; validation, Y.C., H.K., F.L. and Y.L.; formal analysis, R.W.; investigation, R.W.; resources, Z.W.; data curation, Z.W.; writing—original draft preparation, R.W.; writing—review and editing, R.W. and Z.W.; supervision, Z.W.; project administration, Z.W.; funding acquisition, Z.W. All authors have read and agreed to the published version of the manuscript.

Funding

This study was funded by the National Natural Science Foundation of China under Grant 62001155, in part by the China Postdoctoral Science Foundation under Grant 2021M702462, and in part by the Funding for Postdoctoral Research Projects in Hebei Province under Grant B2021005006.

Data Availability Statement

The data presented in this study are available upon request from the corresponding author.

Acknowledgments

We are thankful for the reviewers’ valuable time spent on the review of our manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Wang, Z.; Wang, R.; Fu, X.; Xia, K. Unsupervised Ship Detection for Single-Channel SAR Images Based on Multiscale Saliency and Complex Signal Kurtosis. IEEE Geosci. Remote Sens. Lett. 2021, 19, 4011305. [Google Scholar] [CrossRef]
  2. Dudgeon, D.E.; Lacoss, R.T. An overview of automatic target recognition. Linc. Lab. J. 1993, 6, 3–10. [Google Scholar]
  3. Ren, H.; Yu, X.; Zou, L.; Zhou, Y.; Wang, X.; Bruzzone, L. Extended convolutional capsule network with application on SAR automatic target recognition. Signal Process. 2021, 183, 108021. [Google Scholar] [CrossRef]
  4. Ai, J.; Tian, R.; Luo, Q.; Jin, J.; Tang, B. Multi-scale rotation-invariant haar-like feature integrated CNN based ship detection algorithm of multiple-target environment in SAR imagery. IEEE Trans. Geosci. Remote Sens. 2019, 57, 10070–10087. [Google Scholar] [CrossRef]
  5. Muhammad, Y.; Shanwei, L.; Mingming, X.; Hui, S.; Hossain, M.S.; Colak, A.T.I.; Wang, D.; Jianhua, W.; Dang, K.B. Multi-scale ship target detection using SAR images based on improved Yolov5. Front. Mar. Sci. 2023, 9, 1086140. [Google Scholar]
  6. Novak, L.M.; Owirka, G.J.; Netishen, C.M. Performance of a high-resolution polarimetric SAR automatic target recognition system. Linc. Lab. J. 1993, 6, 1. [Google Scholar]
  7. O’Sullivan, J.A.; DeVore, M.D.; Kedia, V.; Miller, M.I. SAR ATR performance using a conditionally Gaussian model. IEEE Trans. Aerosp. Electron. Syst. 2001, 37, 91–108. [Google Scholar] [CrossRef]
  8. Gao, G. Target Detection and Terrain Classification of Single-Channel SAR Images. In Characterization of SAR Clutter and Its Applications to Land and Ocean Observations; Springer: Berlin/Heidelberg, Germany, 2019; pp. 75–101. [Google Scholar]
  9. Kaplan, L.M. Analysis of multiplicative speckle models for template-based SAR ATR. IEEE Trans. Aerosp. Electron. Syst. 2001, 37, 1424–1432. [Google Scholar] [CrossRef]
  10. DeVore, M.D.; Lanterman, A.D.; O’Sullivan, J.A. ATR performance of a Rician model for SAR images. In Automatic Target Recognition X; SPIE: Orlando, FL, USA, 2000; Volume 4050. [Google Scholar]
  11. Zheng, Y.; Lv, X.; Qian, L.; Liu, X. An Optimal BP Neural Network Track Prediction Method Based on a GA–ACO Hybrid Algorithm. J. Mar. Sci. Eng. 2022, 10, 1399. [Google Scholar]
  12. Zheng, Y.; Liu, P.; Qian, L.; Qin, S.; Liu, X.; Ma, Y.; Cheng, G. Recognition and Depth Estimation of Ships Based on Binocular Stereo Vision. J. Mar. Sci. Eng. 2022, 10, 1153. [Google Scholar] [CrossRef]
  13. Qian, L.; Zheng, Y.; Li, L.; Ma, Y.; Zhou, C.; Zhang, D. A New Method of Inland Water Ship Trajectory Prediction Based on Long Short-Term Memory Network Optimized by Genetic Algorithm. Appl. Sci. 2022, 12, 4073. [Google Scholar] [CrossRef]
  14. Xiong, S.; Li, B.; Zhu, S. DCGNN: A single-stage 3D object detection network based on density clustering and graph neural network. Complex Intell. Syst. 2022, 9, 3399–3408. [Google Scholar] [CrossRef]
  15. Tan, X.; Lin, J.; Xu, K.; Chen, P.; Ma, L.; Lau, R.W.H. Mirror Detection With the Visual Chirality Cue. IEEE Trans. Pattern Anal. Mach. Intell. 2023, 45, 3492–3504. [Google Scholar] [CrossRef] [PubMed]
  16. Liu, Y.; Wang, K.; Liu, L.; Lan, H.; Lin, L. TCGL: Temporal Contrastive Graph for Self-Supervised Video Representation Learning. IEEE Trans. Image Process. 2022, 31, 1978–1993. [Google Scholar] [CrossRef]
  17. Liu, H.; Ding, F.; Li, J.; Meng, X.; Liu, C.; Fang, H. Improved Detection of Buried Elongated Targets by Dual-Polarization GPR. IEEE Geosci. Remote Sens. Lett. 2023, 20, 3501705. [Google Scholar] [CrossRef]
  18. Liu, H.; Yue, Y.; Liu, C.; Spencer, B.F.; Cui, J. Automatic recognition and localization of underground pipelines in GPR B-scans using a deep learning model. Tunn. Undergr. Space Technol. 2023, 134, 0886–7798. [Google Scholar] [CrossRef]
  19. Yang, Z.; Yu, X.; Dedman, S.; Rosso, M.; Zhu, J.; Yang, J.; Xia, Y.; Tian, Y.; Zhang, G.; Wang, J. UAV remote sensing applications in marine monitoring: Knowledge visualization and review. Sci. Total Environ. 2022, 838, 0048–9697. [Google Scholar] [CrossRef]
  20. Zhou, W.; Lv, Y.; Lei, J.; Yu, L. Global and Local-Contrast Guides Content-Aware Fusion for RGB-D Saliency Prediction. IEEE Trans. Syst. Man Cybern. Syst. 2021, 51, 3641–3649. [Google Scholar] [CrossRef]
  21. Yang, M.; Wang, H.; Hu, K.; Yin, G.; Wei, Z. IA-Net$:$ An Inception–Attention-Module-Based Network for Classifying Underwater Images From Others. IEEE J. Ocean. Eng. 2022, 47, 704–717. [Google Scholar] [CrossRef]
  22. Zhou, G.; Song, B.; Liang, P.; Xu, J.; Yue, T. Voids Filling of DEM with Multiattention Generative Adversarial Network Model. Remote Sens. 2022, 14, 1206. [Google Scholar] [CrossRef]
  23. Zhou, G.; Zhou, X.; Song, Y.; Xie, D.; Wang, L.; Yan, G.; Hu, M.; Liu, B.; Shang, W.; Gong, C.; et al. Design of supercontinuum laser hyperspectral light detection and ranging (LiDAR) (SCLaHS LiDAR). Int. J. Remote Sens. 2021, 42, 3731–3755. [Google Scholar] [CrossRef]
  24. Cheng, D.; Chen, L.; Lv, C.; Guo, L.; Kou, Q. Light-Guided and Cross-Fusion U-Net for Anti-Illumination Image Super-Resolution. IEEE Trans. Circuits Syst. Video Technol. 2022, 32, 8436–8449. [Google Scholar] [CrossRef]
  25. Lu, S.; Ding, Y.; Liu, M.; Yin, Z.; Yin, L.; Zheng, W. Multiscale Feature Extraction and Fusion of Image and Text in VQA. Int. J. Comput. Intell. Syst. 2023, 16, 54. [Google Scholar] [CrossRef]
  26. Zhou, G.; Zhou, X.; Li, W.; Zhao, D.; Song, B.; Xu, C.; Zhang, H.; Liu, Z.; Xu, J.; Lin, G.; et al. Development of a Lightweight Single-Band Bathymetric LiDAR. Remote Sens. 2022, 14, 5880. [Google Scholar] [CrossRef]
  27. Zhao, C.; Chi, F.C.; Xu, P. High-efficiency sub-microscale uncertainty measurement method using pattern recognition. ISA Trans. 2020, 101, 503–514. [Google Scholar] [CrossRef] [PubMed]
  28. Zhang, Y.; Luo, J.; Zhang, Y.; Huang, Y.; Cai, X.; Yang, J. Resolution Enhancement for Large-Scale Real Beam Mapping Based on Adaptive Low-Rank Approximation. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5116921. [Google Scholar] [CrossRef]
  29. Chen, S.; Wang, H. SAR target recognition based on deep learning. In Proceedings of the International Conference on Data Science and Advanced Analytics (DSAA), Shanghai, China, 30 October–1 November 2014; pp. 541–547. [Google Scholar]
  30. Zhou, L.; Ye, Y.; Tang, T.; Nan, K.; Qin, Y. Robust Matching for SAR and Optical Images Using Multiscale Convolutional Gradient Features. IEEE Geosci. Remote Sens. Lett. 2022, 19, 1–5. [Google Scholar]
  31. Ding, J.; Chen, B.; Liu, H.; Huang, M. Convolutional Neural Network with Data Augmentation for SAR Target Recognition. IEEE Geosci. Remote Sens. Lett. 2016, 13, 364–368. [Google Scholar]
  32. Guo, Y.; Du, L.; Li, C.; Chen, J. SAR Automatic Target Recognition Based on Multi-Scale Convolutional Factor Analysis Model with Max-Margin Constraint. In Proceedings of the IEEE International Geoscience and Remote Sensing Symposium IGARSS, Brussels, Belgium, 11–16 July 2021; pp. 3605–3608. [Google Scholar]
  33. Peng, G.; Liu, M.; Chen, S.; Li, Y.; Lu, F. Generation of SAR Images with Features for Target Recognition. In Proceedings of the IEEE International Conference on Signal Processing, Communications and Computing (ICSPCC), Xi’an, China, 25–27 October 2022; pp. 1–4. [Google Scholar]
  34. Qin, J.; Liu, Z.; Ran, L.; Xie, R.; Tang, J.; Guo, Z. A Target SAR Image Expansion Method Based on Conditional Wasserstein Deep Convolutional GAN for Automatic Target Recognition. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2022, 15, 7153–7170. [Google Scholar] [CrossRef]
  35. He, Q.; Zhao, L.; Ji, K.; Kuang, G. SAR Target Recognition Based on Task-Driven Domain Adaptation Using Simulated Data. IEEE Geosci. Remote Sens. Lett. 2022, 19, 4019205. [Google Scholar] [CrossRef]
  36. Niu, S.; Qiu, X.; Lei, B.; Fu, K. A SAR Target Image Simulation Method With DNN Embedded to Calculate Electromagnetic Reflection. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 14, 2593–2610. [Google Scholar] [CrossRef]
  37. Hua, W.; Zhang, C.; Xie, W.; Jin, X. Polarimetric SAR Image Classification Based on Ensemble Dual-Branch CNN and Superpixel Algorithm. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2022, 15, 2759–2772. [Google Scholar] [CrossRef]
  38. Li, R.; Zhang, H.; Chen, Z.; Yu, N.; Kong, W.; Li, T.; Wang, E.; Wu, X.; Liu, Y. Denoising method of ground-penetrating radar signal based on independent component analysis with multifractal spectrum. Measurement 2022, 192, 110886. [Google Scholar] [CrossRef]
  39. Wei, D.; Du, Y.; Du, L.; Li, L. Target Detection Network for SAR Images Based on Semi-Supervised Learning and Attention Mechanism. Remote Sens. 2021, 13, 2686. [Google Scholar] [CrossRef]
  40. Zhu, H.; Xue, M.; Wang, Y.; Yuan, G.; Li, X. Fast Visual Tracking With Siamese Oriented Region Proposal Network. IEEE Signal Process. Lett. 2022, 29, 1437–1441. [Google Scholar] [CrossRef]
  41. Tian, Y.; Sun, J.; Qi, P.; Yin, G.; Zhang, L. Multi-block mixed sample semi-supervised learning for SAR target recognition. Remote Sens. 2021, 13, 361. [Google Scholar] [CrossRef]
  42. Wang, Z.; Du, L.; Mao, J.; Liu, B.; Yang, D. SAR target detection based on SSD with data augmentation and transfer learning. IEEE Geosci. Remote Sens. Lett. 2018, 16, 150–154. [Google Scholar] [CrossRef]
  43. Li, S.; Pan, Z.; Hu, Y. Multi-Aspect Convolutional-Transformer Network for SAR Automatic Target Recognition. Remote Sens. 2022, 14, 3924. [Google Scholar] [CrossRef]
  44. Marino, A.; Sanjuan-Ferrer, M.; Hajnsek, I.; Ouchi, K. Ship detection with spectral analysis of synthetic aperture radar: A comparison of new and well-known algorithms. Remote Sens. 2015, 7, 5416–5439. [Google Scholar] [CrossRef]
  45. Cloude, S.R. Target decomposition theorems in radar scattering. Electron. Lett. 1985, 21, 22–24. [Google Scholar] [CrossRef]
  46. Ferro-Famil, L.; Reigber, A.; Pottier, E.; Boerner, W.M. Analysis of anisotropic behavior using sub-aperture polarimetric SAR data. In Proceedings of the IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Toulouse, France, 21–25 July 2003; pp. 434–436. [Google Scholar]
  47. Wang, Z.; Fu, X.; Xia, K. Target Classification for Single-Channel SAR Images Based on Transfer Learning With Subaperture Decomposition. IEEE Geosci. Remote Sens. Lett. 2022, 19, 4003205. [Google Scholar] [CrossRef]
  48. Trabelsi, C.; Bilaniuk, O.; Zhang, Y.; Serdyuk, D.; Pal, C.J. Deep Complex Networks. arXiv 2018, arXiv:1705.09792. [Google Scholar]
  49. Zeng, Z.; Sun, J.; Han, Z.; Hong, W. SAR Automatic Target Recognition Method Based on Multi-Stream Complex-Valued Networks. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5228618. [Google Scholar] [CrossRef]
  50. Liu, Z.; Wang, L.; Wen, Z.; Li, K.; Pan, Q. Multilevel Scattering Center and Deep Feature Fusion Learning Framework for SAR Target Recognition. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5227914. [Google Scholar] [CrossRef]
  51. Wilmanski, M.; Kreucher, C.; Hero, A. Complex input convolutional neural networks for wide angle SAR ATR. In Proceedings of the 2016 IEEE Global Conference on Signal and Information Processing (GlobalSIP), Washington, DC, USA, 7–9 December 2016; pp. 1037–1041. [Google Scholar]
  52. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
  53. Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2014, arXiv:1409.1556. [Google Scholar]
  54. Scarnati, T.; Lewis, B. Complex-Valued Neural Networks for Synthetic Aperture Radar Image Classification. In Proceedings of the IEEE Radar Conference (RadarConf21), Atlanta, GA, USA, 7–14 May 2021; pp. 1–6. [Google Scholar]
  55. Zhang, J.; Xing, M.; Xie, Y. FEC: A feature fusion framework for SAR target recognition based on electromagnetic scattering features and deep CNN features. IEEE Trans. Geosci. Remote Sens. 2021, 59, 2174–2187. [Google Scholar] [CrossRef]
  56. Chen, S.; Wang, H.; Xu, F.; Jin, Y. Target classification using the deep convolutional networks for SAR images. IEEE Trans. Geosci. Remote Sens. 2016, 54, 4806–4817. [Google Scholar] [CrossRef]
  57. Lang, P.; Fu, X.; Feng, C.; Dong, J.; Qin, R.; Martorella, M. LW-CMDANet: A Novel Attention Network for SAR Automatic Target Recognition. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2022, 15, 6615–6630. [Google Scholar] [CrossRef]
  58. Deng, J.; Dong, W.; Socher, R.; Li, L.-J.; Li, K.; Fei-Fei, L. ImageNet: A large-scale hierarchical image database. In Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; pp. 248–255. [Google Scholar]
Figure 1. The full-aperture image and several corresponding sub-aperture images, where (a) is the full-aperture image and (b–d) are the corresponding sub-aperture images.
Figure 2. Structure of the proposed CGS-Net method for target recognition in SAR images.
Figure 3. Comparison of complex-valued and real-valued convolution. Here, * is the convolution operation, the red one is the convolution operation on the imaginary convolution kernel, and the black one is on the real convolution kernel.
Figure 4. Schematic showing the SD algorithm for SAR images.
Figure 5. The samples for the ten-class targets and corresponding optical images, which include BMP2, BRDM2, BTR70, BTR60, D7, T62, T72, ZIL131, 2S1, and ZSU234. The optical images are at the top, and the corresponding SAR images are at the bottom.
Figure 6. The confusion matrix of the ten-class targets.
Table 1. The Number of Samples for the Ten-Class Vehicle Targets.

Class | Test Set (Depression 15°) | Training Set (Depression 17°) | Serial Number
BMP2 | 195 | 233 | 9563
BRDM2 | 274 | 298 | E-71
BTR70 | 196 | 233 | c71
BTR60 | 195 | 256 | k10yt7532
D7 | 274 | 299 | 9v13015
T62 | 273 | 299 | A51
T72 | 196 | 232 | 132
ZIL131 | 274 | 299 | E12
2S1 | 274 | 299 | b01
ZSU234 | 274 | 299 | d08
Total | 2425 | 2747 | /
Table 2. Comparison Among Different Classical Deep Learning Recognition Methods.

Method | Accuracy (%) | Parameters | FLOPs | Running Time (2425 Images)
ResNet18 | 97.69 | 11.2 M | 595.44 M | 3.20 s
ResNet10 | 97.28 | 4.93 M | 292.47 M | 2.67 s
VGG16 | 94.31 | 134.3 M | 5130.76 M | 4.47 s
Net4 | 95.71 | 2.2 M | 10.94 M | 2.47 s
CGS-Net | 99.59 | 3.65 M | 277.37 M | 3.80 s
Table 3. Different Evaluation Criteria for the Ten-Class Targets.

Metric | BMP2 | BTR70 | T72 | 2S1 | BRDM2 | BTR60 | D7 | T62 | ZIL131 | ZSU234
Precision | 1.0 | 0.990 | 1.0 | 0.990 | 0.996 | 1.0 | 1.0 | 0.996 | 1.0 | 1.0
Recall | 0.985 | 1.0 | 0.980 | 0.989 | 1.0 | 0.990 | 0.993 | 1.0 | 1.0 | 1.0
F1-score | 0.992 | 0.995 | 0.990 | 0.989 | 0.998 | 0.995 | 0.996 | 0.998 | 1.0 | 1.0
Table 4. Comparison Among the Proposed Method and Other Complex-Valued Networks.

Method | Accuracy (%)
DH-RCCNNs | 97.24
Complex net | 98.56
CGS-Net | 99.59
Table 5. Comparison Among the Proposed Method and State-of-the-Art Methods.

Method | Accuracy (%)
FEC | 99.27
CAE | 97.86
A-ConvNet | 99.13
CGS-Net | 99.59
Table 6. Comparison Among Different Recognition Methods When Using a Small Number of Samples.

Dataset Size | Accuracy (ResNet10) | Accuracy (Net4) | Accuracy (CGS-Net)
40% | 94.88 | 88.80 | 97.44
50% | 95.04 | 90.19 | 97.90
60% | 96.04 | 92.25 | 98.72
70% | 96.88 | 94.40 | 99.09
100% | 97.28 | 95.71 | 99.59
Table 7. Ablation Experiments.

Method | Complex-Valued Base Module | Reconstruction Task | Accuracy (%)
ResNet10 | × | × | 98.89
Complex-ResNet10 | ✓ | × | 99.01
Proposed Method | ✓ | ✓ | 99.59
Table 8. Comparison Among Different Numbers of Sub-Apertures.

Number of Sub-Apertures | Accuracy (%)
0 | 98.89
2 | 99.26
3 | 99.59
4 | 99.38