Communication

Intrapulse Modulation Radar Signal Recognition Using CNN with Second-Order STFT-Based Synchrosqueezing Transform

College of Communication Engineering, Jilin University, Changchun 130012, China
* Author to whom correspondence should be addressed.
Remote Sens. 2024, 16(14), 2582; https://doi.org/10.3390/rs16142582
Submission received: 12 May 2024 / Revised: 10 July 2024 / Accepted: 11 July 2024 / Published: 14 July 2024

Abstract

Intrapulse modulation classification of radar signals plays an important role in modern electronic reconnaissance, countermeasures, etc. In this paper, to improve the recognition rate at low signal-to-noise ratio (SNR), we propose a recognition method using the second-order short-time Fourier transform (STFT)-based synchrosqueezing transform (FSST2) combined with a modified convolutional neural network, which we name MeNet. In particular, the radar signals are first preprocessed via time–frequency analysis using the STFT-based FSST2. Then, the informative features of the time–frequency images (TFIs) are deeply learned and classified through the MeNet with several specific convolutional blocks. The simulation results show that the overall recognition rate for seven types of intrapulse modulation radar signals reaches 95.6%, even when the SNR is −12 dB. Compared with other networks, this excellent recognition rate proves the superiority of our method.

1. Introduction

The intrapulse modulation radar signal is usually frequency or phase modulated within a pulse duration [1,2], which can improve the range resolution via pulse compression. With the emergence of new radar equipment in aerospace and electronic systems, the intrapulse modulation types of radar signals are increasingly diverse. Radar signal classification and identification plays an important role in electronic reconnaissance, electronic countermeasures, spectrum management, etc. Conventionally, the intrapulse modulation recognition of radar signals is divided into two stages. First, a feature extraction algorithm is used to manually extract radar signal features from the original data, including time–domain features [2], frequency–domain features, time–frequency–domain features [3], autocorrelation functions, and high-order statistical features. Then, a classifier is applied to these features to realize the classification of radar signals. However, the recognition rate of such methods decreases significantly at low signal-to-noise ratio (SNR). For instance, when the SNR is smaller than −8 dB, the recognition rate of these algorithms is lower than 60% [4]. Improving the recognition rate in the low SNR regime is therefore a topic of interest in radar signal recognition.
In recent years, deep learning has been applied to the field of radar signal recognition [5,6,7]. Because the convolutional neural network (CNN) can effectively retain feature information when processing multi-level features, image-based radar signal recognition has been widely used. In [8], the autocorrelation function of the extracted signal was transformed into images and sent to the SE-Net network to recognize six modulation types of radar signals. In [9], an intrapulse modulation radar signal recognition method using a high-order spectrum and a residual shrinkage network was presented. More recently, the time–frequency images (TFIs) of radar signals have been investigated as the input of CNNs for radar modulation recognition. In [10], time–frequency analysis technology was used to generate Choi–Williams distribution (CWD) TFIs from seven modulation types and classify them using a deep residual network with a convolutional block attention module (CBAM). In [11], the CWD-TFIs were processed by a CNN structure with skip connections designed to maintain informative identity, and thirteen kinds of signals were recognized with a recognition rate of 80% at −10 dB. In [12], a joint feature map of TFIs and an autocorrelation diagram of radar signals was proposed to achieve better classification performance for phase-modulated and frequency-modulated signals; however, its dataset processing is complex. In [13], the smoothed pseudo Wigner–Ville distribution (SPWVD) TFIs of eight modulation types were denoised by adaptive singular value reconstruction and sent to a designed deep residual network for identification. For radar modulation recognition with small samples, in [14], an RRSARNet network based on meta transfer learning was proposed to process short-time Fourier transform (STFT)-based TFIs of six radar modulation types, and it completed the recognition with a small sample dataset at relatively low SNR. For coexisting radar–communication systems, an efficient RaComNet was designed for waveform recognition based on SPWVD and CNN in the presence of channel impairments [15]. Although the existing work has improved the recognition rate for different radar signals, these systems cannot achieve robust performance at very low SNR, such as from −10 dB to −16 dB. In addition, they are usually accompanied by high computational complexity.
In this paper, to overcome these shortcomings, an intrapulse modulation type recognition method is proposed using the second-order STFT-based synchrosqueezing transform (FSST2) combined with a modified CNN, which we name MeNet. In particular, the radar signals are first preprocessed via time–frequency analysis (TFA) with FSST2. Then, the informative features of the TFIs are deeply learned and classified through the MeNet with several specific convolutional blocks. Compared with traditional TFA technologies, synchrosqueezing-based TFA has a better denoising effect [16], higher time–frequency resolution, and stronger noise resistance. The FSST2-based TFIs generated from the radar signals are then processed by MeNet, yielding a better recognition rate.
The main contributions of this paper are as follows:
  • A new intrapulse modulation radar signal recognition method is proposed and verified by simulations to achieve a high recognition rate under low SNR.
  • The second-order STFT-based synchrosqueezing transform is introduced into the feature extraction to generate TFIs for radar signal recognition, which has a strong anti-noise effect.
  • An efficient convolutional network named MeNet is designed, which combines a residual structure, depthwise (dw) convolution, and pointwise (pw) convolution to reduce the model complexity and improve the recognition rate by fully learning the informative features.

2. Intrapulse Modulation Radar Signal Model

The transmitted radar signals are usually accompanied by noise in a complex electronic environment. Assume that white Gaussian noise is added to the signals; the SNR is defined as
\[ \mathrm{SNR} = 10 \log_{10}\!\left(\frac{P_s}{P_n}\right), \]
where $P_s$ and $P_n$ are the effective powers of the signal and the noise, respectively. The model of the received radar signal can be written as
\[ x(t) = s(t) + n(t), \]
where $s(t)$ and $n(t)$ denote the modulated radar signal and the white Gaussian noise, respectively; $s(t)$ is given by
\[ s(t) = A(t)\, e^{j\left(2\pi f_0 t + \phi(t) + \phi_0\right)}, \]
where $A(t)$, $f_0$, and $\phi_0$ denote the amplitude, carrier frequency, and initial phase of the radar signal, respectively, and $\phi(t)$ is a phase function that determines the modulation type of the radar signal.
Seven types of representative intrapulse modulation radar signals are investigated in this paper, including (a) frequency modulation signals: linear frequency modulation (LFM), sinusoidal frequency modulation (SFM), and even quadratic frequency modulation (EQFM); (b) frequency-coded signals: binary frequency shift keying (2FSK); (c) phase-coded signals: binary phase-shift keying (BPSK) and quaternary phase-shift keying (QPSK); (d) signals with no extra modulation: conventional waveform (CW). The expressions of the seven types of intrapulse modulation radar signals are shown in Table 1, where $f_i$ denotes the $i$th frequency of the 2FSK signal, $\varphi_i$ denotes the $i$th modulated phase parameter of the BPSK and QPSK signals, $k$ denotes the frequency modulation coefficient, and $\varphi$ is the initial phase.
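For illustration, the following is a minimal sketch (not the authors' code) of generating one noisy LFM pulse at a target SNR according to the signal model above; the carrier frequency, bandwidth, and sampling values are assumptions.

```python
import numpy as np

def lfm_pulse(n_samples=1024, fs=200e6, f0=20e6, bandwidth=30e6, amp=1.0, phi0=0.0):
    """LFM signal s(t) = A * exp(j(2*pi*f0*t + pi*k*t^2 + phi0))."""
    t = np.arange(n_samples) / fs
    k = bandwidth / t[-1]                      # frequency-modulation coefficient
    return amp * np.exp(1j * (2 * np.pi * f0 * t + np.pi * k * t**2 + phi0))

def add_awgn(s, snr_db):
    """Add complex white Gaussian noise so that 10*log10(Ps/Pn) = snr_db."""
    p_s = np.mean(np.abs(s) ** 2)
    p_n = p_s / (10 ** (snr_db / 10))
    noise = np.sqrt(p_n / 2) * (np.random.randn(*s.shape) + 1j * np.random.randn(*s.shape))
    return s + noise

x = add_awgn(lfm_pulse(), snr_db=-12)          # received signal x(t) = s(t) + n(t)
```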

3. Proposed Intrapulse Modulation Recognition Method Using CNN with FSST2

3.1. Time–Frequency Analysis

In order to distinguish different radar modulation types, TFA technology is applied. By time–frequency transformation, the one-dimensional (1D) signals can be converted into two-dimensional (2D) TFIs to better extract the joint information in the time domain and the frequency domain. The generated TFIs are used as the input of the deep neural network to distinguish different modulation types. STFT is commonly used as a TFA technology to generate the TFIs of signals and is an efficient solution for extracting time–frequency features. The STFT of a signal $s(t)$ can be given by
\[ \mathrm{STFT}(t,f) = \int_{-\infty}^{+\infty} s(\tau)\, \eta^{*}(\tau - t)\, e^{-j 2\pi f (\tau - t)}\, d\tau, \]
where $\eta(t)$ denotes the window function, $t$ is the time delay, and $*$ stands for complex conjugation. Obviously, the 2D $\mathrm{STFT}(t,f)$ data can be regarded as a TFI.
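As a minimal sketch (an assumption, not the authors' pipeline), an STFT-based TFI for the noisy pulse x from the earlier sketch can be produced with scipy; the window type, segment length, and overlap are assumed values.

```python
import numpy as np
from scipy.signal import stft

fs = 200e6
f_axis, t_axis, Zxx = stft(x, fs=fs, window='hann', nperseg=128, noverlap=120,
                           return_onesided=False)   # two-sided spectrum for complex input
tfi = np.abs(Zxx)                                    # magnitude of STFT(t, f) as the TFI
```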
As a typical linear time–frequency transform, the STFT has no cross-term interference problem. However, the energy aggregation of its TFA results is not high, and once the window function is fixed, the time–frequency resolution is also fixed. To solve this problem, the synchrosqueezing transform (SST) is applied to form the Fourier synchrosqueezing transform (FSST). By combining the advantages of reassignment with the reversibility of the linear TFA method, the FSST can compress the time–frequency curve along the frequency direction, which improves the time–frequency aggregation of the signal and reduces the influence of noise on the signal, making it easier to distinguish different signal waveforms. In the FSST, a synchrosqueezing operator $T_f(t,\omega)$ is used to reassign $(t,f)$ to a new position $(t, \hat{\omega}_f(t,f))$, which can be written as
\[ T_f(t,\omega) = \frac{1}{\eta^{*}(0)} \int \mathrm{STFT}(t,f)\, \delta\big(\omega - \hat{\omega}_f(t,f)\big)\, df, \]
where the instantaneous frequency is given by
\[ \hat{\omega}_f(t,f) = \mathrm{Re}\!\left\{ \frac{1}{j2\pi} \frac{\partial_t \mathrm{STFT}(t,f)}{\mathrm{STFT}(t,f)} \right\}, \]
where $\partial_t \mathrm{STFT}(t,f)$ denotes the partial derivative of $\mathrm{STFT}(t,f)$ with respect to $t$. To further improve the analysis of signals with rapidly changing frequency, the second-order FSST, called FSST2 [17], is introduced for more accurate instantaneous frequency (IF) estimates and is given by
\[ T_{2,f}(t,\omega) = \frac{1}{\eta^{*}(0)} \int \mathrm{STFT}(t,f)\, \delta\big(\omega - \hat{\omega}^{[2]}_{t,f}(t,f)\big)\, df, \]
where the second-order local instantaneous frequency is given by
\[ \hat{\omega}^{[2]}_{t,f}(t,f) = \begin{cases} \hat{\omega}_f(t,f) + \hat{q}_{t,f}(t,f)\big(t - \hat{\tau}_f(t,f)\big), & \text{if } \partial_t \hat{\tau}_f(t,f) \neq 0, \\ \hat{\omega}_f(t,f), & \text{otherwise}, \end{cases} \]
with $\hat{q}_{t,f}(t,f)$ being the second-order local complex modulation operator, defined as
\[ \hat{q}_{t,f}(t,f) = \frac{\partial_t \hat{\omega}_f(t,f)}{\partial_t \hat{\tau}_f(t,f)}, \quad \text{whenever } \partial_t \hat{\tau}_f(t,f) \neq 0. \]
The expressions $\hat{\omega}_f(t,f)$ and $\hat{\tau}_f(t,f)$ are the local complex reference frequency and the local complex delay, respectively. They are defined as
\[ \hat{\omega}_f(t,f) = \frac{1}{j2\pi} \frac{\partial_t \mathrm{STFT}(t,f)}{\mathrm{STFT}(t,f)}, \]
\[ \hat{\tau}_f(t,f) = t - \frac{1}{j2\pi} \frac{\partial_f \mathrm{STFT}(t,f)}{\mathrm{STFT}(t,f)}. \]
The expressions $\partial_t \hat{\omega}_f(t,f)$ and $\partial_t \hat{\tau}_f(t,f)$ denote the partial derivatives of $\hat{\omega}_f(t,f)$ and $\hat{\tau}_f(t,f)$ with respect to $t$.
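To make the synchrosqueezing idea concrete, the following is an illustrative sketch (an assumption, not the authors' implementation) of the first-order FSST only: the instantaneous frequency is estimated from the ratio of a derivative-window STFT to the ordinary STFT, and each coefficient is then reassigned along the frequency axis. FSST2 would add the second-order correction above; the window length and hop size are assumed values.

```python
import numpy as np

def fsst_sketch(x, fs, win_len=128, hop=8):
    g = np.hanning(win_len)                    # analysis window eta(t)
    dg = np.gradient(g) * fs                   # derivative window eta'(t), per second
    freqs = np.fft.fftfreq(win_len, d=1.0 / fs)
    centers = np.arange(win_len // 2, len(x) - win_len // 2, hop)
    stft_g = np.empty((win_len, len(centers)), dtype=complex)
    stft_dg = np.empty_like(stft_g)
    for k, c in enumerate(centers):
        seg = x[c - win_len // 2 : c + win_len // 2]
        stft_g[:, k] = np.fft.fft(seg * g)     # STFT with eta
        stft_dg[:, k] = np.fft.fft(seg * dg)   # STFT with eta'
    sst = np.zeros_like(stft_g)
    df = fs / win_len
    for k in range(len(centers)):
        for i in range(win_len):
            if np.abs(stft_g[i, k]) > 1e-8:
                # IF estimate: omega_hat = f - Im{STFT_eta'(t,f) / STFT_eta(t,f)} / (2*pi)
                w_hat = freqs[i] - np.imag(stft_dg[i, k] / stft_g[i, k]) / (2 * np.pi)
                j = int(np.round(w_hat / df)) % win_len
                sst[j, k] += stft_g[i, k]      # squeeze energy onto the reassigned bin
    return np.abs(sst)
```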
Using STFT, FSST, and FSST2, respectively, the TFIs of seven intrapulse modulation radar signals can be generated. Figure 1 shows the TFIs when SNR is −8 dB.

3.2. Data Preprocessing

To reduce the computational load and training time, the TFIs need to be preprocessed before the subsequent training procedure. The original TFIs are in color and large in size. If they were directly used as the input of a CNN, the network complexity would be prohibitively high and the training process would be prone to overfitting. Therefore, a grayscale image is obtained by grayscale conversion of the color image, and the grayscale image is then scaled by bilinear interpolation to further reduce the data size of the input image and the training time of the network. The size of the scaled TFI is 64 × 64. Finally, a dataset with different noise levels is built. For the training set, 500 samples per 2 dB step are generated for each signal type with the SNR ranging from −14 dB to 2 dB; for the test set, 100 samples per 2 dB step are generated for each signal type with the SNR ranging from −16 dB to 4 dB. In total, 31,500 training samples and 7700 test samples are available. The specific parameters of the radar signals are shown in Table 2, where $f_0$ is the carrier frequency, $B$ is the bandwidth, $f_m$ is the minimum frequency of the LFM, SFM, and EQFM signals, and $f_1$ and $f_2$ are the two frequencies of the 2FSK signal. The frequency and bandwidth parameters of each signal are normalized to the sampling frequency.
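A minimal sketch of this preprocessing (an assumption, not the authors' exact pipeline): convert a colored TFI to grayscale and bilinearly resize it to 64 × 64.

```python
import numpy as np
from PIL import Image

def preprocess_tfi(tfi_rgb):
    """tfi_rgb: H x W x 3 uint8 time-frequency image; returns a 64 x 64 float array in [0, 1]."""
    gray = Image.fromarray(tfi_rgb).convert('L')             # grayscale conversion
    small = gray.resize((64, 64), resample=Image.BILINEAR)   # bilinear downscaling
    return np.asarray(small, dtype=np.float32) / 255.0
```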

3.3. Network Construction

CNN is an efficient image classification structure. A traditional CNN mainly obtains the input feature information through convolution layers and classifies it layer by layer. To achieve more accurate classification, the network depth of a CNN is usually increased. However, this can lead to vanishing gradients and other problems, and increasing the depth too much has only a limited effect on improving accuracy.
In this paper, MeNet, a modified CNN structure, is proposed, which combines a residual structure, depthwise (dw) convolution, and pointwise (pw) convolution to form an efficient convolutional module. We note that dw convolution differs from standard convolution: a standard convolution kernel operates over all input channels, whereas each dw convolution kernel corresponds to a single input channel. The pw convolution is an ordinary convolution with a 1 × 1 kernel. The combination of dw and pw convolution greatly reduces the complexity of the model.
As the core of MeNet, one module (a) and three modules (b) comprehensively extract and learn deep features. As shown in Figure 2a, module (a) is composed of four convolution layers and enhances feature diversity by adopting dw convolution branches and a skip connection to collect their features. First, pw convolution is used to increase the dimension of the input, and then pw convolution is exploited to reduce the dimension; these efficient and dense pw convolution layers perform the feature learning. Branch 1 uses a dw convolution with a 3 × 3 kernel to extract information, and the output of this branch is added to the input. Branch 2 uses a dw convolution with a 5 × 5 kernel to extract information. The two branches are finally concatenated to obtain the output. The dw convolution layer with the larger kernel has fewer channels to limit complexity. Compared with module (a), module (b) has no branch 2, which does not add too much burden to the system, as shown in Figure 2b. A minimal sketch of these blocks is given below.
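A hedged sketch of module (a) and module (b) built from the description above; the exact channel counts, activations, and normalization are assumptions, not the authors' settings, and the input is assumed to already have `channels` feature maps.

```python
import tensorflow as tf
from tensorflow.keras import layers

def module_a(x, channels=32, expand=64):
    inp = x
    x = layers.Conv2D(expand, 1, activation='relu')(x)        # pw: increase dimension
    x = layers.Conv2D(channels, 1, activation='relu')(x)      # pw: reduce dimension
    b1 = layers.DepthwiseConv2D(3, padding='same', activation='relu')(x)  # branch 1: 3x3 dw
    b1 = layers.Add()([b1, inp])                               # skip connection to the input
    b2 = layers.DepthwiseConv2D(5, padding='same', activation='relu')(x)  # branch 2: 5x5 dw
    return layers.Concatenate()([b1, b2])                      # concatenate the two branches

def module_b(x, channels=32, expand=64):
    inp = x
    x = layers.Conv2D(expand, 1, activation='relu')(x)         # pw: increase dimension
    x = layers.Conv2D(channels, 1, activation='relu')(x)       # pw: reduce dimension
    b1 = layers.DepthwiseConv2D(3, padding='same', activation='relu')(x)  # branch 1 only
    return layers.Add()([b1, inp])                              # skip connection, no branch 2
```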
The overall network architecture of MeNet is shown in Figure 3. The input is the signal TFI with a size of 64 × 64 × 1. The first layer is a convolution layer with a 3 × 3 kernel and a stride of 1, which directly extracts coarse feature information; it is followed by a pooling layer with a 2 × 2 kernel and a stride of 2 to reduce the computational complexity. The number of channels is set to 32. After that, the designed module (a) is applied to fully extract informative features, followed by three modules (b). A maximum pooling layer is placed between consecutive modules to condense the feature information and reduce the number of parameters. Finally, the signal category is obtained from a fully connected layer followed by the Softmax function, which is given by
\[ y_i = \frac{e^{x_i}}{\sum_{k=1}^{K} e^{x_k}}. \]
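A hedged sketch assembling the overall MeNet topology described above, reusing the module_a and module_b sketches; the pooling placement and channel counts beyond those stated in the text are assumptions.

```python
from tensorflow.keras import layers, Model

def build_menet(num_classes=7):
    inp = layers.Input(shape=(64, 64, 1))
    x = layers.Conv2D(32, 3, strides=1, padding='same', activation='relu')(inp)  # 3x3 conv, stride 1
    x = layers.MaxPooling2D(pool_size=2, strides=2)(x)                           # 2x2 pooling, stride 2
    x = module_a(x, channels=32)                    # one module (a); concatenation gives 64 channels
    for _ in range(3):                              # three modules (b), max pooling in between
        x = layers.MaxPooling2D(pool_size=2, strides=2)(x)
        x = module_b(x, channels=64)
    x = layers.Flatten()(x)
    out = layers.Dense(num_classes, activation='softmax')(x)   # FC layer + Softmax
    return Model(inp, out)
```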

4. Simulation Results

In the following experiments, the performance of the proposed intrapulse modulation radar signal recognition method is validated. The number of sampling points is 1024 and the sampling frequency is 200 MHz for each signal. The 13-bit Barker code and the 16-bit Frank code are used to generate the BPSK and QPSK signals, respectively. The signal frequency and bandwidth parameters are randomly configured for each sample according to the ranges in Table 2. By time–frequency transformation with FSST2 followed by TFI preprocessing, the dataset with 31,500 training samples and 7700 test samples is generated as described in Section 3.2. The number of iterations is set to 30, the mini-batch size for each iteration is set to 256, and the learning rate is initialized to 0.004. The experiments are run on a computer with an Intel(R) Core(TM) i7-9700K 3.6 GHz CPU, 16 GB RAM, and an NVIDIA GeForce RTX 2070 6 GB GPU, using TensorFlow, Python, CUDA 10.1, cuDNN, and MATLAB R2023a.
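A minimal training sketch (not the authors' script) matching the stated setup of 30 epochs, mini-batch size 256, and an initial learning rate of 0.004; the optimizer choice and the arrays x_train, y_train, x_test, y_test (preprocessed 64 × 64 × 1 TFIs with integer class labels) are assumptions.

```python
import tensorflow as tf

model = build_menet(num_classes=7)                 # from the sketch in Section 3.3
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.004),
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.fit(x_train, y_train, epochs=30, batch_size=256,
          validation_data=(x_test, y_test))
```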

4.1. Recognition Rate vs. SNR

In this experiment, we test the probability of successful recognition (PSR) for each type of intrapulse modulation radar signal using the proposed MeNet. The TFIs are built based on FSST2. The results for SNRs ranging from −16 dB to 4 dB are shown in Figure 4. At high SNR, the TFIs of the signals yield high PSR and are easy to distinguish from each other. When the SNR ≥ −8 dB, the PSRs of all types of signals almost reach 100%. When the SNR is −10 dB, the PSRs of all types of signals exceed 90%. When the SNR is −12 dB, the PSRs of the CW, EQFM, LFM, and SFM signals exceed 95%, and those of the 2FSK and BPSK signals reach almost 90%. Even when the SNR is −14 dB, the PSRs of all signal types except BPSK remain above 70%, and that of the CW signal remains above 90%.
In addition, the overall recognition rate at a given SNR, averaged over the seven types of intrapulse modulation radar signals, is calculated as the number of correctly predicted samples divided by the total number of samples in the corresponding test set. With this metric, the proposed MeNet achieves an overall recognition rate of 95.6% for the seven types of intrapulse modulation radar signals when the SNR is −12 dB.
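A minimal sketch (an assumption) of computing the overall recognition rate at one SNR from the trained model's predictions; x_test_snr and y_test_snr denote the test TFIs and labels belonging to that single SNR level.

```python
import numpy as np

probs = model.predict(x_test_snr)
y_pred = np.argmax(probs, axis=1)
overall_rate = np.mean(y_pred == y_test_snr)   # correctly predicted samples / total samples
```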
The confusion matrix of radar modulation recognition at an SNR of −12 dB is shown in Figure 5. An in-depth analysis of the confusion matrix of the seven types of radar signals shows that the PSR of most signals is very high. Nevertheless, it also indicates that the 2FSK and BPSK signals are prone to mutual misclassification, particularly under challenging SNR conditions. By inspecting the misclassified samples in the test set, we find that, when the difference between $f_1$ and $f_2$ in the 2FSK signal is smaller, the energy superposition at the edge frequencies becomes higher and the frequency range over which the spectral energy is concentrated becomes smaller, resulting in a certain degree of similarity between the 2FSK and BPSK signals in the time–frequency domain. Moreover, in a low SNR environment, noise may cause a frequency shift, which further reduces the difference in time–frequency characteristics between the two signals and makes them more difficult to distinguish from one another. We further calculate the Pearson correlation coefficients between the 2FSK and BPSK datasets and obtain a maximum of 0.68 for the confused samples, which also reflects the similarity between these two types of signals when the SNR is low.
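A minimal sketch (an assumption) of the Pearson correlation check between two TFIs, e.g., a misclassified 2FSK image and a BPSK image, flattened to one-dimensional vectors.

```python
import numpy as np

def tfi_pearson(tfi_a, tfi_b):
    return np.corrcoef(tfi_a.ravel(), tfi_b.ravel())[0, 1]
```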
Although the recognition rates for the 2FSK and BPSK signals are lower than the overall recognition rate of 95.6% at an SNR of −12 dB, they can be well distinguished at low SNR if the frequency difference between $f_1$ and $f_2$ in the 2FSK signal becomes larger. Therefore, the proposed MeNet can serve as a classifier for the seven types of intrapulse modulation radar signals.

4.2. Comparison of Different TFIs

In this experiment, the PSR obtained from different TFIs is analyzed. By comparing the PSR of FSST2 TFIs with that of STFT and FSST TFIs, we verify the advantages of the FSST2-based TFIs used in our method. First, three TFIs are generated for each type of radar signal, transformed into 64 × 64 images using the same preprocessing, and then sent to the MeNet. The resulting overall recognition rates are shown in Figure 6. When the SNR is greater than −6 dB, the PSR of the three TFIs is close to 100%. However, as the SNR decreases, the superiority of FSST2 gradually becomes obvious. When the SNR is reduced to −12 dB, the PSR of FSST drops to about 93% and the PSR of STFT to 91%, whereas the PSR of the proposed FSST2-based TFIs is still 95.6%. Therefore, the FSST2-based TFIs perform better than the other TFIs.
FSST2 enhances the time–frequency characteristics at a small computational cost. Table 3 shows the average computation time of STFT, FSST, and FSST2 when generating the same TFI dataset from all the samples. As shown in Table 3, FSST has the smallest computation time. FSST2 takes relatively more time than FSST; however, the additional time cost does not exceed 23%, showing that FSST2 remains computationally efficient in generating the dataset while improving the time–frequency performance.

4.3. Comparison of Different Networks

In this experiment, we verify the advantages of the proposed MeNet by comparing it with existing CNN networks in the literature, including the LPI-Net in [11], the DCNN structure in [12], and the deep residual network in [13], as shown in Figure 7. We observe that when the SNR ≥ −8 dB, the recognition rates of the four networks are close to 100%. As the SNR decreases, the PSR of our MeNet structure is higher than those of the other CNN networks, by 1.2–12% for SNRs from −10 dB to −16 dB. It can be seen that our MeNet structure can greatly improve the recognition rate under low SNR.
In addition, we analyze the computational complexity and the training and test time of MeNet. The number of floating-point operations (FLOPs) is commonly used to reflect the computational complexity of a network. Compared with 75 K in [11], 209 K in [12], and 360 K in [13], the FLOPs of MeNet are 59 K, giving it relatively low computational complexity. In addition to the FLOPs, the training and test time of MeNet is also analyzed. The actual training and test time is affected by other factors, such as the designed network structure, the use of hardware accelerators, and the batch size of the input data. We conduct running-time experiments using the full training set and the test set at an SNR of −12 dB. The comparison results are shown in Table 4. They show that the training and test time of our MeNet structure is larger than that in [12] but smaller than those in [11,13]. Although [12] outperforms our MeNet in running time, it is inferior to MeNet in PSR performance. The above experiments fully validate the superiority of the proposed MeNet, which has the highest recognition rate and a relatively small running time among the compared CNN networks.

4.4. Ablation Study for MeNet

Finally, to verify the effectiveness of module (a) and module (b) in our MeNet, ablation experiments are conducted to evaluate the recognition performance at an SNR of −12 dB. MeNet is composed of one module (a) and three modules (b); convolutional layers with the corresponding kernel sizes and numbers of channels are used to replace the modules that are removed. To demonstrate the individual contribution of module (a) and module (b), we conduct four experiment groups: in Experiment 1, both types of modules are removed; in Experiment 2, module (b) is removed; in Experiment 3, module (a) is removed; and in Experiment 4, both types of modules are included. In addition, the F1 value is used to comprehensively evaluate the performance of the network structure. The experimental results are shown in Table 5. Comparing the four experiments, the recognition rate and F1 value of Experiment 1 are the lowest among the four groups. Compared with Experiment 1, Experiments 2 and 3 each improve the recognition performance. By combining module (a) and module (b), Experiment 4 enables the entire neural network to extract more significant features, proving that the residual structure, depthwise convolution, and pointwise convolution improve the feature extraction ability of the neural network and yield the best recognition performance.
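A minimal sketch (an assumption) of computing the recognition rate and the F1 value (macro-averaged over the seven classes) reported in Table 5, using scikit-learn; y_test_snr and y_pred are the labels and predictions at −12 dB as in Section 4.1.

```python
from sklearn.metrics import accuracy_score, f1_score

recognition_rate = accuracy_score(y_test_snr, y_pred)
f1_value = f1_score(y_test_snr, y_pred, average='macro')
```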

5. Conclusions

In this paper, an intrapulse modulation radar signal recognition method is presented using the second-order STFT-based synchrosqueezing transform combined with a modified CNN called MeNet. FSST2 is introduced into the feature extraction to build TFIs for radar signal recognition, which inherits the anti-noise and high time–frequency aggregation advantages of the synchrosqueezing transform. In addition, an efficient CNN structure, MeNet, is designed, which can effectively identify intrapulse modulation radar signals at low SNR. The simulation results show that the proposed method achieves a 95.6% overall recognition rate even when the SNR is −12 dB, which is superior to other existing CNN networks and applicable to complex electromagnetic environments.

Author Contributions

Conceptualization, N.D. and H.J.; methodology, N.D.; software, Y.L.; validation, J.Z.; formal analysis, J.Z.; investigation, Y.L. and J.Z.; resources, H.J.; data curation, N.D.; writing—original draft preparation, N.D.; writing—review and editing, H.J.; visualization, Y.L.; supervision, H.J.; project administration, H.J.; funding acquisition, H.J. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Natural Science Foundation of Jilin Province under Grants 20220101100JC and 20180101329JC, and the National Natural Science Foundation of China under Grants 61371158 and 61771217.

Data Availability Statement

The data of this research can be made available by request from the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Kawalec, A.; Owczarek, R. Radar emitter recognition using intrapulse data. In Proceedings of the 15th International Conference on Microwaves, Radar and Wireless Communications (IEEE Cat. No.04EX824), Warsaw, Poland, 17–19 May 2004; pp. 435–438.
  2. Yuan, S.; Wu, B.; Li, P. Intra-pulse modulation classification of radar emitter signals based on a 1-D selective kernel convolutional neural network. Remote Sens. 2021, 13, 2799.
  3. Gao, L.; Zhang, X.; Gao, J.; You, S. Fusion image based radar signal feature extraction and modulation recognition. IEEE Access 2019, 7, 13135–13148.
  4. Zhang, M.; Liu, L.; Diao, M. LPI radar waveform recognition based on time–frequency distribution. Sensors 2016, 16, 1682.
  5. Wei, S.; Qu, Q.; Zeng, X.; Liang, J.; Shi, J.; Zhang, X. Self-attention bi-LSTM networks for radar signal modulation recognition. IEEE Trans. Microw. Theory Techn. 2021, 69, 5160–5172.
  6. Qu, Q.; Wei, S.; Wu, Y.; Wang, M. ACSE networks and autocorrelation features for PRI modulation recognition. IEEE Commun. Lett. 2020, 24, 1729–1733.
  7. Chen, S.; Zheng, S.; Yang, L.; Yang, X. Deep learning for large-scale real-world ACARS and ADS-B radio signal classification. IEEE Access 2019, 7, 89256–89264.
  8. Wei, S.; Qu, Q.; Wu, Y.; Wang, M.; Shi, J. PRI modulation recognition based on squeeze-and-excitation networks. IEEE Commun. Lett. 2020, 24, 1047–1051.
  9. Chen, K.; Zhu, L.; Chen, S.; Zhang, S.; Zhao, H. Deep residual learning in modulation recognition of radar signals using higher-order spectral distribution. Measurement 2021, 185, 109945.
  10. Jin, X.; Ma, J.; Ye, F. Radar signal recognition based on deep residual network with attention mechanism. In Proceedings of the 2021 IEEE 4th International Conference on Electronic Information and Communication Technology (ICEICT), Xi'an, China, 18–20 August 2021; pp. 428–432.
  11. Huynh-The, T.; Doan, V.-S.; Hua, C.-H.; Pham, Q.-V.; Nguyen, T.-V.; Kim, D.-S. Accurate LPI radar waveform recognition with CWD-TFA for deep convolutional network. IEEE Wireless Commun. Lett. 2021, 10, 1638–1642.
  12. Wang, F.; Yang, C.; Huang, S.; Wang, H. Automatic modulation classification based on joint feature map and convolutional neural network. IET Radar Sonar Navig. 2019, 13, 998–1003.
  13. Chen, K.; Zhang, S.; Zhu, L.; Chen, S.; Zhao, H. Modulation recognition of radar signals based on adaptive singular value reconstruction and deep residual learning. Sensors 2021, 21, 449.
  14. Lang, P.; Fu, X.; Martorella, M.; Dong, J.; Qin, R.; Feng, C.; Zhao, C. RRSARNet: A novel network for radar radio sources adaptive recognition. IEEE Trans. Veh. Technol. 2021, 70, 11483–11498.
  15. Huynh-The, T.; Pham, Q.-V.; Nguyen, T.-V.; da Costa, D.B.; Kim, D.-S. RaComNet: High-performance deep network for waveform recognition in coexistence radar-communication systems. In Proceedings of the IEEE International Conference on Communications (ICC), Seoul, Republic of Korea, 16–20 May 2022; pp. 1–6.
  16. Auger, F.; Flandrin, P.; Lin, Y.-T.; McLaughlin, S.; Meignen, S.; Oberlin, T.; Wu, H.-T. Time-frequency reassignment and synchrosqueezing: An overview. IEEE Signal Process. Mag. 2013, 30, 32–41.
  17. Oberlin, T.; Meignen, S.; Perrier, V. Second-order synchrosqueezing transform or invertible reassignment? Towards ideal time-frequency representations. IEEE Trans. Signal Process. 2015, 63, 1335–1344.
Figure 1. TFIs of seven signals when SNR is −8 dB; '−1', '−2', '−3' denote STFT, FSST, and FSST2, respectively.
Figure 2. Description of a convolutional block in MeNet: (a) module (a); (b) module (b).
Figure 3. The overall network architecture of MeNet; 'conv' represents the convolutional layer; 'maxpool' represents the maximum pooling; the numbers before and after 'conv' represent the size of the convolution kernel and the stride, respectively.
Figure 4. Recognition rate of seven types of radar signals with our method under different SNRs.
Figure 5. Confusion matrix of radar modulation recognition at an SNR of −12 dB.
Figure 6. Recognition rate of MeNet with different TFIs.
Figure 7. Overall system recognition rate for the proposed MeNet, the LPI-Net in [11], the DCNN structure in [12], and the deep residual network in [13].
Table 1. Expressions of the intrapulse modulation radar signals.

Modulation Type | Signal Expression
CW   | $s(t) = A e^{j(2\pi f_0 t + \varphi)}$
LFM  | $s(t) = A e^{j(2\pi f_0 t + \pi k t^{2} + \varphi)}$
SFM  | $s(t) = A e^{j\left(2\pi f_0 t + \frac{k}{f_n}\sin(2\pi f_n t) + \varphi\right)}$
EQFM | $s(t) = A e^{j\left(2\pi f_0 t + \pi k \left(t - \frac{\pi}{2}\right)^{3} + \varphi\right)}$
2FSK | $s(t) = \sum_{i=1}^{2} A e^{j(2\pi f_i t + \varphi)}$
BPSK | $s(t) = \sum_{i=1}^{2} A e^{j(2\pi f_0 t + \varphi_i)}$
QPSK | $s(t) = \sum_{i=1}^{4} A e^{j(2\pi f_0 t + \varphi_i)}$
Table 2. Specific parameters of radar signals.

Signal | Parameter | Range
CW   | $f_0$ | 0.05–0.3
LFM  | $f_m$ | 0.05–0.3
LFM  | $B$ | 0.05–0.2
SFM  | $f_m$ | 0.05–0.3
SFM  | $B$ | 0.05–0.2
EQFM | $f_m$ | 0.05–0.3
EQFM | $B$ | 0.05–0.2
2FSK | $f_1$ | 0.05–0.3
2FSK | $f_2$ | 0.05–0.2
BPSK | $f_0$ | 0.05–0.3
BPSK | Barker code length | 7, 11, 13
QPSK | $f_0$ | 0.05–0.3
QPSK | Frank code length | 16
Table 3. Computation time of different methods to generate TFIs.

Method to Generate TFIs | Computation Time (s)
STFT  | 42,074
FSST  | 39,461
FSST2 | 48,259
Table 4. Training time and test time for different networks.

Network | Training Time (s) | Test Time (s)
LPI-Net in [11] | 622.14 | 6.21
DCNN in [12] | 224.69 | 3.01
Deep residual network in [13] | 797.34 | 10.48
MeNet | 468.25 | 6.07
Table 5. Results of the ablation study for MeNet when the SNR is −12 dB (✓: module included; ✗: module removed).

Experiment Group | Module (a) | Module (b) | Recognition Rate | F1 Value
1 | ✗ | ✗ | 91.28% | 91.14%
2 | ✓ | ✗ | 92.85% | 92.83%
3 | ✗ | ✓ | 94.16% | 94.13%
4 | ✓ | ✓ | 95.57% | 95.56%
