Communication

Embedding Soft Thresholding Function into Deep Learning Models for Noisy Radar Emitter Signal Recognition

1 College of Electronic Confrontation, National University of Defense Technology, No. 460 Huangshan Road, Shushan District, Hefei 230037, China
2 Hikvision Research Institute, Hangzhou Hikvision Digital Technology Co., Ltd., No. 555 Qianmo Road, Binjiang District, Hangzhou 310051, China
* Author to whom correspondence should be addressed.
Electronics 2022, 11(14), 2142; https://doi.org/10.3390/electronics11142142
Submission received: 16 June 2022 / Revised: 3 July 2022 / Accepted: 5 July 2022 / Published: 8 July 2022
(This article belongs to the Section Microwave and Wireless Communications)

Abstract

Radar emitter signal recognition against a noisy background is one of the focus areas of research on radar signal processing. In this study, the soft thresholding function is embedded into deep learning network models as a novel nonlinear activation function, achieving advanced radar emitter signal recognition results. Specifically, an embedded sub-network learns the threshold of the soft thresholding function from the input feature, so that each input feature has its own independent nonlinear activation function. Compared with conventional activation functions, the soft thresholding function offers flexible nonlinear conversion and obtains more discriminative features. In this way, noise features can be flexibly filtered out while signal features are retained, thus improving recognition accuracy. Under Gaussian and Laplacian noise with signal-to-noise ratios of −8 dB to −2 dB, experimental results show that the overall average accuracy of the soft thresholding function reached 88.55%, which was 11.82%, 8.12%, 2.16%, and 1.46% higher than those of Sigmoid, PReLU, ReLU, ELU, and SELU, respectively.

1. Introduction

Accurate radar emitter signal recognition is a critical criterion for determining carriers, objectives, functions, and threat levels of radars [1,2,3]. Practical radar signals, however, tend to be mixed with noise. Therefore, the recognition accuracy of radar emitter signals under a low signal-to-noise ratio (SNR) has been a major research topic in the field of radar signal processing. Early radar emitter signal recognition was mainly achieved by feature matching methods, including the grey correlation analysis [4,5], template matching [6,7], fuzzy matching [8], and attribute measurement [9,10] methods. However, with the continuous development of radar technology, the electromagnetic environment becomes increasingly complex and variable. Moreover, because the feature matching method relies heavily on a priori knowledge, it has drawbacks such as low error tolerance, poor robustness, and complex feature extraction, which prevent it from meeting practical requirements. The rapid development of artificial intelligence techniques has promoted their application to solve the problem of radar emitter signal recognition. Most reported methods are based on time–frequency variation signals in combination with deep learning technology. For instance, signal classification and recognition were achieved on the basis of time–frequency transformation and convolutional neural networks [11,12]. Time–frequency images of signals were obtained by short-time Fourier transform and recognized by deep learning methods [13,14,15]. Based on the deep Q-learning network (DQN), Qu et al. [16] used the time–frequency graph extracted from the Cohen class for signal recognition. However, conversion of the time domain to the time–frequency domain is not only time- and computation-intensive, but also generates excessive noise when the SNR is too low. As a result, the feature discrimination of learning is insufficient and the recognition performance is adversely affected. 
Meanwhile, most studies adopt conventional deep learning methods, using conventional nonlinear activation functions, with few improvements to the deep learning models. This study proposes a soft thresholding function (SofT), which is a nonlinear activation function that is suitable for radar emitter signal recognition under a low SNR. Furthermore, a novel network model is established by implementing SofT in deep learning methods. The proposed network model does not require time–frequency transformation but directly uses the original one-dimensional (1D) signal as the input. The model then flexibly filters noise according to the input while retaining signal features to improve the recognition accuracy.

2. Methodology

This section introduces the concept of SofT and the design of the SofT module, which is eventually embedded in deep learning methods to improve the recognition accuracy of noisy radar emitter signals.

2.1. Soft Thresholding Function

SofT is a key parameter in wavelet denoising [17]. In the conventional wavelet denoising algorithm, the noise is transformed into a domain near zero, and SofT then removes the features near zero for denoising. In the denoising process, the choice of the wavelet basis function and the number of decomposition layers as well as the threshold selection rule are the key factors affecting the final denoising effect [18]. Determining these parameters requires considerable relevant knowledge and experience, and is therefore often a difficult task. Deep learning methods can automatically learn the weights of features. The combination of SofT and deep learning methods can improve the recognition accuracy while avoiding the complex denoising parameter design. SofT can be described as:
$$y = \begin{cases} x - \tau, & x > \tau \\ 0, & -\tau \le x \le \tau \\ x + \tau, & x < -\tau \end{cases} \qquad (1)$$
where x and y are the input and output features, respectively, and τ is the positive threshold. According to Equation (1), SofT sets feature values inside the interval [−τ, τ] to zero and retains those outside it. Figure 1a shows SofT. The partial derivative of SofT with respect to x can be described as:
$$\frac{\partial y}{\partial x} = \begin{cases} 1, & x > \tau \\ 0, & -\tau \le x \le \tau \\ 1, & x < -\tau \end{cases} \qquad (2)$$
According to Equation (2), the partial derivative of SofT is either 1 or 0, as shown in Figure 1b, which avoids the gradient vanishing problem.
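As a sketch, the piecewise definition in Equations (1) and (2) can be written directly in code; the function names here are illustrative, not from the paper:

```python
# A minimal sketch of the soft thresholding function (Equation (1)) and its
# partial derivative (Equation (2)). Names are illustrative.

def soft_threshold(x, tau):
    """Shrink |x| by tau; values inside [-tau, tau] are set to zero."""
    if x > tau:
        return x - tau
    if x < -tau:
        return x + tau
    return 0.0

def soft_threshold_grad(x, tau):
    """The derivative is 1 outside the threshold interval and 0 inside it."""
    return 1.0 if abs(x) > tau else 0.0

# Features mapped near zero (noise) are removed; larger features are shifted toward zero.
print(soft_threshold(2.5, 1.0))       # 1.5
print(soft_threshold(-0.4, 1.0))      # 0.0
print(soft_threshold_grad(2.5, 1.0))  # 1.0
```

Because the gradient is always 0 or 1, backpropagation through this nonlinearity does not suffer from vanishing gradients, as the text notes.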

2.2. Soft Thresholding Module

The SofT module uses a sub-network to learn a suitable threshold based on the input features. Therefore, each input feature would have its own independent SofT activation function, which can flexibly filter the noise features while retaining the signal features. The same input and output dimensions are maintained, so it can be easily embedded into the deep learning model. The structure is shown in Figure 2.
As shown in Figure 2a, the SofT module can automatically learn the appropriate value of τ . More specifically, the input feature map is first calculated for absolute values and global averages, which are designed to ensure that the threshold τ is positive while avoiding the shift variation problem during training. The expression is as follows:
$$y_c = \mathrm{GAP}(\mathrm{abs}(x)) = \underset{i,j}{\mathrm{average}} \left| x_{i,j,c} \right| \qquad (3)$$
where y is a 1D vector, c is the channel index, x is the input feature map, and i and j index the width and height of the input feature map, respectively. After two fully connected layers, the scale parameter can be obtained by:
$$\alpha_c = \mathrm{sigmoid}(z_c) = \frac{1}{1 + e^{-z_c}} \qquad (4)$$
where z is the output of the fully connected layers. Finally, Equations (3) and (4) are multiplied, as shown in Equation (5), to obtain the threshold τ of the input feature:
$$\tau_c = \alpha_c \cdot y_c = \alpha_c \cdot \underset{i,j}{\mathrm{average}} \left| x_{i,j,c} \right| \qquad (5)$$
Equation (5) indicates that different channels of the feature map may have different thresholds. This allows more flexibility of the model to retain signal features while removing noise features.
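The threshold-learning pipeline of Figure 2a (global average of absolute values, two fully connected layers, a sigmoid scale, then soft thresholding) can be sketched as follows. The layer widths and random weights are illustrative placeholders, not the paper's trained values, and the sub-network is simplified relative to the full module:

```python
import numpy as np

# Illustrative sketch of the SofT module: the per-channel threshold
# tau_c = sigmoid(z_c) * average|x_c| is learned from the input itself.

rng = np.random.default_rng(0)

def soft_module(x, w1, w2):
    """x: (length, channels) feature map; w1, w2: weights of the two FC layers."""
    y = np.mean(np.abs(x), axis=0)        # GAP of |x|: one value per channel (Eq. 3)
    z = np.maximum(w1 @ y, 0.0)           # first FC layer + ReLU
    z = w2 @ z                            # second FC layer
    alpha = 1.0 / (1.0 + np.exp(-z))      # sigmoid scale in (0, 1)      (Eq. 4)
    tau = alpha * y                       # per-channel threshold, positive (Eq. 5)
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)  # soft thresholding

x = rng.normal(size=(512, 4))             # toy feature map: length 512, 4 channels
w1 = rng.normal(size=(4, 4))
w2 = rng.normal(size=(4, 4))
out = soft_module(x, w1, w2)
assert out.shape == x.shape               # same dimensions: easy to embed anywhere
```

Because α lies in (0, 1) and the global average of |x| is non-negative, the learned threshold is always valid, and the output keeps the input's shape, which is why the module drops into an existing model without dimension changes.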
Figure 2b shows a residual building unit (RBU) of the residual network (ResNet), in which the conventional ReLU activation function is replaced by SofT. In fact, since the input and output dimensions are the same, SofT can be easily embedded in models other than the RBU. The unit in Figure 2b comprises two batch normalization (BN) layers, three SofT activation functions, two 1D convolutional layers, and one identity shortcut.
BN is a technique that normalizes input features to reduce training difficulty and avoid the internal covariate shift problem; it is computed as follows:
$$\mu = \frac{1}{N_{\mathrm{batch}}} \sum_{n=1}^{N_{\mathrm{batch}}} x_n \qquad (6)$$
$$\sigma^2 = \frac{1}{N_{\mathrm{batch}}} \sum_{n=1}^{N_{\mathrm{batch}}} (x_n - \mu)^2 \qquad (7)$$
$$\hat{x}_n = \frac{x_n - \mu}{\sqrt{\sigma^2 + \varepsilon}} \qquad (8)$$
$$y_n = \gamma \hat{x}_n + \beta \qquad (9)$$
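These four steps can be checked with a small from-scratch sketch; the array shapes and values are illustrative:

```python
import numpy as np

# Batch normalization as in Equations (6)-(9): per-channel mean and variance over
# the batch, normalization, then a learnable affine transform (gamma, beta).

def batch_norm(x, gamma, beta, eps=1e-5):
    mu = x.mean(axis=0)                    # Equation (6)
    var = x.var(axis=0)                    # Equation (7)
    x_hat = (x - mu) / np.sqrt(var + eps)  # Equation (8)
    return gamma * x_hat + beta            # Equation (9)

x = np.random.default_rng(1).normal(loc=3.0, scale=2.0, size=(128, 8))
y = batch_norm(x, gamma=1.0, beta=0.0)
# After normalization each channel has approximately zero mean and unit variance.
assert np.allclose(y.mean(axis=0), 0.0, atol=1e-6)
assert np.allclose(y.std(axis=0), 1.0, atol=1e-2)
```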
Since the radar emitter signal is a one-dimensional sequence, all the deep learning models in this paper use one-dimensional convolution. Compared with two-dimensional convolution, one-dimensional convolution requires fewer parameters and can extract features directly from the one-dimensional signal while consuming fewer computational resources. The convolution process is defined as follows:
$$y_j = \sum_{i=1}^{C} k_i^j * x_i + b_j \qquad (10)$$
where $y_j$ represents the $j$th channel of the output feature map, $k_i^j$ is the convolutional kernel applied to input channel $i$, $*$ denotes 1D convolution, $b_j$ is the bias, and $C$ is the number of input channels.
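A direct, naive implementation of the multi-channel 1D convolution in Equation (10) might look like this; the shapes and the moving-average kernel are illustrative:

```python
import numpy as np

# Multi-channel 1D convolution (Equation (10)): each output channel sums the
# per-input-channel sliding-window products, then adds a bias.

def conv1d(x, k, b):
    """x: (C_in, L), k: (C_out, C_in, K), b: (C_out,) -> (C_out, L - K + 1)."""
    c_out, c_in, K = k.shape
    L = x.shape[1]
    y = np.zeros((c_out, L - K + 1))
    for j in range(c_out):
        for i in range(c_in):
            for t in range(L - K + 1):
                y[j, t] += np.dot(k[j, i], x[i, t:t + K])
        y[j] += b[j]
    return y

x = np.arange(10, dtype=float).reshape(1, 10)  # one input channel, length 10
k = np.ones((1, 1, 3)) / 3.0                   # moving-average kernel, width 3
y = conv1d(x, k, b=np.zeros(1))
assert y.shape == (1, 8)
assert np.allclose(y[0, 0], 1.0)               # mean of [0, 1, 2]
```

For a kernel of width K, a 1D convolution uses C_in × C_out × K weights, far fewer than a comparable 2D layer, which is the parameter saving the paragraph above refers to.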

3. Experiments and Results

To verify the effectiveness of SofT, five typical activation functions are compared, as shown in Table 1.
Sigmoid is one of the most common activation functions; it was first used in the LeNet network [19] for handwritten digit recognition. The rectified linear unit (ReLU) [20] is also a mainstream activation function; it was applied in the AlexNet network [21], which achieved state-of-the-art recognition results in the ImageNet LSVRC-2010 contest. ReLU avoids the gradient vanishing problem and speeds up training; however, it sets the negative part of the input to zero, which may lead to the dying ReLU problem. In contrast, PReLU [22] has a learnable parameter α that gives it a small slope in the negative region, so the dying ReLU problem is avoided. However, the parameter α is fixed after training, so the radar emitter recognition accuracy of PReLU is not necessarily better than that of ReLU. The exponential linear unit (ELU) [23] also retains negative input values and pushes the mean of the output features closer to 0, achieving an effect similar to BN but with lower computational complexity. The scaled exponential linear unit (SELU) [24] is self-normalizing, robust to noise, and can also speed up model convergence. To verify the effectiveness of SofT, the above five activation functions are embedded into the same network model structure, as shown in Figure 3.
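For reference, the scalar forms of these five baseline activations can be sketched as follows; the SELU constants shown are the standard published values, rounded:

```python
import math

# Scalar forms of the five baseline activation functions compared with SofT
# (see Table 1). alpha is the (learnable or fixed) negative-region parameter.

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def relu(x):
    return x if x >= 0 else 0.0

def prelu(x, a=0.25):            # a is learned during training
    return x if x >= 0 else a * x

def elu(x, a=1.0):
    return x if x >= 0 else a * (math.exp(x) - 1.0)

def selu(x, lam=1.0507, a=1.6733):   # standard SELU constants, rounded
    return lam * (x if x >= 0 else a * (math.exp(x) - 1.0))

# Unlike ReLU, PReLU/ELU/SELU keep (scaled) negative information.
assert relu(-2.0) == 0.0
assert prelu(-2.0) == -0.5
```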
The structure of the blocks in Figure 3 is the same as in Figure 2b, except that SofT is replaced by each of the five activation functions in turn. If a block contains an identity shortcut, the model is ResNet; otherwise, it is ConvNet. The block diagram of this experiment is shown in Figure 4. First, an SMU200A signal generator was used to simulate the radar emitter signals, and two datasets representing different environmental conditions were constructed by adding Gaussian and Laplacian noise, respectively. Next, we constructed two models, ConvNet and ResNet, and embedded Sigmoid, PReLU, ReLU, ELU, SELU, and SofT as activation functions to recognize the two generated datasets. Lastly, the neural networks output the modulation types of the signals.

3.1. Dataset

Seven representative modulation types of radar signals were generated: the Barker code (Barker), the Barker code/linear frequency modulation hybrid signal (Barker-LFM), the frequency-coded signal (FC), the continuous wave (CW), the frequency diversity signal (FD), the linear frequency modulation signal (LFM), and the nonlinear frequency modulation signal (NLFM). Each radar emitter signal contains only one pulse, with a data length of 512; it is worth mentioning that recognizing such short single-pulse data is more challenging. The specific parameters of the signals are shown in Table 2.
To verify the effectiveness of SofT, two representative types of noise were added to the radar signals: Gaussian noise and Laplacian noise. Gaussian noise, commonly used in communication channel testing and modeling, is highly random; it can seriously disrupt the time-domain waveform and mask useful information, making signal recognition difficult. Laplacian noise is a non-Gaussian noise. In practical communication environments, there is often noise to which the Gaussian model does not apply, such as impulse noise and co-channel interference; Laplacian noise is therefore used for non-Gaussian modeling. The SNR of both noise types ranged from −8 dB to −2 dB in steps of 2 dB, for a total of 4 SNRs. For each noise type, 7 × 4 × 300 = 8400 samples were generated and randomly divided into training and testing sets in a 2:1 ratio.
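The noise-injection step can be sketched as follows; the toy LFM pulse and all parameters are illustrative, not the SMU200A settings used in the paper:

```python
import numpy as np

# Sketch of dataset construction: add Gaussian or Laplacian noise to a pulse at
# a target SNR. Noise power is scaled from the measured signal power.

rng = np.random.default_rng(42)

def add_noise(signal, snr_db, kind="gauss"):
    p_signal = np.mean(signal ** 2)
    p_noise = p_signal / (10.0 ** (snr_db / 10.0))
    if kind == "gauss":
        noise = rng.normal(scale=np.sqrt(p_noise), size=signal.shape)
    else:  # Laplacian: the variance of Laplace(0, b) is 2*b^2
        noise = rng.laplace(scale=np.sqrt(p_noise / 2.0), size=signal.shape)
    return signal + noise

t = np.linspace(0.0, 1.0, 512)
pulse = np.cos(2 * np.pi * (10 * t + 20 * t ** 2))  # toy LFM pulse, length 512
noisy = add_noise(pulse, snr_db=-8, kind="laplace")
snr_est = 10 * np.log10(np.mean(pulse ** 2) / np.mean((noisy - pulse) ** 2))
assert -10 < snr_est < -6                            # roughly -8 dB as requested
```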

3.2. Hyperparameter Setting

Two neural networks, ConvNet and ResNet, were used to compare the performance of different activation functions for radar emitter signal recognition. The number of blocks was set to 6, 9, and 12 to test the effect of model depth on performance. The number of training iterations was 160, the batch size was 128, and the initial learning rate was 0.1. The learning rate was decayed by a factor of 10 every 40 iterations until iteration 120, and then decayed twice more over the final 40 iterations. To reduce randomness, each experiment was repeated 10 times with the same hyperparameter settings, and the mean and variance of the testing accuracy were recorded.
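One plausible reading of this step schedule (divide the learning rate by 10 at iterations 40, 80, and 120) can be sketched as follows; the exact decay points are our interpretation of the text, not stated explicitly by the authors:

```python
# Step learning-rate schedule: start at 0.1 and divide by 10 every 40
# iterations, with at most three decays over the 160 training iterations.
# The decay points (40, 80, 120) are an assumed reading of the paper's text.

def learning_rate(iteration, base_lr=0.1, step=40, max_decays=3):
    decays = min(iteration // step, max_decays)
    return base_lr / (10 ** decays)

assert learning_rate(0) == 0.1
assert abs(learning_rate(45) - 0.01) < 1e-12
assert abs(learning_rate(130) - 0.0001) < 1e-15
```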

3.3. Experimental Results

The detailed recognition results for different network depths are shown in Table 3. Herein, ResNet-SofT denotes ResNet with SofT as the activation function; the other notations follow the same convention. To compare the effect of different activation functions on model performance more clearly, the overall average recognition accuracy of each activation function is depicted in Figure 5.
Figure 5 indicates that SofT was superior to conventional activation functions (i.e., Sigmoid, PReLU, ReLU, ELU, and SELU) in terms of recognition accuracy for both Gaussian noise and Laplacian noise.
More specifically, the overall average accuracy of each activation function is shown in Table 4. The overall average accuracy of SofT reached 86.16% for Gaussian noise, which was 12.23%, 7.17%, 4.82%, and 1.83% higher than those of Sigmoid, PReLU, ReLU, ELU, and SELU, respectively. For Laplacian noise, the overall average accuracy of SofT reached 90.95%, which was 11.43%, 9.26%, 4.53%, 1.96%, and 1.10% higher than those of Sigmoid, PReLU, ReLU, ELU, and SELU, respectively. These high recognition rates can be attributed to the fact that using SofT as the activation function allows the deep learning model to learn the threshold automatically from the input, enabling it to flexibly remove noise features while retaining signal features, thus improving radar emitter signal recognition accuracy. Although SELU and ELU are self-normalizing and somewhat robust to noise, their parameters are fixed and the network cannot adjust the activation function to the specific input; therefore, their recognition rates are lower than that of SofT. By contrast, the recognition accuracy of PReLU was unexpectedly lower than that of ReLU. A possible reason is that, although each channel of the PReLU feature map obtains a multiplicative coefficient during training, this coefficient is constant at test time and cannot be adjusted to the specific test signal. As a result, more signal features are preserved but more noise is also introduced, which reduces the discriminative power of the high-dimensional features.
In addition, the overall average accuracy of the two neural networks is shown in Table 5. The overall average accuracy of ResNet under Gaussian noise was 84.75%, which is 6.65% higher than that of ConvNet; under Laplacian noise it was 89.20%, which is 5.92% higher than that of ConvNet. This is because embedding SofT as an activation function increases model complexity, and ConvNet faces difficulties in parameter optimization as the number of network layers increases. By contrast, thanks to its identity shortcuts, ResNet greatly facilitates gradient flow and eases optimization, and thus achieves better recognition performance.

3.4. Feature Analysis

A nonlinear dimensionality reduction method, t-distributed stochastic neighbor embedding (t-SNE) [25], was used to analyze the high-dimensional features learned by the network models. Owing to space limitations, only the Gaussian noise condition is analyzed: the test samples were fed into the trained network models, and the high-dimensional features after the last global average pooling layer were extracted and projected into 2D space using t-SNE for visualization. As shown in Figure 6 and Figure 7, different colors represent different modulation signals: red, blue, green, purple, orange, yellow, and brown indicate Barker, Barker-LFM, FC, CW, FD, LFM, and NLFM, respectively. Different types of signals are more clearly separated with SofT than with other activation functions under the same deep learning method. For instance, in ResNet-SofT, radar emitter signals of the same modulation type are clustered together, and signals of different modulation types are separated from each other. By contrast, for ResNet with conventional activation functions, the 2D feature distributions of different signal types overlap heavily, because a conventional activation function cannot effectively remove noise while retaining discriminative features. It is therefore extremely challenging for a conventional deep learning model to distinguish signal types under a low-SNR condition. With SofT, the sub-network of the SofT module learns an appropriate threshold from the input and effectively removes noise, so the last layer learns highly discriminative high-dimensional features.
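The projection step can be sketched with scikit-learn's t-SNE; the random features below are stand-ins for the real GAP-layer outputs, and the sample count and feature width are illustrative:

```python
import numpy as np
from sklearn.manifold import TSNE

# Sketch of the feature-analysis step: project high-dimensional features from
# the last GAP layer into 2D with t-SNE. The features here are random stand-ins.

rng = np.random.default_rng(0)
features = rng.normal(size=(70, 64))   # e.g., 70 test samples, 64-D features
labels = np.repeat(np.arange(7), 10)   # 7 modulation types, for coloring the plot

embedded = TSNE(n_components=2, perplexity=10, init="pca",
                random_state=0).fit_transform(features)
assert embedded.shape == (70, 2)       # one 2D point per test sample
```

Each row of `embedded` would then be scatter-plotted with a color given by `labels`, reproducing the style of Figures 6 and 7.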
Figure 8 shows the training error and validation error in the training process under the condition of Gaussian noise. As observed, the error curves of all 12 models gradually stabilized after 160 iteration cycles. The smallest training error and validation error were observed for ConvNet-SofT among the six ConvNets and for ResNet-SofT among the six ResNets. Thus, the effectiveness of SofT was validated.
The thresholds learned by the SofT module for different modulation types were also compared and analyzed. Specifically, at SNR = −8 dB, the seven modulation types described in Section 3.1 were input, and the thresholds of the last SofT layer of ConvNet-SofT and ResNet-SofT with nine blocks were recorded. The results are shown in Table 6.
According to Table 6, the threshold learned by the SofT module differs across signals, which indicates that SofT can learn different thresholds according to the input signal, making it more flexible in removing noise features within the threshold interval while retaining signal features. This also explains why network models with SofT achieve higher recognition results.

4. Conclusions

To achieve accurate radar emitter signal recognition results, the SofT was embedded into deep learning network models as a novel nonlinear activation function. The SofT module learns different thresholds for different input signals through a sub-network, achieving the purpose of removing noise and improving the feature learning ability of deep learning. The network models with SofT as the activation function do not need to perform dimensional transformation of the inputs but can directly learn discriminative features from the original signal, thus improving the radar emitter signal recognition accuracy.
To verify the effectiveness of the proposed approach, ConvNet and ResNet models of different depths were constructed and tested in two noisy environments, Gaussian and Laplacian, and five widely used activation functions, Sigmoid, PReLU, ReLU, ELU, and SELU, were compared. Experimental results revealed that the SofT-based deep learning method is significantly superior to methods using conventional activation functions for identifying radar emitter signals at a low SNR. More specifically, compared with the overall average recognition rates of Sigmoid, PReLU, ReLU, ELU, and SELU, those of SofT were 12.23%, 7.17%, 4.82%, and 1.83% higher for Gaussian noise and 11.43%, 9.26%, 4.53%, 1.96%, and 1.10% higher for Laplacian noise, respectively. The t-SNE experiments showed that different types of signals are more distinguishable with SofT than with other activation functions, indicating that the learned high-dimensional features are more discriminative.
Therefore, by embedding SofT as a trainable nonlinear activation function into network models, the recognition ability of noisy radar emitter signals can be effectively improved, which is of practical significance for the recognition of radar signals received from actual electromagnetic environments.

5. Discussion

SofT could be applied to other areas, such as speech recognition in noisy environments, communication signal recognition, and noisy image recognition. However, since the SofT module is trained on samples, the training set needs to be as extensive as possible; our next work is therefore to achieve radar emitter signal recognition under small-sample conditions.

Author Contributions

Conceptualization, J.P. and S.Z.; methodology, J.P. and S.Z.; software, L.X. and S.Z.; validation, L.X., L.T. and S.Z.; formal analysis, L.G.; investigation, J.P.; resources, J.P.; data curation, J.P. and L.G.; writing—original draft preparation, S.Z. and L.T.; writing—review and editing, J.P. and L.G.; visualization, S.Z.; supervision, J.P.; project administration, J.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Experimental data are available at the link given in the footnote of Table 2.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Petrov, N.; Jordanov, I.; Roe, J. Radar Emitter Signals Recognition and Classification with Feedforward Networks. Procedia Comput. Sci. 2013, 22, 1192–1200. [Google Scholar] [CrossRef] [Green Version]
  2. Pu, Y.; Liu, T.; Wu, H.; Guo, J. Radar emitter signal recognition based on convolutional neural network and main ridge coordinate transformation of ambiguity function. In Proceedings of the 2021 IEEE 4th Advanced Information Management, Communicates, Electronic and Automation Control Conference (IMCEC), Chongqing, China, 18–20 June 2021. [Google Scholar]
  3. Richard, W. ELINT: The Interception and Analysis of Radar Signals, 1st ed.; Artech: Norwood, MA, USA, 2006; pp. 6–23. [Google Scholar]
  4. Huang, Y.; Zhang, H.; Li, L.; Zhou, Y. Radar-Infrared Sensor Track Correlation Algorithm Using Gray Correlative Analysis. In Proceedings of the 2009 International Joint Conference on Artificial Intelligence, Hainan Island, China, 25–26 April 2009. [Google Scholar]
  5. Xin, G.; You, H.; Xiao, Y. A novel gray model for radar emitter recognition. In Proceedings of the 7th International Conference on Signal Processing, Beijing, China, 31 August–4 September 2004. [Google Scholar]
  6. Will, C.; Shi, K.; Weigel, R.; Koelpin, A. Advanced template matching algorithm for instantaneous heartbeat detection using continuous wave radar systems. In Proceedings of the First IEEE MTT-S International Microwave Bio Conference (IMBIOC), Gothenburg, Sweden, 15–17 May 2017. [Google Scholar]
  7. Vignesh, G.J.; Vikranth, S.; Ramanathan, R. A novel fuzzy based approach for multiple target detection in MIMO radar. Procedia Comput. Sci. 2017, 115, 764–770. [Google Scholar] [CrossRef]
  8. Allroggen, N.; Tronicke, J. Attribute-based analysis of time-lapse ground-penetrating radar data. Geophysics 2016, 81, H1–H8. [Google Scholar] [CrossRef]
  9. Cahyo, F.A.; Dwitya, R.; Musa, R.H. New approach to detect imminent slope failure by utilising coherence attribute measurement on ground-based slope radar. In Proceedings of the Slope Stability 2020: 2020 International Symposium on Slope Stability in Open Pit Mining and Civil Engineering, Perth, Australia, 12–14 May 2020. [Google Scholar]
  10. Cheng, H.Z.; Cheng, X.F. A Radar Fault Diagnosis Expert System Based on Improved CBR. Appl. Mech. Mater. 2013, 432, 432–436. [Google Scholar] [CrossRef]
  11. Zhang, M.; Diao, M.; Guo, L. Convolutional Neural Networks for Automatic Cognitive Radio Waveform Recognition. IEEE Access 2017, 5, 11074–11082. [Google Scholar] [CrossRef]
  12. Tian, X.; Sun, X.; Yu, X.; Li, X. Modulation Pattern Recognition of Communication Signals Based on Fractional Low-Order Choi-Williams Distribution and Convolutional Neural Network in Impulsive Noise Environment. In Proceedings of the 2019 IEEE 19th International Conference on Communication Technology (ICCT), Xi’an, China, 6–19 October 2019. [Google Scholar]
  13. Wu, S. Communication modulation recognition algorithm based on STFT mechanism in combination with unsupervised feature-learning network. Peer-to-Peer Netw. Appl. 2019, 12, 1615–1623. [Google Scholar] [CrossRef]
  14. Liu, H.; Li, L.; Ma, J. Rolling Bearing Fault Diagnosis Based on STFT-Deep Learning and Sound Signals. Shock Vib. 2016, 2016, 6127479. [Google Scholar] [CrossRef] [Green Version]
  15. Wang, X.; Huang, G.; Zhou, Z.; Gao, J. Radar emitter recognition based on the short time fourier transform and convolutional neural networks. In Proceedings of the 2017 10th International Congress on Image and Signal Processing, BioMedical Engineering and Informatics (CISP-BMEI), Shanghai, China, 14–16 October 2017. [Google Scholar]
  16. Qu, Z.; Hou, C.; Hou, C.; Wang, W. Radar Signal Intra-Pulse Modulation Recognition Based on Convolutional Neural Network and Deep Q-Learning Network. IEEE Access 2020, 8, 49125–49136. [Google Scholar] [CrossRef]
  17. Peng, Y.H. De-noising by modified soft-thresholding. In Proceedings of the 2000 IEEE Asia-Pacific Conference on Circuits and Systems, Electronic Communication Systems, Tianjin, China, 4–6 December 2000. [Google Scholar]
  18. Zhong, J.; Jian, S.; You, C.; Yin, X. Wavelet de-noising method with threshold selection rules based on SNR evaluations. J. Tsinghua Univ. 2014, 54, 259–263. [Google Scholar]
  19. Lecun, Y.; Jackel, L.; Cortes, C.; Denker, J.; Drucker, H.; Guyon, I.; Muller, U.; Sackinger, E.; Simard, P.; Vapnik, V. Learning Algorithms For Classification: A Comparison On Handwritten Digit Recognition. Neural Netw. Stat. Mech. Perspect. 1995, 261, 2. [Google Scholar]
  20. Glorot, X.; Bordes, A.; Bengio, Y. Deep sparse rectifier neural networks. In Proceedings of the 14th International Conference on Artificial Intelligence and Statistics, Lauderdale, FL, USA, 11–13 April 2011. [Google Scholar]
  21. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet classification with deep convolutional neural networks. Commun. ACM 2017, 60, 84–90. [Google Scholar] [CrossRef]
  22. He, K.; Zhang, X.; Ren, S.; Sun, J. Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification. In Proceedings of the 2015 IEEE International Conference on Computer Vision (ICCV), Santiago, Chile, 7–13 December 2015. [Google Scholar]
  23. Clevert, D.-A.; Unterthiner, T.; Hochreiter, S. Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs). In Proceedings of the 4th International Conference on Learning Representations (ICLR 2016), San Juan, Puerto Rico, 2–4 May 2016. [Google Scholar]
  24. Klambauer, G.; Unterthiner, T.; Mayr, A.; Hochreiter, S. Self-normalizing neural networks. In Proceedings of the 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA, 4–9 December 2017. [Google Scholar]
  25. Cieslak, M.C.; Castelfranco, A.M.; Roncalli, V.; Lenz, P.H.; Hartline, D.K. t-Distributed Stochastic Neighbor Embedding (t-SNE): A tool for eco-physiological transcriptomic analysis. Mar. Genom. 2020, 51, 100723. [Google Scholar] [CrossRef]
Figure 1. (a) Soft thresholding function, (b) soft thresholding function partial derivative.
Figure 2. (a) Threshold module, (b) residual building unit (RBU).
Figure 3. Structure of neural network.
Figure 4. The block diagram of the experiment.
Figure 5. Overall recognition accuracy of different activation functions.
Figure 6. Visualization of learned features at the final GAP layer of (a) ConvNet-Sigmoid, (b) ConvNet-PReLU, (c) ConvNet-ReLU, (d) ConvNet-ELU, (e) ConvNet-SELU, and (f) ConvNet-SofT when the number of ConvBlocks equals 9 and SNR equals −2 dB.
Figure 7. Visualization of learned features at the final GAP layer of (a) ResNet-Sigmoid, (b) ResNet-PReLU, (c) ResNet-ReLU, (d) ResNet-ELU, (e) ResNet-SELU, and (f) ResNet-SofT when the number of ResBlocks equals 9 and SNR equals −2 dB.
Figure 8. Variation tendency of errors of (a) ConvNet-Sigmoid, (b) ConvNet-PReLU, (c) ConvNet-ReLU, (d) ConvNet-ELU, (e) ConvNet-SELU, (f) ConvNet-SofT, (g) ResNet-Sigmoid, (h) ResNet-PReLU, (i) ResNet-ReLU, (j) ResNet-ELU, (k) ResNet-SELU, and (l) ResNet-SofT when the number of blocks equals 9.
Table 1. Typical activation functions.
Activation Function | Expression
Sigmoid | $y = \frac{1}{1 + e^{-x}}$
PReLU | $y = \begin{cases} x, & x \ge 0 \\ \alpha x, & x < 0 \end{cases}$
ReLU | $y = \begin{cases} x, & x \ge 0 \\ 0, & x < 0 \end{cases}$
ELU | $y = \begin{cases} x, & x \ge 0 \\ \alpha(e^x - 1), & x < 0 \end{cases}$
SELU | $y = \lambda \begin{cases} x, & x \ge 0 \\ \alpha(e^x - 1), & x < 0 \end{cases}$
Table 2. Specific parameters of seven modulation type signals.
Signal Type | Carrier Frequency | Parameter
Barker | 10~30 MHz | 13-bit Barker code; width of each symbol is 1/13 μs
Barker-LFM | 10~30 MHz | Frequency bandwidth: 100 to 200 MHz; 13-bit Barker code; width of each symbol is 1/13 μs
FC | 10~20 MHz, 100~200 MHz | 13-bit random code; width of each symbol is 1/13 μs
FD | 10~20 MHz, 50~60 MHz, 90~100 MHz | None
LFM | 20~30 MHz | Frequency bandwidth: 50 to 200 MHz; 1/2 up frequency modulation, 1/2 down frequency modulation
NLFM | 20~30 MHz | Frequency bandwidth: 50 to 200 MHz; modulation: quadratic; 1/2 up frequency modulation, 1/2 down frequency modulation
CW | 10~30 MHz | None
The dataset can be downloaded from https://osf.io/mfa96/ (accessed on 25 January 2022).
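As an illustration of the Table 2 entries, an up-chirp LFM pulse sweeps its instantaneous frequency linearly across the bandwidth. The sampling rate and pulse width below are assumed values for illustration, not the exact dataset-generation settings:

```python
import numpy as np

fs = 500e6            # sampling rate (assumed for illustration)
T = 1e-6              # pulse width (assumed)
f0, B = 25e6, 100e6   # carrier and sweep bandwidth, within the Table 2 ranges

n = int(T * fs)
t = np.arange(n) / fs
# Up-chirp LFM: instantaneous frequency f0 + (B/T)*t, i.e. phase 2*pi*(f0*t + B*t^2/(2T))
lfm = np.cos(2 * np.pi * (f0 * t + 0.5 * (B / T) * t**2))
```

A down-chirp is obtained by negating B, and the Barker-LFM type additionally phase-codes the chirp with the 13-bit Barker sequence.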
Table 3. The average accuracy (%) on the testing set, where M is the number of ResBlocks or ConvBlocks (i.e., ResBlocks without identity shortcuts).

| M | Method | Gauss −8 dB | Gauss −6 dB | Gauss −4 dB | Gauss −2 dB | Laplace −8 dB | Laplace −6 dB | Laplace −4 dB | Laplace −2 dB |
|---|---|---|---|---|---|---|---|---|---|
| 6 | ConvNet-Sigmoid | 59.34 ± 5.90 | 74.91 ± 7.35 | 85.89 ± 4.98 | 84.60 ± 3.27 | 67.86 ± 11.98 | 75.63 ± 9.79 | 82.14 ± 5.14 | 85.57 ± 3.01 |
| 6 | ConvNet-PReLU | 66.43 ± 1.33 | 75.54 ± 2.21 | 86.29 ± 1.35 | 86.20 ± 1.93 | 68.54 ± 2.18 | 78.31 ± 2.73 | 83.34 ± 1.92 | 88.34 ± 2.98 |
| 6 | ConvNet-ReLU | 70.83 ± 1.61 | 82.11 ± 2.18 | 88.20 ± 2.05 | 91.46 ± 1.19 | 78.26 ± 3.27 | 87.09 ± 3.18 | 91.34 ± 2.11 | 94.00 ± 1.66 |
| 6 | ConvNet-ELU | 73.80 ± 1.78 | 84.31 ± 0.70 | 90.26 ± 0.38 | 92.66 ± 0.67 | 80.11 ± 1.43 | 90.14 ± 0.85 | 92.94 ± 0.95 | 95.40 ± 0.34 |
| 6 | ConvNet-SELU | 73.34 ± 1.78 | 84.94 ± 0.54 | 91.49 ± 0.79 | 93.63 ± 0.75 | 82.80 ± 0.95 | 90.23 ± 1.23 | 94.00 ± 0.74 | 95.74 ± 0.31 |
| 6 | ConvNet-SofT | 75.51 ± 2.98 | 85.31 ± 4.34 | 91.84 ± 2.03 | 94.03 ± 3.41 | 83.91 ± 2.23 | 91.09 ± 1.71 | 94.57 ± 1.82 | 96.60 ± 1.58 |
| 6 | ResNet-Sigmoid | 66.26 ± 1.09 | 78.89 ± 2.46 | 86.43 ± 1.91 | 89.74 ± 1.56 | 76.89 ± 4.73 | 84.46 ± 3.27 | 90.71 ± 2.04 | 92.94 ± 1.76 |
| 6 | ResNet-PReLU | 70.14 ± 1.39 | 80.94 ± 0.73 | 87.80 ± 1.42 | 91.06 ± 1.32 | 77.17 ± 2.05 | 84.69 ± 1.67 | 89.66 ± 0.87 | 92.89 ± 0.70 |
| 6 | ResNet-ReLU | 71.51 ± 1.46 | 81.89 ± 1.27 | 88.34 ± 1.42 | 91.60 ± 2.00 | 79.37 ± 2.35 | 87.37 ± 2.25 | 91.74 ± 0.57 | 94.66 ± 1.15 |
| 6 | ResNet-ELU | 72.46 ± 0.78 | 85.51 ± 0.51 | 91.86 ± 0.38 | 94.51 ± 1.16 | 81.31 ± 2.60 | 89.46 ± 1.51 | 94.00 ± 0.47 | 95.77 ± 0.82 |
| 6 | ResNet-SELU | 72.63 ± 0.42 | 83.69 ± 2.22 | 90.89 ± 1.53 | 93.86 ± 1.37 | 82.49 ± 1.69 | 90.06 ± 0.72 | 94.26 ± 1.11 | 95.97 ± 0.44 |
| 6 | ResNet-SofT | 77.17 ± 1.90 | 87.49 ± 1.56 | 92.89 ± 0.80 | 95.31 ± 1.47 | 83.09 ± 1.71 | 90.60 ± 0.36 | 94.40 ± 1.42 | 96.37 ± 1.24 |
| 9 | ConvNet-Sigmoid | 55.29 ± 11.79 | 63.63 ± 16.78 | 69.60 ± 19.61 | 73.06 ± 20.21 | 65.80 ± 19.47 | 73.54 ± 20.31 | 81.40 ± 20.77 | 85.17 ± 16.28 |
| 9 | ConvNet-PReLU | 63.69 ± 3.01 | 71.34 ± 2.88 | 77.37 ± 2.86 | 82.00 ± 3.40 | 66.34 ± 1.86 | 75.26 ± 2.50 | 82.23 ± 2.97 | 86.11 ± 1.84 |
| 9 | ConvNet-ReLU | 66.11 ± 2.39 | 76.60 ± 5.05 | 83.34 ± 4.11 | 86.66 ± 3.96 | 76.11 ± 3.85 | 85.17 ± 2.69 | 89.97 ± 2.55 | 92.06 ± 2.66 |
| 9 | ConvNet-ELU | 69.36 ± 8.68 | 80.34 ± 9.96 | 84.66 ± 12.78 | 87.80 ± 13.65 | 79.86 ± 2.70 | 86.91 ± 3.81 | 92.34 ± 3.49 | 93.94 ± 2.75 |
| 9 | ConvNet-SELU | 69.69 ± 5.05 | 80.51 ± 7.53 | 86.71 ± 7.69 | 89.60 ± 7.12 | 80.69 ± 2.17 | 89.20 ± 0.94 | 92.46 ± 0.31 | 94.77 ± 0.88 |
| 9 | ConvNet-SofT | 70.63 ± 2.82 | 82.63 ± 2.70 | 89.29 ± 1.28 | 92.69 ± 2.35 | 85.31 ± 10.74 | 91.54 ± 7.92 | 93.60 ± 6.86 | 95.40 ± 5.08 |
| 9 | ResNet-Sigmoid | 67.06 ± 1.23 | 79.83 ± 2.81 | 86.51 ± 3.42 | 89.86 ± 2.45 | 77.37 ± 2.05 | 85.86 ± 1.25 | 91.71 ± 1.08 | 94.03 ± 0.50 |
| 9 | ResNet-PReLU | 72.34 ± 2.34 | 81.69 ± 1.37 | 87.54 ± 1.26 | 91.69 ± 0.51 | 79.94 ± 2.36 | 86.57 ± 2.15 | 90.29 ± 2.10 | 95.03 ± 1.30 |
| 9 | ResNet-ReLU | 72.57 ± 1.76 | 83.71 ± 1.26 | 90.06 ± 1.23 | 93.00 ± 1.95 | 80.11 ± 1.72 | 88.20 ± 1.25 | 92.77 ± 0.93 | 95.23 ± 1.10 |
| 9 | ResNet-ELU | 73.89 ± 2.12 | 86.54 ± 1.62 | 92.29 ± 1.32 | 94.51 ± 0.74 | 82.86 ± 2.22 | 90.09 ± 0.89 | 94.34 ± 1.43 | 95.50 ± 1.34 |
| 9 | ResNet-SELU | 74.11 ± 1.99 | 86.71 ± 1.24 | 92.29 ± 1.47 | 94.80 ± 1.39 | 83.54 ± 1.30 | 90.51 ± 2.20 | 94.91 ± 0.50 | 95.74 ± 0.87 |
| 9 | ResNet-SofT | 76.74 ± 1.66 | 87.23 ± 1.29 | 93.46 ± 1.32 | 95.71 ± 0.85 | 84.06 ± 2.58 | 91.49 ± 1.44 | 95.29 ± 1.12 | 96.63 ± 0.44 |
| 12 | ConvNet-Sigmoid | 51.77 ± 9.88 | 58.43 ± 17.56 | 63.69 ± 19.99 | 66.00 ± 21.94 | 55.40 ± 25.28 | 62.17 ± 30.24 | 68.43 ± 31.76 | 72.06 ± 31.06 |
| 12 | ConvNet-PReLU | 61.03 ± 4.37 | 69.77 ± 3.58 | 77.89 ± 4.18 | 83.29 ± 3.72 | 62.31 ± 2.62 | 71.23 ± 2.74 | 77.40 ± 2.37 | 80.91 ± 3.19 |
| 12 | ConvNet-ReLU | 63.86 ± 5.46 | 72.63 ± 6.30 | 78.60 ± 7.37 | 84.46 ± 6.83 | 68.60 ± 6.08 | 76.43 ± 8.77 | 82.71 ± 6.88 | 87.29 ± 5.97 |
| 12 | ConvNet-ELU | 64.66 ± 3.49 | 75.14 ± 2.39 | 82.60 ± 2.05 | 89.57 ± 3.25 | 73.86 ± 5.89 | 82.29 ± 5.11 | 89.11 ± 3.79 | 91.57 ± 2.59 |
| 12 | ConvNet-SELU | 65.17 ± 7.29 | 77.40 ± 8.41 | 85.26 ± 5.57 | 89.74 ± 3.78 | 77.09 ± 4.74 | 84.54 ± 3.05 | 89.20 ± 3.72 | 91.80 ± 3.28 |
| 12 | ConvNet-SofT | 68.31 ± 3.82 | 80.94 ± 3.03 | 87.26 ± 3.70 | 90.11 ± 3.16 | 78.09 ± 10.96 | 86.80 ± 8.99 | 90.74 ± 8.19 | 92.71 ± 6.27 |
| 12 | ResNet-Sigmoid | 65.20 ± 2.78 | 79.69 ± 2.68 | 87.69 ± 2.26 | 91.03 ± 1.25 | 76.51 ± 3.67 | 84.66 ± 3.55 | 86.09 ± 1.62 | 92.17 ± 1.40 |
| 12 | ResNet-PReLU | 72.37 ± 1.70 | 80.49 ± 1.96 | 87.74 ± 1.79 | 91.31 ± 1.21 | 77.06 ± 2.47 | 85.69 ± 1.51 | 88.51 ± 1.00 | 92.63 ± 1.05 |
| 12 | ResNet-ReLU | 72.83 ± 1.03 | 81.29 ± 1.78 | 88.09 ± 2.52 | 92.49 ± 1.81 | 80.34 ± 1.59 | 87.86 ± 1.85 | 92.49 ± 0.42 | 94.80 ± 0.68 |
| 12 | ResNet-ELU | 73.40 ± 1.14 | 83.86 ± 1.48 | 91.80 ± 0.85 | 94.89 ± 0.73 | 83.11 ± 0.94 | 90.66 ± 0.90 | 94.23 ± 0.60 | 96.06 ± 0.63 |
| 12 | ResNet-SELU | 73.91 ± 2.43 | 85.66 ± 2.04 | 92.89 ± 0.72 | 95.06 ± 0.87 | 84.00 ± 1.51 | 91.37 ± 0.96 | 94.37 ± 0.59 | 96.60 ± 1.19 |
| 12 | ResNet-SofT | 77.31 ± 0.98 | 87.26 ± 1.45 | 93.23 ± 0.51 | 95.40 ± 1.17 | 86.40 ± 1.43 | 92.14 ± 2.33 | 95.09 ± 1.19 | 96.81 ± 1.38 |
Table 4. The overall average accuracy of different activation functions (%).

| Method | Gaussian Noise | Laplacian Noise |
|---|---|---|
| Sigmoid | 73.93 ± 7.72 | 79.52 ± 10.50 |
| PReLU | 78.99 ± 2.16 | 81.69 ± 2.05 |
| ReLU | 81.34 ± 2.83 | 86.42 ± 2.73 |
| ELU | 83.78 ± 3.03 | 88.99 ± 2.00 |
| SELU | 84.33 ± 3.08 | 89.85 ± 1.48 |
| SofT | 86.16 ± 2.11 | 90.95 ± 3.71 |
Table 5. The overall average accuracy of different neural networks (%).

| Model | Gaussian Noise | Laplacian Noise |
|---|---|---|
| ConvNet | 78.10 ± 5.48 | 83.28 ± 6.01 |
| ResNet | 84.75 ± 1.50 | 89.20 ± 1.47 |
Table 6. Thresholds learned by the SofT module.

| Signal | ConvNet-SofT Threshold | ResNet-SofT Threshold |
|---|---|---|
| Barker | 5.530 | 0.001 |
| Barker-LFM | 3.465 | 1.275 |
| FC | 2.893 | 1.938 |
| CW | 2.777 | 2.640 |
| FD | 0.006 | 0.002 |
| LFM | 2.476 | 2.006 |
| NLFM | 4.833 | 2.123 |
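The spread of learned thresholds in Table 6 directly controls how aggressively features are shrunk: a large threshold (e.g., ConvNet-SofT's 5.530 for Barker) zeroes most small-magnitude responses, while a near-zero threshold (e.g., 0.006 for FD) passes almost everything through. A sketch on synthetic features (the Gaussian feature values below are illustrative, not drawn from the trained networks):

```python
import numpy as np

def soft_threshold(x, tau):
    # Shrink toward zero by tau; entries with |x| <= tau become exactly 0
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

rng = np.random.default_rng(0)
feat = rng.normal(0.0, 2.0, size=1000)     # synthetic pre-activation features

sparse = soft_threshold(feat, 5.530)       # Barker threshold (ConvNet-SofT, Table 6)
dense = soft_threshold(feat, 0.006)        # FD threshold (ConvNet-SofT, Table 6)

# The large threshold zeroes far more features than the near-zero one
zero_frac_sparse = np.mean(sparse == 0.0)
zero_frac_dense = np.mean(dense == 0.0)
```

This per-signal behavior is what distinguishes the learned SofT activation from a single fixed nonlinearity shared by all inputs.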
Pan, J.; Zhang, S.; Xia, L.; Tan, L.; Guo, L. Embedding Soft Thresholding Function into Deep Learning Models for Noisy Radar Emitter Signal Recognition. Electronics 2022, 11, 2142. https://doi.org/10.3390/electronics11142142