# Neural Networks for Radar Waveform Recognition

## Abstract

## 1. Introduction

## 2. System View

## 3. Preprocessing

#### 3.1. Signal Model

#### 3.2. Choi–Williams Distribution

#### 3.3. Binary Image

- (a) Normalize the resized image so that $G(x,y)\in [0,1]$, i.e.,$$G(x,y)=\frac{CW{D}_{N\times N}(x,y)-min\,CW{D}_{N\times N}(x,y)}{max\left(CW{D}_{N\times N}(x,y)-min\,CW{D}_{N\times N}(x,y)\right)};$$
- (b) Estimate the initial threshold $T$ of $G(x,y)$, i.e.,$$T=\frac{max\,G(x,y)+min\,G(x,y)}{2};$$
- (c) Separate the image into two pixel groups ${G}_{1}$ and ${G}_{2}$; ${G}_{1}$ contains all pixels whose values exceed $T$, and ${G}_{2}$ contains the rest;
- (d) Calculate the average values ${\mu}_{1}$ and ${\mu}_{2}$ of the two pixel groups ${G}_{1}$ and ${G}_{2}$, respectively;
- (e) Update the threshold, i.e.,$$T=\frac{{\mu}_{1}+{\mu}_{2}}{2};$$
- (f) Repeat (c)–(e) until $|\delta T|$ is smaller than 0.001, where$$\delta T={T}_{now}-{T}_{before};$$
- (g) Compute $B(x,y)$ as follows:$$B(x,y)=\left\{\begin{array}{ll}1 & G(x,y)>T\\ 0 & \text{otherwise}\end{array}\right.;$$
- (h) Output $B(x,y)$.
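As a sketch, the iterative thresholding above can be written in a few lines of NumPy; the function and variable names here are ours, not from the paper:

```python
import numpy as np

def binarize(cwd):
    """Iterative mean-threshold binarization of a CWD image (sketch)."""
    # normalize the resized CWD image to [0, 1]
    g = (cwd - cwd.min()) / (cwd.max() - cwd.min())
    # initial threshold: midpoint of the extreme values
    t = (g.max() + g.min()) / 2.0
    while True:
        # split pixels into two groups around T and take the group means
        g1, g2 = g[g > t], g[g <= t]
        t_new = (g1.mean() + g2.mean()) / 2.0
        # stop once the threshold change is below 0.001
        if abs(t_new - t) < 1e-3:
            break
        t = t_new
    # binary image B(x, y)
    return (g > t_new).astype(np.uint8)
```

This is the classic iterative global-thresholding scheme described in Gonzalez's *Digital Image Processing*, which the paper cites.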

#### 3.4. Noise Removal

## 4. Feature Extraction

#### 4.1. Signal Features

#### 4.1.1. Based on the Statistics

#### 4.1.2. Based on the Power Spectral Density

#### 4.1.3. Based on the Instantaneous Properties

- Calculate $\varphi (k)$;
- Calculate ${\varphi}_{u}(k)^{\star}$ from $\varphi (k)$;
- Calculate $f(k)^{\star\star}$, i.e.,$$f(k)={\varphi}_{u}(k)-{\varphi}_{u}(k-1);$$
- Calculate ${\mu}_{f}$, i.e.,$${\mu}_{f}:=\frac{1}{N}{\sum}_{k=0}^{N-1}f(k);$$
- Normalize the instantaneous frequency to obtain $\tilde{f}(k)$,$$\tilde{f}(k)=\frac{f(k)-{\mu}_{f}}{max|f(k)-{\mu}_{f}|};$$
- Output the standard deviation of the instantaneous frequency ${\widehat{\sigma}}_{f}$,$${\widehat{\sigma}}_{f}=\sqrt{\frac{1}{N}\left(\sum _{k=0}^{N-1}{\tilde{f}}^{2}(k)\right)-{\left(\frac{1}{N}\sum _{k=0}^{N-1}\left|\tilde{f}(k)\right|\right)}^{2}}.$$
- $\star$ ${\varphi}_{u}(k)$ is the unwrapped phase of $\varphi (k)$: when an absolute jump larger than $\pi$ occurs in $\varphi (k)$, we add $\pm 2\pi$ to recover a continuous phase.
- $\star\star$ In the sequence $f(k)$, some spikes are created by the processing. We apply a median filter with a window size of five to smooth them.
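A compact sketch of these steps, assuming complex baseband samples; `np.unwrap` performs the $\pm 2\pi$ correction of footnote $\star$, and the window-five median filter of footnote $\star\star$ is hand-rolled. The helper names are ours:

```python
import numpy as np

def medfilt5(x):
    """Median filter with window size five (edge-padded), per footnote **."""
    padded = np.pad(x, 2, mode="edge")
    windows = np.lib.stride_tricks.sliding_window_view(padded, 5)
    return np.median(windows, axis=1)

def inst_freq_std(samples):
    """Sketch of the chain: phase -> unwrap -> difference -> filter -> std."""
    phi = np.angle(samples)               # instantaneous phase phi(k)
    phi_u = np.unwrap(phi)                # adds +-2*pi at absolute jumps > pi
    f = medfilt5(np.diff(phi_u))          # f(k), with processing spikes smoothed
    dev = f - f.mean()                    # f(k) - mu_f
    f_tilde = dev / np.max(np.abs(dev))   # normalized instantaneous frequency
    # guard against tiny negative values from floating-point round-off
    var = max(np.mean(f_tilde**2) - np.mean(np.abs(f_tilde))**2, 0.0)
    return np.sqrt(var)
```

Because $|\tilde{f}(k)|\le 1$, the output lies in $[0,1]$.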

#### 4.2. Image Features

- For every object $k=1,2,\dots ,{N}_{obj1}$, repeat the following steps;
- Retain the $k$-th object and remove the others; call the result ${B}_{k}(x,y)$;
- Estimate the principal components of ${B}_{k}(x,y)$;
- Rotate$^{\star}$ ${B}_{k}(x,y)$ until the principal components are vertical; record the result as ${B}_{{k}^{\prime}}(x,y)$;
- Sum along the vertical axis, i.e.,$$v(x)={\sum}_{y=0}^{N-1}{B}_{{k}^{\prime}}(x,y);$$
- Normalize $v(x)$ as follows:$$\widehat{v}(x)=\frac{v(x)}{max\{v(x)\}};$$
- Estimate the standard deviation of $\widehat{v}(x)$, i.e.,$${\widehat{\sigma}}_{k,obj1}=\sqrt{\frac{1}{N}{\sum}_{x}{\widehat{v}}^{2}(x)-{\left(\frac{1}{N}{\sum}_{x}\widehat{v}(x)\right)}^{2}};$$
- Output the rotation degree ${\widehat{\theta}}_{max}$, obtained by performing the rotation step on the largest object;
- Output the average of the ${\widehat{\sigma}}_{k,obj1}$, i.e.,$${\widehat{\sigma}}_{obj}=\frac{1}{{N}_{obj1}}{\sum}_{k=1}^{{N}_{obj1}}{\widehat{\sigma}}_{k,obj1}.$$
- $\star$ Nearest-neighbor interpolation is applied in the rotation processing.
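Assuming SciPy is available, the per-object steps might look as follows. The helper name, the eigenvector-based principal-component estimate, and the exact rotation sign convention (image coordinates flip angles) are our assumptions:

```python
import numpy as np
from scipy import ndimage

def object_width_std(b_k):
    """Sketch: std of the normalized column sums of one rotated object B_k."""
    ys, xs = np.nonzero(b_k)
    pts = np.column_stack([xs, ys]).astype(float)
    pts -= pts.mean(axis=0)
    # principal component: dominant eigenvector of the 2x2 covariance matrix
    _, vecs = np.linalg.eigh(np.cov(pts.T))
    vx, vy = vecs[:, -1]
    theta = np.degrees(np.arctan2(vy, vx))
    # rotate until the principal component is vertical;
    # order=0 selects nearest-neighbor interpolation (footnote *)
    rotated = ndimage.rotate(b_k.astype(float), 90.0 - theta, order=0)
    v = rotated.sum(axis=0)          # v(x): sum along the vertical axis
    v_hat = v / v.max()              # normalized column sums
    # guard against tiny negative round-off before the square root
    var = max(np.mean(v_hat**2) - np.mean(v_hat)**2, 0.0)
    return np.sqrt(var)
```

Averaging this value over all retained objects gives $\widehat{\sigma}_{obj}$, and the angle computed for the largest object gives $\widehat{\theta}_{max}$.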

## 5. Classifier

#### 5.1. CNN

- The input layer is a binary image, which is from Section 3.4. To reduce the computer load, we resize the image to $32\times 32$ with the nearest neighbor interpolation algorithm.
- The first hidden layer ${C}_{1}$ is a convolutional layer, which has six feature maps. Different feature maps require a different convolutional kernel. ${C}_{1}$ has six convolutional kernels with a size of $5\times 5$. We utilize ${C}_{1}(m,n,k)$ to represent the value of the k-th feature map at position $(m,n)$ in the ${C}_{1}$ layer.
- The second hidden layer ${S}_{1}$ is a down-sampling layer with six feature maps. In ${S}_{1}$, every feature value is the average of four adjacent elements in ${C}_{1}$. We denote its neurons by ${S}_{1}(m,n,k)$ in the same manner. Further, we have:$$\begin{array}{cc}\hfill {S}_{1}(m,n,k)=& \mathrm{mean}({C}_{1}(2m-1,2n-1,k),{C}_{1}(2m-1,2n,\hfill \\ & k),{C}_{1}(2m,2n-1,k),{C}_{1}(2m,2n,k)).\hfill \end{array}$$The size of the feature maps in ${S}_{1}$ is reduced to $1/4$ of that of the feature maps in ${C}_{1}$.
- ${C}_{2}$ is a convolutional layer with 16 different kernels. It is not fully connected with the ${S}_{1}$ layer [41]. The connection details are described in Table 2. ${C}_{2}(m,n,k)$ is also utilized to describe the neurons in this layer. For the $\alpha $-th column in Table 2, we mark the row indices by ${\beta}_{\alpha ,0},{\beta}_{\alpha ,1},\dots ,{\beta}_{\alpha ,p-1}.$ For instance, if $\alpha =7$, then the parameters are as follows: $p=4$, ${\beta}_{7,0}=1$, ${\beta}_{7,1}=2$, ${\beta}_{7,2}=3$, ${\beta}_{7,3}=4.$ Further, the size of the convolutional kernel is $p\times 5\times 5$. ${K}_{\alpha}$ is the $\alpha $-th kernel. Additionally, we have:$$\begin{array}{cc}\hfill {C}_{2}(m,n,\alpha )& ={\displaystyle \sum _{r=0}^{p-1}}{\displaystyle \sum _{{m}_{0}=0}^{4}}{\displaystyle \sum _{{n}_{0}=0}^{4}}{S}_{1}(m+{m}_{0},n+{n}_{0},{\beta}_{\alpha ,r})\hfill \\ & \times {K}_{\alpha}(5-{m}_{0},5-{n}_{0},p-1-r).\hfill \end{array}$$For example, for the zeroth column, $p=3$, ${\beta}_{0,0}=0$, ${\beta}_{0,1}=1$, ${\beta}_{0,2}=2$, and we also have:$$\begin{array}{cc}\hfill {C}_{2}(m,n,0)& ={\displaystyle \sum _{r=0}^{p-1}}{\displaystyle \sum _{{m}_{0}=0}^{4}}{\displaystyle \sum _{{n}_{0}=0}^{4}}{S}_{1}(m+{m}_{0},n+{n}_{0},{\beta}_{0,r})\hfill \\ & \times {K}_{0}(5-{m}_{0},5-{n}_{0},p-1-r).\hfill \end{array}$$
- Similar to ${S}_{1}$, this layer is a down-sampling layer, called ${S}_{2}$. ${S}_{2}$ has 16 feature maps. Following the notation of Equation (20), we denote:$$\begin{array}{cc}\hfill {S}_{2}(m,n,k)=& \mathrm{mean}({C}_{2}(2m-1,2n-1,k),{C}_{2}(2m-1,2n,\hfill \\ & k),{C}_{2}(2m,2n-1,k),{C}_{2}(2m,2n,k)).\hfill \end{array}$$
- The connection between ${S}_{2}$ and ${N}_{1}$ is a full connection. Each kernel in ${N}_{1}$ will be connected with all of the feature maps in ${S}_{2}$. There are 120 kernels in this layer. Additionally, the size of the kernel is $5\times 5$, which means the output is a column vector with the size of $120\times 1$. We describe ${N}_{1}(\lambda )$ as the $\lambda $-th feature map of ${N}_{1}$ and ${K}_{\lambda}$ as the $\lambda $-th kernel. Then, we have:$$\begin{array}{cc}\hfill {N}_{1}(\lambda )& ={\displaystyle \sum _{r=0}^{15}}{\displaystyle \sum _{{m}_{0}=0}^{4}}{\displaystyle \sum _{{n}_{0}=0}^{4}}[{S}_{2}({m}_{0},{n}_{0},r)\hfill \\ & \times {K}_{\lambda}(5-{m}_{0},5-{n}_{0},15-r)].\hfill \end{array}$$
- Finally, the connection between ${N}_{1}$ and the output layer is a full connection. There are five neurons in the output layer (determined by the classes we want to classify), activated by the $sigmoid$ function.
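To make the layer sizes concrete, the forward pass described above can be sketched in NumPy with random weights. This is illustrative only: the partial ${S}_{1}$-to-${C}_{2}$ connection scheme of Table 2 is replaced by a full connection, the kernel-index flipping is folded into the (random) kernels, and the pre-sigmoid scaling is our addition:

```python
import numpy as np

rng = np.random.default_rng(1)

def conv_maps(stack, kernels):
    """'Valid' correlation of a (d, H, W) stack with (n, d, 5, 5) kernels."""
    d, H, W = stack.shape
    n = kernels.shape[0]
    out = np.zeros((n, H - 4, W - 4))
    for a in range(n):
        for i in range(H - 4):
            for j in range(W - 4):
                out[a, i, j] = np.sum(stack[:, i:i + 5, j:j + 5] * kernels[a])
    return out

def avg_pool(stack):
    """Down-sampling layer: mean of each 2x2 block, as in the S1/S2 equations."""
    n, H, W = stack.shape
    return stack.reshape(n, H // 2, 2, W // 2, 2).mean(axis=(2, 4))

x = rng.integers(0, 2, (1, 32, 32)).astype(float)        # 32x32 binary input
c1 = conv_maps(x, rng.standard_normal((6, 1, 5, 5)))     # C1 -> (6, 28, 28)
s1 = avg_pool(c1)                                        # S1 -> (6, 14, 14)
c2 = conv_maps(s1, rng.standard_normal((16, 6, 5, 5)))   # C2 -> (16, 10, 10)
s2 = avg_pool(c2)                                        # S2 -> (16, 5, 5)
n1 = conv_maps(s2, rng.standard_normal((120, 16, 5, 5))).reshape(120)  # N1
n1 = n1 / (np.abs(n1).max() + 1e-12)  # keep the sigmoid well-conditioned (ours)
y = 1.0 / (1.0 + np.exp(-(rng.standard_normal((5, 120)) @ n1)))  # 5 outputs
```

The shape progression $32\to 28\to 14\to 10\to 5\to 1$ matches the $5\times 5$ kernels and $2\times 2$ average pooling of the description.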

#### 5.2. ENN

## 6. Simulation Results and Discussion

#### 6.1. Production of Simulated Signals

#### 6.2. Experiment with SNR

#### 6.3. Experiment with Robustness

#### 6.4. Experiment with Computational Burden

## 7. Conclusions

## Acknowledgments

## Author Contributions

## Conflicts of Interest

## Abbreviations

LPI | Low probability of intercept |

EW | Electronic warfare |

ELINT | Electronic intelligence |

PSK | Phase shift keying |

FSK | Frequency shift keying |

ASK | Amplitude shift keying |

PRI | Pulse repetition interval |

PW | Pulse width |

RSR | Ratio of successful recognition |

SNR | Signal to noise ratio |

MLP | Multi-layer perceptron |

AD | Atomic decomposition |

LFM | Linear frequency modulation |

CW | Continuous wave |

STFT | Short time Fourier transform |

SC | Sparse classification |

CNN | Convolutional neural network |

DNN | Deep neural network |

SVM | Support vector machine |

BPSK | Binary phase shift keying |

ENN | Elman neural network |

CWD | Choi–Williams time-frequency distribution |

PCA | Principal component analysis |

AWGN | Additive white Gaussian noise |

WGN | White Gaussian noise |

FFT | Fast Fourier transformation |

PSD | Power spectral density |

CPU | Central Processing Unit |

GPU | Graphics Processing Unit |

## References

- Adamy, D.L. EW 102: A Second Course in Electronic Warfare; Artech House: Norwood, MA, USA, 2004. [Google Scholar]
- Adamy, D.L. EW 101: A First Course in Electronic Warfare; Artech House: Norwood, MA, USA, 2000. [Google Scholar]
- Nandi, A.; Azzouz, E. Automatic analogue modulation recognition. Signal Process.
**1995**, 46, 211–222. [Google Scholar] [CrossRef] - Dudczyk, J.; Kawalec, A. Specific emitter identification based on graphical representation of the distribution of radar signal parameters. Bull. Pol. Acad. Sci. Tech. Sci.
**2015**, 63, 391–396. [Google Scholar] [CrossRef] - Dudczyk, J.; Kawalec, A. Fast-decision identification algorithm of emission source pattern in database. Bull. Pol. Acad. Sci. Tech. Sci.
**2015**, 63, 385–389. [Google Scholar] [CrossRef] - Dudczyk, J. A method of feature selection in the aspect of specific identification of radar signals. Bull. Pol. Acad. Sci. Tech. Sci.
**2017**, 65, 113–119. [Google Scholar] [CrossRef] - Dudczyk, J. Radar Emission Sources Identification Based on Hierarchical Agglomerative Clustering for Large Data Sets. J. Sens.
**2016**, 2016, 1879327. [Google Scholar] [CrossRef] - Nandi, A.K.; Azzouz, E.E. Algorithms for automatic modulation recognition of communication signals. IEEE Trans. Commun.
**1998**, 46, 431–436. [Google Scholar] [CrossRef] - Wong, M.D.; Nandi, A.K. Automatic digital modulation recognition using artificial neural network and genetic algorithm. Signal Process.
**2004**, 84, 351–365. [Google Scholar] [CrossRef] - López-Risueño, G.; Grajal, J.; Sanz-Osorio, Á. Digital channelized receiver based on time-frequency analysis for signal interception. IEEE Trans. Aerosp. Electron. Syst.
**2005**, 41, 879–898. [Google Scholar] [CrossRef] - Cohen, L. Time-frequency distributions-a review. Proc. IEEE
**1989**, 77, 941–981. [Google Scholar] [CrossRef] - Lopez-Risueno, G.; Grajal, J.; Yeste-Ojeda, O. Atomic decomposition-based radar complex signal interception. IEE Proc. Radar Sonar Navig.
**2003**, 150, 323–331. [Google Scholar] [CrossRef] - Lundén, J.; Koivunen, V. Automatic radar waveform recognition. IEEE J. Sel. Top. Signal Process.
**2007**, 1, 124–136. [Google Scholar] [CrossRef] - Ming, Z.; Lutao, L.; Ming, D. LPI Radar Waveform Recognition Based on Time-Frequency Distribution. Sensors
**2016**, 16, 1682. [Google Scholar] [CrossRef] - Ma, J.; Huang, G.; Zuo, W.; Wu, X.; Gao, J. Robust radar waveform recognition algorithm based on random projections and sparse classification. IET Radar Sonar Navig.
**2013**, 8, 290–296. [Google Scholar] [CrossRef] - Lawrence, S.; Giles, C.L.; Tsoi, A.C.; Back, A.D. Face recognition: A convolutional neural-network approach. IEEE Trans. Neural Netw.
**1997**, 8, 98–113. [Google Scholar] [CrossRef] [PubMed] - LeCun, Y.; Huang, F.J.; Bottou, L. Learning methods for generic object recognition with invariance to pose and lighting. In Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Washington, DC, USA, 27 June–2 July 2004. [Google Scholar]
- Sainath, T.N.; Kingsbury, B.; Saon, G.; Soltau, H.; Mohamed, A.R.; Dahl, G.; Ramabhadran, B. Deep convolutional neural networks for large-scale speech tasks. Neural Netw.
**2015**, 64, 39–48. [Google Scholar] [CrossRef] [PubMed] - Yoshioka, T.; Karita, S.; Nakatani, T. Far-field speech recognition using CNN-DNN-HMM with convolution in time. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Brisbane, QLD, Australia, 19–24 April 2015; pp. 4360–4364. [Google Scholar]
- Swietojanski, P.; Arnab, G.; Steve, R. Convolutional neural networks for distant speech recognition. IEEE Signal Process. Lett.
**2014**, 21, 1120–1124. [Google Scholar] - Abdel-Hamid, O.; Mohamed, A.R.; Jiang, H.; Penn, G. Applying convolutional neural networks concepts to hybrid NN-HMM model for speech recognition. In Proceedings of the Acoustics, Speech and Signal Processing (ICASSP), 2012 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Kyoto, Japan, 25–30 March 2012; pp. 4277–4280. [Google Scholar]
- Lin, T.Y.; RoyChowdhury, A.; Maji, S. Bilinear CNN models for fine-grained visual recognition. In Proceedings of the IEEE International Conference on Computer Vision, Cambridge, MA, USA, 20–23 June 2015; pp. 1449–1457. [Google Scholar]
- Vaddi, R.S.; Boggavarapu, L.N.P.; Vankayalapati, H.D.; Anne, K.R. Comparative analysis of contrast enhancement techniques between histogram equalization and CNN. In Proceedings of the 2011 Third International Conference on Advanced Computing, Chennai, India, 14–16 December 2011; pp. 106–110. [Google Scholar]
- Zhou, M.K.; Zhang, X.Y.; Yin, F.; Liu, C.L. Discriminative quadratic feature learning for handwritten Chinese character recognition. Pattern Recognit.
**2016**, 49, 7–18. [Google Scholar] [CrossRef] - Akhand, M.A.H.; Rahman, M.M.; Shill, P.C.; Islam, S.; Rahman, M.H. Bangla handwritten numeral recognition using convolutional neural network. In Proceedings of the 2015 International Conference on Electrical Engineering and Information Communication Technology (ICEEICT), Dhaka, Bangladesh, 21–23 May 2015; pp. 1–5. [Google Scholar]
- Niu, X.X.; Suen, C.Y. A novel hybrid CNN–SVM classifier for recognizing handwritten digits. Pattern Recognit.
**2012**, 45, 1318–1325. [Google Scholar] [CrossRef] - Abdel-Hamid, O.; Mohamed, A.R.; Jiang, H.; Deng, L.; Penn, G.; Yu, D. Convolutional neural networks for speech recognition. IEEE/ACM Trans. Audio Speech Lang. Process.
**2014**, 22, 1533–1545. [Google Scholar] [CrossRef] - Feng, Z.; Liang, M.; Chu, F. Recent advances in time–frequency analysis methods for machinery fault diagnosis: A review with application examples. Mech. Syst. Signal Process.
**2013**, 38, 165–205. [Google Scholar] [CrossRef] - Chen, B.; Shu, H.; Zhang, H.; Chen, G.; Toumoulin, C.; Dillenseger, J.L.; Luo, L.M. Quaternion Zernike moments and their invariants for color image analysis and object recognition. Signal Process.
**2012**, 92, 308–318. [Google Scholar] [CrossRef] - Xu, B.; Sun, L.; Xu, L.; Xu, G. Improvement of the Hilbert method via ESPRIT for detecting rotor fault in induction motors at low slip. IEEE Trans. Energy Convers.
**2013**, 28, 225–233. [Google Scholar] [CrossRef] - Pace, P.E. Detecting and Classifying Low Probability of Intercept Radar, 2nd ed.; Artech House: Norwood, MA, USA, 2009. [Google Scholar]
- Gonzalez, R.C. Digital Image Processing; Pearson Education: Delhi, India, 2009. [Google Scholar]
- Zhu, Z.; Aslam, M.W.; Nandi, A.K. Genetic algorithm optimized distribution sampling test for M-QAM modulation classification. Signal Process.
**2014**, 94, 264–277. [Google Scholar] [CrossRef] - Ozen, A.; Ozturk, C. A novel modulation recognition technique based on artificial bee colony algorithm in the presence of multipath fading channels. In Proceedings of the 2013 36th International Conference on Telecommunications and Signal Processing (TSP), Rome, Italy, 2–4 July 2013; pp. 239–243. [Google Scholar]
- Stoica, P.; Moses, R.L. Spectral Analysis of Signals; Pearson/Prentice Hall Upper: Saddle River, NJ, USA, 2005. [Google Scholar]
- Bailey, R.R.; Srinath, M. Orthogonal moment features for use with parametric and non-parametric classifiers. IEEE Trans. Pattern Anal. Mach. Intell.
**1996**, 18, 389–399. [Google Scholar] [CrossRef] - Chen, Z.; Sun, S.K. A Zernike moment phase-based descriptor for local image representation and matching. IEEE Trans. Image Process.
**2010**, 19, 205–219. [Google Scholar] [CrossRef] [PubMed] - Tan, C.W.; Kumar, A. Accurate iris recognition at a distance using stabilized iris encoding and Zernike moments phase features. IEEE Trans. Image Process.
**2014**, 23, 3962–3974. [Google Scholar] [CrossRef] [PubMed] - Honarvar, B.; Paramesran, R.; Lim, C.L. Image reconstruction from a complete set of geometric and complex moments. Signal Process.
**2014**, 98, 224–232. [Google Scholar] [CrossRef] - Lauer, F.; Suen, C.Y.; Bloch, G. A trainable feature extractor for handwritten digit recognition. Pattern Recognit.
**2007**, 40, 1816–1824. [Google Scholar] [CrossRef] - LeCun, Y.; Bottou, L.; Bengio, Y.; Haffner, P. Gradient-based learning applied to document recognition. Proc. IEEE
**1998**, 86, 2278–2324. [Google Scholar] [CrossRef] - Johnson, A.E.; Ghassemi, M.M.; Nemati, S.; Niehaus, K.E.; Clifton, D.; Clifford, G.D. Machine learning and decision support in critical care. Proc. IEEE
**2016**, 104, 444–466. [Google Scholar] [CrossRef] [PubMed] - Bengio, Y.; Courville, A.; Vincent, P. Representation learning: A review and new perspectives. IEEE Trans. Pattern Anal. Mach. Intell.
**2013**, 35, 1798–1828. [Google Scholar] [CrossRef] [PubMed] - Lin, W.M.; Hong, C.M.; Chen, C.H. Neural-network-based MPPT control of a stand-alone hybrid power generation system. IEEE Trans. Power Electron.
**2011**, 26, 3571–3581. [Google Scholar] [CrossRef] - Lin, C.M.; Boldbaatar, E.-A. Autolanding control using recurrent wavelet Elman neural network. IEEE Trans. Syst. Man Cybern.-Syst.
**2015**, 45, 1281–1291. [Google Scholar] [CrossRef] - Yin, S.; Yang, H.; Gao, H.; Qiu, J.; Kaynak, O. An adaptive NN-based approach for fault-tolerant control of nonlinear time-varying delay systems with unmodeled dynamics. IEEE Trans. Neural Netw. Learn. Syst.
**2016**, PP, 1–12. [Google Scholar] [CrossRef] [PubMed] - Sheela, K.G.; Deepa, S. Review on methods to fix number of hidden neurons in neural networks. Math. Probl. Eng.
**2013**, 2013, 425740. [Google Scholar] [CrossRef]

**Figure 1.**The figure shows the system components. Received data are processed in the preprocessing and feature-extraction parts to extract features, and are then classified in the classifier part.

**Figure 2.**This figure shows the details of the classifier. $Network1$, the main network, is composed of a CNN, and $Network2$ is an ENN. $Network2$ assists the main network ($Network1$) in completing the classification.

**Figure 3.**In this figure, different waveform classes are shown, including Linear Frequency Modulation (LFM), Binary Phase Shift Keying (BPSK), Frank, Costas codes, P1–P4 codes and T1–T4 codes. There are significant differences among the Choi–Williams Time-Frequency Distribution (CWD) images.

**Figure 4.**In this figure, we exhibit the processing of a P3 code at a signal-to-noise ratio (SNR) of −4 dB.

**Figure 5.**The figure shows the structure of the Convolutional Neural Network (CNN). The input image is processed in the hidden layers and classified in the output layer.

**Figure 6.**This figure shows the structure of the Elman neural network. It is a three-layer network with feedback loops, so the network can retain its past state, which is useful for waveform classification.

**Figure 7.**This figure depicts the different probabilities of 12 types of radar waveforms with testing data. SNR: Signal-to-noise ratio.

Index | Description | Symbol |
---|---|---|

1 | Moment (1-order) | ${\widehat{M}}_{10}$ |

2 | Moment (2-order) | ${\widehat{M}}_{20}$ |

3 | Cumulant (2-order) | ${\widehat{C}}_{20}$ |

4 | PSD maximum (1-order) | ${\gamma}_{1}$ |

5 | PSD maximum (2-order) | ${\gamma}_{2}$ |

6 | Std. of phase | ${\widehat{\sigma}}_{\varphi}$ |

7 | Std. of frequency | ${\widehat{\sigma}}_{f}$ |

8 | No. of objects (20%) | ${N}_{obj1}$ |

9 | No. of objects (50%) | ${N}_{obj2}$ |

10 | CWD time peak location | ${t}_{max}$ |

11 | Std. of object width | ${\widehat{\sigma}}_{obj}$ |

12 | Maximum of PCA degree | ${\widehat{\theta}}_{max}$ |

13 | Std. of ${\mathit{f}}_{n}$ | ${\widehat{\sigma}}_{Wf}$ |

14 | Autocorrelation of ${\mathit{f}}_{n}$ | r |

15 | FFT of correlation ${\mathit{f}}_{n}$ | ${\widehat{a}}_{max}$ |

16 | Pseudo-Zernike moment (2-order) | ${\widehat{Z}}_{20}$ |

17 | Pseudo-Zernike moment (3-order) | ${\widehat{Z}}_{30}$ |

18 | Pseudo-Zernike moment (4-order) | ${\widehat{Z}}_{22}$ |

19 | Pseudo-Zernike moment (4-order) | ${\widehat{Z}}_{31}$ |

20 | Pseudo-Zernike moment (5-order) | ${\widehat{Z}}_{32}$ |

21 | Pseudo-Zernike moment (6-order) | ${\widehat{Z}}_{33}$ |

22 | Pseudo-Zernike moment (7-order) | ${\widehat{Z}}_{43}$ |

0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|

0 | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ||||||

1 | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ||||||

2 | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ||||||

3 | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ||||||

4 | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ||||||

5 | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |

Signal Waveforms | Parameters | Uniform Ranges |
---|---|---|

- | Sampling rate $({f}_{s})$ | $\mathrm{U}(1)$ |

LFM | Samples number $(N)$ | $[500,1000]$ |

Bandwidth $(\Delta f)$ | $\mathrm{U}(1/16,1/8)$ | |

Initial frequency $({f}_{0})$ | $\mathrm{U}(1/16,1/8)$ | |

BPSK | Cycles per phase code $(cpp)$ | $[1,5]$ |

Number of code periods $({N}_{p})$ | $[100,300]$ | |

Barker codes | $\{7,11,13\}$ | |

Carrier frequency $({f}_{c})$ | $\mathrm{U}(1/8,1/4)$ | |

Costas codes | N | $[512,1024]$ |

Fundamental frequency $({f}_{min})$ | $\mathrm{U}(1/24)$ | |

Number changed | $[3,6]$ | |

Frank and P1 codes | ${f}_{c}$ | $\mathrm{U}(1/8,1/4)$ |

$cpp$ | $[1,5]$ | |

Frequency steps $(M)$ | $[4,8]$ | |

P2 code | ${f}_{c}$ | $\mathrm{U}(1/8,1/4)$ |

$cpp$ | $[1,5]$ | |

M | $2\times [2,4]$ | |

P3 and P4 codes | ${f}_{c}$ | $\mathrm{U}(1/8,1/4)$ |

$cpp$ | $[1,5]$ | |

M | $2\times [16,35]$ | |

T1-T4 codes | Number of segments $(k)$ | $[4,6]$ |

Overall code duration $(T)$ | $[0.07,0.1]$ |

**Table 4.**Confusion matrix for the system at an SNR of $-2$ dB. The overall Ratio of Successful Recognition (RSR) is 94.5%.

T1 | T2 | T3 | T4 | LFM | Costas | BPSK | Frank | P1 | P2 | P3 | P4 | |
---|---|---|---|---|---|---|---|---|---|---|---|---|

T1 | 99.5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |

T2 | 0 | 98.5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |

T3 | 0.5 | 0 | 96 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |

T4 | 0 | 0 | 0 | 98.5 | 0 | 1.5 | 2 | 0 | 0 | 0 | 0 | 0 |

LFM | 0 | 0 | 0 | 0 | 89.5 | 0 | 0 | 0 | 1 | 0 | 2 | 15.5 |

Costas | 0 | 0 | 0 | 0 | 0 | 97.5 | 0 | 0 | 0 | 1.5 | 1 | 0 |

BPSK | 0 | 0 | 0.5 | 0 | 0 | 0 | 98 | 1 | 0 | 0 | 0 | 0.5 |

Frank | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 90 | 4 | 7 | 11 | 1.5 |

P1 | 0 | 0 | 0 | 0 | 0.5 | 0 | 0 | 0 | 87.5 | 0 | 0 | 7.5 |

P2 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 6.5 | 5.5 | 90 | 3 | 5.5 |

P3 | 0 | 1.5 | 3.5 | 1.5 | 0 | 0 | 0 | 2.5 | 1 | 1.5 | 78.5 | 0.5 |

P4 | 0 | 0 | 0 | 0 | 8 | 0 | 0 | 0 | 1 | 0 | 4.5 | 69 |

Item | Model/Version |
---|---|

CPU | E5-1620v2 (Intel) |

Memory | 16 GB (DDR3 @ 1600 MHz) |

GPU | NVS315 (Quadro) |

MATLAB | R2012a |

LFM | BPSK | Costas | Frank | |
---|---|---|---|---|

−4 dB | 54.798/55.302/85.331 | 51.117/51.132/82.553 | 54.463/54.798/84.735 | 56.115/56.221/86.132 |

0 dB | 54.336/55.096/85.152 | 50.860/50.903/82.094 | 54.108/54.255/84.113 | 55.704/55.909/85.754 |

6 dB | 53.983/54.887/84.755 | 50.378/50.875/81.598 | 53.766/53.842/83.795 | 55.368/55.806/85.389 |

P1 | P2 | P3 | P4 | |

−4 dB | 58.887/58.889/88.112 | 55.559/55.759/86.739 | 58.386/58.522/87.117 | 54.105/54.732/85.079 |

0 dB | 58.398/58.519/87.847 | 55.308/55.431/86.180 | 58.106/58.310/86.869 | 53.858/54.338/84.787 |

6 dB | 57.792/58.034/87.106 | 54.668/55.307/85.848 | 57.707/57.802/86.478 | 53.501/54.196/84.503 |

T1 | T2 | T3 | T4 | |

−4 dB | 53.781/54.086/83.308 | 52.896/53.117/85.401 | 55.269/55.887/86.249 | 56.703/56.861/85.322 |

0 dB | 53.266/53.799/83.011 | 52.715/52.980/85.166 | 54.523/55.300/86.093 | 56.359/56.622/85.054 |

6 dB | 52.823/53.201/82.799 | 52.107/52.741/84.455 | 54.396/54.916/85.702 | 55.993/56.279/84.811 |

© 2017 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Zhang, M.; Diao, M.; Gao, L.; Liu, L.
Neural Networks for Radar Waveform Recognition. *Symmetry* **2017**, *9*, 75.
https://doi.org/10.3390/sym9050075
