# Radar-Spectrogram-Based UAV Classification Using Convolutional Neural Networks


## Abstract


## 1. Introduction

## 2. Micro-Doppler Signature (MDS)

## 3. Dataset Generation

#### 3.1. Measurement

#### 3.2. Pre-Processing

## 4. Models

#### 4.1. ResNet-18

#### 4.2. ResNet-SP

## 5. Experiment and Results

## 6. Conclusions

## Author Contributions

## Funding

## Institutional Review Board Statement

## Informed Consent Statement

## Data Availability Statement

## Acknowledgments

## Conflicts of Interest

## References


**Figure 2.** Spectrogram of Unmanned Aerial Vehicles (UAVs): wing-flap (**top**), quad-copter (**middle**), and fixed-wing (**bottom**).

**Figure 3.** Staring-mode X-band frequency-modulated continuous-wave (FMCW) radar (Ancortek's SDR-KIT 980AD2) and its specifications.

**Figure 5.** Sequential video frames of a specific movement for each target: (**a**) Metafly flight from right to left; (**b**) Mavic Air 2 flight from front to back; (**c**) Disco flight from back to front; (**d**) walking from right to left; (**e**) sit-walking from back to front.

**Figure 6.** Spectrogram resolution of walking according to window size; as the window size increases, the frequency resolution increases.

**Figure 7.** Spectrogram shape of Metafly flight according to window overlap ratio: the Metafly's approximately 24 wing beats per second appear more clearly as the window overlap ratio increases.
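The window-size and overlap trade-off illustrated in Figures 6 and 7 can be sketched with SciPy's `spectrogram`. This is a minimal sketch: the sample rate and the test signal below are illustrative assumptions, not the paper's radar parameters.

```python
import numpy as np
from scipy.signal import spectrogram

fs = 2000                                  # Hz, assumed sample rate for illustration
t = np.arange(0, 1.0, 1 / fs)
# toy micro-Doppler-like signal: a carrier phase-modulated at ~24 Hz,
# loosely mimicking the Metafly wing-beat line in Figure 7
x = np.cos(2 * np.pi * (300 * t + 40 * np.sin(2 * np.pi * 24 * t)))

for nperseg in (128, 256, 512):            # the three window sizes used in the dataset
    noverlap = int(nperseg * 0.85)         # 85% overlap, the densest setting
    f, tt, Sxx = spectrogram(x, fs=fs, nperseg=nperseg, noverlap=noverlap)
    # frequency-bin spacing shrinks as the window grows: fs / nperseg
    print(nperseg, f[1] - f[0])
```

Larger `nperseg` narrows the frequency bins (finer frequency resolution) at the cost of time resolution, which is the effect Figure 6 shows.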

**Figure 8.** Spectrogram of background clutter (**top**) and Mavic Air 2 (**bottom**). The red box marks the section in which the target was not recorded.

**Figure 9.** Spectrograms of Mavic Air 2 before refinement (**left**) and after refinement (**right**). Red box: chopped image with an average intensity below the threshold; blue box: chopped image with an average intensity above the threshold.
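The refinement rule in Figure 9 (discard chopped spectrogram images whose average intensity falls below a threshold) can be sketched as follows; the threshold value and chip contents here are illustrative assumptions:

```python
import numpy as np

def refine(chips, threshold):
    """Keep only spectrogram chips whose mean intensity exceeds the threshold."""
    return [chip for chip in chips if chip.mean() > threshold]

rng = np.random.default_rng(0)
strong = rng.uniform(0.5, 1.0, size=(10, 10))   # chip containing a target return
weak = rng.uniform(0.0, 0.1, size=(10, 10))     # chip of background only
kept = refine([strong, weak], threshold=0.3)
print(len(kept))  # 1: the background-only chip is discarded
```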

**Figure 13.** The receptive field images: 3 × 3 kernel (**left**), 5 × 5 kernel (**center**), 2-dilated 3 × 3 kernel (**right**).
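The three panels in Figure 13 follow the standard dilated-convolution relation, receptive field = dilation × (kernel − 1) + 1, sketched per axis here:

```python
def receptive_field(kernel_size, dilation):
    # effective span of a dilated 1-D kernel; applies per axis for 2-D kernels
    return dilation * (kernel_size - 1) + 1

print(receptive_field(3, 1))  # 3: plain 3 x 3 kernel (left)
print(receptive_field(5, 1))  # 5: plain 5 x 5 kernel (center)
print(receptive_field(3, 2))  # 5: 2-dilated 3 x 3 covers the same span (right)
print(receptive_field(3, 3))  # 7: 3-dilated 3 x 3 matches a plain 7-tap kernel
```

A dilated 3 × 3 kernel thus widens the receptive field without adding parameters, which is why it is compared against plain 5 and 7 kernels later in the results.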

**Figure 14.** The training loss curve before (**left**) and after (**right**) applying the gradient clipping method.
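The gradient clipping of Figure 14 can be sketched as a norm-based clip in NumPy; this is a generic sketch, and the paper's actual training code and clipping threshold are not specified here.

```python
import numpy as np

def clip_by_norm(grad, max_norm):
    """Rescale the gradient when its L2 norm exceeds max_norm; otherwise leave it."""
    norm = np.linalg.norm(grad)
    if norm > max_norm:
        grad = grad * (max_norm / norm)
    return grad

g = np.array([3.0, 4.0])              # L2 norm 5: stand-in for an exploding gradient
print(clip_by_norm(g, max_norm=1.0))  # [0.6 0.8], rescaled to unit norm
```

Capping the gradient norm prevents the occasional huge update that produces the loss spikes visible in the left panel.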

**Table 1.** The movements for each target and the settings for recording. L is left, R is right, B is back, F is forth, and C is concentric.

| Parameter | Metafly | Mavic Air 2 | Disco | Walking | Sit-Walking |
|---|---|---|---|---|---|
| Alt./Range (m) | 0–10/0–10 | 0–10/0–100 | 10/0–100 | 0/0–100 | 0/0–100 |
| Movement | Free flight | Free flight | Circular flight (L ↔ R, B ↔ F, C) | Free | Free |

| Division | Mavic Air 2 | Disco |
|---|---|---|
| Before refinement | 995 | 339 |
| After refinement | 747 | 168 |
| Removal percentage | 25% | 50% |

| Category | Window Size | Window Overlap | Vertical Flip | Total |
|---|---|---|---|---|
| (Specification) | (128, 256, 512) | (50%, 70%, 85%) | (O, X) | |
| Original signal | ×3 | ×3 | ×2 | ×18 |
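The augmentation settings above multiply out to 18 variants per recorded signal (3 window sizes × 3 overlap ratios × 2 flip states), e.g.:

```python
from itertools import product

window_sizes = (128, 256, 512)
overlaps = (0.50, 0.70, 0.85)
flips = (False, True)                 # vertical flip off / on

variants = list(product(window_sizes, overlaps, flips))
print(len(variants))                  # 18 spectrograms per original signal
```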

**Table 4.** The number of samples for each class in the radar spectrogram dataset for five low-altitude, slow-speed, and small radar cross-section (LSS) targets.

| Class | Metafly | Mavic Air 2 | Disco | Walking | Sit-Walking | Total |
|---|---|---|---|---|---|---|
| Train | 2142 | 2176 | 2196 | 2136 | 2112 | 10,762 |
| Test | 219 | 218 | 206 | 198 | 195 | 1096 |

**Table 5.** Classification accuracy of the ResNet-18 model according to the signal form of the radar spectrogram.

| Channels | Accuracy (%) |
|---|---|
| 1 (Magnitude) | 75.98 |
| 2 (Real, Imaginary) | 79.88 |
| 2 (Magnitude, Phase) | 54.53 |
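The best-performing input form in Table 5 feeds the complex STFT to the network as two channels, real and imaginary. A minimal sketch of that conversion, using toy STFT values:

```python
import numpy as np

stft = np.array([[1 + 2j, 3 - 1j],              # toy 2 x 2 complex STFT frame
                 [0.0 + 0.5j, 2 + 0j]])
two_channel = np.stack([stft.real, stft.imag])  # channel-first layout: (2, H, W)
print(two_channel.shape)                        # (2, 2, 2)
```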

| Noise Level | Accuracy on Gaussian Noise (%) | Accuracy on Uniform Noise (%) |
|---|---|---|
| 0.01 | 76.20 | 80.80 |
| 0.03 | 66.30 | 80.90 |
| 0.05 | 40.25 | 75.71 |

| Conv. Groups | Number of Layers | Feature-Map Size | Accuracy (%) |
|---|---|---|---|
| 5 | 18 | 4 × 4 | 79.88 |
| 4 | 14 | 8 × 8 | 81.43 |
| 3 | 10 | 16 × 16 | 75.38 |

| Kernel size | 7 | 3 | 3 |
|---|---|---|---|
| Dilation | 1 | 2 | 3 |
| Receptive field | 7 | 5 | 7 |
| Accuracy (%) | 79.88 | 80.26 | 79.33 |

| Models | Inference Time (ms) | Training Time (s) | Accuracy (%) | Standard Deviation |
|---|---|---|---|---|
| ResNet-18 | 2.68 | 640.39 | 79.88 | 0.0204 |
| ResNet-SP | 1.98 | 242.22 | 83.39 | 0.0115 |

Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Park, D.; Lee, S.; Park, S.; Kwak, N.
Radar-Spectrogram-Based UAV Classification Using Convolutional Neural Networks. *Sensors* **2021**, *21*, 210.
https://doi.org/10.3390/s21010210
