Automated Detection of Multi-Rotor UAVs Using a Machine-Learning Approach
Abstract
1. Introduction
2. Existing Methods for Detection and Identification of Objects in an Image
2.1. Background Subtraction
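Background subtraction labels as foreground the pixels that differ noticeably from a reference model of the static scene. As a minimal sketch of the frame-differencing variant (the array sizes and the threshold value below are illustrative, not the paper's settings):

```python
import numpy as np

def subtract_background(frame, background, threshold=25):
    """Mark pixels whose absolute difference from a reference
    background frame exceeds a threshold as foreground (1)."""
    # Widen to a signed type first so uint8 subtraction cannot wrap around.
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return (diff > threshold).astype(np.uint8)

# Synthetic 8-bit grayscale example: a static scene plus a bright object.
background = np.full((4, 4), 50, dtype=np.uint8)
frame = background.copy()
frame[1:3, 1:3] = 200          # a moving object enters the scene
mask = subtract_background(frame, background)
```

In practice the reference model is updated over time (e.g. a running average or a mixture of Gaussians, as in OpenCV's `BackgroundSubtractorMOG2`) rather than held as a single fixed frame.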
2.2. Contour Searching
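A contour is the set of foreground pixels that touch the background. Libraries such as OpenCV provide this via `cv2.findContours`; a self-contained sketch of the underlying idea (4-connectivity, toy mask) might look like:

```python
import numpy as np

def boundary_pixels(mask):
    """Return coordinates of foreground pixels that have at least one
    background 4-neighbour, i.e. the contour of each binary region."""
    padded = np.pad(mask, 1)   # zero border so edge pixels count as boundary
    contour = []
    for y in range(mask.shape[0]):
        for x in range(mask.shape[1]):
            if mask[y, x] and (
                padded[y, x + 1] == 0      # neighbour above
                or padded[y + 2, x + 1] == 0  # below
                or padded[y + 1, x] == 0      # left
                or padded[y + 1, x + 2] == 0  # right
            ):
                contour.append((y, x))
    return contour

mask = np.zeros((5, 5), dtype=np.uint8)
mask[1:4, 1:4] = 1                 # 3x3 filled square
contour = boundary_pixels(mask)    # its 8 border pixels; (2, 2) is interior
```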
2.3. Selective Searching
- Capturing all possible scales in the image—using a hierarchical algorithm, selective searching attempts to take into account all possible scales of the objects;
- Diversification—since objects in the analyzed area are subject to varying conditions such as illumination, shadows, and others, selective searching does not rely on a single uniform strategy for the subregion search;
- Calculation speed—since subregion searching is only a preparatory step for the object recognition itself, the algorithm is designed not to slow down the overall computation.
2.4. Support Vector Machines (SVM)
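An SVM seeks the separating hyperplane with maximal margin between two classes. A minimal NumPy sketch of a linear SVM trained by sub-gradient descent on the regularised hinge loss (toy data and hyperparameters are illustrative; production code would use a library such as scikit-learn or OpenCV's `cv2.ml.SVM`):

```python
import numpy as np

def train_linear_svm(X, y, lr=0.01, lam=0.01, epochs=200):
    """Train a linear SVM (labels -1/+1) by minimising the regularised
    hinge loss with plain sub-gradient descent."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (xi @ w + b) < 1:      # margin violated: hinge gradient
                w += lr * (yi * xi - lam * w)
                b += lr * yi
            else:                          # inside margin: only regularise
                w -= lr * lam * w
    return w, b

# Linearly separable toy data: class +1 on the right, class -1 on the left.
X = np.array([[2.0, 1.0], [3.0, 2.0], [-2.0, -1.0], [-3.0, -2.0]])
y = np.array([1, 1, -1, -1])
w, b = train_linear_svm(X, y)
preds = np.sign(X @ w + b)
```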
2.5. Cascade Classifier (Haar-Like Features)
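Haar-like features are differences of rectangular pixel sums, which the cascade evaluates in constant time via an integral image (summed-area table). A small sketch of both pieces (the example image and feature placement are illustrative):

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a zero border: ii[y, x] = sum of img[:y, :x]."""
    return np.pad(img, ((1, 0), (1, 0))).cumsum(0).cumsum(1)

def rect_sum(ii, y, x, h, w):
    """Sum of img[y:y+h, x:x+w] in O(1) using four integral-image lookups."""
    return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]

def haar_two_rect_vertical(ii, y, x, h, w):
    """Two-rectangle Haar-like feature: left half minus right half."""
    half = w // 2
    return rect_sum(ii, y, x, h, half) - rect_sum(ii, y, x + half, h, half)

img = np.array([[5, 5, 1, 1],
                [5, 5, 1, 1]])
ii = integral_image(img)
value = haar_two_rect_vertical(ii, 0, 0, 2, 4)   # (4*5) - (4*1) = 16
```

A trained cascade (e.g. OpenCV's `CascadeClassifier`) thresholds many such features in successive stages, rejecting most windows early.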
2.6. Machine Learning and Neural Networks
2.7. TensorFlow (TF)
3. Drone Detection Using Computer Vision Methods
- SIFT—the number of features found was higher than with the other detectors; features were scattered throughout the object and were also identified in areas that did not correspond to its edges, and finding good matches remained difficult even after the additional filter was applied;
- SURF—as with SIFT, the number of identified features was too high and often did not correspond to the object’s edges; feature matching was only partially successful;
- BRISK—the number of features found was higher, but most of them corresponded to the edges and important parts of the object; the additional filter yielded a sufficient number of successfully matched features;
- ORB—the number of significant points was the lowest among the tested detectors, but they were located almost exclusively on the edges and important parts of the object; with the additional filter applied, the points were not scattered across the whole object, and a high number of correctly matched features was achieved;
- AKAZE—the number of significant points was higher, some lying outside the object and its important areas; the additional filter yielded a sufficient number of successfully matched features.
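The "additional filter" referred to above is commonly implemented as Lowe's ratio test: a match is kept only when the best candidate is clearly closer than the second best (the paper does not spell out its filter, so this is an assumed but typical choice). A sketch for binary descriptors such as those produced by BRISK or ORB, matched by Hamming distance:

```python
import numpy as np

def hamming(a, b):
    """Hamming distance between two binary descriptor vectors."""
    return int(np.count_nonzero(a != b))

def match_with_ratio_test(desc_a, desc_b, ratio=0.75):
    """Brute-force match, keeping (i, j) only when the best distance is
    clearly smaller than the second best (Lowe's ratio test)."""
    matches = []
    for i, d in enumerate(desc_a):
        order = sorted(range(len(desc_b)), key=lambda j: hamming(d, desc_b[j]))
        best, second = order[0], order[1]
        if hamming(d, desc_b[best]) < ratio * hamming(d, desc_b[second]):
            matches.append((i, best))
    return matches

rng = np.random.default_rng(0)
desc_b = rng.integers(0, 2, size=(5, 256), dtype=np.uint8)  # "database" side
desc_a = desc_b[[0, 3]].copy()
desc_a[0, :4] ^= 1               # slightly corrupted copy of descriptor 0
matches = match_with_ratio_test(desc_a, desc_b)
```

With OpenCV, the equivalent is `cv2.BFMatcher(cv2.NORM_HAMMING).knnMatch(...)` followed by the same ratio check.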
- movement and tracking of a single object,
- movement and tracking of multiple objects,
- leaving the sensing area,
- clash of objects.
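The scenarios above (single and multiple objects, objects leaving the sensing area) can be exercised with a minimal centroid tracker: each new detection is associated with the nearest existing track, and tracks with no matching detection are treated as objects that left the area. This is an illustrative sketch, not the tracking method of the paper:

```python
import math

class CentroidTracker:
    """Nearest-neighbour multi-object tracker on detection centroids."""

    def __init__(self, max_dist=50.0):
        self.max_dist = max_dist
        self.tracks = {}          # track id -> (x, y)
        self._next_id = 0

    def update(self, detections):
        assigned = {}
        unmatched = dict(self.tracks)
        for (x, y) in detections:
            if unmatched:
                tid = min(unmatched, key=lambda t: math.dist(unmatched[t], (x, y)))
                if math.dist(unmatched[tid], (x, y)) <= self.max_dist:
                    assigned[tid] = (x, y)    # continue an existing track
                    del unmatched[tid]
                    continue
            assigned[self._next_id] = (x, y)  # new object entered the scene
            self._next_id += 1
        self.tracks = assigned                # unmatched tracks left the area
        return self.tracks

tracker = CentroidTracker()
tracker.update([(10, 10), (100, 100)])   # two objects appear: ids 0 and 1
tracks = tracker.update([(12, 11)])      # object 1 has left the sensing area
```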
4. Detection and Identification of a Drone Using a Machine-Learning Approach
- preparing data for training;
- preparing data for evaluation;
- selection of detection model;
- creating other necessary files for training;
- training;
- export of the trained model to a frozen graph format;
- creating an application to test the detector.
4.1. Preparing Data for Training and Evaluation
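Annotation tools commonly used for this step, such as LabelImg, export one Pascal-VOC XML file per image; these are typically flattened into CSV rows before conversion to the TFRecord format that TensorFlow training consumes. A sketch of that flattening step (the filename and class name in the sample are illustrative):

```python
import xml.etree.ElementTree as ET

def voc_to_rows(xml_string):
    """Flatten one Pascal-VOC annotation into
    (filename, class, xmin, ymin, xmax, ymax) rows."""
    root = ET.fromstring(xml_string)
    filename = root.findtext("filename")
    rows = []
    for obj in root.iter("object"):
        box = obj.find("bndbox")
        rows.append((filename, obj.findtext("name"),
                     int(box.findtext("xmin")), int(box.findtext("ymin")),
                     int(box.findtext("xmax")), int(box.findtext("ymax"))))
    return rows

sample = """<annotation>
  <filename>drone_001.jpg</filename>
  <object><name>drone</name>
    <bndbox><xmin>48</xmin><ymin>30</ymin><xmax>202</xmax><ymax>140</ymax></bndbox>
  </object>
</annotation>"""
rows = voc_to_rows(sample)
```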
4.2. Selection of Detection Model
4.3. Creating Other Necessary Files for Training
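Among the files required by the TensorFlow Object Detection API is a label map, a small protobuf text file assigning an integer id (starting from 1; 0 is reserved for the background) to each class name. A minimal example for a single drone class (the class name here mirrors the experiments; the exact file used by the authors is not reproduced):

```
item {
  id: 1
  name: 'drone'
}
```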
4.4. Training
5. Experiments
- drone in a static image;
- drone in the sky;
- multiple drones in one image;
- drone with another flying object in the sky.
5.1. Drone in a Static Image
5.2. Video: Drone in the Sky
5.3. Video: Multiple Drones in One Image
5.4. Video: Drone with Another Flying Object in the Sky
5.5. Statistical Evaluation of the Detector
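The detection rates in the evaluation tables follow directly from the counts: rate = successful detections / total objects × 100. The reported figures appear to be truncated (not rounded) to one decimal place, e.g. 18/76 = 23.68…% is listed as 23.6%. A sketch that reproduces the tabulated values under that assumption:

```python
def detection_rate(successes, total):
    """Detection rate in percent, truncated to one decimal place,
    matching how the figures in the tables appear to be derived."""
    return int(successes / total * 1000) / 10

# Figures from the statistical evaluation tables:
print(detection_rate(74, 74))    # drones, first experiment  -> 100.0
print(detection_rate(18, 76))    # birds, first experiment   -> 23.6
print(detection_rate(92, 150))   # overall, first experiment -> 61.3
print(detection_rate(70, 74))    # drones, second experiment -> 94.5
```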
6. Conclusions and Future Work
Author Contributions
Funding
Conflicts of Interest
References
- Boon, M.A.; Drijfhout, A.P.; Tesfamichael, S. Comparison of a fixed-wing and multi-rotor UAV for environmental mapping applications: A case study. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2017, 42, 47.
- Norouzi Ghazbi, S.; Aghli, Y.; Alimohammadi, M.; Akbari, A.A. Quadrotors unmanned aerial vehicles: A review. Int. J. Smart Sens. Intell. Syst. 2016, 9, 309–333.
- McEvoy, J.F.; Hall, G.P.; McDonald, P.G. Evaluation of unmanned aerial vehicle shape, flight path and camera type for waterfowl surveys: Disturbance effects and species recognition. PeerJ 2016, 4, e1831.
- Rosnell, T.; Honkavaara, E. Point cloud generation from aerial image data acquired by a quadrocopter type micro unmanned aerial vehicle and a digital still camera. Sensors 2012, 12, 453–480.
- Deng, C.; Wang, S.; Huang, Z.; Tan, Z.; Liu, J. Unmanned aerial vehicles for power line inspection: A cooperative way in platforms and communications. J. Commun. 2014, 9, 687–692.
- DeGarmo, M.; Nelson, G. Prospective unmanned aerial vehicle operations in the future national airspace system. In Proceedings of the AIAA 4th Aviation Technology, Integration and Operations (ATIO) Forum, Chicago, IL, USA, 20–22 September 2004.
- Case, E.E.; Zelnio, A.M.; Rigling, B.D. Low-cost acoustic array for small UAV detection and tracking. In Proceedings of the 2008 IEEE National Aerospace Electronics Conference, Dayton, OH, USA, 16–18 July 2008.
- Busset, J.; Perrodin, F.; Wellig, P.; Ott, B.; Heutschi, K.; Rühl, T.; Nussbaumer, T. Detection and tracking of drones using advanced acoustic cameras. In Unmanned/Unattended Sensors and Sensor Networks XI; and Advanced Free-Space Optical Communication Techniques and Applications; International Society for Optics and Photonics: Bellingham, WA, USA, 2015; Volume 9647.
- Shi, X.; Yang, C.; Xie, W.; Liang, C.; Shi, Z.; Chen, J. Anti-drone system with multiple surveillance technologies: Architecture, implementation, and challenges. IEEE Commun. Mag. 2018, 56, 68–74.
- Bradski, G.; Kaehler, A. Learning OpenCV: Computer Vision with the OpenCV Library; O’Reilly Media, Inc.: Newton, MA, USA, 2008.
- Šikudová, E.; Černeková, Z.; Benešová, W.; Haladová, Z.; Kučerová, J. Počítačové Videnie. Detekcia a Rozpoznávanie Objektov [Computer Vision: Object Detection and Recognition]; Wikina: Prague, Czech Republic, 2013; p. 397.
- Toyama, K.; Krumm, J.; Brumitt, B.; Meyers, B. Wallflower: Principles and practice of background maintenance. In Proceedings of the Seventh IEEE International Conference on Computer Vision, Kerkyra, Greece, 20–27 September 1999; Volume 1.
- Koniar, D.; Hargaš, L.; Štofan, S. Segmentation of motion regions for biomechanical systems. Procedia Eng. 2012, 48, 304–311.
- Uijlings, J.R.; Van De Sande, K.E.; Gevers, T.; Smeulders, A.W. Selective search for object recognition. Int. J. Comput. Vis. 2013, 104, 154–171.
- Mallick, S. Image Recognition and Object Detection: Part 1. Available online: https://www.learnopencv.com/image-recognition-and-object-detection-part1/ (accessed on 28 April 2018).
- Chen, Q.; Georganas, N.D.; Petriu, E.M. Real-time vision-based hand gesture recognition using Haar-like features. In Proceedings of the 2007 IEEE Instrumentation & Measurement Technology Conference (IMTC 2007), Warsaw, Poland, 1–3 May 2007.
- Abadi, M.; Barham, P.; Chen, J.; Chen, Z.; Davis, A.; Dean, J.; Devin, M.; Ghemawat, S.; Irving, G.; Isard, M.; et al. TensorFlow: A system for large-scale machine learning. In Proceedings of the 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI 16), Savannah, GA, USA, 2–4 November 2016.
- Abadi, M.; Agarwal, A.; Barham, P.; Brevdo, E.; Chen, Z.; Citro, C.; Corrado, G.S.; Davis, A.; Dean, J.; Devin, M.; et al. TensorFlow: Large-scale machine learning on heterogeneous distributed systems. arXiv 2016, arXiv:1603.04467.
- TensorFlow Programmer’s Guide. Tensors. Available online: https://www.tensorflow.org/programmers_guide/tensors (accessed on 28 April 2018).
- TensorFlow Programmer’s Guide. Graphs and Sessions. Available online: https://www.tensorflow.org/programmers_guide/graphs (accessed on 28 April 2018).
- Huang, J.; Rathod, V.; Sun, C.; Zhu, M.; Korattikara, A.; Fathi, A.; Fischer, I.; Wojna, Z.; Song, Y.; Guadarrama, S.; et al. Speed/accuracy trade-offs for modern convolutional object detectors. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017.
- Tareen, S.A.; Saleem, Z. A comparative analysis of SIFT, SURF, KAZE, AKAZE, ORB, and BRISK. In Proceedings of the 2018 International Conference on Computing, Mathematics and Engineering Technologies (iCoMET), Sukkur, Pakistan, 3–4 March 2018.
- Szegedy, C.; Ioffe, S.; Vanhoucke, V.; Alemi, A.A. Inception-v4, Inception-ResNet and the impact of residual connections on learning. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, San Francisco, CA, USA, 4–9 February 2017.
Criterion | SIFT | SURF | BRISK | ORB | AKAZE
---|---|---|---|---|---
Number of features | x | x | xx | xxx | xx
Dispersal of features | x | x | xxx | xxx | xx
Number of successfully matched features | x | x | xx | xx | xx
Overall rating | x | x | xx | xxx | xx
Name of the Model | Speed of the Model [ms] | Mean Average Precision ¹
---|---|---
ssd_mobilenet_v2_coco | 31 | 22
ssd_inception_v2_coco | 42 | 24
faster_rcnn_inception_v2_coco | 58 | 28
faster_rcnn_resnet50_coco | 89 | 30
faster_rcnn_resnet50_lowproposals_coco | 64 | -
Model Configuration

Parameter | Value
---|---
model name | Faster R-CNN with Inception v2
1st stage (location) regularizer | L2
1st stage initializer | Truncated Normal
2nd stage (classification) regularizer | L2
2nd stage initializer | Variance Scaling
score converter | SOFTMAX

Training Configuration

Parameter | Value
---|---
learning rate | 0.0002
number of steps | 300k
Object Class | Num. of Objects | Num. of Successful Detections | Num. of Failed Detections | Detection Rate [%] |
---|---|---|---|---|
Drone | 74 | 74 | 0 | 100.0 |
Bird | 76 | 18 | 58 | 23.6 |
Overall | 150 | 92 | 58 | 61.3 |
Object Class | Num. of Objects | Num. of Successful Detections | Num. of Failed Detections | Detection Rate [%] |
---|---|---|---|---|
Drone | 74 | 70 | 4 | 94.5 |
Bird | 76 | 76 | 0 | 100.0 |
Overall | 150 | 146 | 4 | 97.3 |
© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Share and Cite
Grác, Š.; Beňo, P.; Duchoň, F.; Dekan, M.; Tölgyessy, M. Automated Detection of Multi-Rotor UAVs Using a Machine-Learning Approach. Appl. Syst. Innov. 2020, 3, 29. https://doi.org/10.3390/asi3030029