
UAV Detection, Classification, and Tracking

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Electronic Sensors".

Deadline for manuscript submissions: 31 December 2024 | Viewed by 43262

Special Issue Editors


Dr. Anastasios Dimou
Guest Editor
Information Technologies Institute, Centre for Research and Technology Hellas, 57001 Thessaloniki, Greece
Interests: UAV detection and classification; video surveillance applications; artificial intelligence; image and video processing; video coding

Dr. Dimitrios Zarpalas
Guest Editor
Information Technologies Institute, Centre for Research and Technology Hellas, 57001 Thessaloniki, Greece
Interests: UAV detection and classification; 3D/4D computer vision; 3D human reconstruction and motion capturing; medical image processing

Prof. Dr. Angelo Coluccia
Guest Editor
Department of Innovation Engineering, University of Salento, 73100 Lecce, Italy
Interests: radar detection and localization; wireless networks; multi-sensor; multi-agent signal processing; cyber-physical systems; smart devices; social networks

Dr. Alessio Fascista
Guest Editor
Department of Innovation Engineering, University of Salento, 73100 Lecce, Italy
Interests: telecommunications; statistical signal processing; detection; estimation; localization

Dr. Arne Schumann
Guest Editor
Fraunhofer Institute of Optronics, System Technologies and Image Exploitation, 76131 Karlsruhe, Germany
Interests: UAV detection, tracking and classification; deep learning image analysis; object detection and classification; person re-identification

Dr. Lars Sommer
Guest Editor
Fraunhofer Institute of Optronics, System Technologies and Image Exploitation, 76131 Karlsruhe, Germany
Interests: deep learning image analysis; object classification; object segmentation

Special Issue Information

Dear Colleagues,

At present, UAVs (a.k.a. drones) are widely available in a broad range of sizes and capability levels, introducing unprecedented opportunities but also threats to safety, privacy, and security. While artificial intelligence and deep learning, in conjunction with hardware innovations, have significantly improved the ability to detect and classify drones, counter-UAV systems still face challenges in detecting threats from diverse UAV types and makes in ever-changing environments.

This Special Issue aims to highlight advances in the field of UAV detection, classification, and tracking using a variety of single and multi-sensor techniques. Topics include, but are not limited to:

  • Visual UAV detection and classification;
  • IR UAV detection and classification;
  • Radar UAV detection and classification;
  • RF UAV detection and classification;
  • Data fusion for UAV detection and classification;
  • UAV tracking.

Dr. Anastasios Dimou
Dr. Dimitrios Zarpalas
Prof. Dr. Angelo Coluccia
Dr. Alessio Fascista
Dr. Arne Schumann
Dr. Lars Sommer
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • UAV detection
  • UAV classification
  • UAV tracking
  • Optical sensors
  • IR sensors
  • RF sensors
  • Radars
  • Deep learning

Published Papers (8 papers)


Research

21 pages, 3664 KiB  
Article
A Reduced Complexity Acoustic-Based 3D DoA Estimation with Zero Cyclic Sum
by Rigel Procópio Fernandes, José Antonio Apolinário, Jr. and José Manoel de Seixas
Sensors 2024, 24(7), 2344; https://doi.org/10.3390/s24072344 - 07 Apr 2024
Viewed by 453
Abstract
Accurate direction of arrival (DoA) estimation is paramount in various fields, from surveillance and security to spatial audio processing. This work introduces an innovative approach that refines the DoA estimation process and demonstrates its applicability in diverse and critical domains. We propose a reduced-complexity, two-stage method that capitalizes on the often-overlooked secondary peaks of the cross-correlation function. In the first stage, a low-complexity cost function based on the zero cyclic sum (ZCS) condition is used to allow for an exhaustive search of all combinations of time delays between pairs of microphones, including the primary and secondary peaks of each cross-correlation. In the second stage, only a subset of the time delay combinations with the lowest ZCS cost needs to be tested using a least-squares (LS) solution, which requires more computational effort. To showcase the versatility and effectiveness of our method, we apply it to the challenging acoustic-based drone DoA estimation scenario using an array of four microphones. Through rigorous experimentation with simulated and actual data, our research underscores the potential of our proposed DoA estimation method as an alternative for handling complex acoustic scenarios. The ZCS method demonstrates an accuracy of 89.4%±2.7%, whereas the ZCS with the LS method exhibits a notably higher accuracy of 94.0%±3.1%, showcasing the superior performance of the latter. Full article
(This article belongs to the Special Issue UAV Detection, Classification, and Tracking)
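As a rough illustration of the two-stage idea above (not code from the paper), the sketch below builds a synthetic three-microphone loop with known integer sample delays, collects the primary and secondary cross-correlation peaks for each pair, and ranks delay combinations by the zero-cyclic-sum cost |τ01 + τ12 + τ20|; the paper's four-microphone array and LS refinement stage are omitted.

```python
import numpy as np
from itertools import product

# Hypothetical setup: one broadband source reaching three microphones
# with integer sample delays 0, 5, and -3.
rng = np.random.default_rng(0)
s = rng.standard_normal(4096)
x = [np.roll(s, d) for d in (0, 5, -3)]

def top_lags(a, b, k=3):
    """Lags of the k largest cross-correlation peaks (primary + secondary)."""
    c = np.correlate(a, b, mode="full")
    lags = np.arange(-len(b) + 1, len(a))
    return lags[np.argsort(c)[-k:]]

# Candidate delays for each pair around the closed loop 0 -> 1 -> 2 -> 0.
pairs = [(0, 1), (1, 2), (2, 0)]
cands = [top_lags(x[i], x[j]) for i, j in pairs]

# Stage 1: rank every delay combination by the zero-cyclic-sum cost
# |tau_01 + tau_12 + tau_20|; a physically consistent loop sums to zero.
best = min(product(*cands), key=lambda t: abs(sum(t)))
print(best)
```

Only the lowest-cost combinations from this stage would then be refined with the more expensive least-squares fit.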

15 pages, 4545 KiB  
Article
Maritime Emission Monitoring: Development and Testing of a UAV-Based Real-Time Wind Sensing Mission Planner Module
by Theodoros Karachalios, Panagiotis Moschos and Theofanis Orphanoudakis
Sensors 2024, 24(3), 950; https://doi.org/10.3390/s24030950 - 01 Feb 2024
Cited by 2 | Viewed by 656
Abstract
Maritime emissions contribute significantly to global pollution, necessitating accurate and efficient monitoring methods. Traditional methods for tracking ship emissions often face limitations in real-time data accuracy, with wind measurement being a critical yet challenging aspect. This paper introduces an innovative mission planner module for unmanned aerial vehicles (UAVs) that leverages onboard wind sensing capabilities to enhance maritime emission monitoring. The module’s primary objective is to assist operators in making informed decisions by providing real-time wind data overlays, thus optimizing flight paths and data collection efficiency. Our experimental setup involves the testing of the module in simulated maritime environments, demonstrating its efficacy in varying wind conditions. The real-time wind data overlays provided by the module enable UAV operators to adjust their flight paths dynamically, reducing unnecessary power expenditure and mitigating the risks associated with low-battery scenarios, especially in challenging maritime conditions. This paper presents the implementation of real-time wind data overlays on an open-source state-of-the-art mission planner as a C# plugin that is seamlessly integrated into the user interface. The factors that affect performance, in terms of communication overheads and real-time operation, are identified and discussed. The operation of the module is evaluated in terms of functional integration and real-time visual representation of wind measurements, and the enhanced situational awareness that it can offer to mission controllers is demonstrated. Beyond presenting a novel application of UAV technology in environmental monitoring, we also provide an extensive discussion of how this work will be extended in the context of complete aerial environmental inspection missions and the future directions in research within the field that can potentially lead to the modernization of maritime emission monitoring practices. 
Full article
(This article belongs to the Special Issue UAV Detection, Classification, and Tracking)
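The value of onboard wind sensing for path planning can be illustrated with the classic wind-triangle calculation (a generic sketch, not the plugin's code): for a given airspeed and a measured wind vector, the achievable ground speed, and hence the time and energy spent along a leg, varies strongly with heading.

```python
import math

def ground_speed(airspeed, heading_deg, wind_speed, wind_from_deg):
    """Ground-speed magnitude for a UAV holding `heading_deg` at `airspeed`,
    with wind of `wind_speed` blowing FROM `wind_from_deg` (speeds in m/s)."""
    h = math.radians(heading_deg)
    w = math.radians(wind_from_deg)
    # Wind blows toward wind_from_deg + 180 degrees.
    wx, wy = -wind_speed * math.sin(w), -wind_speed * math.cos(w)
    # Air-velocity components along the chosen heading (x = east, y = north).
    ax, ay = airspeed * math.sin(h), airspeed * math.cos(h)
    return math.hypot(ax + wx, ay + wy)

# Flying north at 15 m/s: a 5 m/s headwind vs. tailwind changes ground
# speed from 10 m/s to 20 m/s -- a 2x difference in leg time and battery use.
print(ground_speed(15, 0, 5, 0), ground_speed(15, 0, 5, 180))
```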

24 pages, 5359 KiB  
Article
Drone Detection and Tracking Using RF Identification Signals
by Driss Aouladhadj, Ettien Kpre, Virginie Deniau, Aymane Kharchouf, Christophe Gransart and Christophe Gaquière
Sensors 2023, 23(17), 7650; https://doi.org/10.3390/s23177650 - 04 Sep 2023
Cited by 2 | Viewed by 8913
Abstract
The market for unmanned aerial systems (UASs) has grown considerably worldwide, but their ability to transmit sensitive information poses a threat to public safety. To counter these threats, authorities and anti-drone organizations are ensuring that UASs comply with regulations, focusing on strategies to mitigate the risks associated with malicious drones. This study presents a technique for detecting drone models using identification (ID) tags in radio frequency (RF) signals, enabling the extraction of real-time telemetry data through the decoding of Drone ID packets. The system, implemented with a development board, facilitates efficient drone tracking. Results from a measurement campaign include maximum detection distances of 1.3 km for the Mavic Air, 1.5 km for the Mavic 3, and 3.7 km for the Mavic 2 Pro. The system accurately estimates a drone’s 2D position, altitude, and speed in real time. Thanks to the decoding of telemetry packets, the system demonstrates promising accuracy, with worst-case distances between estimated and actual drone positions of 35 m for the Mavic 2 Pro, 17 m for the Mavic Air, and 15 m for the Mavic 3. In addition, there is a relative error of 14% for altitude measurements and 7% for speed measurements. The reaction times calculated to secure a vulnerable site within a 200 m radius are 1.83 min (Mavic Air), 1.03 min (Mavic 3), and 2.92 min (Mavic 2 Pro). This system is proving effective in addressing emerging concerns about drone-related threats, helping to improve public safety and security. Full article
(This article belongs to the Special Issue UAV Detection, Classification, and Tracking)
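The reaction times quoted above are consistent with a simple distance-over-speed calculation. The sketch below reproduces them assuming straight-line flight at a maximum speed back-derived from the abstract's figures (the speeds are therefore illustrative, not values reported by the paper).

```python
PROTECTED_RADIUS_M = 200  # radius of the vulnerable site

# (maximum RF detection distance in m, assumed maximum drone speed in m/s)
drones = {
    "Mavic Air":   (1300, 10.0),
    "Mavic 3":     (1500, 21.0),
    "Mavic 2 Pro": (3700, 20.0),
}

def reaction_time_min(detect_m, speed_mps, radius_m=PROTECTED_RADIUS_M):
    """Minutes between first detection and the drone reaching the radius."""
    return (detect_m - radius_m) / speed_mps / 60

for name, (rng_m, v) in drones.items():
    print(f"{name}: {reaction_time_min(rng_m, v):.2f} min")
```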

20 pages, 2707 KiB  
Article
CUDM: A Combined UAV Detection Model Based on Video Abnormal Behavior
by Hao Cai, Zhiguang Song, Jianlong Xu, Zhi Xiong and Yuanquan Xie
Sensors 2022, 22(23), 9469; https://doi.org/10.3390/s22239469 - 04 Dec 2022
Cited by 2 | Viewed by 2681
Abstract
The widespread use of unmanned aerial vehicles (UAVs) has brought many benefits, particularly for military and civil applications. For example, UAVs can be used in communication, ecological surveys, agriculture, and logistics to improve efficiency and reduce the required workforce. However, the malicious use of UAVs can significantly endanger public safety and pose many challenges to society. Therefore, detecting malicious UAVs is an important and urgent issue that needs to be addressed. In this study, a combined UAV detection model (CUDM) based on analyzing video abnormal behavior is proposed. CUDM uses abnormal behavior detection models to improve the traditional object detection process. The work of CUDM can be divided into two stages. In the first stage, our model cuts the video into images and uses the abnormal behavior detection model to remove a large number of useless images, improving efficiency and enabling real-time detection of suspicious targets. In the second stage, CUDM works to identify whether the suspicious target is a UAV or not. Furthermore, CUDM relies only on ordinary equipment such as surveillance cameras, avoiding the use of expensive equipment such as radars. A custom-built UAV dataset was constructed to verify the reliability of CUDM. The results show that CUDM not only maintains the same accuracy as state-of-the-art object detection models but also reduces the workload by 32%. Moreover, it can detect malicious UAVs in real time. Full article
(This article belongs to the Special Issue UAV Detection, Classification, and Tracking)
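A minimal stand-in for CUDM's first stage (not the paper's actual abnormal-behavior model) is a frame-differencing gate: frames that barely change are discarded, and only frames with significant motion are forwarded to the object detector.

```python
import numpy as np

def motion_gate(frames, thresh=5.0):
    """Return indices of frames whose mean absolute difference from the
    previous frame exceeds `thresh` -- a crude stand-in for the paper's
    stage-one abnormal-behavior filter."""
    kept, prev = [], None
    for i, f in enumerate(frames):
        g = f.astype(np.float32)
        if prev is not None and np.abs(g - prev).mean() > thresh:
            kept.append(i)
        prev = g
    return kept

# Hypothetical clip: five identical frames, with an object entering frame 3.
rng = np.random.default_rng(1)
static = rng.integers(0, 255, (64, 64), dtype=np.uint8)
frames = [static.copy() for _ in range(5)]
frames[3][10:30, 10:30] = 255  # bright intruder visible in frame 3 only
print(motion_gate(frames))     # the frames where the scene changes
```

Only the gated frames would then be passed to the (much more expensive) UAV classifier, which is where the reported 32% workload reduction comes from in spirit.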

18 pages, 2148 KiB  
Article
A Lightweight and Accurate UAV Detection Method Based on YOLOv4
by Hao Cai, Yuanquan Xie, Jianlong Xu and Zhi Xiong
Sensors 2022, 22(18), 6874; https://doi.org/10.3390/s22186874 - 11 Sep 2022
Cited by 5 | Viewed by 2792
Abstract
At present, UAVs (unmanned aerial vehicles) are widely used in both civilian and military fields. Most of the current object detection algorithms used to detect UAVs require many parameters, and it is difficult for them to achieve real-time performance. To solve this problem while maintaining a high accuracy rate, we further lighten the model and reduce its number of parameters. This paper proposes an accurate and lightweight UAV detection model based on YOLOv4. To verify the effectiveness of this model, we constructed a UAV dataset containing four types of UAVs and 20,365 images. Through comparative experiments and optimization of existing deep learning and object detection algorithms, we found a lightweight model that achieves efficient, accurate, and rapid detection of UAVs. First, from a comparison of one-stage and two-stage methods, we conclude that one-stage methods offer better real-time performance and considerable accuracy in detecting UAVs. Then, we further compared the one-stage methods. In particular, for YOLOv4, we replaced its backbone network with MobileNet, modified the feature extraction network, and replaced standard convolutions with depth-wise separable convolutions, which greatly reduced the parameter count and achieved 82 FPS at 93.52% mAP, balancing real-time performance with high accuracy. Full article
(This article belongs to the Special Issue UAV Detection, Classification, and Tracking)
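The parameter savings from swapping standard convolutions for depth-wise separable ones can be checked with a short calculation (generic arithmetic, not tied to the paper's exact architecture):

```python
def conv_params(c_in, c_out, k):
    """Weights in a standard k x k convolution (bias and BN ignored)."""
    return c_in * c_out * k * k

def dw_separable_params(c_in, c_out, k):
    """Depth-wise k x k convolution followed by a 1 x 1 point-wise convolution."""
    return c_in * k * k + c_in * c_out

c_in, c_out, k = 256, 512, 3                # a typical mid-network layer
std = conv_params(c_in, c_out, k)           # 1,179,648 weights
sep = dw_separable_params(c_in, c_out, k)   # 133,376 weights
print(f"{std / sep:.1f}x reduction")
```

For a 3×3 layer with 256 input and 512 output channels, the separable variant needs roughly 8.8× fewer parameters, which is why this substitution is a standard lightweighting move.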

13 pages, 4275 KiB  
Article
High-Resolution Drone Detection Based on Background Difference and SAG-YOLOv5s
by Yaowen Lv, Zhiqing Ai, Manfei Chen, Xuanrui Gong, Yuxuan Wang and Zhenghai Lu
Sensors 2022, 22(15), 5825; https://doi.org/10.3390/s22155825 - 04 Aug 2022
Cited by 14 | Viewed by 2417
Abstract
To solve the problem of low accuracy and slow speed of drone detection in high-resolution images from fixed cameras, we propose a detection method combining background difference and the lightweight network SAG-YOLOv5s. First, background difference is used to extract potential drone targets in high-resolution images, eliminating most of the background to reduce computational overhead. Second, the Ghost module and the SimAM attention mechanism are introduced on the basis of YOLOv5s to reduce the total number of model parameters and improve feature extraction, and the α-DIoU loss replaces the original DIoU loss to improve the accuracy of bounding box regression. Finally, to verify the effectiveness of our method, a high-resolution drone dataset is built from a public dataset. Experimental results show that the detection accuracy of the proposed method reaches 97.6%, 24.3 percentage points higher than that of YOLOv5s, and the detection speed on 4K video reaches 13.2 FPS, which meets practical demands and is significantly better than similar algorithms. The method achieves a good balance between detection accuracy and detection speed and provides a benchmark for high-resolution drone detection with a fixed camera. Full article
(This article belongs to the Special Issue UAV Detection, Classification, and Tracking)
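A minimal sketch of the background-difference front end (generic, with an illustrative threshold rather than the paper's settings): pixels deviating from a background model beyond a threshold define a candidate region of interest, which would then be cropped and passed to the SAG-YOLOv5s detector instead of the full 4K frame.

```python
import numpy as np

def background_difference(frame, background, thresh=25):
    """Bounding box (x1, y1, x2, y2) of pixels deviating from the
    background model by more than `thresh`, or None if nothing moved."""
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    ys, xs = np.nonzero(diff > thresh)
    if ys.size == 0:
        return None
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())

# Hypothetical 100x100 scene: empty background, one bright 10x10 target.
background = np.zeros((100, 100), dtype=np.uint8)
frame = background.copy()
frame[40:50, 60:70] = 255
print(background_difference(frame, background))
```

In practice the background would be updated over time, e.g. `background = 0.95 * background + 0.05 * frame`, so slow illumination changes are absorbed while fast-moving drones still stand out.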

26 pages, 733 KiB  
Article
Audio-Based Drone Detection and Identification Using Deep Learning Techniques with Dataset Enhancement through Generative Adversarial Networks
by Sara Al-Emadi, Abdulla Al-Ali and Abdulaziz Al-Ali
Sensors 2021, 21(15), 4953; https://doi.org/10.3390/s21154953 - 21 Jul 2021
Cited by 40 | Viewed by 7911
Abstract
Drones are becoming increasingly popular not only for recreational purposes but also in day-to-day applications in engineering, medicine, logistics, security, and other fields. Alongside their useful applications, alarming concerns regarding physical infrastructure security, safety, and privacy have arisen due to their potential use in malicious activities. To address this problem, we propose a novel solution that automates the drone detection and identification processes using a drone’s acoustic features with different deep learning algorithms. However, the lack of acoustic drone datasets hinders the ability to implement an effective solution. In this paper, we aim to fill this gap by introducing a hybrid drone acoustic dataset composed of recorded drone audio clips and artificially generated drone audio samples produced with a state-of-the-art deep learning technique, the Generative Adversarial Network. Furthermore, we examine the effectiveness of using drone audio with different deep learning algorithms, namely the Convolutional Neural Network, the Recurrent Neural Network, and the Convolutional Recurrent Neural Network, in drone detection and identification. Moreover, we investigate the impact of our proposed hybrid dataset on drone detection. Our findings demonstrate the advantage of using deep learning techniques for drone detection and identification while confirming our hypothesis on the benefits of using Generative Adversarial Networks to generate realistic drone audio clips with the aim of enhancing the detection of new and unfamiliar drones. Full article
(This article belongs to the Special Issue UAV Detection, Classification, and Tracking)
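Audio-based detectors like those compared above typically operate on time-frequency features rather than raw waveforms. The following NumPy-only sketch (a generic log-magnitude spectrogram, not the paper's exact feature pipeline) shows the kind of 2D input such a CNN would consume.

```python
import numpy as np

def log_spectrogram(x, n_fft=256, hop=128):
    """Log-magnitude STFT: frame the signal, apply a Hann window,
    and FFT each frame. Returns shape (n_frames, n_fft // 2 + 1)."""
    n_frames = 1 + (len(x) - n_fft) // hop
    win = np.hanning(n_fft)
    frames = np.stack([x[i * hop : i * hop + n_fft] * win
                       for i in range(n_frames)])
    return np.log1p(np.abs(np.fft.rfft(frames, axis=1)))

# Hypothetical 1 s clip at 16 kHz with a dominant 1 kHz tone
# (a stand-in for a rotor harmonic).
fs = 16_000
t = np.arange(fs) / fs
spec = log_spectrogram(np.sin(2 * np.pi * 1000 * t))
print(spec.shape)  # (time frames, frequency bins) -- the CNN's 2D input
```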

27 pages, 61789 KiB  
Article
Drone vs. Bird Detection: Deep Learning Algorithms and Results from a Grand Challenge
by Angelo Coluccia, Alessio Fascista, Arne Schumann, Lars Sommer, Anastasios Dimou, Dimitrios Zarpalas, Miguel Méndez, David de la Iglesia, Iago González, Jean-Philippe Mercier, Guillaume Gagné, Arka Mitra and Shobha Rajashekar
Sensors 2021, 21(8), 2824; https://doi.org/10.3390/s21082824 - 16 Apr 2021
Cited by 58 | Viewed by 12860
Abstract
Adopting effective techniques to automatically detect and identify small drones is a very compelling need for a number of different stakeholders in both the public and private sectors. This work presents three different original approaches that competed in a grand challenge on the “Drone vs. Bird” detection problem. The goal is to detect one or more drones appearing at some time point in video sequences where birds and other distractor objects may also be present, together with background or foreground motion. Algorithms should raise an alarm and provide a position estimate only when a drone is present, while not issuing alarms on birds, nor being confused by the rest of the scene. In particular, three original approaches based on different deep learning strategies are proposed and compared on a real-world dataset provided by a consortium of universities and research centers, under the 2020 edition of the Drone vs. Bird Detection Challenge. Results show that there is a range in difficulty among different test sequences, depending on the size and the shape visibility of the drone in the sequence, with sequences recorded by a moving camera and very distant drones being the most challenging ones. The performance comparison reveals that the different approaches are somewhat complementary in terms of correct detection rate, false alarm rate, and average precision. Full article
(This article belongs to the Special Issue UAV Detection, Classification, and Tracking)
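Challenge-style evaluation of detections against annotated drones usually reduces to IoU matching. The sketch below (a generic illustration, not the challenge's official scoring code) counts correct detections and false alarms for one frame.

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def frame_outcomes(preds, gts, thr=0.5):
    """Greedily match predictions to ground-truth drones: a ground truth
    covered at IoU >= thr is a correct detection, and every unmatched
    prediction (e.g. a box fired on a bird) is a false alarm."""
    unmatched, tp = list(preds), 0
    for gt in gts:
        best = max(unmatched, key=lambda p: iou(p, gt), default=None)
        if best is not None and iou(best, gt) >= thr:
            tp += 1
            unmatched.remove(best)
    return tp, len(unmatched)

# One drone, two detections: a good hit plus a spurious box on a bird.
print(frame_outcomes([(0, 0, 10, 10), (100, 100, 110, 110)], [(1, 1, 10, 10)]))
```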
