In this section, we summarize the different detection technologies.
Section 2.1,
Section 2.2,
Section 2.3,
Section 2.4,
Section 2.5 and
Section 2.6 describe solutions in the literature and some of the commercial solutions (if available). In the case of
Section 2.6, it is important to note that it focuses on the use of fusion approaches making use of different sensing technologies. Therefore, quite often, a commercial solution will be described in several of the following sections, once per sensor type, and again when talking about integrated sensing and fusion C-UAS systems. Finally,
Section 2.7 includes a comparative summary of the requirements, expected performance, and limitations of these technologies.
2.1. Active Detection Radars
Radars have several advantages in detecting aircraft compared with other sensors in terms of weather independence, day and night operation capability, technology maturity, and capacity to measure range and velocity simultaneously. A big challenge with UAVs is that they have very small radar cross sections (RCS), and they fly at lower altitudes and lower speeds compared to larger aircraft [
4]. Regular radar systems typically aim to detect air targets of medium and large size (RCS larger than 1 m
2). In addition, due to their low speed, Doppler processing (Moving Target Indication/Detection) is not as effective. In the literature [
5], there are several types of radar used for detection, tracking and classification of drones, such as mmWave Radar or Ultrawide-Band Radar, which can be classified in two main categories: active detection and passive detection radars. In this section, we focus on active detection radars, while the next section describes passive radars.
Conventionally, there are two possible ways to increase the distance and azimuth resolution of active radar detection systems in the case of UAVs operations: using higher frequency carriers or utilizing multiple input multiple output (MIMO) beamforming antennas.
To use a shorter wavelength, K-band, X-band and W-band frequency modulated continuous wave (FMCW) radars are specifically designed for UAV detection. The carrier frequency selected for a UAV detection radar should be higher than 6 GHz, as in [
6], where the ability of radars to detect small, slow, and low-flying targets is verified. There are two important factors to be considered when using radars to detect airborne threats: the target to be detected and the radar itself. When a radar is used to detect small and slow targets, the limiting factor is the RCS, so in that work, “mini-UAVs” were treated as a Medium of Airborne Attack (MoAA), and it was concluded that radars working in the K-band are the ones that best detect “mini-UAVs” due to their dimensions and radar cross sections. These radars offer optimal accuracy for measuring the coordinates of the detected targets and small antenna dimensions.
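The link between sweep bandwidth and resolution in FMCW radars can be made concrete. As a minimal sketch (the bandwidth and sweep time below are assumed example values for illustration, not parameters of the radars cited above), the basic relations are:

```python
# Illustrative FMCW relations: range resolution delta_R = c / (2B) and the
# de-chirped beat frequency f_b = 2*R*B / (c*T). The numbers used are
# assumptions for illustration, not specifications of the systems in [6].

C = 3.0e8  # speed of light, m/s

def range_resolution(bandwidth_hz):
    """Smallest range separation resolvable with sweep bandwidth B."""
    return C / (2.0 * bandwidth_hz)

def beat_frequency(target_range_m, bandwidth_hz, sweep_time_s):
    """Beat frequency produced by a target at the given range."""
    return 2.0 * target_range_m * bandwidth_hz / (C * sweep_time_s)

# Assumed example: a 150 MHz sweep over 1 ms.
print(range_resolution(150e6))             # 1.0 m resolution
print(beat_frequency(500.0, 150e6, 1e-3))  # 500 kHz beat for a 500 m target
```

Shorter wavelengths do not appear in these two relations directly; their benefit is in angular resolution and antenna size, which is why higher carriers are paired with the bandwidths that set the range resolution above.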
Other approaches use multiple antennas following a MIMO approach. The advantage of this approach is in its applicability to a radar system with lower carrier frequencies, as in [
7,
A Holographic Radar™ (HR) with a 2D antenna array and appropriate signal processing is used in [
7]. This signal processing can create a multibeam, 3D, wide-area, staring surveillance sensor, which is able to achieve high detection sensitivity and provide fine Doppler resolution, with update rates of fractions of a second. The ability to remain continuously on targets throughout the entire search volume enables the detection of small targets, such as UAVs, against a moving background. The system uses a 32-by-8 element L-band receiver array. As the radar has a high detection sensitivity, it can detect small drones and other small moving targets such as birds. Thus, a processing stage is necessary to discriminate the UAV from other objects. In this case, a machine learning decision tree classifier is used to reject small objects while maintaining a high probability of detection for the drone. A similar study is presented in [
8], where a ubiquitous frequency modulated continuous wave (FMCW) radar system working at 8.75 GHz (X-band) with a PC-based signal processor can detect a micro-UAV at a range of 2 km with an excellent range–speed resolution.
The advances in computation enable another type of radar described in the literature for this application, the software-defined radar (SDR) [
9]. This radar is a multiband, multimode, software-defined radar that consists of a hardware-based platform and a software-based platform. It is multiband because the module allows the selection of S-, X- and K-bands, while it is multimode because of the capacity of selecting CW, Pulse, FMCW and LFM Chirp waveforms. The detection results show that detection on the SDR platform is successfully performed in real-time operation, so it can be used for air safety applications by detecting and warning of threat UAVs.
An example of mmWave Radar with a precise detection and 3D localization system for drones can be observed in [
10]. The positions of drones are estimated from spatial heatmaps of the received radar signals, obtained by applying a super-resolution algorithm. These positions are refined by analyzing the micro-Doppler effect generated by the rotating propellers. This work also presents a novel Gaussian Process Regression model to compensate for systematic biases in the radar data.
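The micro-Doppler signature exploited above is bounded by the blade-tip velocity. A hedged sketch, with rotor speed, blade radius and carrier frequency chosen as illustrative assumptions rather than values from [10]:

```python
# Maximum micro-Doppler shift of a rotating blade tip: f_d = 2 * v_tip / lambda,
# with v_tip = 2*pi*(rpm/60)*R. Rotor speed, blade radius and carrier
# frequency below are assumed, illustrative values.
import math

C = 3.0e8  # speed of light, m/s

def blade_tip_speed(rpm, blade_radius_m):
    """Linear speed of the blade tip in m/s."""
    return 2.0 * math.pi * (rpm / 60.0) * blade_radius_m

def max_micro_doppler(rpm, blade_radius_m, carrier_hz):
    """Largest Doppler shift contributed by the blade tip."""
    wavelength = C / carrier_hz
    return 2.0 * blade_tip_speed(rpm, blade_radius_m) / wavelength

# Assumed quadcopter rotor: 6000 rpm, 12 cm blades, 77 GHz mmWave carrier.
print(blade_tip_speed(6000, 0.12))          # ~75.4 m/s tip speed
print(max_micro_doppler(6000, 0.12, 77e9))  # tens of kHz of spectral spread
```

The spread of tens of kHz around the body return at mmWave carriers is what makes the propeller modulation separable from the bulk Doppler of the airframe.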
Finally, another way of detecting UAVs using radars is by means of the Multistatic Forward Scattering Radar (FSR) [
11]. The most important principle of an FSR for target detection is the use of the shadow field. When the diffraction angle, i.e., the angle between the transmitter–target direction and the target–receiver direction, is approximately zero, the shadow field can be observed at the receiving point. In these narrow regions where the diffraction angle is approximately zero, the forward-scatter radar cross section increases considerably compared to the monostatic cross section, which occurs only when the size of the target is larger than the wavelength. In the multistatic configuration of an FSR, a certain number of transmitting and receiving positions in the air and receiving positions on the ground must be used. The altitude of the targets must always be equal to or less than the altitude of the airborne transmitter positions, which could be placed onboard UAVs or any other type of aircraft. This kind of airborne sensor network is described here for completeness, but we do not model it in the second part of the paper.
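The RCS enhancement that motivates forward scatter can be quantified with the classical approximation σ_fs ≈ 4πA²/λ², valid when the target silhouette is larger than the wavelength. The silhouette area and carrier below are assumptions for illustration, not values from [11]:

```python
# Classical forward-scatter RCS approximation: sigma_fs = 4*pi*A^2 / lambda^2,
# valid when the target is larger than the wavelength. The silhouette area
# and wavelength are assumed example values.
import math

def forward_scatter_rcs(silhouette_area_m2, wavelength_m):
    """Forward-scatter RCS of a target with the given silhouette area."""
    return 4.0 * math.pi * silhouette_area_m2 ** 2 / wavelength_m ** 2

# Assumed small UAV: 0.05 m^2 silhouette, illuminated at 3 GHz (0.1 m wavelength).
sigma_fs = forward_scatter_rcs(0.05, 0.1)
print(sigma_fs)  # ~3.14 m^2, well above typical monostatic drone RCS
```

Even a palm-sized silhouette thus yields a forward-scatter cross section orders of magnitude above the sub-0.01 m² monostatic RCS quoted for small drones, which is the attraction of the geometry.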
To summarize, the main disadvantage of active radars is the need for specially designed transmitters, which can be difficult to deploy.
Next, we detail some commercial active radars. The ART Midrange 3D [
12] is a high-resolution C-UAS FMCW surveillance radar. This high-performance sensor is specifically designed to detect small unmanned aerial vehicles and for use in unmanned aircraft traffic management (UTM). The radar is composed of a 3D multibeam antenna system and a high-power amplification stage and is capable of detecting, tracking, and classifying micro quadcopters and micro fixed-wing UAVs, with extended elevation coverage. The main specifications of this solution can be seen in
Table 1.
Another commercial solution, provided by Indra [
13], called ARMS, includes another FMCW radar. Its main characteristics are detailed next, in
Table 2.
German company HENSOLDT has developed a drone detection system called Xpeller Counter UAV solution [
14]. This solution can detect the potential threat through a radar system whose specifications can be seen in
Table 3 (two different radar systems may be integrated).
Meanwhile, Echodyne [
15] has developed an alternative active radar solution based on an Electronic Scan Antenna capable of simultaneously tracking (with very high detection rate) and searching for additional targets in its coverage. Its specifications are detailed in
Table 4.
An alternative solution is the Ranger R8SS-3D from Flir [
16], whose specifications can be seen in
Table 5.
The company RST has another radar solution to detect UAVs, called Doruk: UAV detection radar [
17]. Its basic function is low-altitude moving-target detection over land and sea. It provides detection, classification, azimuth and range measurements, RCS, radial velocity, heading, and Doppler frequency spectrum width of targets. Its main specifications can be seen in
Table 6.
2.2. Passive Detection Radars
Passive radars do not require a specially designed transmitter. There are two types of passive radar: the single-station passive radar, which exploits only one illumination source, and the distributed passive radar, which uses existing telecommunications infrastructures as illumination sources to enhance UAV detection. Typically, two widespread signal types are used: cellular systems and digital video broadcasting systems.
Passive bistatic radars (PBR) have a challenging problem in the detection of UAVs due to their low RCS [
18]. Range migration (RM) occurs in the coherent processing interval, which makes it difficult to increase coherent integration gain and improve radar detection ability, although there are techniques to alleviate this problem. An example of single-station passive radar is the investigation presented in [
19], where it is possible to localize small UAVs in 3D by exploiting a passive radar based on Wi-Fi transmissions. The authors demonstrate the capability of the radar to estimate the position of the target from the ground by exploiting multiple surveillance antennas.
In the case of distributed passive radar, a possible approach is the one proposed in [
20], where the detection system uses reflected global system for mobile communications (GSM) signals to locate and track UAVs. Another example of distributed passive radar is the one presented in [
21], where a fixed-wing micro-UAV using passive radar based on digital technology is detected using audio broadcasting signals up to a distance of 1.2 km. The experiment was achieved at a lower frequency of 189 MHz in the VHF band.
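A common denominator of these passive systems, whether the illuminator is GSM, audio broadcasting or Wi-Fi, is that detection reduces to cross-correlating a surveillance channel against the reference (direct-path) signal; the lag of the correlation peak gives the bistatic delay of the echo. A minimal sketch with a synthetic waveform (the signal model and delay are assumptions for illustration):

```python
# Toy passive-radar matched filtering: correlate the surveillance channel
# against the reference channel; the lag of the correlation peak estimates
# the bistatic delay of the target echo. Waveform and delay are synthetic.
import random

def cross_correlate(ref, surv):
    """Correlation of the reference against the surveillance signal, per lag."""
    n = len(ref)
    return [sum(ref[i] * surv[lag + i] for i in range(n))
            for lag in range(len(surv) - n + 1)]

random.seed(0)
ref = [random.gauss(0.0, 1.0) for _ in range(256)]  # illuminator waveform
true_delay = 37                                     # echo delay in samples
surv = [0.0] * true_delay + [0.3 * x for x in ref] + [0.0] * 32  # weak echo

corr = cross_correlate(ref, surv)
est_delay = max(range(len(corr)), key=lambda k: corr[k])
print(est_delay)  # recovers the bistatic delay, 37 samples
```

In a real receiver the direct-path signal also leaks into the surveillance channel and must be cancelled before this correlation, which is part of the postprocessing burden noted below.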
The major disadvantage of passive radar is that a large amount of postprocessing effort or multiple receivers are required to obtain acceptable detection accuracy.
2.3. Detection through UAS Radio Frequency Signals
UAVs usually have at least one RF communication data link to their remote controller to either receive control commands (typically at 2.4 GHz) or deliver aerial images. In this case, the spectral patterns of such transmission are used for the detection and localization of UAVs. In most cases, software-defined radio receivers are employed to intercept the RF channels.
To utilize the spectrum patterns of UAVs, three possible approaches are considered for drone detection in [
22]. One of them is based on sniffing the communication between the drone and its controller. Another approach is the one explained in [
23], where the frequency hopping spread spectrum signals from a UAV are extracted. According to these articles, it is possible to train a classifier for identifying unique RF transmission patterns from UAVs.
Data traffic patterns are also an important feature to classify and identify UAVs. In [
24], a UAV detection and identification system using two receiver units to record the received signal strength from the UAV was proposed. The system makes use of a novel machine learning-based approach for efficient identification and detection of UAVs. It consists of four classifiers working in a hierarchical way. The first classifier checks whether the sample corresponds to a UAV, while the second classifier specifies the type of the detected UAV. The third and fourth classifiers handle specific vendors’ drone types. The system detects UAVs flying within the area, and it can classify the detected UAV and its flight mode with an accuracy of around 99%.
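The hierarchical arrangement of classifiers described above can be sketched as a simple cascade. The feature names and decision rules below are hypothetical placeholders, not the trained models of [24]:

```python
# Cascade sketch: each stage answers one question; a stage returning None
# rejects the sample and stops the cascade. Features and thresholds are
# hypothetical stand-ins for the learned classifiers of the cited system.

def classify_hierarchical(sample, stages):
    """Run a feature dict through ordered (name, predicate) stages and
    collect each stage's label until one rejects the sample."""
    labels = []
    for name, predicate in stages:
        label = predicate(sample)
        if label is None:  # rejection: stop descending the hierarchy
            break
        labels.append((name, label))
    return labels

# Hypothetical two-stage hierarchy: "is there a UAV?" then "which type?"
stages = [
    ("is_uav", lambda s: True if s["rss_variance"] > 0.5 else None),
    ("uav_type", lambda s: "quadcopter" if s["hop_rate_hz"] > 100 else "fixed-wing"),
]

print(classify_hierarchical({"rss_variance": 0.9, "hop_rate_hz": 250}, stages))
print(classify_hierarchical({"rss_variance": 0.1, "hop_rate_hz": 250}, stages))
```

The cascade structure keeps the cheap presence test at the front, so most non-UAV samples never reach the more expensive type- and vendor-specific stages.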
Another UAV detection and identification approach is based on Wi-Fi signal and radio fingerprint, as presented in [
25]. Firstly, the system detects the presence of a UAV; then, features are extracted from the RF signal using machine learning and Principal Component Analysis-derived techniques to obtain RF fingerprints. The extracted UAV fingerprints are stored and used as training and test data. The accuracy of this approach is above 95% in indoor scenarios and above 93% in outdoor scenarios.
Real scenarios are not controlled, so it is not easy to pick up the RF signals, as there is interference in the environment. The following two studies carried out their experiments with interference in the radio frequency band. The method proposed in [
26] relies on machine learning-based RF recognition and considers that the bandwidths of the video signal and Wi-Fi are identical. The process consists of extracting 31 features from the Wi-Fi signal and the UAV video signal and then feeding them to the classifier. It is demonstrated that the proposed method can accurately recognize the UAV video signal in the presence of Wi-Fi interference, with a recognition rate greater than 95% in a 2 km outdoor experiment. On the other hand, a radio frequency-based drone detection and identification system under wireless interference (Wi-Fi and Bluetooth), using machine learning algorithms and a pretrained convolutional neural network-based algorithm called SqueezeNet as classifiers, is explained in [
27]. Different categories of wavelet transforms are used to extract features from the signals, and different models have been built from these extracted features. The experiment consisted of studying the performance of these models under different signal-to-noise ratio levels. The results show a correct detection accuracy of 98.9% at a 10 dB signal-to-noise ratio.
Next, we detail some commercial RF detection systems. DJI has created a system to detect their own drones. AeroScope [
28] can identify them by monitoring and analyzing their electronic signal to gain critical information such as flight status, paths, and other information in real time. There are two types of AeroScope systems: stationary (designed for continuous protection of large-scale sites, up to 50 km range) and portable (designed for temporary events and mobile deployments, up to 5 km range).
Dedrone provides a complete airspace security system [
29], including RF sensors, able to detect and localize drones by their RF signals. There are two types of these sensors: the DedroneSensor RF-160 forms the basis of the sensor network and is used in initial risk analysis, whereas the DedroneSensor RF-360 can locate and track drones. The main characteristics of these sensors can be seen in
Table 7.
Finally, DroneShield provides the DroneSentry-X product [
30], which is a portable device that is compatible with vehicles. It provides 360° awareness and protection using integrated sensors to detect and disrupt UAVs moving at any speed. It has a nominal UAV detection range greater than 2 km, and it detects UAV RF signals, operating on consumer and commercial industrial, scientific, and medical (ISM) frequencies.
2.4. Detection by Acoustic Signals
An array of acoustic sensors can be employed to capture sound and to detect and estimate the direction of arrival of sources such as UAVs. These arrays are deployed around the restricted areas, periodically record the audio signal, and deliver it to the ground stations. The ground stations extract the features of this audio signal to determine the direction of arrival of the UAV.
Conventionally, once the audio signal of UAV is received, the power or frequency spectrum is analyzed to identify the UAV. An example implementation of this type of UAV detection is explained in [
31]. This paper shows how to estimate and track the location of a target by triangulation with two or more microphone arrays, and how the UAV model can be obtained by measuring the sound spectrum of the target. In this work, a small tetrahedral array of microphones was used. The results show that the detection algorithm performs well, with a 99.5% probability of detection and a 3% false alarm rate. On the other hand, the tracking algorithm often misses trajectories when other trajectories are present, and the elevation tracking is poor.
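The direction-of-arrival estimate that feeds such triangulation follows, for a single microphone pair, from the inter-microphone time delay: θ = asin(cτ/d) for a far-field source. A minimal sketch with illustrative spacing and delay values (assumptions, not parameters of [31]):

```python
# Far-field direction of arrival from a two-microphone time delay:
# theta = asin(c * tau / d). Spacing and delay are illustrative values.
import math

SPEED_OF_SOUND = 343.0  # m/s in air at ~20 C

def doa_from_delay(delay_s, spacing_m):
    """Direction of arrival, in degrees from broadside, for a far-field source."""
    s = SPEED_OF_SOUND * delay_s / spacing_m
    s = max(-1.0, min(1.0, s))  # clamp against measurement noise
    return math.degrees(math.asin(s))

# Assumed pair spaced 0.5 m apart, measured delay of 0.5 ms.
print(doa_from_delay(5e-4, 0.5))  # ~20 degrees from broadside
```

With two or more arrays each producing such bearings, the target position is the intersection of the bearing lines, which is the triangulation step described above.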
Another example of UAV detection using acoustic signals is shown in [
32]. In this work, the data collection equipment is composed of two individual microphone arrays in 16-X and 4-L configurations, where some microphones are placed on the ground and mounted on metal spikes, while the elevated sensors are placed on tripods. These microphones are covered by six-inch-thick foam shields to protect them and limit the effects of wind. Once the signals have been captured by the arrays, they must be processed and analyzed. The data processing, as well as the analysis of the acoustic sensor arrays, was tested by detecting and tracking the trajectory of UAVs at low altitude and tactical distances. This process operates best under benign daytime conditions and is approximately five times better at detecting noisier, medium-sized, gasoline-powered UAVs than small, electric-powered UAVs.
In the literature, there are some machine learning (ML) approaches to classify UAVs from audio data. A support vector machine (SVM) is implemented to analyze the signal of a UAV engine and to build the signal fingerprint of the UAV. The results show that the classifier can precisely distinguish the UAVs in some scenarios [
33]. Another example of using deep learning methods to detect UAVs with acoustic signals is shown in [
34]. In this paper, there is a comparison among Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs) and Convolutional Recurrent Neural Networks (CRNNs) using mel-spectrogram features. The CNNs show the best performance, achieving the highest average accuracy of 94.7%. In summary, machine learning presents an ability to recognize and locate UAVs. However, the nature of acoustic approaches limits the deployment and detection range.
In [
35], a detailed study was conducted on how drone detection is performed by using acoustic signals, and it characterized how the microphone array in charge of capturing the sound signal should be organized. The geometry of the microphone array depends on the application to be carried out, although, when the desired signal can come from any angle, the best geometry is the circular array. The possible geometries studied were uniform linear array (ULA), uniform circular array (UCA) and uniform rectangular array (URA). In the array, it is important to know the number of microphones, which usually ranges from 4 to 16 microphones (in steps of two), and the distance between sensors, which usually ranges from 0.3 to 0.6 meters in increments of 0.05 meters.
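The circular geometry recommended above is easy to generate programmatically. A small sketch following the counts and inter-sensor spacings quoted in the text (the particular values chosen are illustrative):

```python
# Positions of a uniform circular array (UCA): n microphones on a circle
# whose adjacent-microphone chord length equals the desired spacing.
# Counts and spacing follow the ranges quoted from [35]; exact values
# are illustrative.
import math

def uca_positions(n_mics, spacing_m):
    """(x, y) positions of n_mics with adjacent-microphone distance spacing_m."""
    # chord = 2 * R * sin(pi / n)  =>  R = spacing / (2 * sin(pi / n))
    radius = spacing_m / (2.0 * math.sin(math.pi / n_mics))
    return [(radius * math.cos(2 * math.pi * k / n_mics),
             radius * math.sin(2 * math.pi * k / n_mics))
            for k in range(n_mics)]

# Assumed mid-range configuration: 8 microphones, 0.4 m between neighbours.
pts = uca_positions(8, 0.4)
print(len(pts))
```

Solving for the radius from the chord length, rather than fixing the radius directly, keeps the inter-sensor distance (the quantity the study parameterizes, 0.3 to 0.6 m) constant as the microphone count changes.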
A commercial C-UAS solution from Dedrone enterprise is Dedrone DroneTracker [
36], which is a multiple-sensor unit that may integrate an ultrasonic audio detector. Its specifications are shown in
Table 8.
2.5. Detection through Video/Images
Vision-based UAV detection techniques mainly focus on image processing. Cameras are used to capture images of UAVs. Then, using artificial vision techniques, UAV positions are estimated.
A vision-based UAV detection approach is presented in [
37]. This approach consists of an online recognition system for the identification of 3D objects. The system uses a black-and-white television camera to provide a 2D image to a digital computer. After obtaining the image on the computer, the next step is to remove the clutter from the image by means of a preprocessing stage that provides a clean silhouette as well as its boundaries. From the silhouette, certain characteristics are computed and used by a recognition algorithm to identify the objects, the position they occupy and their orientation in space. A similar system is the one developed in [
38], which makes use of classical vision algorithms. This system starts by taking the first image, which is used for initialization of the background estimation. Then a loop is started where the trajectories are predicted at the capture time of each new image taken by the cameras. All those pixels that differ from the previously estimated background are detected and form one or more blobs related to the current targets. These blobs are extracted using trajectory predictions, edge detectors and motion detectors. Through an association process, one or more blobs are associated to each target, and the blobs within the association are used to initialize the tracks. Finally, each track is updated with its corresponding blobs, and tracks that are not updated are deleted.
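The background-subtraction and blob-extraction core of such classical pipelines can be sketched compactly. Toy 2D lists stand in for camera frames, and the threshold is an illustrative assumption:

```python
# Toy background subtraction: difference the frame against a static
# background estimate, threshold, and group changed pixels into blobs
# (4-connected components), returning one bounding box per blob.

def detect_blobs(background, frame, threshold=10):
    """Bounding boxes (r0, c0, r1, c1) of connected changed-pixel regions."""
    rows, cols = len(frame), len(frame[0])
    fg = [[abs(frame[r][c] - background[r][c]) > threshold for c in range(cols)]
          for r in range(rows)]
    seen = [[False] * cols for _ in range(rows)]
    blobs = []
    for r in range(rows):
        for c in range(cols):
            if fg[r][c] and not seen[r][c]:
                # flood-fill one connected component
                stack, box = [(r, c)], [r, c, r, c]
                seen[r][c] = True
                while stack:
                    y, x = stack.pop()
                    box = [min(box[0], y), min(box[1], x),
                           max(box[2], y), max(box[3], x)]
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < rows and 0 <= nx < cols \
                                and fg[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                blobs.append(tuple(box))
    return blobs

bg = [[0] * 8 for _ in range(6)]
frame = [row[:] for row in bg]
for r, c in ((2, 3), (2, 4), (3, 3)):  # a small bright moving target
    frame[r][c] = 200
print(detect_blobs(bg, frame))  # [(2, 3, 3, 4)]
```

In the full pipeline each returned box would then be matched to a predicted track position during the association step, rather than treated as a detection in isolation.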
In contrast, nonconventional segmentation methods make use of neural networks to directly identify the appearance of UAVs. For example, in [
39], the authors developed a system that is capable of automatically detecting, recognizing, and tracking a UAV using a single camera. For that purpose, a single Pan–Tilt–Zoom (PTZ) camera detects flying objects and obtains their tracks; once a track is identified as a UAV, it locks the PTZ control system to capture a detailed image of the target region. Afterward, the images can be classified into the UAV and interference classes (such as birds) by a convolutional neural network classifier trained with an image dataset. The identification accuracy of track and image reaches 99.50% and 99.89%, respectively. This system could be applied in a complex environment where many birds and UAVs appear simultaneously.
It is possible to detect UAVs from the cameras of other UAVs. An approach for online detection of small UAVs and estimation of their positions and velocities in a 3D environment from a single moving (on-board) camera is presented in [
40]. The methods used are computationally light, despite the complexity of computer vision algorithms, so they may be used on UAVs with limited payload. This approach incorporates fast object detection using an AdaBoost-based tracking algorithm. Real-time performance with accurate object detection and tracking is possible, enabling the tracker to extract the position and size of an aircraft from a video frame. The detections are given to a multitarget tracker to estimate the aircraft’s position and velocity in 3D. The effectiveness of this method has been proven with an indoor experiment with three quadrotors. In [
41], a general architecture for a highly accurate and computationally efficient UAV-to-UAV detection and tracking algorithm from a camera mounted on a moving UAV platform was developed. The system is composed of a moving target detector followed by a target tracker. The moving target detector accurately subtracts the background from subsequent frames by using a sparsely estimated global perspective transform. The target tracker consists of a Kalman tracker and was validated in real time using public video data from multiple fixed-wing UAVs. Video surveillance has not yet been incorporated into our simulation models but is described here for completeness.
Next, we describe two commercial PTZ cameras used for drone detection and tracking. On the one hand, there is Axis Q6215-LE PTZ Network Camera from Axis Communications [
42], which is a camera with normal range. Its specifications can be seen in
Table 9.
On the other hand, there is Triton PT-Series HD Camera from FLIR Enterprise [
43], which is a PTZ with very high range, whose specifications are detailed in
Table 10.
Indra also has a camera/optronic sensor to be integrated in its ARMS system. Some details on it are described next, in
Table 11.
Another company that markets this type of sensor is HGH USA, specifically with its product called Spynel Series [
44]. Spynel is based on thermal imaging technology with a 360° thermal sensor, which works day and night. Spynel can track targets over a long range and wide area. The specifications of each sensor model that exists in this product series can be seen in
Table 12.
2.6. Detection by Data Fusion
Detection using a combination of these techniques is the ultimate way to detect UAVs. Data fusion, the process of integrating multiple data sources to obtain more consistent, accurate and useful information than that provided by any of the individual techniques explained above, has the advantage of yielding fused data that are more informative than the original inputs. In the case of UAV detection, data fusion can be used to improve the performance of the UAV detection system by overcoming or alleviating the problems and disadvantages of the individual sensors.
However, data fusion should be conducted with great caution. The key problems to be solved can be referred to as data association, positional estimation, and temporal synchronization. Data association is a general method of combining data from different sensors by correlating one sensor observation with the other observations. This process should ensure that only measurements that refer to the same drone are associated. There are different ways to perform this process: one of them is spatial synchronization, i.e., checking that a pair of measurements from different sensors have very similar position values. Coordinate transformations and bias estimation and correction are sources of error to be considered in this process. Furthermore, before making any kind of association, it is necessary to perform a time synchronization so that all the measurements refer to the same instant of time. The last problem faced by data fusion systems is filtering and prediction, for which common techniques such as Kalman filtering and Bayesian methods are usually employed.
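The filtering and prediction stage mentioned above can be illustrated with a minimal one-dimensional constant-velocity Kalman filter; the noise values are illustrative assumptions, not tuned for any particular sensor:

```python
# Minimal 1D constant-velocity Kalman filter: predict with dt, update with a
# position measurement from any associated sensor. Noise levels are
# illustrative, not tuned for a specific C-UAS sensor.

class KalmanCV1D:
    def __init__(self, pos, vel, q=1.0, r=4.0):
        self.x = [pos, vel]                   # state: [position, velocity]
        self.p = [[10.0, 0.0], [0.0, 10.0]]   # state covariance
        self.q, self.r = q, r                 # process / measurement noise

    def predict(self, dt):
        x, p = self.x, self.p
        self.x = [x[0] + dt * x[1], x[1]]
        p00 = p[0][0] + dt * (p[1][0] + p[0][1]) + dt * dt * p[1][1] + self.q
        self.p = [[p00, p[0][1] + dt * p[1][1]],
                  [p[1][0] + dt * p[1][1], p[1][1] + self.q]]

    def update(self, z):
        p = self.p
        s = p[0][0] + self.r               # innovation covariance
        k0, k1 = p[0][0] / s, p[1][0] / s  # Kalman gain
        y = z - self.x[0]                  # innovation
        self.x = [self.x[0] + k0 * y, self.x[1] + k1 * y]
        self.p = [[(1 - k0) * p[0][0], (1 - k0) * p[0][1]],
                  [p[1][0] - k1 * p[0][0], p[1][1] - k1 * p[0][1]]]

kf = KalmanCV1D(pos=0.0, vel=0.0)
for z in [1.0, 2.1, 2.9, 4.2, 5.0]:  # noisy positions from a ~1 m/s target
    kf.predict(dt=1.0)
    kf.update(z)
print(kf.x)  # filtered position near 5 m, velocity near 1 m/s
```

In a fusion context, the same `update` step can ingest position measurements from any associated sensor in turn, which is how a common track is maintained across heterogeneous sensors after the association and synchronization problems are solved.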
A low-cost, low-power methodology consisting of a fusion of technologies linking several sensors is presented in [
45]. This technology includes a simple radar, an acoustic array of microphones and optical cameras that are used to detect, track, and discriminate potential airborne targets. The multimode sensor fusion algorithms employ the Kalman filter for target tracking, and an acoustic and visual recognition algorithm is implemented to classify targets. The first element of the multimode sensor network is the radar, which is responsible for detecting targets that are approaching the area of interest. The second component is the acoustic microphone array, whose main objectives are to provide target arrival direction and target identification and classification and to mitigate false alarms. The last sensor is the optical system composed of infrared detectors to improve the resolution of targets. Results show that this sensor fusion is useful for detecting, tracking, and discriminating small UAVs. Another set of heterogeneous sensors combined with a sensor data fusion is proposed in [
46]. This system is composed of a Radio Frequency (RF) sensor to capture the uplink and downlink communications of the UAV, an acoustic sensor searching for the rotor noise, a passive radar system using the cellular network and a multihypothesis tracking (MHT) system for the fusion of sensor data. Finally, in the case explained in [
47], the system is composed of different range acoustic, optical and radar sensors. There is a combination of sensors of long- and short-range detection, the passive RF receivers detect the UAV’s telemetry signals, and the camera and microphone sensors are used to increase the detection accuracy in the near field. Specifically, the system is composed of a 120-node acoustic array that uses acoustic signal to locate and track the UAV; 16 high-resolution optical cameras, which are used to detect the UAV in the middle distance; and MIMO radar (with three bands) to achieve remote detection in the long distance. The developed combination overcomes the drawbacks of each of the sensor types in UAV detection and maximizes the advantages of the sensors. At the same time, the system reduces the cost of large-scale sensor deployment.
In this paper, we focus on the simulation of individual sensors, so we do not simulate these integrated solutions, which remains an area for future research, especially for the cases in which some of the sensors are controlled by the outputs provided by others.
Regarding commercial solutions, some of them are based on integrating some of the previously described sensors. For instance, a commercial solution provided by Indra [
13], called ARMS (Anti-RPAS Multisensor System), is a multilayer system ready to support the full C-UAS cycle, combining multiple types of sensors and countermeasures, ready to be deployed in different formats (fixed, mobile, portable) and designed to interact with complementary systems in order to provide defense against UAV threats. It is composed of a radar (described in
Section 2.1), a jammer (to interfere with drone control or GPS navigation) and optronics (described in
Section 2.5).
HENSOLDT Xpeller Counter UAV solution [
14] combines various types of sensors and effectors for protection against small drones. The sensors used to detect and identify are radars, electro-optics, rangefinders, and direction finders. Its radars were described in
Section 2.1, and it also identifies the potential threats via visual confirmation with a multispectral camera.
Meanwhile, Dedrone provides a complete airspace security system [
29]. Different types of sensors may be connected to the DedroneTracker software. The sensors provided by Dedrone are RF sensors, radars, and cameras. Depending on the application, Dedrone has different radars [
48] with different performances in the Dedrone platform, such as the Counter-Drone Radar from Echodyne [
15] and the Ranger R8SS-3D from Flir [
16], whose specifications were analyzed in
Section 2.1. The last sensors integrated by Dedrone are PTZ cameras [
49]. DedroneTracker system software has a video analysis capability, able to detect and locate UAVs in real time. Depending on the application, Dedrone can integrate one or more PTZ camera models with different performance levels. On the one hand, there is Axis Q6215-LE PTZ Network Camera from Axis Communications [
42]. On the other hand, there is the Triton PT-Series HD Camera from FLIR [
43]. They were described in
Section 2.5.
Another company to have its drone detection solutions analyzed in this paper is DroneShield [
50]. It has a range of stand-alone portable products and rapidly deployable fixed site solutions. One of the most remarkable ones is the DroneSentry product [
51], which is an autonomous fixed C-UAS system that integrates DroneShield’s suite of sensors and countermeasures into a unified responsive platform. This product has as its primary detection method the RadarZero product [
52], which is a radar, and/or the RfOne RF detector [
53]. It has secondary detection methods such as the WideAlert acoustic sensors and DroneOpt camera sensor [
54]. The main specifications of DroneSentry can be seen in
Table 13.