Article

An Intelligent Passive System for UAV Detection and Identification in Complex Electromagnetic Environments via Deep Learning

1 School of Systems and Telecommunications Engineering, Technical University of Madrid, 28031 Madrid, Spain
2 College of Electronic and Optical Engineering, Nanjing University of Posts and Telecommunications, Nanjing 210023, China
3 The Key Laboratory of Dynamic Cognitive System of Electromagnetic Spectrum Space, College of Electronic and Information Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China
* Author to whom correspondence should be addressed.
Drones 2025, 9(10), 702; https://doi.org/10.3390/drones9100702
Submission received: 10 September 2025 / Revised: 8 October 2025 / Accepted: 10 October 2025 / Published: 12 October 2025

Abstract

With the rapid proliferation of unmanned aerial vehicles (UAVs) and the associated rise in security concerns, there is a growing demand for robust detection and identification systems capable of operating reliably in complex electromagnetic environments. To address this challenge, this paper proposes a deep learning-based passive UAV detection and identification system leveraging radio frequency (RF) spectrograms. The system employs a high-resolution RF front-end comprising a multi-beam directional antenna and a wideband spectrum analyzer to scan the target airspace and capture UAV signals with enhanced spatial and spectral granularity. A YOLO-based detection module is then used to extract frequency hopping signal (FHS) regions from the spectrogram, which are subsequently classified by a convolutional neural network (CNN) to identify specific UAV models. Extensive measurements are carried out in both line-of-sight (LoS) and non-line-of-sight (NLoS) urban environments. The proposed system achieves over 96% accuracy in both detection and identification under LoS conditions. In NLoS conditions, it improves the identification accuracy by more than 15% compared with conventional full-spectrum CNN-based methods. These results validate the system’s robustness, real-time responsiveness, and strong practical applicability.

1. Introduction

Once confined to military applications, unmanned aerial vehicles (UAVs) are now transforming a wide range of civilian fields. These include logistics [1,2], agriculture [3], surveillance [4], and emergency response [5,6]. Recent research on UAV-assisted communications and spectrum mapping has further highlighted their value in low-altitude networks, e.g., UAV-assisted informative path planning for 3D spectrum mapping [7] and sparse Bayesian learning-based 3D radio environment construction [8]. However, alongside these benefits, the malicious or unauthorized use of UAVs has emerged as a growing security threat [9], posing serious risks to public safety, personal privacy, and the operational continuity of critical infrastructure. Several high-profile incidents have exemplified these challenges, such as the 2018 disruption at Gatwick Airport, the 2019 interference at Newark Liberty International Airport, and repeated low-altitude intrusions near Shanghai Pudong International Airport in China. In response to these threats, regulatory initiatives such as the mandated Remote ID policy in the United States [10] have further stimulated the development of advanced UAV identification technologies [11]. These incidents underscore the urgent need for robust, reliable, and real-time UAV detection and identification systems to ensure the security and orderly management of low-altitude airspace.
To address the challenges introduced by unauthorized UAVs, numerous detection and identification technologies have been proposed in recent studies, including radar sensing [12,13], optical imaging [14,15], and acoustic sensing [16,17]. A summary of the advantages and limitations of these methods is provided in Table 1. For instance, a distributed K-band radar system capable of high-sensitivity detection and velocity estimation of small UAVs was introduced in [18]. An empirical mode decomposition-based method was developed for radar-based automatic multicategory UAV classification, demonstrating promising accuracy on real measurement data [19]. Reference [20] proposed the IRSDD-YOLOv5 framework, which incorporates an infrared small target detection module, a customized prediction head, and optical feature enhancement strategies, demonstrating superior performance. A residual image prediction model based on the U-Net architecture was presented in [21] for infrared UAV detection. In addition, a distributed system integrating a wireless acoustic sensor network with machine learning algorithms was introduced in [22] to detect and localize unauthorized UAVs. Although each method performs well under specific conditions, they exhibit inherent limitations. Radar systems often miss small UAVs due to their low radar cross-section. Optical techniques are highly sensitive to illumination, occlusion, and adverse weather. Acoustic sensing, meanwhile, is vulnerable to environmental noise and suffers from limited detection range.
In contrast, communication-assisted sensing, particularly radio frequency (RF) sensing, identifies UAVs by passively monitoring their communication signals, as illustrated in Figure 1. This approach offers several advantages, including all-weather applicability, non-line-of-sight (NLoS) capability, and ease of deployment. Recently, integrated sensing and communication (ISAC), including resource-efficient UAV systems and near-field modeling for reconfigurable intelligent surface-assisted air-to-ground links, has attracted significant research attention [23,24]. In particular, the authors of [25] proposed a reinforcement learning-based ISAC framework for low-altitude UAV detection, achieving adaptive parameter optimization for robust detection under multipath and low-signal-to-noise ratio conditions. Similarly, an orthogonal time–frequency and space-based ISAC system that exploits micro-Doppler signatures to distinguish UAV echoes from static clutter, demonstrating high detection accuracy with a software-defined radio prototype, was proposed in [26].
A core technique in this domain is RF fingerprinting, which enables device identification by exploiting unique signal characteristics [27]. Commonly used RF fingerprint features include wavelet coefficients [28], I/Q imbalance parameters [29], fractal dimensions [30], carrier frequency offsets [31], bispectral features [32], and transient-based signal characteristics [33]. These inherent characteristics give rise to distinctive signal traits that can be exploited for UAV identification. Reference [34] proposed a method for detecting and identifying small UAVs by extracting cyclostationarity, kurtosis, and spectral features from downlink RF signals. A dual-source RF detection system was presented in [35], enabling simultaneous identification of both flight control and video transmission signals (VTSs). In [36], RF fingerprinting was combined with machine learning algorithms to classify UAV types, achieving high identification accuracy. Meanwhile, UAV communication-specific channel measurements and clustering methods have been investigated to support RF-assisted identification, such as space clustering based on full-domain channel characteristics [37] and comprehensive surveys on UAV channel sounding technologies [38]. Moreover, frequency-hopping signals (FHSs), commonly used in UAV communications, exhibit distinctive time–frequency characteristics. These spectrogram patterns have attracted increasing research interest in spectrogram-based UAV detection and identification methods. For example, an improved YOLO-based algorithm was proposed in [39] to identify UAV models using FHS spectrogram features.
Table 1. Advantages and disadvantages of different UAV detection methods.
| Method | Advantages | Disadvantages |
| Radar-based method [12,13,18,19] | High performance | Difficult to detect small UAVs; high cost |
| Optical-based method [14,15,20,21] | Provides intuitive results | Short detection range; susceptible to weather |
| Acoustic-based method [16,17,22] | Low cost | Short detection range; easily affected by noise |
| RF-based method [34,35,36,39] | Low cost; long range | Easily affected by interference |
In general, communication-assisted methods for UAV detection and identification offer several notable advantages. However, these approaches remain highly susceptible to interference from other electromagnetic signals, which severely constrains their detection accuracy and reliability in complex environments. To overcome these limitations, this paper develops a passive UAV detection and identification system that combines a high-resolution RF front-end with a spectrogram-based deep learning framework. Specifically, the YOLO network is first utilized to locate and extract the regions of frequency-hopping UAV signals from the spectrogram, thereby suppressing the influence of environmental noise and interference. Subsequently, a CNN classifier is employed to identify the UAV type based on the extracted signal regions. This two-stage design effectively decouples detection and identification, enabling robust and accurate performance under both line-of-sight (LoS) and NLoS conditions. The main innovations of this work are summarized as follows:
  • A passive UAV detection and identification system is proposed, with a modular architecture comprising a multi-beam directional antenna, a wideband spectrum analyzer, and a computing unit. The system enables real-time acquisition, processing, and identification of RF signals and is specifically designed to ensure robust operation in complex electromagnetic environments.
  • A two-stage deep learning framework is proposed, which integrates a YOLO-based detector for FHSs with a CNN-based UAV model classifier. The YOLO module enables rapid and accurate localization of FHS regions in spectrograms, while the CNN extracts discriminative features for precise UAV model identification, thereby enhancing both accuracy and robustness under severe interference conditions.
  • Extensive measurements are conducted in diverse LoS and NLoS conditions across a campus. The results showed that the proposed system achieved over 96% detection and identification accuracy under LoS conditions and improved identification accuracy by more than 15% in NLoS conditions compared with conventional full-spectrogram classification methods.
The remainder of this paper is organized as follows: Section 2 introduces the RF spectrogram-based system framework for passive UAV detection and identification. Section 3 describes the hardware architecture and algorithm design of the proposed system. Section 4 presents experimental validation and analysis under both LoS and NLoS conditions. Finally, conclusions are drawn in Section 5.

2. Spectrogram-Based Framework for Passive UAV Detection and Identification

2.1. Characteristics of Frequency Hopping and Video Transmission Signals

UAV communication links are generally categorized into two types. The first employs a dual-link configuration, in which an FHS is used for flight control, while an Orthogonal Frequency-Division Multiplexing (OFDM) signal is employed for video transmission. The second approach integrates both control and video data into a single OFDM-based communication link. However, this unified scheme suffers from reduced anti-jamming capability and a higher risk of link disruption in the presence of interference. Consequently, most modern UAV systems are designed with a dual-link architecture, leveraging FHS for robust control and OFDM for high-throughput video transmission.
The FHS of a UAV can be modeled as follows:
$$x_{\mathrm{FHS}}(t) = A \sum_{k=0}^{N_h-1} W_T(t - kT_h) \cos\!\left[2\pi f_k (t - kT_h) + \varphi_n\right]$$
where $N_h$ denotes the number of frequency hopping points, $A$ is the signal amplitude, $W_T(\cdot)$ is the rectangular window function defined over the hopping period $T_h$, $f_k$ represents the frequency at the $k$-th hopping point, and $\varphi_n$ is the initial phase. In the time–frequency domain, this signal appears in the spectrogram as a series of periodically spaced, stripe-like patterns with abrupt transitions in frequency. These pronounced structural features provide strong discriminative cues, making FHSs especially suitable for passive UAV detection and identification tasks.
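As an illustration of this model, the following Python sketch synthesizes an FHS by concatenating windowed tones at randomly drawn hop frequencies. The sampling rate, hop duration, and channel grid are hypothetical values chosen for the example, not parameters of any measured UAV.

```python
import numpy as np

def synth_fhs(fs=10e6, n_hops=8, t_hop=1e-3, amp=1.0, seed=0):
    """Synthesize an FHS per the model above: concatenated tones
    A*cos(2*pi*f_k*(t - k*T_h) + phi_n), one per hop period T_h.
    The sampling rate and hop-frequency grid are hypothetical values."""
    rng = np.random.default_rng(seed)
    n = int(round(fs * t_hop))               # samples per hop (support of W_T)
    t = np.arange(n) / fs                    # local time within one hop
    f_k = rng.choice(np.arange(1e6, 4e6, 0.25e6), size=n_hops)  # hop frequencies
    phi = rng.uniform(0.0, 2.0 * np.pi, size=n_hops)            # initial phases
    hops = [amp * np.cos(2 * np.pi * f * t + p) for f, p in zip(f_k, phi)]
    return np.concatenate(hops), f_k
```

A spectrogram of the resulting signal shows the stripe-like pattern described above: one constant-frequency segment per hop period, with abrupt frequency transitions at the hop boundaries.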
Regarding video transmission, modern commercial UAVs such as those in the DJI (Shenzhen, China) series typically employ advanced OFDM-based communication systems with bandwidths of 10 MHz or 20 MHz. These systems are capable of dynamically selecting communication frequency bands to maintain reliable data throughput under diverse operating conditions. Although this mechanism improves transmission efficiency, it also increases signal diversity and complexity, which poses significant challenges for accurate UAV model identification. The OFDM signal used for video transmission can be mathematically described as the superposition of multiple modulated subcarriers. Assuming the transmission begins at time $t = t_s$, the signal can be represented as follows:
$$x_{\mathrm{OFDM}}(t) = \begin{cases} \displaystyle\sum_{i=0}^{N_s-1} d_i \,\mathrm{rect}\!\left(t - t_s - \frac{T}{2}\right) \exp\!\left[j 2\pi \left(f_c + \frac{i}{T}\right)(t - t_s)\right], & t_s \le t \le t_s + T \\ 0, & t < t_s \ \text{or} \ t > t_s + T \end{cases}$$
where $N_s$ denotes the number of subcarriers, $d_i$ is the complex modulated symbol on the $i$-th subcarrier, $f_c$ is the frequency of the first subcarrier, and $T$ represents the OFDM symbol duration. The rectangular window function $\mathrm{rect}(\cdot)$ is defined as follows:
$$\mathrm{rect}(t) = \begin{cases} 1, & |t| \le T/2 \\ 0, & \text{otherwise} \end{cases}$$
In the spectrogram, the OFDM signal appears as a continuous and wideband structure with high temporal stability and consistent bandwidth.
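For comparison with the FHS model, the OFDM symbol above can be sketched numerically as follows. The subcarrier count, symbol duration, sampling rate, and QPSK symbol alphabet are illustrative assumptions, not parameters of any specific UAV link.

```python
import numpy as np

def ofdm_symbol(n_sub=64, T=1e-5, fs=12.8e6, fc=0.0, seed=1):
    """Synthesize one OFDM symbol per the model above:
    x(t) = sum_i d_i exp[j*2*pi*(f_c + i/T)(t - t_s)], t_s <= t <= t_s + T,
    taking t_s = 0. The QPSK alphabet for d_i is an illustrative choice."""
    rng = np.random.default_rng(seed)
    d = (rng.choice([-1.0, 1.0], n_sub)
         + 1j * rng.choice([-1.0, 1.0], n_sub)) / np.sqrt(2)
    n = int(round(fs * T))                   # samples in one symbol
    t = np.arange(n) / fs
    x = np.zeros(n, dtype=complex)
    for i in range(n_sub):                   # superpose modulated subcarriers
        x += d[i] * np.exp(1j * 2 * np.pi * (fc + i / T) * t)
    return x, d
```

Because the subcarriers are spaced by $1/T$, an FFT over one symbol recovers the symbols $d_i$ exactly, which reflects the flat, wideband, temporally stable structure the OFDM signal exhibits in the spectrogram.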
To further investigate the radio frequency characteristics of the UAV communication signal in practical deployment scenarios, a representative RF spectrogram dataset is constructed using real measurement data from an open-source UAV signal repository [40]. All signal samples are collected in a microwave anechoic chamber under standardized experimental conditions, ensuring a high signal-to-noise ratio and reliable measurement fidelity. The two-dimensional spectrograms of six representative commercial UAVs operating under typical communication conditions are presented in Figure 2. The FHSs display substantial variation in their spectral patterns across different UAV platforms, which reflects differences in hopping algorithms, bandwidth allocation, and modulation schemes. In contrast, the VTSs exhibit uniform OFDM structures that closely resemble standard Wi-Fi signals in both spectral shape and bandwidth occupancy. These observations underscore the superior discriminative value of the FHS over the OFDM-based VTS for passive UAV detection and identification. The distinct time–frequency patterns of the FHS facilitate more robust detection and identification, particularly in complex electromagnetic environments where interference and overlapping signals are prevalent.

2.2. Architecture of the Passive UAV Detection and Identification System

For efficient passive detection and identification of UAV targets, this paper proposes a deep learning-based system that processes RF spectrograms. As illustrated in Figure 3, the system employs a modular framework consisting of three primary components: signal acquisition, UAV signal detection, and UAV type identification.
The raw radio frequency signal is first captured as in-phase and quadrature (IQ) samples in complex electromagnetic environments, where the received data contains both UAV signals and coexisting interference sources such as Wi-Fi, Bluetooth, and other ambient RF signals. These in-phase and quadrature components are subsequently combined to construct a complex-valued time-domain signal:
$$\mathrm{IQData}(n) = I(n) + jQ(n)$$
where $I(n)$ and $Q(n)$ denote the in-phase and quadrature components at time index $n$, respectively. To reduce spectral leakage during frequency analysis, a Hanning window is applied:
$$w(n) = 0.5\left[1 - \cos\!\left(\frac{2\pi n}{N-1}\right)\right], \quad n = 0, 1, 2, \ldots, N-1$$
where $N$ is the signal length. The windowed IQ data is then computed as follows:
$$\mathrm{IQData}_{\mathrm{windowed}}(n) = \mathrm{IQData}(n) \cdot w(n)$$
The time–frequency representation of the signal is obtained using the short-time Fourier transform, defined as follows:
$$X(t, f) = \int_{-\infty}^{\infty} x(\tau)\, w(\tau - t)\, e^{-j 2\pi f \tau}\, \mathrm{d}\tau$$
where $x(\tau)$ is the time-domain signal, $w(\tau - t)$ is the window function, and $X(t, f)$ denotes the complex spectrogram at time $t$ and frequency $f$. This transformation retains the temporal–frequency distribution of the signal energy, effectively highlighting structural patterns such as frequency-hopping trajectories. For practical implementation, the discrete Fourier transform is computed via a fast Fourier transform (FFT):
$$Y(f) = \sum_{n=0}^{N-1} \mathrm{IQData}_{\mathrm{windowed}}(n)\, e^{-j 2\pi f n / N}$$
where $Y(f)$ represents the frequency-domain signal at frequency $f$, and $N$ is the number of data points. The power spectral density (PSD) is then obtained by squaring the magnitude of the normalized FFT result:
$$P(f) = \left|\frac{Y(f)}{N}\right|^2$$
Finally, to express the PSD in dBm, the following equation is used:
$$P_{\mathrm{dBm}}(f) = 10 \log_{10}\!\left(\frac{P(f)}{0.001}\right)$$
where 0.001 represents the reference power (1 mW). To ensure consistent input quality and enhance the robustness of the deep learning model, the generated spectrograms undergo image normalization and augmentation.
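The acquisition chain above (Hanning window, FFT, PSD, dBm conversion) can be sketched in a few lines of Python. The frame length and hop size in the short-time version are illustrative choices, not the settings used in the measurements.

```python
import numpy as np

def iq_to_dbm_spectrum(iq, ref_mw=0.001):
    """Follow the chain above: Hanning window -> FFT -> PSD -> dBm.
    `iq` is one complex IQData(n) frame; returns P_dBm(f)."""
    n = len(iq)
    w = 0.5 * (1 - np.cos(2 * np.pi * np.arange(n) / (n - 1)))  # Hanning window
    y = np.fft.fft(iq * w)                   # Y(f)
    psd = np.abs(y / n) ** 2                 # P(f) = |Y(f)/N|^2
    return 10 * np.log10(np.maximum(psd, 1e-30) / ref_mw)       # P_dBm(f)

def spectrogram_dbm(iq, frame=256, hop=128):
    """Short-time version: stack per-frame spectra into a 2-D spectrogram
    (rows = time frames, columns = frequency bins)."""
    frames = [iq[i:i + frame] for i in range(0, len(iq) - frame + 1, hop)]
    return np.stack([iq_to_dbm_spectrum(f) for f in frames])
```

Applied to captured IQ samples, `spectrogram_dbm` produces the kind of time–frequency image that is then normalized, augmented, and passed to the detection network.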
The UAV detection module analyzes the preprocessed spectrograms using a YOLO-based detection framework to localize candidate regions corresponding to FHS. By generating accurate binary decisions on the presence of UAV-related signals, this module enables rapid and reliable screening of complex spectrogram data. In doing so, it significantly alleviates the computational burden on downstream identification processes, while preserving high detection accuracy and real-time responsiveness.
Figure 2. Spectrum of FHSs and VTSs from different UAV models: (a) DJI Inspire 2. (b) DJI Matrice 100. (c) DJI Matrice 210. (d) DJI Mavic Pro. (e) DJI Phantom 4. (f) DJI Phantom 4 Pro Plus.
Figure 3. Architecture of the proposed deep learning-based UAV detection and identification framework based on RF spectrograms.
Upon detection of a valid UAV signal, the localized FHS region is forwarded to the UAV identification module. To mitigate the influence of signal power variations, the FHS region is first converted into a binary image representation. A block-based detection algorithm is then applied to extract the centroids of individual hopping bursts. Each hopping signal is thus represented as a set of two-dimensional coordinates in the time–frequency domain, collectively forming a frequency-hopping point set. This point set is subsequently fed into a CNN-based classifier that exploits the spatial distribution of hopping patterns for UAV identification. By focusing on the geometric arrangement of hopping bursts rather than global spectrogram textures, the system achieves more robust and accurate UAV identification under complex electromagnetic conditions.
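A minimal sketch of the binarization and centroid-extraction step, assuming a simple power threshold and 4-connected flood-fill labeling in place of the paper's block-based detector; the threshold value is hypothetical.

```python
import numpy as np

def hop_centroids(spec_db, thresh_db=-80.0):
    """Binarize a dBm spectrogram and return the (time, freq) centroid of
    each connected burst region. Flood-fill labeling stands in for the
    block-based detection algorithm; thresh_db is an illustrative value."""
    mask = spec_db > thresh_db               # binary image representation
    seen = np.zeros_like(mask, dtype=bool)
    h, w = mask.shape
    centroids = []
    for i in range(h):
        for j in range(w):
            if mask[i, j] and not seen[i, j]:
                seen[i, j] = True
                stack, pix = [(i, j)], []
                while stack:                 # 4-connected flood fill
                    r, c = stack.pop()
                    pix.append((r, c))
                    for rr, cc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                        if 0 <= rr < h and 0 <= cc < w and mask[rr, cc] and not seen[rr, cc]:
                            seen[rr, cc] = True
                            stack.append((rr, cc))
                centroids.append(tuple(np.mean(pix, axis=0)))  # burst centroid
    return centroids
```

The returned list of (time, frequency) centroids is the frequency-hopping point set that the classifier consumes.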

3. System Hardware and Algorithm Design

3.1. Hardware Platform

To satisfy the stringent requirements of high sensitivity, real-world deployability, and scalability, the proposed platform integrates high-performance commercial off-the-shelf hardware with custom-designed components to form a comprehensive RF signal acquisition and processing system. As illustrated in Figure 4, the hardware architecture consists of a multi-beam directional antenna for spatially selective signal reception, a wideband spectrum analyzer for real-time RF signal monitoring and digitization, and a local computing unit for signal decoding and spectrogram generation. This integrated design ensures accurate signal capture and efficient processing, thereby providing a reliable foundation for passive UAV detection and identification in complex electromagnetic environments.
A multi-beam directional antenna with six directional elements is designed to enhance the directional sensing capability of the UAV detection and identification system. The array provides full 360° horizontal coverage and employs high-gain, narrow-beam antennas to ensure strong spatial selectivity. The spacing and angular placement of the antennas are carefully optimized to achieve a balance between high-resolution multi-target detection in short-range scenarios and the structural compactness required for mobile and portable deployment.
To facilitate high-fidelity spectrum acquisition, the system employs the Spectran V6 PLUS spectrum analyzer developed by AARONIA GmbH (Euscheid, Germany). This advanced receiver supports an ultra-wide frequency range from 9 kHz to 8 GHz, with excellent dynamic range and high-speed sampling capabilities. Its maximum instantaneous scan rate of 1 THz/s effectively covers the primary UAV control bands at 2.4 GHz and 5.8 GHz. These characteristics make it particularly suitable for real-time monitoring of FHSs. The analyzer incorporates a built-in real-time spectrum analysis engine, enabling precise capture of short-duration, burst-like spectral transitions. This capability is critical for detecting dynamic hopping sequences and rapidly switching communication channels commonly used in UAV systems. Owing to its high performance and reliability, the Spectran V6 PLUS has been widely used in radio surveillance and counter-UAV applications, thus providing a robust foundation for real-world deployment. The detailed technical specifications are summarized in Table 2.
On the data processing side, the system integrates a high-performance local computing terminal, specifically a laptop equipped with an NVIDIA GeForce RTX 4060 GPU (Santa Clara, CA, USA). This terminal manages the entire pipeline from signal acquisition to spectrogram generation. It connects to the spectrum analyzer via a cloud-based communication protocol provided by the AARONIA device and runs custom MATLAB 2024 scripts for real-time IQ data acquisition, buffering, and decoding.

3.2. Algorithm Design

The detection and identification framework developed in this paper consists of two core modules. The first is a YOLO-based detection module that automatically localizes UAV frequency-hopping regions in spectrograms. The second is a CNN-based identification module that further identifies UAV models from the detected regions. These two modules operate sequentially in a process of localization and identification, providing an efficient and automated solution for UAV detection and identification in challenging environments.
In the hopping signal detection stage, the YOLOv3 architecture is employed and customized to accommodate the unique characteristics of RF spectrograms. The choice of YOLOv3 is motivated by its favorable trade-off between detection accuracy and processing speed, making it particularly suitable for real-time applications with constrained computational resources. As shown in Figure 5, the network consists of three main components: feature extraction, feature fusion, and multi-scale prediction. The input to the YOLO-based detection network is the time–frequency spectrogram generated from the raw IQ RF signals after a short-time Fourier transform. A deep-feature extraction module, based on the Darknet backbone, is employed to learn local textures and hopping patterns. By leveraging its strong encoding capability, the backbone effectively captures weak yet critical time–frequency signatures. The extracted features are then passed into a feature pyramid fusion module, which incorporates upsampling, residual connections, and multi-scale pooling. Specifically, a spatial pyramid pooling structure with three max-pooling layers at different scales expands the receptive field, improving adaptability to local variations and scale changes. The multi-path fusion strategy ensures effective information flow between high-level semantic features and low-level spatial details, thereby enhancing detection robustness across various hopping signal sizes. Finally, three detection heads predict small, medium, and large hopping regions by regressing bounding box coordinates and confidence scores, while simultaneously classifying UAV control signals.
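The spatial pyramid pooling structure described above can be sketched as follows. This assumes the common YOLOv3-SPP configuration of stride-1 max-pools with 5 × 5, 9 × 9, and 13 × 13 kernels concatenated with the input, which the paper does not specify exactly.

```python
import numpy as np

def max_pool_same(x, k):
    """Stride-1 max pooling with 'same' padding on a (C, H, W) feature map."""
    p = k // 2
    xp = np.pad(x, ((0, 0), (p, p), (p, p)), constant_values=-np.inf)
    out = np.empty_like(x)
    C, H, W = x.shape
    for i in range(H):
        for j in range(W):
            out[:, i, j] = xp[:, i:i + k, j:j + k].max(axis=(1, 2))
    return out

def spp(x, kernels=(5, 9, 13)):
    """Spatial pyramid pooling: concatenate the input with three stride-1
    max-pool branches along the channel axis, expanding the receptive
    field at several scales while preserving spatial resolution."""
    return np.concatenate([x] + [max_pool_same(x, k) for k in kernels], axis=0)
```

Because each branch preserves the spatial size, the fused map can flow directly into the multi-scale detection heads.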
Traditional spectrogram identification methods typically treat the entire spectrogram as input to a deep neural network, directly learning spatial textures and energy distributions [39]. However, in real-world scenarios, spectrograms often contain substantial non-target noise and overlapping signals, which increases the risk of overfitting to background patterns and leads to degraded performance under non-ideal conditions. To address these challenges, this study introduces a specialized CNN-based UAV model identification module, as shown in Figure 6. The CNN classifier employs a five-layer architecture consisting of three convolutional layers followed by two fully connected layers. Each convolutional layer utilizes 3 × 3 kernels with ReLU activation and max-pooling operations to extract hierarchical spatial features, while the final softmax layer performs multi-class classification of UAV types. Hyperparameters are optimized through a combination of grid search and validation-based fine-tuning. Unlike conventional methods, the proposed approach uses the coordinate-based structural information of the FHS as the primary input feature. Inspired by keypoint-based geometric relationship modeling in face identification, this approach overcomes the limitations of traditional spectrogram-based methods, which rely heavily on global texture distributions and offer weak interpretability. For the CNN-based identification module, the input consists of coordinate matrices representing the spatial distribution of the FHS in the time–frequency domain. Each spectrogram is first processed by a block-based detection algorithm to extract the two-dimensional coordinates of FHS points. These coordinates are then normalized with respect to time and frequency scales and structurally encoded to preserve the geometric relationships among hopping trajectories.
The network not only considers the absolute positions of individual points but also captures their spatial relationships, such as hopping intervals, trajectory patterns, relative frequencies, and density distributions. The extracted structural features are fused through multi-layer fully connected layers and passed to a classifier that outputs the UAV model label. By incorporating position normalization and structural encoding mechanisms, the model learns an invariant and discriminative structural representation, thereby significantly improving identification accuracy.
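The normalization and fixed-size encoding of the hopping point set could be sketched as follows; the point-set length and zero-padding scheme are illustrative assumptions, not the authors' exact encoding.

```python
import numpy as np

def encode_point_set(points, t_max, f_max, n_points=64):
    """Normalize (time, freq) hop centroids to [0, 1] and pad/truncate to a
    fixed-length matrix for the CNN classifier. n_points and zero-padding
    are hypothetical choices for illustration."""
    pts = np.asarray(points, dtype=float)
    pts = pts / np.array([t_max, f_max])     # scale-invariant normalization
    pts = pts[np.argsort(pts[:, 0])]         # order bursts by time of occurrence
    out = np.zeros((n_points, 2))            # fixed-size input matrix
    m = min(len(pts), n_points)
    out[:m] = pts[:m]
    return out
```

Ordering by time preserves the hopping trajectory, and the normalization removes dependence on absolute spectrogram scale, so the classifier sees only the geometric arrangement of the bursts.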

4. Experimental Validation and Analysis

4.1. Measurement Setup

To validate the proposed system under realistic conditions, a comprehensive training and testing dataset is constructed by combining a public UAV spectrogram dataset with custom-measured UAV spectrograms. In addition to six mainstream commercial UAV models from an open-source RF signal database [40], two representative UAV types are selected for controlled experiments to evaluate the model’s robustness under practical conditions. The first UAV is an assembled platform, as shown in Figure 7a, equipped with a 2.4 GHz remote control link that employs a typical frequency-hopping communication mechanism. This configuration offers high protocol flexibility and a fully controllable signal structure, enabling in-depth analysis of hopping behavior under different configurations. The second UAV is a commercial DJI Mini, as shown in Figure 7b, whose control link also operates primarily in the 2.4 GHz band but utilizes a proprietary closed-source communication protocol with a more complex and stable frequency-hopping pattern. These two UAVs represent open and closed communication systems, respectively, providing complementary test cases for assessing the proposed framework’s adaptability and robustness in diverse communication environments.
For dataset construction, the strategy employed in this study involves training the model on clean spectrograms and subsequently testing it with more complex spectrograms. This approach ensures both effective feature learning and adaptability to real-world deployment scenarios. The training data is collected entirely in a microwave anechoic chamber under strictly controlled RF-shielded conditions. The frequency hopping signals obtained are free from external interference, with minimal background noise and clear spectral boundaries, ensuring that the model can effectively learn intrinsic characteristics such as UAV control signals, hopping trajectories, and modulation patterns during the initial phase, thereby enhancing feature extraction and identification performance. The dataset consists of two parts. The first part includes spectrograms of multi-class UAV RF signals obtained from an open-source database, with 600 spectrogram samples generated for each UAV type. The second part comprises spectrograms of the self-assembled UAV and the DJI Mini, collected using the measurement platform in a microwave anechoic chamber as illustrated in Figure 8, with 600 spectrogram samples extracted for each UAV in this private measured dataset. For both datasets, all spectrograms are standardized to a resolution of 800 × 800 pixels and divided into training and validation sets with a 2:1 ratio.
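The 2:1 training/validation split can be reproduced deterministically, for example as below; the shuffling step and the seed are illustrative choices rather than the authors' exact procedure.

```python
import random

def split_2to1(samples, seed=42):
    """Shuffle a sample list and split it into training and validation
    subsets at a 2:1 ratio. The seed fixes the partition for repeatability."""
    items = list(samples)
    random.Random(seed).shuffle(items)
    cut = (2 * len(items)) // 3              # two thirds for training
    return items[:cut], items[cut:]
```

For 600 spectrograms per UAV type, this yields 400 training and 200 validation samples per class.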
To evaluate the performance of the proposed system under different propagation path conditions, measurements are conducted at six representative locations around the Technical University of Madrid, as shown in Figure 9. The experimental setting represents a typical urban low-altitude propagation scenario, incorporating various interference sources and obstructions typically encountered during UAV flight, such as low-rise buildings, vehicles, and trees. In addition, multiple RF sources, including Wi-Fi, Bluetooth, and other wireless devices, contribute to the complex and non-ideal electromagnetic environment. To assess the system’s adaptability for mobile monitoring applications, the receiving equipment is mounted on a vehicle to emulate a mobile monitoring terminal, which could be deployed for applications such as urban security patrols and emergency communication management, as shown in Figure 10a. The UAV and its controller are placed on a fixed platform on the rooftop of a building approximately 20 m above ground, serving as the transmission source, as illustrated in Figure 10b. The horizontal distance between the receiver and transmitter varies from 50 m to 150 m, covering typical low-altitude, short-range communication link characteristics. Specifically, test positions 1, 2, 3, and 6 correspond to LoS conditions, where no obstacles exist between the receiver and the UAV, and signal propagation occurs primarily along the direct path. In contrast, test positions 4 and 5 correspond to NLoS conditions, where the propagation path is partially obstructed by buildings, trees, or other obstacles. In these cases, the receiver mainly captures reflected or diffracted signals, resulting in greater environmental complexity and unpredictability. At each position, 500 spectrograms are collected for both the assembled UAV and the DJI Mini. As shown in Figure 11, the spectrograms display clear FHSs of the assembled UAV.
In addition, significant RF interference is present in the environment. This interference manifests in the spectrograms as background noise patterns, broadband pulse bursts, or spurious frequency hopping signatures, which overlap with and in some cases mask the target signal in the time–frequency domain, thereby complicating the detection task. This is particularly severe at NLoS locations characterized by long reflection paths, high attenuation, and dense interference sources, where the target signal is easily obscured.

4.2. Performance Evaluation

To evaluate the performance of the proposed YOLO-based UAV detection algorithm, validation measurements are conducted at six positions and the results are compared with those obtained using traditional methods that detect signals from the entire spectrogram. The results are summarized in Table 3. Under LoS conditions, the proposed method achieved detection accuracies of 96.3%, 89.3%, 86.5%, and 92.2% at test positions 1, 2, 3, and 6, respectively, representing improvements of 8.7%, 11.9%, 10.8%, and 7.7% over the traditional method [39]. Under NLoS conditions, the performance gap between the two methods becomes more pronounced. The traditional method achieved accuracies of 61.0% and 57.9% at test positions 4 and 5, whereas the proposed method reached 81.1% and 72.8%, corresponding to improvements of 20.1% and 14.9%, respectively. These results indicate that traditional methods are more susceptible to background interference in complex electromagnetic environments and obstructed conditions, which leads to degraded detection performance. In contrast, the proposed method, by extracting FHS regions, effectively isolates non-target interference and captures key structural features. This enables stable, high-precision detection of UAV hopping signals across various environmental conditions, demonstrating superior robustness and adaptability.
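The FHS-region extraction step can be illustrated with a minimal cropping sketch. The pixel-coordinate (x1, y1, x2, y2) box format, commonly emitted by YOLO-style detectors, and the example coordinates are assumptions made for illustration only.

```python
def crop_fhs_regions(spectrogram, boxes):
    """Extract FHS sub-images from a spectrogram given (x1, y1, x2, y2)
    pixel boxes; boxes are clipped to the image bounds before cropping."""
    h, w = len(spectrogram), len(spectrogram[0])
    crops = []
    for x1, y1, x2, y2 in boxes:
        x1, y1 = max(0, x1), max(0, y1)
        x2, y2 = min(w, x2), min(h, y2)
        crops.append([row[x1:x2] for row in spectrogram[y1:y2]])
    return crops

# 800 x 800 spectrogram (grayscale values) with one hypothetical detection box
img = [[0] * 800 for _ in range(800)]
regions = crop_fhs_regions(img, [(100, 200, 300, 260)])
print(len(regions[0]), len(regions[0][0]))  # 60 200
```

Classifying only these cropped regions, rather than the full 800 × 800 image, is what excludes background interference from the identification stage.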
To further validate the effectiveness of the proposed UAV identification model, comparative analyses are conducted on the identification performance of the proposed system under multi-source data conditions. Initially, in an ideal environment, spectrograms of six representative commercial UAVs from the open-source database, together with spectrogram samples of the assembled UAV and DJI Mini collected in a microwave anechoic chamber, are selected to construct the training and testing datasets. A confusion matrix is employed to compare the performance of the model across different UAV types. As a commonly used tool in multi-class classification tasks, the confusion matrix provides a visual representation of identification accuracy and misclassification rates for each UAV category. Each column of the matrix corresponds to predicted labels, while each row corresponds to true UAV labels. The identification results are shown in Figure 12. The proposed model achieved 100% accuracy across all tested UAV types, establishing the upper performance bound of the method in an interference-free environment and demonstrating its precise capability for extracting and classifying frequency-hopping features.
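The confusion matrix convention used here (rows as true labels, columns as predicted labels) can be sketched as follows; the label values are illustrative, not measured results.

```python
def confusion_matrix(y_true, y_pred, classes):
    """Build a confusion matrix where each row is a true UAV label and
    each column is a predicted label, matching the paper's convention."""
    idx = {c: i for i, c in enumerate(classes)}
    m = [[0] * len(classes) for _ in classes]
    for t, p in zip(y_true, y_pred):
        m[idx[t]][idx[p]] += 1
    return m

classes = ["Assembled UAV", "DJI Mini"]
y_true = ["Assembled UAV", "Assembled UAV", "DJI Mini", "DJI Mini"]
y_pred = ["Assembled UAV", "DJI Mini", "DJI Mini", "DJI Mini"]
print(confusion_matrix(y_true, y_pred, classes))  # [[1, 1], [0, 2]]
```

Per-class accuracy is then the diagonal entry divided by its row sum, which is how the percentages in Figures 12–14 are read.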
To further validate the effectiveness of the proposed UAV identification module, two UAV models, the assembled UAV and DJI Mini, are selected as the target objects. The performance of the traditional identification method [39] is shown in Figure 13. Under LoS conditions, the identification accuracies for the assembled UAV and DJI Mini are 82.6% and 84.3%, respectively, demonstrating limited discriminative ability. Under NLoS conditions, the accuracies dropped sharply to 61.4% and 69.3%, with misidentification rates reaching 38.6% and 30.7%, respectively. These results indicate that traditional methods, which lack structural constraints during feature extraction, are highly susceptible to interference from video transmission signals, Wi-Fi, and other non-target RF sources, leading to model confusion and performance degradation. The performance of the proposed identification method is shown in Figure 14. Under LoS conditions, the identification accuracies for the assembled UAV and DJI Mini improved significantly to 96.8% and 94.1%, respectively, compared to the traditional approach. In addition, the proposed framework achieves an average processing time of approximately 34 ms per detection–identification cycle, demonstrating millisecond-level, real-time performance suitable for practical UAV monitoring. More importantly, the proposed method maintained high accuracy and low misidentification rates under NLoS conditions, where traditional full-image identification approaches perform poorly due to multipath interference and background noise. By focusing on frequency-hopping structures, the proposed system effectively isolates target features from interference, demonstrating superior robustness and adaptability. These results further demonstrate the effectiveness of the proposed two-stage YOLO–CNN framework.
In this design, the YOLO network rapidly locates frequency-hopping signal regions within the spectrogram, significantly reducing the impact of environmental interference, while the CNN classifier performs fine-grained feature extraction for UAV type identification. Compared with single-stage deep learning models, this architecture achieves a better trade-off between real-time performance and identification accuracy, especially under non-stationary or overlapping signal conditions commonly encountered in complex electromagnetic environments.
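The two-stage structure described above can be summarized in a minimal sketch; the stub detector and classifier below are placeholders for the trained YOLO and CNN models, and the fixed box and label they return are illustrative assumptions.

```python
def detect_fhs_regions(spectrogram):
    """Stage 1 (stand-in for the trained YOLO network): return candidate
    FHS bounding boxes. A fixed box is returned purely for illustration."""
    return [(100, 200, 300, 260)]

def classify_region(region):
    """Stage 2 (stand-in for the trained CNN): map a cropped FHS region
    to a UAV model label. A fixed label is returned for illustration."""
    return "DJI Mini"

def identify_uav(spectrogram):
    """Two-stage pipeline: localize hopping-signal regions first, then
    classify only those regions, so background interference is excluded
    from the fine-grained identification stage."""
    labels = []
    for x1, y1, x2, y2 in detect_fhs_regions(spectrogram):
        region = [row[x1:x2] for row in spectrogram[y1:y2]]
        labels.append(classify_region(region))
    return labels

img = [[0] * 800 for _ in range(800)]
print(identify_uav(img))  # ['DJI Mini']
```

Keeping localization and classification as separate stages is also what bounds the per-cycle latency: the CNN only ever sees small cropped regions rather than the full spectrogram.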

5. Conclusions

This paper addresses the challenge of UAV detection and identification in complex electromagnetic environments by designing and implementing a spectrogram-based detection and identification system. The hardware platform integrates a high-sensitivity multi-beam directional antenna, a wideband spectrum analyzer, and a computing unit, providing strong spatial directivity and flexible deployment capabilities. On the algorithmic side, a joint identification framework combining YOLO and CNN is developed to enable accurate localization of FHS regions and fine-grained UAV identification. Extensive measurements are conducted under both LoS and NLoS conditions around a university campus, focusing on two UAV targets: a custom-built drone and the DJI Mini. Experimental results show that the proposed system achieves over 96% identification accuracy under LoS conditions and improves accuracy by more than 15% over traditional methods in NLoS scenarios. Overall, the system demonstrates significant advantages in terms of identification accuracy, response speed, and interference resilience, highlighting its strong engineering applicability and potential for real-world deployment. In future work, we plan to integrate the RF sensing and identification modules onto UAV platforms to enable airborne passive monitoring and dynamic spectrum mapping, thereby enhancing environmental coverage and response speed in real-world deployments. In addition, we will extend the framework to support simultaneous detection and identification of multiple UAVs in complex electromagnetic environments.

Author Contributions

Conceptualization, G.Z. and C.B.; Methodology, G.Z.; Software, G.Z.; Validation, G.Z. and C.B.; Investigation, G.Z.; Data curation, G.Z. and C.B.; Writing—original draft, G.Z.; Writing—review and editing, Y.L., Q.Z., S.L. and K.M.; Visualization, G.Z.; Supervision, Y.L., Q.Z., C.B., Y.H. and Z.L.; Funding acquisition, Y.L. and S.L. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China under Grant No. 62371248 and No. 62401284; in part by the Natural Science Research Start-up Foundation of Recruiting Talents of Nanjing University of Posts and Telecommunications under Grant NY222059; in part by the DISCO6G-COMUNIDAD DE MADRID under Grant TEC-2024/COM/360.

Data Availability Statement

The original contributions presented in this study are included in this article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Xu, J.; Liu, X.; Jin, J.; Pan, W.; Li, X.; Yang, Y. Holistic Service Provisioning in a UAV-UGV Integrated Network for Last-Mile Delivery. IEEE Trans. Netw. Serv. Manage. 2025, 22, 380–393. [Google Scholar] [CrossRef]
  2. Wen, J.; Wang, F.; Su, Y. A Bi-Layer Collaborative Planning Framework for Multi-UAV Delivery Tasks in Multi-Depot Urban Logistics. Drones 2025, 9, 512. [Google Scholar] [CrossRef]
  3. Bonilla-Marquez, A.; Guzman-Flores, O.; Rodriguez-Gallo, Y.; Pimentel-Hernandez, K. Seed Spreading UAV Prototype for Precision Agriculture Development. In Proceedings of the IEEE Central America and Panama Student Conference (CONESCAPAN), Panama, Panama, 24–27 September 2024; pp. 1–6. [Google Scholar] [CrossRef]
  4. He, D.; Hou, H. UAV-Assisted Legitimate Wireless Surveillance: Performance Analysis and Optimization. In Proceedings of the IEEE International Conference on Unmanned Systems (ICUS), Nanjing, China, 18–20 October 2024; pp. 1975–1979. [Google Scholar] [CrossRef]
  5. Dousai, N.; Loncaric, S. Detecting Humans in Search and Rescue Operations Based on Ensemble Learning. IEEE Access 2022, 10, 26481–26492. [Google Scholar] [CrossRef]
  6. Dumencic, S.; Lanca, L.; Jakac, K.; Ivic, S. Experimental Validation of UAV Search and Detection System in Real Wilderness Environment. Drones 2025, 9, 473. [Google Scholar] [CrossRef]
  7. Chen, Y.; Zhu, Q.; Wang, J.; Jia, Z.; Wang, X.; Lin, Z. UAV-Aided Efficient Informative Path Planning for Autonomous 3D Spectrum Mapping. IEEE Trans. Cognit. Commun. Netw. 2025; early access. [Google Scholar] [CrossRef]
  8. Wang, J.; Zhu, Q.; Lin, Z.; Chen, J.; Ding, G.; Wu, Q.; Gu, G.; Gao, Q. Sparse Bayesian Learning-Based Hierarchical Construction for 3D Radio Environment Maps Incorporating Channel Shadowing. IEEE Trans. Wirel. Commun. 2024, 23, 14560–14574. [Google Scholar] [CrossRef]
  9. Sathyamoorthy, D. A review of security threats of unmanned aerial vehicles and mitigation steps. Def. Secur. Anal. 2015, 6, 81–97. [Google Scholar]
  10. Proposed Rule on Remote Identification of Unmanned Aircraft Systems. Available online: https://www.federalregister.gov/documents/2019/12/31/2019-28100/remote-identification-of-unmanned-aircraft-systems (accessed on 25 August 2025).
  11. Chen, Y.; Zhu, L.; Yao, C.; Gui, G.; Yu, L. A Global Context-Based Threshold Strategy for Drone Identification Under the Low SNR Condition. IEEE Internet Things J. 2023, 10, 1332–1346. [Google Scholar] [CrossRef]
  12. Aziz, N.; Fodzi, M.; Shariff, K.; Haron, M.; Yu, L. Analysis on drone detection and classification in LTE-based passive forward scattering radar system. Int. J. Integr. Eng. 2023, 15, 112–123. [Google Scholar] [CrossRef]
  13. Patel, J.; Fioranelli, F.; Anderson, D. Review of radar classification and RCS characterisation techniques for small UAVs or drones. IET Radar Sonar Navigat. 2018, 12, 911–919. [Google Scholar] [CrossRef]
  14. Ashraf, M.; Sultani, W.; Shah, M. Dogfight: Detecting drones from drones videos. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 20–25 June 2021; pp. 7063–7072. [Google Scholar] [CrossRef]
  15. Rozantsev, A.; Lepetit, V.; Fua, P. Detecting flying objects using a single moving camera. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 879–892. [Google Scholar] [CrossRef] [PubMed]
  16. Al-Emadi, S.; Al-Ali, A.; Mohammed, A.; Al-Ali, A. Audio based drone detection and identification using deep learning. In Proceedings of the 15th International Wireless Communications & Mobile Computing Conference (IWCMC), Tangier, Morocco, 24–28 June 2019; pp. 459–464. [Google Scholar] [CrossRef]
  17. Zelnio, A.; Case, E.; Rigling, B. A low-cost acoustic array for detecting and tracking small RC aircraft. In Proceedings of the IEEE 13th Digital Signal Processing Workshop and 5th IEEE Signal Processing Education Workshop (DSP/SPE), Marco Island, FL, USA, 4–7 January 2009; pp. 121–125. [Google Scholar] [CrossRef]
  18. Shin, D.; Jung, D.; Kim, D.; Ham, J.; Park, S. A distributed FMCW radar system based on fiber-optic links for small drone detection. IEEE Trans. Instrum. Meas. 2017, 66, 340–347. [Google Scholar] [CrossRef]
  19. Oh, B.; Guo, X.; Wan, F.; Toh, K.; Lin, Z. Micro-Doppler Mini-UAV Classification Using Empirical-Mode Decomposition Features. IEEE Geosci. Remote Sens. Lett. 2018, 15, 227–231. [Google Scholar] [CrossRef]
  20. Yuan, S.; Sun, B.; Zou, Z.; Huang, H.; Wu, P.; Li, C.; Dang, Z.; Zhao, Z. IRSDD-YOLOv5: Focusing on the Infrared Detection of Small Drones. Drones 2023, 7, 393. [Google Scholar] [CrossRef]
  21. Fang, H.; Xia, M.; Zhou, G.; Chang, Y.; Yan, L. Infrared Small UAV Target Detection Based on Residual Image Prediction via Global and Local Dilated Residual Networks. IEEE Geosci. Remote Sens. Lett. 2022, 19, 7002305. [Google Scholar] [CrossRef]
  22. Yue, X.; Liu, Y.; Wang, J.; Song, H.; Cao, H. Software Defined Radio and Wireless Acoustic Networking for Amateur Drone Surveillance. IEEE Commun. Mag. 2018, 56, 90–97. [Google Scholar] [CrossRef]
  23. Ma, Z.; Zhang, R.; Ai, B.; Zeng, L.; Niyato, D. Deep Reinforcement Learning for Energy Efficiency Maximization in RSMA-IRS-Assisted ISAC System. IEEE Trans. Veh. Technol. 2025; early access. [Google Scholar] [CrossRef]
  24. Jiang, H.; Shi, W.; Zhang, Z.; Pan, C.; Wu, Q.; Shu, F. Large-Scale RIS Enabled Air-Ground Channels: Near-Field Modeling and Analysis. IEEE Trans. Wirel. Commun. 2025, 24, 1074–1088. [Google Scholar] [CrossRef]
  25. Su, Y.; Cheng, Q.; Liu, Z. Integrated Sensing and Communication for UAV Detection: A Reinforcement Learning-based Approach. In Proceedings of the IEEE Cyber Science and Technology Congress (CyberSciTech), Boracay Island, Philippines, 5–8 November 2024; pp. 540–543. [Google Scholar] [CrossRef]
  26. Yang, Z.; Wang, Q.; Liu, G.; Ma, Z. UAV Detection Based on OTFS ISAC System. In Proceedings of the IEEE 101st Vehicular Technology Conference (VTC2025-Spring), Oslo, Norway, 17–20 June 2025; pp. 1–7. [Google Scholar] [CrossRef]
  27. Nie, W.; Han, Z.; Zhou, M.; Xie, L.; Jiang, Q. UAV Detection and Identification Based on WiFi Signal and RF Fingerprint. IEEE Sens. J. 2021, 21, 13540–13550. [Google Scholar] [CrossRef]
  28. Toonstra, J.; Kinsner, W. Transient analysis and genetic algorithms for classification. In Proceedings of the IEEE WESCANEX 95. Communications, Power, and Computing. Conference Proceedings, Winnipeg, MB, Canada, 15–16 May 1995; pp. 432–437. [Google Scholar] [CrossRef]
  29. Song, P.; Zhang, N.; Zhang, H.; Guo, F. Blind Estimation Algorithms for I/Q Imbalance in Direct Down-Conversion Receivers. In Proceedings of the IEEE 88th Vehicular Technology Conference (VTC-Fall), Chicago, IL, USA, 27–30 August 2018; pp. 1–5. [Google Scholar] [CrossRef]
  30. Klein, R.; Temple, M.; Mendenhall, M.; Reising, D. Sensitivity Analysis of Burst Detection and RF Fingerprinting Classification Performance. In Proceedings of the IEEE International Conference on Communications (ICC), Dresden, Germany, 14–18 June 2009; pp. 1–5. [Google Scholar] [CrossRef]
  31. Jana, S.; Kasera, S. On fast and accurate detection of unauthorized wireless access points using clock skews. IEEE Trans. Mob. Comput. 2010, 9, 449–462. [Google Scholar] [CrossRef]
  32. Xie, Z.; Xu, L.; Ni, G.; Wang, Y. A new feature vector using selected line spectra for pulsar signal bispectrum characteristic analysis and recognition. Chin. J. Astron. Astrophys. 2007, 7, 565–571. [Google Scholar] [CrossRef]
  33. Patel, H.; Temple, M.; Baldwin, R. Improving ZigBee device network authentication using ensemble decision tree classifiers with radio frequency distinct native attribute fingerprinting. IEEE Trans. Rel. 2015, 64, 221–233. [Google Scholar] [CrossRef]
  34. Xiao, Y.; Zhang, X. Micro-UAV detection and identification based on radio frequency signature. In Proceedings of the International Conference on Systems and Informatics (ICSAI), Shanghai, China, 2–4 November 2019; pp. 1056–1062. [Google Scholar] [CrossRef]
  35. Xie, Y.; Jiang, P.; Gu, Y.; Xiao, X. Dual-Source Detection and Identification System Based on UAV Radio Frequency Signal. IEEE Trans. Instrum. Meas. 2021, 70, 2006215. [Google Scholar] [CrossRef]
  36. Li, Q.; Wang, F.; Zhang, Y.; Li, R.; Shi, S.; Li, Y. Novel Micro-UAV Identification Approach Assisted by Combined RF Fingerprint. IEEE Sens. J. 2024, 24, 26802–26813. [Google Scholar] [CrossRef]
  37. Zhu, G.; Li, S.; Liu, Y.; Zhu, Q.; Zhang, J.; Mao, K.; Zhou, Z. Space clustering and identification based on full-domain channel characteristics for UAV communication networks. Chin. J. Radio Sci. 2024, 39, 432–441. (In Chinese) [Google Scholar] [CrossRef]
  38. Mao, K.; Zhu, Q.; Wang, X.; Ye, X.; Gomez-Ponce, J.; Cai, X. A Survey on Channel Sounding Technologies and Measurements for UAV-Assisted Communications. IEEE Trans. Instrum. Meas. 2024, 73, 8004624. [Google Scholar] [CrossRef]
  39. Li, M.; Hao, D.; Wang, J.; Wang, S.; Zhong, Z.; Zhao, Z. Intelligent Identification and Classification of Small UAV Remote Control Signals Based on Improved Yolov5-7.0. IEEE Access 2024, 12, 41688–41703. [Google Scholar] [CrossRef]
  40. Radio-Frequency Control and Video Signal Recordings of Drones. Available online: https://zenodo.org/records/4264467 (accessed on 25 August 2025).
Figure 1. Passive UAV detection and identification systems via RF communication signal sensing.
Figure 4. Hardware architecture of the proposed passive UAV detection and identification system.
Figure 5. YOLO-based detection model for detecting FHS regions.
Figure 6. CNN-based identification model for identifying UAV models.
Figure 7. Tested UAV models: (a) Assembled UAV. (b) DJI Mini.
Figure 8. UAV signal spectrograms: (a) Assembled UAV. (b) DJI Mini.
Figure 9. Measurement positions at the Technical University of Madrid. (The base map is sourced from Google Maps.)
Figure 10. Measured equipment: (a) Mobile UAV detection and identification system. (b) Assembled UAV Controller.
Figure 11. Spectrograms collected from various test positions: (a) Position 1 (LoS). (b) Position 2 (LoS). (c) Position 3 (LoS). (d) Position 4 (NLoS). (e) Position 5 (NLoS). (f) Position 6 (LoS).
Figure 12. Performance of the proposed UAV identification model under ideal conditions.
Figure 13. Performance of the traditional UAV identification method. (a) LoS conditions. (b) NLoS conditions.
Figure 14. Performance of the proposed UAV identification method. (a) LoS conditions. (b) NLoS conditions.
Table 2. Spectran V6 PLUS Parameters.

Parameter                          | Value
-----------------------------------|----------------------------
Real-time Receiving Bandwidth      | Max 245 MHz
Real-time Transmitting Bandwidth   | 120 MHz
Minimum Detectable Signal Interval | Minimum 10 ns
Maximum Receiving Power            | +23 dBm
Maximum Transmitting Power         | +20 dBm
Internal Amplifier                 | −170 dBm/Hz
Amplitude Accuracy                 | Typical ±0.5 dB
Frequency Reference Accuracy       | 0.5 ppm
Resolution Bandwidth               | 62 MHz to 200 MHz
Attenuator Adjustment Range        | 50 dB/70 dB (0.5 dB steps)
FPGA Model                         | XC7A200T-2
GPS Synchronization                | Timestamp precision ±10 ns
Table 3. Accuracy comparison of UAV detection methods.

Test Position | Traditional Method Accuracy [39] | Proposed Method Accuracy | Difference
--------------|----------------------------------|--------------------------|-----------
1 (LoS)       | 87.6%                            | 96.3%                    | +8.7%
2 (LoS)       | 77.4%                            | 89.3%                    | +11.9%
3 (LoS)       | 75.7%                            | 86.5%                    | +10.8%
6 (LoS)       | 84.5%                            | 92.2%                    | +7.7%
4 (NLoS)      | 61.0%                            | 81.1%                    | +20.1%
5 (NLoS)      | 57.9%                            | 72.8%                    | +14.9%

Share and Cite

MDPI and ACS Style

Zhu, G.; Briso, C.; Liu, Y.; Lin, Z.; Mao, K.; Li, S.; He, Y.; Zhu, Q. An Intelligent Passive System for UAV Detection and Identification in Complex Electromagnetic Environments via Deep Learning. Drones 2025, 9, 702. https://doi.org/10.3390/drones9100702

