Article

Novel Matching Algorithm for Effective Drone Detection and Identification by Radio Feature Extraction

1
School of Cyberspace Science and Technology, Beijing Institute of Technology, Beijing 100081, China
2
Beijing Key Laboratory of Software Security Engineering Technology, Beijing Institute of Technology, Beijing 100081, China
3
School of Computer Science, Beijing Institute of Technology, Beijing 100081, China
*
Author to whom correspondence should be addressed.
Information 2025, 16(7), 541; https://doi.org/10.3390/info16070541
Submission received: 22 April 2025 / Revised: 13 June 2025 / Accepted: 25 June 2025 / Published: 25 June 2025

Abstract

With the rapid advancement of drone technology, the demand for the precise detection and identification of drones has been steadily increasing. Existing detection methods, such as radio frequency (RF), radar, optical, and acoustic technologies, often fail to meet the accuracy and speed requirements of real-world counter-drone scenarios. To address this challenge, this paper proposes a novel drone detection and identification algorithm based on transmission signal analysis. The proposed algorithm introduces an innovative feature extraction method that enhances signal analysis by extracting key characteristics from the signals, including bandwidth, power, duration, and interval time. Furthermore, we developed a signal processing algorithm that achieves efficient and accurate drone identification through bandwidth filtering and the matching of duration and interval time sequences. The effectiveness of the proposed approach is validated using the DroneRF820 dataset, which is specifically designed for drone identification and counter-drone applications. The experimental results demonstrate that the proposed method enables highly accurate and rapid drone detection.

1. Introduction

Recent advancements in unmanned aerial vehicle (UAV) technology, along with its extensive applications in military [1], civil [2], and commercial sectors [3,4], have established UAVs as versatile mobile radio communication platforms that deliver improved efficiency and convenience across various tasks. However, these developments present substantial challenges related to radio communication security, particularly as the communication landscape becomes more complex [5]. The limited availability of spectrum resources and intense competition have intensified issues such as spectrum congestion and interference [6], potentially degrading or interrupting communication quality. Additionally, UAVs are susceptible to significant wireless security threats, such as unauthorized access, commandeering, and various other hazards [7], thereby mandating the adoption of stringent protocols to safeguard the integrity and dependability of communications.
To address unauthorized UAV flights, research has focused on four primary detection methods: optical detection [8,9,10,11,12], acoustic detection [13,14,15], radar detection [16], and radio detection [17]. Considering environmental factors such as weather and detection distance limitations, optical and acoustic detection methods have been excluded. Additionally, due to the limited battery capacity of drones, the power of onboard radio equipment is constrained; therefore, in scenarios where power consumption and cost are critical, radio detection is preferred over radar detection. Radio detection can effectively capture radio frequency (RF) signals within the same frequency band using signal detection technology, thereby identifying the electromagnetic signals emitted by drones through various recognition algorithms. Recognition methods for drone signals are typically categorized into two types: video transmission signals (VTSs), which carry real-time video data, and flight control signals (FCSs), which are essential for command and control communications between the drone and its ground station. Flight control signals typically employ frequency hopping for spread spectrum communication. Recent advancements in radio communication principles have led to an increased focus on using radio detection methods for unmanned aerial vehicles (UAVs), primarily centered around the extraction of signal characteristics. Xu Chengtao et al. [18] utilized Short-Time Fourier Transform (STFT) to extract time–frequency energy features from mapping signals in single transient control spectrograms. Subsequently, they used Principal Component Analysis (PCA) for dimensionality reduction and applied Support Vector Machine (SVM) and K-Nearest Neighbor (KNN) algorithms for signal classification before PCA. Although the application of PCA significantly improved classification performance, it is more suitable for scenarios with multiple correlated features. 
Given that radio signals typically comprise fewer than ten features, despite empirical evidence supporting its efficacy, the applicability and theoretical justification of PCA in this context remain questionable. In an alternative approach, Yuelei Xie et al. [19] proposed a method for detecting and identifying unmanned aerial vehicles (UAVs) based on flight control signals (FCSs). This method emphasizes the preprocessing of FCSs to extract three key features: signal bandwidth, signal duration, and frequency of occurrence. These features are then matched against a feature database for UAV identification. However, this approach is susceptible to interference from Bluetooth, WiFi, and other signals, which creates a congested spectral environment that complicates real-time UAV detection. Additionally, some UAVs may enter automatic navigation mode when battery levels are low, resulting in the absence of generated flight control signals. In contrast, video transmission signals (VTSs) are generally more stable, exhibit greater strength, and are easier to receive, making VTS detection a more effective method for UAV identification. Despite the availability of various detection methods, significant room for improvement remains. Wei N et al. [20] extracted multidimensional signal features, including Signal Frequency Spectrum (SFS), Wavelet Energy Entropy (WEE), and Power Spectrum Entropy (PSE), by monitoring radio signals and Channel State Information (CSI) exchanged between drones and their controllers. They employed machine learning algorithms for drone detection and localization, thereby enhancing detection accuracy and localization precision while minimizing environmental sensitivity. Olusiji et al. [21] focused on extracting time–frequency energy domain features from flight control signals (FCSs) and video transmission signals (VTSs) using the Hilbert–Huang Transform and wavelet packet transform. 
They employed a stacked denoising autoencoder and a local outlier factor algorithm within a semi-supervised learning framework to effectively distinguish between drones and their control signals, achieving the precise identification and classification of drones.
Although recent research has achieved considerable progress in the accuracy of UAV identification, existing techniques rely on converting radio signals into image representations and employing neural networks for recognition. This approach results in slow identification speeds, making it difficult to meet the demands of real-world applications. Therefore, it is particularly important to explore methods that enable direct identification without preprocessing to simultaneously improve both accuracy and speed.
On the other hand, to promote the development of RF signal-based anti-drone monitoring technology, it is crucial to establish a robust RF dataset for drones. Although the literature [22] presents a dataset with three UAV models and advocates for an open, maintainable database of UAV RF signals, it has not been expanded to meet researchers’ needs for diverse UAV types, analyzable frequency bands, and signal durations. Due to data privacy concerns, many researchers [19,23,24] rely on proprietary, non-open-source datasets for validation, hindering fair and transparent evaluation criteria for various methods. As UAV technology evolves, transmission systems have shifted from single-band to dual-band communication [25,26,27], yet most current datasets and studies focus primarily on the 2.4 GHz band. To address these limitations, we introduce the dataset named DroneRF820, which was collected using dual-band simultaneous monitoring techniques.
To mitigate the shortcomings inherent in current techniques for the detection and identification of drones via radio frequency signals and address the paucity of accessible datasets, this research presents four pivotal contributions:
(a)
We propose a novel drone detection method that identifies the presence of unmanned aerial vehicle (UAV) signals through power detection, filters out non-drone signals using bandwidth analysis, and matches drone signals based on time-series characteristics, thereby achieving the precise recognition of UAV signals.
(b)
We present a method to evaluate the effectiveness of UAV feature matching, establishing reliable metrics to improve identification accuracy.
(c)
We introduce DroneRF820 (https://pan.quark.cn/s/ae18fe3731da (accessed on 13 June 2025)), a new dataset collected through dual-band simultaneous monitoring. It contains RF signals from eight common UAVs and their flight controllers, offering high signal-to-noise ratio data for developing and validating detection methods.
(d)
We evaluate the performance of the recognition algorithm using the datasets. Compared to previous methods, our algorithm demonstrates advantages in both accuracy and speed.
The remainder of this paper is organized as follows. Section 2 introduces related work. Section 3 describes the data acquisition methods. Section 4 presents the proposed drone detection methodology. Section 5 discusses the experiments and analysis in detail. Finally, Section 6 concludes the paper and outlines future work.

2. Related Works

2.1. Principles of UAV Communication

A UAV primarily communicates with a ground station through video signal transmission [28,29], while the ground station is responsible for information acquisition and command control of the UAV. As unmanned aircraft controlled via radio remote-control devices or onboard computers, UAVs are widely employed for high-resolution video image acquisition, remote video transmission, and command-and-control operations. The ground station can rapidly transmit high-altitude images in real time to a command center, which can also issue cruise commands and other directives. Although this process may appear straightforward, the ability of a UAV to perform these tasks largely depends on its flight control system, the combination of gimbal and camera, and the wireless image transmission system.
The image transmission protocol is a set of rules and technical standards that govern the transmission of image data between UAVs and ground stations. This protocol ensures the clarity and integrity of video and image data during transmission. It encompasses various techniques, including modulation and demodulation, video and image compression coding (such as H.264, H.265, and JPEG), as well as error correction and anti-jamming techniques such as Forward Error Correction (FEC), Frequency Hopping Spread Spectrum (FHSS), and Direct Sequence Spread Spectrum (DSSS). By employing these efficient data compression, coding, secure encryption, and anti-jamming techniques, the image transmission protocol provides ground stations with real-time visual information to assist in the identification and monitoring of UAV activities.
Different brands of UAVs utilize distinct image transmission protocols. For instance, Dowtone operates in the 2.4 GHz, 5.2 GHz, and 5.8 GHz bands, emphasizing penetration and transmission distance, while DJI’s OcuSync and Lightbridge technologies primarily utilize the 2.4 GHz and 5.8 GHz bands to accommodate globally available frequency bands. Flymy employs similar frequency bands; however, the specific technologies differ.
According to the nature of UAV communications, various types of UAVs may employ different image transmission protocols. Based on this characteristic, we can analyze the signal characteristics in these protocols, such as frequency, bandwidth, temporal characteristics, and modulation methods, which can be used to identify and differentiate various UAV types. Furthermore, image transmission protocols not only provide essential data support for UAV identification but also enhance the security and reliability of the system. For example, in terms of network protocols, the Transmission Control Protocol (TCP) provides reliable connection services, while the User Datagram Protocol (UDP) offers low-latency, connectionless services. In terms of security, encryption and authentication mechanisms ensure secure data transmission, preventing unauthorized drones from entering sensitive areas by verifying their identities.
In UAV image transmission, WiFi is widely adopted due to its cost-effectiveness and ease of implementation. Despite its limitations in transmission range and susceptibility to interference, WiFi remains dominant in consumer-grade drones. WiFi standards (802.11a/g/n/ac/ax) almost universally employ Orthogonal Frequency Division Multiplexing (OFDM) modulation for efficient data transmission. Through OFDM, WiFi can provide stable signal transmission in complex environments, such as those affected by multipath interference. This foundation sets the stage for subsequent research, focusing on the characteristics and methods of UAV identification through OFDM-modulated signals transmitted over WiFi.

2.2. UAV Signal Transmission Based on OFDM Modulation

Unmanned aerial vehicle (UAV) detection and identification primarily focus on video transmission systems (VTSs) and flight control systems (FCSs). As indicated in Section 2, most consumer-grade UAVs that utilize WiFi communication employ Orthogonal Frequency Division Multiplexing (OFDM) for modulation. Therefore, our research primarily targets OFDM-modulated VTSs and FCSs. OFDM is a modulation technique that ensures the stable transmission of video data over frequency-selective communication channels. It achieves efficient spectrum utilization by distributing high-speed data streams into multiple parallel low-speed substreams while maintaining orthogonality among these substreams [27]. Moreover, due to its unique signal structure, OFDM exhibits excellent resistance to multipath interference, effectively reducing inter-symbol interference and maintaining communication stability in multipath propagation environments. Given these two characteristics, OFDM is extensively utilized in modern wireless communication systems for multi-carrier modulation, playing a crucial role in UAV video transmission.
The bandwidths for OFDM-modulated UAVs and remote control signals are 10 MHz and 20 MHz, respectively. UAV signals can automatically select communication channels, while flight control signals can adaptively adjust their center frequency to change channels as needed, thereby enhancing the flexibility and adaptability of the acquisition system. During the signal transmission process, the distinctive periodic spectrum characteristics of OFDM modulation provide significant advantages for UAV identification. By applying spectrum analysis, cyclostationary analysis, and feature matching techniques, UAV activities can be accurately detected and identified. Assuming that the original data stream is represented as X(n), where n is the index of the subcarriers, the time-domain signal x(k) after allocation through the subcarriers is given by the following formula:
x(k) = \frac{1}{N} \sum_{n=0}^{N-1} X(n)\, e^{j 2\pi nk/N}
where N is the number of IFFT points and x(k) represents the generated time-domain signal. To mitigate Inter-symbol Interference (ISI), a cyclic prefix (CP) is introduced by copying the tail of the OFDM symbol to the beginning, with a length typically exceeding the maximum delay spread of the channel. Assuming this length is L, the signal after adding the CP is expressed as
\tilde{x}(k) = \left[\, x(k-L),\; x(k-L+1),\; \ldots,\; x(k+N-1) \,\right]
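As a concrete illustration (not the paper's code), the IFFT-plus-cyclic-prefix construction above can be sketched in Python with NumPy; the subcarrier count N, the prefix length L, and the random QPSK data are assumed values:

```python
import numpy as np

# Illustrative sketch of OFDM symbol generation: QPSK-map random bits,
# apply the inverse DFT of the modulation formula, then prepend a
# cyclic prefix of length L (N and L are assumed values).
N = 64   # number of subcarriers / IFFT points (assumption)
L = 16   # cyclic-prefix length, chosen > max channel delay spread (assumption)

rng = np.random.default_rng(0)
bits = rng.integers(0, 2, size=2 * N)
# QPSK mapping: bit pairs -> (I, Q) components in {-1, +1}
X = (2 * bits[0::2] - 1) + 1j * (2 * bits[1::2] - 1)

# x(k) = (1/N) * sum_n X(n) e^{j 2 pi n k / N}; numpy's ifft includes the 1/N factor
x = np.fft.ifft(X)

# Cyclic prefix: copy the tail of the symbol to its beginning
x_cp = np.concatenate([x[-L:], x])
assert len(x_cp) == N + L
```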
OFDM achieves orthogonality by appropriately selecting the subcarrier spacing, typically given by Δf = 1/T_s, where T_s denotes the OFDM symbol duration. Under ideal conditions, the interference between subcarriers is zero, adhering to the orthogonality condition as follows:
\int_{0}^{T} e^{j 2\pi (f_i - f_j) t}\, dt = 0 \quad (i \neq j)
When a signal passes through a multipath channel, it is represented as
y(k) = h(k) * \tilde{x}(k) + n(k)
where h(k) and n(k) denote the channel response and noise, respectively. At the receiving end, the received signal can be transformed back to the frequency domain using the Fast Fourier Transform (FFT):
Y(n) = \sum_{k=0}^{N-1} y(k)\, e^{-j 2\pi nk/N}
OFDM is commonly combined with efficient spectral modulation techniques such as QPSK and QAM. Assuming the use of QAM, the data is mapped to the in-phase (I) and quadrature (Q) components via the modulation technique; the symbol mapping can be expressed as X[n] = I + jQ, where I and Q represent the real and imaginary parts of the symbol, respectively, determined by the modulation order. These formulas and steps collectively establish the mathematical framework for the OFDM modulation and demodulation processes, enabling the efficient transmission of image data over frequency-selective channels. In modern UAV communications, the integration of OFDM with WiFi not only enhances the transmission efficiency of image data but also opens new avenues for UAV identification research. Owing to their distinctive spectral characteristics, OFDM signals carried over WiFi allow UAV activities to be accurately detected and identified through spectrum analysis, cyclostationary analysis, and feature matching techniques. This multi-carrier modulation scheme offers flexible spectral utilization and robust resistance to signal interference, significantly enhancing the efficiency and reliability of UAV communication systems.
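Under ideal, noise-free single-path conditions, the demodulation chain above (strip the cyclic prefix, then apply the FFT) recovers the transmitted symbols exactly. A minimal sketch, with all parameter values assumed:

```python
import numpy as np

# End-to-end OFDM round trip under an ideal channel (illustrative only).
N, L = 64, 16                            # subcarriers and CP length (assumptions)
rng = np.random.default_rng(1)
bits = rng.integers(0, 2, size=2 * N)
X_tx = (2 * bits[0::2] - 1) + 1j * (2 * bits[1::2] - 1)   # QPSK symbols

x = np.fft.ifft(X_tx)                    # OFDM modulation (IFFT)
x_cp = np.concatenate([x[-L:], x])       # add cyclic prefix

y = x_cp                                 # ideal channel: h(k) = delta, n(k) = 0
X_rx = np.fft.fft(y[L:])                 # remove CP, then FFT (Y(n) above)

assert np.allclose(X_rx, X_tx)           # symbols recovered exactly
```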

3. Data Acquisition and Analysis

This study presents DroneRF820, a publicly available radio frequency dataset developed to facilitate counter-drone research. The dataset features a high signal-to-noise ratio (SNR) and encompasses eight models from three distinct brands of consumer drones. All data were collected in a confined underground parking facility situated on the second basement level. This controlled environment significantly mitigated radio interference, thereby providing optimal conditions for precise signal analysis.

3.1. Data Collection

To facilitate effective data collection, this study employed a variety of hardware and software tools, including a host computer, software-defined radio (SDR) devices, drones, remote controllers, and displays. Detailed configurations of the equipment are outlined in Figure 1. The drones utilized in this research were sourced from three brands—DJI, Autel, and FIMI—as well as a custom-built racing drone, resulting in a total of eight distinct models. The specific models included the AUTEL Evo Lite (Autel Robotics, Shenzhen, China), DJI Air 2, DJI Air 3, DJI Mavic 3 Classic, DJI Mini 4 Pro (DJI, Shenzhen, China), FIMI X8 Pro (FIMI, Shenzhen, China), FPV 1.2G, and FPV 5.8G. These models encompass the commonly used radio frequency bands of 2.4 GHz and 5.8 GHz.
During the data collection phase, the work was performed in an underground parking facility located on the second basement level, utilizing software-defined radio (SDR) devices for signal acquisition. Throughout the process, the distance between the drone and the SDR device was consistently maintained within approximately 10 to 20 m, with the antenna of the SDR device continually oriented toward the drone to minimize signal interference. The drone remained connected and in flight mode to capture the corresponding radio transmission signals. For each type of drone, ten radio signal collection sessions were conducted in both standby and flight modes. The bandwidth for the initial five sessions was set to 10 MHz, while the subsequent five sessions were configured to 20 MHz. In both bandwidths, the first two sessions were conducted within the 2.4 GHz–2.5 GHz frequency range and the last three sessions were conducted within the 5.7 GHz–5.8 GHz frequency range, with the sampling rate fixed at 61.44 MHz for each session. Additionally, the signal collection in these two bandwidths encompassed the 2.4 GHz and 5.8 GHz frequency bands. To minimize signal-to-noise ratio (SNR) interference, the remote controller was positioned behind the SDR receiver during data collection, and Bluetooth and WiFi functionalities on mobile devices were disabled to mitigate external disturbances. Specific acquisition parameters are provided in Table 1.
The data collection duration for each drone was set to 200 ms, employing Orthogonal Frequency Division Multiplexing (OFDM) modulation, with frequencies spanning the 2.4 GHz and 5.8 GHz bands. The file names specified the drone brand, model, sampling rate (samples per second, sps), center frequency, and sampling bandwidth (bw). All variables could be extracted from the file names; those labeled “fly” indicate data collected during the drone’s takeoff state, while file names without this designation represent data collected during the drone’s standby state. All files were stored in binary format to facilitate efficient data processing and analysis, ensuring the integrity of data transmission and storage.

3.2. Time–Frequency Analysis of UAV Signal Measurements

To illustrate the communication signal transmission characteristics described in Section 2.1 and Section 2.2, we conducted a time–frequency analysis of the drone signals, revealing their dynamic changes in the frequency domain. The time–frequency representation illustrates the variation of signal energy over time, facilitating the observation of energy distribution across different frequencies. In standby mode, the signal was primarily concentrated in the 2.4 GHz band, exhibiting a distinct single peak, which indicated a clear signal with minimal interference. Conversely, during flight mode, the time–frequency representation displayed a broader frequency distribution, with the emergence of multiple frequency components indicating that the signal was significantly influenced by environmental noise. This time–frequency analysis allowed us to quantify the energy distribution characteristics across frequency bands and further investigate the reliability of the signals in dynamic environments and their implications for drone operations.
This study employed zero intermediate frequency (IF) data collection techniques, utilizing specifically in-phase and quadrature (IQ) data for the analysis. After demodulation, these data could be represented in time–frequency plots, as shown in Figure 2 and Figure 3. The acquired signals exhibited clear imagery, indicating a high signal-to-noise ratio (SNR), with the effective power of the signals significantly enhanced relative to the noise power. From the collected drone video transmission signals (VTSs), we could extract three key features: signal duration, signal interval time, and bandwidth.
  • Signal duration refers to the length of a data packet in time, i.e., the time interval over which a single signal pulse or data packet is transmitted.
  • Signal interval time denotes the time gap between consecutive signals.
  • Bandwidth indicates the frequency range occupied by the signal, which determines the data transmission rate and signal quality.
In radio communication systems, physical layer signals are defined by strict format specifications. Consequently, different UAVs, due to their use of distinct video transmission protocols, exhibit significant differences in these three characteristics. Furthermore, the signals manifest as blocks of varying magnitudes over time, with each block representing the signal’s power level. The lengths of signal duration and interval time differ, resulting in distinct block representations, while the bandwidth remains constant, indicating the temporal pattern within the signal. By matching the extracted features of the collected UAV signals with a knowledge base, rapid and accurate UAV identification can be achieved. The thresholds and tolerances are determined according to the explicit definitions provided in the target radio physical layer protocol. For example, in the LTE protocol, the duration of a radio frame is 10 ms and the duration of a time slot is 0.5 ms.
Figure 3a illustrates the VTS for DJI Air2 and Figure 3b,c illustrate the VTSs and FCSs for FIMI X8 Pro and AUTEL Evo Lite, respectively. The FCSs automatically adjusted the center frequency, while the VTSs remained stable within a specific channel for a certain duration. If the channel was manually switched, the center frequency of the signal shifted, as depicted in Figure 3d.

4. Methods

In Section 2 and Section 3, the principles and models of drone communication were described, along with an interpretation of the communication acquisition methods and measured signals of drone communication. This section provides a detailed explanation of the multi-feature extraction and matching recognition algorithms based on the signals.

4.1. Signal Preprocessing Module

The sampling bandwidth of the software-defined radio (SDR) was set to 50 MHz to sample signals and obtain IQ data, with a sampling rate of f_s = 61.44 × 10^6 samples per second for each acquisition. The signal preprocessing module converted the IQ data from time-domain signals into time–frequency representations, thereby facilitating feature extraction. This module consisted of two operations: a windowing operation and a Short-Time Fourier Transform (STFT) operation. Compared to the FFT, the STFT segmented the signal into short time windows and performed a Fourier transform on each window. The STFT was computed as follows: first, define a window length N and a window function g(τ) to smooth the signal; then, perform the sliding-window STFT operation on the collected complex signal segment x(t):
X[m, n] = \sum_{j=0}^{N-1} x[j]\, g[j - mR]\, e^{-i 2\pi n j / N}
After applying the Short-Time Fourier Transform (STFT), a complex matrix X [ m , n ] was obtained, representing the spectral coefficients at the m-th time frame and the n-th frequency point. The term x [ j ] denotes the value of the time-domain signal at index j, while R represents the overlap stride of the window, which determines the relationship between adjacent windows. In this step, the window size was set to 1024. Subsequently, the frequency value f n of the n-th frequency component in the spectrum was calculated according to the specified sampling rate, effectively converting the spectral data into frequency axes. The formula was given by
f_n = \frac{n}{N} \cdot f_s
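The preprocessing steps above can be sketched with SciPy's STFT; the window size of 1024 matches the text, while the hop size, the window shape, and the stand-in IQ data are assumptions:

```python
import numpy as np
from scipy import signal

fs = 61.44e6        # sampling rate from the acquisition setup
N = 1024            # STFT window length (as in the text)
R = 512             # hop size / overlap stride (assumed 50% overlap)

# Stand-in complex IQ samples; real use would load the recorded binary files
rng = np.random.default_rng(0)
iq = rng.normal(size=8192) + 1j * rng.normal(size=8192)

f, t, Z = signal.stft(iq, fs=fs, window="hann", nperseg=N,
                      noverlap=N - R, return_onesided=False)
# Z[n, m] is the spectral coefficient at frequency bin n and time frame m;
# f realizes f_n = (n / N) * fs, wrapped to +/- fs/2 for complex input
```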

4.2. UAV Feature Extraction Module

When the signal spectrum was detected and the device received the drone’s radio signal, the signal power P increased significantly. Based on this change in power, a threshold T was established, defined as T = a · max(P[x]), a ∈ [0, 1]; in this work, a = 0.1, i.e., 10% of the maximum observed power. Newly collected signals whose power exceeded this threshold were classified as valid drone signals. The power calculation formula was given by
P[x] = |I + jQ|^2
where I and Q represent the in-phase and quadrature components of the orthogonal signal, respectively.
By traversing the power sequence P [ x ] , we identified signal segments that exceeded the threshold, satisfying the condition
P[x] \geq T
For each segment i of the signal that exceeded the threshold T, its duration D i and interval I i were computed as follows:
D_i = t_{\mathrm{end},i} - t_{\mathrm{start},i}
I_i = t_{\mathrm{start},i+1} - t_{\mathrm{end},i}
Using the above formulas, we obtained the duration and interval of each signal segment. In this study, the threshold was set to 10% of the maximum power found in the collected radio signals; power exceeding this threshold indicated that a radio signal from a drone had been received. We then traversed the power sequence to identify continuous segments exceeding the threshold, each of which signified the presence of a single signal. Finally, based on the previously described steps, the valid frequencies f_n were obtained. The maximum frequency was f_max = max(f_n) and the minimum frequency was f_min = min(f_n). The bandwidth was then determined using the following formula:
BW = f_{max} - f_{min}
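The thresholding and segment-extraction steps above can be sketched as follows; the synthetic power sequence, the use of sample indices as time units, and the helper name are illustrative assumptions:

```python
import numpy as np

def extract_segments(P, a=0.1):
    """Find contiguous segments of P above T = a * max(P); return, per
    segment, the start/end sample indices, the durations D_i, and the
    intervals I_i between consecutive segments (all in samples)."""
    T = a * np.max(P)
    above = np.concatenate([[False], P >= T, [False]])
    d = np.diff(above.astype(int))
    starts = np.flatnonzero(d == 1)       # first index of each segment
    ends = np.flatnonzero(d == -1)        # one past the last index
    durations = ends - starts             # D_i = t_end,i - t_start,i
    intervals = starts[1:] - ends[:-1]    # I_i = t_start,i+1 - t_end,i
    return starts, ends, durations, intervals

# Example: two bursts in an otherwise quiet power sequence
P = np.array([0, 0, 5, 5, 5, 0, 0, 0, 9, 9, 0], dtype=float)
starts, ends, durations, intervals = extract_segments(P)
# durations -> [3, 2]; intervals -> [3]
```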

4.3. UAV Feature Reconstruction Module

Utilizing the previously discussed feature extraction module, we obtained lists of signal durations, signal intervals, and bandwidths from a long signal. The bandwidth was primarily employed to distinguish video transmission signals from remote control signals, while the signal duration and signal interval were mainly used to differentiate the various video transmission signals. The three feature lists were filtered based on the observation that the bandwidths of environmental noise and of remote control signals differ from those of video transmission signals. Consequently, we derived the signal duration list, signal interval list, and bandwidth list specifically for UAV video transmission signals, represented as follows:
D = [D_1, D_2, \ldots, D_m]
I = [I_1, I_2, \ldots, I_m]
BW = [BW_1, BW_2, \ldots, BW_m]
Next, based on D and I, we constructed an original data list ( D , I ) , formatted as a combination of the signal duration and the signal interval for each signal segment, as follows:
(D, I) = \{\, (D_i, I_i) \mid i = 1, 2, \ldots, m \,\}
Considering that the new signal sequence maintained consistency with the original signal format but was shorter in length, only 4 to 8 small signal segments were selected for sliding window matching against the original signal sequence. An original data list was generated for all UAV models, with each model comprising 20 files. A raw database was established, incorporating the various UAV models and their corresponding multiple original data lists.
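The raw database described above can be organized, for instance, as a mapping from UAV model to the per-file (D, I) lists, with a helper that yields the short sliding windows used for matching. All names and numeric values here are illustrative placeholders, not measurements from the dataset:

```python
# Illustrative raw-database layout: model name -> one (D, I) list per file.
# Durations/intervals in milliseconds are placeholder values, not measurements.
raw_db = {
    "DJI Air 2": [
        [(2.0, 0.5), (2.0, 0.5), (2.0, 0.5), (2.0, 0.5), (2.0, 0.5)],
        [(2.0, 0.6), (2.0, 0.5), (2.0, 0.5), (2.0, 0.5), (2.0, 0.5)],
    ],
    "FIMI X8 Pro": [
        [(1.4, 1.1), (1.4, 1.1), (1.4, 1.1), (1.4, 1.1)],
    ],
}

def sliding_windows(di_pairs, win=4):
    """Yield windows of `win` consecutive (D, I) pairs, as used when
    matching a short new sequence against an original one."""
    for i in range(len(di_pairs) - win + 1):
        yield di_pairs[i:i + win]
```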

4.4. UAV Matching and Recognition Module

During the recognition process, the format of the new data aligned with that of the original data, both comprising two key features: the duration of the signal D and the interval time of the signal I. To achieve effective matching between the new signal and the original signal, the core concept of this algorithm was to compare the features of the new signal with those of the original signal using sliding window matching, as illustrated in Figure 4.
Specifically, the algorithm calculated the difference Δ ( t ) between the features by sliding the window over the original signal, using the following formula:
\Delta(t) = |D_{new}(t) - D_{orig}(t)| + |I_{new}(t) - I_{orig}(t)|
If the calculated difference Δ ( t ) was less than the specified tolerance ϵ , the two signals were considered to match effectively during that time segment, and the count of effective matches was incremented:
MatchCount \mathrel{+}= 1
In this context, N_{new} denotes the total number of signal segments in the new signal. The accuracy A, defined below, represents the effectiveness of feature matching between the new signal and the original signal. When assessing the drone type corresponding to the new signal, if the accuracy A exceeded the threshold A_{min}, the drone model associated with the new signal could be determined. The accuracy was computed as follows:
A = MatchCount / N_new
Through these steps, effective recognition of the new signal was achieved, enabling the precise identification of drone signals.
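The matching and judgment steps above can be sketched as follows. This is a minimal illustration over (D, I) lists stored as Python tuples; the tolerance ε and threshold A_min defaults are placeholders, not the tuned parameters:

```python
def match_accuracy(new_di, orig_di, eps):
    """Slide new_di over orig_di; at each offset, count positions where
    Δ = |D_new − D_orig| + |I_new − I_orig| < ε, and return the best
    accuracy A = MatchCount / N_new over all offsets."""
    n_new = len(new_di)
    best = 0.0
    for offset in range(len(orig_di) - n_new + 1):
        match_count = 0
        for (d_n, i_n), (d_o, i_o) in zip(new_di, orig_di[offset:offset + n_new]):
            if abs(d_n - d_o) + abs(i_n - i_o) < eps:  # Δ(t) < ε
                match_count += 1
        best = max(best, match_count / n_new)
    return best

def identify(new_di, raw_db, eps=1e-4, a_min=0.8):
    """Return the UAV model whose stored (D, I) lists best match new_di,
    provided the best accuracy exceeds A_min; otherwise None."""
    best_model, best_a = None, 0.0
    for model, di_lists in raw_db.items():
        for orig_di in di_lists:
            a = match_accuracy(new_di, orig_di, eps)
            if a > best_a:
                best_model, best_a = model, a
    return best_model if best_a > a_min else None
```

Because only 4 to 8 segments are matched per new signal, each sliding pass touches a handful of tuple comparisons, which is what keeps the recognition step fast.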

5. Experimental Design and Results Analysis

The primary objective of the experiments was to evaluate the effectiveness, robustness, and generalization capability of the proposed feature matching algorithm on diverse datasets. To achieve this, we selected two distinct datasets for comparative experiments: DroneRF820 and DroneRFa. The DroneRF820 dataset offered ideal experimental conditions for the algorithm due to its high signal-to-noise ratio, while the DroneRFa dataset encompassed various environments and conditions, further validating the algorithm’s applicability across different scenarios. In the experiments, all datasets were divided in an 8:2 ratio, with 80% of the data allocated for training and 20% for testing. This partitioning method ensured the representativeness of both the training and the testing sets, enabling a more reliable assessment of the algorithm’s performance. In addition, we also conducted a horizontal comparison of the experimental data with other advanced single-entity recognition methods to validate the advantages of this method further.

5.1. Controlled Experimental Design

Our algorithm is based on in-phase quadrature (IQ) sequences, which facilitate the collection of features such as signal duration, signal interval time, and bandwidth. These features are stored in a database. When new data is input, the system can directly extract the corresponding features and match them against the database. This process allows for the recording of the predicted drone model, which is then compared with the actual model present in the new data to enable accurate drone detection and identification. To quantify the performance of the algorithm, the accuracy A defined by Formula (18) was utilized as the primary evaluation metric.
Experiment 1: In the first experiment, we validated the proposed method using the collected DroneRF820 dataset, which included signals from eight drone models across three brands. The specific steps were as follows:
  • Dataset Partitioning: The signal data from the three brands and eight drone models were divided into training and testing sets in an 8:2 ratio.
  • Feature Extraction: In the training set, our algorithm extracted signal features to construct a drone signal feature library, while the testing set was utilized to evaluate the accuracy of the detection algorithm.
Experiment 2: To further validate the algorithm’s adaptability and effectiveness in various environments, we conducted a second round of experiments using the publicly available DroneRFa dataset. This dataset was recorded using dual-channel equipment and encompasses three ISM radio frequency bands, capturing real-world activities associated with the multi-frequency communication of drones. The dataset includes nine classes of signals from outdoor urban scenarios, fifteen classes from indoor urban environments, and one class of background reference signals. Each class contains at least twelve segments, with each segment comprising over 100 million sample points. This experiment primarily focuses on classifying and identifying the RF signals of drones in the dataset based on their distinctive features.

5.2. Experimental Results and Analysis

5.2.1. Experiment 1: Validation of the DroneRF820 Dataset

In the first experiment, we validated the proposed method using the self-collected DroneRF820 dataset. Analysis of the signal features of different drone models, as illustrated in Figure 3, revealed significant differences in signal duration and interval time. For instance, the DJI drones allowed manual channel adjustment, resulting in signal blocks concentrated at the channel center, whereas the Daotong drones lacked this capability, producing a more dispersed signal block distribution. Additionally, the Feimi drones exhibited stronger remote control signals with frequency-hopping behavior. When both the drone signals and the remote control signals exhibited frequency hopping, bandwidth filtering could be employed to distinguish them. Figure 5 compares the video transmission signals of the DJI Mini 4 Pro in the standby state (upper image) and the flight state (lower image). The differences in signal characteristics between the two states are evident, indicating variations in feature extraction under different operational conditions.
Figure 6 illustrates the power spectrum and cumulative power spectrum for VTS and FCS. Notable differences between the two signal types enabled the implementation of a drone identification algorithm that filtered drone signals based on the calculated bandwidth. Specifically, the signal bandwidth could be determined using the 1–99% power accumulation percentage method. By identifying the corresponding frequencies at the beginning and end of the rising segment in the spectrum, we could accurately ascertain the signal bandwidth. This process effectively filtered out non-target signals, thereby ensuring the accuracy of feature extraction. Subsequently, we extracted four primary features: signal bandwidth, signal power, signal duration, and signal interval time, which were then stored in a feature database for drone signal identification and detection.
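The 1–99% power-accumulation bandwidth estimate can be sketched as follows. This is a minimal NumPy illustration that assumes the PSD has already been computed; the function name and signature are our own:

```python
import numpy as np

def occupied_bandwidth(freqs, psd, lo=0.01, hi=0.99):
    """freqs: frequency bins (Hz); psd: power per bin (same length).
    Returns (f_low, f_high, bandwidth), where f_low and f_high are the
    frequencies at which cumulative power crosses 1% and 99% of the
    total, i.e. the start and end of the rising segment."""
    cum = np.cumsum(psd)
    cum = cum / cum[-1]  # normalize cumulative power to [0, 1]
    f_low = freqs[np.searchsorted(cum, lo)]
    f_high = freqs[np.searchsorted(cum, hi)]
    return f_low, f_high, f_high - f_low
```

A burst whose estimated bandwidth falls outside the expected VTS range can then be discarded before feature extraction, which is the filtering role this estimate plays above.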
In experiments utilizing sliding window matching, our algorithm achieved 100% recognition accuracy on the 20% test split of the DroneRF820 dataset, demonstrating the effectiveness of the method.

5.2.2. Experiment 2: Validation of the DroneRFa Dataset

To further validate the adaptability and effectiveness of the proposed algorithm across various environments, we performed a second round of experiments using the publicly available DroneRFa dataset [23], described in Section 5.1. Beyond the class composition and segment sizes summarized there, the dataset provides detailed annotations of the drones' outdoor flight distance and operating frequency, and its data extraction methodology combines prefix characters with binary encoding. This experiment focused on classifying and identifying the drone RF signals in the dataset based on their distinctive features.
In this experiment, we likewise split the dataset 8:2, with 80% of the data used to build the feature library and 20% used for accuracy evaluation. Because the dataset contains some mislabeled data, we removed these samples by visual inspection before validating our algorithm. Figure 7 shows the time–frequency representations of six different drones from the DroneRFa dataset. Figure 8 shows the signal quality of different drones in real environments, including environmental noise and other interference. Using our algorithm, we could judge whether a drone signal was present based on power and distinguish drones from other signals and noise using bandwidth filtering.
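The two-stage check described above, a power test for signal presence followed by a bandwidth gate separating drone signals from noise and other transmissions, can be sketched as a simple predicate. Every threshold value here is an illustrative assumption, not a measured parameter:

```python
def is_drone_burst(mean_power_db, bandwidth_hz,
                   power_thresh_db=-80.0, bw_min=8e6, bw_max=22e6):
    """Stage 1: reject bursts whose power sits at the noise floor.
    Stage 2: keep only bursts whose bandwidth matches a drone VTS."""
    if mean_power_db <= power_thresh_db:
        return False  # no signal present above the noise floor
    return bw_min <= bandwidth_hz <= bw_max
```

Bursts that pass both stages proceed to (D, I) feature extraction and sliding-window matching; everything else is dropped early, which keeps the matching stage fast in noisy environments.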
The experimental results indicated that the algorithm effectively filtered FTSs, noise, and other signals within the DroneRFa dataset, accurately identifying UAV signals. It demonstrated robust performance across multiple datasets, achieving an average identification accuracy of 99.99%. Furthermore, an analysis of the model’s recognition performance under varying signal lengths and frequency resolutions revealed that the model exhibited heightened sensitivity to changes in time-domain information. While maintaining recognition accuracy, appropriately compressing the time-domain data could significantly enhance the model’s real-time performance, thereby better addressing the demands for rapid UAV identification in low-latency environments.

5.3. Comparison with Other Methods

We conducted a horizontal comparison of our method against other advanced methods in terms of accuracy, covering approaches based on radar detection, acoustic detection, visual detection, and pre-trained models. The performance of these approaches is summarized in Table 2.
Through the analysis of experimental data, the target drone recognition method based on feature extraction demonstrated the following advantages:
  • Rapid training rate. Compared to other machine learning methods that require extensive image data for training, our approach only necessitates the extraction of a small number of drone features to achieve recognition. This significantly accelerates the training speed of the recognition model.
  • Real-time training capability. Due to the characteristic of training models based on extracting a limited set of features, our recognition model can commence training as soon as a drone is detected. This effectively enhances the precise recognition of drones in complex and variable recognition tasks.
  • High recognition accuracy. Compared to other methods, our experimental data shows an accuracy rate of 100%, which is considerably higher than other mainstream recognition methods.

6. Conclusions

This paper presented a novel algorithm that effectively detects and identifies UAVs by leveraging transmission signal features. By extracting key characteristics such as bandwidth, power, duration, and interval time from the signals, the algorithm can accurately capture the VTSs of UAVs, addressing the limitations of traditional methods that struggle to achieve both high identification accuracy and speed. In addition, the enhanced robustness of the algorithm against noise improves its stability across various scenarios and reduces the likelihood of false alarms. The proposed approach was validated using the DroneRF820 dataset, which was specifically designed for UAV identification and counter-drone applications. The experimental results demonstrate that the algorithm can identify the signals of various dual-frequency UAVs, such as racing drones, within 200 ms. This represents a significant improvement in detection and identification performance and speed, highlighting the broad applicability of our signal processing techniques and algorithm in complex environments.
In future work, the proposed algorithm will be further optimized to address the challenge of overlapping signals, thereby enhancing its performance in real-world environments. Our objective is to expand the capabilities of our system to meet emerging challenges in the field of drone inspection and develop adaptive solutions capable of rapidly responding to the evolving landscape of UAVs. By pursuing these goals, we aim to contribute to ongoing efforts to enhance operational security and mitigate the potential risks posed by unauthorized drone activities.

Author Contributions

Conceptualization, T.W. and Y.D.; methodology, T.W.; software, T.W. and Y.D.; validation, T.W., Y.D. and R.M.; formal analysis, H.X. and S.W.; data curation, T.W.; writing—original draft preparation, T.W. and R.M.; writing—review and editing, H.X., S.W. and C.H.; supervision, S.W. and C.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original data presented in this study are openly available at https://pan.quark.cn/s/ae18fe3731da (accessed on 13 June 2025).

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Tsao, K.Y.; Girdler, T.; Vassilakis, V.G. A survey of cyber security threats and solutions for UAV communications and flying ad-hoc networks. Ad Hoc Netw. 2022, 133, 102894.
  2. Boroujeni, S.P.H.; Razi, A.; Khoshdel, S.; Afghah, F.; Coen, J.L.; O’Neill, L.; Fule, P.; Watts, A.; Kokolakis, N.M.T.; Vamvoudakis, K.G. A comprehensive survey of research towards AI-enabled unmanned aerial systems in pre-, active-, and post-wildfire management. Inf. Fusion 2024, 108, 102369.
  3. Wang, W.; Zhao, W.; Wang, X.; Jin, Z.; Li, Y.; Runge, T. A low-cost simultaneous localization and mapping algorithm for last-mile indoor delivery. In Proceedings of the 2019 5th International Conference on Transportation Information and Safety (ICTIS), Liverpool, UK, 14–17 July 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 329–336.
  4. Feng, K.; Li, W.; Ge, S.; Pan, F. Packages delivery based on marker detection for UAVs. In Proceedings of the 2020 Chinese Control And Decision Conference (CCDC), Hefei, China, 22–24 August 2020; pp. 2094–2099.
  5. Tanveer, L.; Alam, M.Z.; Misbah, M.; Orakzai, F.A.; Alkhayyat, A.; Kaleem, Z. Optimizing RF-Sensing for Drone Detection: The Synergy of Ensemble Learning and Sensor Fusion. In Proceedings of the 2024 20th International Conference on Distributed Computing in Smart Systems and the Internet of Things (DCOSS-IoT), Abu Dhabi, United Arab Emirates, 29 April–1 May 2024; pp. 308–314.
  6. Jdid, B.; Hassan, K.; Dayoub, I.; Lim, W.H.; Mokayef, M. Machine learning based automatic modulation recognition for wireless communications: A comprehensive survey. IEEE Access 2021, 9, 57851–57873.
  7. Chiper, F.L.; Martian, A.; Vladeanu, C.; Marghescu, I.; Craciunescu, R.; Fratu, O. Drone Detection and Defense Systems: Survey and a Software-Defined Radio-Based Solution. Sensors 2022, 22, 1453.
  8. Zhou, X.; Yang, G.; Chen, Y.; Li, L.; Chen, B.M. VDTNet: A High-Performance Visual Network for Detecting and Tracking of Intruding Drones. IEEE Trans. Intell. Transp. Syst. 2024, 25, 9828–9839.
  9. Zhao, J.; Zhang, J.; Li, D.; Wang, D. Vision-based anti-UAV detection and tracking. IEEE Trans. Intell. Transp. Syst. 2022, 23, 25323–25334.
  10. Gao, B.; Huo, J. UAV Night Target Detection Method Based on Improved YOLOv5. In Proceedings of the 2024 5th International Seminar on Artificial Intelligence, Networking and Information Technology (AINIT), Nanjing, China, 29–31 March 2024; pp. 2208–2211.
  11. Jiang, C.; Ren, H.; Ye, X.; Zhu, J.; Zeng, H.; Nan, Y.; Sun, M.; Ren, X.; Huo, H. Object Detection from UAV Thermal Infrared Images and Videos Using YOLO Models. Int. J. Appl. Earth Obs. Geoinf. 2022, 112, 102912.
  12. Fang, H.; Ding, L.; Wang, L.; Chang, Y.; Yan, L.; Han, J. Infrared Small UAV Target Detection Based on Depthwise Separable Residual Dense Network and Multiscale Feature Fusion. IEEE Trans. Instrum. Meas. 2022, 71, 1–20.
  13. Aydın, İ.; Kızılay, E. Development of a new light-weight convolutional neural network for acoustic-based amateur drone detection. Appl. Acoust. 2022, 193, 108773.
  14. Tejera-Berengue, D.; Zhu-Zhou, F.; Utrilla-Manso, M.; Gil-Pita, R.; Rosa-Zurera, M. Analysis of distance and environmental impact on UAV acoustic detection. Electronics 2024, 13, 643.
  15. Shi, Z.; Chang, X.; Yang, C.; Wu, Z.; Wu, J. An Acoustic-Based Surveillance System for Amateur Drones Detection and Localization. IEEE Trans. Veh. Technol. 2020, 69, 2731–2739.
  16. Wang, X.; Fei, Z.; Zhang, J.A.; Huang, J.; Yuan, J. Constrained utility maximization in dual-functional radar-communication multi-UAV networks. IEEE Trans. Commun. 2020, 69, 2660–2672.
  17. Fu, H.; Abeywickrama, S.; Zhang, L.; Yuen, C. Low-complexity portable passive drone surveillance via SDR-based signal processing. IEEE Commun. Mag. 2018, 56, 112–118.
  18. Xu, C.; Chen, B.; Liu, Y.; He, F.; Song, H. RF fingerprint measurement for detecting multiple amateur drones based on STFT and feature reduction. In Proceedings of the 2020 Integrated Communications Navigation and Surveillance Conference (ICNS), Herndon, VA, USA, 8–10 September 2020.
  19. Xie, Y.; Jiang, P.; Gu, Y.; Xiao, X. Dual-source detection and identification system based on UAV radio frequency signal. IEEE Trans. Instrum. Meas. 2021, 70, 1–15.
  20. Nie, W.; Han, Z.C.; Li, Y.; He, W.; Xie, L.B.; Yang, X.L.; Zhou, M. UAV detection and localization based on multi-dimensional signal features. IEEE Sens. J. 2021, 22, 5150–5162.
  21. Medaiyese, O.O.; Ezuma, M.; Lauf, A.P.; Adeniran, A.A. Hierarchical learning framework for UAV detection and identification. IEEE J. Radio Freq. Identif. 2022, 6, 176–188.
  22. Allahham, M.S.; Al-Sa’d, M.F.; Al-Ali, A.; Mohamed, A.; Khattab, T.; Erbad, A. DroneRF dataset: A dataset of drones for RF-based detection, classification and identification. Data Brief 2019, 26, 104313.
  23. Bisio, I.; Garibotto, C.; Lavagetto, F.; Sciarrone, A.; Zappatore, S. Blind Detection: Advanced Techniques for WiFi-Based Drone Surveillance. IEEE Trans. Veh. Technol. 2019, 68, 938–946.
  24. Ezuma, M.; Erden, F.; Anjinappa, C.K.; Ozdemir, O.; Guvenc, I. Detection and classification of UAVs using RF fingerprints in the presence of Wi-Fi and Bluetooth interference. IEEE Open J. Commun. Soc. 2019, 1, 60–76.
  25. Aouladhadj, D.; Kpre, E.; Deniau, V.; Kharchouf, A.; Gransart, C.; Gaquière, C. Drone Detection and Tracking Using RF Identification Signals. Sensors 2023, 23, 7650.
  26. Akhter, Z.; Bilal, R.M.; Shamim, A. A dual mode, thin and wideband MIMO antenna system for seamless integration on UAV. IEEE Open J. Antennas Propag. 2021, 2, 991–1000.
  27. Yu, N.; Mao, S.; Zhou, C.; Sun, G.; Shi, Z.; Chen, J. DroneRFa: A large-scale dataset of drone radio frequency signals for detecting low-altitude drones. J. Electron. Inf. Technol. 2023, 45, 1–10.
  28. Karuppuswami, S.; Baua, S. Numerical modelling of video transmission range for an unmanned aerial vehicle. In Proceedings of the 2022 IEEE International Symposium on Antennas and Propagation and USNC-URSI Radio Science Meeting (AP-S/URSI), Denver, CO, USA, 10–15 July 2022; pp. 1920–1921.
  29. Zhang, X.; Babar, Z.; Petropoulos, P.; Haas, H.; Hanzo, L. The Evolution of Optical OFDM. IEEE Commun. Surv. Tutor. 2021, 23, 1430–1457.
  30. Al-Emadi, S.; Al-Senaid, F. Drone Detection Approach Based on Radio-Frequency Using Convolutional Neural Network. In Proceedings of the 2020 IEEE International Conference on Informatics, IoT, and Enabling Technologies (ICIoT), Doha, Qatar, 2–5 February 2020; pp. 29–34.
  31. Medaiyese, O.O.; Syed, A.; Lauf, A.P. Machine Learning Framework for RF-Based Drone Detection and Identification System. In Proceedings of the 2021 2nd International Conference On Smart Cities, Automation & Intelligent Computing Systems (ICON-SONICS), Tangerang, Indonesia, 12–13 October 2021; pp. 58–64.
  32. Al-Sa’d, M.F.; Al-Ali, A.; Mohamed, A.; Khattab, T.; Erbad, A. RF-based drone detection and identification using deep learning approaches: An initiative towards a large open source drone database. Future Gener. Comput. Syst. 2019, 100, 86–97.
  33. Nemer, I.; Sheltami, T.; Ahmad, I.; Yasar, A.U.H.; Abdeen, M.A. RF-based UAV detection and identification using hierarchical learning approach. Sensors 2021, 21, 1947.
Figure 1. Hardware acquisition equipment of UAV signal.
Figure 2. Visualization of UAV video transmission signals alongside the demonstration of key features of drone signals.
Figure 3. UAV signal time–frequency spectrum diagram. (a) DJI Air2 VTS; (b) FIMI X8 Pro VTS and FCS; (c) DJI mini4 Pro VTS with variable transformation; (d) AUTEL evo lite VTS and FCS.
Figure 4. Implementation of an algorithm for UAV visual transmission signal recognition.
Figure 5. Comparison of video transmission signals for the DJI Mini 4 Pro in standby and flight states. The upper image depicts the standby state, while the lower image illustrates the flight state.
Figure 6. Comparison of drone signals and flight control signals for the FIMI X8 Pro. (a,b) display the drone signal, while (c,d) illustrate the flight control signal. (a,c) represent the Power Spectral Density (PSD) plots, whereas subfigures (b,d) depict the cumulative Power Spectral Density (CPSD) plots.
Figure 7. Signal time–frequency diagrams of six drones from the DroneRFa dataset.
Figure 8. Time–frequency graph of signals collected from the DroneRFa dataset, including FTSs, VTSs, and other signals in the real world.
Table 1. The status parameter indicates whether the drone was in flight mode (Standby or Flight); channel bandwidth defines the allocated frequency range; and frequency band specifies the operating frequencies, typically within standardized ranges such as 2.4 GHz or 5.8 GHz.
Collection No. | Status | Channel Bandwidth | Frequency Band
1 | Standby | 10 MHz | 2.4 GHz–2.5 GHz
2 | Standby | 10 MHz | 2.4 GHz–2.5 GHz
3 | Standby | 10 MHz | 5.7 GHz–5.8 GHz
4 | Standby | 10 MHz | 5.7 GHz–5.8 GHz
5 | Standby | 10 MHz | 5.7 GHz–5.8 GHz
6 | Standby | 20 MHz | 2.4 GHz–2.5 GHz
7 | Standby | 20 MHz | 2.4 GHz–2.5 GHz
8 | Standby | 20 MHz | 5.7 GHz–5.8 GHz
9 | Standby | 20 MHz | 5.7 GHz–5.8 GHz
10 | Standby | 20 MHz | 5.7 GHz–5.8 GHz
11 | Flight | 10 MHz | 2.4 GHz–2.5 GHz
12 | Flight | 10 MHz | 2.4 GHz–2.5 GHz
13 | Flight | 10 MHz | 5.7 GHz–5.8 GHz
14 | Flight | 10 MHz | 5.7 GHz–5.8 GHz
15 | Flight | 10 MHz | 5.7 GHz–5.8 GHz
16 | Flight | 20 MHz | 2.4 GHz–2.5 GHz
17 | Flight | 20 MHz | 2.4 GHz–2.5 GHz
18 | Flight | 20 MHz | 5.7 GHz–5.8 GHz
19 | Flight | 20 MHz | 5.7 GHz–5.8 GHz
20 | Flight | 20 MHz | 5.7 GHz–5.8 GHz
Table 2. Horizontal comparison of accuracy.
Method | Principle | Rates | Procedure | Accuracy
[30] | CNN | >5 Gbps | Pre-training | 59.2%
[31] | XGBoost | >5 Gbps | Pre-training | 70.1%
[32] | DNN | >5 Gbps | Pre-training | 46.8%
[33] | Hierarchical | >5 Gbps | Pre-training | 99.1%
Ours | Multi-features | <1 kbps | Real-time | 99.99%
