Article

Convolutional Neural Network and Ensemble Learning-Based Unmanned Aerial Vehicles Radio Frequency Fingerprinting Identification

1 School of Electronic Information Engineering, Beihang University, Beijing 100191, China
2 School of Cyber Science and Technology, Beihang University, Beijing 100191, China
* Author to whom correspondence should be addressed.
Drones 2024, 8(8), 391; https://doi.org/10.3390/drones8080391
Submission received: 15 July 2024 / Revised: 12 August 2024 / Accepted: 12 August 2024 / Published: 13 August 2024
(This article belongs to the Special Issue Physical-Layer Security in Drone Communications)

Abstract

With the rapid development of the unmanned aerial vehicle (UAV) industry, the demand for UAV surveillance technology is increasing. Automatic Dependent Surveillance-Broadcast (ADS-B) provides accurate monitoring of UAVs, but the system can neither encrypt messages nor verify identity. To address identity spoofing, radio frequency fingerprinting identification (RFFI) is applied to ADS-B transmitters to determine the true identities of UAVs through physical-layer security technology. This paper develops an ensemble learning (EL) ADS-B radio signal recognition framework. First, the research analyzes the content characteristics of the ADS-B signal and segments it to eliminate the possible effects of the signal content. To extract features from the different signal segments, a method merging end-to-end and non-end-to-end data processing is applied within a convolutional neural network (CNN). These features are then fused through EL to enhance the robustness and generalizability of the identification system. Finally, the proposed framework's effectiveness is evaluated using collected ADS-B data. The experimental results indicate that the recognition accuracy of the proposed ELWAM-CNN method reaches up to 97.43% and outperforms existing machine learning methods at different signal-to-noise ratios.

1. Introduction

1.1. Background

Due to the rapid development of the unmanned aerial vehicle (UAV) industry [1], UAVs have gradually entered sectors such as agriculture, transportation, and surveying, and their effective surveillance and management is becoming a key area of focus. Automatic Dependent Surveillance-Broadcast (ADS-B) [2] is a key air surveillance technology and a critical component of the next-generation air transportation system. Compared with traditional primary and secondary radar systems, the ADS-B system requires no external interrogation: it broadcasts aircraft information on a fixed cycle, offering fast data updates and high accuracy [3].
Currently, large UAVs equipped with ADS-B devices are used in feeder logistics scenarios for position monitoring and management. UAV identification is primarily conducted through the 24-bit address code in the ADS-B message, with each UAV possessing a unique address identification code. However, identity spoofing and device cloning have become major issues because the ADS-B system's communication scheme is unencrypted and unauthenticated [4]. By altering the settings of the ADS-B transmitter, the unique identification code can be changed, thereby impersonating another UAV. At the same time, various software-defined radios can transmit large amounts of false ADS-B information. Spurious ADS-B messages may cause airspace scheduling confusion, congestion, and national airspace security issues. The traditional countermeasure of encrypting messages with a key would destroy the openness of ADS-B and violate the ADS-B system mechanism, making it unsuitable for application in the aviation field.
Due to the increasing applications of wireless communication, a rising number of researchers have focused on radio frequency fingerprinting identification (RFFI) of electromagnetic radiation sources [5,6] to address the signal spoofing issue. RFFI identifies transmitters based on their hardware fingerprints [7], which arise from differences in transmitter circuit design and manufacturing tolerances of electronic components [8]. Since hardware differences are reflected in the emitted radio frequency signals, extracting signal features can indirectly characterize transmitter properties and identify the radiation source [9]. This process of detecting radio frequency features from signals is known as RFF extraction. RFF technology addresses identity spoofing and false ADS-B messages resulting from forged 24-bit address codes or software-defined radio transmissions: comparing stored transmitter fingerprint features with those of received signals directly confirms the identity of the signal source. Early feature extraction mainly relied on manual methods [10], including statistical measures, spectral transformations, and parameter analysis, which performed poorly in large-scale practical applications. With the emergence and maturity of Convolutional Neural Networks (CNNs), recognition accuracy (RA) in big data processing has improved dramatically, and neural-network-based RFFI has become a research focus [11]. However, most RFFI research has not taken the structure and content of the signal into account; applying generic processing frameworks to specific signals lacks specificity and results in poor recognition rates.
Traditional signal processing by CNN [12] mainly follows two schemes: (1) end-to-end and (2) non-end-to-end. End-to-end processing directly uses the signal's raw representation (i.e., the pixels of an image or the waveform of a speech signal) as the CNN input, relying entirely on the CNN to mine the features hidden in the electronic fingerprint. It can automatically learn valuable features from data, avoiding manual feature engineering and simplifying model design; however, because complex feature extraction and decision-making happen internally, the model is harder to interpret. Non-end-to-end processing, on the other hand, applies manual fingerprint extraction to transform the target signal into a specific domain before feeding it to the CNN. It makes better use of domain knowledge and manual feature engineering, which helps build effective models when data are scarce, but it usually requires more preprocessing or post-processing steps, increasing the complexity of model design and implementation. We therefore incorporate non-end-to-end processing while preserving the raw data used in end-to-end processing, fusing the original features with domain-specific features in the expectation of better results.

1.2. Related Works

Research on RFFI systems dates back to the 1960s. Such systems identify individual transmitters from the RF fingerprints of their signals and have gradually been applied in cognitive radio, military communications, and cellular networks [13]. In addition to RFFI research on commonly used Internet of Things (IoT) devices, RFFI for UAV ADS-B signals has also received widespread attention in recent years as ADS-B has become more prevalent [14]. Despite the differences in radio devices, RFFI research shares common characteristics.
Research on the RFFI of IoT devices mainly focuses on wireless NICs, ZigBee, and LoRa. Based on the concept of radiometric identification, Zeng [15] proposed a preprocessing algorithm that improves synchrosqueezed wavelet transforms by energy regularization (ISWTE). Brik [16] designed the Passive Radiometric Device Identification System (PARADIS), and Knox [17] later built on PARADIS to identify 2.4 GHz RF devices under different distance and channel conditions. With the development of convolutional neural network applications, Ref. [18] investigated the identification of coexisting wireless devices using CNNs. Peng [19] evaluated 54 ZigBee devices using the extracted differential constellation trace figure (DCTF), carrier frequency offset, modulation offset, and I/Q offset; the combination of CNN and DCTF was subsequently proposed [20]. Al-Shawabka [21] used I/Q, amplitude-phase, and spectrum representations of the received signals, built a fingerprint database containing 100 classes of LoRa radios, and conducted experiments with both CNN and RNN-LSTM models. The team from the University of Liverpool [22,23] designed an RFFI scheme for LoRa based on spectrograms and CNNs combined with instantaneous carrier frequency offset (CFO) drift; they proposed a hybrid classifier and used the estimated CFO to calibrate the softmax output of the CNN model.
In recent years, RFFI research on ADS-B has attracted growing attention. In 2017, the Defense Advanced Research Projects Agency (DARPA) [24] initiated the Radio Frequency Machine Learning Systems program [25,26], which used raw I/Q data as input to machine learning techniques for RFF. ADS-B, one of the key research targets, received significant attention from DARPA, which provided data from over 5000 ADS-B transmitters totaling more than 400 GB. Jian [27] used raw ADS-B I/Q data as input to achieve fingerprint recognition through an end-to-end CNN and discussed the impact of data length on recognition accuracy by generating sequence slices of arbitrary length. Agadakos [28] developed a deep complex-valued neural network (DCN) whose effectiveness was validated on ADS-B. These methods, which directly adopt the raw I/Q data of the ADS-B signal, are referred to as end-to-end processing, as opposed to non-end-to-end processing, which transforms the signals first. In non-end-to-end processing, Liu [29,30] collected ADS-B signals from more than 140 aircraft using an SDR receiver and reconstructed the ideal signals based on the protocol. Although Liu did not analyze the content of the ADS-B signal, using residuals indirectly eliminated the label influence of the aircraft International Civil Aviation Organization code contained in the ADS-B message. Using an established high-quality ADS-B dataset, Tu [31] applied bispectrum transformation to extract signal features and analyzed recognition accuracy in different radio signal scenarios.
The above research on fingerprint recognition of commonly used IoT device signals [32] has become relatively mature, but existing work rarely connects signal content characteristics with RFFI methods. As a key aviation surveillance technology, ADS-B signals contain not only the speed, position, and heading of the UAV but also an identity information code that serves as the UAV's unique identifier. Some research ignored the content of ADS-B signals, and using the raw I/Q data unintentionally introduced label information, affecting recognition accuracy. In the following research, we primarily explore RFFI in conjunction with the characteristics of signal structure and content.

1.3. Motivations and Contributions

In this research, neural networks are applied to the RFFI of ADS-B devices. Unlike most studies that do not consider the content of the signal and directly use neural networks for RFFI [12,13], this paper analyzes the information content features of ADS-B signals. Compared with existing RFFI methods based on ADS-B signals, the proposed method segments the signal according to its content characteristics and applies different processing methods to each segment. The features extracted from the different information segments are combined using ensemble learning (EL) to improve the robustness of the recognition system. Compared with existing machine learning methods, the proposed method delivers outstanding results in both high and low signal-to-noise ratio (SNR) situations. The main contributions of this paper are summarized as follows.
(1)
The paper analyzes the inherent information content characteristics of ADS-B signals and segments the signal according to the content of each part. Within a signal, the segments are divided into three types: information fixed across all transmitters, information fixed within the same transmitter, and constantly changing information.
(2)
Merging end-to-end and non-end-to-end processing is proposed for the different segmented ADS-B information, retaining the raw I/Q information while introducing features from other domains. Meanwhile, two different CNN models are introduced as primary classifiers.
(3)
The EL method is adopted to form new classifiers. Ensemble classifiers fuse the features extracted from each signal segment based on the primary classifiers. The final identification decision is made through the ensemble classifier. The proposed approach improves both the model’s classification and generalization abilities and achieves better performance in transmitter identification.
The remainder of this article is organized as follows: Section 2 analyzes the ADS-B signal characteristics and describes the classification and recognition methods. Section 3 presents the experimental results and related analysis. Finally, Section 4 concludes the paper and discusses future work.

2. System Model

2.1. Signal Analysis

The ADS-B system is designed based on the Radio Technical Commission for Aeronautics DO-260B standard [33]. The signal adopts Pulse Position Modulation (PPM). The ADS-B data link mainly includes three modes: 1090 MHz Mode S Extended Squitter (1090 ES), Universal Access Transceiver (UAT), and Very High Frequency Data Link Mode 4. Since the 1090 ES mode has the most widespread application worldwide in the aviation sector, our research on ADS-B is based on this mode. Because all the signal structures include a preamble and a data block, the proposed approach can be easily extended to the UAT mode.
The ADS-B signal consists of an 8-microsecond preamble and a 112-microsecond data block, as shown in Figure 1. The preamble is composed of four pulses, each with a duration of 0.5 microseconds; the second, third, and fourth pulses are positioned 1.0, 3.5, and 4.5 microseconds after the first transmitted pulse.
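For concreteness, this preamble timing can be written down as a short template. The following is a minimal NumPy sketch, assuming the 8 Msample/s receiver rate used later in Section 4.1 and an ideal rectangular pulse shape; it is an illustration, not the paper's code.

```python
import numpy as np

FS = 8e6                # assumed sampling rate: 8 Msample/s, as in Section 4.1
SPC = int(FS * 0.5e-6)  # samples per 0.5-us chip (4 at this rate)

def preamble_template():
    """Ideal 8-us preamble envelope: four 0.5-us pulses starting 0.0, 1.0,
    3.5, and 4.5 us after the first transmitted pulse."""
    chips = np.zeros(16)                    # 8 us / 0.5 us = 16 chip slots
    for start_us in (0.0, 1.0, 3.5, 4.5):
        chips[int(start_us / 0.5)] = 1.0
    return np.repeat(chips, SPC)            # rectangular pulse shape assumed
```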
As shown in Table 1, we take Downlink Format (DF) 17 as an example: ADS-B messages from aircraft in flight use format 17 with a data-block duration of 112 microseconds. For aviation ADS-B, the first five bits of the message data are fixed as 10001.
The 6th to 8th bits are reserved for the capability (CA) of the ADS-B transmitting installation, with the possible values shown in Table 2. CA 0 signifies transponder level 1 (surveillance only); CA 1, 2, and 3 are reserved; and CA 4, 5, and 6 signify transponder level 2. The minimum level for an aviation UAV transponder is 2, so only four binary forms occur: 100, 101, 110, and 111. The 9th to 32nd bits are the 24-bit identity information code (AA), which remains constant for the same UAV. The ME field contains flight and weather information and is a variable field. The PI field carries the parity check bits, which vary with the transmitted information.
The above analysis shows that some parts of the ADS-B message are identical for all transmitters, while other parts are closely tied to the individual transmitter, as shown in Figure 2. The first 14 microseconds are identical for all studied flights, comprising the preamble, DF 17 = '10001', and the first bit of the CA field ('1'). The 17th to 40th microseconds carry the identity information code; all ADS-B messages transmitted by the same flight have this segment in common. In theory, a UAV's identity can be recognized through the identity information code. However, due to the openness of ADS-B, other transmitters can falsify identity information codes that do not belong to them, causing identity confusion. Some research [29] excludes the AA bits to eliminate the influence of the identity address code; however, for a data frame of only 120 microseconds, discarding 24 microseconds of data loses feature information and fails to exploit all the fingerprint features in the ADS-B signal.

2.2. Signal Transfer Model

As shown in Figure 3, the baseband ADS-B signal is converted from digital to analog by a digital-to-analog converter (DAC) and broadcast to the surrounding area via the transmitter's radio frequency link. After receiving the signal, the receiver processes it through filtering, down-conversion, and other steps, converting the received analog signal to a digital signal with an analog-to-digital converter (ADC) for storage.
The ideal baseband signal $x(t)$ of the ADS-B transmitter can be expressed as
$$x(t) = \sum_{k=1}^{240} a_k\, p(t - kT) \tag{1}$$
where $a_k \in \{0, 1\}$ expresses the value at the $k$-th pulse position, representing the presence or absence of a pulse at that position, and $p(t)$ is a pulse function with period $T = 0.5$ microseconds. Each signal contains 240 pulse positions for a total of 120 microseconds.
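Equation (1) can be made concrete with a small generator. The sketch below continues the assumptions above (rectangular $p(t)$, 4 samples per chip); the PPM mapping of message bits onto chip pairs follows the DO-260B pulse-position rule, and the example payload is arbitrary.

```python
import numpy as np

SPC = 4  # samples per 0.5-us chip at the assumed 8 Msample/s rate

def adsb_baseband(bits):
    """Ideal baseband x(t) of Equation (1): 240 chips of T = 0.5 us,
    i.e., the 8-us preamble followed by 112 PPM-encoded message bits."""
    assert len(bits) == 112
    a = np.zeros(240)                   # the a_k values of Equation (1)
    a[[0, 2, 7, 9]] = 1.0               # preamble pulses occupy chips 0-15
    for k, b in enumerate(bits):        # PPM: bit 1 -> (1,0), bit 0 -> (0,1)
        a[16 + 2 * k + (0 if b else 1)] = 1.0
    return np.repeat(a, SPC)            # rectangular p(t) assumed

x = adsb_baseband(np.random.randint(0, 2, 112))  # arbitrary example payload
```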
The baseband signal undergoes modulation, filtering, and amplification through the RF link before being transmitted by the antenna. The transmitted signal $c(t)$ can be calculated as
$$c(t) = H(t)\, S(x(t)) \tag{2}$$
where $S(\cdot)$ represents the additional effects of the RF link on the signal and $H(t)$ indicates the effect of the channel.
The actual received signal $c(t)$ is demodulated by the receiver and converted into a digital signal. The demodulated signal can be obtained as
$$\tilde{x}(t) = S^{-1}\{R[c(t)]\} = I(t) + iQ(t) \tag{3}$$
where $\tilde{x}(t)$ expresses the received and demodulated baseband signal, $R(\cdot)$ is the additional impact of the receiver on the signal, $S^{-1}$ indicates the reverse effect of the RF link on the signal, and $I(t)$ and $Q(t)$ are the in-phase and quadrature components of the signal.
Besides the impact of the transmitter RF link on the baseband signal, the channel and the receiver RF link also influence the signal. The effects of the transmitter, channel, and receiver are all superimposed on the ideal baseband signal and are reflected in the received raw I/Q data. To simplify the research, we select signals with high SNR to eliminate channel effects, and all signals are received by a single receiver to eliminate receiver influence. Under these conditions, we consider the signal distortion caused by the transmitter, which is given by
$$\varepsilon(t) = \tilde{x}(t) - x(t) \tag{4}$$

3. Proposed Solution

3.1. RFFI General Process

The RFFI schematic is shown in Figure 4. The system actively collects UAV ADS-B signals and processes them.
The primary process includes receiving the ADS-B signals emitted by UAVs, followed by down-conversion and demodulation. Signals that pass the Cyclic Redundancy Check (CRC) are demodulated without error and move to the recognition stage; signals failing the CRC are discarded. Then, instead of traditionally processing the whole signal at once, we segment the signal according to its content structure and use end-to-end and non-end-to-end methods to process the different segments. We divide the signal into three segments according to content, denoted Set 1, Set 2, and Set 3; to keep the segment lengths equal, the experiment further divides Set 3 into two parts, denoted Set 3.1 and Set 3.2. The detailed signal processing and segmentation methods are elucidated in Section 3.2. The processed segments are then fed into pre-trained neural networks, and recognition results are extracted from the softmax function of the output layer; the design and structure of these networks are detailed in Section 3.3. Finally, EL merges the results from the classifiers' softmax outputs. By comparing the recognized ADS-B signals with known flight identities stored in a database, the system assigns the identity of the radiating source with the highest recognition probability.
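As a hedged sketch of the CRC gate in this pipeline: Mode S frames carry a 24-bit parity computed with the generator polynomial 0x1FFF409, and for a DF 17 squitter the whole 112-bit frame divides evenly when error-free, so a zero remainder admits the frame to the recognition stage. The function below illustrates that check; it is not the authors' implementation.

```python
GENERATOR = [int(b) for b in "1111111111111010000001001"]  # 0x1FFF409, 25 taps

def crc_ok(bits):
    """bits: 112 ints (0/1), MSB first. Returns True when the GF(2)
    remainder is zero, i.e., the frame passes the CRC gate."""
    msg = list(bits)
    for i in range(len(msg) - 24):      # polynomial long division over GF(2)
        if msg[i]:
            for j, tap in enumerate(GENERATOR):
                msg[i + j] ^= tap
    return not any(msg[-24:])
```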

3.2. Signal Pre-Processing Step

According to the analysis of the signal content characteristics in Section 2.1, we divide the ADS-B signal into segments and adopt different processing methods for each, as shown in Figure 5.
The first segment consists of the fixed, unchanging bits at positions 1 to 14. This segment carries the same data for all ADS-B transmitters and contains no information related to the transmitter's identity, so no content-removal processing is performed on it. Following an end-to-end approach, the received signals are directly input into Classifier 1 for identification. The input data for Classifier 1 are as follows:
$$\tilde{x}_{\mathrm{Set1}}(t) = I_{\mathrm{Set1}}(t) + iQ_{\mathrm{Set1}}(t) \tag{5}$$
where $I_{\mathrm{Set1}}(t)$ and $Q_{\mathrm{Set1}}(t)$ are the in-phase and quadrature components of the first segment $\tilde{x}_{\mathrm{Set1}}(t)$. A frequency bias arises when the transmitter and receiver oscillators are mismatched. Since we use a fixed receiver, the frequency bias is caused only by errors in the transmitter's oscillator, which creates fingerprint uniqueness. Consequently, the feature extraction capability of the CNN is enhanced by incorporating frequency domain information alongside the I/Q data through the Fourier transform:
$$X_{\mathrm{Set1}}(\omega) = \mathrm{FFT}[\tilde{x}_{\mathrm{Set1}}(t)] \tag{6}$$
where $\tilde{x}_{\mathrm{Set1}}(t)$ indicates the first segment of the demodulated signal and $X_{\mathrm{Set1}}(\omega)$ is the result of the Fourier transform.
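A minimal sketch of the Set 1 feature construction (Equations (5) and (6)) follows, assuming complex baseband samples at 8 Msample/s so that the first 14 microseconds span 112 samples; the stacked channel layout is an illustrative assumption.

```python
import numpy as np

US = 8  # samples per microsecond at the assumed 8 Msample/s rate

def set1_features(iq):
    """Set 1 input: raw I/Q of the first 14 us plus the magnitude of its
    Fourier transform (Eqs. (5)-(6)), stacked as classifier channels."""
    seg = iq[:14 * US]                      # complex baseband segment
    return np.stack([seg.real, seg.imag, np.abs(np.fft.fft(seg))])
```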
The second segment consists of the identity information code, spanning the 17th to the 40th microsecond. Since each UAV has a unique identity information code, this segment is the same in every ADS-B signal transmitted by a given UAV. The presence of such label-like data in the training of neural networks can affect the authenticity and accuracy of recognition. Some research has attempted to eliminate this influence by setting these bits to zero, but this approach loses valuable information. Instead, we calculate the average over all received $\tilde{x}(t)$ and use it as the reference for the ideal pulse height. Based on the assumptions in Section 2.2, we disregard the effects of the channel and the receiver in the transmission process and reconstruct the ideal signal $x(t)$.
The residual components, as indicated in Equation (4), are caused by the differences between the transmitted and received baseband signals due to the RF chain $S(\cdot)$. To further reduce the impact of the identity information code, we transform the residual into the frequency domain and extract the magnitude and phase of the residual spectrum:
$$\mathrm{mag}\{\mathrm{FFT}[\varepsilon_{\mathrm{Set2}}(t)]\} \tag{7}$$
$$\mathrm{ang}\{\mathrm{FFT}[\varepsilon_{\mathrm{Set2}}(t)]\} \tag{8}$$
The frequency-domain magnitude and phase information is used as the two-dimensional input for the classifier. In this way, we eliminate the effect of the label information in the second segment.
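The Set 2 branch can be sketched the same way: subtract the reconstructed ideal signal to obtain the residual of Equation (4), then keep the magnitude and phase of its spectrum (Equations (7) and (8)). The segment boundaries and sample rate below are illustrative assumptions consistent with the previous sketch.

```python
import numpy as np

def set2_features(iq, ideal, us=8):
    """Set 2 input: residual eps(t) = x~(t) - x(t) over the 17th-40th us
    identity-code segment, mapped to spectral magnitude and phase."""
    lo, hi = 17 * us, 40 * us               # segment boundaries in samples
    spec = np.fft.fft(iq[lo:hi] - ideal[lo:hi])
    return np.stack([np.abs(spec), np.angle(spec)])  # 2-D classifier input
```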
The third segment, spanning the 41st to the 120th microsecond, contains the message data (ME) and the PI field. These bits relate only to the UAV's position, status, and surrounding environment. We apply the same combined end-to-end and non-end-to-end approach as for the first segment; the data are divided into two sub-segments (Set 3.1 and Set 3.2), and features are extracted from the I and Q components as well as the frequency domain.

3.3. RFFI with CNN

CNN is used for the feature extraction of the RFF. It can automatically learn and capture features at different levels of the input data. Through the backpropagation algorithm, the network continuously adjusts its weights and biases to maximize the accuracy of feature extraction and classification. The convolution operation can be written as
$$S = \mathrm{conv}(X, W) \tag{9}$$
where $X$ is an $N$-dimensional tensor and $W$ is a convolution kernel tensor of the same dimensionality as $X$.
The CNN optimizes a loss function to learn the most suitable feature representations from the input data [34]. The loss function quantifies the discrepancy between the network's prediction and the actual target:
$$L = -\sum_{i=1}^{M} y_i \log(p_i) \tag{10}$$
where $L$ is the value of the loss function, $M$ is the number of classes, and $p_i$ is the predicted probability for class $i$. The label $y_i$ is 1 if the sample belongs to class $i$ and 0 otherwise.
In the context of the output layer in a neural network, the softmax function is often used for multi-class classification tasks. The softmax function, also known as the normalized exponential function, is calculated by
$$\mathrm{softmax}(\bar{z})_i = \frac{e^{z_i}}{\sum_{c=1}^{M} e^{z_c}} \tag{11}$$
where $\bar{z}$ denotes the vector input to the softmax function with components $(z_1, z_2, \ldots, z_M)$. The output of the softmax function is used as the input for EL.
In this work, we adopt two CNN architectures: a linear model inspired by the Visual Geometry Group (VGG) network and a residual model based on ResNet. The baseline model is a modified version of VGG designed for RFF recognition. Different CNN models are applied to different datasets to extract features from various perspectives.
As shown in Figure 6a, the linear model consists of seven convolutional layers and three fully connected layers. Instead of large convolutional kernels, it employs multiple stacked 1 × 6 kernels to reduce the number of parameters. The ResNet-based architecture in Figure 6b can extract richer features as the network becomes deeper. As network depth increases, training accuracy may saturate or even decline rapidly, a phenomenon known as the vanishing gradient problem. ResNet introduces residual blocks that carry the input of each block over to its output, enabling deeper architectures and alleviating the vanishing gradient problem, a major cause of accuracy loss in deep networks. Two-dimensional convolution kernels are used to extract higher-level features that may be more apparent in a two-dimensional signal representation, so the model in Figure 6b can capture more complex and abstract features from the input data.
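As an illustration of the linear primary classifier, the following PyTorch sketch stacks seven 1 × 6 convolutions and three fully connected layers as described above. The channel widths, padding, and pooling schedule are assumptions, since the text fixes only the layer counts and kernel size (Figure 11a gives the exact settings).

```python
import torch
import torch.nn as nn

class VGGStyleRFF(nn.Module):
    """Sketch of the linear (VGG-inspired) primary classifier: seven
    stacked 1x6 convolutions followed by three fully connected layers."""
    def __init__(self, in_ch=3, n_classes=21, seg_len=112):
        super().__init__()
        chans = [in_ch, 32, 32, 64, 64, 128, 128, 128]  # assumed widths
        layers = []
        for i in range(7):                              # seven conv layers
            layers += [nn.Conv1d(chans[i], chans[i + 1], kernel_size=6,
                                 padding=3),
                       nn.ReLU()]
            if i % 2 == 1:                              # assumed pooling schedule
                layers.append(nn.MaxPool1d(2))
        self.features = nn.Sequential(*layers)
        feat = self.features(torch.zeros(1, in_ch, seg_len)).flatten(1).shape[1]
        self.classifier = nn.Sequential(                # three FC layers
            nn.Linear(feat, 256), nn.ReLU(),
            nn.Linear(256, 128), nn.ReLU(),
            nn.Linear(128, n_classes))                  # softmax applied at fusion

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))
```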

3.4. Ensemble Learning

To obtain better classification results by combining features from multiple classifiers, we employ EL to further process the output features of CNN.
Before the data are input into the final classifier, the signal features extracted by each primary classifier are utilized. The four groups of softmax outputs $K$ from the primary classifiers are used for feature fusion:
$$K = [\mathrm{softmax}(z)_i^1,\ \mathrm{softmax}(z)_i^2,\ \mathrm{softmax}(z)_i^3,\ \mathrm{softmax}(z)_i^4] \tag{12}$$
The fusion of signal features is performed using an EL algorithm, as shown in Figure 7. Suppose the ensemble combines $N$ primary classifiers over $M$ classes; $h_i(x)$ denotes the classification confidence of primary classifier $i$ for input data $x$, and $h_i^j(x)$ is its output on category label $j$. The ensemble can be formed by one of the following three methods (a minimal sketch of these combination rules follows the list):
(1)
Direct Averaging Method: the prediction result is the average of the classification confidences generated by the different classifiers:
$$H(x) = \frac{1}{N}\sum_{i=1}^{N} h_i(x) \tag{13}$$
(2)
Weighted Average Method: weighting factors $w_i$ adjust the contribution of each classifier in the ensemble, improving the recognition rate:
$$H(x) = \sum_{i=1}^{N} w_i h_i(x) \tag{14}$$
where
$$w_i \ge 0, \qquad \sum_{i=1}^{N} w_i = 1 \tag{15}$$
(3)
Voting Method: the classification result of each primary classifier is first converted into a predicted category, and the category receiving the most votes is the final prediction:
$$H(x) = c_{\arg\max_{j} \sum_{i=1}^{N} h_i^j(x)} \tag{16}$$
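The sketch below gives compact NumPy versions of the two combination rules used later (Equations (14) and (16)), operating on the stacked softmax outputs of the $N$ primary classifiers; with uniform weights, the first function reduces to direct averaging (Equation (13)). It is an illustration, not the authors' code.

```python
import numpy as np

def weighted_average(probs, w=None):
    """probs: (N, M) softmax outputs of N classifiers over M classes.
    Uniform weights recover direct averaging (Eq. (13))."""
    N = probs.shape[0]
    w = np.full(N, 1.0 / N) if w is None else np.asarray(w, float) / np.sum(w)
    return int(np.argmax(w @ probs))        # fused confidence, then argmax

def vote(probs):
    """Plurality voting (Eq. (16)): each classifier votes for its top
    class; the most frequent predicted category wins."""
    return int(np.bincount(np.argmax(probs, axis=1)).argmax())
```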
In general, we consider the direct averaging method a particular case of weighted averaging. The goal of an ensemble classifier is to reduce bias and variance so as to minimize the generalization error and improve the model's performance. Tumer [35] analyzed the generalization error of ensemble classifiers in detail, as shown in Equation (17):
$$\tilde{E} = E_{\mathrm{add}}\,\frac{1 + (N-1)\sum_{i=1}^{T} P_i \delta_i}{N} \tag{17}$$
where $\tilde{E}$ is the ensemble classifier's generalization error, $E_{\mathrm{add}}$ is the average generalization error of the primary classifiers, $N$ is the number of primary classifiers, and $T$ is the number of categories. $\delta_i$ is the average correlation factor among classifiers, and $P_i$ is the prior probability of class $i$. When the correlation among the primary classifiers is small, the generalization error of the ensemble classifier is approximately $1/N$ of the average generalization error of the base classifiers. Ensemble methods can therefore effectively improve the generalization ability of a system.

4. Performance Evaluation

4.1. Experiment Setup

To validate the proposed method, we use a ZedBoard with an AD9361 as the receiver and collect ADS-B signals from 21 transmitters. The data used in the experiment are real, collected ADS-B transmitter data. Each signal contains 960 frames. The receiver sampling rate is 8 Msample/s, four times oversampling relative to the 2 MHz ADS-B bandwidth. The antenna and receiver are set up in an open outdoor area to receive ADS-B signals.
The SNR of the collected signals is above 30 dB, which is considered ideal for the line-of-sight scenario. The energy-normalized ADS-B signals of six airplanes are shown in Figure 8. In the experiment, signals at various SNRs and frequency offsets are obtained by adding noise and frequency offsets to the original signals. As illustrated in Figure 9, Gaussian noise is added to collected signals with SNR exceeding 30 dB, reducing the SNR to 0 dB.
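The noise-addition step can be sketched as follows: a standard AWGN routine that scales complex Gaussian noise to the measured signal power to hit a target SNR. It illustrates the degradation shown in Figure 9 rather than reproducing the authors' exact script.

```python
import numpy as np

def add_awgn(iq, snr_db):
    """Degrade a clean (>30 dB SNR) recording to a target SNR by adding
    circularly symmetric complex Gaussian noise."""
    p_sig = np.mean(np.abs(iq) ** 2)
    p_noise = p_sig / 10 ** (snr_db / 10)
    noise = np.sqrt(p_noise / 2) * (np.random.randn(len(iq))
                                    + 1j * np.random.randn(len(iq)))
    return iq + noise
```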
The experiment preprocesses the collected signals and applies the segmentation method of Section 3.2 to split each frame according to its content. The segmented signals are shown in Figure 10: the ADS-B signal is segmented into SET 1, SET 2, SET 3.1, and SET 3.2, represented in Figure 10a, Figure 10b, Figure 10c, and Figure 10d, respectively.
SET 3.1 and SET 3.2 are processed using the VGG network with the CNN parameter settings shown in Figure 11a; we adjusted the network architecture and parameters to better suit RFF applications. SET 1 and SET 2 are processed using the modified ResNet network with the parameter settings shown in Figure 11b.
Common model training parameters are used in the experiment. As shown in Table 3, the maximum number of epochs is set to 45, the initial learning rate for all models is 0.01, the mini-batch size is 16, and the learning rate is reduced by a factor of 0.2 every 9 epochs during training. This decreasing learning-rate strategy speeds convergence in the initial stages of training, while in later stages it allows the network to fine-tune and improve accuracy by avoiding oscillation near local optima.
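In PyTorch terms, the Table 3 schedule corresponds to a StepLR configuration like the sketch below. The optimizer choice (SGD), the placeholder model, and the dummy data are assumptions for illustration; the numeric values come from Table 3.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

model = torch.nn.Sequential(torch.nn.Flatten(),        # placeholder classifier
                            torch.nn.Linear(3 * 112, 21))
loader = DataLoader(TensorDataset(torch.randn(64, 3, 112),
                                  torch.randint(0, 21, (64,))),
                    batch_size=16)                      # MiniBatchSize = 16
opt = torch.optim.SGD(model.parameters(), lr=0.01)      # InitialLearnRate = 0.01
sched = torch.optim.lr_scheduler.StepLR(opt, step_size=9, gamma=0.2)
loss_fn = torch.nn.CrossEntropyLoss()                   # Eq. (10)

for epoch in range(45):                                 # MaxEpochs = 45
    for x, y in loader:
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()
    sched.step()                                        # lr *= 0.2 every 9 epochs
```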

4.2. Performance Comparison

A total of 84,000 frames of real collected ADS-B data are used in the 21-device RFF identification problem: 75,600 frames for the training set and 8400 frames for the testing set. The experiment evaluates performance for maximum epoch counts from 5 to 45; the results are shown in Figure 12. Each primary classifier shows promising recognition results.
Because the direct averaging method is a particular case of the weighted averaging method, the experiment uses EL with weighted averaging (ELWAM) and with the voting method (ELVM) to integrate the primary classifiers. Following Section 3.4, the weighted average method combines the softmax-layer outputs of the classifiers with weighting factors; the voting method takes the category corresponding to the maximum softmax output of each classifier, and the category occurring most often among the four classifiers is the final classification result. The softmax output of each primary classifier is thus the input to EL. Based on the EL method employed, we refer to the new ensemble classifiers as ELWAM-CNN and ELVM-CNN. Figure 13 shows that the accuracy of the ensemble classifiers is higher than that of each primary classifier across different SNRs, confirming that combining multiple individual classifiers yields a combinatorial advantage in RFFI.
ELWAM-CNN and ELVM-CNN combine end-to-end and non-end-to-end data preprocessing. As shown in Figure 14, they achieve higher accuracy at SNRs from 0 to 30 dB than methods using end-to-end processing only (ELWAM-IQ and ELVM-IQ) or non-end-to-end processing only (ELWAM-RES and ELVM-RES). The maximum accuracies are listed in Table 4: ELWAM-CNN and ELVM-CNN achieve recognition accuracies of 97.5% and 95.79%, better than the other algorithms. The recognition accuracy of ELWAM-CNN is 6.79% and 4.49% higher than that of ELWAM-IQ and ELWAM-RES, respectively, and ELVM-CNN is 7.25% and 4.25% higher than ELVM-IQ and ELVM-RES. This demonstrates the advantage of fusing end-to-end and non-end-to-end data processing.
With optimized configuration parameters, the experiment evaluates the performance of the EL network over different SNR ranges. We compared three methods proposed in recent literature with the method presented here; the proposed EL method significantly improves performance. Compared to accuracies of 90.81% for the CNN based on I/Q samples [27], 90.92% for the CNN based on a zero-bias network [29], and 87.33% for the LSTM [31], the ELWAM-CNN and ELVM-CNN solutions achieve accuracies of 97.43% (weighted averaging) and 95.79% (voting) among 21 target devices, as shown in Figure 15. The proposed methods outperform the traditional methods at both low and high SNR: at 30 dB SNR, the recognition accuracy of ELWAM-CNN is up to 1.64%, 6.62%, 6.51%, and 10.1% higher than those of ELVM-CNN, I/Q Sample-CNN, Zero Bias-CNN, and LSTM, respectively. This indicates the superiority of EL over the other network structures.
Meanwhile, considering the Doppler effect in signal reception and the non-ideal characteristics of transmitters and receivers, frequency offset is an unavoidable parameter in the signal. The experiment evaluates the recognition methods on frequency-offset signals with offsets of 2 kHz, 4 kHz, 6 kHz, and 8 kHz. As Figure 16 shows, the recognition accuracy of the I/Q Sample-CNN and LSTM schemes decreases significantly as the frequency offset increases, whereas ELWAM-CNN and ELVM-CNN retain higher recognition accuracy in all frequency-offset scenarios. The proposed EL-based recognition method delivers more precise identification even under low SNR and large frequency offset, showing stronger generalization ability. We attribute this to the fact that each integrated classifier captures different aspects or patterns in the data; when combined, the overall model exploits a wider range of information than any single classifier, reducing the likelihood of overfitting to noise and idiosyncratic features in the training data. In simple terms, the integrated model covers more situations and conditions than a single classifier, increasing performance.
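The frequency-offset experiments can be emulated by rotating the complex baseband samples, as in the minimal sketch below; the 8 Msample/s rate is the assumption carried over from earlier, and offsets of 2 to 8 kHz correspond to the panels of Figure 16.

```python
import numpy as np

def add_freq_offset(iq, f_off_hz, fs=8e6):
    """Apply a carrier frequency offset by mixing the baseband samples
    with a complex exponential at f_off_hz."""
    t = np.arange(len(iq)) / fs
    return iq * np.exp(2j * np.pi * f_off_hz * t)
```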
The ensemble classifier confusion matrices for the 21 classes of ADS-B devices at SNRs of 0 dB and 30 dB are shown in Figure 17. Figure 17a,b show the confusion matrices for the ELWAM-CNN method at 30 dB and 0 dB SNR, respectively; Figure 17c,d show those for the ELVM-CNN method at 30 dB and 0 dB SNR, respectively. These matrices depict classification performance by illustrating the distribution of predicted labels against the ground-truth labels, assessing the model's ability to classify the different ADS-B devices under varying noise levels. The recognition accuracy of the ensemble classifier is almost the same for every device under high SNR conditions but differs significantly across devices under low SNR. From Figure 17b,d, it can be seen that the two methods achieve different recognition accuracies for each type of transmitter: taking category 19 as an example, the identification accuracy exceeds 90% in Figure 17b but falls below 80% in Figure 17d. Thus, even though the ensemble classifiers combine the same primary classifiers, different combination methods can yield different recognition accuracies.

5. Conclusions

In this article, we proposed a novel RFF recognition method that combines CNN-based feature extraction with EL-based classification. Our approach eliminates label interference in ADS-B data and achieves a higher recognition rate and stronger generalization ability for RFFI models.
Our main contributions are as follows. First, we introduced a pattern that combines data processing with the data's content characteristics. Unlike existing ADS-B classification approaches that use the data directly, we conducted a detailed analysis of the data itself: the experiment employed different feature extraction methods and eliminated potential label interference by accounting for the different information carried by different data segments. Second, we adopted and validated two CNN architectures for RFF recognition, using different architectures for different data segments to reduce the interdependence among primary classifiers. Finally, we applied EL to obtain classifiers with higher recognition accuracy and stronger generalization ability by harnessing multiple individual learners. The ELWAM-CNN method reached a recognition accuracy of 97.43%, and the ELVM-CNN method achieved 95.79% under high SNR conditions. By decreasing the SNR and increasing the frequency offset, we verified that the proposed ensemble-learning-based method exhibits higher recognition accuracy and generalization ability than traditional methods, demonstrating its effectiveness.
The RFF scheme presented here only addresses closed-set identification scenarios and cannot correctly determine the identity of a signal outside the categories stored in the database. In addition, the model cannot recognize imitation RFF data generated by generative adversarial networks. In future work, we will focus on the issue of imbalanced per-class recognition accuracy; factors such as the rationality of feature extraction, inter-class similarity, and model design will be the subject of future research.

Author Contributions

Conceptualization Y.Z. and X.Z.; methodology, Y.Z.; software, Y.Z., S.W. and W.Z.; validation, Y.Z. and S.W.; formal analysis, Y.Z.; investigation, Y.Z.; resources, Y.Z. and X.Z.; data curation, Y.Z.; writing—original draft preparation, Y.Z.; writing—review and editing, Y.Z., X.Z., S.W. and W.Z.; visualization, Y.Z. and W.Z.; supervision, Y.Z., X.Z. and W.Z.; project administration, X.Z.; funding acquisition, X.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The original contributions presented in the study are included in the article; further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
UAV: Unmanned Aerial Vehicle
ADS-B: Automatic Dependent Surveillance-Broadcast
RFFI: Radio Frequency Fingerprinting Identification
CNN: Convolutional Neural Network
EL: Ensemble Learning
IoT: Internet of Things
1090 ES: 1090 MHz Mode S Extended Squitter
UAT: Universal Access Transceiver
DAC: Digital-to-Analog Converter
ADC: Analog-to-Digital Converter
DF: Downlink Format
CRC: Cyclic Redundancy Check
SNR: Signal-to-Noise Ratio

References

  1. Song, X.; Zhao, S.; Wang, X.; Li, X.; Tian, Q. Performance Analysis of UAV RF/FSO Co-Operative Communication Network with Co–Channel Interference. Drones 2024, 8, 70. [Google Scholar] [CrossRef]
  2. Karch, C.; Barrett, J.; Ellingson, J.; Peterson, C.K.; Contarino, V.M. Collision Avoidance Capabilities in High-Density Airspace Using the Universal Access Transceiver ADS-B Messages. Drones 2024, 8, 86. [Google Scholar] [CrossRef]
  3. Wu, Z.; Shang, T.; Yue, M.; Liu, L. ADS-Bchain: A blockchain-based trusted service scheme for automatic dependent surveillance broadcast. IEEE Trans. Aerosp. Electron. Syst. 2023, 59, 8535–8549. [Google Scholar] [CrossRef]
  4. Liao, Y.; Jia, Z.; Dong, C.; Zhang, L.; Wu, Q.; Hu, H. Interference analysis for coexistence of UAVs and civil aircraft based on automatic dependent surveillance-broadcast. IEEE Trans. Veh. Technol. 2024, 1–5. [Google Scholar] [CrossRef]
  5. Lin, D.; Hu, S.; Wu, W.; Wu, G. Few-shot RF fingerprinting recognition for secure satellite remote sensing and image processing. Sci. China Inf. Sci. 2023, 66, 189304. [Google Scholar] [CrossRef]
  6. Qian, Y.; Qi, J.; Kuai, X.; Han, G.; Sun, H.; Hong, S. Specific emitter identification based on multi-level sparse representation in automatic identification system. IEEE Trans. Inf. Forensics Secur. 2021, 16, 2872–2884. [Google Scholar] [CrossRef]
  7. Liu, Y.; Wang, J.; Li, J.; Niu, S.; Song, H. Machine learning for the detection and identification of Internet of Things devices: A survey. IEEE Internet Things 2021, 9, 298–320. [Google Scholar] [CrossRef]
  8. Wu, W.; Hu, S.; Lin, D.; Wu, G. Reliable resource allocation with RF fingerprinting authentication in secure IoT networks. Sci. China Inf. Sci. 2022, 65, 170304. [Google Scholar] [CrossRef]
  9. Fu, X.; Peng, Y.; Liu, Y.; Lin, Y.; Gui, G.; Gacanin, H.; Adachi, F. Semi-supervised specific emitter identification method using metric-adversarial training. IEEE Internet Things 2023, 10, 10778–10789. [Google Scholar] [CrossRef]
  10. Zha, H.; Tian, Q.; Lin, Y. Real-world ADS-B signal recognition based on radio frequency fingerprinting. In Proceedings of the 2020 IEEE 28th International Conference on Network Protocols (ICNP), Madrid, Spain, 13–16 October 2020; pp. 1–6. [Google Scholar]
  11. Merchant, K.; Revay, S.; Stantchev, G.; Nousain, B. Deep learning for RF device fingerprinting in cognitive communication networks. IEEE J. Sel. Top. Signal Process. 2018, 12, 160–167. [Google Scholar] [CrossRef]
  12. Chen, X.; Wang, L.; Xu, X.; Shen, X.; Feng, Y. A review of radio frequency fingerprinting methods based on Raw I/Q and deep learning. J. Radars 2023, 12, 214–234. [Google Scholar]
  13. Jagannath, A.; Jagannath, J.; Kumar, P.S.P.V. A comprehensive survey on radio frequency (RF) fingerprinting: Traditional approaches, deep learning, and open challenges. Comput. Netw. 2022, 219, 109455. [Google Scholar] [CrossRef]
  14. Garcia, M.A.; Stafford, J.; Minnix, J.; Dolan, J. Aireon space based ADS-B performance model. In Proceedings of the 2015 Integrated Communication, Navigation and Surveillance Conference (ICNS), Herndon, VA, USA, 21–23 April 2015; pp. 1–10. [Google Scholar]
  15. Zeng, M.; Liu, Z.; Wang, Z.; Liu, H.; Li, Y.; Yang, H. An adaptive specific emitter identification system for dynamic noise domain. IEEE Internet Things 2022, 9, 25117–25135. [Google Scholar] [CrossRef]
  16. Brik, V.; Banerjee, S.; Gruteser, M.; Oh, S. Wireless device identification with radiometric signatures. In Proceedings of the ACM International Conference on Mobile Computing and Networking ACM, San Francisco, CA, USA, 14 September 2008; pp. 116–127. [Google Scholar]
  17. Knox, D.A.; Kunz, T. Wireless fingerprints inside a wireless sensor network. ACM Trans. Sens. Netw. (TOSN) 2015, 11, 1–30. [Google Scholar] [CrossRef]
  18. Bitar, N.; Muhammad, S.; Refai, H.H. Wireless technology identification using deep convolutional neural networks. In Proceedings of the 2017 IEEE 28th Annual International Symposium on Personal, Indoor, and Mobile Radio Communications (PIMRC), Montreal, QC, Canada, 8–13 October 2017; pp. 1–6. [Google Scholar]
  19. Peng, L.; Hu, A.; Zhang, J.; Jiang, Y.; Yu, J.; Yan, Y. Design of a hybrid RF fingerprint extraction and device classification scheme. IEEE Internet Things 2018, 6, 349–360. [Google Scholar] [CrossRef]
  20. Peng, L.; Zhang, J.; Liu, M.; Hu, A. Deep learning based RF fingerprint identification using differential constellation trace figure. IEEE Trans. Veh. Technol. 2019, 69, 1091–1095. [Google Scholar] [CrossRef]
  21. Al-Shawabka, A.; Pietraski, P.; Pattar, S.B.; Restuccia, F.; Melodia, T. DeepLoRa: Fingerprinting LoRa devices at scale through deep learning and data augmentation. In Proceedings of the Twenty-Second International Symposium on Theory, Algorithmic Foundations, and Protocol Design for Mobile Networks and Mobile Computing, Shanghai, China, 26–29 June 2021; pp. 251–260. [Google Scholar]
  22. Shen, G.; Zhang, J.; Marshall, A.; Peng, L.; Wang, X. Radio frequency fingerprint identification for LoRa using spectrogram and CNN. In Proceedings of the IEEE INFOCOM 2021-IEEE Conference on Computer Communications, Vancouver, BC, Canada, 10–13 May 2021; pp. 1–10. [Google Scholar]
  23. Shen, G.; Zhang, J.; Marshall, A.; Peng, L.; Wang, X. Radio frequency fingerprint identification for LoRa using deep learning. IEEE J. Sel. Areas Commun. 2021, 39, 2604–2616. [Google Scholar] [CrossRef]
  24. Soltani, N.; Sankhe, K.; Dy, J.; Ioannidis, S.; Chowdhury, K. More is better: Data augmentation for channel-resilient RF fingerprinting. IEEE Commun. Mag. 2020, 58, 66–72. [Google Scholar] [CrossRef]
  25. Robinson, J.; Kuzdeba, S.; Stankowicz, J.; Carmack, J.M. Dilated causal convolutional model for rf fingerprinting. In Proceedings of the 2020 10th Annual Computing and Communication Workshop and Conference (CCWC), Las Vegas, NV, USA, 6–8 January 2021; pp. 157–162. [Google Scholar]
  26. Riyaz, S.; Sankhe, K.; Ioannidis, S.; Chowdhury, K. Deep learning convolutional neural networks for radio identification. IEEE Commun. Mag. 2018, 56, 146–152. [Google Scholar] [CrossRef]
  27. Jian, T.; Rendon, B.C.; Ojuba, E.; Soltani, N.; Wang, Z.; Sankhe, K.; Gritsenko, A.; Dy, J.; Chowdhury, K.; Ioannidis, S. Deep learning for RF fingerprinting: A massive experimental study. IEEE Internet Things 2020, 3, 50–57. [Google Scholar] [CrossRef]
  28. Agadakos, I.; Agadakos, N.; Polakis, J.; Amer, M.R. Chameleons’ oblivion: Complex-valued deep neural networks for protocol-agnostic rf device fingerprinting. In Proceedings of the 2020 IEEE European Symposium on Security and Privacy (EuroS&P), Genoa, Italy, 7–11 September 2021; pp. 322–338. [Google Scholar]
  29. Liu, Y.; Wang, J.; Li, J.; Song, H.; Yang, T.; Niu, S.; Ming, Z. Zero-bias deep learning for accurate identification of Internet-of-Things (IoT) devices. IEEE Internet Things 2020, 8, 2627–2634. [Google Scholar] [CrossRef]
  30. Liu, Y.; Wang, J.; Li, J.; Niu, S.; Song, H. Class-incremental learning for wireless device identification in IoT. IEEE Internet Things 2021, 8, 17227–17235. [Google Scholar] [CrossRef]
  31. Tu, Y.; Lin, Y.; Zhang, H.; Zhang, J.; Wang, Y.; Gui, G.; Mao, S. Large-scale real-world radio signal recognition with deep learning. Chin. J. Aeronaut. 2022, 35, 35–48. [Google Scholar] [CrossRef]
  32. Jafari, H.; Omotere, O.; Adesina, D.; Wu, H.; Qian, L. IoT devices fingerprinting using deep learning. In Proceedings of the 2018 IEEE Military Communications Conference (MILCOM), Los Angeles, CA, USA, 29–31 October 2018; pp. 1–9. [Google Scholar]
  33. RTCA DO-260B: Minimum Operational Performance Standards (MOPS) for 1090 MHz Extended Squitter Automatic Dependent Surveillance-Broadcast (ADS-B) and Traffic Information Services-Broadcast (TIS-B); Radio Technical Commission for Aeronautics: Washington, DC, USA, 2011.
  34. Li, Y.; Chang, J.; Kong, C.; Bao, W. Recent progress of machine learning in flow modeling and active flow control. Chin. J. Aeronaut. 2022, 35, 14–44. [Google Scholar] [CrossRef]
  35. Tumer, K.; Ghosh, J. Error correlation and error reduction in ensemble classifiers. Connect. Sci. 1996, 8, 385–404. [Google Scholar] [CrossRef]
Figure 1. ADS-B signal format.
Figure 2. Content characteristics of ADS-B signals.
Figure 3. ADS-B signal transmission block.
Figure 4. Flow chart of RFFI.
Figure 5. Preprocessing of ADS-B signals.
Figure 6. CNN framework models: (a) VGG linear architecture model; (b) ResNet architecture model.
Figure 7. Ensemble learning flow.
Figure 8. ADS-B signals from six different transmitters: (a) transmitter 1; (b) transmitter 2; (c) transmitter 3; (d) transmitter 4; (e) transmitter 5; (f) transmitter 6.
Figure 9. Comparison between the original signal and the noise-added signal: (a) original signal; (b) noise-added signal.
Figure 10. ADS-B signal segmentation fragments: (a) first fragment SET 1; (b) second fragment SET 2; (c) third fragment SET 3.1; (d) fourth fragment SET 3.2.
Figure 11. CNN parameter settings: (a) parameter settings for the modified VGG network; (b) parameter settings for the modified ResNet network.
Figure 12. Accuracy of each primary classifier for different segments of data.
Figure 13. Performance comparison of ensemble classifiers and primary classifiers.
Figure 14. Performance of combinatorial classifiers under different data preprocessing.
Figure 15. Ensemble learning versus other methods.
Figure 16. Recognition accuracy at different frequency offsets: (a) 2 kHz; (b) 4 kHz; (c) 6 kHz; (d) 8 kHz.
Figure 17. Confusion matrices for RFF of 21 classes of ADS-B transmitters: (a) ELWAM-CNN at 30 dB SNR; (b) ELWAM-CNN at 0 dB SNR; (c) ELVM-CNN at 30 dB SNR; (d) ELVM-CNN at 0 dB SNR.
Table 1. DF = 17 ADS-B format structure.

| Bit            | 1–5        | 6–8 | 9–32 | 33–88   | 89–112 |
| Information    | DF = 10001 | CA  | AA   | Message | PI     |
| Number of bits | 5          | 3   | 24   | 56      | 24     |
Table 2. "CA" field code.

| Binary | Decimal | Meaning |
| 000 | 0 | Level 1 transponder |
| 001 | 1 | Reserved |
| 010 | 2 | Reserved |
| 011 | 3 | Reserved |
| 100 | 4 | Level 2 or above transponder, and the ability to set "CA" code 7 |
| 101 | 5 | Level 2 or above transponder, and the ability to set "CA" code 7 |
| 110 | 6 | Level 2 or above transponder, and the ability to set "CA" code 7 |
| 111 | 7 | Signifies the "DR" field is not equal to ZERO (0), or the "FS" field equals 2, 3, 4, or 5, and either on the ground or airborne |
Table 3. Training parameter settings.

| Parameter Name      | Parameter Value |
| MaxEpochs           | 45   |
| InitialLearnRate    | 0.01 |
| MiniBatchSize       | 16   |
| LearnRateDropFactor | 0.2  |
| LearnRateDropPeriod | 9    |
Table 4. Maximum accuracy of the six identification methods.

| Method    | Maximum Accuracy (%) |
| ELWAM-CNN | 97.5  |
| ELVM-CNN  | 95.79 |
| ELWAM-IQ  | 90.71 |
| ELVM-IQ   | 88.54 |
| ELWAM-RES | 93.01 |
| ELVM-RES  | 91.54 |