Article

Specific Emitter Identification Based on Multi-Domain Feature Fusion and Integrated Learning

School of Electronic Countermeasures, National University of Defense Technology, Hefei 230000, China
* Author to whom correspondence should be addressed.
Symmetry 2021, 13(8), 1481; https://doi.org/10.3390/sym13081481
Submission received: 22 July 2021 / Revised: 6 August 2021 / Accepted: 9 August 2021 / Published: 12 August 2021

Abstract

Specific Emitter Identification (SEI) is a key research problem in the field of information countermeasures and one of the key technologies that urgently needs to be solved in target reconnaissance systems. It distinguishes between individual radiation sources according to the varying hardware characteristics of the emitter embedded in the transmitted signals. In response to the scarcity of labeled samples in specific emitter identification, this paper proposes a method combining multi-domain feature fusion and integrated learning (MDFFIL). First, the received signal is preprocessed to obtain segmented time domain signal samples. Then, the signal is converted to a time–frequency distribution using the wavelet transform. Afterwards, an integrated learning two-stage recognition classification method is designed to extract data features of 1D time domain signals and 2D time–frequency distribution signals using the symmetric network structures of CVResNet and ResNet. Finally, the fused features are fed into a complex-valued residual network classifier to obtain the final classification results. We demonstrate through analysis of measured data that the proposed method has higher accuracy than classical feature extraction methods, and that it can improve the identification of communication radiation sources with fewer labeled samples.

1. Introduction

Specific Emitter Identification (SEI) is the process of identifying individual emitters by matching the characteristics of the received signal with the emitter for correlation [1]. The production and manufacturing process determines the defects of the hardware. This makes the emitter features unique and difficult to imitate [2]. Therefore, with the help of the fingerprint features of the emitter, SEI is widely used in military and civilian wireless applications [3,4,5].
SEI technology is the fusion of signal processing and pattern recognition, and can be divided into three parts: data preprocessing, analysis of subtle features, and classifier design. Data preprocessing modifies the received signal into data suitable for feature extraction through processing methods such as filtering, normalization, and mathematical transformation. The analysis of subtle features is the process of effectively and reliably extracting subtle features from the signal using existing signal processing methods, while taking into consideration the individual information of the radiation source; the subtle features that can effectively identify the individual radiation source are then selected. Classifier design is the process of balancing the quality and efficiency of classification according to the characteristics of the fine features obtained in the analysis stage. The problem of individual recognition of communication radiation sources can thus be treated as a classification problem in machine learning, whose accuracy is mainly determined by the feature extraction method and the classification algorithm.
The electromagnetic environment is becoming increasingly complex, which brings more challenges to SEI technology. There are two main types of SEI methods: manual feature extraction-based methods and deep learning-based methods. Manual feature extraction-based methods rely on signal processing algorithms and require expert knowledge; they mainly use the time domain and transform domain features of the signal. For example, for the time domain characteristics of the signal, N. Serinken [6] used sliding windows to extract generalized dimension features of the signal for the recognition of transient signals. Similarly, L. Wu [7] extracted the box dimension and variance dimension for the recognition of individuals. Furthermore, G. Huang [8] used nonlinear dynamics to extract permutation entropy for the matching recognition of transmitters. However, these methods are susceptible to noise and have certain limitations, so more mainstream research is based on transform domain features. Time–frequency analysis can reflect subtle differences through the joint time–frequency domain information of the signal. For example, G. Lopez-Risueno proposed a digital channelized receiver based on the Short-Time Fourier Transform (STFT) [9], and C. Bertoncini used dynamic wavelet fingerprints to extract features [10]. The Hilbert–Huang Transform (HHT) is a well-known method for processing nonlinear and non-stationary signals [11]. Pan [12] converts Hilbert spectra into grayscale images to represent features, and Zhang considers single-hop and relay scenarios and proposes three SEI algorithms based on the Hilbert spectrum [13]. In addition, higher-order spectrum-based methods are also a hot research topic: higher-order spectra preserve the amplitude and phase information of the signal for identifying fingerprint features [14,15]. Graphical representations [16] and geometric features [17,18] have also been used.
Deep learning-based methods can use nonlinear activation functions to extract subtle deep-level radiation source features through multiple hidden layers [19], which distinguishes them from methods based on manual feature extraction. They take the original data, or images converted from it, as the input, and use this input to train a deep neural network for fingerprint feature learning. Wong [20] input the original I/Q signals directly into a convolutional neural network to estimate each transmitter's in-phase/quadrature imbalance parameters. Merchant [21] proposed a method based on time domain complex baseband error signals for transmitter device identification. Pan [12] input the Hilbert spectrum into a deep residual network and found that it achieves better identification under various channel conditions. Sa [22] converted I/Q signals into Contour Stella images and input them into a Convolutional Neural Network (CNN) for classification. Wu [23] proposed a Recurrent Neural Network (RNN) recognition algorithm based on long short-term memory and found that it achieves high recognition accuracy under low signal-to-noise ratio conditions. Ding [15] used the bispectral features of the received signal as the input to a CNN and showed that it has higher accuracy than conventional methods. G. Baldini [24] compared various methods for training CNNs by converting signals into images, and concluded that wavelet-based methods outperform the others. All of the above deep learning methods require adequate datasets to learn the network model effectively and obtain good recognition accuracy.
Feature extraction methods based on time domain features, transform domain features, and higher-order spectra mostly use a single signal processing method to extract one kind of subtle feature. The actual communication signal, however, is complex and variable, and a single signal feature is not enough to fully and accurately represent the nuances between radiation source signals, so the final recognition accuracy is limited. To enhance recognition accuracy, most models use deep neural networks as strong classifiers, but data-driven deep learning methods require large amounts of data. Actual communication is limited by time and labor costs, and it is difficult to obtain sufficient radiation source signal data for deep learning training. Moreover, if the amount of data is too small, deeper and more complex neural networks easily overfit, which in turn seriously affects the final recognition result. For individual recognition under the condition of scarce labeled samples, some scholars have proposed transfer learning methods, which first train the neural network on a large number of labeled datasets from other domains and then migrate from the source domain to the sample set of the target domain. However, such methods require obtaining a large number of matching datasets from other domains.
We combine intelligent learning with signal processing technology and divide the SEI task into two steps. The first step is data preprocessing and feature extraction: a one-dimensional complex-valued residual network is trained using time domain signal data, and a two-dimensional residual network is trained using wavelet-transformed time–frequency image data. The second step is classifier design and training: the features extracted from the one-dimensional time domain data and the two-dimensional time–frequency transform domain data are fused to train a complex-valued neural network classifier. The dataset to be recognized is then input into the trained complex-valued neural network classifier to obtain the final classification results. The analysis results of the measured data show that the proposed method has a higher accuracy rate than classical feature extraction methods and can improve the identification of communication radiation sources when labeled samples are scarce.
The contribution of this paper can be summarized as follows:
(1)
A symmetric feature extraction architecture is designed, which integrates the time domain and time–frequency variation domain features of limited data to enhance the utilization of the data.
(2)
An ensemble learning approach is proposed to make full use of the performance of residual networks and complex-valued residual networks.
(3)
The combination of intelligent learning and signal processing technology greatly improves the ability to recognize radiation sources under the condition of sparse labeled samples.
The remainder of this paper is organized as follows: in Section 2, the time–frequency analysis method and neural network models used in this paper are introduced; in Section 3, the classification method combining multi-domain features and integrated learning is described in detail; in Section 4, the identification results and discussion for both fixed-frequency and frequency-hopping sample sets are presented; and in Section 5, the conclusions are given.

2. Background

2.1. Wavelet Transform

Since CNNs are more suitable for extracting features from images, the time domain signal samples are converted into an image-like two-dimensional time–frequency matrix [12]. Among several two-dimensional representations of signals, the time–frequency energy distribution shows better performance than the recurrence plot and bispectrum forms [24]. Within the time–frequency energy distributions, the STFT has a fixed resolution compared to the wavelet transform, and the HHT produces a sparse time–frequency distribution that is not well suited to training neural networks. The Wavelet Transform (WT), through localized analysis in time (space) and frequency, can highlight subtle local characteristics of the signal, and has a unique advantage in the field of individual identification. The Morlet wavelet is a complex-valued wavelet with good aggregation in both the time and frequency domains. The measured data collected in this paper are all complex signals. Therefore, the Morlet wavelet is selected as the mother wavelet for time–frequency analysis, and the continuous wavelet transform is used to generate the time–frequency distribution:
$$WT(a, \tau) = \frac{1}{\sqrt{a}} \int_{-\infty}^{+\infty} X(t)\, \psi^{*}\!\left(\frac{t - \tau}{a}\right) \mathrm{d}t$$

where $\psi$ is the mother wavelet (for the Morlet wavelet, a complex exponential under a Gaussian window), $a \neq 0$ is the scale factor, which controls the dilation of the wavelet function, and $\tau$ is the time-shift factor, which controls its translation.
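As a concrete illustration, a time–frequency map of this kind can be generated with PyWavelets. This is a minimal sketch: the complex Morlet wavelet name 'cmor1.5-1.0' (bandwidth/center-frequency parameters) is our assumption, since the paper fixes the number of scales but not these parameters.

```python
# A minimal sketch of the time-frequency preprocessing, assuming the complex
# Morlet wavelet 'cmor1.5-1.0' and a 101-scale grid (see Section 3.1).
import numpy as np
import pywt

fs = 50e6                                    # sampling rate from Table 4 (50 MHz)
iq = np.random.randn(8192) + 1j * np.random.randn(8192)   # stand-in I/Q segment

scales = np.arange(1, 102)                   # 101 scales, as in Section 3.1
# The CWT is linear in the signal, so transform I and Q separately and recombine.
c_re, _ = pywt.cwt(iq.real, scales, 'cmor1.5-1.0', sampling_period=1 / fs)
c_im, freqs = pywt.cwt(iq.imag, scales, 'cmor1.5-1.0', sampling_period=1 / fs)

tf_map = np.abs(c_re + 1j * c_im)            # 101 x 8192 time-frequency energy map
```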

2.2. Residual Network

Theoretically, the deeper the layers of a neural network, the stronger the nonlinear mapping ability between hidden layers and the richer the extracted features. However, as the depth of the network model continues to increase, deeper layers do not improve the accuracy of the model but instead cause the accuracy to saturate or even decline rapidly. The Residual Network [25] (ResNet) is one of the classical deep convolutional models. It introduces the residual structure, which passes the result of a previous layer directly to a later layer so that the error does not increase further, and it alleviates the gradient vanishing problem caused by too many layers. This is a good solution to the "degradation" problem of the model: learning the residual is much easier than learning the overall mapping. In this way, the network can be expanded so that convolutional neural networks with hundreds of layers can be designed.
In the task scenario where labeled samples are scarce, we use a simplified version of ResNet-18 as the feature extractor for the time–frequency transformed two-dimensional image dataset, which contains four residual learning units (Table 1). Each residual learning unit consists of one building block of two convolutional layers, so together with the initial convolution, the simplified ResNet has nine convolutional layers.
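The following is a minimal PyTorch sketch of the Table 1 extractor under this interpretation (one two-convolution building block per stage). Layer shapes follow Table 1; this is an illustrative reconstruction, not the authors' released code.

```python
# A minimal PyTorch reconstruction of the Table 1 feature extractor: Conv1,
# max pool, four two-convolution residual blocks, and global average pooling
# to a 512-d feature vector.
import torch
import torch.nn as nn

class BasicBlock(nn.Module):
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.conv1 = nn.Conv2d(in_ch, out_ch, 3, stride, 1, bias=False)
        self.bn1 = nn.BatchNorm2d(out_ch)
        self.conv2 = nn.Conv2d(out_ch, out_ch, 3, 1, 1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_ch)
        self.relu = nn.ReLU(inplace=True)
        # 1x1 projection shortcut when the shape changes, as in He et al. [25]
        self.down = (nn.Sequential(nn.Conv2d(in_ch, out_ch, 1, stride, bias=False),
                                   nn.BatchNorm2d(out_ch))
                     if stride != 1 or in_ch != out_ch else nn.Identity())

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + self.down(x))          # residual connection

class SimplifiedResNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv2d(1, 64, 7, stride=2, padding=3, bias=False),  # Conv1
            nn.BatchNorm2d(64), nn.ReLU(inplace=True),
            nn.MaxPool2d(3, stride=2, padding=1))                  # max pool
        self.stages = nn.Sequential(
            BasicBlock(64, 64, 1),      # conv2_x
            BasicBlock(64, 128, 2),     # conv3_x
            BasicBlock(128, 256, 2),    # conv4_x
            BasicBlock(256, 512, 2))    # conv5_x
        self.pool = nn.AdaptiveAvgPool2d(1)            # average pool

    def forward(self, x):
        return self.pool(self.stages(self.stem(x))).flatten(1)    # (batch, 512)

feats = SimplifiedResNet()(torch.randn(1, 1, 8192, 101))  # one time-frequency map
```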

2.3. Complex-Valued Residual Network

Few articles describe the basic structure of complex-valued neural networks, so in the following we briefly describe the complex-valued convolution layer, the complex-valued weight initialization method, the complex-valued activation functions, and the complex-valued normalization layer. Based on these structures, we build the complex-valued residual network.
  • Complex-valued convolution layer:
Suppose the filter is $W = A + iB$ and the input is $h = x + iy$. Using a real-valued network to simulate the complex-valued operation, the complex-valued convolution output takes the form:

$$W * h = (A * x - B * y) + i\,(B * x + A * y)$$

expressed in matrix form as:

$$\begin{bmatrix} \Re(W * h) \\ \Im(W * h) \end{bmatrix} = \begin{bmatrix} A & -B \\ B & A \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix}$$

where $\Re(W * h)$ is the real part and $\Im(W * h)$ is the imaginary part of the convolution result $W * h$. Combining the real and imaginary parts into a new complex value gives the result of the complex-valued convolution.
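A minimal PyTorch sketch of this construction follows: the complex filter is carried by two real convolutions, and the real and imaginary outputs are combined exactly as in the equations above. The class and parameter names are ours, following the construction in Trabelsi et al. [26].

```python
# A minimal sketch of the complex-valued convolution: W = A + iB is carried by
# two real convolutions over the in-phase and quadrature components.
import torch
import torch.nn as nn

class ComplexConv1d(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size, **kw):
        super().__init__()
        self.conv_A = nn.Conv1d(in_ch, out_ch, kernel_size, **kw)  # real part A
        self.conv_B = nn.Conv1d(in_ch, out_ch, kernel_size, **kw)  # imaginary part B

    def forward(self, x, y):
        # W*h = (A*x - B*y) + i(B*x + A*y), with h = x + iy
        return (self.conv_A(x) - self.conv_B(y),
                self.conv_B(x) + self.conv_A(y))

# x, y: in-phase and quadrature components of a batch of I/Q segments
x, y = torch.randn(4, 1, 8192), torch.randn(4, 1, 8192)
real, imag = ComplexConv1d(1, 64, 7, stride=2, padding=3)(x, y)
```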
  • Complex-valued weight initialization method
A suitable weight initialization method can speed up the convergence of the network and help it find the optimal solution quickly; to some extent, it can also avoid vanishing or exploding gradients during backpropagation. Define the complex weights as:

$$w = |w| e^{i\theta} = \Re\{w\} + i\,\Im\{w\}$$

where $|w|$ and $\theta$ denote the magnitude and phase of the weight $w$, respectively. The variance of the complex weights is expressed as follows:

$$\mathrm{Var}(w) = E[w w^{*}] - (E[w])^{2} = E[|w|^{2}] - (E[w])^{2}$$
Assume that $w$ follows a zero-mean complex Gaussian distribution, so that $E[w] = 0$ and $\mathrm{Var}(w) = E[|w|^{2}]$; its magnitude $|w|$ then follows a Rayleigh distribution, whose variance is:

$$\mathrm{Var}(|w|) = E[|w|^{2}] - (E[|w|])^{2}$$

Therefore, $\mathrm{Var}(|w|) = \mathrm{Var}(w) - (E[|w|])^{2}$, that is, $\mathrm{Var}(w) = \mathrm{Var}(|w|) + (E[|w|])^{2}$. Obtaining the variance of the weights thus only requires the mean and variance of the magnitude $|w|$, and by the properties of the Rayleigh distribution both can be expressed through its parameter $\sigma$:
$$E[|w|] = \sigma \sqrt{\frac{\pi}{2}}, \qquad \mathrm{Var}(|w|) = \frac{4 - \pi}{2} \sigma^{2}$$

Thus, the variance of the complex-valued weight is:

$$\mathrm{Var}(w) = \frac{4 - \pi}{2} \sigma^{2} + \left(\sigma \sqrt{\frac{\pi}{2}}\right)^{2} = 2 \sigma^{2}$$
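The derivation above translates into a simple initializer: draw the magnitude from a Rayleigh distribution with parameter $\sigma$ and the phase uniformly. A minimal NumPy sketch follows, assuming a Glorot-style criterion $\sigma = 1/\sqrt{\text{fan\_in} + \text{fan\_out}}$ as in Trabelsi et al. [26]; the paper does not state the criterion used.

```python
# A minimal sketch of the Rayleigh-based initialization: magnitude from a
# Rayleigh distribution, phase uniform, so that Var(w) = 2*sigma^2.
import numpy as np

def complex_init(fan_in, fan_out, shape, rng=np.random.default_rng(0)):
    sigma = 1.0 / np.sqrt(fan_in + fan_out)    # gives Var(w) = 2/(fan_in+fan_out)
    magnitude = rng.rayleigh(scale=sigma, size=shape)
    phase = rng.uniform(-np.pi, np.pi, size=shape)
    return magnitude * np.exp(1j * phase)      # w = |w| e^{i theta}

w = complex_init(64, 64, (64, 64, 3))
print(np.var(w), 2 / 128)                      # empirical vs. target variance
```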
  • Complex-valued activation function
The three main complex-valued activation functions are modReLU, CReLU, and zReLU.
$$\mathrm{modReLU}(h) = \mathrm{ReLU}(|h| + b)\, e^{i\theta_{h}} = \begin{cases} (|h| + b)\, \dfrac{h}{|h|} & \text{if } |h| + b \ge 0 \\ 0 & \text{otherwise} \end{cases}$$

$$\mathrm{CReLU}(h) = \mathrm{ReLU}(\Re(h)) + i\, \mathrm{ReLU}(\Im(h))$$

$$\mathrm{zReLU}(h) = \begin{cases} h & \text{if } \theta_{h} \in \left[0, \frac{\pi}{2}\right] \\ 0 & \text{otherwise} \end{cases}$$
modReLU applies ReLU only to the complex magnitude and retains the phase; CReLU applies ReLU to the real and imaginary parts separately; and zReLU acts on the phase only, keeping values whose phase lies in the first quadrant. Each of these three activation functions has advantages on different datasets.
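Minimal sketches of the three activation functions on complex PyTorch tensors follow; the bias b in modReLU is a learnable parameter in practice and is fixed here only for illustration.

```python
# Minimal sketches of modReLU, CReLU, and zReLU on complex torch tensors.
import math
import torch

def modrelu(h, b=-0.1):
    mag = h.abs()
    scaled = (mag + b) * h / (mag + 1e-9)          # shrink magnitude, keep phase
    return torch.where(mag + b >= 0, scaled, torch.zeros_like(h))

def crelu(h):
    return torch.relu(h.real) + 1j * torch.relu(h.imag)   # ReLU per component

def zrelu(h):
    theta = torch.angle(h)
    keep = (theta >= 0) & (theta <= math.pi / 2)          # first quadrant only
    return torch.where(keep, h, torch.zeros_like(h))

h = torch.randn(8, dtype=torch.cfloat)
print(modrelu(h), crelu(h), zrelu(h), sep="\n")
```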
  • Complex-valued normalization layer
We scale the zero-centered data by the reciprocal of the square root of the covariance matrix $V$. The derivation of complex-valued batch normalization follows that of real-valued normalization:
$$\tilde{h} = V^{-\frac{1}{2}} \left(h - E[h]\right)$$

$$V = \begin{pmatrix} V_{rr} & V_{ri} \\ V_{ir} & V_{ii} \end{pmatrix} = \begin{pmatrix} \mathrm{Cov}(\Re\{h\}, \Re\{h\}) & \mathrm{Cov}(\Re\{h\}, \Im\{h\}) \\ \mathrm{Cov}(\Im\{h\}, \Re\{h\}) & \mathrm{Cov}(\Im\{h\}, \Im\{h\}) \end{pmatrix}$$

$$\mathrm{BN}(\tilde{h}) = \gamma \tilde{h} + \beta, \qquad \gamma = \begin{pmatrix} \gamma_{rr} & \gamma_{ri} \\ \gamma_{ir} & \gamma_{ii} \end{pmatrix}$$
where $\gamma_{rr}$ and $\gamma_{ii}$ are initialized to $1/\sqrt{2}$, and $\gamma_{ri}$, $\gamma_{ir}$, and the real and imaginary parts of $\beta$ are initialized to 0. The moving averages of $V_{rr}$ and $V_{ii}$ are initialized to $1/\sqrt{2}$, the moving averages of $V_{ri}$, $V_{ir}$, and $\beta$ are initialized to 0, and the momentum of the moving averages is 0.9.
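A minimal NumPy sketch of the whitening step follows, using the closed-form inverse square root of the 2 × 2 covariance matrix and the initial values $\gamma_{rr} = \gamma_{ii} = 1/\sqrt{2}$, $\beta = 0$ given above; batching, moving averages, and learnable parameters are omitted.

```python
# A minimal sketch of complex batch normalization: center, whiten with the
# closed-form inverse square root of the 2x2 covariance V, then apply the
# initial gamma.
import numpy as np

def complex_batchnorm(h, eps=1e-5):
    x = h.real - h.real.mean()                 # centered real part
    y = h.imag - h.imag.mean()                 # centered imaginary part
    Vrr, Vii, Vri = x.var() + eps, y.var() + eps, (x * y).mean()
    s = np.sqrt(Vrr * Vii - Vri ** 2)          # sqrt(det V)
    t = np.sqrt(Vrr + Vii + 2 * s)
    # closed-form V^{-1/2} for a 2x2 symmetric positive-definite matrix
    Wrr, Wii, Wri = (Vii + s) / (s * t), (Vrr + s) / (s * t), -Vri / (s * t)
    x_t, y_t = Wrr * x + Wri * y, Wri * x + Wii * y
    g = 1 / np.sqrt(2)                         # gamma at initialization
    return g * x_t + 1j * g * y_t

h = np.random.randn(1024) + 1j * 2 * np.random.randn(1024)
out = complex_batchnorm(h)
print(np.cov(out.real, out.imag))              # approximately 0.5 * identity
```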
For the one-dimensional time domain signal dataset, we use a one-dimensional complex-valued residual network as the feature extractor, which contains four complex residual units (Figure 1). Similar to the real-valued residual unit, each complex residual unit also has a residual connection. Here, Xr and Xi are the in-phase and quadrature components of the signal, H(Xr, Xi) is the output of the complex residual unit, and X is the envelope of the outputs Xr and Xi.

3. Specific Emitter Identification Based on Integrated Learning

The MDFFIL algorithm is mainly divided into two parts: feature extraction and classifier recognition. In the stage of feature extraction, we use ResNet and Complex-Valued ResNet algorithms to extract the time domain and time–frequency transform domain features of signals. In the stage of classifier recognition, we use complex network classifiers to comprehensively utilize the multi-domain features of signals. The specific operation process is shown in Figure 2 below.

3.1. Algorithm Flow

3.1.1. Feature Extraction

(1)
Preprocess the samples of the original I/Q signal: discard the data from the transmission silent state, segment the remaining data into samples of 8192 sampling points each, and then standardize each sample to reduce the effect of transmit power (a sketch of this step is given after this list).
(2)
Set the number of scales in the Continuous Wavelet Transform (CWT) to 101 and perform the CWT on each standardized signal segment to obtain a scale map. The input size of the residual network is therefore 8192 × 101.
(3)
Use the labeled time domain signal data to train the shallow complex-valued residual network model, use the time–frequency transform domain data to train the shallow residual network model, and then use the trained feature extraction networks to extract the data features of the labeled signals and of the signals to be identified.
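A minimal sketch of step (1) follows; the silence-removal threshold and the unit-power standardization are our assumptions about details the paper does not spell out.

```python
# A minimal sketch of step (1): drop silent samples, segment into 8192-point
# examples, and standardize each segment to remove transmit-power differences.
import numpy as np

def preprocess(stream, seg_len=8192, silence_thresh=1e-3):
    stream = stream[np.abs(stream) > silence_thresh]     # drop silent samples
    n_seg = len(stream) // seg_len
    segs = stream[: n_seg * seg_len].reshape(n_seg, seg_len)
    segs = segs - segs.mean(axis=1, keepdims=True)       # zero mean per segment
    power = np.sqrt((np.abs(segs) ** 2).mean(axis=1, keepdims=True))
    return segs / power                                  # unit average power

stream = np.random.randn(100_000) + 1j * np.random.randn(100_000)
segments = preprocess(stream)                            # (n_seg, 8192) samples
```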

3.1.2. Feature Fusion

(4)
The features extracted from the two shallow network models are considered as the real and imaginary inputs of the complex-valued neural network classifier to train the classification model.
(5)
Input the data features of the signal to be recognized into the trained classification model to obtain the final classification results.

3.2. Classifier Model Framework

In order to make better use of the features of the different domains produced by the feature extractors, we did not simply concatenate them; instead, we designed a classifier based on a complex-valued network architecture to comprehensively utilize the multi-domain features. The classifier network structure is shown in Table 2 below:
We designed a complex-valued neural network classifier with data features extracted from the time domain and transform domain as inputs to the classifier. Then, the two data features are considered as real and imaginary inputs to the classifier to train the classifier network, which can make full use of the intrinsic connection between the data.
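The following minimal PyTorch sketch illustrates this fusion idea: the 512-dimensional time domain feature vector and the 512-dimensional time–frequency feature vector enter as the real and imaginary parts of a complex-valued convolution, followed by a CReLU. The dense head is an illustrative stand-in for the full Table 2 stack, not the authors' exact classifier.

```python
# A minimal sketch of the fusion classifier: two 512-d feature vectors enter
# as the real and imaginary parts of a complex convolution.
import torch
import torch.nn as nn

class FusionClassifier(nn.Module):
    def __init__(self, n_classes=8):
        super().__init__()
        self.conv_A = nn.Conv1d(1, 64, 7, stride=2, padding=3)  # real filters
        self.conv_B = nn.Conv1d(1, 64, 7, stride=2, padding=3)  # imaginary filters
        self.pool = nn.AdaptiveAvgPool1d(1)
        self.head = nn.Linear(2 * 64, n_classes)

    def forward(self, f_time, f_tf):
        x, y = f_time.unsqueeze(1), f_tf.unsqueeze(1)    # (batch, 1, 512) each
        real = torch.relu(self.conv_A(x) - self.conv_B(y))  # complex conv + CReLU
        imag = torch.relu(self.conv_B(x) + self.conv_A(y))
        z = torch.cat([self.pool(real), self.pool(imag)], dim=1).flatten(1)
        return self.head(z)                              # logits over 8 stations

logits = FusionClassifier()(torch.randn(4, 512), torch.randn(4, 512))
```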

3.3. Network Parameters and Comparative Experimental Settings

3.3.1. Network Model Setting

The network parameters of the feature extractor and classifier are set, as shown in Table 3 below.
Among them, the batch size of the residual network model is set to 16, the batch size of the one-dimensional complex-valued residual network model is set to 64, and the batch size of the complex-valued neural network classifier is also set to 64. In addition, the initial learning rate of all three network models is set to 1 × 10−3, and as the number of iterations increases, the learning rate decays by 0.01 every 100 training iterations.
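Interpreted as a standard PyTorch setup, Table 3 and the schedule above correspond to roughly the following sketch; reading "decays by 0.01 every 100" as a multiplicative StepLR schedule is our interpretation of the text.

```python
# A minimal sketch of the Table 3 training configuration: Adam, cross-entropy
# loss, initial learning rate 1e-3, stepwise decay.
import torch

model = torch.nn.Linear(512, 8)               # stand-in for any of the three models
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=100, gamma=0.01)
criterion = torch.nn.CrossEntropyLoss()

for epoch in range(200):                      # 200 epochs per Table 3
    # ... forward pass, loss = criterion(outputs, labels), loss.backward() ...
    optimizer.step()
    scheduler.step()
```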

3.3.2. Baseline Methodology

We used the residual network and the complex-valued residual network [26,27] as comparison experiments. The input to the residual network is the transform domain data, and the input to the complex-valued residual network is the one-dimensional time domain signal.

4. Experimental Results and Discussion

4.1. Experimental Data

The experimental data were generated by eight communication radiation source stations of the same model and collected by the same receiving device, an RSA6120A receiver. The baseband signals generated by the radio stations include both in-phase and quadrature signals. We acquired fixed-frequency data and frequency-hopping data from the radio stations in both voice and digital operating modes, and obtained four types of data. All of the signals sent by the radio stations are random. The fixed-frequency data are at 400 MHz; the hopping frequency range is 450–460 MHz, with a frequency point every 1 MHz. The signal parameters of the radio stations and the parameters of the different datasets are given in Table 4 and Table 5.

4.2. Experimental Result

From the preprocessed time domain signal sample set, we randomly selected different numbers (100, 200, …, 500) of samples as training sample sets. These samples were then put through the wavelet transform to obtain the corresponding transform domain data. In addition, from each station we randomly selected 500 samples and processed them in the same way to obtain a test set of 4000 radiation source samples. The following three identification algorithms were tested, and each experiment was repeated five times to obtain the average identification accuracy. The identification results are compared in Figure 3, Figure 4, Figure 5 and Figure 6.
In our proposed two-stage classification algorithm for integrated learning, the feature extraction stage for time domain signals uses a one-dimensional complex-valued residual network, which can exploit the intrinsic connection between the in-phase and quadrature components. For the two-dimensional transform domain signal, we use a real-valued residual neural network, which exploits the strength of convolutional networks for image processing. In the classifier recognition stage, we again treat the features extracted by the two residual network models as the real and imaginary inputs to the classifier, aiming to mine their correlation. The experimental results demonstrate the superiority of the method, and show that informative structure hidden in the original data would otherwise go unexploited. The identification accuracy of all algorithms increases with the number of labeled input samples, which shows the impact of sufficient sample numbers on identification and reflects the importance of mining the information inherent in limited data.

5. Conclusions

In the actual complex and changeable electromagnetic environment, a non-cooperative receiver cannot detect and collect large amounts of radiation source data. Learning algorithms that rely on deep neural networks generally require a large amount of training data; when the training data are too scarce, it is difficult for these methods to achieve the desired results on test data. To address the scarcity of labeled samples in real scenes, which makes deep learning models difficult to converge, this paper proposes a fusion classification and identification method for communication radiation sources based on MDFFIL. By applying the wavelet transform to the original signal, time domain data and two-dimensional time–frequency data are obtained. In addition, this paper designs an integrated learning two-stage classification identification method: it learns the time domain data by training a one-dimensional complex-valued residual network, learns the two-dimensional time–frequency data by training a residual network, and finally obtains the classification results by fusing the two kinds of features to train a complex-valued neural network classifier. Compared with the other algorithms evaluated, this method effectively improves specific emitter identification performance in sparse labeled sample scenarios.

Author Contributions

Theoretical analysis, validation, writing, L.-Z.Q. and J.-A.Y.; Writing and translation, L.-Z.Q. and K.-J.H.; Check and Suggestion, H.L. and J.-A.Y. All authors have read and agreed to the published version of the manuscript.

Funding

The work described in this paper was supported by the Anhui Provincial Natural Science Foundation (No. 1908085MF202).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

Special thanks are due to the National University of Defense Technology Excellence Youth Fund.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Talbot, K.; Duley, P.; Hyatt, M. Specific Emitter Identification and Verification. 2003. Available online: http://jmfriedt.org/phase_digital/03SS_KTalbot.pdf (accessed on 20 July 2021).
  2. Baldini, G.; Steri, G.; Giuliani, R. Identification of Wireless Devices from Their Physical Layer Radio-Frequency Fingerprints. In Encyclopedia of Information Science and Technology, 4th ed.; IGI Global: Hershey, PA, USA, 2019; pp. 937–949. [Google Scholar]
  3. Spezio, A.E. Electronic warfare systems. IEEE Trans. Microw. Theory Tech. 2002, 50, 633–644. [Google Scholar] [CrossRef]
  4. Ureten, O.; Serinken, N. Wireless security through RF fingerprinting. Electr. Comput. Eng. Can. J. 2007, 32, 27–33. [Google Scholar] [CrossRef]
  5. Rehman, S.; Sowerby, K.; Coghill, C. Radio-frequency fingerprinting for mitigating primary user emulation attack in low-end cognitive radios. Commun. IET 2014, 8, 1274–1284. [Google Scholar] [CrossRef]
  6. Serinken, N.; Üreten, O. Generalised dimension characterisation of radio transmitter turn-on transients. Electron. Lett. 2000, 36, 1064–1066. [Google Scholar] [CrossRef]
  7. Wu, L.; Zhao, Y.; Wang, Z.; Abdalla, F.Y.O.; Ren, G. Specific emitter identification using fractal features based on box-counting dimension and variance dimension. In Proceedings of the 2017 IEEE International Symposium on Signal Processing and Information Technology (ISSPIT), Bilbao, Spain, 18–20 December 2017; pp. 226–231. [Google Scholar]
  8. Huang, G.; Yuan, Y.; Wang, X.; Huang, Z. Specific Emitter Identification Based on Nonlinear Dynamical Characteristics. Can. J. Electr. Comput. Eng. 2016, 39, 34–41. [Google Scholar] [CrossRef]
  9. Lopez-Risueno, G.; Grajal, J.; Sanz-Osorio, A. Digital channelized receiver based on time-frequency analysis for signal interception. IEEE Trans. Aerosp. Electron. Syst. 2005, 41, 879–898. [Google Scholar] [CrossRef]
  10. Bertoncini, C.; Rudd, K.; Nousain, B.; Hinders, M. Wavelet Fingerprinting of Radio-Frequency Identification (RFID) Tags. IEEE Trans. Ind. Electron. 2012, 59, 4843–4850. [Google Scholar] [CrossRef]
  11. Han, J.; Zhang, T.; Wang, H.H.; Ren, D.F. Communication emitter individual identification based on 3D-Hilbert energy spectrum and multi-scale fractal features. Tongxin Xuebao/J. Commun. 2017, 38, 99–109. [Google Scholar] [CrossRef]
  12. Pan, Y.; Yang, S.; Peng, H.; Li, T.; Wang, W. Specific Emitter Identification Based on Deep Residual Networks. IEEE Access 2019, 7, 54425–54434. [Google Scholar] [CrossRef]
  13. Zhang, J.; Wang, F.; Dobre, O.A.; Zhong, Z. Specific Emitter Identification via Hilbert–Huang Transform in Single-Hop and Relaying Scenarios. IEEE Trans. Inf. Forensics Secur. 2016, 11, 1192–1205. [Google Scholar] [CrossRef]
  14. Han, J.; Zhang, T.; Ren, D.; Zheng, X. Communication emitter identification based on distribution of bispectrum amplitude and phase. IET Sci. Meas. Technol. 2017, 11, 1104–1112. [Google Scholar] [CrossRef]
  15. Ding, L.; Wang, S.; Wang, F.; Wei, Z. Specific Emitter Identification via Convolutional Neural Networks. IEEE Commun. Lett. 2018, 22, 2591–2594. [Google Scholar] [CrossRef]
  16. Dudczyk, J.; Kawalec, A. Specific emitter identification based on graphical representation of the distribution of radar signal parameters. Bull. Pol. Acad. Sci. Tech. Sci. 2015, 63. [Google Scholar] [CrossRef]
  17. Zhao, Y.; Wu, L.; Zhang, J.; Li, Y. Specific emitter identification using geometric features of frequency drift curve. Bull. Pol. Acad. Sci. Tech. Sci. 2018, 66, 99–108. [Google Scholar]
  18. Rybak, U.; Dudczyk, J. A Geometrical Divide of Data Particle in Gravitational Classification of Moons and Circles Data Sets. Entropy 2020, 22, 16. [Google Scholar] [CrossRef] [PubMed]
  19. Robinson, J.; Kuzdeba, S.; Stankowicz, J.; Carmack, J. Dilated Causal Convolutional Model for RF Fingerprinting. In Proceedings of the 2020 10th Annual Computing and Communication Workshop and Conference (CCWC), Las Vegas, NV, USA, 6–8 January 2020. [Google Scholar] [CrossRef]
  20. Wong, L.J.; Headley, W.C.; Michaels, A.J. Specific Emitter Identification Using Convolutional Neural Network-based IQ Imbalance Estimators. IEEE Access 2019, 7, 33544–33555. [Google Scholar] [CrossRef]
  21. Merchant, K.; Revay, S.; Stantchev, G.; Nousain, B. Deep Learning for RF Device Fingerprinting in Cognitive Communication Networks. IEEE J. Sel. Top. Signal Process. 2018, 12, 160–167. [Google Scholar] [CrossRef]
  22. Sa, K.; Lang, D.; Wang, C.; Bai, Y. Specific Emitter Identification Techniques for the Internet of Things. IEEE Access 2019, 8, 1644–1652. [Google Scholar] [CrossRef]
  23. Wu, Q.; Feres, C.; Kuzmenko, D.; Zhi, D.; Yu, Z.; Liu, X.; Liu, X. Deep Learning Based RF Fingerprinting for Device Identification and Wireless Security. Electron. Lett. 2018, 54, 1405–1407. [Google Scholar] [CrossRef]
  24. Baldini, G.; Gentile, C.; Giuliani, R.; Steri, G. Comparison of techniques for radiometric identification based on deep convolutional neural networks. Electron. Lett. 2019, 55, 90–92. [Google Scholar] [CrossRef]
  25. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
  26. Trabelsi, C.; Bilaniuk, O.; Zhang, Y.; Serdyuk, D.; Subramanian, S.; Santos, J.; Mehri, S.; Rostamzadeh, N.; Bengio, Y.; Pal, C. Deep Complex Networks. arXiv 2017, arXiv:1705.09792. [Google Scholar]
  27. Qu, L.; Yang, J.; Liu, H.; Huang, K. A method for individual identification of communication radiation sources based on complex residual networks. Signal Process. 2021, 37, 95–103. [Google Scholar]
Figure 1. One-dimensional complex-valued residual network structure.
Figure 2. MDFFIL algorithm framework.
Figure 3. Identification accuracy of dataset I.
Figure 4. Identification accuracy of dataset II.
Figure 5. Identification accuracy of dataset III.
Figure 6. Identification accuracy of dataset IV.
Table 1. Architecture of the feature extractor.

Layer Name | Filter Size | Output Size
Conv1 | 7 × 7, 64, stride 2 | 64 × 4096 × 51
max pool | stride 2 | 64 × 2048 × 26
conv2_x | [3 × 3, 64; 3 × 3, 64], stride 1 | 64 × 2048 × 26
conv3_x | [3 × 3, 128; 3 × 3, 128], stride 2 | 128 × 1024 × 13
conv4_x | [3 × 3, 256; 3 × 3, 256], stride 2 | 256 × 512 × 7
conv5_x | [3 × 3, 512; 3 × 3, 512], stride 2 | 512 × 256 × 4
Average pool | | 512
Table 2. Classifier network structure.

Layer Name | Filter Size | Output Size (Xr) | Output Size (Xi)
Input | | 1 × 512 | 1 × 512
Complex Conv1 | 7 × 7, 64, stride 2 | 64 × 256 | 64 × 256
Complex max pool | stride 2 | 64 × 256 | 64 × 256
Complex conv2_x | [3 × 3, 64; 3 × 3, 64], stride 1 | 64 × 128 | 64 × 128
Complex conv3_x | [3 × 3, 128; 3 × 3, 128], stride 2 | 128 × 64 | 128 × 64
Complex conv4_x | [3 × 3, 256; 3 × 3, 256], stride 2 | 256 × 32 | 256 × 32
Complex Average pool | | 256
Output | | 8
Table 3. Model main parameter settings.

Model Setting | Parameter Value
Epochs | 200
Initial learning rate | 1 × 10−3
Optimizer | Adam
Loss function | Cross entropy
Table 4. Radio data parameters.

Signal Parameter | Parameter Value
Modulation | QPSK
Code rate | 256 Kbps
Sampling rate | 50 MHz
Radio power | 0.125 W
Table 5. Differences between the datasets.

Data Set | Signal Carrier Frequency | Operating Mode
I | 400 MHz | Voice transmission
II | 450–460 MHz | Voice transmission
III | 400 MHz | Digital transmission
IV | 450–460 MHz | Digital transmission
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

