Radar Emitter Identification with Multi-View Adaptive Fusion Network (MAFN)
Abstract
1. Introduction
- Most of the available DNNs focus on only a single view of the signal, such as the spectrogram, the raw waveform, the bi-spectrum, or empirical features. However, multi-view learning has proven to improve the generalization performance of classifiers. Moreover, radar signals have multi-dimensional descriptions in the time, frequency, spatial, and combinational domains, and these views can be complementary for a more robust REI.
- Most of the available DNNs assume that the training and test sets share the same distribution, but the real electromagnetic environment is time-varying, and the radar signals from emitters are too complex to strictly follow a fixed distribution. DNNs are trained on instances collected from a limited number of cases, so when the electromagnetic environment of the signals varies, their performance deteriorates significantly.
- A multi-modal deep neural network is proposed to fuse the multi-view representations of radar emitters and generate more discriminative and robust multi-scale features for REI. The learned features reveal essential characteristics of the radar emitters, which benefits the subsequent classification.
- A decision-level fusion algorithm based on evidence theory is proposed. Probability vectors over the categories are computed for each view by a multi-scale feature fusion module and dynamically integrated at the evidence layer: the evidence from each view is modeled with a Dirichlet distribution and combined via Dempster–Shafer (D-S) evidence theory.
2. Multi-View Adaptive Fusion Network
2.1. Ambiguity Function
2.2. Architecture of MAFN
- Multi-scale fusion module. The original waveform and the contour lines of the ambiguity function (AF) are the inputs of this module. A backbone network extracts features from the multi-view input, and a multi-scale feature fusion layer, built from convolution kernels of different sizes, then fuses the features from the multiple scales of each view.
- Decision-level fusion module. A classifier follows the backbone and multi-scale feature fusion layers, producing a predicted pseudo-label for each view. Each pseudo-label is assumed to follow a Dirichlet distribution, and the results of the different views are dynamically integrated in the evidence layer via D-S evidence theory.
2.3. Multi-Scale Feature Fusion Module
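The multi-scale layer described above applies convolution kernels of different sizes to the same feature sequence and fuses the responses. A minimal sketch of the idea (the function name, kernel widths, and the simple averaging kernels are illustrative, not the paper's actual layer):

```python
import numpy as np

def multi_scale_features(x, kernel_sizes=(3, 5, 7)):
    """Convolve a 1-D feature sequence with several kernel widths and
    stack the responses, mimicking a multi-scale fusion layer."""
    outputs = []
    for k in kernel_sizes:
        kernel = np.ones(k) / k                     # placeholder averaging kernel
        outputs.append(np.convolve(x, kernel, mode="same"))
    return np.stack(outputs, axis=0)                # shape: (num_scales, len(x))

signal = np.sin(np.linspace(0, 4 * np.pi, 64))      # toy stand-in for a radar waveform
feats = multi_scale_features(signal)
print(feats.shape)  # (3, 64)
```

In the actual network the kernels are learned and the per-scale responses are fused (e.g., concatenated) before classification; the sketch only shows how one input yields several aligned feature maps at different receptive fields.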
2.4. Decision-Level Fusion Module
2.5. Loss Function and Learning Algorithm
3. Experimental Results and Discussion
3.1. Datasets and Experimental Condition
3.2. Investigation on the Multi-View Representation
3.3. Investigation on the Multi-Scale and Fusion Module
3.3.1. Multi-Scale Module
3.3.2. Decision-Level Fusion Module
3.4. Comparison with Other Related Methods
4. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Saperstein, S.; Campbell, J.W. Signal Recognition in a Complex Radar Environment. Electronic 1977, 3, 8.
- Kawalec, A.; Owczarek, R. Radar Emitter Recognition Using Intrapulse Data. In Proceedings of the 15th International Conference on Microwaves, Radar and Wireless Communications (IEEE Cat. No. 04EX824), Warsaw, Poland, 17–19 May 2004; Volume 2, pp. 435–438.
- Ru, X.; Ye, H.; Liu, Z.; Huang, Z.; Wang, F.; Jiang, W. An Experimental Study on Secondary Radar Transponder UMOP Characteristics. In Proceedings of the 2016 European Radar Conference (EuRAD), London, UK, 5–7 October 2016; pp. 314–317.
- Zhao, Y.; Wui, L.; Zhang, J.; Li, Y. Specific Emitter Identification Using Geometric Features of Frequency Drift Curve. Bull. Pol. Acad. Sci. Tech. Sci. 2018, 66, 99–108.
- Cao, R.; Cao, J.; Mei, J.; Yin, C.; Huang, X. Radar Emitter Identification with Bispectrum and Hierarchical Extreme Learning Machine. Multimed. Tools Appl. 2019, 78, 28953–28970.
- Chen, P.; Guo, Y.; Li, G.; Wan, J. Adversarial Shared-private Networks for Specific Emitter Identification. Electron. Lett. 2020, 56, 296–299.
- Li, L.; Ji, H. Radar Emitter Recognition Based on Cyclostationary Signatures and Sequential Iterative Least-Square Estimation. Expert Syst. Appl. 2011, 38, 2140–2147.
- Wang, X.; Huang, G.; Zhou, Z.; Tian, W.; Yao, J.; Gao, J. Radar Emitter Recognition Based on the Energy Cumulant of Short Time Fourier Transform and Reinforced Deep Belief Network. Sensors 2018, 18, 3103.
- Seddighi, Z.; Ahmadzadeh, M.R.; Taban, M.R. Radar Signals Classification Using Energy-time-frequency Distribution Features. IET Radar Sonar Navig. 2020, 14, 707–715.
- He, B.; Wang, F. Cooperative Specific Emitter Identification via Multiple Distorted Receivers. IEEE Trans. Inf. Forensics Secur. 2020, 15, 3791–3806.
- Willson, G.B. Radar Classification Using a Neural Network. In Proceedings of the Applications of Artificial Neural Networks, 1990 Technical Symposium on Optics, Electro-Optics, and Sensors, Orlando, FL, USA, 16–20 April 1990; SPIE: Bellingham, WA, USA, 1990; Volume 1294, pp. 200–210.
- Shieh, C.-S.; Lin, C.-T. A Vector Neural Network for Emitter Identification. IEEE Trans. Antennas Propag. 2002, 50, 1120–1127.
- Shmilovici, A. Support Vector Machines. In Data Mining and Knowledge Discovery Handbook; Springer: Berlin/Heidelberg, Germany, 2009; pp. 231–247.
- Sun, J.; Xu, G.; Ren, W.; Yan, Z. Radar Emitter Classification Based on Unidimensional Convolutional Neural Network. IET Radar Sonar Navig. 2018, 12, 862–867.
- Zhu, M.; Feng, Z.; Zhou, X. A Novel Data-Driven Specific Emitter Identification Feature Based on Machine Cognition. Electronics 2020, 9, 1308.
- Liu, Z.-M.; Yu, P.S. Classification, Denoising, and Deinterleaving of Pulse Streams with Recurrent Neural Networks. IEEE Trans. Aerosp. Electron. Syst. 2018, 55, 1624–1639.
- Notaro, P.; Paschali, M.; Hopke, C.; Wittmann, D.; Navab, N. Radar Emitter Classification with Attribute-Specific Recurrent Neural Networks. arXiv 2019, arXiv:1911.07683.
- Li, R.; Hu, J.; Li, S.; Ai, W. Specific Emitter Identification Based on Multi-Domain Features Learning. In Proceedings of the 2021 IEEE International Conference on Artificial Intelligence and Industrial Design (AIID), Guangzhou, China, 28–30 May 2021; pp. 178–183.
- Wu, B.; Yuan, S.; Li, P.; Jing, Z.; Huang, S.; Zhao, Y. Radar Emitter Signal Recognition Based on One-Dimensional Convolutional Neural Network with Attention Mechanism. Sensors 2020, 20, 6350.
- Yuan, S.; Li, P.; Wu, B. Towards Single-Component and Dual-Component Radar Emitter Signal Intra-Pulse Modulation Classification Based on Convolutional Neural Network and Transformer. Remote Sens. 2022, 14, 3690.
- Zhao, Y.; Wang, X.; Lin, Z.; Huang, Z. Multi-Classifier Fusion for Open-Set Specific Emitter Identification. Remote Sens. 2022, 14, 2226.
- Yuan, S.; Li, P.; Wu, B.; Li, X.; Wang, J. Semi-Supervised Classification for Intra-Pulse Modulation of Radar Emitter Signals Using Convolutional Neural Network. Remote Sens. 2022, 14, 2059.
- Stein, S. Algorithms for Ambiguity Function Processing. IEEE Trans. Acoust. Speech Signal Process. 1981, 29, 588–599.
- Li, Y.; Yang, M.; Zhang, Z. A Survey of Multi-View Representation Learning. IEEE Trans. Knowl. Data Eng. 2018, 31, 1863–1883.
- Chen, L.-C.; Papandreou, G.; Schroff, F.; Adam, H. Rethinking Atrous Convolution for Semantic Image Segmentation. arXiv 2017, arXiv:1706.05587.
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778.
- Shafer, G. A Mathematical Theory of Evidence; Princeton University Press: Princeton, NJ, USA, 1976; Volume 42, ISBN 0-691-10042-X.
- Jøsang, A. Subjective Logic; Springer: Berlin/Heidelberg, Germany, 2016; Volume 3.
- Kingma, D.P.; Ba, J. Adam: A Method for Stochastic Optimization. arXiv 2014, arXiv:1412.6980.
- Li, Y.; Jing, W.; Ge, P. Radiation Emitter Signal Recognition Based on VMD and Feature Fusion. Syst. Eng. Electron. 2020, 42, 1499–1503.
- Zhang, W.; Ji, H.; Wang, L. Adaptive Weighted Feature Fusion Classification Method. Syst. Eng. Electron. 2013, 35, 1133–1137.
- Jin, T.; Wang, X.; Tian, R.; Zhang, X. Rapid Recognition Method for Radar Emitter Based on Improved 1DCNN+TCN. Syst. Eng. Electron. 2022, 44, 463–469.
- Yin, X.; Wu, B. Radar Emitter Identification Algorithm Based on Deep Learning. Aerosp. Electron. Warf. 2021, 37, 7–11.
- Peng, S.; Zhao, X.; Wei, X.; Wei, D.; Peng, Y. Multi-View Weighted Feature Fusion Using CNN for Pneumonia Detection on Chest X-Rays. In Proceedings of the 2020 IEEE International Conference on E-health Networking, Application & Services (HEALTHCOM), Shenzhen, China, 1–2 March 2021; pp. 1–6.
- Jiang, W.; Cao, Y.; Yang, L.; He, Z. A Time-Space Domain Information Fusion Method for Specific Emitter Identification Based on Dempster–Shafer Evidence Theory. Sensors 2017, 17, 1972.
- Zhang, Y. Research on Classification of Sensing Targets in Wireless Sensor Networks Based on Decision Level Fusion. Ph.D. Thesis, Beijing Jiaotong University, Beijing, China, 2019.
- Hao, X.; Feng, Z.; Yang, S.; Wang, M.; Jiao, L. Automatic Modulation Classification via Meta-Learning. IEEE Internet Things J. 2023.
| Original ASPP | Modified ASPP |
|---|---|
| 1×1 Conv, rate = 1 | 1×1 Conv, rate = 1 |
| 3×3 Conv, rate = 6 | 3×3 Conv, rate = 2 |
| 3×3 Conv, rate = 12 | 3×3 Conv, rate = 3 |
| 3×3 Conv, rate = 18 | 3×3 Conv, rate = 4 |
| Global pooling | Global pooling |
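The ASPP rates in the table control the spacing of the convolution taps. A minimal sketch of 1-D atrous (dilated) convolution, illustrating why a larger rate widens the receptive field without adding parameters (a toy implementation, not the network's layer):

```python
import numpy as np

def atrous_conv1d(x, kernel, rate):
    """1-D atrous (dilated) convolution: kernel taps are spaced `rate`
    samples apart, enlarging the receptive field at fixed kernel size."""
    k = len(kernel)
    span = (k - 1) * rate + 1                      # effective receptive field
    out = np.zeros(len(x) - span + 1)              # valid-mode output
    for i in range(len(out)):
        out[i] = sum(kernel[j] * x[i + j * rate] for j in range(k))
    return out

x = np.arange(16, dtype=float)
kernel = np.array([1.0, 1.0, 1.0])                 # toy 3-tap kernel
print(atrous_conv1d(x, kernel, rate=1).shape)      # (14,) — span 3
print(atrous_conv1d(x, kernel, rate=4).shape)      # (8,)  — span 9
```

With rate 1 this reduces to an ordinary convolution; the modified ASPP's smaller rates (2, 3, 4 instead of 6, 12, 18) trade the very wide receptive fields of image segmentation for denser coverage suited to shorter signal features.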
| Dataset | Algorithm | OA (%) | AA (%) | Kappa (%) |
|---|---|---|---|---|
| 1 | View1-NF | 97.00 | 97.08 | 96.40 |
| 1 | View2-NF | 96.61 | 96.67 | 95.93 |
| 1 | MAFN | 98.50 | 98.52 | 98.20 |
| 2 | View1-NF | 99.50 | 99.50 | 99.45 |
| 2 | View2-NF | 98.33 | 98.35 | 98.18 |
| 2 | MAFN | 99.86 | 99.86 | 99.85 |
| Dataset | Algorithm | OA (%) | AA (%) | Kappa (%) |
|---|---|---|---|---|
| 1 | MAFN-no M | 98.17 | 98.16 | 97.80 |
| 1 | MAFN | 98.50 | 98.52 | 98.20 |
| 2 | MAFN-no M | 99.78 | 99.78 | 99.77 |
| 2 | MAFN | 99.86 | 99.86 | 99.85 |
| Dataset | Algorithm | OA (%) | AA (%) | Kappa (%) |
|---|---|---|---|---|
| 1 | View1-M | 98.22 | 98.22 | 97.87 |
| 1 | View2-M | 96.94 | 97.02 | 96.33 |
| 1 | MAFN-no D | 98.33 | 98.33 | 98.00 |
| 1 | MAFN | 98.50 | 98.52 | 98.20 |
| 2 | View1-M | 99.66 | 99.67 | 99.64 |
| 2 | View2-M | 98.72 | 98.74 | 98.61 |
| 2 | MAFN-no D | 99.72 | 99.72 | 99.69 |
| 2 | MAFN | 99.86 | 99.86 | 99.85 |
| Dataset | Algorithm | OA (%) | AA (%) | Kappa (%) | Train (s) | Test (ms) |
|---|---|---|---|---|---|---|
| 1 | CFF [30] | 97.22 | 97.27 | 96.67 | 8149 | 0.62 |
| 1 | TWFF [34] | 97.39 | 97.39 | 96.87 | 20,082 | 12.5 |
| 1 | MDF [35] | 98.00 | 97.99 | 97.60 | 12,613 | 7.94 |
| 1 | AWDF [31] | 98.11 | 98.11 | 97.73 | 9194 | 0.72 |
| 1 | DSDF [36] | 98.15 | 98.15 | 97.77 | 14,005 | 6.62 |
| 1 | MAFN | 98.50 | 98.52 | 98.20 | 15,238 | 6.99 |
| 2 | CFF [30] | 99.67 | 99.67 | 99.63 | 14,206 | 1.23 |
| 2 | TWFF [34] | 99.75 | 99.75 | 99.73 | 41,138 | 12.7 |
| 2 | MDF [35] | 99.68 | 99.68 | 99.65 | 24,065 | 8.55 |
| 2 | AWDF [31] | 99.70 | 99.70 | 99.67 | 36,213 | 0.70 |
| 2 | DSDF [36] | 99.75 | 99.74 | 99.73 | 28,326 | 6.95 |
| 2 | MAFN | 99.86 | 99.86 | 99.85 | 31,393 | 7.13 |
Share and Cite
Yang, S.; Peng, T.; Liu, H.; Yang, C.; Feng, Z.; Wang, M. Radar Emitter Identification with Multi-View Adaptive Fusion Network (MAFN). Remote Sens. 2023, 15, 1762. https://doi.org/10.3390/rs15071762