HCTC: Hybrid Convolutional Transformer Classifier for Automatic Modulation Recognition
Abstract
1. Introduction
- A hybrid architecture for AMR is proposed that combines convolutional and transformer-based deep learning components, reducing computational complexity while retaining classification performance;
- The proposed algorithm is evaluated in experiments with different batch sizes, optimizers, and model configurations.
2. Received Signal and Proposed HCTC Model
2.1. Received Signal
2.2. Proposed HCTC Model
2.2.1. Stage A—Convolutional Layer
2.2.2. Stage B—Transformer Layer
2.2.3. Stage C—Feature Mapping
3. Implementation Details
4. Experimental Results and Discussion
4.1. Classification Accuracy and Confusion Matrix
4.2. Number of Parameters, FLOPs, Accuracy at 0 dB, and Average Accuracy from 0 to 18 dB
5. Experiments on Variations in the Model Configuration and Batch Size
Experiments on a Purely Convolutional Model and a Pure Transformer Encoder Model
6. Open Issues in AMR Models and Future Work
7. Potential Real-World Applications
7.1. Military and Secure Communications
7.2. Cognitive Radio Networks
7.3. Satellite Communications
7.4. Internet of Things (IoT)
7.5. 5G-and-Beyond Communications Systems
8. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
Layer (Name) | Output Shape | Description |
---|---|---|
input2 (I Component) | (128, 1) | Input signal of length 128 × 1 |
input3 (Q Component) | (128, 1) | Input signal of length 128 × 1 |
conv1_2 (Conv1D) | (128, 64) | Number of filters: 64; Kernel size: 8; Activation function: ReLU; Kernel initializer: Glorot uniform; Kernel regularizer: L2 (1 × 10⁻⁴) |
conv1_3 (Conv1D) | (128, 64) | Number of filters: 64; Kernel size: 8; Activation function: ReLU; Kernel initializer: Glorot uniform; Kernel regularizer: L2 (1 × 10⁻⁴) |
input1 (I/Q Component) | (2, 128, 1) | Input signal of length 128 × 2 |
conv1_1 (Conv2D) | (2, 128, 256) | Number of filters: 256; Kernel size: (2, 8); Activation function: ReLU; Kernel initializer: Glorot uniform; Kernel regularizer: L2 (1 × 10⁻⁴) |
conv2 (Conv2D) | (2, 128, 64) | Number of filters: 64; Kernel size: (1, 8); Activation function: ReLU; Kernel initializer: Glorot uniform; Kernel regularizer: L2 (1 × 10⁻⁴) |
max_pooling2d (MaxPooling2D) | (2, 128, 256) | Pool size: (2, 2); Stride: 1; Padding: Same |
max_pooling2d_1 (MaxPooling2D) | (2, 128, 64) | Pool size: (2, 2); Stride: 1; Padding: Same |
gaussian_dropout (GaussianDropout) | (2, 128, 256) | Dropout rate: 0.1 |
gaussian_dropout_1 (GaussianDropout) | (2, 128, 64) | Dropout rate: 0.1 |
conv4 (Conv2D) | (1, 124, 32) | Number of filters: 32; Kernel size: (2, 5); Activation function: ReLU; Kernel initializer: Glorot uniform; Kernel regularizer: L2 (1 × 10⁻⁴) |
max_pooling2d_2 (MaxPooling2D) | (1, 124, 32) | Pool size: (2, 2); Stride: 1; Padding: Same |
gaussian_dropout_2 (GaussianDropout) | (1, 124, 32) | Dropout rate: 0.1 |
bn1 (BatchNormalization) | (1, 124, 32) | Batch normalization |
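Read top to bottom, the Stage A listing describes a three-branch convolutional front end: two Conv1D branches over the separate I and Q sequences and a Conv2D stack over the joint I/Q frame. The sketch below is a minimal reconstruction from the table alone, assuming TensorFlow 2.x/Keras; the layer ordering is inferred from the output shapes, and how the three branch outputs are fused before Stage B is not stated in the table, so it is left open here.

```python
# Minimal Keras sketch of Stage A, reconstructed from the table above.
# Glorot uniform is Keras's default kernel initializer, so it is not set explicitly.
import tensorflow as tf
from tensorflow.keras import layers, regularizers

reg = regularizers.l2(1e-4)  # L2 kernel regularizer from the table

# Separate I and Q branches: Conv1D, 64 filters, kernel size 8.
in_i = layers.Input(shape=(128, 1), name="input2_I")
in_q = layers.Input(shape=(128, 1), name="input3_Q")
feat_i = layers.Conv1D(64, 8, padding="same", activation="relu",
                       kernel_regularizer=reg, name="conv1_2")(in_i)   # (128, 64)
feat_q = layers.Conv1D(64, 8, padding="same", activation="relu",
                       kernel_regularizer=reg, name="conv1_3")(in_q)   # (128, 64)

# Joint I/Q branch: Conv2D stack with max pooling, Gaussian dropout, and batch norm.
in_iq = layers.Input(shape=(2, 128, 1), name="input1_IQ")
x = layers.Conv2D(256, (2, 8), padding="same", activation="relu",
                  kernel_regularizer=reg, name="conv1_1")(in_iq)        # (2, 128, 256)
x = layers.MaxPooling2D((2, 2), strides=1, padding="same")(x)
x = layers.GaussianDropout(0.1)(x)
x = layers.Conv2D(64, (1, 8), padding="same", activation="relu",
                  kernel_regularizer=reg, name="conv2")(x)              # (2, 128, 64)
x = layers.MaxPooling2D((2, 2), strides=1, padding="same")(x)
x = layers.GaussianDropout(0.1)(x)
x = layers.Conv2D(32, (2, 5), padding="valid", activation="relu",
                  kernel_regularizer=reg, name="conv4")(x)              # (1, 124, 32)
x = layers.MaxPooling2D((2, 2), strides=1, padding="same")(x)
x = layers.GaussianDropout(0.1)(x)
stage_a_out = layers.BatchNormalization(name="bn1")(x)                  # (1, 124, 32)

stage_a = tf.keras.Model([in_i, in_q, in_iq], [feat_i, feat_q, stage_a_out])
```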
Layer (Name) | Output Shape | Description |
---|---|---|
multiply | (124, 16) | Scales the position-index (range) tensor to form the positional-encoding angles |
sin | (124, 16) | Computes the sine of the angles (in radians) for the positional encoding |
cos | (124, 16) | Computes the cosine of the angles (in radians) for the positional encoding |
cast | (1, 124, 32) | Casts the positional encoding tensor to tf.float32. |
transformer_encoder (Function) | (1, 124, 32) | Applies multihead self-attention and feed-forward layers with positional encoding |
global_average_pooling1d (GlobalAveragePooling1D) | (32) | Computes the global average of each feature map |
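The multiply/sin/cos/cast rows correspond to the standard sinusoidal positional encoding of Vaswani et al., computed over 124 positions and 32 feature dimensions (16 sine plus 16 cosine columns), after which a transformer encoder block and global average pooling are applied. Below is a minimal sketch under that reading, assuming TensorFlow/Keras; the head count, per-head key dimension, and feed-forward width (2, 64, 256) are taken from the best configuration reported later, and interpreting "depth" there as the per-head key dimension is an assumption.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

def positional_encoding(seq_len=124, d_model=32):
    """Sinusoidal positional encoding of shape (1, seq_len, d_model)."""
    positions = np.arange(seq_len)[:, None]                        # (124, 1)
    dims = np.arange(d_model // 2)[None, :]                        # (1, 16)
    angles = positions / np.power(10000.0, 2.0 * dims / d_model)   # (124, 16) angle matrix
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles)                                   # sine columns
    pe[:, 1::2] = np.cos(angles)                                   # cosine columns
    return tf.cast(pe[None, ...], tf.float32)                      # cast, as in the table

def transformer_encoder(x, num_heads=2, key_dim=64, ff_dim=256):
    """One encoder block: multi-head self-attention + feed-forward, with residuals."""
    attn = layers.MultiHeadAttention(num_heads=num_heads, key_dim=key_dim)(x, x)
    x = layers.LayerNormalization(epsilon=1e-6)(x + attn)
    ff = layers.Dense(ff_dim, activation="relu")(x)
    ff = layers.Dense(x.shape[-1])(ff)
    return layers.LayerNormalization(epsilon=1e-6)(x + ff)

# Stage B applied to the Stage A output, squeezed to a (sequence, feature) layout.
tokens = layers.Input(shape=(124, 32))
encoded = layers.Lambda(lambda t: t + positional_encoding(124, 32))(tokens)
encoded = transformer_encoder(encoded)
pooled = layers.GlobalAveragePooling1D()(encoded)                   # (32,)
```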
Layer (Name) | Output Shape | Description |
---|---|---|
fc1 (Dense) | (256) | Number of units: 256; Activation function: SELU; Kernel regularizer: L2 (1 × 10⁻⁴) |
batch_normalization (BatchNormalization) | (256) | Batch normalization applied to fc1 output |
fc2 (Dense) | (128) | Number of units: 128; Activation function: SELU; Kernel regularizer: L2 (1 × 10⁻⁴) |
batch_normalization_1 (BatchNormalization) | (128) | Batch normalization applied to fc2 output |
softmax (Dense) | (11) | Number of units: 11; Activation function: softmax |
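Stage C is a plain classification head: two SELU dense layers with batch normalization, followed by an 11-way softmax over the modulation classes listed in the per-class results further below. A minimal Keras sketch, assuming the 32-dimensional pooled Stage B output as input:

```python
from tensorflow.keras import layers, regularizers

reg = regularizers.l2(1e-4)  # L2 kernel regularizer from the table

def stage_c_head(pooled_features, num_classes=11):
    """Stage C: two SELU fully connected layers with batch norm, then softmax."""
    x = layers.Dense(256, activation="selu", kernel_regularizer=reg, name="fc1")(pooled_features)
    x = layers.BatchNormalization()(x)
    x = layers.Dense(128, activation="selu", kernel_regularizer=reg, name="fc2")(x)
    x = layers.BatchNormalization()(x)
    return layers.Dense(num_classes, activation="softmax", name="softmax")(x)
```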
SNR (dB) | Proposed HCTC | 1DCNN-PF | CGDNet | CLDNN | CLDNN2 | CNN1 | CNN2 | DAE | DenseNet | GRU | IC-AMCNet | MCLDNN | MCNET | PET-CGDNN | ResNet |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
−20 | 10.18 | 9.18 | 9.00 | 9.14 | 9.32 | 8.91 | 9.36 | 9.45 | 9.32 | 9.68 | 9.41 | 9.64 | 9.27 | 9.36 | 9.32 |
−18 | 9.82 | 9.36 | 9.45 | 9.55 | 9.18 | 9.50 | 9.18 | 9.14 | 9.27 | 9.14 | 9.27 | 9.23 | 9.14 | 9.36 | 9.05 |
−16 | 10.42 | 9.41 | 9.68 | 9.77 | 9.77 | 9.50 | 9.55 | 9.68 | 9.50 | 10.27 | 9.41 | 9.50 | 9.82 | 10.23 | 9.59 |
−14 | 13.39 | 9.91 | 11.18 | 11.86 | 11.27 | 11.91 | 11.32 | 12.45 | 10.18 | 13.23 | 10.05 | 12.59 | 11.23 | 12.32 | 10.32 |
−12 | 18.61 | 11.55 | 14.95 | 16.05 | 16.23 | 15.27 | 14.27 | 17.41 | 11.55 | 16.91 | 12.95 | 16.32 | 14.91 | 14.73 | 13.27 |
−10 | 28.48 | 17.14 | 21.27 | 23.82 | 24.09 | 23.18 | 21.91 | 23.64 | 17.86 | 24.45 | 20.27 | 23.23 | 23.86 | 22.05 | 20.64 |
−8 | 39.27 | 28.86 | 32.59 | 37.36 | 35.86 | 35.36 | 33.82 | 34.91 | 29.82 | 38.91 | 33.68 | 38.64 | 39.82 | 35.27 | 32.05 |
−6 | 53.64 | 46.64 | 49.64 | 52.64 | 52.05 | 53.00 | 54.05 | 50.82 | 52.00 | 53.41 | 53.32 | 56.64 | 55.41 | 51.09 | 46.23 |
−4 | 67.09 | 61.77 | 62.50 | 62.00 | 63.00 | 64.82 | 67.05 | 62.68 | 64.00 | 63.82 | 63.73 | 67.09 | 62.82 | 65.59 | 56.32 |
−2 | 80.48 | 71.45 | 73.45 | 74.27 | 77.09 | 75.05 | 78.14 | 73.05 | 72.73 | 76.45 | 73.73 | 80.36 | 73.50 | 76.82 | 67.55 |
0 | 87.03 | 79.68 | 77.32 | 79.18 | 82.95 | 80.59 | 82.27 | 79.73 | 79.36 | 84.27 | 80.77 | 85.55 | 78.64 | 85.36 | 74.77 |
2 | 90.30 | 81.91 | 79.23 | 81.09 | 84.50 | 81.91 | 83.50 | 82.18 | 79.36 | 87.59 | 82.14 | 89.45 | 80.00 | 88.09 | 79.27 |
4 | 88.73 | 84.32 | 82.45 | 84.45 | 85.23 | 83.41 | 84.36 | 84.27 | 82.59 | 87.68 | 84.32 | 89.55 | 82.45 | 90.50 | 83.45 |
6 | 92.06 | 82.68 | 80.50 | 82.64 | 84.27 | 82.82 | 82.36 | 84.14 | 81.14 | 89.23 | 84.36 | 90.59 | 81.18 | 89.73 | 82.32 |
8 | 90.48 | 84.73 | 81.86 | 83.45 | 85.09 | 82.82 | 83.32 | 86.00 | 82.82 | 89.14 | 85.64 | 90.95 | 82.95 | 89.95 | 84.55 |
10 | 91.09 | 86.50 | 82.55 | 83.50 | 85.41 | 83.91 | 84.91 | 85.55 | 83.59 | 90.59 | 84.59 | 90.36 | 83.86 | 90.73 | 85.05 |
12 | 90.97 | 84.18 | 80.77 | 83.18 | 84.73 | 82.86 | 83.95 | 85.18 | 82.68 | 89.91 | 85.86 | 91.77 | 82.68 | 90.09 | 84.00 |
14 | 91.64 | 83.82 | 80.95 | 82.41 | 83.91 | 82.50 | 83.27 | 84.64 | 82.50 | 90.59 | 84.05 | 91.14 | 81.41 | 89.73 | 83.18 |
16 | 91.64 | 84.23 | 80.45 | 83.05 | 84.45 | 81.41 | 82.55 | 83.68 | 82.14 | 88.05 | 83.86 | 90.45 | 81.18 | 89.55 | 83.91 |
18 | 90.61 | 83.68 | 81.50 | 85.41 | 85.36 | 82.59 | 84.64 | 84.91 | 82.91 | 89.50 | 84.95 | 90.82 | 82.95 | 90.27 | 84.23 |
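The "average accuracy from 0 to 18 dB" quoted in the next table is simply the mean of the last ten rows above. As a quick consistency check for the proposed HCTC column (values copied from the table):

```python
# Per-SNR accuracies (%) of the proposed HCTC at 0, 2, ..., 18 dB, read from the table.
hctc_acc_0_to_18_dB = [87.03, 90.30, 88.73, 92.06, 90.48,
                       91.09, 90.97, 91.64, 91.64, 90.61]
average = sum(hctc_acc_0_to_18_dB) / len(hctc_acc_0_to_18_dB)
print(f"Average accuracy, 0-18 dB: {average:.2f}%")  # ~90.5%, matching the 90.45/90.46 quoted later
```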
Modulation | Proposed HCTC | 1DCNN-PF | CGDNet | CLDNN | CLDNN2 | CNN1 | CNN2 | DAE | DenseNet | GRU | IC-AMCNet | MCLDNN | MCNET | PET-CGDNN | ResNet |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
8PSK | 88.00 | 74.00 | 68.00 | 81.00 | 92.50 | 87.50 | 88.00 | 70.50 | 88.00 | 78.00 | 80.00 | 86.00 | 78.50 | 81.00 | 57.50 |
AM-DSB | 62.00 | 100.00 | 91.50 | 96.00 | 82.00 | 93.00 | 97.00 | 93.00 | 75.50 | 74.00 | 96.50 | 55.50 | 98.50 | 90.00 | 89.00 |
AM-SSB | 88.00 | 92.00 | 96.00 | 94.50 | 93.00 | 92.50 | 97.00 | 95.50 | 96.00 | 92.50 | 95.00 | 95.00 | 94.50 | 95.00 | 92.50 |
BPSK | 98.70 | 98.50 | 99.00 | 99.00 | 99.00 | 99.50 | 99.00 | 99.00 | 97.50 | 99.00 | 99.00 | 99.00 | 97.50 | 99.50 | 95.00 |
CPFSK | 100.00 | 98.50 | 99.50 | 99.50 | 100.00 | 100.00 | 100.00 | 99.50 | 99.00 | 99.50 | 100.00 | 99.50 | 99.00 | 99.50 | 98.00 |
GFSK | 95.30 | 90.00 | 96.50 | 93.00 | 98.00 | 94.50 | 94.50 | 97.50 | 96.50 | 97.00 | 97.00 | 97.50 | 96.00 | 95.50 | 96.00 |
4-PAM | 98.70 | 98.00 | 94.00 | 97.00 | 98.00 | 98.50 | 98.00 | 98.50 | 97.50 | 98.50 | 98.50 | 98.50 | 97.00 | 98.00 | 98.50 |
16-QAM | 84.70 | 41.50 | 39.00 | 31.00 | 52.50 | 47.50 | 52.50 | 49.00 | 37.00 | 74.50 | 57.50 | 84.50 | 35.00 | 82.00 | 42.50 |
64-QAM | 83.30 | 70.00 | 68.00 | 72.50 | 67.50 | 53.00 | 51.50 | 55.50 | 70.50 | 84.00 | 44.50 | 71.00 | 73.50 | 79.50 | 56.00 |
QPSK | 96.00 | 90.00 | 73.00 | 71.00 | 90.50 | 84.50 | 93.00 | 88.00 | 66.50 | 84.50 | 95.00 | 94.50 | 62.50 | 84.00 | 73.50 |
WBFM | 62.70 | 24.00 | 26.00 | 36.50 | 39.50 | 36.00 | 34.50 | 31.00 | 49.00 | 45.50 | 25.50 | 60.00 | 33.00 | 35.00 | 24.00 |
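Per-modulation accuracies such as these are the row-normalized diagonal of the confusion matrix (i.e., per-class recall). A minimal sketch of that computation, assuming scikit-learn is available and integer class labels are used:

```python
import numpy as np
from sklearn.metrics import confusion_matrix

def per_class_accuracy(y_true, y_pred, num_classes=11):
    """Per-class accuracy (%) = diagonal of the confusion matrix / row sums."""
    cm = confusion_matrix(y_true, y_pred, labels=list(range(num_classes)))
    return 100.0 * np.diag(cm) / cm.sum(axis=1)

# e.g., per_class_accuracy(true_labels, model.predict(x_test).argmax(axis=-1))
# where model, x_test, and true_labels are assumed to exist.
```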
Model Name | Accuracy at 0 dB SNR (%) | Average Accuracy from 0 to 18 dB (%) | Number of Trainable Parameters | Number of FLOPs |
---|---|---|---|---|
Proposed HCTC | 87.03 | 90.45 | 361,611 | 364,944 |
1DCNN-PF | 79.68 | 83.57 | 174,923 | 173,840 |
CGDNet | 77.32 | 80.76 | 124,933 | 139,221 |
CLDNN | 79.18 | 82.84 | 164,433 | 183,620 |
CLDNN2 | 82.95 | 84.59 | 517,643 | 536,439 |
CNN2 | 82.27 | 83.51 | 858,123 | 857,478 |
DenseNet | 79.36 | 81.91 | 3,282,603 | 3,281,798 |
GRU | 84.27 | 88.65 | 151,179 | 346,243 |
IC-AMCNet | 80.77 | 84.05 | 1,264,011 | 1,263,494 |
MCLDNN | 85.55 | 90.06 | 406,199 | 665,738 |
MCNET | 78.64 | 81.73 | 121,611 | 120,285 |
PET-CGDNN | 85.36 | 89.40 | 71,871 | 169,300 |
ResNet | 74.77 | 82.47 | 3,098,283 | 3,097,478 |
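The trainable-parameter column can be reproduced directly from a built Keras model; the FLOP column requires tracing the model's graph with a profiler, which is only noted here. A minimal sketch, assuming the model object has already been constructed:

```python
import tensorflow as tf

def trainable_parameter_count(model: tf.keras.Model) -> int:
    """Number of trainable weights, as in the 'Trainable Parameters' column."""
    return int(sum(tf.keras.backend.count_params(w) for w in model.trainable_weights))

# FLOPs are not exposed by Keras directly; a FLOP count like the one tabulated is
# typically obtained by profiling the model's frozen graph (e.g., with the TF v1
# profiler's float_operation option) or with a third-party FLOP counter.
```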
Model (Number of Heads, Depth, FF Dimension) | Number of Parameters | Accuracy at 0 dB (%) | Average Accuracy from 0 to 18 dB (%) |
---|---|---|---|
HCTC (2, 8, 256) | 209,579 | 85.45 | 88.79 |
HCTC (2, 16, 256) | 139,707 | 85.45 | 88.79 |
HCTC (2, 64, 256) | 361,611 | 87.03 | 90.45 |
HCTC (4, 8, 256) | 106,867 | 83.09 | 84.40
HCTC (4, 16, 256) | 141,851 | 86.96 | 89.30
HCTC (4, 64, 256) | 394,763 | 87.75 | 89.35 |
HCTC (8, 8, 256) | 107,987 | 82.60 | 84.24 |
HCTC (8, 16, 256) | 146,139 | 85.27 | 88.87 |
HCTC (8, 64, 256) | 461,067 | 87.03 | 88.82 |
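The sweep in this table varies the number of attention heads and the "depth" at a fixed feed-forward width of 256. A sweep like this can be scripted as below; build_hctc is a hypothetical builder assembling Stages A–C with the given transformer hyperparameters, "adam" is used only as an example optimizer, and interpreting "depth" as the per-head key dimension is an assumption.

```python
# Hypothetical sweep over the (heads, depth, ff_dim) settings reported in the table.
# build_hctc(...) is an assumed helper that assembles Stages A-C as sketched earlier.
configs = [(heads, depth, 256) for heads in (2, 4, 8) for depth in (8, 16, 64)]

for num_heads, key_dim, ff_dim in configs:
    model = build_hctc(num_heads=num_heads, key_dim=key_dim, ff_dim=ff_dim)
    model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
    print(f"HCTC ({num_heads}, {key_dim}, {ff_dim}): {model.count_params():,} parameters")
```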
SNR (dB) | Purely Convolutional Model | Pure Transformer Encoder Model | Proposed HCTC |
---|---|---|---|
−20 | 9.21 | 8.97 | 10.18 |
−18 | 8.97 | 9.33 | 9.82 |
−16 | 9.82 | 9.58 | 10.42 |
−14 | 11.45 | 9.94 | 13.39 |
−12 | 14.85 | 10.79 | 18.61 |
−10 | 23.82 | 14.67 | 28.48 |
−8 | 36.91 | 23.45 | 39.27 |
−6 | 51.88 | 33.03 | 53.64 |
−4 | 65.33 | 45.76 | 67.09 |
−2 | 75.27 | 52.61 | 80.48 |
0 | 80.85 | 59.76 | 87.03 |
2 | 82.30 | 61.39 | 90.30 |
4 | 83.33 | 63.82 | 88.73 |
6 | 84.97 | 63.64 | 92.06 |
8 | 84.73 | 63.39 | 90.48 |
10 | 84.67 | 65.70 | 91.09 |
12 | 84.48 | 66.79 | 90.97 |
14 | 83.33 | 65.70 | 91.64 |
16 | 84.55 | 64.00 | 91.64 |
18 | 84.06 | 64.85 | 90.61 |
Model | Number of Non-Trainable Parameters | Number of Trainable Parameters | Total Number of Parameters | Average Accuracy (0–18 dB) |
---|---|---|---|---|
Purely convolutional model | 768 | 294,923 | 295,691 | 83.73 |
Pure transformer encoder model | 0 | 117,515 | 117,515 | 63.90 |
Proposed HCTC | 896 | 361,611 | 362,507 | 90.46 |
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Ruikar, J.D.; Park, D.-H.; Kwon, S.-Y.; Kim, H.-N. HCTC: Hybrid Convolutional Transformer Classifier for Automatic Modulation Recognition. Electronics 2024, 13, 3969. https://doi.org/10.3390/electronics13193969