Performance Analysis of a Hybrid Complex-Valued CNN-TCN Model for Automatic Modulation Recognition in Wireless Communication Systems
Abstract
1. Introduction
1.1. Existing Solutions and Research Gaps
1.1.1. Existing Solutions
- Traditional Techniques: Early approaches like maximum likelihood (ML) methods, decision-theoretic approaches, and feature-based classifiers have been extensively studied. These methods utilize signal characteristics such as amplitude, phase, and frequency to distinguish modulation schemes with varying degrees of success.
- Signal Processing-Based Methods: Advanced signal processing techniques, such as wavelet transforms, cyclostationary feature detection, and higher-order cumulants, have been applied to improve the accuracy and robustness of AMC in diverse wireless environments.
- Machine Learning Approaches: Recent developments have focused on leveraging supervised learning algorithms, such as support vector machines (SVM) and k-nearest neighbors (k-NN), to classify modulation schemes. These models often rely on pre-extracted features and have shown promise in controlled scenarios.
- Deep Learning Solutions: The integration of deep learning models, particularly convolutional neural networks (CNNs) and recurrent neural networks (RNNs), has revolutionized AMC by enabling end-to-end learning. These models process raw I/Q data directly, eliminating the need for manual feature extraction, and have demonstrated superior performance in complex environments.
1.1.2. Major Research Gaps
- Performance in Low-SNR Environments: While many techniques achieve high accuracy under ideal conditions, their performance often degrades significantly in low-SNR environments. Enhancing robustness in these challenging conditions remains an open research area.
- Robustness Against Noise and Interference: AMC systems must reliably classify signals under noise uncertainty and interference. More robust architectures or noise-resistant features are needed for better performance in noisy environments.
- Interference from Dynamic Environments: Existing methods often assume static environments, which is unrealistic. Methods that can adapt to dynamic conditions, such as varying channel conditions and interference, are necessary.
1.2. Proposal of This Paper and Its Significance
1.3. Research Questions
2. Related Works
3. Mathematical Concept and Modeling
3.1. Mathematical Model of CV-CNN-TCN
3.1.1. Input Representation
3.1.2. Complex-Valued Convolutional Neural Network (CV-CNN)
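The core operation of a CV-CNN can be illustrated with a toy sketch (an illustration only, not the paper's implementation): a complex-valued 1-D convolution realized with four real correlations, following (a + ib)(c + id) = (ac − bd) + i(ad + bc):

```python
import numpy as np

def complex_conv1d(x, w):
    """Valid-mode complex 1-D convolution (correlation convention, as in
    deep-learning 'conv' layers), built from four real correlations:
    (a + ib)(c + id) = (ac - bd) + i(ad + bc)."""
    xr, xi = x.real, x.imag
    wr, wi = w.real, w.imag
    real = np.correlate(xr, wr, mode="valid") - np.correlate(xi, wi, mode="valid")
    imag = np.correlate(xr, wi, mode="valid") + np.correlate(xi, wr, mode="valid")
    return real + 1j * imag
```

Complex-valued layers in deep-learning frameworks follow the same decomposition, holding the real and imaginary kernel parts as separate learned real tensors.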
3.1.3. Temporal Convolutional Network (TCN)
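A TCN is built from dilated causal convolutions. The minimal sketch below (illustrative only, not the paper's layer) shows the defining property: the output at time t depends only on inputs at t, t − d, t − 2d, and so on, never on the future:

```python
import numpy as np

def dilated_causal_conv(x, w, dilation=1):
    """Causal 1-D convolution: output[t] depends only on x[t], x[t-d], x[t-2d], ...
    Left-pads with zeros so the output has the same length as the input."""
    k = len(w)
    pad = (k - 1) * dilation
    xp = np.concatenate([np.zeros(pad), x])
    return np.array([sum(w[j] * xp[t + pad - j * dilation] for j in range(k))
                     for t in range(len(x))])
```

Stacking such layers with dilations 1, 2, 4, … grows the receptive field exponentially while keeping the output strictly causal, which is what makes TCNs suitable for long I/Q sequences.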
3.1.4. Fully Connected Layers
3.1.5. Loss Function
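A multi-class AMC model of this kind is typically trained with categorical cross-entropy over the modulation classes; a numerically stable per-sample sketch (an assumption about the exact objective, shown for illustration):

```python
import numpy as np

def cross_entropy(logits, label):
    """Softmax cross-entropy for one sample, computed in log-space for stability."""
    z = logits - logits.max()               # shift so exp() cannot overflow
    log_probs = z - np.log(np.exp(z).sum()) # log-softmax
    return -log_probs[label]
```

With 10 equally likely classes the loss equals ln 10 ≈ 2.30, a useful sanity check that training starts from chance level on this 10-modulation dataset.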
3.2. Baseline Methods
3.2.1. CNN
3.2.2. CLDNN
3.2.3. SCRNN
3.2.4. ResNet
3.3. Data Preprocessing and Sampling
Algorithm 1: Complex-Valued CNN–Temporal Convolutional Network Training
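The preprocessing stage can be sketched as follows. Frame shape (2 × 128 I/Q) and the 80/20 train/test split follow Table 1; the per-frame power normalization is a common AMC preprocessing choice and an assumption here, not necessarily the authors' exact recipe:

```python
import numpy as np

rng = np.random.default_rng(0)
n_frames = 1000                                   # toy stand-in for the 1,200,000 dataset frames
frames = rng.standard_normal((n_frames, 2, 128))  # rows: I and Q; 128 samples per frame

# Per-frame power normalization (common AMC preprocessing, assumed here)
power = np.sqrt((frames ** 2).sum(axis=(1, 2), keepdims=True) / frames.shape[2])
frames = frames / power

# 80/20 train/test split, as in Table 1
idx = rng.permutation(n_frames)
split = int(0.8 * n_frames)
train, test = frames[idx[:split]], frames[idx[split:]]
```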
3.4. Proposed Model: CV-CNN-TCN
3.5. Performance Metrics
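The per-modulation accuracies reported in Section 4 derive from a confusion matrix; a minimal sketch of the two bookkeeping steps (helper names are ours, for illustration):

```python
import numpy as np

def confusion_matrix(y_true, y_pred, n_classes):
    """Rows index the true class, columns the predicted class."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

def per_class_accuracy(cm):
    """Diagonal over row sums: recognition accuracy per modulation."""
    return np.diag(cm) / cm.sum(axis=1)
```

Derived metrics such as sensitivity, NPV, and FDR (listed in the Abbreviations) all follow from the same matrix by combining its diagonal and off-diagonal entries.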
4. Results
4.1. Training and Validation Loss
4.2. Classification Accuracy
4.3. Confusion Matrix Comparison
4.4. Ablation Study
4.5. Computation Complexity
4.6. Recognition Accuracy by Modulation and SNR
- At −4 dB SNR (Table 4): Our proposed CV-CNN-TCN and CV-CNN-TCN-DCC models outperform baselines for most modulation types, particularly for 16-QAM, 64-QAM, and WBFM, where baseline models show significant drops. For instance, CV-CNN-TCN-DCC achieves 49.35% for 16-QAM versus 10.6% (CLDNN).
- At 0 dB SNR (Table 5): The proposed models show clear gains under low-SNR conditions, with CV-CNN-TCN-DCC excelling on 16-QAM (38.14%) and WBFM (76.52%), far surpassing the baselines. It also maintains high accuracy on modulations such as BPSK and GFSK, matching or exceeding the other temporal models.
- At 4 dB SNR (Table 6): The improvements are more pronounced at higher SNR levels. CV-CNN-TCN-DCC performs strongly across most modulations, particularly BPSK (98.04%), 64-QAM (69.68%), and QPSK (74.58%). Compared with baselines such as SCRNN and CLDNN, the proposed models are markedly more robust on the harder modulation classes and more resilient to noise.
4.7. Statistical Analysis
4.7.1. Normality and Homogeneity of Variance Tests
- CLDNN → Normal;
- SCRNN → Normal;
- ResNet → Normal;
- CNN → Normal;
- CV-CNN-TCN → Normal;
- CV-CNN-TCN-DCC → Normal.
- Bartlett’s test: statistic = 25.776 → Variances are not homogeneous.
4.7.2. Kruskal–Wallis Global Test
- Kruskal–Wallis test → Significant differences found between groups.
4.7.3. Dunn’s Post Hoc Test
- CLDNN and CV-CNN-TCN, CLDNN and CV-CNN-TCN-DCC;
- SCRNN and ResNet, SCRNN and CNN, SCRNN and CV-CNN-TCN, SCRNN and CV-CNN-TCN-DCC.
4.7.4. Tools Used
- scipy.stats for Shapiro–Wilk, Bartlett, and Kruskal–Wallis tests;
- scikit-posthocs for Dunn’s post hoc test with Bonferroni correction;
- numpy, pandas, and matplotlib for data handling and visualization.
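The test sequence of Section 4.7 can be reproduced with the listed tools; the sketch below runs the Shapiro–Wilk, Bartlett, and Kruskal–Wallis steps on toy accuracy samples (Dunn's post hoc test via scikit-posthocs is omitted here to keep dependencies minimal):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Toy per-model accuracy samples standing in for the paper's per-SNR accuracies
groups = [rng.normal(loc=mu, scale=2.0, size=20) for mu in (50.0, 52.0, 70.0)]

# Normality of each group (Shapiro-Wilk)
shapiro_p = [stats.shapiro(g).pvalue for g in groups]

# Homogeneity of variances across groups (Bartlett)
bartlett_stat, bartlett_p = stats.bartlett(*groups)

# Global nonparametric comparison (Kruskal-Wallis)
kw_stat, kw_p = stats.kruskal(*groups)
```

When Bartlett rejects homogeneity, as in the paper, the nonparametric Kruskal–Wallis test is the appropriate global comparison, followed by Dunn's test with Bonferroni correction for pairwise contrasts.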
5. Discussion
5.1. Answers to Research Questions
5.2. Challenges and Limitation of the Proposed Work
- Complexity of wireless signals: Real-world wireless signals often exhibit overlapping characteristics between modulation schemes, particularly for closely spaced modulations like BPSK and QPSK. Extracting discriminative features in such scenarios is non-trivial.
- Computational demands: Hybrid architectures, including CV-CNN-TCN models, require significant computational power for training and inference. Devices with limited resources may struggle to handle the complexity, particularly in real-time applications.
- Overfitting: Due to the high capacity of deep learning models, overfitting is a significant concern. Ensuring robustness to unseen data requires techniques like dropout, data augmentation, and cross-validation.
- Latency constraints: Real-time AMC applications, such as those in cognitive radio or adaptive communication systems, demand low-latency processing. Sequential data processing models, like RNNs, are often unsuitable, while TCNs must be carefully optimized to meet latency requirements.
- Adaptability to new modulation schemes: The introduction of new or non-standard modulation schemes necessitates retraining or fine-tuning the model. Ensuring adaptability while minimizing retraining efforts remains an open problem.
6. Conclusions and Future Work
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
Abbreviations
RF | Radio Frequency |
GMSK | Gaussian Minimum Shift Keying |
FM | Frequency Modulation |
CRN | Cognitive Radio Network |
AMC | Automatic Modulation Classification |
TCN | Temporal Convolutional Network |
RNN | Recurrent Neural Network |
DL | Deep Learning |
k-NN | k-Nearest Neighbors |
SVM | Support Vector Machine |
ML | Maximum Likelihood |
CLDNN | Convolutional LSTM Dense Neural Network |
SCRNN | Sequential Convolutional Recurrent Neural Network |
LSTM | Long Short-Term Memory |
CV-CNN | Complex-Valued Convolutional Neural Network |
CV-CNN-TCN | Complex-Valued Convolutional Neural Network–Temporal Convolutional Network |
DCC | Dilated Causal Convolutional |
ReLU | Rectified Linear Unit |
IQ | In-Phase and Quadrature |
JI | Jaccard Index |
NPV | Negative Predictive Value |
FDR | False Discovery Rate |
SE | Sensing Error |
Pmd | Probability of Missed Detection |
Pfa | Probability of False Alarm |
SDR | Software-Defined Radio |
References
- Dobre, O.A.; Abdi, A.; Bar-Ness, Y.; Su, W. Survey of automatic modulation classification techniques: Classical approaches and new trends. IET Commun. 2007, 1, 137–156.
- Dobre, O.A. Signal identification for emerging intelligent radios: Classical problems and new challenges. IEEE Instrum. Meas. Mag. 2015, 18, 11–18.
- Zhang, H.; Yu, L.; Xia, G.-S. Iterative time-frequency filtering of sinusoidal signals with updated frequency estimation. IEEE Signal Process Lett. 2015, 23, 139–143.
- Dobre, O.A.; Oner, M.; Rajan, S.; Inkol, R. Cyclostationarity-based robust algorithms for QAM signal identification. IEEE Commun. Lett. 2011, 16, 12–15.
- Orlic, V.D.; Dukic, M.L. Automatic modulation classification algorithm using higher-order cumulants under real-world channel conditions. IEEE Commun. Lett. 2009, 13, 917–919.
- Sarmanbetov, S.; Nurgaliyev, M.; Zholamanov, B.; Kopbay, K.; Saymbetov, A.; Bolatbek, A.; Kuttybay, N.; Orynbassar, S.; Yershov, E. Novel filtering and regeneration technique with statistical feature extraction and machine learning for automatic modulation classification. Digit. Signal Process. 2024, 155, 104744.
- Pablos, C.; Andrade, Á.G.; Galaviz, G. Modulation-agnostic spectrum sensing based on anomaly detection for cognitive radio. ICT Express 2022, 9, 398–402.
- Lim, S.H.; Han, J.; Noh, W.; Song, Y.; Jeon, S.-W. Hybrid neural coded modulation: Design and training methods. ICT Express 2022, 8, 25–30.
- O’Shea, T.; Hoydis, J. An Introduction to Deep Learning for the Physical Layer. IEEE Trans. Cogn. Commun. Netw. 2017, 3, 563–575.
- O’Shea, T.J.; Corgan, J.; Clancy, T.C. Convolutional radio modulation recognition networks. In Engineering Applications of Neural Networks; Springer: Cham, Switzerland, 2016; pp. 213–226.
- West, N.E.; O’Shea, T. Deep architectures for modulation recognition. In Proceedings of the 2017 IEEE International Symposium on Dynamic Spectrum Access Networks (DySPAN), Baltimore, MD, USA, 6–9 March 2017; pp. 1–6.
- Rajendran, S.; Meert, W.; Giustiniano, D.; Lenders, V.; Pollin, S. Deep learning models for wireless signal classification with distributed low-cost spectrum sensors. IEEE Trans. Cogn. Commun. Netw. 2018, 4, 433–445.
- Yi, G.; Hao, X.; Yan, X.; Dai, J.; Liu, Y.; Han, Y. Automatic modulation recognition of radiation source signals based on two-dimensional data matrix and improved residual neural network. Def. Technol. 2024, 33, 364–373.
- Zeng, Y.; Zhang, M.; Han, F.; Gong, Y.; Zhang, J. Spectrum analysis and convolutional neural network for automatic modulation recognition. IEEE Wirel. Commun. Lett. 2019, 8, 929–932.
- Kharbouche, A.; Madini, Z.; Zouine, Y.; El-Haryqy, N. Signal demodulation with Deep Learning Methods for visible light communication. In Proceedings of the 2023 9th International Conference on Optimization and Applications (ICOA), Abu Dhabi, United Arab Emirates, 5–6 October 2023; pp. 1–5.
- Zhang, F.; Luo, C.; Xu, J.; Luo, Y. An efficient deep learning model for automatic modulation recognition based on parameter estimation and transformation. IEEE Commun. Lett. 2021, 25, 3287–3290.
- Zhang, H.; Huang, M.; Yang, J.; Sun, W. A data preprocessing method for automatic modulation classification based on CNN. IEEE Commun. Lett. 2020, 25, 1206–1210.
- Sun, Y.C.; Tian, R.L.; Wang, X.F. Emitter signal recognition based on improved CLDNN. Syst. Eng. Electron. 2021, 43, 42–47.
- Zhang, X.; Luo, Z.; Xiao, W. CNN-BiLSTM-DNN-Based Modulation Recognition Algorithm at Low SNR. Appl. Sci. 2024, 14, 5879.
- Jang, J.; Pyo, J.; Yoon, Y.-I.; Choi, J. Meta-Transformer: A Meta-Learning Framework for Scalable Automatic Modulation Classification. IEEE Access 2024, 12, 9267–9276.
- Qi, P.; Zhou, X.; Zheng, S.; Li, Z. Automatic Modulation Classification Based on Deep Residual Networks With Multimodal Information. IEEE Trans. Cogn. Commun. Netw. 2021, 7, 21–33.
- Bhatti, S.G.; Bhatti, A.I. Radar signals intrapulse modulation recognition using phase-based STFT and BiLSTM. IEEE Access 2022, 10, 80184–80194.
- Peng, S.; Jiang, H.; Wang, H.; Alwageed, H.; Zhou, Y.; Sebdani, M.M.; Yao, Y.D. Modulation classification based on signal constellation diagrams and deep learning. IEEE Trans. Neural Netw. Learn. Syst. 2018, 30, 718–727.
- Chandhok, S.; Joshi, H.; Darak, S.J.; Subramanyam, A.V. LSTM guided modulation classification and experimental validation for sub-Nyquist rate wideband spectrum sensing. In Proceedings of the 2019 11th International Conference on Communication Systems & Networks (COMSNETS), Bengaluru, India, 7–11 January 2019; pp. 458–460.
- Huynh-The, T.; Hua, C.H.; Pham, Q.V.; Kim, D.S. MCNet: An efficient CNN architecture for robust automatic modulation classification. IEEE Commun. Lett. 2020, 24, 811–815.
- Mossad, O.S.; ElNainay, M.; Torki, M. Deep convolutional neural network with multi-task learning scheme for modulations recognition. In Proceedings of the 2019 15th International Wireless Communications & Mobile Computing Conference (IWCMC), Tangier, Morocco, 24–28 June 2019; pp. 1644–1649.
- Njoku, J.N.; Morocho-Cayamcela, M.E.; Lim, W. CGDNet: Efficient hybrid deep learning model for robust automatic modulation recognition. IEEE Netw. Lett. 2021, 3, 47–51.
- Kong, W.; Jiao, X.; Xu, Y.; Zhang, B.; Yang, Q. A transformer-based contrastive semi-supervised learning framework for automatic modulation recognition. IEEE Trans. Cogn. Commun. Netw. 2023, 9, 950–962.
- Van Den Oord, A.; Dieleman, S.; Zen, H.; Simonyan, K.; Vinyals, O.; Graves, A.; Kalchbrenner, N.; Senior, A.; Kavukcuoglu, K. WaveNet: A generative model for raw audio. arXiv 2016, arXiv:1609.03499.
- Ramjee, S.; Ju, S.; Yang, D.; Liu, X.; Gamal, A.E.; Eldar, Y.C. Fast Deep Learning for Automatic Modulation Classification. arXiv 2019, arXiv:1901.05850.
- Clement, J.C.; Indira, N.; Vijayakumar, P.; Nandakumar, R. Deep learning based modulation classification for 5G and beyond wireless systems. Peer-to-Peer Netw. Appl. 2021, 14, 319–332.
- Liao, K.; Zhao, Y.; Gu, J.; Zhang, Y.; Zhong, Y. Sequential convolutional recurrent neural networks for fast automatic modulation classification. IEEE Access 2021, 9, 27182–27188.
- DeepSig Inc. RF Datasets for Machine Learning. Available online: https://www.deepsig.ai/datasets (accessed on 25 October 2024).
- Google Colaboratory. Available online: https://colab.research.google.com/ (accessed on 30 October 2024).
- Storey, J.D. A direct approach to false discovery rates. J. R. Stat. Soc. Ser. B Stat. Methodol. 2002, 64, 479–498.
- Zhao, Y.; Paul, P.; Xin, C.; Song, M. Performance analysis of spectrum sensing with mobile SUs in cognitive radio networks. In Proceedings of the 2014 IEEE International Conference on Communications (ICC), Sydney, NSW, Australia, 10–14 June 2014; pp. 2761–2766.
- Yacouby, R.; Axman, D. Probabilistic extension of precision, recall, and F1 score for more thorough evaluation of classification models. In Proceedings of the First Workshop on Evaluation and Comparison of NLP Systems, Online, 20 November 2020; pp. 79–91.
- Fowlkes, E.B.; Mallows, C.L. A method for comparing two hierarchical clusterings. J. Am. Stat. Assoc. 1983, 78, 553–569.
- Chicco, D.; Jurman, G. The advantages of the Matthews correlation coefficient (MCC) over F1 score and accuracy in binary classification evaluation. BMC Genom. 2020, 21, 6.
- Vijay, E.V.; Aparna, K. Deep Learning-CT based spectrum sensing for cognitive radio for proficient data transmission in Wireless Sensor Networks. e-Prime Electr. Eng. Electron. Energy 2024, 9, 100659.
- Milligan, G.W.; Cooper, M.C. A study of the comparability of external criteria for hierarchical cluster analysis. Multivar. Behav. Res. 1986, 21, 441–458.
- Steinley, D. Properties of the Hubert–Arabie adjusted Rand index. Psychol. Methods 2004, 9, 386.
Parameter | Value |
---|---|
Modulations | QPSK, PAM4, 8PSK, BPSK, CPFSK, BFSK, QAM64, QAM16, AM-DSB, WB-FM |
Samples | 1,200,000 |
Sampling Frequency | 1 Msample/s |
Sampling Interval | 128 µs |
Samples/Symbol | 8 samples/symbol |
Format | IQ |
Sample size | 2 × 128 |
Number of SNR values | 20 |
SNR range | [−20 dB, 18 dB] |
Training data | 960,000 samples (80%) |
Testing data | 240,000 samples (20%) |
Labels | SNR and modulation methods |
Models | CNN | ResNet | SCRNN | CLDNN | Proposed Model |
---|---|---|---|---|---|
Total parameters | 398,122 | 91,626 | 428,554 | 239,818 | 484,842 |
Epochs | 20 | 20 | 20 | 20 | 20 |
Training time (s)/epoch | 564.7 | 119.4 | 196.7 | 139.9 | 187 |
Inference time | 15.28 ms | 5.432 ms | 22.15 ms | 8.42 ms | 9.85 ms |
Memory usage | 18.73 MB | 345.67 MB | 24.81 MB | 12.58 MB | 15.32 MB |
FLOPs | 41,283,584 | 12,345,678 | 58,724,352 | 15,245,824 | 4,826,112 |
Input size (samples) | 512 | 256 | 128 | 64 | 32 | 16 |
---|---|---|---|---|---|---|
Classification time (ms) | 19.5 | 11.5 | 6.15 | 4.2 | 3.2 | 3 |
Modulation | CNN | ResNet | SCRNN | CLDNN | CV-CNN-TCN | CV-CNN-TCN-DCC |
---|---|---|---|---|---|---|
8PSK | 37.31% | 23.85% | 43.27% | 16.5% | 20.16% | 23.23% |
AM-DSB | 90.06% | 78.93% | 96.56% | 88.27% | 91.75% | 55.25% |
BPSK | 37.54% | 57.41% | 49.5% | 55.08% | 39.72% | 60.10% |
CPFSK | 94.37% | 62.6% | 82.06% | 87.27% | 85.97% | 77.50% |
GFSK | 94.9% | 84.87% | 87.81% | 91.68% | 90.12% | 91.50% |
PAM4 | 57.4% | 85.39% | 39.14% | 31.14% | 93.70% | 90.89% |
16-QAM | 11.54% | 19.77% | 6.12% | 10.6% | 42.08% | 49.35% |
64-QAM | 61.1% | 67.4% | 53.64% | 39.52% | 69.04% | 60.20% |
QPSK | 30.5% | 18.72% | 22.54% | 45.54% | 22.85% | 31.83% |
WBFM | 37.54% | 35.06% | 27.41% | 37.31% | 30.47% | 59.97% |
Modulation | CNN | ResNet | SCRNN | CLDNN | CV-CNN-TCN | CV-CNN-TCN-DCC |
---|---|---|---|---|---|---|
8PSK | 55.72% | 43.18% | 44.41% | 31.85% | 66.35% | 70.68% |
AM-DSB | 96.52% | 83.95% | 99.16% | 95.93% | 99.02% | 47.37%
BPSK | 80.68% | 85.5% | 73.58% | 85.75% | 85.58% | 92.75% |
CPFSK | 99.79% | 83.62% | 96.37% | 98.66% | 95.12% | 92.27% |
GFSK | 99.31% | 97.33% | 98.2% | 98.29% | 97.41% | 92.27% |
PAM4 | 93.72% | 94.81% | 84.04% | 85.00% | 98.41% | 98.04% |
16-QAM | 23.95% | 19.02% | 17.54% | 35.68% | 30.85% | 38.14% |
64-QAM | 77.12% | 70.18% | 67.93% | 59.14% | 75.95% | 67.64% |
QPSK | 52.62% | 55.31% | 51.62% | 65.7% | 62.74% | 60.66% |
WBFM | 39.77% | 41.02% | 34.33% | 38.12% | 32.48% | 76.52%
Modulation | CNN | ResNet | SCRNN | CLDNN | CV-CNN-TCN | CV-CNN-TCN-DCC |
---|---|---|---|---|---|---|
8PSK | 64.79% | 57.33% | 40.77% | 44.10% | 84.18% | 85.04% |
AM-DSB | 96.52% | 73.52% | 99.95% | 99.31% | 99.97% | 43.52% |
BPSK | 97.31% | 95.81% | 95.68% | 97.27% | 96.18% | 98.04% |
CPFSK | 99.93% | 95.39% | 98.64% | 99.31% | 99.29% | 99.14% |
GFSK | 99.77% | 98.97% | 99.35% | 99.31% | 99.14% | 98.81% |
PAM4 | 98.08% | 95.87% | 95.27% | 95.60% | 98.47% | 98.29% |
16-QAM | 20.54% | 18.83% | 16.37% | 31.49% | 26.60% | 35.94% |
64-QAM | 77.79% | 71.97% | 71.31% | 61.20% | 79.14% | 69.68% |
QPSK | 56.66% | 80.45% | 60.06% | 63.58% | 73.25% | 74.58% |
WBFM | 42.04% | 55.25% | 38.74% | 41.20% | 35.81% | 79.72% |
Model | CLDNN | SCRNN | ResNet | CNN | CV-CNN-TCN | CV-CNN-TCN-DCC |
---|---|---|---|---|---|---|
CLDNN | 1.000 | 1.000 | 0.553 | 1.000 | 0.00008 | 0.00020
SCRNN | 1.000 | 1.000 | 0.010 | 0.043 | 6.77 × 10−8 | 2.28 × 10−7 |
ResNet | 0.553 | 0.010 | 1.000 | 1.000 | 0.202 | 0.352 |
CNN | 1.000 | 0.043 | 1.000 | 1.000 | 0.059 | 0.112 |
CV-CNN-TCN | 0.00008 | 6.77 × 10−8 | 0.202 | 0.059 | 1.000 | 1.000 |
CV-CNN-TCN-DCC | 0.00020 | 2.28 × 10−7 | 0.352 | 0.112 | 1.000 | 1.000 |
© 2025 by the authors. Published by MDPI on behalf of the International Institute of Knowledge Innovation and Invention. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Ouamna, H.; Kharbouche, A.; El-Haryqy, N.; Madini, Z.; Zouine, Y. Performance Analysis of a Hybrid Complex-Valued CNN-TCN Model for Automatic Modulation Recognition in Wireless Communication Systems. Appl. Syst. Innov. 2025, 8, 90. https://doi.org/10.3390/asi8040090