Article

Automatic Modulation Classification with Neural Networks via Knowledge Distillation

College of Intelligent Science, National University of Defense Technology, Changsha 410073, China
* Author to whom correspondence should be addressed.
Academic Editor: Sung Jin Yoo
Electronics 2022, 11(19), 3018; https://doi.org/10.3390/electronics11193018
Received: 27 August 2022 / Revised: 12 September 2022 / Accepted: 18 September 2022 / Published: 22 September 2022
(This article belongs to the Special Issue Deep Learning for Computer Vision: Algorithms, Theory and Application)
Deep learning is widely used for automatic modulation recognition, and the demand for high classification accuracy has driven the use of ever deeper networks. However, training and inference with such networks are computationally expensive, which limits their utility on mobile devices with constrained memory or weak computational power. A trade-off between network depth and classification accuracy must therefore be considered. To address this issue, we used knowledge distillation in this study to improve the classification accuracy of small network models. First, we trained Inception–ResNet as a teacher network, which has a size of 311.77 MB and a peak classification accuracy of 93.09%. We then used this method to train convolutional neural network 3 (CNN3), raising its peak classification accuracy from 79.81% to 89.36% with a network size of only 0.37 MB. Similarly, we trained a mini Inception–ResNet, raising its peak accuracy from 84.18% to 93.59% with a network size of 39.69 MB. Comparing all peak classification accuracies, we found that knowledge distillation improved the small networks and that a student network can even outperform its teacher. With knowledge distillation, a small network model can achieve the classification accuracy of a large one. In practice, choosing an appropriate student network based on the constraints of the deployment environment while applying knowledge distillation (KD) is a way to meet practical needs.
Keywords: modulation recognition; knowledge distillation; teacher-student framework; convolutional neural network
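The abstract describes training a compact student network against the soft outputs of the Inception–ResNet teacher. The paper's exact loss weighting and temperature are not given here, so the sketch below shows the standard Hinton-style distillation loss as a minimal NumPy example; the function names, temperature `T=4.0`, and weight `alpha=0.7` are illustrative assumptions, not values from the article.

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-scaled softmax; higher T yields softer class probabilities."""
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, label, T=4.0, alpha=0.7):
    """Hinton-style KD loss for a single example:
    alpha * T^2 * KL(teacher soft targets || student soft targets)
    + (1 - alpha) * cross-entropy with the hard label.
    The T^2 factor keeps gradient magnitudes comparable across temperatures."""
    p_teacher = softmax(teacher_logits, T)
    p_student = softmax(student_logits, T)
    kl = np.sum(p_teacher * (np.log(p_teacher + 1e-12) - np.log(p_student + 1e-12)))
    hard_ce = -np.log(softmax(student_logits)[label] + 1e-12)
    return alpha * (T ** 2) * kl + (1 - alpha) * hard_ce

# Illustrative usage with dummy 3-class logits for one modulation example.
teacher = np.array([5.0, 1.0, 0.5])
student = np.array([2.0, 1.5, 0.2])
loss = distillation_loss(student, teacher, label=0)
```

In practice the KL term is what lets the small student benefit from the teacher's "dark knowledge" (relative probabilities among wrong classes), which plain hard-label training discards.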
MDPI and ACS Style

Wang, S.; Liu, C. Automatic Modulation Classification with Neural Networks via Knowledge Distillation. Electronics 2022, 11, 3018. https://doi.org/10.3390/electronics11193018
