Article

Decentralized Learning and Model Averaging Based Automatic Modulation Classification in Drone Communication Systems

1 School of Network and Communication, Nanjing Vocational College of Information Technology, Nanjing 210023, China
2 College of Telecommunications and Information Engineering, Nanjing University of Posts and Telecommunications, Nanjing 210003, China
* Author to whom correspondence should be addressed.
Drones 2023, 7(6), 391; https://doi.org/10.3390/drones7060391
Submission received: 1 May 2023 / Revised: 5 June 2023 / Accepted: 9 June 2023 / Published: 12 June 2023
(This article belongs to the Section Drone Communications)

Abstract

Automatic modulation classification (AMC) is a promising technology for identifying the modulation mode of the received signal in drone communication systems. Recently, benefiting from the outstanding classification performance of deep learning (DL), various deep neural networks (DNNs) have been introduced into AMC methods. Most current AMC methods are based on either a local framework (LocalAMC), where there is only one device, or a centralized framework (CentAMC), where multiple local devices (LDs) upload their data to a single central server (CS). LocalAMC may not achieve ideal results due to insufficient data and finite computational power, while CentAMC carries a significant risk of privacy leakage and requires the CS to train on a massive aggregated dataset. In this paper, we propose a practical and lightweight AMC method based on decentralized learning with a residual network (ResNet) in drone communication systems. Simulation results show that the ResNet-based decentralized AMC (DecentAMC) method achieves classification performance similar to CentAMC while improving training efficiency and protecting data privacy.

1. Introduction

Advanced signal processing techniques accelerate the development of future wireless communications. Among these techniques, automatic modulation classification (AMC) is considered a promising technique for identifying signals in complicated electromagnetic environments such as drone communication systems [1,2,3,4]. With the ability to identify the modulation mode of a received or intercepted signal without prior modulation information [5], AMC has been widely considered in various scenarios. Originally, AMC was applied in military fields such as drone recognition, signal monitoring, interference recognition and electronic countermeasures [6,7,8]. With the constant consumption of spectrum resources, AMC has also been introduced into civilian scenarios, where signal transmitters do not need to send additional modulation information, thereby saving spectrum resources [9].
Classic AMC methods consist of likelihood-based (LB) methods and feature-based (FB) methods. The main LB methods can be classified as the average likelihood ratio test (ALRT) [10], the generalized likelihood ratio test (GLRT) [11] and the hybrid likelihood ratio test (HLRT) [12]; later LB methods are variations of these three. According to the Bayes minimum misclassification cost criterion, LB methods can theoretically guarantee good classification performance. However, LB methods cannot achieve ideal results in non-cooperative communication due to the lack of prior knowledge. FB methods consist of extracting features and constructing classifiers [13,14,15]. Classic hand-crafted features include parametric statistical features [16], high-order statistical features, constellation features [17], wavelet transform features [18] and cyclic spectrum features [19,20]. In FB AMC, typical machine learning (ML)-based AMC methods were proposed first. These methods are simple in principle, easy to implement and can achieve excellent classification performance, but their feature extraction requires considerable manual effort.
Compared with ML, deep learning (DL) has the advantage of automatically extracting deep features from training samples and using them to build a classifier for decision making. Consequently, DL is seen as one of the most promising approaches for classification tasks. M. Liu et al. [21] proposed a DL-based message passing algorithm for efficient resource allocation in cognitive radio networks. J. Tan et al. [22] proposed a deep reinforcement learning approach for intelligently sharing unlicensed bands between long term evolution (LTE) and wireless fidelity (WiFi) systems. H. Huang et al. [23] proposed a fast unsupervised DL-based beamforming technology. AMC methods combined with DL have also been proposed. O'Shea first introduced the convolutional neural network (CNN) into AMC [24] and demonstrated the superiority of DL in classification tasks. Y. Lin et al. [25] proposed a neural network pruning technology for AMC. S. Huang et al. [26] proposed an AMC method utilizing a compressive CNN structure. Z. Zhang et al. [27] proposed an AMC method using a CNN with feature fusion. Y. Wang et al. [28] proposed an AMC method with compressive sensing, which introduces a scaling factor for each neuron in the CNN. F. Meng et al. [29] proposed a DL-enabled approach for AMC whose computation is faster than traditional likelihood-based AMC methods. Z. Cao et al. [30] proposed a lightweight CNN for channel state information (CSI) feedback in massive MIMO. P. Qi et al. [31] proposed an AMC method based on a deep residual network with multimodal information. The above AMC methods all fall under LocalAMC or CentAMC. LocalAMC may not achieve ideal classification performance because the dataset may be insufficient and the local device (LD) has limited computational power. The training efficiency of CentAMC is low because a single central server (CS) must process the abundant data collected from local devices (LDs), and transmitting the training data consumes considerable communication overhead. In addition, CentAMC risks privacy leakage during data transmission.
From the perspective of training efficiency and data security, decentralized or distributed learning algorithms [32,33] have been applied in AMC research. Decentralized AMC (DecentAMC) learns a global model while the datasets remain distributed across the LDs instead of being gathered into a single CS; each LD only needs to transfer its local model to the CS. Wang et al. [34] proposed a DecentAMC method based on the CNN structure and showed that the training efficiency of DecentAMC is higher than that of CentAMC. Fu et al. [35] proposed a DecentAMC method using a lightweight network and model aggregation, which can be applied to drone communications. Most current DecentAMC methods are based on the CNN structure. However, as convolution layers deepen, gradient explosion and gradient vanishing problems arise. To solve these problems, He et al. [36] proposed the residual network (ResNet). Considering the advantages of decentralized learning and ResNet, we propose a decentralized learning method for AMC in drone communication systems based on ResNet, termed ResNet-based DecentAMC. The main contributions of this paper are highlighted below.
  • We propose an AMC method using decentralized learning and a residual network (ResNet) for drone communication systems. This novel framework achieves good classification performance and improves training efficiency while protecting data privacy.
  • We compare the classification accuracy of the support vector machine (SVM)-based CentAMC method and deep neural network (DNN)-based CentAMC methods on the RadioML 2016.10a dataset, and show that DL-based AMC performs better than ML-based AMC.
  • We compare the classification accuracy of different DNN-based AMC methods on the RadioML 2018.01a dataset. The proposed ResNet-based DecentAMC method performs better than existing DNN-based DecentAMC methods.
The remainder of this paper is structured as follows. We introduce the system framework of AMC and the signal model in Section 2. The traditional AMC method based on artificial features is introduced in Section 3 for comparison with DL-based AMC methods. The proposed DL-based AMC framework is introduced in Section 4. Section 5 presents and analyzes the simulation results, and the conclusion is presented in Section 6.

2. AMC Description and Signal Model

2.1. AMC Description

AMC is used to classify the modulation mode of intercepted or received signals in military or civilian scenarios such as drone communication systems. Specifically, the transmitter sends modulated signals. After passing through the wireless channel, the modulated signals are received and pre-processed for modulation recognition at the receiver. The modulation recognition results are then used to assist the demodulation of the signals. The system framework of AMC is shown in Figure 1.

2.2. Signal Model

The received band signal considered in this paper can be expressed as
$$r(k) = h e^{j(2\pi f_0 k + \phi_0)} s_m(k) + w(k), \quad k = 0, 1, \ldots, K-1.$$
Note that the specific definitions of the signal model parameters are given in Table 1.
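To make the model concrete, the following is a minimal Python sketch of the equation above; the QPSK symbol source, symbol count and channel values are illustrative assumptions rather than the paper's simulation settings.

```python
import numpy as np

def received_signal(symbols, h=1.0, f0=0.01, phi0=0.3, snr_db=10):
    """Apply channel gain h, frequency offset f0, phase offset phi0 and AWGN."""
    K = len(symbols)
    k = np.arange(K)
    clean = h * np.exp(1j * (2 * np.pi * f0 * k + phi0)) * symbols
    # Scale the complex Gaussian noise w(k) to the requested SNR.
    noise_power = np.mean(np.abs(clean) ** 2) / 10 ** (snr_db / 10)
    w = np.sqrt(noise_power / 2) * (np.random.randn(K) + 1j * np.random.randn(K))
    return clean + w

# Example: K = 128 unit-energy QPSK symbols s_m(k).
qpsk = np.exp(1j * (np.pi / 4 + np.pi / 2 * np.random.randint(0, 4, 128)))
r = received_signal(qpsk)             # r(k), k = 0, 1, ..., K-1
iq = np.stack([r.real, r.imag])       # 2 x K IQ array, the usual DNN input format
```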

3. ML-Based AMC and DL-Based AMC

3.1. Classic AMC Method Based on Artificial Features and ML

The classic FB methods include pre-processing, extracting features and constructing the classifier. Here, we consider high-order statistical features, which can be expressed as [37]
$$\hat{C}_{20} = M_{20},$$
$$\hat{C}_{21} = M_{21},$$
$$\hat{C}_{40} = M_{40} - 3M_{20}^{2},$$
$$\hat{C}_{42} = M_{42} - |M_{20}|^{2} - 2M_{21}^{2},$$
$$\hat{C}_{60} = M_{60} - 15|M_{40}||M_{20}| + 30M_{20}^{3},$$
$$\hat{C}_{63} = M_{63} - 6M_{41}M_{20} - 9M_{42}M_{21} + 18M_{20}^{2}M_{21} + 12M_{21}^{3},$$
where the number of sampling points of the received signal $r = \{r(k)\}_{k=1}^{K}$ is limited in real-world scenarios; thus, we use the estimated moments $\hat{M}_{ij} = \frac{1}{K}\sum_{k=1}^{K} r^{i-j}(k)\left(r^{*}(k)\right)^{j}$ in place of the theoretical values $M_{ij}$. As shown in Table 2, $|\hat{C}_{21}|$ is equal for 8PSK, BPSK, QAM16, QAM64 and QPSK. If $|\hat{C}_{40}|$, $|\hat{C}_{42}|$ and $|\hat{C}_{63}|$ are used as features, these five modulations can be theoretically classified. Therefore, we use $F_1 = |\hat{C}_{40}| / |\hat{C}_{21}|^{2}$, $F_2 = |\hat{C}_{42}| / |\hat{C}_{21}|^{2}$ and $F_3 = |\hat{C}_{63}| / |\hat{C}_{21}|^{3}$ as features and an SVM as the classifier for the machine-learning-based AMC.
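A minimal sketch of this feature pipeline is given below, assuming complex-valued NumPy signals; the moment estimator follows the definition above, while the SVM configuration is an illustrative assumption rather than the paper's exact setup.

```python
import numpy as np
from sklearn.svm import SVC

def moment(r, i, j):
    """Estimated mixed moment M_ij = (1/K) * sum_k r^(i-j)(k) * conj(r(k))^j."""
    return np.mean(r ** (i - j) * np.conj(r) ** j)

def cumulant_features(r):
    M20, M21 = moment(r, 2, 0), moment(r, 2, 1)
    M40, M41, M42 = moment(r, 4, 0), moment(r, 4, 1), moment(r, 4, 2)
    M63 = moment(r, 6, 3)
    C21 = M21
    C40 = M40 - 3 * M20 ** 2
    C42 = M42 - np.abs(M20) ** 2 - 2 * M21 ** 2
    C63 = M63 - 6 * M41 * M20 - 9 * M42 * M21 + 18 * M20 ** 2 * M21 + 12 * M21 ** 3
    # Normalize by powers of |C21| so the features F1, F2, F3 are scale-invariant.
    return [np.abs(C40) / np.abs(C21) ** 2,
            np.abs(C42) / np.abs(C21) ** 2,
            np.abs(C63) / np.abs(C21) ** 3]

# Hypothetical usage: X is a list of complex signals, y the modulation labels.
# features = np.array([cumulant_features(r) for r in X])
# clf = SVC(kernel="rbf").fit(features, y)
```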

3.2. Modern AMC Method Based on Deep Features and DL

DL has the advantage of automatically extracting features from samples. To verify that DL-based AMC methods generally outperform ML-based AMC methods in classification performance, we compare the classification performance of the SVM with three DNNs: a CNN, the modulation classification network (MCNet) [38] and a ResNet. The structures of these three DNNs are shown in Figure 2, Figure 3 and Figure 4, respectively.
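For illustration, a single residual block in the ResNet style of [36] can be sketched in Keras as follows; the filter count, kernel size and the (K, 2) IQ input layout are assumptions, not the exact configuration of Figure 4.

```python
from tensorflow.keras import layers

def residual_block(x, filters=32, kernel_size=3):
    """Two conv layers plus a skip connection, the basic unit of a ResNet [36]."""
    shortcut = x
    y = layers.Conv1D(filters, kernel_size, padding="same", activation="relu")(x)
    y = layers.Conv1D(filters, kernel_size, padding="same")(y)
    # Match channel dimensions with a 1x1 convolution before the addition if needed.
    if shortcut.shape[-1] != filters:
        shortcut = layers.Conv1D(filters, 1, padding="same")(shortcut)
    y = layers.Add()([y, shortcut])       # the identity skip that eases gradient flow
    return layers.Activation("relu")(y)
```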

4. Our Proposed AMC Method

4.1. System Model of DecentAMC

DecentAMC consists of four steps: broadcasting the initial comprehensive model; training, updating and uploading the local models; aggregating the local models; and downloading the global model. The specific system model of DecentAMC is shown in Figure 5.

4.1.1. Broadcasting Initial Comprehensive Model

The CS sets the initial parameters, builds the initial comprehensive model, and then broadcasts this initial model to all LDs.

4.1.2. Training, Updating and Uploading Local Model

After receiving the model and its weights, each LD performs local training and uploads the updated model weights to the CS.

4.1.3. Local Models Aggregation

After receiving all local models, the CS aggregates them to obtain a new comprehensive model. The aggregation method is the key step of DecentAMC. In this paper, we adopt model averaging (MA) [39], which can be expressed as
$$\theta_t^g = \frac{1}{N} \sum_{n=1}^{N} \theta_t^n,$$
where $t$ denotes the $t$-th epoch, $N$ is the number of LDs, $\theta_t^n$ is the local model weight of the $n$-th LD and $\theta_t^g$ denotes the aggregated global model weight.
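In implementation terms, MA is simply an element-wise average of the local weight tensors. A minimal sketch, assuming each LD's weights are given as a list of NumPy arrays (e.g., as returned by Keras's get_weights()):

```python
import numpy as np

def model_average(local_weights):
    """local_weights: list of N per-layer weight lists, one list per LD."""
    return [np.mean(layer_group, axis=0)      # average the same layer across LDs
            for layer_group in zip(*local_weights)]
```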

4.1.4. Global Model Downloading

After the local model weights are aggregated by the CS, the CS sends the new global model to all LDs. The LDs replace their original models with this new global model and then repeat steps (2), (3) and (4) until the loss converges. The proposed ResNet-based DecentAMC procedure is summarized in Algorithm 1.
Algorithm 1 Algorithm statement of the proposed ResNet-based DecentAMC method.
Input: IQ samples and corresponding labels.
Output: $\theta_{T-1}^{g}$.
The CS sets the initial parameters, builds the initial global model (i.e., ResNet) and then sends this model to all LDs.
  • N: the number of LDs.
  • B: the number of batches in each epoch.
  • T: the total number of communication rounds, i.e., 100.
  • $\theta_{t}^{n}$: the local model weight of the n-th LD at the t-th epoch.
  • $\theta_{t}^{g}$: the global model weight aggregated by the CS at the t-th epoch.
for $t = 0, 1, 2, \ldots, T-1$ do:
    All LDs download the latest global model weight $\theta_{t}^{g}$.
    for $b = 0, 1, 2, \ldots, B-1$ do:
        All LDs train and update their local model weights $\theta_{t,b}^{n}$.
    end for
    All LDs upload their local model weights $\{\theta_{t,B}^{n}\}_{n=1}^{N}$ to the CS.
    The CS updates the global model weight by model aggregation:
        $\theta_{t}^{g} = \frac{1}{N} \sum_{n=1}^{N} \theta_{t,B}^{n}$.
end for
return $\theta_{T-1}^{g}$
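A hedged Keras-style sketch of Algorithm 1 is given below; build_resnet, the per-LD datasets and the model_average helper from Section 4.1.3 are placeholders, and a single call to fit with epochs=1 stands in for the B local batch updates.

```python
def decent_amc_train(local_datasets, build_resnet, T=100):
    """local_datasets: list of (x_n, y_n) pairs, one per LD."""
    cs_model = build_resnet()                  # CS builds the initial global model
    global_w = cs_model.get_weights()
    local_models = [build_resnet() for _ in local_datasets]
    for t in range(T):                         # T communication rounds
        local_w = []
        for model, (x_n, y_n) in zip(local_models, local_datasets):
            model.set_weights(global_w)        # step (4): download the global model
            model.fit(x_n, y_n, epochs=1, verbose=0)   # step (2): local training
            local_w.append(model.get_weights())        # step (2): upload local model
        global_w = model_average(local_w)      # step (3): aggregation by MA
    cs_model.set_weights(global_w)
    return cs_model                            # the final global model
```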

5. Simulation Results and Discussions

5.1. Dataset Description

5.1.1. RadioML 2016.10a

To demonstrate the superiority of DL-based AMC methods over ML-based AMC methods and the universality of DL, we choose five modulated signals {8PSK, BPSK, QAM16, QAM64 and QPSK} from RadioML 2016.10a [40] to analyze the classification performance of the SVM-based CentAMC method and the DNN-based CentAMC methods, where the DNNs are those described in Section 3.2. For the DNNs, under each SNR, each modulated signal contributes 1000 samples, of which 60% are used as the training set and the remaining 40% as the test set; 30% of the training set is used as the validation set. It should be stressed that the SVM and the DNNs are all trained in the centralized manner in this simulation, which guarantees that classification performance is not affected by an insufficient dataset.

5.1.2. RadioML 2018.01a

We choose a richer dataset, RadioML 2018.01a [41], to analyze the classification performance of LocalAMC, CentAMC and DecentAMC using different DNNs (CNN, MCNet and ResNet). This dataset, generated with GNU Radio, includes 24 different modulated signals {16APSK, 32APSK, 64APSK, 128APSK, 16QAM, 32QAM, 64QAM, 128QAM, 256QAM, AM-SSB-WC, AM-SSB-SC, AM-DSB-WC, AM-DSB-SC, FM, GMSK, OQPSK, 4ASK, 8ASK, OOK, BPSK, QPSK, 8PSK, 16PSK and 32PSK}. Under each SNR, each modulated signal contributes 4096 samples; 75% of the dataset is used as the training set and the rest as the test set, and 30% of the training set is used as the validation set. In this simulation, we assume that there are 12 LDs and 1 CS. Detailed parameters are shown in Table 3.
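One simple way to realize the split described above is sketched below; the use of scikit-learn and stratified sampling is an assumption, not necessarily how the paper's split was implemented.

```python
from sklearn.model_selection import train_test_split

def split_dataset(X, y, seed=42):
    """75% train / 25% test, then 30% of the training portion as validation."""
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, random_state=seed, stratify=y)
    X_train, X_val, y_train, y_val = train_test_split(
        X_train, y_train, test_size=0.30, random_state=seed, stratify=y_train)
    return (X_train, y_train), (X_val, y_val), (X_test, y_test)
```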

5.2. Comparative AMC Methods

5.2.1. AMC Method Based on Local Framework

LocalAMC trains the model on a single LD. This method achieves the worst classification performance of all the deep-learning-based AMC methods because the training data are insufficient and the computational power of a single LD is extremely limited.

5.2.2. AMC Method Based on Centralized Framework

CentAMC trains the model in the CS based on the sufficient dataset collected from all LDs. This method achieves the best classification performance of all the deep-learning-based AMC methods; however, the CS comes under great computational pressure in processing the huge amount of data.

5.3. Classification Performance: ML vs. DL 

For AMC based on machine learning, the suitability of the artificial feature design directly affects identification performance. Therefore, we first analyze the designed artificial features on the RadioML 2016.10a dataset. As shown in Figure 6, the selected features can effectively distinguish the five types of digital modulation signals. It can also be seen that, when the signal-to-noise ratio (SNR) is greater than 5 dB, the separation of the five signals is stable and no longer increases with increasing SNR.
We use the classification accuracy under varying SNR to evaluate the performance of the SVM-based CentAMC and the DNN (CNN, MCNet and ResNet)-based CentAMC, expressed as
$$P_{Acc}^{i} = \frac{N_{correct}^{i}}{N_{test}^{i}} \times 100\%,$$
where $P_{Acc}^{i}$ represents the classification accuracy at SNR = $i$ dB and $i$ ranges from −10 dB to 18 dB; $N_{correct}^{i}$ denotes the number of correctly identified samples at SNR = $i$ dB and $N_{test}^{i}$ denotes the number of test samples at SNR = $i$ dB. The curves of $P_{Acc}^{i}$ for the SVM-based CentAMC method and the DNN-based CentAMC methods are shown in Figure 7. It can be observed that all three DNNs perform better than the SVM. At the same SNR, the accuracy of the DNN-based CentAMC methods is 3.49%∼9.45% higher than that of the SVM-based CentAMC method. The performance of both the machine-learning-based and the deep-learning-based AMC stabilizes when the SNR is greater than 5 dB, because the separation of the five signals is stable and no longer increases with increasing SNR.
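Equation (9) can be computed per SNR by grouping the test predictions by their SNR labels, as in the short sketch below (the array names are hypothetical):

```python
import numpy as np

def accuracy_per_snr(y_true, y_pred, snrs):
    """Return {snr_i: P_Acc^i in %} over the test set, per Equation (9)."""
    return {i: 100.0 * np.mean(y_pred[snrs == i] == y_true[snrs == i])
            for i in np.unique(snrs)}
```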
Different numbers of neural network layers or neurons can affect identification performance, so it is necessary to explain the parameter configuration of the ResNet in detail. Specifically, we evaluate the LocalAMC performance with 1, 2, 3, 4, 7 and 16 residual blocks, respectively. As shown in Figure 8, the ResNet with one residual block has the lowest recognition accuracy, and the ResNet with three residual blocks has the highest, reaching around 90%. Under high SNR conditions, the recognition accuracy of the ResNet actually decreases with 7 or 16 residual blocks. Therefore, we design a ResNet with three residual blocks.

5.4. Classification Performance: Different AMC Methods Based on CNN, MCNet and Proposed ResNet

The Correct Classification Probability under Different SNR

The evaluation criterion for classification performance is the same as Equation (9) in Section 5.3; the only significant difference is that the SNR $i$ ranges from −20 dB to 30 dB. The curves of the correct classification probability of LocalAMC, CentAMC and DecentAMC using the three DNNs (CNN, MCNet and ResNet) are shown in Figure 9. The classification accuracies $P_{Acc}^{i}$, $i \in \{0, 10, 20, 30\}$ dB, of the DNN (i.e., CNN, MCNet and ResNet)-based AMC methods are shown in Table 4.
It can be observed that, regardless of the network structure, CentAMC and DecentAMC perform significantly better than LocalAMC. For the CNN-based and MCNet-based AMC methods, the DecentAMC method has a limited performance loss compared with the CentAMC method; specifically, the performance gap is nearly 2% at high SNR. For the proposed ResNet-based DecentAMC method, the gap between DecentAMC and CentAMC is less than 0.5%, and their curves almost coincide, as shown in Figure 9.
We also compare the classification accuracy of the ResNet-based DecentAMC method with that of the CNN-based and MCNet-based DecentAMC methods. As shown in Table 5 and Figure 10, the highest accuracy of DecentAMC based on the CNN and MCNet is nearly 90%, while that based on the ResNet is over 95%. At high SNR, the accuracy of the ResNet-based DecentAMC method is 4.77%∼7.98% higher than that of the CNN-based and MCNet-based DecentAMC methods. The accuracy at 30 dB is slightly lower than that at 20 dB, by no more than 0.2%, which is consistent with the phenomenon reported in [15]. This seemingly abnormal result can be explained by the limited number of simulation runs.

5.5. Communication Overhead: ResNet-Based AMC Methods vs. Comparative AMC Methods

One drawback of CentAMC is that each LD needs to upload its sub-dataset to the CS, which consumes significant communication resources. DecentAMC instead transmits the model trained on the sub-dataset rather than the sub-dataset itself, reducing communication overhead. The model sizes of the three DNNs (i.e., CNN, MCNet and ResNet) are shown in Table 6.
In the LocalAMC method, each LD trains the model locally; there is no CS, so the communication overhead is $O_{Local} = 0$. The CentAMC method involves two communication phases: each LD uploads its data to the CS, and the CS sends the final comprehensive model back to all LDs. The communication overhead $O_{Cent}$ can be described as [15]
$$O_{Cent} = N (D_s + M_s),$$
where $N$ represents the number of LDs, $D_s$ denotes the data size of each LD and $M_s$ is the comprehensive model size. In this simulation, $D_s$ is 1,431,738 Kb. In the DecentAMC method, there are multiple communication rounds between the CS and the LDs, since each LD repeatedly uploads its local model and downloads the global model. The communication overhead $O_{Decent}$ can be written as
$$O_{Decent} = 2 N M_s T + N M_s,$$
where $T$ represents the total number of communication rounds.
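As a sanity check, the two overhead formulas above reproduce the entries of Table 7 when evaluated with N = 12, T = 100, D_s = 1,431,738 Kb and the model sizes from Table 6:

```python
# Worked check of the overhead formulas with the paper's values.
N, T, D_s = 12, 100, 1_431_738
for name, M_s in [("CNN", 678), ("MCNet", 637), ("ResNet", 649)]:
    O_cent = N * (D_s + M_s)              # each LD uploads data; CS returns the model
    O_decent = 2 * N * M_s * T + N * M_s  # per-round weight exchange + final model
    print(f"{name}: O_Cent = {O_cent:,} Kb, O_Decent = {O_decent:,} Kb "
          f"({100 * (1 - O_decent / O_cent):.2f}% lower)")
# Reproduces Table 7: e.g., ResNet gives 17,188,644 Kb vs. 1,565,388 Kb (90.89% lower).
```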
The communication overhead of LocalAMC, CentAMC and DecentAMC using the three DNNs (CNN, MCNet and ResNet) is shown in Table 7. It can be observed that, regardless of the network, the communication overhead of DecentAMC is much lower than that of CentAMC; the reductions are approximately 90.49%, 91.06% and 90.89% for CNN, MCNet and ResNet, respectively. For DecentAMC, there is no significant difference in the communication overhead across the three DNN structures because their model sizes are close. To further improve communication efficiency, we will explore how to combine elastic averaging SGD [42] with AMC in the future.

6. Conclusions

In this paper, we verified the superiority of DL over ML for drone communication systems by comparing the CNN, MCNet and ResNet with the SVM. We then used these three DNNs to compare the classification performance of three training frameworks: LocalAMC, CentAMC and DecentAMC. DecentAMC based on the CNN and MCNet has classification performance similar to CentAMC while protecting data privacy and reducing communication overhead, and the classification performance of the ResNet-based DecentAMC is quite close to that of the ResNet-based CentAMC. Last but not least, we showed that the proposed ResNet-based DecentAMC outperforms the other two DNN (CNN and MCNet)-based DecentAMC methods in drone communication systems. In future work, we shall consider deploying the algorithm in real drone communication systems to realize signal recognition.

Author Contributions

Conceptualization, M.M., Y.X. and Z.W.; methodology, Y.X.; software, X.F.; writing—original draft preparation, M.M.; writing—review and editing, G.G. and X.F.; visualization, Y.X.; funding acquisition, M.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported in part by the Jiangsu University's Blue Project Funding under grant number 2022, and by the Natural Science Foundation Project of Nanjing Vocational College of Information Technology (Special Key for Vice Senior High School) under grant number YK20210501.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The dataset of this article cannot be provided due to privacy and ethical restrictions.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Qi, P.; Zhou, X.; Ding, Y.; Zhang, Z.; Zheng, S.; Li, Z. FedBKD: Heterogenous federated learning via bidirectional knowledge distillation for modulation classification in IoT-edge system. IEEE J. Sel. Top. Signal Process. 2022, 17, 189–204. [Google Scholar] [CrossRef]
  2. Dong, B.; Liu, Y.; Gui, G.; Fu, X.; Dong, H.; Adebisi, B.; Gacanin, H.; Sari, H. A lightweight decentralized learning-based automatic modulation classification method for resource-constrained edge devices. IEEE Internet Things J. 2020, 9, 24708–24720. [Google Scholar] [CrossRef]
  3. Hou, C.B.; Liu, G.W.; Tian, Q.; Zhou, Z.C.; Hua, L.J.; Lin, Y. Multi-signal modulation classification using sliding window detection and complex convolutional network in frequency domain. IEEE Internet Things J. 2022, 9, 19438–19449. [Google Scholar] [CrossRef]
  4. Yang, J.; Gu, H.; Hu, C.; Zhang, X.; Gui, G.; Gacanin, H. Deep complex-valued convolutional neural network for drone recognition based on RF fingerprinting. Drones 2022, 6, 374. [Google Scholar] [CrossRef]
  5. Dobre, O.A.; Abdi, A.; Bar-Ness, Y.; Su, W. Survey of automatic modulation classification techniques: Classical approaches and new trends. IET Commun. 2007, 1, 137–156. [Google Scholar] [CrossRef] [Green Version]
  6. Mao, Q.; Hu, F.; Hao, Q. Deep learning for intelligent wireless networks: A comprehensive survey. IEEE Commun. Surv. Tutor. 2018, 20, 2585–2621. [Google Scholar] [CrossRef]
  7. Eldemerdash, Y.A.; Dobre, O.A.; Öner, M. Signal identification for multiple-antenna wireless systems: Achievements and challenges. IEEE Commun. Surv. Tutor. 2016, 18, 1524–1551. [Google Scholar] [CrossRef]
  8. Dobre, O.A. Signal identification for emerging intelligent radios: Classical problems and new challenges. IEEE Instrum. Meas. Mag. 2015, 18, 11–18. [Google Scholar] [CrossRef]
  9. Zhu, Z.; Nandi, A. Automatic Modulation Classification: Principles, Algorithms and Applications; John Wiley & Sons: Hoboken, NJ, USA, 2015. [Google Scholar]
  10. Kim, K.; Polydoros, A. Digital modulation classification: The BPSK versus QPSK case. In Proceedings of the IEEE Military Communications Conference (MILCOM), San Diego, CA, USA, 23–26 October 1988; pp. 431–436. [Google Scholar]
  11. Freund, Y.; Schapire, R.E. A decision-theoretic generalization of on-line learning and an application to boosting. J. Comput. Syst. Sci. 1997, 55, 119–139. [Google Scholar] [CrossRef] [Green Version]
  12. Panagiotou, P.; Anastasopoulos, A.; Polydoros, A. Likelihood ratio tests for modulation classification. In Proceedings of the IEEE Military Communications Conference (MILCOM), Los Angeles, CA, USA, 22–25 October 2000; pp. 670–674. [Google Scholar]
  13. Huang, S.; Dai, R.; Huang, J.; Yao, Y.; Gao, Y.; Ning, F.; Feng, Z. Automatic modulation classification using gated recurrent residual network. IEEE Internet Things J. 2020, 7, 7795–7807. [Google Scholar] [CrossRef]
  14. Chang, S.; Huang, S.; Zhang, R.; Feng, Z.; Liu, L. Multitask-learning-based deep neural network for automatic modulation classification. IEEE Internet Things J. 2022, 9, 2192–2206. [Google Scholar] [CrossRef]
  15. Fu, X.; Gui, G.; Wang, Y.; Gacanin, H.; Adachi, F. Automatic modulation classification based on decentralized learning and ensemble learning. IEEE Trans. Veh. Technol. 2022, 71, 7942–7946. [Google Scholar] [CrossRef]
  16. Nandi, A.K.; Azzouz, E.E. Algorithms for automatic modulation recognition of communication signals. IEEE Trans. Commun. 1998, 46, 431–436. [Google Scholar] [CrossRef]
  17. Wang, F.; Wang, Y.; Chen, X. Graphic constellations and DBN based automatic modulation classification. In Proceedings of the IEEE Vehicular Technology Conference (VTC Spring), Sydney, NSW, Australia, 4–7 June 2017; pp. 1–5. [Google Scholar]
  18. Wang, L.; Guo, S.; Jia, C. Recognition of digital modulation signals based on wavelet amplitude difference. In Proceedings of the IEEE International Conference on Software Engineering and Service Science (ICSESS), Beijing, China, 26–28 August 2016; pp. 627–630. [Google Scholar]
  19. Gardner, W. Measurement of spectral correlation. IEEE Trans. Acoust. Speech Signal Process. 1986, 34, 1111–1123. [Google Scholar] [CrossRef]
  20. Gardner, W.A.; Spooner, C.M. Cyclic spectral analysis for signal detection and modulation recognition. In Proceedings of the IEEE Military Communications Conference (MILCOM), San Diego, CA, USA, 23–26 October 1988; pp. 419–424. [Google Scholar]
  21. Liu, M.; Song, T.; Hu, J.; Yang, J.; Gui, G. Deep learning-inspired message passing algorithm for efficient resource allocation in cognitive radio networks. IEEE Trans. Veh. Technol. 2019, 68, 641–653. [Google Scholar] [CrossRef]
  22. Tan, J.; Zhang, L.; Liang, Y.-C.; Niyato, D. Intelligent sharing for LTE and WiFi systems in unlicensed bands: A deep reinforcement learning approach. IEEE Trans. Commun. 2020, 68, 2793–2808. [Google Scholar] [CrossRef]
  23. Huang, H.; Peng, Y.; Yang, J.; Xia, W.; Gui, G. Fast beamforming design via deep learning. IEEE Trans. Veh. Technol. 2020, 69, 1065–1069. [Google Scholar] [CrossRef]
  24. O’Shea, T.; Hoydis, J. An introduction to deep learning for the physical layer. IEEE Trans. Cogn. Commun. Netw. 2017, 3, 563–575. [Google Scholar] [CrossRef] [Green Version]
  25. Lin, Y.; Tu, Y.; Dou, Z. An improved neural network pruning technology for automatic modulation classification in edge devices. IEEE Trans. Veh. Technol. 2020, 69, 5703–5706. [Google Scholar] [CrossRef]
  26. Huang, S.; Chai, L.; Li, Z.; Zhang, D.; Yao, Y.; Zhang, Y.; Feng, Z. Automatic Modulation Classification Using Compressive Convolutional Neural Network. IEEE Access 2019, 7, 79636–79643. [Google Scholar] [CrossRef]
  27. Zhang, Z.; Wang, C.; Gan, C.; Sun, S.; Wang, M. Automatic modulation classification using convolutional neural network with features fusion of SPWVD and BJD. IEEE Trans. Signal Inf. Process. Netw. 2019, 5, 469–478. [Google Scholar] [CrossRef]
  28. Wang, Y.; Yang, J.; Liu, M.; Gui, G. LightAMC: Lightweight automatic modulation classification via deep learning and compressive sensing. IEEE Trans. Veh. Technol. 2020, 69, 3491–3495. [Google Scholar] [CrossRef]
  29. Meng, F.; Chen, P.; Wu, L.; Wang, X. Automatic modulation classification: A deep learning enabled approach. IEEE Trans. Veh. Technol. 2018, 67, 10760–10772. [Google Scholar] [CrossRef]
  30. Cao, Z.; Shih, W.-T.; Guo, J.; Wen, C.-K.; Jin, S. Lightweight convolutional neural networks for CSI feedback in massive MIMO. IEEE Commun. Lett. 2021, 25, 2624–2628. [Google Scholar] [CrossRef]
  31. Qi, P.; Zhou, X.; Zheng, S.; Li, Z. Automatic modulation classification based on deep residual networks with multimodal information. IEEE Trans. Cogn. Commun. Netw. 2021, 7, 21–33. [Google Scholar] [CrossRef]
  32. Bal, H.E.; Pal, A. Parallel and distributed machine learning algorithms for scalable big data analytics. Future Gener. Comput. Syst. 2020, 108, 1159–1161. [Google Scholar] [CrossRef]
  33. Yang, Z.; Bajwa, W.U. ByRDiE: Byzantine-resilient distributed coordinate descent for decentralized learning. IEEE Trans. Signal Inf. Process. Netw. 2019, 5, 611–627. [Google Scholar] [CrossRef] [Green Version]
  34. Wang, Y.; Guo, L.; Zhao, Y.; Yang, J.; Adebisi, B.; Gacanin, H.; Gui, G. Distributed learning for automatic modulation classification in edge devices. IEEE Wirel. Commun. Lett. 2020, 9, 2177–2181. [Google Scholar] [CrossRef]
  35. Fu, X.; Gui, G.; Wang, Y.; Ohtsuki, T.; Adebisi, B.; Gacanin, H.; Adachi, F. Lightweight network and model aggregation for automatic modulation classification in wireless communications. In Proceedings of the IEEE Wireless Communications and Networking Conference (WCNC), Nanjing, China, 29 March–1 April 2021; pp. 1–6. [Google Scholar]
  36. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
  37. Swami, A.; Sadler, B.M. Hierarchical digital modulation classification using cumulants. IEEE Trans. Commun. 2000, 48, 416–429. [Google Scholar] [CrossRef]
  38. Huynh-The, T.; Hua, C.; Pham, Q.; Kim, D. MCNet: An efficient CNN architecture for robust automatic modulation classification. IEEE Commun. Lett. 2020, 24, 811–815. [Google Scholar] [CrossRef]
  39. McDonald, R.; Hall, K.; Mann, G. Distributed training strategies for the structured perceptron. In Proceedings of the Annual Conference of the North American Chapter of the Association for Computational Linguistics, Los Angeles, CA, USA, 2 June 2010; pp. 456–464. [Google Scholar]
  40. Oshea, T.J.; West, N. Radio machine learning dataset generation with GNU radio. In Proceedings of the GNU Radio Conference, Boulder, CO, USA, 6 September 2016; pp. 1–6. [Google Scholar]
  41. O’shea, T.J.; Roy, T.; Clancy, T.C. Over-the-air deep learning based radio signal classification. IEEE J. Sel. Top. Signal Process. 2018, 12, 168–179. [Google Scholar] [CrossRef] [Green Version]
  42. Zhang, S.; Choromanska, A.E.; LeCun, Y. Deep learning with elastic averaging SGD. Adv. Neural Inf. Process. Syst. 2015, 28, 1–9. [Google Scholar]
Figure 1. A basic system framework of AMC.
Figure 2. Structure of CNN, where the values of N and M are 1 and 5 for RadioML 2016.10a, respectively, and 6 and 24 for RadioML 2018.01a, respectively.
Figure 3. Structure of MCNet.
Figure 4. Structure of ResNet.
Figure 5. System model of the proposed DecentAMC method.
Figure 6. Analysis of the means of three features of five digital modulation signals under different SNRs.
Figure 7. Classification accuracy $P_{Acc}^{i}$ of the SVM-based CentAMC method and the DNN (CNN, MCNet and ResNet)-based CentAMC methods.
Figure 8. Classification accuracy of LocalAMC based on ResNet with different numbers of residual blocks on RadioML 2018.01a.
Figure 9. Classification accuracy $P_{Acc}^{i}$ of DNN-based AMC methods. (a) CNN-based AMC methods; (b) MCNet-based AMC methods; (c) ResNet-based AMC methods.
Figure 10. Classification accuracy $P_{Acc}^{i}$ of the DecentAMC method based on CNN, MCNet and ResNet.
Table 1. Specific definition of the signal model.

Parameter | Definition
$r(k)$ | The received band signal
$h$ | Channel coefficient
$f_0$ | Frequency offset
$\phi_0$ | Carrier phase offset
$s_m(k)$ | The k-th symbol generated
$m$ (of $s_m(k)$) | The m-th modulation scheme
$w(k)$ | Additive Gaussian noise
$K$ | The number of signal symbols
Table 2. Theoretical values of high-order cumulants for each modulation signal.

Modulation | $|\hat{C}_{20}|$ | $|\hat{C}_{21}|$ | $|\hat{C}_{40}|$ | $|\hat{C}_{41}|$ | $|\hat{C}_{42}|$ | $|\hat{C}_{60}|$ | $|\hat{C}_{63}|$
BPSK | $E$ | $E$ | $2E^2$ | $2E^2$ | $2E^2$ | $16E^3$ | $16E^3$
QPSK | 0 | $E$ | $E^2$ | 0 | $E^2$ | 0 | $4E^3$
8PSK | 0 | $E$ | 0 | 0 | $E^2$ | 0 | $4E^3$
16QAM | 0 | $E$ | $0.68E^2$ | 0 | $0.68E^2$ | 0 | $2.08E^3$
64QAM | 0 | $E$ | $0.62E^2$ | 0 | $0.62E^2$ | 0 | $1.80E^3$
Table 3. Simulation parameters for the DNN-based AMC methods.

Parameter | Value
Device | GeForce GTX 2080Ti
Dataset | DeepSig RadioML (version 2018.01A)
Batch size of training | 50
Batch size of testing | 20
Epoch | 100
Learning rate η | 0.001
Environment | Keras 2.2.4
Optimizer | Adam
Table 4. Classification accuracy $P_{Acc}^{i}$ of DNN-based AMC methods.

(a) Classification accuracy $P_{Acc}^{i}$ of CNN-based AMC methods
Method (CNN-based) | $P_{Acc}^{0}$ | $P_{Acc}^{10}$ | $P_{Acc}^{20}$ | $P_{Acc}^{30}$
LocalAMC | 44.79% | 72.26% | 73.55% | 73.72%
CentAMC | 45.91% | 89.96% | 92.41% | 92.49%
DecentAMC | 51.36% | 89.00% | 90.60% | 90.47%

(b) Classification accuracy $P_{Acc}^{i}$ of MCNet-based AMC methods
Method (MCNet-based) | $P_{Acc}^{0}$ | $P_{Acc}^{10}$ | $P_{Acc}^{20}$ | $P_{Acc}^{30}$
LocalAMC | 47.62% | 77.56% | 80.31% | 80.06%
CentAMC | 48.17% | 88.45% | 91.74% | 91.46%
DecentAMC | 50.04% | 85.82% | 89.23% | 89.13%

(c) Classification accuracy $P_{Acc}^{i}$ of ResNet-based AMC methods
Method (ResNet-based) | $P_{Acc}^{0}$ | $P_{Acc}^{10}$ | $P_{Acc}^{20}$ | $P_{Acc}^{30}$
LocalAMC | 48.65% | 86.38% | 89.72% | 89.45%
CentAMC | 53.63% | 93.92% | 95.54% | 95.41%
DecentAMC (proposed) | 53.69% | 93.80% | 95.39% | 95.24%
Table 5. Classification accuracy $P_{Acc}^{i}$ of DecentAMC for CNN, MCNet and ResNet.

DecentAMC | $P_{Acc}^{0}$ | $P_{Acc}^{10}$ | $P_{Acc}^{20}$ | $P_{Acc}^{30}$
CNN | 51.36% | 89.00% | 90.60% | 90.47%
MCNet | 50.04% | 85.82% | 89.23% | 89.13%
ResNet (proposed) | 53.69% (3.65%↑, 2.09%↑) | 93.80% (7.98%↑, 4.80%↑) | 95.39% (6.16%↑, 4.79%↑) | 95.24% (6.11%↑, 4.77%↑)
Note: in each pair of parentheses, the first value is the improvement of ResNet over MCNet and the second is the improvement of ResNet over CNN.
Table 6. The model sizes of the three DNNs (i.e., CNN, MCNet and ResNet).

Network Structure | $M_s$
CNN | 678 Kb
MCNet | 637 Kb
ResNet | 649 Kb
Table 7. The communication overhead of LocalAMC, CentAMC and DecentAMC based on the three DNNs (CNN, MCNet and ResNet).

Network Structure | $O_{Local}$ | $O_{Cent}$ | $O_{Decent}$
CNN | 0 Kb | 17,188,992 Kb | 1,635,336 Kb (90.49%↓)
MCNet | 0 Kb | 17,188,500 Kb | 1,536,444 Kb (91.06%↓)
ResNet | 0 Kb | 17,188,644 Kb | 1,565,388 Kb (90.89%↓)
Note: the percentage in parentheses indicates the communication overhead reduction of DecentAMC relative to CentAMC.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
