On Attacking Future 5G Networks with Adversarial Examples: Survey
Abstract
1. Introduction
2. Big Data in 5G
3. Algorithms
3.1. Supervised Learning
Category | Algorithm | Mobile Network Applications
---|---|---
Supervised | Fully-connected neural network | Channel estimation [28,57], symbol detection [58], automatic modulation classification [59], channel coding [60,61,62], beamforming [34,63,64,65], activation control [66,67], power allocation [37,68,69,70,71], scheduling [72], routing [73], security [41,43,47,74]
 | CNN | Channel estimation [57,75,76,77], automatic modulation classification [31,78], channel coding [61,62,79,80,81], beamforming [34], power allocation [82], routing [83], localization [84]
 | RNN (GRU, LSTM and ESN) | Automatic modulation classification [32], channel coding [61], power allocation [85], scheduling [39], routing [83], localization [84], security [45], slicing [86], caching [87]
 | Attention (transformer) | Channel coding [62,79]
 | Residual neural network | Beamforming [34]
 | k-NN | Security [74]
 | SVM (and SVR) | Beamforming [88], security [47]
 | Linear regression | Beamforming [88], security [74]
 | Decision trees (including random forest and gradient-boosted trees) | Beamforming [88], security [74]
 | Naive Bayes | Security [74]
Unsupervised | Autoencoder | Channel estimation [77], channel coding [79], scheduling [39], caching [89], security [43]
 | GAN | Channel estimation [76], beamforming [90]
 | Denoising CNN (DIP, LDAMP and DnCNN) | Channel estimation [91,92]
Reinforcement | Q-learning | Channel coding [60], beamforming [90], power allocation [93], caching [94]
 | DQN | Activation control [67], power allocation [70,82,85], scheduling [95], routing [83], slicing [96,97,98]
 | DDPG | Activation control [66], scheduling [72], routing [99]
 | SARSA | Activation control [36]
 | Policy gradients | Channel coding [62], scheduling [39,95]
 | TRPO | Routing [73]
 | PPO | Slicing [100]
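Most of the supervised models in the table are differentiable classifiers, which is precisely the property that the gradient-based attacks surveyed in Section 5 exploit. As a minimal, self-contained sketch (toy synthetic data, not an actual 5G model; all names here are illustrative), the following trains a logistic classifier and then crafts a fast gradient sign method (FGSM) perturbation against it:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary data: two Gaussian blobs, labels in {-1, +1}
X = np.vstack([rng.normal(-1.0, 0.5, (50, 2)), rng.normal(1.0, 0.5, (50, 2))])
y = np.concatenate([-np.ones(50), np.ones(50)])

def logistic_loss(w, x, y_true):
    # Logistic loss for a single example with label in {-1, +1}
    return np.log1p(np.exp(-y_true * (x @ w)))

# Train by plain gradient descent on the mean logistic loss
w = np.zeros(2)
for _ in range(500):
    s = -y / (1.0 + np.exp(y * (X @ w)))        # dL/d(margin) per sample
    w -= 0.1 * (X * s[:, None]).mean(axis=0)

def fgsm(w, x, y_true, eps=0.3):
    # FGSM: one step of size eps in the sign of the input-gradient of the loss
    g = -y_true * w / (1.0 + np.exp(y_true * (x @ w)))
    return x + eps * np.sign(g)

x, y0 = X[0], y[0]
x_adv = fgsm(w, x, y0)
```

For a linear model the FGSM step provably increases the loss (the margin shrinks by eps times the L1 norm of the weights), which is why even this one-line attack reliably degrades simple classifiers; deeper models in the table are attacked the same way, with the gradient obtained by backpropagation.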
3.2. Unsupervised Learning
3.3. Reinforcement Learning
4. AI/ML in 5G
5. Attacks
5.1. Attacks against k-NN
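The k-NN decision rule is purely geometric, so an evasion attack can dispense with gradients altogether: slide the input toward the closest training point of another class until the predicted label flips. The sketch below uses synthetic data; `evade_1nn` is a hypothetical helper that crudely simplifies the attacks cited in this section to the 1-NN case:

```python
import numpy as np

def knn_predict(X_train, y_train, x):
    """1-NN label: class of the closest training point (Euclidean distance)."""
    return y_train[np.argmin(np.linalg.norm(X_train - x, axis=1))]

def evade_1nn(X_train, y_train, x, steps=100):
    """Move x linearly toward the nearest differently-labelled training
    point and return the first interpolate whose 1-NN prediction flips."""
    orig = knn_predict(X_train, y_train, x)
    others = X_train[y_train != orig]
    target = others[np.argmin(np.linalg.norm(others - x, axis=1))]
    for alpha in np.linspace(0.0, 1.0, steps):
        x_adv = (1.0 - alpha) * x + alpha * target
        if knn_predict(X_train, y_train, x_adv) != orig:
            return x_adv
    return target  # the endpoint itself is classified as the other class
```

The stronger attacks surveyed here minimise the perturbation norm directly (e.g. via higher-order Voronoi diagrams or primal-dual formulations) rather than relying on the coarse line search used in this sketch.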
5.2. Attacks against Tree Ensembles
5.3. White-Box Attacks against Neural Networks
5.4. Score-Based Attacks against Neural Networks
5.5. Decision-Based Attacks against Neural Networks
6. Fuzzing Frameworks
7. Adversarial ML in 5G
8. Discussion
9. Conclusions
Author Contributions
Funding
Conflicts of Interest
References
- ITU-R M.2410-0; Minimum Requirements Related to Technical Performance for IMT-2020 Radio Interface(s). ITU: Geneva, Switzerland, 2017.
- Marcus, M.J. 5G and “IMT for 2020 and beyond” [Spectrum Policy and Regulatory Issues]. IEEE Wirel. Commun. 2015, 22, 2–3.
- Elijah, O.; Leow, C.Y.; Rahman, T.A.; Nunoo, S.; Iliya, S.Z. A Comprehensive Survey of Pilot Contamination in Massive MIMO—5G System. IEEE Commun. Surv. Tutor. 2016, 18, 905–923.
- Song, M.; Shan, H.; Yang, H.H.; Quek, T.Q.S. Joint Optimization of Fractional Frequency Reuse and Cell Clustering for Dynamic TDD Small Cell Networks. IEEE Trans. Wirel. Commun. 2021, 21, 398–412.
- 3GPP TR 37.817; Study on Enhancement for Data Collection for NR and EN-DC. 3GPP: Sophia Antipolis, France, 2021.
- Jahangiri, A.; Rakha, H.A. Applying Machine Learning Techniques to Transportation Mode Recognition Using Mobile Phone Sensor Data. IEEE Trans. Intell. Transp. Syst. 2015, 16, 2406–2417.
- Li, H.; Ota, K.; Dong, M. Learning IoT in Edge: Deep Learning for the Internet of Things with Edge Computing. IEEE Netw. 2018, 32, 96–101.
- Haidine, A.; Salmam, F.Z.; Aqqal, A.; Dahbi, A. Artificial intelligence and machine learning in 5G and beyond: A survey and perspectives. In Moving Broadband Mobile Communications Forward: Intelligent Technologies for 5G and Beyond; IntechOpen: London, UK, 2021; p. 47.
- Sagduyu, Y.E.; Erpek, T.; Shi, Y. Adversarial Machine Learning for 5G Communications Security. arXiv 2021, arXiv:2101.02656.
- GSMA. FS.30—Security Manual; GSMA: London, UK, 2021.
- GSMA. FS.31—Baseline Security Controls; GSMA: London, UK, 2020.
- GSMA. IR.77—Inter-Operator IP Backbone Security Requirements for Service and Inter-Operator IP Backbone Providers; GSMA: London, UK, 2019.
- GSMA. FF.21—Fraud Manual; GSMA: London, UK, 2021.
- Steinhardt, J.; Koh, P.W.; Liang, P. Certified Defenses for Data Poisoning Attacks. arXiv 2017, arXiv:1706.03691.
- Gu, T.; Liu, K.; Dolan-Gavitt, B.; Garg, S. BadNets: Evaluating Backdooring Attacks on Deep Neural Networks. IEEE Access 2019, 7, 47230–47244.
- Schwarzmann, S.; Marquezan, C.C.; Trivisonno, R.; Nakajima, S.; Zinner, T. Accuracy vs. Cost Trade-off for Machine Learning Based QoE Estimation in 5G Networks. In Proceedings of the ICC 2020—2020 IEEE International Conference on Communications (ICC), Dublin, Ireland, 7–11 June 2020; pp. 1–6.
- Masri, A.; Veijalainen, T.; Martikainen, H.; Mwanje, S.; Ali-Tolppa, J.; Kajó, M. Machine-Learning-Based Predictive Handover. In Proceedings of the 2021 IFIP/IEEE International Symposium on Integrated Network Management (IM), Bordeaux, France, 17–21 March 2021; pp. 648–652.
- Minovski, D.; Ogren, N.; Ahlund, C.; Mitra, K. Throughput Prediction using Machine Learning in LTE and 5G Networks. IEEE Trans. Mob. Comput. 2021.
- Szegedy, C.; Zaremba, W.; Sutskever, I.; Bruna, J.; Erhan, D.; Goodfellow, I.; Fergus, R. Intriguing properties of neural networks. arXiv 2014, arXiv:1312.6199.
- Papernot, N.; McDaniel, P.; Jha, S.; Fredrikson, M.; Celik, Z.B.; Swami, A. The Limitations of Deep Learning in Adversarial Settings. arXiv 2015, arXiv:1511.07528.
- 3GPP TS 33.501; Security Architecture and Procedures for 5G System. 3GPP: Sophia Antipolis, France, 2020.
- Zhang, C.; Patras, P.; Haddadi, H. Deep Learning in Mobile and Wireless Networking: A Survey. IEEE Commun. Surv. Tutor. 2019, 21, 2224–2287.
- Cheng, X.; Fang, L.; Hong, X.; Yang, L. Exploiting Mobile Big Data: Sources, Features, and Applications. IEEE Netw. 2017, 31, 72–79.
- Mehlführer, C.; Ikuno, J.C.; Simko, M.; Schwarz, S.; Wrulich, M.; Rupp, M. The Vienna LTE simulators—Enabling reproducibility in wireless communications research. EURASIP J. Adv. Signal Process. 2011, 2011, 29.
- Palattella, M.R.; Watteyne, T.; Wang, Q.; Muraoka, K.; Accettura, N.; Dujovne, D.; Grieco, L.A.; Engel, T. On-the-Fly Bandwidth Reservation for 6TiSCH Wireless Industrial Networks. IEEE Sensors J. 2016, 16, 550–560.
- Varga, A.; Hornig, R. An overview of the OMNeT++ simulation environment. In Proceedings of the 1st International Conference on Simulation Tools and Techniques for Communications, Networks and Systems & Workshops, Marseille, France, 3–7 March 2008; pp. 1–10.
- Alkhateeb, A. DeepMIMO: A Generic Deep Learning Dataset for Millimeter Wave and Massive MIMO Applications. arXiv 2019, arXiv:1902.06435.
- Alrabeiah, M.; Alkhateeb, A. Deep learning for TDD and FDD massive MIMO: Mapping channels in space and frequency. In Proceedings of the 2019 53rd Asilomar Conference on Signals, Systems, and Computers, Pacific Grove, CA, USA, 3–6 November 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 1465–1470.
- O’Shea, T.; West, N. Radio Machine Learning Dataset Generation with GNU Radio. In Proceedings of the GNU Radio Conference, Boulder, CO, USA, 12–16 September 2016.
- O’Shea, T.J.; Corgan, J.; Clancy, T.C. Convolutional radio modulation recognition networks. In Proceedings of the International Conference on Engineering Applications of Neural Networks, Aberdeen, UK, 2–5 September 2016; Springer: Berlin/Heidelberg, Germany, 2016; pp. 213–226.
- Meng, F.; Chen, P.; Wu, L.; Wang, X. Automatic Modulation Classification: A Deep Learning Enabled Approach. IEEE Trans. Veh. Technol. 2018, 67, 10760–10772.
- Rajendran, S.; Meert, W.; Giustiniano, D.; Lenders, V.; Pollin, S. Deep Learning Models for Wireless Signal Classification with Distributed Low-Cost Spectrum Sensors. IEEE Trans. Cogn. Commun. Netw. 2018, 4, 433–445.
- Klautau, A.; Batista, P.; González-Prelcic, N.; Wang, Y.; Heath, R.W. 5G MIMO Data for Machine Learning: Application to Beam-Selection Using Deep Learning. In Proceedings of the 2018 Information Theory and Applications Workshop (ITA), San Diego, CA, USA, 11–16 February 2018; pp. 1–9.
- Ruseckas, J.; Molis, G.; Bogucka, H. MIMO beam selection in 5G using neural networks. Int. J. Electron. Telecommun. 2021, 67, 693–698.
- Twomey, N.; Diethe, T.; Kull, M.; Song, H.; Camplani, M.; Hannuna, S.; Fafoutis, X.; Zhu, N.; Woznowski, P.; Flach, P.; et al. The SPHERE Challenge: Activity Recognition with Multimodal Sensor Data. arXiv 2016, arXiv:1603.00797.
- Kozlowski, M.; McConville, R.; Santos-Rodriguez, R.; Piechocki, R. Energy Efficiency in Reinforcement Learning for Wireless Sensor Networks. arXiv 2018, arXiv:1812.02538.
- Sanguinetti, L.; Zappone, A.; Debbah, M. Deep Learning Power Allocation in Massive MIMO. arXiv 2019, arXiv:1812.03640.
- Balazinska, M.; Castro, P. CRAWDAD Dataset Ibm/Watson (v. 2003-02-19). 2003. Available online: https://crawdad.org/ibm/watson/20030219 (accessed on 11 April 2022).
- Challita, U.; Dong, L.; Saad, W. Proactive Resource Management for LTE in Unlicensed Spectrum: A Deep Learning Perspective. IEEE Trans. Wirel. Commun. 2018, 17, 4674–4689.
- García, S.; Grill, M.; Stiborek, J.; Zunino, A. An empirical comparison of botnet detection methods. Comput. Secur. 2014, 45, 100–123.
- Fernández Maimó, L.; Perales Gómez, A.L.; Garcia Clemente, F.J.; Gil Pérez, M.; Martínez Pérez, G. A Self-Adaptive Deep Learning-Based System for Anomaly Detection in 5G Networks. IEEE Access 2018, 6, 7700–7712.
- Kolias, C.; Kambourakis, G.; Stavrou, A.; Gritzalis, S. Intrusion detection in 802.11 networks: Empirical evaluation of threats and a public dataset. IEEE Commun. Surv. Tutor. 2016, 18, 184–208.
- Rezvy, S.; Luo, Y.; Petridis, M.; Lasebae, A.; Zebin, T. An efficient deep learning model for intrusion classification and prediction in 5G and IoT networks. In Proceedings of the 2019 53rd Annual Conference on Information Sciences and Systems (CISS), Baltimore, MD, USA, 20–22 March 2019; pp. 1–6.
- Sharafaldin, I.; Lashkari, A.H.; Ghorbani, A.A. Toward generating a new intrusion detection dataset and intrusion traffic characterization. ICISSP 2018, 1, 108–116.
- Lam, J.; Abbas, R. Machine Learning based Anomaly Detection for 5G Networks. arXiv 2020, arXiv:2003.03474.
- Almomani, I.; Al-Kasasbeh, B.; Al-Akhras, M. WSN-DS: A dataset for intrusion detection systems in wireless sensor networks. J. Sensors 2016, 2016, 4731953.
- Hachimi, M.; Kaddoum, G.; Gagnon, G.; Illy, P. Multi-stage Jamming Attacks Detection using Deep Learning Combined with Kernelized Support Vector Machine in 5G Cloud Radio Access Networks. In Proceedings of the 2020 International Symposium on Networks, Computers and Communications (ISNCC), Montreal, QC, Canada, 20–22 October 2020; pp. 1–5.
- Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet Classification with Deep Convolutional Neural Networks. In Advances in Neural Information Processing Systems, Proceedings of the 26th Annual Conference on Neural Information Processing Systems, Lake Tahoe, Nevada, 3–6 December 2012; Pereira, F., Burges, C.J.C., Bottou, L., Weinberger, K.Q., Eds.; Curran Associates, Inc.: Red Hook, NY, USA, 2012; Volume 25.
- Pascanu, R.; Mikolov, T.; Bengio, Y. On the difficulty of training recurrent neural networks. In Proceedings of the International Conference on Machine Learning, Atlanta, GA, USA, 16–21 June 2013; pp. 1310–1318.
- Hochreiter, S.; Schmidhuber, J. Long short-term memory. Neural Comput. 1997, 9, 1735–1780.
- Cho, K.; van Merrienboer, B.; Bahdanau, D.; Bengio, Y. On the Properties of Neural Machine Translation: Encoder-Decoder Approaches. arXiv 2014, arXiv:1409.1259.
- Jaeger, H.; Haas, H. Harnessing nonlinearity: Predicting chaotic systems and saving energy in wireless communication. Science 2004, 304, 78–80.
- Mnih, V.; Kavukcuoglu, K.; Silver, D.; Graves, A.; Antonoglou, I.; Wierstra, D.; Riedmiller, M. Playing Atari with Deep Reinforcement Learning. arXiv 2013, arXiv:1312.5602.
- Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, L.; Polosukhin, I. Attention Is All You Need. arXiv 2017, arXiv:1706.03762.
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. arXiv 2015, arXiv:1512.03385.
- Friedman, J.H. Stochastic gradient boosting. Comput. Stat. Data Anal. 2002, 38, 367–378.
- Gao, S.; Dong, P.; Pan, Z.; Li, G.Y. Deep Learning Based Channel Estimation for Massive MIMO with Mixed-Resolution ADCs. IEEE Commun. Lett. 2019, 23, 1989–1993.
- Ye, H.; Li, G.Y.; Juang, B.H. Power of Deep Learning for Channel Estimation and Signal Detection in OFDM Systems. IEEE Wirel. Commun. Lett. 2018, 7, 114–117.
- Jagannath, J.; Polosky, N.; O’Connor, D.; Theagarajan, L.N.; Sheaffer, B.; Foulke, S.; Varshney, P.K. Artificial Neural Network Based Automatic Modulation Classification over a Software Defined Radio Testbed. In Proceedings of the 2018 IEEE International Conference on Communications (ICC), Kansas City, MO, USA, 20–24 May 2018; pp. 1–6.
- Carpi, F.; Hager, C.; Martalo, M.; Raheli, R.; Pfister, H.D. Reinforcement Learning for Channel Coding: Learned Bit-Flipping Decoding. In Proceedings of the 2019 57th Annual Allerton Conference on Communication, Control, and Computing (Allerton), Monticello, IL, USA, 24–27 September 2019.
- Lyu, W.; Zhang, Z.; Jiao, C.; Qin, K.; Zhang, H. Performance Evaluation of Channel Decoding with Deep Neural Networks. In Proceedings of the 2018 IEEE International Conference on Communications (ICC), Kansas City, MO, USA, 20–24 May 2018; pp. 1–6.
- Goutay, M.; Aoudia, F.A.; Hoydis, J. Deep Reinforcement Learning Autoencoder with Noisy Feedback. arXiv 2019, arXiv:1810.05419.
- Alkhateeb, A.; Alex, S.; Varkey, P.; Li, Y.; Qu, Q.; Tujkovic, D. Deep Learning Coordinated Beamforming for Highly-Mobile Millimeter Wave Systems. IEEE Access 2018, 6, 37328–37348.
- Cousik, T.S.; Shah, V.K.; Erpek, T.; Sagduyu, Y.E.; Reed, J.H. Deep Learning for Fast and Reliable Initial Access in AI-Driven 6G mmWave Networks. arXiv 2021, arXiv:2101.01847.
- Qi, C.; Wang, Y.; Li, G.Y. Deep Learning for Beam Training in Millimeter Wave Massive MIMO Systems. IEEE Trans. Wirel. Commun. 2020.
- Ye, J.; Zhang, Y.J.A. DRAG: Deep Reinforcement Learning Based Base Station Activation in Heterogeneous Networks. arXiv 2018, arXiv:1809.02159.
- Liu, J.; Krishnamachari, B.; Zhou, S.; Niu, Z. DeepNap: Data-Driven Base Station Sleeping Operations Through Deep Reinforcement Learning. IEEE Internet Things J. 2018, 5, 4273–4282.
- Sun, H.; Chen, X.; Shi, Q.; Hong, M.; Fu, X.; Sidiropoulos, N.D. Learning to Optimize: Training Deep Neural Networks for Interference Management. IEEE Trans. Signal Process. 2018, 66, 5438–5453.
- Matthiesen, B.; Zappone, A.; Jorswieck, E.A.; Debbah, M. Deep learning for optimal energy-efficient power control in wireless interference networks. arXiv 2018, arXiv:1812.06920.
- Nasir, Y.S.; Guo, D. Multi-Agent Deep Reinforcement Learning for Dynamic Power Allocation in Wireless Networks. IEEE J. Sel. Areas Commun. 2019, 37, 2239–2250.
- Kim, B.; Shi, Y.; Sagduyu, Y.E.; Erpek, T.; Ulukus, S. Adversarial Attacks against Deep Learning Based Power Control in Wireless Communications. arXiv 2021, arXiv:2109.08139.
- Chinchali, S.; Hu, P.; Chu, T.; Sharma, M.; Bansal, M.; Misra, R.; Pavone, M.; Katti, S. Cellular Network Traffic Scheduling with Deep Reinforcement Learning. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, New Orleans, LA, USA, 2–7 February 2018; Volume 32.
- Suarez-Varela, J.; Mestres, A.; Yu, J.; Kuang, L.; Feng, H.; Cabellos-Aparicio, A.; Barlet-Ros, P. Routing in optical transport networks with deep reinforcement learning. J. Opt. Commun. Netw. 2019, 11, 547–558.
- Pawlak, J.; Li, Y.; Price, J.; Wright, M.; Al Shamaileh, K.; Niyaz, Q.; Devabhaktuni, V. A Machine Learning Approach for Detecting and Classifying Jamming Attacks Against OFDM-based UAVs. In Proceedings of the 3rd ACM Workshop on Wireless Security and Machine Learning, Abu Dhabi, United Arab Emirates, 2 July 2021; pp. 1–6.
- Soltani, M.; Pourahmadi, V.; Mirzaei, A.; Sheikhzadeh, H. Deep Learning-Based Channel Estimation. arXiv 2019, arXiv:1810.05893.
- Safari, M.S.; Pourahmadi, V.; Sodagari, S. Deep UL2DL: Data-Driven Channel Knowledge Transfer From Uplink to Downlink. IEEE Open J. Veh. Technol. 2020, 1, 29–44.
- Wen, C.K.; Shih, W.T.; Jin, S. Deep Learning for Massive MIMO CSI Feedback. IEEE Wirel. Commun. Lett. 2018, 7, 748–751.
- Peng, S.; Jiang, H.; Wang, H.; Alwageed, H.; Yao, Y.D. Modulation classification using convolutional Neural Network based deep learning model. In Proceedings of the 2017 26th Wireless and Optical Communication Conference (WOCC), Newark, NJ, USA, 7–8 April 2017; pp. 1–5.
- O’Shea, T.; Hoydis, J. An Introduction to Deep Learning for the Physical Layer. IEEE Trans. Cogn. Commun. Netw. 2017, 3, 563–575.
- Yashashwi, K.; Anand, D.; Pillai, S.R.B.; Chaporkar, P.; Ganesh, K. MIST: A Novel Training Strategy for Low-latency Scalable Neural Net Decoders. arXiv 2019, arXiv:1905.08990.
- Liang, F.; Shen, C.; Wu, F. An Iterative BP-CNN Architecture for Channel Decoding. IEEE J. Sel. Top. Signal Process. 2018, 12, 144–159.
- Lu, X.; Xiao, L.; Dai, C.; Dai, H. UAV-Aided Cellular Communications with Deep Reinforcement Learning Against Jamming. IEEE Wirel. Commun. 2020, 27, 48–53.
- Cao, G.; Lu, Z.; Wen, X.; Lei, T.; Hu, Z. AIF: An Artificial Intelligence Framework for Smart Wireless Network Management. IEEE Commun. Lett. 2018, 22, 400–403.
- Wang, D.; Zhang, J.; Cao, W.; Li, J.; Zheng, Y. When will you arrive? Estimating travel time based on deep neural networks. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, New Orleans, LA, USA, 2–7 February 2018.
- Lu, Z.; Gursoy, M.C. Dynamic Channel Access and Power Control via Deep Reinforcement Learning. In Proceedings of the 2019 IEEE 90th Vehicular Technology Conference (VTC2019-Fall), Honolulu, HI, USA, 22–25 September 2019; pp. 1–5.
- Guo, Q.; Gu, R.; Wang, Z.; Zhao, T.; Ji, Y.; Kong, J.; Gour, R.; Jue, J.P. Proactive Dynamic Network Slicing with Deep Learning Based Short-Term Traffic Prediction for 5G Transport Network. In Proceedings of the 2019 Optical Fiber Communications Conference and Exhibition (OFC), San Diego, CA, USA, 7–9 March 2019; pp. 1–3.
- Chen, M.; Saad, W.; Yin, C.; Debbah, M. Echo State Networks for Proactive Caching in Cloud-Based Radio Access Networks With Mobile Users. IEEE Trans. Wirel. Commun. 2017, 16, 3520–3535.
- Wang, Y.; Narasimha, M.; Heath, R.W. MmWave Beam Prediction with Situational Awareness: A Machine Learning Approach. In Proceedings of the 2018 IEEE 19th International Workshop on Signal Processing Advances in Wireless Communications (SPAWC), Kalamata, Greece, 25–28 June 2018; pp. 1–5.
- Lei, F.; Dai, Q.; Cai, J.; Zhao, H.; Liu, X.; Liu, Y. A Proactive Caching Strategy Based on Deep Learning in EPC of 5G. In Advances in Brain Inspired Cognitive Systems, Proceedings of the 9th International Conference, BICS 2018, Xi’an, China, 7–8 July 2018; Lecture Notes in Computer Science; Ren, J., Hussain, A., Zheng, J., Liu, C., Luo, B., Zhao, H., Zhao, X., Eds.; Springer: Berlin/Heidelberg, Germany, 2018; Volume 10989, pp. 738–747.
- Maksymyuk, T.; Gazda, J.; Yaremko, O.; Nevinskiy, D. Deep Learning Based Massive MIMO Beamforming for 5G Mobile Network. In Proceedings of the 2018 IEEE 4th International Symposium on Wireless Systems within the International Conferences on Intelligent Data Acquisition and Advanced Computing Systems (IDAACS-SWS), Lviv, Ukraine, 20–21 September 2018; pp. 241–244.
- Balevi, E.; Andrews, J.G. Deep Learning-Based Channel Estimation for High-Dimensional Signals. arXiv 2019, arXiv:1904.09346.
- He, H.; Wen, C.K.; Jin, S.; Li, G.Y. Deep Learning-Based Channel Estimation for Beamspace mmWave Massive MIMO Systems. IEEE Wirel. Commun. Lett. 2018, 7, 852–855.
- Xiao, Z.; Gao, B.; Liu, S.; Xiao, L. Learning Based Power Control for mmWave Massive MIMO against Jamming. In Proceedings of the 2018 IEEE Global Communications Conference (GLOBECOM), Abu Dhabi, United Arab Emirates, 9–13 December 2018; pp. 1–6.
- Sadeghi, A.; Sheikholeslami, F.; Giannakis, G.B. Optimal and Scalable Caching for 5G Using Reinforcement Learning of Space-Time Popularities. IEEE J. Sel. Top. Signal Process. 2018, 12, 180–190.
- Peng, B.; Seco-Granados, G.; Steinmetz, E.; Fröhle, M.; Wymeersch, H. Decentralized Scheduling for Cooperative Localization With Deep Reinforcement Learning. IEEE Trans. Veh. Technol. 2019, 68, 4295–4305.
- Li, R.; Zhao, Z.; Sun, Q.; I, C.L.; Yang, C.; Chen, X.; Zhao, M.; Zhang, H. Deep Reinforcement Learning for Resource Management in Network Slicing. IEEE Access 2018, 6, 74429–74441.
- Nassar, A.; Yilmaz, Y. Deep Reinforcement Learning for Adaptive Network Slicing in 5G for Intelligent Vehicular Systems and Smart Cities. arXiv 2020, arXiv:2010.09916.
- Shi, Y.; Sagduyu, Y.E.; Erpek, T. Reinforcement Learning for Dynamic Resource Optimization in 5G Radio Access Network Slicing. In Proceedings of the 2020 IEEE 25th International Workshop on Computer Aided Modeling and Design of Communication Links and Networks (CAMAD), Pisa, Italy, 14–16 September 2020; pp. 1–6.
- Stampa, G.; Arias, M.; Sanchez-Charles, D.; Muntes-Mulero, V.; Cabellos, A. A Deep-Reinforcement Learning Approach for Software-Defined Networking Routing Optimization. arXiv 2017, arXiv:1709.07080.
- Liu, Y.; Ding, J.; Liu, X. Resource Allocation Method for Network Slicing Using Constrained Reinforcement Learning. In Proceedings of the 2021 IFIP Networking Conference (IFIP Networking), Espoo, Finland, 21–24 June 2021; pp. 1–3.
- Ulyanov, D.; Vedaldi, A.; Lempitsky, V. Deep Image Prior. Int. J. Comput. Vis. 2020, 128, 1867–1888.
- Metzler, C.A.; Mousavi, A.; Baraniuk, R.G. Learned D-AMP: Principled Neural Network based Compressive Image Recovery. arXiv 2017, arXiv:1704.06625.
- Zhang, K.; Zuo, W.; Chen, Y.; Meng, D.; Zhang, L. Beyond a Gaussian Denoiser: Residual Learning of Deep CNN for Image Denoising. IEEE Trans. Image Process. 2017, 26, 3142–3155.
- Hinton, G.E. A practical guide to training restricted Boltzmann machines. In Neural Networks: Tricks of the Trade; Springer: Berlin/Heidelberg, Germany, 2012; pp. 599–619.
- Goodfellow, I.J.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative Adversarial Networks. arXiv 2014, arXiv:1406.2661.
- Mirza, M.; Osindero, S. Conditional Generative Adversarial Nets. arXiv 2014, arXiv:1411.1784.
- Bellman, R. Dynamic Programming; Dover Publications: Mineola, NY, USA, 1957.
- Watkins, C.J.C.H.; Dayan, P. Q-learning. Mach. Learn. 1992, 8, 279–292.
- Lillicrap, T.P.; Hunt, J.J.; Pritzel, A.; Heess, N.; Erez, T.; Tassa, Y.; Silver, D.; Wierstra, D. Continuous control with deep reinforcement learning. arXiv 2019, arXiv:1509.02971.
- Fujimoto, S.; van Hoof, H.; Meger, D. Addressing Function Approximation Error in Actor-Critic Methods. arXiv 2018, arXiv:1802.09477.
- Rummery, G.A.; Niranjan, M. On-Line Q-Learning Using Connectionist Systems; Citeseer: London, UK, 1994; Volume 37.
- Mnih, V.; Badia, A.P.; Mirza, M.; Graves, A.; Lillicrap, T.P.; Harley, T.; Silver, D.; Kavukcuoglu, K. Asynchronous Methods for Deep Reinforcement Learning. arXiv 2016, arXiv:1602.01783.
- Schulman, J.; Levine, S.; Moritz, P.; Jordan, M.I.; Abbeel, P. Trust Region Policy Optimization. arXiv 2017, arXiv:1502.05477.
- Schulman, J.; Wolski, F.; Dhariwal, P.; Radford, A.; Klimov, O. Proximal Policy Optimization Algorithms. arXiv 2017, arXiv:1707.06347.
- Khani, M.; Alizadeh, M.; Hoydis, J.; Fleming, P. Adaptive Neural Signal Detection for Massive MIMO. arXiv 2019, arXiv:1906.04610.
- Gao, G.; Dong, C.; Niu, K. Sparsely Connected Neural Network for Massive MIMO Detection. In Proceedings of the 2018 IEEE 4th International Conference on Computer and Communications (ICCC), Hangzhou, China, 30 July–2 August 2018; pp. 397–402.
- Mennes, R.; Camelo, M.; Claeys, M.; Latré, S. A neural-network-based MF-TDMA MAC scheduler for collaborative wireless networks. In Proceedings of the 2018 IEEE Wireless Communications and Networking Conference (WCNC), Barcelona, Spain, 15–18 April 2018; pp. 1–6.
- Gutterman, C.; Grinshpun, E.; Sharma, S.; Zussman, G. RAN Resource Usage Prediction for a 5G Slice Broker. In Proceedings of the Twentieth ACM International Symposium on Mobile Ad Hoc Networking and Computing, Catania, Italy, 2–5 July 2019; Association for Computing Machinery: New York, NY, USA, 2019; pp. 231–240.
- Pang, H.; Liu, J.; Fan, X.; Sun, L. Toward smart and cooperative edge caching for 5G networks: A deep learning based approach. In Proceedings of the 2018 IEEE/ACM 26th International Symposium on Quality of Service (IWQoS), Banff, AB, Canada, 4–6 June 2018; pp. 1–6.
- Krizhevsky, A. One weird trick for parallelizing convolutional neural networks. arXiv 2014, arXiv:1404.5997.
- Jaderberg, M.; Simonyan, K.; Zisserman, A.; Kavukcuoglu, K. Spatial Transformer Networks. arXiv 2016, arXiv:1506.02025.
- Fischer, W.; Meier-Hellstern, K. The Markov-modulated Poisson process (MMPP) cookbook. Perform. Eval. 1993, 18, 149–171.
- Baum, L.E.; Petrie, T.; Soules, G.; Weiss, N. A Maximization Technique Occurring in the Statistical Analysis of Probabilistic Functions of Markov Chains. Ann. Math. Stat. 1970, 41, 164–171.
- Cao, G.; Lu, Z.; Lei, T.; Wen, X.; Wang, L.; Yang, Y. Demo: SDN-based seamless handover in WLAN and 3GPP cellular with CAPWAN. In Proceedings of the 13th International Symposium on Wireless Communication Systems, Poznań, Poland, 20–23 September 2016; pp. 1–3.
- Amsaleg, L.; Bailey, J.; Barbe, D.; Erfani, S.; Houle, M.E.; Nguyen, V.; Radovanovíc, M. The vulnerability of learning to adversarial perturbation increases with intrinsic dimensionality. In Proceedings of the 2017 IEEE Workshop on Information Forensics and Security (WIFS), Rennes, France, 4–7 December 2017; pp. 1–6. [Google Scholar] [CrossRef]
- Sitawarin, C.; Wagner, D. On the Robustness of Deep K-Nearest Neighbors. arXiv 2019, arXiv:1903.08333. [Google Scholar]
- Sitawarin, C.; Wagner, D. Minimum-Norm Adversarial Examples on KNN and KNN based Models. In Proceedings of the 2020 IEEE Security and Privacy Workshops (SPW), San Francisco, CA, USA, 21 May 2020; pp. 34–40. [Google Scholar]
- Wang, L.; Liu, X.; Yi, J.; Zhou, Z.H.; Hsieh, C.J. Evaluating the Robustness of Nearest Neighbor Classifiers: A Primal-Dual Perspective. arXiv 2019, arXiv:1906.03972. [Google Scholar]
- Yang, Y.Y.; Rashtchian, C.; Wang, Y.; Chaudhuri, K. Robustness for Non-Parametric Classification: A Generic Attack and Defense. arXiv 2020, arXiv:1906.03310. [Google Scholar]
- Sitawarin, C.; Kornaropoulos, E.M.; Song, D.; Wagner, D. Adversarial Examples for k-Nearest Neighbor Classifiers Based on Higher-Order Voronoi Diagrams. arXiv 2021, arXiv:2011.09719. [Google Scholar]
- Chen, H.; Zhang, H.; Si, S.; Li, Y.; Boning, D.; Hsieh, C.J. Robustness Verification of Tree-based Models. arXiv 2019, arXiv:1906.03849. [Google Scholar]
- Kantchelian, A.; Tygar, J.D.; Joseph, A.D. Evasion and Hardening of Tree Ensemble Classifiers. arXiv 2016, arXiv:1509.07892.
- Papernot, N.; McDaniel, P.; Goodfellow, I. Transferability in Machine Learning: From Phenomena to Black-Box Attacks using Adversarial Samples. arXiv 2016, arXiv:1605.07277.
- Zhang, F.; Wang, Y.; Liu, S.; Wang, H. Decision-based evasion attacks on tree ensemble classifiers. World Wide Web 2020, 23, 2957–2977.
- Zhang, C.; Zhang, H.; Hsieh, C.J. An Efficient Adversarial Attack for Tree Ensembles. arXiv 2020, arXiv:2010.11598.
- Andriushchenko, M.; Hein, M. Provably Robust Boosted Decision Stumps and Trees against Adversarial Attacks. arXiv 2019, arXiv:1906.03526.
- Goodfellow, I.J.; Shlens, J.; Szegedy, C. Explaining and Harnessing Adversarial Examples. arXiv 2015, arXiv:1412.6572.
- Kurakin, A.; Goodfellow, I.; Bengio, S. Adversarial examples in the physical world. arXiv 2017, arXiv:1607.02533.
- Madry, A.; Makelov, A.; Schmidt, L.; Tsipras, D.; Vladu, A. Towards Deep Learning Models Resistant to Adversarial Attacks. arXiv 2019, arXiv:1706.06083.
- Dong, Y.; Liao, F.; Pang, T.; Su, H.; Zhu, J.; Hu, X.; Li, J. Boosting Adversarial Attacks with Momentum. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 9185–9193.
- Sabour, S.; Cao, Y.; Faghri, F.; Fleet, D.J. Adversarial Manipulation of Deep Representations. arXiv 2016, arXiv:1511.05122.
- Moosavi-Dezfooli, S.M.; Fawzi, A.; Frossard, P. DeepFool: A simple and accurate method to fool deep neural networks. arXiv 2016, arXiv:1511.04599.
- Jang, U.; Wu, X.; Jha, S. Objective metrics and gradient descent algorithms for adversarial examples in machine learning. In Proceedings of the 33rd Annual Computer Security Applications Conference, Orlando, FL, USA, 4–8 December 2017; pp. 262–277.
- Carlini, N.; Wagner, D. Towards Evaluating the Robustness of Neural Networks. arXiv 2017, arXiv:1608.04644.
- Chen, P.Y.; Sharma, Y.; Zhang, H.; Yi, J.; Hsieh, C.J. EAD: Elastic-Net Attacks to Deep Neural Networks via Adversarial Examples. arXiv 2018, arXiv:1709.04114.
- Chang, K.H.; Huang, P.H.; Yu, H.; Jin, Y.; Wang, T.C. Audio Adversarial Examples Generation with Recurrent Neural Networks. In Proceedings of the 2020 25th Asia and South Pacific Design Automation Conference (ASP-DAC), Beijing, China, 13–16 January 2020; pp. 488–493.
- Moosavi-Dezfooli, S.M.; Fawzi, A.; Fawzi, O.; Frossard, P. Universal adversarial perturbations. arXiv 2017, arXiv:1610.08401.
- Mode, G.; Hoque, K. Adversarial Examples in Deep Learning for Multivariate Time Series Regression. arXiv 2020, arXiv:2009.11911.
- Gupta, K.; Pesquet, J.C.; Pesquet-Popescu, B.; Kaakai, F.; Malliaros, F. An Adversarial Attacker for Neural Networks in Regression Problems. In Proceedings of the IJCAI Workshop on Artificial Intelligence Safety (AI Safety), Macau, China, 10–16 August 2021.
- Narodytska, N.; Kasiviswanathan, S.P. Simple Black-Box Adversarial Perturbations for Deep Networks. arXiv 2016, arXiv:1612.06299.
- Uesato, J.; O’Donoghue, B.; van den Oord, A.; Kohli, P. Adversarial Risk and the Dangers of Evaluating Against Weak Attacks. arXiv 2018, arXiv:1802.05666.
- Alzantot, M.; Sharma, Y.; Chakraborty, S.; Zhang, H.; Hsieh, C.J.; Srivastava, M.B. GenAttack: Practical black-box attacks with gradient-free optimization. In Proceedings of the Genetic and Evolutionary Computation Conference, Prague, Czech Republic, 13–17 July 2019; pp. 1111–1119.
- Guo, C.; Gardner, J.; You, Y.; Wilson, A.G.; Weinberger, K. Simple black-box adversarial attacks. In Proceedings of the International Conference on Machine Learning, Long Beach, CA, USA, 10–15 June 2019; pp. 2484–2493.
- Koga, K.; Takemoto, K. Simple black-box universal adversarial attacks on medical image classification based on deep neural networks. arXiv 2021, arXiv:2108.04979.
- Papernot, N.; McDaniel, P.; Goodfellow, I.; Jha, S.; Celik, Z.B.; Swami, A. Practical Black-Box Attacks against Machine Learning. arXiv 2017, arXiv:1602.02697.
- Brendel, W.; Rauber, J.; Bethge, M. Decision-based adversarial attacks: Reliable attacks against black-box machine learning models. arXiv 2017, arXiv:1712.04248.
- Cheng, M.; Le, T.; Chen, P.Y.; Yi, J.; Zhang, H.; Hsieh, C.J. Query-efficient hard-label black-box attack: An optimization-based approach. arXiv 2018, arXiv:1807.04457.
- Chen, J.; Jordan, M.I.; Wainwright, M.J. HopSkipJumpAttack: A query-efficient decision-based attack. In Proceedings of the 2020 IEEE Symposium on Security and Privacy (SP), San Francisco, CA, USA, 18–21 May 2020; pp. 1277–1294.
- Chen, P.Y.; Zhang, H.; Sharma, Y.; Yi, J.; Hsieh, C.J. ZOO: Zeroth order optimization based black-box attacks to deep neural networks without training substitute models. In Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security, Dallas, TX, USA, 3 November 2017; pp. 15–26.
- Rahmati, A.; Moosavi-Dezfooli, S.M.; Frossard, P.; Dai, H. GeoDA: A geometric framework for black-box adversarial attacks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 8446–8455.
- Fawzi, A.; Moosavi-Dezfooli, S.M.; Frossard, P. Robustness of classifiers: From adversarial to random noise. Adv. Neural Inf. Process. Syst. 2016, 29, 1632–1640.
- Ilyas, A.; Engstrom, L.; Athalye, A.; Lin, J. Black-box adversarial attacks with limited queries and information. In Proceedings of the International Conference on Machine Learning, Stockholm, Sweden, 10–15 July 2018; pp. 2137–2146.
- Hu, W.; Tan, Y. Generating adversarial malware examples for black-box attacks based on GAN. arXiv 2017, arXiv:1702.05983.
- Xiao, C.; Li, B.; Zhu, J.Y.; He, W.; Liu, M.; Song, D. Generating adversarial examples with adversarial networks. arXiv 2018, arXiv:1801.02610.
- Hinton, G.; Vinyals, O.; Dean, J. Distilling the knowledge in a neural network. arXiv 2015, arXiv:1503.02531.
- Anderson, H.S.; Kharkar, A.; Filar, B.; Roth, P. Evading machine learning malware detection. In Proceedings of the Black Hat 2017, Las Vegas, NV, USA, 22–27 July 2017.
- Wu, D.; Fang, B.; Wang, J.; Liu, Q.; Cui, X. Evading Machine Learning Botnet Detection Models via Deep Reinforcement Learning. In Proceedings of the ICC 2019—2019 IEEE International Conference on Communications (ICC), Shanghai, China, 20–24 May 2019; pp. 1–6.
- Schott, L.; Rauber, J.; Bethge, M.; Brendel, W. Towards the first adversarially robust neural network model on MNIST. In Proceedings of the International Conference on Learning Representations, New Orleans, LA, USA, 6–9 May 2019.
- Hogan, T.A.; Kailkhura, B. Universal decision-based black-box perturbations: Breaking security-through-obscurity defenses. arXiv 2018, arXiv:1811.03733.
- Kim, B.; Sagduyu, Y.E.; Davaslioglu, K.; Erpek, T.; Ulukus, S. Channel-Aware Adversarial Attacks Against Deep Learning-Based Wireless Signal Classifiers. arXiv 2021, arXiv:2005.05321.
- Sadeghi, M.; Larsson, E.G. Physical Adversarial Attacks Against End-to-End Autoencoder Communication Systems. arXiv 2019, arXiv:1902.08391.
- Papernot, N.; Faghri, F.; Carlini, N.; Goodfellow, I.; Feinman, R.; Kurakin, A.; Xie, C.; Sharma, Y.; Brown, T.; Roy, A.; et al. Technical report on the CleverHans v2.1.0 adversarial examples library. arXiv 2016, arXiv:1610.00768.
- Nicolae, M.I.; Sinn, M.; Tran, M.N.; Buesser, B.; Rawat, A.; Wistuba, M.; Zantedeschi, V.; Baracaldo, N.; Chen, B.; Ludwig, H.; et al. Adversarial Robustness Toolbox v1.0.0. arXiv 2019, arXiv:1807.01069.
- Engstrom, L.; Tran, B.; Tsipras, D.; Schmidt, L.; Madry, A. Exploring the landscape of spatial robustness. In Proceedings of the International Conference on Machine Learning, Long Beach, CA, USA, 10–15 June 2019; pp. 1802–1811.
- Brown, T.B.; Mané, D.; Roy, A.; Abadi, M.; Gilmer, J. Adversarial patch. arXiv 2017, arXiv:1712.09665.
- Grosse, K.; Pfaff, D.; Smith, M.T.; Backes, M. The limitations of model uncertainty in adversarial settings. arXiv 2018, arXiv:1812.02606.
- Xu, W.; Evans, D.; Qi, Y. Feature Squeezing: Detecting Adversarial Examples in Deep Neural Networks. In Proceedings of the 2018 Network and Distributed System Security Symposium, San Diego, CA, USA, 18–21 February 2018.
- Dziugaite, G.K.; Ghahramani, Z.; Roy, D.M. A study of the effect of jpg compression on adversarial images. arXiv 2016, arXiv:1608.00853.
- Zantedeschi, V.; Nicolae, M.I.; Rawat, A. Efficient Defenses Against Adversarial Attacks. In Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security, Dallas, TX, USA, 3 November 2017; Association for Computing Machinery: New York, NY, USA, 2017; pp. 39–49.
- Warde-Farley, D.; Goodfellow, I. Adversarial Perturbations of Deep Neural Networks. In Perturbations, Optimization, and Statistics; MIT Press: Cambridge, MA, USA, 2016.
- Buckman, J.; Roy, A.; Raffel, C.; Goodfellow, I. Thermometer encoding: One hot way to resist adversarial examples. In Proceedings of the International Conference on Learning Representations, Vancouver, BC, Canada, 30 April–3 May 2018.
- Weng, T.W.; Zhang, H.; Chen, P.Y.; Yi, J.; Su, D.; Gao, Y.; Hsieh, C.J.; Daniel, L. Evaluating the robustness of neural networks: An extreme value theory approach. arXiv 2018, arXiv:1801.10578.
- Arpit, D.; Jastrzebski, S.; Ballas, N.; Krueger, D.; Bengio, E.; Kanwal, M.S.; Maharaj, T.; Fischer, A.; Courville, A.; Bengio, Y.; et al. A closer look at memorization in deep networks. In Proceedings of the International Conference on Machine Learning, Sydney, Australia, 6–11 August 2017; pp. 233–242.
- Rauber, J.; Brendel, W.; Bethge, M. Foolbox: A Python toolbox to benchmark the robustness of machine learning models. arXiv 2018, arXiv:1707.04131.
- Ling, X.; Ji, S.; Zou, J.; Wang, J.; Wu, C.; Li, B.; Wang, T. DEEPSEC: A Uniform Platform for Security Analysis of Deep Learning Model. In Proceedings of the 2019 IEEE Symposium on Security and Privacy (SP), San Francisco, CA, USA, 19–23 May 2019; pp. 673–690.
- He, W.; Li, B.; Song, D. Decision boundary analysis of adversarial examples. In Proceedings of the International Conference on Learning Representations, Vancouver, BC, Canada, 30 April–3 May 2018.
- Kurakin, A.; Goodfellow, I.; Bengio, S. Adversarial machine learning at scale. arXiv 2016, arXiv:1611.01236.
- Ross, A.; Doshi-Velez, F. Improving the adversarial robustness and interpretability of deep neural networks by regularizing their input gradients. In Proceedings of the AAAI Conference on Artificial Intelligence, New Orleans, LA, USA, 2–7 February 2018; Volume 32.
- Papernot, N.; McDaniel, P.; Wu, X.; Jha, S.; Swami, A. Distillation as a defense to adversarial perturbations against deep neural networks. In Proceedings of the 2016 IEEE Symposium on Security and Privacy (SP), San Jose, CA, USA, 22–26 May 2016; IEEE: Piscataway, NJ, USA, 2016; pp. 582–597.
- Xie, C.; Wang, J.; Zhang, Z.; Ren, Z.; Yuille, A. Mitigating adversarial effects through randomization. arXiv 2017, arXiv:1711.01991.
- Song, Y.; Kim, T.; Nowozin, S.; Ermon, S.; Kushman, N. PixelDefend: Leveraging generative models to understand and defend against adversarial examples. arXiv 2017, arXiv:1710.10766.
- Guo, C.; Rana, M.; Cisse, M.; van der Maaten, L. Countering Adversarial Images using Input Transformations. arXiv 2018, arXiv:1711.00117.
- Cao, X.; Gong, N.Z. Mitigating evasion attacks to deep neural networks via region-based classification. In Proceedings of the 33rd Annual Computer Security Applications Conference, Orlando, FL, USA, 4–8 December 2017; pp. 278–287.
- Ma, X.; Li, B.; Wang, Y.; Erfani, S.M.; Wijewickrema, S.; Schoenebeck, G.; Song, D.; Houle, M.E.; Bailey, J. Characterizing adversarial subspaces using local intrinsic dimensionality. arXiv 2018, arXiv:1801.02613.
- Meng, D.; Chen, H. MagNet: A Two-Pronged Defense against Adversarial Examples. arXiv 2017, arXiv:1705.09064.
- Carlini, N. A critique of the deepsec platform for security analysis of deep learning models. arXiv 2019, arXiv:1905.07112.
- Ding, G.W.; Wang, L.; Jin, X. advertorch v0.1: An Adversarial Robustness Toolbox based on PyTorch. arXiv 2019, arXiv:1902.07623.
- Xiao, C.; Zhu, J.Y.; Li, B.; He, W.; Liu, M.; Song, D. Spatially transformed adversarial examples. arXiv 2018, arXiv:1801.02612.
- Athalye, A.; Carlini, N.; Wagner, D. Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples. In Proceedings of the International Conference on Machine Learning, Stockholm, Sweden, 10–15 July 2018; pp. 274–283.
- Goodman, D.; Xin, H.; Yang, W.; Yuesheng, W.; Junfeng, X.; Huan, Z. Advbox: A toolbox to generate adversarial examples that fool neural networks. arXiv 2020, arXiv:2001.05574.
- Tramèr, F.; Zhang, F.; Juels, A.; Reiter, M.K.; Ristenpart, T. Stealing Machine Learning Models via Prediction APIs. arXiv 2016, arXiv:1609.02943.
- Liu, Q.; Guo, J.; Wen, C.K.; Jin, S. Adversarial attack on DL-based massive MIMO CSI feedback. J. Commun. Netw. 2020, 22, 230–235.
- Kim, B.; Sagduyu, Y.E.; Davaslioglu, K.; Erpek, T.; Ulukus, S. Over-the-Air Adversarial Attacks on Deep Learning Based Modulation Classifier over Wireless Channels. arXiv 2020, arXiv:2002.02400.
- Usama, M.; Mitra, R.N.; Ilahi, I.; Qadir, J.; Marina, M.K. Examining Machine Learning for 5G and Beyond through an Adversarial Lens. arXiv 2020, arXiv:2009.02473.
- Kim, B.; Sagduyu, Y.E.; Erpek, T.; Davaslioglu, K.; Ulukus, S. Adversarial Attacks with Multiple Antennas Against Deep Learning-Based Modulation Classifiers. arXiv 2020, arXiv:2007.16204.
- Kim, B.; Sagduyu, Y.E.; Erpek, T.; Ulukus, S. Adversarial Attacks on Deep Learning Based mmWave Beam Prediction in 5G and Beyond. arXiv 2021, arXiv:2103.13989.
- Catak, E.; Catak, F.O.; Moldsvor, A. Adversarial Machine Learning Security Problems for 6G: MmWave Beam Prediction Use-Case. arXiv 2021, arXiv:2103.07268.
- Psiaki, M.L.; Humphreys, T.E. GNSS Spoofing and Detection. Proc. IEEE 2016, 104, 1258–1270.
- Manoj, B.R.; Sadeghi, M.; Larsson, E.G. Adversarial Attacks on Deep Learning Based Power Allocation in a Massive MIMO Network. arXiv 2021, arXiv:2101.12090.
- Shi, Y.; Sagduyu, Y.E. Adversarial Machine Learning for Flooding Attacks on 5G Radio Access Network Slicing. arXiv 2021, arXiv:2101.08724.
- Wang, F.; Gursoy, M.C.; Velipasalar, S. Adversarial Reinforcement Learning in Dynamic Channel Access and Power Control. arXiv 2021, arXiv:2105.05817.
- Shi, Y.; Sagduyu, Y.E.; Erpek, T.; Gursoy, M.C. How to Attack and Defend 5G Radio Access Network Slicing with Reinforcement Learning. arXiv 2021, arXiv:2101.05768.
- Qiu, J.; Du, L.; Chen, Y.; Tian, Z.; Du, X.; Guizani, M. Artificial Intelligence Security in 5G Networks: Adversarial Examples for Estimating a Travel Time Task. IEEE Veh. Technol. Mag. 2020, 15, 95–100.
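Several of the black-box attacks cited above (e.g., ZOO by Chen et al. and related query-based methods) recover gradient information purely from model queries via finite differences. The sketch below illustrates that core idea against a toy linear "victim" model; the victim, step sizes, and all names here are illustrative assumptions for exposition, not details taken from any of the cited works.

```python
import numpy as np

rng = np.random.default_rng(1)

# Black-box "victim": a fixed linear model the attacker can only query.
w_true = rng.normal(size=8)

def query(x):
    # Probability of class 1 -- the only signal the attacker observes.
    return 1.0 / (1.0 + np.exp(-(x @ w_true)))

def estimate_grad(x, h=1e-4):
    # Zeroth-order gradient estimate: symmetric finite differences of the
    # query output along each coordinate; no model internals are used.
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (query(x + e) - query(x - e)) / (2.0 * h)
    return g

# Start from an input confidently labelled class 1, then take small
# signed steps against the estimated gradient until the label flips.
x = np.sign(w_true) * 0.5          # query(x) is well above 0.5 here
p0 = query(x)
for _ in range(100):
    x = x - 0.05 * np.sign(estimate_grad(x))
    if query(x) < 0.5:
        break

print(p0, query(x))                # confidence before and after the attack
```

Note the query cost: this naive coordinate-wise estimator spends two queries per input dimension per step, which is why the query-efficient attacks cited above add tricks such as stochastic coordinate selection and attack-space dimension reduction.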
Dataset | Description | Features and Labels | Mobile Network Applications |
---|---|---|---|
DeepMIMO [27] | Dataset for mmWave and massive MIMO | Channel matrix, ray-tracing path parameters, line-of-sight status, TX-RX distance, path loss, UE and BS locations | Channel estimation in MIMO [28] |
RadioML [29] | Synthetic dataset consisting of several modulations at varying SNRs | The received signal in in-phase/quadrature (I/Q) format, SNR, modulation type | Automatic modulation classification [30,31,32] |
Raymobtime [33] | Realistic dataset with ray tracing, mobility and time evolution | UE positions, ray-tracing, lidar and video image data | Beam selection [34] |
SPHERE [35] | Activity recognition with multi-modal sensor data | Accelerometer data, features extracted from video, passive sensor data, activity | Activation control [36] |
Massive MIMO [37] | Data for power allocation in the downlink of a massive MIMO network | UE positions, max-min and max-prod power allocation vectors | Power allocation [37] |
IBM/Watson [38] | SNMP records for IBM Watson research centre over several weeks | Aggregated network traffic features for each user of several APs | Scheduling [39] |
CTU [40] | Real botnet, normal and background traffic | Raw packet captures, per-flow network traffic data features, flow labels | Network intrusion detection [41] |
AWID [42] | Real 802.11 WLAN traces including normal and malicious traffic | MAC layer, radio and general frame information extracted from 802.11 WLAN traffic, attack labels | Network intrusion detection [43] |
CICIDS18 [44] | Realistic network traffic data for multiple attack scenarios | Raw packet captures, per-flow network traffic data features | Network intrusion detection [45] |
WSN-DS [46] | Dataset for intrusion detection systems in wireless sensor networks | Node state features: energy consumption, packets sent/received, distance to BS, attack labels | Jamming detection and classification [47] |
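Models trained on datasets like those above (e.g., automatic modulation classifiers trained on RadioML) are natural targets for the adversarial-example attacks this survey covers. As a toy illustration, the sketch below trains a small classifier on synthetic I/Q-like features and degrades it with the fast gradient sign method (FGSM) of Goodfellow et al.; the data generator, model, and every parameter are illustrative assumptions, not drawn from the cited works.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for RadioML-style features: two "modulation classes"
# drawn from shifted Gaussians (illustrative data, not the real dataset).
n, d = 400, 16
X = np.vstack([rng.normal(-0.5, 1.0, size=(n // 2, d)),
               rng.normal(+0.5, 1.0, size=(n // 2, d))])
y = np.array([0] * (n // 2) + [1] * (n // 2))

# Train a minimal logistic-regression classifier by gradient descent.
w, b = np.zeros(d), 0.0
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    g = p - y                      # gradient of cross-entropy w.r.t. logits
    w -= 0.1 * (X.T @ g) / n
    b -= 0.1 * g.mean()

def accuracy(Xs):
    p = 1.0 / (1.0 + np.exp(-(Xs @ w + b)))
    return float(((p > 0.5).astype(int) == y).mean())

# FGSM: move each sample one eps-step in the sign of the loss gradient
# w.r.t. the input; for this model dL/dx = (p - y) * w.
eps = 1.0
p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
X_adv = X + eps * np.sign(np.outer(p - y, w))

clean_acc, adv_acc = accuracy(X), accuracy(X_adv)
print(clean_acc, adv_acc)          # accuracy collapses under the attack
```

The same one-step recipe applies unchanged to the deep models in the table below; only the gradient computation (backpropagation through the network) differs.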
Category | Ref. | Description | Data and Features | Algorithm | Modus Operandi |
---|---|---|---|---|---|
Channel estimation | [75] | Estimates CSI of the whole time-frequency using the transmitted pilots | Received pilot signals transmitted in lattice formation | Supervised: CNN | Transmit pilots; train a CNN and freeze its parameters; train another CNN; in both cases, MSE between the estimated and the actual channel responses is used as the loss function |
[57] | Estimates CSI at the uplink for mixed ADCs massive MIMO systems | The least square channel estimations of pilot signals | Supervised: fully-connected neural network and CNN | Transmit pilots; estimate the channel with the least squares; either train one neural network for all antennas or two networks: one for high- and another for low-resolution ADCs and combine the outputs; in both cases, MSE between the estimated and the actual channel responses is used as the loss function | |
[91] | Estimates CSI for high-dimensional signals in massive MIMO-OFDM | Received signal powers of OFDM symbols | Unsupervised: CNN-based DIP neural network | Initiate the DIP network; transmit pilots and measure the received signal; optimise the network parameters to minimise loss between the signal received and the output of the network given the input is a randomly chosen input tensor; use the resulting network to estimate the signal; estimate the channel using the least squares method | |
[76] | Predicts CSI in DL based on the past UL measurements in OFDM FDD | Time-frequency UL-CSI | Supervised: CNN and GAN | Transmit UL pilots; measure the received signal; use the CNN to predict DL-CSI to minimise MSE between the real and predicted values; alternatively, train the GAN to generate UL- and DL-CSI, initialise a random vector and update it using the gradient descent so that the generated and the real UL part become similar with regards to the loss function equal to the weighted sum of the difference between the real and predicted UL CSIs and discriminator loss | |
[28] | Maps CSI at one set of antennas and frequency band to CSI at another set of antennas and frequency band | CSI matrix for a subset of antennas in the uplink | Supervised: fully-connected neural network | Estimate channel matrix for a subset of antennas in the uplink; preprocess the data: normalise, zero-mask, flatten; train the fully-connected network to predict the channel matrix for all antennas in the downlink; normalised MSE averaged over mini-batches is used as the loss function | |
[92] | Estimates CSI in beamspace mmWave massive MIMO | Received pilot signals | Supervised: CNN-based LDAMP neural network with DnCNN denoiser | Transmit pilots and measure the received signal; train an LDAMP network with DnCNN denoiser to minimise MSE between the real and predicted channel matrix; use the network trained to estimate CSI | |
[77] | Recovers CSI on the BS through feedback links in OFDM | CSI matrix estimated at UE | Unsupervised: CNN-based autoencoder | Transmit pilots and measure the received signal at the UE; evaluate CSI at the downlink; train an autoencoder to minimise MSE between the real CSI matrix and its reconstruction; deploy the encoder at the UE and the decoder at the BS to reduce CSI feedback overhead | |
Symbol detection | [58] | Recovers the transmitted symbols in OFDM | The received data of the pilot block and one data block | Supervised: fully-connected neural network | Transmit pilots and one data block, use the data received as the input to a neural network; train the network to minimise the loss between its output and the data transmitted; use the network trained to recover the transmitted symbols |
[115] | Estimates the signal transmitted given the signal received and the CSI matrix | Received signal and the CSI matrix | Supervised: neural network MMNet | Transmit a signal and measure the signal received; estimate the CSI at the current time interval; train a neural network for 1000 iterations for the first subcarrier and fine-tune using only 3 training iterations per subcarrier for subsequent subcarriers; use the models trained to detect the signals received in the current time interval on the corresponding subcarriers; the training algorithm is repeated at each time interval | |
[116] | Estimates the signal transmitted given the signal received and the CSI matrix | Received signal and the channel state matrix | Supervised: neural network ScNet | Transmit a signal and measure the signal received; estimate the channel state at the current time interval; train an ScNet to minimise the Euclidean distance between the real signal sent and the one predicted by the neural network; use the network trained to estimate the original signal transmitted given the signal received and the CSI matrix | |
AMC | [31] | Performs AMC based on sequences of the received signals and SNR | Sequences of the signals received, SNR, noise values | Supervised: CNN | Collect received signals, SNR and noise values; train a network to predict the modulation type and noise level; replace the output layer with a new one without the noise output; fine-tune the network using only modulation types; in both cases use multi-class cross-entropy as the loss function |
[32] | Performs AMC using the received signal | Received signal in I/Q format, FFT magnitudes | Supervised: LSTM | Measure the received signal or retrieve averaged FFT magnitude; train an LSTM network to predict the modulation type; the loss function is softmax cross-entropy with logits | |
[59] | Performs AMC based on manually crafted features | Amplitude var., max PSD of the amplitude, in-band spectral var. and deviation from unit circle, cumulants, SNR | Supervised: fully-connected neural network | Process the received signal; extract a specific set of manually designed features; train a neural network which predicts the modulation type; the loss function is multi-class cross-entropy between the real modulation expressed as a one-hot vector and the softmax output of the network | |
[78] | Performs AMC using constellation diagrams | Constellation diagrams of the modulated signals | Supervised: CNN | Measure the received signal; transform the signal into constellation diagrams; train an AlexNet to predict the modulation type; softmax cross-entropy with logits is used as the loss function | |
Channel coding | [79] | Interprets end-to-end communication systems as autoencoders | A message transformed into one-hot vector | Unsupervised: autoencoder, attention CNN-based RTN | Select and transmit messages; train an autoencoder neural network to learn representations of the messages that are robust with respect to noise, fading and distortion; use the categorical cross-entropy between the input message and the reconstruction as the loss function; the trained autoencoder can be used for both encoding and decoding so that the transmitted message can be recovered with small probability of error |
[60] | Searches for an optimal decoding strategy for binary linear codes | PC matrix, hard-decisions vector | Reinforcement: Q-table or Q-learning with a fully-connected network | Initiate a deep neural network; receive a codeword; initiate the state as PC matrix multiplied by the hard decisions vector; perform an action by flipping a bit in the received word; evaluate the reward based on whether the codeword is decoded or not; calculate Q-function value based on Bellman equation; repeat until the terminal state is obtained, i.e., the codeword is decoded; explore the environment with either selecting a random action or an action which flips one of the incorrect bits | |
[61] | Views the decoding problem as a classification problem | Received symbol vector after channel encoding, BPSK mapping and simulated channel noise | Supervised: fully-connected neural network, CNN, LSTM | Transmit and receive a codeword; train a fully-connected neural network, CNN or LSTM to minimise MSE between the original codeword sent and the output of the network; estimate information bits from the new received symbol using the network trained | |
[80] | Solves the decoding problem for convolutional and LDPC codes | Received symbol vector after channel encoding, BPSK mapping and channel noise | Supervised: CNN | Transmit and receive a codeword; train a CNN to minimise MSE between the original codeword sent and the output of the network; estimate information bits from the new received symbol using the network trained | |
[81] | Solves the decoding problem for LDPC codes | Received symbol vector after channel encoding, BPSK mapping and channel noise | Supervised: CNN | Transmit and receive a codeword; train a CNN to minimise the weighted sum of the residual noise power (squared loss) and the normality test; use the BP decoder to estimate the transmitted symbols and calculate the residual noise; reconstruct the noise with the CNN trained; construct a new version of the received vector by subtracting the reconstructed noise and feed it back to the BP decoder to perform another round of BP decoding | |
[62] | Learns to communicate over an unknown channel without a reliable link | Transmitter: message sent, receiver: received signal | Reinforcement: policy gradients, supervised: fully-connected neural network, attention CNN-based RTN | Transmit messages over the channel; build the receiver network to estimate the conditional probability of the message sent given the signal received; build the transmitter using policy gradients to minimise the loss obtained at the receiver and sent back to the transmitter over an unreliable channel; train both the transmitter and the receiver iteratively to minimise the MSE between the message sent and the receiver’s output | |
Beamforming | [88] | Predicts mmWave beam power based on positions of the RSU and neighbouring cars | Position of the RSU and the cars in different lanes with regards to the receiver | Supervised: linear, support vector, random forest and gradient boosting regression models | Measure and encode positions of the RSU and surrounding cars; calculate received power for each beam pair using the channel model provided; train one of the regression models mentioned to predict the maximum beam power or all beam powers based on the feature values using root MSE between the real and predicted values as the loss function |
[63] | Predicts achievable rate with every beamforming codeword based on the OFDM omni-received sequences | The OFDM omni-received sequences collected from several BSs | Supervised: fully-connected neural network | Receive omni uplink pilots from multiple UEs on multiple BSs; evaluate the achievable rate of every beamforming vector; train a network to minimise the MSE between the desired normalised achievable rate and the network output given the OFDM omni-received sequences collected from all the BSs; use the network trained for coordinated beamforming | |
[64] | Predicts the best beam based on a subset of RSS measurements | RSS values from a subset of beams | Supervised: fully-connected neural network | Measure RSS values for a given subset of beams; calculate the optimal beam by measuring the angle between the receiver and the transmitter; train a neural network to retrieve the best beam based on the subset of RSS values; use the network to map this subset to the best beam overall; the subset of beams used as the input is selected by cycling through the combinations of beams and then selecting the beam subset with the best accuracy | |
[34] | Performs beam selection using context-awareness of the UE | GNSS and lidar data | Supervised: fully-connected and residual CNN | Measure GNSS positions, collect lidar image; train a neural network to predict the optimal beam based on the aforementioned data; use categorical cross-entropy as the loss function; use the network trained to determine the beam direction with the use of context information | |
[90] | Calculates optimal antenna diagrams for massive MIMO | User distribution, channel information | GAN + reinforcement learning: Q-learning | Use the first generator to produce a sample of users’ locations; use the second generator to produce an antenna diagram; use the discriminator to evaluate the reward in terms of the total aggregate throughput; update the discriminator using Bellman equation; repeat until the convergence criterion is met, namely, all users have the highest possible spectral efficiency values | |
[65] | Performs beam training based on measurements for a subset of codewords | RSS values for a subset of BS codewords and the user codewords combinations | Supervised: fully-connected neural network | Send a subset of all possible combinations of the BS codewords and the user codewords; measure the received signal power; train a network to predict the optimal beam combination based on the RSS measurements; use the network trained to select the optimal combination of the combiner and the precoder | |
Activation control | [66] | Searches for an optimal SBS activation strategy | Traffic rates, SBS on/off modes | Reinforcement: DDPG, supervised: fully-connected neural networks | Initialise networks; observe historical data rates; predict the future data rate using a traffic rate prediction network; observe SBS on/off modes; calculate continuous action using a policy network; transform the action into a set of discrete actions and select the best using a cost-estimation network; observe actual traffic rates and costs; update all the network parameters |
[67] | Controls BS sleeping | Traffic volumes | Reinforcement: DQN with fully-connected neural network | Initialise the agent and IPP model; observe traffic volumes; filter the traffic information with IPP model; feed the filtered observation to a neural network; calculate and scale the reward based on the number of requests served and not served; store the experience in a replay buffer; sample an experience from the buffer and update the network parameters; update the reward scaling factor; use the network trained to decide whether the BS should sleep or not at the given time interval | |
[36] | Selects sensor power mode for indoor localisation | Sensor mode: low-power or enhanced | Reinforcement: SARSA | Initialise an RL agent; observe sensor modes; infer location using an HMM model; estimate the localisation error depending on the sensor mode; update the agent’s Q-function using Bellman equation; use the algorithm for continuous learning of energy-efficient location sensing | |
Power allocation | [37] | Maps positions of UEs to the optimal power allocation policies | UE positions | Supervised: fully-connected neural network | Obtain UE positions; calculate an optimal power allocation policy vector; train a network using UE positions as the input and the MSE between the network output and the optimal power allocation policy vector as the loss function; the resulting network can be used to calculate the optimal power allocation policy for a new set of UEs’ positions |
[68] | Learns to optimise interference management | Channel coefficients | Supervised: fully-connected neural network | Measure channel coefficients; calculate optimal power allocation values using the WMMSE algorithm; train a neural network to minimise MSE between the network output and the optimal power allocation values given the channel coefficients; use the network trained to estimate the power allocation policy for new channel coefficient values to reduce real-time processing | |
[69] | Develops a framework for energy-efficient power control | Channel realisations | Supervised: fully-connected neural network | Use the BB algorithm to generate the training set containing optimal power allocations for many different channel realisations; train a neural network to minimise MSE between the network output and the optimal power allocation policy; use the model trained as an effective and low-complexity online power allocation method | |
[93] | Selects power level to mitigate smart jamming | Estimated jamming power and all the users’ SNR at the current time slot | Reinforcement: Q-learning | Initialize the agent; estimate jamming power and users’ SNR at the current time slot; sample transmission power using the model; calculate the transmission utility as the weighted sum of the users’ SNR values and the transmission costs; update the model using the Bellman equation | |
[82] | Selects optimal relay signal power on a UAV during a jamming attack | BER of the previous user messages, the jamming power received by the UAV, gains of the channels between the BSs, UAV, user and jammer | Reinforcement: DQN with CNN | Initialise the agent; estimate feature values; sample the relay power; calculate the utility based on the BER of the user message received by the serving BS, the BER of the weaker message signal received by UAV or the relay BS, and the relay cost; store the experience in a replay buffer; sample an experience from the buffer and update the network parameters | |
[70] | Develops a distributively executed dynamic power allocation scheme | Tx power, contribution to the network objective, direct DL channel, SNR, interference and its contribution to the objective from the interferer and interfered sets | Reinforcement: DQN with fully-connected neural network | Initialize RL agents; for each agent determine interferer and interfered sets; measure feature values; sample the discrete power levels; calculate the reward for each agent based on its direct contribution to the network objective and the penalty due to its interference to all interfered neighbours; store the experience in a replay buffer; sample an experience from the buffer and update the network parameters | |
[85] | Performs joint dynamic channel access and power control | The channel selected, its power level, the feedback signal, and the indicator of no transmission | Reinforcement: DQN with LSTM | Initialise RL agents; for each agent determine the feature values; sample the discrete power levels for the channels selected or no transmission; calculate the reward which can be either individual transmission rate, sum-rate or proportional fairness; store the experience in a replay buffer; sample an experience from the buffer and update the network parameters | |
[71] | Studies the power allocation problem based on channel gains | Channel gains | Supervised: fully-connected neural network | Transmit pilot signals from the BS from each subcarrier; estimate the channel gains at the UE and report them to the BS; train a network to return each subcarrier power by minimising the error between the network output and the optimal power allocation vector calculated using the interior point method; use the network trained to estimate power allocation vectors for all the UEs | |
Scheduling | [39] | Formulates the resource allocation problem in LTE-LAA as a non-cooperative game | Traffic load history | Reinforcement: policy gradients, autoencoder, LSTM | Initialise the LSTM autoencoder; feed the past traffic observations for each SBS and WLAN into LSTM traffic encoders; compute the actions for all SBSs; sample an action for each SBS depending on the actions expected to be taken by other SBSs; use policy gradients algorithm to update model parameters; train the model until all coupled constraints are satisfied |
[117] | Infers free slots in MF-TDMA networks | Last MF-TDMA frames | Supervised: fully-connected neural network | Observe the last few frames and traffic information; use a neural network to predict the probability matrix of free slots in the next frame; forward the result to the root node; receive the full schedule from the root; transmit according to the schedule; update the network weights using the total number of collisions as the loss function | |
[95] | Formulates the cooperative localization problem as multi-agent RL | Estimated node positions; number of neighbours; local covariance matrix | Reinforcement: DQN, policy gradients | Initialise an RL agent; observe estimated node positions; calculate and execute measurement action; measure the reward function; update the agent’s network parameters according to the algorithm selected; the process is terminated when each node’s uncertainty falls below the threshold | |
[72] | Schedules HVFT application traffic | KPIs: cell congestion, number of sessions, cell efficiency | Reinforcement: DDPG with fully-connected neural network | Initialize an RL agent; measure the KPIs; sample an action; estimate the average throughput using an RF model trained on the real data; use the throughput estimation to calculate the reward; update the agent’s network parameters | |
Routing | [83] | Selects AP to maximise the user throughput | RSSIs of uplink user traffic data for each AP | Reinforcement: DQN with CNN and RNN | Initialize a policy network; observe RSSIs for each AP; select the AP to which the user should be allocated using the network; evaluate the user’s throughput as the reward function; store the experience in a replay buffer; sample an experience from the buffer and update the network parameters; use the network trained to select the optimal AP |
[99] | Optimizes routing in SDN | Traffic matrix | Reinforcement: DDPG | Initialize an RL agent; observe the traffic matrix; sample link-weights for each source-destination pair of nodes using the neural network; evaluate the mean network delay as the reward function; update the network parameters; use the network trained to perform the routing | |
[73] | Optimizes online routing in OTNs | Statistics of the several candidate paths of all the source-destination pairs in the network | Reinforcement: TRPO with fully-connected neural network | Initialize an RL agent; collect statistics of the candidate paths for each source-destination pair; use the policy network to sample an end-to-end path from the list of candidate paths that connect the given source and destination and provide the bandwidth required; evaluate the bandwidth of the current traffic demand; update the network parameters; use the network trained to route new source–destination traffic | |
Localization | [84] | Estimates the travel time | GPS trajectory points, travel distance, information about weather, driver and time | Supervised: CNN + LSTM | Initialise attribute, spatio-temporal and multi-task learning components of the network; embed information about weather, driver and time into a low-dimensional feature vector; feed this vector and the travel distance to the attribute component; feed the output of the attribute network to the spatio-temporal learning network; feed the outputs of the attribute and spatio-temporal learning components to the multi-task learning network; train the entire framework end-to-end in a supervised way by minimising the error between the travel time of the entire path and the output of the multi-task network
Slicing | [118] | Predicts REVA metric for RAN slice broker to provision network slices with RAN resource | REVA metric historical values | Supervised: LSTM | Calculate REVA for last time intervals; train a network using these values to minimise the loss function between the REVA predictions and the actual values; use the network trained to estimate future REVA values; use these estimations coupled with CSI to derive wireless link throughput and RAN resources required for each slice |
[86] | Proposes proactive dynamic network slicing scheme | Network traffic historical data | Supervised: GRU | Collect the standardised traffic load data; train multiple neural networks to predict future network loads for each slice; select the best model for each slice; use the data predictions to adjust and schedule traffic in the future; periodically retrain the models | |
[96] | Formulates the radio resource allocation problem as an MDP | Number of arrived packets in each slice within specific time window | Reinforcement: DQN | Initialize an RL agent; observe numbers of arrived packets in the network slices under consideration; sample the bandwidth allocation vector using the Q-network; measure the weighted sum of SE and QoE as the reward function; store the experience in a replay buffer; sample an experience from the buffer and update the agent network parameters | |
[100] | Formulates the network slice resource allocation as an MDP | Numbers of users using different slices | Reinforcement: PPO | Initialise an RL agent; observe numbers of users using the network slices under consideration; sample the bandwidth allocation policy using the agent’s policy network; measure the network throughput; update the agent network parameters | |
[97] | Formulates the network slicing resource allocation problem at the network edge as an MDP | Allocated resources, resources in use, the serving node, the task priority, the resources required, holding time | Reinforcement: DQN | Initialize an RL agent; observe values of the features; select the fog node for the task or reject the task using the Q-network; calculate the reward function; store the experience in a replay buffer; sample an experience from the buffer and update the agent network parameters | |
[98] | Proposes a dynamic resource allocation scheme for radio access network slicing | Frequency-time blocks, CPU usage, and transmit power available at the time slot | Reinforcement: DQN | Initialise an RL agent; observe available resources at the current time slot; select the requests to serve using the Q-network; calculate the weighted sum of requests satisfied which acts as the reward function; store the experience in a replay buffer; sample an experience from the buffer and update the agent network parameters | |
Caching | [87] | Proposes a proactive caching framework for CRANs | Content request time, week, gender, occupation, age, and device type | Supervised: ECN | Collect content request data; train an ECN model; use the ECN to predict the distribution of content requests and user mobility patterns; cluster and sample the predicted content request distributions to calculate the request percentage of each content item; select the contents to cache based on these percentages
[119] | Learns content cache priorities | Historical sequence of video content requests | Supervised: LSTM | Observe sequence of video content requests, train a network to predict cache priority values; use the network trained to decide which content should be evicted from the cache when a new content request in one base station arrives | |
[89] | Predicts content popularity class in evolved packet core | Historical traffic volumes of each content type | Supervised: sparse autoencoder | Collect historical data on content popularity; train a neural network to predict content popularity using the historical patterns; train each autoencoder in the network in an unsupervised way to minimise the reconstruction error; fine-tune the network to minimise the prediction error; use the network trained to evaluate content popularity | |
[94] | Finds optimal caching policy | Local and global user request profiles | Reinforcement: Q-learning | Initialize the agent; observe user request profiles; calculate binary caching policy vector; calculate the reward based on the cost of refreshing the cache contents, the operational cost and the cost of mismatch between the caching action vector and the global popularity profile; update the agent parameters using the Bellman equation | |
Security | [41] | Classifies network flows, classifies attack symptoms | Network traffic flow features, timestamps, attack class | Supervised: fully-connected neural network, DBN, LSTM | Train one neural network to classify feature vectors extracted from network flows as either normal or attack symptoms; use the feature vector, timestamp and symptom type as the input to another neural network; train the second neural network to classify symptom sequences as attacks |
[43] | Detects intrusion in WiFi networks | MAC layer, radio and general frame information | Unsupervised: autoencoder, supervised: fully-connected neural network | Extract and preprocess features; train an autoencoder to minimise the reconstruction error; use the output of the pre-trained autoencoder as the input to a fully-connected network; train the second network to minimise the categorical cross-entropy between its output and the attack class label | |
[45] | Looks for an optimal CNN architecture and detects network intrusions | Network traffic flow features | Supervised: RNN, CNN | Collect traffic flows from backhaul and core network links; transform feature vectors into images; select an optimal CNN architecture using an RNN; train the resulting CNN model; use the model trained to classify network traffic flows | |
[47] | Classifies jamming attacks | Node state information | Supervised: fully-connected neural network, SVM | Collect traffic; extract features; train a neural network to classify samples as either normal or corresponding to a particular class of jamming; feed the samples classified as normal to an SVM; train the SVM to classify the samples as either normal or malicious; use the model trained to identify and classify jammers | |
[74] | Detects and classifies jamming attacks | Subcarrier spacing, symbol time, subcarrier length, cyclic prefix length, average received power, threshold, average signal power, average noise power, and SNR | Supervised: linear regression, fully-connected neural network, k-NN, decision tree, random forest, naive Bayes | Extract features from the signals; train a model to detect and/or classify samples as either normal or corresponding to a particular class of jamming; use the model trained to identify and classify jammers |
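Many of the reinforcement-learning workflows in the table above share the same skeleton: observe the state, sample an action (often ε-greedily), receive a reward, and update the Q-function with the Bellman equation. As a minimal sketch of that skeleton, the following tabular Q-learning example runs on a toy five-state chain environment; the environment, reward and hyperparameters are purely illustrative and are not taken from any of the surveyed papers.

```python
import numpy as np

N_STATES, N_ACTIONS = 5, 2   # toy chain; actions: 0 = left, 1 = right
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1

def step(state, action):
    """Toy environment: reward 1.0 only when the last state is reached."""
    nxt = min(state + 1, N_STATES - 1) if action == 1 else max(state - 1, 0)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

def train(episodes=2000, seed=0):
    rng = np.random.default_rng(seed)
    q = np.zeros((N_STATES, N_ACTIONS))
    for _ in range(episodes):
        state, done = 0, False
        while not done:
            # epsilon-greedy action sampling
            if rng.random() < EPS:
                action = int(rng.integers(N_ACTIONS))
            else:
                action = int(np.argmax(q[state]))
            nxt, reward, done = step(state, action)
            # Bellman update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
            target = reward + (0.0 if done else GAMMA * np.max(q[nxt]))
            q[state, action] += ALPHA * (target - q[state, action])
            state = nxt
    return q

q = train()
# The greedy policy should move right in every non-terminal state.
print([int(np.argmax(q[s])) for s in range(N_STATES - 1)])
```

Deep variants (DQN, DDPG, PPO) replace the Q-table with a neural network and typically add the replay buffer that recurs throughout the workflow descriptions above, but the observe–act–reward–update loop is the same.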
Module | Component | Cleverhans | ART | Foolbox | DeepSec | Advertorch | Advbox |
---|---|---|---|---|---|---|---|
Target model | Tensorflow or Pytorch | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
Sklearn, XGBoost, LightGBM or CatBoost | ✓ | ||||||
Custom model | ✓ | ✓ | ✓ | ✓ | |||
White-box attacks | Attacks against k-NN or k-means |
Attacks against decision trees | ✓ | ||||||
Attacks against neural networks | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | |
Score-based attacks | Single pixel [150] | ✓ | ✓ |
Local search [150] | ✓ | ✓ | |||||
GenAttack [152] | ✓ | ||||||
SimBA attack [153] | ✓ | ||||||
Decision-based attacks | Substitute model attack [155] | ✓ |
Boundary attack [156] | ✓ | ✓ | |||||
HopSkipJump attack [158] | ✓ | ✓ | |||||
ZOO attack [159] | ✓ | ||||||
GeoDA attack [160] | ✓ | ||||||
NES attack [162] | ✓ | ||||||
Pointwise attack [168] | ✓ | ||||||
Attack utility metrics | Minimal perturbation size | ✓ | |||||
Misclassification rate | ✓ | ||||||
Adv. class confidence | ✓ | ||||||
True class confidence | ✓ | ||||||
Adversarial example robustness | ✓ | ||||||
Imperceptibility | ✓ | ||||||
Computation cost | ✓ | ||||||
Defences | Adversarial training [19,137,187] | ✓ | ✓ | ✓ | ✓ | ||
Preprocessing [177,178,180,181,191,192] | ✓ | ✓ | ✓ | ✓ | |||
Postprocessing [193,201] | ✓ | ✓ | |||||
Model transformation [189] | ✓ | ✓ | |||||
Adversarial example detection [177,194,195] | ✓ |
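None of the frameworks in the table is needed to see what their white-box gradient attacks compute at the core: a step in the direction of the sign of the loss gradient with respect to the input (the FGSM idea). Below is a self-contained numpy sketch against a toy logistic classifier, where the closed-form logistic gradient stands in for a real framework's autograd; the weights and input are illustrative, not from any surveyed model.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, w, b, y_true, eps):
    """One gradient-sign step: x_adv = x + eps * sign(dL/dx).

    For logistic regression with binary cross-entropy loss,
    dL/dx = (sigmoid(w.x + b) - y_true) * w, so no autograd is needed.
    """
    grad_x = (sigmoid(w @ x + b) - y_true) * w
    return x + eps * np.sign(grad_x)

# Illustrative "victim model" and clean input (true class: 1).
w = np.array([1.5, -2.0, 0.5])
b = 0.0
x = np.array([0.4, -0.3, 0.2])
y_true = 1.0

p_clean = sigmoid(w @ x + b)            # true-class confidence before attack
x_adv = fgsm(x, w, b, y_true, eps=0.5)
p_adv = sigmoid(w @ x_adv + b)          # confidence after one FGSM step

print(f"clean confidence {p_clean:.3f} -> adversarial confidence {p_adv:.3f}")
```

The libraries differ mainly in which models they can differentiate through (see the "Target model" rows above) and in which refinements of this step they implement (iterative variants, norm constraints, and the score- and decision-based alternatives that drop the gradient requirement entirely).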
Cat. | Attack | Target | Motivation | Access | Data and Capabilities |
---|---|---|---|---|---|
AMC | Black-box [170,203]; white-box [204,205] | Supervised: CNN [31,78], LSTM [32], fully-connected neural network [59] | Targeted modulation misclassification; any misclassification independent of the target label | Broadcast signal over the air | Black-box: some channels between the attacker and the receiver, ability to retrieve the modulation type predicted by the target classifier; white-box: the same as in the black-box attack plus the target model, the exact input at the receiver, the channel between the attacker and the receiver |
Channel coding | Black-box [171,204]; white-box [204] | Unsupervised: fully-connected autoencoder [171,204]; RL: policy gradients [204] | Any error in decoding the message transmitted | Broadcast signal over the air | Black-box: distribution of channel realisations between the attacker and the receiver, the set of all possible messages; white-box: the same as in the black-box attack plus the target model |
Beamforming | White-box [206] | Supervised: fully-connected neural network [64] | Any misclassification; targeted misclassification to select one of the worst beams | Broadcast signal over the air | RSS values over the subset of beams selected, the target model, sensing the target model predictions |
White-box [207] | Supervised: fully-connected neural network [63] | Maximise the error between the real achievable rate and the target model output | Perturb the model input at the cloud | The combined feature vector of omni-received sequences collected from all the BSs, the target model | |
Channel estimation | White-box [202] | Unsupervised: CNN autoencoder | Maximise the error between the real CSI matrix and the target model output | Broadcast signal over the air | The target decoder model, the channel between the attacker and the BS, the codeword (the CSI in the latent space) returned by the encoder |
Power allocation | Black-box, white-box [209] | Supervised: fully-connected neural network [37] | Make the sum of the target model outputs greater than the total power available to the BS | Spoof GNSS UE positions | Black-box: UE positions, sensing the target model outputs; white-box: the same as in the black-box attack plus the target model |
White-box [71] | Supervised: fully-connected neural network [71] | Minimise the maximum achievable rate of the UEs | Broadcast over the air | The target model, having perfect knowledge of the channels between the BS and UEs | |
Black-box [211] | Reinforcement: DQN [70,85] | Minimise the achievable sum-rate of all the transmitters | Broadcast over the air | SNR levels at different channels, the sum-rate returned by the target model once an action is carried out by the adversary | |
Scheduling | Black-box [9] | Supervised: fully-connected neural network [9] | Any misclassification independent of the target label | Broadcast signal over the air | The exact input to the target model, the channel between the ESC and the adversary, sensing whether the UEs transmit or not |
Slicing | Black-box [9] | Supervised: fully-connected neural network [9] | Targeted misclassification: fool the target model to classify the adversary’s signal as an authenticated one | Broadcast signal over the air | UE signals, the target model decisions
Black-box [210] | Reinforcement: DQN [98] | Maximise the number of fake requests served | Transmit over the network | The number of available PRBs, obtaining the number of the fake requests served | |
Black-box [212] | Reinforcement: DQN [98] | Minimise the number of legitimate requests served | Transmit over the network | The number of available PRBs, sensing whether the legitimate UE requests have been served or not | |
Security | Black-box [167] | Supervised: CNN, RNN [41,45] | Targeted misclassification: fool the target model to classify each malicious flow as a legitimate one | Transmit over the network | Botnet flows, allowed flow perturbations that do not break flow format, capability to determine whether the flow is classified as malicious or legitimate
Localization | Black-box, white-box [213] | Supervised: CNN+RNN [84] | Maximise the error between the real travel time and the target model output | Spoof UE GPS locations | Black-box: GPS trajectory points, output of the target model; white-box: the same as in the black-box attack plus the parameters and gradient information of the target model |
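Most of the black-box attacks in the table only require query access to the victim's confidence scores. The spirit of score-based greedy coordinate search (as in SimBA-style attacks) can be sketched as follows; the `target_score` function is a hypothetical logistic stand-in for the victim classifier, not a model from the surveyed literature.

```python
import numpy as np

def target_score(x, w=np.array([2.0, -1.0, 0.5, 1.0]), b=0.1):
    """Stand-in for the victim's true-class confidence (query-only oracle)."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

def greedy_coordinate_attack(x, eps=0.3, max_queries=100, seed=0):
    """Perturb one random coordinate at a time, keeping any change that
    lowers the true-class score, until it drops below 0.5 (misclassification)."""
    rng = np.random.default_rng(seed)
    x_adv = x.copy()
    best = target_score(x_adv)
    queries = 1
    while queries < max_queries and best > 0.5:
        i = int(rng.integers(len(x)))        # pick a random coordinate
        for delta in (eps, -eps):            # try both directions
            cand = x_adv.copy()
            cand[i] += delta
            score = target_score(cand)
            queries += 1
            if score < best:                 # keep perturbation if score drops
                x_adv, best = cand, score
                break
    return x_adv, best, queries

x = np.array([0.5, -0.2, 0.1, 0.3])          # clean input, true class = 1
x_adv, score, queries = greedy_coordinate_attack(x)
print(f"true-class score dropped to {score:.3f} after {queries} queries")
```

This is the weakest threat model in the table above — no gradients, no model parameters, only the predicted scores — which is why the score- and decision-based entries require so little attacker knowledge in the "Data and Capabilities" column.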
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Zolotukhin, M.; Zhang, D.; Hämäläinen, T.; Miraghaei, P. On Attacking Future 5G Networks with Adversarial Examples: Survey. Network 2023, 3, 39-90. https://doi.org/10.3390/network3010003