Special Issue "Applications of AI for 5G and Beyond Communications: Network Management, Operation, and Automation"

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: 15 September 2020.

Special Issue Editors

Prof. Dr. Yeong Min Jang
Guest Editor
Wireless Communications and Artificial Intelligence Lab., Kookmin University, Seongbuk-Gu, Seoul, Korea
Interests: artificial intelligence (AI); big data; internet of energy; health; 5G/6G wireless communications; multimedia; computer vision; IoT platforms
Dr. Mostafa Zaman Chowdhury
Guest Editor
Kookmin University, Seoul, Republic of Korea
Interests: small cell networks; convergence networks; 5G/6G communications; optical wireless communications; IoT; artificial intelligence
Prof. Dr. Takeo Fujii
Guest Editor
Advanced Wireless and Communication Research Center (AWCC), The University of Electro-Communications, Tokyo 182-8585, Japan
Interests: wireless ad-hoc network; cognitive radio; wireless sensing technology; wireless network protocol; mobile network communications; ITS and software radio
Prof. Dr. Juan-Carlos Cano
Guest Editor
Department of Computer Engineering, Universitat Politècnica de València, 46022 Valencia, Spain
Interests: wireless networks; intelligent transport systems (ITS); design, modeling, and implementation of computer networks; power-aware routing protocols; quality of service for mobile ad hoc networks; pervasive computing; protocols for unmanned aerial vehicles

Special Issue Information

Dear Colleagues,

Artificial Intelligence (AI) and Machine Learning (ML) are among the fastest-growing and most in-demand techniques in the development of information and communication technology. Recent advances in AI and ML are delivering solutions to tasks that once seemed impossible. Applying AI techniques to wireless communications will facilitate automation in network management and operations. Fifth-Generation (5G) and beyond communication systems are expected to provide services with massive connectivity, ultra-high data rates, ultra-low latency, extremely high security, and extremely low energy consumption. These goals will be very difficult to achieve without automation of the network systems, and applications of AI techniques in communication technologies are expected to make such automation possible. AI can provide intelligent solutions for the design, management, and optimization of wireless resources, and AI/ML techniques will improve how network management, operation, and automation are currently performed. They will also provide a strong platform for supporting software-defined networking (SDN) and network function virtualization (NFV), which are considered key technologies for the deployment of 5G and beyond communication systems. Finally, AI techniques can handle the increased complexity arising from the presence of heterogeneous network systems.

This special issue calls for high-quality, unpublished research on recent advances in the application of AI to heterogeneous wireless communication systems. Contributions may identify and solve open research problems, propose efficient novel solutions, and provide performance evaluations and comparisons with existing solutions. Both theoretical and experimental studies of established and newly emerging AI techniques, as well as use cases enabled by recent advances in wireless communications, are encouraged. High-quality review papers are also welcome.

Potential topics include, but are not limited to, the following:

  • Theoretical approaches and methodologies for AI-enabled communication systems
  • AI and ML for network management
  • AI-enabled network design and architecture
  • AI and ML in wireless communications and networking
  • Radio resource management
  • AI-enabled SDN and NFV
  • AI-enabled dynamic network slicing
  • AI-enabled security methods for IoT
  • AI-based network intelligence for IoT
  • Sequential analysis and reinforcement learning for wireless communications
  • AI-enabled ultra-dense networks
  • Big data-enabled wireless networking
  • AI-enabled network pricing models
  • Graph computing for communication networks
  • Signal processing over networks and graphs
  • Energy-efficient network operation

Prof. Dr. Yeong Min Jang
Dr. Mostafa Zaman Chowdhury
Prof. Dr. Takeo Fujii
Prof. Dr. Juan-Carlos Cano
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Artificial Intelligence
  • 5G communication
  • network management

Published Papers (7 papers)


Research

Open Access Article
Effective Feature Selection Method for Deep Learning-Based Automatic Modulation Classification Scheme Using Higher-Order Statistics
Appl. Sci. 2020, 10(2), 588; https://doi.org/10.3390/app10020588 - 13 Jan 2020
Abstract
Automatic modulation classification (AMC) schemes have recently been considered in order to satisfy the requirements of commercial and military communication systems. As a result, various artificial intelligence algorithms, such as deep neural networks (DNNs), convolutional neural networks (CNNs), and recurrent neural networks (RNNs), have been studied to improve AMC performance. However, since the AMC process should operate in real time, its computational complexity must be kept low. Furthermore, there is a lack of research on reducing the complexity of the AMC process using data-mining methods. In this paper, we propose a correlation coefficient-based effective feature selection method that maintains classification performance while reducing the computational complexity of the AMC process. The proposed method calculates the correlation coefficients of second-, fourth-, and sixth-order cumulants with the proposed formula and selects effective features according to the calculated values. A deep learning-based AMC method is used to measure and compare classification performance. The simulation results indicate that the AMC performance of the proposed method is superior to that of conventional methods even though it uses a small number of features.
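As a rough illustration of the idea (the paper's exact selection formula is not reproduced in the abstract, so the Pearson-correlation measure, the threshold, and the greedy pass below are assumptions of this sketch), correlation-based selection over cumulant features can be written in a few lines:

```python
import numpy as np

def select_features(X, threshold=0.9):
    """Greedy correlation-based feature selection.

    Keeps a feature only if its absolute Pearson correlation with every
    already-selected feature is below `threshold`.
    X: (n_samples, n_features) matrix of, e.g., cumulant-based features.
    Returns the indices of the selected (effective) features.
    """
    corr = np.abs(np.corrcoef(X, rowvar=False))  # (F, F) correlation matrix
    selected = []
    for j in range(X.shape[1]):
        if all(corr[j, k] < threshold for k in selected):
            selected.append(j)
    return selected

# Toy demo: feature 1 nearly duplicates feature 0; feature 2 is independent.
rng = np.random.default_rng(0)
f0 = rng.normal(size=200)
X = np.column_stack([f0,
                     2.0 * f0 + 0.01 * rng.normal(size=200),
                     rng.normal(size=200)])
print(select_features(X))  # the redundant feature 1 is dropped -> [0, 2]
```

Dropping near-duplicate features this way shrinks the classifier's input dimension, which is the complexity reduction the paper targets.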

Open Access Article
Adaptive Natural Gradient Method for Learning of Stochastic Neural Networks in Mini-Batch Mode
Appl. Sci. 2019, 9(21), 4568; https://doi.org/10.3390/app9214568 - 28 Oct 2019
Abstract
The gradient descent method is an essential algorithm for the learning of neural networks. Among the diverse variations of gradient descent developed to accelerate learning, natural gradient learning is based on the theory of information geometry on the stochastic neuromanifold and is known to have ideal convergence properties. Despite its theoretical advantages, the pure natural gradient has some limitations that prevent its practical use. To obtain the explicit value of the natural gradient, one must know the true probability distribution of the input variables and calculate the inverse of a matrix whose size is the square of the number of parameters. Although an adaptive estimation of the natural gradient has been proposed as a solution, it was originally developed for the online learning mode, which is computationally inefficient for learning from large data sets. In this paper, we propose a novel adaptive natural gradient estimation for the mini-batch learning mode, which is commonly adopted for big data analysis. For two representative stochastic neural network models, we present explicit parameter update rules and a learning algorithm. Through experiments on three benchmark problems, we confirm that the proposed method has superior convergence properties compared to conventional methods.
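For orientation, the updates discussed above can be stated compactly: plain gradient descent and natural gradient descent differ only in the metric applied to the gradient, and the adaptive scheme replaces the exact inverse Fisher matrix with a running estimate. The equations below follow the standard online form of adaptive natural gradient learning; the paper's mini-batch rules extend this and may differ in detail.

```latex
% Plain gradient descent vs. natural gradient descent
\theta_{t+1} = \theta_t - \eta \, \nabla L(\theta_t)
\qquad
\theta_{t+1} = \theta_t - \eta \, G(\theta_t)^{-1} \nabla L(\theta_t)

% Fisher information metric (expectation over the true input distribution,
% which is unknown in practice)
G(\theta) = \mathbb{E}_{x}\!\left[ \nabla \log p(x;\theta)\,
            \nabla \log p(x;\theta)^{\top} \right]

% Adaptive (online) estimate of the inverse Fisher matrix
\hat{G}^{-1}_{t+1} = (1+\varepsilon_t)\,\hat{G}^{-1}_t
  - \varepsilon_t \left( \hat{G}^{-1}_t \nabla \log p \right)
                  \left( \hat{G}^{-1}_t \nabla \log p \right)^{\top}
```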

Open Access Article
A Reinforcement-Learning-Based Distributed Resource Selection Algorithm for Massive IoT
Appl. Sci. 2019, 9(18), 3730; https://doi.org/10.3390/app9183730 - 06 Sep 2019
Cited by 2
Abstract
Massive IoT, comprising large numbers of resource-constrained IoT devices, has gained great attention. IoT devices generate enormous traffic, which causes network congestion. To manage network congestion, multi-channel-based algorithms have been proposed. However, most existing multi-channel algorithms require strict synchronization and extra overhead for negotiating channel assignment, which poses significant challenges for resource-constrained IoT devices. In this paper, a distributed channel selection algorithm utilizing tug-of-war (TOW) dynamics is proposed to improve successful frame delivery across the whole network by letting IoT devices adaptively select suitable channels for communication. The proposed TOW dynamics-based channel selection algorithm has a simple reinforcement learning procedure that only needs to receive the acknowledgment (ACK) frame, while requiring minimal memory and computation capability. Thus, the proposed algorithm can run on resource-constrained IoT devices. We prototype the proposed algorithm on an extremely resource-constrained single-board computer, hereafter called the cognitive-IoT prototype. Moreover, the cognitive-IoT prototype is densely deployed in a frequently changing radio environment for evaluation experiments. The evaluation results show that the prototype accurately and adaptively selects a suitable channel as the real environment varies, thereby improving the successful frame ratio of the network.
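The ACK-driven learning loop described above can be sketched in a few lines. The class below is a simplified stand-in for the paper's TOW dynamics, not a faithful reproduction: the displacement variables, decay factor, and greedy selection rule are assumptions of this sketch, and the oscillation term of TOW dynamics is omitted.

```python
import random

class ChannelSelector:
    """Minimal ACK-driven channel learner (simplified TOW-style dynamics)."""

    def __init__(self, n_channels, reward=1.0, penalty=1.0, decay=0.99):
        self.q = [0.0] * n_channels  # per-channel displacement variables
        self.reward, self.penalty, self.decay = reward, penalty, decay

    def select(self):
        # Greedily pick the channel with the largest displacement.
        return max(range(len(self.q)), key=lambda c: self.q[c])

    def update(self, channel, ack):
        # Decay old evidence so the learner can track a changing radio
        # environment, then reinforce or punish the channel just used.
        self.q = [v * self.decay for v in self.q]
        self.q[channel] += self.reward if ack else -self.penalty

# Toy environment: channel 2 delivers ACKs 90% of the time, the rest 30%.
env = random.Random(1)
sel = ChannelSelector(4)
for _ in range(2000):
    c = sel.select()
    ack = env.random() < (0.9 if c == 2 else 0.3)
    sel.update(c, ack)
print(sel.select())  # settles on the reliable channel
```

Only the ACK outcome feeds the update, and the state is one float per channel, which mirrors why such a scheme suits memory- and compute-constrained devices.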

Open Access Article
Machine Learning-Based Dimension Optimization for Two-Stage Precoder in Massive MIMO Systems with Limited Feedback
Appl. Sci. 2019, 9(14), 2894; https://doi.org/10.3390/app9142894 - 19 Jul 2019
Cited by 1
Abstract
A two-stage precoder is widely considered in frequency division duplex massive multiple-input multiple-output (MIMO) systems to resolve the channel feedback overhead problem. In massive MIMO systems, the users of a network can be divided into several groups with similar spatial antenna correlations. In the two-stage precoder, the outer precoder reduces the channel dimensions and mitigates inter-group interference at the first stage, while the inner precoder eliminates the smaller-dimension intra-group interference at the second stage. In this case, the dimension of the effective channel produced by the outer precoder is important, as it balances the inter-group interference, the intra-group interference, and the performance loss from quantized channel feedback. In this paper, we propose a machine learning framework to find the optimal reduced dimensions for the outer precoder that maximize the average sum rate, where the original problem is NP-hard. Our framework uses a deep neural network whose inputs are channel statistics and whose outputs are the effective channel dimensions after outer precoding. The numerical results show that the proposed machine learning-based dimension optimization achieves an average sum rate comparable to the optimal performance obtained by brute-force search, which is not feasible in practice.

Open Access Article
Payload-Based Traffic Classification Using Multi-Layer LSTM in Software Defined Networks
Appl. Sci. 2019, 9(12), 2550; https://doi.org/10.3390/app9122550 - 21 Jun 2019
Cited by 1
Abstract
Recently, with the advent of various Internet of Things (IoT) applications, a massive amount of network traffic is being generated, and a network operator must provide a different quality of service according to the service provided by each application. Toward this end, many studies have investigated how to classify various types of application network traffic accurately. In particular, since many applications in the IoT environment use temporary or dynamic IP addresses and port numbers, payload-based network traffic classification is more suitable than classification using packet header information. Furthermore, to respond automatically to various applications, it is necessary to classify traffic using deep learning without network operator intervention. In this study, we propose a traffic classification scheme using a deep learning model in software-defined networks. We generate flow-based payload datasets through our own network traffic pre-processing and train two deep learning models to perform network traffic classification: (1) a multi-layer long short-term memory (LSTM) model and (2) a combination of a convolutional neural network and a single-layer LSTM model. We also execute a model tuning procedure to find the optimal hyper-parameters of the two models. Lastly, we analyze the network traffic classification performance on the basis of the F1-score for the two deep learning models and show the superiority of the multi-layer LSTM model for network packet classification.
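The flow-based payload pre-processing step can be illustrated with a minimal sketch. Fixed-length truncation/zero-padding with byte normalization is a common choice for feeding packet payloads into LSTM/CNN models, though the paper's own pipeline may differ; the function name and the 784-byte length here are assumptions of this sketch.

```python
def payload_to_vector(payload: bytes, length: int = 784):
    """Truncate or zero-pad a flow's payload to `length` bytes and scale
    each byte to [0, 1], producing a fixed-size model input."""
    buf = payload[:length].ljust(length, b"\x00")  # pad short payloads
    return [b / 255.0 for b in buf]               # normalize byte values

# Example: the first bytes of an HTTP request become the flow's features.
vec = payload_to_vector(b"GET /index.html HTTP/1.1\r\n")
print(len(vec), vec[0])  # 784-element vector; vec[0] encodes the byte 'G'
```

Because the vector is built purely from payload bytes, the classifier stays usable even when IP addresses and port numbers are temporary or dynamic.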

Open Access Article
Reinforcement Learning Based Resource Management for Network Slicing
Appl. Sci. 2019, 9(11), 2361; https://doi.org/10.3390/app9112361 - 09 Jun 2019
Cited by 1
Abstract
Network slicing, which creates multiple virtual networks called network slices, is a promising technology for enabling networking resource sharing among multiple tenants in fifth-generation (5G) networks. By offering network slices to slice tenants, network slicing supports parallel services that meet the service level agreement (SLA). In legacy networks, every tenant pays a fixed, roughly estimated monthly or annual fee for shared resources according to a contract signed with a provider. However, such a fixed resource allocation mechanism may result in low resource utilization or violation of user quality of service (QoS) due to fluctuations in network demand. To address this issue, we introduce a resource management system for network slicing and propose a dynamic resource adjustment algorithm based on a reinforcement learning approach from each tenant's point of view. First, resource management for network slicing is modeled as a Markov Decision Process (MDP) with a state space, an action space, and a reward function. Then, we propose a Q-learning-based dynamic resource adjustment algorithm that aims to maximize the profit of tenants while ensuring the QoS requirements of end-users. The numerical simulation results demonstrate that the proposed algorithm can significantly increase tenants' profit compared to existing fixed resource allocation methods while satisfying the QoS requirements of end-users.
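The Q-learning core of such a dynamic adjustment algorithm can be sketched as follows. The state/action encoding and all numeric values below are illustrative assumptions for the sketch, not the paper's actual MDP; only the tabular update rule itself is standard.

```python
def q_learning_step(Q, s, a, r, s_next, alpha=0.1, gamma=0.9):
    """One tabular Q-learning update:
    Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    Q[s][a] += alpha * (r + gamma * max(Q[s_next]) - Q[s][a])

# Toy encoding (illustrative only): states = slice load {low, mid, high};
# actions = {0: release a resource unit, 1: hold, 2: acquire a unit}.
n_states, n_actions = 3, 3
Q = [[0.0] * n_actions for _ in range(n_states)]

# One experience: at mid load, acquiring a unit earned profit 5.0
# (revenue minus resource cost) and moved the slice to high load.
q_learning_step(Q, s=1, a=2, r=5.0, s_next=2)
print(Q[1][2])  # 0.5 after a single update of an all-zero table
```

In the paper's setting the reward would encode tenant profit subject to QoS, so the learned policy acquires resources only when demand justifies the cost, unlike a fixed monthly allocation.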

Open Access Article
A Novel Neural Network-Based Method for Decoding and Detecting of the DS8-PSK Scheme in an OCC System
Appl. Sci. 2019, 9(11), 2242; https://doi.org/10.3390/app9112242 - 30 May 2019
Cited by 1
Abstract
This paper proposes a novel method of training and applying a neural network to act as an adaptive decoder for a modulation scheme used in optical camera communication (OCC). We present a brief discussion of trending artificial intelligence applications, contemporary ways of applying them in wireless communication fields such as visible light communication (VLC), optical wireless communication (OWC), and OCC, and their potential contribution to the development of this research area. Furthermore, we propose an OCC vehicular system architecture with artificial intelligence (AI) functionalities, where dimmable spatial 8-phase shift keying (DS8-PSK) is employed as one of two modulation schemes forming a hybrid waveform. We then demonstrate in detail the simulation of the blurring process on a transmitter image, as well as our proposed method of using a neural network as a decoder for DS8-PSK. Finally, experimental results are given to prove the effectiveness and efficiency of the proposed method under the investigated channel conditions.
