Special Issue "Machine Learning for Communications"

A special issue of Entropy (ISSN 1099-4300). This special issue belongs to the section "Complexity".

Deadline for manuscript submissions: closed (31 August 2021) | Viewed by 6641

Special Issue Editor

Dr. Vaneet Aggarwal
Guest Editor
School of Industrial Engineering & School of Electrical and Computer Engineering, Purdue University, West Lafayette, IN 47907, USA
Interests: machine learning; communication networks; cloud computing; information and coding theory; autonomous transportation

Special Issue Information

Dear Colleagues,

Due to the proliferation of applications and services that run over communication networks—ranging from video streaming and data analytics to robotics and augmented reality—tomorrow’s networks will be faced with increasing challenges resulting from the explosive growth of data traffic demand with significantly varying performance requirements. This calls for more powerful, intelligent methods to enable novel network design, deployment, and management. To realize this vision, there is an increasing need to leverage recent developments in machine learning (ML), as well as other artificial intelligence (AI) techniques, and fully integrate them into the design and optimization of communication networks.

A growing number of recent proposals aim to harness ML to address various research problems, including traffic engineering, wireless optimization, congestion control, resource management, routing, transport protocol design, content distribution and caching, and user experience optimization. ML techniques allow future communication networks to exploit big data analytics and experience-driven decision making to enable more efficient, autonomous, and intelligent networks. As real-world communication systems become more complex, there are many situations with which a single learning agent or a monolithic system cannot cope, mandating the use of multiagent learning to yield the best results.

This Special Issue will focus on ML solutions to address problems in communication networks, cutting across various network layers, protocols, applications, and artifacts. Novel algorithms and analyses of learning-based approaches with applications to communication networks are highly encouraged. This Special Issue will accept unpublished original papers and comprehensive reviews focused (but not restricted) on the following research areas:

- Traffic engineering;

- Routing algorithms;

- Congestion control;

- Communication network resource management;

- Network optimization;

- Software-defined networks;

- Content distribution networks;

- Cloud and edge computing;

- Self-driving networks;

- Network security;

- Crowdsourcing/sensing systems;

- Blockchain.

Dr. Vaneet Aggarwal
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Entropy is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • machine learning
  • reinforcement learning
  • multiple agents
  • communication networks
  • network optimization

Published Papers (8 papers)


Editorial


Editorial
Machine Learning for Communications
Entropy 2021, 23(7), 831; https://doi.org/10.3390/e23070831 - 29 Jun 2021
Viewed by 884
Abstract
Due to the proliferation of applications and services that run over communication networks, ranging from video streaming and data analytics to robotics and augmented reality, tomorrow’s networks will be faced with increasing challenges resulting from the explosive growth of data traffic demand with significantly varying performance requirements [...]
(This article belongs to the Special Issue Machine Learning for Communications)

Research


Article
Adversarial Machine Learning for NextG Covert Communications Using Multiple Antennas
Entropy 2022, 24(8), 1047; https://doi.org/10.3390/e24081047 - 29 Jul 2022
Viewed by 416
Abstract
This paper studies the privacy of wireless communications from an eavesdropper that employs a deep learning (DL) classifier to detect transmissions of interest. There exists one transmitter that transmits to its receiver in the presence of an eavesdropper. In the meantime, a cooperative jammer (CJ) with multiple antennas transmits carefully crafted adversarial perturbations over the air to fool the eavesdropper into classifying the received superposition of signals as noise. While generating the adversarial perturbation at the CJ, multiple antennas are utilized to improve the attack performance in terms of fooling the eavesdropper. Two main points are considered while exploiting the multiple antennas at the adversary, namely the power allocation among antennas and the utilization of channel diversity. To limit the impact on the bit error rate (BER) at the receiver, the CJ puts an upper bound on the strength of the perturbation signal. Performance results show that this adversarial perturbation causes the eavesdropper to misclassify the received signals as noise with a high probability while increasing the BER at the legitimate receiver only slightly. Furthermore, the adversarial perturbation is shown to become more effective when multiple antennas are utilized.
(This article belongs to the Special Issue Machine Learning for Communications)
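As a toy illustration of the multi-antenna aspect described in this abstract, the sketch below steers a perturbation waveform across jammer antennas via maximum-ratio transmission (MRT) toward an assumed eavesdropper channel and rescales it to respect a total power budget. The channel model, power definition, and function names are illustrative assumptions, not the paper's actual scheme.

```python
import numpy as np

def mrt_perturbation(h_eve, delta, p_max):
    """Steer a baseband perturbation `delta` across antennas with MRT
    weights toward the eavesdropper channel `h_eve`, then rescale so
    the total transmit power stays within `p_max` (toy power model)."""
    w = np.conj(h_eve) / np.linalg.norm(h_eve)   # unit-norm MRT weights
    x = np.outer(w, delta)                       # per-antenna perturbation signal
    power = np.mean(np.abs(x) ** 2) * len(h_eve) # avg total power per sample
    if power > p_max:                            # enforce the power bound
        x *= np.sqrt(p_max / power)
    return x

rng = np.random.default_rng(0)
h = rng.normal(size=4) + 1j * rng.normal(size=4)    # hypothetical 4-antenna channel
d = rng.normal(size=64) + 1j * rng.normal(size=64)  # hypothetical perturbation samples
x = mrt_perturbation(h, d, p_max=1.0)
print(x.shape)  # (4, 64)
```

Concentrating the perturbation along the eavesdropper's channel direction is one simple way to spend the bounded power where it degrades the eavesdropper's classifier most.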

Article
Attaining Fairness in Communication for Omniscience
Entropy 2022, 24(1), 109; https://doi.org/10.3390/e24010109 - 11 Jan 2022
Viewed by 497
Abstract
This paper studies how to attain fairness in communication for omniscience, which models the multi-terminal compressed sensing problem and the coded cooperative data exchange problem, where a set of users exchange their observations of a discrete multiple random source to attain omniscience—the state in which all users recover the entire source. The optimal rate region containing all source coding rate vectors that achieve omniscience with the minimum sum rate is shown to coincide with the core (the solution set) of a coalitional game. Two game-theoretic fairness solutions are studied: the Shapley value and the egalitarian solution. It is shown that the Shapley value assigns each user the source coding rate measured by their remaining information of the multiple source given the common randomness that is shared by all users, while the egalitarian solution simply distributes the rates as evenly as possible in the core. To avoid the exponentially growing complexity of obtaining the Shapley value, a polynomial-time approximation method is proposed which utilizes the fact that the Shapley value is the mean value over all extreme points in the core. In addition, a steepest descent algorithm is proposed that converges in polynomial time to the fractional egalitarian solution in the core, which can be implemented by network coding schemes. Finally, it is shown that the game can be decomposed into subgames so that both the Shapley value and the egalitarian solution can be obtained within each subgame in a distributed manner with reduced complexity.
(This article belongs to the Special Issue Machine Learning for Communications)
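The abstract above notes that exact Shapley value computation grows exponentially with the number of users, while the value itself is a mean over extreme points of the core. A generic way to see the flavor of such approximations is Monte Carlo sampling over random player orderings, shown below on a hypothetical toy coalitional game (this is a standard estimator, not the paper's extreme-point method):

```python
import itertools, random

def shapley_exact(players, v):
    """Exact Shapley value: average marginal contribution over all
    orderings of the players (exponential in the number of players)."""
    phi = {p: 0.0 for p in players}
    perms = list(itertools.permutations(players))
    for order in perms:
        coalition = set()
        for p in order:
            phi[p] += v(coalition | {p}) - v(coalition)
            coalition.add(p)
    return {p: phi[p] / len(perms) for p in players}

def shapley_sampled(players, v, n_samples=2000, seed=0):
    """Monte Carlo approximation: sample random orderings instead of
    enumerating all of them (polynomial cost per sample)."""
    rng = random.Random(seed)
    phi = {p: 0.0 for p in players}
    for _ in range(n_samples):
        order = list(players)
        rng.shuffle(order)
        coalition = set()
        for p in order:
            phi[p] += v(coalition | {p}) - v(coalition)
            coalition.add(p)
    return {p: phi[p] / n_samples for p in players}

# Hypothetical toy game: a coalition's value is the square of its size.
v = lambda s: len(s) ** 2
exact = shapley_exact([1, 2, 3], v)
approx = shapley_sampled([1, 2, 3], v)
print(exact[1])   # symmetric game, so each player gets v(grand)/3 = 3.0
```

In the paper's setting the sampled objects are extreme points of the optimal rate region rather than arbitrary orderings, but the averaging principle is the same.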

Article
Scheduling and Power Control for Wireless Multicast Systems via Deep Reinforcement Learning
Entropy 2021, 23(12), 1555; https://doi.org/10.3390/e23121555 - 23 Nov 2021
Viewed by 633
Abstract
Multicasting in wireless systems is a natural way to exploit the redundancy in user requests in a content-centric network. Power control and optimal scheduling can significantly improve the wireless multicast network’s performance under fading. However, the model-based approaches for power control and scheduling studied earlier are not scalable to large state spaces or changing system dynamics. In this paper, we use deep reinforcement learning, where we use function approximation of the Q-function via a deep neural network to obtain a power control policy that matches the optimal policy for a small network. We show that the power control policy can be learned for reasonably large systems via this approach. Further, we use multi-timescale stochastic optimization to maintain the average power constraint. We demonstrate that a slight modification of the learning algorithm allows tracking of time-varying system statistics. Finally, we extend the multi-timescale approach to simultaneously learn the optimal queuing strategy along with power control. We demonstrate the scalability, tracking, and cross-layer optimization capabilities of our algorithms via simulations. The proposed multi-timescale approach can be used in general large state-space dynamical systems with multiple objectives and constraints, and may be of independent interest.
(This article belongs to the Special Issue Machine Learning for Communications)
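The multi-timescale idea in this abstract can be illustrated, in heavily simplified tabular form, as Q-learning on a fast timescale combined with dual ascent on a Lagrange multiplier that enforces the average power constraint on a slow timescale. The fading model, power levels, and step sizes below are toy assumptions, and the paper's deep-network function approximation is replaced by a small table:

```python
import numpy as np

rng = np.random.default_rng(1)
gains = np.array([0.5, 1.0, 2.0])        # toy fading channel states
powers = np.array([0.0, 0.5, 1.0, 2.0])  # discrete transmit power levels
P_AVG = 0.8                              # average power budget

Q = np.zeros((len(gains), len(powers)))
lam = 0.0                                # Lagrange multiplier for the constraint
avg_power = 0.0

for t in range(1, 20_001):
    s = rng.integers(len(gains))                   # i.i.d. fading state
    a = rng.integers(len(powers)) if rng.random() < 0.1 else int(Q[s].argmax())
    rate = np.log2(1.0 + gains[s] * powers[a])     # Shannon-style rate reward
    reward = rate - lam * powers[a]                # Lagrangian reward
    # Fast timescale: Q-update (stateless one-step bandit form for brevity).
    Q[s, a] += 0.05 * (reward - Q[s, a])
    # Slow timescale: dual ascent on the average-power constraint.
    avg_power += (powers[a] - avg_power) / t
    lam = max(0.0, lam + (1.0 / t ** 0.6) * (powers[a] - P_AVG))

print(round(float(avg_power), 2), round(float(lam), 2))
```

The multiplier rises whenever transmissions exceed the budget and falls otherwise, so the greedy policy is steered toward the average power constraint without hand-tuning a penalty weight.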

Article
Wireless Network Optimization for Federated Learning with Model Compression in Hybrid VLC/RF Systems
Entropy 2021, 23(11), 1413; https://doi.org/10.3390/e23111413 - 27 Oct 2021
Cited by 1 | Viewed by 974
Abstract
In this paper, the optimization of network performance to support the deployment of federated learning (FL) is investigated. In particular, in the considered model, each user trains a machine learning (ML) model on its own dataset and then transmits its ML parameters to a base station (BS), which aggregates the ML parameters to obtain a global ML model and transmits it to each user. Due to limited radio frequency (RF) resources, the number of users that participate in FL is restricted. Meanwhile, each user's uploading and downloading of the FL parameters may increase communication costs, thus reducing the number of participating users. To this end, we propose to introduce visible light communication (VLC) as a supplement to RF, and we use compression methods to reduce the resources needed to transmit FL parameters over wireless links so as to further improve the communication efficiency and simultaneously optimize the wireless network through user selection and resource allocation. This user selection and bandwidth allocation problem is formulated as an optimization problem whose goal is to minimize the training loss of FL. We first use a model compression method to reduce the size of the FL model parameters that are transmitted over wireless links. Then, the optimization problem is separated into two subproblems. The first subproblem is a user selection problem with a given bandwidth allocation, which is solved by a traversal algorithm. The second subproblem is a bandwidth allocation problem with a given user selection, which is solved by a numerical method. The final user selection and bandwidth allocation are obtained by iteratively compressing the model and solving these two subproblems. Simulation results show that the proposed FL algorithm can improve the accuracy of object recognition by up to 16.7% and increase the number of selected users by up to 68.7%, compared to a conventional FL algorithm using only RF.
(This article belongs to the Special Issue Machine Learning for Communications)
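One common family of model compression methods for FL, of the kind this abstract refers to generically, is top-k sparsification: each user sends only the k largest-magnitude parameters, as index-value pairs, instead of the dense vector. The sketch below is a generic illustration under that assumption, not necessarily the compression method used in the paper:

```python
import numpy as np

def top_k_sparsify(weights, k):
    """Keep only the k largest-magnitude parameters; transmit their
    indices and values instead of the full dense vector."""
    idx = np.argsort(np.abs(weights))[-k:]   # indices of k largest entries
    return idx, weights[idx]

def decompress(idx, vals, n):
    """Rebuild a dense vector from the sparse (index, value) pairs,
    filling the dropped positions with zeros."""
    out = np.zeros(n)
    out[idx] = vals
    return out

w = np.array([0.1, -2.0, 0.05, 1.5, -0.3])   # hypothetical model update
idx, vals = top_k_sparsify(w, k=2)
w_hat = decompress(idx, vals, n=len(w))
print(w_hat)   # only the two largest-magnitude weights survive
```

With k much smaller than the model size, each upload shrinks from n floats to k index-value pairs, which is the communication saving the paper's user selection and bandwidth allocation build on.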

Article
Secure OFDM with Peak-to-Average Power Ratio Reduction Using the Spectral Phase of Chaotic Signals
Entropy 2021, 23(11), 1380; https://doi.org/10.3390/e23111380 - 21 Oct 2021
Cited by 1 | Viewed by 536
Abstract
In this paper, a new physical layer security technique is proposed for Orthogonal Frequency Division Multiplexing (OFDM) communication systems. The security is achieved by modifying the OFDM symbols using the phase information of chaos in the frequency spectrum. In addition, this scheme reduces the Peak-to-Average Power Ratio (PAPR), which is one of the major drawbacks of OFDM. The Selected Mapping (SLM) technique for PAPR reduction is employed to exploit the random characteristics of chaotic sequences. The reduction with this algorithm is shown to be similar to that of other SLM schemes, but it has lower computational complexity, and side information does not have to be sent to the receiver. The security of this technique stems from the noise-like behavior of chaotic sequences and their dependence on the initial conditions of the chaotic generator (which are used as the key). Even a slight difference in the initial conditions will result in a different phase sequence, which prevents an eavesdropper from recovering the transmitted OFDM symbols.
(This article belongs to the Special Issue Machine Learning for Communications)
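The SLM idea in this abstract can be sketched as follows: candidate phase sequences are drawn from a chaotic generator seeded by a shared key, each candidate rotates the frequency-domain symbols, and the lowest-PAPR candidate is transmitted. The logistic map, candidate count, and key values below are illustrative assumptions, not the paper's exact construction:

```python
import numpy as np

def logistic_phases(x0, n, r=3.99):
    """Iterate the logistic map from seed x0 (the shared key) and map
    the chaotic sequence to unit-modulus phase rotations."""
    x, seq = x0, []
    for _ in range(n):
        x = r * x * (1 - x)
        seq.append(x)
    return np.exp(2j * np.pi * np.array(seq))

def papr_db(x):
    """Peak-to-average power ratio of a time-domain signal, in dB."""
    return 10 * np.log10(np.max(np.abs(x) ** 2) / np.mean(np.abs(x) ** 2))

def slm_transmit(symbols, n_candidates=8, key=0.37):
    """SLM: try several chaotic phase rotations of the frequency-domain
    symbols and transmit the candidate with the lowest PAPR."""
    best = np.fft.ifft(symbols)          # keep the unrotated signal as fallback
    best_papr = papr_db(best)
    for i in range(n_candidates):
        phases = logistic_phases(key + i * 1e-3, len(symbols))
        x = np.fft.ifft(symbols * phases)  # candidate OFDM time-domain signal
        p = papr_db(x)
        if p < best_papr:
            best, best_papr = x, p
    return best, best_papr

rng = np.random.default_rng(2)
qpsk = (rng.integers(0, 2, 64) * 2 - 1 + 1j * (rng.integers(0, 2, 64) * 2 - 1)) / np.sqrt(2)
baseline = papr_db(np.fft.ifft(qpsk))
_, reduced = slm_transmit(qpsk)
print(round(float(baseline - reduced), 2))  # PAPR reduction in dB
```

Because a receiver holding the same key can regenerate each candidate phase sequence, no side information about the selected candidate needs to be transmitted, which matches the claim in the abstract.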

Article
Reinforcement Learning-Based Multihop Relaying: A Decentralized Q-Learning Approach
Entropy 2021, 23(10), 1310; https://doi.org/10.3390/e23101310 - 6 Oct 2021
Cited by 2 | Viewed by 744
Abstract
Conventional optimization-based relay selection for multihop networks cannot resolve the conflict between performance and cost. The optimal selection policy is centralized and requires local channel state information (CSI) of all hops, leading to high computational complexity and signaling overhead. Other optimization-based decentralized policies cause non-negligible performance loss. In this paper, we exploit the benefits of reinforcement learning in relay selection for multihop clustered networks and aim to achieve high performance with limited costs. The multihop relay selection problem is modeled as a Markov decision process (MDP) and solved by a decentralized Q-learning scheme with a rectified update function. Simulation results show that this scheme achieves a near-optimal average end-to-end (E2E) rate. Cost analysis reveals that it also reduces computational complexity and signaling overhead compared with the optimal scheme.
(This article belongs to the Special Issue Machine Learning for Communications)
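A minimal sketch of decentralized Q-learning for relay selection, in the spirit of this abstract: each hop keeps its own Q-table and updates it from locally observed rates only, with no global CSI exchange. The link rates, exploration schedule, and the plain (unrectified) update rule below are toy assumptions, not the paper's scheme:

```python
import numpy as np

rng = np.random.default_rng(3)
N_HOPS, N_RELAYS = 3, 4
# Hypothetical per-hop link rates; the best relay differs per hop.
link_rate = np.array([[0.6, 1.8, 1.0, 0.8],
                      [1.5, 0.7, 0.9, 1.1],
                      [0.8, 1.0, 1.9, 1.2]])

# One independent Q-table per hop: each cluster learns from local
# feedback only, avoiding centralized CSI collection.
Q = np.zeros((N_HOPS, N_RELAYS))

for t in range(5000):
    eps = max(0.05, 1.0 - t / 2000)                # decaying exploration rate
    for h in range(N_HOPS):
        a = rng.integers(N_RELAYS) if rng.random() < eps else int(Q[h].argmax())
        r = link_rate[h, a] + rng.normal(0, 0.05)  # noisy observed hop rate
        Q[h, a] += 0.1 * (r - Q[h, a])             # local Q-update only

learned = [int(Q[h].argmax()) for h in range(N_HOPS)]
e2e = min(link_rate[h, learned[h]] for h in range(N_HOPS))  # bottleneck hop
print(learned, round(float(e2e), 2))
```

The end-to-end rate is limited by the weakest hop, so each hop independently converging to its best relay also maximizes the E2E rate in this toy setting.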

Article
BeiDou Short-Message Satellite Resource Allocation Algorithm Based on Deep Reinforcement Learning
Entropy 2021, 23(8), 932; https://doi.org/10.3390/e23080932 - 22 Jul 2021
Cited by 3 | Viewed by 927
Abstract
The comprehensively completed BDS-3 short-message communication system, known as the short-message satellite communication system (SMSCS), will be widely used in traditional blind communication areas in the future. However, short-message processing resources for short-message satellites are relatively scarce. To improve the resource utilization of the satellite system and ensure adequate service quality for short-message terminals, it is necessary to allocate and schedule short-message satellite processing resources in a multi-satellite coverage area. To solve the above problems, a short-message satellite resource allocation algorithm based on deep reinforcement learning (DRL-SRA) is proposed. First, using the characteristics of the SMSCS, a multi-objective joint optimization satellite resource allocation model is established to reduce short-message terminal path transmission loss and achieve satellite load balancing and an adequate quality of service. Then, the number of input data dimensions is reduced using a region division strategy and a feature extraction network. The continuous spatial state is parameterized with a deep reinforcement learning algorithm based on the deep deterministic policy gradient (DDPG) framework. The simulation results show that the proposed algorithm can reduce the transmission loss of the short-message terminal path, improve the quality of service, and increase the resource utilization efficiency of the short-message satellite system while ensuring an appropriate satellite load balance.
(This article belongs to the Special Issue Machine Learning for Communications)
