Machine Learning and Cognitive Networking

A special issue of Telecom (ISSN 2673-4001).

Deadline for manuscript submissions: closed (30 June 2022) | Viewed by 17210

Special Issue Editor

Khoury College of Computer Sciences, Northeastern University, Vancouver, BC, Canada
Interests: optimization of computer networks; data analytics; machine learning; deep learning; optical networks

Special Issue Information

Dear Colleagues,

The rapid growth of cloud services with high bitrate requirements substantially affects transport networks. To overcome the looming capacity crunch in transport networks, new cognitive models need to be developed that can extract valuable information from comprehensive sets of network data. A cognitive network draws on advanced analytical methods from several research areas (e.g., deep learning, data analytics, knowledge representation, telecommunications, and network management) to solve modern problems in communication networks. Its cognitive processes learn from historical data to improve performance, applying data analytics solutions that typically rely on machine learning techniques. In particular, data analytics (DA), machine learning (ML), and deep learning (DL) are regarded as promising methodological areas for cognitive network data analysis, enabling, for example, automated network self-configuration and fault management.

This Special Issue solicits the submission of high-quality and unpublished papers that aim to solve open technical problems and challenges related to cognitive networking. Both theoretical and experimental studies are encouraged, as well as high-quality review and survey papers.

Dr. Michał Aibin
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Telecom is an international peer-reviewed open access quarterly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1200 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Machine learning and artificial intelligence (AI) for communications and networking
  • Machine learning for communication and network resource optimization
  • Machine learning for networking in the Internet of Things (IoT)
  • Distributed learning, reasoning, and optimization for communications and networking
  • Architecture, protocols, cross-layer, and cognition cycle design for intelligent communications and networking
  • Machine learning for next-generation wireless networks such as 5G and 6G networks
  • Security and privacy issues in intelligent communications and networking
  • Network tools, testbeds, and performance evaluation based on AI and machine learning
  • Data-driven traffic prediction for cognitive networks
  • AI-based performance evaluation

Published Papers (4 papers)


Research

17 pages, 1219 KiB  
Article
An Unsupervised Machine Learning Approach for UAV-Aided Offloading of 5G Cellular Networks
by Lefteris Tsipi, Michail Karavolos and Demosthenes Vouyioukas
Telecom 2022, 3(1), 86-102; https://doi.org/10.3390/telecom3010005 - 20 Jan 2022
Cited by 7 | Viewed by 3707
Abstract
Today’s terrestrial cellular communications networks face difficulties in serving coexisting users and devices due to the enormous demands of mass connectivity. Further, natural disasters and unexpected events lead to an unpredictable amount of data traffic, thus causing congestion in the network. In such cases, the addition of on-demand network entities, such as fixed or aerial base stations, has been proposed as a viable solution for managing high data traffic and offloading the existing terrestrial infrastructure. This paper presents an unmanned aerial vehicle (UAV)-aided offloading strategy for the terrestrial network, utilizing an unsupervised machine learning method for the best placement of UAVs in sites with high data traffic. The proposed scheme forms clusters of users located in the affected area using the k-medoids algorithm. Subsequently, based on the number of available UAVs, a cluster selection scheme determines where the available UAVs will be deployed to achieve maximum offloading in the system. Comparisons with traditional offloading strategies integrating terrestrial picocells and other UAV-aided schemes show that significant offloading, throughput, spectral efficiency, and sum rate gains can be harvested through the proposed method under a varying number of UAVs.
(This article belongs to the Special Issue Machine Learning and Cognitive Networking)
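The clustering and selection steps described in the abstract can be pictured with a brief sketch. The following Python snippet is a minimal, hypothetical example of the general idea only: group user positions with a simple k-medoids routine and serve the most crowded clusters when only a few UAVs are available. The data, function names, and selection rule are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch (not the paper's code): k-medoids clustering of user positions,
# then selecting the densest clusters as UAV sites. All values are illustrative.
import numpy as np

def k_medoids(points, k, n_iter=100, seed=0):
    """Simple alternating k-medoids: assign points to the nearest medoid,
    then move each medoid to the member minimizing intra-cluster distance."""
    rng = np.random.default_rng(seed)
    medoids = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(n_iter):
        # Assignment step: nearest medoid by Euclidean distance.
        d = np.linalg.norm(points[:, None, :] - medoids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        new_medoids = medoids.copy()
        for c in range(k):
            members = points[labels == c]
            if len(members) == 0:
                continue
            # Update step: member with the smallest summed distance to its cluster.
            intra = np.linalg.norm(members[:, None, :] - members[None, :, :], axis=2).sum(axis=1)
            new_medoids[c] = members[intra.argmin()]
        if np.allclose(new_medoids, medoids):
            break
        medoids = new_medoids
    return medoids, labels

# Toy scenario: 300 users in a 1 km x 1 km affected area, 5 candidate clusters,
# but only 3 UAVs available -> serve the three most crowded clusters.
users = np.random.default_rng(1).uniform(0, 1000, size=(300, 2))
medoids, labels = k_medoids(users, k=5)
cluster_sizes = np.bincount(labels, minlength=5)
uav_sites = medoids[np.argsort(cluster_sizes)[::-1][:3]]  # hypothetical selection rule
print("UAV placement candidates (x, y in metres):\n", uav_sites)
```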

18 pages, 582 KiB  
Article
XGB-RF: A Hybrid Machine Learning Approach for IoT Intrusion Detection
by Jabed Al Faysal, Sk Tahmid Mostafa, Jannatul Sultana Tamanna, Khondoker Mirazul Mumenin, Md. Mashrur Arifin, Md. Abdul Awal, Atanu Shome and Sheikh Shanawaz Mostafa
Telecom 2022, 3(1), 52-69; https://doi.org/10.3390/telecom3010003 - 04 Jan 2022
Cited by 26 | Viewed by 4662
Abstract
In the past few years, Internet of Things (IoT) devices have evolved rapidly, and their use is increasing steeply as they make our daily activities easier than ever. However, numerous security flaws persist in IoT devices because the majority of them lack the memory and computing resources necessary for adequate security operations. As a result, IoT devices are affected by a variety of attacks. A single attack on network systems or devices can lead to significant damage to data security and privacy. Machine learning techniques, however, can be applied to detect IoT attacks. In this paper, a hybrid machine learning scheme called XGB-RF is proposed for detecting intrusion attacks. The proposed hybrid method was applied to the N-BaIoT dataset containing hazardous botnet attacks. Random forest (RF) was used for feature selection, and an eXtreme Gradient Boosting (XGB) classifier was used to detect different types of attacks in IoT environments. The performance of the proposed XGB-RF scheme was evaluated on several metrics, and the model successfully detects 99.94% of the attacks. After comparison with state-of-the-art algorithms, the proposed model achieved better performance on every metric. As the proposed scheme is capable of detecting botnet attacks effectively, it can significantly contribute to reducing the security concerns associated with IoT systems.
(This article belongs to the Special Issue Machine Learning and Cognitive Networking)
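The RF-for-selection plus XGBoost-for-classification pattern described above can be sketched briefly. The snippet below is a minimal, hypothetical pipeline using scikit-learn and the xgboost package on synthetic data standing in for the N-BaIoT features; the threshold and hyperparameters are assumptions, not the paper's settings.

```python
# Minimal sketch of the RF feature-selection + XGBoost classification idea
# (not the paper's exact pipeline). Synthetic data stands in for N-BaIoT traffic.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from xgboost import XGBClassifier

# Stand-in for pre-processed traffic features (benign vs. botnet).
X, y = make_classification(n_samples=5000, n_features=40, n_informative=12,
                           random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)

# Step 1: a random forest ranks features; keep those above median importance.
rf = RandomForestClassifier(n_estimators=200, random_state=42)
selector = SelectFromModel(rf, threshold="median").fit(X_tr, y_tr)
X_tr_sel, X_te_sel = selector.transform(X_tr), selector.transform(X_te)

# Step 2: an XGBoost classifier is trained on the reduced feature set.
xgb = XGBClassifier(n_estimators=300, max_depth=6, learning_rate=0.1,
                    eval_metric="logloss")
xgb.fit(X_tr_sel, y_tr)
print("Accuracy on held-out data:", accuracy_score(y_te, xgb.predict(X_te_sel)))
```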

18 pages, 544 KiB  
Article
A Survey on Traffic Prediction Techniques Using Artificial Intelligence for Communication Networks
by Aaron Chen, Jeffrey Law and Michal Aibin
Telecom 2021, 2(4), 518-535; https://doi.org/10.3390/telecom2040029 - 03 Dec 2021
Cited by 21 | Viewed by 5294
Abstract
Much research effort has been conducted to introduce intelligence into communication networks in order to enhance network performance. Communication networks, both wired and wireless, are ever-expanding as more devices are increasingly connected to the Internet. This survey introduces machine learning and the motivations behind it for creating cognitive networks. We then discuss machine learning and statistical techniques to predict future traffic and classify each as a short-term or long-term application. The techniques are further sub-categorized by their applicability to Local or Wide Area Networks. This paper aims to consolidate and present an overview of existing techniques to stimulate further applications in real-world networks.
(This article belongs to the Special Issue Machine Learning and Cognitive Networking)
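As a toy illustration of the short-term prediction category discussed in the survey, the sketch below predicts the next interval's link load from a sliding window of past measurements using a simple regression baseline. The synthetic traffic trace, window length, and model choice are assumptions for illustration only and are not drawn from the survey.

```python
# Minimal sketch of short-term traffic prediction: predict the next interval's
# link load from a sliding window of past measurements. Purely illustrative.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
t = np.arange(2000)
# Synthetic diurnal-style traffic (Gb/s): daily cycle + slow trend + noise.
traffic = 10 + 4 * np.sin(2 * np.pi * t / 288) + 0.001 * t + rng.normal(0, 0.5, t.size)

window = 12  # last 12 samples, e.g., one hour at 5-minute granularity
X = np.lib.stride_tricks.sliding_window_view(traffic[:-1], window)
y = traffic[window:]  # the value immediately after each window

split = int(0.8 * len(y))
model = Ridge(alpha=1.0).fit(X[:split], y[:split])
pred = model.predict(X[split:])
print("MAE on held-out intervals (Gb/s):",
      round(mean_absolute_error(y[split:], pred), 3))
```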

26 pages, 27939 KiB  
Article
Multi-Tier Cellular Handover with Multi-Access Edge Computing and Deep Learning
by Percy Kapadia and Boon-Chong Seet
Telecom 2021, 2(4), 446-471; https://doi.org/10.3390/telecom2040026 - 15 Nov 2021
Cited by 2 | Viewed by 2729
Abstract
This paper proposes a potential enhancement of handover for the next-generation multi-tier cellular network, utilizing two fifth-generation (5G) enabling technologies: multi-access edge computing (MEC) and machine learning (ML). MEC and ML techniques are the primary enablers for enhanced mobile broadband (eMBB) and ultra-reliable and low latency communication (URLLC). The subset of ML chosen for this research is deep learning (DL), as it is adept at learning long-term dependencies. A variant of artificial neural networks called a long short-term memory (LSTM) network is used in conjunction with a look-up table (LUT) as part of the proposed solution. Subsequently, edge computing virtualization methods are utilized to reduce handover latency and increase the overall throughput of the network. A realistic simulation of the proposed solution in a multi-tier 5G radio access network (RAN) showed a 40–60% improvement in overall throughput. Although the proposed scheme may increase the number of handovers, it is effective in reducing the handover failure (HOF) and ping-pong rates by 30% and 86%, respectively, compared to the current 3GPP scheme.
(This article belongs to the Special Issue Machine Learning and Cognitive Networking)
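The LSTM component of the proposed scheme can be pictured with a short sketch. The snippet below is a minimal, hypothetical Keras model that predicts the next serving cell from a history of per-cell signal measurements; the synthetic data, toy labels, and network size are assumptions and do not reproduce the authors' LSTM/LUT design.

```python
# Minimal sketch of an LSTM-based handover-target predictor (illustrative only).
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
n_cells, seq_len, n_seq = 4, 10, 2000

# Synthetic RSRP-like traces (dBm) for a user moving among n_cells cells.
X = rng.normal(-95, 8, size=(n_seq, seq_len, n_cells)).astype("float32")
# Toy label: the cell with the strongest most recent measurement.
y = X[:, -1, :].argmax(axis=1)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(seq_len, n_cells)),
    tf.keras.layers.LSTM(32),                              # temporal dependencies
    tf.keras.layers.Dense(n_cells, activation="softmax"),  # next serving cell
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=3, batch_size=64, validation_split=0.2, verbose=0)

# In the paper's setting, the predicted target cell could be looked up against
# pre-provisioned MEC resources; here we only report the toy predictor's score.
print(model.evaluate(X, y, verbose=0))
```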
