Advances in Intelligence Networking and Computing

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Computer Science & Engineering".

Deadline for manuscript submissions: closed (31 March 2022) | Viewed by 12,988

Special Issue Editors


Prof. Dr. Jong-Deok Kim
Guest Editor
School of Computer Science and Engineering, Pusan National University, Busan 46241, Republic of Korea
Interests: wireless IoT communication; SON (self-organizing network)/KDN (knowledge-defined networking); smart factory

Prof. Dr. Mikio Hasegawa
Guest Editor
Department of Electrical Engineering, Tokyo University of Science, Tokyo 125-8585, Japan
Interests: chaos; neural networks; optimization; wireless communication systems; mobile networks; cognitive radio

Prof. Dr. Won-Joo Hwang
Guest Editor
School of Biomedical Convergence Engineering, Pusan National University, Busan 46241, Republic of Korea
Interests: optimization theory; game theory; machine learning and data science for wireless communications and networking

Special Issue Information

Dear Colleagues,

The Internet is already huge and complex. The ever-increasing QoS demands of users and applications in terms of bandwidth, latency, and data integrity keep posing challenges in network management and configuration. Meanwhile, recent paradigm shifts in networking based on softwarization, virtualization, and cloud computing have led to several novel architectures supported by emerging technologies, such as software-defined networking (SDN), network function virtualization (NFV), edge computing, IoT, and 5G. A missing piece, and a challenge, in this context is incorporating “Intelligence” into the network infrastructure so as to make timely, effective, and automated decisions based on the huge amounts of operational and service data collected from the whole network. Various artificial intelligence (AI) and machine learning (ML) techniques, such as reinforcement learning and transfer learning, combined with or targeted at the abovementioned emerging network infrastructures such as SDN, NFV, edge computing, IoT, and 5G, are necessary to bring “Intelligence” into networking and computing.

In this Special Issue, we invite submissions of high-quality original research and survey papers related to intelligence in networking and computing in the context of SDN, NFV, edge computing, IoT, and 5G. Research and development topics for this Special Issue include, but are not limited to:

  • Machine learning, deep learning, reinforcement learning, and other learning techniques for SDN and NFV;
  • Machine learning, deep learning, reinforcement learning, and other learning techniques for edge computing and cloud computing;
  • Machine learning, deep learning, reinforcement learning, and other learning techniques for IoT and 5G;
  • Intelligent algorithms/protocols for network configuration and management;
  • Intelligent algorithms/protocols for network resource allocation and scheduling;
  • Intelligent algorithms/protocols for low-power and efficient networking;
  • Machine learning and big data analytics for detecting network attacks and other security issues;
  • Machine learning and intelligent analytics for information diffusion and other network issues;
  • New optimization techniques/algorithms/hardware for intelligent networking and computing;
  • New machine learning techniques/algorithms/hardware for intelligent networking and computing;
  • Applications of emerging technologies (e.g., blockchain, federated learning, coded computing) for security, privacy, and trust in intelligence networking and computing;
  • Intelligence networking and computing for emerging applications and services: augmented/virtual reality, autonomous vehicles, massive IoT, smart grid, smart industries, and Industry 5.0;
  • Testbed and practical implementation of intelligence networking and computing.

Prof. Dr. Jong-Deok Kim
Prof. Dr. Mikio Hasegawa
Prof. Dr. Won-Joo Hwang
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Intelligence
  • Machine learning
  • SDN/NFV
  • IoT
  • 5G

Published Papers (3 papers)


Research

22 pages, 1654 KiB  
Article
A Generative Model for Traffic Demand with Heterogeneous and Spatiotemporal Characteristics in Massive Wi-Fi Systems
by Jae-Min Lee and Jong-Deok Kim
Electronics 2022, 11(12), 1848; https://doi.org/10.3390/electronics11121848 - 10 Jun 2022
Cited by 2 | Viewed by 1471
Abstract
A substantial amount of money and time is required to optimize resources in a massive Wi-Fi network in a real-world environment. Therefore, to reduce cost, proposed algorithms are first verified through simulations before being implemented in a real-world environment. A traffic model is essential to describe user traffic for such simulations. Existing traffic models are statistical models based on a discrete-time random process that combine a spatiotemporal characteristic model with time-varying parameters, such as the mean and variance, of a statistical model. The spatiotemporal characteristic model rests on a mathematically strict assumption that all access points (APs) have approximately similar traffic patterns, which increase during the day and decrease at night. This assumption yields a homogeneous representation of the network traffic and cannot capture heterogeneous characteristics, such as the fact that lecture buildings on campus have high traffic during lectures, while restaurants have high traffic only during mealtimes. It is therefore difficult to represent heterogeneous traffic with such a mathematical model. Deep learning can be used to represent heterogeneous patterns. This study proposes a generative model for Wi-Fi traffic that considers spatiotemporal characteristics using deep learning. The proposed model learns heterogeneous traffic patterns from AP-level measurement data without any assumptions and generates similar traffic patterns based on the data. The results show that the difference between the samples generated by the proposed model and the collected data is up to 72.1% smaller than that reported in previous studies.
(This article belongs to the Special Issue Advances in Intelligence Networking and Computing)
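
The abstract above describes learning per-AP traffic patterns directly from measurement data instead of assuming a common day/night profile. As a rough illustration of that idea (not the authors' actual architecture), the minimal sketch below assumes PyTorch and a placeholder measurement tensor; it trains a small conditional generator that maps an AP identity plus noise to a day-long, non-negative traffic curve.

```python
# Illustrative sketch only: a conditional generator for AP-level traffic,
# trained on placeholder data.  Sizes and the training loop are toy values.
import torch
import torch.nn as nn

num_aps, steps_per_day, noise_dim = 32, 96, 16   # e.g., 96 fifteen-minute slots

class TrafficGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        self.ap_embed = nn.Embedding(num_aps, 16)          # per-AP "character"
        self.net = nn.Sequential(
            nn.Linear(16 + noise_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, steps_per_day), nn.Softplus(),  # demand is non-negative
        )

    def forward(self, ap_ids, noise):
        return self.net(torch.cat([self.ap_embed(ap_ids), noise], dim=-1))

# Placeholder "measured" data: one heterogeneous daily pattern per AP.
measured = torch.rand(num_aps, steps_per_day)

gen = TrafficGenerator()
opt = torch.optim.Adam(gen.parameters(), lr=1e-3)
for _ in range(200):                                       # toy training loop
    ap_ids = torch.arange(num_aps)
    noise = torch.randn(num_aps, noise_dim)
    loss = nn.functional.mse_loss(gen(ap_ids, noise), measured)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Sample a synthetic traffic curve for AP 0 for use in a simulation.
sample = gen(torch.tensor([0]), torch.randn(1, noise_dim))
```

In a simulation, sampling such a generator with different noise vectors would yield heterogeneous per-AP traffic traces rather than a single homogeneous day/night profile.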

16 pages, 1785 KiB  
Article
BER Minimization by User Pairing in Downlink NOMA Using Laser Chaos Decision-Maker
by Masaki Sugiyama, Aohan Li, Zengchao Duan, Makoto Naruse and Mikio Hasegawa
Electronics 2022, 11(9), 1452; https://doi.org/10.3390/electronics11091452 - 30 Apr 2022
Cited by 3 | Viewed by 1958
Abstract
In next-generation wireless communication systems, non-orthogonal multiple access (NOMA) has been recognized as an essential technology for improving spectrum efficiency. NOMA allows multiple users to transmit data simultaneously on the same resource block, provided that users are properly paired. Most pairing schemes, however, require prior information, such as the users' location information, which makes prompt user pairing difficult. To realize real-time operation without prior information in NOMA, a bandit algorithm using a chaotically oscillating time series, referred to as the laser chaos decision-maker, was previously demonstrated. However, that scheme did not consider the detailed communication processes, such as modulation and error-correction coding. In this study, in order to adapt the laser chaos decision-maker to real communication systems, we propose a user pairing scheme based on acknowledgment (ACK) and negative acknowledgment (NACK) information that takes the detailed communication channels into account. Furthermore, based on insights gained from an analysis of parameter dependencies, we introduce an adaptive pairing method to minimize the bit error rate (BER) of the NOMA system under study. The numerical results show that the proposed method outperforms the traditional user pairing schemes, i.e., the Conventional-NOMA (C-NOMA) pairing scheme, the Unified Channel Gain Difference (UCGD-NOMA) pairing scheme, and an ϵ-greedy-based user pairing scheme. As the cell radius of the NOMA system decreases, the BER advantage of the proposed scheme increases. Specifically, the proposed scheme can decrease the BER from 10⁻¹ to 10⁻⁵ compared with the conventional schemes when the cell radius is 400 m.
(This article belongs to the Special Issue Advances in Intelligence Networking and Computing)
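
The abstract above frames user pairing as a decision problem driven only by ACK/NACK feedback. As a generic illustration of that feedback loop, the sketch below uses an ϵ-greedy stand-in rather than the laser chaos decision-maker itself, with a purely hypothetical channel model; names and reward values are illustrative.

```python
# Hedged sketch: ACK/NACK-driven selection of a NOMA user pair framed as a
# multi-armed bandit.  Generic epsilon-greedy stand-in, toy channel model.
import random

pairs = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]  # candidate user pairs
value = {p: 0.0 for p in pairs}        # running estimate of ACK rate per pair
count = {p: 0 for p in pairs}
epsilon = 0.1

def transmit(pair):
    """Hypothetical channel: pairs with a larger index gap succeed more often."""
    gap = abs(pair[0] - pair[1])
    return random.random() < 0.5 + 0.1 * gap   # True = ACK, False = NACK

for t in range(10_000):
    if random.random() < epsilon:              # explore a random pair
        pair = random.choice(pairs)
    else:                                      # exploit the current best pair
        pair = max(pairs, key=value.get)
    ack = transmit(pair)
    count[pair] += 1
    value[pair] += (float(ack) - value[pair]) / count[pair]  # incremental mean

best = max(pairs, key=value.get)
print("pair with highest observed ACK rate:", best, round(value[best], 3))
```

Replacing the ϵ-greedy rule with a decision-maker driven by a chaotic laser time series, and the toy channel with the actual NOMA link, is where the paper's contribution lies.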

16 pages, 8301 KiB  
Article
YOLO-Based Object Detection for Separate Collection of Recyclables and Capacity Monitoring of Trash Bins
by Aria Bisma Wahyutama and Mintae Hwang
Electronics 2022, 11(9), 1323; https://doi.org/10.3390/electronics11091323 - 21 Apr 2022
Cited by 26 | Viewed by 8703
Abstract
This study describes the development of a smart trash bin that separates and collects recyclables using a webcam and You Only Look Once (YOLO) real-time object detection on a Raspberry Pi to detect and classify recyclables into their correct categories. The classification result rotates the trash bin lid and reveals the correct trash bin compartment for the user to throw away the trash. The accuracy of the YOLO model was evaluated at 91% in an optimal computing environment and 75% when deployed on the Raspberry Pi. Several Internet of Things hardware components, such as an ultrasonic sensor for measuring trash bin capacity and a GPS module for locating the trash bin, are controlled by an Arduino Uno to provide capacity monitoring. The capacity and GPS information are uploaded to the Firebase Database via the ESP8266 Wi-Fi module. To deliver the capacity monitoring feature, the uploaded trash bin capacity is displayed as a bar level in a mobile application developed with MIT App Inventor, allowing the user to quickly take action if required. The system proposed in this study is intended to be deployed in rural areas, where it can potentially solve the recyclable waste separation problem.
(This article belongs to the Special Issue Advances in Intelligence Networking and Computing)
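
The detection-to-compartment step described in the abstract can be sketched as follows. The snippet assumes the Ultralytics YOLO Python package, a hypothetical custom-trained weights file `recyclables.pt`, and a placeholder class-to-compartment map and lid-rotation stub; it illustrates the workflow and is not the authors' implementation.

```python
# Hedged sketch of the detection-to-compartment step.  Assumes the Ultralytics
# YOLO package; the model file, class names, and rotate_lid() stub are
# hypothetical placeholders for the paper's Raspberry Pi / Arduino setup.
from ultralytics import YOLO

COMPARTMENT = {"plastic": 0, "paper": 1, "can": 2, "glass": 3}  # hypothetical classes

model = YOLO("recyclables.pt")          # hypothetical custom-trained detector

def rotate_lid(compartment_index: int) -> None:
    # Placeholder: in the real system this would signal the Arduino that
    # drives the lid motor (e.g., over serial).
    print(f"rotating lid to compartment {compartment_index}")

def handle_item(image_path: str) -> None:
    results = model(image_path)                 # run inference on one frame
    boxes = results[0].boxes
    if len(boxes) == 0:
        return                                  # nothing detected
    # Pick the most confident detection and map its class to a compartment.
    best = max(range(len(boxes)), key=lambda i: float(boxes.conf[i]))
    label = results[0].names[int(boxes.cls[best])]
    if label in COMPARTMENT:
        rotate_lid(COMPARTMENT[label])

handle_item("webcam_frame.jpg")
```

On the real device, the classification result would drive the lid motor, while the ultrasonic and GPS readings are handled by the Arduino/ESP8266 path described in the abstract.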
