Resource Provisioning and Orchestration in Edge, Fog, and Distributed Cloud Computing Environments

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: closed (31 December 2020) | Viewed by 2482

Special Issue Editors


Prof. Dr. Choonhwa Lee
Guest Editor
Department of Computer Science, Hanyang University, Seoul, Korea
Interests: blockchain; distributed/cloud computing; Internet of Things

Prof. Dr. Eun-Sung Jung
Guest Editor
1. Department of Software and Communications Engineering, Hongik University, Sejong, Korea
2. Division of Computer Science and Engineering, Hanyang University, Seoul, Korea
Interests: cloud computing for big data; high-performance computing for big facilities; IoT data analytics

Dr. Muhammad Hanif
Guest Editor
Division of Computer Science and Engineering, Hanyang University, Seoul, Korea
Interests: cloud and distributed computing; big data analytics engines; stream processing frameworks; service orchestration; blockchain technologies

Special Issue Information

Dear Colleagues,

In recent years, cloud, edge, and fog computing have moved to the center of ICT, reshaping the future of computer networks and distributed applications. However, the rapid increase in the dynamicity, scale, diversity, and heterogeneity of cloud resources requires expert knowledge of complex orchestration operations, such as deployment, selection, runtime control, and monitoring of these resources, to achieve the desired quality of service and to meet service level agreements. Complex applications that comprise numerous virtual machines and components with diverse software and configuration dependencies benefit immensely from advances in orchestration technology. For instance, building distributed applications on container technologies requires the ability to orchestrate a fleet of containers at scale. Hybrid cloud orchestration use cases that arise from simultaneously harnessing public and private clouds remain open research challenges.

This Special Issue focuses on novel solutions and innovative approaches that contribute to the field of cloud, fog, and edge computing provisioning and orchestration. Articles submitted to this Special Issue may also address significant recent developments in application deployment automation, scientific workflow orchestration, and container allocation optimization for microservices on distributed cloud infrastructure.

Papers in the relevant areas of orchestration, planning, auto-scaling, and container optimization in cloud and edge computing, including but not limited to the following topics, are invited:

  • Performance monitoring, provisioning, and planning of container-based applications and infrastructure.
  • Auto-scaling support for cloud infrastructure and applications.
  • Model-based design, deployment, and management of containerized cloud services.
  • Multi-cloud service deployment and orchestration technology.
  • Softwarized networks for cloud and cloud-native computing.
  • Orchestration of edge, fog, and cloud computing.
  • Machine learning and artificial intelligence for cloud infrastructure management.
  • Virtualization and containers for high-performance computing.

Prof. Dr. Choonhwa Lee
Prof. Dr. Eun-Sung Jung
Dr. Muhammad Hanif
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Orchestration
  • Cloud application
  • Virtualization and containerization
  • Auto-scaling
  • Multi-cloud applications
  • Edge computing provisioning

Published Papers (1 paper)


Research

18 pages, 5192 KiB  
Article
NetAP: Adaptive Polling Technique for Network Packet Processing in Virtualized Environments
by Hyunchan Park, Juyong Seong, Munkyu Lee, Kyungwoon Lee and Cheol-Ho Hong
Appl. Sci. 2020, 10(15), 5219; https://doi.org/10.3390/app10155219 - 29 Jul 2020
Cited by 1 | Viewed by 2162
Abstract
In cloud systems, computing resources, such as the CPU, memory, network, and storage devices, are virtualized and shared by multiple users. In recent decades, methods to virtualize these resources efficiently have been intensively studied. Nevertheless, the current virtualization techniques cannot achieve effective I/O virtualization when packets are transferred between a virtual machine and a host system. For example, VirtIO, which is a network device driver for KVM-based virtualization, adopts an interrupt-based packet-delivery mechanism and incurs frequent switch overheads between the virtual machine and the host system. Therefore, VirtIO wastes valuable CPU resources and decreases network performance. To address this limitation, this paper proposes an adaptive polling-based network I/O processing technique, called NetAP, for virtualized environments. NetAP processes network requests via a periodic polling-based mechanism. For this purpose, NetAP adopts the golden-section search algorithm to determine the near-optimal polling interval for various workloads with different characteristics. We implemented NetAP in the Linux kernel and evaluated it with up to six virtual machines. The evaluation results show that NetAP can improve the network performance of virtual machines by up to 31.16%, while using only 32.92% of the host CPU time used by VirtIO for packet processing.
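As a concrete illustration of the interval-tuning idea described in the abstract, the short user-space C sketch below applies a golden-section search to pick a polling interval that minimizes a measured cost. This is only a sketch of the general technique, not NetAP's in-kernel implementation: the function measure_cost(), its placeholder cost curve, and the 1-500 microsecond search range are hypothetical stand-ins for the real per-interval efficiency measurement.

/*
 * Minimal sketch: golden-section search over a polling interval.
 * measure_cost() is a hypothetical placeholder for a real measurement
 * (e.g., CPU cycles spent per delivered packet at a given interval).
 * Build with: cc netap_sketch.c -lm
 */
#include <math.h>
#include <stdio.h>

/* Hypothetical cost curve with a single minimum near 50 us. */
static double measure_cost(double interval_us)
{
    return (interval_us - 50.0) * (interval_us - 50.0) + 10.0;
}

/* Golden-section search for the interval minimizing measure_cost() on [lo, hi]. */
static double golden_section_search(double lo, double hi, double tol)
{
    const double inv_phi = (sqrt(5.0) - 1.0) / 2.0;   /* ~0.618 */
    double c = hi - inv_phi * (hi - lo);
    double d = lo + inv_phi * (hi - lo);

    while (hi - lo > tol) {
        if (measure_cost(c) < measure_cost(d))
            hi = d;    /* minimum lies in [lo, d] */
        else
            lo = c;    /* minimum lies in [c, hi] */
        c = hi - inv_phi * (hi - lo);
        d = lo + inv_phi * (hi - lo);
    }
    return (lo + hi) / 2.0;
}

int main(void)
{
    /* Search polling intervals between 1 us and 500 us (assumed bounds). */
    double best = golden_section_search(1.0, 500.0, 0.5);
    printf("near-optimal polling interval: %.1f us\n", best);
    return 0;
}

Golden-section search suits this kind of tuning because each cost evaluation (a measurement run at a given polling interval) is expensive, and the method narrows a unimodal search range with a fixed, small number of evaluations per iteration.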