Special Issue "Edge/Fog Computing Technologies for IoT Infrastructure"

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Internet of Things".

Deadline for manuscript submissions: 31 January 2021.

Special Issue Editors

Dr. Taehong Kim
Guest Editor
School of Information and Communication Engineering, Chungbuk National University, Cheongju 28644, Korea
Interests: edge computing; container orchestration; Internet of Things; SDN/NFV; wireless sensor networks
Dr. Youngsoo Kim
Guest Editor
Department of Computer Science, Air Force Academy, 28187, Korea
Interests: wireless sensor networks; artificial intelligence; machine learning; deep learning; Internet of Things; edge computing
Dr. Seong-eun Yoo
Guest Editor
School of Computer and Communication Engineering, Daegu University, Gyeongsan 38453, Korea
Interests: wireless sensor networks; real-time and embedded systems; Internet of Things; edge computing

Special Issue Information

Dear Colleagues,

The prevalence of smart devices and cloud computing has led to an explosion in the amount of data generated by IoT devices. Moreover, emerging IoT applications, such as augmented and virtual reality (AR/VR), intelligent transportation systems, and smart factories, require ultra-low latency for data communication and processing. Fog/edge computing is a new computing paradigm in which fully distributed fog/edge nodes located near end devices provide computing resources. By analyzing, filtering, and processing data at local fog/edge resources instead of transferring massive volumes of data to centralized cloud servers, fog/edge computing can significantly reduce processing delay and network traffic. With these advantages, fog/edge computing is expected to be one of the key enabling technologies for building the IoT infrastructure.

Containers, a lightweight form of virtualization, are one of the emerging fog/edge computing technologies for the IoT infrastructure. Despite advances in this technology, research into the integration of containers with fog/edge computing for the IoT infrastructure is still at an early stage. Many challenges remain to be addressed, such as smart container orchestration, real-time resource monitoring, auto-scaling, and load balancing of services.
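As a concrete illustration of one such orchestration capability, Kubernetes lets operators declare auto-scaling policy as configuration rather than code. The sketch below shows a minimal Horizontal Pod Autoscaler manifest using the `autoscaling/v2` API; the resource names (`web-hpa`, `web`) and thresholds are hypothetical, chosen only for illustration:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa              # hypothetical HPA name
spec:
  scaleTargetRef:            # the workload to scale
    apiVersion: apps/v1
    kind: Deployment
    name: web                # hypothetical Deployment
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80   # scale out when average CPU exceeds 80%
```

With such a manifest applied, the orchestrator continuously reconciles the observed CPU utilization against the declared target, adding or removing pod replicas within the stated bounds.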

Recently, there has been extensive research and development in both academia and industry, and this Special Issue seeks recent advances in fog/edge computing technologies for building an IoT infrastructure. Potential topics of interest for this Special Issue include, but are not limited to, the following:

  • Fog/edge computing architectures for IoT infrastructure
  • Fog/edge computing-based IoT applications
  • Dynamic resource and service allocation and deployment in fog/edge computing
  • Device and service management for fog/edge-based IoT infrastructure
  • Data management techniques for fog/edge-based IoT infrastructure
  • Algorithms and technologies for computation offloading in fog/edge computing
  • State-aware solutions for fog/edge computing
  • Container orchestration frameworks based on open source projects
  • Container orchestration techniques such as real-time monitoring, auto-scaling, and load balancing of services
  • Experimental testbeds for fog/edge computing-based IoT applications
  • Performance analysis and evaluation of fog/edge computing
  • Standards for fog/edge computing in IoT infrastructure
  • SDN/NFV techniques for fog/edge computing and IoT infrastructure
  • AI and deep learning techniques for fog/edge computing and IoT infrastructure
  • Security and privacy for fog/edge-based IoT infrastructure

Dr. Taehong Kim
Dr. Youngsoo Kim
Dr. Seong-eun Yoo
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2000 CHF (Swiss Francs). Submitted papers should be well formatted and written in good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (1 paper)


Research

Open Access Article
Horizontal Pod Autoscaling in Kubernetes for Elastic Container Orchestration
Sensors 2020, 20(16), 4621; https://doi.org/10.3390/s20164621 - 17 Aug 2020
Abstract
Kubernetes, an open-source container orchestration platform, enables high availability and scalability through diverse autoscaling mechanisms such as Horizontal Pod Autoscaler (HPA), Vertical Pod Autoscaler and Cluster Autoscaler. Amongst them, HPA helps provide seamless service by dynamically scaling up and down the number of resource units, called pods, without having to restart the whole system. Kubernetes monitors default Resource Metrics including CPU and memory usage of host machines and their pods. On the other hand, Custom Metrics, provided by external software such as Prometheus, are customizable to monitor a wide collection of metrics. In this paper, we investigate HPA through diverse experiments to provide critical knowledge on its operational behaviors. We also discuss the essential difference between Kubernetes Resource Metrics (KRM) and Prometheus Custom Metrics (PCM) and how they affect HPA’s performance. Lastly, we provide deeper insights and lessons on how to optimize the performance of HPA for researchers, developers, and system administrators working with Kubernetes in the future.
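The scaling decision HPA makes can be understood as a simple proportional rule: the desired replica count is the current count scaled by the ratio of the observed metric value to its target, rounded up. A minimal sketch of this rule (an illustration based on the documented HPA algorithm, not code from the paper):

```python
import math

def desired_replicas(current_replicas: int,
                     current_metric: float,
                     target_metric: float) -> int:
    """Proportional HPA scaling rule: scale the replica count by the
    ratio of the observed metric to its target, rounding up so the
    target is not exceeded after scaling."""
    return math.ceil(current_replicas * current_metric / target_metric)

# 4 pods averaging 90% CPU against an 80% target -> scale out to 5
print(desired_replicas(4, 90, 80))

# 5 pods averaging 40% CPU against an 80% target -> scale in to 3
print(desired_replicas(5, 40, 80))
```

In a real cluster this ratio is evaluated periodically by the HPA control loop and clamped to the configured minimum and maximum replica bounds.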
(This article belongs to the Special Issue Edge/Fog Computing Technologies for IoT Infrastructure)