
Special Issue "Edge/Fog Computing Technologies for IoT Infrastructure"

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Internet of Things".

Deadline for manuscript submissions: closed (31 January 2021).

Special Issue Editors

Dr. Taehong Kim
Guest Editor
School of Information and Communication Engineering, Chungbuk National University, Cheongju, Chungbuk 28644, Korea
Interests: edge computing; container orchestration; Internet of Things; SDN/NFV; wireless sensor networks
Dr. Youngsoo Kim
Guest Editor
Department of Computer Science, Air Force Academy, 28187, Korea
Interests: wireless sensor networks; artificial intelligence; machine learning; deep learning; Internet of Things; edge computing
Dr. Seong-eun Yoo
Guest Editor
School of Computer and Communication Engineering, Daegu University, Gyeongsan 712-714, Korea
Interests: wireless sensor networks; industrial IoT; localization

Special Issue Information

Dear Colleagues,

The prevalence of smart devices and cloud computing has led to an explosion in the amount of data generated by IoT devices. Moreover, emerging IoT applications, such as augmented and virtual reality (AR/VR), intelligent transportation systems, and smart factories, require ultra-low latency for data communication and processing. Fog/edge computing is a new computing paradigm in which fully distributed fog/edge nodes located near end devices provide computing resources. By analyzing, filtering, and processing data at local fog/edge resources instead of transferring massive volumes of data to centralized cloud servers, fog/edge computing can significantly reduce processing delay and network traffic. With these advantages, fog/edge computing is expected to be one of the key enabling technologies for building IoT infrastructure.

Containers, a lightweight virtualization technology, are among the emerging fog/edge computing technologies for IoT infrastructure. Despite advances in this technology, research into the integration of containers with fog/edge computing for IoT infrastructure is still in its early stages. Many challenges remain to be addressed, such as smart container orchestration, real-time monitoring of resources, auto-scaling, and load balancing of services.

Recently, there has been extensive research and development in both academia and industry, and this Special Issue seeks recent advances in fog/edge computing technologies for building an IoT infrastructure. Potential topics of interest for this Special Issue include, but are not limited to, the following:

  • Fog/edge computing architectures for IoT infrastructure
  • Fog/edge computing-based IoT applications
  • Dynamic resource and service allocation and deployment in fog/edge computing
  • Device and service management for fog/edge-based IoT infrastructure
  • Data management techniques for fog/edge-based IoT infrastructure
  • Algorithms and technologies for computation offloading in fog/edge computing
  • State-aware solutions for fog/edge computing
  • Container orchestration frameworks based on open-source projects
  • Container orchestration techniques such as real-time monitoring, auto-scaling, and load balancing of services
  • Experimental testbeds for fog/edge computing-based IoT applications
  • Performance analysis and evaluation of fog/edge computing
  • Standards on fog/edge computing for IoT infrastructure
  • SDN/NFV techniques for fog/edge computing and IoT infrastructure
  • AI and deep learning techniques for fog/edge computing and IoT infrastructure
  • Security and privacy for fog/edge-based IoT infrastructure

Dr. Taehong Kim
Dr. Youngsoo Kim
Dr. Seong-eun Yoo
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2200 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (11 papers)


Editorial


Editorial
Edge/Fog Computing Technologies for IoT Infrastructure
Sensors 2021, 21(9), 3001; https://doi.org/10.3390/s21093001 - 25 Apr 2021
Abstract
The prevalence of smart devices and cloud computing has led to an explosion in the amount of data generated by IoT devices [...] Full article
(This article belongs to the Special Issue Edge/Fog Computing Technologies for IoT Infrastructure)

Research


Article
Dynamically Controlling Offloading Thresholds in Fog Systems
Sensors 2021, 21(7), 2512; https://doi.org/10.3390/s21072512 - 03 Apr 2021
Abstract
Fog computing is a potential solution to overcome the shortcomings of cloud-based processing of IoT tasks. These drawbacks can include high latency, location awareness, and security—attributed to the distance between IoT devices and cloud-hosted servers. Although fog computing has evolved as a solution to address these challenges, it is known for having limited resources that need to be effectively utilized, or its advantages could be lost. Computational offloading and resource management are critical to be able to benefit from fog computing systems. We introduce a dynamic, online, offloading scheme that involves the execution of delay-sensitive tasks. This paper proposes an architecture of a fog node able to adjust its offloading threshold dynamically (i.e., the criteria by which a fog node decides whether tasks should be offloaded rather than executed locally) using two algorithms: dynamic task scheduling (DTS) and dynamic energy control (DEC). These algorithms seek to minimize overall delay, maximize throughput, and minimize energy consumption at the fog layer. Compared to other benchmarks, our approach could reduce latency by up to 95%, improve throughput by 71%, and reduce energy consumption by up to 67% in fog nodes. Full article
(This article belongs to the Special Issue Edge/Fog Computing Technologies for IoT Infrastructure)
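The threshold idea above can be sketched in a few lines: a fog node executes tasks locally while its queue is below a threshold and its energy budget allows, and periodically adjusts that threshold from observed delay. This is only an illustration of the concept; the class name, step sizes, and bounds are assumptions, not the paper's DTS/DEC algorithms.

```python
class FogNode:
    """Toy fog node with a dynamically adjusted offloading threshold."""

    def __init__(self, threshold=5, energy_budget=100.0):
        self.threshold = threshold          # max queued tasks before offloading
        self.energy_budget = energy_budget  # remaining energy units
        self.queue = []

    def submit(self, task_cost):
        """Return 'local' or 'offload' for an arriving task."""
        if len(self.queue) >= self.threshold or task_cost > self.energy_budget:
            return "offload"
        self.queue.append(task_cost)
        self.energy_budget -= task_cost
        return "local"

    def adapt(self, avg_delay, delay_target=1.0):
        # DTS-like rule (simplified): shrink the threshold when delay
        # exceeds the target, grow it when there is slack, within bounds.
        if avg_delay > delay_target:
            self.threshold = max(1, self.threshold - 1)
        else:
            self.threshold = min(20, self.threshold + 1)
```

A real implementation would also account for the energy-control side (the DEC algorithm) when deciding the adjustment direction.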

Article
Identification of IoT Actors
Sensors 2021, 21(6), 2093; https://doi.org/10.3390/s21062093 - 17 Mar 2021
Abstract
The Internet of Things (IoT) is a leading trend with numerous opportunities accompanied by advantages as well as disadvantages. Parallel with IoT development, significant privacy and personal data protection challenges are also growing. In this regard, the General Data Protection Regulation (GDPR) is often considered the world’s strongest set of data protection rules and has proven to be a catalyst for many countries around the world. The concepts and interaction of the data controller, the joint controllers, and the data processor play a key role in the implementation of the GDPR. Therefore, clarifying the blurred IoT actors’ relationships to determine corresponding responsibilities is necessary. Given the IoT transformation reflected in shifting computing power from cloud to the edge, in this research we have considered how these computing paradigms are affecting IoT actors. In this regard, we have introduced identification of IoT actors according to a new five-computing layer IoT model based on the cloud, fog, edge, mist, and dew computing. Our conclusion is that identifying IoT actors in the light of the corresponding IoT data manager roles could be useful in determining the responsibilities of IoT actors for their compliance with data protection and privacy rules. Full article
(This article belongs to the Special Issue Edge/Fog Computing Technologies for IoT Infrastructure)

Article
Efficient Implementation of NIST LWC ESTATE Algorithm Using OpenCL and Web Assembly for Secure Communication in Edge Computing Environment
Sensors 2021, 21(6), 1987; https://doi.org/10.3390/s21061987 - 11 Mar 2021
Abstract
In an edge computing service, edge devices collect data from a number of embedded devices, such as sensors and CCTVs (closed-circuit television), and communicate with application servers. Since a large portion of the communication in edge computing services is conducted wirelessly, the transmitted data need to be properly encrypted. Furthermore, since the application servers (resp. edge devices) are responsible for encrypting or decrypting a large amount of data from edge devices (resp. terminal devices), the cryptographic operations need to be optimized on both the server side and the edge device side. The confidentiality and integrity of data are essential for secure communication. In this paper, we present two versions of security software that can be used on the edge device side and the server side for secure communication between them in an edge computing environment. Our software is web-based and can therefore be executed on any web browser. It makes use of the ESTATE (Energy efficient and Single-state Tweakable block cipher based MAC-Then-Encrypt) algorithm, a promising candidate in the NIST LWC (National Institute of Standards and Technology Lightweight Cryptography) competition, which provides not only data confidentiality but also data authentication. We implement the ESTATE algorithm using WebAssembly for efficient use on edge devices and optimize its performance using the properties of the underlying block cipher. Several methods are applied to operate the ESTATE algorithm efficiently. Conditional statements are normally used to XOR the extended tweak values during operation; to eliminate this unnecessary step, we expand and store the tweak values through pre-computation. The measured performance of the WebAssembly implementation is compared with that of the reference C/C++ implementation in the Chrome, Firefox, and Microsoft Edge browsers. For efficiency on the server side, we use OpenCL, a parallel computing framework, to process many data blocks simultaneously. Since conditional statements cause performance degradation in OpenCL, we eliminate them using loop unrolling, and we move the data to be encrypted into local memory, which has a high operation speed. TweAES-128 and TweAES-128-6, which have the same structure as the AES algorithm, can use the previously studied T-table method, and 16-byte input blocks are processed in parallel. Since the T-table method may be vulnerable to cache-timing attacks, the previously studied T-table shuffling method is applied for safe operation. Our software covers the necessary security services from edge devices to servers in edge computing services and can easily be used on various types of edge computing devices because it is web-based. Full article
(This article belongs to the Special Issue Edge/Fog Computing Technologies for IoT Infrastructure)

Article
Deep Reinforcement Learning-Based Task Scheduling in IoT Edge Computing
Sensors 2021, 21(5), 1666; https://doi.org/10.3390/s21051666 - 28 Feb 2021
Abstract
Edge computing (EC) has recently emerged as a promising paradigm that supports resource-hungry Internet of Things (IoT) applications with low latency services at the network edge. However, the limited capacity of computing resources at the edge server poses great challenges for scheduling application tasks. In this paper, a task scheduling problem is studied in the EC scenario, and multiple tasks are scheduled to virtual machines (VMs) configured at the edge server by maximizing the long-term task satisfaction degree (LTSD). The problem is formulated as a Markov decision process (MDP) for which the state, action, state transition, and reward are designed. We leverage deep reinforcement learning (DRL) to solve both time scheduling (i.e., the task execution order) and resource allocation (i.e., which VM the task is assigned to), considering the diversity of the tasks and the heterogeneity of available resources. A policy-based REINFORCE algorithm is proposed for the task scheduling problem, and a fully-connected neural network (FCN) is utilized to extract the features. Simulation results show that the proposed DRL-based task scheduling algorithm outperforms the existing methods in the literature in terms of the average task satisfaction degree and success ratio. Full article
(This article belongs to the Special Issue Edge/Fog Computing Technologies for IoT Infrastructure)
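The policy-based REINFORCE approach described above can be illustrated with a minimal linear-softmax policy that picks a VM for each task state and is updated from sampled episode returns. The class name, learning rate, and the linear policy itself are assumptions for illustration; the paper uses a fully-connected neural network for feature extraction.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    z = x - x.max()          # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

class Scheduler:
    """Linear-softmax policy over VMs, trained with REINFORCE."""

    def __init__(self, n_features, n_vms, lr=0.1):
        self.w = np.zeros((n_vms, n_features))
        self.lr = lr

    def act(self, state):
        probs = softmax(self.w @ state)
        action = rng.choice(len(probs), p=probs)
        return action, probs

    def update(self, episode, returns):
        # REINFORCE update: w += lr * G_t * grad log pi(a_t | s_t)
        for (state, action, probs), g in zip(episode, returns):
            grad = -np.outer(probs, state)  # d log pi / dw, all actions
            grad[action] += state           # plus indicator for a_t
            self.w += self.lr * g * grad
```

After an update with a positive return, the policy assigns higher probability to the rewarded VM for similar states, which is the core mechanism behind maximizing the long-term satisfaction degree.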

Article
Optimal Service Provisioning for the Scalable Fog/Edge Computing Environment
Sensors 2021, 21(4), 1506; https://doi.org/10.3390/s21041506 - 22 Feb 2021
Abstract
In recent years, we observed the proliferation of cloud data centers (CDCs) and the Internet of Things (IoT). Cloud computing based on CDCs has the drawback of unpredictable response times due to variant delays between service requestors (IoT devices and end devices) and CDCs. This deficiency of cloud computing is especially problematic in providing IoT services with strict timing requirements and as a result, gives birth to fog/edge computing (FEC) whose responsiveness is achieved by placing service images near service requestors. In FEC, the computing nodes located close to service requestors are called fog/edge nodes (FENs). In addition, for an FEN to execute a specific service, it has to be provisioned with the corresponding service image. Most of the previous work on the service provisioning in the FEC environment deals with determining an appropriate FEN satisfying the requirements like delay, CPU and storage from the perspective of one or more service requests. In this paper, we determined how to optimally place service images in consideration of the pre-obtained service demands which may be collected during the prior time interval. The proposed FEC environment is scalable in the sense that the resources of FENs are effectively utilized thanks to the optimal provisioning of services on FENs. We propose two approaches to provision service images on FENs. In order to validate the performance of the proposed mechanisms, intensive simulations were carried out for various service demand scenarios. Full article
(This article belongs to the Special Issue Edge/Fog Computing Technologies for IoT Infrastructure)
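A minimal sketch of demand-driven image placement, loosely inspired by the problem above: services with the highest observed demand are placed first, each on the node with the most remaining storage. This greedy heuristic and its data shapes are assumptions for illustration, not the paper's proposed optimal approaches.

```python
def place_services(demands, capacities):
    """Greedy service-image placement sketch.

    demands: service -> (image_size, observed_demand)
    capacities: node -> free storage
    Places highest-demand services first on the node with the most
    remaining capacity; services that fit nowhere are skipped.
    """
    remaining = dict(capacities)
    placement = {}
    for svc, (size, _demand) in sorted(demands.items(),
                                       key=lambda kv: -kv[1][1]):
        node = max(remaining, key=remaining.get)
        if remaining[node] >= size:
            placement[svc] = node
            remaining[node] -= size
    return placement
```

An optimal formulation would instead solve this as a constrained assignment problem over the pre-obtained demand matrix, which is where the paper's two approaches come in.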

Article
Fuzzy Decision-Based Efficient Task Offloading Management Scheme in Multi-Tier MEC-Enabled Networks
Sensors 2021, 21(4), 1484; https://doi.org/10.3390/s21041484 - 20 Feb 2021
Abstract
Multi-access edge computing (MEC) is a new leading technology for meeting the demands of key performance indicators (KPIs) in 5G networks. However, in a rapidly changing dynamic environment, it is hard to find the optimal target server for processing offloaded tasks because we do not know the end users’ demands in advance. Therefore, quality of service (QoS) deteriorates because of increasing task failures and long execution latency from congestion. To reduce latency and avoid task failures from resource-constrained edge servers, vertical offloading between mobile devices with local-edge collaboration or with local edge-remote cloud collaboration have been proposed in previous studies. However, they ignored the nearby edge server in the same tier that has excess computing resources. Therefore, this paper introduces a fuzzy decision-based cloud-MEC collaborative task offloading management system called FTOM, which takes advantage of powerful remote cloud-computing capabilities and utilizes neighboring edge servers. The main objective of the FTOM scheme is to select the optimal target node for task offloading based on server capacity, latency sensitivity, and the network’s condition. Our proposed scheme can make dynamic decisions where local or nearby MEC servers are preferred for offloading delay-sensitive tasks, and delay-tolerant high resource-demand tasks are offloaded to a remote cloud server. Simulation results affirm that our proposed FTOM scheme significantly improves the rate of successfully executing offloaded tasks by approximately 68.5%, and reduces task completion time by 66.6%, when compared with a local edge offloading (LEO) scheme. The improved and reduced rates are 32.4% and 61.5%, respectively, when compared with a two-tier edge orchestration-based offloading (TTEO) scheme. They are 8.9% and 47.9%, respectively, when compared with a fuzzy orchestration-based load balancing (FOLB) scheme, approximately 3.2% and 49.8%, respectively, when compared with a fuzzy workload orchestration-based task offloading (WOTO) scheme, and approximately 38.6% and 55%, respectively, when compared with a fuzzy edge-orchestration based collaborative task offloading (FCTO) scheme. Full article
(This article belongs to the Special Issue Edge/Fog Computing Technologies for IoT Infrastructure)
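The fuzzy decision idea can be sketched with triangular membership functions over normalized server load and delay sensitivity, with one rule per offloading target. The memberships, rule set, and function names here are assumptions for illustration only, not the FTOM scheme itself.

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def offload_target(server_load, delay_sensitivity):
    """Pick an offloading target from two fuzzy inputs in [0, 1]."""
    low_load = tri(server_load, -1.0, 0.0, 0.5)
    high_load = tri(server_load, 0.5, 1.0, 2.0)
    sensitive = tri(delay_sensitivity, 0.5, 1.0, 2.0)
    tolerant = tri(delay_sensitivity, -1.0, 0.0, 0.5)
    scores = {
        "local_mec": min(low_load, sensitive),      # room locally, tight deadline
        "neighbor_mec": min(high_load, sensitive),  # busy locally, tight deadline
        "cloud": tolerant,                          # delay-tolerant -> remote cloud
    }
    return max(scores, key=scores.get)
```

This mirrors the paper's qualitative rules: delay-sensitive tasks stay at local or nearby MEC servers, while delay-tolerant, resource-hungry tasks go to the remote cloud.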

Article
Balanced Leader Distribution Algorithm in Kubernetes Clusters
Sensors 2021, 21(3), 869; https://doi.org/10.3390/s21030869 - 28 Jan 2021
Abstract
Container-based virtualization is becoming a de facto way to build and deploy applications because of its simplicity and convenience. Kubernetes is a well-known open-source project that provides an orchestration platform for containerized applications. An application in Kubernetes can contain multiple replicas to achieve high scalability and availability. Stateless applications have no requirement for persistent storage; however, stateful applications require persistent storage for each replica. Therefore, stateful applications usually require a strong consistency of data among replicas. To achieve this, the application often relies on a leader, which is responsible for maintaining consistency and coordinating tasks among replicas. This leads to a problem that the leader often has heavy loads due to its inherent design. In a Kubernetes cluster, having the leaders of multiple applications concentrated in a specific node may become a bottleneck within the system. In this paper, we propose a leader election algorithm that overcomes the bottleneck problem by evenly distributing the leaders throughout nodes in the cluster. We also conduct experiments to prove the correctness and effectiveness of our leader election algorithm compared with a default algorithm in Kubernetes. Full article
(This article belongs to the Special Issue Edge/Fog Computing Technologies for IoT Infrastructure)
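The core of a balanced leader distribution can be sketched as an election that prefers the candidate replica running on the node currently hosting the fewest leaders. The data shapes and names below are assumptions for illustration; the paper's algorithm operates within Kubernetes' actual leader-election machinery.

```python
from collections import Counter

def elect_leader(candidates, leader_counts):
    """Pick the candidate on the node hosting the fewest leaders.

    candidates: list of (replica_id, node_name) tuples
    leader_counts: Counter mapping node_name -> current leader count
    Ties break deterministically on replica id.
    """
    return min(candidates, key=lambda c: (leader_counts[c[1]], c[0]))

leaders = Counter({"node-a": 3, "node-b": 1})
candidates = [("app2-r0", "node-a"), ("app2-r1", "node-b")]
chosen = elect_leader(candidates, leaders)
leaders[chosen[1]] += 1  # record the new leader's placement
```

By contrast, the default first-come-first-served election can concentrate many applications' leaders on one node, producing the bottleneck the paper targets.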

Article
Hyper-Angle Exploitative Searching for Enabling Multi-Objective Optimization of Fog Computing
Sensors 2021, 21(2), 558; https://doi.org/10.3390/s21020558 - 14 Jan 2021
Abstract
Fog computing is an emerging technology. It has the potential of enabling various wireless networks to offer computational services based on certain requirements given by the user. Typically, the users give their computing tasks to the network manager that has the responsibility of allocating needed fog nodes optimally for conducting the computation effectively. The optimal allocation of nodes with respect to various metrics is essential for fast execution and stable, energy-efficient, balanced, and cost-effective allocation. This article aims to optimize multiple objectives using fog computing by developing multi-objective optimization with high exploitive searching. The developed algorithm is an evolutionary genetic type designated as Hyper Angle Exploitative Searching (HAES). It uses hyper angle along with crowding distance for prioritizing solutions within the same rank and selecting the highest priority solutions. The approach was evaluated on multi-objective mathematical problems and its superiority was revealed by comparing its performance with benchmark approaches. A framework of multi-criteria optimization for fog computing was proposed, the Fog Computing Closed Loop Model (FCCL). Results have shown that HAES outperforms other relevant benchmarks in terms of non-domination and optimality metrics with over 70% confidence of the t-test for rejecting the null-hypothesis of non-superiority in terms of the domination metric set coverage. Full article
(This article belongs to the Special Issue Edge/Fog Computing Technologies for IoT Infrastructure)
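HAES combines a hyper-angle measure with crowding distance when prioritizing solutions within the same rank. The crowding distance itself (standard in NSGA-II-style algorithms) can be computed as follows; the hyper-angle component is omitted, so this sketch covers only the familiar half of the selection criterion.

```python
def crowding_distance(front):
    """NSGA-II crowding distance for a list of objective vectors.

    Boundary solutions get infinite distance so they are always kept;
    interior solutions accumulate normalized neighbor gaps per objective.
    """
    n, m = len(front), len(front[0])
    dist = [0.0] * n
    for k in range(m):
        order = sorted(range(n), key=lambda i: front[i][k])
        lo, hi = front[order[0]][k], front[order[-1]][k]
        dist[order[0]] = dist[order[-1]] = float("inf")
        if hi == lo:
            continue  # degenerate objective, no spread to measure
        for j in range(1, n - 1):
            dist[order[j]] += (front[order[j + 1]][k]
                               - front[order[j - 1]][k]) / (hi - lo)
    return dist
```

Selection then prefers higher-distance (less crowded) solutions; HAES additionally folds in the hyper-angle to bias the search toward exploitation.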

Article
Horizontal Pod Autoscaling in Kubernetes for Elastic Container Orchestration
Sensors 2020, 20(16), 4621; https://doi.org/10.3390/s20164621 - 17 Aug 2020
Abstract
Kubernetes, an open-source container orchestration platform, enables high availability and scalability through diverse autoscaling mechanisms such as Horizontal Pod Autoscaler (HPA), Vertical Pod Autoscaler and Cluster Autoscaler. Amongst them, HPA helps provide seamless service by dynamically scaling up and down the number of resource units, called pods, without having to restart the whole system. Kubernetes monitors default Resource Metrics including CPU and memory usage of host machines and their pods. On the other hand, Custom Metrics, provided by external software such as Prometheus, are customizable to monitor a wide collection of metrics. In this paper, we investigate HPA through diverse experiments to provide critical knowledge on its operational behaviors. We also discuss the essential difference between Kubernetes Resource Metrics (KRM) and Prometheus Custom Metrics (PCM) and how they affect HPA’s performance. Lastly, we provide deeper insights and lessons on how to optimize the performance of HPA for researchers, developers, and system administrators working with Kubernetes in the future. Full article
(This article belongs to the Special Issue Edge/Fog Computing Technologies for IoT Infrastructure)
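The scaling rule HPA applies to both Resource Metrics and Custom Metrics is documented in Kubernetes as desiredReplicas = ceil(currentReplicas × currentMetricValue / desiredMetricValue), which is easy to sketch:

```python
import math

def desired_replicas(current_replicas, current_metric, target_metric):
    """Kubernetes HPA core scaling rule:
    desired = ceil(current * currentMetric / targetMetric)."""
    return math.ceil(current_replicas * current_metric / target_metric)

# e.g. 3 pods averaging 80% CPU against a 40% target -> scale out to 6 pods
```

In practice HPA also applies a tolerance band around the current/target ratio (10% by default) before resizing, which is why small metric fluctuations do not trigger scaling events.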

Review


Review
Resource Management Techniques for Cloud/Fog and Edge Computing: An Evaluation Framework and Classification
Sensors 2021, 21(5), 1832; https://doi.org/10.3390/s21051832 - 05 Mar 2021
Abstract
Processing IoT applications directly in the cloud may not be the most efficient solution for every IoT scenario, especially for time-sensitive applications. A promising alternative is to use fog and edge computing, which address the issue of managing the large data bandwidth needed by end devices. These paradigms require processing the large amounts of generated data close to the data sources rather than in the cloud. One of the key considerations in cloud-based IoT environments is resource management, which typically revolves around resource allocation, workload balance, resource provisioning, task scheduling, and QoS to achieve performance improvements. In this paper, we review resource management techniques that can be applied to cloud, fog, and edge computing. The goal of this review is to provide an evaluation framework of metrics for resource management algorithms aimed at cloud/fog and edge environments. To this end, we first address research challenges on resource management techniques in that domain. We then classify current research contributions to support the construction of an evaluation framework. One of the main contributions is an overview and analysis of research papers addressing resource management techniques. In conclusion, this review highlights opportunities for using resource management techniques within the cloud/fog/edge paradigm. This practice is still at an early stage of development, and barriers remain to be overcome. Full article
(This article belongs to the Special Issue Edge/Fog Computing Technologies for IoT Infrastructure)
