Special Issue "Edge Computing for Internet of Things"

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Networks".

Deadline for manuscript submissions: closed (15 January 2022) | Viewed by 9976

Printed Edition Available!
A printed edition of this Special Issue is available here.

Special Issue Editors

Dr. Kevin Lee
Guest Editor
School of Information Technology, Deakin University, Melbourne, VIC 3217, Australia
Interests: Internet of Things; cloud computing; robotics; embedded systems
Prof. Dr. Ka Lok Man
Guest Editor
Department of Computer Science and Software Engineering, Xi’an Jiaotong Liverpool University, Suzhou Dushu Lake Higher Education Town, Suzhou Industrial Park, Suzhou, Jiangsu Province, China
Interests: wireless sensor networks; Internet of Things; artificial intelligence; photovoltaics

Special Issue Information

Dear Colleagues,

The Internet of Things (IoT) is becoming an established technology, with devices deployed in homes, workplaces, and public areas at an increasingly rapid rate. IoT devices are the core technology behind smart homes, smart cities, and intelligent transport systems, and they promise to optimise travel, reduce energy usage, and improve quality of life. With the prevalence of IoT, the problem of how to manage the vast volume, wide variety, and erratic generation patterns of the data produced is becoming increasingly clear and challenging.

This Special Issue focuses on solving this problem through the use of edge computing. Edge computing offers a solution to managing IoT data by processing it close to the location where it is generated. Edge computing allows computation to be performed locally, thus reducing the volume of data that must be transmitted to remote data centers and cloud storage. It also allows decisions to be made locally, without having to wait for cloud servers to respond.
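The data reduction that edge computing enables can be illustrated with a short sketch: a hypothetical edge gateway aggregates raw sensor readings into per-window summaries, so only a fraction of the raw volume ever leaves the device. The function name and window size are illustrative, not drawn from any paper in this issue.

```python
from statistics import mean

def edge_summarise(readings, window=10):
    """Aggregate raw sensor readings at the edge, transmitting one
    summary record per window instead of every raw sample."""
    summaries = []
    for i in range(0, len(readings), window):
        window_vals = readings[i:i + window]
        summaries.append({
            "count": len(window_vals),
            "mean": mean(window_vals),
            "max": max(window_vals),
        })
    return summaries

# 100 raw samples collapse to 10 summary records before transmission.
raw = [20.0 + (i % 7) * 0.5 for i in range(100)]
summaries = edge_summarise(raw, window=10)
```

In this toy setup the transmitted payload shrinks by an order of magnitude while still supporting cloud-side decisions on the aggregates.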

We encourage papers in all areas related to this topic, including software architectures, systems, IoT devices, edge computing devices and fog computing.

Dr. Kevin Lee
Prof. Dr. Ka Lok Man
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2000 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Internet of Things
  • Edge computing
  • Fog computing
  • Cloud computing
  • 5G wireless
  • Local processing
  • Big data
  • Embedded systems
  • Data processing
  • Gateways
  • Data analysis
  • Data reduction
  • Real-time data processing

Published Papers (11 papers)


Editorial

Jump to: Research

Editorial
Edge Computing for Internet of Things
Electronics 2022, 11(8), 1239; https://doi.org/10.3390/electronics11081239 - 14 Apr 2022
Viewed by 338
Abstract
The Internet of Things (IoT) is maturing and becoming an established and vital technology [...] Full article
(This article belongs to the Special Issue Edge Computing for Internet of Things)

Research

Jump to: Editorial

Article
Enabling Processing Power Scalability with Internet of Things (IoT) Clusters
Electronics 2022, 11(1), 81; https://doi.org/10.3390/electronics11010081 - 28 Dec 2021
Cited by 1 | Viewed by 405
Abstract
Internet of things (IoT) devices play a crucial role in the design of state-of-the-art infrastructures, with an increasing demand to support more complex services and applications. However, IoT devices are known for having limited computational capacities. Traditional approaches used to offload applications to the cloud to ease the burden on end-user devices, at the expense of greater latency and increased network traffic. Our goal is to optimize the use of IoT devices, particularly those being underutilized. In this paper, we propose a pragmatic solution, built upon the Erlang programming language, that allows a group of IoT devices to collectively execute services, using their spare resources with minimal interference, and achieving a level of performance that otherwise would not be met by individual execution. Full article
(This article belongs to the Special Issue Edge Computing for Internet of Things)

Article
FP-Growth Algorithm for Discovering Region-Based Association Rule in the IoT Environment
Electronics 2021, 10(24), 3091; https://doi.org/10.3390/electronics10243091 - 12 Dec 2021
Cited by 1 | Viewed by 729
Abstract
With the development of the Internet of things (IoT), both the types and amounts of spatial data collected from heterogeneous IoT devices are increasing. The increased spatial data are being actively utilized in the data mining field. The existing association rule mining algorithms find all items with high correlation in the entire dataset. Association rules that appear differently in each region, however, may not be found when the rules are searched for across all data. In this paper, we propose region-based frequent pattern growth (RFP-Growth) to search for association rules by dense regions. First, RFP-Growth divides transactions that include position data into regions using a density-based clustering algorithm. Second, frequent pattern growth (FP-Growth) is performed on the transactions of each region. The experimental results show that RFP-Growth discovers new association rules that the original FP-Growth cannot find in the whole dataset. Full article
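The two-stage idea in this abstract can be sketched in a few lines: transactions tagged with coordinates are first partitioned by location (a simple grid stands in for the density-based clustering), and frequent itemsets are then counted per region (a naive exhaustive count stands in for FP-Growth). All names, coordinates, and thresholds below are invented for illustration.

```python
from collections import Counter, defaultdict
from itertools import combinations

def mine_regional_itemsets(transactions, cell=10.0, min_support=2):
    """Partition location-tagged transactions into grid cells (a simplified
    stand-in for density-based clustering), then count frequent itemsets
    per region -- the role FP-Growth plays in the paper."""
    regions = defaultdict(list)
    for x, y, items in transactions:
        regions[(int(x // cell), int(y // cell))].append(items)

    frequent = {}
    for region, txs in regions.items():
        counts = Counter()
        for items in txs:
            for r in range(1, len(items) + 1):
                for combo in combinations(sorted(items), r):
                    counts[combo] += 1
        frequent[region] = {c: n for c, n in counts.items() if n >= min_support}
    return frequent

# "umbrella" is frequent only in one region -- a rule a global
# mining pass over all four transactions would miss.
txs = [
    (1.0, 2.0, ["umbrella", "boots"]),
    (3.0, 1.0, ["umbrella", "boots"]),
    (55.0, 60.0, ["sunscreen"]),
    (57.0, 62.0, ["sunscreen"]),
]
patterns = mine_regional_itemsets(txs, cell=10.0, min_support=2)
```

A production version would substitute DBSCAN-style clustering and a real FP-tree; the exhaustive itemset enumeration here is exponential in transaction size and only suitable for tiny examples.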
(This article belongs to the Special Issue Edge Computing for Internet of Things)

Article
Automatic Failure Recovery for Container-Based IoT Edge Applications
Electronics 2021, 10(23), 3047; https://doi.org/10.3390/electronics10233047 - 06 Dec 2021
Cited by 1 | Viewed by 926
Abstract
Recent years have seen the rapid adoption of Internet of Things (IoT) technologies, where billions of physical devices are interconnected to provide data sensing, computing and actuating capabilities. IoT-based systems have been extensively deployed across various sectors, such as smart homes, smart cities, smart transport, smart logistics and so forth. Newer paradigms such as edge computing have been developed to allow computation and data intelligence to be performed closer to IoT devices, hence reducing latency for time-sensitive tasks. However, IoT applications are increasingly being deployed in remote and difficult-to-reach areas for edge computing scenarios. These deployment locations make upgrading applications and dealing with software failures difficult. IoT applications are also increasingly being deployed as containers, which offer increased remote management ability but are more complex to configure. This paper proposes an approach for effectively managing, updating and re-configuring container-based IoT software as efficiently, scalably and reliably as possible, with minimal downtime upon the detection of software failures. The approach is evaluated using Docker container-based IoT application deployments in an edge computing scenario. Full article
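The failure-recovery loop described here can be reduced to a minimal sketch: poll each container's health and restart any that have failed. The `supervise` function, fleet names, and the stub `restart` action below are all hypothetical; a real edge deployment would call the Docker Engine API or `docker restart` instead.

```python
def supervise(state, restart):
    """One supervision pass over a container fleet: any container
    reported unhealthy is restarted via the supplied recovery action."""
    recovered = []
    for name, healthy in state.items():
        if not healthy:
            restart(name)  # in practice: a Docker API call on the edge node
            recovered.append(name)
    return recovered

# Hypothetical fleet state (name -> healthy?). The stub restart action
# simply marks the container healthy again.
state = {"sensor-reader": True, "mqtt-bridge": False}
recovered = supervise(state, restart=lambda name: state.update({name: True}))
```

Running such a pass periodically on the edge node itself, rather than from a remote controller, is what keeps recovery working when the site's uplink is down.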
(This article belongs to the Special Issue Edge Computing for Internet of Things)

Article
Linked-Object Dynamic Offloading (LODO) for the Cooperation of Data and Tasks on Edge Computing Environment
Electronics 2021, 10(17), 2156; https://doi.org/10.3390/electronics10172156 - 03 Sep 2021
Cited by 1 | Viewed by 576
Abstract
With the evolution of the Internet of Things (IoT), edge computing technology is used to efficiently process the rapidly increasing volume of data from various IoT devices. Edge computing offloading reduces data processing time and bandwidth usage by processing data in real time on the device where the data are generated or on a nearby server. Previous studies have proposed offloading between IoT devices through local-edge collaboration from resource-constrained edge servers. However, they did not consider nearby edge servers in the same layer that have spare computing resources. Consequently, quality of service (QoS) degrades due to the restricted resources of edge computing, and execution latency rises due to congestion. To handle offloaded tasks in a rapidly changing dynamic environment, finding an optimal target server is still challenging. Therefore, a new cooperative offloading method to control edge computing resources is needed to allocate limited resources between distributed edges efficiently. This paper suggests the linked-object dynamic offloading (LODO) algorithm, which provides an ideal balance between edges by considering their ready or running states. The LODO algorithm carries out the tasks in its list in order of the correlation between data and tasks, expressed through linked objects. Furthermore, dynamic offloading considers the running status of all cooperating terminals when scheduling task distribution. This can decrease the average delay and average power consumption of terminals. In addition, the resource shortage problem can be alleviated by distributing task processing. Full article
(This article belongs to the Special Issue Edge Computing for Internet of Things)

Article
Secure Mobile Edge Server Placement Using Multi-Agent Reinforcement Learning
Electronics 2021, 10(17), 2098; https://doi.org/10.3390/electronics10172098 - 30 Aug 2021
Cited by 1 | Viewed by 737
Abstract
Mobile edge computing is capable of providing high data processing capabilities while ensuring the low latency constraints of low-power wireless networks, such as the industrial internet of things. However, optimally placing edge servers (which provide storage and computation services to user equipment) is still a challenge. To optimally place mobile edge servers in a wireless network, such that network latency is minimized and load balancing is performed on edge servers, we propose a multi-agent reinforcement learning (RL) solution to a formulated mobile edge server placement problem. The RL agents are designed to learn the dynamics of the environment and adopt a joint action policy that minimizes network latency and balances the load on edge servers. To ensure that the action policy adopted by the RL agents maximizes the overall network performance indicators, we propose sharing information, such as the latency experienced from each server and the load of each server, with the other RL agents in the network. Experimental results are obtained to analyze the effectiveness of the proposed solution. Although the sharing of information allows the proposed solution to achieve a network-wide maximization of overall network performance, at the same time it makes the solution susceptible to different kinds of security attacks. To further investigate the security issues arising from the proposed solution, we provide a detailed analysis of the possible types of security attacks and their countermeasures. Full article
(This article belongs to the Special Issue Edge Computing for Internet of Things)

Article
A Task Execution Scheme for Dew Computing with State-of-the-Art Smartphones
Electronics 2021, 10(16), 2006; https://doi.org/10.3390/electronics10162006 - 19 Aug 2021
Cited by 6 | Viewed by 1113
Abstract
The computing resources of today’s smartphones are underutilized most of the time. Using these resources could be highly beneficial in edge computing and fog computing contexts, for example, to support urban services for citizens. However, new challenges, especially regarding job scheduling, arise. Smartphones may form ad hoc networks, but individual devices highly differ in computational capabilities and (tolerable) energy usage. We take into account these particularities to validate a task execution scheme that relies on the computing power that clusters of mobile devices could provide. In this paper, we expand the study of several practical heuristics for job scheduling, including execution scenarios with state-of-the-art smartphones. With the results of new simulated scenarios, we confirm previous findings and better comprehend the baseline approaches already proposed for the problem. This study also sheds some light on the capabilities of small-sized clusters comprising mid-range and low-end smartphones when the objective is to achieve real-time stream processing using TensorFlow object recognition models as edge jobs. Ultimately, we strive for industry applications to improve task scheduling in dew computing contexts. Heuristics such as ours, plus supporting dew middleware, could improve citizen participation by allowing a much wider use of dew computing resources, especially in urban contexts, to help build smart cities. Full article
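One of the simplest practical heuristics for such heterogeneous device clusters is earliest finish time: give each job to the device that would complete it soonest, given its relative speed and current backlog. The sketch below illustrates the idea; the device speeds and job sizes are invented, and this generic baseline is not claimed to be the exact scheme the paper evaluates.

```python
def assign_jobs(jobs, devices):
    """Greedy earliest-finish-time scheduling: each job (largest first)
    goes to the device that would complete it soonest, given the
    device's relative speed and accumulated backlog."""
    finish = {d: 0.0 for d in devices}                    # busy time per device
    placement = {}
    for job, work in sorted(jobs.items(), key=lambda kv: -kv[1]):
        best = min(devices, key=lambda d: finish[d] + work / devices[d])
        finish[best] += work / devices[best]
        placement[job] = best
    return placement, max(finish.values())                # assignment, makespan

# Hypothetical cluster: relative speeds, and jobs in abstract work units.
devices = {"high_end": 2.0, "mid_range": 1.0, "low_end": 0.5}
jobs = {"j1": 4.0, "j2": 4.0, "j3": 2.0, "j4": 1.0}
placement, makespan = assign_jobs(jobs, devices)
```

An energy-aware variant would add a per-device battery penalty to the finish-time key, which is the kind of particularity the paper's heuristics must account for on real phones.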
(This article belongs to the Special Issue Edge Computing for Internet of Things)

Article
Spectral Classification Based on Deep Learning Algorithms
Electronics 2021, 10(16), 1892; https://doi.org/10.3390/electronics10161892 - 06 Aug 2021
Cited by 1 | Viewed by 702
Abstract
Convolutional neural networks (CNNs) can achieve accurate image classification, representing the current best performance of deep learning algorithms. However, the complexity of spectral data limits the performance of many CNN models. Due to the potential redundancy and noise of spectral data, the standard CNN model is usually unable to perform correct spectral classification. Furthermore, deeper CNN architectures also face difficulties when further network layers are added, which hinders network convergence and produces low classification accuracy. To alleviate these problems, we propose a new CNN architecture specially designed for 2D spectral data. Firstly, we collected the reflectance spectra of five samples using a portable optical fiber spectrometer and converted them into 2D matrix data to suit the feature extraction of deep learning algorithms. Secondly, the numbers of convolutional layers and pooling layers were adjusted according to the characteristics of the spectral data to enhance the feature extraction ability. Finally, the discard-rate selection principle of the dropout layer was determined by visual analysis to improve the classification accuracy. Experimental results demonstrate that our CNN system has advantages over the traditional AlexNet, Unet, and support vector machine (SVM)-based approaches in many aspects, such as easy implementation, short training time, higher accuracy, and strong robustness. Full article
(This article belongs to the Special Issue Edge Computing for Internet of Things)

Article
Method for Dynamic Service Orchestration in Fog Computing
Electronics 2021, 10(15), 1796; https://doi.org/10.3390/electronics10151796 - 27 Jul 2021
Cited by 2 | Viewed by 553
Abstract
Fog computing is meant to deal with the problems which cloud computing cannot solve alone. As the fog is closer to the user, it can improve some very important QoS characteristics, such as latency and availability. One of the challenges in the fog architecture is its heterogeneous, constrained devices and the dynamic nature of the end devices, which requires dynamic service orchestration to provide efficient service placement inside the fog nodes. An optimization method is needed to ensure the required level of QoS while requiring minimal resources from fog and end devices, thus ensuring the longest lifecycle of the whole IoT system. A two-stage multi-objective optimization method to find the best placement of services among the available fog nodes is presented in this paper. A Pareto set of non-dominated possible service distributions is found using an integer multi-objective particle swarm optimization method. Then, the analytic hierarchy process is used to choose the best service distribution according to an application-specific judgment matrix. An illustrative scenario with experimental results is presented to demonstrate the characteristics of the proposed method. Full article
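The two stages of such a method can be illustrated with a toy example: first keep only the non-dominated (Pareto-optimal) service distributions, then scalarise with fixed weights standing in for the priorities an AHP judgment matrix would produce. The candidate values and weights below are invented for illustration.

```python
def pareto_front(candidates):
    """Stage one: keep the candidates no other candidate dominates,
    where each candidate is (latency, resource use) and lower is better."""
    def dominates(a, b):
        return a[0] <= b[0] and a[1] <= b[1] and (a[0] < b[0] or a[1] < b[1])
    return [c for c in candidates if not any(dominates(o, c) for o in candidates)]

def choose(front, w_latency=0.7, w_resources=0.3):
    """Stage two: pick the best distribution by a weighted sum, the weights
    standing in for AHP-derived priorities."""
    return min(front, key=lambda c: w_latency * c[0] + w_resources * c[1])

# (latency, resource use) of four hypothetical service distributions.
candidates = [(10, 5), (8, 7), (12, 3), (11, 6)]
front = pareto_front(candidates)   # (11, 6) is dominated by (10, 5)
best = choose(front)
```

In the paper the front is produced by integer multi-objective particle swarm optimization rather than exhaustive comparison, but the dominance test and the final scalarised choice follow the same shape.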
(This article belongs to the Special Issue Edge Computing for Internet of Things)

Article
Deep Learning-Based Content Caching in the Fog Access Points
Electronics 2021, 10(4), 512; https://doi.org/10.3390/electronics10040512 - 22 Feb 2021
Cited by 4 | Viewed by 1322
Abstract
Proactive caching of the most popular contents in the cache memory of fog access points (F-APs) is regarded as a promising solution for 5G and beyond cellular communication to address latency-related issues caused by the unprecedented demand for multimedia data traffic. However, it is still challenging to correctly predict the contents users will request and store them efficiently in the cache memory of the F-APs, as user preferences are dynamic. In this article, to solve this issue to some extent, a deep learning-based content caching (DLCC) method is proposed, exploiting recent advances in deep learning. In DLCC, a 2D CNN-based method is used to formulate the caching model. Simulation results in terms of deep learning (DL) accuracy, mean squared error (MSE), cache hit ratio, and overall system delay show that the proposed method outperforms known DL-based caching strategies, as well as the transfer learning-based cooperative caching (LECC) strategy, randomized replacement (RR), and caching based on Zipf’s probability distribution. Full article
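The value of caching popular contents under a Zipf popularity law, which the paper uses as one of its baselines, can be checked with a short calculation: the fraction of requests served from a cache holding the top-ranked contents. The catalogue size and exponent below are illustrative.

```python
def zipf_hit_ratio(n_contents, cache_size, alpha=1.0):
    """Hit ratio when requests follow a Zipf distribution with exponent
    `alpha` and the cache holds the `cache_size` most popular contents."""
    weights = [1.0 / rank ** alpha for rank in range(1, n_contents + 1)]
    return sum(weights[:cache_size]) / sum(weights)

# Caching just the top 10% of a 100-content catalogue already serves
# over half of all requests when alpha = 1.
ratio = zipf_hit_ratio(100, 10)
```

This skew is exactly why a learned predictor only needs to identify a small popular subset to noticeably cut F-AP latency.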
(This article belongs to the Special Issue Edge Computing for Internet of Things)

Article
Application of Wireless Sensor Network Based on Hierarchical Edge Computing Structure in Rapid Response System
Electronics 2020, 9(7), 1176; https://doi.org/10.3390/electronics9071176 - 20 Jul 2020
Cited by 5 | Viewed by 1130
Abstract
This paper presents a rapid response system architecture for the distributed management of warehouses in logistics, applying the concept of tiered edge computing. A tiered edge node architecture is proposed for the system to process computing tasks of different complexity, and a corresponding rapid response algorithm is introduced. The paper emphasizes the classification of abstracted outlier sensing data, which can match different sensing types and be transplanted to various application fields. A software-defined simulation is used to evaluate the system’s performance in response time and response accuracy, from which it can be concluded that common predefined emergency cases can be detected and responded to rapidly. Full article
(This article belongs to the Special Issue Edge Computing for Internet of Things)

Back to TopTop