Edge Computing in IoT

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Computer Science & Engineering".

Deadline for manuscript submissions: closed (15 June 2020) | Viewed by 16272

Special Issue Editor


Prof. Dr. Ioanna Roussaki
Guest Editor
School of Electrical and Computer Engineering, National Technical University of Athens, Athens, Greece
Interests: Internet of Things; context-awareness; edge/fog/cloud computing; social networks; data engineering; machine learning; networking and communications

Special Issue Information

Dear Colleagues,

The Internet of Things (IoT) has come to dominate several application domains over the last decade and is expected to proliferate further. The population of connected IoT devices has reached 30 billion and is foreseen to exceed 60 billion within five years. This growth greatly increases the added-value potential of IoT, but it also introduces several challenges, such as efficiently handling the massive volumes of data constantly generated by IoT devices. Cloud-based deployments of IoT systems often fail to meet the increasing requirements of their clients, especially regarding the delivery of real-time services and high quality of experience (QoE), while preserving the security and privacy of the entire system. This has led to IoT deployments that move data handling operations towards the edge of the network, giving rise to IoT solutions based on edge computing. This shift enables data processing and storage to be performed in a distributed fashion, close to the data sources, thus addressing network bandwidth limitations, high latency, and privacy concerns. However, many challenges must still be tackled before IoT systems can exploit the edge computing paradigm in full, such as the heterogeneity and resource constraints of most IoT devices, performance, scalability, and privacy.

This Special Issue aims to cover the most recent technical advances in edge computing for IoT, including emerging models, architectures, strategies and protocols, systems, applications, test-beds, and field deployments for edge computing-based IoT.

Topics of interest to this Special Issue include but are not limited to the following:
• Architectures, models, and protocols for edge computing in IoT;
• Resource allocation and management for edge computing in IoT;
• Data processing, distribution, management, and storage in edge computing-based IoT;
• Computation offloading in edge computing-based IoT;
• Security, privacy, and trust in edge computing-based IoT;
• AI and machine learning for edge computing in IoT;
• Performance evaluation for edge computing in IoT;
• Services and applications in edge computing-based IoT.

Prof. Dr. Ioanna Roussaki
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, navigate to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (4 papers)


Research

20 pages, 613 KiB  
Article
Energy Efficient Computation Offloading Mechanism in Multi-Server Mobile Edge Computing—An Integer Linear Optimization Approach
by Prince Waqas Khan, Khizar Abbas, Hadil Shaiba, Ammar Muthanna, Abdelrahman Abuarqoub and Mashael Khayyat
Electronics 2020, 9(6), 1010; https://doi.org/10.3390/electronics9061010 - 17 Jun 2020
Cited by 34 | Viewed by 3827
Abstract
Conserving energy resources and enhancing computation capability have been key design challenges in the era of the Internet of Things (IoT). Recent developments in energy harvesting (EH) and Mobile Edge Computing (MEC) have been recognized as promising techniques for tackling such challenges. Computation offloading enables heavy computation workloads to be executed on powerful MEC servers, so the quality of the computation experience, for example, the execution latency, can be significantly improved. In situations where mobile devices move arbitrarily and multiple servers are available for offloading, computation offloading strategies face new challenges, as competition over resource allocation and server selection intensifies. In this paper, an optimized computation offloading algorithm based on integer linear optimization is proposed. The algorithm chooses the execution mode for each mobile device among local execution, offloading execution, and task dropping. The proposed system is based on an improved computing strategy that is also energy efficient. Mobile devices, including EH devices, are considered for simulation purposes. Simulation results illustrate that the energy level starts at 0.979% and gradually decreases to 0.87%. Therefore, the proposed algorithm can trade off the energy of computational offloading tasks efficiently.
(This article belongs to the Special Issue Edge Computing in IoT)
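
As a rough illustration of the kind of formulation this abstract describes, the sketch below builds a small integer linear program in which each device's task is assigned exactly one execution mode (local execution, offloading to one of several servers, or dropping), subject to a server capacity limit. All names, cost values, and constraints are illustrative assumptions, not the authors' model; PuLP is used purely for demonstration.

```python
# Minimal sketch of an execution-mode selection ILP, assuming
# illustrative per-device energy costs (not the paper's values).
import pulp

devices = range(4)
servers = range(2)

E_local = [5.0, 7.0, 6.0, 8.0]                       # local execution cost per device
E_off = [[3.0, 4.0], [2.5, 5.0], [4.0, 3.5], [3.0, 2.0]]  # offload cost per (device, server)
E_drop = 10.0                                         # penalty for dropping a task
capacity = 2                                          # max tasks each server accepts

prob = pulp.LpProblem("offloading", pulp.LpMinimize)
x_loc = pulp.LpVariable.dicts("local", devices, cat="Binary")
x_off = pulp.LpVariable.dicts("off", [(d, s) for d in devices for s in servers], cat="Binary")
x_drop = pulp.LpVariable.dicts("drop", devices, cat="Binary")

# Objective: total energy cost across all devices.
prob += (pulp.lpSum(E_local[d] * x_loc[d] for d in devices)
         + pulp.lpSum(E_off[d][s] * x_off[(d, s)] for d in devices for s in servers)
         + pulp.lpSum(E_drop * x_drop[d] for d in devices))

# Each task takes exactly one execution mode.
for d in devices:
    prob += x_loc[d] + pulp.lpSum(x_off[(d, s)] for s in servers) + x_drop[d] == 1

# Server capacity limits.
for s in servers:
    prob += pulp.lpSum(x_off[(d, s)] for d in devices) <= capacity

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for d in devices:
    mode = "local" if x_loc[d].value() == 1 else ("drop" if x_drop[d].value() == 1 else "offload")
    print(f"device {d}: {mode}")
```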

16 pages, 4122 KiB  
Article
An Effective Edge-Assisted Data Collection Approach for Critical Events in the SDWSN-Based Agricultural Internet of Things
by Xiaomin Li, Zhiyu Ma, Jianhua Zheng, Yongxin Liu, Lixue Zhu and Nan Zhou
Electronics 2020, 9(6), 907; https://doi.org/10.3390/electronics9060907 - 29 May 2020
Cited by 36 | Viewed by 3697
Abstract
In traditional agricultural wireless sensor networks (WSNs), data collection systems suffer from large amounts of redundant data and high latency on critical events (CEs), which increases time and energy consumption. To overcome these problems, an effective edge computing (EC) enabled data collection approach for CEs in smart agriculture is proposed. First, the key feature data types (KFDTs) are extracted from the historical dataset to retain the main information on CEs. Next, the KFDTs are selected as the collection data types of the software-defined wireless sensor network (SDWSN). Then, the event type is decided by searching for the minimum average variance between the sensing data of active nodes and the average values of the key feature data obtained by EC. Furthermore, the sensing nodes are driven by the SDWSN servers to sense the event-related data under latency constraints. A real-world testbed was set up in a smart greenhouse for experimental verification of the proposed approach. The results showed that the proposed approach reduces the number of sensors needed, the sensing time, the collected data volume, and the communication time, providing a low-latency agricultural data collection system. Thus, the proposed approach can improve the efficiency of CE sensing in smart agriculture.
(This article belongs to the Special Issue Edge Computing in IoT)
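
The minimum-average-variance event matching described in this abstract can be illustrated with a minimal sketch: each candidate event type stores historical mean values of its key feature data types, and the live readings are matched to the event with the smallest average squared deviation. The event names, feature choices, and numbers below are hypothetical, not taken from the paper.

```python
# Sketch of minimum-average-variance event classification, assuming
# hypothetical greenhouse events and key feature data types.
import numpy as np

# Per-event mean values of three key feature data types
# (e.g., temperature, humidity, soil moisture), learned from history.
event_means = {
    "heat_stress": np.array([38.0, 30.0, 18.0]),
    "overwatering": np.array([24.0, 85.0, 55.0]),
    "normal": np.array([25.0, 60.0, 35.0]),
}

def classify_event(readings: np.ndarray) -> str:
    """Return the event whose key-feature means deviate least, on average."""
    scores = {name: np.mean((readings - mu) ** 2)
              for name, mu in event_means.items()}
    return min(scores, key=scores.get)

print(classify_event(np.array([37.1, 33.0, 20.0])))  # -> heat_stress
```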

13 pages, 818 KiB  
Article
Hardware Resource Analysis in Distributed Training with Edge Devices
by Sihyeong Park, Jemin Lee and Hyungshin Kim
Electronics 2020, 9(1), 28; https://doi.org/10.3390/electronics9010028 - 26 Dec 2019
Cited by 4 | Viewed by 4615
Abstract
When training a deep learning model with distributed training, the hardware resource utilization of each device depends on the model structure and the number of devices used for training. Distributed training has recently been applied to edge computing. Since edge devices have hardware resource limitations, such as memory, training methods that use hardware resources efficiently are needed. Previous research focused on reducing training time by optimizing the synchronization process between edge devices or by compressing the models. In this paper, we monitored hardware resource usage as a function of the number of layers and the batch size of the model during distributed training with edge devices, and we analyzed memory usage and training time variability as the batch size and number of layers increased. Experimental results demonstrated that the larger the batch size, the fewer the synchronizations between devices, resulting in less accurate training. In the shallow model, training time increased with the number of devices used for training, because synchronization between devices took more time than the computation time of training. This paper finds that efficient use of hardware resources for distributed training requires selecting devices according to model complexity, and that fewer layers and smaller batches make more efficient use of the hardware.
(This article belongs to the Special Issue Edge Computing in IoT)
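
A minimal sketch of the kind of per-step resource monitoring this paper performs is shown below, with a stand-in workload in place of an actual distributed training step; psutil is used to sample the resident memory of the training process as the batch size grows. All sizes and timings are illustrative.

```python
# Sketch: sample memory usage and step time across batch sizes,
# assuming a stand-in workload instead of a real training step.
import os
import time
import psutil

process = psutil.Process(os.getpid())

for batch_size in (8, 16, 32, 64):
    start = time.perf_counter()
    buf = bytearray(batch_size * 1024 * 1024)  # stand-in for activations/gradients
    time.sleep(0.1)                            # stand-in for compute + sync time
    rss_mb = process.memory_info().rss / (1024 * 1024)  # sample while buffer is live
    elapsed = time.perf_counter() - start
    print(f"batch={batch_size:3d}  time={elapsed:.3f}s  rss={rss_mb:.1f} MiB")
    del buf
```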

30 pages, 1088 KiB  
Article
Collaborative Computation Offloading and Resource Allocation in Cache-Aided Hierarchical Edge-Cloud Systems
by Yanwen Lan, Xiaoxiang Wang, Chong Wang, Dongyu Wang and Qi Li
Electronics 2019, 8(12), 1430; https://doi.org/10.3390/electronics8121430 - 30 Nov 2019
Cited by 10 | Viewed by 3309
Abstract
The hierarchical edge-cloud paradigm has recently been proposed to provide abundant resources for 5G wireless networks. However, its computation and communication capabilities are heterogeneous, which makes it difficult to fully exploit the potential advantages. Besides, previous works on mobile edge computing (MEC) focused on server caching and offloading, ignoring the computational and caching gains brought by the proximity of user equipments (UEs). In this paper, we investigate computation offloading in a three-tier cache-assisted hierarchical edge-cloud system. In this system, UEs cache tasks and can offload their workloads to edge servers or to adjoining UEs via device-to-device (D2D) links for collaborative processing. A cost minimization problem is formulated to capture the tradeoff between service delay and energy consumption; the offloading decision, the computational resources, and the offloading ratio are jointly optimized in each offloading mode. We then formulate this problem as a non-convex mixed-integer nonlinear program (MINLP). To solve it, we propose a joint computation offloading and resource allocation optimization (JORA) scheme. In this scheme, we first decompose the original problem into three independent subproblems and analyze their convexity. After that, we transform them into solvable forms (e.g., convex or linear optimization problems). Then, an iteration-based algorithm using the Lagrange multiplier method and a distributed joint optimization algorithm based on game theory are proposed to solve these subproblems. Finally, simulation results show the performance of our proposed scheme compared with existing benchmark schemes.
(This article belongs to the Special Issue Edge Computing in IoT)
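
To give a flavor of one of the decomposed subproblems, the sketch below solves a one-dimensional delay-energy tradeoff over the offloading ratio with a standard bounded scalar solver. The cost model and all constants are illustrative assumptions, not the paper's formulation (which applies the Lagrange multiplier method and game theory across three subproblems).

```python
# Sketch: pick the offloading ratio minimizing a weighted delay+energy
# cost, under an assumed partial-offloading model with made-up constants.
from scipy.optimize import minimize_scalar

C = 1e9        # task workload (CPU cycles)
f_loc = 1e9    # local CPU speed (cycles/s)
f_edge = 4e9   # edge CPU speed (cycles/s)
r_up = 2e7     # uplink rate (bits/s)
B = 4e6        # task input size (bits)
p_tx = 0.5     # transmit power (W)
kappa = 1e-27  # effective switched capacitance of the local CPU
w = 0.5        # weight between delay and energy

def cost(rho: float) -> float:
    """Weighted delay+energy cost when a fraction rho is offloaded."""
    t_local = (1 - rho) * C / f_loc
    t_off = rho * B / r_up + rho * C / f_edge
    delay = max(t_local, t_off)          # local and edge parts run in parallel
    e_local = kappa * (1 - rho) * C * f_loc ** 2
    e_tx = p_tx * rho * B / r_up
    return w * delay + (1 - w) * (e_local + e_tx)

res = minimize_scalar(cost, bounds=(0.0, 1.0), method="bounded")
print(f"optimal offloading ratio: {res.x:.3f}, cost: {res.fun:.4f}")
```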
