Open Access Article

On-Demand Computation Offloading Architecture in Fog Networks

by Yeonjin Jin and HyungJune Lee *,†
Department of Computer Science and Engineering, Ewha Womans University, Seoul 03760, Korea
* Author to whom correspondence should be addressed.
† Current Address: 52 Ewhayeodae-gil, Asan Eng. 334, Seodaemun-gu, Seoul 03760, Korea.
Electronics 2019, 8(10), 1076;
Received: 20 August 2019 / Revised: 11 September 2019 / Accepted: 18 September 2019 / Published: 23 September 2019
(This article belongs to the Section Networks)
With the advent of the Internet-of-Things (IoT), end devices have served as sensors, gateways, and local storage equipment. Because of their limited resources, cloud-based computing is currently a necessary companion. However, raw data collected at devices must be uploaded to a cloud server, consuming a significant amount of network bandwidth. In this paper, we propose an on-demand computation offloading architecture in fog networks that solicits available resources from nearby edge devices and distributes a suitable amount of computation tasks to them. The proposed architecture aims to finish a necessary computation job within a distinct deadline with reduced network overhead. Our work consists of three elements: (1) resource provider network formation by classifying nodes as stem or leaf depending on network stability, (2) task allocation based on each node's resource availability and soliciting status, and (3) task redistribution in preparation for possible network and computation losses. Simulation-driven validation in the iFogSim simulator demonstrates that our work achieves a high task completion rate within a designated deadline while drastically reducing unnecessary network overhead, by selecting only some effective edge devices as computation delegates via locally networked computation.
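The three elements above can be illustrated with a minimal sketch of deadline-aware task allocation over a stem/leaf provider network. The `Provider` class, the `allocate` function, and the greedy stem-first policy are illustrative assumptions for exposition, not the paper's exact algorithm (the evaluation itself uses the Java-based iFogSim simulator):

```python
from dataclasses import dataclass

@dataclass
class Provider:
    name: str
    capacity: float   # instructions per second the node can spare
    stable: bool      # True for a "stem" node on a stable link, False for "leaf"

def allocate(total_work: float, deadline: float,
             providers: list[Provider]) -> dict[str, float]:
    """Greedily split `total_work` (instructions) across nearby providers so
    that each share finishes within `deadline` seconds, preferring stable
    stem nodes over leaf nodes."""
    remaining = total_work
    shares: dict[str, float] = {}
    # Prefer stem nodes first, then nodes with more spare capacity.
    for p in sorted(providers, key=lambda p: (not p.stable, -p.capacity)):
        if remaining <= 0:
            break
        # Largest share this node can finish before the deadline.
        share = min(remaining, p.capacity * deadline)
        if share > 0:
            shares[p.name] = share
            remaining -= share
    if remaining > 0:
        raise RuntimeError("not enough in-network capacity to meet the deadline")
    return shares
```

For example, with a 2 s deadline, a stem node sparing 30 instructions/s takes 60 units of a 100-unit job before the remainder spills over to a faster but less stable leaf node. Task redistribution on a node failure would amount to re-running `allocate` on the failed share with the surviving providers.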
Keywords: computation offloading; in-network resource allocation; fog networks; edge computing
MDPI and ACS Style

Jin, Y.; Lee, H. On-Demand Computation Offloading Architecture in Fog Networks. Electronics 2019, 8, 1076.


