Computational Offloading of Service Workflow in Mobile Edge Computing
Abstract
1. Introduction
 We study the offloading and scheduling of workflow tasks in an MEC scenario with multiple mobile devices (MDs) and multiple virtual machines (VMs). We propose a workflow model based on a directed acyclic graph (DAG) that specifies the execution order and execution location of workflow tasks.
 We propose a workflow scheduling strategy based on an adaptive genetic algorithm. In the genetic algorithm, an offloading schedule, consisting of the execution order and execution location of the workflow tasks, is encoded as an individual. The optimal scheduling strategy for workflows in multi-user, multi-task scenarios is obtained through individual correction, competition for survival, selection, crossover, and mutation operations.
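The adaptive part can be illustrated with a small sketch. As a hedged example (not the authors' exact implementation), the crossover/mutation probabilities may follow the classic Srinivas-Patnaik adaptation [22], in which fitter individuals receive smaller probabilities; the function name and the fitness convention below are assumptions:

```python
def adaptive_prob(p_high: float, p_low: float, fit: float,
                  fit_avg: float, fit_max: float) -> float:
    """Srinivas-Patnaik style adaptive probability: individuals at or
    above the average fitness get a probability scaled down toward
    p_low; below-average individuals keep the full p_high."""
    if fit < fit_avg or fit_max == fit_avg:
        return p_high
    return p_high - (p_high - p_low) * (fit - fit_avg) / (fit_max - fit_avg)

# With the paper's parameter values p_c1 = 0.9, p_c2 = 0.4,
# the fittest individual gets the smallest crossover probability.
p_c_best = adaptive_prob(0.9, 0.4, fit=1.0, fit_avg=0.5, fit_max=1.0)  # 0.4
```

Since the objective here is energy minimization, a fitness such as the reciprocal of the total energy consumption would be one natural (assumed) choice.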
 The simulation results show that, compared with benchmark methods such as local computing and random offloading, the proposed method achieves a task schedule for multi-user workflows that minimizes the total energy consumption of the system.
2. Related Work
3. System Model
3.1. Workflow Task Model
3.2. Communication Model
3.3. Computation Model
 (1) Local computing: We define ${f}_{k}^{\mathrm{loc}}$ as the local computation capability of MD k. When a subtask is executed locally, the local computation time ${T}_{i,k}^{\mathrm{loc}}$ is$$\begin{array}{c}\hfill {T}_{i,k}^{\mathrm{loc}}=\frac{{\omega}_{i,k}{c}_{i,k}}{{f}_{k}^{\mathrm{loc}}}.\end{array}$$The energy consumption of computing the subtask locally is$$\begin{array}{c}\hfill {E}_{i,k}^{\mathrm{loc}}=\kappa {\left({f}_{k}^{\mathrm{loc}}\right)}^{2}{\omega}_{i,k}{c}_{i,k},\end{array}$$where $\kappa $ is the execution power consumption coefficient of the MD.
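For concreteness, the two local-execution formulas can be evaluated with a short sketch; the numeric values below are illustrative, not the paper's simulation settings:

```python
def local_time(omega: float, c: float, f_loc: float) -> float:
    """T_loc = (omega * c) / f_loc: total CPU cycles over local CPU frequency."""
    return omega * c / f_loc

def local_energy(omega: float, c: float, f_loc: float, kappa: float) -> float:
    """E_loc = kappa * f_loc^2 * omega * c: energy per cycle (kappa * f^2)
    multiplied by the total number of cycles."""
    return kappa * f_loc**2 * omega * c

# Example: a 100 kB subtask at 1000 cycles/byte on a 1 GHz device,
# with an assumed coefficient kappa = 1e-26 Joule/cycle.
t = local_time(100e3, 1000, 1e9)           # 0.1 s
e = local_energy(100e3, 1000, 1e9, 1e-26)  # 1.0 J
```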
 (2) Offloading computing: If a subtask is offloaded to a VM for computing, the total execution time consists of two parts: the transmission time for the MD to offload the subtask to the MEC server, and the computation time on the VM. The transmission time of offloading the subtask to the MEC can be calculated as$$\begin{array}{c}\hfill {T}_{i,k}^{\mathrm{tr}}=\frac{{\omega}_{i,k}}{{r}_{k}^{\mathrm{u}}}.\end{array}$$The energy consumption of the uplink can be calculated as$$\begin{array}{c}\hfill {E}_{i,k}^{\mathrm{tr}}={p}_{k}^{\mathrm{trans}}\frac{{\omega}_{i,k}}{{r}_{k}^{\mathrm{u}}}.\end{array}$$The computation time of the subtask on VM m is$$\begin{array}{c}\hfill {T}_{i,m,k}^{\mathrm{com}}=\frac{{\omega}_{i,k}{c}_{i,k}}{{f}_{m}^{\mathrm{ser}}},\end{array}$$so the total offloading execution time is$$\begin{array}{c}\hfill {T}_{i,m,k}^{\mathrm{ser}}={T}_{i,k}^{\mathrm{tr}}+{T}_{i,m,k}^{\mathrm{com}}=\frac{{\omega}_{i,k}}{{r}_{k}^{\mathrm{u}}}+\frac{{\omega}_{i,k}{c}_{i,k}}{{f}_{m}^{\mathrm{ser}}}.\end{array}$$Since the MD consumes no computation energy while the VM executes the subtask, the MD-side energy consumption of offloading is only the uplink energy:$$\begin{array}{c}\hfill {E}_{i,m,k}^{\mathrm{ser}}={E}_{i,k}^{\mathrm{tr}}={p}_{k}^{\mathrm{trans}}\frac{{\omega}_{i,k}}{{r}_{k}^{\mathrm{u}}}.\end{array}$$Combining the two cases through the binary offloading indicator ${x}_{i,k,m}$ (equal to 1 if subtask ${v}_{i,k}$ is offloaded to VM m and 0 otherwise), the execution time and energy consumption of subtask ${v}_{i,k}$ are$$\begin{array}{c}\hfill {T}_{i,k}=\left(1-{x}_{i,k,m}\right){T}_{i,k}^{\mathrm{loc}}+{x}_{i,k,m}{T}_{i,m,k}^{\mathrm{ser}},\end{array}$$$$\begin{array}{c}\hfill {E}_{i,k}=\left(1-{x}_{i,k,m}\right){E}_{i,k}^{\mathrm{loc}}+{x}_{i,k,m}{E}_{i,m,k}^{\mathrm{ser}}.\end{array}$$In the workflow, subtask ${v}_{j,k}$ is the immediate successor of subtask ${v}_{i,k}$. When subtask ${v}_{i,k}$ finishes computing, its output data ${d}_{i,j,k}$ is transmitted to the successor subtask ${v}_{j,k}$. We assume that subtask ${v}_{i,k}$ is computed locally and ${v}_{j,k}$ is computed on VM m.
The transmission time of the output data ${d}_{i,j,k}$ from ${v}_{i,k}$ to ${v}_{j,k}$ and the corresponding transmission energy consumption are$$\begin{array}{c}\hfill {T}_{i,j,k}^{\mathrm{tr}}=\frac{{d}_{i,j,k}}{{r}_{k}^{\mathrm{u}}},\end{array}$$$$\begin{array}{c}\hfill {E}_{i,j,k}^{\mathrm{tr}}={p}_{k}^{\mathrm{trans}}\frac{{d}_{i,j,k}}{{r}_{k}^{\mathrm{u}}}.\end{array}$$Similarly, when subtask ${v}_{j,k}$ is executed on VM m and subtask ${v}_{i,k}$ is executed locally, the local device needs to download the data from the VM. Let ${p}_{k}^{\mathrm{re}}$ denote the downloading power of MD k. The data downloading time and the downloading energy consumption are, respectively,$$\begin{array}{c}\hfill {T}_{j,i,k}^{\mathrm{tr}}=\frac{{d}_{j,i,k}}{{r}_{k}^{\mathrm{d}}},\end{array}$$$$\begin{array}{c}\hfill {E}_{j,i,k}^{\mathrm{tr}}={p}_{k}^{\mathrm{re}}\frac{{d}_{j,i,k}}{{r}_{k}^{\mathrm{d}}}.\end{array}$$The total computation time of workflow ${W}_{k}$ on MD k is the sum of the subtask execution times and the transmission times of the data exchanged between associated subtasks. The total energy consumption of workflow ${W}_{k}$ on MD k is the sum of the local computing energy, the offloading energy, and the energy for data transmission between associated subtasks. Accordingly, the total computation time and the total energy consumption are$$\begin{array}{c}\hfill {T}_{k}=\sum _{i=1}^{I}{T}_{i,k}+\sum _{i=1}^{I-1}\sum _{j=2}^{I}\left|{x}_{i,k,m}-{x}_{j,k,m}\right|{T}_{i,j,k}^{\mathrm{tr}},\end{array}$$$$\begin{array}{c}\hfill {E}_{k}=\sum _{i=1}^{I}{E}_{i,k}+\sum _{i=1}^{I-1}\sum _{j=2}^{I}\left|{x}_{i,k,m}-{x}_{j,k,m}\right|{E}_{i,j,k}^{\mathrm{tr}}.\end{array}$$The factor $\left|{x}_{i,k,m}-{x}_{j,k,m}\right|$ is 1 only when two associated subtasks execute at different locations, so the transfer cost between them is counted only in that case.
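The total-cost computation can be sketched in a few lines. The dictionary-based data layout and the binary per-subtask location vector (a single-VM simplification of the indicator $x_{i,k,m}$) are assumptions made for illustration:

```python
def workflow_totals(x, T_sub, E_sub, edges, t_tr, e_tr):
    """Total time/energy of one workflow on one MD: per-subtask cost plus
    inter-subtask transfer cost, counted only when |x_i - x_j| = 1
    (the two associated subtasks run at different locations).

    x      : list of 0/1 location indicators per subtask (1 = offloaded)
    T_sub  : per-subtask execution times T_{i,k}
    E_sub  : per-subtask energies E_{i,k}
    edges  : list of (i, j) predecessor/successor pairs
    t_tr   : dict (i, j) -> transfer time T^tr_{i,j,k}
    e_tr   : dict (i, j) -> transfer energy E^tr_{i,j,k}
    """
    total_t, total_e = sum(T_sub), sum(E_sub)
    for i, j in edges:
        if abs(x[i] - x[j]) == 1:  # data crosses the device/server boundary
            total_t += t_tr[(i, j)]
            total_e += e_tr[(i, j)]
    return total_t, total_e
```

For example, with three subtasks where only the first runs locally, only the transfer on the edge whose endpoints differ in location contributes to the totals.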
4. Problem Formulation
4.1. Algorithm Implementation
4.1.1. Encoding
4.1.2. Population Initialization and Individual Correction
4.1.3. Selection
4.1.4. Competition for Survival
4.1.5. Crossover
Algorithm 1 Task execution order single-point crossover algorithm. 

Algorithm 2 Single-point crossover algorithm for task offloading position. 
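The operator of Algorithm 2 can be sketched as a standard single-point crossover on the offloading-position chromosome; the repair and parameter handling of the paper's listing are not reproduced here:

```python
import random

def single_point_crossover(parent_a, parent_b, rng=None):
    """Single-point crossover for offloading-position chromosomes:
    pick a random cut point and swap the tails of the two parents."""
    rng = rng or random.Random()
    assert len(parent_a) == len(parent_b) and len(parent_a) >= 2
    cut = rng.randrange(1, len(parent_a))  # cut point in [1, n-1]
    child_a = parent_a[:cut] + parent_b[cut:]
    child_b = parent_b[:cut] + parent_a[cut:]
    return child_a, child_b
```

Note that position chromosomes need no repair, since any location value is valid at any gene; execution-order chromosomes (Algorithm 1) would additionally require a precedence-preserving correction step after crossover.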

4.1.6. Mutation
Algorithm 3 Offloading position single-point mutation algorithm. 
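A hedged sketch of the position mutation: re-draw the execution location of one randomly chosen gene, where 0 denotes local execution and 1..M a VM index (the function name and encoding convention are assumptions):

```python
import random

def offload_position_mutation(chrom, num_vms, rng=None):
    """Single-point mutation: pick one gene and re-draw its execution
    location uniformly from {0 (local), 1, ..., num_vms (VM index)}."""
    rng = rng or random.Random()
    child = list(chrom)                    # copy; parent stays intact
    pos = rng.randrange(len(child))        # gene to mutate
    child[pos] = rng.randrange(num_vms + 1)
    return child
```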

Algorithm 4 Single-point mutation for task execution order algorithm. 

5. Simulation Results and Discussion
 (1) Local computing (LC): the local execution involves no offloading; all tasks are executed locally on the MDs;
 (2) Random offloading (RA): each subtask in the workflow is either randomly offloaded to an MEC server for execution or executed locally;
 (3) Adaptive genetic algorithm (AGA): all tasks of the workflow are executed locally or offloaded to the MEC for execution based on the adaptive genetic algorithm in [24].
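The two simpler baselines can be expressed as one-line policies over a binary/VM-index location encoding; these helper names are illustrative, not from the paper:

```python
import random

def lc_policy(num_subtasks):
    """Local computing (LC): nothing is offloaded."""
    return [0] * num_subtasks

def ra_policy(num_subtasks, num_vms, rng=None):
    """Random offloading (RA): each subtask independently picks local
    execution (0) or a uniformly random VM (1..num_vms)."""
    rng = rng or random.Random()
    return [rng.randrange(num_vms + 1) for _ in range(num_subtasks)]
```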
6. Conclusions
7. Work Limitations
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
 Hoang, D.T.; Lee, C.; Niyato, D.; Wang, P. A survey of mobile cloud computing: Architecture, applications, and approaches. Wirel. Commun. Mob. Comput. 2013, 13, 1587–1611.
 Sahni, J.; Vidyarthi, D.P. A Cost-Effective Deadline-Constrained Dynamic Scheduling Algorithm for Scientific Workflows in a Cloud Environment. IEEE Trans. Cloud Comput. 2018, 6, 2–18.
 Wang, X.; Yang, L.T.; Chen, X.; Han, J.; Feng, J. A Tensor Computation and Optimization Model for Cyber-Physical-Social Big Data. IEEE Trans. Sustain. Comput. 2019, 4, 326–339.
 Shi, W.; Cao, J.; Zhang, Q.; Liu, W. Edge Computing—An Emerging Computing Model for the Internet of Everything Era. J. Comput. Res. Dev. 2017, 54, 907–924.
 Peng, K.; Leung, V.C.M.; Xu, X.; Zheng, L.; Wang, J.; Huang, Q. A Survey on Mobile Edge Computing: Focusing on Service Adoption and Provision. Wirel. Commun. Mob. Comput. 2018, 2018, 8267838.
 Mobile Edge Computing—A Key Technology towards 5G; ETSI White Paper No. 11; ETSI: Valbonne, France, 2015; ISBN 9791092620085.
 Sun, X.; Ansari, N. EdgeIoT: Mobile Edge Computing for the Internet of Things. IEEE Commun. Mag. 2016, 54, 22–29.
 Peng, Q.; Jiang, H.; Chen, M.; Liang, J.; Xia, Y. Reliability-aware and Deadline-constrained workflow scheduling in Mobile Edge Computing. In Proceedings of the 2019 IEEE 16th International Conference on Networking, Sensing and Control (ICNSC), Banff, AB, Canada, 9–11 May 2019; pp. 236–241.
 Leymann, F.; Roller, D. Workflow-based applications. IBM Syst. J. 1997, 36, 102–123.
 Pandey, S.; Wu, L.; Guru, S.M.; Buyya, R. A Particle Swarm Optimization-Based Heuristic for Scheduling Workflow Applications in Cloud Computing Environments. In Proceedings of the 2010 24th IEEE International Conference on Advanced Information Networking and Applications, Perth, Australia, 20–23 April 2010; pp. 400–407.
 Li, X.; Chen, T.; Yuan, D.; Xu, J.; Liu, X. A Novel Graph-based Computation Offloading Strategy for Workflow Applications in Mobile Edge Computing. arXiv 2021, arXiv:2102.12236.
 Zhang, G.; Zhang, W.; Cao, Y.; Li, D.; Wang, L. Energy-Delay Tradeoff for Dynamic Offloading in Mobile-Edge Computing System With Energy Harvesting Devices. IEEE Trans. Ind. Inform. 2018, 14, 4642–4655.
 Dong, H.; Zhang, H.; Li, Z.; Liu, H. Computation Offloading for Service Workflow in Mobile Edge Computing. Comput. Eng. Appl. 2019, 55, 36–43.
 Li, W.; Liu, H.; Li, Z.; Yuan, Y. Security and energy aware scheduling for service workflow in mobile edge computing. Comput. Integr. Manuf. Syst. 2020, 26, 1831–1842.
 Sundar, S.; Liang, B. Offloading Dependent Tasks with Communication Delay and Deadline Constraint. In Proceedings of the IEEE INFOCOM 2018—IEEE Conference on Computer Communications, Honolulu, HI, USA, 16–19 April 2018; pp. 37–45.
 Guo, S.; Liu, J.; Yang, Y.; Xiao, B.; Li, Z. Energy-Efficient Dynamic Computation Offloading and Cooperative Task Scheduling in Mobile Cloud Computing. IEEE Trans. Mob. Comput. 2019, 18, 319–333.
 Ning, Z.; Dong, P.; Kong, X.; Xia, F. A Cooperative Partial Computation Offloading Scheme for Mobile Edge Computing Enabled Internet of Things. IEEE Internet Things J. 2019, 6, 4804–4814.
 Sun, J.; Yin, L.; Zou, M.; Zhang, Y.; Zhang, T.; Zhou, J. Makespan-minimization workflow scheduling for complex networks with social groups in edge computing. J. Syst. Archit. 2020, 108, 101799.
 Wang, Z.; Zheng, W.; Chen, P.; Ma, Y.; Xia, Y.; Liu, W.; Li, X.; Guo, K. A Novel Coevolutionary Approach to Reliability Guaranteed Multi-Workflow Scheduling upon Edge Computing Infrastructures. Secur. Commun. Netw. 2020, 2020, 6697640.
 Elgendy, I.A.; Zhang, W.Z.; Zeng, Y.; He, H.; Tian, Y.C.; Yang, Y. Efficient and Secure Multi-User Multi-Task Computation Offloading for Mobile-Edge Computing in Mobile IoT Networks. IEEE Trans. Netw. Serv. Manag. 2020, 17, 2410–2422.
 Chen, X. Decentralized Computation Offloading Game for Mobile Cloud Computing. IEEE Trans. Parallel Distrib. Syst. 2015, 26, 974–983.
 Srinivas, M.; Patnaik, L.M. Adaptive probabilities of crossover and mutation in genetic algorithms. IEEE Trans. Syst. Man Cybern. 1994, 24, 656–667.
 Rappaport, T.S. Wireless Communications: Principles and Practice; Prentice Hall: Hoboken, NJ, USA, 1996.
 Yan, W.; Shen, B.; Liu, X. Offloading and resource allocation of MEC based on adaptive genetic algorithm. Appl. Electron. Tech. 2020, 46, 95–100.
Simulation Parameter  Value 

Bandwidth  5 MHz 
Transmission power of mobile device ${p}_{k}^{\mathrm{trans}}$  600 mW 
Receiving power of mobile device ${p}_{k}^{\mathrm{re}}$  100 mW 
Background noise $\sigma $  −113 dBm 
MD execution power consumption coefficient $\kappa $  ${10}^{-24}$ Joule/cycle 
MEC execution power consumption coefficient ${\kappa}_{1}$  ${10}^{-26}$ Joule/cycle 
Data size of subtask ${\omega}_{i,k}$  [50, 300] kB 
Data transmitted between two subtasks ${d}_{i,j,k}$  [300, 500] kB 
CPU cycles needed per byte of task data ${c}_{i,k}$  [1000, 1200] cycles/byte 
MD local computation capability ${f}_{k}^{\mathrm{loc}}$  [0.1, 1] GHz 
MEC computation capability ${f}_{m}^{\mathrm{ser}}$  [2, 4] GHz 
${p}_{\mathrm{c}1}$, ${p}_{\mathrm{c}2}$, ${p}_{\mathrm{m}1}$, ${p}_{\mathrm{m}2}$  0.9, 0.4, 0.1, 0.05 
Population size  80 