  • Article
  • Open Access

19 July 2022

Computational Offloading of Service Workflow in Mobile Edge Computing

College of Information and Electrical Engineering, Heilongjiang Bayi Agricultural University, Daqing 163316, China
* Author to whom correspondence should be addressed.

Abstract

Mobile edge computing (MEC) sinks the functions and services of cloud computing to the edge of the network to provide users with storage and computing resources. For workflow tasks, the interdependency and sequence constraints among subtasks make the offloading strategy more complicated. To obtain the optimal offloading and scheduling scheme for workflow tasks that minimizes the total energy consumption of the system, a workflow task offloading and scheduling scheme based on an improved genetic algorithm is proposed for an MEC network with multiple users and multiple virtual machines (VMs). Firstly, the system model of the offloading and scheduling of workflow tasks in a multi-user, multi-VM MEC network is built. Then, the problem of determining the optimal offloading and scheduling scheme of a workflow that minimizes the total energy consumption of the system while meeting the deadline constraint is formulated. To solve this problem, the improved genetic algorithm is adopted to obtain the optimal offloading strategy and schedule. Finally, the simulation results show that the proposed scheme achieves lower energy consumption than other benchmark schemes.

1. Introduction

With the development of computer networks, cloud computing, and the Internet of Things (IoT), mobile devices (MDs) have become an indispensable part of people's daily lives. However, due to their limited computing power and battery capacity, executing complex workflow applications, such as interactive online games and image processing, on MDs is a great challenge. Mobile cloud computing (MCC) offloads applications from MDs to the cloud for execution, which can mitigate the insufficient computing and storage capacity of MDs []. However, the cloud is usually located far away from mobile users, which may cause higher delay and energy consumption in data transmission and reduce the quality of service (QoS) for users, especially for delay-sensitive applications [].
To solve this problem, mobile edge computing (MEC) has been proposed as a new computing model [,,]. It provides edge clouds with solid computing capabilities close to resource-constrained MDs to enhance their processing capabilities []. Thus, it can address the high transmission cost, high energy consumption, and large delay of traditional cloud computing []. MEC-enabled 5G wireless systems are expected to meet the real-time, low-latency, and high-bandwidth access requirements of IoT devices with time-sensitive computation tasks. Thus, MEC has become a key technology for IoT and 5G. In MEC networks, many mobile applications, such as image processing and face recognition, perform typical business processes in which the entire task is split into multiple subtasks, with predetermined relationships and data dependencies between the subtasks []. Compared with general parallel tasks, the workflow scheduling problem in MEC is more complicated and challenging, since the execution order and execution position of subtasks affect the latency and energy consumption of the entire workflow [,]. Thus, how to optimally allocate the subtasks of a workflow between the local node and the edge, so as to minimize the energy consumption of the entire system under the maximum tolerable latency constraint, is an essential issue in MEC networks.
To address this issue, considering an MEC network with multiple users and multiple virtual machines (VMs), the problem of determining the optimal offloading strategy and scheduling scheme under the deadline constraint is solved through a genetic algorithm. The main contributions of this paper can be summarized as follows:
  • We study the offloading and scheduling problems of workflow tasks in an MEC scenario with multiple MDs and multiple VMs. A workflow model based on a directed acyclic graph, which indicates the execution order and execution location of workflow tasks, is built.
  • We propose a workflow scheduling strategy based on an adaptive genetic algorithm. In the genetic algorithm, the offloading schedule, consisting of the execution order and execution location of the workflow, is defined as an individual. The optimal scheduling strategy for workflows in multi-user, multi-task scenarios is obtained through individual correction, competition for survival, selection, crossover, and mutation operations.
  • The simulation results show that, compared with other benchmark methods, such as local offloading and random offloading, the proposed method can achieve optimal task scheduling for multi-user workflow to minimize the total energy consumption of the system.
The remainder of this paper is structured as follows: Section 2 discusses the related work in the past few years. Section 3 presents the system model. In Section 4, we formulate the problem of minimizing the total energy consumption in a multi-user multi-workflow MEC system, then propose an offloading and scheduling scheme to solve this problem. Section 5 presents extensive simulation experiments. Finally, Section 6 concludes this paper.

3. System Model

In a multi-user, multi-VM MEC network, as illustrated in Figure 1, there are a single-antenna base station (BS) and $K$ MDs, denoted as the set $U = \{1, 2, \ldots, K\}$, randomly located around the BS, each with workflow tasks to be computed. The MEC server includes $M$ VMs, denoted as $S = \{1, 2, \ldots, M\}$, for concurrently processing multiple computation tasks. Each VM works independently. The workflow processed by an MD consists of $I$ subtasks. Each subtask can be scheduled to be executed locally or by the MEC server through wireless access. MDs can offload all or part of the computation tasks to the MEC server to reduce energy consumption and delay.
Figure 1. System model.

3.1. Workflow Task Model

In this paper, a weighted directed acyclic graph (DAG) is used to describe the execution sequence dependency of workflow in the MEC network.
As shown in Figure 2, let the 2-tuple $W_k = (V_k, E_k)$ denote the DAG describing the execution sequence dependency of workflow $W_k$, where $V_k = \{v_{1,k}, v_{2,k}, \ldots, v_{I,k}\}$ is the set of $I$ subtasks in workflow $W_k$ and $E_k = \{e_{i,j} \mid i, j \in I\}$ is the set of edges between subtasks.
Figure 2. Workflow directed acyclic graph.
Each edge connecting two subtasks indicates a priority constraint between them. For example, in workflow $W_k$, $v_0$ is the entry subtask and the predecessor of subtask $v_1$, which means that subtask $v_1$ can only start after $v_0$ finishes computing. For each subtask $v_{i,k}$, we use a 2-tuple $v_{i,k} = (\omega_{i,k}, c_{i,k})$ to represent the $i$th subtask of MD $k$, in which $\omega_{i,k}$ is the input data size (bits) of subtask $v_{i,k}$, and $c_{i,k}$ is the number of CPU cycles needed to process one bit of data. It is assumed that all VMs have enough capacity to execute the computation tasks and, once a task is assigned, execute it until completion.

3.2. Communication Model

We consider an MEC network where orthogonal frequency division multiple access (OFDMA) is adopted to offload tasks to the BS. When subtask $v_{i,k}$ is offloaded to the edge server, the uplink transmission rate $r_k^u$ of MD $k$ is given as
$$r_k^u = B_k \log_2\left(1 + \frac{g_k p_k^{\mathrm{trans}}}{\sigma_k^2}\right),$$
where $B_k$ is the channel bandwidth between MD $k$ and the MEC server, $p_k^{\mathrm{trans}}$ is the transmission power of MD $k$, and $g_k$ is the channel gain between MD $k$ and the MEC server. In addition, the noise obeys a Gaussian distribution with zero mean, and its variance is represented by $\sigma_k^2$.
We assume that the downlink channel has the same fading environment and noise; thus, the downlink transmission rate $r_k^d$ of MD $k$ is given as
$$r_k^d = B_k \log_2\left(1 + \frac{g_k p_m^{\mathrm{trans}}}{\sigma_k^2}\right).$$
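As a concrete illustration of the rate model above, the following sketch evaluates the uplink Shannon rate; the function name and the numeric parameter values are illustrative assumptions, not values from the paper.

```python
import math

def uplink_rate(bandwidth_hz, channel_gain, tx_power_w, noise_var):
    """Shannon-capacity uplink rate: r_k^u = B_k * log2(1 + g_k * p_k^trans / sigma_k^2)."""
    return bandwidth_hz * math.log2(1 + channel_gain * tx_power_w / noise_var)

# Assumed example values: 1 MHz bandwidth, -100 dB channel gain,
# 0.1 W transmit power, 1e-13 W noise power -> received SNR of 100 (20 dB).
r_u = uplink_rate(1e6, 1e-10, 0.1, 1e-13)
```

The same function models the downlink rate by substituting the MEC transmit power and the downlink bandwidth.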

3.3. Computation Model

Let $W_k$ denote the workflow of MD $k$, consisting of $I$ subtasks. These subtasks can be computed locally or offloaded to VMs via a wireless channel for computation. $T_k^{\max}$ represents the deadline constraint of workflow $W_k$. In the following, the computation overhead is discussed in terms of both execution time and energy consumption for local computing and offloading computing.
(1)
Local computing: We define $f_k^{\mathrm{loc}}$ as the local computation capability of MD $k$. When a subtask is executed locally, the local computation time $T_{i,k}^{\mathrm{loc}}$ is
$$T_{i,k}^{\mathrm{loc}} = \frac{\omega_{i,k} c_{i,k}}{f_k^{\mathrm{loc}}}.$$
The energy consumption of computing the subtask can be calculated as
$$E_{i,k}^{\mathrm{loc}} = \kappa \left(f_k^{\mathrm{loc}}\right)^2 \omega_{i,k} c_{i,k},$$
where $\kappa$ is the energy consumption factor related to the CPU chip architecture, and $\kappa (f_k^{\mathrm{loc}})^2$ is the energy consumption per CPU cycle [].
(2)
Offloading computing: If a subtask is offloaded to a VM for computing, the total execution time consists of two parts: the transmission time for the MD to offload the subtask to the MEC server, and the computation time on the VM. The transmission time of offloading a subtask to the MEC server can be calculated as follows:
$$T_{i,k}^{\mathrm{tr}} = \frac{\omega_{i,k}}{r_k^u}.$$
The energy consumption of the uplink can be calculated as follows:
$$E_{i,k}^{\mathrm{tr}} = p_k^{\mathrm{trans}} \frac{\omega_{i,k}}{r_k^u},$$
where $p_k^{\mathrm{trans}}$ is the transmission power of MD $k$. The computation time of a subtask on VM $m$ is given as
$$T_{i,m,k}^{\mathrm{com}} = \frac{\omega_{i,k} c_{i,k}}{f_m^{\mathrm{ser}}},$$
where $f_m^{\mathrm{ser}}$ is the CPU frequency of VM $m$. Therefore, the total execution time for offloading can be expressed as
$$T_{i,m,k}^{\mathrm{ser}} = T_{i,k}^{\mathrm{tr}} + T_{i,m,k}^{\mathrm{com}} = \frac{\omega_{i,k}}{r_k^u} + \frac{\omega_{i,k} c_{i,k}}{f_m^{\mathrm{ser}}}.$$
Similarly, when a subtask is offloaded to a VM, the energy consumption of the MD includes the transmission energy consumption and the circuit loss of the local device. Similar to [], we consider only the energy consumption for offloading and ignore the circuit loss. Thus, the energy consumption of the MD for offloading a subtask is given by
$$E_{i,m,k}^{\mathrm{ser}} = E_{i,k}^{\mathrm{tr}} = p_k^{\mathrm{trans}} \frac{\omega_{i,k}}{r_k^u}.$$
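The per-subtask costs of the two execution modes can be sketched as follows; the function names and the sample parameter values are our own assumptions for illustration.

```python
def local_cost(omega_bits, cycles_per_bit, f_loc, kappa):
    """Local execution: T^loc = omega * c / f_loc, E^loc = kappa * f_loc^2 * omega * c."""
    cycles = omega_bits * cycles_per_bit
    return cycles / f_loc, kappa * f_loc ** 2 * cycles

def offload_cost(omega_bits, cycles_per_bit, f_vm, r_up, p_trans):
    """Offloading: uplink time omega / r_up plus VM compute time; the MD pays
    only the transmission energy (circuit loss ignored, as in the paper)."""
    t_tr = omega_bits / r_up
    t_com = omega_bits * cycles_per_bit / f_vm
    return t_tr + t_com, p_trans * t_tr

# Assumed example parameters: a 1 Mbit subtask, 1000 cycles/bit, a 1 GHz MD CPU
# with kappa = 1e-27, a 5 GHz VM, a 5 Mbit/s uplink, and 0.1 W transmit power.
t_loc, e_loc = local_cost(1e6, 1000, 1e9, 1e-27)
t_off, e_off = offload_cost(1e6, 1000, 5e9, 5e6, 0.1)
```

Comparing the returned energies shows when offloading pays off for the MD: the transmission energy $p_k^{\mathrm{trans}} \omega_{i,k} / r_k^u$ versus the local energy $\kappa (f_k^{\mathrm{loc}})^2 \omega_{i,k} c_{i,k}$.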
Let $\mathcal{L} = S \cup \{0\} = \{0, 1, 2, \ldots, M\}$ denote the set of execution positions of the subtasks. Since a task can only be performed by one VM, an offloading decision variable $x_{i,k,m} \in \{0, 1\}$ is introduced: if subtask $v_{i,k}$ is offloaded to VM $m$ ($m \in S$) for computation, then $x_{i,k,m} = 1$; otherwise, $x_{i,k,m} = 0$. Thus, the total time and the energy consumption of subtask $v_{i,k}$ are given as follows:
$$T_{i,k} = \Big(1 - \sum_{m \in S} x_{i,k,m}\Big) T_{i,k}^{\mathrm{loc}} + \sum_{m \in S} x_{i,k,m}\, T_{i,m,k}^{\mathrm{ser}},$$
$$E_{i,k} = \Big(1 - \sum_{m \in S} x_{i,k,m}\Big) E_{i,k}^{\mathrm{loc}} + \sum_{m \in S} x_{i,k,m}\, E_{i,m,k}^{\mathrm{ser}}.$$
In the workflow, suppose subtask $v_{j,k}$ is the immediate successor of subtask $v_{i,k}$. When subtask $v_{i,k}$ finishes computing, its output data $d_{i,j,k}$ is transmitted to the successor subtask $v_{j,k}$. Assume that subtask $v_{i,k}$ is computed locally and $v_{j,k}$ is computed on VM $m$. The transmission time of the output data $d_{i,j,k}$ from $v_{i,k}$ to $v_{j,k}$ and the transmission energy consumption are
$$T_{i,j,k}^{\mathrm{tr}} = \frac{d_{i,j,k}}{r_k^u},$$
$$E_{i,j,k}^{\mathrm{tr}} = p_k^{\mathrm{trans}} \frac{d_{i,j,k}}{r_k^u},$$
where p k trans is the transmission power of MD k, and r k u is the transmission rate of MD k.
Similarly, in the case that the subtask v j , k is executed on VM m and the subtask v i , k is executed locally, the local device needs to download the data from VM. Let p k re denote the downloading power of MD k. The data downloading time and the download energy consumption can be calculated respectively as
$$T_{j,i,k}^{\mathrm{tr}} = \frac{d_{j,i,k}}{r_k^d},$$
$$E_{j,i,k}^{\mathrm{tr}} = p_k^{\mathrm{re}} \frac{d_{j,i,k}}{r_k^d}.$$
The total computation time of workflow $W_k$ on MD $k$ is the sum of the subtask computation times and the transmission times of data exchanged between associated subtasks. The total energy consumption of workflow $W_k$ on MD $k$ is the sum of the local computing energy consumption, the offloading energy consumption, and the energy consumption for data transmission between associated subtasks. As mentioned above, the total computation time and the total energy consumption can be calculated respectively as
$$T_k = \sum_{i=1}^{I} T_{i,k} + \sum_{i=1}^{I-1} \sum_{j=2}^{I} x_{i,k,m}\, x_{j,k,m}\, T_{i,j,k}^{\mathrm{tr}},$$
$$E_k = \sum_{i=1}^{I} E_{i,k} + \sum_{i=1}^{I-1} \sum_{j=2}^{I} x_{i,k,m}\, x_{j,k,m}\, E_{i,j,k}^{\mathrm{tr}}.$$
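Given a fixed offloading decision, the MD-side energy of a workflow can be evaluated as in the following sketch. The helper name and the edge representation are assumptions, and the transfer term is a simplified variant of the cross terms above: it charges transfer energy only on edges whose endpoints run in different places (one local, one on a VM).

```python
def workflow_energy(subtasks, edges, position, r_u, r_d, p_trans, p_re, kappa, f_loc):
    """MD-side energy of one workflow under a fixed offloading decision.
    subtasks: list of (omega_bits, cycles_per_bit); edges: {(i, j): d_ij bits};
    position[i]: 0 for local execution, m >= 1 for VM m."""
    e = 0.0
    for i, (omega, c) in enumerate(subtasks):
        if position[i] == 0:
            e += kappa * f_loc ** 2 * omega * c   # local computing energy
        else:
            e += p_trans * omega / r_u            # uplink offloading energy
    for (i, j), d in edges.items():
        if position[i] == 0 and position[j] != 0:
            e += p_trans * d / r_u                # upload intermediate data
        elif position[i] != 0 and position[j] == 0:
            e += p_re * d / r_d                   # download intermediate data
    return e
```

Summing this quantity over all $K$ MDs gives the objective of the optimization problem formulated in Section 4.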

4. Problem Formulation

In this section, an optimization problem to minimize the total energy consumption in a multi-user multi-workflow MEC system is formulated. Considering the execution time and the energy consumption of each MD, the workflow offloading position and execution order are jointly studied.
The problem of minimizing the system energy consumption can be expressed as
$$\mathbf{P1}: \ \min_{x_{i,k,m}} \ \sum_{k=1}^{K} E_k$$
$$\text{s.t.} \quad \mathrm{C1}: \ T_k \le T_k^{\max}, \ \forall k \in U,$$
$$\mathrm{C2}: \ x_{i,k,m} \in \{0, 1\}, \ \forall i \in I, \ \forall k \in U, \ \forall m \in \mathcal{L},$$
$$\mathrm{C3}: \ \sum_{m \in \mathcal{L}} x_{i,k,m} = 1, \ \forall i \in I, \ \forall k \in U,$$
where C1 is the deadline constraint of workflow W k , C2 is the offloading decision variable constraint, and C3 is the offloading constraint of the subtasks, that is, each subtask in the workflow can only be executed locally or offloaded to one VM.

4.1. Algorithm Implementation

P1 is an NP-hard problem that is difficult to solve using traditional methods such as integer programming and convex optimization. In this section, we adopt a genetic algorithm to solve it. The genetic algorithm is an effective method for solving optimization problems based on the principle of evolution. It generates individuals with suitable fitness through selection, crossover, and mutation operations to obtain feasible solutions from a large search space in limited time. The implementation process of the algorithm is shown in Figure 3.
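The overall flow in Figure 3 can be sketched as the following generic loop; the operator arguments and the toy bitstring demonstration are placeholders for the workflow-specific operators defined in the rest of this section, and fitness is assumed to be minimized (total energy here).

```python
import random

def genetic_search(init_pop, fitness, repair, crossover, mutate, generations=50):
    """Skeleton GA loop: correct individuals, preserve the elite, run pairwise
    survival competition, then crossover and mutation to refill the population."""
    pop = [repair(ind) for ind in init_pop]
    for _ in range(generations):
        elite = min(pop, key=fitness)          # best individual survives unchanged
        random.shuffle(pop)
        winners = [min(pair, key=fitness)      # competition for survival
                   for pair in zip(pop[::2], pop[1::2])]
        children = [elite]
        while len(children) < len(pop):
            p1, p2 = random.sample(winners, 2)
            for child in crossover(p1, p2):
                children.append(repair(mutate(child)))
        pop = children[:len(pop)]
    return min(pop, key=fitness)

# Toy demonstration: minimize the number of ones in a bitstring.
random.seed(1)
pop0 = [[random.randint(0, 1) for _ in range(8)] for _ in range(8)]

def one_point(a, b):
    p = random.randrange(1, len(a))
    return [a[:p] + b[p:], b[:p] + a[p:]]

def flip_one(c):
    c = c[:]
    c[random.randrange(len(c))] = random.randint(0, 1)
    return c

best = genetic_search(pop0, sum, lambda ind: ind, one_point, flip_one, generations=30)
```

Because the elite is carried over unchanged, the best fitness in the population never worsens from one generation to the next.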
Figure 3. The algorithm flowchart.

4.1.1. Encoding

In the genetic algorithm, the task execution order and the offloading positions of the subtasks are jointly expressed as an individual's gene. The coding method is shown in Figure 4. Assuming that there are $K$ MDs and the workflow $W_k$ of each MD is divided into $I$ subtasks, the length of an individual's gene is $I \times K$. The subtask execution order is sorted in the workflow first, and then an offloading position is assigned to each subtask. The offloading position of a subtask is indicated by the value of the corresponding chromosome in the gene. There are $M + 1$ possibilities for the offloading position: if the subtask is executed locally, the value of the chromosome is set to 0; if the subtask is executed on VM $m$, the value is set to $m$, $m \in \{1, 2, \ldots, M\}$.
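A minimal illustration of this encoding for hypothetical sizes $K = 2$, $I = 3$, $M = 2$ (the values are chosen only for the example):

```python
# A gene pairs each entry of a precedence-feasible execution order with an
# offloading position: 0 = local execution, m in {1, ..., M} = VM m.
order = [(1, 1), (1, 2), (2, 1), (2, 2), (1, 3), (2, 3)]  # (MD k, subtask i)
positions = [0, 2, 1, 0, 2, 1]                            # one chromosome value per subtask
individual = list(zip(order, positions))                  # gene of length I * K = 6
```

Decoding an individual therefore recovers both where each subtask runs and in which order the subtasks are dispatched.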
Figure 4. Coding scheme in the genetic algorithm.

4.1.2. Population Initialization and Individual Correction

The initialization operation includes the random initialization of the offloading position and the execution order of the subtasks. Considering that the subtasks in the workflow must meet the priority constraints, the initialization of the subtask execution order is designed as follows. Let the set $S$ denote the sortable subtasks, i.e., the subtasks with no predecessor or whose predecessors have all been sorted. Firstly, a sortable task is randomly selected and appended to the sequence; then another sortable task is selected and appended. This process iterates until a feasible task sequence is generated. For the initialization of the task positions, an integer in the range 0 to $M$ is randomly generated to indicate the offloading position of each subtask. All tasks are iteratively checked in the same way as the execution order to generate an initial set of task positions.
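The execution-order initialization described above is a randomized topological sort. A sketch, with `preds` as an assumed predecessor-set representation of the DAG:

```python
import random

def random_feasible_order(num_tasks, preds):
    """Randomly build an execution order that respects precedence: repeatedly
    pick any 'sortable' task, i.e., one whose predecessors have all been
    scheduled (preds[i] is the set of predecessors of subtask i)."""
    done, order = set(), []
    while len(order) < num_tasks:
        ready = [t for t in range(num_tasks) if t not in done and preds[t] <= done]
        task = random.choice(ready)
        order.append(task)
        done.add(task)
    return order

# Diamond-shaped DAG: 0 -> {1, 2} -> 3
order = random_feasible_order(4, {0: set(), 1: {0}, 2: {0}, 3: {1, 2}})
```

Every order produced this way is precedence-feasible by construction, so no post-hoc repair is needed at initialization time.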
For each workflow $W_k$, the computation time is calculated according to Formula (16). An individual that meets the deadline constraint becomes a valid individual.

4.1.3. Selection

An elite selection strategy is adopted: the individual with the best fitness in the current population does not participate in the crossover and mutation operations. Instead, it replaces the individual with the worst fitness after the crossover and mutation operations and enters the next generation directly.

4.1.4. Competition for Survival

In each generation, the $N$ individuals are randomly divided into $N/2$ pairs to compete for survival. In each pair, the individual with the better fitness is selected for the subsequent crossover operation. Thus, $N/2$ individuals are obtained.

4.1.5. Crossover

The $N/2$ individuals obtained from the survival competition are randomly selected in pairs to perform single-point crossover with crossover probability $P_c$. According to the adaptive probability adjustment formula proposed by Srinivas and Patnaik [], the adaptive crossover probability $P_c$ is
$$P_c = \begin{cases} p_{c1} - \dfrac{(p_{c1} - p_{c2})(f_{\min} - f_c)}{f_{\min} - f_{avg}}, & f_c \le f_{avg} \\[2mm] p_{c1}, & f_c > f_{avg} \end{cases}$$
where $f_{\min}$ and $f_{avg}$ are the minimum fitness and the average fitness in the population, respectively, and $f_c$ is the smaller of the two fitness values in the crossover pair. $p_{c1}$ and $p_{c2}$ are the maximum and minimum crossover probabilities, respectively.
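In code, the piecewise rule reads as follows (function and argument names are ours; since fitness is the energy to be minimized here, $f_{\min}$ is the best fitness in the population, and the sketch assumes $f_{\min} < f_{avg}$):

```python
def adaptive_prob(f_ind, f_min, f_avg, p_max, p_min):
    """Adaptive probability as used in the paper: individuals at or below the
    average fitness get an interpolated probability between p_max (at f_min)
    and p_min (at f_avg); the rest keep the maximum p_max."""
    if f_ind <= f_avg:
        return p_max - (p_max - p_min) * (f_min - f_ind) / (f_min - f_avg)
    return p_max

p_at_avg = adaptive_prob(2.0, 1.0, 2.0, 0.9, 0.3)  # individual at the average fitness
```

The same helper serves both the crossover probability $P_c$ and the mutation probability $P_m$ of Section 4.1.6, with the corresponding $(p_1, p_2)$ bounds.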
Since each subtask in the workflow must respect a particular order relationship, the new individuals generated by the crossover operation must also follow this order relationship. The crossover operation on the execution order is shown in Figure 5. Take two execution orders, denoted Order1 and Order2, as an example. Firstly, a crossover point is randomly generated, which determines the crossover MD, i.e., the MD whose gene segment contains the crossover point. Secondly, the two execution orders are crossed to generate two temporary execution orders. Finally, each temporary execution order is scanned from beginning to end, and the repeated subtasks are removed. Thus, two new execution orders are generated. The specific operation is shown in Algorithm 1.
Figure 5. The single−point crossover operation of task execution order.
The single-point crossover of the task offloading positions is similar to that of the execution order, as shown in Figure 6. First, a crossover point is randomly selected in the task offloading position sequences. Then, the matching segments of the two position sequences are swapped. The details of the process are shown in Algorithm 2.
Algorithm 1 Task execution order single-point crossover algorithm.
1: BEGIN
2:   $f_c = \min(f_1, f_2)$
3:   if $f_c \le f_{avg}$ then
4:     $P_c = p_{c1} - (p_{c1} - p_{c2})(f_{\min} - f_c)/(f_{\min} - f_{avg})$
5:   else
6:     $P_c = p_{c1}$
7:   end if
8:   Randomly select the crossover user and crossover points
9:   Execute the crossover operations and remove the duplicate tasks in the new individuals
10: END
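The repair step of Algorithm 1 can be sketched as follows: each child keeps one parent's prefix up to the crossover point and fills the remaining subtasks in the other parent's relative order, scanning front to back and skipping duplicates, so every subtask appears exactly once and precedence feasibility is preserved. The function name is our own.

```python
def order_crossover(order1, order2, point):
    """Single-point crossover of two execution orders with duplicate removal."""
    def fill(head, donor):
        child, seen = list(head), set(head)
        for t in donor:                # append the donor's remaining subtasks
            if t not in seen:          # in their original relative order
                child.append(t)
                seen.add(t)
        return child
    return fill(order1[:point], order2), fill(order2[:point], order1)

c1, c2 = order_crossover([0, 1, 2, 3], [0, 2, 1, 3], 2)
```

If both parents are precedence-feasible, so are both children: prefix tasks keep the first parent's relative order, the remaining tasks keep the second parent's, and no edge can point from the tail back into the prefix.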
Figure 6. Single−point crossover operation of task offloading position.
Algorithm 2 Single-point crossover algorithm for task offloading position.
1: BEGIN
2:   $f_c = \min(f_1, f_2)$
3:   if $f_c \le f_{avg}$ then
4:     $P_c = p_{c1} - (p_{c1} - p_{c2})(f_{\min} - f_c)/(f_{\min} - f_{avg})$
5:   else
6:     $P_c = p_{c1}$
7:   end if
8:   Randomly select the crossover points
9:   Execute the crossover operations
10: END

4.1.6. Mutation

In the mutation operation, the genes of some individuals are randomly selected and changed to obtain new individuals with new characteristics. To maximize the chance of obtaining better individuals, in this paper, the individual with the best fitness in each generation is selected for mutation. The best individual is mutated with mutation probability $P_m$ to generate $N/4$ individuals for the next generation. The mutation operation on the subtask offloading position is shown in Figure 7. For each individual, the mutation operation is performed with mutation probability $P_m$. The value range of the mutation for each gene is $0 \sim M$. The specific operation is shown in Algorithm 3.
Algorithm 3 Offloading position single-point mutation algorithm.
1: BEGIN
2:   if $f_m \le f_{avg}$ then
3:     $P_m = p_{m1} - (p_{m1} - p_{m2})(f_{\min} - f_m)/(f_{\min} - f_{avg})$
4:   else
5:     $P_m = p_{m1}$
6:   end if
7:   Randomly select the mutation points
8:   Execute the mutation operations
9: END
Figure 7. Single−point mutation operation of task offloading position.
The mutation operation on the subtask execution order is shown in Figure 8 and Algorithm 4. As shown in Figure 8, firstly, a subtask $v_{i,k}$ is randomly selected from the workflow. Then, the subset $\{v_{0,k}, v_{1,k}, \ldots, v_{a,k}\}$ consisting of all predecessor subtasks of $v_{i,k}$ and the subset $\{v_{b,k}, v_{b+1,k}, \ldots, v_{I-1,k}\}$ consisting of all successor subtasks of $v_{i,k}$ are generated through forward and backward searching, respectively. As the mutation of the subtask execution order must satisfy the order constraint, subtask $v_{i,k}$ can only be inserted at a position between $v_{a+1,k}$ and $v_{b-1,k}$. The set $\{v_{a+1,k}, \ldots, v_{b-1,k}\}$ is called the candidate set. Finally, subtask $v_{i,k}$ is placed at any position in the candidate set except its initial one. Similar to the crossover operation, the adaptive mutation probability is
$$P_m = \begin{cases} p_{m1} - \dfrac{(p_{m1} - p_{m2})(f_{\min} - f_m)}{f_{\min} - f_{avg}}, & f_m \le f_{avg} \\[2mm] p_{m1}, & f_m > f_{avg} \end{cases}$$
where $f_{\min}$ represents the minimum fitness in the population, $f_{avg}$ represents the average fitness of the entire population, and $f_m$ represents the fitness of the individual selected for mutation. $p_{m1}$ and $p_{m2}$ are the maximum and minimum values of the mutation probability, respectively.
Algorithm 4 Single-point mutation for task execution order algorithm.
1: BEGIN
2:   if $f_m \le f_{avg}$ then
3:     $P_m = p_{m1} - (p_{m1} - p_{m2})(f_{\min} - f_m)/(f_{\min} - f_{avg})$
4:   else
5:     $P_m = p_{m1}$
6:   end if
7:   Find the mutation user
8:   Obtain the predecessor set of task $v_{mu\_p,user}$
9:   Obtain the successor set of task $v_{mu\_p,user}$
10:  Obtain the candidate set $\{v_{a+1,user}, \ldots, v_{b-1,user}\}$
11:  Randomly select a new position in the set $\{v_{a+1,user}, \ldots, v_{b-1,user}\}$ to insert $v_{mu\_p,user}$ and generate a new individual
12: END
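A sketch of the candidate-set insertion in Algorithm 4; the function name and the `preds`/`succs` DAG representation are assumptions for illustration.

```python
import random

def order_mutation(order, idx, preds, succs, rng):
    """Move the subtask at position idx of `order` to a random feasible slot:
    after its last predecessor and before its first successor (the candidate
    set).  preds/succs map each task to its predecessor/successor task sets."""
    task = order[idx]
    rest = order[:idx] + order[idx + 1:]
    lo = max((rest.index(p) + 1 for p in preds[task]), default=0)
    hi = min((rest.index(s) for s in succs[task]), default=len(rest))
    pos = rng.randint(lo, hi)  # hi >= lo holds for any precedence-feasible order
    return rest[:pos] + [task] + rest[pos:]

# Diamond DAG: 0 -> {1, 2} -> 3; move subtask 1 to a random feasible position.
preds = {0: set(), 1: {0}, 2: {0}, 3: {1, 2}}
succs = {0: {1, 2}, 1: {3}, 2: {3}, 3: set()}
mutated = order_mutation([0, 1, 2, 3], 1, preds, succs, random.Random(0))
```

Because the insertion slot is bounded by the last predecessor and the first successor, the mutated order always remains precedence-feasible.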
Figure 8. Task execution order single−point mutation operation.

5. Simulation Results and Discussion

This paper evaluates the performance of the proposed algorithm on the Python platform. Our simulation settings are described as follows. We consider a single-cell multi-user MEC network where MDs are randomly located in a 60 m × 60 m area. The wireless access base station is located at the center of this area. According to the path loss model considered in [], the channel gain between MD $k$ and the MEC is $g_{k,m} = d_{k,m}^{-\alpha}$, where $d_{k,m}$ is the distance between MD $k$ and the MEC, and $\alpha = 4$ is the path loss factor. Every MD has a workflow task to be executed. To test the performance of the proposed scheme, we consider an image processing application whose workflow tasks need to be executed. The simulation parameters are shown in Table 1.
Table 1. Simulation parameter.
To evaluate the performance, the proposed scheme is compared with some other computational offloading algorithms, which are introduced as follows:
(1)
Local computing (LC): The local execution involves no offloading. All tasks are executed locally on MDs;
(2)
Random offloading (RA): All subtasks in the workflow are randomly offloaded to some MEC servers for execution or executed locally;
(3)
Adaptive genetic algorithm (AGA): All tasks of the workflow are executed locally or offloaded to the MEC for execution based on the adaptive genetic algorithm in [].
Figure 9 shows the total system energy consumption versus the number of MDs. The number of subtasks in each workflow is 10. It can be seen from Figure 9 that, as the number of MDs increases, the total system energy consumption of all four methods increases, since more subtasks are executed. Our proposed scheme consumes less energy than the three compared algorithms due to its optimal allocation of the execution positions of subtasks. For the adaptive genetic algorithm, the adaptive crossover and mutation probabilities are dynamically adjusted with the fitness value to avoid falling into a local optimum. Therefore, its energy consumption is second only to that of our proposed algorithm.
Figure 9. The total system energy consumption versus the number of MDs.
Figure 10 illustrates the total system energy consumption versus the number of workflow subtasks for the four algorithms. From Figure 10, we can observe that, as the number of tasks increases, the total system energy consumption increases accordingly. Among the four algorithms, the proposed scheme consumes the least energy. This is because the proposed scheme optimally allocates the execution order and offloading position of each subtask in the workflow.
Figure 10. The total system energy consumption versus the number of workflow subtasks.
Figure 11 illustrates the total system energy consumption versus the number of MEC virtual servers. As shown in Figure 11, as the number of MEC virtual servers increases, the total system energy consumption of the four algorithms decreases accordingly, since there are more virtual servers to select from to minimize the energy consumption. The proposed scheme consumes less energy than the other three algorithms due to its optimal resource allocation. In addition, when the number of MEC virtual servers increases beyond 9, the decrease in the total system energy consumption slows down, since the number of virtual servers is already large enough for the subtasks to choose from and some virtual servers remain idle.
Figure 11. The total system energy consumption versus the number of virtual servers.
Finally, Figure 12 illustrates the total system energy consumption versus the average workload size. From Figure 12, we can see that, as the workload size of the subtasks increases, the energy consumption increases accordingly, since more computation is performed. The proposed scheme consumes less energy than the other three algorithms due to its optimal resource allocation.
Figure 12. The total system energy consumption versus the average workload size.

6. Conclusions

In this paper, to solve the problem of determining the scheduling and execution positions of the subtasks in a workflow, we proposed a multi-user workflow task offloading decision and scheduling scheme based on a genetic algorithm. Firstly, a system model of workflow scheduling and offloading decisions in a multi-user, multi-task scenario was built. Secondly, we formulated the problem of optimally determining the scheduling and execution positions of the subtasks to minimize the total energy consumption of the system under the deadline constraint as an optimization problem. Then, an improved genetic algorithm was adopted to obtain the optimal task execution order and offloading positions. Finally, the simulation results showed that, compared with other benchmark methods, our proposed scheme consumes less energy by optimally determining the scheduling and execution positions of the subtasks.

7. Work Limitations

In this paper, we considered the resource allocation problem of minimizing the total energy consumption of the system under a deadline constraint in a multi-user, single-BS static scenario. The major limitation of the present study is that we did not consider workflow task offloading in a multi-BS mobile MEC network. Multiple BSs and MDs with mobility make the workflow scheduling problem more complicated to solve. The workflow scheduling problem in this scenario is worth investigating in our future research.

Author Contributions

Conceptualization, S.F. and C.D.; methodology, S.F. and C.D.; software, C.D.; validation, C.D.; formal analysis, S.F. and C.D.; investigation, S.F. and C.D.; data curation, P.J.; writing—original draft preparation, C.D.; writing—review and editing, S.F. and C.D.; visualization, C.D.; supervision, S.F. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the scholarship from China Scholarship Council (No. 201708230301), the Science Foundation of Heilongjiang Province for the Excellent Youth (No. YQ2019F014), the Science Talent Support Program of Heilongjiang Bayi Agricultural University (No. ZRCQC201807), the Scientific Research Foundation for Doctor of Heilongjiang Bayi Agricultural University (No. XDB2015-28).

Institutional Review Board Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Hoang, D.T.; Lee, C.; Niyato, D.T.; Wang, P. A survey of mobile cloud computing: Architecture, applications, and approaches. Wirel. Commun. Mob. Comput. 2013, 13, 1587–1611.
  2. Sahni, J.; Vidyarthi, D.P. A Cost-Effective Deadline-Constrained Dynamic Scheduling Algorithm for Scientific Workflows in a Cloud Environment. IEEE Trans. Cloud Comput. 2018, 6, 2–18.
  3. Wang, X.; Yang, L.T.; Chen, X.; Han, J.; Feng, J. A Tensor Computation and Optimization Model for Cyber-Physical-Social Big Data. IEEE Trans. Sustain. Comput. 2019, 4, 326–339.
  4. Shi, W.; Cao, J.; Zhang, Q.; Liu, W. Edge Computing—An Emerging Computing Model for the Internet of Everything Era. J. Comput. Res. Dev. 2017, 54, 907–924.
  5. Peng, K.; Leung, V.C.M.; Xu, X.; Zheng, L.; Wang, J.; Huang, Q. A Survey on Mobile Edge Computing: Focusing on Service Adoption and Provision. Wirel. Commun. Mob. Comput. 2018, 2018, 8267838.
  6. Mobile Edge Computing—A Key Technology towards 5G; ETSI White Paper No. 11; ETSI: Valbonne, France, 2015; ISBN 979-10-92620-08-5.
  7. Sun, X.; Ansari, N. EdgeIoT: Mobile Edge Computing for the Internet of Things. IEEE Commun. Mag. 2016, 54, 22–29.
  8. Peng, Q.; Jiang, H.; Chen, M.; Liang, J.; Xia, Y. Reliability-aware and Deadline-constrained workflow scheduling in Mobile Edge Computing. In Proceedings of the 2019 IEEE 16th International Conference on Networking, Sensing and Control (ICNSC), Banff, AB, Canada, 9–11 May 2019; pp. 236–241.
  9. Leymann, F.; Roller, D. Workflow-based applications. IBM Syst. J. 1997, 36, 102–123.
  10. Pandey, S.; Wu, L.; Guru, S.M.; Buyya, R. A Particle Swarm Optimization-Based Heuristic for Scheduling Workflow Applications in Cloud Computing Environments. In Proceedings of the 2010 24th IEEE International Conference on Advanced Information Networking and Applications, Perth, Australia, 20–23 April 2010; pp. 400–407.
  11. Li, X.; Chen, T.; Yuan, D.; Xu, J.; Liu, X. A Novel Graph-based Computation Offloading Strategy for Workflow Applications in Mobile Edge Computing. arXiv 2021, arXiv:2102.12236.
  12. Zhang, G.; Zhang, W.; Cao, Y.; Li, D.; Wang, L. Energy-Delay Tradeoff for Dynamic Offloading in Mobile-Edge Computing System With Energy Harvesting Devices. IEEE Trans. Ind. Inform. 2018, 14, 4642–4655.
  13. Dong, H.; Zhang, H.; Li, Z.; Liu, H. Computation Offloading for Service Workflow in Mobile Edge Computing. Comput. Eng. Appl. 2019, 55, 36–43.
  14. Li, W.; Liu, H.; Li, Z.; Yuan, Y. Security and energy aware scheduling for service workflow in mobile edge computing. Comput. Integr. Manuf. Syst. 2020, 26, 1831–1842.
  15. Sundar, S.; Liang, B. Offloading Dependent Tasks with Communication Delay and Deadline Constraint. In Proceedings of the IEEE INFOCOM 2018—IEEE Conference on Computer Communications, Honolulu, HI, USA, 16–19 April 2018; pp. 37–45.
  16. Guo, S.; Liu, J.; Yang, Y.; Xiao, B.; Li, Z. Energy-Efficient Dynamic Computation Offloading and Cooperative Task Scheduling in Mobile Cloud Computing. IEEE Trans. Mob. Comput. 2019, 18, 319–333.
  17. Ning, Z.; Dong, P.; Kong, X.; Xia, F. A Cooperative Partial Computation Offloading Scheme for Mobile Edge Computing Enabled Internet of Things. IEEE Internet Things J. 2019, 6, 4804–4814.
  18. Sun, J.; Yin, L.; Zou, M.; Zhang, Y.; Zhang, T.; Zhou, J. Makespan-minimization workflow scheduling for complex networks with social groups in edge computing. J. Syst. Archit. 2020, 108, 101799.
  19. Wang, Z.; Zheng, W.; Chen, P.; Ma, Y.; Xia, Y.; Liu, W.; Li, X.; Guo, K. A Novel Coevolutionary Approach to Reliability Guaranteed Multi-Workflow Scheduling upon Edge Computing Infrastructures. Secur. Commun. Netw. 2020, 2020, 6697640.
  20. Elgendy, I.A.; Zhang, W.Z.; Zeng, Y.; He, H.; Tian, Y.C.; Yang, Y. Efficient and Secure Multi-User Multi-Task Computation Offloading for Mobile-Edge Computing in Mobile IoT Networks. IEEE Trans. Netw. Serv. Manag. 2020, 17, 2410–2422.
  21. Chen, X. Decentralized Computation Offloading Game for Mobile Cloud Computing. IEEE Trans. Parallel Distrib. Syst. 2015, 26, 974–983.
  22. Srinivas, M.; Patnaik, L.M. Adaptive probabilities of crossover and mutation in genetic algorithms. IEEE Trans. Syst. Man Cybern. 1994, 24, 656–667.
  23. Rappaport, T.S. Wireless Communications: Principles and Practice; Prentice Hall: Hoboken, NJ, USA, 1996.
  24. Yan, W.; Shen, B.; Liu, X. Offloading and resource allocation of MEC based on adaptive genetic algorithm. Appl. Electron. Tech. 2020, 46, 95–100.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
