Article

COME-UP: Computation Offloading in Mobile Edge Computing with LSTM Based User Direction Prediction

by Sardar Khaliq uz Zaman 1, Ali Imran Jehangiri 1,*, Tahir Maqsood 2, Arif Iqbal Umar 1, Muhammad Amir Khan 2, Noor Zaman Jhanjhi 3,4,*, Mohammad Shorfuzzaman 5 and Mehedi Masud 5

1 Department of Computer Science and Information Technology, Hazara University Mansehra, Mansehra 21300, Pakistan
2 Department of Computer Science, Abbottabad Campus, COMSATS University Islamabad, Abbottabad 54590, Pakistan
3 School of Computer Science and Engineering, Taylor’s University, Subang Jaya 47500, Malaysia
4 Center for Smart Society 5.0 [CCS5], Faculty of Innovation and Technology, Taylor’s University, Subang Jaya 47500, Malaysia
5 Department of Computer Science, College of Computers and Information Technology, Taif University, P.O. Box 11099, Taif 21944, Saudi Arabia
* Authors to whom correspondence should be addressed.
Appl. Sci. 2022, 12(7), 3312; https://doi.org/10.3390/app12073312
Submission received: 21 February 2022 / Revised: 16 March 2022 / Accepted: 17 March 2022 / Published: 24 March 2022
(This article belongs to the Special Issue Edge-Enabled Big Data Intelligence for B5G and IoT Applications)

Abstract: In mobile edge computing (MEC), mobile devices with limited computation and memory resources offload compute-intensive tasks to nearby edge servers. User movement causes frequent handovers in 5G urban networks. The resulting delays in task execution, due to uncertainty about the user’s position and serving base station, lead to increased energy consumption and resource wastage. Current MEC offloading solutions treat computation offloading separately from user mobility, and techniques that predict the user’s future location for task offloading do not consider the user’s direction. We propose a framework termed COME-UP: Computation Offloading in Mobile Edge computing with Long Short-Term Memory (LSTM) based User direction Prediction. The mobility data are nonlinear in nature, which leads to a time series prediction problem. The LSTM considers previous mobility features, such as location, velocity, and direction, as input to a feed-forward mechanism to train the learning model and predict the next location. The proposed architecture also uses a fitness function to calculate priority weights for selecting an optimum edge server for task offloading based on latency, energy, and server load. The simulation results show that the latency and energy consumption of COME-UP are lower than those of the baseline techniques, while edge server utilization is enhanced.

1. Introduction

In MEC, 5G cellular users, intelligent sensors, surveillance systems, and other smart devices are connected at the network edge [1]. Handling mobile users at the network edge uses spectral resources effectively, lowers response time, and saves energy costs. By 2022, global mobile data traffic is anticipated to reach 77 exabytes per month [2]. 3D modelling, augmented reality, online gaming, high-definition multimedia processing, streaming, and machine learning-based applications are just a few examples of the computationally demanding applications that consume enormous amounts of resources and generate large quantities of data [3]. In recent years, technological advances in multicore chip and memory architectures have made it possible for mobile devices to perform admirably. However, these mobile devices are still constrained by limited computational power, battery life, and memory. Edge computing overcomes these limitations by offering computation offloading [4,5]. In mobile edge networks, intermediate servers between the user and the core network are collocated with the small cell base stations and offer computation and storage services for mobile devices. This service provisioning mechanism reduces the load on data centers and optimizes response time [6]. The issue of higher latency has thus been addressed to some extent, but some applications continue to have strict latency and computational limits [7] and are offloaded to nearby MEC servers, as illustrated in Figure 1.
Optimal server selection enhances the success of computation offloading in MEC. Latency, energy usage, and resource utilization are all indicators of optimal server performance [8,9]. Some research efforts have looked at combining the optimization of two or more of the factors mentioned above. User mobility has also been cited as a crucial factor in offloading decisions in a number of studies [10]. Different machine learning-based prediction methods have been created to anticipate a mobile user’s future position based on their prior mobility pattern. The best MEC server at the anticipated location is used for task assignment.
The majority of studies applying machine learning to task offloading use reinforcement learning, which has a lower training complexity than supervised and unsupervised learning [11]. Nonetheless, reinforcement learning techniques have a significant time complexity caused by their repeated trial-and-error process [11]. In addition, the accuracy of user position prediction is inconsistent, resulting in suboptimal task assignment when real-time mobility factors such as user velocity, location, and direction are ignored [12]. For example, dynamic mobility-aware partial offloading (DMPO) is a model for reducing the length of the offloading path after the movement of user equipment; prediction is carried out considering the user tasks and mobility paths in the near future [8]. Task assignment with optimal mobility (TAOM) assigns tasks based on the shortest possible pre-calculated execution time. It is appropriate for workflows with a small number of users but not practicable for jobs with a large number of users [9]. TAOM claims to be simple and has low latency, whereas DMPO predicts energy-efficient servers. However, both strategies deal with user mobility and task offloading individually, affecting latency and energy, and DMPO’s mobility model is one-dimensional and does not predict user orientation. As a result, we propose COME-UP, a framework that utilizes the latitude, longitude, and user velocity as input parameters. Further, an optimized edge server with the lowest latency, lowest energy consumption, and highest server utilization is selected. For model training, we used the Luxembourg SUMO Traffic (LuST) data set of 1000 vehicular users, which is freely available online [10]. We selected TAOM as the baseline technique and DMPO for comparison with the proposed model. Extensive simulations have been performed to compare the proposed model against state-of-the-art methodologies using the MobFogSim simulator [10]. The following are the highlights of the contributions made in this work.
  • In MEC, we propose a novel task offloading framework called COME-UP, which is composed of two sub-models. The location and direction prediction model uses time series analysis combined with LSTM;
  • The task placement model makes use of a fitness function based on the weighted sum method to determine which edge server is the most suited;
  • We formulate a fitness function that determines the optimized server using different priority weights for latency ($\alpha$), energy ($\beta$), and server load ($\gamma$);
  • The performance comparisons utilizing MobFogSim show that COME-UP outperforms the others (DMPO and TAOM), with a 32% reduction in latency, 16% energy savings, and a 9% increase in resource (CPU) utilization. The root mean square error of the LSTM model is noted as 0.5.
The rest of the paper is structured as follows. Section 2 discusses the findings of a literature review on mobility models and mobility-aware offloading approaches. Section 3 presents the system overview and the models used. Section 4 describes the proposed framework. Section 5 discusses the experimental evaluations and comparisons. Section 6 concludes the work, and Section 7 discusses potential research directions for future work.

2. Related Work

The introduction of multicore CPUs and contemporary memory architectures in mobile devices has attracted much interest in application development from the research community and various industries. However, there is still considerable room for improving the capacities and capabilities of today’s mobile devices, since some compute-intensive jobs cannot be performed on them [13,14]. Such resource-hungry applications divide the computational processes into small tasks and place them at a proximate edge server. Remote task execution reduces latency and saves energy.
In MEC, the locality of servers plays a vital role in workflow offloading [15]. Furthermore, user mobility is related to the mobile user’s connection to an edge server. User mobility is critical when a user transfers from one base station to another after submitting a workload to the prior server; the resulting task offloading incurs longer delays, more energy consumption, and lower resource utilization [15]. The authors in [16] proposed a mechanism based on software-defined networks (SDN) to manage user mobility. The technique assigns a task to the server with the smallest reported distance, calculated using signal strength; real-time factors such as user position, velocity, and direction are not considered. In [15], the authors consider the geographical locations of users within the cell’s boundaries to calculate the distance. However, as with the SDN method, this approach also ignores user trajectory data.
The random waypoint (RWP) model is the simplest model for estimating user mobility [17]. The RWP model is limited in that it only considers user movement within the cell and only tracks users who are moving towards or away from the base station, making handoff analysis and prediction decisions more difficult. Another RWP-based robust framework is proposed in [18] that incorporates user mobility into offloading decisions; however, it is incapable of reducing the complexity of job execution or energy usage. The self-similar least action walk is a mobility model that analyses a mobile user’s walking behavior to predict the next likely destination [19]. It accurately predicts a user’s location while walking within the same radio access network, making it well suited for task transfer from one device to another. The mobility models outlined thus far have been extensively employed and tested in traditional mobile networks, such as mobile ad hoc networks; it is unclear how they would be deployed, and how well they would perform, in small cell networks such as 5G or mobile edge networks.
MEC servers typically prioritize tasks for offloading depending on their computing power, channel usage, and extra processing requirements. However, a heterogeneous MEC system requires ongoing real-time monitoring of task scheduling due to user mobility. To optimize the process, other performance factors such as energy consumption, latency, and system resource usage should also be considered. Another methodology for determining the availability of edge servers is the “head-light model”, which has been used in various works on mobility and determines where the edge servers are positioned in relation to the data center [20]. The model captures a zone that points in the direction of a traveling mobile user, like a moving vehicle’s headlight. The authors in [21] describe a compute offloading approach known as MobMig, which improves the work migration process by leveraging mobility; MobMig balances the workload across the edge servers by identifying overloaded nodes with no remaining capacity. The mobility-aware genetic algorithm (MAGA), recently detailed in [22], is a novel offloading technique focused on decreasing offloading errors by exploiting genetic algorithms. MAGA optimizes offloading decisions to reduce mobile device power consumption while meeting task completion time requirements in fog environments, and it can handle multisite offloading.
The high computing cost of this technology, however, limits it to single-task applications. In [8], a dynamic mobility-aware partial offloading algorithm (DMPO) is proposed. After the mobile user moves, the offloading path is minimized in DMPO, and a short-term prediction of the user mobility pattern is obtained. Since DMPO considers one-dimensional user mobility, it performs better when the base stations are evenly distributed. For larger mobile edge networks, an energy-aware mobility management model has been proposed to improve performance [23]. It performs task offloading by considering the user task size, location, and the number of base stations in the network. A heuristic mobility-aware offloading algorithm that enables vehicular mobile devices to perform task offloading is proposed in [24]. The system predicts user position by anticipating edge server capacity and radio resource health. While it reduces latency and energy consumption, it only considers mobility over the short term; due to this limitation, the best services cannot be guaranteed for offloaded or cached jobs. Furthermore, techniques that consider all real-time mobility characteristics, such as location, velocity, and user orientation, are rare. Some efforts have been made to examine machine learning methods in this direction; however, few notable works exist on mobility-aware task offloading using ML techniques in the MEC domain. The user’s moving direction is critically important when predicting the next location and finding an optimized edge server.
Mobility-aware deep reinforcement learning [25] is a model in MEC for optimal service provisioning. It uses a glimpse mobility model to estimate the next place a user is likely to visit. The mobility-aware edge service placement methodology is similar to that used in [24]. This model achieves numerous goals and is supported by machine learning techniques that use user mobility trajectory data as input to estimate the likely future position of users. For predicting the amount of power needed for offloading jobs, an energy- and mobility-aware offloading framework has been presented in which Dijkstra’s algorithm and machine learning are combined to anticipate the user’s next location and the power consumption required to offload or migrate a task.
Huang et al. introduced a distributed technique based on deep learning that leverages the power of numerous concurrent neural networks to find optimal solutions without the need for manual data labelling [26]. Tang et al. defined a task offloading issue with the goal of minimizing long-term resource consumption and suggested a distributed reinforcement learning algorithm in which each device can make task decisions independently [27]. Jang et al. suggested a method for performing knowledge transfer and policy model compression in a single training session on edge devices while taking into account their resource constraints [28]. The time required to train an edge policy using this method is much less than the time required to train an edge policy from scratch. Chen et al. considered terminal device communication in an ultra-dense LAN, where users can select multiple base stations for task offloading [29]. The performance is enhanced by reducing the energy consumption of tasks in the computational queue and the channel queue between end users and base stations. Similarly, Song et al. aim to improve resource use, energy consumption, and network delay by forecasting user behavior and solving the problem with a variation of the DQN algorithm [30].
We have emphasized the significance of user mobility in MEC and reviewed recent studies on it. Some models consider the geographic location or previously observed mobility patterns of mobile users when using SDN-based solutions to solve mobility problems. Other models take into consideration the real-time network throughput and the utilization of edge servers. Users’ current location can also be predicted using the random waypoint and Lévy walk models, both of which treat mobility as a random function. Although these models attempt to incorporate real-time data from users’ movements, none of them fully succeed. Furthermore, the time complexity of the prediction models is quite high; given strict latency constraints, they cannot be used in real-world systems. These findings encourage us to investigate novel approaches that consider user mobility factors such as location, velocity, and direction when determining the best MEC server to offload an incoming task. Several studies indicate that LSTM-based models perform better in such environments [31,32,33]; hence, we propose an LSTM-based user location and direction prediction model for task offloading in MEC.

3. System Overview

This section details the system models used in the COME-UP framework. We present a use case scenario that considers a heterogeneous 5G cellular network, enabled with MEC capabilities and urban user mobility, as shown in Figure 2. The network comprises mobile users with different smart devices and variable mobility patterns.

3.1. Computation Model

To create an application model in MEC, it is first necessary to describe the task requirements and inter-task connections [34]. The application may be represented as a directed acyclic graph (DAG) $D$, with nodes representing the collection of tasks $t = \{t_1, t_2, \ldots, t_n\}$ and edges $e$ indicating the relationships between the tasks [34]. In DAG $D$, each task $t_i$ is made up of two elements: (a) the computation requirement $C_i$ of task $t_i$, expressed in millions of instructions per second (MIPS); and (b) the memory requirement $M_i$ of task $t_i$. The interdependencies between tasks are represented by the edges, and the latency of each edge is determined by the amount of bandwidth available.
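To make this model concrete, the following minimal Python sketch shows one way the DAG $D$ could be represented in code. The `Task` and `AppDAG` names and all numeric values are illustrative assumptions for this example, not identifiers from the paper or from MobFogSim.

```python
# A minimal sketch of the DAG application model; names and units are assumed.
from dataclasses import dataclass, field

@dataclass
class Task:
    tid: int
    c_i: float   # computation requirement C_i in MIPS
    m_i: float   # memory requirement M_i in MB

@dataclass
class AppDAG:
    tasks: dict = field(default_factory=dict)   # tid -> Task
    edges: dict = field(default_factory=dict)   # (src, dst) -> transferred data in MB

    def add_task(self, task: Task) -> None:
        self.tasks[task.tid] = task

    def add_dependency(self, src: int, dst: int, data_mb: float) -> None:
        # The latency of an edge depends on this data size and the
        # bandwidth available between the hosts executing src and dst.
        self.edges[(src, dst)] = data_mb

# Example: task t1 feeds 2 MB of output into task t2.
app = AppDAG()
app.add_task(Task(1, c_i=400.0, m_i=64.0))
app.add_task(Task(2, c_i=900.0, m_i=128.0))
app.add_dependency(1, 2, data_mb=2.0)
```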
In MEC, tasks are executed either locally on the user equipment or offloaded to a proximate server. Both execution states are described below. Offloading tasks reduces latency and power consumption; however, if the computational capabilities of the user device fulfil the task requirements, offloading does not occur. The computing capacity of the mobile user is expressed as $C_u$. The major symbols and notation used in the system models and equations are defined in Table 1.
As expressed in [35], the computation cycles required by a job $t_i$ are represented as $C_i$. The computation time of a task $t_i$ performed at the user end is represented by $E_i$ (see Equation (1)), whereas the energy consumption of the mobile device is represented by $P_i$ (see Equation (2)).
$$E_i = \frac{C_i}{C_u} \tag{1}$$
$$P_i = C_i \left( C_u \right)^2 \tag{2}$$
Whenever a mobile device does not match the requirements of a given task, the task is offloaded to an edge server. According to several studies, the size of the execution results after offloading is so small that the transmission time needed to download them is negligible compared to the time required to upload the data to the server [35]. The distances between the mobile user and the edge server in the x and y directions are represented as $x_i$ and $y_i$, respectively. Here, $p_i$ represents the transmitting power of the mobile device, $\theta$ represents the standard path-loss propagation exponent, and $\sigma^2$ represents additive Gaussian noise in the transmission path. As a result, the signal-to-noise ratio (SNR) is calculated using Equation (3) [36].
$$SNR_i = \frac{\left( p_i \times x_i \times y_i \right)^{\theta}}{\sigma^2} \tag{3}$$
If the bandwidth $B$ is known, the mobile device’s transmission rate $R_i$ is expressed using Equation (4).
$$R_i = B \log_2 \left( 1 + SNR_i \right) \tag{4}$$
The aggregate of the execution times of jobs that have previously been offloaded to an edge server is expressed by $S_k$ (see Equation (5)), where $M_k$ denotes the set of tasks mapped to edge server $S_k$.
$$S_k = \sum_{t_i \in M_k} TE_i^k \tag{5}$$
Further, the server utilization [37] is represented by $U_s$, and the capacity of the edge server to which a task is going to be offloaded is expressed by $C_s$ (see Equation (6)).
$$U_s = \frac{S_k}{C_s} \times 100 \tag{6}$$
Finally, the edge computation time $C_g$ for a task $t_i$ is determined by Equation (7). If $R_i < C_g$, the task is offloaded to the edge server.
$$C_g = \frac{C_i}{C_s} \tag{7}$$
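The sketch below collects Equations (1)–(4), (6), and (7) into small helper functions to show how the local and edge execution costs fit together. It is a minimal illustration; the units and all numeric values are assumptions for the example, not the paper’s simulation settings.

```python
# Helper functions mirroring the computation model; values are illustrative.
import math

def local_execution(c_i: float, c_u: float) -> tuple[float, float]:
    """E_i = C_i / C_u (Eq. 1) and device energy P_i = C_i * C_u^2 (Eq. 2)."""
    return c_i / c_u, c_i * c_u ** 2

def transmission_rate(p_i, x_i, y_i, theta, sigma2, bandwidth):
    """SNR_i = (p_i * x_i * y_i)^theta / sigma^2 (Eq. 3) and
    R_i = B * log2(1 + SNR_i) (Eq. 4)."""
    snr = (p_i * x_i * y_i) ** theta / sigma2
    return bandwidth * math.log2(1.0 + snr)

def server_utilization(s_k: float, c_s: float) -> float:
    """U_s = (S_k / C_s) * 100 (Eq. 6)."""
    return s_k / c_s * 100.0

def edge_execution_time(c_i: float, c_s: float) -> float:
    """C_g = C_i / C_s (Eq. 7)."""
    return c_i / c_s

# Example: a task that is cheaper to run on the edge than locally.
e_i, p_i = local_execution(c_i=900.0, c_u=1.5)
c_g = edge_execution_time(c_i=900.0, c_s=3.0)
print(e_i, p_i, c_g)   # 600.0, 2025.0, 300.0
```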

3.2. Energy Model for MEC Servers

Central processing units (CPUs) consume about 80% of the electrical energy in edge servers compared to other resources [38]. Furthermore, an idle CPU accounts for about 70% of the energy the server uses when operating at full speed. As a result, energy consumption is proportional to CPU utilization [35]. We employ the energy model proposed in [39], as seen in Equations (8) and (9).
$$P(u) = k \times P(m) + (1 - k) \times P(m) \times u \tag{8}$$
$$E_s = \int_{t_0}^{t_1} P(u(t)) \, dt \tag{9}$$
where $P(m)$ is the amount of energy that a server consumes per unit time when fully loaded, and $P(u)$ is the estimated power drawn by a server. The constant $k$ represents the fraction of power drawn by an idle server, while $u$ represents the server’s utilization at any given time. Being a benchmark value for modern servers, $P(m)$ is set to 250 joules per second in this scenario [40]. Due to the varying workload, CPU usage fluctuates over time, becoming a function of time denoted by $u(t)$. Finally, we can compute a server’s energy consumption $E_s$ using Equation (9).
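A minimal sketch of this energy model follows, assuming utilization is sampled at fixed intervals so that the integral in Equation (9) can be approximated by a Riemann sum; the sampled trace and interval are illustrative.

```python
# Sketch of Equations (8) and (9): linear power model plus numerical
# integration of P(u(t)); k, P(m), and the trace below are illustrative.
P_M = 250.0   # full-load power P(m) in joules per second [40]
K = 0.7       # fraction of P(m) drawn by an idle server

def power(u: float) -> float:
    """P(u) = k*P(m) + (1 - k)*P(m)*u for utilization u in [0, 1] (Eq. 8)."""
    return K * P_M + (1.0 - K) * P_M * u

def server_energy(utilization_trace, dt: float) -> float:
    """E_s = integral of P(u(t)) dt (Eq. 9), approximated as a Riemann sum."""
    return sum(power(u) * dt for u in utilization_trace)

# Example: utilization sampled once per second over a 5 s window.
print(server_energy([0.2, 0.5, 0.9, 0.6, 0.3], dt=1.0))
```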

4. COME-UP: Computation Offloading in Mobile Edge Computing with User Direction Prediction

This section presents the proposed location and direction-aware task offloading framework based on LSTM. The proposed framework (COME-UP) consists of the location and direction prediction model (LDPM) and the task placement model (TPM). The LDPM learns the trajectories of mobile users from mobility traces and applies these learnings to forecast their future location and direction, as explained in Section 4.1. Using features such as energy consumption, latency, and server utilization, the TPM chooses an optimum edge server residing in the expected cell location and then offloads the user tasks to the selected server, as described in Section 4.2. For example, a mobile user MU1 positioned in cell1 delegates a task with higher computation requirements to the serving MEC server EdgS1. However, due to frequent and quick movement, MU1 initiates a handoff to cell2 after submitting the job to the previously associated edge server, as shown in Figure 3.
When the mobile user moves out of the range of EdgS1, the delivery of results is delayed. Consequently, the response time of mobile applications increases, leading to additional energy consumption and underutilization of computing resources. Further, the fitness function uses the weighted sum method to determine the priority weights of the evaluation parameters, which help to discover an optimal edge server with minimal latency and energy consumption and higher utilization, as detailed in Section 4.3.

4.1. Location and Direction Prediction Model (LDPM)

We propose the LDPM as a method for predicting the location and direction of a mobile user. The user mobility characteristics available in MEC networks are limited to a few features, such as location, velocity, and direction, and the data set is nonlinear [10,41,42]. Therefore, considering the nature of the input data and the small number of available parameters, time series analysis is applied to the mobile user trajectories. Recurrent neural networks (RNNs) offer high prediction accuracy for time series problems [43]. Therefore, we employ a variant of the RNN known as LSTM [44] to predict the user location and direction using mobility traces, as shown in Figure 4.
The base station density in 5G cellular networks is significantly higher than in previous generations [45]. However, an individual user’s movement information contains only a few parameters, as mentioned earlier. The LDPM considers the mobile user’s location (x, y), velocity (v), direction (o), and the task execution time $TE_i^k$ as base features. We derive the training features by taking the differences of (x) and (y) for location change, (v) for acceleration, and (o) for change in user orientation, as illustrated in Equations (10)–(13), respectively. The LSTM layer of the LDPM is fed with input data based on the window size (the number of previous values). The LDPM then forecasts where the associated user will reside in the near future. As detailed in the next section, the TPM uses the predicted location, velocity, and direction ($x_n$, $y_n$, $v_n$, $o_n$) to determine an appropriate edge server offering minimum latency, energy efficiency, and higher CPU utilization.
$$d_x = x_2 - x_1 \tag{10}$$
$$d_y = y_2 - y_1 \tag{11}$$
$$d_v = v_2 - v_1 \tag{12}$$
$$d_o = o_2 - o_1 \tag{13}$$
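As a concrete illustration of the LDPM training setup, the sketch below builds a small LSTM on sliding windows of the delta features from Equations (10)–(13). It assumes TensorFlow/Keras; the layer size, window length, and the random placeholder trace are assumptions rather than the paper’s exact configuration (the real input would be the delta-encoded LuST traces).

```python
# A minimal LSTM time-series sketch, assuming TensorFlow/Keras is available.
import numpy as np
import tensorflow as tf

WINDOW = 10        # number of previous observations fed to the LSTM
N_FEATURES = 4     # (d_x, d_y, d_v, d_o) from Equations (10)-(13)

def make_windows(trace: np.ndarray, window: int):
    """trace: (T, 4) array of per-step deltas; returns supervised pairs."""
    X = np.stack([trace[i:i + window] for i in range(len(trace) - window)])
    y = trace[window:]          # next delta vector to predict
    return X, y

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(64, input_shape=(WINDOW, N_FEATURES)),
    tf.keras.layers.Dense(N_FEATURES),   # next (d_x, d_y, d_v, d_o)
])
model.compile(optimizer="adam", loss="mse",
              metrics=[tf.keras.metrics.RootMeanSquaredError()])

# Placeholder trace; in practice this would be the delta-encoded LuST data.
trace = np.random.rand(500, N_FEATURES).astype("float32")
X, y = make_windows(trace, WINDOW)
model.fit(X, y, epochs=5, batch_size=32, verbose=0)
```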

4.2. Task Placement Model

To elaborate on the working of the TPM, we consider a scenario where the edge servers are collocated with small cell base stations (SBSs) that are controlled by the mobile edge orchestrator (MEO) and the main base station, respectively, as shown earlier in Figure 3. Once a mobile user leaves an SBS after submitting a compute-intensive task to the corresponding edge server, the MEO uses the LDPM to predict the user’s next location and direction based on previous mobile trajectories. Further, the MEO compiles the list Ls of available servers in the mobile user’s moving orientation. The angle of the mobile device during movement indicates its moving direction in four cardinal quadrants, as shown in Figure A1.
The TPM then finds the best server from the list Ls using a fitness function, as shown in Figure A2. The fitness function determines the optimized server $\delta$ based on the objective function, and the task $t_i$ is then offloaded to it.
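The following sketch illustrates how the MEO could map a predicted heading to one of the four cardinal quadrants of Figure A1 and shortlist the servers lying in that quadrant. The quadrant boundaries and the server tuple layout are assumptions for this example, not the paper’s exact geometry.

```python
# Sketch of quadrant-based server shortlisting; boundaries are assumed.
import math

def quadrant(angle_deg: float) -> int:
    """Map a heading in degrees to quadrants 1..4 (one per 90-degree sector)."""
    return int((angle_deg % 360.0) // 90.0) + 1

def shortlist(servers, user_xy, heading_deg):
    """Keep the servers lying in the user's predicted movement quadrant.
    servers: iterable of (server_id, x, y); user_xy: (x, y)."""
    ux, uy = user_xy
    target = quadrant(heading_deg)
    selected = []
    for sid, sx, sy in servers:
        bearing = math.degrees(math.atan2(sy - uy, sx - ux))
        if quadrant(bearing) == target:
            selected.append((sid, sx, sy))
    return selected

# Example: only servers in the user's heading quadrant survive.
print(shortlist([("EdgS1", 5, 5), ("EdgS2", -5, 5)], (0, 0), heading_deg=45.0))
```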

4.3. Fitness Function

We develop a fitness function to select the optimized server from the list Ls. As mentioned earlier, the fitness function considers the latency, energy consumption, and load of the servers in the list obtained from the predicted quadrant, as shown in Figure 5. We use the weighted sum method [46] to rank each server in the list. The values for each server are stated as an M × P matrix, where M denotes the number of edge servers in the list Ls and P represents the three parameters described previously, as illustrated in Equation (14).
$$\begin{bmatrix} S_{11} & S_{12} & S_{13} \\ \vdots & \vdots & \vdots \\ S_{n1} & S_{n2} & S_{n3} \end{bmatrix} \times \begin{bmatrix} \alpha \\ \beta \\ \gamma \end{bmatrix} = \begin{bmatrix} \alpha S_{11} & \beta S_{12} & \gamma S_{13} \\ \vdots & \vdots & \vdots \\ \alpha S_{n1} & \beta S_{n2} & \gamma S_{n3} \end{bmatrix} \tag{14}$$
where the rows correspond to the servers $M_1, \ldots, M_n$ and the columns to latency ($Lat$), energy ($E$), and server load ($Ld$).
The matrix comprises the computed values for each server’s latency ($Lat$), energy ($E$), and server load ($Ld$). Additionally, for the fitness value calculation we apply priority weights to the parameters, as defined in Equation (15). We conducted experiments with different weight priorities for latency ($\alpha$), energy ($\beta$), and server load ($\gamma$) to find appropriate values.
$$\delta_i = \alpha S_{i1} + \beta S_{i2} + \gamma S_{i3} \tag{15}$$
$$\delta = \min \{ \delta_1, \delta_2, \delta_3, \ldots, \delta_n \} \tag{16}$$
Finally, the server with the highest rank (i.e., the lowest $\delta$ value), as computed by Equation (16), is considered the optimized server. Consequently, the task $t_i$ is allocated to server $\delta$.
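Putting Equations (14)–(16) together, the sketch below computes the weighted-sum fitness for each shortlisted server and returns the one with the minimum $\delta$. The weights and metric values are illustrative and assumed to be already normalized to comparable scales.

```python
# Sketch of the weighted-sum server selection (Equations (14)-(16)).
ALPHA, BETA, GAMMA = 0.5, 0.25, 0.25   # priority weights for Lat, E, Ld

def fitness(lat: float, energy: float, load: float) -> float:
    """delta_i = alpha*S_i1 + beta*S_i2 + gamma*S_i3 (Eq. 15)."""
    return ALPHA * lat + BETA * energy + GAMMA * load

def select_server(candidates):
    """candidates: list of (server_id, Lat, E, Ld) with normalized values;
    returns the id with the minimum fitness value (Eq. 16)."""
    return min(candidates, key=lambda s: fitness(s[1], s[2], s[3]))[0]

# Example shortlist L_s with (already normalized) metric values.
ls = [("EdgS1", 0.4, 0.7, 0.3), ("EdgS2", 0.2, 0.9, 0.5), ("EdgS3", 0.6, 0.2, 0.1)]
print(select_server(ls))   # -> "EdgS3" under these weights
```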

5. Experimental Evaluations

In this section, we evaluate the overall effectiveness of COME-UP. We first assess the LDPM and examine the influence of mobility prediction on latency, energy consumption, and server utilization. Furthermore, we compare COME-UP against two baseline approaches, DMPO [8] and TAOM [9]. In addition, we examine the system’s performance with and without user direction prediction in the proposed framework. When the framework takes the user direction into account, it is referred to as COME-UP; when it does not, it is referred to as COME-WP.

5.1. Experimental Setup

We carried out comprehensive computer simulations in accordance with the 3GPP specifications and open-source data that had been approved by telecom operators [47], as implemented in related works [48]. We used the MobFogSim simulator, which is dynamic and supports a wide range of IoT/mobile device platforms. Table 2 contains a description of the input parameters as well as the simulation settings.

5.2. Results

Using groups of users of different sizes (100–1000), we present the average latency, energy consumption, and resource utilization results. We considered the vehicular user movements from the LuST data set [10]. The performance comparisons show that COME-UP outperforms the others (DMPO and TAOM), with a 32% reduction in latency, 16% energy savings, and a 21% increase in resource (CPU) utilization.
The latency values illustrated in Figure 5 are consistent with the acceptable delay requirements for the 5G network established in other studies [49]. The average latency of the proposed framework is noticeably lower than that of the other techniques for all user distributions.
A higher number of mobile users results in a significant increase in the energy consumed by edge servers. Figure 6 compares the proposed model’s energy usage with that of the other models. The CPUs draw the critical portion of the energy in edge servers [38]; we use the CPU utilization model for both energy estimation and server utilization. Figure 7 shows the server utilization with different server distributions.
We used 50 sequences of user mobility patterns as test data for the LSTM. The error distribution peaks near zero, indicating that the model stays within an acceptable error range. The observed error remains below 0.5 throughout training, as shown in Figure A3.

5.3. Discussion

COME-WP reduces latency by predicting only the user’s location, whereas COME-UP also anticipates the user direction and selects the fastest server. The COME-UP fitness function prioritizes server latency over the other performance metrics, such as energy and server utilization (see Figure 5a). With 100 users, TAOM automatically assigns tasks to the fastest server; however, it does not scale well to a large number of users. DMPO has a lower task offloading latency than TAOM due to its simple server searching mechanism. Figure 5b shows the latency measured when equal priority is given to the performance parameters; the latency of all techniques increased compared to the previous case. Since TAOM and DMPO lack the proposed fitness function, their latency grows with the user count.
The energy consumption of COME-UP is lower than that of TAOM and DMPO but higher than its COME-WP variant in some cases. The server information held by COME-UP is limited because it focuses on the particular area predicted by its LDPM. As a result, it may be hard to find an energy-efficient server there, especially when the numbers of users and servers are low. The energy consumption of the proposed techniques is lower when the energy weight is given higher priority during the server selection stage. However, no effect is observed for TAOM and DMPO, since they lack the proposed fitness function, as shown in Figure 6.
The proposed framework outperforms existing methodologies and makes efficient use of available computational resources, as shown by the server usage results. TAOM has lower CPU utilization than DMPO, COME-UP, and COME-WP; its inefficient server selection causes high energy consumption and low CPU utilization. Since the system’s workload grows with the number of users, the system’s total energy consumption grows as well. As a result, otherwise idle CPU capacity is put to productive use, resulting in efficient resource usage (Figure 7). Also, when the server load was given more weight during optimal server selection, the resource utilization of servers increased. The loss of the model is used to measure its performance. The RMSE of the proposed LSTM-based framework is 0.5321, which lies within an acceptable range for the training process, as shown in Figure A3.

6. Conclusions

Computation offloading in MEC requires selecting a suitable edge server for a task while balancing latency, energy consumption, and server usage. The delivery of offloading results to mobile users is delayed by frequent user mobility. COME-UP is enabled by a novel method that predicts the mobile user’s next location and direction, from which candidate edge servers are shortlisted. COME-UP determines the best server among the shortlisted ones using the weighted sum method and places the incoming task accordingly. COME-UP is a lightweight technique because it uses LSTM for training and prediction instead of more sophisticated deep learning models. The performance tests reveal that COME-UP outperforms the others: on average, a 32 percent reduction in latency, 16 percent energy savings, and a 19 percent increase in resource (CPU) utilization. It is also noted that COME-UP maintains good accuracy levels during training and testing.

7. Future Research Directions

As a future research direction, we plan to explore the impact of mobile devices moving at high velocity in a fully dynamic environment to improve offloading decisions. In addition, the current work makes no provision for task or virtual machine relocation in the case of a server overload or breakdown; task- and mobility-aware virtual machine migration will be the focus of future research efforts.
The mobility issue also becomes more intriguing when numerous users share similar destinations. For example, mobile crowd sensing (MCS) works better when users’ whereabouts are predicted, and in vehicular edge networks many cars may share a destination. As a future study topic, users with comparable mobility circumstances could be aggregated into homogeneous groups and their activities integrated. Moreover, very few studies address device-to-device task offloading. Moving users executing applications such as MCS might need to offload resource-hungry tasks to a neighboring edge user’s device in a collaborative fashion; a lightweight machine learning model to find the best local resource is an interesting area for future work.
The existing works also do not explore the relationship between an offloaded task and its content. Since each subtask has its own application and content, it has its own delay requirement, defined by the subtask’s content characteristics. Due to the limited CPU capabilities of mobile devices, each subtask experiences a variable delay. The creation of a content-aware framework can improve offloading decisions in the future.
Finally, securing MEC deployments is a critical challenge. The adoption of edge cloud servers presents substantial security risks due to mobile device exploitation, and existing security solutions cannot keep up with the rate of rising security threats. Many present security protocols require complete connectivity, which is impossible in mobile edge networks, as many links are intermittent by default. Moreover, MEC sends user data to an edge server that controls access for other mobile users, which causes issues with data integrity and authorization. Further, the data owners and data servers have different identities. As a result, a comprehensive research investigation is required to uncover MEC security vulnerabilities.

Author Contributions

Conceptualization, S.K.u.Z.; Methodology, S.K.u.Z.; Software Implementation, S.K.u.Z.; Supervision, A.I.J.; Writing and Editing, S.K.u.Z. and T.M.; Review, M.A.K.; Validation, A.I.U.; Funding, N.Z.J., M.S. and M.M. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Taif University Researchers Supporting Project number (TURSP-2020/79), Taif University, Taif, Saudi Arabia.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors thank all reviewers for their helpful comments and suggestions.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

This section contains the supporting figures related to the COME-UP framework. The illustration of the user’s location and direction prediction in cardinal points is shown in Figure A1. Figure A2 portrays the flow of the COME-UP architecture. Finally, Figure A3 shows the root mean square error determined for the LSTM model.
Figure A1. Mobile device orientation during movement.
Figure A2. Overview of TPM.
Figure A3. Learning model evaluation: (a) epoch loss; (b) root mean square error.

References

  1. Zhao, Y.; Xu, K.; Wang, H.; Li, B.; Qiao, M.; Shi, H. MEC-Enabled Hierarchical Emotion Recognition and Perturbation-Aware Defense in Smart Cities. IEEE Internet Things J. 2021, 8, 16933–16945.
  2. Wang, J.; Lv, T.; Huang, P.; Mathiopoulos, P.T. Mobility-aware partial computation offloading in vehicular networks: A deep reinforcement learning based scheme. China Commun. 2020, 17, 31–49.
  3. McClellan, M.; Cervelló-Pastor, C.; Sallent, S. Deep Learning at the Mobile Edge: Opportunities for 5G Networks. Appl. Sci. 2020, 10, 4735.
  4. Huynh, L.N.T.; Pham, Q.V.; Pham, X.Q.; Nguyen, T.D.T.; Hossain, M.D.; Huh, E.N. Efficient Computation Offloading in Multi-Tier Multi-Access Edge Computing Systems: A Particle Swarm Optimization Approach. Appl. Sci. 2020, 10, 203.
  5. Safavat, S.; Sapavath, N.N.; Rawat, D.B. Recent advances in mobile edge computing and content caching. Digit. Commun. Netw. 2020, 6, 189–194.
  6. Mustafa, E.; Shuja, J.; Jehangiri, A.I.; Din, S.; Rehman, F.; Mustafa, S.; Maqsood, T.; Khan, A.N. Joint wireless power transfer and task offloading in mobile edge computing: A survey. Clust. Comput. 2021, 1–20.
  7. Wang, C.; Li, R.; Li, W.; Qiu, C.; Wang, X. SimEdgeIntel: An open-source simulation platform for resource management in edge intelligence. J. Syst. Arch. 2021, 115, 102016.
  8. Yu, F.; Chen, H.; Xu, J. DMPO: Dynamic mobility-aware partial offloading in mobile edge computing. Future Gener. Comput. Syst. 2018, 89, 722–735.
  9. Wang, Z.; Zhao, Z.; Min, G.; Huang, X.; Ni, Q.; Wang, R. User mobility aware task assignment for Mobile Edge Computing. Future Gener. Comput. Syst. 2018, 85, 1–8.
  10. Puliafito, C.; Gonçalves, D.M.; Lopes, M.M.; Martins, L.L.; Madeira, E.; Mingozzi, E.; Rana, O.; Bittencourt, L.F. MobFogSim: Simulation of mobility and migration for fog computing. Simul. Model. Pract. Theory 2020, 101, 102062.
  11. Shakarami, A.; Ghobaei, A.M.; Shahidinejad, A. A survey on the computation offloading approaches in mobile edge computing: A machine learning-based perspective. Comput. Netw. 2020, 182, 107496.
  12. Zaman, S.K.U.; Jehangiri, A.I.; Maqsood, T.; Ahmad, Z.; Umar, A.I.; Shuja, J.; Alanazi, E.; Alasmary, W. Mobility-aware computational offloading in mobile edge networks: A survey. Clust. Comput. 2021, 24, 2735–2756.
  13. Tu, Y.; Chen, H.; Yan, L.; Zhou, X. Task Offloading Based on LSTM Prediction and Deep Reinforcement Learning for Efficient Edge Computing in IoT. Future Internet 2022, 14, 30.
  14. Moon, S.; Lim, Y. Task Migration with Partitioning for Load Balancing in Collaborative Edge Computing. Appl. Sci. 2022, 12, 1168.
  15. Chamola, V.; Tham, C.K.; Chalapathi, G.S.S. Latency aware mobile task assignment and load balancing for edge cloudlets. In Proceedings of the 2017 IEEE International Conference on Pervasive Computing and Communications Workshops (PerCom Workshops), Kona, HI, USA, 13–17 March 2017.
  16. Alam, G.R.; Tun, Y.K.; Hong, C.S. Multi-agent and reinforcement learning based code offloading in mobile fog. In Proceedings of the 2016 International Conference on Information Networking (ICOIN), Kota Kinabalu, Malaysia, 13–15 January 2016.
  17. Xia, X.; Zhou, Y.; Li, J.; Yu, R. Quality-Aware Sparse Data Collection in MEC-Enhanced Mobile Crowdsensing Systems. IEEE Trans. Comput. Soc. Syst. 2019, 6, 1051–1062.
  18. Deng, S.; Huang, L.; Taheri, J.; Zomaya, A.Y. Computation Offloading for Service Workflow in Mobile Cloud Computing. IEEE Trans. Parallel Distrib. Syst. 2015, 26, 3317–3329.
  19. Waqas, M.; Niu, Y.; Li, Y.; Ahmed, M.; Jin, D.; Chen, S.; Han, Z. A Comprehensive Survey on Mobility-Aware D2D Communications: Principles, Practice and Challenges. IEEE Commun. Surv. Tutor. 2019, 22, 1863–1886.
  20. Qi, Q.; Liao, J.; Wang, J.; Li, Q.; Cao, Y. Software Defined Resource Orchestration System for Multitask Application in Heterogeneous Mobile Cloud Computing. Mob. Inf. Syst. 2016, 2016, 2784548.
  21. Peng, Q.; Xia, Y.; Feng, Z.; Lee, J.; Wu, C.; Luo, X.; Zheng, W.; Liu, H.; Qin, Y.; Chen, P. Mobility-Aware and Migration-Enabled Online Edge User Allocation in Mobile Edge Computing. In Proceedings of the 2019 IEEE International Conference on Web Services (ICWS), Milan, Italy, 8–13 July 2019; pp. 91–98.
  22. Shi, Y.; Chen, S.; Xu, X. MAGA: A Mobility-Aware Computation Offloading Decision for Distributed Mobile Cloud Computing. IEEE Internet Things J. 2017, 5, 164–174.
  23. Yao, H.; Bai, C.; Xiong, M.; Zeng, D.; Fu, Z.J. Heterogeneous cloudlet deployment and user-cloudlet association toward cost effective fog computing. Concurr. Comput. Pract. Exp. 2017, 29, e3975.
  24. Zhan, W.; Luo, C.; Min, G.; Wang, C.; Zhu, Q.; Duan, H. Mobility-Aware Multi-User Offloading Optimization for Mobile Edge Computing. IEEE Trans. Veh. Technol. 2020, 69, 3341–3356.
  25. Wu, C.L.; Chiu, T.C.; Wang, C.Y.; Pang, A.C. Mobility-Aware Deep Reinforcement Learning with Glimpse Mobility Prediction in Edge Computing. In Proceedings of the ICC 2020—2020 IEEE International Conference on Communications (ICC), Dublin, Ireland, 7–11 June 2020.
  26. Huang, L.; Feng, X.; Feng, A.; Huang, Y.; Qian, L.P. Distributed Deep Learning-based Offloading for Mobile Edge Computing Networks. Mob. Netw. Appl. 2018, 1–8.
  27. Tang, M.; Wong, V.W. Deep Reinforcement Learning for Task Offloading in Mobile Edge Computing Systems. IEEE Trans. Mob. Comput. 2020.
  28. Jang, I.; Kim, H.; Lee, D.; Son, Y.-S.; Kim, S. Knowledge Transfer for On-Device Deep Reinforcement Learning in Resource Constrained Edge Computing Systems. IEEE Access 2020, 8, 146588–146597.
  29. Chen, X.; Zhang, H.; Wu, C.; Mao, S.; Ji, Y.; Bennis, M. Optimized Computation Offloading Performance in Virtual Edge Computing Systems Via Deep Reinforcement Learning. IEEE Internet Things J. 2019, 6, 4005–4018.
  30. Song, S.; Fang, Z.; Zhang, Z.; Chen, C.-L.; Sun, H. Semi-Online Computational Offloading by Dueling Deep-Q Network for User Behavior Prediction. IEEE Access 2020, 8, 118192–118204.
  31. Zhou, L.; Zhang, Z.; Zhao, L.; Yang, P. Attention-based BiLSTM models for personality recognition from user-generated content. Inf. Sci. 2022, 596, 460–471.
  32. Zhang, Z.; Guo, J.; Zhang, H.; Zhou, L.; Wang, M. Product selection based on sentiment analysis of online reviews: An intuitionistic fuzzy TODIM method. Complex Intell. Syst. 2022, 1–14.
  33. Zhou, L.; Tang, L.; Zhang, Z. Extracting and ranking product features in consumer reviews based on evidence theory. J. Ambient Intell. Humaniz. Comput. 2022, 1–11.
  34. Xu, C.; Zheng, G.; Zhao, X. Energy-Minimization Task Offloading and Resource Allocation for Mobile Edge Computing in NOMA Heterogeneous Networks. IEEE Trans. Veh. Technol. 2020, 69, 16001–16016.
  35. Zaman, S.K.U.; Khan, A.U.R.; Malik, S.U.R.; Khan, A.N.; Maqsood, T.; Madani, S.A. Formal Verification and Performance Evaluation of Task Scheduling Heuristics for Makespan Optimization and Workflow Distribution in Large-scale Computing Systems. Comput. Syst. Sci. Eng. 2017, 32, 227–241.
  36. Wang, D.; Tian, X.; Cui, H.; Liu, Z. Reinforcement learning-based joint task offloading and migration schemes optimization in mobility-aware MEC network. China Commun. 2020, 17, 31–44.
  37. Zaman, S.K.U.; Maqsood, T.; Ali, M.; Bilal, K.; Madani, S.A.; Khan, A.u.R. A Load Balanced Task Scheduling Heuristic for Large-Scale Computing Systems. Comput. Syst. Sci. Eng. 2019, 34, 79–90.
  38. Katal, A.; Dahiya, S.; Choudhury, T. Energy efficiency in cloud computing data center: A survey on hardware technologies. Clust. Comput. 2021, 25, 675–705.
  39. Beloglazov, A.; Abawajy, J.; Buyya, R. Energy-aware resource allocation heuristics for efficient management of data centers for Cloud computing. Future Gener. Comput. Syst. 2012, 28, 755–768.
  40. Parrilla, L.; Álvarez-Bermejo, J.A.; Castillo, E.; López-Ramos, J.A.; Morales-Santos, D.P.; García, A. Elliptic Curve Cryptography hardware accelerator for high-performance secure servers. J. Supercomput. 2018, 75, 1107–1122.
  41. Duong, T.M.; Kwon, S. Vertical Handover Analysis for Randomly Deployed Small Cells in Heterogeneous Networks. IEEE Trans. Wirel. Commun. 2020, 19, 2282–2292.
  42. Liu, X.; Yu, J.; Qi, H.; Yang, J.; Rong, W.; Zhang, X.; Gao, Y. Learning to Predict the Mobility of Users in Mobile mmWave Networks. IEEE Wirel. Commun. 2020, 27, 124–131.
  43. Hewamalage, H.; Bergmeir, C.; Bandara, K. Recurrent neural networks for time series forecasting: Current status and future directions. Int. J. Forecast. 2021, 37, 388–427.
  44. Liu, D.; Li, W. Mobile communication base station traffic forecast. Computing 2021, 5, 52–55.
  45. Chih-Lin, I.; Han, S.; Bian, S. Energy-efficient 5G for a greener future. Nat. Electron. 2020, 3, 182–184.
  46. Marler, R.T.; Arora, J.S. The weighted sum method for multi-objective optimization: New insights. Struct. Multidiscip. Optim. 2010, 41, 853–862.
  47. Hoshino, M.; Yoshida, T.; Imamura, D. Further advancements for E-UTRA physical layer aspects (Release 9). IEICE Trans. Commun. 2011, 94, 3346–3353.
  48. Wu, W.; Zhou, F.; Hu, R.Q.; Wang, B. Energy-Efficient Resource Allocation for Secure NOMA-Enabled Mobile Edge Computing Networks. IEEE Trans. Commun. 2020, 68, 493–505.
  49. Lema, M.A.; Laya, A.; Mahmoodi, T.; Cuevas, M.; Sachs, J.; Markendahl, J.; Dohler, M. Business Case and Technology Analysis for 5G Low Latency Applications. IEEE Access 2017, 5, 5917–5935.
Figure 1. Scenarios of task offloading with user mobility and handovers.
Figure 2. Mobility and location prediction use case.
Figure 3. COME-UP architecture.
Figure 4. Location and direction prediction model (LDPM).
Figure 5. Latency: (a) α = 0.5, β = 0.25, γ = 0.25; (b) α = 0.3, β = 0.3, γ = 0.3.
Figure 6. Energy consumption: (a) α = 0.25, β = 0.5, γ = 0.25; (b) α = 0.3, β = 0.3, γ = 0.3.
Figure 7. Server utilization: (a) α = 0.25, β = 0.25, γ = 0.5; (b) α = 0.3, β = 0.3, γ = 0.3.
Table 1. Symbols and definitions.

Symbol | Definition
------ | ----------
$t_i$ | Incoming task for offloading
$E_i$ | Computation time of task $t_i$ at the user device
$C_i$ | Cycles required to complete the task $t_i$
$R_i$ | Mobile device transmission rate
$C_s$ | Computing capacity of the edge server
$TE_i^k$ | Time required to execute task $t_i$ at edge server $S_k$
$S_k$ | Aggregate execution time of tasks mapped at edge server $S_k$
$U_s$ | Resource utilization of the server
$P_k$ | Power consumed by the server
$\delta$ | Selected server with optimized features after prediction
Table 2. Simulation settings and values.

Parameter | Value
--------- | -----
Macrocell radius | 200 m
No. of cell sites | 144
Moving velocity of the user | 20–80 km/h
Server CPU frequency and instruction set | 3 GHz, CISC
Mobile device CPU frequency and instruction set | 3 GHz, RISC
Input data size | 1–30 MB
No. of users | 100–1000
