Article

Task Offloading Scheme for Survivability Guarantee Based on Traffic Prediction in 6G Edge Networks

1 State Key Laboratory of Information Photonics and Optical Communications, Beijing University of Posts and Telecommunications, Beijing 100876, China
2 Kuaishou Technology Co., Ltd., Beijing 100085, China
3 Department of Fundamental Network Technology, China Mobile Research Institute, Beijing 100053, China
* Author to whom correspondence should be addressed.
Electronics 2023, 12(21), 4497; https://doi.org/10.3390/electronics12214497
Submission received: 19 September 2023 / Revised: 22 October 2023 / Accepted: 30 October 2023 / Published: 1 November 2023

Abstract: With the development of sixth-generation (6G) mobile networks, the rise of emerging intelligent services has led to a huge increase in traffic. As an important technology supporting the development of 6G, mobile edge computing (MEC) effectively meets the ultra-low latency requirements of most emerging services. However, due to the limited processing capacity of edge nodes, overload on any edge node causes service degradation, interruption, and even node failure, weakening the advantages of MEC and reducing the survivability of the whole network. In this paper, we propose a task offloading scheme based on traffic prediction for node-overload protection to ensure the survivability of 6G edge networks. We transform the network survivability guarantee problem into a task offloading problem under the constraint of future available resources based on traffic prediction, and we develop a particle swarm optimization algorithm based on policy gradient (PSO-PG) to jointly optimize offloading decisions, routing, and computing resource allocation. Simulations verify the effectiveness of the proposed scheme in guaranteeing the survivability of 6G edge networks, and evaluations in multiple scenarios with different node scales verify the wide applicability of our work.

1. Introduction

In recent years, with the rapid advancement of 6G technology, the rise of emerging intelligent services such as intelligent traffic control, holographic communication, mission-critical services, and smart homes has put forward higher requirements for ultra-low latency [1,2,3]. According to IDC predictions, global network traffic will reach 175 ZB by 2025 [4,5]. The rapid growth of network traffic and the constantly evolving demands of new service scenarios are powerful driving forces for the development of 6G [6]. The emerging intelligent services driven by 6G have increasingly urgent demands for highly reliable network performance and ultra-low latency [7,8]. In order to meet these more stringent requirements, mobile edge computing (MEC) is considered a key technology for overcoming the difficulties faced by 6G, representing an important component of the distributed architecture and edge intelligence in the 6G vision [8].
Supported by MEC, massive intelligent services access computing resources with lower delay [9,10,11]. Intelligent services in new scenarios feature rapidly changing demands and large amounts of data, and the limited resources and computing capacity of edge nodes make it difficult to meet the ultra-low latency requirements of expanding large-scale services. On the one hand, when an edge node receives a large number of processing requests in a short time, the traffic burst causes node overload, leading to service degradation or interruption and weakening the advantages of MEC, which is intolerable for delay-sensitive services [12]. On the other hand, node failures caused by node overload significantly reduce the available resources, while fault recovery or node replacement is rigid and time-consuming [8,13]. It is therefore urgent to improve the survivability of 6G edge networks.
The traffic surge brought about by the development of emerging services has led to a shortage of computing and network resources, and a lack of control over the use of available resources can easily lead to the overprovisioning of node resources, increasing the possibility of node overload in 6G edge networks. Meanwhile, the development of 6G expands the coverage scale of the network and widens the risk range of node failure. The resulting decrease in survivability seriously affects the quality of service and user experience. The development of 6G networks therefore requires evolving towards intelligence, and the key challenge in guaranteeing survivability is the multi-node collaborative scheduling of multidimensional resources. With the continuous improvement of artificial intelligence (AI) technology and its capacity for managing complex systems, AI has become mature enough to be deployed in complex networks, and AI-assisted algorithms have become a feasible solution to the difficulties of future complex network systems [14,15]. In the classic scenarios covered by 6G, ultra-low latency and ultra-high reliability can be realized with the assistance of AI. Therefore, designing a proactive, AI-assisted mechanism that prevents node overload is a promising approach to ensuring network survivability. However, many studies have focused on dynamic scheduling based on load balancing without attaching importance to future available resource utilization. Since traffic prediction is representative of future resource demand, it also reflects the future available resources, and excellent predictive capacity is thus a crucial means of evaluating them; AI algorithms can be deployed directly on large-scale edge nodes to predict future available resources from the collected data. Furthermore, offloading blindly may lead to excessive resource competition among tasks deployed on the same edge node, resulting in high latency overhead. Therefore, integrating future available resources into a task offloading scheme for node-overload protection is of great significance for the survivability of the entire 6G edge network.
In this paper, we propose a task offloading scheme based on available resources for survivability guarantee, aiming to ensure the survivability of 6G edge networks. The scheme is composed of two parts: a mapping from traffic prediction to future available resources, and an overload protection scheme based on a particle swarm optimization algorithm based on policy gradient (PSO-PG). Firstly, in order to maximize resource utilization, the predicted traffic scale is used to map future available resources. Secondly, based on the future available resources, the PSO-PG heuristic algorithm determines task offloading schemes from a global optimization perspective to provide reliable overload protection. Our contributions can be summarized as follows:
  • We deploy the accurate traffic prediction model on edge nodes by constructing a mapping between traffic prediction and future available resources to achieve resource visualization of the entire 6G edge network, so as to maximize the advantages of using edge resources while ensuring the survivability of the network.
  • In response to the highly dynamic nature of networks, where fixed algorithmic parameters may fail to adapt to changing network environments, we develop the PSO-PG algorithm for the design of node-overload protection schemes in dynamic networks. The key parameters of the PSO algorithm are adaptively adjusted by policy gradients (PGs) that interact with the actual network environment. The improved algorithm avoids the difficulty of manually configuring parameters and the inability to update them in time according to actual operating conditions.
  • Under the constraints of the future available resources, and utilizing the advantages of the PSO-PG algorithm, we propose an innovative survivability guarantee framework for 6G edge networks. It integrates the prediction of required processing power with the process of task offloading. The scheme effectively realizes the joint optimization of offloading decisions, routing, and computing resource allocation to match the network survivability guarantee, minimizing the increase in service delay due to insufficient or excessive resource allocation while guaranteeing the performance of the whole network.
The rest of this paper is organized as follows: Section 2 introduces the related work. Section 3 introduces the system architecture and scheme process. Section 4 introduces the task offloading scheme based on traffic prediction for node-overload protection. Section 5 presents the evaluation of the proposed scheme and analyzes the simulation results. Section 6 concludes the paper and discusses future work.

2. Related Work

The service degradation and node failures caused by node overload greatly reduce the survivability of the entire edge network, which is disastrous for both users and operators. Node-overload protection to ensure the survivability of the whole network has therefore become a research hotspot.
In [16], the protection strategy was determined by a single threshold on the current network state; services were offloaded to any other server once the threshold was exceeded. In [17], the authors divided a task into multiple subtasks, which were offloaded to multiple edge nodes for overload protection by considering both network reliability and latency. In [18], the protection scheme, based on offloading all tasks from the overloaded node to a destination node, was decided by maximizing the reduction of energy consumption under a delay constraint. Liu et al. [17] also proposed an offloading scheme for overload protection that prioritizes latency-sensitive tasks, scheduling them to the cloud server according to a set priority. Sun et al. [19] proposed a delay-aware offloading scheme in which all tasks are offloaded to the nearest node to minimize transmission latency.
In [20], a deep-learning-based load balancer was used to distribute task offloading efficiently among edge servers. In [21], value iteration over a Markov Decision Process (MDP) was used to perform cost-effective task offloading. In [22], the authors transformed the task offloading problem into a Markov process and developed the optimal strategy through the mathematical framework of a set cost model.
In [23], cache resources and computing resources were integrated through deep reinforcement learning, and an optimized overload protection scheme was developed through multiple iterations. In [24], the proposed method allows each user to independently learn an efficient offloading destination to avoid overload, achieving the joint optimization of global power consumption and latency. In [25], the authors adopted an adaptive admission control strategy for overloaded node protection; the strategy utilized the observation state model of reinforcement learning, combined with practical consideration of offloading costs, to develop an offloading strategy that adapts to a time-varying environment.
In [26], the authors proposed a joint optimization method for computation offloading and content caching based on traffic prediction, predicting the traffic of each edge server with a deep spatiotemporal residual network and then using a genetic algorithm to select the offloading destinations with the lowest execution latency. In [27], the authors proposed a collaborative computation offloading approach using deep reinforcement learning to minimize latency by solving the associated Mixed-Integer Nonlinear Programming (MINLP) optimization problem. In [28], the authors proposed a task offloading scheme combining a fuzzy neural network (FNN) and game theory, applying the FNN to predict future traffic and adopting game theory to determine the optimal task offloading strategy for the user, so as to satisfy the delay requirements of services.
In [10], the authors proposed a multi-task cooperative computing mechanism for overload protection in 6G networks, which focuses on node selection and path optimization to improve the efficiency of computational task execution and the quality of service by optimizing a delay-aware multi-task cooperative computing objective. In [29], the authors proposed a novel profile-based data-driven VNF performance-resource analysis (PDPA) framework that analyzes the complex relationship between network performance KPIs, resource allocation, and utilization to achieve performance assurance for next-generation networks. In [30], the authors proposed deep reinforcement learning based on migration techniques to explore network slicing, guaranteeing the normal operation of the network under extreme conditions while avoiding unstable system performance and violations of network service level agreements (SLAs). In [31], the authors improved the probability of smart device connectivity by calculating the probabilistic sensitivity of the optimal paths between smart devices, and optimized the communication delay and data preprocessing time by analyzing the different sensitivities with machine learning regression algorithms, improving the stability of smart device access to the network.
These studies focus on latency regardless of the impact of policy execution on the performance of the entire network. In addition, they tend to fix the algorithm parameters for task offloading; the parameters cannot be adjusted to follow the dynamic network state, which is inflexible. Our work compensates for these shortcomings by jointly applying traffic prediction and a task offloading scheme. By adjusting the algorithm parameters in a timely manner to follow the dynamic network state, the proposed scheme achieves an optimal trade-off between guaranteed service latency and overall network performance.

3. System Architecture

In this section, we first describe the architecture of the 6G edge network and develop an optimal task offloading scheme for node-overload protection with guaranteed quality of service and survivability of the 6G edge networks. The symbols appearing in this section are explained in Table 1.

3.1. Description of Network Architecture

The MEC servers, with computing and storage capability, are placed at the edge nodes in 6G edge networks, providing users with a low-delay processing guarantee [32]. The 6G edge network architecture is shown in Figure 1 and is composed of a cloud layer, an edge layer, and a device layer. The edge layer consists of MEC servers co-located with OpenFlow switches. Under the unified coordination of the controller, the MEC servers frequently interact through ultra-large-capacity optical fiber interconnections [33]. It is worth noting that the computing capacity of each MEC server is limited.
In the architecture of 6G edge networks, each edge node interacts with the network environment in real time to realize the perception of the network state. Simultaneously, routine monitoring is continuously implemented, and reporting information is sent to the cloud periodically. The cloud also sends the state information of the relevant node to other edge nodes. With a certain degree of autonomy, the edge node can process tasks directly from the device layer in a timely manner.
In this paper, we focus on the constraints of average delay and limited computing resources in the case of single-node overload. The whole process includes node-overload prediction based on traffic prediction and task offloading for overload protection, as shown in Figure 1. The purpose of the traffic prediction is to forecast the scale and timing of burst traffic. The node-overload protection scheme is developed through task offloading based on edge collaboration before node overload occurs.

3.2. The Establishment of an Offloading Model

In order to obtain the network state at a given time in a dynamically changing network, the state within a short interval is treated as quasi-static. Task offloading based on edge collaboration can solve the node-overload problem and ensure the survivability of 6G edge networks, supported by the global view of the entire system's resources provided by the controller.
In this paper, we mainly describe the protection scenario of single-node overload. Assuming that the 6G edge network is composed of $M$ edge nodes, the predicted overloaded node is labeled $M_L$, and the task set generated on it is denoted $\{P_i \mid i = 1, 2, \ldots, N\}$. The set of candidate destination nodes, which accept task offloading requests, is defined as $\{E_m \mid m = 1, 2, \ldots, M-1\}$. It is worth noting that each task must be processed within its maximum tolerable latency. The computing resources are provided by the destination edge node.
The controller continuously monitors the state of the whole edge network in real time, including the available resources and computing capacity. The computing capacity is determined by the processing capability of the CPU on the MEC server; for server $m$ it is denoted $C_m$, representing the CPU cycles completed per second. In order to satisfy the QoS, it is necessary to set a limit ratio on the maximum computing capacity, determined by the predicted traffic scale in the next timeslot. The limit ratio is denoted $\delta_m$, which is negatively correlated with the predicted traffic.
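As a purely illustrative sketch in Python, the snippet below shows one possible mapping from predicted traffic to the limit ratio. The paper only states that $\delta_m$ is negatively correlated with the predicted traffic, so the linear form, the function name limit_ratio, and all numeric values are assumptions.

```python
# Illustrative only: the paper does not specify the exact mapping from
# predicted traffic to delta_m, only that it is negatively correlated.
# A simple linear form under that assumption:

def limit_ratio(predicted_traffic, node_capacity, floor=0.1, ceil=0.9):
    """Map the traffic forecast of server m to a limit ratio delta_m."""
    load = min(predicted_traffic / node_capacity, 1.0)  # normalized forecast
    return max(floor, ceil * (1.0 - load))              # higher load -> lower ratio

# e.g., a node forecast at half capacity reserves delta_m = 0.45 of C_m
print(limit_ratio(1.5e10, 3e10))
```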
Latency is an important evaluation metric that cannot be ignored during the whole process of node-overload protection. In this process, the latency consists of the transmission delay from the terminal to the local server, the transmission delay from the local server to the destination MEC server, and the processing delay on the destination MEC server.
It is worth noting that the terminal connects to the local MEC server through the base station (BS), so the access delay consists of two components: the wireless delay from the terminal to the BS and the fronthaul transmission delay from the BS to the local MEC server [30]. For each task, these two delay components are unavoidable and are incurred indiscriminately, independent of the subsequent offloading decisions. Therefore, this part of the delay is not considered further in this section.
The edge collaboration relies on elastic optical networks (EONs), which are supported by flexible grid technology. The spectrum resources of each link are divided into frequency slots (FSs), where the capacity of each FS is set to B = 6.25 Gbps. In this paper, the modulation format is BPSK, and the transmission rate is R = 6.25 Gbps. The transmission delay of task $i$ is described in Formula (1).
$T_r^i = \sum_{m \in E_m} \lambda_{i,m} \frac{d_i}{R \cdot D_{i,m}}$ (1)
where $\lambda_{i,m}$ indicates whether task $i$ is offloaded to destination server $m$, $d_i$ is the data size of task $i$, $R$ is the transmission rate of each frequency slot, and $D_{i,m}$ is the number of frequency slots allocated for transmitting task $i$ to server $m$.
The MEC servers are equipped with high-speed CPU processors, whose processing delay is defined by Equation (2).
$T_c^i = \sum_{m \in E_m} \lambda_{i,m} \frac{c_i}{V_{i,m}}$ (2)
where $c_i$ is the computing capacity required to process task $i$, i.e., the number of CPU cycles required, and $V_{i,m}$ is the computing capacity allocated to task $i$ by destination server $m$.
Therefore, the total delay of the whole process of task offloading is exhibited as follows:
$T_i = \sum_{m \in E_m} \lambda_{i,m} \left( \frac{d_i}{R \cdot D_{i,m}} + \frac{c_i}{V_{i,m}} \right)$ (3)
Objective:
$\min \frac{1}{N} \sum_{i \in P_i} T_i$ (4)
Constraint:
  • $\frac{c_i}{V_{i,m}} \le T_{i,\max}$ ensures that the processing delay of each task is within its maximum tolerable delay.
  • $\sum_{i \in P_i} \lambda_{i,m} V_{i,m} \le \delta_m C_m$ indicates that the tasks allocated to an edge server must be within the computing capacity of the server, where $\delta_m$ is the limit ratio of the maximum computing capacity of server $m$.
  • $\sum_{m \in E_m} \lambda_{i,m} = 1$ ensures that each task is allocated to exactly one server for processing.
  • $\forall (a_o, a_d) \in L_p^{i,d}: D_{i,d} = fs_i(a_o, a_d)$ represents that the frequency slots occupied on each link of the optical path $(a_o, a_d)$ from the local MEC to the destination MEC equal the allocation $D_{i,d}$.
  • $\sum_{i=1}^{n} fs_i(a_o, a_d) \le D_{FS}$ constrains the number of frequency slots occupied by all tasks to be within the frequency-slot capacity of the optical path.
  • The frequency slots allocated to each task must be contiguous: if task $i$ occupies slots $f$ and $f+1$ on path $(a_o, a_d)$, no further slot in $[f+2, FS]$ is assigned to it separately, so that the allocation of frequency slots follows the principle of proximity.
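The following minimal Python sketch illustrates the delay model of Formulas (1)-(3) and the constraints above; the function names and the concrete numbers are ours, not the paper's.

```python
R = 6.25e9  # transmission rate per frequency slot (bits/s), Section 3.2

def task_delay(d_i, D_im, c_i, V_im):
    """Total delay T_i of Formula (3): transmission plus processing delay."""
    T_tr = d_i / (R * D_im)   # Formula (1): data size over allocated FS rate
    T_c = c_i / V_im          # Formula (2): required cycles over allocated capacity
    return T_tr + T_c

def node_feasible(tasks, delta_m, C_m):
    """Check the capacity constraint (sum of allocated capacity <= delta_m * C_m)
    and each task's processing-delay bound."""
    if sum(t["V_im"] for t in tasks) > delta_m * C_m:
        return False
    return all(t["c_i"] / t["V_im"] <= t["T_max"] for t in tasks)

# a 1.5e9-bit task on 4 FSs needing 2e8 cycles at 1e9 cycles/s (placeholders)
print(task_delay(1.5e9, 4, 2e8, 1e9))                # ~0.26 s
task = {"V_im": 1e9, "c_i": 2e8, "T_max": 0.5}
print(node_feasible([task], delta_m=0.7, C_m=3e10))  # True
```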

4. Task Offloading Scheme Based on Traffic Prediction for Node-Overload Protection

In order to improve the survivability of 6G edge networks and guarantee services with low latency processing, we propose a task offloading scheme based on traffic prediction for node-overload protection, as described below in detail.

4.1. Traffic Prediction Based on LSTM

In the architecture of the 6G edge network, the expanding scale of devices has driven the generation of a large number of new services, and MEC has become the primary way to process them. The whole process should be supported by low delay so as to improve the quality of experience. Since predicted traffic can reflect the future available resources, we collect the historical traffic recorded by the edge nodes and deploy a prediction model on them to predict the future traffic scale. The main purpose of traffic prediction is to design a proactive mechanism that prevents node overload by measuring the scale of traffic bursts. With accurate prediction, we can implement the task offloading scheme in a timely manner before node overload occurs.
Since traffic flow is a typical time series, we apply the Long Short-Term Memory (LSTM) algorithm to the traffic prediction model. LSTM is an improved recurrent neural network that overcomes the inaccurate learning of past information caused by vanishing gradients in RNNs [34]. By incorporating a memory function and dynamically adjusting the correlation weight coefficients between sequences, LSTM achieves high-precision prediction of long and short time series. Each edge node monitors and collects its historical traffic to generate a periodic time series as input to the LSTM. The entire process is shown in Figure 2.
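As a minimal sketch, the following Python snippet shows how a collected traffic trace can be sliced into supervised (window, next-value) samples for the LSTM; the window length of 5 and the synthetic stand-in trace are assumptions.

```python
import numpy as np

def make_windows(series, window=5):
    """Slice a 1-D traffic series into (X, y) pairs for one-step prediction."""
    X = np.array([series[i:i + window] for i in range(len(series) - window)])
    y = np.array(series[window:])
    return X[..., None], y   # add a feature axis: (samples, window, 1)

traffic = np.sin(np.linspace(0, 20, 500)) + 1.0   # stand-in for real traces
X, y = make_windows(traffic)
print(X.shape, y.shape)      # (495, 5, 1) (495,)
```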
The forget gate determines which information in the cell state is transferred from the previous moment to the current moment. The previous information is retained when the gate output is close to 1; otherwise, it is dropped, as shown in Formula (5).
$f_t = \alpha(W_f [h_{t-1}, x_t] + b_f)$ (5)
In the above formula, $f_t$ represents the forget gate state, $x_t$ is the input time series, $\alpha$ represents the activation function, $h_{t-1}$ is the output of the previous hidden layer, $W_f$ is the weight matrix of the forget gate, and $b_f$ is the bias.
The input gate determines which information is saved into the cell state, in two steps. First, the input gate is computed as shown in Formula (6); similar to $f_t$, the values in $i_t$ indicate whether the information in the candidate state $\bar{C}$ is discarded or retained. The candidate cell state $\bar{C}$, which encodes $x_t$ and $h_{t-1}$, is computed as shown in Formula (7); the useful memories are selected by the dot product with $i_t$, and the retained information is added as new memory to the cell state. The cell state update is shown in Equation (8), combining the information forgotten through $f_t$ with the latest useful information selected by $i_t$.
$i_t = \alpha(W_i [h_{t-1}, x_t] + b_i)$ (6)
$\bar{C} = \tanh(W_C [h_{t-1}, x_t] + b_C)$ (7)
$C_t = \bar{C} \times i_t + C_{t-1} \times f_t$ (8)
Finally, we obtain the output value, which is determined by the cell state. First, a sigmoid layer determines which part of the cell state will be output, as shown in Formula (9). Next, the cell state is passed through tanh (yielding a value between −1 and 1) and multiplied by the output of the sigmoid gate, as shown in Formula (10), to produce the corresponding prediction result.
$o_t = \alpha(W_o [h_{t-1}, x_t] + b_o)$ (9)
$h_t = o_t \times \tanh(C_t)$ (10)
Through the fully connected hidden layers, we obtain the output time series, which represents the future traffic scale on the edge nodes. Since traffic prediction is representative of future resource demand, it also reflects the future available resources. The predicted traffic scale is therefore mapped to future available resources, represented by the limit ratio of the maximum computing capacity provided by the server, as a basis for accepting or rejecting offloading requests in the future. The node-overload protection scheme based on traffic prediction is described in detail in the next section.
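To make the gate equations concrete, here is a from-scratch NumPy sketch of a single LSTM step implementing Equations (5)-(10); the random weights are placeholders rather than trained values.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, C_prev, W, b):
    """One LSTM time step; W and b hold the four gate parameter sets."""
    z = np.concatenate([h_prev, x_t])        # [h_{t-1}, x_t]
    f_t = sigmoid(W["f"] @ z + b["f"])       # forget gate, Eq. (5)
    i_t = sigmoid(W["i"] @ z + b["i"])       # input gate, Eq. (6)
    C_bar = np.tanh(W["C"] @ z + b["C"])     # candidate state, Eq. (7)
    C_t = C_bar * i_t + C_prev * f_t         # cell state update, Eq. (8)
    o_t = sigmoid(W["o"] @ z + b["o"])       # output gate, Eq. (9)
    h_t = o_t * np.tanh(C_t)                 # hidden output, Eq. (10)
    return h_t, C_t

rng = np.random.default_rng(0)
n_in, n_hid = 1, 8                           # toy sizes, not the paper's
W = {k: rng.normal(scale=0.1, size=(n_hid, n_hid + n_in)) for k in "fiCo"}
b = {k: np.zeros(n_hid) for k in "fiCo"}
h, C = lstm_step(rng.normal(size=n_in), np.zeros(n_hid), np.zeros(n_hid), W, b)
print(h.shape)   # (8,)
```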
Following [35], the LSTM is trained with an asymmetric cost function, since the cost penalty is not necessarily constant but is related to the amount of service lost due to insufficient supply. The accuracy of traffic prediction is critical to the outcome of resource allocation, and incorrect prediction may lead to over- or under-provisioning of resources, thus increasing the cost of operating the network. Oversupply occurs when the predicted traffic is higher than the actual traffic, leading to the allocation of more resources than required; this unnecessary allocation increases costs. Conversely, under-provisioning degrades QoS by allocating fewer resources than required, inevitably resulting in loss of service.

4.2. PSO-PG-Based Task Offloading Scheme under Multi-Edge Collaboration

In Section 4.1, we applied LSTM to predict the future traffic scale. To avoid node overload, we now focus on task offloading based on edge collaboration for node-overload protection. However, with the expansion of the network scale, it is difficult for traditional MINLP algorithms to obtain optimal solutions within an acceptable timeframe. Therefore, within the heuristic framework of the PSO algorithm, we apply PSO-PG to develop the node-overload protection scheme based on edge collaboration for ensuring the survivability of the network, balancing the trade-off between average latency and resource usage [36,37].
The PSO-PG algorithm utilizes a policy gradient to guide the optimization direction of PSO: an intelligent agent interacts with the environment and tunes the PSO parameters through timely feedback [38,39]. The interaction process of PSO-PG is shown in Figure 3. Task offloading based on edge collaboration for node-overload protection thoroughly scans the possible combinations of available edge servers and network resources to develop the optimal offloading scheme. Specifically, the offloading scheme is divided into making offloading decisions and selecting the destination MEC server.
Encoding the particles first, the $i$th particle (a feasible scheme for task $i$) is defined as
$x_i = \{p_{i,d}, p_{i,s}, p_{i,p}, p_{i,c}, p_{i,o} \mid i \in Q\}$ (11)
where $p_{i,d}$ is the destination MEC server $d$ selected for offloading task $i$, $p_{i,s}$ is the number of FSs allocated to task $i$, $p_{i,p}$ is the routing path allocated to task $i$, $p_{i,c}$ is the computing resource allocated to task $i$, and $p_{i,o}$ is the allocation order of task $i$.
The particle swarm encoding at the $k$th iteration, $X_i^k$, represents a detailed candidate offloading scheme:
$X_i^k = \{x_1^k, x_2^k, \ldots, x_Q^k\}$ (12)
For the selection of destination MEC servers, we prioritize MEC servers with lighter predicted workloads, i.e., more available resources. In order to minimize the average delay, all remaining computing resources are allocated to offloading tasks while ensuring sufficient computing resources for the tasks already supported on the candidate servers. In routing resource allocation, path selection follows the shortest K-path principle, and spectrum allocation follows the first-fit principle.
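A minimal Python sketch of the particle encoding of Equations (11) and (12); the class name Particle and all field values are illustrative placeholders.

```python
from dataclasses import dataclass

@dataclass
class Particle:
    p_d: int      # destination MEC server selected for the task
    p_s: int      # number of FSs allocated
    p_p: int      # index of the routing path (among the K shortest paths)
    p_c: float    # computing resource allocated (cycles/s)
    p_o: int      # allocation order of the task

# X^k: one candidate offloading scheme = one particle component per task
swarm_member = [Particle(p_d=2, p_s=4, p_p=0, p_c=1e9, p_o=i) for i in range(3)]
print(swarm_member[0])
```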
As a heuristic optimization algorithm, PSO-PG explores the optimal solution through particle iteration. Typically, a particle swarm is composed of $N$ particles $P[V_i, X_i]$, associated with velocities and positions in the $R$-dimensional search space.
$V_i = \{V_i^1, V_i^2, \ldots, V_i^R \mid i = 1, 2, \ldots, N\}$ (13)
$X_i = \{X_i^1, X_i^2, \ldots, X_i^R \mid i = 1, 2, \ldots, N\}$ (14)
The optimization objective of PSO is to find the global optimal solution by iteration in the feasible optimal solution space. In each merit-seeking process, the particle velocity and position update are shown in Equations (15) and (16).
$V_i^{t+1} = \omega \times V_i^t + c_1 \times r_1 \times (S_{pbest,i}^t - X_i^t) + c_2 \times r_2 \times (S_{gbest,i}^t - X_i^t)$ (15)
$X_i^{t+1} = X_i^t + V_i^{t+1}$ (16)
$\omega$ is the inertia weight, $c_1$ is the cognitive acceleration coefficient, $c_2$ is the social acceleration coefficient, and $r_1$ and $r_2$ are random numbers uniformly distributed in (0, 1).
The velocity $V_i^{t+1}$ describes the final optimization direction, combining the particle's accumulated past velocity with its current optimization direction, which is obtained from the differences between the particle's optimal positions and its current position. During the optimization process, the particle positions are continuously updated; the updates stop once the change in the search direction is small enough that the process is close to optimal.
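The following self-contained Python sketch runs the updates of Equations (15) and (16) on a toy objective (the sphere function) standing in for the Fitness of Section 4.2; the swarm size and coefficients follow the simulation settings of Section 5.2, everything else is an assumption.

```python
import numpy as np

rng = np.random.default_rng(1)
N, R_dim = 30, 5                   # swarm size and search-space dimension
w, c1, c2 = 0.9, 1.2, 1.2          # inertia and acceleration coefficients

X = rng.uniform(-5, 5, (N, R_dim))             # positions
V = np.zeros((N, R_dim))                       # velocities
fit = (X ** 2).sum(axis=1)                     # toy objective: sphere function
pbest, pbest_fit = X.copy(), fit.copy()
gbest = X[fit.argmin()].copy()

for _ in range(100):
    r1, r2 = rng.random((N, 1)), rng.random((N, 1))
    V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (gbest - X)   # Eq. (15)
    X = X + V                                                   # Eq. (16)
    fit = (X ** 2).sum(axis=1)
    better = fit < pbest_fit
    pbest[better], pbest_fit[better] = X[better], fit[better]
    gbest = pbest[pbest_fit.argmin()].copy()

print(pbest_fit.min())   # approaches 0 for the sphere function
```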
In the traditional PSO algorithm, the parameters are fixed and cannot be adjusted according to actual operating conditions, which can lead to premature convergence. To overcome this challenge, we adopt the improved PSO-PG algorithm to achieve dynamic adjustment of the key parameters.
Relying on the intelligent agent, the PG interacts with the network environment in real time: the environment inputs the current state $S_t$ into the agent, the output action $A_t$ is applied to the PSO, and the action result feeds back the corresponding reward value $R_t$ together with the next state $S_{t+1}$ to the agent. The agent determines the parameter tuning for PSO based on the feedback reward values, helping the particles determine the optimization direction, as shown in Algorithm 1.
Algorithm 1: The algorithmic process of PSO-PG.
1. Initialize the PG training parameters, the learning rate, and the loss function $L_\theta$.
2. Initialize the particle swarm size $N$, the maximum number of iterations $It_{max}$, the inertia weight $\omega$, and the learning factors $c_1$ and $c_2$.
3. Obtain the initial fitness value according to the initial parameters.
4. For $It \le It_{max}$ do
5.   Input the individual optimal solution $S_{pbest}$ and the global optimal solution $S_{gbest}$ calculated by PSO into the PG.
6.   Output the action probability distribution $A_t$ through the PG iterative operation.
7.   The action selection function selects action $a$ according to $A_t$ and inputs $a$ to the PSO.
8.   PSO receives the action $a$ and updates $\omega$, $c_1$, and $c_2$ according to the parameter update rule.
9.   Update the particle swarm velocity and position according to Formulas (19) and (20).
10.  Update $S_{pbest}$ and $S_{gbest}$, and calculate the new Fitness value according to Formula (21).
11.  Return the new reward value $R_t$ by PG.
12.  Store the obtained state $s$, action $a$, and reward $R_t$ in the intelligent agent.
13.  Input the updated $S_{pbest}$ and $S_{gbest}$ into the PG.
14. End For
The input state $S_t$ comprises the relative distance between the particles and the global best position $S_{gbest,i}^t$ and the relative performance between the particles and the global best fitness:
$S = \begin{cases} d = \sum_{i=1}^{N} \left| x_{gbest,i} - x_i \right| \\ f = \sum_{i=1}^{N} \left( F_{gbest,i} - F_i \right) \end{cases}$ (17)
The intelligent agent computes the output action probabilities through the activation functions of the neural network hidden layers, and the action $A_t$ with the highest output probability is selected and input into the network environment. The output action probability distribution spans a seven-dimensional action space.
$A_t = [A_0, A_1, A_2, \ldots, A_6]$ (18)
with $A_0 + A_1 + A_2 + \cdots + A_6 = 1$, where $A_0$ and $A_1$ represent increasing and decreasing $\omega$, $A_2$ and $A_3$ represent increasing and decreasing $c_1$, $A_4$ and $A_5$ represent increasing and decreasing $c_2$, and $A_6$ represents leaving the parameters unchanged.
When the calculated Fitness is compared with the result of the previous iteration, the reward value $R_t$ gives positive feedback if the Fitness has improved; otherwise, $R_t$ gives negative feedback.
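As an illustrative sketch of this interaction, the snippet below implements a minimal REINFORCE-style tuner whose seven actions follow the semantics of Equation (18); the linear softmax policy, the step size, and the class name PGParamTuner are our assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

class PGParamTuner:
    """Minimal REINFORCE-style tuner for (w, c1, c2); illustrative only."""
    def __init__(self, n_state=2, n_action=7, lr=0.05, gamma=0.95):
        self.theta = np.zeros((n_state, n_action))  # linear policy weights
        self.lr, self.gamma = lr, gamma
        self.trajectory = []                        # (state, action, probs)

    def act(self, state):
        logits = state @ self.theta
        probs = np.exp(logits - logits.max()); probs /= probs.sum()
        action = rng.choice(len(probs), p=probs)
        self.trajectory.append((state, action, probs))
        return action

    def update(self, rewards):
        G, returns = 0.0, []                        # discounted returns
        for r in reversed(rewards):
            G = r + self.gamma * G
            returns.append(G)
        returns.reverse()
        for (s, a, p), G in zip(self.trajectory, returns):
            grad = -np.outer(s, p)                  # d log pi / d theta (softmax)
            grad[:, a] += s
            self.theta += self.lr * G * grad
        self.trajectory.clear()

def apply_action(a, w, c1, c2, step=0.05):
    # Action semantics from Eq. (18): A0/A1 adjust w, A2/A3 adjust c1,
    # A4/A5 adjust c2, A6 keeps everything unchanged.
    if a == 0: w += step
    elif a == 1: w -= step
    elif a == 2: c1 += step
    elif a == 3: c1 -= step
    elif a == 4: c2 += step
    elif a == 5: c2 -= step
    return w, c1, c2

tuner = PGParamTuner()
w, c1, c2 = 0.9, 1.2, 1.2
state = np.array([1.0, 1.0])          # (d, f) from Eq. (17), normalized
w, c1, c2 = apply_action(tuner.act(state), w, c1, c2)
tuner.update([+1.0])                  # +1 if Fitness improved, -1 otherwise
```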
With adaptive updating in PSO-PG, the improved formulae for particle velocity and position updates are as follows:
$V_i^p(t+1) = \omega_t \times V_i^p(t) + c_{1,t} \times r_1 \times (S_{pbest,i}^t - X_i^p(t)) + c_{2,t} \times r_2 \times (S_{gbest,i}^t - X_i^p(t))$ (19)
$X_i^p(t+1) = X_i^p(t) + V_i^p(t+1)$ (20)
The resource allocation of the offloading scheme is evaluated by the Fitness at time slot $t$, which guides the particles toward the optimal search. The Fitness value is a trade-off among latency, resource utilization, and blocking probability.
Average delay: the average response delay of all tasks as described in Section 3, $aveT = \frac{1}{N} \sum_{i \in P_i} T_t^i$.
Blocking probability: the proportion of blocked tasks to the total number of tasks, $B_p = \frac{N_b}{N}$, where $N_b$ is the number of blocked tasks and $N$ is the total number of tasks.
Resource utilization: the percentage of occupied frequency slots at time slot $t$, $Re = \frac{Fs_m}{Fs}$, where $Fs_m$ is the number of occupied frequency slots and $Fs$ is the total number of frequency slots.
Thus, the Fitness function is defined as
$Fitness = \alpha \times B_p + \beta \times aveT + \varepsilon \times \frac{1}{Re}$ (21)
where $\alpha$, $\beta$, and $\varepsilon$ are the weight coefficients.
From the above equation, the Fitness is a combination of three components. To obtain the optimal offloading scheme, it is necessary to optimize towards minimizing the Fitness value.
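A direct Python transcription of Equation (21); the weight values match Section 5.2, while the example inputs are placeholders.

```python
ALPHA, BETA, EPSILON = 200, 5, 15   # weights used in Section 5.2

def fitness(blocked, total_tasks, ave_delay, occupied_fs, total_fs):
    """Trade-off among blocking probability, average delay, and utilization."""
    B_p = blocked / total_tasks      # blocking probability
    Re = occupied_fs / total_fs      # spectrum resource utilization
    return ALPHA * B_p + BETA * ave_delay + EPSILON * (1.0 / Re)

# e.g., 4 of 400 tasks blocked, 0.8 s average delay, 60 of 100 FSs in use
print(fitness(4, 400, 0.8, 60, 100))   # 2 + 4 + 25 = 31.0
```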
In PSO-PG, the particles are updated and evaluated by Equations (19) and (20), as shown in Algorithm 2, which updates $S_{pbest,i}^t$ and $S_{gbest,i}^t$ by comparing Fitness values and finally obtains the global optimal solution $S_{gbest,i}^t$.
Algorithm 2: Task offloading for node-overload protection based on PSO-PG.
1. Obtain the predicted overloaded node.
2. Sort the candidate offloading nodes in ascending order of predicted load to form the candidate set.
3. Initialize the particle positions $X_i^k = \{x_1^k, x_2^k, \ldots, x_Q^k\}$ and particle velocity vectors $v_i^k = \{v_1^k, v_2^k, \ldots, v_Q^k\}$.
4. While the algorithm has not converged do
5.   For each particle $i$ do
6.     Decode the particle through (12) to obtain the offloading scheme.
7.     Select the initial destination MEC server and allocate resources.
8.     Calculate the Fitness ($F_i$) by Formula (21).
9.     $S_{gbest} = \min(S_{pbest,i})$
10.    if Fitness($F_i$) < Fitness($S_{pbest,i}$) then
11.      $S_{pbest,i} = F_i$
12.    end if
13.    if Fitness($S_{pbest,i}$) < Fitness($S_{gbest}$) then
14.      $S_{gbest} = S_{pbest,i}$
15.    end if
16.  End For
17.  Choose the smallest Fitness value as the optimal scheme and update the particles through Formulas (19) and (20).
18. End While
19. Output the optimal scheme $S_{gbest}$.

5. Evaluation

The detailed description of the scheme above focuses on two parts: traffic prediction and task offloading. AI-integrated algorithms require large amounts of data for training and learning, and a lack of data affects the training performance. Moreover, a huge amount of high-performance computing power is needed to support the AI training process [40,41,42]. Therefore, we deploy the algorithms on a high-performance GPU virtual machine for training. The simulation environment in this paper runs on a multi-core server with 12 2.5 GHz Intel Core i5-7200U CPU cores, 2 NVIDIA Titan Xp GPUs, and 32 GB of RAM for accelerated training of the neural networks in a virtual machine. In this section, we likewise divide the simulation into two parts for evaluation.

5.1. Simulation Setup and Results Analysis for Traffic Prediction Based on LSTM

We trained the LSTM algorithm on a historical traffic dataset from edge servers. The dataset used for simulation was traffic data collected every five minutes in July 2019 from a university data center in Beijing, China. The trained model is deployed on the edge nodes for traffic prediction. We collected 10,000 records from the historical dataset, including timestamp, arrival time, processing time, source IP and port, destination IP and port, packet size, device logs, etc. Eighty percent of the dataset is used for training and 20% for testing. The parameter settings for the LSTM are as follows: input_size = 5, batch_size = 32, learning rate = 0.001, 100 epochs, 2 hidden layers, and 256 hidden units.
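A minimal tf.keras sketch consistent with the listed hyperparameters; interpreting input_size = 5 as the input window length and stacking the two hidden layers in this exact layout are assumptions, since the paper lists only the parameter values.

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(5, 1)),                # window of 5 traffic samples
    tf.keras.layers.LSTM(256, return_sequences=True),   # hidden layer 1
    tf.keras.layers.LSTM(256),                          # hidden layer 2
    tf.keras.layers.Dense(1),                           # next-slot traffic estimate
])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
              loss="mse")
# model.fit(X_train, y_train, batch_size=32, epochs=100, validation_split=0.2)
```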
Support Vector Machines (SVM) offer high prediction accuracy on small datasets owing to their rigorous mathematical foundation. Deep Neural Networks (DNN) have strong nonlinear fitting capability and a wide range of prediction applications, with the ability to find optimized solutions quickly. Recurrent Neural Networks (RNN) can capture dependencies in time series thanks to the recurrent links in the network, endowing the algorithm with a powerful memory. We comprehensively compare the LSTM prediction algorithm used in this paper with the algorithms mentioned above, all of which are prominent in the field of prediction.
Firstly, we compared the prediction accuracy of LSTM with SVM, DNN, and RNN, as shown in Figure 4. With an increasing number of epochs, the prediction accuracy of each algorithm trends upward, and LSTM clearly performs best among them. In Figure 5a, the blue trajectory represents the actual traffic, while the green trajectory represents the traffic predicted by LSTM, making it evident that the predicted traffic trend is almost identical to the actual trend. Figure 5b shows the traffic trajectories over a period of time, intuitively comparing the predicted trajectories of the different algorithms with the actual trajectory. These results indicate that our selected algorithm has excellent performance in traffic prediction.
In order to better measure the accuracy of the prediction results, we evaluated the performance of these algorithms in terms of prediction accuracy (MAE, MRE, and RMSE). To ensure an objective evaluation, we used the same dataset for training and testing during the prediction period.
Table 2 shows that the prediction accuracy of LSTM exceeds 94%, with lower MAE, MRE, and RMSE than the other three algorithms. It should be noted that RNN has higher prediction accuracy than SVM and DNN; however, since it can only capture dependencies between short-term sequences, it is unsuitable for long-term prediction. On this basis, LSTM can learn long-term dependencies between sequences, which improves prediction accuracy.
Additionally, we analyze the computational complexity of the proposed LSTM and compare it with the other benchmark algorithms, as shown in Table 3. Comparing the algorithms by order of magnitude, SVM has an order of magnitude higher computational complexity than the other three, while LSTM, DNN, and RNN are of the same order. LSTM overcomes the gradient vanishing problem and has high prediction accuracy, resulting in better overall performance.
From the above results, SVM has lower prediction accuracy when the data fluctuate greatly; DNN can produce inaccurate predictions because some feature values are lost during training; and RNN is prone to gradient explosion due to the chain rule. As discussed in Section 4.1, LSTM overcomes the inaccurate learning of past information caused by gradient vanishing and achieves high-precision prediction of long and short time series. In addition, in [43], the authors proposed an advanced algorithm that combines LSTM with other algorithms, which also yields high-precision prediction results and is a key direction for our future research.

5.2. Simulation Setup and Results Analysis for Task Offloading Based on PSO-PG

In this section, we develop a task offloading scheme based on traffic prediction (TPO-TP) to avoid node overload and ensure the survivability of the network, adopting the PSO-PG algorithm for task offloading.
In order to evaluate the performance of the proposed scheme for different scales of edge nodes, we set the scale of edge nodes in the simulated network topology in the range of 5-10. The fiber distance between edge nodes varies randomly between 20 and 100 km. The modulation format is BPSK, and each fiber link carries 100 FSs with a capacity of 6.25 Gbps each. For all MEC servers, the computing capacity is $3 \times 10^{10}$ cycles/s. The initial workload per MEC server is randomly distributed in $[2, 5] \times 10^8$ cycles/s. The data size of a task request is random in $[1, 2] \times 10^9$ bits. Simultaneously, the computing resource demand of a task request is randomly distributed in $[4, 30] \times 10^7$ cycles.
In addition, the simulation of PSO-PG is implemented in Python 3.6 with TensorFlow. The particle size is 30, and the maximum number of iterations is 150. The initialized learning factors are $c_1 = c_2 = 1.2$, the initial inertia weight is $\omega = 0.9$, the maximum training number is Trainmax = 100, the learning rate is 0.05, and the discount factor is $\gamma = 0.95$. PSO-PG evolves in the direction of minimizing the Fitness value. As shown in Equation (21), the Fitness goal of this paper is to achieve the minimum trade-off overhead by balancing the three indicators of average delay, resource utilization, and blocking probability. Therefore, we set $\alpha$ to 200, $\beta$ to 5, and $\varepsilon$ to 15.
To verify the efficiency of the proposed scheme, we compared the following schemes in Table 4, conducting a comprehensive evaluation of the following aspects.
  • Average delay
Average delay is a key metric for evaluating performance. We comprehensively evaluate the proposed scheme in terms of both edge node scale and task request scale. As shown in Figure 6a, we fixed the number of edge nodes at eight and examined the changes in average delay under different task request scales. It can be intuitively observed that as the scale of task requests increases, the average delay of each scheme trends upward; due to the limited resources of edge nodes, more task requests mean fewer available resources per task. The average delay of PFS is considerably greater than that of the other schemes under large-scale task requests, since it only considers path distance and ignores the limited processing capacity of edge nodes, which increases the waiting delay. RFS gives priority to idle resources for task offloading; sufficient resources thus yield a relatively lower average delay, but since path distance is not taken into account during offloading, it is intolerable for some latency-sensitive services. SVM-TPO, DNN-TPO, and RNN-TPO all consider future traffic scales, achieving a certain reduction in average delay compared with the offloading schemes without prediction. TPO-TP achieves the minimum average delay among all compared schemes, since it performs optimized task offloading based on accurately predicted future traffic scales. In dynamic network environments, the proposed scheme is also more stable, as it considers network resource utility in highly dynamic networks to ensure that both task offloading and network performance constraints are met.
We further evaluated the changes in average delay under different edge node scales, presented in Figure 6b, with the number of task requests set to 400. It can be clearly observed that for PFS, which depends only on path length, the increase in available resources brings no reduction in average delay. The other schemes show a downward trend in average delay as the edge node scale expands, owing to the increase in available resources. In addition, by avoiding offloading failures caused by sharp traffic bursts on edge nodes, the schemes with a prediction mechanism perform slightly better than RFS. The performance of TPO-TP is superior to the other schemes, which can be attributed to the ability of the PSO-PG algorithm to adjust its parameters according to the network state in dynamic networks. It overcomes the challenge that, as the network state changes, resource competition between services leaves some service demands unfulfilled under limited resources.
  • Blocking probability
The blocking probability is a key indicator of QoS, with excessive delay and insufficient resources being the main causes of blocking. Similarly, the performance of the proposed scheme is evaluated in terms of task request scale and edge node scale. From Figure 7a, it can be clearly seen that the blocking probability of all schemes increases with the scale of task requests, owing to the increased probability of offloading failure under resource competition. PFS has a high blocking probability because of intense service competition for limited computing resources, whereas the other schemes, which ensure sufficient computing resources, keep the blocking probability low. Compared with RFS and PFS, the flexible offloading schemes based on global planning greatly reduce the blocking probability. With high-precision traffic prediction, TPO-TP offloads tasks to the optimal MEC server for processing; it never allocates excessive resources to satisfy latency constraints, thereby increasing network throughput and reducing the blocking probability. Compared with the flexible offloading schemes based on global planning, TPO-TP ultimately achieves a 39.6% reduction in blocking probability at the larger task request scale, which proves the effectiveness of our proposed scheme.
As shown in Figure 7b, the blocking probability of all schemes except PFS trends downward as available resources increase. PFS exhibits a higher blocking probability because it adopts the path-distance-priority principle: the increase in available resources does not reduce the blocking probability but rather increases the computational complexity as the network scale expands. All other schemes significantly reduce the blocking probability as the edge node scale expands. The schemes with traffic prediction benefit from a comprehensive consideration of future available resources from a global perspective, achieving lower blocking probabilities. The proposed TPO-TP has the lowest blocking probability among all schemes, demonstrating that it can satisfy the delay constraints of services in dynamic network states.
  • Resource utilization
We further evaluated the proposed scheme from the perspective of resource utilization, a critical criterion under the constraints of limited edge resources that reflects the percentage of used resources in the total. Figure 8a indicates the trend of resource utilization under different task request scales. The simulation results show that with accurate traffic prediction, resource utilization increases by about 12.3%, which suits the survivability guarantee of fast offloading in a highly dynamic network environment. The scheme meets service requirements while maintaining high network resource utilization, which likewise proves its rationality. In Figure 8b, TPO-TP performs better than the other schemes in the more complex network topologies, ensuring full utilization of the established links in the network. These results further reveal the superiority of the proposed scheme and verify the effectiveness and applicability of the proposed prediction-assisted offloading scheme.

6. Conclusions and Future Work

In this paper, in order to ensure the low-delay requirements of intelligent services and the survivability of 6G edge networks, we propose a task offloading scheme based on traffic prediction for node-overload protection. In this scheme, we achieve future resource visualization of the entire 6G edge network by deploying an accurate traffic prediction model on the edge nodes. By constructing a mapping between traffic prediction and future available resources, our work has achieved a network survivability guarantee while maximizing the benefits of edge resources. Furthermore, we develop a task offloading scheme based on the PSO-PG algorithm under dynamic networks that realizes timely updating of the algorithm parameters according to the real-time network state. In the offloading process, the scheme effectively realizes the joint optimization of tuning offloading decisions with routing and computing resources to match network survivability guarantees. Simulation results indicate that the proposed scheme has reached the minimum trade-off overhead in terms of average delay, blocking probability, and resource utilization, validating the effectiveness of our work.
A network survivability guarantee is a promising direction to support 6G development. Future research will focus on using deep reinforcement learning to solve the task offloading problem, which will help address the problem of algorithm dimension explosion, so that the surge in algorithm dimensions is no longer an obstacle to updating mathematical models and resource allocation for survivability assurance can be further refined. Ultimately, our goal is to develop an intelligent survivability guarantee scheme under low-latency constraints. Therefore, the key of future work is to develop a task offloading strategy based on deep reinforcement learning under precise traffic prediction, achieving fine-grained allocation of multidimensional resources and overcoming the poor performance of traditional fixed-allocation algorithms under dimension explosion.

Author Contributions

Conceptualization, Z.S. and H.Y.; methodology, Z.S., H.Y. and C.L.; software, Z.S. and Q.Y.; validation, Z.S., A.Y., J.Z. and Y.Z.; formal analysis, Z.S., S.L. and Y.L.; investigation, Z.S. and H.Y.; writing—original draft preparation, Z.S.; writing—review and editing, Z.S. and H.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This work has been supported in part by the NSFC project (62122015, 62201088, 62271075) and is supported by the joint project of the China Mobile Research Institute and the Fund of SKL of IPOC (BUPT) (IPOC2021ZT04) and supported by “the Fundamental Research Funds for the Central Universities”.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Petrov, V.; Lema, M.A.; Gapeyenko, M.; Antonakoglou, K.; Moltchanov, D.; Sardis, F.; Samuylov, A.; Andreev, S.; Koucheryavy, Y.; Dohler, M. Achieving End-to-End Reliability of Mission-Critical Traffic in Softwarized 5G Networks. IEEE J. Sel. Areas Commun. 2018, 36, 485–501. [Google Scholar] [CrossRef]
  2. Spantideas, S.; Giannopoulos, A.; Cambeiro, M.A.; Trullols-Cruces, O.; Atxutegi, E.; Trakadas, P. Intelligent Mission Critical Services over Beyond 5G Networks: Control Loop and Proactive Overload Detection. In Proceedings of the 2023 International Conference on Smart Applications, Communications and Networking (SmartNets), Istanbul, Turkiye, 25–27 July 2023; pp. 1–6. [Google Scholar] [CrossRef]
  3. Skarin, P.; Tärneberg, W.; Årzen, K.-E.; Kihl, M. Towards Mission-Critical Control at the Edge and Over 5G. In Proceedings of the 2018 IEEE International Conference on Edge Computing (EDGE), San Francisco, CA, USA, 2–7 July 2018; pp. 50–57. [Google Scholar] [CrossRef]
  4. Sun, Z.; Yang, H.; Li, C.; Yao, Q.; Wang, D.; Zhang, J.; Vasilakos, A.V. Cloud-Edge Collaboration in Industrial Internet of Things: A Joint Offloading Scheme Based on Resource Prediction. IEEE Internet Things J. 2022, 9, 17014–17025. [Google Scholar] [CrossRef]
  5. Wu, D.; Han, X.; Yang, Z.; Wang, R. Exploiting Transfer Learning for Emotion Recognition Under Cloud-Edge-Client Collaborations. IEEE J. Sel. Areas Commun. 2021, 39, 479–490. [Google Scholar] [CrossRef]
  6. Lin, Z.; Lin, M.; Champagne, B.; Zhu, W.-P.; Al-Dhahir, N. Secrecy-Energy Efficient Hybrid Beamforming for Satellite-Terrestrial Integrated Networks. IEEE Trans. Commun. 2021, 69, 6345–6360. [Google Scholar] [CrossRef]
  7. Yang, H.; Yao, Q.; Bao, B.; Yu, A.; Zhang, J.; Vasilakos, A.V. Multi-associated parameters aggregation-based routing and resources allocation in multi-core elastic optical networks. IEEE/ACM Trans. Netw. 2022, 30, 2145–2157. [Google Scholar] [CrossRef]
  8. Yao, Q.; Yang, H.; Li, C.; Bao, B.; Zhang, J.; Cheriet, M. Federated Transfer Learning Framework for Heterogeneous Edge IoT Networks. China Commun. [CrossRef]
  9. Lin, Z.; Niu, H.; An, K.; Hu, Y.; Li, D.; Wang, J.; Al-Dhahir, N. Pain Without Gain: Destructive Beamforming From a Malicious RIS Perspective in IoT Networks. IEEE Internet Things J. 2023. [Google Scholar] [CrossRef]
  10. Yu, T.; Yang, H.; Yao, Q.; Yu, A.; Zhao, Y.; Liu, S.; Li, Y.; Zhang, J.; Cheriet, M. Multi visual GRU based survivable computing power scheduling in metro optical networks. IEEE Trans. Netw. Serv. Manag. 2023. [Google Scholar] [CrossRef]
  11. Wu, D.; Bao, R.; Li, Z.; Wang, H.; Zhang, H.; Wang, R. Edge-Cloud Collaboration Enabled Video Service Enhancement: A Hybrid Human-Artificial Intelligence Scheme. IEEE Trans. Multimed. 2021, 23, 2208–2221. [Google Scholar] [CrossRef]
  12. An, K.; Lin, M.; Ouyang, J.; Zhu, W.-P. Secure Transmission in Cognitive Satellite Terrestrial Networks. IEEE J. Sel. Areas Commun. 2016, 34, 3025–3037. [Google Scholar] [CrossRef]
  13. Ma, R.; Yang, W.; Shi, H.; Lu, X.; Liu, J. Covert communication with a spectrum sharing relay in the finite blocklength regime. China Commun. 2023, 20, 195–211. [Google Scholar] [CrossRef]
  14. Guo, C.; He, W.; Li, G.Y. Optimal Fairness-Aware Resource Supply and Demand Management for Mobile Edge Computing. IEEE Wirel. Commun. Lett. 2021, 10, 678–682. [Google Scholar] [CrossRef]
  15. Gunawardena, J. Learning Outside the Brain: Integrating Cognitive Science and Systems Biology. Proc. IEEE 2022, 110, 590–612. [Google Scholar] [CrossRef]
  16. Shi, C.; Ding, L.; Wang, F.; Salous, S.; Zhou, J. Joint Target Assignment and Resource Optimization Framework for Multitarget Tracking in Phased Array Radar Network. IEEE Syst. J. 2021, 15, 4379–4390. [Google Scholar] [CrossRef]
  17. Liu, J.; Zhang, Q. Offloading schemes in mobile edge computing for ultra-reliable low latency communications. IEEE Access 2018, 6, 12825–12837. [Google Scholar] [CrossRef]
  18. Chen, L.; Xu, J.; Zhou, S. Computation peer offloading in mobile edge computing with energy budgets. In Proceedings of the GLOBECOM 2017—2017 IEEE Global Communications Conference, Singapore, 4–8 December 2017; pp. 1–6. [Google Scholar]
  19. Sun, X.; Ansari, N. Latency aware workload offloading in the cloudlet network. IEEE Commun. Lett. 2017, 21, 1481–1484. [Google Scholar] [CrossRef]
  20. Li, J.; Luo, G.; Cheng, N.; Yuan, Q.; Wu, Z.; Gao, S.; Liu, Z. An end-to-end load balancer based on deep learning for vehicular network traffic control. IEEE Int. Things J. 2019, 6, 953–966. [Google Scholar] [CrossRef]
21. Taleb, T.; Ksentini, A.; Frangoudis, P. Follow-me cloud: When cloud services follow mobile users. IEEE Trans. Cloud Comput. 2019, 7, 369–382.
22. Wang, S.; Urgaonkar, R.; Zafer, M.; He, T.; Chan, K.; Leung, K.K. Dynamic Service Migration in Mobile Edge Computing Based on Markov Decision Process. IEEE/ACM Trans. Netw. 2019, 27, 1272–1288.
23. He, Y.; Zhao, N.; Yin, H. Integrated networking, caching, and computing for connected vehicles: A deep reinforcement learning approach. IEEE Trans. Veh. Technol. 2018, 67, 44–55.
24. Chen, Z.; Wang, X. Decentralized computation offloading for multi-user mobile edge computing: A deep reinforcement learning approach. arXiv 2018, arXiv:1812.07394.
25. Jitani, A.; Mahajan, A.; Zhu, Z.; Abou-Zeid, H.; Fapi, E.T.; Purmehdi, H. Structure-Aware Reinforcement Learning for Node-Overload Protection in Mobile Edge Computing. IEEE Trans. Cogn. Commun. Netw. 2022, 8, 1881–1897.
26. Fang, Z.; Xu, X.; Dai, F.; Qi, L.; Zhang, X.; Dou, W. Computation Offloading and Content Caching with Traffic Flow Prediction for Internet of Vehicles in Edge Computing. In Proceedings of the 2020 IEEE International Conference on Web Services (ICWS), Beijing, China, 19–23 October 2020; pp. 380–388.
27. Tian, H.; Xu, X.; Qi, L.; Zhang, X.; Dou, W.; Yu, S.; Ni, Q. CoPace: Edge Computation Offloading and Caching for Self-Driving With Deep Reinforcement Learning. IEEE Trans. Veh. Technol. 2021, 70, 13281–13293.
28. Xu, X.; Jiang, Q.; Zhang, P.; Cao, X.; Khosravi, M.R.; Alex, L.T.; Qi, L.; Dou, W. Game Theory for Distributed IoV Task Offloading With Fuzzy Neural Network in Edge Computing. IEEE Trans. Fuzzy Syst. 2022, 30, 4593–4604.
29. Ferdosian, N.; Moazzeni, S.; Jaisudthi, P.; Ren, Y.; Agrawal, H.; Simeonidou, D.; Nejabat, R. Autonomous Intelligent VNF Profiling for Future Intelligent Network Orchestration. IEEE Trans. Mach. Learn. Commun. Netw. 2023, 1, 138–152.
30. Nagib, A.M.; Abou-zeid, H.; Hassanein, H.S. Toward Safe and Accelerated Deep Reinforcement Learning for Next-Generation Wireless Networks. IEEE Netw. 2023, 37, 182–189.
31. Prasanna Kumar, G.; Shankaraiah, N. An Efficient IoT-based Ubiquitous Networking Service for Smart Cities Using Machine Learning Based Regression Algorithm. Int. J. Inf. Technol. Comput. Sci. (IJITCS) 2023, 15, 15–25.
32. Yang, H.; Zhao, X.; Yao, Q.; Yu, A.; Zhang, J.; Ji, Y. Accurate fault location using deep neural evolution network in cloud data center interconnection. IEEE Trans. Cloud Comput. 2022, 10, 1402–1412.
33. Li, C.; Yang, H.; Sun, Z.; Yao, Q.; Bao, B.; Zhang, J.; Vasilakos, A.V. Federated hierarchical trust-based interaction scheme for cross-domain industrial IoT. IEEE Internet Things J. 2023, 10, 447–457.
34. Yang, Z.; Yao, Y.; Gao, H.; Wang, J.; Mi, N.; Sheng, B. New YARN Non-Exclusive Resource Management Scheme through Opportunistic Idle Resource Assignment. IEEE Trans. Cloud Comput. 2021, 9, 696–709.
35. Eramo, V.; Lavacca, F.G.; Catena, T.; Perez Salazar, J.P. Application of a Long Short Term Memory neural predictor with asymmetric loss function for the resource allocation in NFV network architectures. Comput. Netw. 2021, 193, 108104–108116.
36. Feriani, A.; Hossain, E. Single and Multi-Agent Deep Reinforcement Learning for AI-Enabled Wireless Networks: A Tutorial. IEEE Commun. Surv. Tutor. 2021, 23, 1226–1252.
37. Giannopoulos, A.; Spantideas, S.; Tsinos, C.; Trakadas, P. Power Control in 5G Heterogeneous Cells Considering User Demands Using Deep Reinforcement Learning. In Proceedings of the IFIP International Conference on Artificial Intelligence Applications and Innovations, Crete, Greece, 25–27 June 2021; Springer: Cham, Switzerland, 2021; Volume 628.
38. Liu, C.H.; Chen, Z.; Tang, J.; Xu, J.; Piao, C. Energy-efficient UAV control for effective and fair communication coverage: A deep reinforcement learning approach. IEEE J. Sel. Areas Commun. 2018, 36, 2059–2070.
39. Mur, D.C.; Gavras, A.; Ghoraishi, M.; Hrasnica, H.; Kaloxylos, A. (Eds.) AI and ML–Enablers for beyond 5G Networks; White Paper; 5G PPP Technology Board: Kista, Sweden, 2021.
40. Yang, H.; Yao, Q.; Yu, A.; Lee, Y.; Zhang, J. Resource assignment based on dynamic fuzzy clustering in elastic optical networks with multi-core fibers. IEEE Trans. Commun. 2019, 67, 3457–3469.
41. Yu, A.; Yang, H.; Feng, C.; Li, Y.; Zhao, Y.; Cheriet, M.; Vasilakos, A.V. Socially-aware traffic scheduling for edge-assisted metaverse by deep reinforcement learning. IEEE Netw. 2023.
42. Li, C.; Yang, H.; Sun, Z.; Yao, Q.; Zhang, J.; Yu, A.; Vasilakos, A.V.; Liu, S.; Li, Y. High-Precision Cluster Federated Learning for Smart Home: An Edge-Cloud Collaboration Approach. IEEE Access 2023.
43. Eramo, V.; Catena, T. Application of an Innovative Convolutional/LSTM Neural Network for Computing Resource Allocation in NFV Network Architectures. IEEE Trans. Netw. Serv. Manag. 2022, 19, 2929–2943.
Figure 1. The edge collaboration based on traffic prediction for node-overload protection in 6G edge networks.
Figure 2. The process of traffic prediction.
Figure 3. The interaction process of the PSO-PG algorithm.
Figure 4. The comparison of prediction accuracy.
Figure 5. The comparison between real traffic and predicted traffic. (a) Comparison between real traffic and predicted traffic by LSTM; (b) Comparison between real traffic and predicted traffic using different algorithms.
Figure 6. (a) Comparison of average delay under different task request scales; (b) Comparison of average delay under different edge node scales.
Figure 7. (a) Comparison of blocking probability under different task request scales; (b) Comparison of blocking probability under different edge node scales.
Figure 8. (a) Comparison of resource utilization under different task request scales; (b) Comparison of resource utilization under different edge node scales.
Table 1. Symbols and their meanings.

Symbol | Meaning
M_L | The local MEC server
P_i | The task set
E_m | The candidate destination node set
Tr_i | The transmission delay
Tc_i | The processing delay
T_i | The total delay
R | The transmission rate
δ_m | The limit ratio of the maximum computing capacity
C_k | The computing capacity of the MEC server
λ_i,m | Whether task i is offloaded to the migrated server m
T_i,max | The delay threshold of task i
d_i | The data size of task i
L_i(a_o, a_d) | Whether task i uses the link (a_o, a_d)
D_i,m | The number of frequency slots allocated for transmitting task i
fs_i(a_o, a_d) | The frequency slot fs in the optical path
V_i,m | The computing capacity allocated to task i by server m
c_i | The computing capacity required to process task i
D_FS | The capacity of a frequency slot of the optical path
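Several of the symbols above combine into the task-delay constraint used throughout the paper. The relations below are a hedged reconstruction from the Table 1 definitions alone, assuming the standard MEC delay decomposition; the paper's own system-model equations may differ in detail.

```latex
% Assumption-based sketch of the delay relations suggested by Table 1;
% not the paper's verbatim system model.
Tr_i = \frac{d_i}{R}, \qquad
Tc_i = \frac{c_i}{V_{i,m}}, \qquad
T_i = Tr_i + Tc_i \le T_{i,\max}
```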
Table 2. Detailed performance comparison of the prediction algorithms.

Algorithm | Accuracy (%) | MAE | MRE (%) | RMSE
LSTM | 94.2 | 0.21 | 3.24 | 0.27
SVM | 84.9 | 0.46 | 6.83 | 0.59
DNN | 86.4 | 0.35 | 4.35 | 0.42
RNN | 93.4 | 0.26 | 3.96 | 0.31
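For reference, the error metrics in Table 2 can be computed from a real and a predicted traffic trace. This is a minimal sketch assuming the standard definitions of MAE, MRE, and RMSE; the paper's exact accuracy definition is not restated here, and the sample arrays are hypothetical.

```python
# Minimal sketch of the Table 2 error metrics, assuming their standard
# definitions; the traffic arrays below are hypothetical illustrations.
import numpy as np

def prediction_metrics(real: np.ndarray, predicted: np.ndarray) -> dict:
    """Return MAE, MRE (%), and RMSE between real and predicted traffic."""
    err = predicted - real
    mae = float(np.mean(np.abs(err)))                       # mean absolute error
    mre = float(np.mean(np.abs(err) / np.abs(real)) * 100)  # mean relative error (%)
    rmse = float(np.sqrt(np.mean(err ** 2)))                # root-mean-square error
    return {"MAE": mae, "MRE(%)": mre, "RMSE": rmse}

# Hypothetical traffic trace and one predictor's output.
real = np.array([5.0, 6.2, 7.1, 6.8, 8.0])
lstm_pred = np.array([5.1, 6.0, 7.3, 6.9, 7.8])
print(prediction_metrics(real, lstm_pred))
```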
Table 3. Order-of-magnitude comparison of the algorithms' computational complexity.

Algorithm | Computational Complexity
LSTM | O[4(nm + n^2 + n)], n: hidden size, m: input size
SVM | O[N_sv^3], N_sv: number of support vectors
DNN | O[8nd^2], n: input size, d: vector dimension
RNN | O[nd^2], n: input size, d: vector dimension
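Since the entries in Table 3 are closed-form operation counts, they can be evaluated directly for any layer size. A minimal sketch follows; the sizes n, m, d, and n_sv are hypothetical examples chosen only for illustration, not the configurations used in the paper's experiments.

```python
# Minimal sketch evaluating the closed-form operation counts from Table 3
# for hypothetical layer sizes.
def lstm_ops(n: int, m: int) -> int:
    """LSTM per step: 4 gates, each with input (n*m) and recurrent (n*n) weights plus bias (n)."""
    return 4 * (n * m + n ** 2 + n)

def svm_ops(n_sv: int) -> int:
    """SVM dominant term, cubic in the number of support vectors."""
    return n_sv ** 3

def dnn_ops(n: int, d: int) -> int:
    return 8 * n * d ** 2

def rnn_ops(n: int, d: int) -> int:
    return n * d ** 2

if __name__ == "__main__":
    print("LSTM:", lstm_ops(n=64, m=32))
    print("SVM :", svm_ops(n_sv=200))
    print("DNN :", dnn_ops(n=32, d=64))
    print("RNN :", rnn_ops(n=32, d=64))
```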
Table 4. Description of the comparison schemes.

Scheme | Description
PFS | Task offloading according to the path-priority principle, without prior traffic prediction.
RFS | Task offloading according to the resource-priority principle, without prior traffic prediction.
SVM-TPO | Task offloading based on SVM traffic-prediction results.
DNN-TPO | Task offloading based on DNN traffic-prediction results.
RNN-TPO | Task offloading based on RNN traffic-prediction results.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
