Article

Mobility-Aware Task Offloading and Resource Allocation in UAV-Assisted Vehicular Edge Computing Networks

1 School of Computer Science and Engineering, Southeast University, Nanjing 211189, China
2 Key Laboratory of New Generation Artificial Intelligence Technology and Its Interdisciplinary Applications (Southeast University), Ministry of Education, Nanjing 211189, China
* Author to whom correspondence should be addressed.
Drones 2024, 8(11), 696; https://doi.org/10.3390/drones8110696
Submission received: 16 October 2024 / Revised: 12 November 2024 / Accepted: 19 November 2024 / Published: 20 November 2024
(This article belongs to the Special Issue UAV-Assisted Intelligent Vehicular Networks 2nd Edition)

Abstract

The rapid development of the Internet of Vehicles (IoV) and intelligent transportation systems has led to increased demand for real-time data processing and computation in vehicular networks. To address these needs, this paper proposes a task offloading framework for UAV-assisted Vehicular Edge Computing (VEC) systems that accounts for the high mobility of vehicles and the limited coverage and computational capacities of drones. We introduce the Mobility-Aware Vehicular Task Offloading (MAVTO) algorithm, designed to optimize task offloading decisions, manage resource allocation, and predict vehicle positions for seamless offloading. MAVTO leverages container-based virtualization for efficient computation and offers flexible resource allocation across multiple offloading modes: direct, predictive, and hybrid. Extensive experiments using real-world vehicular data demonstrate that MAVTO significantly outperforms other methods in terms of task completion success rate, especially under varying task data volumes and deadlines.

1. Introduction

The integration of Unmanned Aerial Vehicles (UAVs) into Vehicular Edge Computing (VEC) represents an innovative approach to enhancing the quality of service (QoS) within the Internet of Vehicles (IoV) landscape [1]. With UAV technology reaching higher maturity levels, drones can now carry communication and computational capabilities, enabling them to serve as mobile cellular base stations. UAVs can be deployed on demand to areas with high service demand to meet low-latency requirements while easing the burden on vehicular terminals. In regions with challenging terrain or insufficient ground infrastructure, UAVs provide an efficient and cost-effective option for task assistance. Unlike offloading via a Roadside Unit (RSU), UAV deployment avoids substantial setup costs and offers greater flexibility. In addition, by circumventing remote cloud computing centers, UAVs help minimize transmission delays. This paradigm, in which vehicles offload latency-critical tasks to nearby UAVs, significantly boosts the computational capacity available to vehicles.
In this paper, we consider the problem of offloading tasks from bi-directionally moving vehicles in UAV-assisted VEC. As depicted in Figure 1, the system comprises a vehicle layer and a UAV edge layer. We suppose that UAV/vehicle and UAV/UAV pairs can connect stably via dedicated U2V and U2U communication links. The edge layer consists of multiple drones hovering above the roadway. Each UAV node is equipped with lightweight communication and computing capabilities, and its coverage is limited. A single vehicle may be within the range of one or more drones. Vehicles transmit and offload computing tasks to the connected drones. Vehicles within a drone's coverage area can move in different directions and may traverse several drone coverage zones while their tasks execute. Consequently, the offloading node for a vehicle's task is not limited to the currently connected drone node; any drone node encountered before the task deadline qualifies as a potential offloading node. The primary challenge is to assign the most appropriate node for task offloading while considering factors such as task transmission cost, drone computing capability, and the sequence of tasks awaiting processing, with the goal of enhancing the quality of service.
Offloading tasks from bi-directionally moving vehicles is challenging: (a) Vehicles moving in both directions generate diverse offloading tasks that vary in urgency, data volume, and resource needs, making it difficult to prioritize tasks and establish an optimal offloading sequence. This is further complicated by the constant emergence of new tasks as vehicles move, which requires dynamic adjustments based on vehicle speed, proximity, and system load. (b) In bi-directional traffic environments, vehicles are constantly moving and often cross the boundaries of different drones' coverage zones during the task execution cycle. This movement makes it difficult to determine which drone should handle a specific task: a single drone may not be able to serve a vehicle throughout its journey, as the vehicle may quickly exit the drone's operating range. Additionally, balancing the load among multiple drones and managing their limited computational capacity, while still ensuring that offloading does not overload any single drone, requires sophisticated task scheduling and resource allocation mechanisms. (c) The computing power and storage capacity of drones in the connected-vehicle network are limited, requiring a flexible and efficient resource allocation approach to meet the needs of different tasks. Compared with heavyweight virtual machines (VMs), container-based virtualization is better suited to computation offloading in the connected-vehicle network. Containers offer fast startup, low hardware overhead, and secure resource isolation, providing high flexibility in platform management and improving the task execution efficiency of UAV servers.
To address the challenges above, we propose a heuristic scheduling algorithm, MAVTO (Mobility-Aware Vehicular Task Offloading), for the problem under study. The main contributions are as follows:
  • Task Offloading Framework: The article proposes a novel task offloading and resource allocation framework for UAV-assisted VEC systems. This framework considers the mobility of vehicles, heterogeneous task requirements, and limited coverage of drones. It makes vehicle position predictions based on their motion information in order to optimize task offloading and resource allocation.
  • MAVTO Algorithm: The article introduces the Mobility Aware Vehicular Task Offloading (MAVTO) algorithm, which dynamically adjusts task offloading decisions based on vehicle speed, task deadlines, and computational resources available in the UAV servers. MAVTO uses container-based virtualization to efficiently manage resources and improve task execution performance.
  • Offloading Modes: The framework includes multiple offloading modes (direct, predictive, and hybrid), which provide flexibility in offloading tasks based on vehicle movement. The hybrid mode maximizes task success rates by selecting the optimal UAV nodes that vehicles will pass through within task deadlines.
The remainder of this paper is organized as follows. Section 2 reviews the existing literature on the problem under study. Section 3 presents the system overview and models. The MAVTO algorithm is proposed in Section 4. Numerical results are given in Section 5, followed by the conclusion in Section 6.

2. Related Work

Task offloading within VEC contexts has received considerable attention. However, most existing work focuses on traditional static settings, in which edge servers located at roadside units (RSUs) act as edge nodes and collaborate with cloud computing servers to provide computing services for tasks. Such approaches rely on stationary RSUs or servers for offloading, which limits flexibility, especially in environments with highly dynamic vehicle movement. In contrast, UAV-assisted offloading offers mobility, flexible deployment, and the ability to cover areas with insufficient infrastructure, advantages that are particularly beneficial in high-mobility scenarios such as vehicular networks.
In a traditional VEC environment, Bute et al. [2] design a task offloading scheme in which vehicles can transfer their tasks to other vehicles or the MEC server via the vehicle-to-network (V2N) link. Cui et al. [3] propose a task offloading framework capable of dynamically managing network and computing resources. Liu et al. [4] present a mobility prediction model aimed at sustaining stable task offloading performance in a highly dynamic vehicular network. Xue et al. [5] formulate a framework for the VEC task offloading and service caching scenario that minimizes the cumulative average task processing delay. Wang et al. [6] propose an imitation-learning-enabled task scheduling algorithm for online VEC. Bahreini et al. [7] design an energy-aware resource management framework for VEC systems that can share and coordinate computing resources among connected vehicles. Fan et al. [8] leverage RSU-to-RSU cooperation for load balancing, so that a high-load edge server can further offload tasks to other low-load edge servers. Ernest et al. [9] propose a multi-agent deep reinforcement learning (MADRL)-based energy efficiency maximization algorithm to attain maximum energy efficiency in the MEC-enabled IoV network. Some studies consider the mobility of vehicles. You et al. [10] consider an energy-efficient coordinated offloading architecture for vehicular networks based on edge computing under vehicle mobility; this scheme decomposes tasks into multiple subtasks and offloads them to different roadside units. Tan et al. [11] consider an edge computing architecture for vehicle mobility scenarios and propose a decentralized task offloading strategy aiming to minimize the overall task response time, studying task offloading under both overlapping and non-overlapping RSU coverage architectures.
Few works have recently focused on task offloading in UAV-assisted VEC networks. For example, Dai et al. [12] developed a UAV-assisted offloading system where UAVs help manage the load of congested RSUs instead of directly servicing individual vehicles. Samir et al. [13] employed deep reinforcement learning (DRL) to devise a strategy for determining optimal paths for a UAV swarm to enhance vehicular coverage. He et al. [14] constructed a UAV-aided VANETs framework by integrating MEC selection, resource allocation, and task offloading, aiming to optimize task processing efficiency. Qi et al. [15] introduced a comprehensive optimization approach for content placement, spectrum usage, link pairing, and power management in UAV-supported vehicular networks to maximize long-term energy efficiency. Alkubeily et al. [16] crafted a schematic for enhancing UAV communication with VANETs to cater to the maximum number of ground vehicles. Yang et al. [17] suggested a learning-based channel allocation and task offloading method in temporary UAV-assisted VEC networks to boost data collection at reduced service costs. Li et al. [18] focused on a combined optimization of UAV routes and computing resources to curtail energy consumption in the UAV-assisted VEC offloading context. Liwang et al. [19] analyzed graph-based intensive task scheduling in air-to-ground integrated vehicle networks, aiming to jointly refine the mapping of task components to vehicles and the transmission power of UAVs.
The studies discussed above do not explore horizontal collaboration among multiple drones during the task offloading process. By utilizing UAVs in this way, the proposed MAVTO can dynamically adapt to changes in traffic patterns by repositioning UAVs or offloading tasks to UAVs closer to moving vehicles, thereby reducing latency and improving task success rates.

3. System Model and Problem Formulation

3.1. System Model

The scenario considered in this paper is a two-way road, as shown in Figure 1. The notation used in this paper is listed in Table 1. A set of UAVs equipped with edge servers, denoted by $\mathcal{E} = \{1, \dots, e, \dots, E\}$, is uniformly stationed above this road. Each UAV has a certain communication coverage range; coverage ranges may overlap, and a vehicle may be covered by one or more UAVs. A UAV $e \in \mathcal{E}$ is characterized by a quadruple $e = (R_e, P_e, r, L_e)$, where $R_e$ is the transmission rate of the U2V wireless link and $P_e$ is the CPU processing speed of the UAV server. We project each UAV onto the center line of the road: $r$ is the radius of the UAV's coverage area and $L_e = (x_e, 0)$ is the location of the UAV. The set of vehicles is $\mathcal{V} = \{1, \dots, v, \dots, V\}$, with the location of each vehicle $v$ given by $L_v = (x_v, y_v)$. Connectivity is possible only if a UAV covers the vehicle, specifically when the Euclidean distance between them satisfies $\|L_e - L_v\| \le r$. Tasks are resource-intensive and generated by vehicles. Let $K$ be the set of tasks generated at the current time. A task $k \in K$ generated by vehicle $v$ is characterized by $k = (v, I_k, O_k, C_k, T_k^{max})$, representing the generating vehicle, input data volume, output data volume, computational workload, and deadline constraint, respectively.
In this study, it is assumed that vehicles travel at constant speed and direction. UAVs communicate by wireless links. Tasks that the vehicle terminal is unable to process must be offloaded via U2V links to the UAV layer for execution. Each UAV is equipped with a UAV server, but computing resources are limited. To optimize resource use and ensure effective load distribution among servers, it is possible to transfer task data between UAVs, thus allowing for computational tasks to be offloaded to and processed by other UAV servers. Consequently, each UAV server may manage tasks from directly connected UAVs as well as those offloaded by others. This scheme accounts for both vehicle mobility and server load status. As vehicles move, they consistently connect to varying UAVs. By considering multiple UAV servers as candidates for task offloading, the range of options for vehicles increases, mitigating the issue of overloading certain nodes. Integrating load balancing with task offloading serves to enhance system effectiveness.
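To make the notation concrete, the following Python sketch mirrors the tuples defined in Section 3.1. All class and field names are illustrative (they are not taken from the paper or its released code); only the relationships come from the definitions above.

```python
from dataclasses import dataclass
from math import hypot

@dataclass
class UAV:                      # e = (R_e, P_e, r, L_e)
    uplink_rate: float          # R_e: U2V transmission rate
    cpu_speed: float            # P_e: processing speed of the UAV server
    radius: float               # r: coverage radius
    x: float                    # L_e = (x_e, 0), projected onto the road's center line

@dataclass
class Vehicle:                  # v with location L_v = (x_v, y_v)
    x: float
    y: float
    speed: float                # vs_v, assumed constant
    direction: str              # vd_v, "EAST" or "WEST"

@dataclass
class Task:                     # k = (v, I_k, O_k, C_k, T_k^max)
    vehicle: Vehicle
    input_mb: float             # I_k
    output_mb: float            # O_k
    workload: float             # C_k, computational workload
    deadline: float             # T_k^max

def covered(uav: UAV, veh: Vehicle) -> bool:
    """Connectivity condition ||L_e - L_v|| <= r."""
    return hypot(uav.x - veh.x, veh.y) <= uav.radius
```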

3.2. Computing Model

The vehicles offload tasks to the UAV server for execution to reduce execution time and complete tasks as quickly as possible. However, this also results in an additional transmission time being used to transmit task data. The task computing process in UAV-assisted vehicular networks mainly includes the following stages:
  • Task uploading: Vehicles upload tasks to the currently accessed UAV server via U2V upload links.
  • Task transmission: When the computing power of the UAV node currently in use is insufficient, or when numerous tasks are queued and the load burden is significant, it is advisable to offload the task to another UAV node that is not currently busy. To optimize task execution and waiting times and to alleviate the load on the current UAV node, the task is transmitted across the UAVs' wireless links to the target UAV node for processing.
  • Task execution: Once the task is uploaded to the destination UAV node, it is executed on the UAV server to retrieve the computational result. The waiting time for tasks should be taken into account.
  • Result downloading: When the vehicle is within the UAV coverage range and the task has been calculated, the UAV server transmits the calculation result back to the vehicle through the download links.
(1) Task uploading
Let $R_e$ be the transmission rate of the U2V wireless link and $I_k$ the input data volume of task $k$. The uploading time $T_{k,e}^{up}$ for task $k$, generated by vehicle $v$, to be uploaded to UAV server $e$ is calculated as follows:
$$T_{k,e}^{up} = \frac{I_k}{R_e} \tag{1}$$
(2) Task transmission
Both the input data and the output data of task $k$ can be transmitted between UAV nodes $e_1$ and $e_2$. Let $hops$ be the number of hops traversed by the task between $e_1$ and $e_2$, and $W$ the transmission rate of the U2U wireless link between UAV nodes. The transmission time of the task input or output data is defined as follows:
$$T_{k,e_1,e_2}^{trans} = \frac{hops \times (I_k \ \text{or} \ O_k)}{W} \tag{2}$$
(3) Task execution
The time required for a task to be processed on the UAV server includes three parts: the time it takes for the container instance to start, the time waiting in the task queue, and the time for actual computation. The container instance start time includes extracting the container image file and launching the container instance. In the problem studied in this article, it is assumed that the required container image files are all cached in the UAV node and do not need to be retrieved from a remote cloud. Therefore, only the time to launch the container instance is considered.
The UAV servers are heterogeneous, and each server hosts multiple container instances that can execute tasks after startup. Let $S_e = \{1, \dots, s, \dots, S\}$ be the set of containers of UAV server $e$, where the containers are heterogeneous with different startup times $\theta_s$ and processing speeds $P_s$. Let $\varpi(s)$ be the decision variable indicating whether container $s$ has been started, and $\xi_{k,s}$ the decision variable indicating whether task $k$ is assigned to container $s$:
$$\varpi(s) = \begin{cases} 1, & \text{container } s \text{ has been started} \\ 0, & \text{otherwise} \end{cases} \tag{3}$$
$$\xi_{k,s} = \begin{cases} 1, & \text{task } k \text{ is assigned to container } s \\ 0, & \text{otherwise} \end{cases} \tag{4}$$
Tasks can only be processed by the UAV server after their data have been uploaded to the server and the container instance has been started. If the container is already running, it may be executing other tasks, so the current task may need to wait. Let $h_s$ denote the number of tasks to be processed on container $s$, and let $\Psi_s = (\psi_1^s, \dots, \psi_{h_s}^s)$ denote their processing order. The starting execution time of task $\psi_p^s$ on container $s$ is then given by
$$ST_{\psi_p^s, s} = \max\{T_{\psi_p^s, e}^{up}, EAT_s\} \tag{5}$$
where $EAT_s$ is the earliest available time of container $s$. If the container has not been started, it equals the startup time $\theta_s$; if the container has already been started, it equals the completion time of the last task queued before the current task on that container:
$$EAT_s = \begin{cases} \theta_s, & \varpi(s) = 0 \\ T_{\psi_{h_s}^s, s}, & \varpi(s) = 1 \end{cases} \tag{6}$$
where $T_{\psi_{h_s}^s, s}$ is the completion time of task $\psi_{h_s}^s$ on container $s$.
The computation time and completion time of task $k$ on container $s$ of UAV node $e$ are as follows:
$$T_{k,s}^{com} = \frac{C_k}{P_s} \tag{7}$$
$$T_{k,e}^{com} = \sum_{s \in S_e} \xi_{k,s} T_{k,s}^{com} \tag{8}$$
$$T_{k,e} = ST_{k,s} + T_{k,e}^{com} \tag{9}$$
(4) Result downloading
Let $O_k$ be the output data volume of task $k$. The time required to transmit the computation result from UAV node $e$ to the vehicle via the U2V wireless link is defined as follows:
$$T_{k,e}^{down} = \frac{O_k}{R_e} \tag{10}$$
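The per-stage times above translate directly into a few helper functions. The following is a minimal sketch with illustrative names; container queueing is handled separately via the earliest available time, as in Equation (5).

```python
def upload_time(input_mb: float, rate_u2v: float) -> float:
    """T_up = I_k / R_e."""
    return input_mb / rate_u2v

def transfer_time(data_mb: float, hops: int, rate_u2u: float) -> float:
    """T_trans = hops * data / W, for either input or output data."""
    return hops * data_mb / rate_u2u

def execution_time(workload: float, container_speed: float) -> float:
    """T_com = C_k / P_s."""
    return workload / container_speed

def download_time(output_mb: float, rate_u2v: float) -> float:
    """T_down = O_k / R_e."""
    return output_mb / rate_u2v

def start_time(upload_finish: float, earliest_available: float) -> float:
    """ST = max(T_up, EAT_s): a task starts once its data and the container are both ready."""
    return max(upload_finish, earliest_available)
```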

3.3. Offloading Model

When tasks are offloaded to UAV nodes for execution, the task data need to be transferred to the node. There are three ways to offload tasks from vehicles to UAV nodes: direct offloading mode, predictive offloading mode, and hybrid offloading mode. The details of these three modes are discussed below.

3.3.1. Direct Offloading Model

When vehicle $v$ is within the coverage range of the UAV hosting server $e_1$, the vehicle offloads task $k$ to UAV server $e_1$ over the U2V wireless link. While server $e_1$ is executing the computation task, the vehicle keeps moving at high speed and may have left that UAV's coverage range by the time execution completes, entering the coverage range of the UAV hosting server $e_2$. The computed result then needs to be transmitted from server $e_1$ to server $e_2$ over the U2U wireless link between UAVs. This situation is depicted in Figure 2. The completion time of the task is computed as follows:
$$T_{k,e_1} = T_{k,e_1}^{up} + T_{k,e_1}^{com} + T_{k,e_1,e_2}^{trans} + T_{k,e_2}^{down} \tag{11}$$
where $T_{k,e_1}^{up}$ is the time for task $k$ to be offloaded to UAV server $e_1$ via the U2V wireless link, $T_{k,e_1}^{com}$ is the task computation time, $T_{k,e_1,e_2}^{trans}$ is the time to transmit the computed result between UAVs, and $T_{k,e_2}^{down}$ is the time to transmit the computed result back to the vehicle via the U2V wireless link.

3.3.2. Predictive Offloading Model

When vehicle $v$ is within the coverage range of the UAV hosting server $e_1$, the vehicle transmits task $k$ to UAV server $e_1$ over the U2V wireless link. Based on the vehicle's motion-sensing information, including its current speed and direction, the edge server estimates the task completion time and predicts the location at which the vehicle will arrive. The task is then offloaded in advance to the server $e_2$ at the predicted location for execution. When the vehicle enters the coverage range of server $e_2$, the task computation result is obtained directly. This scenario is depicted in Figure 3. The total completion time of the task, $T_{k,e_2}$, is calculated as follows:
$$T_{k,e_2} = T_{k,e_1}^{up} + T_{k,e_1,e_2}^{trans} + T_{k,e_2}^{com} + T_{k,e_2}^{down} \tag{12}$$

3.3.3. Hybrid Offloading Model

Based on the vehicle's motion-sensing information, including its current speed and direction, the edge server predicts the vehicle's position over time and the UAVs it will pass through before the deadline. The servers reachable within the deadline are considered candidate offloading nodes for the task, and one server $e_2$ is selected as the offloading node for task computation. The vehicle transmits the task through the currently connected UAV node $e_1$; after the computation is completed, the result is transmitted to UAV node $e_3$, which the vehicle is connected to at that time, and is returned to the vehicle through $e_3$. The scenario is illustrated in Figure 4. The total completion time of the task, $T_{k,e_2}$, is calculated as follows:
$$T_{k,e_2} = T_{k,e_1}^{up} + T_{k,e_1,e_2}^{trans} + T_{k,e_2}^{com} + T_{k,e_2,e_3}^{trans} + T_{k,e_3}^{down} \tag{13}$$
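The three completion-time expressions (11)-(13) differ only in which transfer legs appear. The sketch below makes that comparison explicit; the per-stage time arguments are assumed to be computed as in Section 3.2 and are purely illustrative.

```python
def direct_mode(t_up: float, t_com: float, t_trans_out: float, t_down: float) -> float:
    # Eq. (11): compute on e1, forward the result to e2, download from e2.
    return t_up + t_com + t_trans_out + t_down

def predictive_mode(t_up: float, t_trans_in: float, t_com: float, t_down: float) -> float:
    # Eq. (12): forward the input to the predicted node e2 before computing.
    return t_up + t_trans_in + t_com + t_down

def hybrid_mode(t_up: float, t_trans_in: float, t_com: float,
                t_trans_out: float, t_down: float) -> float:
    # Eq. (13): input goes e1 -> e2, result goes e2 -> e3, download from e3.
    return t_up + t_trans_in + t_com + t_trans_out + t_down
```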

3.4. Problem Formulation

In this paper, we consider the set $K$ of tasks generated by vehicles entering UAV coverage at the current time. Let $\mu_{k,e}$ be the decision variable indicating whether task $k$ is within the coverage range of UAV server $e$, and $\eta_{k,e}$ the decision variable indicating whether it is executed on server $e$. Each task $k \in K$ has a hard deadline $T_k^{max}$. Under this deadline constraint, the objective is to maximize the task completion success ratio (SR), i.e., the proportion of tasks completed within their deadlines. Because UAV server resources are limited, every computation task incurs a time cost; servers located in areas of high density and high computation demand may suffer high load, long task waiting times, and high task failure rates. Of the three offloading modes discussed above, the hybrid mode offers the most options for task offloading and can achieve better optimization results through reasonable offloading decisions. We therefore adopt the hybrid offloading mode, which predicts the vehicle's location from its motion-sensing information and treats the UAV nodes the vehicle passes through within the deadline as candidate offloading nodes. The objective of the task offloading and resource allocation problem within the VEC framework is to maximize the task completion success ratio:
$$\max \ SR = \frac{\sum_{k \in K} F_k}{|K|} \tag{14}$$
s.t.
$$F_k = \begin{cases} 1, & T_k \le T_k^{max} \\ 0, & T_k > T_k^{max} \end{cases} \tag{15}$$
$$T_k = ST_k + ET_k + DT_k \tag{16}$$
$$ST_k = \max\{TT_k, EST_k\} \tag{17}$$
$$TT_k = \sum_{e \in \mathcal{E}} \eta_{k,e} T_{k,e}^{up} \tag{18}$$
$$T_{k,e}^{up} = \begin{cases} T_{k,e_1}^{up}, & e = e_1 \ \wedge \ \mu_{k,e_1} = 1 \\ T_{k,e_1}^{up} + T_{k,e_1,e_2}^{trans}, & e = e_2 \ \wedge \ \mu_{k,e_1} = 1 \end{cases} \tag{19}$$
$$EST_k = \sum_{e \in \mathcal{E}} \Big( \eta_{k,e} \sum_{s \in S_e} \xi_{k,s} EAT_s \Big) \tag{20}$$
$$ET_k = \sum_{e \in \mathcal{E}} \Big( \eta_{k,e} \sum_{s \in S_e} \xi_{k,s} T_{k,s}^{com} \Big) \tag{21}$$
$$DT_k = \sum_{e \in \mathcal{E}} \mu_{k,e} T_{k,e}^{down} \tag{22}$$
$$T_{k,e}^{down} = \begin{cases} T_{k,e_1}^{down}, & e = e_1 \ \wedge \ \eta_{k,e_1} = 1 \\ T_{k,e_2}^{down} + T_{k,e_1,e_2}^{trans}, & e = e_2 \ \wedge \ \eta_{k,e_1} = 1 \ \wedge \ \eta_{k,e_2} \ne 1 \end{cases} \tag{23}$$
$$\mu_{k,e} = \begin{cases} 1, & \|L_e - L_v\| \le r \\ 0, & \text{otherwise} \end{cases} \tag{24}$$
$$\eta_{k,e} = \begin{cases} 1, & \text{task } k \text{ is executed on node } e \\ 0, & \text{otherwise} \end{cases} \tag{25}$$
Equation (14) represents the optimization objective, which is to maximize the success ratio of tasks completed within the deadline. Equation (15) indicates whether a task can be completed within the deadline, with a value of 1 for completion and 0 for non-completion. Equation (16) describes the completion time of a task under edge offloading, which is the sum of the task’s start execution time ( S T ), execution time ( E T ), and result transmission time ( D T ). Equations (18)–(20) describe the relationship between the task’s start execution time ( S T ) and the task’s transmission time, container startup time, and the earliest available time ( E A T ) of the container. Equation (21) describes the task execution time in relation to the allocated container. Equations (22) and (23) represent the result transmission time. Equations (24) and (25) are the two decision variables.
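As a quick sanity check of the objective in Equation (14), the success ratio can be computed directly from per-task completion times and deadlines. This is an illustrative sketch, not part of the released implementation.

```python
def success_ratio(completion_times: list, deadlines: list) -> float:
    """SR = (1/|K|) * sum_k F_k, with F_k = 1 iff T_k <= T_k^max."""
    assert len(completion_times) == len(deadlines) and deadlines
    completed = sum(1 for t, d in zip(completion_times, deadlines) if t <= d)
    return completed / len(deadlines)

# Example: three of four tasks finish before their deadlines -> SR = 0.75
print(success_ratio([0.4, 0.9, 1.2, 0.3], [0.5, 1.0, 1.0, 0.5]))
```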

4. Mobility Aware Vehicular Task Offloading Algorithm

The MAVTO (Mobility Aware Vehicular Task Offloading) algorithm offers a mobility-focused approach to vehicular task offloading and resource distribution within the Internet of Vehicles framework. This method takes into account vehicle mobility and utilizes UAV servers equipped with containerization technology. Aiming to optimize the success rate of task execution, the algorithm uses a hybrid offloading mode to tackle the issue while adhering to resource limitations.

4.1. Algorithm Framework

The algorithm accepts three input sets: user vehicles $\mathcal{V}$, tasks $K$, and UAV servers $\mathcal{E}$. It outputs the task offloading strategy $Y$, the resource allocation strategy $\Pi$, and the task success rate $SR$. The framework of the algorithm is detailed in Algorithm 1. For task offloading requests from vehicles on the road, the algorithm initially segregates the tasks into a local execution sequence and an offloading sequence, considering deadline constraints and local computation abilities. It then forms a set of potential offloading nodes for each task. Following this, the algorithm uses the offloading sequence generation algorithm to arrange the offloading sequences according to travel direction and iteratively selects offloading directions to decide on task offloading. For every task, once the offloading node is chosen, an execution container is linked to it until all tasks are scheduled and finished. After obtaining the offloading decisions and resource allocation outcomes, the task success rate $SR$ is computed.
Algorithm 1: MAVTO (Mobility Aware Vehicular Task Offloading) Algorithm Framework
Input: the set of UAV servers $\mathcal{E}$; the set of vehicles $\mathcal{V}$; the set of tasks $K$
Output: task offloading decision $Y$; resource allocation $\Pi$; success rate of task completion $SR$
(The body of Algorithm 1 is provided as an image in the original article.)
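Because Algorithm 1 is reproduced only as an image, the following Python skeleton is a hedged reconstruction of the framework described above. All helper callables (candidate_group, otss_key, choose_node, assign_container) are placeholders standing in for Sections 4.2-4.5, not the authors' identifiers, and local execution of light tasks is omitted for brevity.

```python
from typing import Callable, Dict, List, Tuple

def mavto_framework(tasks: List,
                    uavs: List,
                    candidate_group: Callable,   # Section 4.2: (task, uavs) -> candidate UAV list
                    otss_key: Callable,          # Section 4.3: task -> sort key (e.g., OTSS4 weighted sum)
                    choose_node: Callable,       # Section 4.4: (task, candidates) -> target UAV
                    assign_container: Callable,  # Section 4.5: (task, uav) -> (container, finish_time)
                    ) -> Tuple[Dict, Dict, float]:
    """Hedged sketch of Algorithm 1: offloading decisions, container allocation, and SR."""
    decisions, allocation, completed = {}, {}, 0
    for direction in ("EAST", "WEST"):           # bidirectional traffic is ordered per direction
        ordered = sorted((k for k in tasks if k.vehicle.direction == direction), key=otss_key)
        for task in ordered:
            candidates = candidate_group(task, uavs)
            node = choose_node(task, candidates)
            container, finish = assign_container(task, node)
            decisions[id(task)], allocation[id(task)] = node, container
            completed += int(finish <= task.deadline)
    sr = completed / len(tasks) if tasks else 0.0
    return decisions, allocation, sr
```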

4.2. Generate the Candidate Group of Offloading Servers

Owing to the high-speed mobility of the user vehicle and the limited coverage range of UAVs, the vehicle may move out of the coverage range of the offloading UAV node during task offloading and execution, passing through multiple UAVs. The UAV servers mounted on these UAVs can serve as candidate offloading nodes for the task. When the vehicle enters the coverage range of a UAV, it can communicate with that UAV directly and offload the task to its server for execution. However, due to the limited coverage range of UAVs, the vehicle cannot yet communicate directly with upcoming UAVs, which are referred to as multi-hop UAVs. If the server of a multi-hop UAV is selected as the offloading node, the directly connected UAV can act as a relay node, forwarding the offloaded task over multiple hops to the destination UAV node, where the task is received and processed. The candidate offloading group $G_k$ is therefore obtained from the driving speed $vs_v$, driving direction $vd_v$, and deadline constraint $T_k^{max}$ of task $k$. The final candidate group $G_k$ is determined by Equation (26); Equations (27)–(29) define the minimal and maximal coordinates of the vehicle and the UAVs, respectively.
$$G_k = \{\, e \mid e \in \mathcal{E}, \ [x_v^{min}, x_v^{max}] \cap [x_e^{min}, x_e^{max}] \ne \emptyset \,\} \tag{26}$$
$$x_v^{min} = \begin{cases} x_v - T_k^{max} \times vs_v, & vd_v = \text{WEST} \\ x_v, & vd_v = \text{EAST} \end{cases} \tag{27}$$
$$x_v^{max} = \begin{cases} x_v, & vd_v = \text{WEST} \\ x_v + T_k^{max} \times vs_v, & vd_v = \text{EAST} \end{cases} \tag{28}$$
$$x_e^{min} = x_e - \sqrt{r^2 - y_v^2}; \quad x_e^{max} = x_e + \sqrt{r^2 - y_v^2} \tag{29}$$
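A compact sketch of the candidate-group construction of Equations (26)-(29), reusing the illustrative classes from Section 3.1. Taking WEST as the direction of decreasing x is an assumption for the example; the paper does not fix the sign convention explicitly.

```python
from math import sqrt

def candidate_group(task, uavs):
    """Candidate offloading UAVs reachable before the deadline, Eq. (26)-(29)."""
    v = task.vehicle
    reach = task.deadline * v.speed                           # farthest travel before T_k^max
    x_min = v.x - reach if v.direction == "WEST" else v.x     # Eq. (27)
    x_max = v.x if v.direction == "WEST" else v.x + reach     # Eq. (28)
    group = []
    for e in uavs:
        half = sqrt(max(e.radius ** 2 - v.y ** 2, 0.0))       # coverage half-width on the lane, Eq. (29)
        if x_min <= e.x + half and e.x - half <= x_max:       # interval overlap test, Eq. (26)
            group.append(e)
    return group
```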

4.3. Task Offloading Sequence Generation

For a given task, if the local processing capability of the vehicle terminal is sufficient to handle the task and meets the deadline constraint, the task can be executed locally, saving transmission time. However, most vehicle terminals have limited processing capabilities. If the local resources are not sufficient to handle the task or if the local processing queue is too long, the task can be offloaded to UAV servers, leveraging their stronger processing capabilities. This gives rise to the distinction between the local execution sequence and the offloading sequence. Regarding the offloading sequence, during the task offloading decision process, the tasks are first assigned different priorities according to certain rules. They are then sorted based on priority to generate the task offloading sequence. The tasks are then assigned to the server nodes in the order of the offloading sequence. Since bidirectional vehicles are considered in this paper, the tasks generated by vehicles traveling in different directions are ranked separately. Based on the task attributes and optimization goals, this paper proposes the Offloading Task Sequence Sorting (OTSS) method for sorting the task offloading sequence:
1.
OTSS1: MNCF (Minimum Number of Candidates First). The MNCF rule considers the number of potential offloading servers in a task's candidate group, as described in Section 4.2. Tasks with a limited selection range should be assigned offloading nodes with higher priority, whereas tasks with a wide selection range should be assigned nodes later, so that their assignment can be adjusted according to the current decision scheme. Otherwise, tasks with a smaller selection range would be left with little flexibility.
2.
OTSS2: MATF (Minimum Available Time First). The MATF rule considers not only the deadline constraint of tasks but also the vehicle's speed and position, based on its mobility information. The less time a vehicle remains within the current coverage range, the sooner it will leave the range of the current UAV. The available time for vehicle $v$ to handle task $k$ is denoted $T_v^{avail}$:
$$T_v^{avail} = \min(T_k^{max}, T_v^{stay}), \quad T_v^{stay} = \frac{RD}{vs_v}, \quad RD_k = \sqrt{r^2 - y_v^2} \pm |x_v - x_e| \tag{30}$$
where $T_v^{stay}$ is the remaining travel time of the vehicle within the coverage area of the current UAV node, and $RD$ is the remaining travel distance of the vehicle within that coverage area. The calculation of $RD$ is illustrated by the example in Figure 5.
3.
OTSS3: Maximum Local Execution Time First. Tasks that take a long time to execute locally are more likely to miss their deadlines, resulting in a poor experience. Therefore, priority is given to offloading tasks with long local execution times.
4.
OTSS4: MWSDC (Minimum Weighted Sum of Data and Computation First). The MWSDC rule takes into account the input data volume and computational workload of a task. For a task k, its weighted sum S U M k is defined as follows:
$$SUM_k = \lambda_1 C_k + \lambda_2 I_k \tag{31}$$
where $\lambda_1$ and $\lambda_2$ balance the computational workload ($C_k$) and the data volume ($I_k$) of the task. A smaller $SUM_k$ value indicates a lower resource demand, so tasks with smaller $SUM_k$ values have higher priority and are scheduled first in the offloading decision. A minimal sketch of the MATF and MWSDC keys is given after this list.
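The ordering rules reduce to sort keys. The sketch below shows the MATF (OTSS2) and MWSDC (OTSS4) keys under the illustrative classes introduced earlier; the default weights for λ1 and λ2 are placeholders, not values from the paper.

```python
from math import sqrt

def matf_key(task, current_uav):
    """OTSS2 / MATF: min(T_k^max, remaining dwell time under the current UAV), Eq. (30).
    The +/- sign in the paper's RD formula depends on whether the vehicle has passed
    the UAV's projection; here this is resolved via the driving direction."""
    v = task.vehicle
    half = sqrt(max(current_uav.radius ** 2 - v.y ** 2, 0.0))
    edge = current_uav.x + half if v.direction == "EAST" else current_uav.x - half
    remaining = max(edge - v.x, 0.0) if v.direction == "EAST" else max(v.x - edge, 0.0)
    return min(task.deadline, remaining / v.speed)

def mwsdc_key(task, lam1=0.5, lam2=0.5):
    """OTSS4 / MWSDC: SUM_k = lam1 * C_k + lam2 * I_k; smaller values get higher priority."""
    return lam1 * task.workload + lam2 * task.input_mb

# Ordering an offloading sequence by OTSS4 (ascending SUM_k):
# ordered = sorted(offload_seq, key=mwsdc_key)
```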

4.4. Offloading Decision

The task submits an offloading request to the VEC system, which first makes an offloading decision to assign a specific UAV node to execute each task. In this paper, we treat the multiple nodes that a high-speed moving vehicle passes through within the deadline as candidate offloading nodes for the vehicle's tasks, and a target server is selected from this candidate group. The vehicle can offload the task directly to the currently connected UAV node for execution, or transfer it to another UAV node through the currently connected one. Because we deal with a heterogeneous resource scenario, the processing capabilities of UAV servers may vary, and the processing capabilities of containers on the same UAV server may also differ. In addition, the distribution of UAV servers varies, leading to different task transmission costs. Meanwhile, the task deadline constraint must be respected during the offloading decision, adding complexity to the problem. It is therefore necessary to consider the characteristics of UAV servers and the task attributes, balance the trade-off between reducing execution time through offloading and increasing transmission time by moving tasks to other UAV nodes, and maximize the task success rate. The following Offloading Decision (OD) rules are formulated:
1.
OD1: MECTF (Minimum Estimated Completion Time First). When selecting offloading nodes for tasks, various factors such as uploading time, transmission time, waiting time, computation time, and result return time are taken into account. The estimated completion time of the task at each UAV node is assessed, and the destination node is chosen as the one that can complete the task earliest. The estimated completion time of the task is calculated using Equation (32).
$$\widehat{ECT}_{k,e} = \max(TT_{k,e}, \widehat{EAT}_e) + \widehat{ET}_{k,e} + \widehat{DT}_{k,e} \tag{32}$$
$$TT_{k,e} = \begin{cases} T_{k,e_1}^{up}, & e = e_1 \\ T_{k,e_1}^{up} + T_{k,e_1,e_2}^{trans}, & e = e_2 \ \wedge \ e_1 \ne e_2 \end{cases} \tag{33}$$
The estimated completion time $\widehat{ECT}_{k,e}$ of a task on a UAV node is determined by a combination of factors, including the transmission time $TT_{k,e}$ from the various nodes, the estimated earliest availability time of the server $\widehat{EAT}_e$, the estimated computation time $\widehat{ET}_{k,e}$, and the estimated result return time $\widehat{DT}_{k,e}$. Transferring the task to an unconnected node $e_2$ incurs an additional transmission cost $T_{k,e_1,e_2}^{trans}$ compared with uploading it directly to the directly connected UAV node $e_1$. The value of $\widehat{ET}_{k,e}$ is calculated using the average processing speed of the UAV server, $\bar{P}_e = \frac{\sum_{s \in S_e} P_s}{|S_e|}$. The estimated earliest availability time of the server is calculated from the average container startup time $\bar{\theta}$ and the average waiting time $\bar{WT}_e$ of tasks on the server. The result return time depends on the size of the computation result, the downstream transmission rate, and the location at which the vehicle arrives, specifically $\widehat{DT}_{k,e} = \frac{O_k}{R_e} + \frac{hops \times O_k}{W}$. Based on the vehicle's motion-sensing information, the predicted arrival location of the vehicle is used to estimate the number of hops the computation result must traverse between UAVs.
2.
OD2: MCCF (Maximum Computing Capacity First). Taking into consideration the computational capabilities of UAV nodes, tasks executed on UAV servers with higher computational capabilities will have shorter execution times, maximizing the utilization of node processing capabilities. However, this approach directly ignores the task transmission cost and the node’s workload, which may result in an uneven distribution of tasks among UAV nodes. As a consequence, servers with higher processing capabilities may become overloaded, while servers with lower processing capabilities remain idle, leading to under-utilization of resources.
3.
OD3: LBF (Load Balance First). Balancing the processing capabilities and the number of pending tasks among UAV nodes, so that each node's workload matches its processing capacity, is essential to achieve equilibrium among the nodes. In the node selection process, the UAV node with the minimum current load pressure is chosen as the offloading node. The load factor (LF) is defined as the ratio of a UAV node's current workload to its processing capacity:
$$LF_e = \frac{\sum_{\psi_i \in \Psi_e} C_{\psi_i}}{\bar{P}_e} \tag{34}$$
Here, $\Psi_e$ denotes the set of pending tasks of UAV node $e$. $LF_e$ serves as a comprehensive measure of a node's processing capability and its number of pending tasks. The UAV node with the smallest $LF_e$ value is selected as the offloading node for task computation. A minimal sketch of this rule is given after the list.
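The load-balance-first rule (OD3), which the calibration in Section 5.2 ultimately selects, reduces to picking the candidate with the smallest load factor. The sketch below assumes simple (uav, pending_workload, mean_speed) bookkeeping for illustration; this is not the paper's data structure.

```python
def load_factor(pending_workload: float, mean_speed: float) -> float:
    """LF_e = (total computational workload pending on node e) / (mean processing speed of e)."""
    return pending_workload / mean_speed

def choose_node_lbf(candidates):
    """OD3 / LBF: candidates is an iterable of (uav, pending_workload, mean_speed) tuples;
    the UAV with the smallest load factor is chosen as the offloading node."""
    uav, _, _ = min(candidates, key=lambda c: load_factor(c[1], c[2]))
    return uav
```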

4.5. Resource Allocation

After making the decision to offload each task, it is necessary to match the tasks with containers on the offloading node. This process takes into consideration the task’s own attributes and the resource situation on the UAV node. Through task scheduling, suitable containers are assigned to the tasks in order to achieve the optimization objective of maximizing task completion success rates. The following resource allocation (RA) strategy is proposed:
1.
RA1: EFTF (Earliest Finish Time First). The EFTF strategy takes into account both the processing capabilities of the containers and the queuing status of pending task sequences. It estimates the completion time of tasks on each container and assigns tasks to the container with the shortest completion time for processing. This approach ensures that tasks are assigned to the containers that can complete them in the shortest amount of time.
2.
RA2: Max–Min. The core idea of this strategy is to sort the task sequence in non-increasing order of processing time and assign tasks to the containers with shorter processing times. This maximizes resource utilization, minimizes task completion time, and increases the likelihood of completing tasks within their respective deadlines.
3.
RA3: Min-Variance. To improve the efficiency of task scheduling in a containerized environment, tasks are first sorted in ascending order of processing time and assigned to the containers offering the shortest completion time, yielding an initial scheduling plan. A failure sequence of tasks that exceed their deadlines is then recorded. To address these failures, a Min-Variance algorithm and a rescheduling algorithm are introduced: the Min-Variance algorithm sorts tasks in non-decreasing order of computational workload and assigns each task to the container with the minimum completion time; after the task-resource mapping sequence is obtained, the failed tasks are rescheduled onto other containers for execution. The pseudocode of the Min-Variance algorithm and the rescheduling algorithm is presented as Algorithms 2 and 3, respectively.
Algorithm 2: The Framework of Min Variance
Input: the set of UAV servers $\mathcal{E}$
Output: the task-container matching sequence $\Pi$; success rate of task execution $SR$
(The body of Algorithm 2 is provided as an image in the original article.)
The rescheduling algorithm (Algorithm 3) sorts failed tasks by increasing deadline. It reschedules tasks sequentially by considering previously unconsidered UAV nodes: if a new candidate node improves the completion time and allows the task to finish within its deadline, it becomes the new target container, and the algorithm keeps track of the maximum optimization space. After all candidate nodes have been considered, the task completion time and container availability are recalculated, and the offloading decision and resource allocation plans are updated. The Min-Variance algorithm first sorts the set of tasks $K_q^e$ on each UAV server $e$ by computational requirement, with time complexity $O(|K_q^e| \log |K_q^e|)$. It then matches resources to each task one by one, with time complexity $O(|K_q^e| \times |S_e|)$. Finally, it calls Algorithm 3 to reschedule the failed task sequence; assuming the number of failed tasks is $n_1$, the time complexity of Algorithm 3 is $O(n_1 \times |\mathcal{E}| \times |S|)$. The total time complexity of the Min-Variance algorithm is therefore $O(|K_q^e| \log |K_q^e| + |K_q^e| \times |S_e| + n_1 \times |\mathcal{E}| \times |S|)$, which simplifies to $O(|K_q^e| \log |K_q^e|)$ when the sorting term dominates.
Algorithm 3: TRS (Task Re-Scheduling)
Input: set of failed tasks $K_{failed}$; sequence of task-container matches $\Pi$; number of successfully executed tasks $SN$
Output: updated sequence of task-container matches $\Pi$; updated number of successfully executed tasks $SN$
(The body of Algorithm 3 is provided as an image in the original article.)
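Because Algorithms 2 and 3 appear only as images in the published version, the following is a hedged Python reconstruction based solely on the prose in Section 4.5. Container bookkeeping is simplified to a single ready time per container, attribute names (workload, deadline, upload_finish) are illustrative, and the original container's queue is not rolled back when a task is moved.

```python
def min_variance(tasks, containers):
    """Sketch of Algorithm 2. tasks: objects with .workload, .deadline, .upload_finish;
    containers: dicts with 'speed', 'startup', 'started', 'ready' (earliest available time)."""
    plan, failed = {}, []
    for task in sorted(tasks, key=lambda k: k.workload):          # non-decreasing workload
        best, best_finish = None, float("inf")
        for c in containers:
            eat = c["ready"] if c["started"] else c["startup"]    # earliest available time EAT_s
            finish = max(task.upload_finish, eat) + task.workload / c["speed"]
            if finish < best_finish:
                best, best_finish = c, finish
        best["started"], best["ready"] = True, best_finish        # commit the assignment
        plan[id(task)] = (best, best_finish)
        if best_finish > task.deadline:
            failed.append(task)
    return plan, failed

def reschedule(failed, plan, extra_containers):
    """Sketch of Algorithm 3 (TRS): failed tasks, by increasing deadline, are retried on
    containers not considered before; a task moves only if the new finish time both
    improves on the current one and meets the deadline."""
    for task in sorted(failed, key=lambda k: k.deadline):
        _, current_finish = plan[id(task)]
        for c in extra_containers:
            eat = c["ready"] if c["started"] else c["startup"]
            finish = max(task.upload_finish, eat) + task.workload / c["speed"]
            if finish < current_finish and finish <= task.deadline:
                c["started"], c["ready"] = True, finish
                plan[id(task)], current_finish = (c, finish), finish
    return plan
```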

5. Experimental Results

5.1. Experimental Environment and Datasets

The dataset utilized for the experiments presented in this paper is sourced from the vehicle trajectory dataset associated with the Wu Yun Expressway in Shanxi, China [20]. These data include details such as the vehicle’s position, time of recording, speed, and travel direction. For the experiment, a typical two-way lane on the Wu Yun Expressway was chosen, and five sections were designated for UAV deployment. The actual vehicle trajectory data from 1 August 2021 were employed to represent the motion path of the vehicle performing the task.
In fine-tuning the algorithm parameters and evaluating performance, it is crucial to keep the comparison of experimental outcomes fair, which requires generating a large number of test instances. We consider vehicle counts of {60, 70, 80, 90, 100}. To assess how the task data volume influences algorithm performance, experiments are compared across four data-volume intervals: {[0.3, 0.5], [0.5, 0.8], [0.8, 1], [1, 1.5]} Mb. This study also examines the effect of different deadline constraints using four deadline intervals: {[0.2, 0.3], [0.3, 0.5], [0.5, 1], [0.5, 1.5]} s. For details on the experimental parameters, refer to [21], with the specific settings listed in Table 2. For each of the 5 vehicle counts, 4 task data volume intervals, and 4 deadline intervals, 10 instances are generated, resulting in a total of 5 × 4 × 4 × 10 = 800 instances. For parameter tuning, there are 4 OTSS strategies, 3 OD strategies, and 3 RA strategies, leading to 4 × 3 × 3 × 800 = 28,800 experiments in total. The optimal configuration is determined by evaluating the experimental data.
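The instance and experiment counts quoted above follow directly from the cross product of the experimental factors; a quick check:

```python
from itertools import product

vehicle_counts = [60, 70, 80, 90, 100]
data_ranges = [(0.3, 0.5), (0.5, 0.8), (0.8, 1.0), (1.0, 1.5)]      # Mb
deadline_ranges = [(0.2, 0.3), (0.3, 0.5), (0.5, 1.0), (0.5, 1.5)]  # s
instances_per_cell = 10

instances = len(vehicle_counts) * len(data_ranges) * len(deadline_ranges) * instances_per_cell
strategy_configs = len(list(product(range(4), range(3), range(3))))  # 4 OTSS x 3 OD x 3 RA
print(instances, instances * strategy_configs)                       # 800 28800
```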
To evaluate the algorithm's performance and calibrate its parameters, the RPD (Relative Percentage Deviation) is used as the evaluation metric in this paper. The RPD measures the deviation of an algorithm's objective value $Z$ from the best value $Z^*$, which is the highest task completion success rate obtained by any algorithm under the same conditions. A smaller RPD value indicates better algorithm performance.
$$RPD(\%) = \frac{Z^* - Z}{Z^*} \times 100\% \tag{35}$$
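Under this reconstruction of the RPD formula, with Z* the best success rate found by any algorithm on the same instance, the metric is a one-liner:

```python
def rpd(z: float, z_best: float) -> float:
    """Relative Percentage Deviation: 0% for the best algorithm on an instance, larger is worse."""
    return (z_best - z) / z_best * 100.0

# Example: an algorithm reaching SR = 0.90 while the best reaches 0.95 gives RPD of about 5.26%.
print(round(rpd(0.90, 0.95), 2))
```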

5.2. Parameter Calibration

In this paper, the proposed components are calibrated using the ANOVA (Analysis of Variance) technique, and the performance of each component is analyzed.
1.
Task Offloading Sequence Generation
Figure 6 presents a comparison of four task offloading sequence ordering rules: OTSS1, OTSS2, OTSS3, and OTSS4. Among these, OTSS4, which gives precedence to workloads with the least data intensity, demonstrates superior performance. This rule takes into account both transmission and computation resource demands, thus enhancing the likelihood of tasks meeting their deadlines. Conversely, OTSS1 and OTSS2 focus solely on minimizing the number of candidate nodes and available time, respectively, thus evaluating only partial task characteristics and delivering inferior performance. OTSS3 acknowledges computation resource needs but overlooks bandwidth requirements and variations in processing capabilities. Therefore, OTSS4 is selected as the task offloading sequence ordering rule in this study.
2.
Offloading Decision Strategy
Figure 7 depicts a comparison of the performance of three task offloading strategies: OD1, OD2, and OD3. Of these, OD3, which implements a load balancing priority tactic, achieves superior performance compared to OD1 and OD2. OD3 takes into account both the processing capabilities of the UAV nodes and the order of pending task offloading when selecting nodes. Conversely, OD1 is based on a minimum estimated completion time priority strategy and considers the server’s processing capabilities and current load. However, it may result in suboptimal task allocation due to inaccurate estimates of task completion times, leading to inferior performance relative to OD3. On the other hand, OD2 employs a maximum processing capability priority strategy that focuses solely on the resource status of UAV nodes, causing uneven server loads and reduced task processing effectiveness. Consequently, OD2 delivers the least favorable performance. From the analysis and comparison, OD3, with its load balancing priority approach, is selected as the optimal task offloading decision rule.
3.
Resource Allocation
Figure 8 presents a comparative analysis of various resource allocation strategies within a 95% Tukey HSD confidence interval. The figure illustrates that among the three strategies evaluated, RA3 demonstrates superior performance. RA1, which prioritizes the earliest completion time, allocates tasks to containers with the earliest finishing capability but fails to account for resource contention among tasks, resulting in extended waits for other tasks. RA2, based on the Max–Min strategy, prioritizes large task scheduling by assigning them to the earliest available containers, which, in turn, possibly makes smaller tasks wait beyond their deadlines. Conversely, RA3 adopts the Min-Variance strategy, focusing on scheduling smaller tasks first, thereby minimizing their wait times. Post initial allocation, a secondary assignment is conducted to relocate tasks at risk of deadline breaches to containers that can further reduce their completion times, thereby enhancing overall task success rates. Ultimately, this study selects RA3 as the most effective resource allocation strategy.

5.3. Performance Comparison

To assess the effectiveness of the proposed MAVTO algorithm (source code: https://github.com/balldu/MAVTO/tree/master, accessed on 11 November 2024), the TAVF [22], MONSA [23], and TFPVTO [24] algorithms serve as baselines. A multifactor analysis of variance technique is used to compare the performance of the algorithms, considering factors such as the number of vehicles, task data volume, and deadline values.
Figure 9 shows the mean RPD (Relative Percentage Deviation) for different numbers of vehicles in four data volume ranges: [0.3, 0.5] Mb, [0.5, 0.8] Mb, [0.8, 1.0] Mb, and [1.0, 1.5] Mb. In Figure 9a, for smaller task data volumes and fewer vehicles, the performance of the MAVTO algorithm is similar to that of the TFPVTO algorithm. This is because, in scenarios with few tasks, the advantage of load balancing is not significant and the loads on different UAV nodes do not differ much. However, as the number of vehicles increases, the performance gap between MAVTO and the comparative algorithms widens due to the varying loads on different UAV nodes. The MONSA and TAVF algorithms perform similarly because, for lower data volumes, the MONSA algorithm selects UAV nodes beyond the nearby ones, which shortens task waiting times even though transmission costs are higher, whereas the TAVF algorithm accounts for transmission costs and tends to select nearby UAV nodes more often, leading to relatively longer computation times and completion times comparable to MONSA. Thus, there is little difference in the success rates of the two algorithms. When the data volume increases, the transmission cost also increases; as shown in Figure 9d, the transmission cost of the MONSA algorithm becomes larger, so it performs slightly worse than the TAVF algorithm. The TFPVTO algorithm also employs a strategy of offloading to nearby UAV nodes, but it optimizes resource allocation using a minimum-weight perfect matching algorithm on a weighted bipartite graph, which yields a better allocation scheme than the TAVF and MONSA algorithms. Overall, the proposed MAVTO algorithm outperforms the others. MAVTO comprehensively considers the resources and load of each UAV node, achieving a more balanced load distribution among the nodes; it prioritizes small tasks in resource allocation and readjusts the allocation scheme to optimize the results. Figure 9 also shows that as the task data volume increases, the performance difference between MAVTO and the other three algorithms gradually grows. MAVTO neither concentrates tasks on resource-rich nodes, as MONSA does, nor adopts the nearby-selection strategy of TAVF and TFPVTO, which can cause high load pressure and long task waiting times; instead, it distributes tasks evenly across UAV nodes, reducing task waiting times. In summary, under different task data volumes, the MAVTO algorithm outperforms the other three comparative algorithms in maximizing the success rate of completing tasks within the deadline, especially in scenarios with larger task data volumes.
Figure 10 shows the mean variation of RPD (Relative Percentage Deviation) for different numbers of containers in four data volume ranges: [0.3, 0.5] Mb, [0.5, 0.8] Mb, [0.8, 1.0] Mb, and [1.0, 1.5] Mb. As shown in the figure, MAVTO consistently outperforms the other algorithms (MONSA, TAVF, and TFPVTO) by maintaining the lowest RPD, indicating that it provides the closest results to the optimal solution regardless of task data volume. As data volume increases, the RPD for MONSA, TAVF, and TFPVTO generally increases, especially at higher container numbers. This trend suggests that these algorithms struggle with higher data volumes, possibly due to increased resource contention or processing delays. MAVTO’s stability across data volumes shows that its hybrid offloading strategy and load balancing capabilities handle increased data volumes effectively. For each data volume interval, increasing the number of containers tends to increase the RPD for all algorithms, but this effect is notably less severe for MAVTO. In Figure 10a,b, MAVTO’s RPD remains relatively flat even as the number of containers increases, showing minimal impact from additional containers. This suggests that MAVTO’s resource allocation strategy effectively manages task distribution even when more containers are introduced. For MONSA, TAVF, and TFPVTO, the increase in RPD with more containers may reflect inefficiencies in these algorithms’ ability to allocate tasks efficiently when there are more resources. These algorithms could be encountering issues with load balancing or resource management as container numbers rise.
Figure 11 shows the mean variation of RPD (Relative Percentage Deviation) for different numbers of vehicles in four deadline ranges: [0.2, 0.3] s, [0.3, 0.5] s, [0.5, 1.0] s and [0.5, 1.5] s. In Figure 11a, it can be observed that as the number of vehicles increases, the difference in RPD values between the MAVTO algorithm and the other three algorithms also increases. Moreover, the TFPVTO, TAVF, and MONSA algorithms show an upward trend, while the proposed MAVTO algorithm remains stable. The RPD of the MONSA algorithm is higher than the other three algorithms because the MONSA algorithm selects the UAV node with the highest comprehensive evaluation value of bandwidth and computational resources during offloading decision-making, without considering the load of the UAV nodes. This may lead to a high load on certain UAV nodes, resulting in long waiting times for tasks and a decreased task completion success rate. Figure 11b–d show that as the task deadline increases, the performance gap between the algorithms gradually narrows. This is because as the deadline constraint of tasks becomes looser, more tasks can be completed within the deadline, and the total number of offloaded tasks remains constant. As a result, the optimization space of the MAVTO algorithm decreases. Therefore, as the deadline increases, the difference between the MAVTO algorithm and the other algorithms gradually reduces. From the performance comparison results, it can be concluded that the MAVTO algorithm outperforms the other three algorithms in different deadline ranges. The MAVTO algorithm is more suitable for application scenarios with smaller task deadline ranges when the goal is to maximize task completion success rate.

6. Conclusions

This paper addressed the problem of task offloading and resource allocation in high-speed VEC systems, particularly within UAV-assisted networks. The proposed MAVTO algorithm, which is based on vehicle mobility prediction and container-based virtualization, optimizes task offloading decisions to maximize the task completion success rate while considering vehicle mobility and resource constraints. The framework integrates direct, predictive, and hybrid offloading modes to improve the flexibility and efficiency of task execution. Simulation results show that MAVTO significantly outperforms traditional algorithms in terms of task success rates, especially in scenarios involving high data complexity and stringent deadline requirements. These results validate the effectiveness of using a multi-drone collaborative offloading strategy in VEC networks. Future work could explore further refinements in UAV coordination and load balancing for enhancing scalability and performance in even more complex vehicular environments.

Author Contributions

Conceptualization, L.C. and J.D.; methodology, L.C. and J.D.; software, J.D.; validation, J.D.; formal analysis, J.D.; investigation, J.D.; resources, L.C.; writing—original draft preparation, L.C. and J.D.; writing—review and editing, L.C., J.D. and X.Z.; supervision, X.Z.; project administration, J.D.; funding acquisition, L.C. and X.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This work is supported by the National Key Research and Development Program of China (No. 2022YFB3305500), the National Natural Science Foundation of China (No. 62102080), and the Fundamental Research Funds for the Central Universities (No. 2242022R10017).

Data Availability Statement

The original data presented in the study are openly available at https://github.com/balldu/MAVTO/tree/master.

Conflicts of Interest

The authors declare no conflicts of interest.

Figure 1. Task offloading from bi-directionally moving vehicles in UAV-assisted Vehicular Edge Computing.
Figure 2. Direct offloading model.
Figure 3. Prediction offloading model.
Figure 4. Mixed offloading model.
Figure 5. Example diagram for calculating the remaining travel distance of the vehicle.
Figure 6. The performance of different task offloading sequences under a 95% Tukey HSD confidence interval.
Figure 7. The performance of different task offloading strategies under a 95% Tukey HSD confidence interval.
Figure 8. The performance of different resource allocation strategies under a 95% Tukey HSD confidence interval.
Figure 9. Interaction plots of the compared algorithms for tests with different vehicle numbers and task data volumes under a 95% Tukey HSD confidence interval.
Figure 10. Interaction plots of the compared algorithms for tests with different container numbers and task data volume intervals under a 95% Tukey HSD confidence interval.
Figure 11. Interaction plots of the compared algorithms for tests with different vehicle numbers and task deadlines under a 95% Tukey HSD confidence interval.
Table 1. Notations in this paper.

| Notation | Description |
| --- | --- |
| E | the set of UAV servers, E = {1, …, e, …, E}, where e = (R_e, P_e, r, L_e) |
| R_e | the U2V wireless link transmission rate of UAV node e |
| P_e | the CPU processing speed of UAV server e |
| r | the radius of the coverage zone of UAV server e |
| L_e | L_e = (x_e, 0), the coordinates of UAV server e |
| W | the U2U wireless link transmission rate between UAV nodes |
| V | the set of vehicles, V = {1, …, v, …, V}, where v = (vs_v, vd_v, L_v) |
| vs_v | the speed of vehicle v |
| vd_v | the travel direction of vehicle v |
| L_v | L_v = (x_v, y_v), the coordinates of vehicle v |
| K | the set of generated tasks, K = {1, …, k, …, K}, where k = (v, I_k, O_k, C_k, T_k^max) is a task generated by vehicle v |
| I_k | the input data volume of task k |
| O_k | the output data volume of task k |
| C_k | the computational workload of task k |
| T_k^max | the deadline of task k |
| S_e | the set of containers on UAV server e, S_e = {1, …, s, …, S}, where s = (θ_s, P_s) |
| θ_s | the instance startup time required by container s |
| P_s | the CPU processing speed of container s |
| T_{k,e} | the completion time of task k on UAV node e |
| ST_{k,s} | the start time of task k on container s |
| T_{k,e}^{up} | the time for uploading task k to UAV node e |
| T_{k,e1,e2}^{trans} | the time for transmitting task k between UAV nodes e1 and e2 |
| T_{k,e}^{com} | the computation time of task k on UAV node e |
| T_{k,e}^{down} | the time for downloading the result of task k back to the vehicle |
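For readers who want to prototype the system model, the notation in Table 1 maps naturally onto simple record types. The sketch below is illustrative only, not the authors' implementation; the field names, units, and the completion-time decomposition are assumptions suggested by the notation.

```python
from dataclasses import dataclass, field

@dataclass
class Container:             # s = (theta_s, P_s)
    startup_time: float      # theta_s, instance startup time (s)
    cpu_speed: float         # P_s, CPU processing speed (cycles/s)

@dataclass
class UAVServer:             # e = (R_e, P_e, r, L_e)
    u2v_rate: float          # R_e, U2V link transmission rate (bits/s)
    cpu_speed: float         # P_e, CPU processing speed (cycles/s)
    radius: float            # r, coverage radius (m)
    position: tuple          # L_e = (x_e, 0)
    containers: list = field(default_factory=list)   # S_e

@dataclass
class Vehicle:               # v = (vs_v, vd_v, L_v)
    speed: float             # vs_v (m/s)
    direction: int           # vd_v, e.g. +1 / -1 for the two travel directions
    position: tuple          # L_v = (x_v, y_v)

@dataclass
class Task:                  # k = (v, I_k, O_k, C_k, T_k^max)
    vehicle: Vehicle         # generating vehicle v
    input_size: float        # I_k, input data volume (bits)
    output_size: float       # O_k, output data volume (bits)
    workload: float          # C_k, computational workload (CPU cycles)
    deadline: float          # T_k^max, deadline (s)

def completion_time(task: Task, e: UAVServer, s: Container, u2u_rate: float) -> float:
    """One plausible reading of T_{k,e} as the sum of its notation components
    (upload + one U2U relay hop + computation + result download); the paper's
    exact timing model, including waiting and container startup, may differ."""
    t_up = task.input_size / e.u2v_rate        # T_{k,e}^{up}
    t_trans = task.input_size / u2u_rate       # T_{k,e1,e2}^{trans}
    t_com = task.workload / s.cpu_speed        # T_{k,e}^{com}
    t_down = task.output_size / e.u2v_rate     # T_{k,e}^{down}
    return t_up + t_trans + t_com + t_down
```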
Table 2. Experiment Parameters.

| Parameter | Value |
| --- | --- |
| Number of UAV servers | 5 |
| U2V link transmission rate | [2, 6] Mbps |
| CPU processing speed of UAV servers | [1, 3] × 10^11 CPU cycles/s |
| Radius r of UAV coverage zones | 125 m |
| Number of vehicles | {60, 70, 80, 90, 100} |
| Input task data volume | {[0.3, 0.5], [0.5, 0.8], [0.8, 1], [1, 1.5]} Mb |
| Output task result data volume | [0.25, 0.35] Mb |
| Task computational workload | [2, 3] × 10^10 CPU cycles |
| Task deadline intervals | {[0.2, 0.3], [0.3, 0.5], [0.5, 1], [0.5, 1.5]} s |
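The setup in Table 2 can likewise be expressed as a simulation configuration. The sketch below samples one test instance from the listed intervals; the dictionary keys and the uniform sampling are assumptions for illustration, not part of the original experiment code.

```python
import random

# Experiment parameters transcribed from Table 2 (names are illustrative).
CONFIG = {
    "num_uavs": 5,
    "u2v_rate_mbps": (2, 6),
    "uav_cpu_cycles_per_s": (1e11, 3e11),
    "coverage_radius_m": 125,
    "num_vehicles_options": [60, 70, 80, 90, 100],
    "input_size_mb_intervals": [(0.3, 0.5), (0.5, 0.8), (0.8, 1.0), (1.0, 1.5)],
    "output_size_mb": (0.25, 0.35),
    "workload_cycles": (2e10, 3e10),
    "deadline_s_intervals": [(0.2, 0.3), (0.3, 0.5), (0.5, 1.0), (0.5, 1.5)],
}

def sample_task(input_interval, deadline_interval, rng=random):
    """Draw one task's parameters uniformly from the chosen intervals."""
    return {
        "input_mb": rng.uniform(*input_interval),
        "output_mb": rng.uniform(*CONFIG["output_size_mb"]),
        "workload_cycles": rng.uniform(*CONFIG["workload_cycles"]),
        "deadline_s": rng.uniform(*deadline_interval),
    }

# Example instance: 80 vehicles, medium task sizes, the tightest deadline range.
tasks = [sample_task((0.5, 0.8), (0.2, 0.3)) for _ in range(80)]
```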
