Article

A DAG-Based Offloading Strategy with Dynamic Parallel Factor Adjustment for Edge Computing in IoV

School of Computer and Artificial Intelligence, Beijing Technology and Business University, Beijing 100048, China
*
Author to whom correspondence should be addressed.
Sensors 2025, 25(19), 6198; https://doi.org/10.3390/s25196198
Submission received: 5 September 2025 / Revised: 30 September 2025 / Accepted: 5 October 2025 / Published: 6 October 2025
(This article belongs to the Section Internet of Things)

Abstract

With the rapid development of Internet of Vehicles (IoV) technology, massive data are continuously integrated into intelligent transportation systems, making efficient computing resource allocation a critical challenge for enhancing network performance. Due to the dynamic and real-time characteristics of IoV tasks, existing static offloading strategies fail to effectively cope with the complexity caused by network fluctuations and vehicle mobility. To address this issue, this paper proposes a task offloading algorithm based on the dynamic adjustment of the parallel factor in directed acyclic graphs (DAG), referred to as Dynamic adjustment of Parallel Factor (DPF). By leveraging edge computing, the proposed algorithm adaptively adjusts the parallel factor according to the dependency relationships among subtasks in the DAG, thereby optimizing resource utilization and reducing task completion time. In addition, the algorithm continuously monitors network conditions and vehicle states to dynamically schedule and offload tasks according to real-time system requirements. Compared with traditional static strategies, the proposed method not only significantly reduces task delay but also improves task success rates and overall system efficiency. Extensive simulation experiments conducted under three different task load conditions demonstrate the superior performance of the proposed algorithm. In particular, under high-load scenarios, the DPF algorithm achieves markedly better task completion times and resource utilization compared to existing methods.

1. Introduction

With the rapid development of the Internet of Vehicles (IoV) and intelligent transportation systems, the demand for communication and computation in vehicles has grown explosively [1]. As an integral part of next-generation intelligent transportation, IoV enables real-time information exchange through vehicle-to-vehicle (V2V), vehicle-to-infrastructure (V2I), and vehicle-to-cloud (V2C) communications. In this process, vehicles are not only participants in the transportation system but also act as both data producers and consumers. However, due to the limited computing capabilities of onboard devices, it is difficult to independently handle computation-intensive tasks such as autonomous driving decision-making, route planning, video analysis, and fleet coordination [2]. Therefore, how to allocate computing resources efficiently and guarantee low-latency task execution in highly dynamic vehicular environments has become a pressing challenge.
Cloud computing has been widely employed in IoV-related applications to support task processing [3]. Nevertheless, its centralized architecture results in long physical distances between cloud servers and vehicles, introducing significant transmission latency and making it unsuitable for latency-sensitive tasks. In contrast, edge computing pushes computation and storage resources closer to vehicles, for example, at roadside units (RSUs), thereby reducing latency, alleviating bandwidth pressure, and improving computational efficiency [4,5]. Studies have demonstrated that tasks such as real-time route planning [6] and obstacle detection [7] can be offloaded to edge nodes via RSUs, which alleviates the computational burden on vehicles and improves both responsiveness and reliability [8,9].
Against this background, task offloading has emerged as a crucial research direction in IoV. An effective task offloading strategy can significantly reduce the computational load of vehicles while leveraging the powerful computing capabilities of edge servers to shorten execution time and improve service quality [10]. However, task offloading in IoV faces multiple challenges. On one hand, IoV environments are highly dynamic, with rapidly changing topologies due to vehicular mobility, which complicates task scheduling. On the other hand, vehicular tasks are heterogeneous and often subject to strict delay constraints, particularly in autonomous driving scenarios where tasks such as route planning and obstacle detection require real-time execution [11]. Moreover, factors such as network latency, the heterogeneous computing capacity of edge nodes, and resource contention significantly affect the performance of offloading strategies [12,13,14].
As vehicular applications become more sophisticated, the size and structure of generated tasks are also increasing in complexity. A promising approach to address this challenge is to decompose large tasks into subtasks for more efficient scheduling and execution. In this regard, Directed Acyclic Graph (DAG)-based task decomposition models have been widely adopted [15,16,17]. The DAG model explicitly represents task dependencies: dependent tasks are executed sequentially, while independent tasks can be executed in parallel, thus improving both resource utilization and scheduling flexibility.
In DAG-based models, the parallel factor is a critical parameter that determines the number of subtasks that can be executed concurrently. An appropriately set parallel factor can maximize concurrency while satisfying task dependency constraints, thereby enhancing resource utilization [18]. However, setting the parallel factor too high leads to excessive resource contention and longer completion times, whereas setting it too low results in underutilization of computing resources. Recent studies have indicated that in highly dynamic IoV environments, DAG-based task offloading combined with dynamic parallel factor adjustment can significantly improve system performance [19]. For instance, when network load is low, the system can increase the parallel factor to accelerate execution, whereas when resources are heavily utilized, the parallel factor can be reduced to alleviate contention and optimize latency.
Motivated by these insights, this paper proposes a Dynamic adjustment of Parallel Factor (DPF) algorithm. The algorithm introduces a dynamic adjustment mechanism into DAG-based task offloading to adaptively regulate concurrency while maintaining task dependencies. By doing so, the DPF algorithm improves execution efficiency, enhances resource utilization, and reduces overall delay under constrained edge computing resources. Experimental results demonstrate that the proposed algorithm achieves excellent performance under various network conditions and effectively addresses the challenges of high mobility and resource contention in IoV.

2. Related Work

With the development of Internet of Vehicles (IoV) technology, researchers have conducted in-depth explorations of the applications of edge computing in IoV environments. The primary characteristics of IoV are its dynamic nature and real-time requirements, which pose new challenges to computing architectures. To address these challenges, many studies have focused on optimizing task offloading and resource management strategies.

2.1. Offloading Scenario

Mobile edge computing (MEC) offloading can be categorized into two primary scenarios: single-server multi-user and multi-server multi-user. For the first scenario, Singh et al. [20] developed an energy-efficient task offloading strategy (EETOS) based on the Levy-flight moth flame optimization (LMFO) algorithm. This strategy aims to minimize energy consumption and end-to-end delay for IoT sensor applications in fog-cloud computing systems. Li et al. [3] proposed an adaptive transmission strategy based on cloud computing for task offloading and transmission in an IoV architecture. By dynamically assigning tasks to different cloud link lists and considering node characteristics for distributed processing, this strategy optimizes transmission delay and resource utilization. Ali et al. [21] introduced a novel task scheduling algorithm that addresses energy consumption and execution time issues in mobile devices through an energy-efficient dynamic decision-making approach. This model rapidly adapts to cloud computing tasks while optimizing energy and time computations for mobile devices.
For the second scenario, Xu et al. [9] proposed a cloud computing offloading strategy based on a multi-strategy cooperation-seal optimization algorithm (M-TSA). This strategy integrates task priority and computational offloading node prediction, simulating vehicle movement under real-world road conditions to optimize task offloading delay, energy consumption, and efficiency. Shu et al. [22] introduced an EFO algorithm-based offloading scheme for multi-user edge computing systems, which efficiently offloads the most suitable IoT tasks or subtasks to edge servers to minimize the expected execution time. Shao et al. [23] proposed a dynamic edge-end computing collaboration architecture for urban IoV, offering a more flexible and adaptive task allocation approach. In this architecture, edge nodes and vehicle terminals collaborate to process tasks. The paper considers task delay and overhead, task transmission models, task priority, and the computational capacities of edge nodes and vehicle terminals. It defines task utility and formulates the task allocation problem as an optimization model.
However, none of the above studies have addressed collaborative computation among edge servers. If effectively utilized, inter-server coordination could further enhance the performance of offloading optimization.

2.2. Task Model

In the current research, tasks are classified into two types: indivisible tasks and divisible tasks.
For indivisible tasks, Zhu et al. [24] proposed a task offloading strategy based on cloud-fog collaborative computing. This strategy introduces a vehicle-to-vehicle (V2V)-assisted task forwarding mechanism and designs a forwarding vehicle prediction algorithm based on environmental information. Additionally, a multi-strategy improved genetic algorithm (MSI-GA) is proposed, which initializes populations using a chaotic sequence, optimizes adaptive operators by comprehensively considering influencing factors, and incorporates Gaussian perturbation to enhance the local optimization capability of the algorithm. Plachy et al. [25] proposed a low-complexity computing and communication resource allocation method for offloading real-time computational tasks generated by mobile users. This method utilizes probabilistic modeling of user mobility to pre-allocate computing resources at base stations and selects appropriate communication paths between users and base stations with pre-allocated computing resources.
For divisible tasks, Du et al. [26] proposed a blockchain-based directed acyclic graph (DAG) structure for secure and efficient information sharing in IoV. The paper also designed a driving decision-based tip selection algorithm (DDB-TSA) and a reputation-based rate control strategy (RBRCS) to enhance information-sharing security. Yan et al. [27] proposed an MEC network comprising two users, where each wireless device (WD) has a series of tasks to execute. Considering task dependencies between the two WDs—where one WD’s task input requires the final output of the other—the paper investigates the optimal task offloading strategy and resource allocation (e.g., offloading transmission power and local CPU frequency) to minimize the weighted sum of WDs’ energy consumption and task execution time.
Our work further extends divisible-task research by considering dependencies among subtasks and exploring parallel offloading to enhance system utilization.

2.3. Offloading Strategy

Dai et al. [28] proposed a computation offloading scheme based on deep reinforcement learning (DRL). This scheme employs a deep Q-network (DQN) to adapt to the dynamic vehicular edge computing environment, quickly learning offloading decisions by balancing the exploration and exploitation process to minimize the average task processing delay. Sun et al. [29] designed an Adaptive Learning Task Offloading (ALTO) algorithm based on the multi-armed bandit theory to minimize the average offloading delay. ALTO operates in a distributed manner without requiring frequent state exchanges and incorporates input awareness and occurrence awareness to adapt to dynamic environments. Misra et al. [30] proposed a three-tier architecture to address task offloading in a mobile cloud environment. This architecture, named “Selection of Best Destination to Offload,” attempts to offload tasks first to nearby mobile devices and edge cloud servers before considering remote cloud servers. The first two tiers consist of nearby mobile devices and edge cloud servers, while the third tier comprises remote cloud servers.
In summary, resource allocation in the task offloading process is a critical research direction in vehicular networks. The computing, storage, and network bandwidth resources of edge computing nodes are limited. Therefore, when handling the large number of tasks generated by vehicles, efficient resource allocation is crucial [31]. Studies have shown that optimizing resource allocation strategies can not only enhance task processing efficiency but also significantly reduce task transmission latency and system energy consumption [32]. To achieve efficient resource allocation, the system must monitor network conditions, task loads, and the resource utilization of edge nodes in real-time [33]. Based on this information, the system can dynamically adjust task offloading decisions and resource allocation strategies to maximize task success rates and resource utilization efficiency under different scenarios [34]. Furthermore, resource allocation must consider vehicle mobility, as high-speed movement may disrupt communication links with edge nodes. Therefore, offloading strategies must be highly robust and flexible [35].
Although significant progress has been made in vehicular networks and edge computing research, challenges remain in achieving efficient task offloading and resource allocation in dynamic environments. Building on existing studies, this paper proposes a novel dynamic task offloading algorithm to further enhance system performance in vehicular networks.

3. System Model

In this section, we provide a detailed introduction to the network model, task model, and communication model, and then present our optimization problem.

3.1. Network Model

This paper proposes a vehicular task offloading system that integrates cloud computing and edge computing, as shown in Figure 1, featuring a three-layer network architecture. Vehicles act as task initiators and can send offloading requests to the cloud via the nearest edge node. The cloud then generates an offloading strategy and assigns tasks to target nodes, which may include the vehicle’s local processor, edge nodes, base stations, or cloud servers. Once the task is completed, the execution results are returned to the vehicle.
The edge computing layer consists of multiple edge nodes. As shown in Figure 2, every three adjacent edge nodes form a small-scale network cluster, and a base station monitors the status of the nodes and manages task scheduling within the cluster. Base stations are connected to edge nodes via optical fiber, and the cloud server monitors the operational status of all base stations to enable inter-cluster scheduling. As edge nodes, base stations, and the cloud server can communicate via optical fiber, vehicles are considered capable of offloading subtasks to any available server across the network, ensuring efficient task processing.

3.2. Communication Model

The edge server layer consists of a collection of edge nodes and base stations. Each vehicle has a set of tasks $S = \{1, 2, \ldots, j, \ldots, s\}$ to be executed. The execution process of the $j$-th task on vehicle $i$ is modeled as a Directed Acyclic Graph (DAG), denoted as $G_i^j = (V_i^j, E_i^j)$, where $V_i^j$ represents the set of subtasks of task $j$, and $E_i^j$ denotes the dependency relationships among these subtasks. It is assumed that each task can be decomposed into subtasks according to computational requirements, with each subtask representing the smallest unit of computation. Subtasks can either be executed locally on the vehicle or offloaded to any available server in the network, depending on the offloading strategy.
For local execution, assuming the vehicle has a computational capability of $f_i$, the execution time of subtask $T_{i,j}^k$ on vehicle $i$ is given by:
$$t_{i,j}^{k} = \frac{c_{i,j}^{k}}{f_i}.$$
The corresponding energy consumption for local execution is:
$$e_{i,j}^{k} = c_{i,j}^{k} \times \delta_i,$$
where $\delta_i$ denotes the energy consumption per CPU cycle of the onboard processor of vehicle $i$.
In this paper, it is assumed that servers of the same type have identical computing capabilities, while different types differ. Let $f = \{f_e, f_b, f_c\}$ represent the computational capacities of edge servers, base stations, and cloud servers, respectively.
For subtasks offloaded to servers, the total processing time is expressed as:
$$w_{i,j}^{k} = \frac{c_{i,j}^{k}}{f},$$
$$t_{i,j}^{k} = \frac{d_{i,j}^{k}}{r_{i,e}} + t_{i,j}^{k,\mathrm{Queue}} + w_{i,j}^{k},$$
where $w_{i,j}^{k}$ is the computation time on the target server, $r_{i,e}$ is the transmission rate between vehicle $i$ and the selected server, and $t_{i,j}^{k,\mathrm{Queue}}$ is the queuing delay at the server.
Since edge and cloud servers are assumed to be continuously powered, only the energy consumption at the vehicle side during the offloading process is considered:
$$e_{i,j}^{k} = q_i \times \frac{d_{i,j}^{k}}{r_{i,n}},$$
where $q_i$ represents the transmission power of vehicle $i$.
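As a concrete illustration of Eqs. (1)–(5), the following Python sketch computes the local and offloaded delay and the corresponding energy for one subtask. All numeric values (link rate, queuing delay, data size) are illustrative assumptions, not parameters taken from the paper.

```python
# Sketch of the delay/energy model in Eqs. (1)-(5); every numeric
# value below is an illustrative assumption.

def local_time(c, f_local):
    """Eq. (1): local execution time = required CPU cycles / local frequency."""
    return c / f_local

def local_energy(c, delta):
    """Eq. (2): local energy = CPU cycles * energy per cycle."""
    return c * delta

def offload_time(d, r, queue, c, f_server):
    """Eqs. (3)-(4): transmission + queuing + server computation time."""
    w = c / f_server          # Eq. (3): computation time on the target server
    return d / r + queue + w  # Eq. (4): total processing time

def offload_energy(q, d, r):
    """Eq. (5): vehicle-side transmission energy while offloading."""
    return q * d / r

# Illustrative numbers: a subtask needing 0.25e9 CPU cycles with ~1 MB of input.
c = 0.25e9            # required CPU cycles
d = 8e6               # input data size in bits
t_loc = local_time(c, f_local=2e9)            # 2 GHz vehicle CPU
t_off = offload_time(d, r=20e6, queue=0.01,   # assumed 20 Mbps link, 10 ms queue
                     c=c, f_server=20e9)      # 20 GHz edge server
print(t_loc, t_off)
```

Note that with these assumed numbers local execution is faster, because the transmission delay dominates; the comparison flips for larger computations or faster links, which is exactly the trade-off the offloading strategy must weigh.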

3.3. Problem Formulation

The ultimate goal of our proposed model is to minimize the average task processing time for all vehicles, thereby meeting stringent delay requirements. To achieve this, we first define the actual start time (AST) of subtask $k$ offloaded to server $l$, denoted as $AST_{i,j}(k,l)$, as follows:
$$AST_{i,j}(k,l) = \max\left\{ avail[l],\ \max_{k' \in pred(k)} \left( AET(k') + C_{k',k} \right) \right\}.$$
Here, $pred(k)$ represents the set of immediate predecessor subtasks of subtask $k$, and $C_{k',k}$ denotes the communication cost between subtask $k$ and its predecessor $k'$. The term $avail[l]$ refers to the earliest available time of the computing resource, which can be either the vehicle itself (local execution) or the target server $l$.
Taking into account the queuing delay and execution order, the actual end time (AET) of subtask $k$, denoted as $AET_{i,j}(k)$, is defined as:
$$AET_{i,j}(k) = \min_{l}\left\{ w_k^l + AST_{i,j}(k,l) \right\},$$
where $w_k^l$ is the execution time of subtask $k$ on computing node $l$, which can be either a server or the vehicle itself. The actual completion time of the subtask is determined once all of its predecessors have completed execution and the required resources become available.
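The interplay of Eqs. (6) and (7) can be sketched over a toy DAG with two candidate nodes, processing subtasks in topological order and charging the communication cost only when a predecessor ran on a different node (as Eq. (8) later formalizes). The DAG, execution times, and communication cost here are illustrative assumptions.

```python
# Sketch of Eqs. (6)-(7): pick, for each subtask, the node minimising
# its finish time (AET); all numbers are illustrative assumptions.

preds = {"a": [], "b": ["a"], "c": ["a"], "d": ["b", "c"]}
w = {  # w[k][l]: execution time of subtask k on node l
    "a": {0: 1.0, 1: 2.0}, "b": {0: 2.0, 1: 1.0},
    "c": {0: 1.5, 1: 1.5}, "d": {0: 1.0, 1: 1.0},
}
comm = 0.5                  # assumed cost C_{k',k} across different nodes
avail = {0: 0.0, 1: 0.0}    # earliest available time of each node

AET, assign = {}, {}
for k in ["a", "b", "c", "d"]:          # topological order
    best = None
    for l in sorted(avail):
        # Eq. (6): start once the node is free and every predecessor has
        # finished and (if on another node) transferred its output.
        ast = max([avail[l]] + [AET[p] + (0.0 if assign[p] == l else comm)
                                for p in preds[k]])
        # Eq. (7): choose the node with the minimum finish time w + AST.
        if best is None or ast + w[k][l] < best[0]:
            best = (ast + w[k][l], l)
    AET[k], assign[k] = best
    avail[best[1]] = best[0]            # node is busy until the subtask ends

print(AET, assign)
```

In this toy instance the independent subtasks `b` and `c` land on different nodes and run concurrently, which is the effect the parallel factor in Section 4 is designed to exploit.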

4. Dynamic Adjustment of Parallel Factor Offloading Strategy

In this section, we propose DPF, an efficient DAG-based task offloading algorithm that dynamically selects target servers for subtasks. A task is modeled as a DAG and decomposed into multiple subtasks with dependency relations, as shown in Figure 3. For example, task $T_1$ represents a complete vehicular task, which is divided into a set of subtasks $\{t_1, t_2, \ldots, t_8\}$. $t_1$ must be executed first; $t_2$ and $t_3$ can then run in parallel, while $t_4$, $t_5$, and $t_6$ are processed only after their respective predecessors among $t_2$ and $t_3$ have completed. Subtask $t_7$ depends on $t_4$ and $t_5$, and $t_8$ requires the outputs of $t_6$ and $t_7$, so the task completes only after all paths are executed. By adjusting the parallel factor, DPF balances computation across servers, reduces waiting time, and minimizes overall processing delay.
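A dependency structure like the one described above can be encoded as a predecessor map, from which a valid execution order follows by topological sorting. The sketch below uses Python's standard-library `graphlib`; the specific edges for $t_4$–$t_6$ are our reading of the description, not taken verbatim from the figure.

```python
# Sketch of a Figure-3-style task DAG: preds[x] lists the subtasks that
# must finish before x starts (t4/t5 after t2, t6 after t3 is assumed).
from graphlib import TopologicalSorter

preds = {
    "t1": set(),
    "t2": {"t1"}, "t3": {"t1"},
    "t4": {"t2"}, "t5": {"t2"}, "t6": {"t3"},
    "t7": {"t4", "t5"},
    "t8": {"t6", "t7"},
}

# static_order() yields every subtask after all of its predecessors,
# so t1 comes first and t8 last.
order = list(TopologicalSorter(preds).static_order())
print(order)
```

Any subtasks that appear between the same pair of "barrier" nodes and share no edge (here $t_2$/$t_3$, or $t_4$/$t_5$/$t_6$) are the candidates the parallel factor can schedule concurrently.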

4.1. Subtask Scheduling and Prioritization

In edge computing scenarios, complex vehicular tasks are typically decomposable into multiple interdependent subtasks. The DAG is utilized to represent these dependencies, guiding the assignment and execution of subtasks across heterogeneous computing nodes, including local devices, edge servers, and cloud servers.
Let a complex task $S$ be decomposed into a set of subtasks $T$, and the dependency among these subtasks be represented as a DAG $G = (V, E)$, where $V$ denotes the set of subtasks (each represented as a vertex), and $E$ denotes the set of directed edges indicating dependency relations. A directed edge from subtask $T_{k'}$ to $T_k$ indicates that $T_k$ can only be executed after $T_{k'}$ has completed.
The execution order of subtasks has a direct impact on the total latency. Therefore, we first model the scheduling order of subtasks. Let $data(k',k)$ represent the data volume transmitted from subtask $k'$ to $k$. The communication delay between these two subtasks can be modeled as:
$$c_{k',k} = \begin{cases} 0, & \text{if } a_{i,k'} = a_{i,k}, \\ \dfrac{data(k',k)}{r_{i,e}(a)}, & \text{otherwise}. \end{cases}$$
Here, $a_{i,k}$ denotes the offloading decision for subtask $k$, and $r_{i,e}(a)$ is the transmission rate between vehicle $i$ and the edge server. If both subtasks are assigned to the same server, the communication overhead can be ignored.
We then compute the priority of each subtask based on scheduling urgency using a recursive ranking function:
$$rank(k) = w_{i,k}^{l} + \max_{k' \in succ(k)} \left( C_{k,k'} + rank(k') \right),$$
where $succ(k)$ denotes the set of immediate successor subtasks of $k$.
While this ranking reflects computation and communication cost, task urgency must also be considered in vehicular networks. We categorize tasks into four urgency levels: high ( u h ), medium ( u m ), low ( u l ), and non-urgent ( u n ). Each task inherits a fixed urgency level, which is uniformly applied to its subtasks.
We define the urgency ratio $RTRU(k)$ for each subtask as:
$$RTRU(k) = \frac{T_{\mathrm{estimated}}(k) - T_{\mathrm{current}}(k)}{T_{\mathrm{deadline}}(k) - T_{\mathrm{current}}(k)},$$
where $T_{\mathrm{deadline}}(k)$ is the deadline, $T_{\mathrm{current}}(k)$ is the current time, and $T_{\mathrm{estimated}}(k)$ is the estimated completion time of subtask $k$. A larger value of $RTRU(k)$ indicates higher urgency.
To combine task complexity and urgency, we propose a weighted priority score:
$$R(k) = \alpha \cdot rank(k) + \beta \cdot RTRU(k),$$
where $\alpha$ and $\beta$ are tunable parameters that control the trade-off between computational priority and urgency level, depending on system-specific performance objectives. By sorting subtasks based on their priority $R(k)$ and intelligently mapping them to suitable computing nodes, our objective is to minimize the overall task completion time. This optimization can be formulated as:
$$\min \max_{k} R(k) \, t_{i,j}^{k}.$$
By jointly considering the execution time $t_{i,j}^{k}$ and the priority $R(k)$, the system can achieve global optimization of task scheduling and offloading decisions, rather than merely optimizing individual task execution locally. The DPF algorithm embodies this principle by ensuring that higher-priority tasks, which are more time-sensitive, are preferentially executed or offloaded to computing nodes with lower latency.
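The prioritization step can be sketched as follows, assuming the weighted combination $R(k) = \alpha \cdot rank(k) + \beta \cdot RTRU(k)$; the rank values, timing numbers, and weights below are illustrative assumptions.

```python
# Sketch of Eqs. (10)-(11): urgency ratio plus weighted priority score.
# All numbers (ranks, deadlines, alpha/beta) are illustrative assumptions.

def rtru(t_estimated, t_current, t_deadline):
    """Eq. (10): urgency ratio; values near 1 mean the deadline is tight."""
    return (t_estimated - t_current) / (t_deadline - t_current)

def priority(rank, urgency, alpha=0.6, beta=0.4):
    """Eq. (11): weighted trade-off between cost rank and urgency."""
    return alpha * rank + beta * urgency

subtasks = {
    # name: (rank(k), RTRU(k))
    "t1": (5.0, rtru(4.0, 0.0, 10.0)),   # expensive chain, slack deadline
    "t2": (3.0, rtru(9.0, 0.0, 10.0)),   # cheaper chain, nearly due
}
# Higher R(k) is scheduled first.
ranked = sorted(subtasks, key=lambda k: priority(*subtasks[k]), reverse=True)
print(ranked)
```

In practice the two terms live on different scales, so $rank(k)$ and $RTRU(k)$ would typically be normalized before mixing; the weights $\alpha$, $\beta$ then express a pure preference between cost and urgency.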

4.2. Parallel Factor and Load-Aware Scheduling Model

In this section, we introduce the concept of a parallel factor, which refers to the degree of parallelism among mutually independent subtasks in a task's DAG. By adjusting the parallel factor of each task DAG, the number of executable subtasks at a given time can be increased, thereby expanding the pool of offloading candidates and enabling the selection of a more optimal offloading strategy to reduce overall task processing time.
To monitor the computational load of each server node, we propose a theoretical load model. Let the computing capacity of a server node $l$ be denoted by $C_l$, representing the number of tasks it can process per second. The current utilization of node $l$ is defined as:
$$U_l = \frac{n_l}{C_l},$$
where $n_l$ is the number of tasks currently being processed. When $U_l > 1$, the server is considered overloaded, while $U_l < 1$ indicates underutilization. The system aims to maintain each node's utilization near 1 by dynamically adjusting the parallel factor.
In practical terms, we define the utilization $U_l(t)$ of node $l$ at time $t$ as the ratio of its consumed computing resources to its total capacity, based on the subtasks being processed. Let $T_l(t)$ denote the set of subtasks running on node $l$ at time $t$. The utilization can be computed as:
$$U_l(t) = \frac{\sum_{k \in T_l(t)} C_k}{C_l},$$
where $C_k$ denotes the computational demand of subtask $k$ on node $l$. By continuously monitoring $U_l(t)$, the system can dynamically perceive the computational status of each node and adjust the scheduling strategy and parallel factor accordingly.
From Equation (7), we obtain the actual execution end time $AET_{i,j}(k)$ of subtask $k$ on server node $l$, which includes both queuing and computation time. The goal of the DPF algorithm is to achieve a relatively balanced task completion time across all computing nodes.
To do this, we first calculate the total task completion time $TCT_l(t)$ of server $l$ at time $t$:
$$TCT_l(t) = AET_{i,j}(k),$$
where $k$ is the last subtask in node $l$'s execution queue at time $t$. Hence, the time at which this subtask finishes execution reflects the total processing time of node $l$.
We then calculate the average task completion time across all $n$ nodes:
$$\overline{TCT(t)} = \frac{1}{n} \sum_{l=1}^{n} TCT_l(t),$$
and the variance of completion time among all nodes:
$$TV^2 = \frac{1}{n-1} \sum_{l=1}^{n} \left( TCT_l(t) - \overline{TCT(t)} \right)^2.$$
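The load indicators above reduce to a few lines of code; the node capacities, running-subtask demands, and completion times below are illustrative assumptions.

```python
# Sketch of the load indicators: per-node utilization (Eq. for U_l(t)),
# mean completion time, and its sample variance TV^2.
# All numbers are illustrative assumptions.
from statistics import mean, variance

capacity = {0: 10.0, 1: 10.0, 2: 10.0}          # C_l for each node
running  = {0: [3.0, 4.0], 1: [9.0, 3.0], 2: [2.0]}  # C_k of running subtasks

# Utilization: demand of running subtasks over capacity; > 1 is overload.
U = {l: sum(running[l]) / capacity[l] for l in capacity}
overloaded = [l for l in U if U[l] > 1.0]

tct = [4.2, 4.0, 3.8]        # per-node TCT_l(t)
tct_mean = mean(tct)         # average completion time
tv2 = variance(tct)          # sample variance (divides by n - 1)

print(U, overloaded, tct_mean, tv2)
```

A small `tv2` signals that workloads are balanced; when it exceeds the threshold θ used in Algorithm 1, DPF redistributes subtasks toward nodes finishing before the average.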
We define the processing cycle time $PCT_k$ of subtask $k$ as the difference in completion time between it and its predecessor subtask $k'$:
$$PCT_k = AET_{i,j}(k) - AET_{i,j}(k').$$
When a new subtask is offloaded, the increase in a node's workload comprises not only the computation time of that subtask but also the queuing delay caused by earlier subtasks. Therefore, incorporating $PCT_k$ allows for more precise load-balancing adjustments. The complete procedure is illustrated in Algorithm 1.
Algorithm 1: DPF Algorithm
Input: $G_i^j = (V_i^j, E_i^j)$, $d_{i,j}^k$, $c_{i,j}^k$, $f$, $T_{\mathrm{deadline}}(k)$, $R(k)$
Output: $a_{i,j}^k$
1: Initialize the number of tasks and subtasks for each vehicle and generate the corresponding DAG for each task
2: Define $rank(k)$ by Equation (9)
3: Calculate $R(k)$ by Equation (11)
4: for each new subtask $k$ do
5:   while $l < n$ do
6:     Offload $k$ to the corresponding node based on the DAG
7:     if $PCT_k < AET_{i,j}(k)$ and $TV^2 < \theta$ then
8:       Maintain the DAG
9:     else
10:      Offload subtask $k$ to the node $N$ with the minimum AET
11:      if $TV^2 < \theta$ then
12:        Adjust $Q$ and modify the DAG
13:      else
14:        Place $k$ into nodes where $AET_{i,j}(k) < \overline{TCT(t)}$
15:      end if
16:    end if
17:    Send the remaining subtasks to the nodes of this offloading scheme
18:  end while
19: end for
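The re-offloading rule at the heart of Algorithm 1 can be loosely condensed into a few lines: while completion times across nodes stay balanced (variance below θ), a subtask goes to the node with the earliest finish time; otherwise it is placed on a node finishing before the current average. This sketch is a simplification of lines 7–16 of the algorithm and omits the DAG and parallel-factor bookkeeping; the threshold and timing values are assumptions.

```python
# Loose condensation of Algorithm 1's re-offloading branch;
# threshold theta and all timing values are illustrative assumptions.
from statistics import mean, pvariance

def dpf_assign(subtask_time, node_finish, theta=0.5):
    """Place one subtask; returns (chosen node, updated finish times)."""
    tv2 = pvariance(node_finish)                    # imbalance across nodes
    if tv2 < theta:
        # Balanced system: pick the node with the minimum finish time (AET).
        node = node_finish.index(min(node_finish))
    else:
        # Imbalanced: pick a node finishing before the current average.
        avg = mean(node_finish)
        node = next(l for l, t in enumerate(node_finish) if t < avg)
    finish = list(node_finish)
    finish[node] += subtask_time                    # node busy until it ends
    return node, finish

finish = [1.0, 3.0, 1.5]          # current per-node completion times
chosen, finish = dpf_assign(0.8, finish)
print(chosen, finish)
```

Repeating this per subtask keeps the completion-time variance small, which is the balance criterion $TV^2 < \theta$ monitored throughout Section 4.2.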

5. Evaluation Analysis

In the simulation scenario, vehicles travel on a straight, bidirectional road. The scenario includes 100 vehicles [36], each randomly generating 10 to 15 tasks, and each task is by default decomposable into 5 to 15 subtasks. The size of the subtasks and the required CPU cycles are randomly generated within the ranges of 500 KB to 1500 KB and 0.2 GHz to 0.3 GHz, respectively. The transmission power is set to 100 mW, the background noise to −100 dBm, and the wireless channel bandwidth to 20 MHz. The local CPU frequency of the vehicle is 2 GHz, the CPU frequency of the edge service node is 20 GHz [37], the CPU frequency of the base station server is 50 GHz, and the CPU frequency of the cloud server is 200 GHz.

5.1. Task Completion Time

Task completion time is one of the most critical performance indicators in edge computing task offloading, as it directly reflects the system's task processing efficiency.
Figure 4 shows the average task completion time of different algorithms under the three load conditions. In the low-load scenario, all algorithms achieve relatively short completion times due to the abundance of system resources. As the load increases, the advantage of the DPF algorithm becomes more obvious: by decomposing tasks into subtasks and using each node’s computational resources more evenly, the algorithm achieves better load balancing. Compared with the competing methods, DPF reduces task completion time by approximately 0.16 s, 1.17 s, and 0.8 s under low, medium, and high load conditions, respectively. These results demonstrate that DPF can effectively reduce queuing delay by balancing server workloads.
In real IoV scenarios, reducing task completion time by over one second under heavy load is critical for delay-sensitive tasks such as obstacle detection or autonomous driving decisions.

5.2. Servers Utilization

Server utilization refers to the ratio between a server’s actual workload and its maximum processing capacity. Ideally, server utilization should be maintained at an appropriate level to maximize the use of server resources while avoiding overload.
To evaluate server utilization, we monitored the resource usage of ten edge nodes under different task load conditions. Figure 5 presents the server utilization, while Figure 6 shows its variance, reflecting load balancing among nodes. Under high load, the DPF algorithm maintains utilization between 80% and 90%, whereas other algorithms achieve only 65–75%. The utilization variance of DPF is also very low, ranging from 0.001 to 0.003, which is lower than that of the competing algorithm. This is because the DPF algorithm can dynamically adjust the parallel factor, leading to a more balanced use of computational resources across nodes. As a result, each node can achieve its maximum computational potential, thereby avoiding both resource overload and idleness.
In practice, this means DPF maximizes system capacity by keeping all servers consistently well utilized and evenly loaded.

5.3. System Scalability

Due to variations in road length and complexity, the number of deployed edge nodes may differ. To validate the algorithm's performance under different scenario complexities, we increased the number of nodes from 3 to 15 under high-load task conditions. As shown in Figure 7, when only 3 nodes are available, the difference in task completion time compared to the competing algorithms is minimal due to the limited number of available target nodes. As the number of nodes increases, the optimization effect of the DPF algorithm on task completion time becomes more pronounced. With 6 nodes available, the increased number of offloading targets allows the DPF algorithm to demonstrate its advantage in adaptive dynamic adjustment, achieving a task completion time 0.23 s faster than the EFO algorithm. As the number of nodes continues to grow, DPF achieves more rational resource allocation. For 10 or more nodes, the greater number of available offloading targets further enhances the benefits of dynamic adjustment, resulting in a 7.53% performance improvement over the M-TSA algorithm, with the optimization trend continuing.
This demonstrates that DPF is applicable to scenarios of varying complexity, with its advantages becoming more prominent as the number of available nodes increases. This indicates that in large-scale vehicular networks, DPF can effectively leverage additional edge nodes to maintain performance improvements as the system scales.

5.4. Task Success Rate

Task success rate refers to the proportion of tasks successfully completed within the specified time during the offloading process. A high task success rate indicates that the system can process tasks in a timely manner, avoiding failures caused by insufficient resources or excessive delays. In edge computing systems, the task success rate not only reflects the system’s computational capability but also reveals the effectiveness of resource scheduling. Therefore, it serves as a crucial metric for evaluating system stability and reliability.
To evaluate the task success rate, we conducted comparative tests under the four priority levels, where higher-priority tasks are typically related to driving safety and require preferential processing. The proportion of tasks finished before their deadlines is plotted in Figure 8. Across all load levels, DPF consistently provides higher reliability: over 97% at light load, above 94% at medium load, and still above 90% at heavy load. These improvements stem from the adaptive dynamic adjustment mechanism of the DPF algorithm, which prioritizes urgent tasks while balancing server workloads, thereby lowering the risk of deadline violations. As a result, DPF ensures dependable performance for time-critical vehicular applications even in congested environments.

5.5. System Regulation Performance

In a real-world edge computing environment, node instability and external interference are inevitable. Therefore, an algorithm must be able to adapt to fluctuations in computational power. By simulating interference scenarios, we can evaluate the algorithm’s performance under complex conditions, providing both theoretical and empirical support for its practical deployment. This section introduces node interference and examines how the system offloads tasks from constrained nodes to other available nodes, thereby assessing the algorithm’s ability to adjust task distribution and load balancing.
Figure 9 and Figure 10 illustrate the task completion time and success rate under high-load conditions when a node experiences interference. Comparing Figure 9 with Figure 4c shows that task completion time increased by 2.23 s after interference, yet it remains 15.92% lower than that of the M-TSA algorithm. Comparing Figure 10 with Figure 8c shows that the task success rate dropped only slightly, by 1.45% across the four priority levels, and the system still ensures that high-priority tasks receive preferential processing. This resilience arises because the DPF algorithm dynamically adjusts the parallel factor and re-offloads subtasks originally assigned to interfered nodes to other available nodes, thereby minimizing the impact of interference.
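The re-offloading behavior under interference can be sketched as follows. The greedy least-loaded target selection and the fixed per-subtask load increment are simplifying assumptions for illustration, not the exact DPF rule:

```python
def reoffload(assignment, load, interfered):
    """Move subtasks assigned to interfered nodes onto the least-loaded
    healthy node. `assignment` maps subtask -> node; `load` maps node ->
    utilization in [0, 1]. Names are illustrative, not the paper's API."""
    healthy = {n: u for n, u in load.items() if n not in interfered}
    if not healthy:
        raise RuntimeError("no healthy nodes available")
    new_assignment = dict(assignment)
    for task, node in assignment.items():
        if node in interfered:
            target = min(healthy, key=healthy.get)  # least-loaded node
            new_assignment[task] = target
            healthy[target] += 0.1  # assumed per-subtask load increment
    return new_assignment

assignment = {"t1": "n1", "t2": "n2", "t3": "n1"}
load = {"n1": 0.9, "n2": 0.4, "n3": 0.2}
print(reoffload(assignment, load, interfered={"n1"}))
```

Updating the tracked load after each move keeps the redistribution balanced instead of piling every displaced subtask onto a single node, which matches the load-balancing behavior observed in Figures 9 and 10.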
From the above evaluation metrics, it can be seen that the DPF algorithm, through dynamic task scheduling and parallel processing optimization, is able to maintain low task completion time while achieving relatively balanced resource utilization in different network environments.
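As a rough illustration of how a parallel factor can be derived from DAG structure and server availability, the sketch below walks a subtask DAG in topological levels and caps each level's parallelism by the number of idle servers. The min(ready subtasks, idle servers) policy is an assumed simplification of the DPF mechanism, not its exact rule:

```python
from collections import defaultdict, deque

def parallel_factor_schedule(deps, num_servers, busy):
    """Per topological level, parallel factor = min(ready subtasks,
    idle servers). `deps` maps subtask -> list of prerequisites;
    `busy` counts already-occupied servers (illustrative policy)."""
    indeg = defaultdict(int)
    children = defaultdict(list)
    nodes = set(deps)
    for t, prereqs in deps.items():
        for p in prereqs:
            indeg[t] += 1
            children[p].append(t)
            nodes.add(p)
    ready = deque(n for n in nodes if indeg[n] == 0)
    factors = []
    while ready:
        level = list(ready)
        ready.clear()
        idle = max(num_servers - busy, 1)
        factors.append(min(len(level), idle))  # parallel factor this level
        for n in level:  # release subtasks whose prerequisites finished
            for c in children[n]:
                indeg[c] -= 1
                if indeg[c] == 0:
                    ready.append(c)
    return factors

# Diamond DAG: a -> {b, c} -> d, with 2 of 4 servers already busy.
deps = {"b": ["a"], "c": ["a"], "d": ["b", "c"]}
print(parallel_factor_schedule(deps, num_servers=4, busy=2))
```

The middle level (subtasks b and c) is the only one that can benefit from parallel offloading here, and the factor shrinks automatically when fewer servers are idle, which is the intuition behind tying the parallel factor to both subtask dependencies and server utilization.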

6. Conclusions

This paper proposes a DAG-based DPF task offloading algorithm to address the challenges of task offloading in IoV environments. The algorithm dynamically adjusts the degree of parallelism according to subtask dependencies and server utilization, thereby reducing overall task completion time and enhancing system stability. By modeling the task offloading process under a DAG structure and introducing a dynamic parallel factor adjustment mechanism, this paper provides a flexible and efficient solution for vehicular task scheduling. Simulation results under multiple parameter settings demonstrate that the proposed DPF algorithm outperforms benchmark methods in terms of task completion delay, task success rate, and server utilization. In addition, when certain nodes experience interference, the algorithm is still able to maintain stable performance, which verifies its practicality and robustness in dynamic IoV environments. Nevertheless, this work still has several limitations. The simulation environment assumes relatively balanced computational capabilities between vehicles and servers, whereas real vehicular networks are often highly heterogeneous. The frequent-disconnection problem caused by high vehicle mobility has not been explicitly addressed. In addition, the scalability of the algorithm to large-scale or highly complex DAG task graphs requires further validation. These issues will be investigated in our future work.

Author Contributions

Conceptualization, Q.Z.; methodology, Q.Z.; software, W.G.; validation, W.G.; formal analysis, Q.Z. and W.G.; investigation, W.G.; resources, X.L.; data curation, C.G.; writing—original draft preparation, Q.Z.; writing—review and editing, X.L.; visualization, W.G.; supervision, X.L.; project administration, W.G.; funding acquisition, X.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

1. Qureshi, K.N.; Din, S.; Jeon, G.; Piccialli, F. Internet of vehicles: Key technologies, network model, solutions and challenges with future aspects. IEEE Trans. Intell. Transp. Syst. 2020, 22, 1777–1786.
2. Meneguette, R.; De Grande, R.; Ueyama, J.; Filho, G.P.R.; Madeira, E. Vehicular edge computing: Architecture, resource management, security, and challenges. ACM Comput. Surv. (CSUR) 2021, 55, 1–46.
3. Li, B.; Li, V.; Li, M.; Li, J.; Yang, J.; Li, B. An adaptive transmission strategy based on cloud computing in IoV architecture. EURASIP J. Wirel. Commun. Netw. 2024, 2024, 13.
4. Yang, C.; Liu, Y.; Chen, X.; Zhong, W.; Xie, S. Efficient mobility-aware task offloading for vehicular edge computing networks. IEEE Access 2019, 7, 26652–26664.
5. Mao, Y.; Zhang, J.; Letaief, K.B. Dynamic computation offloading for mobile-edge computing with energy harvesting devices. IEEE J. Sel. Areas Commun. 2016, 34, 3590–3605.
6. Yang, J.; Xi, M.; Wen, J.; Li, Y.; Song, H.H. A digital twins enabled underwater intelligent internet vehicle path planning system via reinforcement learning and edge computing. Digit. Commun. Netw. 2024, 10, 282–291.
7. Nagasubramaniam, P.; Wu, C.; Sun, Y.; Karamchandani, N.; Zhu, S.; He, Y. Privacy-Preserving Live Video Analytics for Drones via Edge Computing. Appl. Sci. 2024, 14, 10254.
8. Islam, A.; Debnath, A.; Ghose, M.; Chakraborty, S. A survey on task offloading in multi-access edge computing. J. Syst. Archit. 2021, 118, 102225.
9. Xu, Q.; Zhang, G.; Wang, J. Research on cloud-edge-end collaborative computing offloading strategy in the Internet of Vehicles based on the M-TSA algorithm. Sensors 2023, 23, 4682.
10. Zhao, J.; Li, Q.; Gong, Y.; Zhang, K. Computation offloading and resource allocation for cloud assisted mobile edge computing in vehicular networks. IEEE Trans. Veh. Technol. 2019, 68, 7944–7956.
11. Zhang, D.; Cao, L.; Zhu, H.; Zhang, T.; Du, J.; Jiang, K. Task offloading method of edge computing in internet of vehicles based on deep reinforcement learning. Clust. Comput. 2022, 25, 1175–1187.
12. Raza, S.; Liu, W.; Ahmed, M.; Anwar, M.R.; Mirza, M.A.; Sun, Q.; Wang, S. An efficient task offloading scheme in vehicular edge computing. J. Cloud Comput. 2020, 9, 28.
13. Mu, H.; Wu, S.; He, P.; Chen, J.; Wu, W. Task Similarity-Aware Cooperative Computation Offloading and Resource Allocation for Reusable Tasks in Dense MEC Systems. Sensors 2025, 25, 3172.
14. Huang, B.; Fan, X.; Zheng, S.; Chen, N.; Zhao, Y.; Huang, L.; Gao, Z.; Chao, H.C. Collaborative Sensing-Aware Task Offloading and Resource Allocation for Integrated Sensing-Communication-and Computation-Enabled Internet of Vehicles (IoV). Sensors 2025, 25, 723.
15. Yang, L.; Zhong, C.; Yang, Q.; Zou, W.; Fathalla, A. Task offloading for directed acyclic graph applications based on edge computing in industrial internet. Inf. Sci. 2020, 540, 51–68.
16. Chen, J.; Yang, Y.; Wang, C.; Zhang, H.; Qiu, C.; Wang, X. Multitask offloading strategy optimization based on directed acyclic graphs for edge computing. IEEE Internet Things J. 2021, 9, 9367–9378.
17. Tang, Z.; Lou, J.; Zhang, F.; Jia, W. Dependent task offloading for multiple jobs in edge computing. In Proceedings of the 2020 29th International Conference on Computer Communications and Networks (ICCCN), Honolulu, HI, USA, 3–6 August 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 1–9.
18. Hou, C.; Zhao, Q. Optimal task-offloading control for edge computing system with tasks offloaded and computed in sequence. IEEE Trans. Autom. Sci. Eng. 2022, 20, 1378–1392.
19. Liu, J.; Zhou, A.; Liu, C.; Zhang, T.; Qi, L.; Wang, S.; Buyya, R. Reliability-enhanced task offloading in mobile edge computing environments. IEEE Internet Things J. 2021, 9, 10382–10396.
20. Singh, P.; Singh, R. Energy-efficient delay-aware task offloading in fog-cloud computing system for IoT sensor applications. J. Netw. Syst. Manag. 2022, 30, 14.
21. Ali, A.; Iqbal, M.M.; Jamil, H.; Qayyum, F.; Jabbar, S.; Cheikhrouhou, O.; Baz, M.; Jamil, F. An efficient dynamic-decision based task scheduler for task offloading optimization and energy management in mobile cloud computing. Sensors 2021, 21, 4527.
22. Shu, C.; Zhao, Z.; Han, Y.; Min, G. Dependency-aware and latency-optimal computation offloading for multi-user edge computing networks. In Proceedings of the 2019 16th Annual IEEE International Conference on Sensing, Communication and Networking (SECON), Boston, MA, USA, 10–13 June 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 1–9.
23. Shao, S.; Su, L.; Zhang, Q.; Wu, S.; Guo, S.; Qi, F. Multi task dynamic edge–end computing collaboration for urban internet of vehicles. Comput. Netw. 2023, 227, 109690.
24. Zhu, C.; Liu, C.; Zhu, H.; Li, J. Cloud–Fog Collaborative Computing Based Task Offloading Strategy in Internet of Vehicles. Electronics 2024, 13, 2355.
25. Plachy, J.; Becvar, Z.; Strinati, E.C.; Di Pietro, N. Dynamic allocation of computing and communication resources in multi-access edge computing for mobile users. IEEE Trans. Netw. Serv. Manag. 2021, 18, 2089–2106.
26. Du, G.; Cao, Y.; Li, J.; Zhuang, Y. Secure information sharing approach for internet of vehicles based on DAG-Enabled blockchain. Electronics 2023, 12, 1780.
27. Yan, J.; Bi, S.; Zhang, Y.J.; Tao, M. Optimal task offloading and resource allocation in mobile-edge computing with inter-user task dependency. IEEE Trans. Wirel. Commun. 2019, 19, 235–250.
28. Dai, F.; Liu, G.; Mo, Q.; Xu, W.; Huang, B. Task offloading for vehicular edge computing with edge-cloud cooperation. World Wide Web 2022, 25, 1999–2017.
29. Sun, Y.; Guo, X.; Song, J.; Zhou, S.; Jiang, Z.; Liu, X.; Niu, Z. Adaptive learning-based task offloading for vehicular edge computing systems. IEEE Trans. Veh. Technol. 2019, 68, 3061–3074.
30. Misra, S.; Wolfinger, B.E.; Achuthananda, M.; Chakraborty, T.; Das, S.N.; Das, S. Auction-based optimal task offloading in mobile cloud computing. IEEE Syst. J. 2019, 13, 2978–2985.
31. Arthurs, P.; Gillam, L.; Krause, P.; Wang, N.; Halder, K.; Mouzakitis, A. A taxonomy and survey of edge cloud computing for intelligent transportation systems and connected vehicles. IEEE Trans. Intell. Transp. Syst. 2021, 23, 6206–6221.
32. Wang, K.; Wang, X.; Liu, X.; Jolfaei, A. Task offloading strategy based on reinforcement learning computing in edge computing architecture of internet of vehicles. IEEE Access 2020, 8, 173779–173789.
33. Sun, Z.; Sun, G.; Liu, Y.; Wang, J.; Cao, D. BARGAIN-MATCH: A game theoretical approach for resource allocation and task offloading in vehicular edge computing networks. IEEE Trans. Mob. Comput. 2023, 23, 1655–1673.
34. Zhang, J.; Guo, H.; Liu, J.; Zhang, Y. Task offloading in vehicular edge computing networks: A load-balancing solution. IEEE Trans. Veh. Technol. 2019, 69, 2092–2104.
35. Xiao, Z.; Dai, X.; Jiang, H.; Wang, D.; Chen, H.; Yang, L.; Zeng, F. Vehicular task offloading via heat-aware MEC cooperation using game-theoretic method. IEEE Internet Things J. 2019, 7, 2038–2052.
36. Hossain, M.D.; Sultana, T.; Hossain, M.A.; Layek, M.A.; Hossain, M.I.; Sone, P.P.; Lee, G.W.; Huh, E.N. Dynamic task offloading for cloud-assisted vehicular edge computing networks: A non-cooperative game theoretic approach. Sensors 2022, 22, 3678.
37. Wu, Q.; Wang, X.; Fan, Q.; Fan, P.; Zhang, C.; Li, Z. High stable and accurate vehicle selection scheme based on federated edge learning in vehicular networks. China Commun. 2023, 20, 1–17.
Figure 1. Overall framework of task offloading in vehicular networks based on edge computing.
Figure 2. Offloading framework based on a subtask directed acyclic graph.
Figure 3. Illustration of task decomposition and subtask dependencies based on a directed acyclic graph.
Figure 4. (a) Low load completion time. (b) Medium load completion time. (c) High load completion time.
Figure 5. (a) Low load node utilization. (b) Medium load node utilization. (c) High load node utilization.
Figure 6. The variance of server node utilization under different loads.
Figure 7. Task completion time under different numbers of edge nodes.
Figure 8. (a) Low load task success rate. (b) Medium load task success rate. (c) High load task success rate.
Figure 9. Task completion time after interference.
Figure 10. Task success rate after interference.

Share and Cite

MDPI and ACS Style

Guan, W.; Zheng, Q.; Lian, X.; Gao, C. A DAG-Based Offloading Strategy with Dynamic Parallel Factor Adjustment for Edge Computing in IoV. Sensors 2025, 25, 6198. https://doi.org/10.3390/s25196198
