Article

Optimization of Accuracy-Sensitive Task Offloading and Model Update in Vehicular Edge Computing

Guangxi Key Laboratory of Multimedia Communications and Network Technology, School of Computer and Electronics Information, Guangxi University, Nanning 530004, China
*
Author to whom correspondence should be addressed.
Electronics 2026, 15(5), 1072; https://doi.org/10.3390/electronics15051072
Submission received: 3 February 2026 / Revised: 28 February 2026 / Accepted: 2 March 2026 / Published: 4 March 2026
(This article belongs to the Section Computer Science & Engineering)

Abstract

Vehicular edge computing (VEC) enables vehicles to offload computation-intensive tasks to roadside units (RSUs) equipped with deep learning (DL) models, thereby supporting low-latency and accuracy-sensitive intelligent vehicular tasks. To adapt DL models to evolving task requirements and time-varying vehicular environments, the RSUs must consume limited computing and memory resources to retrieve optimized parameters from the cloud to update local models. While a DL model is being updated, it cannot serve tasks, and while it is serving tasks, it cannot be updated. Consequently, the limited computational and memory resources of RSUs make it challenging to determine which RSU and model each task should be offloaded to and when each DL model should be updated, so as to maximize the task acceptance rate and quality of service. In this paper, we investigate the joint optimization of accuracy-sensitive task offloading and DL model updating in VEC systems. We formulate the problem as a mixed-integer nonlinear programming (MINLP) problem that aims to maximize a weighted utility function of task acceptance rate (AR) and quality of service (QoS), subject to latency, accuracy, and resource constraints. The formulated problem is shown to be NP-hard. To enable efficient decision making, we propose a heuristic algorithm termed the Load-Accuracy-Sensitive Joint Task Offloading and Model Update algorithm. The proposed algorithm leverages real-time system state information and jointly considers transmission feasibility, RSU workload, model accuracy matching, and queue-aware load balancing when making task offloading and model update decisions. Extensive simulation results demonstrate that the proposed algorithm outperforms benchmark algorithms.

1. Introduction

The Internet of Vehicles (IoV) is derived from the Internet of Things (IoT). It enables smart transportation by connecting vehicles to the network and collecting and processing road information for real-time responses [1]. With the rapid development of IoV, the computing tasks generated by vehicles have become increasingly complex and diversified, posing ever-growing demands for real-time performance and accuracy—such as those required for autonomous driving, real-time video analytics, and augmented/virtual reality applications [2]. To achieve optimal performance, these tasks usually require substantial computing and storage resources. However, constrained by their inherent limited computing and storage capacities, vehicles are generally unable to process such tasks efficiently locally [3]. Although cloud servers are equipped with abundant computing and storage resources to handle computing tasks, the long-distance data transmission between vehicles and cloud servers incurs significant transmission latency [4], which violates the latency constraints of computing tasks. To address this issue, scholars in the field have introduced Vehicular Edge Computing (VEC), an emerging computing paradigm, as a solution.
As a critical component of intelligent transportation systems (ITSs), VEC integrates IoV and Mobile Edge Computing (MEC). Specifically, MEC moves part of the computing and storage capabilities of the cloud to the network edge closer to end users, thereby reducing latency [5]. The emergence of VEC addresses two core challenges simultaneously: on the one hand, vehicles can offload computing tasks to nearby edge servers co-located with roadside units (RSUs) for processing, thereby solving the problem that vehicles cannot handle computing tasks locally due to their limited resources [6]; on the other hand, compared with the limited computing capabilities of vehicles, the powerful computing capabilities of edge servers enable more accurate and faster task execution, which satisfies the requirements for task accuracy and latency. In addition, VEC overcomes the excessive transmission latency inherent in cloud-based solutions [7].
Typically, edge servers co-located with RSUs deploy deep learning (DL) models pre-trained in the cloud to process task requests [8]. With the rapid advancement of artificial intelligence (AI), a host of new types of task requests have emerged, which generally impose stringent requirements on both accuracy and latency. If a vehicle's task is assigned to a low-accuracy DL model, the computation results may be incorrect [9], which is unacceptable. To guarantee the performance of DL models, servers are required to perform model updating. However, lacking sufficient resources for large-scale training, servers typically retrieve optimized parameters from the cloud to update local models [10]. During the model updating phase, the DL models deployed on servers cannot accept and process incoming task requests, and vice versa. There thus exists a mutual restriction between task offloading and model updating on servers, which directly impacts the overall system performance. This restriction is manifested in the following two aspects:
  • If model updating is performed frequently, the Quality of Service (QoS) can be improved, but the task throughput will be directly reduced. In contrast, if the update interval is excessively long, outdated models will fail to satisfy the task accuracy requirements due to parameter obsolescence. This indicates that the timing of model updating is directly affected by the distribution of task offloading.
  • Although high-accuracy DL models can satisfy task accuracy requirements, their inference latency is relatively longer than low-accuracy models, which may result in violations of latency constraints and thus a reduction in QoS. Conversely, low-accuracy DL models feature low inference latency but tend to violate accuracy constraints.
Therefore, striking a balance between task offloading and model updating in the VEC is challenging.
As shown in Figure 1, we present an instance of task offloading and model updating in VEC, which is described as follows:
  • ve_1 first searches for nearby RSUs and finds that only RSU_1 is within its communication range. Consequently, ve_1 offloads its task to RSU_1. Since RSU_1 is equipped with the corresponding DL model to process the task from ve_1, no forwarding is needed.
  • There are two RSUs in the communication range of ve_2, and the distance between RSU_2 and ve_2 is approximately the same as that between RSU_1 and ve_2. However, the DL model on RSU_1 that can process ve_2's task is already occupied by the task from ve_1. Therefore, ve_2 offloads its task to RSU_2, which is equipped with the corresponding and available DL model.
  • Similarly, ve_3 offloads its task to RSU_2 for processing. At this moment, ve_5 transmits its task to RSU_3. However, RSU_3 does not have the corresponding DL model for processing this type of task, and the corresponding DL model on RSU_2 is occupied by the task from ve_3. Thus, the task from ve_5 is forwarded to RSU_1, which is equipped with the corresponding and available DL model. Since there is no direct communication link between RSU_1 and RSU_3, while RSU_2 maintains direct links with both, the task from ve_5 is transmitted from RSU_3 via RSU_2 to RSU_1 for processing.
  • The distance from ve_4 to RSU_2 is approximately the same as that from ve_4 to RSU_3, and both RSUs are equipped with the corresponding DL models to process ve_4's task. However, for this type of DL model, the one deployed on RSU_2 offers higher accuracy than that on RSU_3. Therefore, ve_4 offloads its task to RSU_2 for processing.
  • Compared with low-accuracy DL models, high-accuracy ones can improve QoS but incur longer processing latency. Therefore, when the entire VEC system operates under a low-load condition, RSU_3 retrieves optimized parameters from the cloud to upgrade its DL model from the low-accuracy version to the high-accuracy one, aiming to improve QoS. Conversely, when the system is under a high-load condition, RSU_1 downgrades its DL model from high accuracy to low accuracy to reduce task processing latency, so that more tasks can be processed. When a specific DL model on an RSU is undergoing an update, it cannot accept any task offloading requests, and vice versa.
In this paper, we consider the joint optimization problem of task offloading and DL model updating in VEC. Specifically, under the premise of satisfying the constraints of latency and accuracy for a task, we need to comprehensively consider task acceptance rate and QoS to select appropriate RSUs and models for each task, and reasonably schedule the timing of DL model updating. However, this problem, a Mixed-Integer Nonlinear Programming (MINLP) problem, involves discrete decision variables (e.g., whether a task is offloaded, whether a DL model is updated) as well as nonlinear latency and QoS constraints. Characterized by high computational complexity, it cannot be solved in real time by traditional algorithms in VEC. To address the challenges, this paper proposes a load and accuracy-aware joint optimization algorithm, which makes decisions on task offloading and DL model updating by leveraging real-time system information. With the weighted sum of task acceptance rate and QoS as the optimization objective, the algorithm effectively improves the overall system performance. Specifically, the main solution approach of this paper includes the following aspects:
  • This paper constructs a system model for VEC, characterizing task offloading, resource constraints, and the model updating mechanism, and further formulates the joint optimization problem as an MINLP problem. Subsequently, by simplifying and relaxing the problem, we prove that the joint optimization problem is NP-hard.
  • We propose a heuristic algorithm named Load-Accuracy-Sensitive Joint Task Offloading and Model Update Algorithm, which leverages multi-dimensional information including distance, load status, and model accuracy to make decisions on task offloading and model updating.
  • This paper conducts simulation experiments to compare the proposed algorithm with benchmark algorithms, verifying its performance advantages in metrics such as task acceptance rate and QoS.
The rest of this paper is organized as follows. In Section 2, we analyze the related work; Section 3 describes the system model and formalizes the optimization problem; in Section 4, we present the proposed algorithm in detail. Section 5 evaluates the performance of the proposed algorithm via simulation results. In Section 6, we conclude this paper.

2. Related Works

In recent years, a large number of researchers have made substantial and important contributions to the field of task offloading. Beytur et al. [9] focused on hierarchical inference systems with multiple user devices and random offloading costs and proposed an online algorithm to achieve delay–accuracy–fairness tradeoffs. Liu et al. [10] studied a joint optimization problem of dynamic DL model deployment and freshness-sensitive task assignment under multi-dimensional constraints in edge intelligence. Hu et al. [11] proposed an online computation offloading approach that can effectively reduce execution latency and energy consumption of tasks in dynamic MEC environments. Yan et al. [12] studied the impact of inter-user task dependency on the task offloading decisions and resource allocation in a two-user MEC network. Guo et al. [13] considered the problem of joint task scheduling and resource allocation, which optimizes the ground D2D association strategy and computation task offloading strategy. Tran-Dang et al. [14] proposed a fog resource-aware adaptive task offloading framework for IoT–fog–cloud systems to provide IoT computing services with context-aware, minimized delay. Liu et al. [15] studied computation offloading for mobile applications with task-dependency requirements in MEC systems. Long et al. [16] investigated the task offloading problem under the "End–Edge–Cloud" architecture as a multi-objective optimization problem. They took the energy consumption of mobile devices, the response delay of tasks and the load balancing between edge nodes as the optimization goals. Fan et al. [17] proposed a task offloading scheme to minimize the total task processing delay in a quality-aware edge-assisted ML task inference scenario while guaranteeing the task inference accuracy requirements and the task queues' stability. Dong et al. [18] discussed the optimization problem of task offloading scheduling with multiple users and multiple MEC servers and established the corresponding mathematical model for the problem. In [19], a collaborative mobile edge computing system with multiple UAVs and multiple ECs was investigated. Chakraborty et al. [20] addressed the task prioritization and offloading policy in a three-layered architecture using fuzzy logic. Duan et al. [21] studied the mobility-aware online task offloading problem with adaptive load balancing to minimize the total computation costs and devised an online control scheme to jointly optimize task offloading and MES grouping.
In addition, extensive research efforts have been devoted to the task offloading problem in VEC. Wang et al. [22] proposed an MEC-assisted vehicular network architecture, and optimized the task offloading and migration decisions in the vehicle network to minimize the weighted sum of delay and energy consumption. In [23], the authors investigated the dependency-aware task offloading and service caching problem in VEC. Zhao et al. [24] considered the data dependency among tasks, proposed a vector-based computation offloading model, and defined the minimization of the system's average response time and average energy consumption as a combinatorial optimization problem. Xia et al. [25] proposed a novel location-aware task offloading mechanism for the single-vehicle multi-unit scenario in VEC. In [26], the authors incorporated a Transformer-based large model into the action generation process for task offloading in MEC and proposed a task offloading algorithm that integrates a Transformer-based large model with deep reinforcement learning to meet the demands for data processing and real-time decision-making in the IoV. Chen et al. [27] focused on a hybrid energy-powered multi-server MEC system with Cybertwin, the digital representation of complex physical end-systems. Dai et al. [28] studied the problem of multi-task offloading in VEC systems and developed a Seq2seq-based meta reinforcement learning method that can efficiently find an offloading policy adaptive to a new environment. In [29], the problem of task allocation in dynamic Vehicular Fog Computing (VFC) environments was studied, and the authors designed a distributed V2V task offloading scheme to solve it. Jin et al. [30] addressed task offloading and resource allocation in dynamic vehicular networks, focusing on a dynamic optimization strategy that considers the stochastic nature of tasks and vehicle movement. Zhao et al. [31] studied online computation offloading of proactive sensing tasks in VEC networks, taking into account stochastic task data arrivals. They proposed a novel framework that combines Lyapunov optimization and DRL, leveraging their respective advantages, to minimize the average weighted sum of task delay while ensuring long-term queue stability. The authors of [32] investigated task offloading methods in VFC networks. Liu et al. [33] studied the problem of exploiting vehicle mobility for task offloading in VEC networks to minimize the utility of client vehicles. Lv et al. [34] considered the problem of task offloading for autonomous vehicles in an edge computing environment and proposed a task offloading scheme for environmental perception of autonomous vehicles. In [35], a new vehicle-centric multi-objective task offloading (MOTO) model was proposed, along with a vehicle-centric two-stage task offloading approach, which addresses the task offloading challenge encountered by connected autonomous vehicles in vehicular networks. Zhang et al. [36] investigated the task offloading problem in the context of the IoV and proposed a DT-empowered task offloading framework that leverages global and historical information to make efficient task offloading decisions at the DT layer. Sun et al. [37] focused on task offloading among vehicles, i.e., the driving systems or passengers of some vehicles generate computation tasks, while some other surrounding vehicles can provide computing services. Xiao et al. [38] formulated a joint problem of collaborative task assignment, offloading, and resource allocation for minimizing task completion delay, fully considering vehicle mobility and task dependency, and developed a new framework of perception task offloading with collaborative computation for autonomous driving.
In summary, our work differs from existing studies in that we:
  • Consider DL-based task processing (inference-driven services), where model quality directly impacts service outcomes, rather than treating all tasks as conventional computation workloads.
  • Introduce an explicit model-updating mechanism and jointly optimize task offloading and model updating, instead of assuming a static model or ignoring model maintenance costs.
  • Explicitly incorporate accuracy-related utility into the objective and study the latency–accuracy trade-off, whereas relatively few existing VEC studies directly optimize accuracy-oriented metrics.

3. System Model

3.1. Network Model

This paper studies a vehicular edge network composed of multiple RSUs, where each RSU is equipped with a server and each server is deployed with DL models for processing task requests. We model the network as G = {N, E}, where N = {1, …, n, …, N} denotes the set of RSUs and E denotes the set of links between RSUs. The set of task requests from vehicle users is denoted by U = {1, …, u, …, U}. Specifically, each request is characterized as u = {vol_u, q_u, c_u, d_u}, where vol_u represents the task data size of task request u, q_u denotes the type of model required by task request u, c_u indicates the model accuracy required by task request u, and d_u stands for the latency requirement of task request u.
The computing resources, memory resources, and model set of RSU n are denoted by C_n, M_n, and k_n, respectively, where k_n = {k_{n,1}, …, k_{n,i}, …, k_{n,I_n}} and I_n denotes the number of DL models on RSU n. The value of I_n is determined by the storage capacity of RSU n, which we denote by D_n; in general, each RSU can accommodate two to three DL models. Each model is characterized as k_{n,i} = {κ_{n,i}, μ_{n,i}, s_{n,i}}, where κ_{n,i} denotes the accuracy, μ_{n,i} the type, and s_{n,i} the size of model k_{n,i}. The higher the accuracy κ_{n,i} of a DL model k_{n,i}, the longer the latency it requires to process a task. A time-slot model is adopted in this paper, which discretizes continuous time into slots, with T = {1, …, t, …, T} denoting the set of time slots. At the beginning of each time slot, each RSU makes model update decisions based on its resource utilization. When an RSU performs a model update, the DL model undergoing the update enters a service interruption state for the duration of the update, during which it cannot accept any task offloading requests. After the model update decisions are made, the DL models that are not being updated remain available to receive and process task requests from vehicle users.
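To make the above notation concrete, the following minimal Python sketch represents a task request u = {vol_u, q_u, c_u, d_u} and a DL model k_{n,i} = {κ_{n,i}, μ_{n,i}, s_{n,i}} as plain data classes. The class and field names are illustrative choices of this sketch, not taken from the authors' implementation.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class TaskRequest:
    """A task request u = {vol_u, q_u, c_u, d_u}."""
    vol: float        # task data size vol_u (MB)
    model_type: int   # required model type q_u
    accuracy: float   # required model accuracy c_u
    deadline: float   # latency requirement d_u (s)

@dataclass
class DLModel:
    """A deployed DL model k_{n,i} = {kappa_{n,i}, mu_{n,i}, s_{n,i}}."""
    accuracy: float         # model accuracy kappa_{n,i}
    model_type: int         # model type mu_{n,i}
    size: float             # model size s_{n,i} (MB)
    updating: bool = False  # True while the model is retrieving new parameters

@dataclass
class RSU:
    """An RSU n with computing C_n, memory M_n, storage D_n and its model set k_n."""
    compute: float   # computing resources C_n
    memory: float    # memory resources M_n
    storage: float   # storage capacity D_n
    models: List[DLModel] = field(default_factory=list)
```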

3.2. Latency Model

Latency is a key metric for the overall performance of a VEC system. We assume that the system adopts Orthogonal Frequency Division Multiple Access (OFDMA) [39] and that each RSU allocates an equal share of bandwidth to each vehicle user request. Generally, the communication range of a vehicle user covers multiple RSUs, in which case the vehicle user associates with the RSU providing the strongest signal for task data transmission. The transmission rate between vehicle user u and its associated RSU n is given by
r_{u,n} = B \log_2 \left( 1 + \frac{P_u H_{u,n}}{\sigma^2} \right),    (1)
where B denotes the bandwidth, P_u the transmission power, H_{u,n} the channel gain, and σ^2 the noise power.
Given that the task data size of vehicle user u is vol_u, the data upload latency is vol_u / r_{u,n}. If the RSU n associated with vehicle user u does not have a DL model of the type required by the task, while another RSU n' does, the task request is forwarded from the associated RSU n to the DL model k_{n',i} on RSU n'. Let path_{n,n'} denote the shortest path between RSU n and RSU n', and let num(path_{n,n'}) denote the number of links in path_{n,n'}. Let r_{n,n'} denote the transmission rate between RSUs. The total transmission latency for the task of user u can then be expressed as
t_{u,n,i}^{trans} = \frac{vol_u}{r_{u,n}} + \frac{vol_u \cdot num(path_{n,n'})}{r_{n,n'}}.    (2)
Although high-accuracy DL models can guarantee the correctness of task computation results, they incur longer computation latency; the opposite holds for low-accuracy DL models. The total computation latency of the DL model k_{n,i} on RSU n for processing the task of user u can be expressed as
t_{u,n,i}^{comp} = \frac{vol_u \cdot P_u}{f_{n,i}},    (3)
where vol_u · P_u denotes the total number of CPU cycles required to process the task generated by vehicle user u, with P_u being the computational density of the task. When multiple tasks are processed simultaneously on the same RSU, the available computing resources are shared among these tasks. For tractability, this paper assumes that the DL models deployed on an RSU share its computing resources equally. In the above formula, f_{n,i} denotes the computational processing capacity of DL model k_{n,i}, which is determined by the computing resources C_n of RSU n, the number of tasks comp_n currently being processed by the RSU, and the processing factor P, namely
f_{n,i} = \frac{C_n}{comp_n \cdot P}.    (4)
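As a minimal sketch of the latency model above (the helper names are ours, and the guard against an empty RSU, i.e., comp_n = 0, is an added assumption), the per-task latency terms can be computed as follows.

```python
import math

def transmission_rate(bandwidth, tx_power, channel_gain, noise_power):
    """r_{u,n} = B * log2(1 + P_u * H_{u,n} / sigma^2)."""
    return bandwidth * math.log2(1.0 + tx_power * channel_gain / noise_power)

def transmission_latency(vol, r_vehicle_rsu, num_hops, r_rsu_rsu):
    """t^{trans} = vol_u / r_{u,n} + vol_u * num(path_{n,n'}) / r_{n,n'}.
    num_hops is 0 when the associated RSU hosts the required model itself."""
    return vol / r_vehicle_rsu + vol * num_hops / r_rsu_rsu

def model_capacity(rsu_compute, num_active_tasks, processing_factor):
    """f_{n,i} = C_n / (comp_n * P): the compute share available to one model."""
    return rsu_compute / (max(num_active_tasks, 1) * processing_factor)

def computation_latency(vol, density, capacity):
    """t^{comp} = vol_u * P_u / f_{n,i}, with P_u the computational density of the task."""
    return vol * density / capacity
```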

3.3. Task Offloading and Model Updating

Task Offloading: The task offloading decision is denoted by X = {x_{n,i}^{u,t} ∈ {0, 1}}_{i∈I, t∈T, u∈U, n∈N}. Specifically, x_{n,i}^{u,t} = 1 indicates that in time slot t, the DL model k_{n,i} deployed on RSU n accepts and processes the task offloading request from vehicle user u.
Model Updating: The model update decision is denoted by Y = {y_{n,i}^{t} ∈ {0, 1}}_{i∈I, t∈T, n∈N}. Specifically, y_{n,i}^{t} = 1 indicates that in time slot t, RSU n performs an update operation on DL model k_{n,i}, during which the model cannot accept or process task offloading requests from any vehicle user u.
Note that the task offloading and model update decisions are closely coupled, since a DL model cannot accept and process task offloading requests while it is undergoing an update operation, and vice versa, namely x_{n,i}^{u,t} + y_{n,i}^{t} ≤ 1.
In this paper, we assume that when an RSU intends to perform a model update operation on a DL model, the RSU retrieves only the optimized parameters from the cloud instead of downloading the entire DL model, and thus the download delay is omitted. When a DL model is performing a model update operation (i.e., y_{n,i}^{t} = 1), it consumes certain computational resources, and the total delay of the model update is
t_{n,i}^{update} = \frac{s_{n,i}}{f_{n,i}^{update}},    (5)
where f_{n,i}^{update} = C_n / comp_n denotes the computational processing capacity available to the DL model for the update. When a model k_{n,i} performs a model update operation, i.e., y_{n,i}^{t} = 1, it cannot accept and process task offloading requests for a duration of t_{n,i}^{update}.
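The update delay and the mutual exclusion between serving and updating can be read as in the short sketch below (an illustrative reading of the x + y ≤ 1 coupling, with the same comp_n guard as before; not the authors' implementation).

```python
def update_latency(model_size, rsu_compute, num_active_tasks):
    """t^{update}_{n,i} = s_{n,i} / f^{update}_{n,i}, where f^{update}_{n,i} = C_n / comp_n."""
    f_update = rsu_compute / max(num_active_tasks, 1)
    return model_size / f_update

def can_offload(model_updating):
    """x_{n,i}^{u,t} can be 1 only if y_{n,i}^{t} = 0, i.e., the model is not updating."""
    return not model_updating

def can_update(model_has_assigned_task):
    """y_{n,i}^{t} can be 1 only if no task is currently offloaded to the model."""
    return not model_has_assigned_task
```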

3.4. Acceptance Rate and QoS Model

For the entire VEC system, the more task requests it can accept and process, the better; that is, a higher Acceptance Rate (AR) of task requests is preferable. For vehicle users, a higher Quality of Service (QoS) is preferable. Therefore, we adopt an AR–QoS-based utility function as the performance metric for the entire VEC system. This utility function is expressed as
Z_t(AR, QoS) = \alpha \cdot AR + \beta \cdot QoS,    (6)
where α, β ∈ [0, 1] denote the weights of AR and QoS, respectively, with α + β = 1. The values of α and β depend on application characteristics. In latency-critical scenarios (e.g., safety-related perception tasks), a larger α can be chosen (e.g., α ≥ 0.7). In accuracy-sensitive scenarios (e.g., object recognition or cooperative perception), β can be larger (e.g., β ≥ 0.6).
The AR in time slot t is expressed as
AR_t = \frac{\sum_{u \in U} \sum_{n \in N} \sum_{i \in I} x_{n,i}^{u,t}}{U_t},    (7)
where U_t denotes the number of task requests generated in time slot t.
The QoS function reflects the trade-off between inference accuracy and task execution latency. At time slot t, the average QoS over all users is expressed as
QoS_t = \frac{1}{U_t} \sum_{u \in U_t} QoS_t^{u},    (8)
where QoS_t^{u}, the QoS of user u, is calculated as
QoS_t^{u} = \theta \cdot r_{delay}^{u,t} + (1 - \theta) \cdot r_{accuracy}^{u,t},    (9)
where θ balances the proportions of latency and accuracy in the QoS function. When θ is small, the weight of accuracy in the QoS function is larger, and vehicles are more inclined to offload tasks to models with higher accuracy rather than to faster models. The terms r_{delay} and r_{accuracy} are defined as follows:
r_{delay} = \frac{1}{1 + \frac{t_{u,n,i}^{trans} + t_{u,n,i}^{comp}}{d_u}},    (10)
r_{accuracy} = 1 - 2^{\,c_u - \kappa_{n,i}}.    (11)
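Putting the utility definitions together, a minimal sketch of the per-slot metric computation is given below; it follows the reconstructed expressions for r_delay and r_accuracy above, and assumes that uncompleted tasks contribute a QoS of 0, consistent with the evaluation in Section 5. Function and parameter names are ours.

```python
def r_delay(t_trans, t_comp, deadline):
    """Latency utility: approaches 1 when the total delay is far below d_u."""
    return 1.0 / (1.0 + (t_trans + t_comp) / deadline)

def r_accuracy(required_acc, model_acc):
    """Accuracy utility: grows with the margin of the model accuracy over c_u."""
    return 1.0 - 2.0 ** (required_acc - model_acc)

def qos_of_task(t_trans, t_comp, deadline, required_acc, model_acc, theta=0.5):
    """Per-task QoS: a theta-weighted mix of the delay and accuracy utilities."""
    return (theta * r_delay(t_trans, t_comp, deadline)
            + (1.0 - theta) * r_accuracy(required_acc, model_acc))

def slot_utility(num_accepted, num_tasks, qos_values, alpha=0.5, beta=0.5):
    """Z_t = alpha * AR_t + beta * QoS_t for one time slot.
    `qos_values` holds the per-task QoS of accepted tasks; rejected tasks count as 0."""
    if num_tasks == 0:
        return 0.0
    ar = num_accepted / num_tasks
    qos = sum(qos_values) / num_tasks
    return alpha * ar + beta * qos
```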

3.5. Problem Formulation

To maximize the utility function Z t ( A R , Q o S ) while satisfying the constraints on accuracy, latency, memory, and computing resources, this paper formulates the joint optimization problem of task offloading and DL model updating as follows:
P1: \max \sum_{t \in T} Z_t(AR, QoS)    (12)
s.t. \quad x_{n,i}^{u,t} + y_{n,i}^{t} \le 1, \quad \forall i \in I, t \in T, n \in N, u \in U,    (13)
\sum_{n \in N} \sum_{i \in I} x_{n,i}^{u,t} \le 1, \quad \forall t \in T, u \in U,    (14)
t_{u,n,i}^{trans} + t_{u,n,i}^{comp} \le d_u, \quad \forall i \in I, t \in T, n \in N, u \in U,    (15)
c_u \le \kappa_{n,i}, \quad \forall i \in I, n \in N, u \in U,    (16)
q_u = \mu_{n,i}, \ \text{if} \ x_{n,i}^{u,t} = 1, \quad \forall i \in I, n \in N, u \in U,    (17)
\sum_{i \in I} s_{n,i} \le D_n, \quad \forall n \in N,    (18)
\sum_{u \in U} \sum_{i \in I} x_{n,i}^{u,t} \le I_n, \quad \forall t \in T, n \in N,    (19)
\sum_{u \in U} \sum_{i \in I} x_{n,i}^{u,t} \cdot vol_u \le M_n, \quad \forall t \in T, n \in N,    (20)
x_{n,i}^{u,t}, y_{n,i}^{t} \in \{0, 1\}, \quad \forall i \in I, t \in T, n \in N, u \in U.    (21)
Among the above constraints, Constraint (13) ensures that task processing and model updating are mutually exclusive for a DL model. Constraint (14) guarantees that a task can be offloaded to at most one DL model for processing. Constraints (15) and (16) represent the latency and accuracy constraints, respectively. Constraint (17) ensures that a task is offloaded to a DL model of the correct type. Constraints (18), (19), and (20) correspond to the storage, computing, and memory resource constraints of an RSU, respectively. Constraint (21) defines the binary decision variables.
Theorem 1.
The optimization problem P1 is an NP-hard problem.
Proof of Theorem 1.
To prove this proposition, we consider a constrained version of P1. First, we focus on a single time slot and neglect the temporal coupling between different time slots. Second, we fix the model update decisions and assume that the deployed models are static. Third, we only consider the task offloading decisions, with the objective of maximizing the number of accepted tasks subject to the constraints of task latency, task accuracy and RSU resource capacity, while ignoring the QoS-related nonlinear terms. Under these simplified conditions, the problem is transformed into selecting a subset of tasks to be offloaded to RSUs, such that the total resource demand does not exceed the available resources while maximizing the number of accepted tasks. This problem is equivalent to the classic 0-1 knapsack problem, where tasks correspond to items, RSU resources correspond to knapsack capacity, and task resource requirements correspond to item weights. Since the 0-1 knapsack problem is an NP-hard problem [40], the constrained version of P1 is also NP-hard. Therefore, the original and more general problem, which involves more decision variables and nonlinear constraints, is NP-hard as well.    □
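To make the reduction in the proof concrete, the sketch below solves the 0-1 knapsack instance that the simplified problem maps to, with tasks as items (unit profit each), integer resource demands as item weights, and a single RSU resource budget as the knapsack capacity. It only illustrates the structure of the argument and is not part of the proposed algorithm.

```python
def max_accepted_tasks(resource_demands, capacity):
    """0-1 knapsack DP: maximize the number of accepted tasks under one resource budget."""
    # dp[c] = maximum number of tasks that fit within a budget of c resource units
    dp = [0] * (capacity + 1)
    for demand in resource_demands:
        for c in range(capacity, demand - 1, -1):  # iterate downwards: each task used once
            dp[c] = max(dp[c], dp[c - demand] + 1)
    return dp[capacity]

# Example: four tasks with demands 4, 3, 2, 6 and a budget of 10 accept at most 3 tasks.
print(max_accepted_tasks([4, 3, 2, 6], 10))  # -> 3
```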

4. Algorithm

4.1. Overview

As described in Section 3, problem P1, namely the joint task offloading and DL model update problem, is a Mixed-Integer Non-Linear Programming (MINLP) problem and has been proven to be NP-hard. In the dynamic VEC environment, it is challenging to obtain a real-time optimal solution. To address this challenge, we propose a heuristic-based algorithm, namely the Load-Accuracy-Sensitive Joint Task Offloading and Model Update (STD) Algorithm. The algorithm aims to balance AR and QoS, and determine task offloading and model update decisions by leveraging real-time system states, including RSU workload, queue length, available resources, and model accuracy.
The STD Algorithm operates in a time slot-based manner. At the start of each time slot, the algorithm collects system status information from all RSUs and vehicles. It then executes two closely coupled phases sequentially: (i) task offloading decision-making and (ii) DL model update decision-making. These two phases are coordinated to avoid conflicts between task computation and DL model updating, thereby achieving a balance between AR and QoS.

4.2. Algorithm Design

Our objective is to achieve a rational operating state for the entire system while satisfying various constraints, thereby making effective decisions on task offloading and DL model updating to maximize the weighted sum of AR and QoS. More specifically, for each task request generated by vehicles within the current time slot, the STD Algorithm evaluates all candidate RSUs within the task’s communication range. For each candidate RSU, the algorithm not only further checks for deployed DL models that match the task type, but also verifies the transmission feasibility, computational load, queue length, remaining memory resources, and model accuracy matching degree of the RSU. As for DL model updating, the STD Algorithm also makes update decisions based on the overall system load level and queue length. To quantify the suitability of offloading a task to a specific RSU–model pair, this paper constructs a weighted utility score, expressed as follows:
\Psi_{n,i}^{u} = \omega_1 \Phi_{u,n}^{dist} + \omega_2 \Phi_{n}^{load} + \omega_3 \Phi_{u,n,i}^{acc} + \omega_4 \Phi_{n}^{bal}.    (22)
The utility score Ψ_{n,i}^{u} evaluates the suitability of offloading the task from user u to model k_{n,i} on RSU n by jointly considering transmission feasibility, RSU workload, model accuracy matching, and system load balancing. In Equation (22), Φ_{u,n}^{dist} measures whether the communication conditions between task u and RSU n are conducive to satisfying the latency constraint; Φ_{n}^{load} measures whether the RSU is currently under high load, thus avoiding task offloading to congested nodes; Φ_{u,n,i}^{acc} evaluates whether the accuracy of the model on the RSU meets the task requirements; Φ_{n}^{bal} encourages offloading tasks to relatively idle RSUs in the system to improve the overall resource utilization. The weights ω_1, …, ω_4 balance the contributions of the Φ terms, with \sum_{k=1}^{4} \omega_k = 1.
The expressions of Φ_{u,n}^{dist}, Φ_{n}^{load}, Φ_{u,n,i}^{acc}, and Φ_{n}^{bal} are given as follows:
\Phi_{u,n}^{dist} = \frac{\xi}{1 + dist_{u,n}},    (23)
where ξ indicates whether the task latency requirement can be satisfied, taking the value 1 if satisfied and 0 otherwise, and dist_{u,n} denotes the distance between vehicle user u and RSU n.
\Phi_{n}^{load} = \frac{num_{n}^{task} + num_{n}^{update}}{C_n},    (24)
where num_{n}^{task} and num_{n}^{update} denote the number of tasks being processed and the number of models being updated by RSU n, respectively.
\Phi_{u,n,i}^{acc} = \max(0, \kappa_{n,i} - c_u) \cdot score_{n,i},    (25)
where score_{n,i} takes a value greater than 1 if the model is in an idle state, and 1 otherwise.
\Phi_{n}^{bal} = \sum_{i \in I} (Q_{n,i}^{task} + Q_{n,i}^{update}) + Q_{n}^{trans},    (26)
where Q_{n,i}^{task}, Q_{n,i}^{update}, and Q_{n}^{trans} represent the task queue of model k_{n,i}, the model update queue, and the forwarding queue of RSU n, respectively.
The algorithm details are presented in Algorithm 1. In Algorithm 1, Lines 1–2 initialize the decision variables and collect the current system states. Lines 3–22 determine the task offloading decisions for all task requests in the current time slot. Specifically, Lines 8–12 evaluate the feasible RSU–model pairs based on the weighted utility function. Each task is then offloaded to the RSU–model pair with the maximum utility value, and the system states are updated accordingly after each offloading operation (Lines 18–20). Subsequently, Lines 23–29 determine the model update decisions for each RSU according to the current workload and resource utilization. Finally, the algorithm outputs the task offloading and model update decisions for the current time slot.
Algorithm 1: Load-Accuracy-Sensitive Joint Task Offloading and Model Update (STD) Algorithm
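Since the published listing of Algorithm 1 is reproduced as an image, the following Python sketch outlines the per-slot decision flow described above. The dictionary field names, the rough delay estimate used for the feasibility flag ξ, the idle bonus of 2.0, and the load threshold of 0.7 are illustrative assumptions of this sketch, not the authors' exact implementation.

```python
def utility_score(task, rsu, model, weights=(0.25, 0.25, 0.25, 0.25)):
    """Weighted utility Psi of offloading `task` to `model` on `rsu`, following Eq. (22).
    Tasks, RSUs, and models are plain dicts; all field names are illustrative."""
    w1, w2, w3, w4 = weights
    # Rough delay estimate (upload + inference), used only for the feasibility flag xi.
    delay = task["vol"] / rsu["rate"] + task["vol"] * task["density"] / rsu["compute"]
    xi = 1.0 if delay <= task["deadline"] else 0.0
    phi_dist = xi / (1.0 + rsu["dist"][task["vehicle"]])                 # Eq. (23)
    phi_load = (rsu["num_tasks"] + rsu["num_updates"]) / rsu["compute"]  # Eq. (24)
    idle_bonus = 2.0 if not model["busy"] else 1.0                       # score_{n,i} > 1 when idle
    phi_acc = max(0.0, model["acc"] - task["acc"]) * idle_bonus          # Eq. (25)
    phi_bal = rsu["queue_len"]         # Eq. (26), collapsed to one aggregate queue length
    return w1 * phi_dist + w2 * phi_load + w3 * phi_acc + w4 * phi_bal

def std_one_slot(tasks, rsus, load_threshold=0.7):
    """One time slot of the STD flow: offloading decisions first, then model updates."""
    offload, updates = {}, []
    for task in tasks:                                   # phase (i): task offloading
        best = None
        for rsu in rsus:
            for model in rsu["models"]:
                if model["updating"] or model["type"] != task["type"]:
                    continue                             # x + y <= 1 and model-type match
                if model["acc"] < task["acc"]:
                    continue                             # accuracy constraint
                if rsu["free_mem"] < task["vol"]:
                    continue                             # memory constraint
                score = utility_score(task, rsu, model)
                if best is None or score > best[0]:
                    best = (score, rsu, model)
        if best is not None:                             # accept the best RSU-model pair
            _, rsu, model = best
            offload[task["id"]] = (rsu["id"], model["name"])
            rsu["num_tasks"] += 1                        # update the system state
            rsu["free_mem"] -= task["vol"]
            rsu["queue_len"] += 1
            model["busy"] = True
    for rsu in rsus:                                     # phase (ii): model update decisions
        load = (rsu["num_tasks"] + rsu["num_updates"]) / rsu["compute"]
        if load < load_threshold:                        # update only when lightly loaded
            for model in rsu["models"]:
                if not model["busy"] and not model["updating"]:
                    updates.append((rsu["id"], model["name"]))
    return offload, updates
```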

5. Performance Evaluation

In this section, the performance of the proposed STD Algorithm and other benchmark algorithms is evaluated through simulation experiments. All algorithms are run on a computer equipped with an Intel i5-13490 CPU and 32 GB RAM, and the experiments are conducted on the Python 3.8.0 platform.

5.1. Experimental Environment Settings

We consider a vehicular edge network where the number of RSUs ranges from 10 to 30 and the number of vehicles ranges from 30 to 50. We assume that each RSU can host 3 DL models, with both the number of DL model types and the number of task types set to 3. Task arrivals are assumed to follow a Poisson distribution, where the task size vol_u ranges from 40 to 45 MB, the accuracy requirement c_u ranges from 0.4 to 0.8, and the latency requirement d_u is drawn from [10, 15] and scaled by (1 + c_u). For RSUs, we assume that the transmission rate between an RSU and a vehicle is 500 MB/s, the transmission rate between RSUs is 1000 MB/s, the CPU frequency and computing density of an RSU are 0.1 GHz and 0.297 Gcycles/Mbit, respectively, and the memory resource of an RSU is 130 MB. For DL models, we assume that there are three model types, each with three accuracy levels, where the three accuracy levels correspond to the processing factors P of 0.7, 1.0, and 1.5, respectively. Both α and β are set to 0.5, and all ω_k are set to 0.25. All experiments are conducted with a fixed random seed of 42 to ensure reproducibility. Each deployed DL model maintains a task queue following a First-In-First-Out (FIFO) service discipline. The parameters related to the model accuracy range and update size are presented in Table 1.
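For reference, the settings above can be collected in a single configuration, as in the sketch below; the parameter values follow the text, while the variable names and the task-sampling helper are our own illustrative choices.

```python
import random

random.seed(42)  # fixed seed, as stated in the setup

SIM_CONFIG = {
    "num_rsus": (10, 30),               # number of RSUs varied from 10 to 30
    "num_vehicles": (30, 50),           # number of vehicles varied from 30 to 50
    "models_per_rsu": 3,
    "num_model_types": 3,
    "task_size_mb": (40, 45),           # vol_u
    "task_accuracy": (0.4, 0.8),        # c_u
    "base_deadline": (10, 15),          # d_u = U(10, 15) * (1 + c_u)
    "rate_vehicle_rsu_mb_s": 500,       # MB/s
    "rate_rsu_rsu_mb_s": 1000,          # MB/s
    "rsu_cpu_ghz": 0.1,
    "compute_density_gcycles_per_mbit": 0.297,
    "rsu_memory_mb": 130,
    "processing_factors": [0.7, 1.0, 1.5],
    "alpha": 0.5,
    "beta": 0.5,
    "omega": [0.25, 0.25, 0.25, 0.25],
}

def sample_task():
    """Draw one task request according to the distributions described in the text."""
    c_u = random.uniform(*SIM_CONFIG["task_accuracy"])
    return {
        "vol": random.uniform(*SIM_CONFIG["task_size_mb"]),
        "acc": c_u,
        "deadline": random.uniform(*SIM_CONFIG["base_deadline"]) * (1 + c_u),
        "type": random.randrange(SIM_CONFIG["num_model_types"]),
    }
```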
We compare the STD Algorithm with three benchmark algorithms:
  • Random Algorithm: At the start of each time slot, the algorithm performs random task offloading and model updating according to a certain probability.
  • Greedy Algorithm: The objective of this algorithm is to offload as many tasks as possible to maximize the acceptance rate. It first collects information on RSUs with idle DL models; if such RSUs exist, it randomly selects one for task offloading. If no idle DL models are available, it selects the RSU with the lowest load and queue length for offloading. For model updating, the algorithm chooses an opportunity to update idle models based on the system load and total queue length.
  • Lyapunov-based Algorithm: At the beginning of each time slot, this algorithm observes the current queue backlogs and resource states of all RSUs and makes online offloading decisions by minimizing a Lyapunov drift-plus-penalty metric. It selects the RSU–model pair that yields the smallest increase in queue backlog (drift) while considering a QoS-related term (e.g., accuracy matching and delay feasibility); model updates are triggered only under low-load/short-queue conditions to avoid degrading task processing. A simplified sketch of this per-candidate scoring rule is given after this list.
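The sketch below is our own simplified reading of the drift-plus-penalty rule used by the Lyapunov-based benchmark, not its exact implementation; the candidate representation and the control parameter V are assumptions.

```python
def drift_plus_penalty(queue_len, added_work, qos_gain, v=1.0):
    """Per-candidate drift-plus-penalty score (smaller is better).
    The drift term approximates the queue-backlog growth caused by admitting the task;
    the penalty term is the negative QoS reward, weighted by the control parameter V
    that trades queue stability against service quality."""
    drift = queue_len * added_work
    penalty = -qos_gain
    return drift + v * penalty

def lyapunov_select(candidates, v=1.0):
    """Pick the feasible RSU-model pair minimizing drift + V * penalty.
    `candidates` is a list of (queue_len, added_work, qos_gain, pair) tuples,
    where `pair` identifies the RSU-model combination."""
    best = min(candidates,
               key=lambda c: drift_plus_penalty(c[0], c[1], c[2], v),
               default=None)
    return best[3] if best is not None else None
```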

5.2. Performance of Different Algorithms for the Problem

We first evaluate the performance of the STD Algorithm and other comparative algorithms under the scenario where the numbers of RSUs and vehicles are fixed at 10 and 30, respectively. Figure 2 presents a comparison of the Z, AR and QoS values achieved by different algorithms. The experimental results demonstrate that the performance of the STD Algorithm is significantly superior to the other three benchmark algorithms. Specifically, the STD Algorithm shows a distinct advantage in terms of AR. Although the Lyapunov algorithm performs slightly better than the STD Algorithm in terms of QoS, the STD Algorithm still has advantages in terms of the Z value.
Figure 3 compares the QoS components of the algorithms, namely the values of r_delay and r_accuracy. It can be observed that the STD Algorithm outperforms the Greedy and Random algorithms. Since the Greedy Algorithm prioritizes maximizing the AR, its QoS performance is slightly inferior to that of the Random Algorithm. The Lyapunov algorithm outperforms the STD Algorithm in terms of r_delay, which is why it achieves a slightly better QoS value than the STD Algorithm. However, the STD Algorithm has an advantage in r_accuracy, indicating that it is better suited to accuracy-sensitive tasks.
Figure 4 shows the sensitivity of the STD Algorithm to α (with β = 1 − α). The task acceptance rate increases with α, while the QoS only rises slightly and then stabilizes. If a task is not completed, its QoS value is 0. This explains why the average QoS value is not large even when α = 0, and why the average QoS value increases slightly as α increases.

5.3. Impact of Number of RSUs, Vehicles and Time

In this subsection, we evaluate the performance differences between the STD Algorithm and other benchmark algorithms by considering variations in the number of RSUs, vehicles, and time slots.
Figure 5a–c illustrate the variations in the system-wide Z, AR, and QoS values of each algorithm when the number of vehicles is fixed at 30 while the number of RSUs in the network increases progressively. It can be observed that the STD Algorithm significantly outperforms the Greedy and Random algorithms on all metrics, while its margin over the Lyapunov algorithm is smaller. As shown in Figure 5c, although the Lyapunov algorithm has a slight advantage in QoS in the early stage, the QoS value of the STD Algorithm becomes slightly larger than that of the Lyapunov algorithm when sufficient resources are available. Meanwhile, in Figure 5a,b, as resources gradually become sufficient, the Z and AR values of the Lyapunov algorithm also gradually approach those of the STD Algorithm.
Figure 6a–c depict the changes in the system’s Z, AR, and QoS values of each algorithm when the number of RSUs is fixed at 10 while the number of vehicles in the network keeps increasing. As shown in Figure 6a,b, it is clear that under the condition of resource scarcity caused by the increasing number of vehicles, although the performance of the STD Algorithm degrades, it still outperforms the other three benchmark algorithms in terms of Z and AR values. As shown in Figure 6c, the QoS value of the STD Algorithm decreases as the number of vehicles increases, but becomes slightly better than that of the Lyapunov algorithm when the number of vehicles reaches 50. Therefore, the STD Algorithm still performs well under resource-constrained conditions.
Figure 7a–c demonstrate the variations in the system-wide Z, AR, and QoS values of each algorithm when the numbers of RSUs and vehicles are fixed at 10 and 30, respectively, while the number of time slots increases. As shown in Figure 7a, as time elapses, the Z values of the STD and Lyapunov algorithms stabilize and even exhibit an upward trend, whereas the Greedy and Random algorithms experience certain fluctuations. This demonstrates the stability of the STD Algorithm. In Figure 7b,c, although the Greedy and Random algorithms also achieve stable and improving AR values, consistent with the STD Algorithm, both suffer from QoS fluctuations due to their respective inherent drawbacks.
The decision-making mechanism of the Greedy algorithm focuses on local optimality. Consequently, in the early stage of the simulation, the Greedy algorithm seizes high-quality resources without considering the long-term impacts of this selection on subsequent tasks. As time progresses, newly arriving tasks fail to be allocated high-quality resources, leading to a decline in QoS performance. When these high-quality resources are released, the Greedy algorithm can then utilize them to serve newly incoming tasks, thus driving a rebound in QoS values.
In contrast, in the early simulation stage, the Random algorithm yields a relatively high QoS value because the system load is extremely low, RSU queues are nearly empty, and high-quality resources remain unoccupied. As time goes on, the inherent flaws of the Random algorithm become prominent: its performance relies entirely on probability without any optimization mechanisms. Meanwhile, the system load rises continuously, RSU queues become congested, and high-quality resources are almost fully occupied. Under such circumstances, the QoS performance of the Random algorithm declines significantly. Nevertheless, in the late stage of the simulation, when the number of trials is sufficiently large, the QoS performance of the Random algorithm stabilizes around the overall expected value of the system, resulting in a flattened curve in the later phase, but with a generally low QoS level.
Although the STD Algorithm is inferior to the Lyapunov algorithm in QoS value, it outperforms the Lyapunov algorithm slightly in terms of stability. Overall, it can be concluded that the STD Algorithm is superior to the Lyapunov algorithm and the other two benchmark algorithms.

6. Conclusions

This paper investigated the joint optimization problem of accuracy-sensitive task offloading and deep learning (DL) model updating in vehicular edge computing (VEC) systems. Considering the limited communication and computation resources of roadside units (RSUs), we formulated a utility maximization problem that jointly accounts for task acceptance rate, latency constraints, and model accuracy requirements. To address the computational complexity of the formulated problem, a load- and accuracy-aware heuristic decision algorithm was proposed, which dynamically selects RSU–model pairs and schedules model updates based on real-time system status. In addition, Lyapunov-based, Greedy, and Random algorithms were implemented as benchmarks to evaluate the stability–performance tradeoff. Simulation results demonstrated that the proposed method achieves better performance than the Random, Greedy, and Lyapunov-based algorithms under diverse traffic densities and workload intensities. The results indicate that jointly optimizing task offloading and model updating is essential in VEC systems where model accuracy directly affects service quality.

7. Future Work

Although promising performance has been achieved, several important directions remain for future investigation:
  • Integration with Learning-Based Optimization: Future work will explore combining the proposed heuristic framework with deep reinforcement learning (DRL) to further enhance adaptability in highly dynamic vehicular scenarios. Learning-based policies may better capture long-term performance tradeoffs under non-stationary traffic patterns.
  • Federated and Distributed Model Updating: The current study assumes centralized model updating at RSUs. Extending the framework to federated learning-based distributed model training, where vehicles collaboratively participate in model updates, would improve scalability and privacy preservation.
  • Mobility-Aware Scheduling: Vehicle mobility and frequent RSU handovers may significantly influence queue stability and update decisions. Incorporating predictive mobility models into the joint optimization framework could improve robustness under high-speed scenarios.
  • Energy-Aware Optimization: Future extensions will consider energy consumption of both RSUs and vehicles, enabling a multi-objective optimization framework balancing latency, accuracy, and energy efficiency.
  • Real-World Deployment and Testbed Validation: While this study relies on simulations, implementing the proposed scheme in a real V2X-enabled edge computing testbed would provide further insights into practical deployment challenges.

Author Contributions

Conceptualization, Y.B. and J.L.; methodology, Y.B.; software, Y.B.; validation, Y.B.; formal analysis, Y.B. and J.L.; investigation, Y.B.; resources, Y.B.; data curation, Y.B.; writing—original draft preparation, Y.B.; writing—review and editing, Y.B. and J.L.; visualization, Y.B.; supervision, Y.B. and J.L.; project administration, Y.B. and J.L.; funding acquisition, Y.B. and J.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research is funded by the National Natural Science Foundation of China (Grant No. 62362005) and the Key Research and Development Program of Guangxi (No. AD25069071).

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
N: The set of RSUs
E: The set of links between RSUs
U: The set of task requests
vol_u, q_u, c_u, d_u: The task data size, required model type, required model accuracy, and latency requirement of task request u
C_n, M_n, k_n: The computing resources, memory resources, and model set of RSU n
I_n: The number of DL models on RSU n
D_n: The storage capacity of RSU n
κ_{n,i}, μ_{n,i}, s_{n,i}: The accuracy, type, and size of model k_{n,i}
T: The set of time slots
x_{n,i}^{u,t}, y_{n,i}^{t}: The task offloading decision and the model update decision
r_{u,n}: The transmission rate between a vehicle user and its associated RSU
path_{n,n'}: The shortest path between RSU n and RSU n'
num(path_{n,n'}): The number of links in path_{n,n'}
r_{n,n'}: The transmission rate between RSUs
t_{u,n,i}^{trans}: The total transmission latency of a task
t_{u,n,i}^{comp}: The total computation latency of a task
P_u: The computational density of a task
f_{n,i}, f_{n,i}^{update}: The computational processing capacity of a DL model for task processing and for model updating
comp_n, P: The number of tasks currently processed by the RSU and the processing factor
AR: The task acceptance rate
QoS: The quality of service
r_{delay}, r_{accuracy}: The delay and accuracy utility components of QoS
Z: The utility function of AR and QoS
Ψ_{n,i}^{u}: The utility score
Φ_{u,n}^{dist}, Φ_{n}^{load}, Φ_{u,n,i}^{acc}, Φ_{n}^{bal}: The utility terms for communication condition, load condition, model accuracy, and load balancing

References

  1. Zhuang, W.; Ye, Q.; Lyu, F.; Cheng, N.; Ren, J. SDN/NFV-Empowered Future IoV With Enhanced Communication, Computing, and Caching. Proc. IEEE 2020, 108, 274–291. [Google Scholar] [CrossRef]
  2. Peng, J.; Shangguan, W.; Chai, L.; Chen, J.; Peng, C.; Cai, B. V2X-Enabled Platoon Control for Aperiodic Congestion Mitigation via Moving Bottlenecks in Mixed Traffic Environments. IEEE Trans. Veh. Technol. 2025, 1–13. [Google Scholar] [CrossRef]
  3. Bozorgchenani, A.; Maghsudi, S.; Tarchi, D.; Hossain, E. Computation Offloading in Heterogeneous Vehicular Edge Networks: On-Line and Off-Policy Bandit Solutions. IEEE Trans. Mob. Comput. 2022, 21, 4233–4248. [Google Scholar] [CrossRef]
  4. Sun, Y.; Wu, Z.; Meng, K.; Zheng, Y. Vehicular Task Offloading and Job Scheduling Method Based on Cloud-Edge Computing. IEEE Trans. Intell. Transp. Syst. 2023, 24, 14651–14662. [Google Scholar] [CrossRef]
  5. Han, Z.; Yang, Y.; Wang, W.; Zhou, L.; Nguyen, T.N.; Su, C. Age Efficient Optimization in UAV-Aided VEC Network: A Game Theory Viewpoint. IEEE Trans. Intell. Transp. Syst. 2022, 23, 25287–25296. [Google Scholar] [CrossRef]
  6. Liu, J.; Ahmed, M.; Mirza, M.A.; Khan, W.U.; Xu, D.; Li, J.; Aziz, A.; Han, Z. RL/DRL Meets Vehicular Task Offloading Using Edge and Vehicular Cloudlet: A Survey. IEEE Internet Things J. 2022, 9, 8315–8338. [Google Scholar] [CrossRef]
  7. Zhou, J.; Tian, D.; Wang, Y.; Sheng, Z.; Duan, X.; Leung, V.C.M. Reliability-Optimal Cooperative Communication and Computing in Connected Vehicle Systems. IEEE Trans. Mob. Comput. 2020, 19, 1216–1232. [Google Scholar] [CrossRef]
  8. Wu, Y.; Wu, J.; Chen, L.; Liu, B.; Yao, M.; Lam, S.K. Share-Aware Joint Model Deployment and Task Offloading for Multi-Task Inference. IEEE Trans. Intell. Transp. Syst. 2024, 25, 5674–5687. [Google Scholar] [CrossRef]
  9. Beytur, H.B.; Aydin, A.G.; de Veciana, G.; Vikalo, H. Optimization of Offloading Policies for Accuracy-Delay Tradeoffs in Hierarchical Inference. In Proceedings of the IEEE INFOCOM 2024—IEEE Conference on Computer Communications, Vancouver, BC, Canada, 20–23 May 2024; pp. 1989–1998. [Google Scholar]
  10. Liu, H.; Liu, S.; Long, S.; Deng, Q.; Li, Z. Joint Optimization of Model Deployment for Freshness-Sensitive Task Assignment in Edge Intelligence. In Proceedings of the IEEE INFOCOM 2024—IEEE Conference on Computer Communications, Vancouver, BC, Canada, 20–23 May 2024; pp. 1751–1760. [Google Scholar]
  11. Hu, Z.; Niu, J.; Ren, T.; Dai, B.; Li, Q.; Xu, M.; Das, S.K. An Efficient Online Computation Offloading Approach for Large-Scale Mobile Edge Computing via Deep Reinforcement Learning. IEEE Trans. Serv. Comput. 2022, 15, 669–683. [Google Scholar] [CrossRef]
  12. Yan, J.; Bi, S.; Zhang, Y.J.; Tao, M. Optimal Task Offloading and Resource Allocation in Mobile-Edge Computing With Inter-User Task Dependency. IEEE Trans. Wirel. Commun. 2020, 19, 235–250. [Google Scholar] [CrossRef]
  13. Guo, H.; Wang, Y.; Liu, J.; Liu, C. Multi-UAV Cooperative Task Offloading and Resource Allocation in 5G Advanced and Beyond. IEEE Trans. Wirel. Commun. 2024, 23, 347–359. [Google Scholar] [CrossRef]
  14. Tran-Dang, H.; Kim, D.-S. FRATO: Fog Resource Based Adaptive Task Offloading for Delay-Minimizing IoT Service Provisioning. IEEE Trans. Parallel Distrib. Syst. 2021, 32, 2491–2508. [Google Scholar] [CrossRef]
  15. Liu, S.; Yu, Y.; Lian, X.; Feng, Y.; She, C.; Yeoh, P.L.; Guo, L.; Vucetic, B.; Li, Y. Dependent Task Scheduling and Offloading for Minimizing Deadline Violation Ratio in Mobile Edge Computing Networks. IEEE J. Sel. Areas Commun. 2023, 41, 538–554. [Google Scholar] [CrossRef]
  16. Long, S.; Zhang, Y.; Deng, Q.; Pei, T.; Ouyang, J.; Xia, Z. An Efficient Task Offloading Approach Based on Multi-Objective Evolutionary Algorithm in Cloud-Edge Collaborative Environment. IEEE Trans. Netw. Sci. Eng. 2023, 10, 645–657. [Google Scholar] [CrossRef]
  17. Fan, W.; Chen, Z.; Hao, Z.; Wu, F.; Liu, Y. Joint Task Offloading and Resource Allocation for Quality-Aware Edge-Assisted Machine Learning Task Inference. IEEE Trans. Veh. Technol. 2023, 72, 6739–6752. [Google Scholar] [CrossRef]
  18. Dong, S.; Xia, Y.; Kamruzzaman, J. Quantum Particle Swarm Optimization for Task Offloading in Mobile Edge Computing. IEEE Trans. Ind. Inform. 2023, 19, 9113–9122. [Google Scholar] [CrossRef]
  19. Zhao, N.; Ye, Z.; Pei, Y.; Liang, Y.-C.; Niyato, D. Multi-Agent Deep Reinforcement Learning for Task Offloading in UAV-Assisted Mobile Edge Computing. IEEE Trans. Wirel. Commun. 2022, 21, 6949–6960. [Google Scholar] [CrossRef]
  20. Chakraborty, C.; Mishra, K.; Majhi, S.K.; Bhuyan, H.K. Intelligent Latency-Aware Tasks Prioritization and Offloading Strategy in Distributed Fog-Cloud of Things. IEEE Trans. Ind. Inform. 2023, 19, 2099–2106. [Google Scholar] [CrossRef]
  21. Duan, S.; Lyu, F.; Wu, H.; Chen, W.; Lu, H.; Dong, Z.; Shen, X. MOTO: Mobility-Aware Online Task Offloading With Adaptive Load Balancing in Small-Cell MEC. IEEE Trans. Mob. Comput. 2024, 23, 645–659. [Google Scholar] [CrossRef]
  22. Wang, H.; Lv, T.; Lin, Z.; Zeng, J. Energy-Delay Minimization of Task Migration Based on Game Theory in MEC-Assisted Vehicular Networks. IEEE Trans. Veh. Technol. 2022, 71, 8175–8188. [Google Scholar] [CrossRef]
  23. Shen, Q.; Hu, B.-J.; Xia, E. Dependency-Aware Task Offloading and Service Caching in Vehicular Edge Computing. IEEE Trans. Veh. Technol. 2022, 71, 13182–13197. [Google Scholar] [CrossRef]
  24. Zhao, L.; Zhang, E.; Wan, S.; Hawbani, A.; Al-Dubai, A.Y.; Min, G.; Zomaya, A.Y. MESON: A Mobility-Aware Dependent Task Offloading Scheme for Urban Vehicular Edge Computing. IEEE Trans. Mob. Comput. 2024, 23, 4259–4272. [Google Scholar] [CrossRef]
  25. Xia, Y.; Zhang, H.; Zhou, X.; Yuan, D. Location-Aware and Delay-Minimizing Task Offloading in Vehicular Edge Computing Networks. IEEE Trans. Veh. Technol. 2023, 72, 16266–16279. [Google Scholar] [CrossRef]
  26. Zhou, X.; Guan, X.; Wang, N.; Chen, H.; Ohtsuki, T.; Zhang, Y.; Han, Z. Large Model Empowered Task Offloading for Multi-Access Edge Computing in the Internet of Vehicles. IEEE Trans. Veh. Technol. 2025, 74, 18012–18025. [Google Scholar] [CrossRef]
  27. Chen, Y.; Zhao, F.; Chen, X.; Wu, Y. Efficient Multi-Vehicle Task Offloading for Mobile Edge Computing in 6G Networks. IEEE Trans. Veh. Technol. 2022, 71, 4584–4595. [Google Scholar] [CrossRef]
  28. Dai, P.; Huang, Y.; Hu, K.; Wu, X.; Xing, H.; Yu, Z. Meta Reinforcement Learning for Multi-Task Offloading in Vehicular Edge Computing. IEEE Trans. Mob. Comput. 2024, 23, 2123–2138. [Google Scholar] [CrossRef]
  29. Shi, J.; Du, J.; Wang, J.; Wang, J.; Yuan, J. Priority-Aware Task Offloading in Vehicular Fog Computing Based on Deep Reinforcement Learning. IEEE Trans. Veh. Technol. 2020, 69, 16067–16081. [Google Scholar] [CrossRef]
  30. Jin, L.; Bao, S.; Zhang, G.; Zhu, H.; Duan, W.; Sun, Q.; Zhang, J.; Ho, P.-H. Adaptive Task Offloading and Resource Management for Vehicular Edge Computing. IEEE Trans. Veh. Technol. 2026, 75, 1521–1535. [Google Scholar]
  31. Zhao, W.; Shi, K.; Liu, Z.; Wu, X.; Zheng, X.; Wei, L.; Kato, N. DRL Connects Lyapunov in Delay and Stability Optimization for Offloading Proactive Sensing Tasks of RSUs. IEEE Trans. Mob. Comput. 2024, 23, 7969–7982. [Google Scholar]
  32. Mishra, K.; Rajareddy, G.N.V.; Ghugar, U.; Chhabra, G.S.; Gandomi, A.H. A Collaborative Computation and Offloading for Compute-Intensive and Latency-Sensitive Dependency-Aware Tasks in Dew-Enabled Vehicular Fog Computing: A Federated Deep Q-Learning Approach. IEEE Trans. Netw. Serv. Manag. 2023, 20, 4600–4614. [Google Scholar] [CrossRef]
  33. Liu, L.; Zhao, M.; Yu, M.; Jan, M.A.; Lan, D.; Taherkordi, A. Mobility-Aware Multi-Hop Task Offloading for Autonomous Driving in Vehicular Edge Computing and Networks. IEEE Trans. Intell. Transp. Syst. 2023, 24, 2169–2182. [Google Scholar] [CrossRef]
  34. Lv, P.; Xu, W.; Nie, J.; Yuan, Y.; Cai, C.; Chen, Z.; Xu, J. Edge Computing Task Offloading for Environmental Perception of Autonomous Vehicles in 6G Networks. IEEE Trans. Netw. Sci. Eng. 2023, 10, 1228–1245. [Google Scholar] [CrossRef]
  35. Chen, S.; Li, W.; Sun, J.; Pace, P.; He, L.; Fortino, G. An Efficient Collaborative Task Offloading Approach Based on Multi-Objective Algorithm in MEC-Assisted Vehicular Networks. IEEE Trans. Veh. Technol. 2025, 74, 11249–11263. [Google Scholar] [CrossRef]
  36. Zheng, J.; Zhang, Y.; Luan, T.H.; Mu, P.K.; Li, G.; Dong, M.; Wu, Y. Digital Twin Enabled Task Offloading for IoVs: A Learning-Based Approach. IEEE Trans. Netw. Sci. Eng. 2024, 11, 659–672. [Google Scholar] [CrossRef]
  37. Sun, Y.; Guo, X.; Song, J.; Zhou, S.; Jiang, Z.; Liu, X.; Niu, Z. Adaptive Learning-Based Task Offloading for Vehicular Edge Computing Systems. IEEE Trans. Veh. Technol. 2019, 68, 3061–3074. [Google Scholar] [CrossRef]
  38. Xiao, Z.; Shu, J.; Jiang, H.; Min, G.; Chen, H.; Han, Z. Perception Task Offloading With Collaborative Computation for Autonomous Driving. IEEE J. Sel. Areas Commun. 2023, 41, 457–473. [Google Scholar] [CrossRef]
  39. Tan, L.; Kuang, Z.; Zhao, L.; Liu, A. Energy-Efficient Joint Task Offloading and Resource Allocation in OFDMA-Based Collaborative Edge Computing. IEEE Trans. Wireless Commun. 2022, 21, 1960–1972. [Google Scholar] [CrossRef]
  40. Tang, H.; Wu, H.; Zhao, Y.; Li, R. Joint Computation Offloading and Resource Allocation Under Task-Overflowed Situations in Mobile-Edge Computing. IEEE Trans. Netw. Serv. Manag. 2022, 19, 1539–1553. [Google Scholar] [CrossRef]
Figure 1. The instance of task offloading and model updating in VEC. The squares in the figure represent the tasks of vehicles, and the color of each square indicates the type of task.
Figure 2. Comparison of the Z, AR, and QoS values achieved by different algorithms.
Figure 3. Performance of different algorithms for the QoS.
Figure 4. The performance of the STD Algorithm varies with the value of α .
Figure 5. Impact of the number of RSUs, (a) Z value, (b) AR value, and (c) QoS value.
Figure 6. Impact of the number of vehicles, (a) Z value, (b) AR value, and (c) QoS value.
Figure 7. Impact of the number of time slots, (a) Z value, (b) AR value, and (c) QoS value.
Table 1. Parameters of each model.

Model Type     I                    II                   III
Accuracy       [0.5, 0.7, 0.87]     [0.4, 0.6, 0.8]      [0.6, 0.75, 0.85]
Update Size    [25, 35, 45]         [20, 30, 40]         [30, 40, 50]
