Article

Multi-Objective Optimization with Server Load Sensing in Smart Transportation

1 School of Computer and Cyber Sciences, Communication University of China, Beijing 100024, China
2 School of Computer and Information Engineering, Tianjin Chengjian University, Tianjin 300384, China
3 Library, Tianjin Chengjian University, Tianjin 300384, China
* Authors to whom correspondence should be addressed.
Appl. Sci. 2025, 15(17), 9717; https://doi.org/10.3390/app15179717
Submission received: 19 July 2025 / Revised: 28 August 2025 / Accepted: 1 September 2025 / Published: 4 September 2025

Abstract

The rapid development of telematics technology has greatly supported high-computing applications such as autonomous driving and real-time road condition prediction. However, the limited computational resources and dynamic topology of in-vehicle terminals pose challenges such as delay, load imbalance, and bandwidth consumption. To address these challenges, a three-layer vehicular network architecture based on cloud–edge–end collaboration is proposed, with V2X technology used for multi-hop transmission. Models for delay, energy consumption, and edge caching are designed to meet the requirements of low delay, energy efficiency, and effective caching. Additionally, a load-aware dynamic pricing model for edge resources is proposed to balance service quality and cost-effectiveness. The enhanced NSGA-III algorithm (ADP-NSGA-III) is applied to optimize system delay, energy consumption, and system resource pricing. The experimental results (mean of 30 independent runs) indicate that, compared with the NSGA-II, NSGA-III, MOEA/D, and SPEA2 optimization schemes, the proposed scheme reduced system delay by 21.63%, 5.96%, 17.84%, and 8.30%, respectively, in a system with 55 tasks; energy consumption was reduced by 11.87%, 7.58%, 15.59%, and 9.94%, respectively.

1. Introduction

With the rapid advancement of Internet of Vehicles (IoV) and artificial intelligence technologies, compute-intensive applications such as autonomous driving, intelligent transportation systems, and real-time traffic prediction have emerged. Vehicle-to-infrastructure interconnectivity not only enhances road safety and operational efficiency but also accelerates the development of smart cities [1,2]. However, the limited computational resources at vehicular terminals, combined with stringent real-time data processing demands, present significant challenges in ensuring efficient task execution and quality of service (QoS) guarantees, particularly in dynamic vehicular environments.
While traditional cloud computing platforms address the computational demands of IoV through elastic resource provisioning, their effectiveness in delay-sensitive scenarios is limited by the transmission delay and jitter associated with long-distance data transfers [3,4]. These delay bottlenecks can undermine system responsiveness in time-critical operations, such as emergency obstacle avoidance and autonomous navigation, directly jeopardizing driving safety [5]. Mobile edge computing (MEC) alleviates these challenges by deploying computational resources at base stations near the network edge, significantly reducing delay and improving real-time processing capabilities [6]. However, conventional edge computing frameworks predominantly rely on static resource allocation models, which offer limited adaptability to high-speed vehicular mobility, fluctuating workloads, and dynamic network topologies.

Research on resource scheduling and task offloading in IoV has attracted significant attention; however, critical challenges remain [7,8]. Existing optimization models primarily focus on minimizing delay and energy consumption, often overlooking the broader considerations of edge node economic costs and load balancing; this oversight results in suboptimal resource allocation under specific constraints. Traditional evolutionary algorithms are prone to entrapment in local optima when applied to high-dimensional multi-objective optimization problems, further exacerbated by slow convergence and instability in dynamic environments, which hinder real-time processing and compromise global optimality.

The proposed multi-objective optimization scheme, based on an enhanced NSGA-III algorithm, addresses the delay, load imbalance, and high bandwidth consumption challenges of IoV resource scheduling. A three-tier cloud–edge–end collaborative IoV architecture is designed to facilitate multi-hop vehicular communication through V2X technology. System models for delay, energy consumption, and edge caching are formulated to ensure low-delay operation, energy efficiency, and effective cache utilization. To enhance economic sustainability, a load-aware dynamic pricing model is introduced, converting real-time edge server workloads into economic cost metrics to achieve a QoS–cost equilibrium. Finally, the NSGA-III algorithm is augmented with adaptive reference vector adjustment and distributed population management strategies, significantly improving convergence speed and global optimization capability. Experimental results show that, for 55 concurrent tasks, the proposed solution reduces system delay by 21.63%, 5.96%, 17.84%, and 8.30% compared with the NSGA-II, NSGA-III, MOEA/D, and SPEA2 schemes, respectively, with corresponding reductions in energy consumption (see Section 5).
The main contributions of this paper are as follows:
  • A three-tier cloud–edge–end collaborative IoV architecture is proposed, utilizing V2X-enabled multi-hop vehicular communication.
  • Comprehensive system models are developed that integrate delay, energy consumption, and edge caching, along with a load-aware dynamic pricing mechanism that balances QoS and cost-effectiveness through economic cost quantification.
  • An Adaptive Distributed Population-enhanced NSGA-III (ADP-NSGA-III) algorithm is designed, incorporating adaptive reference vector adjustment and distributed population management to address the QoS-cost paradox in conventional cloud–edge resource scheduling via the tri-objective optimization of delay, energy consumption, and resource pricing.

2. Related Work

With the rapid development of cloud–edge–end collaboration technology in telematics, communication and computational capability between vehicles have become critical factors in supporting high-computing applications such as autonomous driving and real-time road condition prediction [9,10]. To enhance computational efficiency and task scheduling performance in vehicular networking, researchers have proposed various solutions, most of which focus on the cooperative optimization of cloud computing, edge computing, and their integration [11,12].

2.1. Multi-Layer Resource Scheduling for Telematics with Cloud–Edge–End Collaboration

The application of cloud–edge collaboration technology in vehicular network systems can significantly enhance resource utilization and improve the user experience [13]. The study in [14] incorporates a caching system into the task execution process to address the resource waste caused by redundant computations. By decomposing tasks into several subtasks, the study adopts a clustered edge server layout strategy and proposes a multi-agent deep reinforcement learning (MADRL) algorithm for edge–cloud collaboration. This approach optimizes the placement relationship between vehicle applications and edge servers, improving the utilization of in-vehicle computing resources. The work in [15] investigates a dynamic resource allocation scheme for diverse tasks, formulating an optimization problem that minimizes the average task delay for cloud–edge collaborative offloading under long-term constraints on energy consumption and system cost. The work in [16] focuses on joint task offloading, scheduling, and resource allocation in an MEC-enabled vehicle edge network (VEN), where vehicles can offload tasks to edge/cloud servers via vehicle-to-infrastructure (V2I) links or to other end vehicles via vehicle-to-vehicle (V2V) links. The work in [17] proposes a layered framework that improves server resource utilization and vehicle service satisfaction by formulating joint resource allocation and task offloading problems; it stimulates horizontal and vertical collaboration among vehicles, VEC servers, and cloud servers to optimize resource allocation within VEC servers and balance offloading loads among them.

2.2. Multi-Objective Optimization and Resource Pricing

Introducing multi-objective optimization methods and resource pricing schemes into the telematics system can effectively meet the diverse service demands of users [18]. The study in [19] formulates an optimization problem using a multi-objective optimization approach, treating resource allocation and pricing as optimization variables, and proposes a smart joint dynamic pricing and resource sharing (SJDPRS) scheme that utilizes a novel deep reinforcement learning (DRL) method. To improve system operation efficiency, the study in [20] introduces a multi-objective multi-agent reinforcement learning (MARL) algorithm with high learning efficiency and low computational demands; the algorithm automatically triggers adaptive few-shot learning in dynamic, distributed, and noisy environments, coping with reward sparsity while minimizing delay. Furthermore, the work in [21] proposes a collaborative cloud–edge caching strategy based on content awareness of structured tasks. It models the dependencies between task fragments using fuzzy judgment criteria, optimizing system delay, energy consumption, and edge server load balancing to improve both the user service experience and overall system performance. The study in [22] proposes an efficient and energy-saving offloading decision scheme for V2X scenarios. First, task segmentation, offloading delay, energy consumption, load balancing, and multi-objective optimization models are constructed; then, based on comprehensive consideration of data transfer delay, energy consumption, and load balance, a MOEA/D-based task offloading scheme is proposed.
The aforementioned studies have made significant progress in resource scheduling, task offloading, and multi-objective optimization; however, several critical issues remain unresolved. First, existing research has not fully taken into account heterogeneous resources, particularly how to optimally utilize these resources. Second, the balance between delay, energy consumption, and economic factors in multi-objective optimization problems still lacks effective dynamic scheduling mechanisms. As the transportation environment evolves, traditional scheduling methods may not meet these complex objectives. Finally, conventional optimization algorithms are prone to becoming trapped in local optima in dynamic environments, limiting their ability to perform effective global optimization and resulting in suboptimal resource scheduling.
To address these challenges, this paper proposes an enhanced resource scheduling scheme based on the NSGA-III algorithm. The proposed scheme improves both the efficiency and economic viability of resource scheduling by incorporating a heterogeneous resource quantization model and a dynamic resource pricing function. At the algorithmic level, the improved NSGA-III algorithm better balances the demands of multi-objective optimization, overcoming the issue of local optima common in traditional algorithms. Additionally, the introduction of dynamic pricing further enhances the flexibility and cost effectiveness of the system. This innovative scheduling scheme provides robust support for the efficient operation of intelligent networked vehicles in complex transportation scenarios.

3. System Modeling

3.1. Three-Tier Communication Architecture for Cloud–Edge–End Collaboration

In this paper, we consider a bidirectional linear intelligent road driving scenario for intelligent Internet-connected vehicles, operating within a cloud–edge–end collaborative architecture for intelligent transportation. The scenario consists of one cloud server, N edge servers (each equipped with a roadside unit (RSU)), M intelligent Internet-connected vehicle terminals, and D structured, compute-intensive tasks to be processed. In this collaborative caching architecture, the cloud server layer represents the resource pool provided by the cloud computing platform, designed to handle a high volume of data transmission and storage requests, thereby laying a solid foundation for the efficient operation of the collaborative caching system. Data collection devices, such as RSUs, radars, smart cameras, and traffic signals, collect real-time parameters including vehicle speed, position, travel direction, and surrounding road conditions. Because these data are heterogeneous, this paper employs AI-based multi-source heterogeneous data fusion to preprocess them, enabling efficient computational offloading. The preprocessed data interact with the in-vehicle terminal, which decides whether to process the data locally or offload them to other servers for further processing, ultimately receiving the processed results.
The assumptions made in this paper while designing the system model are as follows:
  • The cloud server can provide services to any service unit within the scenario, while the edge server and the intelligent Internet-connected vehicle-mounted terminal can only serve units within their respective signal coverage areas.
  • The communication mode between the intelligent Internet-connected vehicle-mounted terminal and the cloud server is V2C (Vehicle-to-Cloud), that between the vehicle-mounted terminal and the edge server is V2E (Vehicle-to-Edge), and that between vehicle-mounted terminals is V2V (Vehicle-to-Vehicle).
  • During task execution, both the vehicle and the edge server have stable computing power. However, the vehicle computing power is significantly lower than that of the edge server, and once servers reach their maximum load, they cannot continue processing tasks.
  • The three-layer cloud–edge–end collaborative communication architecture proposed in this paper enables dynamic resource allocation among cloud servers, vehicles, and edge servers.
  • The scenario considered in this paper is an idealized quasi-dynamic scenario, in which the period during which the intelligent Internet-connected vehicle terminal passes through the edge server coverage is divided into time slots $\mathcal{T} = \{1, 2, \ldots, t, \ldots, T\}$, each of equal, fixed duration.
  • The mobility of vehicles is simulated using a random walk model, in which the direction and speed of each vehicle are random; random trajectories are generated, and vehicle movement is discretized into time steps.
  • Task generation follows the quasi-dynamic scenario: task sizes are drawn randomly and uniformly from the range of 10 MB to 100 MB (a minimal sketch of these two assumptions follows this list).
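As an illustration of the last two assumptions, the following minimal Python sketch generates a random-walk trajectory and uniformly distributed task sizes; the speed bound and the counts are placeholders, not values from the paper.

```python
import math
import random

random.seed(42)  # the same random seed as the experiments in Section 5

SPEED_MAX = 20.0  # maximum displacement per time step; illustrative only

def generate_task_size_mb() -> float:
    """Task sizes are drawn uniformly from [10, 100] MB (Section 3.1)."""
    return random.uniform(10.0, 100.0)

def random_walk_step(x: float, y: float):
    """One discretized random-walk step with random direction and speed."""
    angle = random.uniform(0.0, 2.0 * math.pi)
    speed = random.uniform(0.0, SPEED_MAX)
    return (x + speed * math.cos(angle), y + speed * math.sin(angle))

# Quasi-dynamic trajectory of one vehicle terminal over 10 time slots
position = (0.0, 0.0)
for _ in range(10):
    position = random_walk_step(*position)

tasks = [generate_task_size_mb() for _ in range(5)]
```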
The collaborative communication architecture with edge–end clouds proposed in this paper is illustrated in Figure 1.
As shown in Figure 1, the three-layer architecture designed in this paper integrates cloud–edge–end collaboration. The cloud layer consists of a powerful cloud computing center responsible for the global storage and analysis of massive data, intelligent applications, and wide-area services. The middle layer is the edge layer, composed of widely distributed edge computing nodes (e.g., RSUs, base stations, or regional edge servers); it processes near-real-time data within its coverage area and executes low-delay, high-reliability tasks. The terminal layer (vehicles and roadside devices) is responsible for data collection, preliminary processing, and instruction execution. To more intuitively highlight the advantages of the model, the entity units within the model are mathematically abstracted; the abstracted parameters are presented in Table 1.
In Table 1, $SIG_{cloud}$ is the characteristic information of the cloud server; $MECS$ is the set of edge servers; $SIG_{mec_n}$ is the feature information of edge server $mec_n$; $USER$ is the set of users; $SIG_{user}$ is the characteristic information of the users; and $D_{task}$ is the set of tasks. $f_{cloud}^{calc}$ (mips) is the computational power of the central cloud server, $p_{cloud}^{tran}$ (W) is its transmission power, and $p_{cloud}^{calc}$ (W) is its computation power consumption. $f_{user_m}^{calc}$ (mips) is the computational power of $user_m$, $v_{user_m}^{tran}$ is the travel speed of $user_m$, and $loc_{user_m}^{t}(x,y)$ is the position of $user_m$ at moment $t$. $f_{mec_n}^{calc}$ (mips) is the computational power of $mec_n$, $s_{mec_n}^{cache}$ (MHz) is the cache resource of $mec_n$, $p_{mec_n}^{calc}$ (W) is the computation power consumption of $mec_n$, and $loc_{mec_n}(x,y)$ is the location of $mec_n$. $d_{task_d}^{dav}$ (MB) is the data volume of $task_d$, $f_{task_d}^{calc}$ (mips) is the computational resource required to compute $task_d$, $s_{task_d}^{cache}$ (MHz) is the cache space required to cache $task_d$, and $t_{task_d}^{rest}$ (ms) is the maximum tolerable delay of $task_d$.

3.2. Communication Model

In this paper, we employ a hybrid V2X communication mode, with channel gains following Nakagami fading [7,19].
The communication rate $v_{cloud}^{user_m}$ between the intelligent connected vehicle terminal and the cloud server is given by Equation (1):

$$v_{cloud}^{user_m} = \left(\frac{B_{cloud}^{user_m}}{ch_{cloud}^{user_m}}\right) \log_2\left(1 + \frac{p_{user_m}^{trans} \times s}{\sigma^2}\right) \tag{1}$$

where $B_{cloud}^{user_m}$ denotes the channel bandwidth between the user terminal and the cloud server, $ch_{cloud}^{user_m}$ is the number of channels between them, $\sigma^2$ is the Gaussian white noise power, and $s$ is the channel loss constant, set in this paper to $1 \times 10^{-9}$.
The data transfer rate $v_{m,n}^{v2i}$ of the user terminal communicating with the edge server via V2I is given by Equation (2):

$$v_{m,n}^{v2i} = B_{m,n} \log_2\left(1 + \frac{p_{user_m}^{trans} \times g_{m,n}^{v2i}}{\sigma^2 + d_{m,n}}\right) \tag{2}$$

where $B_{m,n}$ denotes the channel bandwidth between the user terminal and the edge server, $g_{m,n}^{v2i}$ denotes the corresponding channel gain, $\sigma^2$ denotes the Gaussian white noise power, and $d_{m,n}$ denotes the signal interference constant factor.
When the user communicates via V2V wireless communication, the data transfer rate $v_{m,k}^{v2v}$ is given by Equation (3):

$$v_{m,k}^{v2v} = B_{m,k} \log_2\left(1 + \frac{p_{user_m}^{trans} \times g_{m,k}^{v2v}}{\sigma^2 + (L_{m,k})^2}\right) \tag{3}$$

where $B_{m,k}$ denotes the communication bandwidth between user terminals, $g_{m,k}^{v2v}$ denotes the channel gain, and $L_{m,k}$ denotes the Euclidean distance between users.
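As an illustration, the rate models of Equations (1)–(3) translate directly into code; the argument names below are our own, and the values passed in would be the scenario constants of Table 2.

```python
import math

def rate_v2c(bandwidth, num_channels, p_trans, s_loss, noise_power):
    """Equation (1): rate between a vehicle terminal and the cloud server."""
    return (bandwidth / num_channels) * math.log2(1.0 + p_trans * s_loss / noise_power)

def rate_v2i(bandwidth, p_trans, gain, noise_power, interference):
    """Equation (2): rate between a vehicle terminal and an edge server."""
    return bandwidth * math.log2(1.0 + p_trans * gain / (noise_power + interference))

def rate_v2v(bandwidth, p_trans, gain, noise_power, distance):
    """Equation (3): V2V rate; interference grows with the squared distance."""
    return bandwidth * math.log2(1.0 + p_trans * gain / (noise_power + distance ** 2))
```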

3.3. Edge Caching Model

In this paper, popular service content is cached in advance at the corresponding service nodes, according to task popularity, to improve the service response speed. The edge server periodically updates the cached content through its caching policy: it tracks how many times each content item has been requested in the current RSU, and if the request count exceeds a given threshold, the placed content is replaced, so that the popularity of requested content and its hotspots are captured [21]. The popularity of vehicle-requested content follows Zipf's law. For $task_d$, the cache hit rate $P(task_d)$ is given by Equation (4):

$$P(task_d) = \frac{task_d^{-\eta}}{\sum_{d=1}^{O} task_d^{-\eta}} \tag{4}$$
where O denotes the total number of contents, and η represents the degree of preference for the frequency distribution of requests. A higher value of η will result in requests being highly concentrated on a few popular pieces of content, while a lower value of η means that content requests are relatively evenly distributed.
The cache matrix $S$ defined in this paper is given by Equation (5):

$$S = \{task_d \mid c(n,i)\} \tag{5}$$

where $c(n,i) = 0$ indicates that $task_d$ has been cached in $mec_n$; otherwise ($c(n,i) = 1$), $task_d$ is not cached.
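A minimal sketch of the Zipf popularity of Equation (4) and the threshold-based replacement rule described above; the threshold value is an assumption, as the paper does not specify one.

```python
def zipf_popularity(rank, total_contents, eta):
    """Equation (4): popularity of the content ranked `rank` (1 = most popular)."""
    norm = sum(d ** -eta for d in range(1, total_contents + 1))
    return rank ** -eta / norm

def should_replace_cache(request_count, threshold=50):
    """Replace placed content once its request count exceeds a threshold
    (the value 50 is illustrative; the paper leaves it as a parameter)."""
    return request_count > threshold

# With a larger eta, requests concentrate on the few most popular contents.
print(zipf_popularity(1, 100, 1.2), zipf_popularity(1, 100, 0.4))
```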

3.4. Delay Modeling and Energy Modeling

In the cloud–edge–end collaboration scenario of telematics, the delay incurred during task execution mainly comprises two parts: the transmission delay of the task and the computation delay of the task [21]. The transmission delay is mainly incurred when the intelligent Internet-connected vehicle terminal sends requests to the service nodes within its communication range to obtain the specified data or services. The computation delay is incurred when the task is computed on a server and is mainly determined by the computational capability of that server.

3.4.1. Tasks Are Executed Locally

If $task_d$ is executed locally, the computational delay is given by Equation (6):

$$t_{task_d}^{local} = \frac{f_{task_d}^{calc}}{f_{user_m}^{calc}} \tag{6}$$

The energy consumption $e_{task_d}^{local}$ of executing $task_d$ locally is given by Equation (7):

$$e_{task_d}^{local} = p_{user_m}^{calc} \times \frac{f_{task_d}^{calc}}{f_{user_m}^{calc}} \tag{7}$$

3.4.2. Tasks Are Executed in Other Vehicles

If $task_d$ is executed in another vehicle, the computation delay $t_{task_d}^{v2v}$ and energy consumption $e_{task_d}^{v2v}$ are given by Equations (8) and (9):

$$t_{task_d}^{v2v} = \frac{d_{task_d}^{dav}}{v_{m,k}^{v2v}} \times \rho + \frac{f_{task_d}^{calc}}{f_{user_k}^{calc}} \tag{8}$$

$$e_{task_d}^{v2v} = \frac{d_{task_d}^{dav}}{v_{m,k}^{v2v}} \times p_{user_m}^{trans} + \frac{f_{task_d}^{calc}}{f_{user_k}^{calc}} \times p_{user_k}^{calc} \tag{9}$$

where $\rho$ is the number of V2V wireless hops used by the task, $f_{user_k}^{calc}$ is the computational power of the target vehicle, and $p_{user_k}^{calc}$ is the computation power consumption of the target vehicle.

3.4.3. Tasks Are Executed at the Edge Server

The delay of executing $task_d$ at the edge server consists of the transmission delay of sending $task_d$ to the edge server and the computation delay of $task_d$ on the edge server. The delay $t_{task_d}^{mecs}$ and the energy consumption $e_{task_d}^{mecs}$ at the edge server are given by Equations (10) and (11):

$$t_{task_d}^{mecs} = \begin{cases} \dfrac{d_{task_d}^{dav}}{v_{m,n}^{v2i}} + \dfrac{f_{task_d}^{calc}}{f_{mec_n}^{calc}}, & c(n,i) = 1 \\[2mm] \dfrac{d_{task_d}^{dav}}{v_{m,n}^{v2i}}, & c(n,i) = 0 \end{cases} \tag{10}$$

$$e_{task_d}^{mecs} = \begin{cases} \dfrac{d_{task_d}^{dav}}{v_{m,n}^{v2i}} \times p_{user_m}^{trans} + \dfrac{f_{task_d}^{calc}}{f_{mec_n}^{calc}} \times p_{user_m}^{calc}, & c(n,i) = 1 \\[2mm] \dfrac{d_{task_d}^{dav}}{v_{m,n}^{v2i}} \times p_{user_m}^{trans}, & c(n,i) = 0 \end{cases} \tag{11}$$

3.4.4. Tasks Are Executed at the Cloud Server

If $task_d$ is executed on the cloud server, the computational delay $t_{task_d}^{cloud}$ and the energy consumption $e_{task_d}^{cloud}$ are given by Equations (12) and (13):

$$t_{task_d}^{cloud} = \frac{d_{task_d}^{dav}}{v_{cloud}^{user_m}} + \frac{f_{task_d}^{calc}}{f_{cloud}^{calc}} \tag{12}$$

$$e_{task_d}^{cloud} = \frac{d_{task_d}^{dav}}{v_{cloud}^{user_m}} \times p_{user_m}^{trans} + \frac{f_{task_d}^{calc}}{f_{cloud}^{calc}} \times p_{cloud}^{calc} \tag{13}$$

Let the total task offloading delay be $T(X)$ and the total energy consumption be $E(X)$, as given by Equations (14) and (15):

$$T(X) = \sum_{d \in D} \left( t_{task_d}^{local} + t_{task_d}^{mecs} + t_{task_d}^{v2v} + t_{task_d}^{cloud} \right) \tag{14}$$

$$E(X) = \sum_{d \in D} \left( e_{task_d}^{local} + e_{task_d}^{mecs} + e_{task_d}^{v2v} + e_{task_d}^{cloud} \right) \tag{15}$$
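The per-mode delay/energy models of Equations (6)–(13) translate directly into code. This sketch follows the equations as written (note that Equation (11) charges the edge computation energy at the vehicle's power coefficient $p_{user_m}^{calc}$), with argument names of our own choosing.

```python
def cost_local(f_task, f_user, p_user_calc):
    """Equations (6)-(7): local execution -> (delay, energy)."""
    t = f_task / f_user
    return t, p_user_calc * t

def cost_v2v(d_task, v_v2v, rho, f_task, f_peer, p_user_trans, p_peer_calc):
    """Equations (8)-(9): execution on another vehicle over rho V2V hops."""
    t_cpu = f_task / f_peer
    t = (d_task / v_v2v) * rho + t_cpu
    e = (d_task / v_v2v) * p_user_trans + t_cpu * p_peer_calc
    return t, e

def cost_edge(d_task, v_v2i, f_task, f_mec, p_user_trans, p_user_calc, cached):
    """Equations (10)-(11): a cache hit (c(n,i)=0) skips the computation part."""
    t_tx = d_task / v_v2i
    if cached:
        return t_tx, t_tx * p_user_trans
    t_cpu = f_task / f_mec
    return t_tx + t_cpu, t_tx * p_user_trans + t_cpu * p_user_calc

def cost_cloud(d_task, v_cloud, f_task, f_cloud, p_user_trans, p_cloud_calc):
    """Equations (12)-(13): cloud execution."""
    t_cpu = f_task / f_cloud
    return (d_task / v_cloud) + t_cpu, (d_task / v_cloud) * p_user_trans + t_cpu * p_cloud_calc
```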

3.5. Resource Dynamic Pricing Model Based on Load Balancing Awareness

In the cloud–edge–end collaborative computing scenario of telematics, a dynamic resource pricing mechanism can influence task allocation decisions through price signals, enabling the system to adaptively find a balance between delay, energy consumption, and server load balance [22]. Since cloud server resources are comparatively abundant, this paper considers only vehicle terminals and edge servers when designing the resource heterogeneity metric and the dynamic resource pricing model. The load-balancing-aware dynamic resource price is given by Equation (16):

$$p_n(t) = \exp\left(k_p \times \lambda_n(t)\right) \tag{16}$$

where $k_p$ is the price elasticity coefficient and $\lambda_n(t)$ is the dynamic load factor of the edge server; a larger $\lambda_n(t)$ indicates a higher load imbalance of the edge server in the system, and vice versa.
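Equation (16) in code, together with a small sweep that mirrors the $k_p$ sensitivity analysis of Section 5 (the load-factor values are illustrative):

```python
import math

def dynamic_price(k_p, load_factor):
    """Equation (16): exponential, load-aware price of edge server n at slot t."""
    return math.exp(k_p * load_factor)

# A larger k_p amplifies how strongly load imbalance shows up in the price.
for k_p in (0.1, 0.5, 1.0):
    print(k_p, [round(dynamic_price(k_p, lam), 3) for lam in (0.2, 0.5, 0.9)])
```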

3.6. Multi-Objective Optimization Model

Since the problem studied in this paper cannot be solved in polynomial time (it is NP-hard) [23], the multi-objective optimization model developed in this paper is given by Equation (17):

$$\begin{aligned} & \min T, \quad \min E, \quad \min\left(\frac{p_n}{p_{ref}}\right) \\ \text{s.t.} \quad & C1: f_{task_d}^{calc} \leq f_{mec_n}^{calc} \\ & C2: f_{task_d}^{calc} \leq f_{user_m}^{calc} \\ & C3: s_{task_d}^{cache} \leq s_{cloud}^{cache} \\ & C4: s_{task_d}^{cache} \leq s_{mec_n}^{cache} \\ & C5: t_{task_d} \leq t_{task_d}^{rest} \end{aligned} \tag{17}$$

where $p_{ref}$ is the reference price of system resources. Constraint $C1$ indicates that the computational resources required by $task_d$ cannot exceed the maximum computational resources of the edge server; constraint $C2$ indicates that they cannot exceed the maximum computational resources of the on-board terminal; constraint $C3$ indicates that the resources required for caching $task_d$ cannot exceed the maximum caching resources of the cloud server; constraint $C4$ indicates that they cannot exceed the maximum caching resources of the edge server; and constraint $C5$ indicates that the computation must be completed within the maximum tolerable response delay.

4. NSGA-III Based Optimization Scheme

As the dimensionality and dynamic complexity of multi-objective optimization in telematics scenarios grow, the contradiction between maintaining population diversity and achieving convergence accuracy under traditional static reference vector strategies becomes increasingly prominent [24]. Compared with the fixed reference vector generation method adopted by the classical NSGA-III algorithm, the ADP-NSGA-III algorithm proposed in this paper continuously monitors the population distribution in the objective space during the iterative process and performs evolutionary operations independently for each sub-population. In addition, because an individual migration mechanism based on a von Neumann topology is designed into the improved algorithm, a directional migration strategy among sub-populations can realize a dynamic balance between global exploration and local exploitation.

4.1. Coding

Because a binary coding space is too limited to represent the offloading decisions in the telematics cloud–edge–end collaboration scenario, this paper adopts decimal coding [25]. The coding method is given by Equation (18):

$$X_i = \begin{cases} 0, & \text{execute locally}; \\ [1, n], & \text{execute at an edge server}; \\ n+1, & \text{execute on the cloud server} \end{cases} \tag{18}$$

where $X_i = 0$ denotes that the task is executed locally at the intelligent connected vehicle terminal, $X_i \in [1, n]$ denotes that the task is offloaded to the corresponding edge server for execution, and $X_i = n+1$ denotes that the task is offloaded to the cloud server for execution.
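A minimal sketch of this decimal encoding; the number of edge servers and the chromosome length are illustrative.

```python
import random

N_EDGE = 6  # number of edge servers n; illustrative

def decode(x_i):
    """Equation (18): 0 = local, 1..n = edge server index, n+1 = cloud."""
    if x_i == 0:
        return "local"
    if 1 <= x_i <= N_EDGE:
        return f"edge server {x_i}"
    return "cloud"

# One chromosome holds the offloading decision X_i for each of the D tasks.
chromosome = [random.randint(0, N_EDGE + 1) for _ in range(10)]
print([decode(g) for g in chromosome])
```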

4.2. Adaptation Evaluation Function

In this paper, the merit of an individual is assessed using a fitness evaluation function [7]. The fitness function couples the system delay, the system energy consumption, and the dynamic resource price, computed by Equations (14), (15), and (16), respectively; the final optimization objective is to minimize all three and achieve the optimal trade-off among them.
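Conceptually, the fitness of one offloading plan is the tri-objective vector of Equation (17). The sketch below assumes the per-task delays and energies have already been computed with Equations (6)–(13) and aggregates the price term as a mean over edge servers; that aggregation choice is our assumption, not the paper's.

```python
P_REF = 1.0  # reference resource price p_ref; illustrative

def fitness(per_task_delay, per_task_energy, edge_prices):
    """Tri-objective vector to be minimized: (T(X), E(X), normalized price)."""
    t_total = sum(per_task_delay)   # Equation (14)
    e_total = sum(per_task_energy)  # Equation (15)
    price = sum(p / P_REF for p in edge_prices) / len(edge_prices)
    return (t_total, e_total, price)

print(fitness([1.2, 0.8], [30.0, 22.5], [1.05, 0.92, 1.10]))
```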

4.3. ADP-NSGA-III Algorithm Design

4.3.1. Reference Vector Dynamic Response Mechanism

In the traditional NSGA-III algorithm, the reference vectors are fixed, which may prevent the solutions from adapting to the complex telematics cloud–edge–end collaborative system and from tracking changes in the Pareto front [23]. This paper therefore adopts a reference vector dynamic response mechanism that adjusts the reference vector directions in real time, preventing the solutions from concentrating in a specific region. The mechanism designed in this paper is given by Equation (19):
$$\Delta M(t) = \lambda \times \frac{\partial \varphi(p_t)}{\partial M} + \delta \times \left( M(t-1) - M^* \right) \tag{19}$$

where $\varphi(p_t)$ represents the population entropy, an indicator of population diversity. We define the population entropy based on Shannon's entropy formula, associate each individual with its objective vector, and calculate the entropy of the entire population. $\lambda$ ($\lambda \in (0, 1]$) is the adjustment step of the reference vector, $\delta$ is the memory retention coefficient that balances the weight of the historical reference vector against the current adjustment, and $M^*$ is the ideal reference vector direction. The population entropy $\varphi(p_t)$ is calculated as in Equation (20):

$$\varphi(p_t) = -\sum_{i=1}^{N} p_i \log(p_i) \tag{20}$$

where $p_i$ is the normalized proportion of individuals in region $i$ of the objective space.

The partial derivative measures the impact of changes in the reference vector on the population entropy. We use the finite difference method to approximate the effect of a small perturbation of the reference vector $M$ on the distribution of the population along the Pareto front, as in Equation (21):

$$\frac{\partial \varphi(p_t)}{\partial M} \approx \frac{\varphi(p_{t+\epsilon}) - \varphi(p_t)}{\epsilon} \tag{21}$$

where $\epsilon$ is the perturbation value.
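A sketch of this adaptation loop under stated assumptions: the binning of individuals into objective-space regions, the scalar treatment of the entropy gradient, and the step sizes are all illustrative simplifications.

```python
import math

def population_entropy(region_proportions):
    """Equation (20): Shannon entropy of the normalized distribution of
    individuals over objective-space regions."""
    return -sum(p * math.log(p) for p in region_proportions if p > 0.0)

def entropy_partial(entropy_at, eps=1e-3):
    """Equation (21): finite-difference estimate of the entropy's sensitivity
    to a small perturbation eps of the reference vector."""
    return (entropy_at(eps) - entropy_at(0.0)) / eps

def update_reference_vector(m_prev, m_ideal, grad, lam=0.1, delta=0.5):
    """Equation (19): M(t) = M(t-1) + lam * dphi/dM + delta * (M(t-1) - M*);
    the gradient is applied per component as a scalar for simplicity."""
    return [mp + lam * grad + delta * (mp - mi) for mp, mi in zip(m_prev, m_ideal)]
```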

4.3.2. Population Delimitation Mechanisms Based on Partitioning Strategies

In the cloud–edge–end collaboration scenario of telematics, the large number of tasks and the massive amount of traffic information make it easy for the traditional NSGA-III algorithm to fall into local optima, which prevents the system from reaching its maximum efficiency [26]. For this reason, this paper employs the k-means clustering algorithm to divide the population into M subpopulations, where individuals are grouped based on the similarity of their objective vectors in the objective space. This allows each subpopulation to focus on a different region, evolve independently, and exchange information with other subpopulations, thereby reducing the risk of premature convergence and preventing the algorithm from getting stuck in local optima. To effectively promote orderly communication between subpopulations while avoiding premature convergence, we adopt a von Neumann topology to maintain a balance between global exploration and local exploitation of effective solutions. By structurally migrating individuals between subpopulations, we promote diversity across the entire population.
The population division mechanism of the partitioning strategy designed in this paper is given by Equation (22):

$$P = \bigcup_{m=1}^{M} \left\{ x \,\middle|\, \arg\min_{i}\, d\!\left(f(x), M_i\right) = m \right\}, \quad 1 \leq i \leq H \tag{22}$$

where $M$ denotes the number of subpopulations, $H$ is the number of reference vectors, $M_i$ is the $i$-th cluster center, and $d$ is a distance metric function chosen to alleviate the distance failure problem in high-dimensional spaces.
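A sketch of the partitioning and migration machinery: plain k-means over objective vectors implements Equation (22), while a ring migration stands in for the von Neumann neighbourhood (the full 2-D grid would exchange individuals with four neighbours).

```python
import random

def kmeans_partition(objective_vectors, k, iters=10):
    """Equation (22): assign each individual to the subpopulation whose
    centroid is nearest to its objective vector."""
    centroids = random.sample(objective_vectors, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in objective_vectors:
            nearest = min(range(k),
                          key=lambda c: sum((a - b) ** 2 for a, b in zip(v, centroids[c])))
            clusters[nearest].append(v)
        centroids = [[sum(col) / len(cl) for col in zip(*cl)] if cl else centroids[i]
                     for i, cl in enumerate(clusters)]
    return clusters

def migrate(subpops, rate=0.1):
    """Move `rate` of each subpopulation to a neighbour every few generations."""
    k = len(subpops)
    for i in range(k):
        if not subpops[i]:
            continue
        n_mig = max(1, int(rate * len(subpops[i])))
        for _ in range(n_mig):
            subpops[(i + 1) % k].append(subpops[i].pop(random.randrange(len(subpops[i]))))
    return subpops
```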

4.3.3. ADP-NSGA-III Algorithm

The ADP-NSGA-III algorithm designed in this paper is shown in Algorithm 1.
Algorithm 1 ADP-NSGA-III algorithm.
Require: Population size $N$, maximum number of iterations $T_{max}$, number of reference vectors $H$, migration rate $\mu$, crossover probability $CR$, mutation probability $F$
Ensure: Pareto-optimal solution set
 1: Initialization:
 2: Generate a set of reference vectors $V = \{v_1, \ldots, v_H\}$ using UniformPoint(Problem.N, Problem.M)
 3: Create an initial population $P_0$ by randomly generating $N$ individuals
 4: Divide the population into $H$ distributed sub-populations $\{S_1, \ldots, S_H\}$
 5: for $t = 1$ to $T_{max}$ do
 6:     Crossover and mutation:
 7:     for each sub-population $S_h$ do
 8:         Perform SBX crossover and polynomial mutation to generate offspring
 9:     end for
10:     Generate the offspring population $Q_t$ by applying the genetic operators to each sub-population
11:     Merge populations: $R_t = P_t \cup Q_t$
12:     Non-dominated sorting:
13:     Divide $R_t$ into front layers $\{F_1, F_2, \ldots\}$
14:     Adaptive reference vector update:
15:     Calculate the population distribution entropy $\Delta D$ based on the distances between solutions and reference vectors
16:     Update reference vectors: $v_i \leftarrow v_i + \gamma \times \Delta D / \Delta t$
17:     Elite retention:
18:     Select individuals starting from $F_1$ until $N$ individuals are retained in the population
19:     Apply an improved niching mechanism to preserve diversity (using constraint violation-based tournament selection)
20:     Distributed migration:
21:     if $t \bmod 5 = 0$ then
22:         Perform cross-sub-population migration (randomly select 10% of individuals from each sub-population)
23:         Update the sub-population division $\{S_1, \ldots, S_H\}$ based on clustering results
24:     end if
25: end for
26: return non-dominated solution set $F_1$

4.3.4. Complexity Analysis of ADP-NSGA-III Algorithm

In the ADP-NSGA-III algorithm designed in this paper, the operations that determine the time complexity are the non-dominated sorting operation, the environmental selection operation, the dynamic reference vector adjustment, and the population partitioning strategy. Let the population size be $N$, the number of optimization objectives be $M$, the number of reference vectors be $H$, and the number of sub-populations be $K$. To improve efficiency, we adopt an improved non-dominated sorting algorithm, reducing its time complexity to $O(M \times N \times \log^{M-1} N)$; by introducing sorting optimization and tree structures, this significantly improves efficiency, especially when the number of objectives $M$ is large. Second, the environmental selection operation combines non-dominated sorting and elite selection, and adopts a niche preservation strategy to ensure population diversity. The time complexity of this operation is $O(N \times H + N \log N)$, where $N \log N$ comes from sorting individuals and determining dominance relationships, and $N \times H$ comes from comparing and updating reference vectors. In the dynamic reference vector adjustment, we calculate the distance between each individual and the reference vectors and adjust the reference vector directions based on the population entropy. The time complexity of this operation is $O(N \times H + M \times H)$, where $N \times H$ comes from computing the distances between individuals and reference vectors, and $M \times H$ comes from the mutual adjustment among reference vectors. For the population partitioning strategy, we use the k-means clustering algorithm to partition the population into multiple subpopulations, with a time complexity of $O(N \log K + K^2 + N)$, where $N \log K$ comes from assigning individuals to subpopulations, $K^2$ is the cost of computing the subpopulation centroids, and $N$ comes from the update operation.
Since $K$, $M$, and $H$ do not significantly affect the overall order, the total time complexity of the ADP-NSGA-III algorithm designed in this paper is given by Equation (23):

$$O\left(N \times \log^2 N + D \times N\right) \tag{23}$$

5. Simulation Experiment and Analysis

This experiment was conducted on a Windows 11 operating system using an Intel Core i7-14650HX processor, with simulations implemented in MATLAB (R2022b). The algorithm parameters were set as follows: the random seed was 42, the maximum number of iterations was 500, and the population size was 100. The comparison algorithms (NSGA-II, NSGA-III, MOEA/D, SPEA2) used the same population size and maximum iteration limit as the proposed algorithm, with all other parameters set according to their respective original papers. The proposed ADP-NSGA-III algorithm uses decimal encoding and incorporates dynamic reference vector adjustment (with a learning rate of 0.1) and distributed population management (with 6 subpopulations, a migration ratio of 0.1, and migration every 5 generations). In the search process, constraint violation-based tournament selection is employed for mating selection, differential evolution is used for mutation (with the mutation coefficient decreasing linearly from 0.9 to 0.1), simulated binary crossover (SBX) is applied (with the crossover probability decreasing quadratically from 0.9 to no less than 0.1), and polynomial mutation is performed (with a distribution index of 20). Environmental selection combines non-dominated sorting with niche preservation strategies to maintain the diversity and convergence of the solution set. We compare the proposed ADP-NSGA-III optimization scheme with the NSGA-II [27], NSGA-III [24], MOEA/D [28], and SPEA2 [29,30] optimization schemes. The comparison dimensions are total system delay, system energy consumption, and optimal pricing of system resources. The experimental parameters are listed in Table 2.
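For reference, the settings above can be collected into a single configuration sketch (the field names are our own):

```python
CONFIG = {
    "random_seed": 42,
    "max_iterations": 500,
    "population_size": 100,
    "num_subpopulations": 6,
    "migration_ratio": 0.1,
    "migration_interval": 5,                 # generations between migrations
    "reference_vector_learning_rate": 0.1,
    "de_mutation_coefficient": (0.9, 0.1),   # linear decrease over the run
    "sbx_crossover_probability": (0.9, 0.1), # quadratic decrease, floor at 0.1
    "polynomial_mutation_eta": 20,           # distribution index
    "baselines": ["NSGA-II", "NSGA-III", "MOEA/D", "SPEA2"],
}
```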

Experimental Design and Analysis of Results

To further validate the contribution of each module added in this paper to the ADP-NSGA-III scheme, we conducted experiments by enabling/disabling each component. The experimental results are shown in Table 3.
Table 3 highlights the contribution of each improvement module in ADP-NSGA-III. Disabling the adaptive reference vector (ARV) leads to a slight increase in IGD and a minor reduction in HV, which indicates that ARV plays a key role in improving the convergence towards the Pareto front, although the effect is moderate. Removing the distributed subpopulation (DSP) results in a more noticeable decrease in HV, suggesting that DSP is critical for maintaining the diversity of the solution set by exploring a larger solution space. Disabling the migration strategy causes a slight increase in IGD and a small reduction in HV, which reflects its importance in stabilizing convergence and improving the global search capability by facilitating the exchange of information between subpopulations. In summary, ARV, DSP, and the migration strategy all contribute positively to the algorithm’s performance, each enhancing different aspects such as convergence, diversity, and stability. These effects are complementary, with each module improving a unique dimension of the optimization process.
Figure 2 illustrates the population distribution of the optimal pricing effect of five optimization schemes on system delay, energy consumption, and edge system resources. As shown in Figure 2, the ADP-NSGA-III optimization scheme proposed in this paper demonstrates faster convergence and a more concentrated and uniform distribution of dominant solutions. This is due to the algorithm’s population division mechanism based on a partition strategy, allowing each subpopulation to focus on different regions, evolve independently, and promote information exchange between subpopulations. This approach reduces the risk of premature convergence and prevents the algorithm from falling into a local optimum. Therefore, it is argued that the ADP-NSGA-III optimization scheme is particularly well-suited for the cloud–edge–end collaboration scenario in telematics proposed in this paper.
Figure 3 illustrates the population distribution of the five optimization schemes, comparing system delay and energy consumption. As shown in Figure 3, system energy consumption increases as system delay increases. However, at the same delay, the energy consumption of the proposed scheme remains consistently lower than that of the other optimization schemes. Therefore, compared with the other schemes, the proposed optimization scheme is better suited to delay- and energy-sensitive cloud–edge–end collaboration systems in telematics.
Figure 4 illustrates the population distribution of the five optimization schemes, comparing system delay and system resource pricing. As shown in Figure 4, system resource pricing exhibits a decreasing trend as system delay increases. The rise in system delay indicates that more tasks are offloaded to other servers for computation, reducing the load on individual servers. Consequently, the system resource price decreases accordingly, promoting more balanced utilization of system resources.
Figure 5 illustrates the population distribution of the five optimization schemes, comparing energy consumption and system resource pricing. As shown in Figure 5, system resource pricing exhibits a decreasing trend as system energy consumption increases. This rise in energy consumption primarily results from the transmission delay incurred when tasks are offloaded to servers, indicating that a greater number of tasks are being offloaded to other servers for computation. However, a moderate increase in energy consumption can be beneficial if it leads to a more rational and efficient utilization of system resources.
As shown in Figure 6, a larger dynamic pricing coefficient k p amplifies the economic impact of server load imbalance on task scheduling. Specifically, when k p is low (e.g., 0.1), the price responds slowly to load changes, and scheduling decisions primarily depend on latency and energy consumption, with limited guidance from load balancing. When k p is 0.5, the pricing effect moderately amplifies load imbalance, effectively guiding tasks to migrate to servers with lighter loads, thereby improving overall resource allocation. When k p is 1.0, prices rise rapidly with load imbalance, which may lead to scheduling overly favoring servers with lighter loads, potentially worsening overall latency and energy consumption performance.
Figure 7 shows a comparison of system delay among different optimization schemes under varying task quantities (data represents the mean and standard deviation from 30 independent runs). As shown in the figure, system delay increases with the number of tasks. The proposed ADP-NSGA-III optimization scheme consistently achieves lower delay than other schemes, demonstrating a significant advantage. Specifically, when the system task count is 55, the ADP-NSGA-III delay is 22.10 ± 1.10, while the latencies of other schemes are as follows: NSGA-II is 28.20 ± 1.40, NSGA-III is 23.50 ± 1.20, MOEA-D is 26.90 ± 1.40, and SPEA2 is 24.10 ± 1.30. Compared to these schemes, ADP-NSGA-III reduces system delay by 21.63%, 5.96%, 17.84%, and 8.30%, respectively, when the number of tasks is 55. Therefore, the ADP-NSGA-III optimization scheme demonstrates a significant advantage in reducing system delay.
Figure 8 shows a comparison of system energy consumption among different optimization schemes under varying task quantities (data represents the mean and standard deviation from 30 independent runs). As shown in the figure, system energy consumption exhibits an upward trend as the number of tasks increases. The proposed ADP-NSGA-III optimization scheme consistently achieves lower energy consumption than other schemes, demonstrating a significant advantage. Specifically, when the system has 55 tasks, the energy consumption of ADP-NSGA-III is 3884 ± 116.5, while the energy consumption values of the other schemes are as follows: NSGA-II is 4407.1 ± 132.2, NSGA-III is 4202.5 ± 126.1, MOEA-D is 4601.3 ± 138.0, and SPEA2 is 4312.6 ± 129.4. Under this task quantity, the energy consumption of the ADP-NSGA-III scheme is reduced by 11.87%, 7.58%, 15.59%, and 9.94% compared to the NSGA-II, NSGA-III, MOEA-D, and SPEA2 schemes, respectively.
Figure 9 shows a comparison of system resource prices for different optimization schemes under varying task quantities (data represents the mean and standard deviation from 30 independent runs). As shown in the figure, system resource pricing exhibits an upward trend as the number of tasks increases. The proposed ADP-NSGA-III optimization scheme consistently achieves lower resource pricing than other schemes, demonstrating a significant advantage. Specifically, when the number of system tasks is 55, the resource pricing of ADP-NSGA-III is 0.79 ± 0.03, while the pricing of other schemes is as follows: NSGA-II is 0.83 ± 0.03, NSGA-III is 0.80 ± 0.03, MOEA-D is 0.82 ± 0.03, and SPEA2 is 0.82 ± 0.03. Under this task quantity, the resource pricing of the ADP-NSGA-III scheme is reduced by 4.82%, 1.25%, 3.66%, and 3.66% compared to the NSGA-II, NSGA-III, MOEA-D, and SPEA2 schemes, respectively.
Figure 10 shows a comparison of system delay among different optimization schemes under varying numbers of vehicle terminals (data represents the mean and standard deviation from 30 independent runs). As shown in the figure, system delay increases with the number of vehicle terminals. The ADP-NSGA-III optimization scheme proposed in this paper consistently achieves lower delay than other schemes, demonstrating a significant advantage. Specifically, when the number of vehicle terminals is 25, the delay of ADP-NSGA-III is 29.50 ± 1.80, while the delays of other schemes are as follows: NSGA-II is 34.08 ± 2.00, NSGA-III is 30.12 ± 2.10, MOEA-D is 33.19 ± 2.20, and SPEA2 is 34.12 ± 2.50. Under this configuration, the ADP-NSGA-III scheme reduces system delay by 13.44%, 2.06%, 11.12%, and 13.54% compared to the NSGA-II, NSGA-III, MOEA-D, and SPEA2 schemes, respectively.
Figure 11 shows a comparison of system energy consumption among different optimization schemes under varying numbers of vehicle terminals (data represents the mean and standard deviation from 30 independent runs). As shown in the figure, system energy consumption increases with the number of vehicle terminals. The ADP-NSGA-III optimization scheme proposed in this paper consistently achieves lower energy consumption than other schemes, demonstrating a significant advantage. Specifically, when the number of vehicle terminals is 25, the energy consumption of ADP-NSGA-III is 4935 (±138), while the energy consumption values of other schemes are as follows: NSGA-II is 5275 (±145), NSGA-III is 5130 (±142), MOEA-D is 5180 (±150), and SPEA2 is 5030 (±140). Under this configuration, the energy consumption of the ADP-NSGA-III scheme is reduced by 6.45%, 3.80%, 4.73%, and 1.89% compared to the NSGA-II, NSGA-III, MOEA-D, and SPEA2 schemes, respectively.
Figure 12 shows a comparison of system delay among different optimization schemes under varying numbers of edge servers (data represents the mean and standard deviation from 30 independent runs). As shown in the figure, system delay decreases as the number of edge servers increases. The ADP-NSGA-III optimization scheme proposed in this paper consistently maintains lower delay than other schemes, demonstrating a significant advantage. Specifically, when the number of edge servers is 5, the delay of ADP-NSGA-III is 30.5 (±1.5), while the delays of other schemes are as follows: NSGA-II is 33.6 (±1.7), NSGA-III is 32.8 (±1.6), MOEA-D is 34.7 (±1.7), and SPEA2 is 31.9 (±1.6). Under this configuration, the delay of ADP-NSGA-III is reduced by 9.23%, 7.01%, 12.10%, and 4.39% compared to NSGA-II, NSGA-III, MOEA-D, and SPEA2, respectively.
Figure 13 shows a comparison of system energy consumption among different optimization schemes under varying numbers of edge servers (data represent the mean and standard deviation from 30 independent runs). As shown in the figure, system energy consumption decreases as the number of edge servers increases. The ADP-NSGA-III optimization scheme proposed in this paper consistently maintains lower energy consumption than the other schemes, demonstrating a significant advantage. Specifically, when the number of edge servers is 5, the energy consumption of ADP-NSGA-III is 2400 (±60), while the energy consumption values of the other schemes are as follows: NSGA-II is 2850 (±86), NSGA-III is 2550 (±71), MOEA-D is 2900 (±87), and SPEA2 is 2500 (±67). Under this configuration, the energy consumption of ADP-NSGA-III is reduced by 15.79%, 5.88%, 17.24%, and 4.00% compared to NSGA-II, NSGA-III, MOEA-D, and SPEA2, respectively. This is because, as the number of edge servers increases, the distributed availability of computing and storage resources improves, enabling tasks to be executed closer to the end nodes, thereby reducing the energy consumed by long-distance transmission and alleviating the energy costs caused by excessive loads. When the number of edge servers reaches five, resource supply and task demand reach a relative balance, allowing the algorithm to maximize resource utilization and optimize task paths, which makes the energy consumption advantage over the other schemes particularly significant.

6. Conclusions

Cloud–edge–end collaboration has injected new momentum into the development of in-vehicle information systems, effectively addressing challenges such as high scheduling overhead and server load imbalance in in-vehicle networks. This paper proposes a layered communication architecture based on V2X technology for multi-hop in-vehicle communication. We designed system delay, energy consumption, and edge caching models to ensure low delay, low energy consumption, and high caching efficiency. This paper introduces a dynamic pricing model for edge system resources, achieving a balance between service quality and cost efficiency. An improved ADP-NSGA-III algorithm is used for multi-objective optimization of system delay, energy consumption, and edge resource pricing. Simulation results show significant improvements in system delay, validating the effectiveness and cost-effectiveness of the proposed method. However, this study has several limitations. For example, the scalability of the system in large-scale, highly dynamic in-vehicle networks has not been fully explored. Future research should focus on incorporating more realistic mobility models and network conditions. Although the dynamic pricing model performs well in simulations, actual deployment may face challenges due to resource fluctuations. Further research is needed on robust pricing strategies and the application of deep reinforcement learning in intelligent resource management, especially when integrating 6G technology.

Author Contributions

Y.Y. was responsible for data curation, methodology, software, validation, and writing the original draft. Z.S. contributed to formal analysis, investigation, resources, software, visualization, and reviewing and editing the manuscript. Q.Z. provided supervision, acquired funding, administered the project, developed the conceptualization, and participated in manuscript review and editing. All authors have read and agreed to the final version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China (62172457) and the Tianjin Natural Science Foundation (22JCZDJC00600).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding authors.

Acknowledgments

The authors acknowledge support from the National Natural Science Foundation of China (62172457) and the Tianjin Natural Science Foundation (22JCZDJC00600).

Conflicts of Interest

The authors declare that they have no conflicts of interest.

References

  1. Wu, H.; Jin, J.; Ma, H.; Xing, L. Hybrid Cooperative Cache Based on Temporal Convolutional Networks in Vehicular Edge Network. Sensors 2023, 23, 4619–4633. [Google Scholar] [CrossRef]
  2. Tian, A.; Feng, B.; Zhou, H.; Huang, Y.; Sood, K.; Yu, S.; Zhang, H. Efficient Federated DRL-Based Cooperative Caching for Mobile Edge Networks. IEEE Trans. Netw. Serv. Manag. 2023, 20, 246–260. [Google Scholar] [CrossRef]
  3. Zhang, Z.; Wang, N.; Wu, H.; Tang, C.; Li, R. A Fast and Efficient Task Offloading Algorithm in Heterogeneous Edge Cloud Computing Environments. IEEE Internet Things J. 2023, 10, 3165–3178. [Google Scholar] [CrossRef]
  4. Bi, X.; Zhao, L. Collaborative Caching Strategy for RL-Based Content Downloading Algorithm in Clustered Vehicular Networks. IEEE Internet Things J. 2023, 10, 9585–9596. [Google Scholar] [CrossRef]
  5. Gul-E-Laraib; Uz, S.K.; Maqsood, T.; Rehman, F.; Saad, M.; Amir, K.M.; Neelam, G.; Algarni, A.D.; Hela, E. Content Caching in Mobile Edge Computing Based on User Location and Preferences Using Cosine Similarity and Collaborative Filtering. Electronics 2023, 12, 284–302. [Google Scholar] [CrossRef]
  6. Cui, Y.; Yang, X.; He, P.; Wang, R.; Wu, D. URLLC-eMBB Hierarchical Network Slicing for Internet of Vehicles: An AoI-Sensitive Approach. Veh. Commun. 2023, 43, 100648.
  7. Zhu, S.; Song, Z.; Huang, C.; Zhu, H.; Qiao, R. Dependency-Aware Cache Optimization and Offloading Strategies for Intelligent Transportation Systems. J. Supercomput. 2025, 81, 45.
  8. Feng, B.; Feng, C.; Feng, D.; Wu, Y.; Xia, X.G. Proactive Content Caching Scheme in Urban Vehicular Networks. IEEE Trans. Commun. 2023, 71, 4165–4180.
  9. Yin, S.; Sun, Y.; Xu, Q.; Sun, K.; Li, Y.; Ding, L.; Liu, Y. Multi-Harmonic Sources Identification and Evaluation Method Based on Cloud-Edge-End Collaboration. Int. J. Electr. Power Energy Syst. 2024, 156, 109681.
  10. Cai, J.; Liu, W.; Huang, Z.; Yu, F.R. Task Decomposition and Hierarchical Scheduling for Collaborative Cloud-Edge-End Computing. IEEE Trans. Serv. Comput. 2024, 17, 4368–4382.
  11. Li, B.; Yang, Y. Distributed Fault Detection for Large-Scale Systems: A Subspace-Aided Data-Driven Scheme with Cloud-Edge-End Collaboration. IEEE Trans. Ind. Inform. 2024, 20, 12200–12209.
  12. Wang, Y.; Yang, C.; Lan, S.; Zhu, L.; Zhang, Y. End-Edge-Cloud Collaborative Computing for Deep Learning: A Comprehensive Survey. IEEE Commun. Surv. Tutor. 2024, 26, 2647–2683.
  13. Liu, L.; Zhang, Y. Task Offloading Optimization for Multi-Objective Based on Cloud-Edge-End Collaboration in Maritime Networks. Future Gener. Comput. Syst. 2025, 164, 107588.
  14. Zhang, T.; Wu, F.; Chen, Z.; Chen, S. Optimization of Edge–Cloud Collaborative Computing Resource Management for Internet of Vehicles Based on Multiagent Deep Reinforcement Learning. IEEE Internet Things J. 2024, 11, 36114–36126.
  15. Geng, J.; Qin, Z.; Jin, S. Dynamic Resource Allocation for Cloud-Edge Collaboration Offloading in VEC Networks With Diverse Tasks. IEEE Trans. Intell. Transp. Syst. 2024, 25, 21235–21251.
  16. Wu, J.; Tang, M.; Jiang, C.; Gao, L.; Cao, B. Cloud-Edge–End Collaborative Task Offloading in Vehicular Edge Networks: A Multilayer Deep Reinforcement Learning Approach. IEEE Internet Things J. 2024, 11, 36272–36290.
  17. Sun, Z.; Sun, G.; Liu, Y.; Wang, J.; Cao, D. BARGAIN-MATCH: A Game Theoretical Approach for Resource Allocation and Task Offloading in Vehicular Edge Computing Networks. IEEE Trans. Mob. Comput. 2024, 23, 1655–1673.
  18. Liu, M.; Pan, L.; Liu, S. Collaborative Storage for Tiered Cloud and Edge: A Perspective of Optimizing Cost and Latency. IEEE Trans. Mob. Comput. 2024, 23, 10885–10902.
  19. Nouruzi, A.; Mokari, N.; Azmi, P.; Jorswieck, E.A.; Erol-Kantarci, M. Smart Dynamic Pricing and Cooperative Resource Management for Mobility-Aware and Multi-Tier Slice-Enabled 5G and Beyond Networks. IEEE Trans. Netw. Serv. Manag. 2024, 21, 2044–2063.
  20. Tan, J.; Khalili, R.; Karl, H. Multi-Objective Optimization Using Adaptive Distributed Reinforcement Learning. IEEE Trans. Intell. Transp. Syst. 2024, 25, 10777–10789.
  21. Zhu, S.; Song, Z.; Zhu, H.; Qiao, R. Efficient Slicing Scheme and Cache Optimization Strategy for Structured Dependent Tasks in Intelligent Transportation Scenarios. Ad Hoc Netw. 2025, 168, 103699.
  22. Wang, B.; Guo, Q.; Xia, T.; Li, Q.; Liu, D.; Zhao, F. Cooperative IoT Data Sharing with Heterogeneity of Participants Based on Electricity Retail. IEEE Internet Things J. 2024, 11, 4956–4970.
  23. Zhu, S.; Wang, Y.; Chen, H.; Zha, H. A Novel Internet of Vehicles' Task Offloading Decision Optimization Scheme for Intelligent Transportation System. Wirel. Pers. Commun. 2024, 137, 2359–2379.
  24. Asghari, A.; Sohrabi, M.K. Bi-Objective Cloud Resource Management for Dependent Tasks Using Q-Learning and NSGA-III. J. Ambient Intell. Humaniz. Comput. 2024, 15, 197–217.
  25. Mahmed, A.N.; Kahar, M.N.M. Simulation for Dynamic Patients Scheduling Based on Many Objective Optimization and Coordinator. Informatica 2024, 48, 91–106.
  26. Zhang, Y.; Liu, H.L.; Chen, L. Variable Universe Fuzzy Logic Controller-Based Constrained Multi-Objective Evolutionary Algorithm. SSRN 2024, ssrn:4931444.
  27. Rawat, R.; Rajavat, A. Illicit Events Evaluation Using NSGA-2 Algorithms Based on Energy Consumption. Informatica 2024, 48, 77–96.
  28. Gu, Q.; Li, K.; Wang, D.; Liu, D. A MOEA/D with Adaptive Weight Subspace for Regular and Irregular Multi-Objective Optimization Problems. Inf. Sci. 2024, 661, 120143.
  29. Wang, H.; Du, Y.; Chen, F. A Hybrid Strategy Improved SPEA2 Algorithm for Multi-Objective Web Service Composition. Appl. Sci. 2024, 14, 4157.
  30. Xu, Y.; Ma, J.; Yuan, J. Application of SPEA2-MMBB for Distributed Fault Diagnosis in Nuclear Power System. Processes 2024, 12, 2620.
Figure 1. Three-tier communication architecture for cloud–edge–end collaboration.
Figure 2. Comprehensive optimization comparison of the five optimization schemes.
Figure 3. Comparison of the five optimization schemes regarding system delay and energy consumption.
Figure 4. Comparison of the five optimization schemes regarding system delay and system resource price.
Figure 5. Comparison of the five optimization schemes regarding system energy consumption and system resource price.
Figure 6. Sensitivity analysis for different values of $k_p$.
Figure 7. Comparison of the optimization schemes regarding system delay for different numbers of tasks.
Figure 8. Comparison of the optimization schemes regarding system energy consumption for different numbers of tasks.
Figure 9. Comparison of the optimization schemes regarding system resource pricing for different numbers of tasks.
Figure 10. Comparison of the optimization schemes regarding system delay for different numbers of vehicle terminals.
Figure 11. Comparison of the optimization schemes regarding system energy consumption for different numbers of vehicle terminals.
Figure 12. Comparison of the optimization schemes regarding system delay for different numbers of edge servers.
Figure 13. Comparison of the optimization schemes regarding system energy consumption for different numbers of edge servers.
Table 1. Model parameter representation and interpretation.

Model Symbol | Symbol Meaning
$SIG_{cloud}$ | $SIG_{cloud} = \{f_{cloud}^{calc}, p_{cloud}^{tran}, p_{cloud}^{calc}\}$: attribute set of the cloud server (computing resources, transmission power, computational power)
$MECS$ | $MECS = \{mec_1, mec_2, \ldots, mec_n, \ldots, mec_N\}$: set of $N$ edge servers
$SIG_{mec_n}$ | $SIG_{mec_n} = \{f_{mec_n}^{calc}, s_{mec_n}^{cache}, p_{mec_n}^{calc}, loc_{mec_n}^{x,y}\}$: attribute set of edge server $mec_n$ (computing resources, cache resources, computational power, location)
$USER$ | $USER = \{user_1, user_2, \ldots, user_m, \ldots, user_M\}$: set of $M$ vehicle terminals
$SIG_{user_m}$ | $SIG_{user_m} = \{f_{user_m}^{calc}, p_{user_m}^{calc}, v_{user_m}^{tran}, loc_{user_m}^{x,y}\}$: attribute set of vehicle terminal $user_m$
$D_{task}$ | $D_{task} = \{task_1, task_2, \ldots, task_d, \ldots, task_D\}$: set of $D$ tasks
$SIG_{task_d}$ | $SIG_{task_d} = \{d_{task_d}^{dav}, s_{task_d}^{cache}, t_{task_d}^{rest}, f_{task_d}^{calc}\}$: attribute set of $task_d$ (data volume, cache requirement, remaining time, required computing resources)
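Read together, the attribute sets in Table 1 map naturally onto plain data structures. The following Python sketch is purely illustrative and not from the paper; field names mirror the symbols above, while the concrete types and the unit comments are assumptions drawn from Table 2:

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class CloudServer:              # SIG_cloud
    f_calc: float               # f_cloud^calc: computing resources (MIPS)
    p_tran: float               # p_cloud^tran: transmission power (W)
    p_calc: float               # p_cloud^calc: computational power (W)

@dataclass
class EdgeServer:               # SIG_mec_n
    f_calc: float               # f_mec_n^calc: computing resources (MIPS)
    s_cache: float              # s_mec_n^cache: cache resources
    p_calc: float               # p_mec_n^calc: computational power (W)
    loc: Tuple[float, float]    # loc_mec_n^{x,y}: planar position

@dataclass
class VehicleTerminal:          # SIG_user_m
    f_calc: float               # f_user_m^calc: local computing resources (MIPS)
    p_calc: float               # p_user_m^calc: computational power (W)
    v_tran: float               # v_user_m^tran: transmission attribute
    loc: Tuple[float, float]    # loc_user_m^{x,y}: planar position

@dataclass
class Task:                     # SIG_task_d
    d_dav: float                # d_task_d^dav: data volume (MB)
    s_cache: float              # s_task_d^cache: cache requirement
    t_rest: float               # t_task_d^rest: remaining time budget
    f_calc: float               # f_task_d^calc: required computing resources (MIPS)

# The top-level sets MECS, USER, and D_task then become plain lists,
# e.g. mecs: list[EdgeServer], users: list[VehicleTerminal], tasks: list[Task].
```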
Table 2. Parameters and their symbolic and numerical values.

Parameter | Symbol | Value
Data volume of $task_d$ | $d_{task_d}^{dav}$ | 10–100 MB
Required computing resources of $task_d$ | $f_{task_d}^{calc}$ | 60–100 MIPS
Computing resources of the cloud server | $f_{cloud}^{calc}$ | 800 MIPS
Computing resources of $mec_n$ | $f_{mec_n}^{calc}$ | 180–280 MIPS
Computing resources of $user_m$ | $f_{user_m}^{calc}$ | 60–120 MIPS
Computational power of $user_m$ | $p_{user_m}^{calc}$ | 100–160 W
Computational power of the cloud server | $p_{cloud}^{calc}$ | 400 W
Computational power of $mec_n$ | $p_{mec_n}^{calc}$ | 200 W
Transmission power of $user_m$ | $p_{user_m}^{trans}$ | 30 W
Gaussian white noise power | $\theta^2$ | −70 dBm
Cache resources of $mec_n$ | $s_{mec_n}^{cache}$ | 3000 MHz
Communication bandwidth of the cloud server | $B_{cloud}^{user}$ | 400 MHz
Communication bandwidth of $mec_n$ | $B_{m,n}$ | 100 MHz
Population size | $N$ | 100
Maximum number of iterations | $T_{max}$ | 500
Initial number of reference vectors | $V$ | 6
Crossover probability | $P_c$ | decays from 0.9 to no less than 0.1
Differential variation coefficient | $F$ | decays from 0.9 to 0.1
Number of subpopulations | $R_a$ | 42
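Table 2 states that the crossover probability $P_c$ and the differential variation coefficient $F$ both decay from 0.9 toward a floor of 0.1 over the run. The exact schedule is not specified here; the sketch below assumes a simple linear decay over the $T_{max}$ iterations, purely as an illustration:

```python
def decayed(initial: float, final: float, t: int, t_max: int) -> float:
    """Linearly interpolate a control parameter from `initial` at t = 0
    down to `final` at t = t_max, never dropping below `final`."""
    frac = min(t / t_max, 1.0)
    return max(initial + (final - initial) * frac, final)

T_MAX = 500                               # maximum number of iterations (Table 2)
for t in (0, 250, 500):
    p_c = decayed(0.9, 0.1, t, T_MAX)     # crossover probability P_c
    f_coef = decayed(0.9, 0.1, t, T_MAX)  # differential variation coefficient F
    print(f"t={t}: P_c={p_c:.2f}, F={f_coef:.2f}")
```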
Table 3. Ablation study of ADP-NSGA-III: module usage and performance metrics (ARV: adaptive reference vector; DSP: distributed subpopulations).

Algorithm Variant | ARV | DSP | Migration | IGD (Mean ± Std) | HV (Mean ± Std)
ADP-NSGA-III | Yes | Yes | Yes | 0.037 ± 0.002 | 0.815 ± 0.009
Without adaptive reference vector | No | Yes | Yes | 0.041 ± 0.003 | 0.805 ± 0.010
Without distributed subpopulations | Yes | No | Yes | 0.040 ± 0.003 | 0.800 ± 0.012
Without migration strategy | Yes | Yes | No | 0.039 ± 0.002 | 0.810 ± 0.011
NSGA-III | No | No | No | 0.050 ± 0.004 | 0.770 ± 0.013
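For reference, the IGD column in Table 3 follows the standard inverted generational distance: the average Euclidean distance from each point of a reference Pareto front to its nearest obtained solution, so lower is better (HV, the hypervolume, is better when higher). A minimal NumPy sketch with made-up two-objective points, not the paper's data:

```python
import numpy as np

def igd(reference_front: np.ndarray, solutions: np.ndarray) -> float:
    """Inverted generational distance: for every reference point, take the
    distance to the closest obtained solution, then average (lower is better)."""
    # Pairwise Euclidean distances, shape (n_reference, n_solutions).
    d = np.linalg.norm(reference_front[:, None, :] - solutions[None, :, :], axis=-1)
    return float(d.min(axis=1).mean())

# Toy two-objective example with illustrative points.
ref = np.array([[0.0, 1.0], [0.5, 0.5], [1.0, 0.0]])
sol = np.array([[0.1, 0.9], [0.6, 0.45], [0.95, 0.1]])
print(f"IGD = {igd(ref, sol):.3f}")
```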
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
