
Traffic-Aware Optimization of Task Offloading and Content Caching in the Internet of Vehicles

School of Computer Science and Technology, Donghua University, Shanghai 201620, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2023, 13(24), 13069; https://doi.org/10.3390/app132413069
Submission received: 26 September 2023 / Revised: 28 November 2023 / Accepted: 30 November 2023 / Published: 7 December 2023

Abstract

Emerging in-vehicle applications seek to improve travel experiences, but the rising number of vehicles results in more computational tasks and redundant content requests, leading to resource waste. Efficient compute offloading and content caching strategies are crucial for the Internet of Vehicles (IoV) to optimize performance in time latency and energy consumption. This paper proposes a joint task offloading and content caching optimization method based on forecasting traffic streams, called TOCC. First, temporal and spatial correlations are extracted from the preprocessed dataset using the Forecasting Open Source Tool (FOST) and integrated to predict the traffic stream to obtain the number of tasks in the region at the next moment. To obtain a suitable joint optimization strategy for task offloading and content caching, the multi-objective problem of minimizing delay and energy consumption is decomposed into multiple single-objective problems using an improved Multi-Objective Evolutionary Algorithm based on Decomposition (MOEA/D) via the Tchebycheff weight aggregation method, and a set of Pareto-optimal solutions is obtained. Finally, the experimental results verify the effectiveness of the TOCC strategy. Compared with other methods, it reduces latency by up to 29% and energy consumption by up to 83%.

1. Introduction

With the rapid advancement of socioeconomic factors and 5G technology, urban areas have witnessed a substantial surge in vehicle numbers. This exponential growth, primarily catalyzed by the emergence of Internet of Vehicles (IoV) technology, has significantly elevated user driving experiences. The integration of the IoV into urban transportation systems has not only provided drivers with real-time traffic information and navigation assistance, but has also opened up new opportunities for advanced vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communication [1]. However, in the context of high-velocity vehicular operations, seamless access to a diverse range of internet-based content is of paramount importance. Drivers and passengers increasingly rely on in-vehicle infotainment systems, streaming services, and real-time applications to enhance their travel experiences. Consequently, there is a growing demand for robust and efficient data delivery mechanisms within vehicular networks.
Simultaneously, propelled by the emergence of cutting-edge technologies in artificial intelligence and machine learning, the domain of vehicular networks is undergoing a profound paradigm shift. These technologies are enabling intelligent traffic management, predictive maintenance, and enhanced safety measures. Autonomous vehicles, for example, leverage AI algorithms for real-time decision making, contributing to safer and more efficient transportation. In this backdrop, novel applications are offering innovative solutions to address prevailing challenges. Collaborative traffic optimization algorithms, edge computing for low-latency data processing, and predictive analytics for traffic forecasting are just a few examples of how technology is reshaping urban mobility. As vehicular networks continue to evolve, they hold the promise of not only improving the efficiency of transportation systems but also reducing congestion, emissions, and accidents, ultimately enhancing the quality of life in urban areas.
For time-sensitive tasks, meeting deadlines is critical. Vehicular units often have limited computing and storage capacity, leading to latency or incomplete task execution when tasks are processed locally. Mobile Edge Computing (MEC) addresses this by integrating resources and intelligence at the network’s edge. MEC brings cloud-like capabilities closer to data sources, allowing vehicles to offload tasks to edge servers. This enhances the driving experience and supports complex computations. AI and ML models on edge servers enable applications such as predictive maintenance and intelligent traffic management. MEC reduces data transmission latency, ensuring real-time safety alerts and traffic information. It creates a responsive vehicular network that adapts to urban traffic dynamics. As vehicles become smarter and more connected, MEC with AI-driven edge computing promises safer and more efficient urban mobility.
In this rapidly evolving landscape, traffic stream forecasting plays a pivotal role in enhancing the efficiency of task offloading and content caching. It offers invaluable insights into the distribution of task volumes across different regions, aiding in more effective resource allocation [2]. As vehicular terminals experience a continuous surge in computational demands, the development of efficient computational offloading strategies becomes increasingly crucial. This is precisely the focus of our paper, where we delve into the evolution of task-level offloading strategies [3]. By integrating cutting-edge trends in AI- and machine-learning-driven edge/fog computing technologies, we propose an innovative optimization approach known as TOCC. Based on traffic flow prediction, task offloading and content caching are jointly optimized in this method, which can effectively meet the growing demand of intelligent applications. This approach guarantees a safer and more efficient urban mobility experience, especially in the context of the rapidly evolving IoV technology and the advances brought about by 5G.
Specifically, this work makes the following main contributions:
  • By preprocessing the dataset, we enhance its adaptability to ensure alignment with the operational requirements of the open-source prediction tool. The open-source prediction tool FOST supports mainstream deep learning models such as RNN, MLP, and GNN for predicting the preprocessed BikeNYC dataset. Its fusion module automatically selects and integrates predictions generated by different models, enhancing the overall robustness and accuracy of predictions.
  • We use the predicted traffic stream and an enhanced multi-objective evolutionary algorithm (MOEA/D) to decompose the multitask offloading and content caching problem into individual optimization problems. This decomposition helps us to obtain a set of Pareto optimal solutions, which is achieved through the Tchebycheff weight aggregation approach.
  • Finally, we chose the BikeNYC dataset for traffic flow prediction. It is a multimodal dataset rich in spatiotemporal data, making it ideal for this purpose. We conducted comprehensive simulation experiments to evaluate the TOCC algorithm’s performance, which effectively reduced execution time and energy consumption, as shown by the results.

2. Related Work

In the high-speed mobile environment of the IoV, efficient task execution plays a crucial role in minimizing time delays. However, the limited computing and storage capacity of automotive systems often necessitates offloading resource-intensive tasks. These tasks are strategically shifted to edge or cloud servers, enabling efficient execution and ensuring prompt fulfillment of user requests. Automotive applications frequently involve repeated content requests, making caching popular content on edge servers highly effective in reducing latency and energy consumption for subsequent access in the IoV. Nevertheless, improper task offloading and inaccurate content caching can lead to increased energy consumption and latency, mainly due to network congestion and server queuing. To address these challenges, the joint optimization of task offloading and content caching is essential. By harmonizing these processes, we can effectively minimize time latency and energy consumption, enhancing the overall performance and user experience in the IoV.
In recent years, the problem of task offloading in cloud–edge environments has received much attention. In their pursuit of optimizing data placement performance, Wang et al. [4] recognized the significance of spatiotemporal data characteristics. They introduced TEMPLIH, where a temperature matrix captures the influence of data features on placement. The Replica Selection Algorithm (RSA-TM) is utilized to ensure compliance with latency requirements. Furthermore, an improved Hungarian algorithm (IHA-RM) based on replica selection is employed to achieve multi-objective balance. Zhao et al. [5] proposed an energy-efficient offloading scheme to minimize the overall energy usage. Although their algorithm exhibits low energy consumption on a single task, additional research is needed to evaluate its effectiveness in a multi-task setting. In order to optimize data caching and task scheduling jointly, Wang et al. [6] proposed a multi-index cooperative cache replacement algorithm based on information entropy theory (MCCR) to improve the cache hit rate. Subsequently, they further proposed the NHSA-MCCR algorithm, which aims to optimize the scheduling to achieve the optimization of delays and energy consumption. Chen et al. [7] presented a UT-UDN system model that demonstrated a 20% reduction in time delay and a 30% decrease in energy costs, as indicated by simulation results. Elgendy et al. [8] proposed a Mobile Edge Computing solution for Unmanned Aerial Vehicles (UAVs), which uses a multi-layer resource allocation scheme, a load balancing algorithm, and integer programming to achieve cost reduction. In the context of multi-cloud spatial crowdsourcing data placement, Wang et al. [9] introduced a data placement strategy with a focus on cost-effectiveness and minimal latency. Concurrently, they incorporated the interval pricing strategy and utilized a clustering algorithm to analyze the geographic distribution patterns of data centers. Furthermore, certain studies have investigated the application of heuristics to address the task offloading problem. For instance, Xu et al. [10] employed enumeration and branch-and-bound algorithms to tackle these challenges. Meanwhile, Yin et al. [11] introduced a task scheduling and resource management strategy based on an enhanced genetic algorithm. This approach took into account both delays and energy consumption, with the objective of minimizing their combined sum. Kimovski et al. [12] introduced mMAPO as a solution for optimizing conflicting objectives in multi-objective optimization, including completion time, energy consumption, and economic cost. Ding et al. [13] addressed the challenge of edge server state changes and the unavailability of global information by introducing Dec-POMDP, which handles observed states, and a task offloading strategy based on the Value Decomposition Network (VDN). Li et al. [14] concentrated on addressing the security concerns in edge computing and introduced the priority-aware PASTO algorithm. This algorithm aims to minimize the overall task completion time while adhering to energy budget constraints for secure task offloading. These efforts have been concentrated on addressing computation offloading and caching challenges in cloud–edge environments, optimizing for objectives such as latency, energy consumption, cost, and load performance, yielding substantial results. Nevertheless, cloud–edge environments constitute a vast research domain, necessitating a renewed focus. 
Consequently, the following investigation centers on computation offloading and caching challenges in vehicular edge environments and explores solutions within this particular domain.
The in-vehicle edge environment merges computing resources from the vehicle and the edge, creating a strong platform for computing services. Yang et al. [15] introduced a location-based offloading scheme that effectively balances task speed with communication and computational resources, leading to significant system cost reduction. To tackle challenges related to task latency and constraints in RSU (roadside unit) server resources, Zhang et al. [16] proposed a contract-based computing resource allocation scheme in a cloud environment, aiming to maximize the benefits of MEC service providers while adhering to latency limits. Dai et al. [17] devised a novel approach by splitting the joint load balancing and offloading problem into two sub-problems, formulated as a mixed-integer nonlinear programming problem, with the primary objective of maximizing system utility under latency constraints. In the realm of 5G networks, Wan et al. [18] introduced an edge computing framework for offloading using multi-objective optimization and evolutionary algorithms, efficiently exploring the synergy between offloading and resource allocation within MEC and cloud computing, resulting in optimized task duration and server costs. For comprehensive optimization, Zhao et al. [19] proposed a collaborative approach that minimizes task duration and server costs through the joint optimization of offloading and resource allocation in the MEC and cloud computing domains. To solve the problem of communication delay and energy loss caused by the growth in IoV services, Ma [20] proposed, through a comprehensive analysis of the optoelectronic communication and computing model, encoding the vehicle computing task and transforming it into a knapsack problem, where a genetic algorithm is used to solve for the optimal resource allocation strategy. Lin et al. [21] proposed a data offloading strategy called PKMR, which considers a predicted k-hop count limit and utilizes vehicle-vehicle-RSU (VVR) paths for data offloading with neighboring RSUs. Sun et al. [22] introduced the PVTO method, which offloads V2V tasks to MEC and optimizes the strategy using GA, SAW, and MCDM. Ko et al. [23] introduced the belief-based task offloading algorithm (BTOA), where vehicles make computation and communication decisions based on beliefs while observing the resources and channel conditions in the current VEC environment. While prior research has predominantly focused on addressing computation offloading and resource allocation problems in vehicular edge environments with utility-, latency-, and cost-related objectives in mind, these strategies often overlook the potential impact of future traffic patterns within the context of vehicular networks, which can result in a loss of offloading precision. Moving forward, our discussion shifts to traffic flow prediction in vehicular environments and the accompanying investigation into computation offloading and content caching issues based on this groundwork.
In the IoV environment, effective resource deployment hinges on real-time responsiveness, accuracy, and intelligence, and traffic flow prediction is a crucial enabler of all three. To enhance real-time traffic prediction in various scenarios, Laha et al. [24] proposed two incremental learning methods and compared them with three existing methods to determine the scenarios each suits best. To address the limitations of current approaches in long-term prediction performance, Li et al. [25] introduced a novel prediction framework called SSTACON, which utilizes a shared spatio-temporal attention convolutional layer to extract dynamic spatio-temporal features and incorporates a graph optimization module to model the dynamic road network structure. To tackle the issue of parameter selection and improve prediction accuracy, Ai et al. [26] incorporated the Artificial Bee Colony (ABC) algorithm into the ABC-SVR algorithm. Lv et al. [27] addressed the service switching challenge among adjacent roadside units, introducing cooperation between vehicles and vehicle-to-roadside-unit communication, and implementing trajectory prediction to minimize task processing delays. Fang et al. [28] recognized the significance of traffic flow prediction and, building upon this foundation, introduced the ST-ResNet network for traffic prediction, complemented by the NSGA-III algorithm for multi-objective optimization. Their approach demonstrated superior performance in terms of latency and energy consumption, outperforming existing methodologies.
Based on these findings, we observed a relative scarcity of research focusing on joint optimization of computation offloading and content caching based on traffic flow prediction. Hence, our study aims to address the problem of joint optimization of task offloading and edge content caching. We employ an enhanced traffic-based multi-objective evolutionary algorithm to achieve this. The overarching objective is to minimize both transmission and computation latency while reducing energy consumption.
Table 1 provides a concise summary of the reviewed literature, covering the application environment, the challenges addressed, and the optimization objectives.

3. System Model and Problem Formulation

3.1. System Framework

Figure 1 illustrates a robust three-tier cloud–edge–vehicular network framework designed to cater to diverse tasks across different regions. The framework comprises three layers: the vehicle terminal layer, the Mobile Edge Computing (MEC) layer, and the cloud computing layer. In the cloud computing layer, cloud servers cover the entire area, providing extensive coverage. The MEC layer consists of edge servers deployed along the roadside, each covering a local area. The vehicle terminal layer comprises vehicles traversing the road, communicating with both the MEC layer and the cloud computing layer via wireless channels. During operation, the vehicle terminal layer undertakes one or more computing tasks with varying probabilities, taking advantage of time gaps to execute the tasks efficiently. Three possible destinations for task offloading are available: local processing (i.e., handling tasks on the computing device within the vehicle), processing by the edge server, or processing by the cloud server. The edge server is equipped with the capability to cache popular content, further enhancing task offloading efficiency.
A region’s traffic stream is the count of vehicles passing through it in a given time slot, such as one minute. Let $Tr$ denote the set of vehicle trajectories at the $t$-th time slot. The vehicle traffic stream in region $i$ during the $t$-th time slot can be determined by $TS_i(t) = \sum_{tr \in Tr} I(tr)$, where $tr = \langle s_1, s_2, \ldots, s_{|tr|} \rangle$ is an ordered set representing the discrete trajectory of a user multimedia request over time; $s_k$ represents the geographical position of the user’s multimedia request at a certain time; and $I(tr)$ is a binary variable that equals 1 if some $s_k$ in $tr$ falls in region $i$, and 0 otherwise. The problem of predicting the stream of vehicles can be defined as follows: given the historical traffic stream $\{TS_i(t) \mid 1 \le i \le I, 1 \le t \le T\}$, the goal is to predict the future traffic stream $\{TS_i(T+1) \mid 1 \le i \le I\}$.
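To make the counting concrete, a minimal Python sketch of $TS_i(t)$ is given below; the helper `region_of` and the trajectory representation are our assumptions for illustration, not part of the paper:

```python
from collections import defaultdict

def traffic_stream(trajectories, num_regions, region_of):
    """Compute TS_i = sum over trajectories of I(tr) for every region i.

    trajectories: iterable of trajectories, each an ordered list of positions s_k
    region_of:    assumed mapping from a position to its region index
    """
    ts = defaultdict(int)
    for tr in trajectories:
        for i in {region_of(s) for s in tr}:  # I(tr) = 1 if any s_k of tr lies in region i
            ts[i] += 1
    return [ts[i] for i in range(num_regions)]
```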
In a partitioned region, we assume $N > Q$, where $N$ is the number of vehicles and $Q$ is the number of tasks. This assumption accounts for prevalent tasks that are repeatedly requested and executed. Each vehicle can perform only one task at a time, and multiple vehicles may request the same task based on their preferences. To simplify notation, we define $q_{i,n}$ as the task $q$ generated by the $n$-th vehicle in region $i$.
To characterize different computational tasks, a triple is employed as the computational task model: $q_{i,n} = (c_{q_{i,n}}, d_{q_{i,n}}, DDL_{q_{i,n}})$. Task $q_{i,n}$ can be partially offloaded to either the MEC server or the cloud computing server for processing. The parameter $c_{q_{i,n}}$ is the number of CPU cycles required to accomplish task $q_{i,n}$. The parameter $d_{q_{i,n}}$ represents the input data size required for processing task $q_{i,n}$, while $DDL_{q_{i,n}}$ signifies the maximum deadline for completing the task. It is assumed that the value of $c_{q_{i,n}}$ remains constant regardless of whether task $q_{i,n}$ is processed locally, offloaded to the MEC server, or executed on the cloud computing server. Furthermore, MEC servers within the region are assumed to have limited computation capacity, denoted as $cc_m$, and a cache size of $cs_m$.
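For the later sketches, it is convenient to bundle these task parameters into a small record; the field names below are our own, and the record also carries the result size $o_{q_{i,n}}$ introduced with Equation (3):

```python
from dataclasses import dataclass

@dataclass
class Task:
    cycles: float    # c_{q_{i,n}}: CPU cycles required to complete the task
    data_in: float   # d_{q_{i,n}}: input data size to be uploaded
    deadline: float  # DDL_{q_{i,n}}: maximum completion time
    out_size: float  # o_{q_{i,n}}: size of the processing result
```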

3.2. Execution Time and Energy Consumption Model

For the task offloading problem, we divide the computational task into multiple parts and define the offloading decision variable $\alpha_n^i = (\alpha_{i,n}^l, \alpha_{i,n}^m, \alpha_{i,n}^c)$, where $\alpha_{i,n}^l, \alpha_{i,n}^m, \alpha_{i,n}^c \in [0, 1]$ denote the fractions of task $q_{i,n}$ executed on the local vehicle, the MEC server, and the cloud server, respectively. The constraint $\alpha_{i,n}^l + \alpha_{i,n}^m + \alpha_{i,n}^c = 1$ ensures that the entire task is accounted for. For example, if $\alpha_{i,n}^l = 1$, the task is executed exclusively within the vehicle, whereas $\alpha_{i,n}^l = 0$ indicates complete offloading to the MEC or cloud server. The overall offloading decision policy is denoted as $A = [\alpha_1^1, \alpha_2^1, \ldots, \alpha_n^i, \ldots, \alpha_N^I]$.

3.2.1. Execution Time and Energy Consumption of Local Task Computation

If task $q_{i,n}$ is selected for local processing, then $TL_{q_{i,n}}$ is defined as the local execution time. Owing to differences in computational power across heterogeneous vehicles, the local execution delay of task $q_{i,n}$ is

$$TL_{q_{i,n}}(t) = \frac{\alpha_{i,n}^l \times c_{q_{i,n}}}{fl_n^i} \tag{1}$$

The local energy consumption is calculated as

$$EL_{q_{i,n}}(t) = \alpha_{i,n}^l \times c_{q_{i,n}} \times (fl_n^i)^2 \times \varsigma \tag{2}$$

where $fl_n^i$ is the computational power of the $n$-th vehicle in region $i$, and $(fl_n^i)^2 \times \varsigma$ is the energy consumption per CPU cycle.

3.2.2. Execution Time and Energy Consumption of Edge Task Computing

When task $q_{i,n}$ is offloaded to the MEC server, the process involves the following steps: the $n$-th vehicle uploads the task’s input data to the MEC server via the BS/RSU; the MEC server allocates computational resources to process the task and returns the result to the vehicle. The time for tasks offloaded to MEC server $m$ can be described as

$$TM_{q_{i,n}}(t) = \alpha_{i,n}^m \times \left( \frac{d_{q_{i,n}}}{v_n^i(t)} + \frac{c_{q_{i,n}}}{f_m^i} + \frac{o_{q_{i,n}}}{v_m} \right) \tag{3}$$

where the first term in parentheses in Equation (3) represents the upload time to the edge server, the second term denotes the execution delay for processing $q_{i,n}$, and the final term accounts for the feedback delay of returning the processing results. Here, $f_m^i$ is the computing capability of edge servers in region $i$; $v_n^i(t)$ signifies the data offload rate from the $n$-th vehicle to the mobile edge server within region $i$ at time slot $t$; $o_{q_{i,n}}$ is the size of the task processing outcome; and $v_m$ is the backhaul transmission rate of edge server $m$.

The corresponding edge energy consumption is

$$EM_{q_{i,n}}(t) = \alpha_{i,n}^m \times \left( c_{q_{i,n}} \times (f_m^i)^2 \times \varsigma + \frac{o_{q_{i,n}}}{v_m} \times \delta_m \right) \tag{4}$$

where $\delta_m$ is the offload capability of edge server $m$.

3.2.3. Execution Time and Energy Consumption of Cloud Server Computing

The cloud server $c$ is situated farther from the task source than the edge server, resulting in potential latency. Hence, we incorporate the cloud computing model for task processing only under specific conditions: when the task demands computational resources that surpass the processing capacity of the edge server, or when the edge server is already operating at its maximum capacity due to concurrent multitasking. The time $TC_{q_{i,n}}$ of the task offloaded to cloud server $c$ is defined as

$$TC_{q_{i,n}}(t) = \alpha_{i,n}^c \times \left( \frac{d_{q_{i,n}}}{v_{n,c}^i(t)} + \frac{c_{q_{i,n}}}{f_c^c} + \frac{o_{q_{i,n}}}{v_c} \right) \tag{5}$$

where $v_{n,c}^i(t)$ denotes the data offload rate from the $n$-th vehicle to the cloud server within region $i$ at time slot $t$, $f_c^c$ is the computing capability of cloud server $c$, and $v_c$ is the backhaul transmission rate of cloud server $c$. Furthermore, the energy consumption of the cloud server is

$$EC_{q_{i,n}}(t) = \alpha_{i,n}^c \times \left( c_{q_{i,n}} \times (f_c^c)^2 \times \varsigma + \frac{o_{q_{i,n}}}{v_c} \times \delta_c \right) \tag{6}$$
Based on the analysis above, the execution time for the task on the $n$-th vehicle in region $i$ can be calculated according to the equation below:

$$TT_{q_{i,n}}(t) = \max(TL_{q_{i,n}}, TM_{q_{i,n}}, TC_{q_{i,n}}) \tag{7}$$

Regarding energy consumption, the calculation for the task executed on the $n$-th vehicle in region $i$ can be expressed as follows:

$$ET_{q_{i,n}}(t) = EL_{q_{i,n}} + EM_{q_{i,n}} + EC_{q_{i,n}} \tag{8}$$
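Under the assumptions above (the illustrative `Task` record and the parameter names of Equations (1)–(8)), the per-task time and energy model can be sketched as follows; it also returns the edge-only energy term, which the caching model below reuses:

```python
def task_time_energy(task, alpha, f_local, f_edge, f_cloud,
                     v_up_edge, v_up_cloud, v_m, v_c,
                     sigma, delta_m, delta_c):
    """Return (TT, ET, EM) for one task split alpha = (a_l, a_m, a_c), sum(alpha) == 1."""
    a_l, a_m, a_c = alpha

    # Eqs. (1)-(2): local execution
    t_local = a_l * task.cycles / f_local
    e_local = a_l * task.cycles * f_local**2 * sigma

    # Eqs. (3)-(4): edge execution = upload + compute + result feedback
    t_edge = a_m * (task.data_in / v_up_edge + task.cycles / f_edge + task.out_size / v_m)
    e_edge = a_m * (task.cycles * f_edge**2 * sigma + task.out_size / v_m * delta_m)

    # Eqs. (5)-(6): cloud execution
    t_cloud = a_c * (task.data_in / v_up_cloud + task.cycles / f_cloud + task.out_size / v_c)
    e_cloud = a_c * (task.cycles * f_cloud**2 * sigma + task.out_size / v_c * delta_c)

    # Eq. (7): the three parts run in parallel, so the task finishes at the slowest one
    # Eq. (8): the energy contributions add up
    return max(t_local, t_edge, t_cloud), e_local + e_edge + e_cloud, e_edge
```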

3.3. Edge Data Caching Model

Task caching refers to the storage of completed tasks and their associated data on the edge cloud. This paper formulates the content caching problem using a binary cache decision variable $s_m^{i,n} \in \{0, 1\}$. If $s_m^{i,n} = 1$, the task generated by the $n$-th vehicle in region $i$ is cached on edge server $m$; otherwise, it is set to 0. Therefore, the task caching policy is expressed as $S = \{s_1^{1,1}, s_1^{1,2}, \ldots, s_m^{i,n}, \ldots\}$.
Considering the joint processing of task offloading and content caching across the local vehicle, edge, and cloud servers, the total execution latency of task $q_{i,n}$ generated by the $n$-th vehicle in region $i$ is

$$T_{q_{i,n}}(t) = s_m^{i,n} \times \frac{o_{q_{i,n}}}{v_m} + (1 - s_m^{i,n}) \times TT_{q_{i,n}}(t) \tag{9}$$

Furthermore, the total energy consumption is

$$E_{q_{i,n}}(t) = s_m^{i,n} \times EM_{q_{i,n}}(t) + (1 - s_m^{i,n}) \times ET_{q_{i,n}}(t) \tag{10}$$

Therefore, the average execution time of vehicles in region $i$ is

$$\bar{T}_i(t) = \frac{1}{TS_i(t)} \times \sum_{n=1}^{TS_i(t)} T_{q_{i,n}}(t) \tag{11}$$

The energy consumption of vehicles in region $i$ is

$$E_i(t) = \sum_{n=1}^{TS_i(t)} E_{q_{i,n}}(t) \tag{12}$$

The combined size of the data cached in the regional edge servers is

$$CS_i = \sum_{n=1}^{TS_i(t)} \left[ s_m^{i,n} \times o_{q_{i,n}} + (1 - s_m^{i,n}) \times \alpha_{i,n}^m d_{q_{i,n}} \right] \tag{13}$$
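Continuing the sketch, Equations (9)–(12) combine the binary caching decision with the offloading model; `model` stands for the `task_time_energy` helper above:

```python
def region_metrics(tasks, alphas, cached, model, v_m):
    """Average latency (Eq. 11) and total energy (Eq. 12) for one region."""
    total_t = total_e = 0.0
    for task, alpha, s in zip(tasks, alphas, cached):
        tt, et, em = model(task, alpha)
        total_t += task.out_size / v_m if s else tt  # Eq. (9): cached -> only result feedback
        total_e += em if s else et                   # Eq. (10): cached -> edge energy only
    return total_t / len(tasks), total_e
```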

3.4. Problem Definition

In this paper, the aim is to minimize the average time taken to execute and the total energy consumption within each region. This problem is formulated with the consideration of maximum latency and computing power constraints, and can be described as follows:
$$\begin{aligned}
\min \; & \bar{T}_i(t), \; \min \; E_i(t), \quad i = 1, 2, \ldots, I \\
\text{s.t.} \quad & \alpha_{i,n}^l, \alpha_{i,n}^m, \alpha_{i,n}^c \in [0, 1], \quad \alpha_{i,n}^l + \alpha_{i,n}^m + \alpha_{i,n}^c = 1 \\
& s_m^{i,n} \in \{0, 1\} \\
& T_{q_{i,n}}(t) \le DDL_{q_{i,n}} \\
& CS_i \le cs_m \\
& \sum_{n=1}^{N} \alpha_{i,n}^m f_m^i \le cc_m
\end{aligned} \tag{14}$$
These constraints address task divisibility, binary caching decisions, and timely task completion. They also limit the data cached by tasks to match the edge server cache capacity while ensuring that the combined resource requests of offloaded tasks do not surpass the computing capacity of edge servers.
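For illustration, a feasibility check for a candidate solution under the constraints of Equation (14) could look as follows (a sketch under the same naming assumptions as the earlier snippets):

```python
def feasible(tasks, alphas, cached, model, v_m, cs_m, cc_m, f_edge, eps=1e-9):
    """Check one region's decisions against the constraints of Eq. (14)."""
    cache_used = edge_load = 0.0
    for task, alpha, s in zip(tasks, alphas, cached):
        a_l, a_m, a_c = alpha
        if not all(0.0 <= a <= 1.0 for a in alpha):        # task divisibility
            return False
        if abs(a_l + a_m + a_c - 1.0) > eps:               # the whole task is covered
            return False
        tt, _et, _em = model(task, alpha)
        latency = task.out_size / v_m if s else tt
        if latency > task.deadline:                        # deadline constraint (DDL)
            return False
        cache_used += task.out_size if s else a_m * task.data_in  # Eq. (13)
        edge_load += a_m * f_edge
    return cache_used <= cs_m and edge_load <= cc_m        # cache and compute capacity
```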
It is worth emphasizing that the execution time, energy consumption, and other formulas in this article adhere to established modeling principles and follow the foundational work of previous related research. The definitions and rationale of these formulas are derived from in-depth study of the cited literature [6,12,19,28], ensuring the soundness and credibility of our offloading strategies and their application in different scenarios.

4. Joint Optimization for Offloading and Content Caching with Traffic Stream Prediction

In this section, FOST [29], a highly versatile and easy-to-use open-source spatio-temporal forecasting tool, is first used to solve the problem of predicting traffic streams. The BikeNYC dataset is preprocessed into a format adapted to the model, after which FOST extracts temporal and spatial correlations from the dataset and integrates them to predict traffic streams, thus improving prediction accuracy. Then, MOEA/D [30] is used to search for the optimal solution of the joint task offloading and content caching problem, determining whether to cache vehicle-generated tasks at the edge or offload them to each platform.

4.1. FOST Enabled Traffic Stream Prediction

4.1.1. Data Preprocessing Requirements

Before using the model for prediction, it is necessary to provide a dataset format adapted to it, consisting of a train part and a graph part: the train part contains timestamps and target values, and the graph part contains the spatial relationship weights between nodes.
According to the first law of geography, “Everything is related to everything else, but near things are more related than distant things”. To reflect the spatial dependence of multiple regions, we define $W$ as the spatial weight matrix describing the spatial dependence among $i \times j$ regions:

$$W = \begin{pmatrix} w_{11} & w_{12} & \cdots & w_{1j} \\ w_{21} & w_{22} & \cdots & w_{2j} \\ \vdots & \vdots & \ddots & \vdots \\ w_{i1} & w_{i2} & \cdots & w_{ij} \end{pmatrix}$$

where $w_{ij}$ represents the degree of influence of region $i$ on region $j$ in space. If $w_{ij} = 1$, regions $i$ and $j$ are adjacent; otherwise, $w_{ij} = 0$.
We choose Queen adjacency to construct the spatial relationship weight matrix, as shown in Figure 2. Under Queen adjacency, two geographical entities are considered adjacent if they are located next to each other horizontally, vertically, or diagonally within a given context. The Queen adjacency weight matrix represents these spatial relationships, supplying fundamental data for subsequent spatial analyses. For instance, in Figure 2, if entity B is linked to entity A, then the spatial weight of entities A and B is set to 1. Conversely, if entity C is distant from A, then the spatial weight of entities A and C is designated as 0. The weight matrix derived from Queen adjacency has proven to be a valuable tool for a wide array of spatial analysis and modeling tasks.
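As an illustration of this construction for a rectangular grid of regions (the grid shape is our assumption, chosen to match the 16 × 8 BikeNYC grid used later), a Queen adjacency weight matrix can be built as follows:

```python
import numpy as np

def queen_adjacency(rows, cols):
    """Binary spatial weight matrix: w_ij = 1 if cells i and j touch
    horizontally, vertically, or diagonally (Queen contiguity), else 0."""
    n = rows * cols
    w = np.zeros((n, n), dtype=int)
    for r in range(rows):
        for c in range(cols):
            i = r * cols + c
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    rr, cc = r + dr, c + dc
                    if (dr or dc) and 0 <= rr < rows and 0 <= cc < cols:
                        w[i, rr * cols + cc] = 1
    return w

W = queen_adjacency(16, 8)  # e.g., the 16 x 8 region grid of the BikeNYC data
```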

4.1.2. Spatial-Temporal Correlation Extraction

Traffic flows often showcase intricate temporal and spatial patterns. FOST excels in dissecting these nuances, delving into both long-term and short-term trends within traffic data by adeptly extracting temporal correlations. Additionally, leveraging a spatial correlation analysis enables FOST to unravel the intricate relationships between traffic flows in distinct locations. Consequently, FOST’s proficiency in discerning temporal and spatial correlations not only refines predictions with a fine-grained precision, surpassing global estimates, but also empowers the model to more effectively apprehend changes and patterns inherent in the data.
In the realm of modeling, FOST’s module leverages the preprocessed dataset to employ advanced architectures such as multilayer perceptron (MLP) and recurrent neural networks (RNNs). These architectures adeptly extract temporal correlations embedded in the training dataset. Furthermore, the integration of graph neural networks (GNNs) facilitates the capture of spatial correlations among nodes in graph datasets. The fusion module is intricately designed to autonomously select and amalgamate model predictions. This innovative design empowers FOST with the flexibility and intelligence to dynamically adjust model combinations, ensuring heightened precision and robustness in predictions when confronted with complex forecasting tasks. In the specific domain of vehicle traffic prediction, FOST recognizes the imperative consideration of not only temporal fluctuations but also spatial interactions. Acknowledging that traffic dynamics within a particular area are intricately linked to neighboring regions, FOST underscores the significance of spatial correlations that cannot be overlooked.
Hence, the extraction of both temporal and spatial correlations from traffic stream data becomes essential. To achieve this, the time series data covering recent temporal intervals, including cumulative traffic flows entering and exiting the region during each time slot, are input into an underlying deep temporal neural network module. The temporal module of the model then assimilates patterns from historical data and encodes them as a collection of vectors within a latent space.
Furthermore, a crucial step involves the integration of spatial information through a mechanism of superimposing information onto the temporal patterns of adjacent spatial units. This facilitates a comprehensive aggregation of spatial insights and patterns, enhancing the predictive capabilities of the model.
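The superimposition step can be pictured as blending each region's latent temporal vector with those of its Queen neighbors; the sketch below is our illustration of the idea, not FOST's internal API:

```python
import numpy as np

def fuse_spatial(h_temporal, w):
    """Aggregate neighbor embeddings into each region's temporal embedding.

    h_temporal: (n_regions, d) latent vectors from the temporal module
    w:          (n_regions, n_regions) Queen adjacency weight matrix
    """
    w_norm = w / np.maximum(w.sum(axis=1, keepdims=True), 1)  # row-normalize neighbors
    return h_temporal + w_norm @ h_temporal                   # self signal + neighborhood
```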
We performed an in-depth comparison between the real BikeNYC dataset and FOST’s traffic flow predictions at a specific time instance. The comparative findings are depicted in Figure 3. Evidently, after adequate training, FOST demonstrates exceptional accuracy in predicting the traffic flow within a 16 × 8 region. This performance lays a robust and dependable groundwork for the subsequent optimization of task offloading and edge caching based on traffic flow prediction.

4.2. MOEA/D-Based Joint Optimization

The IoV environment typically consists of multiple regions, so the joint optimization of task offloading and content caching, with the objective of minimizing the average execution time and total energy consumption in each region under maximum latency and computational power constraints, is a multi-objective optimization problem. This multi-objective problem is converted into a set of single-objective subproblems using the Tchebycheff approach. A collaborative approach combined with a population evolution strategy optimizes these subproblems simultaneously, exploiting the neighborhood relationships between them.

4.2.1. Tchebycheff Weight Aggregation Approach

Among the various decomposition methods for MOEA/D, we choose the Tchebycheff approach because it can handle relatively complex problems with high computational efficiency. The decomposed cost of the subproblem for computing the content caching or offloading of tasks in a region then takes the form

$$\text{minimize} \quad g^{tche}(x \mid w, z^*) = \max_{1 \le i \le m} \left\{ w_i \left| f_i(x) - z_i^* \right| \right\}$$

where $z^* = (z_1^*, z_2^*)$ is the reference point formed by the optimal values of $\bar{T}_i(t)$ and $E_i(t)$ over the population ($i \in \{1, 2, \ldots, P\}$), $w_i$ represents the weight of the $i$-th objective function for each $x$, and $P$ is the population size defined by MOEA/D. In this paper, let $f_1$ be the energy consumption of the area and $f_2$ be the average time delay of the area. The multi-objective optimization problem is decomposed into $m$ subproblems according to the Tchebycheff ray uniform expansion, the objective weights of each subproblem are assigned, and the subproblem weight matrix $w = (w_1, w_2, \ldots, w_i, \ldots, w_m)$ is obtained, where $w_i = (w_i^1, w_i^2)$ represents the weight vector of the $i$-th subproblem. Let $w_i^1 = \frac{i}{m+1}$ and $w_i^2 = 1 - w_i^1$; the weights are then rescaled as $w_i^1 \leftarrow \frac{1 / w_i^1}{1 / w_i^1 + 1 / w_i^2}$ and $w_i^2 \leftarrow 1 - w_i^1$.
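A small sketch of the weight construction and Tchebycheff aggregation described above; the rescaling step reflects our reading of the ray uniform expansion and should be treated as an assumption:

```python
import numpy as np

def tchebycheff_weights(m):
    """Weight vectors w_i = (w_i^1, w_i^2), i = 1..m, with the rescaling step."""
    w1 = np.arange(1, m + 1) / (m + 1)
    w2 = 1.0 - w1
    w1 = (1 / w1) / (1 / w1 + 1 / w2)  # rescale so the rays spread more uniformly
    return np.stack([w1, 1.0 - w1], axis=1)

def g_tche(f, w, z_star):
    """Tchebycheff cost max_i w_i * |f_i(x) - z_i*|, minimized per subproblem."""
    return np.max(w * np.abs(np.asarray(f) - np.asarray(z_star)))
```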

4.2.2. Chromosome Coding and Genetic Operators

In this paper, the chromosome encoding for the population evolution strategy uses Real Integer Encoding (RI), where each gene on the chromosome stores the real value of a decision variable. For selection, tournament selection (etour) was used. For recombination, we used Simulated Binary Crossover (SBX), which simulates single-point binary crossover on real-valued variables. For mutation, we generated a new population chromosome matrix by applying polynomial mutation to each decision variable in the real-integer-encoded population chromosome matrix according to the mutation rate.
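For reference, the two variation operators take their standard textbook forms, sketched below; the distribution indices and bounds are assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng()

def sbx_pair(p1, p2, eta_c=20.0):
    """Simulated Binary Crossover on two real-valued parent vectors."""
    u = rng.random(p1.shape)
    beta = np.where(u <= 0.5,
                    (2 * u) ** (1 / (eta_c + 1)),
                    (1 / (2 * (1 - u))) ** (1 / (eta_c + 1)))
    return (0.5 * ((1 + beta) * p1 + (1 - beta) * p2),
            0.5 * ((1 - beta) * p1 + (1 + beta) * p2))

def poly_mutation(x, low, high, eta_m=20.0, pm=0.1):
    """Polynomial mutation applied gene-wise with probability pm."""
    y = x.copy()
    mask = rng.random(x.shape) < pm
    u = rng.random(x.shape)
    delta = np.where(u < 0.5,
                     (2 * u) ** (1 / (eta_m + 1)) - 1,
                     1 - (2 * (1 - u)) ** (1 / (eta_m + 1)))
    y[mask] = np.clip(x[mask] + delta[mask] * (high - low), low, high)
    return y
```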

4.3. Description of the Algorithm

Algorithm 1 provides an overview of the optimization process. Steps 1–4 involve preprocessing the traffic stream data and initializing the trainable parameters for model training. Steps 5–24 focus on the joint optimization of task offloading and content caching using MOEA/D. First, let $G$ denote the maximum number of iterations and $T$ the number of neighbours of each weight vector $w_i$. To initialize the population $P_0$, $p$ individuals are randomly generated within the decision space. Each individual in population $P_0$ computes the values of $\bar{T}_i(t)$ and $E_i(t)$ for the objective function defined in Equation (14). The reference point $z^* = (z_1, z_2)^T$ and the set of uniformly distributed individual weight vectors $V = \{v_1, v_2, \ldots, v_T\}$ are initialized. Next, the Euclidean distance between every pair of weight vectors is calculated to identify the $T$ nearest weight vectors for each weight vector, denoted as $B(i) = \{i_1, i_2, \ldots, i_T\}$. After that, the evolutionary process is carried out with updates, and the minimum delay and minimum energy consumption are updated at each step. If the termination condition is met, the computation concludes and the results are generated. Finally, the corresponding set of offloading policies $A$ and caching policies $S$ is output. MOEA/D maintains an internal population of $N$ solutions and an external population (EP). MOEA/D is a technique that breaks down a multi-objective optimization problem into numerous scalar optimization subproblems, all of which are tackled simultaneously [30]. These subproblems draw on information from their neighboring subproblems during optimization, leading to reduced computational complexity within the TOCC framework.
Algorithm 1: TOCC: Task offloading and content caching strategy based on MOEA/D
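A minimal reconstruction of the loop that Algorithm 1 describes, reusing the Tchebycheff and variation helpers sketched earlier; the step ordering and helper names follow our reading of the description above, not a verbatim transcription of the pseudocode:

```python
import numpy as np

def tocc_moead(evaluate, init_individual, n_sub, t_neigh, g_max):
    """MOEA/D-style joint optimization of offloading (A) and caching (S) decisions.

    evaluate(x) -> (avg_delay, energy) for a decision vector x in [0, 1]^d;
    init_individual() -> a random decision vector. Returns the external population EP.
    """
    w = tchebycheff_weights(n_sub)                           # subproblem weights
    dist = np.linalg.norm(w[:, None] - w[None, :], axis=2)
    b = np.argsort(dist, axis=1)[:, :t_neigh]                # neighborhoods B(i)
    pop = [init_individual() for _ in range(n_sub)]
    f = np.array([evaluate(x) for x in pop], dtype=float)
    z_star = f.min(axis=0)                                   # reference point z*
    ep = []                                                  # external population
    for _ in range(g_max):
        for i in range(n_sub):
            k, l = np.random.choice(b[i], 2, replace=False)  # mate within B(i)
            c1, _ = sbx_pair(pop[k], pop[l])
            child = poly_mutation(c1, 0.0, 1.0)
            fc = np.asarray(evaluate(child), dtype=float)
            z_star = np.minimum(z_star, fc)                  # update best delay/energy
            for j in b[i]:                                   # neighborhood replacement
                if g_tche(fc, w[j], z_star) <= g_tche(f[j], w[j], z_star):
                    pop[j], f[j] = child, fc
            # maintain EP: drop entries the child dominates, keep child if non-dominated
            ep = [(x, fx) for x, fx in ep
                  if not (np.all(fc <= fx) and np.any(fc < fx))]
            if not any(np.all(fx <= fc) for _, fx in ep):
                ep.append((child, fc))
    return ep
```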

5. Experimental Evaluation

This section begins with a description of the experimental setup, after which the performance of TOCC is evaluated.

5.1. Experimental Setup

The performance of TOCC was evaluated and compared to three baselines. These baselines are briefly described below.
  • LOCAL: No utilization of edge content cache and MEC, i.e., tasks are processed locally in the vehicle upon generation.
  • TFO: No edge content cache is utilized. The offload destination for the task is determined by selecting the destination with the lowest execution time.
  • F_NSGA-III: To facilitate a fair performance comparison of the joint optimization component, we substituted the prediction segment in the COC from [28] with FOST. Consequently, we utilized the NSGA-III algorithm, relying on FOST traffic flow prediction, to tackle the joint optimization problem.
In our experiments, we selected and assessed 128 city-center areas characterized by a maximum traffic stream of 30. We conducted a comprehensive evaluation of our method’s efficacy in diverse environmental scenarios spanning several dimensions: the number of content types (ranging from 5 to 30), input data sizes (from 1 to 2 times), CPU cycle counts (from 1 to 6 times), a 10% increase in edge server storage capacity, and a 50% increase in edge server computational power. Our methods were implemented in Python 3 on Ubuntu 18.04 LTS.

5.2. Comparative Experiments

5.2.1. Comparison with Changing of Content Types

As the number of content types increases, the average execution time of TOCC is better than that of LOCAL and TFO, as shown in Figure 4a. The cache of TOCC is sensitive to the content type and is greatly affected by it, which causes the average execution time to increase continuously. In contrast, TFO and LOCAL are less affected by content type changes, and their average execution times fluctuate within a certain higher range. The average execution time of TOCC is 46% better than LOCAL and 44% better than TFO. This highlights the effectiveness of MEC and edge content caching in reducing IoV time latency. TOCC is slightly worse than F_NSGA-III, by about 12%. Figure 4b shows that as the number of content types increases, TOCC has the lowest total energy consumption compared with LOCAL, TFO, and F_NSGA-III. In terms of total energy consumption, TOCC outperforms LOCAL by 60%, TFO by 85%, and F_NSGA-III by 79%. Through experimental observations, we made an intriguing discovery: comparing TOCC and F_NSGA-III, F_NSGA-III slightly outperforms TOCC in average latency, but in total energy consumption TOCC significantly outperforms F_NSGA-III. This phenomenon was consistently observed in subsequent experiments. Upon analyzing the algorithms used in both experiments, we found that TOCC employs a joint optimization strategy that carefully balances the trade-off between minimizing latency and energy consumption, effectively avoiding excessive energy consumption, at the expense of sacrificing a portion of latency minimization.

5.2.2. Comparison with Change in CPU Cycles

Figure 5a clearly illustrates the significant advantage of TOCC over LOCAL and TFO as the number of CPU cycles per task increases, while TOCC slightly loses to F_NSGA-III. In terms of the average execution time, TOCC achieves a 54% improvement over LOCAL and a 53% improvement over TFO, with a 46% gap relative to F_NSGA-III. In Figure 5b, as the number of CPU cycles per task increases, TOCC achieves the lowest energy consumption compared to LOCAL, TFO, and F_NSGA-III: its total energy consumption is 78% lower than LOCAL, 93% lower than TFO, and 94% lower than F_NSGA-III. This is because the strategy in TOCC better balances average execution time and energy consumption and does not let one objective dominate.

5.2.3. Comparison with Changing of Input Data Size

Figure 6a clearly depicts that TOCC outperforms LOCAL, TFO, and F_NSGA-III, achieving the lowest average execution time as the input data size of a single task is increased by 20%, with a flat upward trend. This is because the input data size affects the amount of tasks in the edge cache and the offloading of computational tasks. TOCC improved by 29% over LOCAL, 28% over TFO, and 1.2% over F_NSGA-III. In Figure 6b, the total energy consumption of TOCC is also the lowest as the number of tasks increases: it is 78% lower than LOCAL, 93% lower than TFO, and 94% lower than F_NSGA-III.

5.2.4. Comparison with Change in Edge Server Storage Capacity

Figure 7a clearly depicts that when the storage capacity of the edge server increases by 10%, TOCC outperforms LOCAL and TFO and is slightly worse than F_NSGA-III, achieving a low average execution time with a flat rising trend. TOCC achieves a 31% improvement over LOCAL and an 85% improvement over TFO, and is 1% behind F_NSGA-III. In Figure 7b, the total energy consumption of TOCC also performs best as the storage capacity of the edge server increases: it is 53% lower than LOCAL, 85% lower than TFO, and 97% lower than F_NSGA-III. This is because TOCC is very sensitive to the storage capacity of the edge server, whereas LOCAL and TFO are almost unaffected by it.

5.2.5. Comparison with Change in Edge Server Computing Power

With a 50% increase in edge server computational capacity, TOCC demonstrates superior average execution times compared to LOCAL, TFO, and F_NSGA-III, as illustrated in Figure 8a. LOCAL remains unaffected by changes in the edge server computational capacity, maintaining a consistent average execution time. TFO, on the other hand, is significantly affected, showing a continuous decrease in average execution time. Specifically, TOCC exhibits a 33% improvement over LOCAL, a 27% improvement over TFO, and a 7% improvement over F_NSGA-III in terms of the average execution time. Figure 8b reveals that as the edge server computational capacity increases, TOCC consistently achieves the lowest total energy consumption relative to LOCAL, TFO, and F_NSGA-III, reducing it by 43% relative to LOCAL, 83% relative to TFO, and 97% relative to F_NSGA-III. Interestingly, TOCC does not exhibit the significant latency advantage we initially anticipated with increasing edge server computational capacity; instead, its latency fluctuates within a certain range. Combining these findings with multiple experimental datasets, and according to Equations (3) and (4), an increase in the edge server computational capacity should result in a rapid increase in energy consumption. However, TOCC strikes a balance between energy consumption and latency, preventing excessive energy consumption at the cost of sacrificing some latency.
In our research, the TOCC algorithm is an effective means of solving joint optimization problems through multi-objective optimization. However, we encountered some challenges when exploring the normalization of latency and energy consumption. The original idea was to normalize the delay and the energy consumption and convert them into a single-objective problem via objective weighting. During the experiments, however, we faced some difficulties. First, the dimensions of delay and energy consumption are inconsistent, making it difficult to find a suitable ratio in the fitness function without one term dominating. Although we observed in our experiments that there seems to be a quasi-linear relationship between latency and energy consumption, the problem of dimensional inconsistency in the fitness function remains. Second, since we have not discussed specific business scenarios in detail, the relative importance of latency and energy consumption is uncertain in practical applications. The TOCC algorithm excels in this regard because it completes the trade-off within the algorithm itself, without requiring detailed discussion of specific business scenarios. This enables the TOCC algorithm to effectively achieve joint optimization without being affected by the dimensional inconsistency between delay and energy consumption or by uncertainty in business scenarios. Therefore, the TOCC algorithm shows strong robustness and adaptability in solving joint optimization problems.

6. Conclusions

To address the resource wastage issue arising from redundant content requests in the IoV, we propose an innovative solution that integrates task offloading and content caching optimization. Utilizing the FOST deep learning model, we extract spatial and temporal correlations to forecast traffic patterns. Building upon this foundation, we decompose the multi-objective problem of latency and energy consumption optimization into separate single-objective objectives, employing the Tchebycheff weight aggregation method within the MOEA/D algorithm. Our approach, named TOCC, is rigorously validated through comprehensive experiments. The results unequivocally demonstrate that TOCC outperforms LOCAL by 29% in terms of the average execution time, surpasses TFO by 21%, and exhibits a slight performance gap compared to F_NSGA-III. In the context of energy consumption, TOCC excels by 43% over LOCAL, outperforms TFO by 83%, and surpasses F_NSGA-III by 79%.
Looking forward, our work will include coping with dynamic changes in resource allocation, task prioritization under emergency situations [31], and other issues. Furthermore, we hope to extend our approach to more realistic application scenarios, ensuring its adaptability to diverse and evolving vehicle environments.

Author Contributions

Conceptualization, P.W.; methodology, P.W. and Y.W.; software, P.W., Y.W. and J.Q.; validation, P.W. and J.Q.; formal analysis, P.W. and Y.W.; investigation, P.W. and Y.W.; resources, P.W.; data curation, Y.W. and J.Q.; writing—original draft preparation, P.W. and Y.W.; writing—review and editing, P.W., Y.W. and Z.H.; visualization, P.W. and Y.W.; supervision, P.W.; project administration, P.W.; funding acquisition, P.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (NSFC) under Grant 61602109, the DHU Distinguished Young Professor Program under Grant LZB2019003, Shanghai Science and Technology Innovation Action Plan under Grant 22511100700, and Fundamental Research Funds for the Central Universities.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Publicly available datasets were analyzed in this study. This data can be found here: BikeNYC: https://pan.baidu.com/s/1mhIPrRE#list/path=%2F (accessed on 25 September 2023); FOST: https://github.com/microsoft/FOST (accessed on 25 September 2023).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Xia, Z.; Wu, J.; Wu, L.; Chen, Y.; Yang, J.; Yu, P.S. A Comprehensive Survey of the Key Technologies and Challenges Surrounding Vehicular Ad Hoc Networks. ACM Trans. Intell. Syst. Technol. 2021, 12, 37. [Google Scholar] [CrossRef]
  2. Zhou, S.; Gong, J.; Zhou, Z.; Chen, W.; Niu, Z. GreenDelivery: Proactive Content Caching and Push with Energy-Harvesting-based Small Cells. IEEE Commun. Mag. 2015, 53, 142–149. [Google Scholar] [CrossRef]
  3. Liu, B.; Xu, X.; Qi, L.; Ni, Q.; Dou, W. Task scheduling with precedence and placement constraints for resource utilization improvement in multi-user MEC environment. J. Syst. Archit. 2020, 114, 101970. [Google Scholar] [CrossRef]
  4. Wang, P.; Zhao, Y.; Huang, H.; Zhang, Z. Temperature Matrix-based Data Placement Optimization in Edge Computing Environment. Comput. Inform. 2022, 41, 1465–1490. [Google Scholar] [CrossRef]
  5. Zhao, X.; Zhao, L.; Liang, K. An energy consumption oriented offloading algorithm for fog computing. In Quality, Reliability, Security and Robustness in Heterogeneous Networks: 12th International Conference, Seoul, Republic of Korea, 7–8 July 2016; Springer: Berlin/Heidelberg, Germany, 2017. [Google Scholar]
  6. Wang, P.; Zhao, Y.; Zhang, Z. Joint Optimization of Data Caching and Task Scheduling based on Information Entropy for Mobile Edge Computing. In Proceedings of the 19th IEEE International Symposium on Parallel and Distributed Processing with Applications (ISPA 2021), New York, NY, USA, 30 September–3 October 2021. [Google Scholar]
  7. Chen, M.; Hao, Y. Task offloading for mobile edge computing in software defined ultra-dense network. IEEE J. Sel. Areas Commun. 2018, 36, 587–597. [Google Scholar] [CrossRef]
  8. Elgendy, I.; Meshoul, S.; Hammad, M. Joint Task Offloading, Resource Allocation, and Load-Balancing Optimization in Multi-UAV-Aided MEC Systems. Appl. Sci. 2023, 13, 2625. [Google Scholar] [CrossRef]
  9. Wang, P.; Chen, Z.; Zhou, M.; Zhang, Z.; Abusorrah, A.; Ammari, A.C. Cost-Effective and Latency-minimized Data Placement Strategy for Spatial Crowdsourcing in Multi-Cloud Environment. IEEE Trans. Cloud Comput. 2023, 11, 868–878. [Google Scholar] [CrossRef]
  10. Xu, J.; Hao, Z.; Sun, X. Optimal offloading decision strategies and their influence analysis of mobile edge computing. Sensors 2019, 19, 3231. [Google Scholar] [CrossRef] [PubMed]
  11. Yin, X.; Chen, L. Task Scheduling and Resource Management Strategy for Edge Cloud Computing Using Improved Genetic Algorithm. J. Inf. Process. Syst. 2023, 19, 450–464. [Google Scholar]
  12. Kimovski, D.; Mehran, N.; Kerth, C.E.; Prodan, R. Mobility-Aware IoT Application Placement in the Cloud—Edge Continuum. IEEE Trans. Serv. Comput. 2022, 15, 3358–3371. [Google Scholar] [CrossRef]
  13. Ding, S.; Lin, D. Multi-Agent Reinforcement Learning for Cooperative Task Offloading in Distributed Edge Cloud Computing. IEICE Trans. Inf. Syst. 2022, 105, 936–945. [Google Scholar] [CrossRef]
  14. Li, Y.; Zeng, D.; Gu, L.; Zhu, A.; Chen, Q.; Yu, S. PASTO: Enabling Secure and Efficient Task Offloading in TrustZone-Enabled Edge Clouds. IEEE Trans. Veh. Technol. 2023, 72, 8234–8238. [Google Scholar] [CrossRef]
  15. Yang, C.; Liu, Y.; Chen, X.; Zhong, W.; Xie, S. Efficient mobility-aware task offloading for vehicular edge computing networks. IEEE Access 2019, 7, 26652–26664. [Google Scholar] [CrossRef]
16. Zhang, K.; Mao, Y.; Leng, S.; Vinel, A.; Zhang, Y. Delay constrained offloading for mobile edge computing in cloud-enabled vehicular networks. In Proceedings of the 2016 8th International Workshop on Resilient Networks Design and Modeling (RNDM), Halmstad, Sweden, 13–15 September 2016. [Google Scholar]
  17. Dai, Y.; Xu, D.; Maharjan, S.; Zhang, Y. Joint load balancing and offloading in vehicular edge computing and networks. IEEE Internet Things J. 2018, 6, 4377–4387. [Google Scholar] [CrossRef]
  18. Wan, S.; Li, X.; Xue, Y.; Lin, W.; Xu, X. Efficient computation offloading for Internet of Vehicles in edge computing-assisted 5G networks. J. Supercomput. 2020, 76, 2518–2547. [Google Scholar] [CrossRef]
  19. Zhao, J.; Li, Q.; Gong, Y.; Zhang, K. Computation offloading and resource allocation for cloud assisted mobile edge computing in vehicular networks. IEEE Trans. Veh. Technol. 2019, 68, 7944–7956. [Google Scholar] [CrossRef]
  20. Ma, Z. Communication Resource Allocation Strategy of Internet of Vehicles Based on MEC. J. Inf. Process. Syst. 2022, 18, 389–401. [Google Scholar]
  21. Lin, S.; Huang, C.; Wu, T. Multi-Access Edge Computing-Based Vehicle-Vehicle-RSU Data Offloading Over the Multi-RSU-Overlapped Environment. IEEE Open J. Intell. Transp. Syst. 2022, 3, 7–32. [Google Scholar] [CrossRef]
  22. Sun, Y.; Wu, Z.; Shi, D.; Hu, X. Task Offloading Method of Internet of Vehicles Based on Cloud-Edge Computing. In Proceedings of the 2022 IEEE International Conference on Services Computing (SCC), Barcelona, Spain, 10–16 July 2022. [Google Scholar]
  23. Ko, H.; Kim, J.; Ryoo, D.; Cha, I.; Pack, S. A Belief-Based Task Offloading Algorithm in Vehicular Edge Computing. IEEE Trans. Intell. Transp. Syst. 2023, 24, 5467–5476. [Google Scholar] [CrossRef]
  24. Putatunda, S.; Laha, A. Travel Time Prediction in Real time for GPS Taxi Data Streams and its Applications to Travel Safety. Hum.-Centric Comput. Inf. Sci. 2023, 3, 381–401. [Google Scholar] [CrossRef]
  25. Li, P.; Ke, C.; Tu, H.; Zhang, H.; Zhang, X. Shared Spatio-temporal Attention Convolution Optimization Network for Traffic Prediction. J. Inf. Process. Syst. 2023, 19, 130–138. [Google Scholar] [CrossRef]
  26. Ai, R.; Li, C.; Li, N. Application of an Optimized Support Vector Regression Algorithm in Short-Term Traffic Flow Prediction. J. Inf. Process. Syst. 2022, 18, 719–728. [Google Scholar]
  27. Lv, B.; Yang, C.; Chen, X.; Yao, Z.; Yang, J. Task Offloading and Serving Handover of Vehicular Edge Computing Networks Based on Trajectory Prediction. IEEE Access 2021, 9, 130793–130804. [Google Scholar] [CrossRef]
28. Fang, Z.; Xu, X.; Dai, F.; Qi, L.; Zhang, X.; Dou, W. Computation offloading and content caching with traffic flow prediction for internet of vehicles in edge computing. In Proceedings of the 2020 IEEE International Conference on Web Services (ICWS), Beijing, China, 19–23 October 2020. [Google Scholar]
  29. FOST. Available online: https://www.msra.cn/zh-cn/news/features/fost (accessed on 7 December 2021).
  30. Zhang, Q.; Li, H. MOEA/D: A Multiobjective Evolutionary Algorithm Based on Decomposition. IEEE Trans. Evol. Comput. 2007, 11, 712–731. [Google Scholar] [CrossRef]
  31. Kiani, F.; Sarac, O.F. A novel intelligent traffic recovery model for emergency vehicles based on context-aware reinforcement learning. Inf. Sci. 2023, 619, 288–309. [Google Scholar] [CrossRef]
Figure 1. Three-tier cloud–edge–vehicle network framework with different tasks.
Figure 2. Queen adjacency.
Figure 3. Heat maps of the real dataset and the FOST prediction.
Figure 4. Comparison of average execution time and total energy consumption with the change in content types.
Figure 5. Comparison of average execution time and total energy consumption with change in CPU cycles.
Figure 6. Comparison of average execution time and total energy consumption with change in input data size.
Figure 7. Comparison of average execution time and total energy consumption with change in edge server storage capacity.
Figure 8. Comparison of average execution time and total energy consumption with change in edge server computing power.
Table 1. Related work comparison. The check mark indicates that the paper has addressed a specific aspect. Columns: Reference; Cloud Servers; Edge Servers; Internet of Vehicles; Task Offloading; Content Cache; Energy Consumption; Latency. Rows: [4], [5], [6], [7], [8], [9], [10], [11], [12], [13], [14], [15,16,17,18], [19], [20], [21], [22], [23], [24,25,26], [27], [28].

