Sensors
  • Article
  • Open Access

27 April 2022

Optimized Distributed Proactive Caching Based on Movement Probability of Vehicles in Content-Centric Vehicular Networks †

1 Department of Computer Science and Engineering, Kongju National University, Cheonan 31080, Chungnam, Korea
2 School of Information and Communication Engineering, Chungbuk National University, Cheongju 28644, Chungbuk, Korea
* Author to whom correspondence should be addressed.
This paper is an extended version of our conference paper: Park, S.; Oh, S.; Nam, Y.; Bang, J.; Lee, E. Mobility-aware Distributed Proactive Caching in Content-Centric Vehicular Networks. In Proceedings of the 2019 12th IFIP Wireless and Mobile Networking Conference (WMNC), Paris, France, 11–13 September 2019; pp. 175–180, doi:10.23919/WMNC.2019.8881585.
This article belongs to the Special Issue Green Communications under Delay Tolerant Networking

Abstract

Content-centric vehicular networks (CCVNs) have considered distributed proactive caching an attractive approach for the timely provision of emerging services. Naïve caching schemes cache all of the content at only one selected roadside unit (RSU) for the requesting vehicles, to decrease the data acquisition delay between the data source and the vehicles. Due to the high deployment cost of RSUs and their limited caching capacity, such vehicular networks can support only a limited number of vehicles and a limited amount of content, which decreases the cache hit ratio. This paper proposes a mobility-aware distributed proactive caching protocol (MDPC) for CCVNs. MDPC caches content at the RSUs selected according to the movement of vehicles. To reduce caching redundancy and the caching burden on each RSU, MDPC distributes partial contents among the next RSUs according to the movement pattern, i.e., the probability of the next locations (RSUs) predicted by a Markov model based on the current RSU. To recover from prediction failures, MDPC allows each RSU to request partial missing contents from relatively close neighbor RSUs with a short delay. We then extend the protocol with traffic optimization, called MDPC_TO, to minimize the amount of traffic needed to achieve proactive caching in CCVNs. In proportion to the mobility probability of a vehicle toward each of the next RSUs, MDPC_TO controls the amount of pre-cached content at each of the next RSUs. MDPC_TO also imposes constraints to provide enough content from the other next RSUs through backhaul links to remove the delay caused by prediction failures. Simulation results verify that MDPC_TO produces less traffic than MDPC.

1. Introduction

With the emerging demand for content-based applications, content-centric networking (CCN) has attracted attention as a next-generation communication paradigm [1]. In particular, based on mobile communication and intelligent vehicular technologies, content-centric vehicular networks (CCVNs) [2] are required to provide pleasant and safe driving as well as a variety of services such as multimedia entertainment and social interaction on the go [3,4]. Owing to key services such as on-the-road multimedia, the scale of vehicular content increases significantly; at the same time, the network suffers from the sheer amount of content and requirements such as seamless service. In the future, the main type of content will be video, whose volume grows ever larger with user demand and causes a huge amount of network traffic. However, the network components have limited capacity (e.g., short communication range, intermittent connectivity, constrained resources).
In existing vehicular networks, each vehicle has its own on-board unit (OBU), so it has a wireless networking function and can receive content via the nearest roadside unit (RSU). Vehicles connect with RSUs through vehicle-to-infrastructure (V2I) communication. A vehicle sends its interests to obtain requested content; the RSUs then relay the requested content provided by the remote content server(s). In the communication between vehicles and RSUs, the network should support mobility, as vehicles move continuously and request a seamless content service. In other words, a requester vehicle should receive content from RSUs continuously. Even with mobility support, the delay in receiving data from an RSU can increase when a vehicle that has requested content disconnects or moves to a new location before receiving it. For continuously moving vehicles, in-network caching has been considered an attractive approach in CCVNs. There are two approaches to caching data in the network: reactive and proactive. In the reactive approach [5,6], if the requester vehicle disconnects from the current RSU, that RSU keeps caching content for the vehicle's interest. When the vehicle reconnects to a new RSU, it informs the new RSU of the identity of the previous RSU so the two RSUs can connect, and the new RSU requests the content cached at the previous RSU. This approach has the disadvantage of an increased delay before the new RSU can start transferring the content to the vehicle.
Proactive caching [7,8,9] is a more efficient approach for content distribution: it prefetches the content of interest requested by vehicles ahead of time. The most basic way to cache content is cache-all, which lets every caching unit cache all content passing through it. It is simple to implement, but it creates high redundancy in the RSU caches, and this redundancy causes additional delays in searching and processing. To cache content more efficiently, some studies [8,9] use the mobility pattern of vehicles for prediction and select only a few RSUs to cache content. Unfortunately, they still have problems. First, they suffer from the cache limitation of each RSU and can therefore support only a limited number of vehicles and amount of content, which becomes serious as the number of vehicles and the amount of content gradually increase. Second, if the prediction based on the mobility pattern fails, requesting and receiving the content incurs significant delays: if a vehicle connects to an RSU other than the predicted one to request successive content, that RSU has no content to provide and must request or search for the content at the remote content provider (server).
This paper proposes a mobility-aware distributed proactive caching strategy for content-centric vehicular networks. To reduce the redundancy and the caching burden on each RSU, our previous work [10,11] distributes content according to the mobility pattern, i.e., the probability of the next locations (RSUs) predicted by a Markov model based on the current location (RSU). To recover from prediction failures, the next RSUs are provided with information on how the content is distributed among them; with this information, each RSU can request partial missing contents from relatively close RSUs with a short delay. However, a caching strategy that relies only on Markov-model-based probability prediction is inefficient in terms of the amount of content and the delay of transmitting partial missing contents. Additionally, network components have limited capacity and bandwidth and can provide only limited content, so reducing backhaul traffic has become a new challenge. Thus, we extend the proposed proactive caching strategy through optimization: the mobility-aware distributed proactive caching protocol with traffic optimization (MDPC_TO), which minimizes the amount of traffic needed to achieve proactive caching in CCVNs. In proportion to the mobility probability of a vehicle toward each of the next RSUs, MDPC_TO controls the amount of pre-cached content at each of the next RSUs. MDPC_TO also uses constraints to provide a sufficient amount of content from the other next RSUs through backhaul links to remove the delay caused by prediction failures.
The rest of this paper is organized as follows. Section 2 introduces related work. Section 3 presents the details of MDPC and MDPC_TO. We evaluate our strategy and compare it with existing schemes in Section 4. Finally, Section 5 concludes the paper.

3. The Proposed Protocol

3.1. Network Model and Protocol Overview

This paper considers the content-centric networking model [2] for the proposed strategy. Figure 1 shows CCN-based V2I communication using RSUs for content delivery to vehicles. In this model, when a vehicle enters the communication coverage of an RSU, the vehicle connects to the RSU infrastructure. When the vehicle needs specific content, it transfers an interest packet for the content to the RSU. Vehicles communicate with RSUs using the IEEE 802.11p standard [39]. Each RSU is connected to a content server via a backhaul link and can receive content from it. In addition, RSUs are assumed to be connected to and communicate with each other over backhaul links. Each RSU can also find an RSU holding content via its FIB table [2]. When an RSU receives the content from the content server, the RSU transmits the content to the vehicle that requested it.
Figure 1. Overview of the proposed strategy on content-centric vehicular networks.
Recently, as content sizes on the Internet increase, a vehicle may not be able to receive the entire content from one RSU. The proposed strategy calculates the maximum amount of content a vehicle can receive from one RSU, divides the content at the chunk level, and caches some chunks of the content. Content chunks are distributed and cached according to transition probabilities derived with a Markov model [40]; this distributed caching reduces cache redundancy and backhaul traffic. When an RSU receives an interest packet from a vehicle, the RSU determines the transition probability matrix for the next RSUs with the Markov model. The RSU downloads from the content servers, over the backhaul link, as many content chunks as the vehicle can receive. The current RSU, which has transferred content to the vehicle, calculates the vehicle's transition probability matrix for distributed caching. The RSU calculates the content chunk numbers to be cached at the next RSUs in proportion to the transition probabilities and transmits this distribution information to the next RSUs. The distribution information includes the identifiers of the next RSUs and the content chunk numbers to be cached at each of them. After receiving it, the next RSUs fetch from the content server only the content chunks whose numbers are assigned to them. In short, a vehicle requests content from a content server via a local RSU. The local RSU relays the request message and the movement probability to the server. According to the probability, the server distributes the requested content among the expected next RSUs, because the vehicle will continuously move to one of them. When the vehicle reaches one of them, that RSU transfers its cached content and re-requests the distributed content from the other RSUs.

3.2. Markov Model-Based Transition Probability

There are various ways to predict the next location where a vehicle will arrive. For example, vehicle navigation GPS, which provides the path to a destination, can be used to predict the next location. However, drivers may follow a preferred road rather than the pre-determined path from the navigation system, and when driving on a familiar road, most drivers do not enter destination information at all. Therefore, vehicle navigation cannot reliably give the transition probability of where a vehicle will arrive. The Markov model instead uses historical information about the vehicle to predict the next arrival location from the current location; deriving transition probabilities based on the Markov model is a well-known method to predict mobility [40]. With the transition probability, this paper aims to cache more content at RSUs with higher transition probability and less content at RSUs with lower transition probability, reducing caching redundancy and backhaul traffic. Let $R_i$ be the RSU with which the vehicle communicates at its current location, and let $R_{(i,j)}$ be an RSU with which the vehicle may communicate next. We define the number of candidate next RSUs as $n$. Therefore, the set $R_i^j$ can be presented as follows:
$R_i^j = \bigcup_{j=1}^{n} R_{(i,j)} = \{R_{(i,1)}, R_{(i,2)}, \ldots, R_{(i,n)}\}. \quad (1)$
Additionally, the probability of the vehicle moving from $R_i$ to $R_{(i,j)}$ is defined as $p_{ij}$. Let $N_i$ be the number of times the vehicle has passed through the communication range of the current RSU $R_i$, and let $N(R_i, R_{(i,j)})$ be the number of times the vehicle, after passing through the communication range of $R_i$, passed through the communication range of $R_{(i,j)}$. The movement probability $p_{ij}$ is defined as follows:
$p_{ij} = \Pr(R_{(i,j)} \mid R_i) = \frac{N(R_i, R_{(i,j)})}{N_i}. \quad (2)$
Therefore, the probabilities $p_{ij}$ satisfy
$\sum_{j=1}^{n} p_{ij} = 1. \quad (3)$
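The count-based estimate of Equations (2) and (3) can be sketched in a few lines of Python; the RSU identifiers and visit counts below are illustrative values, not taken from the paper:

```python
# Sketch of Eq. (2): p_ij = N(R_i, R_(i,j)) / N_i, estimated from
# historical visit counts recorded at the current RSU R_i.

def transition_probs(next_counts):
    """next_counts maps each candidate next RSU R_(i,j) to the number of
    times the vehicle moved there after leaving R_i."""
    n_i = sum(next_counts.values())  # N_i: total departures observed at R_i
    return {rsu: n / n_i for rsu, n in next_counts.items()}

# Example: out of 20 recorded departures from R_1, the vehicle went
# 17 times to R_(1,1), twice to R_(1,2), and once to R_(1,3).
p = transition_probs({"R11": 17, "R12": 2, "R13": 1})
print(p)  # {'R11': 0.85, 'R12': 0.1, 'R13': 0.05}
```

The resulting probabilities sum to 1, as Equation (3) requires.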

3.3. The Amount of Caching Contents Based on the Probability

Currently, to cache content for moving vehicles in vehicular networks, existing studies consider either caching at the RSU with the highest transition probability or caching at all RSUs with nonzero transition probability. However, such caching methods cause caching redundancy, with the same content held at many RSUs, as well as increased backhaul traffic for delivering content to many RSUs concurrently. In this paper, we partially cache content chunks based on the transition probability to reduce both the caching burden on RSUs and the backhaul traffic. The existing studies focus only on which node caches the content and assume that the selected node caches whole chunks of the specific content; so, if the prediction fails, an additional cost is required for recovery. For instance, let us assume that RSUs $R_A$, $R_B$, and $R_C$ are the next candidate RSUs of a vehicle. $R_A$ is selected by the current RSU, and all chunks of the content are transferred to $R_A$. However, if the vehicle moves to $R_B$ or $R_C$, the content cached at $R_A$ is useless, and the visited RSU must request the content from the content server, which requires additional delay and signaling messages. Moreover, the prediction requires information gathered over a long time; if the historical location information for vehicles is insufficient, every next RSU has a similar transition probability.

3.4. Mobility Aware Distributed Proactive Caching (MDPC)

This subsection explains the details of our proposed strategy; we describe the process for MDPC and for MDPC with entropy. MDPC calculates the amount of content to be proactively cached at the RSU in each direction based on the transition probability. The vehicle's movement probability is calculated with the Markov model from the vehicle's movement record. The Markov model is a cumulative recording method: it records the number of times a vehicle has turned in a particular direction from a road relative to the number of times it has entered that road. Thus, the longer the learning time, the more accurate the probability becomes.

3.4.1. Caching Strategy

This subsection presents the proposed chunk-level caching strategy based on the transition probability matrix, as shown in Figure 2. Table 1 describes the content caching parameters. A roadside unit that has received an interest packet calculates, from the vehicle's speed and the communication range, the number of content chunks that the vehicle can receive while within the RSU's communication range. We can calculate the maximum number of chunks $C_{pre}$ as follows:
$C_{pre} = \frac{R_{RSU}}{V_{vehicle}} \times B_{V2I}, \quad (4)$
where $R_{RSU}$ is the communication range of the roadside unit, $V_{vehicle}$ is the velocity of the vehicle (assumed constant), and $B_{V2I}$ is the bandwidth of the V2I communication. After the roadside unit $R_i$ calculates $C_i$, it downloads content chunks 1 to $C_i$ from the content server linked by the backhaul network. The RSU sends content chunks 1 to $C_i$ to the vehicle and calculates the vehicle's transition probability through Equation (2). The number of chunks $C_{pre(i,j)}$ cached by $R_{(i,j)}$ is calculated as follows:
$C_{pre(i,j)} = C_{pre} \times p_{ij}, \quad (5)$
where $C_{pre} = \sum_{j=1}^{J} C_{pre(i,j)}$.
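As a minimal Python sketch, Equations (4) and (5) can be computed as follows. We assume the vehicle traverses the full RSU coverage (a $2 \times R_{RSU}$ diameter), which reproduces the 240-chunk worked example later in this section, and we round each per-RSU share to whole chunks; all parameter values are the paper's example values:

```python
# Sketch of Eqs. (4) and (5). Assumption: the vehicle crosses the full
# coverage diameter (2 * R_RSU); chunk size is 1 Mb.

def max_chunks(r_rsu_m, v_kmh, b_v2i_mbps, chunk_mb=1.0):
    # dwell time (s) = 2*R / (v / 3.6); C_pre = dwell * B_V2I / chunk size
    return int(2 * r_rsu_m * 3.6 * b_v2i_mbps / (v_kmh * chunk_mb))

def chunks_per_next_rsu(c_pre, probs):
    # C_pre(i,j) = C_pre * p_ij, rounded to whole chunks
    return {rsu: round(c_pre * p) for rsu, p in probs.items()}

c_pre = max_chunks(r_rsu_m=100, v_kmh=60, b_v2i_mbps=20)
print(c_pre)  # 240
print(chunks_per_next_rsu(c_pre, {"R11": 0.85, "R12": 0.10, "R13": 0.05}))
# {'R11': 204, 'R12': 24, 'R13': 12}
```

With the example parameters (100 m range, 60 km/h, 20 Mbps), the 240-chunk budget splits into 204, 24, and 12 chunks, matching the example in the text.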
Figure 2. Transition probability example for each roadside unit.
Table 1. Content caching parameters.
$R_i$ determines the scope of content chunks for each next RSU according to the transition probability. To determine the scope, $R_i$ first calculates, with Equation (4), the maximum number of chunks it can relay. In addition, we use distribution points and $C_{r(i,j)}$ to calculate the content size cached at the RSUs following the requested $R_{(i,j)}$. $C_{r(i,j)}$ is the content size to be transferred from the adjacent RSUs and is calculated as follows:
$C_{r(i,j)} = \frac{B_{V2I} \times B_{Backhaul}}{B_{V2I} + B_{Backhaul}} \times \left( \frac{C_{pre}}{B_{Backhaul}} - \frac{C_{pre(i,j)}}{B_{V2I}} \right). \quad (6)$
The distributed point is calculated as follows:
$C_{i+1} = C_{pre(i,i+1)} + C_{r(i,i+1)} + C_i, \quad (j = i + 1). \quad (7)$
Therefore, the next $R_{(i,j)}$ caches the content starting from chunk $C_i + 1$. The proposed strategy assigns chunk numbers sequentially for simple number management. For the assignment, we must consider both the chunks not relayed by the previous RSU and the chunks to be cached by the new RSUs. $R_i$ assigns content chunk numbers to be cached at each $R_{(i,j)}$ sequentially, starting from the $R_{(i,j)}$ with the highest transition probability. $R_i$ sends a message containing the RSU ID of each $R_{(i,j)}$ and its content chunk numbers. After $R_{(i,j)}$ receives the message from $R_i$, it looks up the chunk numbers matching its RSU ID and downloads those chunks from the content server. Since the received message contains the information for all next RSUs, each $R_{(i,j)}$ also knows the chunk numbers to be cached by its neighbors. For example, in Figure 2 and Table 2, the values of $p_{ij}$ are 0.85, 0.10, and 0.05, respectively. We assume that the vehicle speed is 60 km/h, the communication range is 100 m, the V2I communication bandwidth is 20 Mbps, the content size is 1000 Mb, and the content chunk size is 1 Mb. By the probabilities $p_{ij}$, we assign 204, 24, and 12 chunks to $R_{(1,1)}$, $R_{(1,2)}$, and $R_{(1,3)}$. If the current RSU completes relaying chunks #1 to #240, the next RSUs continue relaying up to 240 further chunks starting from #241. In descending order of probability, $R_{(1,1)}$, $R_{(1,2)}$, and $R_{(1,3)}$ are assigned chunks #241∼#444, #445∼#468, and #469∼#480, respectively.
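The sequential chunk-number assignment just described can be sketched as follows; this is a minimal sketch, and the function name and data layout are our own, not from the paper:

```python
# Chunk-range assignment (Section 3.4.1): ranges start after the last
# chunk the current RSU relays and are handed out in descending order
# of transition probability (i.e., descending allocation size).

def assign_chunk_ranges(alloc, last_relayed):
    """alloc: {rsu_id: number_of_chunks}; returns {rsu_id: (first, last)}."""
    ranges, start = {}, last_relayed + 1
    for rsu, n in sorted(alloc.items(), key=lambda kv: -kv[1]):
        ranges[rsu] = (start, start + n - 1)
        start += n
    return ranges

print(assign_chunk_ranges({"R11": 204, "R12": 24, "R13": 12}, last_relayed=240))
# {'R11': (241, 444), 'R12': (445, 468), 'R13': (469, 480)}
```

The output reproduces the worked example: #241∼#444, #445∼#468, and #469∼#480.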
Figure 3. Correlation between entropy and prediction accuracy.
Table 2. The amount of the content caching according to the probability.

3.4.2. MDPC with Entropy Value (MDPC_E)

Entropy is a measure of uncertainty introduced by Claude Shannon [41]. For a discrete probability distribution $p = \{p_1, p_2, \ldots, p_n\}$ with $p_i \geq 0$ and $\sum_{i=1}^{n} p_i = 1$, the entropy $H$ is calculated as
$H = -\sum_{i=1}^{n} p_i \log_2 p_i. \quad (8)$
When the entropy value exceeds 0.5, we regard the prediction as inaccurate. To support our use of the entropy value as a measure of inaccuracy, we simulated the correlation between prediction accuracy and entropy. The accuracy of the probabilities was calculated by counting a success if the vehicle moved to the highest-probability RSU and a failure otherwise. Figure 3 shows the accuracy measured against the vehicle's transition probabilities as the entropy value varies from 0.1 to 1.6. As a result, higher entropy means lower prediction accuracy. In addition, when the prediction accuracy is greater than 90%, the entropy does not exceed 0.5.
With the given example $p = \{0.85, 0.10, 0.05\}$, the entropy is $H = -0.85 \log_2 0.85 - 0.10 \log_2 0.10 - 0.05 \log_2 0.05 \approx 0.748$. If the probability values are judged inaccurate, caching is performed with a weight toward the most visited RSU. As shown in Table 3, when the entropy of $p_{ij}$ exceeds 0.5, $R_i$ adds the smallest probability among the $p_{ij}$ to the $p_{ij}$ of the most visited RSU when calculating $C_{(i,j)}$. After that, $R_i$ determines the content chunk numbers of each $R_{(i,j)}$. The transition probabilities in this example yield an entropy exceeding 0.5; since $R_{(1,1)}$ is the most visited RSU, the weight is recalculated to perform more accurate caching. After recalculation, the entropy becomes $-0.9 \log_2 0.9 - 0.1 \log_2 0.1 \approx 0.469 < 0.5$.
Table 3. Entropy adjusting.
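The entropy computation of Equation (8) and the MDPC_E adjustment can be sketched as below. We generalize the single merge of Table 3 into a loop that repeats until the entropy falls to the 0.5 threshold; the paper's example needs only one merge, and the function names are our own:

```python
import math

# Shannon entropy (Eq. 8) and the MDPC_E adjustment: while H > 0.5, the
# smallest probability is merged into the most-visited RSU's probability.

def entropy(probs):
    return -sum(p * math.log2(p) for p in probs.values() if p > 0)

def adjust(probs, threshold=0.5):
    probs = dict(probs)  # work on a copy
    while len(probs) > 1 and entropy(probs) > threshold:
        smallest = min(probs, key=probs.get)
        largest = max(probs, key=probs.get)
        probs[largest] += probs.pop(smallest)
    return probs

p = {"R11": 0.85, "R12": 0.10, "R13": 0.05}
print(round(entropy(p), 3))  # 0.748
adj = adjust(p)
print({k: round(v, 2) for k, v in adj.items()})  # {'R11': 0.9, 'R12': 0.1}
print(round(entropy(adj), 3))  # 0.469
```

This reproduces the worked values in the text: 0.748 before adjustment and 0.469 after merging the 0.05 share into $R_{(1,1)}$.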

3.4.3. Content Delivery

This subsection describes the procedure for content delivery between RSUs and a vehicle. The vehicle $V_r$, connected to an RSU $R_i$, moves to one RSU $R_{(i,j)}$ in the set $R_i^j$. After the next RSU $R_{(i,j)}$ detects the arrival of the vehicle $V_r$, the RSU transfers the content chunks pre-cached at it. As the RSU holds the content distribution information, it can request the chunks it does not cache from the other RSUs in the set $R_i^j$. After the RSU relays the content chunks received from the other RSUs to the vehicle, the RSU checks whether the last chunk number transferred, $C_i^j$, equals the last chunk number of the requested content, $C_T$. If they are the same, the RSU terminates the communication for the content. Otherwise, the RSU $R_{(i,j)}$ selects its own next RSUs and calculates the transition probabilities and the chunk number assignment as described above.
In our strategy, as shown in Figure 4, since the first chunk is cached at the highest-probability RSU, we consider two cases: the vehicle moves either to the highest-probability RSU or to one of the other RSUs. In the former case, the RSU transfers the content chunks sequentially. However, if the vehicle moves to one of the other, lower-probability RSUs, a chunk reassembly problem arises: some chunks are omitted from the chunk sequence. To recover the omitted chunks, the RSU assigns the chunk numbers for the next RSUs including the omitted numbers.
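A minimal sketch of the recovery step follows: the RSU the vehicle actually reaches serves its own pre-cached range and pulls the remaining ranges from its sibling RSUs instead of the content server. The function and variable names here are hypothetical, not from the paper:

```python
# Recovery sketch (Section 3.4.3): given the distribution information from
# R_i, the arrival RSU knows exactly which chunk ranges its siblings hold.

def plan_recovery(arrived_rsu, distribution):
    """distribution: {rsu_id: (first_chunk, last_chunk)} as assigned by R_i.
    Returns (local_range, {sibling_rsu: its_range}) for the arrival RSU."""
    local = distribution[arrived_rsu]
    from_siblings = {r: rng for r, rng in distribution.items() if r != arrived_rsu}
    return local, from_siblings

dist = {"R11": (241, 444), "R12": (445, 468), "R13": (469, 480)}
local, pulls = plan_recovery("R12", dist)  # vehicle reached a low-probability RSU
print(local)  # (445, 468)
print(pulls)  # {'R11': (241, 444), 'R13': (469, 480)}
```

Here the vehicle arrived at the second-probability RSU, so chunks #241∼#444 and #469∼#480 are fetched from the siblings over the backhaul, avoiding a round trip to the content server.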
Figure 4. The proposed caching strategy for a vehicle and each RSU.

3.5. MDPC with Optimization (MDPC_TO)

MDPC with optimization is an extension that minimizes backhaul traffic through optimization using the transition probability at the intersection. Based on the transition probability of a vehicle, MDPC_TO optimizes the amount of pre-cached content in a way that minimizes backhaul traffic. The RSU the vehicle enters next fetches content from the other pre-caching RSUs while delivering its own optimized pre-cached content to the vehicle, providing the vehicle with the maximum amount of content.

3.5.1. Objective Function

Let $R_{(i,j)}$ be one of the next RSUs when a vehicle $v$ is staying at $R_i$. Using $p_{ij}$, the total backhaul traffic $U_v$ consumed as the vehicle moves in each direction can be calculated through Equation (9). The following program minimizes the total backhaul traffic under the constraints below.
$\min U_v = \sum_{j=1}^{J} p_{ij} \left( \sum_{k=1}^{J} 2 d_{ik} - d_{ij} \right) \quad (9)$
$\text{s.t.} \quad \frac{1}{r} d_{ij} - \sum_{k=1, k \neq j}^{J} \frac{1}{w} d_{ik} \geq 0, \quad \forall j \leq J, \quad (10)$
$d_{i(k+1)} - d_{ik} \leq 0, \quad k \geq 1, \quad (11)$
$\sum_{j=1}^{J} d_{ij} \leq \min\left(t_{dwell} \times w, \min_j(CAPA_j)\right), \quad (12)$
$d_{ij} \geq 0. \quad (13)$

3.5.2. Constraints

The first constraint, Equation (10), captures the requirement that the RSU $R_{(i,j)}$ currently communicating with the vehicle must obtain the pre-cached content from the other next RSUs over the backhaul while the vehicle is being served its pre-cached quantity $C_{pre(i,j)}$. In other words, $R_{(i,j)}$ must finish importing the remaining content before it finishes providing $C_{pre(i,j)}$, so that it can provide the remaining content immediately afterwards. Without this constraint, the vehicle might receive the pre-cached content from $R_{(i,j)}$ and then receive nothing while the remaining content is imported from the other RSUs, which leads to a delay.
Equation (11) states that an $R_{(i,j)}$ with a higher transition probability must pre-cache a larger amount; the $d_{ij}$ are ordered by decreasing transition probability. If a large quantity were pre-cached at an $R_{(i,j)}$ the vehicle is unlikely to enter, the other next RSUs would frequently have to fetch content from it; we impose this constraint because that would increase the backhaul traffic. As a result, the amount pre-cached at a sufficiently unlikely $R_{(i,j)}$ can be very small, and backhaul traffic may increase to import the remaining content when the vehicle does enter such an RSU. However, because those RSUs are entered with low probability, from the viewpoint of expected traffic volume, pre-caching a large amount of content where the movement probability is high lowers the backhaul traffic. Further, even if the vehicle moves to an $R_{(i,j)}$ with a sufficiently low transition probability, the constraint of Equation (10) ensures there is no trouble in providing the maximum amount of content to the vehicle.
The last two constraints, Equations (12) and (13), bound the allocations. Equation (12) limits the total pre-cached amount to the maximum content amount the vehicle can receive from an $R_{(i,j)}$ within its communication range at the vehicle's current speed, and to the caching capacity of each RSU. The total amount of pre-cached content need not exceed the amount the vehicle can actually receive: if more content is pre-cached than can be provided, the vehicle cannot receive all of the prepared content, and the traffic consumed to pre-cache the undeliverable portion is wasted. This addresses the problem that more backhaul traffic is consumed when the content requested by a vehicle is cached at all RSUs. Through the given optimization formula, the maximum amount of content can be provided to the vehicle while consuming the minimum backhaul traffic without waste. Finally, since minimizing the total traffic could otherwise make some pre-caching amounts negative, Equation (13) requires each amount to be non-negative.
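As an illustration only, the program of Equations (9)–(13) can be checked with a coarse grid search standing in for a real LP solver. The objective and constraint forms follow the reconstruction given above, the total pre-cached amount is fixed at the vehicle's receivable amount (so the vehicle gets the maximum content, as this subsection requires), and the rates and capacity are illustrative values:

```python
import itertools

# Toy check of the MDPC_TO programme (Eqs. 9-13), under our reconstruction:
# minimize U_v = sum_j p_ij * (sum_k 2*d_ik - d_ij) over allocations d.

def feasible(d, r, w, cap):
    total = sum(d)
    if total > cap or any(x < 0 for x in d):          # Eqs. (12), (13)
        return False
    if any(d[k + 1] > d[k] for k in range(len(d) - 1)):  # Eq. (11): descending
        return False
    # Eq. (10): while an RSU streams its own d_j over V2I (rate r), it must
    # be able to fetch the siblings' shares over the backhaul (rate w)
    return all(d[j] / r >= (total - d[j]) / w for j in range(len(d)))

def objective(d, p):
    total = sum(d)
    return sum(p[j] * (2 * total - d[j]) for j in range(len(p)))

p = [0.85, 0.10, 0.05]   # transition probabilities, descending
r, w = 20.0, 50.0        # V2I and backhaul rates (Mbps), illustrative
cap = 240                # receivable amount in chunks, illustrative
best = min(
    (d for d in itertools.product(range(0, cap + 1, 10), repeat=3)
     if sum(d) == cap and feasible(d, r, w, cap)),
    key=lambda d: objective(d, p),
)
print(best, round(objective(best, p), 1))  # (100, 70, 70) 384.5
```

Note how Equation (10) keeps even the low-probability allocations above a floor (here about 69 chunks each), so the objective cannot simply dump everything on the most likely RSU.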

4. Performance Evaluation

4.1. Simulation Model and Performance Evaluation Metrics

We compare MDPC with the existing caching schemes through simulation for performance evaluation. The simulation is implemented on the network simulator NS-3.2.7 [42]. In our scenario, there is one requester vehicle and four RSUs, one at each intersection, in a 1 km × 1 km network area. The content server is connected by a backhaul network with a bandwidth of 1 Mbps and a delay of 10 ms. The RSUs are connected to the backhaul network using point-to-point links with a data rate of 50 Mbps. The communication between the RSU and the vehicle uses IEEE 802.11p [39] with a data rate of 24 Mbps, and the V2I communication range is 100 m. The vehicle sends an interest packet once it enters the communication range of an RSU. The existing caching schemes compared are CacheMax [9,34] and CacheAll: CacheMax means that the highest-probability RSU caches the content, and CacheAll means that all adjacent RSUs cache the content. The reported values are averages over 100 simulation runs. In addition, to compare MDPC and MDPC_TO, we implemented a similar environment: the content server is connected by a backhaul network with a bandwidth of 1 Mbps and a delay of 10 ms, the RSUs are connected to the backhaul network using point-to-point links with a data rate of 1000 Mbps, the communication between the RSU and the vehicle uses IEEE 802.11p [39] with a data rate of 50 Mbps, and the V2I communication range is 200 m.
Figure 5 shows the delay time as a function of content size. The main purpose of content caching in content-centric vehicular networks is to reduce the delay for vehicles to receive content, so we evaluate performance in terms of delay time. The delay time is measured as the interval between the time when the content is requested and the time when the last content chunk is received; it does not include the travel time of the vehicle. We fix the hit rate on moving at 50%. The content size requested by the vehicle increases in steps of 100 Mb from 1000 Mb to 2000 Mb. As expected, in the case of CacheAll, since the content is cached at all RSUs on the route, the last content chunk can be received faster than with CacheMax and the proposed strategy. When CacheMax and the proposed strategy have the same hit rate on moving, CacheMax has a longer delay because it must receive a larger amount of content from the content server instead of from the neighbor RSUs.
Figure 5. The delay time impacted by content size.
Figure 6 shows the backhaul traffic of the proposed strategy and the two comparison schemes. A goal of ICN and proactive caching is to reduce content server traffic. Backhaul traffic here counts the traffic generated when transmitting content from the content server to the RSUs and the traffic generated when RSUs receive content from each other. Although CacheAll generates no inter-RSU traffic, since content is never fetched from neighbor RSUs, its traffic volume is the largest because the content server must send all of the content to all RSUs. In the case of CacheMax, when the content is received from the content server, the same amount of content as in our proposal is transmitted; however, when the prediction fails, the traffic volume increases because the entire amount of cached content must be retransmitted from the server to the corresponding RSU through the backhaul network. In our strategy, by contrast, the cached content is distributed over all neighbor RSUs in proportion to the movement transition probability, so even if a cache miss occurs, the amount of content transmitted through the backhaul is smaller than that of CacheMax.
Figure 6. Backhaul traffic according to the content size.
We vary the hit ratio from 0% to 50% in steps of 10%. In this simulation, the hit ratio is defined as the probability of moving to the highest-probability RSU, i.e., the ratio of successful predictions. Figure 7 presents the delay time according to the hit ratio. CacheAll has a constant result because it does not exploit the transition probability and caches content chunks at all RSUs. In the comparison between CacheMax and the proposed strategy, when the prediction in CacheMax fails, it requires additional delay for requesting and receiving the missing content from the content server. In the proposed strategy, on the other hand, the content is cached in proportion to the transition probability even when the hit rate is low, so even if a cache miss occurs, the recovery time is shorter than in the other schemes, because our strategy can get the missing content chunks from the adjacent RSUs based on the content distribution information.
Figure 7. Delay time according to the hit ratio.
We measure the backhaul traffic according to the hit ratio, as shown in Figure 8. In CacheAll, the traffic remains constant because the contents are cached at all RSUs. In contrast, because CacheMax requires the participation of many routers in the backhaul network to recover missing content, it increases the backhaul traffic.
Figure 8. Backhaul traffic according to the hit ratio.
Content must be cached efficiently because the storage space of the RSUs is limited. To measure content availability, we increase the number of requesting vehicles in the network: 63 vehicles send content request messages. When a vehicle sends a content request message, the RSU counts a hit if its remaining content storage can accommodate the requested content size carried in the message. Figure 9 shows the caching failures according to the content size. In CacheAll, the content storage is exhausted quickly because each request causes the content to be cached at all RSUs. In CacheMax, although content is cached only at the most probable RSUs along the vehicles' movement, the storage of those RSUs fills up immediately when many vehicles converge on them. In contrast, because the proposed strategy distributes the contents among the adjacent RSUs, it can support more vehicles and more contents.
Figure 9. Caching unavailability according to the content size.
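The hit criterion used in this experiment can be written as a simple admission check; the sketch below uses our own naming (`RSUStorage`, `try_cache`) and a hypothetical capacity, not the simulator's data structures.

```python
# Sketch: an RSU counts a hit when its remaining storage can hold the
# requested content size; otherwise the request is a caching failure.

class RSUStorage:
    def __init__(self, capacity_mb):
        self.capacity_mb = capacity_mb
        self.used_mb = 0

    def try_cache(self, request_size_mb):
        """Return True (hit) if the request fits in the remaining storage."""
        if self.used_mb + request_size_mb <= self.capacity_mb:
            self.used_mb += request_size_mb
            return True
        return False

rsu = RSUStorage(capacity_mb=4000)          # hypothetical capacity
results = [rsu.try_cache(1500) for _ in range(3)]
print(results)  # the third request no longer fits
```

Distributing chunks across neighbor RSUs, as MDPC does, lowers `request_size_mb` per RSU and therefore postpones the point at which this check starts failing.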
Figure 10 and Figure 11 show the delay time of MDPC and MDPC_E. With the content size fixed at 2500 Mb, we increase the entropy from 0.1 to 1.6 and measure the delay time and the backhaul traffic.
Figure 10. Delay time applying the entropy according to the value of the entropy (0.1–0.5).
Figure 11. Delay time applying the entropy according to the value of the entropy (0.1–1.6).
Figure 10 shows the delay time measured for entropy values from 0.1 to 0.5. When the entropy is 0.5 or less, the probability that a vehicle visits the RSU with the highest transition probability is high. MDPC_E therefore shifts the content that would be cached at the lowest-probability RSU to the highest-probability RSU with a weight value, thus reducing the delay time. However, when the entropy is 0.6 or more, as shown in Figure 11, the prediction accuracy is lowered and the probability of the vehicle visiting the highest-probability RSU is reduced. In that case, the delay time increases because the content cached at an adjacent RSU must be received via the backhaul link.
Figure 12 shows the backhaul traffic measured for entropy values from 0.1 to 0.5. When the entropy is 0.5 or less, the prediction accuracy is 90% or more, so the probability of a vehicle visiting the lowest-probability RSU is small. MDPC_E does not cache content at the lowest-probability RSU but instead caches it at the highest-probability RSU with a weight value. Therefore, when the vehicle visits the highest-probability RSU, the content size that the RSU must receive from adjacent RSUs via the backhaul link is smaller, thereby reducing the backhaul traffic.
Figure 12. Backhaul traffic applying the entropy according to the value of the entropy.
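The entropy of a transition-probability row (Formula (4) is the Shannon entropy) and the 0.5-threshold rule described above can be sketched as follows. The reallocation step is our reading of MDPC_E, with hypothetical RSU names and probabilities.

```python
import math

def entropy(probs):
    """Shannon entropy (base 2) of a transition-probability distribution."""
    return -sum(p * math.log2(p) for p in probs.values() if p > 0)

def reweight(probs, threshold=0.5):
    """MDPC_E-style rule (our sketch): when entropy <= threshold, skip the
    lowest-probability RSU and shift its share to the highest-probability one."""
    if entropy(probs) > threshold:
        return dict(probs)  # prediction too uncertain: keep proportional shares
    lo = min(probs, key=probs.get)
    hi = max(probs, key=probs.get)
    out = dict(probs)
    out[hi] += out.pop(lo)
    return out

peaked = {"RSU_A": 0.95, "RSU_B": 0.04, "RSU_C": 0.01}  # confident prediction
flat = {"RSU_A": 0.4, "RSU_B": 0.35, "RSU_C": 0.25}     # uncertain prediction
print(round(entropy(peaked), 2))  # low entropy, below the 0.5 threshold
print(reweight(peaked))           # RSU_C's share moved to RSU_A
print(reweight(flat))             # unchanged: entropy above the threshold
```

A peaked row has low entropy, so skipping the least likely RSU is safe; a flat row has high entropy, and dropping any RSU risks the backhaul penalty seen for entropy values above 0.6.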
Figure 13 shows the backhaul traffic measured for entropy values from 0.1 to 1.6. When the entropy is 0.6, the backhaul traffic of MDPC and MDPC_E is almost the same. However, when the entropy exceeds 0.6, the prediction accuracy is reduced, and the probability of the vehicle visiting the highest-probability RSU is reduced. Therefore, for entropy values above 0.6, the backhaul traffic of MDPC is higher than that of MDPC_E.
Figure 13. Backhaul traffic applying the entropy according to the value of the entropy (0.1–1.6).

4.2. Simulation Results of MDPC_TO

In this section, we compare the backhaul traffic and delay time of MDPC and MDPC_TO. The MDPC strategy caches content in proportion to the transition probability derived from the Markov mobility model of the vehicles. As the next step, the MDPC_TO strategy optimizes the content distribution derived from the Markov model from the perspective of backhaul traffic.
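MDPC_TO's traffic-side optimization can be illustrated with a toy placement problem. This is our simplification, not the paper's exact formulation: with a linear expected-backhaul objective and per-RSU capacity limits, the optimum is a greedy fill of the most probable RSUs first; all names and numbers are hypothetical.

```python
# Toy sketch of the MDPC_TO idea: place content so that the expected
# backhaul traffic, sum_i p_i * (S - x_i), is minimized subject to
# per-RSU capacities and sum_i x_i = S. With linear costs the optimum
# fills RSUs in decreasing order of transition probability.

def optimize_placement(content_size, transition_probs, capacity):
    remaining = content_size
    placement = {rsu: 0 for rsu in transition_probs}
    for rsu in sorted(transition_probs, key=transition_probs.get, reverse=True):
        take = min(remaining, capacity[rsu])  # fill up to this RSU's capacity
        placement[rsu] = take
        remaining -= take
    return placement

def expected_backhaul(content_size, transition_probs, placement):
    """Expected amount the visited RSU must fetch over the backhaul."""
    return sum(p * (content_size - placement[rsu])
               for rsu, p in transition_probs.items())

probs = {"RSU_A": 0.5, "RSU_B": 0.3, "RSU_C": 0.2}      # hypothetical
caps = {"RSU_A": 1500, "RSU_B": 1500, "RSU_C": 1500}    # Mb, hypothetical
plan = optimize_placement(2500, probs, caps)
print(plan)
print(expected_backhaul(2500, probs, plan))
```

Under these numbers the greedy plan yields a lower expected backhaul than the purely proportional MDPC split (1250/750/500 Mb), which is the gap Figures 14 and 15 quantify.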
Figure 14 compares the delay time of MDPC and MDPC_TO. Delay occurs when an RSU transmits the cached content to the requesting vehicle, or when content is requested from an adjacent RSU and transmitted back from it. MDPC_TO has a lower delay time than MDPC because its content placement is optimized to minimize the delay for the cached contents.
Figure 14. Delay time of optimization according to the content size.
When a vehicle sends a request to an RSU, the RSU downloads the content from the content server. When content is requested from an adjacent RSU, it is delivered over a backhaul link, and backhaul traffic occurs in this case. Figure 15 compares the backhaul traffic of MDPC and MDPC_TO. MDPC_TO minimizes the content size requested from the neighboring RSUs from the perspective of backhaul traffic, resulting in lower backhaul traffic than MDPC.
Figure 15. Backhaul traffic of optimization according to the content size.
Figure 16 compares the backhaul traffic of MDPC and MDPC_TO according to the entropy value. The simulation uses the random value function of NS-3 to generate the transition probabilities, and the content size is fixed at 2500 Mb. The entropy value, computed with Formula (4), is increased in steps of 0.1 from a minimum of 0.1 to a maximum of 1.6. Because MDPC_TO does not include the entropy value in its objective function, its backhaul traffic is higher than that of MDPC when the entropy is low. However, the backhaul traffic of MDPC_TO and MDPC gradually becomes similar around an entropy of 0.7, which corresponds to about 80% prediction accuracy, and the difference grows again up to 1.6. In MDPC_TO, the content size to be cached at each RSU is fixed; when the pre-cached content size is computed by the MDPC strategy, the entropy value is about 0.8. Therefore, the difference increases in the region of 0.8 or more.
Figure 16. Backhaul traffic of optimization according to the entropy.

5. Conclusions

This paper proposes a distributed pre-caching strategy (MDPC) in content-centric vehicular networks to support more vehicles and contents. To reduce the redundancy of contents in the network, we exploit the transition probability to distribute contents among the next roadside units (RSUs). To decrease the delay time, the RSUs share the distribution information of the content, so they can request missing content chunks from other adjacent RSUs rather than from the original content server. Through simulation, we evaluated the performance in terms of delay time, backhaul traffic, and content availability. Furthermore, comparing MDPC and MDPC_E when the entropy is 0.5 or less showed that it is more efficient not to cache content at the RSU with the lowest probability. We then optimized MDPC (MDPC_TO) to reduce backhaul traffic. The overall performance improved, but since the optimization constraint did not include the transition probability, the backhaul traffic was higher than that of MDPC when the entropy value was low. To solve this problem, the transition probability will be added to the constraint condition in future work.

Author Contributions

S.O., S.P., Y.S. and E.L. contributed to the protocol design and detailed the algorithm. Additionally, they conducted a performance analysis. All authors have read and agreed to the published version of the manuscript.

Funding

These results were supported by “Regional Innovation Strategy (RIS)” through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (MOE) (2021RIS-004). This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. NRF-2020R1C1C1010692).

Institutional Review Board Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
