Article

An Active Path-Associated Cache Scheme for Mobile Scenes

1
National Network New Media Engineering Research Center, Institute of Acoustics, Chinese Academy of Sciences, Beijing 100190, China
2
School of Electronic, Electrical and Communication Engineering, University of Chinese Academy of Sciences, Beijing 100049, China
*
Author to whom correspondence should be addressed.
Future Internet 2022, 14(2), 33; https://doi.org/10.3390/fi14020033
Submission received: 13 December 2021 / Revised: 15 January 2022 / Accepted: 18 January 2022 / Published: 19 January 2022
(This article belongs to the Section Internet of Things)

Abstract

With the widespread growth of mass content, information-centric networking (ICN) has become one of the research hotspots for future network architectures. One of ICN's important features is ubiquitous in-network caching. In recent years, the explosive growth of mobile devices has made content dynamic, which poses a new challenge to the original ICN caching mechanisms. This paper focuses on the WiFi mobile scenario in ICN. We design a new path-associated active caching scheme that shortens the delay users experience when obtaining content, thereby enhancing the user experience. Based on the WiFi scenario, we first propose a neighbor access point selection method from a theoretical perspective, considering the caching cost and the transition probability. Cache content is then forwarded based on the selected neighbor set. For cached content, we define content freshness according to mobility characteristics and also consider content popularity. For cache nodes, we consider the size of the remaining cache space and the number of hops from the cache to the user. We implement this strategy based on the caching value along the forwarding path. Simulation results show that our caching strategy significantly improves performance compared with active caching and other caching strategies.

1. Introduction

With the rapid growth of Internet information, user demand has shifted from communication between hosts to access to content in the network. The traditional host-centered network can no longer meet users' needs because, in modern network communication, users care more about the content itself than about the location of the data. To meet these needs, network researchers began improving on IP networks and proposed P2P [1], CDN [2], and other networks. However, these networks cannot fundamentally satisfy the content-centered demand. As a result, information-centric network architectures have received wide attention and study from academia.
With in-depth research in recent years, a variety of typical ICN architectures have been proposed, such as DONA [3], CCN [4], NetInf [5], and PSIRP [6]. To relieve the huge pressure that rapidly growing traffic places on network bandwidth, these architectures exploit content naming and generally provide built-in caching to accelerate content distribution, improve network utilization, and reduce server load [7]. However, with the advent of the mobile Internet era, global mobile data traffic has grown exponentially, and ICN networks must also accommodate access by large numbers of mobile users. As users move, content popularity in different periods changes significantly, invalidating previously cached copies and thus introducing the problem of dynamic cache content. Therefore, in mobile scenarios, designing a new content caching scheme is of great significance for ensuring reliable transmission and reducing network delay.
At present, academia generally focuses on user-side mobility, that is, scenarios in which the user side moves while the remote end remains stationary. Research on caching methods for user mobility can be roughly divided into the following four types [8]:
  • Reactive: the old path node continues to cache the content requested by the user until the user accesses the new connection point, and the new connection point requests the previously cached content.
  • Persistent subscription: cache the items subscribed by mobile users through an intermediate agent.
  • Active: the content is cached to the new access point in advance, and the content is directly obtained when the user connects to the new access point.
  • Heuristic: dynamically adjust the cache policy to improve the hit rate of the local cache.
A survey of the relevant literature shows that the first method introduces a large delay, while the second causes unnecessary waste of space. Combining the advantages of the third (proactive) and fourth (heuristic) types, this paper proposes a cooperative caching mechanism based on neighbor message propagation for the user-side mobile WiFi scenario. For the selection of the cached destination access point (AP), the mechanism selects a set of destination APs according to the source AP's transition probability table and a configured transition threshold once the terminal device moves. In addition, after the terminal device accesses the destination AP, the destination AP replies with a message that updates the source AP's transition probability table. For the selection of cache nodes, unlike other proactive caching strategies that send content directly to the destination AP, we dynamically pull content in advance onto the forwarding path from the source AP to the destination AP through a cooperative caching algorithm based on content freshness and popularity, reducing network delay and improving user experience. The main contributions of this article are as follows:
  • We propose a caching mechanism for neighbor message propagation based on user behavior. This mechanism selects the target AP set according to the transition probability table and the transition threshold of the source AP and triggers active caching when the user moves.
  • Different from other active caching methods that push content directly to the target AP, our idea is to make full use of the forwarding path from the source AP to the destination AP and cache content with high freshness and popularity at high-value nodes. This approach maintains the cache hit rate while making efficient use of limited device storage and reducing network delay.
  • We use real network topologies to evaluate the performance of the proposed cooperative cache placement scheme. Experiments comparing it against the active caching strategy and other caching schemes show that our scheme performs best across different network topologies.
The rest of this article is organized as follows. Section 2 reviews related research on ICN mobility mechanisms and caching schemes. Section 3 introduces our proposed caching scheme, including the neighbor message propagation mechanism and the cooperative cache placement strategy. Section 4 presents simulation experiments and analyzes the results and performance of cooperative cache placement. Finally, we conclude and outline future research.

2. Related Work

2.1. Mobility of the ICN Network

Mobility is a very important research issue in traditional IP networks. To support mobility in IP networks, researchers have formulated various additional protocols [9,10]. Rosenberg et al. proposed the Session Initiation Protocol (SIP) [9], an application-layer control protocol for instant messaging that natively supports terminal mobility. Each user has an associated SIP resolver; when a handover occurs, the terminal device re-registers its SIP-URL and IP address with the resolver, and content retrieval can continue after successful registration. Perkins et al. proposed the Mobile IP (MIP) protocol, a network-layer solution that supports host movement by allowing the terminal to retain its original IP address when it moves to another access point [10]. In MIP, each terminal is identified by a home address, and data are routed through a care-of address. When a handover occurs, the terminal registers the new address with the home agent to bind its forwarding address; during routing, the terminal's current location is found through this binding to complete packet forwarding. At their core, both SIP and MIP use a mapping mechanism to shield the network from the effects of movement. However, SIP only announces the terminal's latest address after the handover, so data reliability cannot be guaranteed, while MIP uses a tunneling mechanism in which every data request must be relayed through the home agent, creating an inefficient triangular routing problem.
These complex protocols patch over the problem while avoiding the fundamental issue of network architecture design. Therefore, information-centric networks with native mobility support have been widely proposed. In DONA [3], a mobile user sends a new search message from its current location; when the RH infrastructure receives the message, it provides the closest copy of the information. Mobile publishers can deregister information and then re-register it when their network location changes. To ensure that no stale registration state remains, messages must be forwarded all the way to the tier-1 RH, which incurs considerable overhead, although advertised information can then be traced to its new location. In NDN [11], when the user moves, it resends interest messages from the current location for objects whose data have not yet been received. These messages are suppressed by the PIT (Pending Interest Table) at the first common CR (content router) of the two transmission routes (before and after the handover), and the old location then receives the corresponding information objects. In addition, when a content publisher moves, the FIBs pointing to it must be updated, which requires re-publishing the name prefixes it carries through the routing protocol; in high-mobility scenarios, this brings very high overhead. NDN uses the LFBL (Listen First, Broadcast Later) protocol to achieve mobility in opportunistic networks. In LFBL, when a node holding the requested information receives an interest, it listens to the wireless channel to check whether another node has already sent a matching message; if no matching message is heard, it sends the data message to the subscriber.
PURSUIT considers various mobility situations [12]. Local user mobility is supported by multicast and caching: information objects are multicast to multiple possible locations of mobile users, and local users receive content from nearby caches after a handover. In addition, PURSUIT handles global user mobility by modifying the architecture's forwarding function. Mobility prediction can reduce handover delay by caching the information requested by the user in the area where the user is expected to move after the handover [8]. Publisher mobility is more difficult, since the topology management function must be notified of the publisher's new location in the network. MobilityFirst is designed to address the mobility of hosts, information, and entire networks [13]. Host mobility is mainly handled by the Global Name Resolution Service (GNRS): when a connected object changes its attachment point, the GNRS must be updated, with no need for routing-level indirection such as that implemented in Mobile IP. Network mobility can be supported at a lower level, and another distributed protocol (similar to BGP) propagates routing updates. Although BGP can be used for inter-domain routing, networks disconnected due to mobility and variable link conditions are supported by local storage optimization (e.g., as in delay-tolerant networks [14]), and a storage-aware routing mechanism can be adopted at the intra-domain level.

2.2. Mobility Caching in ICN Networks

Although many researchers have designed new ICN architectures that naturally support mobility, when a mobile consumer moves to a new location before receiving the requested content, the delay in receiving the data increases. Reducing this delay is very important for delay-sensitive applications such as streaming media services, online games, and conference calls [8]. The authors of [15] classify the methods for realizing mobility into the following types: reactive methods [16,17,18,19], durable subscriptions [20], and active (or prefetching) methods [21,22]. Among them, compared with obtaining data from the original content source, the active caching strategy can significantly reduce the content delivery delay when a subscriber changes location [23].
Siris et al. present a neighbor caching approach for supporting user mobility in publish/subscribe networks [24]. In their approach, a mobile's subscriptions are communicated to a subset of brokers that are neighbors of the broker the mobile is currently associated with. Their key contribution is the definition of a target cost function and a smart strategy for selecting the subset of neighbors. The advantage of their proposal is that it reduces buffering costs, since not all neighbor brokers cache items, while still obtaining the gains of proactive caching by caching items at the subset of neighbor brokers with which the mobile has a high probability of associating. Vassilakos et al. propose a selective neighbor caching (SNC) method to enhance mobility in ICN architectures [15]. The method actively caches information requests and the corresponding items at a subset of proxies adjacent to the proxy currently connected to the mobile device. An important contribution of their paper is the definition of a target cost function that captures the trade-off between latency and cache cost and is used to select the appropriate subset of neighbors for a user's mobility behavior. They investigate the steady-state and transient performance of the proposed scheme and, compared with active caching at all neighbor proxies and with no caching, identify and quantify its benefits. In addition, their study shows how these benefits are affected by latency, cache costs, and mobility behavior.
In addition, Siris et al. extended the work of [15]. They proposed a distributed active caching method that uses the user's mobility information to decide where to actively cache data and a congestion pricing scheme to manage cache storage [23]. The proposed method handles objects of different sizes and a two-level cache hierarchy; in both cases, the active caching problem is very difficult. Their evaluation results show the influence of various system parameters on the delay gain of the proposed method, which achieves robust, good performance compared with an oracle-based optimal scheme and a flat cache structure. Moreover, Jiang et al. propose a caching and prefetching mechanism for mobile environments [25]. Their idea is to predict disconnections and prefetch content that users are likely to access in the near future, so deciding when to trigger the caching and prefetching module is an important part of the design. Their paper mainly studies the design and implementation of the disconnection prediction algorithm, which predicts the time before the terminal disconnects from the Access Point (AP) by monitoring the Received Signal Strength Indicator (RSSI). They evaluate and compare the mechanism through experiments.
In recent years, research on mobile caching has also focused on designing new caching strategies for mobile scenarios. Li et al. propose a caching strategy based on user mobility and content popularity, using a Markov model to describe user mobility and a multiple linear regression model to predict content popularity [26]. They then propose a content placement strategy based on marginal gain: considering content access delay and placement cost, they formulate the content placement problem and solve it with a marginal-gain-based placement algorithm that achieves optimal placement by analyzing the gains contributed by edge servers. Experimental results show that the strategy outperforms the comparison methods on all performance indicators. Mobility caching also plays a large role in the Internet of Vehicles. Yu et al. propose a mobility-aware proactive edge caching strategy based on federated learning (MPCF) [27]. This new strategy allows multiple vehicles to jointly learn a model for predicting content popularity using private training data kept on the local vehicles, employs an adversarial autoencoder for the prediction, and integrates a mobility-aware cache replacement policy that lets the network edge update content according to vehicles' movement patterns. Experimental results show that in vehicular edge networks, MPCF outperforms the comparison schemes in cache hit rate and significantly reduces communication cost. These works all design new caching strategies for specific mobile scenarios, which is similar to the research background of this paper.

3. The Proposed Approach

The network background of the method proposed in this paper is the MobilityFirst network architecture. MobilityFirst is a future network architecture funded by the National Science Foundation [28]. It aims to directly solve the challenges of large-scale wireless access and mobility. Its main design concepts include the separation of names and addresses, globally unique identifiers of content objects, global name resolution services to bind names and addresses, and so on. These characteristics make MobilityFirst naturally support mobility. Our proactive caching strategy is a scheme that relies on ICN caching to enhance user experience in mobile scenarios.
In the WiFi mobile scenario, user movement brings new challenges to ICN's original caching strategies, so this section discusses the design of a new caching strategy for this scenario. Our overall caching architecture first selects the target AP set for mobile user migration (in the following, we refer to the target AP set as the neighbor AP set) and then selects appropriate cache nodes on the path from the source AP to the target AP. A simplified network scenario is shown in Figure 1. When no movement occurs, the user and the content server communicate and cache content along the red path in Figure 1. Our active caching strategy is triggered when the user moves, and the path cooperative caching strategy we design caches valuable content along the blue path. In this section, we first introduce how to select the target AP from a theoretical perspective, i.e., the neighbor AP selection mechanism in the WiFi mobile scenario. In the next section, building on this mechanism, we further propose a cooperative caching mechanism on the path.

3.1. Neighbor Selection Mechanism

The distribution of APs is usually arranged according to signal coverage requirements (RSSI), which does not take actual user mobility behavior into account. However, the movement behavior generated by a large number of users exhibits different transition probabilities from the source AP to each neighboring AP. Figure 2 illustrates this: the transition probabilities from the source AP to its neighboring APs differ considerably. Traditional IP networks also have research on neighbor-based caching (such as PNC [29] and SNC [15,30]), but its main object of study is the contextual information of mobile devices, and its focus is on data buffering, which is essentially different from this paper's focus on caching mobile ICN content. We first introduce how the transition probability is obtained.

3.1.1. Acquisition of Transition Probability

Figure 2 shows the neighbor graph with transition probabilities. How can we obtain transition probabilities cheaply? Inspired by recommendation systems [31], we regard this as a data-driven problem. In our network, we implement a neighbor table on each AP that stores the following: (1) the BSSID of the neighbor AP (the BSSID is the AP's MAC address); (2) the total number of times users have moved from the source AP to that destination AP; (3) the transition probability. These correspond to the second, third, and fourth columns of Table 1. As Table 1 shows, an AP usually has 3–4 neighbors [32,33], so the storage cost is extremely low.
Network initialization resembles the cold-start problem in recommendation systems [34,35]. Since our scenario has very few features, unlike high-dimensional recommendation systems that can train deep learning models [36], we can only adjust the transition probability matrix from the perspective of user behavior and practical experience. When the network is first deployed, the network administrator can set an initial value for the probability transition matrix (usually based on the network's historical information); if no historical information exists, random or uniform initialization can be used. Suppose an AP has three neighbors; relying on previous historical data, we set the initial probability vector of $AP_i$ toward $AP_j$, $AP_k$, and $AP_q$ ($j, k, q \in NS$, where NS is the set of N neighbors) to [0.80, 0.17, 0.03]. Each time the User Equipment (UE) completes a movement and accesses a new AP, the new AP replies to the source AP with an ACK carrying its BSSID (similar to the X2 protocol of 5G base stations [37]). After the source AP receives the ACK, it increments the corresponding transfer count by 1. Considering large-scale mobile scenarios, the AP does not update the transition probability in real time but performs a centralized update using a time-slice algorithm [38]. The transition probability is calculated as follows:
$$P_{ij} = \frac{Count_{ij}}{\sum_{k \in \mathrm{neighbors}} Count_{ik}} \qquad (1)$$
$Count_{ij}$ denotes the number of transfers from $AP_i$ to $AP_j$, and the denominator of Expression (1) is the sum of the transfers from the source AP to all of its neighbor APs. After a time-slice update, a stable transition probability $P_{ij}$ is obtained. Based on Formula (1), when the UE moves, the AP selects for caching the neighbor APs that satisfy the following condition:

$$P_{ij} \geq \delta \qquad (2)$$

where $\delta$ is a predefined threshold. In the next section, we design a method to calculate $\delta$.
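As a concrete illustration, the neighbor table together with Formulas (1) and (2) can be sketched in a few lines of Python; the class and field names here are our own illustrative choices, not the paper's implementation.

```python
from collections import defaultdict

class NeighborTable:
    """Per-AP neighbor table mapping a neighbor's BSSID to its transfer count
    (an illustrative sketch; all names are assumptions, not the paper's code)."""

    def __init__(self, initial_counts=None):
        # The administrator may seed counts from historical data (cold start).
        self.counts = defaultdict(int, initial_counts or {})

    def record_handover(self, bssid):
        # Called when the destination AP's ACK (carrying its BSSID) arrives.
        self.counts[bssid] += 1

    def transition_probabilities(self):
        # Formula (1): P_ij = Count_ij / sum over all neighbors of Count_ik.
        total = sum(self.counts.values())
        return {b: c / total for b, c in self.counts.items()} if total else {}

    def neighbor_set(self, delta):
        # Formula (2): keep only neighbors with P_ij >= delta.
        return {b for b, p in self.transition_probabilities().items() if p >= delta}
```

With the initial counts [80, 17, 3] from the example above and δ = 0.1, `neighbor_set` returns the two most likely neighbors; in a deployment, `record_handover` would run on each ACK, and the probabilities would be recomputed once per time slice.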

3.1.2. Threshold of Transition Probability

After obtaining the transition probability by the method in Section 3.1.1, we need to select for caching the neighbors $AP_j$ with $P_{ij} \geq \delta$. How to choose $\delta$ is the subject of this section. The main factors that affect cache revenue are the caching cost ($Cost_{cache}$), the request delay when the cache device hits the content ($Delay_{hit}$), the request delay when the cache device misses the content ($Delay_{nothit}$), and $\delta$. Following economic principles [39], we obtain the following cost formula:
$$Cost = nums(NS) \cdot Cost_{cache} + P_{hit}(NS) \cdot Delay_{hit} + (1 - P_{hit}(NS)) \cdot Delay_{nothit} \qquad (3)$$
where NS is the set of selected neighbor nodes of the source AP, $nums(NS)$ is the number of neighbor APs, and $P_{hit}(NS) = \sum_{j \in NS} P_{ij}$. $P_{hit}(NS)$, $Delay_{hit}$, and $Delay_{nothit}$ are measured in the real network environment. Below, we derive the transition probability threshold from the defined cost function.
From the process in Section 3.1.1, we know that the transition probability is updated over time, so determining the threshold is the key. Next, we use an inductive argument [40] to solve Formula (3). Assume the source AP has already selected $nums(NS)$ neighbors. Adding $AP_j$, which meets the transition threshold, to NS gives:
$$newCost = (nums(NS) + 1) \cdot Cost_{cache} + P_{hit}(NS \cup \{j\}) \cdot Delay_{hit} + (1 - P_{hit}(NS \cup \{j\})) \cdot Delay_{nothit} \qquad (4)$$
Adding $AP_j$ is worthwhile if and only if $newCost$ is less than $Cost$. Since $P_{hit}(NS \cup \{j\}) = P_{hit}(NS) + P_{ij}$, subtracting (3) from (4) shows that the transition probability $P_{ij}$ must satisfy the following:
$$P_{ij} \geq \delta > \frac{Cost_{cache}^{\,j} / Delay_{hit}^{\,j}}{Delay_{nothit}^{\,j} / Delay_{hit}^{\,j} - 1} \qquad (5)$$
The numerator in Equation (5) represents the cost of caching, and the denominator represents the benefit of caching. Thus, the neighbor set NS of (2) can be found through (5). The calculation of $Delay_{hit}$ and $Delay_{nothit}$ has been described above; next, we give the calculation method of $Cost_{cache}$.
Cache cost is usually related to the price of cache capacity and cache hit rate. On the one hand, the goal of our neighbor selection algorithm is to reuse the cache, which will increase the cache hit rate. Thus, when the cache utilization is higher, the cost of the cache will be higher. In addition, the higher the cache cost, the higher the cost of cache capacity. Based on the properties described above, we define the relationship of the three as follows:
$$Cost_{cache}^{\,j} = Cost_{capacity}^{\,j} \cdot \left(1 - \frac{b}{\delta}\right) \qquad (6)$$
Here, $Cost_{capacity}^{\,j}$ represents the cost of storage capacity, which is usually related to the price of the hardware, and $b$ is a regularization term adjusted according to the actual scenario, with $b < \delta$. According to Formulas (5) and (6), the choice of the transition probability threshold affects the cost: when the threshold is low, most neighbor APs cache the content, which increases the cost. Moreover, AP storage is usually more expensive than that of routing devices, and in large-scale mobile scenarios, APs must frequently invoke the cache replacement strategy to cache the latest content because of their limited cache space. This causes a great waste of resources and overhead, which is why we propose a strategy that utilizes the caching capability of the forwarding path.
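The cost model above can be sketched as follows, under the assumption that the hit/miss delays and the capacity price have been measured; all numeric inputs in the usage example are hypothetical.

```python
def neighbor_threshold(cost_cache, delay_hit, delay_nothit):
    """Right-hand side of Formula (5): adding a neighbor AP pays off only if
    its transition probability exceeds this bound
    (assumes delay_nothit > delay_hit)."""
    return (cost_cache / delay_hit) / (delay_nothit / delay_hit - 1)

def cache_cost(cost_capacity, b, delta):
    """Formula (6): per-neighbor caching cost, with regularization term b < delta."""
    assert b < delta, "the regularization term must stay below the threshold"
    return cost_capacity * (1 - b / delta)

def total_cost(ns_probs, cost_cache, delay_hit, delay_nothit):
    """Formula (3): caching cost plus expected request delay over the neighbor
    set NS; ns_probs maps each selected neighbor to its P_ij."""
    p_hit = sum(ns_probs.values())
    return (len(ns_probs) * cost_cache
            + p_hit * delay_hit
            + (1 - p_hit) * delay_nothit)
```

For example, with a hit delay of 10 ms, a miss delay of 50 ms, and a per-neighbor caching cost of 8, the threshold works out to 0.2, so only neighbors with $P_{ij} \geq 0.2$ would receive pushed content.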

3.2. Neighbor Selection Mechanism over ICN

The NSM can be implemented as a software module running in the AP without modifying the underlying ICN architecture. The implementation includes the following aspects: (1) estimating $Cost_{cache}^{\,j}/Delay_{hit}^{\,j}$, $Delay_{nothit}^{\,j}/Delay_{hit}^{\,j}$, and the transition probability threshold of Equation (5); (2) having the source AP initiate active caching and push hot content to the neighbor AP set; (3) updating the transfer counts in the AP neighbor table.
Regarding (1), $Cost_{cache}^{\,j}/Delay_{hit}^{\,j}$, $Delay_{nothit}^{\,j}/Delay_{hit}^{\,j}$, and the transition probability can be calculated by the AP itself. Assuming the calculation based on Table 1 yields the condition $P_{ij}$ ≥ 0.1, only $AP_j$ and $AP_k$ will have content pushed to them. Regarding (2), Figure 3 shows the steps of mobile handover in the native mobility support of the MobilityFirst architecture. In Figure 3, NRS is the Name Resolution Service, a distributed system comprising many resolution nodes, and RNL is the Resolution Node List, a list containing information about some resolution nodes. Active caching starts when the mobile device disconnects from the source AP (step 7 in Figure 3). Transmitting the content required by mobile devices involves one-to-many distribution of the same information, which can be implemented in ICN using receiver-driven (pull-based) communication primitives; one-to-many transmission can also take advantage of ICN's multicast capabilities [15]. Regarding (3), in step 14 of Figure 3, the destination AP replies to the source AP with handover-success signaling. The AP's software module captures this signaling and updates the third column, "Number of moves from $AP_i$", in the AP neighbor table.
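The two signaling hooks in (2) and (3) can be sketched as simple event handlers; `push_fn` stands in for whatever receiver-driven or multicast ICN primitive actually delivers the content, and all names here are illustrative assumptions rather than the module's real API.

```python
def on_mobile_disconnect(neighbor_set, push_fn, hot_content):
    """Step 7 in Figure 3: when the UE leaves the source AP, push hot content
    toward every neighbor AP already selected via Formula (2)."""
    for bssid in neighbor_set:
        push_fn(bssid, hot_content)

def on_handover_ack(neighbor_counts, bssid):
    """Step 14 in Figure 3: the destination AP's success signaling carries its
    BSSID; the source AP increments that neighbor's transfer count."""
    neighbor_counts[bssid] = neighbor_counts.get(bssid, 0) + 1
```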

4. Cooperative Caching Mechanism

4.1. ICN Architecture Supporting IP Network

The existing IP network occupies a very important position in the Internet. Although many excellent researchers have designed new information-centric network architectures, completely abandoning the original IP architecture is undesirable [41]. Therefore, the caching strategy in this paper is built on an ICN architecture compatible with IP networks. IP compatibility is achieved by extending the existing network layer and transport layer to support ICN functions. In this architecture, the name resolution system (NRS) is an important network component, usually used for name resolution and content routing. Content publishers, cache nodes, and mobile terminals use the NRS to register the mapping between IP addresses and content names, and ICN routers handle IP-based routing and caching decisions.

4.2. Caching Mechanism Based on Node Cache Value

The edge caching algorithm only considers storing the content to the edge node as much as possible. However, this approach ignores the storage capacity of other nodes on the path, so it cannot maximize the cache revenue. Man et al. proposed measuring the state value of each node on the path for cache decision [42]. Inspired by this mechanism, we apply its idea to our scenario and design a caching mechanism based on content value and node importance. The general process of proactive caching is shown in Figure 4. The following sections describe our caching mechanism in detail.

4.2.1. Node Cache Value

The purpose of edge caching is to place high-value content on the most important nodes. How do we judge the value of content? Many researchers have designed cache placement algorithms based on content popularity [43,44,45], but in mobile scenarios, users move frequently, which triggers active caching to frequently pull content into the cache area. In traditional caching scenarios, the content that most needs to be cached is the content with high popularity over a period of time; in our scenario, user movement makes content popularity dynamic. We therefore propose a new evaluation metric called content freshness, which assesses the importance of content over time. We add the current timestamp to each actively cached data packet; the freshness of a packet equals its timestamp minus the oldest timestamp in the freshness list of the cache node, and freshness is compared independently within each node's content list. In addition, we maintain the popularity of the least popular content in the cache node's content list; the Pop of a content item equals its popularity divided by the popularity of the least popular content.
Similarly, how do we judge the importance of a node? The delay experienced by a user's request is related to the number of routing hops from the user to the cache node. In addition, the remaining cache space of a node determines whether content should be stored there, since frequent content replacement increases network delay. Therefore, the importance of a node is related to its remaining cache space and the number of hops between the user and the node.
Based on the above factors, we propose calculating the $Cache_{value}$ of a node as follows:
$$Cache_{value} = \omega_1 N(Hop) + \omega_2 N(Freshness_{value}) + \omega_3 N(Pop) + \omega_4 N(Cache_{size}) \quad (7)$$
The N in Formula (7) denotes standardization: the parameter values are usually not on the same order of magnitude, so to balance this deviation we standardize each value with the min-max method [46]. Each metric is defined so that a larger value indicates a node more worth caching at; in particular, larger remaining cache space makes a node more attractive. Accordingly, when calculating Hop, we count the hops from the cache node to the content source, which in our scenario can be regarded as the source AP.
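The normalization and weighted sum can be sketched as follows. The equal weights and the metric bounds are placeholders of our own choosing, since the paper does not fix them at this point:

```python
def min_max(x, lo, hi):
    # Min-max standardization to [0, 1]; guard against a degenerate range.
    return 0.0 if hi == lo else (x - lo) / (hi - lo)

def cache_value(hop, freshness, pop, cache_size, bounds, w=(0.25, 0.25, 0.25, 0.25)):
    """Weighted cache value in the spirit of Formula (7).

    `bounds` maps each metric name to its observed (min, max);
    the weights w are illustrative, not values from the paper.
    """
    feats = {"hop": hop, "freshness": freshness, "pop": pop, "cs": cache_size}
    normalized = [min_max(feats[k], *bounds[k]) for k in ("hop", "freshness", "pop", "cs")]
    return sum(wi * ni for wi, ni in zip(w, normalized))

# Example bounds gathered over one marking pass (hypothetical numbers).
bounds = {"hop": (1, 10), "freshness": (0, 300), "pop": (1, 5), "cs": (0, 100)}
print(cache_value(3, 150, 2, 40, bounds))  # ~0.343
```

With min-max standardization, every term contributes on the same [0, 1] scale before weighting, which is exactly the deviation the N operator is meant to remove.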

4.2.2. The Process of Cache Decision

Algorithm 1 describes how our caching strategy marks cacheable nodes on the path. Based on the neighbor selection mechanism of Section 3.1, once the source AP determines the set of neighbor APs, it sends a lightweight detection packet containing the hop count, the current timestamp, the content id, and the content popularity. The detection packet carries the current maximum cache value on the path, and each cache node decides whether to mark itself based on that value. Figure 5 shows the complete marking process for a detection packet sent from the source AP to one neighbor AP. In this process, we use the has_cache() method to filter out AP devices, which do not cache content. As can be seen in Figure 5, through the interplay of the detection packet and node information, our policy marks node1 and node3 as worth caching.
Algorithm 1 Mark Cache Node.
Input: Probe packet (Prp)
Output: Operation statement
1: Prp.hop = Prp.hop + 1
2: temppop = Prp.pop / node.pop
3: tempfreshness = Prp.freshness − node.freshness
4: Normalize Prp.hop, temppop, node.cs, tempfreshness
5: Cache_value = Calculate(Prp.hop, temppop, node.cs, tempfreshness)
6: if has_cache(node) then
7:     if Cache_value > Prp.maxvalue then
8:         Prp.maxvalue = Cache_value
9:         node.is_cache = True
10:    else
11:        node.is_cache = False
12:    end if
13: end if
14: Forward(Prp)
15: return SUCCESS

4.2.3. The Process of Cache Confirmation

After the marking of Algorithm 1, Algorithm 2 describes the decision process that determines the cache node. When the neighbor AP receives the detection packet, it returns a lightweight confirmation packet. The confirmation packet carries a cache field that indicates whether a cache node has already been selected, and it ultimately returns the IP address of the first cacheable node on the path. After the source AP receives the confirmation packet, it actively caches the data on that routing node. The above process is shown in Figure 6: node3 is finally selected as the cache node for the user's current move.
Algorithm 2 Confirm Cache Node.
Input: Confirmation packet (Cop)
Output: Operation statement
1: if Cop.find_cache == False then
2:     if node.is_cache == True then
3:         Cop.ipaddress = node.ipaddress
4:         Cop.find_cache = True
5:     end if
6: end if
7: Forward(Cop)
8: return SUCCESS

5. Simulation Results and Analysis

In this section, we conduct simulation experiments on the selection of path cache nodes after the active cache is triggered. First the experimental setup is described, then the simulation results are given. We compare our method with classical path caching strategies, including:
  • EDGE: edge cache policy. In this strategy, content is cached only at the edge node [47].
  • LCE: Leave Copy Everywhere. In this policy, a copy of the content is stored at every cache node on the path [11].
  • LCD: Leave Copy Down. In this policy, content is cached only at the node one hop below the serving node, in the direction of the user [48].
  • CL4M: Cache Less for More. This policy is triggered only once on the cache path, and the content is cached at the node with the greatest betweenness centrality [49].
  • ProbCache: content is cached probabilistically along the path; the probability depends mainly on the distance between source and target and the available cache space on the path [50].
  • Random: a node on the cache path is selected at random for caching [47].

5.1. Setting of Experimental Environment

We use Icarus as the simulation software [47]. Icarus is an ICN simulator written in Python, licensed under the terms of the GNU GPLv2. Its object-oriented encapsulation lets users easily extend it with new modules and functions, and it can efficiently simulate a large number of requests while supporting the implementation of custom caching methods. To fully evaluate the performance of our method, we conducted extensive experiments on several real network topologies, including WIDE (the Japanese academic network), GARR (the Italian academic network), and GEANT (the European academic network). It is worth mentioning that APs are modeled as the nodes with degree = 1 in the topology.
To fully test the effect of our strategy, we selected the cache hit ratio, link load, and latency as evaluation indicators. To ensure statistical confidence, each experiment is run for five rounds and the results are averaged. To simulate a real scenario, we set the total request volume of mobile users to 6 × 10^5 and the number of contents in the network to 3 × 10^5. Before each run, we issue 10^6 warm-up requests; a higher warm-up value helps the subsequent measurements stabilize. In addition, we assume user arrivals follow a Poisson process, with 10 users accessing new APs per second. Like other conventional ICN cache research, we assume the content requested by users follows a Zipf popularity distribution, so we vary the Zipf skewness (α) over [0.6, 0.7, 0.8, 0.9, 1.0, 1.1, 1.2]. The cache space size is also a key performance factor, and lower cache capacity usually means higher cache contention, so we set the ratio of cache space to total content size to [0.002, 0.004, 0.01, 0.02, 0.03, 0.04, 0.05]. For cache replacement, all strategies use LRU as the eviction policy by default. All configuration parameters are shown in Table 2.
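As a sketch of the workload assumption (the function names are ours; Icarus ships its own workload generators), requests with Zipf-distributed content popularity could be drawn as follows:

```python
import random

def zipf_pmf(n, alpha):
    # Zipf popularity: p(i) proportional to 1 / i^alpha over ranks 1..n.
    weights = [1.0 / (i ** alpha) for i in range(1, n + 1)]
    total = sum(weights)
    return [w / total for w in weights]

def sample_requests(n_contents, alpha, n_requests, seed=0):
    """Illustrative request generator matching the experiment's assumption
    that request popularity follows a Zipf distribution with skewness alpha."""
    rng = random.Random(seed)
    pmf = zipf_pmf(n_contents, alpha)
    ranks = list(range(1, n_contents + 1))
    return rng.choices(ranks, weights=pmf, k=n_requests)

requests = sample_requests(n_contents=1000, alpha=0.8, n_requests=10000)
```

A larger α concentrates requests on the top-ranked contents, which is why the experiments below sweep α from 0.6 to 1.2 to vary how cacheable the workload is.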

5.2. Cache Hit Ratio

The node cache hit rate refers to the ratio of requests served by a node's cache to the total number of requests arriving at that node. In this article, we calculate the overall average cache hit rate, which is usually regarded as one of the most important metrics for a caching strategy.
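As a minimal illustration of this metric (the names are ours, not Icarus's API):

```python
def cache_hit_ratio(events):
    """Fraction of requests served from a cache.

    `events` marks each request as a hit (True) or miss (False);
    purely illustrative of the metric's definition.
    """
    return sum(events) / len(events) if events else 0.0

print(cache_hit_ratio([True, False, True, True]))  # 0.75
```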
In this article, we conducted extensive experiments on three different network topologies to verify the effectiveness of our strategy. In the first three sets of experiments in Figure 7, we set α = 0.6 and varied the cache space ratio according to the settings in Table 2. From Figure 7a–c, we can see that the cache hit rate of all strategies rises with cache space. Because the cache space ratios are low, the overall hit rates stay at a low level; however, our strategy needs only about 0.04 of the cache space to outperform all other solutions, saving almost 20% of the cache space.
Moreover, as shown in Figure 7d–f, as the skewness of content requests increases, the cache hit rate of all methods roughly doubles. Across all tested values, our strategy maintains the best performance without significant attenuation.
Across all of the above parameter settings, our strategy achieves the best performance. It is worth noting the poor performance in Figure 7 of EDGE, an edge cache that considers only the network topology yet serves as the reference design for many edge caching solutions in mobile scenarios. As the cache space increases, the EDGE hit rate grows very slowly, and our strategy (labeled MINE) reaches, with a very low cache space ratio, the hit rate that EDGE only achieves at a 0.05 ratio. This is because, based on content popularity and freshness, the highest-value content is kept in the edge area as much as possible.

5.3. Latency

Latency is the average time for requested content to reach the user. This indicator is one of the key measures of caching performance, since delay directly affects the user experience.
We adjust α from 0.6 to 1.2 according to the settings in Table 2. From Figure 8d–f, it can be seen that the delay of all schemes drops significantly. For larger α, the content users need is more concentrated, and our strategy adapts to this behavior by caching such content at the network edge, greatly reducing transmission delay. The experiments show that our strategy is significantly better than the other schemes.
It can be seen from Figure 8a–c that as the cache space increases, the request delay decreases more and more slowly; even a slight reduction in delay has a large impact on the user experience. Our strategy maintains the best performance across the three network topologies. Compared with EDGE, a typical active caching scheme, MINE shows considerable superiority, and from the trend of EDGE's delay curve we have reason to believe that even with further increases in cache space, EDGE's performance would remain hard to improve. MINE also achieves significant improvements over the best-performing baseline schemes.

5.4. Link Load

Load balance is a key problem in computer network research, and researchers attach great importance to link load. In this paper, we use the average link load to measure the traffic cost of these caching strategies.
As shown in Figure 9d–f, as α increases, the average link load begins to decrease. As noted in the previous analysis, with larger α, popular content is more likely to be cached and requested, which sharply reduces the load; all caching schemes show a faster load drop. Our solution still achieves the best load performance.
In the experiments with increasing cache space, both LCE and CL4M show that the average network load does not decrease as space grows, indicating that these two strategies carry more cache redundancy. The best-performing LCD and our strategy both show a significant load reduction as space increases, which confirms that pulling content to the edge while making full use of the other nodes on the cache path is very effective. It is also worth mentioning that the cache efficiency of MINE is significantly higher than that of LCD: as Figure 9a–c shows, our strategy saves 60% of the cache space while matching LCD's performance, a substantial optimization.

6. Conclusions and Future Work

In this article, we propose a new active caching mechanism for WiFi mobile scenarios. It first selects the set of target APs based on the historical transition probability and the caching cost. After the user moves, content is pulled and cached on the forwarding path between the source AP and the destination AP. For forwarding-path caching, we designed a strategy based on cache value: from the content perspective we consider freshness and popularity, and from the node perspective we consider the remaining cache space and the number of hops from the cache to the user. The core idea is to place high-value content on nodes of high importance. To this end, we designed a lightweight detection packet and a reply packet to probe the state of the path nodes when the user moves and to select the final cache node. The simulation results show that under three real network topologies, our scheme is significantly better than active caching and other classic cache placement strategies in terms of cache hit rate, user access latency, and network link load.
For the neighbor AP selection part of our active caching strategy, we currently propose a theoretical scheme and provide qualitative analysis. In future work, we will conduct experiments on target AP selection in real WiFi network scenarios and compare our approach with mobility prediction solutions.

Author Contributions

Conceptualization, T.Z., P.S. and R.H.; methodology, T.Z., P.S. and R.H.; software, T.Z.; writing—original draft preparation, T.Z.; writing—review and editing, T.Z., P.S. and R.H.; supervision, P.S.; project administration, R.H.; funding acquisition, P.S. All authors have read and agreed to the published version of the manuscript.

Funding

This work was funded by Strategic Leadership Project of Chinese Academy of Sciences: SEANET Technology Standardization Research System Development (Project No. XDC02070100).

Data Availability Statement

Not applicable, the study does not report any data.

Acknowledgments

We would like to express our gratitude to Peng Sun, Rui Han, Yuanhang Li, and Li Zeng for their meaningful support for this work.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Balakrishnan, H.; Kaashoek, M.F.; Karger, D.; Morris, R.; Stoica, I. Looking up data in P2P systems. Commun. ACM 2003, 46, 43–48. [Google Scholar] [CrossRef] [Green Version]
  2. Peng, G. CDN: Content distribution network. arXiv 2004, arXiv:cs/0411069. [Google Scholar]
  3. Koponen, T.; Chawla, M.; Chun, B.G.; Ermolinskiy, A.; Kim, K.H.; Shenker, S.; Stoica, I. A data-oriented (and beyond) network architecture. In Proceedings of the 2007 Conference on Applications, Technologies, Architectures, and Protocols for Computer Communications, Kyoto, Japan, 27–31 August 2007; pp. 181–192. [Google Scholar]
  4. Jacobson, V.; Smetters, D.K.; Thornton, J.D.; Plass, M.F.; Briggs, N.H.; Braynard, R.L. Networking named content. In Proceedings of the 5th International Conference on Emerging Networking Experiments and Technologies, Rome Italy, 1–4 December 2009; pp. 1–12. [Google Scholar]
  5. Dannewitz, C.; Kutscher, D.; Ohlman, B.; Farrell, S.; Ahlgren, B.; Karl, H. Network of information (netinf)—An information-centric networking architecture. Comput. Commun. 2013, 36, 721–735. [Google Scholar] [CrossRef]
  6. Ain, M.; Trossen, D.; Nikander, P.; Tarkoma, S.; Visala, K.; Rimey, K.; Burbridge, T.; Rajahalme, J.; Tuononen, J.; Jokela, P.; et al. Deliverable D2.3—Architecture Definition, Component Descriptions, and Requirements. PSIRP 7th FP EU-Funded Project. 2009. Volume 11. Available online: http://www.psirp.org/files/Deliverables/FP7-INFSO-ICT-216173-PSIRP-D2.3_ArchitectureDefinition.pdf (accessed on 12 December 2021).
  7. Ahlgren, B.; Dannewitz, C.; Imbrenda, C.; Kutscher, D.; Ohlman, B. A survey of information-centric networking. IEEE Commun. Mag. 2012, 50, 26–36. [Google Scholar] [CrossRef]
  8. Fang, C.; Yao, H.; Wang, Z.; Wu, W.; Jin, X.; Yu, F.R. A survey of mobile information-centric networking: Research issues and challenges. IEEE Commun. Surv. Tutor. 2018, 20, 2353–2371. [Google Scholar] [CrossRef]
  9. Rosenberg, J.; Schulzrinne, H.; Camarillo, G.; Johnston, A.; Peterson, J.; Sparks, R.; Handley, M.; Schooler, E. SIP: Session Initiation Protocol; Association for Computing Machinery: New York, NY, USA, 2002. [Google Scholar]
  10. Perkins, C.E. Mobile ip. IEEE Commun. Mag. 1997, 35, 84–99. [Google Scholar] [CrossRef]
  11. Zhang, L.; Estrin, D.; Burke, J.; Jacobson, V.; Thornton, J.D.; Smetters, D.K.; Zhang, B.; Tsudik, G.; Massey, D.; Papadopoulos, C.; et al. Named Data Networking (NDN) Project; Relatório Técnico NDN-0001; Xerox Palo Alto Research Center-PARC: Palo Alto, CA, USA, 2010. [Google Scholar]
  12. Fotiou, N.; Nikander, P.; Trossen, D.; Polyzos, G.C. Developing information networking further: From PSIRP to PURSUIT. In Proceedings of the International Conference on Broadband Communications, Networks and Systems, Athens, Greece, 25–27 October 2010; pp. 1–13. [Google Scholar]
  13. Seskar, I.; Nagaraja, K.; Nelson, S.; Raychaudhuri, D. Mobilityfirst future internet architecture project. In Proceedings of the IEEE 7th Asian Internet Engineering Conference, Bangkok, Thailand, 9–11 November 2011; pp. 1–3. [Google Scholar]
  14. Kent, S.; Lynn, C.; Seo, K. Secure border gateway protocol (S-BGP). IEEE J. Sel. Areas Commun. 2000, 18, 582–592. [Google Scholar] [CrossRef]
  15. Vasilakos, X.; Siris, V.A.; Polyzos, G.C.; Pomonis, M. Proactive selective neighbor caching for enhancing mobility support in information-centric networks. In Proceedings of the IEEE Second Edition of the ICN Workshop on Information-Centric Networking, Helsinki, Finland, 17 August 2012; pp. 61–66. [Google Scholar]
  16. Caporuscio, M.; Carzaniga, A.; Wolf, A.L. Design and evaluation of a support service for mobile, wireless publish/subscribe applications. IEEE Trans. Softw. Eng. 2003, 29, 1059–1071. [Google Scholar] [CrossRef]
  17. Fiege, L.; Gartner, F.C.; Kasten, O.; Zeidler, A. Supporting mobility in content-based publish/subscribe middleware. In Proceedings of the IEEE ACM/IFIP/USENIX International Conference on Distributed Systems Platforms and Open Distributed Processing, Beijing, China, 9–13 December 2003; pp. 103–122. [Google Scholar]
  18. Sourlas, V.; Paschos, G.S.; Flegkas, P.; Tassiulas, L. Mobility support through caching in content-based publish/subscribe networks. In Proceedings of the IEEE 2010 10th IEEE/ACM International Conference on Cluster, Cloud and Grid Computing, Melbourne, Australia, 17–20 May 2010; pp. 715–720. [Google Scholar]
  19. Wang, J.; Cao, J.; Li, J.; Wu, J. MHH: A novel protocol for mobility management in publish/subscribe systems. In Proceedings of the IEEE 2007 International Conference on Parallel Processing (ICPP 2007), Xi’an, China, 10–14 September 2007; p. 54. [Google Scholar]
  20. Farooq, U.; Parsons, E.W.; Majumdar, S. Performance of publish/subscribe middleware in mobile wireless networks. ACM SIGSOFT Softw. Eng. Notes 2004, 29, 278–289. [Google Scholar] [CrossRef]
  21. Burcea, I.; Jacobsen, H.A.; De Lara, E.; Muthusamy, V.; Petrovic, M. Disconnected operation in publish/subscribe middleware. In Proceedings of the IEEE International Conference on Mobile Data Management, Berkeley, CA, USA, 19–22 January 2004; pp. 39–50. [Google Scholar]
  22. Gaddah, A.; Kunz, T. Extending mobility to publish/subscribe systems using a pro-active caching approach. Mob. Inf. Syst. 2010, 6, 293–324. [Google Scholar] [CrossRef]
  23. Siris, V.A.; Vasilakos, X.; Polyzos, G.C. Efficient proactive caching for supporting seamless mobility. In Proceedings of the IEEE International Symposium on a World of Wireless, Mobile and Multimedia Networks 2014, Sydney, Australia, 19 June 2014; pp. 1–6. [Google Scholar]
  24. Siris, V.A.; Vasilakos, X.; Polyzos, G.C. A Selective Neighbor Caching Approach for Supporting Mobility in Publish/Subscribe Networks. In Proceedings of the Fifth ERCIM Workshop on Emobility, Catalonia, Spain, 14 June 2011; p. 63. [Google Scholar]
  25. Jiang, P.; Jin, Y.; Yang, T.; Geurts, J.; Liu, Y.; Point, J.C. Handoff prediction for data caching in mobile content centric network. In Proceedings of the IEEE 2013 15th IEEE International Conference on Communication Technology, Guilin, China, 17–19 November 2013; pp. 691–696. [Google Scholar]
  26. Li, C.; Song, M.; Yu, C.; Luo, Y. Mobility and marginal gain based content caching and placement for cooperative edge-cloud computing. Inf. Sci. 2021, 548, 153–176. [Google Scholar] [CrossRef]
  27. Yu, Z.; Hu, J.; Min, G.; Zhao, Z.; Miao, W.; Hossain, M.S. Mobility-aware proactive edge caching for connected vehicles using federated learning. IEEE Trans. Intell. Transp. Syst. 2020, 22, 5341–5351. [Google Scholar] [CrossRef]
  28. Raychaudhuri, D.; Nagaraja, K.; Venkataramani, A. Mobilityfirst: A robust and trustworthy mobility-centric architecture for the future internet. ACM SIGMOBILE Mob. Comput. Commun. Rev. 2012, 16, 2–13. [Google Scholar] [CrossRef]
  29. Mishra, A.; Shin, M.; Arbaugh, W. Context caching using neighbor graphs for fast handoffs in a wireless network. In Proceedings of the IEEE INFOCOM 2004, Hong Kong, China, 7–11 March 2004; Volume 1. [Google Scholar]
  30. Pack, S.; Jung, H.; Kwon, T.; Choi, Y. Snc: A selective neighbor caching scheme for fast handoff in ieee 802.11 wireless networks. ACM SIGMOBILE Mob. Comput. Commun. Rev. 2005, 9, 39–49. [Google Scholar] [CrossRef]
  31. Davidson, J.; Liebald, B.; Liu, J.; Nandy, P.; Van Vleet, T.; Gargi, U.; Gupta, S.; He, Y.; Lambert, M.; Livingston, B.; et al. The YouTube video recommendation system. In Proceedings of the Fourth ACM Conference on Recommender Systems, Barcelona, Spain, 26–30 September 2010; pp. 293–296. [Google Scholar]
  32. Balachandran, A.; Voelker, G.M.; Bahl, P.; Rangan, P.V. Characterizing user behavior and network performance in a public wireless LAN. In Proceedings of the 2002 ACM SIGMETRICS International Conference on Measurement and Modeling of Computer Systems, Marina Del Rey, CA, USA, 15–19 June 2002; pp. 195–205. [Google Scholar]
  33. Schwab, D.; Bunt, R. Characterising the use of a campus wireless network. In Proceedings of the IEEE INFOCOM 2004, Hong Kong, China, 7–11 March 2004; Volume 2, pp. 862–870. [Google Scholar]
  34. Lika, B.; Kolomvatsos, K.; Hadjiefthymiades, S. Facing the cold start problem in recommender systems. Expert Syst. Appl. 2014, 41, 2065–2073. [Google Scholar] [CrossRef]
  35. Schein, A.I.; Popescul, A.; Ungar, L.H.; Pennock, D.M. Methods and metrics for cold-start recommendations. In Proceedings of the 25th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, Tampere, Finland, 11–15 August 2002; pp. 253–260. [Google Scholar]
  36. Wei, J.; He, J.; Chen, K.; Zhou, Y.; Tang, Z. Collaborative filtering and deep learning based recommendation system for cold start items. Expert Syst. Appl. 2017, 69, 29–39. [Google Scholar] [CrossRef] [Green Version]
  37. Prados-Garzon, J.; Adamuz-Hinojosa, O.; Ameigeiras, P.; Ramos-Munoz, J.J.; Andres-Maldonado, P.; Lopez-Soler, J.M. Handover implementation in a 5G SDN-based mobile network architecture. In Proceedings of the 2016 IEEE 27th Annual International Symposium on Personal, Indoor, and Mobile Radio Communications (PIMRC), Valencia, Spain, 4–8 September 2016; pp. 1–6. [Google Scholar]
  38. Natale, D. Dynamic end-to-end guarantees in distributed real time systems. In Proceedings of the 1994 Proceedings Real-Time Systems Symposium, San Juan, PR, USA, 7–9 December 1994; pp. 216–227. [Google Scholar]
  39. Mankiw, N.G. Principles of Economics, 8th ed.; Cengage Learning: Boston, MA, USA, 2018; Available online: https://voltariano.files.wordpress.com/2020/03/mankiw_principles_of_economic.pdf (accessed on 12 December 2021).
  40. Bather, J.A. Mathematical Induction. 1994. Available online: https://sms.math.nus.edu.sg/smsmedley/Vol-16-1/Mathematical%20induction(John%20A%20Bather).pdf (accessed on 12 December 2021).
  41. Wang, J.; Chen, G.; You, J.; Sun, P. SEANet: Architecture and Technologies of an On-site, Elastic, Autonomous Network. J. Netw. New Media 2020, 9, 1–8. [Google Scholar]
  42. Man, D.; Wang, Y.; Wang, H.; Guo, J.; Lv, J.; Xuan, S.; Yang, W. Information-Centric Networking Cache Placement Method Based on Cache Node Status and Location. Wirel. Commun. Mob. Comput. 2021, 2021. [Google Scholar] [CrossRef]
  43. Li, S.; Xu, J.; Van Der Schaar, M.; Li, W. Popularity-driven content caching. In Proceedings of the IEEE INFOCOM 2016—The 35th Annual IEEE International Conference on Computer Communications, San Francisco, CA, USA, 10–14 April 2016; pp. 1–9. [Google Scholar]
  44. Suksomboon, K.; Tarnoi, S.; Ji, Y.; Koibuchi, M.; Fukuda, K.; Abe, S.; Motonori, N.; Aoki, M.; Urushidani, S.; Yamada, S. PopCache: Cache more or less based on content popularity for information-centric networking. In Proceedings of the 38th Annual IEEE Conference on Local Computer Networks, Sydney, Australia, 21–24 October 2013; pp. 236–243. [Google Scholar]
  45. Wang, W.; Sun, Y.; Guo, Y.; Kaafar, D.; Jin, J.; Li, J.; Li, Z. CRCache: Exploiting the correlation between content popularity and network topology information for ICN caching. In Proceedings of the 2014 IEEE International Conference on Communications (ICC), Sydney, Australia, 10–14 June 2014; pp. 3191–3196. [Google Scholar]
  46. Patro, S.; Sahu, K.K. Normalization: A preprocessing stage. arXiv 2015, arXiv:1503.06462. [Google Scholar] [CrossRef]
  47. Saino, L.; Psaras, I.; Pavlou, G. Icarus: A caching simulator for information centric networking (icn). In Proceedings of the SIMUTools 2014: 7th International ICST Conference on Simulation Tools and Techniques, Lisbon, Portugal, 17–19 March 2014; Volume 7, pp. 66–75. [Google Scholar]
  48. Laoutaris, N.; Che, H.; Stavrakakis, I. The LCD interconnection of LRU caches and its analysis. Perform. Eval. 2006, 63, 609–634. [Google Scholar] [CrossRef]
  49. Chai, W.K.; He, D.; Psaras, I.; Pavlou, G. Cache “less for more” in information-centric networks. In Proceedings of the International Conference on Research in Networking, Prague, Czech Republic, 21–25 May 2012; pp. 27–40. [Google Scholar]
  50. Psaras, I.; Chai, W.K.; Pavlou, G. Probabilistic in-network caching for information-centric networks. In Proceedings of the Second Edition of the ICN Workshop on Information-Centric Networking, Helsinki, Finland, 17 August 2012; pp. 55–60. [Google Scholar]
Figure 1. The network scenario.
Figure 2. $AP_i$ transition probability graph.
Figure 3. Mobile handover in MobilityFirst.
Figure 4. Proactive caching process.
Figure 5. Process of marking cache nodes.
Figure 6. The process of deciding nodes and caching content.
Figure 7. Performance of Cache Hit Ratio.
Figure 8. Performance of Latency.
Figure 9. Performance of Link Load.
Table 1. An example of $AP_i$'s neighbor transfer table.

| Neighbor AP | BSSID | Number of Moves from $AP_i$ | Transition Probability |
|---|---|---|---|
| $AP_j$ | a8:82:38:3f:40:4B | 12,000 | 0.80 |
| $AP_k$ | a8:82:38:3f:40:4E | 2500 | 0.17 |
| $AP_q$ | a8:82:38:3f:40:4A | 500 | 0.03 |
Table 2. Experiment parameters.

| Parameters | Value |
|---|---|
| Topology structure | GARR, WIDE, and GEANT |
| Replacement policy | LRU |
| Number of contents | 3 × 10^5 |
| Requests for system warm-up | 10^6 |
| Total mobile user requests | 6 × 10^6 |
| Number of mobile users per second | 10 |
| Cache size ratio | [0.002, 0.004, 0.01–0.05] |
| Skewness (α) | [0.6, 0.7, 0.8, 0.9, 1.0, 1.1, 1.2] |
| Experiment runs per scenario | 5 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

Zhou, T.; Sun, P.; Han, R. An Active Path-Associated Cache Scheme for Mobile Scenes. Future Internet 2022, 14, 33. https://doi.org/10.3390/fi14020033
