Article

RMBCC: A Replica Migration-Based Cooperative Caching Scheme for Information-Centric Networks

1 National Network New Media Engineering Research Center, Institute of Acoustics, Chinese Academy of Sciences, No. 21, North Fourth Ring Road, Haidian District, Beijing 100190, China
2 School of Electronic, Electrical and Communication Engineering, University of Chinese Academy of Sciences, No. 19(A), Yuquan Road, Shijingshan District, Beijing 100049, China
* Author to whom correspondence should be addressed.
Electronics 2024, 13(13), 2636; https://doi.org/10.3390/electronics13132636
Submission received: 12 June 2024 / Revised: 28 June 2024 / Accepted: 2 July 2024 / Published: 4 July 2024

Abstract

How to maximize the advantages of in-network caching under limited cache space has always been a key issue in information-centric networking (ICN). Replica placement strategies aim to fully utilize cache resources by optimizing the location and quantity distribution of replicas in the network, thereby improving the performance of the cache system. However, existing research primarily focuses on optimizing the placement of replicas along the content delivery path and thus inherently fails to leverage off-path cache resources. Proposals for off-path caching cannot effectively solve this problem, as they introduce excessive complexity and cooperation costs. In this paper, we address the trade-off between cache resource utilization and cooperation costs by introducing a mechanism complementary to replica placement. Instead of redesigning a new caching strategy from scratch, we propose a proactive cooperative caching mechanism, called RMBCC, that introduces an independent replica migration process, through which we proactively relocate replicas evicted from the local cache to neighboring nodes with sufficient cache resources. Cooperation costs are effectively controlled through migration replica filtering, migration distance limitation, and hop-by-hop migration request propagation. Extensive simulation experiments show that RMBCC can be efficiently integrated with different on-path caching strategies. Compared with representative caching schemes, RMBCC achieves significant improvements in metrics such as cache hit ratio and content retrieval time, while introducing only negligible cooperation overhead.

1. Introduction

The current Internet architecture was originally designed for point-to-point communication between remote hosts, while lacking consideration of native support for massive content distribution. However, with the explosive growth of network traffic, the way people use networks is gradually changing to the distribution and retrieval of contents [1]. As a result, the mismatch between the Internet’s underlying architecture and upper-layer application demands is gradually aggravated [2], making it hard to achieve satisfactory performance in network resource utilization and user-perceived quality of service (QoS).
Researchers have proposed many solutions based on the TCP/IP architecture to enhance the content distribution capabilities of the Internet. Most of them are deployed as overlays on existing IP infrastructure [3], such as content distribution networks (CDNs), peer-to-peer (P2P) networks, and web proxy caching systems. However, these solutions do not address the inherent problems of TCP/IP, such as the limited expressiveness of IP addressing and the lack of native support for mobility and in-network caching [4], which greatly limits further improvements in content distribution efficiency.
Under these circumstances, information-centric networking (ICN) is proposed as a promising next-generation network paradigm [5]. ICN adopts a content-oriented communication model and a name-based routing scheme, aiming to fundamentally solve the scalability and mobility problems in current network architecture. Based on the unique naming of content, in-network caching is integrated into network forwarding elements as an inherent capability of the ICN network layer. By caching contents at nodes distributed throughout the network, user requests can be served by any cache node holding a copy of the content, thereby reducing both content retrieval latency and network traffic costs.
Given the potential of ICN in-network caching to improve content delivery, researchers have conducted extensive research in this field in terms of cache allocation [6,7,8], replica management [9,10,11], request routing [12,13], etc. One of the most active topics is replica management, especially the design of replica placement strategies. A replica placement strategy decides which contents to cache and on which nodes to place the replicas, so it has a decisive impact on the performance of the cache system. Ref. [14] pointed out that, according to the position of the cache node relative to the content delivery path, replica placement strategies in ICN can be divided into two categories: on-path caching and off-path caching. On-path caching attempts to optimize the placement of replicas along the content delivery path. Off-path caching usually involves collaboration between nodes to strategically place replicas at locations that are not necessarily along the direct delivery path from the content source to the requester. On-path caching is simpler to implement and introduces little additional communication overhead. However, it cannot utilize caches outside the forwarding path, resulting in low resource utilization and, in turn, a poor cache hit ratio. Off-path caching can achieve better cache performance by coordinating the caching operations of different nodes, but this collaboration inevitably introduces higher overhead. In addition, off-path caching causes content forwarding to deviate from the shortest path between the requester and the content source, which significantly increases the traffic load and reduces the network goodput.
To address the above problems, researchers have begun to explore how to combine the advantages of on-path caching and off-path caching to design a hybrid caching mechanism that has both low cooperation costs and performance advantages. According to the collaboration form between nodes, related work mainly falls into two categories. One is to share cache information between neighboring nodes, aiming to reduce cache redundancy and improve content discovery efficiency. In this approach, the placement of replicas is still limited to on-path nodes, so the cache resource utilization cannot be improved. The other type is that nodes collaboratively decide the location of replicas and extend the selection range of cache nodes to the neighboring nodes of on-path nodes. However, this often comes at the expense of higher cache decision-making complexity. It can be seen that existing work still cannot achieve a good trade-off between caching benefits and cooperation costs. More importantly, most existing studies have ignored the significant impact of cache resource utilization on cache performance. Considering the uneven distribution of request loads, nodes located in different areas of the network tend to experience different cache loads. Fully utilizing the cache capacity of all available nodes can effectively increase the number of different contents cached in the network, thereby improving the performance of the cache system in terms of cache hit ratio and network traffic cost.
Given the shortcomings of existing caching strategies, this paper proposes a Replica Migration-Based Cooperative Caching scheme (RMBCC). Instead of redesigning a new caching strategy from scratch, RMBCC introduces an independent replica migration process as a complement to replica placement, which is used to relocate the replicas evicted from local caches to neighboring nodes with sufficient cache resources, thereby improving the utilization of cache resources. The innovations of RMBCC can be summarized as follows:
  • Replica placement: RMBCC does not participate in the replica placement process directly. Instead, it can be integrated with any advanced on-path caching strategy to ensure low overhead of the cache decision process.
  • Proactive cache purging: In addition to regular replica replacement, RMBCC also introduces a proactive cache purging function for local replica management, which proactively evicts replicas based on specific rules to free up storage space.
  • Independent Replica Migration: RMBCC filters the replicas evicted from the local cache (including replicas generated by the replica replacement and cache purging process) and migrates the replicas that are still worth caching to neighboring nodes with sufficient cache resources through node cooperation.
  • Cooperation cost control: RMBCC controls the number of migrated replicas by filtering the evicted replicas, and limits the cooperation scope of nodes through migration distance constraints and hop-by-hop migration request propagation, so as to avoid incurring excessive transmission costs. Additionally, the data of the migrated replicas can be selectively transmitted at off-peak times to avoid network congestion and make full use of the idle bandwidth.
We conducted extensive simulations to verify the effectiveness of RMBCC in different scenarios. Experimental results demonstrated that RMBCC can be efficiently integrated with different on-path strategies under different network settings. Compared to existing on-path caching, off-path caching, and hybrid caching solutions, RMBCC exhibited significant performance advantages in metrics such as cache resource utilization, cache hit ratio, network link load, and content retrieval time.
The remainder of this paper is organized as follows. Section 2 summarizes representative ICN caching schemes. Section 3 introduces the cache management framework and design details of RMBCC. Section 4 presents the simulation design and corresponding result analysis. Finally, Section 5 concludes the paper and provides a brief analysis of future work.

2. Related Work

The replica management efficiency of the in-network cache system is crucial to the deployment and application of ICN. As an important part of replica management, the replica placement strategy determines the distribution, redundancy, and residence time of replicas in the network, which directly affect content delivery performance, network traffic cost, scalability, etc. According to the position of the cache node relative to the content delivery path, research on replica placement strategies can usually be divided into three categories: on-path caching, off-path caching, and hybrid caching.

2.1. On-Path Caching

On-path caching selects one or more intermediate nodes along the content delivery path to store copies of the content. As a passive caching method, it usually only performs caching operations when the content is transmitted to the specific node, so it has low implementation complexity and communication overhead. For example, leave copy everywhere (LCE), the default caching strategy of named data networking (NDN) [15], involves indiscriminately caching copies of content at every node encountered along the content delivery path, thus introducing a large amount of cache redundancy and resulting in poor cache performance. The main reason for this problem is the lack of coordination between on-path nodes. That is, each node makes decisions independently, without considering the decision results or status information of other nodes, and does not notify other nodes of local cache information. Researchers have proposed a series of improved strategies for this problem. For example, CL4M [16] utilized the network topology information and proposed to reduce cache redundancy by only caching contents at the node with the largest betweenness centrality on the forwarding path. To this end, a collaborative mechanism is needed between on-path nodes to compare the betweenness centrality values of different nodes. The authors proposed to add corresponding fields to the packet header and dynamically update them during the hop-by-hop forwarding process of the packet. This implicit cooperation approach (i.e., without introducing out-of-band signaling) is also widely used in subsequent works. For example, ProbCache [17] records the number of forwarding hops in the packet header, which is used to estimate the cache capacity of the download path and cache each content probabilistically.
Some works further consider using content popularity information to assist caching decisions. ABC [18] proposed setting an age value for each replica to control its lifetime in the router. The age value is dynamically calculated and updated by each hop node based on its distance to the server and the content popularity. Similar works include MAGIC [19], BidCache [20], etc. In recent years, emerging applications have imposed stringent latency requirements on content-oriented services. Some studies have begun to focus on using dynamic network congestion information to make caching decisions and provide users with a better quality of experience (QoE). A typical example is CAC [21], which proposed caching content at downstream nodes of congested links to reduce the content download time for subsequent requests. Its derivative works include CPC [22], LAC [23,24,25], etc. The booming development of artificial intelligence has also brought new opportunities for the design of caching strategies [26]. For example, Ref. [27] proposed a reinforcement learning-based hybrid caching scheme in NDN, where routers work in a distributed manner and learn to pick the most suitable policy for caching content.
However, since the cooperation scope of these improved strategies is still limited to on-path nodes, they cannot avoid the inherent problems of on-path caching: on the one hand, caching the same content repeatedly on different paths introduces cache redundancy. On the other hand, it restricts the selection scope of replica placement nodes to those along the forwarding path. This fundamental characteristic limits its ability to effectively utilize the available cache capacity of neighboring nodes, thus limiting further improvements in cache resource utilization and cache performance.

2.2. Off-Path Caching

Off-path caching fundamentally breaks the limitations of the scope of replica placement, and the content is strategically placed in locations unrelated to the forwarding path. By globally coordinating the cache behavior across different nodes, off-path caching effectively addresses the cache redundancy and low cache resource utilization issues associated with on-path caching. For instance, inspired by the idea of “one content, one replica”, Ref. [28] proposed using a hash function at edge routers to determine the caching router for each requested content. This process does not take content popularity into account and operates in a distributed and stateless manner. Similar ideas can be found in [29], where the authors further discussed the selection of the return path of the content. Specifically, they proposed five different routing strategies, including symmetric, asymmetric, and multicast approaches. However, Ref. [30] noted that hash routing might lead to extensive detour delays due to request forwarding deviating from the shortest path. To address this, the authors suggested employing domain clustering techniques to partition a large domain into small clusters and apply hash routing within each cluster. Many subsequent works have adopted the idea of hash routing, including [31,32], CPHR [33], HRCS [34], and SDC [35]. Some studies calculate the optimal placement of replicas by solving optimization problems [36]. For instance, in [37], researchers modeled replica placement as a total network traffic cost minimization problem and proposed four different distributed intra-domain online cache management algorithms. In contrast, Ref. [38] adopted a centralized control architecture, where the controller was responsible for aggregating content request information from the entire network, executing caching decisions, and issuing caching commands to corresponding caching nodes.
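The "one content, one replica" idea behind hash routing can be sketched in a few lines. The snippet below is only an illustration of the general approach described in [28], not the paper's exact function: an edge router statelessly maps each content name to exactly one caching router, with no popularity tracking; the SHA-256-plus-modulo mapping is our own illustrative choice.

```python
import hashlib

def caching_router(content_name, routers):
    """Stateless hash routing sketch: every router computes the same
    mapping from content name to caching router, so each content has
    exactly one designated in-network replica location."""
    digest = hashlib.sha256(content_name.encode()).hexdigest()
    return routers[int(digest, 16) % len(routers)]
```

Because the mapping is deterministic, any edge router can forward a request for the same name to the same caching router without coordination, which is what makes the scheme distributed and stateless.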
The performance improvements of off-path caching come at the expense of higher collaboration costs [39]. Computing the placement for each content, whether based on hashing or solving the optimization problem, introduces unacceptable computational overhead. Furthermore, since the designated cache node usually deviates from the shortest path from the requester to the content source, this leads to higher response latency in the content retrieval process, and the longer detour distance also results in higher network traffic costs.

2.3. Hybrid Caching

Hybrid caching refers to further utilizing cache nodes near the content delivery path based on on-path caching and performing replica management in a cooperative manner. This approach retains the low overhead and low complexity of on-path caching, while also leveraging available cache resources of neighboring nodes within a certain distance, thereby improving the performance of the cache system with lower communication overhead.
The cooperation between neighboring nodes can take various forms. For example, Intra-AS [40] and MuNCC [39] proposed methods for exchanging cache summary information between neighboring nodes and deleting redundant replicas according to certain rules. Additionally, they improved the request forwarding strategy by directing requests to neighboring nodes that hold copies of the requested content based on the cache summary information. Similarly, OpenCache [41] introduced a lightweight regional cache collaboration method for the ICN paradigm based on hierarchical naming. Neighboring nodes obtained each other’s cache status by exchanging aggregated name prefixes, thereby reducing cooperation costs. eNcache [42] employed simple on-path caching, with the improvement of considering whether neighboring routers had cached the content in the cache-decision-making process.
Different from the above works, P-TAC [43] directly leveraged the cache capacity of neighboring nodes. The basic idea was to improve the cache hit ratio by utilizing links with sufficient bandwidth resources. It determined whether to cache the content locally or push it to neighboring nodes based on content popularity and the congestion level of the incoming link. Similarly, Ref. [44] first conducted on-path caching by considering content popularity and the distance of the current node to the content source. Unsuccessfully cached contents, as well as the replaced ones, were then migrated to the central node. HCCD [45] adopted a similar approach which adaptively scheduled popular contents in hot zone nodes to neighboring idle nodes to eliminate redundancy and balance cache loads. Ref. [46] proposed a proactive caching scheme that facilitated the migration of replicas from high-concentration regions to low-concentration regions to achieve rapid deployment of replicas and dynamic adjustment of the cache locations. For IoT scenarios, Ref. [47] suggested caching contents near the delivery path to improve the utilization of cache resources. PDPU [48] proposed to push the content to the upstream or downstream one-hop node for caching based on its popularity information. NCR-BN [49] maintained a minimum popularity table in the neighbor cache region. In cases where cache replacement occurred within the region, the replaced replica would be forwarded to the node corresponding to the minimum popularity for re-caching.
According to the above literature survey, both on-path caching and off-path caching exhibit distinct advantages and disadvantages. Hybrid caching can achieve better cache performance by combining the strengths of both approaches while introducing little collaboration overhead. Nevertheless, existing hybrid caching approaches still have some shortcomings in effectively balancing cache cooperation costs and performance gains. This paper tackles the problem of replica placement from a different perspective. We introduce a replica migration process that works independently of replica placement, which aims to improve the cache utilization by utilizing the cache capacity of neighboring off-path nodes. We take several measures to control the additional overhead caused by replica migration, thereby achieving a better trade-off between cache performance gains and replica management costs.

3. Replica Migration-Based Cooperative Caching (RMBCC) Scheme

In this section, we propose the Replica Migration-Based Cooperative Caching scheme (RMBCC). We first illustrate the cache management framework of RMBCC. Then, we elaborate on the design of the replica migration decision and migration destination selection mechanism, which are two crucial components of the cache cooperation scheme.

3.1. Overview

RMBCC is a distributed cache cooperation method. Instead of redesigning a new caching strategy from scratch, it is designed as a complementary mechanism to replica placement. It is recommended to use the on-path caching approach for replica placement to maintain a low-overhead cache decision process. The basic idea of RMBCC is to introduce a replica migration process that works independently of replica placement, which proactively pushes replicas evicted from the local cache to nearby off-path nodes with lower cache load. The intuition behind RMBCC is two-fold. On the one hand, by recycling these evicted popular replicas, their residence time in the cache system is extended, thereby increasing the probability of being hit by subsequent requests. On the other hand, by utilizing the available cache capacity of off-path nodes, the cache load between nodes can be effectively balanced, thereby improving the overall cache resource utilization.
In addition, RMBCC introduces a proactive cache purging mechanism for local replica management, which is used to proactively remove replicas from the cache according to specific rules to free up cache space. For example, it can be triggered when the cache occupancy exceeds a certain threshold or executed by the cache node at off-peak times to clear outdated replicas. It works together with replica replacement to maintain the stability of cache occupancy by evicting replicas on demand. Each evicted replica, whether through replica replacement or the cache purging process, faces two options: being directly deleted from the cache or being migrated to a nearby node with available cache resources for re-caching. To address this, we designed replica migration decision and migration destination selection methods, which are used to filter evicted replicas and select appropriate destination nodes from neighboring nodes, respectively. Moreover, to avoid network congestion caused by the migration of a large number of replicas and to make full use of idle bandwidth, we can wait until off-peak hours to transfer these replicas if there is sufficient remaining cache space.
Figure 1 illustrates the cache management framework of RMBCC. In this figure, the black solid lines represent the replica placement process, and the red solid lines represent the replica migration process. The meanings of the other lines can be found in the legend. The green circles next to each arrow line indicate that the decision result of that step is true. For the replica migration process, the numbers next to each line indicate the execution order of the steps.

3.2. Replica Migration Decision

The replica migration decision process is responsible for filtering the replicas evicted from the local cache and selecting those that are worth migrating to other nodes for re-caching. As shown in Figure 1, the evicted replicas can be generated through two processes, namely replica replacement decision and proactive cache purging. We briefly introduce their working principles below.
Replica Replacement Decision: When a new replica needs to be cached at a node but there is not enough remaining storage space, some replicas within the local cache need to be evicted to accommodate the new one. Let $v$ denote the caching node and $m$ the content currently being transmitted. $M_v$ denotes the set of replicas in node $v$'s local cache, and $M_v^r$ denotes the set of replicas evicted during the cache insertion process. Obviously, $M_v^r = \emptyset$ if no replacement occurs. Otherwise, it can be expressed by Equation (1):

$$M_v^r = \left\{ m_j \;\middle|\; \sum_{p=1}^{i} s_{m_p} < s_m \le \sum_{q=1}^{i+1} s_{m_q},\ m_j \in M_v,\ j \le i+1 \right\} \quad (1)$$

where the indices (i.e., $i$, $j$, $p$, $q$) represent the order in which the replicas are replaced, as determined by the replacement policy, such as least recently used (LRU). For example, $m_p$ denotes the $p$-th replica to be replaced, which under LRU is the $p$-th least recently accessed replica. In addition, $s_m$, $s_{m_p}$, and $s_{m_q}$ denote the sizes of the corresponding contents.
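Equation (1) can be sketched as a short eviction loop. This is a minimal illustration under an LRU-style ordering, with illustrative names; it slightly generalizes the equation by also accounting for cache space that is already free.

```python
def evict_for_insertion(cache, capacity, new_size):
    """Compute M_v^r: walk replicas in replacement order (dict
    insertion order here stands in for LRU order, least recently
    used first) and evict until the new replica of size s_m fits.
    `cache` maps content ID -> replica size."""
    used = sum(cache.values())
    evicted, freed = [], 0
    for cid, size in list(cache.items()):
        if used - freed + new_size <= capacity:
            break  # cumulative freed size now covers s_m
        evicted.append(cid)
        freed += size
        del cache[cid]
    return evicted
```

With a full 10-unit cache holding replicas of sizes 2, 3, and 5, inserting a size-4 content evicts the first two replicas, matching the condition $\sum_{p \le i} s_{m_p} < s_m \le \sum_{q \le i+1} s_{m_q}$.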
Proactive Cache Purging: Most ICN proposals rely on replacement policies to manage replicas in the cache system. However, frequent cache replacement will inevitably increase the I/O overhead and energy consumption of storage devices. To this end, we set a cache occupancy threshold for each cache router. When the cache occupancy exceeds the threshold, we proactively remove some replicas to maintain the cache occupancy at a reasonable level. Additionally, we schedule the cache purging process to execute only during off-peak hours (such as early morning on weekdays) to reduce the impact on services. Let $M_v^p$ denote the set of replicas evicted during the cache purging process and $H$ denote the cache occupancy threshold. Using the notations introduced above, $M_v^p$ can be expressed as Equation (2):

$$M_v^p = \left\{ m_j \;\middle|\; \sum_{p=1}^{i} s_{m_p} < \sum_{m \in M_v} s_m - H \le \sum_{q=1}^{i+1} s_{m_q},\ m_j \in M_v,\ j \le i+1 \right\} \quad (2)$$
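The purging rule of Equation (2) differs from replacement only in its stopping condition: instead of freeing room for one incoming replica, it frees the excess above the threshold $H$. A minimal sketch with illustrative names:

```python
def purge_cache(cache, threshold):
    """Compute M_v^p: evict replicas in replacement order (dict
    insertion order here) until total occupancy drops to the cache
    occupancy threshold H. `cache` maps content ID -> replica size."""
    excess = sum(cache.values()) - threshold  # amount above H to free
    purged, freed = [], 0
    for cid, size in list(cache.items()):
        if freed >= excess:
            break
        purged.append(cid)
        freed += size
        del cache[cid]
    return purged
```

If occupancy is already at or below $H$, the excess is non-positive and nothing is purged, consistent with $M_v^p = \emptyset$ in that case.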
However, not every replica evicted by the above two processes is suitable for migration to other nodes for re-caching. This is because the migration of replicas entails certain costs, including the consumption of bandwidth resources during data transmission and the computational overhead during the selection process of the destination node. Therefore, we need to carefully filter these replicas to control the number of migrations. Our idea is to migrate only the replicas of popular content because they are more likely to be accessed by subsequent requests.
In our design, routers in the network are not required to keep track of the request counts for each replica. Instead, we assign a 1-bit flag (referred to as cache flag) for each replica after it is cached. These cache flags are stored in the replica information table (RIT) in each node’s local memory. The cache flag of each replica is initialized to 0. Only when the replica is accessed by subsequent requests after being cached, is its cache flag set to 1 to indicate that the corresponding content is popular. The rationale behind this operation is that when a content is requested again after being cached, it suggests that it has been requested at least twice. This can effectively filter out contents that will only be requested once (known as one-timers), and the proliferation of such content in the cache is considered to cause cache pollution problems [50]. In addition, when a replica is inserted into the cache, we also record the cache distance (i.e., the distance between the current node and the upstream service node) and the size of its corresponding content in the RIT. This information will be used in the subsequent decision-making process. The structure of RIT is illustrated in Figure 2.
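The RIT described above can be sketched as a small per-node table. Class and field names below are illustrative, not from the paper; the logic follows the text: the flag starts at 0 at insertion time and flips to 1 on the first subsequent hit, which filters out one-timers.

```python
from dataclasses import dataclass

@dataclass
class RITEntry:
    cache_flag: int      # 1-bit flag: 1 once re-requested after caching
    cache_distance: int  # hops from this node to the upstream service node
    content_size: int    # size of the corresponding content

class ReplicaInfoTable:
    """Per-node replica information table (RIT), kept in local memory."""
    def __init__(self):
        self.entries = {}  # content ID -> RITEntry

    def on_insert(self, cid, distance, size):
        # A newly cached replica starts with cache_flag = 0.
        self.entries[cid] = RITEntry(0, distance, size)

    def on_hit(self, cid):
        # A second request marks the content as popular.
        if cid in self.entries:
            self.entries[cid].cache_flag = 1

    def worth_migrating(self, cid):
        e = self.entries.get(cid)
        return e is not None and e.cache_flag == 1
```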
The replica migration decision-making workflow can be summarized as follows: for each replica $m$ in the set of evicted replicas, router $v$ checks the replica's cache flag (denoted as $CacheFlag_v^m$) in its local RIT. If $CacheFlag_v^m = 1$, the migration decision result for this replica is true, indicating that subsequent migration operations are enabled before the replica is deleted. This involves calling the migration destination selection module to determine the node to which it should be migrated. Otherwise, the decision result is false, and node $v$ deletes the replica directly. The pseudocode for this process is shown in Algorithm 1.
Algorithm 1: Replica_migration_decision
Input: cache node $v$; cache occupancy threshold $H$
Output: migration decision result $res$
Initialization: $M_v^r \leftarrow \{\}$, $M_v^p \leftarrow \{\}$, $DstNode \leftarrow null$

if replica replacement occurs then
    $M_v^r \leftarrow$ get replaced replicas based on the local replacement policy
end if
if the current time is the pre-defined cache purging time then
    $M_v^p \leftarrow$ get purged replicas based on the threshold $H$
end if
for each replica $m$ in $M_v^r$ and $M_v^p$ do
    get $CacheFlag_v^m$ from node $v$'s RIT
    $res \leftarrow False$
    if $CacheFlag_v^m$ is 1 then
        $res \leftarrow True$
    end if
    if $res$ is $True$ then
        $DstNode \leftarrow$ Migration_destination_selection($v$, $m$)
        if $DstNode$ is not $null$ then
            transfer data of replica $m$ to $DstNode$
        end if
    end if
    delete replica $m$ from node $v$
end for
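Algorithm 1 can be condensed into a short function. This is a hedged sketch, not the paper's implementation: `rit` here is a plain mapping from content ID to cache flag, and `select_destination` stands in for the migration destination selection module described in Section 3.3.

```python
def replica_migration_decision(node, evicted, rit, select_destination):
    """Filter evicted replicas by their RIT cache flag and migrate the
    popular ones (flag == 1) to a destination chosen by the selection
    module; every evicted replica is removed from the local cache
    regardless of whether it was migrated."""
    migrated, deleted = [], []
    for cid in evicted:
        if rit.get(cid, 0) == 1:                 # popular: re-requested after caching
            dst = select_destination(node, cid)  # may return None (migration canceled)
            if dst is not None:
                migrated.append((cid, dst))      # transfer replica data to dst
        deleted.append(cid)                      # replica leaves the local cache
    return migrated, deleted
```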

3.3. Migration Destination Selection

Replica migration can utilize the available cache resources of off-path nodes while maintaining the low cooperation and management overhead of on-path caching schemes. This method enhances the utilization of in-network cache resources and improves the overall cache hit ratio. However, migrating replicas to different nodes affects cache performance differently. This variation is due to the different cache loads and traffic intensities across different nodes, resulting in significant differences in the cache benefits and service experiences when relocating replicas to these nodes. Additionally, nodes are at varying distances from the migration source node, leading to different data migration costs, such as the transmission resources required to transfer the replica data. Therefore, our design goal is to maximize the cache resource utilization as well as the cache hit ratio achieved through replica migration under a given migration cost constraint.
In this section, we introduce a migration destination node selection method based on cooperation among neighboring nodes. The basic idea is to propagate migration requests hop-by-hop to neighbor nodes, starting from the migration source node, while adhering to a specified migration distance constraint. At each hop, the nodes independently determine the destination node based on the cache load information of neighboring nodes and the current migration distance. The intuition behind our design is that nodes with lower cache load typically have lower replica replacement probabilities, allowing replicas to reside in the cache for longer periods and thus increasing the probability of being hit by subsequent requests.
It should be noted that a key part of our design is the mechanism by which each node periodically exchanges its cache load information with neighboring nodes. In this paper, the cache load of a node is defined as the number of replicas it replaces per unit of time. For example, considering a cache node $v$ and assuming the neighbor interaction period is $P$, its cache load can be expressed as Equation (3):

$$CacheLoad_v = \frac{\text{Number of replacements}}{P} \quad (3)$$
Replica replacement is closely coupled with replica placement: when the cache space is full, each new replica insertion results in some replicas being evicted from the cache. If the number of replica replacements is zero, it indicates that the cache space is not full or that no new replicas were cached during the monitored period. Therefore, a lower cache load implies that a migrated replica can reside on that node for a longer period, making the node a preferred choice for hosting migrated replicas.
In addition, each node locally maintains a neighbor information table (NIT) to record and update the cache load of each directly connected neighbor node, as illustrated in Figure 3.
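As a concrete illustration, the per-period load accounting of Equation (3) and the NIT updates can be sketched as follows; the `CacheNode` class and its field names are our own illustrative assumptions, not the paper's implementation.

```python
class CacheNode:
    """Sketch of per-node cache-load tracking (Eq. 3) and the neighbor
    information table (NIT). All names here are illustrative."""

    def __init__(self, node_id, period):
        self.node_id = node_id
        self.period = period       # neighbor interaction period P
        self.replacements = 0      # replacements in the current period
        self.cache_load = 0.0      # last computed load value
        self.nit = {}              # neighbor id -> last reported load

    def on_replacement(self):
        """Called whenever a replica is evicted to admit a new one."""
        self.replacements += 1

    def end_of_period(self):
        """Compute Eq. (3), reset the counter, and return the value
        that would be advertised to direct neighbors."""
        self.cache_load = self.replacements / self.period
        self.replacements = 0
        return self.cache_load

    def on_neighbor_report(self, neighbor_id, load):
        """Record a neighbor's advertised load in the local NIT."""
        self.nit[neighbor_id] = load
```

A node would call `end_of_period` once per interaction period and push the returned value to its direct neighbors, which store it via `on_neighbor_report`.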
Below, we provide a detailed introduction to the workflow of the algorithm. Assume that node s r c is the migration source node, and the evicted replica is m . If the migration decision result for m derived from Algorithm 1 is true, node s r c will then proceed to determine the destination node according to the following steps.
S1: The migration source node s r c queries replica m ’s cache distance, denoted d m , through the local RIT. It constructs a migration request packet that includes header fields such as content ID, source node, destination node, and TTL. Specifically, the content ID field is set to the ID of the content corresponding to replica m , the source node field is set to node s r c , the destination node field is initially set to n u l l , and the TTL field is set to d m − 1, indicating the migration distance constraint for this replica. Proceed to step S2;
S2: For clarity, we denote the node currently forwarding the migration request as node x . At this point, node x is node s r c ;
S3: Node x traverses its neighbor node set N x , queries its locally maintained NIT to obtain the cache load of each neighbor n i ∈ N x , and compares it with its own load l x . Denoting the cache load of node n i as l i , if l i < l x , node n i is added to the candidate node set C x of node x . After all neighbors have been traversed, if C x is not empty, the algorithm proceeds to step S4. If C x is empty, it checks whether the current node x is the migration source node s r c : if so, the migration is canceled and the algorithm terminates; if not, node x is selected as the migration destination node and the algorithm terminates;
S4: Select the node with the lowest cache load in C x as the next-hop node for forwarding the migration request, denoted node n e x t , and proceed to step S5;
S5: Node x modifies the migration request packet by updating the destination node field to node n e x t and decrementing the TTL field by 1, then forwards the packet to node n e x t . Proceed to step S6;
S6: Node n e x t receives and parses the migration request packet. If the TTL field value is greater than 0, set x to node n e x t , indicating that node n e x t becomes the current migration request forwarding node, and return to step S3. If the TTL value equals 0, the migration distance constraint has been reached; in this case, node n e x t is selected as the migration destination node, and the algorithm terminates.
The above process can be expressed as Algorithm 2.
Algorithm 2: Migration_destination_selection
Input: Migration source node: src; Evicted replica: m;
Output: Destination node: dst;
Initialization: dst ← null
1:  Node src gets replica m's cache distance d_m from local RIT;
2:  Node src constructs migration request packet: ttl ← d_m − 1, dst ← null;
3:  Set node src as the current migration request forwarding node: x ← src;
4:  while ttl > 0 do
5:    Get neighbor node set N_x of node x;
6:    Get node x's cache load l_x;
7:    Initialize node x's candidate node set: C_x ← ∅;
8:    for each node n_i in N_x do
9:      Get node n_i's cache load l_i from the NIT of node x;
10:     if l_i < l_x then
11:       Add node n_i to node x's candidate node set: C_x ← C_x ∪ {n_i};
12:     end if
13:   end for
14:   if C_x is empty then
15:     if x is node src then
16:       Quit;
17:     else
18:       Select x as the destination node: dst ← x;
19:       Quit;
20:     end if
21:   else
22:     Get the node next with the minimum cache load in C_x;
23:     Modify migration request packet: ttl ← ttl − 1, dst ← next;
24:     Forward migration request packet to node next;
25:     Set node next as the current migration request forwarding node: x ← next;
26:   end if
27: end while
28: if ttl = 0 then
29:   Select node x as the destination node: dst ← x;
30: end if
31: return dst
To aid understanding of the above process, we demonstrate how the algorithm works through the example shown in Figure 4.
In Figure 4a, each node is annotated with a number representing its current cache load. Assume that node s is the migration source node and that the replica to be migrated is m . Node s first checks its local RIT to obtain replica m ’s cache distance d m , which we assume to be 3 in this example, and constructs a migration request packet with the TTL field set to d m − 1, i.e., 2. Next, node s traverses its directly connected neighbor node set { c , d , f } and obtains the candidate node set { d , f } , based on whether their cache loads are lower than its own. Since node f exhibits the lowest load among the candidates, it is selected as the next hop for forwarding the migration request; node s sets the source and destination node fields to s and f , respectively, decrements the TTL field to 1, and forwards the packet. Similarly, node f selects node h as its next-hop forwarding node, updates the destination node field of the request packet to h , and decrements the TTL field to 0. When node h receives the migration request, since the TTL field value is 0, node h will no longer forward the request, and node h is selected as the migration destination node.
For comparison, Figure 4b shows the case when replica m ’s cache distance is 5 (i.e., d m = 5). The migration request is forwarded to node j through a process similar to that in Figure 4a. Although the TTL value of the packet at node j is not yet 0, node j cannot find a neighbor node with a lower load than its own; that is, its candidate node set is empty, so node j is selected as the migration destination node.
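For readers who prefer code, the destination selection of Algorithm 2 can be sketched in Python as below. The hop-by-hop request forwarding is simulated by a loop on one machine, and each node's neighbors and cache loads are passed in as plain dictionaries standing in for the distributed NITs; this is an illustrative sketch under those assumptions, not the paper's implementation.

```python
def select_destination(src, d_m, neighbors, load):
    """Sketch of Algorithm 2: pick a migration destination for a replica
    evicted at `src` whose cache distance is d_m.

    neighbors: dict node -> list of directly connected nodes
    load:      dict node -> cache load (Eq. 3), standing in for the NITs
    Returns the destination node, or None if the migration is canceled.
    """
    ttl = d_m - 1                # migration distance constraint
    x = src                      # current forwarding node
    while ttl > 0:
        # candidate set: neighbors with strictly lower load than x
        candidates = [n for n in neighbors[x] if load[n] < load[x]]
        if not candidates:
            # no better neighbor: cancel at the source, otherwise stop here
            return None if x == src else x
        nxt = min(candidates, key=load.get)   # lowest-load neighbor
        ttl -= 1                              # decrement TTL and forward
        x = nxt
    return x                     # TTL exhausted: current node is the dst
```

On a toy topology, a replica with cache distance 3 migrates two hops toward the lowest-load region, while a source whose neighbors are all more loaded cancels the migration.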

4. Performance Evaluation

In this section, we first introduce the simulation setup and the relevant experimental parameters. Then, we compare the performance of RMBCC with representative strategies for on-path, off-path, and hybrid caching, and investigate the impact of various factors, including network topology, network cache capacity, and content popularity distribution.

4.1. Simulation Set-Up

We implemented the proposed cache cooperation scheme RMBCC on top of Icarus [51], a Python-based discrete-event simulator for evaluating caching performance in ICN; the Icarus version used in this paper is 0.8.0. We then conducted all experiments on real-world Internet Service Provider (ISP) topologies collected by Rocketfuel [52], including GARR and TISCALI, which correspond to networks of different sizes and structures. The nodes in each topology are designated as receivers, cache routers, or content sources based on their node degrees. Specifically, the GARR topology contains 61 nodes and 75 bidirectional links, of which 27 are cache nodes. The TISCALI topology contains 240 nodes interconnected by 404 links, including 160 cache nodes. The bandwidth of all intra-network links is set to 1 Gbps, and the access bandwidth of all receivers is set to 100 Mbps. Each cache router is configured with the same cache capacity; that is, the total cache capacity budget is divided evenly among the cache routers.
The network contains 1 × 10^5 unique contents, and 2 × 10^5 independent requests are generated for them. We used the first half of the requests to warm up the cache system until it converged to a stable state, and collected results over the latter half. Initially, contents are evenly distributed among all source nodes, and their popularity is assumed to follow a Zipf distribution [53]. Requests are generated at a rate of 200 requests/s, with arrivals following a Poisson process. For ease of illustration, we set the threshold for proactive cache purging to the cache capacity, meaning that in the following experiments we only consider evicted replicas generated by cache replacement. Each experimental scenario was run five times, and the average of the statistical results is reported.
Furthermore, we assumed the deployment of a centralized name resolution system, which is responsible for providing the receiver with the address of the cache holding a copy of the requested content [54]. Referring to the design in [55], when multiple cached copies are available, the cache closest to the receiver is selected as the request destination. Subsequently, the receiver forwards the content retrieval request to the node along the shortest path.
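Under these assumptions, the resolution rule, returning the copy holder closest to the receiver, can be sketched as a shortest-path selection over the topology; the adjacency-dict representation and the function names below are illustrative, not part of the cited designs.

```python
from collections import deque

def hop_distance(graph, src, dst):
    """BFS hop count between two nodes in an adjacency-dict graph."""
    seen, queue = {src}, deque([(src, 0)])
    while queue:
        node, dist = queue.popleft()
        if node == dst:
            return dist
        for nb in graph[node]:
            if nb not in seen:
                seen.add(nb)
                queue.append((nb, dist + 1))
    return float("inf")  # unreachable

def resolve(graph, receiver, holders):
    """Pick the copy holder closest to the receiver, mirroring the
    resolution rule described above (ties broken arbitrarily)."""
    return min(holders, key=lambda h: hop_distance(graph, receiver, h))
```

The receiver would then forward its content request along the shortest path to the node returned by `resolve`.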
Table 1 lists the main parameters involved in the simulation.

4.2. Evaluation Metrics

We evaluated different caching schemes from two perspectives: cache performance and transmission performance. The cache-related metrics include cache resource utilization and cache hit ratio, while the transmission-related metrics include network link load and content retrieval time. Their definitions are given below.
(1) Cache Resource Utilization
Cache resource utilization is defined as the ratio of used cache space to the router’s total cache capacity. A higher ratio indicates that more content replicas can be stored within the available cache space, thereby increasing the probability of requests being fulfilled from the cache, which is one of the core goals of our proposal. It is worth noting that the used cache space of each node increases over time until the cache is full. Once full, the cache occupancy remains relatively stable with the dynamic insertion and replacement of replicas. Consequently, we record the used cache space ratio of each node at the end of the simulation, and calculate their average to represent the cache resource utilization of the overall network, as shown in Equation (4), where N denotes the set of cache routers in the network, and c n u s e d and c n represent the cache occupancy and cache capacity of node n , respectively.
$$\overline{CRU} = \frac{\sum_{n \in N} c_n^{used} / c_n}{|N|}$$
(2) Cache Hit Ratio
Cache hit ratio refers to the fraction of content requests serviced by the cache router out of all requests. It is a widely used metric for evaluating the performance of cache systems. Generally, a higher hit ratio leads to a greater reduction in the load on the source server, which further helps to reduce cross-domain traffic and bandwidth costs. Moreover, the improvement in the cache hit ratio contributes to reducing the response delay of requests, which is crucial for latency-sensitive application scenarios. Assuming that the total number of requests issued by users during the simulation is R , H s represents the number of requests served by the source server, and H c represents the number of requests served by cache routers, then the cache hit ratio can be calculated using Equation (5).
$$CHR = \frac{H_c}{H_c + H_s} = \frac{H_c}{R}$$
(3) Network Link Load
Network link load represents the volume of traffic transmitted by a link per unit of time. It is often used to measure link bandwidth utilization. A lower network link load for a caching strategy implies that more bandwidth resources can be conserved through in-network caching, thus enhancing the network’s data delivery capability and scalability. For any link e , its link load can be calculated using Equation (6), where R represents the total number of requests, T represents the simulation duration, R e q S i z e i is the size of the request packet corresponding to request i , D a t a S i z e i is the amount of data corresponding to request i , and x i , e r e q and x i , e d a t a , respectively, denote whether link e is located on the request (data) forwarding path of request i . If so, x i , e r e q and x i , e d a t a are assigned the value 1; otherwise, they are assigned the value 0. To reflect the transmission load of the overall network, we calculate the average link load of all the used links, as shown in Equation (7), where E u s e d represents the set of used links during the simulation, and | E u s e d | is the total number of used links.
$$NLL_e = \frac{\sum_{i=1}^{R} \left( x_{i,e}^{req} \times ReqSize_i + x_{i,e}^{data} \times DataSize_i \right)}{T}$$
$$\overline{NLL} = \frac{\sum_{e \in E_{used}} NLL_e}{|E_{used}|}$$
(4) Content Retrieval Time
Content retrieval time measures the time elapsed from when a user issues a request to when all data for the corresponding content is received. This metric is one of the most intuitive indicators reflecting the user’s service experience. It is closely related to the load status of the content transmission path [56]. Higher link loads increase the probability of network congestion, resulting in a lower achievable transmission rate and longer transmission time. We calculate the average content retrieval time for all requests using Equation (8):
$$\overline{CRT} = \frac{\sum_{i=1}^{R} \left( t_i^{fin} - t_i^{req} \right)}{R}$$
where R represents the number of requests, t i r e q represents the time when request i is issued by the user, and t i f i n represents the time when all data corresponding to request i has been received.
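The hit-ratio and retrieval-time metrics above reduce to simple averages over a request log; a minimal sketch (with helper names of our own choosing) is:

```python
def cache_hit_ratio(served_by):
    """Eq. (5): fraction of all requests served by cache routers.
    served_by: one 'cache' or 'source' label per request."""
    hits = sum(1 for s in served_by if s == "cache")
    return hits / len(served_by)

def avg_retrieval_time(requests):
    """Eq. (8): mean of (t_fin - t_req) over all requests.
    requests: list of (t_req, t_fin) timestamp pairs."""
    return sum(t_fin - t_req for t_req, t_fin in requests) / len(requests)
```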

4.3. Results and Discussion

4.3.1. Impact of Base On-Path Caching Strategies

It is worth noting that RMBCC does not function as an independent cache management scheme. Instead, it is designed as a complementary mechanism to replica placement. Hence, a natural question arises: is RMBCC applicable to various replica placement strategies? In other words, what are the differences in performance improvements obtained by integrating RMBCC with different replica placement strategies?
To answer this question, we studied six typical on-path caching strategies as the base strategies, including LCE, LCD, CL4M [16], Random [51], ProbCache [17], and our preliminary work, PLABC [57], which employs a path load-aware based caching strategy. We chose these strategies because they exhibit different preferences regarding the replica placement locations, leading to differences in their ability to utilize cache resources. At the same time, these strategies also have varying levels of implementation complexity and cache performance, making them widely used as baseline strategies for comparison in related research. Table 2 briefly summarizes the characteristics of the aforementioned strategies.
We analyzed the performance differences of these strategies before and after integrating RMBCC, respectively. All experimental scenarios were evaluated using the metrics described in Section 4.2. We tested the performance of these strategies under different combinations of cache capacity and content popularity distribution, and they exhibited very similar improvement trends. Therefore, we only present the results when the cache size ratio was set to 10% and the Zipf skewness parameter was set to 0.8.
Cache Resource Utilization: As shown in Figure 5, across different topologies, the cache resource utilization of all the base strategies is greatly improved after integrating RMBCC. However, the improvement ratio varies among strategies. Specifically, CL4M demonstrates the highest cache resource utilization improvement ratio after integrating RMBCC, increasing by 71.26% and 98.54% in the GARR and TISCALI topologies, respectively. The minimum improvement ratio among the other strategies was 40.48%. This indicates that, compared to the independent deployment of base on-path strategies, integration with RMBCC can significantly increase the number of replicas cached in the network, thereby increasing the probability of subsequent requests hitting the cache.
However, it should be noted that the performance of RMBCC is constrained by the performance of its corresponding base on-path strategy. For example, in the TISCALI topology, the cache resource utilization of ProbCache with RMBCC can only reach 76.71%. This limitation primarily stems from the probability-based caching operation of its base strategy (i.e., ProbCache), which results in a lower number of cache insertions, replacements, and migrations compared to other strategies, thus failing to fully explore the available cache resources of other nodes in the network. As for CL4M, its cache insertion operation is confined to a few nodes with high betweenness centrality, limiting its ability to utilize available cache resources when integrating RMBCC. In contrast, LCE and Random achieve close to 100% cache resource utilization after integrating RMBCC, indicating that the cache resources of all nodes in the network are fully utilized.
Cache Hit Ratio: Figure 6 shows the cache hit ratio results. Due to the improvements in cache resource utilization, the cache hit ratios of all base strategies have been significantly increased after integrating RMBCC and exhibit similar trends across different topologies. Specifically, in the GARR topology, the average increase in cache hit ratio for these six base strategies was 16.77%. In the TISCALI topology, this ratio rose to 22.09%. The cache hit ratio of RMBCC is still limited by its corresponding base strategy. For instance, the cache hit ratio of LCE remains the lowest both with and without integrating RMBCC, while PLABC maintains the highest in both cases. It is noteworthy that by integrating RMBCC, even simple on-path caching strategies such as LCE and Random can achieve satisfactory cache hit ratios, thereby greatly simplifying the design of the cache decision-making process.
Network Link Load: Figure 7 presents the results for the average network link load. Benefiting from the increase in cache hit ratio, the network link load of all base strategies is significantly reduced when integrated with RMBCC across different topologies. It can be observed that the reduction ratio is higher in the TISCALI topology. In detail, in the GARR topology, the average link load reduction ratio of RMBCC was 17.02%, while in the TISCALI topology, this ratio reached 36.49%. This is mainly due to the larger network diameter of the TISCALI topology, which allows RMBCC to explore a wider range of neighboring nodes and utilize more idle links. The lower link load results in higher data transmission rates during content delivery, thereby shortening content retrieval time.
Content Retrieval Time: Figure 8 shows the results of content retrieval time. All base strategies reduce their content retrieval time when integrated with RMBCC. However, it is notable that the reduction ratio of content retrieval time does not match the reduction level of the network link load. The reason is that although RMBCC utilizes more spare cache and link resources through replica migration, bandwidth bottlenecks may still occur on some backbone links, thus limiting the increase in end-to-end transmission throughput. This also emphasizes that in application scenarios focused on user service experience, priority should be given to caching strategies that consider link congestion, such as PLABC.

4.3.2. Impact of Varying Cache Size

In this section, we further explore how varying cache sizes affect the performance of different caching schemes. Specifically, we adjust the cache size ratio in each experimental scenario, which represents the ratio of the total cache capacity of all nodes to the content population. Since the storage space that a caching system can provide is always very limited compared to the vast amount of internet traffic, we believe it is reasonable to configure a relatively small cache size (e.g., no more than 10%). Additionally, considering the number of cache nodes and the size of the content catalog in the simulation, we set the lower limit of the cache capacity to 1% to ensure that each node can cache at least one replica. Therefore, we set the range of cache sizes from 1% to 10% to study the impact of different cache capacities on the caching performance of various strategies. Other settings remained consistent with those described above.
The experiments conducted in Section 4.3.1 demonstrate that different base on-path strategies exhibit similar improvement trends when integrated with RMBCC. For clarity of presentation, we only present the results of LCE, Random, and ProbCache in the following experiments, as they correspond to different levels of cache decision overhead and cache load. For ease of reference, we denote their RMBCC-integrated versions as LCE(RMBCC+), Random(RMBCC+), and ProbCache(RMBCC+), respectively. In addition, to highlight the advantages of RMBCC in terms of cooperation overhead and cache performance, we included two comparison items: HR Hybrid SM [29] and Intra-AS [40]. HR Hybrid SM is an off-path caching strategy that assigns a globally unique cache node to each content. The designated node is determined by hashing the content ID, which effectively eliminates cache redundancy. Intra-AS is built upon LCE and introduces an independent collaboration process, allowing each node to periodically exchange cache summaries with its neighbor nodes to mitigate cache redundancy. Therefore, it can be considered as a hybrid caching strategy.
Cache Resource Utilization: As shown in Figure 9, under different cache sizes, RMBCC significantly improves the cache resource utilization of the base strategies. For instance, the results of LCE(RMBCC+) and Random(RMBCC+) consistently remained close to 100% in different topologies. It is noteworthy that when the cache size reaches a certain level (i.e., 9% in GARR and 2% in TISCALI), the cache resource utilization of ProbCache(RMBCC+) decreases as the cache size continues to increase. This is primarily due to the fact that compared to the GARR topology, TISCALI has a larger number of cache nodes, resulting in a smaller cache size allocated to each node. Consequently, under the same migration counts and migration distance ranges, it explores fewer cache resources. As the cache size increases, the proportion of these explored resources to the overall cache capacity further decreases. In contrast, LCE(RMBCC+) and Random(RMBCC+) remain unaffected due to their higher number of migrations.
Additionally, HR Hybrid SM maintains a cache resource utilization of 100% consistently, as each node is responsible for caching a distinct set of content. Intra-AS exhibits the lowest cache resource utilization due to its proactive redundancy removal design, which results in a significant portion of cache space remaining unused. As the cache size increases, the redundancy of replicas in the network also increases, leading Intra-AS to delete more replicas. Consequently, its cache utilization ratio exhibits a declining trend.
Cache Hit Ratio: Figure 10 illustrates that in the two tested topologies, the cache hit ratios of all compared schemes increase as the cache size increases. Across different cache size configurations, RMBCC-integrated schemes maintain stable performance improvements compared to their corresponding base strategies. For example, in the GARR topology, the cache hit ratio of LCE(RMBCC+) improved by an average of 28.92% compared to LCE. By enhancing cache resource utilization, integrating RMBCC with a simple on-path caching strategy can achieve a cache hit ratio comparable to that of complex off-path strategies. Another noteworthy point is that as the cache size increases, the hit ratio of HR Hybrid SM increases more rapidly than others. This is mainly because the number of unique contents it caches grows linearly with the cache size increment, while the performance of RMBCC-integrated schemes is limited by their corresponding base strategies. In addition, Intra-AS exhibits poor performance in cache hit ratio due to its collaborative redundancy elimination process greatly reducing cache resource utilization. Moreover, under the name resolution-based routing scheme, Intra-AS’s content discovery process based on neighbor collaboration becomes ineffective, resulting in even poorer performance compared to native LCE.
Network Link Load: Figure 11 shows the results of the network link load against variations in cache size. Across different cache capacity configurations, those RMBCC-integrated schemes achieved the lowest link load, which is mainly due to the substantial increase in cache hit ratio. In the GARR topology, HR Hybrid SM exhibits a significantly higher link load compared to other schemes. The reason is that all requests and contents need to detour to the designated cache, which greatly increases the transmission distance and the resulting traffic cost. Conversely, in the TISCALI topology, the results of HR Hybrid SM are even lower than those of on-path strategies. This is because our statistical method only calculates the average load of the used links, rather than all links in the network. The total load of HR Hybrid SM is still much higher than that of on-path strategies. In contrast, RMBCC delegates the task of replica placement to the base on-path policy and controls the migration traffic by replica filtering and migration distance limitation, thus maintaining its overall link load at a very low level. In addition, due to the significant improvement in cache hit ratio, the network link load is further reduced compared to the base on-path strategy. It is worth noting that in the TISCALI topology, the link load of ProbCache(RMBCC+) increases slightly as the cache size increases. This is because there is no need to migrate replicas to distant nodes when the cache resources of neighboring nodes are sufficient, thereby reducing the number of links used and resulting in a slightly higher average link load.
Content Retrieval Time: As shown in Figure 12, as the cache size increases, the content retrieval time decreases across all strategies. Notably, ProbCache(RMBCC+) and Random(RMBCC+) consistently yield optimal results in both network topologies. Across different cache size configurations, the RMBCC-integrated strategies demonstrate a stable reduction in content transmission time, which is mainly due to their higher cache hit ratio and lower network link load. However, despite achieving a high cache hit ratio, HR Hybrid SM exhibits poor performance in the content retrieval time metric. For example, in the TISCALI topology, HR Hybrid SM’s content retrieval time was 59.49% higher on average than that of ProbCache(RMBCC+). This difference can be attributed to the larger scale of the TISCALI topology, which necessitates longer detours for forwarding content to the designated cache nodes, thereby greatly increasing the link load costs and the occurrence probability of bandwidth bottlenecks. In summary, RMBCC performs best in terms of content retrieval time, primarily due to its significant improvement in the cache hit ratio while maintaining relatively low cooperation costs.

4.3.3. Impact of Varying Content Popularity Distribution

As mentioned previously, we assume that content popularity follows the Zipf distribution, a discrete probability distribution often used to model the frequency of occurrence of elements in large datasets. This assumption is widely employed in related research on ICN in-network caching. In the Zipf distribution, the request probability of a content item is proportional to 1 / k α , where k is the content’s popularity ranking and α is the skewness parameter. By varying α , we can adjust the distribution of requests among content items. Specifically, a larger α value generates a request sequence that is more concentrated on a small number of content items. Consequently, the cache system can achieve higher performance, as it only needs to cache a small portion of popular content items to fulfill the majority of requests. To simulate a wide range of request workloads and different types of traffic, we varied α from 0.6 to 1.0, a reasonable range according to the research in [53,58,59]. We retain the comparison schemes from Section 4.3.2, and set the request rate and cache size ratio to 200 req/s and 10%, respectively. Figure 13 and Figure 14 present the aggregated performance results as the Zipf skewness parameter α varies in the GARR and TISCALI topologies, respectively.
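A request workload of this kind can be generated as follows; the helper names and the use of Python's `random.choices` are our own choices for illustration, not the simulator's internals.

```python
import random

def zipf_pmf(n, alpha):
    """Request probability of the rank-k item, p_k proportional to 1/k^alpha."""
    weights = [1.0 / (k ** alpha) for k in range(1, n + 1)]
    total = sum(weights)
    return [w / total for w in weights]

def generate_requests(n_contents, n_requests, alpha, seed=0):
    """Draw an i.i.d. request sequence over content ranks 1..n_contents."""
    rng = random.Random(seed)
    pmf = zipf_pmf(n_contents, alpha)
    return rng.choices(range(1, n_contents + 1), weights=pmf, k=n_requests)
```

Raising `alpha` shifts probability mass toward the top-ranked contents, which is why the hit ratios of all schemes improve as the skewness grows.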
Cache Resource Utilization: As shown in Figure 13a and Figure 14a, across different Zipf parameter settings, RMBCC consistently improves the cache resource utilization of the base on-path strategies. This demonstrates RMBCC’s broad applicability across different workloads. Similar to the results in the previous section, Intra-AS exhibits the lowest cache resource utilization across various workloads, while HR Hybrid SM consistently maintains 100% utilization. Furthermore, in the TISCALI topology, ProbCache’s performance is as poor as that of Intra-AS. This is mainly due to its probabilistic cache insertion approach, which underutilizes the available cache resources, consequently limiting the performance of ProbCache(RMBCC+).
Cache Hit Ratio: As shown in Figure 13b and Figure 14b, with the increase of α , the cache hit ratios of all compared schemes exhibit an increasing trend. This is because requests for popular content items become increasingly concentrated, allowing more requests to be served from the cache given the same cache size. Across different α values, RMBCC consistently increases the cache hit ratios of the base on-path strategies significantly. However, as α further increases, the difference between them gradually decreases. Consistent with the performance results of cache resource utilization, HR Hybrid SM and Intra-AS achieved the highest and lowest cache hit ratios, respectively.
Network Link Load: As shown in Figure 13c and Figure 14c, the network link load of all caching schemes decreases as α increases due to the increase in cache hit ratio. Specifically, in the GARR topology, the three RMBCC-integrated schemes all achieve the lowest link load across different α values, with very small differences between them. HR Hybrid SM exhibits a higher network link load due to its longer content delivery distances. In the TISCALI topology, the link load of ProbCache(RMBCC+) is higher than that of LCE(RMBCC+) and Random(RMBCC+), because it utilizes fewer caches and links, as reflected in Figure 14a. HR Hybrid SM uses more links due to the larger size of the TISCALI topology, resulting in a lower average network link load.
Content Retrieval Time: As α increases, the transmission time of all strategies decreases, which is attributed to the improved cache hit ratios and reduced network link loads. The longer content delivery distance of HR Hybrid SM results in a higher overall network link load, leading to its worst content retrieval time performance in both topologies. Intra-AS also performs worse than other solutions in this metric due to its low cache hit ratio and higher network link load. In contrast, RMBCC maintains the low content retrieval time of the corresponding base on-path strategies and demonstrates stable performance gains across different α values. This also indicates that the cooperation cost of RMBCC is much lower than that of HR Hybrid SM, resulting in better performance in terms of both network traffic cost and user QoE.

4.3.4. Overhead Analysis

As a cooperative caching scheme, RMBCC inevitably introduces some cooperation overhead. For example, nodes need to exchange necessary information to support migration decisions, and migrating replicas from one cache node to another will also cause additional bandwidth consumption. Since the storage resources occupied by the newly introduced tables (i.e., RIT and NIT) are negligible, we will focus our detailed analysis on the computational and communication overhead of RMBCC.
Computational overhead: Since RMBCC does not interrupt the replica placement process of the base strategies, its computational overhead is mainly introduced by the migration destination selection process. During the hop-by-hop forwarding of the migration request, each node must select the neighbor with the lowest load in its NIT. Although this selection is linear in the number of neighbors, the neighbor count of each node is typically small, so the computational overhead introduced is very low.
Communication overhead: The communication overhead introduced by RMBCC mainly consists of two parts. The first part is the periodic exchange of load information among cache nodes. The corresponding traffic cost is shown in Equation (9), where $N_{cache}$ represents the set of cache nodes, $c_n$ is the number of neighbor nodes of node $n$, $s_{ex}$ is the size of the exchange message, $d_{ex}$ is the distance to neighbor nodes, and $T_{cnt}$ represents the number of load information exchanges, which is equal to the number of periods elapsed during the simulation. The second part is the data transmitted during replica migration, as shown in Equation (10), where $M$ denotes the total number of replica migrations, $s_m$ is the size of the content corresponding to the $m$-th migrated replica, and $d_m$ is the migration distance.
$$C_{exchange} = \sum_{n \in N_{cache}} c_n \times s_{ex} \times d_{ex} \times T_{cnt} \qquad (9)$$
$$C_{migrate} = \sum_{m=1}^{M} s_m \times d_m \qquad (10)$$
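Equations (9) and (10) can be computed directly; the following sketch uses hypothetical helper names (`exchange_cost`, `migrate_cost`) and made-up example values:

```python
# Illustrative computation of the two communication-overhead terms
# in Equations (9) and (10); all numbers below are invented examples.

def exchange_cost(neighbor_counts, s_ex, d_ex, t_cnt):
    """C_exchange = sum over cache nodes n of c_n * s_ex * d_ex * T_cnt."""
    return sum(c_n * s_ex * d_ex * t_cnt for c_n in neighbor_counts)

def migrate_cost(migrations):
    """C_migrate = sum over migrations m of s_m * d_m."""
    return sum(s_m * d_m for s_m, d_m in migrations)

# Example: 3 cache nodes with 2, 3, and 4 neighbors; 100-byte exchange
# messages sent one hop (d_ex = 1) over 50 exchange periods.
c_ex = exchange_cost([2, 3, 4], s_ex=100, d_ex=1, t_cnt=50)
# Two migrations of 10 MB content over 1 and 2 hops, respectively.
c_mig = migrate_cost([(10e6, 1), (10e6, 2)])
print(c_ex, c_mig)  # -> 45000 30000000.0
```

The example makes the scale difference visible: with one-hop exchanges of tiny messages, $C_{exchange}$ is dwarfed by $C_{migrate}$, which is why the distance limit on migrations matters most.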
Although RMBCC introduces additional communication overhead, it reduces traffic costs by improving the cache hit ratio. To assess this, we compared the total traffic cost of the different strategies, defined as the sum of the traffic passing through all network links during the simulation. For any caching strategy $P$, the traffic cost is given by:
$$TrafficCost_{P} = \sum_{i=1}^{R} s_i \times d_i \qquad (11)$$
where $R$ is the total number of requests, and $s_i$ and $d_i$ represent the content size and transmission distance corresponding to the $i$-th request, respectively. We ignore the traffic of request packets, since it is very small compared to content traffic. Let $P(RMBCC+)$ denote the strategy obtained by integrating $P$ with RMBCC; its total traffic cost can then be expressed as:
$$TrafficCost_{P(RMBCC+)} = \sum_{i=1}^{R} s_i \times d_i + C_{migrate} + C_{exchange} \qquad (12)$$
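The comparison between Equations (11) and (12) can be sketched as follows; all request sizes, distances, and overhead values are made-up illustrative numbers, not measured results:

```python
# Hedged sketch of the traffic-cost comparison in Equations (11)-(12):
# the base strategy pays only request-serving traffic, while the
# RMBCC-integrated strategy adds migration and exchange overhead but
# typically serves requests over shorter distances (higher hit ratio).

def traffic_cost(requests):
    """Equation (11): sum of s_i * d_i over all served requests."""
    return sum(s * d for s, d in requests)

# Illustrative workload: four requests of 10 MB each.
base_requests = [(10e6, 3), (10e6, 4), (10e6, 3), (10e6, 5)]
# With RMBCC, some requests hit migrated replicas at closer nodes.
rmbcc_requests = [(10e6, 2), (10e6, 3), (10e6, 3), (10e6, 4)]
c_migrate, c_exchange = 2.0e7, 4.5e4  # overhead terms from Eqs. (9)-(10)

cost_base = traffic_cost(base_requests)                             # Eq. (11)
cost_rmbcc = traffic_cost(rmbcc_requests) + c_migrate + c_exchange  # Eq. (12)
print(cost_base, cost_rmbcc)
```

In this toy example the shorter response distances outweigh the added overhead terms, which mirrors the TISCALI result reported below where the RMBCC-integrated strategies end up with a lower total traffic cost than their base strategies.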
RMBCC’s efforts to control communication overhead are reflected in the following three aspects:
(1) RMBCC improves the overall cache hit ratio, thus reducing the response distance of requests (i.e., $d_i$ in Equation (12)) and the resulting traffic cost.
(2) RMBCC filters the evicted replicas through the migration decision process and limits the migration distance, reducing the traffic generated by replica migration; this corresponds to $C_{migrate}$ in Equation (12).
(3) RMBCC sets the distance range for load information exchange to 1 (i.e., $d_{ex} = 1$ in Equation (9)), thereby reducing the cost of exchanging information with neighbor nodes. In addition, since each exchange message is very small and contains only the node's own load value, the overall cost of this part is also very low; it corresponds to the $C_{exchange}$ term in Equation (12).
We conducted experiments to compare the total traffic cost of the different caching schemes, with experimental parameters consistent with those outlined in Section 4.3.1. The results are depicted in Figure 15. The traffic cost of the base strategies (i.e., without RMBCC) is plotted as light gray bars, corresponding to Equation (11). The portion highlighted within the red rectangle represents the additional communication overhead introduced by RMBCC, corresponding to the last two terms in Equation (12). We observe that, for any on-path caching strategy, the overhead introduced by integrating RMBCC is very low. Moreover, the improved cache hit ratio reduces the traffic generated by serving requests, so the total traffic cost remains as low as that of the base on-path strategies. Specifically, in the GARR topology, the total traffic cost of the six on-path strategies increases by an average of only 0.63% when RMBCC is integrated. In the TISCALI topology, RMBCC even reduces the total traffic cost compared to the base on-path strategies, which is attributed to the greater improvement in cache hit ratio that RMBCC achieves there. In contrast, since HR Hybrid SM must detour to the designated cache during content transmission, its transmission distance and total traffic cost are higher. As for Intra-AS, its overall traffic cost is also high due to its low cache hit ratio and the additional traffic introduced by its periodic exchange of cache summaries.

4.3.5. Discussion

Based on the above experimental results and analysis, RMBCC demonstrates significant advantages in balancing cache performance and cooperation costs. Specifically, RMBCC achieves cache resource utilization and hit rate levels comparable to complex off-path caching strategies by integrating with simple on-path strategies. The lightweight nature of the replica placement decision process greatly reduces the impact on the router’s traffic forwarding function, ensuring line-speed caching and forwarding operations. Additionally, RMBCC significantly reduces the cooperation costs between nodes through measures such as filtering the migrated replicas and limiting the migration distance, as evidenced by the network link load metrics. Thanks to the lower traffic costs, RMBCC also achieves satisfactory performance in QoE metrics measured by content retrieval time.
However, it is worth noting that the experimental results show that as cache capacity increases, the effectiveness of RMBCC in improving the cache hit ratio gradually diminishes; the underlying mechanism requires further analysis. Furthermore, the performance of RMBCC is highly correlated with the base on-path strategy it is paired with. Understanding how they influence each other, and how to make them work together more efficiently, remains unresolved in this paper and will be a significant direction for future research.

5. Conclusions

This paper proposes a cooperative caching scheme based on replica migration for ICN, which aims to fully utilize the cache capacity of nodes near the forwarding path, balancing the cache load across caches and improving cache resource utilization. Through the proposed replica migration mechanism, popular replicas that are evicted are given a second chance to be cached, thereby increasing their chances of being accessed by subsequent requests. During the selection of migration replicas, we filter out unpopular replicas to avoid unnecessary migration overhead. Additionally, we set a distance constraint for each migrated replica to strictly control migration traffic costs. Finally, we implemented the proposed cooperative caching mechanism in the Icarus simulator and compared it with representative on-path, off-path, and hybrid caching solutions. The results demonstrate significant advantages in both cache performance and cooperation cost. However, an important limitation of this paper is that the interaction mechanism between RMBCC and the underlying replica placement strategy has not yet been studied. In future work, we will focus on leveraging content popularity information to further improve RMBCC's performance, particularly in optimizing the cache hit ratio. At the same time, we will explore joint design and optimization methods for on-path caching strategies and RMBCC to maximize its collaboration advantages.

Author Contributions

Conceptualization, Y.C., H.N., and R.H.; methodology, Y.C. and R.H.; software, Y.C.; writing—original draft preparation, Y.C.; writing—review and editing, Y.C., H.N., and R.H.; supervision, H.N.; project administration, R.H.; funding acquisition, H.N. All authors have read and agreed to the published version of the manuscript.

Funding

This work was funded by the National Key R&D Program of China: Application Demonstration of Polymorphic Network Environment for Computing from the Eastern Areas to the Western (Project No. 2023YFB2906404).

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

ICN: Information-Centric Networking
NDN: Named Data Networking
CDNs: Content Delivery Networks
P2P: Peer-to-Peer Network
QoS: Quality of Service
QoE: Quality of Experience
LCE: Leave Copy Everywhere
LCD: Leave Copy Down
LRU: Least Recently Used
IoT: Internet of Things
CS: Content Store
RIT: Replica Information Table
NIT: Neighbor Information Table
TTL: Time to Live
ISP: Internet Service Provider

Figure 1. Cache management framework of RMBCC.
Figure 2. Structure of RIT.
Figure 3. Structure of an NIT.
Figure 4. Examples of migration destination selection: (a) $d_m$ = 2; (b) $d_m$ = 4.
Figure 5. Cache resource utilization results of various base on-path caching strategies in the topology: (a) GARR; (b) TISCALI.
Figure 6. Cache hit ratio results of various base on-path caching strategies in the topology: (a) GARR; (b) TISCALI.
Figure 7. Network link load results of various base on-path caching strategies in the topology: (a) GARR; (b) TISCALI.
Figure 8. Content retrieval time results of various base on-path caching strategies in the topology: (a) GARR; (b) TISCALI.
Figure 9. Cache resource utilization results of various cache sizes in the topology: (a) GARR; (b) TISCALI.
Figure 10. Cache hit ratio results of various cache sizes in the topology: (a) GARR; (b) TISCALI.
Figure 11. Network link load results of various cache sizes in the topology: (a) GARR; (b) TISCALI.
Figure 12. Content retrieval time results of various cache sizes in the topology: (a) GARR; (b) TISCALI.
Figure 13. Aggregated results of various content popularity distributions in the GARR topology: (a) cache resource utilization; (b) cache hit ratio; (c) network link load; (d) content retrieval time.
Figure 14. Aggregated results of various content popularity distributions in the TISCALI topology: (a) cache resource utilization; (b) cache hit ratio; (c) network link load; (d) content retrieval time.
Figure 15. Traffic cost of various caching strategies in the topology: (a) GARR; (b) TISCALI.
Table 1. Simulation parameters.
Network topology: GARR/TISCALI
Bandwidth of intra-network links: 1 Gbps
Bandwidth of access links: 100 Mbps
Cache placement: Uniform
Content size: 10 MB
Content popularity distribution: Zipf (α = 0.6 to 1.0)
Number of contents: 1 × 10^5
Number of requests: 2 × 10^5
Request distribution: Poisson
Request rate: 200 req/s
Table 2. Introduction to comparative strategies.
LCE: Cache a copy of the content at each node on the content delivery path
LCD: Cache only at the next-hop node of the serving node
CL4M: Cache at the intermediate node with the largest betweenness centrality
Random: Cache at a randomly selected node on the content delivery path
ProbCache: Cache at each hop node according to a dynamically calculated probability
PLABC: Cache at the node with the largest utility value on the content delivery path

Share and Cite


Chao, Y.; Ni, H.; Han, R. RMBCC: A Replica Migration-Based Cooperative Caching Scheme for Information-Centric Networks. Electronics 2024, 13, 2636. https://doi.org/10.3390/electronics13132636

