Article

A Hierarchical Cache Architecture-Oriented Cache Management Scheme for Information-Centric Networking

by Yichao Chao 1,2 and Rui Han 1,2,*

1 National Network New Media Engineering Research Center, Institute of Acoustics, Chinese Academy of Sciences, No. 21, North Fourth Ring Road, Haidian District, Beijing 100190, China
2 School of Electronic, Electrical and Communication Engineering, University of Chinese Academy of Sciences, No. 19(A), Yuquan Road, Shijingshan District, Beijing 100049, China
* Author to whom correspondence should be addressed.
Future Internet 2025, 17(1), 17; https://doi.org/10.3390/fi17010017
Submission received: 3 December 2024 / Revised: 25 December 2024 / Accepted: 3 January 2025 / Published: 5 January 2025

Abstract

Information-Centric Networking (ICN) typically utilizes DRAM (Dynamic Random Access Memory) to build in-network cache components due to its high data transfer rate and low latency. However, DRAM faces significant limitations in terms of cost and capacity, making it challenging to meet the growing demands for cache scalability required by increasing Internet traffic. Combining high-speed but expensive memory (e.g., DRAM) with large-capacity, low-cost storage (e.g., SSD) to construct a hierarchical cache architecture has emerged as an effective solution to this problem. However, how to perform efficient cache management in such architectures to realize the expected cache performance remains challenging. This paper proposes a cache management scheme for hierarchical cache architectures in ICN, which introduces a differentiated replica replacement policy to accommodate the varying request access patterns at different cache layers, thereby enhancing overall cache performance. Additionally, a probabilistic insertion-based SSD cache admission filtering mechanism is designed to control the SSD write load, addressing the issue of balancing SSD lifespan and space utilization. Extensive simulation results demonstrate that the proposed scheme exhibits superior cache performance and lower SSD write load under various workloads and replica placement strategies, highlighting its broad applicability to different application scenarios. Additionally, it maintains stable performance improvements across different cache capacity settings, further reflecting its good scalability.

1. Introduction

In next-generation network architectures such as Information-Centric Networking (ICN), in-network caching is one of the core functions to achieve efficient content distribution [1,2,3,4]. ICN uses location-independent names to uniquely identify each piece of content and replaces traditional security mechanisms that rely on communication channel protection with content-based security [5], which achieves complete separation of content and its location. This enables the network layer to transparently cache content and efficiently serve subsequent requests: users can directly retrieve data from nearby nodes with cached content replicas using the content’s name, without having to access the original server. This significantly reduces network connection demands, minimizes redundant traffic and bandwidth consumption, and effectively optimizes end-to-end latency performance [6,7].
To achieve large-scale deployment of ICN caching, the primary challenge lies in the design and implementation of cache nodes. Since caching operations in ICN are closely coupled with the forwarding pipeline, the I/O rate of the caching system must meet strict performance requirements to match the forwarding rate of network components [8]. For instance, high-speed memory (such as DRAM) is typically used to enable line-speed caching operations, but it has limitations in terms of cost and cache capacity. In the context of large-scale Internet traffic, the effectiveness of ICN in-network caching depends largely on the ability to provide sufficient cache capacity to host a large number of content replicas [9], thereby improving the cache hit rate. Persistent storage devices such as SSDs (Solid-State Drives), with their lower upfront costs and higher storage capacities, can offer scalability at the terabyte (TB) level for ICN caching. However, high access latency remains a major limitation to their widespread application in ICN cache nodes.
In this context, researchers have begun exploring the use of multiple storage technologies to construct hierarchical cache architectures in order to address the trade-offs between cache capacity, deployment costs, and read/write speed [8]. A typical approach is to use SSDs as an additional cache layer to DRAM [9,10,11], where the core idea is to store a small amount of popular data in DRAM to fully utilize its high-speed memory bandwidth, while storing less popular data in SSDs to leverage their large storage capacity. At the same time, DRAM is used to store indexes and act as a write buffer for the SSD cache. In addition, related research has further optimized the access latency performance of hierarchical cache architectures by adopting techniques such as data prefetching [12] and decoupling caching from forwarding operations [13].
Cache management in hierarchical cache architectures remains an unresolved issue. Existing research typically handles local replica management within a single cache layer through cache replacement policies [14]. However, in hierarchical cache architectures, the differentiated read/write characteristics of the various storage media present new challenges for replica management. For example, the limited lifespan of SSDs means that they can only endure a finite number of erase/write cycles, so unnecessary write operations should be avoided to reduce the operating costs of the cache system. Most existing studies filter the data written to SSDs based on attributes such as content popularity [10,15], but they do not fully exploit the node- and network-level information available in ICN, which leads to inefficient filtering or underutilization of SSD cache space.
In addition, the key issue in cache management lies in designing cache replacement policies that can adapt to the characteristics of the hierarchical cache architecture. Due to the differences in cache load and request access patterns between the upper and lower layers, homogeneous cache management may lead to a decline in overall cache performance. Specifically, when data are forwarded through a cache node, DRAM inserts the data into the local cache based on the cache decision results, while the input load on the SSD primarily comes from the data replaced by DRAM. As a result, the SSD experiences lower cache load and replacement frequency, allowing for more refined replica management with relatively small overhead. On the other hand, due to the filtering effect of the upper-layer cache on the workload, the request arrival patterns in the lower-layer cache tend to be different. This characteristic can be leveraged to implement differentiated cache replacement policies across multiple cache layers, adapting to their respective request access patterns, thereby improving overall cache hit rate.
To address the above issues, this paper proposes a cache management scheme oriented towards hierarchical cache architectures in ICN. In the following text, we will refer to it as the hierarchical cache management policy (denoted as HCM). The core ideas of HCM are as follows:
  • A differentiated cache replacement policy is proposed to accommodate the varying request access patterns across different cache layers. At the DRAM cache layer, LRU is used to prioritize retaining short-term popular data. At the SSD cache layer, a multi-queue replacement policy based on cache utility is implemented, with the goal of storing higher-utility data (such as long-term popular data or data with high retrieval costs) in SSDs for a longer period;
  • Considering the limited lifespan of SSDs, a probabilistic insertion-based SSD cache admission filtering mechanism is introduced. By leveraging the redundancy and popularity attributes of data in ICN, this mechanism effectively reduces the write load on the SSD while improving the overall cache hit rate.
We conducted extensive simulations based on Icarus [16] and comprehensively evaluated the performance of HCM under various application scenarios using performance metrics such as cache hit rate, network link load, and SSD write load. The experimental results show that HCM consistently demonstrates significant cache performance advantages under different request workloads and replica placement strategies, highlighting its wide applicability across multiple application scenarios. Furthermore, HCM is able to maintain stable performance gains under different cache capacity settings, demonstrating its excellent scalability.
It should be noted that the approach proposed in this paper is not limited to ICN. It can also be applied to other distributed caching and storage scenarios, where nodes may utilize a multi-level cache hierarchy consisting of DRAM and SSD. Designing efficient cache replacement policies to improve cache resource management and system performance is equally crucial in these contexts. The ideas of SSD cache filtering and hierarchical cache replacement in HCM can be effectively applied to such environments as well. However, in this paper, we focus on the ICN context for the following reasons: First, the coupled nature of forwarding and caching in ICN presents unique challenges for cache management. For example, the limited resources of routers constrain the complexity of cache management, and features such as on-path caching impose higher demands on write throughput. Second, symmetric routing in ICN, along with features like ubiquitous and transparent caching, provides greater flexibility and design space for cache management schemes. Therefore, while the HCM method has broader applicability, exploring it within the ICN context allows us to address specific challenges and optimize cache management schemes for this network architecture.
The remainder of this paper is organized as follows. Section 2 provides an overview of the hierarchical cache architecture in ICN and related research on ICN cache replacement. Section 3 introduces the hierarchical cache management architecture designed in this paper. Section 4 elaborates on the hierarchical cache management policy and its core components. Section 5 describes the experimental design and result analysis. Finally, Section 6 summarizes the main contributions of this paper and briefly discusses future research directions.

2. Related Work

2.1. ICN Hierarchical Caching Architecture

The feasibility of ICN in-network caching systems largely depends on whether routers can be equipped with large caches that have sufficient storage capacity and support line-speed operations. However, existing storage technologies struggle to simultaneously meet the demands of storage capacity and high I/O throughput [8]. To address this technical bottleneck, researchers have begun exploring hierarchical cache architectures that integrate multiple types of storage media to leverage the advantages of heterogeneous storage systems, enabling more efficient cache management.
Ref. [9] proposed a multi-level caching system that predicts subsequent requests based on the temporal correlation between requests and proactively prefetches data blocks from SSDs to DRAM. This approach decouples the low-speed I/O operations of SSDs from the high-speed data forwarding of network components in terms of time scale. The scheme expands the capacity of ICN caches to the multi-terabyte level while maintaining line-speed data processing performance at a multi-Gbps rate. Further, ref. [11] implemented a prototype system based on DPDK (Data Plane Development Kit) on off-the-shelf hardware, which separates high-latency SSD access operations from the forwarding process using multi-core processing technology. With a configuration of tens of gigabytes of DRAM and hundreds of gigabytes of SSD, the system achieved a throughput of over 10 Gbps. Ref. [13] proposed using block storage devices to extend the cache capacity of network elements to the terabyte level. The authors introduced an innovative split architecture, dividing network devices into a “forwarding end” and a “storage end”, each handling data packet forwarding and caching operations, respectively, thereby addressing the speed mismatch between high-speed data packet forwarding and low-speed block I/O operations on switches. HCaching [17] uses DRAM as the primary cache and SRAM (Static Random-Access Memory) as a cache for DRAM, forming a high-speed, large-capacity hierarchical scheme, and optimizes DRAM cache throughput through a bulk data block prefetching mechanism. The commonality among these works lies in decoupling the high-latency access operations of lower-level large-capacity caches from the high-speed data forwarding operations in terms of time, and using small, high-speed caches to hide the latency of the lower-level caches. While these works address the integration of persistent storage into cache systems, they do not provide effective support for multi-level cache management strategies.
Some studies explored the design issues of cache systems from the perspective of performance analysis. For example, ref. [18] identified bottlenecks in NDN (Named Data Networking) software routers based on COTS (Commercial Off-The-Shelf) computers and suggested that storage design schemes should be adjusted according to computational resources to ensure efficient caching and name-based forwarding. Refs. [12,19] analyzed the CPU instruction pipeline and identified DRAM access latency as the bottleneck in NDN software switches, proposing a prefetching algorithm for NDN data structures to hide this latency. In contrast to the aforementioned works, NB-Cache [20] identified the Content Store (CS) as the main bottleneck in content routers and effectively mitigated local CS congestion through a global load balancing strategy. When a packet arrived at a router with a full CS, NB-Cache forwarded it directly to the next-hop router, avoiding local queuing delays. This method optimized resource allocation from a global perspective, decoupled fast data packet forwarding from slower content retrieval, and significantly reduced the reliance of the CS on expensive memory.
In addition, some studies have explored the replica management problem in the context of distributed multi-layer cache/storage systems. For example, ref. [21] introduced a two-layer overlay network cache system for content distribution, which aims to improve the overall cache performance by leveraging the interaction of caches at different depths in the network. Ref. [22] considered a hierarchical cache system consisting of mirror sites and end users and explored how to jointly utilize two-layer caches to reduce the transmission load in linear function retrieval scenarios. It should be pointed out that the hierarchy mentioned in these studies is reflected in the multi-layer architecture composed of nodes located at different network locations, with bottlenecks typically arising from network bandwidth and the collaboration between nodes [23]. This paper differs from the studies mentioned above in that the hierarchy discussed here refers to a multi-layer cache architecture composed of different hardware within a single node. The focus is on how to combine the I/O characteristics of different storage media to address the challenges of local cache management and replica replacement on a single node.

2.2. ICN Cache Replacement Policies

Cache replacement policies have been widely studied in fields such as operating systems and web caching. These classical policies optimize replacement decisions by considering different statistical information of cached items [24]. For example, LRU (Least Recently Used) replaces the least recently accessed cache item based on access time, with variations including SLRU [25], LRU-K [26], and q-LRU [27]. Frequency-based policies, such as LFU (Least Frequently Used), LFU-Aging [25], and FB-FIFO [28], aim to retain popular data in the cache to improve cache hit rates. Additionally, hybrid policies, such as ARC [29], LRFU [30], and 2Q [31], combine multiple replacement algorithms to adapt to different application scenarios. Policies like GreedyDual-Size [32] and LNC-R-W3 [33] take into account the access cost of cached items, improving the accuracy of replacement decisions by considering both the size and access frequency of cached items.
In addition to following these classical replacement algorithms, ICN in-network caching has further developed cache replacement policies to address the ubiquitous nature of ICN networks, considering node- and network-level attributes [14]. For example, ref. [34] proposed using heterogeneous replacement algorithms on edge and intermediate routers to handle different observed workload characteristics. Ref. [35] employed an LRFU (Least Recently and Frequently Used) policy with weighted parameters, allowing each content router to implement different caching policies based on its position in the network. Ref. [36] discussed the benefits of applying different cache replacement policies at each layer of the data center fat-tree topology for big data application scenarios. Similar work can be found in [37,38]. Ref. [39] proposed a neighbor cooperation-based cache replacement policy aimed at utilizing neighboring nodes’ cache resources to extend the caching time of replicas. Additionally, some studies focus on specific application scenarios. For instance, ref. [40] proposed a freshness-priority replacement policy for NDN-based IoT (Internet of Things), where data are evicted based on predictions of invalid data from the time series of future sensor events. Ref. [41] introduced a QoS-aware cache replacement policy for VNDN (Vehicular Named Data Networking), where traffic is classified and stored in different sub-caches. Ref. [42] proposed a replacement policy for the IoT scenario based on three fundamental content attributes: request count, remaining lifetime, and the waiting time for upcoming requests.
However, the lack of effective hierarchical cache management policies remains one of the key factors limiting the widespread application of SSD-based large-capacity storage in ICN in-network caching [8]. Ref. [43] pointed out that caching in persistent memory or disk storage was the future direction and emphasized the necessity of designing new caching algorithms in multi-layer caches that took into account the characteristics of different devices. In a hierarchical cache architecture, cache replacement not only manages local replicas to enhance cache performance but also needs to consider the read/write characteristics of different storage devices to balance cache benefits and usage costs. For example, ref. [10] pointed out that each SSD block could only undergo a limited number of writes and erasures during its lifecycle. The authors proposed a lightweight cache insertion policy called Probationary Insertion, where DRAM and SSD cache spaces were managed with independent LRU replacement policies. Content is only inserted into the SSD if it has been requested at least once during its time in DRAM. Similarly, uCache [15] designed a two-level LRU chain filter to distinguish one-time accessed data blocks, aiming to avoid unnecessary writes to the SSD. Ref. [44] noted that a two-level cache hierarchy, where each level uses independent cache replacement policies, results in inefficient cache resource management and reduced system performance. The authors proposed an adaptive multi-level cache replacement algorithm that dynamically adjusts data blocks between the DRAM and SSD cache levels.
In summary, the aforementioned works have provided preliminary insights into aspects such as ICN cache replacement and SSD cache filtering. However, there is still a lack of an effective cache management policy specifically designed for hierarchical cache architectures. In particular, differentiated replacement policies are needed to address the varying request workload characteristics observed at each cache layer to enhance overall cache performance. Additionally, there is a need to design SSD cache filtering mechanisms that leverage ICN in-network caching characteristics to reduce both the deployment and operational costs of SSDs.

3. Hierarchical Cache Architecture Design

To bypass the line-speed limitations of ICN in-network caching while providing terabyte-level single-node cache capacity, we adopt a two-tier cache architecture combining DRAM and SSD. As shown in Figure 1, we follow the design of the split architecture from [13], dividing each ICN cache router into two functional modules: a forward module and a cache module. By decoupling packet forwarding and caching operations on the timescale, this approach addresses the issue of SSDs’ slow I/O blocking the forwarding pipeline. Specifically, the forward module is responsible for operations such as name-based routing and packet forwarding, while the cache module handles cache-related operations such as replica management and data request service. The cache module consists of two types of storage media: DRAM and SSD. The DRAM cache space is further divided into four parts. One part is used to store the indexes of replicas in the L1 and L2 cache layers, another part is used to store the DRAM virtual cache queue, a third part is used to store the Replica Information Table (RIT), and the remaining space is used to store the actual replica data. The first three parts occupy only a small amount of space, and their specific implementation will be introduced in Section 4. The SSD is deployed in a scalable manner, allowing network administrators to adjust the number of SSDs based on cache capacity requirements. The SSD cache space primarily holds the replicas evicted from the DRAM cache to ensure the exclusivity of the hierarchical cache system [44], meaning that the same replica is not cached in multiple layers.
The proposed architecture involves three core operations: cache insertion, cache hit, and cache miss, as indicated by the arrows in Figure 1. Specifically, when a new data packet is forwarded to the node, the cache module is responsible for establishing an index and storing the corresponding replica data. When a new request packet is forwarded to the node, it queries the index table in DRAM. If the request hits any cache level, the corresponding replica data are read from the cache device indicated by the index, and the forward module encapsulates it into an ICN data packet to be returned to the requesting node, while updating the corresponding entry in the RIT. If the request misses at any cache level, the forward module forwards the packet to other cache nodes or original servers based on the name-based routing protocol.
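To make the workflow concrete, the following sketch illustrates the three core operations in Python. It is an illustrative sketch only: the class and member names (CacheModule, dram_index, rit, l1, l2) are ours and not taken from any implementation, and replica eviction from DRAM to SSD is deliberately omitted here because replacement is the subject of Section 4.

class CacheModule:
    """Illustrative sketch of the cache module's three core operations (not the actual implementation)."""

    def __init__(self):
        self.dram_index = {}   # content name -> "L1" (DRAM) or "L2" (SSD)
        self.rit = {}          # Replica Information Table: name -> {popularity, redundancy, distance}
        self.l1 = {}           # DRAM replica store
        self.l2 = {}           # SSD replica store (abstracted; block I/O details omitted)

    def insert(self, name, data, attrs):
        """Cache insertion: index the new replica, store it in DRAM, and record its attributes."""
        self.l1[name] = data
        self.dram_index[name] = "L1"
        self.rit[name] = attrs

    def lookup(self, name):
        """Cache hit/miss: return replica data on a hit, or None so the forward module routes the request onward."""
        layer = self.dram_index.get(name)
        if layer is None:
            return None
        data = self.l1[name] if layer == "L1" else self.l2[name]
        self.rit[name]["popularity"] += 1   # update the corresponding RIT entry on a hit
        return data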

4. Hierarchical Cache Management (HCM) Policy

The key to hierarchical cache management lies in how to allocate cache levels for replicas based on the read and write characteristics of different storage media, ensuring the efficient utilization of cache resources at each level. At the same time, it is necessary to design tailored replica replacement policies for each cache layer based on its differentiated workload characteristics, in order to improve the hit rate of the caching system.
The hierarchical cache management (HCM) policy designed in this paper is shown in Figure 2. DRAM serves as the upper-level cache, responsible for caching new replicas, aiming to fully utilize DRAM’s high memory bandwidth to improve the overall I/O throughput of the cache system. A cascading connection is used between the upper-level DRAM cache and the lower-level SSD cache, meaning the write load to the SSD cache mainly originates from the replica replacements in the DRAM cache, ensuring the exclusivity of the layered cache and efficient use of cache resources. To further reduce the write load on the SSD cache, a cache admission filter is placed in front of the SSD cache. This filter uses information such as replica popularity and redundancy to filter the write load, thereby reducing the write/erase cycle frequency and wear-related operational costs of SSDs.
Considering the filtering effect of the upper-level cache on request loads, differentiated replica replacement policies are adopted between each cache layer to accommodate their respective request arrival patterns. Specifically, considering the temporal locality [45] characteristics of the request sequence, each cache level adopts LRU as the base management policy to improve the hit rate of short-term popular data. For the heavy-tailed distribution of data popularity [46], an SSD filter is introduced to further filter out long-term popular data replicas. In addition, to address the diverse optimization objectives of cache management, the SSD cache employs a multi-queue replica management scheme based on cache utility, optimizing the overall cache performance. In the following sections, we will provide a detailed description of the three key components of HCM: DRAM cache replica management, SSD cache admission filtering, and SSD cache replica management.

4.1. DRAM Cache Replica Management

The replicas in the DRAM cache are managed by an LRU queue (denoted as Q_d), with LRU serving to capture the temporal locality of requests. However, DRAM can only provide cache capacity in the range of tens of gigabytes. In heavy traffic scenarios, cache replacement in Q_d occurs very frequently, leading to short residency times for replicas in the cache, making it difficult for them to be hit by subsequent requests. To address this, we allocate a portion of DRAM space for a virtual cache queue (denoted as Q_v), which uses an LRU queue to track the replicas recently replaced in Q_d. The purpose of Q_v is to observe the request access patterns of replicas over a longer time scale. The term “virtual” indicates that the replica data in Q_v are not actually stored in DRAM; it is only used to maintain the historical cache replacement records of DRAM.
The Replica Information Table (RIT) in Figure 1 is used to record the attribute information corresponding to each locally cached replica, including but not limited to the replica’s popularity, redundancy, and cache distance. Specifically, the popularity value can be set as the number of request hits for the replica at the current node, the redundancy value can be set as the number of replicas cached upstream of the node, and the cache distance is used to record information such as the number of hops or the delay between the current node and upstream service nodes. When a data request hits any node, the corresponding content attributes of the replica are retrieved from its local RIT and added to the header fields of the returned data packet. Along the content delivery path, intermediate nodes can read and update these header fields, such as incrementing the cache distance field hop-by-hop or increasing the redundancy field value by 1 after the data are cached locally.
When a new replica is inserted into Q_d, it is directly placed at the tail of Q_d according to the LRU rule, and its corresponding popularity and redundancy information is recorded in the local RIT, as described earlier. When a new request hits a replica in Q_d, the replica is first moved to the tail of Q_d, and its local popularity information is updated accordingly. When a replacement occurs in Q_d, the replaced replica is further processed by the SSD cache admission filter described in Section 4.2. Specifically, replicas that pass through the filter are written to the SSD cache, while those that do not pass the filter are removed from DRAM, and the corresponding entry is inserted into the virtual cache queue Q_v.
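The DRAM-layer behaviour described above can be summarized in a minimal sketch. This is an illustration under our own naming (DramCache, ssd_admit, ssd_insert), assuming Python OrderedDicts stand in for the LRU queues Q_d and Q_v and that the SSD admission filter and SSD write path of Section 4.2 are supplied as callables.

from collections import OrderedDict

class DramCache:
    """Illustrative sketch of DRAM replica management with LRU queue Q_d and virtual queue Q_v."""

    def __init__(self, capacity, v_capacity, ssd_admit, ssd_insert):
        self.q_d = OrderedDict()       # name -> replica data (replicas actually held in DRAM)
        self.q_v = OrderedDict()       # name -> None (eviction history only, no replica data stored)
        self.capacity = capacity
        self.v_capacity = v_capacity
        self.ssd_admit = ssd_admit     # admission filter from Section 4.2, returns True/False
        self.ssd_insert = ssd_insert   # writes an admitted replica to the SSD cache
        self.rit = {}                  # Replica Information Table: name -> attribute dict

    def insert(self, name, data, attrs):
        """Insert a new replica at the LRU tail and record its attributes in the RIT."""
        self.q_d[name] = data
        self.rit[name] = attrs                                   # popularity, redundancy, cache distance
        if len(self.q_d) > self.capacity:
            victim, victim_data = self.q_d.popitem(last=False)   # evict from the LRU head
            if self.ssd_admit(victim, self.rit):
                self.ssd_insert(victim, victim_data)
            else:
                self.q_v[victim] = None                          # only remember that it was evicted
                if len(self.q_v) > self.v_capacity:
                    self.q_v.popitem(last=False)

    def hit(self, name):
        """On a DRAM hit, refresh the LRU position and update local popularity."""
        self.q_d.move_to_end(name)
        self.rit[name]["popularity"] += 1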

4.2. SSD Cache Admission Filter

SSDs have a limited lifespan, which necessitates minimizing their write/erase cycle frequency to reduce wear-related operational costs. To decrease the write load on SSDs while ensuring that cache hit rates in the network remain optimal, careful selection of replicas to be inserted into the SSD cache is required. To achieve this, we leverage statistical information on replica popularity and consider the cache redundancy characteristics in ICN. We have designed a cache admission filter based on probabilistic insertion, which effectively reduces both SSD write operations and cache replacement rates. At the same time, by filtering out content with low cache utility, the overall cache hit rate is improved.
In our design, the write load on the SSD cache consists of two main components. One part of the load comes from cache hits in the DRAM virtual queue (Q_v). Specifically, when a request hits Q_v, it indicates that the corresponding replica has a short request interval, which typically suggests that the replica has high popularity or strong temporal locality. As a result, when the data for this replica are returned, we directly insert them into the SSD cache. The other part of the load comes from the cache replacement process in the DRAM cache queue (Q_d). The SSD cache admission filter checks these replicas and, based on the filtering results, probabilistically inserts them into the SSD cache. The following sections will provide a detailed description of the three main processes of the SSD cache filter: popular replica selection, redundant replica filtering, and cache load tuning.

4.2.1. Popular Replica Selection

To increase the probability of cache hits for subsequent requests, replicas of content with higher popularity should be prioritized for caching [47]. This is a fundamental consensus in cache management strategy design. During the popular replica selection process, the filter maintains a popularity threshold (denoted as T_pop) to indicate the minimum request count that a replica must meet before being written to the SSD. Its purpose is to filter out content that is requested only once [48]. For replicas replaced in the DRAM cache queue (Q_d), the result of the popularity filtering is represented by a 0–1-valued Boolean variable b_m, as shown in Equation (1), where pop_m denotes the popularity of replica m at the current node. Specifically, when b_m is 1, it indicates that the replica passes the popularity filter, and the subsequent filtering process can continue. When b_m is 0, it means the replica fails the popularity filter, and no further processing will be carried out.
b_m = { 0, if pop_m < T_pop; 1, if pop_m ≥ T_pop }        (1)

4.2.2. Redundant Replica Filtering

The distributed nature of ICN in-network caching inevitably leads to cache redundancy issues among cache nodes. Under limited cache capacity, higher redundancy means lower diversity of cached replicas, which limits the improvement of cache hit rates. To address this, we leverage implicit cooperation among on-path nodes during data transmission, giving each node a degree of replica redundancy awareness. As described in Section 4.1, the RIT records redundancy information for each locally cached replica, which indicates the number of replicas cached by upstream nodes at the time the replica is cached at the current node. It is important to note that although the cache status of upstream nodes is dynamic, it still provides an effective estimate of replica redundancy. Specifically, for a replica m that has passed the popularity filtering, let its redundancy value be r_m. The redundancy filtering result for this replica can be represented by the cache admission probability p_m, as shown in Equation (2).
p_m = 1 / r_m        (2)

4.2.3. Cache Load Tuning

The data transfer speeds of SSD and DRAM differ significantly, and the write load on the SSD must be carefully controlled to avoid data loss. Although the number of replicas written to the SSD has been greatly reduced through the filtering process described above, there may still be some cache redundancy between nodes on different transmission paths, which could lead to decreased cache utilization efficiency. To address this, we introduce a load tuning function, ω(t), applied on top of the cache admission probability described above, with a value range of (0, 1). The purpose of ω(t) is to further adjust the SSD replica admission probability to ensure that the SSD write load does not exceed its I/O bandwidth. Specifically, ω(t) can be dynamically adjusted based on the write load at time t or set to a fixed value (e.g., 0.1).
Based on the above, for any replica m replaced in the DRAM cache queue (Q_d), its SSD admission probability P_m can be expressed by Equation (3), where p_m is given by Equation (2).
P_m = p_m × ω(t)        (3)
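As a purely illustrative example with hypothetical numbers: suppose a replica evicted from Q_d has been requested pop_m = 3 times locally, the SSD is full, T_pop = 2, the replica is already cached by r_m = 4 upstream nodes, and ω(t) is fixed at 0.1. The replica passes the popularity filter since pop_m ≥ T_pop, its redundancy-based admission probability is p_m = 1/4 = 0.25, and its final SSD admission probability is P_m = 0.25 × 0.1 = 0.025; on average, roughly one in forty such evictions would be written to the SSD.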
Algorithm 1 summarizes the entire process of SSD cache filtering described above.
Algorithm 1: SSD Cache Filtering
Output: SSD cache filter result res
Initialization: m ← null, res ← False
if DRAM cache replacement occurs then
    m ← Get replaced replica based on LRU
end if
P_m ← 1.0
if SSD cache space not full then
    ω(t) ← Get cache load tuning value
    P_m ← P_m × ω(t)
else
    pop_m ← Get replica popularity from local RIT
    if pop_m ≥ T_pop then
        r_m ← Get replica redundancy from local RIT
        P_m ← P_m × 1/r_m
        ω(t) ← Get cache load tuning value
        P_m ← P_m × ω(t)
    else
        P_m ← 0.0
    end if
end if
p ← Generate a random number in (0, 1)
if p < P_m then
    res ← True
end if
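For concreteness, the filtering logic can also be written as a short Python function. The sketch below is illustrative only: the RIT access pattern, the guard against a zero redundancy value, and the fixed ω(t) = 0.1 (the value later used in Section 5.1) are our assumptions, and the direct-insertion shortcut for replicas that hit the virtual queue Q_v (described at the start of Section 4.2) is folded in as an extra argument.

import random

T_POP = 2      # popularity threshold T_pop (value used in Section 5.1)
OMEGA = 0.1    # fixed cache load tuning value ω(t) (value used in Section 5.1)

def ssd_admit(name, rit, ssd_full, hit_in_virtual_queue=False):
    """Return True if a replica evicted from Q_d should be written to the SSD cache.

    rit[name] is assumed to be a dict holding 'popularity' and 'redundancy' fields.
    """
    if hit_in_virtual_queue:
        return True                                    # short request interval: insert directly
    admission_prob = 1.0
    if not ssd_full:
        admission_prob *= OMEGA                        # SSD not full: only load tuning applies
    else:
        if rit[name]["popularity"] < T_POP:            # popularity filter, Equation (1)
            return False
        redundancy = max(rit[name]["redundancy"], 1)   # guard against zero (our assumption)
        admission_prob *= 1.0 / redundancy             # redundancy filter, Equation (2)
        admission_prob *= OMEGA                        # load tuning, Equation (3)
    return random.random() < admission_prob            # probabilistic insertion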

4.3. SSD Cache Replica Management

The write frequency of replicas in SSDs is much lower than that in DRAM, which allows for more refined replica management with relatively low overhead. To achieve this, we have designed a multi-queue cache replacement policy based on cache utility, where the cache utility can be customized by network administrators according to different cache optimization objectives. The multi-queue design allows replicas with different utility values to be provided with different levels of cache replacement frequency, thereby improving the overall cache performance. Specifically, replicas cached in the SSD are managed using two LRU queues, denoted as Q_s^H and Q_s^L. Q_s^H is used to maintain replicas with high cache utility values, while Q_s^L is used to maintain those with low cache utility values. The two LRU queues are linked end-to-end, so that replicas replaced in Q_s^H can be directly demoted to Q_s^L for continued caching, thereby extending the residency time of high-utility replicas in the cache. Replicas replaced in Q_s^L are simply removed from the SSD cache.
Before a new replica is written to the SSD cache, a classifier determines the target queue for insertion. Specifically, the classifier is responsible for maintaining a cache utility threshold (denoted as T_utility) and dynamically updating this threshold during the cache management process. For each replica to be written to the SSD cache, its corresponding utility value is calculated using a predefined cache utility function. Based on whether its utility value exceeds T_utility, the replica is inserted into the corresponding LRU queue. Each new replica insertion triggers an update to T_utility to ensure it always reflects the real-time status of the SSD cache. The cache utility function can be customized according to different cache optimization goals, such as the replica’s popularity, hop count, network traffic cost, etc. It should be noted that the utility threshold should be adjusted based on the utility function, ensuring that the number of replicas written to each queue aligns with the design objectives. Algorithm 2 provides a simple implementation, where the threshold is set as the mean of the maximum and minimum observed utilities. This results in significantly fewer replicas being written to Q_s^H compared to Q_s^L. As a result, Q_s^H experiences a lower replacement rate, which allows high-utility replicas to stay in the cache for a longer period.
When a new request hits the SSD cache, the corresponding replica is first moved to the tail of the LRU queue it resides in, and its cache utility value is updated. If the replica is in Q_s^L, it is further checked whether its updated utility value exceeds T_utility. If so, the replica is promoted to Q_s^H. Similarly, the update of each replica’s utility value also triggers an update to T_utility. An example of the update rule for T_utility is presented in Algorithm 2, while Algorithm 3 summarizes the SSD cache replica management algorithm described above.
Algorithm 2: Update_utility_threshold(u)
Input: New/updated replica cache utility u
Output: T_utility
if T_utility has never been set then
    U_max ← u
    U_min ← u
    T_utility ← u
else
    U_max ← max(U_max, u)
    U_min ← min(U_min, u)
    T_utility ← (U_max + U_min) / 2.0
end if
return T_utility
Algorithm 3: SSD Cache Management
Initialization: T_utility ← 0
# cache insert process
u_m ← Get cache utility of replica m
if Q_s^H is not full then
    Add m to the tail of Q_s^H
else
    if u_m ≥ T_utility then
        Add m to the tail of Q_s^H
    else
        Add m to the tail of Q_s^L
    end if
end if
Update_utility_threshold(u_m)

# cache hit process
if SSD cache hit occurs then
    n ← Get hit replica
end if
u_n ← Update cache utility of replica n
if n in Q_s^H then
    Move n to the tail of Q_s^H
else
    if u_n ≥ T_utility then
        Move n to the tail of Q_s^H
    else
        Move n to the tail of Q_s^L
    end if
end if
Update_utility_threshold(u_n)
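The two-queue SSD management of Algorithms 2 and 3 can likewise be rendered as a compact sketch. This is an illustrative Python version under our own naming (SsdCache, utility_fn); the utility function is left as a pluggable callable because the policy deliberately keeps it configurable, and capacity bookkeeping is simplified to replica counts.

from collections import OrderedDict

class SsdCache:
    """Illustrative sketch of utility-based SSD replica management (Algorithms 2 and 3)."""

    def __init__(self, high_capacity, low_capacity, utility_fn):
        self.q_high = OrderedDict()    # Q_s^H: high-utility LRU queue
        self.q_low = OrderedDict()     # Q_s^L: low-utility LRU queue
        self.high_capacity = high_capacity
        self.low_capacity = low_capacity
        self.utility_fn = utility_fn   # pluggable cache utility function (popularity, hops, cost, ...)
        self.u_max = None
        self.u_min = None
        self.threshold = 0.0           # T_utility

    def _update_threshold(self, u):
        """Algorithm 2: keep T_utility at the midpoint of the observed utility range."""
        if self.u_max is None:
            self.u_max = self.u_min = self.threshold = u
        else:
            self.u_max = max(self.u_max, u)
            self.u_min = min(self.u_min, u)
            self.threshold = (self.u_max + self.u_min) / 2.0

    def insert(self, name, data):
        """Cache insert: classify the admitted replica into Q_s^H or Q_s^L by its utility."""
        u = self.utility_fn(name)
        if len(self.q_high) < self.high_capacity or u >= self.threshold:
            self.q_high[name] = data
        else:
            self.q_low[name] = data
        self._update_threshold(u)
        self._evict_if_needed()

    def hit(self, name):
        """Cache hit: refresh the LRU position and promote from Q_s^L if utility now exceeds T_utility."""
        u = self.utility_fn(name)                      # updated utility of the hit replica
        if name in self.q_high:
            self.q_high.move_to_end(name)
        elif u >= self.threshold:
            self.q_high[name] = self.q_low.pop(name)   # promote to Q_s^H
        else:
            self.q_low.move_to_end(name)
        self._update_threshold(u)
        self._evict_if_needed()

    def _evict_if_needed(self):
        """Q_s^H evictions are demoted to Q_s^L (queues linked end-to-end); Q_s^L evictions leave the SSD."""
        while len(self.q_high) > self.high_capacity:
            victim, data = self.q_high.popitem(last=False)
            self.q_low[victim] = data
        while len(self.q_low) > self.low_capacity:
            self.q_low.popitem(last=False)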

5. Simulation Results and Analysis

5.1. Simulation Environment Setup

To evaluate the performance of the HCM policy, we conducted extensive comparative experiments using the Icarus simulator [16]. Based on the Icarus v0.8.0 source code, we implemented all the functions of HCM. Additionally, for the hierarchical cache architecture scenario, we made corresponding modifications to the original LRU and LFU. Furthermore, we selected two comparison schemes, uCache [15] and Probationary Insertion (denoted as PI) [10], and implemented their relevant functionalities in Icarus. A brief introduction to them is provided below. The Icarus parameters used in the experiments are shown in Table 1.
  • PI (Probationary Insertion): the DRAM and SSD caches are managed using two separate LRU queues. For the content evicted from the DRAM cache, if it has been accessed at least once during its time in DRAM, it is written to the SSD cache;
  • uCache: it uses a two-level LRU queue to manage the DRAM cache. Content that hits in the first-level LRU queue is directly promoted to the second-level LRU queue, while content evicted from the second-level LRU queue is directly written to the SSD cache. The SSD cache is managed using a single LRU queue. Additionally, the design includes a ghost buffer, which inserts content that was previously accessed but is now deleted directly into the second-level queue, allowing for the observation of data access patterns over a longer time period.
We selected the GARR topology [49] as the network topology for the simulation, which consists of 21 receiver nodes, 27 cache nodes, and 13 server nodes. To visualize the topology, we used Gephi [50], as shown in Figure 3, where white nodes represent cache nodes, gray nodes represent receiver nodes, and yellow nodes represent server nodes. In addition, the bandwidth capacity of all links is set to 1 Gbps. The cache capacity is uniformly distributed across all cache nodes, with a DRAM-to-SSD capacity ratio of 1:10 for each cache node. All content is randomly assigned to a server node as its origin repository, and the size of each content item is set to 2 MB.
Additionally, HCM requires the manual configuration of certain parameters, namely the filter’s popularity threshold T_pop and the cache load tuning function ω(t). In our experiments, we tested various T_pop values and found that setting T_pop to 2 achieves optimal cache performance in most cases. This is because it effectively filters out a large portion of one-time requests. However, further increasing T_pop reduces the write load on the SSD but negatively impacts the cache hit rate. Therefore, T_pop is fixed at 2 in the subsequent experiments. As for ω(t), we simplified its configuration by setting it to a constant value of 0.1. Experimental results demonstrate that this value not only keeps the SSD write load at a very low level but also ensures good cache performance.

5.2. Simulation Results Analysis

In this section, we design three sets of experiments to evaluate the performance of HCM. Experiment 1 tests the applicability of different policies under different input traffic loads. Experiment 2 tests the performance of various replacement policies integrated with different cache placement strategies. Experiment 3 compares the scalability of different replacement policies by adjusting cache capacity. Each experiment is repeated 5 times, and the average is taken as the final result. A 95% confidence interval is also calculated for each metric to assess the statistical significance of the results. In each subsection, we will introduce the specific settings of each experiment. Below are the three cache performance evaluation metrics used in this section.
(1)
Cache Hit Rate
Cache hit rate is one of the key metrics for measuring the performance of a caching system. It represents the proportion of requests serviced by the cache system out of the total number of requests. A higher cache hit rate means that more requests can be served directly from the cache, eliminating the need to access the original server. This not only shortens data response distances but also reduces network latency and bandwidth consumption. By improving the cache hit rate, cache resources can be utilized more efficiently, which in turn enhances overall system performance and optimizes the user experience;
(2)
Network Link Load
Network link load refers to the amount of data transmitted over each link per unit of time. It is a key metric for evaluating the caching system’s effectiveness in reducing network traffic costs. Typically, network link load is closely related to the cache hit rate—that is, the higher the cache hit rate, the lower the link load tends to be. This is because the cache system can more effectively utilize cached data, reducing the need to fetch data directly from the original server, thus lowering network bandwidth consumption and improving network scalability. In calculating the link load, we sum the loads of all used links and take the average as the final result, providing a comprehensive assessment of the caching system’s efficiency in utilizing network resources;
(3)
SSD Write Load
SSD write load is represented by the total number of replicas written to the SSD cache. A reasonable SSD write load is crucial for extending the lifespan of the storage device. Cache replacement policies need to improve cache hit rates while minimizing unnecessary write operations, in order to maintain the health and long-term performance of the storage device. In calculating the SSD write load, we record the total number of replicas written to the SSD cache across all nodes during the simulation process.
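For reference, the three metrics can be computed from simple per-simulation counters, as in the sketch below (the variable names are ours; Icarus provides its own results collectors, which we do not reproduce here).

def cache_hit_rate(cache_hits, total_requests):
    """Fraction of requests served by any cache layer in the network."""
    return cache_hits / total_requests

def mean_link_load(bytes_per_link, duration):
    """Average per-link load: bytes carried by each used link per unit time, averaged over links."""
    loads = [b / duration for b in bytes_per_link.values()]
    return sum(loads) / len(loads)

def total_ssd_write_load(ssd_writes_per_node):
    """Total number of replicas written to the SSD cache across all cache nodes."""
    return sum(ssd_writes_per_node.values())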

5.2.1. Results Analysis Across Different Traffic Groups

To simulate scenarios with mixed traffic types in real-world networks, we use the traffic generator Globetraff [51], which generates a mixed traffic workload, including application types such as Web, P2P, and Video, based on specified traffic ratios and models. To validate the applicability of HCM to different traffic loads, we use Globetraff to generate four sets of traffic workloads with different mix ratios (as shown in Table 2), with the model parameters for each traffic type listed in Table 3. Data objects in ICN networks are typically divided into fixed-size data chunks [3,4]. However, there is currently no strict standardization for the size of ICN data chunks, leading to significant variations in the chunk sizes published by different content providers. To simplify the experiments and analysis, we set the size of data objects for all traffic types in the experiment to 2 MB. In addition, all other unspecified parameters are consistent with the settings in reference [51].
In this experiment, we tested the cache performance of different replacement policies under each traffic group shown in Table 2. To control variables, LCE (Leave Copy Everywhere) [48] was used as the default cache placement strategy, which caches a copy of the content at each hop node along the content delivery path. The cache capacity was set to 5% of the total number of contents. The experimental results are shown in Figure 4.
Figure 4a shows that under different input traffic loads, HCM consistently achieves the highest cache hit rate. In contrast, classical replacement policies such as LRU and LFU exhibit significant performance differences across different traffic loads, indicating that they are only suitable for specific application scenarios. For example, in traffic group A, where the proportion of Web traffic is low and the request temporal locality is weak, LRU performs poorly. However, in traffic group D, as the proportion of Web traffic increases, LRU’s performance improves significantly. Conversely, LFU shows an entirely opposite trend in performance across these two traffic groups. The PI policy, which only inserts replicas that hit in the DRAM cache into the SSD, struggles to effectively handle scenarios with longer request intervals in Video and other traffic types, resulting in a lower hit rate. In contrast, both uCache and HCM more effectively monitor changes in replica popularity by introducing virtual queues to record the most recently evicted replicas from the DRAM cache. HCM outperforms uCache in several aspects. First, HCM facilitates more accurate SSD cache admission filtering by transmitting content popularity information between nodes, ensuring that popular replicas are identified more accurately. Second, HCM introduces a redundancy estimation mechanism, further improving the hit rate. These improvements enable HCM to demonstrate outstanding cache performance across all traffic loads.
Figure 4b presents the experimental results for the network link load, which shows a significant negative correlation with the cache hit rate: the higher the cache hit rate, the lower the network link load. This is because a higher cache hit rate allows more data requests to be served directly by cache nodes that are closer, greatly reducing the amount of data transmitted through the network links. In addition, shorter response distances not only reduce bandwidth consumption but also decrease response latency, which is particularly important for latency-sensitive applications. Due to its advantage in cache hit rate, HCM consistently achieves the lowest link load across all traffic loads, significantly optimizing bandwidth utilization and improving overall network performance.
Figure 4c clearly demonstrates the important role of SSD cache admission filtering in reducing SSD write load. Since the LRU and LFU policies directly write replicas to the SSD cache without any filtering, this leads to a very high SSD write load, which not only accelerates the SSD wear and significantly shortens its lifespan but also increases the processing overhead on the cache nodes. In contrast, the PI, uCache, and HCM policies significantly reduce the SSD write load by introducing filtering mechanisms. Specifically, the PI policy has the lowest write load but may lead to the issue of underutilized SSD resources. The write load of HCM is slightly higher than that of uCache, primarily because uCache determines replica popularity based solely on local statistics, while HCM more accurately identifies popular replicas by utilizing feedback from the service nodes, resulting in better caching performance. Nevertheless, HCM’s write load remains at a relatively low level, balancing efficient SSD usage with improved cache performance.

5.2.2. Results Analysis Based on Real-World Traces

To further verify the effectiveness of HCM in real network scenarios, we use the real YouTube traffic load collected by [53], which includes a set of traces collected on different dates, corresponding to different monitoring durations, and different numbers of requests and video clips. We selected three sets of traces, and their main parameters are shown in Table 4. Similar to the settings in Section 5.2.1, we use LCE as the default cache placement strategy and set the cache capacity to 5% of the total content. Additionally, to simplify the analysis, we set the size of all video clips to a uniform 2 MB. The experimental results are shown in Figure 5.
As shown in Figure 5a, LFU exhibits the lowest cache hit rate across different traces. This is because, compared to the synthetic traces from GlobeTraff, the real YouTube trace has a less skewed distribution of popularity, reducing LFU’s effectiveness. Additionally, the traces in this experiment were collected over a concentrated period, resulting in strong temporal locality in the request sequence, which explains why LRU achieves a higher hit rate. Since HCM uses LRU as its base management queue, its hit rate is similarly satisfactory and slightly outperforms PI and uCache. This result is consistent with the findings from the experiment using synthetic traces generated by GlobeTraff (see traffic group D in Figure 4a): when traces exhibit strong temporal locality, LRU’s advantage is amplified, while LFU’s performance declines. The advantage of HCM lies in its ability to maintain stable performance across different traffic loads. Figure 5b shows the network link load, which is basically inversely correlated with the cache hit rate. Figure 5c illustrates the SSD write load. Although HCM does not significantly outperform LRU in terms of cache performance, it achieves the same results with fewer SSD write operations.

5.2.3. Results Analysis Under Different Cache Placement Strategies

The replica placement strategy controls both the location and quantity distribution of replicas, while the replica replacement policy is responsible for managing the local cached replicas. Together, they determine the overall distribution of cached replicas within the network. A good replacement policy should maintain stable performance across different placement strategies. To this end, we selected several typical replica placement strategies, including LCE, LCD (Leave Copy Down) [48], CL4M (Cache Less for More) [54], Random [16], and ProbCache [55], and studied their cache performance when combined with different replacement policies. In this experiment, we used traffic group B from Table 2 and set the cache capacity to 5% of the total number of contents. The following is an introduction to these cache placement strategies.
  • LCE: a copy of the content is cached at every hop node along the content delivery path;
  • LCD: content is cached only at the downstream one-hop node of the current service node;
  • CL4M: among all the nodes along the content delivery path, the node with the highest betweenness centrality is selected to cache the content;
  • Random: a node along the content delivery path is randomly selected to cache the content;
  • ProbCache: content is cached probabilistically at each node along the content delivery path. The caching probability at each node is determined based on two key parameters: T i m e s I n (which approximates the caching capability of the path based on traffic load) and C a c h e W e i g h t (which reflects the router’s distance from the user). These factors guide the content caching decision to optimize cache resource utilization and reduce redundancy.
As shown in Figure 6a, there are significant differences in cache hit rates for different replacement policies under various placement strategies. For example, under the LCE strategy, LFU outperforms LRU, while under the LCD and CL4M strategies, LFU’s performance significantly declines, and LRU performs the best. Nevertheless, HCM achieves satisfactory results under all placement strategies, indicating that it can operate stably across different placement strategies. Notably, HCM combined with Random achieves the best cache hit rate across all combinations, suggesting that good cache performance can be obtained even with simple placement decisions. This reduces the complexity of caching decisions and helps optimize the forwarding performance of ICN routers.
Figure 6b shows that HCM consistently achieves the lowest network link load across different scenarios, indicating that HCM can effectively reduce network traffic costs through closer cache hit distances. Notably, under the LCD and CL4M strategies, although HCM’s hit rate is slightly lower than LRU’s, it still results in lower network link load. This suggests that HCM, through SSD cache filtering and effective SSD cache management, can prioritize retaining replicas with higher cache utility, further reducing link load. Figure 6c presents the experimental results for SSD write load. Since LRU and LFU lack SSD write filtering, their write load is mainly determined by the placement strategy, resulting in similar write load levels for LCD, CL4M, and Random. In contrast, HCM, uCache, and PI maintain very low write loads across different placement strategies.

5.2.4. Results Analysis Under Different Cache Capacities

As the number of cache nodes increases or storage capacity expands, the cache capacity will increase accordingly. Therefore, the cache management policy should have good scalability to ensure that, as cache capacity grows, both the number of cached replicas and the cache hit rate can gradually improve. In this section, we evaluate the performance of different replacement policies under various cache capacity settings. The x-axis in the figures represents the proportion of cache capacity relative to the total number of contents. Considering that the trends observed under different traffic loads and cache placement strategies are roughly the same, we selected traffic groups B and C from Table 2 as input workloads and set the cache placement strategy to LCE.
As shown in Figure 7, with the increase in cache capacity, the cache hit rate of all replacement policies improves, primarily due to the increased diversity of replicas in the cache. Under both traffic loads, HCM consistently achieves the highest hit rate across different cache capacities, demonstrating its broad applicability and excellent scalability in various application scenarios. In contrast, LFU shows significant differences in performance under different traffic loads. Under traffic group C, LFU’s cache hit rate is notably lower than that of other strategies, mainly because this group has a higher proportion of Web traffic, which leads to strong temporal locality between requests. However, LFU demonstrates a higher growth rate as cache capacity increases. This is because, with more cache capacity, LFU can retain more popular replicas, and in scenarios where popularity follows a heavy-tailed distribution, it can hit more requests.
As shown in Figure 8, with the increase in cache capacity, the link load of all replacement policies decreases, and overall, there is a negative correlation with the cache hit rate. Under both traffic loads, HCM consistently achieves the lowest link load. Although LFU’s cache hit rate grows faster as cache capacity increases (as shown in Figure 7), HCM still maintains a significant advantage in terms of link load. In addition, compared to uCache, HCM’s link load decreases more rapidly as cache capacity increases, thanks to its faster cache hit rate growth. The multi-queue replica management mechanism in HCM allows for more efficient utilization of the expanded cache space, while uCache uses a single LRU queue to manage the SSD cache, leading to limitations in SSD space utilization.
Figure 9 shows how the SSD write load changes as cache capacity increases. The write load of all policies decreases slightly with larger caches, mainly because the higher hit rate and shorter request response distance relieve the overall cache load. The dominant factor, however, remains the SSD admission filtering mechanism: the write load of LFU and LRU stays consistently high, whereas HCM, uCache, and PI filter the replicas evicted from DRAM according to replica popularity and therefore keep the SSD write load consistently low. In addition, HCM provides a cache load tuning option that adjusts the SSD admission rate on demand, further reducing the SSD write load.
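As an illustration of this kind of admission control, the following is a minimal sketch of a popularity-aware, probabilistic SSD admission filter with a tunable admission probability. The class name, threshold, and probability values are assumptions introduced for illustration and do not reproduce HCM's exact implementation.

```python
import random

class SSDAdmissionFilter:
    """Minimal sketch of a probabilistic SSD admission filter.

    A replica evicted from DRAM is written to the SSD layer only if
    (a) its observed request count suggests it is worth keeping, and
    (b) a coin flip with a tunable admission probability succeeds,
    which bounds the SSD write rate under heavy cache load.
    """

    def __init__(self, popularity_threshold=2, admission_prob=0.5):
        self.popularity_threshold = popularity_threshold
        self.admission_prob = admission_prob  # load-tuning knob

    def admit(self, replica_hits):
        """Return True if the evicted DRAM replica should enter the SSD cache."""
        if replica_hits < self.popularity_threshold:
            return False                      # filter out one-hit wonders
        return random.random() < self.admission_prob

# Example: lower the admission probability when the SSD write load is high.
flt = SSDAdmissionFilter(popularity_threshold=2, admission_prob=0.3)
evicted = [("a", 1), ("b", 4), ("c", 7)]      # (content name, hits while in DRAM)
print([name for name, hits in evicted if flt.admit(hits)])
```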

6. Conclusions

This paper proposes an ICN cache management policy for hierarchical cache architectures that fully exploits the advantages of large-capacity SSDs by accounting for the distinct read-write characteristics and workload patterns of each cache layer. First, we design and implement differentiated cache replacement policies for the data access patterns observed at each layer: the DRAM layer uses LRU to cache short-term popular data, while the SSD layer uses a multi-queue replacement policy based on cache utility to prioritize content that yields higher cache benefit. Second, we design a replica admission filtering mechanism for the SSD that uses the redundancy and popularity information of replicas in the ICN cache to filter out low-benefit content, effectively reducing the SSD write load, and thus its operational cost, without sacrificing the cache hit rate. The experiments also show that the cache placement policy has a significant impact on the final results, highlighting a clear interaction between cache placement and replacement. Future work will therefore investigate this interaction further and optimize overall cache performance through the joint design of placement and replacement strategies.
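For readers who prefer a concrete picture of this two-layer flow, the sketch below combines an LRU DRAM front end, a probabilistic popularity-based SSD admission check, and a utility-ordered SSD layer. It is a simplified illustration under stated assumptions (hit count as the utility proxy, a single SSD structure rather than HCM's multi-queue design), and all class and method names are hypothetical.

```python
from collections import OrderedDict
import random

class HierarchicalCacheSketch:
    """Simplified two-layer cache: LRU DRAM front end, utility-ordered SSD layer."""

    def __init__(self, dram_size, ssd_size, admission_prob=0.5):
        self.dram = OrderedDict()          # name -> hits, kept in LRU order
        self.ssd = {}                      # name -> hits (utility proxy)
        self.dram_size, self.ssd_size = dram_size, ssd_size
        self.admission_prob = admission_prob

    def get(self, name):
        if name in self.dram:              # DRAM hit: refresh LRU position
            self.dram.move_to_end(name)
            self.dram[name] += 1
            return True
        if name in self.ssd:               # SSD hit: promote back into DRAM
            self._insert_dram(name, self.ssd.pop(name) + 1)
            return True
        self._insert_dram(name, 1)         # miss: fetch and cache in DRAM
        return False

    def _insert_dram(self, name, hits):
        self.dram[name] = hits
        self.dram.move_to_end(name)
        if len(self.dram) > self.dram_size:
            victim, v_hits = self.dram.popitem(last=False)   # evict LRU victim
            # Probabilistic, popularity-aware SSD admission (illustrative values).
            if v_hits >= 2 and random.random() < self.admission_prob:
                self._insert_ssd(victim, v_hits)

    def _insert_ssd(self, name, hits):
        self.ssd[name] = hits
        if len(self.ssd) > self.ssd_size:
            # Evict the replica with the lowest utility proxy (hit count).
            del self.ssd[min(self.ssd, key=self.ssd.get)]

# Example: replay a tiny request stream and report the overall hit ratio.
cache = HierarchicalCacheSketch(dram_size=2, ssd_size=20)
requests = ["a", "b", "a", "c", "a", "b", "d", "a"]
hits = sum(cache.get(r) for r in requests)
print(f"hit ratio: {hits / len(requests):.2f}")
```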

Author Contributions

Conceptualization, Y.C. and R.H.; methodology, Y.C. and R.H.; software, Y.C.; validation, Y.C. and R.H.; formal analysis, Y.C.; investigation, Y.C.; resources, R.H.; data curation, Y.C.; writing—original draft preparation, Y.C.; writing—review and editing, Y.C. and R.H.; visualization, R.H.; supervision, R.H.; project administration, R.H.; funding acquisition, R.H. All authors have read and agreed to the published version of the manuscript.

Funding

This work was funded by the National Key R&D Program of China: Application Demonstration of Polymorphic Network Environment for Computing from the Eastern Areas to the Western (Project No. 2023YFB2906404).

Data Availability Statement

Data are contained within this article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Ahlgren, B.; Dannewitz, C.; Imbrenda, C.; Kutscher, D.; Ohlman, B. A Survey of Information-Centric Networking. IEEE Commun. Mag. 2012, 50, 26–36. [Google Scholar] [CrossRef]
  2. Wang, J.; Cheng, G.; You, J.; Sun, P. SEANet: Architecture and Technologies of an On-site, Elastic, Autonomous Network. J. Netw. New Media 2020, 9, 1–8. [Google Scholar]
  3. Jacobson, V.; Smetters, D.K.; Thornton, J.D.; Plass, M.F.; Briggs, N.H.; Braynard, R.L. Networking Named Content. In Proceedings of the 5th International Conference on Emerging Networking Experiments and Technologies, Rome, Italy, 1 December 2009; Association for Computing Machinery: New York, NY, USA, 2009; pp. 1–12. [Google Scholar]
  4. Zhang, L.; Afanasyev, A.; Burke, J.; Jacobson, V.; Claffy, K.C.; Crowley, P.; Papadopoulos, C.; Wang, L.; Zhang, B. Named Data Networking. SIGCOMM Comput. Commun. Rev. 2014, 44, 66–73. [Google Scholar] [CrossRef]
  5. Serhane, O.; Yahyaoui, K.; Nour, B.; Moungla, H. A Survey of ICN Content Naming and In-Network Caching in 5G and Beyond Networks. IEEE Internet Things J. 2021, 8, 4081–4104. [Google Scholar] [CrossRef]
  6. Zhang, Z.; Lung, C.-H.; Wei, X.; Chen, M.; Chatterjee, S.; Zhang, Z. In-Network Caching for ICN-Based IoT (ICN-IoT): A Comprehensive Survey. IEEE Internet Things J. 2023, 10, 14595–14620. [Google Scholar] [CrossRef]
  7. Naeem, M.A.; Ullah, R.; Meng, Y.; Ali, R.; Lodhi, B.A. Caching Content on the Network Layer: A Performance Analysis of Caching Schemes in ICN-Based Internet of Things. IEEE Internet Things J. 2022, 9, 6477–6495. [Google Scholar] [CrossRef]
  8. Mutlu, F.V.; Yeh, E. Cost-Aware Joint Caching and Forwarding in Networks with Heterogeneous Cache Resources. arXiv 2023, arXiv:2310.07243. [Google Scholar]
  9. Rossini, G.; Rossi, D.; Garetto, M.; Leonardi, E. Multi-Terabyte and Multi-Gbps Information Centric Routers. In Proceedings of the IEEE INFOCOM 2014—IEEE Conference on Computer Communications, Toronto, ON, Canada, 27 April–2 May 2014; pp. 181–189. [Google Scholar]
  10. Saino, L. On the Design of Efficient Caching Systems. Ph.D. Thesis, UCL (University College London), London, UK, 2015. [Google Scholar]
  11. Mansilha, R.B.; Saino, L.; Barcellos, M.P.; Gallo, M.; Leonardi, E.; Perino, D.; Rossi, D. Hierarchical Content Stores in High-Speed ICN Routers: Emulation and Prototype Implementation. In Proceedings of the 2nd ACM Conference on Information-Centric Networking, San Francisco, CA, USA, 30 September–2 October 2015; ACM: San Francisco, CA, USA, 2015; pp. 59–68. [Google Scholar]
  12. Takemasa, J.; Koizumi, Y.; Hasegawa, T. Data Prefetch for Fast NDN Software Routers Based on Hash Table-Based Forwarding Tables. Comput. Netw. 2020, 173, 107188. [Google Scholar] [CrossRef]
  13. Ding, L.; Wang, J.; Sheng, Y.; Wang, L. A Split Architecture Approach to Terabyte-Scale Caching in a Protocol-Oblivious Forwarding Switch. IEEE Trans. Netw. Serv. Manag. 2017, 14, 1171–1184. [Google Scholar] [CrossRef]
  14. Pires, S.; Ziviani, A.; Sampaio, L.N. Contextual Dimensions for Cache Replacement Schemes in Information-Centric Networks: A Systematic Review. PeerJ Comput. Sci. 2021, 7, e418. [Google Scholar] [CrossRef] [PubMed]
  15. Jiang, D.; Che, Y.; Xiong, J.; Ma, X. uCache: A Utility-Aware Multilevel SSD Cache Management Policy. In Proceedings of the 2013 IEEE 10th International Conference on High Performance Computing and Communications & 2013 IEEE International Conference on Embedded and Ubiquitous Computing, Zhangjiajie, China, 13–15 November 2013; pp. 391–398. [Google Scholar]
  16. Saino, L.; Psaras, I.; Pavlou, G. Icarus: A Caching Simulator for Information Centric Networking (ICN). In Proceedings of the 7th International ICST Conference on Simulation Tools and Techniques, Lisbon, Portugal, 17–19 March 2014; ICST (Institute for Computer Sciences, Social-Informatics and Telecommunications Engineering): Brussels, Belgium, 2014; pp. 66–75. [Google Scholar]
  17. Li, H.; Zhou, H.; Quan, W.; Feng, B.; Zhang, H.; Yu, S. HCaching: High-Speed Caching for Information-Centric Networking. In Proceedings of the GLOBECOM 2017—2017 IEEE Global Communications Conference, Singapore, 4–8 December 2017; pp. 1–6. [Google Scholar]
  18. Taniguchi, K.; Takemasa, J.; Koizumi, Y.; Hasegawa, T. A Method for Designing High-Speed Software NDN Routers. In Proceedings of the 3rd ACM Conference on Information-Centric Networking, Kyoto, Japan, 26–28 September 2016; Association for Computing Machinery: New York, NY, USA, 2016; pp. 203–204. [Google Scholar]
  19. Takemasa, J.; Koizumi, Y.; Hasegawa, T. Toward an Ideal NDN Router on a Commercial Off-the-Shelf Computer. In Proceedings of the 4th ACM Conference on Information-Centric Networking, Berlin, Germany, 26–28 September 2017; Association for Computing Machinery: New York, NY, USA, 2017; pp. 43–53. [Google Scholar]
  20. Pan, T.; Lin, X.; Song, E.; Xu, C.; Zhang, J.; Li, H.; Lv, J.; Huang, T.; Liu, B.; Zhang, B. NB-Cache: Non-Blocking In-Network Caching for High-Performance Content Routers. IEEE/ACM Trans. Netw. 2021, 29, 1976–1989. [Google Scholar] [CrossRef]
  21. Reali, G.; Femminella, M. Two-Layer Network Caching for Different Service Requirements. Future Internet 2021, 13, 85. [Google Scholar] [CrossRef]
  22. Zhang, L.; Kong, Y.; Wu, Y.; Cheng, M. Hierarchical Cache-Aided Networks for Linear Function Retrieval. Entropy 2024, 26, 195. [Google Scholar] [CrossRef]
  23. Appuswamy, R.; van Moolenbroek, D.C.; Tanenbaum, A.S. Cache, Cache Everywhere, Flushing All Hits down the Sink: On Exclusivity in Multilevel, Hybrid Caches. In Proceedings of the 2013 IEEE 29th Symposium on Mass Storage Systems and Technologies (MSST), Long Beach, CA, USA, 6–10 May 2013; pp. 1–14. [Google Scholar]
  24. Podlipnig, S.; Böszörmenyi, L. A Survey of Web Cache Replacement Strategies. ACM Comput. Surv. 2003, 35, 374–398. [Google Scholar] [CrossRef]
  25. Arlitt, M.; Friedrich, R.; Jin, T. Performance Evaluation of Web Proxy Cache Replacement Policies. Perform. Eval. 2000, 39, 149–164. [Google Scholar] [CrossRef]
  26. O’Neil, E.J.; O’Neil, P.E.; Weikum, G. The LRU-K Page Replacement Algorithm for Database Disk Buffering. SIGMOD Rec. 1993, 22, 297–306. [Google Scholar] [CrossRef]
  27. Martina, V.; Garetto, M.; Leonardi, E. A Unified Approach to the Performance Analysis of Caching Systems. In Proceedings of the IEEE INFOCOM 2014—IEEE Conference on Computer Communications, Toronto, ON, Canada, 27 April–2 May 2014; pp. 2040–2048. [Google Scholar]
  28. Gomaa, H.; Messier, G.G.; Williamson, C.; Davies, R. Estimating Instantaneous Cache Hit Ratio Using Markov Chain Analysis. IEEE/ACM Trans. Netw. 2013, 21, 1472–1483. [Google Scholar] [CrossRef]
  29. Megiddo, N.; Modha, D.S. ARC: A Self-Tuning, Low Overhead Replacement Cache. In Proceedings of the 2nd USENIX Conference on File and Storage Technologies (FAST 03), San Francisco, CA, USA, 31 March–2 April 2003. [Google Scholar]
  30. Lee, D.; Choi, J.; Kim, J.-H.; Noh, S.H.; Min, S.L.; Cho, Y.; Kim, C.S. LRFU: A Spectrum of Policies That Subsumes the Least Recently Used and Least Frequently Used Policies. IEEE Trans. Comput. 2001, 50, 1352–1361. [Google Scholar] [CrossRef]
  31. Johnson, T.; Shasha, D. 2Q: A Low Overhead High Performance Buffer Management Replacement Algorithm. In Proceedings of the 20th International Conference on Very Large Data Bases, Santiago de Chile, Chile, 12–15 September 1994; Morgan Kaufmann Publishers, Inc.: San Francisco, CA, USA, 1994; pp. 439–450. [Google Scholar]
  32. Cao, P.; Irani, S. Cost-Aware WWW Proxy Caching Algorithms. In Proceedings of the USENIX Symposium on Internet Technologies and Systems (USITS 97), Monterey, CA, USA, 8–11 December 1997. [Google Scholar]
  33. Shim, J.; Scheuermann, P.; Vingralek, R. Proxy Cache Algorithms: Design, Implementation, and Performance. IEEE Trans. Knowl. Data Eng. 1999, 11, 549–562. [Google Scholar] [CrossRef] [PubMed]
  34. Wang, J.M.; Bensaou, B. Improving Content-Centric Networks Performance with Progressive, Diversity-Load Driven Caching. In Proceedings of the 2012 1st IEEE International Conference on Communications in China (ICCC), Beijing, China, 15–18 August 2012; pp. 85–90. [Google Scholar]
  35. Li, Z.; Simon, G.; Gravey, A. Caching Policies for In-Network Caching. In Proceedings of the 2012 21st International Conference on Computer Communications and Networks (ICCCN), München, Germany, 30 July–2 August 2012; pp. 1–7. [Google Scholar]
  36. Newberry, E.; Zhang, B. On the Power of In-Network Caching in the Hadoop Distributed File System. In Proceedings of the 6th ACM Conference on Information-Centric Networking, Macao, China, 24 September 2019; Association for Computing Machinery: New York, NY, USA, 2019; pp. 89–99. [Google Scholar]
  37. Singh, P.; Sarma, N. Adaptive Replacement Cache with Quality of Service for Delay Sensitive Applications in Named Data Networking. In Proceedings of the 2021 IEEE 18th India Council International Conference (INDICON), Guwahati, India, 19–21 December 2021; pp. 1–6. [Google Scholar]
  38. Pires, S.; Ribeiro, A.; Sampaio, L. A Meta-Policy Approach for Learning Suitable Caching Replacement Policies in Information-Centric Networks. In Proceedings of the 37th ACM/SIGAPP Symposium on Applied Computing, Brno, Czech Republic, 25–29 April 2022; Association for Computing Machinery: New York, NY, USA, 2022; pp. 1950–1959. [Google Scholar]
  39. An, Y.; Luo, X. An In-Network Caching Scheme Based on Energy Efficiency for Content-Centric Networks. IEEE Access 2018, 6, 20184–20194. [Google Scholar] [CrossRef]
  40. Meddeb, M.; Dhraief, A.; Belghith, A.; Monteil, T.; Drira, K.; Mathkour, H. Least Fresh First Cache Replacement Policy for NDN-Based IoT Networks. Pervasive Mob. Comput. 2019, 52, 60–70. [Google Scholar] [CrossRef]
  41. Khelifi, H.; Luo, S.; Nour, B.; Moungla, H. A QoS-Aware Cache Replacement Policy for Vehicular Named Data Networks. In Proceedings of the 2019 IEEE Global Communications Conference (GLOBECOM), Big Island, HI, USA, 9–13 December 2019; pp. 1–6. [Google Scholar]
  42. Mishra, S.; Jain, V.K.; Gyoda, K.; Jain, S. An Efficient Content Replacement Policy to Retain Essential Content in Information-Centric Networking Based Internet of Things Network. Ad Hoc Netw. 2024, 155, 103389. [Google Scholar] [CrossRef]
  43. Shi, J.; Pesavento, D.; Benmohamed, L. NDN-DPDK: NDN Forwarding at 100 Gbps on Commodity Hardware. In Proceedings of the 7th ACM Conference on Information-Centric Networking, Virtual, 29 September 2020–1 October 2020; Association for Computing Machinery: New York, NY, USA, 2020; pp. 30–40. [Google Scholar]
  44. Cheng, Y.; Chen, W.; Wang, Z.; Yu, X.; Xiang, Y. AMC: An Adaptive Multi-Level Cache Algorithm in Hybrid Storage Systems. Concurr. Comput. Pract. Exp. 2015, 27, 4230–4246. [Google Scholar] [CrossRef]
  45. Traverso, S.; Ahmed, M.; Garetto, M.; Giaccone, P.; Leonardi, E.; Niccolini, S. Temporal Locality in Today’s Content Caching: Why It Matters and How to Model It. SIGCOMM Comput. Commun. Rev. 2013, 43, 5–12. [Google Scholar] [CrossRef]
  46. Crovella, M.E. Performance Evaluation with Heavy Tailed Distributions. In Proceedings of the Computer Performance Evaluation, Modelling Techniques and Tools, Schaumburg, IL, USA, 25–31 March 2000; Haverkort, B.R., Bohnenkamp, H.C., Smith, C.U., Eds.; Springer: Berlin/Heidelberg, Germany, 2000; pp. 1–9. [Google Scholar]
  47. Khandaker, F.; Oteafy, S.; Hassanein, H.S.; Farahat, H. A Functional Taxonomy of Caching Schemes: Towards Guided Designs in Information-Centric Networks. Comput. Netw. 2019, 165, 106937. [Google Scholar] [CrossRef]
  48. Laoutaris, N.; Syntila, S.; Stavrakakis, I. Meta Algorithms for Hierarchical Web Caches. In Proceedings of the IEEE International Conference on Performance, Computing, and Communications, Phoenix, AZ, USA, 15–17 April 2004; pp. 445–452. [Google Scholar]
  49. Spring, N.; Mahajan, R.; Wetherall, D. Measuring ISP Topologies with Rocketfuel. SIGCOMM Comput. Commun. Rev. 2002, 32, 133–145. [Google Scholar] [CrossRef]
  50. Bastian, M.; Heymann, S.; Jacomy, M. Gephi: An Open Source Software for Exploring and Manipulating Networks. In Proceedings of the International AAAI Conference on Web and Social Media, San Jose, CA, USA, 17–20 May 2009; Volume 3, pp. 361–362. [Google Scholar] [CrossRef]
  51. Katsaros, K.V.; Xylomenos, G.; Polyzos, G.C. GlobeTraff: A Traffic Workload Generator for the Performance Evaluation of Future Internet Architectures. In Proceedings of the 2012 5th International Conference on New Technologies, Mobility and Security (NTMS), Istanbul, Turkey, 7–10 May 2012; pp. 1–5. [Google Scholar]
  52. Busari, M.; Williamson, C. ProWGen: A Synthetic Workload Generation Tool for Simulation Evaluation of Web Proxy Caches. Comput. Netw. 2002, 38, 779–794. [Google Scholar] [CrossRef]
  53. Zink, M.; Suh, K.; Gu, Y.; Kurose, J. Characteristics of YouTube Network Traffic at a Campus Network—Measurements, Models, and Implications. Comput. Netw. 2009, 53, 501–514. [Google Scholar] [CrossRef]
  54. Chai, W.K.; He, D.; Psaras, I.; Pavlou, G. Cache “Less for More” in Information-Centric Networks. In Proceedings of the NETWORKING, Prague, Czech Republic, 21–25 May 2012; Bestak, R., Kencl, L., Li, L.E., Widmer, J., Yin, H., Eds.; Springer: Berlin/Heidelberg, Germany, 2012; pp. 27–40. [Google Scholar]
  55. Psaras, I.; Chai, W.K.; Pavlou, G. Probabilistic In-Network Caching for Information-Centric Networks. In Proceedings of the Second Edition of the ICN Workshop on Information-Centric Networking—ICN ’12, Helsinki, Finland, 17 August 2012; ACM Press: Helsinki, Finland, 2012; p. 55. [Google Scholar]
Figure 1. Illustration of hierarchical cache architecture.
Figure 2. Illustration of hierarchical cache management (HCM) policy.
Figure 3. Visualization of the GARR Topology.
Figure 4. Performance of different replacement policies under various traffic groups: (a) cache hit rate, (b) network link load, and (c) SSD write operations.
Figure 5. Performance of different replacement policies under YouTube traffic traces: (a) cache hit rate, (b) network link load, and (c) SSD write operations.
Figure 6. Performance of different replacement policies under various cache placement strategies: (a) cache hit rate, (b) network link load, and (c) SSD write operations.
Figure 7. Cache hit rate vs. cache capacity under different replacement policies: (a) results for traffic group B; (b) results for traffic group C.
Figure 8. Network link load vs. cache capacity under different replacement policies: (a) results for traffic group B; (b) results for traffic group C.
Figure 9. SSD write load vs. cache capacity under different replacement policies: (a) results for traffic group B; (b) results for traffic group C.
Table 1. Simulation parameters in Icarus.
Parameters | Value
Topology | GARR
Number of nodes (receiver, router, server) | 21, 27, 13
Bandwidth of network links | 1 Gbps
Cache placement | Uniform
DRAM/SSD capacity ratio | 1:10
Content placement | Random
Content size | 2 MB
Table 2. Traffic composition of mixed workloads in experimental scenarios.
Traffic Group | Web | Video | Other
A | 10% | 50% | 40%
B | 20% | 50% | 30%
C | 30% | 50% | 20%
D | 40% | 50% | 10%
Table 3. Model parameters for different traffic types in Globetraff.
Traffic Type | Popularity Model | Model Parameters | Time Locality
Web | Zipf | α = 0.75 | LRU stack model [52]
Video | Weibull | k = 0.513, λ = 6010 | Random Distribution
Other | Zipf | α = 0.75 | Random Distribution
Table 4. YouTube traffic traces.
Trace | Number of Video Clips | Number of Requests | Duration (h)
T4 | 82,132 | 145,140 | 162
T5 | 303,331 | 611,968 | 336
T6 | 131,450 | 243,023 | 168
