Review

Efficient Caching Strategies in NDN-Enabled IoT Networks: Strategies, Constraints, and Future Directions

by Ala’ Ahmad Alahmad 1,2, Azana Hafizah Mohd Aman 1, Faizan Qamar 1,* and Wail Mardini 3,4

1 Center of Cyber Security, Faculty of Information Science and Technology (FTSM), Universiti Kebangsaan Malaysia (UKM), Bangi 43600, Selangor, Malaysia
2 Department of Computer Science, Jadara University, Irbid 21110, Jordan
3 Department of Computer Science, Jordan University of Science and Technology, Irbid 22110, Jordan
4 Department of Computer Science, Houston Christian University, Houston, TX 77074, USA
* Author to whom correspondence should be addressed.
Sensors 2025, 25(16), 5203; https://doi.org/10.3390/s25165203
Submission received: 28 June 2025 / Revised: 14 August 2025 / Accepted: 19 August 2025 / Published: 21 August 2025
(This article belongs to the Section Internet of Things)

Abstract

Named Data Networking (NDN) represents a significant shift within the information-centric networking (ICN) paradigm: it departs from current IP-based infrastructures by retrieving data by name rather than by host location. This paradigm shift is especially beneficial in Internet of Things (IoT) settings, where information sharing is a critical challenge as millions of IoT devices generate enormous traffic. In-network content caching is another key characteristic of NDN in the IoT: it enables data to be stored within the network so that IoT devices can obtain the intended content from nearby caching nodes, which in turn minimizes latency and bandwidth consumption. However, effective caching solutions must be developed, since cache management is complicated by the constantly shifting topology of IoT networks and the constrained capabilities of IoT devices. This paper gives an overview of caching strategies in NDN-based IoT systems. It emphasizes six strategy types: popularity-based, freshness-aware, collaborative, hybrid, probabilistic, and machine learning-based, evaluating their performance against demands such as content preference, cache updating, and power consumption. By analyzing various caching policies and their performance characteristics, this paper provides valuable insights for researchers and practitioners developing caching strategies in NDN-based IoT networks.

1. Introduction

Through the Internet of Things (IoT), people and things can be connected anywhere and at any time, which is why the IoT has attracted sustained attention and development effort. In settings such as the smart home, the Industrial Internet of Things (IIoT), the smart city, healthcare, smart agriculture, and environmental monitoring, the IoT enables real-time communication between these environments [1]. The IoT requires a set of devices, sensors, and actuators that work together to accomplish multiple tasks; it allows device objects to learn, think, listen, and perform different tasks [2]. IoT networks generate a huge amount of data [3], but the resource-limited nature of the IoT creates shortcomings when dealing with big data, such as limited energy resources, limited storage space, and limited bandwidth. Solutions to such challenges are necessary, and the integration of Named Data Networking (NDN) with the IoT provides broad solutions to them. Researchers’ attention has recently focused on improving the future Internet’s architecture and functionality, ensuring that users can access data quickly, effectively, and securely. One promising paradigm that fits the particular needs of the IoT is NDN [4,5]. Figure 1 illustrates the NDN architecture compared with IP-based architectures. One of the most important features of NDN networks is caching [6]. Caching is realized through a set of caching strategies, and the operation of each strategy in NDN-based IoT networks depends on the nature of the network and on multiple other aspects. Through caching, NDN moves from the traditional host-centric model to a content-centric model [7] that eliminates the need for a fixed, centralized storage site by allowing data to be cached in the closest network nodes [8,9].
Even in IoT situations with limited resources, this method improves data availability, reduces latency, and mitigates congestion by switching the content retrieval strategy from IP-based addressing to content name-based requests [10,11,12,13]. This shift is especially important in IoT environments, where traditional IP-based caching relies on distant server responses and central data paths that strain bandwidth and increase delay. In contrast, NDN’s in-network caching enables data to be stored and retrieved from intermediate nodes along the communication path, significantly reducing retrieval time and energy usage, making it inherently more suitable for the dynamic and resource-limited nature of IoT networks [14].
The NDN node maintains three basic data structures, which are crucial components: the Content Store (CS), the Pending Interest Table (PIT), and the Forwarding Information Base (FIB). The CS caches incoming data packets so that it can satisfy future interests for the same content directly [15]. The PIT holds a list of interest packets waiting to be satisfied, while tracking the interfaces on which they arrived so that the corresponding data packets can be forwarded appropriately when received. The FIB, in turn, ensures that interest packets follow the best path toward the source of the requested data [16,17,18,19,20]. Although these caching mechanisms provide good fundamental support for NDN [21], the rapid proliferation of IoT devices raises new factors, including how to ensure cache freshness, how to allocate resources more efficiently, and how to handle the heterogeneity and dynamics of IoT devices. Figure 2 shows the basic steps of the caching process, each of which is essential to achieving optimal caching: selecting content, placing content, and replacing content [22].
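As an illustration of this pipeline, the following is a minimal sketch, assuming exact-match CS/PIT lookups and longest-prefix FIB matching on plain strings; the class and method names are our own, and real NDN forwarders implement much richer matching, timers, and nonce handling:

```python
# Hedged sketch: simplified interest/data processing at an NDN node.
# Names, fields, and lookups are illustrative, not a forwarder implementation.
class NDNNode:
    def __init__(self):
        self.cs = {}    # Content Store: name -> data packet
        self.pit = {}   # Pending Interest Table: name -> set of incoming faces
        self.fib = {}   # Forwarding Information Base: name prefix -> outgoing face

    def on_interest(self, name, in_face):
        # 1. CS lookup: a cached copy satisfies the interest immediately.
        if name in self.cs:
            return ("data", self.cs[name], in_face)
        # 2. PIT lookup: aggregate duplicate interests for the same name.
        if name in self.pit:
            self.pit[name].add(in_face)
            return ("aggregated", None, None)
        # 3. FIB lookup: forward toward the producer (longest-prefix match).
        self.pit[name] = {in_face}
        prefix = max((p for p in self.fib if name.startswith(p)),
                     key=len, default=None)
        if prefix is None:
            return ("drop", None, None)
        return ("forward", None, self.fib[prefix])

    def on_data(self, name, data):
        # Cache the returning data (Leave Copy Everywhere by default) and
        # return all faces recorded in the PIT so the data can be sent back.
        self.cs[name] = data
        return self.pit.pop(name, set())
```

Note how in-network caching falls out of this structure: once `on_data` populates the CS at any intermediate node, later interests for the same name are answered locally without reaching the producer.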

1.1. Related Work

Even though caching techniques and procedures have advanced significantly, new problems are brought about by the exponential expansion of IoT devices and the enormous volumes of data they generate. In order to solve problems like scalability, energy efficiency, and dynamic network conditions, researchers are constantly investigating sophisticated caching techniques designed for the NDN-based IoT context. By improving caching’s flexibility and efficiency, these initiatives hope to keep the NDN-based IoT network resilient and able to handle the needs of a world growing more networked by the day [23,24]. This survey, unlike previous studies that focus on general ICN architectures or limited caching classification, explores a more granular and structured taxonomy of caching strategies, specifically designed for NDN-based IoT networks. In addition to identifying current issues regarding strategies and potential research prospects, this study attempts to provide a comprehensive overview of modern caching strategies and compares them across multiple performance metrics, including CHR, latency, energy consumption, and scalability. It is expected that researchers and practitioners looking to improve caching in NDN-based IoT networks will find great value in the survey results.
There are a number of survey studies that have focused on caching in ICN-IoT and in NDN-based IoT networks. The study in [14] presented NDN caching mechanisms within ad hoc wireless networks, divided into MANETs, the IoT, and VANETs. The study also emphasized the difficulties facing traditional IP-based networks, including data transfer restrictions, data duplication, and poor content delivery efficiency, which the NDN architecture solves or mitigates. The strategies were divided into several categories, and their benefits and challenges were discussed. In Ref. [25], the study presented a comprehensive overview of caching strategies in ICNs and the IoT. The study focused on the importance of network efficiency and of delivering content effectively in IoT environments. It also showed the inefficiency of traditional Internet Protocol (IP)-based networks in delivering content, their limited scalability, and their extended response times. Furthermore, the study discussed important issues, such as metrics and methods for evaluating caching operations, and highlighted the important role that caching plays in improving network performance. The study in Ref. [26] presented an extensive classification of caching mechanisms in ICN-IoT networks, covering the challenges of scalability, energy consumption, and data freshness. It also discussed cache mechanisms and the selection of the appropriate method for caching content, in addition to content replacement mechanisms. The study demonstrated the importance of machine learning-based caching for its effective role in enhancing caching efficiency. In our study, a classification into six categories of caching mechanisms is presented. Table 1 shows a number of the strategies discussed in the above studies and others, and Figure 3 shows a diagram of the use of each type of strategy within each study.

1.2. Motivation and Contribution

This review presents a modern set of caching strategies for NDN-based IoT networks. The main focus is to explore how caching strategies can be leveraged to address issues in IoT environments, such as dealing with huge amounts of data, resource constraints, and the dynamic nature of the network. To ensure coverage and balance, we searched through major academic databases, such as IEEE Xplore, ACM Digital Library, SpringerLink, and ScienceDirect, using targeted keywords (“NDN IoT caching,” “caching strategies in information-centric networking,” “NDN-based IoT,” etc.). Preference was given to papers published in high-quality journals and conferences within the last 5–7 years, particularly those that introduced novel strategies, performed simulations or experimental validation, and used standard performance metrics, such as the cache hit rate (CHR), latency, energy, and scalability.
While several past surveys have focused on general caching in NDN or ICN networks, they often lack a dedicated emphasis on NDN-IoT scenarios or do not include recent developments, particularly in the areas of hybrid and AI-driven caching. Moreover, few reviews offer a critical comparison of strategies across shared evaluation metrics or discuss deployment feasibility under real-world IoT constraints. This research categorizes caching strategies based on their objectives and methodology; each of them is evaluated based on different criteria, such as CHR, latency reduction, energy efficiency, and adaptability to dynamic network conditions. In addition to reviewing the current strategies, a set of potential future directions in this field is proposed to develop and enhance caching, which in turn enhances network efficiency. In addition, this survey aims to provide a targeted resource for both practitioners and researchers.
The main objective of this review is to provide a comprehensive and well-structured overview of caching strategies within NDN-based IoT environments. Given the increasing complexity and heterogeneity of IoT systems, along with the shift from host-centric to content-centric communication models, understanding caching mechanisms has become a critical area of study. This review aims to bridge the gap between theoretical concepts and practical implementations by categorizing and analyzing existing strategies through a structured lens. In particular, we focus on explaining the following key aspects: the fundamental role of in-network caching in NDN-IoT systems, the challenges posed by IoT characteristics, and the classification of caching strategies into meaningful categories. We mainly explain the following:
  • In this article, we provide a comprehensive analysis and summary of NDN in conjunction with the IoT, focusing on content transmission from providers to end users through cacheable networks.
  • A comprehensive evaluation of various strategies in NDN-based IoT networks based on current and relevant surveys is conducted.
  • This survey provides comprehensive insights into the difficulties of NDN-based IoT networks and identifies their resolutions through the use of caching strategies.
  • This paper provides a comprehensive summary of NDN-based IoT caching strategies based on their classification into six categories: popularity-based, freshness-aware, collaborative, hybrid, probabilistic, and machine learning-based.

1.3. Paper Structure

This paper is organized into a set of sections, each presenting specific ideas on the basic concepts of modern caching strategies in NDN-based IoT networks, as illustrated in Figure 4. Section 1: Introduction: this section contains the background, motivation, objectives, scope, and structure of the survey, giving a clear basis for understanding the study and identifying its contributions. Section 2: an overview of caching in NDN-based IoT networks, its importance, and its challenges. Section 3: Caching Strategies in NDN-based IoT Networks: this section provides a comprehensive review of modern caching strategies and discusses the different methodologies used in these strategies and their classification, with a focus on their importance. Section 4: Future Directions for Research: this section presents some promising future directions in caching strategies. Section 5: Conclusions: this section concludes the survey by summarizing the research and highlighting the importance of caching in NDN-based IoT networks. Table 2 provides a summary of the main abbreviations used throughout this article.

2. Caching in NDN-IoT Networks

Caching is a key component in improving the performance and efficiency of NDN-based IoT networks. As the shift from IP-based to content-based data retrieval has taken place, NDN has provided solutions to many of the problems facing IoT environments through caching [34,35]. Caching allows the required data to be stored at nodes close to the client, thus enabling faster data access, reducing energy and bandwidth consumption, and also alleviating network congestion [36,37]. All of these features are important and necessary in IoT environments, as the nature of these environments is resource-constrained and requires a model, such as NDN [38,39,40].
IoT environments contain a large number of devices that generate and exchange a huge amount of data; the number of devices has reached the billions [37,41]. The traditional approach of retrieving data from servers is no longer suitable or effective, because it increases data access time and causes network congestion. Caching in NDN-based IoT networks has therefore emerged as a solution to these problems: by storing content close to the requesting nodes, it enables faster content retrieval and reduces the load on central servers [42].
An important feature of caching technologies is that they improve the scalability of the network, in addition to enhancing data availability in cases of intermittent connectivity [43]. With regard to energy consumption, caching contributes to reducing it, which is very important for IoT devices with limited battery life [44]. Caching also reduces the need to transfer data over long distances, as in the server-client model, thereby extending the operational life of IoT devices. Recent studies have shown that when caching is integrated into NDN-based function chaining, it enables the reuse of intermediate results. This not only helps cut down on latency and processing load but also lowers energy usage, an especially important benefit in IoT settings [45]. Despite all the benefits that caching techniques offer, they face many challenges in NDN-based IoT networks. The first is resource constraints [46]: IoT devices are characterized by limited memory, energy, and processing power, so efficiently allocating data to the right place in the cache while keeping energy consumption low is a major challenge [47]. Making caching work well in such environments requires careful handling of limited resources. Stored content needs to fit within the available memory, and choosing what to evict should not use up excessive energy. Some caching approaches have addressed these challenges by aiming to cut down power consumption and make better use of memory.
The second challenge is the dynamic network topology: IoT networks are dynamic in nature [48,49], as devices can join or leave the network frequently. This requires the cache to remain consistent and to adapt to this pattern [50]. These constant changes can make it difficult to keep cached data consistent and coordinated; if a device storing content disconnects, that content might be lost, so caching strategies need to adjust quickly to such changes. Scalability is a further challenge: the huge amount of data in IoT environments requires caching strategies capable of handling large-scale networks without negatively affecting the performance of the network as a whole [4]. As the network grows, caching systems need to scale without slowing everything down. In addition, when the cache becomes full, an appropriate replacement policy must be applied to maintain a high cache hit ratio, which is itself a challenge [51]. Bad choices can result in keeping outdated or less useful data while removing content that is still valuable, which can lower the cache hit rate and increase delays. A final challenge is security and privacy: caching data in nodes for later access raises a number of security and privacy concerns, as keeping cached content accessible only to authorized users is difficult in NDN-based IoT networks [52,53]. These risks are particularly serious in NDN-IoT setups. Figure 5 illustrates the challenges of caching in NDN-based IoT.
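For the replacement-policy challenge discussed above, a minimal sketch of the classic LRU policy (the baseline that several strategies surveyed later compare against) may be useful; the capacity value and names are illustrative assumptions:

```python
from collections import OrderedDict

# Hedged sketch of Least Recently Used (LRU) replacement: on overflow, the
# entry that has gone unused the longest is evicted. Capacity is illustrative.
class LRUCache:
    def __init__(self, capacity=3):
        self.capacity = capacity
        self.store = OrderedDict()  # insertion order tracks recency

    def get(self, name):
        if name not in self.store:
            return None
        self.store.move_to_end(name)   # mark as most recently used
        return self.store[name]

    def put(self, name, data):
        if name in self.store:
            self.store.move_to_end(name)
        elif len(self.store) >= self.capacity:
            self.store.popitem(last=False)  # evict least recently used
        self.store[name] = data
```

LFU differs only in the eviction criterion (lowest request count rather than oldest access), which is why the two are so often paired as baselines in the evaluations cited in Section 3.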

3. Caching Strategies in NDN-IoT

Caching strategies are one of the most important means of improving the performance of NDN networks in IoT environments [54]. Research interest in this area is directed towards finding a comprehensive strategy that determines how to cache appropriate content. Caching has various benefits, but if an inappropriate caching method is used, the process can waste network resources. To date, a large number of strategies have been proposed, each of which addresses one part of the caching process or is built around a specific goal. The main purpose of this section is to classify modern caching strategies in detail.
The use of NDN-based IoT networks offers several advantages. First, it provides caching, whereby data is kept at different nodes across the network. Furthermore, it allows data to be retrieved easily and quickly. Besides that, fast content lookup via the hierarchical naming feature [55] also gives the network the advantage of scalability. It likewise provides facilities for developing various applications [14]. Figure 6 shows a comprehensive classification of caching strategies in NDN-based IoT, including the following categories: popularity-based caching, freshness-based caching, collaborative/cooperative caching, hybrid caching, machine learning-based caching, and probabilistic caching.

3.1. Popularity-Based Caching

In this approach, IoT devices monitor data requests, identify the most frequent requests during specific time periods, and use this information to determine which data should be cached. The most requested content is labelled as popular content, as shown in Figure 7 [14]. Adopting a popularity-based caching approach and focusing on the most requested content reduces latency: data classified as the most frequently requested is stored closer to the devices or users, reducing the need to reach service providers. This is a great advantage in IoT networks, which are characterized by limited resources in terms of energy, bandwidth, and storage. One of the most important strengths of the popularity-based approach is its efficient use of resources in NDN-based IoT networks, achieved by storing content at strategic locations within the network. This preserves bandwidth, avoids duplicate data retrieval, and reduces energy consumption [56]. In addition, this approach supports scalability in NDN-based IoT systems. As a result, researchers have been keen to develop popularity-based caching strategies that improve caching results in NDN-based IoT networks. Below, we classify popularity-based caching strategies into several categories, as shown in Figure 8; Table 3 summarizes selected popularity-based caching strategies.
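As a minimal sketch of the idea, assuming a simple per-name request counter, an illustrative admission threshold, and LFU-style eviction (none of which are taken from a specific cited scheme), popularity-based cache admission might look like this:

```python
from collections import Counter

# Hedged sketch of popularity-based cache admission: a node counts requests
# per content name and admits a name to the cache only once its count reaches
# a threshold. Threshold and capacity values are illustrative assumptions.
class PopularityCache:
    def __init__(self, capacity=4, threshold=3):
        self.capacity = capacity
        self.threshold = threshold
        self.counts = Counter()   # request frequency per content name
        self.store = {}           # cached content

    def request(self, name, fetch):
        self.counts[name] += 1
        if name in self.store:
            return self.store[name]          # cache hit
        data = fetch(name)                   # retrieve from upstream
        if self.counts[name] >= self.threshold:
            if len(self.store) >= self.capacity:
                # Evict the least popular cached item (LFU-style).
                victim = min(self.store, key=lambda n: self.counts[n])
                del self.store[victim]
            self.store[name] = data
        return data
```

The admission threshold is what distinguishes this family from indiscriminate policies such as Leave Copy Everywhere: one-off requests never displace content that is demonstrably popular.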

3.1.1. Popularity with Network Centrality Awareness

These studies rely on combining the concept of content popularity with network centrality measures and focus on the network topology (such as node closeness or betweenness). The goal of such integration is to achieve highly efficient caching decisions. Nodes that occupy a central position in the network are able to optimally deliver and distribute content to the largest number of consumers, which is why they are given priority for content caching. This integration process thus increases network efficiency and reduces unwanted data duplication [57,58].
In a recent study performed by Ref. [59], the researchers proposed a caching control scheme, which sought to improve the efficiency of content cache in Information-Centric ad hoc Networks (ICANETs). This study focused on the importance of information-centric networks (ICNs), especially the NDN, in addressing the problems faced by traditional networks in storing and managing content to reduce the burden on the network. The study relied on the popularity of the content and the centrality of the nodes as key indicators for caching content, especially in dynamic environments. Cache priority is given to easily accessible nodes, as this scheme ensures that the most popular content is stored in the easily accessible nodes in the network, which contributes to increasing the efficiency of caching. A new mechanism has also been used, which is interest packet forwarding, working to update closeness centrality values and content popularity ratings. In addition, this scheme was evaluated using the ndnSIM simulator (based on NS-3), and the results showed that the proposed method, PT-Cache, demonstrated improvements in both cache efficiency and network performance, reducing response time compared to both PT-default and LCD. The study also indicated that energy consumption remains an area for future research.
The authors in Ref. [58] presented a set of caching strategies in NDN-IoT networks, which try to find solutions to the shortcomings in traditional networks, such as energy efficiency and response time. The strategies in this study were classified into popularity-based strategies, centrality-based strategies, and probability-based strategies. The Icarus simulator was used to examine the efficiency of each of these strategies and evaluate the caching performance of each. It was found that the Popularity-Aware Closeness Centrality (PACC) and efficient popularity-aware probabilistic caching (EPPC) gave better results than Client Cache Strategy (CCS), periodic caching strategy (PCS), Centrally Controlled Caching (CCC), Approximate Betweenness Centrality (ABC), and others. The study also emphasized the importance of adaptive strategies that prevent data duplication, which improves network efficiency.
One of the most important problems facing IoT networks is the limited resources and restrictions on them, especially in streaming media operations. Researchers in the study in Ref. [60] presented caching in ICN networks within IoT environments, where they proposed a popularity and betweenness-based replacement scheme (PBRS). This scheme is based on two important factors: the popularity of the content within the range and the importance of the node, and based on these two factors, the content replacement operations are carried out. The popularity of the data is determined by the frequency of the request, and the importance of the node is determined by its role in directing the data. This scheme uses a resource adaptation resolving server (RARS) to maintain and manage the caching state. The simulation results using the ndnSIM simulator showed that PBRS outperformed traditional strategies, such as Least Recently Used (LRU) and Least Frequently Used (LFU). This was in terms of both CHR criteria and reducing the delay time.
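The combination of popularity and centrality underlying the schemes above can be sketched as follows, assuming closeness centrality computed by BFS on an unweighted topology and a weighted caching score whose parameters alpha and threshold are our own illustrative assumptions, not taken from any cited scheme:

```python
from collections import deque

# Hedged sketch: closeness centrality via BFS, combined with a popularity
# score into a single caching decision. Weights and threshold are illustrative.
def closeness(graph, node):
    """Closeness centrality of `node` in an unweighted graph (dict: node -> neighbours)."""
    dist = {node: 0}
    q = deque([node])
    while q:
        u = q.popleft()
        for v in graph[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    total = sum(dist.values())
    # (reachable nodes - 1) / sum of shortest-path distances
    return (len(dist) - 1) / total if total else 0.0

def cache_here(popularity, graph, node, alpha=0.6, threshold=0.5):
    # Cache when the weighted popularity/centrality score crosses the threshold:
    # central nodes need less popularity to justify caching, and vice versa.
    return alpha * popularity + (1 - alpha) * closeness(graph, node) >= threshold
```

On a star topology, for instance, the hub has closeness 1.0 and the leaves 0.6, so the hub is favoured as a cache location for content of equal popularity, which matches the intuition behind PACC- and PBRS-style schemes.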

3.1.2. Traffic and Energy Efficiency-Oriented Caching

In this subcategory of popularity-based caching, popularity is taken into account as the primary factor, and two other important factors are then addressed: reducing the volume of data traffic within the network and reducing the energy consumed by participating nodes. As mentioned, IoT environments are resource-limited, so here caching priority is given to content that is in frequent demand, in a way that reduces transmission costs and energy-consuming operations. Such strategies help eliminate unnecessary transmissions and maintain network efficiency [61].
The authors in Ref. [62] proposed a traffic-aware caching mechanism (TCM) for information-centric networking (ICN) wireless sensor networks with resource-limited sensors, aiming to improve energy efficiency. The TCM pushes popular upstream content objects to be cached near the sink node, while less popular content is cached farther away from the sink. The results of the study, obtained with the COOJA simulator, showed that the TCM reduces energy consumption and response time compared to other strategies, such as CPCCS and MPC. The technique uses a Bloom filter (BF) to coordinate storage, which ensures diversity in the stored content and better allocation of resources, making it suitable for IoT environments.
In the study by Ref. [63], the researchers proposed a popularity-based caching strategy, namely the Collaborative Caching Strategy for Content-centric enabled Wireless Sensor Networks (CCN-WSNs). This collaborative strategy takes into account two important factors during the content storage process, which are the distance from the content source and the node degree (number of neighboring nodes), and thus it was named Collaborative Caching Strategy Distance and Degree awareness (CSDD). The strategy works to achieve a balance between both the ease of access to the content through the process of selectively storing data in neighboring nodes, and the strategy also achieves a balance in energy efficiency and diversity of data caching. The CSDD strategy is characterized by reducing excessive caching and unjustified energy consumption by reducing the number of nodes participating in the storage process while maintaining the content on demand. The study used the CCNx Contiki simulator in the evaluation process of this strategy, and the results showed its superiority over traditional strategies, such as Leave Copy Everywhere (LCE) and Leave Copy Down (LCD). The simulation was applied nine times under different criteria, and the most prominent results were in reducing energy consumption and improving the CHR. The strategy shows high efficiency in caching content with variable content popularity and different network structures, making it a suitable choice for IoT environments.
In the study in Ref. [64], the researchers proposed an architecture that included edge computing (EC) and information-centric networking (ICN) to improve caching operations. This architecture seeks to increase content availability, reduce access time, and improve network efficiency by optimizing traffic. Two scenarios were used to deliver media in the IoT. In the first scenario, called Downstream Data Flow Management, content was transferred from the edge nodes in the network to the user. As for the second scenario, called Upstream Data Handling, data was routed and stored in a local cloud server. The architecture also used a caching mechanism to store the most popular data based on clustering. Additionally, the simulation results using the Network Simulator-3 (NS-3) showed that this architecture outperformed its counterparts in terms of response time, increased content availability, and bandwidth. This architecture ensures scalability for IoT environments.

3.1.3. Cache Consistency and Data Freshness Mechanisms

This type of strategy focuses on the challenge of keeping cached data both popular and fresh. Content storage in such strategies not only targets the most requested content, but also takes into account the freshness and consistency of the data. This includes content that quickly becomes outdated, such as sensor readings or control signals in IoT environments. This category also uses mechanisms to monitor the validity of content and remove it when necessary or when it is considered outdated [58]. Techniques such as timestamp-based eviction, adaptive freshness windows, or synchronization protocols with content producers are commonly used [65,66]. These mechanisms ensure that users receive up-to-date and accurate data.
In the study in Ref. [67], the researchers presented a caching strategy called popularity, size, and freshness-based (PoSiF) caching that relies on content freshness in addition to its popularity and size in ICN-IoT networks. The study focused on the balance between content freshness and popularity, and also highlighted taking content size into account as an important factor, as neglecting this factor may lead to inefficient content replacement. This strategy was evaluated using the ndnSIM simulator, which showed that PoSiF outperforms CCS/CES, LCD, and Cache Everything Everywhere (CEE) in terms of the CHR and reduces the hop ratio. Compared to LRU and LFU, the results showed that PoSiF gives a lower freshness ratio, as it prioritizes caching the most popular content.
The study in Ref. [68] presented a scheme called a Popularity-based Cache Consistency Management (PCCM) to handle and manage the temporal data in the IoT and its freshness within Content-Centric Networking (CCN) networks. PCCM aims to provide high-freshness and popular data to consumers. It also gives priority to popular data that is constantly updated in the cache operations, compared to unpopular data. Data popularity is determined by Adaptive Routers (ARs), as they play an effective role in consistency management operations. Thus, routers can check the freshness of cached data and remove old ones. The results using the ndnSIM simulator showed that PCCM achieved better results in terms of cache consistency and reduced latency. PCCM provides support for IoT environments where data freshness is of utmost importance.
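A minimal sketch of the timestamp-based eviction idea discussed in this subsection, assuming each cached entry carries an illustrative per-item lifetime (analogous to a freshness period) and that stale entries are purged on access rather than served, is as follows:

```python
import time

# Hedged sketch of timestamp-based freshness eviction. Each entry records
# when it was cached and how long it stays valid; field names are illustrative.
class FreshnessCache:
    def __init__(self):
        self.store = {}  # name -> (data, cached_at, lifetime_s)

    def put(self, name, data, lifetime_s, now=None):
        self.store[name] = (data, time.time() if now is None else now, lifetime_s)

    def get(self, name, now=None):
        entry = self.store.get(name)
        if entry is None:
            return None
        data, cached_at, lifetime_s = entry
        now = time.time() if now is None else now
        if now - cached_at > lifetime_s:
            # Stale: evict rather than serve outdated sensor data.
            del self.store[name]
            return None
        return data
```

PoSiF- and PCCM-style schemes layer popularity (and, in PoSiF, content size) on top of this basic validity check when deciding what to keep.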

3.1.4. Dynamic Popularity Windows and Threshold-Based Schemes

This category covers strategies that combine content popularity with time-based popularity windows or request-count thresholds. Here, popularity is not treated as a constant value but is measured within a time window (such as the last 10 min or a certain number of recent requests). This allows the strategy to adapt to sudden changes, such as when a piece of content suddenly becomes highly popular. Threshold-based strategies rely on specific limits: if the request count for a content item exceeds the threshold value, it is labelled as popular. Such strategies are well suited to IoT environments, which are dynamic and require adaptation to changing user interests [69].
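The sliding-window-plus-threshold mechanism described above can be illustrated with a short sketch (our own simplification, assuming a fixed-size window of recent request names and a simple count threshold):

```python
from collections import deque

# Illustrative sketch (not from any cited paper): popularity is counted
# only over a sliding window of the most recent requests, and content is
# labelled popular once its in-window count reaches a threshold.
class PopularityWindow:
    def __init__(self, window_size, threshold):
        self.window = deque(maxlen=window_size)  # most recent request names
        self.threshold = threshold

    def record(self, name):
        self.window.append(name)  # oldest entry falls out automatically

    def is_popular(self, name):
        return self.window.count(name) >= self.threshold
```

Because the window slides, content that was popular ten minutes ago but is no longer requested naturally loses its "popular" label, which is the adaptivity these schemes aim for.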
The authors in Ref. [70] proposed a dynamic popularity window and distance-based efficient caching technique for CCN models. It aims to improve caching by considering two factors: content popularity, determined using a dynamically sized popularity window, and the distance the content message has travelled. Based on these factors, the technique selects appropriate routers to store content, thereby improving caching decisions and the quality of service (QoS) during content delivery. A threshold value governs the caching decision and is adjusted according to both popularity and distance measures. Simulations on the Abilene network topology, with LRU used for content replacement, showed that the proposed technique outperformed LCE in terms of hop count, hit ratio, network traffic, and response time. The approach could improve performance in future high-demand applications, such as the IoT, vehicular networks, and 5G/6G networks.
In Ref. [71], the authors proposed the Dynamic Popularity Window-based Caching Scheme (DPWCS), which addresses the resource constraints of routers. The most popular content is identified through a popularity window in each router, using the content request rate and a threshold value, and only that content is cached. Caching only popular content reduces redundant storage and network load. The results showed improvements in CHR, hop count, and network load compared to LCE and ProbCache.
In Ref. [56], the researchers proposed the compound popular content caching strategy (CPCCS), designed to improve caching in NDN networks. It addresses challenges of traditional caching mechanisms in NDN-based IoT by handling both less popular content (LPC), which is stored near the requesting nodes, and optimal popular content (OPC), which is identified from the frequency of requests. CPCCS operates in two stages. In the first stage, content is classified by threshold values into high-popularity and low-popularity categories, with popular content identified with the help of PIT records. In the second stage, the selected popular content is stored in shared nodes, while the remaining content is stored in nodes close to the users. The LRU policy is used for replacement, and each content item is assigned a lifetime after which it is removed. The CHR, diversity, and stretch metrics used to evaluate CPCCS are defined mathematically in the equations above. Simulations with the SocialCCNSim-Master simulator showed that CPCCS outperformed the WAVE popularity-based caching strategy, max-gain in-network caching (MAGIC), LeafPopDown, hop-based probabilistic caching (HPC), cache capacity aware caching (CCAC), most popular cache (MPC), and ProbCache in CHR, stretch, and content diversity, making it a promising solution for IoT environments.
In addition, the researchers in Ref. [72] proposed a periodic caching strategy (PCS) for NDN-based smart city networks, one of the most important IoT application domains. The strategy aims to improve retrieval time (latency), CHR, and the distance between a node and the content source (stretch). PCS adds a statistical table to each node in the smart city, recording the interests of each request; among its columns, the threshold is the most important. The table identifies the most frequently requested content: when the interest count for a content item reaches the threshold value, the content is cached. Caching is performed at edge nodes first and, when they become full, at central nodes. The LRU policy is used for replacement. This strategy reduces content redundancy as well as congestion. PCS was evaluated through the key performance metrics defined mathematically in the previous equations and compared with two other strategies, the Client-Cache strategy (CCS) and the Tag-Based Caching Strategy (TCS). The results, obtained with the SocialCCN simulator on smart city networks, showed that PCS outperformed both in improving the CHR, reducing data retrieval time (latency), and reducing the distance value (stretch). The study concluded that the strategy effectively meets the needs and requests of smart city applications in NDN-based IoT environments.
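The PCS placement logic described above (per-node interest counting, threshold test, edge-first placement with a central fallback) can be sketched as follows. This is our own hedged simplification: the function name, the set-based caches, and the capacity parameters are illustrative assumptions, not the paper's data structures.

```python
# Hedged sketch of the PCS idea: a per-node interest table counts
# requests; once a content's count reaches the threshold it is cached at
# an edge node first, falling back to a central node when the edge cache
# is full. Names and capacities here are illustrative.
def pcs_place(content, interest_table, threshold, edge_cache, central_cache,
              edge_capacity, central_capacity):
    interest_table[content] = interest_table.get(content, 0) + 1
    if interest_table[content] < threshold:
        return "not cached"
    if len(edge_cache) < edge_capacity:
        edge_cache.add(content)
        return "edge"
    if len(central_cache) < central_capacity:
        central_cache.add(content)
        return "central"
    return "full"
```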
Table 4 compares the equations used in the above studies.
Table 3. Summary of the popularity-based caching strategies.
| Category | Study | Proposed Strategy | Key Features | Evaluation and Results | Tool |
|---|---|---|---|---|---|
| Popularity with Network Centrality Awareness | [59] | PT-Cache | Interest packet forwarding mechanism that updates closeness centrality values and content popularity ratings | Compared to PT-default and LCD; improved cache efficiency and network performance, reduced response time | ndnSIM |
| | [58] | Popularity-Aware Closeness Centrality (PACC) and efficient popularity-aware probabilistic caching (EPPC) | PACC and EPPC give better results than other strategies | Compared to CCS, PCS, CCC, and ABC | Icarus |
| | [60] | Popularity and betweenness-based replacement scheme (PBRS) | Uses content popularity and node importance for cache replacement | Compared with LRU and LFU; improved CHR and reduced delay | ndnSIM |
| Traffic and Energy Efficiency-Oriented Caching | [62] | Traffic-aware caching mechanism (TCM) | Caches popular content near the sink node to improve energy efficiency | Compared with CPCCS and MPC; reduced energy consumption and response time | COOJA |
| | [63] | Collaborative Caching Strategy Distance and Degree aware (CSDD) for CCN-WSNs | Balances content accessibility and energy efficiency; selectively stores data in neighboring nodes | Compared with LCE and LCD; reduced energy consumption and improved CHR | CCNx Contiki |
| | [64] | Edge-computing-based architecture | Increases content availability, reduces access time | Outperformed in response time, content availability, and bandwidth | NS-3 |
| Cache Consistency and Data Freshness Mechanisms | [67] | PoSiF | Relies on content freshness, popularity, and size | Outperformed CCS/CES, LCD, and CEE in CHR and reduced hop ratio | ndnSIM |
| | [68] | Popularity-based Cache Consistency Management (PCCM) | Ensures cache consistency by prioritizing high-freshness popular data; removes outdated cache content | Compared with traditional methods; improved cache consistency and reduced latency | ndnSIM |
| Dynamic Popularity Windows and Threshold-based Schemes | [70] | Dynamic Popularity Window and Distance-based Efficient Caching | Uses dynamic popularity window and content distance for caching decisions; improves QoS; LRU for replacement | Compared with LCE; improved hit ratio, network traffic, and response time | - |
| | [71] | Dynamic Popularity Window-based Caching Scheme (DPWCS) | Reduces redundant storage and network load by caching highly popular content based on request rate and threshold | Compared with LCE and ProbCache; improved CHR, hop count, and network load | - |
| | [56] | Compound Popular Content Caching Strategy (CPCCS) | Caches less popular and optimal popular content; LRU for replacement | Compared with WAVE, MAGIC, HPC, CCAC, MPC, and ProbCache; improved CHR, stretch, and content diversity | SocialCCNSim-Master |
| | [72] | Periodic caching strategy (PCS) | Threshold-based statistical table; LRU for replacement; reduces redundancy and congestion | Compared with CCS and TCS; improved CHR, retrieval time, and stretch | SocialCCN |
Table 4. Formula comparison for popularity-based caching strategies.
| Category | Study | Equation | Description |
|---|---|---|---|
| Popularity with Network Centrality Awareness | [59] | $RFC_i^j = \beta \cdot MFC_i^{T_j} + (1-\beta)\,RFC_i^{j-1}$ | $RFC_i^j$: access frequency of interest packets for content $C_i$ at time period $j$; $MFC_i^{T_j}$: measured frequency of interest packets during cycle $T_j$; $RFC_i^{j-1}$: previous access frequency (from time period $j-1$); $\beta$: weighting factor that adjusts responsiveness to new data |
| | | $BC(n) = \sum_{i \neq n \neq j} \frac{\delta_{ij}(n)}{\delta_{ij}}$ | $BC(n)$: betweenness centrality of node $n$; $\delta_{ij}$: number of shortest paths between nodes $i$ and $j$; $\delta_{ij}(n)$: number of those paths that pass through node $n$ |
| | | $CC(n) = \frac{N-1}{\sum_i d(n,i)}$ | $CC(n)$: closeness centrality of node $n$; $N$: total number of nodes in the network; $d(n,i)$: distance from node $n$ to node $i$ |
| | [58] | $\Delta CC_m = CC_m - CC_n$ | $\Delta CC_m$: difference in closeness centrality between upstream node $m$ and downstream node $n$; $CC_m$, $CC_n$: closeness centralities of nodes $m$ and $n$, respectively |
| | [60] | $C_{BSP}(v) = \sum_{s \neq v \neq t,\, s,t \in V} \frac{\delta_{st}(v)}{\delta_{st}}$ | $C_{BSP}(v)$: betweenness centrality of node $v$; $\delta_{st}$: total number of shortest paths from node $s$ to node $t$; $\delta_{st}(v)$: number of those shortest paths that pass through node $v$ |
| | | $popuL_i = \frac{reqs_i^T}{\sum_{j=1}^{k} reqs_j^T}$ | $popuL_i$: popularity-like value of content $i$; $reqs_i^T$: number of interest packet requests for content $i$ in time period $T$; $k$: total number of different contents requested during $T$ |
| | | $popuL_i^m = \frac{N}{N+1} \cdot \frac{\sum_{n=1}^{N} popuL_i^n \cdot C_{BSP}(n)}{\sum_{n=1}^{N} C_{BSP}(n)}$ | $popuL_i^m$: popularity-like value of content $i$ at next-hop router $m$; $popuL_i^n$: popularity-like value received from last-hop router $n$; $C_{BSP}(n)$: betweenness of router $n$; $N$: number of routers forwarding interest packets for content $i$ |
| | | $FHR = \frac{\sum_{i \in R} h_i}{\sum_{i \in R} f_i}$ | $h_i$: number of hit requests for content $i$; $f_i$: total number of requests for content $i$; $R$: set of all requested content |
| Traffic and Energy Efficiency-Oriented Caching | [62] | $CHR_{average} = \frac{\sum_{i=1}^{m} c_i}{p}$ | $c_i$: number of cache hits at node $i$; $p$: total number of interest messages sent by all consumers; $m$: total number of nodes in the network |
| | | $ST_{average} = \frac{1}{I}\sum_{i=1}^{I} \frac{H_i^{forwarded}}{H_i^{cp}}$ | $I$: total number of interest packets transmitted in the network; $H_i^{forwarded}$: total hop count used to forward interest packet $i$; $H_i^{cp}$: hop count from content consumer to content producer for packet $i$ |
| | | $L_{average} = \frac{\sum_{c=1}^{p} L_c}{p}$ | $L_c$: latency to retrieve content object $c$; $p$: total number of interest messages |
| | | $DC_i = DC_i^{lx} + DC_i^{tx} + DC_i^{rx} + DC_i^{over} + DC_i^{add}$ | Each term is the duty cycle of a radio state: $DC^{lx}$ listening, $DC^{tx}$ transmitting, $DC^{rx}$ receiving, $DC^{over}$ overhearing, $DC^{add}$ additional operations |
| | | $DC_{average} = \frac{\sum_{i=1}^{m} DC_i}{m}$ | $DC_i$: radio duty cycle of node $i$; $m$: total number of nodes |
| | [63] | $p_i = \beta\, i^{-\alpha}$, with $\beta = \left(\sum_{i=1}^{N} \frac{1}{i^{\alpha}}\right)^{-1}$ | $p_i$: probability of requesting the $i$-th most popular content; $\alpha$: Zipf exponent controlling the skewness of popularity (higher = more skewed); $i$: rank of the content (1 is most popular); $\beta$: normalization constant; $N$: total number of contents |
| | | $d = \frac{t \cdot \Delta}{100}$ | $d$: threshold hop-distance from the source node where caching starts; $t$: total number of hops from the source node to the requester; $\Delta$: caching threshold percentage (e.g., 30%, 50%) |
| | | $Stretch = \frac{\sum_{i=1}^{I} \mathrm{hops\ crossed}_i}{\sum_{i=1}^{I} \mathrm{total\ hops}_i}$ | $I$: total number of generated content interests; $\mathrm{hops\ crossed}_i$: actual path length taken for content $i$; $\mathrm{total\ hops}_i$: total possible hop count to the source |
| | | $Diversity = \frac{\mathrm{Cardinality}\left(\bigcup_{n=1}^{N} CO_n\right)}{\sum_{n=1}^{N} \mathrm{Cardinality}(CO_n)}$ | $N$: total number of nodes; $CO_n$: set of cached content at node $n$; the numerator counts unique content items, the denominator counts total stored items |
| | [64] | $PC = PC + \alpha$ | $PC$: popularity of content; the increment $\alpha$ is set to 0.01 |
| | | $\min_{\{P_{i,j},\,\Psi_{i,j}\}} \frac{1}{\xi}\sum_{i=1}^{\xi} PC_i \sum_{j=1}^{J_{max}^i} \frac{F_m}{V_m \tau_m}\left(P_{i,j}\,\tau_{cs} + \Psi_{i,j}\,\tau_R + P_{i,j}\,\tau_m\right)$ | $PC_i$: content popularity of content $i$; $\xi$: number of content types; $P_{i,j}$: caching resource allocated to content $i$ on device $j$; $\Psi_{i,j}$: computing resource allocated to content $i$ on device $j$; $F_m$: coverage area of mobile device; $V_m$: speed of mobile device; $\tau_m$: time to retrieve a unit content from a mobility-supported content provider; $\tau_{cs}$: time to retrieve a unit content from Tier 4 devices (content server); $\tau_R$: processing latency; $J_{max}^i$: number of caching devices that can cache content $i$ |
| | | $\sum_{j=1}^{J_{max}^i - 1} \frac{F_m}{V_m \tau_m} < \beta_i \le \sum_{j=1}^{J_{max}^i} \frac{F_m}{V_m \tau_m}$ | $\beta_i$: size of content $i$ |
| | | $\sum_{i=1}^{\xi}\sum_{j=1}^{J_{max}^i} P_{i,j}\,\beta_i \le C_{Total}$ | $C_{Total}$: total caching capacity available in Tier 2 devices |
| | | $\sum_{i=1}^{\xi}\sum_{j=1}^{J_{max}^i} \Psi_{i,j}\, CR_i \le \Psi_{Total}$ | $CR_i$: computing requirement for content $i$; $\Psi_{Total}$: total computing resource available in Tier 2 devices |
| Cache Consistency and Data Freshness Mechanisms | [67] | $Freshness(d_i) = 1$ where $A_i - G_i < L_i$ | $A_i$: arrival time of content $d_i$ at the caching node; $G_i$: generation time of content $d_i$ at the producer node; $L_i$: lifetime of content $d_i$; content is still fresh if the arrival time minus the generation time is less than its lifetime |
| | | $W_i^n = P_i^n \cdot Q_i^n$, where $P_i^n = \frac{R_i^n}{R_i^n + \sum_{t=1}^{n} R_t}$ and $Q_i^n = 1 - \frac{d_i^n}{d_i^n + \sum_{t=1}^{n} D_t}$ | $P_i^n$: popularity weight of incoming content; $Q_i^n$: size weight of incoming content; $R_i^n$: request rate of incoming content; $R_t$: request rates of cached items; $d_i^n$: size of incoming content; $D_t$: sizes of cached items |
| | | $W_e^C = P_e^C \cdot Q_e^C$, where $P_e^C = \frac{\sum_{i=1}^{k} R_e^{C_i}}{R_i^n + \sum_{t=1}^{n} R_t}$ and $Q_e^C = 1 - \frac{\sum_{i=1}^{k} d_e^{C_i}}{d_i^n + \sum_{t=1}^{n} D_t}$ | $R_e^{C_i}$: request rates of cached items to evict; $k$: number of cached items to evict; $d_e^{C_i}$: sizes of cached items to evict |
| | | $\sum D_t + d_i^n > C_L$ | $C_L$: cache limit (maximum cache size) |
| | | $R_i^n = \frac{N_{req}}{t_w - t_1}$ | $N_{req}$: number of content requests in the time window; $t_w$, $t_1$: last and first timestamps of the request window |
| | [68] | $\mathrm{Popularity\ Value} = \log_2 n$ | $n$: total number of interest packets received for the content during an estimation period |
| | | $H_k^{PCCM} = \gamma\left[\beta\,\frac{T_u}{T_c^{\,k+1}} + (1-\beta)\,\frac{T_u}{T_c^{\,K+1}}\right] + (1-\gamma)\, H_k^{TTL}$ | $H_k$: hop count to router $k$; $\gamma$: proportion of popular contents; $\alpha$, $\beta$: impact ratios related to TTL and update frequency; $T_c$: characteristic time of cache; $T_u$: update time of content |
| Dynamic Popularity Windows and Threshold-based Schemes | [70] | $\mathrm{Max}(W_{R_i}^s) = \theta \cdot D$ | $\theta$: coefficient to regulate window size; $D$: content catalogue size; $W_{R_i}^s$: popularity window of router $R_i$ |
| | | $T_{R_i}(D_j) = \mathrm{AvgPop}(W_{R_i}^s) \cdot (1-\delta) \cdot \mathrm{NormH}(D_j)$ | $T_{R_i}(D_j)$: threshold value for incoming content $D_j$ on $R_i$; $\mathrm{AvgPop}(W_{R_i}^s)$: average popularity of interest messages in $W_{R_i}^s$; $\mathrm{NormH}(D_j)$: normalized hop-count traversed by $D_j$ |
| | | $\mathrm{AvgPop}(W_{R_i}^s) = \frac{|W_{R_i}^s|}{\mathrm{Dis}(W_{R_i}^s)}$ | $\mathrm{Dis}(W_{R_i}^s)$: number of distinct interests in the window |
| | | $\mathrm{NormH}(D_j) = \frac{H_{D_j}}{H_{I_j}}$ | $H_{D_j}$: hop-count from the previous caching router to the current one; $H_{I_j}$: hop-count of the interest packet to the content provider |
| | | $T_{R_i}(D_j) = \frac{|W_{R_i}^s|}{\mathrm{Dis}(W_{R_i}^s)} \cdot (1-\delta) \cdot \frac{H_{D_j}}{H_{I_j}}$ | Combination of the above equations |
| | | $f(k, \alpha, D) = \frac{1/k^{\alpha}}{\sum_{n=1}^{D} 1/n^{\alpha}}$ | $k$: rank of the content; $\alpha$: skewness factor (typically 0.7 in simulations); $D$: total number of contents |
| | [71] | $\mathrm{Max}(W_{R_i}^s) \propto D$ | The maximum popularity window size should grow proportionally with the total number of distinct contents in the network |
| | | $\mathrm{Max}(W_{R_i}^s) \propto \lambda$ | The window size should also increase as the request rate grows to keep tracking accurate popularity |
| | | $\mathrm{Max}(W_{R_i}^s) = \theta \cdot D \cdot \lambda$ | $\theta$: coefficient for window size control; $D$: number of distinct contents; $\lambda$: content request rate |
| | | $T_{R_i}^p \propto W_{R_i}^s$ | As the window size increases, the threshold to consider content as "popular" must also increase |
| | [56] | $\mathrm{CacheWeight}_y = \frac{1}{y + \alpha},\ \alpha \ge 0$ | $y$: number of hops between a content provider and a consumer; $\alpha$: a constant |
| | | $\mathrm{CacheWeight}_{MRT} = MRT_m + MRT_{exp}$ | $MRT_m$: mean residence time; $MRT_{exp}$: expected mean residence time |
| | | $HPC = \mathrm{CacheWeight}_y + \mathrm{CacheWeight}_{MRT}$ | Combined caching weight |
| | | $CCV_i = \frac{c}{L_i} \times \mathrm{Cachesize}_i$ | $c$: compensation value; $L_i$: recent caching load; $\mathrm{Cachesize}_i$: size of cache at node $i$ |
| | | $w_r = \frac{\log r}{\log N_{total}}$ | $N_{total}$: total number of contents; $r$: rank of the content |
| | | $CCV_{th} = CCV_{highest} \times w_r$ | Defines a threshold to classify content as "popular" |
| | [72] | $\mathrm{ContentRetrievalLatency}_c = \frac{\sum_{i=1}^{R} \mathrm{latency}(R_i)}{R}$ | Average content retrieval latency for consumer $c$: the latencies of all requests $R_i$ are summed and divided by the total number of requests $R$ |
| | | $Stretch = \frac{\sum_{i=1}^{R} \mathrm{Hop\text{-}traveled}_i}{\sum_{i=1}^{R} \mathrm{Total\text{-}Hop}_i}$ | Efficiency of content retrieval in hops: the actual number of hops traveled to retrieve content compared with the total number of hops between user and server |

3.2. Freshness-Based Caching

Freshness-based caching is important in NDN-based IoT networks to keep data fresh and relevant. It stores the most recent data temporarily and gives priority to the freshest content, as illustrated in Figure 9. This helps remove old content that is rarely needed, whose presence would otherwise increase network load and slow response times [73,74], and frees up space in the nodes for new content. The freshness-based caching strategies proposed by researchers, classified in Figure 10, are discussed below. Table 5 summarizes the selected freshness-based caching strategies.

3.2.1. Machine Learning and Prediction-Based Freshness Management

This category uses machine learning techniques to manage data freshness accurately and intelligently during caching decisions in NDN-based IoT networks. Data freshness is particularly important in IoT environments, as it reflects the most recent state or reading of the data. Traditional systems determine freshness using methods such as time-to-live (TTL) values or predefined freshness thresholds [75]. However, these methods are no longer sufficient in dynamic IoT environments. Machine learning addresses this by training models that analyze data demand patterns, helping guide caching decisions in a way that increases system efficiency and ensures the provision of up-to-date information.
The study conducted in Ref. [76] introduced a caching strategy for NDN-IoT networks that accounts for data freshness: Cache Aging with Learning (CAL), a cache aging mechanism combined with artificial neural networks (ANNs) to address the challenge of maintaining fresh data. The policy follows eight steps: routers first evaluate data requests, then calculate freshness scores, and finally replace old data selectively so that only fresh data remains in the cache, improving caching efficiency. Using MATLAB-based simulation, CAL was evaluated against the Variable Least Recently Used (VLRU) strategy; the results indicated that CAL improves the CHR, reduces latency, and reduces network load.
In Ref. [74], the researchers proposed the Least Fresh First (LFF) policy for NDN-based IoT networks, which addresses one of the challenges of these dynamic networks, namely data freshness. LFF builds on the Autoregressive Moving Average (ARMA) model, a time-series prediction model. ARMA predicts future sensor events from past behavior and provides an estimate of the remaining lifetime (freshness) of cached content, allowing stale data to be identified and removed so that the cache retains the most important and freshest data. This scheme was combined with more than one caching strategy, yielding improvements across performance indicators. Simulations with the ccnSim simulator showed that LFF outperforms policies such as RR, LRU, LFU, and First In First Out (FIFO) in reducing pressure on service providers, hop counts, and response time.
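The core idea of LFF, predicting when a cached reading will be superseded, can be sketched without the full ARMA machinery. Below is our own simplified illustration that fits only a first-order autoregressive (AR(1)) term to past inter-update intervals by least squares; function names and the AR(1) restriction are assumptions, not the paper's exact model.

```python
# Simplified sketch of prediction-based residual freshness (our own
# AR(1) illustration, not the paper's full ARMA model): predict the next
# sensor update instant from past inter-update intervals, then estimate
# how long a cached reading remains fresh.
def predict_next_interval(intervals):
    """Predict the next inter-update interval from past ones via AR(1)."""
    x, y = intervals[:-1], intervals[1:]
    mx = sum(x) / len(x)
    my = sum(y) / len(y)
    denom = sum((xi - mx) ** 2 for xi in x)
    phi = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / denom if denom else 0.0
    return my + phi * (intervals[-1] - mx)

def residual_freshness(last_update, now, intervals):
    """Estimated time until the cached value is superseded (<= 0 means stale)."""
    return last_update + predict_next_interval(intervals) - now
```

A replacement policy in this spirit would evict the entry with the smallest (most negative) residual freshness first.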

3.2.2. Freshness-Aware Replacement Policies

Data freshness-based caching strategies attempt to keep data as fresh as possible and eliminate data that has become outdated, according to specific policies like LRU and FIFO [73,77,78]. Since expired and stale data is no longer relevant, mechanisms that rely on freshness when replacing stored data are grouped into this category. This ensures content quality, improves storage efficiency, avoids serving outdated data, and enhances overall network efficiency.
The researchers in the study in [79] examined the role of data freshness in the replacement of cached content in NDN-IoT networks. The study demonstrated the importance of fresh data in IoT environments, which requires efficient storage to keep data current, helping improve data access and reduce network congestion. The authors focused on the LRU and FIFO replacement policies and analyzed the impact of data freshness on each. The policies were evaluated for a range of freshness periods (200 ms, 2000 ms, and 20,000 ms) using the Mini-NDN emulator. When the freshness period was set to 200 ms, the RTT increased significantly because frequent cache misses forced requests back to the producer; as the freshness period increased to 2000 ms and 20,000 ms, more data remained valid in the cache, improving the CHR and reducing RTT. This effect was more pronounced for LRU than for FIFO, showing that LRU with freshness awareness enhances the CHR and reduces Round Trip Time (RTT) compared to the FIFO replacement policy, and confirming the benefit of freshness-aware caching in NDN-IoT environments. The study did not consider factors such as energy efficiency and security as criteria related to data freshness.
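The freshness-aware LRU behaviour studied above can be sketched as follows. This is a hedged illustration (class and parameter names such as `FreshLRU` and `freshness_period` are ours): an entry is served only while its freshness period has not elapsed; expired entries count as misses and are dropped, and otherwise the least recently used entry is evicted when the cache is full.

```python
from collections import OrderedDict

# Sketch of freshness-aware LRU (illustrative names): stale entries are
# treated as misses and dropped; otherwise standard LRU eviction applies.
class FreshLRU:
    def __init__(self, capacity, freshness_period):
        self.capacity = capacity
        self.freshness_period = freshness_period
        self.entries = OrderedDict()  # name -> (value, cached_at)

    def get(self, name, now):
        if name not in self.entries:
            return None
        value, cached_at = self.entries[name]
        if now - cached_at > self.freshness_period:
            del self.entries[name]       # stale: treat as a miss
            return None
        self.entries.move_to_end(name)   # refresh LRU position
        return value

    def put(self, name, value, now):
        if name in self.entries:
            self.entries.move_to_end(name)
        elif len(self.entries) >= self.capacity:
            self.entries.popitem(last=False)  # evict least recently used
        self.entries[name] = (value, now)
```

With a short freshness period most `get` calls miss, which mirrors the high RTT the study observed at 200 ms.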
In their investigation, the researchers in Ref. [80] introduced a smart caching strategy that balances the freshness of cached content against the energy level of nodes. The strategy adds a Smart Caching Table (SCT), which tracks and manages cache entries, keeping fresh data and removing less fresh data, in line with dynamic environments where data changes periodically. Employing the SCT effectively helps manage the cache memory appropriately and extend the network's lifetime. Simulation results using the ndnSIM simulator showed the efficiency of SCTSmart: it significantly reduces energy consumption while maintaining a high level of freshness for cached data. Table 6 compares the equations used in the above studies.
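A weighted caching-utility score of the general shape used by SCTSmart (a weighted combination of node energy, cache occupancy, and data freshness) can be sketched as below. The weights, exponents, and normalized input ranges here are our own illustrative assumptions, not the paper's tuned values.

```python
# Minimal sketch of a weighted caching-utility score balancing energy,
# occupancy, and freshness, in the spirit of SCTSmart's utility function
# (weights, exponents, and [0, 1] normalization are assumptions).
def caching_utility(energy, occupancy, freshness,
                    w=(0.4, 0.3, 0.3), n=(1, 1, 1)):
    """All inputs normalized to [0, 1]; a higher score favors caching."""
    w1, w2, w3 = w
    n1, n2, n3 = n
    return w1 * energy ** n1 + w2 * (1 - occupancy) ** n2 + w3 * freshness ** n3
```

A node would cache (or retain) the entries with the highest utility, so fresh data on an energy-rich, lightly loaded node scores best.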
Table 5. Summary of the freshness-based caching strategies.
| Category | Study | Proposed Strategy | Key Features | Evaluation and Results | Tool |
|---|---|---|---|---|---|
| Machine Learning and Prediction-Based Freshness Management | [76] | Cache Aging with Learning (CAL) | Cache aging mechanism combined with artificial neural networks (ANNs) to address the challenge of keeping data fresh | Outperformed VLRU in CHR; reduced latency and network load | MATLAB-based simulation |
| | [74] | Least Fresh First (LFF) policy | Uses the ARMA model for time-series prediction; removes outdated content based on estimated remaining freshness; ensures fresh data caching | Outperformed RR, LRU, LFU, and FIFO in reducing service provider pressure, hops, and response time | ccnSim |
| Freshness-Aware Replacement Policies | [79] | - | Discusses the role of data freshness in the replacement of cached content | LRU with freshness awareness enhances CHR and reduces RTT compared to the FIFO replacement policy | Mini-NDN |
| | [80] | Smart caching (SCTSmart) | Employs the SCTSmart caching table to track and manage cached data freshness | Reduced energy consumption while maintaining high data freshness levels | ndnSIM |
Table 6. Formula comparison for freshness-based caching strategies.
| Category | Study | Equation | Description |
|---|---|---|---|
| Machine Learning and Prediction-Based Freshness Management | [76] | $D = T_{rtt} + T_{proc} + T_{queue} + T_{trans}$ | $T_{rtt}$: round-trip time delay; $T_{proc}$: processing time at network nodes; $T_{queue}$: queuing delay; $T_{trans}$: transmission time |
| | | $TL = \frac{\mathrm{Total\ Number\ of\ Requests} \times \mathrm{Data\ Size\ per\ Request}}{\mathrm{Simulation\ Time}}$ | Average traffic load over the simulation time |
| | | $RMSE = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(F_{rate\_pred,i} - F_{rate\_obs,i}\right)^2}$ | $n$: number of observations; $F_{rate\_pred,i}$: predicted freshness rate for observation $i$; $F_{rate\_obs,i}$: observed freshness rate for observation $i$ |
| | | $M = \begin{cases}\mathrm{Data\ request\ message} & \mathrm{if}\ M \in \tau \\ \mathrm{Data\ message} & \mathrm{if}\ M \in D\end{cases}$ | $M$: incoming message; $\tau$: set of data request messages; $D$: set of data response messages |
| | | $\mathrm{Cache\ Status} = \begin{cases}1 & \mathrm{if}\ (\mathrm{sensor\ number},\ \mathrm{data\ type}) \in CS \\ 0 & \mathrm{otherwise}\end{cases}$ | Indicates whether a matching entry exists in the cache store $CS$ |
| | | $V = 1 - \frac{RT - CDTS}{R_{type}}$ | $RT$: request time; $CDTS$: cached data timestamp; $R_{type}$: requested data type refresh rate |
| | | $\mathrm{Time\ Validation} = 1 - \frac{NDTS - CDTS}{ND_{type}}$ | $NDTS$: new data timestamp; $CDTS$: cached data timestamp; $ND_{type}$: new data type refresh rate |
| | | $V_{i,j} = \alpha \times F_{Rate} + (1-\alpha) \times \mathrm{Time\ Validation}$ | $\alpha$: weighting coefficient; $F_{Rate}$: freshness rate |
| | | $F_{Rate} = \frac{SRN}{TR}$ | $SRN$: same request number (specific data type requests); $TR$: total requests |
| | | $F_{rate\_pred} = \frac{SRN + PSRN}{TR + PTR}$ | $PSRN$: predicted same request number; $PTR$: predicted total requests |
| | [74] | $\alpha = 1 - \frac{1}{N \cdot R}\sum_{i=1}^{N}\sum_{r=1}^{R}\frac{h_i^r}{H_i^r}$ | $N$: number of consumers; $R$: number of requests per consumer; $h_i^r$: number of hops to the cache for request $r$ from consumer $i$; $H_i^r$: number of hops to the producer for the same request |
| | | $\beta = 1 - \frac{\sum_{i=1}^{N} \mathrm{serverhit}_i}{\sum_{i=1}^{N} \mathrm{totalReq}_i}$ | $\mathrm{serverhit}_i$: number of requests by consumer $i$ served by the producer; $\mathrm{totalReq}_i$: total number of requests sent by consumer $i$ |
| | | $\gamma = \frac{1}{N \cdot R}\sum_{i=1}^{N}\sum_{r=1}^{R} T_i^r$ | $T_i^r$: response time for request $r$ from consumer $i$ |
| | | $\mathrm{Totalcost} = P_{hit}C_{hit} + (1-P_{hit})C_{miss} + \frac{N_S}{2}\left[\left(1-\frac{N_{caching}}{N_{decisions}}\right) + \left(1-\frac{N_{evictions}}{N_{caching}}\right)\right]$ | $P_{hit}$: probability of cache hit; $C_{hit}$: delay for content from cache; $C_{miss}$: delay from producer; $N_S$: number of caching nodes; $N_{caching}$: number of times content cached; $N_{decisions}$: number of caching decisions; $N_{evictions}$: number of evictions |
| | | $\mathrm{Validity}\ (\%) = \frac{\sum_{i=1}^{N} \mathrm{valid}_i \times 100}{\sum_{i=1}^{N}(\mathrm{valid}_i + \mathrm{invalid}_i)}$ | $\mathrm{valid}_i$: number of valid contents received by consumer $i$; $\mathrm{invalid}_i$: number of invalid contents received by consumer $i$ |
| Freshness-Aware Replacement Policies | [79] | $RTT = T_{processing} + T_{queueing} + T_{transmission} + T_{propagation}$ | $T_{processing}$: time taken by routers to process packets; $T_{queueing}$: delay caused by packets waiting in queues; $T_{transmission}$: time taken to push all packet bits onto the link; $T_{propagation}$: time for the signal to propagate through the medium |
| | [80] | $F_u = w_1\, EN^{n_1} + w_2\,(1-OC)^{n_2} + w_3\, FR^{n_3}$ | $F_u$: caching utility function value; $EN$: energy level of the node; $OC$: cache occupancy (how full the cache is); $FR$: freshness of the data; $w_1, w_2, w_3$: weight coefficients ($0 \le w_i \le 1$); $n_1, n_2, n_3$: power exponents ($n \ge 1$) |
| | | $E_{total} = E_{tran} + E_{cache} + E_{amp}$ | $E_{tran} = 50\ \mathrm{nJ/bit}$ (energy required for transmission); $E_{cache} = 10\ \mathrm{nJ/bit}$ (energy required for data caching); $E_{amp} = 10\ \mathrm{pJ/bit/m^2}$ (energy required for amplification) |

3.3. Collaborative/Cooperative Caching

Collaborative/cooperative caching has become widespread in NDN-based IoT networks. It allows caching nodes to work together in mutually supportive and coordinated ways toward a common goal [29,81], sharing responsibility for decision-making and thus sharing the benefits across nodes, as illustrated in Figure 11. This form of caching helps address important challenges affecting network performance, such as energy consumption, latency, data duplication, and storage space. When content is requested multiple times from different locations, it may be stored in more than one place in the network; hence the need for cooperative caching strategies, which are classified as shown in Figure 12. Recent studies have proposed dynamic, cluster-based cooperative caching methods that group IoT nodes into edge-level clusters, allowing them to share content more efficiently. This coordination helps boost the cache hit ratio, reduce latency, and enhance energy efficiency in NDN-IoT networks [82]. Table 7 summarizes selected cooperative caching strategies.

3.3.1. Cooperative Caching for Energy Efficiency and Retrieval Performance

This category showcases collaborative caching strategies that pursue two goals: reducing energy consumption and improving data retrieval performance, especially in IoT environments. Nodes within a given area coordinate their caching decisions: instead of repeatedly caching the same content at a single location, common content is distributed among the nodes to prevent duplication and improve coverage. This coordination saves energy by reducing the number of duplicate cached copies, unnecessary data transfers, and accesses to the data server. Regarding retrieval performance, collaborative caching places data strategically, ensuring faster response times and fewer hops, all of which increases network efficiency [83].
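One simple way nodes can distribute content without duplication, as described above, is deterministic assignment by hashing the content name. The sketch below is our own illustration (not a mechanism from the cited works): every node in a cluster runs the same function, so any node can compute which neighbour holds a content item without extra signalling.

```python
import hashlib

# Illustrative sketch (our own, not from the cited works): cluster nodes
# avoid caching duplicate copies by deterministically assigning each
# content name to one member via hashing; any node can then compute
# where a neighbour holds the content without extra coordination traffic.
def responsible_node(content_name, cluster_nodes):
    digest = hashlib.sha256(content_name.encode()).hexdigest()
    return cluster_nodes[int(digest, 16) % len(cluster_nodes)]
```

A lookup for a cached item then goes straight to the responsible neighbour instead of the producer, saving both hops and duplicate storage.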
The study conducted in Ref. [84] introduced a cooperative multi-hop caching (SMCC) strategy that aims to improve energy efficiency in information-centric wireless sensor networks (ICWSNs). The study addressed the ability of SMCC to cope with the high energy consumption caused by request and response operations, during which nodes remain active for long periods. SMCC balances content retrieval time against energy consumption by introducing regional storage: content is cached within a specific range (an α-hop distance), which allows nodes to enter sleep mode while data remains available through multi-hop cooperation. The simulation results showed that SMCC achieves energy savings with lower content retrieval time, outperforming CoCa (cooperative caching in ICN) and Always Active (AA). The study concluded with directions for future work, including deploying SMCC in large-scale wireless sensor network systems in real environments.
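The α-hop idea can be made concrete with a small sketch. The function name and inputs below are our own illustration of the retrieval rule, not code from Ref. [84]: a consumer fetches from the nearest cooperative copy within α hops and falls back to the producer otherwise, so nodes outside the scope can sleep.

```python
# Sketch of alpha-hop regional retrieval (parameter names are illustrative):
# copy_distances lists hop counts to known cached copies, producer_distance
# is the hop count to the original producer.
def retrieval_hops(copy_distances, producer_distance, alpha):
    """Hop count needed to retrieve content under alpha-hop cooperation."""
    in_scope = [d for d in copy_distances if d <= alpha]
    return min(in_scope) if in_scope else producer_distance
```

A larger α raises the chance of a nearby hit but widens the region that must stay reachable, which is exactly the energy/latency balance SMCC tunes.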
As discussed in the work in Ref. [85], the researchers proposed a green cooperative caching strategy that saves energy in future networks by reducing energy consumption while keeping Quality of Experience (QoE) at an acceptable level. The strategy relies on proactive caching at auxiliary nodes to improve caching decisions. Simulation results obtained with the IBM ILOG CPLEX optimization studio and MATLAB R2016b showed that the strategy can reduce non-renewable energy consumption by 23% while also improving the CHR. The study further suggested powering caching nodes with renewable energy sources, which would allow the approach to scale.
As discussed in the work in Ref. [86], the researchers introduced the integration of cooperative caching into NDN, aiming to improve energy efficiency and data availability in NDN-IoT networks. The study presented a companion protocol called CoCa (Cooperative Caching), which improves caching efficiency by coordinating energy-saving mechanisms with cooperative caching: by integrating caching with deep-sleep mechanisms, content remains accessible even while IoT devices are in sleep mode. Max Diversity Most Recent (MDMR) is used as the caching and content replacement policy, which diversifies the cached content and improves its freshness. The results showed that this strategy can effectively reduce energy consumption while maintaining content availability. The study also discussed the major challenges in IoT, such as energy consumption, memory limitations, and intermittent connectivity, which distributed caching based on NDN can help overcome. In addition, the study indicated that auto-configuration mechanisms enhance caching operations, contributing to scalability and energy savings. Overall, NDN + CoCa increases energy efficiency, reduces response time, and improves content diversity.
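The MDMR intuition (one fresh entry per source, evict the oldest when full) can be sketched as a replacement routine. The entry format and function name below are our own illustration of that intuition, not the implementation from Ref. [86].

```python
# Sketch of an MDMR-style replacement policy: keep at most one (the most
# recent) entry per source to maximise diversity, and evict the globally
# oldest entry when the store is full. Illustrative, not from Ref. [86].
def mdmr_insert(cache, item, capacity):
    """cache: list of (source, timestamp, name) tuples; returns the new cache."""
    source, timestamp, _ = item
    # Drop any older entry from the same source (freshness within diversity).
    cache = [e for e in cache if e[0] != source or e[1] >= timestamp]
    if any(e[0] == source for e in cache):
        return cache  # a fresher entry from this source is already cached
    if len(cache) >= capacity:
        cache.remove(min(cache, key=lambda e: e[1]))  # evict the oldest entry
    return cache + [item]
```

Because each source occupies at most one slot, a small cache still covers many sensors, which is what keeps content available while producers sleep.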

3.3.2. Collaborative Caching for Multimedia and Vehicular Networks

In this category, collaborative caching is designed to serve high-throughput, latency-sensitive applications such as multimedia streaming and vehicle-to-vehicle data exchange. Content is typically segmented and distributed across caches, using protocols that prioritize content, predict traffic, and, in some cases, share content among nodes. This type of intelligent, collaborative caching is fundamental to building responsive networks, especially in smart cities and next-generation internet applications [87].
In the paper in Ref. [88], the researchers present an enhanced collaborative caching scheme for multimedia streaming in resource-constrained vehicular environments within ICNs. The study addressed challenges such as limited storage capacity under high vehicle traffic, along with the security concerns of these networks. The hierarchical scheme is divided into two layers: a core layer and an edge layer. The strategy caches content at edge nodes according to content popularity, using Non-Negative Matrix Factorization (NMF) to predict future demand, thereby enhancing network efficiency and reducing resource consumption. Clusters are formed with a modified Weighted Clustering Algorithm (WCA) to balance high vehicle traffic against limited storage capacity. In addition, the strategy supports chunk-level caching to improve storage management and reduce content retrieval time. The scheme was compared with others, such as LCE and Weighted Popularity (WAVE), and the simulation results, obtained with the MATLAB parallel simulation toolkit, showed an improved average CHR and a reduced number of hops. The results also showed that this strategy enhances QoE in dynamic vehicular networks, making it well suited to intelligent transportation systems.
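The WCA-style cluster-head score combines several normalized node metrics, as summarized for this scheme in Table 8. The sketch below illustrates that combination; the equal default weights and the normalization by network-wide averages are our assumptions, not values from Ref. [88].

```python
# Illustrative WCA-style node weight: combine degree, transmission time,
# hop distance, power consumption, and cache utilisation, each normalised
# by its network-wide average. Weights are assumed equal for the sketch.
def wca_weight(d_n, T_n, H_n, p_n, c_n, averages, w=(0.2, 0.2, 0.2, 0.2, 0.2)):
    """Lower-is-better clustering score for node n."""
    M_d, T, H, P, C = averages  # network-wide averages of the five metrics
    return (w[0] * (1 - d_n / M_d) + w[1] * T_n / T + w[2] * H_n / H
            + w[3] * p_n / P + w[4] * c_n / C)
```

Nodes would then be ranked by this score when electing cluster heads, balancing connectivity against energy and cache load.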

3.3.3. Collaborative/Cooperative Hybrid Caching Strategies

Strategies in the hybrid collaborative caching category combine several mechanisms, most commonly on-path and off-path caching. Popular content is stored on-path to reduce response time, while less-demanded content is stored off-path to maintain data diversity and reduce redundancy [89]. In addition, nodes collaborate on caching and replacement decisions. In this category, content is intelligently distributed among nodes, reducing network congestion and increasing storage efficiency, which in turn improves data access rates and reduces latency.
In Ref. [90], the researchers discussed the impact of combining cache placement and replacement operations in NDN networks. The study addressed four content placement policies: LCD, LCE, ProbCache, and Cache Less for More (CL4M). Each of these placement policies was combined with two replacement policies: LFU and LRU. Using the widely adopted Icarus simulator and evaluating CHR, latency, link load, and path stretch, the authors found that the combination of LCE and LFU outperforms the combination of LCE and LRU. The study therefore emphasized the importance of choosing a cache placement policy that is compatible with the replacement policy in order to improve network efficiency. However, the study examined only specific placement and replacement policies and did not consider energy consumption, one of the most important evaluation metrics for caching strategies.
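The best-performing combination reported above, LCE placement with LFU replacement, can be sketched as a tiny cache class. This is an illustrative toy model, not code from Ref. [90] or the Icarus simulator; class and method names are ours.

```python
from collections import Counter

class LfuCache:
    """LFU replacement; under LCE placement, every node on the Data packet's
    return path would call insert() for every item (Leave Copy Everywhere)."""
    def __init__(self, capacity):
        self.capacity, self.store, self.freq = capacity, {}, Counter()

    def insert(self, name, data):
        if name not in self.store and len(self.store) >= self.capacity:
            victim = min(self.store, key=lambda n: self.freq[n])
            del self.store[victim]  # evict the least frequently used item
        self.store[name] = data

    def get(self, name):
        if name in self.store:
            self.freq[name] += 1  # count hits toward future evictions
            return self.store[name]
        return None
```

Because LCE floods copies along the whole path, the replacement policy does most of the work, which is consistent with the study's finding that the LFU/LRU choice changes the outcome.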
The study conducted in Ref. [89] introduced a hybrid caching strategy that combines on-path caching and off-path caching. Most caching mechanisms in ICNs rely on on-path caching, and with large content flows, this can duplicate stored data and thus reduce the efficiency of the network as a whole. To solve this problem, a cooperative caching policy was proposed in which content is stored as usual on-path, while a portion of it is also stored off-path at a central location that acts as a caching backstop, improving caching and thus network efficiency. The distribution of content between on-path and off-path storage follows a simple heuristic: highly popular, frequently requested content is given priority for on-path storage, while the ICN routers determine the most important content to store off-path. MATLAB-based simulation results showed a significant improvement in the CHR compared to strategies based on on-path caching only. The strategy also reduced data redundancy and lowered content retrieval delay. The study suggested extending this strategy and verifying its effectiveness in real environments. Table 8 compares the equations used in the above studies.
Table 7. Summary of the cooperative caching strategies.
Category | Study | Proposed Strategy | Key Features | Evaluation and Results | Tool
Cooperative Caching for Energy Efficiency and Retrieval Performance | [84] | Cooperative Multi-Hop Caching (SMCC) | Balances content retrieval time and energy consumption; regional storage within α-hop distance; enables multi-hop cooperative caching. | Demonstrated energy savings and reduced content retrieval time compared to CoCa and AA. | -
Cooperative Caching for Energy Efficiency and Retrieval Performance | [85] | Green Cooperative Caching Strategy | Proactive caching in auxiliary nodes; reduces non-renewable energy consumption while maintaining QoE standards. | Reduced non-renewable energy consumption while improving CHR. | IBM ILOG CPLEX and MATLAB R2016b
Cooperative Caching for Energy Efficiency and Retrieval Performance | [86] | Cooperative Caching (CoCa) with NDN | Integrates cooperative caching with deep-sleep mechanisms; uses MDMR for content replacement. | Improved energy efficiency, response time, and content diversity. | -
Collaborative Caching for Multimedia and Vehicular Networks | [88] | Enhanced Collaborative Caching for Multimedia Streaming | Uses hierarchical edge-core structure; clustering principle for balancing traffic and storage; supports chunk-level caching for efficiency. | Improved CHR, reduced hops, and enhanced QoE in vehicular networks compared with LCE and WAVE. | MATLAB Parallel Simulation Toolkit
Collaborative/Cooperative Hybrid Caching Strategies | [90] | - | Discusses the impact of combining cache placement and replacement operations in NDN networks. | Combining LCE with LFU outperforms combining LCE with LRU. | Icarus
Collaborative/Cooperative Hybrid Caching Strategies | [89] | Hybrid Caching Strategy (On-Path and Off-Path) | Combines on-path and off-path caching to optimize storage efficiency; heuristic mechanism prioritizes frequently requested content. | Improved CHR, reduced redundancy, and decreased retrieval delay. | MATLAB
Table 8. Formula comparison for cooperative caching strategies.
Category | Study | Equations | Description
Cooperative Caching for Energy Efficiency and Retrieval Performance | [84] | $E_{uc} = e_a(k+1) + e_l t_w (n + 2n_t - k - 1)$ | $E_{uc}$: total energy consumption without cooperative caching; $e_a$: energy consumption per time unit in active mode; $e_l$: energy consumption per time unit in light-sleep mode; $k$: number of hops from the producer to the consumer (original transmission path); $n$: total number of sensors in the region; $n_t$: number of sensors in light-sleep or active mode at time $t$; $t_w$: waiting-phase duration (in time slots).
 | | $E_{co} = e_a(k' + d_\alpha + 1) + e_l t_w \left(n_t - d_\alpha + n_{co} - k' + 2(n_t - d_\alpha) - 1\right) + e_d t_w (d_\alpha - n_{co})$ | $E_{co}$: total energy consumption with cooperative caching; $k'$: number of hops from the cooperator to the consumer (shortened path); $d_\alpha$: number of sensors within cooperation distance $\alpha$ (cooperation scope); $n_{co}$: number of cooperating sensors within the $\alpha$-hop region; $e_d$: energy consumption per time unit in deep-sleep mode; remaining terms as above.
 | | $\Delta E_\alpha = E_{uc} - E_{co} - \left(1 - (1-\gamma)^{\alpha}\right) e_a k$ | $\Delta E_\alpha$: net energy saved by cooperative caching with scope $\alpha$; $\gamma$: risk coefficient of cooperation (probability that a cooperator cannot serve a request).
 | | $\Delta D_\alpha = (1-\gamma)^{\alpha}\,\dfrac{\theta k - (\theta k' + T)}{T}$ | $\Delta D_\alpha$: net reduction in data retrieval delay due to cooperative caching; $\theta$: average transmission delay per hop; $T$: deep-sleep schedule time (delay incurred while waiting for a sensor to wake up).
 | [85] | $p_k = \dfrac{k^{-\beta}}{\sum_{f=1}^{N} f^{-\beta}}$ | $p_k$: probability of requesting content $c_k$; $k$: popularity rank of $c_k$ (1 = most popular); $\beta$: skewness parameter of the Zipf distribution (controls how steep the popularity curve is); $N$: total number of content objects.
 | | $Q_{i,j,k}(t) = s_k\, p_k\, q_i(t)\, H_{i,j}$ | $Q_{i,j,k}(t)$: traffic cost at time slot $t$ between node $v_j$ and node $v_i$ for content $c_k$; $s_k$: size of content $c_k$; $q_i(t)$: number of content requests at node $v_i$ during time slot $t$; $H_{i,j}$: minimum hop distance between nodes $v_i$ and $v_j$.
 | | $B_i(t) = \min\left\{B_M,\; U_i(t-1) + R_i(t)\right\}$ | $B_i(t)$: battery level of node $v_i$ at time slot $t$; $B_M$: maximum battery capacity; $U_i(t-1)$: energy remaining from the previous slot after usage; $R_i(t)$: renewable energy harvested at node $v_i$ during time slot $t$.
 | | $U_i(t) = \begin{cases} B_i(t) - E_i^{ca,t}, & \text{if } B_i(t) \ge E_i^{ca,t} \\ 0, & \text{otherwise} \end{cases}$ | $U_i(t)$: usable stored green energy at node $v_i$ for time slot $t$; $E_i^{ca,t}$: caching energy required at node $v_i$ for time slot $t$.
 | | $d_{i,j,k}(t) = \begin{cases} H_{i,j}\, d_0, & \text{if } v_j \text{ serves the request} \\ H_{i,d}\, d_0 + d_1, & \text{if the data center serves the request} \end{cases}$ | $d_{i,j,k}(t)$: end-to-end delay for serving content $c_k$ at node $v_i$ via node $v_j$; $d_0$: delay per hop; $H_{i,d}$: hop distance from node $v_i$ to the node closest to the data center; $d_1$: additional delay from the data center to the network.
 | [86] | $E = \sum_{\text{state}} P_{\text{state}}\, t_{\text{state}}$ | $E$: total energy consumed by the node; $P_{\text{state}}$: power consumed in a particular state (e.g., sleeping, active listening, unicast sending, broadcast sending); $t_{\text{state}}$: duration the node spends in that state.
 | | $P(R_i = r) = \binom{n-1}{r}\, p_s^{\,r}\, (1-p_s)^{\,n-1-r}$ | $R_i$: number of replicas of the most recent sensor value from source $i$; $n$: total number of nodes.
 | | $p_s = (1-p)\, p_c$ | $p_s$: success probability for replication, dependent on the sleep probability $p$ and the caching probability $p_c$.
 | | $E[\text{content multiplicity}] = 1 + (n-1)\, p_s$ | Average number of replicas of each content item across the network.
 | | $A = 1 - \left[p(1-p_s) + p\, p_s (1-p_s)^{L-1}\right]^{n-1}$ | $A$: probability that content is available in the network; $L$: lifetime (freshness window) of the data.
 | | $A = 1 - p\left[p + (1-p)\, p^{L}\right]^{n_i - 1}$ | $n_i$: number of designated caching nodes for content source $i$.
 | | $E[\text{collectable content items}] = S\left(1 - \left[p(1-p_s) + p\, p_s (1-p_s)^{L-1}\right]^{n-1}\right)$ | $S$: total number of content sources.
 | | $E[\text{collectable MDMR content}] = S - S\, p\left[p + (1-p)\, p^{L}\right]^{n_i - 1}$ | Expected number of content items collectable under the MDMR policy.
Collaborative Caching for Multimedia and Vehicular Networks | [88] | $f_i = \left\{f_i^1, f_i^2, \ldots, f_i^Z\right\}$ | $Z$: number of chunks of file $f_i$.
 | | $CC(j,t) = \left\{CC_1^1(j,t), \ldots, CC_v^a(j,t), \ldots, CC_F^Z(j,t)\right\}$ | $CC_v^a(j,t) = \text{true}$ indicates that chunk $a$ of file $v$ is cached at node $j$ at time $t$; $F$: number of files.
 | | $\sum_{v=1}^{F} \sum_{a=1}^{Z} \alpha \cdot CC_v^a(j,t) \le cs_j$ | $\alpha$: chunk size; $cs_j$: cache capacity of node $j$.
 | | $w_n = w_1\left(1 - \dfrac{d_n}{M_d}\right) + w_2\dfrac{T_n}{T} + w_3\dfrac{H_n}{H} + w_4\dfrac{p_n}{P} + w_5\dfrac{c_n}{C}$ | $d_n$: degree of node $n$; $T_n$: mean transmission time from node $n$; $H_n$: weighted sum of neighbor hop distances; $p_n$: power consumption of node $n$; $c_n$: cache utilization of node $n$; $M_d$, $T$, $H$, $P$, $C$: corresponding average values across all nodes.
Collaborative/Cooperative Hybrid Caching Strategies | [89] | $\tau_i = \dfrac{H(X)}{\log_2 \lambda_i}$ | $\tau_i$: diversity index of incoming requests at router $i$; $H(X)$: entropy of the incoming request set $X$; $\lambda_i$: average request rate at router $i$; range: $0 \le \tau_i \le 1$.
 | | maximize $E[p_i]$, minimize $E[d_i]$, subject to $\Delta B_i \ge 0$ | $E[p_i]$: expected (average) cache hit probability at router $i$; $E[d_i]$: expected end-to-end delay while serving requests; $\Delta B_i$: available cache space at router $i$.
 | | $CS_i^k = c_1 D_i^k + c_2 F_i^k$ | $CS_i^k$: caching score of content $k$ at router $i$; $D_i^k$: normalized distance (in hop count) from the source to router $i$; $F_i^k$: normalized frequency of access for content $k$; $c_1$, $c_2$: constants weighing distance and frequency.
 | | maximize $E[p_i]$, subject to $E[d_i] \le \delta$, $\Delta B_i \ge 0$ | $\delta$: delay threshold required by the service agreement; turns the previous bi-objective problem into a constrained optimization for practical deployment.
 | | maximize $E[p_C]$, subject to $E[d_i] \le \delta$ for each router $i$, $\Delta B_C \ge 0$ | $E[p_C]$: expected cache hit ratio at the central router; $\Delta B_C$: available cache space at the central router; $E[d_i]$: average delay from each router $i$.
 | | maximize $E[p_C]$, subject to $a_i E[d_i] \le \delta$, $\Delta B_C \ge 0$, with $s_i = \dfrac{E[d_i]}{g_i}$ and $g_i = E[p_i]\, \lambda_i\, \tau_i$ | $a_i$: Boolean variable, 1 if content from router $i$ is chosen and 0 otherwise; $g_i$: rate of non-repetitive request traffic from router $i$; $s_i$: score used to prioritize edge routers based on delay and traffic diversity.

3.4. Hybrid Caching

Hybrid caching strategies in NDN-based IoT networks merge two or more policies from different classes, as shown in Figure 13, with the aim of improving one of the recognized challenges in NDN-based IoT networks, such as access time, energy, or data redundancy [27], and thereby improving overall network efficiency. Table 9 summarizes selected hybrid caching strategies. Hybrid caching strategies have proven highly effective in IoT settings precisely because they do not depend on a single method; instead, they bring together different caching techniques within a unified framework, leveraging their combined strengths. Some of these strategies even use AI and machine learning, allowing them to adapt to factors such as network status, content type, and available resources. The various categories of hybrid caching are outlined in Figure 14.

3.4.1. Energy-Aware and Centrality-Based Caching Strategies

This category of hybrid caching strategies focuses on two factors: energy efficiency and node centrality in the network. Each node in the network considers its energy level when making caching decisions, as such a resource is limited in IoT environments [91]. Regarding centrality, a set of metrics, such as betweenness centrality or closeness, is used to determine the importance of the node where the content is cached. Combining these metrics results in intelligent content caching decisions, ensuring faster response times and energy savings.
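A caching decision of this kind can be sketched as a weighted score over normalized centrality and residual energy. The weights and threshold below are illustrative assumptions; they are not the formulation used by any specific surveyed strategy.

```python
# Illustrative centrality-plus-energy caching decision: a node caches only
# when its combined normalised score clears a threshold. The 0.5 weight and
# 0.6 threshold are assumptions for the sketch, not values from the survey.
def should_cache(betweenness, residual_energy, max_betweenness, max_energy,
                 weight=0.5, threshold=0.6):
    """True if the node is central enough and has enough energy to cache."""
    score = (weight * betweenness / max_betweenness
             + (1 - weight) * residual_energy / max_energy)
    return score >= threshold
```

Lowering `weight` shifts the decision toward energy preservation, which is the knob such hybrid schemes tune for battery-powered IoT nodes.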
In the study in Ref. [92], the researchers proposed the Energy-aware Centrality-Based Caching (EABC) strategy, which accounts for the energy consumption of caching operations in NDN-IoT networks and distributes cache operations to nodes with high centrality. Traditional caching methods use betweenness centrality to determine storage locations but do not consider the energy constraints of IoT devices, which can cause network inefficiency and sometimes downtime. EABC therefore balances content delivery latency and energy efficiency through dynamic caching decisions based on a node's centrality and residual energy. The EABC strategy was evaluated using the ndnSIM simulator and compared with the CEE, Probabilistic Storage (Prob), pCASTING, and Approximate Betweenness Centrality (ABC) strategies. The results showed that the proposed strategy outperformed its counterparts in terms of cache hit rates, retrieval delay, and network lifetime. Notably, EABC improved the CHR by approximately 6.44%, reduced latency by about 12.6%, and extended the network lifetime by around 25.1% compared to the ABC strategy.
In the study in Ref. [93], the researchers proposed a Lifetime Cooperative Caching (LCC) strategy. This strategy relies on two factors in the caching decision: the lifetime of the content and the rate of requests for it. The strategy addresses challenges in NDN-IoT networks such as energy consumption and data retrieval time. Simulation results obtained with a C++ simulator showed that LCC performs better in terms of energy consumption, number of hops, and retrieval time than policies such as LCE and probabilistic caching, achieving a balance between these challenges. The study anticipates applying the LCC strategy to mobile networks and machine-to-machine (M2M) communication in the future.
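The LCC idea of weighing lifetime against request rate admits a compact sketch: cache an item only if it is expected to be requested again before it expires. The thresholding below is our own illustration of that idea, not the decision rule from Ref. [93].

```python
# Illustrative lifetime x request-rate admission test: cache only if the
# content is expected to serve at least `min_hits` requests before expiry.
# The threshold form is an assumption, not the rule from the LCC paper.
def lcc_should_cache(remaining_lifetime_s, request_rate_hz, min_hits=1.0):
    """True if expected requests within the remaining lifetime >= min_hits."""
    return remaining_lifetime_s * request_rate_hz >= min_hits
```

Short-lived sensor readings with low demand are rejected outright, which is how such a rule avoids wasting cache slots and the energy spent filling them.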

3.4.2. AI-Driven Caching Approaches

AI-driven caching strategies are rapidly emerging, especially in IoT environments, which demand quick and intelligent decisions for managing content caching. Various AI techniques are used in caching strategies, depending on the context and the resources to be managed. The ultimate goal of all these techniques is to improve cache hit rates, reduce response time, and save energy and memory, making such strategies ideal for large, complex networks [94].
The contribution of Ref. [95] lies in their exploration of a heterogeneous edge caching scheme for ICN-IoT applications. This strategy relies on integrating ICN-IoT networks and leveraging AI and cloud computing in collaborative filtering to store the most popular and requested content at the edge nodes. The strategy uses both edge clustering and edge caching techniques. Through collaborative filtering, the demand for specific content is predicted, and therefore, the content with the highest demand is stored at the edge nodes to reduce response time and improve caching efficiency. The results were evaluated using the Icarus simulator, which showed improvements, including a 15% increase in the CHR, a 12% reduction in content retrieval delay, and a 28% reduction in average hop count compared to baseline strategies, such as LCE, LCD, CL4M, and ProbCache. This demonstrates the efficacy of integrating collaborative filtering in edge caching for heterogeneous ICN-IoT applications.
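The collaborative-filtering step in this scheme compares nodes by the cosine distance of their request-frequency vectors (the dist(EN_i, EN_j) form summarized in Table 10). The following is a minimal self-contained sketch of that computation; the function name is ours.

```python
import math

# Cosine distance between two nodes' request-frequency vectors: 0 means the
# nodes request content identically, 1 means their demands are orthogonal.
def cosine_distance(req_i, req_j):
    """1 - cosine similarity between two equal-length request vectors."""
    dot = sum(a * b for a, b in zip(req_i, req_j))
    norms = (math.sqrt(sum(a * a for a in req_i))
             * math.sqrt(sum(b * b for b in req_j)))
    return 1 - dot / norms
```

Nodes with small mutual distance can be clustered together, and content popular in one node's history becomes a caching candidate for its neighbors.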

3.4.3. Cooperative Caching for Machine-to-Machine (M2M) Networks

This category of strategies is designed for machine-to-machine (M2M) environments without human intervention. These processes are carried out in a systematic and robust manner, involving coordination between devices to optimally cache and retrieve content when needed. This approach offers several benefits, including reduced latency and reduced load, as when specific content is requested, it is first searched on nearby devices before being requested from a remote source or the internet. Furthermore, it plays a role in conserving network energy [96].
A recent study in Ref. [37] proposed a strategy that depends on the freshness and popularity of content in NDN-IoT networks, combining caching in the core layer (CCS, Caching in the Core Strategy) with caching at the edge layer (CES, Caching at the Edge Strategy). In CCS, the most popular and long-lived data is stored, which improves the CHR and reduces network congestion, while CES reduces unwanted duplication during caching and keeps frequently requested content close to the nodes that request it. The simulation results showed that this strategy was superior to others, such as LRU, Random Caching (RC), and Betweenness and Edge Popularity Caching (BEP), in CHR as well as in reducing the number of hops and the retrieval time. The study emphasized the importance of integrating content freshness and popularity as basic caching criteria, as they give the network advantages in performance efficiency and scalability.
In Ref. [97], a cooperative caching scheme was presented that likewise incorporates content popularity in NDN-IoT networks, particularly for machine-to-machine (M2M) communications. The main goal was to improve caching efficiency in the ICN-IoT environment by addressing limited resources and high retrieval times. The strategy gives caching priority to popular content, which makes good use of storage space and reduces unwanted content duplication. Using the TOSSIM simulator, the strategy showed improved energy consumption and access to cached content in NDN-IoT networks compared to the LCE and LRU methods. The proposed scheme demonstrated enhanced CHRs, reduced average access delays, and optimized energy efficiency, thereby improving resource allocation and the scalability of IoT applications.
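Popularity-priority admission of the kind described above can be sketched as follows. This is an illustrative toy rule, not the scheme from Refs. [37] or [97]: a newcomer displaces the least popular resident only when it is strictly more popular.

```python
# Illustrative popularity-priority admission: admit content if there is room,
# or if it is more popular than the least popular cached item. `popularity`
# maps content names to request counts; names and rule are our assumptions.
def popularity_admit(cache, name, popularity, capacity):
    """Return the new cache set after attempting to admit `name`."""
    if name in cache:
        return cache
    if len(cache) < capacity:
        return cache | {name}
    victim = min(cache, key=lambda n: popularity.get(n, 0))
    if popularity.get(name, 0) > popularity.get(victim, 0):
        return (cache - {victim}) | {name}
    return cache
```

Unpopular content never displaces popular content, which is what keeps the limited M2M stores filled with the items most likely to be re-requested.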

3.4.4. Freshness, Popularity, and Probabilistic-Aware Caching Strategies

This category focuses on strategies that combine freshness, popularity, and probability, or a subset of them. Freshness means avoiding caching stale data, popularity means ensuring the availability of frequently needed data, and probability means achieving a balanced distribution and reducing duplication. Such combinations offer superior performance, adapting to dynamic environments and improving network performance.
In the study in Ref. [98], the researchers proposed an efficient popularity-aware probabilistic caching (EPPC) strategy, which stores the highest-demand content in NDN-based IoT networks. The EPPC strategy tackles a set of challenges in these networks, such as a low CHR, content duplication, and high retrieval time. The strategy relies on three basic mechanisms during the caching process: content selection, content placement, and content replacement. Standard performance metrics were used to evaluate the efficiency of the EPPC strategy with the Icarus simulator. The results showed an improved CHR, reduced path stretch, and reduced content retrieval time compared to counterparts such as ProbCache, MPC, and Client-Cache. The hybrid approach of EPPC to storing and replacing content provides scalability across diverse IoT networks, such as healthcare, smart cities, smart agriculture, smart grid, and smart home applications.
As discussed in the work in Ref. [99], an innovative technique was proposed: the PF-ClusterCache strategy, which combines content popularity and data freshness in NDN-IoT networks. PF-ClusterCache pursues several goals, such as reducing data retrieval time, enhancing data availability, and making better use of network resources. It is based on merging the storage resources of multiple nodes within a single cluster to form a globally shared store, which reduces unnecessary duplication and ensures that the most recent, most requested content is cached. The strategy uses hashing to distribute the cached content efficiently. The study also presented the main challenges facing caching operations, such as balancing content popularity against freshness under tight storage constraints. Since this strategy gives priority to the most recent and most requested content, it enhances cache diversity and reduces network congestion. PF-ClusterCache relies on a Least Popular First (LPF) replacement policy that removes the least requested data. The ndnSIM simulator was used, and its results showed that this strategy outperformed counterparts such as LCE, PoolCache, and CFPC in CHR and retrieval time. The study suggested integrating artificial intelligence mechanisms in the future to further enhance its performance.
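The hashing-based distribution described for PF-ClusterCache can be illustrated with a short sketch: each content name is hashed to exactly one cluster member, so the members pool their stores without duplication. The hash choice and function name are our assumptions, not details from Ref. [99].

```python
import hashlib

# Illustrative hash-based content placement within a cluster: each name maps
# deterministically to one member, so the cluster behaves as one shared store.
def cluster_owner(content_name, members):
    """Return the single cluster member responsible for caching `content_name`."""
    digest = int(hashlib.sha256(content_name.encode()).hexdigest(), 16)
    return members[digest % len(members)]
```

Because the mapping is deterministic, any member can compute where a name lives and forward Interests there directly, with no per-item coordination messages.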
The methodology introduced in Ref. [100] focused on a caching strategy for freshness and popularity content (CFPC) in NDN-IoT networks. The CFPC strategy depends on two important factors: the freshness and popularity of the content, where priority is given to the most requested and most popular content. One of the advantages of CFPC is that it works independently, and there is no need for prior coordination between nodes, which contributes to the simplicity and scalability of this strategy. The ndnSIM simulator was used, and its results showed that CFPC improves content retrieval time, achieves a better CHR, and lowers the number of hops compared to other strategies, such as CEE and Random Caching (RC). The results of this study emphasized the importance of balancing freshness and popularity, especially in IoT environments that face clear challenges between the novelty and popularity of content. The study also demonstrated the importance of the CFPC strategy in improving QoE. Table 10 analyzes the equations used in the above studies.
Table 9. Summary of the hybrid caching strategies.
Category | Study | Proposed Strategy | Key Features | Evaluation and Results | Tool
Energy-Aware and Centrality-Based Caching Strategies | [92] | Energy-aware Centrality-Based Caching (EABC) | EABC balances content delivery latency and energy efficiency. | Improved cache hit rates, retrieval delay, and network lifetime compared to CEE, Prob, pCASTING, and ABC. | ndnSIM
Energy-Aware and Centrality-Based Caching Strategies | [93] | Lifetime Cooperative Caching (LCC) | Caches content based on lifetime and request rate. | Improved energy consumption and retrieval time and reduced hops compared to LCE and the probabilistic caching strategy. | C++ simulator
AI-Driven Caching Approaches | [95] | Heterogeneous Edge Caching Scheme | Uses AI, cloud computing, and collaborative filtering to predict and cache high-demand content at edge nodes. | Reduced retrieval time, improved CHR, and decreased number of hops. | Icarus
Cooperative Caching for Machine-to-Machine (M2M) Networks | [37] | Core and Edge Layer Caching (CCS and CES) | Core caching for long-lived data; edge caching to reduce redundancy; enhances QoE. | Superior CHR and reduced hops and retrieval time compared to LRU, RC, and BEP. | ndnSIM
Cooperative Caching for Machine-to-Machine (M2M) Networks | [97] | Cooperative Caching for M2M | Prioritizes popular content in M2M environments; reduces duplication and improves efficiency. | Improved energy efficiency and access to cached content compared to LCE and LRU. | TOSSIM
Freshness, Popularity, and Probabilistic-Aware Caching Strategies | [98] | Efficient Popularity-aware Probabilistic Caching (EPPC) | Hybrid approach combining content selection, placement, and replacement. | Improved CHR and reduced path stretch and retrieval time compared to ProbCache, MPC, and Client-Cache. | Icarus
Freshness, Popularity, and Probabilistic-Aware Caching Strategies | [99] | PF-ClusterCache | Merges storage resources in clusters; prioritizes the most recent and most requested content; uses LPF for replacement. | Improved CHR and reduced retrieval time compared to LCE, PoolCache, and CFPC. | ndnSIM
Freshness, Popularity, and Probabilistic-Aware Caching Strategies | [100] | Caching for Freshness and Popularity Content (CFPC) | Independently stores popular and fresh content; simple and scalable in NDN-IoT; enhances QoE. | Enhanced retrieval time, CHR, and fewer hops than CEE and RC. | ndnSIM
Table 10. Formula comparison for hybrid caching strategies.
Category | Study | Equations | Description
Energy-Aware and Centrality-Based Caching Strategies | [92] | $C_B(v) = \sum_{\substack{i,j \in V \\ i \ne v \ne j}} \sigma_{i,j}(v)$, where $\sigma_{i,j}(v) = \begin{cases} 1, & \text{if } v \text{ lies on the shortest path from } i \text{ to } j \\ 0, & \text{otherwise} \end{cases}$ | $C_B(v)$: betweenness centrality of node $v$, measuring how central or important a node is within the network by counting how many shortest paths between other nodes pass through $v$; $V$: the set of all nodes in the network; $i$, $j$: any two distinct nodes in the network ($i \ne v \ne j$).
[93] h = t T h = Freshness of the data.
t = Current time.
T = Time when the data was generated by the content producer.
θ i = α θ root t + β θ middle t + γ θ edge t θ i   =   Caching   threshold   for   node   i .
θ root t ,   θ middle t ,   θ edge t = Threshold functions for root, middle, and edge nodes.
α ,   β ,   γ = Node type indicators (only one is 1, others are 0 depending on the node type).
Caching Threshold Adjustment (Auto-Configuration Mechanism)
If request rate is increasing:
θ edge t = 1 λ 1 θ edge t + λ 1 avg L t
If request rate is decreasing:
θ_edge(t) = (1 + λ_2) θ_edge(t)
λ 1 , λ 2 = Adjustable weights.
L t = Set of data lifetimes in a sliding time window.
E_j = E_tx + E_amp · d_j^n. E_j: Energy consumed by IoT device j to transmit one bit.
E tx = Base transmission energy per bit.
E amp = Energy cost of the transmitter amplifier.
d j   =   Distance   between   IoT   device   j and the gateway.
n = Path loss factor.
E total = T P E tx + E sense J + E awake A T = Simulation time.
P = Packet size.
E sense = Energy for sensing one bit.
J = Number of IoT devices.
E awake = Energy to wake up from sleep mode.
A = Number of times IoT devices are activated.
A = ∑_t (Req(t) - ∑_{i∈I} R(Req(t), C_i, t)). A: Number of activations.
R e q t   =   Number   of   requests   at   time   t .
R R e q t , C i , t = Requests served from cache.
Average   Hops = 1 R e q t Hop R e q t , i I C i , t Hop R e q t , i I C i , t = Number of hops for each request.
AI-Driven Caching Approaches[95] R E Q p , q = E N p C q J 1 REQ :   The   request   frequency   of   content   C q   by   edge   node   E N p .
EN _ p :   Attribute   vector   of   edge   node   p .
C _ q :   Attribute   vector   of   content   q .
J: Total number of content attributes.
Size_L = vol · ρ. Size_L: Cache size allocated to each edge node.
vol: Total capacity of the cloud server database.
ρ : Cache allocation ratio (between 0 and 1).
dist(EN_i, EN_j) = 1 - (∑_{m=1}^{N} REQ_{i,m} REQ_{j,m}) / (√(∑_{m=1}^{N} REQ_{i,m}²) · √(∑_{m=1}^{N} REQ_{j,m}²)). dist(EN_i, EN_j): Cosine distance between edge nodes EN_i and EN_j.
REQ_{i,m}, REQ_{j,m}: Request frequencies for content m at nodes EN_i and EN_j.
N: Total number of content items.
Sim(C_i, C_j) = (C_i · C_j) / (‖C_i‖ ‖C_j‖) = (∑_{b=1}^{J} C_{b,i} C_{b,j}) / (√(∑_{b=1}^{J} C_{b,i}²) · √(∑_{b=1}^{J} C_{b,j}²)). Sim(C_i, C_j): Cosine similarity between content items C_i and C_j.
C_{b,i}, C_{b,j}: The b-th attribute of contents C_i and C_j.
J: Total number of content attributes.
CP_{i,j} = ∑_{d=1, d≠j}^{N} Sim(C_d, C_j) · REQ_{i,d}. CP_{i,j}: Predicted probability that edge node ED_i will request content C_j.
Sim ( C _ d ,   C _ j ) :   Similarity   between   contents   C d   and   C j .
REQ _ { i , d } :   Request   frequency   of   content   C d   by   edge   node   E D i .
N :   Set   of   all   content   items   requested   by   E D i .
Cooperative Caching for Machine-to-Machine (M2M) Networks[37] P_th = (1 - γ) P_th,old + γ I_curr. P_th: The current popularity threshold.
P th , old : The previous value of the popularity threshold.
I curr : The current average number of received Interest packets per content.
γ : A smoothing factor (set to 0.125 in the study).
F_th = (1 - γ) F_th,old + γ F_curr. F_th: The current freshness threshold.
F th , old : The previous value of the freshness threshold.
F curr : The current average freshness period of the received IoT data packets.
γ : Same smoothing factor (0.125).
p_{c,d_k}(CCS) = 1 if I_{d_k} ≥ P_th and F_{d_k} ≥ F_th; F_{d_k} / F_th if I_{d_k} ≥ P_th and F_{d_k} < F_th; 0 otherwise. p_{c,d_k}(CCS): The caching probability for data packet d_k at the core nodes.
I d k :   Number   of   Interests   received   for   content   d k .
F d k :   Freshness   period   of   content   d k .
P th : Popularity threshold.
F th : Freshness threshold.
p_{c,d_k}(CES) = I_{d_k} / max_{d_i}(I_{d_i}). p_{c,d_k}(CES): The caching probability for content d_k at the edge nodes.
I d k :   Number   of   Interests   received   for   content   d k .
m a x d i I d i :   Maximum   number   of   Interests   received   for   any   content   d i in the given time window.
U d j = L res , d j f d j U d j :   Utility   Index   for   cached   content   d j .
L res , d j :   Residual   lifetime   of   cached   content   d j .
f d j :   Request   frequency   of   content   d j .
[97] Betw(v) = B(v) / max{B_l}, 0 < Betw(v) ≤ 1
Betw(v): Normalized betweenness centrality of node v.
B ( v ) :   Actual   betweenness   value   of   node   v .
m a x { B l } :   Maximum   betweenness   value   among   all   nodes   l in the current request path.
BetwRepRate v = Betw v RepRate v , where RepRate v = R v m a x { R l } BetwRepRate(v): Ratio indicating node importance, factoring in both centrality and replacement rate.
RepRate ( v ) :   Normalized   replacement   rate   at   node   v .
R ( v ) :   Actual   number   of   replacements   at   node   v .
m a x { R l } : Maximum number of replacements among all nodes.
P k v = Betw v P k , where P k = 1 S k f k T 1 Q T 1 P k v :   Probability   of   replacing   content   C k   at   node   v .
Betw ( v ) :   As   before ,   the   centrality   score   of   node   v .
S k :   Size   of   content   block   C k .
f k T 1 :   Number   of   times   content   C k   was   requested   in   the   last   cycle   T 1 .
Q T 1 : Total number of all content requests in the previous cycle.
Freshness, Popularity and Probabilistic-Aware Caching Strategies[98] Path Stretch_ave = (∑_{i=1}^{N} path_{i,r,s}) / (N · Path_{i,r,p}). path_{i,r,s}: The path length (number of hops) from the requesting node r to the serving node s.
Path_{i,r,p}: The path length from the requesting node r to the content provider p (publisher).
N: Total number of requests.
[99] PopCount_i(n) = α IR_i(n) + (1 - α) PopCount_{i-1}(n). PopCount_i(n): The updated popularity count of the content name prefix n at the end of the current time interval i.
α: The smoothing factor (weight parameter), where 0 ≤ α ≤ 1, which balances between recent observations and past popularity.
IR_i(n): The number of Interest packets received for content name prefix n at the Surrogate Caching Node (SCN) during the current time interval i.
PopCount_{i-1}(n): The popularity count of content name prefix n at the end of the previous time interval i - 1.
[100] ps_{c_r,i} = I_{c_r,i} / I_{tot,i}. ps_{c_r,i}: Popularity sample of content c_r during interval T_i.
I_{c_r,i}: Number of requests for content c_r in interval T_i.
I_{tot,i}: Total number of content requests during interval T_i.
P_{c_r,i} = (1 - α) P_{c_r,i-1} + α ps_{c_r,i}. P_{c_r,i}: Popularity of content c_r at interval T_i.
P_{c_r,i-1}: Popularity of content c_r at the previous interval T_{i-1}.
α: Smoothing factor for the EWMA (set to 0.125 in this paper).
ps_{c_r,i}: Popularity sample from the previous equation.
Θ_{P,i} = (1 - α) Θ_{P,i-1} + α Ī_i. Θ_{P,i}: Popularity threshold at interval T_i.
Θ_{P,i-1}: Popularity threshold from the previous interval.
Ī_i: Average number of requests per distinct content in T_i (see next equation).
α: Smoothing factor (same as before).
Ī_i = I_tot,i / M_i. Ī_i: Average number of requests per distinct content in interval T_i.
I_tot,i: Total number of requests in T_i.
M_i: Number of distinct requested contents in T_i.
Θ_{F,i} = (1 - α) Θ_{F,i-1} + α FP. Θ_{F,i}: Freshness threshold at interval T_i.
Θ_{F,i-1}: Freshness threshold from the previous interval.
FP: Average freshness period of the cached contents.
π_{d_n}(c_r) = FP_{c_r} / Θ_{F,i}. π_{d_n}(c_r): Probability of caching the Data packet d_n(c_r).
FP_{c_r}: Freshness period of content c_r.
Θ_{F,i}: Freshness threshold at interval T_i.

3.5. Machine Learning-Based Caching

In the NDN-based IoT environment, there are still a number of challenges that can be solved using machine learning (ML) techniques. The process of integrating machine learning into caching mechanisms is an effective way to improve caching performance, energy efficiency, and network efficiency, as shown in Figure 15. Among the things that machine learning (ML) techniques can improve in IoT environments are data aggregation, fault detection, data reliability, and security [101,102]. Table 11 summarizes selected ML caching strategies.

3.5.1. Reinforcement Learning-Based Caching Strategies

This category uses artificial intelligence algorithms, specifically reinforcement learning (RL), to make smart caching decisions [103]. Such systems reduce reliance on fixed policies in the network and improve its performance, and their use is considered a qualitative shift towards more efficient and intelligent networks [104].
Deep Q Networks (DQN)
In the study in Ref. [105], the researchers presented the iCache model, an intelligent caching approach based on Deep Q Networks (DQNs), to improve caching efficiency in ICN-IoT networks in terms of retrieval time and energy consumption. This approach uses machine learning to manage caching operations within dynamic networks: iCache formulates caching as a Markov Decision Process (MDP) and trains a DQN to make caching decisions. The approach takes two important factors into account, the age of the data and the popularity of the content, and it uses the LFF policy to replace content and evict data that is old and close to expiry. The simulation results using Python 3.8 and PyTorch 1.9 showed that iCache reduces total energy consumption by up to 37.5% and reduces average hop count by up to 33% compared to baseline strategies, such as LCE, most popular content (MPC), and Caching Transient Data (CTD). This demonstrates the superior performance of iCache in dynamic IoT environments, particularly in reducing retrieval delay and energy consumption.
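The value update that underlies such DQN-based caching can be illustrated with a toy tabular analogue. This is not iCache's implementation (the paper uses a neural Q-function over a richer state); the states, actions, and reward below are invented for illustration, and the loop simply exercises the standard temporal-difference target Q(s,a) ← Q(s,a) + lr·(r + γ·max_a' Q(s',a') − Q(s,a)).

```python
# Toy tabular analogue of the Q-value update behind DQN-based caching
# (illustrative only; state space, actions, and reward are made up).
GAMMA, LR = 0.9, 0.5
STATES = ["not_cached", "cached"]
ACTIONS = ["skip", "cache"]
Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}

def reward(state, action):
    # Hypothetical reward: admitting uncached content saves upstream traffic.
    return 1.0 if (state == "not_cached" and action == "cache") else 0.0

state = "not_cached"
for step in range(200):
    action = ACTIONS[step % 2]  # alternate actions for a deterministic demo
    nxt = "cached" if action == "cache" else state
    target = reward(state, action) + GAMMA * max(Q[(nxt, a)] for a in ACTIONS)
    Q[(state, action)] += LR * (target - Q[(state, action)])
    state = nxt
```

After training, the learned values favour caching when the content is not yet cached, which is the behaviour a reward tied to saved transmissions should induce.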
Genetic Algorithms (GA)
The study in Ref. [106] explored a Multi-Round Parallel Genetic Algorithm (MRPGA) strategy based on genetic algorithms (GAs). Traditional strategies, such as LCE and LCD, allocate cache resources in ICNs inefficiently because they rely on fixed or probabilistic allocation rules. The MRPGA strategy relies mainly on software-defined networking (SDN) for efficient centralized management of caching operations. The strategy divides the cache allocation problem into several sub-problems and then applies the genetic algorithm operators of selection, crossover, and mutation to each. The simulation results using MATLAB showed that MRPGA outperforms traditional caching strategies, such as LCE, ProbCache, and standard GAs, in several key metrics, including CHR, retrieval time, and storage resource utilization (SUR).
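The fitness rule reported for MRPGA in Table 12 (fitness = 1/TC for cache matrices that fit every router's capacity, 0 otherwise) can be sketched as follows. The topology, sizes, frequencies, and distances below are invented for illustration, and the cost function is a simplified reading of the paper's transportation-cost formula.

```python
# Illustrative GA fitness for cache allocation, following the rule in [106]:
# fitness(C) = 1/TC if every router's cached volume fits its capacity, else 0.

def transport_cost(cache_matrix, sizes, freqs, dists):
    """TC = sum over routers i and blocks k of s_k * f_ki * (min distance
    to a router j that caches block k)."""
    total = 0.0
    n_routers, n_blocks = len(freqs), len(sizes)
    for i in range(n_routers):
        for k in range(n_blocks):
            holders = [j for j in range(n_routers) if cache_matrix[j][k]]
            if holders:
                total += sizes[k] * freqs[i][k] * min(dists[i][j] for j in holders)
    return total

def fitness(cache_matrix, sizes, freqs, dists, capacities):
    # Infeasible individuals (cached volume over capacity) get fitness 0.
    for j, row in enumerate(cache_matrix):
        if sum(s for s, c in zip(sizes, row) if c) > capacities[j]:
            return 0.0
    tc = transport_cost(cache_matrix, sizes, freqs, dists)
    return 1.0 / tc if tc > 0 else float("inf")

sizes = [2, 3]            # block sizes s_k
freqs = [[5, 1], [1, 4]]  # f_ki: requests for block k at router i
dists = [[0, 1], [1, 0]]  # hop distances between routers
caps = [2, 3]             # cache capacity per router
good = [[1, 0], [0, 1]]   # block 0 at router 0, block 1 at router 1
bad = [[1, 1], [0, 0]]    # router 0 over capacity (2 + 3 > 2)
```

A GA would evolve populations of such cache matrices, keeping feasible low-cost individuals and discarding infeasible ones.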
Deep Q-Learning (DQL)
The research conducted in Ref. [107] focused on improving the efficiency of computing and caching resources in IC-IoT networks, proposing a model based on Deep Q-Learning (DQL). By taking into account the factors that affect the network, this model seeks the optimal use of network resources. The aim of the study was to address the challenges in IoT networks supported by artificial intelligence, as these environments require high computational capabilities; the challenges include limited resources, limited computational capabilities, and dynamic network behavior. In the study, the IC-IoT architecture consisted of a number of layers that integrate caching resources with computing to handle the dynamics of IoT environments, such as smart cities, Industrial IoT (IIoT), and smart homes. The simulation results showed that this framework outperforms traditional methods in terms of increasing the CHR, improving energy efficiency, and reducing data retrieval time. It also helps improve the QoE for IoT users.
Reinforcement Learning (RL)
According to the nature of dynamic environments, where content and popularity are constantly changing, the researchers in Ref. [108] proposed a reinforcement learning approach for proactive caching in wireless networks, which reduces energy consumption during changes in dynamic environments while ensuring efficient content retrieval. The proposed strategy is based on the Markov Decision Process (MDP) framework, which in turn contributes to managing caching decisions. The simulation results demonstrated the advantages of this strategy and its role in saving energy and improving caching performance. The reinforcement learning-driven caching approach is scalable, adaptable, and suitable for IoT systems and 5G/6G networks.

3.5.2. Supervised and Unsupervised Learning-Based Caching

In this category of caching strategies for NDN-based IoT networks, we explore supervised and unsupervised learning strategies, which are considered smart strategies based on intelligence techniques [109]. In supervised learning, the model is trained using labelled datasets. Unsupervised learning, on the other hand, does not rely on labelled data but uses methods such as clustering or dimensionality reduction to uncover any hidden patterns in content access.
K-Nearest Neighbors (KNNs)
In the study in [110], the researchers focused on integrating both caching mechanisms and machine learning techniques into NDN networks to improve the caching efficiency of routers, which enhances overall network efficiency. Traditional caching policies, such as LFU, LRU, and FIFO, face difficulties in dealing with changes in content popularity, which can lead to inefficient use of storage memory. This study proposed the KNN machine learning model combined with LRU (KNN-LRU), which dynamically predicts content popularity and replaces irrelevant content. This model improves the CHR, reduces network load, and reduces response time. The simulation results using the ICARUS simulator demonstrated that the KNN-LRU model outperforms LFU, LRU, and FIFO in key performance metrics, such as the CHR, latency, and link load.
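The idea behind KNN-LRU, an LRU cache whose admission decision is guided by a nearest-neighbour vote on content popularity, can be sketched with standard-library code. The feature design (a single request-count feature), the labelled history, and all names are our own simplifications, not the paper's model.

```python
from collections import OrderedDict

# Sketch of a KNN-assisted LRU cache in the spirit of KNN-LRU [110]:
# an LRU cache that only admits content whose k nearest labelled examples
# (by request-count feature) vote "popular". Feature design is illustrative.

def knn_predict(history, feature, k=3):
    """history: list of (feature_value, is_popular). Majority vote of k nearest."""
    nearest = sorted(history, key=lambda h: abs(h[0] - feature))[:k]
    return sum(label for _, label in nearest) > k / 2

class KNNLRUCache:
    def __init__(self, capacity, history):
        self.capacity, self.history = capacity, history
        self.store = OrderedDict()  # name -> data, most recent last

    def get(self, name):
        if name in self.store:
            self.store.move_to_end(name)  # refresh LRU position
            return self.store[name]
        return None

    def put(self, name, data, request_count):
        if not knn_predict(self.history, request_count):
            return False  # predicted unpopular: do not cache
        self.store[name] = data
        self.store.move_to_end(name)
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)  # evict least recently used
        return True

# Labelled history: (requests seen, was it worth caching?)
history = [(1, 0), (2, 0), (3, 0), (20, 1), (25, 1), (30, 1)]
cache = KNNLRUCache(2, history)
```

The prediction step filters out content unlikely to be reused, so LRU evictions operate only over content that was worth admitting in the first place.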
Deep Neural Networks (DNN)
The approach adopted in Ref. [111] aimed to improve a popularity-based caching framework, IoTCache, which is designed for IoT networks to improve network efficiency, reduce congestion, and increase the CHR. This framework predicts content popularity through a Popularity Evolving Model (PEM). Deep prediction using DNNs and long short-term memory (LSTM) addresses the cold start problem that occurs when initial data is absent. In addition, the IoTCache framework relies on eviction and pre-caching algorithms that determine what should be stored and when. Unlike traditional caching mechanisms, such as LRU and LFU, IoTCache has provided significant improvements: it reduces data retrieval time, increases caching efficiency, and significantly reduces the volume of traffic at the IoT edge. IoTCache also reduces duplicate operations, which improves cache efficiency.
Recurrent Neural Networks (RNNs)
The study in Ref. [112] highlighted an advanced caching strategy, named DeepCache, that aims to improve caching efficiency in ICNs. DeepCache is based on deep recurrent neural networks (RNNs), specifically encoder–decoder models built on long short-term memory (LSTM). This strategy consists of two main components: an Object Characteristics Predictor, which predicts the future popularity of content based on request patterns, and a Caching Policy Component, which makes intelligent storage and content eviction decisions based on these predictions. DeepCache addresses the limitations of traditional caching strategies, such as LRU and LFU, which lack predictive ability; by predicting future requests, it enables proactive storage and thereby improves resource utilization efficiency. The simulation results demonstrated that DeepCache significantly outperforms traditional methods like LRU and k-LRU, achieving higher CHRs and reduced response times. In particular, the experiments showed that integrating DeepCache with k-LRU provided cache–hit performance that even surpassed optimal cache configurations under certain conditions. This highlights the scalability and adaptability of DeepCache for real-world caching scenarios. Table 12 compares the equations used in the above studies.
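The windowed popularity features that DeepCache feeds to its LSTM (a sequence of probability vectors over the d unique objects) can be sketched as a simple feature builder. The window length, object set, and request trace below are invented for illustration.

```python
# Sketch of DeepCache-style [112] input features: for each consecutive window
# of requests, the request probability of each of d unique objects (one
# feature vector x_t per window). Window size and objects are made up.

def popularity_windows(requests, objects, window):
    """Split the request trace into consecutive windows and return, per window,
    the request probability of each object."""
    vectors = []
    for start in range(0, len(requests) - window + 1, window):
        chunk = requests[start:start + window]
        vectors.append([chunk.count(o) / window for o in objects])
    return vectors

objects = ["a", "b", "c"]                         # d = 3 unique objects
trace = ["a", "a", "b", "c", "c", "c", "b", "b"]  # request sequence
X = popularity_windows(trace, objects, window=4)
# X[0] covers the first 4 requests, X[1] the next 4; each vector has length d
# and sums to 1. A sequence of m such vectors forms one LSTM input sample.
```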
Table 11. Summary of the machine learning-based caching strategies.
CategoryStudyProposed StrategyKey FeaturesEvaluation and ResultsTool
Reinforcement Learning-Based Caching Strategies[105]iCache (DQN-based Intelligent Caching)Deep Q Networks (DQNs) and Markov Decision Model (MDP) for intelligent caching, uses LFF policy.Reduced retrieval delay and energy consumption compared to LCE, MPC, CTD.Python 3.8 and PyTorch 1.9
[106]Multi-Round Parallel Genetic Algorithm (MRPGA)Genetic algorithm-based strategy for cache allocation using SDN, improves storage resource utilization (SUR).Improved CHR, reduced retrieval time, compared to LCE, ProbCache.MATLAB
[107]Deep Q-Learning (DQL) for IC-IoTCombines caching with computing resources, optimizes resource allocation in dynamic IoT environments like smart cities and IIoT.Simulation results show increased CHR, improved energy efficiency, and reduced retrieval time.-
[108]Reinforcement learning-based proactive cachingUses MDP framework, policy gradient methods (LRM and FDM), and caching policies (LISO, LFA) for proactive caching.Simulation results show reduced energy consumption and improved caching performance.-
Supervised and Unsupervised Learning-Based Caching[110]KNN-LRUKNN machine learning model combined with LRU (KNN-LRU).Improves the CHR, reduces network load, and reduces response time compared to LFU, LRU, FIFO.Icarus Simulator
[111]IoTCache (popularity-based caching framework)Utilizes deep neural networks (DNNs) and LSTM for popularity prediction.Simulation results show reduced data retrieval time, increased caching efficiency, and reduced traffic load.-
[112]DeepCache (Deep RNN-based Caching)Uses deep recurrent neural networks (RNNs) with LSTM to predict content popularity and optimize caching decisions.Simulation results show improved CHR, reduced response time, and better scalability compared to LRU and LFU.-
Table 12. Formula comparison for machine learning-based caching strategies.
CategoryStudyEquationsDescription
Reinforcement Learning-Based Caching Strategies[105] F(p_m) = T_life(p_m) - T_current. F(p_m): Freshness of the IoT data packet p_m.
T life p m :   Lifetime   of   the   IoT   data   packet   p m .
T current : Current time.
∑_{m∈M} s_m ≤ Z_n. s_m: Size of the IoT data packet m.
Z_n: Cache size of node n.
f x = 1 x α / m = 1 M 1 m α f x :   Probability   mass   function   ( PMF )   for   the   data   packet   ranked   x .
x : Rank of the IoT data packet.
α : Skewness parameter of the Zipf distribution (higher values mean more skewed popularity).
M : Total number of data packets.
Req = n edge N edge m M t T req p m n edge , t Req : Total requests for IoT data packets.
req p m n edge , t :   Requests   for   data   packet   p m   from   edge   node   n edge   at   time   t .
N edge : Set of edge caching nodes.
Cost(S_t, A) = ∑_{j∈J} active_j(t) ∑_{m∈M} b_j^m s_m (e_tran^j + e_sen^j + e_awake^j). Cost(S_t, A): Total energy consumption at time t.
active j t :   1   if   IoT   node   j   is   active   at   time   t , otherwise 0.
b j m :   1   if   IoT   node   j   can   sense   data   packet   m .
s m :   Size   of   data   packet   m .
e_tran^j: Energy cost for IoT node j to transmit one bit.
e_sen^j: Energy cost for IoT node j to sense one bit.
e_awake^j: Energy cost for IoT node j to wake up.
L(θ) = (r + γ max_{A_{t+1}} Q(S_{t+1}, A_{t+1}; θ_target) - Q(S_t, A; θ_pred))². L(θ): Loss function for the DQN model.
r : Immediate reward (or cost).
γ : Discount factor.
Q S t + 1 , A t + 1 ; θ target : Q-value from the target network.
Q S t , A ; θ pred : Q-value from the prediction network.
E total = t = 1 T j = 1 J active j t E j t   E j t = m = 1 M n = 1 N n edge N edge cache req m n , t , c n t s m e tran j + e sen j + e awake j E total : Total energy consumed by IoT nodes.
E j t :   Energy   consumed   by   IoT   node   j   at   time   t .
cache req m n , t , c n t : Indicator function (1 if request can be served from cache, else 0).
A = t = 1 T m = 1 M n = 1 N n edge N edge 1 req m n , t cache req m n , t , c n t hop n edge , n A : Average number of hops.
hop n edge , n :   Number   of   hops   between   edge   node   n edge   and   caching   node   n .
[106] TC = ∑_{i=1}^{n} ∑_{k=1}^{m} s_k f_ki · min_{j: c_kj ≠ 0} d_ij. TC: Total transportation cost.
s k :   Size   of   content   block   k .
f k i :   Request   frequency   of   content   block   k   at   CR   i .
d i j :   Distance   between   CR   i   and   CR   j .
c k j :   Cache   state   ( 1   if   CR   j   caches   block   k , 0 otherwise).
C_opt = argmin_C ∑_{i=1}^{n} ∑_{k=1}^{m} s_k f_ki · min_{j: c_kj ≠ 0} d_ij
Subject to:
s^T C_i ≤ v_i, i = 1, 2, …, n
C opt : Optimal cache matrix.
s : Vector of block sizes.
v i :   Cache   capacity   at   CR   i .
s T C i :   Total   volume   of   cached   content   at   CR   i , must not exceed its capacity.
fitness(C) = 1/TC if s^T C_i ≤ v_i for all i, and 0 otherwise. Assigns higher fitness to solutions with lower transportation costs but penalizes infeasible solutions (that exceed cache capacity).
a i = m i n K j = 1 n f i j , h i , i = 1 , , m a i :   Number   of   blocks   of   category   i to be cached.
K : Adjustable scaling parameter.
f i j :   Frequency   of   block   i   requested   at   CR   j .
h i :   Number   of   CRs   interested   in   block   i .
∑_{i=1}^{m} a_i s_i < ∑_{j=1}^{n} v_j Ensures that the total allocated cache volume does not exceed the total available cache capacity.
∑_{j=1}^{n} C_distributed,i,j = a_i, ∀ i ∈ {1, …, m} Ensures that each content block category i is cached exactly a_i times across the network.
[107] λ_i(t) = λ / (ρ i^α). λ: Total arrival rate of user requests (Poisson process).
i: Index of AI model, arranged by decreasing popularity.
α: Zipf distribution slope (0 < α ≤ 1).
ρ :   Normalization   constant ,   ρ = i = 1 I 1 i α .
Θ_i(t) = [J_00(t), J_01(t); J_10(t), J_11(t)]. J_xy(t): Probability that the caching status of model i transitions from state x to y at time t.
State 0: Not cached, State 1: Cached.
t_u^m = n_u / h_u^m(t). n_u: CPU cycles needed for training the AI model for user u.
h u m t : Computation capability of MEC gateway m at time t for user u.
[108] P s | s , g = E P S I s | s , g Z
μ s , g = E μ S I s , g Z , Z
These equations reflect the transition probability and the expected cost under the MDP with side information (MDP-SI).
Supervised and Unsupervised Learning-Based Caching[110] d = √(∑_{i=1}^{p} (x_{2i} - x_{1i})²). d: The Euclidean distance between two data points.
p : The number of features (dimensions) in the dataset.
x 2 i :   The   value   of   the   i t h feature in the second data point (test data).
x 1 i :   The   value   of   the   i t h feature in the first data point (training data).
[111] P(n) = (Λ(S_j)^n / n!) · e^{-Λ(S_j)}. P(n): Probability of receiving n requests in slot S_j.
Λ(S_j): Cumulative intensity function over time slot S_j, defined as:
Λ(S_j) = ∫_{t_{j-1}^k}^{t_j^k} λ(t) dt
λ(t): Intensity function (rate of requests at time t).
[112] X t = { x 1 , x 2 , . . . , x t }
Y t = { y ^ 1 , y ^ 2 , . . . , y ^ k }
Input   Tensor : Shape = # samples × m , d , 1
X t = { x 1 , x 2 , . . . , x t }
Description :   This   represents   the   sequence   of   objects   requested   so   far   at   time   t .   Each   element   x t R d is a feature vector of dimension d, where d is the number of unique objects. Each vector captures the probability (popularity) of each object within a predefined window (either time-based or request-based).
Y t = { y ^ 1 , y ^ 2 , . . . , y ^ k }
Description :   This   is   the   sequence   of   predicted   outputs   associated   with   object   arrivals   at   time   t .   Each   y ^ t R p is an output vector of dimension p (where p = d, the number of unique objects). The output represents future probabilities (popularities) of these objects over k future time steps.
m is the number of past probability windows (e.g., 20 time steps), and d is the number of unique objects.
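The Zipf popularity model that appears in Table 12, f(x) = (1/x^α) / ∑_{m=1}^{M} (1/m^α), is straightforward to instantiate; the values of α and M below are arbitrary choices for illustration.

```python
# Zipf probability mass function commonly used to model content popularity
# (as in [105]): f(x) = (1 / x**alpha) / sum_{m=1..M} (1 / m**alpha).

def zipf_pmf(M, alpha):
    norm = sum(1 / m**alpha for m in range(1, M + 1))
    return [(1 / x**alpha) / norm for x in range(1, M + 1)]

pmf = zipf_pmf(M=100, alpha=1.0)
# Probabilities decrease with rank and sum to 1; a larger alpha concentrates
# more of the request mass on the top-ranked contents.
```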

3.6. Probabilistic Caching

Probabilistic caching in NDN-based IoT networks does not rely on fixed rules for caching content but on probability, as shown in Figure 16. This approach introduces an element of randomness to determine whether content will be cached based on a predetermined probability. It helps prevent content congestion at central nodes, thus better managing storage memory, and it reduces unnecessary duplication across the network, especially in IoT environments, by balancing content diversity against resource consumption [62,113]. In recent studies, this approach has been combined with other methods and strategies, such as popularity prediction or collaborative decision-making. Table 13 summarizes the selected probabilistic caching strategies.
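The core mechanism is simple enough to sketch directly: each node admits an arriving Data packet with some probability p, as in a Prob(0.5)-style policy. The function name and the seeded generator below are illustrative, not from a specific implementation.

```python
import random

# Minimal sketch of fixed-probability caching (e.g., Prob(0.5)): each node
# caches an arriving Data packet independently with probability p.

def probabilistic_admit(p, rng=random):
    return rng.random() < p

rng = random.Random(42)
decisions = [probabilistic_admit(0.5, rng) for _ in range(10000)]
admit_rate = sum(decisions) / len(decisions)
# admit_rate lands close to 0.5; the randomness spreads cached copies across
# nodes instead of concentrating them at central routers as LCE does.
```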
In this study [114], researchers proposed an SBPC (software-defined probabilistic caching) approach, which is a caching strategy for CCNs combined with software-defined networks (SDNs). This strategy helps streamline the caching process and reduce the time required to retrieve content. Traditional caching strategies, such as LCE, ProbCache, Prob (0.5), and the betweenness-based caching strategy (Betw), face several problems, including the inefficiency with which content distribution systems cache content under these strategies. The SBPC strategy adds centrality metrics to nodes (degree, closeness, betweenness, and eigenvector), which determine the importance of a node in the network and correlate it with the popularity of the content. A standard CCN simulator was used to compare SBPC against these strategies. The simulation results showed that SBPC outperforms the other strategies on each of the following metrics: increased CHR, fewer content acquisition hops, and reduced request delay.
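As a rough illustration of how a centrality score can be folded into a caching decision, the sketch below combines normalized degree centrality with a content popularity score. SBPC itself uses four centrality metrics inside an SDN controller, so this is only a simplified analogue; the topology and all names are invented.

```python
# Toy sketch of centrality-weighted caching in the spirit of SBPC [114]:
# more central nodes cache popular content with higher probability.
# Only degree centrality is shown; SBPC also uses closeness, betweenness,
# and eigenvector centrality.

topology = {            # adjacency list of a small illustrative network
    "r1": ["r2", "r3", "r4"],
    "r2": ["r1"],
    "r3": ["r1", "r4"],
    "r4": ["r1", "r3"],
}

def degree_centrality(graph):
    n = len(graph)
    return {v: len(nbrs) / (n - 1) for v, nbrs in graph.items()}

def caching_probability(node, popularity, graph):
    """popularity in [0, 1]; central nodes cache popular content more often."""
    return degree_centrality(graph)[node] * popularity

p_hub = caching_probability("r1", 0.8, topology)   # well-connected node
p_leaf = caching_probability("r2", 0.8, topology)  # degree-1 node
```

Correlating the decision with centrality places popular content where many request paths cross, which is what drives SBPC's reported hop and delay reductions.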
In this study [115], the researchers proposed the Prob-CH strategy, a strategy designed for NDN networks. It is a probabilistic caching strategy combined with a consistent hashing approach. The main goal of this strategy is to improve caching operations, manage content distribution, and reduce server load. This strategy uses consistent hashing to determine where to store content and which devices are most suitable for storage. This is instead of having each device perform the storage operation based on a fixed probability, as is the case in traditional probabilistic caching. The strategy was evaluated using the NS-3-based ndnSIM simulator, and the results were compared with traditional probabilistic caching (p = 0.5) and deterministic caching. The results showed an increased CHR, a decreased average hop count, and a reduced server load. This helps meet network requests without having to repeatedly refer to the providers. Table 14 compares the equations used in the above studies.
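The consistent-hashing placement that Prob-CH builds on can be sketched as a hash ring: names are hashed onto a circle and assigned to the first node clockwise. The ring construction, the use of MD5, and the node names are our own illustrative assumptions, not details from the paper.

```python
import hashlib

# Sketch of consistent hashing for cache placement in the spirit of
# Prob-CH [115]: each content name maps to one deterministic caching node.

def _hash(key):
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

class HashRing:
    def __init__(self, nodes):
        self.ring = sorted((_hash(n), n) for n in nodes)

    def node_for(self, content_name):
        h = _hash(content_name)
        for point, node in self.ring:
            if h <= point:          # first node clockwise from the name's hash
                return node
        return self.ring[0][1]      # wrap around the ring

ring = HashRing(["nodeA", "nodeB", "nodeC"])
owner1 = ring.node_for("/sensor/temp/42")
owner2 = ring.node_for("/sensor/temp/42")
# The same name always maps to the same node, so Interests can be steered
# toward that node instead of caching the content everywhere.
```

Because only the keys near a departed node remap when membership changes, this placement stays stable in dynamic topologies, which is one reason consistent hashing suits IoT cache coordination.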

4. Findings

This section outlines the main observations drawn from the study on caching strategies in NDN-based IoT networks. It is divided into three parts. Section 4.1 compares different caching approaches, explaining how each one works, where it fits best, and the assumptions it relies on. Section 4.2 looks at how these strategies perform when measured by factors like cache hit ratio, energy use, latency, and scalability. Lastly, Section 4.3 gives an overview of the simulation tools commonly used to evaluate these strategies, pointing out what each tool can do and where it might fall short. Altogether, this section helps build a clearer understanding of what works best in different IoT scenarios. Table 15 illustrates the critical analysis of NDN-based IoT caching strategies.

4.1. Comparative Analysis of Caching Strategies in NDN-Based IoT Networks

4.1.1. Popularity-Based Caching

Popularity-based caching approaches can be highly effective in boosting the cache hit ratio (CHR) and reducing data retrieval delays. However, their success largely depends on the stability and predictability of content request patterns. In fast-changing IoT environments, sudden shifts in demand can lead to outdated or less relevant data remaining in the cache, which increases the likelihood of cache misses and reduces overall efficiency. These strategies typically rely on the assumption that content popularity can be assessed locally, an expectation that may not hold true in resource-limited IoT nodes [12]. They work best in settings where user request patterns remain relatively steady over time, allowing past data to serve as a reliable guide for future caching decisions. Additionally, they presume that devices have enough memory and processing power to monitor and assess content popularity. As a result, popularity-based caching tends to be more effective in static or semi-static environments, such as smart homes or industrial monitoring systems, where access behavior is more consistent.

4.1.2. Freshness Caching

Freshness-aware caching strategies are designed to maintain the accuracy and relevance of stored data, which is particularly important for real-time IoT applications. These strategies typically require frequent updates and content revalidation, which can increase bandwidth usage and energy consumption, especially in scenarios where the rate of content changes is high. They are most effective in applications where having up-to-date data is critical, such as healthcare monitoring, environmental tracking, or smart traffic systems. For these strategies to work as intended, it is assumed that the content is marked with freshness indicators, like timestamps, either by the original producers or intermediate nodes [116]. It is also expected that the network has enough resources to handle ongoing checks and data replacement. In these cases, the cost of delivering stale data is considered more detrimental than the added effort required to keep the cache fresh.

4.1.3. Collaborative/Cooperative Caching

Cooperative caching helps improve overall performance by allowing devices in the network to share storage responsibilities. Instead of each node caching content independently, they coordinate to reduce duplication and make better use of available memory. However, this cooperation is not without challenges. It usually involves extra signaling between devices and depends on a certain level of trust, which can be difficult to establish, especially in IoT environments where devices might be from different manufacturers or only occasionally connected. This approach tends to work best in places where devices are close together and have reliable communication links, like in smart campuses, urban sensor networks, or Industrial IoT systems. For cooperative caching to be effective, it is generally assumed that devices can communicate using shared standards or protocols, and that they have enough bandwidth and energy to handle the added coordination. It also assumes that avoiding redundant data copies is beneficial, so content is distributed across nodes rather than stored individually by each one [14].

4.1.4. Hybrid Caching

Hybrid caching strategies are designed to bring flexibility by combining different caching methods. This allows the system to adapt to varying network conditions and content requirements. However, integrating multiple mechanisms can also make the implementation more complex. Finding the right balance between performance goals like reducing latency, saving energy, or improving the cache hit ratio can be particularly difficult in environments with limited resources. If not well coordinated, these strategies might also struggle to scale effectively in larger deployments. Such approaches are especially useful in IoT settings that involve a mix of content types and device capabilities. In these scenarios, no single caching method is likely to meet all the needs. Hybrid strategies usually rely on the network’s ability to support basic intelligence to choose the most suitable caching behavior in real time. This makes them a good fit for systems that are adaptable by design, including edge-cloud IoT architectures where different parts of the network serve different roles [117].

4.1.5. Machine Learning-Based Caching

Machine learning-based caching strategies offer strong potential when it comes to predicting content demand and making intelligent caching decisions. However, their practical use in real-time IoT environments can be limited by the processing power required, especially on edge devices with constrained resources. Another common challenge is the need for a large amount of training data, which may not be available in newer deployments or networks with low traffic volumes. These methods tend to perform best in settings where edge nodes, such as gateways or local servers, have enough computing capacity, like in edge data centers or AI-enabled routers [94]. They typically rely on having access to past data for training and operate most effectively in environments where conditions do not change too rapidly, allowing the models to learn and adapt with reasonable accuracy. Overall, ML-based caching makes the biggest impact in large-scale, data-rich networks where patterns in user behavior can be identified and used to optimize content placement [118].
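Full learned models are often too heavy for constrained edge nodes, but the predict-then-cache loop they implement can be sketched with a lightweight stand-in: an exponentially weighted estimate of per-content request rates, with the top-k predicted items kept in cache. The class below is an illustrative simplification, not a method from the surveyed papers.

```python
class DemandPredictor:
    """Exponentially weighted request-rate estimate per content name:
    a lightweight stand-in for learned demand models."""

    def __init__(self, alpha: float = 0.3):
        self.alpha = alpha
        self.rate: dict[str, float] = {}

    def observe(self, name: str) -> None:
        """Decay all estimates, then credit the requested name."""
        for key in self.rate:
            self.rate[key] *= (1 - self.alpha)
        self.rate[name] = self.rate.get(name, 0.0) + self.alpha

    def top_k(self, k: int) -> list[str]:
        """Names predicted to be in highest demand: the cache candidates."""
        return sorted(self.rate, key=self.rate.get, reverse=True)[:k]
```

A real ML-based scheme would replace the decay rule with a trained model, but the surrounding loop (observe requests, rank content, cache the top candidates) stays the same.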

4.1.6. Probabilistic Caching

Probabilistic caching is valued for its simplicity and its ability to distribute the caching load across the network. Making caching decisions based on predefined or random probabilities helps avoid overloading specific nodes. However, this simplicity can also lead to inefficiencies; for instance, popular content might be missed, while less useful data ends up occupying cache space, depending on how the probabilities are set. These strategies are particularly suitable for large, decentralized, and fast-changing IoT environments, where tracking detailed content popularity or freshness at every node would be too costly or complex. They are built on the idea that making lightweight, distributed caching decisions is often more practical than relying on heavier, centralized algorithms. This approach is especially useful in networks with high content variability and unstable connectivity, such as those found in vehicular systems or mobile IoT setups [65].

4.2. Evaluation Metrics Used in NDN-IoT Caching Strategies

When it comes to the cache hit ratio (CHR), most of the strategies across the six categories show noticeable improvements, although they achieve this through different means. Popularity-based methods like PACC and PCS improve the CHR by prioritizing frequently requested content and placing it closer to where it is needed. Freshness-oriented approaches, such as PoSiF and PCCM, maintain a high CHR by ensuring only current and valid content is cached, helping to avoid unnecessary requests for outdated data. Cooperative strategies like SMCC and NDN + CoCa enhance the CHR by distributing data intelligently across multiple nodes, increasing the chances of content availability [30]. Hybrid solutions, which combine multiple factors like popularity, freshness, and network centrality, often lead to even better CHR outcomes by balancing content demand and diversity. Similarly, machine learning-based techniques, including iCache, DeepCache, and IoTCache, use predictive models to anticipate demand and cache data accordingly, leading to significant improvements. Probabilistic methods, particularly those enhanced with techniques like hashing (e.g., Prob-CH) or centrality-based adjustments (e.g., SBPC), also contribute by spreading the load efficiently and minimizing redundancy.
Looking at energy consumption, some strategies are clearly more energy-aware than others, especially in settings where devices have limited power resources [119]. For example, cooperative approaches like SMCC and green caching techniques reduce energy use by minimizing content duplication and allowing some nodes to enter low-power states. Freshness-based methods, such as SCTSmart, improve efficiency by proactively removing stale content, reducing unnecessary data exchanges. Machine learning-based strategies, like iCache and those built on DQL, support energy savings by making smarter decisions about what and where to cache, reducing long-distance data retrieval. Hybrid strategies often stand out here as well, as they combine energy considerations with other key factors like popularity. While popularity-based and probabilistic approaches may indirectly lower energy use by improving the CHR or reducing the number of hops, they usually do not focus on energy as a primary objective.
Latency is another important metric, and most strategies aim to reduce the time it takes to retrieve data [120]. Popularity-based caching lowers latency by placing frequently requested items near the user, reducing the need to travel upstream. Freshness-focused techniques also help by minimizing delays caused by data revalidation or replacement. Cooperative caching speeds up retrieval by allowing nearby nodes to serve data, which is especially beneficial in dense networks such as vehicular systems. Hybrid caching, by combining on-path and off-path techniques, helps strike a balance between redundancy and speed. Machine learning models, particularly those based on DQN or RNN, dynamically adapt to demand and optimize content placement in real-time, keeping latency low. Even probabilistic approaches, especially when paired with tools like consistent hashing, contribute to faster content delivery by spreading the caching burden across the network.
Scalability is essential for any solution intended for large or growing networks [121]. Probabilistic caching stands out for its simplicity and stateless nature, making it easy to scale, especially when combined with hashing or SDN techniques. Cooperative caching also scales well in dense environments, thanks to its distributed design. Hybrid strategies are particularly effective at adapting to different network sizes and demands, as they can flexibly adjust to various conditions. Machine learning-based caching can scale too, but often requires more powerful infrastructure, like edge or cloud computing, to manage the complexity. Popularity and freshness-based methods are generally scalable if they incorporate dynamic thresholds or adaptive mechanisms, although static configurations might struggle in rapidly changing environments. Overall, hybrid and cooperative models tend to offer the best balance between scalability, adaptability, and consistent performance in evolving IoT settings.
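The metrics discussed above are typically computed from per-request logs emitted by a simulator. A minimal sketch, assuming a hypothetical event schema with a hit flag, latency, and hop count per request (real simulators emit richer traces):

```python
def evaluate_trace(events: list[dict]) -> dict:
    """Summarize a request trace into common caching metrics.
    Each event: {"hit": bool, "latency_ms": float, "hops": int}."""
    n = len(events)
    hits = sum(1 for e in events if e["hit"])
    return {
        "cache_hit_ratio": hits / n,
        "avg_latency_ms": sum(e["latency_ms"] for e in events) / n,
        "avg_hops": sum(e["hops"] for e in events) / n,
    }

trace = [
    {"hit": True,  "latency_ms": 4.0,  "hops": 1},   # served from a nearby cache
    {"hit": False, "latency_ms": 22.0, "hops": 4},   # fetched from the producer
]
metrics = evaluate_trace(trace)
```

Energy is harder to reduce to a single log field; it is usually modeled per transmission and per cache operation, which is one reason custom simulators (Section 4.3) are common for energy studies.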

4.3. Simulation Environments for Evaluating NDN-Based Caching Strategies

In the context of Named Data Networking (NDN), particularly when applied to Internet of Things (IoT) environments, caching strategies are typically evaluated through simulations rather than real-world deployments. This is mainly due to the high cost and complexity of building and maintaining physical testbeds. To address this, several simulation tools have been developed, allowing researchers to experiment with and compare different caching methods in a controlled and repeatable manner. This section highlights some of the most widely used simulation platforms for NDN caching studies, including ndnSIM, SocialCCNSim, Icarus, and Mininet, as well as various custom or extended simulation frameworks.

4.3.1. ndnSIM

ndnSIM is one of the most commonly used simulation tools for exploring NDN protocols and architectures. Built on top of the NS-3 simulator, it follows a modular design that allows researchers to work with key NDN components like forwarding strategies, the Content Store (CS), Pending Interest Table (PIT), and Forwarding Information Base (FIB) as separate, customizable modules. This flexibility makes it easier to experiment with or replace specific functionalities without disrupting the overall setup. The framework also supports various caching models, lookup strategies, and replacement policies, giving researchers a broad range of options to test. In addition, ndnSIM includes detailed logging and metric tracking tools, which help in evaluating system performance and behavior at a granular level. These features make it a powerful and reliable platform for simulating and analyzing caching strategies in NDN environments [122].

4.3.2. SocialCCNSim

SocialCCNSim is a simulation tool designed specifically for exploring content-centric networking within socially structured networks. It supports the evaluation of various caching placement strategies, such as PC and MAGIC, and commonly used replacement policies like LRU, LFU, and FIFO. One of its strengths is the ability to simulate networks based on real-world-inspired topologies, including those derived from platforms like Facebook and LastFM, in addition to allowing users to define custom graph structures. The simulator also accommodates mobility scenarios, making it useful for studying dynamic environments. It offers a range of performance metrics such as cache diversity, eviction frequency, and path stretch, which help in assessing how well different caching strategies perform in socially influenced content dissemination. This makes SocialCCNSim a valuable option for research that focuses on user behavior and social interactions in content delivery networks [123].

4.3.3. Icarus

Icarus is a Python-based simulation toolkit tailored for evaluating caching performance in both ICN and NDN environments. Its design offers a good degree of flexibility, allowing researchers to model and experiment with a variety of caching approaches. It supports several well-known strategies, such as LCD, PC, and probabilistic methods, as well as common cache replacement policies like FIFO, LRU, and random eviction. Due to its lightweight nature and straightforward setup, Icarus is easy to work with, especially for higher-level analysis focused on cache behavior. However, it does not include support for low-level networking functions or mobility features, which limits its use for full-stack simulations. Instead, it is best suited for abstract modeling and performance comparison of caching mechanisms in more controlled or static environments [124].

4.3.4. Mininet

Mininet serves as a virtual testbed widely used for emulating network topologies and testing network applications, particularly in the context of software-defined networking (SDN). Although it is not specifically designed for Named Data Networking (NDN), it can be adapted for NDN-related research by integrating custom caching modules. With support for real-time simulation of complex network setups, Mininet allows users to design and replicate experiments using its interactive command-line interface and Python-based APIs. Its lightweight nature and minimal hardware requirements make it a practical choice for researchers aiming to approximate real-world deployment scenarios without the overhead of physical infrastructure [125].
In some situations, researchers choose to build custom simulation environments to accommodate features that standard tools do not support. These tailored simulators might extend existing frameworks like ndnSIM or ccnSim or be developed entirely from scratch using programming languages such as Python or C++. Such tools are often created to address specific needs, including detailed energy modeling, unique network architectures, or hybrid deployment scenarios. While this approach offers more design flexibility, it typically comes with trade-offs in terms of standardization and broader tool support.
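A from-scratch simulator of the kind described above usually begins with a Content Store abstraction plus hit/miss accounting. A minimal sketch of such a building block with LRU replacement (the `fetch` callback standing in for upstream retrieval is illustrative):

```python
from collections import OrderedDict

class LRUCacheNode:
    """Minimal Content Store with LRU replacement: the kind of
    building block a custom caching simulator starts from."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.store: OrderedDict[str, bytes] = OrderedDict()
        self.hits = 0
        self.misses = 0

    def request(self, name: str, fetch) -> bytes:
        if name in self.store:
            self.store.move_to_end(name)      # refresh recency on a hit
            self.hits += 1
            return self.store[name]
        self.misses += 1
        data = fetch(name)                    # retrieve from upstream/producer
        self.store[name] = data
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)    # evict the least recently used
        return data
```

Swapping `popitem(last=False)` for other eviction rules is how such a skeleton grows into comparisons of LRU, LFU, or freshness-based replacement.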

5. Future Directions

This study raises several important research issues and opportunities that address the limitations of IoT environments, including combining multiple strategies so that the strengths of each can be leveraged to create new strategies with better caching performance than existing ones. In this section, we present some open research directions that seek to improve caching operations in NDN-based IoT networks. Figure 17 presents the timeline of the NDN-based IoT caching strategies that support future directions between 2020 and 2025.

5.1. Developing a Caching Strategy That Combines Data Freshness and Popularity

Some studies have discussed the design of caching strategies based on data freshness, others have focused on content popularity, and a few have proposed strategies that combine both. However, there is still room to improve such strategies in terms of reducing energy consumption, selecting appropriate content, choosing a suitable replacement method, and other measures consistent with the dynamic nature of IoT networks. Integrating edge computing into such strategies can allow freshness and popularity decisions to be made locally at the network edge, reducing latency and central coordination overhead. Moreover, scalable frameworks can be developed to dynamically weigh freshness and popularity depending on the application context and network state.
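A combined strategy of the kind called for here can be sketched as a weighted utility over a normalized popularity count and a linear freshness term derived from the content's remaining lifetime. The weights, normalization cap, and linear freshness model are illustrative assumptions, not a proposal from the surveyed literature.

```python
def freshness(age_s: float, lifetime_s: float) -> float:
    """Linear freshness in [0, 1]: 1 when just produced, 0 at expiry."""
    return max(0.0, 1.0 - age_s / lifetime_s)

def cache_utility(request_count: int, age_s: float, lifetime_s: float,
                  w_pop: float = 0.6, w_fresh: float = 0.4,
                  max_count: int = 100) -> float:
    """Weighted utility an edge node could use to rank cache candidates:
    popular-and-fresh content scores highest."""
    pop = min(request_count, max_count) / max_count   # normalize popularity
    return w_pop * pop + w_fresh * freshness(age_s, lifetime_s)
```

An adaptive framework of the kind envisioned above would adjust `w_pop` and `w_fresh` at runtime, favoring freshness for sensor readings and popularity for shared multimedia content.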

5.2. Caching Strategies Based on Machine Learning (ML)

Developing caching strategies based on machine learning (ML) and reinforcement learning (RL) can make caching decisions both simpler to deploy and more efficient. In general, ML-based methods primarily consider content popularity, given the importance of this feature, and then consider content freshness. ML can also be applied to the main cache operations: selecting the appropriate content, choosing the most suitable caching method, and choosing the most appropriate replacement mechanism, thereby improving the efficiency of the network as a whole. In addition, deep learning models and adaptive RL algorithms can be trained on mobility and traffic patterns to enable more intelligent caching decisions in real time. These AI models can also learn contextual features from heterogeneous IoT environments to improve accuracy and energy efficiency.

5.3. Caching Strategies That Support Energy Saving

Energy management is one of the most important and difficult tasks in IoT environments, and it also poses a challenge in NDN-based IoT networks. Future research should focus on strategies that treat energy as a first-order factor through measures such as optimizing storage locations and distributing the caching load proportionally across nodes. AI-driven caching models can predict low-usage periods and optimize cache placement accordingly, reducing unnecessary data transmissions. Similarly, edge-based adaptive algorithms can reduce global control traffic and help balance energy expenditure across nodes.

5.4. Caching Strategies Based on Enhancing Security and Privacy

Security and privacy in NDN-based IoT networks are of great importance and play a prominent role in maintaining network efficiency. It should be noted that the very caching operations that improve content retrieval may also expose the network to a range of attacks, so future researchers must address such attacks by closing the gaps in these caching strategies [126,127]. Lightweight blockchain mechanisms can be explored to create decentralized trust models that ensure data integrity and user authentication for cached content, especially in multi-provider or heterogeneous IoT environments.

5.5. Caching Strategies That Support Mobility Management

Recently, with the growing number of mobile users and the increasing use of wearable devices and smart vehicles, managing mobility in NDN-based IoT networks poses a growing challenge [128]. Future research should therefore be directed towards developing caching strategies that adapt to the changing locations of users across the network. The goal of such strategies should be to ensure seamless data access during mobility while accounting for both latency and packet loss. These strategies can also rely on predictions of traffic patterns along a user's expected path to cache content in advance. This aspect is important for supporting the dynamic nature of mobile IoT networks, especially in scenarios such as smart transportation and mobile healthcare systems. Scalable caching frameworks that use software-defined networking (SDN) can help dynamically redirect requests and adjust cache placement in real time based on user mobility, and ML techniques can assist in forecasting user trajectories to prefetch data proactively.

5.6. Caching Strategies That Support Digital Twin Integration

Integrating digital twin technologies with NDN-based IoT networks enables intelligent caching decisions and predictive analytics. However, this integration can pose new challenges to caching operations because digital twins require efficient and timely access to physical-world data. It is therefore important to direct future research toward customized caching strategies that support the dynamic, continuous data exchange required by digital twin models. Additionally, adaptive caching techniques that prioritize time-sensitive content and use machine learning to predict the importance of data can improve the efficiency and accuracy of digital twins. Such integration could transform several fields, such as smart manufacturing, autonomous systems, and real-time diagnostics, where up-to-date information is essential for effective interaction between the real and virtual worlds. AI-driven caching algorithms can play a vital role in predicting which twin data is likely to be reused or updated soon, enabling smarter prefetching and eviction [47]. Furthermore, edge-cloud collaboration frameworks can help synchronize twin models more efficiently using localized cache buffers.

6. Conclusions

This study provided a comprehensive review of caching strategies in NDN-based IoT networks. Caching plays an important role in the efficiency of data retrieval, reducing access time and improving energy conservation. The dynamic nature of these networks requires innovative and effective caching solutions. Caching strategies were classified into several categories, including popularity-based, freshness-based, cooperative, hybrid, probabilistic, and machine learning-enhanced caching. Each category included a set of strategies that offered advantages and addressed challenges in NDN-based IoT networks. This paper contributed a detailed comparative analysis of these strategies based on common evaluation metrics, such as the CHR, energy consumption, latency, and scalability. One key takeaway is that hybrid and AI-based strategies demonstrated the highest adaptability to dynamic IoT conditions, while freshness-based strategies remained essential for real-time applications. For developers and researchers, this paper recommends prioritizing adaptable and context-aware caching mechanisms, especially those that integrate energy awareness and mobility support. Despite the existence of these solutions, challenges persist regarding energy efficiency, privacy, and security. In particular, open problems remain in designing scalable, secure caching frameworks that balance performance and resource constraints under real-world IoT settings. This study was limited by the scope of strategies and evaluation metrics considered, and future work should explore broader integrations with emerging technologies, such as edge computing, digital twins, and adaptive AI-driven approaches. Building on the identified gaps, further research can focus on combining multiple caching objectives into unified, context-aware frameworks. Addressing these gaps could pave the way for more robust and intelligent caching systems in future IoT networks.

Author Contributions

Supervision: A.H.M.A.; validation: F.Q. and W.M.; visualization and writing of original draft: A.A.A.; review and editing: A.A.A., A.H.M.A., and F.Q. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Universiti Kebangsaan Malaysia Fundamental Research Grant Scheme (FRGS) from the Ministry of Higher Education with the code: FRGS/1/2019/ICT03/UKM/02/1.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors would like to thank the respective editor and reviewers for their support.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Islam, R.U.; Savaglio, C.; Fortino, G. Leading Smart Environments towards the Future Internet through Name Data Networking: A survey. Future Gener. Comput. Syst. 2025, 167, 107754. [Google Scholar] [CrossRef]
  2. Kumar, S.; Tiwari, P.; Zymbler, M. Internet of Things is a revolutionary approach for future technology enhancement: A review. J. Big Data 2019, 6, 111. [Google Scholar] [CrossRef]
  3. Alubady, R.; Hassan, S.; Habbal, A. Pending interest table control management in Named Data Network. J. Netw. Comput. Appl. 2018, 111, 99–116. [Google Scholar] [CrossRef]
  4. Gupta, D.; Rani, S.; Ahmed, S.H.; Hussain, R. Caching policies in NDN-IoT architecture. In Integration of WSN and IoT for Smart Cities; Springer International Publishing: Cham, Switzerland, 2020; pp. 43–64. [Google Scholar]
  5. Anjum, A.; Agbaje, P.; Hounsinou, S.; Guizani, N.; Olufowobi, H. D-NDNoT: Deterministic Named Data Networking for Time-Sensitive IoT Applications. IEEE Internet Things J. 2024, 11, 24872–24885. [Google Scholar] [CrossRef]
  6. Chand, M. A Comparative Survey on Different Caching Mechanisms in Named Data. J. Emerg. Technol. Innov. Res. 2019, 6, 264–271. [Google Scholar]
  7. Abane, A.; Daoui, M.; Bouzefrane, S.; Muhlethaler, P. A lightweight forwarding strategy for named data networking in low-end IoT. J. Netw. Comput. Appl. 2019, 148, 102445. [Google Scholar] [CrossRef]
  8. Hail, M.A.M.; Matthes, A.; Fischer, S. Comparative Performance Analysis of NDN Protocol in IoT Environments. In Proceedings of the 7th Conference on Cloud and Internet of Things, CioT, Montreal, QC, Canada, 29–31 October 2024; pp. 1–8. [Google Scholar]
  9. Azamuddin, W.M.H.; Aman, A.H.M.; Hassan, R.; Mansor, N. Comparison of named data networking mobility methodology in a merged cloud internet of things and artificial intelligence environment. Sensors 2022, 22, 6668. [Google Scholar] [CrossRef]
  10. Alwakeel, A.M. Enhancing IoT performance in wireless and mobile networks through named data networking (NDN) and edge computing integration. Comput. Netw. 2025, 264, 111267. [Google Scholar] [CrossRef]
  11. Amadeo, M.; Campolo, C.; Ruggeri, G.; Molinaro, A. Edge Caching in IoT Smart Environments: Benefits, Challenges, and Research Perspectives Toward 6G. Internet Things 2023, 53–73. [Google Scholar] [CrossRef]
  12. Alahmri, B.; Al-Ahmadi, S.; Belghith, A. Efficient Pooling and Collaborative Cache Management for NDN/IoT Networks. IEEE Access 2021, 9, 43228–43240. [Google Scholar] [CrossRef]
  13. Asghari, P.; Rahmani, A.M.; Javadi, H.H.S. Internet of Things applications: A systematic review. Comput. Netw. 2019, 148, 241–261. [Google Scholar] [CrossRef]
  14. Khalid, A.; Rehman, R.A.; Kim, B.S. Caching Strategies in NDN Based Wireless Ad Hoc Network: A Survey. Comput. Mater. Contin. 2024, 80, 61–103. [Google Scholar] [CrossRef]
  15. Yovita, L.V.; Syambas, N.R. Caching on named data network: A survey and future research. Int. J. Electr. Comput. Eng. 2018, 8, 4456–4466. [Google Scholar] [CrossRef]
  16. Abdullahi, I.; Arif, S.; Hassan, S. Survey on caching approaches in information centric networking. J. Netw. Comput. Appl. 2015, 56, 48–59. [Google Scholar] [CrossRef]
  17. Alkwai, L.; Belghith, A.; Gazdar, A.; AlAhmadi, S. Awareness of user mobility in Named Data Networking for IoT traffic under the push communication mode. J. Netw. Comput. Appl. 2023, 213, 103598. [Google Scholar] [CrossRef]
  18. Mishra, S.; Jain, V.K.; Gyoda, K.; Jain, S. Distance-based dynamic caching and replacement strategy in NDN-IoT networks. Internet Things 2024, 27, 101264. [Google Scholar] [CrossRef]
  19. Muralidharan, S.; Roy, A.; Saxena, N. MDP-IoT: MDP based interest forwarding for heterogeneous traffic in IoT-NDN environment. Future Gener. Comput. Syst. 2018, 79, 892–908. [Google Scholar] [CrossRef]
  20. Shailendra, S.; Sengottuvelan, S.; Rath, H.K.; Panigrahi, B.; Simha, A. Performance evaluation of caching policies in NDN-an ICN architecture. In Proceedings of the IEEE Region 10 Annual International Conference, Proceedings/TENCON, Singapore, 22–25 November 2016; pp. 1117–1121. [Google Scholar]
  21. Lehmann, M.B.; Barcellos, M.P.; Mauthe, A. Providing producer mobility support in NDN through proactive data replication. In Proceedings of the NOMS 2016—2016 IEEE/IFIP Network Operations and Management Symposium, Istanbul, Turkey, 25–29 April 2016; pp. 383–391. [Google Scholar]
  22. Dhawan, G.; Mazumdar, A.P.; Meena, Y.K. CNCP: A Candidate Node Selection for Cache Placement in ICN-IoT. In Proceedings of the 2022 IEEE 6th Conference on Information and Communication Technology, CICT, Gwalior, India, 18–20 November 2022; pp. 1–6. [Google Scholar]
  23. Arshad, S.; Azam, M.A.; Rehmani, M.H.; Loo, J. Recent advances in information-centric networking-based internet of things (ICN-IoT). IEEE Internet Things J. 2019, 6, 2128–2158. [Google Scholar] [CrossRef]
  24. Atzori, L.; Iera, A.; Morabito, G. Understanding the Internet of Things: Definition, potentials, and societal role of a fast evolving paradigm. Ad Hoc Netw. 2017, 56, 122–140. [Google Scholar] [CrossRef]
  25. Pruthvi, C.N.; Vimala, H.S.; Shreyas, J. A systematic survey on content caching in ICN and ICN-IoT: Challenges, approaches and strategies. Comput. Netw. 2023, 233, 109896. [Google Scholar]
  26. Zhang, Z.; Lung, C.-h.; Member, S.; Wei, X.; Chen, M.; Zhang, Z. In-network Caching for ICN-based IoT ( ICN-IoT ): A comprehensive survey. IEEE Internet Things J. 2023, 10, 14595–14620. [Google Scholar] [CrossRef]
  27. Naeem, M.A.; Din, I.U.; Meng, Y.; Almogren, A.; Rodrigues, J.J.P.C. Centrality-Based On-Path Caching Strategies in NDN-Based Internet of Things: A Survey. IEEE Commun. Surv. Tutor. 2024, 27, 2621–2657. [Google Scholar] [CrossRef]
  28. Meng, Y.; Ahmad, A.B.; Naeem, M.A. Comparative analysis of popularity aware caching strategies in wireless-based ndn based iot environment. IEEE Access 2024, 12, 136466–136484. [Google Scholar] [CrossRef]
  29. Alubady, R.; Salman, M.; Mohamed, A.S. A review of modern caching strategies in named data network: Overview, classification, and research directions. Telecommun. Syst. 2023, 84, 581–626. [Google Scholar] [CrossRef]
  30. Serhane, O.; Yahyaoui, K.; Nour, B.; Moungla, H. A survey of ICN content naming and in-network caching in 5G and beyond networks. IEEE Internet Things J. 2020, 8, 4081–4104. [Google Scholar] [CrossRef]
  31. Aboodi, A.; Wan, T.C.; Sodhy, G.C. Survey on the Incorporation of NDN/CCN in IoT. IEEE Access 2019, 7, 71827–71858. [Google Scholar] [CrossRef]
  32. Din, I.U.; Hassan, S.; Khan, M.K.; Guizani, M.; Ghazali, O.; Habbal, A. Caching in Information-Centric Networking: Strategies, Challenges, and Future Research Directions. IEEE Commun. Surv. Tutor. 2018, 20, 1443–1474. [Google Scholar] [CrossRef]
  33. Bilal, M.; Kang, S.G. A Cache Management Scheme for Efficient Content Eviction and Replication in Cache Networks. IEEE Access 2017, 5, 1692–1701. [Google Scholar] [CrossRef]
  34. Zhao, Q.; Peng, Z.; Hong, X. A named data networking architecture implementation to internet of underwater things. In Proceedings of the 14th International Conference on Underwater Networks & Systems, Atlanta, GA, USA, 23–25 October 2019. [Google Scholar]
  35. Shrisha, H.S.; Boregowda, U. An energy efficient and scalable endpoint linked green content caching for Named Data Network based Internet of Things. Results Eng. 2022, 13, 100345. [Google Scholar] [CrossRef]
  36. Amadeo, M.; Campolo, C.; Ruggeri, G.; Molinaro, A. Beyond Edge Caching: Freshness and Popularity Aware IoT Data Caching via NDN at Internet-Scale. IEEE Trans. Green Commun. Netw. 2022, 6, 352–364. [Google Scholar] [CrossRef]
  37. Kumar, S.; Tiwari, R. Navigating transient content: PFC caching approach for NDN-based IoT networks. Pervasive Mob. Comput. 2025, 109, 102031. [Google Scholar] [CrossRef]
  38. Dehkordi, I.F.; Manochehri, K.; Aghazarian, V. Internet of Things (IoT) Intrusion Detection by Machine Learning (ML): A Review. Asia-Pac. J. Inf. Technol. Multimed. 2023, 12, 13–38. [Google Scholar]
  39. Naeem, M.A.; Ullah, R.; Meng, Y.; Ali, R.; Lodhi, B.A. Caching Content on the Network Layer: A Performance Analysis of Caching Schemes in ICN-Based Internet of Things. IEEE Internet Things J. 2022, 9, 6477–6495. [Google Scholar] [CrossRef]
  40. Ahed, K.; Benamar, M.; El Ouazzani, R. Content delivery in named data networking based internet of things. In Proceedings of the 2019 15th International Wireless Communications and Mobile Computing Conference, IWCMC, Tangier, Morocco, 24–28 June 2019; pp. 1397–1402. [Google Scholar]
  41. Wang, X.; Wang, X.; Li, Y. NDN-based IoT with Edge computing. Future Gener. Comput. Syst. 2021, 115, 397–405. [Google Scholar] [CrossRef]
  42. Meng, Y.; Naeem, M.A.; Ali, R.; Zikria, Y.B.; Kim, S.W. DCS: Distributed Caching Strategy at the Edge of Vehicular Sensor Networks in Information-Centric Networking. Sensors 2019, 19, 4407. [Google Scholar] [CrossRef]
  43. Meddeb, M.; Dhraief, A.; Belghith, A.; Monteil, T.; Drira, K.; Mathkour, H. Least fresh first cache replacement policy for NDN-based IoT networks. Pervasive Mob. Comput. 2019, 52, 60–70. [Google Scholar] [CrossRef]
  44. Kumamoto, Y.; Nakazato, H. Implementation of NDN function chaining using caching for IoT environments. In Proceedings of the CCIoT 2020—Proceedings of the 2020 Cloud Continuum Services for Smart IoT Systems, Part of SenSys, Virtual, 16–19 November 2020; pp. 20–25. [Google Scholar]
  45. Kazmi, S.H.A.; Qamar, F.; Hassan, R.; Nisar, K. Improved QoS in internet of things (IoTs) through short messages encryption scheme for wireless sensor communication. In Proceedings of the 2022 International Symposium on Intelligent Signal Processing and Communication Systems (ISPACS), Penang, Malaysia, 22–25 November 2022; pp. 1–6. [Google Scholar]
  46. Alabadi, M.; Habbal, A.; Wei, X. Industrial Internet of Things: Requirements, Architecture, Challenges, and Future Research Directions. IEEE Access 2022, 10, 66374–66400. [Google Scholar] [CrossRef]
  47. Hassan, R.; Qamar, F.; Hasan, M.K.; Aman, A.H.M.; Ahmed, A.S. Internet of Things and its applications: A comprehensive survey. Symmetry 2020, 12, 1674. [Google Scholar] [CrossRef]
  48. Liwen, Z.; Qamar, F.; Liaqat, M.; Hindia, M.N.; Ariffin, K.A.Z. Toward efficient 6G IoT networks: A perspective on resource optimization strategies, challenges, and future directions. IEEE Access 2024, 12, 76606–76633. [Google Scholar] [CrossRef]
  49. Djama, A.; Djamaa, B.; Senouci, M.R. TCP/IP and ICN Networking Technologies for the Internet of Things: A Comparative Study. In Proceedings of the ICNAS 2019: 4th International Conference on Networking and Advanced Systems, Annaba, Algeria, 26–27 June 2019; pp. 1–6. [Google Scholar]
  50. Zahedinia, M.S.; Khayyambashi, M.R.; Bohlooli, A. IoT data management for caching performance improvement in NDN. Clust. Comput. 2024, 27, 4537–4550. [Google Scholar] [CrossRef]
  51. Zhang, Z.; Yu, Y.; Zhang, H.; Newberry, E.; Mastorakis, S.; Li, Y.; Afanasyev, A.; Zhang, L. An Overview of Security Support in Named Data Networking. IEEE Commun. Mag. 2018, 56, 62–68. [Google Scholar] [CrossRef]
  52. Djama, A.; Djamaa, B.; Senouci, M.R. Information-Centric Networking solutions for the Internet of Things: A systematic mapping review. Comput. Commun. 2020, 159, 37–59. [Google Scholar] [CrossRef]
  53. Ullah, S.S.; Ullah, I.; Khattak, H.; Khan, M.A.; Adnan, M.; Hussain, S.; Amin, N.U.; Khattak, M.A.K. A Lightweight Identity-Based Signature Scheme for Mitigation of Content Poisoning Attack in Named Data Networking with Internet of Things. IEEE Access 2020, 8, 98910–98928. [Google Scholar] [CrossRef]
  54. Abraham, H.B.; Crowley, P. Controlling strategy retransmissions in named data networking. In Proceedings of the 2017 ACM/IEEE Symposium on Architectures for Networking and Communications Systems (ANCS), Beijing, China, 18–19 May 2017; pp. 70–81. [Google Scholar]
  55. Ahlgren, B.; Dannewitz, C.; Imbrenda, C.; Kutscher, D.; Ohlman, B. A survey of information-centric networking. IEEE Commun. Mag. 2012, 50, 26–36. [Google Scholar] [CrossRef]
  56. Ali Naeem, M.; Awang Nor, S.; Hassan, S.; Kim, B.S. Compound popular content caching strategy in named data networking. Electronics 2019, 8, 771. [Google Scholar] [CrossRef]
  57. Amadeo, M.; Ruggeri, G.; Campolo, C.; Molinaro, A. Content-Driven Closeness Centrality Based Caching in Softwarized Edge Networks. In Proceedings of the ICC 2023-IEEE International Conference on Communications, Rome, Italy, 28 May–1 June 2023; pp. 3264–3269. [Google Scholar]
  58. Gui, Y.; Chen, Y. A cache placement strategy based on compound popularity in named data networking. IEEE Access 2020, 8, 196002–196012. [Google Scholar] [CrossRef]
  59. Koide, M.; Matsumoto, N.; Matsuzawa, T. Caching Method for Information-Centric Ad Hoc Networks Based on Content Popularity and Node Centrality. Electronics 2024, 13, 2416. [Google Scholar] [CrossRef]
  60. Liu, Y.; Zhi, T.; Zhou, H.; Xi, H. PBRS: A Content Popularity and Betweenness Based Cache Replacement Scheme in ICN-IoT. J. Internet Technol. 2021, 22, 1495–1508. [Google Scholar]
  61. Nour, B.; Sharif, K.; Li, F.; Moungla, H.; Kamal, A.E.; Afifi, H. NCP: A near ICN cache placement scheme for IoT-based traffic class. In Proceedings of the 2018 IEEE Global Communications Conference (GLOBECOM), Abu Dhabi, United Arab Emirates, 9–13 December 2018. [Google Scholar]
  62. Dinh, N.-t. An efficient traffic-aware caching mechanism for information-centric wireless sensor networks. EAI Endorsed Trans. Ind. Netw. Intell. Syst. 2022, 9, e5. [Google Scholar] [CrossRef]
  63. Jaber, G.; Kacimi, R. A collaborative caching strategy for content-centric enabled wireless sensor networks. Comput. Commun. 2020, 159, 60–70. [Google Scholar] [CrossRef]
  64. Hasan, K.; Jeong, S.H. Efficient caching for data-driven IoT applications and fast content delivery with low latency in ICN. Appl. Sci. 2019, 9, 4730. [Google Scholar] [CrossRef]
  65. Xu, Y.; Li, J. Cache Benefit-Based Cache Placement Scheme for Iot Data in Icn by Using Ranking. Available online: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4132289 (accessed on 20 April 2025).
  66. Tarnoi, S.; Kumwilaisak, W.; Suppakitpaisarn, V.; Fukuda, K.; Ji, Y. Adaptive probabilistic caching technique for caching networks with dynamic content popularity. Comput. Commun. 2019, 139, 1–15. [Google Scholar] [CrossRef]
  67. Asmat, H.; Din, I.U.; Ullah, F.; Talha, M.; Khan, M.; Guizani, M. ELC: Edge Linked Caching for content updating in information-centric Internet of Things. Comput. Commun. 2020, 156, 174–182. [Google Scholar] [CrossRef]
  68. Dhawan, G.; Mazumdar, A.P.; Meena, Y.K. PoSiF: A Transient Content Caching and Replacement Scheme for ICN-IoT. Wirel. Commun. Mob. Comput. 2023, 2023, 1–18. [Google Scholar] [CrossRef]
  69. Feng, B.; Tian, A.; Yu, S.; Li, J.; Zhou, H.; Zhang, H. Efficient Cache Consistency Management for Transient IoT Data in Content-Centric Networking. IEEE Internet Things J. 2022, 9, 12931–12944. [Google Scholar] [CrossRef]
  70. Han, G.; Liu, L.; Jiang, J.; Shu, L.; Hancke, G. Analysis of energy-efficient connected target coverage algorithms for industrial wireless sensor networks. IEEE Trans. Ind. Inform. 2015, 13, 135–143. [Google Scholar] [CrossRef]
  71. Kumar, S.; Tiwari, R. Dynamic popularity window and distance-based efficient caching for fast content delivery applications in CCN. Eng. Sci. Technol. Int. J. 2021, 24, 829–837. [Google Scholar] [CrossRef]
  72. Kumar, S.; Tiwari, R. Optimized content centric networking for future internet: Dynamic popularity window based caching scheme. Comput. Netw. 2020, 179, 107434. [Google Scholar] [CrossRef]
  73. Naeem, M.A.; Ali, R.; Kim, B.S.; Nor, S.A.; Hassan, S. A Periodic Caching Strategy Solution for the Smart City in Information-Centric Internet of Things. Sustainability 2018, 10, 2576. [Google Scholar] [CrossRef]
  74. Vural, S.; Wang, N.; Navaratnam, P.; Tafazolli, R. Caching Transient Data in Internet Content Routers. IEEE/ACM Trans. Netw. (TON) 2017, 25, 1048–1061. [Google Scholar] [CrossRef]
  75. Meddeb, M.; Dhraief, A.; Belghith, A.; Monteil, T.; Drira, K.; Alahmadi, S. Cache freshness in named data networking for the internet of things. Comput. J. 2018, 61, 1496–1511. [Google Scholar] [CrossRef]
  76. Buchipalli, T.; Mahendran, V.; Badarla, V. How Fresh is the Data? An Optimal Learning-Based End-to-End Pull-Based Forwarding Framework for NDNoTs. In Proceedings of the International ACM Conference on Modeling, Analysis and Simulation of Wireless and Mobile Systems, Montreal, QC, Canada, 30 October–3 November 2023; pp. 285–289. [Google Scholar]
  77. Hazrati, N.; Pirahesh, S.; Arasteh, B.; Sefati, S.S.; Fratu, O.; Halunga, S. Cache Aging with Learning (CAL): A Freshness-Based Data Caching Method for Information-Centric Networking on the Internet of Things (IoT). Future Internet 2025, 17, 11. [Google Scholar] [CrossRef]
  78. Zahed, M.I.A.; Ahmad, I.; Habibi, D.; Phung, Q.V.; Mowla, M.M.; Waqas, M. A review on green caching strategies for next generation communication networks. IEEE Access 2020, 8, 212709–212737. [Google Scholar] [CrossRef]
  79. Podlipnig, S.; Böszörmenyi, L. A survey of web cache replacement strategies. ACM Comput. Surv. (CSUR) 2003, 35, 374–398. [Google Scholar] [CrossRef]
  80. Sekardefi, K.P.; Negara, R.M. Impact of Data Freshness-aware in Cache Replacement Policy for NDN-based IoT Network. In Proceedings of the ICCoSITE 2023—International Conference on Computer Science, Information Technology and Engineering: Digital Transformation Strategy in Facing the VUCA and TUNA Era, Jakarta, Indonesia, 16 February 2023; pp. 156–161. [Google Scholar]
  81. Shrimali, R.; Shah, H.; Chauhan, R. Proposed Caching Scheme for Optimizing Trade-off between Freshness and Energy Consumption in Name Data Networking Based IoT. Adv. Internet Things 2017, 7, 11–24. [Google Scholar] [CrossRef]
  82. Ioannou, A.; Weber, S. A Survey of Caching Policies and Forwarding Mechanisms in Information-Centric Networking. IEEE Commun. Surv. Tutor. 2016, 18, 2847–2886. [Google Scholar] [CrossRef]
  83. Naeem, M.A.; Bashir, A.K.; Meng, Y. Dynamic cluster-based cooperative cache management at the network edges in NDN-based Internet of Things. Alex. Eng. J. 2025, 125, 297–310. [Google Scholar] [CrossRef]
  84. Gui, Y.; Chen, Y. A Cache Placement Strategy Based on Entropy Weighting Method and TOPSIS in Named Data Networking. IEEE Access 2021, 9, 56240–56252. [Google Scholar] [CrossRef]
  85. Yang, Y.; Song, T. Energy-Efficient Cooperative Caching for Information-Centric Wireless Sensor Networking. IEEE Internet Things J. 2022, 9, 846–857. [Google Scholar] [CrossRef]
  86. Zahed, M.I.A.; Ahmad, I.; Habibi, D.; Phung, Q.V.; Zhang, L. A Cooperative Green Content Caching Technique for Next Generation Communication Networks. IEEE Trans. Netw. Serv. Manag. 2020, 17, 375–388. [Google Scholar] [CrossRef]
  87. Hahm, O.; Baccelli, E.; Schmidt, T.C.; Wählisch, M.; Adjih, C.; Massoulié, L. Low-power internet of things with NDN and cooperative caching. In Proceedings of the ICN 2017—Proceedings of the 4th ACM Conference on Information Centric Networking, Berlin, Germany, 26–28 September 2017; pp. 98–108. [Google Scholar]
  88. Yao, L.; Chen, A.; Deng, J.; Wang, J.; Wu, G. A cooperative caching scheme based on mobility prediction in vehicular content centric networks. IEEE Trans. Veh. Technol. 2017, 67, 5435–5444. [Google Scholar] [CrossRef]
  89. Gupta, D.; Rani, S.; Ahmed, S.H.; Garg, S.; Piran, M.J.; Alrashoud, M. ICN-Based Enhanced Cooperative Caching for Multimedia Streaming in Resource Constrained Vehicular Environment. IEEE Trans. Intell. Transp. Syst. 2021, 22, 4588–4600. [Google Scholar] [CrossRef]
  90. Rath, H.K.; Panigrahi, B.; Simha, A. On cooperative on-path and off-path caching policy for information centric networks (ICN). In Proceedings of the International Conference on Advanced Information Networking and Applications, AINA, Crans-Montana, Switzerland, 23–25 March 2016; pp. 842–849. [Google Scholar]
  91. Zaki Hamidi, E.A.; Akbar, K.M.; Negara, R.M.; Payangan, I.P.; Puspitaningsih, M.D.; Putri As’ari, A.Z. NDN Collaborative Caching Replacement and Placement Policy Performance Evaluation. In Proceedings of the 2024 10th International Conference on Wireless and Telematics (ICWT), Batam, Indonesia, 4–5 July 2024; pp. 1–5. [Google Scholar]
  92. Dinh, N.; Kim, Y. An energy reward-based caching mechanism for information-centric internet of things. Sensors 2022, 22, 743. [Google Scholar] [CrossRef]
  93. He, X.; Liu, H.; Li, W.; Valera, A.; Seah, W.K.G. EABC: Energy-aware Centrality-based Caching for Named Data Networking in the IoT. In Proceedings of the 2024 IEEE 25th International Symposium on a World of Wireless, Mobile and Multimedia Networks, WoWMoM, Perth, Australia, 4–7 June 2024; pp. 259–268. [Google Scholar]
  94. Zhang, Z.; Lung, C.H.; Lambadaris, I.; St-Hilaire, M. IoT data lifetime-based cooperative caching scheme for ICN-IoT networks. In Proceedings of the IEEE International Conference on Communications, Kansas City, MO, USA, 20–24 May 2018; pp. 1–7. [Google Scholar]
  95. Hail, M.A.M.; Bin-Salem, A.A.; Munassar, W. AI for IoT-NDN: Enhancing IoT with Named Data Networking and Artificial Intelligence. In Proceedings of the 2024 ASU International Conference in Emerging Technologies for Sustainability and Intelligent Systems, ICETSIS, Manama, Bahrain, 28–29 January 2024; pp. 1020–1026. [Google Scholar]
  96. Gupta, D.; Rani, S.; Ahmed, S.H.; Verma, S.; Ijaz, M.F.; Shafi, J. Edge caching based on collaborative filtering for heterogeneous icn-iot applications. Sensors 2021, 21, 5491. [Google Scholar] [CrossRef]
  97. Meddeb, M.; Dhraief, A.; Belghith, A.; Monteil, T.; Drira, K. Cache coherence in machine-to-machine information centric networks. In Proceedings of the 2015 IEEE 40th Conference on Local Computer Networks (LCN), Clearwater Beach, FL, USA, 26–29 October 2015; pp. 430–433. [Google Scholar]
  98. Anamalamudi, S.; Alkatheiri, M.S.; Solami, E.A.; Sangi, A.R. Cooperative Caching Scheme for Machine-to-Machine Information-Centric IoT Networks. IEEE Can. J. Electr. Comput. Eng. 2021, 44, 228–237. [Google Scholar] [CrossRef]
  99. Naeem, M.A.; Nguyen, T.N.; Ali, R.; Cengiz, K.; Meng, Y.; Khurshaid, T. Hybrid Cache Management in IoT-Based Named Data Networking. IEEE Internet Things J. 2022, 9, 7140–7150. [Google Scholar] [CrossRef]
  100. Alduayji, S.; Belghith, A.; Gazdar, A.; Al-Ahmadi, S. PF-ClusterCache: Popularity and Freshness-Aware Collaborative Cache Clustering for Named Data Networking of Things. Appl. Sci. 2022, 12, 6706. [Google Scholar] [CrossRef]
  101. Amadeo, M.; Ruggeri, G.; Campolo, C.; Molinaro, A.; Mangiullo, G. Caching popular and fresh IoT contents at the edge via named data networking. In Proceedings of the IEEE INFOCOM 2020—IEEE Conference on Computer Communications Workshops, INFOCOM WKSHPS, Toronto, ON, Canada, 6–9 July 2020; pp. 610–615. [Google Scholar]
  102. Khattab, A.; Youssry, N. Machine Learning for IoT Systems; Springer: Berlin/Heidelberg, Germany, 2020; pp. 105–127. [Google Scholar]
  103. Zhang, Y.; Muniyandi, R.C.; Qamar, F. A Review of Deep Learning Applications in Intrusion Detection Systems: Overcoming Challenges in Spatiotemporal Feature Extraction and Data Imbalance. Appl. Sci. 2025, 15, 1552. [Google Scholar] [CrossRef]
  104. Hou, J.; Lu, H.; Nayak, A. A GNN-based proactive caching strategy in NDN networks. Peer-Peer Netw. Appl. 2023, 16, 997–1009. [Google Scholar] [CrossRef]
  105. Wu, H.; Nasehzadeh, A.; Wang, P. A Deep Reinforcement Learning-Based Caching Strategy for IoT Networks With Transient Data. IEEE Trans. Veh. Technol. 2022, 71, 13310–13319. [Google Scholar] [CrossRef]
  106. Zhang, Z.; Wei, X.; Lung, C.H.; Zhao, Y. iCache: An Intelligent Caching Scheme for Dynamic Network Environments in ICN-Based IoT Networks. IEEE Internet Things J. 2023, 10, 1787–1799. [Google Scholar] [CrossRef]
  107. Yang, F.; Tian, Z. MRPGA: A Genetic-Algorithm-based In-network Caching for Information-Centric Networking. In Proceedings of the International Conference on Network Protocols, ICNP, Dallas, TX, USA, 1–5 November 2021; pp. 1–6. [Google Scholar]
  108. Xu, F.; Yang, F.; Bao, S.; Zhao, C. DQN Inspired Joint Computing and Caching Resource Allocation Approach for Software Defined Information-Centric Internet of Things Network. IEEE Access 2019, 7, 61987–61996. [Google Scholar] [CrossRef]
  109. Somuyiwa, S.O.; Gyorgy, A.; Gunduz, D. A Reinforcement-Learning Approach to Proactive Caching in Wireless Networks. IEEE J. Sel. Areas Commun. 2018, 36, 1331–1344. [Google Scholar] [CrossRef]
  110. Ihsan, A.; Rainarli, E. Optimization of k-nearest neighbour to categorize Indonesian’s news articles. Asia–Pac. J. Inf. Technol. Multimed. 2021, 10, 43–51. [Google Scholar] [CrossRef]
  111. Negara, R.M.; Syambas, N.R.; Mulyana, E.; Wasesa, N.P. Maximizing Router Efficiency in Named Data Networking with Machine Learning-Driven Caching Placement Strategy. In Proceedings of the IEEE International Conference on Computer Communication and the Internet, ICCCI, Tokyo, Japan, 14–16 June 2024; pp. 118–123. [Google Scholar]
  112. Chen, B.; Liu, L.; Sun, M.; Ma, H. IoTCache: Toward Data-Driven Network Caching for Internet of Things. IEEE Internet Things J. 2019, 6, 10064–10076. [Google Scholar] [CrossRef]
  113. Narayanan, A.; Verma, S.; Ramadan, E.; Babaie, P.; Zhang, Z.L. DEEPCACHE: A deep learning based framework for content caching. In Proceedings of the NetAI 2018—Proceedings of the 2018 Workshop on Network Meets AI and ML, Part of SIGCOMM, Budapest, Hungary, 24 August 2018; pp. 48–53. [Google Scholar]
  114. Naeem, M.A.; Nor, S.A.; Hassan, S.; Kim, B.-S. Performances of probabilistic caching strategies in content centric networking. IEEE Access 2018, 6, 58807–58825. [Google Scholar] [CrossRef]
  115. Gao, Y.; Zhou, J. Probabilistic caching mechanism based on software defined content centric network. In Proceedings of the 2019 IEEE 11th International Conference on Communication Software and Networks (ICCSN), Chongqing, China, 12–15 June 2019. [Google Scholar]
  116. Qin, Y.; Yang, W.; Liu, W. A probability-based caching strategy with consistent hash in named data networking. In Proceedings of the 2018 1st IEEE International Conference on Hot Information-Centric Networking (HotICN), Shenzhen, China, 15–17 August 2018. [Google Scholar]
  117. Mishra, S.; Jain, V.K.; Gyoda, K.; Jain, S. A novel content eviction strategy to retain vital contents in NDN-IoT networks. Wirel. Netw. 2024, 31, 2327–2349. [Google Scholar] [CrossRef]
  118. Iqbal, S.M.A.; Asaduzzaman. Cache-MAB: A reinforcement learning-based hybrid caching scheme in named data networks. Future Gener. Comput. Syst. 2023, 147, 163–178. [Google Scholar] [CrossRef]
  119. Safitri, C.; Mandala, R.; Nguyen, Q.N.; Sato, T. Artificial Intelligence Approach for Name Classification in Information-Centric Networking-based Internet of Things. In Proceedings of the 2020 IEEE International Conference on Sustainable Engineering and Creative Computing (ICSECC), Cikarang, Indonesia, 16–17 December 2020; pp. 158–163. [Google Scholar]
  120. Meng, Y.; Ahmad, A.B. Performance Measurement Through Caching in Named Data Networking Based Internet of Things. IEEE Access 2023, 11, 120569–120584. [Google Scholar] [CrossRef]
  121. Afanasyev, A.; Burke, J.; Refaei, T.; Wang, L.; Zhang, B.; Zhang, L. A Brief Introduction to Named Data Networking. In Proceedings of the IEEE Military Communications Conference MILCOM, Los Angeles, CA, USA, 29–31 October 2018; pp. 605–611. [Google Scholar]
  122. Afanasyev, A.; Moiseenko, I.; Zhang, L. ndnSIM: NDN Simulator for NS-3. 2012. Available online: https://www.researchgate.net/publication/265359778_ndnSIM_ndn_simulator_for_NS-3 (accessed on 20 April 2025).
  123. Mesarpe. SocialCCNSim: Social CCN Sim is a CCN Simulator, Which Represents Interaction of Users in a CCN Network. GitHub, 2017. Available online: https://github.com/mesarpe/Socialccnsim (accessed on 21 July 2025).
  124. Saino, L.; Psaras, I.; Pavlou, G. Icarus: A caching simulator for information centric networking (ICN). In Proceedings of the SIMUTools 2014—7th International Conference on Simulation Tools and Techniques, Lisbon, Portugal, 17–19 March 2014. [Google Scholar]
  125. Mininet. Mininet Overview. 2015. Available online: http://mininet.org/overview/ (accessed on 7 May 2025).
  126. Kim, D.; Bi, J.; Vasilakos, A.V.; Yeom, I. Security of cached content in NDN. IEEE Trans. Inf. Forensics Secur. 2017, 12, 2933–2944. [Google Scholar] [CrossRef]
  127. Kumar, N.; Singh, A.K.; Aleem, A.; Srivastava, S. Security attacks in named data networking: A review and research directions. J. Comput. Sci. Technol. 2019, 34, 1319–1350. [Google Scholar] [CrossRef]
  128. Azamuddin, W.M.H.; Aman, A.H.M.; Sallehuddin, H.; Abualsaud, K.; Mansor, N. The Emerging of Named Data Networking: Architecture, Application, and Technology. IEEE Access 2023, 11, 23620–23633. [Google Scholar] [CrossRef]
Figure 1. Comparative diagram illustrating (a) NDN architecture and (b) IP-based architecture.
Figure 2. The basic steps in the caching process.
Figure 3. Caching strategy usage across studies [14,26,27,28,29,30,31,32,33].
Figure 4. The taxonomy of this paper.
Figure 5. Challenges of caching in NDN-based IoT.
Figure 6. Classification of caching strategies in NDN-based IoT.
Figure 7. Popularity-based caching in NDN-based IoT.
Figure 8. Subcategories of popularity-based caching strategies in NDN-based IoT networks.
Figure 9. Freshness-based caching in NDN-based IoT.
Figure 10. Subcategories of freshness-based caching strategies in NDN-based IoT networks.
Figure 11. Cooperative caching in NDN-based IoT.
Figure 12. Subcategories of cooperative caching strategies in NDN-based IoT networks.
Figure 13. Hybrid caching in NDN-based IoT.
Figure 14. Subcategories of hybrid caching strategies in NDN-based IoT networks.
Figure 15. Machine learning-based caching in NDN-based IoT.
Figure 16. Probabilistic caching in NDN-based IoT.
Figure 17. The timeline of NDN-based IoT caching strategies that support future directions between 2020 and 2025.
Table 1. Summary of related surveys.
Study | Year | Number of strategies: Collaborative/Cooperative Caching | Freshness-Based Caching | Hybrid Caching | Machine Learning-Based Caching | Popularity-Based Caching | Probabilistic Caching
[14] Khalid2024448 3
[27] Naeem2024 3 2
[28] Meng2024 15
[29] C.N202313 7113
[26] Zhang2023 5 84
[30] Alubady2023 1112
[31] Serhane20203 31
[32] Aboodi2019 2 21
[33] Din2018 6
Percentage | 19.23 | 11.53 | 11.53 | 15.38 | 32.69 | 9.61
Table 2. List of key abbreviations used in this paper.
Abbreviation: Full Form
AA: Always Active
ABC: Approximate Betweenness Centrality
AI: Artificial intelligence
ANNs: Artificial neural networks
ARMA: Autoregressive Moving Average
ARs: Adaptive Routers
BF: Bloom filter
CCC: Centrally Controlled Caching
CCN: Content-centric network
CCN-WSNs: Collaborative Caching Strategy for Content-Centric Enabled Wireless Sensor Networks
CCS: Client Cache Strategy
CEE: Cache Everything Everywhere
CES: Caching At the Edge Strategy
CFPC: Caching strategy for freshness and popularity content
CHR: Cache hit rate
CL4M: Cache Less for More
CoCa: Cooperative caching in ICN
CPCCS: Cluster-Based Popularity and Centrality-Aware Caching Strategy
CS: Content Store
CSDD: Caching Strategy Distance and Degree
CTD: Caching Transient Data
DNNs: Deep neural networks
DPWCS: Dynamic Popularity Window-Based Caching Scheme
DQL: Deep Q-Learning
DQNs: Deep Q-Networks
EABC: Energy-Aware Centrality-Based Caching
EC: Edge computing
EPPC: Efficient popularity-aware probabilistic caching
FIB: Forwarding Information Base
FIFO: First In First Out
GAs: Genetic algorithms
ICANETs: Information-Centric ad hoc Networks
ICN: Information-centric networking
ICWSNs: Information-centric wireless sensor networks
IIoT: Industrial IoT
IoT: Internet of Things
IP: Internet Protocol
KNNs: K-Nearest Neighbors
LCC: Lifetime Cooperative Caching
LCD: Leave Copy Down
LCE: Leave Copy Everywhere
LFF: Least Fresh First
LFU: Least Frequently Used
LPC: Less popular content
LPF: Least Popular First
LRU: Least Recently Used
LSTM: Long short-term memory
M2M: Machine-to-machine
MAGIC: Max-gain in-network caching
MANET: Mobile Ad Hoc Network
MDMR: Max Diversity Most Recent
MDP: Markov Decision Process
ML: Machine learning
MPC: Most popular content
MRPGA: Multi-Round Parallel Genetic Algorithm
NDN: Named Data Networking
ndnSIM: Named Data Networking Simulator
NMF: Non-Negative Matrix Factorization
NS-3: Network Simulator-3
OPC: Optimal popular content
PACC: Popularity-Aware Closeness Centrality
PBRS: Popularity and betweenness-based replacement scheme
PCCM: Popularity-Based Cache Consistency Management
PCS: Periodic caching strategy
PIT: Pending Interest Table
PoSiF: Popularity, size, and freshness-based
QoE: Quality of Experience
QoS: Quality of service
RARS: Resource adaptation resolving server
RC: Random Caching
RL: Reinforcement learning
RNNs: Recurrent neural networks
RTT: Round Trip Time
SBPC: Software-defined probabilistic caching
SCT: Smart caching
SDN: Software-defined networking
SMCC: Cooperative multi-hop caching
SUR: Storage resource utilization
TCM: Traffic-aware caching mechanism
TCS: Tag-Based Caching Strategy
TTL: Time-to-live
VANETs: Vehicular Ad Hoc Networks
VLRU: Variable Least Recently Used
WAVE: Weighted Popularity
Table 13. Summary of probabilistic caching strategies.

[114] SBPC (software-defined probabilistic caching)
Key features: integrates software-defined networking (SDN) with CCN; ranks node importance by degree, closeness, betweenness, and eigenvector centrality; matches node importance with content popularity.
Evaluation and results: outperforms LCE, ProbCache, Prob(0.5), and Betw; higher CHR; fewer acquisition hops; lower request delay; scales well with content popularity skewness (Zipf α).
Tool: standard CCN simulation (50 nodes, 156 links; Zipf and Poisson distributions).

[115] Prob-CH (probabilistic caching + consistent hashing)
Key features: uses consistent hashing to distribute caching responsibility; prevents cache redundancy and ensures balanced caching.
Evaluation and results: higher CHR; lower average hop count; reduced server load compared with probabilistic caching (p = 0.5) and deterministic caching.
Tool: ndnSIM (NS-3).
Table 14. Formula comparisons for probabilistic caching strategies.

[114]:
P_k = f_k / N, where f_k is the frequency of requests for content k over a fixed window of N requests.
C_{k,i} = P_k · I_i if (P_k · I_i) / I_max < ε, and 0 otherwise, where ε is the adjustment factor and I_max is the maximum node importance on the path.
δ = α · H · T, where α is the link idle rate, H is the hop count, and T is the transmission delay.

[115]:
Cache a content item if h(ContentName) ≤ p, where h(ContentName) is the normalized hash value (between 0 and 1) of the content name and p is the pre-defined caching probability.
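The rules in Table 14 are compact enough to sketch in code. The snippet below is an illustrative reconstruction, not code from the cited papers: `popularity_share` computes P_k = f_k / N, `sbpc_cache_value` applies SBPC's product-and-threshold rule C_{k,i}, and `hash_cache_decision` applies the hash-threshold test h(ContentName) ≤ p. All function names are ours, and SHA-256 is our choice of hash; the strategies do not mandate a particular hash function.

```python
import hashlib

def popularity_share(request_counts: dict, name: str, window: int) -> float:
    """P_k = f_k / N: fraction of the last N requests that asked for `name`."""
    return request_counts.get(name, 0) / window

def sbpc_cache_value(p_k: float, importance: float, i_max: float, eps: float) -> float:
    """SBPC-style rule: keep the product P_k * I_i only while it stays below
    the threshold eps relative to the maximum importance I_max on the path."""
    value = p_k * importance
    return value if value / i_max < eps else 0.0

def hash_cache_decision(content_name: str, p: float) -> bool:
    """Prob-CH-style test: cache iff the normalized hash of the content
    name is at most the preconfigured probability p."""
    digest = int(hashlib.sha256(content_name.encode("utf-8")).hexdigest(), 16)
    h_norm = digest / float(2 ** 256)  # map the digest into [0, 1)
    return h_norm <= p
```

Because the hash of a given name is fixed, every router applying the same rule reaches the same decision for that name, which is how consistent hashing spreads caching responsibility across nodes without explicit coordination.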
Table 15. Critical analysis of NDN-based IoT caching strategies.
Popularity-Based Caching
Application domains: smart homes; industrial monitoring; static IoT deployments; environments with consistent access patterns.
Theories (assumptions): stable content access patterns; sufficient local memory and processing; historical data predicts future demand; relatively static network topology.
Strengths: high cache hit ratio for stable patterns; significant latency reduction; effective for predictable content demand; well-suited to static environments.
Critical weaknesses: poor performance in dynamic environments; outdated content remains cached; requires local computation of popularity; assumes stable access patterns.
Performance metrics: CHR: high; latency: low; energy: medium; scalability: medium.
Tools: ndnSIM, SocialCCNSim, Icarus.

Freshness-Based Caching
Application domains: health monitoring; environmental sensing; smart traffic systems; time-critical applications.
Theories (assumptions): content tagged with freshness indicators; sufficient network resources for validation; stale data more harmful than cache misses; regular content updates available.
Strengths: ensures content validity; supports real-time applications; high CHR with valid content; reduces energy via proactive eviction.
Critical weaknesses: frequent updates consume bandwidth; high control overhead; reduced caching efficiency; energy consumption for validation.
Performance metrics: CHR: high; latency: low; energy: medium; scalability: medium.
Tool: ndnSIM.

Collaborative/Cooperative Caching
Application domains: smart campuses; smart cities; industrial IoT setups; dense, well-connected networks.
Theories (assumptions): stable communication between peers; trust relationships or common protocols; sufficient bandwidth for coordination; redundancy is undesirable.
Strengths: enhanced performance through sharing; significant energy savings; avoids duplicate caching; excellent scalability in dense networks.
Critical weaknesses: complex coordination mechanisms; high signaling overhead; requires trust between devices; impractical in heterogeneous networks.
Performance metrics: CHR: high; latency: low; energy: high; scalability: high.
Tools: ndnSIM, Icarus.

Hybrid Caching
Application domains: diverse IoT environments; edge-cloud networks; mixed content types; adaptive systems.
Theories (assumptions): multiple content types present; heterogeneous node capabilities; network supports lightweight intelligence; no single policy is sufficient.
Strengths: high adaptability; combines multiple mechanisms; excellent scalability; balances diverse objectives.
Critical weaknesses: increased implementation complexity; challenging parameter fine-tuning; potential scalability issues; requires careful coordination management.
Performance metrics: CHR: high; latency: low; energy: high; scalability: high.
Tools: ndnSIM, Icarus.

Machine Learning-Based Caching
Application domains: edge data centers; AI-enabled routers; large-scale networks; high-traffic environments.
Theories (assumptions): sufficient computational capabilities; access to historical training data; stable learning environments; models can generalize well.
Strengths: powerful prediction capabilities; dynamic decision-making; significant CHR improvements; optimizes energy consumption.
Critical weaknesses: high computational cost; a barrier for real-time deployment; requires sufficient training data; may not work in new environments.
Performance metrics: CHR: high; latency: low; energy: high; scalability: medium.
Tools: ndnSIM, Icarus.

Probabilistic Caching
Application domains: large-scale distributed networks; vehicular networks; mobile IoT applications; high content diversity scenarios.
Theories (assumptions): lightweight decisions preferable; detailed metrics too costly to maintain; high content diversity; intermittent connectivity is acceptable.
Strengths: simple implementation; excellent load balancing; highly scalable; works in dynamic environments.
Critical weaknesses: suboptimal cache utilization; may miss popular content; can cache rarely accessed content; performance depends on configuration.
Performance metrics: CHR: medium; latency: medium; energy: variable; scalability: high.
Tool: ndnSIM.
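To make the freshness-based trade-offs in Table 15 concrete, the toy cache below combines LRU replacement with a per-item freshness lifetime, in the spirit of TTL-driven eviction. It is a minimal sketch under our own assumptions: the class and parameter names are invented, it is not drawn from any surveyed strategy, and the injectable clock exists only to make expiry easy to demonstrate.

```python
import time
from collections import OrderedDict

class FreshnessLRUCache:
    """Illustrative Content Store: LRU replacement plus a freshness
    lifetime per entry, so stale data is evicted rather than served."""

    def __init__(self, capacity: int, now=time.monotonic):
        self.capacity = capacity
        self.now = now                 # injectable clock, for testing
        self.store = OrderedDict()     # name -> (data, expiry_time)

    def put(self, name, data, lifetime: float):
        """Insert content with an absolute expiry of now + lifetime."""
        self.store[name] = (data, self.now() + lifetime)
        self.store.move_to_end(name)
        while len(self.store) > self.capacity:
            self.store.popitem(last=False)   # evict least-recently-used

    def get(self, name):
        """Return fresh content, or None on a miss or a stale entry."""
        item = self.store.get(name)
        if item is None:
            return None
        data, expiry = item
        if self.now() > expiry:        # stale: proactive eviction
            del self.store[name]
            return None
        self.store.move_to_end(name)   # refresh recency on a hit
        return data
```

The sketch shows the trade-off noted in the table: proactive eviction of stale entries guarantees validity for time-critical data, but every expiry turns a would-be cache hit into a miss that must be served upstream.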
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

Alahmad, A.A.; Mohd Aman, A.H.; Qamar, F.; Mardini, W. Efficient Caching Strategies in NDN-Enabled IoT Networks: Strategies, Constraints, and Future Directions. Sensors 2025, 25, 5203. https://doi.org/10.3390/s25165203