Article

An Effective Fairness Scheme for Named Data Networking

1 Department of Electrical Engineering, Air University, Aerospace and Aviation Campus, Kamra 43570, Pakistan
2 Faculty of Electrical Engineering, GIK Institute of Engineering Sciences and Technology, Topi 23640, Pakistan
3 Faculty of Computer Science and Engineering, GIK Institute of Engineering Sciences and Technology, Topi 23460, Pakistan
4 Department of Electrical Engineering, City University of Science and Information Technology, Peshawar 25000, Pakistan
5 Department of Mechatronics Engineering, University of Engineering and Technology, Peshawar 25000, Pakistan
6 School of Electrical Engineering, University of Ulsan, Ulsan 44610, Korea
* Author to whom correspondence should be addressed.
Electronics 2020, 9(5), 749; https://doi.org/10.3390/electronics9050749
Submission received: 15 April 2020 / Revised: 28 April 2020 / Accepted: 28 April 2020 / Published: 2 May 2020
(This article belongs to the Section Networks)

Abstract

Named data networking (NDN) is a revolutionary approach to cater for modern and future Internet usage trends. The advancements in web services, social networks and cloud computing have shifted Internet utilization towards information delivery. Information-centric networking (ICN) enables content-awareness in the network layer and adopts name-based routing through the NDN architecture. Data delivery in NDN is receiver-driven and pull-based, governed by requests (interests) sent out by the receiver. The ever-increasing share of high-volume media streams traversing the Internet due to the popularity and availability of video-streaming services can put a strain on network resources and lead to congestion. Since most congestion control techniques proposed for NDN are receiver-based and rely on the users to adjust their interest rates, a fairness scheme needs to be implemented at the intermediate network nodes to ensure that “rogue” users do not monopolize the available network resources. This paper proposes a fairness-based active queue management scheme at network routers that performs per-flow interest rate shaping to ensure fair allocation of resources. Different congestion scenarios for both single path and multipath network topologies have been simulated to test the effectiveness of the proposed fairness scheme. The performance of the scheme is evaluated using Jain’s fairness index as a fairness metric.

1. Introduction

The current Internet architecture has served for decades with many innovations in the layers of the TCP/IP protocol stack. The traditional IP network delivers data packets based on IP addresses (the location of the data) and is completely unaware of the contents of those packets. Most of the Internet traffic today is content-based: YouTube, Netflix, iTunes, Amazon, etc. Furthermore, recent advancements in web services, social networks, and cloud computing have shifted Internet usage towards information delivery, which requires information- and content-aware networks. Information-centric networking (ICN) is a networking paradigm that tries to overcome this mismatch between the current demand of Internet traffic and the network infrastructure by transforming the existing location-based Internet architecture into a content-based one. Numerous content delivery networks (CDNs) have been deployed on top of the existing IP networks to support information delivery. However, CDNs face issues such as efficiency, sustainability, and scalability [1,2].
Named data networking (NDN) is based on the ICN paradigm and employs name-based routing, in which users request content from the network by its “name” [3]. NDN uniquely identifies data packets by their names and stores them in one or more data repositories (sources) as well as in the local caches of in-network nodes [4]. Users request data by generating “interests” containing the names of the desired data objects. These interests are routed by the network to the appropriate data repositories using name-based prefix matching [5]. Content retrieval is pull-based; i.e., data delivery is triggered by corresponding interests, and data packets are pulled down on the reverse path of the routed interests. The stateful, connectionless, and content-aware approach employed by NDN is used in multiple emerging technologies, including the Internet-of-Things (IoT), mobile ad-hoc networks (MANETs), and vehicular ad-hoc networks (VANETs). These technologies make use of NDN’s name-based forwarding and caching to provide new and improved data delivery and to support content-based usage [6,7,8].
Current trends in Internet traffic show that most of the traffic is video-based. It is estimated that by the year 2021, almost 82% of Internet traffic will be based on video [9]. The popularity of video-on-demand, web-TV, and video streaming services (Netflix, Amazon Prime, Hulu, YouTube, etc.) has changed how and why today’s consumers use the Internet. While ICN aims to adapt the Internet to become more suitable for these usage trends, the huge volume of high definition video streams is bound to cause network congestion in the absence of an effective congestion control strategy. The differences between NDN and the current TCP/IP model in terms of routing, forwarding, and transport signify the need for novel and effective mechanisms designed exclusively for the NDN transport in order to address the congestion and fairness related challenges.
The unique transport model of NDN makes it impossible to employ the traditional, time-tested congestion detection, mitigation, and avoidance methods of TCP [10]. The traditional TCP congestion control relies on a connection between two end hosts. The sending host measures the congestion by either calculating round-trip time (RTT) or detecting a packet loss and adjusts its sending rate accordingly. In NDN, different content sources cause retrieval delays for each incoming data packet to vary and the consumer is unable to differentiate between them. Moreover, the in-network interest rate shaping algorithm must perform some kind of fairness-based active queue management, as some uncooperative users might disable their interest rate control and keep sending out interests at a constant rate, resulting in constant bit rate (CBR) flows. These CBR flows are unresponsive towards network congestion and can cause unfair allocation of network resources. The algorithm deployed at the network nodes must limit the interest rates of unresponsive flows to prevent them from hogging the bandwidth.
In this paper, we investigate the lack of fairness caused by unresponsive flows in the network and propose a fairness scheme to ensure fair allocation of network resources in the presence of rogue users; i.e., users that do not comply with interest rate shaping at the receiver end, thereby contributing to congestion in the network. The fairness scheme, implemented at the network routers, adjusts the interest forwarding rates of active flows in order to ensure fair allocation of network resources among all users. The effectiveness of the proposed fairness scheme is evaluated under different scenarios using Jain’s fairness index (JFI).
The remainder of this paper is organized as follows. Section 2 discusses the existing literature on the topic. Section 3 deals with the challenges faced by the novel NDN transport. Section 4 explains the algorithms involved in the proposed fairness scheme. Section 5 shows the effectiveness of the proposed fairness scheme under different congestion scenarios. Section 6 concludes the paper with a brief discussion regarding future research.

2. Related Work

Various congestion control techniques have been proposed in the literature. However, little work has been done to ensure fairness among competing flows under dynamic multisource and multipath scenarios. While some of the proposed techniques do incorporate fairness as an added objective, an evaluation of those techniques exclusively on a “fairness” metric is still missing. Furthermore, the effect of rogue users, i.e., users who decide not to abide by any kind of congestion control mechanism, has not been studied extensively in the literature. There is a need for effective and robust techniques to ease congestion caused by the behavior of such users, and to ensure that other users, who are responsive and adjust their interest generation rates to mitigate congestion, do not face deterioration in quality of experience (QoE). Most of the related work relies solely on receiver-based congestion control, which has its shortcomings, as discussed in [11]. The authors in [11] have thoroughly analyzed and classified the congestion control strategies in NDN. The best approach is a hybrid one that avoids network congestion, utilizes network resources in an optimized fashion, and ensures fairness. Based on where the congestion control mechanism is applied, NDN congestion control mechanisms can be divided into four classes.

2.1. Receiver-Based Control

A simple way to control data traffic is to control the rate of interest packets sent out by the receiver. The interest rate can be controlled by a method similar to that of TCP congestion control [12]. However, TCP congestion control is highly dependent upon the retransmission timeout (RTO) value of the packet, and [13] shows that it is difficult to estimate RTO values correctly in NDN. Moreover, using a single RTO timer does not help with in-network congestion control in NDN. Content-centric TCP (CCTCP) [14] introduces the concept of “anticipated interests” to measure the RTO of packets coming from multiple sources in an adaptive fashion. Remote adaptive active queue management (RAAQM) [15] introduces a probabilistic window decrease: it adopts a window-decrease algorithm which anticipates the decrease in window size by sensing variations in the monitored round-trip delay. In the explicit control protocol (ECP) [16], routers detect the congestion status in the network and feed the information back to end nodes so that receivers can adjust their interest rates. Another receiver-based intelligent congestion control algorithm is the deep reinforcement learning-based congestion control protocol (DRL-CCP) [17]. In DRL-CCP, deep learning and reinforcement training of neural networks determine the evolution of the interest generation window at the receiver side.

2.2. Hop-By-Hop Control

In the hop-by-hop congestion control mechanism, the intermediate nodes, i.e., routers, measure their local congestion status by monitoring the size of incoming data chunk queues and reduce their interest forwarding rates if congestion is detected. Hop-by-hop interest shaping (HoBHIS) [18] allows network nodes to shape the interest rate on the basis of data chunk queue occupancy. Each node detects congestion and calculates an interest shaping rate. The rate is explicitly notified to the clients, and they are not supposed to exceed this advertised value. The effectiveness of this technique depends heavily on the cooperation of end-users. The adaptive congestion control protocol (ACCP) proposed in [19] aims to anticipate congestion before it can affect network performance. ACCP employs deep learning to predict the source of congestion at each network node, and then estimates the measure of congestion by monitoring the average queue length. This estimate is explicitly notified to the receivers, who can adjust their interest generation rates to avoid network congestion. Like ECP, ACCP relies on the responses and cooperation of end-users to effectively avoid congestion in the network.

2.3. Hybrid Control

The receiver-based control requires correct estimation of RTT and RTO values, which is a challenging task in NDN. The hop-by-hop method solves the problem of fairness; however, it does not guarantee optimal performance. Combining the two methods can give better performance against network congestion: not only is congestion detected earlier, but the network nodes can also take quick action to mitigate it before the end-users get the chance to react [11]. Hop-by-hop and receiver-driven interest control protocol (HR-ICP) [20] uses a hybrid approach to control congestion. The interest control protocol (ICP) keeps an AIMD window for interest generation at the receiver end. In addition to ICP, a per-flow interest rate shaping algorithm is implemented at the network nodes to alter the interest forwarding rates of unresponsive flows. Chunk-switched hop pull control protocol (CHoPCoP) [21] uses random early marking (REM) to measure congestion and notify the end users about it. At the receiver end, AIMD decreases the window size to reduce the interest rate when the receiver gets an explicit congestion notification in the form of marked data packets.

2.4. Optimal Control

In NDN, optimal control over routing is very significant due to its multipath and multisource nature. For example, the shortest path routing algorithms can cause most of the traffic to be directed on one link; i.e., the shortest path. This causes the shortest path link to be congested while other paths may remain under-utilized. Therefore, optimal routing and forwarding algorithms are used in order to gain maximum user throughput, avoid network congestion, ensure fairness, and utilize multipaths. In [22], the authors make use of the Fleischer algorithm and random rounding algorithm to propose a near-optimal routing scheme for NDN multipaths which ensures maximum throughput while considering fairness among each source-sink pair. The authors in [23] consider a joint congestion control and interest forwarding problem. They formulate a global optimization problem with the two-fold objective of maximizing user throughput and minimizing overall network cost. The joint optimal congestion control technique, comprising an optimal receiver-driven congestion control (termed interest rate control protocol (IRCP) in this paper) and an optimal request forwarding strategy, is designed as a result.
Performance evaluation of the technique proposed in [23] proves its effectiveness in avoiding congestion and utilizing network resources in an optimized fashion. However, the behavior and performance of the proposed optimal control are not considered in the presence of “rogue” users; i.e., the users who do not abide by the proposed receiver-based congestion control protocol. It can be deduced that the proposed congestion control technique would lose its effectiveness if the receivers were to decide not to obey IRCP and keep generating interests at a constant rate. A congestion collapse becomes inevitable when this happens. Even a small number of rogue users can cause a serious fairness issue and keep “obedient” IRCP-following users from using their fair share of network resources (as seen in Section 5). The fairness scheme proposed in Section 4 keeps that from happening. It not only prevents congestion collapse in the presence of rogue users but also ensures fairness among active flows when the network is facing congestion.

3. Challenges of NDN Transport

NDN employs a unique transport model which makes it suitable for content-oriented and information-centric data delivery. Therefore, the existing congestion avoidance and fairness techniques for the TCP/IP model cannot work in the NDN domain, and there is a need for developing effective solutions to address issues that are exclusive to NDN.

3.1. Connectionless Transport

To address the mismatch that exists between current location addressing and location-agnostic “name addressing” for information-centric delivery, NDN adopts connection-less content retrieval which makes its transport fundamentally different from that of the TCP/IP model.

3.1.1. System Description

NDN introduces stateful, name-based routing and forwarding for content and information delivery. Each in-network node maintains three structures: (i) a content store (CS), (ii) a pending interest table (PIT), and (iii) a forwarding information base (FIB). NDN routers temporarily store copies of the data objects being requested by the end-users in their content stores. Users request data packets by sending out requests, namely, “interest packets”, for corresponding content. If the requested content can be retrieved locally from the CS, the incoming interest is not forwarded upstream. Record of outstanding interests is kept in the PIT which stores the names of the requested content and the incoming interfaces of the corresponding interest packets. If an incoming interest contains the name for which there is already an outstanding entry in the PIT, the interest is not forwarded upstream and the incoming interface is added against the PIT entry of that particular name. If there is no PIT entry for the requested content, the interest is forwarded upstream according to the forwarding information available in the router’s FIB. Figure 1 describes how an interest packet is processed at an NDN router.
Data packets make their way to the end-users by traversing the reverse path followed by their corresponding interest packets. When a data packet arrives at an NDN router, the router performs a PIT lookup. If there is an outstanding entry for that name in the PIT, the data packet is forwarded to all the interfaces that originated the request; otherwise it is discarded. A copy of the packet is also stored temporarily in the router’s CS. Figure 2 explains how a data packet is processed at an NDN router.
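To make the forwarding pipeline above concrete, the following is a minimal, illustrative sketch of an NDN router holding the three structures (CS, PIT, FIB) and processing interest and data packets as in Figure 1 and Figure 2. It is a simplification for exposition only (exact-match lookups instead of longest-prefix matching, no cache eviction, no PIT timers) and is not the ndnSIM implementation; all type and function names are our own assumptions.

```cpp
#include <iostream>
#include <map>
#include <set>
#include <string>

// A minimal, illustrative NDN router with the three structures described above:
// content store (CS), pending interest table (PIT), forwarding information base (FIB).
struct NdnRouter {
    std::map<std::string, std::string> cs;        // name -> cached data object
    std::map<std::string, std::set<int>> pit;     // name -> faces awaiting the data
    std::map<std::string, int> fib;               // name -> upstream face

    // Interest processing (cf. Figure 1): CS lookup, PIT aggregation, FIB forwarding.
    void onInterest(const std::string& name, int inFace) {
        auto cached = cs.find(name);
        if (cached != cs.end()) {                 // cache hit: answer locally
            sendData(name, cached->second, inFace);
            return;
        }
        auto pending = pit.find(name);
        if (pending != pit.end()) {               // outstanding request: aggregate only
            pending->second.insert(inFace);
            return;
        }
        pit[name].insert(inFace);                 // create a new PIT entry
        auto route = fib.find(name);
        if (route != fib.end())
            forwardInterest(name, route->second); // forward upstream per FIB
        // otherwise the interest is dropped (no route)
    }

    // Data processing (cf. Figure 2): PIT lookup, fan-out to requesters, cache, cleanup.
    void onData(const std::string& name, const std::string& data) {
        auto pending = pit.find(name);
        if (pending == pit.end()) return;         // unsolicited data is discarded
        for (int face : pending->second)
            sendData(name, data, face);           // follow the reverse path(s)
        cs[name] = data;                          // keep a temporary copy in the CS
        pit.erase(pending);                       // the PIT entry is now satisfied
    }

    void sendData(const std::string& name, const std::string&, int face) {
        std::cout << "Data " << name << " -> face " << face << "\n";
    }
    void forwardInterest(const std::string& name, int face) {
        std::cout << "Interest " << name << " -> face " << face << "\n";
    }
};

int main() {
    NdnRouter r;
    r.fib["ndn://netflix.com/object1"] = 2;
    r.onInterest("ndn://netflix.com/object1", 0);  // forwarded upstream, PIT entry created
    r.onInterest("ndn://netflix.com/object1", 1);  // aggregated in the PIT, not forwarded
    r.onData("ndn://netflix.com/object1", "chunk");// delivered to faces 0 and 1, cached
    r.onInterest("ndn://netflix.com/object1", 3);  // now satisfied locally from the CS
}
```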
This novel method of data transmission is completely different from the one employed by the current Internet. There is no separate “transport” layer in NDN which could mitigate and avoid network congestion [24]. Therefore, transport challenges faced by NDN can be best resolved by making innovations in its all-encompassing routing and forwarding plane.

3.1.2. The Multisource and Multipath Nature of NDN Transport

In NDN, different content sources cause retrieval delay for each incoming data packet to vary and the consumer is unable to differentiate between them. Hence, the traditional RTT-based congestion detection methods become unreliable indicators of congestion. The concept of end-to-end connections does not apply to NDN due to its multisource nature; i.e., data can come from a repository or from one or more local caches of intermediate nodes. The multisource nature of ICN can cause data to be received from multiple sources on multiple paths of varying lengths, which leads to a multipath problem. Effective caching is another important area that could solve some of the challenges faced by the NDN transport. Significant research is being carried out on the caching issues of ICN [25,26,27,28].
Data flow in NDN is governed by interest packets. Hence, by regulating the rate of interest packets, congestion can be avoided and controlled. Moreover, it is beneficial to drop interest packets instead of data packets, as they are smaller in size, and thus network resources can be saved if an interest packet is dropped early. A TCP-like AIMD-based interest window can be maintained at the receiver side (end-users are termed “receivers” in NDN as they receive the data that they requested via interests) which would govern the users’ interest generation rates. Since data packets belonging to the same flow can come from multiple sources and traverse multiple paths, accumulated delays for all routes need to be considered for the calculation of round-trip delay per flow. The receiver-based IRCP used in this work incorporates route labels to uniquely identify multiple routes and calculates their associated delays. Thus, a per-flow estimation of round-trip delay helps in the evolution of interest generation window for that particular flow.
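As an illustration of this receiver-side mechanism, the sketch below keeps an AIMD interest window per flow together with a smoothed round-trip estimate per route label, so that data arriving from different sources or paths does not distort a single timer. The growth and decrease constants and the EWMA weights are illustrative assumptions; this sketch does not reproduce the IRCP of [23].

```cpp
#include <algorithm>
#include <map>
#include <string>

// Illustrative receiver-side AIMD interest window for one flow (name prefix),
// with a smoothed round-trip estimate kept separately for each route label.
class AimdInterestWindow {
public:
    // A data packet for this flow arrived over the route identified by routeLabel.
    void onData(const std::string& routeLabel, double rttSample) {
        double& srtt = srttByRoute_[routeLabel];
        srtt = (srtt == 0.0) ? rttSample : 0.875 * srtt + 0.125 * rttSample; // EWMA
        window_ += 1.0 / window_;               // additive increase, ~1 per window
    }

    // A per-flow timeout or congestion mark was detected.
    void onCongestion() {
        window_ = std::max(1.0, window_ / 2.0); // multiplicative decrease
    }

    // Number of interests the receiver may keep outstanding for this flow.
    int allowance() const { return static_cast<int>(window_); }

    double srtt(const std::string& routeLabel) const {
        auto it = srttByRoute_.find(routeLabel);
        return it == srttByRoute_.end() ? 0.0 : it->second;
    }

private:
    double window_ = 1.0;                       // interests allowed in flight
    std::map<std::string, double> srttByRoute_; // per-route smoothed RTT
};
```

The receiver would issue new interests whenever the number in flight falls below allowance(), and call onCongestion() on a per-flow timeout or explicit congestion signal.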

3.2. Load Balancing

The multipath nature of NDN transport needs to be considered while developing an effective congestion control protocol. Even multipath versions of TCP (e.g., multipath TCP [29]) cannot address this multipath problem effectively, as they consider predetermined routes for a specific connection, while in NDN, multipaths can change dynamically. Therefore, an effective load-balancing and interest forwarding strategy needs to be employed at in-network nodes which can make on-the-go decisions for forwarding interests to optimally utilize the available multipaths and avoid network congestion. In this paper, the optimal request forwarding algorithm designed in [23] is implemented at the network nodes to optimally forward interest packets on all available paths. This distributed load-balancing scheme is applied on a per-interest basis using local forwarding statistics. No additional lookups on the FIB are required for maintaining these statistics. The computational complexity of the request forwarding algorithm is given by the complexity needed to update the local statistics, which is O(1) [23].
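For intuition only, the fragment below shows how a per-interest forwarding decision can be driven by O(1) local statistics, here simply picking the FIB candidate face with the fewest locally pending interests. This is not the optimal forwarding strategy of [23]; the data structure and names are our own illustrative assumptions.

```cpp
#include <cstddef>
#include <limits>
#include <vector>

// Illustrative per-interest load balancer: among the faces that the FIB lists for a
// prefix, pick the one with the fewest locally pending interests. The statistic is
// updated in place, so no extra FIB lookup is needed for the decision.
struct FaceStats {
    int faceId = 0;
    std::size_t pending = 0;  // interests forwarded on this face and not yet satisfied
};

int selectFace(std::vector<FaceStats>& candidates) {
    std::size_t best = std::numeric_limits<std::size_t>::max();
    int chosen = -1;
    for (const auto& f : candidates)
        if (f.pending < best) { best = f.pending; chosen = f.faceId; }
    for (auto& f : candidates)                 // record the choice in the local statistic
        if (f.faceId == chosen) { ++f.pending; break; }
    return chosen;                             // -1 if the FIB offered no candidate face
}
```

The pending counter for a face would be decremented when the corresponding data packet (or a timeout) comes back on that face.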

3.3. Fairness

Congestion avoidance in NDN mainly depends on the receivers and their cooperation in adjusting their interest generation rates. However, if some of the users choose not to cooperate and keep sending out interests at a constant rate (much like UDP/CBR), it can give rise to a situation where cooperating users always see the network being congested and their throughput suffers because of their adaptive AIMD-based interest generation windows. This way, transmission of obedient and cooperative flows, i.e., IRCP flows, is throttled while CBR flows hog most of the available bandwidth. To avoid this unfair allocation of network resources, the in-network nodes need to implement some kind of interest-rate shaping that restrains the interest packets of the uncooperative flows (such as CBR or IRCP-disabled flows) from going upstream and alters their interest forwarding rate according to their fair share. A fairness scheme is proposed in Section 4 of this paper to ensure fair resource allocation in the presence of uncooperative flows.

4. Proposed Fairness Scheme

The fairness scheme proposed in this paper works together with the optimal multipath congestion control and request forwarding technique proposed in [23]. To optimally utilize network resources while simultaneously avoiding network congestion, receivers keep an AIMD-based interest generation window to adapt their interest generation rate accordingly. Therefore, flows originating from these cooperative users are responsive to network congestion and help in mitigating congestion by adjusting their interest generation rates. These flows are termed IRCP flows in this paper. Furthermore, the network routers employ an optimal request forwarding scheme to perform efficient load-balancing. Together, the three elements make for a comprehensive “hybrid” congestion control protocol, which can address the multiple challenges of NDN transport.
The scheme is deployed at the intermediate routers. Thus, in case of network congestion, in-network nodes can shape the interest forwarding rates of active flows. This shaping is done on the basis of fair allocation of link bandwidth among all the active flows. Flows are distinguished as “rogue” and “obedient” flows on-the-fly, “rogue” flows being the flows that are exceeding their fair share of network resources. Uncooperative flows that do not abide by the IRCP, or flows that are unresponsive to network congestion such as CBR flows are marked as “rogue” and interest packets belonging to these flows are inserted in an interest rate shaping queue.
Each in-network node measures its congestion status by monitoring the size of the data chunk queue, and if the network is congested, each node performs per-flow interest rate shaping on the fly to respond quickly to network congestion. This way, the scheme works independently from the receiver-based congestion control protocols. The interest rate shaping algorithm also ensures per-flow fairness by using a stateful approach to keep a record of active flows and assign them their fair share. This is done by keeping a virtual queue and an associated data counter for each active flow. All the interest packets belonging to the same flow (same name prefix) are queued in that virtual queue. Upon congestion, the flows exceeding their fair share (rogue flows) are throttled by altering their interest forwarding rates. Thus, rogue users are restricted to using only their fair share of the link bandwidth and cannot tap into the share of other “obedient” users. The performance of this fairness scheme is evaluated in Section 5.
Algorithm 1 summarizes the steps involved in the fairness-based interest rate shaping scheme. The scheme only comes into play when the network is congested, which is determined by monitoring the queue size of the routers’ incoming data buffers. Upon reception of interest packets, CS and PIT lookups are performed on the names of the requested data objects. If there is no entry in the PIT for the incoming request and the request cannot be fulfilled locally through the router’s CS (the conditions termed PIT_miss and Cache_miss, respectively), the interest packet needs to be forwarded upstream. Incoming interests are queued in virtual queues based on their “name” fields; i.e., interests belonging to the same flow are queued in that flow’s virtual queue (Enqueue). Unique name prefix entries in the PIT dictate the number of active flows. Each flow (unique name prefix) has its virtual queue and a corresponding “counter”, which dictates the amount of data (in MB) that the flow is allowed to transmit in a certain time interval T. Interest packets belonging to an obedient flow are fetched from the flow’s virtual queue using Dequeue and are forwarded upstream without any delay. The flow’s counter is decremented by the size of the requested data packet each time an interest belonging to that particular flow is forwarded upstream (Counter_Decrease). This is done by employing a “data size” field in the interest packets, which indicates the size of the data chunk being requested. If a flow’s counter reaches zero, meaning that the flow has already transmitted its allocated share, the flow is marked Rogue. The interest packets belonging to a rogue flow are inserted in the interest rate shaping queue (Rate_Shaping_Queue_Insert). The Throttle function makes sure that the interests belonging to a rogue flow are forwarded only when the counter value for that particular flow is non-zero; i.e., once the counter for that flow is reset after some time, thereby throttling the data transmission rates of the unresponsive flows. Rate_Shaping_Queue_Fetch retrieves the interests at the head of the interest rate shaping queue. These interests are forwarded upstream as long as the counter value stays above zero. Thus, each active flow gets to transmit the same amount of data in a specific time interval under congestion. Data counters for all active flows are maintained according to Algorithm 2.
Algorithm 1 Fairness-based interest rate shaping.
 1: Precondition: There is congestion on the outgoing link
 2: At Interest_Packet_Reception(name, face_in)
 3: if (Cache_miss and PIT_miss) then
 4:   Enqueue(interest, name)
 5:   Rogue[name] = (Counter[name] ≤ 0)
 6:   if (Rogue[name]) then
 7:     Rate_Shaping_Queue_Insert(interest, name)
 8:     Throttle()
 9:   else
10:     interest = Dequeue(name)
11:     Forward(interest)
12:     Counter_Decrease(name, interest.data_size)
13:   end if
14: else
15:   Standard_NDN_Processing
16: end if
17: procedure Throttle
18:   while (Counter[name] > 0) do
19:     interest = Rate_Shaping_Queue_Fetch(i)
20:     Forward(interest)
21:     Counter_Decrease(name, interest.data_size)
22:     i ← next_i
23:   end while
24: end procedure
Algorithm 2 Per-flow data rate regulation.
1: if (New_Flow or Timeout) then
2:   Check PIT for active flows
3:   fair_share ← Fair_Share_Update(flows, bandwidth)
4:   for all active flows do
5:     Update_Counters(fair_share)
6:   end for
7: end if
Network routers follow a stateful approach for per-flow data rate regulation by monitoring PIT entries. The bandwidth of the outgoing link, through which the data packets are “pulled” against their corresponding interests, is divided equally among the active flows. The counter value is essentially the allotted fair share, which the flows are not allowed to exceed within a given time interval T. If there is a new flow, the fair share value is updated for all active flows and the counters are set to the new value; otherwise, the counter values for all active flows are reset periodically.
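A compact sketch of the per-flow logic of Algorithms 1 and 2 is given below. For illustration it assumes a single shaping queue that is drained whenever the counters are refreshed, and it omits the per-flow virtual queues, the PIT/CS lookups, and the timer details; the class, member, and constant names (including the per-interval link budget) are our assumptions, not the authors’ implementation.

```cpp
#include <cstddef>
#include <deque>
#include <map>
#include <set>
#include <string>

// Illustrative per-flow interest rate shaper following Algorithms 1 and 2.
// Each flow (name prefix) has a data counter holding its remaining fair share
// (bytes per interval T); interests of flows whose counter is exhausted ("rogue")
// are held in a shaping queue and released after the counters are refreshed.
struct Interest { std::string prefix; std::size_t dataSize; };

class FairnessShaper {
public:
    // Algorithm 1: called for an interest that missed both CS and PIT while the
    // outgoing link is congested.
    void onInterest(const Interest& i) {
        if (activeFlows_.insert(i.prefix).second)
            refreshCounters();                    // a new flow triggers a fair-share update
        if (counters_[i.prefix] <= 0) {
            shapingQueue_.push_back(i);           // rogue flow: delay the interest
        } else {
            forward(i);
            counters_[i.prefix] -= static_cast<long>(i.dataSize);
        }
    }

    // Algorithm 2: new-flow or periodic (interval T) counter refresh. The link budget
    // is split equally among active flows, then queued interests regain credit.
    void refreshCounters() {
        std::size_t flows = activeFlows_.size();
        if (flows == 0) return;
        long fairShare = static_cast<long>(linkBytesPerInterval_ / flows);
        for (const auto& prefix : activeFlows_) counters_[prefix] = fairShare;
        throttleDrain();
    }

private:
    // Throttle: forward queued interests while their flow's counter stays above zero.
    void throttleDrain() {
        std::deque<Interest> still;
        for (const auto& i : shapingQueue_) {
            if (counters_[i.prefix] > 0) {
                forward(i);
                counters_[i.prefix] -= static_cast<long>(i.dataSize);
            } else {
                still.push_back(i);
            }
        }
        shapingQueue_.swap(still);
    }

    void forward(const Interest&) { /* hand over to the NDN forwarding pipeline */ }

    std::set<std::string> activeFlows_;           // in the paper: unique PIT prefixes
    std::map<std::string, long> counters_;        // remaining fair share per flow
    std::deque<Interest> shapingQueue_;           // interests of rogue flows
    std::size_t linkBytesPerInterval_ = 5000000;  // illustrative link budget per interval T
};
```

In a full deployment, onInterest() would be reached only for interests that miss both the CS and the PIT while the outgoing link is congested, and refreshCounters() would additionally be driven by a periodic timer with interval T.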

5. Performance Evaluation

In this section, we evaluate the performance of the fairness scheme proposed in Algorithm 1 by simulating different congestion scenarios for both single path and multipath network topologies. The simulation environment chosen for this purpose is the ns-3 based NDN simulator ndnSIM 2.0 [30]. The key performance indicator selected for the evaluation is Jain’s fairness index (JFI) [31].
In all the simulation scenarios discussed in this section, the interest packets of different flows (whether CBR or IRCP) request data objects of the same size. Therefore, the number of interest packets per flow is restricted in a certain time interval instead of the number of allowed bytes, as was the case in Algorithm 2.

5.1. Single Path (Dumbbell Topology)

Consider a simple dumbbell topology as shown in Figure 3, with a bottleneck link of 5 Mbps between R1 and R2. User1 generates interests with the name prefix “ndn://facebook.com/{object1, object2, object3...}” in accordance with the window-based IRCP. User2 generates interests with the name prefix “ndn://netflix.com/{object1, object2, object3...}” at a constant rate, thereby receiving data at a constant bit rate (set to 5 Mbps throughout the simulation). We have simulated this scenario both with and without the proposed fairness scheme deployed at the intermediate routers R1 and R2.
In the absence of a fairness scheme, the CBR flow causes unfair network usage as it occupies almost all of the bottleneck link throughout the simulation and does not let the IRCP flow transmit according to its fair share. The transmission (data reception) rates for both flows are displayed in Figure 4a which clearly shows that the CBR flow is occupying all the bandwidth, resulting in unfair allocation of network resources. When network congestion appears, CBR’s rate drops a little but returns to its maximum rate shortly, forcing the IRCP flow to decrease its transmission rate.
We observe a significantly fairer allocation of the link bandwidth when the fairness scheme is deployed at the intermediate routers, as illustrated in Figure 4b. When congestion occurs, router R1 alters the interest forwarding rate of the CBR flow, thereby regulating its data reception rate. This allows the IRCP flow to receive data at its fair share. CBR transmits at its desired rate initially until the network becomes congested and the proposed fairness scheme comes into action, which consequently reduces CBR’s interest forwarding rate and hence its data reception rate. IRCP gets the chance to transmit according to its fair share, and therefore fairness is achieved.
The performance of the proposed fairness scheme is depicted in Figure 5 using Jain’s fairness index, which is a measure of fairness (or lack thereof) among the active flows in a network. The index ranges between 1/n and 1 (n being the number of active flows), representing the “worst” and “best” fairness, respectively [31]. Since there are only two active flows, “ndn://facebook.com/” and “ndn://netflix.com/”, the fairness index ranges between 0.5 and 1.0 (representing worst and best fairness, respectively). It is evident from the result that the proposed scheme is successful in achieving fairness among the active flows.
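For reference, Jain’s fairness index over n active flows with measured throughputs x_1, …, x_n is defined in [31] as

```latex
J(x_1,\dots,x_n) \;=\; \frac{\left(\sum_{i=1}^{n} x_i\right)^{2}}{n \sum_{i=1}^{n} x_i^{2}},
\qquad \frac{1}{n} \;\le\; J \;\le\; 1 .
```

Equal per-flow throughputs give J = 1, while a single flow taking the entire bottleneck gives J = 1/n, i.e., 0.5 for the two flows in this scenario.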

5.2. Single Path (Parking Lot Topology)

We now evaluate the performance of our proposed scheme in a different scenario. Consider the parking lot topology depicted in Figure 6, with a bottleneck link of 10 Mbps between R3 and R4 while all the remaining links are 20 Mbps each. This scenario consists of six unique active flows; i.e., six users requesting content (data objects) with different name prefixes. Name prefixes and flow types for different users in this topology are given in Table 1. The bottleneck link is being utilized in both directions. Three of the flows (User1, User2, and User3) compete for their share of the link bandwidth from R4 to R3 while the other three flows (User4, User5, and User6) utilize the link from R3 to R4. The data reception rates for all the users are shown in Figure 7.
The simulation result in Figure 7a illustrates that the CBR flows originating from User1 and User4 start transmitting at a higher rate, occupying more than their fair share of the bandwidth, but the fairness scheme, deployed at the intermediate routers R3 and R4, forces them to reduce their transmission rates, consequently allowing the IRCP flows (originating from User2, User3, User5, and User6) to transmit their fair share. Moreover, it is evident from Figure 7b that the bottleneck link is being shared fairly among all the network users. There are three flows competing for the bandwidth in either direction on the bottleneck link. Therefore, the lowest possible value of the JFI is 1/3, i.e., 0.33, which represents the “worst fairness” among any three users competing for the link bandwidth in either direction. It is evident from the result that the fairness indices for both directions remain far above the worst value, and near-ideal fairness is achieved and maintained throughout the simulation. This is because of the highly effective fairness scheme incorporated at the network routers, which tries to ensure that all the competing flows get to transmit the same amount of data in a specific time interval.

5.3. Multipath Scenario

The fairness scheme is also tested and evaluated in a multisource, multipath scenario. Consider the topology in Figure 8, where two different types of flows, i.e., CBR and IRCP, are being used to retrieve content from multiple sources.
The CBR flow generates interests with the name prefix “ndn://netflix.com/” and is set to receive data at 30 Mbps, while the IRCP flow requests data with the name prefix “ndn://youtube.com/”. There are two sources (data servers) for each flow, and both can satisfy incoming interest packets by providing the corresponding data. An optimal load balancing technique, proposed in [23], is active on the network routers. The routers can utilize all the available multipaths by optimally selecting the outgoing links for the forwarding of interest packets. The bottleneck links (1, 2) and (1, 3) give rise to congestion in the network, thereby causing the fairness algorithm to become active and perform per-flow interest rate shaping. In this scenario, data packets from both the CBR and IRCP flows compete for their share of the bandwidth on these bottleneck links. Data reception rates for both flows are probed at the respective receivers’ ends in order to measure their throughput.
In the absence of a fairness scheme, CBR flows monopolize the available bandwidth and keep IRCP flows from transmitting at their fair share. However, in this case, our interest rate shaping algorithm ensures that both flows get to transmit at approximately the same rate. The congestion caused by the combination of the high data reception rate of the CBR flow and the existence of bottlenecks in the network provokes the intermediate routers to adjust the forwarding rate of interest packets belonging to the CBR flow. Consequently, both flows end up receiving data at approximately the same rate. The data reception rates in Figure 9 show that both the CBR and IRCP users receive data at approximately 15 Mbps (their fair share). The CBR flow, which was originally intended to transmit at around 30 Mbps, has been confined to around 15 Mbps to ensure fairness.
Figure 10 demonstrates that the proposed fairness scheme is equally effective and suitable for multi-source, multipath NDN transport. Throughout the simulation, JFI remains close to 1; i.e., the “best fairness” value. This proves that both the flows are sharing network resources in a fair manner.

6. Conclusions

In this paper, we propose a lightweight, effective, and robust fairness scheme for NDN which is deployed at the network nodes. Upon detecting congestion, routers can perform interest rate shaping locally and independently before the receiver-based algorithms get the chance to react. Moreover, uncooperative users are discouraged, as the interest forwarding rate of unresponsive flows is throttled during congestion scenarios to ensure fairness. An ns-3 based simulator, ndnSIM, is used to simulate desired congestion conditions in order to monitor the performance of the proposed fairness scheme under different scenarios. Fairness evaluation proves that our proposed scheme is quite effective in ensuring fair resource allocation among users.
The proposed fairness scheme is designed to work together with an effective receiver-based interest control algorithm that can address the multisource and multipath nature of NDN. Additionally, an optimal request forwarding strategy to balance traffic on all available multipaths is also required for developing a comprehensive congestion control protocol for NDN. The effectiveness of the receiver-based interest rate control protocol and the request forwarding strategy used in this paper is well established in the literature. However, a thorough study is required on the effectiveness of this comprehensive congestion control protocol. In the future, we aim to investigate the efficacy of the proposed integrated approach in mitigating and avoiding network congestion. Furthermore, we also plan to carry out a scalability study of the proposed fairness scheme on a large-scale experimental testbed as an extension to this work.

Author Contributions

Conceptualization, H.Z., Z.H.A., and G.A.; methodology, H.Z., Z.H.A., and G.A.; software, H.Z. and G.A.; validation, Z.H.A. and G.A.; formal analysis, H.Z., Z.H.A., and G.A.; investigation, H.Z., Z.H.A., and G.A.; data curation, H.Z.; writing—original draft preparation, H.Z.; writing—review and editing, Z.H.A., G.A., F.M., and M.T.; visualization, H.Z.; supervision, Z.H.A. and G.A.; project administration, F.M. and S.K.; funding acquisition, S.K. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Research Program through the National Research Foundation of Korea (NRF-2019R1A2C1005920).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Kilanioti, I.; Fernández-Montes, A.; Fernández-Cerero, D.; Karageorgos, A.; Mettouris, C.; Nejkovic, V.; Albanis, N.; Bashroush, R.; Papadopoulos, G.A. Towards efficient and scalable data-intensive content delivery: State-of-the-art, issues and challenges. In High-Performance Modelling and Simulation for Big Data Applications; Springer: Berlin, Germany, 2019; pp. 88–137. [Google Scholar]
  2. Zhao, J.; Liang, P.; Liufu, W.; Fan, Z. Recent Developments in Content Delivery Network: A Survey. In International Symposium on Parallel Architectures, Algorithms and Programming; Springer: Berlin, Germany, 2019; pp. 98–106. [Google Scholar]
  3. Jacobson, V.; Smetters, D.K.; Thornton, J.D.; Plass, M.F.; Briggs, N.H.; Braynard, R.L. Networking named content. In Proceedings of the 5th International Conference on Emerging Networking Experiments and Technologies, Rome, Italy, 1–4 December 2009; pp. 1–12. [Google Scholar]
  4. Liu, W.X.; Yu, S.Z.; Gao, Y.; Wu, W.T. Caching efficiency of information-centric networking. IET Netw. 2013, 2, 53–62. [Google Scholar] [CrossRef]
  5. Quan, W.; Xu, C.; Guan, J.; Zhang, H.; Grieco, L.A. Scalable name lookup with adaptive prefix bloom filter for named data networking. IEEE Commun. Lett. 2013, 18, 102–105. [Google Scholar] [CrossRef]
  6. Amadeo, M.; Ruggeri, G.; Campolo, C.; Molinaro, A.; Loscrí, V.; Calafate, C.T. Fog Computing in IoT Smart Environments via Named Data Networking: A Study on Service Orchestration Mechanisms. Future Internet 2019, 11, 222. [Google Scholar] [CrossRef] [Green Version]
  7. Rehman, R.A.; Kim, B.S. LOMCF: Forwarding and caching in named data networking based MANETs. IEEE Trans. Veh. Technol. 2017, 66, 9350–9364. [Google Scholar] [CrossRef]
  8. Amadeo, M.; Campolo, C.; Ruggeri, G.; Lia, G.; Molinaro, A. Caching Transient Contents in Vehicular Named Data Networking: A Performance Analysis. Sensors 2020, 20, 1985. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  9. Cisco Public. Cisco Visual Networking Index: Forecast and Methodology. Available online: https://www.google.com.hk/url?sa=t&rct=j&q=&esrc=s&source=web&cd=4&cad=rja&uact=8&ved=2ahUKEwiQvJ6qgJTpAhVbUd4KHU2XBBgQFjADegQIAxAB&url=https%3A%2F%2Fwww.reinvention.be%2Fwebhdfs%2Fv1%2Fdocs%2Fcomplete-white-paper-c11-481360.pdf&usg=AOvVaw3S0z-NZ_1XawkfmiZu3oTB (accessed on 6 June 2017).
  10. Abbas, G.; Halim, Z.; Abbas, Z.H. Fairness-driven queue management: A survey and taxonomy. IEEE Commun. Surv. Tutor. 2015, 18, 324–367. [Google Scholar] [CrossRef]
  11. Ren, Y.; Li, J.; Shi, S.; Li, L.; Wang, G.; Zhang, B. Congestion control in named data networking–a survey. Comput. Commun. 2016, 86, 1–11. [Google Scholar] [CrossRef]
  12. Allman, M.; Paxson, V.; Blanton, E. TCP Congestion Control. Available online: https://www.rfc-editor.org/info/rfc5681 (accessed on 30 April 2020).
  13. Braun, S.; Monti, M.; Sifalakis, M.; Tschudin, C. An empirical study of receiver-based aimd flow-control strategies for CCN. In Proceedings of the 2013 22nd international conference on computer communication and Networks (ICCCN), Nassau, Bahamas, 30 July–2 August 2013; pp. 1–8. [Google Scholar]
  14. Saino, L.; Cocora, C.; Pavlou, G. CCTCP: A scalable receiver-driven congestion control protocol for content centric networking. In Proceedings of the 2013 IEEE International Conference on Communications (ICC), Budapest, Hungary, 9–13 June 2013; pp. 3775–3780. [Google Scholar]
  15. Carofiglio, G.; Gallo, M.; Muscariello, L.; Papali, M. Multipath congestion control in content-centric networks. In Proceedings of the 2013 IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS), Turin, Italy, 14–19 April 2013; pp. 363–368. [Google Scholar]
  16. Ren, Y.; Li, J.; Shi, S.; Li, L.; Wang, G. An explicit congestion control algorithm for named data networking. In Proceedings of the 2016 IEEE conference on computer communications workshops (INFOCOM WKSHPS), San Francisco, CA, USA, 10–14 April 2016; pp. 294–299. [Google Scholar]
  17. Lan, D.; Tan, X.; Lv, J.; Jin, Y.; Yang, J. A Deep Reinforcement Learning Based Congestion Control Mechanism for NDN. In Proceedings of the ICC 2019—2019 IEEE International Conference on Communications (ICC), Shanghai, China, 20–24 May 2019; pp. 1–7. [Google Scholar]
  18. Rozhnova, N.; Fdida, S. An extended hop-by-hop interest shaping mechanism for content-centric networking. In Proceedings of the 2014 IEEE Global Communications Conference, Austin, TX, USA, 8–12 December 2014; pp. 1–7. [Google Scholar]
  19. Liu, T.; Zhang, M.; Zhu, J.; Zheng, R.; Liu, R.; Wu, Q. ACCP: Adaptive congestion control protocol in named data networking based on deep learning. Neural Comput. Appl. 2019, 31, 4675–4683. [Google Scholar] [CrossRef]
  20. Carofiglio, G.; Gallo, M.; Muscariello, L. Joint hop-by-hop and receiver-driven interest control protocol for content-centric networks. ACM SIGCOMM Comput. Commun. Rev. 2012, 42, 491–496. [Google Scholar] [CrossRef] [Green Version]
  21. Zhang, F.; Zhang, Y.; Reznik, A.; Liu, H.; Qian, C.; Xu, C. A transport protocol for content-centric networking with explicit congestion control. In Proceedings of the 2014 23rd international conference on computer communication and Networks (ICCCN), Shanghai, China, 4–7 August 2014; pp. 1–8. [Google Scholar]
  22. Zhang, Y.; An, X.; Yuan, M.; Bu, X.; An, J. Concurrent Multi-Path Routing Optimization in Named Data Networks. IEEE Internet Things J. 2019, 7, 1451–1463. [Google Scholar] [CrossRef]
  23. Carofiglio, G.; Gallo, M.; Muscariello, L. Optimal multipath congestion control and request forwarding in information-centric networks: Protocol design and experimentation. Comput. Netw. 2016, 110, 104–117. [Google Scholar] [CrossRef]
  24. Zhang, L.; Afanasyev, A.; Burke, J.; Jacobson, V.; Claffy, K.; Crowley, P.; Papadopoulos, C.; Wang, L.; Zhang, B. Named data networking. ACM SIGCOMM Comput. Commun. Rev. 2014, 44, 66–73. [Google Scholar] [CrossRef]
  25. Naeem, M.A.; Nor, S.A.; Hassan, S.; Kim, B.S. Compound popular content caching strategy in named data networking. Electronics 2019, 8, 771. [Google Scholar] [CrossRef] [Green Version]
  26. Badshah, J.; Mohaia Alhaisoni, M.; Shah, N.; Kamran, M. Cache Servers Placement Based on Important Switches for SDN-Based ICN. Electronics 2020, 9, 39. [Google Scholar] [CrossRef] [Green Version]
  27. Xu, C.; Wang, M.; Chen, X.; Zhong, L.; Grieco, L.A. Optimal information centric caching in 5G device-to-device communications. IEEE Trans. Mob. Comput. 2018, 17, 2114–2126. [Google Scholar] [CrossRef]
  28. Feng, Y.; Zhou, P.; Wu, D.; Hu, Y. Accurate Content Push for Content-Centric Social Networks: A Big Data Support Online Learning Approach. IEEE Trans. Emerg. Top. Comput. Intell. 2018, 2, 426–438. [Google Scholar] [CrossRef]
  29. Wischik, D.; Raiciu, C.; Greenhalgh, A.; Handley, M. Design, Implementation and Evaluation of Congestion Control for Multipath TCP. In Proceedings of the 8th USENIX Conference on Networked Systems Design and Implementation, Boston MA, USA, 30 March–1 April 2011; Volume 11, p. 8. [Google Scholar]
  30. Mastorakis, S.; Afanasyev, A.; Moiseenko, I.; Zhang, L. ndnSIM 2: An Updated NDN Simulator for NS-3; NDN 0028; University of California: Los Angeles, CA, USA, 2016. [Google Scholar]
  31. Jain, R.; Durresi, A.; Babic, G. Throughput Fairness Index: An Explanation, ATM Forum Document Number: ATM_Forum/99-0045; February 1999. Available online: https://www.cse.wustl.edu/~jain/atmf/ftp/af_fair.pdf (accessed on 30 April 2020).
Figure 1. Interest packet forwarding process.
Figure 2. Data packet forwarding process.
Figure 3. Dumbbell topology with 5 Mbps bottleneck link.
Figure 4. Data reception rates for CBR and IRCP flows.
Figure 5. Performance evaluation of the proposed fairness scheme (dumbbell topology).
Figure 6. Parking lot topology with 10 Mbps bottleneck link.
Figure 7. Performance evaluation of the proposed fairness scheme (parking lot topology).
Figure 8. Multipath scenario for fairness evaluation.
Figure 9. Per-flow data reception rate for the multipath scenario.
Figure 10. Performance evaluation of the proposed fairness scheme (multipath scenario).
Table 1. Flow parameters for parking lot topology.

Users   Name Prefix             Flow Type
User1   ndn://netflix.com/      CBR
User2   ndn://facebook.com/     IRCP
User3   ndn://youtube.com/      IRCP
User4   ndn://amazon.com/       CBR
User5   ndn://google.com/       IRCP
User6   ndn://yahoo.com/        IRCP

Share and Cite

Zafar, H.; Abbas, Z.H.; Abbas, G.; Muhammad, F.; Tufail, M.; Kim, S. An Effective Fairness Scheme for Named Data Networking. Electronics 2020, 9, 749. https://doi.org/10.3390/electronics9050749
