An Effective Fairness Scheme for Named Data Networking

Abstract: Named data networking (NDN) is a revolutionary approach to catering for modern and future Internet usage trends. Advancements in web services, social networks, and cloud computing have shifted Internet utilization towards information delivery. Information-centric networking (ICN) enables content awareness in the network layer and adopts name-based routing through the NDN architecture. Data delivery in NDN is receiver-driven and pull-based, governed by requests (interests) sent out by the receiver. The ever-increasing share of high-volume media streams traversing the Internet, due to the popularity and availability of video-streaming services, can put a strain on network resources and lead to congestion. Since most congestion control techniques proposed for NDN are receiver-based and rely on the users to adjust their interest rates, a fairness scheme needs to be implemented at the intermediate network nodes to ensure that “rogue” users do not monopolize the available network resources. This paper proposes a fairness-based active queue management scheme at network routers which performs per-flow interest rate shaping to ensure fair allocation of resources. Different congestion scenarios for both single-path and multipath network topologies have been simulated to test the effectiveness of the proposed fairness scheme. Performance of the scheme is evaluated using Jain’s fairness index as a fairness metric.


Introduction
The current Internet architecture has served for decades, with many innovations across the layers of the TCP/IP protocol stack. The traditional IP network delivers data packets based on IP addresses (the location of the data) and is completely unaware of the contents of those packets. Most Internet traffic today is content-based: YouTube, Netflix, iTunes, Amazon, etc. Furthermore, recent advancements in web services, social networks, and cloud computing have shifted Internet usage towards information delivery, which requires information- and content-aware networks. Information-centric networking (ICN) is a networking paradigm that tries to overcome this mismatch.

Related Work
Various congestion control techniques have been proposed in the literature. However, little work has been done to ensure fairness among competing flows under dynamic multisource and multipath scenarios. While some of the proposed techniques do incorporate fairness as an added objective, an evaluation of those techniques exclusively on a "fairness" metric is still missing. Furthermore, the effect of rogue users, i.e., users who decide not to abide by any kind of congestion control mechanism, has not been studied extensively in the literature. There is a need for effective and robust techniques to ease congestion caused by the behavior of such users, and to ensure that other users, who are responsive and adjust their interest generation rates to mitigate congestion, do not face deterioration in quality of experience (QoE). Most of the related work relies solely on receiver-based congestion control, which has its shortcomings, as discussed in [11]. The authors in [11] have thoroughly analyzed and classified the congestion control strategies in NDN. The best strategy is a hybrid approach that avoids network congestion, utilizes network resources in an optimized fashion, and ensures fairness. Based on where the congestion control mechanism is applied, NDN congestion control mechanisms can be divided into four classes.

Receiver-Based Control
A simple way to control data traffic is to control the rate of interest packets sent out by the receiver. The interest rate can be controlled by a method similar to TCP congestion control [12]. However, TCP congestion control is highly dependent on the retransmission timeout (RTO) value of the packet, and Ref. [13] shows that it is difficult to estimate RTO values correctly in NDN. Moreover, using a single RTO timer does not help with in-network congestion control in NDN. Content-centric TCP (CCTCP) [14] introduces the concept of "anticipated interests" to measure the RTO of packets coming from multiple sources in an adaptive fashion. Remote adaptive active queue management (RAAQM) [15] introduces a probabilistic window decrease: its window-decrease algorithm anticipates the reduction in window size by sensing variations in the monitored round-trip delay. In the explicit control protocol (ECP) [16], routers detect the congestion status in the network and feed the information back to end nodes so that receivers can adjust their interest rates. Another receiver-based intelligent congestion control algorithm is the deep reinforcement learning-based congestion control protocol (DRL-CCP) [17]. In DRL-CCP, deep learning and reinforcement training of neural networks determine the evolution of the interest generation window at the receiver side.
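To make the receiver-side mechanism concrete, the AIMD interest window common to these protocols can be sketched as follows. This is a minimal illustration: the growth step of 1/window per received data packet and the halving on a congestion signal are generic AIMD defaults, not the tuned parameters of CCTCP, RAAQM, or DRL-CCP.

```python
# Minimal AIMD interest window at the receiver (illustrative constants).
class AimdInterestWindow:
    def __init__(self, init_window=1.0, max_window=64.0):
        self.window = init_window      # current interest window size
        self.max_window = max_window

    def on_data(self):
        # Additive increase: ~1 window unit per round trip,
        # i.e., 1/window per received data packet.
        self.window = min(self.window + 1.0 / self.window, self.max_window)

    def on_congestion(self):
        # Multiplicative decrease on timeout or congestion mark.
        self.window = max(self.window / 2.0, 1.0)

w = AimdInterestWindow()
for _ in range(10):          # ten data packets arrive
    w.on_data()
w.on_congestion()            # a congestion signal halves the window
```

The window governs how many interests may be outstanding at once, so halving it immediately halves the receiver's interest generation rate.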

Hop-By-Hop Control
In the hop-by-hop congestion control mechanism, the intermediate nodes, i.e., routers, measure their local congestion status by monitoring the size of the incoming data chunk queues and reduce their interest forwarding rates if congestion is detected. Hop-by-hop interest shaping (HoBHIS) [18] allows network nodes to shape the interest rate on the basis of data chunk queue occupancy. Each node detects congestion and calculates an interest shaping rate. This rate is explicitly advertised to the clients, who are not supposed to exceed it. The effectiveness of this technique depends heavily on the cooperation of end-users. The adaptive congestion control protocol (ACCP) proposed in [19] aims to anticipate congestion before it can affect network performance. ACCP employs deep learning to predict the source of congestion at each network node, and then estimates the measure of congestion by monitoring the average queue length. This estimate is explicitly notified to the receivers, who can adjust their interest generation rates to avoid network congestion. Like ECP, ACCP relies on the responses and cooperation of end-users to effectively avoid congestion in the network.

Hybrid Control
Receiver-based control requires correct estimation of RTT and RTO values, which is a challenging task in NDN. The hop-by-hop method solves the problem of fairness; however, it does not guarantee optimal performance. Combining the two methods can give better performance against network congestion: not only is congestion detected earlier, but the network nodes can also take quick action to mitigate it before the end-users get the chance to react [11]. Hop-by-hop and receiver-driven interest control protocol (HR-ICP) [20] uses a hybrid approach to control congestion. The interest control protocol (ICP) maintains an AIMD window for interest generation at the receiver end. In addition to ICP, a per-flow interest rate shaping algorithm is implemented at the network nodes to alter the interest forwarding rates of unresponsive flows. Chunk-switched hop pull control protocol (CHoPCoP) [21] uses random early marking (REM) to measure congestion and notify the end-users about it. At the receiver end, AIMD decreases the window size to reduce the interest rate when the receiver gets an explicit congestion notification in the form of marked data packets.

Optimal Control
In NDN, optimal control over routing is very significant due to its multipath and multisource nature. For example, shortest-path routing algorithms can direct most of the traffic onto one link, i.e., the shortest path. This causes the shortest-path link to be congested while other paths may remain under-utilized. Therefore, optimal routing and forwarding algorithms are used in order to maximize user throughput, avoid network congestion, ensure fairness, and utilize multipaths. In [22], the authors make use of the Fleischer algorithm and a randomized rounding algorithm to propose a near-optimal routing scheme for NDN multipaths which ensures maximum throughput while considering fairness among each source-sink pair. The authors in [23] consider a joint congestion control and interest forwarding problem. They formulate a global optimization problem with the two-fold objective of maximizing user throughput and minimizing overall network cost. The joint optimal congestion control technique, comprising an optimal receiver-driven congestion control (termed the interest rate control protocol (IRCP) in this paper) and an optimal request forwarding strategy, is designed as a result.
Performance evaluation of the technique proposed in [23] proves its effectiveness in avoiding congestion and utilizing network resources in an optimized fashion. However, the behavior and performance of the proposed optimal control are not considered in the presence of "rogue" users; i.e., the users who do not abide by the proposed receiver-based congestion control protocol. It can be deduced that the proposed congestion control technique would lose its effectiveness if the receivers were to decide not to obey IRCP and keep generating interests at a constant rate. A congestion collapse becomes inevitable when this happens. Even a small number of rogue users can cause a serious fairness issue and keep "obedient" IRCP-following users from using their fair share of network resources (as seen in Section 5). The fairness scheme proposed in Section 4 keeps that from happening. It not only prevents congestion collapse in the presence of rogue users but also ensures fairness among active flows when the network is facing congestion.

Challenges of NDN Transport
NDN employs a unique transport model which makes it suitable for content-oriented and information-centric data delivery. Therefore, the existing congestion avoidance and fairness techniques for the TCP/IP model cannot work in the NDN domain, and there is a need for effective solutions to address issues that are exclusive to NDN.

Connectionless Transport
To address the mismatch that exists between current location-based addressing and location-agnostic "name addressing" for information-centric delivery, NDN adopts connectionless content retrieval, which makes its transport fundamentally different from that of the TCP/IP model.

System Description
NDN introduces stateful, name-based routing and forwarding for content and information delivery. Each in-network node maintains three structures: (i) a content store (CS), (ii) a pending interest table (PIT), and (iii) a forwarding information base (FIB). NDN routers temporarily store copies of the data objects being requested by the end-users in their content stores. Users request data packets by sending out requests, namely, "interest packets", for the corresponding content. If the requested content can be retrieved locally from the CS, the incoming interest is not forwarded upstream. A record of outstanding interests is kept in the PIT, which stores the names of the requested content and the incoming interfaces of the corresponding interest packets. If an incoming interest carries a name for which there is already an outstanding entry in the PIT, the interest is not forwarded upstream and the incoming interface is added to the PIT entry of that particular name. If there is no PIT entry for the requested content, the interest is forwarded upstream according to the forwarding information available in the router's FIB. Figure 1 describes how an interest packet is processed at an NDN router. Data packets make their way to the end-users by traversing the reverse path followed by their corresponding interest packets. When a data packet arrives at an NDN router, the router performs a PIT lookup. If there is an outstanding entry for that name in the PIT, the data packet is forwarded to all the interfaces that originated the request; otherwise, it is discarded. A copy of the packet is also stored temporarily in the router's CS. Figure 2 explains how a data packet is processed at an NDN router. This novel method of data transmission is completely different from the one employed by the current Internet. There is no separate "transport" layer in NDN that could mitigate and avoid network congestion [24].
Therefore, transport challenges faced by NDN can be best resolved by making innovations in its all-encompassing routing and forwarding plane.
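The interest- and data-processing pipeline of Figures 1 and 2 can be modeled minimally as below. The dictionary-based CS, PIT, and FIB and the one-level prefix lookup are deliberate simplifications (a real FIB performs longest-prefix matching, and the CS has an eviction policy).

```python
# Simplified NDN router: CS/PIT/FIB processing of interests and data.
class NdnRouter:
    def __init__(self, fib):
        self.cs = {}       # content store: name -> cached data
        self.pit = {}      # pending interest table: name -> incoming faces
        self.fib = fib     # forwarding info base: name prefix -> upstream face

    def on_interest(self, name, in_face):
        if name in self.cs:                    # CS hit: answer locally
            return ("data", self.cs[name], in_face)
        if name in self.pit:                   # outstanding entry: aggregate
            self.pit[name].add(in_face)
            return ("aggregated", None, None)
        self.pit[name] = {in_face}             # new entry: forward via FIB
        prefix = name.rsplit("/", 1)[0]        # one-level prefix (simplified)
        return ("forward", None, self.fib.get(prefix))

    def on_data(self, name, data):
        faces = self.pit.pop(name, set())      # satisfied entry is consumed;
        self.cs[name] = data                   # a copy is cached in the CS
        return faces                           # forward to all requesters
```

Note how one data packet satisfies every face aggregated in the PIT entry, and how a second request for cached content never leaves the router.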

The Multisource and Multipath Nature of NDN Transport
In NDN, data can be served by different content sources, causing the retrieval delay of each incoming data packet to vary, and the consumer is unable to differentiate between these sources. Hence, traditional RTT-based congestion detection methods become unreliable indicators of congestion. The concept of end-to-end connections does not apply to NDN due to its multisource nature; i.e., data can come from a repository or from one or more local caches of intermediate nodes. The multisource nature of ICN can cause data to be received from multiple sources on multiple paths of varying lengths, which leads to a multipath problem. Effective caching is another important area that could solve some of the challenges faced by NDN transport. Significant research is being carried out on the caching issues of ICN [25][26][27][28].
Data flow in NDN is governed by interest packets. Hence, by regulating the rate of interest packets, congestion can be avoided and controlled. Moreover, it is beneficial to drop interest packets instead of data packets: they are smaller in size, so network resources are saved when an interest packet is dropped early. A TCP-like AIMD-based interest window can be maintained at the receiver side (end-users are termed "receivers" in NDN as they receive the data that they requested via interests), which governs the users' interest generation rates. Since data packets belonging to the same flow can come from multiple sources and traverse multiple paths, accumulated delays for all routes need to be considered in the calculation of the round-trip delay per flow. The receiver-based IRCP used in this work incorporates route labels to uniquely identify multiple routes and calculates their associated delays. Thus, a per-flow estimation of the round-trip delay drives the evolution of the interest generation window for that particular flow.
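The route-label bookkeeping described above can be sketched as follows. The EWMA smoothing constant and the choice of the largest route delay as the conservative per-flow estimate are assumptions for illustration, not the exact IRCP rules.

```python
# Per-route RTT estimation keyed by route label (alpha is an assumption).
class RouteRttEstimator:
    def __init__(self, alpha=0.125):
        self.alpha = alpha
        self.srtt = {}                         # route label -> smoothed RTT

    def sample(self, route_label, rtt):
        old = self.srtt.get(route_label)
        if old is None:
            self.srtt[route_label] = rtt       # first sample on this route
        else:
            self.srtt[route_label] = (1 - self.alpha) * old + self.alpha * rtt

    def flow_rtt(self):
        # Conservative per-flow estimate: the slowest route's delay
        # (an illustrative aggregation choice).
        return max(self.srtt.values()) if self.srtt else None
```

Keeping a separate smoothed RTT per route label prevents fast cache hits on a short path from masking congestion on a longer path serving the same flow.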

Load Balancing
The multipath nature of NDN transport needs to be considered while developing an effective congestion control protocol. Even multipath versions of TCP (e.g., multipath TCP [29]) cannot address this multipath problem effectively, as they consider predetermined routes for a specific connection, while in NDN, multipaths can change dynamically. Therefore, an effective load-balancing and interest forwarding strategy needs to be employed at in-network nodes, which can make on-the-go decisions for forwarding interests to optimally utilize the available multipaths and avoid network congestion. In this paper, the optimal request forwarding algorithm designed in [23] is implemented at the network nodes to optimally forward interest packets on all available paths. This distributed load-balancing scheme is applied on a per-interest basis using local forwarding statistics. No additional lookups on the FIB are required for maintaining these statistics. The computational complexity of the request forwarding algorithm is given by the complexity needed to update the local statistics, which is O(1) [23].
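As a stand-in for the optimal request forwarding of [23] (whose exact statistics are not reproduced here), the following sketch forwards each interest to the face with the fewest outstanding interests, with O(1) counter updates per packet:

```python
# Illustrative per-interest load balancer over a node's upstream faces.
# This shows the local-statistics idea only, not the algorithm of [23].
class LoadBalancer:
    def __init__(self, faces):
        self.pending = {f: 0 for f in faces}   # outstanding interests per face

    def choose_face(self):
        face = min(self.pending, key=self.pending.get)
        self.pending[face] += 1                # O(1): interest forwarded upstream
        return face

    def on_data(self, face):
        self.pending[face] -= 1                # O(1): matching data returned
```

Because the statistics are purely local counters, no extra FIB lookups are needed, matching the O(1) per-packet cost cited above.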

Fairness
Congestion avoidance in NDN mainly depends on the receivers and their cooperation in adjusting their interest generation rates. However, if some of the users choose not to cooperate and keep sending out interests at a constant rate (much like UDP/CBR), it can give rise to a situation where cooperating users always see the network being congested and their throughput suffers because of their adaptive AIMD-based interest generation windows. This way, transmission of obedient and cooperative flows, i.e., IRCP flows, is throttled while CBR flows hog most of the available bandwidth. To avoid this unfair allocation of network resources, the in-network nodes need to implement some kind of interest-rate shaping that restrains the interest packets of the uncooperative flows (such as CBR or IRCP-disabled flows) from going upstream and alters their interest forwarding rate according to their fair share.
A fairness scheme is proposed in Section 4 of this paper to ensure fair resource allocation in the presence of uncooperative flows.

Proposed Fairness Scheme
The fairness scheme proposed in this paper works together with the optimal multipath congestion control and request forwarding technique proposed in [23]. To optimally utilize network resources while simultaneously avoiding network congestion, receivers keep an AIMD-based interest generation window to adapt their interest generation rate accordingly. Therefore, flows originating from these cooperative users are responsive to network congestion and help in mitigating congestion by adjusting their interest generation rates. These flows are termed IRCP flows in this paper. Furthermore, the network routers employ an optimal request forwarding scheme to perform efficient load-balancing. Together, the three elements make for a comprehensive "hybrid" congestion control protocol, which can address the multiple challenges of NDN transport.
The scheme is deployed at the intermediate routers. Thus, in case of network congestion, in-network nodes can shape the interest forwarding rates of active flows. This shaping is done on the basis of fair allocation of link bandwidth among all the active flows. Flows are distinguished as "rogue" and "obedient" flows on-the-fly, "rogue" flows being the flows that are exceeding their fair share of network resources. Uncooperative flows that do not abide by the IRCP, or flows that are unresponsive to network congestion such as CBR flows are marked as "rogue" and interest packets belonging to these flows are inserted in an interest rate shaping queue.
Each in-network node measures its congestion status by monitoring the size of the data chunk queue, and if the network is congested, each node performs per-flow interest rate shaping on-the-fly to respond quickly to network congestion. This way, the scheme works independently of the receiver-based congestion control protocols. The interest rate shaping algorithm also ensures per-flow fairness by using a stateful approach to keep a record of active flows and assign them their fair share. This is done by keeping a virtual queue and an associated data counter for each active flow. All the interest packets belonging to the same flow (same name prefix) are queued in that virtual queue. Upon congestion, the flows exceeding their fair share (rogue flows) are throttled by altering their interest forwarding rates. Thus, rogue users are restricted to their fair share of the link bandwidth and cannot tap into the share of other "obedient" users. The performance of this fairness scheme is evaluated in Section 5.
Algorithm 1 summarizes the steps involved in the fairness-based interest rate shaping scheme. The scheme only comes into play when the network is congested, which is determined by monitoring the queue size on routers' incoming data buffers. Upon reception of interest packets, CS and PIT lookups are performed on the names of the requested data objects. If there are no entries in the PIT for the incoming request and if the request cannot be fulfilled locally through the router's CS (conditions termed PIT_miss and Cache_miss, respectively), the interest packet needs to be forwarded upstream. Incoming interests are queued in the virtual queues based on their "name" fields; i.e., interests belonging to the same flow are queued in that flow's virtual queue (Enqueue). Unique name prefix entries in the PIT dictate the number of active flows. Each flow (unique name prefix) has its virtual queue and its corresponding "counter", which dictates the amount of data (in MB) that the flow is allowed to transmit in a certain time interval T. Interest packets belonging to an obedient flow are fetched from the flow's virtual queue using Dequeue and are forwarded upstream without any delay. The flow's counter is decremented by the size of the requested data packet each time an interest belonging to that particular flow is forwarded upstream (Counter_Decrease). This is done by employing a "data size" field in the interest packets, which indicates the size of the data chunk being requested by those interests. If a flow's counter reaches zero, meaning that the flow has already transmitted its allocated share, the flow is marked Rogue. The interest packets belonging to a rogue flow are inserted in the interest rate shaping queue (Rate_Shaping_Queue_Insert).
The Throttle function makes sure that the interests belonging to a rogue flow are forwarded only when the counter value for that particular flow is non-zero; i.e., once the counter for that flow is reset after some time, thereby throttling the data transmission rates of the unresponsive flows. Rate_Shaping_Queue_Fetch is used to retrieve the interests at the head of the interest rate shaping queue. These interests are forwarded upstream as long as the counter value stays above zero. Thus, each active flow gets to transmit the same amount of data in a specific time interval under congestion. Data counters for all active flows are maintained according to Algorithm 2.
Algorithm 1 Fairness-based interest rate shaping.

19: interest ← Rate_Shaping_Queue_Fetch(i)
20: Forward(interest)
21: Counter_Decrease(name, interest.data_size)
22: i ← next_i
23: end while
24: end procedure

Algorithm 2 Per-flow data rate regulation.

1: if (New_Flow or Timeout) then
2:   Check PIT for active flows
3:   fair_share ← Fair_Share_Update(flows, bandwidth)
4:   for all active flows do
5:     Update_Counters(fair_share)
6:   end for
7: end if

Network routers follow a stateful approach for per-flow data rate regulation by monitoring PIT entries. The bandwidth of the outgoing link, through which the data packets are "pulled" against their corresponding interests, is divided equally among the active flows. The counter value is essentially the allotted fair share, which the flows are not allowed to exceed within a given time interval T. If there is a new flow, the fair share value is updated for all active flows and the counters are set to the new value; otherwise, the counter values for all active flows are reset periodically.
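A compact sketch of the counter logic in Algorithms 1 and 2 is given below; the bandwidth value, flow identifiers, and method names are illustrative, not the paper's implementation.

```python
# Sketch of Algorithms 1 and 2: equal per-flow data counters over interval T.
class FairnessShaper:
    def __init__(self, link_bandwidth_mb):
        self.bandwidth = link_bandwidth_mb     # MB deliverable per interval T
        self.counters = {}                     # name prefix -> remaining share

    def update_counters(self, active_flows):
        # Algorithm 2: split the link bandwidth equally among active flows,
        # on a new flow's arrival or on the periodic timeout.
        fair_share = self.bandwidth / max(len(active_flows), 1)
        for flow in active_flows:
            self.counters[flow] = fair_share

    def admit_interest(self, flow, data_size_mb):
        # Algorithm 1 core: forward while the counter lasts (Counter_Decrease);
        # otherwise the flow is rogue and the interest waits in the shaping
        # queue until the counter is reset (Throttle).
        if self.counters.get(flow, 0.0) >= data_size_mb:
            self.counters[flow] -= data_size_mb
            return True                        # forward upstream now
        return False                           # hold until the counter resets
```

An obedient flow never exhausts its counter within an interval, so its interests pass through undelayed; only flows exceeding the fair share are held back.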

Performance Evaluation
In this section, we evaluate the performance of the fairness scheme proposed in Algorithm 1 by simulating different congestion scenarios for both single-path and multipath network topologies. The simulation environment chosen for this purpose is the ns-3-based NDN simulator ndnSIM 2.0 [30]. The key performance indicator selected for the evaluation is Jain's fairness index [31].
In all the simulation scenarios discussed in this section, the interest packets of different flows (whether CBR or IRCP) request data objects of the same size. Therefore, the number of interest packets per flow is restricted in a certain time interval, instead of the number of allowed bytes as in Algorithm 2.

Single Path (Dumbbell Topology)
Consider a simple dumbbell topology as shown in Figure 3 with a bottleneck link of 5 Mbps between R1 and R2. User1 is generating interests with the name prefix "ndn://facebook.com/{object1, object2, object3, ...}" in accordance with the window-based IRCP. User2 is generating interests with the name prefix "ndn://netflix.com/{object1, object2, object3, ...}" at a constant rate, thereby receiving data at a constant bit rate (set to 5 Mbps throughout the simulation). We have simulated this scenario both with and without the proposed fairness scheme deployed at the intermediate routers R1 and R2. In the absence of a fairness scheme, the CBR flow causes unfair network usage, as it occupies almost all of the bottleneck link throughout the simulation and does not let the IRCP flow transmit according to its fair share. The transmission (data reception) rates for both flows are displayed in Figure 4a, which clearly shows that the CBR flow is occupying all the bandwidth, resulting in unfair allocation of network resources. When network congestion appears, CBR's rate drops a little but returns to its maximum shortly, forcing the IRCP flow to decrease its transmission rate.
We observe a significantly fairer allocation of the link bandwidth when the fairness scheme is deployed at the intermediate routers, as illustrated in Figure 4b. When congestion occurs, router R1 alters the interest forwarding rate of the CBR flow, thereby regulating its data reception rate. This allows the IRCP flow to receive data at its fair share. CBR transmits at its desired rate initially, until the network becomes congested and the proposed fairness scheme comes into action, which consequently reduces CBR's interest forwarding rate and hence its data reception rate. IRCP gets the chance to transmit according to its fair share, and therefore fairness is achieved. The performance of the proposed fairness scheme is depicted in Figure 5 using Jain's fairness index, which is the measure of fairness (or lack thereof) among the active flows in a network. The index ranges between 1/n and 1 (n being the number of active flows), representing "worst" and "best" fairness, respectively [31]. Since there are only two active flows ("ndn://facebook.com/" and "ndn://netflix.com/"), the fairness index ranges between 0.5 and 1.0. It is evident from the result that the proposed scheme is successful in achieving fairness among the active flows.
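For reference, Jain's fairness index over the measured per-flow throughputs is computed as JFI = (Σ x_i)² / (n · Σ x_i²):

```python
# Jain's fairness index: (sum x)^2 / (n * sum x^2), ranging over [1/n, 1].
def jains_fairness_index(throughputs):
    n = len(throughputs)
    total = sum(throughputs)
    return total * total / (n * sum(x * x for x in throughputs))
```

Equal per-flow rates give the ideal value of 1.0; one flow monopolizing the link while the other starves gives the worst value of 1/n, i.e., 0.5 for two flows.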

Single Path (Parking Lot Topology)
We now evaluate the performance of our proposed scheme in a different scenario. Consider the parking lot topology depicted in Figure 6, with a bottleneck link of 10 Mbps between R3 and R4, while all the remaining links are 20 Mbps each. This scenario consists of six unique active flows; i.e., six users requesting content (data objects) with different name prefixes. Name prefixes and flow types for the different users in this topology are given in Table 1. The bottleneck link is utilized in both directions. Three of the flows (User1, User2, and User3) compete for their share of the link bandwidth from R4 to R3, while the other three flows (User4, User5, and User6) utilize the link from R3 to R4. The data reception rates for all the users are shown in Figure 7. The simulation result in Figure 7a illustrates that the CBR flows originating from User1 and User4 start transmitting at a higher rate, occupying more than their fair share of the bandwidth, but the fairness scheme, deployed at the intermediate routers R3 and R4, forces them to reduce their transmission rates, consequently allowing the IRCP flows (originating from User2, User3, User5, and User6) to transmit their fair share. Moreover, it is evident from Figure 7b that the bottleneck link is shared fairly among all the network users. There are three flows competing for the bandwidth in either direction on the bottleneck link. Therefore, the lowest possible value of JFI is 1/3, i.e., approximately 0.33, which represents the "worst" fairness among any three users competing for the link bandwidth in either direction. It is evident from the result that the fairness indices for both directions remain far above the worst value, and near-ideal fairness is achieved and maintained throughout the simulation. This is because of the highly effective fairness scheme incorporated at the network routers, which tries to ensure that all the competing flows get to transmit the same amount of data in a specific time interval.

Multipath Scenario
The fairness scheme is also tested and evaluated in a multisource, multipath scenario. Consider the topology in Figure 8, where two different types of flows, i.e., CBR and IRCP, are used to retrieve content from multiple sources. The CBR flow generates interests with the name prefix "ndn://netflix.com/" and is set to receive data at 30 Mbps, while IRCP requests data with the name prefix "ndn://youtube.com/"; there are two sources (data servers) for each flow, and both can satisfy incoming interest packets by providing the corresponding data. The optimal load-balancing technique proposed in [23] is active on the network routers. The routers can utilize all the available multipaths by optimally selecting the outgoing links for the forwarding of interest packets. The bottleneck links (1,2) and (1,3) give rise to congestion in the network, thereby causing the fairness algorithm to become active and perform per-flow interest rate shaping. In this scenario, data packets from both the CBR and IRCP flows compete for their share of the bandwidth on these bottleneck links. Data reception rates for both flows are probed at the respective receivers' ends in order to measure their throughput.
In the absence of a fairness scheme, CBR flows monopolize the available bandwidth and keep IRCP flows from transmitting at their fair share. However, in this case, our interest rate shaping algorithm ensures that both flows get to transmit at approximately the same rate. Congestion, caused by the combination of the high data reception rate of the CBR flow and the existence of bottlenecks in the network, provokes the intermediate routers to adjust the forwarding rate of interest packets belonging to the CBR flow. Consequently, both flows end up receiving data at approximately the same rate. The data reception rates in Figure 9 show that both the CBR and IRCP users receive data at approximately 15 Mbps (the fair share). The CBR flow, which was originally intended to transmit at around 30 Mbps, has been confined to around 15 Mbps to ensure fairness. Figure 10 demonstrates that the proposed fairness scheme is equally effective and suitable for multisource, multipath NDN transport. Throughout the simulation, JFI remains close to 1; i.e., the "best" fairness value. This proves that both flows share network resources in a fair manner.

Conclusions
In this paper, we propose a lightweight, effective, and robust fairness scheme for NDN which is deployed at the network nodes. Upon detecting congestion, routers can perform interest rate shaping locally and independently before the receiver-based algorithms get the chance to react. Moreover, uncooperative users are discouraged, as the interest forwarding rate of unresponsive flows is throttled during congestion to ensure fairness. The ns-3-based simulator ndnSIM is used to simulate the desired congestion conditions in order to monitor the performance of the proposed fairness scheme under different scenarios. The fairness evaluation proves that our proposed scheme is effective in ensuring fair resource allocation among users.
The proposed fairness scheme is designed to work together with an effective receiver-based interest control algorithm that can address the multisource and multipath nature of NDN. Additionally, an optimal request forwarding strategy to balance traffic on all available multipaths is also required for developing a comprehensive congestion control protocol for NDN. The effectiveness of the receiver-based interest rate control protocol and the request forwarding strategy used in this paper is well established in the literature. However, a thorough study is required on the effectiveness of this comprehensive congestion control protocol. In the future, we aim to investigate the efficacy of the proposed integrated approach in mitigating and avoiding network congestion. Furthermore, we also plan to carry out a scalability study of the proposed fairness scheme on a large-scale experimental testbed as an extension to this work.