Article

A Mathematical Model for Efficient and Fair Resource Assignment in Multipath Transport

Sustainable Communication Networks, University of Bremen, 28359 Bremen, Germany
*
Author to whom correspondence should be addressed.
Future Internet 2019, 11(2), 39; https://doi.org/10.3390/fi11020039
Submission received: 3 January 2019 / Revised: 28 January 2019 / Accepted: 5 February 2019 / Published: 10 February 2019

Abstract

Multipath transport protocols are aimed at increasing the throughput of data flows as well as maintaining fairness between users, which are both crucial factors to maximize user satisfaction. In this paper, a mixed-integer (non)linear programming (MINLP) solution is developed which optimally allocates the link capacities of a network to a number of given traffic demands, considering both the maximization of link utilization and fairness between transport-layer data flows or subflows. The solutions of the MINLP formulation are evaluated with respect to their throughput and fairness using well-known metrics from the literature. It is shown that capacity allocation based on network flow fairness achieves better fairness results than the bottleneck-based methods in most cases while yielding the same capacity allocation performance.

1. Introduction

The aim of transport-layer protocols is the reliable end-to-end transport of data. For best performance and resulting user satisfaction, Internet providers expect that transport protocols utilize links in an optimum way and avoid congestion in the network. Furthermore, protocols have to consider that links are in most cases shared by multiple users. Therefore, the available capacity on a shared link should be assigned to the users in a fair way so that no user has to starve. Fairness means that a transport protocol should respond to congestion notifications, such as packet loss or increased delay, by reducing the traffic load injected into the network.
The most commonly used transport protocol in today’s Internet is the Transmission Control Protocol (TCP) [1]. It is not only used by “classical” applications which rely on reliable transmissions—such as web, e-mail or file transfer—but also by some soft-real-time applications where delay is less critical, such as video streaming. In the context of TCP coexisting with other transport protocols on shared links, the term “TCP friendliness” means that a flow should not use a larger portion of the link capacity than a legacy TCP flow [2].
An extension for legacy TCP developed in recent years is Multipath TCP (MPTCP) [3], which makes use of multiple interfaces in the sender or receiver, in contrast to legacy TCP, which is only able to use a single interface for a given data flow. Each interface connects the sender or receiver node to a different network so that packets are sent to the opposite station along different paths. Over each of the paths, a subset of the overall amount of packets is transported; this subset is called an MPTCP subflow. The protocol stack hides the details of the transport from the application; it behaves transparently and appears to the application as a legacy connection.
An alternative transport-layer protocol developed in recent years, which has been designed for multihoming operation from the beginning, is the Stream Control Transmission Protocol (SCTP) [4]. In SCTP, however, multihoming only means that alternative interfaces on a node and the resulting additional data paths are identified; they are used solely for redundancy. Therefore, an extension named Concurrent Multipath Transfer SCTP (CMT-SCTP) has been investigated [5] and is currently proposed as an Internet Engineering Task Force (IETF) draft [6].
It was pointed out that any transport-layer protocol, irrespective of whether it supports multipath, should coexist in a fair way with legacy TCP, so fairness has to be ensured when sharing resources with normal TCP flows. A resource can either be a link which is part of an end-to-end connection, or it can be the entire network. Moreover, when discussing fairness, different participants can be specified. On the one hand, there are data flows which transport the entire data of a stream or file. On the other hand, such a flow can be divided into subflows where each of them transports a subset of the data. The TCP-friendliness policy may require that an entire data flow behaves equivalently to a TCP connection, or that each of the subflows does so. This choice of different resources and participants means that there is no single notion of "the" fairness; rather, there are different ways to interpret fairness.
Another aspect of multipath transport is that it can not only consider the entire network as a resource which should be fairly shared, as described in Section 3.2.3 of this article, but can also focus on individual links as the shared resource, as shown in Section 3.2.1 and Section 3.2.2.
This paper discusses the theoretical analysis of resource assignment and fairness in the domain of multipath transport. In the multipath transport case, at least one of the end nodes of a connection has multiple interfaces, usually connected to different access networks, so that a data flow can be split at the transport layer. The analytical description of multipath transport requires a mathematical formulation which avoids flow splitting "on the way"; splitting has to be restricted to the end nodes by respective constraints. The topic should not be confused with multipath routing, where data is sent from a source node to a destination node as a single flow from the end-to-end view, whereas the flow splitting occurs in routers along the way between sender and receiver. Multipath transport, in contrast, runs between end-user devices as stated above. In this way, both methods complement each other. Furthermore, it was previously mentioned that multipath transport fairness can be enforced not only with the entire network as the resource, but also on an individual bottleneck link, whereas multipath routing always has the network as its scope.
Existing research about analytical modeling of multipath transport focuses on analytical modeling of the behaviour and stability of practical MPTCP congestion control [7,8,9,10,11]. Any practical transport-layer algorithm is, however, limited by the fact that it does not have knowledge about the internal states of the network; it relies on estimates obtained by observing the performance of the links. Furthermore, practical algorithms respond to changing link conditions, which results in a time-dependent behaviour. In contrast to this, the aim of our paper is to look at a network from a "god view" in order to find the optimum resource assignment, assuming perfect knowledge of the network topology in a static equilibrium. The considerations in our paper are independent from a particular algorithm; however, they consider fairness, which is also inherent to the different practical algorithms as discussed in Section 3.2.1, Section 3.2.2 and Section 3.2.3. Another group of publications discusses the aforementioned fairness in multipath routing, where the survey [12] gives an overview and some further examples include [13,14,15,16]. Our paper differs, as already mentioned, by considering fairness between end-to-end connections instead of routes in a network. Further publications discuss legacy (single-path) TCP, where various methods are summarized in three surveys [17,18,19]. In addition, for UDP transport, an application-layer congestion control scheme has been proposed to support fairness [20]. Finally, authors have investigated different practical implementations of MPTCP; these references are summarized in Section 3.2.1, Section 3.2.2 and Section 3.2.3, where they are given as examples of how different fairness methods were realized.
The contribution of this paper is (1) presenting formal definitions of a network and the different transport-layer fairness methods based on selecting resources and participants; (2) based on these definitions, designing a novel mathematical method based on mixed-integer (non)linear programming (MINLP) for the optimum assignment of data flows to physical paths across the network; and (3) evaluating the developed mathematical methods using 30 example scenarios.
The proposed method should consider a trade-off between high link utilization and fair share as discussed in the previous paragraphs. Since fairness is an important aspect of the resource assignment problem, well-known fairness methods from the literature are used to measure the fairness in a quantitative way. The methods can serve as a benchmark to evaluate practical resource assignment methods for multipath transport protocols.
The remaining parts of this paper are organized as follows: Section 2 gives the definition of resources and participants in a network and specifies the different fairness methods. Based on these definitions, the mathematical model is developed in Section 3. Metrics for the performance evaluation are discussed in Section 4, whereas, for the evaluation itself, a number of example scenarios are investigated in Section 5. Finally, Section 6 gives a conclusion and an outlook.

2. Terminology

When discussing fairness aspects of multipath transport, some terms and definitions have to be specified to avoid ambiguities. Inside a multipath network, there are participants who share a common resource. An analogous example is a number of wireless stations, which, as participants, share the wireless medium which is the resource. Resources and participants are defined in Section 2.1 and Section 2.2. Furthermore, clarification is needed as to what exactly the notion of fairness means, which is covered in Section 2.3.
The discussions in this paper refer on several occasions to (MP)TCP to identify different types of connections such as flows or subflows. The MPTCP-related wording should, however, be interpreted without loss of generality; it can be transferred to any transport protocol supporting multipath.

2.1. Resources

The following definitions of the resources are illustrated by the example in Figure 1. The blue circles identify the nodes; the small grey dots attached to the nodes are the interfaces. The black arrows and numbers denote the links and their capacities in Mbps.
Definition 1.
Network: A network is a directed graph specified by
  • A set of nodes V; in the example network in Figure 1, these are the nodes a to e.
  • A set of interfaces L whose cardinality is greater than or equal to the number of nodes in the network. Each node has at least one interface, and each interface can only be part of exactly one node. In Figure 1, L is formed by the interfaces a1, a2, ..., e.
  • A connectivity matrix with elements $cap_{ij} \geq 0,\; i, j \in L$. If $i, j$ are directly connected by a link, then $cap_{ij}$ specifies the link capacity in the direction from i to j; otherwise, $cap_{ij} = 0$. If $i, j$ belong to the same node, it is assumed that $cap_{ij} = \infty$. This means that nodes which act as forwarders, which could in practice be e.g., routers, are assumed to have infinite processing capacity; bottlenecks only occur on the links between nodes, but not inside the nodes. The connectivity matrix for the example of Figure 1 is given in Table 1. Empty elements have a value of 0 which is omitted for better overview.
The definition of a network raises the question as to why the additional notions of the node and interface sets are required for the definition of the network's directed graph, besides the connectivity matrix. The reason is the requirements of the linear programming formulation, where interfaces have to be distinguished from nodes.
Definition 2.
Path: When a data packet is forwarded from the sender to the receiver, it propagates across a number of nodes and their respective interfaces. A path is defined as the sequence of interfaces which the packet passes while being forwarded. The first interface of the path is the one of the sending node; the last one is the receiving node’s interface.
The bandwidth of a path is the bandwidth of the path’s slowest link, i.e., the “weakest link in the chain”.
In the example of Figure 1, node a is the sender and node e the receiver. There are two possible paths [ a , b , d , e ] and [ a , c , d , e ] which are identified by the red and green arrows, respectively.
Definition 3.
Bottleneck: A link whose capacity is fully utilized by flows or subflows limits the speed of at least one (sub)flow. Such a link is called a bottleneck.
In Figure 1, the path [a,b,d,e] is bottlenecked by link (a,b), which has a capacity of only 10 Mbps. Path [a,c,d,e] is bottlenecked by link (d,e), which allows a maximum of 30 Mbps. Out of this capacity, 10 Mbps are already used by the data transport via path [a,b,d,e], so that only 20 Mbps are left.
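To make Definitions 1 to 3 concrete, the following Python sketch encodes the links of Figure 1 and computes path bandwidths and residual capacities. Only the capacities of links (a,b) (10 Mbps) and (d,e) (30 Mbps) are stated in the text; the values assumed for the other links are illustrative choices consistent with the described bottlenecks.

```python
# Link capacities in Mbps for the network of Figure 1.
# Only cap(a,b) = 10 and cap(d,e) = 30 are given in the text;
# the remaining values are assumptions for illustration.
cap = {
    ("a", "b"): 10, ("b", "d"): 30,
    ("a", "c"): 40, ("c", "d"): 40,
    ("d", "e"): 30,
}

def path_bandwidth(path, cap):
    """Bandwidth of a path = capacity of its slowest link (Definition 2)."""
    return min(cap[(u, v)] for u, v in zip(path, path[1:]))

p1 = ["a", "b", "d", "e"]
p2 = ["a", "c", "d", "e"]
print(path_bandwidth(p1, cap))  # 10, bottlenecked by link (a, b)

# After p1 consumes 10 Mbps of link (d, e), only 20 Mbps remain for p2:
residual = dict(cap)
residual[("d", "e")] -= path_bandwidth(p1, cap)
print(path_bandwidth(p2, residual))  # 20, bottlenecked by link (d, e)
```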

2.2. Participants

Definition 4.
Flow: The set of all data packets belonging to the same communication, e.g., an individual file transfer between two nodes, regardless of the interfaces used at the sender and receiver side and the path(s) between the sender and receiver, is denoted as a flow.
In Figure 1, all data transported from the sending node “a” to the receiving node “e” is a flow. The set of all traffic demands, i.e., data flows (single or multipath), is denoted as K.
Definition 5.
Subflow: A subflow occurs when a flow is split into multiple paths; it denotes the packets of a flow that use a certain interface at the sender and receiver and a certain path inside the network.
In Figure 1, there are two subflows that are equivalent to the available paths identified in Definition 2.

2.3. Fairness

Since TCP traffic is elastic, each participant—a flow or subflow—has a potentially unlimited demand which is restricted by the limited capacity of the links. The task of fairness is to share the link capacity in a proper way between the participants. From the view of a network operator, ensuring no user has to starve is a crucial aspect of maintaining user satisfaction. However, no absolute value for a guaranteed minimum capacity can be specified because it cannot be predicted how many flows will share a link.
For reasons of simplicity and better overview, in the following figures with network examples, the interfaces are omitted and the (sub)flows are drawn beside the links.
Definition 6.
Bottleneck subflow fairness (BSF): A bottlenecked link may be shared by a number of multipath TCP subflows along with legacy TCP flows which compete for the available bandwidth. The idea of BSF is that each MPTCP subflow should get the same capacity share as a TCP flow [21].
Figure 2 shows an example network scenario which illustrates the BSF mechanism. There are six nodes a, b, c, d, e, m and two flows [a,e] and [m,e] inside the network. All link capacities are given in Mbps. The multipath flow [a,e] has two subflows [a,b,d,e] and [a,c,d,e] in this scenario. Both subflows share the same bottleneck link (d,e) with the single path flow [m,e]. Thus, for bottleneck subflow fair allocation, each subflow should get the same allocation as the single path flow on the bottleneck link (d,e). This corresponds to a capacity allocation of 20 Mbps for flow [a,e] and 10 Mbps for flow [m,e].
Let $sf_i$, where $i = 1, \ldots, n$, be n legacy TCP flows or MPTCP subflows sharing a link $(g,h)$ with capacity $cap_{gh}$. If all $sf_i$ are bottlenecked on $(g,h)$, the assignment $sf\_alloc_i$ for each $sf_i$ is
$sf\_alloc_i = cap_{gh} / n \quad \text{for } i = 1, \ldots, n.$
The more common case is that a subset of the (sub)flows $sf_1, \ldots, sf_m$, $m < n$, is already bottlenecked on other links, where each of them gets an assignment $sf\_alloc_i < cap_{gh}/n$, $i = 1, \ldots, m$. The remaining $sf_{m+1}, \ldots, sf_n$, which are bottlenecked on $(g,h)$, then get the assignment
$sf\_alloc_i = \dfrac{cap_{gh} - \sum_{j=1}^{m} sf\_alloc_j}{n - m} \quad \text{for } i = m+1, \ldots, n.$
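A minimal sketch of this progressive-filling rule in Python, assuming the allocations of the externally bottlenecked (sub)flows are already known (in the MINLP model, these interdependencies are resolved by the solver):

```python
def bottleneck_share(cap_gh, external_allocs, n):
    """Fair share on link (g, h) for the (sub)flows bottlenecked there.

    cap_gh          -- capacity of the link (g, h)
    external_allocs -- allocations of the m (sub)flows already
                       bottlenecked on other links
    n               -- total number of (sub)flows on the link
    """
    m = len(external_allocs)
    return (cap_gh - sum(external_allocs)) / (n - m)

# Figure 2: three subflows share link (d, e) with 30 Mbps and none is
# bottlenecked elsewhere, so each gets 30 / 3 = 10 Mbps.
print(bottleneck_share(30, [], 3))  # 10.0
```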
Definition 7.
Bottleneck flow fairness (BFF): An MPTCP flow may have more than one subflow on the same shared bottleneck link. The BFF approach requires that all subflows of the same MPTCP flow together get the same aggregated share as a single legacy TCP flow [21]. In other words, an MPTCP user should not get an advantage by deploying multiple subflows on the same link. Practical methods based on coupling the congestion window sizes of individual subflows [22] as well as feedback-based path failure (FPF) or buffer blocking protection (BBP) [23] have been proposed to ensure that an MPTCP flow gets at least the same share as a legacy TCP flow and to avoid (sub)flows being underutilized.
In Figure 3, subflows [ a , c , d , e ] and [ a , b , d , e ] of the multipath flow [ a , e ] are coupled together and considered as a single flow on bottleneck link ( d , e ) . According to the bottleneck flow fair allocation, the bottleneck capacity of link ( d , e ) should be shared equally between the competing flows [ a , e ] and [ m , e ] . This corresponds to an allocation of 15 Mbps to each flow.
Let $f_i$, where $i = 1, \ldots, n$, be n legacy or multipath flows sharing a link $(g,h)$ with capacity $cap_{gh}$. If all $f_i$ are bottlenecked on $(g,h)$, the assignment $f\_alloc_i$ for each $f_i$ is
$f\_alloc_i = cap_{gh} / n \quad \text{for } i = 1, \ldots, n.$
The more common case is that a subset of the flows $f_1, \ldots, f_m$, $m < n$, is already bottlenecked on other links, where each of them gets an assignment $f\_alloc_i < cap_{gh}/n$, $i = 1, \ldots, m$. The remaining $f_{m+1}, \ldots, f_n$, which are bottlenecked on $(g,h)$, then get the assignment
$f\_alloc_i = \dfrac{cap_{gh} - \sum_{j=1}^{m} f\_alloc_j}{n - m} \quad \text{for } i = m+1, \ldots, n.$
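Structurally, this is the same progressive-filling rule as for BSF, only applied at flow granularity, so the bottleneck_share sketch from above can be reused unchanged. For Figure 3, the two coupled flows share the 30 Mbps link (d,e) and neither is bottlenecked elsewhere:

```python
# Figure 3: two flows share link (d, e) with 30 Mbps, neither is
# bottlenecked on another link.
print(bottleneck_share(30, [], 2))  # 15.0 Mbps for each flow
```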
Definition 8.
Network flow fairness (NFF):
In contrast to the two previously mentioned fairness methods where the bottleneck is the shared resource, for NFF, it is the entire network. If an MPTCP flow has multiple subflows, each of these subflows may propagate along a different path and thus become bottlenecked on different links and experience different amounts of congestion. The aggregated throughput of a flow is the sum of the throughputs of all subflows belonging to that particular flow. To overcome the problem of unequal share between different flows, a resource pooling (RP) algorithm has been defined which aims at balancing the amount of congestion which the different (sub)flows have to face [24]. If a link is heavily congested, the algorithm tries to find less congested alternative paths for some of the (sub)flows sharing that link so that some of the load can be removed from the link. In other words, the available links in the whole network are considered as a pool of resources which should be shared in a fair way between the participants. The benefits of balancing the load between participants using resource pooling based congestion control are decreased overall network congestion, increased efficiency and reliability [25]. This fairness mechanism is called Network Flow Fair (NFF) allocation. Based on the resource pooling principle, several congestion control algorithms have been proposed such as Linked Increases Algorithm (LIA) [26], Opportunistic Linked Increases Algorithm (OLIA) [7], Adapted OLIA [27] or Balanced link adaptation (Balia) [8] for MPTCP and Resource Pooling Multipath version 2 (RP-MPv2) for CMT-SCTP [28]. Several MPTCP algorithms have been investigated analytically concerning their TCP friendliness and stability [8].
For an NFF example, consider the network in Figure 4, which includes eight nodes a, b, c, d, e, f, m and n. The link capacities are given in Mbps. There are two flows [a,f] and [m,n] inside the network. Flow [m,n] is limited by link (m,b) and flow [a,f] is limited by the links (b,c) and (d,e). Thus, links (m,b), (b,c) and (d,e) are the bottleneck links in this scenario. Flow [m,n] cannot get more than 10 Mbps, which makes 10 Mbps the ideal fair share for flow [a,f] as well. However, allocating only 10 Mbps to both flows would not fully utilize the network capacity. After being fair, flow [a,f] can take an extra 10 Mbps from the network for efficient network utilization ("fair+spare"). For this scenario, network flow fairness allocates 20 Mbps to flow [a,f] and 10 Mbps to flow [m,n].
Due to the complexity of the interdependence between flows in a network-wide view, it is hard to give a closed expression for a fair capacity assignment as was done for BSF and BFF. It is, however, possible to express NFF in the MINLP model as specified in the paragraph about NFF in Section 3.2.

2.4. Efficiency vs. Fairness

The performance of sending data across a network can be characterized by efficiency or fairness. There are two ways to measure efficiency: on the one hand, the focus can be on the individual links inside a network, which is the method preferred in the literature, e.g., [29]; here, the load on each link in relation to the capacity of the respective link is observed. On the other hand, the entire network can be under consideration: when summing up the throughput of all flows which are transported by the network, the result is the aggregated capacity of the network. A high link utilization does not necessarily result in a high aggregated capacity; e.g., if, because of bad network design, a large number of links become bottlenecks, the link utilization is high, whereas the aggregated capacity may still be low since the transported traffic is slowed down by these bottlenecks.
In contrast to a pure view on maximizing the network utilization, one can also aim at providing maximum fairness between the allocations of capacity to a number of flows, which means in the best case that each flow gets the same amount of capacity.
In many cases, it is not possible to maximize both efficiency and fairness, i.e., they form a trade-off, so that a network operator has to find a compromise between both metrics. The scenario depicted in Figure 5 shows the relationship between network utilization and fairness. Three nodes a, b and c are connected by two links (a,b) and (b,c) which have the same capacity of 100 Mbps. There are two data flows [a,c] and [b,c]. The link (a,b) is only occupied by flow [a,c], whereas link (b,c) is used by both flows, which results in a bottleneck situation. If the aim is optimum resource utilization of the network, flow [a,c] will be allocated the entire capacities of both links, which results in full link utilization, but minimum fairness because flow [b,c] does not achieve any throughput. If the focus is on fairness, flows [a,c] and [b,c] share the bottlenecked link (b,c) with 50 Mbps each, which, however, results in the problem that link (a,b) is now only utilized with 50 Mbps. The given scenario is an example where fairness can only be achieved at the expense of network utilization. Weighting can be applied to specify to what extent efficiency or fairness should be considered when assigning resources. In the discussed example, a factor α is set to 0 if utilization should be the only goal and fairness should not be considered at all, whereas it is set to 1 if fairness should be pursued irrespective of any efficiency loss. Figure 6 shows the average link utilization dependent on α for the scenario in Figure 5. In the case of α = 0, both links are fully utilized, so the average utilization is 1. In the case of α = 1, link (a,b) has a utilization of 0.5 and link (b,c) is fully utilized, so the average utilization is 0.75.
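The text does not fix a concrete functional form for the weighting. The sketch below assumes, purely for illustration, that the bottleneck share of flow [b,c] grows linearly from 0 to its fair share of 50 Mbps as α goes from 0 to 1; this assumption reproduces the endpoint utilizations described for Figure 6.

```python
def avg_utilization(alpha):
    """Average link utilization in the Figure 5 scenario, assuming the
    bottleneck share of flow [b, c] grows linearly with alpha."""
    phi_bc = 50 * alpha                 # share of flow [b, c] on link (b, c)
    phi_ac = 100 - phi_bc               # flow [a, c] takes the rest
    util_ab = phi_ac / 100              # link (a, b) carries only flow [a, c]
    util_bc = (phi_ac + phi_bc) / 100   # link (b, c) is always fully loaded
    return (util_ab + util_bc) / 2

print(avg_utilization(0.0))  # 1.0  -> pure efficiency
print(avg_utilization(1.0))  # 0.75 -> pure fairness
```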

3. Mathematical Model

The aim of this paper is the introduction of a novel optimum mathematical method to deliver data (sub)flows across a network considering the fairness-efficiency trade-off that was already mentioned in Section 2. High efficiency means that the aggregate throughput of all (sub)flows should be maximized by utilizing the available links to the maximum possible extent. Fairness means that no (sub)flow should starve by being assigned only a small amount of link capacity. Both goals cannot be achieved at the same time; a compromise has to be identified as discussed in Section 2.4.
The resource allocation of (MP)TCP flows to bandwidth-limited links leads to a mixed-integer nonlinear programming (MINLP) problem. In general, an (N)LP problem is defined by an objective function which takes a number of input arguments that are limited by constraints. The task of the (N)LP solver is to find values for the input arguments which yield the maximum value of the objective function while satisfying the constraints. The MINLP model is developed in two steps. In Section 3.1, the basic model is shown for the case that throughput is the only quantity to be optimized. In Section 3.2, the objective function of the model is extended for the different fairness approaches.

3.1. MINLP Model without Fairness

The objective function, in case no fairness is considered, only specifies the aggregate throughput of all flows in the network. It does make use of multipath transport for a given flow, but only if a higher aggregate throughput for the entire network can be achieved. Let K be the set of all traffic demands, i.e., data flows, in the network. Let $[s,t] \in K$ be a flow between the stations s and t and $\varphi^{st}$ be the capacity which is allocated to that flow. The symbol $\varphi^{st}$, which denotes, as already said, the assigned capacity of a single or multipath flow, should not be confused with the fixed capacity $cap_{gh}$ of a link; they are not necessarily identical. Let $Alloc$ be the aggregated allocation of K as shown in Equation (5):
Aggregated allocation of all flows:
$Alloc = \sum_{[s,t] \in K} \varphi^{st}.$
In case that only the aggregated allocation should be maximized and the fairness is not considered, which is called max-flow allocation, the objective function then only consists of the variable A l l o c as given in Equation (6):
Max-flow allocation objective:
maximize: $Alloc$.
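As a minimal illustration of such an objective (not the full model with all constraints from Appendix A), the following PuLP sketch solves the max-flow allocation for the two-flow scenario of Figure 5, where link (a,b) is used only by flow [a,c] and link (b,c) is shared:

```python
from pulp import LpMaximize, LpProblem, LpVariable, value

prob = LpProblem("max_flow_allocation", LpMaximize)
phi_ac = LpVariable("phi_ac", lowBound=0)  # allocation of flow [a, c]
phi_bc = LpVariable("phi_bc", lowBound=0)  # allocation of flow [b, c]

prob += phi_ac + phi_bc         # objective: aggregated allocation Alloc
prob += phi_ac <= 100           # capacity of link (a, b)
prob += phi_ac + phi_bc <= 100  # capacity of shared link (b, c)

prob.solve()
# One optimum is phi_ac = 100, phi_bc = 0: full utilization, no fairness.
print(value(phi_ac), value(phi_bc))
```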
The constraints for the max-flow model apply to any network with single or multipath data flows. They are also used in the fairness implementations described later where they are supplemented by fairness-specific constraints. In this section, the constraints are discussed in a textual way, the formal description is given in Appendix A. Specifically, the formulations for the general constraints independent from the allocation method are given in the Appendix A.1. The equation numbers in parentheses starting with “A” refer to the Appendix A.
Flow conservation constraint (Ref. [16]): At each node except for the communication endpoints, the number of ingoing and outgoing flows must be equal; routers do not act as source or sink (Equation (A1)).
Capacity limitation constraints (Ref. [16]):
  • The overall throughput on a link, summed up for all flows, cannot be higher than the physical speed of the link (Equation (A2)).
  • A flow can only occupy capacity on a link if the latter is part of the flow’s path. The maximum available capacity is the physical speed of the link (Equation (A3)).
Path constraint: In a given network, there are interfaces which act as a data source while others become a data sink. The number of available paths cannot exceed the product of sources and sinks, i.e., the data between each pair of connected interfaces is transferred via exactly one path (Equation (A4)). Due to the fact that a subflow corresponds to exactly one path, the same constraint also holds for the number of subflows (Equation (A5)).
Multipath flow identifier constraint: In the (N)LP formulation, variables have to be specified to identify whether or not a node is part of any path between a source and a sink node, i.e., acting as an intermediate node (Equations (A6) and (A7)). Further variables identify whether a flow which makes use of a node is a multipath flow (Equations (A8) and (A9)).
Multipath subflow identification constraints: These constraints control a set of helper variables, where one variable exists for each combination of any subflow and any link. By means of these variables, the following constraints are enforced:
  • A multipath flow which does not use a particular link cannot have a subflow on that link (Equation (A10)).
  • If a link is used by a multipath flow, at least one of the subflows has to use that link (Equation (A11)).
  • A given subflow can occur at maximum once on a particular link (Equation (A12)).
  • Keep track of whether a flow has a subflow on a particular link (Equation (A13)).
  • Keep track of whether a flow divides into subflows at the source node (Equation (A14)).
  • A multipath-enabled flow may not have a subflow yet because none might have been computed yet during the solution process. After the computation, the flow may be assigned one or more than one subflow (Equation (A15)).
  • A data flow cannot be split into subflows if both end nodes only have one interface, respectively (Equation (A16)).
The previously mentioned equations concerning subflows ensure that the creation of subflows is logically correct, but do not limit the number of subflows. This is ensured by a second equation set:
  • Keep track of how many subflows a flow is split into (Equation (A17)).
  • The number of subflows between a sender and receiver node cannot be larger than the product of sender and receiver interfaces (Equation (A18)).
  • The number of subflows is equivalent to the number of paths between the sender and the receiver node (Equations (A19) to (A21)).
  • No subflow can be created if both the sender and the receiver only have one interface. Between a pair of sender and receiver interfaces, at a maximum, one subflow can be created (Equations (A22) to (A24)).
Congested links’ constraints for flows: The aim of transport protocols is to maximize link utilization in order to make efficient usage of the network. This means that each flow should experience at least one congested link; if all links which are occupied by a flow had capacity left, it would mean that there is space left for additional throughput. The corresponding equations express the following constraints:
  • There is at least one congested link for each flow (Equation (A25)).
  • A congested link is fully utilized by flows; there is no capacity left (Equation (A26)).
  • A link can be on a path of a flow even though it might not be fully utilized; the bottleneck link of a subflow must be on the path allocated to that subflow (Equation (A27)).
Congested link constraints for subflows: If a flow splits into multiple subflows, each of them should encounter at least one congested link, as is the case for single-path flows. It can, however, happen that on a given link two or more subflows belong to the same flow. The following constraints describe links congested by subflows:
  • A necessary condition that a link is identified as congested by a particular subflow is that the subflow exists on that link (Equation (A28)).
  • Each subflow has at least one congested link (Equation (A29)).
  • A subflow cannot be congested on a link if the parent flow is not congested on that link (Equation (A30)).
  • If a flow is bottlenecked on a link, then all its subflows are bottlenecked on that link (Equation (A31)).
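The exact appendix formulations are not reproduced here; however, constraints of this kind are typically linearized with binary indicator variables and a big-M constant. The following PuLP fragment is a sketch of that pattern under assumed data (the variable names and the two-link example are illustrative, not taken from the paper): a binary y marks a link as congested, a congested link must be fully loaded, and each flow must see at least one congested link on its path.

```python
from pulp import LpBinary, LpMaximize, LpProblem, LpVariable, lpSum

# Illustrative data: links with capacities and, per flow, the set of
# links on its path (assumed fixed here; the MINLP also chooses paths).
links = {("a", "b"): 100, ("b", "c"): 100}
paths = {"f1": [("a", "b"), ("b", "c")], "f2": [("b", "c")]}
M = sum(links.values())  # big-M constant, larger than any possible load

prob = LpProblem("congestion_indicators", LpMaximize)
phi = {f: LpVariable(f"phi_{f}", lowBound=0) for f in paths}
y = {l: LpVariable(f"y_{l[0]}_{l[1]}", cat=LpBinary) for l in links}

for l, capacity in links.items():
    load = lpSum(phi[f] for f, p in paths.items() if l in p)
    prob += load <= capacity                      # capacity limit (cf. A2)
    prob += capacity - load <= M * (1 - y[l])     # y = 1 forces a full link (cf. A26)

for f, p in paths.items():
    prob += lpSum(y[l] for l in p) >= 1           # each flow is bottlenecked
                                                  # somewhere on its path (cf. A25)

prob += lpSum(phi.values())                       # maximize aggregated allocation
prob.solve()
```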

3.2. Extension of the Objective Function for Fairness

The objective function given in Equation (5) does not consider fairness but is only targeted at maximizing the aggregate throughput. In this section, the objective function is extended for the different fairness methods BSF, BFF and NFF. The formulations for the constraints specific for the different allocation methods are given in the Appendix A.2.

3.2.1. Bottleneck Subflow Fair Allocation (BSF)

BSF ensures equal share between multiple subflows on a common congested link as described in Definition 6. An example for a network where BSF is applied was already given in Figure 2.
In order to formulate bottleneck subflow fair allocation, let $m^{st}$ be the end-to-end capacity allocation to a multipath flow $[s,t] \in K$, with $m^{st} = 0$ if the flow $[s,t] \in K$ is a single path flow. The value $m^{st}$ should not be confused with $\varphi^{st}$, which is the allocation to any flow, irrespective of single or multipath, or with $cap_{ij}$, which is the physical capacity of a link.
In the simplest case, the objective function for BSF is the same as the previously defined max-flow Equation (6), which does not make use of multiple paths if a single-path solution yields a better result. The objective function is therefore written as:
BSF-I:
maximize: $Alloc$.
The variable A l l o c is the aggregated allocation of all flows (multipath + single path).
The difference between pure max-flow and BSF-I with maximum allocation is that BSF includes additional constraints enforcing fairness which are applied in addition to the general constraints from Section 3.1:
  • All subflows which share the same bottlenecked link should get the same share (Equations (A40) and (A41)).
  • It has to be ensured that a single-path flow gets the same share as a subflow of a multipath flow (Equations (A42) and (A43)).
BSF-I supports multipath, but does not make use of it if single-path yields a better allocation. The goal is, however, to push the system towards multipath usage which means that the latter should be rewarded to enhance fairness. Let M be the aggregated allocation of all multipath flows as shown in Equation (8)
$M = \sum_{[s,t] \in K} m^{st},$
where the sum effectively covers the multipath flows in K, since $m^{st} = 0$ for single path flows. The objective function is then extended as:
BSF-II:
maximize: $Alloc + \beta \cdot M$,
where β is a positive constant to provide a weighting between prioritizing maximum capacity and using multiple paths. The elements m s t which are summed up to M are determined by constraints given in the Appendix A.2.1 in Equations (A32) to (A34). Further constraints control the mapping between flows, subflows and links:
  • A subflow only gets an allocation on a particular link if the link is part of the subflow’s path (Equations (A35) and (A36)).
  • The allocation of a flow is greater than or equal to the allocation of its subflows on a given link (Equations (A37) and (A38)).
  • A subflow gets the same allocation on all links which are part of the subflow’s path (Equation (A39)).
Finally, for BSF-II, the constraints controlling the fairness as mentioned for BSF-I also need to be applied.
Practical multipath implementations as they are e.g., known from MPTCP always make use of multiple paths if the end nodes are equipped with multiple interfaces, even if a single-path solution might yield a higher aggregated throughput. This happens e.g., if a subflow shares more than one bottleneck link with other (sub)flows. Therefore, BSF-II reflects the behavior of practical protocols in a better way than BSF-I.

3.2.2. Bottleneck Flow Fair Allocation (BFF)

The Bottleneck Flow Fair (BFF) allocation method assigns the same capacity to all flows on a shared bottleneck link, irrespective of whether a flow occupies the link with more than one subflow, as explained in Definition 7. In other words, BFF ensures that there is no advantage for multipath flows over single path flows competing for the same link. An example for a network where BFF is applied was already given in Figure 3.
As it was shown for BSF, there are two ways to specify the objective function for BFF, namely maximizing the aggregated allocation or supporting a high amount of flows using multipath transport. This results in the objective functions BFF-I (Equation (10)) and BFF-II (Equation (11)) which are identical to the ones specified for BSF:
BFF-I:
maximize: $Alloc$,
BFF-II:
maximize: $Alloc + \beta \cdot M$,
where A l l o c , β and M are defined in the same way as for BSF.
The constraints which have to be applied specifically for BFF are:
  • All flows on a shared bottleneck link should get the same allocation (Equations (A44) and (A45)).
  • Optionally, all subflows of a multipath flow which share the same bottleneck link are assigned the same allocation (Equations (A46) and (A47)).
After performing the aforementioned operation, BFF optionally takes a second step if there are flows which are represented by more than one subflow on a particular link. For each of these flows, BFF tries to share the capacity assigned to the respective flow equally among the subflows on that link.

3.2.3. Network Flow Fair Allocation (NFF)

In Network Flow Fairness, the entire network is the resource whose capacity should be equally shared between flows. This network capacity is determined by summing up the throughput of all flows inside the network, or of their subflows in the case of multipath transport. Unlike the link capacity, which can be easily specified due to the physical nature of a link, the network capacity is difficult to calculate, as already mentioned in Definition 8. It depends on the topology of the network, the location of source and sink nodes and which flows or subflows get bottlenecked on which link. Furthermore, there is a trade-off between the goals of resource usage maximization and fairness as discussed in Section 2.4: using multipath might on the one hand be desired to maximize network utilization, but on the other hand be unwanted for fairness reasons, to avoid a flow occupying capacity on multiple paths which should be assigned to other flows. An example for a network where NFF is applied was already given in Figure 4.
When the entire network is the resource, fairness cannot be expressed by link-level constraints in the LP formulation but can only be included in the objective function. It is proposed to reflect the network fairness by a negative term whose absolute value increases in case of a large difference between the capacity assignments among the flows.
The objective function for NFF is developed in multiple steps. After each step, an example of a network architecture is shown where the fairness methods specified up to that step fail to ensure fair allocation of the flows and thus require an extension which is then shown in the next step; the respective objective functions are denoted as NFF-objective-I, -II, etc.
Let $flow\_diff^{st}$ be the sum of the allocation differences between flow $[s,t] \in K$ and all other flows $[q,r] \in K$. $\delta_{all}$ estimates the total allocation difference between all flows, regardless of whether the flows share a common congested link or not. In Equation (13), the denominator of 2 is introduced because the allocation difference between each pair of flows is added twice, once for each direction of the comparison:
$flow\_diff^{st} = \sum_{(q,r) \in K} \left| \varphi^{st} - \varphi^{qr} \right| \quad \forall [s,t] \in K,$
$\delta_{all} = \dfrac{1}{2} \sum_{[s,t] \in K} flow\_diff^{st}.$
The objective function NFF-objective-I (Equation (14)) maximizes the difference between the aggregated allocation ($Alloc$) and the total allocation difference among the flows ($\delta_{all}$). Though it seems that the objective function provides a network flow fair solution, it does not always fully utilize the available network capacity, as shown in the previously discussed scenario in Figure 4. As explained earlier, according to the network flow fair allocation method, flow [a,f] should get 20 Mbps and flow [m,n] should get 10 Mbps from the network. This implies an objective function value of 20 ($Alloc = 20 + 10$, $\delta_{all} = 20 - 10$) for this scenario. However, if each flow gets 10 Mbps, then the optimum value of the objective function remains the same ($10 + 10 - 0 = 20$), as there is no difference between the flow allocations. Thus, both allocations are valid and optimum for this scenario; however, they differ in fairness:
NFF-objective-I:
maximize: $Alloc - \delta_{all}$.
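A short sketch makes the tie concrete. The helper below computes $\delta_{all}$ as half the sum of pairwise absolute allocation differences and evaluates NFF-objective-I for both candidate allocations of the Figure 4 scenario:

```python
from itertools import permutations

def delta_all(allocs):
    """Half the sum of pairwise absolute allocation differences;
    permutations yields each unordered pair twice, hence the division."""
    return sum(abs(a - b) for a, b in permutations(allocs, 2)) / 2

def nff_objective_1(allocs):
    return sum(allocs) - delta_all(allocs)

print(nff_objective_1([20, 10]))  # 30 - 10 = 20 ("fair + spare")
print(nff_objective_1([10, 10]))  # 20 -  0 = 20 (network underutilized)
```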
Nevertheless, if the solver allocates 10 Mbps to each flow, the network capacity is not fully utilized, which is not desirable. On closer inspection, it is visible that the flow [a,f] does not utilize multipath even though the flow is multipath capable and would not affect any other flow. Based on these insights, the objective function is extended to Equation (15) in order to push the system towards multipath if flows are multipath capable:
NFF-objective-II:
maximize: $Alloc - \delta_{all} + M$.
M is the aggregated allocation of all multipath flows as defined in Equation (8). Though the objective function solves the above-mentioned problem for the scenario in Figure 4, adding M to the objective function might not be enough for ideal network flow fair allocation. Consider the network scenario in Figure 7. The network is composed of eight nodes a, b, c, d, e, f, m and n. All link capacities are in Mbps. There are five flows [e,a], [a,d], [b,f], [m,c] and [d,n] inside the network. The ideal network flow fair solution would be flow [e,a] = 10 Mbps, [a,d] = 110 Mbps, [b,f] = 10 Mbps, [m,c] = 10 Mbps and [d,n] = 10 Mbps. Now, the value of the objective function NFF-objective-II can be calculated as follows:
$Alloc = 10 + 110 + 10 + 10 + 10 = 150,$
$\delta_{all} = (110 - 10) + (110 - 10) + (110 - 10) + (110 - 10) = 400,$
$M = 110,$
$\text{NFF-objective-II} = 150 - 400 + 110 = -140.$
If the allocations of the flows are [e,a] = 10 Mbps, [a,d] = 10 Mbps, [b,f] = 10 Mbps, [m,c] = 10 Mbps and [d,n] = 10 Mbps, where the multipath capable flow [a,d] does not utilize multipath, then the value of the objective function NFF-objective-II is:
$Alloc = 10 + 10 + 10 + 10 + 10 = 50,$
$\delta_{all} = 0,$
$M = 0,$
$\text{NFF-objective-II} = 50 - 0 + 0 = 50.$
This means that the latter allocation, where flow [a,d] does not utilize multipath, is the optimum solution of the objective function NFF-objective-II for this scenario. If flow [a,d] were assigned the 100 Mbps path via node c in addition to the 10 Mbps path via node b, there would be a high allocation difference between flow [a,d] and each of the other flows. The penalization of this high difference by the negative $\delta_{all}$ element in the objective function outweighs the multipath capacity gain M.
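Reusing the delta_all helper from above, this comparison for the Figure 7 scenario can be checked numerically (here M is supplied by hand, since determining which flows count as multipath is part of the model):

```python
def nff_objective_2(allocs, m_total):
    return sum(allocs) - delta_all(allocs) + m_total

print(nff_objective_2([10, 110, 10, 10, 10], 110))  # 150 - 400 + 110 = -140
print(nff_objective_2([10, 10, 10, 10, 10], 0))     #  50 -   0 +   0 =   50
```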
In order to tackle this problem, a multiplication factor $(|K| - 1)$ for the capacity gain is introduced, where $|K|$ is the number of flows inside the flow set K and $M_{gain}$ is the overall allocation gain due to multipath. The modified objective function is given in Equation (16):
NFF-objective-III:
maximize: $Alloc - \delta_{all} + (|K| - 1) \cdot M_{gain}$.
To formulate the equation sets for calculating $M_{gain}$, let $max\_sf\_val^{st}$ be the maximum subflow allocation of the multipath flow $[s,t] \in K$. If the flow $[s,t] \in K$ is a single path flow, then $max\_sf\_val^{st} = 0$. $max\_sf\_val^{st}$ is the equivalent single path allocation of the multipath flow $[s,t] \in K$ because, in single path allocation, a flow tries to take the path from which it can get the maximum capacity:
$max\_sf\_val^{st} = \max_{o,h \in L,\; (i,j) \in A} sf\_alloc^{st,oh}_{ij} \quad \forall [s,t] \in K.$
The variable $sf\_alloc^{st,oh}_{ij}$ specifies the capacity allocation to the subflow between the interfaces o and h, where the subflow is part of the flow from s to t and uses the link between nodes i and j.
The overall allocation gain due to multipath, $M_{gain}$, is the difference between the aggregated allocation when the flows inside the network utilize multipath and when the flows would use single paths only. Let $Alloc_{single\,path}$ be the aggregated allocation when each flow would use a single path:
$Alloc_{single\,path} = \sum_{[s,t] \in K} \varphi^{st}_{single\,path} + \sum_{[s,t] \in K} max\_sf\_val^{st},$
where $\varphi^{st}_{single\,path}$ is the allocation of the single path flow $(s,t) \in K$, which is computed by the constraints given in Equations (A48) to (A50) in Appendix A.2.3.
The overall allocation gain due to multipath is then:
$M_{gain} = Alloc - Alloc_{single\,path}.$
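A sketch of this bookkeeping, assuming the per-flow allocations and the best single-path equivalents are already known:

```python
def m_gain(phi, max_sf_val, phi_single_path):
    """Overall allocation gain due to multipath.

    phi             -- per-flow allocations in the multipath solution
    max_sf_val      -- best single-path equivalent per multipath flow
                       (0 for single path flows)
    phi_single_path -- allocations of the single path flows
                       (0 for multipath flows)
    """
    alloc = sum(phi)
    alloc_single_path = sum(phi_single_path) + sum(max_sf_val)
    return alloc - alloc_single_path

# Figure 10: [a,f] = [e,h] = 30 Mbps via two paths; the best single path
# of each is assumed to carry 20 Mbps, consistent with the 10 Mbps gain
# per multipath flow stated in the text; [m,n] = 10 Mbps single path.
print(m_gain([30, 30, 10], [20, 20, 0], [0, 0, 10]))  # 20
```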
Though the objective function NFF-objective-III (Equation (16)) overcomes the limitation of the objective function NFF-objective-II for the scenario in Figure 8, it may not always provide the optimum allocation. For example, consider the scenario in Figure 9. The allocation of the flow [m,n] is limited by the links (c,d) and (g,h). Flow [a,f] is limited by the link (a,b) and the flow [e,h] is limited by the link (g,h). This leads to the ideal network flow fair allocation of 20 Mbps to each flow by utilizing the whole network capacity. The corresponding value of the objective function is NFF-objective-III = $(20 + 20 + 20) - 0 + (3 - 1) \times 0 = 60$. However, if the allocation is chosen as in Figure 10, where the allocations of the flows [a,f], [e,h] and [m,n] are 30, 30 and 10 Mbps, respectively, flows [a,f] and [e,h] limit the allocation of the flow [m,n] by utilizing multipath. In this case, the multipath gain of each individual multipath flow is 10 Mbps. This results in a value of the objective function of NFF-objective-III = $(30 + 30 + 10) - ((30 - 30) + (30 - 10) + (30 - 10)) + (3 - 1) \times (10 + 10) = 70$, which is higher than for the previous allocation. Thus, the objective function NFF-objective-III would allocate 30 Mbps to the flows [a,f] and [e,h], whereas flow [m,n] would be allocated 10 Mbps, which is not an optimum network flow fair solution.
Going back to the objective function NFF-objective-II, the reason behind the problem in Figure 8 was that all flows are compared for the network flow fair solution even if they do not share any congested link, i.e., they are disjoint. Comparing all flows might create higher allocation differences which cannot be compensated by the M value. Introducing $M_{gain}$ with the multiplication factor $(|K| - 1)$ is more biased towards multipath allocation, which has a negative effect on fairness. This results in the idea of comparing flows only when they share a congested link.
To compare flows according to shared congested links, let $cong\_flow\_diff^{st}_{ij}$ be the sum of the allocation differences between the flow $[s,t] \in K$ and all other flows sharing the congested link $(i,j) \in A$ with the flow $[s,t]$. In addition, consider a binary variable $cong\_group\_id^{st}_{ij}$ which identifies whether the flow $[s,t] \in K$ shares the congested link $(i,j) \in A$: $cong\_group\_id^{st}_{ij} = 1$ when the flow $[s,t] \in K$ shares the congested link $(i,j) \in A$ with other flows. Equation (20) means that when the flow $[s,t] \in K$ and another flow $[q,r] \in K$ share the same congested link $(i,j) \in A$, i.e., $cong\_group\_id^{st}_{ij} = 1$ and $cong\_group\_id^{qr}_{ij} = 1$, then the allocation difference between the two flows contributes to $cong\_flow\_diff^{st}_{ij}$. After calculating the allocation differences within each congested flow group, $\delta_{congested}$ sums up the total allocation difference of all those flows:
$cong\_flow\_diff^{st}_{ij} = \sum_{(q,r) \in K} cong\_group\_id^{st}_{ij} \cdot \left| \varphi^{st} - \varphi^{qr} \right| \cdot cong\_group\_id^{qr}_{ij} \quad \forall (i,j) \in A, \forall [s,t] \in K,$
$\delta_{congested} = \dfrac{1}{2} \sum_{[s,t] \in K} \sum_{(i,j) \in A} cong\_flow\_diff^{st}_{ij}.$
Formulating the equation sets to identify the flows sharing the same congested link requires that all links which are congested for the flows $[s,t] \in K$ are identified. A link is said to be congested only when the link is fully utilized. In the constraints, a number of variables are defined which keep track of whether a link is fully loaded, how many congested flows are on the link and whether these flows share the link with flows which are congested on another link.
Equation (22) is the revised objective function for network flow fair allocation, comparing only the flows sharing the same congested link. $\delta_{congested}$ depends on the allocation of flows sharing a congested link, which might lead to a smaller number of common congested links among the flows; this means multipath capable flows might not use multipath in cases where using multipath would create a common congested link with other flows. Consider the scenario in Figure 11 with two flows [a,f] and [m,n]. The allocation of the multipath capable flow [a,f] is limited on the links (a,b) and (d,e); the single path flow [m,n] is limited on the link (m,b). The resulting maximum allocation is 20 Mbps for the flow [a,f] and 90 Mbps for the flow [m,n]. This leads to the network flow fair solution of 20 Mbps for the flow [a,f] and 90 Mbps for the flow [m,n], sharing the congested link (b,c). Since the flows share a congested link, their allocation difference is considered in the variable $\delta_{congested}$ of the objective function NFF-objective-IV. M is the total allocation to multipath flows as defined in Equation (8). The value of the objective function for this allocation is $(20 + 90) - (90 - 20) + 20 = 60$:
NFF-objective-IV:
maximize: $Alloc - \delta_{congested} + M$.
The value of δ congested is determined by a number of equations which perform the following tasks:
  • Compute the allocation on a link and whether it is congested (Equations (A51) and (A52)).
  • Compute how many flows on a link are congested (Equations (A53) to (A55)).
  • Check whether a given flow shares a link with other flows (Equations (A56) to (A58)).
However, if the flow [a,f] uses only one path as in Figure 12 and does not use the path [a,b,c,f], then the flows [a,f] and [m,n] do not have a shared congested link, which nullifies the values of $\delta_{congested}$ and M. In that case, the allocations of the flows are [a,f] = 10 Mbps and [m,n] = 90 Mbps. This implies a value of the objective function NFF-objective-IV of $(10 + 90) - 0 + 0 = 100$, which is higher than for the previous allocation. The optimum allocation by the objective function NFF-objective-IV is then 10 Mbps to the flow [a,f] and 90 Mbps to the flow [m,n], which is not an ideal network flow fair solution for this scenario.
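The failure case can again be checked with a few lines, computing $\delta_{congested}$ only over flows that share a congested link (the sharing flags are supplied by hand here; in the model they follow from the constraints (A51) to (A58)):

```python
def nff_objective_4(allocs, shares_congested_link, m_total):
    """shares_congested_link[i] is 1 if flow i shares a congested link."""
    delta = sum(abs(a - b) * fa * fb
                for (a, fa) in zip(allocs, shares_congested_link)
                for (b, fb) in zip(allocs, shares_congested_link)) / 2
    return sum(allocs) - delta + m_total

print(nff_objective_4([20, 90], [1, 1], 20))  # 110 - 70 + 20 =  60
print(nff_objective_4([10, 90], [0, 0], 0))   # 100 -  0 +  0 = 100
```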
All versions of the objective function discussed until now may still utilize multipath insufficiently in some scenarios: multipath may be deployed although it degrades fairness, or it may not be deployed despite the requirement of efficient network usage. Therefore, the goal is to identify an objective function which does not affect fairness while maintaining multipath so that it can achieve optimum results for any scenario. In order to specify such an objective function, a counter SF for the total number of subflows assigned to all flows is introduced:
$SF = \sum_{[s,t] \in K} n\_sf^{st}.$
The objective function for the optimum network flow fair solution is formulated in Equation (24), where β is a constant multiplication factor which is used to enforce multipath. The value of β must be larger than the total network capacity to eliminate the effect of network utilization (Alloc) and fairness (δ) on multipath usage. Moreover, SF has a negligible adverse effect on fairness because SF only specifies the number of subflows and does not have any effect on the amount of the subflow allocation, which means fairness can be adjusted by providing a very small allocation (as small as 0.01 units) to unnecessary subflows. On the other hand, by using all possible paths for multipath flows, total network capacity utilization is ensured by the congested link identifier constraints described earlier in this section:
NFF-objective-V:
maximize: $Alloc - \delta_{congested} + \beta \cdot SF$.
However, NFF-objective-V still might not yield an optimum result if a subflow is assigned a path with a link not used by other flows and that link contributes a smaller capacity to the subflow than a bottlenecked but faster link on a different path. The solver would select the unused link to keep $\delta_{congested}$ low while, on the other hand, the Alloc gain would suffer. This suggests that achieving a network flow fair solution which works for any type of scenario is improbable with a single objective function.
In order to cope with the conflict of considering both network allocation and fairness in the objective function in any kind of scenario, the optimization needs to be performed in two steps. In both of them, network utilization as well as fairness are optimized; however, the first step puts the priority on fair capacity assignment, whereas, in the second step, a high network utilization is preferred.
Equations (25) and (26) show the final objective function pair NFF-objective-VI of the two-step process, which yields the optimum network flow fair solution. In the first step, a minimum allocation is determined for each flow while still utilizing the network capacity as much as possible. The flow allocation of step 1 is used as the minimum allocation constraint for step 2 to ensure fairness in step 2; $min\_alloc^{st}$ is the allocation of the flow $[s,t] \in K$ in step 1. Equation (27) makes sure that each flow gets the minimum fair share from the network in step 2. If any network capacity has not been allocated, which might affect fair allocation, step 2 makes sure to utilize the spare capacity by allocating it fairly among the competing flows. In Equation (26), $|K|$ is the total number of flows inside the network. The $|K|$ multiplication factor helps to prioritize the network utilization (Alloc) over fairness ($\delta_{congested}$) in step 2.
NFF-objective-VI
Step 1:
maximize: $Alloc - \delta_{congested} + \beta \cdot SF$,
Step 2:
maximize: $|K| \cdot Alloc - \delta_{congested}$,
Minimum allocation constraint for step 2:
$\varphi^{st} \geq min\_alloc^{st} \quad \forall [s,t] \in K.$
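To illustrate the control flow of the two-step process without the full MINLP machinery, the following sketch brute-forces NFF-objective-VI over a hand-made candidate set for the Figure 11 scenario; the allocations and $\delta_{congested}$ values are taken from the discussion above, the subflow counts are illustrative, and β is simply chosen larger than the total network capacity.

```python
# Candidate allocations for Figure 11; in the real model, each step is
# an MINLP solve over the full feasible region rather than a list.
candidates = [
    # SF counts paths in use (illustrative): multipath [a,f] has two.
    {"phi": {"af": 20, "mn": 90}, "delta": 70, "sf": 3},  # multipath [a,f]
    {"phi": {"af": 10, "mn": 90}, "delta": 0,  "sf": 2},  # single path [a,f]
]
beta = 1000  # must exceed the total network capacity

def step1(c):  # fairness first, multipath enforced via beta * SF
    return sum(c["phi"].values()) - c["delta"] + beta * c["sf"]

best = max(candidates, key=step1)
min_alloc = best["phi"]  # minimum allocation constraint for step 2

feasible = [c for c in candidates
            if all(c["phi"][k] >= min_alloc[k] for k in min_alloc)]

def step2(c):  # utilization first, weighted by |K|
    return len(c["phi"]) * sum(c["phi"].values()) - c["delta"]

final = max(feasible, key=step2)
print(final["phi"])  # {'af': 20, 'mn': 90}, the network flow fair solution
```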

4. Performance Metrics

The methods to assign link capacity resources in the network to flows discussed in the previous section are evaluated using metrics describing the amount of allocation, the fairness and combinations of both. Definitions of these metrics are given in this section.

4.1. Aggregate Allocation

The aggregate allocation, which is a metric for the network utilization efficiency, is the sum of the throughputs achieved by all (sub)flows in the network:
$Alloc = \sum_{[s,t] \in K} \varphi^{st}.$

4.2. Jain’s Fairness Index ( γ Jain )

In the literature, indexes were developed in order to assess the fairness of resource assignment in networks. A well-known fairness index was proposed by Jain [31], which yields a positive real value; it is k/n in case k out of n users get an equal allocation, whereas the other n − k users are not assigned any capacity. The minimum is 1/n if only one user out of n gets an assignment at all; the maximum is 1 in the case that all users get the same assignment:
$\gamma_{Jain} = \dfrac{\left( \sum_{[s,t] \in K} \varphi^{st} \right)^2}{|K| \cdot \sum_{[s,t] \in K} \left( \varphi^{st} \right)^2}.$

4.3. Vardalis’s Fairness Index ( γ Vardalis )

This fairness index has been introduced in [32]. It ranges from 0 to 1 for any number of flows. The index is 1 when all flows receive an equal allocation. If, out of n flows, k equally share all the bandwidth and the remaining n − k are idle, the fairness index is (k − 1)/(n − 1):
$\gamma_{Vardalis} = 1 - \dfrac{\sum_{[s,t] \in K} \left| \varphi^{st} - Avg \right|}{2 \left( |K| - 1 \right) \cdot Avg},$
where Avg is the average allocation over all flows.
The numerator of the fraction in Equation (30) is the sum of the absolute differences of each flow's allocation from the average allocation Avg. The larger this sum, the more unfair the system is. There are differences between Jain's fairness index (Equation (29)) and Vardalis's fairness index:
  • Jain’s fairness index ranges from 1/n to 1, whereas Vardalis’s fairness index ranges from 0 to 1, n being the number of flows. For example, if there are two flows, one with a certain allocation and the other with zero allocation, then Jain’s fairness index will be 0.5, whereas Vardalis’s fairness index will be 0.
  • Vardalis’s fairness index is more sensitive to changes, especially when the number of flows is small. For instance, in the case of two flows where one flow achieves 50% more bandwidth than the other, Jain’s fairness index will be 0.96, whereas Vardalis’s fairness index will be 0.8. When one flow receives exactly twice as much bandwidth as the other, Jain’s fairness index will be 0.9, while Vardalis’s fairness index will be 0.66.
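Both indexes are straightforward to compute; the short sketch below reproduces the comparison values from the list above:

```python
def jain(allocs):
    """Jain's fairness index, Equation (29)."""
    return sum(allocs) ** 2 / (len(allocs) * sum(a * a for a in allocs))

def vardalis(allocs):
    """Vardalis's fairness index, Equation (30)."""
    avg = sum(allocs) / len(allocs)
    return 1 - sum(abs(a - avg) for a in allocs) / (2 * (len(allocs) - 1) * avg)

print(jain([1.0, 1.5]), vardalis([1.0, 1.5]))  # 0.96, 0.8
print(jain([1.0, 2.0]), vardalis([1.0, 2.0]))  # 0.9,  0.66 (i.e., 2/3)
```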

4.4. Normalized Metrics

All fairness indexes mentioned above assume maximum fairness, i.e., yield a value of 1, only if the same capacity is assigned to all end-to-end demands. Dependent on the fairness method being used, e.g., BSF, where the fair distribution of resources is focused on a link and not end-to-end, the fairness index may yield a value smaller than 1 even if the maximum possible fairness is achieved. For this reason, the fairness index results which are computed from the LP solutions are normalized by those from the algorithmic solution. In case the normalized index is $\gamma_{norm} = 1$, this confirms that the LP determined an optimum solution.
The normalized versions of the aggregated allocation, Jain’s fairness index and Vardalis’s fairness index are defined in Equations (31) to (33):
$Alloc_{norm} = \frac{Alloc_{LP}}{Alloc_{Algorithm}},$ (31)
$\gamma_{Jain,norm} = \frac{\gamma_{Jain,LP}}{\gamma_{Jain,Algorithm}},$ (32)
$\gamma_{Vardalis,norm} = \frac{\gamma_{Vardalis,LP}}{\gamma_{Vardalis,Algorithm}}.$ (33)
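Applied to concrete numbers, the normalization is a simple element-wise division of the LP metrics by the algorithmic reference; the values below are hypothetical:

```python
lp        = {"alloc": 35.0, "jain": 0.845, "vardalis": 0.571}
algorithm = {"alloc": 35.0, "jain": 0.845, "vardalis": 0.571}

norm = {k: lp[k] / algorithm[k] for k in lp}   # Equations (31)-(33)
print(norm)  # all 1.0: the LP reproduced the algorithmic optimum
```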

4.5. Product of Efficiency and Jain’s Fairness Index

It was discussed earlier that efficiency and fairness are related by a trade-off. It is useful to define a performance metric which takes both into account; therefore, the product of the aggregate allocation and the well-known Jain’s fairness index, Alloc · γ_Jain, is considered as well in this analysis:
$Alloc \cdot \gamma_{Jain} = \sum_{[s,t] \in K} \varphi_{st} \cdot \frac{\left( \sum_{[s,t] \in K} \varphi_{st} \right)^2}{|K| \cdot \sum_{[s,t] \in K} (\varphi_{st})^2}.$ (34)
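Putting the metrics of this section together: for the scenario 1 allocation from Table 2 (flows [a,f] = 25.0 Mbps and [m,n] = 10.0 Mbps, identical for all methods), the following plain-Python computation reproduces the first row of Table 10:

```python
def metrics(alloc):
    n, total = len(alloc), sum(alloc)
    avg = total / n
    jain = total ** 2 / (n * sum(x * x for x in alloc))
    vardalis = 1 - sum(abs(x - avg) for x in alloc) / (2 * (n - 1) * avg)
    return total, jain, vardalis, total * jain

print(metrics([25.0, 10.0]))  # (35.0, 0.845, 0.571, 29.6) after rounding
```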

5. Performance Evaluation

This section includes validation and performance comparisons of the different allocation models. In order to validate the developed fair allocation methods, eight network topologies are considered which represent different topology types such as a linear network, a network with two alternative paths, a full-meshed network, etc. The details about the topologies are given in Figure 13, Figure 14, Figure 15, Figure 16, Figure 17, Figure 18, Figure 19 and Figure 20. In the figures, the blue circles identify the nodes, the connectors represent the links, and the arrow on each link specifies the direction of the data flow if the link is used. The numbers beside the connectors specify the respective link speeds.
For each of the topologies, different scenarios are created by using different nodes as senders and receivers for data flows and in some cases the capacities of individual links are changed, resulting in 31 scenarios. The link speeds are given in the respective topology figures, whereas the data flows injected into the topologies are given in Table 2, Table 3, Table 4, Table 5, Table 6, Table 7, Table 8 and Table 9. The same tables also show the end-to-end throughput which each flow can achieve within a given scenario considering the different fairness methods discussed in Section 3.2. The NFF column in the tables shows the results for the NFF-objective-VI function specified in Equations (25) and (26).
The computation of the linear programming solutions is performed with the AIMMS (non-)linear programming software system [33]. The formulations for BSF and BFF are specified as a mixed-integer linear programming (MILP) problem (source code available at [34]) which is processed by the CPLEX solver included in the AIMMS system. The NFF solution contains nonlinear components and is computed using AIMMS’s Outer Approximation (AOA) module.
The AIMMS software does, however, not support batch jobs; the problem description had to be entered manually into a GUI, so an automated evaluation of different network scenarios could not be performed. For this reason, the evaluation is performed with a selected number of scenarios.
The multipath fair capacity allocation models developed in Section 3 are based on fair capacity allocation to individual flows while utilizing the maximum possible network capacity. It is essential to validate that the developed models indeed provide the optimum solution: if they yield the ideal fair allocation for every scenario, they can be confirmed as optimum fair solutions. As an alternative solution which complements the linear programming approach, algorithms which provide an ideal solution for a network scenario under the different fair allocation methods have been proposed in [1]. These algorithms resemble the way one would solve the problem “manually” when looking at it.
The values of the performance metrics for BSF, BFF and NFF are listed in Table 10, Table 11, Table 12, Table 13 and Table 14. For all investigated scenarios, the performance metrics yield the same value for both the linear programming solution and the algorithmic solution; therefore, the results from the algorithmic solution are not given separately. The identity of the LP and the algorithmic values confirms the linear programming model as the optimum solution. However, the fairness indexes which measure the fairness performance are biased towards equal allocation to the flows and neglect the fairness criteria of the individual allocation methods. Therefore, as already mentioned in Section 4.4, the fairness indexes computed by the LP need to be normalized with respect to the respective algorithmic fairness indexes. These normalized values are shown in Figure 21, Figure 22, Figure 23 and Figure 24.
Figure 21 depicts the aggregate capacity allocation of all flows in the network. All fairness methods achieve the same network utilization for the different scenarios. Exceptions are scenarios 23, 25 and 26, where BSF and BFF yield a value slightly greater than 1. This means that the BSF and BFF methods perform slightly better in these cases than NFF, against which the BSF and BFF performance is normalized. The topologies deployed in these scenarios include a single link which connects two subnetworks; bottleneck-based fairness methods appear to be better able to cope with such a situation. Figure 22, Figure 23 and Figure 24 show the results for the fairness metrics for bottleneck subflow fair and bottleneck flow fair, again normalized by the metrics for network flow fair to provide comparability between the different scenarios. Most values range between 0.8 and 1 for BSF and BFF, which means that NFF, which is always one by definition, achieves better fairness. A direct comparison between the different BSF and BFF methods shows, however, no significant difference. There is a slight trend that fairness degrades in scenarios 29 to 31, which are all based on the complex meshed network topology shown in Figure 20, with different flow sets. Since this is one of the most complex topologies investigated, this indicates that realizing fairness for more complex setups is more difficult than for simpler ones.
The examples show how optimum resource allocation and fairness can be achieved under ideal conditions, i.e., perfect global knowledge of the network topology and of all flow demands which are injected at different nodes. Practical congestion control methods for multipath transport are restricted to observations at the end nodes, e.g., an increase of the delay or packet losses. With this limited knowledge, they have to draw conclusions about the conditions in the network, which obviously cannot be optimum. The theoretical methods can, however, serve as a benchmark to show how well a practical multipath transport implementation can fulfill performance and fairness requirements, in other words, how closely it approaches the optimum.

6. Conclusions

The aim of this work was a rigorous analysis of fairness in multipath transport. Formal definitions for the different resources and participants in a network were given, based on which fairness definitions were specified. An optimum solution for the resource allocation in arbitrary networks considering both high network utilization and fairness was developed using mixed (non-)linear programming. The results were compared with an algorithmic solution which imitates the way resources would be assigned manually. It can be observed that the network utilization, measured by the aggregated allocation of all flows, is almost the same for all fairness methods, whereas network flow fair allocation performs best according to the investigated fairness metrics if fairness is in focus. The achieved results can serve as a benchmark to assess the performance and fairness of existing MPTCP implementations. Since the best fairness results are achieved with the network-based solution, which an individual (MP)TCP flow cannot compute on its own, this suggests that assistance of the network operator may be useful: MPTCP is an end-to-end solution; the protocol only has information about the end-to-end connection, but not about the entire network.
The presented results show the optimum assignments of data flows in the case of perfect “god-view” network knowledge and an idealized view with constant data flows. They provide a benchmark for practical implementations of multipath transport whose congestion control only has a “local” view onto the network by analysing the performance of ongoing connections.
In order to enhance the performance of the purely transport-layer based approach, it can be complemented by the network infrastructure in two ways: multipath routing which is aware of the internal network state can further optimize network utilization and fairness. Another possible approach is to equip routers with the intelligence to identify MPTCP data transfers; using this information, packets could be selectively dropped in order to trigger the sender’s congestion control, with the target of shaping the traffic injected into the network.
The results which are given for fairness should be interpreted with care for two reasons. On the one hand, as already mentioned in the Introduction, an idealistic view onto a network with perfect global knowledge is assumed; in practice, an endpoint running a practical transport protocol can only roughly estimate possible constraints which may apply to the data flow it injects into the network. On the other hand, the fairness metrics available in the literature which are used in this paper are designed for single-path transport. The investigations in this paper show that they cannot give a direct comparison independent of the observed scenario and fairness method, which is why the normalization had to be introduced. This situation is not satisfying; it would be useful if dedicated fairness metrics for multipath transport were available, which, to the best of the authors’ knowledge, is not the case. An important step therefore would be the extension of the fairness metrics to multipath transport, allowing a direct comparison of multipath fairness for any scenario irrespective of the fairness method.

Author Contributions

Conceptualization, A.K., M.S., A.S. and A.F.; Data Curation, M.S.; Formal Analysis, M.S. and A.S.; Funding Acquisition, A.F.; Investigation, A.K., M.S. and A.S.; Methodology, A.K., M.S., A.S. and A.F.; Project Administration, A.F.; Software, M.S.; Supervision, A.F.; Visualization, A.K.; Writing—Original Draft, A.K. and M.S.; Writing—Review and Editing, A.K.

Funding

This research was funded in part by Deutsche Forschungsgemeinschaft (DFG) under the grant GO 730/9-1.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Mathematical Annex

In this annex, the mathematical description is given for the constraints of the MINLP problem which were specified in a textual way in Section 3. The symbols used in the descriptions are summarized in Table A1.

Appendix A.1. General Constraints

Flow conservation constraint: At each node except for the communication endpoints, the amounts of ingoing and outgoing flow must be equal [16]; routers do not act as source or sink.
$\sum_{j:\,(i,j) \in A} f_{ij}^{st} - \sum_{j:\,(j,i) \in A} f_{ji}^{st} = \begin{cases} \varphi_{st}, & \text{if } i = s, \\ -\varphi_{st}, & \text{if } i = t, \\ 0, & \text{else}, \end{cases} \quad \forall\, i \in V, [s,t] \in K.$ (A1)
Table A1. Symbols used in the Mixed Integer Non-Linear Programming (MINLP) model.

V: set of all nodes
L: set of all interfaces
A: set of all links
K: set of all flows (single or multipath)
(i,j): link ∈ A from interface i to j
[s,t]: flow (legacy or multipath) ∈ K from node s to t
Alloc: total allocation for all flows
f_i: legacy or MPTCP flow i
sf_i: legacy flow or MPTCP subflow i
f_alloc_i: allocation for legacy or MPTCP flow i
sf_alloc_i: allocation for legacy flow or MPTCP subflow i
flow_diff_st: sum of the differences of the allocation to flow [s,t] and any other flow
δ: sum of flow_diff_st for all pairs s,t
M: aggregated allocation for all multipath flows
M_gain: allocation gain for multipath-capable flows when upgraded from single-path to multipath
cap_ij: capacity on link (i,j)
φ_st: capacity allocated to flow [s,t] (single or multipath)
m_st: capacity allocated to multipath flow [s,t]; 0 for single-path flows
f_ij^st: capacity allocated to flow [s,t] on link (i,j)
unused_cap_ij: unused capacity on link (i,j)
max_sf_val_st: maximum allocation to a subflow of flow [s,t]
sf_id_{oh,ij}^st: binary, 1 if subflow (o,h) belonging to flow [s,t] is present on link (i,j)
flow_type_st: binary, 1 if flow [s,t] is a multipath flow
β: weighting factor in the objective function to prefer maximum allocation or fairness
snd_int_st, rcv_int_st: number of sender resp. receiver interfaces for flow [s,t]
flow_id_bin_i^st: binary, 1 if node i is part of the path from node s to t
x_ij^st: binary, 1 if link (i,j) is in the path of flow [s,t]
y_ij^st: binary, 1 if link (i,j) is a bottleneck for flow [s,t]
no_of_path_i^st: number of paths for flow [s,t] passing node i
z_oh^st: binary, 1 if flow [s,t] has a subflow on link (o,h)
splitting_node_st: number of paths used by flow [s,t]
n_sf_st: number of subflows used by flow [s,t]
sf_bneck_{oh,ij}^st: binary, 1 if subflow [o,h] of flow [s,t] is congested on link (i,j)
cong_flow_diff_ij^st: sum of allocation differences of flow [s,t] and any other flow on link (i,j)
cong_group_id_ij^st: binary, 1 if flow [s,t] shares link (i,j) with other flows
SF: total number of subflows allocated to all flows
MPTCP: Multipath TCP.
Capacity limitation constraints: The overall throughput on link (i,j), summed up over all flows [s,t], cannot be higher than the capacity cap_ij of the link [16]:
$\sum_{[s,t] \in K} f_{ij}^{st} \le cap_{ij} \quad \forall\,(i,j) \in A.$ (A2)
Equation (A3) expresses the condition that a flow [s,t] ∈ K can only occupy capacity on a link (i,j) ∈ A if the latter is part of the flow’s path. If the flow makes use of the link, the capacity allocated for the flow can at most be the physical speed of the link:
$f_{ij}^{st} \le cap_{ij} \cdot x_{ij}^{st} \quad \forall\,(i,j) \in A, [s,t] \in K.$ (A3)
Path constraints: Equation (A4) limits the number of available paths between the sender and the receiver to the product of the numbers of interfaces which the sender and the receiver are respectively equipped with. Obviously, the same limit also applies to the number of subflows which belong to the same connection.
Equation (A5) is similar to Equation (A3); however, the link and the path variables are exchanged between the left and right sides of the relation sign. This means that the (non-)occupation of a link by a flow is both a necessary and a sufficient criterion for the fact that the link is part of the flow’s path:
$\sum_{j:\,(i,j) \in A} x_{ij}^{st} \le snd\_int_{st} \cdot rcv\_int_{st} \quad \forall\, i \in V, [s,t] \in K,$ (A4)
$x_{ij}^{st} \le \beta \cdot f_{ij}^{st} \quad \forall\,(i,j) \in A, [s,t] \in K.$ (A5)
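To make the general constraints concrete, the following sketch encodes Equations (A1)–(A3) and (A5) for a single flow [s,t] with PuLP; it is not the authors’ AIMMS/CPLEX model, the four-node topology and its capacities are made up, and β plays the big-M role it has throughout this annex.

```python
from pulp import LpProblem, LpMaximize, LpVariable, LpBinary, lpSum, value

cap = {("s", "b"): 20, ("b", "t"): 10, ("s", "c"): 15, ("c", "t"): 15}
links, nodes, beta = list(cap), ["s", "b", "c", "t"], 1000

prob = LpProblem("single_flow_st", LpMaximize)
f = {(i, j): LpVariable(f"f_{i}{j}", lowBound=0) for (i, j) in links}
x = {(i, j): LpVariable(f"x_{i}{j}", cat=LpBinary) for (i, j) in links}
phi = LpVariable("phi", lowBound=0)

for n in nodes:                      # (A1): flow conservation at every node
    balance = (lpSum(f[l] for l in links if l[0] == n)
               - lpSum(f[l] for l in links if l[1] == n))
    prob += balance == (phi if n == "s" else -phi if n == "t" else 0)
for l in links:
    prob += f[l] <= cap[l]           # (A2): aggregate capacity limit
    prob += f[l] <= cap[l] * x[l]    # (A3): capacity only on used links
    prob += x[l] <= beta * f[l]      # (A5): used links must carry capacity

prob += phi                          # maximize the end-to-end allocation
prob.solve()
print(value(phi))                    # 25.0: 10 Mbps via b plus 15 Mbps via c
```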
Multipath flow identifier constraints: Equations (A6) and (A7) are complementary: the former forces the binary variable flow_id_bin to 0 if node i is not part of a path which is assigned to the flow from node s to t; the latter forces the variable to 1 if node i is part of such a path.
Equation (A8) makes sure that flow_type_st = 0 if the flow [s,t] ∈ K does not take multiple paths from any node i. Equation (A9) identifies a multipath flow, i.e., flow_type_st = 1 if the flow [s,t] ∈ K takes multiple paths from some node i:
$flow\_id\_bin_i^{st} \le no\_of\_path_i^{st} \quad \forall\, i \in V, [s,t] \in K,$ (A6)
$\beta \cdot flow\_id\_bin_i^{st} \ge no\_of\_path_i^{st} \quad \forall\, i \in V, [s,t] \in K,$ (A7)
$flow\_type_{st} \le \sum_{i \in V} \left( no\_of\_path_i^{st} - flow\_id\_bin_i^{st} \right) \quad \forall\,[s,t] \in K,$ (A8)
$\beta \cdot flow\_type_{st} \ge \sum_{i \in V} \left( no\_of\_path_i^{st} - flow\_id\_bin_i^{st} \right) \quad \forall\,[s,t] \in K.$ (A9)
Multipath subflow identification constraints: Let sf_id_{oh,ij}^st be a binary variable that identifies subflow [o,h] ∈ A on link (i,j) ∈ A for the flow [s,t] ∈ K: sf_id_{oh,ij}^st = 1 if there exists a subflow (o,h) ∈ A on link (i,j) ∈ A for the flow [s,t] ∈ K. There is no subflow for a single-path flow [s,t] ∈ K, so sf_id_{oh,ij}^st = 0 in that case. Equation (A10) implies that there cannot be a subflow on link (i,j) ∈ A for the flow [s,t] ∈ K, i.e., sf_id_{oh,ij}^st = 0, if the flow does not use that link (x_ij^st = 0). Equation (A11) generates at least one subflow for a multipath flow (flow_type_st = 1) on the link (i,j) ∈ A if the multipath flow uses that link. Equation (A12) makes sure that, for the subflow [o,h] ∈ A, the subflow identifier sf_id_{oh,ij}^st is not set for more than one link leaving a certain node i for the flow [s,t] ∈ K:
$sf\_id_{oh,ij}^{st} \le x_{ij}^{st} \quad \forall\, h \ne s, (o,h) \in A, (i,j) \in A, [s,t] \in K,$ (A10)
$\sum_{(o,h) \in A,\, h \ne s} sf\_id_{oh,ij}^{st} \ge x_{ij}^{st} - (1 - flow\_type_{st}) \quad \forall\,(i,j) \in A, [s,t] \in K,$ (A11)
$\sum_{j:\,(i,j) \in A} sf\_id_{oh,ij}^{st} \le 1 \quad \forall\, i \in V, i \ne t, h \ne s, (o,h) \in A, [s,t] \in K.$ (A12)
Consider a helping binary variable z_oh^st which is 1 if there exists a subflow [o,h] ∈ A for the flow [s,t] ∈ K. z_oh^st identifies the subflow [o,h] ∈ A for the flow [s,t] ∈ K without considering the routing path of the subflow; the information through which links the subflow is routed cannot be derived from z_oh^st, but it can be found through the subflow identifier sf_id_{oh,ij}^st:
$z_{oh}^{st} = sf\_id_{oh,oh}^{st} \quad \forall\, h \ne s, (o,h) \in A, [s,t] \in K.$ (A13)
Equation (A14) sets the variable sf_id_{ih,ih}^st to zero if the flow starting at node i does not split into subflows, i.e., the flow only uses one single path to the destination. Equation (A15) signifies that, if there is no subflow [o,h] ∈ A for the flow [s,t] ∈ K, i.e., z_oh^st = 0, the subflow identifier sf_id_{oh,ij}^st must be zero for any link (i,j) ∈ A. Once a subflow has been initialized from a certain node, i.e., sf_id_{oh,ij}^st is set to 1, Equation (A16) extends the subflow identification along the path the subflow takes to reach the destination: if there is an incoming subflow [o,h] ∈ A on link (i,j) ∈ A at any node j which is not a source or destination node, the subflow also exists on a link (j,m) ∈ A, which completes the subflow path:
$sf\_id_{ih,ih}^{st} \le no\_of\_path_i^{st} - flow\_id\_bin_i^{st} \quad \forall\, i \in V, h \ne s, (i,h) \in A, [s,t] \in K,$ (A14)
$sf\_id_{oh,ij}^{st} \le z_{oh}^{st} \quad \forall\, h \ne s, (o,h) \in A, (i,j) \in A, [s,t] \in K,$ (A15)
$\sum_{i:\,(i,j) \in A} sf\_id_{oh,ij}^{st} = \sum_{m:\,(j,m) \in A} sf\_id_{oh,jm}^{st} \quad \forall\, j \in V, j \ne s, j \ne t, h \ne s, (o,h) \in A, [s,t] \in K.$ (A16)
The previously introduced equations ensure that subflows are not created erroneously, but they do not limit the number of subflows into which a single flow can be split. In order to enforce this limit, a variable splitting_node_st which counts the total number of paths used by the flow [s,t] ∈ K is defined in Equation (A17):
$splitting\_node_{st} = 1 + \sum_{i \in V} \left( no\_of\_path_i^{st} - flow\_id\_bin_i^{st} \right) \quad \forall\,[s,t] \in K.$ (A17)
Equation (A18) limits the number of paths resp. subflows for the flow [s,t] ∈ K to the product of the numbers of sender and receiver interfaces. For each interface pair, at most one subflow is possible:
$splitting\_node_{st} \le snd\_int_{st} \cdot rcv\_int_{st} \quad \forall\,[s,t] \in K.$ (A18)
A helping variable n_sf_st is introduced to express that the number of subflows is identical to the number of paths used by a multipath flow; n_sf_st counts the total number of subflows for the flow [s,t] ∈ K. Equations (A19)–(A21) result in splitting_node_st = Σ_{(i,j)∈A} sf_id_{ij,ij}^st = n_sf_st when flow_type_st = 1:
$splitting\_node_{st} \le \sum_{(i,j) \in A} sf\_id_{ij,ij}^{st} + 1 - flow\_type_{st} \quad \forall\,[s,t] \in K,$ (A19)
$1 - flow\_type_{st} + n\_sf_{st} \le splitting\_node_{st} \quad \forall\,[s,t] \in K,$ (A20)
$\sum_{(i,j) \in A,\, j \ne s} sf\_id_{ij,ij}^{st} \le n\_sf_{st} \quad \forall\,[s,t] \in K.$ (A21)
Equations (A22) to (A24) prevent subflow creation if both the source and the destination have only one interface. The equations also ensure that only one subflow can be created for any source–destination interface pair. Equation (A22) means that, for the flow [s,t] ∈ K, the two subflows [m,n] ∈ A and [m,p] ∈ A cannot both be created from the node m using the same source interface (s,j) ∈ A and destination interface (i,t) ∈ A pair. Equation (A23) implies that there cannot be two incoming subflows (m,p) ∈ A and (o,p) ∈ A at the node p for the flow [s,t] ∈ K using the same source interface (s,j) ∈ A and destination interface (i,t) ∈ A pair. Equation (A24) expresses that the two subflows [m,p] ∈ A and [o,n] ∈ A cannot be created from two different nodes using the same source interface (s,j) ∈ A and destination interface (i,t) ∈ A pair for the flow [s,t] ∈ K:
$sf\_id_{mn,sj}^{st} + sf\_id_{mp,sj}^{st} + sf\_id_{mn,it}^{st} + sf\_id_{mp,it}^{st} \le 3 \quad \forall\, i \in V, j \in V, (i,t) \in A, (s,j) \in A, (m,n) \in A, (m,p) \in A, n \ne p, [s,t] \in K,$ (A22)
$sf\_id_{op,sj}^{st} + sf\_id_{mp,sj}^{st} + sf\_id_{op,it}^{st} + sf\_id_{mp,it}^{st} \le 3 \quad \forall\, i \in V, j \in V, (i,t) \in A, (s,j) \in A, (o,p) \in A, (m,p) \in A, o \ne m, [s,t] \in K,$ (A23)
$sf\_id_{on,sj}^{st} + sf\_id_{mp,sj}^{st} + sf\_id_{on,it}^{st} + sf\_id_{mp,it}^{st} \le 3 \quad \forall\, i \in V, j \in V, (i,t) \in A, (s,j) \in A, (o,n) \in A, (m,p) \in A, o \ne m, n \ne p, [s,t] \in K.$ (A24)
Congested links identifier constraints for flows: Equation (A25) implies that there exists at least one congested link for each flow. Equation (A26) makes sure that a link identified as congested is fully utilized, i.e., the flow allocations use the full capacity: if the link (i,j) ∈ A is congested (y_ij^st = 1) for the demand [s,t] ∈ K, the sum of the capacities allocated to the flows sharing the link (i,j) ∈ A is equal to the capacity of that link. It has already been stated that the aggregated flow allocation on a link cannot be larger than the capacity of that link. Equation (A27) relates the two binary variables x_ij^st and y_ij^st. It expresses that the link (i,j) ∈ A can be on the path of the flow [s,t] ∈ K even though the link is not fully utilized. It also ensures that the identified bottleneck link (i,j) ∈ A for the flow [s,t] ∈ K is on the allocated path of that flow, i.e., y_ij^st can only be 1 when x_ij^st = 1:
$\sum_{(i,j) \in A} y_{ij}^{st} \ge 1 \quad \forall\,[s,t] \in K,$ (A25)
$\sum_{[o,d] \in K} f_{ij}^{od} \ge cap_{ij} \cdot y_{ij}^{st} \quad \forall\,(i,j) \in A, [s,t] \in K,$ (A26)
$y_{ij}^{st} \le x_{ij}^{st} \quad \forall\,(i,j) \in A, [s,t] \in K.$ (A27)
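A small feasibility check (plain Python, hand-picked hypothetical values rather than solver output) illustrates how Equations (A25)–(A27) interact for a flow whose path uses two links:

```python
cap   = {"L1": 20, "L2": 10}   # link capacities
f_sum = {"L1": 20, "L2": 5}    # aggregated allocation of all flows per link
x     = {"L1": 1,  "L2": 1}    # the flow under test uses both links

def y_feasible(y):
    if sum(y.values()) < 1:                # (A25): at least one congested link
        return False
    for link in y:
        if y[link] and f_sum[link] < cap[link]:
            return False                   # (A26): congested links are saturated
        if y[link] > x[link]:
            return False                   # (A27): bottleneck lies on the path
    return True

print(y_feasible({"L1": 1, "L2": 0}))  # True: L1 is fully utilized
print(y_feasible({"L1": 0, "L2": 1}))  # False: L2 still has spare capacity
```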
Congested links identifier constraints for multipath subflows: If a flow splits into multiple subflows, each of them should be part of at least one congested link, as is the case for single-path flows. It can, however, happen that, on a given link, two or more subflows belong to the same flow. To identify the congested link for a subflow, consider a binary variable sf_bneck_{oh,ij}^st which is 1 if the subflow [o,h] ∈ A of the flow [s,t] ∈ K is congested on link (i,j) ∈ A. Equation (A28) states that the link (i,j) ∈ A can be identified as a congested link for the subflow [o,h] ∈ A of the flow [s,t] ∈ K only if the subflow exists on that link. Equation (A29) makes sure that there is at least one congested link (i,j) ∈ A for the subflow [o,h] ∈ A of the flow [s,t] ∈ K. Equation (A30) implies that the subflow [o,h] ∈ A of the flow [s,t] ∈ K cannot be congested on (i,j) ∈ A if the flow [s,t] ∈ K itself is not congested on that link, i.e., sf_bneck_{oh,ij}^st = 0 when y_ij^st = 0. Equation (A31) specifies that, if flow [s,t] is bottlenecked on link (i,j), then all subflows [o,h] of flow [s,t] which are present on (i,j) are bottlenecked on (i,j):
$sf\_bneck_{oh,ij}^{st} \le sf\_id_{oh,ij}^{st} \quad \forall\, h \ne s, (o,h) \in A, (i,j) \in A, [s,t] \in K,$ (A28)
$\sum_{(i,j) \in A} sf\_bneck_{oh,ij}^{st} \ge z_{oh}^{st} \quad \forall\, h \ne s, (o,h) \in A, [s,t] \in K,$ (A29)
$sf\_bneck_{oh,ij}^{st} \le y_{ij}^{st} \quad \forall\, h \ne s, (o,h) \in A, (i,j) \in A, [s,t] \in K,$ (A30)
$\beta \cdot (1 - flow\_type_{st}) + sf\_bneck_{oh,ij}^{st} \ge y_{ij}^{st} - \beta \cdot (1 - sf\_id_{oh,ij}^{st}) \quad \forall\, h \ne s, (o,h) \in A, (i,j) \in A, [s,t] \in K.$ (A31)
After the general constraints of the linear programming model, which are common to all fairness methods, have been given, the constraints specific to the particular fairness methods BSF, BFF and NFF are now discussed.

Appendix A.2. Allocation-Specific Constraints

Appendix A.2.1. Bottleneck Subflow Fair Allocation (BSF)

Constraints for identifying the allocation of multipath flows: The objective function BSF-II (Equation (9)) uses the aggregated capacity allocation to all multipath flows, M, as defined in Equation (8). As already mentioned in the description of this equation, m_st specifies the capacity allocation to a multipath flow [s,t] and is 0 for single-path flows. In contrast, the variable φ_st is the allocation to any flow [s,t] ∈ K, irrespective of whether the flow is a multipath or a single-path flow. Equation (A32) sets m_st = 0 for a single-path flow [s,t] ∈ K. Equations (A33) and (A34) make sure that, for a multipath flow [s,t] ∈ K, m_st is the end-to-end capacity allocation of that flow, i.e., m_st = φ_st when flow_type_st = 1:
$m_{st} \le \beta \cdot flow\_type_{st} \quad \forall\,[s,t] \in K,$ (A32)
$m_{st} \le \varphi_{st} \quad \forall\,[s,t] \in K,$ (A33)
$m_{st} \ge \varphi_{st} - \beta \cdot (1 - flow\_type_{st}) \quad \forall\,[s,t] \in K.$ (A34)
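Equations (A32)–(A34) are an instance of the standard big-M gadget that pins m_st to φ_st for multipath flows and to 0 otherwise; a quick check with hypothetical numbers (β chosen larger than any possible allocation):

```python
beta = 1000  # big-M, larger than any link capacity in the network

def m_bounds(phi, flow_type):
    upper = min(beta * flow_type, phi)     # (A32) and (A33)
    lower = phi - beta * (1 - flow_type)   # (A34); m_st >= 0 also holds
    return lower, upper

print(m_bounds(phi=25.0, flow_type=1))  # (25.0, 25.0): m_st is forced to phi_st
print(m_bounds(phi=25.0, flow_type=0))  # (-975.0, 0): m_st collapses to 0
```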
Multipath subflow allocation mapping constraints: Let sf_alloc_{oh,ij}^st be the allocation of the subflow [o,h] ∈ A on the link (i,j) ∈ A for the flow [s,t] ∈ K. Equation (A35) implies that the subflow [o,h] ∈ A does not get any allocation on the link (i,j) ∈ A if no subflow [o,h] ∈ A exists on that link for the flow [s,t] ∈ K, i.e., sf_alloc_{oh,ij}^st = 0 when sf_id_{oh,ij}^st = 0. Equation (A36) ensures that the subflow [o,h] ∈ A gets some allocation on the link (i,j) ∈ A when there is a subflow [o,h] ∈ A on that link for the flow [s,t] ∈ K:
$sf\_alloc_{oh,ij}^{st} \le \beta \cdot sf\_id_{oh,ij}^{st} \quad \forall\,(o,h) \in A, (i,j) \in A, [s,t] \in K,$ (A35)
$\beta \cdot sf\_alloc_{oh,ij}^{st} \ge sf\_id_{oh,ij}^{st} \quad \forall\,(o,h) \in A, (i,j) \in A, [s,t] \in K.$ (A36)
Equations (A37) and (A38) map the flow allocation to the sum of the subflow allocations of a flow on a specific link. For a multipath flow, on a specific link, the sum of the allocations of all subflows of the flow must be equal to the flow allocation on that link. This implies that f_ij^st = Σ_{(o,h)∈A} sf_alloc_{oh,ij}^st when flow_type_st = 1:
$f_{ij}^{st} \ge \sum_{(o,h) \in A,\, h \ne s} sf\_alloc_{oh,ij}^{st} \quad \forall\,(i,j) \in A, [s,t] \in K,$ (A37)
$cap_{ij} \cdot (1 - flow\_type_{st}) + \sum_{(o,h) \in A,\, h \ne s} sf\_alloc_{oh,ij}^{st} \ge f_{ij}^{st} \quad \forall\,(i,j) \in A, [s,t] \in K.$ (A38)
Once a subflow gets an allocation on a link, Equation (A39) makes sure that the allocation of that subflow is the same on all other links of its path: at any node j which is not a source or destination node, the allocation of the subflow [o,h] ∈ A of the flow [s,t] ∈ K is equal on the incoming link (i,j) ∈ A and the outgoing link (j,m) ∈ A:
$\sum_{i:\,(i,j) \in A} sf\_alloc_{oh,ij}^{st} = \sum_{m:\,(j,m) \in A} sf\_alloc_{oh,jm}^{st} \quad \forall\, j \in V, j \ne s, j \ne t, (o,h) \in A, [s,t] \in K.$ (A39)
Bottleneck subflow fair allocation constraints: Let sf_link_share_ij be the maximum share of the capacity that a subflow can get from the link (i,j) ∈ A. Equations (A40) and (A41) ensure an equal allocation to all subflows that are bottlenecked on the link (i,j) ∈ A: the subflow [o,h] ∈ A gets the maximum share of the capacity on the link (i,j) ∈ A if the link is identified as a bottleneck link (sf_bneck_{oh,ij}^st = 1) for that subflow, i.e., sf_alloc_{oh,ij}^st = sf_link_share_ij when sf_bneck_{oh,ij}^st = 1:
$sf\_alloc_{oh,ij}^{st} \le sf\_link\_share_{ij} \quad \forall\,(o,h) \in A, (i,j) \in A, [s,t] \in K,$ (A40)
$sf\_alloc_{oh,ij}^{st} \ge sf\_link\_share_{ij} - cap_{ij} \cdot (1 - sf\_bneck_{oh,ij}^{st}) \quad \forall\,(o,h) \in A, (i,j) \in A, [s,t] \in K.$ (A41)
In bottleneck subflow fairness, single-path flows and individual subflows of multipath flows are equivalent and should in the ideal case get the same share of the link capacity. Equations (A42) and (A43) ensure the fair share for a single-path flow on the bottleneck link: a single-path flow [s,t] ∈ K gets the maximum share of the capacity on its bottleneck link (i,j) ∈ A, i.e., f_ij^st = sf_link_share_ij when flow_type_st = 0 and y_ij^st = 1. For a multipath flow, Equations (A42) and (A43) do not have any effect on the allocation:
$f_{ij}^{st} \le sf\_link\_share_{ij} + cap_{ij} \cdot flow\_type_{st} \quad \forall\,(i,j) \in A, [s,t] \in K,$ (A42)
$f_{ij}^{st} \ge sf\_link\_share_{ij} - cap_{ij} \cdot (1 - y_{ij}^{st}) - cap_{ij} \cdot flow\_type_{st} \quad \forall\,(i,j) \in A, [s,t] \in K.$ (A43)

Appendix A.2.2. Bottleneck Flow Fair Allocation (BFF)

Bottleneck flow fair allocation constraints: Let flow_link_share_ij be the maximum amount of capacity a flow can be allocated on the link (i,j) ∈ A. Equations (A44) and (A45) ensure an equal share to all flows which are bottlenecked on the link (i,j) ∈ A. Equation (A44) expresses flow_link_share_ij as the maximum share of the capacity for a flow on the link (i,j) ∈ A. If the link (i,j) ∈ A is a bottleneck for the flow [s,t] ∈ K, i.e., y_ij^st = 1, Equation (A45) ensures the maximum share of the capacity flow_link_share_ij for the flow [s,t] ∈ K on that bottleneck link. If y_ij^st = 1, then flow_link_share_ij = f_ij^st, which means an equal share of the bottleneck capacity for all flows bottlenecked on the link (i,j) ∈ A:
$f_{ij}^{st} \le flow\_link\_share_{ij} \quad \forall\,(i,j) \in A, [s,t] \in K,$ (A44)
$f_{ij}^{st} \ge flow\_link\_share_{ij} - cap_{ij} \cdot (1 - y_{ij}^{st}) \quad \forall\,(i,j) \in A, [s,t] \in K.$ (A45)
Besides assigning an equal share to the flows on a bottlenecked link, BFF optionally takes a second step if there are flows which are represented by more than one subflow on the link. For each of these flows, BFF tries to share the capacity assigned to the respective flow equally among its subflows. Equations (A46) and (A47) make sure that all subflows of a flow get an equal share of the flow capacity on that bottleneck link. Let sf_link_share_ij^st be the maximum allocation that a subflow of the flow [s,t] ∈ K can get on the link (i,j) ∈ A. The difference between sf_link_share_ij^st and the sf_link_share_ij used previously in Appendix A.2.1 is that sf_link_share_ij is the generic value irrespective of any flow, whereas sf_link_share_ij^st is the flow-specific value. Equation (A46) means that sf_link_share_ij^st is the maximum amount of the capacity the subflow [o,h] ∈ A of the flow [s,t] ∈ K can get on the link (i,j) ∈ A. Equation (A47) ensures that the subflow [o,h] ∈ A of the flow [s,t] ∈ K gets the maximum share of the flow capacity on the link (i,j) ∈ A if the subflow is bottlenecked on that link. In other words, subflows of the flow [s,t] ∈ K which are bottlenecked on the link (i,j) ∈ A get an equal allocation from that link:
$sf\_alloc_{oh,ij}^{st} \le sf\_link\_share_{ij}^{st} \quad \forall\,(o,h) \in A, (i,j) \in A, [s,t] \in K,$ (A46)
$sf\_alloc_{oh,ij}^{st} \ge sf\_link\_share_{ij}^{st} - cap_{ij} \cdot (1 - sf\_bneck_{oh,ij}^{st}) \quad \forall\,(o,h) \in A, (i,j) \in A, [s,t] \in K.$ (A47)
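A numeric illustration of the two BFF levels (hypothetical link and flow set): first the link capacity is split equally among the flows bottlenecked on it, per Equations (A44) and (A45); then each flow's share is split equally among its subflows on that link, per Equations (A46) and (A47):

```python
cap = 30.0                       # capacity of the bottleneck link
subflows = {"f1": 1, "f2": 2}    # subflows per bottlenecked flow on the link

flow_link_share = cap / len(subflows)          # 15.0 Mbps per flow
sf_link_share = {flow: flow_link_share / n_sf  # per-subflow share within a flow
                 for flow, n_sf in subflows.items()}
print(flow_link_share, sf_link_share)  # 15.0 {'f1': 15.0, 'f2': 7.5}
```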

Appendix A.2.3. Network Flow Fair Allocation (NFF)

In this paragraph, the constraints to compute the allocation gain for the objective function NFF-objective-III are derived. Let φ^singlepath_st ≥ 0 be the allocation of the single-path flow [s,t] ∈ K. Equation (A48) states that if the flow [s,t] ∈ K is a multipath flow, then φ^singlepath_st = 0. Equations (A49) and (A50) make sure that φ^singlepath_st is the allocation of the single-path flow [s,t] ∈ K: when [s,t] ∈ K is a single-path flow, i.e., flow_type_st = 0, φ^singlepath_st is the end-to-end allocation of that flow, i.e., φ^singlepath_st = φ_st:
$\varphi^{singlepath}_{st} \le \beta \cdot (1 - flow\_type_{st}) \quad \forall\,[s,t] \in K,$ (A48)
$\varphi^{singlepath}_{st} \le \varphi_{st} \quad \forall\,[s,t] \in K,$ (A49)
$\varphi^{singlepath}_{st} \ge \varphi_{st} - \beta \cdot flow\_type_{st} \quad \forall\,[s,t] \in K.$ (A50)
Network Congestion Groups
Formulating the equation sets to identify the flows sharing the same congested link requires that all links which are congested for the flows [s,t] ∈ K are identified. A link is said to be congested only when it is fully utilized. Consider a variable unused_cap_ij which gives the unused capacity of the link (i,j) ∈ A after allocation. If the link (i,j) ∈ A is congested, then unused_cap_ij = 0 for that link:
$unused\_cap_{ij} = cap_{ij} - \sum_{[q,r] \in K} f_{ij}^{qr} \quad \forall\,(i,j) \in A.$ (A51)
The binary variable y_ij^st identifies at least one congested link for the flow [s,t] ∈ K:
$\beta \cdot unused\_cap_{ij} + y_{ij}^{st} \ge x_{ij}^{st} \quad \forall\,(i,j) \in A, [s,t] \in K.$ (A52)
Let cong_flow_count_ij^st be a counter for the number of flows sharing the same congested link (i,j) ∈ A with the flow [s,t] ∈ K. Equation (A53) states that if the link (i,j) ∈ A is not congested for the flow [s,t] ∈ K, then cong_flow_count_ij^st = 0. Equations (A54) and (A55) make sure that, when the link (i,j) ∈ A is fully utilized for the flow [s,t] ∈ K, cong_flow_count_ij^st counts the number of flows that are congested on that link: when y_ij^st = 1, then cong_flow_count_ij^st = Σ_{[q,r]∈K} y_ij^qr.
$cong\_flow\_count_{ij}^{st} \le \beta \cdot y_{ij}^{st} \quad \forall\,(i,j) \in A, [s,t] \in K,$ (A53)
$cong\_flow\_count_{ij}^{st} \le \sum_{[q,r] \in K} y_{ij}^{qr} \quad \forall\,(i,j) \in A, [s,t] \in K,$ (A54)
$cong\_flow\_count_{ij}^{st} \ge \sum_{[q,r] \in K} y_{ij}^{qr} - cap_{ij} \cdot (1 - y_{ij}^{st}) \quad \forall\,(i,j) \in A, [s,t] \in K.$ (A55)
When the link (i,j) ∈ A is not congested for the flow [s,t] ∈ K, the value of the variable cong_group_id_ij^st is set to zero by Equation (A56). Equation (A57) ensures that the variable cong_group_id_ij^st can only be set to 1 if the congested link (i,j) ∈ A is shared by other flows along with the flow [s,t] ∈ K. Equation (A58) means that when cong_flow_count_ij^st ≥ 2, then cong_group_id_ij^st is bound to be 1. This implies that, if the flow [s,t] ∈ K shares the congested link (i,j) ∈ A with any other flow, cong_group_id_ij^st is set to 1:
$cong\_group\_id_{ij}^{st} \le y_{ij}^{st} \quad \forall\,(i,j) \in A, [s,t] \in K,$ (A56)
$\beta \cdot cong\_group\_id_{ij}^{st} \le \sum_{[q,r] \in K} y_{ij}^{qr} + \beta - 2 \quad \forall\,(i,j) \in A, [s,t] \in K,$ (A57)
$\beta \cdot cong\_group\_id_{ij}^{st} + 1 \ge cong\_flow\_count_{ij}^{st} \quad \forall\,(i,j) \in A, [s,t] \in K.$ (A58)
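Finally, a hand-checked sketch of the congestion-group gadget (A56)–(A58), with `shared` standing for Σ_{[q,r]∈K} y_ij^qr (which equals cong_flow_count_ij^st on a congested link) and hypothetical values:

```python
beta = 1000  # big-M

def group_id_feasible(g, y_st, shared):
    return (g <= y_st                          # (A56)
            and beta * g <= shared + beta - 2  # (A57)
            and beta * g + 1 >= shared)        # (A58), count = shared here

print(group_id_feasible(1, 1, 3))  # True: three flows share the congested link
print(group_id_feasible(0, 1, 3))  # False: (A58) forces the group flag to 1
print(group_id_feasible(1, 1, 1))  # False: (A57) needs at least two flows
```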

References

  1. Singh, A.; Könsgen, A.; Adhari, H.; Görg, C.; Rathgeb, E. Algorithms for Theoretical Investigation of Fairness in Multipath Transport. In Proceedings of the 7th EAI International Conference on Mobile Networks and Management (MONAMI), Santander, Spain, 16–18 September 2015. [Google Scholar]
  2. Recommendations on Queue Management and Congestion Avoidance in the Internet. IETF Informational RFC 2309, 1998. Available online: https://www.rfc-editor.org/rfc/pdfrfc/rfc2309.txt.pdf (accessed on 1 February 2019).
  3. Ford, A.; Raiciu, C.; Handley, M.; Barré, S.; Iyengar, J.R. Architectural Guidelines for Multipath TCP Development. Informational RFC 6182, IETF, 2011. Available online: https://www.rfc-editor.org/rfc/pdfrfc/rfc6182.txt.pdf (accessed on 1 February 2019).
  4. Stewart, R.R. Stream Control Transmission Protocol. RFC 4960, IETF, 2007. Available online: https://www.rfc-editor.org/rfc/pdfrfc/rfc4960.txt.pdf (accessed on 1 February 2019).
  5. Dreibholz, T. Evaluation and Optimisation of Multi-Path Transport using the Stream Control Transmission Protocol. Habilitation Treatise, University of Duisburg-Essen, Faculty of Economics, Institute for Computer Science and Business Information Systems, 2012. Available online: https://duepublico.uni-duisburg-essen.de/servlets/DerivateServlet/Derivate-29737/Dre2012_final.pdf (accessed on 1 February 2019).
  6. Tuexen, M.; Stewart, R.; Natarajan, P.; Iyengar, J.; Ekiz, N.; Dreibholz, T.; Becke, M.; Amer, P. Load Sharing for the Stream Control Transmission Protocol (SCTP). IETF Draft, Individual Submission, Draft-Tuexen-Tsvwg-Sctp-Multipath-16, 2018. Available online: https://tools.ietf.org/html/draft-tuexen-tsvwg-sctp-multipath-16 (accessed on 1 February 2019).
  7. Khalili, R.; Gast, N.; Popovic, M.; Boudec, J.Y. MPTCP is not Pareto-Optimal: Performance Issues and a Possible Solution. IEEE/ACM Trans. Netw. 2013, 21, 1651–1665. [Google Scholar] [CrossRef]
  8. Peng, Q.; Walid, A.; Hwang, J.; Low, S.H. Multipath TCP: Analysis, Design, and Implementation. IEEE/ACM Trans. Netw. 2017, 24, 596–609. [Google Scholar] [CrossRef]
  9. Park, S.Y.; Joo, C.; Park, Y.; Bank, S. Impact of traffic splitting on the delay performance of MPTCP. In Proceedings of the IEEE International Conference on Communications (ICC), Sydney, Australia, 10–14 June 2014. [Google Scholar]
  10. Liu, Q.; Xu, K.; Wang, H.; Xu, L. Modeling multi-path TCP throughput with coupled congestion control and flow control. In Proceedings of the 18th ACM International Conference on Modeling, Analysis and Simulation of Wireless and Mobile Systems (MSWIM), Montreal, QC, Canada, 28 October–2 November 2015. [Google Scholar]
  11. Pokhrel, S.R.; Panda, M.; Vu, H.L. Analytical Modeling of Multipath TCP Over Last-Mile Wireless. IEEE/ACM Trans. Netw. 2017, 25, 1876–1891. [Google Scholar] [CrossRef]
  12. A Literature Review of Reliable Multipath Routing Technique. Int. J. Eng. Comput. Sci. 2015, 4, 10599–10602. Available online: https://ijecs.in/index.php/ijecs/article/view/807/719 (accessed on 1 February 2019).
  13. Low, S.H.; Lapsley, D.E. Optimization Flow Control—I: Basic Algorithm and Convergence. IEEE/ACM Trans. Netw. 1999, 7, 861–874. [Google Scholar] [CrossRef]
  14. Mazumdar, R.; Mason, L.; Douligeris, C. Fairness in network optimal flow control: Optimality of product forms. IEEE Trans. Commun. 1991, 39, 775–782. [Google Scholar] [CrossRef]
  15. Neely, M.J.; Modiano, E.; Li, C.P. Fairness and Optimal Stochastic Control for Heterogeneous Networks. IEEE/ACM Trans. Netw. 2008, 16, 396–409. [Google Scholar] [CrossRef]
  16. Amaldi, E.; Capone, A.; Coniglio, S.; Gianoli, L. Network optimization problems subject to max-min fair flow allocation. IEEE Commun. Lett. 2013, 17, 1463–1466. [Google Scholar] [CrossRef]
  17. Barakat, C.; Altman, E.; Dabbous, W. On TCP performance in a heterogeneous network: A survey. IEEE Commun. Mag. 2000, 38, 40–46. [Google Scholar] [CrossRef]
  18. Hasegawa, G.; Murata, M. Survey on Fairness Issues in TCP Congestion Control Mechanism. IEICE Trans. Commun. 2001, E84-B, 1461–1472. [Google Scholar]
  19. Widmer, J.; Denda, R.; Mauve, M. A survey on TCP-friendly congestion control. IEEE Netw. 2001, 15, 28–37. [Google Scholar] [CrossRef]
  20. Carofiglio, G.; Muscariello, L.; Rossi, D.; Valenti, S. The quest for LEDBAT fairness. In Proceedings of the 2010 IEEE Global Telecommunications Conference GLOBECOM 2010, Miami, FL, USA, 6–10 December 2010. [Google Scholar]
  21. Adhari, H.; Rathgeb, E.; Singh, A.; Könsgen, A.; Görg, C. Transport Layer Fairness Revisited. In Proceedings of the 2015 13th International Conference on Telecommunications (ConTEL), Graz, Austria, 13–15 July 2015. [Google Scholar]
  22. Wischik, D.; Raiciu, C.; Greenhalgh, A.; Handley, M. Design, implementation and evaluation of congestion control for multipath TCP. In Proceedings of the 8th USENIX Symposium on Networked Systems Design and Implementation, Boston, MA, USA, 30 March–1 April 2011; Available online: https://www.usenix.org/legacy/event/nsdi11/tech/full_papers/Wischik.pdf (accessed on 1 February 2019).
  23. Oh, B.; Lee, J. Feedback-Based Path Failure Detection and Buffer Blocking Protection for MPTCP. IEEE/ACM Trans. Netw. 2016, 24, 3450–3461. [Google Scholar] [CrossRef]
  24. Wischik, D.; Handley, M.; Braun, B. The Resource Pooling Principle. ACM SIGCOMM Comput. Commun. Rev. 2008, 5, 47–52. [Google Scholar] [CrossRef]
  25. Raiciu, C.; Handley, M.; Wischik, D. Coupled Congestion Control for Multipath Transport Protocols. IETF, RFC 6356, 2011. Available online: https://www.rfc-editor.org/rfc/pdfrfc/rfc6356.txt.pdf (accessed on 1 February 2019).
  26. Raiciu, C.; Handley, M.; Wischik, D. Practical Congestion Control for Multipath Transport Protocols. University College London Tech. Rep., 2009. Available online: https://pdfs.semanticscholar.org/46a4/a32914297e05466e40c0603059301095ab80.pdf (accessed on 1 February 2019).
  27. Singh, A.; Xiang, M.; Könsgen, A.; Görg, C. Performance and Fairness Comparison of Extensions to Dynamic Window Coupling for Multipath TCP. In Proceedings of the 2013 9th International Wireless Communications and Mobile Computing Conference (IWCMC), Sardinia, Italy, 1–5 July 2013. [Google Scholar]
  28. Dreibholz, T.; Becke, M.; Adhari, H.; Rathgeb, E. On the Impact of Congestion Control for Concurrent Multipath Transfer on the Transport Layer. In Proceedings of the 11th International Conference on Telecommunications (ConTEL), Graz, Austria, 15–17 June 2011; Available online: https://www.tdr.wiwi.uni-due.de/fileadmin/fileupload/I-TDR/SCTP/Paper/ConTEL2011.pdf (accessed on 1 February 2019).
  29. Hassidim, A.; Raz, D.; Segalov, M.; Shaqed (Scolnicov), A. Network Utilization: The Flow View. In Proceedings of the 2013 IEEE INFOCOM, Turin, Italy, 14–19 April 2013. [Google Scholar]
  30. Zukerman, M.; Tan, L.; Wang, H.; Ouveysi, I. Efficiency-Fairness Tradeoff in Telecommunications Networks. IEEE Commun. Lett. 2005, 9, 643–645. [Google Scholar] [CrossRef]
  31. Jain, R.; Chiu, D.; Hawe, W. A Quantitative Measure Of Fairness And Discrimination For Resource Allocation In Shared Computer Systems. arXiv, 1998; arXiv:cs/9809099. [Google Scholar]
  32. Vardalis, D. On the Efficiency and Fairness of TCP over Wired/Wireless Networks. Master’s Thesis, State University of New York, Stony Brook, NY, USA, 2001. Available online: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.28.4569&rep=rep1&type=pdf (accessed on 1 February 2019).
  33. AIMMS. Available online: aimms.com (accessed on 7 February 2019).
  34. AIMMS Code for Optimizations in Multipath Transport. Available online: github.com/shahab04bd/AIMMS-code-for-Optimizations-in-Multipath-Transport (accessed on 7 February 2019).
Figure 1. Example network to explain Definitions 1 to 5. Blue circles: nodes; grey dots: interfaces; black arrows/numbers: links with capacities; colored arrows/numbers: flows with capacity assignments. All numerical values are given in Mbps.
Figure 2. Example for bottleneck subflow fair allocation. Blue circles: nodes; black arrows/numbers: links with capacities; colored arrows/numbers: flows with capacity assignments. All numerical values are given in Mbps.
Figure 3. Example for bottleneck flow fair allocation. Blue circles: nodes; black arrows/numbers: links with capacities; colored arrows/numbers: flows with capacity assignments. All numerical values in Mbps.
Figure 4. Example for network flow fair allocation. Blue circles: nodes; black arrows/numbers: links with capacities. Colored arrows/numbers: flows with capacity assignments. All numerical values in Mbps.
Figure 5. Efficiency (α = 0) vs. fairness (α = 1) example. Blue circles: nodes; black arrows: links; colored arrows: data flows; numerical values: link speeds resp. flow assignments in Mbps.
Figure 6. Efficiency-fairness graph for scenario in Figure 5, based on Ref. [30].
Figure 7. Example scenario for Network Flow Fair (NFF) with optimum allocation. Blue circles: nodes; black arrows/numbers: links with capacities; colored arrows/numbers: flows with capacity assignments. All numerical values in Mbps.
Figure 8. Scenario as in Figure 7, but NFF-objective-II prevents flow [a,d] from using multipath although it does not harm any other flow.
Figure 9. Example scenario for NFF with optimum allocation. Blue circles: nodes; black arrows/numbers: links with capacities; colored arrows/numbers: flows with capacity assignments. All values in Mbps.
Figure 10. Scenario from Figure 9, but NFF-objective-III enforces multipath for flows [a,d] and [e,h] and as a result restricts flow [m,n].
Figure 11. Example scenario for NFF with optimum allocation. Black arrows/numbers: links with capacities; colored arrows/numbers: flows with capacity assignments. All values in Mbps.
Figure 12. Scenario as in Figure 11, but NFF-objective-IV prevents flow [a,f] from using multipath and assigns link (b,c) to flow [m,n], resulting in poor fairness.
Figure 13. Topology of scenarios 1 to 4: Two-path connection with shared bottleneck (b,c) on the path between a and f. The numbers specify the link speeds in Mbps. Scenarios 1, 2 and 4: A = 20; scenario 3: A = 15.
Figure 14. Topology of scenarios 5 to 8: all stations in a line. The numbers specify the link speeds in Mbps. Scenarios 5 and 6: A = 10; scenarios 7 and 8: A = 20.
Figure 15. Topology of scenarios 9 to 14: Two-path connection between a and h where one of the paths is again split into two sub-paths. The numbers specify the link speeds in Mbps.
Figure 16. Topology of scenarios 15 to 18: Two two-path connections i–l and a–f with shared bottleneck links d–e and b–c. The numbers specify the link speeds in Mbps. Scenarios 15 and 16: A = B = C = 10, D = 15; scenarios 17 and 18: A = 30, B = 40, C = 10, D = 35.
Figure 17. Topology of scenarios 19 and 20: Three alternative connections for flow [a,f] via c, b or d. The numbers specify the link speeds in Mbps.
Figure 18. Topology of scenarios 21 to 24: Two consecutive two-path connections a–f and i–n with intermediate bottleneck link f–i. The numbers specify the link speeds in Mbps.
Figure 19. Topology of scenarios 25 to 28: Meshed network with shared bottleneck link d–g. The numbers specify the link speeds in Mbps.
Figure 20. Topology of scenarios 29 to 31: Meshed network. The numbers specify the link speeds in Mbps.
Figure 21. Aggregate allocation, normalized by NFF, for different fair allocation methods, based on the numerical results in Table 10, Table 11, Table 12, Table 13 and Table 14.
Figure 22. Jain’s fairness index, normalized by NFF, for different fair allocation methods, based on the numerical results in Table 10, Table 11, Table 12, Table 13 and Table 14.
Figure 23. Vardalis’s fairness index, normalized by NFF, for different fair allocation methods, based on the numerical results in Table 10, Table 11, Table 12, Table 13 and Table 14.
Figure 24. Product of efficiency and Jain’s fairness index, normalized by NFF, for different fair allocation methods, based on the numerical results in Table 10, Table 11, Table 12, Table 13 and Table 14.
Table 1. Connectivity matrix for example network in Figure 1. Rows: source, columns: destination. All values in Mbps. Zero values are omitted for better overview.

Source    Non-zero entry (Mbps)
a1        50
a2        10
b2        50
c2        50
d3        30
Rows b1, c1, d1, d2 and e1 contain only zero entries; the destination interfaces a1, a2, b1, b2, c1, c2, d1, d2, d3 and e1 form the columns.
Table 2. Flow set and linear programming (LP) results for the flows’ end-to-end throughput for scenarios 1 to 4 described in Figure 13. All values in Mbps.

Scenario 1
Flow    BSF I   BSF II   BFF I   BFF II   NFF
[a,f]   25.0    25.0     25.0    25.0     25.0
[m,n]   10.0    10.0     10.0    10.0     10.0

Scenario 2
Flow    BSF I   BSF II   BFF I   BFF II   NFF
[a,f]   18.3    18.3     10.0    18.3     12.5
[a,c]   8.3     8.3      15.0    8.3      12.5
[m,n]   8.3     8.3      10.0    8.3      10.0

Scenario 3
Flow    BSF I   BSF II   BFF I   BFF II   NFF
[n,h]   7.5     15.8     7.5     15.8     10.0
[m,h]   15.0    8.3      15.0    8.3      10.0
[a,h]   7.5     7.5      7.5     7.5      10.0
[a,d]   10.0    8.3      10.0    8.3      10.0

Scenario 4
Flow    BSF I   BSF II   BFF I   BFF II   NFF
[a,f]   8.3     13.3     5.0     13.3     8.8
[a,c]   8.3     8.3      15.0    8.3      8.8
[a,e]   10.0    5.0      5.0     5.0      8.8
[m,n]   8.3     8.3      10.0    8.3      8.8

BSF: Bottleneck Subflow Fair; BFF: Bottleneck Flow Fair; NFF: Network Flow Fair.
Table 3. Flow set and LP results for the flows’ end-to-end throughput for scenarios 5 to 8 described in Figure 14. All values in Mbps.

Scenario 5
Flow    BSF I   BSF II   BFF I   BFF II   NFF
[a,b]   5.0     5.0      5.0     5.0      5.0
[a,d]   5.0     5.0      5.0     5.0      5.0
[c,d]   5.0     5.0      5.0     5.0      5.0

Scenario 6
Flow    BSF I   BSF II   BFF I   BFF II   NFF
[a,c]   3.3     3.3      3.3     3.3      3.3
[a,d]   3.3     3.3      3.3     3.3      3.3
[b,d]   3.3     3.3      3.3     3.3      3.3

Scenario 7
Flow    BSF I   BSF II   BFF I   BFF II   NFF
[a,b]   15.0    15.0     15.0    15.0     15.0
[a,d]   5.0     5.0      5.0     5.0      5.0
[c,d]   5.0     5.0      5.0     5.0      5.0

Scenario 8
Flow    BSF I   BSF II   BFF I   BFF II   NFF
[a,c]   3.3     3.3      3.3     3.3      3.3
[a,d]   3.3     3.3      3.3     3.3      3.3
[b,d]   3.3     3.3      3.3     3.3      3.3
Table 4. Flow set and LP results for the flows’ end-to-end throughput for scenarios 9 to 14 described in Figure 15. All values in Mbps.

Scenario 9
Flow    BSF I   BSF II   BFF I   BFF II   NFF
[a,f]   10.0    15.0     10.0    15.0     15.0
[a,f]   20.0    15.0     20.0    15.0     15.0

Scenario 10
Flow    BSF I   BSF II   BFF I   BFF II   NFF
[a,f]   10.0    15.0     10.0    15.0     10.0
[a,h]   10.0    10.0     15.0    10.0     10.0
[c,g]   10.0    5.0      5.0     5.0      10.0

Scenario 11
Flow    BSF I   BSF II   BFF I   BFF II   NFF
[a,b]   15.0    8.3      15.0    10.0     8.8
[a,f]   10.0    16.7     5.0     10.0     8.8
[a,h]   5.0     5.0      10.0    10.0     8.8
[c,g]   5.0     5.0      5.0     5.0      8.8

Scenario 12
Flow    BSF I   BSF II   BFF I   BFF II   NFF
[a,b]   10.0    17.5     17.5    17.5     12.5
[a,f]   15.0    7.5      7.5     7.5      12.5

Scenario 13
Flow    BSF I   BSF II   BFF I   BFF II   NFF
[a,f]   20.0    20.0     20.0    20.0     20.0
[c,h]   15.0    15.0     15.0    15.0     15.0

Scenario 14
Flow    BSF I   BSF II   BFF I   BFF II   NFF
[a,f]   8.3     8.3      5.0     8.3      8.0
[a,h]   8.3     10.0     10.0    10.0     8.0
[a,g]   10.0    6.7      10.0    6.7      8.0
[c,h]   10.0    6.7      10.0    6.7      8.0
[b,h]   8.3     8.3      5.0     8.3      8.0
Table 5. Flow set and LP results for the flows’ end-to-end throughput for scenarios 15 to 18 described in Figure 16. All values in Mbps.

Scenario 15
Flow    BSF I   BSF II   BFF I   BFF II   NFF
[i,l]   20.0    20.0     20.0    20.0     20.0
[a,f]   25.0    25.0     25.0    25.0     25.0
[m,n]   10.0    10.0     10.0    10.0     10.0

Scenario 16
Flow    BSF I   BSF II   BFF I   BFF II   NFF
[i,l]   15.0    15.0     10.0    15.0     10.0
[i,e]   5.0     5.0      10.0    5.0      10.0
[a,c]   15.0    8.3      15.0    8.3      12.5
[a,f]   10.0    8.3      10.0    8.3      12.5
[m,n]   10.0    8.3      10.0    8.3      10.0

Scenario 17
Flow    BSF I   BSF II   BFF I   BFF II   NFF
[i,l]   25.0    25.0     25.0    25.0     20.0
[a,f]   10.0    22.5     10.0    22.5     20.0
[m,n]   25.0    12.5     25.0    12.5     20.0

Scenario 18
Flow    BSF I   BSF II   BFF I   BFF II   NFF
[i,l]   15.0    21.7     15.0    21.7     15.0
[i,e]   10.0    6.7      10.0    6.7      6.3
[a,c]   12.5    8.3      12.5    8.3      11.3
[a,f]   10.0    15.0     10.0    15.0     11.3
[m,n]   12.5    8.3      12.5    8.3      11.3
Table 6. Flow set and LP results for the flows’ end-to-end throughput for scenarios 19 and 20 described in Figure 17. All values in Mbps.

Scenario 19
Flow    BSF I   BSF II   BFF I   BFF II   NFF
[a,f]   53.3    60.0     60.0    50.0     40.0
[b,f]   33.3    30.0     30.0    35.0     40.0
[a,e]   33.3    30.0     30.0    35.0     40.0

Scenario 20
Flow    BSF I   BSF II   BFF I   BFF II   NFF
[a,c]   30.0    23.3     40.0    20.0     22.5
[a,d]   30.0    21.7     10.0    30.0     22.5
[a,e]   10.0    28.3     10.0    20.0     22.5
[a,f]   20.0    16.7     30.0    20.0     22.5
[b,f]   30.0    30.0     30.0    30.0     30.0
Table 7. Flow set and LP results for the flows’ end-to-end throughput for scenarios 21 to 24 described in Figure 18. All values in Mbps.

Scenario 21
Flow    BSF I   BSF II   BFF I   BFF II   NFF
[a,f]   60.0    60.0     60.0    60.0     60.0
[i,n]   210.0   210.0    210.0   210.0    210.0
[f,i]   30.0    30.0     30.0    30.0     30.0

Scenario 22
Flow    BSF I   BSF II   BFF I   BFF II   NFF
[a,i]   5.0     5.0      10.0    5.0      15.0
[a,f]   55.0    55.0     50.0    55.0     45.0
[f,n]   10.0    5.0      10.0    5.0      15.0
[i,n]   200.0   205.0    200.0   205.0    195.5

Scenario 23
Flow    BSF I   BSF II   BFF I   BFF II   NFF
[a,i]   20.0    21.7     5.0     21.7     15.0
[a,c]   20.0    16.7     50.0    16.7     22.5
[a,f]   20.5    21.7     5.0     21.7     22.5
[f,n]   10.0    3.3      5.0     3.3      15.0
[i,n]   190.0   203.3    200.0   203.3    185.0
[i,k]   10.0    3.3      5.0     3.3      10.0

Scenario 24
Flow    BSF I   BSF II   BFF I   BFF II   NFF
[a,e]   5.0     5.0      5.0     5.0      20.0
[a,c]   50.0    25.0     50.0    25.0     20.0
[a,f]   5.0     30.0     5.0     30.0     20.0
[i,n]   100.0   105.0    100.0   105.3    70.0
[i,m]   100.0   100.0    100.0   100.0    70.0
[i,k]   10.0    5.0      10.0    5.0      70.0
Table 8. Flow set and LP results for the flows’ end-to-end throughput for scenarios 25 to 28 described in Figure 19. All values in Mbps.
Scenario 25
Flow    BSF I   BSF II  BFF I   BFF II  NFF
[a,d]   195.0   195.0   195.0   195.0   190.0
[e,h]   15.0    15.0    15.0    15.0    10.0
[m,h]   5.0     5.0     5.0     5.0     10.0

Scenario 26
Flow    BSF I   BSF II  BFF I   BFF II  NFF
[a,d]   193.3   193.3   193.3   193.3   190.0
[e,h]   13.3    13.3    13.3    13.3    10.0
[m,h]   3.3     3.3     3.3     3.3     5.0
[n,h]   3.3     3.3     3.3     3.3     5.0

Scenario 27
Flow    BSF I   BSF II  BFF I   BFF II  NFF
[a,d]   150.0   150.0   150.0   150.0   150.0
[m,d]   50.0    50.0    50.0    50.0    50.0
[e,h]   20.0    20.0    20.0    20.0    20.0

Scenario 28
Flow    BSF I   BSF II  BFF I   BFF II  NFF
[a,d]   100.0   100.0   100.0   100.0   100.0
[m,d]   50.0    50.0    50.0    50.0    50.0
[n,b]   50.0    50.0    50.0    50.0    50.0
[e,h]   15.0    15.0    10.0    15.0    10.0
[g,h]   5.0     5.0     10.0    5.0     10.0
Table 9. Flow set and LP results for the flows’ end-to-end throughput for scenarios 29 to 31 described in Figure 20. All values in Mbps.
Scenario 29
Flow    BSF I   BSF II  BFF I   BFF II  NFF
[n,h]   25.0    25.0    25.0    25.0    50.0
[m,h]   25.0    50.0    25.0    50.0    50.0
[a,h]   100.0   75.0    100.0   75.0    50.0

Scenario 30
Flow    BSF I   BSF II  BFF I   BFF II  NFF
[n,h]   25.0    25.0    50.0    75.0    50.0
[m,h]   25.0    41.7    50.0    25.0    50.0
[a,h]   75.0    66.7    50.0    25.0    50.0
[a,d]   75.0    66.7    50.0    75.0    50.0

Scenario 31
Flow    BSF I   BSF II  BFF I   BFF II  NFF
[n,h]   50.0    16.7    25.0    16.7    33.3
[m,h]   16.7    20.0    25.0    41.7    33.3
[a,h]   16.7    70.0    50.0    66.7    33.3
[a,d]   16.7    66.7    50.0    41.7    33.3
[m,e]   50.0    10.0    25.0    16.7    33.3
[n,d]   50.0    16.7    25.0    16.7    33.3
Table 10. Performance evaluation for BSF-I. Alloc values in Mbps.
Scenario  Alloc   γ_Jain  γ_Vard  Alloc · γ_Jain
1         35.0    0.845   0.571   29.6
2         35.0    0.860   0.714   30.1
3         40.0    0.914   0.833   36.6
4         35.0    0.993   0.952   34.7
5         15.0    1.000   1.000   15.0
6         10.0    1.000   1.000   10.0
7         25.0    0.758   0.600   19.0
8         10.0    1.000   1.000   10.0
9         30.0    0.900   0.667   27.0
10        30.0    1.000   1.000   30.0
11        35.0    0.817   0.714   28.6
12        25.0    0.962   0.800   24.1
13        35.0    0.980   0.857   34.3
14        40.0    0.914   0.854   36.6
15        55.0    0.896   0.773   49.3
16        55.0    0.896   0.818   49.3
17        60.0    0.889   0.750   53.3
18        60.0    0.976   0.917   58.6
19        120.0   0.947   0.833   113.6
20        120.0   0.900   0.813   108.0
21        300.0   0.617   0.450   185.1
22        270.0   0.422   0.346   113.9
23        270.0   0.324   0.356   87.5
24        270.0   0.536   0.489   144.7
25        215.0   0.403   0.140   86.6
26        213.3   0.303   0.125   64.6
27        220.0   0.635   0.477   139.7
28        220.0   0.635   0.614   139.7
29        150.0   0.667   0.500   100.1
30        200.0   0.800   0.667   160.0
31        200.0   0.800   0.700   160.0
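As a reading aid for Tables 10–14: γ_Jain denotes Jain's fairness index computed over the per-flow end-to-end throughputs of a scenario. Assuming the standard definition from the literature,

\gamma_{\mathrm{Jain}}(x_1, \ldots, x_n) = \frac{\left( \sum_{i=1}^{n} x_i \right)^2}{n \sum_{i=1}^{n} x_i^2},

where a value of 1.0 means all flows receive equal throughput. A worked check against scenario 19 under BSF-I (Table 6, throughputs 53.3, 33.3 and 33.3 Mbps):

\gamma_{\mathrm{Jain}} = \frac{(53.3 + 33.3 + 33.3)^2}{3 \, (53.3^2 + 33.3^2 + 33.3^2)} = \frac{119.9^2}{3 \cdot 5058.67} \approx 0.947,

which matches row 19 above; Alloc is simply the column sum (≈ 120.0 Mbps) and Alloc · γ_Jain their product (≈ 113.6).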
Table 11. Performance evaluation for BSF-II. Alloc values in Mbps.
Scenario  Alloc   γ_Jain  γ_Vard  Alloc · γ_Jain
1         35      0.845   0.571   29.6
2         35      0.860   0.714   30.1
3         40      0.897   0.806   35.9
4         35      0.896   0.825   31.4
5         15      1.000   1.000   15.0
6         10      1.000   1.000   10.0
7         25      0.758   0.600   19.0
8         10      1.000   1.000   10.0
9         30      1.000   1.000   30.0
10        30      0.857   0.750   25.7
11        35      0.771   0.698   27.0
12        25      0.862   0.600   21.6
13        35      0.980   0.857   34.3
14        40      0.976   0.917   39.0
15        55      0.896   0.773   49.3
16        55      0.835   0.742   45.9
17        60      0.932   0.813   55.9
18        60      0.820   0.736   49.2
19        120     0.889   0.750   106.7
20        120     0.962   0.892   115.4
21        300     0.617   0.450   185.1
22        270     0.404   0.321   109.1
23        270     0.285   0.296   77.0
24        270     0.538   0.489   145.3
25        215     0.403   0.140   86.6
26        213     0.303   0.125   64.6
27        220     0.635   0.477   139.7
28        220     0.635   0.614   139.7
29        150     0.667   0.500   100.1
30        200     0.889   0.778   177.8
31        200     0.641   0.580   128.2
Table 12. Performance evaluation for BFF-I. Alloc values in Mbps.
Scenario  Alloc   γ_Jain  γ_Vard  Alloc · γ_Jain
1         35      0.845   0.571   29.6
2         35      0.961   0.857   33.6
3         40      0.914   0.833   36.6
4         35      0.817   0.714   28.6
5         15      1.000   1.000   15.0
6         10      1.000   1.000   10.0
7         25      0.758   0.600   19.0
8         10      1.000   1.000   10.0
9         30      0.900   0.667   27.0
10        30      0.857   0.750   25.7
11        35      0.817   0.714   28.6
12        25      0.862   0.600   21.6
13        35      0.980   0.857   34.3
14        40      0.914   0.813   36.6
15        55      0.896   0.773   49.3
16        55      0.968   0.909   53.2
17        60      0.889   0.750   53.3
18        60      0.976   0.917   58.6
19        120     0.889   0.750   106.7
20        120     0.800   0.708   96.0
21        300     0.617   0.450   185.1
22        270     0.427   0.346   115.3
23        270     0.285   0.289   77.0
24        270     0.536   0.489   144.7
25        215     0.403   0.140   86.6
26        213     0.303   0.125   64.6
27        220     0.635   0.477   139.7
28        220     0.637   0.614   140.1
29        150     0.667   0.500   100.1
30        200     1.000   1.000   200.0
31        200     0.889   0.800   177.8
Table 13. Performance evaluation for BFF-II. Alloc values in Mbps.
Scenario  Alloc   γ_Jain  γ_Vard  Alloc · γ_Jain
1         35      0.845   0.571   29.6
2         35      0.860   0.714   30.1
3         40      0.897   0.806   35.9
4         35      0.896   0.825   31.4
5         15      1.000   1.000   15.0
6         10      1.000   1.000   10.0
7         25      0.758   0.600   19.0
8         10      1.000   1.000   10.0
9         30      1.000   1.000   30.0
10        30      0.857   0.750   25.7
11        35      0.942   0.857   33.0
12        25      0.862   0.600   21.6
13        35      0.980   0.857   34.3
14        40      0.976   0.917   39.0
15        55      0.896   0.773   49.3
16        55      0.835   0.742   45.9
17        60      0.932   0.813   55.9
18        60      0.820   0.736   49.2
19        120     0.970   0.875   116.4
20        120     0.960   0.875   115.2
21        300     0.617   0.450   185.1
22        270     0.404   0.321   109.1
23        270     0.285   0.296   77.0
24        270     0.538   0.489   145.3
25        215     0.403   0.140   86.6
26        213     0.303   0.125   64.6
27        220     0.635   0.477   139.7
28        220     0.635   0.614   139.7
29        150     0.857   0.750   128.6
30        200     0.800   0.667   160.0
31        200     0.762   0.700   152.4
Table 14. Performance evaluation for NFF. Alloc values in Mbps.
Scenario  Alloc   γ_Jain  γ_Vard  Alloc · γ_Jain
1         35      0.845   0.571   29.6
2         35      0.990   0.929   34.7
3         40      1.000   1.000   40.0
4         35      1.000   1.000   35.0
5         15      1.000   1.000   15.0
6         10      1.000   1.000   10.0
7         25      0.758   0.600   18.95
8         10      1.000   1.000   10.0
9         30      1.000   1.000   30.0
10        30      1.000   1.000   30.0
11        35      1.000   1.000   35.0
12        25      1.000   1.000   25.0
13        35      0.980   0.857   34.3
14        40      1.000   1.000   40.0
15        55      0.896   0.773   49.3
16        55      0.988   0.932   54.3
17        60      1.000   1.000   60.0
18        60      0.985   0.938   59.1
19        120     1.000   1.000   120.0
20        120     0.985   0.938   118.2
21        300     0.617   0.450   185.1
22        270     0.450   0.370   121.5
23        260     0.379   0.361   98.5
24        270     0.764   0.667   206.3
25        210     0.405   0.143   85.1
26        210     0.304   0.127   63.8
27        220     0.635   0.477   139.7
28        220     0.637   0.614   140.1
29        150     1.000   1.000   150.0
30        200     1.000   1.000   200.0
31        200     1.000   1.000   200.0
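To mechanize the cross-check shown after Table 10 for any scenario, the following minimal Python sketch recomputes Alloc and γ_Jain from one column of per-flow throughputs. It is not part of the paper's tooling; the helper name jain_index and the hard-coded column are illustrative only, and the standard Jain formula is assumed.

# Recompute Alloc and Jain's fairness index from one column of per-flow
# end-to-end throughputs in Mbps (here: scenario 19 under BSF-I, Table 6).
def jain_index(throughputs):
    # (sum x)^2 / (n * sum x^2); equals 1.0 for a perfectly fair allocation
    n = len(throughputs)
    return sum(throughputs) ** 2 / (n * sum(x * x for x in throughputs))

bsf1_scenario19 = [53.3, 33.3, 33.3]      # flows [a,f], [b,f], [a,e]
alloc = sum(bsf1_scenario19)              # ~120.0 Mbps, cf. Table 10, row 19
print(f"Alloc = {alloc:.1f} Mbps, gamma_Jain = {jain_index(bsf1_scenario19):.3f}")

Running it prints γ_Jain ≈ 0.947, in agreement with the BSF-I row for scenario 19; the companion metric γ_Vard follows its own definition given earlier in the paper and is not reproduced here.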
