Article

Towards Distributed Lexicographically Fair Resource Allocation with an Indivisible Constraint

1
School of Computer Science and Engineering, Southeast University, Nanjing 210096, China
2
MOE Key Laboratory of Computer Network and Information Integration, Southeast University, Nanjing 210096, China
3
State Key Laboratory of Mathematical Engineering and Advanced Computing, Wuxi 214000, China
4
National Key Laboratory for Complex Systems Simulation, Department of Systems General Design, Institute of Systems Engineering, AMS, Beijing 100083, China
5
North China Institute of Computing Technology, Beijing 100083, China
*
Author to whom correspondence should be addressed.
Mathematics 2022, 10(3), 324; https://doi.org/10.3390/math10030324
Submission received: 3 November 2021 / Revised: 22 December 2021 / Accepted: 17 January 2022 / Published: 20 January 2022
(This article belongs to the Section Mathematics and Computer Science)

Abstract

In the cloud computing and big data era, data analysis jobs are usually executed over geo-distributed data centers to exploit data locality. When there are not enough resources to fully meet the demands of all jobs, allocating resources fairly becomes critical. Meanwhile, in many practical scenarios, the resources to be allocated are not infinitely divisible. In this paper, we focus on fair resource allocation for distributed job execution over multiple sites, where each allocation must respect a minimum indivisible unit. For this problem, we propose a novel scheme named Distributed Lexicographical Fairness (DLF) that specifies what fairness means in the new scenario. To study DLF, we follow the common research approach of first analyzing its economic properties and then proposing algorithms that output concrete DLF allocations. Our key idea is to transform DLF equivalently into a special max flow problem in the integral field. The transformation facilitates our study: by generalizing basic properties of DLF from the perspective of network flow, we prove that DLF satisfies Pareto efficiency, envy-freeness, strategy-proofness, relaxed sharing incentive and 1/2-maximin share. We then propose two algorithms. The first is a basic algorithm that simulates a water-filling process; however, our analysis shows that its time complexity is not strongly polynomial. To address this inefficiency, we further propose an iterative algorithm that combines parametric flow and push-relabel maximum flow techniques. By analyzing its steps, we show that its time complexity is strongly polynomial.

1. Introduction

In this paper we study fair resource allocation for distributed job execution over multiple sites, where resources are not infinitely divisible. The problem arises from cloud computing and big data analytics, where we notice two significant features. First, running data analysis jobs often requires a large amount of data that is usually stored at geo-distributed sites. Collecting all the data needed from different sites and then executing jobs at a central location would incur unacceptable transmission time. Hence, distributed data analysis jobs that execute close to their input data have received attention recently [1,2]. Job execution requires system resources. If multiple jobs need to execute at the same site but there are not enough resources to meet their demands, fair resource allocation becomes a critical problem. Second, in cloud computing, resources are usually allocated in the form of virtual machines. Although the amount of resources of a virtual machine is usually configurable, cloud providers often define a basic service unit that specifies the minimum amount of resources a virtual machine must have. Thus, when studying fair resource allocation, we consider the constraint that resources are not infinitely divisible.
Resource allocation is a classical combinatorial optimization problem in many fields such as computer science, manufacturing, and economics. In the past decades, fair resource allocation has received a lot of attention [3,4,5]. To study fair allocation, a reasonable scheme that defines fairness is essential. In the literature, max-min fairness is a popular scheme for defining fair allocation among competing demands [6]. B. Radunovic et al. [7] proved that on compact and convex sets max-min fairness is always achievable, and studied algorithms to achieve max-min fairness whenever it exists. However, they do not consider distributed job execution, which differs from our work.
Max-min fairness has been generalized to the fair allocation of multiple types of resources. A. Ghodsi et al. [8] proposed Dominant Resource Fairness (DRF). By defining a dominant share for each user, they proposed an algorithm that maximizes the minimum dominant share across all users. As an alternative to DRF, D. Dolev et al. [9] proposed “no justified complaints”, which focuses on the bottleneck resource type. DRF can sacrifice the efficiency of job execution. To achieve a better tradeoff between fairness and efficiency, T. Bonald et al. [10] proposed Bottleneck Max Fairness. Considering that different machines may have different configurations, W. Wang et al. [11] extended DRF to handle heterogeneous machines. All of the above studies focus on multi-resource allocation. In contrast, our problem arises from a distributed scenario where data cannot be migrated and the resources allocated to jobs are not infinitely divisible. Hence, none of these fairness schemes can be applied to our problem.
Y. Guan et al. [12] considered fair resource allocation in distributed job execution. By considering fairness with respect to the aggregate resources allocated, they defined max-min fairness under distributed settings. This work is close to ours. The key difference is that they assume resources are infinitely divisible, an assumption that does not hold in many practical scenarios. Furthermore, their fairness scheme cannot be applied in our scenario, because max-min fairness may not even exist under our settings. Hence, for the new problem addressed in this paper, it is still necessary to devise a new, reasonable fairness scheme.
To handle fair resource allocation in a distributed setting with a minimum indivisible resource unit, we set up the model in the integral field and propose a novel fair resource allocation scheme named Distributed Lexicographical Fairness (DLF) to specify the meaning of fairness. Under a DLF allocation, the aggregate resource allocation of each job across all sites (machines or data centers) is lexicographically optimal. To verify whether a newly defined fairness scheme is self-consistent, a usual way [8,9,12] is to study whether it satisfies critical economic properties such as Pareto efficiency, envy-freeness, strategy-proofness, maximin share and sharing incentive, and whether there exist efficient algorithms to compute a fair allocation.
To conduct our study, we leverage the idea of transforming DLF equivalently into a network flow model. This transformation allows us not only to study economic properties but also to design new algorithms based on efficient max flow algorithms. More precisely, we first generalize basic properties of DLF based on network flow theory and then use them to prove that DLF satisfies Pareto efficiency, envy-freeness, strategy-proofness, 1/2-maximin share, and relaxed sharing incentive. To compute a DLF allocation, we propose two algorithms based on max flow theory. The first, named the Basic Algorithm, simulates a water-filling procedure; however, its time complexity is not strongly polynomial, as it depends on the capacity of the sites. To improve efficiency, we further propose a novel iterative algorithm leveraging parametric flow techniques [13] and the push-relabel maximum flow algorithm. The complexity of the iterative algorithm decreases to $O(|V|^2 |E| \log(|V|^2/|E|))$, where $|V|$ is the number of jobs and sites, and $|E|$ is the number of edges in the flow network graph.
The contribution of this paper is summarized as follows.
  • We address a new distributed fair resource allocation problem, where resources are composed of indivisible units. To handle the problem, we propose a new scheme named Distributed Lexicographical Fairness (DLF).
  • We creatively transform DLF into a model based on network flow and generalize its basic properties.
  • By proving that DLF satisfies critical economic properties and proposing efficient algorithms to compute a DLF allocation, we confirm that DLF is self-consistent and is a reasonable definition of fairness in the scenario considered.
The rest of this paper is organized as follows. Section 2 introduces the system model and gives a formal definition of distributed lexicographical fairness. Section 3 remodels distributed lexicographical fairness using network flow theory. Section 4 proves basic properties, based on which Section 5 shows that distributed lexicographical fairness satisfies five critical economic properties. Section 6 presents two algorithms and analyzes their time complexities. Finally, Section 7 presents our concluding remarks and discusses future work.

2. System Model & Problem Definition

2.1. System Model

We consider a set of distributed sites $\mathcal{M} = \{M_1, M_2, \ldots, M_m\}$, where each site $M_j$ could be a cluster of servers or a data center, depending on the scale of the system modeled. Each site $M_j$ has a computing capacity $C_j$ that is measured by the number of computing slots. Note that each computing slot is not further divisible.
Suppose there is a set of $n$ distributed execution jobs $\mathcal{J} = \{J_1, J_2, \ldots, J_n\}$. A job is composed of multiple tasks. We assume each task has independent data inputs, so different tasks can run in parallel. We do not permit data migration between sites due to the unacceptable overhead. Because of data locality, each task can only be executed at a designated site. Any job $J_i \in \mathcal{J}$ may require resources at each site. Thus, for the set of jobs $\mathcal{J}$, we model the resource demands by an $n \times m$ matrix $D_{n \times m}$, where each entry $d_{ij}$ is job $J_i$'s resource demand at site $M_j$. We assume each task occupies one computing slot, so resource demand is modeled by the number of tasks. If $J_i$ has no task to run at a site $M_j$, we let $d_{ij} = 0$.
For the set of jobs $\mathcal{J}$, we use an $n \times m$ matrix $A_{n \times m}$ to represent the resource allocation. In $A_{n \times m}$, each entry $a_{ij}$ is the amount of resources that job $J_i$ receives from site $M_j$. Since resources are not infinitely divisible, we require that each $a_{ij}$ be a non-negative integer, i.e., we use the integer 1 to model the minimum resource unit.
Each site $M_j$ has a finite capacity. The capacity constraint (1) requires that the total amount of resources allocated at each site cannot exceed its capacity. On the other hand, it is not reasonable to allocate more resources than a job demands, which we state as the rational constraint (2):
$\forall M_j \in \mathcal{M}, \quad \sum_{i=1}^{n} a_{ij} \le C_j,$ (1)
$\forall J_i \in \mathcal{J}, \ \forall M_j \in \mathcal{M}, \quad 0 \le a_{ij} \le d_{ij}.$ (2)
In this paper, whenever a resource allocation $A_{n \times m}$ is claimed to be feasible, each entry $a_{ij}$ must be a non-negative integer and the above constraints (1) and (2) must be satisfied.
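To make the constraints concrete, the following Python sketch checks whether a given allocation matrix is feasible in the above sense; the function name and data layout are illustrative assumptions, not part of the model.

```python
def is_feasible(a, demands, capacities):
    """Check integrality plus the rational constraint (2) and the capacity constraint (1)."""
    n, m = len(demands), len(capacities)
    integral = all(isinstance(a[i][j], int) and a[i][j] >= 0
                   for i in range(n) for j in range(m))
    rational = all(a[i][j] <= demands[i][j]                          # constraint (2)
                   for i in range(n) for j in range(m))
    capacity = all(sum(a[i][j] for i in range(n)) <= capacities[j]   # constraint (1)
                   for j in range(m))
    return integral and rational and capacity
```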

2.2. Problem Definition

2.2.1. Single Site

We are interested in fair allocation for the set of jobs. The key is to specify the meaning of fairness in our model. We start the discussion from a single site. Before giving a formal definition, consider an example: four jobs $J_1$, $J_2$, $J_3$ and $J_4$ run on a single site $M_1$ whose computing capacity is 20 slots. Suppose $J_1$ requires 8 slots, $J_2$ requires 4 slots, $J_3$ requires 10 slots and $J_4$ needs 40 slots to execute their tasks. The demands of $J_1$, $J_2$, $J_3$ and $J_4$ are formulated by the vector $\langle 8, 4, 10, 40 \rangle$. The site $M_1$ cannot meet all the resource requirements at the same time, so how to allocate resources among the four jobs becomes critical. Consider a feasible allocation $\langle 6, 2, 6, 6 \rangle$. Intuitively, it is not fair enough, as we can increase $J_2$'s allocation by decreasing $J_1$'s allocation to obtain $\langle 5, 3, 6, 6 \rangle$. A similar adjustment between $J_2$ and $J_3$, increasing $J_2$'s allocation by decreasing $J_3$'s, yields $\langle 5, 4, 5, 6 \rangle$. Note that we cannot continue to increase $J_2$'s allocation by decreasing $J_4$'s allocation, as $J_2$'s demand is 4, so $\langle 5, 5, 5, 5 \rangle$ is not rational (i.e., not feasible).
From the above example, we can see that different allocations have different levels of fairness that need to be specified. Our idea is to make these levels comparable. Consider again the allocation vectors $\langle 6, 2, 6, 6 \rangle$, $\langle 5, 3, 6, 6 \rangle$ and $\langle 5, 4, 5, 6 \rangle$. If we rearrange them into monotone nondecreasing order, we obtain $\langle 2, 6, 6, 6 \rangle$, $\langle 3, 5, 6, 6 \rangle$ and $\langle 4, 5, 5, 6 \rangle$. We then consider $\langle 4, 5, 5, 6 \rangle$ to be the greatest of the three. Such a comparison arises from the lexicographical order, whose definition is given below.
Definition 1.
Let $X$ and $Y$ be two $n$-dimensional vectors sorted in monotone nondecreasing order. If $\exists t$ such that $X_t < Y_t$ and $\forall i < t$, $X_i = Y_i$, then $X < Y$; otherwise $X \ge Y$.
Note that the lexicographical order is a total order, so any two vectors are comparable. Accordingly, among all feasible allocations, we take the one with the greatest lexicographical order as the fairest allocation. Indeed, the fair allocation problem on a single site becomes a lexicographical optimization: finding the allocation with the greatest lexicographical order. Suppose $A$ is an $n$-dimensional vector. We use a function $\phi(A)$ to denote $A$ rearranged into monotone nondecreasing order. Lexicographical fairness is defined below.
Definition 2.
Suppose $X$ is a finite set of $n$-dimensional vectors. We say $A \in X$ satisfies lexicographical fairness if and only if $\forall B \in X$, $\phi(B) \le \phi(A)$.
Lexicographical fairness always exists because $X$ is finite. Definition 2 applies directly to the fair allocation problem on a single site if we take $X$ to be the set of all feasible allocations. Note that the lexicographically fair allocation may not be unique. In the above example, the three allocations $\langle 5, 4, 5, 6 \rangle$, $\langle 6, 4, 5, 5 \rangle$ and $\langle 5, 4, 6, 5 \rangle$ all satisfy lexicographical fairness.
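As an illustration of Definition 2, the following Python sketch computes one lexicographically fair allocation for a single site by water-filling: it repeatedly gives one slot to a job with the currently smallest allocation whose demand is not yet met. The function is a hypothetical helper written for this example only.

```python
def single_site_lex_fair(demands, capacity):
    """Return one lexicographically fair allocation on a single site."""
    alloc = [0] * len(demands)
    for _ in range(capacity):
        unmet = [i for i in range(len(demands)) if alloc[i] < demands[i]]
        if not unmet:
            break                      # every demand is already satisfied
        # give one indivisible slot to a job with the smallest allocation so far
        alloc[min(unmet, key=lambda i: alloc[i])] += 1
    return alloc

print(single_site_lex_fair([8, 4, 10, 40], 20))
# -> [6, 4, 5, 5]; its sorted form <4, 5, 5, 6> matches the fairest vector above
```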

2.2.2. Multiple Sites

In the following, we extend Definition 2 to the distributed setting: from one site to multiple sites. One intuitive way is to allocate the resources of each site independently and fairly according to Definition 2. However, this can lead to an unfair allocation from a system-wide view: the aggregate resources allocated to different jobs could be far from fair. In distributed resource allocation, the system-wide processing rate of each job is normally decided by the aggregate resources it receives. Thus, to handle the distributed setting from a system-wide view, we consider all the sites as a united resource pool. Equation (3) defines the job-wise allocation vector $A$ derived from the allocation matrix $A_{n \times m}$; $A$ gives the total amount of resources allocated to each job over all sites:
$A = \left\langle \sum_{j=1}^{m} a_{1j}, \ \sum_{j=1}^{m} a_{2j}, \ \ldots, \ \sum_{j=1}^{m} a_{nj} \right\rangle,$ (3)
where the $i$-th entry $A_i$ of $A$ is the aggregate resource allocated to job $J_i$. Distributed Lexicographical Fairness (DLF) requires that the job-wise vector $A$ satisfy lexicographical fairness.
Let $\mathcal{X}$ be the set of all feasible allocation matrices and let $X = \{A \mid A_{n \times m} \in \mathcal{X}\}$, i.e., the set of job-wise allocation vectors of the matrices in $\mathcal{X}$. We have the following definition of DLF.
Definition 3.
An allocation $A_{n \times m}$ satisfies Distributed Lexicographical Fairness if and only if its job-wise allocation vector $A$ satisfies lexicographical fairness over the set $X$.
Distributed Lexicographical Fairness is always achievable for resource allocation over a set of sites. To verify whether the definition of DLF is reasonable and self-consistent, we need to confirm that it not only satisfies common economic properties of fairness, including Pareto efficiency, envy-freeness, strategy-proofness, maximin share and sharing incentive, but also admits efficient algorithms that output a DLF allocation. Neither task is trivial. To facilitate our further study, we transform DLF equivalently into a network flow problem.

3. Problem Transformation

Network flow is a well-known topic in combinatorial optimization. Transforming DLF equivalently into a network flow problem gives us a good opportunity to apply network flow knowledge: based on existing network flow theory, we shall not only prove that DLF has good economic properties but also propose efficient algorithms that output a DLF allocation.
Transforming DLF means building a flow network that models jobs, sites, the capacity constraint, the rational constraint, and the integrality requirement. Before giving a formal description, we present a concrete case study. Consider two jobs $J_1$, $J_2$ executed over two sites $M_1$, $M_2$. The demands of the two jobs are $\langle 3, 1 \rangle$ and $\langle 0, 2 \rangle$, respectively, while the capacity of $M_1$ is 4 and the capacity of $M_2$ is 3. Figure 1 depicts the flow network built for this case. $J_1$, $J_2$ and $M_1$, $M_2$ all appear as nodes in the graph. We use directed edges between jobs and sites to express the demands (as edge capacities). $s$ and $t$ represent the source and sink nodes, respectively, which are essential for any flow network. We add directed edges from $s$ to the two jobs, whose capacity has no special constraint (expressed by $+\infty$). We also add a directed edge from every site to $t$, whose capacity is set to the capacity of that site. Our general idea is to use the amount of flow passing through $J_1$ and $J_2$ to model the corresponding allocations. It is well known that a feasible flow never exceeds the capacity of any edge in the flow network. With the above settings, any feasible flow satisfies the capacity and rational constraints. If we further require that the flow be integral (i.e., the amount of flow on any edge is an integer), a feasible allocation corresponds to a feasible flow and vice versa.
Now let us give a formal description of the problem transformation. Necessary notations are introduced first. We consider a graph $G = (V, E)$ with a capacity function $c: E \to \mathbb{Z}^{+}$, where $V = \{s\} \cup \mathcal{J} \cup \mathcal{M} \cup \{t\}$. $s$ and $t$ represent the source node and the sink node of the flow network, respectively. $\mathcal{J}$ is the set of nodes representing jobs and $\mathcal{M}$ is the set of nodes corresponding to sites. Each edge $e \in E$ is denoted as a pair of ordered nodes, i.e., $e = \langle v_p, v_q \rangle$. The capacity function $c$ is defined as follows.
$c(e) = \begin{cases} d_{ij} & \text{if } v_p = J_i \text{ and } v_q = M_j;\\ +\infty & \text{if } v_p = s \text{ and } v_q = J_i;\\ C_j & \text{if } v_p = M_j \text{ and } v_q = t;\\ 0 & \text{otherwise.} \end{cases}$
By removing the edges with zero capacity, we obtain the flow network graph used for the problem transformation, shown in Figure 2.
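A minimal sketch of this construction is given below, assuming the third-party networkx library (any max flow routine would do). Note that an integral maximum flow obtained this way corresponds to a feasible allocation that maximizes utilization, but it is not necessarily a DLF allocation.

```python
import networkx as nx

def build_flow_network(demands, capacities):
    n, m = len(demands), len(capacities)
    G = nx.DiGraph()
    for i in range(n):
        G.add_edge('s', ('J', i))                 # omitted capacity = +infinity
        for j in range(m):
            if demands[i][j] > 0:
                G.add_edge(('J', i), ('M', j), capacity=demands[i][j])
    for j in range(m):
        G.add_edge(('M', j), 't', capacity=capacities[j])
    return G

# The two-job example of Figure 1: demands <3,1> and <0,2>, capacities 4 and 3.
G = build_flow_network([[3, 1], [0, 2]], [4, 3])
flow_value, flow = nx.maximum_flow(G, 's', 't')
a = [[flow[('J', i)].get(('M', j), 0) for j in range(2)] for i in range(2)]
print(flow_value, a)   # |f| = 6; a_ij is read off the J_i -> M_j edge flows
```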
In the constructed flow network, we can also give a formal definition of feasibility. A flow $f$ on $G$ is called feasible if it satisfies two conditions: the amount of flow over any edge $e \in E$ is a non-negative integer, and it never exceeds the capacity, $f(e) \le c(e)$. For ease of presentation, we use $|f|$ to denote the total amount of flow of $f$, i.e., $|f| = \sum_{J_i \in \mathcal{J}} f(s, J_i)$, which is clearly also an integer. Based on $f(e) \le c(e)$, we have the two inequalities below:
$\forall \langle M_j, t \rangle \in E, \quad \sum_{i=1}^{n} f(J_i, M_j) \le c(M_j, t) = C_j,$ (5)
$\forall J_i \in V, \ \forall M_j \in V, \quad 0 \le f(J_i, M_j) \le d_{ij}.$ (6)
By interpreting $f(J_i, M_j)$ as $a_{ij}$, it is easy to verify that the above inequalities correspond to the capacity and rational constraints, respectively. Consequently, a feasible flow $f$ on $G$ corresponds to a feasible resource allocation $A_{n \times m}$, and vice versa. For each job $J_i$, its aggregate resource allocation is the amount of flow passing through node $J_i$ in graph $G$:
$\forall J_i \in \mathcal{J}, \quad A_i = f(s, J_i) = \sum_{j} f(J_i, M_j) = \sum_{j} a_{ij}.$
To avoid redundant notation, we also write $f \in X$ to mean that $f$ is feasible and let $\phi(f) = \phi(A)$ be the corresponding nondecreasing ordering. A DLF allocation corresponds to a lexicographically optimal flow, defined below.
Definition 4.
A flow $f \in X$ is lexicographically optimal if and only if $\forall f' \in X$, $\phi(f') \le \phi(f)$.

4. Basic Properties

In network flow theory, the maximum flow problem is well known [14]. One of the classic results is the augmenting path theorem [15,16], which plays an important role in many related algorithms [17]. A well-known property is that maximum flow algorithms also apply in the integral setting, where the amount of flow on each edge must be an integer. N. Megiddo [18,19] studied multisource and multisink lexicographically optimal flow; the algorithm proposed in [19] is a natural extension of Dinic's maximum flow algorithm [20]. In this paper, we also leverage maximum flow theory to carry out the key proofs. Note that if $f \in X$ is maximal, then $\forall f' \in X$, $|f'| \le |f|$.
Lemma 1.
If $f$ is a maximal flow on $G$, then $\forall M_j \in V$, $\sum_i f(J_i, M_j) = \min(C_j, \sum_i d_{ij})$.
Proof. 
By combining (5) and (6), we have $\sum_i f(J_i, M_j) \le \min(C_j, \sum_i d_{ij})$. If the inequality were strict, there would exist two successive edges $\langle J_i, M_j \rangle$ and $\langle M_j, t \rangle$ such that $f(J_i, M_j) < d_{ij}$ and $\sum_i f(J_i, M_j) < C_j$. Then $|f|$ could be increased along $s \rightarrow J_i \rightarrow M_j \rightarrow t$, which contradicts $f$ being maximal. □
In our model, if a flow $f \in X$ is lexicographically optimal, it is easy to verify that $f$ is also a maximal flow, but not vice versa; i.e., a lexicographically optimal flow is a special maximal flow. Given a feasible flow $f$ on $G$, an augmenting path is a directed simple path from $s$ to $t$ (e.g., $\mathcal{P} = \{s, v_1, v_2, \ldots, v_i, v_{i+1}, \ldots, t\}$) in the residual graph $G_f$. Similarly, an adjusting cycle $\mathcal{C}$ in $G_f$ is a directed simple cycle from $s$ to $s$ that does not pass through $t$. The capacity of a given path (or cycle) is defined to be the capacity of its bottleneck edge, e.g., for a path $\mathcal{P}$, $c(\mathcal{P}) = \min_{e = \langle v_i, v_{i+1} \rangle \in \mathcal{P}} c(e)$, and for a cycle $\mathcal{C}$, $c(\mathcal{C}) = \min_{e = \langle v_i, v_{i+1} \rangle \in \mathcal{C}} c(e)$. For ease of presentation, we consider augmenting paths and adjusting cycles to contain only edges with positive capacity, i.e., $c(\mathcal{P}) > 0$ and $c(\mathcal{C}) > 0$.
In our context, performing an augmentation means increasing the original flow $f$ by $\delta$ ($\delta \in \mathbb{Z}^{+}$ and $1 \le \delta \le c(\mathcal{P})$) along an augmenting path $\mathcal{P}$ in $G_f$, and performing an adjustment means adjusting the original flow $f$ by $\delta$ ($\delta \in \mathbb{Z}^{+}$ and $1 \le \delta \le c(\mathcal{C})$) along an adjusting cycle $\mathcal{C}$ in $G_f$. By performing an augmentation or an adjustment on a flow $f$, we obtain a new feasible flow. The difference is that only the augmentation increases $|f|$.
Claim 1.
For a given feasible flow $f$ on $G$, if there is no augmenting path passing through $J_k$ in $G_f$, then for any feasible flow $f'$ augmented from $f$, there is also no augmenting path passing through $J_k$ in $G_{f'}$.
Proof. 
We prove by contradiction: assume there exists a feasible flow $f'$ augmented from $f$ such that the residual graph $G_{f'}$ contains an augmenting path passing through $J_k$ (denoted by $P^*$ in the following). In our model, $c(s, J_k) = +\infty$, which implies $\langle s, J_k \rangle$ is an edge existing in any residual graph. Therefore, we only need to consider $P^*$ whose first two nodes are $s$ and $J_k$.
As $f'$ is augmented from $f$, we can perform successive augmentations on $f$ to get $f'$. Each augmentation is performed along an augmenting path. Hence, assume the successive augmentations are performed along a sequence of augmenting paths $\mathcal{P} = \{P_1, P_2, \ldots, P_r\}$, which lie respectively in the intermediate residual graphs $\{G_1, G_2, \ldots, G_r\}$ obtained during the augmentations. Here $G_1$ is exactly $G_f$, $G_2$ is obtained by performing an augmentation along $P_1$ on $G_1$, and so on. Next, we perform a recursive analysis that gradually reduces $\mathcal{P}$ and finally proves that $G_f$ also contains an augmenting path passing through $J_k$, which leads to a contradiction.
Find $P_\ell$, the last path in $\mathcal{P}$ that shares common nodes (other than $s$, $J_k$ and $t$) with $P^*$. A special case is that no such $P_\ell$ exists. This means that performing the augmentations along all the augmenting paths in $\mathcal{P}$, starting from $G_f$, has no influence on the existence of $P^*$, i.e., $P^*$ is already an augmenting path in $G_f$.
If such a $P_\ell$ exists, then after the augmentation is performed along $P_\ell$ in $G_\ell$, the follow-up augmentations along $P_{\ell+1}$, $P_{\ell+2}$, …, $P_r$ no longer affect the existence of $P^*$, i.e., $P^*$ exists in $G_{\ell+1}$, $G_{\ell+2}$, …, $G_r$. Now let us focus on finding an augmenting path passing through $J_k$ in $G_\ell$ (before the augmentation along $P_\ell$ is performed). Assume $v_x$ is the first node of $P^*$ that also appears in $P_\ell$. The augmentation along $P_\ell$ does not affect the existence of the first part of $P^*$, namely $\{s, J_k, \ldots, v_x\}$. The remaining part of $P^*$ may not exist in $G_\ell$; however, we can replace it with the part $\{v_x, \ldots, t\}$ of $P_\ell$. By concatenating the two parts, we find another augmenting path in $G_\ell$ passing through $J_k$. If $P_\ell$ is exactly $P_1$, we have already found an augmenting path passing through $J_k$ in $G_f$. Otherwise, denote the newly found augmenting path again by $P^*$, reduce $\mathcal{P}$ to $\{P_1, P_2, \ldots, P_{\ell-1}\}$ and restart the whole process. Since $\mathcal{P}$ is finite, the process stops after finitely many steps, which implies $G_f$ must contain an augmenting path passing through $J_k$. □
In our model, a maximal flow that is also lexicographically optimal must satisfy a special condition. In a residual graph $G_f$, we call a simple path a $J_p \rightarrow J_q$ path if it starts at $J_p$, ends at $J_q$, and does not pass through $s$ or $t$. Note that any $J_p \rightarrow J_q$ path contains only edges with positive capacity.
Theorem 2.
A maximal flow $f$ on $G$ is lexicographically optimal if and only if, for all $J_p$ and $J_q$ with $A_p \le A_q - 2$, there is no $J_p \rightarrow J_q$ path in the residual graph $G_f$.
Proof. 
First, we prove the “only if” side. Assume there exist $J_p$ and $J_q$ with $A_p \le A_q - 2$ such that a $J_p \rightarrow J_q$ path exists in $G_f$. Because of the existence of $\langle s, J_p \rangle$ and $\langle J_q, s \rangle$, we can perform a 1-unit adjustment from $s$ to $J_p$, then along the $J_p \rightarrow J_q$ path, and finally along the edge $\langle J_q, s \rangle$ back to $s$. By this adjustment, we obtain a new maximal flow $f'$ where $A'_p = A_p + 1 \le A_q - 1 = A'_q$. Therefore, $f'$ is lexicographically larger than $f$, i.e., $\phi(f') > \phi(f)$, which contradicts $f$ being lexicographically optimal.
Second, we prove the “if” side by contradiction. Suppose the flow $f$ is maximal on $G$ and satisfies the condition, but is not lexicographically optimal. Let $f_{opt}$ be a lexicographically optimal flow, so that $\phi(f_{opt}) > \phi(f)$ and $|f| = |f_{opt}|$. Comparing $f$ with $f_{opt}$, the set of jobs $\mathcal{J}$ is naturally divided into three parts: $\mathcal{J}_{<} = \{J_i \in \mathcal{J} \mid f(s, J_i) < f_{opt}(s, J_i)\}$, $\mathcal{J}_{=} = \{J_i \in \mathcal{J} \mid f(s, J_i) = f_{opt}(s, J_i)\}$ and $\mathcal{J}_{>} = \{J_i \in \mathcal{J} \mid f(s, J_i) > f_{opt}(s, J_i)\}$. Next, we construct a special graph $G_{diff}$ [21] to differentiate $f_{opt}$ and $f$.
Let $G_{diff} = (V_{diff}, E_{diff})$, where $V_{diff} = \mathcal{J} \cup \mathcal{M}$. We also define a capacity function $c_{diff}$ for the edges of $E_{diff}$:
  • if $f(J_i, M_j) \le f_{opt}(J_i, M_j)$, then $c_{diff}(J_i, M_j) = f_{opt}(J_i, M_j) - f(J_i, M_j)$ and $c_{diff}(M_j, J_i) = 0$;
  • otherwise, $c_{diff}(M_j, J_i) = f(J_i, M_j) - f_{opt}(J_i, M_j)$ and $c_{diff}(J_i, M_j) = 0$.
According to Lemma 1, for any site $M_j$ we have $f_{opt}(M_j, t) = f(M_j, t)$. Therefore, in the graph $G_{diff}$, for each $M_j$, we have $\sum_i c_{diff}(M_j, J_i) = \sum_i c_{diff}(J_i, M_j)$.
In $G_{diff}$, there could exist “positive cycles” (denoted by $\mathcal{C}$): for each edge $e$ of $\mathcal{C}$, $c_{diff}(e) > 0$. For a positive cycle $\mathcal{C}$, let $cap$ be the minimum capacity over all the edges contained in $\mathcal{C}$. We eliminate all such cycles by capacity reductions: for each edge $e$ of $\mathcal{C}$, we set $c_{diff}(e) = c_{diff}(e) - cap$. Clearly, after the reduction, $\mathcal{C}$ is no longer a positive cycle. Note that eliminating a positive cycle $\mathcal{C}$ in $G_{diff}$ is equivalent to performing an adjustment by $cap$ on $G_f$. For example, assume $J_i$ is a node included in the cycle $\mathcal{C}$. The adjustment starts from $s$, goes along the edge $\langle s, J_i \rangle$, then along the cycle $\mathcal{C}$ back to $J_i$, and finally along the edge $\langle J_i, s \rangle$ back to $s$. We can eliminate all the positive cycles and obtain a new graph $G'_{diff}$ by performing a sequence of such capacity reductions. Compared with $G_{diff}$, $G'_{diff}$ corresponds to another maximal flow $f'$, where $\forall J_i$, $f'(s, J_i) = f(s, J_i)$. Hence, $f'$ is not lexicographically optimal either. Moreover, $\mathcal{J}_{<}$, $\mathcal{J}_{=}$ and $\mathcal{J}_{>}$ are preserved during the capacity reductions. In the following, we focus on $G'_{diff}$.
In $G'_{diff}$, there must exist positive paths (paths in which every edge has positive capacity); otherwise the flow $f'$ is exactly $f_{opt}$, which would imply that $f$ (having the same job-wise allocation as $f'$) is also lexicographically optimal, contradicting our assumption. As there are no positive cycles in $G'_{diff}$, we can extend any positive path to a maximal positive path, i.e., one where no edge with positive capacity enters the starting point and no edge with positive capacity leaves the ending point. The minimum capacity of the edges in a maximal positive path is also denoted by $cap$ ($cap \ge 1$). Next, we show that for any maximal positive path in $G'_{diff}$, the starting point $J_p$ must belong to $\mathcal{J}_{<}$ and the ending point $J_q$ must belong to $\mathcal{J}_{>}$. Clearly, $J_p$ cannot belong to $\mathcal{J}_{>}$, because every node in $\mathcal{J}_{>}$ has positive entering edges. On the other hand, $J_p$ cannot belong to $\mathcal{J}_{=}$ or $\mathcal{M}$, since for each node in $\mathcal{J}_{=}$ or in $\mathcal{M}$, the total capacity of the positive entering edges equals the total capacity of the positive leaving edges. Consequently, $J_p$ can only belong to $\mathcal{J}_{<}$. Similarly, the ending point $J_q$ can only belong to $\mathcal{J}_{>}$. Now we show that a maximal positive path in $G'_{diff}$ corresponds to a $J_p \rightarrow J_q$ path in $G_f$. First, note that from $G_{diff}$ to $G'_{diff}$ we only decrease the capacity of some edges, so if a maximal positive path appears in $G'_{diff}$, it is also a (not necessarily maximal) positive path in $G_{diff}$.
Suppose $\langle J_i, M_j \rangle$ is a directed edge in the maximal positive path. Then $\langle J_i, M_j \rangle$ is also an edge with positive capacity in $G_{diff}$, and $c_f(J_i, M_j)$, the capacity of the edge $\langle J_i, M_j \rangle$ in $G_f$, satisfies:
$c_f(J_i, M_j) = c(J_i, M_j) - f(J_i, M_j) \ge \max(f_{opt}(J_i, M_j) - f(J_i, M_j), 0) = c_{diff}(J_i, M_j).$
Similarly, assume $\langle M_j, J_i \rangle$ is a directed edge in the maximal positive path. Then $\langle M_j, J_i \rangle$ is also an edge with positive capacity in $G_{diff}$, and $c_f(M_j, J_i)$, the capacity of the edge $\langle M_j, J_i \rangle$ in $G_f$, satisfies:
$c_f(M_j, J_i) = f(J_i, M_j) \ge \max(f(J_i, M_j) - f_{opt}(J_i, M_j), 0) = c_{diff}(M_j, J_i).$
The above two formulas together indicate that, for each edge of a maximal positive path in $G'_{diff}$, the capacity of the corresponding edge in $G_f$ is also positive. Without loss of generality, assume a maximal positive path starts at $J_p$ and ends at $J_q$. Then we obtain the corresponding $J_p \rightarrow J_q$ path in $G_f$. According to the assumption on $f$, the existence of this $J_p \rightarrow J_q$ path implies $A_p \ge A_q - 1$. Next, we show that $f$ must be lexicographically optimal.
First, assume $A_p \ge A_q$. Together with the existence of the maximal positive path from $J_p$ to $J_q$, we can infer that $A_p^{opt} > A_p \ge A_q > A_q^{opt}$. Since the problem is defined in the integral field, we obtain $A_p^{opt} \ge A_p + 1 \ge A_q + 1 \ge A_q^{opt} + 2$. Note that, since the maximal positive path exists, there is a $J_q \rightarrow J_p$ path (the reversed path from $J_q$ to $J_p$) in the residual graph $G_{f_{opt}}$. As the “only if” part of this theorem is already proven, we get $A_q^{opt} \ge A_p^{opt} - 1$. Combining the above, we obtain $A_p^{opt} \ge A_q^{opt} + 2 \ge A_p^{opt} + 1$, a contradiction. Hence only $A_p = A_q - 1$ can happen.
Consider $A_p = A_q - 1$. Note that in this case we still have the $J_p \rightarrow J_q$ path in $G_f$, and hence $A_q^{opt} \ge A_p^{opt} - 1$. Next, we focus on the maximal positive path from $J_p$ to $J_q$ in $G'_{diff}$ and perform capacity reductions by $cap$ along it. Recall that these capacity reductions correspond to an adjustment in $G_{f'}$, which results in a new maximal flow $f''$ satisfying $|f''| = |f'| = |f|$, $A''_p = A'_p + cap = A_p + cap$ and $A''_q = A'_q - cap = A_q - cap$. Together with $A_p = A_q - 1$, we have $A''_p = A''_q + 2\,cap - 1$. The new difference graph $G''_{diff}$ is obtained by these capacity reductions. Hence, in $G''_{diff}$, no positive edge enters node $J_p$ and no positive edge leaves node $J_q$. Therefore, we have $A_p^{opt} \ge A''_p$ and $A_q^{opt} \le A''_q$. Putting it all together, we have
$A_p^{opt} \ge A''_p = A''_q + 2\,cap - 1 \ge A_q^{opt} + 2\,cap - 1 \ge A_p^{opt} + 2\,cap - 2.$
Now assume $cap \ge 2$. According to the above inequality, we would have $A_p^{opt} \ge A_p^{opt} + 2$, which again results in a contradiction.
We can now conclude that for any maximal positive path in $G'_{diff}$ (w.l.o.g., with starting point $J_p$ and ending point $J_q$), we must have $A_p = A_q - 1$ and $cap = 1$. We perform capacity reductions by $cap = 1$ along such a maximal positive path. The corresponding adjustment increases $A_p$ by 1 and decreases $A_q$ by 1. Note that for the flow $f''$ obtained after the adjustment, $\phi(f'') = \phi(f') = \phi(f)$, which means the adjustment cannot improve $f$ in terms of lexicographical order. Finally, we continue to perform capacity reductions along maximal positive paths one by one until no positive edge remains in the difference graph (i.e., $f_{opt}$ is obtained). As no adjustment can improve $f$, $f$ is already lexicographically optimal. □
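The characterization of Theorem 2 can be checked directly on a given allocation: search the residual graph (forward edges $J_i \rightarrow M_j$ with $a_{ij} < d_{ij}$, reverse edges $M_j \rightarrow J_i$ with $a_{ij} > 0$) for a $J_p \rightarrow J_q$ path with $A_p \le A_q - 2$. The Python sketch below is an illustrative checker and assumes the input already corresponds to a maximal flow.

```python
from collections import deque

def satisfies_theorem_2(a, demands):
    """Return False iff some J_p -> J_q path exists with A_p <= A_q - 2."""
    n, m = len(a), len(a[0])
    A = [sum(row) for row in a]

    def jobs_reachable_from(p):
        seen_j, seen_m = {p}, set()
        queue = deque([('J', p)])
        while queue:
            kind, x = queue.popleft()
            if kind == 'J':                       # forward residual edges J_x -> M_j
                for j in range(m):
                    if a[x][j] < demands[x][j] and j not in seen_m:
                        seen_m.add(j); queue.append(('M', j))
            else:                                 # reverse residual edges M_x -> J_k
                for k in range(n):
                    if a[k][x] > 0 and k not in seen_j:
                        seen_j.add(k); queue.append(('J', k))
        return seen_j

    return all(A[p] > A[q] - 2
               for p in range(n) for q in jobs_reachable_from(p))
```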
Corollary 1.
If both $f$ and $f'$ are lexicographically optimal, they are interchangeable.
The above corollary follows directly from the proof of Theorem 2. One can construct the difference graph between $f$ and $f'$; then capacity reductions can be performed to eliminate all positive cycles and maximal positive paths, which is indeed a process of transforming one optimal flow into the other.
A lexicographically optimal flow is not unique in general. Let $LOF$ be the set of all lexicographically optimal flows. Next, we study how the aggregate resource obtained by a job varies among different optimal flows.
Definition 5.
For any $J_i \in \mathcal{J}$, the value interval $I_i$ is defined as the range of values of the aggregate resource $A_i$ over all $f \in LOF$.
Recall that our problem is defined in the integral field, so each value interval contains only integers. The following theorem shows that the length of any value interval is at most 1.
Theorem 3.
$\forall f, f' \in LOF$, $\forall J_i \in \mathcal{J}$, $|A_i - A'_i| \le 1$.
Proof. 
We prove by contradiction. Assume there exists a pair of flows $f, f' \in LOF$ and a job $J_p \in \mathcal{J}$ satisfying $A'_p - A_p \ge 2$. Based on the proof of Theorem 2, we construct the difference graph $G_{diff}$ between $f$ and $f'$ and aim to transform $f$ into $f'$. As $A_p$ needs to be increased, we have $J_p \in \mathcal{J}_{<}$. Moreover, during the transformation, there must be a moment when $J_p$ becomes the starting point of a maximal positive path. Suppose the ending point of this path is $J_q$. We have $J_q \in \mathcal{J}_{>}$ (so that $A_q > A'_q$) and $A_p = A_q - 1$. On the other hand, the reverse of this maximal path is a $J_q \rightarrow J_p$ path in the residual graph $G_{f'}$. According to Theorem 2, $A'_q \ge A'_p - 1$. Putting these together, $A'_p \ge A_p + 2 = A_q + 1 > A'_q + 1 \ge A'_p$, which is a contradiction since it yields $A'_p > A'_p$. □
Based on Theorem 3, we give a more specific definition of the value interval.
Definition 6.
A job $J_i$'s value interval $I_i$ is $[L, L]$ if and only if $A_i = L$ for all $f \in LOF$. A job $J_i$'s value interval $I_i$ is $[L, L+1]$ if and only if there exists a pair of flows $f, f' \in LOF$ such that $A_i = L$ and $A'_i = L + 1$.
Theorem 4.
For a job $J_p \in \mathcal{J}$, suppose $A_p = L$ under a given flow $f \in LOF$. $J_p$'s value interval is $[L, L+1]$ if and only if there exists a $J_p \rightarrow J_q$ path in the residual graph $G_f$ where $A_p = A_q - 1$.
Proof. 
For the “if” side: one can obtain a new flow $f' \in LOF$ by performing an adjustment along the edge $\langle s, J_p \rangle$, then along the $J_p \rightarrow J_q$ path, and finally along the edge $\langle J_q, s \rangle$ back to $s$. In the new flow $f'$, $A'_p = L + 1$, which implies $I_p = [L, L+1]$.
For the “only if” side: there exists a flow $f' \in LOF$ with $A'_p = L + 1$. Based on the proof of Theorem 2, we transform $f$ into $f'$: we first eliminate all positive cycles and then eliminate maximal positive paths one by one. During the transformation, we can find a maximal positive path that starts at $J_p$ and ends at another node $J_q$ satisfying $A_p = A_q - 1$. From this maximal path, we can identify the corresponding $J_p \rightarrow J_q$ path in $G_f$. □
By Theorem 4, we directly have the following two corollaries.
Corollary 2.
For a job $J_p \in \mathcal{J}$, suppose $A_p = L$ under a given flow $f \in LOF$. $J_p$'s value interval is $[L-1, L]$ if and only if there exists a $J_q \rightarrow J_p$ path in the residual graph $G_f$ where $A_q = A_p - 1$.
Corollary 3.
For a job $J_p \in \mathcal{J}$, suppose $A_p = L$ under a given flow $f \in LOF$. $J_p$'s value interval is $[L, L]$ if and only if, in the residual graph $G_f$, there exists neither a $J_p \rightarrow J_q$ path with $A_p = A_q - 1$ nor a $J_q \rightarrow J_p$ path with $A_q = A_p - 1$.
We define binary relations on value intervals in order to compare them. Let $I_p = [L_p, R_p]$ and $I_q = [L_q, R_q]$ be the value intervals of $J_p$ and $J_q$, respectively. $I_p < I_q$ if and only if $L_p < L_q$ or $R_p < R_q$. Symmetrically, $I_p > I_q$ if and only if $L_p > L_q$ or $R_p > R_q$. Finally, $I_p = I_q$ if and only if $L_p = L_q$ and $R_p = R_q$.
Theorem 5.
Suppose $f \in LOF$ and the residual graph $G_f$ contains a $J_p \rightarrow J_q$ path where $A_p = A_q - 1 = L$. Let $P$ denote the set of jobs passed by this $J_p \rightarrow J_q$ path. Then $\forall J_k \in P$, $I_k = [L, L+1]$.
Proof. 
The value intervals of $J_p$ and $J_q$ follow directly from Theorem 4 and Corollary 2, respectively; both equal $[L, L+1]$. Suppose $J_k \in P$ and $J_k$ is neither $J_p$ nor $J_q$. Clearly, there are a $J_p \rightarrow J_k$ path and a $J_k \rightarrow J_q$ path in the residual graph $G_f$. Assume $A_k \le L - 1$ under the current flow $f$. Then we can perform an adjustment by 1 along the edge $\langle s, J_k \rangle$, then along the $J_k \rightarrow J_q$ path, and finally along the edge $\langle J_q, s \rangle$ back to $s$. After the adjustment, we obtain a new flow $f'$ where $A'_k = A_k + 1 \le L$ and $A'_q = L$. This implies $\phi(f') > \phi(f)$, which contradicts $f$ being lexicographically optimal. Symmetrically, $A_k \ge L + 2$ is impossible due to the existence of the $J_p \rightarrow J_k$ path in $G_f$. Hence $A_k$ is $L$ or $L+1$ under $f$; in the former case the $J_k \rightarrow J_q$ path and Theorem 4, and in the latter case the $J_p \rightarrow J_k$ path and Corollary 2, give $I_k = [L, L+1]$. □
Definition 7.
A feasible flow $f$ on $G$ is lexicographically feasible if and only if $\forall J_p, J_q \in \mathcal{J}$ with $A_p \le A_q - 2$, no $J_p \rightarrow J_q$ path exists in the residual graph $G_f$.
Definition 8.
A lexicographically feasible flow $f$ on $G$ is called $v$-strict if and only if $\forall J_i \in \mathcal{J}$, $A_i \le v$, and if $A_i \le v - 1$, then there is no augmenting path passing through $J_i$ in $G_f$.
For any given lexicographically optimal flow, we obtain one unique value:
$v_{max} = \max\{A_1, A_2, \ldots, A_n\}.$
From Definitions 7 and 8, it is easy to see that a lexicographically optimal flow is $v_{max}$-strict. Additionally, we regard the empty flow ($|f| = 0$) as 0-strict. Starting from the empty flow, a lexicographically optimal flow can be obtained by carrying out a sequence of water-filling stages.
Definition 9.
A water-filling stage is performed on a $v$-strict flow ($0 \le v < v_{max}$): perform an augmentation by 1 for each job node in the set $\mathcal{J}_v = \{J_i \in \mathcal{J} \mid A_i = v\}$.
Note that a water-filling stage does not necessarily increase $A_i$ by 1 for every $J_i \in \mathcal{J}_v$, as there may already be no augmenting path passing through $J_i$ in $G_f$. According to Claim 1, if $A_i$ fails to be increased, it can no longer be increased during the following water-filling stages. That is also why, for a $v$-strict flow, a water-filling stage only needs to consider the nodes in $\mathcal{J}_v$.
Lemma 6.
A lexicographically optimal flow is obtained after v m a x water-filling stages.
Proof. 
This lemma holds provided that no flow obtained during the water-filling stages breaks lexicographical feasibility (Definition 7).
Without loss of generality, focus on one water-filling stage performed on a $v$-strict flow, where $0 \le v < v_{max}$. In this stage, a sequence of augmentations is performed, one for each node in the set $\mathcal{J}_v$. Clearly, before any augmentation is performed, the $v$-strict flow is lexicographically feasible. We need to prove that after any augmentation is successfully performed, the new flow obtained is still lexicographically feasible.
Suppose that after a sequence of augmentations the flow $f$ obtained is still lexicographically feasible. At this point, we can divide the jobs into three parts: $S_1 = \{J_i \in \mathcal{J} \mid A_i \le v - 1\}$, $S_2 = \{J_i \in \mathcal{J} \mid A_i = v\}$ and $S_3 = \{J_i \in \mathcal{J} \mid A_i = v + 1\}$. Consider that the next augmentation is executed along an augmenting path (denoted by $\mathcal{P}$) passing through node $J_k$, and assume that after the augmentation the new flow $f'$ is not lexicographically feasible, i.e., in $G_{f'}$ there exists a $J_p \rightarrow J_q$ path with $A'_p \le A'_q - 2$. Since all allocations are at most $v + 1$ during the stage, $A'_p \le v - 1$ and thus $J_p \in S_1$, which implies $A_p$ is not increased during the current water-filling stage. There are two cases. First, in $G_{f'}$, $\mathcal{P}$ and the $J_p \rightarrow J_q$ path share no common nodes. In this case, the $J_p \rightarrow J_q$ path also exists in $G_f$, as the augmentation does not affect its existence; moreover, under the flow $f$ we also have $A_p \le A_q - 2$, because both $A_p$ and $A_q$ are unchanged from $f$ to $f'$. This violates the assumption that $f$ is lexicographically feasible. Second, in $G_{f'}$, $\mathcal{P}$ and the $J_p \rightarrow J_q$ path intersect. Let node $x$ be the first node of the $J_p \rightarrow J_q$ path at which the two paths intersect. The sub-path of $J_p \rightarrow J_q$ from $J_p$ to $x$ is not affected by the augmentation, so it also exists in $G_f$; on the other hand, $\mathcal{P}$ has a sub-path from $x$ to $t$ in $G_f$. Therefore, we can find an augmenting path from $s$ to $J_p$, then from $J_p$ to $x$, and finally from $x$ to $t$ in $G_f$. However, this contradicts the fact that the $v$-strict flow at the beginning of the stage admits no augmenting path through $J_p$ (since $A_p \le v - 1$), which by Claim 1 carries over to $G_f$. Above all, we get the proof. □
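The water-filling process of Definition 9 and Lemma 6 can be turned into a simple (not strongly polynomial) procedure: starting from the empty flow, repeatedly pick the jobs with the smallest aggregate allocation and try to push one more unit for each of them along an augmenting path. The Python sketch below is an illustrative reading of the Basic Algorithm mentioned in the abstract, not the authors' reference implementation; jobs for which no augmenting path exists are discarded permanently, as justified by Claim 1.

```python
from collections import deque

def augment_one_unit(i, demands, capacities, a, used):
    """Push one extra unit for job J_i along an augmenting path s -> J_i -> ... -> t,
    searching the residual graph by BFS. Returns True on success."""
    n, m = len(demands), len(capacities)
    parent = {('J', i): None}
    queue = deque([('J', i)])
    while queue:
        kind, x = queue.popleft()
        if kind == 'J':
            for j in range(m):                          # forward edges with spare demand
                if a[x][j] < demands[x][j] and ('M', j) not in parent:
                    parent[('M', j)] = ('J', x); queue.append(('M', j))
        else:
            if used[x] < capacities[x]:                 # edge M_x -> t still has room
                used[x] += 1
                node = ('M', x)
                while parent[node] is not None:         # retrace and adjust by one unit
                    prev = parent[node]
                    if prev[0] == 'J':
                        a[prev[1]][node[1]] += 1        # forward edge J -> M
                    else:
                        a[node[1]][prev[1]] -= 1        # reverse edge M -> J (reroute)
                    node = prev
                return True
            for k in range(n):                          # reverse edges with existing flow
                if a[k][x] > 0 and ('J', k) not in parent:
                    parent[('J', k)] = ('M', x); queue.append(('J', k))
    return False

def dlf_basic(demands, capacities):
    """Water-filling: raise the jobs at the current minimum level one unit at a time."""
    n, m = len(demands), len(capacities)
    a = [[0] * m for _ in range(n)]
    used = [0] * m
    active = set(range(n))
    while active:
        level = min(sum(a[i]) for i in active)
        for i in [i for i in active if sum(a[i]) == level]:
            if not augment_one_unit(i, demands, capacities, a, used):
                active.discard(i)       # by Claim 1, J_i can never be augmented again
    return a
```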
Theorem 7.
Suppose $J_p$'s value interval is $I_p = [L-1, L]$. Under any $L$-strict flow $f$, if $A_p = L - 1$, then there exists a $J_p \rightarrow J_q$ path in $G_f$ where $A_q = L$. Symmetrically, if $A_p = L$, then there exists a $J_q \rightarrow J_p$ path in $G_f$ where $A_q = L - 1$.
Proof. 
Since $f$ is $L$-strict, we can perform a sequence of water-filling stages on $f$ to obtain a lexicographically optimal flow $f'$. Clearly, $A_p$ is not increased further during these water-filling stages, so $A'_p = A_p$. First, consider the case $A_p = L - 1$. According to Theorem 4, there exists in $G_{f'}$ a $J_p \rightarrow J_q$ path with $A'_q = A'_p + 1 = L$. Next, we prove that this $J_p \rightarrow J_q$ path also appears in $G_f$. From $f$ to $f'$, successive water-filling stages are performed. Every water-filling stage is composed of a sequence of augmentations, each of which corresponds to an augmenting path. Thus, we use an ordered set $S = \{P_1, P_2, \ldots, P_r\}$ to collect all augmenting paths (over all water-filling stages) used to augment $f$ to $f'$. Suppose $P_i \in S$ is the last element of $S$ that shares common nodes with the $J_p \rightarrow J_q$ path, and suppose the first common node of the two paths is $J_k$. Let $f_{i-1}$ denote the flow before the augmentation along $P_i$ is performed. Note that $f_{i-1}$ is lexicographically feasible according to the proof of Lemma 6. We can infer that the $J_p \rightarrow J_k$ path (a sub-path of $J_p \rightarrow J_q$) already appears in $G_{f_{i-1}}$. On the other hand, there is a path from $J_k$ to $t$ in $G_{f_{i-1}}$, namely the corresponding sub-path of $P_i$. Together with the edge $\langle s, J_p \rangle$, we find an augmenting path passing through $J_p$ in $G_{f_{i-1}}$. Recall that $f$ is $L$-strict, so there is no augmenting path passing through $J_p$ in $G_f$. By Claim 1, there should also be no augmenting path passing through $J_p$ in $G_{f_{i-1}}$, a contradiction. Therefore, no path $P_i \in S$ shares common nodes with the $J_p \rightarrow J_q$ path, which implies the $J_p \rightarrow J_q$ path also exists in $G_f$. The proof for the second case $A_p = L$ is symmetric: we can find an augmenting path passing through $J_q$ (with $A_q = L - 1$) in an intermediate residual graph, which again yields a contradiction. □
Corollary 4.
For any $L$-strict flow $f$ on $G$, if there exists a $J_p \rightarrow J_q$ path in $G_f$ where $A_p = A_q - 1 = L - 1$, then the same path also exists in $G_{f'}$ for any lexicographically optimal flow $f'$ that can be augmented from $f$.
Corollary 4 is essentially the converse of Theorem 7. Its proof can be obtained by applying the argument in the proof of Theorem 7 in the reverse direction.

5. Economic Properties

In this section, we investigate whether a DLF allocation (or, equivalently, a lexicographically optimal flow) satisfies the economic properties that are critical for verifying whether DLF is a reasonable definition of fairness in our scenario.

5.1. Pareto Efficiency and Envy-Freeness

Pareto efficiency: Increasing the allocation of a job must decrease the allocation of another job.
If Pareto efficiency is satisfied, either all the available resources are allocated or the total resource requirements are already fully met, i.e., resource utilization is maximized. Note that fairness is only an issue when resources cannot meet all the demands. Therefore, any reasonable fairness scheme should be Pareto efficient.
Theorem 8.
Distributed lexicographical fairness satisfies Pareto efficiency.
Proof. 
The proof is straightforward. Suppose DLF does not satisfy Pareto efficiency. Since a DLF allocation corresponds to a lexicographically optimal flow, this would imply that the lexicographically optimal flow is not maximal, which is a contradiction. □
Envy-freeness: no job prefers the allocation of any other job to its own.
Envy-freeness is also a usual requirement for any fairness scheme: under a fair allocation, no job should prefer the allocation of another job. In our setting, envy-freeness of a job $J_p$ can be represented by the following inequality.
$\forall J_q \in \mathcal{J}, \quad \sum_j \min(a_{qj}, d_{pj}) \le \sum_j a_{pj}.$
At first glance, a DLF allocation does not always satisfy envy-freeness. For example, consider two jobs, $J_1$ and $J_2$, each of which has one task to be executed on the same site $M_1$, which has a single slot. No matter which job gets the slot, the other one envies its allocation. This happens because our discussion is restricted to the integral field. Indeed, in our setting, $J_p$ never envies $J_q$'s allocation if $A_p \le A_q - 2$.
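For completeness, the envy test used in this section can be written as a small Python helper (an illustrative function, not part of the formal model):

```python
def envies(p, q, a, demands):
    """True iff J_p prefers J_q's bundle, i.e., the envy-freeness inequality is violated."""
    return sum(min(a[q][j], demands[p][j]) for j in range(len(a[p]))) > sum(a[p])
```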
Theorem 9.
$\forall J_p, J_q \in \mathcal{J}$, if $A_p \le A_q - 2$, then $J_p$ does not envy $J_q$'s allocation.
Proof. 
We prove by contradiction. Suppose $f$ is lexicographically optimal, but there exists a pair of jobs $J_p$ and $J_q$ with $A_p \le A_q - 2$ and $\sum_j \min(a_{qj}, d_{pj}) > \sum_j a_{pj}$. We can infer that there exists $M_j \in \mathcal{M}$ such that $\min(a_{qj}, d_{pj}) > a_{pj}$. Note that in $G_f$ the two edges $\langle J_p, M_j \rangle$ and $\langle M_j, J_q \rangle$ together form a $J_p \rightarrow J_q$ path. Combined with $A_p \le A_q - 2$, this contradicts the fact that $f$ is lexicographically optimal (Theorem 2). □

5.2. Strategy-Proofness

Strategy-proofness: No job can get more allocation by lying about its demands.
Strategy-proofness ensures incentive compatibility, which is important for a fairness scheme: no participant can game the scheme using its private information. In our setting, strategy-proofness ensures that a job cannot profit by misreporting its demands. Suppose $J_l$ lies about its demands. The reported demand matrix is denoted by $D'_{n \times m}$. Under $D'_{n \times m}$ we can still compute a lexicographically optimal allocation and model the problem by another flow graph, denoted by $G'$.
Since only $J_l$ lies, $\forall M_j \in \mathcal{M}$ and $\forall J_k \in \mathcal{J} \setminus \{J_l\}$, we have $d'_{kj} = d_{kj}$. For $J_l$, $d'_{lj}$ can be any non-negative integer, i.e., we do not assume any particular relation between $d'_{lj}$ and $d_{lj}$. Under the setting with misreporting, the allocation matrix is denoted by $A'$. When $A'$ is distributed lexicographically fair, the corresponding lexicographically optimal flow is denoted by $f' \in DLF'$, where $DLF'$ is the set of lexicographically optimal flows obtained under $D'_{n \times m}$.
With misreporting, it is easy to verify that for each job $J_i$ the value interval $I'_i$ is still of the form $[L, L]$ or $[L, L+1]$, where $L$ is a non-negative integer. For each job $J_i$, we define its useful allocation to be $\sum_j \min\{d_{ij}, a'_{ij}\}$, where the minimum ensures that the number of tasks of $J_i$ actually executed at any site does not exceed its true demand. Note that if $J_i$ is honest, then $\sum_j \min\{d_{ij}, a'_{ij}\} = \sum_j a'_{ij}$. To simplify the presentation, we let $U$ be the vector of useful aggregate allocations under misreporting:
$U = \left\langle \sum_j \min\{d_{1j}, a'_{1j}\}, \ \sum_j \min\{d_{2j}, a'_{2j}\}, \ \ldots, \ \sum_j \min\{d_{nj}, a'_{nj}\} \right\rangle.$
We define a similar notion called the useful value interval ($I^u$), where $I^u_k$ is the range of values of $U_k$ over the lexicographically optimal flows in $DLF'$. It can be verified that $I^u_i \le I'_i$ for each job $J_i$ and that for all honest jobs $J_k \in \mathcal{J} \setminus \{J_l\}$, $I^u_k = I'_k$.
Lemma 10.
For the lying job $J_l$, the length of the useful value interval $I^u_l$ can be larger than 1, i.e., $I^u_l = [L, R]$ where $R$ can be larger than $L + 1$.
Proof. 
This lemma can be verified by a concrete instance. Consider two jobs $J_1$, $J_2$ and two sites $M_1$, $M_2$. Suppose $J_1$ is the job that misreports its demand. The genuine and reported demand matrices are as follows:
$D_{n \times m} = \begin{pmatrix} 3 & 0 \\ 3 & 3 \end{pmatrix}, \qquad D'_{n \times m} = \begin{pmatrix} 3 & 3 \\ 3 & 3 \end{pmatrix}.$
Suppose each site has three slots to allocate. It is straightforward that each job obtains three slots in total in any $f' \in DLF'$. All possible allocations are as follows:
$A'_{n \times m} = \begin{pmatrix} 0 & 3 \\ 3 & 0 \end{pmatrix}, \quad \begin{pmatrix} 1 & 2 \\ 2 & 1 \end{pmatrix}, \quad \begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}, \quad \begin{pmatrix} 3 & 0 \\ 0 & 3 \end{pmatrix}.$
Clearly, in the above example, $I^u_1 = [0, 3]$, showing that $R$ can be larger than $L + 1$. □
From the above example, we can also see that $I^u_1$ is continuous, i.e., $U_1$ can be any integer in the set $\{0, 1, 2, 3\}$. In fact, the useful value interval of the lying job is always continuous.
Lemma 11.
The useful value interval $I^u_l$ of the lying job $J_l$ is continuous.
Proof. 
Select two flows $f_1, f_2 \in DLF'$ arbitrarily and suppose the useful allocations of the lying job $J_l$ are $U^1_l$ and $U^2_l$, respectively. Without loss of generality, consider $U^2_l - U^1_l \ge 2$. As $f_1$ and $f_2$ are lexicographically optimal, they are interchangeable (Corollary 1). Note that during the process of transforming $f_1$ into $f_2$ (which also changes $U^1_l$ into $U^2_l$), if we can bound the variation of $J_l$'s useful allocation by one unit at each step, then every integer between $U^1_l$ and $U^2_l$ must be attained (by some flow obtained during the transformation).
Similar to the proof of Theorem 2, we construct the graph $G_{diff}$ to capture the difference between $f_1$ and $f_2$, and then eliminate all positive cycles and maximal positive paths. Different from the proof of Theorem 2, here we bound the amount of each variation by one unit. For example, if $\mathcal{C}$ is a positive cycle, then only a one-unit reduction is performed each time: for each edge $e$ of $\mathcal{C}$, set $c_{diff}(e) = c_{diff}(e) - 1$. Similarly, each reduction along a maximal positive path is also limited to one unit. Now consider the variation of $U^1_l$ after one such reduction. There are two cases. First, $J_l$ is included in a positive cycle or lies in the middle of a maximal positive path. In this case, $J_l$ is adjacent to two site nodes; without loss of generality, suppose $M_i$ is the in-neighbor and $M_j$ is the out-neighbor. The one-unit reduction corresponds to losing one unit of resource from $M_i$ and obtaining one more unit of resource from $M_j$. Second, $J_l$ is the starting point or the ending point of a maximal positive path. In this case, the allocation of $J_l$ is increased by 1 (if $J_l$ is the starting point) or decreased by 1 (if $J_l$ is the ending point). It is not difficult to verify that in both cases the variation of the total useful allocation is at most 1. □
Now we give the definition of strategy-proofness in our setting.
Definition 10.
For distributed lexicographical fairness, strategy-proofness means that no job can obtain a larger useful value interval by lying about its demand: if $J_l$ lies, then $I^u_l \le I_l$.
Theorem 12.
Distributed lexicographical fairness satisfies strategy-proofness.
Proof. 
For convenience, when there is no misreporting, we denote the value interval of each job $J_i \in \mathcal{J}$ by $I_i = [L_i, R_i]$; when $J_l$ lies, we denote the value interval of each job $J_i$ by $I'_i = [L'_i, R'_i]$ and its useful value interval by $I^u_i = [L^u_i, R^u_i]$. To prove strategy-proofness, we need to show that $I^u_l \le I_l$ always holds.
In order to prove $I^u_l \le I_l$, we first show that $R^u_l \le R_l$. For any lexicographically optimal flow $f'$ under misreporting, we construct a vector called the restricted useful allocation as follows:
$T = \left\langle \min\{U_1, U_l\}, \ \min\{U_2, U_l\}, \ \ldots, \ \min\{U_n, U_l\} \right\rangle.$
Note that $T$ can always be obtained for a given $f'$, since $f'$ can be produced by a sequence of water-filling stages, which is a reversible procedure. In other words, we can push flow back and remove all of $J_l$'s useless resources to get $T$. The flow corresponding to $T$ is called a restricted flow $f_T$. Clearly, $f_T$ is a $U_l$-strict flow on $G'$, where $G'$ is the flow graph modeled under misreporting.
Next, we prove that $f_T$ is a $(U_l - 1)$-strict flow on $G$, where $G$ is the flow graph modeled without any misreporting. Consider the two residual graphs $G'_{f_T}$ and $G_{f_T}$. The differences between them are the capacities of the edges $\langle J_l, M_j \rangle$ for each $M_j \in \mathcal{M}$: the former is $d'_{lj} - \min\{d_{lj}, a'_{lj}\}$ and the latter is $d_{lj} - \min\{d_{lj}, a'_{lj}\}$. To prove that $f_T$ is a $(U_l - 1)$-strict flow on $G$, we need to show that in $G_{f_T}$ there is no augmenting path passing through any honest job node $J_i$ with $T_i \le U_l - 2$, and meanwhile that in $G_{f_T}$ there is no $J_p \rightarrow J_q$ path with $T_p \le T_q - 2$.
We prove by contradiction. First, suppose there exists an augmenting path $\mathcal{P}$ in $G_{f_T}$ passing through an honest job node $J_i$ with $T_i \le U_l - 2$. We can infer that $\mathcal{P}$ must also pass through $J_l$; otherwise, $\mathcal{P}$ would also be an augmenting path in $G'_{f_T}$, since the only edges on which $G_{f_T}$ and $G'_{f_T}$ differ are $\langle J_l, M_j \rangle$ ($M_j \in \mathcal{M}$). Note that $\mathcal{P}$ cannot be an augmenting path in $G'_{f_T}$, because $f_T$ is $U_l$-strict on $G'$. However, $\mathcal{P}$ passing through $J_l$ implies that there is a $J_i \rightarrow J_l$ path in $G'_{f_T}$ with $T_i \le T_l - 2$, which also contradicts the fact that $f_T$ is $U_l$-strict on $G'$. Second, suppose there is a $J_p \rightarrow J_q$ path in $G_{f_T}$ with $T_p \le T_q - 2$. Similarly, this $J_p \rightarrow J_q$ path must pass through $J_l$, which implies that a $J_p \rightarrow J_l$ path exists in $G'_{f_T}$. However, this also violates the fact that $f_T$ is $U_l$-strict on $G'$, since $T_p \le T_q - 2 \le U_l - 2$. Recall that $f'$ was arbitrarily selected; since continuing the water-filling on $G$ from $f_T$ only increases allocations, we must have $R^u_l \le R_l$.
Suppose $R_l = L$. According to the conclusion $R^u_l \le R_l$ obtained above, we have $R^u_l \le L$. To prove $I^u_l \le I_l$, we still need to show that the following case cannot happen: $I^u_l = [L, L]$ while $I_l = [L-1, L]$. We prove by contradiction: assume the above case happens, so $U_l = L$, and the $f_T$ defined above is an $L$-strict flow on $G'$ and an $(L-1)$-strict flow on $G$. Since $f_T$ is $(L-1)$-strict on $G$, the jobs eligible for the next water-filling stage on $G$ belong to the set $\{J_i \in \mathcal{J} \mid T_i = U_l - 1 = L - 1\}$. Assume $J_p \in \{J_i \in \mathcal{J} \mid T_i = L - 1\}$. Note that in the residual graph $G_{f_T}$, any augmenting path passing through $J_p$ must also pass through $J_l$; otherwise, the path would also exist in $G'_{f_T}$, violating the fact that $f_T$ is an $L$-strict flow on $G'$. Such an augmenting path passing through $J_l$ means that a $J_p \rightarrow J_l$ path exists in $G_{f_T}$. Considering the flow $f_T$ on $G$, we continue to perform water-filling stages until a lexicographically optimal flow $f$ is obtained. Note that, as $J_p \in \{J_i \in \mathcal{J} \mid T_i = L - 1\}$, its allocation is not increased further during these water-filling stages. Therefore, we have:
A p = U p = L 1 < L = U A .
Note that according to Corollary 4 the $J_p \to J_\ell$ path still exists in $G_{f}$. There are two cases. First, $U_\ell < A_\ell$, which implies $A_p \le A_\ell - 2$. This contradicts the fact that $f$ is lexicographically optimal. Second, consider $U_\ell = A_\ell$. The existing $J_p \to J_\ell$ path in $G_{f}$ implies $I_\ell^{u} = [L-1, L]$, contradicting the assumption $I_\ell^{u} = [L, L]$.
Finally, if no job in the set $\{J_i \in \mathcal{J} \mid T_i = L-1\}$ can successfully perform an augmentation, then $f_T$ is also an $L$-strict flow on $G$. Since $I_\ell = [L-1, L]$, by Theorem 7 there is a $J_p \to J_\ell$ path in $G_{f_T}$ where $J_p$'s allocation is $L-1$ under the flow $f_T$ on $G$. Similarly, we can infer that this $J_p \to J_\ell$ path also appears in $G'_{f_T}$. Again, by Corollary 4, the $J_p \to J_\ell$ path exists in $G'_{f'}$, so that $I_\ell^{u}$ cannot be $[L, L]$. □

5.3. Maximin Share

Maximin share [22]: each job is asked to divide the set of $m$ indivisible resources into $n$ bundles and then receives the bundle it values least.
Maximin share ($MMS$) is a well-established notion in the fair allocation of indivisible resources. If a fair scheme satisfies $MMS$, every participant's allocation is at least the average case. In our setting, the maximin share of each job $J_i$ is defined as follows.
$$MMS_i = \frac{1}{n} \sum_{j=1}^{m} \min(C_j,\ n \cdot d_{ij})$$
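For illustration, the per-job maximin share above can be computed directly from the demand matrix and the site capacities. The following Python helper is a minimal sketch; the identifiers `d`, `C` and the function name are ours, not the paper's.

```python
# Illustrative sketch: compute MMS_i = (1/n) * sum_j min(C_j, n * d_ij)
# for a demand matrix d (n jobs x m sites) and site capacities C.
def maximin_share(d, C, i):
    n = len(d)                      # number of jobs
    return sum(min(C[j], n * d[i][j]) for j in range(len(C))) / n
```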
Theorem 13.
Distributed lexicographical fairness satisfies $\frac{1}{2}$-maximin share.
Proof. 
Select a job $J_k \in \mathcal{J}$ arbitrarily. We know that $J_k$'s value interval $I_k$ is $[L-1, L]$ or $[L, L]$. We consider an $L$-strict flow $f$ on a new flow graph $G_L = (V, E)$, where the node set $V$ and the edge set $E$ are the same as those of the flow graph $G$; the only difference is that for every edge $\langle s, J_i \rangle$ ($J_i \in \mathcal{J}$) we define $c(s, J_i) = L$. Clearly, any $L$-strict flow on such a graph $G_L$ is a maximum flow.
It is well known that a maximum flow corresponds to a minimum cut. Suppose $(V_L, \bar{V}_L)$ is a minimum cut of $G_L$, where $V_L, \bar{V}_L \subseteq V$, $V = V_L \cup \bar{V}_L$ and $V_L \cap \bar{V}_L = \emptyset$. The minimum cut of $G_L$ is not necessarily unique. In the following, we consider a minimum cut satisfying $J_k \in V_L$. Note that such a minimum cut must exist; otherwise it would contradict the fact that $J_k$'s value interval is $[L-1, L]$ or $[L, L]$.
Denote $V_L = \{s\} \cup \mathcal{J}_1 \cup \mathcal{M}_1$ and $\bar{V}_L = \mathcal{J}_2 \cup \mathcal{M}_2 \cup \{t\}$, where $\mathcal{J} = \mathcal{J}_1 \cup \mathcal{J}_2$ and $\mathcal{M} = \mathcal{M}_1 \cup \mathcal{M}_2$. By considering $f$ on $G_L$, we have:
$$\sum_{J_i \in \mathcal{J}_1} A_i = \sum_{M_j \in \mathcal{M}_1} C_j + \sum_{J_i \in \mathcal{J}_1} \sum_{M_j \in \mathcal{M}_2} d_{ij} \tag{10}$$
Equation (10) follows from the minimum cut: edges from $V_L$ to $\bar{V}_L$ must be saturated, while edges from $\bar{V}_L$ to $V_L$ carry no flow (otherwise it would contradict that $f$ is a maximum flow on $G_L$). Let $r = |\mathcal{J}_1| \le n$; we have:
$$\sum_{M_j \in \mathcal{M}_1} \min(C_j,\ n d_{kj}) \le \sum_{M_j \in \mathcal{M}_1} C_j \le \sum_{J_i \in \mathcal{J}_1} A_i \le rL \tag{11}$$
The second inequality follows from Equation (10) and $d_{ij} \ge 0$, and the last inequality holds because $f$ is $L$-strict. Note that if $A_k = L-1$, the last inequality is strict. Furthermore, we have:
$$\frac{1}{n} \sum_{M_j \in \mathcal{M}_1} \min(C_j,\ n d_{kj}) \le \frac{1}{r} \sum_{M_j \in \mathcal{M}_1} \min(C_j,\ n d_{kj}) \le \frac{1}{r} \sum_{J_i \in \mathcal{J}_1} A_i \le A_k \tag{12}$$
The second inequality is from (11). The last inequality holds because the round-down average allocation of the jobs in $\mathcal{J}_1$ cannot reach $L$ if there exists one job in $\mathcal{J}_1$ whose allocation is less than $L$; on the other hand, if the allocation of every job in $\mathcal{J}_1$ equals $L$, the inequality is still true since then $A_k = L$.
For the set M 2 we have:
$$\frac{1}{n} \sum_{M_j \in \mathcal{M}_2} \min(C_j,\ n d_{kj}) \le \frac{1}{n} \sum_{M_j \in \mathcal{M}_2} n d_{kj} = \sum_{M_j \in \mathcal{M}_2} d_{kj} \le A_k \tag{13}$$
The last inequality in (13) is based on the property of the minimum cut ( V L , V ¯ L ) . Combining (12) and (13) together, we have:
$$\frac{1}{n} \sum_{M_j \in \mathcal{M}} \min(C_j,\ n d_{kj}) = \frac{1}{n} \sum_{M_j \in \mathcal{M}_1} \min(C_j,\ n d_{kj}) + \frac{1}{n} \sum_{M_j \in \mathcal{M}_2} \min(C_j,\ n d_{kj}) \le \frac{1}{n} \sum_{M_j \in \mathcal{M}_1} \min(C_j,\ n d_{kj}) + \sum_{M_j \in \mathcal{M}_2} d_{kj} \le 2 A_k \tag{14}$$
In other words, $A_k \ge \frac{1}{2} MMS_k$. Thus, the bound $\frac{1}{2}$ is obtained. □
We now provide a simple instance to show that the above bound $\frac{1}{2}$ is tight. Consider an instance composed of two jobs and two sites, where each site has two slots to allocate. The demand matrix is given on the left and a possible distributed lexicographically fair allocation on the right.
$$D_{2 \times 2} = \begin{pmatrix} 1 & 1 \\ 2 & 0 \end{pmatrix} \qquad A_{2 \times 2} = \begin{pmatrix} 0 & 1 \\ 2 & 0 \end{pmatrix}$$
We have $A_1 = 1$ and the value interval is $I_1 = [1, 2]$. The maximin share of $J_1$ is $MMS_1 = 2$; thus $A_1 = \frac{1}{2} MMS_1$.
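As a quick sanity check, the helper sketched above reproduces this calculation (illustrative code, not from the paper):

```python
d = [[1, 1], [2, 0]]            # demand matrix D
C = [2, 2]                      # two slots per site
print(maximin_share(d, C, 0))   # 2.0, so A_1 = 1 = MMS_1 / 2
```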

5.4. Sharing Incentive

Sharing incentive: Each job should be better off sharing the total resources than exclusively using its own partition of the total resources.
In our scenario, sharing incentive means that each job $J_i$ should be allocated at least a $\frac{1}{n}$ fraction of the resources. It is similar to maximin share and is typically used when resources are infinitely divisible. Although we are interested in a different scenario, we shall show that DLF satisfies a relaxed sharing incentive. Specifically, when the whole system is considered as a single resource pool, the following formula holds.
$$\sum_{M_j \in \mathcal{M}} a_{ij} \ge \frac{1}{n} \sum_{M_j \in \mathcal{M}} \min(C_j,\ d_{ij})$$
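This relaxed guarantee can be checked mechanically for a concrete allocation. The sketch below is illustrative only, with `a` denoting an assumed allocation matrix $(a_{ij})$ of the same shape as the demand matrix `d`.

```python
# Illustrative check of the relaxed sharing incentive for job i:
# sum_j a_ij >= (1/n) * sum_j min(C_j, d_ij).
def satisfies_relaxed_sharing_incentive(a, d, C, i):
    n = len(d)
    return sum(a[i]) >= sum(min(C[j], d[i][j]) for j in range(len(C))) / n
```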
Theorem 14.
Distributed lexicographical fairness satisfies relaxed sharing incentive.
Proof. 
Suppose $J_k \in \mathcal{J}$ is a job whose value interval $I_k$ is $[L-1, L]$ or $[L, L]$. We consider an $L$-strict flow $f$ on a new flow graph $G_L = (V, E)$, where the node set $V$ and the edge set $E$ are the same as those of the flow graph $G$; the only difference is that for every edge $\langle s, J_i \rangle$ ($J_i \in \mathcal{J}$) we define $c(s, J_i) = L$. Clearly, any $L$-strict flow on such a graph $G_L$ is a maximum flow.
It is well known that a maximum flow corresponds to a minimum cut. Suppose $(V_L, \bar{V}_L)$ is a minimum cut of $G_L$, where $V_L, \bar{V}_L \subseteq V$, $V = V_L \cup \bar{V}_L$ and $V_L \cap \bar{V}_L = \emptyset$. The minimum cut of $G_L$ is not necessarily unique; in the following, we consider a minimum cut satisfying $J_k \in V_L$. Such a minimum cut must exist, otherwise it would contradict the fact that $J_k$'s value interval is $[L-1, L]$ or $[L, L]$. Denote $V_L = \{s\} \cup \mathcal{J}_1 \cup \mathcal{M}_1$ and $\bar{V}_L = \mathcal{J}_2 \cup \mathcal{M}_2 \cup \{t\}$, such that $\mathcal{J} = \mathcal{J}_1 \cup \mathcal{J}_2$ and $\mathcal{M} = \mathcal{M}_1 \cup \mathcal{M}_2$. Then we have:
$$\sum_{J_i \in \mathcal{J}_1} A_i = \sum_{M_j \in \mathcal{M}_1} C_j + \sum_{J_i \in \mathcal{J}_1} \sum_{M_j \in \mathcal{M}_2} d_{ij} \tag{17}$$
Equation (17) follows from the minimum cut: edges from $V_L$ to $\bar{V}_L$ must be saturated, while the reverse arcs carry no flow (otherwise it would contradict that $f$ is a maximum flow on $G_L$). Let $r = |\mathcal{J}_1|$. Since $r \le n$, we have:
$$\frac{n}{r} \sum_{J_i \in \mathcal{J}_1} A_i \ge \sum_{J_i \in \mathcal{J}_1} A_i \ge \sum_{M_j \in \mathcal{M}_1} \min(C_j,\ d_{kj}) + \sum_{M_j \in \mathcal{M}_2} d_{kj} \ge \sum_{M_j \in \mathcal{M}} \min(C_j,\ d_{kj}) \tag{18}$$
Equation (18) can be rewritten as:
$$\frac{1}{r} \sum_{J_i \in \mathcal{J}_1} A_i \ge \frac{1}{n} \sum_{M_j \in \mathcal{M}} \min(C_j,\ d_{kj}) \tag{19}$$
If $I_k = [L-1, L]$ and $A_k = L-1$, then the average value is smaller than $L$; otherwise, the average value is not larger than $L$. Combining these together, we have:
$$A_k \ge \frac{1}{r} \sum_{J_i \in \mathcal{J}_1} A_i \ge \frac{1}{n} \sum_{M_j \in \mathcal{M}} \min(C_j,\ d_{kj})$$
 □

6. Algorithms

In this section, we propose two network flow-based algorithms to compute a distributed lexicographically fair allocation (or, equivalently, a lexicographically optimal flow). We use the technique of parametric flow, in which the edge capacities of a flow network may be functions of a real-valued parameter. A special case of parametric flow was studied by Gallo et al. [13], who extended the Goldberg–Tarjan push-relabel maximum flow algorithm [23] to the parametric setting. In this paper, the techniques used are based on the work of [13,23].
Generally, the capacity of each edge in a parametric flow network is a function of a parameter $\lambda \in \mathbb{R}$. The capacity function is denoted by $c_\lambda$, and the following three conditions hold:
  • $c_\lambda(s, v)$ is a non-decreasing function of $\lambda$ for all $v \ne t$;
  • $c_\lambda(v, t)$ is a non-increasing function of $\lambda$ for all $v \ne s$;
  • $c_\lambda(v, w)$ is constant for all $v \ne s$ and $w \ne t$.
The parametric flow graph for our problem is obtained by setting the capacity of each edge $\langle s, J_i \rangle$ ($J_i \in \mathcal{J}$) in Formula (4) to $c_\lambda(s, J_i) = \lambda$. Figure 3 depicts an example of the parametric flow graph in our problem. Compared with the example drawn in Figure 2, each edge taking $s$ as one endpoint has a parametric capacity $\lambda$ rather than infinity. A flow graph $G$ with capacity function $c_\lambda$ is called the parametric flow graph $G_\lambda$. Clearly, with the above settings, the three conditions are all satisfied, since $c_\lambda(s, J_i) = \lambda$ is a non-decreasing function of $\lambda$ and $c_\lambda(M_j, t) = C_j$ is a constant. Furthermore, we restrict $\lambda$ to non-negative integers so as to fit the integral solution space.
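As an illustration of this construction (not the authors' implementation), the following Python sketch builds $G_\lambda$ for a given integer $\lambda$ and queries a maximum flow and a minimum cut with networkx; the demand matrix `d` and capacities `C` are assumed inputs.

```python
import networkx as nx

def parametric_graph(d, C, lam):
    """Build G_lambda: c(s, J_i) = lambda, c(J_i, M_j) = d_ij, c(M_j, t) = C_j."""
    n, m = len(d), len(C)
    G = nx.DiGraph()
    for i in range(n):
        G.add_edge("s", f"J{i}", capacity=lam)                  # parametric capacity
        for j in range(m):
            if d[i][j] > 0:
                G.add_edge(f"J{i}", f"M{j}", capacity=d[i][j])  # constant demand
    for j in range(m):
        G.add_edge(f"M{j}", "t", capacity=C[j])                 # constant site capacity
    return G

# Example with a small instance: F(lambda) is the max-flow value of G_lambda.
d, C = [[1, 1], [2, 0]], [2, 2]
for lam in range(5):
    G = parametric_graph(d, C, lam)
    value, _ = nx.maximum_flow(G, "s", "t")
    _, (V, V_bar) = nx.minimum_cut(G, "s", "t")
    print(lam, value, sorted(V))   # F(lambda) and the source side of one min cut
```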
It is well known that maximum flow algorithms are also applicable in the discrete setting; the following lemma follows directly from Lemma 2.4 in [13].
Lemma 15.
For a given online sequence of discrete parameter values $\lambda_1 < \lambda_2 < \cdots < \lambda_\ell$ and corresponding parametric graphs $G_{\lambda_1}, G_{\lambda_2}, \ldots, G_{\lambda_\ell}$, any maximum flow algorithm can correctly obtain the maximum flows $f_1, f_2, \ldots, f_\ell$ and the corresponding minimum cuts $(V_1, \bar{V}_1), (V_2, \bar{V}_2), \ldots, (V_\ell, \bar{V}_\ell)$, where $V_1 \subseteq V_2 \subseteq \cdots \subseteq V_\ell$.
The key insight of parametric flow is to represent the capacity of minimum cut (or the amount of maximal flow) as a piece-wise linear function of λ , denoted by F ( λ ) . For a given λ k and the corresponding flow graph G λ k , we let f k and ( V k , V ¯ k ) be respectively the maximum flow and minimum cut, where V k = { s } J s M s and V ¯ k = J t M t { t } . Here J s ( M s ) denotes the set of job nodes (site nodes) in V k , J t ( M t ) denotes the set of job nodes (site nodes) in V ¯ k such that J = J s J t , M = M s M t . The cut function F ( λ ) for the minimum cut ( V k , V ¯ k ) is given in the following:
$$F(\lambda) = |\mathcal{J}_t| \cdot (\lambda - \lambda_k) + |f_k|$$
$|\mathcal{J}_t|$ is the slope of $F(\lambda)$. Clearly, the slope is determined by the minimum cut. The minimum cut of $G_{\lambda_k}$ is often not unique; however, it is always possible to find the maximal (minimal) minimum cut such that $|V_k|$ is maximized (minimized). Let $(V_k', \bar{V}_k')$ and $(V_k'', \bar{V}_k'')$ be the minimal and the maximal minimum cut of $G_{\lambda_k}$, so that $V_k' \subseteq V_k''$. For each job $J_i \in V_k'' \setminus V_k'$, we have $f_k(s, J_i) = \lambda_k$. Moreover, for every $k' > k$, $J_i$ always belongs to $V_{k'}$, the side containing $s$, and $f_{k'}(s, J_i) = \lambda_k$. We can also infer that the maximum slope of the cut function corresponding to $G_{\lambda_{k+1}}$ is not larger than the minimum slope of the cut function corresponding to $G_{\lambda_k}$, i.e., the slope of the piece-wise function $F(\lambda)$ is non-increasing as $\lambda$ increases. In the following, for a given $G_\lambda$, we use $sl_{max}(\lambda)$ and $sl_{min}(\lambda)$ to denote the maximum and the minimum slope of the function $F(\lambda)$, respectively.

6.1. Basic Algorithm

Based on the water-filling stages introduced in Section 4, we first propose a basic algorithm which performs a sequence of water-filling stages until a lexicographically optimal flow is obtained.
Theorem 16.
An $L$-strict flow $f$ on the graph $G$ is also a maximum flow on the parametric graph $G_L$, where $\lambda = L$.
The feasibility of $f$ on $G_L$ is straightforward. Since $L$-strictness ensures that there is no augmenting path in the residual graph $(G_L)_f$, $f$ is a maximum flow on $G_L$.
The basic algorithm solves a sequence of parametric flows with $\lambda = 0, 1, \ldots, v_{max}$. Although $v_{max}$ is not known in advance, the process stops as soon as no job can obtain a larger aggregate allocation by further increasing $\lambda$. The complexity of the basic algorithm depends on the concrete maximum flow algorithm selected. If the augmentation used in Ford–Fulkerson is taken directly, the complexity of each water-filling stage is $O(|V|^3)$; overall, the complexity is $O(v_{max} \cdot |V|^3)$ as there are $v_{max}$ water-filling stages. If the push-relabel algorithm (implemented with a queue [13]) is taken for each water-filling stage, then the overall complexity decreases to $O(|V|^3 + v_{max} \cdot |V|^2)$, as only the first water-filling stage costs $|V|^3$ operations. However, no matter which implementation is chosen, the overall complexity is not strongly polynomial in $|V|$, since $v_{max}$ is related to the input capacities, which are upper-bounded by $O(\sum_{j=1}^{m} C_j)$.
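The following Python sketch illustrates the water-filling procedure described above (our illustration under stated assumptions, not the authors' implementation): each stage raises every capacity $c(s, J_i)$ by one and then augments along residual paths until no augmenting path remains; the procedure stops when a stage adds no flow.

```python
from collections import deque

def basic_water_filling(d, C):
    """Water-filling sketch: returns each job's aggregate allocation A_i."""
    n, m = len(d), len(C)
    S, T = "s", "t"
    J = [f"J{i}" for i in range(n)]
    M = [f"M{j}" for j in range(m)]
    cap = {}                                   # residual capacities
    def add_edge(u, v, c):
        cap.setdefault(u, {})[v] = c
        cap.setdefault(v, {}).setdefault(u, 0)
    for i in range(n):
        add_edge(S, J[i], 0)                   # parametric capacity, raised per stage
        for j in range(m):
            if d[i][j] > 0:
                add_edge(J[i], M[j], d[i][j])
    for j in range(m):
        add_edge(M[j], T, C[j])

    def augment_once():
        parent, q = {S: None}, deque([S])      # BFS for an augmenting path
        while q:
            u = q.popleft()
            if u == T:
                break
            for v, c in cap[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if T not in parent:
            return False
        v = T                                  # push one unit (capacities are integral)
        while parent[v] is not None:
            u = parent[v]
            cap[u][v] -= 1
            cap[v][u] += 1
            v = u
        return True

    while True:
        for i in range(n):                     # one water-filling stage: lambda += 1
            cap[S][J[i]] += 1
        pushed = 0
        while augment_once():
            pushed += 1
        if pushed == 0:                        # no job gains anything more: stop
            break
    return {i: cap[J[i]][S] for i in range(n)} # A_i = flow on edge <s, J_i>

print(basic_water_filling([[1, 1], [2, 0]], [2, 2]))  # sorted allocation vector (1, 2)
```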

6.2. Iterative Algorithm

The basic algorithm is time-costly because a maximum flow computation is performed every time the parameter $\lambda$ is increased. However, increasing $\lambda$ by one does not necessarily decrease the slope of $F(\lambda)$ immediately. Indeed, the slope of the piece-wise function $F(\lambda)$ only decreases at a few special points (called breakpoints in the following). To understand breakpoints, let us consider the first example where $J_1$ and $J_2$ execute over two sites $M_1$ and $M_2$; the flow network is built in Figure 1. Let us explain the breakpoints by calculating a DLF allocation for this example. The capacity of $\langle s, J_1 \rangle$ and $\langle s, J_2 \rangle$ is now expressed by a variable $\lambda$. Figure 4 depicts what happens as $\lambda$ increases, and the dashed line denotes the minimum cut. When $0 \le \lambda \le 2$ (Figure 4a), the slope of $F(\lambda)$ equals 2, i.e., for both $\langle s, J_1 \rangle$ and $\langle s, J_2 \rangle$, the capacity expansion contributes to the increase of $F(\lambda)$. Note that $\lambda = 2$ is the first breakpoint. When $2 < \lambda \le 4$ (Figure 4b), only the capacity expansion of $\langle s, J_1 \rangle$ contributes to the increase of $F(\lambda)$, so the slope of $F(\lambda)$ decreases to 1. $\lambda = 4$ is the second breakpoint. When $\lambda > 4$ (Figure 4c), increasing $\lambda$ no longer increases $F(\lambda)$, so the slope decreases to 0. In Figure 4d, we depict the two breakpoints and show the curve of $F(\lambda)$.
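For concreteness, under the slopes and breakpoints described above (and assuming $F(0) = 0$, which holds since no flow can leave $s$ when $\lambda = 0$), the cut function of this example takes the piece-wise linear form:
$$F(\lambda) = \begin{cases} 2\lambda, & 0 \le \lambda \le 2,\\ \lambda + 2, & 2 < \lambda \le 4,\\ 6, & \lambda > 4. \end{cases}$$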
Generally, once the slope decreases, some jobs' aggregate allocations stop increasing. Consequently, we only need to focus on the breakpoints: we first set the parameter $\lambda$, then perform the push-relabel maximum flow algorithm to get a $\lambda$-strict flow, and finally compute a new slope of $F(\lambda)$. Once all breakpoints have been computed, we obtain a lexicographically optimal flow, which corresponds to a DLF allocation.
Based on the above idea, we propose a more efficient iterative algorithm. The pseudo-code is presented in Algorithm 1. The general idea is to maintain a parametric flow with $\lambda$ increasing and to identify each breakpoint sequentially. For a given $\lambda_k$, if $sl_{max}(\lambda_k) > sl_{min}(\lambda_k)$ (which means there exists at least one job $J_i$ whose aggregate allocation $A_i = \lambda_k$ cannot be further increased), then $(\lambda_k, F(\lambda_k))$ is a breakpoint. Remember that our problem is considered in $\mathbb{Z}^{+}$. When an identified breakpoint is not an integer, we need to perform the necessary rounding operations.
Algorithm 1: Iterative Algorithm
In the first two lines of Algorithm 1, we set the parameter $\lambda = 1$ and run a push-relabel algorithm to get a 1-strict flow. Lines 4–11 contain the main iterative procedure, whose key idea is to use the Newton method to identify every breakpoint. More specifically, we first set a large enough value for $\lambda_2$, ensuring that the slope of $F(\lambda)$ at $\lambda_2$ is 0. At lines 8 and 9 we have two linear functions which are both parts of $F(\lambda)$ but have different slopes. Lines 10–20 implement the Newton method, by which we decrease $\lambda_2$ to find the breakpoint (when the condition at line 10 is satisfied). Note that if the identified breakpoint is not an integer, we first round down $\lambda_2$ and update $f_1$ by the push-relabel maximum flow algorithm; the result of line 13 is a $\lambda_2$-strict flow. Then at line 14 we continue to update $f_1$ to be $(\lambda_2 + 1)$-strict. If $\lambda_2$ is an integer, we directly update $f_1$ to be $\lambda_2$-strict (line 17). Finally, we set $\lambda_1$ to $\lambda_2$ and start to find the next breakpoint. Note that in the above process $\lambda_2$ is decreasing, so the push-relabel maximum flow algorithm is performed on a reversed graph (the direction of each edge is reversed) so that the three conditions of parametric flow still hold.
The correctness of the algorithm mainly comes from Lemma 6 and Theorem 16, together with the non-increasing slope of $F(\lambda)$, which guarantees a correct application of the Newton method. The time complexity of the algorithm comes from two parts: one is updating $f_1$ and the other is finding breakpoints through decreasing $\lambda_2$. The cost of computing $f_1$ depends on the maximum flow algorithm and the number of breakpoints. The simplest way is to perform a maximum flow algorithm at each breakpoint to update $f_1$; if so, the time complexity is $O(|V|^3 \cdot |V|)$, where $|V|^3$ is the cost of the maximum flow algorithm and $|V|$ is an upper bound on the number of breakpoints. If we implement the push-relabel algorithm together with the dynamic tree [13], the time complexity decreases to $O(|V||E| \log(|V|^2/|E|))$, since each time we update $f_1$ it is not necessary to redo a complete push-relabel maximum flow computation. On the other hand, to find each breakpoint we need to decrease $\lambda_2$ iteratively by the Newton method, and the number of iterations is bounded by the number of breakpoints. Note that each time $\lambda_2$ decreases, we also perform the push-relabel maximum flow algorithm (line 21) to get a new function $f_{\lambda_2}(\lambda)$. The complexity of the push-relabel algorithm is $O(|V|^3)$; combined with the dynamic tree, the time complexity for finding one breakpoint is $O(|V||E| \log(|V|^2/|E|))$. Clearly, to identify all breakpoints, the time complexity is $O(|V|^2 |E| \log(|V|^2/|E|))$. Adding the time costs of the two parts together, the overall time complexity is $O(|V|^2 |E| \log(|V|^2/|E|))$.
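To make the Newton-style breakpoint search concrete, the following Python sketch finds the breakpoints of $F(\lambda)$ by repeatedly intersecting two linear pieces and re-evaluating $F$ at the intersection. It is a divide-and-conquer reading of the same idea, and it is our illustration only: it uses networkx max flow instead of push-relabel with dynamic trees, works over the reals, and leaves aside the integral rounding performed by Algorithm 1.

```python
import networkx as nx

def F_and_slope(d, C, lam):
    """Max-flow value F(lambda) and the slope |J_t| of one minimum cut of G_lambda."""
    n, m = len(d), len(C)
    G = nx.DiGraph()
    for i in range(n):
        G.add_edge("s", f"J{i}", capacity=lam)
        for j in range(m):
            if d[i][j] > 0:
                G.add_edge(f"J{i}", f"M{j}", capacity=d[i][j])
    for j in range(m):
        G.add_edge(f"M{j}", "t", capacity=C[j])
    value, _ = nx.maximum_flow(G, "s", "t")
    _, (V, V_bar) = nx.minimum_cut(G, "s", "t")
    slope = sum(1 for v in V_bar if v.startswith("J"))   # |J_t|
    return value, slope

def breakpoints(d, C):
    def rec(lo, Flo, slo, hi, Fhi, shi):
        if slo == shi:                                   # same linear piece: nothing in between
            return []
        lam = (Fhi - shi * hi - Flo + slo * lo) / (slo - shi)   # the two lines intersect here
        Fm, sm = F_and_slope(d, C, lam)
        if abs(Fm - (Flo + slo * (lam - lo))) < 1e-9:    # F touches both lines: breakpoint
            return [lam]
        return rec(lo, Flo, slo, lam, Fm, sm) + rec(lam, Fm, sm, hi, Fhi, shi)

    hi = sum(C) + 1                                      # slope of F is 0 beyond total capacity
    Flo, slo = F_and_slope(d, C, 0)
    Fhi, shi = F_and_slope(d, C, hi)
    return rec(0, Flo, slo, hi, Fhi, shi)

print(breakpoints([[1, 1], [2, 0]], [2, 2]))             # e.g. [1.5] for this small instance
```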

7. Conclusions and Future Work

In this paper, we proposed distributed lexicographically fair (DLF) resource allocation for jobs executed over geographically distributed sites. We consider that the resources waiting to be allocated are not infinitely divisible, and therefore model DLF using network flow over the integers. We prove that DLF satisfies Pareto efficiency, envy-freeness, strategy-proofness, $\frac{1}{2}$-maximin share and relaxed sharing incentive. Finally, we proposed two algorithms to compute a DLF allocation, where the iterative algorithm is strongly polynomial in the number of jobs and sites.
In the future, we would like to extend strategy-proofness to group strategy-proofness, where a set of jobs may jointly misreport their demands but, as a whole, cannot obtain more useful resources. Meanwhile, we would also like to generalize our problem and model to accommodate multiple kinds of resources.

Author Contributions

Formal analysis, C.L. and T.W.; Funding acquisition, C.L.; Methodology, J.H.; Project administration, C.L.; Validation, C.L. and W.J.; Writing—original draft, T.W.; Writing— review & editing, C.L. All authors have read and agreed to the published version of the manuscript.

Funding

This work is supported by the National Natural Science Foundation of China under Grant 61902063, and by the Provincial Natural Science Foundation of Jiangsu, China under Grant BK20190342, and by the Open Project Program of the State Key Laboratory of Mathematical Engineering and Advanced Computing.

Institutional Review Board Statement

Not Applicable.

Informed Consent Statement

Not Applicable.

Data Availability Statement

Not Applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Pu, Q.; Ananthanarayanan, G.; Bodik, P.; Kandula, S.; Akella, A.; Bahl, P.; Stoica, I. Low latency geo-distributed data analytics. ACM SIGCOMM Comput. Commun. Rev. 2015, 45, 421–434.
  2. Vulimiri, A.; Curino, C.; Godfrey, P.B.; Jungblut, T.; Padhye, J.; Varghese, G. Global Analytics in the Face of Bandwidth and Regulatory Constraints. NSDI 2015, 7, 7–8.
  3. Brams, S.J.; Taylor, A.D. Fair Division: From Cake-Cutting to Dispute Resolution; Cambridge University Press: Cambridge, UK, 1996.
  4. Moulin, H. Fair Division and Collective Welfare; MIT Press: Cambridge, MA, USA, 2003.
  5. Brandt, F.; Conitzer, V.; Endriss, U.; Lang, J.; Procaccia, A.D. (Eds.) Handbook of Computational Social Choice; Cambridge University Press: Cambridge, UK, 2016.
  6. Bertsekas, D.P.; Gallager, R.G.; Humblet, P. Data Networks; Prentice-Hall International: Hoboken, NJ, USA, 1992; Volume 2.
  7. Radunović, B.; Boudec, J.Y.L. A unified framework for max-min and min-max fairness with applications. IEEE/ACM Trans. Netw. 2007, 15, 1073–1083.
  8. Ghodsi, A.; Zaharia, M.; Hindman, B.; Konwinski, A.; Shenker, S.; Stoica, I. Dominant Resource Fairness: Fair Allocation of Multiple Resource Types. In Proceedings of the 8th USENIX Symposium on Networked Systems Design and Implementation (NSDI), Boston, MA, USA, 30 March–1 April 2011.
  9. Dolev, D.; Feitelson, D.G.; Halpern, J.Y.; Kupferman, R.; Linial, N. No justified complaints: On fair sharing of multiple resources. In Proceedings of the 3rd Innovations in Theoretical Computer Science Conference, Cambridge, MA, USA, 8–10 August 2012.
  10. Bonald, T.; Roberts, J. Multi-resource fairness: Objectives, algorithms and performance. In ACM SIGMETRICS Performance Evaluation Review; ACM: New York, NY, USA, 2015; Volume 43, pp. 31–42.
  11. Wang, W.; Li, B.; Liang, B. Dominant resource fairness in cloud computing systems with heterogeneous servers. In Proceedings of the IEEE INFOCOM 2014—IEEE Conference on Computer Communications, Toronto, ON, Canada, 27 April–2 May 2014; pp. 583–591.
  12. Guan, Y.; Li, C.; Tang, X. On Max-min Fair Resource Allocation for Distributed Job Execution. In Proceedings of the 48th International Conference on Parallel Processing (ICPP), Kyoto, Japan, 5–8 August 2019; pp. 55:1–55:10.
  13. Gallo, G.; Grigoriadis, M.D.; Tarjan, R.E. A Fast Parametric Maximum Flow Algorithm and Applications. SIAM J. Comput. 1989, 18, 30–55.
  14. Schrijver, A. Combinatorial Optimization: Polyhedra and Efficiency; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2003; Volume 24.
  15. Berge, C. Two theorems in graph theory. Proc. Natl. Acad. Sci. USA 1957, 43, 842.
  16. Norman, R.Z.; Rabin, M.O. An algorithm for a minimum cover of a graph. Proc. Am. Math. Soc. 1959, 10, 315–319.
  17. Ford, L.R.; Fulkerson, D.R. Flows in Networks; Princeton University Press: Princeton, NJ, USA, 1962.
  18. Megiddo, N. Optimal flows in networks with multiple sources and sinks. Math. Program. 1974, 7, 97–107.
  19. Megiddo, N. A good algorithm for lexicographically optimal flows in multi-terminal networks. Bull. Am. Math. Soc. 1977, 83, 407–409.
  20. Dinic, E.A. Algorithm for solution of a problem of maximum flow in networks with power estimation. Sov. Math. Dokl. 1970, 11, 1277–1280.
  21. Bernstein, A.; Kopelowitz, T.; Pettie, S.; Porat, E.; Stein, C. Simultaneously Load Balancing for Every p-norm, with Reassignments. In Proceedings of the 8th Innovations in Theoretical Computer Science Conference (ITCS 2017), Berkeley, CA, USA, 9–11 January 2017; pp. 51:1–51:14.
  22. Kurokawa, D.; Procaccia, A.D.; Wang, J. Fair Enough: Guaranteeing Approximate Maximin Shares. J. ACM 2018, 65, 8:1–8:27.
  23. Goldberg, A.V.; Tarjan, R.E. A New Approach to the Maximum Flow Problem. In Proceedings of the 18th Annual ACM Symposium on Theory of Computing, Berkeley, CA, USA, 28–30 May 1986; pp. 136–146.
Figure 1. The flow network graph built for case study.
Figure 2. The general flow network graph for DLF.
Figure 3. An example of parametric flow network graph.
Figure 4. Example of piece-wise function F ( λ ) and breakpoints.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
