1. Introduction
Maximizing social welfare or minimizing inequality in allocating resources to agents is an important topic in social economics and has been studied extensively in recent years [1,2,3,4]. Each agent has her own subjective valuation function over a subset of resources (items). Yet there is no unified answer to how welfare or inequality should be evaluated. Some may suggest LexiMin as the criterion [5], where we maximize the smallest valuation of all agents, then maximize the second smallest, and so on; others may suggest Maximum Nash Welfare (MNW) [4], where we maximize the product of the valuations of all agents. For most classes of valuation functions, such as additive valuation functions (where an agent's valuation is the sum of her valuations for the individual items in her bundle), the optimal allocations vary with the selected criterion.
For the special case of binary additive valuations on items, however, Aziz and Rey [6] showed that the LexiMin allocations are equivalent to the MNW allocations. This result raises a natural question of whether there are more connections among other criteria. Benabbou et al. [7] gave a positive answer: the optimal allocations under any strongly Pigou–Dalton (SPD) principle (a.k.a. transfer principle) [8], taken over all allocations with maximal utilitarian social welfare (USW, defined as the sum of valuations, a promise of efficiency), coincide with the LexiMin allocations, even under binary submodular valuations, which subsume binary additive valuations. Roughly, the Pigou–Dalton principle requires that if some income is transferred from a rich person to a poor person, without leaving the rich person poorer than the poor one, then the measured inequality should not increase (or should strictly decrease, in the strong version); see the formal definition in Section 2. The SPD principle embodies a strictly egalitarian stance: any feasible redistribution toward balance must be reflected in a measurable welfare improvement. It is particularly compelling in contexts where society prioritizes minimal disparity over efficiency margins. Most common criteria satisfy the SPD principle (see Example 1).
Inspired by the aforementioned consistency of optimums among all the SPD criteria [7], we conduct a further study of the optimums under SPD criteria, mainly for the following three scenarios:
- IND. Indivisible items and agents with binary additive valuations.
- DIV. Divisible items and agents with binary additive valuations.
- IND-SUB. Indivisible items and agents with binary submodular valuations.
Our main results are summarized below. As in [7], we are only concerned with allocations of maximal USW, to ensure efficiency. Unless otherwise stated, the criteria for evaluating inequality are SPD and symmetric. Moreover, the profile of an allocation χ refers to the vector of the n agents' valuations under χ:
- (a) For IND, the profiles of optimal allocations have a Chebyshev distance of at most 1. Moreover, there is a layer structure hidden behind the optimal allocations: the agents and items are partitioned into several layers, so that in every optimal allocation, items can only be allocated to agents within the same layer. The layer structure admits the property that, within each layer, the numbers of items allocated to any two agents differ by at most 1, a fixed and small constant that is independent of the specific instance.
- (b) In DIV, the profiles of optimal allocations are all the same (in other words, the Chebyshev distance is 0). Halpern et al. [9] have shown a similar result in which only LexiMin and MNW are considered. A layer structure still exists (with more layers than in the IND scenario). More importantly, by utilizing this layer structure and a reduction to IND, we design the first combinatorial algorithm for finding an optimal allocation for DIV, which runs in polynomial time.
- (c) In IND-SUB, the profiles of optimal allocations have a Chebyshev distance of at most 1. This extends the corresponding result for IND. However, the layer structure mentioned above does not hold in this scenario.
See Table 1 for a summary of the results. Our research objects are mainly the optimal allocations under the SPD criteria. In all cases, we derive an “almost consistency” among the different optimums (SPD criteria): the valuation of any agent differs by at most 1 (or 0 in the DIV case) under any two different optimal allocations. We further discover layer structures for the IND and DIV scenarios and design the first combinatorial algorithm for DIV based on the layer structure.
2. Preliminaries
For an integer k, let [k] denote the set {1, 2, …, k}. Throughout this paper, we consider a fixed set of m items and a fixed set of n agents.
Each agent has a valuation function over subsets of (called bundles), where . Given a valuation function , we define the marginal gain of an item o over a bundle as . We focus on the binary valuations where the marginal gain .
Two kinds of binary valuations are discussed frequently in the literature, which we call 0/1-add and 0/1-sub. For 0/1-add valuations, the value of a set of items for an agent is the sum of the valuations of the individual items; the marginal gain depends only on whether agent i likes item o (and is independent of S). For 0/1-sub valuations, the marginal gain does not increase as S grows; formally, the marginal gain of o over a bundle S is at least its marginal gain over any superset of S. Note that 0/1-sub valuations subsume 0/1-add ones.
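To make the two classes concrete, here is a small illustrative sketch (our own example, not from the paper): a 0/1-add valuation counts liked items, while a partition-matroid rank serves as a simple 0/1-sub valuation with diminishing marginal gains.

```python
# Illustrative sketch: a 0/1-add valuation counts the liked items in a
# bundle, while a 0/1-sub (binary submodular) valuation may exhibit
# diminishing returns. Here we use a partition-matroid rank as the
# submodular example: at most caps[c] items count per category.

def additive_value(bundle, liked):
    """0/1-add: each liked item contributes exactly 1."""
    return sum(1 for o in bundle if o in liked)

def partition_rank_value(bundle, categories, caps):
    """0/1-sub example: items are capped per category, so the marginal
    gain of an item drops to 0 once its category's cap is reached."""
    counts = {}
    for o in bundle:
        c = categories[o]
        counts[c] = counts.get(c, 0) + 1
    return sum(min(k, caps[c]) for c, k in counts.items())
```

For instance, with categories {'a': 0, 'b': 0, 'c': 1} and caps {0: 1, 1: 1}, the marginal gain of 'b' over the bundle {'a'} is 0, whereas under a 0/1-add valuation each item's marginal gain is fixed; in both cases every marginal gain is 0 or 1.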
An allocation refers to a collection of disjoint bundles, one per agent, whose union is contained in the set of items. An allocation is clean if all of its bundles are clean; a bundle is clean for its agent if it contains no item with zero marginal gain for that agent. For 0/1-sub valuations, a bundle is clean if and only if its value equals its size (Proposition 3.3 of [7]).
Given a clean allocation, in which agent i receives some number of items, we call the vector of these numbers (over all agents) the profile of the allocation. Henceforth, a profile always refers to the profile of a clean allocation unless otherwise stated.
Definition 1 (Criterion). A criterion of income inequality (criterion for short), a.k.a. an income inequality metric [1,3], is a function from profiles to real numbers: each profile is evaluated by a real number (called its score), and the lower the score, the better the profile under this criterion. Following convention, a criterion must be symmetric; i.e., it must evaluate all permutations of a profile equally.

Example 1 (Some commonly used criteria).
Let the entries of a profile be sorted in increasing order, and take f to be any strictly convex function of x, for example f(x) = x². One family of criteria scores a profile by the sum of f over its entries. Further commonly used criteria include a Nash-type criterion based on the product of the valuations, and LexiMax and LexiMin criteria defined over the sorted profile.

Remark 1. For the Nash-type criterion, we first need to maximize the number of agents with nonzero valuation and then maximize the product of the nonzero valuations. By the last two definitions, minimizing the LexiMax score is equivalent to minimizing the largest valuation of all agents, then minimizing the second largest, and so on; minimizing the LexiMin score is equivalent to maximizing the smallest valuation of all agents, then maximizing the second smallest, and so on.
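The two lexicographic orders can be compared via sorted profiles; a minimal sketch (our own, with hypothetical helper names):

```python
def leximin_key(profile):
    """Maximizing this key lexicographically maximizes the smallest
    entry first, then the second smallest, and so on (LexiMin)."""
    return sorted(profile)

def leximax_key(profile):
    """Minimizing this key lexicographically minimizes the largest
    entry first, then the second largest, and so on (LexiMax)."""
    return sorted(profile, reverse=True)
```

For instance, leximin_key([3, 1, 3]) = [1, 3, 3] beats leximin_key([4, 2, 1]) = [1, 2, 4], so the profile (3, 1, 3) is preferred under LexiMin.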
For any criterion f, we abbreviate the score that f assigns to the profile of an allocation as the score of the allocation itself.
Definition 2 (Strongly Pigou–Dalton). One profile is regarded as more balanced than another if it can be obtained from the other by transferring some income from a richer agent to a poorer agent without reversing their order, while all other incomes remain unchanged. A criterion f is strongly Pigou–Dalton (SPD) [8] if the more balanced of two such profiles always receives a strictly lower score. (The SPD principle is also known as the transfer principle.) All criteria shown in Example 1 are SPD (proved in Appendix A).
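As a quick numeric illustration of the principle (our own check, using the sum of squares as a representative strictly convex SPD score):

```python
def sum_of_squares(profile):
    """A representative SPD criterion: the lower, the more equal."""
    return sum(x * x for x in profile)

# A Pigou-Dalton transfer: move one unit from a richer agent (income 5)
# to a poorer one (income 1) without reversing their order; under an
# SPD criterion the score must strictly decrease.
before = [5, 1, 4]
after = [4, 2, 4]
```

Here sum_of_squares drops from 42 to 36, as the SPD property requires.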
We only consider allocations with maximal utilitarian social welfare (max-USW, maximizing the sum of the valuations of all agents); otherwise, one may trivially minimize most inequality scores by not allocating any items at all, which is uninteresting.
Henceforth, unless otherwise stated, allocations are assumed to be max-USW and clean (to this end, we can drop items with zero marginal gain without changing the profile).
3. Indivisible Items and Agents with 0/1-Add Valuations
Definition 3 (Stable allocations in the IND scenario). Take an allocation χ of indivisible items. For each item o allocated to agent i that can be reallocated to another agent j (i.e., agent j also likes o), build an edge from i to j. Moreover, if there is a simple path from u to v along such edges, we say that χ admits a transfer from u to v, denoted by u → v, which consists of the reallocations along the path, after which agent u loses one item and agent v gains one.

A narrowing transfer is a transfer u → v in which u holds at least two more items than v. A widening transfer is one in which u holds at most as many items as v. The other transfers (in which u holds exactly one more item than v) are called swapping transfers.

Allocation χ is called nonstable if it admits a narrowing transfer, and is called stable otherwise.
Denote by the set of stable allocations.
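Definition 3 suggests a direct search procedure. The sketch below (our own; the data layout is hypothetical) builds the reallocation edges for 0/1-add valuations and looks for a narrowing transfer by BFS from each agent:

```python
from collections import deque

def find_narrowing_transfer(alloc, liked):
    """alloc: agent -> set of items currently held.
    liked: agent -> set of items the agent likes (0/1-add).
    Edge i -> j exists when some item held by i is liked by j.
    Returns (u, v) for a narrowing transfer u -> v, or None if stable."""
    agents = list(alloc)
    adj = {i: [j for j in agents
               if j != i and alloc[i] & liked[j]] for i in agents}
    for u in agents:
        seen, queue = {u}, deque([u])
        while queue:
            i = queue.popleft()
            for j in adj[i]:
                if j in seen:
                    continue
                # u would lose one item and j would gain one; this
                # narrows the gap iff u holds at least two more items.
                if len(alloc[u]) > len(alloc[j]) + 1:
                    return (u, j)
                seen.add(j)
                queue.append(j)
    return None
```

For example, if agent 0 holds three items and agent 1 (who likes one of them) holds none, the function reports the narrowing transfer (0, 1).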
Barman et al. [4] proved that if an allocation is not optimal under MNW, then it admits a narrowing transfer (hence it is nonstable). Their result can be easily generalized to any SPD criterion (as stated in Lemma 1).
Lemma 1.
Stable allocations are optimal under the sum-of-squares criterion.
(As a corollary, their profiles are equivalent up to permutation.)
Proof. Assume an allocation χ is non-optimal under the criterion. We shall prove that χ is nonstable, i.e., that it admits a narrowing transfer.

First, take an allocation that is optimal under the criterion.

We build a graph G with n vertices. If an item is allocated to agent j in χ and to agent k in the optimal allocation, where j ≠ k, build an arc from j to k. Note that G may have duplicate arcs. Clearly, an arc represents a reallocation (of one item) on χ, and χ becomes the optimal allocation after all the arcs (i.e., reallocations) are applied.

We decompose G into several cycles and paths. We may assume that no path ends at the starting vertex of another path; otherwise, we connect the two paths into one.

Applying the cycles and paths one at a time, starting from χ, yields a sequence of allocations whose last element is the optimal allocation. Since χ is non-optimal, some step of the sequence strictly improves the score. Cycles leave the profile unchanged, so this step applies a path; let s and t be its starting and ending vertices. A unit move from s to t strictly improves the score only if, just before this step, agent s holds at least two more items than agent t. The same holds in χ itself, because the number of items held by s never increases and the number held by t never decreases along the sequence. Consequently, χ admits a narrowing transfer (from s to t). □
Theorem 1.
1. For any SPD criterion, the optimal allocations are exactly the stable ones.
2. We can find a stable allocation in polynomial time.
Proof. 1. Fix any SPD criterion f; we need to show that
- (1) a nonstable allocation is non-optimal under f;
- (2) a stable allocation is optimal under f.
Together, the optimal allocations under f are exactly the stable ones.

Proof of (1): If an allocation is nonstable, it admits a narrowing transfer; consider the allocation obtained after this transfer. Its profile is clearly more balanced than the original one, so the original allocation is non-optimal, by the assumption that f is strongly Pigou–Dalton.

Proof of (2): Assume two allocations are stable. By Lemma 1, their profiles are equivalent up to permutation; therefore they have the same score under f, as f is symmetric (recall that we always assume so). So all stable allocations have the same score under f. Further, since nonstable allocations are non-optimal under f (by (1)), all stable allocations attain the lowest score under f (i.e., they are optimal).
2. Finding a stable allocation reduces to finding an allocation with minimum sum of squared incomes (by Claim 1 of this theorem), which can be done using network flows. Specifically, it reduces to computing a minimum-cost flow in the following network (see Figure 1). There are m + n + 2 nodes, including a source node s, a sink node t, m nodes representing the items, and n nodes representing the agents. The edges are as follows: (i) an edge from s to each item node, with capacity one and cost zero; (ii) an edge from an item node to an agent node if the agent likes the item, with capacity one and cost zero; and (iii) m edges from each agent node to t, in which the kth one has capacity one and a cost strictly increasing in k.
Our target, a flow of size m with minimum cost, can be computed by the Successive Shortest Path algorithm [18], which increases the size of the current flow by one by augmenting along the shortest path in the residual graph, repeating m times. For our particular network, finding such a path reduces to finding an unused agent-to-sink edge of lowest cost whose tail is reachable from s in the residual network, which can be done by BFS. In total, the algorithm runs in polynomial time. □
Our Algorithm 1 is similar to that of Kleinberg et al. [5] for finding a LexiMin optimal allocation, but our analysis is simpler.
| Algorithm 1 StableAllocationIND |
- Input: m items, n agents, and a binary valuation matrix B of size m × n, where B[i][j] denotes the valuation of item i for agent j.
- Output: A stable allocation.
- 1: Construct a flow network with a source s, a sink t, one node per item, and one node per agent.
- 2: Add an edge from s to each item node with capacity 1 and cost 0.
- 3: Add an edge from item node i to agent node j with capacity 1 and cost 0 if B[i][j] = 1 (agent j likes item i).
- 4: Add m edges from each agent node to t, where the kth edge has capacity 1 and a cost strictly increasing in k.
- 5: Compute a min-cost flow f of value m using SuccessiveShortestPath [18].
- 6: Add item i to agent j's bundle for each unit of flow from item node i to agent node j.
- 7: return the resulting allocation.
|
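The exact agent-to-sink costs are not legible in our source; a standard choice, which we assume here, is to give the kth parallel edge the cost 2k − 1, so that an agent receiving d items pays 1 + 3 + … + (2d − 1) = d² in total, and a minimum-cost flow of value m minimizes the sum of squared incomes:

```python
def bundle_cost(d):
    """Cost of routing d units through unit-capacity parallel edges
    whose kth edge costs 2*k - 1 (an assumed convex-cost gadget)."""
    return sum(2 * k - 1 for k in range(1, d + 1))
```

Under this assumption, the total edge cost incurred by an agent equals the square of her income, so the min-cost flow of Step 5 selects a max-USW allocation of minimum sum of squares, i.e., a stable allocation by Theorem 1.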
Remark 2.
It might be asked whether a better allocation under one criterion is also a better allocation under another criterion. The answer is no, although the best remains the best across different criteria (Theorem 1). For example, the profiles (0, 2, 2) and (1, 1, 3) are ranked oppositely by the sum-of-squares criterion (which prefers (0, 2, 2), with score 8 versus 11) and by LexiMin (which prefers (1, 1, 3), whose smallest entry is larger).
3.1. Layer Partition of Agents and Items
Recall that the profiles of stable allocations are equivalent up to permutation (see the corollary in Lemma 1). Yet the profiles are not identical; e.g., the number of items allocated to a fixed agent i may differ across stable allocations. This raises a natural question: to what extent can an agent's income differ across stable allocations?
Our next theorem shows that the difference is negligible. The Chebyshev distance between two allocation profiles is the maximum difference, over all agents, in the number of items received. It captures the worst-case deviation experienced by any agent. The low Chebyshev distance between any two stable allocations shows the stability of the family of stable allocations.
Theorem 2.
For any two stable allocations, the number of items each agent receives differs by at most 1; in other words, the Chebyshev distance between their profiles is at most 1.
Proof. Suppose the opposite: some agent receives at least two more items in one stable allocation, say χ, than in another stable allocation. We shall prove that χ admits a narrowing transfer and hence is nonstable, which contradicts its stability.

We build a graph G with n vertices. If an item is allocated to agent j in χ and to agent k in the other allocation, where j ≠ k, build an arc from j to k. Note that G may have duplicate arcs. Clearly, χ becomes the other allocation after all the arcs (i.e., reallocations) are applied.

Decompose G into paths and a few cycles. As in Lemma 1, we may assume that no path ends at the starting vertex of another. Moreover, since the income of our agent drops by at least two, we may assume that this agent is the starting vertex of at least two of the paths.

Apply the cycles and paths one at a time, starting from χ. Because both endpoint allocations are stable, their profiles are equivalent up to permutation (Lemma 1), so the total score change along the sequence is zero. If some applied path were a narrowing transfer at its application time, it would also be a narrowing transfer of χ (as in the proof of Lemma 1), a contradiction; and if some path strictly worsened the score, another would have to strictly improve it, giving the same contradiction. Hence every applied path is a swapping transfer of the allocation it is applied to. Now consider the two paths starting at our agent. The first is a swapping transfer of χ, so before it our agent holds exactly one more item than its endpoint; applying it decreases our agent's income by one. The second path is a swapping transfer only after this decrease, so, measured in χ, our agent holds at least two more items than the second path's endpoint. The second path is therefore a narrowing transfer of χ, a contradiction. □
Remark 3.
We will see later (Theorem 5) that in the IND-SUB scenario, the Chebyshev distance between different optimal allocations remains at most one. The proof of this generalized result is much more involved and is given in Section 5. According to Theorem 2, every agent's income differs by at most one across stable allocations, so every solution seems acceptable to all of them. However, many questions regarding stable allocations remain to be settled. For example:
- Q1. Are there common properties of all stable allocations?
- Q2. Can we determine the range of each agent's income over all stable allocations?
- Q3. Is it possible to efficiently count the number of stable allocations?
- Q4. Can we find the profile (of some stable allocation) that optimizes a specific function of the incomes?
In what follows, we introduce a construct, called the “layer partition” of agents and items, which is independent of the choice of stable allocation. This construct elucidates the structure of the set of stable allocations and, as we will demonstrate, provides a pathway to answering the four questions above.
First of all, we point out that each agent i falls into one of two cases: (1) her income equals some constant d in every stable allocation; or (2) her income equals d in some stable allocations and d + 1 in the others, for some integer d. This is a corollary of Theorem 2.
Definition 4
(Layer partition of agents and items). For each integer d, collect into one set the agents whose income equals d in every stable allocation, and into another set the agents whose income is sometimes d and sometimes d + 1.
Take an arbitrary stable allocation χ, and denote by the corresponding item sets the items allocated by χ to the agents of each of these agent sets.
The layer partition of agents and items consists of this partition of the agents and the induced partition of the items.
Lemma 2.
Under every stable allocation, the items of each layer are allocated only to the agents of the same layer.
Equivalently speaking, the set of items allocated to the agents of a given layer is invariant across stable allocations.
In short, items in any given layer can only be allocated to agents in that layer. To be clear, the agents with fixed income d form one layer, and the agents with income d or d + 1 form another.
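The layer invariance can be checked by brute force on a toy instance (our own example, not from the paper): agents 0 and 1 like items a and b, while agent 2 likes items b, c, and d. Although item b could go to agent 2, every optimal (stable) allocation keeps b inside the layer of agents 0 and 1:

```python
from itertools import product

likes = {0: {'a', 'b'}, 1: {'a', 'b'}, 2: {'b', 'c', 'd'}}
items = ['a', 'b', 'c', 'd']

def all_clean_allocations():
    """Assign every item to some agent who likes it (max-USW here)."""
    choices = [[i for i in likes if o in likes[i]] for o in items]
    for assign in product(*choices):
        counts = [0, 0, 0]
        for i in assign:
            counts[i] += 1
        yield assign, counts

best = min(sum(c * c for c in cnt) for _, cnt in all_clean_allocations())
optimal = [a for a, cnt in all_clean_allocations()
           if sum(c * c for c in cnt) == best]
```

In every optimal allocation the profile is a permutation of (1, 1, 2): items c and d stay with agent 2, and items a and b are split between agents 0 and 1, matching the two layers.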
3.2. Proof of Lemma 2 and the Explicit Construction of the Layer Partition
We prove Lemma 2 and show how to compute the layer partition efficiently in this subsection. To this end, we first introduce some additional notation.
Fix any stable allocation χ, and recall the transfers on χ (and the notation →) from Definition 3. The agents receiving d items under χ can be classified into three groups according to the transfers they admit. See Figure 2a for an illustration. Observe that the first two groups are disjoint; otherwise, some agent belonging to both would yield a narrowing transfer, contradicting the fact that χ is stable. Be aware that these groups, and the sets derived from them, all depend on the selected stable allocation χ.
Denote also the sets of items allocated to these groups under χ.
Moreover, within this subsection, we rank all items and agents as follows: the groups (of agents, together with their items) are ordered, and the elements of the first group have rank 0, those of the second have rank 1, those of the third have rank 2, those of the fourth have rank 3, and so on.
Proposition 1.
In the aforementioned stable allocation χ, there is no transfer from an agent of any group to a lower-ranked agent.
This easily follows from the definitions of the groups; the routine proof is omitted.
In the following, the rank refers to the rank defined above with respect to the fixed stable allocation χ.
Proposition 2.
In any stable allocation, it holds that
- 1. a higher-ranked item cannot be allocated to a lower-ranked agent;
- 2. a lower-ranked item cannot be allocated to a higher-ranked agent;
- 3. together, the items of each rank are always assigned to the agents of the same rank.
Proof. 1. An item o of a given rank is not attractive to any lower-ranked agent, since otherwise χ would admit a transfer from its holder to a lower-ranked agent (using just the single reallocation of o), contradicting Proposition 1. As a result, such an item cannot be allocated to any lower-ranked agent in any stable allocation.
2. We prove it by induction; for simplicity of presentation, assume the situation of Figure 2. First, the items ranked lower than a given rank cannot be allocated to the agents of that rank in any stable allocation. Otherwise, as the items of the rank must be allocated to its own agents (by the analysis above), some agent of that rank would receive too many items by the pigeonhole principle, implying that the allocation is non-optimal and therefore not stable (Theorem 1). The same argument applies to the next rank, and so on. □
Proposition 3.
For the agents of all but one of the groups above, the income equals d in every stable allocation; for the agents of the remaining group, the income can take either of two consecutive values, depending on the stable allocation. It follows that the groups determine the layers of Definition 4.
Proof. According to Claim 3 of Proposition 2, in every stable allocation, the agents of a given rank receive exactly the items of that rank. Therefore, in every stable allocation, each agent of the ranks in question receives exactly d items (otherwise, some agent would receive more than d items and another fewer than d, which is strictly less balanced than an equal distribution, so the allocation would be nonstable). Hence the income of these agents always equals d.
For each agent of the remaining group whose income under χ is d, the income can be reduced to d − 1 in another stable allocation (due to Claim 1); for each such agent whose income is d − 1, the income can be increased to d (due to Claim 2). Therefore, the income of an agent of this group can be d − 1 or d, depending on the stable allocation. □
Lemma 2 now follows from the analysis above. In particular, combining the last two propositions, we obtain the claimed correspondence between the groups constructed here and the layers of Definition 4.
3.3. Some Applications of the Layer Partition: Answering Q1–Q4
With the layer partition of agents and items, we can promptly answer Q1 to Q4:
- Answer of Q1. All stable allocations allocate items layer by layer, and the layers are invariant with respect to the chosen allocation.
- Answer of Q2. For an agent whose income is fixed, the range is the single value d of her layer. For the other agents, the range consists of the two consecutive values d and d + 1. Computing the layers reduces to computing the incomes under any one fixed stable allocation, which is easy; hence we can compute the ranges efficiently. Alternative methods for computing the ranges would have to use network flow and are far more complicated.
- Answer of Q3. Counting stable allocations reduces to counting stable allocations within each layer (and then applying the rule of product). However, this counting problem is #P-hard, as counting perfect matchings, which is already #P-hard, can be reduced to counting stable allocations within a layer.
- Answer of Q4. Finding a profile (of a stable allocation) that minimizes a linear function of the incomes is easy. Modify the network used in the proof of Claim 2 of Theorem 1 as follows: among the m edges from each agent node to t, change the cost of the kth one to A times its original cost plus the agent's coefficient in the linear function, where A is a large enough constant. Then the minimum-cost flow still primarily minimizes the sum of squares, and among such flows it optimizes the linear function. For non-linear functions, the problem is more difficult, yet the layer partition still helps break down the task.
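The effect of this cost modification can be mimicked at the level of profiles; a toy check (our own, with a hypothetical weight vector) shows the scaled convex part dominating and the linear term breaking ties:

```python
def combined_score(profile, weights, A=1000):
    """A * (sum of squares) + linear term; with A large enough, the
    convex part dominates and the linear term only breaks ties."""
    return A * sum(x * x for x in profile) + sum(
        w * x for w, x in zip(weights, profile))

profiles = [(2, 2), (1, 3), (3, 1)]
w = (1, 0)
best = min(profiles, key=lambda p: combined_score(p, w))
```

Among the tied profiles (1, 3) and (3, 1), the same score picks (1, 3), the one minimizing the linear term; over all three profiles it picks (2, 2), the most balanced one.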
Remark 4.
Halpern et al. [9] implicitly found a structure similar to our layer structures. Specifically (in the proof of their Theorem 4), given a fractional MNW allocation, they partition the agents into subsets according to the floors of their valuations (so within a given subset, each agent's valuation lies in an interval of unit length determined by some integer x). They show that the agents in each subset must be allocated items only from a corresponding subset of items in any fractional MNW allocation, and that the partition and the correspondence between agent subsets and item subsets also hold for deterministic MNW allocations. We discovered our layer structures independently. Conceptually, our layers are defined not merely by valuation intervals but by partitioning the agents who receive the same number of items according to the transfer relation between agents. Theoretically, the presence of the layer partition for IND was confirmed directly in the IND scenario, without the assistance of the DIV case, and the same holds for its computation. Observationally, our layer structures are more specific, which can be briefly explained through the valuation range of each agent in a given layer: for the DIV scenario, compared to their unit-length interval, the range in our layer structure is a single fixed rational number; for the IND scenario, the range in our layer structure is either a single fixed integer or a pair of consecutive integers.

4. Divisible Items and Agents with 0/1-Add Valuations
This section discusses the DIV scenario, i.e., the case of divisible items, in which we are allowed to allocate a part of an item to an agent (and, possibly, different parts to different agents). Note that 0/1-sub valuations do not extend easily to divisible items; therefore, we restrict ourselves to 0/1-add valuations in this section.
Definition 5
(Stable allocations in the DIV scenario). Let χ be an allocation of divisible items. For each part of an item o allocated to agent i that can be reallocated to another agent j (i.e., agent j also likes o), build an edge from i to j. Moreover, if there is a simple path from u to v along such edges, we say that χ admits a transfer from u to v. It consists of reallocating a fraction Δ > 0 of items along the path, after which the income of u decreases by Δ and the income of v increases by Δ.
A narrowing transfer is a transfer from u to v in which u's income strictly exceeds v's. A widening transfer is one in which u's income is at most v's. An allocation χ is called nonstable if it admits some narrowing transfer, and stable otherwise.
Denote by the set of stable allocations in the DIV scenario.
Lemma 3.
Stable allocations are optimal under the sum-of-squares criterion. (As a corollary, their profiles are equivalent up to permutation.)
Proof. Assume an allocation χ is non-optimal under the criterion. We shall prove that χ is nonstable, i.e., that it admits a narrowing transfer.

First, take an allocation that is optimal under the criterion.

We build a graph G with n vertices. Be aware that, according to the two allocations, each item can be divided into several pieces (with total size equal to the whole item), so that each piece is given to a certain agent j in χ and to a certain agent k in the optimal allocation. If j ≠ k, build an arc from j to k with weight equal to the size of this piece. Clearly, an arc represents a reallocation (of one piece) on χ, and χ becomes the optimal allocation after all the arcs (i.e., reallocations) are applied.

We decompose G into several cycles and paths, where the arcs in any single cycle or path have the same weight, and where no path ends at the starting vertex of another. Such a decomposition exists under an appropriate division of the items.

Applying the cycles and paths one at a time, starting from χ, yields a sequence of allocations ending at the optimal one. Since χ is non-optimal, some step strictly improves the score. Cycles leave the profile unchanged, so this step applies a path; let s and t be its starting and ending vertices. A transfer of a small enough fraction from s to t strictly improves the score only if, just before this step, the income of s strictly exceeds that of t. The same holds in χ itself, since the income of s never increases and the income of t never decreases along the sequence. Consequently, χ admits a narrowing transfer (from s to t). □
Theorem 3.
1. For any SPD criterion, the optimal allocations coincide with the stable ones.
2. We can find a stable allocation in polynomial time.
Just as we proved Claim 1 of Theorem 1 using Lemma 1, we can prove Claim 1 of Theorem 3 using Lemma 3 (the proof is omitted).
To find a stable allocation in polynomial time, we can use an approach based on linear programming (LP), which is shown in Section 4.2 (thus proving Claim 2 of Theorem 3). Alternatively, we give a purely combinatorial approach (which is more efficient and more interesting) for finding such an allocation. It utilizes the layer partition together with several nontrivial ideas; see the details in Section 4.1.
Theorem 4.
For any two stable allocations in the DIV scenario, the profiles are identical; namely, the Chebyshev distance between them is 0.
A proof of Theorem 4 is related to the LP approach mentioned above and is given in Section 4.2. An alternative proof is similar to that of Theorem 2 and is omitted.
4.1. Layer Partition of Agents and Items for Divisible Items
and a Combinatorial Algorithm for Finding a Stable Allocation
In the following, we extend the layer partition given in
Section 3.1 to the divisible case and then present the aforementioned combinatorial algorithm.
According to Theorem 4, the profile is the same for every stable allocation. In other words, the income of each agent i is independent of the chosen allocation, as long as it is stable.
For each real number d, consider the set of agents that receive exactly d in value in every stable allocation; by Theorem 4, every agent belongs to exactly one such set.
Lemma 4.
The set of items allocated to the agents of a given layer is invariant across stable allocations. Moreover, this set consists of complete items only.
Proof. Fix a stable allocation and consider the items it allocates to the agents of a given layer. A part of an item belonging to a higher layer cannot be allocated to an agent of a lower layer; otherwise, there is a narrowing transfer. A part of an item belonging to a lower layer cannot be allocated to an agent of a higher layer; otherwise, the allocation is not optimal (this can be proved by induction as in the proof of Proposition 2). Therefore, the items of each layer can only be allocated to the agents of that layer, even in other stable allocations. We thus obtain the first part of this lemma.
In any stable allocation, an item cannot be split across different layers; otherwise, there is clearly a simple narrowing transfer. □
Lemma 4 implies a layer partition of items and agents, where the items and the agents that receive them belong to the same layer.
Lemma 5.
A stable allocation γ for the divisible case can be obtained layer by layer: for each layer, reallocate the layer's items optimally to the layer's agents (as divisible items where the layer's value d is fractional, and as indivisible items where d is an integer).
Proof. For each layer, choose an optimal within-layer allocation as above, so that each agent of a layer with value d receives exactly d. Combine the within-layer allocations of all layers to obtain an overall allocation γ. We claim that γ is stable. The proof is as follows.
Items of a higher layer cannot be given to agents of a lower layer: there are no such edges, as shown in the proof of Proposition 2 (this also holds for the divisible case). Hence there is no narrowing transfer between layers in γ. Also, there is no narrowing transfer within any layer of γ, by the construction of the within-layer allocations. Together, γ admits no narrowing transfer and hence is stable. □
Lemma 6.
For every agent, the income under a stable allocation of the subdivided (piece-based) instance, rescaled to the original item scale, differs from the income in the divisible case by at most the size of one piece.
Proof. Combining (1)–(3) below, we immediately obtain the lemma:
- (1) Any two stable allocations of the subdivided instance give each agent the same number of pieces up to one (apply Theorem 2 to the subdivided instance).
- (2) Rescaling by the piece size turns this into a difference of at most one piece (trivial).
- (3) With a sufficiently fine subdivision, a stable allocation of the divisible case can be chosen piece-aligned, and a stable allocation of the subdivided instance is feasible for the divisible case; hence the two optima agree on such allocations.
□
Lemma 7.
Given a stable allocation χ and two different layers of χ, the values of the two layers differ by at least 1/n².
Proof. By Lemma 5, the value of a layer equals the number of its items divided by the number of its agents. According to Lemma 4, both quantities are integers, and the number of agents in a layer is at most n. Two distinct fractions with denominators at most n differ by at least 1/n². □
The last three lemmas are crucial to our combinatorial algorithm for DIV.
For the divisible case, we divide each item into a sufficiently large number of identical (unbreakable) pieces and solve the resulting indivisible instance (on these pieces) using the algorithm of Claim 2 of Theorem 1. We obtain an allocation with several layers; each layer's value d is a real number that is a multiple of the piece size. According to Lemma 6, each agent's income in the divisible case lies within one piece of her rescaled piece-based income. As a result, each layer value of the divisible case lies within one piece of a layer value of the piece-based solution.
By Lemma 7, there can be only one layer value of the divisible case within the range determined by one piece around a piece-based layer value. By Lemma 5, we can then obtain an optimal allocation for the divisible case just by reallocating items within each layer, and we can calculate each layer's value directly. See Algorithm 2 for details.
| Algorithm 2 Finding stable allocation for DIV via reduction to IND |
- Input: m items, n agents, and a binary valuation matrix B of size m × n, where B[i][j] denotes the valuation of item i for agent j.
- Output: A stable allocation for divisible items.
- 1: Divide each item into identical pieces; the set of all pieces forms the new item set.
- 2: Expand B so that each piece of item j inherits the valuations of item j.
- 3: γ ← StableAllocationIND on the pieces {Algorithm 1}.
- 4: Compute the layer partition of γ: the agent layers and the item layers (using the answer to Q2).
- 5: for each layer d of γ whose value is an integer do
- 6: Allocate the layer's items integrally to the layer's agents, such that each agent gets value d.
- 7: end for
- 8: for each remaining layer d of γ do
- 9: Allocate the layer's items fractionally to the layer's agents, such that each agent receives exactly the layer's value.
- 10: end for
- 11: return the combined allocation.
|
Remark 5.
The chosen number of pieces is sufficient. By Lemma 6, each layer of γ corresponds to a layer of the divisible case, and each divisible layer corresponds to all piece-based layers within one piece of it. By Lemma 7, two layers in the divisible case must be at least 1/n² apart, so there can be only one divisible layer value within each such range. As a result, each layer in the indivisible reduction corresponds to exactly one layer in the original divisible case. By Lemma 5, we can then obtain an optimal allocation for the divisible case just by reallocating items within each layer, and we can calculate the value of each layer since the correspondence between layers is one to one.
In addition, a fine subdivision into pieces is inevitable. For example, suppose the first group of agents can only receive the first group of items (arbitrarily), and the remaining agents can only receive the remaining items (arbitrarily), with the group sizes differing by one. The optimal allocation obviously distributes the items of each group equally among its agents. In the end, the two groups form two layers whose values are ratios with denominators of order n, so their difference can be as small as order 1/n², indicating that we need on the order of n² pieces per item to capture such a small difference.
The obtained algorithm deals with items and n agents. Therefore, it has a time complexity of , which is still better than the linear programming procedure (explained below) with time complexity of with variables and linear programming problems in the worst case.
Example 2.
A numerical illustration of Algorithm 2.
Consider an instance with 5 agents and 3 divisible items with a binary valuation matrix B. Our combinatorial algorithm proceeds as follows:
Step 1: Split each item into 25 identical, indivisible pieces. We now have 75 pieces in total.
Step 2: Compute a stable allocation for these 75 indivisible pieces using Algorithm 1. One possible optimal allocation yields the following piece counts:
Agent 1 receives 12 pieces from item 1.
Agent 2 receives 13 pieces from item 1.
Agent 3 receives 16 pieces from item 2.
Agent 4 receives 9 and 8 pieces from items 2 and 3, respectively.
Agent 5 receives 17 pieces from item 3.
The final profile (piece counts) is (12, 13, 16, 17, 17). Converting back to the original item scale (dividing by 25), we obtain the profile (12/25, 13/25, 16/25, 17/25, 17/25).
Step 3: Compute the layer partition.
Agents 1 and 2 are in the first agent layer, and item 1 is in the corresponding item layer; on the original item scale, this layer's value is 1/2.
Agents 3, 4 and 5 are in the second agent layer, and items 2 and 3 are in the corresponding item layer; on the original item scale, this layer's value is 2/3.
Step 4: Recompose the fractional allocation. Agents 1 and 2 each receive 1/2 of item 1, and agents 3, 4 and 5 each receive a total of 2/3 of an item from items 2 and 3. The profile of the stable allocation is (1/2, 1/2, 2/3, 2/3, 2/3).
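The arithmetic of this example can be checked mechanically. The following sanity check (with the piece counts and agent labels from above) verifies that every item is fully allocated and that the recomposed profile accounts for all three items:

```python
from fractions import Fraction

# piece counts from Step 2: agent -> {item: number of pieces}
pieces = {
    1: {"item1": 12},
    2: {"item1": 13},
    3: {"item2": 16},
    4: {"item2": 9, "item3": 8},
    5: {"item3": 17},
}

# every item was split into 25 pieces and must be fully allocated
totals = {}
for alloc in pieces.values():
    for item, count in alloc.items():
        totals[item] = totals.get(item, 0) + count
assert all(total == 25 for total in totals.values())

# recomposed stable profile from Step 4
profile = [Fraction(1, 2), Fraction(1, 2),
           Fraction(2, 3), Fraction(2, 3), Fraction(2, 3)]
assert sum(profile) == 3  # all 3 items are fully (fractionally) allocated
print(sorted(totals.items()))  # [('item1', 25), ('item2', 25), ('item3', 25)]
```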
4.2. Find a Stable Allocation for Divisible Items Using LP and a Proof of Theorem 4
This subsection provides an alternative approach based on linear programming for finding a stable allocation for divisible items. This approach is more straightforward, but it is not combinatorial.
Applying Claim 1 of Theorem 3, finding a stable allocation reduces to finding an allocation with the minimum .
First, we prove the claim that the latter further reduces to computing the "fair multi-flow" [11] in the network below.
See Figure 3. There are n source nodes in the first layer S. The second layer R consists of only one node r as a relay. The third layer I has m nodes, standing for items. The last layer T has n sink nodes. For each agent d, there is an arc from d's source node to r of unlimited capacity. For every item i in I, there is an arc from r to i of capacity one. If item i can be allocated to agent d, an arc from i to d's sink node is added with capacity one.
In the aforementioned fair multi-flow problem, we shall send a flow from the source node to the sink node of each agent d. The sum of the flows on each edge cannot exceed its capacity, and the n amounts of flow, viewed as a vector, should minimize the fairness objective. Clearly, the solution to this problem corresponds to the optimal allocation (items are fully allocated since the multi-flow is maximum); therefore, the claim holds.
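To make the network construction concrete, the sketch below builds the four-layer network for a hypothetical instance and runs a plain maximum flow. This illustrates only the construction and the feasibility fact (maximum total flow equals m, i.e., all items can be fully allocated); it does not implement the fairness objective of [11]. The matrix, node names, and super source/sink are our own additions.

```python
from collections import defaultdict, deque

def max_flow(cap, s, t):
    """Edmonds-Karp maximum flow on a nested-dict capacity map."""
    total = 0
    while True:
        parent = {s: None}
        queue = deque([s])
        while queue and t not in parent:      # BFS for an augmenting path
            u = queue.popleft()
            for v, c in cap[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            return total
        path, v = [], t                       # trace the path back to s
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(cap[u][v] for u, v in path)
        for u, v in path:                     # push flow, add residual arcs
            cap[u][v] -= bottleneck
            cap[v][u] += bottleneck
        total += bottleneck

INF = 10 ** 9
# hypothetical instance: item j may go to the agents d with B[j][d] == 1
B = [[1, 1, 0, 0, 0], [0, 0, 1, 1, 0], [0, 0, 0, 1, 1]]
n, m = 5, 3
cap = defaultdict(lambda: defaultdict(int))
for d in range(n):
    cap["S*"][f"s{d}"] = INF   # super source feeding every agent source
    cap[f"s{d}"]["r"] = INF    # source layer S -> relay r
    cap[f"t{d}"]["T*"] = INF   # sink layer T -> super sink
for j in range(m):
    cap["r"][f"i{j}"] = 1      # relay -> item, capacity one
    for d in range(n):
        if B[j][d] == 1:
            cap[f"i{j}"][f"t{d}"] = 1  # item -> allowed agent sink
result = max_flow(cap, "S*", "T*")
print(result)  # 3: every item can be fully allocated
```

Collapsing the n per-agent flows into a single commodity, as done here, suffices to check feasibility; the fair multi-flow of [11] additionally optimizes the vector of per-agent flow amounts.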
Then, recall the result of [11], which states that the fair multi-flow problem can be reduced to a polynomial number of linear programming problems and can thus be solved in polynomial time.
Proof of Theorem 4. Recall that a stable allocation is found by computing a fair multi-flow; the flow vector is unique for the multi-flow problem, according to [11]. Therefore, the profile of the optimal allocation is also unique. □
5. Indivisible Items and Agents with 0/1-Sub Valuations
We now move on to the scenario of IND-SUB.
The 0/1-sub function is closely related to matroid theory [19]. A matroid is a pair (E, ℐ), where E is a finite set (called the ground set) and ℐ is a family of subsets of E (called the independent sets).
The independent sets satisfy the following three axioms:
- (I1)
∅ ∈ ℐ;
- (I2)
If A ∈ ℐ and B ⊆ A, then B ∈ ℐ;
- (I3)
If A, B ∈ ℐ and |A| > |B|, then there exists x ∈ A \ B, such that B ∪ {x} ∈ ℐ.
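On small ground sets, the three axioms can be checked by brute force. The checker and the example families below are our own illustration (the second family violates downward closure (I2)):

```python
from itertools import combinations

def is_matroid(ground, indep):
    """Brute-force check of the matroid axioms (I1)-(I3)."""
    indep = {frozenset(s) for s in indep}
    if frozenset() not in indep:                       # (I1) empty set
        return False
    for A in indep:                                    # (I2) downward closure
        if any(frozenset(B) not in indep
               for k in range(len(A)) for B in combinations(A, k)):
            return False
    for A in indep:                                    # (I3) exchange axiom
        for B in indep:
            if len(A) > len(B):
                if not any(B | {x} in indep for x in A - B):
                    return False
    return True

# uniform matroid U_{2,3}: all subsets of {a, b, c} of size at most 2
ground = {"a", "b", "c"}
U23 = [set(c) for k in range(3) for c in combinations(ground, k)]
print(is_matroid(ground, U23))                         # True
# not a matroid: {a, b} is "independent" but its subset {a} is not
print(is_matroid(ground, [set(), {"b"}, {"a", "b"}]))  # False
```

Uniform matroids of this kind are exactly the clean-bundle families induced by capped 0/1-sub valuations.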
If an agent has a 0/1-sub valuation on items, then the set of clean bundles forms the set of independent sets of a matroid (proved in Benabbou et al. [7]). Benabbou et al. [7] further showed that under 0/1-sub valuations, the set of max-USW allocations optimal under any SPD criterion is consistent with the set of LexiMin allocations. This result generalizes Claim 1 of Theorem 1 (for 0/1-add valuations). In addition, Babaioff et al. [10] showed that under 0/1-sub valuations, such an optimal allocation can be found in polynomial time.
To the best of our knowledge, however, it is not known to what extent the valuation profiles of different optimal allocations can differ. We answer this question with the following theorem.
In the following, we consider the set of (max-USW and clean) allocations that are optimal under any SPD criterion.
Theorem 5.
For , it holds that for each agent , where and . In other words, the Chebyshev distance is .
Proof. Assume that (i) and (ii) hold. If there are multiple such pairs of allocations, take a pair with a minimum symmetric difference.
Without loss of generality, assume that . Assume that . Notice that . Take the minimum index i satisfying . Clearly, for all we have . In fact, for i, it must hold that . Recall that the smaller , the better the vector under . Indeed, if , then together with (4), . Moreover, since are the smallest i elements in . Together, we have . By comparing the smallest i elements of and , we see , contradicting the assumption that and are both optimal.
By the definition of i and the assumption , it is easy to see and hence . Further, we claim . It reduces to proving . Since and are both optimal, the two multisets and are equivalent. By (4), for all . Together, the multiset equals . Further, by the assumption , the elements in are not smaller than . Recalling , we have .
Recall that the family of clean bundles for forms a family of independent sets of a matroid [7]. By (I3) of the matroid axioms and inequality (5), there exists an item making . And this item is allocated to some agent under the allocation; otherwise, it is not allocated to anyone and can be allocated to agent i, violating that the allocation is max-USW. Consider the following three cases:
- 1.
Suppose . Then transferring from to i in decreases , contradicting that is optimal.
- 2.
Suppose . We note that since (inequality (6)). If we transfer from to i in , then and are unchanged, which means still satisfies the two conditions (i) and (ii), but the objective value decreases, a contradiction.
- 3.
Suppose . We first show . This clearly holds by (4) if . When , since , we have . Together with the assumption in this case, we have . Suppose to the contrary that , which means . Applying (4), . Moreover, since are the smallest i elements in . Together, , by comparing the smallest i elements of and , yielding , a contradiction.
With (i.e., ) in hand, since is clean, we have . Further since and are clean (i.e., independent sets of a matroid), there exists an item , such that . And is allocated to some agent under , as otherwise is not allocated to anyone, and we can transfer from to i and allocate to in , violating that is max-USW. We note that and because (recall that ) and .
Repeating the same argument and letting , we obtain a sequence of items and agents . Let denote the allocation transferring from to under for all . The sequence of items and agents satisfies .
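The effect of transferring items along such a sequence can be pictured schematically: one unit of value moves from the last agent of the sequence to the first, while every intermediate agent's bundle size is preserved. The agent and item names below are our own illustration:

```python
def transfer_along(alloc, seq):
    """Apply a chain of item transfers; seq lists (item, from_agent, to_agent)."""
    for item, src, dst in seq:
        alloc[src].remove(item)
        alloc[dst].add(item)
    return alloc

# schematic allocation: agent i and two agents p1, p2 along the sequence
alloc = {"i": {"a"}, "p1": {"b"}, "p2": {"c", "d"}}
before = {agent: len(bundle) for agent, bundle in alloc.items()}
transfer_along(alloc, [("b", "p1", "i"), ("c", "p2", "p1")])
assert len(alloc["i"]) == before["i"] + 1    # first agent gains one item
assert len(alloc["p1"]) == before["p1"]      # intermediate count unchanged
assert len(alloc["p2"]) == before["p2"] - 1  # last agent loses one item
```

In the proof, the matroid exchange axiom guarantees that every transfer in the chain keeps each bundle clean.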
See Figure 4 for an illustration of the sequence. We note that the same item does not appear again, since for , we have (as a result, ). Thus, the sequence must terminate when we reach an agent with or for the first time. If , after transferring items along the sequence, we have that is not optimal, a contradiction. If , we note that agent q (recall that ) is not in the sequence since (inequality (6)). After transferring items along the sequence, and are unchanged, which means still satisfies the two conditions (i) and (ii), but the objective value decreases, a contradiction.
To sum up, assuming, contrary to Theorem 5, that there exists a pair of allocations meeting (i) and (ii) leads to a contradiction in every case. Thus, the theorem holds. □
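For concreteness, the Chebyshev (L-infinity) distance that Theorem 5 bounds compares two valuation profiles coordinate by coordinate. The profiles below are made up for illustration:

```python
def chebyshev(u, v):
    """L-infinity distance between two valuation profiles."""
    return max(abs(a - b) for a, b in zip(u, v))

# hypothetical profiles of two optimal allocations
print(chebyshev([3, 2, 2], [2, 3, 2]))  # 1
print(chebyshev([1, 4, 4], [1, 4, 4]))  # 0
```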
Remark 6.
Our proof of Theorem 5 is similar to that of Lemma 3.12 in [7], which is the 0/1-sub valuation version of Lemma 1 (implying that a non-optimal allocation is non-optimal under any SPD criterion). There is a minor flaw in their proof, which we have corrected in our proof of Theorem 5. In the proof of Lemma 3.12 in [7], for the sequence of items and agents, the authors claim that no agent appears twice; their proof begins by arguing that if the same agent appeared again, the allocation χ would still be clean after transferring items along the cycle (a subsequence of the sequence with a common agent as its starting and ending element). However, this claim and its proof are incorrect. See the example in the left figure of Figure 4, where agent 2 appears twice in the sequence. After transferring items along the cycle (which is equivalent to swapping items b and c between agents 2 and 1), the allocation may not be clean if the resulting bundle is not a clean bundle of agent 2. Indeed, the family of clean bundles of agent 2 may be such that the swap leaves her bundle unclean. We note that Lemma 3.12 in [7] still holds, and the flaw in its proof can be corrected by arguing instead that no item appears twice, as in our proof of Theorem 5.

The layer structures of the IND and DIV scenarios admit a useful uniformity within each layer: the number of items allocated to any two agents differs by at most a fixed, small constant that is independent of the specific instance (a gap of at most a small constant in IND, and exactly equal values within a layer in DIV). This uniformity is enabled by the additive nature of the valuations, which allows items to be freely transferred between agents without affecting their marginal gains on other items.
In the IND-SUB scenario, however, this regular layer structure breaks down. While the overall profiles of different optimal allocations remain close (a bounded Chebyshev distance, as shown in Theorem 5), the combinatorial rigidity imposed by submodular (matroid) constraints prevents the formation of stable, invariant layers in which allocations differ only by a bounded constant. A simple counterexample is an instance with two agents and ten items, with submodular valuations for agents 1 and 2 defined on any subset S of items in terms of a parameter x of the instance. The ten items can be freely allocated to the two agents, and the allocation is stable as long as agent 1 receives x items. Thus, the two agents are in the same layer. However, the difference in the number of items that the two agents receive changes as the instance changes, rather than being a fixed constant across the whole IND-SUB scenario.
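One concrete way to instantiate such a counterexample (the cap-style valuations below are our own assumption, not necessarily the paper's exact functions) uses the matroid-rank valuations v1(S) = min(|S|, x) and v2(S) = min(|S|, 10 - x): any allocation giving agent 1 exactly x items attains the maximum USW of 10, yet the item-count gap between the agents depends on x.

```python
def v(S, cap):
    # matroid-rank (0/1-submodular) valuation: each item adds 1 up to the cap
    return min(len(S), cap)

gaps = []
for x in range(1, 10):
    items = set(range(10))
    bundle1 = set(range(x))      # agent 1 takes x items
    bundle2 = items - bundle1    # agent 2 takes the remaining 10 - x
    assert v(bundle1, x) + v(bundle2, 10 - x) == 10  # max-USW is always 10
    gaps.append(abs(len(bundle1) - len(bundle2)))
print(gaps)  # [8, 6, 4, 2, 0, 2, 4, 6, 8] -- the gap varies with x
```

The gap ranges over the whole instance family instead of being a single fixed constant, which is exactly the failure of invariant layers described above.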