Article

Optimal Allocations Under Strongly Pigou–Dalton Criteria: Hidden Layer Structure and Efficient Combinatorial Approach

School of Intelligent Systems Engineering, Shenzhen Campus, Sun Yat-sen University, Shenzhen 518107, China
*
Author to whom correspondence should be addressed.
Mathematics 2026, 14(4), 658; https://doi.org/10.3390/math14040658
Submission received: 2 January 2026 / Revised: 6 February 2026 / Accepted: 10 February 2026 / Published: 12 February 2026
(This article belongs to the Special Issue Game Theory and Operations Research)

Abstract

We investigate optimal social welfare allocations of m items to n agents with binary additive or submodular valuations. For binary additive valuations, we prove that the set of optimal allocations coincides with the set of so-called stable allocations, as long as the employed criterion for evaluating social welfare is strongly Pigou–Dalton (SPD) and symmetric. Many common criteria are SPD and symmetric, such as Nash social welfare, LexiMax, LexiMin, the Gini Index, Entropy, and Envy Sum. We also design efficient algorithms for finding a stable allocation, including an O(m²n)-time algorithm for the case of indivisible items and an O(m²n⁵)-time one for the case of divisible items. The former is faster than existing algorithms or admits a simpler analysis; the latter is the first combinatorial algorithm for that problem. It utilizes a hidden layer partition of items and agents admitted by all stable allocations and reduces the case of divisible items to the case of indivisible items. In addition, we show that the profiles of different optimal allocations have a small Chebyshev distance: zero for the case of divisible items under binary additive valuations, and at most one for the case of indivisible items under binary submodular valuations.

1. Introduction

Maximizing social welfare or minimizing inequality in allocating resources to agents is an important topic in social economics and has been studied extensively in recent years [1,2,3,4]. Each agent has her own subjective valuation function over a subset of resources (items). Yet how to evaluate the welfare or inequality has no unified answer. Some may suggest LexiMin as the criterion [5]—where we maximize the smallest valuations of all agents, then maximize the second smallest valuations of them, and so on; while others may suggest Maximum Nash Welfare (MNW) [4], where we maximize the product of valuations of all agents. For most classes of valuation functions, such as the additive valuation functions (where an agent’s valuation is the sum of her valuations for individual items in her bundle), the optimal allocations vary with the selected criteria.
For the special case of binary additive valuations on items, however, Aziz and Rey [6] showed that the LexiMin allocations are equivalent to the MNW allocations. This result raises a natural question: are there more connections among other criteria? Benabbou et al. [7] gave a positive answer: over all allocations with maximal utilitarian social welfare (USW, defined as the sum of valuations, which guarantees efficiency), the optimal allocations under any strongly Pigou–Dalton (SPD) criterion (a.k.a. the transfer principle) [8] coincide with the LexiMin allocations, even under binary submodular valuations, which subsume binary additive valuations. Roughly, the Pigou–Dalton principle requires that if some income is transferred from a rich person to a poor person without making the rich person poorer than the poor one, then the measured inequality should not increase (or should strictly decrease, in the strong version); see the formal definition in Section 2. The SPD principle embodies a strictly egalitarian stance: any feasible redistribution toward balance must be reflected in a measurable welfare improvement. It is particularly compelling in contexts where society prioritizes minimal disparity over efficiency margins. Most common criteria satisfy the SPD principle (see Example 1).
Inspired by the aforementioned consistency of optimums among all the SPD criteria [7], we conduct a further study of the optimums under the SPD criteria mainly for the following three scenarios:
IND.    
Indivisible items and agents with binary additive valuations.
DIV.    
Divisible items and agents with binary additive valuations.
IND-SUB. 
Indivisible items and agents with binary submodular valuations.
Our main results are summarized below. Like [7], we are only concerned with allocations of maximal USW, to ensure efficiency. Unless otherwise stated, the criteria for evaluating inequality are SPD and symmetric. Moreover, the profile of an allocation χ refers to the vector of the n agents' valuations under χ:
(a)
For IND, the profiles of optimal allocations have a Chebyshev distance of at most one. Moreover, there is a layer structure hidden behind the optimal allocations: the agents and items are partitioned into several layers so that items can only be allocated to the agents within the same layer in all optimal allocations. The layer structure admits the property that within each layer, the number of items allocated to any two agents differs by at most one, a fixed and small constant that is independent of the specific instance.
(b)
In DIV, the profiles of optimal allocations are all the same (in other words, the Chebyshev distance is zero). Halpern et al. [9] have shown a similar result in which only LexiMin and MNW are considered.
A layer structure still exists (with more layers than in the IND scenario). More importantly, by utilizing this layer structure and a reduction to IND, we design the first combinatorial algorithm for finding an optimal allocation for DIV, which runs in O(m²n⁵) time.
(c)
In IND-SUB, the profiles of optimal allocations have a Chebyshev distance of at most one. This extends the corresponding result for IND. However, the layer structure mentioned above does not hold in this scenario.
See Table 1 for a summary of the results. Our research objects are mainly the optimal allocations under arbitrary SPD criteria. In all cases, we derive an "almost consistency" among the optimums of different SPD criteria: the valuation of any agent differs by at most one (or zero in the DIV case) under any two different optimal allocations. We further discover layer structures for the IND and DIV scenarios and design the first combinatorial algorithm for DIV based on the layer structure.

Related Work

Halpern et al. [9] show that under binary additive valuations, given any fractional MNW allocation (i.e., an MNW allocation in the DIV scenario), one can compute in polynomial time a randomized allocation that implements it and whose support contains only deterministic MNW allocations (i.e., MNW allocations in the IND scenario). This is a compelling connection between the deterministic and fractional MNW allocations: given a fractional MNW allocation, one can find a convex combination of deterministic MNW allocations that yields it. We note that their connection is not a computational method for fractional MNW allocations (since it requires a fractional MNW allocation as input), whereas our method finds an optimal allocation for the DIV scenario using an algorithm designed for computing optimal allocations in the IND scenario.
For computational tractability, the SPD optimal allocations can be computed in polynomial time (by computing a LexiMin or MNW allocation) in the scenario of IND [2,4,5], DIV [11] and IND-SUB [6]. We propose a method to find an optimal allocation of the scenario of DIV with an algorithm designed for computing optimal allocations of the scenario of IND. This is reminiscent of the relation between integer programming and linear programming. The well-known branch-and-bound method uses linear programming as a subprogram to solve the integer programming problem. In this paper, based on nontrivial observations, we split each item into a fixed number of pieces and prove that the optimal allocation over the pieces (viewed as indivisible items) is exactly an optimal allocation of the scenario of DIV.
The setting of binary valuations also arises in resource allocation, optimal job scheduling, and load balancing. Lin and Li [12] study the special case in which each job can be processed on a subset of allowed machines and its processing time on each allowed machine is one, and they find the minimum makespan in polynomial time. Kleinberg et al. [5] study a case called uniform load balancing, which assigns jobs to machines so that the vector of allocated bandwidths is LexiMin-optimal. The objective is to find an allocation in which the numbers of jobs assigned to the machines are LexiMax-optimal: lexicographically minimizing the job counts when sorted from largest to smallest.
Another classic class of resource allocation problems involves only one kind of resource. In this setting, we only care about the number of items allocated to each agent, rather than the specific subset of items. Ibaraki and Katoh [13] survey this class of problems, viewing them as optimization problems: a fixed amount of resource (continuous or discrete) is allocated to n agents so as to optimize an objective function (e.g., separable, convex, minimax, or general). In particular, an important case is the discrete resource allocation problem with a separable convex objective, for which Michaeli and Pollatschek [14] discuss properties relating the optimal solutions of the discrete and continuous versions. These properties can be used to speed up the search for an integer solution.
Resource allocation under ternary valuations, a natural extension of binary valuations, is much harder to handle. Under additive valuations, Golovin [15] proves that it is NP-hard to compute a (2 − ϵ)-approximate maximin allocation even when agents' valuations of single items lie in {0, 1, 2}. Another extension of binary valuations is the case where item j has value p_j or zero for each agent. For this class of valuations, Bezakova and Dani [16] prove that no approximation algorithm for the maximin allocation has a performance guarantee better than 2 unless P = NP, and Bansal and Sviridenko [17] present an O(log log m / log log log m)-approximation algorithm.

2. Preliminaries

For an integer k > 0, let [k] denote {1, …, k}. Throughout this paper, [m] refers to the set of m items and [n] refers to the set of n agents.
Each agent i ∈ [n] has a valuation function v_i : 2^[m] → R_+ over subsets of [m] (called bundles), where v_i(∅) = 0. Given a valuation function v_i, we define the marginal gain of an item o over a bundle S ⊆ [m] as Δ_i(S; o) = v_i(S ∪ {o}) − v_i(S). We focus on binary valuations, where the marginal gain satisfies Δ_i(S; o) ∈ {0, 1}.
Two kinds of binary valuations are discussed frequently in the literature, which we call 0/1-add and 0/1-sub. For 0/1-add valuations, the value of a set of items for an agent is the sum of the agent's valuations of the individual items; the marginal gain Δ_i(S; o) depends only on whether agent i likes item o (and is independent of S). For 0/1-sub valuations, the marginal gain Δ_i(S; o) does not increase as S grows; formally, Δ_i(T; o) ≤ Δ_i(S; o) for S ⊆ T ⊆ [m] and o ∈ [m] ∖ T.
Note that 0/1-sub valuations subsume 0/1-add ones.
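To make the two classes concrete, here is a minimal Python sketch of a 0/1-add valuation and the marginal gain Δ_i(S; o); the helper names are ours, not from the paper:

```python
def make_01_add(liked):
    """0/1-add valuation: the agent likes exactly the items in `liked` (a set);
    the value of a bundle is the number of liked items it contains."""
    def v(bundle):
        return len(set(bundle) & liked)
    return v

def marginal_gain(v, S, o):
    """Delta(S; o) = v(S + {o}) - v(S); always 0 or 1 for binary valuations."""
    return v(set(S) | {o}) - v(set(S))

v0 = make_01_add({1, 2, 5})
assert v0({1, 2, 3}) == 2
# For 0/1-add valuations the marginal gain ignores S: it is 1 iff the agent likes o.
assert marginal_gain(v0, {1, 2}, 5) == 1
assert marginal_gain(v0, set(), 3) == 0
```

A 0/1-sub valuation would instead allow the marginal gain to drop from 1 to 0 as the bundle S grows (e.g., a matroid rank function), which is exactly the submodularity condition above.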
An allocation χ refers to a collection of disjoint bundles χ_1, …, χ_n such that χ_1 ∪ … ∪ χ_n ⊆ [m]. An allocation χ is clean if all of its bundles are clean: χ_i is clean if it contains no item with zero marginal gain (i.e., for all o ∈ χ_i, Δ_i(χ_i ∖ {o}; o) = 1). For 0/1-sub valuations, χ_i is clean if and only if v_i(χ_i) = |χ_i| (Proposition 3.3 of [7]).
Given a clean allocation χ, assuming agent i gets h_i = |χ_i| items under χ, we call the vector (h_1, …, h_n) the profile of χ, denoted by p(χ). Henceforth, h_i always refers to |χ_i| unless otherwise stated.
Definition 1 (Criterion). 
A criterion of income inequality (criterion for short), a.k.a. an income inequality metric [1,3], is a function from profiles to R: each profile is evaluated by a real number (called its score); the lower the score, the better the profile under this criterion. Following convention, a criterion must be symmetric; i.e., it evaluates all permutations of p equally.
Example 1 (Some commonly used criteria). 
Let h′_1, …, h′_n be the permutation of h_1, …, h_n sorted in increasing order, and let Φ(x) be any strictly convex function of x, e.g., Φ(x) = x². For every profile p = (h_1, …, h_n):
NSW¬(p) := −∏_{h_i > 0} h_i;   Potential_Φ(p) := Σ_{i=1}^{n} Φ(h_i);   GiniIndex(p) := Σ_i i · h′_i;   EnvySum(p) := Σ_{h_i < h_j} (h_j − h_i);   Congestion(p) := Σ_i C(h_i, 2);   Entropy¬(p) := Σ_i (h_i/m) log(h_i/m);   LexiMax(p) := Σ_i m^{h_i};   LexiMin(p) := Σ_i m^{m − h_i};   where C(h_i, 2) = h_i(h_i − 1)/2.
Remark 1. 
For NSW ¬ , we first need to maximize the number of agents with nonzero valuation and then maximize the product of the nonzero valuations. By the last two definitions, minimizing LexiMax is equivalent to minimizing the largest valuation of all agents, then minimizing the second largest valuation of them, and so on; minimizing LexiMin is equivalent to maximizing the smallest valuation of all agents, then maximizing the second smallest valuation of them, and so on.
We abbreviate f ( p ( χ ) ) as f ( χ ) for any criterion f.
Definition 2 (Strongly Pigou–Dalton).
Profile q = (q_1, …, q_n) is regarded as more balanced than p = (p_1, …, p_n) if there are j, k ∈ [n] such that both q_j and q_k lie in the open interval between p_j and p_k, p_j + p_k = q_j + q_k, and q_i = p_i for i ∈ [n] ∖ {j, k} (namely, the incomes of two agents are more balanced in q, whereas all other incomes remain unchanged). A criterion f is strongly Pigou–Dalton (SPD) [8] if f(q) < f(p) whenever q is more balanced than p. (The SPD principle is also known as the transfer principle.)
All criteria shown in Example 1 are SPD (proved in Appendix A).
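For intuition, the criteria of Example 1 can be evaluated directly on profiles. The following Python sketch (function names are ours) encodes a few of them and checks, on one Pigou–Dalton transfer, that every score strictly drops, as the SPD property demands:

```python
def congestion(p):                 # sum of C(h_i, 2) = h_i (h_i - 1) / 2
    return sum(h * (h - 1) // 2 for h in p)

def envy_sum(p):                   # sum of (h_j - h_i) over pairs with h_i < h_j
    return sum(hj - hi for hi in p for hj in p if hi < hj)

def gini_index(p):                 # sum of i * h'_i over the increasing sort
    return sum(i * h for i, h in enumerate(sorted(p), start=1))

def leximax(p, m):                 # sum of m ** h_i
    return sum(m ** h for h in p)

# The numbers appearing in Remark 2 (m = 14, n = 3):
assert congestion((0, 5, 9)) == 46 and congestion((2, 2, 10)) == 47
assert envy_sum((0, 5, 9)) == 18 and envy_sum((2, 2, 10)) == 16

# A Pigou-Dalton transfer (2, 2, 10) -> (2, 3, 9): both new incomes lie strictly
# between 2 and 10 and the sum is preserved, so every SPD score must decrease.
for f in (congestion, envy_sum, gini_index, lambda p: leximax(p, 14)):
    assert f((2, 3, 9)) < f((2, 2, 10))
```

The profiles (0, 5, 9) and (2, 2, 10) reappear in Remark 2, where two of these criteria disagree about which of the two is better.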
We only consider the allocation with maximal utilitarian social welfare (max-USW, maximizing the sum of the valuations of all agents); otherwise, one may minimize LexiMax ( χ ) ( GiniIndex ( χ ) , etc.) by not allocating any items, which is uninteresting.
Henceforth, unless otherwise stated, allocations are assumed to be max-USW and clean (to this end, we can drop items with zero marginal gain without changing the profile).

3. Indivisible Items and Agents with 0/1-Add Valuations

Definition 3 (Stable allocations in the IND scenario).
Take an allocation χ of indivisible items. For each item o allocated to agent i that can be reallocated to another agent i′ (i.e., v_{i′}({o}) = 1), build an edge (i, i′). Moreover, if there is a simple path (i_1, …, i_k) (k ≥ 2) along such edges, we say that χ admits a transfer from i_1 = u to i_k = v, denoted by u → v, which consists of k − 1 reallocations along the path, after which agent u loses one item and agent v gains one.
A narrowing transfer refers to a transfer u → v with h_u ≥ h_v + 2. A widening transfer refers to a transfer u → v with h_u ≤ h_v. Other transfers (i.e., u → v with h_u = h_v + 1) are called swapping transfers.
Allocation χ is called nonstable if it admits a narrowing transfer, and is called stable otherwise.
Denote by S the set of stable allocations.
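Stability can be tested directly from Definition 3: build the reallocation edges and search for a path from an agent to one poorer by at least two. A minimal Python sketch (the function and variable names are ours):

```python
from collections import deque

def find_narrowing_transfer(bundles, likes):
    """bundles[u]: set of items held by agent u; likes[o]: agents that like item o.
    Returns (u, v) for some narrowing transfer u -> v, or None if χ is stable."""
    n = len(bundles)
    h = [len(b) for b in bundles]
    succ = [set() for _ in range(n)]       # edge u -> v: some item of u is liked by v
    for u in range(n):
        for o in bundles[u]:
            succ[u] |= set(likes[o]) - {u}
    for u in range(n):                     # BFS along transfer paths starting at u
        seen, q = {u}, deque([u])
        while q:
            x = q.popleft()
            for y in succ[x]:
                if y not in seen:
                    if h[u] >= h[y] + 2:   # reachable agent poorer by at least two
                        return (u, y)
                    seen.add(y)
                    q.append(y)
    return None

# Profile (2, 1): no narrowing transfer exists, so the allocation is stable.
assert find_narrowing_transfer([{0, 1}, {2}], [[0], [0, 1], [1]]) is None
# Profile (3, 0) with item 0 liked by agent 1: narrowing transfer 0 -> 1.
assert find_narrowing_transfer([{0, 1, 2}, set()], [[0, 1], [0], [0]]) == (0, 1)
```

Since BFS-tree paths are simple, any pair returned corresponds to a genuine transfer in the sense of Definition 3.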
Barman et al. [4] proved that if an allocation is not optimal under NSW ¬ , then it admits a narrowing transfer (hence it is nonstable). Their result can be easily generalized to any SPD criterion, including LexiMin (as stated in Lemma 1).
Lemma 1. 
Stable allocations are optimal under LexiMin .
(As a corollary, their profiles are equivalent under permutation.)
Proof. 
Assume χ is non-optimal under LexiMin . We shall prove that χ is nonstable, i.e., it admits a narrowing transfer.
First, take an allocation χ * that is optimal under LexiMin .
We build a graph G with n vertices. If an item is allocated to j in χ and to k in χ*, where k ≠ j, we build an arc from j to k. Note that G may have duplicate arcs. Clearly, an arc represents a reallocation (of one item) on χ, and χ becomes χ* after all the arcs (i.e., reallocations) are applied.
We decompose G into several cycles C_1, …, C_a and paths P_1, …, P_b. Denote by s_i, t_i the starting and ending vertices of P_i, respectively. We assume that t_j ≠ s_k for any j ≠ k; otherwise we connect the two paths P_j, P_k into one path.
For 0 ≤ i ≤ b, let χ(i) be the allocation obtained from χ by applying all the arcs (reallocations) in P_1, …, P_i; in particular, χ(0) = χ.
Note that χ(b) becomes χ* after applying the arcs in C_1, …, C_a; since applying a cycle does not change the profile, LexiMin(χ(b)) = LexiMin(χ*). Further, since LexiMin(χ*) < LexiMin(χ), there exists i (1 ≤ i ≤ b) such that LexiMin(χ(i)) < LexiMin(χ(i−1)).
It follows that in χ(i−1), we have h_s ≥ h_t + 2 (where s, t denote s_i, t_i, respectively, for short). It further follows that in χ(0) = χ, we also have h_s ≥ h_t + 2, as h_s never increases and h_t never decreases along the sequence χ(0), …, χ(i−1). Consequently, χ admits a narrowing transfer (from s to t).    □
Theorem 1. 
1. For any SPD criterion, the optimums are exactly S .
2. We can find a stable allocation in O(m²n) time.
Proof. 
1. Fix any SPD criterion f; we need to show that
(1)
A nonstable allocation χ is non-optimal under f.
(2)
A stable allocation χ is optimal under f.
Together, the optimal allocations under f are the stable ones.
Proof of (1): If χ is nonstable, it admits a narrowing transfer; denote by χ′ the allocation after this transfer. Clearly, p(χ′) is more balanced than p(χ), and therefore f(χ′) < f(χ) by the assumption that f is strongly Pigou–Dalton.
Proof of (2): Assume χ, χ′ are both stable. By Lemma 1, p(χ) is equivalent to p(χ′) up to permutation; therefore, f(χ) = f(χ′) as f is symmetric (recall that we always assume so). So all stable allocations have the same score under f. Further, since nonstable allocations are non-optimal under f (by (1)), all stable allocations attain the same lowest score under f (i.e., they are optimal).
2. By Claim 1 of this theorem, finding a stable allocation reduces to finding an allocation with minimum Congestion, which can be found using network flows. Specifically, it reduces to computing a minimum-cost flow in the following network (see Figure 1).
There are m + n + 2 nodes, including a source node s, a sink node t, m nodes u_1, …, u_m representing the items, and n nodes v_1, …, v_n representing the agents. There are Θ(mn) edges in the network: (i) an edge from s to each u_i, with capacity one and cost zero; (ii) an edge from u_i to v_j if agent j likes item i, with capacity one and cost zero; and (iii) m edges from each v_j to t, in which the kth one has capacity one and cost k − 1.
Our target, a flow of value m with minimum cost, can be computed by the Successive Shortest Path algorithm [18], which increases the size of the current flow by one via augmenting along the shortest path in the residual graph, repeated m times. For our particular network, finding such a path reduces to finding an unused edge (v_j, t) with the lowest cost such that s can reach v_j in the residual network, which can be done in O(mn) time by BFS. In total, the algorithm runs in O(m²n) time.    □
Our Algorithm 1 is similar to that of Kleinberg et al. [5] for finding a LexiMax optimal allocation, but our analysis is simpler.
Algorithm 1 StableAllocationIND ( [ m ] , [ n ] , B )
Input: 
m items [m], n agents [n], and a binary valuation matrix B of size m × n, where B(i, j) denotes the valuation of item i for agent j.
Output: 
A stable allocation χ = (χ_1, …, χ_n)
 1:
Construct a flow network G = (V, E) with nodes {s, t} ∪ {u_1, …, u_m} ∪ {v_1, …, v_n}.
 2:
Add edge (s, u_i) with capacity one and cost zero for each item i ∈ [m].
 3:
Add edge (u_i, v_j) with capacity one and cost zero if B(i, j) = 1 (agent j likes item i).
 4:
Add m edges from each v_j to t, where the kth edge has capacity one and cost k − 1.
 5:
Compute min-cost flow f of value m using SuccessiveShortestPath [18].
{Running time: O(m²n)}
 6:
Add item i to χ_j whenever f(u_i, v_j) = 1.
 7:
return  χ
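A compact Python rendering of Algorithm 1 (function and variable names are ours). Rather than calling a general min-cost-flow library, it exploits the structure noted in the proof of Theorem 1: each augmentation BFSes the residual graph from s and picks a reachable agent whose next edge into t is cheapest, i.e., whose current bundle is smallest:

```python
from collections import deque

def stable_allocation_ind(m, n, likes):
    """likes[i]: agents j with B(i, j) = 1. Returns the bundles (lists of items)."""
    owner = [None] * m                       # owner[i]: agent currently holding item i
    bundles = [[] for _ in range(n)]
    for _ in range(m):                       # one augmentation per unit of flow
        parent = [None] * n                  # parent[j] = (item moved to j, previous agent)
        q = deque()
        for i in range(m):                   # first hop of any augmenting path:
            if owner[i] is None:             # an item still unassigned
                for j in likes[i]:
                    if parent[j] is None:
                        parent[j] = (i, None)
                        q.append(j)
        reached = [j for j in range(n) if parent[j] is not None]
        while q:                             # residual edges: j -> (item of j) -> j2
            j = q.popleft()
            for i in bundles[j]:
                for j2 in likes[i]:
                    if parent[j2] is None:
                        parent[j2] = (i, j)
                        q.append(j2)
                        reached.append(j2)
        if not reached:
            break                            # leftover items are liked by nobody
        j = min(reached, key=lambda a: len(bundles[a]))  # cheapest unused edge into t
        while j is not None:                 # apply the reallocations along the path
            i, prev = parent[j]
            if prev is not None:
                bundles[prev].remove(i)
            owner[i] = j
            bundles[j].append(i)
            j = prev
    return bundles
```

Selecting the least-loaded reachable agent plays the role of the cheapest unused edge (v_j, t); by the successive-shortest-path argument in the proof of Theorem 1, the resulting allocation minimizes Congestion and is hence stable.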
Remark 2. 
One might ask whether a better allocation under one criterion is also a better allocation under another. The answer is no, although the best remains the best across different criteria (Theorem 1). For example, for m = 14 and n = 3, we have
Congestion((0, 5, 9)) = 46 < 47 = Congestion((2, 2, 10)), and
EnvySum((0, 5, 9)) = 18 > 16 = EnvySum((2, 2, 10)).

3.1. Layer Partition of Agents and Items

Recall that the profiles of stable allocations are equivalent up to permutation (see the corollary in Lemma 1). Yet the profiles are not unique: e.g., the number of items h_i allocated to agent i may differ across stable allocations. This raises a natural question: to what extent can p(χ) differ over χ ∈ S?
Our next theorem shows that the difference is negligible. The Chebyshev distance between two allocation profiles is the maximum difference in the number of items received by any single agent. It captures the worst-case deviation experienced by any agent. The low Chebyshev distance of any two different χ in S shows the stability of the family of stable allocations.
Theorem 2. 
For χ, χ′ ∈ S, it holds that |h_i − h′_i| ≤ 1 for each agent i ∈ [n], where (h_1, …, h_n) = p(χ) and (h′_1, …, h′_n) = p(χ′). In other words, the Chebyshev distance D(p(χ), p(χ′)) ≤ 1.
Proof. 
Suppose the opposite, i.e., h_i ≥ h′_i + 2 for some agent i (the case h′_i ≥ h_i + 2 is symmetric). We shall prove that χ admits a narrowing transfer and hence is nonstable, which contradicts the assumption χ ∈ S.
We build a graph G with n vertices. If an item is allocated to j in χ and to k in χ′, where k ≠ j, we build an arc from j to k. Note that G may have duplicate arcs. Clearly, χ becomes χ′ after all the arcs (i.e., reallocations) are applied.
Decompose G into paths P_1, …, P_b and a few cycles. Denote by s_j, t_j the starting and ending vertices of P_j, respectively. We assume that t_j ≠ s_k for j ≠ k, as in Lemma 1. Moreover, assume that s_1 = s_2 = i (which is possible because h_i ≥ h′_i + 2).
For 0 ≤ j ≤ b, let χ(j) be the allocation obtained from χ by applying all the arcs (reallocations) in P_1, …, P_j; in particular, χ(0) = χ.
Because p(χ(b)) = p(χ′), we have LexiMin(χ(b)) = LexiMin(χ′). We also know LexiMin(χ(0)) = LexiMin(χ′), since both χ(0) and χ′ are stable (Lemma 1). Together, LexiMin(χ(0)) = LexiMin(χ(b)). It further implies that LexiMin(χ(0)) = LexiMin(χ(1)) = … = LexiMin(χ(b)); otherwise, there exists j such that LexiMin(χ(j)) < LexiMin(χ(j−1)), which means χ(j−1) admits a narrowing transfer s_j → t_j, implying that χ(0) admits the narrowing transfer s_j → t_j.
Since LexiMin(χ(0)) = LexiMin(χ(1)) = LexiMin(χ(2)), we know that s_1 → t_1, i.e., i → t_1 (recall s_1 = i), is a swapping transfer of χ(0), whereas s_2 → t_2, i.e., i → t_2 (recall s_2 = i), is a swapping transfer of χ(1). It follows that h_{t_2} = h_i − 2; therefore, χ(0) = χ admits a narrowing transfer i → t_2.    □
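On small instances, Theorem 2 can be sanity-checked by brute force. The following Python sketch (names are ours) enumerates every clean max-USW allocation, assuming every item is liked by at least one agent so that max-USW means allocating all items, keeps the stable ones, and reports the largest pairwise Chebyshev distance among their profiles:

```python
from itertools import product

def narrowing_exists(h, succ, n):
    # True iff some agent u can reach, via transfer edges, an agent poorer by >= 2.
    for u in range(n):
        seen, stack = {u}, [u]
        while stack:
            x = stack.pop()
            for y in succ[x]:
                if y not in seen:
                    if h[u] >= h[y] + 2:
                        return True
                    seen.add(y)
                    stack.append(y)
    return False

def max_chebyshev_over_stable(likes, n):
    """likes[o]: agents liking item o (assumed nonempty for every o).
    Enumerate clean max-USW allocations, filter the stable ones, and return
    the largest pairwise Chebyshev distance among their profiles."""
    profiles = []
    for assign in product(*likes):         # assign[o]: the agent receiving item o
        h = [0] * n
        succ = [set() for _ in range(n)]
        for o, j in enumerate(assign):
            h[j] += 1
            succ[j] |= set(likes[o]) - {j}
        if not narrowing_exists(h, succ, n):
            profiles.append(h)
    return max(abs(a - b) for p1 in profiles for p2 in profiles
               for a, b in zip(p1, p2))

# Items 0 and 1 are liked by both agents; item 2 only by agent 0. The stable
# profiles are (2, 1) and (1, 2), so the distance is exactly one.
assert max_chebyshev_over_stable([[0, 1], [0, 1], [0]], 2) == 1
```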
Remark 3. 
We will see in a later part of this paper (Theorem 5) that for the scenario of IND-SUB, the Chebyshev distance between different optimal allocations remains at most one. The proof of this generalized result is much more involved and is given in Section 5.
According to Theorem 2, the income h_i does not differ much across stable allocations χ for any agent i, so any stable allocation seems acceptable to all agents. However, many questions regarding stable allocations remain to be settled. For example:
Q 1 .
Are there common properties of all stable allocations?
Q 2 .
Can we obtain the range of h i over all stable allocations?
Q 3 .
Is it possible to efficiently count the number of stable allocations?
Q 4 .
Can we find the profile (of some stable allocation) that optimizes a specific function of h_1, …, h_n?
In what follows, we introduce a construct, called “layer partition” of agents and items, which is independent of stable allocation χ . This construct elucidates the structure of S and, as we will demonstrate, provides a pathway to answering the four questions above.
First of all, we point out that each agent i falls into one of two cases: (1) its income h_i is equal to the same constant d for all χ ∈ S; or (2) its income h_i equals d for some χ ∈ S and equals d − 1 for some other χ ∈ S, for some integer d > 0. This is a corollary of Theorem 2.
Definition 4 
(Layer partition of agents and items). Denote by layer_d the set of agents i for which h_i always equals d (over all χ ∈ S), and by layer′_d the set of agents i for which h_i equals d for some χ ∈ S and equals d − 1 for some other χ ∈ S.
Take an arbitrary stable allocation χ ∈ S. Denote by Layer_d (and Layer′_d, respectively) the set of items allocated by χ to the agents in layer_d (and in layer′_d, respectively).
The layer partition of agents and items consists of the partition of agents layer_0, layer′_1, layer_1, layer′_2, … and the partition of items Layer_0, Layer′_1, Layer_1, Layer′_2, ….
Lemma 2. 
Under each stable allocation χ ∈ S, the items from Layer_d (or Layer′_d, respectively) are always allocated to the agents from layer_d (or layer′_d, respectively).
Equivalently, the set of items allocated to layer_d (d ≥ 0) is invariant under every stable allocation, and the set of items allocated to layer′_d (for d > 0) is invariant as well.
In short, items in any given layer can only be allocated to agents in that layer. To be clear, we regard layer_d, Layer_d as one layer and layer′_d, Layer′_d as another.

3.2. Proof of Lemma 2 and the Explicit Construction of the Layer Partition

In this subsection, we prove Lemma 2 and show how to compute layer_d, layer′_d, Layer_d, and Layer′_d efficiently. To this end, we first introduce some additional notation.
Fix any stable allocation χ ∈ S. Recall the transfers on χ (and the notation →) from Definition 3. The agents receiving d items under χ can be classified into three groups:
p_d := { i : h_i = d and there is a transfer i → j with h_j = d − 1 };
q_d := { i : h_i = d and there is a transfer k → i with h_k = d + 1 };
r_d := { i : h_i = d, i ∉ p_d, and i ∉ q_d }.
See Figure 2a for an illustration of p_d, q_d and r_d. Observe that p_d is disjoint from q_d; otherwise, there would be a transfer k → j with h_k = h_j + 2, contradicting the fact that χ is stable. Denote s_d = p_d ∪ q_{d−1}. Be aware that the quantities h_i, the groups p_d, q_d, r_d, and the sets s_d all depend on the selected stable allocation χ.
Denote by R_d and S_d the sets of items allocated to r_d and s_d under χ, respectively.
Moreover, within this subsection, we rank all items and agents as follows:
  • Elements in r_0, R_0 have rank 0; elements in s_1, S_1 have rank 1;
  • Elements in r_1, R_1 have rank 2; elements in s_2, S_2 have rank 3; and so on.
Proposition 1. 
In the aforementioned stable allocation χ,
1.
There is no transfer from s_d to any lower-ranked agents (namely, r_{d−1}, s_{d−1}, …);
2.
There is no transfer from r_d to any lower-ranked agents (namely, s_d, r_{d−1}, …).
This follows easily from the definitions of p_d, q_d, and r_d; the straightforward proofs are omitted.
In the following, the rank refers to the rank defined above with respect to the fixed χ .
Proposition 2. 
In any stable allocation χ′, it holds that
1.
A higher-ranked item cannot be allocated to a lower-ranked agent.
2.
A lower-ranked item cannot be allocated to a higher-ranked agent.
3.
Together, items in R d are always assigned to r d , and items in S d are always assigned to s d .
Proof. 
1. An item o from S_d (or R_d, respectively) is not attractive to any lower-ranked agent: otherwise, χ would admit a transfer from s_d (or r_d, respectively) to a lower-ranked agent (using just the one reallocation of o), contradicting Proposition 1. As a result, an item from S_d (or R_d, respectively) cannot be allocated to any lower-ranked agent in χ′.
2. We prove this by induction. For simplicity of presentation, assume max_i h_i = 3 in this proof; see Figure 2. First, the items ranked lower than R_3 cannot be allocated to r_3 in χ′; otherwise, since the items in R_3 must be allocated to r_3 (by the analysis above), some agent in r_3 would receive more than 3 items by the pigeonhole principle, which implies that p(χ′) is non-optimal under LexiMax, and therefore χ′ is not stable (Theorem 1). Similarly, the items ranked lower than S_3 cannot be allocated to s_3 in χ′; otherwise, p(χ′) would contain more agents receiving 3 items than p(χ) does, which is impossible; and so on.    □
Proposition 3. 
For i ∈ r_d, h_i is always equal to d for all χ′ ∈ S; and for i ∈ s_d, h_i can be d or d − 1 under different χ′ ∈ S. It follows that layer_d = r_d and layer′_d = s_d.
Proof. 
According to Claim 3 of Proposition 2, in every stable allocation, the agents in r_d receive exactly the items in R_d. Note that |R_d| = d·|r_d|. Therefore, in every stable allocation, each agent from r_d receives exactly d items (otherwise, some agent would receive more than d items and another fewer than d, which is worse than the equal distribution under the criterion LexiMin, and hence nonstable). Hence, h_i always equals d for i ∈ r_d.
For each i ∈ p_d, we know h_i = d, and h_i can be reduced to d − 1 in another stable allocation (by applying the transfer in the definition of p_d, which is a swapping transfer). For each i ∈ q_{d−1}, we know h_i = d − 1, and h_i can be increased to d in another stable allocation (by applying the transfer in the definition of q_{d−1}). Therefore, h_i can be d − 1 or d for i ∈ p_d ∪ q_{d−1} = s_d.    □
Lemma 2 now follows from the analysis above. In particular, combining the last two propositions, we obtain that Layer_d = R_d and Layer′_d = S_d.
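Following this construction, the groups p_d, q_d, r_d (and hence the layers) can be read off from any single stable allocation by reachability in the transfer graph. A Python sketch under our naming (bundles and likes as in the earlier sketches):

```python
def layer_partition(bundles, likes):
    """Classify agents into r_d (= layer_d) and s_d (= layer'_d) from one
    stable allocation. bundles[u]: items of agent u; likes[o]: agents liking o."""
    n = len(bundles)
    h = [len(b) for b in bundles]
    succ = [set() for _ in range(n)]            # transfer edge u -> v
    for u in range(n):
        for o in bundles[u]:
            succ[u] |= set(likes[o]) - {u}
    reach = []
    for u in range(n):                           # agents reachable by a transfer from u
        seen, stack = {u}, [u]
        while stack:
            x = stack.pop()
            for y in succ[x]:
                if y not in seen:
                    seen.add(y)
                    stack.append(y)
        reach.append(seen - {u})
    # p: can pass an item down to someone with one item fewer;
    # q: can receive an item from someone with one item more.
    p = {u for u in range(n) if any(h[v] == h[u] - 1 for v in reach[u])}
    q = {u for u in range(n) if any(h[k] == h[u] + 1 and u in reach[k] for k in range(n))}
    layers = {}
    for u in range(n):
        d = h[u]
        key = ("layer'", d) if u in p else ("layer'", d + 1) if u in q else ("layer", d)
        layers.setdefault(key, set()).add(u)
    return layers
```

By Proposition 3, the keys ("layer", d) and ("layer'", d) correspond to r_d and s_d = p_d ∪ q_{d−1} of the chosen stable allocation; e.g., an agent in q_{d} belongs to s_{d+1}, hence the key ("layer'", d + 1).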

3.3. Some Applications of the Layer Partition: Answering Q1, …, Q4

With the layer partition of agents and items, we can promptly answer Q 1 to Q 4 :
Answer of Q1. 
They all allocate items layer by layer, in which the layers are invariant with respect to the chosen allocation.
Answer of Q2. 
For i ∈ layer_d, the range of h_i is [d, d], i.e., a single value. For i ∈ layer′_d, the range of h_i is [d − 1, d]. Computing the layers reduces to computing r_d and s_d for any fixed χ ∈ S, which is easy. Hence, we can compute the ranges of all h_i efficiently.
Alternative methods for computing the ranges have to use network flow and are far more complicated.
Answer of Q3. 
Counting stable allocations reduces to counting stable allocations within each layer (and then applying the rule of product).
However, this counting problem is #P-hard, as counting stable allocations within layer_1 amounts to counting perfect matchings in a bipartite graph, which is already #P-hard.
Answer of Q4. 
Finding a profile (of a stable allocation) that minimizes a linear function of h_1, …, h_n, e.g., Σ_i c_i h_i, is easy. Modify the network used in the proof of Claim 2 of Theorem 1 as follows: among the edges from v_j to t, change the cost of the kth one to (k − 1) · A + c_j, where A is a large enough constant. Then, the minimum-cost flow still attains the minimum Congestion, and among such allocations it minimizes Σ_i c_i h_i. For non-linear functions, the problem is more difficult; yet the layer partition still helps break down the task.
Remark 4. 
Halpern et al. [9] implicitly found a structure similar to our layer structure. Specifically (in the proof of their Theorem 4), given a fractional MNW allocation, they partition the agents into subsets according to the floors of their valuations (so within a given subset, the valuation of each agent lies in [x, x + 1) for some integer x). They show, implicitly, that the agents in each subset must be allocated exactly a certain subset of items in any fractional MNW allocation, and that this partition and the correspondence between agent subsets and item subsets also hold for deterministic MNW allocations. We discovered our layer structures independently. Conceptually, our layers are defined not merely by valuation intervals but by partitioning the agents who receive the same number of items according to the transfer relation between agents. Technically, the existence and the computation of the layer partition for IND are established directly within the IND scenario, without the assistance of the DIV case. Moreover, our layer structures are more specific, as can be seen from the valuation range of each agent within a layer: for the DIV scenario, instead of their range [x, x + 1) for some integer x, each of our layers pins the valuation to a fixed rational number; for the IND scenario, instead of their range [x, x + 1], each of our layers has range [x, x + 1] for some integer x or a single fixed integer (the range [x, x]).

4. Divisible Items and Agents with 0/1-Add Valuations

This section discusses the scenario of DIV, i.e., the case of divisible items, in which we are allowed to allocate a part of an item to an agent (and, possibly, different parts to different agents). Note that 0/1-sub valuations do not extend naturally to divisible items. Therefore, we restrict ourselves to 0/1-add valuations in this section.
Definition 5 
(Stable allocations in the scenario of DIV). Let χ be an allocation of divisible items. For each part of item o allocated to agent i that can be reallocated to another agent i′ (i.e., v i′ ( { o } ) = 1 ), build an edge ( i , i′ ) . Moreover, if there is a simple path ( i 1 , … , i k ) ( k ≥ 2 ) along such edges, we say that χ admits a transfer from i 1 = u to i k = v , denoted by u → v . It consists of k − 1 reallocations of a Δ > 0 fraction of items along the path, after which h u decreases by Δ and h v increases by Δ.
A narrowing transfer is a transfer from u to v with h u − Δ ≥ h v + Δ . A widening transfer is a transfer from u to v with h u ≤ h v . An allocation χ is called nonstable if it admits some narrowing transfer and is stable otherwise.
Denote by S * the set of stable allocations in the DIV scenario.
Lemma 3. 
Stable allocations are optimal under LexiMin . (As a corollary, their profiles are equivalent up to permutation.)
Proof. 
Assume χ is non-optimal under LexiMin . We shall prove that χ is nonstable, i.e., it admits a narrowing transfer.
First, take an allocation χ * that is optimal under LexiMin .
We build a graph G with n vertices (one per agent). Note that, according to χ and χ * , each item can be divided into several pieces (with a total size of one), so that each piece is given to a certain agent (denoted by j) in χ and to a certain agent (denoted by k) in χ * . If j ≠ k , build an arc from j to k, with weight equal to the size of this piece. Clearly, an arc represents a reallocation (of one piece) on χ, and χ becomes χ * after all the arcs (i.e., reallocations) are applied.
We decompose graph G into several cycles C 1 , … , C a and paths P 1 , … , P b , where the edges in any single cycle or path all have the same weight, and where t j ≠ s k for j ≠ k , with s i and t i denoting the starting and ending vertices of P i , respectively. Such a decomposition exists under an appropriate division of items.
For 0 ≤ i ≤ b , let χ ( i ) be the allocation obtained from χ by applying all the arcs (reallocations) in P 1 , … , P i ; in particular, χ ( 0 ) = χ .
Note that χ ( b ) becomes χ * after applying the arcs in C 1 , … , C a , and applying a cycle leaves every h value unchanged; hence LexiMin ( χ ( b ) ) = LexiMin ( χ * ) . Further, since LexiMin ( χ * ) < LexiMin ( χ ) , there exists i ( 1 ≤ i ≤ b ) such that LexiMin ( χ ( i ) ) < LexiMin ( χ ( i − 1 ) ) .
It follows that in χ ( i − 1 ) , we have h s > h t (where s and t denote s i and t i , respectively, for short). It further follows that in χ ( 0 ) = χ , we also have h s > h t , since h s never increases and h t never decreases in the sequence χ ( 0 ) , … , χ ( i − 1 ) . Consequently, χ admits a narrowing transfer (from s to t).    □
Theorem 3. 
1. For any SPD criterion, the optimums coincide with S * .
2. We can find a stable allocation in polynomial time.
Just as we proved Claim 1 of Theorem 1 using Lemma 1, we can prove Claim 1 of Theorem 3 using Lemma 3 (the proof is omitted).
To find a stable allocation in polynomial time, we can use an approach based on linear programming (LP), which is shown in Section 4.2 (thus proving Claim 2 of Theorem 3). Alternatively, we present a purely combinatorial approach (which is more efficient and more interesting) for finding such an allocation. It utilizes the layer partition together with several nontrivial ideas; see Section 4.1 for details.
Theorem 4. 
For χ , χ′ ∈ S * , it holds that p ( χ ) = p ( χ′ ) ; namely, the Chebyshev distance D ( p ( χ ) , p ( χ′ ) ) = 0 .
A proof of Theorem 4 is related to the LP approach mentioned above and is given in Section 4.2. An alternative proof is similar to the proof of Theorem 2 and is omitted.

4.1. Layer Partition of Agents and Items for Divisible Items and a Combinatorial Algorithm for Finding a Stable Allocation

In the following, we extend the layer partition given in Section 3.1 to the divisible case and then present the aforementioned combinatorial algorithm.
According to Theorem 4, the profile p ( χ ) is unique for χ ∈ S * . In other words, the income h i of each agent i is independent of χ, as long as χ is stable.
For each real number d, denote by layer d * the set of agents that receive exactly d items in every stable allocation; formally, layer d * = { i ∣ h i = d } .
Lemma 4. 
The set of items allocated to layer d * is invariant over χ ∈ S * . Moreover, this set (denoted by Layer d * henceforth) consists of complete items only.
Proof. 
Fix χ ∈ S * , and let L d be the set of items allocated to layer d * under χ. An item in L d cannot be allocated to layer d′ * with d′ > d ; otherwise, there would be a narrowing transfer. An item in L d cannot be allocated to layer d′ * with d′ < d ; otherwise, the allocation would not be LexiMax optimal (this can be proved by induction as in the proof of Proposition 2). Therefore, the items in L d can only be allocated to layer d * , in every stable allocation. We thus obtain the first part of this lemma.
In any stable allocation, an item cannot be allocated to different layers. Otherwise there is clearly a simple narrowing transfer.    □
Lemma 4 implies a layer partition of items and agents, where items Layer d * and agents layer d * are in the same layer.
Lemma 5. 
A stable allocation γ for the divisible case can be obtained by optimally reallocating, for each d, the items Layer d (regarded as divisible items) to the agents layer d and the items Layer′ d (regarded as indivisible items) to the agents layer′ d .
Proof. 
For each d, let γ d be an optimal allocation of the items Layer d (regarded as divisible items) to the agents layer d . Moreover, let γ′ d be an optimal allocation of the items Layer′ d (regarded as indivisible items) to the agents layer′ d , so that each agent in layer′ d receives d items. Combine γ d and γ′ d over all layers to obtain an overall allocation γ. We claim that γ ∈ S * . The proof is as follows.
Items in a higher layer cannot be given to agents in a lower layer: there are no such edges, as shown in the proof of Proposition 2 (the argument also holds for the divisible case). Hence there is no narrowing transfer between layers in γ. Also, there is no narrowing transfer within any layer of γ, by the construction of γ. Together, γ admits no narrowing transfer and hence is stable; so γ ∈ S * .    □
Lemma 6. 
For any χ ∈ S * , it holds that
1. h i = d for each i ∈ layer′ d ;
2. h i ∈ [ d − 1 , d ] for each i ∈ layer d .
Proof. 
Denote ( g 1 , , g n ) = p ( γ ) and ( h 1 , , h n ) = p ( χ ) .
Combining (1)–(3) below, we immediately obtain the lemma:
(1)
For each i ∈ [ n ] , it holds that h i = g i (apply Theorem 4 with γ , χ ∈ S * ).
(2)
For each i ∈ layer′ d , it holds that g i = d (trivial).
(3)
For each i ∈ layer d , it holds that g i ∈ [ d − 1 , d ] . (Proof: since γ d is LexiMin optimal, g i ≥ d − 1 ; since γ d is also LexiMax optimal, g i ≤ d .)
   □
Lemma 7. 
Given χ ∈ S * , for two different layers layer d * and layer d′ * of χ, we have
| d − d′ | > 1/n² .
Proof. 
By Lemma 5, we obtain that
d = | Layer d * | / | layer d * | and d′ = | Layer d′ * | / | layer d′ * | .
According to Lemma 4, | Layer d * | and | Layer d′ * | are integers, and | layer d * | + | layer d′ * | ≤ n .
Thus, | | Layer d * | · | layer d′ * | − | Layer d′ * | · | layer d * | | ≥ 1 and | layer d * | · | layer d′ * | < n² .
Therefore,
| d − d′ | = | | Layer d * | · | layer d′ * | − | Layer d′ * | · | layer d * | | / ( | layer d * | · | layer d′ * | ) > 1/n² .
   □
The last three lemmas are crucial to our combinatorial algorithm for DIV.
For the divisible case, we divide each item into n² identical (unbreakable) pieces of size 1/n² , and solve the resulting indivisible instance (with n² m indivisible pieces) using the algorithm in Claim 2 of Theorem 1.
We obtain an allocation γ with several layers, each layer's index d being a multiple of 1/n² . For each agent j ∈ layer′ d , we have h j = d , and for each i ∈ layer d , we have h i ∈ [ d − 1/n² , d ] .
According to Lemma 6, for the divisible case, we have
h i ∈ [ d − 1/n² , d ] for i ∈ layer d .
As a result, i ∈ layer d′ * for some d′ ∈ [ d − 1/n² , d ] .
By Lemma 7, there can be only one such d′ in the range [ d − 1/n² , d ] for i ∈ layer d . By Lemma 5, we can obtain an optimal allocation for the divisible case just by reallocating items within each layer, and we can calculate that d′ = | Layer d | / | layer d | . See Algorithm 2 for details.
Algorithm 2 Finding a stable allocation for DIV via reduction to IND
Input: 
m items [ m ] , n agents [ n ] , binary valuation matrix B of size n × m , where B ( i , j ) denotes the valuation of agent i for item j.
Output: 
A stable allocation χ * for divisible items.
1: Divide each item j ∈ [ m ] into n² identical pieces of size 1/n² ; let P denote the set of all these pieces.
2: Expand B to B′ , such that for each piece o of item j, we have B′ ( i , o ) = B ( i , j ) .
3: γ ← StableAllocationIND ( P , [ n ] , B′ ) {Algorithm 1}.
4: Compute the layer partition of γ: agents layer d , layer′ d and items Layer d , Layer′ d .
5: for each layer of γ with agents layer′ d and items Layer′ d do
6:  Allocate the items in Layer′ d integrally to the agents in layer′ d , such that each agent gets value d in χ * .
7: end for
8: for each layer of γ with agents layer d and items Layer d do
9:  Allocate the items in Layer d fractionally to the agents in layer d , such that each agent receives exactly | Layer d | / | layer d | value in χ * .
10: end for
11: return χ * .
Remark 5. 
The piece size 1/n² is a sufficient choice. By Lemma 6, each layer′ d of γ corresponds to a layer d * of χ * , and each layer d of γ corresponds to every layer d′ * with d′ ∈ [ d − 1/n² , d ] . By Lemma 7, two layer indices in the divisible case must be more than 1/n² apart, so there can be only one such d′ in the range [ d − 1/n² , d ] .
As a result, each layer in the indivisible reduction corresponds to exactly one layer in the original divisible case. By Lemma 5, we can obtain an optimal allocation for the divisible case just by reallocating items within each layer, and we can calculate the value of each layer since each layer of the reduction contains exactly one divisible-case layer.
In addition, a subdivision into roughly n² pieces is necessary. For example, suppose we have n = 2 k + 1 agents and 2 k − 1 items. The first k + 1 agents can only receive the first k items (arbitrarily), and the remaining k agents can only receive the remaining k − 1 items (arbitrarily). The optimal allocation obviously distributes the items as equally as possible: the first k + 1 agents form one layer sharing the k items evenly, and the remaining k agents form the other layer sharing the k − 1 items evenly. The difference between these two layers is k/(k+1) − (k−1)/k = 1/(k²+k). Since k = Θ ( n ) , this gap is Θ ( 1/n² ) , so pieces of size O ( 1/n² ) are needed to express such a small difference.
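The arithmetic in this lower-bound example can be double-checked with exact rational arithmetic (a small verification sketch of our own, not part of the paper):

```python
from fractions import Fraction

# Gap between the two layer values k/(k+1) and (k-1)/k from the example:
# it equals 1/(k^2 + k), which is Theta(1/n^2) for n = 2k + 1 agents.
def layer_gap(k):
    return Fraction(k, k + 1) - Fraction(k - 1, k)

for k in range(1, 50):
    assert layer_gap(k) == Fraction(1, k * k + k)
```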
The obtained algorithm deals with m n² pieces and n agents. Therefore, it has a time complexity of O ( m² n⁵ ) , which is still better than the linear programming procedure (explained below): the latter solves O ( n² ) linear programs with O ( m n ) variables each, for a worst-case time complexity of O ( m^3.5 n^5.5 ) .
Example 2. 
A numerical illustration of Algorithm 2.
Consider an instance with n = 5 agents and m = 3 divisible items, where the binary valuation matrix B (rows indexed by agents, columns by items) is:
B =
( 1 0 0
  1 0 0
  1 1 0
  1 1 1
  1 0 1 ) .
Our combinatorial algorithm proceeds as follows:
Step 1: Split each item into n² = 25 identical, indivisible pieces. We now have 3 × 25 = 75 pieces in total.
Step 2: Compute a stable allocation for these 75 indivisible pieces using Algorithm 1. One possible optimal allocation yields the following piece counts:
  • Agent 1 receives 12 pieces from item 1.
  • Agent 2 receives 13 pieces from item 1.
  • Agent 3 receives 16 pieces from item 2.
  • Agent 4 receives 9 and 8 pieces from items 2 and 3, respectively.
  • Agent 5 receives 17 pieces from item 3.
The final profile (piece counts) is ( h 1 , h 2 , h 3 , h 4 , h 5 ) = ( 12 , 13 , 16 , 17 , 17 ) . Converting back to the original item scale (dividing by 25), we obtain the profile ( 12/25 , 13/25 , 16/25 , 17/25 , 17/25 ) .
Step 3: Compute the layer partition.
  • Agents 1 and 2 are in layer d , and item 1 is in Layer d , where d = 13/25 .
  • Agents 3, 4 and 5 are in layer d′ , and items 2 and 3 are in Layer d′ , where d′ = 17/25 .
Step 4: Recompose the fractional allocation. Agents 1 and 2 each receive value 1/2 , and agents 3, 4 and 5 each receive value 2/3 . The profile of the stable allocation is ( h 1 , h 2 , h 3 , h 4 , h 5 ) = ( 1/2 , 1/2 , 2/3 , 2/3 , 2/3 ) .
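Example 2 can be reproduced end to end with a naive stand-in for Algorithm 1: start from an arbitrary allocation of the pieces and repeatedly apply narrowing transfers, i.e., paths in the reallocation graph that move one piece from an agent u to an agent v with h u ≥ h v + 2. The function stable_allocation_ind below is our own simplified sketch, far slower than the paper's O ( m² n ) algorithm but sufficient for this instance:

```python
from collections import deque

def stable_allocation_ind(n, items, values):
    """Naive stand-in for Algorithm 1: arbitrary initial allocation, then
    repeatedly apply narrowing transfers until the allocation is stable."""
    bundles = [set() for _ in range(n)]
    for o in items:                       # give each item to any agent valuing it
        for i in range(n):
            if values[i][o]:
                bundles[i].add(o)
                break

    def narrowing_transfer():
        h = [len(b) for b in bundles]
        for u in range(n):                # BFS over the reallocation graph
            parent = {u: None}            # agent -> (previous agent, witness item)
            queue = deque([u])
            while queue:
                a = queue.popleft()
                if a != u and h[u] >= h[a] + 2:
                    while parent[a] is not None:   # shift one item per edge
                        prev, o = parent[a]
                        bundles[prev].remove(o)
                        bundles[a].add(o)
                        a = prev
                    return True
                for o in bundles[a]:
                    for i in range(n):
                        if values[i][o] and i not in parent:
                            parent[i] = (a, o)
                            queue.append(i)
        return False

    while narrowing_transfer():           # each transfer strictly lowers sum h_i^2
        pass
    return [len(b) for b in bundles]

# Example 2: valuation matrix B (5 agents x 3 items), each item cut into 25 pieces
B = [[1, 0, 0], [1, 0, 0], [1, 1, 0], [1, 1, 1], [1, 0, 1]]
piece_vals = [[B[i][p // 25] for p in range(75)] for i in range(5)]
profile = stable_allocation_ind(5, range(75), piece_vals)
# sorted(profile) == [12, 13, 16, 17, 17]
```

Dividing the resulting piece counts by 25 yields the fractional profile of Step 2, up to a permutation within each layer.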

4.2. Finding a Stable Allocation for Divisible Items Using LP, and a Proof of Theorem 4

This subsection provides an alternative approach based on linear programming for finding a stable allocation for divisible items. This approach is more straightforward, but it is not combinatorial.
Applying Claim 1 of Theorem 3, finding a stable allocation reduces to finding an allocation with the minimum LexiMin .
First, we prove the claim that the latter further reduces to computing the “fair multi-flow” [11] in the network below:
See Figure 3. There are n source nodes s 1 , , s n in the first layer S. The second layer R consists of only one node r as a relay. The third layer I has m nodes, standing for items. The last layer T has n sink nodes t 1 , , t n . For each agent d, there is an arc ( s d , r ) of unlimited capacity. For every item i in I, there is an arc ( r , i ) of capacity one. If item i can be allocated to agent d, an arc ( i , t d ) is added with capacity one.
In the aforementioned fair multi-flow problem, we must send a flow f d from s d to t d for each d. The total flow on each edge cannot exceed its capacity, and the vector of flow values ( | f 1 | , … , | f n | ) should have the lowest LexiMin . Clearly, the solution to this problem corresponds to the optimal allocation under LexiMin (all items are fully allocated, since the multi-flow is LexiMin optimal); therefore, the claim holds.
Then, recall the result of [11], which states that the fair multi-flow problem can be reduced to a polynomial number of linear programming problems and can thus be solved in polynomial time.
Proof of Theorem 4. 
Recall that a stable allocation can be found by computing a fair multi-flow; the flow-value vector ( | f 1 | , … , | f n | ) is unique for the multi-flow problem, according to [11]. Therefore, the profile of the optimal allocation is also unique. □

5. Indivisible Items and Agents with 0/1-Sub Valuations

We now move on to the scenario of IND-SUB.
The 0/1-sub valuation function is closely related to matroid theory [19]. A matroid is a pair ( E , I ) , where E is a finite set (called the ground set) and I is a family of subsets of E (called the independent sets).
The independent sets satisfy the following three axioms:
(I1)
∅ ∈ I ;
(I2)
If Y ∈ I and X ⊆ Y , then X ∈ I ;
(I3)
If X , Y ∈ I and | X | < | Y | , then there exists y ∈ Y ∖ X such that X ∪ { y } ∈ I .
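On small ground sets, the three axioms can be checked by exhaustive enumeration. The helper below is our own toy sketch (not from the paper); the family I 2 is essentially the clean-bundle family of agent 2 used in Remark 6 (with the empty set included), a partition matroid picking at most one of { a , b } and at most one of { c , d }:

```python
from itertools import combinations

def is_matroid(family):
    """Brute-force check of axioms (I1)-(I3) for a family of small sets."""
    fam = {frozenset(s) for s in family}
    if frozenset() not in fam:                         # (I1)
        return False
    for s in fam:                                      # (I2) hereditary
        for r in range(len(s)):
            if any(frozenset(t) not in fam for t in combinations(s, r)):
                return False
    for x in fam:                                      # (I3) exchange
        for y in fam:
            if len(x) < len(y) and not any(x | {e} in fam for e in y - x):
                return False
    return True

I2 = [set(), {'a'}, {'b'}, {'c'}, {'d'},
      {'a', 'c'}, {'a', 'd'}, {'b', 'c'}, {'b', 'd'}]
```

For instance, is_matroid(I2) holds, while dropping a singleton from a family breaks the hereditary axiom (I2).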
If an agent has a 0/1-sub valuation on items, then the set of its clean bundles forms the set of independent sets of a matroid (proved in Benabbou et al. [7]). Benabbou et al. [7] further showed that under 0/1-sub valuations, the set of max-USW allocations optimal under any SPD criterion coincides with the set of LexiMin allocations. This result generalizes Claim 1 of Theorem 1 (for 0/1-add valuations). In addition, Babaioff et al. [10] showed that under 0/1-sub valuations, an allocation that optimizes NSW ¬ can be found in polynomial time.
To the best of our knowledge, however, it was not known to what extent p ( χ ) and p ( χ′ ) can differ for different optimal allocations χ and χ′ . We answer this question with the following theorem.
Let SS denote the set of (max-USW and clean) allocations that are optimal under any SPD criterion.
Theorem 5. 
For χ , χ′ ∈ SS , it holds that | h i − h′ i | ≤ 1 for each agent i ∈ [ n ] , where ( h 1 , … , h n ) = p ( χ ) and ( h′ 1 , … , h′ n ) = p ( χ′ ) . In other words, the Chebyshev distance satisfies D ( p ( χ ) , p ( χ′ ) ) ≤ 1 .
Proof. 
Assume (i) χ , χ′ ∈ SS and (ii) D ( p ( χ ) , p ( χ′ ) ) ≥ 2 . If there are multiple such pairs of allocations, take a pair ( χ , χ′ ) with minimum symmetric difference ∑ i ∈ [ n ] | χ i △ χ′ i | .
Without loss of generality, assume that
h 1 ≤ h 2 ≤ … ≤ h n and h q ≥ h′ q + 2 .
Assume that h′ j 1 ≤ h′ j 2 ≤ … ≤ h′ j n .
Notice that p ( χ ) ≠ p ( χ′ ) . Take the minimum index i satisfying h i ≠ h′ i . Clearly, for all k ∈ [ i − 1 ] we have
h k = h′ k . (4)
In fact, for i, it must hold that
h i < h′ i . (5)
Recall that the smaller LexiMin ( p ) is, the better the vector p under LexiMin . Indeed, if h i > h′ i , then together with (4),
LexiMin ( ( h 1 , … , h i ) ) > LexiMin ( ( h′ 1 , … , h′ i ) ) .
Moreover,
LexiMin ( ( h′ 1 , … , h′ i ) ) ≥ LexiMin ( ( h′ j 1 , … , h′ j i ) ) ,
since h′ j 1 , … , h′ j i are the smallest i elements in p ( χ′ ) . Together, we have LexiMin ( ( h 1 , … , h i ) ) > LexiMin ( ( h′ j 1 , … , h′ j i ) ) . By comparing the smallest i elements of p ( χ ) and p ( χ′ ) , we see LexiMin ( p ( χ ) ) > LexiMin ( p ( χ′ ) ) , contradicting the assumption that χ and χ′ are both LexiMin optimal.
By the definition of i and the assumption h q ≥ h′ q + 2 , it is easy to see q > i and hence h q ≥ h i . Further, we claim
h q ≥ h i + 2 . (6)
It reduces to proving h′ q ≥ h i . Since χ and χ′ are both LexiMin optimal, the two multisets { h 1 , … , h n } and { h′ 1 , … , h′ n } are equal. By (4), h k = h′ k for all k ∈ [ i − 1 ] . Together, the multiset { h i , … , h n } equals { h′ i , … , h′ n } . Further, by the assumption h 1 ≤ … ≤ h n , the elements in { h i , … , h n } are not smaller than h i . Recalling q > i , we have h′ q ≥ h i , and hence h q ≥ h′ q + 2 ≥ h i + 2 .
Recall that the family of clean bundles I j = { S ⊆ [ m ] ∣ v j ( S ) = | S | } for j ∈ [ n ] forms the family of independent sets of a matroid [7]. By the matroid axiom (I3) and inequality (5), there exists an item o 1 ∈ χ′ i ∖ χ i making v i ( χ i ∪ { o 1 } ) = v i ( χ i ) + 1 = h i + 1 . Moreover, o 1 is allocated to some agent i 1 ≠ i under allocation χ; otherwise, o 1 would be allocated to no one and could be allocated to agent i, violating that χ is max-USW. Consider the following three cases:
1.
Suppose h i 1 ≥ h i + 2 . Then transferring o 1 from i 1 to i in χ decreases LexiMin ( p ( χ ) ) , contradicting that χ is LexiMin optimal.
2.
Suppose h i 1 = h i + 1 . We note that i 1 ≠ q , since h q ≥ h i + 2 (inequality (6)). If we transfer o 1 from i 1 to i in χ, then LexiMin ( p ( χ ) ) and h q are unchanged, which means ( χ , χ′ ) still satisfies the two conditions (i) χ , χ′ ∈ SS and (ii) D ( p ( χ ) , p ( χ′ ) ) ≥ 2 , but ∑ i ∈ [ n ] | χ i △ χ′ i | decreases, a contradiction.
3.
Suppose h i 1 ≤ h i .
We first show h i 1 ≤ h′ i 1 . This clearly holds by (4) if i 1 < i . When i 1 > i , since h 1 ≤ … ≤ h n , we have h i 1 ≥ h i ; together with the assumption h i 1 ≤ h i in this case, we have h i 1 = h i .
Suppose to the contrary that h i 1 > h′ i 1 , which means h i > h′ i 1 . Applying (4),
LexiMin ( ( h 1 , … , h i − 1 , h i ) ) > LexiMin ( ( h′ 1 , … , h′ i − 1 , h′ i 1 ) ) .
Moreover,
LexiMin ( ( h′ 1 , … , h′ i − 1 , h′ i 1 ) ) ≥ LexiMin ( ( h′ j 1 , … , h′ j i ) ) ,
since h′ j 1 , … , h′ j i are the smallest i elements in p ( χ′ ) . Together, LexiMin ( ( h 1 , … , h i ) ) > LexiMin ( ( h′ j 1 , … , h′ j i ) ) ; by comparing the smallest i elements of p ( χ ) and p ( χ′ ) , this yields LexiMin ( p ( χ ) ) > LexiMin ( p ( χ′ ) ) , a contradiction.
With h i 1 ≤ h′ i 1 (i.e., v i 1 ( χ i 1 ) ≤ v i 1 ( χ′ i 1 ) ) in hand, since χ i 1 is clean, we have v i 1 ( χ i 1 ∖ { o 1 } ) < v i 1 ( χ′ i 1 ) . Further, since χ i 1 ∖ { o 1 } and χ′ i 1 are clean (i.e., independent sets of a matroid), there exists an item o 2 ∈ χ′ i 1 ∖ ( χ i 1 ∖ { o 1 } ) such that v i 1 ( ( χ i 1 ∖ { o 1 } ) ∪ { o 2 } ) = v i 1 ( χ i 1 ) = h i 1 . Moreover, o 2 is allocated to some agent i 2 ≠ i 1 under χ, as otherwise o 2 would be allocated to no one, and we could transfer o 1 from i 1 to i and allocate o 2 to i 1 in χ, violating that χ is max-USW. We note that χ′ i 1 ∖ ( χ i 1 ∖ { o 1 } ) = χ′ i 1 ∖ χ i 1 and o 2 ≠ o 1 , because o 1 ∉ χ′ i 1 (recall that o 1 ∈ χ′ i ∖ χ i ) and o 2 ∈ χ′ i 1 .
Repeating the same argument and letting i 0 = i , we obtain a sequence of items and agents ( i 0 , o 1 , i 1 , … , o t , i t ) . Let χ ( k ) denote the allocation obtained from χ by transferring o l from i l to i l − 1 for all l ∈ [ k ] , and let χ i ( k ) denote the bundle of agent i in χ ( k ) . The sequence satisfies o k ∈ χ′ i k − 1 ∖ χ i k − 1 ( k − 1 ) for k ≥ 1 .
See Figure 4 for an illustration of the sequence. We note that the same item o l does not appear again, since for k ≥ l we have o l ∈ χ i l − 1 ( k ) ; as a result, o l ∉ χ′ i k ∖ χ i k ( k ) . Thus, the sequence must terminate when we reach an agent i t with h i t ≥ h i + 2 or h i t = h i + 1 for the first time. If h i t ≥ h i + 2 , then after transferring items along the sequence, χ is not LexiMin optimal, a contradiction. If h i t = h i + 1 , we note that agent q (recall that h q ≥ h′ q + 2 ) is not in the sequence, since h q ≥ h i + 2 (inequality (6)). After transferring items along the sequence, LexiMin ( p ( χ ) ) and h q are unchanged, which means ( χ , χ′ ) still satisfies the two conditions (i) χ , χ′ ∈ SS and (ii) D ( p ( χ ) , p ( χ′ ) ) ≥ 2 , but ∑ i ∈ [ n ] | χ i △ χ′ i | decreases, a contradiction.
To sum up, supposing, contrary to Theorem 5, that there is a pair of allocations ( χ , χ′ ) satisfying (i) χ , χ′ ∈ SS and (ii) D ( p ( χ ) , p ( χ′ ) ) ≥ 2 leads to a contradiction in every case. Thus, for χ , χ′ ∈ SS , it holds that D ( p ( χ ) , p ( χ′ ) ) ≤ 1 . □
Remark 6. 
Our proof of Theorem 5 is similar to that of Lemma 3.12 in [7], which is the 0/1-sub valuation version of Lemma 1 (implying that an allocation non-optimal under LexiMin is non-optimal under any SPD criterion). There is a minor flaw in their proof, which we have fixed in our proof of Theorem 5.
In the proof of Lemma 3.12 in [7], for the sequence of items and agents ( i 0 , o 1 , i 1 , … , o t , i t ) , they claim that no agent appears twice. Their argument begins by asserting that, if the same agent appeared again, the allocation χ would still be clean after transferring items along the cycle (a subsequence of the sequence whose starting and ending elements are the same agent). However, this claim and its proof are incorrect. See the example in the left figure of Figure 4, where the sequence is ( 3 , a , 2 , b , 1 , c , 2 , d , 4 ) and agent 2 appears twice. After transferring items along the cycle ( 2 , b , 1 , c , 2 ) (which is equivalent to swapping items b and c between agents 2 and 1), the allocation may not be clean if { a , b } is not a clean bundle of agent 2. Indeed, the family of clean bundles of agent 2 may be I 2 = { ∅ , { a , c } , { a , d } , { b , c } , { b , d } , { a } , { b } , { c } , { d } } , for which { a , b } ∉ I 2 .
We note that Lemma 3.12 in [7] still holds, and the minor flaw in its proof can be corrected by arguing that no item appears twice, as in our proof of Theorem 5.
The layer structures of the IND and DIV scenarios enjoy the following property: within each layer, the number of items allocated to any two agents differs by at most a fixed, small constant that is independent of the specific instance, namely at most one in IND and exactly zero in DIV. This uniformity is enabled by the additive nature of the valuations, which allows items to be freely transferred between agents without affecting their marginal gains on other items.
In the IND-SUB scenario, however, this regular layer structure breaks down. While the overall profiles of different optimal allocations remain close (Chebyshev distance ≤ 1 , as shown in Theorem 5), the combinatorial rigidity imposed by submodular (matroid) constraints prevents the formation of stable, invariant layers in which allocations differ only by a bounded constant. A simple counterexample is an instance with two agents and ten items, where the values of any subset S of items to agents 1 and 2 are min ( x , | S | ) and min ( 10 − x , | S | ) , respectively, and x is a parameter of the instance. The ten items can be freely allocated to the two agents, and an allocation is stable as long as agent 1 receives x items. Thus, the two agents are in the same layer. However, the difference between the numbers of items the two agents receive changes with the instance, rather than being a fixed constant across the whole IND-SUB scenario.

6. Summary

For binary valuations, it is known that the set of optimal allocations is independent of the exact criterion, as long as the criterion is SPD and symmetric. A complete proof of this result was previously scattered across the literature and is consolidated in this paper. Furthermore, this paper establishes the consistency among the SPD optimal allocations: their profiles are close to each other under the Chebyshev distance, which holds even for 0/1-sub valuations. The proof is nontrivial.
For 0/1-add valuations, we introduce a layer partition of items and agents. Together with the idea of dividing each item into n² identical pieces, we obtain the first combinatorial algorithm for computing an optimal (i.e., stable) allocation of divisible items, which has a lower complexity than the LP approach.

Author Contributions

Conceptualization, K.J.; methodology, T.Z. and R.L.; validation, R.L. and S.C.; formal analysis, T.Z., K.J. and R.L.; investigation, T.Z. and S.C.; resources, K.J.; writing—original draft preparation, K.J. and T.Z.; writing—review and editing, K.J.; supervision, K.J.; project administration, K.J.; funding acquisition, K.J. All authors have read and agreed to the published version of the manuscript.

Funding

This research is supported by the Department of Science and Technology of Guangdong Province (Project No. 2021QN02X239), National Natural Science Foundation of China (No. 62002394) and the Shenzhen Science and Technology Program (Grant No. 202206193000001, 20220817175048002).

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Conflicts of Interest

The authors declare no conflicts of interest. The funders had no role in the design of this study, in the collection, analyses, or interpretation of data, in the writing of the manuscript, or in the decision to publish the results.

Abbreviations

The following abbreviations are used in this manuscript:
SPD: strongly Pigou–Dalton
MNW: Maximum Nash Welfare
USW: utilitarian social welfare

Appendix A. All Criteria in Example 1 Are SPD

Claim A1. 
Minimizing EnvySum ( p ) is equivalent to minimizing GiniIndex ( p ) .
Proof. 
Assume, without loss of generality, that h 1 ≤ … ≤ h n . Then
EnvySum ( p ) = ∑ h i < h j ( h j − h i ) = ∑ i < j ( h j − h i ) = ∑ j h j ( j − 1 ) − ∑ i h i ( n − i ) = ∑ i h i ( 2 i − 1 − n ) = 2 ∑ i i · h i − ( 1 + n ) ∑ i h i = 2 GiniIndex ( p ) − ( 1 + n ) m . □
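The identity can also be verified numerically. The helper names below are our own, and GiniIndex is assumed to be ∑ i i · h i over the nondecreasingly sorted profile, consistent with the derivation above:

```python
from itertools import product

def envy_sum(h):
    # total envy: sum of positive income gaps over all ordered pairs
    return sum(hj - hi for hi in h for hj in h if hj > hi)

def gini_index(h):
    # sum_i i * h_i over the nondecreasingly sorted profile (1-indexed)
    return sum((i + 1) * x for i, x in enumerate(sorted(h)))

# EnvySum(p) = 2 * GiniIndex(p) - (1 + n) * m for every small profile
for h in product(range(4), repeat=4):
    n, m = 4, sum(h)
    assert envy_sum(h) == 2 * gini_index(h) - (1 + n) * m
```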
Lemma A1. 
Consider two profiles p = ( p 1 , … , p n ) and q = ( q 1 , … , q n ) , where q is more balanced than p (namely, there are j , k ∈ [ n ] such that p j > q j ≥ q k > p k and q i = p i for i ∈ [ n ] ∖ { j , k } ).
1.
NSW ¬ ( q ) < NSW ¬ ( p ) .
2.
Potential Φ ( q ) < Potential Φ ( p ) , where Φ ( x ) is a strictly convex function of x. (Hence the criteria ∑ i Φ ( h i ) with strictly convex terms Φ ( h i ) , including Entropy ¬ ( p ) , Congestion ( p ) , LexiMax ( p ) , and LexiMin ( p ) , are all SPD.)
3.
EnvySum ( q ) < EnvySum ( p ) .
(Hence GiniIndex ( q ) < GiniIndex ( p ) due to Claim A1.)
Therefore all criteria mentioned in Example 1 are SPD.
Proof. 
By definition, we may assume q j = p j − Δ and q k = p k + Δ , where Δ > 0 . Then we have p j − p k ≥ 2 Δ .
1.
NSW ¬ ( q ) − NSW ¬ ( p ) = ( p j p k − q j q k ) ∏ i ≠ j , k p i = ( p j p k − ( p j − Δ ) ( p k + Δ ) ) ∏ i ≠ j , k p i = Δ ( Δ − ( p j − p k ) ) ∏ i ≠ j , k p i ≤ Δ ( Δ − 2 Δ ) ∏ i ≠ j , k p i = ( − Δ² ) ∏ i ≠ j , k p i < 0 .
Thus we have NSW ¬ ( q ) < NSW ¬ ( p ) .
2.
Potential Φ ( q ) − Potential Φ ( p ) = ( Φ ( q k ) − Φ ( p k ) ) + ( Φ ( q j ) − Φ ( p j ) ) = ( Φ ( p k + Δ ) − Φ ( p k ) ) − ( Φ ( q j + Δ ) − Φ ( q j ) ) .
Since Φ ( x ) is a strictly convex function of x, and q j > p k , we have
( Φ ( p k + Δ ) − Φ ( p k ) ) < ( Φ ( q j + Δ ) − Φ ( q j ) ) .
Thus,
Potential Φ ( q ) < Potential Φ ( p ) .
3. Let n 1 , n 2 , n 3 , n 4 and n 5 be the numbers of agents i (other than j , k) with p i ≤ p k , p k < p i ≤ q k , q k < p i < q j , q j ≤ p i < p j , and p i ≥ p j , respectively.
Let U ( p , i ) = ∑ j : p i < p j ( p j − p i ) denote the unhappiness of agent i. Then we have
EnvySum ( p ) = ∑ i U ( p , i ) ,
EnvySum ( q ) − EnvySum ( p ) = ∑ i ( U ( q , i ) − U ( p , i ) ) .
Observing the change from p to q , we derive the following:
(i)
For those agents i with p i ≤ p k or p i ≥ p j , the value U ( p , i ) = U ( q , i ) remains unchanged.
(ii)
For an agent i with p k < p i ≤ q k , its U ( p , i ) changes by
U ( q , i ) − U ( p , i ) = ( q k − p i + q j − p i ) − ( p j − p i ) = q k − p i − Δ ≤ 0 .
(iii)
For those agents i with q k < p i < q j , the total change of their unhappiness is
Δ 3 = ∑ i : q k < p i < q j ( U ( q , i ) − U ( p , i ) ) = − n 3 Δ .
(iv)
For an agent i with q j ≤ p i < p j , its unhappiness changes by
U ( q , i ) − U ( p , i ) = − ( p j − p i ) ≤ 0 .
(v)
For j, its unhappiness changes by
Δ j = U ( q , j ) − U ( p , j ) ≤ ( n 4 + n 5 ) Δ ,
because of the decrease from p j to q j .
(vi)
For k, its unhappiness changes by
Δ k = U ( q , k ) − U ( p , k ) ≤ − ( n 3 + n 4 + n 5 ) Δ − 2 Δ .
The − ( n 3 + n 4 + n 5 ) Δ part is due to the increase from p k to q k , and the remaining − 2 Δ part is due to the increase from p k to q k together with the decrease from p j to q j , both of which shrink the envy of k toward j.
Since cases (i), (ii) and (iv) contribute non-positive terms to EnvySum ( q ) − EnvySum ( p ) , we may consider only (iii), (v) and (vi):
EnvySum ( q ) − EnvySum ( p ) = ∑ i ( U ( q , i ) − U ( p , i ) ) ≤ Δ 3 + Δ j + Δ k ≤ − n 3 Δ + ( n 4 + n 5 ) Δ − ( n 3 + n 4 + n 5 ) Δ − 2 Δ = − 2 n 3 Δ − 2 Δ < 0 .
Thus we have EnvySum ( q ) − EnvySum ( p ) < 0 . □

Appendix B. Inconsistency for the Ternary or Item-Dependent Valuations

When the valuations of agents are not binary but ternary ( v i ( { o j } ) ∈ { 0 , 1 , x } ) or item-dependent ( v i ( { o j } ) ∈ { 0 , p j } ), the optimal allocations under different SPD criteria may differ. In other words, the consistency of optimums among all SPD criteria in the case of binary valuations does not generalize to ternary or item-dependent valuations. We show an example in the following.
Suppose there are four agents and ten items. The matrix v below represents the valuations of the agents: for agent i, the value of item j is v i , j . In this example, the valuations can be seen as ternary with x = 3 , or item-dependent with p 1 = … = p 4 = 3 and p 5 = … = p 10 = 1 .
v =
( 3 0 0 3 0 0 0 0 0 0
  0 3 0 0 1 1 1 1 1 1
  0 0 3 0 1 1 1 1 1 1
  0 0 0 3 1 1 1 1 1 1 )
In this example, all allocations can be divided into two classes depending on whether agent 1 or agent 4 gets item 4. It is not hard to verify:
One optimal allocation for minimizing LexiMin admits profile ( 6 , 4 , 4 , 4 ) : the bundles allocated to the agents in one LexiMin optimal allocation are { o 1 , o 4 } , { o 2 , o 5 } , { o 3 , o 6 } and { o 7 , o 8 , o 9 , o 10 } .
One optimal allocation for minimizing LexiMax admits profile ( 3 , 5 , 5 , 5 ) : the bundles allocated to the agents in one LexiMax optimal allocation are { o 1 } , { o 2 , o 5 , o 6 } , { o 3 , o 7 , o 8 } and { o 4 , o 9 , o 10 } .
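This inconsistency can be confirmed by brute force: since each item may only go to an agent who values it, there are merely 2 × 3⁶ = 1458 feasible allocations to enumerate. The following self-contained check is our own verification code, not part of the paper:

```python
from itertools import product

# Appendix B instance: 4 agents, 10 items
v = [
    [3, 0, 0, 3, 0, 0, 0, 0, 0, 0],
    [0, 3, 0, 0, 1, 1, 1, 1, 1, 1],
    [0, 0, 3, 0, 1, 1, 1, 1, 1, 1],
    [0, 0, 0, 3, 1, 1, 1, 1, 1, 1],
]
n, m = 4, 10
allowed = [[i for i in range(n) if v[i][j] > 0] for j in range(m)]
profiles = set()
for assign in product(*allowed):          # item j goes to agent assign[j]
    h = [0] * n
    for j, i in enumerate(assign):
        h[i] += v[i][j]
    profiles.add(tuple(sorted(h)))
best_leximin = max(profiles)                    # lexicographically largest ascending
best_leximax = min(p[::-1] for p in profiles)   # lexicographically smallest descending
# best_leximin == (4, 4, 4, 6); best_leximax == (5, 5, 5, 3)
```

The two extreme profiles differ, so no single allocation is optimal under both criteria.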

Appendix C. Inconsistency for the Mixed Case

When some items are divisible, and others are indivisible, the optimal allocations under different SPD criteria may differ. In other words, the consistency of optimums among all the SPD criteria for the divisible and indivisible cases, respectively, does not generalize to the mixed case. We show an example in the following. The agents are assumed to have 0/1-add valuations.
Suppose there are four agents and six items. The first four items are indivisible, whereas items 5 and 6 are divisible. The matrix v below represents the valuations of the agents: for agent i, the value of item j is v i , j :
v =
( 1 0 0 1 0 0
  0 1 0 0 1 1
  0 0 1 0 1 1
  0 0 0 1 1 1 ) .
We use a matrix b to represent an allocation, where b i , j indicates the fraction of item j allocated to agent i. Note that b i , j = 0 if v i , j = 0 , and the jth column of b contains a single entry equal to one if item j is indivisible. The sum of each column of the matrix equals one, and the sum of each row equals the amount obtained by the corresponding agent.
In this example, all allocations can be divided into two classes depending on whether item 4 is assigned to agent 1 or agent 4. It is not hard to verify:
The optimal allocation for minimizing LexiMin is shown in b_{LexiMin} below, admitting profile (2, 4/3, 4/3, 4/3).
The optimal allocation for minimizing LexiMax is shown in b_{LexiMax} below, admitting profile (1, 5/3, 5/3, 5/3).
b_{\mathrm{LexiMin}} = \begin{pmatrix}
1 & 0 & 0 & 1 & 0 & 0 \\
0 & 1 & 0 & 0 & 1/3 & 0 \\
0 & 0 & 1 & 0 & 0 & 1/3 \\
0 & 0 & 0 & 0 & 2/3 & 2/3
\end{pmatrix}, \qquad
b_{\mathrm{LexiMax}} = \begin{pmatrix}
1 & 0 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 1/3 & 1/3 \\
0 & 0 & 1 & 0 & 1/3 & 1/3 \\
0 & 0 & 0 & 1 & 1/3 & 1/3
\end{pmatrix}.
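As a sanity check (not part of the paper), the two profiles can be recovered as the row sums of the entrywise product of v and b; the Python sketch below uses exact rational arithmetic for the thirds and also lets one confirm that every column of each allocation matrix sums to one.

```python
from fractions import Fraction as F

# Valuation matrix of the mixed-case example (items 1-4 indivisible, 5-6 divisible).
v = [
    [1, 0, 0, 1, 0, 0],
    [0, 1, 0, 0, 1, 1],
    [0, 0, 1, 0, 1, 1],
    [0, 0, 0, 1, 1, 1],
]

# Allocation matrices from the example: b[i][j] is the fraction of item j given to agent i.
b_leximin = [
    [1, 0, 0, 1, 0,       0],
    [0, 1, 0, 0, F(1, 3), 0],
    [0, 0, 1, 0, 0,       F(1, 3)],
    [0, 0, 0, 0, F(2, 3), F(2, 3)],
]
b_leximax = [
    [1, 0, 0, 0, 0,       0],
    [0, 1, 0, 0, F(1, 3), F(1, 3)],
    [0, 0, 1, 0, F(1, 3), F(1, 3)],
    [0, 0, 0, 1, F(1, 3), F(1, 3)],
]

def profile(b):
    """Row sums of the entrywise product v * b, i.e., each agent's realized value."""
    return [sum(vij * bij for vij, bij in zip(vrow, brow))
            for vrow, brow in zip(v, b)]

print([str(x) for x in profile(b_leximin)])  # ['2', '4/3', '4/3', '4/3']
print([str(x) for x in profile(b_leximax)])  # ['1', '5/3', '5/3', '5/3']
```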

References

  1. Coulter, P.B. Measuring Inequality: A Methodological Handbook, 1st ed.; Routledge: London, UK, 1989. [Google Scholar] [CrossRef]
  2. Darmann, A.; Schauer, J. Maximizing Nash product social welfare in allocating indivisible goods. Eur. J. Oper. Res. 2015, 247, 548–559. [Google Scholar] [CrossRef]
  3. Atkinson, A.B.; Bourguignon, F. Introduction: Income Distribution Today. In Handbook of Income Distribution; Atkinson, A.B., Bourguignon, F., Eds.; Elsevier: Amsterdam, The Netherlands, 2015; Volume 2, pp. xvii–lxiv. [Google Scholar] [CrossRef]
  4. Barman, S.; Krishnamurthy, S.K.; Vaish, R. Greedy Algorithms for Maximizing Nash Social Welfare. In Proceedings of the 17th International Conference on Autonomous Agents and MultiAgent Systems, Stockholm, Sweden, 10–15 July 2018; pp. 7–13. [Google Scholar]
  5. Kleinberg, J.; Rabani, Y.; Tardos, É. Fairness in routing and load balancing. In Proceedings of the 40th Annual Symposium on Foundations of Computer Science (Cat. No. 99CB37039), New York, NY, USA, 17–19 October 1999; IEEE: Piscataway, NJ, USA, 1999; pp. 568–578. [Google Scholar]
  6. Aziz, H.; Rey, S. Almost group envy-free allocation of indivisible goods and chores. arXiv 2021, arXiv:1907.09279. [Google Scholar]
  7. Benabbou, N.; Chakraborty, M.; Igarashi, A.; Zick, Y. Finding Fair and Efficient Allocations for Matroid Rank Valuations. ACM Trans. Econ. Comput. 2021, 9, 1–41. [Google Scholar] [CrossRef]
  8. Moulin, H. Axioms of Cooperative Decision Making; Cambridge University Press: Cambridge, UK, 1991; Volume 15. [Google Scholar]
  9. Halpern, D.; Procaccia, A.D.; Psomas, A.; Shah, N. Fair division with binary valuations: One rule to rule them all. In Proceedings of the Web and Internet Economics: 16th International Conference, WINE 2020, Beijing, China, 7–11 December 2020; Proceedings 16; Springer: Berlin/Heidelberg, Germany, 2020; pp. 370–383. [Google Scholar]
  10. Babaioff, M.; Ezra, T.; Feige, U. Fair and truthful mechanisms for dichotomous valuations. In Proceedings of the AAAI Conference on Artificial Intelligence, Virtually, 19–21 May 2021; Volume 35, pp. 5119–5126. [Google Scholar]
  11. Nace, D.; Doan, L.N. A Polynomial Approach to the Fair Multi-Flow Problem; University of Technology of Compiègne: Compiègne, France, 2002. [Google Scholar]
  12. Lin, Y.; Li, W. Parallel machine scheduling of machine-dependent jobs with unit-length. Eur. J. Oper. Res. 2004, 156, 261–266. [Google Scholar] [CrossRef]
  13. Ibaraki, T.; Katoh, N. Resource Allocation Problems: Algorithmic Approaches; MIT Press: Cambridge, MA, USA, 1988. [Google Scholar]
  14. Michaeli, I.; Pollatschek, M.A. On Some Nonlinear Knapsack Problems. Ann. Discret. Math. 1977, 1, 403–414. [Google Scholar]
  15. Golovin, D. Max-Min Fair Allocation of Indivisible Goods; School of Computer Science, Carnegie Mellon University: Pittsburgh, PA, USA, 2005. [Google Scholar]
  16. Bezáková, I.; Dani, V. Allocating indivisible goods. SIGecom Exch. 2005, 5, 11–18. [Google Scholar] [CrossRef]
  17. Bansal, N.; Sviridenko, M. The santa claus problem. In Proceedings of the Thirty-Eighth Annual ACM Symposium on Theory of Computing, Seattle, WA, USA, 21–23 May 2006; pp. 31–40. [Google Scholar]
  18. Edmonds, J.; Karp, R.M. Theoretical Improvements in Algorithmic Efficiency for Network Flow Problems. J. ACM 1972, 19, 248–264. [Google Scholar] [CrossRef]
  19. Oxley, J. Matroid Theory; Oxford University Press: Oxford, UK, 2011. [Google Scholar] [CrossRef]
Figure 1. Reduction to minimum-cost flow problem.
Figure 2. (a) Illustration of p_d, q_d, r_d. The arrows indicate that every agent in p_{d+1} has a transfer to some agent in q_d, and for each agent in q_d, there exists a transfer from some agent in p_{d+1}. (b) Illustration of s_d and r_d.
Figure 3. Reduction to fair multi-flow problem. The reduction constructs a directed acyclic network G = (V, E) with four layers. The key insight is to encode the LexiMin objective as a fair multi-flow problem, where the amount of flow received by each sink directly corresponds to the valuation of the respective agent.
Figure 4. Illustration of transferring items along the sequence. Some items are denoted by a, …, g. Some agents and their bundles are denoted by 1, …, 4 and A_1, …, A_4, respectively. In this example, the sequence (i_0, o_1, i_1, …, o_t, i_t) is (3, a, 2, b, 1, c, 2, d, 4).
Table 1. A summary of the results.
| Scenario | SPD Criteria Consistency | Running Time | Cheby. Dist. | Layer Structure |
|----------|--------------------------|--------------|--------------|-----------------|
| IND      | Yes                      | O(m^2 n)     | 1            | Yes             |
| DIV      | Yes                      | O(m^2 n^5)   | 0            | Yes             |
| IND-SUB  | Yes [7]                  | poly [10]    | 1            | No              |

Share and Cite

Zhu, T.; Jin, K.; Luo, R.; Cao, S. Optimal Allocations Under Strongly Pigou–Dalton Criteria: Hidden Layer Structure and Efficient Combinatorial Approach. Mathematics 2026, 14, 658. https://doi.org/10.3390/math14040658
