Article

A General Statistical Physics Framework for Assignment Problems

1 Department of Computer Science, University of California, Davis, CA 95616, USA
2 CNRS, CEA, Institut de Physique Théorique, Université Paris-Saclay, 91191 Gif-sur-Yvette, France
* Author to whom correspondence should be addressed.
Algorithms 2024, 17(5), 212; https://doi.org/10.3390/a17050212
Submission received: 30 April 2024 / Revised: 9 May 2024 / Accepted: 11 May 2024 / Published: 14 May 2024

Abstract:
Linear assignment problems hold a pivotal role in combinatorial optimization, offering a broad spectrum of applications within the field of data sciences. They consist of assigning “agents” to “tasks” in a way that minimizes the total cost associated with the assignment. The assignment is balanced when the number of agents equals the number of tasks, with a one-to-one correspondence between agents and tasks, and it is unbalanced otherwise. Additional options and constraints may be imposed, such as allowing agents to perform multiple tasks or allowing tasks to be performed by multiple agents. In this paper, we propose a novel framework that can solve all these assignment problems employing methodologies derived from the field of statistical physics. We describe this formalism in detail and validate all its assertions. A major part of this framework is the definition of a concave effective free energy function that encapsulates the constraints of the assignment problem within a finite temperature context. We demonstrate that this free energy increases monotonically as a function of a parameter β representing the inverse of temperature; as β increases, it converges to the optimal assignment cost. Furthermore, we demonstrate that when β values are sufficiently large, the exact solution to the assignment problem can be derived by rounding the elements of the computed assignment matrix to the nearest integer. We describe a computer implementation of our framework and illustrate its application to multi-task assignment problems for which the Hungarian algorithm is not applicable.

1. Introduction

An assignment problem can be interpreted as a problem of resource allocation in which a certain number of “tasks” are performed by some “agents” so that the total cost of pairing tasks with agents is minimized. There are many ways to constrain this problem, such as imposing a one-to-one mapping (the balanced problem), searching for a specific number of assignments (the k-cardinality assignment problem), imposing constraints on the resources (the generalized assignment problem), or allowing agents to perform multiple tasks and, conversely, some tasks to be performed by more than one agent. For reviews on the variety of assignment problems and the algorithms that have been proposed to solve them, we refer the reader to Refs. [1,2,3] and references therein. Assignment problems are fundamental combinatorial optimization problems. As such, solving them is a significant concern in operational research, economics, and data sciences, among other disciplines. Assignment problems have been and continue to be a focal point of investigation for mathematicians, statisticians, computer scientists, and even physicists. In this paper, we propose a general framework that regroups all the assignment problems mentioned above. We develop an approximate solution to this problem based on statistical physics and prove that it converges efficiently to an exact solution for non-degenerate as well as degenerate assignment problems.
Let us first define the assignment problem. Let T be the set of tasks and A the set of agents, with possibly different cardinalities $|T| = N_2$ and $|A| = N_1$. If we define $C(i,j)$ as the cost of assigning task i to agent j, the assignment problem can be stated as finding a transportation matrix G between T and A whose elements are non-negative integers and that minimizes
$$U = \sum_{i \in T} \sum_{j \in A} C(i,j)\, G(i,j). \qquad (1)$$
In the special case of $N_1 = N_2$, with a one-to-one assignment of tasks to agents (the balanced assignment problem), G corresponds to a permutation of $\{1, \dots, N\}$. In the more general case, $N_1$ and $N_2$ can be different (unbalanced assignment), the number of pairings k may be imposed, with $0 < k \le \min(N_1, N_2)$ (the k-cardinality assignment problem), and/or tasks and agents may be assigned to more than one agent and more than one task, respectively. In those general cases, G remains binary but may not be a permutation matrix (in the specific case of the k-cardinality problem, for example, G is a partial permutation matrix of rank k).
As mentioned above, assignment problems are basically combinatorial optimization problems that can be solved numerically using a linear programming model or, more specifically, a linear integer programming method. Integer linear programming, however, is NP-complete in general. There are fast heuristics for such problems, such as Dantzig’s simplex method [4]. The assignment problem, however, is a specialized case of integer linear programming that can be recast and solved by an exact polynomial algorithm. For example, the most common algorithm for solving the balanced assignment problem has its origin in the work of Jacobi [5]. It was “rediscovered” 60 years later by [6] and is now dubbed the Hungarian algorithm. It is a global algorithm that iteratively identifies assignments between agents and tasks. It runs in polynomial time ($O(N^4)$ or $O(N^3)$, depending on the implementation). While it is serial by nature, there are many recent efforts to parallelize this algorithm (see, for example, [7,8,9] and references therein).
The Hungarian algorithm can be adapted to many unbalanced assignment problems. If there are $N_2$ tasks and $N_1$ agents, with $N_2 > N_1$, for example, it is possible to add $N_2 - N_1$ “dummy” agents, solve the balanced assignment problem of size $N_2$, and then only retain the assignments to the actual $N_1$ agents. If each agent can perform up to M tasks, the textbook solution is to create M copies of each agent prior to using the Hungarian algorithm. This approach, however, may not lead to the expected solution. Consider a problem with five tasks and three agents, with each agent allowed to perform up to three tasks. Creating three copies of each agent and then solving the corresponding unbalanced assignment problem may lead to one agent performing two tasks, a second agent performing three tasks, and the last agent not performing any task, possibly violating a constraint that each agent needs to perform at least one task. Several methods have been proposed to circumvent this problem, either by dividing the problem into multiple subproblems [10,11] or by using random algorithms such as ant colony algorithms [12,13]. It is unclear whether these algorithms can identify the optimal solution (they have been shown to fail in some cases; see, for example, [14]) or how they scale for large problems.
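For concreteness, the cloning construction just described can be sketched as follows; this is a minimal illustration of ours (names and matrix layout are not taken from any specific library):

```cpp
#include <vector>

// Expand an n_agents x n_tasks cost matrix for the "cloning" trick:
// each agent is copied M times, so that an algorithm that enforces at
// most one task per (cloned) agent, such as the Hungarian algorithm,
// can implicitly assign up to M tasks to each original agent.
std::vector<std::vector<double>> cloneAgents(
    const std::vector<std::vector<double>>& cost, int M) {
    std::vector<std::vector<double>> expanded;
    for (const auto& row : cost)              // rows index agents
        for (int copy = 0; copy < M; ++copy)  // M identical clones
            expanded.push_back(row);
    return expanded;
}
// Clone m of agent i is row i * M + m of the expanded matrix; after
// solving, all tasks assigned to any clone of i are regrouped under i.
```

Note that nothing in this construction can force each original agent to receive at least one task, which is precisely the failure mode discussed above.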
In this paper, we propose a radically different approach to solving a general class of assignment problems using continuous systems. Our approach is motivated by statistical physics. It is a full generalization of the method developed in a previous paper to solve balanced assignment problems [15]. In the following, we will use the generic term “multi-assignment” to represent an assignment problem with agents possibly performing multiple tasks, or with tasks assigned to multiple agents. In this paper, we aim to
  • Introduce and substantiate a continuous framework for addressing potentially multifaceted assignment problems involving multiple tasks and/or multiple agents, leveraging principles from statistical physics;
  • Demonstrate that, under generic circumstances where the multi-assignment problem possesses a unique solution, the aforementioned framework ensures convergence to that solution with arbitrarily high accuracy, in terms of both cost (referred to as energy) and assignment matrix;
  • Present a modified approach capable of identifying at least one solution for degenerate multi-assignment problems that feature multiple solutions;
  • Demonstrate that the implementation of this framework can be efficiently parallelized.
We emphasize that this formalism is not a mere adaptation but a full generalization of the framework we developed for solving balanced assignment problems [16]. In particular, the new features of this paper are as follows:
  • A method to account for the fact that each task can be assigned to a variable number of agents and, on the other hand, that each agent can be assigned a variable number of tasks. To that end, we introduce indicator functions over the sets of tasks and agents that are optimized along with the transportation plan.
  • The establishment of proofs of validity and convergence of our algorithm for those general multi-assignment problems. The main results are provided in the text, while the proofs themselves are relegated to the appendices for clarity. For the balanced assignment problem, these proofs rely heavily on the fact that the corresponding transportation matrices are permutation matrices, which are the extreme points of the well-characterized convex set of doubly stochastic matrices (according to the Birkhoff–von Neumann theorem [17,18]). For more general multi-assignment problems, the transportation matrices also belong to a convex set, with properties akin to the Birkhoff–von Neumann theorem (these properties are discussed in Appendix A). The use of these properties to derive the convergence of our algorithm for solving multi-assignment problems is new.
The paper is organized as follows. In Sections 2–4, we describe in detail the general framework we propose for solving multi-assignment problems. Proofs of all important properties of this framework are provided in the appendices. Section 5 briefly describes the implementation of the method in a C++ program, Matching V1.0. In Section 6, we present some applications, along with a comparison to current algorithms.

2. A General Formulation of the Assignment Problem

We consider two sets of points, $S_1$ and $S_2$, with cardinalities $N_1$ and $N_2$, respectively. We represent the cost of transportation between $S_1$ and $S_2$ as a matrix C whose elements $C(i,j)$ are positive for all $(i,j) \in [1,N_1] \times [1,N_2]$. We set the number of assignments between points in $S_1$ and $S_2$ to be a strictly positive integer k, a constant given as input to the problem. A point i is assigned to $n_1(i)$ points in $S_2$, with $n_1(i)$ belonging to $[p_{min}(i), \dots, p_{max}(i)]$, where $p_{min}(i)$ and $p_{max}(i)$ are non-negative integers satisfying $0 \le p_{min}(i) \le p_{max}(i) \le N_2$. Similarly, a point j is assigned to $n_2(j)$ points in $S_1$, with $n_2(j)$ belonging to $[q_{min}(j), \dots, q_{max}(j)]$, where $q_{min}(j)$ and $q_{max}(j)$ are non-negative integers satisfying $0 \le q_{min}(j) \le q_{max}(j) \le N_1$. Using the traditional “task–agent” formulation of an assignment problem, $S_1$ represents the tasks and $S_2$ the agents. $n_1(i)$ is the number of agents needed to perform task i, while $n_2(j)$ is the number of tasks that can be performed by agent j. $p_{min}(i)$ and $p_{max}(i)$ represent the minimum and maximum numbers of agents that can be assigned to task i, respectively, while $q_{min}(j)$ and $q_{max}(j)$ represent the minimum and maximum numbers of tasks that can be assigned to agent j, respectively. We use a general definition of these boundaries, in that we allow each task and each agent to have their own limits. The multi-assignment problem is then framed as the search for an assignment matrix G that defines the correspondence between points in $S_1$ and points in $S_2$. This matrix is found by minimizing the assignment cost U, defined as
$$U(G,C) = \sum_{i,j} G(i,j)\, C(i,j). \qquad (2)$$
The summations encompass all i in $S_1$ and all j in $S_2$. The objective is to find the values of $G(i,j)$ that minimize U while adhering to the following constraints:
$$\begin{aligned}
&\forall i, \quad \sum_{j} G(i,j) = n_1(i), \qquad \forall j, \quad \sum_{i} G(i,j) = n_2(j), \qquad \sum_{i,j} G(i,j) = k,\\
&\forall (i,j), \quad G(i,j) \in \{0,1\},\\
&\forall i, \quad n_1(i) \in [p_{min}(i), \dots, p_{max}(i)], \qquad \forall j, \quad n_2(j) \in [q_{min}(j), \dots, q_{max}(j)],
\end{aligned} \qquad (3)$$
where k is the actual number of task–agent pairs. The numbers $p_{min}(i)$, $p_{max}(i)$, $q_{min}(j)$, $q_{max}(j)$, and k are problem-specific and given. They do satisfy some constraints, such as $0 \le p_{min}(i) \le p_{max}(i) \le N_2$ and $0 \le q_{min}(j) \le q_{max}(j) \le N_1$, as described above, as well as $0 < k$, $\sum_{i=1}^{N_1} p_{min}(i) \le k$, and $\sum_{j=1}^{N_2} q_{min}(j) \le k$. We store those numbers in four vectors, $P_{min} = (p_{min}(1), \dots, p_{min}(N_1))$, $P_{max} = (p_{max}(1), \dots, p_{max}(N_1))$, $Q_{min} = (q_{min}(1), \dots, q_{min}(N_2))$, and $Q_{max} = (q_{max}(1), \dots, q_{max}(N_2))$.
Equation (3) recaps most of the standard linear assignment problems:
(i)
If $k = N_1 = N_2$ and $p_{min}(i) = p_{max}(i) = q_{min}(j) = q_{max}(j) = 1$ for all i and j, we recover the balanced assignment problem;
(ii)
If $p_{min}(i) = q_{min}(j) = 0$ and $p_{max}(i) = q_{max}(j) = 1$ for all i and j, the equations correspond to the k-cardinality assignment problem [19,20,21].
In Equation (3), $G(i,j)$, $n_1(i)$, and $n_2(j)$ are unknown, defining a total of $N_1 N_2 + N_1 + N_2$ variables. The solution to the general assignment problem is given by the functions $n_1^*$ and $n_2^*$ on $S_1$ and $S_2$, respectively, which identify which tasks and which agents are involved in the optimal assignment, the transportation matrix $G^*$ that defines these correspondences, and the minimum assignment cost $U^* = U(G^*, C)$.
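To make the data of the problem concrete, the inputs above can be bundled into a single structure; the following sketch is ours and does not reproduce the actual interface of the Matching program described in Section 5:

```cpp
#include <vector>

// One instance of the general multi-assignment problem of Equations (2) and (3).
struct AssignmentProblem {
    int N1, N2;                          // |S1| (tasks) and |S2| (agents)
    std::vector<std::vector<double>> C;  // N1 x N2 cost matrix, C[i][j] > 0
    std::vector<int> pmin, pmax;         // size N1: bounds on n1(i), agents per task
    std::vector<int> qmin, qmax;         // size N2: bounds on n2(j), tasks per agent
    int k;                               // required total number of assignments
};
// The unknowns are the N1*N2 binary entries G(i,j) together with the
// marginals n1(i) and n2(j), i.e., N1*N2 + N1 + N2 variables in total.
```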
Optimizing Equation (2) subject to the constraints in Equation (3) constitutes a discrete optimization challenge, specifically an integer linear programming problem. To address this challenge, we employ a statistical physics methodology, transforming it into a temperature-dependent problem involving real variables. The integer optimal solution is then obtained as the temperature approaches zero.

2.1. Effective Free Energy for the General Assignment Problem

Solving the multi-assignment problem entails determining the minimum of the function defined in Equation (2) across the potential mappings between the two discrete sets of points under consideration. Identifying this minimum is tantamount to identifying the most probable state of the system it characterizes. This “system” encompasses the various binary transportation plans, also referred to as assignment matrices, between $S_1$ and $S_2$ that adhere to the constraints in Equation (3). Each state within this system corresponds to an assignment matrix G with its associated energy $U(G,C)$ as defined in Equation (2). The probability $P(G)$ linked with an assignment matrix G is given by
$$P(G) = \frac{1}{Z(\beta)}\, e^{-\beta U(G,C)}. \qquad (4)$$
In this equation, β is the inverse temperature, namely $\beta = 1/(k_B T)$, where $k_B$ is the Boltzmann constant and T is the temperature, and $Z(\beta)$ is the partition function computed over all states of the system. As this system is defined by the assignment matrices G and by the marginals of G, $n_1$ and $n_2$, the partition function is equal to
$$Z(\beta) = e^{-\beta F(\beta)} = \sum_{G(i,j)=0}^{1}\; \sum_{n_1(i)=p_{min}(i)}^{p_{max}(i)}\; \sum_{n_2(j)=q_{min}(j)}^{q_{max}(j)} e^{-\beta U(G,C)}, \qquad (5)$$
where $F(\beta)$ is the free energy of the system. The practicality of this free energy is limited, as it cannot be computed explicitly. We therefore approximate it using the saddle point technique.
Taking into account the constraints in Equation (3), the partition function can be written as
$$Z(\beta) = \sum_{G(i,j)=0}^{1}\; \sum_{n_1(i)=p_{min}(i)}^{p_{max}(i)}\; \sum_{n_2(j)=q_{min}(j)}^{q_{max}(j)} e^{-\beta \sum_{i,j} C(i,j) G(i,j)} \prod_i \delta\Big(\sum_j G(i,j) - n_1(i)\Big) \prod_j \delta\Big(\sum_i G(i,j) - n_2(j)\Big)\, \delta\Big(\sum_{i,j} G(i,j) - k\Big). \qquad (6)$$
The three sums impose that $G(i,j)$, $n_1(i)$, and $n_2(j)$ take integer values within the ranges defined by the constraints in Equation (3). The additional constraints are imposed through the delta functions. We employ the Fourier representation of those delta functions, thereby introducing additional auxiliary variables x, $\lambda(i)$, and $\mu(j)$, with $i \in [1,N_1]$ and $j \in [1,N_2]$. The partition function is then given, after reorganization and up to a multiplicative constant, by
$$Z(\beta) = \int_{-\infty}^{+\infty} \prod_i d\lambda(i) \int_{-\infty}^{+\infty} \prod_j d\mu(j) \int_{-\infty}^{+\infty} dx\; e^{\text{ß}\beta k x} \sum_{n_1(i)=p_{min}(i)}^{p_{max}(i)}\; \sum_{n_2(j)=q_{min}(j)}^{q_{max}(j)} e^{-\beta\left(\sum_i \text{ß}\lambda(i)\, n_1(i) + \sum_j \text{ß}\mu(j)\, n_2(j)\right)} \sum_{G(i,j)=0}^{1} e^{-\beta \sum_{i,j} G(i,j)\left(C(i,j) + \text{ß}\lambda(i) + \text{ß}\mu(j) + \text{ß}x\right)}, \qquad (7)$$
where ß is the imaginary unit (ß² = −1). Note that we have rescaled the variables x, λ, and μ by a factor β for consistency with the energy term. Conducting the summations over the variables $G(i,j)$, $n_1(i)$, and $n_2(j)$, we get
$$Z(\beta) = \int_{-\infty}^{+\infty} \prod_i d\lambda(i) \int_{-\infty}^{+\infty} \prod_j d\mu(j) \int_{-\infty}^{+\infty} dx\; e^{-\beta F_\beta(\lambda,\mu,x)}, \qquad (8)$$
where $F_\beta(\lambda,\mu,x)$ is a functional, or effective free energy, that depends on the variables λ, μ, and x and is defined by
$$F_\beta(\lambda,\mu,x) = -\frac{1}{\beta} \sum_i \ln \frac{e^{-\text{ß}\beta p_{min}(i)\lambda(i)} - e^{-\text{ß}\beta (p_{max}(i)+1)\lambda(i)}}{1 - e^{-\text{ß}\beta\lambda(i)}} - \frac{1}{\beta} \sum_j \ln \frac{e^{-\text{ß}\beta q_{min}(j)\mu(j)} - e^{-\text{ß}\beta (q_{max}(j)+1)\mu(j)}}{1 - e^{-\text{ß}\beta\mu(j)}} - \frac{1}{\beta} \sum_{i,j} \ln\left(1 + e^{-\beta\left(C(i,j) + \text{ß}\lambda(i) + \text{ß}\mu(j) + \text{ß}x\right)}\right) - \text{ß}kx. \qquad (9)$$
In contrast to the expression of the internal energy U defined in Equation (2), which depends on the $N_1 N_2 + N_1 + N_2$ constrained integer variables $G(i,j)$, $n_1(i)$, and $n_2(j)$, the expression for the effective free energy $F_\beta(\lambda,\mu,x)$ depends only on $N_1 + N_2 + 1$ unconstrained variables, namely $\lambda(i)$, $\mu(j)$, and x. Below, we demonstrate how identifying the extremum of this function enables us to address the multi-assignment problem.
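For the reader's convenience, the two summations that take Equation (7) to Equation (9) are elementary: each binary variable contributes a two-term sum, and each marginal contributes a finite geometric series,

$$\sum_{G(i,j)=0}^{1} e^{-\beta G(i,j)\, y(i,j)} = 1 + e^{-\beta y(i,j)}, \qquad \sum_{n_1(i)=p_{min}(i)}^{p_{max}(i)} e^{-\text{ß}\beta \lambda(i)\, n_1(i)} = \frac{e^{-\text{ß}\beta p_{min}(i)\lambda(i)} - e^{-\text{ß}\beta (p_{max}(i)+1)\lambda(i)}}{1 - e^{-\text{ß}\beta\lambda(i)}},$$

with $y(i,j) = C(i,j) + \text{ß}\lambda(i) + \text{ß}\mu(j) + \text{ß}x$; taking $-\frac{1}{\beta}\ln$ of the product of these factors yields the terms of Equation (9).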

2.2. Optimizing the Effective Free Energy

Let $\bar{G}(i,j)$, $\bar{n}_1(i)$, and $\bar{n}_2(j)$ represent the expected values of $G(i,j)$, $n_1(i)$, and $n_2(j)$, respectively, in accordance with the Gibbs distribution specified in Equation (4). Computing these expected values directly is unfortunately not feasible, as the partition function defined in Equation (8) lacks an analytical expression. Instead, we derive a saddle point approximation (SPA) by seeking extrema of the effective free energy with respect to the variables λ, μ, and x:
$$\frac{\partial F_\beta(\lambda,\mu,x)}{\partial \lambda(i)} = \frac{\partial F_\beta(\lambda,\mu,x)}{\partial \mu(j)} = \frac{\partial F_\beta(\lambda,\mu,x)}{\partial x} = 0. \qquad (10)$$
After some rearrangements, these equations can be written as
$$\forall i, \quad \sum_j X(i,j) = d_1(i), \qquad \forall j, \quad \sum_i X(i,j) = d_2(j), \qquad \sum_{i,j} X(i,j) = k, \qquad (11)$$
where
$$X(i,j) = h\big(\beta(C(i,j) + \text{ß}\lambda(i) + \text{ß}\mu(j) + \text{ß}x)\big), \qquad d_1(i) = g\big(\text{ß}\beta\lambda(i), p_{min}(i), p_{max}(i)\big), \qquad d_2(j) = g\big(\text{ß}\beta\mu(j), q_{min}(j), q_{max}(j)\big),$$
and
$$h(x) = \frac{1}{e^x + 1}, \qquad g(x,a,b) = \frac{b-a+1}{e^{-(b-a+1)x} - 1} - \frac{1}{e^{-x} - 1} + b.$$
Note that
(i)
$g(x,a,a) = a$. If $a = 1$, so that $n_1(i) = n_2(j) = 1$ for all i and j, we recover Equation (11) from [16], corresponding to the balanced assignment problem.
(ii)
$g(x,0,1) = h(x)$. In this case, Equation (11) corresponds to the k-cardinality problem.
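For numerical work, both h and g should be evaluated in a way that avoids overflow and the removable singularity of g at $x = 0$; the following sketch (ours, written from the expressions above) illustrates one way to do this in C++:

```cpp
#include <cmath>

// h(x) = 1 / (exp(x) + 1), evaluated without overflow for large |x|.
double h(double x) {
    if (x > 0.0) { double e = std::exp(-x); return e / (1.0 + e); }
    return 1.0 / (std::exp(x) + 1.0);
}

// g(x, a, b) = (b-a+1)/(exp(-(b-a+1)x) - 1) - 1/(exp(-x) - 1) + b.
// The two poles at x = 0 cancel; near 0 we use the limit (a + b)/2,
// the expected count for a flat distribution over {a, ..., b}.
double g(double x, int a, int b) {
    if (a == b) return a;                     // g(x, a, a) = a for all x
    if (std::fabs(x) < 1e-8) return 0.5 * (a + b);
    int m = b - a + 1;
    return m / std::expm1(-m * x) - 1.0 / std::expm1(-x) + b;
}
```

Note that g interpolates smoothly between its two limits, b as $x \to -\infty$ and a as $x \to +\infty$.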
As frequently encountered, the saddle point may be purely imaginary. In this instance, it is evident from Equations (10) and (11) that the variables ßλ(i), ßμ(j), and ßx must be real. Consequently, we will replace {ßλ(i), ßμ(j), ßx} by {λ(i), μ(j), x} below.
In order to analyze the saddle point approximation (SPA), it is necessary to verify the existence and evaluate the uniqueness of the extrema of the effective free energy. The following theorem establishes the concavity of $F_\beta(\lambda,\mu,x)$.
Theorem 1.
The Hessian of the effective free energy $F_\beta(\lambda,\mu,x)$ is negative definite if $p_{min}(i) < p_{max}(i)$ for all i and $q_{min}(j) < q_{max}(j)$ for all j, and negative semidefinite otherwise. The free energy $F_\beta(\lambda,\mu,x)$ is then strictly concave in the first case and concave in the second case.
Proof. 
See Appendix B.    □
The following property establishes a relationship between the solutions of the SPA system of equations and the expected values for the assignment matrix and for the indicator functions:
Proposition 1.
Let $\bar{S}_\beta$ be the expected state of the system at the inverse temperature β with respect to the Gibbs distribution given in Equation (4). $\bar{S}_\beta$ is associated with an expected transportation plan $\bar{G}_\beta$ and expected indicator functions $\bar{n}_1^\beta$ and $\bar{n}_2^\beta$. Let $\lambda^{MF}(i)$, $\mu^{MF}(j)$, and $x^{MF}$ be the solutions of the system of Equation (10). Then the following identities hold:
$$\begin{aligned}
\bar{G}_\beta(i,j) &= h\big(\beta(C(i,j) + \lambda^{MF}(i) + \mu^{MF}(j) + x^{MF})\big) = X^{MF}(i,j),\\
\bar{n}_1^\beta(i) &= g\big(\beta\lambda^{MF}(i), p_{min}(i), p_{max}(i)\big) = d_1^{MF}(i),\\
\bar{n}_2^\beta(j) &= g\big(\beta\mu^{MF}(j), q_{min}(j), q_{max}(j)\big) = d_2^{MF}(j).
\end{aligned} \qquad (12)$$
To indicate that the solutions are mean field solutions, we use the superscript M F .
Proof. 
The proof of this proposition is virtually identical to the proof of the same results for the assignment problem, found in Appendix B of [15].    □
For a given value of the parameter β, $\bar{n}_1^\beta(i)$ is the number of elements of $S_2$ in correspondence with i, $\bar{n}_2^\beta(j)$ is the number of elements of $S_1$ in correspondence with j, and $\bar{G}_\beta$ forms a transportation plan between $S_1$ and $S_2$ that is optimal with respect to the free energy defined in Equation (8). Note that these values are mean values and are fractional. The matrix $\bar{G}_\beta$ belongs to a polytope, which we denote $U_k(P_{min}^{max}, Q_{min}^{max})$ (see Appendix A for a full characterization of this polytope). We can link the mean field assignment matrix $\bar{G}_\beta$ to an optimal free energy $F_\beta^{MF}$ and to an optimal internal energy $U_\beta^{MF} = \sum_{i,j} \bar{G}_\beta(i,j)\, C(i,j)$. These values serve as mean field approximations of the exact free energy and internal energy of the system, respectively. Here are the key properties of $U_\beta^{MF}$ and $F_\beta^{MF}$:
Proposition 2.
$F_\beta^{MF}$ and $U_\beta^{MF}$ are monotonically increasing and monotonically decreasing functions, respectively, of the parameter β.
Proof. 
See Appendix C for F β M F and Appendix D for U β M F .    □
Theorem 1 and Proposition 2 outlined above underscore several advantages of the proposed framework, which reframes the multi-assignment problem as a temperature-dependent process. Firstly, at each temperature, the multi-assignment problem with a generic cost matrix is transformed into a concave problem with a unique solution. This problem exhibits linear complexity in the number of variables, contrasting with the quadratic complexity of the original problem. The concavity facilitates the utilization of straightforward algorithms for identifying the extremum of the effective free energy function (Equation (8)). Additionally, Equation (12) offers robust numerical stability for computing the transportation plan and the functions $n_1$ and $n_2$, owing to the characteristics of the functions h(x) and g(x,a,b). Lastly, the convergence with respect to temperature is monotonic.

3. Solving Generic Assignment Problems

In the previous section, we proposed an effective free energy, $F_\beta(\lambda,\mu,x)$, that depends on $N_1 + N_2 + 1$ unconstrained variables and on the inverse temperature β, and whose extremum identifies the solution of a possibly multi-task, multi-agent assignment problem. We have shown that the free energy at this extremum increases monotonically with β. We now relate these values to the optimal assignment energy $U^*$:
Theorem 2.
Let $F_\beta^{MF}$ and $U_\beta^{MF}$ be the mean field approximations of the exact free energy and internal energy of the system at the inverse temperature β. The optimal assignment energy $U^*$ is then given by
$$U^* = \lim_{\beta \to +\infty} F_\beta^{MF} = \lim_{\beta \to +\infty} U_\beta^{MF}, \qquad (13)$$
namely, the optimal assignment energy $U^*$ is the limit, as the inverse temperature goes to infinity, of both the mean field free energy and the mean field internal energy.
Proof. 
See Appendix E.    □
This theorem illustrates that as the inverse temperature approaches infinity (or equivalently, as the temperature tends towards zero), the internal energy and free energy of the system converge to the optimal assignment energy. This confirms the validity of our statistical physics approach, particularly emphasizing the effectiveness of the saddle-point approximation. Note, however, that it does not define the behavior of the coupling matrix $\bar{G}_\beta = X^{MF}$. As $\bar{G}_\beta(i,j) = h\big(\beta(C(i,j) + \lambda^{MF}(i) + \mu^{MF}(j) + x^{MF})\big)$ and $0 < h(x) < 1$, the coupling matrix at any finite temperature is fractional. We need to show that as $\beta \to +\infty$, the corresponding matrix $\bar{G}$ does converge to the solution matrix $G^*$ and not to a fractional matrix that would lead to the same low energy $U^*$.
We first establish bounds on the internal energy and free energy at the SPA. Let us define
$$A(N_1,N_2) = N_1 N_2 \ln(2) + \sum_{i=1}^{N_1} \ln\big(p_{max}(i) - p_{min}(i) + 1\big) + \sum_{j=1}^{N_2} \ln\big(q_{max}(j) - q_{min}(j) + 1\big). \qquad (14)$$
Theorem 3.
$F_\beta^{MF}$ and $U_\beta^{MF}$, the mean field approximations of the exact free energy and internal energy of the system at the inverse temperature β, satisfy the following inequalities:
$$U^* - \frac{A(N_1,N_2)}{\beta} \le F_\beta^{MF} \le U^*, \qquad (15)$$
$$U^* \le U_\beta^{MF} \le U^* + \frac{A(N_1,N_2)}{\beta}, \qquad (16)$$
where U * is the optimal assignment energy and A ( N 1 , N 2 ) is defined in Equation (14).
Proof. 
See Appendix F.    □
Now, let us assume that the multi-assignment problem linked with the $N_1 \times N_2$ cost matrix C possesses a unique optimal assignment matrix, denoted as $G^*$. We have the following theorem:
Theorem 4.
Let $\bar{G}_\beta$ be the coupling matrix solution of the SPA equations at the inverse temperature β. We assume that the multi-assignment problem has a unique solution, with $G^*$ being the associated coupling matrix. If Δ is the difference in total cost between the optimal solution and the second-best solution, then
$$\max_{i,j} \left| \bar{G}_\beta(i,j) - G^*(i,j) \right| \le \frac{A(N_1,N_2)}{\beta \Delta}. \qquad (17)$$
Proof. 
See Appendix G.    □
This theorem validates that, in the generic case in which the solution to the assignment problem is unique, the converged matrix $\bar{G}$ is this unique solution.
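A direct consequence, which justifies the rounding step used in our implementation (Section 5), is that all entries of $\bar{G}_\beta$ lie within 1/2 of the corresponding entries of $G^*$ as soon as

$$\beta > \frac{2\, A(N_1,N_2)}{\Delta},$$

in which case rounding each entry of $\bar{G}_\beta$ to the nearest integer recovers $G^*$ exactly.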

4. Solving Degenerate Assignment Problems

The method we propose is basically a relaxation approach to the general assignment problem. Indeed, we build a collection of transportation matrices $\bar{G}_\beta$ that belong to $U_k(P_{min}^{max}, Q_{min}^{max}) \setminus P_k(P_{min}^{max}, Q_{min}^{max})$ (see Appendix A). Entries of these matrices are non-integer, lying strictly in the interval (0,1). If the general assignment problem is known to have a unique integer solution, we have shown that these matrices converge to an element of $P_k(P_{min}^{max}, Q_{min}^{max})$ as $\beta \to +\infty$. The question remains as to what happens when the problem is degenerate, i.e., when it may have multiple integer solutions.
The general assignment problem is a linear programming problem. Checking whether such a problem is degenerate is unfortunately often NP-complete [22,23]. The degeneracies occur due to the presence of cycles in the linear constraints, i.e., in the cost matrix for the assignment problem. If this is the case, we propose randomly perturbing that matrix to bring it back to the generic case. Megiddo and colleagues [24] have shown that an ε-perturbation of a degenerate linear programming problem reduces it to a non-degenerate one.
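A minimal sketch of such an ε-perturbation, assuming uniform noise as used in the experiments of Section 6, is:

```cpp
#include <random>
#include <vector>

// Add i.i.d. uniform noise in [0, eps] to every cost entry. For eps small
// compared with the gaps between distinct costs, this breaks the cycles
// responsible for degeneracy and makes the optimal assignment unique.
void perturbCosts(std::vector<std::vector<double>>& C, double eps,
                  unsigned seed = 12345) {
    std::mt19937 gen(seed);
    std::uniform_real_distribution<double> noise(0.0, eps);
    for (auto& row : C)
        for (double& c : row) c += noise(gen);
}
```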

5. Matching: A Program for Solving Assignment Problems

We have implemented the multi-assignment framework described here in a C++ program, Matching, that is succinctly described in Algorithm 1.
Matching relies on an iterative process in which the parameter β is gradually increased. At each value of β , the nonlinear system of equations defined by Equation (10) is solved. We write this system as
$$A_\lambda = 0, \qquad A_\mu = 0, \qquad A_x = 0, \qquad (18)$$
where $A = (A_\lambda, A_\mu, A_x)$ is a vector of predicates defined as
$$\begin{aligned}
A_\lambda(i) &= \sum_j h\big(\beta(C(i,j) + \lambda(i) + \mu(j) + x)\big) - g\big(\beta\lambda(i), p_{min}(i), p_{max}(i)\big),\\
A_\mu(j) &= \sum_i h\big(\beta(C(i,j) + \lambda(i) + \mu(j) + x)\big) - g\big(\beta\mu(j), q_{min}(j), q_{max}(j)\big),\\
A_x &= \sum_{i,j} h\big(\beta(C(i,j) + \lambda(i) + \mu(j) + x)\big) - k.
\end{aligned} \qquad (19)$$
This system has $N_1 + N_2 + 1$ equations, with the same number of variables. It is solved using an iterative Newton–Raphson method (for details, see, for example, [16,25]). Once the SPA system of equations is solved, the assignment matrix $\bar{G}_\beta$, the functions $\bar{n}_1^\beta$ and $\bar{n}_2^\beta$, and the corresponding transportation energy $U_\beta^{MF}$ are computed. The iterations over β are stopped when the mean field energy $U_\beta^{MF}$ no longer changes (within a tolerance TOL, usually set to $10^{-6}$). Otherwise, β is increased, and the current values of λ, μ, and x are used as input for the following iteration. At convergence, the values of the assignment matrix are rounded to the nearest integer (as indicated in the output of Algorithm 1). The minimal energy is then computed using the corresponding integer matrix.
The primary computational expense of this algorithm arises from solving the nonlinear set of equations corresponding to the SPA at every β value. We use for this purpose an iterative Newton–Raphson method (see [15]).
Algorithm 1 Matching: a temperature-dependent framework for solving the multi-assignment problem
  • Input: $N_1$ and $N_2$, the numbers of tasks and agents; $p_{min}(i)$ and $p_{max}(i)$, the minimum and maximum numbers of agents needed to perform task i; $q_{min}(j)$ and $q_{max}(j)$, the minimum and maximum numbers of tasks that can be performed by agent j; k, the expected number of assignments; the cost matrix C; and an initial value $\beta_0$ for β, the inverse of the temperature.
  • Initialize: Set the arrays λ and μ to 0, set $x = 0$, and set $STEP = 10$.
  • for $i = 1$, until convergence, do
    (1) Set $\beta_i = STEP \cdot \beta_{i-1}$.
    (2) Solve the nonlinear Equation (10) for λ, μ, and x at the saddle point.
    (3) Compute the corresponding $\bar{G}_{\beta_i}$, $\bar{n}_1^{\beta_i}$, $\bar{n}_2^{\beta_i}$, and $U_{\beta_i}^{MF}$.
    (4) Check for convergence: if $|U_{\beta_i}^{MF} - U_{\beta_{i-1}}^{MF}| < TOL$, stop.
  • end for
  • Output: The converged assignment matrix $\bar{G}_\beta$ (rounded to the nearest integers), the functions $\bar{n}_1^\beta$ and $\bar{n}_2^\beta$ over the tasks and agents, and the minimal associated cost $U_\beta$.
As for any annealing scheme, the initial temperature or, in our case, the initial value $\beta_0$, is a parameter that significantly impacts the efficiency of the algorithm. Setting $\beta_0$ too small (i.e., a high initial temperature) leads to inefficiency, as the system spends a significant amount of time at high temperatures, while setting $\beta_0$ too high requires many steps to converge at the corresponding low temperature, thereby negating the benefit of annealing. The value of β scales the cost matrix C and as such is related to the range of this matrix, more specifically to its largest value, $C_{max}$. We found that setting $\beta_0 C_{max} = 1$ provides satisfactory annealing efficiency. The value of STEP was chosen empirically.
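Putting these choices together, the outer annealing loop of Algorithm 1 can be sketched as follows; this sketch is ours (the callable solveSPA stands for the Newton–Raphson solver of step (2), whose internals we omit), and it does not reproduce the actual code of Matching V1.0:

```cpp
#include <cmath>
#include <functional>

// Outer annealing loop of Algorithm 1. 'solveSPA' is any callable that
// solves the SPA system (Equation (19)) at the given beta, warm-started
// from its previous solution, and returns the mean field energy U_beta^MF.
double anneal(std::function<double(double)> solveSPA, double Cmax,
              double TOL = 1e-6, double STEP = 10.0) {
    double beta = 1.0 / Cmax;       // beta_0 chosen so that beta_0 * Cmax = 1
    double Uprev = std::nan("");
    for (;;) {
        double U = solveSPA(beta);                   // step (2) of Algorithm 1
        if (!std::isnan(Uprev) && std::fabs(U - Uprev) < TOL)
            return U;                                // step (4): converged
        Uprev = U;
        beta *= STEP;                                // anneal toward beta -> infinity
    }
}
```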

6. Examples

The framework we propose is general: it allows us to solve balanced and unbalanced assignment problems, including those that allow for multi-agent or multi-task assignments. We illustrate our method on the latter types of problems as, to our knowledge, there are currently no algorithms for such problems that are guaranteed to reach the optimal assignment.
As a first illustration of our framework, we ran Matching on the cost matrix $C_1$ given in Table 1. This matrix has been used in previous studies of multi-agent assignment problems [10,11,13,14,26,27]. Matching was run with the following parameters: $p_{min} = 0$ for all agents (i.e., some agents may stay idle), $p_{max} = 4$ for all agents (i.e., an agent can perform up to 4 tasks), $q_{min} = q_{max} = 1$ (i.e., a task is only performed by one agent), and $k = 8$ (all tasks are performed). In Figure 1, we show the corresponding trajectories of the internal energy $U_\beta^{MF}$ and free energy $F_\beta^{MF}$.
As expected, the internal energy is monotonically decreasing while the free energy is monotonically increasing, and both converge to the same value, 1440. Theorem 3 provides bounds on those energy values. Note that it can be rewritten as
$$U_\beta^{MF} - \frac{A(N_1,N_2)}{\beta} \le U^* \le U_\beta^{MF}, \qquad F_\beta^{MF} \le U^* \le F_\beta^{MF} + \frac{A(N_1,N_2)}{\beta}, \qquad (20)$$
i.e., at each value of β, the internal energy and free energy provide bounds on the actual optimal cost of the assignment problem. These bounds are shown as shaded areas in Figure 1. Note that the widths of the intervals defined by the bounds are inversely proportional to β and therefore decrease monotonically as β increases.
In Table 2, we compare the assignments found by Matching on $C_1$ with $p_{max}$ set to 4 (i.e., up to four tasks per agent) and $p_{min} = 0$ (agents can be idle) or $p_{min} = 1$ (each agent must execute at least one task). Removing the latter constraint leads to a better optimal assignment in which agent 3 remains idle. When we maintain this constraint, our results are the same as those found by Wang et al. [13].
As a second, more general test of our method, we compared the results of the framework we propose with the results of our own version of the Hungarian algorithm, as well as with those found in previous studies. We used both the matrix $C_1$ and the matrix $C_2$ given in Table 3 in those computing experiments.
We ran Matching as follows. For all tasks j, we set $q_{min}(j) = q_{max}(j) = 1$, i.e., a task is only performed by one agent, and all tasks are expected to be executed. The latter is further enforced by setting k to be the total number of tasks (8 for $C_1$ and 10 for $C_2$). For an agent i, we either set all $p_{min}(i)$ to 1 (in which case all agents must perform at least one task) or to 0 (in which case some agents may not be assigned). We set all $p_{max}(i)$ equal to a given value p and ran Matching with p varying from 1 to k. Note that both matrices $C_1$ and $C_2$ contain cycles. As such, optimal assignments based on those matrices are not unique. To remove the degeneracy, we added random uniform noise in $[0, \epsilon]$, with $\epsilon = 0.001$, to all values of the cost matrices (see Section 4 for details).
The Hungarian algorithm remains a standard for solving balanced as well as unbalanced assignment problems. It needs to be adapted, however, when solving multi-task assignment problems. The corresponding textbook solution, used for example in Ref. [14], is to make copies, or clones, of the agents, solve the augmented unbalanced assignment problem using the Hungarian algorithm, and then regroup all clones of an agent to define the tasks it is assigned to. We consequently created multiple versions of the cost matrices $C_1$ and $C_2$, with all agents copied p times, where p defines the maximum number of tasks that an agent can perform. We ran our own version of the Hungarian algorithm on those matrices. This version is adapted from the serial code written by C. Ma that is publicly available at https://github.com/mcximing/hungarian-algorithm-cpp (accessed on 1 May 2021).
Comparisons of the results of Matching, of the Hungarian algorithm, and of previous studies are provided in Table 4 for the cost matrix $C_1$ and in Table 5 for the cost matrix $C_2$.
The results of Matching have been proven to be optimal (see Section 3). As expected, those results are strongly dependent on the constraints imposed on the problem: we can find assignments with a lower total cost if we allow agents to remain idle by setting $p_{min}$ to 0. For example, for the cost matrix $C_1$, in the extreme case with $p_{min} = 0$ and $p_{max} = 8$ for all agents, we find that all eight tasks have been assigned to agent 5, while the other agents have no task to perform, with a total cost of 1400. If instead we set $p_{min} = 1$ and keep $p_{max} = 8$, we find that agent 5 is assigned tasks 1, 2, 5, and 6; agent 1 is assigned task 3; agent 2 is assigned task 8; agent 3 is assigned task 4; and agent 4 is assigned task 7, for a total cost of 1450. If we instead run the Hungarian algorithm with agents that have been cloned to allow for multi-task assignments, we find optimal assignments that match those found by Matching with $p_{min}$ set to 0 for both cost matrices. For matrix $C_1$, for example, if each agent is represented by 8 clones, we find that agent 5 is assigned tasks 1, 2, 4, 5, 6, 7, and 8 while agent 1 is assigned task 3, for a total cost of 1400. This illustrates two points. First, as expected, the optimal assignment is not unique, as we find two different assignments with the same cost, 1400. Second, and more importantly, this shows the difficulty of applying the Hungarian algorithm to such problems. It works well if we want to find the optimal assignment without additional constraints, but it cannot accommodate a constraint such as requiring each agent to perform at least one task.
The optimal solutions found by Matching when $p_{min}$ is set to 1 match those found by Wang et al. [13] for both matrices and are better than those found by Majumdar et al. [26] for matrix $C_2$. Note that the latter were obtained using either an ant colony algorithm or a genetic algorithm; both are probabilistic algorithms that do not guarantee that the true minimum is found.

7. Conclusions

We have developed a general framework for solving balanced and unbalanced, constrained and unconstrained assignment problems. Given two sets of points $S_1$ and $S_2$ with possibly different cardinalities $N_1$ and $N_2$, constraints on the number of possible assignments for each element of $S_1$ and of $S_2$, and a cost matrix defining the penalties of an assignment, we have constructed a concave free energy, parameterized by temperature, that captures those constraints. The definition of this free energy is general enough that it includes balanced and unbalanced cases. Its extremum establishes an optimal assignment between the two sets of points. We have demonstrated that this free energy increases monotonically as a function of β (the inverse of temperature) towards the optimal assignment cost. Moreover, we have established that for sufficiently large β values, the precise solution to the generic multi-assignment problem can be directly derived through straightforward rounding to the nearest integer of the elements of the assignment matrix linked to this extremum. Additionally, we have developed a method that guarantees convergence for degenerate assignment problems.
The formalism introduced in this paper was designed to generalize the Hungarian algorithm for a large range of assignment problems. We expect it to be useful for a much larger set of problems, especially those associated with graphs. Graphs capture data relationships and, as such, are essential to numerous applications. They are particularly useful in domains such as Web search [28], neural [29] and social network analysis [30], gene networks [31], etc. More generally, they are designed to represent complex knowledge. The scale of modern graphs leads to a need to develop more efficient methods to process very large graphs. The formalism we propose is well adapted to tackle this problem for applications such as bipartite or simple graph matching. We will also consider extensions to higher-order matching problems, such as k-matching problems [32] (for example, the three-assignment problem [33]), which are known to be NP-complete [34]. These problems have their own applications for graph analyses.

Author Contributions

Conceptualization, P.K. and H.O.; methodology, P.K. and H.O.; software, P.K.; validation, P.K.; formal analysis, P.K. and H.O.; investigation, P.K. and H.O.; writing—original draft preparation, P.K. and H.O.; writing—review and editing, P.K. and H.O. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

No new data were created for this study.

Acknowledgments

The work discussed here originated from visits by P.K. at the Institut de Physique Théorique, CEA Saclay, France, during the falls of 2022 and 2023. He thanks them for their hospitality and financial support.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. A Generalized Birkhoff–von Neumann Theorem

An N × N matrix A = ( a ( i , j ) ) is said to be doubly stochastic if and only if it satisfies the following conditions,
$$a(i,j) \ge 0, \qquad \sum_{i=1}^N a(i,j) = 1, \qquad \sum_{j=1}^N a(i,j) = 1,$$
for all $(i,j) \in [1,N]^2$. The set $\Omega_N$ of doubly stochastic matrices is a convex polytope whose vertices are the permutation matrices of the same size. This is expressed by the following theorem, established in [17,18]:
Theorem A1.
An N × N matrix A is doubly stochastic if and only if it can be written as a weighted sum of permutation matrices, i.e.,
$$A = \sum_{\pi \in \Pi_N} a_\pi\, \pi,$$
where $\Pi_N$ is the set of permutation matrices of size N, each $a_\pi$ is a non-negative real number, and $\sum_{\pi \in \Pi_N} a_\pi = 1$.
Doubly stochastic matrices and the properties associated with the Birkhoff–von Neumann theorem above have proved useful for establishing convergence properties of statistical physics frameworks for solving the balanced assignment problem [15,35].
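As a concrete illustration of Theorem A1, any 2 × 2 doubly stochastic matrix is a weighted sum of the two permutation matrices of size 2; for instance,

$$\begin{pmatrix} 0.6 & 0.4 \\ 0.4 & 0.6 \end{pmatrix} = 0.6 \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} + 0.4 \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}.$$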
For the more general assignment problem considered here, however, we need to consider a different but related set of matrices, defined as follows. Let N and M be two natural numbers. Let $R_{min} = (r(1), \dots, r(N))$, $R_{max} = (R(1), \dots, R(N))$, $C_{min} = (c(1), \dots, c(M))$, and $C_{max} = (C(1), \dots, C(M))$ be four non-negative integral vectors satisfying
$$\forall i, \quad 0 \le r(i) \le R(i), \qquad \forall j, \quad 0 \le c(j) \le C(j).$$
Let A be a non-negative matrix of size $N \times M$, and let us denote the row sum vector of A as $R_A$ and the column sum vector of A as $C_A$. Let us also define $\sigma(A)$ as the sum of all elements of A, i.e., $\sigma(A) = \sum_{i=1}^N \sum_{j=1}^M A(i,j)$. The transportation polytope $U(R_{min}^{max}, C_{min}^{max})$ is the set of $N \times M$ matrices A that satisfy
$$\forall (i,j), \quad 0 \le A(i,j) \le 1, \qquad \forall i, \quad r(i) \le R_A(i) \le R(i), \qquad \forall j, \quad c(j) \le C_A(j) \le C(j).$$
The transportation polytope $U_k(R_{min}^{max}, C_{min}^{max})$ is defined as
$$U_k(R_{min}^{max}, C_{min}^{max}) = \left\{ A \in U(R_{min}^{max}, C_{min}^{max}) \;\middle|\; \sigma(A) = k \right\}.$$
Denote $P(R_{min}^{max}, C_{min}^{max})$ as the set of all matrices in $U(R_{min}^{max}, C_{min}^{max})$ whose entries are either 0 or 1, with a similar definition for $P_k(R_{min}^{max}, C_{min}^{max})$ with respect to $U_k(R_{min}^{max}, C_{min}^{max})$.
Relaxed solutions to the general assignment problem considered in this paper belong to $U_k(P_{min}^{max}, Q_{min}^{max})$, while integer solutions belong to $P_k(P_{min}^{max}, Q_{min}^{max})$, where $P_{min}^{max}$ and $Q_{min}^{max}$ represent the vectors containing the constraints on the number of agents that can be assigned to a task and on the number of tasks that can be assigned to an agent, respectively. In Koehl [36], we show the following theorem:
Theorem A2.
$U_k(R_{min}^{max}, C_{min}^{max})$ is the convex hull of all matrices in $P_k(R_{min}^{max}, C_{min}^{max})$.
Theorem A2 can be restated as follows: any matrix in $U_k(R_{min}^{max}, C_{min}^{max})$ can be written as a convex combination of matrices in $P_k(R_{min}^{max}, C_{min}^{max})$. This result will be useful for some of the proofs below.
Remark A1.
(i) 
If $R_{min} = R_{max} = 1_N$ and $C_{min} = C_{max} = 1_M$ (where $1_N$ and $1_M$ are vectors of ones of size N and M, respectively), and $N = M = k$, then $U(R_{min}^{max}, C_{min}^{max})$ is the set of doubly stochastic matrices, $P_k(R_{min}^{max}, C_{min}^{max})$ is the set of permutation matrices, and Theorem A2 is then equivalent to the Birkhoff–von Neumann theorem for doubly stochastic matrices [17,18].
(ii) 
If $R_{min} = 0_N$, $C_{min} = 0_M$, $R_{max} = 1_N$, and $C_{max} = 1_M$, then $U_k(R_{min}^{max}, C_{min}^{max})$ is the set of doubly sub-stochastic matrices with sum k and $P_k(R_{min}^{max}, C_{min}^{max})$ is the set of sub-permutation matrices of rank k; a specific version of Theorem A2 was established by Mendelssohn and Dulmage for square matrices [37], and later by Brualdi and Lee for rectangular matrices [38].

Appendix B. Proof of Theorem 1: Concavity of the Effective Free Energy

The free energy associated with the multi-assignment problem can be written as
$$F_\beta(\lambda,\mu,x) = -\frac{1}{\beta} \sum_i G\big(\beta\lambda(i), p_{min}(i), p_{max}(i)\big) - \frac{1}{\beta} \sum_j G\big(\beta\mu(j), q_{min}(j), q_{max}(j)\big) + \frac{1}{\beta} \sum_{i,j} H\big(\beta(C(i,j) + \lambda(i) + \mu(j) + x)\big) - kx,$$
with:
$$H(x) = \ln\left(1 + e^{-x}\right), \qquad G(x,a,b) = \ln \frac{e^{-ax} - e^{-(b+1)x}}{1 - e^{-x}},$$
where a and b are two non-negative integers with $0 \le a \le b$. The function H(x), its derivatives, and some associated functions have been fully characterized in [15]. In parallel, the function G(x,a,b), its derivatives, and associated functions are characterized in Appendix H.
We prove that this free energy is concave by showing that its Hessian $H_E$ is negative semidefinite. $H_E$ is a symmetric matrix of size $(N_1+N_2+1) \times (N_1+N_2+1)$, such that its rows and columns correspond to the $N_1$ λ values first, followed by the $N_2$ μ values, and finally the value x. Let $h(x) = -H'(x)$, and let $h'$ be its derivative, i.e.,
$$h'(x) = -\frac{e^{-x}}{\left(1 + e^{-x}\right)^2}.$$
In [15], we have shown that $h'(x) \in [-\frac{1}{4}, 0)$ for all $x \in \mathbb{R}$, i.e., that $h'(x)$ is always strictly negative. Similarly, let $g(x,a,b)$ be the derivative of $G(x,a,b)$ with respect to x, and let $g'$ be its derivative, i.e.,
$$g'(x,a,b) = -\frac{(b-a+1)^2\, e^{-(b-a+1)x}}{\left(e^{-(b-a+1)x} - 1\right)^2} + \frac{e^{-x}}{\left(e^{-x} - 1\right)^2}.$$
In Appendix H, we show that when $b = a$, $g'(x,a,a) = 0$ for all $x \in \mathbb{R}$, and that when $b > a$, $g'(x,a,b)$ is strictly positive for all $x \in \mathbb{R}$.
We define the matrix X and the vectors $d_1$ and $d_2$ such that
$$X(i,j) = h'\big(\beta(C(i,j) + \lambda(i) + \mu(j) + x)\big), \qquad d_1(i) = g'\big(\beta\lambda(i), p_{min}(i), p_{max}(i)\big), \qquad d_2(j) = g'\big(\beta\mu(j), q_{min}(j), q_{max}(j)\big).$$
From Equation (10), we obtain
$$\begin{aligned}
H_E(i,i') &= \frac{\partial^2 F_\beta(\lambda,\mu,x)}{\partial \lambda(i)\, \partial \lambda(i')} = \beta\, \delta_{ii'} \Big( \sum_j X(i,j) - d_1(i) \Big), &
H_E(i,j) &= \frac{\partial^2 F_\beta(\lambda,\mu,x)}{\partial \lambda(i)\, \partial \mu(j)} = \beta\, X(i,j),\\
H_E(i,N) &= \frac{\partial^2 F_\beta(\lambda,\mu,x)}{\partial \lambda(i)\, \partial x} = \beta \sum_j X(i,j), &
H_E(j,j') &= \frac{\partial^2 F_\beta(\lambda,\mu,x)}{\partial \mu(j)\, \partial \mu(j')} = \beta\, \delta_{jj'} \Big( \sum_i X(i,j) - d_2(j) \Big),\\
H_E(j,N) &= \frac{\partial^2 F_\beta(\lambda,\mu,x)}{\partial \mu(j)\, \partial x} = \beta \sum_i X(i,j), &
H_E(N,N) &= \frac{\partial^2 F_\beta(\lambda,\mu,x)}{\partial x^2} = \beta \sum_{i,j} X(i,j),
\end{aligned}$$
where the δ are Kronecker deltas, the indices i and i′ belong to $[1,N_1]$, the indices j and j′ belong to $[1,N_2]$, and we have defined $N = N_1 + N_2 + 1$.
Let $\mathbf{x} = (x_1, x_2, x_3)$ be an arbitrary vector of size N, where $x_1$ is of size $N_1$, $x_2$ is of size $N_2$, and $x_3$ is a scalar. The quadratic form $Q(\mathbf{x}) = \mathbf{x}^T H_E\, \mathbf{x}$ is equal to
$$\begin{aligned}
Q(\mathbf{x}) &= \sum_{i,i'} x_1(i) H_E(i,i') x_1(i') + \sum_{j,j'} x_2(j) H_E(j,j') x_2(j') + x_3 H_E(N,N) x_3\\
&\quad + 2 \sum_{i,j} x_1(i) H_E(i,j) x_2(j) + 2 \sum_i x_1(i) H_E(i,N) x_3 + 2 \sum_j x_2(j) H_E(j,N) x_3\\
&= \beta \sum_{i,j} \big(x_1(i) + x_2(j) + x_3\big)^2 X(i,j) - \beta \Big( \sum_i x_1(i)^2 d_1(i) + \sum_j x_2(j)^2 d_2(j) \Big).
\end{aligned}$$
As $X(i,j)$ is based on the function $h'$, which is negative, and $d_1(i)$ and $d_2(j)$ are based on the function $g'$, which is positive, the summands in the equation above are all negative for all $i \in [1,N_1]$ and $j \in [1,N_2]$; therefore, $Q(\mathbf{x}) \le 0$ for every vector $\mathbf{x}$. The Hessian $H_E$ is negative semidefinite.
To check for definiteness, let us note first that as $Q(\mathbf{x})$ is a sum of negative terms, it is 0 if and only if all the terms are equal to 0. As the function $h'(x)$ is strictly negative, this means that for all $(i,j)$,
$$\big(x_1(i) + x_2(j) + x_3\big)^2 = 0. \qquad (A3)$$
For the two other terms, we consider two cases:
(i)
$p_{min}(i) < p_{max}(i)$ for all i and $q_{min}(j) < q_{max}(j)$ for all j. Then $g'(x, p_{min}(i), p_{max}(i)) > 0$ and $g'(x, q_{min}(j), q_{max}(j)) > 0$. This leads to, for all $(i,j)$:
$$x_1(i)^2 = 0, \qquad x_2(j)^2 = 0. \qquad (A4)$$
Equations (A3) and (A4) are satisfied only when all $x_1(i)$ and $x_2(j)$ are zero, in which case Equation (A3) also forces $x_3 = 0$, namely $\mathbf{x} = 0$. Therefore, in this case, $H_E$ is negative definite, and the free energy $F_\beta(\lambda,\mu,x)$ is strictly concave.
(ii)
For all i, $p_{min}(i) = p_{max}(i)$, or for all j, $q_{min}(j) = q_{max}(j)$. Then either $d_1$ vanishes identically, or $d_2$ vanishes identically, or both. There are then non-zero solutions to the equation $Q(\mathbf{x}) = 0$, and the Hessian is only negative semidefinite.

Appendix C. Monotonicity of F β M F

The effective free energy $F_\beta(\lambda,\mu,x)$ defined in Equation (8) is a function of the cost matrix C and of the real unconstrained variables $\lambda(i)$, $\mu(j)$, and x. For the sake of simplicity, for any $(i,j) \in [1,N_1] \times [1,N_2]$, we define
$$y(i,j) = C(i,j) + \lambda(i) + \mu(j) + x, \qquad y^{MF}(i,j) = C(i,j) + \lambda^{MF}(i) + \mu^{MF}(j) + x^{MF}.$$
The effective free energy is then
$$F_\beta(\lambda,\mu,x) = -\frac{1}{\beta} \sum_i G\big(\beta\lambda(i), p_{min}(i), p_{max}(i)\big) - \frac{1}{\beta} \sum_j G\big(\beta\mu(j), q_{min}(j), q_{max}(j)\big) + \frac{1}{\beta} \sum_{i,j} H\big(\beta y(i,j)\big) - kx.$$
As written above, $F_\beta(\lambda,\mu,x)$ is a function of the independent variables β, $\lambda(i)$, $\mu(j)$, and x. However, under the saddle point approximation, the variables $\lambda(i)$, $\mu(j)$, and x are constrained by the conditions
$$\frac{\partial F_\beta(\lambda,\mu,x)}{\partial \lambda(i)} = \frac{\partial F_\beta(\lambda,\mu,x)}{\partial \mu(j)} = \frac{\partial F_\beta(\lambda,\mu,x)}{\partial x} = 0,$$
and the free energy under those constraints is written as $F_\beta^{MF}$. In the following, we use the notations $\frac{dF_\beta^{MF}}{d\beta}$ and $\frac{\partial F_\beta^{MF}}{\partial \beta}$ to differentiate between the total derivative and the partial derivative of $F_\beta^{MF}$ with respect to β, respectively. It can be shown easily that (see Appendix C of [15])
$$\frac{dF_\beta^{MF}}{d\beta} = \frac{\partial F_\beta\big(\lambda^{MF}, \mu^{MF}, x^{MF}\big)}{\partial \beta},$$
namely that the total derivative with respect to β is, in this specific case, equal to the corresponding partial derivative, which is easily computed to be
$$\frac{\partial F_\beta^{MF}}{\partial \beta} = \frac{1}{\beta^2} \sum_i \Big[ G\big(\beta\lambda^{MF}(i), p_{min}(i), p_{max}(i)\big) - \beta\lambda^{MF}(i)\, g\big(\beta\lambda^{MF}(i), p_{min}(i), p_{max}(i)\big) \Big] + \frac{1}{\beta^2} \sum_j \Big[ G\big(\beta\mu^{MF}(j), q_{min}(j), q_{max}(j)\big) - \beta\mu^{MF}(j)\, g\big(\beta\mu^{MF}(j), q_{min}(j), q_{max}(j)\big) \Big] - \frac{1}{\beta^2} \sum_{i,j} \Big[ H\big(\beta y^{MF}(i,j)\big) - \beta y^{MF}(i,j)\, h\big(\beta y^{MF}(i,j)\big) \Big]. \qquad (A5)$$
Let $t(x) = H(x) - x\,h(x)$. The function $t(x)$ is continuous, defined over all real values x, and bounded above by 0, i.e., $t(x) \le 0$ for all $x \in \mathbb{R}$. Similarly, let us define $l(x,a,b) = G(x,a,b) - x\,g(x,a,b)$. In Appendix H, we show that $l(x,a,b)$ is positive over $\mathbb{R}$. As
$$\frac{dF_\beta^{MF}}{d\beta} = \frac{1}{\beta^2} \sum_i l\big(\beta\lambda^{MF}(i), p_{min}(i), p_{max}(i)\big) + \frac{1}{\beta^2} \sum_j l\big(\beta\mu^{MF}(j), q_{min}(j), q_{max}(j)\big) - \frac{1}{\beta^2} \sum_{i,j} t\big(\beta y^{MF}(i,j)\big),$$
we conclude that
$$\frac{dF_\beta^{MF}}{d\beta} \ge 0,$$
namely that F β M F is a monotonically increasing function of β .

Appendix D. Monotonicity of U β M F

Let
$$U_\beta(\lambda,\mu,x) = \sum_{i,j} C(i,j)\, \bar{G}(i,j),$$
and let the corresponding mean field approximation of the internal energy at the saddle point be
$$U_\beta^{MF} = U_\beta\big(\lambda^{MF}, \mu^{MF}, x^{MF}\big).$$
Before computing $\frac{dU_\beta^{MF}}{d\beta}$, we prove the following property:
Proposition A1.
$$U_\beta^{MF} = F_\beta^{MF} + \beta\, \frac{dF_\beta^{MF}}{d\beta},$$
i.e., it extends the well-known relationship between the free energy and the average energy to their mean field counterparts.
Proof. 
The proof of this proposition is virtually identical to the proof of the same results for the assignment problem, found in Appendix C of [15]. □
Based on the chain rule,
$$\frac{dU_\beta^{MF}}{d\beta} = \frac{\partial U_\beta^{MF}}{\partial \beta} + \sum_i \frac{\partial U_\beta^{MF}}{\partial \lambda(i)} \frac{\partial \lambda(i)}{\partial \beta} + \sum_j \frac{\partial U_\beta^{MF}}{\partial \mu(j)} \frac{\partial \mu(j)}{\partial \beta} + \frac{\partial U_\beta^{MF}}{\partial x} \frac{\partial x}{\partial \beta}.$$
Using Proposition A1, we find that the partial derivatives of $U_\beta^{MF}$ with respect to λ, μ, and x are all zero, and therefore,
$$\frac{dU_\beta^{MF}}{d\beta} = \frac{\partial U_\beta^{MF}}{\partial \beta}.$$
Using again Proposition A1, we obtain
$$\frac{dU_\beta^{MF}}{d\beta} = \frac{dF_\beta^{MF}}{d\beta} + \frac{d}{d\beta}\left(\beta\, \frac{dF_\beta^{MF}}{d\beta}\right) = 2\, \frac{\partial F_\beta^{MF}}{\partial \beta} + \beta\, \frac{\partial}{\partial \beta} \frac{\partial F_\beta^{MF}}{\partial \beta}.$$
Using Equation (A5),
$$\beta^2\, \frac{dU_\beta^{MF}}{d\beta} = \sum_i \beta\lambda^{MF}(i)\, l'\big(\beta\lambda^{MF}(i), p_{min}(i), p_{max}(i)\big) + \sum_j \beta\mu^{MF}(j)\, l'\big(\beta\mu^{MF}(j), q_{min}(j), q_{max}(j)\big) - \sum_{i,j} \beta y^{MF}(i,j)\, t'\big(\beta y^{MF}(i,j)\big),$$
where $t(x)$ and $l(x,a,b)$ are the functions defined above, and $t'(x)$ and $l'(x,a,b)$ are their derivatives with respect to x. Let us define $r(x) = x\,t'(x)$ and $s(x,a,b) = x\,l'(x,a,b)$. Then,
$$\beta^2\, \frac{\partial U_\beta^{MF}}{\partial \beta} = \sum_i s\big(\beta\lambda^{MF}(i), p_{min}(i), p_{max}(i)\big) + \sum_j s\big(\beta\mu^{MF}(j), q_{min}(j), q_{max}(j)\big) - \sum_{i,j} r\big(\beta y^{MF}(i,j)\big).$$
Note that $r(x) = \frac{x^2 e^{-x}}{(1+e^{-x})^2}$. Therefore, $r(x)$ is positive, bounded below by 0. In Appendix H, we show that $s(x,a,b)$ is negative, bounded above by 0. Therefore,
$$\frac{dU_\beta^{MF}}{d\beta} = \frac{\partial U_\beta^{MF}}{\partial \beta} \le 0,$$
and the function U β M F is a monotonically decreasing function of β .

Appendix E. Proof of Theorem 2: Convergence of the Mean Field Free Energy and the Internal Energy to the Optimal Assignment Cost

For simplicity of notation, we define $F^{MF}(\infty) = \lim_{\beta \to +\infty} F_\beta^{MF}$ and $U^{MF}(\infty) = \lim_{\beta \to +\infty} U_\beta^{MF}$.

Appendix E.1. Defining Entropy

Recall that we have defined
$$y(i,j) = C(i,j) + \lambda(i) + \mu(j) + x, \qquad X(i,j) = h\big(\beta y(i,j)\big), \qquad d_1(i) = g\big(\beta\lambda(i), p_{min}(i), p_{max}(i)\big), \qquad d_2(j) = g\big(\beta\mu(j), q_{min}(j), q_{max}(j)\big).$$
The free energy is given by
$$F_\beta(\lambda,\mu,x) = -\frac{1}{\beta} \sum_i G\big(\beta\lambda(i), p_{min}(i), p_{max}(i)\big) - \frac{1}{\beta} \sum_j G\big(\beta\mu(j), q_{min}(j), q_{max}(j)\big) + \frac{1}{\beta} \sum_{i,j} H\big(\beta y(i,j)\big) - kx,$$
while the internal energy is
$$U_\beta(\lambda,\mu,x) = \sum_{i,j} C(i,j)\, X(i,j).$$
By adding and subsequently subtracting the internal energy in the equation defining the free energy, and then replacing $C(i,j)$ with $y(i,j) - \lambda(i) - \mu(j) - x$, we obtain, after some reorganization,
$$\begin{aligned}
F_\beta(\lambda,\mu,x) = U_\beta(\lambda,\mu,x) &- \frac{1}{\beta} \sum_i l\big(\beta\lambda(i), p_{min}(i), p_{max}(i)\big) - \frac{1}{\beta} \sum_j l\big(\beta\mu(j), q_{min}(j), q_{max}(j)\big) + \frac{1}{\beta} \sum_{i,j} t\big(\beta y(i,j)\big)\\
&+ x \Big( \sum_{i,j} X(i,j) - k \Big) + \sum_i \lambda(i) \Big( \sum_j X(i,j) - d_1(i) \Big) + \sum_j \mu(j) \Big( \sum_i X(i,j) - d_2(j) \Big),
\end{aligned}$$
where t ( x ) and l ( x , a , b ) are defined above. Let us define
$$S_\beta(\lambda,\mu,x) = \sum_i l\big(\beta\lambda(i), p_{min}(i), p_{max}(i)\big) + \sum_j l\big(\beta\mu(j), q_{min}(j), q_{max}(j)\big) - \sum_{i,j} t\big(\beta y(i,j)\big). \qquad (A7)$$
Then,
$$F_\beta(\lambda,\mu,x) = U_\beta(\lambda,\mu,x) - \frac{1}{\beta} S_\beta(\lambda,\mu,x) + x \Big( \sum_{i,j} X(i,j) - k \Big) + \sum_i \lambda(i) \Big( \sum_j X(i,j) - d_1(i) \Big) + \sum_j \mu(j) \Big( \sum_i X(i,j) - d_2(j) \Big). \qquad (A8)$$
The form of the free energy given in Equation (A8) has an intuitive physical interpretation. The first term is the original unbalanced assignment energy, the second is -T times an entropy term (defined in Equation (A7)), and the third, fourth, and fifth terms impose the constraints via Lagrange multipliers. At the saddle point, these constraints are satisfied, and the free energy has the form
$$F_\beta^{MF} = U_\beta^{MF} - \frac{1}{\beta} S_\beta^{MF}. \qquad (A9)$$
Let us now bound the entropy term. $t(x)$ is negative, bounded below by $-\ln(2)$. Similarly, in Appendix H, we show that $l(x,a,b)$ is positive, bounded above by $\ln(b-a+1)$. Let us define
$$A(N_1,N_2) = N_1 N_2 \ln(2) + \sum_{i=1}^{N_1} \ln\big(p_{max}(i) - p_{min}(i) + 1\big) + \sum_{j=1}^{N_2} \ln\big(q_{max}(j) - q_{min}(j) + 1\big). \qquad (A10)$$
Then, the entropy satisfies the following constraints:
$$0 \le S_\beta^{MF} \le A(N_1,N_2). \qquad (A11)$$

Appendix E.2. $F^{MF}(\infty) = U^{MF}(\infty)$

Proof. 
Using Equation (A9), after rearrangements, we obtain
$$U_\beta^{MF} - \frac{1}{\beta} A(N_1,N_2) \le F_\beta^{MF} \le U_\beta^{MF}.$$
Taking the limits when β + , we obtain
$$F^{MF}(\infty) = U^{MF}(\infty). \qquad (A12)$$

Appendix E.3. $U^* \le F^{MF}(\infty)$

Proof. 
Let $U_\beta^{MF}$ be the mean field internal energy at the inverse temperature β:
$$U_\beta^{MF} = \sum_{i,j} C(i,j)\, X_\beta^{MF}(i,j),$$
where $X_\beta^{MF}$ is the solution of the SPA system of equations. At a finite inverse temperature β, $X_\beta^{MF}$ is strictly non-integral. In addition, $X_\beta^{MF}$ satisfies constraints on its row sums and column sums that make it an element of $U_k(R_{min}^{max}, C_{min}^{max})$; see Appendix A for details. Based on Theorem A2, $X_\beta^{MF}$ can be written as a convex combination of matrices in $P_k(R_{min}^{max}, C_{min}^{max})$,
$$X_\beta^{MF} = \sum_{\pi \in P_k(R_{min}^{max}, C_{min}^{max})} a_\pi\, \pi,$$
with all $a_\pi \in [0,1]$ and $\sum_{\pi \in P_k(R_{min}^{max}, C_{min}^{max})} a_\pi = 1$. Therefore,
$$U_\beta^{MF} = \sum_{i,j} C(i,j)\, X_\beta^{MF}(i,j) = \sum_{\pi \in P_k(R_{min}^{max}, C_{min}^{max})} a_\pi \sum_{i,j} C(i,j)\, \pi(i,j). \qquad (A13)$$
As $U^*$ is the minimum matching cost over all possible matrices in $P_k(R_{min}^{max}, C_{min}^{max})$, we have
i , j C ( i , j ) π ( i , j ) U * ,
for all π P k ( R m i n m a x , C m i n m a x ) . Combining Equations (A13) and (A14), we obtain
U β M F π P k ( R m i n m a x , C m i n m a x ) a π U * π P k ( R m i n m a x , C m i n m a x ) a π U * ,
from which we conclude that at each β ,
U * U β M F .
Therefore U * U M F ( ) , and consequently U * F M F ( ) , based on Equation (A12). □

Appendix E.4. U* ≥ F^MF(∞)

Proof. 
Let us first recall the definition of the free energy,
$$ F_\beta^{\lambda,\mu,x} = -\frac{1}{\beta}\sum_i \ln\!\left[\frac{e^{\beta p_{min}(i)\lambda(i)} - e^{\beta (p_{max}(i)+1)\lambda(i)}}{1 - e^{\beta\lambda(i)}}\right] - \frac{1}{\beta}\sum_j \ln\!\left[\frac{e^{\beta q_{min}(j)\mu(j)} - e^{\beta (q_{max}(j)+1)\mu(j)}}{1 - e^{\beta\mu(j)}}\right] - \frac{1}{\beta}\sum_{i,j}\ln\!\left(1 + e^{-\beta y(i,j)}\right) - kx. $$
Note these two properties of limits:
$$ \lim_{\beta\to+\infty} \frac{\ln\!\left(1 + e^{a\beta}\right)}{\beta} = \begin{cases} 0 & \text{if } a \leq 0, \\ a & \text{if } a \geq 0, \end{cases} $$
and
$$ \lim_{\beta\to+\infty} \frac{1}{\beta}\,\ln\!\left[\frac{e^{xa\beta} - e^{x(b+1)\beta}}{1 - e^{x\beta}}\right] = \begin{cases} xb & \text{if } x \geq 0, \\ xa & \text{if } x \leq 0, \end{cases} $$
when a and b are non-negative integers. Therefore,
$$ \lim_{\beta\to+\infty} F_\beta^{\lambda,\mu,x} = -\sum_{i|\lambda(i)\geq 0} p_{max}(i)\lambda(i) - \sum_{i|\lambda(i)<0} p_{min}(i)\lambda(i) - \sum_{j|\mu(j)\geq 0} q_{max}(j)\mu(j) - \sum_{j|\mu(j)<0} q_{min}(j)\mu(j) + \sum_{(i,j)|y(i,j)\leq 0} y(i,j) - kx. $$
Let us now consider a matrix π in P_k(R_minmax, C_minmax) (see above). We can write
$$ \sum_{i,j} C(i,j)\,\pi(i,j) = \sum_{i,j} y(i,j)\,\pi(i,j) - \sum_i \lambda(i)\, RS(i) - \sum_j \mu(j)\, CS(j) - kx, $$
where RS and CS are the row sums and column sums of π, respectively. Since the entries of π are either 0 or 1, the first term on the right-hand side is bounded below by the sum of all the negative values of y:
$$ \sum_{i,j} y(i,j)\,\pi(i,j) \geq \sum_{(i,j)|y(i,j)\leq 0} y(i,j). $$
As π belongs to P_k(R_minmax, C_minmax), its row sums satisfy the constraints:
$$ p_{min}(i) \leq RS(i) \leq p_{max}(i). $$
We multiply this inequality by λ(i) and sum, separately for positive and negative λ(i). We find
$$ \sum_{i|\lambda(i)>0} p_{min}(i)\lambda(i) \leq \sum_{i|\lambda(i)>0} \lambda(i)\,RS(i) \leq \sum_{i|\lambda(i)>0} p_{max}(i)\lambda(i), \qquad \sum_{i|\lambda(i)<0} p_{max}(i)\lambda(i) \leq \sum_{i|\lambda(i)<0} \lambda(i)\,RS(i) \leq \sum_{i|\lambda(i)<0} p_{min}(i)\lambda(i). $$
Therefore,
$$ \sum_i \lambda(i)\,RS(i) \leq \sum_{i|\lambda(i)\geq 0} p_{max}(i)\lambda(i) + \sum_{i|\lambda(i)<0} p_{min}(i)\lambda(i). $$
Similarly, we can show that
$$ \sum_j \mu(j)\,CS(j) \leq \sum_{j|\mu(j)\geq 0} q_{max}(j)\mu(j) + \sum_{j|\mu(j)<0} q_{min}(j)\mu(j). $$
Combining Equations (A15)–(A17), (A19) and (A20), we obtain
$$ \sum_{i,j} C(i,j)\,\pi(i,j) \geq \lim_{\beta\to+\infty} F_\beta^{\lambda,\mu,x}. $$
Equation (A21) is valid for all matrices π in P_k(R_minmax, C_minmax); in particular, it holds for the optimal π* that solves the general assignment problem. Since U* = ∑_{i,j} C(i,j)π*(i,j), we have
$$ U^* \geq \lim_{\beta\to+\infty} F_\beta^{\lambda,\mu,x}. $$
As this inequality is true for all λ, μ, and x, it is true in particular for λ = λ^MF, μ = μ^MF, and x = x^MF, leading to
$$ U^* \geq \lim_{\beta\to+\infty} F_\beta^{MF} = F^{MF}(\infty). $$
We have shown that U* ≤ F^MF(∞) and F^MF(∞) ≤ U*; therefore, U* = F^MF(∞). The corresponding result for the internal energy, U* = U^MF(∞), follows directly from Equation (A12). □
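The closed-form limit derived above is easy to check numerically. The following sketch is our own illustration (a random instance with arbitrary fixed multipliers; all variable names are ours, not from the paper): it evaluates F_β^{λ,μ,x} with numerically stable log-sum-exp expressions and compares it with the limit expression:

```python
# Check that F_beta^{lambda,mu,x} approaches its closed-form beta -> infinity limit.
import numpy as np

rng = np.random.default_rng(1)
N1, N2, k = 4, 6, 6
C = rng.uniform(-1.0, 1.0, (N1, N2))
lam, mu, x = rng.normal(size=N1), rng.normal(size=N2), 0.3
p_min, p_max = np.zeros(N1, dtype=int), np.full(N1, 3)
q_min, q_max = np.ones(N2, dtype=int), np.ones(N2, dtype=int)

def G(u, a, b):
    # G(u,a,b) = ln sum_{n=a}^{b} e^{n*u}, evaluated as a stable log-sum-exp
    z = np.arange(a, b + 1) * u
    m = z.max()
    return m + np.log(np.exp(z - m).sum())

y = C + lam[:, None] + mu[None, :] + x

def F(beta):
    t1 = -sum(G(beta * lam[i], p_min[i], p_max[i]) for i in range(N1)) / beta
    t2 = -sum(G(beta * mu[j], q_min[j], q_max[j]) for j in range(N2)) / beta
    t3 = -np.logaddexp(0.0, -beta * y).sum() / beta  # -(1/beta) sum ln(1+e^{-beta*y})
    return t1 + t2 + t3 - k * x

F_inf = (-np.where(lam >= 0, p_max * lam, p_min * lam).sum()
         - np.where(mu >= 0, q_max * mu, q_min * mu).sum()
         + y[y <= 0].sum() - k * x)

for beta in (1.0, 10.0, 100.0, 1000.0):
    print(f"beta={beta:7.1f}   F={F(beta):+.6f}   limit={F_inf:+.6f}")
```

The printed values of F(β) should converge to the closed-form limit as β grows.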

Appendix F. Proof of Theorem 3: Bounds on the Internal Energy and Free Energy

Appendix F.1. Bounds on the Free Energy

Proposition A1 from Appendix D states that
$$ \beta\,\frac{dF_\beta^{MF}}{d\beta} = -F_\beta^{MF} + U_\beta^{MF}. $$
Using this equation and the relationship between free energy, energy, and entropy at the SPA (see Equation (A22)), we obtain
$$ \frac{dF_\beta^{MF}}{d\beta} = \frac{1}{\beta^2}\, S_\beta^{MF}. $$
From the bounds on the entropy (Equation (A11)),
$$ 0 \leq \frac{dF_\beta^{MF}}{d\beta} \leq \frac{A(N_1,N_2)}{\beta^2}. $$
Integrating over the inverse temperature from β to +∞,
$$ 0 \leq F^{MF}(\infty) - F_\beta^{MF} \leq \frac{A(N_1,N_2)}{\beta}. $$
Finally, as F^MF(∞) = U*,
$$ U^* - \frac{A(N_1,N_2)}{\beta} \leq F_\beta^{MF} \leq U^*. $$

Appendix F.2. Bounds on the Energy

As U_β^MF = F_β^MF + (1/β) S_β^MF, using the bounds on the free energy and on the entropy, we obtain
$$ U_\beta^{MF} \leq U^* + \frac{A(N_1,N_2)}{\beta}. $$
In addition, U_β^MF is monotonically decreasing, with limit U* as β → +∞, so that U* ≤ U_β^MF. Therefore,
$$ U^* \leq U_\beta^{MF} \leq U^* + \frac{A(N_1,N_2)}{\beta}. $$
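Theorem 3 can be verified numerically on a small instance. The sketch below is our own self-contained illustration of the balanced special case (N1 = N2 = N, p_min = p_max = q_min = q_max = 1, k = N, and x set to 0 since the cardinality constraint is then redundant), for which the SPA occupancies reduce to the Fermi-like form X(i,j) = h(βy(i,j)) with h(u) = 1/(1+e^u) and y(i,j) = C(i,j) + λ(i) + μ(j). The alternating-bisection solver is our own choice for this illustration, not the paper's implementation:

```python
# Numerical check of Theorem 3 on the balanced case (our own sketch).
import itertools
import numpy as np

def h(u):
    # Fermi function 1/(1+e^u), written with tanh for numerical stability
    return 0.5 * (1.0 - np.tanh(0.5 * u))

def solve_spa_balanced(C, beta, sweeps=200, lo=-1e3, hi=1e3):
    N = C.shape[0]
    lam, mu = np.zeros(N), np.zeros(N)
    for _ in range(sweeps):
        for i in range(N):
            # row sum of X is decreasing in lam[i]: bisect for row sum = 1
            a, b = lo, hi
            for _ in range(60):
                m = 0.5 * (a + b)
                if h(beta * (C[i, :] + m + mu)).sum() > 1.0:
                    a = m
                else:
                    b = m
            lam[i] = 0.5 * (a + b)
        for j in range(N):
            # column sum of X is decreasing in mu[j]: bisect for column sum = 1
            a, b = lo, hi
            for _ in range(60):
                m = 0.5 * (a + b)
                if h(beta * (C[:, j] + lam + m)).sum() > 1.0:
                    a = m
                else:
                    b = m
            mu[j] = 0.5 * (a + b)
    return h(beta * (C + lam[:, None] + mu[None, :]))

rng = np.random.default_rng(0)
N = 5
C = rng.uniform(0.0, 1.0, (N, N))
# brute-force optimum U* over all N! permutation matrices
U_star = min(sum(C[i, p[i]] for i in range(N))
             for p in itertools.permutations(range(N)))
A = N * N * np.log(2.0)  # A(N1,N2): the ln(.+1) terms vanish when p_min = p_max
for beta in (10.0, 50.0, 250.0):
    U_beta = (C * solve_spa_balanced(C, beta)).sum()
    print(f"beta={beta:5.0f}  U*={U_star:.4f}  U_beta={U_beta:.4f}  "
          f"bound={U_star + A / beta:.4f}")
```

At each β, the printed energy should satisfy the sandwich U* ≤ U_β^MF ≤ U* + N²ln(2)/β, with the gap closing as 1/β.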

Appendix G. Proof of Theorem 4: Bounds on the Transportation Matrix Ḡ_β

This proof is inspired by the proof of Theorem 6 in Appendix 2 of [35] and by Appendix F of [15].
We first recall that Ḡ_β belongs to U_k(R_minmax, C_minmax) (see above), with σ(Ḡ_β) = k. As such, it can be written as a linear combination of the matrices π ∈ P_k(R_minmax, C_minmax),
$$ \bar{G}_\beta = \sum_{\pi \in P_k(R_{minmax}, C_{minmax})} a_\pi\, \pi, $$
with all a_π ∈ [0,1] and ∑_π a_π = 1.
We assume that the general assignment problem considered has a unique solution. We want to prove that max_{i,j} |Ḡ_β(i,j) − G*(i,j)| ≤ A(N1,N2)/(βΔ), where G* is the optimal solution of the assignment problem, Δ = U2* − U* is the difference in total cost between the second-best solution and the optimal solution, and A(N1,N2) is defined in Equation (A10). We proceed by contradiction and assume that there exists a pair (i,j) such that
$$ \frac{A(N_1,N_2)}{\beta\Delta} < \left|\bar{G}_\beta(i,j) - G^*(i,j)\right|. $$
Let us denote B(i,j) = |Ḡ_β(i,j) − G*(i,j)|. As G* is a binary matrix, G*(i,j) = 0 or G*(i,j) = 1.
In the first case,
$$ B(i,j) = \bar{G}_\beta(i,j) = \sum_{\pi \in P_k(R_{minmax}, C_{minmax})} a_\pi\, \pi(i,j). $$
Since G* belongs to P_k(R_minmax, C_minmax), it is included in the decomposition of Ḡ_β, and therefore,
$$ B(i,j) = a_{G^*}\, G^*(i,j) + \sum_{\pi \neq G^*} a_\pi\, \pi(i,j) = \sum_{\pi \neq G^*} a_\pi\, \pi(i,j) \leq \sum_{\pi \neq G^*} a_\pi = 1 - a_{G^*}, $$
where the final equality follows from the fact that the sum of all the coefficients a_π equals 1.
In the second case, G*(i,j) = 1 and
$$ B(i,j) = 1 - \bar{G}_\beta(i,j) = 1 - \sum_{\pi \in P_k(R_{minmax}, C_{minmax})} a_\pi\, \pi(i,j). $$
Again, as G* is included in the decomposition of Ḡ_β,
$$ B(i,j) = 1 - a_{G^*}\, G^*(i,j) - \sum_{\pi \neq G^*} a_\pi\, \pi(i,j) = 1 - a_{G^*} - \sum_{\pi \neq G^*} a_\pi\, \pi(i,j) \leq 1 - a_{G^*}, $$
where the final inequality follows from the fact that ∑_{π≠G*} a_π π(i,j) is non-negative.
In conclusion, in both cases we have
$$ \frac{A(N_1,N_2)}{\beta\Delta} < 1 - a_{G^*}. $$
Now, let us look at the energy associated with Ḡ_β:
$$ U_\beta^{MF} = \sum_{i,j} C(i,j)\,\bar{G}_\beta(i,j) = \sum_{\pi} a_\pi \sum_{i,j} C(i,j)\,\pi(i,j) = a_{G^*} U^* + \sum_{\pi \neq G^*} a_\pi \sum_{i,j} C(i,j)\,\pi(i,j) \geq a_{G^*} U^* + \sum_{\pi \neq G^*} a_\pi\, U_2^* = a_{G^*} U^* + (1 - a_{G^*})\, U_2^* = U^* + (1 - a_{G^*})\,\Delta. $$
In Theorem 3, we have shown that
$$ U^* \leq U_\beta^{MF} \leq U^* + \frac{A(N_1,N_2)}{\beta}. $$
Therefore,
$$ U^* + (1 - a_{G^*})\,\Delta \leq U^* + \frac{A(N_1,N_2)}{\beta}, $$
i.e.,
$$ 1 - a_{G^*} \leq \frac{A(N_1,N_2)}{\beta\Delta}, $$
as Δ is strictly positive.
We have shown both that A(N1,N2)/(βΔ) < 1 − a_{G*} (Equation (A24)) and that 1 − a_{G*} ≤ A(N1,N2)/(βΔ) (Equation (A25)); i.e., we have reached a contradiction. Our hypothesis is therefore wrong, and max_{i,j} |Ḡ_β(i,j) − G*(i,j)| ≤ A(N1,N2)/(βΔ). □
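Theorem 4 yields a practical rounding criterion: as soon as A(N1,N2)/(βΔ) < 1/2, every entry of Ḡ_β is within 1/2 of the corresponding entry of G*, and rounding Ḡ_β to the nearest integer recovers the exact optimal assignment. The sketch below (our own illustration; in practice Δ is unknown a priori, so one substitutes a lower bound for it) computes the resulting threshold on β:

```python
# Smallest beta at which Theorem 4 guarantees exact recovery by rounding:
# A(N1,N2)/(beta*Delta) < 1/2  <=>  beta > 2*A(N1,N2)/Delta.
import numpy as np

def beta_for_exact_rounding(N1, N2, p_min, p_max, q_min, q_max, Delta):
    A = (N1 * N2 * np.log(2)
         + np.log(np.asarray(p_max) - np.asarray(p_min) + 1).sum()
         + np.log(np.asarray(q_max) - np.asarray(q_min) + 1).sum())
    return 2.0 * A / Delta

# Multi-task instance of Table 2: 5 agents (p_min=0, p_max=4), 8 tasks (q=1).
# All entries of C1 are multiples of 10, so if the optimum is unique, Delta >= 10.
beta = beta_for_exact_rounding(5, 8, [0] * 5, [4] * 5, [1] * 8, [1] * 8, Delta=10.0)
print(beta)  # approximately 7.2
```

For the C1 instance, a moderate inverse temperature of β ≈ 7.2 therefore already guarantees exact recovery by rounding.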

Appendix H. Properties of the General Function G(x,a,b) and Its Derivatives

Theorem A3.
Let G(x,a,b) be the function defined by
$$ G(x,a,b) = \ln\!\left[\frac{e^{ax} - e^{(b+1)x}}{1 - e^{x}}\right], $$
where a and b are two integers with 0 ≤ a < b. Let g(x,a,b) = ∂G/∂x(x,a,b), l(x,a,b) = G(x,a,b) − x g(x,a,b), and s(x,a,b) = x ∂l/∂x(x,a,b). We have the following properties:
(1) 
g′(x,a,b) > 0 for all x ∈ ℝ.
(2) 
0 ≤ l(x,a,b) ≤ ln(b − a + 1) for all x ∈ ℝ.
(3) 
s(x,a,b) ≤ 0 for all x ∈ ℝ.
Note that in Theorem A3, we consider a, b non-negative, with a < b. The result extends trivially to the case a = b, with G(x,a,a) = ax and therefore g(x,a,a) = a, g′(x,a,a) = 0, l(x,a,a) = 0, and s(x,a,a) = 0. In the following, we only consider a < b.

Appendix H.1. The Function G(x,a,b)

Note first that
$$ G(x,a,b) = \ln\!\left[\frac{e^{ax} - e^{(b+1)x}}{1 - e^{x}}\right] = \ln\!\left(\sum_{k=a}^{b} e^{kx}\right). $$
G(x,a,b) is therefore defined, continuous, and differentiable over ℝ. Some special values and limits:
$$ G(0,a,b) = \ln(b - a + 1), \qquad \lim_{x\to-\infty} G(x,a,b) = -\infty \;(\text{for } a \geq 1;\; G(x,0,b) \to 0), \qquad \lim_{x\to+\infty} G(x,a,b) = +\infty. $$
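The sum form makes these properties straightforward to verify numerically; a minimal sketch (our own code):

```python
# G(x,a,b) = ln sum_{k=a}^{b} e^{kx}: the sum form is finite at x = 0 even
# though the closed form is 0/0 there.
import numpy as np

def G(x, a, b):
    z = np.arange(a, b + 1) * x
    m = z.max()
    return m + np.log(np.exp(z - m).sum())  # stable log-sum-exp

a, b = 1, 4
print(G(0.0, a, b), np.log(b - a + 1))  # both equal ln(4)
print(G(10.0, a, b) / 10.0)             # ~ b = 4: G(x,a,b) ~ bx for large positive x
print(G(-10.0, a, b) / -10.0)           # ~ a = 1: G(x,a,b) ~ ax for large negative x
```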

Appendix H.2. g(x,a,b) and Its Derivative g′(x,a,b)

Note that
$$ g(x,a,b) = \frac{b-a+1}{e^{(b-a+1)x} - 1} + \frac{1}{1 - e^{x}} + b, \qquad g'(x,a,b) = -\frac{(b-a+1)^2\, e^{(b-a+1)x}}{\left(e^{(b-a+1)x} - 1\right)^2} + \frac{e^{x}}{\left(e^{x} - 1\right)^2}. $$
These expressions for g and its derivative g′ are defined when e^{(b−a+1)x} − 1 ≠ 0 and e^x − 1 ≠ 0, i.e., when x ≠ 0. They can be extended by continuity at 0:
$$ g(0,a,b) = \frac{a+b}{2}, \qquad g'(0,a,b) = \frac{(b-a+1)^2 - 1}{12}. $$
We have g′(0,a,b) > 0 as b − a + 1 > 1. In addition,
$$ \lim_{x\to-\infty} g(x,a,b) = a, \quad \lim_{x\to+\infty} g(x,a,b) = b, \quad \lim_{x\to-\infty} g'(x,a,b) = 0, \quad \lim_{x\to+\infty} g'(x,a,b) = 0. $$
Figure A1 shows a few examples of g and g′ for different values of a and b.
Figure A1. Examples of g(x,a,b) (A) and its derivative g′(x,a,b) (B) for a = 1, b = 1 (black), a = 1, b = 2 (red), and a = 1, b = 4 (blue).
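These closed forms can be checked directly. The sketch below (our own code) implements g away from x = 0 and evaluates g′ through the even, numerically convenient sinh form derived in the proof that follows:

```python
# g(x,a,b) and g'(x,a,b), with their continuity values at 0 and their limits.
import numpy as np

def g(x, a, b):
    m = b - a + 1
    if x == 0.0:
        return (a + b) / 2.0
    return m / np.expm1(m * x) + 1.0 / (1.0 - np.exp(x)) + b

def gprime(x, a, b):
    m = b - a + 1
    if x == 0.0:
        return (m * m - 1.0) / 12.0
    # even sinh form: (1/4) * (1/sinh^2(x/2) - m^2/sinh^2(m*x/2))
    return 0.25 * (1.0 / np.sinh(0.5 * x) ** 2 - m * m / np.sinh(0.5 * m * x) ** 2)

a, b = 1, 4
print(g(-30.0, a, b), g(30.0, a, b))  # limits: a = 1 and b = 4
print(gprime(0.0, a, b))              # ((b-a+1)^2 - 1)/12 = 1.25
print(all(gprime(x, a, b) > 0 for x in np.linspace(-5.0, 5.0, 100)))  # property 1
```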
We now prove property 1 of Theorem A3.
Proof. 
Let us define m = b − a + 1. As a < b, we have m > 1. Let x be a real number; then,
$$ g'(x,a,b) = -\frac{m^2 e^{mx}}{(e^{mx} - 1)^2} + \frac{e^{x}}{(e^{x} - 1)^2}, $$
which we can rewrite as
$$ g'(x,a,b) = -\frac{m^2}{4\sinh^2\!\left(\frac{mx}{2}\right)} + \frac{1}{4\sinh^2\!\left(\frac{x}{2}\right)} = \frac{1}{4}\,\frac{\sinh^2\!\left(\frac{mx}{2}\right) - m^2\sinh^2\!\left(\frac{x}{2}\right)}{\sinh^2\!\left(\frac{mx}{2}\right)\sinh^2\!\left(\frac{x}{2}\right)} = \frac{1}{4}\,\frac{\left(\sinh\!\left(\frac{mx}{2}\right) - m\sinh\!\left(\frac{x}{2}\right)\right)\left(\sinh\!\left(\frac{mx}{2}\right) + m\sinh\!\left(\frac{x}{2}\right)\right)}{\sinh^2\!\left(\frac{mx}{2}\right)\sinh^2\!\left(\frac{x}{2}\right)}. $$
From this formulation, it is clear that g′(x,a,b) is even. As g′(0,a,b) is strictly positive (see above), we only need to consider x > 0. Let us define y = x/2. For y > 0, the denominator of g′(x,a,b) and the term sinh(my) + m sinh(y) are strictly positive. The only remaining term is sinh(my) − m sinh(y), which we call f(y). We have
$$ f'(y) = m\cosh(my) - m\cosh(y) = 2m\,\sinh\!\left(\frac{(m+1)y}{2}\right)\sinh\!\left(\frac{(m-1)y}{2}\right). $$
As m > 1, sinh((m+1)y/2) and sinh((m−1)y/2) are strictly positive when y > 0: f is monotonically increasing for y ≥ 0. As f(0) = 0, we obtain f(y) > 0 for y > 0. Therefore g′(x,a,b) > 0 for all x > 0, and since g′ is an even function with g′(0,a,b) > 0, we find that g′(x,a,b) > 0 for all x ∈ ℝ. □

Appendix H.3. The Function l(x,a,b) = G(x,a,b) − x g(x,a,b)

Recall that
$$ l(x,a,b) = G(x,a,b) - x\, g(x,a,b). $$
We now prove property 2 of Theorem A3.
Proof. 
We first prove that l(x,a,b) is even. Writing G as a finite sum and substituting k → a + b − k,
$$ G(-x,a,b) = \ln\!\left(\sum_{k=a}^{b} e^{-kx}\right) = \ln\!\left(e^{-(a+b)x}\sum_{k=a}^{b} e^{kx}\right) = -bx - ax + G(x,a,b), $$
and, differentiating this identity with respect to x,
$$ g(-x,a,b) = a + b - g(x,a,b). $$
Therefore,
$$ l(-x,a,b) = G(-x,a,b) + x\, g(-x,a,b) = G(x,a,b) - x\, g(x,a,b) = l(x,a,b). $$
Let now x be a positive real number. Then,
$$ l'(x,a,b) = g(x,a,b) - g(x,a,b) - x\, g'(x,a,b) = -x\, g'(x,a,b). $$
We know that g′(x,a,b) is positive for all real numbers x (property 1 of Theorem A3); therefore, l′(x,a,b) is negative on ℝ⁺ and l(x,a,b) is monotonically decreasing on this domain. Since
$$ l(0,a,b) = \ln(b - a + 1), \qquad \lim_{x\to+\infty} l(x,a,b) = 0, $$
we have 0 ≤ l(x,a,b) ≤ ln(b − a + 1) for x ∈ ℝ⁺, and since l(x,a,b) is even, 0 ≤ l(x,a,b) ≤ ln(b − a + 1) for all x ∈ ℝ, which concludes the proof of property 2 of Theorem A3. □
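A quick numerical confirmation of property 2, together with a finite-difference check of property 3 (proven in the next subsection); again, this is our own sketch:

```python
# l(x,a,b) = G(x,a,b) - x*g(x,a,b): even, maximal at ln(b-a+1), decaying to 0.
import numpy as np

def G(x, a, b):
    z = np.arange(a, b + 1) * x
    m = z.max()
    return m + np.log(np.exp(z - m).sum())

def g(x, a, b):
    m = b - a + 1
    if x == 0.0:
        return (a + b) / 2.0
    return m / np.expm1(m * x) + 1.0 / (1.0 - np.exp(x)) + b

def l(x, a, b):
    return G(x, a, b) - x * g(x, a, b)

a, b = 1, 4
xs = np.linspace(-8.0, 8.0, 100)           # grid avoids x = 0 exactly
vals = np.array([l(x, a, b) for x in xs])
print(vals.min() >= 0.0, vals.max() <= np.log(b - a + 1) + 1e-12)  # property 2
print(l(0.0, a, b), np.log(b - a + 1))     # maximum, attained at x = 0
# property 3: s(x) = x*l'(x) <= 0, checked with a centered finite difference
eps = 1e-6
print(all(x * (l(x + eps, a, b) - l(x - eps, a, b)) / (2 * eps) <= 1e-9 for x in xs))
```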

Appendix H.4. The Function s(x,a,b) Is Negative

We now prove property 3 of Theorem A3.
Proof. 
Recall that
$$ l'(x,a,b) = -x\, g'(x,a,b). $$
Therefore,
$$ s(x,a,b) = x\, l'(x,a,b) = -x^2\, g'(x,a,b). $$
We know that g′(x,a,b) is positive for all real numbers x (property 1 of Theorem A3); therefore, s(x,a,b) is negative on ℝ. □

References

  1. Dell’Amico, M.; Toth, P. Algorithms and codes for dense assignment problems: The state of the art. Discret. Appl. Math. 2000, 100, 17–48.
  2. Pentico, D.W. Assignment problems: A golden anniversary survey. Eur. J. Oper. Res. 2007, 176, 774–793.
  3. Burkard, R.; Dell’Amico, M.; Martello, S. Assignment Problems; Society for Industrial and Applied Mathematics (SIAM): Philadelphia, PA, USA, 2009.
  4. Dantzig, G. Origins of the simplex method. In A History of Scientific Computing; Association for Computing Machinery: New York, NY, USA, 1990; pp. 141–151.
  5. Jacobi, C. De investigando ordine systematis aequationum differentialum vulgarium cujuscunque. J. Reine Angew. Math. 1890, 94, 292–320.
  6. Kuhn, H. The Hungarian method for the assignment problem. Nav. Res. Logist. 1955, 2, 83–97.
  7. Date, K.; Nagi, R. GPU-accelerated Hungarian algorithms for the Linear Assignment Problem. Parallel Comput. 2016, 57, 52–72.
  8. Lopes, P.A.; Yadav, S.S.; Ilic, A.; Patra, S.K. Fast block distributed CUDA implementation of the Hungarian algorithm. J. Parallel Distrib. Comput. 2019, 130, 50–62.
  9. Yadav, S.S.; Lopes, P.A.C.; Ilic, A.; Patra, S.K. Hungarian algorithm for subcarrier assignment problem using GPU and CUDA. Int. J. Commun. Syst. 2019, 32, e3884.
  10. Kumar, A. A modified method for solving the unbalanced assignment problems. Appl. Math. Comput. 2006, 176, 76–82.
  11. Yadaiah, V.; Haragopal, V. A New Approach of Solving Single Objective Unbalanced Assignment Problem. Am. J. Oper. Res. 2016, 6, 81–89.
  12. Costa, D.; Hertz, A. Ants can color graphs. J. Oper. Res. Soc. 1997, 48, 295–305.
  13. Wang, L.; He, Z.; Liu, C.; Chen, Q. Graph based twin cost matrices for unbalanced assignment problem with improved ant colony algorithm. Results Appl. Math. 2021, 12, 100207.
  14. Betts, N.; Vasko, F.J. Solving the unbalanced assignment problem: Simpler is better. Am. J. Oper. Res. 2016, 6–9, 296.
  15. Koehl, P.; Orland, H. Fast computation of exact solutions of generic and degenerate assignment problems. Phys. Rev. E 2021, 103, 042101.
  16. Koehl, P.; Delarue, M.; Orland, H. Physics approach to the variable-mass optimal-transport problem. Phys. Rev. E 2021, 103, 012113.
  17. Birkhoff, G. Tres observaciones sobre el algebra lineal. Univ. Nac. Tucuman. Ser. A 1946, 5, 147–154.
  18. Von Neumann, J. A certain zero-sum two-person game equivalent to the optimal assignment problem. Contrib. Theory Games 1953, 2, 5–12.
  19. Dell’Amico, M.; Martello, S. The k-cardinality assignment problem. Discret. Appl. Math. 1997, 76, 103–121.
  20. Dell’Amico, M.; Lodi, A.; Martello, S. Efficient algorithms and codes for k-cardinality assignment problems. Discret. Appl. Math. 2001, 110, 25–40.
  21. Volgenant, A. Solving the k-cardinality assignment problem by transformation. Eur. J. Oper. Res. 2004, 157, 322–331.
  22. Chandrasekaran, R.; Kabadi, S.; Murty, K. Some NP-complete problems in linear programming. Oper. Res. Lett. 1982, 1, 101–104.
  23. Greenberg, H.J. An analysis of degeneracy. Nav. Res. Logist. Q. 1986, 33, 635–655.
  24. Megiddo, N.; Chandrasekaran, R. On the ε-perturbation method for avoiding degeneracy. Oper. Res. Lett. 1989, 8, 305–308.
  25. Koehl, P.; Delarue, M.; Orland, H. Finite temperature optimal transport. Phys. Rev. E 2019, 100, 013310.
  26. Majumdar, J.; Bhunia, A.K. An alternative approach for unbalanced assignment problem via genetic algorithm. Appl. Math. Comput. 2012, 218, 6934–6941.
  27. Rabbani, Q.; Khan, A.; Quddoos, A. Modified Hungarian method for unbalanced assignment problem with multiple jobs. Appl. Math. Comput. 2019, 361, 493–498.
  28. Heist, N.; Hertling, S.; Ringler, D.; Paulheim, H. Knowledge Graphs on the Web—An Overview. arXiv 2020, arXiv:2003.00719.
  29. Veličković, P. Everything is connected: Graph neural networks. Curr. Opin. Struct. Biol. 2023, 79, 102538.
  30. Camacho, D.; Panizo-LLedot, A.; Bello-Orgaz, G.; Gonzalez-Pardo, A.; Cambria, E. The four dimensions of social network analysis: An overview of research methods, applications, and software tools. Inf. Fusion 2020, 63, 88–120.
  31. Ideker, T. Network genomics. Ernst Scher. Res. Found. Workshop 2007, 61, 89–115.
  32. Pierskalla, W.P. The multidimensional assignment problem. Oper. Res. 1968, 16, 422–431.
  33. Balas, E.; Saltzman, M.J. Facets of the three-index assignment polytope. Discret. Appl. Math. 1989, 23, 201–229.
  34. Spieksma, F.C. Multi index assignment problems: Complexity, approximation, applications. In Nonlinear Assignment Problems: Algorithms and Applications; Springer: Berlin/Heidelberg, Germany, 2000; pp. 1–12.
  35. Kosowsky, J.; Yuille, A. The invisible hand algorithm: Solving the assignment problem with statistical physics. Neural Netw. 1994, 7, 477–490.
  36. Koehl, P. Extreme points of general transportation polytopes. arXiv 2024, arXiv:2404.16791.
  37. Mendelsohn, N.S.; Dulmage, A.L. The convex hull of sub-permutation matrices. Proc. Am. Math. Soc. 1958, 9, 253–254.
  38. Brualdi, R.A.; Lee, G.M. On the truncated assignment polytope. Linear Algebra Its Appl. 1978, 19, 33–62.
Figure 1. Convergence of the internal energy U_β^MF (A) and free energy F_β^MF (B) as a function of β when solving the multi-task assignment problem with the cost matrix C1, such that each agent can perform up to 4 tasks, may be idle, and all 8 tasks are performed. On both panels, we show the bounds on the expected value for the optimal cost U* as shaded areas (see text for details).
Table 1. The cost matrix C1.

Agents | T1  | T2  | T3  | T4  | T5  | T6  | T7  | T8
A1     | 300 | 250 | 180 | 320 | 270 | 190 | 220 | 260
A2     | 290 | 310 | 190 | 180 | 210 | 200 | 300 | 190
A3     | 280 | 290 | 300 | 190 | 190 | 220 | 230 | 260
A4     | 290 | 300 | 190 | 240 | 250 | 190 | 180 | 210
A5     | 210 | 200 | 180 | 170 | 160 | 140 | 160 | 180
Table 2. Assignments for C1 with p_max = 4.

Agent      | p_min = 0      | p_min = 1      | Wang (a)
A1         | T3             | T3             | T3
A2         | T4, T8         | T8             | T8
A3         | (none)         | T4             | T4
A4         | T7             | T7             | T7
A5         | T1, T2, T5, T6 | T1, T2, T5, T6 | T1, T2, T5, T6
Total cost | 1440           | 1450           | 1450

(a) Assignment based on an ant colony algorithm [13].
Table 3. The cost matrix C2.

Agents | T1 | T2 | T3 | T4 | T5 | T6 | T7 | T8 | T9 | T10
A 1 1021496721321811
A 2 71293569165412
A 3 4861221921144513
A 4 219129321019251610
A 5 1012301512173012129
A 6 1573417716141795
Table 4. Solving the multi-task assignment problems defined by the cost matrix C1.

Method \ p (a)      | 2    | 3    | 4    | 5      | 6    | 7    | 8
Matching, p_min = 1 | 1520 | 1470 | 1450 | 1450   | 1450 | 1450 | 1450
Matching, p_min = 0 | 1520 | 1470 | 1440 | 1420   | 1410 | 1400 | 1400
Hungarian           | 1520 | 1470 | 1440 | 1420   | 1410 | 1400 | 1400
Majumdar GA (b)     | 1520 | 1470 | 1450 | NA (c) | NA   | NA   | NA
Wang AC (d)         | 1520 | 1470 | 1450 | NA     | NA   | NA   | NA

(a) Each agent is allowed to execute up to p tasks. Note that as there are 8 tasks and only 5 agents, the minimum value of p is 2 if all the tasks must be executed. (b) Genetic algorithm [26]. (c) Not available in either [26] or [13]. (d) Ant colony algorithm [13].
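The Hungarian row of Table 4 can be reproduced with a standard solver: allowing an agent to perform up to p tasks with no lower bound (p_min = 0) is equivalent to replicating that agent p times and solving the resulting rectangular linear assignment problem. A sketch using scipy.optimize.linear_sum_assignment (our own illustration, not the implementation used in the paper):

```python
# Reproduce the "Hungarian" row of Table 4 by agent replication.
import numpy as np
from scipy.optimize import linear_sum_assignment

C1 = np.array([
    [300, 250, 180, 320, 270, 190, 220, 260],
    [290, 310, 190, 180, 210, 200, 300, 190],
    [280, 290, 300, 190, 190, 220, 230, 260],
    [290, 300, 190, 240, 250, 190, 180, 210],
    [210, 200, 180, 170, 160, 140, 160, 180],
])

for p in range(2, 9):
    cost = np.repeat(C1, p, axis=0)           # p copies of each agent row
    rows, cols = linear_sum_assignment(cost)  # all 8 tasks get assigned
    print(p, cost[rows, cols].sum())
```

The printed totals should match the Hungarian row above: 1520, 1470, 1440, 1420, 1410, 1400, 1400.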
Table 5. Solving the multi-task assignment problems defined by the cost matrix C2.

Method \ p (a)      | 2  | 3  | 4  | 5      | 6      | 7  | 8
Matching, p_min = 1 | 66 | 65 | 65 | 65     | 65     | 65 | 65
Matching, p_min = 0 | 66 | 62 | 61 | 61     | 61     | 61 | 61
Hungarian           | 66 | 62 | 61 | 61     | 61     | 61 | 61
Majumdar GA (b)     | 66 | 67 | 66 | 74     | NA (c) | NA | NA
Wang AC (d)         | 66 | 65 | 65 | NA     | NA     | NA | NA

(a) Each agent is allowed to execute up to p tasks. Note that as there are 10 tasks and only 6 agents, the minimum value of p is 2 if all the tasks must be executed. (b) Genetic algorithm [26]. (c) Not available in either [26] or [13]. (d) Ant colony algorithm [13].