Article

Multiple Hungarian Method for k-Assignment Problem

Boštjan Gabrovšek, Tina Novak, Janez Povh, Darja Rupnik Poklukar and Janez Žerovnik
1 Faculty of Mechanical Engineering, University of Ljubljana, Aškerčeva 6, SI-1000 Ljubljana, Slovenia
2 Faculty of Mathematics and Physics, University of Ljubljana, Jadranska ulica 19, SI-1000 Ljubljana, Slovenia
3 Institute of Mathematics, Physics and Mechanics, University of Ljubljana, Jadranska 19, SI-1000 Ljubljana, Slovenia
* Author to whom correspondence should be addressed.
Mathematics 2020, 8(11), 2050; https://doi.org/10.3390/math8112050
Submission received: 28 September 2020 / Revised: 13 November 2020 / Accepted: 13 November 2020 / Published: 17 November 2020

Abstract
The $k$-assignment problem (or the $k$-matching problem) on $k$-partite graphs is an NP-hard problem for $k \ge 3$. In this paper we introduce five new heuristics. Two algorithms, $B_m$ and $C_m$, arise as natural improvements of Algorithm $A_m$ from (He et al., in: Graph Algorithms and Applications 2, World Scientific, 2004). The other three algorithms, $D_m$, $E_m$, and $F_m$, incorporate randomization. Algorithm $D_m$ can be considered a greedy version of $B_m$, whereas $E_m$ and $F_m$ are versions of a local search algorithm, specialized for the $k$-matching problem. The algorithms are implemented in Python and are run on three datasets. On the datasets available, all the algorithms clearly outperform Algorithm $A_m$ in terms of solution quality. On the first dataset with known optimal values, the average relative error ranges from 1.47% over the optimum (Algorithm $A_m$) to 0.08% over the optimum (Algorithm $E_m$). On the second dataset with known optimal values, the average relative error ranges from 4.41% over the optimum (Algorithm $A_m$) to 0.45% over the optimum (Algorithm $F_m$). Better quality of solutions demands higher computation times; thus, the new algorithms provide a good compromise between quality of solutions and computation time.

1. Introduction

1.1. Motivation

Suppose we have $k$ sets of vertices $V_1, V_2, \dots, V_k$ and we want to consider multi-associations between them. For example, in bioinformatics, $V_1$ can correspond to the set of known (relevant) diseases, $V_2$ to the set of known drugs, and $V_3$ to the set of known genes that are relevant for the observed species (e.g., for Homo sapiens). A multi-association in this case is a triple $(v_1, v_2, v_3) \in V_1 \times V_2 \times V_3$, which means that disease $v_1$, drug $v_2$, and gene $v_3$ are related. Such a triple may imply that the gene $v_3$ is activated in disease $v_1$ and is usually silenced by drug $v_2$; hence drug $v_2$ may be considered a cure for disease $v_1$. This is related to the very vibrant area of drug re-purposing and precision medicine, see e.g., [1,2,3]. We can represent the data as a complete 3-partite graph whose vertex set is $V_1 \cup V_2 \cup V_3$ and where the edges between vertices from different $V_i$ have weights equal to the strength of the association between their endvertices. Each triple $(v_1, v_2, v_3)$ is therefore a complete subgraph (3-clique, triangle) of such a graph, and its weight is the sum of the weights on its edges. If we want to find a decomposition of this graph into disjoint triangles with maximum (minimum) total weight, we obtain the 3-assignment problem (3-AP) [4,5,6,7]. The 3-AP can also serve as a model in production planning, when we try to assign, e.g., workers, machines, and tasks in such a way that each worker gets exactly one task at one machine and the total cost is minimal. Many more applications can be found in the literature, see e.g., [8,9] and the references therein.

1.2. Problem Formulation

Let $G = (V, E, w)$ be a complete weighted $k$-partite graph, where $V = V_1 \cup V_2 \cup \dots \cup V_k$ is the vertex set, $V_i$ are the vertices of the $i$-th partition with cardinality $|V_i| = n$, $E = \bigcup_{1 \le i < j \le k} \{\, uv \mid u \in V_i,\ v \in V_j \,\}$ is the edge set, and $w : E \to \mathbb{R}$ is the weight function, which may be given in terms of matrices $W^{ij}$ as $w(e) = W^{ij}_{uv}$ for $e = uv$, $u \in V_i$, $v \in V_j$. A $k$-clique is a subset $Q \subseteq V$ with cardinality $k$, such that the induced graph $G[Q]$ is isomorphic to the complete graph $K_k$. This means that a $k$-clique has exactly one vertex from each $V_i$. In the case when $G$ is a $k$-partite graph, a $k$-clique can also be called a $k$-association. The weight of a $k$-clique $Q$, $w(Q)$, is the sum of the edge weights of $G[Q]$:
$$w(Q) = \sum_{e \in E(G[Q])} w(e).$$
In a complete $k$-partite graph where each partition has cardinality $n$, we can always find $n$ pairwise disjoint $k$-cliques $\mathcal{Q} = \{Q_1, Q_2, \dots, Q_n\}$. We call such a set of cliques a $k$-assignment or $k$-matching, since this is a natural extension of 2-assignments or 2-matchings in bipartite graphs. Naturally, we define the weight of a $k$-assignment $\mathcal{Q}$ as
$$w(\mathcal{Q}) = \sum_{i=1}^{n} w(Q_i).$$
The $k$-assignment problem, or equivalently, the $k$-matching problem ($k$-AP), is the problem of finding a $k$-assignment of a given $k$-partite graph $G$ with minimum weight:
$$\min \{\, w(\mathcal{Q}) \mid \mathcal{Q} \ \text{is a } k\text{-assignment in } G \,\}. \qquad (1)$$
In the literature [4,5,6,10], this problem is also referred to as the multidimensional assignment problem (MAP). For the case $k = 3$ we can also trace the names 3-index assignment problem and 3-dimensional assignment problem in the literature. For $k = 2$, it is well known that the Hungarian algorithm solves the 2-assignment problem to optimality [11] in polynomial time. Kuhn used this name for the method because his invention of the algorithm is based on the work of two Hungarian mathematicians, D. König and E. Egerváry. We observe that researchers sometimes use the word matching if the weights on the graph edges are all equal to 1, while for the general case they use assignment.
In this paper, we will also consider the maximum version of (1) because we want to compare our heuristic algorithms with some algorithms from the literature correctly. To make a clear distinction, we use subscripts m and M in the names of heuristic algorithms to denote that we are solving (1) with minimum and maximum objective, respectively.
We conclude the subsection with a useful observation. For $k > 2$, every $k$-assignment $\mathcal{Q}$ implies a 2-assignment on $G[V_i \cup V_j]$, for all $i \ne j$. Thus $\mathcal{Q}$ gives rise to $\binom{k}{2}$ 2-assignments $M_{i,j}$, $1 \le i < j \le k$, between partitions $V_i$ and $V_j$. More generally, a $k$-assignment $\mathcal{Q}$ defines $\ell$-assignments on the subgraphs of $G$ induced on any $\ell$ of the partitions, for every $2 \le \ell \le k$.
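To make the definitions above concrete, the following minimal Python sketch evaluates the weight of a given $k$-assignment, as in (1). This is our illustration, not code from the paper: the layout storing each edge-weight matrix under the partition pair (i, j) and the helper name assignment_weight are assumptions.

```python
import numpy as np
from itertools import combinations

def assignment_weight(cliques, W):
    """Weight of a k-assignment: cliques[q][i] is the vertex of clique q in
    partition i; W[(i, j)] (i < j) is the n x n edge-weight matrix."""
    total = 0.0
    for clique in cliques:
        for i, j in combinations(range(len(clique)), 2):
            total += W[(i, j)][clique[i], clique[j]]  # edge inside the clique
    return total

# Tiny usage example: k = 3, n = 2, random integer weights.
rng = np.random.default_rng(0)
n, k = 2, 3
W = {(i, j): rng.integers(0, 10, size=(n, n))
     for i in range(k) for j in range(i + 1, k)}
Q = [(0, 1, 0), (1, 0, 1)]  # two disjoint 3-cliques, one vertex per partition
print(assignment_weight(Q, W))
```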

1.3. Literature Review

The problem has been extensively studied in the past. Here we briefly mention some of the relevant results and cite some previous work, without the intention of reviewing the literature completely. The problem called 3-dimensional matching (3DM) already appeared among the NP-hard problems in Karp's seminal paper [12]. This problem is related to the question of whether there exists a 3-assignment in a 3-partite graph whose partitions have the same cardinality but which is not necessarily a complete 3-partite graph.
According to [8], 3DM is a special case of the 3-assignment problem with maximum or minimum objective, which they call the axial 3-index assignment problem, hence Karp’s result [12] implies that both minimization and maximization versions of 3-assignment problems are NP-hard.
If $k = 3$ and if we consider only the weights on the triples $Q_i$, which must be 0 or 1, then there exists a $(\frac{2}{3} - \varepsilon)$-approximation algorithm [13]. If the weights on the triangles are arbitrary, there exists a $(\frac{1}{2} - \varepsilon)$-approximation algorithm [14].
For the minimization version of the problem, it is known [15] that there is no polynomial-time algorithm that achieves a constant performance ratio unless P = NP, and the result holds even in the case when the clique weights are of the form
$$w_{ijk} = d_{ij} + d_{ik} + d_{jk}$$
for all $i, j, k$ (i.e., for the problem defined here as (1)). However, when the triangle inequality holds, Crama and Spieksma show that there is a $\frac{4}{3}$-approximation algorithm [15].
Hence, it is justified to apply heuristics to find near-optimal solutions of 3-AP and, in general, of $k$-AP problem instances. In the literature, various heuristics designed to handle the $k$-assignment problem are reported, many focusing on 3-AP. We mention some of them to illustrate the variety of ideas elaborated. Aiex et al. [16] adopted the heuristic called Greedy Randomized Adaptive Search Procedure (GRASP). Huang and Lim [17] described a new local search procedure which solves the problem by simplifying it to the classical assignment problem; furthermore, they hybridized their heuristic with a genetic algorithm. An extension of the Huang and Lim heuristic [17] to the multidimensional assignment problem was done in [18,19], while [20] developed another new heuristic, named Approximate Muscle Guided Beam Search, using the local search of [17]. Karapetyan and Gutin [21] devised a memetic algorithm with an adjustable population size technique and with the same local search as Huang and Lim for the case of 3-AP; the size was calculated as a function of the runtime of the whole algorithm and the average runtime of the local search for the given instance. According to Valencia, Martinez, and Perez [22], this is the best known technique for solving the general case of $k$-AP. These authors performed an experimental evaluation of a basic genetic algorithm combined with a dimensionwise variation heuristic and showed its effectiveness in comparison to a more complex state-of-the-art memetic algorithm for $k$-AP. Some other approaches were recently proposed for solving 3-AP: a novel convex dual approach [23], the so-called Neighborly algorithm, a modified Greedy algorithm with some of the steps used by the Auction algorithm [24], and a probabilistic modification of the minimal element algorithm for solving the axial three-index assignment problem [25], where the idea was to extend the basic greedy-type algorithmic schemes by a transition to a probabilistic setup based on randomization of variables.
The $k$-assignment problem can also be formulated as an integer (0–1) linear programming problem in $n^k$ binary variables. This approach yields some interesting theoretical results, but has very limited practical impact due to the huge number of binary variables. More details can be found in [5,26]. Some recent results related to special variants of the $k$-assignment problem can also be found in [23,27]. An exact algorithm for 3-AP was proposed by Balas and Saltzman [4]. For more information on related work we refer to [8,9] and the references therein.
The idea of using the Hungarian algorithm for 2-AP as a tool to attack the $k$-AP first appeared in [28], where an algorithm named $A_m$ (see the description in the next section) for the approximate solution of the minimal $k$-assignment problem and an algorithm $A_M$ for the approximate solution of the maximal $k$-assignment problem (or the maximal $k$-clique problem) of a weighted complete $k$-partite graph are given. In [28] it is experimentally shown that the Cubic Greedy Algorithms are better than the Random Select Algorithm, and that Algorithms $A_m$ and $A_M$ are better than the Cubic Greedy Algorithms. For $k = 4$, it is also shown that Algorithms $A_m$ and $A_M$ are better than the 4-clique Greedy Algorithm.

1.4. Our Contribution

As problem (1) is NP-hard for $k \ge 3$, it is natural to ask whether one can design a useful heuristic based on the Hungarian algorithm, which efficiently solves the $k = 2$ case to optimality. The first work along this avenue is the implementation of Algorithm A from [28]. We continue this research with the main goal of understanding how much the ideas of the Hungarian algorithm can contribute to the performance of heuristics. To this aim, we design several heuristics that are based on the Hungarian algorithm. Our experimental results show that all the algorithms improve the quality of the solutions compared to A, and some of our algorithms are a substantial improvement over the basic Algorithm A (see Table 2). We also show that this type of heuristic can provide near-optimal solutions of very good quality for the $k$-assignment problem on the datasets with known optimal values (see Tables 3 and 4).
The experiments are run on two datasets from the literature, one of them providing optimal solutions for the instances. In addition, we run two batches of experiments including instances generated by a nonuniform distribution, in contrast to the other datasets, where the uniform distribution is used. Due to the intractability of the $k$-assignment problem, it is not a surprise that our experimental study shows limitations of particular heuristics. Therefore we also introduce two randomized versions of the heuristic Algorithm C that lead to local search type heuristics, which are observed to improve the quality of solutions over time and may converge to the optimal solution. In this way we complement the quick heuristics with alternatives that trade much longer computation times for the quality of solutions. Hence we show that heuristics for the $k$-assignment problem based on the Hungarian algorithm may be a very competitive choice.
The main contributions of this paper are the following:
  • We design five new heuristics for (1). More precisely, we propose algorithms named B, C, D, E, and F for finding near-optimal solutions to (1). (All the algorithms have both minimization and maximization versions, denoted $X_m$ and $X_M$, respectively, for Algorithm X.) The algorithms rely on heavy usage of the Hungarian algorithm [11] and arise as natural improvements of Algorithm $A_m$ from [28]. The last two algorithms, E and F, can be considered versions of iterative improvement type local search algorithms, as opposed to the steepest descent nature of, e.g., C.
  • We implement and test the algorithms on three datasets:
    (a) The set of random graphs generated as suggested in [28]. Here we also reproduce their results.
    (b) Two random datasets that we designed, using both a uniform and a nonuniform distribution, for the second batch of experiments.
    (c) The dataset of hard instances from [15], for which the optimal solutions are known.
  • Our experimental results show that (on all datasets used) all the algorithms improve the quality of the solutions compared to Algorithm A. We also examine the algorithms' performance in terms of quality of solutions versus computation time.
The rest of the paper is organized as follows. In Section 2 we introduce the notation that is used throughout the paper, and in Section 3 we outline the algorithms. In Section 4 we provide results of our experiments, where we evaluate the existing and the new algorithms on the three datasets. In Section 5 we summarize the results and discuss the methods of choice.

2. Preliminaries

From k-Assignment Problem to 2-Assignment Problem

The Hungarian method [11] is a classical, polynomial-time exact method for solving the 2-assignment problem; it is therefore natural to explore the idea of finding near-optimal feasible solutions of the $k$-assignment problem by solving a series of 2-assignment problems.
Below we present several new algorithms for (1), which rely strongly on repeated use of the Hungarian method on selected bipartite subgraphs and on contracting the original graphs along the assignments computed by this method. Note that all algorithms return feasible solutions, because we have a complete $k$-partite graph, where the Hungarian method always finds an optimal solution for any induced graph $G[V_i \cup V_j]$, which can easily be used to reconstruct a feasible solution of (1) via the operator $*$, see Section 3. Given an $\ell$-partite weighted graph $G$ and a 2-assignment $M_{i,j}$ between partitions $V_i$ and $V_j$ of $G$, we wish to contract the (weighted) edges which constitute $M_{i,j}$ in $G$. We therefore associate to $M_{i,j}$ the quotient graph $G / M_{i,j} = G / {\sim}$, where $u \sim v \Leftrightarrow uv \in M_{i,j}$. The construction is explained in more detail below. The new weight function of $G / M_{i,j}$ is obtained by summing the weights of contracted edges adjacent to $M_{i,j}$. Formally, the vertices of $G / M_{i,j}$ are the equivalence classes, consisting of either the singleton $\{v\}$, if the vertex $v$ is not adjacent to an edge in the assignment $M_{i,j}$, or $\{u, v\}$, if $uv$ is an edge in $M_{i,j}$. Loosely speaking, in $G / M_{i,j}$, pairs of elements of $V_i$ and $V_j$ are merged along $M_{i,j}$ to form a new partition $U$, while the other partitions $V_l$, $l \ne i, j$, are not changed (see Figure 1a,b). More formally,
$$U = \{\, \{u, v\} \mid uv \in M_{i,j} \,\} \quad \text{and} \quad V'_l = \{\, \{v\} \mid v \in V_l \,\}.$$
By construction, the elements of $U$ and $V'_l$ are equivalence classes; formally we have $\{u, v\} = [u] = [v]$ and $\{u\} = [u]$. However, with a slight abuse of notation, we will often not distinguish between $V_l$ and $V'_l$ (i.e., identify $V_l = V'_l$) and also consider the elements of $U$ as sets of two elements, the union of two singleton sets. Formally, $G / M_{i,j} = (V', E', w')$ is the graph with the vertex set
$$V' = U \cup \bigcup_{\substack{1 \le l \le k \\ l \ne i, j}} V'_l,$$
and the edge set
$$E' = F \cup \bigcup_{\substack{1 \le i' < j' \le k \\ \{i', j'\} \ne \{i, j\}}} E'_{i', j'},$$
where $F = \{\, \{u\}\{v, v'\} \mid u \in V \setminus (V_i \cup V_j),\ vv' \in M_{i,j} \,\}$ and $E'_{i', j'} = \{\, \{u\}\{v\} \mid uv \in E \setminus M_{i,j},\ u \in V_{i'},\ v \in V_{j'} \,\}$. Or, recalling the simplified notation above, simply $E'_{i', j'} = E_{i', j'}$.
The new weight function is defined by summing the weights on the edges adjacent to the identified vertices: if $u, v \in V \setminus (V_i \cup V_j)$, then we have
$$w'(\{u\}\{v\}) = w(uv),$$
and if $u \in V \setminus (V_i \cup V_j)$ and $vv' \in M_{i,j}$, we have
$$w'(\{u\}\{v, v'\}) = w(uv) + w(uv').$$
In other words, the weights on $E'_{i', j'} = E_{i', j'}$ are not changed, and the weights on the edge set $F$ are the sums of the weights of the pairs of edges that were contracted to obtain triangles $\{u, v, v'\}$.
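The contraction $G / M_{i,j}$ is mechanical to implement in the representation used in the sketch of Section 1.2. Below is our illustration of it (the helper names get_w and contract are ours, not from the paper): partition $j$ is removed, vertex $u$ of partition $i$ now stands for the class $\{u, \text{match}[u]\}$, and the weights of the two contracted edges are summed exactly as in the definition of $w'$.

```python
import numpy as np

def get_w(W, a, b):
    """Edge-weight matrix between partitions a and b, rows indexed by a."""
    return W[(a, b)] if a < b else W[(b, a)].T

def contract(W, k, i, j, match):
    """Quotient by the 2-assignment 'match' between partitions i and j (i < j):
    match[u] is the partner in V_j of vertex u in V_i. Partition j disappears,
    the class {u, match[u]} keeps index u, surviving partitions are relabeled
    0..k-2, and weights of contracted edge pairs are summed."""
    keep = [l for l in range(k) if l != j]
    relabel = {old: new for new, old in enumerate(keep)}
    Wq = {}
    for a_idx, a in enumerate(keep):
        for b in keep[a_idx + 1:]:
            na, nb = relabel[a], relabel[b]
            if a == i:     # merged class vs. partition b: sum both edges
                Wq[(na, nb)] = get_w(W, i, b) + get_w(W, j, b)[match, :]
            elif b == i:   # partition a vs. merged class
                Wq[(na, nb)] = get_w(W, a, i) + get_w(W, a, j)[:, match]
            else:          # untouched pair of partitions
                Wq[(na, nb)] = get_w(W, a, b).copy()
    return Wq
```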

3. Algorithms

In this section we first recall Algorithm $A_m$ for (1) from [28] and then enhance it using different well-known greedy and local search approaches. Let $A$ be a set and $f : A \to \mathbb{R}$ a function. Denote by
$$\arg\min_{x \in A} f(x) = \{\, x \mid \forall y \in A : f(y) \ge f(x) \,\},$$
the set of minimal elements of $A$ under the function $f$. Let $M_{i,j}$ be an arbitrary assignment between the $i$-th and $j$-th partitions, and let $M$ be a $(k-1)$-assignment for the $(k-1)$-partite graph $G / M_{i,j}$. We denote by $M * M_{i,j}$ the unique $k$-assignment for $G$ reconstructed from $M$ and $M_{i,j}$, i.e.,
$$M * M_{i,j} = \{\, uv \mid [u][v] \in M \,\} \cup \bigcup_{\substack{[u][vv'] \in M \\ vv' \in M_{i,j}}} \{uv, uv'\}.$$
For an example see Figure 1. In the case when $G$ is a bipartite graph, $G / M_{i,j}$ contains only one partition and thus does not allow any assignment; for completeness, we define $\emptyset * M_{i,j} = M_{i,j}$. Recall that in the case $k = 2$ an optimal 2-assignment can be found using the Hungarian algorithm [11]. The result of this algorithm on a bipartite graph $G$ will be denoted by Hungarian($G$).
Algorithm 1, $A_m$. The following algorithm for finding a near-optimal solution of the minimal $k$-assignment problem of a $k$-partite graph $G$, which we denote by $A_m$, was proposed in [28].
Algorithm 1 (A_m, from [28]).
1: function A_m(G)
2:     if k = 1 then
3:         return ∅
4:     else
5:         M_{1,2} = Hungarian(G[V_1 ∪ V_2])
6:         return A_m(G / M_{1,2}) * M_{1,2}
In short, the algorithm finds an optimal 2-assignment for $G[V_1 \cup V_2]$, takes the quotient by this assignment, and recursively repeats the process. The final result is a complete $k$-assignment $M$ reconstructed from the $(k-1)$ previously computed 2-assignments.
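As an illustration (not the authors' implementation), Algorithm $A_m$ then becomes a short loop: scipy.optimize.linear_sum_assignment plays the role of the Hungarian subroutine, and contract() is the helper from the sketch in Section 2; the helper name to_cliques is also ours.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def A_m(W, k):
    """Recursion of A_m: optimally match the first two partitions, contract,
    and repeat. matchings[i] pairs class q of the (merged) first partition
    with its partner in partition i + 2."""
    matchings, cur = [], W
    for step in range(k - 1):
        _, match = linear_sum_assignment(cur[(0, 1)])  # Hungarian on V1, V2
        matchings.append(match)
        if k - step > 2:                               # partitions still left
            cur = contract(cur, k - step, 0, 1, match)
    return matchings

def to_cliques(matchings, n):
    """Reconstruction along the * operator: clique q collects vertex q of V_1
    and its partners in all remaining partitions."""
    return [tuple([q] + [int(m[q]) for m in matchings]) for q in range(n)]
```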
Example 1.
Let $G$ be a complete 3-partite graph with partitions $V = V_1 \cup V_2 \cup V_3$, $V_1 = \{1, 2, 3\}$, $V_2 = \{4, 5, 6\}$, $V_3 = \{7, 8, 9\}$, and the following (weighted) adjacency matrix
(The $9 \times 9$ weight matrix $W$ is given as an image in the original article.)
where the entries generate the weight function $w : uv \mapsto W_{u,v}$; $u, v \in \{1, \dots, 9\}$.
In Algorithm $A_m$ we first compute the optimal assignment $M_{1,2}$ between partitions $V_1$ and $V_2$ (Figure 1a), then we compute the quotient graph $G / M_{1,2}$, presented in Figure 1b, and finally we compute the optimal assignment for this graph (Figure 1c). At the end, we reconstruct the 3-assignment $M$ and obtain a solution of weight 44 (Figure 1d).
Algorithm 2, $B_m$. Algorithm $A_m$ is greedy in the sense that it takes (lexicographically) the first pair of partitions and merges them according to the best assignment between them. If the order of partitions is changed, $A_m$ may provide a different result. The idea of $B_m$ is to consider all pairs for the possible first merge, and to take the best result. Note that $B_m$ is also greedy, as it always takes the minimal assignment between the two partitions that are merged. (In a sense, one may say that $B_m$ is somehow more greedy than $A_m$, because it looks a bit farther for the seemingly best option in sight.)
Algorithm 2 (B_m, improvement of A_m).
1: function B_m(G)
2:     if k = 1 then
3:         return ∅
4:     else
5:         {a, b} ∈ arg min_{1 ≤ i < j ≤ k} w(B_m(G / Hungarian(G[V_i ∪ V_j])))
6:         return B_m(G / M_{a,b}) * M_{a,b}
Algorithm $B_m$ searches through all possible pairs $(V_i, V_j)$ of partitions, recursively runs on the quotient graph $G / M$, where $M$ is the optimal 2-assignment on $V_i \cup V_j$, and among these pairs chooses the one with the best assignment of $G / M$. If several pairs attain the minimum, the algorithm chooses one of them at random. Clearly, Algorithm $B_m$ returns a $k$-assignment that is determined by the $(k-1)$ 2-assignments chosen in the recursive calls of $B_m$. A Python sketch is given below.
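This is our sketch of $B_m$, reusing get_w() and contract() from the Section 2 sketch; the selection here minimizes the weight of the reconstructed assignment, i.e., the sum of the chosen 2-assignment weights, and the trace of (i, j, match) triples is kept in quotient coordinates (reconstruction via the $*$ operator is omitted). The recursion makes $B_m$ far more expensive than $A_m$, in line with the call count given below.

```python
import numpy as np
from itertools import combinations
from scipy.optimize import linear_sum_assignment

def B_m(W, k):
    """Returns (weight, trace): try every pair of partitions, recurse on the
    corresponding quotient, keep the cheapest overall solution."""
    if k == 2:
        cost = W[(0, 1)]
        row, col = linear_sum_assignment(cost)
        return cost[row, col].sum(), [(0, 1, col)]
    best = None
    for i, j in combinations(range(k), 2):
        cost = get_w(W, i, j)
        _, match = linear_sum_assignment(cost)          # optimal M_{i,j}
        w_ij = cost[np.arange(len(match)), match].sum()
        sub_w, sub_trace = B_m(contract(W, k, i, j, match), k - 1)
        if best is None or w_ij + sub_w < best[0]:
            best = (w_ij + sub_w, [(i, j, match)] + sub_trace)
    return best
```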
Example 2.
We take the same graph as in Example 1. In contrast to $A_m$, Algorithm $B_m$ finds the optimal assignments $M_{1,2}$, $M_{1,3}$, and $M_{2,3}$ for the induced subgraphs $G[V_1 \cup V_2]$, $G[V_1 \cup V_3]$, and $G[V_2 \cup V_3]$, see Figure 2a–c, respectively. The algorithm continues its search recursively on the bipartite graphs $G / M_{1,2}$, $G / M_{1,3}$, and $G / M_{2,3}$. As Figure 2b shows, we obtain a $k$-assignment of weight 40.
Algorithm 3, $C_m$. Observe that Algorithm $B_m$ is much more time-consuming than Algorithm $A_m$, as it calls the Hungarian algorithm subroutine
$$\binom{k}{2} \binom{k-1}{2} \cdots \binom{3}{2} = \frac{k!\,(k-1)!}{2^{k-1}}$$
times, as opposed to only $k - 1$ calls by Algorithm $A_m$.
However, note that Algorithm $B_m$ is greedy, because it always takes the minimal 2-assignment. As the $k$-assignment problem (for $k > 2$) is intractable, a deterministic greedy algorithm cannot solve the problem to optimality unless P = NP. We therefore consider an iterative improvement of the solutions, taking a near-optimal solution (which may be the result of $B_m$) as the initial solution. The neighbors of the given solution are the results of the following procedure: fix one of the 2-assignments, say $M$, and run Algorithm $B_m$ on $G / M$. After repeating the process by fixing each of the 2-assignments in turn, we get a set of new solutions. If at least one of them is an improvement over the previously known best solution, we continue the improvement process. The process stops when there is no better solution among the set of new solutions.
The third algorithm, denoted Algorithm $C_m$, can be considered a steepest descent algorithm on the set of all $k$-assignments, where the next solution is chosen as a minimal solution in a suitably defined neighborhood.
Algorithm 3 (C_m, steepest descent based on B_m).
1: function C_m(G)
2:     M = B_m(G)
3:     repeat
4:         M_previous = M
5:         M_{a,b} ∈ arg min_{M_{i,j} ∈ M} w(B_m(G / M_{i,j}))
6:         M = B_m(G / M_{a,b}) * M_{a,b}
7:     until w(M_previous) = w(M)
8:     return M
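For $k = 3$, the steepest-descent step of $C_m$ can be written out in full. The sketch below is ours (all helper names are assumptions): reoptimize() fixes the 2-assignment induced between partitions i and j, contracts it, and solves the remaining bipartite problem exactly; C3_m() repeats the best such move while it strictly improves.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def ew(W, a, u, b, v):
    """Weight of the edge between vertex u of partition a and vertex v of b."""
    return W[(a, b)][u, v] if a < b else W[(b, a)][v, u]

def weight3(W, cliques):
    """Weight of a 3-assignment given as a list of triples (v1, v2, v3)."""
    return sum(ew(W, 0, t[0], 1, t[1]) + ew(W, 0, t[0], 2, t[2])
               + ew(W, 1, t[1], 2, t[2]) for t in cliques)

def reoptimize(W, cliques, i, j):
    """Fix the induced 2-assignment between partitions i and j, contract it,
    and match the merged pairs with the free partition optimally."""
    n, l = len(cliques), 3 - i - j            # l is the free partition
    fixed = sum(ew(W, i, t[i], j, t[j]) for t in cliques)
    cost = np.array([[ew(W, i, t[i], l, x) + ew(W, j, t[j], l, x)
                      for x in range(n)] for t in cliques])
    _, col = linear_sum_assignment(cost)
    new = []
    for q, t in enumerate(cliques):
        c = [0, 0, 0]
        c[i], c[j], c[l] = t[i], t[j], int(col[q])
        new.append(tuple(c))
    return fixed + cost[np.arange(n), col].sum(), new

def C3_m(W, cliques):
    """Steepest descent: move to the best neighbor while it strictly improves."""
    cur = (weight3(W, cliques), cliques)
    while True:
        best = min((reoptimize(W, cur[1], i, j)
                    for i, j in [(0, 1), (0, 2), (1, 2)]), key=lambda r: r[0])
        if best[0] >= cur[0]:
            return cur
        cur = best
```

In experiments, the starting 3-assignment would come from $B_m$; for a quick test, any feasible start such as [(q, q, q) for q in range(n)] can be used.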
Example 3.
Taking the same instance as in Examples 1 and 2, Algorithm $C_m$ starts by finding a 3-assignment $M = \{M_{1,2}, M_{1,3}, M_{2,3}\}$ using $B_m$ and continues by recursively searching the graphs $G / M_{1,2}$ and $G / M_{2,3}$ (we can skip $G / M_{1,3}$, since the initial 3-assignment was already obtained from $G / M_{1,3}$). As Figure 3 shows, the best solution, of weight 37, is obtained by contracting $M_{2,3}$. If we continue the iteration, we see that the solution has stabilized and no further improvements can be made.
Algorithm 4, $D_m$. Algorithms $B_m$ and $C_m$ rely heavily on the Hungarian method and are very time-consuming in comparison to $A_m$. Therefore we define another, faster greedy algorithm based on the Hungarian method. We denote by $D_m$ the greedy algorithm that takes the minimal 2-assignment $M_{i,j}$ in the $k$-partite graph $G$, and continues on the $(k-1)$-partite graph $G / M_{i,j}$ until only one partition is left.
Algorithm 4 (D_m, greedy iterative).
1: function D_m(G)
2:     if k = 1 then
3:         return ∅
4:     else
5:         {a, b} ∈ arg min_{1 ≤ i < j ≤ k} w(Hungarian(G[V_i ∪ V_j]))
6:         return D_m(G / M_{a,b}) * M_{a,b}
Note that Algorithm $D_m$ has the shortest average running time among Algorithms B, C, and D. On the other hand, it is at the same time the most short-sighted greedy algorithm among them, since its choice is based on the quality of the 2-assignment at hand, $w(M)$, and not on the quality of $w(G / M)$.
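A sketch of $D_m$ (ours, reusing get_w() and contract() from the Section 2 sketch): at each step it takes the cheapest optimal 2-assignment over all pairs and contracts it, without evaluating the quotient any further.

```python
import numpy as np
from itertools import combinations
from scipy.optimize import linear_sum_assignment

def D_m(W, k):
    """Returns (weight, trace); trace holds (i, j, match) in quotient coords."""
    total, trace = 0.0, []
    while k >= 2:
        best = None
        for i, j in combinations(range(k), 2):
            cost = get_w(W, i, j)
            _, col = linear_sum_assignment(cost)       # optimal 2-assignment
            w = cost[np.arange(len(col)), col].sum()
            if best is None or w < best[0]:            # cheapest pair so far
                best = (w, i, j, col)
        w, i, j, col = best
        total += w
        trace.append((i, j, col))
        if k > 2:
            W = contract(W, k, i, j, col)
        k -= 1
    return total, trace
```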
Algorithm 5, $E_m$. The algorithms considered above were in principle deterministic: except for breaking ties randomly when there is more than one minimal choice, all the steps are precisely defined. In the final experiment, we are interested in the effect of randomization.
The next algorithm, $E_m$, is a randomized version of $C_m$. The main idea is to loop through all 2-assignments in random order and accept the first assignment $M_{i,j}$ that yields a better solution, instead of searching for the minimal 2-assignment $M_{i,j}$. Thus, Algorithm $E_m$ is obtained by changing the main loop of $C_m$. As Algorithm $E_m$ accepts the first better solution in the neighborhood of the current solution, it is a kind of iterative improvement algorithm, as opposed to $C_m$, which is a steepest descent type local search algorithm, because the latter always looks for the best solution in the neighborhood.
Algorithm 5 (E_m).
1: function E_m(G)
2:     M = B_m(G)
3:     repeat
4:         min_weight = w(M)
5:         for (i, j) ∈ Shuffle({(i, j) | 1 ≤ i < j ≤ k}) do
6:             M = min{M, B_m(G / M_{i,j}) * M_{i,j}}
7:             if w(M) < min_weight then
8:                 break
9:     until min_weight = w(M)
10:    return M
We denote by $E_m(n)$ the algorithm that returns the best solution of $E_m$ out of $n$ trials (restarts).
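A sketch of $E_m$ for $k = 3$ (ours), reusing weight3() and reoptimize() from the $C_m$ sketch: the pairs are shuffled and the first improving neighbor is accepted immediately; $E_m(n)$ simply keeps the best of $n$ randomized runs.

```python
import random

def E3_m(W, cliques):
    """First-improvement local search (E_m for k = 3)."""
    cur_w = weight3(W, cliques)
    while True:
        pairs = [(0, 1), (0, 2), (1, 2)]
        random.shuffle(pairs)                   # randomized visiting order
        for i, j in pairs:
            w, new = reoptimize(W, cliques, i, j)
            if w < cur_w:                       # first improvement: accept
                cur_w, cliques = w, new
                break
        else:                                   # no improving neighbor left
            return cur_w, cliques

def E3_m_best_of(W, cliques, n=10):
    """E_m(n): best of n randomized runs from the same starting solution."""
    return min((E3_m(W, cliques) for _ in range(n)), key=lambda r: r[0])
```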
Algorithm 6, $F_m$. Algorithm $E_m$ stops when there is no better solution, even if there are solutions of the same quality (weight) in the neighbourhood of the current solution. These neighboring solutions might later lead to an improvement; therefore, we introduce another variant, called $F_m(n)$, which in such a case randomly chooses one of the solutions of equal weight and continues, but stops after at most $n$ steps, since it does not necessarily terminate (e.g., it can alternate between two equally good solutions).
Algorithm 6 (F_m, multistart local search).
1: function F_m(G, n)
2:     M = B_m(G)
3:     last_assignment = none
4:     for counter = 1, …, n do
5:         assignments = arg min_{M_{i,j} ∈ M} w(B_m(G / M_{i,j}) * M_{i,j})
6:         if last_assignment ∈ assignments then
7:             remove last_assignment from assignments
8:         if assignments = ∅ then
9:             break
10:        M_{a,b} = Choose_Random(assignments)
11:        last_assignment = M_{a,b}
12:        M = B_m(G / M_{a,b}) * M_{a,b}
13:    return M
All of the above algorithms can easily be adapted to solve maximization assignment problems, i.e., to solve (1) with a maximum objective. We denote the maximization variants of Algorithms A, B, C, D, E, and F by $A_M$, $B_M$, $C_M$, $D_M$, $E_M$, and $F_M$, respectively.
Remark 1.
Clearly, Algorithms C, D, E, and F always return a feasible assignment, because any solution is obtained by a recursive call of B. However, many calls of B, and thus many runs of the Hungarian algorithm, are expensive in terms of computation time. Therefore, it is an interesting question whether the present local search heuristics may be sped up by considering other neighborhoods, for example by applying the idea of variable neighborhood search [29].

4. Numerical Results

In this section, we present numerical evaluations of the algorithms introduced in Section 3. We compare them with the algorithms from [28] and with each other. In particular, we
  • reproduce the results on random graphs as given in [28] and compare them with the results of our Algorithms $B_m$ to $F_m$ and their maximization variants $B_M$ to $F_M$,
  • evaluate the performance of the algorithms against $A_M$ as the number of vertices increases,
  • test our algorithms on the instances provided in [15].

4.1. Datasets

Numerical evaluations are done using three sets of random complete $k$-partite graphs. The first set was constructed according to [28], as follows: it consists of two sets of 1000 random complete $k$-partite graphs with $k = 3, 4$ and $n = 30, 100$, respectively. The weights on the edges were selected randomly from a given set $S$ with probability density function $p(x) = \frac{1}{|S|}$, $x \in S$, where $S = \{0, 1, \dots, 9\}$ if $k = 3$, and $S = \{1, \dots, 100\}$ if $k = 4$.
The second dataset is our contribution. It has been designed to compare how our algorithms scale with increasing size of the instances. It consists of two subsets, each consisting of instances with $k = 3$ and $k = 4$. The first subset was generated as follows: for each $k \in \{3, 4\}$ we range the number of vertices $n$ in each partition from 2 to 100, and the weights on the edges connecting vertices from different partitions are chosen randomly according to the discrete uniform distribution on the set $S = \{0, \dots, n - 1\}$. The second subset was obtained similarly; we only changed the distribution of the edge weights. The edges between different partitions are assigned random weights chosen according to the discrete uniform distribution on the set $S = \{2^0, 2^1, \dots, 2^{10}\}$. We expect these random instances to be more difficult, because the very important edges are sufficiently rare. For each pair $(k, n)$ we generated 1000 random instances.
The third set is the same as in [15]. For this set, the optimum value of 3-AP is known. We retrieved it from [30]. This dataset includes 18 instances of complete 3-partite graphs:
  • 6 graphs with 33 vertices and 6 graphs with 66 vertices in each of the 3 partitions, where the weights of edges between different partitions are random integers which should, according to the description given by the authors [30], range from 0 to 99. However, we point out that some of the weights in these instances are larger than 100.
  • 3 graphs with 33 vertices and 3 graphs with 66 vertices in each partition, where the weights take only values 1 or 2. We call these graphs binary graphs.
At the beginning of the web page [30] it is explained how the numerical results from [15] relate to these instances.

4.2. First Experiment–Dataset from He et al. (2004)

In the first experiment we compare the algorithms used in [28] (namely, the Random, Greedy, and $A_M$ algorithms) with our Algorithms $B_M$, $C_M$, $D_M$, $E_M$, $E_M$(10), and $F_M$(100) on the first set of random $k$-partite graphs, which we generated as described in [28], see Section 4.1. We run these algorithms on each group of 1000 graph instances and report the average values in Table 1 and Table 2.
In order to compare the algorithms with $A_m$ (resp. $A_M$), we define the relative gap with respect to the value obtained by $A_m$ (resp. $A_M$) by
$$\Delta_{A_m} = 100 \cdot \frac{\mathrm{val} - \mathrm{val}_{A_m}}{\mathrm{val}_{A_m}}, \qquad \text{resp.} \qquad \Delta_{A_M} = 100 \cdot \frac{\mathrm{val} - \mathrm{val}_{A_M}}{\mathrm{val}_{A_M}}.$$
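In code, the gap is a one-liner (our helper, shown only to fix the convention used in the tables):

```python
def relative_gap(val, baseline):
    """100 * (val - baseline) / baseline, with baseline = val of A_m (or A_M)."""
    return 100.0 * (val - baseline) / baseline
```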

4.3. Second Experiment on Random Instances

In this subsection we compare Algorithms $B_M$, $C_M$, $D_M$, $E_M$, $E_M$(10), and $F_M$(100) on the second dataset, introduced in Section 4.1. We run all of these algorithms on each of the two subsets, consisting of 1000 instances for each pair $(k, n) \in \{3, 4\} \times \{2, 3, 4, \dots, 100\}$. For each pair $(k, n)$ and each algorithm, we compute the average value of the solutions given by the algorithm over the corresponding 1000 instances. Then we compute the quotient of the average values for $B_M$ and for $A_M$ and denote it by $B_M / A_M$. Similarly, we compute the quotient $C_M / A_M$, and so on. Figure 4, Figure 5, Figure 6 and Figure 7 contain plots and interpretations of these quotients.
The results on the first subset (with uniform distribution of weights) are depicted in Figure 4 and Figure 5. They show that Algorithm $E_M$(10) clearly finds the best solutions. Algorithms $C_M$, $E_M$, and $F_M$(100) perform similarly, and clearly outperform $B_M$ and $D_M$. Note that, taking into account time complexity and considering $k = 3$, the clear winners among the faster algorithms ($A_M$, $B_M$, $C_M$, $D_M$, and $E_M$) are $C_M$ and $E_M$, and among the more time-consuming $E_M$(10) and $F_M$(100), the winner is $E_M$(10). Note that $E_M(n)$ may potentially find even better solutions with larger $n$ (and consequently may need more time). The differences are much less obvious for $k = 4$ (see Figure 5). With larger $n$, the ratios seem to stabilize at certain constants.
The results on the first dataset (Table 1) and on the first subset of the second dataset (Figure 4 and Figure 5) suggest that there is no significant difference in the quality of solutions among Algorithms C, E, and F. However, the results on the second subset (the set with a special distribution of weights), in particular for $k = 3$, show that Algorithms C, E, and F substantially outperform B and D (see Figure 6), and the differences of the ratios tend to grow with larger $n$. This allows us to conclude that Algorithms $C_M$, $E_M(n)$, and $F_M(n)$ are significantly better than $B_M$ and $D_M$ (at least on most of our instances). For $k = 4$ (see Figure 7), the differences are small again.
Remark 2.
When experimentally testing the performance of heuristics, it is well known that random instances are often among the simplest, and many heuristics perform remarkably well on such datasets. It is often difficult to generate instances that are hard or, more precisely, that are hard for a specific heuristic algorithm. The dataset generated using a nonuniform distribution of weights is obviously harder for Algorithms B and D. It may be of some interest to see which distributions of weights reveal further significant differences in performance among the algorithms of interest here.

4.4. Third Experiment–Dataset Crama and Spieksma (1992)

In this subsection, we compare Algorithms $A_m$, $B_m$, $C_m$, $D_m$, $E_m$, $E_m$(10), and $F_m$(100) with the optimal solution of 3-AP on the problems from the third dataset, which were taken from [30], see also Section 4.1.
For these instances, we know the optimum value of 3-AP (denoted by OPT), so we can report the relative gap, defined by
$$\delta = 100 \cdot \frac{\mathrm{val} - \mathrm{OPT}}{\mathrm{OPT}}.$$
The results are given in Table 3 for non-binary problems and in Table 4 for binary problems.
For non-binary graphs, with results presented in Table 3, Algorithms $C_m$, $E_m$, $E_m$(10), and $F_m$(100) have, as expected, the best performance, and on average differ from the optimal solution by 0.1% or less (see the last row in Table 3). In some cases, they are also able to find the optimal solution.
For binary graphs, with results presented in Table 4, we can observe that the relative performances are, due to the low weight sum, worse than those for non-binary graphs. As the problems are binary, a solution that differs from the optimal one in a single element may have, due to the small total weight of the assignments, a considerably large relative error. As in the case of non-binary graphs, $C_m$, $E_m$, $E_m$(10), and $F_m$(100) outperform $A_m$, $B_m$, and $D_m$. Algorithm $F_m$(100) finds the optimal solution in most cases (see the last column), and Algorithms $C_m$ and $E_m$ find the optimal solution in some cases.
We point out that these algorithms are fast. Our implementation, which could be further optimised, takes a fraction of a second (on a 3.0 GHz PC) on each of these instances. Relative computation times (relative to Algorithm A) and average relative errors (compared to the known optimal solutions) are evident from Figure 8.

5. Summary and Conclusions

We have introduced Algorithms A, B, C, D, E, and F to approximately solve (1). The algorithms are all based on extensive use of the Hungarian algorithm and thus arise as natural improvements of Algorithm A from [28]. Algorithms A, B, C, and D are in principle deterministic, whereas Algorithms E and F incorporate randomization. We implemented the algorithms in Python and evaluated them on three benchmark datasets. Numerical tests show that the new algorithms, in both the minimization and maximization variants, outperform A in terms of solution quality on all of the chosen datasets. Summing up, our study shows that multiple use of the classic Hungarian method can provide very tight solutions for (1), in some cases even an optimal solution.
Another important issue regarding the algorithms' performance is computation time. For smaller instances, E has relatively good speed and on average misses the optimal solution by merely 0.1%; thus, we propose it as our method of choice. Among the deterministic algorithms, our study suggests using Algorithm C. However, we wish to note that when we consider large instances of (1), both in the number of partitions and in the size of each partition, we must be very careful about how often we actually run the Hungarian method, because many repetitions of the Hungarian method substantially increase computation time. The main goal of the reported research was to explore the potential of the Hungarian algorithm for solving the $k$-assignment problem. We have designed several heuristics based on the Hungarian method that have shown themselves to be competitive. While, on the one hand, some of our algorithms provide very good (near-optimal or even optimal) results in a short time, we also designed two heuristics based on local search [31,32,33]. Local search type heuristics improve the quality of solutions over time and may converge to the optimal solution. This type of heuristic is very useful when the quality of solutions is more important than computation time. We believe that further development of multistart local search heuristics based on the Hungarian algorithm may lead to very competitive heuristics for (1), hopefully with fast convergence to optimal solutions.
In the future, a more comprehensive experimental study of local search based on the Hungarian algorithm may be a very promising avenue of research.

Author Contributions

Funding acquisition, J.P.; Methodology, J.Ž.; Software, B.G.; Supervision, J.P. and J.Ž.; Writing—original draft, B.G., T.N. and D.R.P.; Writing—review & editing, J.P. and J.Ž. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded in part by Javna Agencija za Raziskovalno Dejavnost RS, grants J1-8155, J1-1693, P2-0248, J2-2512, and N1-0071.

Acknowledgments

The authors wish to thank the three anonymous reviewers for a number of constructive comments that helped us to considerably improve the presentation.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Gligorijević, V.; Malod-Dognin, N.; Pržulj, N. Integrative methods for analyzing big data in precision medicine. Proteomics 2016, 16, 741–758. [Google Scholar] [CrossRef] [PubMed]
  2. Gligorijević, V.; Malod-Dognin, N.; Pržulj, N. Fuse: Multiple network alignment via data fusion. Bioinformatics 2015, 32, 1195–1203. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  3. Malod-Dognin, N.; Petschnigg, J.; Windels, S.F.L.; Povh, J.; Hemmingway, H.; Ketteler, R.; Pržulj, N. Towards a data-integrated cell. Nat. Commun. 2019, 10, 805. [Google Scholar] [CrossRef] [PubMed]
  4. Balas, E.; Saltzman, M.J. An algorithm for the three-index assignment problem. Oper. Res. 1991, 39, 150–161. [Google Scholar] [CrossRef]
  5. Burkard, R.; Dell’Amico, M.; Martello, S. Assignment Problems: Revised Reprint; SIAM-Society of Industrial and Applied Mathematics: Philadelphia, PA, USA, 2012; Volume 106. [Google Scholar] [CrossRef]
  6. Burkard, R.E.; Rudolf, R.; Woeginger, G.J. Three-dimensional axial assignment problems with decomposable cost coefficients. Discret. Appl. Math. 1996, 65, 123–139. [Google Scholar] [CrossRef] [Green Version]
  7. Frieze, A.M. Complexity of a 3-dimensional assignment problem. Eur. J. Oper. Res. 1983, 13, 161–164. [Google Scholar] [CrossRef]
  8. Spieksma, F. Multi Index Assignment Problems: Complexity, Approximation, Applications. In Nonlinear Assignment Problems; Springer: Boston, MA, USA, 2000; pp. 1–12. [Google Scholar] [CrossRef]
  9. Kuroki, Y.; Matsui, T. An approximation algorithm for multidimensional assignment problems minimizing the sum of squared errors. Discret. Appl. Math. 2009, 157, 2124–2135. [Google Scholar] [CrossRef] [Green Version]
  10. Grundel, D.A.; Krokhmal, P.A.; Oliveira, C.A.S.; Pardalos, P.M. On the number of local minima for the multidimensional assignment problem. J. Comb. Optim. 2007, 13, 1–18. [Google Scholar] [CrossRef]
  11. Kuhn, H.W. The Hungarian Method for the Assignment Problem. Nav. Res. Logist. Q. 1955, 2, 83–97. [Google Scholar] [CrossRef] [Green Version]
  12. Karp, R.M. Reducibility Among Combinatorial Problems. In Complexity of Computer Computations; Plenum: New York, NY, USA, 1972; pp. 85–103. [Google Scholar] [CrossRef]
  13. Hurkens, C.A.J.; Schrijver, A. On the size of systems of sets every t of which have an SDR, with an application to the worst-case ratio of heuristics for packing problems. SIAM J. Discret. Math. 1989, 2, 68–72. [Google Scholar] [CrossRef] [Green Version]
  14. Arkin, E.; Hassin, R. On local search for weighted packing problems. Math. Oper. Res. 1998, 23, 640–648. [Google Scholar] [CrossRef]
  15. Crama, Y.; Spieksma, F. Approximation algorithms for three-dimensional assignment problems with triangle inequalities. Eur. J. Oper. Res. 1992, 60, 273–279. [Google Scholar] [CrossRef]
  16. Aiex, R.M.; Resende, M.G.C.; Pardalos, P.M.; Toraldo, G. GRASP with path relinking for three-index assignment. INFORMS J. Comput. 2005, 17, 224–247. [Google Scholar] [CrossRef] [Green Version]
  17. Huang, G.; Lim, A. A hybrid genetic algorithm for the Three-Index Assignment Problem. Eur. J. Oper. Res. 2006, 172, 249–257. [Google Scholar] [CrossRef]
  18. Gutin, G.; Karapetyan, D. Local Search Heuristics for the Multidimensional Assignment Problem. J. Heuristics 2011, 17, 201–249. [Google Scholar] [CrossRef]
  19. Karapetyan, D.; Gutin, G.; Goldengorin, B. Empirical evaluation of construction heuristics for the multidimensional assignment problem. arXiv 2009, arXiv:0906.2960. [Google Scholar]
  20. Jiang, H.; Zhang, S.; Ren, Z.; Lai, X.; Piao, Y. Approximate Muscle Guided Beam Search for Three-Index Assignment Problem. Adv. Swarm Intell. Lect. Notes Comput. Sci. 2014, 8794, 44–52. [Google Scholar] [CrossRef]
  21. Karapetyan, D.; Gutin, G. A New Approach to Population Sizing for Memetic Algorithms: A Case Study for the Multidimensional Assignment Problem. Evol. Comput. 2011, 19, 345–371. [Google Scholar] [CrossRef] [Green Version]
  22. Valencia, C.E.; Zaragoza Martinez, F.J.; Perez, S.L.P. A simple but effective memetic algorithm for the multidimensional assignment problem. In Proceedings of the 14th International Conference on Electrical Engineering, Computing Science and Automatic Control (CCE), Mexico City, Mexico, 20–22 October 2017; pp. 1–6. [Google Scholar] [CrossRef]
  23. Li, J.; Tharmarasa, R.; Brown, D.; Kirubarajan, T.; Pattipati, K.R. A novel convex dual approach to three-dimensional assignment problem: Theoretical analysis. Comput. Optim. Appl. 2019, 74, 481–516. [Google Scholar] [CrossRef]
  24. O’Leary, B. Don’t be Greedy, be Neighborly, a new assignment algorithm. In Proceedings of the 2019 IEEE Aerospace Conference, Big Sky, MT, USA, 2–9 March 2019; pp. 1–8. [Google Scholar]
  25. Medvedev, S.N.; Medvedeva, O.A. An Adaptive Algorithm for Solving the Axial Three-Index Assignment Problem. Autom. Remote Control 2019, 80, 718–732. [Google Scholar] [CrossRef]
  26. Pentico, D. Assignment problems: A golden anniversary survey. Eur. J. Oper. Res. 2007, 176, 774–793. [Google Scholar] [CrossRef]
  27. Walteros, J.; Vogiatzis, C.; Pasiliao, E.; Pardalos, P. Integer programming models for the multidimensional assignment problem with star costs. Eur. J. Oper. Res. 2014, 235, 553–568. [Google Scholar] [CrossRef]
  28. He, G.; Liu, J.; Zhao, C. Approximation algorithms for some graph partitioning problems. In Graph Algorithms and Applications 2; World Scientific: Singapore, 2004; pp. 21–31. [Google Scholar]
  29. Mladenović, N.; Hansen, P. Variable neighborhood search. Comput. Oper. Res. 1997, 24, 1097–1100. [Google Scholar] [CrossRef]
  30. Spieksma, F.C.R. Instances of the 3-Dimensional Assignment Problem. Available online: https://www.win.tue.nl/~fspieksma/instancesEJOR.htm (accessed on 15 February 2019).
  31. Aarts, E.H.L.; Lenstra, J.K. (Eds.) Local Search in Combinatorial Optimization; Wiley-Interscience Series in Discrete Mathematics and Optimization; Wiley-Interscience: Hoboken, NJ, USA, 1997. [Google Scholar]
  32. Talbi, E. Metaheuristics: From Design to Implementation; John Wiley & Sons: Hoboken, NJ, USA, 2009. [Google Scholar]
  33. Žerovnik, J. Heuristics for NP-hard optimization problems—Simpler is better!? Logist. Sustain. Transp. 2015, 6, 1–10. [Google Scholar] [CrossRef] [Green Version]
Figure 1. Application of Algorithm $A_m$ on a small example.
Figure 2. Cases for Algorithm $B_m$.
Figure 3. Cases for Algorithm $C_m$.
Figure 4. This plot depicts the quotients with $A_M$ for the instances from the first subset of the second dataset (see Section 4.1), corresponding to $k = 3$. The x-axis represents the size of each partition $n$, while the y-axis represents the quotient. We can see that with larger $n$, $E_M$(10) outperforms all other algorithms, while $C_M$, $E_M$, and $F_M$(100) perform similarly.
Figure 5. On this plot we can observe the quotients with $A_M$ for the instances from the first subset of the second dataset, corresponding to $k = 4$. The x-axis represents the size of each partition $n$, while the y-axis represents the quotient. Compared to the results from Figure 4, we can see that on this subset the differences between $B_M$, $C_M$, $E_M$, $E_M$(10), and $F_M$(100) become almost negligible as $n$ increases.
Figure 6. This diagram depicts the quotients with $A_M$ for the second subset of the second dataset (corresponding to $k = 3$). Compared to the results from Figure 4, we can see that on this subset, Algorithms $C_M$, $E_M$, $E_M$(10), and $F_M$(100) give substantially better results than $B_M$ and $D_M$.
Figure 7. This diagram depicts the quotients with $A_M$ on the second subset of the second dataset (corresponding to $k = 4$). Algorithm $E_M$(10) outperforms the others. However, the ratios seem to stabilise at a constant factor.
Figure 8. This diagram contains graphical representations of the average (normalized) times (on a logarithmic scale) needed for the non-binary instances from [30], as computed in Table 3, and of the relative errors (with respect to the optimal value OPT). Algorithms $A_m$, $B_m$, $C_m$, $D_m$, and $E_m$ are considered fast, while $E_m$(10) and $F_m$(100) are comparably slow.
Table 1. Comparison of the Random, Greedy, and $A_M$ algorithms from [28] with Algorithms $B_M$ to $F_M$ for the maximization version of $k$-AP ($k = 3$ and $k = 4$). Each row contains average values of solutions obtained by these algorithms, computed over 1000 random instances of complete $k$-partite graphs with $n$ vertices per partition, generated as described in Section 4.1. We can see that Algorithms $B_M$ to $F_M$ return substantially better results.

| Algorithm | val ($k=3$, $n=30$, $S=\{0,\dots,9\}$) | $\Delta_{A_M}$ (%) | Time | val ($k=4$, $n=100$, $S=\{1,\dots,100\}$) | $\Delta_{A_M}$ (%) | Time |
|---|---|---|---|---|---|---|
| Random | 405 | −45.9 | - | 41,806 | −23.5 | - |
| Greedy | 736 | −1.8 | - | 52,801 | −3.0 | - |
| $A_M$ | 749.2 | 0 | 1 | 54,421.7 | 0 | 1 |
| $B_M$ | 753.8 | 0.6 | 3.0 | 54,634.1 | 0.4 | 14.4 |
| $C_M$ | 759.4 | 1.4 | 6.1 | 54,731.5 | 0.6 | 46.7 |
| $D_M$ | 749.4 | 0.0 | 2.4 | 54,442.9 | 0.0 | 3.7 |
| $E_M$ | 759.3 | 1.3 | 5.3 | 54,730.5 | 0.6 | 37.9 |
| $E_M$(10) | 759.9 | 1.4 | 53.0 | 54,761.1 | 0.6 | 378.7 |
| $F_M$(100) | 760.4 | 1.5 | 94.8 | 54,732.0 | 0.6 | 674.7 |
Table 2. The rows contain the average values obtained by the algorithms Random, Greedy, and $A_m$ from [28], and by our Algorithms $B_m$ to $F_m$, over 1000 random complete 3-partite graphs. We can see that our Algorithms $B_m$ to $F_m$ outperform the algorithms from [28].

| Algorithm | val ($k=3$, $n=30$, $S=\{0,\dots,9\}$) | $\Delta_{A_m}$ (%) | Time |
|---|---|---|---|
| Random | 405 | 566.7 | - |
| Greedy | 218 | 258.9 | - |
| $A_m$ | 60.7 | 0 | 1 |
| $B_m$ | 56.2 | −7.4 | 3 |
| $C_m$ | 50.8 | −16.4 | 6.1 |
| $D_m$ | 60.6 | −0.2 | 2.4 |
| $E_m$ | 50.9 | −16.2 | 5.3 |
| $E_m$(10) | 50.3 | −17.2 | 53 |
| $F_m$(100) | 49.8 | −18.0 | 95.2 |
Table 3. Comparison of Algorithms $A_m$, $B_m$, $C_m$, $D_m$, $E_m$, $E_m$(10), and $F_m$(100) on the first group of instances of 3-AP from [15,30]. Column 2 contains the optimum value of the problem, as reported in [30]. For each algorithm, we report the value that it returns. Average relative errors $\delta$ and average (normalized) computation times are given in the last two rows. Algorithms $C_m$, $E_m$, $E_m$(10), and $F_m$(100) have the best performance and, on average, differ from the optimal solution by 0.1% or less (see the last row).

| Problem | OPT | $A_m$ | $B_m$ | $C_m$ | $D_m$ | $E_m$ | $E_m$(10) | $F_m$(100) |
|---|---|---|---|---|---|---|---|---|
| 3DA198N1 | 2662 | 2696 | 2669 | 2663 | 2669 | 2663 | 2663 | 2663 |
| 3DA198N2 | 2449 | 2498 | 2467 | 2458 | 2467 | 2458.4 | 2457.1 | 2457.4 |
| 3DA198N3 | 2758 | 2811 | 2778 | 2764 | 2778 | 2764 | 2764 | 2764 |
| 3DA99N1 | 1608 | 1617 | 1617 | 1608 | 1617 | 1608.0 | 1608.0 | 1608.0 |
| 3DA99N2 | 1401 | 1420 | 1411 | 1402 | 1415 | 1402.0 | 1402.0 | 1402.0 |
| 3DA99N3 | 1604 | 1612 | 1612 | 1604 | 1612 | 1604.0 | 1604.0 | 1604.0 |
| 3DI198N1 | 9684 | 9830 | 9765 | 9695 | 9765 | 9693.3 | 9689.2 | 9689.8 |
| 3DI198N2 | 8944 | 9132 | 9121 | 8949 | 9177 | 8949.7 | 8947.4 | 8948.4 |
| 3DI198N3 | 9745 | 9930 | 9876 | 9750 | 9876 | 9749.6 | 9747.6 | 9748.5 |
| 3DIJ99N1 | 4797 | 4882 | 4839 | 4800 | 4882 | 4801.3 | 4798 | 4799.3 |
| 3DIJ99N2 | 5067 | 5145 | 5136 | 5074 | 5145 | 5071.8 | 5069.6 | 5071.1 |
| 3DIJ99N3 | 4287 | 4338 | 4338 | 4291 | 4371 | 4290.5 | 4287.8 | 4289.6 |
| $\bar{\delta}$ [%] | 0 | 1.47 | 0.92 | 0.10 | 1.15 | 0.10 | 0.07 | 0.08 |
| Time | - | 1 | 2.9 | 11.8 | 2.3 | 9.0 | 89.1 | 150.9 |
Table 4. Numerical results for Algorithms $A_m$, $B_m$, $C_m$, $D_m$, $E_m$, $E_m$(10), and $F_m$(100) on the second group of instances from [30] (the binary graphs). The optimum values OPT are taken from [30]. Average relative errors $\delta$ and average (normalized) computation times are given in the last two rows. We can see that $F_m$(100) has the best performance.

| Problem | OPT | $A_m$ | $B_m$ | $C_m$ | $D_m$ | $E_m$ | $E_m$(10) | $F_m$(100) |
|---|---|---|---|---|---|---|---|---|
| 3DM198N1 | 286 | 298 | 294 | 287 | 295 | 286.5 | 286.0 | 286.4 |
| 3DM198N2 | 286 | 294 | 293 | 286 | 294 | 286.2 | 286.0 | 286.2 |
| 3DM198N3 | 282 | 294 | 294 | 285 | 294 | 284.9 | 284.3 | 283.7 |
| 3DM299N1 | 133 | 140 | 134 | 134 | 134 | 134.0 | 134.0 | 133.4 |
| 3DM299N2 | 131 | 139 | 137 | 134 | 137 | 134.0 | 134.0 | 133.0 |
| 3DM299N3 | 131 | 136 | 136 | 132 | 136 | 132.0 | 132.0 | 131.0 |
| $\bar{\delta}$ [%] | 0 | 4.41 | 3.11 | 0.87 | 3.22 | 0.84 | 0.77 | 0.45 |
| Time | 0 | 1 | 2.7 | 6.8 | 2.4 | 5.4 | 54.0 | 89.7 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
