Lexicographic Unranking of Combinations Revisited

Abstract: In the context of combinatorial sampling, the so-called "unranking method" can be seen as a link between a total order over the objects and an effective way to construct an object of a given rank. The most classical order used in this context is the lexicographic order, which corresponds to the familiar word ordering in the dictionary. In this article, we propose a comparative study of four algorithms dedicated to the lexicographic unranking of combinations, including three algorithms that were introduced decades ago. We start the paper with the introduction of our new algorithm, which uses a new strategy of computation based on the classical factorial numeral system (or factoradics). Then, we present, at a high level, the three other algorithms. For each case, we analyze its time complexity on average, within a uniform framework, and describe its strengths and weaknesses. For about 20 years, such algorithms have been implemented using big integer arithmetic rather than bounded integer arithmetic, which makes the cost of computing some coefficients higher than previously stated. We propose improvements for all implementations that take this fact into account, and we give a detailed complexity analysis, which is validated by an experimental analysis. We then show that, even if the algorithms are based on different strategies, all of them perform very similar computations. Finally, we extend our approach to the unranking of other classical combinatorial objects such as families counted by multinomial coefficients and k-permutations.


Introduction
One of the most fundamental combinatorial objects is called a combination. A combination of k objects among n is a subset of cardinality k of a set containing n objects. In many enumeration problems, it appears either as the main combinatorial structure or as a core building block because of its simplicity and counting characteristics.
In the 1960s, while solving some optimization problems related to scheduling, Lehmer rediscovered an important property linking natural numbers with a mixed radix numeral system based on combinations. This relation gave him the possibility to exhibit a greedy approach for a ranking algorithm that transforms (bijectively) a combination into an integer. This numeral system is now commonly called the "combinatorial number system" or "combinadics". It is often used for the reverse of Lehmer's problem: generating the uth combination (for a given order on the set of combinations). For efficiency reasons, this approach can be substituted for exhaustive generation once the latter is no longer possible due to the combinatorial explosion of the number of objects when their size increases. In the context of combinations, the explosion appears quickly (we recall that ( 2n n ) ∼ (2πn)^{−1/2} 4^n). This generation strategy for a single element is classically called an unranking method. It appears as the fundamental problem in combinatorial generation such as in [1] and in optimization [2]. It is also used as a basic building block in scheduling problems [3], as well as, e.g., in software testing [4]. In other contexts, it appears as the core problem for the generation of complex structures: we can for example cite phylogenetics [5] and bioinformatics [6].

Table 1. Average time (in ms) for the unranking of a combination of k elements among n (too long means that the average time is larger than 30,000 ms). Columns: Sample; Implem. (Our Algo., Sagemath); Time in ms.

Throughout the paper, we represent combinations as follows.
Definition 1. Let n and k be two integers with 0 ≤ k ≤ n. We represent a combination of k elements among the n elements of the set {0, 1, . . . , n − 1} as a finite increasingly sorted sequence containing k distinct elements.
For example, let n and k be, respectively, 5 and 3. The finite sequences (0, 1, 2) and (0, 2, 4) are combinations of k elements among n, but (0, 2, 1) and (0, 1, 2, 3) are not. Other representations are sometimes used, notably by using a two-letter alphabet, but we stick to the one given in Definition 1 in this paper.
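Definition 1 translates directly into a small validity check; the helper below (its name is ours, not from the paper) encodes exactly the three constraints: length k, values in {0, . . . , n − 1}, and strictly increasing order.

```python
def is_combination(seq, n, k):
    """Check that seq is a combination of k elements among {0, ..., n-1}:
    correct length, values in range, strictly increasing."""
    return (len(seq) == k
            and all(0 <= x < n for x in seq)
            and all(a < b for a, b in zip(seq, seq[1:])))

# The examples from the text (n = 5, k = 3):
assert is_combination((0, 1, 2), 5, 3)
assert is_combination((0, 2, 4), 5, 3)
assert not is_combination((0, 2, 1), 5, 3)     # not sorted
assert not is_combination((0, 1, 2, 3), 5, 3)  # wrong length
```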
There are several possible orders for comparing combinations. In the following, we restrict our attention to orders comparing combinations of the same length, i.e., the same number of elements.

Definition 2. Let A = (a 0 , a 1 , . . . , a k−1 ) and B = (b 0 , b 1 , . . . , b k−1 ) be two distinct combinations of k elements among n.

•
In the lexicographic order, we say that A is smaller than B if and only if there exists an index p such that both combinations share the same (possibly empty) prefix (a 0 , . . . , a p−1 ) = (b 0 , . . . , b p−1 ) and, in addition, a p < b p .

•
In the co-lexicographic order, we say that A is smaller than B if and only if the finite sequence (a k−1 , . . . , a 0 ) is smaller than (b k−1 , . . . , b 0 ) for the lexicographic order.

•
If A is smaller than B for a given order, then, for the reverse order, B is smaller than A.
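The three orders can be illustrated in a few lines of Python (the comparator names are ours); the reverse of either order is obtained by swapping the arguments.

```python
def lex_less(a, b):
    """Lexicographic comparison of two equal-length combinations."""
    return tuple(a) < tuple(b)

def colex_less(a, b):
    """Co-lexicographic: compare the reversed sequences lexicographically."""
    return tuple(reversed(a)) < tuple(reversed(b))

A, B = (0, 1, 4), (0, 2, 3)
assert lex_less(A, B)     # A is smaller lexicographically,
assert colex_less(B, A)   # but B is smaller co-lexicographically.
```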

Definition 3.
Let 0 ≤ k ≤ n be two integers and let A be a combination of k elements among n. For a given order, the rank u of A belongs to {0, 1, . . . , ( n k ) − 1} and is such that A is the uth smallest combination.
With these definitions in mind, we can enter the core of the paper, organized as follows. We first give the presentation and a first complexity analysis of our new algorithm solving the lexicographic combination unranking problem in Section 2. Section 3 is dedicated to a survey of three classical algorithms for the latter problem. The first one is based on the so-called recursive method, and the two other algorithms are based on combinadics, a specific numeral system. To our knowledge, this is the first time that both are compared and that the reason why one is better is explained. We also propose a new method for their analysis based on a generating function approach. Obviously, we obtain the same results as in the literature, but we also manage to obtain better asymptotic bounds. In Section 4, we then compare their efficiency experimentally (the implementation and the exhaustive material used for repeating the experiments are all available at http://github.com/Kerl13/combination_unranking (accessed on 15 March 2021)) and propose a way to improve the computation of the binomial coefficients used in all four algorithms. Surprisingly, once the improvements have been implemented in all algorithms, we observe deep similarities in their computations, which is reflected by their observed run times. Finally, we extend our approach to solve the problem of unranking structures enumerated by multinomial coefficients and also objects counted by the k-permutations of n (also called arrangements).

Unranking through Factoradics: A New Strategy
The classical methods to unrank combinations rely on the combinatorial number system introduced in 1887 by E. Pascal [15] and later by D. H. Lehmer (detailed in the book [16] (p. 27)). We survey these classical algorithms in Section 3.2. In this section, we first present a new strategy based on another number system called factoradics. To the best of our knowledge, the latter has never been used for the unranking of combinations.

Link between the Factorial Number System and Permutations
We recall that the factorial number system, or factoradics, is a mixed radix numeral system in which the representation of integers relies on the use of factorial numbers. Its definition belongs to the folklore but already appeared in 1888 in [17].

Definition 4. Let u be a positive integer and let n be the unique integer satisfying (n − 1)! ≤ u < n!. Then, there exists a unique sequence of integers ( f ℓ ) ℓ∈{0,...,n−1} , with 0 ≤ f ℓ ≤ ℓ for all ℓ, such that:

u = ∑_{ℓ=0}^{n−1} f ℓ · ℓ! .

The finite sequence ( f 0 , f 1 , . . . , f n−1 ) is called the factoradic decomposition (or factoradic) of u (note that f 0 = 0 for all u).
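As a sketch (the function name is ours), the decomposition is obtained by successive divisions by 2, 3, 4, . . . , the remainders giving the digits f 1 , f 2 , . . . :

```python
from math import factorial

def factoradic(u):
    """Factoradic decomposition (f_0, f_1, ..., f_{m-1}) of u, least
    significant digit first; f_0 is always 0 and 0 <= f_l <= l."""
    F = [0]
    d = 2
    while u > 0:
        u, r = divmod(u, d)
        F.append(r)
        d += 1
    return F

F = factoradic(2021)  # [0, 1, 2, 0, 4, 4, 2]; pad with a 0 for 8 components
assert sum(f * factorial(i) for i, f in enumerate(F)) == 2021
```

Padding with trailing zeros (here, one 0 to reach 8 components) gives the representation (0, 1, 2, 0, 4, 4, 2, 0) used in the example below.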

Definition 5.
Let n be a positive integer. A permutation of size n is an ordering of the elements of the set {0, 1, . . . , n − 1}.
We represent a permutation of size n as a finite sequence of length n indicating the order of its elements. For example, the sequence (2, 4, 0, 3, 1) is a permutation of size 5. The factorial number system is particularly suitable to define a one-to-one correspondence between integers and permutations and thus can be used as an unranking method for permutations. The algorithm implemented in the function UNRANKINGPERMUTATION in Algorithm 1 is a straightforward adaptation of the Fisher–Yates random sampler for permutations [18].

Algorithm 1 Unranking a permutation.
1: function UNRANKINGPERMUTATION(n, u)
2:   F ← factoradic(u)
3:   while length(F) < n do
4:     append(F, 0)
5:   return EXTRACT(F, n, n)

Since the factoradic (with 8 components) of 2021 is (0, 1, 2, 0, 4, 4, 2, 0), the permutation (of size 8) of rank 2021 is (0, 3, 6, 7, 1, 5, 4, 2). To obtain this permutation, we read the factoradic from right to left, and extract iteratively from the list (0, 1, . . . , 8 − 1) the element whose index is the coefficient read in the factoradic. This goes on until the list is empty and we reach the leftmost component of the factoradic. Thus, in our example, we start by extracting the element of index 0, which is 0. Then, the list P becomes (1, 2, . . . , 7) and we extract the element of index 2, which is 3. Then, P becomes (1, 2, 4, 5, 6, 7) and we extract the element of index 4, which is 6, and so on.
Note that, for the sake of clarity, we present the function EXTRACT using a list for P, but a better data structure must be used to achieve good performance. Good candidates are dynamic balanced trees, as presented in [19], or multisets with elements of weight 1 or 0, as presented in the Appendix of [20], since both provide logarithmic access and removal. Unfortunately, it seems that there is no algorithm based on some swap operation giving an in-place shuffle to unrank permutations in the lexicographic order; put differently: Durstenfeld's algorithm [21] cannot be easily adapted for the lexicographic order.
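Putting the pieces together, the whole procedure fits in a few lines of Python (function names are ours); `P.pop(f)` plays the role of the extraction step, with linear cost, where the tree structures mentioned above would give logarithmic cost:

```python
def factoradic(u):
    """Factoradic digits of u, least significant first (f_0 = 0)."""
    F, d = [0], 2
    while u > 0:
        u, r = divmod(u, d)
        F.append(r)
        d += 1
    return F

def unrank_permutation(n, u):
    """Return the permutation of {0, ..., n-1} of rank u (lexicographic)."""
    F = factoradic(u)
    F += [0] * (n - len(F))           # pad to n components
    P = list(range(n))
    # read the factoradic from right to left, extracting by index
    return [P.pop(f) for f in reversed(F)]

assert unrank_permutation(8, 2021) == [0, 3, 6, 7, 1, 5, 4, 2]
```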

Combinations Unranking through Factoradics
We start with the definition of our algorithm to unrank combinations via the use of factoradics. The basic ideas driving our algorithm are the following:

1. We define a bijection between the combinations of k elements among n and a subset of the permutations of n elements.
2. We transform the combination rank u into the rank u′ of the appropriate permutation.
3. We build (the prefix of) the permutation of rank u′ by using Algorithm 1.

Proposition 1.
For all integers 0 ≤ k ≤ n, the function P n,k is a bijection from the set of (n, k) combinations to the set of size-n permutations whose prefix of length k and suffix of length n − k are both increasingly sorted.
In fact, we get this result using a classical cardinality argument: a sequence of integers of the form x 0 = 0 ≤ x 1 ≤ x 2 ≤ · · · ≤ x k ≤ x k+1 = ℓ corresponds to a weak composition (consider the differences x i+1 − x i ) of the integer ℓ into k + 1 terms. The number of such compositions is given by ( ℓ+k k ). Hence, taking ℓ = n − k − m, the number of sequences matching the description of Fact 2 is given by ( n−k−m+k k ). We now exhibit how to transform a combination rank into its corresponding permutation rank.

Lemma 1. For any given 0 ≤ k ≤ n, the factoradic decompositions of the ranks of the permutations obtained as the image by P n,k of some combination of k elements among n are all the finite sequences of the form (0, . . . , 0, f n−k , . . . , f n−1 ) with n − k ≥ f n−k ≥ f n−k+1 ≥ · · · ≥ f n−1 ≥ 0.
Proof. Let u be an integer whose factoradic is (0, . . . , 0, f n−k , . . . , f n−1 ) as in the lemma. Due to the constraint n − k ≥ f n−k ≥ f n−k+1 ≥ · · · ≥ f n−1 ≥ 0, the permutation corresponding to u has for prefix of length k the sequence ( f n−1 , f n−2 + 1, f n−3 + 2, . . . , f n−k + k − 1), which is increasingly sorted. The rest of the permutation (the suffix of length n − k) corresponds to the increasing sequence of elements that have not been taken yet. Thus, the result corresponds to a combination via P n,k . Fact 2 completes the proof.
Thus, to convert the combination rank u into its corresponding permutation rank u′, it is sufficient to find the uth sequence satisfying Lemma 1 in co-lexicographic order. This is presented in Algorithm 2, where the RANKCONVERSION function implements the conversion from u to u′ and the UNRANKINGCOMBINATION function implements the whole unranking procedure.
The key to the rank conversion is again Fact 2. As a consequence, we get that the first ( n−1 k−1 ) such sequences (in co-lexicographic order) have f n−1 = 0 and that the ( n−1 k ) following ones have f n−1 ≥ 1.
Once again, we opt for a simple presentation here, where the rank conversion and the unranking part of the algorithm are clearly separated. There is much room for improvement: for instance, note that, at the end of the function RANKCONVERSION, a factoradic decomposition is transformed into the integer it represents, but then, at the beginning of UNRANKINGPERMUTATION, this integer is decomposed again into factoradics. In fact, instead of storing m into F on Line 8, we could directly compute the ith component of the combination as m + i by using the remark at the beginning of the proof of Lemma 1. In Section 4.2, we provide a more efficient way to implement this algorithm, including this particular optimization and other improvements.

Proposition 2. The function UNRANKINGCOMBINATION(n, k, u) computes the uth combination of k elements among n for the lexicographic order.

Proof. The algorithm, and thus its proof, relies heavily on Fact 2. The key to prove the correctness of the RANKCONVERSION function is the following loop invariant:

•	The values of f n−1−j for all 0 ≤ j < i have been computed and stored in F.

•
The value of f n−i−1 (which has not been determined yet, as we enter the loop) is at least m.

•
The variable u holds the rank of the sequence ( f j ) 0≤j<n−i among all sequences satisfying the condition of Fact 2 with k − i and n − i in place of k and n.
With this invariant at hand, the combinatorial argument behind the condition u < ( n−m−i−1 k−i−1 ) in the algorithm becomes more apparent; that is, the binomial coefficient counts the number of such sequences ending with m. Hence, if u < ( n−m−i−1 k−i−1 ), then f n−1−i = m and we move to the evaluation of the next coefficient ( f n−i−2 ); otherwise, we try the next possible value for m.

Algorithm 2 Unranking a combination.
14: return composition(F)    (F is the factoradic decomposition)

Here, binomial(n, k) computes the value of ( n k ), and composition(F) computes the integer whose factoradic is F.
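As a sketch of the strategy described above (function name ours), the rank conversion and the unranking can be fused, emitting each component directly as m + i as suggested by the optimization mentioned earlier:

```python
from math import comb

def unranking_combination(n, k, u):
    """Return the combination of k elements among n of rank u
    (lexicographic order), fusing rank conversion and unranking."""
    result = []
    i, m = 0, 0   # i: index of the next component, m: current candidate
    while i < k:
        b = comb(n - m - i - 1, k - i - 1)
        if u < b:            # the i-th component of the combination is m + i
            result.append(m + i)
            i += 1
        else:                # skip the b combinations with this component
            u -= b
            m += 1
    return result

assert unranking_combination(5, 3, 0) == [0, 1, 2]
assert unranking_combination(5, 3, 9) == [2, 3, 4]
```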
The usual way to evaluate the efficiency of such an algorithm is to count the number of times the binomial function is called (see, e.g., the book [7] (p. 66) or the papers [22,23]). During the conversion from the rank of the combination to the one of the associated permutation, the coefficients are obtained via trials (in the factoradic notation) for f n−1 to f n−k , remarking that through our bijection P n,k the latter sequence is weakly increasing. Thus, the worst cases are obtained when the value f n−k is as large as possible, that is n − k. In these cases, the number of calls to binomial is n.
To study the average number of calls to binomial, when u describes the whole range from 0 to ( n k ) − 1, we introduce the following cumulative sequence.

Lemma 2.
Let u n,k be the cumulative number of calls to binomial while unranking all possible combinations u from 0 to ( n k ) − 1. The sequence satisfies u n,0 = 0 and u n,n+i = 0 for all n and i > 0, and otherwise:

u n,k = ( n k ) + u n−1,k−1 + u n−1,k .

In Table 2, we give the first values of u n,k , when n is less than 9. The obtained sequence is stored under the reference OEIS A127717 (throughout this paper, a reference OEIS A· · · points to Sloane's Online Encyclopedia of Integer Sequences www.oeis.org (accessed on 15 March 2021)). The bijection between both structures is direct, and thus we provide new information about this sequence in the following.

Table 2. First values of u n,k for n = 1 . . . 8 and k = 0 . . . n (Algorithm 2).

n\k    0     1     2     3     4     5     6     7     8
1      0     1
2      0     3     2
3      0     6     8     3
4      0    10    20    15     4
5      0    15    40    45    24     5
6      0    21    70   105    84    35     6
7      0    28   112   210   224   140    48     7
8      0    36   168   378   504   420   216    63     8

Proof. The recurrence can be observed by unrolling the first iteration of the while loop. In the first iteration of the loop, a binomial coefficient b is always computed (regardless of the values of k and n), which accounts for the term ( n k ) in the recurrence relation. Then, for all the ranks u such that u < b, we choose f n−1 = 0 and increment i, so that the rest of the execution corresponds to unranking a combination of k − 1 elements among n − 1. This is accounted for by u n−1,k−1 . Conversely, for all ranks u such that u ≥ b, the value of m is incremented and the rest of the execution corresponds to unranking a combination of k elements among n − 1, which is accounted for by u n−1,k .
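The recurrence of Lemma 2 is easy to check mechanically; the sketch below (names ours) recomputes the table and cross-checks the values against the closed form k · C(n + 1, k + 1):

```python
from math import comb

def cumulative_calls(nmax):
    """Table of u_{n,k} from the recurrence of Lemma 2:
    u_{n,0} = 0, u_{n,k} = 0 for k > n, else C(n,k) + u_{n-1,k-1} + u_{n-1,k}."""
    u = {(0, 0): 0}
    for n in range(1, nmax + 1):
        u[(n, 0)] = 0
        for k in range(1, n + 1):
            u[(n, k)] = (comb(n, k)
                         + u.get((n - 1, k - 1), 0)
                         + u.get((n - 1, k), 0))
    return u

u = cumulative_calls(8)
# last row of Table 2:
assert [u[(8, k)] for k in range(9)] == [0, 36, 168, 378, 504, 420, 216, 63, 8]
# cross-check against the closed form k * C(n+1, k+1)
assert all(u[(n, k)] == k * comb(n + 1, k + 1)
           for n in range(1, 9) for k in range(n + 1))
```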
We turn to bivariate generating functions to encode the sequence (u n,k ) as a power series and express the above recurrence as a simple equation satisfied by this function, which can be solved explicitly. The reader can refer to the two books of Flajolet and Sedgewick [24,25] for a global presentation of such methods.

Theorem 3. Let U(z, y) = ∑_{n≥0} ∑_{k=0}^{n} u n,k z^n y^k be the generating function associated with (u n,k ). Then,

U(z, y) = zy / ((1 − z)(1 − z(1 + y))^2),   and thus   u n,k = k ( n+1 k+1 ).

Proof. The first step of the proof consists in exhibiting the ordinary generating function associated with (u n,k ). To obtain a functional equation satisfied by U, we start from the result presented in Lemma 2. The extreme cases are u n,0 = 0 and u n,n+i = 0 for all n and i > 0. The recursive equation is u n,k = ( n k ) + u n−1,k−1 + u n−1,k . We remark the term ( n k ) in the equation. We thus need the bivariate generating function of binomial coefficients. Let us denote it by B(z, y), which is equal to

B(z, y) = ∑_{n≥0} ∑_{k=0}^{n} ( n k ) z^n y^k = 1 / (1 − z(1 + y)).

To take into account the extreme cases, we must remove the terms corresponding to the column k = 0:

B(z, y) − 1/(1 − z) = zy / ((1 − z)(1 − z(1 + y))).

By summing both sides of the recursive relation and by taking care of the extreme cases, we get:

U(z, y) = zy / ((1 − z)(1 − z(1 + y))) + zy U(z, y) + z U(z, y).

We thus deduce

U(z, y) = zy / ((1 − z)(1 − z(1 + y))^2).

The second step in the proof consists in extracting the coefficient u n,k . We rewrite U(z, y) as:

U(z, y) = 1/(1 − z) · ∑_{m≥0} (m + 1) y (1 + y)^m z^{m+1} .

By extraction, the coefficient in front of z^n is:

y ∑_{m=0}^{n−1} (m + 1)(1 + y)^m .

The latter result corresponds to the distribution of the costs when k varies from 0 to n. We can then exactly extract the coefficient of z^n y^k :

u n,k = ∑_{m=0}^{n−1} (m + 1) ( m k−1 ) = k ( n+1 k+1 ).

Corollary 1.
To unrank a combination of k elements among n, the function UNRANKINGCOMBINATION(n, k, ·) needs on average u n,k /( n k ) calls to the function binomial. For n being large and k being of the form αn for 0 < α < 1, this average number of calls is

k(n + 1)/(k + 1) = n + 1 − 1/α + O(1/n) ∼ n.

The result is direct by using Theorem 3. Since we have the exact value of u n,k , the mean value can easily be computed in other cases such as k = o(n) or k = n − o(n).

Classical Unranking Algorithms
This section is dedicated to a survey of the usual approaches to unrank combinations in the lexicographic order. The motivation behind this section is threefold. First, the classical algorithms were developed in the 1970s and 1980s, and it is a hard task to get access to the papers we mention. Second, although they have been analyzed according to the number of calls to the binomial coefficient computations, we present here a standardization of the analysis using generating functions, as in the previous section. Finally, as shown in Section 4 and in the conclusion of the paper, a detailed analysis of all the possible approaches is necessary to properly understand the behavior of the computations.

Unranking through the Recursive Method
We are dealing here with a combinatorial structure, combinations, that is well understood in the combinatorial sense. Thus, when trying to develop an unranking algorithm, the first idea consists in developing one based on the classical recursive generation method presented in [11]. This type of algorithm is based on a recursive decomposition of the structure into smaller parts. Here, the idea is to use the following fact: a combination of k elements among {0, 1, . . . , n − 1} either contains n − 1 or does not. In the first case, the rest of the combination can be seen as a combination of k − 1 elements among n − 1, and in the second case, the combination is a combination of k elements among n − 1. From a counting point of view, this translates into the well-known identity ( n k ) = ( n−1 k−1 ) + ( n−1 k ) and, from an unranking point of view, this translates into Algorithm 3.

Algorithm 3 Unranking a combination (recursive method).
1: function RECGENERATION(n, k, u)
2:   R ← empty sequence
3:   if k = 0 then
4:     return R
5:   if n = k then
6:     return (0, 1, . . . , n − 1)
7:   b ← binomial(n − 1, k − 1)
8:   if u < b then
9:     R ← RECGENERATION(n − 1, k − 1, u)
10:    append(R, n − 1)
11:    return R
12:  else
13:    return RECGENERATION(n − 1, k, u − b)

Remark 1. An alternative choice would have been to test whether u < ( n−1 k ) on Line 8. This corresponds to putting first the combinations that do not contain n − 1 and then those that contain n − 1. In this case, the unranking order is different but the performance is similar.

Proposition 3. The function RECGENERATION(n, k, u) computes the uth combination of k elements among n for the reverse co-lexicographic order.
Corollary 2. The function UNRANKINGRECURSIVE(n, k, u) computes the uth combination of k elements among n for the lexicographic order.
Proof. The proposition is proved by induction and the corollary is a direct observation given in [26] (p. 47).
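The decomposition above translates almost verbatim into Python (names ours); the lexicographic wrapper applies the complement transform x → n − 1 − x to the reversed output of the recursive function:

```python
from math import comb

def rec_generation(n, k, u):
    """Reverse co-lexicographic unranking, following the identity
    C(n,k) = C(n-1,k-1) + C(n-1,k)."""
    if k == 0:
        return []
    b = comb(n - 1, k - 1)
    if u < b:                         # n-1 belongs to the combination
        r = rec_generation(n - 1, k - 1, u)
        r.append(n - 1)
        return r
    return rec_generation(n - 1, k, u - b)

def unranking_recursive(n, k, u):
    """Lexicographic unranking via the complement transform x -> n-1-x."""
    return [n - 1 - x for x in reversed(rec_generation(n, k, u))]

assert rec_generation(5, 3, 0) == [2, 3, 4]       # reverse co-lex rank 0
assert unranking_recursive(5, 3, 0) == [0, 1, 2]  # lexicographic rank 0
```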
Here again, we are interested in the average number of calls to the binomial function, when u describes the whole range of integers from 0 to ( n k ) − 1. Ruskey justified such a measure by supposing the table of all binomial coefficients pre-computed, so that each call has the same cost. In Section 4, we discuss this measure. We introduce the sequence (to simplify the notations, we reuse several times the notations u n,k and U(z, y) for distinct sequences and series) u n,k equal to the cumulative number of calls to binomial for the whole range of possible values for u.

Lemma 3. Let u n,k be the cumulative number of calls to binomial for the recursive unranking while unranking all possible u from 0 to ( n k ) − 1. The sequence satisfies u n,0 = 0 and u n,n+i = 0 for all n and i ≥ 0, and otherwise:

u n,k = ( n k ) + u n−1,k−1 + u n−1,k .

Proof. If 0 < k < n, calling RECGENERATION(n, k, u) incurs one call to binomial and a recursive call. The cumulative cost of the first call to binomial is ( n k ), the cumulative cost of the recursive calls for u < ( n−1 k−1 ) is u n−1,k−1 , and the cumulative cost of the recursive calls for u ≥ ( n−1 k−1 ) is u n−1,k .

In Table 3, we give the first values of u n,k , when n is less than 9.

Table 3. First values of u n,k for n = 1 . . . 8 and k = 0 . . . n (Algorithm 3).

n\k    0     1     2     3     4     5     6     7     8
1      0     0
2      0     2     0
3      0     5     5     0
4      0     9    16     9     0
5      0    14    35    35    14     0
6      0    20    64    90    64    20     0
7      0    27   105   189   189   105    27     0
8      0    35   160   350   448   350   160    35     0

Theorem 4. Let U(z, y) be the ordinary generating function associated with (u n,k ), such that U(z, y) = ∑_{n≥0} ∑_{k=0}^{n} u n,k y^k z^n . Then,

U(z, y) = zy/((1 − z)(1 − z(1 + y))^2) − zy/((1 − zy)(1 − z(1 + y))),   and thus   u n,k = k ( n+1 k+1 ) − ( n k−1 ).

The proof of Theorem 4 is very similar to the one of Theorem 3.
Proof. The first step of the proof consists in exhibiting the ordinary generating function associated with (u n,k ). To obtain the equation for U, we start from the result presented in Lemma 3. The extreme cases are u n,0 = 0 and u n,n+i = 0 for all n and i ≥ 0. The recursive equation is u n,k = ( n k ) + u n−1,k−1 + u n−1,k . Again, the numbers ( n k ) appear in the equation, thus we need the bivariate generating function of binomial coefficients. Let us denote it by B(z, y). It satisfies:

B(z, y) = 1 / (1 − z(1 + y)).

To follow the extreme cases, we must remove the first column k = 0 and the diagonal k = n:

B(z, y) − 1/(1 − z) − 1/(1 − zy) + 1 = zy/((1 − z)(1 − z(1 + y))) − zy/(1 − zy).

By summing the recursive equation and taking care of the extreme cases, we get:

∑_{n≥1} ∑_{k=1}^{n} u n,k z^n y^k = zy/((1 − z)(1 − z(1 + y))) − zy/(1 − zy) + ∑_{n≥1} ∑_{k=1}^{n} u n−1,k−1 z^n y^k + ∑_{n≥1} ∑_{k=1}^{n} u n−1,k z^n y^k ,

that is, U(z, y) = zy/((1 − z)(1 − z(1 + y))) − zy/(1 − zy) + zy U(z, y) + z U(z, y).
We thus deduce an explicit expression for U(z, y). The second step of the proof consists in extracting the coefficient u n,k .
U(z, y) = 1/(1 − z(1 + y)) · ( zy/((1 − z)(1 − z(1 + y))) − zy/(1 − zy) ) = zy/((1 − z)(1 − z(1 + y))^2) − zy/((1 − zy)(1 − z(1 + y))).

By extraction, the coefficient in front of z^n is:

y ∑_{m=0}^{n−1} (m + 1)(1 + y)^m − ∑_{j=1}^{n} y^j (1 + y)^{n−j} .

The latter result corresponds to the distribution of the costs when k varies from 0 to n. We can then extract the coefficient of z^n y^k :

u n,k = k ( n+1 k+1 ) − ( n k−1 ).

This sequence u n,k is a shifted version of the sequence stored under the reference OEIS A059797. We thus can complete its properties in the OEIS using our results.
Due to the values of the extreme cases when k = 0 and k = n and the symmetry in the recurrence, we obviously obtain that u n,k = u n,n−k , reflecting the symmetry of the binomial coefficients.
Corollary 3. The function UNRANKINGRECURSIVE(n, k, ·) needs on average u n,k /( n k ) calls to the function binomial. For n being large and k being of the form α · n for 0 < α < 1, we get:

u n,k /( n k ) = k(n + 1)/(k + 1) − k/(n − k + 1) = n + 1 − 1/α − α/(1 − α) + O(1/n) ∼ n.

Note that, for this algorithm too, the average complexity is only below the worst-case complexity by a constant (when k ∼ αn).
Naturally, to be able to handle large values of n and k, a tail-recursive variant of this algorithm, or an iterative version, should be preferred over the straightforward implementation. Recursion is a drawback in some programming languages that do not handle it efficiently (due to the depth of the call stack). Naturally, other strategies have been suggested in the literature.
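As an illustration of such a strategy (a sketch, names ours), the recursion can be replaced by a simple loop, applying the complement transform on the fly so that the lexicographic combination is produced directly:

```python
from math import comb

def unranking_recursive_iterative(n, k, u):
    """Iterative version of the recursive method, producing the
    lexicographic combination directly (complement applied on the fly)."""
    top = n - 1          # value used for the x -> top - x transform
    R = []
    while k > 0:
        b = comb(n - 1, k - 1)
        if u < b:                    # the current n-1 is selected
            R.append(top - (n - 1))
            n, k = n - 1, k - 1
        else:                        # skip the b combinations containing n-1
            u -= b
            n -= 1
    return R

assert unranking_recursive_iterative(5, 3, 0) == [0, 1, 2]
assert unranking_recursive_iterative(5, 3, 6) == [1, 2, 3]
```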

Unranking through Combinadics
In 1887, E. Pascal [15] and later D. H. Lehmer (detailed in the book [16] (p. 27)) presented an interesting way to decompose a natural number, in what we call today a mixed radix numeral system. In their case, it is the so-called combinatorial number system, or combinadics. The decomposition relies on binomial coefficients.

Fact 5.
Let n ≥ k be two positive integers. For all integers u, with 0 ≤ u < ( n k ), there exists a unique sequence 0 ≤ c 1 < c 2 < · · · < c k < n such that

u = ( c 1 1 ) + ( c 2 2 ) + · · · + ( c k k )

(we extend the definition of binomial coefficients with ( n k ) = 0 when k > n). The finite sequence (c 1 , . . . , c k ) is called the combinadic of u.
For example, when n = 5 and k = 3, the number 8 is represented as ( 1 1 ) + ( 3 2 ) + ( 4 3 ), thus the combinadic of 8 is (1,3,4). In Table 4, we present for various values of u the combinadic of u and the combination of rank u for n = 6 and k = 2. Here, we observe that the exhibited ranking is co-lexicographic and that the combination of rank u can be deduced from the combinadic of u by reversing it and applying the transformation x → n − 1 − x to each of its components.
In 2004, using this representation, McCaffrey exhibited, in the MSDN article [27], an algorithm to build the uth element (in lexicographic order) of the combinations of k elements among n. In fact, this algorithm had already been published by Kreher and Stinson [26] (p. 47) and can also be seen as an extension of the work of Lehmer. This algorithm is interesting as it corresponds to the previous implementation used in the mathematics software system Sagemath [28] (the previous unranking algorithm from Sagemath is stored in the Software Heritage Archive swh:1:cnt:c60366bc03936eede6509b23307321faf1035e23;lines=539-605). At the beginning of 2020, we replaced the Sagemath implementation by the algorithm presented in Section 2 (the new unranking algorithm from Sagemath is stored in the Software Heritage Archive swh:1:cnt:b2a68056554dbf90fa55e76820f348d9d55019e3;lines=539-653 (accessed on 15 March 2021)).
The algorithm simply performs the combinadic decomposition of u and then applies the aforementioned transformation. The idea to get the combinadic of an integer 0 ≤ u < ( n k ) is the following: c k is obtained as the maximum integer such that u ≥ ( c k k ); then, the remaining part u − ( c k k ) is smaller than ( n−1 k−1 ), so it can be decomposed recursively into a combinadic with k − 1 components smaller than n − 1. McCaffrey's algorithm is described in Algorithm 4.

As noted while explaining Table 4, we work with the reverse of the rank u (see Line 3 in the algorithm) in order to unrank combinations in lexicographic order. The presented algorithm is also close to Er's algorithm [23], whose representation for combinations is distinct, but the computations are analogous; furthermore, in his paper, Theorem 2 corresponds exactly to the combinadic decomposition.
The function UNRANKINGVIACOMBINADIC(n, k, u) computes the combination of k elements among n of rank u in lexicographic order, although the core of the algorithm is reverse co-lexicographic. The correctness of the algorithm is stated in [26].
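The combinadic strategy can be sketched in Python as follows (names ours; the greedy search for each component is a plain linear scan): reverse the rank, decompose it greedily from c k down to c 1 , then reverse and complement.

```python
from math import comb

def unranking_via_combinadic(n, k, u):
    """Lexicographic unranking of combinations through the combinadic
    of the reversed rank v = C(n,k) - 1 - u."""
    v = comb(n, k) - 1 - u
    digits = []                 # combinadic components, largest (c_k) first
    c = n - 1
    for i in range(k, 0, -1):
        while comb(c, i) > v:   # c_i is the largest c with C(c, i) <= v
            c -= 1
        v -= comb(c, i)
        digits.append(c)
        c -= 1                  # next component must be strictly smaller
    # reverse and complement (x -> n-1-x) to get the lexicographic result
    return [n - 1 - c for c in digits]

assert unranking_via_combinadic(5, 3, 0) == [0, 1, 2]
assert unranking_via_combinadic(5, 3, 9) == [2, 3, 4]
```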
Again, we express the complexity of this algorithm as its number of calls to the binomial function. First note that, the values n and k being given, the worst cases are obtained when v gets as small as possible at the end of the loop, thus for all u whose combinadic satisfies c 1 = 0. Hence, the worst case complexity is n − 1. Again, we complete this analysis, by computing the average complexity of the algorithm. To reach this goal, we again introduce the sequence u n,k computing the cumulative number of calls to binomial when u ranges from 0 to ( n k ) − 1.

Lemma 4.
Let u n,k be the cumulative number of calls to binomial while unranking all possible u from 0 to ( n k ) − 1. The sequence satisfies u n,0 = 1 and u n,n+i = 0 for all n and i > 0, and otherwise:

u n,k = ( n k ) + u n−1,k−1 + u n−1,k .

Table 5 presents the sequence given in Lemma 4. The difference with the two sequences studied before lies in the extreme cases. This sequence is a shifted version of the sequence OEIS A264751. Both combinatorial objects can be put in bijection, and thus some conjectures stated there are solved in the following.

Proof. Note that n − c k calls to binomial are necessary to determine c k , then c k − c k−1 calls to determine c k−1 , . . . , and finally c 2 − c 1 to determine c 1 . Hence, the total number of binomial coefficient evaluations necessary to compute the combinadic of u is n − c 1 (and thus only depends on c 1 ). Besides, for a given j ≥ 0, the number of finite sequences c 1 = j < c 2 < c 3 < · · · < c k < n is equal to the number of sequences 0 ≤ c 2 < c 3 < · · · < c k < n − j − 1 by the change of variable c i ← c i − j − 1. Hence, this number is equal to ( n−1−j k−1 ). In addition, there is a first call to binomial at the beginning of the algorithm to reverse the rank, regardless of the value of u. We thus obtain:

u n,k = ( n k ) + ∑_{j≥0} (n − j) ( n−1−j k−1 ).

Using the latter equation, the recursive equation is directly proved by induction.
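The counting argument of the proof can be checked by instrumenting a combinadic decomposition so that every binomial evaluation is counted (a sketch, names ours): each rank costs 1 call for the rank reversal plus n − c 1 calls for the trials.

```python
from math import comb

def count_binomial_calls(n, k):
    """Total number of binomial evaluations made by the combinadic
    decomposition (Algorithm 4 style) over all ranks u."""
    total = 0
    for u in range(comb(n, k)):
        v = comb(n, k) - 1 - u
        calls = 1                    # initial call to reverse the rank
        c = n - 1
        for i in range(k, 0, -1):
            while True:
                calls += 1           # one trial = one binomial evaluation
                if comb(c, i) <= v:
                    break
                c -= 1
            v -= comb(c, i)
            c -= 1
        total += calls
    return total

# matches the recurrence with u_{n,0} = 1: u_{2,1} = C(2,1) + 1 + 2 = 5
assert count_binomial_calls(2, 1) == 5
```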
Note that, in this case, the cumulative numbers are not symmetrical: u n,k ≠ u n,n−k . In fact, the computation of the combinadics is not symmetrical.

Theorem 6. Let U(z, y) be the generating function associated with (u n,k ). Then,

U(z, y) = zy/((1 − z)(1 − z(1 + y))^2) + 1/(1 − z(1 + y)),   and thus   u n,k = k ( n+1 k+1 ) + ( n k ).

The proof is the same as for Theorem 3. The values for u n,k are a bit different due to the extreme cases u n,0 .

Corollary 4. The average number of calls to binomial in Algorithm 4, for n being large and k being of the form αn for 0 < α < 1, is

u n,k /( n k ) = k(n + 1)/(k + 1) + 1 = n + 2 − 1/α + O(1/n) ∼ n.

In the literature, another algorithm based on combinadics is given in [22]. We provide a pseudo-code equivalent of the original Fortran algorithm in Algorithm 5. Note that the algorithm does not handle the case k = 1, which should thus be treated separately. There, in the computation of the combinadic for a given rank, the coefficients are computed from the smallest one, c 1 , to the second largest one, c k−1 , and finally the value for c k is directly deduced with no need for further trials. In this algorithm, the variable L contains the combinadic of u (not its reverse). We note two differences with Algorithm 4. First, the last coefficient (c k ) is directly computed without "trying" the different possible values as for the previous coefficients (see Line 13). Second, it uses a combinatorial argument to find the value of the c i coefficients that is complementary to the argument used in the previous algorithm: the number of sequences 0 ≤ c 1 < c 2 < · · · < c k < n with c 1 ≤ j is equal to ∑_{i=0}^{j} ( n−i−1 k−1 ), hence the accumulation performed in the r variable. The fact that the same combinatorial argument can be used in two different ways here has to be put in parallel with Remark 1 at the beginning of this section.
The first point mentioned above is an improvement over Algorithm 4, but in fact this second algorithm is penalized by the extra addition that it has to perform and then undo on Line 12 to find each c i . With this approach, it is mandatory to compute the accumulation of binomial coefficients in r until it becomes greater than u, in order to know when to exit the loop. Our experiments in the next section show that this has a noticeable impact on performance.
Algorithm 5 Unranking a combination (alternative algorithm). UNRANKINGVIACOMBINADIC2(n, k, u) is a lexicographic unranking algorithm for combinations. This algorithm was presented by Buckles and Lybanon [22], and its correctness is proved in [26]. Finally, note that it is approximately the implementation in Matlab [29], whose code was also presented by Ruskey ([7], p. 65). The latter approach also performs trials to find the last coefficient of the combination instead of computing it directly on Line 13. Lemma 5. Let $u_{n,k}$ be the cumulative number of calls to binomial while unranking all possible ranks $u$ from $0$ to $\binom{n}{k} - 1$. For all $n$, the sequence satisfies $u_{n,k} = 0$ when $k \le 1$ or $k > n$, and otherwise $u_{n,k} = \binom{n}{k} + u_{n-1,k-1} + u_{n-1,k}$.
The result is proved analogously to Lemma 4, summing over $c_{k-1}$ instead of $c_1$. In Table 6, we compute the first values of $(u_{n,k})$. We note that these first values are smaller than the previous ones, and Theorem 7 gives their asymptotic behavior. Table 6. First values of $u_{n,k}$ for $n = 1 \dots 8$ and $k = 0 \dots n$ (Algorithm 5). Theorem 7. The generating function associated with $(u_{n,k})$ is rational with denominator $(1 - z)^2 (1 - z - zy)^2$; thus $u_{n,k} = \binom{n}{k} \, \frac{k-1}{k+1} \, (n+1)$.
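The recurrence of Lemma 5 is straightforward to tabulate. Below is a small sketch (ours, not taken from the paper's implementation) that assumes the recurrence applies for all $k \ge 2$, with $u_{n,k} = 0$ for $k \le 1$ or $k > n$ (the case $k = 1$ being handled separately by the algorithm); under that assumption the tabulated values satisfy the closed form $\binom{n}{k}\frac{k-1}{k+1}(n+1)$ exactly.

```python
from math import comb

def u_table(N):
    # u[n][k]: cumulative number of calls to binomial over all ranks
    # (Lemma 5 recurrence, assuming u_{n,k} = 0 for k <= 1 or k > n)
    u = [[0] * (N + 2) for _ in range(N + 2)]
    for n in range(1, N + 1):
        for k in range(2, n + 1):
            u[n][k] = comb(n, k) + u[n - 1][k - 1] + u[n - 1][k]
    return u

u = u_table(8)
# Under this assumption the values match the closed form
# u_{n,k} = C(n,k) * (k-1)/(k+1) * (n+1) for 2 <= k <= n.
for n in range(2, 9):
    for k in range(2, n + 1):
        assert u[n][k] * (k + 1) == comb(n, k) * (k - 1) * (n + 1)
```

The average number of calls for a given pair $(n, k)$ is then $u_{n,k} / \binom{n}{k}$, which is how the corollaries below are obtained.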

Corollary 5.
The average number of calls to binomial in Algorithm 5, for $n$ large and $k$ of the form $\alpha n$ with $0 < \alpha < 1$, is again asymptotically equivalent to $n$. The improvement in the efficiency of the algorithm is thus seen only in the second-order term, in comparison with Corollary 4, on average. Let us conclude this part with an interesting remark.

Remark 2.
In our Algorithm 3, while unranking $k$ elements among $n$, we combinatorially see $\binom{n}{k}$ as first $\binom{n-1}{k-1}$ and then $\binom{n-1}{k}$. This is the same approach as in Algorithm 5. However, as explained after Algorithm 3, we could instead have first considered $\binom{n-1}{k}$ and then $\binom{n-1}{k-1}$. The algorithm would then proceed in a reverse co-lexicographic fashion, following exactly the same approach as Algorithm 4.
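To make the decomposition concrete, here is a minimal lexicographic unranking sketch (ours, written to illustrate the principle rather than to reproduce any of the pseudo-codes above): reading $\binom{n}{k}$ as $\binom{n-1}{k-1} + \binom{n-1}{k}$ amounts to testing whether the current smallest value belongs to the combination.

```python
from math import comb

def unrank_lex(n, k, u):
    # Greedy lexicographic unranking over {0, ..., n-1}.
    # Assumes 0 <= u < comb(n, k).
    res, v = [], 0
    while k > 0:
        b = comb(n - 1, k - 1)   # combinations whose smallest element is v
        if u < b:
            res.append(v)        # v is in the combination
            k -= 1
        else:
            u -= b               # skip v and all combinations containing it
        n -= 1
        v += 1
    return res
```

For instance, `unrank_lex(5, 3, 0)` returns `[0, 1, 2]` and `unrank_lex(5, 3, 9)` returns `[2, 3, 4]`, matching the lexicographic order on increasing sequences.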

Improving Efficiency and Realistic Complexity Analysis
In this section, we show that the complexity model used above is no longer adequate. Although the approximation that computing one binomial coefficient has a constant cost seems to have been sufficient in the past (due to the limitation of the implementations to 32-bit integers; see the work of McCaffrey [27], who seems to be the first to introduce combination unranking using big integer computations), it is no longer a valid model now that big integers are widely used. We discuss the impact of big integer arithmetic and some optimizations that significantly improve the performance of all algorithms.
From the last two sections, we know the different algorithms that are mostly used in practice. However, bearing in mind the results exhibited in Table 1, implementations may vary a lot. Obviously, using a specialized library for big integer arithmetic such as the GMP C library is the best way to carry out fast operations, but the distinct behaviors observed in the table suggest that deeper differences exist between the various implementations than just the choice of library.
For all algorithms, we prove that the average number of calls to the binomial function is equivalent to n (when k grows linearly). A first question to investigate is about this complexity measure: Is it reasonable? An obvious approach consists in comparing these results with the actual run time of the algorithms.

First Experiments to Visualize the Time Complexity in a Real Context
In the next two plots in Figure 1, we present the average time needed for the computation of 500 combinations for each pair (n, k = n/2), where n ranges from 250 to 10,000 with a step size of 250. The choice k = n/2 was made because it corresponds to the worst-case complexity as k varies from 0 to n. To conduct our experiments, we implemented all algorithms in C using the classical GMP library for big integer arithmetic (the implementation and the exhaustive material needed to repeat the experiments are available at http://github.com/Kerl13/combination_unranking (accessed on 15 March 2021)). The experiments were run on a standard laptop PC with an i7-8665U CPU and 32 GB of RAM, running Ubuntu Linux 2020. Moreover, except for Table 1, which required running different computer algebra systems and thus the X server, the experiments were run in the console while the X server and other time-consuming services (Wi-Fi, sound, etc.) were off. In the leftmost plot, we present the average time for the unranking of a combination. Note that all the curves are merged: all algorithms seem to need almost exactly the same time to unrank a combination. In the rightmost plot, we present a version with memoization (memoization is an optimization technique consisting in storing the results of expensive function calls and returning the cached values when the function is called again with the same arguments; this technique is sometimes called "tabling"): we first run a pre-computation step storing all binomial coefficients that will be used, and the second step then unranks the combinations using the pre-calculated values. In our plot, we present only the time used for the second step. We remark that the recursive Algorithm 3 is not as efficient as the others. Below, we show that this is due to its recursive nature and that this can easily be avoided.
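For illustration, a memoized binomial based on Pascal's rule can be sketched as follows (a toy version of the idea; the actual benchmarked implementations pre-compute the coefficients with GMP, and an iterative table fill is preferable for large n to avoid deep recursion):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def binom_cached(n, k):
    # Pascal's rule; each pair (n, k) is computed once, then served
    # from the cache on every subsequent call.
    if k < 0 or k > n:
        return 0
    if k == 0 or k == n:
        return 1
    return binom_cached(n - 1, k - 1) + binom_cached(n - 1, k)
```

After the first call with a given pair, every repeated query costs a dictionary lookup instead of an arithmetic computation, which is exactly what the pre-computation step above exploits.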
As explained above, Algorithm 5 should be less efficient than Algorithm 4, which is indeed apparent in the plots. Our Algorithms 2 and 4 are the most efficient in this setup. While the number of evaluations of binomial coefficients is linear in n, the time complexity of the algorithms clearly is not (see the leftmost plot). However, once the binomial coefficients have been pre-computed, the running time of the four algorithms is closer to a linear function, though still not exactly linear.
While memoizing all binomial coefficients is possible when n is of the order of a few hundred, it is no longer the case when n is of the order of several thousand (for the experiments in Figure 1, we only computed the necessary binomial coefficients using a lazy approach). Such use cases did not occur when the methods (e.g., [22,23]) were derived. However, now that big integers are more widely used, it becomes reasonable to unrank combinations of large size and it is thus necessary to take the cost of arithmetic operations into account. The first use of big integers for combination unranking appears to date back to 2004 [27].

Improving the Implementations of the Algorithms
Before going further with the experiments, we propose an improvement in the computation of the binomial coefficient, which is applicable to all algorithms presented in this paper.
In all of the presented algorithms, a binomial coefficient is computed at each step of the generation. There are various ways to implement this computation. One possibility is to compute the two products n · (n − 1) · (n − 2) · · · (n − k + 1) and k! separately, using a divide-and-conquer strategy as described in [30] (Section 15.3), and then to compute their quotient.
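A sketch of that strategy follows (the helper names are ours): both the falling factorial and k! are computed by recursively splitting the index range, which keeps the operands of each multiplication balanced in size, and the final division is exact.

```python
def range_product(lo, hi):
    # Product lo * (lo+1) * ... * hi by divide and conquer:
    # balanced operands keep the big-integer multiplications cheap.
    if lo > hi:
        return 1
    if lo == hi:
        return lo
    mid = (lo + hi) // 2
    return range_product(lo, mid) * range_product(mid + 1, hi)

def binomial_dc(n, k):
    if k < 0 or k > n:
        return 0
    k = min(k, n - k)   # use the symmetry C(n, k) = C(n, n-k)
    if k == 0:
        return 1
    # n * (n-1) * ... * (n-k+1) divided by k!; the division is exact
    return range_product(n - k + 1, n) // range_product(1, k)
```

For example, `binomial_dc(10, 3)` computes `8 * 9 * 10 = 720` and divides by `3! = 6`, yielding `120`.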
In the unranking algorithms, instead of doing the "full" evaluation of the binomial coefficient at each step, it is possible to reuse the computations from the previous step and obtain the value of the new coefficient by a constant number of multiplications or divisions by a small integer. This is possible because in all algorithms the parameters of the binomial coefficients vary only by ±1 from one step to the next. For instance, in Algorithm 2, just before incrementing i on Line 9, the coefficient for the next iteration can be obtained by multiplying b by k − 1 − i and dividing it by n − 1 − m − i. Similarly, in the other branch of the if, just before incrementing m on Line 12, the value of the coefficient for the next iteration can be obtained by multiplying b by n − m − k and dividing it by n − 1 − m − i. In the end, only the first binomial coefficient is computed from scratch and all others are obtained as described above. This lowers the amortized cost of computing one coefficient to Θ(1) multiplication of a big integer by a small integer. This optimization is applicable to all the algorithms presented in this paper.
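The neighbor moves can be written generically from the standard quotient identities (stated here for a generic $\binom{n}{k}$ rather than for the loop variables of Algorithm 2): $\binom{n-1}{k-1} = \binom{n}{k} \cdot \frac{k}{n}$ and $\binom{n-1}{k} = \binom{n}{k} \cdot \frac{n-k}{n}$, each costing one multiplication and one exact division of a big integer by a small integer.

```python
from math import comb

def step_up_left(b, n, k):
    # comb(n, k) -> comb(n-1, k-1): multiply by k, divide by n (exact)
    return b * k // n

def step_up(b, n, k):
    # comb(n, k) -> comb(n-1, k): multiply by (n-k), divide by n (exact)
    return b * (n - k) // n

b = comb(40, 12)
assert step_up_left(b, 40, 12) == comb(39, 11)
assert step_up(b, 40, 12) == comb(39, 12)
```

Multiplying before dividing keeps every intermediate value an integer, so no rational arithmetic is needed.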
In Algorithm 6, we propose an optimized version of our Algorithm 2 based on the above remarks. It also includes some other enhancements. First, instead of computing the permutation rank of the combination and then unranking the combination as a permutation, it is possible to process the coefficients of the factoradic decomposition on the fly to extract the right value from the set {0, 1, 2, . . . , n − 1}. Second, we can note that we are in a special use-case of the EXTRACT function where values are extracted in increasing order. Hence, there is no need to explicitly store the set of remaining values (not yet in the combination) to get access to its mth value: it is m + i where i is the number of already extracted values. Finally, the last component of the combination can be deduced without computing more binomial coefficients (Line 15) in order to leave the loop earlier.
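Combining the greedy decomposition with the amortized coefficient updates gives the following sketch (ours; it mirrors the spirit of Algorithm 6 rather than its exact pseudo-code): only the first binomial coefficient is computed from scratch, and each subsequent one is derived by a single small multiplication and an exact division.

```python
from math import comb

def unrank_lex_fast(n, k, u):
    # Lexicographic unranking of the rank-u combination of k elements
    # among {0, ..., n-1}; assumes 0 <= u < comb(n, k) and k >= 1.
    res, v = [], 0
    b = comb(n - 1, k - 1)   # the only coefficient computed from scratch
    while k > 0:
        if u < b:
            res.append(v)                  # v belongs to the combination
            if k == 1:
                break                      # last element found, leave early
            b = b * (k - 1) // (n - 1)     # comb(n-2, k-2) from comb(n-1, k-1)
            k -= 1
        else:
            u -= b                         # skip v
            b = b * (n - k) // (n - 1)     # comb(n-2, k-1) from comb(n-1, k-1)
        n -= 1
        v += 1
    return res
```

The two in-loop updates are exact integer divisions, so the amortized cost per step is one multiplication of a big integer by a small one, as discussed above.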
Before going on with the efficiency comparisons, we use the remarks related to the binomial coefficient computation to improve Algorithms 3-5. Furthermore, to make the comparison fair, we used a variant of Algorithm 3 for the recursive approach in which the array storing the result is allocated with the right size at the beginning of the execution, as in the other algorithms. In addition, in order not to penalize it due to its recursive nature, the order of some instructions has been changed so that it is tail-recursive. The optimized version is given in Algorithm 7.
In UNRANKTR, the variable i represents the position in L of the next value to be computed, and the variable m represents the next candidate for the value of L[i]. The invariant satisfied by b is that UNRANKTR is always called with $b = n \cdot \binom{n-1}{k-1}$.
We propose a second time-efficiency comparison for the algorithms with their optimizations in Figure 2. Comparing this plot with the leftmost one of Figure 1, we note that, when n = 10,000, the algorithms run approximately 45 times faster. Again, we note that Algorithm 5 is still less efficient than the others, which all seem to be equivalent. To understand the behavior of the curves of the above plot, we introduce another way to analyze the time complexity in Figure 3. Now, we take n = 10,000 and let k range from 250 to 9750 with a step of 250. For each step, we present an average value over 500 tests. Again (in the leftmost plot), the dashed lines correspond to the first version of each algorithm and the solid lines to the optimized versions. In the rightmost plot, we focus only on the optimized versions of the four algorithms. We remark that the worst time complexity is obtained when k = n/2. In addition, Algorithms 4 and 6 and the tail-recursive Algorithm 7 behave in almost the same way. Whereas Algorithm 4 is a little more efficient when k < n/2, the two others are a bit more efficient on the second half of the range. Thus, the curves for Algorithms 4, 6, and 7 (once optimized) are hard to distinguish. There is a surprising explanation for this. In fact, it can be shown that all three algorithms perform the exact same arithmetic operations on big integers, except for a few terms at the end of their execution, due to their different base cases. In the case of the recursive and factoradic-based algorithms, the similarity goes further. Since Algorithm 7 is tail-recursive, it can be automatically translated into imperative style (the conversion of a tail-recursive algorithm into its imperative version is called "tail-call" optimization and is implemented by most compilers for languages with recursion), and the result of this automatic translation is an algorithm that is almost identical to Algorithm 2.
In the case of Algorithm 4 (once optimized), although the computations are the same, they are used to obtain the values of the c i coefficients in a different order, which makes the parallel less obvious.
As a result, the only fundamental differences between all these algorithms are their base cases. Since they are all based on a different combinatorial point of view, their base cases have been characterized slightly differently, but, as we can see by comparing the results of their complexity analyses (establishing how many calls to binomial are done), this only impacts the second order of their complexity.
Besides, for all algorithms, a significant speed-up can be achieved by re-using the value of the most recently computed binomial coefficient.

Realistic Complexity Analysis
We now propose a more precise complexity analysis based on a more realistic cost model. Recall that we are dealing with big integers. More precisely, for given $n$ and $k$, the ranks as well as the binomial coefficients computed during the generation can have up to $L_{n,k} = 1 + \lfloor \log_2 \binom{n}{k} \rfloor$ bits. Using Stirling's approximation, we get that, for $k = \alpha n$ as $n \to \infty$, $L_{n,k} \sim n \left( \alpha \log_2 \frac{1}{\alpha} + (1 - \alpha) \log_2 \frac{1}{1 - \alpha} \right)$. Besides, the cost of multiplying a big integer of $O(n)$ bits by a smaller integer of $O(\ln n)$ bits can be bounded by $O\!\left(\frac{n}{\ln n} M(\ln n)\right)$, where $M(x)$ is the cost of multiplying two $x$-bit integers. This can be achieved by writing the big integer in base $2^{\log_2 n} = n$ and performing the multiplication using the naive "textbook" algorithm in this base. The first term $\frac{n}{\ln n}$ counts the number of operations done in base $n$, and the second term $M(\ln n)$ is the cost of one single multiplication in this base.
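The base-rewriting argument can be sketched directly (a toy model with a $2^{16}$ limb base instead of a machine-word base; the helper names are ours): multiplying a big integer, stored as a little-endian list of limbs, by a small integer is one linear pass with a carry, i.e., one word-sized multiplication per limb.

```python
BASE = 2 ** 16  # toy limb base; a real implementation uses machine words

def to_limbs(x):
    limbs = []
    while x:
        x, low = divmod(x, BASE)
        limbs.append(low)
    return limbs or [0]

def from_limbs(limbs):
    x = 0
    for d in reversed(limbs):
        x = x * BASE + d
    return x

def mul_small(limbs, m):
    # Schoolbook multiplication of a limb list (little-endian) by a
    # small integer m: one pass, one word multiplication per limb,
    # plus carry propagation.
    out, carry = [], 0
    for d in limbs:
        carry, low = divmod(d * m + carry, BASE)
        out.append(low)
    while carry:
        carry, low = divmod(carry, BASE)
        out.append(low)
    return out
```

The pass visits one limb per base-$n$ digit of the big operand, which is where the $\frac{n}{\ln n}$ factor in the bound comes from.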
A rough upper bound for M(x) is x 2 , obtained by using the naive multiplication algorithm. Actually, the naive algorithm is often used in practice for small values of x since the asymptotically more efficient algorithms only become better above a given threshold. For our use-case, n is likely to fit in a machine word in practice and thus the naive algorithm must be used. Hence, the upper bound of O(n ln n) for the cost of the multiplication of a small integer by a big integer should faithfully reflect the actual runtime of our implementations, although it is theoretically not optimal. A tighter bound of O(n(ln ln n)(ln ln ln n)) can be obtained by using the Schönhage-Strassen algorithm, although it is not advisable in practice.
In addition to the cost of the multiplications discussed above, a linear number of comparisons and additions is performed. The cost of one such operation is linear in $n$ and thus negligible compared to the cost of the multiplications. Finally, the first binomial coefficient must be computed from scratch, which can be done at negligible cost compared to $\frac{n^2}{\ln n} M(\ln n)$. By combining the above discussion with the results from the previous sections, we get the average bit-complexity of all algorithms when $k$ grows linearly with $n$, as presented in Theorem 8.

Theorem 8.
For all the optimized algorithms of the present paper, there exists a constant $c > 0$ such that, for all $n$ large enough and $k = \alpha n$ with $0 < \alpha < 1$, the bit-complexity of the algorithm is bounded by $c \cdot n^2 \ln n \cdot \left( \alpha \log_2 \frac{1}{\alpha} + (1 - \alpha) \log_2 \frac{1}{1 - \alpha} \right)$. In Figure 4, we display the time complexity of our algorithm (in green) together with the graph of the function $\alpha \mapsto C \cdot \left( \alpha \ln \frac{1}{\alpha} + (1 - \alpha) \ln \frac{1}{1 - \alpha} \right)$, where the constant $C$ was chosen so that the maximum values of both curves coincide.
This validates experimentally our complexity results. Besides, we checked by profiling the optimized algorithms that most of the run time is spent in the arithmetic operations, which also confirms the validity of our complexity model.

Objects Counted by Multinomial Coefficients
Let n and m be two positive integers and let K = (k 1 , k 2 , . . . , k m ) be a finite sequence of non-negative integers whose sum equals n. The multinomial coefficient $\binom{n}{k_1,\dots,k_m}$ counts the number of ways of depositing n distinct objects into m distinct buckets such that there are $k_i$ objects in the ith bucket. It can also be interpreted as a combination with repetitions as follows. Suppose we have a (large enough) pool of m kinds of different objects and we must pick a finite sequence of n objects from this pool such that $k_1$ of them are of the first kind, $k_2$ of the second kind, and so on.
Note that, when m = 2, the multinomial coefficient reduces to a binomial coefficient. Informally, Proposition 1 (related to combinations) states that combinations are in one-to-one correspondence with permutations containing two increasing runs. We have an analogous interpretation here: the ranks from 0 to $\binom{n}{k_1,\dots,k_m} - 1$ are in one-to-one correspondence with size-n permutations composed of m increasing runs. We now exhibit this link more formally. Proposition 4. Let n and m be two positive integers and let K = (k 1 , k 2 , . . . , k m ) be a sequence of non-negative integers such that $\sum_i k_i = n$. Each object enumerated by $\binom{n}{k_1,\dots,k_m}$ is represented by a finite sequence $(\ell_1, \ell_2, \dots, \ell_m)$ where, for all $1 \le i \le m$, $\ell_i$ is an increasing finite sequence of length $k_i$. Furthermore, the union over i of all elements of the $\ell_i$ is exactly {0, 1, . . . , n − 1}.
Our unranking algorithm relies on the classical formula expressing the multinomial coefficient as a product of binomial coefficients.
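That classical formula, $\binom{n}{k_1,\dots,k_m} = \binom{n}{k_1} \binom{n-k_1}{k_2} \cdots \binom{k_m}{k_m}$ (choose $k_1$ of the $n$ positions, then $k_2$ of the remaining $n - k_1$, and so on), can be sketched as follows (helper name ours):

```python
from math import comb, factorial

def multinomial(ks):
    # Product-of-binomials formula: choose k1 among n, then k2 among
    # n - k1, and so on, until the pool of positions is exhausted.
    total, rem = 1, sum(ks)
    for k in ks:
        total *= comb(rem, k)
        rem -= k
    return total

# Sanity check against the factorial definition n! / (k1! ... km!)
ks = [3, 2, 4]
num = factorial(sum(ks))
for k in ks:
    num //= factorial(k)
assert multinomial(ks) == num   # both equal 1260
```

Each factor is a binomial coefficient, so the incremental evaluation techniques discussed in the previous section apply term by term.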
Algorithm 8 Unranking a combination with repetitions. Its key steps are: F′ ← factoradic(RANKCONVERSION(n, k_i, u)); 10: for j from 0 to k_i − 1 do 11: F[n − k_i + j] ← F′[n − k_i + j]; 12: r ← composition(F); 13: return UNRANKINGPERMUTATION(n, r). Here, division(s, t) returns the pair (q, r) corresponding respectively to the quotient and the remainder of the integer division of s by t.
The core of the algorithm consists in computing the rank of the permutation, written in factoradics, associated with the combination with repetitions we are interested in. It remains then to unrank a permutation.
Proof. Based on Proposition 4, the core of the algorithm computes the rank of a permutation containing m increasing runs of respective lengths k 1 , k 2 , . . . , k m . Determining the contribution of each run to the factoradic decomposition of the permutation rank is done in the external loop starting at Line 5, running from $k_m$ down to $k_1$. The correctness of our algorithm relies on the following loop invariant:
• The values of $f_j$ for all $0 \le j < k_m + \cdots + k_{i+1}$ have been computed and stored in F.
• The values of $f_{k_m + \cdots + k_{i+1}}, \dots, f_{k_m + \cdots + k_{i+1} + k_i - 1}$ (which have not been determined yet, as we enter the loop) are equal to the factoradic of the rank $u \bmod \binom{k_m + \cdots + k_i}{k_i}$ in the combinations of $k_i$ elements among $k_m + \cdots + k_i$ possible elements.
• The variable u holds the rank of the runs that still must be unranked.
Once the factoradic F′ of the run under consideration has been computed (Line 10), it remains to update F according to the $k_i$ last components of F′.

Objects Counted by k-Permutations
Let n and k be two positive integers. A k-permutation is given by a combination of k elements among n, together with an ordering of these k elements. The number of k-permutations over a set of n elements is given by: $k! \, \binom{n}{k} = \binom{n}{1, \dots, 1, n-k}$ (with k ones in the multinomial coefficient).
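The counting identity can be checked directly; note that it also equals the falling factorial $n!/(n-k)!$, which is what Python's `math.perm` computes (helper names ours):

```python
from math import comb, factorial, perm

def k_permutation_count(n, k):
    # k! * C(n, k): pick the k-subset, then order its elements
    return factorial(k) * comb(n, k)

def multinomial_form(n, k):
    # C(n; 1, ..., 1, n-k) with k ones: n! / (1!^k * (n-k)!)
    return factorial(n) // factorial(n - k)

assert k_permutation_count(10, 4) == multinomial_form(10, 4) == perm(10, 4) == 5040
```

This is why unranking a k-permutation reduces to unranking a combination with repetitions for the sequence (1, . . . , 1, n − k).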
Proof. The fact that both combinatorial classes have the same cardinality, together with the fact that the algorithm for unranking combinations with repetitions is lexicographic, implies the correctness of the function.

Conclusions
In the present article, we first exhibit, in Section 2, a new algorithm based on the classical factoradic numeral system for the lexicographic unranking of combinations. We also give a first-step complexity analysis of the algorithm. Section 3 is dedicated to the survey of three classical algorithms. The first one is based on the so-called recursive method. The other two are based on combinadics, which is a numeral system particularly suited to representing combinations. To the best of our knowledge, this is the first time that these two algorithms have been compared. During the survey, we extend our complexity analysis based on a generating function approach, as presented in Section 2, in order to obtain a uniform presentation of the analysis of all four algorithms. Obviously, we thus reprove their average complexity results, but with more details. Then, in Section 4, we compare their efficiency experimentally and propose a way to improve all these algorithms based on classical formulas for the binomial coefficient.
As a surprising result, we find that all the usual algorithms share a very similar core, doing almost the same computations in order to reconstruct the combination under consideration.
One interesting remark is that, by understanding in detail the core computations necessary to unrank combinations, it is possible to significantly improve all algorithms. This understanding, combined with a detailed and realistic theoretical complexity analysis, leads to a prediction of the run time of the algorithms that closely matches the actual run time of their implementations.
However, due to details that are neglected in practice, Table 1 shows that some improvements are still necessary in various computer algebra systems in order to obtain the most efficient implementations possible for the unranking of combinatorial objects.
Finally, in Section 5, we extend our approach to solve the problem of unranking structures enumerated by multinomial coefficients and objects counted by the k-permutations of n elements (also called arrangements).
Author Contributions: The authors contributed equally to this work. All authors have read and agreed to the published version of the manuscript.
Funding: This research received no external funding.

Data Availability Statement:
Publicly available datasets were analyzed in this study. This data can be found here: http://github.com/Kerl13/combination_unranking (accessed on 15 March 2021).