Optimal Prefix Free Codes With Partial Sorting

We describe an algorithm computing an optimal prefix free code for $n$ unsorted positive weights in time within $O(n(1+\lg \alpha))\subseteq O(n\lg n)$, where the alternation $\alpha\in[1..n-1]$ measures the amount of sorting required by the computation. This asymptotic complexity is within a constant factor of the optimal in the algebraic decision tree computational model, in the worst case over all instances of size $n$ and alternation $\alpha$. Such results refine the state of the art complexity of $\Theta(n\lg n)$ in the worst case over instances of size $n$ in the same computational model, a landmark in compression and coding since 1952, by the mere combination of van Leeuwen's algorithm to compute optimal prefix free codes from sorted weights (known since 1976) with Deferred Data Structures to partially sort a multiset depending on the queries on it (known since 1988).


Introduction
Given $n$ positive weights $W[1..n]$ coding for the frequencies of $n$ messages, and a number $D$ of output symbols, an OPTIMAL PREFIX FREE CODE [13] is a set of $n$ code strings on alphabet $[1..D]$, of variable lengths $L[1..n]$, such that no string is a prefix of another and the average length of a code is minimized (i.e. $\sum_{i=1}^n L[i]W[i]$ is minimal). The particularity of such codes is that even though the code strings assigned to the messages can differ in lengths (assigning shorter ones to more frequent messages yields compression to $\sum_{i=1}^n L[i]W[i]$ symbols), the prefix free property ensures a non-ambiguous decoding.
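The non-ambiguity granted by the prefix free property can be illustrated with a toy example (ours, not from the paper; the code strings and message names are made up): greedy decoding never has to backtrack, because at most one codeword can match the buffered bits.

```python
# Toy prefix free code: no code string is a prefix of another,
# and the Kraft sum 2^-1 + 2^-2 + 2^-2 equals 1.
code = {"a": "0", "b": "10", "c": "11"}

def encode(message, code):
    return "".join(code[m] for m in message)

def decode(bits, code):
    inverse = {c: m for m, c in code.items()}
    out, buffer = [], ""
    for bit in bits:
        buffer += bit
        if buffer in inverse:   # prefix freeness: at most one codeword matches
            out.append(inverse[buffer])
            buffer = ""
    assert buffer == "", "bitstream ended in the middle of a codeword"
    return "".join(out)

print(decode(encode("abca", code), code))  # round-trips to "abca"
```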
Such optimal codes, known since 1952 [13], are used in "all the mainstream compression formats" [8] (e.g. PNG, JPEG, MP3, MPEG, GZIP and PKZIP). The concept is "one of the fundamental ideas that people in computer science and data communications are using all the time" (Knuth [23]), and the code itself is "one of the enduring techniques of data compression. It was used in the venerable PACK compression program, authored by Szymanski in 1978, and remains no less popular today" (Moffat et al. [20] in 1997).

Previous works
Any prefix free code can be computed in linear time from a set of code lengths satisfying the Kraft inequality $\sum_{i=1}^n D^{-L[i]} \le 1$. The original description of the code by Huffman [13] yields a heap-based algorithm performing $O(n\log n)$ algebraic operations, using the bijection between $D$-ary prefix free codes and $D$-ary cardinal trees [11]. This complexity is asymptotically optimal for any constant value of $D$ in the algebraic decision tree model, in the worst case over instances composed of $n$ positive weights, as computing an optimal binary prefix free code for the $Dn$ weights $\{D^{x_1}, \dots, D^{x_1}, D^{x_2}, \dots, D^{x_2}, \dots, D^{x_n}, \dots, D^{x_n}\}$ (each value repeated $D$ times) is equivalent to sorting the positive integers $\{x_1, \dots, x_n\}$. We consider here only the binary case, where $D = 2$. Not all instances require the same amount of work to compute an optimal code (see Table 1 for a partial list of relevant results):
- When the weights are given in sorted order, van Leeuwen [16] showed that an optimal code can be computed using within $O(n)$ algebraic operations.
- When the weights consist of $r \in [1..n]$ distinct values and are given in a sorted, compressed form, Moffat and Turpin [21] showed how to compute an optimal code using within $O(r(1 + \log(n/r)))$ algebraic operations, which is often sublinear in $n$.
- When the weights are given unsorted, Belal et al. [5,6] described several families of instances for which an optimal prefix free code can be computed in linear time, along with an algorithm claimed to perform $O(kn)$ algebraic operations, in the worst case over instances formed by $n$ weights such that there is an optimal binary prefix free code with $k$ distinct code lengths. This complexity was later downgraded to $O(16^k n)$ in an extended version [4] of their article. Both results are better than the state of the art when $k$ is finite, but worse when $k$ is larger than $\log n$.
Table 1. A selection of results on the computational complexity of optimal prefix free codes. $k$ is the number of distinct codelengths produced. $\alpha = |S|_{EI} \in [1..n-1]$ is a difficulty measure: the number of alternations between External and Internal nodes in an execution of van Leeuwen's algorithm [16]. Note that there can be various optimal codes for any given set of weights, each with a distinct number of distinct code lengths $k$.
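For reference, the classical heap-based method described above can be sketched in a few lines (a minimal Python illustration of ours; identifiers and the tie-breaking by insertion order are our own choices):

```python
import heapq

def huffman_codelengths(weights):
    """Heap-based sketch of Huffman's method for D = 2: repeatedly merge the
    two lightest nodes; a leaf's code length is its depth in the final tree.
    Performs O(n log n) operations on a heap of at most n entries."""
    n = len(weights)
    if n == 1:
        return [1]
    parent = {}
    heap = [(w, i) for i, w in enumerate(weights)]  # (weight, node id)
    heapq.heapify(heap)
    nxt = n                                         # ids n, n+1, ... for internal nodes
    while len(heap) > 1:
        w1, u = heapq.heappop(heap)
        w2, v = heapq.heappop(heap)
        parent[u] = parent[v] = nxt
        heapq.heappush(heap, (w1 + w2, nxt))
        nxt += 1
    root = nxt - 1

    def depth(u):
        d = 0
        while u != root:
            u, d = parent[u], d + 1
        return d

    return [depth(i) for i in range(n)]

print(huffman_codelengths([1, 2, 3, 4, 5, 5, 6, 7]))  # [4, 4, 3, 3, 3, 3, 3, 2]
```

The returned lengths satisfy the Kraft inequality with equality, as expected for an optimal binary code.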

Contributions
In the context described above, various questions are left unanswered, from the confirmation of the existence of an algorithm running in time $O(16^k n)$ or $O(kn)$, to the existence of an algorithm taking advantage of small values of both $n$ and $k$, less trivial than running two algorithms in parallel and stopping both whenever one computes the answer. Given $n$ positive integer weights, can we compute an optimal binary prefix free code in time better than $O(\min\{kn, n\log n\})$ in the algebraic model? We answer in the affirmative for many classes of instances, identified by the alternation measure $\alpha$ defined in Section 3.1:

Theorem 1. Given $n$ positive weights of alternation $\alpha \in [1..n-1]$, there is an algorithm which computes an optimal binary prefix free code using within $O(n(1+\log\alpha)) \subseteq O(n\lg n)$ algebraic instructions, and this complexity is asymptotically optimal among all algorithms in the algebraic decision tree computational model in the worst case over instances of size $n$ and alternation $\alpha$.
Proof. We describe in Lemma 2 a deferred data structure which supports $q$ queries of type rank, select and partialSum in time within $O(n(1+\lg q) + q(1+\lg n))$, all within the algebraic computational model, and describe in Section 2.3 an algorithm using such a data structure to compute optimal prefix free codes given an unsorted input. We show in Lemma 9 that any algorithm A in the algebraic computational model performs within $\Omega(n\lg\alpha)$ algebraic operations in the worst case over instances of size $n$ and alternation $\alpha$. We show in Lemma 6 that the GDM algorithm, a variant of van Leeuwen's algorithm [16] modified to use the deferred data structure from Lemma 2, performs $q \in O(\alpha(1 + \lg\frac{n-1}{\alpha}))$ such queries, which yields in Lemma 7 a complexity within $O(n(1+\log\alpha) + \alpha(\lg n)(\lg\frac{n}{\alpha}))$, all within the algebraic computational model. As this complexity simplifies to $O(n(1+\lg\alpha))$ for this range (Lemma 8), the optimality ensues.
⊓⊔

When $\alpha$ is at its maximum (i.e. $\alpha = n-1$), this complexity matches the tight computational complexity bound of $\Theta(n\lg n)$ for algebraic algorithms in the worst case over all instances of size $n$. When $\alpha$ is substantially smaller than $n$ (e.g. $\alpha \in O(\lg n)$), the GDM algorithm performs within $o(n\lg n)$ operations, down to linear in $n$ for finite values of $\alpha$.
We discuss our solution in Section 2 in three parts: the intuition behind the general strategy in Section 2.1, the deferred data structure which maintains a partially sorted list of weights while supporting rank, select and partialSum queries in Section 2.2, and the algorithm which uses those operators to compute an optimal prefix free code in Section 2.3. Our main contribution consists in the analysis of the running time of this solution, described in Section 3: the formal definition of the parameter of the analysis in Section 3.1, the upper bound in Section 3.2 and the matching lower bound in Section 3.3. We conclude with a comparison of our results with those from Belal et al. [5] in Section 4.

Solution
The solution that we describe is a combination of two results: some results about deferred data structures for multisets, which support queries in a "lazy" way; and some results about optimal prefix free codes themselves, concerning the relation between the computational cost of sorting a set of positive integers and that of computing an optimal prefix free code for the corresponding frequency distribution. We describe the general intuition of our solution in Section 2.1, the deferred data structure in Section 2.2, and the algorithm in Section 2.3.

General Intuition
Observing that the algorithm suggested by Huffman [13] always creates the internal nodes in increasing order of weight, van Leeuwen [16] described an algorithm to compute optimal prefix free codes in linear time when the input (i.e. the weights of the external nodes) is given in sorted order.
A close look at the execution of van Leeuwen's algorithm [16] reveals a sequence of sequential searches for the insertion rank $r$ of the weight of an internal node in the list of weights of external nodes. Such sequential searches could be replaced by a more efficient search algorithm in order to reduce the number of comparisons performed (e.g. a doubling search [7] would find such a rank $r$ in $2\lceil\log_2 r\rceil$ comparisons).
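A doubling (galloping) search of this kind can be sketched as follows (our own helper, not part of the paper's pseudo-code; it gallops by powers of two and then finishes with a binary search, for a number of comparisons logarithmic in the returned rank):

```python
import bisect

def doubling_search(sorted_list, x):
    """Insertion rank of x in sorted_list: probe positions 1, 2, 4, ...
    until overshooting, then binary search in the last interval.
    Uses O(log r) comparisons, where r is the returned rank."""
    hi = 1
    while hi <= len(sorted_list) and sorted_list[hi - 1] < x:
        hi *= 2                      # gallop: double the probed position
    lo = hi // 2                     # rank is known to lie in [lo, hi)
    return bisect.bisect_left(sorted_list, x, lo, min(hi, len(sorted_list)))

print(doubling_search([1, 2, 3, 4, 5, 5, 6, 7], 5))  # rank 4
```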
Example 1. Consider an instance of the optimal prefix free code problem formed by $n$ sorted positive weights $W[1..n]$ such that the first internal node created is bigger than the largest weight (i.e. $W[1]+W[2] > W[n]$). On such an instance, van Leeuwen's algorithm [16] starts by performing $n-2$ comparisons in the equivalent of a sequential search in $W$ for $W[1]+W[2]$: a binary search would perform $\lceil\log_2 n\rceil$ comparisons instead.
Of course, any algorithm must access (and sum) each weight at least once in order to compute an optimal prefix free code for the input, so that reducing the number of comparisons does not reduce the running time of van Leeuwen's algorithm on a sorted input. Our claim is that, in the case where the input is not sorted, the computational cost of optimal prefix free codes on instances where van Leeuwen's algorithm performs long sequential searches can be greatly reduced. We define the "van Leeuwen signature" of an instance as a first step to characterize such instances:

Definition 1. Given an instance of the optimal prefix free code problem formed by $n$ positive weights $W[1..n]$, its van Leeuwen signature $S(W) \in \{E,I\}^{2n-1}$ is a string of length $2n-1$ over the alphabet $\{E, I\}$ (where $E$ stands for "External" and $I$ for "Internal") marking, at each step of the algorithm described by van Leeuwen [16], whether an external or internal node is chosen as the minimum (including the last node returned by the algorithm, for simplicity).
Example 2. Given the sorted array $W = [1, 2, 3, 4, 5, 5, 6, 7]$ of length 8, its van Leeuwen signature is of length 15, starts with EE and finishes with I: $S(W) =$ EEEIEEEEIEIIIII.
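For concreteness, the signature can be obtained by simulating the two-queue version of van Leeuwen's algorithm on the sorted weights. The sketch below is our own; in particular, ties between an external and an internal node of equal weight are broken in favor of the external node, an assumption that reproduces the signature of Example 2:

```python
from collections import deque

def van_leeuwen_signature(weights):
    """Simulate van Leeuwen's two-queue algorithm and return S(W),
    a string of length 2n - 1 over {E, I}."""
    external = deque(sorted(weights))
    internal = deque()            # weights of internal nodes, created in increasing order
    signature = []

    def pop_min():
        # tie between the two queues goes to the external node (our assumption)
        if external and (not internal or external[0] <= internal[0]):
            signature.append("E")
            return external.popleft()
        signature.append("I")
        return internal.popleft()

    while len(external) + len(internal) > 1:
        internal.append(pop_min() + pop_min())
    signature.append("I" if internal else "E")   # the root: internal for n > 1
    return "".join(signature)

sig = van_leeuwen_signature([1, 2, 3, 4, 5, 5, 6, 7])
print(sig)              # EEEIEEEEIEIIIII, as in Example 2
print(sig.count("EI"))  # 3 blocks of consecutive Es
```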
The analysis described in Section 3 is based on the number of blocks formed only of Es in the van Leeuwen signature $S$ of the instance. We can already show some basic properties of this measure, simple consequences of basic properties of binary trees: $S$ starts with two Es, as the first two nodes paired are always external; $S$ finishes with one I, as the last node returned is always (for $n > 1$) an internal node; further properties follow from the fact that $S$ is a binary string starting with an E and finishing with an I.
Instances with very few blocks of Es are easier to solve than instances with many such blocks. For instance, an instance $W$ of length $n$ whose signature $S(W)$ is composed of a single run of $n$ Es followed by a single run of $n-1$ Is can be solved in linear time, and in particular without sorting the weights: it is enough to assign the codelength $l+1$, where $l = \lfloor\log_2 n\rfloor$, to the $2(n - 2^l)$ smallest weights and the codelength $l$ to the remaining $2^{l+1} - n$ largest weights. Separating those weights is a simple select operation, supported by the data structures described in the following section.
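These counts correspond to the "complete" binary tree with $n$ leaves on at most two consecutive levels; the sketch below (ours) computes them and checks that the resulting lengths always satisfy the Kraft inequality with equality:

```python
def single_run_codelengths(n):
    """Code lengths for an instance whose signature is E^n I^(n-1): the
    optimal tree is the complete binary tree with n leaves on levels
    l and l+1, where l = floor(log2 n). The 2*(n - 2**l) smallest
    weights get length l+1, the 2**(l+1) - n largest get length l."""
    l = n.bit_length() - 1          # floor(log2 n)
    deep = 2 * (n - 2 ** l)         # number of codewords of length l + 1
    return [l + 1] * deep + [l] * (n - deep)

# sanity check: the tree is full, so the Kraft sum is exactly 1
for n in range(2, 100):
    assert sum(2.0 ** -c for c in single_run_codelengths(n)) == 1.0

print(single_run_codelengths(6))  # [3, 3, 3, 3, 2, 2]
```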

Partial Sum Deferred Data Structure
Given a MULTISET $W[1..n]$, Karp et al. [15] described a deferred data structure supporting queries such as rank(x), the number of elements which are strictly smaller than $x$ in $W$; and select(r), the value of the $r$-th smallest value (counted with multiplicity) in $W$. Their data structure supports $q$ queries in time within $O(n(1+\lg q))$, all in the comparison model. To achieve this result, it partially sorts its data in order to minimize the computational cost of future queries, but avoids sorting all of the data if the queries do not require it: the queries have then become operators (they modify the data). Note that whereas the running time of each individual query depends on the state of the data, the answer to each query is independent of the state of the data.
Karp et al.'s data structure [15] supports only rank and select queries in the comparison model, whereas the computation of optimal prefix free codes requires summing pairs of weights from the input, and the algorithm that we propose in Section 2.3 requires summing weights from a range in the input. Such a requirement can be reduced to partialSum queries. Whereas such queries have been defined in the literature, we define them here in a way that depends only on the content of the MULTISET (as opposed to a definition depending on the order in which it is given), so that they can be generalized to deferred data structures:
- rank(x), the number of elements which are strictly smaller than $x$ in $W$;
- select(r), the value of the $r$-th smallest value (counted with multiplicity) in $W$;
- partialSum(r), the sum of the $r$ smallest elements (counted with multiplicity) in $W$.
We describe below how to extend Karp et al.'s deferred data structure [15], which supports rank and select queries on MULTISETS, in order to add the support for partialSum queries, with an amortized running time within a constant factor of the original asymptotic time. Note that the data structure is then no longer performing in the comparison model, but rather in the algebraic decision tree model, since it performs algebraic operations (additions) on the elements of the MULTISET:

Lemma 2. Given $n$ unsorted positive weights $W[1..n]$, there is a PartialSum Deferred Data Structure which supports $q$ operations of type rank, select and partialSum in time within $O(n(1+\lg q) + q(1+\log n))$, all within the algebraic decision tree computational model.

Proof. Karp et al. [15] described a deferred data structure which supports the rank and select queries (but not partialSum queries). It is based on median computations and (2, 3)-trees, and performs $q$ queries on $n$ values in time within $O(n(1+\lg q) + q(1+\log n))$, all within the algebraic computational model. We describe below how to modify their data structure in a simple way so as to support partialSum queries with asymptotically negligible additional cost. At the initialization of the data structure, compute the $n$ partial sums corresponding to the $n$ positions of the unsorted array. After each median computation and partitioning in a rank or select query, recompute the partial sums on the range of values newly partitioned, adding only a constant factor to the cost of the query. When answering a partialSum query, perform a select query and then return the value of the partial sum corresponding to the position returned by the select query: the asymptotic complexity is within a constant factor of the one described by Karp et al. [15]. ⊓⊔

Barbay et al. [1] further improved Karp et al.'s result [15] with a simpler data structure (a single binary array) and a finer analysis taking into account the gaps between the positions hit by the queries. Barbay et al.'s results [1] can similarly be augmented in order to support partialSum queries while increasing the computational complexity by only a constant factor. This refinement is not required for the analysis described in Section 3.
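To make the interface concrete, here is a much simplified deferred multiset in Python. It is our own sketch, not the structure of Lemma 2: it uses random pivots instead of exact medians and no (2, 3)-tree, rank is answered by a linear scan, and partialSum is recomputed from the partitioned prefix rather than cached, so the amortized bounds are not preserved; only the lazy, partition-on-demand behaviour is illustrated.

```python
import random

class DeferredMultiset:
    """Answers rank/select/partial_sum by partially sorting on demand:
    quickselect-style partitioning, remembering 'fences' between which
    the array is already fully partitioned."""

    def __init__(self, weights):
        self.a = list(weights)
        self.fences = {0, len(weights)}   # a[:f] <= a[f:] for every fence f

    def _partition(self, lo, hi):
        p = random.randrange(lo, hi)
        self.a[p], self.a[hi - 1] = self.a[hi - 1], self.a[p]
        pivot, store = self.a[hi - 1], lo
        for i in range(lo, hi - 1):
            if self.a[i] < pivot:
                self.a[i], self.a[store] = self.a[store], self.a[i]
                store += 1
        self.a[store], self.a[hi - 1] = self.a[hi - 1], self.a[store]
        return store                      # pivot's final (sorted) position

    def select(self, r):
        """Value of the r-th smallest element (1-based), partitioning lazily."""
        i = r - 1
        lo = max(f for f in self.fences if f <= i)
        hi = min(f for f in self.fences if f > i)
        while hi - lo > 1:
            p = self._partition(lo, hi)
            self.fences.update((p, p + 1))
            if i < p:
                hi = p
            elif i > p:
                lo = p + 1
            else:
                break
        return self.a[i]

    def rank(self, x):
        """Number of elements strictly smaller than x (linear scan here;
        the real structure answers this with logarithmic amortized work)."""
        return sum(1 for v in self.a if v < x)

    def partial_sum(self, r):
        """Sum of the r smallest elements: select(r) first guarantees that
        a[:r] holds exactly the r smallest (the real structure caches sums)."""
        self.select(r)
        return sum(self.a[:r])
```

After a few queries the array is only as sorted as the queries required, which is precisely the behaviour the GDM algorithm exploits.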
Such a deferred data structure is sufficient to simply execute van Leeuwen's algorithm [16] on an unsorted array of positive integers, but this would not result in an improvement of the computational complexity: van Leeuwen's algorithm [16] simply performs $n$ select operations on the input, effectively sorting the unsorted array.
We describe in the next section an algorithm which uses the deferred data structure described above to batch the operations on the external nodes, and to defer the computation of the weights of some internal nodes to later, so that for many instances the input is not completely sorted at the end of the execution, which reduces the execution cost.

Algorithm "Grouping-Docking-Mixing" (GDM)
There are five main phases in the GDM algorithm: the Initialization; three phases (Grouping, Docking and Mixing, hence the name "GDM") inside a loop running until only internal nodes are left to process; and the Conclusion. The algorithm and its complexity analysis distinguish two types of internal nodes: pure nodes, whose descendants were all paired during the same Grouping phase; and mixed nodes, which either are ancestors of a mixed node, or pair a pure internal node with an external node, or pair two pure internal nodes produced at distinct phases of the algorithm. The distinction is important as the algorithm computes the weight of any mixed node at its creation (potentially generating several data structure operations), whereas it defers the computation of the weight of some pure nodes to later.
Before describing each phase in more detail, it is important to observe the following invariant of the algorithm:

Lemma 3. Given an instance of the optimal prefix free code problem formed by $n > 1$ positive weights $W[1..n]$, between each phase of the algorithm, all unpaired internal nodes have weights within a factor of two of each other (i.e. the maximal weight of an unpaired internal node is strictly smaller than twice the minimal weight of an unpaired internal node).
We now proceed to describe each phase in more detail:

Initialization: Initialize the PartialSum deferred data structure; compute the weight currentMinInternal of the first internal node through the operation partialSum(2) (the sum of the two smallest weights); create this first internal node as a node of weight currentMinInternal and children 1 and 2 (the positions of the first and second weights, in any order); compute the weight currentMinExternal of the first unpaired weight (i.e. the first available external node) through the operation select(3); set up the variables nbInternals = 1 and nbExternalProcessed = 2.
Grouping: Compute the position $r$ of the first unpaired weight which is larger than the smallest unpaired internal node, through a rank operation with parameter currentMinInternal; pair those unpaired weights two by two to form $\lfloor(r - \text{nbExternalProcessed})/2\rfloor$ pure internal nodes; if the number $r - \text{nbExternalProcessed}$ of unpaired weights smaller than the first unpaired internal node is odd, select the $r$-th weight through the operation select(r), compute the weight of the first unpaired internal node, compare it with the next unpaired weight, and form one mixed node by pairing the minimum of the two with the extraneous weight.
Docking: Pair all internal nodes by batches (by Lemma 3, their weights are all within a factor of two of each other, so all internal nodes of one generation are processed before any internal node of the next generation); after each batch, compare the weight of the largest such internal node (computed through partialSum on its range if it is a pure node; otherwise it is already known) with the first unpaired weight: if smaller, pair another batch; if larger, the phase is finished.
Mixing: Rank the smallest unpaired weight among the weights of the available internal nodes, by a doubling search starting from the beginning of the list of internal nodes. For each comparison, if the internal node's weight is not already known, compute it through a partialSum operation on the corresponding range (if it is a mixed node, it is already known). If the number $r$ of internal nodes of weight smaller than the unpaired weight is odd, pair all but one, compute the weight of the last one and pair it with the unpaired weight. If $r$ is even, pair all of the $r$ internal nodes of weight smaller than the unpaired weight, compare the weight of the next unpaired internal node with the weight of the next unpaired external node, and pair the minimum of the two with the first unpaired weight. If there are some unpaired weights left, go back to the Grouping phase; otherwise continue to the Conclusion phase.

Conclusion:
There are only internal nodes left, and their weights are all within a factor of two of each other. Pair the nodes two by two in batches as in the Docking phase, computing the weight of an internal node only when the number of internal nodes of a batch is odd.
The combination of those phases forms the GDM algorithm, which computes an optimal prefix free code given an unsorted set of positive integers.

Lemma 4. The tree returned by the GDM algorithm describes an optimal prefix free code for its input.
In the next section, we analyze the number $q$ of rank, select and partialSum queries performed by the GDM algorithm, and deduce from it the complexity of the algorithm in terms of algebraic operations.

Analysis
The GDM algorithm runs in time within $O(n\lg n)$ in the worst case over instances of size $n$ (optimal, if not a new result, in the algebraic decision tree model), but much faster on instances with few blocks of consecutive Es in their van Leeuwen signature. We formalize this concept by defining the alternation $\alpha$ of the instance in Section 3.1. We then proceed in Section 3.2 to show upper bounds on the number of queries and operations performed by the GDM algorithm in the worst case over instances of fixed size $n$ and alternation $\alpha$. We finish in Section 3.3 with a matching lower bound on the number of operations performed.

Alternation α(W )
We suggested in Section 2.1 that the number of blocks of consecutive Es in the van Leeuwen signature of an instance can be used to measure its difficulty. Indeed, some "easy" instances have few such blocks, and the instance used to prove the $\Omega(n\lg n)$ lower bound on the computational complexity of optimal prefix free codes in the algebraic decision tree model, in the worst case over instances of size $n$, has $n-1$ such blocks (the maximum possible in an instance of size $n$). We formally define this measure as the "alternation" of the instance (it measures how many times the van Leeuwen algorithm "alternates" from an external node to an internal node) and denote it by the parameter $\alpha$:

Definition 3. Given an instance of the optimal prefix free code problem formed by $n$ positive weights $W[1..n]$, its alternation $\alpha(W) \in [1..n-1]$ is the number of occurrences of the substring "EI" in its van Leeuwen signature $S(W)$.
Note that counting the number of blocks of consecutive Es is equivalent to counting the number of blocks of consecutive Is: the two counts are the same, because the van Leeuwen signature starts with two Es and finishes with an I, and each new I-block ends an E-block and vice versa. Also, the choice between measuring the number of occurrences of "EI" or the number of occurrences of "IE" is arbitrary, as they are within a term of 1 of each other: counting the number of occurrences of "EI" just gives a nicer range of $[1..n-1]$ (as opposed to $[0..n-2]$). This number is of particular interest as it measures the number of iterations of the main loop of the GDM algorithm:

Lemma 5. Given an instance of the optimal prefix free code problem of alternation $\alpha$, the GDM algorithm performs $\alpha$ iterations of its main loop.
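Counting either pattern is a one-liner; the snippet below (our own check, run on the signature of Example 2) verifies that the number of "EI" occurrences, the number of maximal runs of Es, and the number of maximal runs of Is all coincide, while "IE" is off by exactly one:

```python
import re

def alternation(signature):
    """alpha(W): number of occurrences of "EI" in a van Leeuwen signature."""
    return signature.count("EI")

sig = "EEEIEEEEIEIIIII"   # signature of W = [1,2,3,4,5,5,6,7] (Example 2)
assert alternation(sig) == 3
assert alternation(sig) == len(re.findall("E+", sig))   # = number of E-blocks
assert alternation(sig) == len(re.findall("I+", sig))   # = number of I-blocks
assert sig.count("IE") == alternation(sig) - 1          # off by exactly one
```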
In the next section, we refine this result to the number of data structure operations and algebraic operations performed by the GDM algorithm.

Upper Bound
In order to measure the number of queries performed by the GDM algorithm, we detail how many queries are performed in each phase of the algorithm.
- The Initialization corresponds to a constant number of data structure operations: a select operation to find the third smallest weight, and a simple partialSum operation to sum the two smallest weights of the input.
- Each Grouping phase corresponds to a constant number of data structure operations: a partialSum operation to compute the weight of the smallest internal node if needed, and a rank operation to identify the unpaired weights which are smaller than or equal to this node.
- The numbers of operations performed by each Docking and Mixing phase are better analyzed together: if there are $i$ symbols in the I-block corresponding to this phase in the van Leeuwen signature, and if the internal nodes are grouped on $h$ levels before generating an internal node larger than the smallest unpaired weight, the Docking phase corresponds to at most $h$ partialSum operations, whereas the Mixing phase corresponds to at most $\log_2(i/2^h)$ partialSum operations, which develops to $\log_2(i) - h$, for a total of $\log_2 i$ data structure operations.
- The Conclusion phase corresponds to a number of data structure operations logarithmic in the size of the last block of Is in the van Leeuwen signature of the instance: in the worst case, the weight of one pure internal node is computed for each batch, through one single partialSum operation each time.
Lemma 5 and the concavity of the logarithm yield the total number of data structure operations performed by the GDM algorithm:

Lemma 6. Given an instance of the optimal prefix free code problem of alternation $\alpha$, the GDM algorithm performs within $O(\alpha(1 + \lg\frac{n-1}{\alpha}))$ data structure operations on the deferred data structure given as input.

Proof. For $i \in [1..\alpha]$, let $n_i$ be the number of internal nodes at the beginning of the $i$-th Docking phase. According to Lemma 5 and the analysis of the number of data structure operations performed in each phase, the GDM algorithm performs in total within $O(\alpha + \sum_{i=1}^{\alpha}\lg n_i)$ data structure operations. Since there are at most $n-1$ internal nodes, the concavity of the logarithm yields $\sum_{i=1}^{\alpha}\lg n_i \le \alpha\lg\frac{n-1}{\alpha}$, so that the total is within $O(\alpha(1 + \lg\frac{n-1}{\alpha}))$. ⊓⊔

Combining this result with the complexity of the PartialSum deferred data structure from Lemma 2 directly yields the complexity of the GDM algorithm in algebraic operations (and running time):

Lemma 7. Given an instance of the optimal prefix free code problem of alternation $\alpha$, the GDM algorithm runs in time within $O(n(1+\log\alpha) + \alpha(\lg n)(\lg\frac{n}{\alpha}))$, all within the algebraic computational model.

Proof. Let $q$ be the number of queries performed by the GDM algorithm. Lemma 6 implies that $q \in O(\alpha(1 + \lg\frac{n}{\alpha}))$. Plugging this into the complexity $O(q\lg n + n\lg q)$ from Lemma 2 yields the complexity $O(n(1+\log\alpha) + \alpha(\lg n)(\lg\frac{n}{\alpha}))$. ⊓⊔

Some simple functional analysis further simplifies this expression to our final upper bound:

Lemma 8. Given two positive integers $n$ and $\alpha \in [1..n-1]$, $\alpha(\lg n)(\lg\frac{n}{\alpha}) \in O(n(1+\lg\alpha))$, so that the GDM algorithm runs in time within $O(n(1+\lg\alpha))$.

Proof. If $\alpha \le \sqrt{n}$, then $\alpha(\lg n)(\lg\frac{n}{\alpha}) \le \sqrt{n}\lg^2 n \subseteq O(n)$. If $\alpha > \sqrt{n}$, then $\lg n < 2\lg\alpha$ and $\alpha\lg\frac{n}{\alpha} \le n$ (since $\lg x \le x$ for $x = \frac{n}{\alpha} \ge 1$), so that $\alpha(\lg n)(\lg\frac{n}{\alpha}) < 2n\lg\alpha$.
⊓⊔

In the next section, we show that this complexity is indeed optimal in the algebraic decision tree model, in the worst case over instances of fixed size $n$ and alternation $\alpha$.

Lower Bound
A complexity within $O(n(1+\lg\alpha))$ is exactly what one could expect, by analogy with the sorting of MULTISETS: there are $\alpha$ groups of weights, such that the order within each group does not matter much, but the order between weights from different groups matters a lot. We prove a lower bound within $\Omega(n\lg\alpha)$ by reduction from MULTISET sorting:

Lemma 9. Given the integers $n \ge 2$ and $\alpha \in [1..n-1]$, for any algorithm A in the algebraic decision tree computational model, there is a set $W[1..n]$ of $n$ positive weights of alternation $\alpha$ such that A performs within $\Omega(n\lg\alpha)$ operations.
Proof. For any MULTISET $A[1..n] = \{x_1, \dots, x_n\}$ of $n$ values from an alphabet of $\alpha$ distinct values, define the instance $W_A = \{2^{x_1}, \dots, 2^{x_n}\}$ of size $n$, so that computing an optimal prefix free code for $W_A$, sorted by codelength, provides an ordering for $A$. $W_A$ has alternation $\alpha$: for any two distinct values $x$ and $y$ from $A$, the van Leeuwen algorithm pairs all the weights of value $2^x$ before pairing any weight of value $2^y$, so that the van Leeuwen signature of $W_A$ has $\alpha$ blocks of consecutive Es. The lower bound then results from the classical lower bound on sorting MULTISETS in the comparison model in the worst case over MULTISETS of size $n$ with $\alpha$ distinct symbols.

⊓ ⊔
We compare our results to previous results in the next section.

Discussion
We described an algorithm computing an optimal prefix free code for $n$ unsorted positive weights in time within $O(n(1+\lg\alpha)) \subseteq O(n\lg n)$, where the alternation $\alpha \in [1..n-1]$ roughly measures the amount of sorting required by the computation, by combining van Leeuwen's results about optimal prefix free codes [16], known since 1976, with Karp et al.'s results about Deferred Data Structures [15], known since 1988. The results described above yield many new questions, of which we discuss only a few in the following sections. We discuss how those results relate to previous results on optimal prefix free codes (Section 4.1), to other results on Deferred Data Structures obtained since 1988 (Sections 4.2 and 4.3), to the lack of practical applications of our results on optimal prefix free codes (Section 4.4), and to perspectives of research on this topic (Section 4.5). We list in Appendix A.1 some interesting quotes about the importance of optimal prefix free codes in general.

Relation to previous work on optimal prefix free codes
In 2006, Belal et al. [5] described a variant of Milidiú et al.'s algorithm [18,17] to compute optimal prefix free codes, announcing that it performed $O(kn)$ algebraic operations when the weights are not sorted, where $k$ is the number of distinct code lengths in some optimal prefix free code.
They describe an algorithm claimed to run in time $O(16^k n)$ when the weights are unsorted, and propose to improve the complexity to $O(kn)$ by partitioning the weights into smaller groups, each corresponding to a disjoint interval of weight values. The claimed complexity is asymptotically better than the one suggested by Huffman when $k \in o(\log n)$, and they raise the question of whether there exists an algorithm running in time $O(n\log k)$.
Like the GDM algorithm, the algorithm described by Belal et al. [5] for the unsorted case is based on several computations of the median of the weights within a given interval, in particular in order to select the weights smaller than some well chosen value. The essential difference between the two works is the use of deferred data structures, which simplifies both the algorithm and the analysis of its complexity.

Applicability of dynamic results on Deferred Data Structures
Karp et al. [15], when they defined the first Deferred Data Structures, supporting rank and select on MULTISETS and other queries on CONVEX HULLS, left as an open problem the support of dynamic operators such as insert and delete; Ching et al. [9] quickly demonstrated how to add such support in good amortized time.
The dynamic addition and deletion of elements in a deferred data structure (added by Ching et al. [9] to Karp et al. [15]'s results) does not seem to have any application to the computation of optimal prefix free codes: even if the list of weights was dynamic, further work is required to build a deferred data structure supporting prefix free code queries.

Applicability of refined results on Deferred Data Structures
Karp et al.'s analysis [15] of the complexity of the deferred data structure is expressed as a function of the total number $q$ of queries and operators, while Kaligosi et al. [14] analyzed the complexity of an offline version as a function of the sizes of the gaps between the positions of the queries. Barbay et al. [1] combined the three results into a single deferred data structure for MULTISETS which supports the operators rank and select in amortized time proportional to the entropy of the distribution of the sizes of the gaps between the positions of the queries.
At first view, one could hope to generalize the refined entropy analysis (introduced by Kaligosi et al. [14] and applied by Barbay et al. [1] to the online version) of MULTISET deferred data structures supporting rank and select to the computational complexity of optimal prefix free codes: a complexity proportional to the entropy of the distribution of codelengths in the output would nicely match the lower bound of $\Omega(n(1 + \mathcal{H}(n_1, \dots, n_h)))$ suggested by information theory, where the output contains $n_i$ codes of length $l_i$, for some integer vector $(l_1, \dots, l_h)$ of distinct codelengths and some integer $h$ measuring the number of distinct codelengths. Our current analysis does not yield such a result: the gap lengths between queries in the list of weights are not as regular as $(l_1, \dots, l_h)$.

Potential Practical Impact of our Results
The impact of our faster algorithm on the execution time of optimal prefix free code based techniques should definitely be evaluated further. Yet, we expect it to be of little importance in most cases: compressing a sequence S of |S| messages from an input alphabet of size n requires not only computing the code (in time O(n) using our solution), but also computing the weights of the messages (in time O(|S|)), and encoding the sequence S itself using the computed code (in time O(|S|)). Improving the code computation time will improve the compression time only in cases where the size n of the input alphabet is very large compared to the length |S| of the compressed sequence. One such application is the compression of texts in natural language, where the input alphabet is composed of all the natural words [22]. Another potential application is the boosting technique from Ferragina et al. [12], which divides the input sequence into very short subsequences and computes a prefix free code for each subsequence on the input alphabet of the whole sequence.
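The three stages of that pipeline can be sketched as follows. This illustration uses a classical heap-based Huffman pairing and a canonical code assignment, not the partial-sorting algorithm of this paper, and all function names are illustrative; it only shows where the O(|S|) weight-counting and encoding stages dominate the O(n lg n) code computation when n is small relative to |S|.

```python
import heapq
from collections import Counter

def huffman_codelengths(weights):
    """Classical O(n lg n) heap-based Huffman pairing; returns one
    codelength per weight (stands in for the code-computation stage)."""
    if len(weights) == 1:
        return [0]
    heap = [(w, [i]) for i, w in enumerate(weights)]
    heapq.heapify(heap)
    depths = [0] * len(weights)
    while len(heap) > 1:
        w1, ids1 = heapq.heappop(heap)
        w2, ids2 = heapq.heappop(heap)
        for i in ids1 + ids2:   # leaves under the merged node sink one level
            depths[i] += 1
        heapq.heappush(heap, (w1 + w2, ids1 + ids2))
    return depths

def canonical_codes(lengths):
    """Assign binary code strings realizing the given codelengths."""
    codes, code, prev = {}, 0, 0
    for l, s in sorted((l, s) for s, l in enumerate(lengths)):
        code <<= l - prev
        codes[s] = format(code, '0%db' % l)
        code, prev = code + 1, l
    return codes

def compress(S):
    alphabet = sorted(set(S))                      # stage 1: weights, O(|S|)
    counts = Counter(S)
    weights = [counts[a] for a in alphabet]
    lengths = huffman_codelengths(weights)         # stage 2: the code itself
    codes = canonical_codes(lengths)
    table = {alphabet[i]: codes[i] for i in range(len(alphabet))}
    return ''.join(table[c] for c in S), table     # stage 3: encoding, O(|S|)
```

For instance, compress("abracadabra") encodes the 11 messages with 23 bits; for such short sequences over small alphabets, stages 1 and 3 dominate regardless of how fast stage 2 is.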

Perspectives
One could hope for an algorithm whose complexity would match the lower bound of Ω(k(1 + H(n_1, ..., n_h))) suggested by information theory, where the output contains n_i codes of length l_i, for some integer vector (l_1, ..., l_h) of distinct codelengths and some integer h measuring the number of distinct codelengths. Our current analysis does not yield such a result: the gap lengths between queries in the list of weights are not as regular as (l_1, ..., l_h), but a refined analysis might. Minor improvements of our results could be brought by studying the problem in external memory, where deferred data structures have also been developed [24,2], or when the alphabet size is larger than two, as in the original article from Huffman [13].
Another promising line of research is given by variants of the original problem, such as OPTIMAL BOUNDED LENGTH PREFIX FREE CODES, where the maximal length of each word of the prefix free code must be less than or equal to a parameter l, while still minimizing the average length of the code; or ORDER CONSTRAINED PREFIX FREE CODES, where the order of the words of the code is constrained to be the same as the order of the weights. Both problems have complexity O(n lg n) in the worst case over instances of fixed input size n, while having linear complexity when all the weights are within a factor of two of each other, exactly as in the original problem.
A logical step would be to study, among the communication solutions using an optimal prefix free code computed offline on each new instance (e.g. JPEG, MP3, MPEG), which ones can now afford to compute a new optimal prefix free code more frequently, and see their compression performance improved by a faster prefix free code algorithm.

A.1 Relevance of Prefix Free codes in General
Albeit 60 years old, Huffman's result is still relevant nowadays. Optimal prefix free codes are used not only for compressed encodings: they are also used in the construction of compressed data structures for permutations [3], and, using similar techniques, for faster sorting of multisets which contain subsequences of consecutive positions already ordered [3].
In 1991, Gary Stix [25] stated that "Large networks of IBM computers use it. So do high-definition television, modems and a popular electronic device that takes the brain work out of programming a videocassette recorder. All these digital wonders rely on the results of a 40-year-old term paper by a modest Massachusetts Institute of Technology graduate student - a data compression scheme known as Huffman encoding (...) Products that use Huffman code might fill a consumer electronics store. A recent entry on the shop shelf is VCR Plus+, a device that automatically programs a VCR and is making its inventors wealthy. (...) Instead of confronting the frustrating process of programming a VCR, the user simply types into the small handheld device a numerical code that is printed in the television listings. When it is time to record, the gadget beams its decoded instructions to the VCR and cable box with an infrared beam like those on standard remote-control devices. This turns on the VCR, sets it (and the cable box) to the proper channel and records for the designated time."
In 1995, Moffat and Katajainen [19] stated that: "The algorithm introduced by Huffman for devising minimum-redundancy prefix free codes is well known and continues to enjoy widespread use in data compression programs. Huffman's method is also a good illustration of the greedy paradigm of algorithm design and, at the implementation level, provides a useful motivation for the priority queue abstract data type. For these reasons Huffman's algorithm enjoys a prominence enjoyed by only a relatively small number of fundamental methods".
In 1997, Moffat and Turpin [20] stated that those were "one of the enduring techniques of data compression. It was used in the venerable PACK compression program, authored by Szymanski in 1978, and remains no less popular today".
In 2010 Donald E. Knuth was quoted [23] as saying that: "Huffman code is one of the fundamental ideas that people in computer science and data communications are using all the time".
In 2010, the answer to the question "What are the real-world applications of Huffman coding?" on the website Stacks Exchange [26] stated that "Huffman is widely used in all the mainstream compression formats that you might encounter - from GZIP, PKZIP (winzip etc) and BZIP2, to image formats such as JPEG and PNG".
The Wikipedia page on Huffman coding states that "Huffman coding today is often used as a "backend" to some other compression method. DEFLATE (PKZIP's algorithm) and multimedia codecs such as JPEG and MP3 have a front-end model and quantization followed by Huffman coding" [27].
Ironically, the pseudo-optimality of this algorithm seems to have become part of the folklore of the area, as illustrated by a quote from Parker et al. [10] in 1999: "While there may be little hope of improving on the O(n log n) complexity of the Huffman algorithm itself, there is still room for improvement in our understanding of the algorithm.".

Definition 2. Given n unsorted positive weights W[1..n], a Partial Sum data structure supports the following queries:
- In the Initialization phase, initialize the Partial Sum deferred data structure with the input, and initialize the first internal node by pairing the two smallest weights of the input.
- In the Grouping phase, detect and group the weights smaller than the smallest internal node: this corresponds to a run of consecutive E in the van Leeuwen signature of the instance.
- In the Docking phase, pair the consecutive positions of those weights (as opposed to the weights themselves, which can be reordered by future operations) into internal nodes, and pair those internal nodes until the weight of at least one such internal node becomes equal to or larger than the smallest remaining weight: this corresponds to a run of consecutive I in the van Leeuwen signature of the instance.
- In the Mixing phase, rank the smallest unpaired weight among the weights of the available internal nodes: this corresponds to an occurrence of IE in the van Leeuwen signature of the instance.
- In the Conclusion phase, with i internal nodes left to process, assign codelength l = ⌊log_2 i⌋ to the i - 2^l largest ones and codelength l+1 to the 2^l smallest ones: this corresponds to the last run of consecutive I in the van Leeuwen signature of the instance.
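As a point of reference for the phases above, the classical two-queue method of van Leeuwen computes the codelengths in linear time once the weights are sorted, by exploiting the fact that internal nodes are created in nondecreasing weight order. A minimal sketch (with an illustrative representation that tracks, for each node, the leaves below it) could look as follows; it is the plain sorted-input version, without the deferred data structure:

```python
from collections import deque

def codelengths(sorted_weights):
    """Optimal prefix free codelengths for weights sorted in nondecreasing
    order, via van Leeuwen's two-queue method: leaves wait in one queue,
    newly created internal nodes in another, and both queues stay sorted,
    so each pairing step takes constant time."""
    n = len(sorted_weights)
    if n == 1:
        return [0]
    leaves = deque((w, [i]) for i, w in enumerate(sorted_weights))
    internal = deque()  # internal nodes, created in nondecreasing weight order

    def pop_smallest():
        # take the lighter front element among the two queues
        if not internal or (leaves and leaves[0][0] <= internal[0][0]):
            return leaves.popleft()
        return internal.popleft()

    depths = [0] * n
    for _ in range(n - 1):          # n-1 pairings build the whole code tree
        w1, ids1 = pop_smallest()
        w2, ids2 = pop_smallest()
        for i in ids1 + ids2:       # every leaf under the new node sinks one level
            depths[i] += 1
        internal.append((w1 + w2, ids1 + ids2))
    return depths
```

For example, codelengths([1, 1, 2, 3, 5]) yields [4, 4, 3, 2, 1], whose lengths satisfy Kraft's equality. The van Leeuwen signature mentioned above records, for each of these pairings, whether an External leaf or an Internal node was consumed.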