Re-Pair in Small Space

Re-Pair is a grammar compression scheme achieving strong compression rates. However, the computation of Re-Pair comes with the cost of maintaining large frequency tables, which makes it hard to compute Re-Pair on large-scale data sets. As a solution to this problem we present, given a text of length n whose characters are drawn from an integer alphabet, an O(n²) time algorithm computing Re-Pair in n lg max(n, τ) bits of working space including the text space, where τ is the number of terminals and non-terminals.


Introduction
Re-Pair [21] is a grammar deriving a single string. It is computed by replacing a most frequent bigram in this string with a new non-terminal, recursing until no bigram occurs more than once. Despite this simple-looking description, both the merits and the computational complexity of Re-Pair are intriguing. As a matter of fact, Re-Pair is currently one of the most well-understood grammar schemes.
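To make the recursive description concrete, here is a minimal Python sketch of this scheme (all names are ours; it models non-terminals as integers starting at 256 over a byte-string input, and is a quadratic-time toy rather than the small-space algorithm developed in this paper):

```python
from collections import Counter

def repair(text):
    """Toy Re-Pair: repeatedly replace a most frequent bigram with a fresh
    non-terminal until no bigram occurs twice. Non-terminals are integers
    starting at 256 (assumes a byte-string input)."""
    seq = list(text)
    rules = {}                 # non-terminal -> (left symbol, right symbol)
    next_symbol = 256
    while True:
        freq = Counter()
        i = 0
        while i + 1 < len(seq):
            pair = (seq[i], seq[i + 1])
            freq[pair] += 1
            # count non-overlapping occurrences: in a run like "aaa",
            # skip the overlapping middle occurrence of "aa"
            if pair[0] == pair[1] and i + 2 < len(seq) and seq[i + 2] == pair[0]:
                i += 2
            else:
                i += 1
        if not freq or freq.most_common(1)[0][1] < 2:
            break
        pair = freq.most_common(1)[0][0]
        rules[next_symbol] = pair
        out, i = [], 0
        while i < len(seq):    # replace left to right, non-overlapping
            if i + 1 < len(seq) and (seq[i], seq[i + 1]) == pair:
                out.append(next_symbol)
                i += 2
            else:
                out.append(seq[i])
                i += 1
        seq = out
        next_symbol += 1
    return seq, rules
```

For instance, `repair(b"abcabc")` first replaces `ab` (tie with `bc`, broken by scan order), then the pair formed with the new non-terminal.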
Besides the seminal work of Larsson and Moffat [21], several articles are devoted to the compression aspects of Re-Pair: Given a text T of length n whose characters are drawn from an integer alphabet of size σ, the output of Re-Pair applied to T is at most 2nH_k(T) + o(n lg σ) bits with k = o(log_σ n) when represented naively as a list of character pairs [25], where H_k denotes the empirical entropy of the k-th order. Using the encoding of Kieffer and Yang [19], Ochoa and Navarro [26] improved the output size to at most nH_k(T) + o(n lg σ) bits. Other encodings were recently studied by Ganczorz [14]. Since Re-Pair is a so-called irreducible grammar, its grammar size, i.e., the sum of the lengths of the right-hand sides of all rules, is upper bounded by O(n/log_σ n) [19, Lemma 2], which matches the information-theoretic lower bound on the size of a grammar for a string of length n. Comparing this size with the size of the smallest grammar, the approximation ratio of Re-Pair has an upper bound of O((n/lg n)^{2/3}) [8] and a lower bound of Ω(lg n / lg lg n) [2].
On the practical side, Yoshida and Kida [33] presented an efficient fixed-length code for compressing the Re-Pair grammar. Although conceived as a grammar for compressing texts, Re-Pair has been successfully applied to compressing trees [23], matrices [30], and images [11].
For different settings or for better compression rates, there is great interest in modifications of Re-Pair. Charikar et al. [8, Sect. G] give an easy variation improving the size of the grammar. Sekine et al. [28] provide an adaptive variant whose algorithm divides the input into blocks and processes each block based on the rules obtained from the grammars of its preceding blocks. Subsequently, Masaki and Kida [24] gave an online algorithm producing a grammar mimicking Re-Pair. Ganczorz and Jez [15] modified the Re-Pair grammar by disfavoring the replacement of bigrams that cross Lempel-Ziv-77 (LZ77) [34] factorization borders, which allowed the authors to achieve practically smaller grammar sizes. Recently, Furuya et al. [13] presented a variant, called MR-Re-Pair, in which a most frequent maximal repeat is replaced instead of a most frequent bigram.

Related Work
Although Re-Pair is a well-received grammar, there is not much literature on how to compute it efficiently. In this article, we focus on computing the grammar with an algorithm working in the text space, forming a bridge between the domain of in-place string algorithms and the domain of Re-Pair construction algorithms. We briefly review some prominent achievements in both domains:

In-Place String Algorithms. For the LZ77 factorization, Kärkkäinen et al. [18] present an algorithm computing this factorization with O(n/d) words on top of the input space in O(dn) time for a variable d ≥ 1, achieving O(1) words with O(n²) time. For the suffix sorting problem, Goto [16] gave an algorithm computing the suffix array with O(lg n) bits on top of the output in O(n) time if each character of the alphabet is present in the text. This condition was relaxed to alphabet sizes of at most n by Li et al. [22]. Finally, Crochemore et al. [9] showed how to transform a text into its Burrows-Wheeler transform by using O(lg n) additional bits. da Louza et al. [10] extended this algorithm to simultaneously compute the LCP array with O(lg n) bits of additional working space.
Re-Pair Computation. Re-Pair is a grammar proposed by Larsson and Moffat [21], who gave an algorithm computing it in expected linear time with 5n + 4σ² + 4σ′ + √n words of working space, where σ′ is the number of non-terminals (produced by Re-Pair). This space requirement was improved by Bille et al. [5], who presented a linear-time algorithm taking (1 + ε)n + √n words on top of the rewriteable text space for an arbitrary constant ε with 0 < ε ≤ 1. Subsequently, they improved their algorithm in [4] to include the text space within the (1 + ε)n + √n words of working space. However, they assume that the alphabet size σ is constant and that ⌈lg σ⌉ ≤ w/2, where w is the machine word size. They also provide a solution for ε = 0 running in expected linear time. Recently, Sakai et al. [27] showed how to convert an arbitrary grammar (representing a text) into the Re-Pair grammar in compressed space, i.e., without decompressing the text. Combined with a grammar compression that can process the text in compressed space in a streaming fashion, this result leads to the first Re-Pair computation in compressed space.
Our Contribution. In this article, we propose an algorithm that computes the Re-Pair grammar in O(n²) ∩ O(n² lg log_τ n lg lg lg n / log_τ n) time (cf. Thm. 2.3 and Thm. 3.1) with max((n/c) lg n, n⌈lg τ⌉) + O(lg n) bits of working space including the text space, where τ is the number of terminals and non-terminals. Given that the characters of the text are drawn from a large integer alphabet of size σ = Ω(n), the algorithm works in-place. This is the first non-trivial in-place algorithm: a trivial approach on a text T of length n would compute the most frequent bigram in Θ(n²) time by computing the frequency of each bigram T[i]T[i+1] for every integer i with 1 ≤ i ≤ n − 1, keeping only the most frequent bigram in memory. This sums up to O(n³) total time, and can be Θ(n³) for some texts, since there can be Θ(n) different bigrams considered for replacement by Re-Pair. To achieve our goal of O(n²) total time, we first provide a trade-off algorithm (cf. Lemma 2.2) finding the d most frequent bigrams in O(n² lg d / d) time for a trade-off parameter d. We subsequently run this algorithm for increasing values of d, and show that we need to run it O(lg n) times, which gives us O(n²) time if d increases sufficiently fast. Our major tools are appropriate text partitioning, elementary scans, and sorting steps, which we visualize in Sect. 2.4 by an example, and practically evaluate in Sect. 2.5. When τ = o(n), a different approach using word-packing and bit-parallel techniques becomes attractive, leading to an O(n² lg log_τ n lg lg lg n / log_τ n) time algorithm, which we explain in Sect. 3. Our algorithm can be parallelized (Sect. 5), used in external memory (Sect. 6), or adapted to compute the MR-Re-Pair grammar in small space (Sect. 4). Finally, in Sect. 7 we study several heuristics that make the algorithm faster on specific texts.

Preliminaries
We use the word RAM model with a word size of Ω(lg n) for an integer n ≥ 1. We work in the restore model [7], in which algorithms are allowed to overwrite the input, as long as they can restore the input to its original form.
Strings. Let T be a text of length n whose characters are drawn from an integer alphabet Σ of size σ = n^{O(1)}. A bigram is an element of Σ². The frequency of a bigram B in T is the number of non-overlapping occurrences of B in T, which is at most |T|/2.

Re-Pair. We reformulate the recursive description of the introduction by dividing a Re-Pair construction algorithm into turns. Stipulating that T_i is the text after the i-th turn with i ≥ 1 and T_0 := T ∈ Σ_0^+ with Σ_0 := Σ, Re-Pair replaces one of the most frequent bigrams (ties are broken arbitrarily) in T_{i−1} with a new non-terminal in the i-th turn. Given this bigram is bc ∈ Σ_{i−1}², Re-Pair replaces all occurrences of bc in T_{i−1} with a new non-terminal X_i, and sets Σ_i := Σ_{i−1} ∪ {X_i}.
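As a small illustration of the frequency definition, a sketch counting non-overlapping occurrences greedily from the left (the function name is ours):

```python
def bigram_frequency(text, b, c):
    """Frequency of the bigram bc: the number of non-overlapping
    occurrences, counted greedily from the left (at most |T|/2)."""
    count = i = 0
    while i + 1 < len(text):
        if text[i] == b and text[i + 1] == c:
            count += 1
            i += 2   # skip so an occurrence is not reused (matters for b == c)
        else:
            i += 1
    return count
```

For example, the frequency of aa in aaaa is 2, but in aaa it is 1, since the middle occurrence overlaps the first.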

Sequential Algorithm
A major task for producing the Re-Pair grammar is to count the frequencies of the most frequent bigrams. Our workhorse for this task are frequency tables. A frequency table in T_i of length f stores pairs of the form (bc, x), where bc is a bigram and x the frequency of bc in T_i. It uses f lg(σ_i² n_i/2) bits of space, since an entry stores a bigram consisting of two characters from Σ_i and its respective frequency, which can be at most n_i/2. Throughout this paper, we use an elementary in-place sorting algorithm like heapsort:

Lemma 2.1 ([32]). An array of length n can be sorted in-place in O(n lg n) time.

Trade-Off Computation
By embracing the frequency tables, we present a solution with a trade-off parameter:

Lemma 2.2. Given an integer d with d ≥ 1, we can compute the frequencies of the d most frequent bigrams in a text of length n whose characters are drawn from an alphabet of size σ in O(max(n, d) n lg d / d) time using 2d lg(σ²n/2) + O(lg n) bits.
Proof. Our idea is to partition the set of all bigrams appearing in T into ⌈n/d⌉ subsets, compute the frequencies for each subset, and finally merge these frequencies. In detail, we partition the text T = S_1 ⋯ S_{⌈n/d⌉} into ⌈n/d⌉ substrings such that each substring has length d (the last one has length at most d). Subsequently, we extend S_j to the left (only if j > 1) and to the right (only if j < ⌈n/d⌉) such that S_j and S_{j+1} overlap by one text position, for 1 ≤ j < ⌈n/d⌉. By doing so, we take the bigram on the border of two adjacent substrings S_j and S_{j+1} into account for each j < ⌈n/d⌉. Next, we create two frequency tables F and F′, each of length d, for storing the frequencies of d bigrams. With F and F′, we process each of the ⌈n/d⌉ substrings S_j as follows: Let us fix an integer j with 1 ≤ j ≤ ⌈n/d⌉. We first put all bigrams of S_j into F′ in lexicographic order. We can perform this within the space of F′ in O(d lg d) time since there are at most d different bigrams in S_j. We compute the frequencies of all these bigrams in the complete text T in O(n lg d) time by scanning the text from left to right while locating each read bigram in F′ in O(lg d) time with a binary search. Subsequently, we interpret F and F′ as one large frequency table, sort it by frequency, and keep only the d most frequent bigrams in F.
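The block-wise counting of this proof can be sketched in Python as follows (names are ours; instead of the in-place tables F and F′ it uses dictionaries, and the non-overlap rule for bigrams with two equal characters is handled by remembering the last counted position):

```python
import bisect

def top_d_bigrams(text, d):
    """Sketch of the trade-off of Lemma 2.2: process the text in blocks of
    d positions; for each block, gather its (at most d) distinct bigrams in
    sorted order, count them over the whole text with binary searches, and
    merge the result into a running top-d table F."""
    n = len(text)
    F = {}                                 # bigram -> frequency, top d kept
    for start in range(0, n - 1, d):
        block = sorted({(text[j], text[j + 1])
                        for j in range(start, min(start + d, n - 1))})
        counts = [0] * len(block)
        last = {}                          # last counted position per bigram
        for i in range(n - 1):             # one scan of T per block
            pair = (text[i], text[i + 1])
            pos = bisect.bisect_left(block, pair)
            if pos < len(block) and block[pos] == pair:
                if pair[0] == pair[1] and last.get(pair) == i - 1:
                    continue               # overlapping occurrence: skip
                counts[pos] += 1
                last[pair] = i
        merged = dict(F)
        merged.update(zip(block, counts))  # keep only the d most frequent
        F = dict(sorted(merged.items(), key=lambda kv: -kv[1])[:d])
    return F
```

On the example string of Sect. 2.4, the bigrams ab and ca both have frequency five, and aa has frequency three.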

Algorithmic Ideas
With Lemma 2.2, we can compute T_m in O(mn² lg d / d) time with additional 2d lg(σ_m² n/2) bits of working space on top of the text for a parameter d with 1 ≤ d ≤ n. In the following, we present an O(n²) time algorithm that needs max((n/c) lg n, n⌈lg σ_m⌉) + O(lg n) bits of working space, where the text space is included as a rewriteable part of the working space and c ≥ 1 is a constant. In this model, we assume that we can enlarge the text T_i from n_i⌈lg σ_i⌉ bits to n_i⌈lg σ_{i+1}⌉ bits without additional extra memory. Our main idea is to store a growing frequency table in the space freed up by replacing bigrams with non-terminals. In detail, we maintain a frequency table F in T_i of length f_k for a growing variable f_k, which is set to f_0 := O(1) in the beginning. The table F takes f_k lg(σ_i² n/2) bits, which is O(lg(σ²n)) = O(lg n) bits for k = 0. When we want to query it for a most frequent bigram, we linearly scan F in O(f_k) = O(n) time, which is not a problem since (a) the number of queries is m ≤ n, and (b) we aim for O(n²) overall running time. A consequence is that there is no need to sort the bigrams in F according to their frequencies, which simplifies the following discussion.
Instead of recomputing F in every turn i, we want to recompute it only when it no longer stores a most frequent bigram. However, it is a priori not clear when this happens, as replacing a most frequent bigram during a turn (a) removes this entry from F and (b) can reduce the frequencies of other bigrams in F, making them possibly less frequent than other bigrams not tracked by F. Hence, the variable i for the i-th turn (creating the i-th non-terminal) and the variable k counting the recomputations of the frequency table F are loosely connected. We group together all turns with the same f_k and call this group the k-th round of the algorithm. At the beginning of each round, we enlarge f_k and create a new F with a capacity for f_k bigrams. Since a recomputation of F takes much time, we want to end a round only when F is no longer useful, i.e., when we can no longer guarantee that F stores a most frequent bigram. To achieve our claimed time bounds, we want to assign all m turns to O(lg n) different rounds, which is only possible if f_k grows sufficiently fast.
Algorithm Outline. Given we are at the beginning of the k-th round and the i-th turn, we compute the frequency table F storing f_k bigrams, and additionally keep the lowest frequency of F as a threshold t, which is treated as a constant during this round. During the i-th turn, we replace the most frequent bigram (say, bc ∈ Σ_i²) in the text T_i with a non-terminal X_{i+1} to produce T_{i+1}. Thereafter, we remove bc from F, update those frequencies in F that were decreased by the replacement of bc with X_{i+1}, and add each bigram containing the new character X_{i+1} to F if its frequency is at least t. Whenever a frequency in F drops below t, we discard its entry. If F becomes empty, we move to the (k+1)-st round and create a new F for storing f_{k+1} frequencies. Otherwise (F still stores an entry), we can be sure that F stores a most frequent bigram. In both cases, we recurse with the (i+1)-st turn by selecting the bigram with the highest frequency stored in F. We describe in the following how we update F and how large f_{k+1} can be.

Algorithmic Details
Suppose that we are in the k-th round and the i-th turn. Let t be the lowest frequency in F computed at the beginning of the k-th round. We keep t as a constant threshold, maintaining the invariant that all frequencies in F are at least t during the k-th round. With this threshold we can assure in the following that F is either empty or stores a most frequent bigram. Now suppose that the most frequent bigram of T_i is bc ∈ Σ_i², which is stored in F. To produce T_{i+1} (and hence advance to the (i+1)-st turn), we enlarge the space of T_i from n_i⌈lg σ_i⌉ to n_i⌈lg σ_{i+1}⌉ bits, and replace all occurrences of bc in T_i with a new non-terminal X_{i+1}. Subsequently, we would like to take the next bigram of F. For that, however, we need to update the stored frequencies in F. To see this necessity, suppose that there is an occurrence of abcd with two characters a, d ∈ Σ_i in T_i. By replacing bc with X_{i+1}, (a) the frequencies of ab and cd decrease by one, and (b) the frequencies of aX_{i+1} and X_{i+1}d increase by one.
Updating F. We can take care of the former changes (a) by decreasing the frequency of the respective bigram in F (in case that it is present). If the frequency of this bigram drops below the threshold t, we remove it from F, as there may be bigrams with a higher frequency that are not present in F. To cope with the latter changes (b), we track the characters adjacent to X_{i+1} after the replacement, count their numbers, and add their respective bigrams to F if their frequencies are sufficiently high. In detail, suppose that we have substituted bc with X_{i+1} exactly h times. Consequently, with the new text T_{i+1} we have additionally h lg σ_{i+1} bits of free space, which we call D in the following. Subsequently, we scan the text and put the characters of Σ_{i+1} appearing to the left of each of the h occurrences of X_{i+1} into D. After sorting the characters in D lexicographically, we can count the frequency of aX_{i+1} for each character a ∈ Σ_{i+1} preceding an occurrence of X_{i+1} in T_{i+1} by scanning D linearly. If the obtained frequency of such a bigram aX_{i+1} is at least as high as the threshold t, we insert aX_{i+1} into F, and subsequently discard a bigram with the currently lowest frequency in F if the size of F has become f_k + 1. In case that we visit a run of X_{i+1}'s during the creation of D, we must take care not to count the overlapping occurrences of X_{i+1}X_{i+1}. Finally, we can analogously count the occurrences of X_{i+1}d for all characters d ∈ Σ_i succeeding an occurrence of X_{i+1}.
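The update step (b) can be sketched as follows (a Python toy with hypothetical names; the list D plays the role of the freed text space, and runs of the fresh non-terminal are counted separately to avoid overlapping occurrences):

```python
def neighbor_frequencies(seq, X):
    """Count the new bigrams aX created by substituting a bigram with the
    fresh non-terminal X: collect the left neighbor of every occurrence of
    X (the paper stores these in the freed space D), sort, and count by a
    linear scan; runs of X are counted separately as non-overlapping XX."""
    D = sorted(seq[i - 1] for i in range(1, len(seq))
               if seq[i] == X and seq[i - 1] != X)
    freqs = {}
    for a in D:                      # linear scan over the sorted neighbors
        freqs[(a, X)] = freqs.get((a, X), 0) + 1
    run = 0                          # a run of X of length l gives l // 2 XX's
    for s in list(seq) + [None]:     # sentinel flushes the final run
        if s == X:
            run += 1
        else:
            if run >= 2:
                freqs[(X, X)] = freqs.get((X, X), 0) + run // 2
            run = 0
    return freqs
```

The right neighbors (bigrams X_{i+1}d) would be counted symmetrically with a second scan.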
Capacity of F. After the above procedure we have updated the frequencies of F. When F becomes empty, we end the k-th round and continue with the (k+1)-st round by creating a new frequency table F with capacity f_{k+1}. In what follows, we (a) analyze in detail when F becomes empty (as this determines the sizes f_k and f_{k+1}), and (b) show that we can compensate the number of discarded bigrams with an enlargement of F's capacity from f_k to f_{k+1} bigrams for the sake of our targeted total running time: If the frequency of bc in T_i is x, then we can reduce at most 2x frequencies of other bigrams. Since a bigram must occur at least twice in T_i to be present in F, the frequency of bc has to be at least max(2, (f_k − 1)/2) for discarding all bigrams of F, and each replacement of bc with X_{i+1} frees up ⌈lg σ_{i+1}⌉ bits of the text.
Suppose that we have enough space available for storing the frequencies of αf_k bigrams, where α is a constant (depending on σ_i and n_i) such that F and the working space of Lemma 2.2 with d = f_k can be stored within this space. Let δ := lg(σ_{i+1}² n_i/2) be the number of bits needed to store one entry of F, and let β := min(δ/lg σ_{i+1}, cδ/lg n) be the minimum number of characters that need to be freed to store one frequency in this space. To understand the value of β, we look at the arguments of the minimum function in the definition of β and simultaneously at the maximum function in our aimed working space of max(n⌈lg σ_m⌉, (n/c) lg n) + O(lg n) bits (cf. Thm. 2.3):
• The first item in this maximum function allows us to spend lg σ_{i+1} bits for each freed character, such that we obtain space for one additional entry in F after freeing δ/lg σ_{i+1} characters.
• The second item allows us to use lg n additional bits after freeing up c characters. Hence, after freeing up cδ/lg n characters, we have space to store one additional entry in F.
Since we let f_k grow by a factor of at least γ := min_{1≤i≤n} γ_i > 1 for each recomputation of F, we have f_k = Ω(γ^k), and therefore f_k = Θ(n) after k = O(lg n) steps. Consequently, after reaching k = O(lg n), we can iterate the above procedure a constant number of times to compute the non-terminals of the remaining bigrams occurring at least twice.

Time Analysis. In the total picture, we compute F O(lg n) times with Lemma 2.2. For the k-th time, we run the algorithm of Lemma 2.2 with d = f_k. In the i-th turn, we update F by decreasing the frequencies of the bigrams affected by the substitution of the most frequent bigram bc with X_i. For decreasing such a frequency, we look up its respective bigram with a linear scan in F, which takes O(f_k) = O(n) time.

Output. Finally, we show that we can store the computed grammar in text space. More precisely, we want to store the grammar in an auxiliary array A packed at the end of the working space such that the entry A[i] stores the right-hand side of the non-terminal X_i, which is a bigram. Thus, the non-terminals are represented implicitly as indices of the array A. We therefore need to subtract 2 lg σ_i bits of space from our working space αf_k after the i-th turn. By adjusting α in the above equations, we can deal with this additional space requirement as long as the frequencies of the replaced bigrams are at least three (we charge two occurrences for growing the space of A).
When only bigrams with frequencies of at most two remain, we switch to a simpler algorithm, discarding the idea of maintaining the frequency table F: Suppose that we work on the text T_i. Let k be a text position, which is 1 in the beginning, but is incremented in the following turns while maintaining the invariant that T_i[1..k] does not contain a bigram of frequency two. We scan T_i[k..n] linearly from left to right and check, for each text position j, whether the bigram T_i[j]T_i[j+1] has a second occurrence in T_i[j+2..n]. If so, we (a) stop the scan, (b) replace both occurrences of this bigram with a new non-terminal X_{i+1} to transform T_i into T_{i+1}, and (c) recurse on T_{i+1} with k := j until no bigram with frequency two is left.
The position k, which we never decrement, helps us to skip over all text positions starting bigrams with a frequency of one. Thus, the algorithm spends O(n) time for each such text position, and O(n) time for each bigram with frequency two. Since there are at most n such bigrams, the overall running time of this algorithm is O(n²).
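A sketch of this endgame (names are ours; unlike the description above, this toy conservatively rescans from position k after a replacement instead of jumping ahead, which does not change the quadratic bound):

```python
def replace_frequency_two(seq, next_symbol):
    """Endgame once no bigram occurs three times: scan with a
    never-decreasing position k; whenever the bigram at position k has a
    second, non-overlapping occurrence to its right, replace both with a
    fresh non-terminal; otherwise advance k."""
    rules = {}
    k = 0
    while k + 1 < len(seq):
        pair = (seq[k], seq[k + 1])
        j = k + 2                          # search a second occurrence
        while j + 1 < len(seq) and (seq[j], seq[j + 1]) != pair:
            j += 1
        if j + 1 < len(seq):               # found: replace both occurrences
            rules[next_symbol] = pair
            seq = seq[:k] + [next_symbol] + seq[k + 2:j] + [next_symbol] + seq[j + 2:]
            next_symbol += 1               # re-check from k conservatively
        else:
            k += 1                         # pair occurs once: skip it forever
    return seq, rules
```

On list(b"abcabc") with next_symbol = 256, the sketch produces the same grammar as the naive scheme.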
Remark 2.4 (Pointer Machine Model). Since our algorithm consists only of elementary sorting and scanning steps and refrains from complicated data structures, it also runs on a pointer machine, yielding the same time bound of O(n²). For the space bounds, we assume that the text is given in n words, where a word is large enough to store an element of Σ_m or a text position.

Step-by-Step Execution
Here, we present an exemplary execution of the first turn (of the first round) on the input T = cabaacabcabaacaaabcab. We visualize each step of this turn as a row in Fig. 1. A detailed description of each row follows:

Row 1: Suppose that we have computed F, which has a constant number of entries. The highest frequency is five, achieved by ab and ca. The lowest frequency represented in F is three, which becomes the threshold for a bigram to be present in F, such that bigrams whose frequencies drop below this threshold are removed from F. This threshold is a constant for all later turns until F is rebuilt (in the following round). During Turn 1, the algorithm proceeds as follows:

Figure 1: Step-by-step execution of the first turn of our algorithm on the string T = cabaacabcabaacaaabcab. The turn starts with the memory configuration given in Row 1. Positions 1 to 21 are text positions; positions 22 to 24 belong to F (f_0 = 3, and it is assumed that a frequency fits into a text entry). Subsequent rows depict the memory configuration during Turn 1. A comment to each row is given in Sect. 2.4.
Row 2: Choose ab as the bigram to replace with a new non-terminal X_1 (ties are broken arbitrarily). Replace every occurrence of ab with X_1 while decrementing the frequencies in F according to the characters neighboring the replaced occurrences.

Row 6: Insert new bigrams (consisting of a character of D and X_1) whose frequencies are at least as large as the threshold.
Row 7: Scan the text again and copy each character succeeding an occurrence of X_1 in T_1 to D (symmetric to Row 4).

Row 8: Sort all characters in D lexicographically (symmetric to Row 5).

Row 9: Insert new bigrams whose frequencies are at least as large as the threshold (symmetric to Row 6).
Practical Evaluation

Our implementation simplifies the algorithm in that we (a) fix the bit width of the text space to 16 bits, and (b) assume that Σ is the byte alphabet. We further skip the step increasing the bit width from lg σ_i to lg σ_{i+1}. This means that the program works as long as the characters of Σ_m fit into 16 bits. The benchmark, whose results are displayed in Table 1, was conducted on a Mac Pro Server with an Intel Xeon CPU X5670 clocked at 2.93 GHz running Arch Linux. The implementation was compiled with gcc-8.2.1 with the highest optimization level -O3. Looking at Table 1, we can see that the running time is super-linear in the input size on all text instances, which we obtained from the Pizza&Chili corpus (http://pizzachili.dcc.uchile.cl/). Table 2 gives some characteristics of the used data sets. We see that on unary strings the number of rounds is the number of turns plus one, since F becomes empty after each turn such that the algorithm recomputes F. Note that the number of rounds can drop while scaling the prefix length, depending on the choice of the bigrams stored in F.

Bit-Parallel Algorithm
In the case that the number of terminals and non-terminals τ := σ_m is o(n), a word-packing approach becomes interesting. We present techniques speeding up previously introduced operations on chunks of O(log_τ n) characters from O(log_τ n) time to O(lg lg lg n) time. In the end, these techniques allow us to speed up the sequential algorithm of Thm. 2.3 from O(n²) time to the following:

Theorem 3.1. We can compute Re-Pair on a string of length n in O(n² lg log_τ n lg lg lg n / log_τ n) time with max((n/c) lg n, n⌈lg τ⌉) + O(lg n) bits of working space including the text space, where c ≥ 1 is a fixed constant, and τ is the number of terminal and non-terminal symbols.
Note that the O(lg lg lg n) factor is due to the popcount function [31, Algo. 1], which has been optimized to a single instruction on modern computer architectures.

Figure 2: Operations used in Figs. 4 and 5 for two bit vectors X and Y. All operations can be computed in constant time. See Fig. 3 for an example of rmSufRun and rmPreRun.
• X ≪ j: shift X by j positions to the left
• X ≫ j: shift X by j positions to the right
• ¬X: bitwise NOT of X
• X ⊗ Y: bitwise XOR of X and Y
• −1: bit vector consisting only of one bits
• msb(X): returns the position of the most significant set bit of X, i.e., ⌊lg X⌋ + 1; see [12, Sect. 5] for a constant-time algorithm using O(lg n) bits
• rmPreRun(X): sets all bits of the maximal prefix of consecutive ones to zero
• rmSufRun(X): sets all bits of the maximal suffix of consecutive ones to zero
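The three non-standard operations can be sketched with Python integers as bit vectors (the fixed word width w is an assumption of this sketch; function names are ours):

```python
def msb(x):
    """1-based position of the most significant set bit, i.e.,
    floor(lg x) + 1; 0 for x == 0."""
    return x.bit_length()

def rm_suf_run(x):
    """rmSufRun: clear the maximal run of ones at the least significant end.
    Adding 1 flips exactly the trailing ones, so AND-ing removes them."""
    return x & (x + 1)

def rm_pre_run(x, w):
    """rmPreRun: clear the maximal run of ones at the most significant end
    of a w-bit word."""
    if not (x >> (w - 1)) & 1:
        return x                                   # no run touches the top bit
    hz = msb(~x & ((1 << w) - 1))                  # highest zero bit, 1-based
    return x & ((1 << (hz - 1)) - 1) if hz else 0  # keep only bits below it
```

For instance, rm_suf_run(0b110011) = 0b110000, and rm_pre_run(0b1101, w=4) = 0b0001.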

Figure 3: Step-by-step execution of rmPreRun(X) and rmSufRun(X) introduced in Fig. 2 on a bit vector X.

Broadword Search
First, we deal with accelerating the computation of the frequency of a bigram in T by exploiting broadword search thanks to the word RAM model.We start with the search of single characters and subsequently extend this result to bigrams: Lemma 3.2.We can count the occurrences of a character c ∈ Σ in a string of length O(log σ n) in O(lg lg lg n) time.
Proof. Let q be the largest multiple of ⌈lg σ⌉ fitting into a computer word, divided by ⌈lg σ⌉, and let S ∈ Σ* be a string of length q. Our first task is to compute a bit mask of length q⌈lg σ⌉ marking with a '1' the occurrences of a character c ∈ Σ in S. For that, we follow the constant-time broadword pattern matching of Knuth [20, Sect. 7.1.3]: Let H and L be two bit vectors of length ⌈lg σ⌉ having marked only the most significant and the least significant bit, respectively, and let H^q and L^q denote the q-fold concatenations of H and L. Then the operations in Fig. 4 yield an array X of length q, where each entry of X has ⌈lg σ⌉ bits, such that X[i] = 1^{⌈lg σ⌉} if S[i] = c and X[i] = 0^{⌈lg σ⌉} otherwise.
To obtain the number of occurrences of c in S, we use the popcount operation returning the number of set bits in X, and divide the result by ⌈lg σ⌉. The popcount instruction takes O(lg lg lg n) time [31, Algo. 1].
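A Python simulation of this broadword counting (a sketch with names of our choosing; it uses an exact zero-field test and, unlike the figures, marks each matching field with a single high bit, so no division by ⌈lg σ⌉ is needed):

```python
def pack(chars, width):
    """Pack symbols into one integer, first symbol in the most significant
    field (as in the paper's figures)."""
    word = 0
    for ch in chars:
        word = (word << width) | ch
    return word

def count_char(word, c, width, q):
    """Count the fields of `word` (q fields of `width` bits each) equal to
    c, via XOR and a broadword zero-field test followed by a popcount."""
    mask = (1 << (width * q)) - 1
    L = mask // ((1 << width) - 1)     # 0..01 repeated in every field
    H = L << (width - 1)               # high bit of every field
    Y = (word ^ (c * L)) & mask        # field of Y is zero  <=>  field == c
    low = Y & ~H & mask                # low width-1 bits of every field
    nonzero = ((low + (~H & mask)) | Y) & H   # high bit set <=> field != 0
    matches = nonzero ^ H              # high bit set <=> field == c
    return bin(matches).count("1")     # popcount
```

On the example of Fig. 4 (S = 101010000, c = 010, lg σ = 3, q = 3) this counts exactly one occurrence.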
Having Lemma 3.2, we show that we can compute the frequency of a bigram in T in O(n lg lg lg n / log_σ n) time. For that, we partition T into strings of length ⌊log_σ n⌋ fitting into a computer word, and call each string of this partition a chunk. For each chunk S, we call find(c, S) to compute the bit vector X storing the occurrences of c in S. In case that we want to use Lemma 3.2 for bigrams, we can interpret the text T ∈ Σⁿ of length n as a text of length ⌈n/2⌉ over the alphabet Σ².
Figure 4: Broadword matching of all occurrences of a character c in a string S fitting into a computer word. For the last step, special care has to be taken when the last character of S is a match, as shifting X by ⌈lg σ⌉ bits to the right might erase a '1' bit witnessing the rightmost match. In the description column, X is treated as an array of integers with bit width ⌈lg σ⌉. In this example, S = 101010000, c has the bit representation 010 with lg σ = 3, and q = 3.
The result is, however, not the frequency of the bigram in general, as this only counts occurrences starting at odd positions. For computing the frequency of a bigram bc ∈ Σ², we distinguish the cases b ≠ c and b = c.
Case b ≠ c. By applying Lemma 3.2 to find the character bc ∈ Σ² in a chunk S (interpreted as a string of length ⌊q/2⌋ over the alphabet Σ²), we obtain the number of occurrences of bc starting at odd positions in S. To obtain this number for all even positions, we apply the same procedure to dS for a character d ∈ Σ \ {b, c}. Additional care has to be taken at the borders of each chunk, where we match the last character of the current chunk and the first character of the subsequent chunk with b and c, respectively.
Case b = c. This case is more involved, as overlapping occurrences of bb can appear in S, which we must not count. To this end, we watch out for runs of b's, i.e., substrings of maximal length consisting of the character b (here, we also consider a maximal substring of b's of length 1 as a run). We separate these runs into runs ending either at even or at odd positions. We do this because the frequency of bb in a run of b's ending at an even (resp. odd) position is the number of occurrences of bb within this run ending at an even (resp. odd) position. We can compute these positions similarly to the approach for b ≠ c by first (a) hiding runs ending at even (resp. odd) positions, and then (b) counting all bigrams ending at even (resp. odd) positions. Runs of b that are a prefix or a suffix of S are handled individually if S is not the first or the last chunk of T, respectively. That is because a run passing a chunk border starts and ends in different chunks. To take care of those runs, we remember the number of b's of the longest such suffix of every chunk, and accumulate this number until we find the end of this run, which is a prefix of a subsequent chunk. The procedure for counting the frequency of bb inside S is explained with an example in Fig. 5. With the aforementioned analysis of the runs crossing chunk borders, we can extend this procedure to count the frequency of bb in T. We conclude:

Lemma 3.3. We can compute the frequency of a bigram in a string T of length n whose characters are drawn from an alphabet of size σ in O(n lg lg lg n / log_σ n) time.
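The combinatorial fact underlying this case analysis is that a maximal run of b's of length l contributes exactly ⌊l/2⌋ non-overlapping occurrences of bb. A plain (non-bit-parallel) Python check of this claim, with a name of our choosing:

```python
def freq_equal_bigram(text, b):
    """Frequency of the bigram bb: each maximal run of b of length l
    contributes l // 2 non-overlapping occurrences."""
    count = run = 0
    for ch in list(text) + [None]:   # sentinel flushes the final run
        if ch == b:
            run += 1
        else:
            count += run // 2
            run = 0
    return count
```

For example, the run bbbb in abbbba yields two occurrences of bb, while the two isolated b's of bab yield none.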

Figure 5: Finding a bigram bb in a string S of length q, where q is the largest multiple of 2⌈lg σ⌉ fitting into a computer word, divided by ⌈lg σ⌉. The computation first builds a bit mask for all runs ending at even positions, extracts the occurrences of b belonging to such runs, and derives from them the frequency of bb within these runs; it then proceeds symmetrically for the runs ending at odd positions. In the example, we represent the strings M, B, E, and X as arrays of integers with bit width x := ⌈lg σ⌉ and write 1 and 0 for 1^x and 0^x, respectively. Let findBigram(bc, X) := find(bc, X) | find(bc, dX) for d ≠ b be the frequency of a bigram bc with b ≠ c as described in Sect. 3.1. Each of the popcount queries gives us one occurrence as a result (after dividing the returned number by ⌈lg σ⌉); thus the frequency of bb in S, without looking at the borders of S, is two. As a side note, modern computer architectures allow us to shrink the 0^x or 1^x blocks to single bits by instructions like pext_u64 taking a single CPU cycle.

Bit-Parallel Adaption
Similarly to Lemma 2.2, we present an algorithm computing the d most frequent bigrams, but now with the word-packed search of Lemma 3.3:

Lemma 3.4. Given an integer d with d ≥ 1, we can compute the frequencies of the d most frequent bigrams in a text of length n whose characters are drawn from an alphabet of size σ in O(n² lg lg lg n / log_σ n) time.

Proof. We allocate a frequency table F of length d. For each text position i with 1 ≤ i ≤ n − 1, we compute the frequency of T[i]T[i+1] in O(n lg lg lg n / log_σ n) time with Lemma 3.3. After computing a frequency, we insert it into F if it is among the d most frequent bigrams computed so far. We can perform each insertion in O(lg d) time if we keep the entries of F sorted by frequency.
Studying the final time bounds of Eq. (1) for the sequential algorithm of Sect. 2, we see that we spend O(n²) time in the first turn, but less time in later turns. Hence, we want to run the bit-parallel algorithm only in the first few turns, until f_k becomes so large that the benefits of running Lemma 2.2 outweigh the benefits of the bit-parallel approach of Lemma 3.4. In detail, for the k-th round, we set d := f_k and run the algorithm of Lemma 3.4 on the current text if d is sufficiently small, or otherwise the algorithm of Lemma 2.2. In total, we obtain the time bound claimed in Thm. 3.1, where τ = σ_m is the number of terminals and non-terminals, and the switching point is determined by k/γ^k > lg lg lg n / log_τ n ⇔ k = O(lg(lg n / (lg τ lg lg lg n))).
To obtain the claim of Thm. 3.1, it is left to show that the k-th round with the bit-parallel approach uses O(n^2 lg lg lg n / log_τ n) time, as we now want to charge each text position with O(n / log_τ n) time by the same amortized analysis as after Eq. (1). We target O(n / log_τ n) time for (1) replacing all occurrences of a bigram, (2) shifting the freed-up text space to the right, (3) finding the bigram with the highest or lowest frequency in F, (4) updating or exchanging an entry in F, and (5) looking up the frequency of a bigram in F. Let x := ⌈lg σ_{i+1}⌉ and let q be the largest multiple of x fitting into a computer word, divided by x. For Item (1), we partition T into substrings of length q and apply Item (1) to each such substring S. Here, we combine the two bit vectors of Fig. 5 used for the two popcount calls by a bitwise OR, and call the resulting bit vector Y. Interpreting Y as an array of integers of bit width x, Y has q entries, and it holds that Y[i] = 2^x − 1 if and only if S[i] is the second character of an occurrence of the bigram we want to replace. We can replace this character at all marked positions in S by a non-terminal X_{i+1} using x bits with the instruction (S & ¬Y) | ((Y & L^q) · X_{i+1}), where L with |L| = x is the bit vector having marked only the least significant bit. Subsequently, for Item (2), we erase all characters S[i] with Y[i] = 2^x − 1 and shift the remaining characters accordingly. For the remaining items, our trick is to represent F by a minimum and a maximum heap, both realized as array heaps. To accommodate the space increase, we have to lower γ adequately. Each element of an array heap stores a frequency and a pointer to a bigram stored in a separate array B keeping all bigrams consecutively. A pointer array P stores, for each bigram of B, pointers to the respective frequencies in both heaps. The total data structure can be constructed at the beginning of the k-th round in O(f_k) time, and hence does not worsen the time bounds. While B solves Item (5), the two heaps with P solve Items (3) and (4), even in O(lg f_k) time.
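The double-heap representation of F can be sketched as follows; this is a minimal Python stand-in (the names MinMaxFreqTable, update, min_bigram, and max_bigram are ours) that replaces the paper's array heaps with the pointer arrays B and P by two heapq heaps with lazy deletion plus a dictionary, which still supports lookups (Item 5), updates (Item 4), and minimum/maximum retrieval (Item 3) in amortized logarithmic time:

```python
import heapq

class MinMaxFreqTable:
    """Frequency table supporting lookup, update, and min/max retrieval.
       Stale heap entries are skipped lazily on retrieval."""
    def __init__(self):
        self.freq = {}       # bigram -> current frequency       (Item 5)
        self.min_heap = []   # entries (freq, bigram)
        self.max_heap = []   # entries (-freq, bigram)

    def update(self, bigram, f):                                # (Item 4)
        self.freq[bigram] = f
        heapq.heappush(self.min_heap, (f, bigram))
        heapq.heappush(self.max_heap, (-f, bigram))

    def _clean(self, heap, sign):
        # pop entries whose stored frequency no longer matches the table
        while heap and self.freq.get(heap[0][1]) != sign * heap[0][0]:
            heapq.heappop(heap)

    def min_bigram(self):                                       # (Item 3)
        self._clean(self.min_heap, 1)
        return self.min_heap[0][1] if self.min_heap else None

    def max_bigram(self):                                       # (Item 3)
        self._clean(self.max_heap, -1)
        return self.max_heap[0][1] if self.max_heap else None
```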
In case we want to store the output in working space, we follow the description in the paragraph after Thm. 2.3, where we now use word-packing to find the second occurrence of a bigram in T_i in O(n / log_{σ_i} n) time.

Computing MR-Re-Pair in Small Space
We can adapt our algorithm to compute the MR-Re-Pair grammar scheme proposed by Furuya et al. [13]. The difference to Re-Pair is that MR-Re-Pair replaces the most frequent maximal repeat instead of the most frequent bigram, where a maximal repeat is a reoccurring substring of the text whose frequency decreases when extending it to the left or to the right. Our idea is to exploit the fact that a most frequent bigram corresponds to a most frequent maximal repeat [13, Lemma 2]. This means that we can find a most frequent maximal repeat by extending all occurrences of a most frequent bigram to their left and to their right until the extended occurrences are no longer equal substrings. Although such an extension can be time consuming, this time is amortized by the number of characters that are replaced on creating an MR-Re-Pair rule. Hence, we conclude that we can compute MR-Re-Pair in the same space and time bounds as our algorithm computing the Re-Pair grammar.
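The left/right extension of the occurrences of a most frequent bigram can be sketched as follows (the function name is ours; the sketch ignores overlaps between extended occurrences, which a careful implementation has to handle):

```python
def extend_to_maximal_repeat(T, occ):
    """occ: sorted start positions of a most frequent bigram in T.
       Extend the common substring to the left and to the right as long as
       all occurrences still read the same characters."""
    l = 0  # number of characters gained to the left
    while all(p - l > 0 for p in occ) and \
          len({T[p - l - 1] for p in occ}) == 1:
        l += 1
    r = 2  # current length to the right of each start (the bigram itself)
    while all(p + r < len(T) for p in occ) and \
          len({T[p + r] for p in occ}) == 1:
        r += 1
    start = occ[0] - l
    return T[start:start + l + r]
```

For example, the bigram "ab" occurring at positions 1 and 6 of "xabcyxabcz" extends to the maximal repeat "xabc".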

Parallel Algorithm
Suppose that we have p processors on a CRCW machine, supporting in particular parallel insertions of elements and frequency updates in a frequency table. In the parallel setting, we allow ourselves to spend O(p lg n) bits of additional working space such that each processor has an extra budget of O(lg n) bits. In our computational model, we assume that the text is stored in p parts of equal length such that we can enlarge a text using n lg σ bits to n(lg σ + 1) bits in max(1, n/p) time without extra memory. For our parallel variant computing Re-Pair, our workhorse is a parallel sorting algorithm (Lemma 5.1). The O(n/d) merge steps are conducted in the same way as in Lemma 2.2, yielding the bounds of Lemma 5.2.
In our sequential model, we produce T_{i+1} by performing a left shift after replacing all occurrences of a most frequent bigram with a new non-terminal X_{i+1}, such that we gain free space at the end of the text. As described in our computational model, our text is stored as a partition of p substrings, each assigned to one processor. Instead of gathering the entire free space at T's end, we gather free space at the end of each of these substrings. We bookkeep the size and location of each such free space (there are at most p of them) such that we can work on the remaining text T_{i+1} as if it were a single continuous array (and not fragmented into p substrings). This shape allows us to perform the left shift in O(n/p) time, while spending O(p lg n) bits of space for the locations of the free-space fragments.
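The bookkeeping of the p free-space fragments can be sketched as a prefix-sum index that maps logical positions of the remaining text to physical positions (the class and method names are ours):

```python
import bisect

class FragmentedText:
    """Address the remaining text as one continuous array although it is
       stored in p substrings, each with free space gathered at its end."""
    def __init__(self, segments):
        # segments: list of (physical_start, used_length), one per processor
        self.segments = segments
        self.prefix = [0]                      # prefix sums of used lengths
        for _, used in segments:
            self.prefix.append(self.prefix[-1] + used)

    def physical(self, j):
        """Map logical position j to its physical position."""
        s = bisect.bisect_right(self.prefix, j) - 1  # segment containing j
        start, _ = self.segments[s]
        return start + (j - self.prefix[s])
```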
For p ≤ n, exchanging Lemma 2.2 with Lemma 5.2 in Eq. (1) yields the claimed time bound. It is left to provide an amortized analysis for updating the frequencies in F during the i-th turn. Here, we can charge each text position with O(n/p) time, as each of the operations (1)-(5) obeys this bound. The first of these operations is used, among others, for finding the bigram with the lowest or highest frequency in F. Computing the lowest or highest frequency in F can be done with a single variable pointing to the currently found entry with the lowest or highest frequency during a parallel scan, thanks to the CRCW model.

Theorem 5.3. We can compute Re-Pair in O(n^2/p) time with p ≤ n processors on a CRCW machine with max((n/c) lg n, n⌈lg σ_m⌉) + O(p lg n) bits of working space including the text space, where c ≥ 1 is a fixed constant and σ_m is the number of terminal and non-terminal symbols. The work is O(n^2).

Computing Re-Pair in External Memory
The last part of this article is devoted to the first external memory (EM) algorithm computing Re-Pair, which is another way to overcome the memory limitation problem.We start with the definition of the EM model, present an approach using a sophisticated heap data structure, and another approach adapting our in-place techniques.
For the following, we use the EM model of Aggarwal and Vitter [1].It features fast internal memory (IM) holding up to M data words, and slow EM of unbounded size.The measure of the performance of an algorithm is the number of input and output operations (I/Os) required, where each I/O transfers a block of B consecutive words between memory levels.Reading or writing n contiguous words from or to disk requires scan(n) = Θ(n/B) I/Os.Sorting n contiguous words requires sort(n) = O((n/B) • log M/B (n/B)) I/Os.For realistic values of n, B, and M , we stipulate that scan(n) < sort(n) ≪ n.
A simple approach is based on an EM heap maintaining the frequencies of all bigrams in the text. A state-of-the-art heap is due to Jiang and Larsen [17], providing insertion, deletion, and the retrieval of the maximum element in O(B^{-1} log_{M/B}(N/B)) I/Os, where N is the size of the heap. Since N ≤ n, inserting all bigrams takes at most sort(n) I/Os. As there are at most n additional insertions, deletions, and maximum-element retrievals, this sums to at most 4 sort(n) I/Os. Finally, we need to scan the text m times to replace the occurrences of the retrieved bigrams, triggering Σ_{i=1}^{m} scan(|T_i|) ≤ m scan(n) I/Os. In the following, we show an EM Re-Pair algorithm that evades the use of complicated data structures and prioritizes scans over sorting.
This algorithm is based on our Re-Pair algorithm. It uses Lemma 2.2 with d := Θ(M) such that F and F′ can be kept in IM. This allows us to perform all sorting steps and binary searches in IM without additional I/O. We only trigger I/O operations for scanning the text, which is done ⌈n/d⌉ times, since we partition T into d substrings. In total, we spend at most mn/M scans for the algorithm of Lemma 2.2. For the actual algorithm, an update of F is done m times, during which we replace all occurrences of a chosen bigram in the text. This gives us m scans in total. Finally, we need to reason about D, which is also created m times. However, D may be larger than M, so we may need to store it in EM. Given that D_i is D in the i-th turn, we sort D in EM, triggering sort(|D_i|) I/Os. With a converse of Jensen's inequality [29, Theorem B] (set there f(x) := x lg x), we obtain Σ_{i=1}^{m} sort(|D_i|) ≤ sort(n) + O(n log_{M/B} 2) total I/Os for all instances of D. We finally obtain:

Theorem 6.1. We can compute Re-Pair with min(4 sort(n), (mn/M) scan(n) + sort(n) + O(n log_{M/B} 2)) + m scan(n) I/Os in external memory.
Our approach can be practically favorable to the heap based approach if m = o(lg n) and mn/M = o(lg n), or if the EM space is also of major concern.
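Under the assumption scan(n) = ⌈n/B⌉ and sort(n) = (n/B) log_{M/B}(n/B), with all hidden constants set to 1 and the O(n log_{M/B} 2) term dropped, a back-of-envelope comparison of the two I/O bounds can be sketched as follows (the function names are ours):

```python
import math

def scan_ios(n, B):
    """scan(n) = Theta(n/B) I/Os, constant taken as 1."""
    return math.ceil(n / B)

def sort_ios(n, B, M):
    """sort(n) = O((n/B) log_{M/B}(n/B)) I/Os, constant taken as 1."""
    return math.ceil((n / B) * math.log(max(n / B, 2), M / B))

def heap_based(n, m, B, M):
    """At most 4 sort(n) I/Os for the EM heap plus m scan(n) for replacements."""
    return 4 * sort_ios(n, B, M) + m * scan_ios(n, B)

def scan_based(n, m, B, M):
    """(mn/M) scan(n) + sort(n) + m scan(n) I/Os for the scan-based variant."""
    return (m * n / M) * scan_ios(n, B) + sort_ios(n, B, M) + m * scan_ios(n, B)
```

For instance, with n = 2^30, m = 100, B = 2^13, and M = 2^23, we have m well above lg n, and indeed the heap-based bound comes out smaller, consistent with the discussion above.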

Heuristics for Practicality
The achieved O(n^2) time bound may convey the impression that this work is of purely theoretical interest. However, we provide here some heuristics that can help us to overcome the practical bottleneck at the beginning of the execution, where only O(lg n) bits of working space are available. In other words, we want to study several heuristics that circumvent the need to call Lemma 2.2 with a small parameter d, as such a case means a considerable time loss. Even a single call of Lemma 2.2 with a small d prevents the computation of Re-Pair of data sets larger than 1 MiB within a reasonable time frame (cf. Sect. 2.5). We present three heuristics, depending on whether our budget on top of the text space is within (1) σ_i^2 lg n bits, (2) n_i lg(σ_{i+1} + n_i) bits, or (3) O(lg n) bits.

Heuristic 1. If σ_i is small enough such that we can spend σ_i^2 lg n bits, then we can compute the frequencies of all bigrams in O(n) time. Whenever we reach a σ_j that lets σ_j^2 lg n grow outside of our budget, we have spent O(n) time in total for reaching T_j from T_i, as the costs for the replacements can be amortized by twice the text length.

Heuristic 2. Suppose that we are allowed to use (n_i − 1) lg(n_i/2) = (n_i − 1) lg n_i − n_i + O(lg n_i) bits in addition to the n_i lg σ_i bits of the text T_i. We create an extra array F of length n_i − 1 with the aim that F[j] stores the frequency of T[j]T[j+1] in T[1..j]. We can fill the array in σ_i scans over T_i, costing us O(n_i σ_i) time. The largest number stored in F identifies the most frequent bigram in T_i.

Heuristic 3. Finally, if the distribution of bigrams is skewed, chances are that one bigram outnumbers all others. In such a case we can use the following algorithm to find this bigram:

Lemma 7.1. Given there is a bigram in T_i (0 ≤ i ≤ n) whose frequency is higher than the sum of the frequencies of all other bigrams, we can compute T_{i+1} in O(n) time using O(lg n) bits.
Proof. We use the Boyer-Moore majority vote algorithm [6] for finding the most frequent bigram in O(n) time with O(lg n) bits of working space.
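The voting step of this proof can be sketched as follows (the function name is ours; the sketch treats every consecutive character pair as an occurrence, ignoring the non-overlapping counting of runs, and relies on the lemma's promise that a majority bigram exists; otherwise, the returned candidate must be verified with a second pass):

```python
def majority_bigram(T):
    """Boyer-Moore majority vote over the n-1 bigrams of T, using O(1)
       extra variables: returns the candidate majority bigram."""
    cand, count = None, 0
    for i in range(len(T) - 1):
        bg = T[i:i + 2]
        if count == 0:          # adopt a new candidate
            cand, count = bg, 1
        elif bg == cand:        # candidate confirmed
            count += 1
        else:                   # candidate challenged
            count -= 1
    return cand
```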

Figure 1: The example string cabaacabcabaacaaabcab with its most frequent bigrams and their (non-overlapping) frequencies: ab:5, ca:5, and aa:3.

Row 3: Remove from F every bigram whose frequency falls below the threshold. Row 4: Obtain space for D by aligning the compressed text T_1. (The processes of Row 2 and Row 3 can be done simultaneously.) Row 5: Scan the text and copy each character preceding an occurrence of X_1 in T_1 to D; sort the characters in D lexicographically.

Lemma 3.4. Given an integer d with d ≥ 1, we can compute the frequencies of the d most frequent bigrams in a text of length n whose characters are drawn from an alphabet of size σ in O(n^2 lg lg lg n / log_σ n) time using d lg(σ^2 n/2) + O(lg n) bits.

(1) replacing all occurrences of a bigram, (2) shifting freed-up text space to the right, (3) finding the bigram with the highest or lowest frequency in F, (4) updating or exchanging an entry in F, and (5) looking up the frequency of a bigram in F.
We erase all characters S[i] with Y[i] = 2^x − 1 and move them to the right of the bit chunk S sequentially. In the subsequent bit chunks, we can use word-packed shifting. The sequential bit shift costs O(|S|) = O(log_{σ_{i+1}} n) time, but in an amortized view, a deletion of a character is done at most once per original text position.
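The deletion of the marked characters within one bit chunk can be sketched as a sequential, pext-style field compaction (the function name is ours; a word-packed implementation would replace the loop by the shifting described above):

```python
def erase_marked(S, Y, x, q):
    """Remove every field i with Y[i] == 2^x - 1 from the packed word S,
       compacting the surviving fields toward the low end.
       Returns the compacted word and the number of surviving fields."""
    out, k = 0, 0
    full = (1 << x) - 1
    for i in range(q):
        if (Y >> (i * x)) & full != full:  # field i not marked for deletion
            out |= ((S >> (i * x)) & full) << (k * x)
            k += 1
    return out, k
```

For instance, erasing the middle field of the three packed 4-bit fields [5, 6, 7] (word 0x765) with field 1 marked leaves the word 0x75 holding [5, 7].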

Lemma 5.1 ([3]). We can sort an array of length n in O(max(n/p, 1) lg^2 n) parallel time with O(p lg n) bits of working space. The work is O(n lg^2 n).

The parallel sorting allows us to state Lemma 2.2 in the following way:

Lemma 5.2. Given an integer d with d ≥ 1, we can compute the frequencies of the d most frequent bigrams in a text of length n whose characters are drawn from an alphabet of size σ in O(max(n, d) max(n/p, 1) lg^2 d / d) time using 2d lg(σ^2 n/2) + O(p lg n) bits. The work is O(max(n, d) n lg^2 d / d).

Proof. We follow the computational steps of Lemma 2.2, but (a) divide a scan into p parts, (b) conduct a scan in parallel but a binary search sequentially, and (c) use Lemma 5.1 for the sorting. This gives us the following time bounds for each operation:

Operation                  | Lemma 2.2 | Parallel
fill F′ with bigrams       | O(d)      | O(max(1, d/p))
sort F′ lexicographically  | O(d lg d) | O(max(d/p, 1) lg^2 n)
compute frequencies of F′  | O(n lg d) | O((n/p) lg d)
merge F′ with F            | O(d lg d) | O(max(d/p, 1) lg^2 n)

Re-Pair terminates after m < n/2 turns such that T_m ∈ Σ_m^+ contains no bigram occurring more than once.
We merge F′ with F into one table and sort it with respect to the frequencies while discarding duplicates, such that F stores the d most frequent bigrams in T[1..jd]. This sorting step can be done in O(d lg d) time. Finally, we clear F′ and are done with S_j. After the final merge step, we obtain the d most frequent bigrams of T stored in F. Since each of the O(n/d) merge steps takes O(d lg d + n lg d) time, we need O(max(d, n) · (n lg d)/d) time overall. For d ≥ n, we can build a large frequency table and perform one scan to count the frequencies of all bigrams in T. This scan and the final sorting with respect to the counted frequencies can be done in O(n lg n) time.
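The chunk-and-merge computation of the d most frequent bigrams can be sketched as follows (the function name is ours; frequencies are counted naively over the whole text, ignoring the non-overlapping counting of runs):

```python
def d_most_frequent_bigrams(T, d):
    """Process T in chunks of d starting positions; for each chunk, count the
       frequency in the whole text of every bigram starting in the chunk, then
       merge with the current candidate table F, keeping the d most frequent.
       Discarding is safe: a discarded bigram has d globally more frequent ones."""
    F = {}                               # candidate table: bigram -> global frequency
    n = len(T)
    for start in range(0, n - 1, d):
        chunk = {T[i:i + 2] for i in range(start, min(start + d, n - 1))}
        Fp = {bg: sum(1 for i in range(n - 1) if T[i:i + 2] == bg)
              for bg in chunk}           # global frequencies of the chunk's bigrams
        F.update(Fp)                     # frequencies are global, so update is safe
        F = dict(sorted(F.items(), key=lambda kv: -kv[1])[:d])  # keep top d
    return F
```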
However, since this decrease is accompanied by a replacement of an occurrence of bc, we obtain O(n^2) total time by charging each text position with O(n) time for a linear search in F. With the same argument, we can bound the total time for sorting the characters in D by O(n^2): since we spend O(h lg h) time on sorting h characters preceding or succeeding a replaced character, and O(f_k) = O(n) time on swapping a sufficiently large new bigram composed of X_{i+1} and a character of Σ_{i+1} with a bigram of lowest frequency in F, we charge each text position again with O(n) time. Putting all time bounds together leads to the main result of this article:

Theorem 2.3. We can compute Re-Pair on a string of length n in O(n^2) time with max((n/c) lg n, n⌈lg σ_m⌉) + O(lg n) bits of working space including the text space, where c ≥ 1 is a fixed constant and σ_m is the number of terminal and non-terminal symbols.

Table 1: Experimental evaluation of our implementation described in Sect. 2.5. Table entries are running times in seconds. The last line is the benchmark on the unary string aa···a.
2^k with an integer k ≥ 1, since the text contains only one bigram with a frequency larger than two in each round. Replacing this bigram in the text makes F

Table 2: Characteristics of our data sets. The numbers of turns and rounds are given for each of the prefix sizes 128, 256, 512, and 1024 KiB of the respective data sets. The number of turns, reflecting the number of non-terminals, is given in units of thousands. The turns of the unary string aa···a are given in plain units (not divided by a thousand).