Compressed Communication Complexity of Hamming Distance

We consider the communication complexity of the Hamming distance of two strings. Bille et al. [SPIRE 2018] considered the communication complexity of the longest common prefix (LCP) problem in the setting where the two parties have their strings in a compressed form, i.e., represented by the Lempel-Ziv 77 factorization (LZ77) with/without self-references. We present a randomized public-coin protocol for a joint computation of the Hamming distance of two strings represented by LZ77 without self-references. While our scheme is heavily based on Bille et al.'s LCP protocol, our complexity analysis is original and uses Crochemore's C-factorization and Rytter's AVL-grammar. As a byproduct, we also show that LZ77 with/without self-references is not monotonic, in the sense that its size can increase by a factor of 4/3 when a prefix of the string is removed.


Introduction
Communication complexity, first introduced by Yao [15], is a well-studied sub-field of complexity theory which aims at quantifying the total amount of communication (bits) between multiple parties who separately hold partial inputs of a function f. The goal of the k (≥ 2) parties is to jointly compute the value of f(X_1, ..., X_k), where X_i denotes the partial input that the i-th party holds. Communication complexity studies lower bounds and upper bounds for the communication cost of a joint computation of a function f. Due to the rapidly increasing amount of distributed computing tasks, communication complexity has gained importance in today's highly digitalized society. This paper deals with the most basic and common setting where two parties, Alice and Bob, separately hold partial inputs A and B and perform a joint computation of f(A, B) for a function f following a specified protocol.
We focus on the communication complexity of string problems, where the inputs A and B are strings over an alphabet Σ. Communication complexity of string problems has played a critical role in the space lower bound analysis of several streaming problems, including Hamming/edit/swap distances [3], pattern matching with k mismatches [12], parameterized pattern matching [7], dictionary matching [6], and quasi-periodicity [5].
Bille et al. [2] were the first to consider the communication complexity of the longest common prefix (LCP) problem in the setting where the two parties have their strings in a compressed form, i.e., represented by the Lempel-Ziv 77 factorization (LZ77) [16] with/without self-references. Bille et al. [2] proposed a randomized public-coin protocol for the LCP problem with O(log z_ℓ) communication rounds and O(log ℓ) total bits of communication, where ℓ denotes the length of the LCP of the two strings A and B, and z_ℓ denotes the size of the non self-referencing LZ77 factorization of the LCP A[1..ℓ]. In addition, Bille et al. [2] showed randomized public-coin protocols for the LCP problem in the self-referencing setting. In this paper, we consider the communication complexity of the Hamming distance of two strings of equal length, which are represented in a compressed form. We present a randomized public-coin protocol for a joint computation of the Hamming distance of two strings represented by non self-referencing LZ77, with O(d log z) communication rounds and O(d log ℓ_max) total bits of communication, where d is the Hamming distance between A and B, z is the size of the LZ77 factorization of string A, and ℓ_max is the largest gap between two adjacent mismatching positions between A and B. While our scheme is heavily based on Bille et al.'s LCP protocol, our complexity analysis is original and uses Crochemore's C-factorization [4] and Rytter's AVL-grammar [13].
Further, as a byproduct of our result for the Hamming distance problem, we also show that LZ77 with/without self-references is non-monotonic. For a compression algorithm A, let A(S) denote the size of the compressed representation of string S by A. We say that a compression algorithm A is monotonic if A(S[1..j]) ≤ A(S) for any 1 ≤ j < |S| and A(S[i..|S|]) ≤ A(S) for any 1 < i ≤ |S|, and we say it is non-monotonic otherwise. It is clear that LZ77 with/without self-references satisfies the first property; however, to our knowledge the second property has not been studied for the LZ77 factorizations. We prove that LZ77 with/without self-references is non-monotonic by giving a family of strings such that removing a prefix of length between 1 and √n increases the number of factors in the LZ77 factorization by a factor of 4/3, where n denotes the string length. We also show that in the worst case the number of factors in the non self-referencing LZ77 factorization of any suffix of any string S of length n can be larger than that of S by at most a factor of O(log n).
Monotonicity of compression algorithms and string repetitiveness measures has gained recent attention. Lagarde and Perifel [11] showed that Lempel-Ziv 78 compression [17] is non-monotonic by showing that removing the first character of a string can increase the size of the compression by a factor of Ω(log n). The recently proposed repetitiveness measure called the substring complexity δ is known to be monotonic [10]. Kociumaka et al. [10] posed an open question whether the smallest bidirectional macro scheme size b [14] or the smallest string attractor size γ [8] is monotonic.

Strings
Let Σ be an alphabet of size σ. An element of Σ* is called a string. The length of a string S is denoted by |S|. The empty string ε is the string of length 0, namely, |ε| = 0. The i-th character of a string S is denoted by S[i] for 1 ≤ i ≤ |S|, and the substring of a string S that begins at position i and ends at position j is denoted by S[i..j] for 1 ≤ i ≤ j ≤ |S|. For convenience, let S[i..j] = ε if j < i. Substrings S[1..j] and S[i..|S|] are respectively called a prefix and a suffix of S. For simplicity, let S[..j] denote the prefix of S ending at position j and S[i..] the suffix S[i..|S|] of S beginning at position i. A suffix S[j..] with j > 1 is called a proper suffix of S.
For strings X and Y, let lcp(X, Y) denote the length of the longest common prefix of X and Y. The Hamming distance d_H(X, Y) of two strings X, Y of equal length is the number of positions where the underlying characters differ between X and Y, namely, d_H(X, Y) = |{i : 1 ≤ i ≤ |X|, X[i] ≠ Y[i]}|.
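The two functions above can be written out directly; the following is a minimal Python sketch of lcp and d_H:

```python
def lcp(X, Y):
    """Length of the longest common prefix of X and Y."""
    n = 0
    for x, y in zip(X, Y):
        if x != y:
            break
        n += 1
    return n

def hamming(X, Y):
    """Hamming distance of two equal-length strings."""
    assert len(X) == len(Y)
    return sum(1 for i in range(len(X)) if X[i] != Y[i])

print(lcp("abcab", "abcba"))      # 3
print(hamming("abcab", "abcba"))  # 2
```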

Lempel-Ziv 77 factorizations
Of the many versions of the Lempel-Ziv 77 factorization [16], all of which divide a given string in a greedy left-to-right manner, the main tool we use is the non self-referencing LZ77, which is formally defined as follows:

Definition 1 (Non self-referencing LZ77 factorization). The non self-referencing LZ77 factorization of string S, denoted LZN(S), is a factorization S = f_1 ⋯ f_zn that satisfies the following: Let u_i denote the beginning position of each factor f_i in the factorization. (1) If the character S[u_i] does not occur in S[1..u_i − 1], then f_i = S[u_i]. (2) Otherwise, f_i is the shortest prefix of S[u_i..] that does not occur in S[1..u_i − 1].

Intuitively, each factor f_i in LZN(S) is either a fresh letter, or the shortest prefix of S[u_i..] that does not occur earlier in the string. This means that self-referencing is not allowed in LZN(S), namely, the previous occurrence of the copied part f_i[1..|f_i| − 1] must lie entirely within S[1..u_i − 1] and cannot overlap with f_i itself.
The size zn(S) of LZN(S) is the number zn of factors in LZN(S). We encode each factor f_i by a triple (s_i, p_i, α_i), where s_i is the starting position of the left-most previous occurrence of f_i[1..p_i − 1], p_i is the length of f_i, and α_i = f_i[p_i] is the last character of f_i. The self-referencing counterpart is defined as follows:

Definition 2 (Self-referencing LZ77 factorization). The self-referencing LZ77 factorization of string S, denoted LZS(S), is a factorization S = g_1 ⋯ g_zs that satisfies the following: Let v_i denote the beginning position of each factor g_i in the factorization. (1) If the character S[v_i] does not occur in S[1..v_i − 1], then g_i = S[v_i]. (2) Otherwise, g_i is the shortest prefix of g_i ⋯ g_zs that does not have an occurrence beginning in S[1..v_i − 1].

Intuitively, each factor g_i of LZS(S) is either a fresh letter, or the shortest prefix of g_i ⋯ g_zs that does not have a previous occurrence beginning in S[1..v_i − 1]. This means that self-referencing is allowed in LZS(S), namely, the left-most previous occurrence of (the copied part of) each factor g_i may overlap with g_i itself.
The size zs(S) of LZS(S) is the number zs of factors in LZS(S).
Likewise, we encode each factor g_i by a triple (t_i, q_i, β_i), where t_i is the starting position of the left-most previous occurrence of g_i[1..q_i − 1], q_i is the length of g_i, and β_i = g_i[q_i] is the last character of g_i.
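As an illustration of Definition 1 (a naive quadratic sketch, not an algorithm from the paper), the factors of LZN(S) can be computed by greedily growing, at each position, the shortest prefix with no occurrence in the already-processed prefix:

```python
def lzn(S):
    """Non self-referencing LZ77 factorization LZN(S) of Definition 1.

    Each factor is the shortest prefix of the remaining suffix that does
    not occur in the part of S strictly before the factor; if even the
    whole remaining suffix occurs before, it is emitted as the last factor.
    """
    factors = []
    i = 0
    n = len(S)
    while i < n:
        length = 1
        # grow the candidate factor while it still occurs in S[0:i]
        while i + length <= n and S[i:i + length] in S[:i]:
            length += 1
        length = min(length, n - i)  # clamp the final factor to the string end
        factors.append(S[i:i + length])
        i += length
    return factors

print(lzn("abaabab"))  # ['a', 'b', 'aa', 'bab'], so zn = 4
```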

Communication complexity model
Our approach is based on the standard communication complexity model of Yao [15] between two parties:
• The parties are Alice and Bob;
• The problem is a function f : X × Y → Z for arbitrary sets X, Y, Z;
• Alice has instance x ∈ X and Bob has instance y ∈ Y;
• The goal of the two parties is to output f(x, y) for a pair (x, y) of instances by a joint computation;
• The joint computation (i.e., the communication between Alice and Bob) follows a specified protocol P.
The communication complexity [15] usually refers merely to the total number of bits that need to be transferred between Alice and Bob to compute f(x, y). In this paper, we follow Bille et al.'s model [2], where the communication complexity is evaluated by a pair ⟨r, b⟩ of the number of communication rounds r and the total number of bits b exchanged in the communication.
In a (Monte-Carlo) randomized public-coin protocol, each party (Alice/Bob) can access a shared, infinitely long sequence of independent random coin tosses. The requirement is that the output has to be correct for every pair of inputs with probability at least 1 − ε for some 0 < ε < 1/2, where the probability is over the shared random sequence of coin tosses. We remark that one can reduce the error rate to an arbitrarily small constant by paying a constant-factor penalty in the communication complexity. Note that the public-coin model differs from the randomized private-coin model, where the parties do not share a common random sequence and can only use their own random sequences. In a deterministic protocol, every computation is performed without random sequences.
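As a concrete illustration of the public-coin model (a textbook Karp-Rabin fingerprinting example, not the protocol of Bille et al.), Alice and Bob can test string equality with only O(log n) communicated bits: both derive the same random evaluation point from the shared coin sequence, and Alice sends only her fingerprint.

```python
import random

P = (1 << 61) - 1  # a Mersenne prime; fingerprints are numbers mod P

def fingerprint(S, r):
    """Karp-Rabin fingerprint of S at evaluation point r (mod P)."""
    h = 0
    for c in S:
        h = (h * r + ord(c)) % P
    return h

# The shared public coins: both parties deterministically derive the same r.
public_coins = random.Random(2021)
r = public_coins.randrange(1, P)

A = "mississippi"  # Alice's string
B = "mississippi"  # Bob's string

# Alice sends fingerprint(A, r): O(log P) bits in a single round.
# For A != B of length n, the fingerprints collide with probability <= n/P.
print(fingerprint(A, r) == fingerprint(B, r))  # True
```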

Joint computation of compressed string problems
In this paper, we also consider the communication complexity of the Hamming distance problem between two compressed strings of equal length, which are compressed by LZ77 without self-references.

Problem 1 (Hamming distance with non self-referencing LZ77).
Alice's input: LZN(A) for string A of length n.
Bob's input: LZN(B) for string B of length n.

Goal: Both Alice and Bob obtain the value of d H (A, B).
The following LCP problem for two strings compressed by non self-referencing LZ77 has been considered by Bille et al. [2].

Problem 2 (LCP with non self-referencing LZ77).
Alice's input: LZN(A) for string A.

In Section 3, we present our protocol for Problem 1 of jointly computing the Hamming distance of two strings compressed by non self-referencing LZ77. The scheme itself is a simple application of the LCP protocol of Theorem 1 for non self-referencing LZ77, but our communication complexity analysis is based on non-trivial combinatorial properties of the LZ77 factorization which, to our knowledge, were not previously known.

Compressed communication complexity of Hamming distance

On the sizes of non self-referencing LZ77 factorization of suffixes
Our next question is how large the zn(A[i_k + 1..]) term in Lemma 1 can be in comparison to zn(A). To answer this question, we consider the following general measure: for any integer n > 1, let ζ(n) = max { zn(S')/zn(S) : S is a string of length n and S' is a proper suffix of S }.
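For small n, the measure ζ(n), i.e., the maximum of zn(S')/zn(S) over all length-n strings S and proper suffixes S', can be explored by brute force. The sketch below assumes the naive zn computation and restricts the search to a binary alphabet for speed:

```python
from itertools import product

def zn(S):
    """Number of factors in the non self-referencing LZ77 factorization."""
    count = 0
    i = 0
    n = len(S)
    while i < n:
        length = 1
        while i + length <= n and S[i:i + length] in S[:i]:
            length += 1
        i += min(length, n - i)
        count += 1
    return count

def zeta(n, alphabet="ab"):
    """max of zn(S[i:]) / zn(S) over strings S of length n and 1 <= i < n."""
    best = 0.0
    for tup in product(alphabet, repeat=n):
        S = "".join(tup)
        z = zn(S)
        for i in range(1, n):
            best = max(best, zn(S[i:]) / z)
    return best

print(zeta(8))  # ratios above 1 would witness non-monotonicity
```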

Lower bound for ζ(n)
In this subsection, we present a family of strings S such that zn(S[i..]) > zn(S) for some suffix S[i..], namely ζ(n) > 1. More specifically, we show the following:

Lemma 2. For the family of strings S described below, zn(S[2..])/zn(S) tends to 4/3 as the alphabet size goes to infinity; in particular, ζ(n) ≥ 4/3 − o(1).

Proof. For simplicity, we consider an integer alphabet {0, 1, ..., σ} of size σ + 1. Consider the string S and its proper suffix S[2..], whose non self-referencing LZ77 factorizations are as follows: observe that after the first occurrence of character σ, each factor of LZN(S) is divided into two smaller factors in LZN(S[2..]), so the ratio zn(S[2..])/zn(S) tends to 4/3 as σ goes to infinity. We finally remark that |S| = n = Θ(σ²), which in turn means σ = Θ(√n).
Remark 1. One can generalize the string S of Lemma 2 by replacing 0 with 0^h for an arbitrarily fixed 1 < h ≤ a·σ, for any constant a. The upper limit a·σ comes from the fact that the number of 0's in the original string S is exactly σ². Since |S| = n = Θ(σ²), replacing 0 by 0^h with h < a·σ keeps the string length within O(n). This implies that one can obtain the asymptotic lower bound 4/3 for any suffix S[h..], i.e., for suffixes of length roughly at least n − √n.
Note also that the factorizations shown in Lemma 2 coincide with the self-referencing counterparts LZS(S) and LZS(S[2..]), respectively. The next corollary immediately follows from Lemma 2 and Remark 1.

Corollary 1. LZ77 with/without self-references is non-monotonic: for both variants, the number of factors can grow by a factor of 4/3 − o(1) when a prefix is removed.

Upper bound for ζ(n)
Next, we consider an upper bound for ζ(n). The tools we use here are the C-factorization [4] without self-references and a grammar compression called the AVL-grammar [13].
Definition 3 (Non self-referencing C-factorization). The non self-referencing C-factorization of string S, denoted CN(S), is a factorization S = c_1 ⋯ c_cn that satisfies the following: Let w_i denote the beginning position of each factor c_i in the factorization. (1) If S[w_i] occurs in S[1..w_i − 1], then let y_i be the largest integer such that S[w_i..w_i + y_i] occurs in S[1..w_i − 1]. (2) Otherwise, let y_i = 0. Then, c_i = S[w_i..w_i + y_i] for each 1 ≤ i ≤ cn.
The size cn(S) of CN(S) is the number cn of factors in CN(S). We also use the next lemma in our upper bound analysis for ζ(n).

Lemma 4. For any string S, cn(S) ≤ 2 zn(S).

Proof. Suppose that there are two consecutive factors c_i, c_{i+1} of CN(S) and a factor f_j of LZN(S) such that c_i, c_{i+1} are completely contained in f_j and the ending position of c_{i+1} is less than the ending position of f_j. Since c_i c_{i+1} is a substring of f_j[..|f_j| − 1], and f_j[..|f_j| − 1] occurs in S[1..u_j − 1], the factor c_i could be extended to the longer previously occurring prefix c_i c_{i+1}, contradicting the maximality of c_i. Thus the only possible case is that c_i c_{i+1} occurs as a suffix of f_j. Note that in this case c_{i−1} cannot occur inside f_j by the same reasoning as above. Therefore, at most two consecutive factors of CN(S) can occur completely inside each factor of LZN(S). This leads to cn(S) ≤ 2 zn(S).
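The two bounds from this section, cn(S) ≥ zn(S) and cn(S) ≤ 2 zn(S), can be checked empirically with naive implementations of both factorizations. This is only an illustration and assumes the natural reading of Definitions 1 and 3:

```python
def zn_factors(S):
    """LZN factors: shortest prefix not occurring before (Definition 1)."""
    factors, i, n = [], 0, len(S)
    while i < n:
        length = 1
        while i + length <= n and S[i:i + length] in S[:i]:
            length += 1
        length = min(length, n - i)
        factors.append(S[i:i + length])
        i += length
    return factors

def cn_factors(S):
    """CN factors: longest prefix occurring before, else one fresh letter
    (Definition 3)."""
    factors, i, n = [], 0, len(S)
    while i < n:
        length = 1
        if S[i] in S[:i]:
            while i + length < n and S[i:i + length + 1] in S[:i]:
                length += 1
        factors.append(S[i:i + length])
        i += length
    return factors

print(zn_factors("aabab"))  # ['a', 'ab', 'ab'], so zn = 3
print(cn_factors("aabab"))  # ['a', 'a', 'b', 'ab'], so cn = 4 and 3 <= 4 <= 6
```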
An AVL-grammar of a string S is a kind of straight-line program (SLP), which is a context-free grammar in Chomsky normal form that generates only S. The parse tree of an AVL-grammar is an AVL-tree [1] and therefore its height is O(log n), where n is the length of S. Let avl(S) denote the size (i.e., the number of productions) of the AVL-grammar for S. Basically, the AVL-grammar for S is constructed from the C-factorization of S by introducing at most O(log n) new productions for each factor in the C-factorization, so that avl(S) = O(cn(S) log n). The next lemma concerns concatenations of AVL-grammars.

Lemma 5. Given AVL-grammars for strings X and Y of respective sizes avl(X) and avl(Y), one can build an AVL-grammar for the concatenated string XY of size avl(X) + avl(Y) + O(h), where h is the height of the taller of the two parse trees.

Proof. Rytter [13] showed how to build an AVL-grammar for the concatenated string XY of size avl(X) + avl(Y) + O(h). This procedure is based on a folklore algorithm (cf. [9]) that concatenates two given AVL-trees of height h with O(h) node rotations. In the concatenation procedure of AVL-grammars, O(1) new productions are produced per node rotation; therefore, O(h) new productions are produced in the concatenation operation.
We are now ready to show the upper bound.

Lemma 6. For any string S of length n and any proper suffix S' of S, zn(S')/zn(S) = O(log n); that is, ζ(n) = O(log n).

Proof. Suppose we have the AVL-grammar of a string S of length n. It contains avl(S) productions, and the height of its parse tree is h = O(log n) since an AVL-tree is a balanced binary tree. For any proper suffix S' = S[i..] of S with 1 < i ≤ n, we split the AVL-grammar into two AVL-grammars, one for the prefix S[1..i − 1] and the other for the suffix S[i..n]. We ignore the former and concentrate on the latter for our analysis. Since split operations on a given AVL-grammar can be performed in a similar manner to the aforementioned concatenation operations, we have that avl(S') ≤ avl(S) + a log n for some constant a > 0. Now it follows from Lemma 3, Lemma 4, Lemma 5, and the fact that the size cn of the C-factorization of any string is no more than the number of productions in any SLP generating the same string [13], that

zn(S') ≤ cn(S') ≤ avl(S') ≤ avl(S) + a log n ≤ b cn(S) log n ≤ 2b zn(S) log n,

where b > 0 is a constant. This gives us zn(S')/zn(S) = O(log n) for any string S of length n and any of its proper suffixes S'.
Since the size zn(S) of the non self-referencing LZ77 factorization of any string S of length n is at least log n, the next corollary is immediate from Lemma 6.

Corollary 2. For any string S and any proper suffix S' of S, zn(S')/zn(S) = O(zn(S)).

Compressed communication complexity of Hamming distance
Now we have the main result of this section.

Conclusions and open questions
This paper showed a randomized public-coin protocol for a joint computation of the Hamming distance of two compressed strings. Our Hamming distance protocol relies on Bille et al.'s LCP protocol for two strings that are compressed by non self-referencing LZ77, while our communication complexity analysis is based on new combinatorial properties of the non self-referencing LZ77 factorization.
As further research, it would be interesting to consider the communication complexity of the Hamming distance problem using self-referencing LZ77. The main question in this regard is whether zs(S[i..]) = O(poly(zs(S))) holds for any suffix S[i..] of any string S. In the case of non self-referencing LZ77, zn(S[i..]) = O(zn(S)²) holds due to Corollary 2.
The self-referencing protocols of Bille et al. [2] achieve either (i) O(log z_ℓ + log log ℓ) communication rounds and O(log ℓ) total bits of communication, or (ii) O(log z_ℓ) communication rounds and O(log ℓ + log log log n) total bits of communication, where z_ℓ denotes the size of the self-referencing LZ77 factorization of the LCP A[1..ℓ] and n = |A|.
Bob's input: LZN(B) for string B.

Goal: Both Alice and Bob obtain the value of lcp(A, B).

Bille et al. proposed the following protocol for a joint computation of the LCP of two strings compressed by non self-referencing LZ77:

Theorem 1 ([2]). Suppose that the alphabet Σ and the length n of string A are known to both Alice and Bob. Then, there exists a randomized public-coin protocol which solves Problem 2 with communication complexity ⟨O(log z_ℓ), O(log ℓ)⟩, where ℓ = lcp(A, B) and z_ℓ = zn(A[1..ℓ]).

In this section we show a Monte-Carlo randomized protocol for Problem 1, which asks for the Hamming distance d_H(A, B) of strings A and B that are compressed by non self-referencing LZ77. Our protocol achieves ⟨O(d log z), O(d log ℓ_max)⟩ communication complexity, where d = d_H(A, B), z = zn(A), and ℓ_max is the largest value returned by the sub-protocol for the LCP problem on two strings compressed by non self-referencing LZ77. The basic idea is to apply the so-called Kangaroo jumping method: if d is the number of mismatching positions between A and B, then one can compute d = d_H(A, B) with at most d + 1 LCP queries. More specifically, let 1 ≤ i_1 < ⋯ < i_d ≤ n be the sequence of mismatching positions between A and B. By using the protocol of Theorem 1 as a black box, and also using the fact that zn(S) ≥ zn(S[1..j]) for any prefix S[1..j] of any string S, we immediately obtain the following:

Lemma 1. Suppose that the alphabet Σ and the length n of strings A and B are known to both Alice and Bob. Then, there exists a randomized public-coin protocol which solves Problem 1 with communication complexity ⟨O(∑_{k=1}^{d} log zn(A[i_k + 1..])), O(d log ℓ_max)⟩, where ℓ_max = max_{1<k≤d} {i_k − i_{k−1} + 1}.
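Stripped of compression and randomization, the Kangaroo jumping computation looks as follows. This is a plain-string illustration; in the actual protocol each lcp call would be answered by the LCP sub-protocol of Theorem 1 on the compressed remainders:

```python
def lcp(X, Y):
    """Length of the longest common prefix of X and Y."""
    n = 0
    for x, y in zip(X, Y):
        if x != y:
            break
        n += 1
    return n

def hamming_by_kangaroo(A, B):
    """Hamming distance via at most d + 1 LCP queries (Kangaroo jumping)."""
    assert len(A) == len(B)
    d, i = 0, 0
    while i < len(A):
        i += lcp(A[i:], B[i:])  # jump over the next matching run
        if i < len(A):          # position i (0-based) is a mismatch
            d += 1
            i += 1
    return d

print(hamming_by_kangaroo("abcdefgh", "axcdefgy"))  # 2
```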

Example 3. For S = abaababaabaabaabaabaabb, CN(S) = a | b | a | ab | abaab | aaba | abaabaab | b, and its size is 8.

The difference between LZN(S) and CN(S) is that while each factor f_i in LZN(S) is the shortest prefix of S[u_i..] that does not occur in S[1..u_i − 1], each factor c_i in CN(S) is the longest prefix of S[w_i..] that occurs in S[1..w_i − 1]. This immediately leads to the next lemma.

Lemma 3. For any string S, cn(S) ≥ zn(S).

Theorem 2. Suppose that the alphabet Σ and the length n of strings A and B are known to both Alice and Bob. Then, there exists a randomized public-coin protocol which solves Problem 1 with communication complexity ⟨O(d log zn), O(d log ℓ_max)⟩, where zn = zn(A) and ℓ_max = max_{1<k≤d} {i_k − i_{k−1} + 1}.