Abstract
We consider the communication complexity of computing the Hamming distance of two strings. Bille et al. [SPIRE 2018] considered the communication complexity of the longest common prefix (LCP) problem in the setting where the two parties have their strings in a compressed form, i.e., represented by the Lempel-Ziv 77 factorization (LZ77) with/without self-references. We present a randomized public-coin protocol for a joint computation of the Hamming distance of two strings represented by LZ77 without self-references. Although our scheme is heavily based on Bille et al.’s LCP protocol, our complexity analysis is original and makes use of Crochemore’s C-factorization and Rytter’s AVL-grammar. As a byproduct, we also show that LZ77 with/without self-references is not monotonic, in the sense that the factorization size can increase by a factor of 4/3 when a prefix of the string is removed.
1. Introduction
Communication complexity, first introduced by Yao [1], is a well-studied sub-field of complexity theory which aims at quantifying the total amount of communication (bits) among multiple parties who separately hold partial inputs of a function f. The goal of the parties is to jointly compute the value of f on their combined inputs, where each party holds only its own partial input. Communication complexity studies lower and upper bounds on the communication cost of such a joint computation. With the rapid growth of distributed computing tasks, communication complexity has become increasingly important. This paper deals with the most basic and common setting, where two parties, Alice and Bob, separately hold partial inputs A and B and perform a joint computation of the value of a function f on the pair (A, B) following a specified protocol.
We focus on the communication complexity of string problems, where the inputs A and B are strings over an alphabet Σ. Communication complexity of string problems has played a critical role in space lower bound analyses of several streaming problems, including Hamming/edit/swap distances [2], pattern matching with k mismatches [3], parameterized pattern matching [4], dictionary matching [5], and quasi-periodicity [6].
Bille et al. [7] were the first to consider the communication complexity of the longest common prefix (LCP) problem in the setting where the two parties have their strings in a compressed form, i.e., represented by the Lempel-Ziv 77 factorization (LZ77) [8] with/without self-references. Bille et al. [7] proposed a randomized public-coin protocol for the LCP problem whose number of communication rounds and total number of transferred bits are bounded in terms of ℓ, the length of the LCP of the two strings A and B, and the size of the non-self-referencing LZ77 factorization of the LCP. In addition, Bille et al. [7] showed a randomized public-coin protocol for the LCP problem with
- (i)
- communication rounds and total bits of communication, or
- (ii)
- communication rounds and total bits of communication,
where denotes the size of the self-referencing LZ77 factorization of the LCP and .
In this paper, we consider the communication complexity of the Hamming distance of two strings of equal length that are represented in a compressed form. We present a randomized public-coin protocol for a joint computation of the Hamming distance of two strings represented by non-self-referencing LZ77, whose number of communication rounds and total number of transferred bits are bounded in terms of the following parameters: d, the Hamming distance between A and B; z, the size of the LZ77 factorization of string A; and the largest gap between two adjacent mismatching positions of A and B (if the first/last characters of A and B are equal, then we can prepend/append distinct terminal symbols to A and B and subtract 2 from the computed distance). Although our scheme is heavily based on Bille et al.’s LCP protocol, our complexity analysis is original and makes use of Crochemore’s C-factorization [9] and Rytter’s AVL-grammar [10].
Furthermore, as a byproduct of our result for the Hamming distance problem, we also show that LZ77 with/without self-references is non-monotonic. For a compression algorithm, consider the size of the compressed representation of a string S produced by that algorithm. We say that a compression algorithm is monotonic if this size never increases when S is replaced by any of its prefixes, and likewise when S is replaced by any of its suffixes; we say it is non-monotonic otherwise. It is clear that LZ77 with/without self-references satisfies the first (prefix) property; however, to our knowledge, the second (suffix) property has not been studied for the LZ77 factorizations. We prove that LZ77 with/without self-references is non-monotonic by giving a family of strings such that removing a prefix increases the number of factors in the LZ77 factorization by a factor of 4/3. We also show that, in the worst case, the number of factors in the non-self-referencing LZ77 factorization of any suffix of any string S of length n can be larger than that of S by at most a logarithmic factor.
Monotonicity of compression algorithms and string repetitiveness measures has gained recent attention. Lagarde and Perifel [11] showed that Lempel-Ziv 78 compression [12] is non-monotonic by showing that removing the first character of a string can drastically increase the size of the compression. The recently proposed repetitiveness measure called the substring complexity is known to be monotonic [13]. Kociumaka et al. [13] posed an open question of whether the smallest bidirectional macro scheme size b [14] or the smallest string attractor size [15] is monotonic. This was then answered by Mantaci et al. [16], who showed that the smallest string attractor size is non-monotonic.
2. Preliminaries
2.1. Strings
Let Σ be an alphabet of size σ = |Σ|. An element of Σ* is called a string. The length of a string S is denoted by |S|. The empty string ε is the string of length 0, namely |ε| = 0. The i-th character of a string S is denoted by S[i] for 1 ≤ i ≤ |S|, and the substring of a string S that begins at position i and ends at position j is denoted by S[i..j] for 1 ≤ i ≤ j ≤ |S|. For convenience, let S[i..j] = ε if i > j. Substrings S[1..j] and S[i..|S|] are respectively called a prefix and a suffix of S. For simplicity, let S[..j] denote the prefix of S ending at position j and S[i..] the suffix of S beginning at position i. A suffix S[i..] with i ≥ 2 is called a proper suffix of S.
For strings X and Y, let lcp(X, Y) denote the length of the longest common prefix (LCP) of X and Y, namely lcp(X, Y) = max{ℓ ≥ 0 : X[1..ℓ] = Y[1..ℓ]}. The Hamming distance of two strings X and Y of equal length is the number of positions at which the underlying characters of X and Y differ, namely Ham(X, Y) = |{i : 1 ≤ i ≤ |X|, X[i] ≠ Y[i]}|.
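For concreteness, the two measures can be computed naively as in the following short Python sketch (an illustration of ours; the function names are not from the original text):

```python
def lcp_len(x, y):
    """Length of the longest common prefix of x and y."""
    l = 0
    while l < min(len(x), len(y)) and x[l] == y[l]:
        l += 1
    return l

def hamming(x, y):
    """Hamming distance of two strings of equal length."""
    assert len(x) == len(y)
    return sum(1 for a, b in zip(x, y) if a != b)
```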
2.2. Lempel-Ziv 77 Factorizations
Of the many versions of the Lempel-Ziv 77 factorization [8] that divide a given string in a greedy left-to-right manner, the main tool we use is the non-self-referencing LZ77, which is formally defined as follows:
Definition 1
(Non-self-referencing LZ77 factorization). The non-self-referencing LZ77 factorizationof string S, denoted , is a factorization that satisfies the following: Let denote the beginning position of each factor in the factorization , that is, . (1) If and , then for any position in S, let . (2) Otherwise, let . Then, for each .
Intuitively, each factor in the factorization is either a fresh letter, or the shortest prefix of the remaining suffix that does not have a previous occurrence in the already factorized prefix. This means that self-referencing is not allowed, namely, no previous occurrence of a factor may overlap the factor itself.
The size of is the number of factors in .
We encode each factor by a triple recording the starting position of the left-most previous occurrence of the factor without its last character, the length of that copied part, and the last character of the factor.
Example 1.
For , and it can be represented as . The size of is 7.
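As an illustration, the greedy definition above can be turned into the following naive (quadratic-time) Python sketch; the function name and the triple layout are our own choices, and a production implementation would instead use linear-time methods based on suffix structures.

```python
def lz77_no_self_ref(s):
    """Non-self-referencing LZ77 factorization (naive sketch).

    Each factor is either a fresh letter or the shortest prefix of the
    remaining suffix that has no occurrence lying entirely inside the
    already factorized prefix s[:p].  Factors are reported as triples
    (leftmost previous occurrence or None, copied length, last character).
    """
    factors = []
    p, n = 0, len(s)
    while p < n:
        l, occ = 0, None
        while p + l < n:
            # an occurrence is only valid if it fits entirely inside s[:p]
            cand = s.find(s[p:p + l + 1], 0, p)
            if cand == -1:
                break
            occ, l = cand, l + 1
        if p + l < n:
            factors.append((occ, l, s[p + l]))   # copied part + one fresh char
            p += l + 1
        else:
            factors.append((occ, l, ''))         # string ends inside a copy
            p += l
    return factors

# e.g. lz77_no_self_ref("abaababa") yields 5 factors: a | b | aa | bab | a
```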
The self-referencing counterpart is defined as follows:
Definition 2
(Self-referencing LZ77 factorization). Theself-referencing LZ77 factorizationof string S, denoted , is a factorization that satisfies the following: Let denote the beginning position of each factor in the factorization , that is, . (1) If and , then for any position in S, let . (2) Otherwise, let . Then, for each .
Intuitively, each factor of the factorization is either a fresh letter, or the shortest prefix of the remaining suffix that does not have a previous occurrence beginning in the already factorized prefix. This means that self-referencing is allowed, namely, the left-most previous occurrence of a factor may overlap the factor itself.
The size of is the number of factors in .
Likewise, we encode each factor by a triple recording the starting position of its left-most previous occurrence, the copied length, and the last character of the factor.
Example 2.
For , and it can be represented as . The size of is 6.
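Under the same assumptions as the sketch above, only the search window changes in the self-referencing case: a previous occurrence merely has to start before the current position, so it may overlap the factor being built.

```python
def lz77_self_ref(s):
    """Self-referencing LZ77 factorization (naive sketch)."""
    factors = []
    p, n = 0, len(s)
    while p < n:
        l, occ = 0, None
        while p + l < n:
            cand = s.find(s[p:p + l + 1])   # leftmost occurrence anywhere
            if cand >= p:                   # no occurrence starts before p
                break
            occ, l = cand, l + 1
        if p + l < n:
            factors.append((occ, l, s[p + l]))
            p += l + 1
        else:
            factors.append((occ, l, ''))
            p += l
    return factors
```

For instance, on the string abababab the self-referencing sketch produces 3 factors, while the non-self-referencing one produces 4, reflecting the fact that overlapping references can only make factors longer.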
2.3. Communication Complexity Model
Our approach is based on the standard communication complexity model of Yao [1] between two parties:
- The parties are Alice and Bob;
- The problem is a function for arbitrary sets ;
- Alice has instance and Bob has instance ;
- The goal of the two parties is to output for a pair of instances by a joint computation;
- The joint computation (i.e., the communication between Alice and Bob) follows a specified protocol .
The communication complexity [1] usually refers merely to the total amount of bits that need to be transferred between Alice and Bob to compute . In this paper, we follow Bille et al.’s model [7] where the communication complexity is evaluated by a pair of the number of communication rounds r and the total amount of bits b exchanged in the communication.
In a (Monte-Carlo) randomized public-coin protocol, each party (Alice/Bob) can access a shared, infinitely long sequence of independent random coin tosses. The requirement is that the output must be correct for every pair of inputs with probability at least 1 − ε for some constant ε < 1/2, where the probability is taken over the shared random sequence of coin tosses. We remark that the error rate can be reduced to an arbitrarily small constant by paying a constant-factor penalty in the communication complexity. Please note that the public-coin model differs from the randomized private-coin model, in which the parties do not share a common random sequence and can only use their own private random sequences. In a deterministic protocol, every computation is performed without random sequences.
2.4. Joint Computation of Compressed String Problems
In this paper, we consider the communication complexity of the Hamming distance problem for two strings of equal length that are compressed by LZ77 without self-references.
Problem 1
(Hamming distance with non-self-referencing LZ77).
Alice’s input: for string A of length n.
Bob’s input: for string B of length n.
Goal:Both Alice and Bob obtain the value of .
The following LCP problem for two strings compressed by non-self-referencing LZ77 has been considered by Bille et al. [7].
Problem 2
(LCP with non-self-referencing LZ77).
Alice’s input: for string A.
Bob’s input: for string B.
Goal:Both Alice and Bob obtain the value of .
Bille et al. proposed the following protocol for a joint computation of the LCP of two strings compressed by non-self-referencing LZ77:
Theorem 1
([7]). Suppose that the alphabet Σ and the length n of string A are known to both Alice and Bob. Then, there exists a randomized public-coin protocol which solves Problem 2 with communication complexity , where and .
The basic idea of Bille et al.’s protocol [7] is as follows. In their protocol, the sequences of factors in the non-self-referencing LZ77 factorizations of A and B are regarded as strings over an alphabet whose symbols are the factors. Then, Alice and Bob jointly compute the LCP of these two factor sequences, which gives them the first mismatching factors between the two factorizations. This LCP of the factor sequences is computed by a randomized doubling-then-binary-search protocol. Finally, Alice sends the information about her first mismatching factor to Bob, and he internally computes the LCP of A and B. The total number of bits exchanged is bounded as stated in Theorem 1.
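The doubling-then-binary search can be illustrated by the following single-machine Python sketch, in which both parties are simulated locally and prefixes are compared via Karp-Rabin fingerprints. This is our own simplified illustration of the search pattern (run here directly on the two strings rather than on the compressed factor sequences); in the actual public-coin protocol, the fingerprint base would be drawn from the shared random string and only fingerprints, not prefixes, would be exchanged.

```python
def lcp_by_doubling_then_binary(A, B, base=1_000_003, mod=(1 << 61) - 1):
    """Doubling-then-binary LCP search with Karp-Rabin fingerprints (sketch).

    Fingerprint collisions are ignored here; in the real protocol they are
    exactly what makes the answer correct only with high probability.
    """
    def fp(s, length):
        h = 0
        for c in s[:length]:
            h = (h * base + ord(c)) % mod
        return h

    n = min(len(A), len(B))
    # doubling phase: grow the compared prefix until the fingerprints differ
    hi = 1
    while hi <= n and fp(A, hi) == fp(B, hi):
        hi *= 2
    lo = hi // 2              # prefixes of this length still agreed
    hi = min(hi, n)
    # binary-search phase inside the last doubling window (lo, hi]
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if fp(A, mid) == fp(B, mid):
            lo = mid
        else:
            hi = mid - 1
    return lo
```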
In Section 3, we present our protocol for Problem 1 of jointly computing the Hamming distance of two strings compressed by non-self-referencing LZ77. The scheme itself is a simple application of the LCP protocol of Theorem 1 for non-self-referencing LZ77, but our communication complexity analysis is based on non-trivial combinatorial properties of the LZ77 factorization which, to our knowledge, were not previously known.
3. Compressed Communication Complexity of Hamming Distance
In this section, we show a Monte-Carlo randomized protocol for Problem 1, which asks for the Hamming distance of strings A and B that are compressed by non-self-referencing LZ77. Our protocol achieves the communication complexity stated in Theorem 2 below, where d is the Hamming distance between A and B, z is the size of the non-self-referencing LZ77 factorization of A, and the remaining parameter is the largest value returned by the sub-protocol for the LCP problem on two strings compressed by non-self-referencing LZ77.
The basic idea is to apply the so-called kangaroo jumping method: if d is the number of mismatching positions between A and B, then one can compute the Hamming distance with at most d + 1 LCP queries, each query jumping from one mismatching position to the next. By using the protocol of Theorem 1 as a black box, and using the fact that the size of the non-self-referencing LZ77 factorization of any prefix of a string S is at most that of S, we immediately obtain the following:
Lemma 1.
Suppose that the alphabet Σ and the length n of strings A and B are known to both Alice and Bob. Then, there exists a randomized public-coin protocol which solves Problem 1 with communication complexity , where .
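The kangaroo-jump structure of the protocol behind Lemma 1 can be sketched as follows (a local simulation of ours: in the actual protocol, the inner LCP computation on the remaining suffixes is replaced by the compressed LCP protocol of Theorem 1, executed jointly by Alice and Bob on the factorizations of those suffixes).

```python
def hamming_via_lcp_queries(A, B):
    """Compute the Hamming distance with at most d + 1 LCP queries (sketch)."""
    assert len(A) == len(B)
    d, i, n = 0, 0, len(A)
    while i < n:
        # one "LCP query" on the remaining suffixes A[i:] and B[i:]
        l = 0
        while i + l < n and A[i + l] == B[i + l]:
            l += 1
        i += l              # jump over the matching block
        if i < n:           # A[i] != B[i]: one mismatch found
            d += 1
            i += 1
    return d
```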
3.1. On the Sizes of Non-Self-Referencing LZ77 Factorization of Suffixes
Our next question is how large the term appearing in Lemma 1 can be in comparison to the size of the original factorization. To answer this question, we consider the following general measure: for any length n, take the maximum, over all strings S of length n and all proper suffixes of S, of the ratio between the size of the non-self-referencing LZ77 factorization of the suffix and that of S.
3.1.1. Lower Bound for
In this subsection, we present a family of strings S such that the non-self-referencing LZ77 factorization of some proper suffix of S has more factors than that of S, by a factor approaching 4/3. More specifically, we show the following:
Lemma 2.
The measure defined above is asymptotically lower bounded by 4/3.
Proof.
For simplicity, we consider an integer alphabet of size . Consider the string
and its proper suffix
The non-self-referencing LZ77 factorization of S and are:
Observe that after the first occurrence of character , each factor of is divided into two smaller factors in . Since and , , which tends to as goes to infinity. We finally remark that which in turn means that . □
Remark 1.
One can generalize the string S of Lemma 2 by replacing 0 with for arbitrarily fixed for any constant a. The upper limit comes from the fact that the number of 0’s in the original string S is exactly . Since , replacing 0 by with keeps the string length within . This implies that one can obtain the asymptotic lower bound for any suffix of length roughly up to .
Note also that the factorizations shown in Lemma 2 coincide with the self-referencing counterparts and , respectively. The next corollary immediately follows from Lemma 2 and Remark 1.
Corollary 1.
The Lempel-Ziv 77 factorization with/without self-references is non-monotonic.
3.1.2. Upper Bound for
Next, we consider an upper bound for this measure. The tools we use here are the C-factorization [9] without self-references and a grammar-based compression called the AVL-grammar [10].
Definition 3
(Non-self-referencing C-factorization). The non-self-referencing C-factorizationof string S, denoted , is a factorization that satisfies the following: Let denote the beginning position of each factor in the factorization , that is, . (1) If and , then for any position in S, let . (2) Otherwise, let . Then, for each .
The size of is the number of factors in .
Example 3.
For , and its size is 8.
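A naive sketch of this factorization, written in the same style as the LZ77 sketch in Section 2.2, only has to replace "shortest prefix with no earlier occurrence" by "longest prefix with an earlier occurrence" (again an illustration of ours, not an efficient implementation):

```python
def c_factorization_no_self_ref(s):
    """Non-self-referencing C-factorization (naive sketch).

    Each factor is either a fresh letter or the longest prefix of the
    remaining suffix occurring entirely inside the already factorized
    prefix s[:p]; the corresponding LZ77 factor extends it by one character.
    """
    factors = []
    p, n = 0, len(s)
    while p < n:
        l = 0
        while p + l < n and s.find(s[p:p + l + 1], 0, p) != -1:
            l += 1
        if l == 0:
            factors.append(s[p])          # fresh letter
            p += 1
        else:
            factors.append(s[p:p + l])
            p += l
    return factors

# e.g. c_factorization_no_self_ref("aabab") yields 4 factors (a | a | b | ab),
# one more than the 3 factors produced by the LZ77 sketch on the same string
```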
The difference between the two factorizations is that while each factor in the non-self-referencing LZ77 factorization is the shortest prefix of the remaining suffix that does not occur in the already factorized prefix, each factor in the non-self-referencing C-factorization is the longest prefix of the remaining suffix that occurs in the already factorized prefix. This immediately leads to the next lemma.
Lemma 3.
For any string S, .
We also use the next lemma in our upper bound analysis.
Lemma 4.
For any string S, .
Proof.
Suppose that there are two consecutive factors of and a factor of such that are completely contained in and the ending position of is less than the ending position of . Since is a substring of and has a previous occurrence in , this contradicts that terminated inside .
Thus, the only possible case is that occurs as a suffix of . Please note that in this case cannot occur inside by the same reasoning as above. Therefore, at most two consecutive factors of can occur completely inside of each factor of . This leads to . □
An AVL-grammar of a string S is a kind of straight-line program (SLP), i.e., a context-free grammar in Chomsky normal form that generates only S. The parse-tree of an AVL-grammar is an AVL-tree [17]; therefore, its height is O(log n), where n is the length of S. The size of the AVL-grammar for S is the number of its productions. Basically, the AVL-grammar for S is constructed from the C-factorization of S by introducing O(log n) new productions for each factor in the C-factorization. Thus, the next lemma holds.
Lemma 5
([10]). For any string S of length n, .
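To make the notion of an SLP concrete, the following hypothetical Python snippet encodes a small grammar in Chomsky normal form as a dictionary and expands it; it is only meant to illustrate what "number of productions" refers to, not Rytter's actual AVL-grammar construction, which additionally keeps the parse-tree height-balanced.

```python
# A hypothetical SLP: every nonterminal derives a single character or a
# pair of nonterminals; this grammar (6 productions) generates "abaababa".
slp = {
    "A": "a", "B": "b",
    "X1": ("A", "B"),     # ab
    "X2": ("X1", "A"),    # aba
    "X3": ("X2", "X1"),   # abaab
    "S":  ("X3", "X2"),   # abaababa
}

def expand(grammar, symbol):
    """Expand a nonterminal of an SLP into the string it derives."""
    rhs = grammar[symbol]
    if isinstance(rhs, str):
        return rhs          # terminal production
    left, right = rhs
    return expand(grammar, left) + expand(grammar, right)

assert expand(slp, "S") == "abaababa"
```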
Now we show our upper bound for .
Lemma 6.
The measure defined above is O(log n), where n denotes the string length.
Proof.
Suppose we have two AVL-grammars for strings X and Y. Rytter [10] showed how to build an AVL-grammar for the concatenated string XY by adding O(h) new productions to the two given grammars, where h is the height of the taller of the two parse-trees before the concatenation. This procedure is based on a folklore algorithm (cf. [18]) that concatenates two given AVL-trees of height at most h with O(h) node rotations. In the concatenation procedure of AVL-grammars, a constant number of new productions are produced per node rotation. Therefore, O(h) new productions are produced in total by one concatenation operation.
Suppose we have the AVL-grammar of a string S of length n. The number of its productions is bounded as in Lemma 5, and the height of its parse-tree is O(log n), since an AVL-tree is a balanced binary tree. For any proper suffix of S, we split the AVL-grammar into two AVL-grammars, one for the complementary prefix and the other for the suffix. We ignore the former and concentrate on the latter in our analysis. Since split operations on a given AVL-grammar can be performed in a similar manner to the aforementioned concatenation operations, the AVL-grammar for the suffix has at most O(log n) more productions than the AVL-grammar for S. Now, it follows from Lemma 3, Lemma 4, Lemma 5, and the fact that the size of the C-factorization of any string is no more than the number of productions in any SLP generating the same string [10], that the size of the non-self-referencing LZ77 factorization of any proper suffix of S is larger than that of S by at most an O(log n) factor. This proves the lemma. □
Since the size of the non-self-referencing LZ77 factorization of any string S of length n is at least , the next corollary is immediate from Lemma 6:
Corollary 2.
For any string S and its proper suffix , .
3.2. Compressed Communication Complexity of Hamming Distance
Now we have the main result of this section.
Theorem 2.
Suppose that the alphabet Σ and the length n of strings A and B are known to both Alice and Bob. Then, there exists a randomized public-coin protocol which solves Problem 1 with communication complexity , where and .
Proof.
The protocol of Lemma 1 has rounds. By Corollary 2, we have that . Therefore, , which proves the theorem. □
4. Conclusions and Open Questions
This paper showed a randomized public-coin protocol for a joint computation of the Hamming distance of two compressed strings. Our Hamming distance protocol relies on Bille et al.’s LCP protocol for two strings that are compressed by non-self-referencing LZ77, while our communication complexity analysis is based on new combinatorial properties of non-self-referencing LZ77 factorization.
As future work, it would be interesting to consider the communication complexity of the Hamming distance problem using self-referencing LZ77. The main question in this regard is whether the corresponding relation holds for any suffix of any string S. In the case of non-self-referencing LZ77, it holds due to Lemma 2.
Author Contributions
Conceptualization, Y.N., S.I., H.B. and M.T.; methodology, S.M., Y.N. and S.I.; writing—original draft preparation, S.I.; writing—review and editing, Y.N. and H.B. All authors have read and agreed to the published version of the manuscript.
Funding
This work was supported by JSPS KAKENHI Grant Numbers JP18K18002 (YN), JP20H04141 (HB), JP18H04098 (MT), and JST PRESTO Grant Number JPMJPR1922 (SI).
Conflicts of Interest
The authors declare no conflict of interest.
References
- Yao, A.C. Some Complexity Questions Related to Distributive Computing (Preliminary Report). In Proceedings of the Eleventh Annual ACM Symposium on Theory of Computing, Atlanta, GA, USA, 30 April–2 May 1979; pp. 209–213.
- Clifford, R.; Jalsenius, M.; Porat, E.; Sach, B. Space lower bounds for online pattern matching. Theor. Comput. Sci. 2013, 483, 68–74.
- Radoszewski, J.; Starikovskaya, T. Streaming k-mismatch with error correcting and applications. Inf. Comput. 2020, 271, 104513.
- Jalsenius, M.; Porat, B.; Sach, B. Parameterized Matching in the Streaming Model. STACS 2013, 400–411.
- Gawrychowski, P.; Starikovskaya, T. Streaming Dictionary Matching with Mismatches. In Proceedings of the 30th Annual Symposium on Combinatorial Pattern Matching (CPM 2019), Pisa, Italy, 18–20 June 2019; pp. 21:5–21:15.
- Gawrychowski, P.; Radoszewski, J.; Starikovskaya, T. Quasi-Periodicity in Streams. In Proceedings of the 30th Annual Symposium on Combinatorial Pattern Matching (CPM 2019), Pisa, Italy, 18–20 June 2019; pp. 22:1–22:14.
- Bille, P.; Ettienne, M.B.; Grossi, R.; Gørtz, I.L.; Rotenberg, E. Compressed Communication Complexity of Longest Common Prefixes. In International Symposium on String Processing and Information Retrieval, Proceedings of the 25th International Symposium, SPIRE 2018, Lima, Peru, 9–11 October 2018; Springer: Cham, Switzerland, 2018; pp. 74–87.
- Ziv, J.; Lempel, A. A universal algorithm for sequential data compression. IEEE Trans. Inf. Theory 1977, 23, 337–343.
- Crochemore, M. Linear searching for a square in a word. Bull. Eur. Assoc. Theor. Comput. Sci. 1984, 24, 66–72.
- Rytter, W. Application of Lempel-Ziv factorization to the approximation of grammar-based compression. Theor. Comput. Sci. 2003, 302, 211–222.
- Lagarde, G.; Perifel, S. Lempel-Ziv: A “one-bit catastrophe” but not a tragedy. In Proceedings of the Twenty-Ninth Annual ACM-SIAM Symposium on Discrete Algorithms, New Orleans, LA, USA, 7–10 January 2018; pp. 1478–1495.
- Ziv, J.; Lempel, A. Compression of individual sequences via variable-rate coding. IEEE Trans. Inf. Theory 1978, 24, 530–536.
- Kociumaka, T.; Navarro, G.; Prezza, N. Towards a Definitive Measure of Repetitiveness. In Latin American Symposium on Theoretical Informatics, Proceedings of the 14th Latin American Symposium, São Paulo, Brazil, 5–8 January 2020; Springer: Cham, Switzerland, 2020; pp. 207–219.
- Storer, J.A.; Szymanski, T.G. Data compression via textual substitution. J. ACM 1982, 29, 928–951.
- Kempa, D.; Prezza, N. At the roots of dictionary compression: string attractors. In Proceedings of the 50th Annual ACM SIGACT Symposium on Theory of Computing, Los Angeles, CA, USA, 25–29 June 2018; pp. 827–840.
- Mantaci, S.; Restivo, A.; Romana, G.; Rosone, G.; Sciortino, M. A combinatorial view on string attractors. Theor. Comput. Sci. 2021, 850, 236–248.
- Adelson-Velskii, G.; Landis, E. An algorithm for the organization of information. Sov. Math. Dokl. 1962, 3, 1259–1263.
- Knuth, D.E. The Art of Computer Programming, 2nd ed.; Addison-Wesley: Boston, MA, USA, 1998; Volume III.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).