Displaying articles 1-6

p. 145-167
Received: 23 December 2009 / Revised: 18 March 2010 / Accepted: 18 March 2010 / Published: 1 April 2010

Cited by 5
Abstract: Given a sequence T = t_0 t_1 . . . t_{n-1} of size n = |T|, with symbols from a fixed alphabet Σ (|Σ| ≤ n), the suffix array provides a listing of all the suffixes of T in lexicographic order. Given T, the suffix sorting problem is to construct its suffix array. The direct suffix sorting problem is to construct the suffix array of T directly, without using the suffix tree data structure. While algorithms for linear-time, linear-space direct suffix sorting have been proposed, the actual constant in the linear space is still a major concern, given that the applications of suffix trees and suffix arrays (such as whole-genome analysis) often involve huge data sets. In this work, we reduce the gap between current results and the minimal space requirement. We introduce an algorithm for the direct suffix sorting problem with worst-case time complexity in O(n), requiring only (1⅔)n log n − n log|Σ| + O(1) bits of memory. This implies 5⅔n + O(1) bytes of total space (including space for both the output suffix array and the input sequence T), assuming n ≤ 2^32, |Σ| ≤ 256, and 4 bytes per integer. The basis of our algorithm is an extension of the Shannon-Fano-Elias codes used in source coding and information theory. This is the first time information-theoretic methods have been used as the basis for solving the suffix sorting problem.
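The problem definition above can be made concrete with a naive baseline (not the paper's algorithm, which achieves O(n) time in the stated space): sort the suffix start positions by the suffixes they index.

```python
def suffix_array(t):
    # Suffix array of t: start positions of all suffixes, sorted
    # lexicographically by the suffix each position indexes.
    # This is an O(n^2 log n) worst-case baseline for illustration only.
    return sorted(range(len(t)), key=lambda i: t[i:])

print(suffix_array("banana"))  # [5, 3, 1, 0, 4, 2]
```

For "banana", the sorted suffixes are a, ana, anana, banana, na, nana, giving the starting positions 5, 3, 1, 0, 4, 2.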

p. 63-75
Received: 4 November 2009 / Revised: 8 January 2010 / Accepted: 25 January 2010 / Published: 29 January 2010

Abstract: If we can use previous knowledge of the source (or knowledge of a source that is correlated with the one we want to compress) in the compression process, then we can obtain significant gains in compression. By doing this, in the fundamental source coding theorem we can substitute entropy with conditional entropy, giving a new theoretical limit that allows for better compression. When data compression is used for data transmission, we can assume some degree of interaction between the compressor and the decompressor, allowing a more efficient use of the previous knowledge they both have of the source. In this paper we review previous work that applies interactive approaches to data compression and discuss this possibility.
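The gain the abstract appeals to, replacing entropy with conditional entropy in the source coding theorem, can be checked numerically with empirical entropies and the identity H(X|Y) = H(X,Y) - H(Y). The sequences below are made-up toy data, not from the paper:

```python
import math
from collections import Counter

def entropy(seq):
    # Empirical Shannon entropy H(X) in bits per symbol.
    n = len(seq)
    return -sum(c / n * math.log2(c / n) for c in Counter(seq).values())

def conditional_entropy(xs, ys):
    # H(X|Y) = H(X,Y) - H(Y): the tighter compression limit
    # available when the decompressor already knows Y.
    return entropy(list(zip(xs, ys))) - entropy(ys)

x = "aabbaabb"          # source to compress
y = "aabbaabb"          # perfectly correlated side information
print(entropy(x))                 # 1.0 bit/symbol without side information
print(conditional_entropy(x, y))  # 0.0: nothing left to transmit
```

With perfectly correlated side information the conditional entropy drops to zero; with partially correlated sources it lies strictly between 0 and H(X).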

p. 1429-1448
Received: 30 September 2009 / Accepted: 20 November 2009 / Published: 25 November 2009

Cited by 9
Abstract: We consider grammar-based text compression with longest first substitution (LFS), where non-overlapping occurrences of a longest repeating factor of the input text are replaced by a new non-terminal symbol. We present the first linear-time algorithm for LFS. Our algorithm employs a new data structure called sparse lazy suffix trees. We also deal with a more sophisticated version of LFS, called LFS2, that allows better compression. The first linear-time algorithm for LFS2 is also presented.
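A naive sketch of the LFS scheme described above (not the paper's linear-time algorithm, which relies on sparse lazy suffix trees): repeatedly find a longest factor with at least two non-overlapping occurrences and replace its non-overlapping occurrences with a fresh non-terminal, here modelled as an integer token.

```python
def longest_repeat(toks):
    # Longest factor (length >= 2) with two non-overlapping occurrences.
    n = len(toks)
    for length in range(n // 2, 1, -1):
        first = {}
        for i in range(n - length + 1):
            f = tuple(toks[i:i + length])
            if f in first and i >= first[f] + length:
                return list(f)
            first.setdefault(f, i)
    return None

def substitute(toks, f, nt):
    # Replace non-overlapping occurrences of f left to right with nt.
    out, i, k = [], 0, len(f)
    while i < len(toks):
        if toks[i:i + k] == f:
            out.append(nt)
            i += k
        else:
            out.append(toks[i])
            i += 1
    return out

def lfs(text):
    # Greedy longest-first substitution; returns the compressed token
    # sequence and the grammar rules (non-terminal -> factor).
    toks, rules, nt = list(text), {}, 0
    while True:
        f = longest_repeat(toks)
        if f is None:
            return toks, rules
        rules[nt] = f
        toks = substitute(toks, f, nt)
        nt += 1

print(lfs("abababab"))  # ([0, 0], {0: ['a', 'b', 'a', 'b']})
```

Note that for "abababab" the longest repeat "ababab" overlaps itself, so the longest factor with non-overlapping occurrences is "abab".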

p. 1221-1231
Received: 25 June 2009 / Revised: 31 August 2009 / Accepted: 15 September 2009 / Published: 22 September 2009

Abstract: The symmetric-convolution multiplication (SCM) property of discrete trigonometric transforms (DTTs) based on unitary transform matrices is developed. Then, as the reciprocal of this property, the novel multiplication symmetric-convolution (MSC) property of discrete trigonometric transforms is developed.
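The SCM property is the trigonometric-transform analogue of the classical convolution theorem. As a loosely related numerical illustration (using the DFT rather than a DTT, so this is not the property the paper develops), circular convolution in time equals pointwise multiplication in the transform domain:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
h = np.array([0.5, -1.0, 0.25, 2.0])

# Transform-domain route: multiply the spectra, transform back.
via_fft = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(h)))

# Time-domain route: direct circular convolution.
n = len(x)
direct = np.array([sum(x[k] * h[(m - k) % n] for k in range(n))
                   for m in range(n)])

print(np.allclose(via_fft, direct))  # True
```

For DTTs the time-domain operation is a symmetric (rather than circular) convolution, which is what the SCM property makes precise.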

p. 1105-1136
Received: 9 July 2009 / Revised: 8 September 2009 / Accepted: 9 September 2009 / Published: 10 September 2009

Cited by 15
Abstract: A compressed full-text self-index for a text T is a data structure requiring reduced space and able to search for patterns P in T. It can also reproduce any substring of T, thus actually replacing T. Despite the recent explosion of interest in compressed indexes, there has not been much progress on functionalities beyond basic exact search. In this paper we focus on indexed approximate string matching (ASM), which is of great interest, say, in bioinformatics. We study ASM algorithms for Lempel-Ziv compressed indexes and for compressed suffix trees/arrays. Most compressed self-indexes belong to one of these classes. We start by adapting the classical method of partitioning into exact search to self-indexes, and optimize it over a representative of either class of self-index. Then, we show that a Lempel-Ziv index can be seen as an extension of the classical q-samples index. We give new insights on this type of index, which can be of independent interest, and then apply them to a Lempel-Ziv index. Finally, we improve hierarchical verification, a successful technique for sequential searching, so as to extend the matches of pattern pieces to the left or right. Most compressed suffix trees/arrays support the required bidirectionality, thus enabling the implementation of the improved technique. In turn, the improved verification largely reduces accesses to the text, which are expensive in self-indexes. We show experimentally that our algorithms are competitive and provide useful space-time tradeoffs compared to classical indexes.
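The "partitioning into exact search" method mentioned above rests on a pigeonhole argument: an occurrence of P with at most k errors must contain at least one of k+1 pattern pieces exactly. A minimal sketch, using Hamming distance for verification and plain str.find in place of a compressed self-index (both simplifications relative to the paper, which targets edit distance over self-indexes):

```python
def hamming(a, b):
    return sum(c != d for c, d in zip(a, b))

def approx_search(text, pattern, k):
    # Split the pattern into k+1 pieces; search each piece exactly,
    # then verify the full pattern at every candidate alignment.
    # Assumes len(pattern) > k so every piece is non-empty.
    m = len(pattern)
    bounds = [i * m // (k + 1) for i in range(k + 2)]
    hits = set()
    for j in range(k + 1):
        piece = pattern[bounds[j]:bounds[j + 1]]
        start = text.find(piece)
        while start != -1:
            pos = start - bounds[j]   # implied alignment of the whole pattern
            if 0 <= pos <= len(text) - m and \
                    hamming(text[pos:pos + m], pattern) <= k:
                hits.add(pos)
            start = text.find(piece, start + 1)
    return sorted(hits)

print(approx_search("banana", "banXna", 1))  # [0]
```

The verification step is exactly where the paper's improved hierarchical verification pays off, since each text access is expensive in a self-index.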

p. 1031-1044
Received: 30 June 2009 / Revised: 20 August 2009 / Accepted: 21 August 2009 / Published: 25 August 2009

Cited by 24
Abstract: The Web Graph is a large-scale graph that does not fit in main memory, so lossless compression methods have been proposed for it. This paper introduces a compression scheme that combines efficient storage with fast retrieval of the information in a node. The scheme exploits the properties of the Web Graph without assuming an ordering of the URLs, so it may be applied to more general graphs. Tests on some datasets in common use achieve space savings of about 10% over existing methods.
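One standard ingredient behind such schemes is gap plus variable-byte coding of each node's sorted successor list: successors of a node tend to be numerically close, so the gaps are small and code compactly. The sketch below is a generic illustration of that idea, not the paper's exact format:

```python
def encode_adjacency(neighbors):
    # Gap-encode the sorted neighbor list, then variable-byte code
    # each gap: 7 payload bits per byte, high bit set on the last byte.
    if not neighbors:
        return b""
    neighbors = sorted(neighbors)
    gaps = [neighbors[0]] + [b - a for a, b in zip(neighbors, neighbors[1:])]
    out = bytearray()
    for g in gaps:
        while g >= 128:
            out.append(g & 0x7F)       # continuation byte, high bit clear
            g >>= 7
        out.append(g | 0x80)           # final byte of this gap
    return bytes(out)

def decode_adjacency(data):
    gaps, cur, shift = [], 0, 0
    for byte in data:
        cur |= (byte & 0x7F) << shift
        shift += 7
        if byte & 0x80:                # last byte of the gap
            gaps.append(cur)
            cur, shift = 0, 0
    out, acc = [], 0
    for g in gaps:                     # undo the gap encoding
        acc += g
        out.append(acc)
    return out

print(decode_adjacency(encode_adjacency([5, 7, 100, 1000])))
```

Fast per-node retrieval follows because each list decodes independently in a single left-to-right pass.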
