Lossless Data Compression

A special issue of Algorithms (ISSN 1999-4893). This special issue belongs to the section "Databases and Data Structures".

Deadline for manuscript submissions: closed (15 September 2020) | Viewed by 12099

Special Issue Editor


Prof. Kunihiko Sadakane
Guest Editor
Department of Mathematical Informatics, Graduate School of Information Science and Technology, The University of Tokyo, Tokyo 113-8656, Japan
Interests: information retrieval; data structures; data compression

Special Issue Information

Dear Colleagues,

In the era of big data, it is important to store massive datasets, such as DNA sequences and text messages on social networking services, without any information loss. This Special Issue, entitled “Lossless Data Compression”, aims to discuss the theory and practice of lossless data compression. It also covers algorithms and data structures for efficient access to compressed data.

Topics of interest include but are not limited to:

  • Text data compression;
  • DNA and bioinformatics data compression;
  • Compression of big data;
  • Algorithms and data structures for accessing compressed data;
  • Data analysis algorithms for compressed data;
  • Succinct data structures.

Prof. Kunihiko Sadakane
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Algorithms is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and written in good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (3 papers)


Research

26 pages, 3481 KiB  
Article
On the Optimal Calculation of the Rice Coding Parameter
by Fernando Solano Donado
Algorithms 2020, 13(8), 181; https://doi.org/10.3390/a13080181 - 27 Jul 2020
Cited by 4 | Viewed by 4514
Abstract
In this article, we design and evaluate several algorithms for the computation of the optimal Rice coding parameter. We conjecture that the optimal Rice coding parameter can be bounded and verify this conjecture through numerical experiments using real data. We also describe algorithms that partition the input sequence of data into sub-sequences, such that if each sub-sequence is coded with a different Rice parameter, the overall code length is minimised. An algorithm for finding the optimal partitioning solution for Rice codes is proposed, as well as fast heuristics, based on the understanding of the problem trade-offs.
(This article belongs to the Special Issue Lossless Data Compression)
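For context: a Rice code with parameter k writes a non-negative integer n as the unary quotient n >> k (a run of ones terminated by a zero) followed by the k low-order remainder bits, so choosing k trades quotient length against remainder length. The following is a minimal Python sketch of the brute-force baseline that faster algorithms like those in the paper improve upon; the function names and the bound k_max are illustrative assumptions, not the paper's notation:

```python
def rice_code_length(value: int, k: int) -> int:
    """Bit length of the Rice code of `value` with parameter k: a unary
    quotient (value >> k ones plus a stop bit) followed by k remainder bits."""
    return (value >> k) + 1 + k


def optimal_rice_parameter(values, k_max=32):
    """Scan k = 0..k_max and return the parameter minimising the total
    code length of `values`, together with that length (brute force)."""
    best_bits, best_k = min(
        (sum(rice_code_length(v, k) for v in values), k)
        for k in range(k_max + 1)
    )
    return best_k, best_bits


# Example: small, roughly geometric values favour a small parameter.
data = [3, 7, 12, 5, 30, 2, 18, 9]
k, bits = optimal_rice_parameter(data)
print(f"optimal k = {k}, total length = {bits} bits")
```

The scan is linear in k_max per element; the paper's conjecture that the optimal parameter is bounded is what makes such a bounded search reasonable in practice.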

19 pages, 650 KiB  
Article
Stream-Based Lossless Data Compression Applying Adaptive Entropy Coding for Hardware-Based Implementation
by Shinichi Yamagiwa, Eisaku Hayakawa and Koichi Marumo
Algorithms 2020, 13(7), 159; https://doi.org/10.3390/a13070159 - 30 Jun 2020
Cited by 6 | Viewed by 3705
Abstract
Driven by strong demand for very high-speed processor I/O, the physical performance of hardware I/O has grown drastically over the past decade. However, recent Big Data applications still demand larger I/O bandwidth and lower latency. Because raw I/O performance is no longer improving as quickly, it is time to consider other ways to increase it. To overcome this challenge, we focus on lossless data compression technology to decrease the amount of data in the communication path itself. Recent Big Data applications process data streams that flow continuously and, because of the high speed, never allow processing to stall. An elegant hardware-based data compression technology is therefore required. This paper proposes a novel lossless data compression method, called ASE coding, that encodes streaming data using an entropy coding approach: ASE coding instantly assigns the fewest bits to each compressed datum according to the number of occupied entries in a look-up table. This paper describes the detailed mechanism of ASE coding. Furthermore, performance evaluations demonstrate that ASE coding adaptively shrinks streaming data and works with a small amount of hardware resources, without stalling or buffering any part of the data stream.
(This article belongs to the Special Issue Lossless Data Compression)
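The abstract's central idea is that the code width adapts to how full a look-up table currently is. The toy sketch below, a deliberate simplification of ours and not the ASE algorithm itself, illustrates that table-driven adaptive-width principle: each symbol costs roughly ceil(log2(occupied entries)) bits, so codes stay short while the table is nearly empty:

```python
import math


def adaptive_width_cost(stream):
    """Toy model of table-driven adaptive-width coding: a symbol is coded as
    its index into a look-up table, so its width in bits depends only on how
    many table entries are currently occupied. Unseen symbols are appended to
    the table; a real codec would also have to signal them explicitly in the
    output bit stream, which this sketch omits."""
    table = []
    total_bits = 0
    for sym in stream:
        if sym not in table:
            table.append(sym)
        # ceil(log2(occupied entries)) bits suffice to index the table
        total_bits += max(1, math.ceil(math.log2(len(table))))
    return total_bits


print(adaptive_width_cost("abracadabra"), "bits")
```

Because the cost of each symbol depends only on the current table occupancy, the encoder needs no lookahead or buffering, which matches the abstract's claim of stall-free, stream-based operation.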

18 pages, 468 KiB  
Article
Practical Grammar Compression Based on Maximal Repeats
by Isamu Furuya, Takuya Takagi, Yuto Nakashima, Shunsuke Inenaga, Hideo Bannai and Takuya Kida
Algorithms 2020, 13(4), 103; https://doi.org/10.3390/a13040103 - 23 Apr 2020
Cited by 2 | Viewed by 3352
Abstract
This study presents an analysis of RePair, a grammar compression algorithm known for its simple scheme while also being practically effective. First, we show that the main process of RePair, that is, the step-by-step substitution of the most frequent symbol pairs, works within the corresponding most frequent maximal repeats. Then, we reveal the relation between maximal repeats and the grammars constructed by RePair. On the basis of this analysis, we further propose a novel variant of RePair, called MR-RePair, which performs one-time substitution of the most frequent maximal repeats instead of consecutive substitution of the most frequent pairs. Experiments comparing the size of the constructed grammars and the execution time of RePair and MR-RePair on several text corpora demonstrate that MR-RePair constructs more compact grammars than RePair, especially for highly repetitive texts.
(This article belongs to the Special Issue Lossless Data Compression)
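For context, RePair builds a grammar by repeatedly replacing the most frequent adjacent pair of symbols with a fresh nonterminal; MR-RePair substitutes whole maximal repeats in one step instead. Here is a minimal Python sketch of the classic pair-by-pair scheme (not MR-RePair), with the stated simplification that overlapping pair occurrences are handled only approximately:

```python
from collections import Counter


def repair(text):
    """Minimal sketch of RePair: repeatedly replace every occurrence of the
    currently most frequent adjacent pair with a fresh nonterminal until no
    pair occurs at least twice. (Overlapping occurrences such as 'aaa' are
    counted only approximately here; real implementations track them.)"""
    seq = list(text)
    rules = {}                                   # nonterminal -> (left, right)
    while True:
        counts = Counter(zip(seq, seq[1:]))
        if not counts:
            break
        pair, freq = counts.most_common(1)[0]
        if freq < 2:
            break
        nt = f"R{len(rules)}"                    # fresh nonterminal symbol
        rules[nt] = pair
        out, i = [], 0
        while i < len(seq):                      # greedy left-to-right rewrite
            if i + 1 < len(seq) and (seq[i], seq[i + 1]) == pair:
                out.append(nt)
                i += 2
            else:
                out.append(seq[i])
                i += 1
        seq = out
    return seq, rules


start, rules = repair("abracadabra")
print(start, rules)   # compressed start sequence and the grammar rules
```

On "abracadabra" this yields a short start sequence plus three rules; the paper's analysis shows such chains of pair substitutions stay inside maximal repeats, which is what motivates replacing a whole maximal repeat at once in MR-RePair.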
