Special Issue "Data Compression Algorithms and their Applications"

A special issue of Algorithms (ISSN 1999-4893).

Deadline for manuscript submissions: 30 November 2019.

Special Issue Editor

Guest Editor
Assoc. Prof. Philip Bille

Department of Applied Mathematics and Computer Science (DTU Compute), Technical University of Denmark, 2800 Kgs. Lyngby, Denmark
Interests: pattern matching; data compression; parallelism in modern computer architectures

Special Issue Information

Dear Colleagues,

Data compression is a classic research area in computer science focusing on the efficient storage and communication of data. Data compression is ubiquitous throughout science and engineering, and essentially any data of non-trivial size is stored or communicated in compressed form on any modern computer system. With rapid advances in data collection in areas such as e-commerce, astronomy, climatology, bioinformatics, and particle physics, the need for efficient data compression is stronger than ever.

We invite you to submit high-quality papers to this Special Issue on “Data Compression Algorithms and their Applications”, with subjects covering the whole range from theory to applications. The following is a (non-exhaustive) list of topics of interest:

  • Lossless data compression
  • Lossy data compression
  • Algorithms on compressed data
  • Compressed data structures
  • Applications of data compression

Assoc. Prof. Philip Bille
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Algorithms is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1000 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • lossless data compression
  • lossy data compression
  • algorithms on compressed data
  • compressed data structures
  • applications of data compression

Published Papers (3 papers)


Research

Open Access Article
Compaction of Church Numerals
Algorithms 2019, 12(8), 159; https://doi.org/10.3390/a12080159
Received: 28 June 2019 / Revised: 5 August 2019 / Accepted: 5 August 2019 / Published: 8 August 2019
Abstract
In this study, we address the problem of compaction of Church numerals. Church numerals are unary representations of natural numbers as lambda terms. We propose a novel scheme for decomposing a given natural number into an arithmetic expression using tetration, which enables us to obtain a compact representation of the lambda term that evaluates to the Church numeral of that number. For a natural number n, we prove that the size of the lambda term obtained by the proposed method is O((slog₂ n)(log n / log log n)). Moreover, we experimentally confirmed that the proposed method outperforms the binary representation of Church numerals on average when n is less than approximately 10,000.
(This article belongs to the Special Issue Data Compression Algorithms and their Applications)
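As a rough illustration of the gap between unary and structured representations discussed in the abstract, the following Python sketch builds Church numerals both by repeated succession and from a small power expression. It uses plain exponentiation rather than the paper's tetration-based decomposition, and all function names are illustrative, not taken from the paper:

    # Minimal sketch of Church numerals (illustrative only; the paper works
    # with lambda terms and a tetration-based decomposition, not this code).

    def zero(f):
        return lambda x: x

    def succ(n):
        return lambda f: lambda x: f(n(f)(x))

    def church(n):
        # Unary construction: n applications of succ, so the term has size Θ(n).
        c = zero
        for _ in range(n):
            c = succ(c)
        return c

    def power(m, n):
        # Church exponentiation m^n (the standard term: λm.λn. n m).
        return n(m)

    def to_int(c):
        # Decode a Church numeral back to a Python int.
        return c(lambda k: k + 1)(0)

    # 256 expressed as 2^(2^3): a few combinators instead of 256 successors.
    two, three = church(2), church(3)
    compact_256 = power(two, power(two, three))
    assert to_int(compact_256) == 256 == to_int(church(256))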

Open Access Article
A New Regularized Reconstruction Algorithm Based on Compressed Sensing for the Sparse Underdetermined Problem and Applications of One-Dimensional and Two-Dimensional Signal Recovery
Algorithms 2019, 12(7), 126; https://doi.org/10.3390/a12070126
Received: 27 May 2019 / Revised: 23 June 2019 / Accepted: 24 June 2019 / Published: 26 June 2019
Abstract
Compressed sensing theory has been widely used to solve underdetermined equations in various fields and has achieved remarkable results. The regularized smooth L0 (ReSL0) reconstruction algorithm adds an error regularization term to the smooth L0 (SL0) algorithm, enabling good reconstruction of the signal in the presence of noise. However, the ReSL0 reconstruction algorithm still has some flaws: it retains the original optimization method of SL0 and the Gauss approximation function, but this method suffers from a sawtooth effect in the later optimization stage, and its convergence is not ideal. Therefore, we make two adjustments to the ReSL0 reconstruction algorithm: first, we introduce the CIPF function, which approximates the L0 norm better than the Gauss function; second, we combine the steepest descent method and Newton's method in the optimization step. We then propose a novel regularized recovery algorithm named combined regularized smooth L0 (CReSL0). Under the same experimental conditions, the CReSL0 algorithm is compared with other popular reconstruction algorithms. Overall, the CReSL0 algorithm achieves excellent reconstruction performance in terms of peak signal-to-noise ratio (PSNR) and run time for both one-dimensional Gauss signal and two-dimensional image reconstruction tasks.
(This article belongs to the Special Issue Data Compression Algorithms and their Applications)
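For readers unfamiliar with the SL0 family that this paper builds on, the following NumPy sketch shows the baseline smoothed-L0 idea: approximate the L0 norm with a Gauss kernel, alternate small gradient steps with projection back onto the constraint set, and anneal the smoothing parameter downward. This is the baseline SL0 only, not the paper's CReSL0 (which swaps in the CIPF function and mixes steepest descent with Newton steps), and all parameter values are illustrative:

    import numpy as np

    def sl0(A, y, sigma_min=1e-3, sigma_decay=0.5, mu=2.0, inner_iters=3):
        # Start from the minimum-L2-norm solution of A x = y.
        A_pinv = np.linalg.pinv(A)
        x = A_pinv @ y
        sigma = 2.0 * np.max(np.abs(x))
        while sigma > sigma_min:
            for _ in range(inner_iters):
                # Gradient step on the smoothed objective
                # sum(exp(-x^2 / (2 sigma^2))), a Gauss proxy for the L0 norm.
                x = x - mu * x * np.exp(-x**2 / (2 * sigma**2))
                # Project back onto the affine constraint set {x : A x = y}.
                x = x - A_pinv @ (A @ x - y)
            sigma *= sigma_decay
        return x

    # Tiny underdetermined demo: recover a 2-sparse vector in R^20
    # from 8 noise-free measurements.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((8, 20))
    x_true = np.zeros(20)
    x_true[[3, 11]] = [1.5, -2.0]
    x_hat = sl0(A, A @ x_true)
    print(np.round(x_hat, 2))  # should be close to x_true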

Open Access Article
Time-Universal Data Compression
Algorithms 2019, 12(6), 116; https://doi.org/10.3390/a12060116
Received: 26 April 2019 / Revised: 25 May 2019 / Accepted: 27 May 2019 / Published: 29 May 2019
Abstract
Nowadays, a variety of data compressors (or archivers) is available, each of which has its merits, and it is impossible to single out the best one. Thus, one faces the problem of choosing the best method to compress a given file, and this problem becomes more important the larger the file is. It seems natural to try all the compressors and then choose the one that gives the shortest compressed file, then transfer (or store) the index number of the best compressor (this requires log m bits, if m is the number of compressors available) along with the compressed file. The only problem is the time, which increases substantially due to the need to compress the file m times (in order to find the best compressor). We suggest a method of data compression whose performance is close to optimal, but for which the extra time needed is relatively small: the ratio of this extra time to the total calculation time can be limited, asymptotically, by an arbitrary positive constant. In short, the main idea of the suggested approach is as follows: in order to find the best data compressor, try all of them, but use only a small part of the file when doing so; then apply the best compressor to the whole file. Note that there are many situations where it may be necessary to find the best data compressor out of a given set. This is often done by comparing compressors empirically. One of the goals of this work is to turn such a selection process into a part of the data compression method, automating and optimizing it.
(This article belongs to the Special Issue Data Compression Algorithms and their Applications)
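The selection idea in the abstract is simple to sketch. The Python snippet below benchmarks a set of stock compressors from the standard library on a small prefix of the input and then applies only the winner to the whole file. The paper's actual method, including how the sample size is chosen so that the extra time is asymptotically negligible, is more refined than this fixed-fraction sketch:

    import bz2
    import lzma
    import zlib

    COMPRESSORS = {
        "zlib": zlib.compress,
        "bz2": bz2.compress,
        "lzma": lzma.compress,
    }

    def compress_time_universal(data: bytes, sample_frac: float = 0.01):
        # Benchmark every compressor on a small prefix only.
        prefix = data[: max(1, int(len(data) * sample_frac))]
        best = min(COMPRESSORS, key=lambda name: len(COMPRESSORS[name](prefix)))
        # Apply only the winner to the whole input. In a real codec, the
        # chosen index (about log2(m) bits for m compressors) would be
        # stored alongside the compressed file.
        return best, COMPRESSORS[best](data)

    name, blob = compress_time_universal(b"abracadabra " * 10_000)
    print(name, len(blob))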

Planned Papers

The list below represents only planned manuscripts. Some of these manuscripts have not been received by the Editorial Office yet. Papers submitted to MDPI journals are subject to peer review.

Algorithms EISSN 1999-4893 Published by MDPI AG, Basel, Switzerland