Article

Time-Universal Data Compression †

Boris Ryabko 1,2
1 Institute of Computational Technologies of the Siberian Branch of the Russian Academy of Science, 630090 Novosibirsk, Russia
2 Department of Information Technologies, Novosibirsk State University, 630090 Novosibirsk, Russia
† A preliminary version of this paper was accepted for ISIT 2019, Paris.
Algorithms 2019, 12(6), 116; https://doi.org/10.3390/a12060116
Submission received: 26 April 2019 / Revised: 25 May 2019 / Accepted: 27 May 2019 / Published: 29 May 2019
(This article belongs to the Special Issue Data Compression Algorithms and their Applications)

Abstract:
Nowadays, a variety of data compressors (or archivers) is available, each of which has its merits, and it is impossible to single out the best one. Thus, one faces the problem of choosing the best method to compress a given file, and this problem becomes more important the larger the file is. It seems natural to try all the compressors, choose the one that gives the shortest compressed file, and then transfer (or store) the index number of the best compressor (which requires $\lceil \log m \rceil$ bits, if $m$ is the number of compressors available) together with the compressed file. The only problem is the time, which increases significantly due to the need to compress the file $m$ times (in order to find the best compressor). We suggest a method of data compression whose performance is close to optimal, but for which the extra time needed is relatively small: the ratio of this extra time to the total time of calculation can be limited, in an asymptotic manner, by an arbitrary positive constant. In short, the main idea of the suggested approach is as follows: in order to find the best data compressor, try all of them, but, when doing so, use only a small part of the file for compression. Then apply the best data compressor to the whole file. Note that there are many situations where it may be necessary to find the best data compressor out of a given set. In such a case, it is often done by comparing compressors empirically. One of the goals of this work is to turn such a selection process into a part of the data compression method, automating and optimizing it.

1. Introduction

Nowadays, lossless data compressors, or archivers, are widely used in systems of information transmission and storage. Modern data compressors are based on the results of the theory of source coding, as well as on the experience and intuition of their developers. Among the theoretical results, we note, first of all, such deep concepts as entropy, information, and the methods of source coding discovered by Shannon [1]. The next important step was made by Fitingoff [2] and Kolmogorov [3], who described the first universal code, as well as by Krichevsky, who described the first such code with minimal redundancy [4].
Data compressors used in practice today are based on the PPM universal code [5] (which is used along with the arithmetic code [6]), the Lempel–Ziv (LZ) compression methods [7], the Burrows–Wheeler transform [8] (which is used along with the book-stack (or MTF) code [9,10,11]), the class of grammar-based codes [12,13], and some others [14,15,16]. All these codes are universal. This means that, asymptotically, the length of the compressed file per letter goes to the smallest possible value (i.e., the Shannon entropy per letter) if the compressed sequence is generated by a stationary source.
In particular, the universality of the codes used in practice means that we cannot compare their performance theoretically, because all of them have the same limit compression ratio. On the other hand, experiments show that the performance of different data compressors depends on the compressed file, and it is impossible to single out the best one or even to remove the worst ones. Thus, there is no theoretical or experimental way to select the best data compressor for practical use. Hence, someone who is going to compress a file should first select an appropriate data compressor, preferably the one giving the best compression. The following obvious two-step method can be applied: first, try all available compressors and choose the one that gives the shortest compressed file; then store (or transmit) a binary representation of its index followed by the compressed file. When decoding, the decoder first reads the index of the selected data compressor and then decodes the rest of the file with that compressor. An obvious drawback of this approach is the need to spend a lot of time compressing the file with all the compressors first.
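As an illustration, the following sketch (in Python, with three standard library codecs standing in for an arbitrary set of archivers) implements this obvious two-step selection: every compressor is run on the whole file, the shortest output wins, and a $\lceil \log_2 m \rceil$-bit index of the winner is prepended so that the decoder knows which method was used.

```python
import bz2
import lzma
import math
import zlib

# Stand-ins for an arbitrary collection of m data compressors (phi_1, ..., phi_m).
COMPRESSORS = [zlib.compress, bz2.compress, lzma.compress]

def naive_select_and_compress(data: bytes):
    """Compress `data` with every available compressor, keep the shortest result,
    and return (index_bits, compressed) so the decoder knows which method to use."""
    outputs = [compress(data) for compress in COMPRESSORS]      # m full compressions
    best = min(range(len(outputs)), key=lambda i: len(outputs[i]))
    index_bits = math.ceil(math.log2(len(COMPRESSORS)))         # ceil(log m) bits
    return format(best, "0{}b".format(index_bits)), outputs[best]

if __name__ == "__main__":
    header, packed = naive_select_and_compress(b"abracadabra " * 1000)
    print(header, len(packed))
```

The drawback named above is visible directly in the list comprehension: the whole file is compressed $m$ times before a single byte of output is produced.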
In this paper we show that there exists a method that encodes the file with a (close to) optimal compressor but uses relatively little extra time. In short, the main idea of the suggested approach is as follows: in order to find the best compressor, try all of them, but, when doing so, use only a small part of the file for compression. Then apply the best data compressor to the whole file. Based on experiments and some theoretical considerations, we can say that under certain conditions this procedure is quite effective. That is why we call such methods “time-universal.”
It is important to note that the problems of data compression and time series prediction are very close mathematically (see, for example, [17]). That is why the proposed approach can be directly applied to time series forecasting.
To the best of our knowledge, the suggested approach to data compression is new, but the idea of organizing the computation of several algorithms in such a way that each of them runs during certain intervals of time, with the schedule depending on intermediate results, is widely used in the theory of algorithms, randomness testing and artificial intelligence; see [18,19,20,21].

2. The Statement of the Problem and Preliminary Example

Let there be a set of data compressors $F = \{\varphi_1, \varphi_2, \ldots\}$ and let $x_1 x_2 \ldots$ be a sequence of letters from a finite alphabet $A$, whose initial part $x_1 \ldots x_n$ should be compressed by some $\varphi \in F$. Let $v_i$ be the time spent on encoding one letter by the data compressor $\varphi_i$ and suppose that all $v_i$ are upper-bounded by a certain constant $v_{max}$, i.e., $\sup_{i=1,2,\ldots} v_i \le v_{max}$. (It is possible that $v_i$ is unknown beforehand.)
The considered task is to find a data compressor from $F$ which compresses $x_1 \ldots x_n$ in such a way that the total time spent on all calculations and compressions does not exceed $T(1+\delta)$ for some $\delta > 0$. Note that $T = v_{max}\, n$ is the minimum time that must be reserved for compression and $\delta T$ is the additional time that can be used to find a good compressor (among $\varphi_1, \varphi_2, \ldots$). It is important to note that we can estimate $\delta$ without knowing the speeds $v_1, v_2, \ldots$.
If the set of data compressors $F$ is finite, say $\{\varphi_1, \varphi_2, \ldots, \varphi_m\}$, $m \ge 2$, and one chooses $\varphi_k$ to compress the file $x_1 x_2 \ldots x_n$, one can use the following two-step procedure: encode the file as $\langle k \rangle\, \varphi_k(x_1 x_2 \ldots x_n)$, where $\langle k \rangle$ is the $\lceil \log m \rceil$-bit binary presentation of $k$. (The decoder first reads $\lceil \log m \rceil$ bits and finds $k$; then it finds $x_1 x_2 \ldots x_n$ by decoding $\varphi_k(x_1 x_2 \ldots x_n)$.) Now our goal is to generalize this approach to the case of an infinite $F = \{\varphi_1, \varphi_2, \ldots\}$. For this purpose we take a probability distribution $\omega = \omega_1, \omega_2, \ldots$ such that all $\omega_i > 0$. The following is an example of such a distribution:
$\omega_k = \frac{1}{k(k+1)}, \quad k = 1, 2, 3, \ldots$
Clearly, it is a probability distribution, because $\omega_k = 1/k - 1/(k+1)$, so the sum $\sum_k \omega_k$ telescopes to 1.
Now we should take into account the length of the codeword which presents the number $k$, because those lengths must be different for different $k$. So, we should find a $\varphi_k$ for which the value
$\lceil -\log \omega_k \rceil + |\varphi_k(x_1 x_2 \ldots x_n)|$
is close to minimal. As earlier, the first part $\lceil -\log \omega_k \rceil$ is the number of bits used for encoding the number $k$ (codes achieving this are well known, e.g., [22]). The decoder first finds $k$ and then $x_1 x_2 \ldots x_n$ using the decoder corresponding to $\varphi_k$. Based on this consideration, we give the following
Definition 1.
We call any method that encodes a sequence $x_1 x_2 \ldots x_n$, $n \ge 1$, $x_i \in A$, by a binary word of length $\lceil -\log \omega_j \rceil + |\varphi_j(x_1 x_2 \ldots x_n)|$ for some $\varphi_j \in F$ a time-adaptive code and denote it by $\hat\Phi_{compr}^{\delta}$. The output of $\hat\Phi_{compr}^{\delta}$ is the following word:
$\hat\Phi_{compr}^{\delta}(x_1 x_2 \ldots x_n) = \langle \omega_i \rangle\, \varphi_i(x_1 x_2 \ldots x_n),$
where $\langle \omega_i \rangle$ is a $\lceil -\log \omega_i \rceil$-bit word that encodes $i$, whereas the time of encoding is not greater than $T(1+\delta)$ (here $T = v_{max}\, n$).
If for a time-adaptive code $\hat\Phi_{compr}^{\delta}$ the following equation is valid
$\lim_{n\to\infty} |\hat\Phi_{compr}^{\delta}(x_1 \ldots x_n)|/n = \inf_{i=1,2,\ldots} \lim_{n\to\infty} |\varphi_i(x_1 \ldots x_n)|/n,$
this code is called time-universal.
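To give a feeling for the overhead in this definition: under the distribution $\omega_k = 1/(k(k+1))$ introduced above, the index part $\lceil -\log_2 \omega_k \rceil$ grows only logarithmically in $k$, so it is negligible for long files. The short sketch below (Python, purely illustrative) computes this penalty and the resulting selection criterion.

```python
import math

def index_cost_bits(k: int) -> int:
    """Bits needed to describe the index k under omega_k = 1/(k(k+1)),
    i.e. ceil(-log2 omega_k) = ceil(log2(k * (k + 1)))."""
    return math.ceil(math.log2(k * (k + 1)))

def criterion(k: int, compressed_len_bits: int) -> int:
    """The quantity to be minimized over k: index cost plus compressed length."""
    return index_cost_bits(k) + compressed_len_bits

# Even the 100th compressor in the list costs only 14 extra bits to name.
print(index_cost_bits(1), index_cost_bits(100))   # -> 1 14
```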
Comment 1
It will be convenient to assume that the whole sequence is compressed not letter by letter, but by sub-words, each of which is, say, a few kilobytes long. More formally, let there be, as before, a sequence $x_1 x_2 \ldots$, where the $x_i$, $i = 1, 2, \ldots$, are sub-words whose length (say, $L$) can be a few kilobytes. In this case $x_i \in \{0,1\}^{8L}$.
Comment 2
Here and below we do not take into account the time required for the calculation of $\lceil -\log \omega_i \rceil$ and some other auxiliary calculations. If in a certain situation this time is not negligible, it is possible to reduce $\hat T$ in advance by the required value.
This description and the following discussion are fairly formal, so we give a brief preliminary example of a time-adaptive code. To do this, we took 22 data compressors from [23] and 14 files of different lengths. To each file we applied the following three-step scheme: first, we took 1% of the file and sequentially compressed it with all the data compressors. Then we selected the three best compressors, took 5% of the file, and sequentially compressed it with the three compressors selected. Finally, we selected the best of these compressors and compressed the whole file with it. Thus, the total extra time is limited to 22 × 0.01 + 3 × 0.05 = 0.37, i.e., $\delta \le 0.37$. Table 1 contains the obtained data.
Table 1 shows that the larger the file, the better the compression. Table 2 gives some insight into the effect of the extra time. Here we used the same three-step scheme, but the sizes of the parts were 2% and 10% for the first and the second step, respectively, so the extra time was limited to 22 × 0.02 + 3 × 0.10 = 0.74.
From the tables it can be seen that the performance of the considered scheme improves significantly when the additional time increases. It is worth noting that if one applied all 22 data compressors to the whole file, the extra time would be 21 instead of 0.74.
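A minimal sketch of the three-step selection scheme used in these experiments is given below (Python; the three standard library codecs and the toy input are stand-ins for the 22 archivers and the test files, while the 1%/5% fractions and the "keep three" rule are the parameters described above).

```python
import bz2
import lzma
import zlib

# Stand-ins for a larger collection of archivers.
COMPRESSORS = {"zlib": zlib.compress, "bz2": bz2.compress, "lzma": lzma.compress}

def multi_step_select(data: bytes, stages=((0.01, 3), (0.05, 1))):
    """Select a compressor by testing the surviving candidates on growing prefixes.
    Each stage is (fraction_of_file, survivors_to_keep): all candidates are tried
    on 1% of the file, the 3 best are tried on 5%, and the single winner remains."""
    candidates = list(COMPRESSORS)
    for fraction, keep in stages:
        prefix = data[: max(1, int(len(data) * fraction))]
        candidates = sorted(candidates,
                            key=lambda name: len(COMPRESSORS[name](prefix)))[:keep]
    return candidates[0]

if __name__ == "__main__":
    text = b"to be or not to be, that is the question " * 3000
    best = multi_step_select(text)
    print(best, len(COMPRESSORS[best](text)))   # the winner compresses the whole file
```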

3. The Time-Universal Code for the Finite Set of Data Compressors

3.1. Theoretical Consideration

Suppose that there is a file $x_1 x_2 \ldots x_n$ and data compressors $\varphi_1, \ldots, \varphi_m$, $n \ge 1$, $m \ge 1$. Let, as before, $v_i$ be the time spent on encoding one letter by the data compressor $\varphi_i$,
$v_{max} = \max_{i=1,\ldots,m} v_i, \qquad T = n\, v_{max},$
and let
$\hat T = T(1+\delta), \quad \delta > 0.$
The goal is to find the data compressor $\varphi_j$, $j = 1, \ldots, m$, that compresses the file $x_1 x_2 \ldots x_n$ best within the time $\hat T$.
Apparently, the following two-step method is the simplest.
Step 1. Calculate $r = \delta T / (m\, v_{max})$.
Step 2. Compress the prefix $x_1 x_2 \ldots x_r$ by $\varphi_1$ and find the length of the compressed file $|\varphi_1(x_1 \ldots x_r)|$; then, likewise, find $|\varphi_2(x_1 \ldots x_r)|$, etc.
Step 3. Calculate $s = \arg\min_{i=1,\ldots,m} |\varphi_i(x_1 \ldots x_r)|$.
Step 4. Compress the whole file $x_1 x_2 \ldots x_n$ by $\varphi_s$ and compose the codeword $\langle s \rangle\, \varphi_s(x_1 \ldots x_n)$, where $\langle s \rangle$ is the $\lceil \log m \rceil$-bit presentation of $s$.
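Steps 1–4 translate almost literally into code. The sketch below is a minimal Python illustration; the compressor list is a stand-in, and $\delta$ and $v_{max}$ are assumed to be given (in practice any upper bound on the per-letter speeds will do).

```python
import bz2
import lzma
import math
import zlib

COMPRESSORS = [zlib.compress, bz2.compress, lzma.compress]   # phi_1, ..., phi_m

def two_step_compress(data: bytes, delta: float, v_max: float):
    """Choose the prefix length r from the extra-time budget delta*T, pick the
    compressor that is best on that prefix, then compress the whole file."""
    n, m = len(data), len(COMPRESSORS)
    T = n * v_max                                     # minimum time reserved for compression
    r = max(1, int(delta * T / (m * v_max)))          # Step 1: r = delta*T / (m*v_max)
    prefix = data[:r]
    lengths = [len(c(prefix)) for c in COMPRESSORS]   # Step 2: every compressor on the prefix
    s = min(range(m), key=lengths.__getitem__)        # Step 3: argmin of |phi_i(x_1..x_r)|
    header = format(s, "0{}b".format(math.ceil(math.log2(m))))  # ceil(log m)-bit index of s
    return header, COMPRESSORS[s](data)               # Step 4: <s> phi_s(x_1..x_n)

if __name__ == "__main__":
    header, packed = two_step_compress(b"data compression " * 20000, delta=0.1, v_max=1.0)
    print(header, len(packed))
```

Note that the selection stage compresses a prefix of length $r$ with all $m$ compressors, which takes about $m\, v_{max}\, r \le \delta T$ time, as required.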
It will be shown that even this simple method is time-universal. On the other hand, there are many reasonable approaches to building time-adaptive codes. For example, it is natural to try a three-step procedure, as considered in the previous section (see Table 1 and Table 2), as well as many other versions. It could probably also be useful to apply multidimensional optimization approaches, such as machine learning, so-called deep learning, etc. That is why we consider only some general conditions needed for time-universality.
Let us give some needed definitions. Suppose a time-adaptive data compressor $\hat\Phi$ is applied to $x = x_1 \ldots x_t$. For any $\varphi_i$ we define
$\tau_i(t) = \max \{ r : \varphi_i(x_1 \ldots x_r) \text{ was calculated when the extra time } \delta T \text{ was exhausted} \}.$
Theorem 1.
Let there be an infinite word $x_1 x_2 \ldots$ and a time-adaptive method $\hat\Phi$ which is based on the finite set of data compressors $\varphi_1, \ldots, \varphi_m$. If its additional time of calculation is not greater than $\delta T$ and the following properties are valid:
(i) the limits $\lim_{t\to\infty} |\varphi_i(x_1 \ldots x_t)|/t$ exist for $i = 1, 2, \ldots, m$,
(ii) for $i = 1, 2, \ldots, m$
$\lim_{t\to\infty} \tau_i(t) = \infty,$
(iii) for any $t$ the method $\hat\Phi(x_1 \ldots x_t)$ uses a compressor $\varphi_s$ for which, for any $i$,
$(\lceil -\log \omega_s \rceil + |\varphi_s(x_1 \ldots x_{\tau_s(t)})|)/\tau_s(t) \le (\lceil -\log \omega_i \rceil + |\varphi_i(x_1 \ldots x_{\tau_i(t)})|)/\tau_i(t),$
Then $\hat\Phi(x_1 \ldots x_n)$ is time-universal, that is,
$\lim_{t\to\infty} |\hat\Phi(x_1 \ldots x_t)|/t = \inf_{i=1,\ldots,m} \lim_{t\to\infty} |\varphi_i(x_1 \ldots x_t)|/t.$
A proof is given in Appendix A, but here we give some informal comments. First, note that property (i) means that any data compressor will participate in the competition to find the best one. Second, if the sequence $x_1 x_2 \ldots$ is generated by a stationary source and all $\varphi_i$ are universal codes, then property (iii) is valid with probability 1 (see, for example, [22]). Hence, the theorem is valid for this case. Besides, note that this theorem is valid for the methods described earlier.

3.2. Experiments

We conducted several experiments to evaluate the effectiveness of the proposed approach in practice. For this purpose we took 20 data compressors from the “squeeze chart (lossless data compression benchmarks)”, http://www.squeezechart.com/index.html, and files from the sites http://corpus.canterbury.ac.nz/descriptions/ and http://tolstoy.ru/creativity/90-volume-collection-of-the-works/ (information about their sizes is given in the tables below). It is worth noting that we did not change the collection of data compressors and files during the experiments. The results are presented in the following tables, where the expression “worst/best” means the ratio of the longest length of the compressed file to the shortest one (over the different data compressors). More formally, $\mathrm{worst/best} = \max_{i,j=1,\ldots,20} (|\varphi_i(x_1 \ldots x_n)|/|\varphi_j(x_1 \ldots x_n)|)$. The expression “chosen/best” is the analogous ratio for the chosen data compressor and the best one. The value “chosen best” is the frequency of occurrence of the event “the best compressor was selected”.
Table 3 shows the results of the two-step method, where we took 3% of the file in the first step. Thus, the total extra time is limited to 20 × 0.03 = 0.6, i.e., $\delta \le 0.6$.
Here the ratio “chosen best” means the proportion of cases in which the best method was chosen.
Table 4 shows the effect of the extra time $\delta$ on the efficiency of the method (in this case we took 5% of the file in the first step).
Table 5 contains information about the three-step method. Here we took 3% of the file in the first step and then took the five data compressors with the best performance. Then, in the second step, we tested those five data compressors on 5% of the file. Hence, the extra time equals 20 × 0.03 + 5 × 0.05 = 0.85.
Table 6 gives an example of the four-step method. Here we took 1% of the file in the first step and then took the five data compressors with the best performance. Then, in the second step, we tested those five data compressors on 2% of each file. Based on the obtained data, we chose the three best and tested them on 5% parts. At last, the best of them was used for compression of the whole file. Hence, the extra time equals 20 × 0.01 + 5 × 0.02 + 3 × 0.05 = 0.45.
If we compare Table 6 and Table 3, we can see that the performance of the four-step method is better than that of the two-step method, while the extra time of the four-step method is significantly smaller. The same is valid for the considered example of the three-step method.
We can see that the three- and four-step methods make sense because they make it possible to reduce the additional time while maintaining the quality of the method. We can also draw another important conclusion. All tables show that the method is more efficient for large files. Indeed, the ratio “chosen best” increases and the average value “chosen/best” decreases as the file length increases. Moreover, the average value “worst/best” increases as the file length increases.

4. The Time-Universal Code for Stationary Ergodic Sources

In this section we describe a time-universal code for stationary sources. It is based on the optimal universal codes for Markov chains developed by Krichevsky [4,24] and on the twice-universal code [25]. Denote by $M_i$, $i = 1, 2, \ldots$, the set of Markov chains with memory (connectivity) $i$, and let $M_0$ be the set of Bernoulli sources. For a stationary ergodic $\mu$ and an integer $r$ we denote by $h_r(\mu)$ the $r$-order entropy (per letter) and let $h(\mu)$ be the limit entropy; see [22] for definitions.
Krichevsky [4,24] described codes $\psi_0, \psi_1, \ldots$ which are asymptotically optimal for $M_0, M_1, \ldots$, respectively. If the sequence $x_1 x_2 \ldots x_t$, $x_i \in A$, is generated by a source $\mu \in M_i$, the following inequalities are valid almost surely (a.s.):
$h_i(\mu) \le |\psi_i(x_1 \ldots x_t)|/t \le h_i(\mu) + ((|A|-1)|A|^i + C)/t,$  (8)
as $t$ grows (here $C$ is a constant). The length of a codeword of the twice-universal code $\rho$ is defined as the following “mixture”:
$|\rho(x_1 \ldots x_t)| = -\log \sum_{i=0}^{\infty} \omega_{i+1}\, 2^{-|\psi_i(x_1 \ldots x_t)|}.$  (9)
(It is well known in information theory [22] that there exists a code with such codeword lengths, because $\sum_{x_1 \ldots x_t \in A^t} 2^{-|\rho(x_1 \ldots x_t)|} = 1$.) This code is called twice-universal because for any $M_i$, $i = 0, 1, \ldots$, and $\mu \in M_i$ the equality (8) is valid (with a different $C$). Besides, for any stationary ergodic source $\mu$, a.s.
$\lim_{t\to\infty} |\rho(x_1 \ldots x_t)|/t = h(\mu).$
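To make the mixture concrete, the short sketch below evaluates a finite truncation of the sum in (9) for given codeword lengths $|\psi_0|, \ldots, |\psi_m|$ (as discussed below, a finite number of terms suffices); the weights $\omega_{i+1} = 1/((i+1)(i+2))$ follow the distribution used earlier, and the example lengths are invented for illustration.

```python
import math

def omega(k: int) -> float:
    """The weight omega_k = 1/(k(k+1)) used throughout the paper."""
    return 1.0 / (k * (k + 1))

def mixture_codelength(psi_lengths):
    """|rho(x)| = -log2( sum_i omega_{i+1} * 2**(-|psi_i(x)|) ), truncated to the
    finitely many codes psi_0..psi_m that were actually evaluated."""
    # Work in the log domain to avoid underflow for long codewords.
    log_terms = [math.log2(omega(i + 1)) - length for i, length in enumerate(psi_lengths)]
    peak = max(log_terms)
    return -(peak + math.log2(sum(2.0 ** (term - peak) for term in log_terms)))

# Example: codeword lengths (in bits) produced by psi_0, psi_1, psi_2 on some sequence.
print(mixture_codelength([1050.0, 980.0, 1010.0]))   # close to 980 plus a small penalty
```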
Let us estimate the time of calculation necessary when using $\rho$. First, note that it suffices to sum a finite number of terms in (9), because all the terms $2^{-|\psi_i(x_1 \ldots x_t)|}$ are equal for $i \ge t$. On the other hand, the number of different terms grows with $t$ and, hence, the encoder should calculate $2^{-|\psi_i(x_1 \ldots x_t)|}$ for a growing number of $i$’s. It is known [24] that the time spent on coding one letter is close for the different codes $\psi_i$, $i = 0, 1, 2, \ldots$.
Hence, the time spent on encoding one letter by the code $\rho$ grows to infinity as $t$ grows. The time-universal code $\Psi_\delta$ described below has the same asymptotic performance, but the time spent on encoding one letter is bounded by a constant.
In order to describe the time-universal code $\Psi_\delta$ we give some definitions. Let, as before, $v$ be an upper bound on the time spent on encoding one letter by any $\psi_i$, let $x_1 \ldots x_t$ be the generated word, and let
$T = t\, v, \qquad N(t) = \delta T / v = \delta t,$
$m(t) = \log \log N(t), \qquad s(t) = N(t)/(m(t)+1).$  (11)
Denote by $\Psi_\delta$ the following method:
Step 1. Calculate $m(t)$, $s(t)$ and
$|\psi_0(x_1 \ldots x_{s(t)})|,\ |\psi_1(x_1 \ldots x_{s(t)})|,\ \ldots,\ |\psi_{m(t)}(x_1 \ldots x_{s(t)})|.$
Step 2. Find a $j$ such that
$|\psi_j(x_1 \ldots x_{s(t)})| = \min_{i=0,\ldots,m(t)} |\psi_i(x_1 \ldots x_{s(t)})|.$
Step 3. Calculate the codeword $\psi_j(x_1 \ldots x_t)$ and output
$\Psi_\delta(x_1 \ldots x_t) = \langle j \rangle\, \psi_j(x_1 \ldots x_t),$
where $\langle j \rangle$ is the $\lceil -\log \omega_{j+1} \rceil$-bit codeword of $j$. The decoding is obvious.
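The sketch below illustrates $\Psi_\delta$ for a binary alphabet, using the Krichevsky–Trofimov (KT) estimator as a stand-in for the order-$i$ codes $\psi_i$; the integer rounding of $m(t)$ and $s(t)$, the handling of the first few symbols, and the toy input are assumptions made for the sake of a runnable example.

```python
import math
from collections import defaultdict

def kt_codelength(x: str, order: int) -> float:
    """Codeword length (in bits) of an order-`order` KT code for a binary string x:
    each context of `order` previous symbols predicts with (count + 1/2)/(total + 1)."""
    counts = defaultdict(lambda: [0, 0])
    bits = float(order)                    # assume the first `order` symbols cost 1 bit each
    for pos in range(order, len(x)):
        context = x[pos - order:pos]
        c = counts[context]
        symbol = int(x[pos])
        bits -= math.log2((c[symbol] + 0.5) / (c[0] + c[1] + 1.0))
        c[symbol] += 1
    return bits

def psi_delta(x: str, delta: float):
    """Psi_delta: choose the Markov order on a short prefix, then encode the whole
    sequence with the chosen order (here we only return the order and its codelength)."""
    t = len(x)
    N = int(delta * t)                                     # N(t) = delta * t
    m = int(math.log2(max(2.0, math.log2(max(2, N)))))     # m(t) ~ log log N(t), rounded down
    s = max(1, N // (m + 1))                               # s(t) = N(t) / (m(t) + 1)
    prefix = x[:s]
    j = min(range(m + 1), key=lambda i: kt_codelength(prefix, i))   # Step 2 on the prefix
    return j, kt_codelength(x, j)                          # Step 3: <j> followed by psi_j(x)

# A toy order-1 source: the selected order is 1 and the codelength is far below t bits.
print(psi_delta("01" * 5000, delta=0.1))
```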
Theorem 2.
Let $x_1 x_2 \ldots$ be a sequence generated by a stationary source and let the code $\Psi_\delta$ be applied. Then this code is time-universal, i.e., a.s.
$\lim_{t\to\infty} |\Psi_\delta(x_1 \ldots x_t)|/t = \inf_{i=0,1,\ldots} \lim_{t\to\infty} |\psi_i(x_1 \ldots x_t)|/t.$  (12)

Funding

This research was funded by Russian Foundation for Basic Research grant number 18-29-03005.

Conflicts of Interest

The author declares no conflict of interest.

Appendix A

Proof of Theorem 1.
Let $\lambda_i = \lim_{t\to\infty} |\varphi_i(x_1 \ldots x_t)|/t$ and let $\varphi_{min}$ be a data compressor for which $\lambda_{min} = \min_i \lambda_i$. Having taken into account that the set of data compressors $F$ is finite, we can see that for any $\epsilon > 0$ there exists $t_1$ such that for all $\varphi_i \in F$ and $t > t_1$
$|(\lceil -\log \omega_i \rceil + |\varphi_i(x_1 \ldots x_t)|)/t - \lambda_i| < \epsilon.$  (A1)
From (ii) we obtain that there exists $t_2$ such that $\tau_i(t_2) > t_1$ for all $i = 1, \ldots, m$. Let $n \ge t_2$ and let $\hat\Phi$ be applied to $x_1 x_2 \ldots x_n$. Suppose that the data compressor $\varphi_s$ was chosen when $\hat\Phi$ was applied. Hence,
$(\lceil -\log \omega_s \rceil + |\varphi_s(x_1 \ldots x_{\tau_s(n)})|)/\tau_s(n) \le (\lceil -\log \omega_{min} \rceil + |\varphi_{min}(x_1 \ldots x_{\tau_{min}(n)})|)/\tau_{min}(n).$  (A2)
From (A1) we can see that
$(\lceil -\log \omega_s \rceil + |\varphi_s(x_1 \ldots x_{\tau_s(n)})|)/\tau_s(n) \ge \lambda_s - \epsilon$  (A3)
and
$(\lceil -\log \omega_{min} \rceil + |\varphi_{min}(x_1 \ldots x_{\tau_{min}(n)})|)/\tau_{min}(n) \le \lambda_{min} + \epsilon.$  (A4)
From the inequalities (A2)–(A4) we obtain $\lambda_s \le \lambda_{min} + 2\epsilon$. Taking into account that, by definition, $\lambda_{min} \le \lambda_s$, we get
$\lambda_{min} \le \lambda_s \le \lambda_{min} + 2\epsilon.$  (A5)
Let us estimate $|\hat\Phi(x_1 \ldots x_n)|/n$. When $\hat\Phi(x_1 \ldots x_n)$ was computed, the data compressor $\varphi_s$ was chosen. Hence, from (A1) we get
$\lambda_s - \epsilon \le |\hat\Phi(x_1 \ldots x_n)|/n \le \lambda_s + \epsilon.$
From these inequalities and (A5) we can see that
$\lambda_{min} - \epsilon \le |\hat\Phi(x_1 \ldots x_n)|/n \le \lambda_{min} + 3\epsilon.$
This is true for any $\epsilon > 0$; hence, $\lim_{n\to\infty} |\hat\Phi(x_1 \ldots x_n)|/n = \lambda_{min}$. The theorem is proven. □
Proof of Theorem 2.
It is known in information theory [22] that $h_r(\mu) \ge h_{r+1}(\mu) \ge h(\mu)$ for any $r$ and (by definition) $\lim_{r\to\infty} h_r(\mu) = h(\mu)$. Let $\epsilon > 0$ and let $r$ be an integer such that $h_r(\mu) - h(\mu) < \epsilon$. From (11) we can see that there exists $t_1$ such that $m(t) \ge r$ if $t \ge t_1$. Taking into account (8) and (11), we can see that there exists $t_2$ for which a.s. $||\psi_r(x_1 \ldots x_t)|/t - h_r(\mu)| < \epsilon$ if $t > t_2$. From the description of $\Psi_\delta$ (Step 3) we can see that there exists $t_3 > \max\{t_1, t_2\}$ for which a.s.
$||\psi_r(x_1 \ldots x_t)|/t - h(\mu)| \le ||\psi_r(x_1 \ldots x_t)|/t - h_r(\mu)| + (h_r(\mu) - h(\mu)) < 2\epsilon,$
if $t > t_3$. By definition,
$|\Psi_\delta(x_1 \ldots x_t)|/t \le (|\psi_r(x_1 \ldots x_t)| + \lceil -\log \omega_{r+1} \rceil)/t.$
Having taken into account that $\epsilon$ is an arbitrary number, the two last inequalities, and the fact that a.s. $\inf_{i=0,1,\ldots} \lim_{t\to\infty} |\psi_i(x_1 \ldots x_t)|/t = h(\mu)$, we obtain (12). The theorem is proven. □

References

1. Shannon, C. A mathematical theory of communication. Bell Syst. Tech. J. 1948, 27, 379–423.
2. Fitingof, B.M. Optimal encoding for unknown and changing statistics of messages. Probl. Inform. Transm. 1966, 2, 3–11.
3. Kolmogorov, A.N. Three approaches to the quantitative definition of information. Probl. Inform. Transm. 1965, 1, 3–11.
4. Krichevsky, R. A relation between the plausibility of information about a source and encoding redundancy. Probl. Inform. Transm. 1968, 4, 48–57.
5. Cleary, J.; Witten, I. Data compression using adaptive coding and partial string matching. IEEE Trans. Commun. 1984, 32, 396–402.
6. Rissanen, J.; Langdon, G.G. Arithmetic coding. IBM J. Res. Dev. 1979, 23, 149–162.
7. Ziv, J.; Lempel, A. A universal algorithm for sequential data compression. IEEE Trans. Inf. Theory 1977, 23, 337–343.
8. Burrows, M.; Wheeler, D.J. A Block-Sorting Lossless Data Compression Algorithm. Available online: https://www.hpl.hp.com/techreports/Compaq-DEC/SRC-RR-124.pdf (accessed on 15 May 2019).
9. Ryabko, B.Y. Data compression by means of a “book stack”. Probl. Inf. Transm. 1980, 16, 265–269.
10. Bentley, J.; Sleator, D.; Tarjan, R.; Wei, V. A locally adaptive data compression scheme. Commun. ACM 1986, 29, 320–330.
11. Ryabko, B.; Horspool, N.R.; Cormack, G.V.; Sekar, S.; Ahuja, S.B. Technical correspondence. Commun. ACM 1987, 30, 792–797.
12. Kieffer, J.C.; Yang, E.H. Grammar-based codes: A new class of universal lossless source codes. IEEE Trans. Inf. Theory 2000, 46, 737–754.
13. Yang, E.H.; Kieffer, J.C. Efficient universal lossless data compression algorithms based on a greedy sequential grammar transform. Part one: Without context models. IEEE Trans. Inf. Theory 2000, 46, 755–777.
14. Drmota, M.; Reznik, Y.A.; Szpankowski, W. Tunstall code, Khodak variations, and random walks. IEEE Trans. Inf. Theory 2010, 56, 2928–2937.
15. Ryabko, B. A fast on-line adaptive code. IEEE Trans. Inf. Theory 1992, 28, 1400–1404.
16. Willems, F.M.J.; Shtarkov, Y.M.; Tjalkens, T.J. The context-tree weighting method: Basic properties. IEEE Trans. Inf. Theory 1995, 41, 653–664.
17. Ryabko, B.; Astola, J.; Malyutov, M. Compression-Based Methods of Statistical Analysis and Prediction of Time Series; Springer International Publishing: Cham, Switzerland, 2016.
18. Li, M.; Vitanyi, P. An Introduction to Kolmogorov Complexity and Its Applications, 3rd ed.; Springer: New York, NY, USA, 2008.
19. Calude, C.S. Information and Randomness—An Algorithmic Perspective, 2nd ed.; Springer: Berlin, Germany, 2002.
20. Downey, R.; Hirschfeldt, D.R.; Nies, A.; Terwijn, S.A. Calibrating randomness. Bull. Symb. Log. 2006, 12, 411–491.
21. Hutter, M. Universal Artificial Intelligence: Sequential Decisions Based on Algorithmic Probability; Springer: Berlin, Germany, 2005.
22. Cover, T.M.; Thomas, J.A. Elements of Information Theory; Wiley-Interscience: New York, NY, USA, 2006.
23. Mahoney, M. Data Compression Programs. Available online: http://mattmahoney.net/dc/ (accessed on 15 March 2019).
24. Krichevsky, R. Universal Compression and Retrieval; Kluwer Academic Publishers: Dordrecht, The Netherlands, 1993.
25. Ryabko, B. Twice-universal coding. Probl. Inf. Transm. 1984, 3, 173–177.
Table 1. Three-step compression. Extra time δ = 0.37.
File | Length (bytes) | Best Compressor | Chosen Compressor | Chosen/Best (Ratio of Length)
BIB | 111,261 | nanozip | lpaq8 | 1.06
BOOK1 | 768,771 | nanozip | nanozip | 1
BOOK2 | 610,856 | nanozip | nanozip | 1
GEO | 102,400 | nanozip | ccm | 1.07
NEWS | 377,109 | nanozip | nanozip | 1
OBJ1 | 21,504 | nanozip | tornado | 1.23
OBJ2 | 246,814 | nanozip | lpaq8 | 1.08
PAPER1 | 53,161 | nanozip | tornado | 1.52
PAPER2 | 82,199 | nanozip | tornado | 1.54
PIC | 513,216 | zpaq | bbb | 1.25
PROGC | 39,611 | nanozip | tornado | 1.42
PROGL | 71,646 | nanozip | tornado | 1.44
PROGP | 49,379 | lpaq8 | tornado | 1.4
TRANS | 93,695 | lpaq8 | lpaq8 | 1
Table 2. Three-step compression. Extra time δ = 0.74.
File | Length (bytes) | Best Compressor | Chosen Compressor | Chosen/Best (Ratio of Length)
BIB | 111,261 | nanozip | nanozip | 1
BOOK1 | 768,771 | nanozip | nanozip | 1
BOOK2 | 610,856 | nanozip | nanozip | 1
GEO | 102,400 | nanozip | nanozip | 1
NEWS | 377,109 | nanozip | lpq1v2 | 1.14
OBJ1 | 21,504 | nanozip | ccm | 1.17
OBJ2 | 246,814 | nanozip | nanozip | 1
PAPER1 | 53,161 | nanozip | lpaq8 | 1.19
PAPER2 | 82,199 | nanozip | nanozip | 1
PIC | 513,216 | zpaq | bbb | 1.25
PROGC | 39,611 | nanozip | lpaq8 | 1.04
PROGL | 71,646 | nanozip | lpaq8 | 1.03
PROGP | 49,379 | lpaq8 | lpaq8 | 1
TRANS | 93,695 | lpaq8 | lpaq8 | 1
Table 3. Two-step compression. Extra time δ = 20 × 0.03 = 0.6.
Length of File (bytes) | Number of Files | Ratio "Chosen Best" | Average "Worst/Best" | Average "Chosen/Best"
< 10^5 | 1496 | 8% | 112.87% | 103.57%
10^5–10^6 | 1122 | 45.72% | 131.22% | 102.04%
10^6–10^8 | 384 | 71% | 147.95% | 100.99%
Table 4. Two-step compression. Extra time δ = 20 × 0.05 = 1.
Length of File (bytes) | Number of Files | Ratio "Chosen Best" | Average "Worst/Best" | Average "Chosen/Best"
< 10^5 | 1496 | 16% | 112.87% | 102.14%
10^5–10^6 | 1122 | 53.63% | 131.22% | 101.33%
10^6–10^8 | 384 | 73% | 147.95% | 100.84%
Table 5. Three-step compression. Extra time δ = 20 × 0.03 + 5 × 0.05 = 0.85.
Length of File (bytes) | Number of Files | Ratio "Chosen Best" | Average "Worst/Best" | Average "Chosen/Best"
< 10^5 | 1496 | 14% | 112.87% | 102.48%
10^5–10^6 | 1122 | 54.9% | 131.22% | 101.92%
10^6–10^8 | 384 | 73% | 147.95% | 100.86%
Table 6. Four-step compression. Extra time δ = 20 × 0.01 + 5 × 0.02 + 3 × 0.05 = 0.45.
Length of File (bytes) | Number of Files | Ratio "Chosen Best" | Average "Worst/Best" | Average "Chosen/Best"
< 10^5 | 1496 | 10% | 112.87% | 103.12%
10^5–10^6 | 1122 | 44.69% | 131.22% | 102.54%
10^6–10^8 | 384 | 72% | 147.95% | 100.88%
