# Time-Universal Data Compression


*Keywords:* data compression; universal coding; time-series forecasting


Institute of Computational Technologies of the Siberian Branch of the Russian Academy of Sciences, 630090 Novosibirsk, Russia

Department of Information Technologies, Novosibirsk State University, 630090 Novosibirsk, Russia

A preliminary version of this paper was accepted for ISIT 2019, Paris.

Received: 26 April 2019 / Revised: 25 May 2019 / Accepted: 27 May 2019 / Published: 29 May 2019

(This article belongs to the Special Issue Data Compression Algorithms and their Applications)

Nowadays, a variety of data compressors (or archivers) is available, each of which has its merits, and it is impossible to single out the best one. Thus, one faces the problem of choosing the best method to compress a given file, and this problem becomes more important the larger the file is. It seems natural to try all the compressors and then choose the one that gives the shortest compressed file, then transfer (or store) the index number of the best compressor (which requires $\log m$ bits if $m$ is the number of compressors available) together with the compressed file. The only problem is the time, which increases essentially due to the need to compress the file $m$ times (in order to find the best compressor). We suggest a method of data compression whose performance is close to optimal but for which the extra time needed is relatively small: the ratio of this extra time to the total calculation time can be bounded, asymptotically, by an arbitrary positive constant. In short, the main idea of the suggested approach is as follows: in order to find the best compressor, try all of them, but use only a small part of the file for this comparison; then apply the best data compressor to the whole file. Note that there are many situations where it may be necessary to find the best data compressor out of a given set, and this is often done by comparing compressors empirically. One of the goals of this work is to turn such a selection process into a part of the data compression method, automating and optimizing it.

Nowadays lossless data compressors, or archivers, are widely used in systems of information transmission and storage. Modern data compressors are based on the results of the theory of source coding, as well as on the experience and intuition of their developers. Among the theoretical results, we note, first of all, such deep concepts as entropy, information, and the methods of source coding discovered by Shannon [1]. The next important step was made by Fitingoff [2] and Kolmogorov [3], who described the first universal code, as well as by Krichevsky, who described the first such code with minimal redundancy [4].

The data compressors used in practice today are based on the PPM universal code [5] (which is used along with the arithmetic code [6]), the Lempel–Ziv (LZ) compression methods [7], the Burrows–Wheeler transform [8] (which is used along with the book-stack (or MTF) code [9,10,11]), the class of grammar-based codes [12,13], and some others [14,15,16]. All these codes are universal. This means that, asymptotically, the length of the compressed file goes to the smallest possible value (i.e., the Shannon entropy per letter) if the compressed sequence is generated by a stationary source.

In particular, the universality of practically used codes means that we cannot compare their performance theoretically, because all of them have the same limit compression ratio. On the other hand, experiments show that the performance of different data compressors depends on the compressed file, and it is impossible to single out the best ones or even to remove the worst. Thus, there is no theoretical or experimental way to select the best data compressors for practical use. Hence, someone who is going to compress a file should first select an appropriate data compressor, preferably the one giving the best compression. The following obvious two-step method can be applied: first, try all available compressors and choose the one that gives the shortest compressed file; then store a byte representation of its number followed by the compressed file. When decoding, the decoder first reads the number of the selected data compressor and then decodes the rest of the file with it. An obvious drawback of this approach is the need to spend a lot of time compressing the file with all the compressors first.

In this paper we show that there exists a method that encodes the file with the (close to) optimal compressor but uses relatively little extra time. In short, the main idea of the suggested approach is as follows: in order to find the best compressor, try all of them, but, when doing so, use only a small part of the file for compression. Then apply the best data compressor to the whole file. Based on experiments and some theoretical considerations, we can say that under certain conditions this procedure is quite effective. That is why we call such methods “time-universal.”

It is important to note that the problems of data compression and time series prediction are very close mathematically (see, for example, [17]). That is why the proposed approach can be directly applied to time series forecasting.

To the best of our knowledge, the suggested approach to data compression is new, but the idea of organizing the computation of several algorithms in such a way that each of them runs during certain time intervals, with the schedule depending on intermediate results, is widely used in the theory of algorithms, randomness testing, and artificial intelligence; see [18,19,20,21].

Let there be a set of data compressors $F=\{\phi_1,\phi_2,\dots\}$ and let $x_1 x_2\dots$ be a sequence of letters from a finite alphabet $A$, whose initial part $x_1\dots x_n$ should be compressed by some $\phi\in F$. Let $v_i$ be the time spent on encoding one letter by the data compressor $\phi_i$, and suppose that all $v_i$ are upper-bounded by a certain constant $v_{max}$, i.e., $\sup_{i=1,2,\dots} v_i\le v_{max}$. (It is possible that $v_i$ is unknown beforehand.)

The considered task is to find a data compressor from $F$ which compresses $x_1\dots x_n$ in such a way that the total time spent on all calculations and compressions does not exceed $T(1+\delta)$ for some $\delta>0$. Note that $T=v_{max}\,n$ is the minimum time that must be reserved for compression, and $\delta T$ is the additional time that can be used to find a good compressor (among $\phi_1,\phi_2,\dots$). It is important to note that we can estimate $\delta$ without knowing the speeds $v_1,v_2,\dots$.

If the number of data compressors in $F$ is finite, say $F=\{\phi_1,\phi_2,\dots,\phi_m\}$, $m\ge 2$, and one chooses $\phi_k$ to compress the file $x_1 x_2\dots x_n$, the following two-step procedure can be used: encode the file as $\langle k\rangle\,\phi_k(x_1 x_2\dots x_n)$, where $\langle k\rangle$ is the $\lceil \log m\rceil$-bit binary representation of $k$. (The decoder first reads $\lceil \log m\rceil$ bits and finds $k$; then it finds $x_1 x_2\dots x_n$ by decoding $\phi_k(x_1 x_2\dots x_n)$.) Now our goal is to generalize this approach to the case of an infinite $F=\{\phi_1,\phi_2,\dots\}$. For this purpose we take a probability distribution $\omega=\omega_1,\omega_2,\dots$ such that all $\omega_i>0$. The following is an example of such a distribution:

$${\omega}_{k}=\frac{1}{k(k+1)}\phantom{\rule{0.166667em}{0ex}},\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}k=1,2,3,\dots .$$

Clearly, it is a probability distribution, because $\omega_k=1/k-1/(k+1)$, so the sum telescopes to 1.
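As a quick numerical sanity check of the telescoping identity and of the index-code lengths $\lceil -\log\omega_k\rceil$ used below, consider the following small sketch (illustrative only, not part of the original construction):

```python
import math
from fractions import Fraction

# omega_k = 1/(k(k+1)) = 1/k - 1/(k+1), so partial sums telescope to 1 - 1/(K+1).
def omega(k):
    return Fraction(1, k * (k + 1))

partial = sum(omega(k) for k in range(1, 1001))
print(partial)  # 1000/1001, approaching 1 as K grows

# Bits needed to encode the index k of the chosen compressor:
# ceil(-log2 omega_k) = ceil(log2 k(k+1)).
def index_code_len(k):
    return math.ceil(math.log2(k * (k + 1)))

print([index_code_len(k) for k in range(1, 6)])  # [1, 3, 4, 5, 5]
```

Note how the cost of describing the index grows only logarithmically in $k$, which is what makes an infinite set of compressors affordable.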

Now we should take into account the length of the codeword that represents the number $k$, because these lengths must differ for different $k$. So, we should find a $\phi_k$ for which the value

$$\lceil -\log\omega_k\rceil + |\phi_k(x_1 x_2\dots x_n)|$$

is close to minimal. As earlier, the first part $\lceil -\log\omega_k\rceil$ is used for encoding the number $k$ (codes achieving this are well known; see, e.g., [22]). The decoder first finds $k$ and then $x_1 x_2\dots x_n$ using the decoder corresponding to $\phi_k$. Based on this consideration, we give the following definition.

We call any method that encodes a sequence $x_1 x_2\dots x_n$, $n\ge 1$, $x_i\in A$, by a binary word of length $\lceil -\log\omega_j\rceil + |\phi_j(x_1 x_2\dots x_n)|$ for some $\phi_j\in F$ a time-adaptive code, and denote it by $\widehat{\Phi}_{compr}^{\delta}$. The output of $\widehat{\Phi}_{compr}^{\delta}$ is the following word:

$$\widehat{\Phi}_{compr}^{\delta}(x_1 x_2\dots x_n)=\langle\omega_i\rangle\,\phi_i(x_1 x_2\dots x_n),$$

where $\langle\omega_i\rangle$ is the $\lceil -\log\omega_i\rceil$-bit word that encodes $i$, whereas the time of encoding is not greater than $T(1+\delta)$ (here $T=v_{max}\,n$).

If for a time-adaptive code $\widehat{\Phi}_{compr}^{\delta}$ the following equation is valid:

$$\lim_{n\to\infty}|\widehat{\Phi}_{compr}^{\delta}(x_1\dots x_n)|/n=\inf_{i=1,2,\dots}\lim_{n\to\infty}|\phi_i(x_1\dots x_n)|/n\,,$$

this code is called time-universal.

It will be convenient to assume that the whole sequence is compressed not letter by letter but by sub-words, each of which is, say, a few kilobytes in length. More formally, let there be, as before, a sequence $x_1 x_2\dots$, where the $x_i$, $i=1,2,\dots$, are sub-words whose length $L$ can be a few kilobytes; in this case $x_i\in\{0,1\}^{8L}$.

Here and below we do not take into account the time required to calculate $\log\omega_i$ and for some other auxiliary computations. If in a certain situation this time is not negligible, $\widehat{T}$ can be reduced in advance by the required amount.

This description and the following discussion are fairly formal, so we give a brief preliminary example of a time-adaptive code. To do this, we took 22 data compressors from [23] and 14 files of different lengths. To each file we applied the following three-step scheme: first, we took 1% of the file and sequentially compressed it with all the data compressors; then we selected the three best compressors, took 5% of the file, and sequentially compressed it with the three selected compressors; finally, we selected the best of these and compressed the whole file with it. Thus, the total extra time is limited by 22 × 0.01 + 3 × 0.05 = 0.37, i.e., $\delta\le 0.37$. Table 1 contains the obtained data.
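The extra-time bound of such a multi-stage scheme is just the sum, over stages, of (number of candidates) × (fraction of the file each candidate compresses). A small sketch of this bookkeeping (the stage parameters below are the ones from the text):

```python
def extra_time(stages):
    """Extra time delta as a fraction of T, the time to compress the whole
    file once. Each stage is a pair (number of candidate compressors,
    fraction of the file each of them compresses)."""
    return sum(m * fraction for m, fraction in stages)

# Three-step scheme from the text: all 22 compressors on 1% of the file,
# then the best 3 on 5%; the winner then compresses the whole file.
print(round(extra_time([(22, 0.01), (3, 0.05)]), 2))  # 0.37
# Variant with 2% and 10% parts, used for Table 2:
print(round(extra_time([(22, 0.02), (3, 0.10)]), 2))  # 0.74
```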

Table 1 shows that the larger the file, the better the compression. The next table gives some insight into the effect of the extra time. Here we used the same three-step scheme, but the sizes of the parts were $2\%$ and $10\%$ for the first and second steps, respectively, while the extra time was 22 × 0.02 + 3 × 0.10 = 0.74.

From the tables it can be seen that the performance of the considered scheme increases significantly when the additional time increases. It is worth noting that if one applied all 22 data compressors to the whole file, the extra time would be 21 instead of 0.74.

Suppose that there is a file ${x}_{1}{x}_{2}\dots {x}_{n}$ and data compressors ${\phi}_{1},\dots ,{\phi}_{m}$, $n\ge 1,m\ge 1$. Let, as before, ${v}_{i}$ be the time spent on encoding one letter by the data compressor ${\phi}_{i}$,
and let

$$v_{max}=\max_{i=1,\dots,m} v_i,\qquad T=n\,v_{max},$$

$$\widehat{T}=T(1+\delta )\phantom{\rule{0.166667em}{0ex}},\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}\delta >0.$$

The goal is to find the data compressor ${\phi}_{j}$, $j=1,\dots ,m$, that compresses the file ${x}_{1}{x}_{2}\dots {x}_{n}$ in the best way in time $\widehat{T}$.

Apparently, the following method is the simplest.

Step 1. Calculate $r=\lfloor \delta T/\left(m{v}_{max}\right)\rfloor $.

Step 2. Compress the prefix $x_1 x_2\dots x_r$ with $\phi_1$ and find the length of the compressed file $|\phi_1(x_1\dots x_r)|$; then, likewise, find $|\phi_2(x_1\dots x_r)|$, etc.

Step 3. Calculate $s=\arg\min_{i=1,\dots,m}|\phi_i(x_1\dots x_r)|$.

Step 4. Compress the whole file $x_1 x_2\dots x_n$ with $\phi_s$ and compose the codeword $\langle s\rangle\,\phi_s(x_1\dots x_n)$, where $\langle s\rangle$ is the $\lceil \log m\rceil$-bit binary representation of $s$.
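The four steps above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the candidate set is simulated by `zlib` at different effort levels, and $v_{max}$ is assumed known.

```python
import math
import zlib

def two_step_compress(data, compressors, delta, v_max):
    """Steps 1-4: select a compressor on a prefix, then compress the whole file.

    compressors: list of functions bytes -> bytes.
    v_max: assumed worst per-letter encoding time, so T = v_max * len(data).
    Returns (chosen index s, header size in bits, compressed file)."""
    m = len(compressors)
    T = v_max * len(data)
    # Step 1: the prefix length r that fits in the extra-time budget delta*T.
    r = min(len(data), int(delta * T // (m * v_max)))
    # Steps 2-3: compress the prefix with every candidate, keep the shortest.
    s = min(range(m), key=lambda i: len(compressors[i](data[:r])))
    # Step 4: a ceil(log2 m)-bit header encodes s; phi_s compresses the file.
    header_bits = math.ceil(math.log2(m))
    return s, header_bits, compressors[s](data)

# Toy candidate set: zlib at three effort levels stands in for phi_1..phi_3.
comps = [lambda b, lvl=lvl: zlib.compress(b, lvl) for lvl in (1, 6, 9)]
data = b"abracadabra " * 2000
s, bits, out = two_step_compress(data, comps, delta=0.3, v_max=1.0)
assert zlib.decompress(out) == data  # lossless round trip
```

The decoder mirrors Step 4: it reads the $\lceil\log m\rceil$-bit header, then decompresses the rest with the selected method.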

It will be shown that even this simple method is time-universal. On the other hand, there are many reasonable ways to build time-adaptive codes. For example, it is natural to try a three-step procedure, as considered in the previous part (see Table 1 and Table 2), as well as many other versions. It could also be useful to apply multidimensional optimization approaches, such as machine learning and so-called deep learning. That is why we consider only some general conditions needed for time-universality.

Let us give the needed definitions. Suppose a time-adaptive data compressor $\widehat{\Phi}$ is applied to $x=x_1\dots x_t$. For any $\phi_i$ we define

$$\tau_i(t)=\max\{r:\ \phi_i(x_1\dots x_r)\ \text{was calculated when the extra time}\ \delta T\ \text{was exhausted}\}.$$

Let there be an infinite word $x_1 x_2\dots$ and a time-adaptive method $\widehat{\Phi}$ based on the finite set of data compressors $\phi_1,\dots,\phi_m$. Suppose its additional calculation time is not greater than $\delta T$ and the following properties are valid:

(i) the limits ${lim}_{t\to \infty}{\phi}_{i}({x}_{1}\dots {x}_{t})/t$ exist for $i=1,2,\dots ,m$,

(ii) for $i=1,2,\dots ,m$

$$\underset{t\to \infty}{lim}{\tau}_{i}\left(t\right)=\infty ,$$

(iii) for any $t$ the method $\widehat{\Phi}(x_1\dots x_t)$ uses a compressor $\phi_s$ for which, for any $i$,

$$(-\log\omega_s+|\phi_s(x_1\dots x_{\tau_s(t)})|)/\tau_s(t)\le(-\log\omega_i+|\phi_i(x_1\dots x_{\tau_i(t)})|)/\tau_i(t).$$

Then $\widehat{\Phi}$ is time-universal, that is,

$$\lim_{t\to\infty}|\widehat{\Phi}(x_1\dots x_t)|/t=\inf_{i=1,2,\dots}\lim_{t\to\infty}|\phi_i(x_1\dots x_t)|/t.$$

A proof is given in Appendix A, but here we give some informal comments. First, note that property (ii) means that every data compressor will participate in the competition to find the best one. Second, if the sequence $x_1 x_2\dots$ is generated by a stationary source and all $\phi_i$ are universal codes, then property (i) is valid with probability 1 (see, for example, [22]); hence, the theorem applies to this case. Besides, note that this theorem is valid for the methods described earlier.

We conducted several experiments to evaluate the effectiveness of the proposed approach in practice. For this purpose we took 20 data compressors from the “squeeze chart (lossless data compression benchmarks)”, http://www.squeezechart.com/index.html, and files from http://corpus.canterbury.ac.nz/descriptions/ and http://tolstoy.ru/creativity/90-volume-collection-of-the-works/ (information about their sizes is given in the tables below). It is worth noting that we did not change the collection of data compressors and files during the experiments. The results are presented in the following tables, where the expression “worst/best” means the ratio of the longest length of the compressed file to the shortest one (over the different data compressors); more formally, $worst/best=\max_{i,j=1,\dots,20}(|\phi_i|/|\phi_j|)$. The expression “chosen/best” is the analogous ratio for the chosen data compressor and the best one, and the ratio “chosen best” is the frequency of occurrence of the event “the best compressor was selected”.

Table 3 shows the results of the two-step method, where we took 3% in the first step. Thus, the total extra time is limited to 20 × 0.03 = 0.6, i.e., $\delta \le 0.6$.

Here the ratio “chosen best” means the proportion of cases in which the best method was chosen.

Table 4 shows the effect of the extra time $\delta$ on the efficiency of the method (in this case we took 5% in the first step).

Table 5 contains information about the three-step method. Here we took 3% in the first step and then kept the five data compressors with the best performance. In the second step, we tested those five data compressors on 5% of each file. Hence, the extra time equals $20\times 0.03+5\times 0.05=0.85$.

Table 6 gives an example of the four-step method. Here we took 1% in the first step and then kept the five data compressors with the best performance. In the second step, we tested those five data compressors on 2% of each file. Based on the obtained data, we chose the three best and tested them on 5% parts. Finally, the best of them was used for compression of the whole file. Hence, the extra time equals $20\times 0.01+5\times 0.02+3\times 0.05=0.45$.

Comparing Table 6 and Table 3, we can see that the performance of the four-step method is better than that of the two-step method, while the extra time is significantly smaller for the four-step method. The same holds for the considered example of the three-step method.

We can see that the three- and four-step methods make sense because they make it possible to reduce the additional time while maintaining the quality of the method. We can also draw another important conclusion: all tables show that the method is more efficient for large files. Indeed, the ratio “chosen best” increases and the average value “chosen/best” decreases as the file length increases. Moreover, the average value “worst/best” increases as the file length increases.

In this section we describe a time-universal code for stationary sources. It is based on optimal universal codes for Markov chains, developed by Krichevsky [4,24], and on the twice-universal code [25]. Denote by $M_i$, $i=1,2,\dots$, the set of Markov chains with memory (connectivity) $i$, and let $M_0$ be the set of Bernoulli sources. For a stationary ergodic $\mu$ and an integer $r$ we denote by $h_r(\mu)$ the $r$-order entropy (per letter), and let $h_\infty(\mu)$ be the limit entropy; see [22] for definitions.

Krichevsky [4,24] described codes $\psi_0,\psi_1,\dots$ that are asymptotically optimal for $M_0,M_1,\dots$, correspondingly. If the sequence $x_1 x_2\dots x_n$, $x_i\in A$, is generated by a source $\mu\in M_i$, the following inequalities are valid almost surely (a.s.) as $t$ grows (here $C$ is a constant):

$$h_i(\mu)\le|\psi_i(x_1\dots x_t)|/t\le h_i(\mu)+((|A|-1)|A|^i+C)/t.$$

The length of a codeword of the twice-universal code $\rho$ is defined as the following “mixture”:

$$|\rho(x_1\dots x_t)|=-\log\sum_{i=0}^{\infty}\omega_{i+1}\,2^{-|\psi_i(x_1\dots x_t)|}.$$

(It is well-known in information theory [22] that there exists a code with such codeword lengths, because ${\sum}_{{x}_{1}\dots {x}_{t}\in {A}^{t}}$${2}^{-|\rho ({x}_{1}\dots {x}_{t})|}$ = $1$.) This code is called twice-universal because for any ${M}_{i}$, $i=0,1,\dots $, and $\mu \in {M}_{i}$ the equality (8) is valid (with different C). Besides, for any stationary ergodic source $\mu $ a.s.

$$\lim_{t\to\infty}|\rho(x_1\dots x_t)|/t=h_\infty(\mu).$$

Let us estimate the time of calculations necessary when using $\rho$. First, note that it suffices to sum a finite number of terms in (9), because all the terms $2^{-|\psi_i(x_1\dots x_t)|}$ are equal for $i\ge t$. On the other hand, the number of different terms grows as $t\to\infty$, and hence the encoder should calculate $2^{-|\psi_i(x_1\dots x_t)|}$ for a growing number of indices $i$. It is known [24] that the time spent on coding one letter is close for the different codes $\psi_i$, $i=0,1,2,\dots$.
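Numerically, the mixture length is best computed in the log domain, since the terms $2^{-|\psi_i|}$ underflow for realistic code lengths. A sketch, where the per-model code lengths are hypothetical inputs (any implementation of the Krichevsky codes would supply them):

```python
import math

def mixture_length(psi_lengths):
    """-log2 sum_i omega_{i+1} * 2^(-L_i) with omega_k = 1/(k(k+1)),
    where L_i = |psi_i(x_1...x_t)| in bits; uses a log-sum-exp trick."""
    log_terms = [-math.log2((i + 1) * (i + 2)) - L
                 for i, L in enumerate(psi_lengths)]
    top = max(log_terms)  # factor out the dominant term to avoid underflow
    return -(top + math.log2(sum(2.0 ** (l - top) for l in log_terms)))

# Hypothetical lengths |psi_0|, |psi_1|, |psi_2| on some sequence (in bits):
lengths = [5000.0, 4200.0, 4350.0]
rho = mixture_length(lengths)
# The mixture costs at most -log2(omega_{j+1}) extra bits over the best model
# (here j = 1, so the overhead is at most log2(2*3) bits):
assert 4200.0 < rho <= 4200.0 + math.log2(6) + 1e-9
```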

Hence, the time spent encoding one letter with the code $\rho$ grows to infinity as $t$ grows. The time-universal code $\Psi^{\delta}$ described below has the same asymptotic performance, but the time spent encoding one letter is constant.

In order to describe the time-universal code $\Psi^{\delta}$ we give some definitions. Let, as before, $v$ be an upper bound on the time spent encoding one letter by any $\psi_i$, let $x_1\dots x_t$ be the generated word, and let

$$T=t\,v,\qquad N(t)=\delta T/v=\delta\,t,$$

$$m(t)=\lfloor \log\log N(t)\rfloor,\qquad s(t)=\lfloor N(t)/(m(t)+1)\rfloor.$$

Denote by $\Psi^{\delta}$ the following method:

Step 1. Calculate $m\left(t\right),s\left(t\right)$ and

$$|{\psi}_{0}({x}_{1}\dots {x}_{s\left(t\right)})|,|{\psi}_{1}({x}_{1}\dots {x}_{s\left(t\right)})|,\dots ,|{\psi}_{m\left(t\right)}({x}_{1}\dots {x}_{s\left(t\right)})|.$$

Step 2. Find such a j that

$$|{\psi}_{j}({x}_{1}\dots {x}_{s\left(t\right)})|=\underset{i=0,\dots ,m\left(t\right)}{min}|{\psi}_{i}({x}_{1}\dots {x}_{s\left(t\right)})|.$$

Step 3. Calculate the codeword $\psi_j(x_1\dots x_t)$ and output

$$\Psi^{\delta}(x_1\dots x_t)=\langle j\rangle\,\psi_j(x_1\dots x_t),$$

where $\langle j\rangle$ is the $\lceil -\log\omega_{j+1}\rceil$-bit codeword of $j$. The decoding is obvious.
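Steps 1–3 can be sketched as follows. This is a hypothetical illustration: `psi_length(i, s)` stands in for computing $|\psi_i(x_1\dots x_s)|$ with an order-$i$ Krichevsky code (not implemented here), and the logarithms are taken base 2, which the text leaves implicit.

```python
import math

def psi_delta_select(t, delta, psi_length):
    """Selection step of the code Psi^delta.

    t: length of the sequence; delta: extra-time fraction.
    psi_length(i, s): assumed callback returning |psi_i(x_1...x_s)| in bits.
    Returns (chosen order j, m(t), s(t))."""
    n_t = delta * t                              # N(t) = delta*T/v = delta*t
    m_t = math.floor(math.log2(math.log2(n_t)))  # m(t) = floor(log log N(t))
    s_t = int(n_t // (m_t + 1))                  # s(t) = floor(N(t)/(m(t)+1))
    # Step 2: the order whose code compresses the prefix x_1...x_{s(t)} best.
    j = min(range(m_t + 1), key=lambda i: psi_length(i, s_t))
    return j, m_t, s_t

# Hypothetical trade-off: higher order -> lower per-letter rate, larger
# model cost (the constants below are invented for the illustration).
toy_length = lambda i, s: s * (1.0 + 2.0 / (i + 1)) + (2 ** i) * 5000
j, m_t, s_t = psi_delta_select(t=10**6, delta=0.5, psi_length=toy_length)
print(j, m_t, s_t)  # 2 4 100000
```

Since $m(t)$ grows doubly logarithmically, only a slowly growing number of orders is ever tried on the prefix, which is what keeps the per-letter encoding time bounded.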

Let ${x}_{1}{x}_{2}\dots $ be a sequence generated by a stationary source and the code ${\mathrm{\Psi}}^{\delta}$ be applied. Then this code is time-universal, i.e., a.s.

$$\underset{t\to \infty}{lim}|{\mathrm{\Psi}}^{\delta}({x}_{1}\dots {x}_{t})|/t=\underset{i=0,1,\dots}{inf}\underset{t\to \infty}{lim}|{\psi}_{i}({x}_{1}\dots {x}_{t})|/t.$$

This research was funded by Russian Foundation for Basic Research grant number 18-29-03005.

The author declares no conflict of interest.

Let $\lambda_i=\lim_{t\to\infty}|\phi_i(x_1\dots x_t)|/t$ and let $\phi_{min}$ be a data compressor such that $\lambda_{min}=\min_i\lambda_i$. Having taken into account that the set of data compressors $F$ is finite, we can see that for any $\epsilon>0$ there exists $t_1$ such that for all $\phi_i\in F$ and $t>t_1$

$$|(|\phi_i(x_1\dots x_t)|-\log\omega_i)/t-\lambda_i|<\epsilon.$$

From (ii) we obtain that there exists $t_2$ such that $\tau_i(t_2)>t_1$ for all $i=1,\dots,m$. Let $n\ge t_2$ and let $\widehat{\Phi}$ be applied to $x_1 x_2\dots x_n$. Suppose that the data compressor $\phi_s$ was chosen when $\widehat{\Phi}$ was applied. Hence,

$$(-\log\omega_s+|\phi_s(x_1\dots x_{\tau_s(n)})|)/\tau_s(n)\le(-\log\omega_{min}+|\phi_{min}(x_1\dots x_{\tau_{min}(n)})|)/\tau_{min}(n).$$

From (A1) we can see that

$$(-\log\omega_s+|\phi_s(x_1\dots x_{\tau_s(n)})|)/\tau_s(n)\ge\lambda_s-\epsilon$$

and

$$(-\log\omega_{min}+|\phi_{min}(x_1\dots x_{\tau_{min}(n)})|)/\tau_{min}(n)\le\lambda_{min}+\epsilon.$$

From the inequalities (A2)–(A4) we obtain $\lambda_s\le\lambda_{min}+2\epsilon$. Taking into account that, by definition, $\lambda_{min}\le\lambda_s$, we get

$$\lambda_{min}\le\lambda_s\le\lambda_{min}+2\epsilon.$$

Let us estimate $\widehat{\mathrm{\Phi}}({x}_{1}\dots {x}_{n})/n$. When $\widehat{\mathrm{\Phi}}({x}_{1}\dots {x}_{n})$ was applied, the data compressor ${\phi}_{s}$ was chosen. Hence, from (A1) we get

$$\lambda_s-\epsilon\le\widehat{\Phi}(x_1\dots x_n)/n\le\lambda_s+\epsilon.$$

From those inequalities and (A5) we can see that

$$\lambda_{min}-\epsilon\le\widehat{\Phi}(x_1\dots x_n)/n\le\lambda_{min}+3\epsilon.$$

This is true for any $\epsilon>0$; hence, $\lim_{n\to\infty}\widehat{\Phi}(x_1\dots x_n)/n=\lambda_{min}$. The theorem is proven. □

It is known in information theory [22] that $h_r(\mu)\ge h_{r+1}(\mu)\ge h_\infty(\mu)$ for any $r$ and that (by definition) $\lim_{r\to\infty}h_r(\mu)=h_\infty(\mu)$. Let $\epsilon>0$ and let $r$ be an integer such that $h_r-h_\infty<\epsilon$. From (11) we can see that there exists $t_1$ such that $m(t)\ge r$ if $t\ge t_1$. Taking into account (8) and (11), we can see that there exists $t_2$ for which a.s. $||\psi_r(x_1\dots x_t)|/t-h_r(\mu)|<\epsilon$ if $t>t_2$. From the description of $\Psi^{\delta}$ (Step 3) we can see that there exists $t_3>\max\{t_1,t_2\}$ for which a.s.

$$||\psi_r(x_1\dots x_t)|/t-h_\infty(\mu)|\le||\psi_r(x_1\dots x_t)|/t-h_r(\mu)|+(h_r(\mu)-h_\infty(\mu))<2\epsilon$$

if $t>t_3$. By definition,

$$|\Psi^{\delta}(x_1\dots x_t)|/t\le(|\psi_r(x_1\dots x_t)|-\log\omega_{r+1})/t.$$

Having taken into account that $\epsilon$ is an arbitrary number, the two latter inequalities, and the fact that a.s. $\inf_{i=0,1,\dots}\lim_{t\to\infty}|\psi_i(x_1\dots x_t)|/t=h_\infty(\mu)$, we obtain (12). The theorem is proven. □

1. Shannon, C. A mathematical theory of communication. Bell Syst. Tech. J. **1948**, 27, 379–423.
2. Fitingof, B.M. Optimal encoding for unknown and changing statistics of messages. Probl. Inform. Transm. **1966**, 2, 3–11.
3. Kolmogorov, A.N. Three approaches to the quantitative definition of information. Probl. Inform. Transm. **1965**, 1, 3–11.
4. Krichevsky, R. A relation between the plausibility of information about a source and encoding redundancy. Probl. Inform. Transm. **1968**, 4, 48–57.
5. Cleary, J.; Witten, I. Data compression using adaptive coding and partial string matching. IEEE Trans. Commun. **1984**, 32, 396–402.
6. Rissanen, J.; Langdon, G.G. Arithmetic coding. IBM J. Res. Dev. **1979**, 23, 149–162.
7. Ziv, J.; Lempel, A. A universal algorithm for sequential data compression. IEEE Trans. Inf. Theory **1977**, 23, 337–343.
8. Burrows, M.; Wheeler, D.J. A Block-Sorting Lossless Data Compression Algorithm. Available online: https://www.hpl.hp.com/techreports/Compaq-DEC/SRC-RR-124.pdf (accessed on 15 May 2019).
9. Ryabko, B.Y. Data compression by means of a “book stack”. Probl. Inf. Transm. **1980**, 16, 265–269.
10. Bentley, J.; Sleator, D.; Tarjan, R.; Wei, V. A locally adaptive data compression scheme. Commun. ACM **1986**, 29, 320–330.
11. Ryabko, B.; Horspool, N.R.; Cormack, G.V.; Sekar, S.; Ahuja, S.B. Technical correspondence. Commun. ACM **1987**, 30, 792–797.
12. Kieffer, J.C.; Yang, E.H. Grammar-based codes: A new class of universal lossless source codes. IEEE Trans. Inf. Theory **2000**, 46, 737–754.
13. Yang, E.H.; Kieffer, J.C. Efficient universal lossless data compression algorithms based on a greedy sequential grammar transform. Part I: Without context models. IEEE Trans. Inf. Theory **2000**, 46, 755–777.
14. Drmota, M.; Reznik, Y.A.; Szpankowski, W. Tunstall code, Khodak variations, and random walks. IEEE Trans. Inf. Theory **2010**, 56, 2928–2937.
15. Ryabko, B. A fast on-line adaptive code. IEEE Trans. Inf. Theory **1992**, 28, 1400–1404.
16. Willems, F.M.J.; Shtarkov, Y.M.; Tjalkens, T.J. The context-tree weighting method: Basic properties. IEEE Trans. Inf. Theory **1995**, 41, 653–664.
17. Ryabko, B.; Astola, J.; Malyutov, M. Compression-Based Methods of Statistical Analysis and Prediction of Time Series; Springer International Publishing: Cham, Switzerland, 2016.
18. Li, M.; Vitanyi, P. An Introduction to Kolmogorov Complexity and Its Applications, 3rd ed.; Springer: New York, NY, USA, 2008.
19. Calude, C.S. Information and Randomness—An Algorithmic Perspective, 2nd ed.; Springer: Berlin, Germany, 2002.
20. Downey, R.; Hirschfeldt, D.R.; Nies, A.; Terwijn, S.A. Calibrating randomness. Bull. Symb. Log. **2006**, 12, 411–491.
21. Hutter, M. Universal Artificial Intelligence: Sequential Decisions Based on Algorithmic Probability; Springer: Berlin, Germany, 2005.
22. Cover, T.M.; Thomas, J.A. Elements of Information Theory; Wiley-Interscience: New York, NY, USA, 2006.
23. Mahoney, M. Data Compression Programs. Available online: http://mattmahoney.net/dc/ (accessed on 15 March 2019).
24. Krichevsky, R. Universal Compression and Retrieval; Kluwer Academic Publishers: Dordrecht, The Netherlands, 1993.
25. Ryabko, B. Twice-universal coding. Probl. Inf. Transm. **1984**, 3, 173–177.

**Table 1.** Three-step scheme with 1% and 5% parts (extra time 0.37).

File | Length (bytes) | Best Compressor | Chosen Compressor | Chosen/Best (Ratio of Lengths)
---|---|---|---|---
BIB | 111,261 | nanozip | lpaq8 | 1.06
BOOK1 | 768,771 | nanozip | nanozip | 1
BOOK2 | 610,856 | nanozip | nanozip | 1
GEO | 102,400 | nanozip | ccm | 1.07
NEWS | 377,109 | nanozip | nanozip | 1
OBJ1 | 21,504 | nanozip | tornado | 1.23
OBJ2 | 246,814 | nanozip | lpaq8 | 1.08
PAPER1 | 53,161 | nanozip | tornado | 1.52
PAPER2 | 82,199 | nanozip | tornado | 1.54
PIC | 513,216 | zpaq | bbb | 1.25
PROGC | 39,611 | nanozip | tornado | 1.42
PROGL | 71,646 | nanozip | tornado | 1.44
PROGP | 49,379 | lpaq8 | tornado | 1.4
TRANS | 93,695 | lpaq8 | lpaq8 | 1

**Table 2.** Three-step scheme with 2% and 10% parts (extra time 0.74).

File | Length (bytes) | Best Compressor | Chosen Compressor | Chosen/Best (Ratio of Lengths)
---|---|---|---|---
BIB | 111,261 | nanozip | nanozip | 1
BOOK1 | 768,771 | nanozip | nanozip | 1
BOOK2 | 610,856 | nanozip | nanozip | 1
GEO | 102,400 | nanozip | nanozip | 1
NEWS | 377,109 | nanozip | lpq1v2 | 1.14
OBJ1 | 21,504 | nanozip | ccm | 1.17
OBJ2 | 246,814 | nanozip | nanozip | 1
PAPER1 | 53,161 | nanozip | lpaq8 | 1.19
PAPER2 | 82,199 | nanozip | nanozip | 1
PIC | 513,216 | zpaq | bbb | 1.25
PROGC | 39,611 | nanozip | lpaq8 | 1.04
PROGL | 71,646 | nanozip | lpaq8 | 1.03
PROGP | 49,379 | lpaq8 | lpaq8 | 1
TRANS | 93,695 | lpaq8 | lpaq8 | 1

**Table 3.** Two-step method with 3% in the first step ($\delta\le 0.6$).

Length of File (bytes) | Number of Files | Ratio “Chosen Best” | Average “Worst/Best” | Average “Chosen/Best”
---|---|---|---|---
≤$10^5$ | 1496 | 8% | 112.87% | 103.57%
$10^5$–$10^6$ | 1122 | 45.72% | 131.22% | 102.04%
$10^6$–$10^8$ | 384 | 71% | 147.95% | 100.99%

**Table 4.** Two-step method with 5% in the first step.

Length of File (bytes) | Number of Files | Ratio “Chosen Best” | Average “Worst/Best” | Average “Chosen/Best”
---|---|---|---|---
≤$10^5$ | 1496 | 16% | 112.87% | 102.14%
$10^5$–$10^6$ | 1122 | 53.63% | 131.22% | 101.33%
$10^6$–$10^8$ | 384 | 73% | 147.95% | 100.84%

**Table 5.** Three-step method with 3% and 5% parts (extra time 0.85).

Length of File (bytes) | Number of Files | Ratio “Chosen Best” | Average “Worst/Best” | Average “Chosen/Best”
---|---|---|---|---
≤$10^5$ | 1496 | 14% | 112.87% | 102.48%
$10^5$–$10^6$ | 1122 | 54.9% | 131.22% | 101.92%
$10^6$–$10^8$ | 384 | 73% | 147.95% | 100.86%

**Table 6.** Four-step method with 1%, 2%, and 5% parts (extra time 0.45).

Length of File (bytes) | Number of Files | Ratio “Chosen Best” | Average “Worst/Best” | Average “Chosen/Best”
---|---|---|---|---
≤$10^5$ | 1496 | 10% | 112.87% | 103.12%
$10^5$–$10^6$ | 1122 | 44.69% | 131.22% | 102.54%
$10^6$–$10^8$ | 384 | 72% | 147.95% | 100.88%

© 2019 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).