
A Novel Block-Based Scheme for Arithmetic Coding

Qi-Bin Hou and Chong Fu

School of Information Science and Engineering, Northeastern University, Shenyang 110004, China
*
Author to whom correspondence should be addressed.
Entropy 2014, 16(6), 3315-3328; https://doi.org/10.3390/e16063315
Submission received: 8 February 2014 / Revised: 25 May 2014 / Accepted: 10 June 2014 / Published: 18 June 2014

Abstract

It is well-known that for a given sequence, its optimal codeword length is fixed. Many coding schemes have been proposed to make the codeword length as close to the optimal value as possible. In this paper, a new block-based coding scheme operating on the subsequences of a source sequence is proposed. It is proved that the sum of the optimal codeword lengths of the subsequences is not larger than that of the given sequence. Experimental results using arithmetic coding are presented.

1. Introduction

For any discrete memoryless source (DMS, an independent identically distributed source; a typical example is a sequence of independent flips of an unbiased coin), Shannon’s lossless source coding theorem [1] shows that the optimal lossless compression rate is bounded by the entropy of the given source. Since then, there has been considerable interest in designing source codes, with the objective of matching them to different applications. Among the existing source codes, the most widely used algorithms are indubitably Huffman coding [2], arithmetic coding [3,4] and Lempel-Ziv coding [5,6]. Of these coding methods, arithmetic coding offers great potential for the combination of compression and encryption. Recently, many novel approaches to joint compression and encryption have been presented [7–14], and interested readers may find the corresponding cryptanalysis in [15–18].

The earliest such coding method is the well-known Huffman coding, proved to be optimal by Huffman [2]. Due to its optimality, Huffman coding has been applied in many international standards, such as JPEG [19]. Later, with the appearance of arithmetic coding, Huffman coding has gradually been replaced, and many newer multimedia standards (such as JPEG2000 [20] and H.264 [21]) use modified versions of arithmetic coding as their entropy coders.

The predecessor of arithmetic coding is Shannon-Fano-Elias coding. The extension of the Shannon-Fano-Elias method to sequences is based on the enumerative methods presented by Cover [22]. Nevertheless, both of these codes suffer from precision problems. Fortunately, Rissanen and Langdon [3] solved this problem and characterized the family of arithmetic codes through the notion of the decodability criterion, which applies to all such codes. A practical implementation of arithmetic coding is due to Witten et al. [23], and a revisited version of arithmetic coding should be attributed to Moffat et al. [24].

As is known, the prerequisite for the establishment of Shannon’s theorem is that the encoder should be optimal and work according to the distribution of the given source, which indicates that the compression ratio is restricted by the entropy rate of the given DMS. The contribution of this paper is to show that the codeword length for a given DMS need not be bound by this entropy-rate constraint. The source sequence is first separated into two or more subsequences, which are encoded independently. Then we consider the simple case in which the subsequences have the same length and analyze the proposed coding scheme theoretically. Next, we prove that in the general case the sum of the optimal codeword lengths of the subsequences is no longer than that of the original sequence. Moreover, the subsequences are encoded without interference, which facilitates parallel computing. In addition, it should be noted that the coding algorithms adopted here belong to a class of mean-optimal source codes. As a result, in the sequel, arithmetic coding is adopted as the main compression algorithm so as to achieve the desired results.

The rest of this paper is arranged as follows: in the next section, the proposed scheme is described in detail. In Section 3, its constraint and feasibility are analyzed. In Section 4 we introduce a simple arithmetic coding scheme and present the experimental results as well. Finally, conclusions are drawn in Section 5.

2. Block-Based Coding

Let X be a random variable having value A or B:

X = \begin{cases} A & \text{with probability } p(A) \\ B & \text{with probability } p(B) \end{cases}

Let Xn := X1X2…Xn be an independent and identically distributed sequence of length n generated according to this distribution, and let nA and nB denote the number of times that symbols A and B appear, respectively. Let x be a realization of X; then the optimal codeword length L of this sequence is given by [25]:

L = \sum_{x \in \{A, B\}} n\, p(x) \log \frac{1}{p(x)}

where here and throughout the sequel log(·):=log2(·). Let the given binary message sequence be divided into two subsequences of lengths n1 and n2, respectively. Let the number of symbols A and B in the first subsequence be n1A and n1B, respectively. For the second one, they are n2A and n2B. Denote the actual probability mass function of the source sequence as p(X). For the two subsequences, they are q(X) and r(X), respectively.
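As a quick illustration (not part of the original presentation), the empirical optimal codeword length in Equation (2) can be computed directly from symbol counts, using n p(x) log(1/p(x)) = c_x log(n/c_x), where c_x is the count of symbol x. A minimal Python sketch, with the function name ours:

```python
import math
from collections import Counter

def optimal_codeword_length(seq):
    """Equation (2): L = sum over symbols of n*p(x)*log2(1/p(x)) bits,
    computed from the empirical distribution of seq."""
    n = len(seq)
    return sum(c * math.log2(n / c) for c in Counter(seq).values())

# Example: a short binary message over the alphabet {A, B}.
msg = "AABABBBAAB"
print(optimal_codeword_length(msg))  # ideal bit count for a coder matched to msg's empirical distribution
```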

As the symbols of the given binary sequence are independent, we have [25]:

H(X^n) = H(X_1 X_2 \cdots X_n) = H(X_1 X_2 \cdots X_{n_1}) + H(X_{n_1+1} X_{n_1+2} \cdots X_n)

where H(Xn) is the entropy of the source sequence. Note that the entropy here refers to the information entropy presented by Shannon [1]; there are of course other kinds of entropies, and interested readers can find them in [26–29]. Thus, the optimal codeword length of the source sequence can be rewritten as:

L = \sum_{x \in \{A, B\}} n_1\, p(x) \log \frac{1}{p(x)} + \sum_{x \in \{A, B\}} n_2\, p(x) \log \frac{1}{p(x)} = L_1 + L_2

where L1 and L2 denote the respective codeword lengths of the two subsequences after encoding in accordance with the distribution of the source sequence. Actually, it can easily be found that the probability mass functions of the two subsequences and of the source sequence are not necessarily the same. Therefore, after partitioning, the real optimal codeword length of the first subsequence is:

L_1^* = \sum_{x \in \{A, B\}} n_1\, q(x) \log \frac{1}{q(x)}

For the second subsequence, the real optimal codeword length is expressed as:

L_2^* = \sum_{x \in \{A, B\}} n_2\, r(x) \log \frac{1}{r(x)}

From Equations (4)–(6), it seems that the source sequence has been encoded according to a wrong distribution after partitioning. In other words, an i.i.d. source sequence can be further compressed if it is divided into two subsequences. In the following subsections, we shall formally analyze this fact.

2.1. An Alphabet of Size Two

Consider a given binary sequence of length n. Without loss of generality, suppose that n is an even number and the two subsequences have the same length, i.e., n1 = n2. Then similar to the manner above, we have:

p(A) = \frac{n_A}{n}, \qquad p(B) = \frac{n_B}{n}

The optimal codeword length of the binary sequence can be given by:

L = n \cdot \frac{n_A}{n} \log \frac{n}{n_A} + n \cdot \frac{n_B}{n} \log \frac{n}{n_B} = n_A \log \frac{n}{n_A} + n_B \log \frac{n}{n_B}

After partitioning, it is easy to observe that 0 ≤ n1A ≤ n/2, 0 ≤ n2A ≤ n/2 and n1A + n2A = nA. Therefore, we can obtain the following pair of equations:

L_1^* = n_{1A} \log \frac{n_1}{n_{1A}} + n_{1B} \log \frac{n_1}{n_{1B}}, \qquad L_2^* = n_{2A} \log \frac{n_2}{n_{2A}} + n_{2B} \log \frac{n_2}{n_{2B}}

and the sum of the optimal codeword lengths of these two subsequences is:

L_{sum}^* = L_1^* + L_2^*

The above discussion leads to Theorem 1.

Theorem 1. For a given binary message Xn of length n (n ≥ 2), the sum of the optimal codeword lengths of the two equally-divided subsequences is no greater than that of the given message sequence:

L \geq L_{sum}^* = L_1^* + L_2^*

with equality if and only if n1A = n2A.

Proof of Theorem 1. From Equations (8)–(10), we have:

L_{sum}^* = L_1^* + L_2^* = n_{1A} \log \frac{n}{2 n_{1A}} + \left(\frac{n}{2} - n_{1A}\right) \log \frac{n}{n - 2 n_{1A}} + n_{2A} \log \frac{n}{2 n_{2A}} + \left(\frac{n}{2} - n_{2A}\right) \log \frac{n}{n - 2 n_{2A}}

If n1A = n2A, we have:

L_{sum}^* = n_A \log \frac{n}{n_A} + n_B \log \frac{n}{n_B} = L

as nA = n1A + n2A.

If n1A ≠ n2A, we can rewrite Equation (12) as:

L_{sum}^* = n_{1A} \log \frac{n}{2 n_{1A}} + \left(\frac{n}{2} - n_{1A}\right) \log \frac{n}{n - 2 n_{1A}} + (n_A - n_{1A}) \log \frac{n}{2 (n_A - n_{1A})} + \left(\frac{n}{2} - n_A + n_{1A}\right) \log \frac{n}{n - 2 n_A + 2 n_{1A}}

Equation (13) implies that Lsum* is a function of n1A, since n and nA are constants for a given binary sequence. In order to make Equation (13) clear, let F(t) and t denote Lsum* and n1A, respectively. Then differentiating F(t) with respect to t yields:

\frac{d}{dt} F(t) = -\log 2t + \log 2(n_A - t) + \log(n - 2t) - \log(n - 2 n_A + 2t) = \log \frac{(n_A - t)(n - 2t)}{t\,(n - 2 n_A + 2t)}

After rearrangement, we have:

\frac{d}{dt} F(t) = \log \frac{(n_A - t)\,n + 2t^2 - 2 n_A t}{n t - 2 n_A t + 2 t^2}

As nA − t ≥ 0, n − 2t ≥ 0, t ≥ 0 and n − 2nA + 2t ≥ 0, letting:

g(t) = \frac{(n_A - t)\,n + 2t^2 - 2 n_A t}{n t - 2 n_A t + 2 t^2}

yields g(t) ≥ 0. Regarding Equation (16), there are two possible cases:

(a)

If nA − t > t, i.e., t < nA/2, then g(t) > 1 and dF/dt > 0;

(b)

If nA − t < t, i.e., t > nA/2, then 0 < g(t) < 1 and dF/dt < 0.

The above two cases are visualized in Figure 1. It is clear that Lsum* is concave in n1A, attaining its maximum L at n1A = nA/2, and hence Lsum* ≤ L.
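Theorem 1 is also easy to check numerically. The following brute-force sketch (our own illustration, not part of the original proof or experiments) fixes n and nA, sweeps every admissible value of t = n1A for an equal-length partition, and confirms that Lsum* never exceeds L, with equality at t = nA/2:

```python
import math

def ideal_bits(counts):
    """n * (empirical entropy) in bits for a block with the given symbol counts."""
    n = sum(counts)
    return sum(c * math.log2(n / c) for c in counts if c > 0)

n, nA = 20, 8                       # sequence length and total number of A's (nB = n - nA)
L = ideal_bits([nA, n - nA])        # Equation (8)

# Equal-length partition: the first half holds t = n1A symbols A, the second holds nA - t.
for t in range(0, min(n // 2, nA) + 1):
    if nA - t > n // 2:             # the second half cannot contain more than n/2 A's
        continue
    L_sum = ideal_bits([t, n // 2 - t]) + ideal_bits([nA - t, n // 2 - (nA - t)])
    assert L_sum <= L + 1e-9        # Theorem 1
    print(t, round(L_sum, 4), round(L, 4))
# The maximum of L_sum over t is reached at t = nA/2 = 4, where L_sum = L.
```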

2.2. A Special Case for an Alphabet of Size 2

In this subsection, we shall consider the case where the two subsequences are of arbitrary lengths while the sum of the two lengths is constant for a given source message sequence.

Theorem 2. Consider a given binary source sequence X1X2…Xn of length n (n ≥ 2), which takes values in {A, B} with probabilities p(A) and p(B), respectively. Let the binary sequence be partitioned into two subsequences of arbitrary lengths n1 and n2 (note that n1 + n2 = n); then the sum of the optimal codeword lengths of the two subsequences is no greater than that of the given binary sequence, i.e., L ≥ Lsum* = L1* + L2*.

Before the formal proof is presented, the following lemma [25] is required:

Lemma 1. Let p(x) and q(x), x ∈ X, be two probability mass functions. Then D(p||q) ≥ 0, with equality if and only if p(x) = q(x) for all x. Here, D(·||·) denotes the relative entropy.

Proof of Theorem 2. From the previous part, we have:

L = n\, p(A) \log \frac{1}{p(A)} + n\, p(B) \log \frac{1}{p(B)}

For symbol A, we have:

n\, p(A) = n_1\, q(A) + n_2\, r(A)

Now, expanding the first term of Equation (17), i.e., the contribution of symbol A to the optimal codeword length of the given source sequence, using Equation (18), we have:

n\, p(A) \log \frac{1}{p(A)} = [n_1\, q(A) + n_2\, r(A)] \log \frac{1}{p(A)} = n_1\, q(A) \log \left( \frac{q(A)}{p(A)} \cdot \frac{1}{q(A)} \right) + n_2\, r(A) \log \left( \frac{r(A)}{p(A)} \cdot \frac{1}{r(A)} \right)

Similarly, for symbol B we have:

n\, p(B) \log \frac{1}{p(B)} = n_1\, q(B) \log \left( \frac{q(B)}{p(B)} \cdot \frac{1}{q(B)} \right) + n_2\, r(B) \log \left( \frac{r(B)}{p(B)} \cdot \frac{1}{r(B)} \right)

Combining Equations (19) and (20), we have:

L = L_1^* + L_2^* + n_1 D(q \| p) + n_2 D(r \| p)

From Lemma 1, we know that:

D(q \| p) \geq 0, \qquad D(r \| p) \geq 0

If the equality holds, we have:

q(A) = p(A) = r(A), \qquad q(B) = p(B) = r(B)

This completes the proof of Theorem 2. Additionally, we can see that Theorem 1 is a special case of Theorem 2, and Equation (21) can be considered as describing a code designed based on a wrong distribution [25].
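The identity underlying Theorem 2, L = L1* + L2* + n1 D(q||p) + n2 D(r||p), can also be checked numerically for an arbitrary split point. A small Python sketch (the helper names and the random test sequence are ours):

```python
import math
import random
from collections import Counter

def ideal_bits(seq):
    """Optimal codeword length of seq under its own empirical distribution."""
    n = len(seq)
    return sum(c * math.log2(n / c) for c in Counter(seq).values())

def kl(p, q):
    """Relative entropy D(p||q) in bits (q must be positive wherever p is)."""
    return sum(p[s] * math.log2(p[s] / q[s]) for s in p if p[s] > 0)

def dist(seq):
    return {s: seq.count(s) / len(seq) for s in "AB"}

random.seed(1)
x = "".join(random.choice("AB") for _ in range(40))
n1 = random.randint(1, len(x) - 1)            # arbitrary split point
x1, x2 = x[:n1], x[n1:]

p, q, r = dist(x), dist(x1), dist(x2)
lhs = ideal_bits(x)                                            # L
rhs = ideal_bits(x1) + ideal_bits(x2) \
      + len(x1) * kl(q, p) + len(x2) * kl(r, p)                # L1* + L2* + n1*D(q||p) + n2*D(r||p)
print(round(lhs, 6), round(rhs, 6))                            # the two values coincide
```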

2.3. An Alphabet of Size D > 2

In the above two subsections, we have discussed the case where the alphabet size is two. In this subsection, we shall deal with the case of alphabet size D > 2.

Consider an i.i.d. random variable Z taking values in the set {1,2,…,D} and a given discrete sequence Z1Z2…Zn of length n. Suppose that the number of occurrences of symbol i is ni for each i ∈ {1,2,…,D} and the corresponding probability is p(i). Obviously, we have:

n = n_1 + n_2 + \cdots + n_D = \sum_{i=1}^{D} n_i

Following the preceding method, we once more partition the given sequence into two subsequences ZA and ZB with lengths nA and nB, respectively. We further assume that the probability of symbol i in ZA is pA(i) and the number of times that symbol i occurs in ZA is niA. For ZB, they are pB(i) and niB, respectively. Similarly, we have:

n_A = n_{1A} + n_{2A} + \cdots + n_{DA} = \sum_{i=1}^{D} n_{iA}, \qquad n_B = n_{1B} + n_{2B} + \cdots + n_{DB} = \sum_{i=1}^{D} n_{iB}

and:

n_i = n_{iA} + n_{iB}

Then the entropies of the source sequence and the two subsequences are given by:

H(Z) = \sum_{i=1}^{D} p(i) \log \frac{1}{p(i)}, \qquad H_A(Z) = \sum_{i=1}^{D} p_A(i) \log \frac{1}{p_A(i)}, \qquad H_B(Z) = \sum_{i=1}^{D} p_B(i) \log \frac{1}{p_B(i)}

The corresponding optimal codeword lengths are:

L = n \sum_{i=1}^{D} p(i) \log \frac{1}{p(i)}, \qquad L_A^* = n_A \sum_{i=1}^{D} p_A(i) \log \frac{1}{p_A(i)}, \qquad L_B^* = n_B \sum_{i=1}^{D} p_B(i) \log \frac{1}{p_B(i)}

Similar to Theorem 2, we have the following theorem.

Theorem 3. For a discrete sequence from a multiple-symbol source, after partitioning it into two subsequences (ZA and ZB), its optimal codeword length satisfies L ≥ LA* + LB*, with equality if and only if the probability mass functions satisfy p(i) = pA(i) = pB(i) for all i ∈ {1,2,…,D}.

Proof of Theorem 3. Let the source sequence be represented by Zn := Z1Z2…Zn. As the source sequence is independent and identically distributed, we have:

H(Z^n) = H(Z_1 Z_2 \cdots Z_n) = H_A(Z_1 Z_2 \cdots Z_{n_A}) + H_B(Z_{n_A+1} Z_{n_A+2} \cdots Z_n)

In a way similar to the proof of Theorem 2, we have:

n\, p(i) \log \frac{1}{p(i)} = [n_A\, p_A(i) + n_B\, p_B(i)] \log \frac{1}{p(i)} = n_A\, p_A(i) \log \left( \frac{p_A(i)}{p(i)} \cdot \frac{1}{p_A(i)} \right) + n_B\, p_B(i) \log \left( \frac{p_B(i)}{p(i)} \cdot \frac{1}{p_B(i)} \right)

Thus:

L = \sum_{i=1}^{D} n\, p(i) \log \frac{1}{p(i)} = L_A^* + L_B^* + n_A D(p_A \| p) + n_B D(p_B \| p)

As:

D(p_A \| p) \geq 0, \qquad D(p_B \| p) \geq 0

we have:

L = L_A^* + L_B^* + n_A D(p_A \| p) + n_B D(p_B \| p) \geq L_A^* + L_B^*

with equality if and only if:

p(i) = p_A(i) = p_B(i) \quad \text{for all } i \in \{1, 2, \ldots, D\}

Now we can see that not only binary sequences but also non-binary ones can be further compressed by using the proposed method.
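The same inequality is easy to verify for a larger alphabet, e.g., D = 256 when a file is read in bytes. A minimal sketch with synthetic data standing in for a real file (helper name ours):

```python
import math
import os
from collections import Counter

def ideal_bits(data):
    """n times the empirical entropy of a byte string, in bits."""
    n = len(data)
    return sum(c * math.log2(n / c) for c in Counter(data).values())

data = os.urandom(4096) + bytes(4096)   # toy 256-symbol source whose second half is highly skewed
zA, zB = data[:5000], data[5000:]       # an arbitrary two-block partition

L, LA, LB = ideal_bits(data), ideal_bits(zA), ideal_bits(zB)
print(round(L, 1), round(LA + LB, 1))   # Theorem 3: L >= LA* + LB*
assert LA + LB <= L + 1e-6
```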

3. Constraint and Feasibility

In Section 2, we have studied the advantages of block-based coding. In particular, when the source message is sufficiently large, we can repeat the partition operation. Nevertheless, this has the downside of enlarging the output file as the number of subsequences or the alphabet size grows, since the distribution of each subsequence must also be stored in the output file. In this section, we shall show the constraint and feasibility of our approach.

For a given binary sequence x1x2…xn with length n and probability mass function p(x1,x2,…,xn), it can, without doubt, be encoded to a length of log(1/p(x1,x2,…,xn)) + 2 bits [25]. This means that if the source is i.i.d., this code achieves an average codeword length within 2 bits above the entropy. If the prefix-free restriction is removed, a codeword length of log(1/p(x1,x2,…,xn)) + 1 bits can be achieved.

When the given message sequence x1x2…xn is partitioned into two subsequences of lengths n1 and n2, the practical codeword lengths of the two subsequences can be given by:

L_1 = \log \frac{1}{p(x_1, x_2, \ldots, x_{n_1})} + 2, \qquad L_2 = \log \frac{1}{p(x_{n_1+1}, x_{n_1+2}, \ldots, x_n)} + 2

Now, consider the following two special cases:

(a)

Suppose that the message sequence is 010101…, i.e., it has equal numbers of zeros and ones. Obviously, it is not compressible. However, if we separate it into two subsequences alternately, one subsequence will consist of all zeros while the other will consist of all ones. Both subsequences have zero entropy, and this is the ideal case;

(b)

Suppose the message sequence is 010101… and the numbers of zeros and ones are both even. After being partitioned into two subsequences, each one still has equal probability of zero and one. According to the preceding analysis, the final codeword length will be within 2 bits above the codeword length before partitioning.

There is no doubt that the second special case exists. Thus, we suggest applying this work to binary arithmetic coding when the target input file is comparatively small. On the other hand, since the subsequences are encoded without interference after partitioning, parallel coding is feasible. In particular, if the file to be compressed is considerably large, we can partition it into multiple subsequences and encode them in parallel.
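Both special cases are easy to reproduce with ideal code lengths. A small sketch (ours), assuming the alternating message of case (a) and a simple middle split for case (b):

```python
import math
from collections import Counter

def ideal_bits(seq):
    n = len(seq)
    return sum(c * math.log2(n / c) for c in Counter(seq).values())

x = "01" * 512                    # the alternating message of case (a): incompressible as a whole
print(ideal_bits(x))              # 1024.0 bits, i.e. one bit per symbol

evens, odds = x[0::2], x[1::2]    # case (a): alternate split -> all zeros and all ones
print(ideal_bits(evens) + ideal_bits(odds))    # 0.0: both subsequences have zero entropy

left, right = x[:512], x[512:]    # case (b): middle split -> each half still has equal 0s and 1s
print(ideal_bits(left) + ideal_bits(right))    # 1024.0: no gain, only the per-block overhead is added
```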

4. Experimental Results

As an extension of Shannon-Fano-Elias coding, arithmetic coding is an efficient coding scheme for lossless compression. Unlike Huffman coding, the process of arithmetic coding does not require much additional memory as the sequence length increases. Therefore, arithmetic coding has been adopted in quite a number of international standards. On the other hand, there is no need for a representative sample of the sequence, and the probability model can be updated with each symbol read, which indicates that adaptive coding can also be utilized (although it does not perform well here).

In order to further illustrate the superiority of the proposed coding scheme, we have performed a simple binary coding experiment. The operating procedures are described in Table 1 and one can refer to Figure 2 as well.
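For reference, the following Python sketch mirrors Steps 2–5 of Table 1, but approximates the arithmetic coder's output for each block by the ideal code length log(1/p) + 2 bits discussed in Section 3. The function names and the synthetic input are ours; the bytes of any Calgary Corpus file can be substituted:

```python
import math
from collections import Counter

def block_cost_bits(block):
    """Approximate cost of arithmetic-coding one block with its own order-0 model:
    ceil(log2(1/p(block))) + 2 bits, with p the block's empirical product distribution."""
    n = len(block)
    log_inv_p = sum(c * math.log2(n / c) for c in Counter(block).values())
    return math.ceil(log_inv_p) + 2

def block_based_cost(data, num_blocks=2):
    """Steps 2-5 of Table 1: split the input, code each block independently with its own
    model, and sum the codeword lengths (the per-block models would also be stored)."""
    size = math.ceil(len(data) / num_blocks)
    blocks = [data[i:i + size] for i in range(0, len(data), size)]
    return sum(block_cost_bits(b) for b in blocks)

data = bytes(range(256)) * 16 + b"A" * 4096   # synthetic stand-in for a Calgary Corpus file
print("single model:", block_cost_bits(data), "bits")
print("two blocks  :", block_based_cost(data, 2), "bits")
```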

Eighteen standard test files from the Calgary Corpus [30] are compressed to show the performance of this compression method. The test results are listed in Table 2, where RT and RP represent the compression ratios of the traditional and the proposed schemes, respectively. Note that the proposed method is used to further improve the compression ratio rather than to design a new source coding algorithm. Consequently, we just compare the compression ratio of the proposed scheme with that of the traditional arithmetic coding algorithm.

Similarly, another experiment is performed by employing a fixed model with 256 possible source symbols. The detailed operating procedures are the same as those listed in Table 1, except that the source sequence is read in bytes rather than in bits. The corresponding test results are listed in Table 3.

So far, the compression ratios of most existing compression algorithms cannot break the restriction of entropy. However, from Tables 2 and 3, it can be found that, thanks to the extra relative entropy, the compression ratio of the proposed scheme is sometimes smaller than the entropy of the original sequence, which is highlighted in the two tables. The reason for this phenomenon boils down to the following three aspects:

(a)

The probability distribution of the source sequence;

(b)

The partition method;

(c)

The encoding function.

The first aspect is important since the compression ratio of a given source sequence depends on its probability distribution. As proved in Section 2, our method is able to work better than the traditional one because of the existence of the extra relative entropy. Meanwhile, a good partition method can increase the extra relative entropy. In other words, when there is a greater difference between the distributions of the source sequence and the subsequences, the extra relative entropy will be larger and the compression ratio will be higher. This fact exactly reflects the importance of Aspect (b). The importance of Aspect (c) is obvious and is not repeated here. In addition, as the subsequences are encoded independently, we can perform the coding in parallel, which can obviously reduce the processing time.

5. Conclusions

In this paper, we have proved that the overall optimal codeword length after sequence partition is no greater than that of the original sequence. Encoding the original sequence as a whole can be regarded as the case where the code is designed using a wrong distribution. Because of the overhead introduced in the encoding process, we cannot divide the sequence into subsequences indefinitely. Nonetheless, if we perform the sequence separation properly according to the length of the original sequence, the expected codeword length can be achieved. This fact indirectly suggests that we can implement our scheme efficiently by parallel coding. Furthermore, since our work depends on the partition method, our future work will focus on how to partition different kinds of files and which kinds of files should be partitioned.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (No. 61271350), and the Fundamental Research Funds for the Central Universities (No. N120504005).

Author Contributions

Both authors contributed equally to the presented mathematical and computational framework and the writing of the paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Shannon, C.E. A mathematical theory of communication. Bell Syst. Tech. J. 1948, 27, 379–423.
2. Huffman, D.A. A method for the construction of minimum redundancy codes. Proc. IRE 1952, 40, 1098–1101.
3. Rissanen, J.; Langdon, G., Jr. Arithmetic coding. IBM J. Res. Dev. 1979, 23, 149–162.
4. Langdon, G.; Rissanen, J. Compression of black-white images with arithmetic coding. IEEE Trans. Commun. 1981, 29, 858–867.
5. Ziv, J.; Lempel, A. A universal algorithm for sequential data compression. IEEE Trans. Inf. Theory 1977, 23, 337–343.
6. Ziv, J.; Lempel, A. Compression of individual sequences via variable-rate coding. IEEE Trans. Inf. Theory 1978, 24, 530–536.
7. Cleary, J.; Irvine, S.; Rinsma-Melchert, I. On the insecurity of arithmetic coding. Comput. Secur. 1995, 14, 167–180.
8. Bergen, H.; Hogan, J. A chosen plaintext attack on an adaptive arithmetic coding algorithm. Comput. Secur. 1993, 12, 157–167.
9. Wen, J.; Kim, H.; Villasenor, J. Binary arithmetic coding with key-based interval splitting. IEEE Signal Process. Lett. 2006, 13, 69–72.
10. Kim, H.; Wen, J.; Villasenor, J. Secure arithmetic coding. IEEE Trans. Signal Process. 2007, 55, 2263–2272.
11. Wong, K.W.; Lin, Q.Z.; Chen, J.Y. Simultaneous arithmetic coding and encryption using chaotic maps. IEEE Trans. Circuits Syst. 2010, 57, 146–150.
12. Grangetto, M.; Magli, E.; Olmo, G. Multimedia selective encryption by means of randomized arithmetic coding. IEEE Trans. Multimedia 2006, 8, 905–917.
13. Luca, M.B.; Serbanescu, A.; Azou, S.; Burel, G. A new compression method using a chaotic symbolic approach. In Proceedings of the IEEE-Communications 2004, Bucharest, Romania, 3–5 June 2004; Available online: http://www.univ-brest.fr/lest/tst/publications/ (accessed on 18 June 2014).
14. Nagaraj, N.; Vaidya, P.G.; Bhat, K.G. Arithmetic coding as a non-linear dynamical system. Commun. Nonlinear Sci. Numer. Simul. 2009, 14, 1013–1020.
15. Sun, H.M.; Wang, K.H.; Ting, W.C. On the security of secure arithmetic code. IEEE Trans. Inf. Forensics Secur. 2009, 4, 781–789.
16. Jakimoski, G.; Subbalakshmi, K.P. Cryptanalysis of some multimedia encryption schemes. IEEE Trans. Multimedia 2008, 10, 330–338.
17. Pande, A.; Zambreno, J.; Mohapatra, P. Comments on “Arithmetic coding as a non-linear dynamical system”. Commun. Nonlinear Sci. Numer. Simul. 2012, 17, 4536–4543.
18. Katti, R.S.; Srinivasan, S.K.; Vosoughi, A. On the security of randomized arithmetic codes against ciphertext-only attacks. IEEE Trans. Inf. Forensics Secur. 2011, 6, 19–27.
19. Wallace, G.K. The JPEG still image compression standard. Commun. ACM 1991, 34, 30–44.
20. Taubman, D.S.; Marcellin, M.W. JPEG2000: Image Compression Fundamentals, Standards and Practice; Kluwer Academic: Norwell, MA, USA, 2002.
21. Wiegand, T.; Sullivan, G.; Bjontegaard, G.; Luthra, A. Overview of the H.264/AVC video coding standard. IEEE Trans. Circuits Syst. Video Technol. 2003, 13, 560–576.
22. Cover, T.M. Enumerative source encoding. IEEE Trans. Inf. Theory 1973, 19, 73–77.
23. Witten, I.H.; Neal, R.M.; Cleary, J.G. Arithmetic coding for data compression. Commun. ACM 1987, 30, 520–540.
24. Moffat, A.; Neal, R.M.; Witten, I.H. Arithmetic coding revisited. ACM Trans. Inf. Syst. 1998, 16, 256–294.
25. Cover, T.; Thomas, J. Elements of Information Theory; Wiley: New York, NY, USA, 2006.
26. Balasis, G.; Donner, R.V.; Potirakis, S.M.; Runge, J.; Papadimitriou, C.; Daglis, I.A.; Eftaxias, K.; Kurths, J. Statistical mechanics and information-theoretic perspectives on complexity in the Earth system. Entropy 2013, 15, 4844–4888.
27. Balasis, G.; Daglis, I.A.; Papadimitriou, C.; Kalimeri, M.; Anastasiadis, A.; Eftaxias, K. Investigating dynamical complexity in the magnetosphere using various entropy measures. J. Geophys. Res. 2009.
28. Eftaxias, K.; Athanasopoulou, L.; Balasis, G.; Kalimeri, M.; Nikolopoulos, S.; Contoyiannis, Y.; Kopanas, J.; Antonopoulos, G.; Nomicos, C. Unfolding the procedure of characterizing recorded ultra low frequency, kHz and MHz electromagnetic anomalies prior to the L’Aquila earthquake as pre-seismic ones–Part 1. Nat. Hazards Earth Syst. Sci. 2009, 9, 1953–1971.
29. Eftaxias, K.; Balasis, G.; Contoyiannis, Y.; Papadimitriou, C.; Kalimeri, M.; Athanasopoulou, L.; Nikolopoulos, S.; Kopanas, J.; Antonopoulos, G.; Nomicos, C. Unfolding the procedure of characterizing recorded ultra low frequency, kHz and MHz electromagnetic anomalies prior to the L’Aquila earthquake as pre-seismic ones–Part 2. Nat. Hazards Earth Syst. Sci. 2010, 10, 275–294.
30. Calgary Corpus. Available online: ftp://ftp.cpsc.ucalgary.ca/pub/projects/text.compression.corpus (accessed on 8 February 2014).
Figure 1. A plot of H* when (a) nA < n/2; and (b) nA ≥ n/2.
Figure 2. Encoding process of the proposed arithmetic coding scheme.
Table 1. Binary coding procedures.
Input: Original sequence
Output: Codeword sequence
Step 1: Read the source sequence into the buffer in bits.
Step 2: Find the middle symbol of the original sequence.
Step 3: Divide the original sequence into two subsequences.
Step 4: Encode the two subsequences according to their own probability models and then obtain two codeword sequences.
Step 5: Combine the two codeword sequences.
Table 2. Test results (Two symbols).
File     Size (KB)   Entropy    RT (%)    RP (%)
bib      111.261     0.985334   98.5350   98.5332
book1    768.771     0.992689   99.2691   99.2690
book2    610.856     0.993655   99.3656   99.3656
geo      102.400     0.858996   85.9014   85.9004
news     377.109     0.991326   99.1326   99.1329
obj1     21.504      0.929604   92.9688   92.9641
obj2     246.814     0.979415   97.9422   97.9418
paper1   53.161      0.992549   99.2570   99.2551
paper2   81.768      0.994734   99.4757   99.4732
paper3   46.526      0.995974   99.6002   99.5981
paper4   13.286      0.993788   99.3828   99.3828
paper5   11.954      0.989678   98.9794   98.9711
paper6   38.105      0.988634   98.8637   98.8663
pic      513.216     0.392885   39.2893   39.2893
progc    39.611      0.980897   98.0914   98.0914
progl    71.646      0.981620   98.1632   98.1632
progp    49.379      0.971435   97.1466   97.1445
trans    93.695      0.983281   98.3297   98.3286
Table 3. Test results (256 symbols).
File     Size (KB)   Entropy   RT (%)   RP (%)
bib      111.261     0.6501    65.04    65.06
book1    768.771     0.5659    56.59    56.59
book2    610.856     0.5991    59.91    59.82
geo      102.400     0.7058    70.58    70.55
news     377.109     0.6487    64.88    64.85
obj1     21.504      0.7435    74.37    72.11
obj2     246.814     0.7825    78.27    77.78
paper1   53.161      0.6229    62.35    62.18
paper2   81.768      0.5752    57.56    57.50
paper3   46.526      0.5831    58.39    58.37
paper4   13.286      0.5875    59.01    58.96
paper5   11.954      0.6170    61.98    61.62
paper6   38.105      0.6262    62.70    62.28
pic      513.216     0.1513    15.13    15.06
progc    39.611      0.6499    65.07    64.89
progl    71.646      0.5963    59.67    59.16
progp    49.379      0.6086    60.93    60.77
trans    93.695      0.6916    69.19    68.85
