Abstract
We consider the rate distortion problem with side information at the decoder posed and investigated by Wyner and Ziv. Using the side information and the encoded original data, the decoder must reconstruct the original data within an arbitrary prescribed distortion level. The rate distortion region, indicating the trade-off between the data compression rate R and a prescribed distortion level, was determined by Wyner and Ziv. In this paper, we study the error probability of decoding for rate–distortion pairs outside the rate distortion region. We evaluate the probability of decoding such that the estimation of the source outputs by the decoder has a distortion not exceeding a prescribed distortion level. We prove that, when the rate–distortion pair is outside the rate distortion region, this probability goes to zero exponentially, and we derive an explicit lower bound of this exponent function. For the Wyner–Ziv source coding problem, the strong converse coding theorem had not previously been established. We prove it as a simple corollary of our result.
1. Introduction
For single- or multi-terminal source coding systems, the converse coding theorems state that, at any data compression rate below the fundamental theoretical limit of the system, the error probability of decoding cannot go to zero when the block length n of the codes tends to infinity. On the other hand, the strong converse theorems state that, at any transmission rate exceeding the fundamental theoretical limit, the error probability of decoding must go to one when n tends to infinity. The former converse theorems are sometimes called the weak converse theorems to distinguish them from the strong converse theorems.
In this paper, we study the strong converse theorem for the rate distortion problem with side information at the decoder posed and investigated by Wyner and Ziv [1]. We call this source coding system the Wyner–Ziv source coding system (the WZ system). The WZ system is shown in Figure 1; it corresponds to the case where the switch is closed. In Figure 1, the sequence represents independent copies of a pair of dependent random variables which take values in the finite sets and , respectively. We assume that has a probability distribution denoted by . The encoder outputs a binary sequence which appears at a rate of R bits per input symbol. The decoder function observes and to output a sequence . The t-th component of for takes values in the finite reproduction alphabet . Let be an arbitrary distortion measure on . The distortion between and is defined by
Figure 1.
Source encoding with a fidelity criterion with or without side information at the decoder.
In general, we have two criteria on . One is the excess-distortion probability of decoding defined by
The other is the average distortion defined by
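In generic placeholder notation (the symbols below merely stand in for the alphabets and code functions fixed above), these two criteria take the standard forms

```latex
% Generic placeholder notation: X^n is the source block, \hat{X}^n the
% decoder output, d a per-letter distortion measure, \Delta the level.
d_n\bigl(X^n,\hat{X}^n\bigr) = \frac{1}{n}\sum_{t=1}^{n} d\bigl(X_t,\hat{X}_t\bigr),
\qquad
\mathrm{P}^{(n)}_{\mathrm{e}}(\Delta) = \Pr\Bigl\{\, d_n\bigl(X^n,\hat{X}^n\bigr) > \Delta \,\Bigr\},
\qquad
\bar{d}^{(n)} = \mathbb{E}\Bigl[\, d_n\bigl(X^n,\hat{X}^n\bigr) \Bigr],
```

i.e., the excess-distortion probability measures the chance that the per-letter distortion of a block exceeds the level, while the average distortion measures its expectation.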
A pair is -achievable for if there exists a sequence of pairs such that for any and any n with
where stands for the range of cardinality of . The rate distortion region is defined by
Furthermore, set
On the other hand, we can define a rate distortion region based on the average distortion criterion; its formal definition is the following. A pair is achievable for if there exists a sequence of pairs such that for any and any n with ,
The rate distortion region is defined by
If the switch is open, then the side information is not available to the decoder. In this case, the communication system corresponds to source coding for the discrete memoryless source (DMS) specified with . We define the rate distortion region in a manner similar to the definition of . We further define the regions and , respectively, in a manner similar to the definitions of and .
Previous works on the characterizations of , , and are shown in Table 1. Shannon [2] determined . Subsequently, Wolfowitz [3] proved that . Furthermore, he proved the strong converse theorem: if , then for any sequence of encoder and decoder functions satisfying the condition
we have
Table 1.
Previous results on the converse coding theorems for DMS, WZ. Main results in the present paper on WZ are also included.
The above strong converse theorem implies that, for any ,
Csiszár and Körner proved that, in Equation (3), the probability converges to one exponentially, and they determined the optimal exponent as a function of .
The previous works on the coding theorems for the WZ system are summarized in Table 1. The rate distortion region was determined by Wyner and Ziv [1]. Csiszár and Körner [4] proved that . On the other hand, we have had no result on the strong converse theorem for the WZ system.
Main results of this paper are summarized in Table 1. For the WZ system, we prove that if is outside the rate distortion region , then, for any sequence of encoder and decoder functions satisfying the condition in Equation (2), the quantity goes to zero exponentially, and we derive an explicit lower bound of this exponent function. This result corresponds to Theorem 3 in Table 1. As a corollary of this theorem, we obtain the strong converse result, which is stated as Corollary 2 in Table 1. This result states that we have an outer bound, with a vanishing gap, of the rate distortion region .
To derive our result, we use a new method called the recursive method. This method is a general and powerful tool for proving strong converse theorems for several coding problems in information theory. In fact, the recursive method plays an important role in deriving the exponential strong converse for the communication systems treated in [5,6,7,8].
2. Source Coding with Side Information at the Decoder
In the following argument, the operations and , respectively, stand for the expectation and the variance with respect to a probability distribution p. When p is obvious from the context, we omit the suffix p in these operations and simply write and . Let and be finite sets and be a stationary discrete memoryless source. For each , the random pair takes values in and has a probability distribution
We write n independent copies of and , respectively, as
We consider the communication system depicted in Figure 2. The data sequence is encoded to and sent to the information processing center. At the center, the decoder function observes and to output the estimation of . The encoder function is defined by
Figure 2.
Wyner–Ziv source coding system.
Let be a reproduction alphabet. The decoder function is defined by
Let be an arbitrary distortion measure on . The distortion between and is defined by
The excess-distortion probability of decoding is
where . The average distortion between and is defined by
In the previous section, we gave the formal definitions of , , , and . We can show that these rate distortion regions satisfy the following property.
Property 1.
- (a)
- The regions , , , and are closed convex sets of , where
- (b)
- has another form using the -rate distortion region, the definition of which is as follows. We set which is called the -rate distortion region. Using , can be expressed as where stands for the closure operation.
Proof of this property is given in Appendix A.
It is well known that was determined by Wyner and Ziv [1]. To describe their result we introduce auxiliary random variables U and Z, respectively, taking values in finite sets and . We assume that the joint distribution of is
The above condition is equivalent to
Define the set of probability distribution by
By definitions, it is obvious that . Set
We can show that the above functions and sets satisfy the following property:
Property 2.
- (a)
- The region is a closed convex set of .
- (b)
- For any , we have
Proof of Property 2 is given in Appendix C. In Property 2 Part (b), is regarded as another expression of . This expression is useful for deriving our main result. The rate distortion region was determined by Wyner and Ziv [1]. Their result is the following:
Theorem 1
(Wyner and Ziv [1]).
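The classical single-letter formula asserted by Theorem 1 is usually written as follows (stated here with generic placeholder symbols: X for the encoded source, Y for the side information, U an auxiliary variable, and φ a reproduction function):

```latex
% Classical Wyner--Ziv formula; U satisfies the Markov chain
% U <-> X <-> Y and \phi is a reproduction function.
R_{\mathrm{WZ}}(\Delta)
  \;=\; \min_{\substack{P_{U|X}:\; U \leftrightarrow X \leftrightarrow Y\\[2pt]
                        \phi:\; \mathbb{E}\,d(X,\phi(U,Y)) \,\le\, \Delta}}
        \bigl[\, I(X;U) - I(Y;U) \,\bigr]
  \;=\; \min \, I(X;U \mid Y),
```

where the second equality uses the Markov chain, under which I(X;U) − I(Y;U) = I(X;U|Y).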
On , Csiszár and Körner [4] obtained the following result.
Theorem 2
(Csiszár and Körner [4]).
We are interested in the asymptotic behavior of the error probability of decoding, which tends to one as for . To examine the rate of convergence, we define the following quantity. Set
By time sharing, we have that
Choosing and in Equation (7), we obtain the following subadditivity property on :
which together with Fekete’s lemma yields that exists and satisfies the following:
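Fekete's lemma, invoked above, is the standard subadditivity lemma:

```latex
% Fekete's subadditivity lemma (standard statement).
\textbf{Lemma (Fekete).}\quad
\text{If } a_{n+m} \le a_n + a_m \text{ for all } n, m \ge 1, \text{ then }
\lim_{n\to\infty} \frac{a_n}{n} \;=\; \inf_{n \ge 1} \frac{a_n}{n}.
```

Applied to the subadditive sequence obtained from Equation (7), it guarantees that the normalized exponent converges to its infimum.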
Set
The exponent function is a convex function of . In fact, from Equation (7), we have that for any
where . The region is also a closed convex set. Our main aim is to find an explicit characterization of . In this paper, we derive an explicit outer bound of whose section by the plane coincides with .
3. Main Results
In this section, we state our main results. We first explain that the rate distortion region can be expressed with two families of supporting hyperplanes. To describe this result, we define two sets of probability distributions on by
Let . We set
Then, we have the following property:
Property 3.
For any , we have
Proof of Property 3 is given in Appendix D. For and , define
Furthermore, set
We next define a function serving as a lower bound of . For each , define
Furthermore, set
We can show that the above functions satisfy the following properties:
Property 4.
- (a)
- The cardinality bound appearing in is sufficient to describe the quantity . Furthermore, the cardinality bound in is sufficient to describe the quantity .
- (b)
- For any , we have
- (c)
- Fix any and . For , exists and is nonnegative. For , define a probability distribution byThen, for , is twice differentiable. Furthermore, for , we haveThe second equality implies that is a concave function of .
- (d)
- For , defineand setThen, we have . Furthermore, for any ,
- (e)
- For every , the condition implieswhere g is the inverse function of .
Proof of Property 4 Part (a) is given in Appendix B. Proof of Property 4 Part (b) is given in Appendix E. Proofs of Property 4 Parts (c), (d), and (e) are given in Appendix F.
Our main result is the following:
Theorem 3.
For any , any , and for any satisfying we have
It follows from Theorem 3 and Property 4 Part (d) that if is outside the rate distortion region, then the error probability of decoding goes to one exponentially and its exponent is not below .
It immediately follows from Theorem 3 that we have the following corollary.
Corollary 1.
For any and any , we have
Furthermore, for any , we have
Proof of Theorem 3 will be given in the next section. The exponent function in the case of can be obtained as a corollary of the result of Oohama and Han [9] for the separate source coding problem of correlated sources [10]. The technique used by them is the method of types [4], which is not useful for proving Theorem 3. In fact, when we use this method, it is very hard to extract a condition related to the Markov chain condition , which the auxiliary random variable must satisfy when is on the boundary of the set . Some novel techniques based on the information spectrum method introduced by Han [11] are necessary to prove this theorem.
From Theorem 3 and Property 4 Part (e), we can obtain an explicit outer bound of with an asymptotically vanishing deviation from . The strong converse theorem immediately follows from this corollary. To describe this outer bound, for , we set
which serves as an outer bound of . For each fixed , we define by
Step (a) follows from . Since as , we can take the smallest positive integer such that for . From Theorem 3 and Property 4 Part (e), we have the following corollary.
Corollary 2.
For each fixed ε, we choose the above positive integer . Then, for any , we have
The above result together with
yields that for each fixed , we have
Proof of this corollary will be given in the next section.
The direct part of the coding theorem, i.e., the inclusion ⊆ , was established by Csiszár and Körner [4]. They proved a weak converse theorem to obtain the inclusion . Until now, we have had no result on the strong converse theorem. The above corollary, stating the strong converse theorem for the Wyner–Ziv source coding problem, implies that a long-standing open problem dating back to Csiszár and Körner [4] has been resolved.
4. Proof of the Main Results
In this section, we prove Theorem 3 and Corollary 2. We first present a lemma which upper bounds the correct probability of decoding by the information spectrum quantities. We set
It is obvious that
Then, we have the following:
Lemma 1.
For any and for any , satisfying we have
The probability distribution and stochastic matrices appearing in the right members of Equation (18) have the property that we can select them arbitrarily. In Equation (14), we can choose any probability distribution on . In Equation (15), we can choose any stochastic matrix . In Equation (16), we can choose any stochastic matrix . In Equation (17), we can choose any stochastic matrix .
Proof of this lemma is given in Appendix G.
Lemma 2.
Suppose that, for each , the joint distribution of the random vector is a marginal distribution of . Then, for , we have the following Markov chain:
or equivalently that .
Proof of this lemma is given in Appendix H. For , set . Let be a random vector taking values in × ×. From Lemmas 1 and 2, we have the following:
Lemma 3.
For any and for any , satisfying we have the following:
where for each , the following probability distribution and stochastic matrices:
appearing in the first term in the right members of Equation (21) have the property that we can choose their values arbitrarily.
Proof.
On the probability distributions appearing in the right members of Equation (18), we take the following choices. In Equation (14), we choose so that
In Equation (15), we choose so that
In Equation (16), we choose so that
In Equation (16), we note that
Step (a) follows from Lemma 2. In Equation (17), we choose so that
From Lemma 1 and Equations (22)–(25), we have the bound of Equation (21) in Lemma 3. ☐
To evaluate an upper bound of Equation (21) in Lemma 3, we use the following lemma, which is well known as Cramér's bound in the theory of large deviations.
Lemma 4.
For any real valued random variable A and any , we have
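In its usual exponential form, this bound reads Pr{A ≥ a} ≤ exp(−(θa − log E[e^{θA}])) for any θ ≥ 0. The following is a small numerical sanity check of that form (illustrative only, with a Gaussian A and arbitrary parameter choices; it is not part of the proof):

```python
import math
import random

def chernoff_bound(samples, a, theta):
    """Empirical tail probability vs. the Cramer/Chernoff bound
    Pr{A >= a} <= exp(-(theta*a - log E[exp(theta*A)])), theta >= 0."""
    n = len(samples)
    tail = sum(1 for x in samples if x >= a) / n
    mgf = sum(math.exp(theta * x) for x in samples) / n  # empirical E[exp(theta*A)]
    bound = math.exp(-(theta * a - math.log(mgf)))
    return tail, bound

random.seed(0)
samples = [random.gauss(0.0, 1.0) for _ in range(100_000)]
tail, bound = chernoff_bound(samples, a=2.0, theta=2.0)
# For standard Gaussian A: log E[exp(2A)] = 2, so the bound is about exp(-2),
# while the true tail Pr{A >= 2} is roughly 0.023; the inequality holds.
assert tail <= bound
```

The proof of Proposition 1 below applies exactly this step, then optimizes the free parameter θ.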
Here, we define a quantity which serves as an exponential upper bound of . For each , let be a set of all
Set
Let be a set of all probability distributions on having the form:
For simplicity of notation, we use the notation for . We assume that is a marginal distribution of . For , we simply write . For and , we define
where, for each , the following probability distribution and stochastic matrices:
appearing in the definition of are chosen so that they are induced by the joint distribution .
By Lemmas 3 and 4, we have the following proposition:
Proposition 1.
For any , any , and any satisfying we have
Proof.
When , the bound we wish to prove is obvious. In the following argument, we assume that . We define five random variables by
By Lemma 3, for any satisfying we have
where we set
Applying Lemma 4 to the first term in the right member of Equation (26), we have
We choose so that
Solving Equation (28) with respect to , we have
For this choice of and Equation (27), we have
completing the proof. ☐
Set
By Proposition 1, we have the following corollary.
Corollary 3.
For any , for any , and for any satisfying we have
We shall call the communication potential. The above corollary implies that the analysis of leads to the establishment of a strong converse theorem for the Wyner–Ziv source coding problem. In the following argument, we derive an explicit lower bound of . We use a new technique that we call the recursive method. The recursive method is a powerful tool for deriving a single-letterized exponent function at rates below the rate distortion function. This method is also applicable to proving the exponential strong converse theorems for other problems in network information theory [5,6,7]. Set
For each , define a function of by
By definition, we have
For each , we define the conditional probability distribution
by
where
are constants for normalization. For , define
where we define for . Then, we have the following lemma:
Lemma 5.
For each , and for any , we have
Furthermore, we have
The equality in Equation (34) in Lemma 5 is obvious from Equations (29)–(31). Proofs of Equations (32) and (33) in this lemma are given in Appendix I. Next, we define a probability distribution of the random pair taking values in by
where is a constant for normalization given by
For , define
where we define . Set
Then, we have the following:
Lemma 6.
Proof.
By the equality Equation (34) in Lemma 5, we have
Step (a) follows from the definition in Equation (36) of . We next prove Equation (39) in Lemma 6. Multiplying both sides of Equation (35) by , we have
Taking summations of Equations (41) and (42) with respect to , we have
Step (a) follows from Equation (33) in Lemma 5. Step (b) follows from the definition in Equation (37) of . ☐
The following proposition is a mathematical core to prove our main result.
Proposition 2.
For , we choose the parameter α such that
Then, for any and for any , we have
Proof.
Set
Then, by Lemma 6, we have
For each , we recursively choose so that and choose , , , and appearing in
such that they are the distributions induced by . Then, for each ⋯, n, we have the following chain of inequalities:
Step (a) follows from Hölder’s inequality and the following identity:
Step (b) follows from Equation (43). Step (c) follows from the definition of . Step (d) follows from the fact that, by Property 4 Part (a), the bound is sufficient to describe . Hence, we have the following:
Step (a) follows from Equation (38) in Lemma 6. Step (b) follows from Equation (45). Since Equation (46) holds for any and any , we have
Thus, we have Equation (44) in Proposition 2. ☐
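Hölder's inequality, used in Step (a) of the chain above (and again in Appendix E), states that ∑_i |x_i y_i| ≤ (∑_i |x_i|^p)^{1/p} (∑_i |y_i|^q)^{1/q} whenever p, q > 1 and 1/p + 1/q = 1. A minimal numerical illustration (the vectors and exponents below are arbitrary choices):

```python
def holder_lhs_rhs(x, y, p, q):
    """Return both sides of Holder's inequality for conjugate exponents p, q."""
    assert abs(1.0 / p + 1.0 / q - 1.0) < 1e-12  # conjugacy: 1/p + 1/q = 1
    lhs = sum(abs(a * b) for a, b in zip(x, y))
    rhs = (sum(abs(a) ** p for a in x) ** (1.0 / p)
           * sum(abs(b) ** q for b in y) ** (1.0 / q))
    return lhs, rhs

x = [0.3, -1.2, 2.0, 0.7]
y = [1.5, 0.4, -0.9, 2.2]
lhs, rhs = holder_lhs_rhs(x, y, p=3.0, q=1.5)
assert lhs <= rhs + 1e-12  # the inequality holds for any such vectors
```

With p = q = 2, this reduces to the Cauchy–Schwarz inequality used in Appendix F.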
Proof of Theorem 3:
For , set
Then, we have the following:
Step (a) follows from Corollary 3. Step (b) follows from Proposition 2 and Equation (47). Since the above bound holds for any positive , and , we have
Thus, Equation (10) in Theorem 3 is proved. ☐
Proof of Corollary 2:
By the definition of , we have that for . We assume that for . Then, there exists a sequence such that, for , we have
Then, by Theorem 3, we have
for any . We claim that for , we have ∈ . To prove this claim, we suppose that does not belong to for some . Then, we have the following chain of inequalities:
Step (a) follows from and Property 4 Part (e). Step (b) follows from Equation (48). The bound of Equation (50) contradicts Equation (49). Hence, we have ∈ , or equivalently,
for , which implies that for ,
completing the proof. ☐
5. Conclusions
For the WZ system, we have derived an explicit lower bound of the optimal exponent function on the correct probability of decoding for pairs outside the rate distortion region. We have described this result in Theorem 3. The determination of the optimal exponent remains unresolved. This problem is left for future work.
In this paper, we have treated the case where and are finite sets. Extension of Theorem 3 to the case where and are arbitrary sets is also our future work. Wyner [12] investigated the characterization of the rate distortion region in the case where and are general sets and is a correlated stationary memoryless source. This work may provide a good tool to investigate the second future work.
Acknowledgments
I am very grateful to Shun Watanabe for his helpful comments.
Conflicts of Interest
The author declares no conflict of interest.
Appendix A. Properties of the Rate Distortion Regions
In this Appendix, we prove Property 1. Property 1 Part (a) can easily be proved by the definitions of the rate distortion regions. We omit the proofs of this part. In the following argument, we prove Part (b).
Proof of Property 1 Part (b):
We set
By the definitions of and , we have that for . Hence, we have that
We next assume that . Set
Then, by the definitions of and , we have that, for any , there exists such that for any , which implies that
Here, we assume that there exists a pair belonging to such that
Since the set in the right hand side of Equation (A3) is a closed set, we have
for some small . Note that . Then, Equation (A4) contradicts Equation (A2). Thus, we have
Note here that is a closed set. Then, from Equation (A5), we conclude that
completing the proof. ☐
Appendix B. Cardinality Bound on Auxiliary Random Variables
Set
Since , it is obvious that
We first prove the following lemma.
Lemma A1.
To prove Lemma A1, we set
By definition, it is obvious that
Proof of Lemma A1:
Since we have Equation (A6), to prove Lemma A1 it suffices to show . We bound the cardinality of U to show that the bound is sufficient to describe . We first observe that, by Equation (A8), we have
where
is a conditional probability distribution that attains the following optimization problem:
Observe that
where
For each , is a continuous function of . Then, by the support lemma,
is sufficient to express values of Equation (A9) and one value of Equation (A10). ☐
Next, we give a proof of Property 4 Part (a).
Proof of Property 4 Part (a):
We first bound the cardinality of U in to show that the bound is sufficient to describe . Observe that
where
The value of included in must be preserved under the reduction of . For each , is a continuous function of . Then, by the support lemma,
is sufficient to express values of Equation (A11) and one value of Equation (A12). We next bound the cardinality of U in to show that the bound is sufficient to describe . Observe that
where
The value of included in must be preserved under the reduction of . For each , is a continuous function of . Then, by the support lemma,
is sufficient to express values of Equation (A13) and one value of Equation (A14). ☐
Appendix C. Proof of Property 2
In this Appendix, we prove Property 2. Property 2 Part (a) is a well known property. Proof of this property is omitted here. We only prove Property 2 Part (b).
Proof of Property 2 Part (b):
Since ⊆ , it is obvious that ⊆ . Hence, it suffices to prove that ⊆ . We assume that . Then, there exists such that
On the second inequality in Equation (A15), we have the following:
where is one of the minimizers of the function
Define by . We further define by . It is obvious that
Furthermore, from Equation (A16), we have
From Equations (A17) and (A18), we have . Thus, ⊆ is proved. ☐
Appendix D. Proof of Property 3
In this Appendix, we prove Property 3. From Property 2 Part (a), we have the following lemma.
Lemma A2.
Suppose that does not belong to . Then, there exist and such that for any we have
Proof of this lemma is omitted here. Lemma A2 is equivalent to the fact that, if the region is a convex set, then for any point outside the region there exists a line which separates the point from the region . Lemma A2 will be used to prove Equation (8) in Property 3.
Proof of Equation (8) in Property 3:
We first recall the following definitions of and :
We prove . We assume that . Then, by Lemma A2, there exist and such that for any , we have
Hence, we have
Step (a) follows from the definition of . The inequality in Equation (A19) implies that . Thus, is concluded. We next prove ⊆ . We assume that . Then, there exists such that
Then, for each and for , we have the following chain of inequalities:
Step (a) follows from Equation (A20). Step (b) follows from Lemma A1. Hence, we have ⊆ . ☐
Appendix E. Proof of Property 4 Part (b)
In this Appendix, we prove Property 4 Part (b). We have the following lemma.
Lemma A3.
For any μ, , and any , there exists such that
This implies that for any μ, , , we have
Proof.
Since Equation (A22) is obvious from Equation (A21), we only prove Equation (A21). We consider the case where satisfies , , and . In this case, we have
For each , we choose so that and . Then, for any satisfying , , and , we have the following chain of inequalities:
Step (a) follows from Hölder’s inequality. Step (b) follows from Equation (A23) and Hölder’s inequality. ☐
Proof of Property 4 Part (b):
We have the following chain of inequalities:
Step (a) follows from Lemma A3. Step (b) follows from
completing the proof. ☐
Appendix F. Proof of Property 4 Parts (c), (d), and (e)
In this Appendix, we prove Property 4 Parts (c), (d), and (e). We first prove Parts (c) and (d).
Proof of Property 4 Parts (c) and (d):
We first prove Part (c). We first show that for each and , exists for . On a lower bound of , for , we have the following:
Step (a) follows from that, for ,
It is obvious that the lower bound of Equation (A26) of takes some positive value. This implies that exists for . We next show that for . On upper bounds of for , we have the following chain of inequalities:
Step (a) follows from Equation (A25) and . Step (b) follows from and Hölder's inequality. We next prove that, for each and , is twice differentiable for . For simplicity of notation, set
Then, we have
The quantity has the following form:
By simple computations, we have
On upper bound of for , we have the following chain of inequalities:
Step (a) follows from Equation (A30). Step (b) follows from Equation (A29). Step (c) follows from the Cauchy–Schwarz inequality and Equation (A28). Step (d) follows from the fact that for . Note that exists for . Furthermore, we have the following:
Hence, by Equation (A31), exists for . We next prove Part (d). We derive the lower bound Equation (9) of . Fix any and any . By the Taylor expansion of with respect to around , we have that, for any and for some ,
Step (a) follows from ,
and the definition of . Let be a pair which attains .
By this definition, we have that
and that, for any
On lower bounds of , we have the following chain of inequalities:
Step (a) follows from Equation (A33). Step (b) follows from Equation (A32). Step (c) follows from Equation (A34). Step (d) follows from the definition of . ☐
We next prove Part (e). For the proof we use the following lemma.
Lemma A4.
When , the maximum of
for is attained by the positive satisfying
Let be the inverse function of for . Then, the condition of Equation (A35) is equivalent to . The maximum is given by
This lemma can be proved by an elementary computation. We omit the details.
Proof of Property 4 Part (e):
By the hyperplane expression of stated in Property 3, when , we have
or equivalently
for some . Then, for each positive , we have the following chain of inequalities:
Step (a) follows from Property 4 Part (d). Step (b) follows from Equation (A36). Step (c) follows from Lemma A4. ☐
Appendix G. Proof of Lemma 1
To prove Lemma 1, we prepare a lemma. Set
Then, we have the following lemma:
Lemma A5.
Proof.
We first prove the first inequality. We have the following chain of inequalities:
Step (a) follows from the definition of . We next prove the second inequality. We have the following chain of inequalities:
Step (a) follows from the Markov chain . Step (b) follows from the definition of . On the third inequality, we have the following chain of inequalities:
Step (a) follows from the Markov chain . Step (b) follows from the definition of . We finally prove the fourth inequality. We have the following chain of inequalities:
☐
Proof of Lemma 1:
We set
Set . By definition, we have
Then, for any , satisfying
we have
Hence, it suffices to show
to prove Lemma 1. By definition, we have
Then, we have the following:
Step (a) follows from Lemma A5. ☐
Appendix H. Proof of Lemma 2
In this Appendix, we prove Lemma 2.
Proof of Lemma 2:
We have the following chain of inequalities:
Step (a) follows from the fact that is a function of . Step (b) follows from the memoryless property of the information source . ☐
Appendix I. Proof of Lemma 5
In this Appendix, we prove Equations (32) and (33) in Lemma 5.
Proofs of Equations (32) and (33) in Lemma 5:
By the definition of , for , we have
Then, we have the following chain of equalities:
Steps (a) and (b) follow from Equation (A39). From Equation (A40), we have
Taking summations of Equations (A41) and (A42) with respect to , we obtain
completing the proof. ☐
References
- Wyner, A.D.; Ziv, J. The rate-distortion function for source coding with side information at the decoder. IEEE Trans. Inf. Theory 1976, 22, 1–10.
- Shannon, C.E. Coding theorems for a discrete source with a fidelity criterion. IRE Nat. Conv. Rec. 1959, 7, 142–163.
- Wolfowitz, J. Approximation with a fidelity criterion. In Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability; University of California Press: Berkeley, CA, USA, 1966; Volume 1, pp. 566–573.
- Csiszár, I.; Körner, J. Information Theory: Coding Theorems for Discrete Memoryless Systems; Academic Press: London, UK, 1981.
- Oohama, Y. Exponent function for one helper source coding problem at rates outside the rate region. In Proceedings of the 2015 IEEE International Symposium on Information Theory, Hong Kong, China, 14–19 June 2015; pp. 1575–1579.
- Oohama, Y. Strong converse exponent for degraded broadcast channels at rates outside the capacity region. In Proceedings of the 2015 IEEE International Symposium on Information Theory, Hong Kong, China, 14–19 June 2015; pp. 939–943.
- Oohama, Y. Strong converse theorems for degraded broadcast channels with feedback. In Proceedings of the 2015 IEEE International Symposium on Information Theory, Hong Kong, China, 14–19 June 2015; pp. 2510–2514.
- Oohama, Y. Exponent function for asymmetric broadcast channels at rates outside the capacity region. In Proceedings of the 2016 IEEE International Symposium on Information Theory and Its Applications, Monterey, CA, USA, 30 October–2 November 2016; pp. 568–572.
- Oohama, Y.; Han, T.S. Universal coding for the Slepian–Wolf data compression system and the strong converse theorem. IEEE Trans. Inf. Theory 1994, 40, 1908–1919.
- Slepian, D.; Wolf, J.K. Noiseless coding of correlated information sources. IEEE Trans. Inf. Theory 1973, 19, 471–480.
- Han, T.S. Information-Spectrum Methods in Information Theory; Springer: Berlin, Germany, 2002. (The Japanese edition was published by Baifukan, Tokyo, 1998.)
- Wyner, A.D. The rate-distortion function for source coding with side information at the decoder-II: General sources. Inf. Control 1978, 38, 60–80.
© 2018 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).