Universality of Logarithmic Loss in Successive Refinement

We establish a universal property of logarithmic loss in the successive refinement problem. If the first decoder operates under logarithmic loss, we show that any discrete memoryless source is successively refinable under an arbitrary distortion criterion for the second decoder. Based on this result, we propose a low-complexity lossy compression algorithm for any discrete memoryless source.


Introduction
In the lossy compression problem, logarithmic loss is a criterion allowing a "soft" reconstruction of the source, a departure from the classical setting of a deterministic reconstruction. In this setting, the reconstruction alphabet is the set of probability distributions over the source alphabet. More precisely, let x be a source symbol from the source alphabet X, and q(·) be the reconstruction symbol, which is a probability measure on X. Then the logarithmic loss is given by ℓ(x, q) = log(1/q(x)).
Clearly, if the reconstruction q(·) assigns a small probability to the true source symbol x, the loss is large.
Although logarithmic loss plays a crucial role in the theory of learning and prediction, relatively little work has been done in the context of lossy compression; notable exceptions are the two-encoder multi-terminal source coding problem under logarithmic loss [1,2] and recent work on the single-shot approach to lossy source coding under logarithmic loss [3]. Note that lossy compression under logarithmic loss is closely related to the information bottleneck method [4–6]. In this paper, we focus on universal properties of logarithmic loss in the context of successive refinement.
Successive refinement is a network lossy compression problem where one encoder wishes to describe the source to two decoders [7,8]. Instead of having two separate coding schemes, the successive refinement encoder designs a code for the decoder with a weaker link, and sends extra information to the second decoder on top of the message of the first decoder. In general, successive refinement coding cannot do as well as two separate encoding schemes optimized for the respective decoders. However, if we can achieve the point-to-point optimum rates using successive refinement coding, we say the source is successively refinable.
Although necessary and sufficient conditions for successive refinability are known [7,8], proving (or disproving) successive refinability of a source is not a simple task. Equitz and Cover [7] found a discrete source that is not successively refinable using the Gerrish problem [9]. Chow and Berger found a continuous source that is not successively refinable using a Gaussian mixture [10]. Lastras and Berger showed that all sources are nearly successively refinable [11]. However, still only a few sources are known to be successively refinable. In this paper, we show that any discrete memoryless source is successively refinable as long as the weaker link employs logarithmic loss, regardless of the distortion criterion used for the stronger link.
In the second part of the paper, we show that this result is useful for designing a lossy compression algorithm with low complexity. Recently, the idea of successive refinement has been applied to reduce the complexity of point-to-point lossy compression algorithms. Venkataramanan et al. proposed a new lossy compression scheme for Gaussian sources where the codewords are linear combinations of sub-codewords [12]. No and Weissman also proposed a low-complexity lossy compression algorithm for Gaussian sources using extreme value theory [13]. Both algorithms describe the source successively, and thereby achieve low complexity. Roughly speaking, a successive refinement algorithm allows a smaller codebook. For example, the naive random coding scheme has a codebook of size e^{nR} when the blocklength is n and the rate is R. On the other hand, if we can design a successive refinement scheme in which the weaker link uses half the rate, then each of the two codebooks has size e^{nR/2}, and the overall codebook size is 2e^{nR/2}. The above idea can be generalized to successive refinement schemes with L decoders [12,14]. The universal property of logarithmic loss in successive refinement implies that, for any point-to-point lossy compression of a discrete memoryless source, we can insert a virtual intermediate decoder (weaker link) under logarithmic loss without losing any rate at the actual decoder (stronger link). As discussed above, this property allows us to design a low-complexity lossy compression algorithm for any discrete source and distortion pair. Note that previous works only focused on specific source and distortion pairs such as the binary source with Hamming distortion.
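The codebook-size arithmetic above is easy to check numerically. The following sketch (with hypothetical blocklength and rate values chosen only for illustration) compares the naive codebook size e^{nR} against an L-stage successive scheme with L sub-codebooks of size e^{nR/L} each:

```python
import math

def naive_size(n, R):
    """Number of codewords in a single random codebook of rate R (nats/symbol)."""
    return math.exp(n * R)

def successive_size(n, R, L):
    """Total codewords when the rate is split evenly over L stages."""
    return L * math.exp(n * R / L)

n, R = 40, 1.0  # hypothetical blocklength (symbols) and rate (nats/symbol)
for L in (1, 2, 4):
    print(L, successive_size(n, R, L))
# L = 2 already shrinks the codebook from e^40 to 2*e^20,
# roughly a 10^8-fold reduction in storage and search effort.
```

The same quantity also bounds the encoder's search effort, since each stage searches only its own sub-codebook.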
The remainder of the paper is organized as follows. In Section 2, we revisit some of the known results pertaining to logarithmic loss. Section 3 is dedicated to successive refinement under logarithmic loss in the weaker link. In Section 4, we propose a low complexity compression scheme that can be applied to any discrete lossy compression problem. Finally, we conclude in Section 5.
Notation: X^n denotes an n-dimensional random vector (X_1, X_2, …, X_n), while x^n denotes a specific realization of X^n. X denotes the support of the random variable X. Also, Q denotes a random probability mass function, while q denotes a specific probability mass function. We use the natural logarithm and measure information in nats rather than bits.

Successive Refinability
In this section, we review the successive refinement problem with two decoders. Let the source X^n be an i.i.d. random vector with distribution p_X. The encoder describes X^n to two decoders by sending a pair of messages (m_1, m_2), where 1 ≤ m_i ≤ M_i for i ∈ {1, 2}. The first decoder reconstructs X̂^n_1(m_1) ∈ X̂^n_1 based only on the first message m_1. The second decoder reconstructs X̂^n_2(m_1, m_2) ∈ X̂^n_2 based on both m_1 and m_2. The setting is described in Figure 1. Let d_i(·,·) : X × X̂_i → [0, ∞) be the distortion measure for the i-th decoder. The rates (R_1, R_2) of the code are simply defined as R_i = (1/n) log M_i for i ∈ {1, 2}. An (n, R_1, R_2, D_1, D_2, ε)-successive refinement code is a coding scheme with blocklength n and excess distortion probability ε, where the rates are (R_1, R_2) and the target distortions are (D_1, D_2). Since we have two decoders, the excess distortion probability is defined by Pr[d_i(X^n, X̂^n_i) > D_i for some i] ≤ ε.
For some special cases, both decoders can achieve the point-to-point optimum rates simultaneously.
If the rate pair (R_1, R_2) satisfying R_1 = R_1(D_1) and R_1 + R_2 = R_2(D_2), where R_i(·) denotes the point-to-point rate-distortion function under d_i, is achievable, then we say the source is successively refinable at (D_1, D_2). If the source is successively refinable at (D_1, D_2) for all D_1, D_2, then we say the source is successively refinable.
The following theorem provides a necessary and sufficient condition for a source to be successively refinable.

Theorem 1 ([7,8]).
A source is successively refinable at (D_1, D_2) if and only if there exists a conditional distribution p_{X̂_1,X̂_2|X} such that X − X̂_2 − X̂_1 forms a Markov chain,

E[d_1(X, X̂_1)] ≤ D_1,  E[d_2(X, X̂_2)] ≤ D_2,

and

I(X; X̂_1) = R_1(D_1),  I(X; X̂_2) = R_2(D_2).

Note that the above result on successive refinability can easily be generalized to the case of k decoders.

Logarithmic Loss
Let X be a set of discrete source symbols (|X| < ∞), and let M(X) be the set of probability measures on X. The logarithmic loss ℓ : X × M(X) → [0, ∞] is defined by ℓ(x, q) = log(1/q(x)) for x ∈ X and q ∈ M(X). Logarithmic loss between n-tuples is defined by ℓ(x^n, q^n) = (1/n) Σ_{i=1}^{n} ℓ(x_i, q_i), i.e., the symbol-by-symbol extension of the single-letter loss.
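As a concrete illustration, here is a minimal implementation of the single-letter loss and its symbol-by-symbol extension; representing pmfs as plain dictionaries is an assumption of this sketch:

```python
import math

def log_loss(x, q):
    """Single-letter logarithmic loss: log(1/q(x)), in nats."""
    return math.log(1.0 / q[x])

def log_loss_n(xs, qs):
    """n-tuple extension: (1/n) times the sum of per-symbol losses."""
    return sum(log_loss(x, q) for x, q in zip(xs, qs)) / len(xs)

# A reconstruction that puts little mass on the true symbol incurs a large loss.
q = {0: 0.25, 1: 0.75}
print(log_loss(0, q))              # log 4 ≈ 1.386 nats
print(log_loss_n([1, 1], [q, q]))  # log(4/3) ≈ 0.288 nats
```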
Let X^n be a discrete memoryless source with distribution p_X. Consider the lossy compression problem under logarithmic loss where the reconstruction alphabet is M(X). The rate-distortion function is given by R(D) = H(X) − D for 0 ≤ D ≤ H(X). The following lemma provides a property of the conditional distribution achieving the rate-distortion function.

Lemma 1. The conditional distribution p_{Q|X} achieving the rate-distortion function satisfies

q = p_{X|Q}(·|q)   (1)

for p_Q-almost every q ∈ M(X), and

H(X|Q) = D.   (2)

Conversely, if p_{Q|X} satisfies (1) and (2), then it achieves the rate-distortion function, i.e., I(X; Q) = H(X) − D and E[ℓ(X, Q)] = D.

The key idea is that we can replace Q by p_{X|Q}(·|Q) and obtain lower rate and distortion, i.e.,

E[ℓ(X, Q)] = H(X|Q) + E[D(p_{X|Q}(·|Q) ‖ Q)] ≥ H(X|Q),

which directly implies (1). Interestingly, since the rate-distortion function in this case is a straight line, a simple time-sharing scheme achieves the optimal rate-distortion trade-off. More precisely, the encoder losslessly compresses only the first (H(X) − D)/H(X) fraction of the source sequence components. Then, the decoder perfectly recovers those losslessly compressed components and uses p_X as its reconstruction for the remaining part. The resulting scheme obviously achieves distortion D with rate H(X) − D: the lossless part costs H(X) nats per symbol over a (H(X) − D)/H(X) fraction of the sequence, while each remaining symbol incurs expected loss E[log(1/p_X(X))] = H(X).
Furthermore, this simple scheme directly implies successive refinability of the source under logarithmic loss. For D_1 > D_2, suppose the encoder losslessly compresses the first (H(X) − D_2)/H(X) fraction of the source. Then the first decoder can perfectly reconstruct the first (H(X) − D_1)/H(X) fraction of the source from a message of rate H(X) − D_1, achieving distortion D_1, while the second decoder achieves distortion D_2 with rate H(X) − D_2. Since both decoders achieve their best rate-distortion pairs, it follows that any discrete memoryless source under logarithmic loss is successively refinable.
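The time-sharing argument above can be verified by direct computation. The sketch below (with an illustrative pmf and arbitrarily chosen distortion targets, not taken from the paper) checks that the scheme sits exactly on R(D) = H(X) − D for both targets:

```python
import math

def entropy(p):
    """Entropy of a pmf given as a list of probabilities, in nats."""
    return -sum(v * math.log(v) for v in p if v > 0)

def time_share(p, D):
    """Rate/distortion of losslessly describing a (H-D)/H fraction of symbols
    and outputting p_X (expected log loss H) for the rest."""
    H = entropy(p)
    frac = (H - D) / H
    return frac * H, (1 - frac) * H  # (rate, expected log loss)

p_X = [0.5, 0.25, 0.25]
H = entropy(p_X)                 # ≈ 1.04 nats
D1, D2 = 0.8, 0.3                # D2 < D1 < H(X)
for D in (D1, D2):
    rate, dist = time_share(p_X, D)
    assert math.isclose(rate, H - D) and math.isclose(dist, D)
print("both decoders sit on R(D) = H(X) - D")
```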
We can also formally prove successive refinability of a discrete memoryless source under logarithmic loss using Theorem 1, i.e., by finding random probability mass functions Q_1, Q_2 ∈ M(X) that satisfy (1) for each decoder as well as

H(X|Q_1) = D_1,   (3)
H(X|Q_2) = D_2,   (4)

where X − Q_2 − Q_1 forms a Markov chain.
Let e_x be the deterministic probability mass function (pmf) in M(X) that has a unit mass at x; in other words, e_x(a) = 1 if a = x and e_x(a) = 0 otherwise. Then, consider random pmfs Q_1, Q_2 ∈ {e_x : x ∈ X} ∪ {p_X}. Since the supports of Q_1 and Q_2 are finite, we can define the following conditional pmfs, with λ_i = (H(X) − D_i)/H(X):

p_{Q_2|X}(q_2|x) = λ_2 if q_2 = e_x, and 1 − λ_2 if q_2 = p_X;
p_{Q_1|Q_2}(q_1|q_2) = λ_1/λ_2 if q_1 = q_2 = e_x for some x, 1 − λ_1/λ_2 if q_1 = p_X and q_2 = e_x for some x, and 1 if q_1 = q_2 = p_X.

It is not hard to show that the above conditional pmfs satisfy (3) and (4).
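The conditions H(X|Q_i) = D_i can be checked numerically for this construction. The sketch below builds the joint distributions of (X, Q_2) and (X, Q_1) for an example binary source (distortion targets chosen arbitrarily) and verifies both conditional entropies:

```python
import math
from collections import defaultdict

def cond_entropy(joint):
    """H(X|Q) from a dict {(x, q): prob}, in nats."""
    pq = defaultdict(float)
    for (x, q), pr in joint.items():
        pq[q] += pr
    return -sum(pr * math.log(pr / pq[q]) for (x, q), pr in joint.items() if pr > 0)

p_X = {0: 0.5, 1: 0.5}
H = math.log(2)
D1, D2 = 0.6, 0.2                       # D2 < D1 < H(X) ≈ 0.693 nats
lam1, lam2 = (H - D1) / H, (H - D2) / H

joint2, joint1 = defaultdict(float), defaultdict(float)
for x, px in p_X.items():
    joint2[(x, ('e', x))] += px * lam2  # Q2 = e_X with probability lam2
    joint2[(x, 'p')] += px * (1 - lam2) # Q2 = p_X otherwise
    joint1[(x, ('e', x))] += px * lam1  # Q1 = e_X only if Q2 was, w.p. lam1/lam2
    joint1[(x, 'p')] += px * (1 - lam1)

assert math.isclose(cond_entropy(joint2), D2)
assert math.isclose(cond_entropy(joint1), D1)
print("H(X|Q1) = D1 and H(X|Q2) = D2")
```

Note that Pr(Q_1 = e_X) = λ_2 · (λ_1/λ_2) = λ_1, which is what makes the Markov chain X − Q_2 − Q_1 compatible with (3).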

Main Results
Consider the successive refinement problem with a discrete memoryless source as described in Section 2.1. Specifically, we are interested in the case where the first decoder operates under logarithmic loss and the second decoder operates under some arbitrary distortion measure d(·,·). We only make the following benign assumption: if d(x, x̂_1) = d(x, x̂_2) for all x, then x̂_1 = x̂_2. This is not a hard restriction, since if x̂_1 and x̂_2 have the same distortion values for all x, then there is no reason to keep both reconstruction symbols.
The following theorem shows that any discrete memoryless source is successively refinable as long as the weaker link is under logarithmic loss. This implies a universal property of logarithmic loss in the context of successive refinement.

Theorem 2. Let the source be an arbitrary discrete memoryless source. Suppose the distortion criterion of the first decoder is logarithmic loss, while that of the second decoder is an arbitrary distortion criterion d : X × X̂ → [0, ∞]. Then the source is successively refinable.
Proof. By Theorem 1 and Lemma 1, the source is successively refinable at (D_1, D_2) if and only if there exists a Markov chain X − X̂ − Q such that

E[d(X, X̂)] ≤ D_2,  I(X; X̂) = R_2(D_2),  and  H(X|Q) = D_1.

Let p_{X̂|X} be the conditional distribution for the second decoder that achieves the informational rate-distortion function R_2(D_2), i.e., I(X; X̂) = R_2(D_2) and E[d(X, X̂)] ≤ D_2. Since the weaker link is under logarithmic loss, we have R_1(D_1) ≤ R_2(D_2). This implies that H(X) − D_1 ≤ I(X; X̂) = H(X) − H(X|X̂). Thus, we can assume H(X|X̂) ≤ D_1 throughout the proof. For simplicity, we further make the benign assumption that there is no x̂ such that p_X(x) = p_{X|X̂}(x|x̂) for all x. (See Remark 1 for the case where such an x̂ exists.) Without loss of generality, suppose X̂ = {0, 1, …, s − 1}. Consider a random variable Y ∈ Y = {0, 1, …, s} with the following pmf for some 0 ≤ ε ≤ 1:

p_Y(y) = (1 − ε) p_X̂(y) for 0 ≤ y ≤ s − 1, and p_Y(s) = ε.

The conditional distribution is given by

p_{Y|X̂}(y|x̂) = 1 − ε if y = x̂, ε if y = s, and 0 otherwise.

The joint distribution of X, X̂, Y is given by p_{X,X̂,Y}(x, x̂, y) = p_{X,X̂}(x, x̂) p_{Y|X̂}(y|x̂), so that X − X̂ − Y forms a Markov chain. Since H(X|Y) = (1 − ε)H(X|X̂) + εH(X) and H(X|X̂) ≤ D_1 ≤ H(X), there exists an ε such that H(X|Y) = D_1. We are now ready to define the Markov chain. Let Q = p_{X|Y}(·|Y) and q^(y) = p_{X|Y}(·|y) for all y ∈ Y. The following lemma implies that there is a one-to-one mapping between q^(y) and y.

Lemma 2. If p_{X|Y}(x|y_1) = p_{X|Y}(x|y_2) for all x ∈ X, then y_1 = y_2.
The proof of the lemma is given in Appendix A. Since y ↦ p_{X|Y}(·|y) is one-to-one, Q = p_{X|Y}(·|Y) satisfies

I(X; Q) = I(X; Y) = H(X) − H(X|Y) = H(X) − D_1 = R_1(D_1),

and H(X|Q) = H(X|Y) = D_1. Furthermore, X − X̂ − Q forms a Markov chain since X − X̂ − Y forms a Markov chain. This concludes the proof.
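The erasure-style construction in the proof is easy to simulate. The sketch below (with an arbitrarily chosen example joint pmf) attaches Y to X̂, exploits the fact that H(X|Y) is linear in ε to solve for the ε achieving H(X|Y) = D_1, and verifies the result:

```python
import math
from collections import defaultdict

def cond_entropy(joint):
    """H(X|Y) from {(x, y): prob}, in nats."""
    py = defaultdict(float)
    for (x, y), pr in joint.items():
        py[y] += pr
    return -sum(pr * math.log(pr / py[y]) for (x, y), pr in joint.items() if pr > 0)

# example p_{X,Xhat}: binary X observed through 10% noise
p_xxh = {(0, 0): 0.45, (0, 1): 0.05, (1, 0): 0.05, (1, 1): 0.45}
s = 2
H_X = math.log(2)
H_X_given_Xh = cond_entropy(p_xxh)      # ≈ 0.325 nats

def joint_xy(eps):
    """Y = Xhat w.p. 1 - eps, Y = s (an extra 'erasure' symbol) w.p. eps."""
    j = defaultdict(float)
    for (x, xh), pr in p_xxh.items():
        j[(x, xh)] += pr * (1 - eps)
        j[(x, s)] += pr * eps
    return j

D1 = 0.5                                # target, H(X|Xhat) <= D1 <= H(X)
eps = (D1 - H_X_given_Xh) / (H_X - H_X_given_Xh)  # linearity of H(X|Y) in eps
assert 0 <= eps <= 1
assert math.isclose(cond_entropy(joint_xy(eps)), D1)
print(f"eps = {eps:.3f} gives H(X|Y) = D1")
```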
The key idea of the theorem is that (1) is the only condition required of the conditional distribution achieving the rate-distortion function under logarithmic loss, and it is a loose one. Thus, for any distortion criterion in the second stage, we are able to choose an appropriate distribution p_{X,X̂,Q} that satisfies both (1) and the condition for successive refinability.

Remark 1.
The assumption that p_{X|X̂}(·|x̂) ≠ p_X(·) for all x̂ is not necessary. Appendix B shows another joint distribution p_{X,X̂,Y} that satisfies the conditions for successive refinability when the above assumption does not hold.
The distribution in the above proof is one simple example with a single parameter ε, but we can always find other distributions that satisfy the condition for successive refinability. In the next section, we propose a totally different distribution that yields a Markov chain X − X̂ − Y with H(X|Y) = D_1. This also shows that the result does not rely on one particular construction.

Remark 2.
In the proof, we used the random variable Y to define Q = p_{X|Y}(·|Y). Conversely, if a joint distribution p_{X,X̂,Q} satisfies the conditions for successive refinability, then there exists a random variable Y such that X − X̂ − Y forms a Markov chain and Q = p_{X|Y}(·|Y). This is simply because we can set Y = Q, which implies p_{X|Y}(·|Y) = p_{X|Q}(·|Q) = Q.
Theorem 2 can be generalized to the successive refinement problem with K intermediate decoders.
Consider random variables Y_k ∈ Y for 1 ≤ k ≤ K such that X − X̂ − Y_K − ⋯ − Y_1 forms a Markov chain, with the joint distribution of X, X̂, Y_1, …, Y_K given by

p_{X,X̂,Y_1,…,Y_K}(x, x̂, y_1, …, y_K) = p_{X,X̂}(x, x̂) p_{Y_K|X̂}(y_K|x̂) ∏_{k=2}^{K} p_{Y_{k−1}|Y_k}(y_{k−1}|y_k),

where H(X|Y_k) = D_k. Similar to the proof of Theorem 2, we can show that Q_k = p_{X|Y_k}(·|Y_k) for 1 ≤ k ≤ K satisfies the condition for successive refinability (where the posterior distributions p_{X|Y_k}(·|y_k) should be distinct for all y_k ∈ Y to guarantee one-to-one correspondence). Thus, we can conclude that any discrete memoryless source with K intermediate decoders is successively refinable as long as all the intermediate decoders are under logarithmic loss.

Toward Lossy Compression with Low Complexity
As mentioned in Remark 1, the choice of the joint distribution p_{X,X̂,Q} in the proof of Theorem 2 is not unique. In this section, we propose another joint distribution p_{X,X̂,Q} that satisfies the conditions for successive refinability. It naturally suggests a new lossy compression algorithm, which we discuss in Section 4.3.

Rate-Distortion Achieving Joint Distribution: Small D 1
Recall that H(X|X̂) ≤ D_1. We first consider the case where D_1 is not too large, so that D_1 is close to H(X|X̂); we will make this precise later. For simplicity, we further assume that p_X̂(0) ≥ ⋯ ≥ p_X̂(s − 1).

Consider a random variable Z^(s) ∈ X̂ with the following pmf for some 0 ≤ ε ≤ (s − 1) min_x̂ p_X̂(x̂):

p_{Z^(s)}(0) = 1 − ε, and p_{Z^(s)}(z) = ε/(s − 1) for z ≠ 0.   (5)

When it is clear from context, we simply write Z ≡ Z^(s) for the sake of notation. We further define a random variable Y, independent of Z, such that X̂ = Y ⊕_s Z, where ⊕_s denotes the modulo-s sum. This can be achieved by the following pmf and conditional pmf:

p_Y(y) = (p_X̂(y) − ε/(s − 1)) / (1 − εs/(s − 1)),
p_{Y|X̂}(y|x̂) = p_Y(y) p_Z(x̂ ⊖_s y) / p_X̂(x̂),

where ⊖_s denotes modulo-s subtraction. Note that the constraint ε ≤ (s − 1) min_x̂ p_X̂(x̂) guarantees p_Y(y) ≥ 0 for all y.
If ε = 0, we have H(X|Y) = H(X|X̂). Also, it is clear that H(X|Y) increases as ε increases. Since we assume that D_1 is not too large, there exists 0 ≤ ε ≤ (s − 1) min_x̂ p_X̂(x̂) such that H(X|Y) = D_1. We discuss the case of general D_1 in Section 4.2. The joint distribution of X, X̂, Y is given by p_{X,X̂,Y}(x, x̂, y) = p_{X,X̂}(x, x̂) p_{Y|X̂}(y|x̂).
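A quick numerical check of this decomposition (the explicit forms of p_Z and p_Y used here are an assumption of this sketch, chosen to be consistent with the stated constraint on ε): we verify that the circular convolution of Y and Z reproduces p_X̂, and that the maximal ε removes all mass from the last symbol of Y:

```python
import numpy as np

def decompose(p_xhat, eps):
    """Return (p_Y, p_Z) with Xhat = Y ⊕_s Z for independent Y, Z, assuming
    p_Z(0) = 1 - eps and p_Z(z) = eps/(s-1) for z != 0."""
    s = len(p_xhat)
    p_z = np.full(s, eps / (s - 1))
    p_z[0] = 1 - eps
    p_y = (p_xhat - eps / (s - 1)) / (1 - eps * s / (s - 1))
    return p_y, p_z

p_xhat = np.array([0.5, 0.3, 0.2])   # sorted: p(0) >= p(1) >= p(2)
s = len(p_xhat)
eps_max = (s - 1) * p_xhat.min()     # largest eps keeping p_Y >= 0
p_y, p_z = decompose(p_xhat, eps_max)

# circular convolution of p_Y and p_Z recovers p_Xhat
conv = np.array([sum(p_y[y] * p_z[(x - y) % s] for y in range(s)) for x in range(s)])
assert np.allclose(conv, p_xhat)
assert abs(p_y[s - 1]) < 1e-12       # at maximal eps, p_Y(s-1) = 0
print(p_y)                           # Y puts mass (0.75, 0.25, 0) here
```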
We are now ready to define the Markov chain. Let Q = p_{X|Y}(·|Y) and q^(y) = p_{X|Y}(·|y) for all y ∈ Y, where Y = X̂ = {0, 1, …, s − 1}. For simplicity, we assume that p_{X|Y}(·|y_1) and p_{X|Y}(·|y_2) are distinct whenever y_1 ≠ y_2. Since y ↦ p_{X|Y}(·|y) is then one-to-one, we have

I(X; Q) = I(X; Y) = H(X) − H(X|Y) = H(X) − D_1 = R_1(D_1).

Furthermore, X − X̂ − Q forms a Markov chain since X − X̂ − Y forms a Markov chain. Thus, the above construction of the joint distribution p_{X,X̂,Q} satisfies the conditions for successive refinability.

Rate-Distortion Achieving Joint Distribution: General D 1
The joint distribution in the previous section only works for small D_1, because ε has a natural upper bound from (5), namely ε ≤ (s − 1) min_x̂ p_X̂(x̂). In this section, we generalize the construction of the previous section to general D_1. The key observation is that if we pick the maximum ε = (s − 1) min_x̂ p_X̂(x̂), then p_Y(s − 1) = 0. This implies that we can focus on the smaller reconstruction alphabet Y = {0, 1, …, s − 2}.
Let U_s = X̂, and define random variables {U_k : 1 ≤ k ≤ s − 1} recursively. More precisely, we define the random variable U_{k−1} based on U_k for 2 ≤ k ≤ s through U_k = U_{k−1} ⊕_k Z_k^{(k)}, where Z_k^{(k)} is constructed as in Section 4.1 with X̂ replaced by U_k and with the maximal parameter ε_k = (k − 1) min_u p_{U_k}(u).
Similar to the definition of Y, we assume that U_{k−1} and Z_k^{(k)} are independent, and ⊕_k denotes the modulo-k sum. At each step, the alphabet size of U_k decreases by one. Thus, we have 0 ≤ U_k ≤ k − 1, and therefore U_1 = 0 with probability 1. Furthermore, we have

H(X|X̂) = H(X|U_s) ≤ H(X|U_{s−1}) ≤ ⋯ ≤ H(X|U_1) = H(X),

so for any D_1 with H(X|X̂) ≤ D_1 ≤ H(X), there is a stage at which an intermediate choice of ε yields a random variable Y with H(X|Y) = D_1. Similar to the previous section, we assume that p_{X|Y}(·|y_1) ≠ p_{X|Y}(·|y_2) if y_1 ≠ y_2. Then, we can set Q = p_{X|Y}(·|Y), which satisfies the conditions for successive refinability.
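The recursive construction can be simulated end-to-end. The sketch below (with an arbitrarily chosen example joint pmf, and an assumed explicit form for p_Z consistent with Section 4.1) repeatedly applies the maximal-ε decomposition, relabeling each U_k so its marginal is sorted, and checks that H(X|U_k) grows from H(X|X̂) up to H(X) as the alphabet shrinks to a single symbol:

```python
import numpy as np

def h_cond(J):
    """H(X|U) in nats from a joint matrix J[x, u]."""
    pu = J.sum(axis=0)
    h = 0.0
    for x in range(J.shape[0]):
        for u in range(J.shape[1]):
            if J[x, u] > 0:
                h -= J[x, u] * np.log(J[x, u] / pu[u])
    return h

def reduce_once(J):
    """One step U_k -> U_{k-1}: maximal-eps modular decomposition (assumed form
    p_Z(0) = 1 - eps, p_Z(z) = eps/(k-1) otherwise), then drop the symbol whose
    probability becomes zero."""
    k = J.shape[1]
    J = J[:, np.argsort(-J.sum(axis=0))]    # relabel so p_U is sorted descending
    pu = J.sum(axis=0)
    eps = (k - 1) * pu.min()
    pz = np.full(k, eps / (k - 1)); pz[0] = 1 - eps
    pu_new = (pu - eps / (k - 1)) / (1 - eps * k / (k - 1))
    # backward channel p(u'|u) induced by U_k = U_{k-1} ⊕_k Z with U_{k-1} ⊥ Z
    chan = np.array([[pu_new[up] * pz[(u - up) % k] / pu[u] for up in range(k)]
                     for u in range(k)])
    return (J @ chan)[:, :k - 1]            # last column is all zeros

# example joint p_{X,Xhat} with marginal p_Xhat = (0.5, 0.3, 0.2)
J = np.array([[0.40, 0.05, 0.05],
              [0.05, 0.20, 0.05],
              [0.05, 0.05, 0.10]])
hs = [h_cond(J)]                            # H(X|U_s) = H(X|Xhat)
while J.shape[1] > 1:
    J = reduce_once(J)
    hs.append(h_cond(J))
assert all(a <= b + 1e-12 for a, b in zip(hs, hs[1:]))  # monotone in the stage
p_x = J[:, 0]
assert np.isclose(hs[-1], -np.sum(p_x * np.log(p_x)))   # H(X|U_1) = H(X)
print([round(h, 3) for h in hs])
```

The monotonicity follows from the Markov chain X − U_k − U_{k−1} that the backward channel enforces by construction.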

Iterative Lossy Compression Algorithm
The joint distribution from the previous section naturally suggests a simple successive refinement scheme. Consider the lossy compression problem where the source is i.i.d. ∼ p_X and the distortion measure is d : X × X̂ → [0, ∞). Let D be the target distortion, and let R > R(D) be the rate of the scheme, where R(D) is the rate-distortion function. Let p_{X,X̂} be the distribution achieving the rate-distortion function.
For blocklength n, we propose a new lossy compression scheme that mimics successive refinement with s − 1 decoders. Similar to the previous section, let ε_k = (k − 1) min_u p_{U_k}(u), and construct the random variables U_1, …, U_s (with U_s = X̂) accordingly. We further let R_{k−1} > I(X; U_k) − I(X; U_{k−1}) for 2 ≤ k ≤ s, chosen so that R = Σ_{k=2}^{s} R_{k−1}. Now, we are ready to describe our coding scheme. For each 1 ≤ k ≤ s − 1, generate a sub-codebook C_k = {z^n(k, m) : 1 ≤ m ≤ e^{nR_k}}, where each codeword is drawn i.i.d. according to the corresponding Z distribution.
Upon observing x^n ∈ X^n, the encoder finds the message m_1 whose codeword z^n(1, m_1) ∈ C_1 minimizes d_1(x^n, z^n(1, m_1)), where the distortion measure d_1(·,·) is defined as d_1(x, z) = log(1/p_{X|U_2}(x|z)).
Note that d_1(x, z) is simply the logarithmic loss between x and p_{X|U_2}(·|z). Similarly, for 2 ≤ k ≤ s − 1, the encoder iteratively finds the m_k whose codeword z^n(k, m_k) ∈ C_k minimizes the logarithmic loss between x^n and p_{X|U_{k+1}}(·|u^n_k ⊕_{k+1} z^n(k, m_k)), where u^n_k is the running reconstruction after stage k − 1. Upon receiving m_1, m_2, …, m_{s−1}, the decoder reconstructs X̂^n = u^n_s, where u^n_{k+1} = u^n_k ⊕_{k+1} z^n(k, m_k). Suppose R_1 ≈ R_2 ≈ ⋯ ≈ R_{s−1} ≈ R/(s − 1), and let L = s − 1. Similar to [12,14], this scheme has two main advantages compared to the naive random coding scheme. First, the number of codewords in the proposed scheme is L · e^{nR/L}, while the naive scheme requires e^{nR} codewords. Also, in each iteration, the encoder searches for the best codeword among e^{nR/L} sub-codewords. Thus, the overall encoding complexity is of order L · e^{nR/L} as well, whereas the naive scheme requires complexity of order e^{nR}.
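The encoder's stage-by-stage search can be sketched as follows. This is a toy illustration only: the per-stage conditional matrices standing in for p_{X|U_{k+1}} are made-up placeholders, not derived from an actual rate-distortion solution, and the sub-codebooks are tiny random sets:

```python
import numpy as np

rng = np.random.default_rng(0)

def stage_loss(P, x, u):
    """Average logarithmic loss between x^n and the soft reconstructions
    p_{X|U}(·|u_i), with P[x, u] playing the role of p_{X|U}(x|u)."""
    return -np.log(P[x, u]).mean()

def iterative_encode(x, codebooks, conds):
    """Greedy search: at stage k, pick the sub-codeword minimizing the log loss
    of the running reconstruction u^n_{k+1} = u^n_k ⊕ z^n(k, m)."""
    u = np.zeros(len(x), dtype=int)            # U_1 = 0 with probability 1
    msgs = []
    for k, (C, P) in enumerate(zip(codebooks, conds)):
        mod = k + 2                            # alphabet size of U_{k+2}
        m = min(range(len(C)), key=lambda j: stage_loss(P, x, (u + C[j]) % mod))
        msgs.append(m)
        u = (u + C[m]) % mod
    return msgs, u

n, s = 8, 3
x = rng.integers(0, 3, size=n)                 # source over X = {0, 1, 2}
# placeholder conditionals for p_{X|U_2} (3x2) and p_{X|U_3} (3x3); columns sum to 1
P2 = np.array([[0.6, 0.1], [0.3, 0.3], [0.1, 0.6]])
P3 = np.array([[0.8, 0.1, 0.1], [0.1, 0.8, 0.1], [0.1, 0.1, 0.8]])
codebooks = [[rng.integers(0, 2, size=n) for _ in range(4)],   # z^n(1, m), binary
             [rng.integers(0, 3, size=n) for _ in range(4)]]   # z^n(2, m), ternary
msgs, u_final = iterative_encode(x, codebooks, [P2, P3])
assert len(msgs) == s - 1 and all(0 <= m < 4 for m in msgs)
assert set(u_final) <= {0, 1, 2}               # final reconstruction alphabet
print(msgs, u_final)
```

Each stage searches only its own sub-codebook, which is what yields the L · e^{nR/L} complexity discussed above.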

Remark 3.
The proposed scheme constructs X̂^n starting from binary sequences. The reconstruction after each stage can be viewed as u^n_{k+1} = u^n_k ⊕_{k+1} z^n(k, m_k), where 0 ≤ u_k ≤ k − 1. Thus, the decoder starts from the binary sequence u^n_2(m_1), and the alphabet size increases by one at each iteration. After the (s − 1)-th iteration, it reaches the final reconstruction X̂^n, whose alphabet size is s.

Conclusions
To conclude our discussion, we summarize our main contributions. In the context of the successive refinement problem, we showed a universal property of logarithmic loss: any discrete memoryless source is successively refinable as long as the intermediate decoders operate under logarithmic loss. We applied this result to the point-to-point lossy compression problem and proposed a lossy compression scheme with lower complexity.

Appendix B

Since H(X|X̂) ≤ D_1, there exists an 0 ≤ ε ≤ 1 such that H(X|Y) = D_1. As before, let Q = p_{X|Y}(·|Y) and q^(y) = p_{X|Y}(·|y) for all y ∈ Y. The following lemma implies the one-to-one mapping between q^(y) and y.
Proof. If y = 0, the conditional distribution p_{X|Y}(x|y) is given by p_{X|Y}(x|y) = p_{X|X̂}(x|y).
As we have seen in Appendix A, p_{X|X̂}(·|x̂_1) cannot be equal to p_{X|X̂}(·|x̂_2) if x̂_1 ≠ x̂_2. Since p_{X|Y}(x|y) = p_{X|X̂}(x|y) for all x, it follows that p_{X|Y}(x|y_1) = p_{X|Y}(x|y_2) for all x implies y_1 = y_2.
The remaining part of the proof is exactly the same as the main proof.