Article

Are Guessing, Source Coding and Tasks Partitioning Birds of A Feather? †

1 Department of Mathematics, Indian Institute of Technology Palakkad, Palakkad 678557, India
2 Department of Computer Science and Engineering, Indian Institute of Technology Palakkad, Palakkad 678557, India
3 Department of Mathematics, Indian Institute of Technology Indore, Indore 453552, India
4 Institute for Communications Engineering, Technical University of Munich, 80333 Munich, Germany
* Author to whom correspondence should be addressed.
This paper is an extended version of our paper published in 2022 IEEE International Symposium on Information Theory (ISIT), Espoo, Finland, 26 June–1 July 2022.
Entropy 2022, 24(11), 1695; https://doi.org/10.3390/e24111695
Submission received: 15 August 2022 / Revised: 3 October 2022 / Accepted: 5 October 2022 / Published: 19 November 2022
(This article belongs to the Special Issue Types of Entropies and Divergences with Their Applications)

Abstract

This paper establishes a close relationship among four information-theoretic problems, namely the Campbell source coding, Arıkan guessing, Huleihel et al. memoryless guessing, and Bunte–Lapidoth tasks-partitioning problems, in the IID lossless case. We first show that the aforementioned problems are mathematically related via a general moment-minimization problem whose optimum solution is given in terms of Rényi entropy. We then propose a general framework for the mismatched version of these problems and establish all the asymptotic results using this framework. The unified framework further enables us to study a variant of the Bunte–Lapidoth tasks-partitioning problem which is practically more appealing. In addition, this variant turns out to be a generalization of Arıkan's guessing problem. Finally, with the help of this general framework, we establish an equivalence among all these problems, in the sense that knowing an asymptotically optimal solution in one problem helps us find the same in all the others.

1. Introduction

The concept of entropy is central to information theory. In source coding, the expected number of bits required (per letter) to encode a source with finite alphabet set $\mathcal{X}$ and probability distribution P is the Shannon entropy $H(P) := -\sum_{x \in \mathcal{X}} P(x)\log P(x)$. If the compressor does not know the true distribution P, but assumes a distribution Q (mismatch), then the number of bits required for compression is $H(P) + I(P,Q)$, where
$$I(P,Q) := -\sum_{x \in \mathcal{X}} P(x)\log Q(x) + \sum_{x \in \mathcal{X}} P(x)\log P(x) \qquad (1)$$
is the entropy of P relative to Q (or the Kullback–Leibler divergence). In his seminal paper, Shannon [1] argued that H(P) can also be regarded as a measure of uncertainty. Subsequently, Rényi [2] introduced an alternative measure of uncertainty, now known as the Rényi entropy of order α, as
$$H_\alpha(P) := \frac{1}{1-\alpha}\log\sum_{x\in\mathcal{X}} P(x)^\alpha, \qquad (2)$$
where α > 0 and α ≠ 1. Rényi entropy can also be regarded as a generalization of the Shannon entropy, since $\lim_{\alpha\to1} H_\alpha(P) = H(P)$. Refer to Aczél and Daróczy [3] and the references therein for an extensive study of various measures of uncertainty and their characterizations.
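To make these definitions concrete, here is a small numerical sketch (ours, not from the paper; the distribution P is an arbitrary example) that computes $H_\alpha(P)$ in bits and checks the Shannon limit as α → 1:

import math

def renyi_entropy(P, alpha):
    # H_alpha(P) = (1/(1-alpha)) * log2( sum_x P(x)^alpha ), for alpha > 0, alpha != 1
    return math.log2(sum(p ** alpha for p in P)) / (1 - alpha)

def shannon_entropy(P):
    # H(P) = -sum_x P(x) log2 P(x)
    return -sum(p * math.log2(p) for p in P if p > 0)

P = [0.5, 0.25, 0.125, 0.125]
for alpha in (0.5, 0.9, 0.99, 1.01, 1.1, 2.0):
    print(f"alpha = {alpha:5.2f}   H_alpha(P) = {renyi_entropy(P, alpha):.6f}")
print(f"Shannon H(P) = {shannon_entropy(P):.6f}   (limit as alpha -> 1)")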
In 1965, Campbell [4] gave an operational meaning to Rényi entropy. He showed that, instead of expected code lengths, if one minimizes the cumulants of code lengths, then the optimal cumulant is the Rényi entropy $H_\alpha(P)$, where $\alpha = 1/(1+\rho)$ with ρ being the order of the cumulant. He also showed that the optimal cumulant can be achieved by encoding sufficiently long sequences of symbols. Sundaresan (Theorem 8 of [5]) (cf. Blumer and McEliece [6]) showed that, in the mismatched case, the optimal cumulant is $H_\alpha(P) + I_\alpha(P,Q)$, where
$$I_\alpha(P,Q) := \frac{\alpha}{1-\alpha}\log\sum_x P(x)\left[\frac{Q(x)^\alpha}{\sum_y Q(y)^\alpha}\right]^{\frac{\alpha-1}{\alpha}} - H_\alpha(P) \qquad (3)$$
is called the α-entropy of P relative to Q, or Sundaresan's divergence [7]. Hence, $I_\alpha(P,Q)$ can be interpreted as the penalty for not knowing the true distribution. The first term in (3) is sometimes called the Rényi cross-entropy and is analogous to the first term of (1). We have $I_\alpha(P,Q) \ge 0$, with equality if and only if P = Q. The $I_\alpha$-divergence can also be regarded as a generalization of the Kullback–Leibler divergence, since $\lim_{\alpha\to1} I_\alpha(P,Q) = I(P,Q)$. Refer to [5,8,9] for detailed discussions on the properties of $I_\alpha$. Lutwak et al. also independently identified $I_\alpha$ in the context of maximum Rényi entropy and called it an α-Rényi relative entropy (Equation (4) of [10]). $I_\alpha$, for α > 1, also arises in robust inference problems (see [11] and the references therein).
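As a sanity check on Equation (3), the following sketch (ours; P and Q are arbitrary examples) evaluates $I_\alpha(P,Q)$ with logarithms in base 2, and illustrates its nonnegativity and its Kullback–Leibler limit as α → 1:

import math

def renyi_entropy(P, alpha):
    return math.log2(sum(p ** alpha for p in P)) / (1 - alpha)

def I_alpha(P, Q, alpha):
    # Sundaresan's divergence, Equation (3)
    Z = sum(q ** alpha for q in Q)
    s = sum(p * (q ** alpha / Z) ** ((alpha - 1) / alpha) for p, q in zip(P, Q))
    return (alpha / (1 - alpha)) * math.log2(s) - renyi_entropy(P, alpha)

def kl(P, Q):
    return sum(p * math.log2(p / q) for p, q in zip(P, Q) if p > 0)

P, Q = [0.5, 0.3, 0.2], [0.2, 0.3, 0.5]
for alpha in (0.5, 0.99, 1.01, 2.0):
    print(f"alpha = {alpha:5.2f}   I_alpha(P,Q) = {I_alpha(P, Q, alpha):.6f}")
print(f"KL divergence I(P,Q) = {kl(P, Q):.6f}   (limit as alpha -> 1)")
print(f"I_alpha(P,P) = {I_alpha(P, P, 0.5):.6f}   (zero if and only if P = Q)")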
In [12], Massey studied a guessing problem in which one is interested in the expected number of guesses required to guess a random variable X that assumes values from an infinite set, and found a lower bound in terms of Shannon entropy. Arıkan [13] studied it for a finite alphabet set and showed that Rényi entropy arises as the optimal solution in minimizing moments of the number of guesses. Subsequently, Sundaresan [5] showed that the penalty in guessing according to a distribution Q when the true distribution is P is given by $I_\alpha(P,Q)$. It is interesting to note that guesswork has also been studied from a large deviations point of view [14,15,16,17,18]. Bunte and Lapidoth [8] studied a problem on the partitioning of tasks and showed that Rényi entropy and Sundaresan's divergence play a similar role in the optimal number of tasks performed. We propose, in this paper, a variant of this problem where the tasks in each subset of the partition are performed in decreasing order of their probabilities. We show that Rényi entropy and Sundaresan's divergence arise as optimal solutions in this problem too. Huleihel et al. [19,20] studied the memoryless guessing problem, a variant of Arıkan's guessing problem with i.i.d. (independent and identically distributed) guesses, and showed that the minimum attainable factorial moment of the number of guesses is given by the Rényi entropy. We show, in this paper, that the minimum factorial moment in the mismatched case is measured by Sundaresan's divergence.
We observe that, in all these problems, the objective is to minimize usual moments or factorial moments of random variables, and Rényi entropy and Sundaresan’s divergence arise in optimal solutions. The relationship between source coding and guessing is well-known in the literature. Arıkan and Merhav established a close relationship between lossy source coding and guessing with distortion using large deviation techniques [14,21]. The same for the lossless case was done by Hanawal and Sundaresan [17]. In this paper, we establish a general framework for all the five problems in the IID-lossless case. We then use this to establish upper and lower bounds for the mismatched version of these problems. This helps us find an equivalence among all these problems, in the sense that knowing an asymptotically optimal solution in one problem helps us find the same in all other problems.
Our Contributions in the Paper:
(a)
a general framework for the problems on source coding, guessing and tasks partitioning;
(b)
lower and upper bounds for the general framework of these problems, both in the matched and mismatched cases;
(c)
a unified approach to derive bounds for the mismatched version of these problems;
(d)
a generalized tasks partitioning problem; and
(e)
establishing operational commonality among the problems.
Organisation of the Paper:
In Section 2, we present our unified framework, and find conditions under which lower and upper bounds are attained. In Section 3, we present four well-known information-theoretic problems, namely, Campbell’s source coding, Arıkan’s guessing, Huleihel et al.’s memoryless guessing, and Bunte–Lapidoth’s tasks partitioning, and re-establish and refine major results pertaining to these problems. In Section 4, we propose and solve a generalized tasks partitioning problem. In Section 5, we establish a connection among the aforementioned problems. Finally, we summarize and conclude the paper in Section 6.

2. A General Minimization Problem

In this section, we present a general minimization problem whose optimum solution evaluates to Rényi entropy. We will later show that all problems stated in Section 3 are particular instances of this general problem.
 Proposition 1. 
Let $\psi : \mathcal{X} \to (0,\infty)$ be such that $\sum_{x \in \mathcal{X}} \psi(x)^{-1} \le k$ for some $k > 0$. Then, for $\rho \in (-1,0) \cup (0,\infty)$,
$$\frac{1}{\rho}\log E_P[\psi(X)^\rho] \ge H_\alpha(P) - \log k, \qquad (4)$$
where $E_P[\cdot]$ denotes the expectation with respect to the probability distribution P on $\mathcal{X}$, $H_\alpha(P)$ is the Rényi entropy of order α, and $\alpha := \alpha(\rho) = 1/(1+\rho)$. The lower bound is achieved if and only if
$$\psi(x)^{-1} = k \cdot P(x)^\alpha / Z_{P,\alpha} \quad \text{for } x \in \mathcal{X}, \qquad (5)$$
where $Z_{P,\alpha} := \sum_{x \in \mathcal{X}} P(x)^\alpha$.
 Proof. 
Observe that
$$\begin{aligned}
\operatorname{sgn}(\rho)\sum_{x\in\mathcal{X}} P(x)\psi(x)^\rho
&= \operatorname{sgn}(\rho)\sum_{x\in\mathcal{X}} P(x)^\alpha \left[\frac{\psi(x)^{-1}}{P(x)^\alpha}\right]^{-\rho} \\
&\overset{(a)}{\ge} \operatorname{sgn}(\rho)\Big(\sum_x P(x)^\alpha\Big)\left[\frac{\sum_x \psi(x)^{-1}}{\sum_x P(x)^\alpha}\right]^{-\rho} \\
&= \operatorname{sgn}(\rho)\Big(\sum_x P(x)^\alpha\Big)^{1+\rho}\Big(\sum_x \psi(x)^{-1}\Big)^{-\rho} \\
&\overset{(b)}{\ge} \operatorname{sgn}(\rho)\Big(\sum_x P(x)^\alpha\Big)^{1+\rho} k^{-\rho},
\end{aligned}$$
where (a) is due to the generalised log-sum inequality (Equation (4.1) of [22]) applied to the function $f(x) = \operatorname{sgn}(\rho)\cdot x^{-\rho}$, and (b) follows from the hypothesis that $\sum_x \psi(x)^{-1} \le k$. Taking logarithms and then dividing by ρ, we obtain (4). Equality holds in (a) if and only if $\psi(x)^{-1} = \nu P(x)^\alpha$ for some constant ν, and in (b) if and only if $\sum_x \psi(x)^{-1} = k$. This completes the proof. □
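The following small numerical sketch (ours, not from the paper; P, ρ, and the suboptimal ψ are arbitrary choices) verifies Proposition 1: any ψ with $\sum_x \psi(x)^{-1} \le k$ has normalized cumulant at least $H_\alpha(P) - \log k$, with equality at the ψ given by (5):

import math

P = [0.5, 0.3, 0.2]
rho = 2.0
alpha = 1.0 / (1.0 + rho)
k = 1.0
Z = sum(p ** alpha for p in P)                 # Z_{P,alpha}
H_alpha = math.log2(Z) / (1 - alpha)

def cumulant(psi):
    # (1/rho) * log2 E_P[psi(X)^rho]
    return math.log2(sum(p * s ** rho for p, s in zip(P, psi))) / rho

# Optimal psi from (5): psi(x)^{-1} = k * P(x)^alpha / Z_{P,alpha}.
psi_opt = [Z / (k * p ** alpha) for p in P]
# A suboptimal psi satisfying the same constraint sum_x psi(x)^{-1} = k.
psi_sub = [3.0, 3.0, 3.0]

print(f"lower bound H_alpha(P) - log2 k = {H_alpha - math.log2(k):.6f}")
print(f"cumulant at the optimal psi     = {cumulant(psi_opt):.6f}")
print(f"cumulant at a uniform psi       = {cumulant(psi_sub):.6f}")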
The left-hand side of (4) is called the normalised cumulant of ψ(X) of order ρ. The measure $P^{(\alpha)}(x) := P(x)^\alpha / Z_{P,\alpha}$ in (5) that attains the lower bound in (4) is called an α-scaled measure or escort measure of P. This measure also arises in robust inference (Equation (7) of [11]) and statistical physics [23]. The above proposition can also be proved using a variational formula as follows. By a version of the Donsker–Varadhan variational formula (Proposition 4.5.1 of [24]), for any real-valued f on $\mathcal{X}$, we have
$$\log E_P[2^{f(X)}] = \max_Q \{E_Q[f(X)] - D(Q\|P)\}, \qquad (6)$$
where the max is over all probability distributions Q on $\mathcal{X}$. Taking ρ > 0 and $f(x) = \rho\log\psi(x)$ in (6), we have
$$\begin{aligned}
\log E_P[\psi(X)^\rho] &= \max_Q\{\rho E_Q[\log\psi(X)] - D(Q\|P)\} \\
&= \max_Q\Big\{-\rho\sum_x Q(x)\log\Big(\frac{\psi(x)^{-1}}{Q(x)}\cdot Q(x)\Big) - D(Q\|P)\Big\} \\
&= \max_Q\Big\{\rho H(Q) - \rho\sum_x Q(x)\log\frac{\psi(x)^{-1}}{Q(x)} - D(Q\|P)\Big\} \\
&\overset{(a)}{\ge} \max_Q\Big\{\rho H(Q) - \rho\log\sum_x \psi(x)^{-1} - D(Q\|P)\Big\} \\
&\overset{(b)}{\ge} \rho\max_Q\Big\{H(Q) - \frac{1}{\rho}D(Q\|P)\Big\} - \rho\log k,
\end{aligned}$$
where (a) is by the log-sum inequality (Equation (4.1) of [22]) and (b) is by applying the constraint $\sum_x \psi(x)^{-1} \le k$. For ρ ∈ (−1, 0), the inequalities in (a) and (b) are reversed, and the last max is replaced by min. Hence, (4) follows, as the last max equals $H_\alpha(P)$ by (Theorem 1 of [25]). Equality in (a) and (b) holds if and only if $\psi(x)^{-1} = k\cdot Q(x)$. In addition, the last max is attained when $Q(x) = P(x)^\alpha / Z_{P,\alpha}$ for $x \in \mathcal{X}$. This completes the proof. The following is the analogous result for Shannon entropy.
 Proposition 2. 
Let $\psi : \mathcal{X} \to (0,\infty)$ be such that $\sum_{x\in\mathcal{X}} \psi(x)^{-1} \le k$. Then,
$$E_P[\log\psi(X)] \ge H(P) - \log k. \qquad (7)$$
Equality in (7) is achieved if and only if $\psi(x)^{-1} = k\cdot P(x)$ for all $x\in\mathcal{X}$.
Proof. 
$$E_P[\log\psi(X)] = -\sum_x P(x)\log\Big(P(x)\cdot\frac{\psi(x)^{-1}}{P(x)}\Big) = H(P) - \sum_x P(x)\log\frac{\psi(x)^{-1}}{P(x)} \ge H(P) - \log\sum_x \psi(x)^{-1} \ge H(P) - \log k,$$
where the penultimate inequality is due to the log-sum inequality. Equality holds in both inequalities if and only if $\psi(x)^{-1} = k\cdot P(x)$ for all $x\in\mathcal{X}$. □
It is interesting to note that $\frac{1}{\rho}\log E_P[\psi(X)^\rho] \to E_P[\log\psi(X)]$ and $H_\alpha(P) \to H(P)$ as $\rho \to 0$ in (4). We now extend Propositions 1 and 2 to sequences of random variables. Let $\mathcal{X}^n$ be the set of all n-length sequences of elements of $\mathcal{X}$, and let $P^n$ be the n-fold product distribution of P on $\mathcal{X}^n$; that is, for $x^n := (x_1,\dots,x_n) \in \mathcal{X}^n$, $P^n(x^n) = \prod_{i=1}^n P(x_i)$.
 Corollary 1. 
Given any $n \ge 1$, if $\psi_n : \mathcal{X}^n \to (0,\infty)$ is such that $\sum_{x^n\in\mathcal{X}^n} \psi_n(x^n)^{-1} \le k_n$ for some $k_n > 0$, then
(a) for $\rho \in (-1,0)\cup(0,\infty)$,
$$\liminf_{n\to\infty}\frac{1}{n\rho}\log E_{P^n}[\psi_n(X^n)^\rho] \ge H_\alpha(P) - \limsup_{n\to\infty}\frac{\log k_n}{n};$$
(b) $$\liminf_{n\to\infty}\frac{1}{n}E_{P^n}[\log\psi_n(X^n)] \ge H(P) - \limsup_{n\to\infty}\frac{\log k_n}{n},$$
where $E_{P^n}[\cdot]$ denotes the expectation with respect to the probability distribution $P^n$ on $\mathcal{X}^n$.
 Proof. 
It is easy to see that $H_\alpha(P^n) = nH_\alpha(P)$ and $H(P^n) = nH(P)$. Applying Propositions 1 and 2, dividing throughout by n, and taking liminf as $n\to\infty$, the results follow. □

A General Framework for Mismatched Cases

In this subsection, we establish a unified approach for cases in which there is a mismatch between the assumed and the true distribution.
 Proposition 3. 
Let $\rho > -1$, $\alpha = 1/(1+\rho)$, and let Q be a probability distribution on $\mathcal{X}$. For $n\ge1$, let $Q^n$ be the n-fold product distribution of Q on $\mathcal{X}^n$. If $\psi_n : \mathcal{X}^n \to (0,\infty)$ is such that
$$\psi_n(x^n) \le c_n \cdot \frac{Z_{Q^n,\alpha}}{Q^n(x^n)^\alpha} \qquad (8)$$
for some $c_n > 0$, then
(a) for ρ ≠ 0, we have
$$\operatorname{sgn}(\rho)\cdot E_{P^n}[\psi_n(X^n)^\rho] \le \operatorname{sgn}(\rho)\cdot 2^{n\rho[H_\alpha(P) + I_\alpha(P,Q) + n^{-1}\log c_n]}; \qquad (9)$$
(b) for ρ ≠ 0, we have
$$\limsup_{n\to\infty}\frac{1}{n\rho}\log E_{P^n}[\psi_n(X^n)^\rho] \le H_\alpha(P) + I_\alpha(P,Q) + \limsup_{n\to\infty}\frac{1}{n}\log c_n;$$
(c) for ρ = 0, we have
$$E_{P^n}[\log\psi_n(X^n)] \le n[H(P) + I(P,Q) + n^{-1}\log c_n]; \qquad (10)$$
(d) for ρ = 0, we have
$$\limsup_{n\to\infty}\frac{1}{n}E_{P^n}[\log\psi_n(X^n)] \le H(P) + I(P,Q) + \limsup_{n\to\infty}\frac{1}{n}\log c_n.$$
 Proof. 
Part (a): From (8), we have
$$\begin{aligned}
\operatorname{sgn}(\rho)\cdot E_{P^n}[\psi_n(X^n)^\rho] &= \operatorname{sgn}(\rho)\sum_{x^n\in\mathcal{X}^n} P^n(x^n)\psi_n(x^n)^\rho \\
&\le \operatorname{sgn}(\rho)\cdot c_n^\rho Z_{Q^n,\alpha}^\rho \sum_{x^n\in\mathcal{X}^n} P^n(x^n) Q^n(x^n)^{-\alpha\rho} \\
&= \operatorname{sgn}(\rho)\cdot 2^{\rho[H_\alpha(P^n) + I_\alpha(P^n,Q^n) + \log c_n]} \\
&= \operatorname{sgn}(\rho)\cdot 2^{n\rho[H_\alpha(P) + I_\alpha(P,Q) + n^{-1}\log c_n]},
\end{aligned}$$
where the penultimate equality holds from the definition of $I_\alpha$, and the last one holds because $H_\alpha(P^n) = nH_\alpha(P)$ and $I_\alpha(P^n,Q^n) = nI_\alpha(P,Q)$.
Part (b): Taking logarithms, dividing throughout by nρ, and then applying limsup successively on both sides of (9), the result follows.
Part (c): When ρ = 0, we have α = 1 and (8) becomes $\psi_n(x^n) \le c_n / Q^n(x^n)$. Hence,
$$E_{P^n}[\log\psi_n(X^n)] = \sum_{x^n\in\mathcal{X}^n} P^n(x^n)\log\psi_n(x^n) \le \log c_n + \sum_{x^n\in\mathcal{X}^n} P^n(x^n)\log\frac{1}{Q^n(x^n)} = \log c_n + H(P^n) + I(P^n,Q^n) = n\big(H(P) + I(P,Q) + n^{-1}\log c_n\big),$$
where the last equality holds because $H(P^n) = nH(P)$ and $I(P^n,Q^n) = nI(P,Q)$.
Part (d): Dividing (10) throughout by n and taking limsup on both sides, the result follows. □
 Proposition 4. 
Let $\rho > -1$, $\alpha = 1/(1+\rho)$, and let Q be a probability distribution on $\mathcal{X}$. For $n\ge1$, let $Q^n$ be the n-fold product distribution of Q on $\mathcal{X}^n$. Suppose $\psi_n : \mathcal{X}^n \to (0,\infty)$ is such that
$$\psi_n(x^n) \ge a_n \cdot \frac{Z_{Q^n,\alpha}}{Q^n(x^n)^\alpha}$$
for some $a_n > 0$. Then,
(a) for ρ ≠ 0, we have
$$\operatorname{sgn}(\rho)\cdot E_{P^n}[\psi_n(X^n)^\rho] \ge \operatorname{sgn}(\rho)\cdot 2^{n\rho(H_\alpha(P) + I_\alpha(P,Q) + n^{-1}\log a_n)};$$
(b) for ρ ≠ 0, we have
$$\liminf_{n\to\infty}\frac{1}{n\rho}\log E_{P^n}[\psi_n(X^n)^\rho] \ge H_\alpha(P) + I_\alpha(P,Q) + \liminf_{n\to\infty}\frac{1}{n}\log a_n;$$
(c) for ρ = 0, we have
$$E_{P^n}[\log\psi_n(X^n)] \ge n\big(H(P) + I(P,Q) + n^{-1}\log a_n\big);$$
(d) for ρ = 0, we have
$$\liminf_{n\to\infty}\frac{1}{n}E_{P^n}[\log\psi_n(X^n)] \ge H(P) + I(P,Q) + \liminf_{n\to\infty}\frac{1}{n}\log a_n.$$
 Proof. 
Similar to the proof of Proposition 3. □

3. Problem Statements and Known Results

In this section, we discuss Campbell’s source coding problem, Arıkan’s guessing problem, Huleihel et al.’s memoryless guessing problem, and Bunte–Lapidoth’s tasks partitioning problem. Using the general framework presented in the previous section, we re-establish known results, and present a few new results relating to these problems.

3.1. Source Coding Problem

Let X be a random variable that assumes values from a finite alphabet set $\mathcal{X} = \{a_1,\dots,a_m\}$ according to a probability distribution P. The tuple $(\mathcal{X}, P)$ is usually referred to as a source. A binary code C is a mapping from $\mathcal{X}$ to the set of finite-length binary strings. Let L(C(X)) be the length of the codeword C(X). The objective is to find a uniquely decodable code that minimizes the expected code-length, that is,
$$\text{Minimize } E_P[L(C(X))]$$
over all uniquely decodable codes C. Kraft and McMillan independently proved the following relation between uniquely decodable codes and their code-lengths.
Kraft–McMillan Theorem [26]: If C is a uniquely decodable code, then
$$\sum_{x\in\mathcal{X}} 2^{-L(C(x))} \le 1. \qquad (11)$$
Conversely, given a length sequence that satisfies the above inequality, there exists a uniquely decodable code C with the given length sequence.
Thus, one can confine the search space for C to codes satisfying the Kraft–McMillan inequality (11).
Theorem 5.3.1 of [26]: If C is a uniquely decodable code, then $E_P[L(C(X))] \ge H(P)$.
 Proof. 
Choose $\psi(x) = 2^{L(C(x))}$, where L(C(x)) is the length of the codeword C(x) assigned to the letter x. Since C is uniquely decodable, from (11), we have $\sum_{x\in\mathcal{X}} \psi(x)^{-1} \le 1$. Now, an application of Proposition 2 with k = 1 yields the desired result. □
 Theorem 1. 
Let $X^n := (X_1,\dots,X_n)$ be an i.i.d. sequence from $\mathcal{X}^n$ following the product distribution $P^n(x^n) = \prod_{i=1}^n P(x_i)$. Let $Q^n(x^n) = \prod_{i=1}^n Q(x_i)$, where Q is another probability distribution. Let $C_n$ be a code such that $L(C_n(x^n)) = \lceil -\log Q^n(x^n)\rceil$. Then, $C_n$ satisfies the Kraft–McMillan inequality and
$$\lim_{n\to\infty}\frac{E_{P^n}[L(C_n(X^n))]}{n} = H(P) + I(P,Q).$$
 Proof. 
Choose $\psi_n(x^n) = 2^{L(C_n(x^n))}$, where $L(C_n(x^n))$ is the length of the codeword $C_n(x^n)$ assigned to the sequence $x^n$. Then, we have
$$\psi_n(x^n) = 2^{L(C_n(x^n))} = 2^{\lceil -\log Q^n(x^n)\rceil} \le 2\cdot 2^{-\log Q^n(x^n)} = 2/Q^n(x^n).$$
An application of Proposition 3 with $c_n = 2$ yields $\limsup_{n\to\infty} E_{P^n}[L(C_n(X^n))]/n \le H(P) + I(P,Q)$. Furthermore, we also have
$$\psi_n(x^n) = 2^{\lceil -\log Q^n(x^n)\rceil} \ge 2^{-\log Q^n(x^n)} = 1/Q^n(x^n).$$
An application of Proposition 4 with $a_n = 1$ gives $\liminf_{n\to\infty} E_{P^n}[L(C_n(X^n))]/n \ge H(P) + I(P,Q)$. □
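As an illustration of Theorem 1 (our sketch, not from the paper; the source P and the mismatched distribution Q are arbitrary small examples), the per-letter expected length of the block code with $L(C_n(x^n)) = \lceil -\log Q^n(x^n)\rceil$ can be computed exactly by enumerating $\mathcal{X}^n$ for small n, and it approaches H(P) + I(P,Q):

import itertools, math

P = {'a': 0.7, 'b': 0.2, 'c': 0.1}     # true source distribution
Q = {'a': 0.4, 'b': 0.4, 'c': 0.2}     # mismatched coding distribution

H = -sum(p * math.log2(p) for p in P.values())
KL = sum(P[x] * math.log2(P[x] / Q[x]) for x in P)

for n in (1, 2, 4, 8):
    exp_len = 0.0
    for xn in itertools.product(P, repeat=n):
        pn = math.prod(P[x] for x in xn)             # P^n(x^n)
        qn = math.prod(Q[x] for x in xn)             # Q^n(x^n)
        exp_len += pn * math.ceil(-math.log2(qn))    # length of C_n(x^n)
    print(f"n = {n}:  E[L(C_n(X^n))]/n = {exp_len / n:.4f}")
print(f"limit H(P) + I(P,Q) = {H + KL:.4f}")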

3.2. Campbell Coding Problem

Campbell's coding problem is similar to Shannon's source coding problem except that, instead of minimizing the expected code-length, one is interested in minimizing the normalized cumulant of code lengths, that is,
$$\text{Minimize } \frac{1}{\rho}\log E_P[2^{\rho L(C(X))}]$$
over all uniquely decodable codes C, for ρ > 0. This problem was shown to be equivalent to minimizing buffer overflow probability by Humblet in [27]. A lower bound on the normalized cumulants in terms of Rényi entropy was provided by Campbell [4].
Lemma 1 of [4]: Let C be a uniquely decodable code. Then,
$$\frac{1}{\rho}\log E_P[2^{\rho L(C(X))}] \ge H_\alpha(P), \qquad (12)$$
where $\alpha = 1/(1+\rho)$.
 Proof. 
Apply Proposition 1 with $\psi(x) = 2^{L(C(x))}$ and k = 1. □
Notice that, if we ignore the integer constraint on the length function, then
$$L(C(x)) = \log\frac{Z_{P,\alpha}}{P(x)^\alpha}, \qquad (13)$$
with $Z_{P,\alpha}$ as in Proposition 1, satisfies (11) and achieves the lower bound in (12). Campbell also showed that the lower bound in (12) can be achieved by encoding long sequences of symbols with code-lengths close to (13).
Theorem 1 of [4]: If $C_n$ is a uniquely decodable code such that
$$L(C_n(x^n)) = \Big\lceil \log\frac{Z_{P^n,\alpha}}{P^n(x^n)^\alpha}\Big\rceil, \qquad (14)$$
then
$$\lim_{n\to\infty}\frac{1}{n\rho}\log E_{P^n}[2^{\rho L(C_n(X^n))}] = H_\alpha(P).$$
 Proof. 
Choose $\psi_n(x^n) = 2^{L(C_n(x^n))}$. Then, from (14), we have
$$\frac{Z_{P^n,\alpha}}{P^n(x^n)^\alpha} \le \psi_n(x^n) < 2\cdot\frac{Z_{P^n,\alpha}}{P^n(x^n)^\alpha}.$$
The result follows by applying Propositions 3 and 4 with $c_n = 2$, $a_n = 1$, and Q = P. □
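A similar exact computation (our sketch; P and ρ are arbitrary) illustrates Theorem 1 of [4]: with the block lengths in (14), the normalized cumulant approaches $H_\alpha(P)$:

import itertools, math

P = {'a': 0.7, 'b': 0.2, 'c': 0.1}
rho = 1.0
alpha = 1.0 / (1.0 + rho)
Z1 = sum(p ** alpha for p in P.values())
H_alpha = math.log2(Z1) / (1 - alpha)

for n in (1, 2, 4, 8):
    Zn = Z1 ** n            # Z_{P^n,alpha} factorizes for product sources
    cum = 0.0
    for xn in itertools.product(P, repeat=n):
        pn = math.prod(P[x] for x in xn)
        L = math.ceil(math.log2(Zn / pn ** alpha))   # lengths from (14)
        cum += pn * 2.0 ** (rho * L)
    print(f"n = {n}:  (1/(n rho)) log2 E[2^(rho L)] = {math.log2(cum) / (n * rho):.4f}")
print(f"limit H_alpha(P) = {H_alpha:.4f}")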
Mismatch Case:
Redundancy in the mismatched case of Campbell's problem was studied in [5,6]. Sundaresan showed that the difference in the normalized cumulant from the minimum, when encoding according to an arbitrary uniquely decodable code, is measured by the $I_\alpha$-divergence up to a factor of 1 [5]. We provide a more general version of this result in the following.
 Proposition 5. 
Let X be a random variable that assumes values from the set $\mathcal{X}$ according to a probability distribution P. Let $\rho \in (-1,0)\cup(0,\infty)$ and let $L : \mathcal{X} \to \mathbb{Z}_+$ be an arbitrary length function that satisfies (11). Define
$$R_c(P,L,\rho) := \frac{1}{\rho}\log E_P[2^{\rho L(X)}] - \min_K \frac{1}{\rho}\log E_P[2^{\rho K(X)}], \qquad (15)$$
where the minimum is over all length functions K satisfying (11). Then, there exists a probability distribution $Q_L$ such that
$$I_\alpha(P,Q_L) - \log\eta - 1 \le R_c(P,L,\rho) \le I_\alpha(P,Q_L) - \log\eta, \qquad (16)$$
where $\eta = \sum_x 2^{-L(x)}$.
 Proof. 
Since K satisfies (11), an application of Proposition 1 with $\psi(x) = 2^{K(x)}$ gives us $\frac{1}{\rho}\log E_P[2^{\rho K(X)}] \ge H_\alpha(P)$. Since $K(x) = \lceil\log(Z_{P,\alpha}/P(x)^\alpha)\rceil$ satisfies (11) and $\psi(x) = 2^{K(x)} < 2\cdot Z_{P,\alpha}/P(x)^\alpha$, applying Proposition 3 with n = 1, $c_1 = 2$, and Q = P, we have
$$\frac{1}{\rho}\log E_P[2^{\rho K(X)}] \le H_\alpha(P) + 1,$$
that is, the minimum in (15) lies between $H_\alpha(P)$ and $H_\alpha(P) + 1$. Hence,
$$\frac{1}{\rho}\log E_P[2^{\rho L(X)}] - H_\alpha(P) - 1 \le R_c(P,L,\rho) \le \frac{1}{\rho}\log E_P[2^{\rho L(X)}] - H_\alpha(P). \qquad (17)$$
Let us now define a probability distribution $Q_L$ as
$$Q_L(x) = \frac{2^{-L(x)/\alpha}}{\sum_x 2^{-L(x)/\alpha}}.$$
Then,
$$2^{L(x)} = \frac{Z_{Q_L,\alpha}}{Q_L(x)^\alpha}\cdot\frac{1}{\sum_x 2^{-L(x)}} = \frac{Z_{Q_L,\alpha}}{Q_L(x)^\alpha}\cdot\frac{1}{\eta},$$
where $\eta = \sum_x 2^{-L(x)}$. Applying Propositions 3 and 4 with n = 1, $\psi_1(x) = 2^{L(x)}$, and $a_1 = c_1 = 1/\eta$, we obtain
$$\frac{1}{\rho}\log E_P[2^{\rho L(X)}] = H_\alpha(P) + I_\alpha(P,Q_L) - \log\eta. \qquad (18)$$
Combining (17) and (18), we obtain the desired result. □
We remark that the bound in (16) can be loose when η is small. For example, for a source with two symbols, say x and y, with code lengths L(x) = L(y) = 100, we have $R_c(P,L,\rho) \ge I_\alpha(P,Q_L) + 98$. However, if one imposes the constraint $1/2 \le \eta \le 1$, then (16) simplifies to
$$|R_c(P,L,\rho) - I_\alpha(P,Q_L)| \le 1,$$
which is (Theorem 8 of [5]). $I_\alpha(P,Q_L)$ is, in a sense, the penalty when $Q_L$ does not match the true distribution P. In view of this, a result analogous to Proposition 5 also holds for the Shannon source coding problem.

3.3. Arıkan’s Guessing Problem

Let $\mathcal{X}$ be a set of objects with $|\mathcal{X}| = m$. Bob thinks of an object X (a random variable) from $\mathcal{X}$ according to a probability distribution P. Alice guesses it by asking questions of the form "Is X = x?". The objective is to minimize the average number of guesses required for Alice to guess X correctly. By a guessing strategy (or guessing function), we mean a one-one map $G : \mathcal{X} \to \{1,\dots,m\}$, where G(x) is to be interpreted as the number of questions required to guess x correctly. Arıkan studied the ρth moment of the number of guesses and found upper and lower bounds in terms of Rényi entropy.
Theorem 1 of [13]: Let G be any guessing function. Then, for $\rho \in (-1,0)\cup(0,\infty)$,
$$\frac{1}{\rho}\log E_P[G(X)^\rho] \ge H_\alpha(P) - \log(1 + \ln m).$$
 Proof. 
Let G be any guessing function, and let ψ(x) = G(x). Then, we have $\sum_{x\in\mathcal{X}} \psi(x)^{-1} = \sum_{x\in\mathcal{X}} 1/G(x) = \sum_{i=1}^m 1/i \le 1 + \ln m$. An application of Proposition 1 with $k = 1 + \ln m$ yields the desired result. □
Arıkan showed that an optimal guessing function guesses according to the decreasing order of P-probabilities, with ties broken using an arbitrary but fixed rule [13]. He also showed that the normalized cumulant of an optimal guessing function is bounded above by the Rényi entropy. Next, we present a proof of this using our general framework.
Proposition 4 of [13]: If G* is an optimal guessing function, then, for $\rho\in(-1,0)\cup(0,\infty)$,
$$\frac{1}{\rho}\log E_P[G^*(X)^\rho] \le H_\alpha(P).$$
 Proof. 
Let us rearrange the probabilities $\{P(x), x\in\mathcal{X}\}$ in non-increasing order, say
$$p_1 \ge p_2 \ge \cdots \ge p_m.$$
Then, the optimal guessing function G* is given by $G^*(x) = i$ if $P(x) = p_i$. Let us index the elements of the set $\mathcal{X}$ as $\{x_1, x_2, \dots, x_m\}$, according to the decreasing order of their probabilities. Then, for $i\in\{1,\dots,m\}$, we have
$$\frac{Z_{P,\alpha}}{P(x_i)^\alpha} = \frac{\sum_{j=1}^m p_j^\alpha}{p_i^\alpha} \ge i = G^*(x_i). \qquad (19)$$
That is, $G^*(x) \le Z_{P,\alpha}/P(x)^\alpha$ for $x\in\mathcal{X}$. Now, an application of Proposition 3 with n = 1, Q = P, $\psi_1(x) = G^*(x)$, and $c_1 = 1$ gives us
$$\frac{1}{\rho}\log E_P[G^*(X)^\rho] = \frac{1}{\rho}\log E_P[\psi_1(X)^\rho] \le H_\alpha(P) + I_\alpha(P,P) + \log 1 = H_\alpha(P). \qquad □$$
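The optimal guessing function is easy to evaluate numerically. The sketch below (ours, not from the paper; P and ρ are arbitrary) sorts the probabilities, assigns $G^*(x_i) = i$, and checks that the normalized cumulant lies between the bounds of the two preceding results:

import math

P = sorted([0.4, 0.25, 0.15, 0.1, 0.05, 0.05], reverse=True)
m, rho = len(P), 1.0
alpha = 1.0 / (1.0 + rho)
H_alpha = math.log2(sum(p ** alpha for p in P)) / (1 - alpha)

# G*(x_i) = i when the probabilities are in non-increasing order.
cumulant = math.log2(sum(p * (i + 1) ** rho for i, p in enumerate(P))) / rho

print(f"H_alpha(P) - log2(1 + ln m) = {H_alpha - math.log2(1 + math.log(m)):.4f}")
print(f"(1/rho) log2 E[G*(X)^rho]   = {cumulant:.4f}")
print(f"H_alpha(P)                  = {H_alpha:.4f}")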
Arıkan also proved that the upper bound of Rényi entropy can be achieved by guessing long sequences of letters in an i.i.d. fashion.
Proposition 5 of [13]: Let $X_1, X_2, \dots, X_n$ be i.i.d. random variables with common distribution P, and let $G_n^*(X_1,\dots,X_n)$ be an optimal guessing function. Then, for $\rho\in(-1,0)\cup(0,\infty)$,
$$\lim_{n\to\infty}\frac{1}{n\rho}\log E_{P^n}[G_n^*(X_1,X_2,\dots,X_n)^\rho] = H_\alpha(P).$$
 Proof. 
Let $G_n^*$ be the optimal guessing function from $\mathcal{X}^n$ to $\{1,2,\dots,m^n\}$. An application of Corollary 1 with $\psi_n(x^n) = G_n^*(x^n)$ and $k_n = 1 + n\ln m$ yields
$$\liminf_{n\to\infty}\frac{1}{n\rho}\log E_{P^n}[G_n^*(X^n)^\rho] \ge H_\alpha(P) - \limsup_{n\to\infty}\frac{\log(1+n\ln m)}{n} = H_\alpha(P). \qquad (20)$$
As in the proof of the previous result, we know that $G_n^*(x^n) \le Z_{P^n,\alpha}/P^n(x^n)^\alpha$ for $x^n\in\mathcal{X}^n$. Hence, an application of Proposition 3 with $Q^n = P^n$, $\psi_n(x^n) = G_n^*(x^n)$, and $c_n = 1$ yields
$$\limsup_{n\to\infty}\frac{1}{n\rho}\log E_{P^n}[G_n^*(X^n)^\rho] \le H_\alpha(P). \qquad (21)$$
Combining (20) and (21), we obtain the desired result. □
Henceforth, we shall denote the optimal guessing function corresponding to a probability distribution P by G P .
Mismatch Case:
Suppose Alice does not know the true underlying probability distribution P, and guesses according to some guessing function G. The following proposition tells us that the penalty for deviating from the optimal guessing function can be measured by I α -divergence.
 Proposition 6. 
Let G be an arbitrary guessing function. Then, for $\rho\in(-1,0)\cup(0,\infty)$, there exists a probability distribution $Q_G$ on $\mathcal{X}$ such that
$$\frac{1}{\rho}\log E_P[G(X)^\rho] \ge H_\alpha(P) + I_\alpha(P,Q_G) - \log(1+\ln m).$$
 Proof. 
Let G be a guessing function. Define a probability distribution $Q_G$ on $\mathcal{X}$ as
$$Q_G(x) = \frac{G(x)^{-1/\alpha}}{\sum_{x\in\mathcal{X}} G(x)^{-1/\alpha}}. \qquad (22)$$
Then, we have
$$\frac{Z_{Q_G,\alpha}}{Q_G(x)^\alpha} = G(x)\sum_{x\in\mathcal{X}}\frac{1}{G(x)} \le G(x)\cdot(1+\ln m).$$
Now, an application of Proposition 4 with n = 1, $\psi_1(x) = G(x)$, and $a_1 = 1/(1+\ln m)$ yields the desired result. □
A converse result is the following.
Proposition 1 of [5]: Let $G_Q$ be an optimal guessing function associated with Q. Then, for $\rho\in(-1,0)\cup(0,\infty)$,
$$\frac{1}{\rho}\log E_P[G_Q(X)^\rho] \le H_\alpha(P) + I_\alpha(P,Q),$$
where the expectation is with respect to P.
 Proof. 
Let us rearrange the probabilities $(Q(x), x\in\mathcal{X})$ in non-increasing order, say
$$q_1 \ge q_2 \ge \cdots \ge q_m.$$
By definition, $G_Q(x) = i$ if $Q(x) = q_i$. Then, as in (19), we have $G_Q(x) \le Z_{Q,\alpha}/Q(x)^\alpha$ for $x\in\mathcal{X}$. Hence, an application of Proposition 3 with n = 1, $\psi_1(x) = G_Q(x)$, and $c_1 = 1$ proves the result. □
Observe that, given a guessing function G, if we apply the above proposition to $Q = Q_G$, where $Q_G$ is as in (22), then we obtain
$$\frac{1}{\rho}\log E_P[G_{Q_G}(X)^\rho] \le H_\alpha(P) + I_\alpha(P,Q_G).$$
Thus, the above two propositions can be combined to state the following, which is analogous to Proposition 5 (refer to Section 3.2).
Theorem 6 of [5]: Let G be an arbitrary guessing function and $G_P$ the optimal guessing function for P. For $\rho\in(-1,0)\cup(0,\infty)$, let
$$R_g(P,G,\rho) := \frac{1}{\rho}\log E_P[G(X)^\rho] - \frac{1}{\rho}\log E_P[G_P(X)^\rho].$$
Then, there exists a probability distribution $Q_G$ such that
$$|R_g(P,G,\rho) - I_\alpha(P,Q_G)| \le \log(1+\ln m).$$
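A quick numerical check of Theorem 6 of [5] (our sketch; P and the mismatched guessing order G are arbitrary examples): the redundancy $R_g$ stays within $\log(1+\ln m)$ of $I_\alpha(P,Q_G)$, with $Q_G$ built from G as in (22):

import math

P = [0.4, 0.25, 0.15, 0.1, 0.05, 0.05]   # sorted, so G_P(x_i) = i + 1
G = [3, 1, 2, 6, 5, 4]                   # an arbitrary mismatched guessing order
m, rho = len(P), 1.0
alpha = 1.0 / (1.0 + rho)

def cumulant(order):
    return math.log2(sum(p * g ** rho for p, g in zip(P, order))) / rho

def I_alpha(Q):
    # Sundaresan's divergence I_alpha(P, Q), Equation (3)
    Z = sum(q ** alpha for q in Q)
    s = sum(p * (q ** alpha / Z) ** ((alpha - 1) / alpha) for p, q in zip(P, Q))
    H = math.log2(sum(p ** alpha for p in P)) / (1 - alpha)
    return (alpha / (1 - alpha)) * math.log2(s) - H

R_g = cumulant(G) - cumulant(range(1, m + 1))
c = sum(g ** (-1.0 / alpha) for g in G)
Q_G = [g ** (-1.0 / alpha) / c for g in G]
print(f"R_g = {R_g:.4f},  I_alpha(P,Q_G) = {I_alpha(Q_G):.4f}")
print(f"|R_g - I_alpha| = {abs(R_g - I_alpha(Q_G)):.4f}  <=  log2(1+ln m) = {math.log2(1 + math.log(m)):.4f}")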

3.4. Memoryless Guessing

In memoryless guessing, the setup is similar to that of Arıkan's guessing problem except that this time the guesser Alice comes up with guesses independent of her previous guesses. Let $\hat{X}_1, \hat{X}_2, \dots$ be Alice's sequence of independent guesses drawn according to a distribution $\hat{P}$. The guessing function in this problem is defined as
$$G_{\hat{P}}(X) := \inf\{i \ge 1 : \hat{X}_i = X\},$$
that is, the number of guesses until a successful guess. Sundaresan [28], inspired by Arıkan's result, showed that the minimum expected number of guesses required is $\exp\{H_{1/2}(P)\}$, and the distribution that achieves this is, surprisingly, not the underlying distribution P but the "tilted distribution" $\hat{P}^*(x) := \sqrt{P(x)} / \sum_y \sqrt{P(y)}$.
Unlike in Arıkan's guessing problem, Huleihel et al. [19] minimized what are called factorial moments, defined for $\rho\in\mathbb{Z}_+$ as
$$V_{\hat{P},\rho}(X) = \frac{1}{\rho!}\prod_{l=0}^{\rho-1}\big(G_{\hat{P}}(X) + l\big).$$
Huleihel et al. [19] (c.f. [20]) studied the following problem.
$$\text{Minimize } E_P[V_{\hat{P},\rho}(X)]$$
over all $\hat{P}\in\mathcal{P}$, where $\mathcal{P}$ is the probability simplex, that is, $\mathcal{P} = \{(P(x))_{x\in\mathcal{X}} : P(x) \ge 0, \sum_x P(x) = 1\}$. Let $\hat{P}^*$ be the optimal solution of the above problem.
Theorem 1 of [19]: For any integer ρ > 0, we have
$$\frac{1}{\rho}\log E_P[V_{\hat{P}^*,\rho}(X)] = H_\alpha(P)$$
and $\hat{P}^*(x) = P(x)^\alpha / Z_{P,\alpha}$.
 Proof. 
From [19], we know that
$$E_P[V_{\hat{P},\rho}(X)] = E_P[\hat{P}(X)^{-\rho}]. \qquad (23)$$
Now, the result follows from Proposition 1 with $\psi(x) = \hat{P}(x)^{-1}$ and k = 1. Indeed, since $\hat{P}$ is a probability distribution, we have $\sum_{x\in\mathcal{X}} \psi(x)^{-1} = \sum_{x\in\mathcal{X}} \hat{P}(x) = 1$. Hence, $\frac{1}{\rho}\log E_P[V_{\hat{P},\rho}(X)] \ge H_\alpha(P)$, and the lower bound is attained by $\hat{P}^*(x) = P(x)^\alpha/Z_{P,\alpha}$. □
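Thanks to the identity (23), the memoryless-guessing objective can be evaluated in closed form. The sketch below (ours; P and ρ are arbitrary) compares the escort distribution with two other guessing distributions:

import math

P = [0.5, 0.3, 0.2]
rho = 2                                   # a positive integer order
alpha = 1.0 / (1.0 + rho)
H_alpha = math.log2(sum(p ** alpha for p in P)) / (1 - alpha)

def objective(Phat):
    # (1/rho) log2 E_P[Phat(X)^{-rho}], which equals (1/rho) log2 E_P[V] by (23)
    return math.log2(sum(p * ph ** (-rho) for p, ph in zip(P, Phat))) / rho

Z = sum(p ** alpha for p in P)
escort = [p ** alpha / Z for p in P]      # Phat*(x) = P(x)^alpha / Z_{P,alpha}
print(f"escort guessing distribution  : {objective(escort):.4f}")
print(f"guessing i.i.d. from P itself : {objective(P):.4f}")
print(f"uniform i.i.d. guesses        : {objective([1/3] * 3):.4f}")
print(f"H_alpha(P)                    = {H_alpha:.4f}")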
For a sequence of guesses, the above theorem can be stated in the following way. Let $\hat{X}^n = (\hat{X}_1,\dots,\hat{X}_n)$, where the $\hat{X}_i$'s are i.i.d. guesses drawn from $\mathcal{X}^n$ with distribution $\hat{P}_n$, the n-fold product distribution of $\hat{P}$ on $\mathcal{X}^n$. If the true underlying distribution is $P^n$, then
$$\lim_{n\to\infty}\frac{1}{n\rho}\log E_{P^n}[V_{\hat{P}_n^*,\rho}(X^n)] = H_\alpha(P),$$
where $\hat{P}_n^*(x^n) = P^n(x^n)^\alpha / Z_{P^n,\alpha}$. For the mismatched case, we have the following result.
 Proposition 7. 
If the true underlying probability distribution is P, but Alice assumes it to be Q and guesses according to its optimal distribution, namely $\hat{Q}^*(x) = Q(x)^\alpha/Z_{Q,\alpha}$, then
$$\frac{1}{\rho}\log E_P[V_{\hat{Q}^*,\rho}(X)] = H_\alpha(P) + I_\alpha(P,Q).$$
 Proof. 
Due to (23), the result follows easily by taking n = 1, $\psi_1(x) = \hat{Q}^*(x)^{-1}$, $c_1 = 1$, and $a_1 = 1$ in Propositions 3 and 4. □

3.5. Tasks Partitioning Problem

The encoding-of-tasks problem studied by Bunte and Lapidoth [8] can be phrased in the following way. Let $\mathcal{X}$ be a finite set of tasks. A task X is randomly drawn from $\mathcal{X}$ according to a probability distribution P, which may correspond to the frequency of occurrence of tasks. Suppose these tasks are associated with M keys; typically, $M < |\mathcal{X}|$. Due to the limited availability of keys, more than one task may be associated with a single key. When a task needs to be performed, the key associated with it is pressed. Consequently, all tasks associated with this key will be performed. The objective in this problem is to minimize the number of redundant tasks performed. Usual coding techniques suggest assigning high-probability tasks to individual keys and leaving the low-probability tasks unassigned. It may just be the case that some tasks have a higher frequency of occurrence than others; however, for an individual, all tasks can be equally important. If $M \ge |\mathcal{X}|$, then one can perform the tasks without any redundancy. However, Bunte and Lapidoth [8] showed that, even when $M < |\mathcal{X}|$, one can accomplish the tasks with much less redundancy on average, provided the underlying probability distribution is different from the uniform distribution.
Let $\mathcal{A} = \{A_1, A_2, \dots, A_M\}$ be a partition of $\mathcal{X}$ that corresponds to the assignment of tasks to M keys. Let A(x) be the cardinality of the subset containing x in the partition. We shall call A the partition function associated with the partition $\mathcal{A}$. We shall assume that ρ > 0 throughout this section, though some of the results hold even when $\rho\in(-1,0)$.
Theorem I.1 of [8]: The following results hold.
(a) For any partition of $\mathcal{X}$ of size M with partition function A, we have
$$\frac{1}{\rho}\log E_P[A(X)^\rho] \ge H_\alpha(P) - \log M.$$
(b) If $M > \log|\mathcal{X}| + 2$, then there exists a partition of $\mathcal{X}$ of size at most M with partition function A such that
$$1 \le E_P[A(X)^\rho] \le 1 + 2^{\rho(H_\alpha(P) - \log\tilde{M})},$$
where
$$\tilde{M} := (M - \log|\mathcal{X}| - 2)/4. \qquad (24)$$
 Proof. 
Part (a): Let ψ(x) = A(x). Then, we have $\sum_{x\in\mathcal{X}} \psi(x)^{-1} = \sum_{x\in\mathcal{X}} A(x)^{-1} = M$ (Proposition III-1 of [8]). Now, an application of Proposition 1 with k = M gives us the desired result.
Part (b): For the proof of this part, we refer to [8]. □
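The lower bound of Theorem I.1(a) is easy to test numerically. The sketch below (ours; the distribution and the partition are arbitrary examples) also checks the identity $\sum_x 1/A(x) = M$ (Proposition III-1 of [8]) used in the proof:

import math

P = [0.3, 0.2, 0.15, 0.1, 0.1, 0.05, 0.05, 0.05]   # distribution on X = {0,...,7}
rho = 1.0
alpha = 1.0 / (1.0 + rho)
H_alpha = math.log2(sum(p ** alpha for p in P)) / (1 - alpha)

partition = [[0], [1, 2], [3, 4, 5, 6, 7]]          # an arbitrary partition, M = 3
M = len(partition)
A = {x: len(cell) for cell in partition for x in cell}
assert abs(sum(1.0 / A[x] for x in range(len(P))) - M) < 1e-12

cum = math.log2(sum(P[x] * A[x] ** rho for x in range(len(P)))) / rho
print(f"(1/rho) log2 E[A(X)^rho] = {cum:.4f}")
print(f"H_alpha(P) - log2 M      = {H_alpha - math.log2(M):.4f}")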
Bunte and Lapidoth also proved the following limit results.
Theorem I.2 of [8]: For every $n \ge 1$, there exists a partition $\mathcal{A}_n$ of $\mathcal{X}^n$ of size at most $M^n$ with an associated partition function $A_n$ such that
$$\lim_{n\to\infty} E_{P^n}[A_n(X^n)^\rho] = \begin{cases} 1, & \text{if } \log M > H_\alpha(P), \\ \infty, & \text{if } \log M < H_\alpha(P), \end{cases}$$
where $X^n := (X_1,\dots,X_n)$.
It should be noted that, in a general set-up of the tasks partitioning problem, it is not necessary that the partition size be of the form $M^n$; it can be some $M_n$ (a function of n). Consequently, we have the following result.
 Proposition 8. 
Let $\{M_n\}$ be a sequence of positive integers such that $M_n \ge n\log|\mathcal{X}| + 3$, and such that
$$\gamma := \lim_{n\to\infty}\frac{\log M_n}{n}$$
exists. Then, there exists a sequence of partitions of $\mathcal{X}^n$ of size at most $M_n$ with partition functions $A_n$ such that
(a) $\lim_{n\to\infty} E_{P^n}[A_n(X^n)^\rho] = 1$ if $\gamma > H_\alpha(P)$;
(b) $\lim_{n\to\infty} \frac{1}{n\rho}\log E_{P^n}[A_n(X^n)^\rho] = H_\alpha(P) - \gamma$ if $\gamma < H_\alpha(P)$.
 Proof. 
Let
$$\tilde{M}_n := (M_n - n\log|\mathcal{X}| - 2)/4. \qquad (25)$$
We first claim that $\lim_{n\to\infty}\frac{\log\tilde{M}_n}{n} = \gamma$. Indeed, since $\frac{\log(1/4)}{n} \le \frac{\log\tilde{M}_n}{n} < \frac{\log M_n}{n}$, when γ = 0 we have $\lim_{n\to\infty}\frac{\log\tilde{M}_n}{n} = 0$. On the other hand, when γ > 0, we can find an $n_\gamma$ such that $M_n \ge 2^{\gamma n/2}$ for all $n \ge n_\gamma$. Thus, we have $\lim_{n\to\infty} n/M_n = 0$. Consequently,
$$\lim_{n\to\infty}\frac{\log\tilde{M}_n}{n} = \lim_{n\to\infty}\frac{\log M_n}{n} + \lim_{n\to\infty}\frac{1}{n}\log\Big(1 - \frac{n\log|\mathcal{X}|+2}{M_n}\Big) - \lim_{n\to\infty}\frac{2}{n} = \gamma.$$
This proves the claim. From Theorem I.1 of [8], for any $n\ge1$ and $M_n > n\log|\mathcal{X}| + 2$, there exists a partition $\mathcal{A}_n$ of $\mathcal{X}^n$ of size at most $M_n$ such that the associated partition function $A_n$ satisfies
$$E_{P^n}[A_n(X^n)^\rho] \le 1 + 2^{\rho(H_\alpha(P^n) - \log\tilde{M}_n)} = 1 + 2^{n\rho\big(H_\alpha(P) - \frac{\log\tilde{M}_n}{n}\big)}.$$
Part (a): When $\gamma > H_\alpha(P)$, let us choose $\epsilon = (\gamma - H_\alpha(P))/2 > 0$. Then, there exists an $n_\epsilon$ such that $\frac{\log\tilde{M}_n}{n} \ge \gamma - \epsilon$ for all $n\ge n_\epsilon$. Thus, we have
$$E_{P^n}[A_n(X^n)^\rho] \le 1 + 2^{n\rho(H_\alpha(P) - \gamma + \epsilon)} = 1 + 2^{-n\rho(\gamma - H_\alpha(P))/2} \quad \text{for all } n\ge n_\epsilon.$$
Consequently, $\limsup_{n\to\infty} E_{P^n}[A_n(X^n)^\rho] \le 1$. We also note that $A_n(x^n) \ge 1$ for all $x^n\in\mathcal{X}^n$.
Thus, $\liminf_{n\to\infty} E_{P^n}[A_n(X^n)^\rho] \ge 1$.
Part (b): For any ϵ > 0, there exists an $n_\epsilon$ such that $\frac{\log\tilde{M}_n}{n} \ge \gamma - \epsilon$ for all $n\ge n_\epsilon$. Thus, we have
$$E_{P^n}[A_n(X^n)^\rho] \le 1 + 2^{n\rho(H_\alpha(P) - \gamma + \epsilon)} \le 2^{1 + n\rho(H_\alpha(P) - \gamma + \epsilon)} \quad \text{for all } n\ge n_\epsilon.$$
Hence, we have
$$\limsup_{n\to\infty}\frac{1}{n\rho}\log E_{P^n}[A_n(X^n)^\rho] \le H_\alpha(P) - \gamma + \epsilon \quad \text{for every } \epsilon > 0.$$
Furthermore, an invocation of Corollary 1 with $\psi_n(x^n) = A_n(x^n)$ and $k_n = \sum_{x^n\in\mathcal{X}^n} 1/A_n(x^n) = M_n$ gives us
$$\liminf_{n\to\infty}\frac{1}{n\rho}\log E_{P^n}[A_n(X^n)^\rho] \ge H_\alpha(P) - \limsup_{n\to\infty}\frac{\log M_n}{n} = H_\alpha(P) - \gamma. \qquad □$$
 Remark 1. 
It is interesting to note that, when $\gamma < H_\alpha(P)$, in addition to the fact that $\lim_{n\to\infty} E_{P^n}[A_n(X^n)^\rho] = \infty$, we also have $E_{P^n}[A_n(X^n)^\rho] \approx 2^{n\rho(H_\alpha(P) - \gamma)}$ for large values of n.
Mismatch Case:
Let us now suppose that one does not know the true underlying probability distribution P, but arbitrarily partitions $\mathcal{X}$. Then, the penalty due to such a partition can be measured by the $I_\alpha$-divergence, as stated in the following proposition.
 Proposition 9. 
Let $\mathcal{A}$ be a partition of $\mathcal{X}$ of size M with partition function A. Then, there exists a probability distribution $Q_A$ on $\mathcal{X}$ such that
$$\frac{1}{\rho}\log E_P[A(X)^\rho] = H_\alpha(P) + I_\alpha(P,Q_A) - \log M.$$
 Proof. 
Define a probability distribution $Q_A = \{Q_A(x), x\in\mathcal{X}\}$ as
$$Q_A(x) := \frac{A(x)^{-1/\alpha}}{\sum_{x\in\mathcal{X}} A(x)^{-1/\alpha}}.$$
Then,
$$\frac{Z_{Q_A,\alpha}}{Q_A(x)^\alpha} = A(x)\sum_{x\in\mathcal{X}}\frac{1}{A(x)} = A(x)\cdot M,$$
where the last equality follows from Proposition III.1 of [8]. Rearranging terms, we have $A(x) = \frac{Z_{Q_A,\alpha}}{M\cdot Q_A(x)^\alpha}$. Hence, an application of Propositions 3 and 4 with n = 1, $\psi_1(x) = A(x)$, $c_1 = a_1 = 1/M$, and $Q = Q_A$ yields the desired result. □
A converse result is the following.
 Proposition 10. 
Let X be a random task from $\mathcal{X}$ following distribution P and $\rho\in(0,\infty)$. Let Q be another distribution on $\mathcal{X}$. If $M > \log|\mathcal{X}| + 2$, then there exists a partition $\mathcal{A}_Q$ (with an associated partition function $A_Q$) of $\mathcal{X}$ of size at most M such that
$$E_P[A_Q(X)^\rho] \le 1 + 2^{\rho(H_\alpha(P) + I_\alpha(P,Q) - \log\tilde{M})},$$
where $\tilde{M}$ is as in (24).
 Proof. 
Similar to the proof of Theorem I.1 of [8]. □

4. Ordered Tasks Partitioning Problem

In Bunte–Lapidoth's tasks partitioning problem [8], one is interested in the average number of tasks associated with a key. However, in some scenarios, it might be more important to minimize the average number of redundant tasks performed before the intended task. To achieve this, the tasks associated with a key should be performed in decreasing order of their probabilities. With such a strategy in place, this problem draws a parallel with Arıkan's guessing problem [13].
Let $\mathcal{A} = \{A_1, A_2, \dots, A_M\}$ be a partition of $\mathcal{X}$ that corresponds to the assignment of tasks to M keys. Let N(x) be the number of tasks performed until and including the intended task x. We refer to $N(\cdot)$ as the count function associated with the partition $\mathcal{A}$. We suppress the dependence of N on $\mathcal{A}$ for notational convenience. If X denotes the intended task, then we are interested in the ρth moment of the number of tasks performed, that is, $E_P[N(X)^\rho]$, where ρ > 0.
 Lemma 1. 
For any count function associated with a partition of size M, we have
$$\sum_{x\in\mathcal{X}}\frac{1}{N(x)} \le M\Big(1 + \ln\frac{|\mathcal{X}|}{M}\Big). \qquad (26)$$
 Proof. 
For a partition $\mathcal{A} = \{A_1, A_2, \dots, A_M\}$ of $\mathcal{X}$, observe that
$$\sum_{x\in\mathcal{X}}\frac{1}{N(x)} = \Big(1+\frac{1}{2}+\cdots+\frac{1}{|A_1|}\Big) + \cdots + \Big(1+\frac{1}{2}+\cdots+\frac{1}{|A_M|}\Big).$$
Since $1+\frac{1}{2}+\cdots+\frac{1}{|A_k|} \le 1 + \ln|A_k|$ for any $k\in\{1,\dots,M\}$, we have
$$\sum_{x\in\mathcal{X}}\frac{1}{N(x)} \le M + \ln(|A_1|\cdots|A_M|) = M\big[1+\ln(|A_1|\cdots|A_M|)^{1/M}\big] \overset{(a)}{\le} M\Big[1+\ln\frac{|A_1|+\cdots+|A_M|}{M}\Big] = M\Big[1+\ln\frac{|\mathcal{X}|}{M}\Big],$$
where (a) follows from the AM–GM inequality. □
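A small numerical check of Lemma 1 (ours; the partition is an arbitrary example). Within a cell of size s, the counts N(x) are exactly 1, 2, ..., s, so the left-hand side of (26) is a sum of harmonic numbers:

import math

partition = [[0, 1, 2], [3, 4], [5, 6, 7, 8, 9]]    # an arbitrary partition of 10 tasks
M = len(partition)
size = sum(len(cell) for cell in partition)          # |X|

lhs = sum(sum(1.0 / i for i in range(1, len(cell) + 1)) for cell in partition)
bound = M * (1 + math.log(size / M))
print(f"sum_x 1/N(x) = {lhs:.4f}  <=  M(1 + ln(|X|/M)) = {bound:.4f}")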
 Proposition 11. 
Let X be a random task from $\mathcal{X}$ following distribution P. Then, the following hold:
(a) For any partition of $\mathcal{X}$ of size M, we have
$$\frac{1}{\rho}\log E_P[N(X)^\rho] \ge H_\alpha(P) - \log\{M[1+\ln(|\mathcal{X}|/M)]\}.$$
(b) Let $M > \log|\mathcal{X}| + 2$. Then, there exists a partition of $\mathcal{X}$ of size at most M with count function N such that
$$1 \le E_P[N(X)^\rho] \le 1 + 2^{\rho(H_\alpha(P) - \log\tilde{M})},$$
where $\tilde{M}$ is as in (24).
 Proof. 
Part (a): Applying Proposition 1 with $k = M[1+\ln(|\mathcal{X}|/M)]$ and ψ(x) = N(x), we obtain the desired result.
Part (b): If A and N are, respectively, the partition and count functions of a partition $\mathcal{A}$, then we have $1 \le N(x) \le A(x)$ for $x\in\mathcal{X}$. Once we observe this, the proof is the same as that of Theorem I.1(b) of [8]. □
 Proposition 12. 
Let $\{M_n\}$ be a sequence of positive integers such that $M_n \ge n\log|\mathcal{X}| + 3$, and $\gamma := \lim_{n\to\infty}\log M_n/n$ exists. Then, there exists a sequence of partitions of $\mathcal{X}^n$ of size at most $M_n$ with count functions $N_n$ such that
(a) $\lim_{n\to\infty} E_{P^n}[N_n(X^n)^\rho] = 1$ if $\gamma > H_\alpha(P)$;
(b) $\lim_{n\to\infty}\frac{1}{n\rho}\log E_{P^n}[N_n(X^n)^\rho] = H_\alpha(P) - \gamma$ if $\gamma < H_\alpha(P)$.
 Proof. 
Similar to the proof of Proposition 8. □
Remark 2. 
(a) If we choose the trivial partition, namely $\mathcal{A}_n = \{\mathcal{X}^n\}$, then the ordered tasks partitioning problem simplifies to Arıkan's guessing problem; that is, we have $M_n = 1$, $N_n(x^n) = G_n(x^n)$, and (26) simplifies to
$$\sum_{x^n\in\mathcal{X}^n}\frac{1}{G_n(x^n)} \le 1 + n\ln|\mathcal{X}|.$$
Hence, all results pertaining to Arıkan's guessing problem can be derived from the ordered tasks partitioning problem.
(b) Structurally, the ordered tasks partitioning problem differs from Bunte–Lapidoth's problem only through the factor $1+\ln(|\mathcal{X}|/M)$ in the lower bound of Proposition 11(a). While this factor matters for one-shot results, it vanishes asymptotically for a sequence of i.i.d. tasks.
Mismatch Case:
Let us now suppose that one does not know the true underlying probability distribution P, but arbitrarily partitions $\mathcal{X}$ and executes the tasks within each subset of this partition in an arbitrary order. Then, the penalty due to such a partition and ordering can be measured by the $I_\alpha$-divergence, as stated in the following propositions.
 Proposition 13. 
Let $\mathcal{A}$ be a partition of $\mathcal{X}$ of size M with count function N. Then, there exists a probability distribution $Q_N$ on $\mathcal{X}$ such that
$$\frac{1}{\rho}\log E_P[N(X)^\rho] \ge H_\alpha(P) + I_\alpha(P,Q_N) - \log\{M[1+\ln(|\mathcal{X}|/M)]\}.$$
 Proof. 
Define a probability distribution $Q_N = \{Q_N(x), x\in\mathcal{X}\}$ as
$$Q_N(x) := \frac{N(x)^{-1/\alpha}}{\sum_{x\in\mathcal{X}} N(x)^{-1/\alpha}}.$$
Then, by Lemma 1, we have
$$\frac{Z_{Q_N,\alpha}}{Q_N(x)^\alpha} = N(x)\sum_{x\in\mathcal{X}}\frac{1}{N(x)} \le N(x)\cdot M[1+\ln(|\mathcal{X}|/M)].$$
Now, an application of Proposition 4 with n = 1, $\psi_1(x) = N(x)$, $Q = Q_N$, and $a_1 = 1/(M[1+\ln(|\mathcal{X}|/M)])$ yields the desired result. □
A converse result is the following.
 Proposition 14. 
Let X be a random task from $\mathcal{X}$ following distribution P. Let Q be another distribution on $\mathcal{X}$. If $M > \log|\mathcal{X}| + 2$, then there exists a partition $\mathcal{A}_Q$ (with an associated count function $N_Q$) of $\mathcal{X}$ of size at most M such that
$$E_P[N_Q(X)^\rho] \le 1 + 2^{\rho(H_\alpha(P) + I_\alpha(P,Q) - \log\tilde{M})} \quad \text{if } \rho\in(0,\infty),$$
where $\tilde{M}$ is as in (24).
 Proof. 
Identical to the proof of Proposition 11(b). □

5. Operational Connection among the Problems

In this section, we establish an operational relationship among the five problems (refer to Figure 1) that we studied in the previous sections. The relationship we are interested in is: does knowing an optimal or asymptotically optimal solution to one problem help us find the same in another? In fact, we end up showing that, under suitable conditions, all five problems form an equivalence class with respect to the above-mentioned relation.
In this section, we assume ρ > 0 . First, we make the following observations:
  • Among the five problems discussed in the previous sections, only Arıkan's guessing and Huleihel et al.'s memoryless guessing have a unique optimal solution; the others only have asymptotically optimal solutions.
  • The optimal solution of Huleihel et al.'s memoryless guessing problem is the α-scaled measure of the underlying probability distribution P. Hence, knowledge of the optimal solution of this problem implies knowledge of an optimal (or asymptotically optimal) solution of all the other problems.
  • Between Bunte–Lapidoth's problem and the ordered tasks problem, an asymptotically optimal solution of one yields that of the other. The partitioning lemma (Proposition III-2 of [8]) is the key result in these two problems, as it guarantees the existence of asymptotically optimal partitions in both.

5.1. Campbell’s Coding and Arıkan’s Guessing

An attempt to find a close relationship between these two problems was made, for example, by Hanawal and Sundaresan (Section II of [17]). Here, we show the equivalence between asymptotically optimal solutions of these two problems.
 Proposition 15. 
An asymptotically optimal solution exists for Campbell’s source coding problem if and only if an asymptotically optimal solution exists for Arıkan’s guessing problem.
 Proof. 
Let $\{G_n^*\}$ be an asymptotically optimal sequence of guessing functions, that is,
$$\lim_{n\to\infty}\frac{1}{n\rho}\log E_{P^n}[G_n^*(X^n)^\rho] = H_\alpha(P).$$
Define
$$Q_{G_n^*}(x^n) := c_n^{-1}\cdot G_n^*(x^n)^{-1},$$
where $c_n$ is the normalization constant. Notice that
$$c_n = \sum_{x^n} G_n^*(x^n)^{-1} \le 1 + n\ln|\mathcal{X}|.$$
Let us now define
$$L_{G_n^*}(x^n) := \lceil -\log Q_{G_n^*}(x^n)\rceil.$$
Then, by (Proposition 1 of [17]),
$$L_{G_n^*}(x^n) \le \log G_n^*(x^n) + 1 + \log c_n.$$
Hence,
$$2^{\rho L_{G_n^*}(x^n)} \le 2^\rho\cdot c_n^\rho\cdot G_n^*(x^n)^\rho \le 2^\rho\cdot(1+n\ln|\mathcal{X}|)^\rho\cdot G_n^*(x^n)^\rho.$$
Thus, we have
$$\limsup_{n\to\infty}\frac{1}{n\rho}\log E_{P^n}[2^{\rho L_{G_n^*}(X^n)}] \le \limsup_{n\to\infty}\frac{1}{n\rho}\log E_{P^n}[G_n^*(X^n)^\rho] = H_\alpha(P).$$
We observe that
$$\sum_{x^n\in\mathcal{X}^n} 2^{-L_{G_n^*}(x^n)} = \sum_{x^n\in\mathcal{X}^n} 2^{-\lceil -\log Q_{G_n^*}(x^n)\rceil} \le \sum_{x^n\in\mathcal{X}^n} 2^{\log Q_{G_n^*}(x^n)} = 1.$$
Consequently, from (12), we have
$$\liminf_{n\to\infty}\frac{1}{n\rho}\log E_{P^n}[2^{\rho L_{G_n^*}(X^n)}] \ge H_\alpha(P).$$
Thus, $\{L_{G_n^*}\}$ is an asymptotically optimal sequence of length functions for Campbell's coding problem.
Conversely, given an asymptotically optimal sequence of length functions $\{L_n^*\}$ for Campbell's coding problem, define
$$Q_{L_n^*}(x^n) := \frac{2^{-L_n^*(x^n)}}{\sum_{y^n} 2^{-L_n^*(y^n)}}.$$
Let $G_{L_n^*}$ be the guessing function on $\mathcal{X}^n$ that guesses according to the decreasing order of $Q_{L_n^*}$-probabilities. Then, by (Proposition 2 of [17]),
$$\log G_{L_n^*}(x^n) \le L_n^*(x^n).$$
Thus,
$$\limsup_{n\to\infty}\frac{1}{n\rho}\log E_{P^n}[G_{L_n^*}(X^n)^\rho] \le \lim_{n\to\infty}\frac{1}{n\rho}\log E_{P^n}[2^{\rho L_n^*(X^n)}] = H_\alpha(P).$$
Furthermore, from Theorem 1 of [13], we have
$$\liminf_{n\to\infty}\frac{1}{n\rho}\log E_{P^n}[G_{L_n^*}(X^n)^\rho] \ge H_\alpha(P) - \limsup_{n\to\infty}\frac{1}{n}\log(1+n\ln|\mathcal{X}|) = H_\alpha(P).$$
This completes the proof. □
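The forward construction in this proof is simple enough to run at block length 1. The sketch below (ours; the guessing order is an arbitrary example) builds $Q_G$ and the lengths $L_G(x) = \lceil -\log Q_G(x)\rceil$, and checks the Kraft–McMillan inequality and the length bound used above:

import math

G = {'a': 1, 'b': 2, 'c': 3, 'd': 4}       # guess in the order a, b, c, d
c = sum(1.0 / g for g in G.values())       # normalization constant, c <= 1 + ln m
Q_G = {x: (1.0 / g) / c for x, g in G.items()}
L = {x: math.ceil(-math.log2(q)) for x, q in Q_G.items()}

kraft = sum(2.0 ** -l for l in L.values())
assert kraft <= 1.0                        # Kraft-McMillan inequality (11)
for x in G:                                # L(x) <= log2 G(x) + 1 + log2 c
    assert L[x] <= math.log2(G[x]) + 1 + math.log2(c) + 1e-9
print("lengths:", L, " Kraft sum:", round(kraft, 4))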

5.2. Arıkan’s Guessing and Bunte–Lapidoth’s Tasks Partitioning Problem

Bracher et al. found a close connection between Arıkan’s guessing problem and Bunte–Lapidoth’s tasks partitioning problem in the context of distributed storage [29]. In this section, we establish a different relation between these problems.
 Proposition 16. 
An asymptotically optimal solution of Arıkan's guessing problem gives rise to an asymptotically optimal solution of the tasks partitioning problem.
 Proof. 
Let $\{G_n^*\}$ be an asymptotically optimal sequence of guessing functions. Define
$$Q_{G_n^*}(x^n) = d_n^{-1} G_n^*(x^n)^{-1/\alpha},$$
where $d_n$ is the normalization constant. Let $A_{G_n^*}$ be the partition function satisfying $A_{G_n^*}(x^n) \le \lceil \beta_n\cdot Z_{Q_{G_n^*},\alpha}/Q_{G_n^*}(x^n)^\alpha\rceil$ guaranteed by (Proposition III-2 of [8]), where
$$\beta_n = \frac{2}{M_n - n\log|\mathcal{X}| - 2}.$$
Thus, we have
$$A_{G_n^*}(x^n)^\rho \le \Big\lceil \beta_n\frac{Z_{Q_{G_n^*},\alpha}}{Q_{G_n^*}(x^n)^\alpha}\Big\rceil^\rho \overset{(a)}{\le} \big\lceil \beta_n(1+n\ln|\mathcal{X}|)G_n^*(x^n)\big\rceil^\rho \overset{(b)}{\le} 1 + 2^\rho\beta_n^\rho\cdot(1+n\ln|\mathcal{X}|)^\rho\cdot G_n^*(x^n)^\rho,$$
where (a) holds because $Z_{Q_{G_n^*},\alpha} = d_n^{-\alpha}\sum_{x^n\in\mathcal{X}^n} G_n^*(x^n)^{-1} \le d_n^{-\alpha}(1+n\ln|\mathcal{X}|)$, and (b) holds because $\lceil x\rceil^\rho \le 1 + 2^\rho x^\rho$ for x > 0. Hence,
$$E_{P^n}[A_{G_n^*}(X^n)^\rho] \le 1 + 2^\rho\beta_n^\rho\cdot(1+n\ln|\mathcal{X}|)^\rho\cdot E_{P^n}[G_n^*(X^n)^\rho] \overset{(c)}{\le} 1 + 2^{2\rho}\Big(\frac{1+n\ln|\mathcal{X}|}{M_n - n\log|\mathcal{X}| - 2}\Big)^\rho\cdot 2^{n\rho H_\alpha(P)} = 1 + 2^{n\rho\big(H_\alpha(P) + \frac{\log(1+n\ln|\mathcal{X}|)}{n} - \frac{\log\tilde{M}_n}{n}\big)},$$
where $\tilde{M}_n$ is as in (25), and inequality (c) follows from Proposition 4 of [13] proved in Section 3. Thus, if $M_n$ is such that $M_n \ge n\log|\mathcal{X}| + 3$ and if $\gamma := \lim_{n\to\infty}(\log M_n)/n$ exists and $\gamma > H_\alpha(P)$, then we have
$$\limsup_{n\to\infty} E_{P^n}[A_{G_n^*}(X^n)^\rho] \le 1.$$
Since $E_{P^n}[A_{G_n^*}(X^n)^\rho] \ge 1$, we have $\liminf_{n\to\infty} E_{P^n}[A_{G_n^*}(X^n)^\rho] \ge 1$. When $\gamma < H_\alpha(P)$, arguing along the lines of the proof of Proposition 8(b), it can be shown that
$$\lim_{n\to\infty}\frac{1}{n\rho}\log E_{P^n}[A_{G_n^*}(X^n)^\rho] = H_\alpha(P) - \gamma. \qquad □$$
The reverse implication of the above result does not always hold, due to the additional parameter $M_n$ in the tasks partitioning problem. For example, if $M_n = |\mathcal{X}|^n$ and $A_n(x^n) = 1$ for every $x^n$, the partition does not provide any information about the underlying distribution. As a consequence, we will not be able to conclude anything about the optimal (or asymptotically optimal) solutions of the other problems. However, if $M_n$ is such that $\log M_n$ increases sub-linearly, then it does help us find asymptotically optimal solutions of the other problems.
 Proposition 17. 
An asymptotically optimal sequence of partition functions $\{A_n\}$ with partition sizes $\{M_n\}$ for the tasks partitioning problem gives rise to an asymptotically optimal solution for the guessing problem if $M_n \ge n\log|\mathcal{X}| + 3$ and $\lim_{n\to\infty}(\log M_n)/n = 0$.
 Proof. 
By hypothesis,
$$\lim_{n\to\infty}\frac{1}{n\rho}\log E_{P^n}[A_n(X^n)^\rho] = H_\alpha(P).$$
For every $A_n$, define the probability distribution
$$Q_{A_n}(x^n) := c_n^{-1} A_n(x^n)^{-1},$$
where $c_n := \sum_{x^n} A_n(x^n)^{-1} = M_n$. Let $G_{A_n}^*$ be the guessing function that guesses according to the decreasing order of $Q_{A_n}$-probabilities. Then, by (Proposition 2 of [17]), we have
$$G_{A_n}^*(x^n) \le Q_{A_n}(x^n)^{-1} = c_n A_n(x^n) = M_n A_n(x^n).$$
Hence,
$$\limsup_{n\to\infty}\frac{1}{n\rho}\log E_{P^n}[G_{A_n}^*(X^n)^\rho] \le \limsup_{n\to\infty}\frac{1}{n}\log M_n + \limsup_{n\to\infty}\frac{1}{n\rho}\log E_{P^n}[A_n(X^n)^\rho] = H_\alpha(P).$$
Furthermore, an application of Theorem 1 of [13] gives us
$$\liminf_{n\to\infty}\frac{1}{n\rho}\log E_{P^n}[G_{A_n}^*(X^n)^\rho] \ge H_\alpha(P) - \limsup_{n\to\infty}\frac{1}{n}\log(1+n\ln|\mathcal{X}|) = H_\alpha(P).$$
This completes the proof. □

5.3. Huleihel et al.’s Memoryless Guessing and Campbell’s Coding

We already know that, if one knows the optimal solution of Huleihel et al.’s memoryless guessing problem, that is, the α -scaled measure of the underlying probability distribution P, one has knowledge about the optimal (or asymptotically optimal) solution of Campbell’s coding problem. In this section, we prove a converse statement. We first prove the following lemma.
 Lemma 2. 
Let $L_n^*$ denote the length function corresponding to an optimal solution of Campbell's coding problem on the alphabet set $\mathcal{X}^n$ endowed with the product distribution $P^n$. Then, $\sum_{x^n\in\mathcal{X}^n} 2^{-L_n^*(x^n)} \ge 1/2$.
 Proof. 
Suppose $\sum_{x^n\in\mathcal{X}^n} 2^{-L_n^*(x^n)} < 1/2$. Then, we must have $L_n^*(x^n) \ge 2$ for every $x^n\in\mathcal{X}^n$. Define $\hat{L}_n(x^n) := L_n^*(x^n) - 1$. We observe that $\sum_{x^n\in\mathcal{X}^n} 2^{-\hat{L}_n(x^n)} < 1$; that is, the length function $\hat{L}_n(\cdot)$ satisfies (11). Hence, there exists a code $C_n$ for $\mathcal{X}^n$ such that $L(C_n(x^n)) = \hat{L}_n(x^n)$. Then, for ρ > 0, we have $\log E_{P^n}[2^{\rho L_n^*(X^n)}] > \log E_{P^n}[2^{\rho\hat{L}_n(X^n)}]$, a contradiction. □
 Proposition 18. 
An asymptotically optimal solution for Huleihel et al.’s memoryless guessing problem exists if an asymptotically optimal solution exists for Campbell’s coding problem.
 Proof. 
Let $\{L_n^*, n\ge1\}$ denote a sequence of asymptotically optimal length functions for Campbell's coding problem, that is,
$$\lim_{n\to\infty}\frac{1}{n\rho}\log E_{P^n}[2^{\rho L_n^*(X^n)}] = H_\alpha(P).$$
Let us define
$$Q_{L_n^*}(x^n) := \frac{2^{-L_n^*(x^n)/\alpha}}{\sum_{\bar{x}^n\in\mathcal{X}^n} 2^{-L_n^*(\bar{x}^n)/\alpha}}.$$
Then, we have
$$\begin{aligned}
I_\alpha(P^n, Q_{L_n^*}) &= \frac{\alpha}{1-\alpha}\log\sum_{x^n\in\mathcal{X}^n} P^n(x^n)\Big[\frac{Q_{L_n^*}(x^n)^\alpha}{\sum_{\hat{x}^n\in\mathcal{X}^n} Q_{L_n^*}(\hat{x}^n)^\alpha}\Big]^{\frac{\alpha-1}{\alpha}} - H_\alpha(P^n) \\
&= \frac{\alpha}{1-\alpha}\log\sum_{x^n\in\mathcal{X}^n} P^n(x^n)\Big[\frac{2^{-L_n^*(x^n)}}{\sum_{\hat{x}^n\in\mathcal{X}^n} 2^{-L_n^*(\hat{x}^n)}}\Big]^{-\rho} - nH_\alpha(P) \\
&= \frac{1}{\rho}\log E_{P^n}[2^{\rho L_n^*(X^n)}] + \log\zeta_n - nH_\alpha(P),
\end{aligned}$$
where $\zeta_n = \sum_{\hat{x}^n\in\mathcal{X}^n} 2^{-L_n^*(\hat{x}^n)}$. Hence,
$$\lim_{n\to\infty}\frac{1}{n}I_\alpha(P^n, Q_{L_n^*}) = \lim_{n\to\infty}\frac{1}{n\rho}\log E_{P^n}[2^{\rho L_n^*(X^n)}] + \lim_{n\to\infty}\frac{1}{n}\log\zeta_n - H_\alpha(P) \overset{(a)}{=} 0,$$
where (a) holds because $\zeta_n\in[1/2, 1]$ (refer to Lemma 2). If we assume the underlying probability distribution to be $Q_{L_n^*}$ instead of $P^n$, and perform memoryless guessing according to the escort distribution of $Q_{L_n^*}$, namely $\hat{Q}_n^*(x^n) = Q_{L_n^*}(x^n)^\alpha / Z_{Q_{L_n^*},\alpha}$, then, due to Proposition 7, we have
$$\lim_{n\to\infty}\frac{1}{n\rho}\log E_{P^n}[V_{\hat{Q}_n^*,\rho}(X^n)] = H_\alpha(P) + \lim_{n\to\infty}\frac{1}{n}I_\alpha(P^n, Q_{L_n^*}) = H_\alpha(P). \qquad □$$

6. Summary and Conclusions

This paper was motivated by the need for a unified framework for the problems of source coding, guessing, and tasks partitioning. To that end, we formulated a general moment-minimization problem in the IID lossless case and observed that the optimal value of its objective function is bounded below by Rényi entropy. We then re-established all achievable lower bounds in each of the above-mentioned problems using the generalized framework. It is interesting to note that the optimal solution does not depend on the moment function ψ, but only on the underlying probability distribution P and the order of the moment ρ (refer to Proposition 1). We also presented a unified framework for the mismatched version of the above-mentioned problems. This framework not only led to refinements of known theorems, but also helped us identify a few new results. We went on to extend the tasks partitioning problem by asking a more practical question and solved it using the unified theory. Finally, we established an equivalence among these problems, in the sense that an asymptotically optimal solution of one problem yields an asymptotically optimal solution of all the other problems. Although the relationship between source coding and guessing is well known in the literature [30,31,32,33], their connection to the tasks partitioning problem, and the connections in the mismatched versions of the problems, are new.
Our unified framework also has the potential to act as a general tool-set and provide insights for similar problems in information theory. For example, in Section 4, this framework enabled us to solve a more general tasks partitioning problem, namely the ordered tasks partitioning problem. The presented unified approach can also be extended and explored further in several ways. These include:
(a) Extension to general alphabet sets: The guessing problem was originally studied for a countably infinite alphabet set by Massey [12]. Courtade and Verdú have studied the source coding problem for a countably infinite alphabet set with a cumulant generating function of codeword lengths as a design criterion [31]. It would be interesting to see whether the memoryless guessing and tasks partitioning problems can also be formulated for countably infinite alphabet sets, and whether the relationships among the problems can be extended.
(b) More general sources: The relationship between source coding and guessing is very well known in the literature. The relationship between guessing and source coding in the 'with distortion' case was established for a finite alphabet by Merhav and Arıkan [14] and for a countably infinite alphabet by Hanawal and Sundaresan [34]. The relationship between guessing and Campbell's coding in the universal case was established by Sundaresan [35]. It would be interesting to see whether these can be extended to memoryless guessing and tasks partitioning as well, possibly in a unified manner.
(c) Applications: Arıkan showed an application of the guessing problem in a sequential decoding problem [13]. Humblet showed that the cumulant of code-lengths arises in minimizing the probability of buffer overflow in source coding problems [27]. Rezaee et al. [36], Salamatian et al. [20], and Sundaresan [28] show the application of guessing in the security aspects of password-protected systems. Our unified framework has the potential to help solve problems that arise in real-life situations and fall within this framework.

Author Contributions

Conceptualization: M.A.K., A.S., A.T., A.K. and G.D.M.; Formal analysis: M.A.K. and A.S.; Investigation: M.A.K., A.S., A.T. and G.D.M.; Writing, original draft: M.A.K., A.S., A.T., A.K. and G.D.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Shannon, C.E. A Mathematical Theory of Communication. Bell Syst. Tech. J. 1948, 27, 379–423.
2. Rényi, A. On measures of entropy and information. In Proceedings of the Fourth Berkeley Symposium on Mathematical Statistics and Probability, Volume 1: Contributions to the Theory of Statistics, Berkeley, CA, USA, 20 June–30 July 1960; pp. 547–561.
3. Aczél, J.; Daróczy, Z. On Measures of Information and Their Characterizations. In Mathematics in Science and Engineering; Academic Press: Cambridge, MA, USA, 1975; Volume 115.
4. Campbell, L.L. A coding theorem and Rényi's entropy. Inf. Control 1965, 8, 423–429.
5. Sundaresan, R. Guessing Under Source Uncertainty. IEEE Trans. Inf. Theory 2007, 53, 269–287.
6. Blumer, A.C.; McEliece, R.J. The Rényi redundancy of generalized Huffman codes. IEEE Trans. Inf. Theory 1988, 34, 1242–1249.
7. Sundaresan, R. A measure of discrimination and its geometric properties. In Proceedings of the IEEE International Symposium on Information Theory, Lausanne, Switzerland, 30 June–5 July 2002; p. 264.
8. Bunte, C.; Lapidoth, A. Encoding Tasks and Rényi Entropy. IEEE Trans. Inf. Theory 2014, 60, 5065–5076.
9. Kumar, M.A.; Sundaresan, R. Minimization problems based on relative α-entropy I: Forward Projection. IEEE Trans. Inf. Theory 2015, 61, 5063–5080.
10. Lutwak, E.; Yang, D.; Zhang, G. Cramér–Rao and moment-entropy inequalities for Rényi entropy and generalized Fisher information. IEEE Trans. Inf. Theory 2005, 51, 473–478.
11. Ashok Kumar, M.; Sundaresan, R. Minimization Problems Based on Relative α-Entropy II: Reverse Projection. IEEE Trans. Inf. Theory 2015, 61, 5081–5095.
12. Massey, J.L. Guessing and entropy. In Proceedings of the 1994 IEEE International Symposium on Information Theory, Trondheim, Norway, 27 June–1 July 1994; p. 204.
13. Arikan, E. An inequality on guessing and its application to sequential decoding. IEEE Trans. Inf. Theory 1996, 42, 99–105.
14. Arikan, E.; Merhav, N. Guessing subject to distortion. IEEE Trans. Inf. Theory 1998, 44, 1041–1056.
15. Pfister, C.; Sullivan, W. Rényi entropy, guesswork moments, and large deviations. IEEE Trans. Inf. Theory 2004, 50, 2794–2800.
16. Malone, D.; Sullivan, W. Guesswork and entropy. IEEE Trans. Inf. Theory 2004, 50, 525–526.
17. Hanawal, M.K.; Sundaresan, R. Guessing Revisited: A Large Deviations Approach. IEEE Trans. Inf. Theory 2011, 57, 70–78.
18. Christiansen, M.M.; Duffy, K.R. Guesswork, Large Deviations, and Shannon Entropy. IEEE Trans. Inf. Theory 2013, 59, 796–802.
19. Huleihel, W.; Salamatian, S.; Médard, M. Guessing with limited memory. In Proceedings of the 2017 IEEE International Symposium on Information Theory (ISIT), Aachen, Germany, 25–30 June 2017; pp. 2253–2257.
20. Salamatian, S.; Huleihel, W.; Beirami, A.; Cohen, A.; Médard, M. Why Botnets Work: Distributed Brute-Force Attacks Need No Synchronization. IEEE Trans. Inf. Forensics Secur. 2019, 14, 2288–2299.
21. Arikan, E.; Merhav, N. Joint source-channel coding and guessing with application to sequential decoding. IEEE Trans. Inf. Theory 1998, 44, 1756–1769.
22. Csiszár, I.; Shields, P. Information Theory and Statistics: A Tutorial; Foundations and Trends in Communications and Information Theory; Now Publishers: Delft, The Netherlands, 2004.
23. Tsallis, C.; Mendes, R.S.; Plastino, A.R. The role of constraints within generalized non-extensive statistics. Phys. A 1998, 261, 534–554.
24. Dupuis, P.; Ellis, R.S. A Weak Convergence Approach to the Theory of Large Deviations; John Wiley & Sons: Hoboken, NJ, USA, 1997.
25. Shayevitz, O. On Rényi measures and hypothesis testing. In Proceedings of the 2011 IEEE International Symposium on Information Theory, St. Petersburg, Russia, 31 July–5 August 2011; pp. 894–898.
26. Cover, T.M.; Thomas, J.A. Elements of Information Theory, 2nd ed.; John Wiley & Sons: Hoboken, NJ, USA, 2012.
27. Humblet, P. Generalization of Huffman coding to minimize the probability of buffer overflow. IEEE Trans. Inf. Theory 1981, 27, 230–232.
28. Hanawal, M.K.; Sundaresan, R. Randomised Attacks on Passwords. DRDO-IISc Programme on Advanced Research in Mathematical Engineering, 2010. Available online: https://ece.iisc.ac.in/rajeshs/reprints/TR-PME-2010-11.pdf (accessed on 14 August 2022).
29. Bracher, A.; Hof, E.; Lapidoth, A. Guessing Attacks on Distributed-Storage Systems. IEEE Trans. Inf. Theory 2019, 65, 6975–6998.
30. Salamatian, S.; Liu, L.; Beirami, A.; Médard, M. Mismatched guesswork and one-to-one codes. In Proceedings of the 2019 IEEE Information Theory Workshop (ITW), Gotland, Sweden, 25–28 August 2019; pp. 1–5.
31. Courtade, T.A.; Verdú, S. Cumulant generating function of codeword lengths in optimal lossless compression. In Proceedings of the 2014 IEEE International Symposium on Information Theory, Honolulu, HI, USA, 29 June–4 July 2014; pp. 2494–2498.
32. Kosut, O.; Sankar, L. Asymptotics and non-asymptotics for universal fixed-to-variable source coding. IEEE Trans. Inf. Theory 2017, 63, 3757–3772.
33. Beirami, A.; Calderbank, R.; Christiansen, M.M.; Duffy, K.R.; Médard, M. A characterization of guesswork on swiftly tilting curves. IEEE Trans. Inf. Theory 2018, 60, 2850–2871.
34. Hanawal, M.K.; Sundaresan, R. Guessing and Compression Subject to Distortion; IndraStra Global: Sheridan, WY, USA, 2010.
35. Sundaresan, R. Guessing Based On Length Functions. In Proceedings of the 2007 IEEE International Symposium on Information Theory, Cambridge, MA, USA, 1–6 July 2007; pp. 716–719.
36. Rezaee, A.; Beirami, A.; Makhdoumi, A.; Médard, M.; Duffy, K. Guesswork subject to a total entropy budget. In Proceedings of the 2017 55th Annual Allerton Conference on Communication, Control, and Computing (Allerton), Monticello, IL, USA, 3–6 October 2017; pp. 1008–1015.
Figure 1. Relationships established among the five problems. A directed arrow from problem A to problem B means knowing optimal or asymptotically optimal solution of A helps us find the same in B.