On Two-Stage Guessing

Stationary memoryless sources produce two correlated random sequences $X^n$ and $Y^n$. A guesser seeks to recover $X^n$ in two stages, by first guessing $Y^n$ and then $X^n$. The contributions of this work are twofold: (1) We characterize the least achievable exponential growth rate (in $n$) of any positive $\rho$-th moment of the total number of guesses when $Y^n$ is obtained by applying a deterministic function $f$ component-wise to $X^n$. We prove that, depending on $f$, the least exponential growth rate in the two-stage setup is lower than when guessing $X^n$ directly. We further propose a simple Huffman code-based construction of a function $f$ that is a viable candidate for the minimization of the least exponential growth rate in the two-stage guessing setup. (2) We characterize the least achievable exponential growth rate of the $\rho$-th moment of the total number of guesses required to recover $X^n$ when Stage 1 need not end with a correct guess of $Y^n$ and without assumptions on the stationary memoryless sources producing $X^n$ and $Y^n$.


I. INTRODUCTION
Pioneered by Massey [1], McEliece and Yu [2], and Arikan [3], the guessing problem is concerned with recovering the realization of a finite-valued random variable $X$ using a sequence of yes-no questions of the form "Is $X = x_1$?", "Is $X = x_2$?", etc., until correct. A commonly used performance metric for this problem is the $\rho$-th moment of the number of guesses until $X$ is revealed (where $\rho$ is a positive parameter). When guessing a length-$n$ i.i.d. sequence $X^n$ (a tuple of $n$ components that are drawn independently according to the law of $X$), the $\rho$-th moment of the number of guesses required to recover the realization of $X^n$ grows exponentially with $n$, and the exponential growth rate is referred to as the guessing exponent. The least achievable guessing exponent was derived by Arikan [3], and it equals $\rho$ times the order-$\frac{1}{1+\rho}$ Rényi entropy of $X$. Arikan's result is based on the optimal deterministic guessing strategy, which proceeds in descending order of the probability mass function (PMF) of $X^n$.
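Arikan's optimal strategy can be illustrated numerically. The following Python sketch (with a small hypothetical PMF) verifies, by brute force over all guessing orders, that guessing in descending order of probability minimizes the $\rho$-th moment of the number of guesses:

```python
import itertools
import numpy as np

def moment(p, order, rho):
    """rho-th guessing moment when symbols are guessed in the given order
    (a permutation of the alphabet): E[G^rho] = sum_x p(x) * rank(x)^rho."""
    return sum(p[x] * (k + 1) ** rho for k, x in enumerate(order))

p = [0.5, 0.3, 0.15, 0.05]   # hypothetical PMF
rho = 2.0

# brute force over all 4! guessing orders
best = min(moment(p, perm, rho) for perm in itertools.permutations(range(4)))

# descending-probability order (Arikan's optimal deterministic strategy)
descending = sorted(range(4), key=lambda x: -p[x])
assert np.isclose(best, moment(p, descending, rho))
```

This is a toy check for a single random variable; the paper's asymptotic statements concern i.i.d. sequences $X^n$, where the same descending-order strategy is applied to $P_{X^n}$.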
In this paper, we propose and analyze a two-stage guessing strategy to recover the realization of an i.i.d. sequence $X^n$. In Stage 1, the guesser is allowed to produce guesses of an ancillary sequence $Y^n$ that is jointly i.i.d. with $X^n$. In Stage 2, the guesser must recover $X^n$. We show the following:
1) When $Y^n$ is generated by component-wise application of a mapping $f \colon \mathcal{X} \to \mathcal{Y}$ to $X^n$, and the guesser is required to recover $Y^n$ in Stage 1 before proceeding to Stage 2, the least achievable guessing exponent (i.e., the exponential growth rate of the $\rho$-th moment of the total number of guesses in the two stages) equals the maximum of the order-$\frac{1}{1+\rho}$ Rényi entropy of $f(X)$ and the order-$\frac{1}{1+\rho}$ Arimoto-Rényi conditional entropy of $X$ given $f(X)$, scaled by $\rho$. We derive (1) in Section III and summarize our analysis in Theorem 1. We also propose a Huffman code-based construction of a function $f$ that is a viable candidate for the minimization of (1) among all maps from $\mathcal{X}$ to $\mathcal{Y}$.
2) When the pair $(X^n, Y^n)$ is jointly i.i.d. and Stage 1 need not end with a correct guess of $Y^n$, we characterize the least achievable guessing exponent by a variational formula (Section IV, Theorem 3).
Studying the information-like properties of the guessing exponents derived in this paper is a subject of future research (see Section V). Besides its theoretic implications, the guessing problem is also applied practically in communications and cryptography. This includes sequential decoding [5], [6], measuring password strength [7], the confidentiality of communication channels [8], and resilience against brute-force attacks [9]. It is also strongly related to task encoding and (lossless and lossy) compression (see, e.g., [10], [11], [12], [13], [14], [15]).

II. PRELIMINARIES
We begin with some notation and preliminary material that are essential for the presentation in Section III ahead. The analysis in Section IV relies on the method of types (see, e.g., Chapter 11 in [20]).
Throughout the paper, we use the following notation:
• For $m, n \in \mathbb{N}$ with $m < n$, let $[m:n] := \{m, \ldots, n\}$;
• Let $P$ be a PMF that is defined on a finite set $\mathcal{X}$. For $k \in [1:|\mathcal{X}|]$, let $G_P(k)$ denote the sum of its $k$ largest point masses, and let $p_{\max} := G_P(1)$. For $n \in \mathbb{N}$, denote by $\mathcal{P}_n$ the set of all PMFs defined on $[1:n]$.
The next definitions and properties are related to majorization and Rényi measures.
Definition 1 (Majorization). Consider PMFs $P$ and $Q$, defined on the same (finite or countably infinite) set $\mathcal{X}$. We say that $Q$ majorizes $P$ if $G_P(k) \leq G_Q(k)$ for every $k$, i.e., if, for every $k$, the sum of the $k$ largest point masses of $Q$ is at least as large as that of $P$. If $P$ and $Q$ are defined on finite sets of different cardinalities, then the PMF defined on the smaller set is zero-padded to match the cardinality of the larger set.
By Definition 1, a unit mass majorizes any other distribution; on the other hand, the uniform distribution (on a finite set) is majorized by any other distribution of equal support.
Definition 2 (Schur-convexity/concavity). A function $f \colon \mathcal{P}_n \to \mathbb{R}$ is Schur-convex if, for every $P, Q \in \mathcal{P}_n$ such that $Q$ majorizes $P$, we have $f(P) \leq f(Q)$; it is Schur-concave if $-f$ is Schur-convex.

Definition 3 (Rényi entropy [21]). Let $X$ be a random variable taking values on a finite or countably infinite set $\mathcal{X}$ according to the PMF $P_X$. For $\alpha \in (0,1) \cup (1,\infty)$, the order-$\alpha$ Rényi entropy $H_\alpha(X)$ of $X$ is given by
$$H_\alpha(X) = \frac{1}{1-\alpha} \, \log \sum_{x \in \mathcal{X}} P_X^\alpha(x),$$
where, unless explicitly given, the base of $\log(\cdot)$ can be chosen arbitrarily, with $\exp(\cdot)$ denoting its inverse function. Via continuous extension, $H_1(X) = H(X)$, where $H(X)$ is the (Shannon) entropy of $X$.
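As a numerical illustration of Definition 3 and its continuous extension (the PMFs and the helper `renyi_entropy` below are hypothetical examples, not part of the paper):

```python
import numpy as np

def renyi_entropy(p, alpha):
    """Order-alpha Renyi entropy in nats; alpha = 1 gives Shannon entropy."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    if np.isclose(alpha, 1.0):
        return float(-np.sum(p * np.log(p)))           # continuous extension
    return float(np.log(np.sum(p ** alpha)) / (1.0 - alpha))

p = [0.6, 0.3, 0.1]
# Continuous extension: H_alpha -> H (Shannon) as alpha -> 1
assert abs(renyi_entropy(p, 1.001) - renyi_entropy(p, 1.0)) < 1e-3

# Schur concavity: the uniform PMF is majorized by every PMF on the same
# support, hence it has the maximal Renyi entropy among them
u = [1/3, 1/3, 1/3]
assert renyi_entropy(p, 0.5) < renyi_entropy(u, 0.5)
```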
Definition 4 (Arimoto-Rényi conditional entropy [23]). Let $(X, Y)$ be a pair of random variables taking values on a product set $\mathcal{X} \times \mathcal{Y}$ according to the PMF $P_{XY}$. When $X$ is finite or countably infinite, the order-$\alpha$ Arimoto-Rényi conditional entropy $H_\alpha(X|Y)$ of $X$ given $Y$ is defined, for $\alpha \in (0,1) \cup (1,\infty)$, as follows:
$$H_\alpha(X|Y) = \frac{\alpha}{1-\alpha} \, \log \, \mathbb{E}\Bigg[ \bigg( \sum_{x \in \mathcal{X}} P_{X|Y}^\alpha(x|Y) \bigg)^{\frac{1}{\alpha}} \Bigg]. \tag{7}$$
When $Y$ is finite, (7) can be simplified as follows:
$$H_\alpha(X|Y) = \frac{\alpha}{1-\alpha} \, \log \sum_{y \in \mathcal{Y}} \bigg( \sum_{x \in \mathcal{X}} P_{XY}^\alpha(x, y) \bigg)^{\frac{1}{\alpha}}.$$
If $\alpha \in \{0, 1, \infty\}$ and $Y$ is finite, then $H_\alpha(X|Y)$ is defined via continuous extension; in particular, $H_1(X|Y) = H(X|Y)$ is the conditional (Shannon) entropy. The properties of the Arimoto-Rényi conditional entropy were studied in [24], [25].
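A small Python sketch of the finite-$\mathcal{Y}$ form of the Arimoto-Rényi conditional entropy, together with the property (used later in the paper) that conditioning cannot increase it; the joint PMF below is a hypothetical example:

```python
import numpy as np

def arimoto_conditional(p_xy, alpha):
    """Order-alpha Arimoto-Renyi conditional entropy H_alpha(X|Y), in nats.
    p_xy: joint PMF as a 2-D array indexed [x, y]; alpha != 1."""
    inner = np.sum(p_xy ** alpha, axis=0) ** (1.0 / alpha)  # one term per y
    return float(alpha / (1.0 - alpha) * np.log(np.sum(inner)))

def renyi(p, alpha):
    """Order-alpha (unconditional) Renyi entropy in nats."""
    p = p[p > 0]
    return float(np.log(np.sum(p ** alpha)) / (1.0 - alpha))

p_xy = np.array([[0.4, 0.1],
                 [0.1, 0.4]])   # hypothetical joint PMF of (X, Y)
alpha = 0.5
# Conditioning cannot increase the Arimoto-Renyi entropy: H_a(X|Y) <= H_a(X)
assert arimoto_conditional(p_xy, alpha) <= renyi(p_xy.sum(axis=1), alpha)
```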
Finally, for $n, m \in \mathbb{N}$, $\mathcal{F}_{n,m}$ denotes the set of all deterministic functions $f \colon [1:n] \to [1:m]$. If $m < n$, then a function $f \in \mathcal{F}_{n,m}$ is not one-to-one (i.e., it is a non-injective function).

III. TWO-STAGE GUESSING: Y = f(X)
Throughout this section, let $f \in \mathcal{F}_{|\mathcal{X}|,m}$ with $m < |\mathcal{X}|$, and consider the following two-stage guessing procedure (Algorithm 1):
a) Stage 1: $Y^n := (f(X_1), \ldots, f(X_n)) \in \mathcal{Y}^n$ is guessed by asking questions of the form "Is $Y^n = y_1$?", "Is $Y^n = y_2$?", ... until correct. Note that, as $|\mathcal{Y}| = m < |\mathcal{X}|$, this stage cannot reveal $X^n$.
b) Stage 2: Based on $Y^n$, the sequence $X^n \in \mathcal{X}^n$ is guessed by asking questions of the form "Is $X^n = \hat{x}_1$?", "Is $X^n = \hat{x}_2$?", ... until correct. If $Y^n = y^n$, the guesses $\hat{x}_k := (\hat{x}_{k,1}, \ldots, \hat{x}_{k,n})$ are restricted to $\mathcal{X}$-sequences that are mapped to $y^n$, i.e., $(f(\hat{x}_{k,1}), \ldots, f(\hat{x}_{k,n})) = y^n$.
The guesses $y_1, y_2, \ldots$ in Stage 1 are in descending order of probability as measured by $P_{Y^n}$ (i.e., $y_1$ is the most probable sequence under $P_{Y^n}$; $y_2$ is the second most probable; and so on; ties are resolved arbitrarily). We denote the index of $y^n \in \mathcal{Y}^n$ in this guessing order by $g_{Y^n}(y^n)$. Note that because every sequence $y^n$ is guessed exactly once, $g_{Y^n}(\cdot)$ is a bijection from $\mathcal{Y}^n$ to $[1:m^n]$; we refer to such bijections as ranking functions. The guesses $\hat{x}_1, \hat{x}_2, \ldots$ in Stage 2 depend on $Y^n$ and are in descending order of the posterior $P_{X^n|Y^n}(\cdot|Y^n)$.
Following our notation from Stage 1, the index of $x^n \in \mathcal{X}^n$ in the guessing order induced by $Y^n = y^n$ is denoted by $g_{X^n|Y^n}(x^n|y^n)$. Note that, for every $y^n \in \mathcal{Y}^n$, the function $g_{X^n|Y^n}(\cdot|y^n)$ is a ranking function on $\mathcal{X}^n$. Using $g_{Y^n}(\cdot)$ and $g_{X^n|Y^n}(\cdot|\cdot)$, the total number of guesses $G_2(X^n)$ in Algorithm 1 can be expressed as
$$G_2(X^n) = g_{Y^n}(Y^n) + g_{X^n|Y^n}(X^n|Y^n), \tag{13}$$
where $g_{Y^n}(Y^n)$ and $g_{X^n|Y^n}(X^n|Y^n)$ are the numbers of guesses in Stages 1 and 2, respectively.
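To make Algorithm 1 and the decomposition of $G_2(X^n)$ concrete, the following Python sketch enumerates a toy i.i.d. source (hypothetical PMF and map `f`) and evaluates the two ranking functions by brute force; it illustrates the setup rather than providing a scalable implementation:

```python
from itertools import product
import numpy as np

p_x = np.array([0.5, 0.25, 0.125, 0.125])   # hypothetical PMF of X on {0..3}
f = lambda x: x // 2                        # hypothetical map X -> Y, |Y| = 2
n = 3

# enumerate all length-n sequences and their product probabilities
xs = list(product(range(4), repeat=n))
p_xn = {x: float(np.prod([p_x[c] for c in x])) for x in xs}
p_yn = {}
for x, p in p_xn.items():
    y = tuple(f(c) for c in x)
    p_yn[y] = p_yn.get(y, 0.0) + p

# Stage 1 ranking function g_{Y^n}: descending order of P_{Y^n}
y_order = sorted(p_yn, key=lambda y: -p_yn[y])
g_y = {y: k + 1 for k, y in enumerate(y_order)}

def g_x_given_y(x):
    """Stage 2 rank of x^n among sequences consistent with its y^n,
    in descending order of the posterior (equivalently, of P_{X^n})."""
    y = tuple(f(c) for c in x)
    cand = sorted((z for z in xs if tuple(f(c) for c in z) == y),
                  key=lambda z: -p_xn[z])
    return cand.index(x) + 1

# g_{Y^n} is a ranking function: a bijection onto [1 : |Y|^n]
assert sorted(g_y.values()) == list(range(1, 2 ** n + 1))

rho = 1.0
total = sum(p * (g_y[tuple(f(c) for c in x)] + g_x_given_y(x)) ** rho
            for x, p in p_xn.items())     # E[G_2(X^n)^rho] as in (13)
assert total > 0
```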
Observe that guessing in descending order of probability minimizes the $\rho$-th moment of the number of guesses in both stages of Algorithm 1. By [3], for every $\rho > 0$, the guessing moments $\mathbb{E}\big[g_{Y^n}^\rho(Y^n)\big]$ and $\mathbb{E}\big[g_{X^n|Y^n}^\rho(X^n|Y^n)\big]$ can be (upper and lower) bounded in terms of $H_{\frac{1}{1+\rho}}(Y)$ and $H_{\frac{1}{1+\rho}}(X|Y)$ as follows:
$$\big(1 + n \ln m\big)^{-\rho} \exp\Big(n \rho H_{\frac{1}{1+\rho}}(Y)\Big) \;\leq\; \mathbb{E}\big[g_{Y^n}^\rho(Y^n)\big] \;\leq\; \exp\Big(n \rho H_{\frac{1}{1+\rho}}(Y)\Big), \tag{14a}$$
$$\big(1 + n \ln |\mathcal{X}|\big)^{-\rho} \exp\Big(n \rho H_{\frac{1}{1+\rho}}(X|Y)\Big) \;\leq\; \mathbb{E}\big[g_{X^n|Y^n}^\rho(X^n|Y^n)\big] \;\leq\; \exp\Big(n \rho H_{\frac{1}{1+\rho}}(X|Y)\Big). \tag{14b}$$
Combining (13) and (14), we next establish bounds on $\mathbb{E}[G_2(X^n)^\rho]$. In light of (13), we begin with bounds on the $\rho$-th power of a sum.
Lemma 1. Let $k \in \mathbb{N}$, and let $\{a_i\}_{i=1}^k$ be a non-negative sequence. For every $\rho > 0$,
$$s_1(k, \rho) \sum_{i=1}^k a_i^\rho \;\leq\; \Big( \sum_{i=1}^k a_i \Big)^{\!\rho} \;\leq\; s_2(k, \rho) \sum_{i=1}^k a_i^\rho, \tag{15}$$
where
$$s_1(k, \rho) := \min\big\{1, \, k^{\rho-1}\big\}, \tag{16}$$
and
$$s_2(k, \rho) := \max\big\{1, \, k^{\rho-1}\big\}. \tag{17}$$
Moreover:
• If $\rho \geq 1$, then the left and right inequalities in (15) hold with equality if, respectively, $k-1$ of the $a_i$'s are equal to zero or $a_1 = \ldots = a_k$;
• if $\rho \in (0, 1)$, then the left and right inequalities in (15) hold with equality if, respectively, $a_1 = \ldots = a_k$ or $k-1$ of the $a_i$'s are equal to zero.
Proof: See Appendix A.
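A quick numerical check of Lemma 1 is possible once the constants are fixed; the sketch below assumes $s_1(k,\rho) = \min\{1, k^{\rho-1}\}$ and $s_2(k,\rho) = \max\{1, k^{\rho-1}\}$, which is consistent with the equality conditions stated in the lemma:

```python
import numpy as np

def s1(k, rho):
    """Lower constant in the power-of-sum bound (15)."""
    return min(1.0, k ** (rho - 1.0))

def s2(k, rho):
    """Upper constant in the power-of-sum bound (15)."""
    return max(1.0, k ** (rho - 1.0))

rng = np.random.default_rng(1)
for _ in range(1000):
    k = int(rng.integers(1, 6))
    a = rng.random(k)                      # random non-negative sequence
    for rho in (0.3, 1.0, 2.5):
        lhs = s1(k, rho) * np.sum(a ** rho)
        mid = np.sum(a) ** rho
        rhs = s2(k, rho) * np.sum(a ** rho)
        assert lhs <= mid + 1e-12 and mid <= rhs + 1e-12
```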
Using the shorthand notation $k_1(\rho) := s_1(2, \rho)$ and $k_2(\rho) := s_2(2, \rho)$, we apply Lemma 1 in conjunction with (13) and (14) (and the fact that $|\mathcal{X}| \geq m$) to bound $\mathbb{E}\big[G_2^\rho(X^n)\big]$ as follows:
$$k_1(\rho)\, \big(1 + n \ln |\mathcal{X}|\big)^{-\rho} \Big( \mathrm{e}^{\, n \rho H_{\frac{1}{1+\rho}}(Y)} + \mathrm{e}^{\, n \rho H_{\frac{1}{1+\rho}}(X|Y)} \Big) \;\leq\; \mathbb{E}\big[G_2^\rho(X^n)\big] \;\leq\; k_2(\rho) \Big( \mathrm{e}^{\, n \rho H_{\frac{1}{1+\rho}}(Y)} + \mathrm{e}^{\, n \rho H_{\frac{1}{1+\rho}}(X|Y)} \Big). \tag{18}$$
The bounds in (18) are asymptotically tight as $n$ tends to infinity. To see this, note that
$$\lim_{n \to \infty} \frac{1}{n} \ln \Big( k_1(\rho) \big(1 + n \ln |\mathcal{X}|\big)^{-\rho} \Big) = 0, \qquad \lim_{n \to \infty} \frac{1}{n} \ln k_2(\rho) = 0,$$
and therefore, for all $\rho > 0$,
$$\lim_{n \to \infty} \frac{1}{n} \ln \mathbb{E}\big[G_2^\rho(X^n)\big] = \lim_{n \to \infty} \frac{1}{n} \ln \Big( \mathrm{e}^{\, n \rho H_{\frac{1}{1+\rho}}(Y)} + \mathrm{e}^{\, n \rho H_{\frac{1}{1+\rho}}(X|Y)} \Big). \tag{20}$$
Since the sum of the two exponentials on the right-hand side (RHS) of (20) is dominated by the larger exponential growth rate, it follows that
$$\lim_{n \to \infty} \frac{1}{n} \ln \Big( \mathrm{e}^{\, n \rho H_{\frac{1}{1+\rho}}(Y)} + \mathrm{e}^{\, n \rho H_{\frac{1}{1+\rho}}(X|Y)} \Big) = \rho \max\Big\{ H_{\frac{1}{1+\rho}}(Y), \; H_{\frac{1}{1+\rho}}(X|Y) \Big\}, \tag{21}$$
and thus, by (20) and (21),
$$\lim_{n \to \infty} \frac{1}{n} \ln \mathbb{E}\big[G_2^\rho(X^n)\big] = \rho \max\Big\{ H_{\frac{1}{1+\rho}}(Y), \; H_{\frac{1}{1+\rho}}(X|Y) \Big\}. \tag{22}$$
As a sanity check, note that if $m = |\mathcal{X}|$ and $f$ is the identity function $\mathrm{id}$ (i.e., $\mathrm{id}(x) = x$ for all $x \in \mathcal{X}$), then $X^n$ is revealed in Stage 1, and (with Stage 2 obsolete) the $\rho$-th moment of the total number of guesses grows exponentially with rate $\rho H_{\frac{1}{1+\rho}}(X)$. This is in agreement with the RHS of (22), as $H_{\frac{1}{1+\rho}}(\mathrm{id}(X)) = H_{\frac{1}{1+\rho}}(X)$ and $H_{\frac{1}{1+\rho}}(X \,|\, \mathrm{id}(X)) = 0$. We summarize our results so far in Theorem 1 below.
Theorem 1. Let $X^n = (X_1, \ldots, X_n)$ be a sequence of i.i.d. random variables, each drawn according to the PMF $P_X$ of support $\mathcal{X} := [1:|\mathcal{X}|]$; let $m \in [1:|\mathcal{X}|]$ and $f \in \mathcal{F}_{|\mathcal{X}|,m}$; and define $Y^n := (f(X_1), \ldots, f(X_n))$. When guessing $X^n$ according to Algorithm 1 (i.e., after first guessing $Y^n$ in descending order of probability as measured by $P_{Y^n}(\cdot)$, and then proceeding in descending order of probability as measured by $P_{X^n|Y^n}(\cdot|Y^n)$ for guessing $X^n$), the $\rho$-th moment of the total number of guesses $G_2(X^n)$ satisfies:
a) the lower and upper bounds in (18) for all $n \in \mathbb{N}$ and $\rho > 0$;
b) the asymptotic characterization (22) for $\rho > 0$ and $n \to \infty$.
A Suboptimal and Simple Construction of $f$ in Algorithm 1 and Bounds on $\mathbb{E}\big[G_2^\rho(X^n)\big]$

Having established in Theorem 1 that
$$\lim_{n \to \infty} \frac{1}{n} \ln \mathbb{E}\big[G_2^\rho(X^n)\big] = E_2(X; \rho, m, f) := \rho \max\Big\{ H_{\frac{1}{1+\rho}}\big(f(X)\big), \; H_{\frac{1}{1+\rho}}\big(X \,|\, f(X)\big) \Big\}, \tag{23}$$
we now seek to minimize the exponent $E_2(X; \rho, m, f)$ in the RHS of (23) (for a given PMF $P_X$ and a given value of $m$). We proceed by considering a sub-optimal and simple construction of $f$, which yields explicit bounds as a function of the PMF $P_X$ and the value of $m$; this construction also does not depend on $\rho$. The function $f$ is constructed by relying on the Huffman algorithm for lossless compression of $X := X_1$. This construction also (almost) achieves the maximal mutual information $I(X; f(X))$ among all deterministic functions $f \colon \mathcal{X} \to [1:m]$ (this issue is elaborated in the sequel). Heuristically, apart from its simplicity, this sub-optimal construction is motivated by the expectation that it reduces the guesswork in Stage 2 of Algorithm 1, where one wishes to guess $X^n$ based on $Y^n$. In this setting, it is shown that the upper and lower bounds on $\mathbb{E}\big[G_2^\rho(X^n)\big]$ are (almost) asymptotically tight in terms of their exponential growth rate in $n$. Furthermore, these exponential bounds demonstrate a reduction in the required number of guesses for $X^n$, as compared to the optimal one-stage guessing of $X^n$.
In the sequel, we rely on the construction of a deterministic function $f_m^* \in \mathcal{F}_{|\mathcal{X}|,m}$ by the Huffman code-based Algorithm 2. Observe that, due to [26] (Corollary 3 and Lemma 5) and because $f_m^*(\cdot)$ operates component-wise on the i.i.d. vector $X^n$, the lower bound (28) on $\frac{1}{n} I(X^n; Y^{*n})$ applies, where $Y^{*n} := (f_m^*(X_1), \ldots, f_m^*(X_n))$. From the proof of [26] (Theorem 3), we further have the multiplicative bound (30). Note that, by [26] (Lemma 1), the maximization problem in the RHS of (29) is strongly NP-hard [27]. This means that, unless P = NP, there is no polynomial-time algorithm that, given an arbitrarily small $\varepsilon > 0$, produces a deterministic function $f^{(\varepsilon)} \in \mathcal{F}_{|\mathcal{X}|,m}$ whose mutual information $I\big(X; f^{(\varepsilon)}(X)\big)$ is within $\varepsilon$ of that maximum. We next examine the performance of our candidate function $f_m^*$ when applied in Algorithm 1.
To that end, we first bound $\mathbb{E}\big[g_{Y^n}^\rho(Y^{*n})\big]$ in terms of the Rényi entropy of a suitably defined random variable $\widetilde{X}_m \in [1:m]$, constructed in Algorithm 3 below. In the construction, we assume without loss of generality that $P_X(1) \geq \ldots \geq P_X(|\mathcal{X}|)$, and we denote the PMF of $\widetilde{X}_m$ by $Q := R_m(P_X)$.
Algorithm 3 (construction of the PMF $Q := R_m(P_X)$ of the random variable $\widetilde{X}_m$):
• If $m = 1$, then $Q := R_1(P_X)$ is defined to be a point mass at one;
• otherwise, $Q$ is obtained from $P_X$ as in [26], [28], where $m^*$ is the maximal integer $i \in [1:m-1]$ that satisfies the threshold condition (33).
Algorithm 3 was introduced in [26], [28]. The link between the Rényi entropy of $\widetilde{X}_m$ and that of $Y^* := f_m^*(X)$ was given in Eq. (34) of [14]: the two entropies differ by at most $v(\alpha)$, where the function $v \colon (0, \infty) \to (0, \log 2)$ is depicted in Figure 1. Combining (14a) and (35) yields, for all $\rho > 0$, the bounds in (38), where, due to (30), the difference between the exponential growth rates (in $n$) of the lower and upper bounds in (38) equals $\rho\, v\big(\frac{1}{1+\rho}\big)$, and it can be verified to satisfy the bounds in (39) (see (30) and (36)); the leftmost and rightmost inequalities of (39) are asymptotically tight as $\rho \to 0^+$ and $\rho \to \infty$, respectively.
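The Huffman code-based construction can be sketched as follows: starting from the $|\mathcal{X}|$ point masses, the two smallest masses are repeatedly merged (as in the transition from $m+1$ to $m$ nodes described in Appendix B) until $m$ nodes remain, and $f$ maps each symbol to the node containing it. This is a simplified illustrative reading of Algorithm 2, not the paper's exact pseudocode:

```python
import heapq

def huffman_reduce(p, m):
    """Reduce a PMF to m point masses by repeatedly merging the two
    smallest masses (as in the Huffman algorithm). Returns a dict mapping
    each original symbol to the index of its merged node, i.e., an induced
    deterministic map f: X -> {0, ..., m-1}."""
    heap = [(pi, [i]) for i, pi in enumerate(p)]
    heapq.heapify(heap)
    while len(heap) > m:
        p1, syms1 = heapq.heappop(heap)   # smallest mass
        p2, syms2 = heapq.heappop(heap)   # second smallest mass
        heapq.heappush(heap, (p1 + p2, syms1 + syms2))
    f = {}
    for label, (_, symbols) in enumerate(heap):
        for i in symbols:
            f[i] = label
    return f

p = [0.4, 0.3, 0.2, 0.1]      # hypothetical PMF, sorted in descending order
f = huffman_reduce(p, 2)
assert len(set(f.values())) == 2
assert f[2] == f[3]           # the two smallest masses are merged first
```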
By inserting (38) into (18) and applying Inequality (39), it follows that the non-asymptotic bounds (40a)-(40b) hold for all $\rho > 0$. Consequently, by letting $n$ tend to infinity and relying on (22), we obtain (41). We next simplify the above bounds by evaluating the maxima in (41) as a function of $m$ and $\rho$.
To that end, we use the following lemma.

Lemma 2. For $\alpha > 0$ and $m \in [1:|\mathcal{X}|]$, let $a_m(\alpha) := H_\alpha\big(f_m^*(X)\big)$ and $b_m(\alpha) := H_\alpha\big(X \,|\, f_m^*(X)\big)$. Then:
a) $a_1(\alpha) = 0$ and $a_{|\mathcal{X}|}(\alpha) = H_\alpha(X)$, whereas $b_1(\alpha) = H_\alpha(X)$ and $b_{|\mathcal{X}|}(\alpha) = 0$;
b) $a_m(\alpha)$ is non-decreasing, and $b_m(\alpha)$ is non-increasing, in $m$;
c) if $P_X$ is supported on $\mathcal{X}$, then the monotonicity in Item b) is strict.

Proof: See Appendix B.
Since symbols of probability zero (i.e., $x \in \mathcal{X}$ for which $P_X(x) = 0$) do not contribute to the expected number of guesses, we assume without loss of generality that $\mathrm{supp}(P_X) = \mathcal{X}$. In view of Lemma 2, we can therefore define $m_\rho^*$ as in (44). Using (44), we simplify (41) as follows: a) if $m < m_\rho^*$, then (45) holds; b) otherwise, (46) holds. Note that when guessing $X^n$ directly, the $\rho$-th moment of the number of guesses grows exponentially with rate $\rho H_{\frac{1}{1+\rho}}(X)$ (cf. (14a)). Due to Item c) in Lemma 2, and because conditioning (on a dependent chance variable) strictly reduces the Rényi entropy [24], Eqs. (45) and (46) imply that, for any $m \in [2:|\mathcal{X}|-1]$, guessing in two stages according to Algorithm 1 with $f = f_m^*$ reveals $X^n$ sooner (in expectation) than guessing $X^n$ directly.
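The claimed reduction can be checked numerically for a toy source. The sketch below evaluates the one-stage exponent $\rho H_{\frac{1}{1+\rho}}(X)$ and the two-stage exponent $\rho \max\{H_{\frac{1}{1+\rho}}(f(X)), H_{\frac{1}{1+\rho}}(X|f(X))\}$ of (23) for a hypothetical PMF and a hypothetical (not necessarily Huffman-based) map $f$:

```python
import numpy as np

def renyi(p, alpha):
    """Order-alpha Renyi entropy in nats."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(np.log(np.sum(p ** alpha)) / (1.0 - alpha))

def arimoto(p_xy, alpha):
    """Order-alpha Arimoto-Renyi conditional entropy H_alpha(X|Y), nats."""
    inner = np.sum(p_xy ** alpha, axis=0) ** (1.0 / alpha)
    return float(alpha / (1.0 - alpha) * np.log(np.sum(inner)))

p_x = np.array([0.4, 0.3, 0.2, 0.1])
f = [0, 0, 1, 1]                 # hypothetical map onto m = 2 labels
p_xy = np.zeros((4, 2))          # joint PMF of (X, f(X))
for x in range(4):
    p_xy[x, f[x]] = p_x[x]

rho = 1.0
a = 1.0 / (1.0 + rho)
one_stage = rho * renyi(p_x, a)
two_stage = rho * max(renyi(p_xy.sum(axis=0), a), arimoto(p_xy, a))
assert two_stage < one_stage     # reduction in the guessing exponent
```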
We summarize our findings in this section in the second theorem below.

Theorem 2. Let $X^n = (X_1, \ldots, X_n)$ be i.i.d. according to $P_X$; let $m \in [1:|\mathcal{X}|]$; and let $Y^{*n} := (f_m^*(X_1), \ldots, f_m^*(X_n))$. Finally, for $\rho > 0$, let $E_1(X; \rho)$ be the optimal exponential growth rate of the $\rho$-th moment of single-stage guessing of $X^n$, and let $E_2(X; \rho, m, f_m^*)$ be given by (23) with $f := f_m^*$. Then, the following holds:
a) [26]: The maximization of the (normalized) mutual information $\frac{1}{n} I(X^n; Y^n)$ over all deterministic functions $f \colon \mathcal{X} \to [1:m]$ is a strongly NP-hard problem (for all $n$). However, the deterministic function $f := f_m^*$ almost achieves this maximization, up to a small additive term $\beta^* \approx 0.08607 \log 2$, and also up to a multiplicative term equal to $\frac{10}{11}$ (see (29)-(31)).
b) The ρ-th moment of the number of guesses for X n , which is required by the two-stage guessing in Algorithm 1, satisfies the non-asymptotic bounds in (40a)-(40b).
c) By (45) and (46), $E_2(X; \rho, m, f_m^*) < E_1(X; \rho)$ for every $m \in [2:|\mathcal{X}|-1]$, so there is a reduction in the exponential growth rate (as a function of $n$) of the required number of guesses for $X^n$ by Algorithm 1 (in comparison to the optimal one-stage guessing).

IV. TWO-STAGE GUESSING: ARBITRARY (X, Y )
We next assume that $X^n$ and $Y^n$ are drawn jointly i.i.d. according to a given PMF $P_{XY}$, and we drop the requirement that Stage 1 reveal $Y^n$ prior to guessing $X^n$. Given $\rho > 0$, our goal in this section is to find the least exponential growth rate (in $n$) of the $\rho$-th moment of the total number of guesses required to recover $X^n$. Since Stage 1 may not reveal $Y^n$, we can no longer express the number of guesses using the ranking functions $g_{Y^n}(\cdot)$ and $g_{X^n|Y^n}(\cdot|\cdot)$ as in Section III, and we need new notation to capture the event that $Y^n$ was not guessed in Stage 1. To that end, let $\mathcal{G}_n$ be a subset of $\mathcal{Y}^n$, and let the ranking function $\tilde{g}_{Y^n}(\cdot)$ denote the guessing order in Stage 1, with the understanding that if $Y^n \notin \mathcal{G}_n$, then Stage 1 ends after the elements of $\mathcal{G}_n$ have been exhausted, and the guesser moves on to Stage 2 knowing only that $Y^n \notin \mathcal{G}_n$. We denote the guessing order in Stage 2 by $\tilde{g}_{X^n|Y^n}(\cdot|\cdot)$, where, for every $y^n \in \mathcal{Y}^n$, $\tilde{g}_{X^n|Y^n}(\cdot|y^n)$ is a ranking function on $[1:|\mathcal{X}|^n]$ that satisfies $\tilde{g}_{X^n|Y^n}(\cdot|y^n) = \tilde{g}_{X^n|Y^n}(\cdot|\eta^n)$ for all $y^n, \eta^n \notin \mathcal{G}_n$.
Note that whileg Y n andg X n |Y n depend on G n , we do not make this dependence explicit.
In the remainder of this section, we prove the following variational characterization of the least exponential growth rate of the $\rho$-th moment of the total number of guesses $\tilde{g}_{Y^n}(Y^n) + \tilde{g}_{X^n|Y^n}(X^n|Y^n)$ in both stages.

Theorem 3. For every $\rho > 0$,
$$\lim_{n \to \infty} \, \min_{\tilde{g}_{Y^n}, \, \tilde{g}_{X^n|Y^n}} \frac{1}{n} \log \mathbb{E}\Big[ \big( \tilde{g}_{Y^n}(Y^n) + \tilde{g}_{X^n|Y^n}(X^n|Y^n) \big)^{\rho} \Big] = \sup_{Q_{XY}} \Big\{ \min\big\{ \rho H(Q_X), \; \rho \max\{ H(Q_Y), \, H(Q_{X|Y}) \} \big\} - D(Q_{XY} \,\|\, P_{XY}) \Big\}, \tag{55}$$
where the supremum on the RHS of (55) is over all PMFs $Q_{XY}$ on $\mathcal{X} \times \mathcal{Y}$ (and the limit exists).
Note that if P XY is such that Y = f (X), then the RHS of (55) is less than or equal to the RHS of (21). In other words, the guessing exponent of Theorem 3 is less than or equal to the guessing exponent of Theorem 1. This is due to the fact that guessing Y n in Stage 1 before proceeding to Stage 2 (the strategy examined in Section III) is just one of the admissible guessing strategies of Section IV and not necessarily the optimal one.
We prove Theorem 3 in two parts: First, we show that the guesser can be assumed cognizant of the empirical joint type of (X n , Y n ); by invoking the law of total expectation, averaging over denominator-n types Q XY on X × Y, we reduce the problem to evaluating the LHS of (55) under the assumption that (X n , Y n ) is drawn uniformly at random from a type class T (n) (Q XY ) (instead of being i.i.d. P XY ). We conclude the proof by solving this reduced problem, showing in particular that when (X n , Y n ) is drawn uniformly at random from a type class, the LHS of (55) can be achieved either by guessing Y n in Stage 1 or skipping Stage 1 entirely.
We begin with the first part of the proof and show that the guesser can be assumed cognizant of the empirical joint type of (X n , Y n ); we formalize and prove this claim in Corollary 1, which we derive from Lemma 3 below.
Lemma 3. Let $\tilde{g}^*_{Y^n}$ and $\tilde{g}^*_{X^n|Y^n}$ be ranking functions that minimize the expectation in the LHS of (55), and likewise, let $\tilde{g}^*_{T;Y^n}$ and $\tilde{g}^*_{T;X^n|Y^n}$ be ranking functions cognizant of the empirical joint type $\Pi_{X^nY^n}$ of $(X^n, Y^n)$ that minimize the expectation in the LHS of (55) over all ranking functions depending on $\Pi_{X^nY^n}$. Then, there exist positive constants $a$ and $k$, which are independent of $n$, such that
$$\mathbb{E}\Big[ \big( \tilde{g}^*_{Y^n}(Y^n) + \tilde{g}^*_{X^n|Y^n}(X^n|Y^n) \big)^{\rho} \Big] \;\leq\; k \, (n+1)^a \, \mathbb{E}\Big[ \big( \tilde{g}^*_{T;Y^n}(Y^n) + \tilde{g}^*_{T;X^n|Y^n}(X^n|Y^n) \big)^{\rho} \Big]. \tag{56}$$
Proof: See Appendix C.

Corollary 1. If the limit
$$\lim_{n \to \infty} \, \min_{\tilde{g}_{T;Y^n}, \, \tilde{g}_{T;X^n|Y^n}} \frac{1}{n} \log \mathbb{E}\Big[ \big( \tilde{g}_{T;Y^n}(Y^n) + \tilde{g}_{T;X^n|Y^n}(X^n|Y^n) \big)^{\rho} \Big] \tag{57}$$
exists, so does the limit in the LHS of (55), and the two are equal.
Proof: By (56), the limit superior of the LHS of (55) is upper bounded by the limit in (57). The reverse inequality,
$$\liminf_{n \to \infty} \, \min_{\tilde{g}_{Y^n}, \, \tilde{g}_{X^n|Y^n}} \frac{1}{n} \log \mathbb{E}\Big[ \big( \tilde{g}_{Y^n}(Y^n) + \tilde{g}_{X^n|Y^n}(X^n|Y^n) \big)^{\rho} \Big] \;\geq\; \lim_{n \to \infty} \, \min_{\tilde{g}_{T;Y^n}, \, \tilde{g}_{T;X^n|Y^n}} \frac{1}{n} \log \mathbb{E}\Big[ \big( \tilde{g}_{T;Y^n}(Y^n) + \tilde{g}_{T;X^n|Y^n}(X^n|Y^n) \big)^{\rho} \Big],$$
follows from the fact that an optimal guessing strategy depending on the empirical joint type $\Pi_{X^nY^n}$ of $(X^n, Y^n)$ cannot be outperformed by a guessing strategy ignorant of $\Pi_{X^nY^n}$.
Corollary 1 states that the minimization in the LHS of (55) can be taken over guessing strategies cognizant of the empirical joint type of (X n , Y n ). As we show in the next lemma, this implies that evaluating the LHS of (55) can be further simplified by taking the expectation with (X n , Y n ) drawn uniformly at random from a type class (instead of being i.i.d. P XY ).

Lemma 4.
Let $\mathbb{E}_{Q_{XY}}$ denote expectation with $(X^n, Y^n)$ drawn uniformly at random from the type class $T^{(n)}(Q_{XY})$. Then, the following limits exist and
$$\lim_{n \to \infty} \, \min_{\tilde{g}_{T;Y^n}, \, \tilde{g}_{T;X^n|Y^n}} \frac{1}{n} \log \mathbb{E}\Big[ \big( \tilde{g}_{T;Y^n}(Y^n) + \tilde{g}_{T;X^n|Y^n}(X^n|Y^n) \big)^{\rho} \Big] = \lim_{n \to \infty} \, \max_{Q_{XY}} \bigg\{ \min_{\tilde{g}_{T;Y^n}, \, \tilde{g}_{T;X^n|Y^n}} \frac{1}{n} \log \mathbb{E}_{Q_{XY}}\Big[ \big( \tilde{g}_{T;Y^n}(Y^n) + \tilde{g}_{T;X^n|Y^n}(X^n|Y^n) \big)^{\rho} \Big] - D(Q_{XY} \,\|\, P_{XY}) \bigg\}, \tag{64}$$
where the maximum in the RHS of (64) is taken over all denominator-$n$ types on $\mathcal{X} \times \mathcal{Y}$.
Proof: Recall that $\Pi_{X^nY^n}$ is the empirical joint type of $(X^n, Y^n)$, and let $\mathcal{T}_n(\mathcal{X} \times \mathcal{Y})$ denote the set of all denominator-$n$ types on $\mathcal{X} \times \mathcal{Y}$. We prove Lemma 4 by applying the law of total expectation to the LHS of (64) (averaging over the events $\{\Pi_{X^nY^n} = Q_{XY}\}$, $Q_{XY} \in \mathcal{T}_n(\mathcal{X} \times \mathcal{Y})$) and by approximating the probability of observing a given type using standard tools from large-deviations theory. We first show that the LHS of (64) is upper bounded by its RHS, via the chain (65)-(68), where (66) holds for a sufficiently large $\alpha$ because the number of types grows polynomially in $n$ (see Appendix C), and (68) follows from [20] (Theorem 11.1.4). We next show that the LHS of (64) is also lower bounded by its RHS, via the chain (69)-(71), where the residual term in (71) tends to zero as we let $n$ tend to infinity, and the inequality in (71) follows again from [20] (Theorem 11.1.4). Together, (68) and (71) imply the equality in (64).
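The large-deviations estimate invoked in the proof ([20], Theorem 11.1.4) states that the probability of a type class decays as $\exp(-n D(Q \| P))$ up to polynomial factors. A binary numerical sanity check (in natural log; the alphabet and parameters are hypothetical):

```python
from math import comb, log

def type_prob(n, k, p):
    """Probability that an i.i.d. Bernoulli(p) n-sequence has exactly k ones,
    i.e., the probability of the type class with empirical frequency k/n."""
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

def kl(q, p):
    """Binary relative entropy D(q || p) in nats."""
    return q * log(q / p) + (1 - q) * log((1 - q) / (1 - p))

n, p = 500, 0.3
q = 0.4
k = int(q * n)                       # denominator-n type with Q(1) = 0.4
est = -log(type_prob(n, k, p)) / n   # empirical exponent
# agreement with D(Q || P) up to an O(log(n)/n) polynomial correction
assert abs(est - kl(q, p)) < 0.02
```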
We now have established the first part of the proof of Theorem 3: In Corollary 1, we showed that the ranking functionsg Y n andg X n |Y n can be assumed cognizant of the empirical joint type of (X n , Y n ), and in Lemma 4, we showed that under this assumption, the minimization of the ρ-th moment of the total number of guesses can be carried out with (X n , Y n ) drawn uniformly at random from a type class.
Below, we give the second part of the proof: We show that if the pair $(X^n, Y^n)$ is drawn uniformly at random from $T^{(n)}(Q_{XY})$, then
$$\lim_{n \to \infty} \, \min_{\tilde{g}_{Y^n}, \, \tilde{g}_{X^n|Y^n}} \frac{1}{n} \log \mathbb{E}_{Q_{XY}}\Big[ \big( \tilde{g}_{Y^n}(Y^n) + \tilde{g}_{X^n|Y^n}(X^n|Y^n) \big)^{\rho} \Big] = \min\big\{ \rho H(Q_X), \; \rho \max\{ H(Q_Y), \, H(Q_{X|Y}) \} \big\}. \tag{74}$$
Note that Corollary 1 and Lemma 4, in conjunction with (74), conclude the proof of Theorem 3 via the chain (75)-(77), where the supremum in the RHS of (77) is taken over all PMFs $Q_{XY}$ on $\mathcal{X} \times \mathcal{Y}$, and the last step follows from (76) because the set of types is dense in the set of all PMFs (in the same sense that $\mathbb{Q}$ is dense in $\mathbb{R}$).
It thus remains to prove (74). We begin with the direct part. Note that when $(X^n, Y^n)$ is drawn uniformly at random from $T^{(n)}(Q_{XY})$, the $\rho$-th moment of the total number of guesses grows exponentially with rate $\rho H(Q_X)$ if we skip Stage 1 and guess $X^n$ directly, and with rate $\rho \max\{H(Q_Y), H(Q_{X|Y})\}$ if we guess $Y^n$ in Stage 1 before moving on to guessing $X^n$. To prove the second claim, we argue by case distinction on $\rho$. Assume first that $\rho \leq 1$; then the chain (78)-(81) holds, where (78) holds because $\rho \leq 1$ (see Lemma 1); (80) holds since, by the latter assumption, $Y^n$ is revealed at the end of Stage 1, and thus the guesser (cognizant of $Q_{XY}$) will only guess elements from the conditional type class $T^{(n)}(Q_{X|Y}|Y^n)$; and (81) follows from the fact that the exponential growth rate of a sum of two exponentials is dominated by the larger one. We wrap up the argument by showing that the LHS of (78) is also lower bounded by the RHS of (81), via the chain (82)-(85), where (83) follows from Jensen's inequality, and (85) follows from [29] (Proposition 6.6) and the lower bound on the size of a (conditional) type class [20] (Theorem 11.1.3). The case $\rho > 1$ can be proven analogously and is hence omitted (with the term $2^{\rho-1}$ in the RHS of (83) replaced by one). By applying the better of the two proposed guessing strategies, the exponential growth rate $\min\{\rho H(Q_X), \, \rho \max\{H(Q_Y), H(Q_{X|Y})\}\}$ is achieved. This concludes the direct part of the proof of (74). We remind the reader that, while we have constructed a guessing strategy under the assumption that the empirical joint type $\Pi_{X^nY^n}$ of $(X^n, Y^n)$ is known, Lemma 3 implies the existence of a guessing strategy of equal asymptotic performance that does not depend on $\Pi_{X^nY^n}$. Moreover, Lemma 3 is constructive in the sense that the type-independent guessing strategy can be explicitly derived from the type-cognizant one (cf. the proof of Proposition 6.6 in [29]).
We next establish the converse of (74) by showing that, when $(X^n, Y^n)$ is drawn uniformly at random from $T^{(n)}(Q_{XY})$,
$$\liminf_{n \to \infty} \frac{1}{n} \log \mathbb{E}_{Q_{XY}}\Big[ \big( \tilde{g}_{Y^n}(Y^n) + \tilde{g}_{X^n|Y^n}(X^n|Y^n) \big)^{\rho} \Big] \;\geq\; \min\big\{ \rho H(Q_X), \; \rho \max\{ H(Q_Y), \, H(Q_{X|Y}) \} \big\} \tag{87}$$
for all two-stage guessing strategies. To see why (87) holds, consider an arbitrary guessing strategy, and let the sequence $n_1, n_2, \ldots$ be such that the limit inferior in (87) is attained as a limit along it (cf. (88)), and such that the limit $\alpha$ defined in (89) exists. Using Lemma 5 below, we show that the LHS of (88) (and thus, also the LHS of (87)) is lower bounded by $\rho H(Q_X)$ if $\alpha = 0$, and by $\rho \max\{H(Q_Y), H(Q_{X|Y})\}$ if $\alpha > 0$. This establishes the converse, because the lower of the two bounds must apply in any case.
Lemma 5. If the pair $(X^n, Y^n)$ is drawn uniformly at random from a type class $T^{(n)}(Q_{XY})$ and condition (90) holds, then the lower bound (91) applies, where $Q_X$ and $Q_Y$ denote the $X$- and $Y$-marginals of $Q_{XY}$.
Proof: Note that the RHS of (91) is trivially upper bounded by $\rho H(Q_X)$; it thus suffices to show that (90) yields the lower bound (93). To show this, we define the indicator variable $E_n := \mathbb{1}\{Y^n \in \mathcal{G}_n\}$ and observe that, due to (90) and the fact that $Y^n$ is drawn uniformly at random from $T^{(n)}(Q_Y)$, the probability in (95) vanishes. Consequently, $H(E_n)$ tends to zero as $n$ tends to infinity, and (98) follows. To conclude the proof of Lemma 5, we proceed via the chain (99)-(102), where (99) holds due to (95) and the law of total expectation; (100) follows from [3] (Theorem 1); (101) holds because the Rényi entropy is monotonically decreasing in its order and because $\rho > 0$; and (102) is due to (98).
We now conclude the proof of the converse part of (74). Assume first that $\alpha$ (as defined in (89)) equals zero. By (88) and Lemma 5, the LHS of (87) is lower bounded by $\rho H(Q_X)$, establishing the first contribution to the RHS of (87). Next, let $\alpha > 0$. Applying [29] (Proposition 6.6) in conjunction with (89) and the fact that $Y^{n_k}$ is drawn uniformly at random from $T^{(n_k)}(Q_Y)$, we obtain (107) for all sufficiently large $k$. Using (107) and proceeding analogously as in (82)-(85), we establish the second contribution to the RHS of (87) via the chain (108)-(111), where (110) is due to (107), and in (111) we granted the guesser access to $Y^n$ at the beginning of Stage 2.

V. SUMMARY AND OUTLOOK
We proposed a new variation on the Massey-Arikan guessing problem where, instead of guessing $X^n$ directly, the guesser is allowed to first produce guesses of a correlated ancillary sequence $Y^n$. We characterized the least achievable exponential growth rate (in $n$) of the $\rho$-th moment of the total number of guesses in the two stages: first, when $X^n$ is i.i.d. according to $P_X$, $Y^n$ is obtained by applying a deterministic function $f$ component-wise to $X^n$, and the guesser must recover $Y^n$ in Stage 1 before proceeding to Stage 2 (Section III, Theorems 1 and 2); and second, when the pair $(X^n, Y^n)$ is jointly i.i.d. according to $P_{XY}$ and Stage 1 need not reveal $Y^n$ (Section IV, Theorem 3). Future directions of this work include: 1) the generalization of our results to a larger class of sources (e.g., Markov sources); 2) a study of the information-like properties of the guessing exponents (1) and (2); 3) finding the optimal block-wise description $Y^n = f(X^n)$ and its associated two-stage guessing exponent; 4) the generalization of the cryptographic problems [8], [9] to a setting where the adversary may also produce guesses of leaked side information.

ACKNOWLEDGMENT
The authors are indebted to Amos Lapidoth for his contribution to Section IV (see [4]). The constructive comments in the review process, which helped to improve the presentation, are gratefully acknowledged.

APPENDIX A
PROOF OF LEMMA 1

Inequality (A1) holds by Jensen's inequality, since the mapping $x \mapsto x^\rho$ is convex on $x \geq 0$ for $\rho \geq 1$. If at least one of the non-negative $a_i$'s is positive (the claim is trivial if all $a_i$'s are zero), then (A2) holds since $0 \leq \frac{a_i}{\sum_{j=1}^k a_j} \leq 1$ for all $i \in [1:k]$ and $\rho \geq 1$. If $\rho \in (0, 1)$, then Inequalities (A1) and (A2) are reversed. The conditions for equalities in (15) are easily verified.
APPENDIX B
PROOF OF LEMMA 2

We next prove Item b). Consider the sequence of functions $\{f_m^*\}_{m=1}^{|\mathcal{X}|}$, defined over the set $\mathcal{X}$. By construction (see Algorithm 2), $f_{|\mathcal{X}|}^*$ is the identity function, since all the respective $|\mathcal{X}|$ nodes in the Huffman algorithm stay unchanged in this case. We also have $f_1^*(x) = 1$ for all $x \in \mathcal{X}$ (in the latter case, by Algorithm 2, all nodes are merged by the Huffman algorithm into a single node). Hence, (43) yields the values of $a_m(\alpha)$ and $b_m(\alpha)$ at the endpoints $m = 1$ and $m = |\mathcal{X}|$. Consider the construction of the function $f_m^*$ by Algorithm 2. Since the transition from $m+1$ to $m$ nodes is obtained by merging two nodes without affecting the other $m-1$ nodes, it follows by the data-processing theorem for the Arimoto-Rényi conditional entropy (see [24] (Theorem 2 and Corollary 1)) that the claimed monotonicity holds for all $m \in [1:|\mathcal{X}|-1]$.
We finally prove Item c). Suppose that $P_X$ is supported on the set $\mathcal{X}$. Under this assumption, it follows from the strict Schur-concavity of the Rényi entropy that the inequality in (B3) is strict, and therefore, (B2)-(B4) imply that $a_m(\alpha) < a_{m+1}(\alpha)$ for all $m \in [1:|\mathcal{X}|-1]$. In particular, Item a) implies that $0 < a_m(\alpha) < H_\alpha(X)$ holds for every $m \in [2:|\mathcal{X}|-1]$. Furthermore, the conditioning on $f_{m+1}^*(X)$ enables distinguishing between the two labels of $\mathcal{X}$ that correspond to the pair of nodes being merged (by the Huffman algorithm) in the transition from $f_{m+1}^*(X)$ to $f_m^*(X)$. Hence, the inequality in (B6) turns out to be strict under the assumption that $P_X$ is supported on the set $\mathcal{X}$. In particular, under that assumption, it follows from Item b) that $0 < b_m(\alpha) < H_\alpha(X)$ holds for every $m \in [2:|\mathcal{X}|-1]$.

APPENDIX C PROOF OF LEMMA 3
We prove Lemma 3 as a consequence of Corollary C1 below and the fact that the number of denominator-$n$ types on a finite set grows polynomially in $n$ ([20], Theorem 11.1.1).

Corollary C1 (Moser [29], (6.47) and Corollary 6.10). Let the random triple $(U, V, W)$ take values in the finite set $\mathcal{U} \times \mathcal{V} \times \mathcal{W}$, and let $g_U^*(\cdot), g_{U|V}^*(\cdot|\cdot)$ and $g_{U|W}^*(\cdot|\cdot), g_{U|V,W}^*(\cdot|\cdot,\cdot)$ be ranking functions that, for a given $\rho > 0$, minimize the $\rho$-th moment of the total number of guesses over all two-stage guessing strategies (with no access to $W$) and over all two-stage guessing strategies cognizant of $W$, respectively. Then, the minimal moment attainable without access to $W$ exceeds the one attainable with access to $W$ by at most a multiplicative factor that grows polynomially in $|\mathcal{W}|$ (for fixed $\rho$).

Lemma 3 follows from Corollary C1 with $U \leftarrow X^n$, $V \leftarrow Y^n$, and $W \leftarrow \Pi_{X^nY^n}$ (so that $\mathcal{W} \leftarrow \mathcal{T}_n(\mathcal{X} \times \mathcal{Y})$), and by noticing that $|\mathcal{T}_n(\mathcal{X} \times \mathcal{Y})| \leq (n+1)^{|\mathcal{X} \times \mathcal{Y}|-1}$ for all $n \in \mathbb{N}$, where $a := \rho \big( |\mathcal{X} \times \mathcal{Y}| - 1 \big)$ and $k := 2^a$ are positive constants independent of $n$.