Generalizations of Fano’s Inequality for Conditional Information Measures via Majorization Theory †

Fano’s inequality is one of the most elementary, ubiquitous, and important tools in information theory. Using majorization theory, Fano’s inequality is generalized to a broad class of information measures, which contains those of Shannon and Rényi. When specialized to these measures, it recovers and generalizes the classical inequalities. Key to the derivation is the construction of an appropriate conditional distribution inducing a desired marginal distribution on a countably infinite alphabet. The construction is based on the infinite-dimensional version of Birkhoff’s theorem proven by Révész [Acta Math. Hungar. 1962, 3, 188–198], and the constraint of maintaining a desired marginal distribution is similar to coupling in probability theory. Using our Fano-type inequalities for Shannon’s and Rényi’s information measures, we also investigate the asymptotic behavior of the sequence of Shannon’s and Rényi’s equivocations when the error probabilities vanish. This asymptotic behavior provides a novel characterization of the asymptotic equipartition property (AEP) via Fano’s inequality.


Introduction
Inequalities relating probabilities to various information measures are fundamental tools for proving coding theorems in information theory. Fano's inequality [1] is one such paradigmatic example of an information-theoretic inequality; it elucidates the interplay between the conditional Shannon entropy H(X | Y) and the error probability P{X ≠ Y}. Denoting by h_2 : u ↦ −u log u − (1 − u) log(1 − u) the binary entropy function on [0, 1] with the convention that h_2(0) = h_2(1) = 0, Fano's inequality can be written as

H(X^n | Y^n) ≤ h_2(P{X^n ≠ Y^n}) + P{X^n ≠ Y^n} log(M^n − 1),     (1)

where both X^n = (X_1, …, X_n) and Y^n = (Y_1, …, Y_n) are random vectors in which each component is a {1, …, M}-valued r.v. This is the key to proving weak converse results in various communication models (cf. [2][3][4]). Moreover, Fano's inequality also shows that

P{X_n ≠ Y_n} = o(1) (as n → ∞)  implies  H(X_n | Y_n) = o(1) (as n → ∞),     (2)

where X_n and Y_n are {1, …, M}-valued r.v.'s for each n ≥ 1. This implication is used, for example, to prove that various Shannon information measures are continuous in the error metric P{X_n ≠ Y_n} or the variational distance (cf. [5][6][7]).
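As a numerical sanity check of the classical inequality, the following minimal Python sketch computes H(X | Y) and the Fano bound for an illustrative joint pmf (the pmf itself is an assumption of the sketch, not taken from the paper); natural logarithms are used throughout.

```python
import math

def h2(u):
    """Binary entropy in nats, with the convention h2(0) = h2(1) = 0."""
    if u in (0.0, 1.0):
        return 0.0
    return -u * math.log(u) - (1 - u) * math.log(1 - u)

# Illustrative joint distribution of (X, Y) on {0, 1, 2}^2: a noisy "identity" channel.
M = 3
p_joint = {(x, y): (0.9 / M if x == y else 0.05 / M)
           for x in range(M) for y in range(M)}

# Error probability of the estimator X_hat = Y.
p_err = sum(p for (x, y), p in p_joint.items() if x != y)

# Conditional Shannon entropy H(X | Y) = sum_y P_Y(y) * H(P_{X|Y=y}).
p_y = {y: sum(p_joint[(x, y)] for x in range(M)) for y in range(M)}
h_cond = 0.0
for y in range(M):
    for x in range(M):
        p_xy = p_joint[(x, y)]
        if p_xy > 0:
            h_cond -= p_xy * math.log(p_xy / p_y[y])

fano_bound = h2(p_err) + p_err * math.log(M - 1)
assert h_cond <= fano_bound + 1e-9
```

Here the bound even holds with equality, since each conditional distribution puts the residual mass uniformly on the M − 1 wrong symbols, which is exactly the extremal configuration behind Fano's inequality.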

Main Contributions
In this study, we consider general maximization problems that can be specialized to the left-hand side of (1); we generalize Fano's inequality in the following four ways: (i) the alphabet X of a discrete r.v. X to be estimated is countably infinite, (ii) the marginal distribution P X of X is fixed, (iii) the inequality is established on a general class of conditional information measures, and (iv) the decoding rule is a list decoding scheme in contrast to a unique decoding scheme.
Specifically, given an X-valued r.v. X with a countably infinite alphabet X and a Y-valued r.v. Y with an abstract alphabet Y, this study considers a generalized conditional information measure defined by (4), where P_X|Y(x) stands for a version of the conditional probability P{X = x | Y} for each x ∈ X, and E[Z] stands for the expectation of a real-valued r.v. Z. Here, the function φ : P(X) → [0, ∞] defined on the set P(X) of discrete probability distributions on X plays the role of an information measure of a discrete probability distribution. When Y is a countable alphabet, the right-hand side of (4) can be written as in (5), where P_Y = P ∘ Y^{−1} denotes the probability law of Y, and P_{X|Y=y}(x) = P{X = x | Y = y} denotes the conditional probability for each (x, y) ∈ X × Y. In this study, we impose some postulates on φ for technical reasons. Choosing φ appropriately, we can specialize H_φ(X | Y) to the conditional Shannon entropy H(X | Y), Arimoto's and Hayashi's conditional Rényi entropies [8,9], and so on. For example, if φ is given as in (6), then H_φ(X | Y) coincides with the conditional Shannon entropy H(X | Y). Denoting by P_e^(L)(X | Y) the minimum average probability of error under list decoding with list size L, we consider the maximization problem H_φ(Q, L, ε, Y) stated in (7), where the supremum is taken over the pairs (X, Y) satisfying P_e^(L)(X | Y) ≤ ε with the X-marginal P_X fixed to a given distribution Q. The feasible region of systems (Q, L, ε, Y) will be characterized in this paper to ensure that H_φ(Q, L, ε, Y) is well-defined. Under some mild conditions on a given system (Q, L, ε, Y), especially on the cardinality of Y, we derive explicit formulas for H_φ(Q, L, ε, Y); otherwise, we establish tight upper bounds on H_φ(Q, L, ε, Y). As H_φ(Q, L, ε, Y) can be thought of as a generalization of the maximization problem stated in (1), we call these results Fano-type inequalities in this paper. These Fano-type inequalities are formulated via the information measures φ(P_type-*) of certain (extremal) probability distributions P_type-* depending only on the system (Q, L, ε, Y).
In this study, we provide Fano-type inequalities via majorization theory [10]. A proof outline to obtain our Fano-type inequalities is as follows.
1. First, we show that a generalized conditional information measure H_φ(X | Y) can be bounded from above by H_φ(U | V) with a certain pair (U, V) in which the conditional distribution P_U|V of U given V can be thought of as a so-called uniformly dispersive channel [11,12] (see also Section II-A of [13]). We prove this fact via Jensen's inequality (cf. Proposition A-2 of [14]) and the symmetry of the considered information measures φ. Moreover, we establish a novel characterization of uniformly dispersive channels via a certain majorization relation; we show that the output distribution of a uniformly dispersive channel is majorized by its transition probability distribution for any fixed input symbol. This majorization relation is used to obtain a sharp upper bound via the Schur-concavity property of the considered information measures φ. 2. Second, we ensure the existence of a joint distribution P_X,Y of (X, Y) which satisfies all constraints in our maximization problem H_φ(Q, L, ε, Y) stated in (7) and whose conditional distribution P_X|Y is uniformly dispersive. Here, the main technical difficulty is to maintain a marginal distribution P_X of X over a countably infinite alphabet X; see (ii) above. Using a majorization relation for a uniformly dispersive channel, we express a desired marginal distribution P_X as the product of a doubly stochastic matrix and a uniformly dispersive P_X|Y. This characterization of the majorization relation via a doubly stochastic matrix was proven by Hardy-Littlewood-Pólya [15] in the finite-dimensional case, and by Markus [16] in the infinite-dimensional case. From this doubly stochastic matrix, we construct a marginal distribution P_Y of Y so that the joint distribution P_X,Y = P_X|Y P_Y has the desired marginal distribution P_X. The construction of P_Y is based on the infinite-dimensional version of Birkhoff's theorem, which was posed by Birkhoff [17] and was proven by Révész [18] via Kolmogorov's extension theorem.
Although the finite-dimensional version of Birkhoff's theorem [19] (also known as the Birkhoff-von Neumann decomposition) is well-known, the application of the infinite-dimensional version of Birkhoff's theorem in information theory appears to be novel; it aids in dealing with communication systems over countably infinite alphabets. 3. Third, we introduce an extremal distribution P_type-* on a countably infinite alphabet X. Showing that P_type-* is the infimum of a certain class of discrete probability distributions with respect to the majorization relation, we bound our maximization problems from above by the considered information measure φ(P_type-*). Namely, our Fano-type inequality is expressed via a certain information measure of the extremal distribution. When the cardinality of the alphabet of Y is large enough, we construct a joint distribution P_X,Y achieving equality in our generalized Fano-type inequality; in this case, we say that the inequality is sharp.
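The finite-dimensional Birkhoff-von Neumann decomposition mentioned above admits a short constructive sketch. The following minimal Python implementation (an illustration only, not the infinite-dimensional construction used in this paper) repeatedly extracts a permutation supported on positive entries via augmenting-path matching; exact rationals keep the decomposition exact.

```python
from fractions import Fraction

def find_perm(m, n):
    """Find a permutation with strictly positive entries (Kuhn's augmenting paths).
    For a doubly stochastic matrix such a permutation always exists (Hall/Birkhoff)."""
    match = [-1] * n  # match[col] = row
    def try_row(r, seen):
        for c in range(n):
            if m[r][c] > 0 and c not in seen:
                seen.add(c)
                if match[c] == -1 or try_row(match[c], seen):
                    match[c] = r
                    return True
        return False
    for r in range(n):
        if not try_row(r, set()):
            return None
    return [match.index(r) for r in range(n)]  # perm[row] = col

def birkhoff(m):
    """Decompose a doubly stochastic matrix (exact Fractions) into a convex
    combination of permutation matrices (Birkhoff-von Neumann)."""
    n = len(m)
    m = [row[:] for row in m]
    parts = []
    while any(any(x > 0 for x in row) for row in m):
        perm = find_perm(m, n)
        w = min(m[r][perm[r]] for r in range(n))  # largest subtractable weight
        parts.append((w, perm))
        for r in range(n):
            m[r][perm[r]] -= w
    return parts

# Illustrative doubly stochastic matrix.
A = [[Fraction(1, 2), Fraction(1, 4), Fraction(1, 4)],
     [Fraction(1, 4), Fraction(1, 2), Fraction(1, 4)],
     [Fraction(1, 4), Fraction(1, 4), Fraction(1, 2)]]
parts = birkhoff(A)
assert sum(w for w, _ in parts) == 1
# Reconstruct A from the decomposition.
R = [[sum(w for w, perm in parts if perm[r] == c) for c in range(3)] for r in range(3)]
assert R == A
```

Each iteration zeroes at least one matrix entry, so the loop terminates; this is exactly the finite-alphabet step that Révész's theorem extends to infinite doubly stochastic matrices.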
When the alphabet of Y is finite, we further tighten our Fano-type inequality. To do so, we prove a reduction lemma for the principal maximization problem from an infinite- to a finite-dimensional feasible region. Therefore, when the alphabet of Y is finite, we do not have to employ technical tools from infinite-dimensional majorization theory, e.g., the infinite-dimensional version of Birkhoff's theorem. This reduction lemma is useful not only for tightening our Fano-type inequality but also for characterizing a sufficient condition on the considered information measure φ under which H_φ(Q, L, ε, Y) is finite if and only if φ(Q) is finite. In fact, Shannon's and Rényi's information measures meet this sufficient condition.
We show that our Fano-type inequalities can be specialized to some known generalizations of Fano's inequality [20][21][22][23] on Shannon's and Rényi's information measures. Therefore, one of our technical contributions is a unified proof of Fano's inequality for conditional information measures via majorization theory. Generalizations of Erokhin's function [20] from the ordinary mutual information to Sibson's and Arimoto's α-mutual information [8,24] are also discussed.
Via our generalized Fano-type inequalities, we investigate sufficient conditions on a general source X = {X_n = (Z_1^(n), …, Z_n^(n))}_{n=1}^∞ under which vanishing error probabilities imply vanishing equivocations (cf. (2) and (3)). We show that the asymptotic equipartition property (AEP) as defined by Verdú-Han [25] is indeed such a sufficient condition. In other words, if a general source X = {X_n}_{n=1}^∞ satisfies the AEP and H(X_n) = Ω(1) as n → ∞, then we prove that the equivocations vanish whenever the list-decoding error probabilities vanish, where {L_n}_{n=1}^∞ is an arbitrary sequence of list sizes. This is a generalization of (2) and (3) and, to the best of the author's knowledge, a novel connection between the AEP and Fano's inequality. We prove this connection by using the splitting technique for probability distributions; this technique was used to derive limit theorems for Markov processes by Nummelin [26] and Athreya-Ney [27]. Note that there are also many applications of the splitting technique in information theory (cf. [21,28,29,30,31,32]). In addition, we extend Ho-Verdú's sufficient conditions (see Section V of [21]) and Sason-Verdú's sufficient conditions (see Theorem 4 of [23]) on a general source X = {X_n}_{n=1}^∞ under which equivocations vanish if the error probabilities vanish.

Information Theoretic Tools on Countably Infinite Alphabets
As the right-hand side of (1) diverges as M goes to infinity whenever ε > 0 is fixed, the classical Fano inequality is applicable only if X is supported on a finite alphabet (see also Chapter 1 of [33]). In fact, if both X_n and Y_n are supported on the same countably infinite alphabet for each n ≥ 1, one can construct a somewhat pathological example so that P{X_n ≠ Y_n} = o(1) as n → ∞ but H(X_n | Y_n) = ∞ for every n ≥ 1 (cf. Example 2.49 of [4]).
Usually, it is not straightforward to generalize information-theoretic tools for systems defined on a finite alphabet to systems defined on a countably infinite alphabet. Ho-Yeung [34] showed that Shannon's information measures defined on countably infinite alphabets are not continuous with respect to the following distances: the χ²-divergence, the relative entropy, and the variational distance. Continuity issues of Rényi's information measures defined on countably infinite alphabets were explored by Kovačević-Stanojević-Šenk [35]. In addition, although weak typicality (cf. Chapter 3 of [2]), also known as entropy-typicality (cf. Problem 2.5 of [6]), is a convenient tool for proving achievability theorems for sources and channels defined on countably infinite (or even uncountable) alphabets, strong typicality [6] is amenable only in situations with finite alphabets. To ameliorate this issue, Ho-Yeung [36] proposed a notion known as unified typicality that ensures that the desirable properties of weak and strong typicality are retained when one is working with countably infinite alphabets.
Recently, Madiman-Wang-Woo [37] investigated relations between majorization and the strong Sperner property [38] of posets together with applications to the Rényi entropy power inequality for sums of independent and integer-valued r.v.'s, i.e., supported on countably infinite alphabets.
To the best of the author's knowledge, a generalization of Fano's inequality to the case when X is supported on a countably infinite alphabet was initiated by Erokhin [20]. Given a discrete probability distribution Q on a countably infinite alphabet X = {1, 2, …}, Erokhin established in Equation (11) of [20] an explicit formula for the function in (9), where the minimization is taken over the pairs of X-valued r.v.'s X and Y satisfying P{X ≠ Y} ≤ ε and P{X = x} = Q(x) for each x ∈ X, and I(X ∧ Y) stands for the mutual information between X and Y. Note that Erokhin's function I(Q, ε) is the rate-distortion function with the Hamming distortion measure (cf. [39,40]). Via the well-known identity I(X ∧ Y) = H(X) − H(X | Y), which leads to (10), Erokhin's function I(Q, ε) can be naturally thought of as a generalization of the classical Fano inequality stated in (1), where H(X) stands for the Shannon entropy of X, and the probability distribution of X is given by P{X = x} = Q(x) for each x ∈ X. Kostina-Polyanskiy-Verdú [41] derived a second-order asymptotic expansion of I(Q^n, ε) as n → ∞, where Q^n stands for the n-fold product of Q. Their asymptotic expansion is closely related to the second-order asymptotics of variable-length compression allowing errors; see Theorem 4 of [41].
Ho-Verdú [21] gave an explicit formula for the maximization on the right-hand side of (10); they proved it via the additivity of Shannon's information measures. Note that Ho-Verdú's formula (cf. Theorem 1 of [21]) coincides with Erokhin's formula (cf. Equation (11) of [20]) via the identity stated in (10). In Theorems 2 and 4 of [21], Ho-Verdú also tightened the maximization on the right-hand side of (10) when Y is supported on a proper subalphabet of X. Moreover, they provided in Section V of [21] some sufficient conditions on a general source under which vanishing error probabilities (i.e., P{X_n ≠ Y_n} = o(1)) imply vanishing unnormalized or normalized equivocations (i.e., H(X_n | Y_n) = o(1) or H(X_n | Y_n) = o(n), respectively).

Fano's Inequality with List Decoding
Fano's inequality with list decoding was initiated by Ahlswede-Gács-Körner [42]. By a minor extension of the usual proof (see, e.g., Lemma 3.8 of [6]), one can see that

max H(X | Y) ≤ h_2(ε) + (1 − ε) log L + ε log(M − L)     (11)

for all integers 1 ≤ L < M and every real number 0 ≤ ε ≤ 1 − L/M, where the maximization is taken over the pairs of a {1, …, M}-valued r.v. X and a Y-valued r.v. Y satisfying P_e^(L)(X | Y) ≤ ε. Note that the right-hand side of (11) coincides with the Shannon entropy of the extremal distribution of type-0 defined by (12) for each integer x ≥ 1. A graphical representation of this extremal distribution is plotted in Figure 1. Combining (11) and the blowing-up technique (cf. Chapter 5 of [6] or Section 3.6.2 of [43]), Ahlswede-Gács-Körner [42] proved the strong converse property (in Wolfowitz's sense [44]) of degraded broadcast channels under the maximum error probability criterion. Extending the proof technique in [42] together with the wringing technique, Dueck [45] proved the strong converse property of multiple-access channels under the average error probability criterion. As these proofs rely on a combinatorial lemma (cf. Lemma 5.1 of [6]), they work only when the channel output alphabet is finite; but see recent work by Fong-Tan [46,47] in which such techniques have been extended to Gaussian channels. On the other hand, Kim-Sutivong-Cover [48] investigated a trade-off between the channel coding rate and the state uncertainty reduction of a channel with state information available only at the sender, and derived its trade-off region in the weak converse regime by employing (11).
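As a sketch, assuming the extremal distribution of type-0 has the form suggested by the discussion above (mass (1 − ε)/L on each of the first L symbols and ε/(M − L) on each of the next M − L symbols; this form is an assumption of the sketch), its Shannon entropy can be checked against the closed form h_2(ε) + (1 − ε) log L + ε log(M − L):

```python
import math

def p_type0(M, L, eps):
    """Candidate extremal distribution of type-0 (assumed form): the first L
    symbols share mass 1 - eps uniformly, the next M - L symbols share mass eps."""
    return [(1 - eps) / L] * L + [eps / (M - L)] * (M - L)

def shannon(p):
    """Shannon entropy in nats."""
    return -sum(q * math.log(q) for q in p if q > 0)

M, L, eps = 8, 2, 0.25  # illustrative parameters with eps <= 1 - L/M
p = p_type0(M, L, eps)
assert abs(sum(p) - 1.0) < 1e-12

h2 = -eps * math.log(eps) - (1 - eps) * math.log(1 - eps)
closed = h2 + (1 - eps) * math.log(L) + eps * math.log(M - L)
assert abs(shannon(p) - closed) < 1e-12
```

The agreement is exact by a direct expansion: grouping the first L identical masses contributes (1 − ε) log L, the remaining M − L contribute ε log(M − L), and the split of mass between the two groups contributes h_2(ε).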

Fano's Inequality for Rényi's Information Measures
So far, many researchers have considered various directions for generalizing Fano's inequality. An interesting study involves reversing the usual Fano inequality. In this regard, lower bounds on H(X | Y) subject to P{X Y} ε were independently established by Kovalevsky [49], Chu-Cheuh [50], and Tebbe-Dwyer [51] (see also Feder-Merhav's study [52]). Prasad [53] provided several refinements of the reverse/forward Fano inequalities for Shannon's information measures.
In [54], Ben-Bassat-Raviv explored several inequalities between the (unconditional) Rényi entropy and the error probability. Generalizations of Fano's inequality from the conditional Shannon entropy to Arimoto's conditional Rényi entropy [8] were recently and independently investigated by Sakai-Iwata [22] and Sason-Verdú [23]. Specifically, Sakai-Iwata [22] provided sharp upper/lower bounds on H_α^Arimoto(X | Y) subject to a fixed error probability; in other words, they gave explicit formulas for the corresponding minimization and maximization, respectively. As H_β^Arimoto(X | Y) is a strictly monotone function of the minimum average probability of error if β = ∞, both functions f_min(α, ∞, γ) and f_max(α, ∞, γ) can be thought of as reverse and forward Fano inequalities on H_α^Arimoto(X | Y), respectively (cf. Section V of the arXiv version of [22]). Sason-Verdú [23] also gave generalizations of the forward and reverse Fano inequalities on H_α^Arimoto(X | Y). Moreover, in the forward Fano inequality pertaining to H_α^Arimoto(X | Y), they generalized in Theorem 8 of [23] the decoding rule from unique decoding to list decoding: (15) holds for every 0 ≤ ε ≤ 1 − L/M and α ∈ (0, 1) ∪ (1, ∞), where the maximization is taken as in (11). Similar to (11), the right-hand side of (15) coincides with the Rényi entropy [55] of the extremal distribution of type-0. Note that the reverse Fano inequality proven in [22,23] does not require that X be finite. On the other hand, the forward Fano inequality proven in [22,23] is applicable only when X is finite.

Lower Bounds on Mutual Information
Han-Verdú [56] generalized Fano's inequality on a countably infinite alphabet X by investigating lower bounds on the mutual information via the data processing lemma without additional constraints on the r.v.'s X and Y, where X̄ and Ȳ are independent r.v.'s having the same marginals as X and Y, respectively. Polyanskiy-Verdú [57] showed a lower bound on Sibson's α-mutual information by using the data processing lemma for the Rényi divergence. Recently, Sason [58] generalized Fano's inequality with list decoding via the strong data processing lemma for f-divergences. Liu-Verdú [59] showed a second-order asymptotic estimate on the mutual information as n → ∞, provided that the geometric average probability of error, which is a weaker and a stronger criterion than the maximum and the average error criteria, respectively, is suitably bounded for sufficiently large n, where X_n is a r.v. uniformly distributed on the codeword set {c_{m,n}}_{m=1}^{M_n}, Y_n is a r.v. induced by the n-fold product of a discrete memoryless channel with the input X_n, M_n is a positive integer denoting the message size, {D_{m,n}}_{m=1}^{M_n} is a collection of disjoint subsets playing the role of decoding regions, and 0 < ε < 1 is a tolerated probability of error. This estimate is derived by using the Donsker-Varadhan lemma (cf. Equation (3.4.67) of [43]) and the so-called pumping-up argument.

Paper Organization
The rest of this paper is organized as follows. Section 2 introduces basic notations and definitions needed to understand our generalized conditional information measure H_φ(X | Y) and the principal maximization problem H_φ(Q, L, ε, Y). Section 3 presents the main results: our Fano-type inequalities. Section 4 specializes our Fano-type inequalities to Shannon's and Rényi's information measures, and discusses generalizations of Erokhin's function from the ordinary mutual information to Sibson's and Arimoto's α-mutual information. Section 5 investigates several conditions on a general source under which vanishing error probabilities imply vanishing equivocations; a novel characterization of the AEP via Fano's inequality is also presented. Section 6 proves our Fano-type inequalities stated in Section 3, and contains most of the technical contributions of this study. Section 7 proves the asymptotic behaviors stated in Section 5. Finally, Section 8 concludes this study with some remarks.

A General Class of Conditional Information Measures
This subsection introduces some notions from majorization theory [10] and a rigorous definition of the generalized conditional information measure H_φ(X | Y) defined in (4). Let X = {1, 2, …} be a countably infinite alphabet. A discrete probability distribution P on X is a map P : X → [0, 1] satisfying Σ_{x∈X} P(x) = 1. In this paper, motivated by the consideration of joint probability distributions on X × Y, such a P is called an X-marginal. Given an X-marginal P, a decreasing rearrangement of P is denoted by P↓; i.e., P↓ is a rearrangement of P fulfilling P↓(1) ≥ P↓(2) ≥ P↓(3) ≥ ⋯. The following definition gives us the notion of majorization for X-marginals.
Definition 1 (Majorization [10]). An X-marginal P is said to majorize another X-marginal Q if Σ_{x=1}^{k} Q↓(x) ≤ Σ_{x=1}^{k} P↓(x) for every k ≥ 1. This relation is denoted by P ≻ Q or Q ≺ P.
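Definition 1 can be checked mechanically on finite supports (padded with zeros to mimic the countable alphabet X); a minimal sketch:

```python
def majorizes(p, q, tol=1e-12):
    """Check whether P majorizes Q per Definition 1: every partial sum of the
    decreasing rearrangement Q↓ is dominated by that of P↓.
    Finite lists padded with zeros stand in for the countable alphabet."""
    n = max(len(p), len(q))
    p_dn = sorted(p, reverse=True) + [0.0] * (n - len(p))
    q_dn = sorted(q, reverse=True) + [0.0] * (n - len(q))
    sp = sq = 0.0
    for k in range(n):
        sp += p_dn[k]
        sq += q_dn[k]
        if sq > sp + tol:
            return False
    return True

# A point mass majorizes everything; the uniform distribution is majorized by all.
assert majorizes([1.0], [0.25] * 4)
assert majorizes([0.5, 0.3, 0.2], [1 / 3, 1 / 3, 1 / 3])
assert not majorizes([1 / 3, 1 / 3, 1 / 3], [0.5, 0.3, 0.2])
```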
Let P(X) be the set of X-marginals. The following definitions are important postulates on a function φ : P(X) → [0, ∞] playing the role of an information measure of an X-marginal.
In Definitions 4-6, each term "convex" (or its suffix "-convex") is replaced by "concave" (resp. "-concave") if −φ fulfills the corresponding condition. In Definition 3, note that the pointwise convergence of X-marginals is equivalent to convergence in the variational distance topology (see, e.g., Lemma 3.1 of [60] or Section III-D of [61]).
Let X be an X-valued r.v. and Y a Y-valued r.v., where Y is an abstract alphabet. Unless stated otherwise, assume that the measurable space of Y with a certain σ-algebra is standard Borel, where a measurable space is said to be standard Borel if its σ-algebra is the Borel σ-algebra generated by a Polish topology on the space. Assuming that φ : P(X) → [0, ∞] is a symmetric, concave, and lower semicontinuous function, the generalized conditional information measure H_φ(X | Y) is defined by (4). The postulates imposed on φ here are useful for technical reasons when employing majorization theory; see the following proposition.

Proposition 1. If φ : P(X) → [0, ∞] is symmetric and quasiconcave, then φ is Schur-concave; that is, P ≺ Q implies φ(P) ≥ φ(Q).

Proof of Proposition 1. In Proposition 3.C.3 of [10], the assertion of Proposition 1 was proved in the case where the dimension of the domain of φ is finite. Employing Theorem 4.2 of [16] instead of Corollary 2.B.3 of [10], the proof of Proposition 3.C.3 of [10] can be directly extended to infinite-dimensional domains.
To employ the Schur-concavity property in the sequel, Proposition 1 suggests assuming that φ is symmetric and quasiconcave. In addition, to apply Jensen's inequality on the function φ, it suffices to assume that φ is concave and lower semicontinuous, because the domain P(X) forms a closed convex bounded set in the variational distance topology (cf. Proposition A-2 of [14]). Motivated by these properties, we impose the three postulates (corresponding to Definitions 2-4) on φ in this study.
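As a randomized sanity check of the Schur-concavity just discussed, the Shannon entropy (a symmetric, concave φ) should satisfy φ(Q) ≥ φ(P) whenever Q ≺ P; the sketch below tests this on random finite-support pairs (the use of Shannon entropy as φ is an illustrative choice):

```python
import math
import random

def shannon(p):
    """Shannon entropy in nats, an example of a symmetric concave phi."""
    return -sum(q * math.log(q) for q in p if q > 0)

def majorizes(p, q, tol=1e-12):
    """P majorizes Q iff every partial sum of P↓ dominates that of Q↓."""
    p_dn, q_dn = sorted(p, reverse=True), sorted(q, reverse=True)
    sp = sq = 0.0
    for a, b in zip(p_dn, q_dn):
        sp, sq = sp + a, sq + b
        if sq > sp + tol:
            return False
    return True

random.seed(0)
for _ in range(1000):
    p = [random.random() for _ in range(6)]
    q = [random.random() for _ in range(6)]
    sp, sq = sum(p), sum(q)
    p = [x / sp for x in p]
    q = [x / sq for x in q]
    if majorizes(p, q):
        # Schur-concavity: the majorized (more "spread out") Q has larger entropy.
        assert shannon(q) >= shannon(p) - 1e-9
```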

Minimum Average Probability of List Decoding Error
Consider a communication model in which a Y-valued r.v. Y plays the role of side-information about an X-valued r.v. X. A list decoding scheme with list size 1 ≤ L < ∞ is a decoding scheme producing L candidates for the realization of X when we observe a realization of Y. The minimum average error probability under list decoding is defined by

P_e^(L)(X | Y) := inf_f P{X ∉ f(Y)},     (21)

where the minimization is taken over all set-valued functions f : Y → X^L with decoding range satisfying |f(y)| ≤ L for every y ∈ Y, and |·| stands for the cardinality of a set. If S is an infinite set, then we assume that |S| = ∞ as usual.
If L = 1, then (21) coincides with the average error probability of the maximum a posteriori (MAP) decoding scheme. For the sake of brevity, we write P_e(X | Y) := P_e^(1)(X | Y). It is clear that P_e^(L)(X | Y) ≤ P{X ∉ f(Y)} for any list decoder f : Y → X^L; hence, for any tolerated probability of error ε ≥ 0, it suffices to consider the constraint P_e^(L)(X | Y) ≤ ε rather than P{X ∉ f(Y)} ≤ ε in our subsequent analyses. The following proposition gives an elementary formula for P_e^(L)(X | Y), as in the MAP decoding case.

Proposition 2. It holds that P_e^(L)(X | Y) = 1 − E[Σ_{k=1}^{L} P_X|Y↓(k)], where P_X|Y↓ denotes the decreasing rearrangement of P_X|Y.
Proof of Proposition 2. See Appendix A.
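On finite alphabets, this formula says that the optimal list keeps, for each observation y, the L largest conditional masses (the MAP rule generalized to lists). The sketch below verifies this against a brute-force search over all decoders; the joint pmf is illustrative.

```python
import itertools

def pe_list(p_joint, L):
    """Minimum average list-decoding error: for each y, keep the L largest
    masses of the column {p_joint[x][y]}_x, so that
    P_e^(L) = 1 - E[ sum of the L largest conditional masses ]."""
    nx, ny = len(p_joint), len(p_joint[0])
    err = 1.0
    for y in range(ny):
        col = sorted((p_joint[x][y] for x in range(nx)), reverse=True)
        err -= sum(col[:L])
    return err

def pe_list_bruteforce(p_joint, L):
    """Exhaustive search over all set-valued decoders f: y -> L-subsets of X."""
    nx, ny = len(p_joint), len(p_joint[0])
    best = 1.0
    lists = list(itertools.combinations(range(nx), L))
    for f in itertools.product(lists, repeat=ny):
        err = 1.0 - sum(p_joint[x][y] for y in range(ny) for x in f[y])
        best = min(best, err)
    return best

# Illustrative joint pmf; rows index x, columns index y.
p_joint = [[0.20, 0.05, 0.05],
           [0.10, 0.15, 0.05],
           [0.05, 0.10, 0.25]]
for L in (1, 2):
    assert abs(pe_list(p_joint, L) - pe_list_bruteforce(p_joint, L)) < 1e-12
```

The per-column greedy rule is optimal because the columns can be optimized independently, which is the finite-alphabet content of the proposition.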

Remark 1. It follows from Proposition 2 that
The following proposition characterizes the feasible region of systems (Q, L, ε, Y) considered in our principal maximization problem H φ (Q, L, ε, Y) stated in (7).
Moreover, both inequalities are sharp in the sense that there exist pairs of r.v.'s X and Y achieving the equalities while respecting the constraint P X Q.

Proof of Proposition 3. See Appendix B.
The minimum average error probability of list decoding for X ∼ Q without any side-information is denoted by P_e^(L)(X) = 1 − Σ_{x=1}^{L} Q↓(x). Then, the second inequality in (27) is obvious, and it is similar to the property that conditioning reduces uncertainty (cf. Theorem 2.8.1 of [2]). Proposition 3 ensures that, when we have to consider the constraints P_e^(L)(X | Y) ≤ ε and P_X = Q, it suffices to consider a system (Q, L, ε, Y) satisfying (29).

Main Results: Fano-Type Inequalities
Let (Q, L, ε, Y) be a system satisfying (29), and let φ : P(X) → [0, ∞] be a symmetric, concave, and lower semicontinuous function. The main aim of this study is to find an explicit formula or a tight upper bound on H_φ(Q, L, ε, Y) defined in (7). Now, define the extremal distribution of type-1 as the X-marginal P_type-1 given in (30) for each x ∈ X, where the weight V(j) is defined in (31) for each j ≥ 1, the weight W(k) is defined in (32) for each k ≥ L, the integer J is chosen as in (33), and K_1 is chosen as in (34). A graphical representation of P_type-1 is shown in Figure 2. Under some mild conditions, the following theorem gives an explicit formula for H_φ(Q, L, ε, Y).

Proof of Theorem 1. See Section 6.1.
The Fano-type inequality stated in (35) of Theorem 1 is formulated by the extremal distribution P type-1 defined in (30). The following proposition summarizes basic properties of P type-1 .

Proposition 4. The extremal distribution of type-1 defined in (30) satisfies the following:
• the probability masses are nonincreasing in x ∈ X;
• the sum of the first L probability masses is equal to 1 − ε;
• the first J − 1 probability masses are equal to those of Q↓;
• the probability masses for J ≤ x ≤ L are equal to V(J);
• the probability masses for L + 1 ≤ x ≤ K_1 are equal to W(K_1);
• the probability masses for x ≥ K_1 + 1 are equal to those of Q↓; and
• P_type-1 majorizes Q.

Proof of Proposition 4. See Appendix C.
Although positive tolerated probabilities of error (i.e., ε > 0) are of primary interest in most lossless communication systems, the scenario in which error events of positive probability are not allowed (i.e., ε = 0) is also important for dealing with error-free communication systems. The following theorem is an error-free version of Theorem 1.

Theorem 2. Suppose that ε = 0 and that Y is at least countably infinite. Then, (43) holds.
Moreover, if the cardinality of Y is at least the cardinality of the continuum R, then there exists a σ-algebra on Y satisfying (43) with equality.
Proof of Theorem 2. See Section 6.2.

Remark 2.
Note that J = L holds under the unique decoding rule (i.e., L = 1); that is, we see from Theorem 2 that (43) holds with equality if L = 1. The inequality J < L occurs only if a non-unique decoding rule (i.e., L > 1) is considered. In Theorem 2, the existence of a σ-algebra on an uncountably infinite alphabet Y for which (43) holds with equality is due to Révész's generalization of the Birkhoff-von Neumann decomposition via Kolmogorov's extension theorem; see Sections 6.1 and 6.2 for technical details.
Consider the case where Y is a finite alphabet. Define the extremal distribution of type-2 as the X-marginal P_type-2 given in (44) for each x ∈ X, where the three quantities V(·), W(·), and J are defined in (31), (32), and (33), respectively, and K_2 is chosen as in (45). Moreover, define the integer D as in (46), where (a choose b) = a!/(b! (a − b)!) stands for the binomial coefficient for two integers 0 ≤ b ≤ a. A graphical representation of P_type-2 is illustrated in Figure 3. When Y is finite, the Fano-type inequality stated in Theorems 1 and 2 can be tightened as follows:

Proof of Theorem 3. See Section 6.3.

Similar to Theorems 1 and 2, the Fano-type inequality stated in (47) of Theorem 3 is formulated via the extremal distribution P_type-2 defined in (44). The difference between P_type-1 and P_type-2 lies only in the difference between K_1 and K_2 defined in (34) and (45), respectively.

Remark 3.
In contrast to Theorems 1 and 2, Theorem 3 holds in both cases ε > 0 and ε = 0. By Lemma 5 stated in Section 6.1, it can be verified that P_type-2 majorizes P_type-1, and it follows from Proposition 1 that φ(P_type-2) ≤ φ(P_type-1). Namely, the Fano-type inequalities stated in Theorems 1 and 2 also hold for finite Y. In other words, H_φ(Q, L, ε, Y) ≤ φ(P_type-1) holds for every nonempty alphabet Y, provided that (Q, L, ε, Y) satisfies (29). As |Y| ≥ D if L = 1 (see (46)), another benefit of Theorem 3 is that its Fano-type inequality is always sharp under a unique decoding rule (i.e., L = 1).
So far, it has been assumed that the probability law P_X of the X-valued r.v. X is fixed to a given X-marginal Q. When X is instead only assumed to be supported on a finite subalphabet of X, we can loosen and simplify our Fano-type inequalities by removing the constraint P_X = Q. Let L and M be two integers satisfying 1 ≤ L < M, let ε be a real number satisfying 0 ≤ ε ≤ 1 − L/M, and let Y be a nonempty alphabet. Consider the maximization H_φ(M, L, ε, Y), where the maximization is taken over the pairs (X, Y) of a {1, …, M}-valued r.v. X and a Y-valued r.v. Y satisfying P_e^(L)(X | Y) ≤ ε.

Theorem 4. It holds that H_φ(M, L, ε, Y) = φ(P_type-0), where P_type-0 is defined in (12).
Proof of Theorem 4. See Section 6.4.

Remark 4. Although Theorems 1-3 depend on the cardinality of Y, the Fano-type inequality stated in Theorem 4 does not depend on it whenever Y is nonempty.

Special Cases: Fano-Type Inequalities on Shannon's and Rényi's Information Measures
In this section, we specialize our Fano-type inequalities stated in Theorems 1-4 from general conditional information measures H φ (X | Y) to Shannon's and Rényi's information measures. We then recover several known results such as those in [1,[20][21][22][23] along the way.

On Shannon's Information Measures
The conditional Shannon entropy [62] of an X-valued r.v. X given a Y-valued r.v. Y is defined by H(X | Y) := E[H(P_X|Y)], where the (unconditional) Shannon entropy of an X-marginal P is defined by H(P) := Σ_{x∈X} P(x) log(1/P(x)), provided that the right-hand side of (54) is finite. In some cases, it is convenient to define the conditional Shannon entropy H(X | Y) by the right-hand side of (54) (see, e.g., [64]).
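On finite alphabets, the expectation form E[H(P_X|Y)] agrees with the chain-rule form H(X, Y) − H(Y); a minimal check (the joint pmf is illustrative):

```python
import math

def H(dist):
    """Shannon entropy (nats) of a pmf given as a dict of probabilities."""
    return -sum(p * math.log(p) for p in dist.values() if p > 0)

# Illustrative joint pmf of (X, Y) on {0,1,2} x {0,1}.
p_xy = {(0, 0): 0.25, (1, 0): 0.15, (2, 0): 0.10,
        (0, 1): 0.05, (1, 1): 0.20, (2, 1): 0.25}
p_y = {0: 0.50, 1: 0.50}

# Definition as an expectation: H(X | Y) = E[ H(P_{X|Y}) ].
h_cond = sum(p_y[y] * H({x: p_xy[(x, y)] / p_y[y] for x in range(3)})
             for y in range(2))

# Chain-rule form: H(X | Y) = H(X, Y) - H(Y); the two agree on finite alphabets.
assert abs(h_cond - (H(p_xy) - H(p_y))) < 1e-9
```

On countably infinite alphabets the two forms can behave differently when the quantities involved are infinite, which is one reason the paper tracks finiteness conditions carefully.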
The following proposition is a well-known property of Shannon's information measures.
Combining Proposition 5 with Theorem 1, we readily observe the following corollary.

Corollary 1.
Suppose that ε > 0 and the cardinality of Y is at least countably infinite. Then, it holds that Proof of Corollary 1. Corollary 1 is a direct consequence of Theorem 1 and Proposition 5.

Remark 7.
Note that Corollary 1 coincides with Theorem 1 of [21] if L = 1 and Y = X. Moreover, we observe from (10) and Corollary 1 a corresponding formula for every X-marginal Q and every tolerated probability of error 0 ≤ ε ≤ 1 − Q↓(1), where Erokhin's function I(Q, ε) is defined in (9). See Section 4.3 for details on the generalization of Erokhin's function.

Kostina-Polyanskiy-Verdú showed in Theorem 4 and Remark 3 of [41] a second-order expansion of I(Q^n, ε), where V(P) denotes the varentropy of P, and Φ^{−1}(·) stands for the inverse of the Gaussian cumulative distribution function. If Y is finite, then a tighter version of the Fano-type inequality than Corollary 1 can be obtained as follows:

Proof of Corollary 2. Corollary 2 is a direct consequence of Theorem 3 and Proposition 5.

Remark 8. The inequality in (61) holds with equality if L = 1 (cf. Remark 3). In fact, when L = 1, Corollary 2 coincides with Ho-Verdú's refinement of Erokhin's function I(Q, ε) with finite Y (see Theorem 4 of [21]).

Similar to (50) and (55), we can define H(M, L, ε, Y) and give an explicit formula for it as follows.

Corollary 3. It holds that H(M, L, ε, Y) = h_2(ε) + (1 − ε) log L + ε log(M − L).

Proof of Corollary 3. Corollary 3 is a direct consequence of Theorem 4 and Proposition 5; cf. (11).

On Rényi's Information Measures
Although the choice of Shannon's information measures is unique under a set of axioms (see, e.g., Theorem 3.6 of [6] and Chapter 3 of [4]), there are several different definitions of conditional Rényi entropies (cf. [65][66][67]). Among them, this study focuses on Arimoto's and Hayashi's conditional Rényi entropies [8,9]. Arimoto's conditional Rényi entropy of X given Y is defined for each order α ∈ (0, 1) ∪ (1, ∞), where the α-norm of an X-marginal P is defined by ‖P‖α = (∑ x∈X P(x)^α)^(1/α). Here, note that the (unconditional) Rényi entropy [55] of an X-marginal P can be defined by Hα(P) = (α/(1 − α)) log ‖P‖α, i.e., it is a monotone function of the α-norm. Basic properties of the α-norm can be found in the following proposition.
Proof of Proposition 6. The symmetry is obvious. The lower semicontinuity was proven by Kovačević-Stanojević-Šenk in Theorem 5 of [35]. The concavity (resp. convexity) property can be verified by the reverse (resp. forward) Minkowski inequality.
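The α-norm, the Rényi entropy as its monotone function, and Arimoto's conditional version Hα(X | Y) = (α/(1 − α)) log ∑_y P_Y(y) ‖P_{X|Y=y}‖α can be sketched numerically as follows (a minimal finite-alphabet illustration of these standard formulas; function names are ours):

```python
import math

def alpha_norm(p, alpha):
    """||P||_alpha = (sum_x P(x)^alpha)^(1/alpha) for a probability vector P."""
    return sum(px ** alpha for px in p) ** (1.0 / alpha)

def renyi_entropy(p, alpha):
    """H_alpha(P) = (alpha/(1-alpha)) log ||P||_alpha, alpha in (0,1) or (1,inf)."""
    return (alpha / (1.0 - alpha)) * math.log(alpha_norm(p, alpha))

def arimoto_conditional(p_y, p_x_given_y, alpha):
    """Arimoto: H_alpha(X|Y) = (alpha/(1-alpha)) log sum_y P_Y(y) ||P_{X|Y=y}||_alpha."""
    avg = sum(py * alpha_norm(row, alpha) for py, row in zip(p_y, p_x_given_y))
    return (alpha / (1.0 - alpha)) * math.log(avg)

# Uniform distributions give log M for every order alpha.
print(renyi_entropy([0.25] * 4, 2.0), math.log(4))
```

For a degenerate Y, Arimoto's conditional entropy reduces to the unconditional Rényi entropy, which is a quick consistency check on the formulas.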

Remark 11. Although Hayashi's conditional Rényi entropy is smaller than Arimoto's one in general (see
When Y is finite, a tighter Fano-type inequality than Corollary 4 can be obtained as follows.
with equality if ε P Proof of Corollary 5. The proof is the same as the proof of Corollary 4 by replacing Theorem 1 by Theorem 3.
Proof of Corollary 6. The proof is the same as the proof of Corollary 4, replacing Theorem 1 with Theorem 4 (cf. (15)).

Remark 13. It follows by l'Hôpital's rule that
Therefore, our Fano-type inequalities stated in Corollaries 1-6 are consistent with the continuity of Shannon's and Rényi's information measures with respect to the order 0 < α < ∞.
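The l'Hôpital limit behind Remark 13, namely that the Rényi entropy tends to the Shannon entropy as α → 1, can be observed numerically (our own sketch; the tolerance reflects the O(|α − 1|) discrepancy):

```python
import math

def shannon(p):
    """Shannon entropy in nats."""
    return -sum(px * math.log(px) for px in p if px > 0.0)

def renyi(p, alpha):
    """H_alpha(P) = (1/(1-alpha)) log sum_x P(x)^alpha, alpha != 1."""
    return math.log(sum(px ** alpha for px in p)) / (1.0 - alpha)

p = [0.5, 0.25, 0.125, 0.125]
for a in (0.99, 0.999, 1.001, 1.01):
    print(a, renyi(p, a) - shannon(p))  # discrepancy shrinks as a -> 1
```

The printed gaps shrink linearly in |α − 1|, illustrating the continuity claim at α = 1.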

Generalization of Erokhin's Function to α-Mutual Information
Erokhin's function I(Q, ε) defined in (9) can be generalized to the α-mutual information (cf. [68]) as follows. Let X be an X-valued r.v. and Y a Y-valued r.v. Sibson's α-mutual information [24] (see also Equation (32) of [68], Equation (13) of [69], and Definition 7 of [70]) is defined by I α (X ∧ Y) = inf Q Y D α (P X,Y ‖ P X × Q Y ) for each 0 < α < ∞, where P X,Y (resp. P X ) denotes the probability measure on X × Y (resp. X) induced by the pair (X, Y) of r.v.'s (resp. the r.v. X), the infimum is taken over the probability measures Q Y on Y, and the Rényi divergence [55] between two probability measures µ and ν on A is defined for each 0 < α < ∞. Note that Sibson's α-mutual information coincides with the ordinary mutual information when α = 1, i.e., it holds that I(X ∧ Y) = I 1 (X ∧ Y). Similar to (7) and (9), given a system (Q, L, ε, Y) satisfying (29), define where the infimum is taken over the pairs of r.v.'s (X, Y) satisfying, in particular, P (L) e (X | Y) ≤ ε and P X = Q. By convention, we denote by It is clear that this definition can be specialized to Erokhin's function I(Q, ε) defined in (9); in other words, it holds that

Proof of Corollary 7. The equality in (87) is trivial from the well-known identity The inequality in (87) follows from Corollary 1, completing the proof.
Proof of Corollary 8. As Sibson's identity [24] (see also Equation (12) of [69]) states that where Q α stands for the probability distribution on Y given for each y ∈ Y, we observe that On the other hand, it follows from Equation (13) of [8] that for every α ∈ (0, 1) ∪ (1, ∞), provided that Y is countable. Combining (92) and (93), we obtain the first equality in (88). Finally, the second equality in (88) follows from Corollary 4 after some algebra. This completes the proof of Corollary 8.
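Sibson's identity admits a direct numerical check: the minimizer Q α (y) is proportional to (∑ x P X (x) P Y|X (y|x)^α)^{1/α}, and plugging it into the Rényi divergence recovers the closed form I α (X ∧ Y) = (α/(α − 1)) log ∑ y (∑ x P X (x) P Y|X (y|x)^α)^{1/α}. A small sketch on a binary channel (our own illustration; variable names are ours):

```python
import math

def renyi_divergence(mu, nu, alpha):
    """D_alpha(mu || nu) = (1/(alpha-1)) log sum_a mu(a)^alpha nu(a)^(1-alpha)."""
    s = sum(m ** alpha * n ** (1.0 - alpha) for m, n in zip(mu, nu))
    return math.log(s) / (alpha - 1.0)

def sibson_mi(p_x, W, alpha):
    """Closed form via Sibson's identity (W[x][y] = P_{Y|X}(y|x))."""
    n_y = len(W[0])
    inner = [sum(px * W[x][y] ** alpha for x, px in enumerate(p_x)) ** (1.0 / alpha)
             for y in range(n_y)]
    return (alpha / (alpha - 1.0)) * math.log(sum(inner))

p_x, W, alpha = [0.3, 0.7], [[0.9, 0.1], [0.2, 0.8]], 2.0

# Minimizing output distribution Q_alpha(y), normalized.
inner = [sum(px * W[x][y] ** alpha for x, px in enumerate(p_x)) ** (1.0 / alpha)
         for y in range(2)]
z = sum(inner)
q_alpha = [v / z for v in inner]

joint = [p_x[x] * W[x][y] for x in range(2) for y in range(2)]
prod = [p_x[x] * q_alpha[y] for x in range(2) for y in range(2)]
assert abs(renyi_divergence(joint, prod, alpha) - sibson_mi(p_x, W, alpha)) < 1e-12

# Any other output distribution can only increase the divergence.
prod2 = [p_x[x] * 0.5 for x in range(2) for y in range(2)]
assert renyi_divergence(joint, prod2, alpha) >= sibson_mi(p_x, W, alpha) - 1e-12
```

The two assertions verify, respectively, that Q α attains the infimum value and that a competing (uniform) output distribution does no better.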
Corollary 9 (Arimoto, when α ≠ 1). Suppose that ε > 0 and the cardinality of Y is at least countably infinite. For every α ∈ (0, 1) ∪ (1, ∞), it holds that

Proof of Corollary 9. The first equality in (96) is obvious from the definition. The second equality in (96) follows from Corollary 4 after some algebra, completing the proof.
When Y is finite, the inequalities stated in Corollaries 7-9 can be tightened via Theorem 3, as in Corollaries 2 and 5. We omit the explicit statements of these tightened inequalities in this paper.

Asymptotic Behaviors on Equivocations
In information theory, the equivocation, or the remaining uncertainty, of an r.v. X relative to a correlated r.v. Y plays an important role in establishing the fundamental limits of the optimal transmission ratio and/or rate in several communication models. Shannon's equivocation H(X | Y) is a well-known measure used to formulate the notion of perfect secrecy of symmetric-key encryption in information-theoretic cryptography [71]. Iwamoto-Shikata [66] extended such a secrecy criterion by generalizing Shannon's equivocation to Rényi's equivocation, establishing various desirable properties of the latter. Recently, Hayashi-Tan [72] and Tan-Hayashi [73] studied the asymptotics of Shannon's and Rényi's equivocations when the side-information about the source is given via various classes of random hash functions with a fixed rate.
In this section, we assume that certain error probabilities vanish, and we then establish the asymptotic behavior of Shannon's, and sometimes Rényi's, equivocations via the Fano-type inequalities stated in Section 4.

Fano's Inequality Meets the AEP
We consider a general form of the asymptotic equipartition property (AEP) as follows.
In the literature, the r.v. X n is commonly represented as a random vector X n = (Z (n) 1 , . . . , Z (n) n ). The formulation without reference to random vectors means that X = {X n } ∞ n=1 is a general source in the sense of Page 100 of [33].
Let {L n } ∞ n=1 be a sequence of positive integers, {Y n } ∞ n=1 a sequence of nonempty alphabets, and {(X n , Y n )} ∞ n=1 a sequence of pairs of r.v.'s, where X n (resp. Y n ) is X-valued (resp. Y n -valued) for each n ≥ 1. As for any sequence of list decoders { f n : Y n → X L n } ∞ n=1 , it suffices to assume that P (L n ) e (X n | Y n ) = o(1) as n → ∞ in our analysis. The following theorem is a novel characterization of the AEP via Fano's inequality.
Theorem 5. Suppose that a general source X = {X n } ∞ n=1 satisfies the AEP, and H(X n ) = Ω(1) as n → ∞. Then, it holds that where |u| + = max{0, u} for u ∈ R. Consequently, it holds that

Example 1. Suppose that X n = (Z 1 , . . . , Z n ) and Y n X n for each n ≥ 1. Then, Theorem 5 states that This result is commonly referred to as the weak converse property of the source {Z n } ∞ n=1 in the unique decoding setting.
Example 2. Let {Z n } ∞ n=1 be a source as described in Example 1. Even in the list decoding setting, Theorem 5 states that similarly to Example 1. This is a key observation in Ahlswede-Gács-Körner's proof of the strong converse property of degraded broadcast channels; see Chapter 5 of [42] (see also Section 3.6.2 of [43] and Lemma 1 of [48]).

Example 3.
Consider the Poisson source X = {X n } ∞ n=1 with growing mean λ n = ω(1) as n → ∞, i.e., It is known that and the Poisson source X satisfies the AEP (see [25]). Therefore, it follows from Theorem 5 that The following example shows a general source that satisfies neither the AEP nor (99).
Example 4. Consider a general source X = {X n } ∞ n=1 whose component distributions are given for each n ≥ 1. Suppose that X n Y n for each n ≥ 1. After some algebra, we have for each n ≥ 1. Therefore, we observe that does not hold. In fact, it holds that H(X n ) → γ + log L as n → ∞ and Consequently, we also see that X = {X n } ∞ n=1 does not satisfy the AEP.
Example 4 shows that the AEP plays an essential role in Theorem 5.
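The entropy growth of the Poisson source in Example 3 can be checked numerically against the Gaussian-type asymptotic H(X n ) ≈ (1/2) log(2πeλ n ), a standard approximation for large means; the script below is our own sketch, not taken from the paper:

```python
import math

def poisson_entropy(lam, kmax=None):
    """Shannon entropy (nats) of Poisson(lam), truncating the negligible upper tail."""
    if kmax is None:
        kmax = int(lam + 20.0 * math.sqrt(lam) + 20)
    h = 0.0
    logp = -lam  # log pmf at k = 0
    for k in range(kmax + 1):
        p = math.exp(logp)
        if p > 0.0:
            h -= p * logp
        logp += math.log(lam) - math.log(k + 1)  # advance log pmf to k + 1
    return h

for lam in (25.0, 100.0, 400.0):
    approx = 0.5 * math.log(2.0 * math.pi * math.e * lam)
    assert abs(poisson_entropy(lam) - approx) < 0.05
```

The error of the approximation is of order 1/λ, so the agreement tightens as the mean grows, consistent with λ n = ω(1).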

Vanishing Unnormalized Rényi's Equivocations
Let X be an X-valued r.v. satisfying H(X) < ∞, {L n } ∞ n=1 a sequence of positive integers, {Y n } ∞ n=1 a sequence of nonempty alphabets, and {(X n , Y n )} ∞ n=1 a sequence of X × Y n -valued r.v.'s. The following theorem provides four conditions on a general source X = {X n } ∞ n=1 under which vanishing error probabilities imply vanishing unnormalized Shannon's and Rényi's equivocations.

Theorem 6. Let α ≥ 1 be an order. Suppose that any one of the following four conditions holds:
(a) the order α is strictly larger than 1, i.e., α > 1;
(b) the sequence {X n } ∞ n=1 satisfies the AEP and H(X n ) = O(1) as n → ∞;
(c) there exists an n 0 ≥ 1 such that P X n majorizes P X for every n ≥ n 0 ;
(d) the sequence {X n } ∞ n=1 converges in distribution to X and H(X n ) → H(X) as n → ∞.

Under the Symbol-Wise Error Criterion
Let L = {L n } ∞ n=1 be a sequence of positive integers, {Y n } ∞ n=1 a sequence of nonempty alphabets, and {(X n , Y n )} ∞ n=1 a sequence of X × Y n -valued r.v.'s satisfying H(X n ) < ∞ for every n ≥ 1. In this subsection, we focus on the minimum arithmetic-mean probability of symbol-wise list decoding error defined as where X n = (X 1 , X 2 , . . . , X n ) and Y n = (Y 1 , Y 2 , . . . , Y n ). Now, let X be an X-valued r.v. satisfying H(X) < ∞. Under this symbol-wise error criterion, the following theorem holds.

Theorem 7. Suppose that P X n majorizes P X for sufficiently large n. Then, it holds that

Proof of Theorem 7. See Section 7.3.
It is known that the classical Fano inequality stated in (1) can be extended from the average error criterion P{X n ≠ Y n } to the symbol-wise error criterion, where stands for the Hamming distance between two strings x n = (x 1 , . . . , x n ) and y n = (y 1 , . . . , y n ). In fact, Theorem 7 states that provided that P X n majorizes P X for sufficiently large n. In the list decoding setting, however, the analogous extension fails. A counterexample can be readily constructed.
if n = m, X n Y n for each n ≥ 1, and L n = 2 for each n ≥ 1. Then, we observe that for every n ≥ 1, but for every n ≥ 1.
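Returning to the unique-decoding case L = 1, the classical symbol-wise Fano inequality recalled above is in fact tight for a uniform bit observed through a binary symmetric channel. A small sanity check (our own illustration, not the paper's example):

```python
import math

def h2(u):
    """Binary entropy in nats with h2(0) = h2(1) = 0."""
    return 0.0 if u <= 0.0 or u >= 1.0 else -u * math.log(u) - (1.0 - u) * math.log(1.0 - u)

# Uniform bit X sent over a BSC(p); Y is the output; unique (L = 1) MAP decoding.
# Per-symbol equivocation H(X|Y) = h2(p); MAP symbol error probability is p (p < 1/2).
for p in (0.01, 0.11, 0.3):
    equivocation = h2(p)
    bound = h2(p) + p * math.log(2 - 1)   # log(M - 1) = 0 for a binary alphabet
    assert abs(equivocation - bound) < 1e-15  # Fano holds with equality here
```

For i.i.d. pairs this single-letter equality lifts to the arithmetic-mean symbol-wise criterion, whereas the example above shows the list-decoding analogue can fail.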

Proof of Theorem 1
We shall relax the feasible region of the supremum in (7) via some preliminary lemmas. We first define a notion of symmetry for the conditional distribution P X|Y as follows.

Remark 15. The term introduced in Definition 8 is inspired by the uniformly dispersive channels named by Massey (see Page 77 of [12]). In fact, if Y is countable and X (resp. Y) denotes the output (resp. input) of a channel P X|Y , then the channel P X|Y can be thought of as a uniformly dispersive channel, provided that (X, Y) is connected uniform-dispersively. Fano originally called such channels uniform from the input; see Page 127 of [11]. Refer to Section II-A of [13] for several symmetry notions of channels.
Although an almost surely constant P X|Y implies the independence X ⊥ Y, note that an almost surely constant P ↓ X|Y does not imply independence. We now give the following lemma.

Lemma 1. If a jointly distributed pair (X, Y) is connected uniform-dispersively, then P X|Y majorizes P X a.s.
Proof of Lemma 1. Let k be a positive integer. Choose a collection {x i } k i=1 of k distinct elements in X so that for every 1 ≤ i ≤ k. As and for each x ∈ X, we observe that If (X, Y) is connected uniform-dispersively (see Definition 8), then (123) implies that which is indeed the majorization relation stated in Definition 1, completing the proof of Lemma 1.

Remark 16. Lemma 1 can be thought of as a novel characterization of uniformly dispersive channels via the majorization relation; see Remark 15.
More precisely, given an input distribution P on X and a uniformly dispersive channel W : X → Y with countable output alphabet Y, it holds that W(· | x) majorizes the output distribution PW for every x ∈ X, where PW is given by for each y ∈ Y.
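The claim of Remark 16 is easy to verify numerically: for a channel whose rows are permutations of one common vector, every row W(· | x) majorizes the output distribution PW. A minimal sketch (the channel, the input distribution, and the function names are ours):

```python
def majorizes(p, q, tol=1e-12):
    """True iff p majorizes q: prefix sums of the decreasing rearrangements dominate."""
    ps, qs = sorted(p, reverse=True), sorted(q, reverse=True)
    sp = sq = 0.0
    for a, b in zip(ps, qs):
        sp += a
        sq += b
        if sp < sq - tol:
            return False
    return True

# Uniformly dispersive channel: every row is a permutation of the same vector.
W = [[0.7, 0.2, 0.1], [0.1, 0.7, 0.2], [0.2, 0.1, 0.7]]
P = [0.5, 0.3, 0.2]  # arbitrary input distribution
PW = [sum(P[x] * W[x][y] for x in range(3)) for y in range(3)]
assert all(majorizes(W[x], PW) for x in range(3))
```

Intuitively, averaging permuted copies of one vector can only flatten it, which is exactly the majorization statement of Lemma 1.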

Definition 9. Let A be a collection of jointly distributed pairs of an X-valued r.v. and a Y-valued r.v. We say that
for every x ∈ X.
For such a collection A, the following lemma holds.

Lemma 2. Suppose that A has balanced conditional distributions. For any (X, Y) ∈ A, there exists a pair (U, V) ∈ A connected uniform-dispersively such that
Proof of Lemma 2. For any (X, Y) ∈ A, it holds that where
• (a) follows by the symmetry of φ,
• (b) follows by Jensen's inequality (see Proposition A-2 of [14]),
• (c) follows by the existence of a pair (U, V) ∈ A connected uniform-dispersively (see (126)), and
• (d) follows by the symmetry of φ again.
This completes the proof of Lemma 2.
For a system (Q, L, ε, Y) satisfying (29), we now define a collection of pairs of r.v.'s as follows. Note that this is the feasible region of the supremum in (7). The main idea of proving Theorem 1 is to apply Lemma 2 to this collection. The collection R(Q, L, ε, Y) does not, however, have balanced conditional distributions in general. More specifically, there exists a measurable space Y such that R(Q, L, ε, Y) does not have balanced conditional distributions even if Y is standard Borel. Fortunately, the following lemma avoids this issue by blowing up the collection R(Q, L, ε, Y) via the infinite-dimensional version of Birkhoff's theorem [18].

Lemma 3.
If the cardinality of Y is at least the cardinality of the continuum R, then there exists a σ-algebra on Y such that the collection R(Q, L, ε, Y) has balanced conditional distributions.
Proof of Lemma 3. First, we shall choose an appropriate alphabet Y so that its cardinality is the cardinality of the continuum. Denote by Ψ the set of ∞ × ∞ permutation matrices, where an ∞ × ∞ permutation matrix is a real matrix Π = {π i,j } ∞ i,j=1 satisfying π i,j ∈ {0, 1} for each 1 ≤ i, j < ∞, and in which every row and every column contains exactly one 1. For an ∞ × ∞ permutation matrix Π = {π i,j } i,j ∈ Ψ, define the permutation ψ Π on X = {1, 2, . . . } by It is known that there is a one-to-one correspondence between the permutation matrices Π and the bijections ψ Π ; thus, the cardinality of Ψ is the cardinality of the continuum. Therefore, in this proof, we may assume without loss of generality that Y = Ψ. Second, we shall construct an appropriate σ-algebra on Y via the infinite-dimensional version of Birkhoff's theorem (cf. Theorem 2 of [18]) for ∞ × ∞ doubly stochastic matrices, where an ∞ × ∞ doubly stochastic matrix is a real matrix M = {m i,j } ∞ i,j=1 satisfying 0 ≤ m i,j ≤ 1 for each 1 ≤ i, j < ∞, and in which every row sum and every column sum equals 1. Similar to Ψ, denote by Ψ i,j the set of ∞ × ∞ permutation matrices in which the entry in the ith row and the jth column is 1, where note that Ψ i,j ⊂ Y. Then, the following lemma holds.

Remark 17.
In the original statement of Theorem 2 of [18], it is written that a probability space (Y, Γ, µ) exists for a given ∞ × ∞ doubly stochastic matrix M; namely, the σ-algebra Γ may depend on M. However, the construction of Γ is independent of M (see Page 196 of [18]), and we can restate Theorem 2 of [18] as Lemma 4. This is a probabilistic description of an ∞ × ∞ doubly stochastic matrix via a probability measure on the ∞ × ∞ permutation matrices. The existence of the probability measure µ is due to Kolmogorov's extension theorem. We employ this σ-algebra Γ on Y in the proof.
Thirdly, we shall show that, under this measurable space (Y, Γ), the collection R(Q, L, ε, Y) has balanced conditional distributions in the sense of (126). In other words, for a given pair (X, Y) ∈ R(Q, L, ε, Y), it suffices to construct another pair (U, V) of r.v.'s satisfying (126) and (U, V) ∈ R(Q, L, ε, Y). At first, construct its conditional distribution P U|V for each x ∈ X, where E[Z | W] stands for the conditional expectation of a real-valued r.v. Z given the sub-σ-algebra σ(W) generated by an r.v. W, and ψ V is given as in (132). As ψ V (x) is σ(V)-measurable for each x ∈ X, it is clear that for every x ∈ X. Thus, we readily see that (126) holds, and (U, V) is connected uniform-dispersively. Thus, by (123) and the hypothesis that P X = Q, we see that P U|V majorizes Q a.s. Therefore, it follows from the well-known characterization of the majorization relation via ∞ × ∞ doubly stochastic matrices (see Lemma 3.1 of [16] or Page 25 of [10]) that one can find an ∞ × ∞ doubly stochastic matrix satisfying (136) for every i ≥ 1. By Lemma 4, we can construct an induced probability measure P V so that P V (Ψ i,j ) = m i,j for each 1 ≤ i, j < ∞. Now, the pair of P U|V and P V defines the probability law of (U, V). To ensure that (U, V) belongs to R(Q, L, ε, Y), it remains to verify that P (L) e (U | V) ≤ ε and P U = Q. As ψ Π is a permutation defined in (132), we have where

• (a) and (c) follow from Proposition 2, and
• (b) follows from (136).
Therefore, we see that for every i ≥ 1, where
• (a) follows from (137),
• (b) follows by the identity m i,j = P{V ∈ Ψ i,j },
• (c) follows from the fact that (X, Y) is connected uniform-dispersively,
• (d) follows from (136),
• (e) follows by the definition of Ψ i,j ,
• (f) follows by the Fubini-Tonelli theorem, and
• (g) follows from the fact that the inverse of a permutation matrix is its transpose.
Therefore, we have P U = Q, and the assertion of Lemma 3 is proved in the case where the cardinality of Y is the cardinality of the continuum.
Finally, even if the cardinality of Y is larger than the cardinality of the continuum, the assertion of Lemma 3 can be proved immediately by considering the trace of the space Y on Ψ (cf. p. 23 of [74]). This completes the proof of Lemma 3.
Finally, we show that the Fano-type distribution of type-1 defined in (30) is the infimum of a certain class of X-marginals with respect to the majorization relation ≺.
Lemma 5. Suppose that the system (Q, L, ε) satisfies the right-hand inequality in (29). For every X-marginal R in which R majorizes Q and P (L) e (R) ≤ ε, it holds that R majorizes P type-1 as well.
Proof of Lemma 5. We first give an elementary fact about weak majorization of finite-dimensional real vectors.

Lemma 6. Let p = (p i ) n i=1 and q = (q i ) n i=1 be n-dimensional real vectors satisfying p 1 ≥ p 2 ≥ · · · ≥ p n ≥ 0 and q 1 ≥ q 2 ≥ · · · ≥ q n ≥ 0, respectively. Consider an integer 1 ≤ k ≤ n satisfying q k = q i for every i = k, k + 1, . . . , n. If

Since P type-1 = P ↓ type-1 (see Proposition 4), it suffices to prove that for every k ≥ 1.
Using the above lemmas, we can prove Theorem 1 as follows.
Proof of Theorem 1. Let ε > 0. For the sake of brevity, we write in the proof. Let Υ be a σ-algebra on Y, Ψ an alphabet whose cardinality is the cardinality of the continuum, and Γ a σ-algebra on Ψ such that R(Q, L, ε, Ψ) has balanced conditional distributions (see Lemma 3). Now, we define the collection where the σ-algebra on Y ∪ Ψ is given by the smallest σ-algebra Υ ∨ Γ containing Υ and Γ. It is clear that R ⊂ R̃, and R̃ has balanced conditional distributions as well (see the last paragraph in the proof of Lemma 3). Then, we have It follows from (30) that P type-1 = Q ↓ (see also Proposition 4). In such a case, the supremum in (7) can be achieved by a pair (X, Y) satisfying P X = Q and X ⊥ Y.
Finally, we shall construct a jointly distributed pair (X, Y) satisfying (150)-(152). For the sake of brevity, suppose that Y is the index set of the set of permutation matrices on {J, J + 1, . . . , K 1 }. Namely, denote by Π (y) = {π (y) i,j } K 1 i,j=J a permutation matrix for each index y ∈ Y. By the definition of P type-1 stated in (30) (see also Proposition 4), we observe that and Noting that K 1 < ∞ if ε > 0 (see (34)), Equations (153) and (154) are indeed a majorization relation between two finite-dimensional real vectors; thus, it follows from the Hardy-Littlewood-Pólya theorem (see Theorem 8 of [15] or Theorem 2.B.2 of [10]) that there exists a (K 1 − J + 1) × (K 1 − J + 1) doubly stochastic matrix M = {m i,j } K 1 i,j=J satisfying (155) for each J ≤ i ≤ K 1 . Moreover, it follows from the finite-dimensional version of Birkhoff's theorem [19] (see also Theorems 2.A.2 and 2.C.2 of [10]) that, for such a doubly stochastic matrix M, there exists a probability vector λ = (λ y ) y∈Y satisfying (156) for every J ≤ i, j ≤ K 1 , where a nonnegative vector is called a probability vector if the sum of its elements is unity. Using them, we construct a pair (X, Y) via the following distributions, where the permutation ψ̃ y on {J, J + 1, . . . , K 1 } is defined for each y ∈ Y. Then, it follows from (155) and (156) that (152) holds. Moreover, it is easy to see that P ↓ X|Y=y = P type-1 for every y ∈ Y. Thus, we observe that (150) and (151) hold as well. Together with (149), this implies that the constructed pair (X, Y) achieves the supremum in (7), completing the proof of Theorem 1.
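The finite-dimensional Birkhoff theorem invoked above decomposes a doubly stochastic matrix into a convex combination of permutation matrices, which is exactly how the probability vector λ is produced. A greedy implementation sketch (our own, assuming an exactly doubly stochastic input; a perfect matching on the positive entries always exists by Birkhoff's theorem, so the loop terminates):

```python
def find_perm(M, tol=1e-12):
    """Backtracking search for a permutation sigma with M[i][sigma[i]] > tol for all i."""
    n = len(M)
    sigma, used = [-1] * n, [False] * n
    def rec(i):
        if i == n:
            return True
        for j in range(n):
            if not used[j] and M[i][j] > tol:
                used[j], sigma[i] = True, j
                if rec(i + 1):
                    return True
                used[j] = False
        return False
    return sigma[:] if rec(0) else None

def birkhoff(M, tol=1e-12):
    """Greedy Birkhoff-von Neumann decomposition: returns [(weight, sigma), ...]
       with M = sum of weight * permutation_matrix(sigma)."""
    R = [row[:] for row in M]
    parts = []
    while True:
        sigma = find_perm(R, tol)
        if sigma is None:
            break
        w = min(R[i][sigma[i]] for i in range(len(R)))
        parts.append((w, sigma))
        for i, j in enumerate(sigma):
            R[i][j] -= w  # each step zeroes at least one entry
    return parts

M = [[0.5, 0.5, 0.0],
     [0.25, 0.25, 0.5],
     [0.25, 0.25, 0.5]]
parts = birkhoff(M)
assert abs(sum(w for w, _ in parts) - 1.0) < 1e-9
recon = [[0.0] * 3 for _ in range(3)]
for w, sigma in parts:
    for i, j in enumerate(sigma):
        recon[i][j] += w
assert all(abs(recon[i][j] - M[i][j]) < 1e-9 for i in range(3) for j in range(3))
```

The weights (w, sigma) play the role of the pair (λ, {Π (y) }) in the construction: each sigma is a candidate conditional distribution (a permuted copy of P type-1), and the weights form the marginal on Y.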

Proof of Theorem 2
Even if ε = 0, the inequalities in (149) hold as well; that is, the Fano-type inequality stated in (43) of Theorem 2 holds. In this proof, we shall verify the equality conditions of (43).
If supp(Q) is finite, then it follows by the definition of K 1 stated in (34) that K 1 < ∞. Thus, the same construction of a jointly distributed pair (X, Y) as the last paragraph of Section 6.1 proves that (43) holds with equality if supp(Q) is finite.
Consider the case where supp(Q) is infinite and J = L. Since ε = 0, we readily see that K 1 = ∞, V(J) > 0, and W(K 1 ) = 0. Suppose that We then construct a pair (X, Y) via the following distributions. We readily see that P ↓ X|Y=y = P type-1 for every y ∈ Y; therefore, (150)-(152) hold. This implies that the constructed pair (X, Y) achieves the supremum in (7).
Finally, suppose that the cardinality of Y is at least the cardinality of the continuum. Assume without loss of generality that Y is the set of ∞ × ∞ permutation matrices. Consider the measurable space (Y, Γ) given in the infinite-dimensional version of Birkhoff's theorem (see Lemma 4). In addition, consider a jointly distributed pair (X, Y) satisfying P ↓ X|Y = P type-1 a.s. Then, it is easy to see that (150) and (151) hold for any induced probability measure P Y on Y. Similar to the construction of the probability measure P V on Y below (137), we can find an induced probability measure P Y satisfying (152). Therefore, it follows from (43) that this pair (X, Y) achieves the supremum in (7). This completes the proof of Theorem 2.

Proof of Theorem 3
To prove Theorem 3, we need some more preliminary results. Throughout this subsection, assume that the alphabet Y is finite and nonempty. In this case, given a pair (X, Y), one can define provided that P Y (y) > 0. For a subset Z ⊂ X, define Note that the difference between P e (X | Y) and P e (X | Y; Z) is trivial from the definitions stated in (21) and (164), respectively. The following propositions are easy consequences of the proofs of Propositions 2 and 3, and so we omit their proofs in this paper.

Proposition 7. It holds that (165).

Proposition 8. Let β : {1, . . . , |Z|} → Z be a bijection satisfying P X (β(i)) ≥ P X (β(j)) if i < j. It holds that

For a finite subset Z ⊂ X, denote by Ψ(Z) the set of |Z| × |Z| permutation matrices in which both rows and columns are indexed by the elements of Z. The main idea of proving Theorem 3 is the following lemma.

Lemma 7. For any
as in (132) and (159). It is clear that for each y ∈ Y, there exists at least one Π ∈ Ψ(Z) such that for every x 1 , x 2 ∈ Z satisfying x 1 ≤ x 2 , which implies that the permutation ϕ Π plays the role of a decreasing rearrangement of P X|Y=y on Z. To express such a correspondence between Y and Ψ(Z), one can choose an injection ι : Y → Ψ(Z) appropriately. In other words, one can find an injection ι so that for every y ∈ Y and x 1 , x 2 ∈ Z satisfying x 1 ≤ x 2 . We now construct an X × Y × Ψ(Z)-valued r.v.
(U, V, W) as follows: the conditional distribution P U|V,W is given by where σ 1 ∘ σ 2 stands for the composition of two bijections σ 1 and σ 2 . The induced probability distribution P V of V is given by P V = P Y . Suppose that the independence V ⊥ W holds. It then remains to determine the induced probability distribution P W of W, and we defer its determination to the last paragraph of this proof. A direct calculation shows where
• (a) follows by the independence V ⊥ W and P V = P Y , and
• (b) follows by (177) and defining ω(u, w) so that for each x ∈ Z and w ∈ Ψ(Z).
Now, we readily see from (179) that (171) holds for any induced probability distribution P W of W. Therefore, to complete the proof, it suffices to show that (U, W) satisfies (169) and (170) with an arbitrary choice of P W , and (U, W) satisfies (168) with an appropriate choice of P W .
Firstly, we shall prove (169). For each w ∈ Ψ(Z), denote by D(w) ⊂ Z the set of cardinality L satisfying for every k ∈ D(w) and x ∈ Z \ D(w), i.e., it stands for the set of the first L elements of Z under the permutation rule w ∈ Ψ(Z). Then, we have where
• (a) is an obvious inequality (see the definitions stated in (21) and (164)).
Therefore, we obtain (169). Secondly, we shall prove (170). We get where
• (a) follows by the symmetry of φ and (177),
• (b) follows by P V = P Y ,
• (c) follows by Jensen's inequality, and
• (d) follows by the independence U ⊥ W.
Therefore, we obtain (170). Finally, we shall prove that there exists an induced probability distribution P W satisfying (168). If we denote by I ∈ Ψ(Z) the identity matrix, then it follows from (180) that for every (u, w) ∈ Z × Ψ(Z). It follows from (179) that Now, denote by β 1 : {1, 2, . . . , LN} → Z and β 2 : {1, 2, . . . , LN} → Z two bijections satisfying P X (β 1 (i)) ≥ P X (β 1 (j)) and β 2 (i) < β 2 (j), respectively, provided that i < j. That is, the bijections β 1 and β 2 play the roles of decreasing rearrangements of P X and P U|W=I , respectively, on Z. Using these bijections, one can rewrite (185) as In the same way as (123), it can be verified from (180) by induction that for each k = 1, 2, . . . , LN. Equations (186) and (187) are indeed a majorization relation between two finite-dimensional real vectors, because β 1 plays the role of a decreasing rearrangement of P X on Z. Combining (184) and this majorization relation, it follows from the Hardy-Littlewood-Pólya theorem derived in Theorem 8 of [15] (see also Theorem 2.B.2 of [10]) and the finite-dimensional version of Birkhoff's theorem [19] (see also Theorem 2.A.2 of [10]) that there exists an induced probability distribution P W satisfying P U = P X , i.e., Equation (168) holds, as in (153)-(158). This completes the proof of Lemma 7.

Remark 18. Lemma 7 restricts the feasible region of the supremum in (7) from a countably infinite alphabet X to a finite alphabet Z in the sense of (171). Specifically, if Y is finite, it suffices to vary at most |Z| = L · |Y| probability masses {P X|Y=y (x)} x∈Z for each y ∈ Y. Lemma 7 is useful not only to prove Theorem 3 but also to prove Proposition 9 of Section 8.1 (see Appendix D for the proof).
As with (129), for a subset Z ⊂ X, we define the collection of pairs (X, Y) satisfying P (L) e (X | Y; Z) ≤ ε and P X = Q, provided that Y is finite. It is clear that (188) coincides with (129) if Z = X, i.e., it holds that Note from Lemma 7 that for each system (Q, L, ε, Y) satisfying (29), there exists a subset Z ⊂ X such that |Z| = L · |Y| and R(Q, L, ε, Y, Z) is nonempty, provided that Y is finite. Another important idea of proving Theorem 3 is to apply Lemma 2 to this collection of r.v.'s. The collection R(Q, L, ε, Y, Z) does not, however, have balanced conditional distributions in the sense of (126) in general, as with (129). Fortunately, similar to Lemma 3, the following lemma avoids this issue by blowing up the collection R(Q, L, ε, Y, Z) via the finite-dimensional version of Birkhoff's theorem [19].
Proof of Lemma 8. Lemma 8 can be proven in a similar fashion to Lemma 3. As the full proof is somewhat long, as with Lemma 3, we only give a sketch as follows.
As |Ψ(Z)| = |Z|!, we may assume without loss of generality that Y = Ψ(Z). For the sake of brevity, we write R̃ in this proof. For a pair (X, Y) ∈ R̃, construct another X × Y-valued r.v. (U, V), as in (135), so that P U|V=y (x) = Q(x) for every (x, y) ∈ (X \ Z) × Y. By such a construction as (135), the condition stated in (126) is obviously satisfied. In the same way as (138), we can verify that Moreover, employing the finite-dimensional version of Birkhoff's theorem [19] (also known as the Birkhoff-von Neumann decomposition) instead of Lemma 4, we can also find an induced probability distribution P V of V so that P U = Q in the same way as (139). Therefore, for any (X, Y) ∈ R̃, one can find (U, V) ∈ R̃ satisfying (126). This completes the proof of Lemma 8.
Let Z ⊂ X be a subset. Consider a bijection β : {1, 2, . . . , |Z|} → Z satisfying Q(β(i)) ≥ Q(β(j)) whenever i < j, i.e., it plays the role of a decreasing rearrangement of Q on Z. Henceforth, suppose that (Q, L, ε, Y, Z) satisfies (192). Define the extremal distribution of type-3 by the following X-marginal: where the weight V 3 (j) is defined for each integer 1 ≤ j ≤ L, the weight W 3 (k) is defined for each integer L ≤ k ≤ L · |Y|, the integer J 3 is chosen so that and the integer K 3 is chosen so that

Remark 19. The extremal distribution of type-3 can be specialized to both the extremal distribution of type-2 defined in (44) and Ho-Verdú's truncated distribution defined in Equation (17) of [21], respectively.

The following lemma shows a relation between the type-2 and the type-3 distributions.
We prove the rest of the majorization relation by contradiction. Namely, assume that for some integer l ≥ L + 1. By the definitions stated in (32), (45), (195), and (197), it can be verified that Thus, as it follows that for every x = l, l + 1, . . . , which, together with the hypothesis (208), implies that This, however, contradicts the definition of probability distributions, i.e., the sum of the probability masses would be strictly larger than one. This completes the proof of Lemma 9.
Similar to (164), we now define As with Proposition 8, we can verify that Therefore, the restriction stated in (192) comes from the same observation as (29) (see Propositions 3 and 8). In view of (216), we write P e (X; Z) if P X = Q. As in Lemma 5, the following lemma holds.

Lemma 10.
Suppose that an X-marginal R satisfies that (i) R majorizes Q, (ii) P (L) e (R; Z) ≤ ε, and (iii) R(k) = Q(k) for each k ∈ X \ Z. Then, it holds that R majorizes P type-3 as well.
Finally, we can prove Theorem 3 by using the above lemmas.

Proof of Theorem 3.
For the sake of brevity, we define Then, we have where
• (d) follows from Lemmas 2 and 8,
• (e) follows from Lemma 1,
• (f) follows from Lemma 10, and
• (g) follows from Proposition 1 and Lemma 9.
Inequalities (224) are indeed the Fano-type inequality stated in (47) of Theorem 3. Finally, supposing that |Y| ≥ (K 2 − J) 2 + 1, we shall construct a jointly distributed pair (X, Y) satisfying (226)-(228). Similar to (153) and (154), we see that and This is a majorization relation between two (K 2 − J + 1)-dimensional real vectors; thus, it follows from the Hardy-Littlewood-Pólya theorem (Theorem 8 of [15]; see also Theorem 2.B.2 of [10]) that there exists a (K 2 − J + 1) × (K 2 − J + 1) doubly stochastic matrix M = {m i,j } K 2 i,j=J satisfying the corresponding relation for each J ≤ i ≤ K 2 . Moreover, it follows from Marcus-Ree's or Farahat-Mirsky's refinement of the finite-dimensional version of Birkhoff's theorem, derived in [75] or Theorem 3 of [76], respectively (see also Theorem 2.F.2 of [10]), that there exists a pair of a probability vector λ = (λ y ) y∈Y and a collection of permutation matrices {Π (y) } y∈Y satisfying for every J ≤ i, j ≤ K 2 . Using them, construct a pair (X, Y) via the following distributions, where ψ̃ y is defined as in (159). Similar to Section 6.1, we now observe that (226)-(228) hold. Together with (224), this implies that the constructed pair (X, Y) achieves the supremum in (7). Furthermore, since P type-2 and Q ↓ differ in at most K 2 −J+1 L−J+1 probability masses, it follows that the collection {P X|Y=y } y∈Y consists of at most K 2 −J+1 L−J+1 distinct distributions. Namely, the condition that |Y| ≥ K 2 −J+1 L−J+1 is also sufficient to construct a jointly distributed pair (X, Y) satisfying (226)-(228). This completes the proof of Theorem 3.

Remark 20.
Step (b) in (224) is a key step in proving Theorem 3; it is the reduction from the infinite-dimensional to the finite-dimensional setting via Lemma 7 (see also Remark 18). Note that this proof technique is not applicable when Y is infinite, whereas the proof of Theorem 1 works well for infinite Y.

Proof of Theorem 4
It is known that every discrete probability distribution on Finally, it is easy to see that provided that for every 1 ≤ x ≤ M. This implies the existence of a pair (X, Y) achieving the maximum in (50); and therefore, the equality (237) holds. This completes the proof of Theorem 4.

Proofs of Asymptotic Behaviors on Equivocations
In this section, we prove Theorems 5-7.

Proof of Theorem 5
Defining the variational distance between two X-marginals P and Q by d(P, Q) = ∑ x∈X |P(x) − Q(x)|, we now introduce the following lemma, which is useful for proving Theorem 5.
where the X-marginal S (Q,δ) is defined by and the integer B is chosen so that For the sake of brevity, in this proof, we write for each n ≥ 1. Suppose that ε n = o(1) as n → ∞. By Corollary 1, instead of (99), it suffices to verify that As supp(P 1,n ) = {1, . . . , L n } if ε n = 0, we may assume without loss of generality that 0 < ε n < 1.
Define two X-marginals Q (1) n and Q (2) n by for each n ≥ 1. As Q (1) n majorizes the uniform distribution on {1, 2, . . . , L n }, it is clear from the Schur-concavity of the Shannon entropy that Thus, since it follows by the strong additivity of the Shannon entropy (cf. Property (1.2.6) of [78]) that Thus, since h 2 (ε n ) = o(1), it suffices to verify the asymptotic behavior of the third term on the right-hand side of (253), i.e., whether holds or not.
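The strong additivity (grouping) identity underlying (253) — splitting P into a mixture (1 − ε)Q(1) ⊕ εQ(2) with disjoint supports contributes h 2 (ε) plus the weighted component entropies — can be verified numerically. The distributions below are made-up examples, not those of the proof:

```python
import math

def H(p):
    """Shannon entropy (nats) of a probability vector."""
    return -sum(x * math.log(x) for x in p if x > 0.0)

def h2(u):
    """Binary entropy function."""
    return H([u, 1.0 - u])

# A made-up mixture with disjoint supports: the first block carries
# total mass 1 - eps, the second block carries total mass eps.
eps = 0.1
Q1 = [0.5, 0.3, 0.2]   # conditional distribution of the first block
Q2 = [0.7, 0.3]        # conditional distribution of the second block
P = [(1.0 - eps) * q for q in Q1] + [eps * q for q in Q2]

# Grouping identity: H(P) = h2(eps) + (1 - eps) H(Q1) + eps H(Q2)
lhs = H(P)
rhs = h2(eps) + (1.0 - eps) * H(Q1) + eps * H(Q2)
```

Since h 2 (ε n ) = o(1), only the εH(Q(2)) term needs further control, which is exactly the task taken up next in the proof.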
Consider the X-marginal Q n given by for each n ≥ 1. As it follows by the concavity of the Shannon entropy that for each n ≥ 1. A direct calculation shows d(P n , Q n ) for each n ≥ 1, where we note that ε n = o(1) implies δ n = o(1) as well. Thus, it follows from Lemma 11 that for every > 0 and each n ≥ 1, where
• (a) follows by the definition for each n ≥ 1,
• (b) follows by the continuity of the map u → −u log u and the fact that δ n = o(1) as n → ∞, i.e., there exists a sequence {γ n } ∞ n=1 of positive reals satisfying γ n = o(1) as n → ∞ and for each n ≥ 1,
• (c) follows by constructing the subset B (n) ⊂ X so that for each n ≥ 1,
• (d) follows by defining the typical set A (n) ⊂ X so that with some > 0 for each n ≥ 1, and
• (e) follows by the definition of A (n) .
As {X n } ∞ n=1 satisfies the AEP and it is clear that (see, e.g., Problem 3.11 of [2]). Thus, since > 0 can be arbitrarily small and ε n = o(1) as n → ∞, it follows from (259) that there exists a sequence {λ n } ∞ n=1 of positive real numbers satisfying λ n = o(1) as n → ∞ and for each n ≥ 1. Combining (257) and (268), we observe that for each n ≥ 1. Therefore, Equation (254) is indeed valid, which proves (248) together with (253). This completes the proof of Theorem 5.
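The AEP step used in (d) and (e) — that −(1/n) log P(X 1 , . . . , X n ) concentrates around the entropy — can be illustrated for a toy i.i.d. source. The Bernoulli parameter, sample size, and function name below are arbitrary choices for illustration:

```python
import math, random

def aep_demo(p=0.3, n=20000, seed=0):
    """Sample an i.i.d. Bernoulli(p) string and compare the normalized
    self-information -(1/n) log P(X_1, ..., X_n) with H(X) = h2(p)."""
    rng = random.Random(seed)
    xs = [1 if rng.random() < p else 0 for _ in range(n)]
    log_prob = sum(math.log(p) if x == 1 else math.log(1.0 - p) for x in xs)
    entropy = -(p * math.log(p) + (1.0 - p) * math.log(1.0 - p))
    return -log_prob / n, entropy
```

With n = 20000 samples, the two returned values typically agree to within a few thousandths of a nat, which is the concentration phenomenon that the typical set A (n) captures.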

Remark 21. The construction of Q
n defined in (255) is a special case of the splitting technique; it was used to derive limit theorems of Markov processes by Nummelin [26] and Athreya-Ney [27]. This technique has many applications in information theory [21,[28][29][30][31][32] and to the Markov chain Monte Carlo (MCMC) algorithm [79].

Proof of Theorem 6
Condition (b) is a direct consequence of Theorem 5; we shall verify Conditions (a), (c), and (d) in the proof. For the sake of brevity, in the proof, we write for each n ≥ 1. By Corollary 4, instead of (113), it suffices to verify that under any one of Conditions (a), (c), and (d). Similar to the proof of Theorem 5, we may assume without loss of generality that 0 < ε n < 1. Firstly, we shall verify Condition (a). Let Q n be an X-marginal given by for each n ≥ 1. As P 1,n majorizes Q n , it follows by the Schur-concavity of the Rényi entropy that where the second inequality follows by the hypothesis that α > 1, i.e., by Condition (a). These inequalities immediately ensure (274) under Condition (a). Second, we shall verify Condition (d) of Theorem 6. As X and {X n } n are discrete r.v.'s, note that the convergence in distribution of X n to X is equivalent to P n (x) → P(x) as n → ∞ for each x ∈ X, i.e., the pointwise convergence P n → P as n → ∞. It is well-known that the Rényi entropy α → H α (P) is nonincreasing in α for α ≥ 0; hence, it suffices to verify (274) with α = 1, i.e., lim n→∞ (H(P 1,n ) − log L n ) = 0.
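The two facts about the Rényi entropy used above — Schur-concavity and monotonicity in the order α — can be checked numerically. This is an illustrative sketch; `renyi_entropy` is our own helper, using natural logarithms:

```python
import math

def renyi_entropy(p, alpha):
    """Renyi entropy of order alpha in nats; alpha = 1 gives Shannon."""
    p = [x for x in p if x > 0.0]
    if abs(alpha - 1.0) < 1e-12:
        return -sum(x * math.log(x) for x in p)
    return math.log(sum(x ** alpha for x in p)) / (1.0 - alpha)
```

On a sample distribution, H α is nonincreasing in α; and since any distribution majorizes the uniform one on its support, Schur-concavity makes the uniform distribution's entropy the larger of the two.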
We now define two X-marginals Q (1) n and Q (2) n in the same way as (249) and (250), respectively, for each n ≥ 1. By (253), it suffices to verify whether the third term on the right-hand side of (253) approaches zero, i.e., whether lim n→∞ ε n H(Q (2) n ) = 0 holds, as stated in (278). This can be verified in a similar fashion to the proof of Lemma 3 of [21] as follows: Consider the X-marginal Q n defined in (255) for each n ≥ 1. Since Q n (1) = 0 and ε n Q (2) n (x) ≤ ε n for each x ≥ 2, we observe that lim n→∞ ε n Q (2) n (x) = 0 (279) for every x ≥ 1; therefore, for every x ≥ 1. Therefore, since P n converges pointwise to P as n → ∞, we see that Q n also converges pointwise to P ↓ X as ε n vanishes. Thus, by the lower semicontinuity of the Shannon entropy, we observe that and we then have where (a) follows from (257). Thus, it follows from (282), the hypothesis H(X) < ∞, and the nonnegativity of the Shannon entropy that (278) is valid, which proves (277) together with (253). Finally, we shall verify Condition (c) of Theorem 6. Define the X-marginal Q̃ (2) n by for each n ≥ 1, where P̃ 1,n = P (P,L n ,ε n ) type-1 . Note that the difference between Q (2) n and Q̃ (2) n is the difference between P n and P. It can be verified in the same way as (282) that lim n→∞ ε n H(Q̃ (2) n ) = 0; this is (284). It follows in the same manner as Lemma 1 of [21] that if P n majorizes P, then Q (2) n majorizes Q̃ (2) n as well. Therefore, it follows from the Schur-concavity of the Shannon entropy that if P n majorizes P for sufficiently large n, then for sufficiently large n. Combining (284) and (285), Equation (278) also holds under Condition (c). This completes the proof of Theorem 6.

Proof of Theorem 7
To prove Theorem 7, we now give the following lemma.

Proof of Lemma 12.
It is well-known that, for a fixed P X , the conditional Shannon entropy H(X | Y) is concave in P Y|X (cf. [2], Theorem 2.7.4). Defining the distortion measure d : X × X L → {0, 1} as the indicator that x does not belong to the decoded list, the average probability of list decoding error is equal to the average distortion, i.e., for any list decoder f : Y → X L . Therefore, following Theorem 1, the concavity property of Lemma 12 can be proved by the same argument as the proof of the convexity of the rate-distortion function (cf. Lemma 10.4.1 of [2]).
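The concavity of H(X | Y) in P Y|X for a fixed P X (cf. [2], Theorem 2.7.4) can be checked numerically on a toy pair of channels; the input distribution and channels below are arbitrary examples of our own choosing:

```python
import math

def cond_entropy(PX, W):
    """H(X | Y) in nats for input distribution PX and channel matrix
    W[x][y] = P(y | x)."""
    nx, ny = len(PX), len(W[0])
    PY = [sum(PX[x] * W[x][y] for x in range(nx)) for y in range(ny)]
    h = 0.0
    for y in range(ny):
        if PY[y] <= 0.0:
            continue
        for x in range(nx):
            pxy = PX[x] * W[x][y]
            if pxy > 0.0:
                h -= pxy * math.log(pxy / PY[y])
    return h

# Arbitrary example: a binary input, two channels, and their mixture.
PX = [0.4, 0.6]
W0 = [[0.9, 0.1], [0.2, 0.8]]   # an informative channel
W1 = [[0.5, 0.5], [0.5, 0.5]]   # a useless channel
lam = 0.3
Wmix = [[lam * W0[x][y] + (1.0 - lam) * W1[x][y] for y in range(2)]
        for x in range(2)]
```

The mixture channel's conditional entropy dominates the corresponding convex combination, as concavity demands.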
For the sake of brevity, we write If L̃ = ∞, then (115) is a trivial inequality. Therefore, it suffices to consider the case where L̃ < ∞. It is clear that there exists an integer n 0 ≥ 1 such that L n ≤ L̃ for every n ≥ n 0 . Then, we can verify that P 1,n majorizes P̃ 1,n for every n ≥ n 0 as follows. Let J n and J̃ n be given by (33) with (Q, L, ε) = (P n , L n , ε n ) and (Q, L, ε) = (P n , L̃, ε n ), respectively. Similarly, let K n and K̃ n be given by (34) with (Q, L, ε) = (P n , L n , ε n ) and (Q, L, ε) = (P n , L̃, ε n ), respectively. As L n ≤ L̃ implies that J n ≤ J̃ n and K n ≤ K̃ n , it can be seen from (30) that Therefore, noting that we obtain that P 1,n majorizes P̃ 1,n for every n ≥ n 0 .
By hypothesis, there exists an integer n 1 ≥ 1 such that P n majorizes P for every n ≥ n 1 . Letting n 2 = max{n 0 , n 1 }, we observe that for every n ≥ n 2 , where
• (a) follows by Corollary 4 and the fact that P 1,n majorizes P̃ 1,n ,
• (b) follows by Condition (b) of Theorem 6 and the same manner as ([21], Lemma 1), and
• (c) follows by Lemma 12 together with the following definition
Note that the Schur-concavity of the Shannon entropy is used in both (b) and (c) of (298). As it follows from (274) that there exists an integer n 3 ≥ 1 such that for every n ≥ n 3 , it follows from (298) that for every n ≥ max{n 2 , n 3 }. Therefore, letting n → ∞ in (302), we obtain (115). This completes the proof of Theorem 7.

Impossibility of Establishing a Fano-Type Inequality
In Section 3, we explored the principal maximization problem H φ (Q, L, ε, Y) defined in (7) without any explicit form of φ, under three postulates: φ is symmetric, concave, and lower semicontinuous. If ε > 0 and we impose another postulate on φ, then we can also avoid the (degenerate) case in which φ(Q) = ∞. The following proposition shows this fact.
Then, it holds that
Proof of Proposition 9. See Appendix D.
As seen in Section 4, the conditional Shannon and Rényi entropies can be expressed by H φ (X | Y); and then φ must satisfy (303). Proposition 9 shows that we cannot establish an effective Fano-type inequality based on the conditional information measure H φ (X | Y) subject to our original postulates in Section 2.1, provided that (i) φ satisfies the additional postulate (303), (ii) ε > 0, and (iii) φ(Q) = ∞. This generalizes a pathological example given in Example 2.49 of [4], which illustrates issues in the interplay between conditional information measures and error probabilities over countably infinite alphabets X; see Section 1.2.1.

Postulational Characterization of Conditional Information Measures
Our Fano-type inequalities were stated in terms of the general conditional information H φ (X | Y) defined in Section 2.1. As shown in Section 4, the quantity H φ (X | Y) can be specialized to Shannon's and Rényi's information measures. Moreover, the quantity H φ (X | Y) can be further specialized to the following quantities:
1. If φ = (·) 1/2 , then H φ (X | Y) coincides with the (unnormalized) Bhattacharyya parameter (cf. Definition 17 of [80] and Section 4.2.1 of [81]) defined by Note that the Bhattacharyya parameter is often defined so that Z(X | Y) = (B(X | Y) − 1)/(M − 1) to normalize it as 0 ≤ Z(X | Y) ≤ 1, provided that X is {0, 1, . . . , M − 1}-valued. When X takes values in a finite alphabet with a certain algebraic structure, the Bhattacharyya parameter B(X | Y) is useful in analyzing the speed of polarization of non-binary polar codes (cf. [80,81]). Note that B(X | Y) is a monotone function of Arimoto's conditional Rényi entropy (64) of order α = 1/2. This bound is analogous to the property that conditioning reduces entropy (cf. [2], Theorem 2.6.5).
2. It is easy to check that, for any (deterministic) mapping g : X → A with A ⊂ X, the conditional distribution P g(X)|Y majorizes P X|Y a.s. Thus, it follows from Proposition 1 that for any mapping g : X → A, which is a counterpart of the data processing inequality (cf. Equations (26)-(28) of [72]).
3. As shown in Section 3, the quantity H φ (X | Y) also satisfies appropriate generalizations of Fano's inequality.
Therefore, similar to the family of f-divergences [85,86], the quantity H φ (X | Y) generalizes various information-theoretic conditional quantities while retaining certain desirable properties. In addition, we can establish Fano-type inequalities based on H φ (X | Y); this characterization provides insight into how to measure conditional information axiomatically.
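The majorization fact behind the data-processing counterpart in item 2 — that the distribution of g(X) majorizes that of X for any deterministic g — is easy to test numerically. The following sketch uses our own helper functions, not the paper's notation:

```python
from collections import defaultdict

def push_forward(P, g):
    """Distribution of g(X) when X ~ P, with P given as {symbol: prob}."""
    Q = defaultdict(float)
    for x, p in P.items():
        Q[g(x)] += p
    return dict(Q)

def majorizes(p_vals, q_vals, tol=1e-12):
    """Check that p majorizes q, padding the shorter vector with zeros."""
    ps, qs = sorted(p_vals, reverse=True), sorted(q_vals, reverse=True)
    m = max(len(ps), len(qs))
    ps += [0.0] * (m - len(ps))
    qs += [0.0] * (m - len(qs))
    cp = cq = 0.0
    for a, b in zip(ps, qs):
        cp, cq = cp + a, cq + b
        if cp < cq - tol:
            return False
    return True
```

Merging symbols can only concentrate probability mass, which is exactly why the push-forward distribution majorizes the original one.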

When Does Vanishing Error Probabilities Imply Vanishing Equivocations?
In the list decoding setting, the rate of a block code with codeword length n, message size M n , and list size L n can be defined as (1/n) log(M n /L n ) (cf. [87]). Motivated by this, we established asymptotic behaviors of this quantity in Theorems 5 and 6. We would like to emphasize that Example 2 shows that Ahlswede-Gács-Körner's proof technique described in Chapter 5 of [42] (see also Section 3.6.2 of [43]) works for an i.i.d. source on a countably infinite alphabet, provided that the alphabets {Y n } ∞ n=1 are finite.
Theorem 5 states that the asymptotic growth of H(X n | Y n ) − log L n is strictly slower than that of H(X n ), provided that the general source X = {X n } ∞ n=1 satisfies the AEP and the error probabilities vanish (i.e., P (L n ) e (X n | Y n ) = o(1) as n → ∞). This is a novel characterization of the AEP via Fano's inequality. An instance of this characterization using the Poisson source (cf. Example 4 of [25]) was provided in Example 3.

Future Works
1. While there are various studies of reverse Fano inequalities [22,23,49-52], this study has focused only on the forward Fano inequality. Generalizing the reverse Fano inequality in the same spirit as this study would be of interest.
2. Important technical tools in our analysis include the finite- and infinite-dimensional versions of Birkhoff's theorem; they were employed to satisfy the constraint that P X = Q. As a similar constraint is imposed in many information-theoretic problems, e.g., coupling problems (cf. [7,88,89]), finding further applications of the infinite-dimensional version of Birkhoff's theorem would refine technical tools, and potentially results, for communication systems on countably infinite alphabets.
3. We have described a novel connection between the AEP and Fano's inequality in Theorem 5; its role in the classification of sources and channels and its applications to other coding problems are of interest.
Funding: This research was funded by JSPS KAKENHI Grant Number 17J11247.

Acknowledgments:
The author would like to thank Prof. Ken-ichi Iwata for his valuable comments on an earlier version of this paper. Vincent Y. F. Tan gave insightful comments and suggestions that greatly improved this paper. The author would also like to express his gratitude to an anonymous reviewer of the IEEE Transactions on Information Theory and three anonymous reviewers of this journal for carefully checking the technical parts and providing many valuable comments. Finally, the author would like to thank the Guest Editor, Amos Lapidoth, for inviting the author to this special issue and supporting this paper.

Conflicts of Interest:
The author declares no conflicts of interest.

Appendix A. Proof of Proposition 2
The proposition is quite obvious; it is similar to ([90], Equation (1)). Here, we prove it to make this paper self-contained. For a given list decoder f : Y → X L with list size 1 ≤ L < ∞, it follows that where the equality in (a) can be achieved by an optimal list decoder f * satisfying that X ∉ f * (Y) only if P X|Y (X) = P ↓ X|Y (k) for some k ≥ L + 1. This completes the proof of Proposition 2.
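The optimal list decoder described above simply outputs, for each y, the L symbols with the largest posterior mass, so the minimum error averages the residual posterior mass over y. A minimal numerical sketch, assuming finite alphabets and with our own function name:

```python
def list_error(PY, posteriors, L):
    """Minimum list-decoding error probability: for each y, the optimal
    decoder outputs the L symbols with the largest posterior mass, so the
    error is the expected residual posterior mass."""
    err = 0.0
    for py, post in zip(PY, posteriors):
        kept = sum(sorted(post, reverse=True)[:L])
        err += py * (1.0 - kept)
    return err
```

Increasing the list size L keeps more posterior mass and thus can only decrease the error, mirroring the monotonicity implicit in Proposition 2.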

Appendix B. Proof of Proposition 3
The second inequality in (27) is a direct consequence of Proposition 2 and (123). The sharpness of the second bound can be easily verified by taking X and Y statistically independent. We next prove the first inequality in (27). When Y is infinite, the first inequality reduces to the obvious bound P (L) e (X | Y) ≥ 0, and it holds with equality by setting X ⊂ Y and X = Y a.s. Therefore, it suffices to consider the case where Y is finite. Assume without loss of generality that Y = {0, 1, . . . , N − 1} (A2) for some positive integer N. By the definition of cardinality, there exists a subset Z ⊂ X satisfying (i) |Z| = LN and (ii) for each x ∈ {1, 2, . . . , L} and y ∈ {0, 1, . . . , N − 1}, there exists an element z ∈ Z satisfying P X|Y y (z) = P ↓ X|Y y (x). Then, where
• (a) follows from Proposition 2,
• (b) follows by the construction of Z, and
• (c) follows from the facts that |Z| = LN and P X = Q.
This is indeed the first inequality in (27). Finally, the sharpness of the first inequality can be verified by the X × Y-valued r.v. (U, V) determined by where ω 1 (Q, v, L) and ω 2 (Q, L, ε) are defined by A direct calculation shows that P U = Q ↓ and that P (L) e (U | V) attains the first bound, which implies the sharpness of the first inequality. This completes the proof of Proposition 3.
By Lemma 7, one can find Z ⊂ X so that |Z| = L · |Y| and R 3 = R(Q, L, ε, S, Z) (A17) defined in (188) is nonempty as well. Moreover, since P e (X | Y Z), it follows that R 3 ⊂ R 2 . Then, we have where
• (a) follows by the definition of R 1 stated in (129),
• (b) follows by the inclusions
• (c) follows from the fact that (X, Y) ∈ R 3 implies that P X|Y y (x) = Q(x) (A20) for x ∈ X \ Z and y ∈ S, and
• (d) follows from the facts that | supp(Q) \ Z| = ∞,
Inequalities (A18) imply (A11), completing the proof of Proposition 9.

Appendix E. Proof of Lemma 6
This lemma is quite trivial, but we prove it to make the paper self-contained. It can be directly proved by contradiction. Suppose that (140) and (141) hold, but (142) does not. Then, there must exist an l ∈ {k, k + 1, . . . , n − 1} satisfying As q j is constant for each j = k, k + 1, . . . , n, it follows from (140) and (A24) that p j < q j for every j = l, l + 1, . . . , n. Then, we observe that ∑ n i=1 p i < ∑ n i=1 q i , which contradicts the hypothesis (141); therefore, Lemma 6 must hold.