Next Article in Journal
A Novel Framework for Enhancing Decision-Making in Autonomous Cyber Defense Through Graph Embedding
Previous Article in Journal
Soft Classification in a Composite Source Model
 
 
Font Type:
Arial Georgia Verdana
Font Size:
Aa Aa Aa
Line Spacing:
Column Width:
Background:
Article

The Method of Types for the AWGN Channel †

by
Sergey Tridenski
* and
Anelia Somekh-Baruch
*
Faculty of Engineering, Bar-Ilan University, Ramat Gan 5290002, Israel
*
Authors to whom correspondence should be addressed.
The material in this paper was partially presented in Proceedings of the International Zurich Seminar on Information and Communication (IZS), Zurich, Switzerland, 6–8 March 2024. and the IEEE International Symposium on Information Theory (ISIT), Athens, Greece, 7–12 July 2024.
Entropy 2025, 27(6), 621; https://doi.org/10.3390/e27060621
Submission received: 4 May 2025 / Revised: 6 June 2025 / Accepted: 8 June 2025 / Published: 11 June 2025
(This article belongs to the Section Information Theory, Probability and Statistics)

Abstract

For the discrete-time AWGN channel with a power constraint, we give an alternative derivation for the sphere-packing upper bound on the optimal block error exponent and an alternative derivation for the analogous lower bound on the optimal correct-decoding exponent. The derivations use the method of types with finite alphabets of sizes depending on the block length n and with the number of types sub-exponential in n.

1. Introduction

We study reliability of the discrete-time additive white Gaussian noise (AWGN) channel with a power constraint imposed on blocks of its inputs. Consider the capacity of this channel, found by Shannon:
C = 1 2 log 2 ( 1 + s 2 / σ 2 ) ,
where σ 2 is the channel noise variance and s 2 is the power constraint. This capacity corresponds to the maximum of the mutual information I ( p X , w ) over p X , under the power constraint on p X , where w stands for the channel transition probability density function (PDF) and p X is the channel input PDF. Let us briefly recall the technicalities [1] of how expression (1) is obtained from the mutual information:
max p X : E [ X 2 ] s 2 I ( p X , w ) = max p X : E [ X 2 ] s 2 D w p ^ Y | p X D p Y p ^ Y = max p X : E [ X 2 ] s 2 1 2 log ( 2 1 + s 2 σ 2 + E X 2 s 2 2 ln ( 2 ) ( s 2 + σ 2 ) D p ( Y p ^ ( Y 0 .
Here p ^ Y ( y ) 1 2 π ( s 2 + σ 2 ) e y 2 2 ( s 2 + σ 2 ) and p Y ( y ) R p X ( x ) w ( y | x ) d x , the operator E [ · ] denotes the expectation, and D is the Kullback–Leibler divergence between two probability densities. The maximum in (2) is attained by the Gaussian PDF with zero mean and variance s 2 , which simultaneously gives E X 2 = s 2 and brings the divergence D p ( Y p ^ ( Y to zero, which is its lowest possible value [1] [Equation (8.57)]. In this paper we consider the optimal exponents in the block error/correct-decoding probability of the AWGN channel. We propose explanations, similar to (2), both for Shannon’s sphere-packing converse bound on the optimal error exponent [2] [Equations (3), (4) and (11)] (see also [3] [Equation (47)] with [4] [Equations (7.5.34) and (7.5.32)]) and for Oohama’s converse bound on the optimal correct-decoding exponent [5] [Equation (4)].
In the case of discrete memoryless channels, the mutual information enters into the expressions for correct-decoding and error exponents through the method of types [6,7]. For the moment, without any particular meaning attached to it, let us rewrite the sphere-packing constant-composition exponent [6] [Equation (5.19)] with the PDF’s:
min p Y | X : I ( p ^ ( X , p Y | X ) R D p ( Y | X w | p ^ ( X ,
where p ^ X denotes the Gaussian density with variance s 2 , which maximizes (2), and R > 0 is the information rate. When p ^ X is Gaussian, the minimum (3) allows an explicit solution by the method of Lagrange multipliers. The minimizing solution p Y | X of (3) is Gaussian [8] [Equation (65)], Ref. [9] [Equation (327)], and we obtain that (3) is the same as Shannon’s converse bound on the error exponent [2] [Equations (3), (4) and (11)], Refs. [8,9] in the limit of a large block length:
E s p ( R ) = 1 2 A 2 1 2 A G cos θ ln ( G sin θ ) log 2 e , G = 1 2 ( A cos θ + A 2 cos 2 θ + 4 ) , sin θ = 2 min { R , C } , A = s / σ .
Then it turns out that p Y | X of (3) and the y-marginal PDF of the product p ^ X p Y | X [8] [Equation (63)] play the same roles in the derivation of the converse bound as w and p ^ Y , respectively, in the maximization (2).
In this paper, in order to derive expressions similar to (3), we extend the method of types [1] [Chapter 11.1], Ref. [10] to include countable alphabets consisting of uniformly spaced real numbers, with the help of power constraints on types. The countable alphabets depend on the block length n and the number of types satisfying the power constraints is kept sub-exponential in n. The latter idea is inspired by a different subject—of “runs” in a binary sequence. If we treat every “run” of ones or zeros in a binary sequence as a separate symbol from the countable alphabet of run-lengths, then the number of different empirical distributions of such symbols in a binary sequence of length n is equivalent to the number of different partitions of the integer n into the sum of positive integers, which is e c n [11] [Equation (5.1.2)]. Thus, it is sub-exponential, and the method of types can be extended to that case. In our present case, however, the types are empirical distributions of uniformly quantized real numbers in quantized versions of real channel input and output vectors of length n. The quantized versions serve only for classification of channel input and output vectors and not for the communication itself. The uniform quantization step is different for the quantized versions of channel inputs and outputs, and in both cases it is chosen to be a decreasing function of n.
Via expressions similar to (2), the proposed derivations demonstrate that, in order to achieve the converse bounds on the correct-decoding and error exponents, it is necessary for the types of the quantized versions of codewords to converge to the Gaussian distribution in the characteristic function (CF), or, equivalently, in cumulative distribution function (CDF).
The contributions of the current paper are twofold: we successfully apply the method of types to derive converse bounds on both the error exponent [12] and the correct-decoding exponent [13] of the AWGN channel. This underscores the advantage of the method of types.
In Section 2 and Section 3, we describe the communication system and introduce other definitions. In Section 4 we present the main results of the paper, which consist of two theorems and a proposition. Section 5 provides an extension to the method of types. The results of Section 5 are then applied in all the sections that follow. In Section 6 we prove a converse lemma that is then used for derivation of both the correct-decoding and error exponents in Section 7 and Section 8, respectively. Section 9 connects between the PDFs and types.

Notation

Countable alphabets consisting of real numbers are denoted by X n , Y n . The set of types with denominator n over X n is denoted by P n ( X n ) . Capital ‘ P ’ denotes probability mass functions, which are types P X , P Y , P X Y , P Y | X . The type class and the support of a type P X are denoted by T ( P X ) and S ( P X ) , respectively. The expectation with respect to a probability distribution P X is denoted by E P X [ · ] . Small ‘p’ denotes probability density functions p X , p Y , p X Y , p Y | X . Thin letters x, y represent real values, while thick letters x , y represent real vectors. Capital letters X, Y represent random variables; boldface Y represents a random vector of length n. The conditional type class of P X | Y given y is denoted by T ( P X | Y | y ) . The quantized versions of variables are denoted by a superscript ‘q’: x k q , x q , Y q . Small w stands for a conditional PDF, and W n stands for a discrete positive measure which does not necessarily add up to 1. All information-theoretic quantities such as joint and conditional entropies H ( P X Y ) , H ( Y | X ) , the mutual information I ( P X Y ) , I P X , P Y | X , I P X , p Y | X , the Kullback–Leibler divergence D P Y | X W n | P X , D p Y | X w | P X , and the information rate R are defined with respect to the logarithm to a base b > 1 , denoted as log b ( · ) . It is assumed that 0 log b ( 0 ) = 0 . The natural logarithm is denoted as ln. The cardinality of a discrete set is denoted by | · | , while the volume of a continuous region is denoted by vol ( · ) . The complementary set of a set A is denoted by A c . Logical “or” and “and” are represented by the symbols ∨ and ∧, respectively. Gaussian distributions are denoted by N , while Unif ( · ) stands for the discrete uniform distribution. In the Appendix B, p X Y q represents the rounded down version of the PDF p X Y .

2. Communication System

We consider communication over the time-discrete additive white Gaussian noise channel with real channel inputs x R and channel outputs y R and a transition probability density
w ( y | x ) 1 σ 2 π e ( y x ) 2 2 σ 2 .
Communication is performed by blocks of n channel inputs. Let R > 0 denote a nominal information rate. Each block is used for transmission of one out of M messages, where M = M ( n , R ) b n R , for some logarithm base b > 1 . The encoder is a deterministic function f : { 1 , 2 , , M } R n , which converts a message into a transmitted block, such that
f ( m ) = x ( m ) = x 1 ( m ) , x 2 ( m ) , , x n ( m ) , m = 1 , 2 , , M ,
where x k ( m ) R , for all k = 1 , 2 , , n . The set of all the codewords x ( m ) , m = 1 , 2 , , M , constitutes a codebook C . Each codeword x ( m ) in C satisfies the power constraint:
1 n k = 1 n x k 2 ( m ) s 2 , m = 1 , 2 , , M .
The decoder is another deterministic function g : R n { 0 , 1 , 2 , , M } , which converts the received block of n channel outputs y R n into an estimated message or, possibly, to a special error symbol ‘0’:
g ( y ) = { 0 , y m = 1 M D m c , m , y D m , m { 1 , 2 , , M } ,
where each set D m R n is either an open region or the empty set, and the regions are disjoint: D m D m = for m m . Observe that the maximum-likelihood decoder with open decision regions D m , defined for m = 1 , 2 , , M as
D m = R n m : ( m < m ) m > m x ( m ) x ( m ) y : y x ( m ) y x ( m ) ,
is a special case of (6). Note that the formal definition of D m includes the undesirable possibility of x ( m ) = x ( m ) for m m .

3. Definitions

For each n, we define two discrete countable alphabets X n and Y n as one-dimensional lattices:
α , β , γ ( 0 , 1 ) , α + β + γ = 1 ,
Δ α , n 1 / n α , Δ β , n 1 / n β , Δ γ , n 1 / n γ ,
Δ α , n · Δ β , n · Δ γ , n = 1 / n ,
X n i Z i Δ α , n , Y n i Z i Δ β , n .
For each n, we define also a discrete positive measure (not necessarily a distribution), which will approximate the channel w:
W n ( y | x ) w ( y | x ) · Δ β , n , x X n , y Y n .
Denoting by C 0 ( A ) a class of functions f : R R 0 continuous on an open subset A R , we define
F n f : R R 0 | sup y R f ( y ) < + ; f C 0 R { Y n + Δ β , n / 2 } ; R f ( y ) d y = 1 .
The set (11), defined for a given n, will be used only in the derivation of the correct-decoding exponent, while the following set of Lipschitz continuous functions will be used only in the derivation of the error exponent:
L f : R R 0 | | f ( y 1 ) f ( y 2 ) | K | y 1 y 2 | , y 1 , y 2 ; R f ( y ) d y = 1 , K 1 σ 2 2 π e .
Note that L is a convex set and also each function f L is bounded and cannot exceed K .
With a parameter ρ ( 1 , + ) , we define the following Gaussian probability density functions [8,9]:
p Y | X ( ρ ) ( y | x ) 1 σ Y | X ( ρ ) 2 π exp ( y k ρ · x ) 2 2 σ Y | X 2 ( ρ ) ,
k ρ SNR ρ 1 + ( SNR ρ 1 ) 2 + 4 · SNR 2 · SNR , SNR s 2 / σ 2 ,
σ Y | X 2 ( ρ ) ( 1 + ρ ) k ρ σ 2 ,
p ^ Y ( ρ ) ( y ) 1 σ Y ( ρ ) 2 π exp y 2 2 σ Y 2 ( ρ ) ,
σ Y 2 ( ρ ) σ 2 + k ρ s 2 ,
p ^ X ( x ) 1 s 2 π exp x 2 2 s 2 .
The first property of the following lemma shows that p ^ Y ( ρ ) is the y-marginal PDF of the product p ^ X p Y | X ( ρ ) .
Lemma 1
(Properties of (13)–(18)). The following properties hold:
σ Y 2 ( ρ ) = σ Y | X 2 ( ρ ) + k ρ 2 s 2 ,
1 + ρ σ Y | X 2 ( ρ ) = ρ σ Y 2 ( ρ ) + 1 σ 2 ,
σ Y | X 2 ( ρ ) = σ 2 + k ρ ( 1 k ρ ) s 2 ,
1 k ρ > 0 , ρ 0 ,
1 2 1 + 1 + 4 σ 2 s 2 k ρ 1 , 1 ρ 0 ,
p Y | X ( ρ ) ( · | x ) L , ρ 0 , x R ,
and for any two jointly distributed random variables ( X , Y ) , such that E X 2 = σ X 2 s 2 + ϵ , ϵ > 0 , and Y | X = x N k ρ x , σ Y | X 2 ( ρ ) , it holds that
E ( Y X ) 2 = σ 2 + ( 1 k ρ ) s 2 + ( 1 k ρ ) 2 ( σ X 2 s 2 )
{ σ 2 + s 2 + ϵ , ρ 0 , σ 2 + ϵ σ 2 s 2 , 1 < ρ 0 .
Here (17) combined with (14) corresponds to [8] [Equation (64)], (15) and (20) can be found in [8] [Equation (65)], while (14), (21) correspond respectively to [9] [Equations (302) and (328)].
Proof of Lemma 1.
The first property (19) can be verified using (14), (15), (17). Then (20) can be obtained from (15), (17), (19). Property (21) follows by (17) and (19). It can be verified from (14) that k ρ is a positive monotonically decreasing function of ρ , such that k 0 = 1 . Then we get (22) and (23). From (21) and (22) we see that σ Y | X 2 ( ρ ) σ 2 for all ρ 0 , which gives (24). Equality (25) can be obtained using (21). Then, using (22) and (23), we obtain (26). □
The following expressions will describe our results for the error and correct-decoding exponents:
E e ( R ) sup ρ 0 D p Y | X ( ρ ) w | p ^ X + ρ I p ^ X , p Y | X ( ρ ) R ,
E c ( R ) sup 1 < ρ 0 D p Y | X ( ρ ) w | p ^ X + ρ I p ^ X , p Y | X ( ρ ) R .
The following identity can be obtained using (13), (15), (16), (18) and (20):
D p Y | X ( ρ ) w | p ^ X + ρ I p ^ X , p Y | X ( ρ ) c 0 ( ρ ) + c 1 ( ρ ) s 2 ,
c 0 ( ρ ) 1 ln b ln σ · σ Y ρ ( ρ ) σ Y | X 1 + ρ ( ρ ) , c 1 ( ρ ) 1 k ρ 2 σ 2 ln b .
We note also that c 0 ( ρ ) 0 , as ρ + , which can be verified using the properties (15), (17) and (21). It can be verified that the expression inside the supremum of (27) is equivalent to the expression for the Gaussian random-coding error exponent of Gallager before the maximization over ρ [4] [Equations (7.4.24) and (7.4.28)]. Therefore, with the supremum over ρ 0 , the expression (27) coincides with the converse sphere-packing bound of Shannon (4).

4. Main Results

In this section we present two theorems and a proposition. The two theorems give converse bounds on the optimal error exponent and correct-decoding exponent of the AWGN, respectively. The bounds are asymptotic in the limit of a large block length n, and are given by the expressions (27) and (28), accordingly. The proofs, leading to these expressions, are also presented in this section, while some of their technical details are encapsulated in the form of lemmas that are taken care of by the rest of the sections in this paper. In the course of the two proofs leading to (27) and (28) we obtain expressions analogous to (2), which, in exactly the same manner as the maximization of (2), allow us to draw conclusions about the asymptotically optimal codeword types, achieving the converse bounds. This is made precise in the remark below, after the proof of the second theorem. The section is concluded with the proposition that brings together the bounds of the two theorems in a parametric form.
The proof of the first theorem relies on Lemmas 13 and 19, which appear in Section 7 and Section 9, respectively.
Theorem 1
(Error exponent). Let J U n i f { 1 , 2 , , M } be a random variable, independent of the channel noise, and let x ( J ) Y be the random channel-input and channel-output vectors, respectively. Then
lim sup n sup C sup g 1 n log b Pr g ( Y ) J E e ( R ) ,
where E e ( R ) is defined by (27), decoder functions g are defined by (6), and codebooks C satisfy (5).
Proof. 
Starting from Lemma 13, we can write the following sequence of inequalities:
lim sup n sup C sup g 1 n log b Pr g ( Y ) J
a lim sup n max P X : P X P n ( X n ) , E [ X 2 ] s 2 + ϵ min P Y | X : P X Y P n ( X n × Y n ) , E [ ( Y X ) 2 ] σ 2 + s 2 + 2 ϵ , I ( P X , P Y | X ) R ϵ D P Y | X W n | P X
b lim sup n max P X : P X P n ( X n ) , E [ X 2 ] s 2 + ϵ inf p Y | X : p Y | X ( · | x ) L , x , E [ ( Y X ) 2 ] σ 2 + s 2 + ϵ , I ( P X , p Y | X ) R 2 ϵ D p Y | X w | P X
= c lim sup n max P X : P X P n ( X n ) , E [ X 2 ] s 2 + ϵ sup ρ 0 inf p Y | X : p Y | X ( · | x ) L , x , E [ ( Y X ) 2 ] σ 2 + s 2 + ϵ D p Y | X w | P X + ρ I P X , p Y | X R + 2 ϵ
d lim sup n max P X : P X P n ( X n ) , E [ X 2 ] s 2 + ϵ sup ρ 0 D p Y | X ( ρ ) w | P X + ρ I P X , p Y | X ( ρ ) R + 2 ϵ
e lim sup n max P X : P X P n ( X n ) , E [ X 2 ] s 2 + ϵ sup ρ 0 c 0 ( ρ ) + c 1 ( ρ ) E X 2 + ρ D p Y ( ρ ) p ^ Y ( ρ ) R + 2 ϵ
f sup ρ 0 c 0 ( ρ ) + c 1 ( ρ ) ( s 2 + ϵ ) ρ ( R 2 ϵ ) ,
where
(a) holds for any ϵ > 0 by Lemma 13 with c X Y = σ 2 + s 2 + 2 ϵ . Note also that D P Y | X W n | P X in (31) denotes the Kullback–Leibler divergence between the probability distribution P Y | X and the positive measure W n defined in (10), which is not a probability distribution but only approximates the channel w.
(b) follows by Lemma 19 for the alphabet parameters α 0 , 1 4 and 1 3 + 2 3 α < β < 2 3 2 3 α .
(c) holds for all R > 0 with the possible exception of the single point on R-axis where (32) may transition between a finite value and + . To verify the equality, let us compare the infimum in (32) and the supremum over ρ 0 in (33) as functions of R R , for a given P X . First, it can be verified that the supremum of (33) is the closure of the lower convex envelope of the infimum of (32). Second, it can be checked by the definition of convexity, that the infimum of (32) itself is a convex (∪) function of R. Then they coincide for all values of R, except possibly for the single point where they both jump to + . This property carries over to the external ‘lim sup max’ as well.
(d) follows because by (24) and (26) function p Y | X ( ρ ) satisfies the conditions under the infimum of (33).
(e) holds as equality inside the supremum over ρ 0 , separately for each ρ . In (34) by p Y ( ρ ) we denote the corresponding marginal PDF of the product P X p Y | X ( ρ ) and use the definitions (29). Then (e) follows by the definitions of p Y | X ( ρ ) and p ^ Y ( ρ ) in (13) and (16), and by their properties (15) and (20).
(f) follows by the non-negativity of the divergence, and by the condition under the maximum of (34), since c 1 ( ρ ) 0 for ρ 0 .
In conclusion, according to (c) we obtain that the inequality between (30) and (35), as functions of R, holds for all R > 0 , except possibly for the single point R = 2 ϵ , where the jump to + in (35) occurs. Therefore, taking the limit as ϵ 0 , we obtain that (30) is upper-bounded for all R > 0 by
sup ρ 0 c 0 ( ρ ) + c 1 ( ρ ) s 2 ρ R ,
which is the same as (27). □
The second theorem relies on Lemmas 17 and 20, which appear in Section 8 and Section 9, respectively.
Theorem 2
(Correct-decoding exponent). Let J U n i f { 1 , 2 , , M } be a random variable, independent of the channel noise, and let x ( J ) Y be the random channel-input and channel-output vectors, respectively. Then
lim inf n inf C inf g 1 n log b Pr g ( Y ) = J E c ( R ) ,
where E c ( R ) is defined by (28), decoder functions g are defined by (6), and codebooks C satisfy (5).
Proof. 
Starting from Lemma 17, for each R > 0 we can choose a different parameter σ ˜ = σ ˜ ( R ) σ , such that there is equality E ( σ ˜ ( R ) ) = E c ( R ) between (74) and (28). Then by (80) we obtain
lim inf n inf C inf g 1 n log b Pr g ( Y ) = J min lim inf n E n ( R , σ ˜ ( R ) , ϵ ˜ , ϵ ) , E c ( R ) .
With the choice 2 ϵ ˜ = ϵ σ 2 s 2 , the first term in the minimum can be lower-bounded as follows:
lim inf n min P X : P X P n ( X n ) , E [ X 2 ] s 2 + ϵ min P Y | X : P X Y P n ( X n × Y n ) , E [ ( Y X ) 2 ] σ ˜ 2 ( R ) + ϵ ˜ D P Y | X W n | P X + | R I P X , P Y | X | + a lim inf n min P X : P X P n ( X n ) , E [ X 2 ] s 2 + ϵ inf p Y | X : p Y | X ( · | x ) F n , x , E [ ( Y X ) 2 ] σ ˜ 2 ( R ) + 2 ϵ ˜ D p Y | X w | P X + | R I P X , p Y | X | + b lim inf n min P X : P X P n ( X n ) , E [ X 2 ] s 2 + ϵ inf p Y | X : p Y | X ( · | x ) F n , x , E [ ( Y X ) 2 ] σ ˜ 2 ( R ) + 2 ϵ ˜ D p Y | X w | P X ρ R D p Y | X p ^ Y ( ρ ) | P X c lim inf n min P X : P X P n ( X n ) , E [ X 2 ] s 2 + ϵ inf p Y | X : p Y | X ( · | x ) F n , x , E [ ( Y X ) 2 ] σ ˜ 2 ( R ) + 2 ϵ ˜ c 0 ( ρ ) + c 1 ( ρ ) E X 2 ρ R + ( 1 + ρ ) D p Y | X p Y | X ( ρ ) | P X
= d lim inf n min P X : P X P n ( X n ) , E [ X 2 ] s 2 + ϵ c 0 ( ρ ) + c 1 ( ρ ) E X 2 ρ R
e c 0 ( ρ ) + c 1 ( ρ ) ( s 2 + ϵ ) ρ R ,
where:
(a) follows by Lemma 20 with c X Y = σ ˜ 2 ( R ) + ϵ ˜ .
(b) holds for ρ ( 1 , 0 ] , because | R I P X , p Y | X | + ρ R I P X , p Y | X for any such ρ , and because I P X , p Y | X D p Y | X p ^ Y ( ρ ) | P X , where p ^ Y ( ρ ) is the Gaussian PDF defined in (16).
(c) holds as an identity inside the infimum by the definitions (13), (16) and (29), and properties (15) and (20).
(d) holds if 2 ϵ ˜ ϵ σ 2 s 2 and ρ ( 1 , 0 ] , because then by (26) and (11) the function p Y | X ( ρ ) satisfies the conditions under the infimum and achieves the infimum.
(e) follows by the condition under the minimum of (36) since c 1 ( ρ ) 0 for ρ ( 1 , 0 ] .
In conclusion, since (37) is the lower bound for any ρ ( 1 , 0 ] and 2 ϵ ˜ ϵ σ 2 s 2 , we obtain
lim inf n E n R , σ ˜ ( R ) , ϵ σ 2 s 2 / 2 , ϵ sup 1 < ρ 0 c 0 ( ρ ) + c 1 ( ρ ) ( s 2 + ϵ ) ρ R ϵ 0 E c ( R ) .
Remark 1.
Observe that neither the inequality (f) in the proof of Theorem 1 nor the inequality (e) in the proof of Theorem 2 can be met with equality unless E X 2 s 2 + ϵ . Furthermore, neither the inequality (f) in the proof of Theorem 1 nor the inequality (b) in the proof of Theorem 2 can be met with equality unless D p Y ( ρ ) p ^ Y ( ρ ) 0 , where p Y ( ρ ) is the y-marginal PDF of P X p Y | X ( ρ ) . This is similar to (2). Accordingly, since p ^ Y ( ρ ) is Gaussian, while p Y ( ρ ) is a convolution of P X with the Gaussian PDF p Y | X ( ρ ) , the type P X must converge to the Gaussian distribution with zero mean and variance s 2 in CF (it follows because the expression for the characteristic function of the zero-mean Gaussian distribution also has a Gaussian form) and CDF in order to achieve the exponents of Theorems 1 and 2. In both proofs the type P X represents the histograms of codewords, i.e., the empirical distributions of their quantized versions.
The functions E e ( R ) and E c ( R ) given by (27) and (28) can be expressed parametrically [4,5,8] as follows:
Proposition 1
(Parametric representations of E c and E e ). For every R I ( p ^ X , w ) there exists a unique ρ ( 1 , 0 ] , such that
R = I p ^ X , p Y | X ( ρ ) , E c ( R ) = D p Y | X ( ρ ) w | p ^ X .
For every R I ( p ^ X , w ) there exists a unique ρ 0 , such that
R = I p ^ X , p Y | X ( ρ ) , E e ( R ) = D p Y | X ( ρ ) w | p ^ X .
The correct-decoding exponent representation of (38) is equivalent to [5] [Equation (22)] and appears in [14] [Equations (25) and (26)], while the error exponent representation of (39) is equivalent to [4] [Equations (7.4.30) and (7.4.31)] and appears in [8] [Equations (70) and (71)] and [9] [Equations (329) and (330)]. Here we present an alternative proof of Proposition 1 in the vein of the proofs of Theorems 1 and 2.
Proof. 
Let us denote R β I p ^ X , p Y | X ( β ) . Then for β ( 1 , 0 ] we can write a sandwich proof:
inf p Y | X : p ^ X p Y | X N D p Y | X w | p ^ X + | R β I p ^ X , p Y | X | +
a sup 1 < ρ 0 inf p Y | X : p ^ X p Y | X N D p Y | X w | p ^ X ρ R β D p Y | X p ^ Y ( ρ ) | p ^ X b sup 1 < ρ 0 inf p Y | X : p ^ X p Y | X N D p Y | X ( ρ ) w | p ^ X ρ R β R ρ + ( 1 + ρ ) D p Y | X p Y | X ( ρ ) | p ^ X
= c sup 1 < ρ 0 D p Y | X ( ρ ) w | p ^ X ρ R β R ρ E c ( R β ) d D p Y | X ( β ) w | p ^ X ,
where N denotes the set of all bivariate non-degenerate Gaussian PDF’s. Here (a) follows similarly to the inequality (b) in Theorem 2; (b) is an identity; (c) follows because p ^ X p Y | X ( ρ ) is Gaussian and p Y | X ( ρ ) achieves the infimum; and (d) is a lower bound on the supremum at ρ = β . Finally, since the RHS of (41) is further lower-bounded by the infimum (40), we conclude that E c ( R β ) = D p Y | X ( β ) w | p ^ X .
For β 0 , besides R β let us define R β ( ρ ) D p Y | X ( ρ ) p ^ Y ( β ) | p ^ X . Then
inf p Y | X : p ^ X p Y | X N , D ( p Y | X p ^ Y ( β ) | p ^ X ) R β D p Y | X w | p ^ X
a sup ρ 0 inf p Y | X : p ^ X p Y | X N D p Y | X w | p ^ X + ρ D p Y | X p ^ Y ( β ) | p ^ X R β b sup ρ 0 inf p Y | X : p ^ X p Y | X N D p Y | X ( ρ ) w | p ^ X + ρ R β ( ρ ) R β + ( 1 + ρ ) D p Y | X p Y | X ( ρ ) | p ^ X = c sup ρ 0 D p Y | X ( ρ ) w | p ^ X + ρ R β ( ρ ) R β
d sup ρ 0 D p Y | X ( ρ ) w | p ^ X + ρ R ρ R β E e ( R β ) e D p Y | X ( β ) w | p ^ X .
Here (a) follows due to the inequality under the first infimum; (b) is an identity; (c) follows because p ^ X p Y | X ( ρ ) is Gaussian and p Y | X ( ρ ) achieves the infimum; and (d) follows because R β ( ρ ) R ρ ( ρ ) R ρ ; (e) is a lower bound on the supremum at ρ = β . Since the RHS of (43) is lower-bounded by the infimum (42), we obtain E e ( R β ) = D p Y | X ( β ) w | p ^ X . From I p ^ X , p Y | X ( ρ ) = 1 2 log b σ Y 2 ( ρ ) / σ Y | X 2 ( ρ ) using (17) and (21) we obtain d R ρ d ρ = d R ρ d k ρ · d k ρ d ρ < 0 . Hence for every R > 0 the parameter ρ ( R ) ( 1 , + ) is unique. □

5. Method of Types

In this section we extend the method of types [1] to include the countable alphabets of uniformly spaced reals (9) by using power constraints on the types. The method of types in the form of the results of this section is then used in the rest of the paper. It allows us to establish converse bounds in terms of types in Section 6, Section 7 and Section 8 and is used in the Appendix B dedicated to the proof of Lemma 18 of Section 9, connecting between PDFs and types.

5.1. Alphabet Size

Consider all the types P X P n ( X n ) satisfying the power constraint E P X X 2 c X . Let X n ( c X ) X n denote the subset of the alphabet used by these types. In particular, every letter x = i Δ α , n X n ( c X ) must satisfy P X ( x ) x 2 c X , while by the definition of a type we have P X ( x ) 1 / n . This gives | i Δ α , n | n c X . Then X n ( c X ) is finite and by (7) we obtain
Lemma 2
(Alphabet size).  | X n ( c X ) | 2 c X n 1 / 2 + α + 1 ( 2 c X + 1 ) n 1 / 2 + α .

5.2. Size of a Type Class

For P X Y P n ( X n × Y n ) let us define
S ( P X Y ) ( x , y ) X n × Y n : P X Y ( x , y ) > 0 , S ( P X ) x X n : P X ( x ) > 0 , S ( P Y ) y Y n : P Y ( y ) > 0 .
Lemma 3
(Support of a joint type). Let P X Y P n ( X n × Y n ) be a joint type, such that E P X X 2 c X and E P Y Y 2 c Y . Then
| S ( P X Y ) | 2 π ( c X + c Y + 1 / 6 ) · n ( 1 + α + β ) / 2 .
The proof is given in the Appendix A.
Lemma 4
(Support of a type). Let P X P n ( X n ) and P Y P n ( Y n ) be types, such that E P X X 2 c X and E P Y Y 2 c Y . Then
| S ( P X ) | ( 12 c X + 1 ) 1 / 3 · n ( 1 + 2 α ) / 3 , | S ( P Y ) | ( 12 c Y + 1 ) 1 / 3 · n ( 1 + 2 β ) / 3 .
The proof for P X is given in the Appendix A.
For P Y , the parameters c Y , β replace, respectively, c X , α .
Lemma 5
(Size of a type class). Let P X Y P n ( X n × Y n ) be a joint type, such that E P X X 2 c X and E P Y Y 2 c Y . Then
H ( P X Y ) c 1 log b ( n + 1 ) n γ / 2 1 n log b | T ( P X Y ) | H ( P X Y ) ,
H ( P X ) c 2 log b ( n + 1 ) n 2 ( 1 α ) / 3 1 n log b | T ( P X ) | H ( P X ) ,
H ( P Y ) c 3 log b ( n + 1 ) n 2 ( 1 β ) / 3 1 n log b | T ( P Y ) | H ( P Y ) ,
where c 1 2 π ( c X + c Y + 1 / 6 ) , c 2 ( 12 c X + 1 ) 1 / 3 , and c 3 ( 12 c Y + 1 ) 1 / 3 .
Proof. 
Observe that the standard type-size bounds (see, e.g., [6] [Lemma 2.3], Ref. [1] [Equation (11.16)]) can be rewritten as
1 ( n + 1 ) | S ( P X Y ) | b n H ( P X Y ) | T ( P X Y ) | b n H ( P X Y ) .
Here | S ( P X Y ) | can be replaced with its upper bound of Lemma 3. This gives (44). The remaining bounds of (45) and (46) are obtained similarly using Lemma 4. □
Since it holds for any y T ( P Y ) that | T ( P X | Y | y ) | = | T ( P X Y ) | / | T ( P Y ) | , and similarly for x T ( P X ) , as a corollary of the previous lemma we also obtain
Lemma 6
(Size of a conditional type class). Let P X Y P n ( X n × Y n ) be a joint type, such that E P X X 2 c X and E P Y Y 2 c Y . Then for y T ( P Y ) and x T ( P X ) respectively
H ( X | Y ) c 1 log b ( n + 1 ) n γ / 2 1 n log b | T ( P X | Y | y ) | H ( X | Y ) + c 3 log b ( n + 1 ) n 2 ( 1 β ) / 3 ,
H ( Y | X ) c 1 log b ( n + 1 ) n γ / 2 1 n log b | T ( P Y | X | x ) | H ( Y | X ) + c 2 log b ( n + 1 ) n 2 ( 1 α ) / 3 ,
where c 1 , c 2 , and c 3 are defined as in Lemma 5.

5.3. Number of Types

Let P n X n , c X be the set of all the types P X P n ( X n ) satisfying the power constraint E P X X 2 c X . Then its cardinality can be upper-bounded as follows:
| P n X n , c X | ( a ) | P n X n ( c X ) | ( b ) ( n + 1 ) | X n ( c X ) | ( c ) ( n + 1 ) ( 2 c X + 1 ) n 1 / 2 + α ,
where (a) follows by the definition of X n ( c X ) preceding Lemma 2, (b) follows by [1] [Equation (11.6)], and (c) follows by Lemma 2. This bound is sub-exponential in n for α < 1 / 2 . This can be also further improved and made sub-exponential in n for all α ( 0 , 1 ) using Lemma 4, as follows.
Lemma 7
(Number of types).
| P n X n , c X | ( n + 1 ) c c ˜ n ( 1 + 2 α ) / 3 ,
where c ( 2 c X + 1 ) 1 / ( 3 / 2 + α ) and c ˜ ( 3 / 2 + α ) ( 12 c X + 1 ) 1 / 3 .
Proof. 
Denoting k | X n ( c X ) | and max P X P n ( X n , c X ) | S ( P X ) | , we can upper-bound as follows
| P n X n , c X | k ( n + 1 ) k ( n + 1 ) .
Substituting for k and their upper bounds of Lemma 2 (with n + 1 ) and Lemma 4, we obtain (51). □
Similarly, let P n X n × Y n , c X , c Y denote the set of all the joint types P X Y P n ( X n × Y n ) , such that E P X X 2 c X and E P Y Y 2 c Y . Then its cardinality can be bounded as follows.
Lemma 8
(Number of joint types).
| P n X n × Y n , c X , c Y | ( n + 1 ) c c ˜ n ( 1 + α + β ) / 2 ,
where c ( 2 c X + 1 ) ( 2 c Y + 1 ) 1 / ( 2 + α + β ) and c ˜ ( 2 + α + β ) 2 π ( c X + c Y + 1 / 6 ) .
Proof. 
Denoting k | X n ( c X ) | · | Y n ( c Y ) | and max P X Y P n ( X n × Y n , c X , c Y ) | S ( P X Y ) | , we repeat the steps of (52) and use the bounds of Lemma 2 and Lemma 3 to obtain (53). □

6. Converse Lemma

In this section we prove a converse Lemma 10, which is then used both for the error exponent in Section 7 and for the correct-decoding exponent in Section 8.
In order to determine exponents in channel probabilities, it is convenient to take hold of the exponent in the channel probability density. Let x = ( x 1 , x 2 , , x n ) R n be a vector of n channel inputs and let x q = ( x 1 q , x 2 q , , x n q ) X n n be its quantized version, with components
x k q = Q α ( x k ) Δ α , n · x k / Δ α , n + 1 / 2 , k = 1 , , n .
Similarly, let y = ( y 1 , y 2 , , y n ) R n be a vector of n channel outputs and let y q = ( y 1 q , y 2 q , , y n q ) Y n n be its quantized version, with y k q = Q β ( y k ) for all k = 1 , , n . Then we have the following
Lemma 9
(PDF exponent). Let x R n and y R n be two channel input and output vectors, with their respective quantized versions ( x q , y q ) T ( P X Y ) , such that E P X Y ( Y X ) 2 c X Y . Then
1 n log b w ( y q | x q ) + ( Δ α , n + Δ β , n ) c X Y + ( Δ α , n + Δ β , n ) 2 / 4 2 σ 2 ln b 1 n log b w ( y | x ) 1 n log b w ( y q | x q ) ( Δ α , n + Δ β , n ) c X Y 2 σ 2 ln b .
Proof. 
The exponent can be equivalently rewritten as
1 n log b w ( y | x ) log b ( σ 2 π ) + 1 2 σ 2 ln b · 1 n k = 1 n ( y k x k ) 2 .
Defining δ k ( y k x k ) ( y k q x k q ) , we observe that
1 n k = 1 n ( y k x k ) 2 = 1 n k = 1 n ( y k q x k q ) 2 + 2 n k = 1 n ( y k q x k q ) δ k + 1 n k = 1 n δ k 2 .
The second term on the RHS is bounded as:
| 2 n k = 1 n ( y k q x k q ) δ k | 2 n k = 1 n | y k q x k q | · | δ k | a ( Δ α , n + Δ β , n ) · 1 n k = 1 n | y k q x k q | b ( Δ α , n + Δ β , n ) 1 n k = 1 n ( y k q x k q ) 2 1 / 2 c ( Δ α , n + Δ β , n ) c X Y ,
where (a) follows because | δ k | ( Δ α , n + Δ β , n ) / 2 , (b) follows by Jensen’s inequality for the concave (∩) function f ( t ) = t , and (c) follows by the condition of the lemma. The third term is bounded as
1 n k = 1 n δ k 2 ( Δ α , n + Δ β , n ) 2 / 4 .
Since the exponent with the quantized versions 1 n log b w ( y q | x q ) , in turn, can also be rewritten similarly to (55), the result of the lemma follows by (55)–(58). □
The following lemma will be used both for the upper bound on the error exponent and for the lower bound on the correct-decoding exponent.
Lemma 10
(Conditional probability of correct decoding). Let P X Y P n ( X n × Y n ) be a joint type, such that E P X X 2 c X , E P Y Y 2 c Y , and E P X Y ( Y X ) 2 c X Y , and let C be a codebook, such that the quantized versions (54) of its codewords x ( m ) , m = 1 , 2 , , M ( n , R ) , are all of the type P X , that is:
x q ( m ) = Q α ( x ( m ) ) = Q α ( x 1 ( m ) ) , Q α ( x 2 ( m ) ) , , Q α ( x n ( m ) ) T ( P X ) , m .
Let J U n i f { 1 , 2 , , M } be a random variable, independent of the channel noise, and let x ( J ) Y be the random channel-input and channel-output vectors, respectively. Let Y q = Q β ( Y ) Y n n . Then
Pr g ( Y ) = J | x q ( J ) , Y q T ( P X Y ) b n R ˜ I ( P X Y ) + o ( 1 ) ,
where R ˜ = 1 n log b M ( n , R ) , and o ( 1 ) 0 , as n , depending only on α, β, c X , c Y , c X Y , and σ 2 .
Proof. 
First, from the single code ( C , g ) we create an ensemble of codes, where each member code has the same probability of error/correct-decoding as the original code ( C , g ) . Then we upper bound the ensemble average probability of correct decoding.
Considering the codebook C as an M × n matrix, we permute its n columns. This produces a set of codebooks: C , = 1 , , n ! . The quantized versions of all the codewords of each codebook C belong to the same type class T ( P X ) . In accordance with C , we permute also the n coordinates of each y D m of the decision regions D m in the definition (6) of the decoder g, obtaining open sets D m ( ) and creating in this way an ensemble of codes ( C , g ) , = 1 , , n ! .
Let x ( J ) Y denote the random channel-input and channel-output vectors, respectively, when using the code with an index { 1 , , n ! } . Let x q ( J ) and Y q denote their respective quantized versions. Since the additive channel noise is i.i.d., permutation of components does not change the distribution of the noise vector Y x ( J ) , and we obtain
Pr g ( Y ) = J , x q ( J ) , Y q T ( P X Y ) = Pr g ( Y ) = J , x q ( J ) , Y q T ( P X Y ) , ,
Pr x q ( J ) , Y q T ( P X Y ) = Pr x q ( J ) , Y q T ( P X Y ) , .
Suppose that one of the codes ( C , g ) , = 1 , , n ! , is used for communication with probability 1 / n ! , chosen independently of the sent message J and of the channel noise. Let L Unif { 1 , 2 , , n ! } be the random variable denoting the index of this code. Then, using (59) and (60) we obtain
Pr g ( Y ) = J | x q ( J ) , Y q T ( P X Y ) = Pr g L ( Y L ) = J | x L q ( J ) , Y L q T ( P X Y ) .
In what follows, we upper bound the RHS of (61) with an added condition that Y L q = y T ( P Y ) :
Pr g L ( Y L ) = J | x L q ( J ) T ( P X | Y | y ) , Y L q = y .
The total number of codes in the ensemble can be rewritten as
n ! = | T ( P X ) | x S ( P X ) n P X ( x ) ! | T ( P X ) | · Π ( P X ) .
Given y T ( P Y ) , the total number of all the codewords in the ensemble such that their quantized versions belong to the same conditional type class T ( P X | Y | y ) (counted as distinct if the codewords belong to different ensemble member codes or represent different messages) is given by
S = | T ( P X | Y | y ) | · Π ( P X ) for a message m · M .
Let N ( ) denote the number of the codewords in a codebook C such that their quantized versions belong to T ( P X | Y | y ) . Given that Y L q = y , the channel output vector Y L falls into a hypercube region of R n :
B { y ˜ R n : Q β ( y ˜ ) = y } .
For any x R n such that Q α ( x ) T ( P X | Y | y ) and any open region D R n , by Lemma 9 we obtain
b n E P X Y [ log b w ( Y | X ) ] + o 1 ( 1 ) · vol ( B D ) Pr Y L B D | x L ( J ) = x
b n E P X Y [ log b w ( Y | X ) ] + o 2 ( 1 ) · vol ( B D ) .
Then, since all the codes and messages are equiprobable, the conditional probability of the code with the index is upper-bounded as
Pr L = | x L q ( J ) T ( P X | Y | y ) , Y L q = y b n o 1 ( 1 ) o 2 ( 1 ) N ( ) / S .
For N ( ) > 0 , let m 1 < m 2 < < m N ( ) be the indices of all the codewords in the codebook C with their quantized versions in T ( P X | Y | y ) . Given that indeed the codebook C has been used for communication, similarly to (65), by (64) the conditional probability of correct decoding can be upper-bounded as
Pr g ( Y ) = J | x q ( J ) T ( P X | Y | y ) , Y q = y , L = j = 1 N ( ) vol B D m j ( ) b n · o ( 1 ) N ( ) vol ( B ) b n · o ( 1 ) N ( ) ,
where the second inequality follows because the decision regions D m j ( ) are disjoint. Summing up over all the codes, we finally obtain:
Pr g L ( Y L ) = J | x L q ( J ) T ( P X | Y | y ) , Y L q = y a 1 n ! : N ( ) > 0 N ( ) S · 1 N ( ) · b n · o ( 1 ) = 1 n ! : N ( ) > 0 1 S · b n · o ( 1 ) n ! S · b n · o ( 1 ) = b | T ( P X ) | | T ( P X | Y | y ) | M · b n · o ( 1 ) c b n R ˜ I ( P X Y ) + o ( 1 ) ,
where (a) follows by (65) and (66), (b) follows by (62) and (63), and (c) follows by (45) of Lemma 5 and (48) of Lemma 6. □
In the next two sections, we derive converse bounds on the error and correct-decoding exponents in terms of types.

7. Error Exponent

The end result of this section is given by Lemma 13 and represents a converse bound on the error exponent by the method of types.
Lemma 11
(Error exponent of mono-composition codebooks). Let P X P n ( X n ) be a type, such that E P X X 2 c X , and let C be a codebook, such that the quantized versions (54) of its codewords x ( m ) , m = 1 , 2 , , M ( n , R ) , are all of the type P X , that is:
x q ( m ) = Q α ( x ( m ) ) = Q α ( x 1 ( m ) ) , Q α ( x 2 ( m ) ) , , Q α ( x n ( m ) ) T ( P X ) , m .
Let J U n i f { 1 , 2 , , M } be a random variable, independent of the channel noise, and let x ( J ) Y be the random channel-input and channel-output vectors, respectively. Then for any parameter c X Y
1 n log b Pr g ( Y ) J min P Y | X : P X Y P n ( X n × Y n ) , E [ ( Y X ) 2 ] c X Y , I ( P X , P Y | X ) R ˜ o ( 1 ) D P Y | X W n | P X + o ( 1 ) ,
where R ˜ = 1 n log b M ( n , R ) , and o ( 1 ) 0 , as n , depending only on α, β, c X , c X Y , and σ 2 .
Proof. 
For a joint type P X Y P n ( X n × Y n ) with the marginal type P X , such that E P X Y ( Y X ) 2 c X Y , we have also
E P Y Y 2 E P X 1 / 2 X 2 + E P X Y 1 / 2 ( Y X ) 2 2 c X + c X Y 2 c Y .
Then with Y q = Q β ( Y ) Y n n for any 1 j M , we obtain
Pr x q ( J ) , Y q T ( P X Y ) | J = j a | T P Y | X | x q ( j ) | · Δ β , n n · b n E P X Y [ log b w ( Y | X ) ] + o ( 1 ) b b n D ( P Y | X W n | P X ) + o ( 1 ) , j ,
where (a) follows by Lemma 9, and (b) follows by (49) of Lemma 6 and (10). This gives
Pr x q ( J ) , Y q T ( P X Y ) b n D ( P Y | X W n | P X ) + o ( 1 ) .
Now we are ready to apply Lemma 10:
Pr g ( Y ) J Pr x q ( J ) , Y q T ( P X Y ) · Pr g ( Y ) J | x q ( J ) , Y q T ( P X Y ) a b n D ( P Y | X W n | P X ) + o ( 1 ) · 1 b n R ˜ I ( P X , P Y | X ) + o ( 1 ) b b n D ( P Y | X W n | P X ) + o ( 1 ) · 1 / 2 ,
where (a) follows by (69) and Lemma 10, and (b) holds for I ( P X , P Y | X ) R ˜ log b ( 2 ) / n + o ( 1 ) . □
Lemma 12
(Type constraint). For any ϵ > 0 there exists n 0 = n 0 ( α , s 2 , ϵ ) N , such that for any n > n 0 and any codeword x R n , satisfying the power constraint (5), the quantized version of that codeword, defined by (54), satisfies the power constraint (5) within ϵ, that is with s 2 replaced by s 2 + ϵ .
The proof is the same as (56)–(58).
Lemma 13
(Error exponent). Let J U n i f { 1 , 2 , , M } be a random variable, independent of the channel noise, and let x ( J ) Y be the random channel-input and channel-output vectors, respectively. Then for any c X Y and ϵ > 0 there exists n 0 = n 0 ( α , β , s 2 , σ 2 , c X Y , ϵ ) N , such that for any n > n 0
1 n log b Pr g ( Y ) J max P X | : P X P n ( X n ) , E [ X 2 ] s 2 + ϵ min P Y | X : P X Y P n ( X n × Y n ) , E [ ( Y X ) 2 ] c X Y , I ( P X , P Y | X ) R ϵ D P Y | X W n | P X + o ( 1 ) ,
where o ( 1 ) 0 , as n , depending only on the parameters α, β, s 2 + ϵ , c X Y , and σ 2 .
Proof. 
For a type P X P n ( X n ) let us define M ( P X ) | j : x q ( j ) T ( P X ) | . Then for any n greater than n 0 of Lemma 12, there exists at least one type P X such that
M ( P X ) M | P n X n , s 2 + ϵ | M ( n + 1 ) c c ˜ n ( 1 + 2 α ) / 3 ,
where the second inequality follows by Lemma 7 applied with c X = s 2 + ϵ . Then we can use such a type for a bound:
Pr g ( Y ) J = 1 M j = 1 M Pr Y D J | J = j 1 M 1 j M : x q ( j ) T ( P X ) Pr Y D J | J = j a b n · o ( 1 ) 1 M ( P X ) 1 j M : x q ( j ) T ( P X ) Pr Y D J | J = j
= b b n · o ( 1 ) Pr g ˜ ( Y ˜ ) J ˜ ,
where (a) follows by (71), and (b) holds for the random variable J ˜ Unif { 1 , 2 , , M ( P X ) } , independent of the channel noise, and the channel input/output random vectors x m J ˜ Y ˜ with the decoder
g ˜ ( y ) 0 , y j = 1 M ( P X ) D m j c , j , y D m j , j { 1 , 2 , , M ( P X ) } ,
where m 1 < < m M ( P X ) are the indices of the codewords in C with their quantized versions in T ( P X ) .
It follows now from (72) that the LHS of (70) can be upper-bounded by (67) of Lemma 11 with R ˜ = 1 n log b M ( P X ) . Substituting then (71) in place of M ( P X ) we obtain a stricter condition under the minimum of (67), leading to an upper bound with a condition I ( P X , P Y | X ) R o ( 1 ) and to (70). □

8. Correct-Decoding Exponent

The end result of this section is Lemma 17, which is a converse bound on the correct-decoding exponent by the method of types.
Lemma 14
(Joint type constraint). For any ϵ > 0 and σ ˜ 2 , there exists n 0 = n 0 ( α , β , σ ˜ 2 , ϵ ) N , such that for any n > n 0 and any pair of vectors x , y R n satisfying
1 n k = 1 n ( y k x k ) 2 σ ˜ 2 ,
the respective quantized versions x q = Q α ( x ) and y q = Q β ( y ) , defined as in (54), satisfy
1 n k = 1 n ( y k q x k q ) 2 σ ˜ 2 + ϵ .
The proof is the same as (56)–(58).
We use a Chernoff bound for the probability of an event when the method of types cannot be applied:
Lemma 15
(Chernoff bound). Let Z k N ( 0 , σ 2 ) , k = 1 , 2 , , n , be n independent random variables. Then for σ ˜ 2 σ 2 and f ( x ) 1 2 x 1 ln ( x ) :
Pr 1 n k = 1 n Z k 2 σ ˜ 2 exp n f ( σ ˜ 2 / σ 2 ) .
Lemma 16
(Correct-decoding exponent of mono-composition codebooks). Let P X P n ( X n ) be a type, such that E P X X 2 c X , and let C be a codebook, such that the quantized versions (54) of its codewords x ( m ) , m = 1 , 2 , , M ( n , R ) , are all of the type P X , that is:
x q ( m ) = Q α ( x ( m ) ) = Q α ( x 1 ( m ) ) , Q α ( x 2 ( m ) ) , , Q α ( x n ( m ) ) T ( P X ) , m .
Let J U n i f { 1 , 2 , , M } be a random variable, independent of the channel noise, and let x ( J ) Y be the random channel-input and channel-output vectors, respectively. Then for any ϵ > 0 and σ ˜ 2 σ 2 there exists n 0 = n 0 ( α , β , σ ˜ 2 , ϵ ) N , such that for any n > n 0
1 n log b Pr g ( Y ) = J min E n ( P X , R , σ ˜ , ϵ ) , E ( σ ˜ ) + o ( 1 ) ,
E ( σ ˜ ) 1 2 ln b σ ˜ 2 / σ 2 1 ln ( σ ˜ 2 / σ 2 ) ,
E n ( P X , R , σ ˜ , ϵ ) min P Y | X : P X Y P n ( X n × Y n ) , E [ ( Y X ) 2 ] σ ˜ 2 + ϵ D P Y | X W n | P X + | R I P X , P Y | X | + ,
where | t | + max { 0 , t } , and o ( 1 ) 0 , as n , depending only on α, β, c X , σ ˜ 2 + ϵ , and σ 2 .
Proof. 
We consider probabilities of two disjoint events:
Pr g ( Y ) = J Pr g ( Y ) = J , Y x ( J ) 2 n σ ˜ 2 + Pr Y x ( J ) 2 > n σ ˜ 2 2 max Pr g ( Y ) = J , Y x ( J ) 2 n σ ˜ 2 , Pr Y x ( J ) 2 n σ ˜ 2 .
For the first term in the maximum, we obtain:
Pr g ( Y ) = J , Y x ( J ) 2 n σ ˜ 2 a Pr g ( Y ) = J , Y q x q ( J ) 2 n ( σ ˜ 2 + ϵ ) = b P Y | X : P X Y P n ( X n × Y n ) , E [ ( Y X ) 2 ] σ ˜ 2 + ϵ Pr g ( Y ) = J , x q ( J ) , Y q T ( P X Y )
c | P n X n × Y n , c X , c Y | max P Y | X : P X Y P n ( X n × Y n ) , E [ ( Y X ) 2 ] σ ˜ 2 + ϵ Pr g ( Y ) = J , x q ( J ) , Y q T ( P X Y ) ,
where (a) holds with Y q = Q β ( Y ) Y n n for all n > n 0 ( α , β , σ ˜ 2 , ϵ ) of Lemma 14, (b) holds because x q ( J ) T ( P X ) , and in (c) we use the notation P n X n × Y n , c X , c Y for the set of all the joint types P X Y P n ( X n × Y n ) satisfying both E P X X 2 c X and E P Y Y 2 c X + σ ˜ 2 + ϵ 2 c Y . By the same steps as in (68), we further obtain
Pr x q ( J ) , Y q T ( P X Y ) b n D ( P Y | X W n | P X ) + o ( 1 ) ,
while Lemma 10 gives
Pr g ( Y ) = J | x q ( J ) , Y q T ( P X Y ) b n | R I ( P X Y ) | + + o ( 1 ) .
Now by Lemma 8 for the number of joint types and by (77)–(79), we obtain
Pr g ( Y ) = J , Y x ( J ) 2 n σ ˜ 2 b n E n ( P X , R , σ ˜ , ϵ ) + o ( 1 ) ,
where E n ( P X , R , σ ˜ , ϵ ) denotes the expression (75). Applying Lemma 15 to the second term in the maximum of (76) we obtain (73)–(75). □
Lemma 17
(Correct-decoding exponent). Let J U n i f { 1 , 2 , , M } be a random variable, independent of the channel noise, and let x ( J ) Y be the random channel-input and channel-output vectors, respectively. Then for any ϵ ˜ , ϵ > 0 and σ ˜ 2 σ 2 there exists n 0 = n 0 ( α , β , s 2 , σ ˜ 2 , ϵ ˜ , ϵ ) N , such that for any n > n 0
1 n log b Pr g ( Y ) = J min E n ( R , σ ˜ , ϵ ˜ , ϵ ) , E ( σ ˜ ) + o ( 1 ) ,
E n ( R , σ ˜ , ϵ ˜ , ϵ ) min P X | : P X P n ( X n ) , E [ X 2 ] s 2 + ϵ E n ( P X , R , σ ˜ , ϵ ˜ )
where E ( σ ˜ ) and E n ( P X , R , σ ˜ , ϵ ˜ ) are as defined in (74) and (75), respectively, and o ( 1 ) 0 , as n , depending only on the parameters α, β, s 2 + ϵ , σ ˜ 2 + ϵ ˜ , and σ 2 .
Proof. 
Similarly as in [7] [Lemma 5]:
Pr g ( Y ) = J = 1 M P X P n ( X n ) 1 j M : x q ( j ) T ( P X ) Pr Y D J | J = j
a | P n X n , s 2 + ϵ | · 1 M max P X P n ( X n ) 1 j M : x q ( j ) T ( P X ) Pr Y D J | J = j
b b n · o ( 1 ) 1 M 1 j M : x q ( j ) T ( P X ) Pr Y D J | J = j = c b n · o ( 1 ) Pr g ˜ ( Y ˜ ) = J ,
where:
(a) holds for n > n 0 ( α , s 2 , ϵ ) of Lemma 12;
(b) follows by Lemma 7 with c X = s 2 + ϵ , while P X P n ( X n ) is a maximizer of (83);
(c) holds for the channel input/output random vectors x ˜ ( J ) Y ˜ and a code ( C ˜ , g ˜ ) , such that x ˜ ( j ) = x ( m j ) with D ˜ j = D m j for 1 j M ( P X ) , and x ˜ q ( j ) T ( P X ) with D ˜ j = for M ( P X ) < j M , where m 1 < < m M ( P X ) are the indices of the codewords in the original codebook C with their quantized versions in T ( P X ) . Since all the codewords of C ˜ have their quantized versions in T ( P X ) , we can apply Lemma 16 with c X = s 2 + ϵ for the RHS of (83) to obtain (80) and (81). □

9. PDF to Type

Lemmas 19 and 20 of this section relate between minimums over types and over PDF’s. The next Lemma 18, which has a laborious proof, is required only in the proof of Lemma 19, used for Theorem 1.
Lemma 18
(Quantization of PDF). Let X n be an alphabet defined as in (9), (7) with α 0 , 1 4 . Let P X P n ( X n ) be a type and p Y | X ( · | x ) L , x X n , be a collection of functions from (12), such that E P X X 2 c X , E P X p Y | X Y 2 c Y , and E P X p Y | X ( Y X ) 2 c X Y . Then for any alphabet Y n defined as in (9), (7) with 1 3 + 2 3 α < β < 2 3 2 3 α , there exists a joint type P X Y P n ( X n × Y n ) with the marginal type P X , such that
x X n P X ( x ) R p Y | X ( y | x ) log b p Y | X ( y | x ) d y x X n y Y n P X Y ( x , y ) log b P Y | X ( y | x ) Δ β , n + o ( 1 ) ,
x X n P X ( x ) R p Y | X ( y | x ) ( y x ) 2 d y x X n y Y n P X Y ( x , y ) ( y x ) 2 + o ( 1 ) ,
R p Y ( y ) log b p Y ( y ) d y y Y n P Y ( y ) log b P Y ( y ) Δ β , n + o ( 1 ) ,
where p Y ( y ) = x X n P X ( x ) p Y | X ( y | x ) , y R , and o ( 1 ) 0 , as n , and depends only on the parameters α, β, c X , c Y , c X Y , and σ 2 (through (12)).
The proof is given in the Appendix B.
Lemma 19
(PDF to type). Let X n and Y n be two alphabets defined as in (9), (7) with α 0 , 1 4 and 1 3 + 2 3 α < β < 2 3 2 3 α . Then for any c X , c X Y , and ϵ > 0 there exists n 0 = n 0 ( α , β , c X , c X Y , σ 2 , ϵ ) N , such that for any n > n 0 and for any type P X P n ( X n ) with E P X X 2 c X :
inf p Y | X : p Y | X ( · | x ) L , x , E P X p Y | X [ ( Y X ) 2 ] c X Y , I ( P X , p Y | X ) R 2 ϵ D p Y | X w | P X min P Y | X : P X Y P n ( X n × Y n ) , E P X Y [ ( Y X ) 2 ] c X Y + ϵ , I ( P X , P Y | X ) R ϵ D P Y | X W n | P X + o ( 1 ) ,
where o ( 1 ) 0 , as n , and depends only on the parameters α, β, c X , c X Y , and σ 2 .
Proof. 
For a type P X with a collection of p Y | X such that E P X X 2 c X and E P X p Y | X ( Y X ) 2 c X Y , we can find also an upper bound c Y on E P X p Y | X Y 2 . For example, using the Cauchy–Schwarz inequality:
E P X p Y | X Y 2 E P X 1 / 2 X 2 + E P X p Y | X 1 / 2 ( Y X ) 2 2 c X + c X Y 2 c Y .
Then by Lemma 18 there exists a joint type P X Y with the marginal type P X , such that simultaneously the three inequalities (84)–(86) are satisfied and it also follows by (10) and (85) that
E P X p Y | X log b w ( Y | X ) E P X Y log b W n ( Y | X ) + log b Δ β , n + o ( 1 ) .
Then the sum of (84) and (88) gives
D p Y | X w | P X D P Y | X W n | P X + o ( 1 ) ,
while the difference of (84) and (86) gives
I P X , P Y | X I P X , p Y | X + o ( 1 ) .
Note that all o ( 1 ) in the above relations are independent of the joint type P X Y and the functions p Y | X . Therefore by (89), (90) and (85) we conclude, that given any ϵ > 0 for n sufficiently large for every type P X with the prerequisites of this lemma and every collection of p Y | X that satisfy the conditions under the infimum on the LHS of (87) there exists a joint type P X Y such that simultaneously
I P X , P Y | X I P X , p Y | X + ϵ , E P X Y ( Y X ) 2 E P X p Y | X ( Y X ) 2 + ϵ ,
and (89) holds with a uniform o ( 1 ) , i.e., independent of P X Y and p Y | X . It follows that such P X Y satisfies also the conditions under the minimum on the RHS of (87) and results in the objective function of (87) satisfying (89) with the uniform o ( 1 ) . Then the minimum itself, which can only possibly be taken over a greater variety of P X Y , satisfies the inequality (87). □
Lemma 20
(Type to PDF). For any c X Y and ϵ > 0 there exists n 0 = n 0 ( β , c X Y , ϵ ) N , such that for any n > n 0 and for any type P X P n ( X n ) :
min P Y | X : P X Y P n ( X n × Y n ) , E P X Y [ ( Y X ) 2 ] c X Y D P Y | X W n | P X + | R I P X , P Y | X | + inf p Y | X : p Y | X ( · | x ) F n , x , E P X p Y | X [ ( Y X ) 2 ] c X Y + ϵ D p Y | X w | P X + | R I P X , p Y | X | + + o ( 1 ) ,
where o ( 1 ) 0 , as n , and depends only on the parameters β and c X Y .
Proof. 
Observe first that any collection of conditional PDF’s p Y | X that satisfies the conditions under the infimum of (91) has finite differential entropies and well-defined quantities D p Y | X w | P X and I P X , p Y | X . For any conditional type P Y | X from the LHS of (91) we can define a set of histogram-like conditional PDF’s:
p Y | X ( y | x ) P Y | X ( j Δ β , n | x ) Δ β , n , y ( j 1 / 2 ) Δ β , n , ( j + 1 / 2 ) Δ β , n , j Z , x S ( P X ) ,
which are step functions of y R for each x S ( P X ) . Then I P X , p Y | X = I P X , P Y | X , and p Y | X ( · | x ) F n , as defined in (11). Analogously to (56)–(58), it can be obtained that
E P X p Y | X ( Y X ) 2 E P X Y ( Y X ) 2 + Δ β , n c X Y + Δ β , n 2 / 4 .
Then also D p Y | X w | P X D P Y | X W n | P X + o ( 1 ) . Then the lemma follows. □
We use Lemmas 19 and 20 in Section 4 in the derivation of Theorems 1 and 2, respectively.

Author Contributions

Writing—original draft, A.S.-B. and S.T. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Israel Science Foundation (ISF) grant #1579/23.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding authors.

Conflicts of Interest

The authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Abbreviations

The following abbreviations are used in this manuscript:
AWGNAdditive white Gaussian noise
PDFProbability density function
CFCharacteristic function
CDFCumulative distribution function
RHSRight-hand side
LHSLeft-hand side

Appendix A

Proof of Lemma 3.
Adding the two power constraints together, we successively obtain the following inequalities:
( x , y ) S ( P X Y ) P X Y ( x , y ) ( x 2 + y 2 ) c X + c Y , ( x , y ) S ( P X Y ) 1 n ( x 2 + y 2 ) c X + c Y , ( x , y ) S ( P X Y ) 1 | S ( P X Y ) | ( x 2 + y 2 ) n ( c X + c Y ) | S ( P X Y ) | , E U 2 n ( c X + c Y ) | S ( P X Y ) | ,
where U Discrete - Uniform S ( P X Y ) . To this let us add a continuously distributed random vector
D Continuous - Uniform [ Δ α , n / 2 , Δ α , n / 2 ) × [ Δ β , n / 2 , Δ β , n / 2 ) ,
independent of U . Then U ˜ = U + D Continuous - Uniform ( A ) , where
A ( x , y ) S ( P X Y ) x + [ Δ α , n / 2 , Δ α , n / 2 ) × y + [ Δ β , n / 2 , Δ β , n / 2 ) .
So we have
n ( c X + c Y ) | S ( P X Y ) | + Δ α , n 2 + Δ β , n 2 12 E U 2 + E D 2 = E U ˜ 2 = 1 vol ( A ) A u ˜ 2 d 2 u ˜ = 1 vol ( A ) A B u ˜ 2 d 2 u ˜ + A B c u ˜ 2 d 2 u ˜ a 1 vol ( A ) A B u ˜ 2 d 2 u ˜ + A c B t 2 d 2 t = 1 vol ( A ) B t 2 d 2 t ,
where for (a) we use the disk set B t : π t 2 vol ( A ) , centered around zero and of the same total area as A, and the resulting property
vol ( A B ) + vol ( A B c ) = vol ( A ) = vol ( B ) = vol ( A B ) + vol ( A c B ) , vol ( A B c ) = vol ( A c B ) , A B c u ˜ 2 d 2 u ˜ vol ( A ) π A B c d 2 u ˜ = vol ( A ) π A c B d 2 t A c B t 2 d 2 t .
Integrating on the RHS of (A1), we obtain
vol ( A ) 2 π = | S ( P X Y ) | · Δ α , n Δ β , n 2 π n ( c X + c Y ) | S ( P X Y ) | + Δ α , n 2 + Δ β , n 2 12 , | S ( P X Y ) | 2 · Δ α , n Δ β , n 2 π n c X + c Y + Δ α , n 2 + Δ β , n 2 12 · | S ( P X Y ) | n 1 c X + c Y + 1 / 6 , | S ( P X Y ) | 2 2 π ( c X + c Y + 1 / 6 ) · n Δ α , n 1 Δ β , n 1 = 2 π ( c X + c Y + 1 / 6 ) · n 1 + α + β .
Proof of Lemma 4.
Similarly as in Lemma 3, the power constraint gives the following succession of inequalities:
x S ( P X ) P X ( x ) x 2 c X , x S ( P X ) 1 n x 2 c X , x S ( P X ) 1 | S ( P X ) | x 2 n | S ( P X ) | c X , E U 2 n | S ( P X ) | c X ,
where U Discrete - Uniform S ( P X ) . To this let us add a continuously distributed random variable
D Continuous - Uniform [ Δ α , n / 2 , Δ α , n / 2 ) ,
independent of U. Then U ˜ = U + D Continuous - Uniform ( A ) , where
A x S ( P X ) x + [ Δ α , n / 2 , Δ α , n / 2 ) .
So we have
n | S ( P X ) | c X + Δ α , n 2 12 E U 2 + E D 2 = E U ˜ 2 = 1 vol ( A ) A u ˜ 2 d u ˜ = 1 vol ( A ) A B u ˜ 2 d u ˜ + A B c u ˜ 2 d u ˜ a 1 vol ( A ) A B u ˜ 2 d u ˜ + A c B t 2 d t = 1 vol ( A ) B t 2 d t ,
where for (a) we use the interval set B vol ( A ) / 2 , vol ( A ) / 2 , centered around zero and of the same total length as A with the resulting property that vol ( A B c ) = vol ( A c B ) , so that
A B c u ˜ 2 d u ˜ ( vol ( A ) ) 2 4 A B c d u ˜ = ( vol ( A ) ) 2 4 A c B d t A c B t 2 d t .
Integrating on the RHS of (A2), we obtain
( vol ( A ) ) 2 12 = ( | S ( P X ) | · Δ α , n ) 2 12 n | S ( P X ) | c X + Δ α , n 2 12 , | S ( P X ) | 3 12 c X n Δ α , n 2 + | S ( P X ) | = 12 c X n 1 + 2 α + | S ( P X ) | n ( 12 c X + 1 ) n 1 + 2 α .

Appendix B

Proof of Lemma 18.
Using P X and p Y | X , it is convenient to define a joint probability density function over R 2 as
p X Y ( x , y ) P X ( i Δ α , n ) Δ α , n p Y | X ( y | i Δ α , n ) , x ( i 1 / 2 ) Δ α , n , ( i + 1 / 2 ) Δ α , n , i Z , y R ,
which is changing only stepwise in x-direction. Note that p Y of (86) is the y-marginal of p X Y . This gives
E p X Y X 2 = E P X X 2 + Δ α , n 2 12 , E p X Y Y 2 = E P X p Y | X Y 2 .
We proceed in two stages. First we quantize p X Y ( x , y ) by rounding it down and check the effect of this on the LHS of (84)–(86). Then we complement the total probability back to 1, so that the type P X is conserved, and check the effect of this on the RHS of (84)–(86).
The quantization of p X Y ( x , y ) is done by first replacing it with its infimum in each rectangle
( i 1 / 2 ) Δ α , n , ( i + 1 / 2 ) Δ α , n × ( j 1 / 2 ) Δ β , n , ( j + 1 / 2 ) Δ β , n :
p X Y inf ( x , y ) inf ( j 1 / 2 ) Δ β , n y ˜ < ( j + 1 / 2 ) Δ β , n p X Y ( x , y ˜ ) , y ( j 1 / 2 ) Δ β , n , ( j + 1 / 2 ) Δ β , n , j Z , x R ,
and then quantizing this infimum down to the nearest value k Δ γ , n , k = 0 , 1 , 2 , :
p X Y q ( x , y ) p X Y inf ( x , y ) / Δ γ , n · Δ γ , n , ( x , y ) R 2 .
Due to (8), the integral of p X Y q over R 2 can be smaller than 1 only by an integer multiple of 1 / n . The resulting difference from p X Y ( x , y ) at each point ( x , y ) R 2 can be bounded by a sum of two terms as
$$ 0 \;\leq\; p_{XY}(x, y) - p_{XY}^{q}(x, y) \;\leq\; \underbrace{(K / \Delta_{\alpha,n}) \cdot \Delta_{\beta,n}}_{\text{minimization}} \,+\, \underbrace{\Delta_{\gamma,n}}_{\text{quantization}} \;=\; K\, n^{\alpha - \beta} + n^{\alpha + \beta - 1} \;\leq\; (K + 1)\, n^{-\delta} \;\triangleq\; h, $$
where $\delta \triangleq \min\{\beta,\, 1 - \beta\} - \alpha$ and $K$ is the parameter from (12).
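The following Python sketch illustrates the two-stage quantization inside a single $x$-cell; the Gaussian choice of $p_{Y|X}$, the parameter values, and the fine sub-grid used to approximate the infimum are assumptions made only for this illustration.

import numpy as np

n, alpha, beta = 1000, 0.1, 0.4                      # illustrative parameters
d_a, d_b = n ** -alpha, n ** -beta                   # Delta_{alpha,n}, Delta_{beta,n}
d_g = n ** (alpha + beta - 1)                        # Delta_{gamma,n}
P_x, sigma = 0.2, 1.0                                # P_X mass at this x-cell; Gaussian noise std

def p_xy(y):                                         # p_XY(x, .) inside one x-cell, as in (A3)
    return P_x / d_a * np.exp(-y ** 2 / (2 * sigma ** 2)) / np.sqrt(2 * np.pi * sigma ** 2)

y = np.linspace(-5, 5, 200_001)                      # fine sub-grid approximating the continuum
cell = np.floor(y / d_b + 0.5).astype(int)           # index of the y-cell containing each point
vals = p_xy(y)
inf_per_cell = {j: vals[cell == j].min() for j in np.unique(cell)}
p_inf = np.array([inf_per_cell[j] for j in cell])    # stage 1: infimum over each y-cell, as in (A5)
p_q = np.floor(p_inf / d_g) * d_g                    # stage 2: round down to a multiple of Delta_{gamma,n}, as in (A6)
err = vals - p_q
print(err.min() >= 0, err.max())                     # 0 <= p_XY - p_XY^q everywhere, and the gap stays small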
For (86) we will require the y-marginal of p X Y q from (A6), defined in the usual manner:
$$ p_Y^{q}(y) \;\triangleq\; \int_{\mathbb{R}} p_{XY}^{q}(x, y)\, dx \;\overset{(a)}{=}\; \sum_{x \in S(P_X)} p_{XY}^{q}(x, y)\, \Delta_{\alpha,n} \;\overset{(b)}{\geq}\; \sum_{x \in S(P_X)} \big( p_{XY}(x, y) - h \big)\, \Delta_{\alpha,n} \;=\; \sum_{x \in S(P_X)} p_{XY}(x, y)\, \Delta_{\alpha,n} \,-\, h \sum_{x \in S(P_X)} \Delta_{\alpha,n} \;=\; p_Y(y) \,-\, h\, |S(P_X)|\, \Delta_{\alpha,n}, \qquad y \in \mathbb{R}, $$
where the equality (a) follows from (A3), (A5) and (A6), and the inequality (b) follows by (A7). Then
$$ 0 \;\leq\; p_Y(y) - p_Y^{q}(y) \;\leq\; h\, |S(P_X)|\, \Delta_{\alpha,n} \;\leq\; (K + 1)\,(12\, c_X + 1)^{1/3}\, n^{-\delta_1} \;\triangleq\; \tilde{h}, $$
where the last inequality follows by Lemma 4, (A7), and (7), with $\delta_1 \triangleq \min\{\beta,\, 1 - \beta\} - (1 + 2\alpha)/3$. Note that the previously defined $\delta > \delta_1$, while $\delta_1 > 0$ if and only if $\alpha \in [0,\, 1/4)$ and $1/3 + 2\alpha/3 < \beta < 2/3 - 2\alpha/3$.
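For concreteness, one admissible pair of exponents (an illustrative choice, not prescribed by the proof) is checked below.

alpha, beta = 0.1, 0.5                                   # one admissible pair (illustrative)
delta = min(beta, 1 - beta) - alpha                      # exponent of h in (A7)
delta1 = min(beta, 1 - beta) - (1 + 2 * alpha) / 3       # exponent of h~ in (A9)
print(delta, delta1, delta > delta1 > 0)                 # the last check prints True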
The LHS of (84)
Now consider the LHS of (84). Note that each function p Y | X ( · | x ) in (84) is bounded and has a finite variance. It follows that it has a finite differential entropy. With (A3) we can rewrite the LHS of (84) as
$$ \sum_{x \in \mathcal{X}_n} P_X(x) \int_{\mathbb{R}} p_{Y|X}(y|x) \log_b p_{Y|X}(y|x)\, dy \;=\; \int_{\mathbb{R}^2} p_{XY}(x, y) \log_b p_{XY}(x, y)\, dx\, dy \,-\, \sum_{x \in \mathcal{X}_n} P_X(x) \log_b \frac{P_X(x)}{\Delta_{\alpha,n}}. $$
Let us examine the possible increase in (A10) when p X Y is replaced with p X Y q defined by (A5) and (A6). For this, let us define a set in R 2 with respect to the parameter h of (A7):
$$ A \;\triangleq\; \big\{ (x, y) :\, p_{XY}(x, y) > h \big\}, $$
which is a countable union of disjoint rectangles by the definition of p X Y in (A3). Then
$$ \int_{\mathbb{R}^2} p_{XY}^{q}(x, y) \log_b p_{XY}^{q}(x, y)\, dx\, dy \,-\, \int_{\mathbb{R}^2} p_{XY}(x, y) \log_b p_{XY}(x, y)\, dx\, dy \;=\; \bigg[ \int_{A^c} p_{XY}^{q}(x, y) \log_b p_{XY}^{q}(x, y)\, dx\, dy \,-\, \int_{A^c} p_{XY}(x, y) \log_b p_{XY}(x, y)\, dx\, dy \bigg] \,+\, \int_{A} \Big[ p_{XY}^{q}(x, y) \log_b p_{XY}^{q}(x, y) \,-\, p_{XY}(x, y) \log_b p_{XY}(x, y) \Big]\, dx\, dy. $$
Note that the minimum of the function $f(t) = t \log_b t$ occurs at $t = 1/e$. Then for $h \leq 1/e$ we have $p_{XY}(x, y) \log_b p_{XY}(x, y) \leq p_{XY}^{q}(x, y) \log_b p_{XY}^{q}(x, y) \leq 0$ for all $(x, y) \in A^c$, and the first of the two terms in (A12) is upper-bounded as
$$ \int_{A^c} p_{XY}^{q}(x, y) \log_b p_{XY}^{q}(x, y)\, dx\, dy \,-\, \int_{A^c} p_{XY}(x, y) \log_b p_{XY}(x, y)\, dx\, dy \;\overset{h \leq 1/e}{\leq}\; -\int_{A^c} p_{XY}(x, y) \log_b p_{XY}(x, y)\, dx\, dy \;\overset{(\ast)}{=}\; -\,p \int_{\mathbb{R}^2} p_{XY}^{c}(x, y) \log_b p_{XY}^{c}(x, y)\, dx\, dy \,-\, p \log_b p, $$
where the equality (∗) is appropriate for the case when the upper bound is positive, with the definitions:
$$ p \;\triangleq\; \int_{A^c} p_{XY}(x, y)\, dx\, dy, \qquad\quad p_{XY}^{c}(x, y) \;\triangleq\; \begin{cases} p_{XY}(x, y)/p, & (x, y) \in A^c, \\ 0, & \text{otherwise}. \end{cases} $$
Next we upper-bound the entropy of the probability density function p X Y c on the RHS of (A13) by that of a Gaussian PDF. By (A4) we have
$$ c_X + 1/12 + c_Y \;\geq\; \int_{A^c} p_{XY}(x, y)\,(x^2 + y^2)\, dx\, dy \;=\; p \int_{\mathbb{R}^2} p_{XY}^{c}(x, y)\,(x^2 + y^2)\, dx\, dy, $$
$$ (c_X + 1/12 + c_Y)/p \;\geq\; \int_{\mathbb{R}} p_X^{c}(x)\, x^2\, dx \,+\, \int_{\mathbb{R}} p_Y^{c}(y)\, y^2\, dy, $$
$$ \log_b \big( 2\pi e\, (c_X + 1/12 + c_Y)/p \big) \;\geq\; -\int_{\mathbb{R}^2} p_{XY}^{c}(x, y) \log_b p_{XY}^{c}(x, y)\, dx\, dy. $$
So we can rewrite the bound of (A13) in terms of p defined by (A14):
$$ \int_{A^c} p_{XY}^{q}(x, y) \log_b p_{XY}^{q}(x, y)\, dx\, dy \,-\, \int_{A^c} p_{XY}(x, y) \log_b p_{XY}(x, y)\, dx\, dy \;\overset{h \leq 1/e}{\leq}\; -2\, p \log_b p \,+\, p \log_b \big( 2\pi e\, (c_X + c_Y + 1/12) \big). $$
From (A11) and (A14) it is clear that $p \to 0$ as $h \to 0$. In order to relate the two, let us rewrite the inequality in (A15) again as
$$ c_X + 1/12 + c_Y \;\geq\; \int_{A^c} p_{XY}(x, y)\,(x^2 + y^2)\, dx\, dy $$
$$ =\; \int_{B_1} h\,(x^2 + y^2)\, dx\, dy \,+\, \int_{A^c \cap B_1^c} p_{XY}(x, y)\,(x^2 + y^2)\, dx\, dy \,-\, \int_{A \cap B_1} h\,(x^2 + y^2)\, dx\, dy \,-\, \int_{A^c \cap B_1} \big[ h - p_{XY}(x, y) \big]\,(x^2 + y^2)\, dx\, dy $$
$$ \geq\; \int_{B_1} h\,(x^2 + y^2)\, dx\, dy \,+\, \frac{p}{h\pi} \int_{A^c \cap B_1^c} p_{XY}(x, y)\, dx\, dy \,-\, \frac{p}{h\pi} \int_{A \cap B_1} h\, dx\, dy \,-\, \frac{p}{h\pi} \int_{A^c \cap B_1} \big[ h - p_{XY}(x, y) \big]\, dx\, dy $$
$$ =\; \int_{B_1} h\,(x^2 + y^2)\, dx\, dy \;=\; \frac{p^2}{2\pi h}, $$
where we use the disk set $B_1 \triangleq \big\{ (x, y) :\, h \pi (x^2 + y^2) \leq p \big\}$, centered around zero; the last equality holds because $\operatorname{vol}(B_1) = p/h$ and $\int_{A^c} p_{XY}\, dx\, dy = p$, so the three correction terms cancel. This results in the following upper bound on $p$ in terms of $h$:
$$ c_1 h^{1/2} \;\geq\; p, $$
where $c_1 \triangleq \sqrt{2\pi\,(c_X + c_Y + 1/12)}$. Substituting the LHS of (A17) in (A16) in place of $p$, we obtain the following upper bound on the first half of (A12) in terms of $h$ of (A7) and (A11):
$$ \int_{A^c} p_{XY}^{q}(x, y) \log_b p_{XY}^{q}(x, y)\, dx\, dy \,-\, \int_{A^c} p_{XY}(x, y) \log_b p_{XY}(x, y)\, dx\, dy \;\leq\; c_1 h^{1/2} \log_b (e/h), \qquad h \leq 1/(c_1 e)^2. $$
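As an aside, the bound of (A17) can be checked numerically in a simple special case (an illustrative assumption, not part of the proof): for a standard two-dimensional Gaussian density, the constant $2\pi(c_X + c_Y + 1/12)$ may be replaced by $2\pi\,\mathbb{E}[X^2 + Y^2] = 4\pi$, and the mass of the region where the density does not exceed $h$ has a closed form.

import numpy as np

h = 1e-3                                             # threshold (illustrative)
# standard 2-D Gaussian: density (1/2pi) exp(-r^2/2) <= h  iff  r^2 >= -2 ln(2 pi h)
r2 = -2 * np.log(2 * np.pi * h)
p = np.exp(-r2 / 2)                                  # Gaussian mass of the region where the density is <= h
c1 = np.sqrt(2 * np.pi * 2.0)                        # sqrt(2 pi E[X^2 + Y^2]) for this density
print(p, c1 * np.sqrt(h), p <= c1 * np.sqrt(h))      # the analogue of (A17) holds with a wide margin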
In the second term of (A12) for ( x , y ) A the integrand can be upper-bounded by Lemma A1 with its parameters t and t 1 such that
$$ t_1 = p_{XY}^{q}(x, y), \qquad p_{XY}(x, y) = t_1 + t, \qquad t \;\leq\; h \;\leq\; 1/e. $$
This gives
$$ \int_{A} \Big[ p_{XY}^{q}(x, y) \log_b p_{XY}^{q}(x, y) \,-\, p_{XY}(x, y) \log_b p_{XY}(x, y) \Big]\, dx\, dy \;\leq\; \operatorname{vol}(A)\, h \log_b (1/h), \qquad h \leq 1/e, $$
where vol ( A ) is the total area of A. To find an upper bound on vol ( A ) , we use (A4):
$$ c_X + 1/12 + c_Y \;\geq\; \int_{A} p_{XY}(x, y)\,(x^2 + y^2)\, dx\, dy \;\geq\; \int_{A} h \,(x^2 + y^2)\, dx\, dy \;=\; h \bigg[ \int_{A \cap B_2} (x^2 + y^2)\, dx\, dy + \int_{A \cap B_2^c} (x^2 + y^2)\, dx\, dy \bigg] \;\overset{(a)}{\geq}\; h \bigg[ \int_{A \cap B_2} (x^2 + y^2)\, dx\, dy + \int_{A^c \cap B_2} (x^2 + y^2)\, dx\, dy \bigg] \;=\; \int_{B_2} h \,(x^2 + y^2)\, dx\, dy \;=\; \frac{h}{2\pi} \big( \operatorname{vol}(A) \big)^2, $$
where in (a) we use the disk set $B_2 \triangleq \big\{ (x, y) :\, \pi (x^2 + y^2) \leq \operatorname{vol}(A) \big\}$, centered around zero, of the same total area as $A$, and the resulting property that $\operatorname{vol}(A^c \cap B_2) = \operatorname{vol}(A \cap B_2^c)$. So that
$$ c_1 h^{-1/2} \;\geq\; \operatorname{vol}(A). $$
Continuing (A19), we therefore obtain the following upper bound on the second term in (A12):
$$ \int_{A} \Big[ p_{XY}^{q}(x, y) \log_b p_{XY}^{q}(x, y) \,-\, p_{XY}(x, y) \log_b p_{XY}(x, y) \Big]\, dx\, dy \;\leq\; c_1 h^{1/2} \log_b (1/h), \qquad h \leq 1/e. $$
Putting (A12), (A18) and (A21) together:
$$ \int_{\mathbb{R}^2} p_{XY}^{q}(x, y) \log_b p_{XY}^{q}(x, y)\, dx\, dy \,-\, \int_{\mathbb{R}^2} p_{XY}(x, y) \log_b p_{XY}(x, y)\, dx\, dy \;\leq\; 2\, c_1 h^{1/2} \log_b (e/h), \qquad h \leq 1/(c_1 e)^2, $$
where $c_1$ and $h$ are as defined in (A17) and (A7), respectively. So if $\delta > 0$ in (A7), then the possible increase in (A10) caused by the substitution of $p_{XY}^{q}$ in place of $p_{XY}$ is at most $o(1)$.
Later on, for the RHS of (84)–(86), we will also require the loss in total probability incurred by the replacement of $p_{XY}$ with $p_{XY}^{q}$. This loss is strictly positive and tends to zero with $h$ of (A7):
$$ 0 \;<\; p_1 \;\triangleq\; \int_{\mathbb{R}^2} p_{XY}(x, y)\, dx\, dy \,-\, \int_{\mathbb{R}^2} p_{XY}^{q}(x, y)\, dx\, dy \;=\; 1 - \int_{\mathbb{R}^2} p_{XY}^{q}(x, y)\, dx\, dy \;\overset{(a)}{\leq}\; \underbrace{\int_{A^c} p_{XY}(x, y)\, dx\, dy}_{=\,p} \,+\, \int_{A} \big[ \underbrace{p_{XY}(x, y) - p_{XY}^{q}(x, y)}_{\leq\, h} \big]\, dx\, dy \;\overset{(b)}{\leq}\; p + h \cdot \operatorname{vol}(A) \;\overset{(c)}{\leq}\; c_1 h^{1/2} + c_1 h^{1/2} \;=\; 2\, c_1 h^{1/2}, $$
where the set A in (a) is defined in (A11), (b) follows by (A14) and (A7), and (c) follows by (A17) and (A20).
The LHS of (86)
Consider next the LHS of (86). Since $p_Y$ is bounded and has a finite variance, its differential entropy is finite. Let us examine the possible decrease in the LHS of (86) when $p_Y$ is replaced with $p_Y^{q}$ defined in (A8). For this, let us define a set in $\mathbb{R}$ with respect to the parameter $\tilde{h}$ of (A9):
$$ \tilde{A} \;\triangleq\; \big\{ y :\, p_Y(y) > \tilde{h} \big\}, $$
which is a countable union of disjoint open intervals. Then
$$ \int_{\mathbb{R}} p_Y(y) \log_b p_Y(y)\, dy \,-\, \int_{\mathbb{R}} p_Y^{q}(y) \log_b p_Y^{q}(y)\, dy \;=\; \int_{\tilde{A}^c} \Big[ p_Y(y) \log_b p_Y(y) \,-\, p_Y^{q}(y) \log_b p_Y^{q}(y) \Big]\, dy \,+\, \int_{\tilde{A}} \Big[ p_Y(y) \log_b p_Y(y) \,-\, p_Y^{q}(y) \log_b p_Y^{q}(y) \Big]\, dy. $$
For $\tilde{h} \leq 1/e$ we have $p_Y(y) \log_b p_Y(y) \leq p_Y^{q}(y) \log_b p_Y^{q}(y) \leq 0$ for all $y \in \tilde{A}^c$ and the first of the two terms in (A25) is non-positive:
$$ \int_{\tilde{A}^c} \Big[ p_Y(y) \log_b p_Y(y) \,-\, p_Y^{q}(y) \log_b p_Y^{q}(y) \Big]\, dy \;\leq\; 0. $$
In the second term of (A25) for y A ˜ , the integrand can be upper-bounded by Lemma A1 with its parameters t and t 1 such that
$$ t_1 = p_Y^{q}(y), \qquad t_1 + t = p_Y(y) \;\leq\; \sup_{y \in \mathbb{R}} p_Y(y) \;\leq\; K, \qquad t \;\leq\; \tilde{h} \;\leq\; 1/e, $$
where K is the parameter from (12). This gives
$$ \int_{\tilde{A}} \Big[ p_Y(y) \log_b p_Y(y) \,-\, p_Y^{q}(y) \log_b p_Y^{q}(y) \Big]\, dy \;\overset{\tilde{h} \leq 1/e}{\leq}\; \operatorname{vol}(\tilde{A})\, \tilde{h} \cdot \max\big\{ \log_b (1/\tilde{h}),\; \log_b (e K) \big\}, $$
where vol ( A ˜ ) is the total length of A ˜ . It remains to find an upper bound on vol ( A ˜ ) . We use (A4):
$$ c_Y \;\geq\; \int_{\tilde{A}} p_Y(y)\, y^2\, dy \;\geq\; \int_{\tilde{A}} \tilde{h}\, y^2\, dy \;=\; \tilde{h} \bigg[ \int_{\tilde{A} \cap \tilde{B}_2} y^2\, dy + \int_{\tilde{A} \cap \tilde{B}_2^c} y^2\, dy \bigg] \;\overset{(a)}{\geq}\; \tilde{h} \bigg[ \int_{\tilde{A} \cap \tilde{B}_2} y^2\, dy + \int_{\tilde{A}^c \cap \tilde{B}_2} y^2\, dy \bigg] \;=\; \int_{\tilde{B}_2} \tilde{h}\, y^2\, dy \;=\; \frac{\tilde{h}}{12} \big( \operatorname{vol}(\tilde{A}) \big)^3, $$
where in (a) we use the interval set $\tilde{B}_2 \triangleq \big[-\operatorname{vol}(\tilde{A})/2,\, \operatorname{vol}(\tilde{A})/2\big]$, centered around zero, and of the same total length as $\tilde{A}$, with the resulting property that $\operatorname{vol}(\tilde{A}^c \cap \tilde{B}_2) = \operatorname{vol}(\tilde{A} \cap \tilde{B}_2^c)$. So that
$$ \tilde{c}_1 \tilde{h}^{-1/3} \;\geq\; \operatorname{vol}(\tilde{A}), $$
where $\tilde{c}_1 \triangleq (12\, c_Y)^{1/3}$. Continuing (A27), with (A28) we obtain the following upper bound on the second term in (A25), which is by (A26) also an upper bound on both terms of (A25):
$$ \int_{\mathbb{R}} p_Y(y) \log_b p_Y(y)\, dy \,-\, \int_{\mathbb{R}} p_Y^{q}(y) \log_b p_Y^{q}(y)\, dy \;\overset{\tilde{h} \leq 1/e}{\leq}\; \tilde{c}_1 \tilde{h}^{2/3} \max\big\{ \log_b (1/\tilde{h}),\; \log_b (e K) \big\}. $$
So if δ 1 > 0 in (A9), then the possible decrease caused by substitution of p Y q in place of p Y on the LHS of (86) is at most o ( 1 ) .
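The length bound on the superlevel set $\tilde{A}$ (the analogue of (A28)) can be checked numerically in a simple special case: below, $p_Y$ is taken to be a standard Gaussian density with $c_Y = 1$ (an illustrative assumption), for which the superlevel set is an interval of closed-form length.

import numpy as np

h_t = 0.01                                           # threshold h~ (illustrative)
c_Y = 1.0                                            # second moment of the standard Gaussian p_Y
y0 = np.sqrt(-2 * np.log(h_t * np.sqrt(2 * np.pi)))  # p_Y(y) > h~  iff  |y| < y0
vol_A = 2 * y0                                       # total length of the superlevel set
bound = (12 * c_Y / h_t) ** (1 / 3)                  # the analogue of (A28)
print(vol_A, bound, vol_A <= bound)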
The LHS of (85)
Let us define two functions of y R :
$$ Q_\beta(y) \;\triangleq\; \Delta_{\beta,n} \cdot \big\lfloor y / \Delta_{\beta,n} + 1/2 \big\rfloor, \qquad r_\beta(y) \;\triangleq\; y - Q_\beta(y). $$
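In code, $Q_\beta$ and $r_\beta$ are one-liners; the quick check below (with an arbitrary illustrative grid step) confirms the property $|r_\beta(y)| \leq \Delta_{\beta,n}/2$ used repeatedly in what follows.

import numpy as np

d_b = 0.05                                           # Delta_{beta,n} (illustrative)
Q_beta = lambda y: d_b * np.floor(y / d_b + 0.5)     # nearest point of the grid Delta_{beta,n} * Z
r_beta = lambda y: y - Q_beta(y)                     # remainder, bounded by d_b / 2 in magnitude
y = np.random.default_rng(1).normal(size=100_000)
print(np.abs(r_beta(y)).max() <= d_b / 2)            # True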
Then with p X Y q defined in (A6), we can obtain a lower bound for the expression on the LHS of (85):
$$ \Delta_{\alpha,n} \Delta_{\beta,n} \sum_{x \in \mathcal{X}_n} \sum_{y \in \mathcal{Y}_n} p_{XY}^{q}(x, y)\, (y - x)^2 \;=\; \Delta_{\alpha,n} \sum_{x \in \mathcal{X}_n} \int_{\mathbb{R}} p_{XY}^{q}(x, y)\, \big( y - x - r_\beta(y) \big)^2\, dy $$
$$ \overset{(a)}{\leq}\; \Delta_{\alpha,n} \sum_{x \in \mathcal{X}_n} \int_{\mathbb{R}} p_{XY}(x, y)\, (y - x)^2\, dy \,+\, \Delta_{\alpha,n} \Delta_{\beta,n}^2 \sum_{x \in \mathcal{X}_n} \int_{\mathbb{R}} p_{XY}(x, y)\, dy \,+\, \Delta_{\alpha,n} \Delta_{\beta,n} \sum_{x \in \mathcal{X}_n} \int_{\mathbb{R}} p_{XY}(x, y)\, |y - x|\, dy $$
$$ \overset{(b)}{\leq}\; \sum_{x \in \mathcal{X}_n} P_X(x) \int_{\mathbb{R}} p_{Y|X}(y|x)\, (y - x)^2\, dy \,+\, \Delta_{\beta,n}^2 \,+\, \Delta_{\beta,n} \bigg[ \sum_{x \in \mathcal{X}_n} P_X(x) \int_{\mathbb{R}} p_{Y|X}(y|x)\, (y - x)^2\, dy \bigg]^{1/2} $$
$$ \overset{(c)}{\leq}\; \sum_{x \in \mathcal{X}_n} P_X(x) \int_{\mathbb{R}} p_{Y|X}(y|x)\, (y - x)^2\, dy \,+\, \underbrace{\Delta_{\beta,n}^2 + \Delta_{\beta,n} \sqrt{c_{XY}}}_{o(1)}, $$
where (a) follows because $p_{XY}^{q}(x, y) \leq p_{XY}(x, y)$ and $|r_\beta(y)| \leq \Delta_{\beta,n}/2$, (b) follows by (A3) and Jensen's inequality for the concave ($\cap$) function $f(t) = \sqrt{t}$, and (c) follows by the condition of the lemma.
Joint type P X Y
Let us define two mutually complementary probability masses for each ( x , y ) X n × Y n :
$$ P_{XY}^{q}(x, y) \;\triangleq\; p_{XY}^{q}(x, y)\, \Delta_{\alpha,n} \Delta_{\beta,n}, $$
$$ P_{XY}^{a}(x, y) \;\triangleq\; \begin{cases} P_X(x) - \sum_{\tilde{y} \in \mathcal{Y}_n} P_{XY}^{q}(x, \tilde{y}), & y = Q_\beta(x), \\ 0, & \text{otherwise}, \end{cases} $$
where $Q_\beta(\cdot)$ is defined in (A30). It follows from (A6) and (8) that each number $P_{XY}^{q}(x, y)$ is an integer multiple of $1/n$ and $\sum_{y \in \mathcal{Y}_n} P_{XY}^{q}(x, y) \leq P_X(x)$ for each $x \in \mathcal{X}_n$. Then a joint type can be formed with the two definitions above:
$$ P_{XY}(x, y) \;\triangleq\; P_{XY}^{q}(x, y) + P_{XY}^{a}(x, y), \qquad (x, y) \in \mathcal{X}_n \times \mathcal{Y}_n, $$
such that $P_{XY} \in \mathcal{P}_n(\mathcal{X}_n \times \mathcal{Y}_n)$ and $\sum_{y \in \mathcal{Y}_n} P_{XY}(x, y) = P_X(x)$ for each $x \in \mathcal{X}_n$.
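A Python sketch of this construction on a toy discretized density is given below: the quantized masses are integer multiples of $1/n$, the residual mass of each row is placed at $y = Q_\beta(x)$, and the $X$-marginal of the result equals $P_X$. The toy type, the Gaussian conditional density, and all parameter values are assumptions made only for this illustration.

import numpy as np

n, alpha, beta = 400, 0.1, 0.4                           # illustrative parameters
d_a, d_b = n ** -alpha, n ** -beta                       # grid steps
x_idx = np.arange(-3, 4)
x_grid = x_idx * d_a                                     # support of a toy type P_X
P_X = np.array([10, 40, 80, 140, 80, 40, 10]) / n        # masses: integer multiples of 1/n summing to 1
y_idx = np.arange(-60, 61)
y_grid = y_idx * d_b
# toy conditional density values on the y-grid, scaled so that each row integrates to at most 1
w = np.exp(-(y_grid[None, :] - x_grid[:, None]) ** 2 / 2) / np.sqrt(2 * np.pi)
w /= np.maximum(1.0, (w * d_b).sum(axis=1, keepdims=True))
p_xy = P_X[:, None] / d_a * w                            # stand-in for p_XY on the grid

P_q = np.floor(p_xy * d_a * d_b * n) / n                 # quantized masses, integer multiples of 1/n (cf. (A32))
P_a = np.zeros_like(P_q)                                 # residual masses on y = Q_beta(x) (cf. (A33))
for i, x in enumerate(x_grid):
    j = int(np.floor(x / d_b + 0.5)) - y_idx[0]          # column index of Q_beta(x)
    P_a[i, j] = P_X[i] - P_q[i].sum()
P_XY = P_q + P_a                                         # the joint type (cf. (A34))

assert np.all(P_XY >= 0) and np.isclose(P_XY.sum(), 1.0)
assert np.allclose(P_XY.sum(axis=1), P_X)                # the X-marginal is preserved
assert np.allclose(np.round(P_XY * n), P_XY * n)         # all masses are integer multiples of 1/n
print("joint type constructed:", P_XY.shape)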
The RHS of (85)
Having defined P X Y and P X Y q , let us examine the possible decrease in the expression found on the RHS of (85) when P X Y inside that expression is replaced with P X Y q :
$$ \sum_{x \in \mathcal{X}_n} \sum_{y \in \mathcal{Y}_n} P_{XY}(x, y)\, (y - x)^2 \,-\, \sum_{x \in \mathcal{X}_n} \sum_{y \in \mathcal{Y}_n} P_{XY}^{q}(x, y)\, (y - x)^2 \;\overset{(a)}{=}\; \sum_{x \in \mathcal{X}_n} \sum_{y \in \mathcal{Y}_n} P_{XY}^{a}(x, y)\, (y - x)^2 \;\overset{(b)}{=}\; \sum_{x \in \mathcal{X}_n} \sum_{y \in \mathcal{Y}_n} P_{XY}^{a}(x, y)\, r_\beta^2(x) \;\overset{(c)}{\leq}\; p_1\, \Delta_{\beta,n}^2 / 4 \;\overset{(d)}{\leq}\; \underbrace{c_1 h^{1/2}\, \Delta_{\beta,n}^2 / 2}_{o(1)}, $$
where (a) follows by (A34), (b) follows according to the definitions (A30) and (A33), (c) follows because $|r_\beta(x)| \leq \Delta_{\beta,n}/2$ and because
$$ \sum_{x \in \mathcal{X}_n} \sum_{y \in \mathcal{Y}_n} P_{XY}^{a}(x, y) \;\overset{\text{(A34)}}{=}\; 1 - \sum_{x \in \mathcal{X}_n} \sum_{y \in \mathcal{Y}_n} P_{XY}^{q}(x, y) \;\overset{\text{(A32)}}{=}\; 1 - \Delta_{\alpha,n} \Delta_{\beta,n} \sum_{x \in \mathcal{X}_n} \sum_{y \in \mathcal{Y}_n} p_{XY}^{q}(x, y) \;\overset{\text{(A6)}}{=}\; 1 - \int_{\mathbb{R}^2} p_{XY}^{q}(x, y)\, dx\, dy \;\overset{\text{(A23)}}{=}\; p_1, $$
and (d) follows by the upper bound on $p_1$ of (A23). Since by definition (A32) we also have
$$ \sum_{x \in \mathcal{X}_n} \sum_{y \in \mathcal{Y}_n} P_{XY}^{q}(x, y)\, (y - x)^2 \;=\; \Delta_{\alpha,n} \Delta_{\beta,n} \sum_{x \in \mathcal{X}_n} \sum_{y \in \mathcal{Y}_n} p_{XY}^{q}(x, y)\, (y - x)^2, $$
which is exactly the first expression in (A31), combining (A31) and (A35) we obtain (85). The remainder of the proof, for (84) and (86), will follow by Lemma A1 applied to the corresponding discrete entropy expressions with probability masses.
The RHS of (84)
In order to upper-bound the expression on the RHS of (84), it is convenient to write:
$$ \sum_{x \in \mathcal{X}_n} \sum_{y \in \mathcal{Y}_n} P_{XY}(x, y) \log_b \frac{P_{XY}(x, y)}{\Delta_{\alpha,n} \Delta_{\beta,n}} \,-\, \sum_{x \in \mathcal{X}_n} \sum_{y \in \mathcal{Y}_n} P_{XY}^{q}(x, y) \log_b \frac{P_{XY}^{q}(x, y)}{\Delta_{\alpha,n} \Delta_{\beta,n}} $$
$$ \overset{(a)}{=}\; \sum_{x \in \mathcal{X}_n} \sum_{y \in \mathcal{Y}_n} \Big[ P_{XY}(x, y) \log_b P_{XY}(x, y) \,-\, P_{XY}^{q}(x, y) \log_b P_{XY}^{q}(x, y) \Big] \,+\, \sum_{x \in \mathcal{X}_n} \sum_{y \in \mathcal{Y}_n} P_{XY}^{a}(x, y) \log_b \frac{1}{\Delta_{\alpha,n} \Delta_{\beta,n}} $$
$$ \overset{(b)}{\leq}\; \sum_{x \in \mathcal{X}_n} \sum_{y \in \mathcal{Y}_n} \max\Big\{ -P_{XY}^{a}(x, y) \log_b P_{XY}^{a}(x, y),\; P_{XY}^{a}(x, y) \log_b e \Big\} \,+\, (1 - \gamma)\, p_1 \log_b n $$
$$ \overset{(c)}{\leq}\; \sum_{x \in \mathcal{X}_n} \sum_{y \in \mathcal{Y}_n} P_{XY}^{a}(x, y) \log_b n \,+\, (1 - \gamma)\, p_1 \log_b n \;\overset{(d)}{=}\; (2 - \gamma)\, p_1 \log_b n \;\overset{(e)}{\leq}\; \underbrace{4\, c_1 h^{1/2} \log_b n}_{o(1)}, $$
where (a) follows by (A34); for (b), in the first term we apply the upper bound of Lemma A1 with its parameters $t_1 = P_{XY}^{q}(x, y)$ and $t_1 + t = P_{XY}(x, y) \leq 1$, with $t = P_{XY}^{a}(x, y) \leq p_1$ by (A34) and (A36), for $n$ sufficiently large such that $p_1 \leq 1/e$, while in the second term we use the equality (A36) and (7); (c) follows for $n > 2$ since $P_{XY}^{a}(x, y) \geq 1/n$ when positive; (d) and (e) follow, respectively, by the equality (A36) and the inequality (A23). Now since
$$ \sum_{x \in \mathcal{X}_n} \sum_{y \in \mathcal{Y}_n} P_{XY}^{q}(x, y) \log_b \frac{P_{XY}^{q}(x, y)}{\Delta_{\alpha,n} \Delta_{\beta,n}} \;=\; \int_{\mathbb{R}^2} p_{XY}^{q}(x, y) \log_b p_{XY}^{q}(x, y)\, dx\, dy, $$
the inequality in (84) follows by comparing (A10), (A22), and (A37).
The RHS of (86)
With $P_Y^{q}(y) \triangleq \sum_{x \in \mathcal{X}_n} P_{XY}^{q}(x, y)$ and $P_Y^{a}(y) \triangleq \sum_{x \in \mathcal{X}_n} P_{XY}^{a}(x, y)$ we have
$$ \sum_{y \in \mathcal{Y}_n} P_Y(y) \log_b \frac{P_Y(y)}{\Delta_{\beta,n}} \,-\, \sum_{y \in \mathcal{Y}_n} P_Y^{q}(y) \log_b \frac{P_Y^{q}(y)}{\Delta_{\beta,n}} \;\overset{(a)}{=}\; \sum_{y \in \mathcal{Y}_n} \Big[ P_Y(y) \log_b P_Y(y) \,-\, P_Y^{q}(y) \log_b P_Y^{q}(y) \Big] \,+\, \sum_{y \in \mathcal{Y}_n} P_Y^{a}(y) \log_b \frac{1}{\Delta_{\beta,n}} $$
$$ \overset{(b)}{\geq}\; \sum_{y \in \mathcal{Y}_n} P_Y^{a}(y) \log_b P_Y^{a}(y) \,+\, \beta\, p_1 \log_b n \;\overset{(c)}{\geq}\; \sum_{y \in \mathcal{Y}_n} P_Y^{a}(y) \log_b (1/n) \,+\, \beta\, p_1 \log_b n \;\overset{(d)}{=}\; -(1 - \beta)\, p_1 \log_b n \;\overset{(e)}{\geq}\; -\underbrace{2\, c_1 h^{1/2} \log_b n}_{o(1)}, $$
where (a) follows by (A34); for (b), in the first term we apply the lower bound of Lemma A1 with its parameters $t_1 = P_Y^{q}(y)$ and $t_1 + t = P_Y(y)$, with $t = P_Y^{a}(y) \leq p_1$ by (A34) and (A36), for $n$ sufficiently large such that $p_1 \leq 1/e$, while in the second term we use the equality (A36) and (7); (c) follows because $P_Y^{a}(y) \geq 1/n$ whenever positive; (d) and (e) follow, respectively, by the equality (A36) and the inequality (A23). From (A8) and (A32) we observe that $P_Y^{q}(y) = p_Y^{q}(y)\, \Delta_{\beta,n}$. Since the function $p_Y^{q}(y)$ is piecewise constant on $\mathbb{R}$ by the definition of $p_{XY}^{q}$, it follows that
$$ \sum_{y \in \mathcal{Y}_n} P_Y^{q}(y) \log_b \frac{P_Y^{q}(y)}{\Delta_{\beta,n}} \;=\; \int_{\mathbb{R}} p_Y^{q}(y) \log_b p_Y^{q}(y)\, dy. $$
Then the inequality (86) follows by comparing (A29) and (A38). This concludes the proof of Lemma 18. □
Lemma A1.
Let $f(x) = x \ln x$; then for $0 < t \leq 1/e$ and $t_1 > 0$,
$$ t \ln t \;\leq\; f(t_1 + t) - f(t_1) \;\leq\; t \ln \max\big\{ 1/t,\; (t_1 + t)\, e \big\}. $$
Proof. 
For $t \leq 1/e$, the magnitude of the derivative of $f(x)$ in the interval $(0, t)$ is monotonically decreasing and its average is $-\ln t$, while in the adjacent interval $[t,\, 1/(e t)]$ the magnitude of the derivative is upper-bounded by $-\ln t$: $\;|f'(x)| \leq -\ln t$ for $t \leq x \leq 1/(e t)$.
Then there are three possible cases:
(1) $t_1 < t < t_1 + t < 1/(e t)$:
$$ |f(t_1 + t) - f(t_1)| \;\leq\; |f(t_1 + t) - f(t)| + |f(t_1) - f(t)| \;=\; |f(t_1 + t) - f(t)| + t_1 \ln t_1 - t \ln t \;\leq\; -t_1 \ln t + t_1 \ln t_1 - t \ln t \;\leq\; -t \ln t. $$
(2) $t \leq t_1 < t_1 + t \leq 1/(e t)$:
$$ |f(t_1 + t) - f(t_1)| \;\leq\; -t \ln t. $$
(3) $1/(e t) < t_1 + t$:
$$ 0 \;\leq\; f(t_1 + t) - f(t_1) \;\leq\; f'(t_1 + t)\, t \;=\; \big[ \ln(t_1 + t) + 1 \big]\, t. \qquad \square $$
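A quick numerical verification of Lemma A1 over random pairs $(t, t_1)$, with illustrative ranges chosen to keep the floating-point comparison well-conditioned:

import numpy as np

rng = np.random.default_rng(0)
f = lambda x: x * np.log(x)                                  # f(x) = x ln x
t = rng.uniform(1e-3, 1 / np.e, size=1_000_000)              # 0 < t <= 1/e
t1 = rng.uniform(1e-3, 5.0, size=1_000_000)                  # t1 > 0
diff = f(t1 + t) - f(t1)
lower = t * np.log(t)
upper = t * np.log(np.maximum(1 / t, (t1 + t) * np.e))
print(np.all(lower <= diff), np.all(diff <= upper))          # both True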
