Abstract
For the discrete-time AWGN channel with a power constraint, we give an alternative derivation for the sphere-packing upper bound on the optimal block error exponent and an alternative derivation for the analogous lower bound on the optimal correct-decoding exponent. The derivations use the method of types, with finite alphabets whose sizes depend on the block length n and with a number of types that is sub-exponential in n.
1. Introduction
We study the reliability of the discrete-time additive white Gaussian noise (AWGN) channel with a power constraint imposed on blocks of its inputs. Consider the capacity of this channel, found by Shannon:

$$C \;=\; \frac{1}{2}\log\Big(1 + \frac{P}{\sigma^2}\Big), \qquad (1)$$

where $\sigma^2$ is the channel noise variance and $P$ is the power constraint. This capacity corresponds to the maximum of the mutual information $I(X;Y)$ over $p_X$, under the power constraint $E[X^2]\le P$, where $w$ stands for the channel transition probability density function (PDF) and $p_X$ is the channel input PDF. Let us briefly recall the technicalities [1] of how expression (1) is obtained from the mutual information:

$$\max_{p_X:\,E[X^2]\le P} I(X;Y) \;=\; \max_{p_X:\,E[X^2]\le P}\left\{ E\left[\log\frac{w(Y\,|\,X)}{q^*(Y)}\right] \,-\, D\big(p_Y \,\|\, q^*\big)\right\}. \qquad (2)$$

Here $q^*$ denotes the Gaussian PDF with zero mean and variance $P+\sigma^2$, the operator $E$ denotes the expectation, and D is the Kullback–Leibler divergence between two probability densities. The maximum in (2) is attained by the Gaussian PDF with zero mean and variance $P$, which simultaneously makes the expectation term equal to (1) and brings the divergence to zero, its lowest possible value [1] [Equation (8.57)]. In this paper we consider the optimal exponents of the block error/correct-decoding probability of the AWGN channel. We propose explanations, similar to (2), both for Shannon’s sphere-packing converse bound on the optimal error exponent [2] [Equations (3), (4) and (11)] (see also [3] [Equation (47)] with [4] [Equations (7.5.34) and (7.5.32)]) and for Oohama’s converse bound on the optimal correct-decoding exponent [5] [Equation (4)].
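As a numerical illustration of (1) and (2), the following minimal sketch (ours, with illustrative parameter values; it is not part of the original derivation) estimates the expectation term of (2) by Monte Carlo under the Gaussian input of variance P and checks that it matches the capacity:

```python
import numpy as np

# Monte Carlo check (illustrative): with X ~ N(0, P), Y = X + Z, Z ~ N(0, sigma2),
# the output PDF is q* = N(0, P + sigma2), the divergence term of (2) vanishes,
# and E[log2(w(Y|X)/q*(Y))] equals the capacity C = 0.5*log2(1 + P/sigma2).
rng = np.random.default_rng(0)
P, sigma2, n = 4.0, 1.0, 1_000_000

X = rng.normal(0.0, np.sqrt(P), n)
Y = X + rng.normal(0.0, np.sqrt(sigma2), n)

log2e = np.log2(np.e)
log_w = -0.5 * np.log2(2 * np.pi * sigma2) - (Y - X) ** 2 / (2 * sigma2) * log2e
log_q = -0.5 * np.log2(2 * np.pi * (P + sigma2)) - Y ** 2 / (2 * (P + sigma2)) * log2e

print(f"Monte Carlo estimate of the expectation term: {np.mean(log_w - log_q):.4f} bits")
print(f"capacity 0.5*log2(1 + P/sigma2):              {0.5 * np.log2(1 + P / sigma2):.4f} bits")
```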
In the case of discrete memoryless channels, the mutual information enters into the expressions for correct-decoding and error exponents through the method of types [6,7]. For the moment, without any particular meaning attached to it, let us rewrite the sphere-packing constant-composition exponent [6] [Equation (5.19)] with PDFs:
where denotes the Gaussian density with variance , which maximizes (2), and R is the information rate. When is Gaussian, the minimum (3) allows an explicit solution by the method of Lagrange multipliers. The minimizing solution of (3) is Gaussian [8] [Equation (65)], Ref. [9] [Equation (327)], and we obtain that (3) is the same as Shannon’s converse bound on the error exponent [2] [Equations (3), (4) and (11)], Refs. [8,9] in the limit of a large block length:
Then it turns out that the minimizing solution of (3) and the y-marginal PDF of the product [8] [Equation (63)] play the same roles in the derivation of the converse bound as w and $q^*$, respectively, in the maximization (2).
In this paper, in order to derive expressions similar to (3), we extend the method of types [1] [Chapter 11.1], Ref. [10] to include countable alphabets consisting of uniformly spaced real numbers, with the help of power constraints on types. The countable alphabets depend on the block length n, and the number of types satisfying the power constraints is kept sub-exponential in n. The latter idea is inspired by a different subject: “runs” in a binary sequence. If we treat every “run” of ones or zeros in a binary sequence as a separate symbol from the countable alphabet of run-lengths, then the number of different empirical distributions of such symbols in a binary sequence of length n is equivalent to the number of different partitions of the integer n into a sum of positive integers, which is of order $e^{\pi\sqrt{2n/3}}$ up to polynomial factors [11] [Equation (5.1.2)]. Thus, it is sub-exponential, and the method of types can be extended to that case. In our present case, however, the types are empirical distributions of uniformly quantized real numbers in quantized versions of real channel input and output vectors of length n. The quantized versions serve only for classification of channel input and output vectors and not for the communication itself. The uniform quantization step is different for the quantized versions of channel inputs and outputs, and in both cases it is chosen to be a decreasing function of n.
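As a quick check of this sub-exponential growth, the following sketch (illustrative, not from the paper) computes the partition numbers p(n) with the standard dynamic program and shows that log₂ p(n)/n decays toward zero:

```python
import math

# Coin-style DP: after processing part sizes 1..k, p[m] counts the partitions
# of m into parts of size at most k; at the end, p[m] is the partition number.
N = 2000
p = [1] + [0] * N
for part in range(1, N + 1):
    for m in range(part, N + 1):
        p[m] += p[m - part]

for n in (100, 500, 1000, 2000):
    print(f"n = {n:4d}:  log2 p(n) / n = {math.log2(p[n]) / n:.4f}")
# The ratio decreases toward 0, consistent with p(n) = e^{Theta(sqrt(n))}.
```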
Via expressions similar to (2), the proposed derivations demonstrate that, in order to achieve the converse bounds on the correct-decoding and error exponents, it is necessary for the types of the quantized versions of codewords to converge to the Gaussian distribution in characteristic function (CF) or, equivalently, in cumulative distribution function (CDF).
The contributions of the current paper are twofold: we apply the method of types to derive converse bounds on both the error exponent [12] and the correct-decoding exponent [13] of the AWGN channel. This underscores the versatility of the method of types, which is usually confined to finite alphabets.
In Section 2 and Section 3, we describe the communication system and introduce other definitions. In Section 4 we present the main results of the paper, which consist of two theorems and a proposition. Section 5 provides an extension of the method of types. The results of Section 5 are then applied in all the sections that follow. In Section 6 we prove a converse lemma that is then used for the derivation of both the error and correct-decoding exponents in Section 7 and Section 8, respectively. Section 9 connects the PDFs with the types.
Notation
Countable alphabets consisting of real numbers are denoted by , . The set of types with denominator n over is denoted by . Capital ‘P’ denotes probability mass functions, which are types , , , . The type class and the support of a type are denoted by and , respectively. The expectation with respect to a probability distribution is denoted by . Small ‘p’ denotes probability density functions , , , . Thin letters x, y represent real values, while boldface letters , represent real vectors. Capital letters X, Y represent random variables; boldface represents a random vector of length n. The conditional type class of given is denoted by . The quantized versions of variables are denoted by a superscript ‘q’: , , . Small w stands for a conditional PDF, and stands for a discrete positive measure which does not necessarily add up to 1. All information-theoretic quantities, such as the joint and conditional entropies , , the mutual information , , , the Kullback–Leibler divergence , , and the information rate R, are defined with respect to the logarithm to a base , denoted as . It is assumed that . The natural logarithm is denoted as ln. The cardinality of a discrete set is denoted by , while the volume of a continuous region is denoted by . The complementary set of a set A is denoted by . Logical “or” and “and” are represented by the symbols ∨ and ∧, respectively. Gaussian distributions are denoted by , while stands for the discrete uniform distribution. In Appendix B, represents the rounded-down version of the PDF .
2. Communication System
We consider communication over the discrete-time additive white Gaussian noise channel with real channel inputs and channel outputs and a transition probability density

$$w(y \,|\, x) \;=\; \frac{1}{\sqrt{2\pi\sigma^2}}\,\exp\!\left(-\,\frac{(y-x)^2}{2\sigma^2}\right).$$
Communication is performed by blocks of n channel inputs. Let denote a nominal information rate. Each block is used for transmission of one out of M messages, where , for some logarithm base . The encoder is a deterministic function , which converts a message into a transmitted block, such that
where , for all . The set of all the codewords , , constitutes a codebook . Each codeword in satisfies the power constraint:
The decoder is another deterministic function , which converts the received block of n channel outputs into an estimated message or, possibly, to a special error symbol ‘0’:
where each set is either an open region or the empty set, and the regions are disjoint: for . Observe that the maximum-likelihood decoder with open decision regions , defined for as
is a special case of (6). Note that the formal definition of includes the undesirable possibility of for .
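Since the AWGN transition density above is a decreasing function of the distance |y − x|, the maximum-likelihood regions are nearest-neighbor cells. The following end-to-end sketch (ours; the random codebook and all parameter values are illustrative assumptions, not the paper's construction) implements a power-constrained encoder and the maximum-likelihood decoder:

```python
import numpy as np

# A random codebook rescaled to satisfy the per-codeword power constraint
# ||f(j)||^2 = n*P, decoded by the ML rule, which for the AWGN channel is
# minimum Euclidean distance (nearest-codeword) decoding.
rng = np.random.default_rng(1)
n, M, P, sigma2 = 32, 256, 1.0, 1.0          # block length, messages, power, noise

F = rng.normal(size=(M, n))
F *= np.sqrt(n * P) / np.linalg.norm(F, axis=1, keepdims=True)

errors, trials = 0, 2000
for _ in range(trials):
    j = rng.integers(M)                       # uniformly chosen message
    y = F[j] + rng.normal(0.0, np.sqrt(sigma2), n)
    j_hat = np.argmin(np.linalg.norm(F - y, axis=1))
    errors += (j_hat != j)

print(f"rate = {np.log2(M) / n:.3f} bits/use vs capacity {0.5 * np.log2(1 + P / sigma2):.3f}")
print(f"empirical block error rate = {errors / trials:.4f}")
```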
3. Definitions
For each n, we define two discrete countable alphabets and as one-dimensional lattices:
For each n, we also define a discrete positive measure (not necessarily a distribution), which will approximate the channel w:
Denoting by a class of functions continuous on an open subset , we define
The set (11), defined for a given n, will be used only in the derivation of the correct-decoding exponent, while the following set of Lipschitz continuous functions will be used only in the derivation of the error exponent:
Note that is a convex set and also each function is bounded and cannot exceed .
With a parameter , we define the following Gaussian probability density functions [8,9]:
The first property of the following lemma shows that is the y-marginal PDF of the product .
Lemma 1
Here (17) combined with (14) corresponds to [8] [Equation (64)], (15) and (20) can be found in [8] [Equation (65)], while (14), (21) correspond respectively to [9] [Equations (302) and (328)].
Proof of Lemma 1.
The first property (19) can be verified using (14), (15), (17). Then (20) can be obtained from (15), (17), (19). Property (21) follows by (17) and (19). It can be verified from (14) that is a positive monotonically decreasing function of , such that . Then we get (22) and (23). From (21) and (22) we see that for all , which gives (24). Equality (25) can be obtained using (21). Then, using (22) and (23), we obtain (26). □
The following expressions will describe our results for the error and correct-decoding exponents:
The following identity can be obtained using (13), (15), (16), (18) and (20):
We also note that , as , which can be verified using the properties (15), (17) and (21). It can be verified that the expression inside the supremum of (27) is equivalent to Gallager’s expression for the Gaussian random-coding error exponent before the maximization over $\rho$ [4] [Equations (7.4.24) and (7.4.28)]. Therefore, with the supremum over $\rho$, the expression (27) coincides with Shannon’s converse sphere-packing bound (4).
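For concreteness, the following numerical sketch evaluates a sphere-packing-type curve from the i.i.d.-Gaussian-input Gallager function $E_0(\rho) = \frac{\rho}{2}\log_2\big(1 + \frac{A}{1+\rho}\big)$, with $A = P/\sigma^2$; this is a simplified stand-in for [4] [Equations (7.4.24) and (7.4.28)], which additionally optimize the input distribution, and is not the exact expression (27):

```python
import numpy as np

# sup over rho >= 0 of [E0(rho) - rho*R] with the i.i.d.-Gaussian-input Gallager
# function; the exponent decreases to 0 as R approaches the capacity C.
A = 4.0                                      # SNR = P / sigma^2 (illustrative)
rho = np.linspace(0.0, 50.0, 200_001)
E0 = 0.5 * rho * np.log2(1.0 + A / (1.0 + rho))

C = 0.5 * np.log2(1.0 + A)
for R in (0.4 * C, 0.6 * C, 0.8 * C, 1.0 * C):
    print(f"R = {R:.3f} bits: exponent = {np.max(E0 - rho * R):.4f}")
```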
4. Main Results
In this section we present two theorems and a proposition. The two theorems give converse bounds on the optimal error exponent and the optimal correct-decoding exponent of the AWGN channel, respectively. The bounds are asymptotic in the limit of a large block length n and are given by the expressions (27) and (28), accordingly. The proofs leading to these expressions are also presented in this section, while some of their technical details are encapsulated in lemmas that are treated in the remaining sections of the paper. In the course of the two proofs leading to (27) and (28) we obtain expressions analogous to (2), which, in exactly the same manner as the maximization of (2), allow us to draw conclusions about the asymptotically optimal codeword types achieving the converse bounds. This is made precise in the remark below, after the proof of the second theorem. The section concludes with a proposition that brings together the bounds of the two theorems in a parametric form.
The proof of the first theorem relies on Lemmas 13 and 19, which appear in Section 7 and Section 9, respectively.
Theorem 1
Proof.
Starting from Lemma 13, we can write the following sequence of inequalities:
where
(a) holds for any by Lemma 13 with . Note also that in (31) denotes the Kullback–Leibler divergence between the probability distribution and the positive measure defined in (10), which is not a probability distribution but only approximates the channel w.
(b) follows by Lemma 19 for the alphabet parameters and .
(c) holds for all R, with the possible exception of the single point on the R-axis where (32) may transition between a finite value and $+\infty$. To verify the equality, let us compare the infimum in (32) and the supremum over in (33) as functions of R, for a given . First, it can be verified that the supremum of (33) is the closure of the lower convex envelope of the infimum of (32). Second, it can be checked by the definition of convexity that the infimum of (32) is itself a convex (∪) function of R. Then the two coincide for all values of R, except possibly for the single point where they both jump to $+\infty$ (a numerical illustration of this duality is given after the proof). This property carries over to the external ‘lim sup max’ as well.
(d) follows because by (24) and (26) function satisfies the conditions under the infimum of (33).
(e) holds as an equality inside the supremum over , separately for each . In (34) by we denote the corresponding marginal PDF of the product and use the definitions (29). Then (e) follows by the definitions of and in (13) and (16), and by their properties (15) and (20).
(f) follows by the non-negativity of the divergence, and by the condition under the maximum of (34), since for .
In conclusion, according to (c) we obtain that the inequality between (30) and (35), as functions of R, holds for all R, except possibly for the single point where the jump to $+\infty$ in (35) occurs. Therefore, taking the limit as , we obtain that (30) is upper-bounded for all R by
which is the same as (27). □
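The Legendre-type duality used in step (c) can be illustrated numerically. The following sketch (a hypothetical stand-in function, not the paper's (32)) checks that a supremum of affine-in-R functions recovers the closed lower convex envelope:

```python
import numpy as np

# For a convex, nonincreasing f(R), the supremum sup_{rho>=0}[g(rho) - rho*R]
# with g(rho) = inf_R [f(R) + rho*R] recovers the closed lower convex envelope
# of f, which here is f itself.
R_grid = np.linspace(0.0, 2.0, 401)
rho_grid = np.linspace(0.0, 10.0, 2001)

f = np.maximum(1.0 - R_grid, 0.0) ** 2       # convex, nonincreasing stand-in
g = np.array([np.min(f + r * R_grid) for r in rho_grid])
f_env = np.array([np.max(g - rho_grid * R) for R in R_grid])

print(f"max |f - envelope| on the grid: {np.max(np.abs(f - f_env)):.5f}")
```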
The second theorem relies on Lemmas 17 and 20, which appear in Section 8 and Section 9, respectively.
Theorem 2
Proof.
Starting from Lemma 17, for each we can choose a different parameter , such that there is equality between (74) and (28). Then by (80) we obtain
With the choice , the first term in the minimum can be lower-bounded as follows:
where:
(a) follows by Lemma 20 with .
(b) holds for , because for any such , and because , where is the Gaussian PDF defined in (16).
(c) holds as an identity inside the infimum by the definitions (13), (16) and (29), and properties (15) and (20).
(d) holds if and , because then by (26) and (11) the function satisfies the conditions under the infimum and achieves the infimum.
(e) follows by the condition under the minimum of (36) since for .
In conclusion, since (37) is the lower bound for any and , we obtain
□
Remark 1.
Observe that neither the inequality (f) in the proof of Theorem 1 nor the inequality (e) in the proof of Theorem 2 can be met with equality unless . Furthermore, neither the inequality (f) in the proof of Theorem 1 nor the inequality (b) in the proof of Theorem 2 can be met with equality unless , where is the y-marginal PDF of . This is similar to (2). Accordingly, since is Gaussian, while is a convolution of with the Gaussian PDF , the type must converge to the Gaussian distribution with zero mean and variance $P$, in CF (this follows because the CF of a zero-mean Gaussian distribution itself has a Gaussian form) and, equivalently, in CDF, in order to achieve the exponents of Theorems 1 and 2. In both proofs the types represent the histograms of codewords, i.e., the empirical distributions of their quantized versions.
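The following sketch (ours; the quantization step and its rate of decrease are assumptions for illustration, not the paper's exact choices) shows this convergence empirically: the empirical CDF of a quantized, power-constrained Gaussian codeword approaches the N(0, P) CDF as n grows.

```python
import numpy as np
from math import erf

# Sup-distance between the empirical CDF of the quantized codeword and the
# N(0, P) CDF; it shrinks as the block length n grows and the step alpha shrinks.
rng = np.random.default_rng(2)
P = 1.0
for n in (100, 10_000, 100_000):
    alpha = n ** -0.25                       # assumed shrinking quantization step
    x = rng.normal(size=n)
    x *= np.sqrt(n * P) / np.linalg.norm(x)  # enforce the power constraint exactly
    t = np.sort(alpha * np.floor(x / alpha)) # quantized version, sorted
    ecdf = np.arange(1, n + 1) / n
    gauss = np.array([0.5 * (1 + erf(v / np.sqrt(2 * P))) for v in t])
    print(f"n = {n:6d}:  sup |ECDF - Gaussian CDF| = {np.max(np.abs(ecdf - gauss)):.4f}")
```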
Proposition 1
(Parametric representations of and ). For every there exists a unique , such that
For every there exists a unique , such that
The correct-decoding exponent representation of (38) is equivalent to [5] [Equation (22)] and appears in [14] [Equations (25) and (26)], while the error exponent representation of (39) is equivalent to [4] [Equations (7.4.30) and (7.4.31)] and appears in [8] [Equations (70) and (71)] and [9] [Equations (329) and (330)]. Here we present an alternative proof of Proposition 1 in the vein of the proofs of Theorems 1 and 2.
Proof.
Let us denote . Then for we can write a sandwich proof:
where denotes the set of all bivariate non-degenerate Gaussian PDF’s. Here (a) follows similarly to the inequality (b) in Theorem 2; (b) is an identity; (c) follows because is Gaussian and achieves the infimum; and (d) is a lower bound on the supremum at . Finally, since the RHS of (41) is further lower-bounded by the infimum (40), we conclude that .
For , besides let us define . Then
Here (a) follows due to the inequality under the first infimum; (b) is an identity; (c) follows because is Gaussian and achieves the infimum; (d) follows because ; and (e) is a lower bound on the supremum at . Since the RHS of (43) is lower-bounded by the infimum (42), we obtain . From using (17) and (21) we obtain . Hence for every the parameter is unique. □
5. Method of Types
In this section we extend the method of types [1] to include the countable alphabets of uniformly spaced reals (9), by using power constraints on the types. The method of types, in the form of the results of this section, is then used in the rest of the paper. It allows us to establish converse bounds in terms of types in Section 6, Section 7 and Section 8, and it is used in Appendix B, which is dedicated to the proof of Lemma 18 of Section 9, connecting PDFs and types.
5.1. Alphabet Size
Consider all the types satisfying the power constraint . Let denote the subset of the alphabet used by these types. In particular, every letter must satisfy , while by the definition of a type we have . This gives . Then is finite and by (7) we obtain
Lemma 2
(Alphabet size). .
5.2. Size of a Type Class
For let us define
Lemma 3
(Support of a joint type). Let be a joint type, such that and . Then
The proof is given in Appendix A.
Lemma 4
(Support of a type). Let and be types, such that and . Then
The proof for is given in Appendix A.
For , the parameters , replace, respectively, , .
Lemma 5
(Size of a type class). Let be a joint type, such that and . Then
where , , and .
Proof.
Observe that the standard type-size bounds (see, e.g., [6] [Lemma 2.3], Ref. [1] [Equation (11.16)]) can be rewritten as
Here can be replaced with its upper bound of Lemma 3. This gives (44). The remaining bounds of (45) and (46) are obtained similarly using Lemma 4. □
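The standard bounds just used can be checked directly on a small finite-alphabet example (illustrative; the lemma's lattice alphabets and correction terms are not reproduced here), since the type-class size is a multinomial coefficient:

```python
import math

# Check of (n+1)^{-|X|} * 2^{n H(P)} <= |T(P)| <= 2^{n H(P)} for a type P with
# denominator n over a 3-letter alphabet; |T(P)| = n! / prod_x (n P(x))!.
def log2_type_class_size(counts):
    n = sum(counts)
    return (math.lgamma(n + 1) - sum(math.lgamma(c + 1) for c in counts)) / math.log(2)

n = 300
counts = [150, 90, 60]                       # n*P(x) for the three letters
H = -sum(c / n * math.log2(c / n) for c in counts)

print(f"lower  n*H - |X|*log2(n+1) = {n * H - len(counts) * math.log2(n + 1):.1f}")
print(f"actual log2 |T(P)|         = {log2_type_class_size(counts):.1f}")
print(f"upper  n*H                 = {n * H:.1f}")
```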
Since it holds for any that , and similarly for , as a corollary of the previous lemma we also obtain
Lemma 6
(Size of a conditional type class). Let be a joint type, such that and . Then for and respectively
where , , and are defined as in Lemma 5.
5.3. Number of Types
Let be the set of all the types satisfying the power constraint . Then its cardinality can be upper-bounded as follows:
where (a) follows by the definition of preceding Lemma 2, (b) follows by [1] [Equation (11.6)], and (c) follows by Lemma 2. This bound is sub-exponential in n for . It can also be further improved and made sub-exponential in n for all , using Lemma 4, as follows.
Lemma 7
(Number of types).
where and .
Proof.
Denoting and , we can upper-bound as follows
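The sub-exponential growth asserted by Lemma 7 can also be checked by brute force on a small scale. The following sketch (ours, with a fixed lattice step and illustrative parameters, whereas in the paper the step decreases with n) counts, by a coin-style dynamic program, the types with denominator n over the lattice {kα : k ∈ ℤ} that satisfy the power constraint:

```python
import math

# A type is a multiset of n lattice symbols k*alpha; the power constraint
# translates to the sum of squared indices k^2 bounded by B = n*Gamma/alpha^2.
def count_power_constrained_types(n, alpha, Gamma):
    B = int(n * Gamma / alpha ** 2)
    K = math.isqrt(B)                        # only letters with k^2 <= B can occur
    f = [[0] * (B + 1) for _ in range(n + 1)]
    f[0][0] = 1
    for k in range(-K, K + 1):               # one DP pass per lattice letter
        w = k * k
        for m in range(1, n + 1):
            for b in range(w, B + 1):
                f[m][b] += f[m - 1][b - w]
    return sum(f[n])

for n in (10, 20, 40):
    cnt = count_power_constrained_types(n, alpha=0.5, Gamma=1.0)
    print(f"n = {n:2d}:  {cnt} types,  log2(count)/n = {math.log2(cnt) / n:.3f}")
```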
Similarly, let denote the set of all the joint types , such that and . Then its cardinality can be bounded as follows.
Lemma 8
(Number of joint types).
where and .
6. Converse Lemma
In this section we prove a converse Lemma 10, which is then used both for the error exponent in Section 7 and for the correct-decoding exponent in Section 8.
In order to determine exponents of channel probabilities, it is convenient to control the exponent of the channel probability density. Let be a vector of n channel inputs and let be its quantized version, with components
Similarly, let be a vector of n channel outputs and let be its quantized version, with for all . Then we have the following lemma.
Lemma 9
(PDF exponent). Let and be two channel input and output vectors, with their respective quantized versions , such that . Then
Proof.
The exponent can be equivalently rewritten as
Defining , we observe that
The second term on the RHS is bounded as:
where (a) follows because , (b) follows by Jensen’s inequality for the concave (∩) function , and (c) follows by the condition of the lemma. The third term is bounded as
Since the exponent with the quantized versions , in turn, can also be rewritten similarly to (55), the result of the lemma follows by (55)–(58). □
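The quantization (54) used above serves only to classify input/output vectors by their joint types. A minimal sketch of this classification step (ours; the floor-type quantizer and the shrinking step sizes are illustrative assumptions, since the exact rule of (54) is not reproduced here):

```python
import numpy as np
from collections import Counter

# Channel input/output vectors are mapped to lattice points, and the joint type
# is the empirical distribution of the quantized pairs. The quantized vectors
# only classify (x, y); communication itself uses the real-valued vectors.
rng = np.random.default_rng(3)
n, P, sigma2 = 1000, 1.0, 1.0
alpha, beta = n ** -0.25, n ** -0.25         # assumed shrinking lattice steps

x = rng.normal(0.0, np.sqrt(P), n)
y = x + rng.normal(0.0, np.sqrt(sigma2), n)

xq = alpha * np.floor(x / alpha)             # quantized channel input
yq = beta * np.floor(y / beta)               # quantized channel output

joint_counts = Counter(zip(xq.tolist(), yq.tolist()))   # n * (joint type)
print(f"distinct quantized pairs: {len(joint_counts)} out of n = {n}")
print(f"empirical power of x^q:   {np.mean(xq ** 2):.3f}  (P = {P})")
```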
The following lemma will be used both for the upper bound on the error exponent and for the lower bound on the correct-decoding exponent.
Lemma 10
(Conditional probability of correct decoding). Let be a joint type, such that , , and , and let be a codebook, such that the quantized versions (54) of its codewords , , are all of the type , that is:
Let be a random variable, independent of the channel noise, and let be the random channel-input and channel-output vectors, respectively. Let . Then
where , and , as , depending only on α, β, , , , and .
Proof.
First, from the single code we create an ensemble of codes, where each member code has the same probability of error/correct decoding as the original code . Then we upper-bound the ensemble-average probability of correct decoding.
Considering the codebook as an $M \times n$ matrix, we permute its n columns. This produces a set of $n!$ codebooks: , . The quantized versions of all the codewords of each codebook belong to the same type class . In accordance with , we also permute the n coordinates of each of the decision regions in the definition (6) of the decoder g, obtaining open sets and creating in this way an ensemble of codes , .
Let denote the random channel-input and channel-output vectors, respectively, when using the code with an index . Let and denote their respective quantized versions. Since the additive channel noise is i.i.d., permutation of components does not change the distribution of the noise vector , and we obtain
Suppose that one of the codes , , is used for communication with probability , chosen independently of the sent message J and of the channel noise. Let be the random variable denoting the index of this code. Then, using (59) and (60) we obtain
In what follows, we upper-bound the RHS of (61) with an added condition that :
The total number of codes in the ensemble can be rewritten as
Given , the total number of all the codewords in the ensemble such that their quantized versions belong to the same conditional type class (counted as distinct if the codewords belong to different ensemble member codes or represent different messages) is given by
Let denote the number of the codewords in a codebook such that their quantized versions belong to . Given that , the channel output vector falls into a hypercube region of :
For any such that and any open region , by Lemma 9 we obtain
Then, since all the codes and messages are equiprobable, the conditional probability of the code with the index ℓ is upper-bounded as
For , let be the indices of all the codewords in the codebook with their quantized versions in . Given that the codebook has indeed been used for communication, similarly to (65), by (64) the conditional probability of correct decoding can be upper-bounded as
where the second inequality follows because the decision regions are disjoint. Summing up over all the codes, we finally obtain:
where (a) follows by (65) and (66), (b) follows by (62) and (63), and (c) follows by (45) of Lemma 5 and (48) of Lemma 6. □
In the next two sections, we derive converse bounds on the error and correct-decoding exponents in terms of types.
7. Error Exponent
The end result of this section is given by Lemma 13 and represents a converse bound on the error exponent by the method of types.
Lemma 11
(Error exponent of mono-composition codebooks). Let be a type, such that , and let be a codebook, such that the quantized versions (54) of its codewords , , are all of the type , that is:
Let be a random variable, independent of the channel noise, and let be the random channel-input and channel-output vectors, respectively. Then for any parameter
where , and , as , depending only on α, β, , , and .
Proof.
For a joint type with the marginal type , such that , we also have
Lemma 12
Lemma 13
(Error exponent). Let be a random variable, independent of the channel noise, and let be the random channel-input and channel-output vectors, respectively. Then for any and there exists , such that for any
where , as , depending only on the parameters α, β, , , and .
Proof.
For a type let us define . Then for any n greater than of Lemma 12, there exists at least one type such that
where the second inequality follows by Lemma 7 applied with . Then we can use such a type for a bound:
where (a) follows by (71), and (b) holds for the random variable , independent of the channel noise, and the channel input/output random vectors with the decoder
where are the indices of the codewords in with their quantized versions in . □
8. Correct-Decoding Exponent
The end result of this section is Lemma 17, which is a converse bound on the correct-decoding exponent by the method of types.
Lemma 14
(Joint type constraint). For any and , there exists , such that for any and any pair of vectors satisfying
the respective quantized versions and , defined as in (54), satisfy
We use a Chernoff bound for the probability of an event to which the method of types cannot be applied:
Lemma 15
(Chernoff bound). Let , , be n independent random variables. Then for and :
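As a numerical illustration of such a bound (a hypothetical example, not the lemma's exact setting), take i.i.d. standard Gaussian variables and the power-overflow event {(1/n) Σ Zᵢ² ≥ Γ}; optimizing the Chernoff parameter against the chi-square MGF E[exp(sZ²)] = (1 − 2s)^(−1/2) gives the exponent I(Γ) = (Γ − 1 − ln Γ)/2:

```python
import numpy as np

# Chernoff bound P((1/n) sum Z_i^2 >= Gamma) <= exp(-n * I(Gamma)) for Gamma > 1,
# with I(Gamma) = (Gamma - 1 - ln(Gamma)) / 2, compared against simulation.
rng = np.random.default_rng(4)
Gamma, n, trials = 1.5, 50, 100_000

Z = rng.normal(size=(trials, n))
freq = np.mean(np.mean(Z ** 2, axis=1) >= Gamma)
I = 0.5 * (Gamma - 1.0 - np.log(Gamma))

print(f"empirical frequency:      {freq:.4f}")
print(f"Chernoff bound e^(-n*I):  {np.exp(-n * I):.4f}")
```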
Lemma 16
(Correct-decoding exponent of mono-composition codebooks). Let be a type, such that , and let be a codebook, such that the quantized versions (54) of its codewords , , are all of the type , that is:
Let be a random variable, independent of the channel noise, and let be the random channel-input and channel-output vectors, respectively. Then for any and there exists , such that for any
where , and , as , depending only on α, β, , , and .
Proof.
We consider probabilities of two disjoint events:
For the first term in the maximum, we obtain:
where (a) holds with for all of Lemma 14, (b) holds because , and in (c) we use the notation for the set of all the joint types satisfying both and . By the same steps as in (68), we further obtain
while Lemma 10 gives
Now by Lemma 8 for the number of joint types and by (77)–(79), we obtain
where denotes the expression (75). Applying Lemma 15 to the second term in the maximum of (76) we obtain (73)–(75). □
Lemma 17
(Correct-decoding exponent). Let be a random variable, independent of the channel noise, and let be the random channel-input and channel-output vectors, respectively. Then for any and there exists , such that for any
where and are as defined in (74) and (75), respectively, and , as , depending only on the parameters α, β, , , and .
Proof.
Similarly to [7] [Lemma 5]:
where:
(a) holds for of Lemma 12;
(b) follows by Lemma 7 with , while is a maximizer of (83);
(c) holds for the channel input/output random vectors and a code , such that with for , and with for , where are the indices of the codewords in the original codebook with their quantized versions in . Since all the codewords of have their quantized versions in , we can apply Lemma 16 with for the RHS of (83) to obtain (80) and (81). □
9. PDF to Type
Lemmas 19 and 20 of this section relate minima over types to minima over PDFs. Lemma 18, which has a laborious proof, is required only in the proof of Lemma 19, which is used for Theorem 1.
Lemma 18
(Quantization of PDF). Let be an alphabet defined as in (9), (7) with . Let be a type and , , be a collection of functions from (12), such that , , and . Then for any alphabet defined as in (9), (7) with , there exists a joint type with the marginal type , such that
where , , and , as , and depends only on the parameters α, β, , , , and (through (12)).
The proof is given in Appendix B.
Lemma 19
(PDF to type). Let and be two alphabets defined as in (9), (7) with and . Then for any , , and there exists , such that for any and for any type with :
where , as , and depends only on the parameters α, β, , , and .
Proof.
For a type with a collection of such that and , we can also find an upper bound on . For example, using the Cauchy–Schwarz inequality:
Then by Lemma 18 there exists a joint type with the marginal type , such that the three inequalities (84)–(86) are satisfied simultaneously, and it also follows by (10) and (85) that
Then the sum of (84) and (88) gives
while the difference of (84) and (86) gives
Note that all in the above relations are independent of the joint type and the functions . Therefore, by (89), (90) and (85), we conclude that, given any , for n sufficiently large, for every type with the prerequisites of this lemma and every collection of that satisfies the conditions under the infimum on the LHS of (87), there exists a joint type such that simultaneously
and (89) holds with a uniform , i.e., independent of and . It follows that such also satisfies the conditions under the minimum on the RHS of (87) and results in the objective function of (87) satisfying (89) with the uniform . Then the minimum itself, which is taken over a possibly greater variety of , satisfies the inequality (87). □
Lemma 20
(Type to PDF). For any and there exists , such that for any and for any type :
where , as , and depends only on the parameters β and .
Proof.
Observe first that any collection of conditional PDFs that satisfies the conditions under the infimum of (91) has finite differential entropies and well-defined quantities and . For any conditional type from the LHS of (91) we can define a set of histogram-like conditional PDFs:
which are step functions of for each . Then , and , as defined in (11). Analogously to (56)–(58), it can be obtained that
Then also , and the lemma follows. □
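A small numerical rendering of the histogram construction above (ours, with assumed parameters): a type P over a lattice with step β induces a step PDF p(y) = P(kβ)/β on each cell, and its differential entropy equals H(P) + log₂ β exactly, which is the kind of discrete-to-continuous entropy link the proof exploits:

```python
import numpy as np

# The step PDF induced by a type has differential entropy H(P) + log2(beta);
# for a fine lattice and Gaussian data it approaches the Gaussian differential
# entropy 0.5*log2(2*pi*e).
rng = np.random.default_rng(5)
n, beta = 100_000, 0.05

y = rng.normal(0.0, 1.0, n)
_, counts = np.unique(np.floor(y / beta).astype(np.int64), return_counts=True)
P = counts / n                               # the type (empirical distribution)

H = -np.sum(P * np.log2(P))                  # discrete entropy of the type
print(f"h(step PDF) = H(P) + log2(beta) = {H + np.log2(beta):.4f} bits")
print(f"h(N(0,1))   = 0.5*log2(2*pi*e)  = {0.5 * np.log2(2 * np.pi * np.e):.4f} bits")
```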
We use Lemmas 19 and 20 in Section 4 in the derivation of Theorems 1 and 2, respectively.
Author Contributions
Writing—original draft, A.S.-B. and S.T. All authors have read and agreed to the published version of the manuscript.
Funding
This work was supported by the Israel Science Foundation (ISF) grant #1579/23.
Data Availability Statement
The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding authors.
Conflicts of Interest
The authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.
Abbreviations
The following abbreviations are used in this manuscript:
| AWGN | Additive white Gaussian noise |
| PDF | Probability density function |
| CF | Characteristic function |
| CDF | Cumulative distribution function |
| RHS | Right-hand side |
| LHS | Left-hand side |
Appendix A
Proof of Lemma 3.
Adding the two power constraints together, we successively obtain the following inequalities:
where . To this let us add a continuously distributed random vector
independent of . Then , where
So we have
where for (a) we use the disk set , centered around zero and of the same total area as A, and the resulting property
Integrating on the RHS of (A1), we obtain
□
Proof of Lemma 4.
As in Lemma 3, the power constraint gives the following succession of inequalities:
where . To this let us add a continuously distributed random variable
independent of U. Then , where
So we have
where for (a) we use the interval set , centered around zero and of the same total length as A with the resulting property that , so that
Integrating on the RHS of (A2), we obtain
□
Appendix B
Proof of Lemma 18.
Using and , it is convenient to define a joint probability density function over as
which changes only stepwise in the x-direction. Note that of (86) is the y-marginal of . This gives
We proceed in two stages. First we quantize by rounding it down and check the effect of this on the LHS of (84)–(86). Then we complement the total probability back to 1, so that the type is conserved, and check the effect of this on the RHS of (84)–(86).
The quantization of is done by first replacing it with its infimum in each rectangle
and then quantizing this infimum down to the nearest value , :
Due to (8), the integral of over can be smaller than 1 only by an integer multiple of . The resulting difference from at each point can be bounded by a sum of two terms as
where and K is the parameter from (12).
For (86) we will require the y-marginal of from (A6), defined in the usual manner:
where the equality (a) follows from (A3), (A5) and (A6), and the inequality (b) follows by (A7). Then
where the last inequality follows by Lemma 4, (A7), (7), with . Note that the previously defined , while if and only if and .
The LHS of (84)
Now consider the LHS of (84). Note that each function in (84) is bounded and has a finite variance. It follows that it has a finite differential entropy. With (A3) we can rewrite the LHS of (84) as
Let us examine the possible increase in (A10) when is replaced with defined by (A5) and (A6). For this, let us define a set in with respect to the parameter h of (A7):
which is a countable union of disjoint rectangles by the definition of in (A3). Then
Note that the minimum of the function occurs at . Then for we have for all and the first of the two terms in (A12) is upper-bounded as
where the equality (∗) applies in the case when the upper bound is positive, with the definitions:
Next we upper-bound the entropy of the probability density function on the RHS of (A13) by that of a Gaussian PDF. By (A4) we have
So we can rewrite the bound of (A13) in terms of p defined by (A14):
From (A11) and (A14) it is clear that as . In order to relate them, let us rewrite the inequality in (A15) again as
where we use the disk set , centered around zero. This results in the following upper bound on p in terms of h:
where . Substituting the LHS of (A17) in (A16) in place of p, we obtain the following upper bound on the first half of (A12) in terms of h of (A7) and (A11):
In the second term of (A12) for the integrand can be upper-bounded by Lemma A1 with its parameters t and such that
This gives
where is the total area of A. To find an upper bound on , we use (A4):
where in (a) we use the disk set , centered around zero, of the same total area as A, and the resulting property that . So that
Continuing (A19), we therefore obtain the following upper bound on the second term in (A12):
Putting (A12), (A18) and (A21) together:
where and h are as in (A17) and (A7), respectively. So if in (A7), then the possible increase in (A10) caused by the substitution of in place of is at most .
Later on, for the RHS of (84)–(86), we will also require the loss in total probability incurred by the replacement of with . This loss is strictly positive and tends to zero with h of (A7):
where the set A in (a) is defined in (A11), (b) follows by (A14) and (A7), and (c) follows by (A17) and (A20).
The LHS of (86)
Consider next the LHS of (86). Since and has a finite variance, its differential entropy is finite. Let us examine the possible decrease in the LHS of (86) when is replaced with defined in (A8). For this, let us define a set in with respect to the parameter of (A9):
which is a countable union of disjoint open intervals. Then
For we have for all and the first of the two terms in (A25) is non-positive:
In the second term of (A25) for , the integrand can be upper-bounded by Lemma A1 with its parameters t and such that
where K is the parameter from (12). This gives
where is the total length of . It remains to find an upper bound on . We use (A4):
where in (a) we use the interval set , centered around zero, and of the same total length as with the resulting property that . So that
where . Continuing (A27), with (A28) we obtain the following upper bound on the second term in (A25), which is by (A26) also an upper bound on both terms of (A25):
So if in (A9), then the possible decrease caused by the substitution of in place of on the LHS of (86) is at most .
The LHS of (85)
Let us define two functions of :
Then with defined in (A6), we can obtain a lower bound for the expression on the LHS of (85):
where (a) follows because and , (b) follows by (A3) and Jensen’s inequality for the concave (∩) function , and (c) follows by the condition of the lemma.
Joint type
Let us define two mutually complementary probability masses for each :
where is defined in (A30). It follows from (A6) and (8) that each number is an integer multiple of and for each . Then a joint type can be formed with the two definitions above:
such that and for each .
The RHS of (85)
Having defined and , let us examine the possible decrease in the expression found on the RHS of (85) when inside that expression is replaced with :
where (a) follows by (A34), (b) follows according to the definitions (A30) and (A33), (c) follows because and because
then (d) follows by the upper bound on of (A23). Since by definition (A32) we also have
which is exactly the beginning of (A31); combining (A31) and (A35), we then obtain (85). The remainder of the proof, for (84) and (86), follows easily by Lemma A1 applied to the corresponding discrete entropy expressions with probability masses.
The RHS of (84)
In order to upper-bound the expression on the RHS of (84), it is convenient to write:
where (a) follows by (A34); for (b) in the first term we apply the upper bound of Lemma A1 with its parameters and with by (A34), (A36), for n sufficiently large such that ; while in the second term we use the equality (A36) and (7); (c) follows for since when positive; (d) and (e) follow respectively by the equality (A36) and the inequality (A23). Now since
the inequality in (84) follows by comparing (A10), (A22), and (A37).
The RHS of (86)
With and we have
where (a) follows by (A34); for (b) in the first term, we apply the lower bound of Lemma A1 with its parameters and with by (A34) and (A36), for n sufficiently large such that , while in the second term, we use the equality (A36) and (7); (c) follows because whenever positive, (d) and (e) follow, respectively, by the equality (A36) and the inequality (A23). From (A8) and (A32) we observe that . Since the function is piecewise constant in by the definition of , it follows that
Lemma A1.
Let , then for and ,
Proof.
For , the magnitude of the derivative of in the interval is monotonically decreasing and its average is , while in the adjacent interval , the magnitude of the derivative is upper-bounded by : .
Then there are three possible cases:
(1) :
(2) :
(3) :
. □
References
- Cover, T.M.; Thomas, J.A. Elements of Information Theory; John Wiley & Sons: Hoboken, NJ, USA, 2006.
- Shannon, C.E. Probability of Error for Optimal Codes in a Gaussian Channel. Bell Syst. Tech. J. 1959, 38, 611–656.
- Ebert, P.M. Error Bounds for Parallel Communication Channels; Technical Report 448; Research Laboratory of Electronics, Massachusetts Institute of Technology: Cambridge, MA, USA, 1966.
- Gallager, R.G. Information Theory and Reliable Communication; John Wiley & Sons: Hoboken, NJ, USA, 1968.
- Oohama, Y. The Optimal Exponent Function for the Additive White Gaussian Noise Channel at Rates above the Capacity. In Proceedings of the IEEE International Symposium on Information Theory (ISIT), Aachen, Germany, 25–30 June 2017.
- Csiszár, I.; Körner, J. Information Theory: Coding Theorems for Discrete Memoryless Systems; Academic Press: Cambridge, MA, USA, 1981.
- Dueck, G.; Körner, J. Reliability Function of a Discrete Memoryless Channel at Rates above Capacity. IEEE Trans. Inf. Theory 1979, 25, 82–85.
- Nakiboğlu, B. The Sphere Packing Bound for Memoryless Channels. Probl. Inf. Transm. 2020, 56, 201–244.
- Verdú, S. Error Exponents and α-Mutual Information. Entropy 2021, 23, 199.
- Csiszár, I. The Method of Types. IEEE Trans. Inf. Theory 1998, 44, 2505–2523.
- Andrews, G.E. The Theory of Partitions; Cambridge University Press: Cambridge, UK, 1976.
- Tridenski, S.; Somekh-Baruch, A. The Method of Types for the AWGN Channel: Error Exponent. In Proceedings of the IEEE International Symposium on Information Theory (ISIT), Athens, Greece, 7–12 July 2024.
- Tridenski, S.; Somekh-Baruch, A. The Method of Types for the AWGN Channel: Correct-Decoding Exponent. In Proceedings of the International Zurich Seminar on Information and Communication (IZS), Zurich, Switzerland, 6–8 March 2024.
- Cheng, H.-C.; Nakiboğlu, B. Refined Strong Converse for the Constant Composition Codes. In Proceedings of the IEEE International Symposium on Information Theory (ISIT), Los Angeles, CA, USA, 21–26 June 2020; pp. 2149–2154.