Next Article in Journal
Sparse Density Estimation with Measurement Errors
Next Article in Special Issue
Secure Physical Layer Network Coding versus Secure Network Coding
Previous Article in Journal
Network Bending: Expressive Manipulation of Generative Models in Multiple Domains
Previous Article in Special Issue
Encoding Individual Source Sequences for the Wiretap Channel
 
 
Font Type:
Arial Georgia Verdana
Font Size:
Aa Aa Aa
Line Spacing:
Column Width:
Background:
Article

The Listsize Capacity of the Gaussian Channel with Decoder Assistance

Signal and Information Processing Laboratory, ETH Zurich, 8092 Zurich, Switzerland
*
Author to whom correspondence should be addressed.
Entropy 2022, 24(1), 29; https://doi.org/10.3390/e24010029
Submission received: 26 November 2021 / Revised: 17 December 2021 / Accepted: 20 December 2021 / Published: 24 December 2021
(This article belongs to the Special Issue Wireless Networks: Information Theoretic Perspectives Ⅱ)

Abstract

:
The listsize capacity is computed for the Gaussian channel with a helper that—cognizant of the channel-noise sequence but not of the transmitted message—provides the decoder with a rate-limited description of said sequence. This capacity is shown to equal the sum of the cutoff rate of the Gaussian channel without help and the rate of help. In particular, zero-rate help raises the listsize capacity from zero to the cutoff rate. This is achieved by having the helper provide the decoder with a sufficiently fine quantization of the normalized squared Euclidean norm of the noise sequence.

1. Introduction

The order- ρ listsize capacity C l i s t ( ρ ) of a channel is the supremum of the coding rates for which there exist codes guaranteeing the large-blocklength convergence to one of the ρ -th moment of the cardinality of the list of messages that, given the received output sequence, have positive a posteriori probability. It is zero for the Gaussian channel because, on this channel, no codeword is ruled out by any received sequence so said list contains all the messages. Here we derive this capacity for the Gaussian channel with a helper that observes the noise sequence and describes it to the decoder using a rate-limited noise-free bit pipe; see Figure 1.
We show that the listsize capacity C l i s t ( ρ ) ( R h ) is then the sum of bit-pipe’s rate R h and the order- ρ cutoff rate R c u t o f f ( ρ ) of the Gaussian channel without a helper
C l i s t ( ρ ) ( R h ) = R c u t o f f ( ρ ) + R h .
The latter’s definition is similar to that of the listsize capacity, but with the list now comprising only those messages that are a posteriori at least as likely as the transmitted one. As we shall see, for the Gaussian channel with average power P , noise-variance N , and corresponding signal-to-noise ratio (SNR) A P / N ,
R c u t o f f ( ρ ) = R 0 ( ρ ) ,
where
R 0 ( ρ ) = 1 2 ln 1 2 1 + A 1 + ρ + 1 A 1 + ρ 2 + 4 A ( 1 + ρ ) 2 + 1 + ρ 2 ρ 1 2 1 + A 1 + ρ 1 A 1 + ρ 2 + 4 A ( 1 + ρ ) 2 + 1 2 ρ ln 1 2 1 A 1 + ρ + 1 A 1 + ρ 2 + 4 A ( 1 + ρ ) 2
(in nats) is a function that plays a prominent role in the analysis of the Reliability Function of said channel (Section 7.4 in [1]), [2]. That analysis does not, however, carry over directly to our setting because it deals with error exponents and not lists.
It is interesting to note that (1) also holds when the help rate R h is zero: the number of help bits required to increase the listsize capacity from zero to R c u t o f f ( ρ ) is sublinear in the blocklength. In fact, as we shall see, all it takes is a sufficiently fine quantization of the normalized squared Euclidean norm of the noise sequence.
The relation (1) is reminiscent of the analogous result on the erasures-only capacity C e-o ( R h ) of the Gaussian channel with a rate- R h helper (Remark 10 in [3]), namely, that
C e-o ( R h ) = C + R h ,
where C denotes the Shannon capacity of the Gaussian channel (without help) (Theorem 9.1.1 in [4]), and C e-o ( R h ) is the erasures-only capacity, which is defined like C l i s t ( ρ ) ( R h ) but with the requirement on the ρ -th moment of the list replaced by the requirement that the list be of size 1 with probability tending to one. (The Gaussian erasures-only capacity with a helper is given by the RHS of (4) irrespective of whether the assistance is provided to the encoder or decoder.) The latter result in turn is reminiscent of the analogous result on the Shannon capacity with a helper C ( R h ) [5,6,7,8]
C ( R h ) = C + R h .
In proving (1), we shall focus on the “direct part,” i.e., that the right-hand side (RHS) of (1) is achievable. The “converse,” that no rate exceeding the RHS of (1) is achievable, is omitted because it follows directly from (Remark 4 in [3]): There it is shown that this is true even if, given the received sequence and the provided help, the list contains only a subset of the messages that are of positive a posteriori probability, namely, those that are a posteriori at least as likely as the transmitted message.
The listsize capacity is relevant, for example, when the message set corresponds to tasks [9] and the transmitted message corresponds to one that must be performed by the decoder with absolute certainty. To ensure this, the decoder must perform all the tasks in the list of tasks that are not ruled out by the received sequence. (In addition to the transmitted task, other tasks need not but may be performed.) The ρ -th moment of the list’s size then measures the receiver’s average effort.
Results on the listsize capacity and the erasures-only capacity of general discrete memoryless channels (DMCs) in the absence of help are scarce. Noteworthy exceptions are the results of Pinsker and Sheverdjaev [10], Csiszár and Narayan [11], and Telatar [12], that provide sufficient conditions for the erasures-only capacity to equal the Shannon capacity and for the listsize capacity to equal the cutoff rate. Asymptotic results on the erasures-only capacity in the low-noise regime can be found in [13,14]. Once noiseless feedback is introduced, the problems become more tractable [15,16,17].
The rest of the paper is organized as follows. Section 2 describes our set-up and presents the main result. Section 3 contains some classical and some new observations regarding Gallager’s E 0 function and its modification. Section 4 derives the cutoff rate of the Gaussian channel without help and proves (2). Section 5 describes and analyzes a coding scheme that proves the direct part of (1).

2. The Main Result

A power- P blocklength-n encoder f ( n ) for a message set M is a mapping
f ( n ) : M R n
that maps each message m M to an n-tuple f ( n ) ( m ) whose Euclidean norm f ( n ) ( m ) satisfies
f ( n ) ( m ) 2 n P , m M .
We sometimes use x m to denote f ( n ) ( m ) , and x m , k to denote the k-th component of x m , so
f ( n ) ( m ) = x m = ( x m , 1 , , x m , n ) .
The encoder is said to be of rate R if the cardinality of M is e n R , in which case we often assume that M = { 1 , , e n R } . (We ignore the fact that e n R need not be an integer; this issue washes out in the large-n asymptotics we study.)
When a message m M is sent over the discrete-time additive Gaussian noise channel using the encoder f ( n ) , the channel produces the random vector Y R n whose k-th component Y k is
Y k = x m , k + Z k , k = 1 , , n ,
where { Z k } are independent and identically distributed (IID) zero-mean Gaussians of variance N . We assume that N is positive and use w ( y | x ) to denote the density of the channel’s output when its input is x, i.e., the mean-x variance- N Gaussian density
w ( y | x ) = 1 2 π N e ( y x ) 2 2 N , x , y R ,
which we extend to n-tuples in a memoryless fashion:
w ( y | x ) = k = 1 n w y k | x k , x , y R n .
For convenience, we define
A = P N .
Given an output sequence y and a message m, we define the “at-least-as-likely list”
L ( m , y ) = m M : w ( y | x m ) w ( y | x m ) .
Assuming, as we do, that the messages are a priori equally likely, this list comprises the messages that, given the output sequence y , are a posteriori at least as likely as m.
If a message M, drawn equiprobably from M , is transmitted over the channel with a resulting received sequence Y , then the cardinality of the at-least-as-likely list is a random positive integer, and we denote its ρ -th moment E | L ( M , Y ) | ρ :
E | L ( M , Y ) | ρ = 1 | M | m M w ( y | x m ) | L ( m , y ) | ρ d ν ( y ) ,
where ν ( · ) denotes the Lebesgue measure on R n .
For a given ρ > 0 , we define the order- ρ cutoff rate R c u t o f f ( ρ ) as the supremum of the rates R for which there exists a sequence of rate-R power- P blocklength-n encoders { f ( n ) } satisfying
lim n E | L ( M , Y ) | ρ = 1 .
Theorem 1.
The order-ρ cutoff rate R c u t o f f ( ρ ) of the additive Gaussian noise channel equals R 0 ( ρ ) of (3).
Proof. 
See Section 4. □
A T n -valued description of the noise sequence Z = ( Z 1 , , Z n ) is a mapping
ϕ ( n ) : R n T n
with the understanding that ϕ ( n ) ( Z ) , which we denote T, is the description of Z . We say that a sequence { ϕ ( n ) } of descriptions is of rate R h (nats) if
lim n 1 n ln T n = R h .
Suppose now that, in addition to the received sequence Y , the receiver is also presented with the description T = ϕ ( n ) ( Z ) of the noise, and that, based on the two, it forms the “remotely-plausible list” L ( Y , T ) comprising the messages that have positive a posteriori probability given the two:
L ( y , t ) = m M : ϕ ( n ) ( y x m ) = t .
Given ρ > 0 , the listsize capacity C l i s t ( ρ ) ( R h ) with rate- R h decoder assistance is the supremum of the rates R for which there exists a sequence of rate-R power- P blocklength-n encoders { f ( n ) } and a sequence { ϕ ( n ) } of descriptions of rate R h such that
lim n E L Y , ϕ ( n ) ( Z ) ρ = 1 .
Theorem 2.
On the Gaussian channel, the listsize capacity with rate- R h decoder assistance C l i s t ( ρ ) ( R h ) is given by
C l i s t ( ρ ) ( R h ) = R c u t o f f ( ρ ) + R h
where R c u t o f f ( ρ ) is the order-ρ cutoff rate of the channel (without assistance) as given in (2) and (3).
Proof. 
The “converse,” that (19) cannot be achieved when the rate exceeds the RHS of (20), follows from (Remark 4 in [3]). The “direct part,” describing a coding scheme that achieves (19) with rates approaching the RHS of (20), is proved in Section 5. □

3. Preliminaries

Given ρ 0 and any probability measure Q on R , Gallager’s E 0 function for our channel is defined as [1]
E 0 ( ρ , Q ) = ln y R x R w ( y | x ) 1 1 + ρ d Q ( x ) 1 + ρ d ν ( y ) ,
where ν ( · ) is now the Lebesgue measure on R . The result of maximizing E 0 ( ρ , Q ) over all Q under which E [ X 2 ] P , is denoted E 0 * ( ρ ) :
E 0 * ( ρ ) = sup Q : x 2 d Q ( x ) P E 0 ( ρ , Q ) .
The multi-letter extension of E 0 is
E 0 ( n ) ρ , Q ( n ) = 1 n ln y R n x R n w ( y | x ) 1 1 + ρ d Q ( n ) ( x ) 1 + ρ d ν ( y ) ,
where Q ( n ) is a probability measure on R n ; the integrals are over R n ; the channel w ( y | x ) is defined in (11). Similarly,
E 0 ( n ) , * [ n ] ( ρ ) = sup Q ( n ) : x 2 d Q ( n ) ( x ) n P E 0 ( n ) ( ρ , Q ( n ) ) .
Given probability measures Q ( m ) on R m and Q ( n ) on R n that satisfy the power constraints E [ X 2 ] m P and E [ X 2 ] n P respectively, the product measure Q ( m ) × Q ( n ) on R m + n satisfies the power constraint E [ X 2 ] ( m + n ) P and
( m + n ) E 0 ( m + n ) ρ , Q ( m ) × Q ( n ) = m E 0 ( m ) ρ , Q ( m ) + n E 0 ( n ) ρ , Q ( n )
because
( m + n ) E 0 ( m + n ) ρ , Q ( m ) × Q ( n ) ( 26 ) = ln y R m + n x R m + n w ( y | x ) 1 1 + ρ d Q ( m ) × Q ( n ) ( x ) 1 + ρ d ν ( y ) ( 27 ) = m E 0 ( m ) ( ρ , Q ( m ) ) + n E 0 ( n ) ( ρ , Q ( n ) ) .
The sequence n E 0 ( n ) , * ( ρ ) is thus superadditive, and Feket’s Subadditive lemma implies that E 0 ( n ) , * ( ρ ) converges to its supremum:
lim n E 0 ( n ) , * ( ρ ) = sup n E 0 ( n ) , * ( ρ ) .
We shall later see (cf. (55) ahead) that
1 ρ sup n E 0 ( n ) , * ( ρ ) = R 0 ( ρ ) ,
where R 0 ( ρ ) is defined in (3).
We shall also need Gallager’s modified E 0 function. To highlight its relation to the unmodified function, which is quite general, we shall use g ( x ) for x 2 and g ( x ) for x 2 . We shall also replace P with Γ .
Given some ρ 0 , some probability distribution Q on R under which E [ g ( X ) ] Γ , and some r 0 , the modified Gallager’s E 0 function E 0 , m ( ρ , Q , r ) is defined as
E 0 , m ( ρ , Q , r ) = ln y R x R e r ( g ( x ) Γ ) w ( y | x ) 1 1 + ρ d Q ( x ) 1 + ρ d ν ( y ) .
We shall also be interested in the maximum of E 0 , m ( ρ , Q , r ) over both Q and r. We distinguish between two cases depending on whether E [ g ( X ) ] Γ holds strictly or not. In the former case we only allow r to be zero, whereas in the latter case it can be any non-negative number. We thus define
E 0 , m * ( ρ , Q ) = sup r 0 E 0 , m ( ρ , Q , r ) , if g ( x ) d Q ( x ) = Γ , E 0 ( ρ , Q ) , if g ( x ) d Q ( x ) < Γ ,
and
E 0 , m * * ( ρ ) = sup Q : g ( x ) d Q ( x ) Γ E 0 , m * ( ρ , Q ) .
The next proposition provides a lower bound on lim E 0 ( n ) , * ( ρ ) .
Proposition 1.
Any probability distribution Q on R under which g ( X ) is of finite second moment and of expectation Γ provides the lower bound
lim n E 0 ( n ) , * ( ρ ) E 0 , m * ( ρ , Q ) .
Proof. 
Let Q be any input distributions Q under which g ( X ) is of finite second moment and E [ g ( X ) ] = Γ . For each n N , let Q ( n ) be the conditional distribution of the n-fold product distribution Q × n given the event { X A n } , where
A n = x R n : n Γ δ < g ( x ) n Γ
where δ > 0 is some positive constant. Thus, for every Borel measurable subset B of R n ,
Q ( n ) ( B ) = 1 μ Q × n ( B A n ) , B B ( R n )
with
μ = Q × n ( A n ) .
For any r 0 , we can upper-bound the Radon–Nykodim derivative of Q ( n ) with respect to product distribution Q × n as follows:
( 37 ) d Q ( n ) d Q × n = 1 μ I { x A n } ( 38 ) 1 μ e r ( g ( x ) n Γ + δ ) ( 39 ) = 1 μ e r δ e r ( g ( x ) n Γ )
where I { statement } equals 1 if the statement is true and else 0. Using this bound on the Radon–Nykodim derivative we obtain:
( 40 ) E 0 ( n ) ( ρ , Q ( n ) ) = 1 n ln y R n x R n w ( y | x ) 1 1 + ρ d Q ( n ) ( x ) 1 + ρ d ν ( y ) ( 41 ) 1 n ln y R n x R n w ( y | x ) 1 1 + ρ · 1 μ e r δ e r ( g ( x ) n Γ ) d Q × n ( x ) 1 + ρ d ν ( y ) ( 42 ) = 1 + ρ n ln e r δ μ ln y R x R e r ( g ( x ) Γ ) w ( y | x ) 1 1 + ρ d Q ( x ) 1 + ρ d ν ( y ) ( 43 ) = 1 + ρ n ln e r δ μ + E 0 , m ( ρ , Q , r ) .
By the Central Limit Theorem, μ tends to 1/2 as n tends to infinity, so (43) implies that
lim inf n 1 n E 0 ( n ) ( ρ , Q ( n ) ) E 0 , m ( ρ , Q , r ) .
Taking the supremum of the RHS over all r 0 , establishes that
lim inf n 1 n E 0 ( n ) ρ , Q ( n ) E 0 , m * ( ρ , Q )
and hence, by (24), proves (33). □
We next turn to upper-bounding lim E 0 ( n ) , * ( ρ ) .
Proposition 2.
If the probability distribution Q ( n ) on R n is such that E [ g ( X ) ] n Γ , and if f R is any density on R , then
E 0 ( n ) ( ρ , Q ( n ) ) sup P : g ( x ) d P ( x ) Γ ( 1 + ρ ) x R ln y R w ( y | x ) 1 1 + ρ f R ( y ) ρ 1 + ρ d y d P ( x )
and, consequently,
lim n E 0 ( n ) , * ( ρ ) sup P : g ( x ) d P ( x ) Γ ( 1 + ρ ) x R ln y R w ( y | x ) 1 1 + ρ f R ( y ) ρ 1 + ρ d y d P ( x ) .
Proof. 
The proof is based on Proposition 2 in [18], which implies that for every density f R ( n ) on R n and any probability measure Q ( n ) on R n ,
n E 0 ( n ) ρ , Q ( n ) ( 1 + ρ ) x R n ln y R n w ( y | x ) 1 1 + ρ f R ( n ) ( y ) ρ 1 + ρ d y d Q ( n ) ( x ) .
Applying this inequality to the product density
f R ( n ) ( y ) = i = 1 n f R ( y i ) ,
where f R is a density on R , and using the product form of the channel (11), we obtain that for any density f R on R
( 50 ) E 0 ( n ) ( ρ , Q ( n ) ) 1 n ( 1 + ρ ) i = 1 n x i R ln y R w ( y | x i ) 1 1 + ρ f R ( y ) ρ 1 + ρ d y d Q i ( n ) ( x i ) ( 51 ) = ( 1 + ρ ) x R ln y R w ( y | x ) 1 1 + ρ f R ( y ) ρ 1 + ρ d y d Q ¯ ( x ) ,
where Q i ( n ) is the i-th marginal of Q ( n ) , and Q ¯ is the probability measure on R defined by
Q ¯ = 1 n i = 1 n Q i ( n ) .
Observe that if E [ g ( X ) ] n Γ under Q ( n ) , then E [ g ( X ) ] Γ under Q ¯ . This observation and (51) establish (46). Since (46) holds for all n, (47) must also hold. □

4. The Cutoff Rate of the Gaussian Channel

In this section, we prove Theorem 1. Since scaling the output does not change the cutoff rate, we will assume WLOG that the noise variance is 1 and the transmit power is A ; see (12). Thus,
w ( y | x ) = 1 2 π e ( y x ) 2 2 , x , y R ,
and each codeword x m satisfies
x m 2 n A .

4.1. Computing lim E 0 ( n ) , * ( ρ )

Here we shall establish that on the Gaussian channel (53)
lim n E 0 ( n ) , * ( ρ ) = ρ R 0 ( ρ ) = E 0 , m * ( ρ , Q G ) ,
where R 0 ( ρ ) is defined in (3), and Q G is the zero-mean variance- A Gaussian distribution. To this end, we shall derive matching upper and lower bounds on the limit. We begin with the former.

4.1.1. Upper-Bounding lim E 0 ( n ) , * ( ρ )

We show that on the channel (10)
lim n E 0 ( n ) , * ( ρ ) ρ R 0 ( ρ ) .
The proof is based on Proposition 2 with the density f R corresponding to a centered Gaussian of variance σ 2 , where
σ 2 = A ( 1 + ρ ) β + 1
and
β = 1 2 1 A 1 + ρ + 1 A 1 + ρ 2 + 4 A ( 1 + ρ ) 2 .
Evaluating the RHS of (47) for this density, we obtain
( 59 ) sup P : E [ X 2 ] A ( 1 + ρ ) x R ln y R w ( y | x ) 1 1 + ρ f R ( y ) ρ 1 + ρ d y d P ( x ) ( 60 ) = sup P : E [ X 2 ] A ( 1 + ρ ) x R ln y R 1 ( 2 π ) 1 1 + ρ e ( y x ) 2 2 ( 1 + ρ ) 1 ( 2 π σ 2 ) ρ 1 + ρ e y 2 ρ 2 σ 2 ( 1 + ρ ) d y d P ( x ) ( 61 ) = sup P : E [ X 2 ] A ( 1 + ρ ) x R ln 2 π ( 1 + ρ ) 2 ρ σ 2 1 1 + ρ 1 2 π σ 1 2 e x 2 2 σ 1 2 d P ( x ) ( 62 ) = sup P : E [ X 2 ] A ( 1 + ρ ) ln ( 1 + ρ ) 2 ρ σ 2 1 1 + ρ 1 σ 1 2 + ( 1 + ρ ) x R x 2 2 σ 1 2 d P ( x ) ( 63 ) = ( 1 + ρ ) ln ( 1 + ρ ) 2 ρ σ 2 1 1 + ρ 1 σ 1 2 + ( 1 + ρ ) A 2 σ 1 2 ( 64 ) = 1 + ρ 2 A σ 1 2 + 1 + ρ 2 ln σ 1 2 1 2 ln σ 2 1 + ρ 2 ln ( 1 + ρ ) 2 ρ
where in (61) we defined
( 65 ) σ 1 2 1 + ρ + σ 2 ( 1 + ρ ) ρ ( 66 ) = A ρ β + ( 1 + ρ ) 2 ρ .
To conclude the proof, it remains to show that the RHS of (64) coincides with ρ R 0 ( ρ ) . To this end, observe that some basic algebra reveals that
β β 1 + A 1 + ρ = A ( 1 + ρ ) 2
and
( β + A 1 + ρ ) ( 1 β ) = A ρ ( 1 + ρ ) 2 .
Therefore, the first term in (64) can be rewritten as
( 69 ) 1 + ρ 2 A σ 1 2 = 1 + ρ 2 A A ρ β + ( 1 + ρ ) 2 ρ = 1 + ρ 2 A ρ ( 1 + ρ ) 2 β A ( 1 + ρ ) 2 + β ( 70 ) = A ρ 2 ( 1 + ρ ) 1 β + A 1 + ρ = ( 1 + ρ ) ( 1 β ) 2 ,
and the remaining terms rewritten as
1 + ρ 2 ln σ 1 2 1 2 ln σ 2 1 + ρ 2 ln ( 1 + ρ ) 2 ρ ( 71 ) = 1 + ρ 2 ln A ρ β + ( 1 + ρ ) 2 ρ 1 2 ln A ( 1 + ρ ) β + 1 1 + ρ 2 ln ( 1 + ρ ) 2 ρ ( 72 ) = 1 + ρ 2 ln A ( 1 + ρ ) 2 β + 1 1 2 ln A ( 1 + ρ ) β + 1 ( 73 ) = ρ 2 ln β + A 1 + ρ + 1 2 ln β .
The sum equals to ρ R 0 ( ρ ) .

4.1.2. Lower-Bounding lim E 0 ( n ) , * ( ρ )

To lower-bound lim E 0 ( n ) , * ( ρ ) , we shall use Proposition 1 with Q chosen as a centered variance- A Gaussian distribution Q G . For this probability distribution Gallager calculated E 0 , m * ( ρ , Q G ) (Section 7.4 in [1]). He showed that for any ρ > 0 ,
E 0 , m * ( ρ , Q G ) = ρ R 0 ( ρ ) ,
where R 0 ( ρ ) is defined in (3). Using this result and Proposition 1 we obtain
( 75 ) lim n E 0 ( n ) , * ( ρ ) E 0 , m * ( ρ , Q G ) ( 76 ) = ρ R 0 ( ρ ) .

4.2. The Mapping ρ R 0 ( ρ ) Is Monotonically Decreasing

For the purpose of proving the achievability of R 0 ( ρ ) , we will need the fact that it is monotonically decreasing in ρ . In view of (55), it suffices to show that, for every n N , the mapping ρ ρ 1 E 0 ( n ) , * ( ρ ) is monotonically decreasing. In view of (24), the latter will follow once we establish the monotonicity of ρ ρ 1 E 0 ( n ) ( ρ , Q ( n ) ) for any fixed Q ( n ) . Since E 0 ( n ) ( ρ , Q ( n ) ) evaluates to zero at ρ = 0 , this monotonicity can be established by showing that the mapping ρ E 0 ( n ) ( ρ , Q ( n ) ) is concave. This is established in (Appendix 5.B in [1]). (That appendix deals with finite alphabets, but the proof goes through also to our case.)

4.3. Achievability of R 0 ( ρ )

The achievability of R 0 ( ρ ) will be proved using a random-coding argument. Let Q be the zero-mean variance- A Gaussian distribution, let δ > 0 be a positive constant, and let Q ( n ) be the distribution on R n defined in (35) and (36). Draw the codewords { X m } m = 1 , , e n R of a blocklength-n random codebook independently, each according to Q ( n ) , so X m 2 n A with probability 1 for every m M . By symmetry, E | L ( m , Y ) | ρ (where the expectation is over the random choice of codebook and on the channel behavior) does not depend on m. Consequently,
E e n R m M | L ( m , Y ) | ρ = E | L ( 1 , Y ) | ρ ,
and if we establish that E | L ( 1 , Y ) | ρ tends to 1, it will follow by the random-coding argument that there exists a codebook for which the LHS of (77)—with the expectation now over the channel behavior only—tends to 1.
Defining
B m ( x 1 , y ) = 𝟙 w ( y | X m ) w ( y | x 1 ) , x 1 , y R n ,
we can express the RHS of (77) as
E | L ( 1 , Y ) | ρ = E 1 + m 1 B m ( X 1 , Y ) ρ ,
and we seek to show that
lim n E 1 + m 1 B m ( X 1 , Y ) ρ = 1 .
To this end, we shall need the following lemma.
Lemma 1.
Let { Z n } be a sequence of random variables taking values in N , and let ρ > 0 be fixed. The following two conditions are then equivalent:
(i) 
E [ ( 1 + Z n ) ρ ] = 1 + o ( 1 )
(ii) 
E [ Z n ρ ] = o ( 1 )
where o ( 1 ) tends to zero as n tends to infinity. Thus
lim n E [ ( 1 + Z n ) ρ ] = 1 lim n E [ Z n ρ ] = 0 .
Proof. 
The implication (ii) ⟹ (i) follows by noting for any z N and ρ > 0
( 1 + z ) ρ 1 + 2 ρ z ρ ,
so
E [ ( 1 + Z n ) ρ ] 1 + 2 ρ E [ Z n ρ ] .
As for the implication (i) ⟹ (ii), note that any y N and ρ > 0
( 1 + y ) ρ y ρ + 𝟙 { y = 0 } ,
so
E [ ( 1 + Z n ) ρ ] E [ Z n ρ ] + Pr [ Z n = 0 ] .
The implication is now established by noting that (i) implies that Pr [ Z n = 0 ] 1 because, by Markov’s inequality (and the strict positivity of ρ ),
( 86 ) Pr [ Z n 0 ] = Pr [ ( 1 + Z n ) ρ 1 2 ρ 1 ] ( 87 ) E [ ( 1 + Z n ) ρ ] 1 2 ρ 1 .
In light of the above lemma, to establish (80) it suffices to show that
lim n E m 1 B m ( X 1 , Y ) ρ = 0 ,
i.e., that
lim n E E m 1 B m ( x 1 , y ) ρ | X 1 = x 1 , Y = y = 0 ,
where the outer expectation is over X 1 and Y .
A related expectation—but one where it is the conditional expectation that is raised to the ρ -th power—is studied in the following lemma:
Lemma 2.
If ρ > 0 and R < R 0 ( ρ ) , then
lim n E E m 1 B m ( x 1 , y ) | X 1 = x 1 , Y = y ρ = 0 .
Proof. 
See Appendix A. □
To establish (88) using this lemma, we distinguish between two cases depending on whether 0 < ρ 1 or ρ > 1 . In the former case x x ρ is concave, so Jensen’s inequality implies that
E E m 1 B m ( x 1 , y ) ρ | X 1 = x 1 , Y = y E E m 1 B m ( x 1 , y ) | X 1 = x 1 , Y = y ρ ,
which, together with Lemma 2, implies (88) whenever R < R 0 ( ρ ) .
Suppose now that ρ > 1 . Conditional on the transmitted codeword x 1 and the output y , the random variables { B m } m 1 are IID Bernoulli, with B m determined by X m . We can thus use Rosenthal’s technique (Lemma 5.10 in [19]), [20] to obtain
E m 1 B m ( x 1 , y ) ρ | X 1 = x 1 , Y = y ( 92 ) 2 ρ 2 max E m 1 B m ( x 1 , y ) | X 1 = x 1 , Y = y ρ , E m 1 B m ( x 1 , y ) | X 1 = x 1 , Y = y ( 93 ) 2 ρ 2 E m 1 B m ( x 1 , y ) | X 1 = x 1 , Y = y ρ + E m 1 B m ( x 1 , y ) | X 1 = x 1 , Y = y .
Taking the expectation over X 1 and Y yields
( 94 ) E E m 1 B m ( x 1 , y ) ρ | X 1 = x 1 , Y = y ( 95 ) 2 ρ 2 E E m 1 B m ( y ) | Y = y ρ + 2 ρ 2 E E m 1 B m ( y ) | Y = y ( 96 ) 2 ρ 2 E E m 1 B m ( x 1 , y ) | X 1 = x 1 , Y = y ρ + 2 ρ 2 E E m 1 B m ( x 1 , y ) | X 1 = x 1 , Y = y .
The first term on the RHS can be treated using the lemma. The second—but for the 2 ρ 2 constant—is the one encountered when ρ is 1. Since by Section 4.2, R 0 ( ρ ) R 0 ( 1 ) (because ρ > 1 for the case at hand), it too tends to zero when R < R 0 ( ρ ) .

4.4. No Rate Exceeding R 0 ( ρ ) Is Achievable

To show the converse, we need Arıkan’s lower bound on guessing [21].
Fix any sequence of rate-R blocklength-n codebooks { C n } satisfying the cost constraint. For any n N , let
Q ( n ) ( x ) = 1 | C n | if x C n , 0 otherwise
be the induced probability distribution on R n . Since the codebook satisfies the cost constraint, E [ X 2 ] n A under Q ( n ) .
Given y , list the messages m M in decreasing order of likelihood w ( y | x m ) (resolving ties arbitrarily, e.g., ranking low numerical values of m higher), and let G ( m | y ) denote the ranking of the message m in this list. Note that
| L ( m , y ) | G ( m | y ) ,
where the inequality can be strict because there may be messages that are in L ( m , y ) because they have the same likelihood as m, and that are yet ranked lower than m by G ( · | y ) because of the way ties are resolved. It follows from this inequality that the ρ -th moment of | L ( M , Y ) | cannot tend to one unless the ρ -th moment of G ( M | Y ) does. By Arıkan’s guessing inequality [21],
E G ( M | Y ) ρ ( 1 + n R ) ρ · exp n ρ R n E 0 ( n ) ( ρ , Q ( n ) ) ,
so the ρ -th moment of G ( M | Y ) can tend to one only if
ρ R lim inf n E 0 ( n ) ( ρ , Q ( n ) ) .
From this, the converse now follows using (24) and (55) because
( 101 ) lim inf n E 0 ( n ) ( ρ , Q ( n ) ) lim n E 0 ( n ) , * ( ρ ) ( 102 ) = ρ R 0 ( ρ ) .

5. The Direct Part of Theorem 2

In this section we prove the direct part of Theorem 2: when the decoder can be provided with a rate- R h description of the noise, the convergence (19) can be achieved at all transmission rates below R 0 ( ρ ) + R h . As noted earlier, the converse follows directly from (Remark 4 in [3]).
Our proof treats the cases R h = 0 and R h > 0 separately. As in Section 4, we assume that the channel is normalized to having noise variance 1 and transmit power A .

5.1. Case 1: R h = 0

The analogous result for the modulo-additive channel was proved in [3] by having the helper provide the decoder with a lossless description of the type of the noise sequence. Since this type fully specifies the a posteriori probability of the transmitted message, the decoder’s remotely-plausible-with-this-help list L ( Y , T ) contains only messages whose a posteriori probability is equal to that of the correct message. It is therefore a subset of the at-least-as-likely list L ( M , Y ) (without help) and hence of smaller-or-equal ρ -th moment. Consequently, any rate that allows the latter to tend to one, also allows the former to tend to one.
On the Gaussian channel the likelihood w ( y | x m ) is specified by the normalized squared Euclidean norm of the noise sequence z 2 / n . The latter, however, cannot be described at zero rate with infinite precision. This motivates us to quantize it and have the quantized version be the zero-rate help. The result will then follow by considering the high-resolution limit of the achievable rates. For this purpose, a uniform quantizer will do.
Given some large M > 0 (which determines the overload region) and some large K (corresponding to the number of quantization cells), we partition the interval [ 0 , M ] into K subintervals, each of length Δ = M / K . The helper, upon observing the noise sequence Z , produces
T = ϕ ( n ) ( Z ) = Z 2 / ( n Δ ) if Z 2 / n < M , K otherwise .
The constant M, which does not depend on the blocklength n, is chosen large enough to guarantee that the large-deviation probability of overload Pr [ Z 2 / n M ] decay sufficiently fast in n so that the contribution of the overload to the ρ -th moment of the list be negligible, even if an overload results in the list containing all e n R codewords:
lim n e n ρ R · Pr n 1 Z 2 M = 0 .
(Upper bounds on the tail of the χ 2 distribution show, for example, that for R < R 0 ( ρ ) , the choice M = max { 2 , 20 ρ R 0 ( ρ ) } will do.) Since the help takes values in the finite set T n = { 0 , 1 , , K } , where K does not depend on the blocklength, it is of zero rate.
As in Section 4.3, we consider a random codebook { X m } m = 1 , , e n R whose codewords are drawn independently from the conditional Gaussian distribution, i.e., from Q ( n ) defined in (35) and (36) with Q being Q G , the centered variance- A Gaussian distribution. Using the same symmetry arguments, we also assume that the transmitted message is m = 1 and study the ρ -th moment of the list under this assumption. Defining
V m ( x 1 , y ) = 𝟙 ϕ ( n ) ( y X m ) = ϕ ( n ) ( y x 1 ) , x 1 , y R n ,
we can express the ρ -th moment of the remotely-plausible list when m = 1 as
E | L ( Y , T ) | ρ = E 1 + m 1 V m ( X 1 , Y ) ρ .
In view of Lemma 1, we need to prove that
lim n E m 1 V m ( X 1 , Y ) ρ = 0 ,
where the expectation is over both the random choice of the codebook and the channel behavior.
To analyze the LHS of (107), we define for every x 1 , y R n and every message m 1 the binary random variable
B m ( x 1 , y ; Δ ) = 𝟙 w ( y | X m ) w ( y | x 1 ) · e n Δ 2 .
Our analysis of V m ( x 1 , y ) depends on whether ϕ ( n ) ( y x 1 ) differs from K (no overload) or equals K (corresponding to quantizer overload). In the former case, the random variable V m ( x 1 , y ) can be upper bounded by B m ( x 1 , y ; Δ ) because
( 109 ) V m ( x 1 , y ) = 𝟙 ϕ ( n ) ( y X m ) = ϕ ( n ) ( y x 1 ) ( 110 ) 𝟙 | y X m 2 y x 1 2 | < n Δ ( 111 ) 𝟙 y X m 2 y x 1 2 + n Δ ( 112 ) = 𝟙 e y X m 2 2 e y x 1 2 + n Δ 2 ( 113 ) = B m ( x 1 , y ; Δ ) ,
where (110) holds because, for the case at hand, the equality of helper’s description implies that y X m 2 and y x 1 2 lie in a same interval of length n Δ . In the latter case—which is exponentially rare when M exceeds the noise variance—we simply upper bound V m ( x 1 , y ) by 1.
The ρ -th moment of the list can now be expressed using the law of total expectation as
E m 1 V m ( X 1 , Y ) ρ ( 114 ) = E m 1 V m ( X 1 , Y ) ρ | T K Pr [ T K ] + E m 1 V m ( X 1 , Y ) ρ | T = K Pr [ T = K ] ( 115 ) E m 1 B m ( X 1 , Y ; Δ ) ρ | T K Pr [ T K ] + e n ρ R Pr [ T = K ] ( 116 ) E m 1 B m ( X 1 , Y ; Δ ) ρ + e n ρ R Pr [ T = K ] .
The second term on the RHS of (116) tends to zero by (104). The first term is studied in the following lemma:
Lemma 3.
If ρ > 0 , Δ > 0 , and R < R 0 ( ρ ) Δ , then
lim n E m 1 B m ( X 1 , Y ; Δ ) ρ = 0 .
Proof. 
See Appendix B. □
For a given R < R 0 ( ρ ) , achievability is thus established using this lemma and (116) by picking M sufficiently large for (104) to hold, and then picking K large enough to guarantee that R < R 0 ( ρ ) M / K so that, by Lemma 3, the first term on the RHS of (116) will also tend to zero.

5.2. Case 2: R h > 0

The key to proving the achievability of R c u t o f f ( ρ ) + R h is in showing that rate- R h help can be utilized to increase the data rate by R h , and that this can be done losslessly, with arbitrarily small (positive) power, and in one channel use. To show how this can be done, we show that—by using the channel once to send a single input that is bounded by A (with A any prespecified positive number) and using help taking values in the set T = { 0 , , κ 1 } —we can send error-free a message taking values in said set. To transmit m { 0 , , κ 1 } , the encoder sends
x = m · A κ ,
which is upper-bounded by A . Upon observing the noise Z, the helper produces the description T by quantizing the normalized noise and taking modulo, i.e.,
T = Z · κ A mod κ ,
which is an element of { 0 , , κ 1 } . Based on Y and T, the decoder can calculate
m ^ = Y · κ A T mod κ ,
which equals m, because
( 121 ) m ^ = x + Z · κ A T mod κ ( 122 ) = m + Z · κ A T mod κ ( 123 ) = m + Z · κ A T mod κ ( 124 ) = m ,
where (123) holds because m and T are both integers.
Using this building-block, we can now prove the achievability of R c u t o f f ( ρ ) + R h by employing two-phase time sharing. Specifically, we propose the following blocklength- ( n + 1 ) scheme. In the first n channel uses, the helper operates at rate zero as in Section 5.1. By the achievability result proved in Section 5.1, for any R < R 0 ( ρ ) , there exists a sequence of blocklength-n rate-R codebooks { x m } m = 1 , , e n R , with x m 2 ( n 1 ) A for every m, and zero-rate helpers ϕ ( Z n ) , such that the remotely-plausible-list L ( Y n , ϕ ( Z n ) ) satisfies
lim n E | L ( Y n , ϕ ( Z n ) ) | ρ = 1 .
In the ( n + 1 ) -th channel-use we use the aforementioned coding scheme with κ being e n R h . Since that scheme is error-free, the overall remotely-plausible-list for the two phases has the same cardinality as that of the first phase, namely | L ( Y n , ϕ ( Z n ) ) | , and hence, its ρ -th moment tends to 1 by (125).
The achievability now follows by verifying that, the power of the transmitted input sequence x satisfies
x 2 = x n 2 + x n + 1 2 n A + A = ( n + 1 ) A ;
the rate of the helper is
1 n + 1 0 + n R h
and the rate achieved by the scheme is
1 n + 1 n R 0 ( ρ ) + n R h
which tend to R h and R 0 ( ρ ) + R h , respectively, as n tends to infinity.

Author Contributions

Writing—original draft preparation, A.L. and Y.Y.; writing—review and editing, A.L. and Y.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Proof of Lemma 2

We shall establish that the expectation
( A 1 ) E E m 1 B m ( x 1 , y ) | X 1 = x 1 , Y = y ρ ( A 2 ) = y R n x 1 R n E m 1 B m ( x 1 , y ) | X 1 = x 1 , Y = y ρ w ( y | x 1 ) d Q ( n ) ( x 1 ) d ν ( y )
tends to zero as n tends to infinity whenever R < R 0 ( ρ ) .
First notice that conditional on the transmitted codeword x 1 and the channel output y , the random variables { B m } m 1 are IID Bernoulli, with B m determined by X m and being of probability of success
( A 3 ) p ( x 1 , y ) = Pr w ( y | X m ) w ( y | x 1 ) ( A 4 ) = Pr w ( y | X m ) 1 1 + ρ w ( y | x 1 ) 1 1 + ρ ( A 5 ) w ( y | x 1 ) 1 1 + ρ E w ( y | X m ) 1 1 + ρ ,
where the last inequality follows from Markov’s inequality. Thus
E E m 1 B m ( x 1 , y ) | X 1 = x 1 , Y = y ρ ( A 6 ) = y R n x 1 R n E m 1 B m ( x 1 , y ) | X = x 1 , Y = y ρ w ( y | x 1 ) d Q ( n ) ( x 1 ) d ν ( y ) ( A 7 ) e n ρ R y R n x 1 R n p ( x 1 , y ) ρ w ( y | x 1 ) d Q ( n ) ( x 1 ) d ν ( y ) ( A 8 ) e n ρ R y R n x 1 R n E w ( y | X m ) 1 1 + ρ ρ w ( y | x 1 ) 1 1 + ρ d Q ( n ) ( x 1 ) d ν ( y ) . ( A 9 ) = e n ρ R y R n E w ( y | X m ) 1 1 + ρ ρ x 1 R n w ( y | x 1 ) 1 1 + ρ d Q ( n ) ( x 1 ) d ν ( y ) ( A 10 ) = e n ρ R y R n x R n w ( y | x ) 1 1 + ρ d Q ( n ) ( x ) 1 + ρ d ν ( y ) ( A 11 ) e r δ μ 1 + ρ e n ρ R y R x R e r ( x 2 A ) w ( y | x ) 1 1 + ρ d Q G ( x ) 1 + ρ d ν ( y ) n ( A 12 ) = e r δ μ 1 + ρ e n ρ R e n E 0 , m ( ρ , Q G , r ) ,
where (A11) follows from the upper bound (39) on the Radon–Nykodim derivative and holds for every r 0 . Choosing r as r that achieves E 0 , m * ( ρ , Q G ) (cf. (31)), we obtain
( A 13 ) E E m 1 B m ( x 1 , y ) | X 1 = x 1 , Y = y ρ e r δ μ 1 + ρ e n ρ R e n E 0 , m * ( ρ , Q G ) ( A 14 ) = e r δ μ 1 + ρ e n ρ ( R R 0 ( ρ ) ) ,
where the last equality follows from (74).
The Central Limit Theorem guarantees that, as n tends to infinity, μ approaches 1 / 2 . Consequently, the RHS of (A14) tends to zero whenever R < R 0 ( ρ ) .

Appendix B. Proof of Lemma 3

To prove the lemma, we shall establish that, whenever R < R 0 ( ρ ) Δ ,
lim n E E m 1 B m ( x 1 , y ; Δ ) | X 1 = x 1 , Y = y ρ = 0 ,
where the outer expectation is over X 1 and Y . From this (117) will follow in much the same way that (88) followed from (90) in Section 4.3.
To establish (A15), first note that, conditional on the transmitted codeword x 1 and the channel output y , the random variables { B m ( x 1 , y ; Δ ) } m 1 are IID Bernoulli, with B m determined by X m and being of probability of success
( A 16 ) p ( x 1 , y ; Δ ) = Pr w ( y | X m ) w ( y | x 1 ) e n Δ 2 ( A 17 ) = Pr w ( y | X m ) 1 1 + ρ w ( y | x 1 ) 1 1 + ρ e n Δ 2 ( 1 + ρ ) ( A 18 ) w ( y | x 1 ) 1 1 + ρ e n Δ 2 ( 1 + ρ ) E w ( y | X m ) 1 1 + ρ ,
where the last inequality follows from Markov’s inequality. Consequently,
E E m 1 B m ( x 1 , y ; Δ ) | X 1 = x 1 , Y = y ρ ( A 19 ) = y R n x 1 R n E m 1 B m ( x 1 , y ; Δ ) | X = x 1 , Y = y ρ w ( y | x 1 ) d Q ( n ) ( x 1 ) d ν ( y ) ( A 20 ) e n ρ R y R n x 1 R n p ( x 1 , y ; Δ ) ρ w ( y | x 1 ) d Q ( n ) ( x 1 ) d ν ( y ) ( A 21 ) e n ρ R e n ρ Δ 2 ( 1 + ρ ) y R n x 1 R n E w ( y | X m ) 1 1 + ρ ρ w ( y | x 1 ) 1 1 + ρ d Q ( n ) ( x 1 ) d ν ( y ) ( A 22 ) < e n ρ Δ e n ρ R y R n x 1 R n E w ( y | X m ) 1 1 + ρ ρ w ( y | x 1 ) 1 1 + ρ d Q ( n ) ( x 1 ) d ν ( y )
where (A22) holds because ρ , Δ > 0 so n ρ Δ / 2 ( 1 + ρ ) < n ρ Δ .
Except for the e n ρ Δ factor, the RHS of (A22) is identical to the RHS of (A8), which was shown to decay at least as fast as e n ρ ( R R 0 ( ρ ) ) ; see (A14). It follows that the RHS of (A22) tends to zero whenever R + Δ < R 0 ( ρ ) .

References

  1. Gallager, R.G. Information Theory and Reliable Communication; John Wiley & Sons, Inc.: Hoboken, NJ, USA, 1968. [Google Scholar]
  2. Verdú, S. Error exponents and α-mutual information. Entropy 2021, 23, 199. [Google Scholar] [CrossRef] [PubMed]
  3. Lapidoth, A.; Marti, G.; Yan, Y. Other helper capacities. In Proceedings of the 2021 IEEE International Symposium on Information Theory (ISIT), Victoria, Australia, 12–20 July 2021; pp. 1272–1277. [Google Scholar] [CrossRef]
  4. Cover, T.M. Elements of Information Theory, 2nd ed.; John Wiley & Sons, Inc.: Hoboken, NJ, USA, 2006. [Google Scholar]
  5. Kim, Y. Capacity of a class of deterministic relay channels. IEEE Trans. Inf. Theory 2008, 54, 1328–1329. [Google Scholar] [CrossRef] [Green Version]
  6. Bross, S.I.; Lapidoth, A.; Marti, G. Decoder-assisted communications over additive noise channels. IEEE Trans. Commun. 2020, 68, 4150–4161. [Google Scholar] [CrossRef]
  7. Lapidoth, A.; Marti, G. Encoder-assisted communications over additive noise channels. IEEE Trans. Inf. Theory 2020, 66, 6607–6616. [Google Scholar] [CrossRef]
  8. Merhav, N. On error exponents of encoder-assisted communication systems. IEEE Trans. Inf. Theory 2021, 67, 7019–7029. [Google Scholar] [CrossRef]
  9. Bunte, C.; Lapidoth, A. Encoding tasks and Rényi entropy. IEEE Trans. Inf. Theory 2014, 60, 5065–5076. [Google Scholar] [CrossRef] [Green Version]
  10. Pinsker, M.S.; Sheverdjaev, A.Y. Transmission capacity with zero error and erasure. Probl. Peredachi Informatsii 1970, 6, 20–24. [Google Scholar]
  11. Csiszar, I.; Narayan, P. Channel capacity for a given decoding metric. IEEE Trans. Inf. Theory 1995, 41, 35–43. [Google Scholar] [CrossRef]
  12. Telatar, I.E. Zero-error list capacities of discrete memoryless channels. IEEE Trans. Inf. Theory 1997, 43, 1977–1982. [Google Scholar] [CrossRef]
  13. Ahlswede, R.; Cai, N.; Zhang, Z. Erasure, list, and detection zero-error capacities for low noise and a relation to identification. IEEE Trans. Inf. Theory 1996, 42, 55–62. [Google Scholar] [CrossRef]
  14. Bunte, C.; Lapidoth, A.; Samorodnitsky, A. The zero-undetected-error capacity approaches the Sperner capacity. IEEE Trans. Inf. Theory 2014, 60, 3825–3833. [Google Scholar] [CrossRef] [Green Version]
  15. Nakiboğlu, B.; Zheng, L. Errors-and-erasures decoding for block codes with feedback. IEEE Trans. Inf. Theory 2012, 58, 24–49. [Google Scholar] [CrossRef] [Green Version]
  16. Bunte, C.; Lapidoth, A. The zero-undetected-error capacity of discrete memoryless channels with feedback. In Proceedings of the 2012 50th Annual Allerton Conference on Communication, Control, and Computing (Allerton), Monticello, IL, USA, 1–5 October 2012; pp. 1838–1842. [Google Scholar] [CrossRef] [Green Version]
  17. Bunte, C.; Lapidoth, A. On the listsize capacity with feedback. IEEE Trans. Inf. Theory 2014, 60, 6733–6748. [Google Scholar] [CrossRef] [Green Version]
  18. Lapidoth, A.; Miliou, N. Duality bounds on the cutoff rate with applications to Ricean fading. IEEE Trans. Inf. Theory 2006, 52, 3003–3018. [Google Scholar] [CrossRef] [Green Version]
  19. Pfister, C. On Rényi Information Measures and Their Applications. Ph.D. Thesis, ETH Zurich, Zurich, Switzerland, 2019. [Google Scholar]
  20. Rosenthal, H.P. On the subspaces of Lp (p > 2) spanned by sequences of independent random variables. Isr. J. Math. 1970, 8, 273–303. [Google Scholar] [CrossRef]
  21. Arıkan, E. An inequality on guessing and its application to sequential decoding. IEEE Trans. Inf. Theory 1996, 42, 99–105. [Google Scholar] [CrossRef] [Green Version]
Figure 1. Gaussian channel with decoder assistance.
Figure 1. Gaussian channel with decoder assistance.
Entropy 24 00029 g001
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

MDPI and ACS Style

Lapidoth, A.; Yan, Y. The Listsize Capacity of the Gaussian Channel with Decoder Assistance. Entropy 2022, 24, 29. https://doi.org/10.3390/e24010029

AMA Style

Lapidoth A, Yan Y. The Listsize Capacity of the Gaussian Channel with Decoder Assistance. Entropy. 2022; 24(1):29. https://doi.org/10.3390/e24010029

Chicago/Turabian Style

Lapidoth, Amos, and Yiming Yan. 2022. "The Listsize Capacity of the Gaussian Channel with Decoder Assistance" Entropy 24, no. 1: 29. https://doi.org/10.3390/e24010029

Note that from the first issue of 2016, this journal uses article numbers instead of page numbers. See further details here.

Article Metrics

Back to TopTop