Article

Distributed Hypothesis Testing over a Noisy Channel: Error-Exponents Trade-Off

by Sreejith Sreekumar 1,* and Deniz Gündüz 2
1 Department of Electrical and Computer Engineering, Cornell University, Ithaca, NY 14850, USA
2 Department of Electrical and Electronic Engineering, Imperial College London, London SW7 2AZ, UK
* Author to whom correspondence should be addressed.
Entropy 2023, 25(2), 304; https://doi.org/10.3390/e25020304
Submission received: 28 December 2022 / Revised: 25 January 2023 / Accepted: 31 January 2023 / Published: 6 February 2023
(This article belongs to the Special Issue Information Theory for Distributed Systems)

Abstract: A two-terminal distributed binary hypothesis testing problem over a noisy channel is studied. The two terminals, called the observer and the decision maker, each has access to n independent and identically distributed samples, denoted by U and V , respectively. The observer communicates to the decision maker over a discrete memoryless channel, and the decision maker performs a binary hypothesis test on the joint probability distribution of ( U , V ) based on V and the noisy information received from the observer. The trade-off between the exponents of the type I and type II error probabilities is investigated. Two inner bounds are obtained, one using a separation-based scheme that involves type-based compression and unequal error-protection channel coding, and the other using a joint scheme that incorporates type-based hybrid coding. The separation-based scheme is shown to recover the inner bound obtained by Han and Kobayashi for the special case of a rate-limited noiseless channel, and also the one obtained by the authors previously for a corner point of the trade-off. Finally, we show via an example that the joint scheme achieves a strictly tighter bound than the separation-based scheme for some points of the error-exponents trade-off.

1. Introduction

Hypothesis testing (HT), which refers to the problem of deciding among two or more alternatives based on available data, plays a central role in statistics and information theory. Distributed HT (DHT) problems arise in situations where the test data are scattered across multiple terminals and need to be communicated to a central terminal, called the decision maker, which performs the hypothesis test. The need to jointly optimize the communication scheme and the hypothesis test makes DHT problems much more challenging than their centralized counterparts. Indeed, while an efficient characterization of the optimal hypothesis test and its asymptotic performance is well known in the centralized setting, thanks to [1,2,3,4,5], the same problem in even the simplest distributed setting remains open, except for some special cases (see [6,7,8,9,10,11]).
In this work, we consider a DHT problem with two parties, an observer and a decision maker, such that the former communicates to the latter over a noisy channel. The observer and the decision maker each has access to independent and identically distributed samples, denoted by U and V , respectively. Based on the information received from the observer and its own observations V , the decision maker performs a binary hypothesis test on the joint distribution of ( U , V ) . Our goal is to characterize the trade-off between the best achievable rate of decay (or exponent) of the type I and type II error probabilities with respect to the sample size. We will refer to this problem as DHT over a noisy channel, and its special instance with the noisy channel replaced by a rate-limited noiseless channel as DHT over a noiseless channel.

1.1. Background

Distributed statistical inference problems were first conceived in [12], and DHT over a noiseless channel was first studied from an information-theoretic perspective in [6], where the objective is to characterize Stein’s exponent κ se ( ϵ ) , i.e., the optimal type II error-exponent subject to the type I error probability being at most ϵ ∈ ( 0 , 1 ) . The authors therein established a multi-letter characterization of this quantity, including a strong converse, which shows that κ se ( ϵ ) is independent of ϵ. Furthermore, a single-letter characterization of κ se ( ϵ ) was obtained for a special case of HT known as testing against independence (TAI), in which the joint distribution factors as a product of the marginal distributions under the alternative hypothesis. Improved lower bounds on κ se ( ϵ ) were subsequently obtained in [7,8], and the strong converse was extended to zero-rate settings [13]. While all the aforementioned works focus on κ se ( ϵ ) , the trade-off between the exponents of both the type I and type II error probabilities in the same setting was first explored in [14].
In recent years, there has been renewed interest in distributed statistical inference problems, motivated by emerging machine learning applications to be served at the wireless edge, particularly in the context of semantic communications in 5G/6G communication systems [15,16]. Several extensions of DHT over a noiseless channel have been studied, such as generalizations to multi-terminal settings [9,17,18,19,20,21], DHT under security or privacy constraints [22,23,24,25], DHT with lossy compression [26], interactive settings [27,28], successive refinement models [29], and more. Improved bounds have been obtained on the type I and type II error-exponents region [30,31], and on κ se ( ϵ ) for testing correlation between bivariate standard normal distributions [32]. In the simpler zero-rate communication setting, there has been some progress in terms of second-order optimal schemes [33], a geometric interpretation of the type I and type II error-exponent region [34], and a characterization of κ se ( ϵ ) for sequential HT [35]. DHT over noisy communication channels with the goal of characterizing κ se ( ϵ ) has been considered in [10,11,36,37].

1.2. Contributions

In this work, our objective is to explore the trade-off between the type I and type II error-exponents for DHT over a noisy channel. This problem is a generalization of [14] from noiseless rate-limited channels to noisy channels, and also of [10,11] from a type I error probability constraint to a positive type I error-exponent constraint.
Our main contributions can be summarized as follows:
(i)
We obtain an inner bound (Theorem 1) on the error-exponents trade-off by using a separate HT and channel coding scheme (SHTCC) that combines a type-based quantize-bin strategy (a type refers to the empirical probability distribution of a sequence; see [38]) with the unequal error-protection channel coding scheme of [39]. This result is shown to recover the bounds established in [10,14]. Furthermore, we evaluate Theorem 1 for two important instances of DHT, namely TAI and its opposite, i.e., testing against dependence (TAD), in which the joint distribution under the null hypothesis factors as a product of marginal distributions.
(ii)
We also obtain a second inner bound (Theorem 2) on the error-exponents trade-off by using a joint HT and channel coding scheme (JHTCC) based on hybrid coding [40]. Subsequently, we show via an example that the JHTCC scheme strictly outperforms the SHTCC scheme for some points on the error-exponent trade-off.
While the above schemes are inspired by those in [10], which were proposed with the goal of maximizing the type II error-exponent, novel modifications in their design and analysis are required when considering both of the error-exponents. More specifically, the schemes presented here perform separate quantization-binning or hybrid coding on each individual source sequence type at the observer/encoder (as opposed to a typical ball in [10]), with the corresponding reverse operation implemented at the decision-maker/decoder. This necessitates a different analysis to compute the probabilities of the various error events contributing to the overall error-exponents. We finally mention that the DHT problem considered here was recently investigated in [41], where an inner bound on the error-exponents trade-off (Theorem 2 in [41]) is obtained using a combination of a type-based quantization scheme and the unequal error-protection scheme of [42] with two special messages. A qualitative comparison between Theorem 2 and Theorem 2 in [41] seems to suggest that the JHTCC scheme here uses a stronger decoding rule that depends jointly on the source-channel statistics. In comparison, the metric used at the decoder for the scheme in [41] factors as the sum of two metrics, one of which depends only on the source statistics and the other only on the channel statistics. Importantly, this hints that the inner bound achieved by the JHTCC scheme is not subsumed by that in [41]. That said, a direct computational comparison appears difficult, as evaluating the latter requires optimization over several parameters, as mentioned in the last paragraph of [41].
The remainder of the paper is organized as follows. Section 2 formulates the operational problem along with the required definitions. The main results are presented in Section 3. The proofs are furnished in Section 4. Finally, concluding remarks are given in Section 5.

2. Preliminaries

2.1. Notation

We use the following notation. All logarithms are with respect to the natural base e. N , R , R 0 , and R ¯ denote the sets of natural, real, non-negative real, and extended real numbers, respectively. For a , b ∈ R 0 , [ a : b ] : = { n ∈ N : a ≤ n ≤ b } and [ b ] : = [ 1 : b ] . Calligraphic letters, e.g., X , denote sets, while X c and | X | stand for its complement and cardinality, respectively. For n ∈ N , X n denotes the n-fold Cartesian product of X , and x n = ( x 1 , … , x n ) denotes an element of X n . Bold-face letters denote vectors or sequences, e.g., x for x n ; its length n will be clear from the context. For i , j ∈ N such that i ≤ j , x i j : = ( x i , x i + 1 , … , x j ) ; the subscript is omitted when i = 1 . 𝟙 A denotes the indicator of set A . For a real sequence { a n } n ∈ N , a n → b ( n → ∞ ) stands for lim n → ∞ a n = b , while a n ≲ b denotes lim n → ∞ a n ≤ b . Similar notations apply for other inequalities. O ( · ) , Ω ( · ) and o ( · ) denote standard asymptotic notations.
Random variables and their realizations are denoted by uppercase and lowercase letters, respectively, e.g., X and x. Similar conventions apply for random vectors and their realizations. The set of all probability mass functions (PMFs) on a finite set X is denoted by P ( X ) . The joint PMF of two discrete random variables X and Y is denoted by P X Y ; the corresponding marginals are P X and P Y . The conditional PMF of X given Y is represented by P X | Y . Expressions such as P X Y = P X P Y | X are to be understood as pointwise equality, i.e., P X Y ( x , y ) = P X ( x ) P Y | X ( y | x ) , for all ( x , y ) X × Y . When the joint distribution of a triple ( X , Y , Z ) factors as P X Y Z = P X Y P Z | X , these variables form a Markov chain X Y Z . When X and Y are statistically independent, we write X Y . If the entries of X n are drawn in an independent and identically distributed manner, i.e., if P X n ( x ) = i = 1 n P X ( x i ) , x X n , then the PMF P X n is denoted by P X n . Similarly, if P Y n | X n ( y | x ) = i = 1 n P Y | X ( y i | x i ) for all ( x , y ) X n × Y n , then we write P Y | X n for P Y n | X n . The conditional product PMF given a fixed x X n is designated by P Y | X n ( · | x ) . The probability measure induced by a PMF P is denoted by P P . The corresponding expectation is designated by E P .
The type or empirical PMF of a sequence x X n is designated by P x , i.e., P x ( x ) : = 1 n i = 1 n 𝟙 { x i = x } . The set of n-length sequences x X n of type P X is T n ( P X , X n ) : = { x X n : P x = P X } . Whenever the underlying alphabet X n is clear from the context, T n ( P X , X n ) is simplified to T n ( P X ) . The set of all possible types of n-length sequences x X n is T ( X n ) : = P X P ( X ) : | T n ( P X , X n ) | 1 . Similar notations are used for larger combinations, e.g., P x y , T n ( P X Y , X × Y ) and T ( X n × Y n ) . For a given x T n ( P X , X n ) and a conditional PMF P Y | X , T n ( P Y | X , x ) : = { y Y n : ( x , y ) T n ( P X Y , X n × Y n ) } stands for the P Y | X -conditional type class of x .
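For concreteness, the following minimal Python sketch computes the type (empirical PMF) of a sequence and checks membership in a type class; the alphabet, sequences, and function names are illustrative choices of ours and not part of the formal development.

```python
from collections import Counter
from fractions import Fraction

def empirical_type(x, alphabet):
    """Return the type P_x of a sequence x as a dict of exact fractions."""
    n = len(x)
    counts = Counter(x)
    return {a: Fraction(counts.get(a, 0), n) for a in alphabet}

def in_type_class(x, P_X, alphabet):
    """Check whether x belongs to the type class T_n(P_X)."""
    return empirical_type(x, alphabet) == P_X

# Toy example over the binary alphabet {0, 1}.
alphabet = (0, 1)
x = (0, 1, 1, 0, 1, 0, 0, 0)
P_x = empirical_type(x, alphabet)            # {0: 5/8, 1: 3/8}
print(P_x)
print(in_type_class(x, {0: Fraction(5, 8), 1: Fraction(3, 8)}, alphabet))  # True
```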
For PMFs P , Q P ( X ) , the Kullback–Leibler (KL) divergence between P and Q is D P | | Q : = x X P ( x ) log P ( x ) / Q ( x ) . The conditional KL divergence between P Y | X and Q Y | X given P X is D P Y | X | | Q Y | X | P X : = x X P X ( x ) D P Y | X ( · | x ) | | Q Y | X ( · | x ) . The mutual information and entropy terms are denoted by I P ( · ) and H P ( · ) , respectively, where P denotes the PMF of the relevant random variables. When the PMF is clear from the context, the subscript is omitted. For ( x , y ) X n × Y n , the empirical conditional entropy of y given x is H e ( y | x ) : = H P ( Y ˜ | X ˜ ) , where P X ˜ Y ˜ = P x y . For a given function f : Z R and a random variable Z P Z , the log-moment generating function of Z with respect to f is ψ P Z , f ( λ ) : = log E P Z [ e λ f ( Z ) ] whenever the expectation exists. Finally, let
ψ P Z , f * ( θ ) : = sup λ ∈ R [ θ λ − ψ P Z , f ( λ ) ] ,
denote the rate function (see, e.g., Definition 15.5 in [43]).
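As a numerical illustration of the log-moment generating function and the rate function in (1), the following Python sketch approximates ψ P Z , f * ( θ ) by a grid search over λ for a Bernoulli Z with f the identity map, in which case the rate function should coincide with the binary KL divergence; this is a toy example with parameters of our choosing.

```python
import numpy as np

def log_mgf(lam, pmf, f_vals):
    """psi_{P_Z,f}(lambda) = log E[exp(lambda * f(Z))]."""
    return np.log(np.sum(pmf * np.exp(lam * f_vals)))

def rate_function(theta, pmf, f_vals, lam_grid):
    """psi*(theta) = sup_lambda [theta*lambda - psi(lambda)], approximated on a grid."""
    return max(theta * lam - log_mgf(lam, pmf, f_vals) for lam in lam_grid)

# Toy example: Z ~ Bernoulli(p), f(z) = z, so psi*(theta) should equal
# the binary KL divergence d(theta || p) (natural logarithms).
p = 0.3
pmf = np.array([1 - p, p])
f_vals = np.array([0.0, 1.0])
lam_grid = np.linspace(-50, 50, 20001)

theta = 0.6
approx = rate_function(theta, pmf, f_vals, lam_grid)
exact = theta * np.log(theta / p) + (1 - theta) * np.log((1 - theta) / (1 - p))
print(approx, exact)   # the two values should agree up to grid resolution
```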

2.2. Problem Formulation

Let U , V , X and Y be finite sets, and n ∈ N . The DHT over a noisy channel setting is depicted in Figure 1. Herein, the observer and the decision maker observe n independent and identically distributed samples, denoted by u and v , respectively. Based on its observations u , the observer outputs a sequence x ∈ X n as the channel input sequence (note that the ratio of the number of channel uses to the number of data samples, termed the bandwidth ratio, is taken to be 1 for simplicity; however, our results easily generalize to arbitrary bandwidth ratios). The discrete memoryless channel (DMC) with transition kernel P Y | X produces a sequence y ∈ Y n according to the probability law P Y | X n ( · | x ) as its output. We will assume that P Y | X ( · | x ) ≪ P Y | X ( · | x ′ ) for all ( x , x ′ ) ∈ X 2 , where P ≪ Q indicates the absolute continuity of P with respect to Q. Based on its observations, y and v , the decision maker performs binary HT on the joint probability distribution of ( U , V ) with the null ( H 0 ) and alternative ( H 1 ) hypotheses given by
H 0 : ( U , V ) ∼ P U V n ,
H 1 : ( U , V ) ∼ Q U V n .
The decision maker outputs h ^ H ^ : = { 0 , 1 } as the decision of the hypothesis test, where 0 and 1 denote H 0 and H 1 , respectively.
A length-n DHT code c n is a pair of functions ( f n , g n ) , where
(i)
f n : U n → P ( X n ) denotes the encoding function;
(ii)
g n : V n × Y n → H ^ denotes a deterministic decision function specified by an acceptance region (for the null hypothesis H 0 ) A n ⊆ V n × Y n as g n ( v , y ) = 1 − 𝟙 { ( v , y ) ∈ A n } , ( v , y ) ∈ V n × Y n .
We emphasize at this point that there is no loss of generality in restricting our attention to a deterministic decision function for the objective of characterizing the error-exponents trade-off in HT (see, e.g., Lemma 3 in [24]).
A code c n = ( f n , g n ) induces the joint PMFs P U V X Y H ^ ( c n ) and Q U V X Y H ^ ( c n ) under the null and alternative hypotheses, respectively, where
P U V X Y H ^ ( c n ) ( u , v , x , y , h ^ ) : = P U V n ( u , v ) f n ( x | u ) P Y | X n ( y | x ) 𝟙 g n ( v , y ) = h ^ ,
and
Q U V X Y H ^ ( c n ) ( u , v , x , y , h ^ ) : = Q U V n ( u , v ) f n ( x | u ) P Y | X n ( y | x ) 𝟙 g n ( v , y ) = h ^ ,
respectively. For a given code c n , the type I and type II error probabilities are α n ( c n ) : = P P ( c n ) ( H ^ = 1 ) and β n ( c n ) : = P Q ( c n ) ( H ^ = 0 ) respectively. The following definition formally states the error-exponents trade-off we aim to characterize.
Definition 1 
(Error-exponent region). An error-exponent pair ( κ α , κ β ) ∈ R 0 2 is said to be achievable if there exists a sequence of codes { c n } n ∈ N such that
lim inf n → ∞ − 1 n log α n ( c n ) ≥ κ α ,
lim inf n → ∞ − 1 n log β n ( c n ) ≥ κ β .
The error-exponent region R ¯ is the closure of the set of all achievable error-exponent pairs ( κ α , κ β ) . Set R : = { ( κ α , κ ( κ α ) ) : κ α ∈ ( 0 , κ ¯ α ) } , where κ ¯ α = inf { κ α : κ ( κ α ) = 0 } and κ ( κ α ) : = sup { κ β : ( κ α , κ β ) ∈ R ¯ } .
We are interested in a computable characterization of R , which pertains to the region of positive error-exponents (i.e., excluding the boundary points corresponding to Stein’s exponent). To this end, we present two inner bounds on R in the next section.

3. Main Results

In this section, we obtain two inner bounds on R , first using a separation-based scheme which performs independent HT and channel coding, termed the SHTCC scheme, and the second via a joint HT and channel coding scheme that uses hybrid coding for communication between the observer and the decision maker.

3.1. Inner Bound on R via SHTCC Scheme

Let S = X and P S X Y = P S X P Y | X be a PMF under which S X Y forms a Markov chain. For x X , let Λ x , P S X Y ( y ) : = log P Y | X = x ( y ) / P Y | S = x ( y ) and define
E sp ( P S X , θ ) : = s S P S ( s ) ψ P Y | S = s , Λ s , P S X Y * ( θ ) ,
where the rate function ψ * is defined in (1). For a fixed P S X and R 0 , let
E ex ( R , P S X ) : = max ρ ≥ 1 { − ρ R − ρ log ( ∑ s , x , x ˜ P S ( s ) P X | S ( x | s ) P X | S ( x ˜ | s ) [ ∑ y ( P Y | X ( y | x ) P Y | X ( y | x ˜ ) ) 1 / 2 ] 1 / ρ ) } ,
denote the expurgated exponent [38,44]. Let W be a finite set and F denote the set of all continuous mappings from P ( U ) to P ( W | U ) , where P ( W | U ) is the set of all conditional distributions P W | U . Set θ l ( P S X ) : = − ∑ s ∈ S P S ( s ) D ( P Y | S = s | | P Y | X = s ) , θ u ( P S X ) : = ∑ s ∈ S P S ( s ) D ( P Y | X = s | | P Y | S = s ) , Θ ( P S X ) : = [ θ l ( P S X ) , θ u ( P S X ) ] . Denote an arbitrary element of F × R 0 × P ( S × X ) × Θ ( P S X ) by ( ω , R , P S X , θ ) , and set
L ( κ α ) : = ( ω , R , P S X , θ ) : ζ ( κ α , ω ) ρ ( κ α , ω ) R < I P ( X ; Y | S ) , P S X Y = P S X P Y | X min E sp ( P S X , θ ) , E ex R , P S X , E b ( κ α , ω , R ) κ α ,
L ^ ( κ α , ω ) : = P U ^ V ^ W ^ : D P U ^ V ^ W ^ | | P U V W ^ κ α , P W ^ | U ^ = ω ( P U ^ ) , P U V W ^ = P U V P W ^ | U ^ , E b ( κ α , ω , R ) : = R ζ ( κ α , ω ) + ρ ( κ α , ω ) , if   0 R < ζ ( κ α , ω ) , , otherwise ,
ζ ( κ α , ω ) : = max P U ^ W ^ : P V ^ , P U ^ V ^ W ^ L ^ ( κ α , ω ) I P ( U ^ ; W ^ ) ,
ρ ( κ α , ω ) : = min P V ^ W ^ : P U ^ , P U ^ V ^ W ^ L ^ ( κ α , ω ) I P ( V ^ ; W ^ ) , E 1 ( κ α , ω ) : = min ( P U ˜ V ˜ W ˜ , Q U ˜ V ˜ W ˜ ) T 1 ( κ α , ω ) D ( P U ˜ V ˜ W ˜ | | Q U ˜ V ˜ W ˜ ) ,
E 2 ( κ α , ω , R ) : = min ( P U ˜ V ˜ W ˜ , Q U ˜ V ˜ W ˜ ) T 2 ( κ α , ω ) D ( P U ˜ V ˜ W ˜ | | Q U ˜ V ˜ W ˜ ) + E b ( κ α , ω , R ) , if   R < ζ ( κ α , ω ) , , otherwise ,
E 3 ( κ α , ω , R , P S X ) : = min ( P U ˜ V ˜ W ˜ , Q U ˜ V ˜ W ˜ ) T 3 ( κ α , ω ) D ( P U ˜ V ˜ W ˜ | | Q U ˜ V ˜ W ˜ ) + E b ( κ α , ω , R ) + E ex R , P S X , if   R < ζ ( κ α , ω ) , min ( P U ˜ V ˜ W ˜ , Q U ˜ V ˜ W ˜ ) T 3 ( κ α , ω ) D ( P U ˜ V ˜ W ˜ | | Q U ˜ V ˜ W ˜ ) + ρ ( κ α , ω ) + E ex R , P S X , otherwise ,
E 4 ( κ α , ω , R , P S X , θ ) : = min P V ^ : P U ^ V ^ W ^ L ^ ( κ α , ω ) D ( P V ^ | | Q V ) + E b ( κ α , ω , R ) + E m ( P S X , θ ) θ , if   R < ζ ( κ α , ω ) , min P V ^ : P U ^ V ^ W ^ L ^ ( κ α , ω ) D ( P V ^ | | Q V ) + ρ ( κ α , ω ) + E m ( P S X , θ ) θ , otherwise , where ,
T 1 ( κ α , ω ) : = ( P U ˜ V ˜ W ˜ , Q U ˜ V ˜ W ˜ ) : P U ˜ W ˜ = P U ^ W ^ , P V ˜ W ˜ = P V ^ W ^ , Q U ˜ V ˜ W ˜ : = Q U V P W ˜ | U ˜ for   some P U ^ V ^ W ^ L ^ ( κ α , ω ) , T 2 ( κ α , ω ) : = ( P U ˜ V ˜ W ˜ , Q U ˜ V ˜ W ˜ ) : P U ˜ W ˜ = P U ^ W ^ , P V ˜ = P V ^ , H P ( W ˜ | V ˜ ) H P ( W ^ | V ^ ) , Q U ˜ V ˜ W ˜ : = Q U V P W ˜ | U ˜ for   some P U ^ V ^ W ^ L ^ ( κ α , ω ) , T 3 ( κ α , ω ) : = ( P U ˜ V ˜ W ˜ , Q U ˜ V ˜ W ˜ ) : P U ˜ W ˜ = P U ^ W ^ , P V ˜ = P V ^ , Q U ˜ V ˜ W ˜ : = Q U V P W ˜ | U ˜ for   some P U ^ V ^ W ^ L ^ ( κ α , ω ) .
We have the following lower bound for κ ( κ α ) , which translates to an inner bound for R .
Theorem 1 
(Inner bound via SHTCC scheme). κ ( κ α ) ≥ κ s ( κ α ) , where
κ s ( κ α ) : = max ( ω , R , P S X , θ ) L ( κ α ) min { E 1 ( κ α , ω ) , E 2 ( κ α , ω , R ) , E 3 ( κ α , ω , R , P S X ) , E 4 ( κ α , ω , R , P S X , θ ) } .
The proof of Theorem 1 is presented in Section 4.1. The SHTCC scheme, which achieves the error-exponent pair ( κ α , κ s ( κ α ) ) , is a coding scheme analogous to separate source and channel coding for the lossy transmission of a source over a communication channel with correlated side-information at the receiver [45], albeit with the objective of reliable HT. In this scheme, the source samples are first compressed to an index, which acts as the message to be transmitted over the channel. However, in contrast to standard communication problems, there is a need to protect certain messages more reliably than others; hence, an unequal error-protection scheme [39,42] is used. Briefly, the SHTCC scheme involves ( i ) the quantization and binning of u sequences, whose type P u is within a κ α -neighborhood (in terms of KL divergence) of P U , using V as side information at the decision maker for decoding, and ( i i ) the unequal error-protection channel coding scheme of [39] for protecting a special message which informs the decision maker that P u lies outside the κ α -neighborhood of P U . The output of the channel decoder is processed by an empirical conditional entropy decoder, which recovers the quantization codeword with the least empirical conditional entropy given V . Since this decoder depends only on the empirical distributions of the observations, it is universal and useful in the hypothesis testing context, where multiple distributions are involved (as was first noted in [8]). The various factors E 1 to E 4 in (7) have natural interpretations in terms of events that could possibly result in a hypothesis testing error. Specifically, E 1 and E 2 correspond to the error events arising due to quantization and binning, respectively, while E 3 and E 4 correspond to the error events of wrongly decoding an ordinary channel codeword and the special message codeword, respectively.
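To make the decoding rule concrete, the following Python sketch implements a minimum empirical conditional entropy decoder of the kind described above: among the quantization codewords assigned to the received bin, it selects the one with the smallest empirical conditional entropy given the side information v . The codebook, bin contents, and sequences below are toy placeholders of our own.

```python
import math
from collections import Counter

def empirical_cond_entropy(w, v):
    """H_e(w | v): empirical conditional entropy (in nats) of w given v."""
    n = len(w)
    joint = Counter(zip(v, w))
    marg_v = Counter(v)
    H = 0.0
    for (sym_v, sym_w), c in joint.items():
        p_joint = c / n
        p_cond = c / marg_v[sym_v]
        H -= p_joint * math.log(p_cond)
    return H

def min_entropy_decoder(bin_codewords, v):
    """Return the index (within the bin) of the codeword minimizing H_e(w | v)."""
    return min(range(len(bin_codewords)),
               key=lambda j: empirical_cond_entropy(bin_codewords[j], v))

# Toy example: the correct codeword is strongly correlated with v.
v = (0, 0, 1, 1, 0, 1, 0, 1)
bin_codewords = [
    (0, 0, 1, 1, 0, 1, 0, 1),   # matches v symbol-by-symbol
    (1, 0, 0, 1, 1, 0, 1, 0),   # roughly independent of v
]
print(min_entropy_decoder(bin_codewords, v))   # expected output: 0
```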
Remark 1 
(Generalization of Han–Kobayashi inner bound). In Theorem 1 in [14], Han and Kobayashi obtained an inner bound on R for DHT over a noiseless channel. At a high level, their coding scheme involves type-based quantization of u U n sequences, whose type P u lies within a κ α -neighborhood of P U , where κ α is the desired type I error-exponent. As a corollary, Theorem 1 recovers the lower bound for κ ( κ α ) obtained in [14] by ( i ) setting E ex R , P S X , E m ( P S X , θ ) and E m ( P S X , θ ) θ to ∞, which hold when the channel is noiseless, and ( i i ) maximizing over the set ( ω , R , P S X , θ ) F × R 0 × P ( S × X ) × Θ ( P S X ) : ζ ( κ α , ω ) R < I P ( X ; Y | S ) , P S X Y : = P S X P Y | X L ( κ α ) in (7). Then, note that the terms E 2 ( κ α , ω , R ) , E 3 ( κ α , ω , R , P S X ) and E 4 ( κ α , ω , R , P S X , θ ) all equal ∞, and thus the inner bound in Theorem 1 reduces to that given in Theorem 1 in [14].
Remark 2 
(Improvement via time-sharing). Since the lower bound on κ ( κ α ) in Theorem 1 is not necessarily concave, a tighter bound can be obtained using the technique of time-sharing similar to Theorem 3 in [14]. We omit its description, as it is cumbersome, although straightforward.
Theorem 1 also recovers the lower bound on the optimal type II error-exponent for a fixed type I error probability constraint established in Theorem 2 in [10] by letting κ α → 0 . The details are provided in Appendix A. Further, specializing the lower bound in Theorem 1 to the case of TAI, i.e., when Q U V = P U P V , we obtain the following corollary, which recovers, as a special case, the optimal type II error-exponent for TAI established in Proposition 7 in [10].
Corollary 1 
(Inner bound for TAI). Let P U V P ( U × V ) be an arbitrary distribution and Q U V = P U P V . Then,
κ ( κ α ) ≥ κ s ( κ α ) ≥ κ i ( κ α ) ,
where
κ i ( κ α ) : = max ( ω , P S X , θ ) L ( κ α ) min E 1 i ( κ α , ω ) , E 2 i ( κ α , ω , P S X ) , E 3 i ( κ α , ω , P S X , θ ) , L ( κ α ) : = ( ω , P S X , θ ) F × P ( S × X ) × Θ ( P S X ) : ζ ( κ α , ω ) < I P ( X ; Y | S ) , P S X Y : = P S X P Y | X , min E sp ( P S X , θ ) , E ex ζ ( κ α , ω ) , P S X κ α , E 1 i ( κ α , ω ) : = min P V ^ W ^ : P U ^ V ^ W ^ L ^ ( κ α , ω ) I P ( V ^ ; W ^ ) + D ( P V ^ | | P V ) , E 2 i ( κ α , ω , P S X ) : = ρ ( κ α , ω ) + E ex ζ ( κ α , ω ) , P S X , E 3 i ( κ α , ω , P S X , θ ) : = ρ ( κ α , ω ) + E sp ( P S X , θ ) θ ,
and L ^ ( κ α , ω ) , ζ ( κ α , ω ) and ρ ( κ α , ω ) are defined in (6a), (6b) and (6c), respectively. In particular,
lim κ α → 0 κ ( κ α ) = κ s ( 0 ) = κ i ( 0 ) = max P W | U : I P ( U ; W ) ≤ C ( P Y | X ) , P U V W = P U V P W | U I P ( V ; W ) ,
where | W | ≤ | U | + 1 and C ( P Y | X ) denotes the capacity of the channel P Y | X .
The proof of Corollary 1 is given in Section 4.2. Its achievability follows from a special case of the SHTCC scheme without binning at the encoder.
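As a numerical sanity check of the limiting expression in (10), the sketch below evaluates the maximization for a doubly symmetric binary source observed through a BSC, restricting the test channel P W | U to binary symmetric channels for simplicity (an assumption on our part); the source and channel parameters are illustrative and all information quantities are in nats.

```python
import numpy as np

def h_bin(t):
    """Binary entropy in nats."""
    if t in (0.0, 1.0):
        return 0.0
    return -t * np.log(t) - (1 - t) * np.log(1 - t)

def convolve(a, b):
    """Crossover probability of the cascade of two binary symmetric channels."""
    return a * (1 - b) + b * (1 - a)

# H0: U ~ Bern(1/2) and V = U xor Bern(q); H1 (TAI): Q_UV = P_U P_V.
# Channel from observer to decision maker: BSC(p) with capacity C = log2 - h(p).
q, p = 0.1, 0.2
C = np.log(2) - h_bin(p)

best = 0.0
for t in np.linspace(0.0, 0.5, 5001):      # W = U xor Bern(t), a symmetric test channel
    I_UW = np.log(2) - h_bin(t)
    if I_UW <= C:                          # rate constraint I(U;W) <= C
        I_VW = np.log(2) - h_bin(convolve(t, q))
        best = max(best, I_VW)

print(best)                                 # should be close to the value below (t = p is optimal)
print(np.log(2) - h_bin(convolve(p, q)))
```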
Next, we consider testing against dependence (TAD) for which Q U V is an arbitrary joint distribution and P U V = Q U Q V . Theorem 1 specialized to TAD gives the following corollary.
Corollary 2 
(Inner bound for TAD). Let Q U V P ( U × V ) be an arbitrary distribution and P U V = Q U Q V . Then,
κ ( κ α ) ≥ κ s ( κ α ) = κ d ( κ α ) : = max ( ω , P S X , θ ) ∈ L ( κ α ) min { E 1 d ( κ α , ω ) , E 2 d ( κ α , ω , P S X ) , E 3 d ( P S X , θ ) } ,
where
E 1 d ( κ α , ω ) : = min ( P U ˜ V ˜ W ˜ , Q U ˜ V ˜ W ˜ ) T 1 ( κ α , ω ) D ( P U ˜ V ˜ W ˜ | | Q U ˜ V ˜ W ˜ ) min ( P V ^ W ^ , Q V W ^ ) : P U ^ V ^ W ^ L ^ ( κ α , ω ) , Q U V W ^ = Q U V P W ^ | U ^ D ( P V ^ W ^ | | Q V W ^ ) , E 2 d ( κ α , ω , P S X ) : = E ex ζ ( κ α , ω ) , P S X , E 3 d ( P S X , θ ) : = E sp ( P S X , θ ) θ ,
and L ^ ( κ α , ω ) , T 1 ( κ α , ω ) and L ( κ α ) are given in (6a), (6d) and (9), respectively. In particular,
lim κ α → 0 κ ( κ α ) ≥ κ s ( 0 ) = κ d ( 0 ) ≥ κ TAD ,
where
κ TAD = max ( P W | U , P S X ) : I Q ( W ; U ) ≤ I P ( X ; Y | S ) , Q U V W = Q U V P W | U , P S X Y = P S X P Y | X min { D ( Q V Q W | | Q V W ) , E ex ( I Q ( U ; W ) , P S X ) , − θ l ( P S X ) } ,
and | W | | U | + 1 .
The proof of Corollary 2 is given in Section 4.3. Note that the expression for κ s ( κ α ) given in (11) is simpler to compute than that in Theorem 1. This will be handy in showing that the JHTCC scheme strictly outperforms the SHTCC scheme, which we highlight via an example in Section 3.3 below.

3.2. Inner Bound via JHTCC Scheme

It is well known that joint source-channel coding schemes offer advantages over separation-based coding schemes in several information theoretic problems, such as the transmission of correlated sources over a multiple-access channel [40,46] and the error-exponent in the lossless or lossy transmission of a source over a noisy channel [42,47]. Recently, it was shown via an example in [10] that joint schemes also achieve a strictly larger type II error-exponent in DHT problems compared to a separation-based scheme in some scenarios. Motivated by this, we present an inner bound on R using a generalization of the JHTCC scheme in [10].
Let W and S be arbitrary finite sets, and F denote the set of all continuous mappings from P ( U × S ) to P ( W | U × S ) , where P ( W | U × S ) is the set of all conditional distributions P W | U S . Let P S , ω ( · , P S ) , P X | U S W , P X | U S denote an arbitrary element of P ( S ) × F × P ( X | U × S × W ) × P ( X | U × S ) , and define L h ( κ α ) : = P S , ω ( · , P S ) , P X | U S W , P X | U S : E b ( κ α , ω , P S , P X | U S W ) κ α , L ^ h ( κ α , ω , P S , P X | U S W ) : = ( P U ^ V ^ W ^ Y ^ S : D ( P U ^ V ^ W ^ Y ^ | S | | P U V W ^ Y | S | P S ) κ α , P S U V W ^ X Y : = P S P U V P W ^ | U ^ S P X | U S W P Y | X , P W ^ | U ^ S = ω ( P U ^ , P S ) , E b ( κ α , ω , P S , P X | U S W ) : = ρ ( κ α , ω , P S , P X | U S W ) ζ q ( κ α , ω , P S ) , ζ ( κ α , ω , P S ) : = max P U ^ W ^ S : P V ^ Y ^ s . t . P U ^ V ^ W ^ Y ^ S L ^ h ( κ α , ω , P S , P X | U S W ) I P ( U ^ ; W ^ | S ) , ρ ( κ α , ω , P S , P X | U S W ) : = min P V ^ W ^ Y ^ S : P U ^ s . t . P U ^ V ^ W ^ Y ^ S L ^ h ( κ α , ω , P S , P X | U S W ) I P ( Y ^ , V ^ ; W ^ | S ) , E 1 ( κ α , ω ) : = min ( P U ˜ V ˜ W ˜ Y ˜ S , Q U ˜ V ˜ W ˜ Y ˜ S ) T 1 ( κ α , ω ) D ( P U ˜ V ˜ W ˜ Y ˜ | S | | Q U ˜ V ˜ W ˜ Y ˜ | S | P S ) , E 2 ( κ α , ω , P S , P X | U S W ) : = min ( P U ˜ V ˜ W ˜ Y ˜ S , Q U ˜ V ˜ W ˜ Y ˜ S ) T 2 ( κ α , ω , P S , P X | U S W ) D ( P U ˜ V ˜ W ˜ Y ˜ | S | | Q U ˜ V ˜ W ˜ Y ˜ | S | P S ) + E b ( κ α , ω , P S , P X | U S W ) , E 3 ( κ α , ω , P S , P X | U S W , P X | U S ) : = min P V ^ Y ^ S : P U ^ V ^ W ^ Y ^ S L ^ h ( κ α , ω , P S , P X | U S W ) D ( P V ^ Y ^ | S | | Q V Y | S | P S ) + E b ( κ α , ω , P S , P X | U S W ) , Q S U V X Y : = P S Q U V P X | U S P Y | X , P Y | X : = P Y | X , T 1 ( κ α , ω , P S , P X | U S W ) : = ( P U ˜ V ˜ W ˜ Y ˜ S , P U ˜ W ˜ S = P U ^ W ^ S , P V ˜ W ˜ Y ˜ S = P V ^ W ^ Y ^ S , Q U ˜ V ˜ W ˜ Y ˜ S ) : Q S U ˜ V ˜ W ˜ X ˜ Y ˜ : = P S Q U V P W ˜ | U ˜ S P X | U S W P Y | X for   some P U ^ V ^ W ^ Y ^ S L ^ h ( κ α , ω , P S , P X | U S W ) , T 2 ( κ α , ω , P S , P X | U S W ) : = ( P U ˜ V ˜ W ˜ Y ˜ S , P U ˜ W ˜ S = P U ^ W ^ S , P V ˜ Y ˜ S = P V ^ Y ^ S , Q U ˜ V ˜ W ˜ Y ˜ S ) : H P ( W ˜ | V ˜ , Y ˜ , S ) H P ( W ^ | V ^ , Y ^ , S ) , Q S U ˜ V ˜ W ˜ X ˜ Y ˜ : = P S Q U V P W ˜ | U ˜ S P X | U S W P Y | X for   some P U ^ V ^ W ^ Y ^ S L ^ h ( κ α , ω , P S , P X | U S W ) .
Then, we have the following result.
Theorem 2 
(Inner bound via JHTCC scheme).
κ ( κ α ) ≥ max { κ h ( κ α ) , κ u ( κ α ) } ,
where
κ h ( κ α ) : = max ( P S , ω , P X | U S W , P X | U S ) L h ( κ α ) min { E 1 ( κ α , ω ) , E 2 ( κ α , ω , P S , P X | U S W ) ,
E 3 ( κ α , ω , P S , P X | U S W , P X | U S ) } , κ u ( κ α ) : = max ( P S , P X | U S ) P ( S ) × P ( X | S × U ) κ u ( κ α , P S , P X | U S ) , κ u ( κ α , P S , P X | U S ) : = min P S P V ^ Y ^ : D P V ^ Y ^ | S | | P V Y | S | P S κ α D P V ^ Y ^ | S | | Q V Y | S | P S , P S U V X Y = P S P U V P X | U S P Y | X a n d Q S U V X Y = P S Q U V P X | U S P Y | X .
The proof of Theorem 2 is given in Section 4.4, and utilizes a generalization of the hybrid coding scheme [40] to achieve the stated inner bound. Specifically, the error-exponent pair ( κ α , κ h ( κ α ) ) is achieved using type-based hybrid coding, while ( κ α , κ u ( κ α ) ) is realized by uncoded transmission, in which the channel input X is generated as the output of a DMC P X | U with input U (along with time sharing). In standard hybrid coding, the source sequence is first quantized via joint typicality, and the channel input is then chosen as a function of both the original source sequence and its quantization. At the decoder, the quantized codeword is first recovered using the channel output and side information via joint typicality decoding, and an estimate of the source sequence is output as a function of the channel output and the recovered codeword. The quantization part forms the digital part of the scheme, while the use of the source sequence for encoding and the channel output for decoding comprises the analog part. The scheme derives its name from these joint hybrid digital-analog operations. In the HT context considered here, the aforementioned source quantization is replaced by a type-based quantization at the encoder, and the joint typicality decoder is replaced by a universal empirical conditional entropy decoder. We note that Theorem 2 recovers the lower bound on the optimal type II error-exponent proved in Theorem 5 in [10]. The details are provided in Appendix B.
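The following minimal Python sketch illustrates only the symbol-wise analog mapping of hybrid coding described above: given the source sequence u , a time-sharing sequence s , and the selected quantization codeword w , the channel input is drawn symbol by symbol from a conditional PMF P X | U S W . The alphabets, PMF values, and sequences are placeholders of our own, and the sketch is not a full implementation of the JHTCC scheme.

```python
import numpy as np

rng = np.random.default_rng(0)

def hybrid_channel_input(u, s, w, P_X_given_usw, x_alphabet):
    """Generate x_i ~ P_{X|USW}(. | u_i, s_i, w_i) independently for each symbol."""
    return [rng.choice(x_alphabet, p=P_X_given_usw[(ui, si, wi)])
            for ui, si, wi in zip(u, s, w)]

# Toy binary example: the channel input mixes the source symbol and its
# quantized representation (all probability values below are arbitrary).
x_alphabet = [0, 1]
P_X_given_usw = {
    (u, s, w): np.array([0.8, 0.2]) if (u ^ w) == 0 else np.array([0.2, 0.8])
    for u in (0, 1) for s in (0, 1) for w in (0, 1)
}
u = [0, 1, 1, 0, 1]
s = [0, 0, 1, 1, 0]
w = [0, 1, 0, 0, 1]
print(hybrid_channel_input(u, s, w, P_X_given_usw, x_alphabet))
```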
Next, we provide a comparison between the SHTCC and JHTCC bounds via an example as mentioned earlier.

3.3. Comparison of Inner Bounds

We compare the inner bounds established in Theorem 1 and Theorem 2 for a simple setting of TAD over a BSC. For this purpose, we will use the inner bound κ d ( κ α ) stated in Corollary 2 and κ u ( κ α ) that is achieved by uncoded transmission. Our objective is to illustrate that the JHTCC scheme achieves a strictly tighter bound on R compared to the SHTCC scheme, at least for some points of the trade-off.
Example 1. 
Let p , q ∈ [ 0 , 0.5 ] , U = V = X = Y = S = { 0 , 1 } ,
Q U V = [ q , 0.5 − q ; 0.5 − q , q ] , P Y | X = [ 1 − p , p ; p , 1 − p ] , and P U V = Q U Q V .
A comparison of the inner bounds achieved by the SHTCC and JHTCC schemes for the above example is shown in Figure 2 and Figure 3, where we plot the error-exponents trade-off achieved by uncoded transmission (a lower bound for the JHTCC scheme), and the expurgated exponent at zero rate:
E ex ( 0 ) : = max P S X ∈ P ( S × X ) E ex ( P S X , 0 ) = − 0.25 log ( 4 p ( 1 − p ) ) ,
which is an upper bound on κ d ( κ α ) for any κ α ≥ 0 . To compute E ex ( 0 ) , we used the closed-form expression for E ex ( · ) given in Problem 10.26(c) in [38]. It can be seen that the JHTCC scheme outperforms the SHTCC scheme for κ α below a threshold, which depends on the source and channel distributions. In particular, the threshold below which the improvement is seen is reduced when the channel or the source becomes more uniform. The former behavior can be seen directly by comparing the subplots in Figure 2 and Figure 3, while the latter can be noted by comparing Figure 2a with Figure 3a, or Figure 2b with Figure 3b.
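The zero-rate expurgated exponent used in this comparison can also be checked numerically. The sketch below evaluates the expurgated exponent at R = 0 for a BSC(p) with a uniform input and a degenerate S (an assumption that reduces the expression to the standard expurgated form), sweeping over ρ, and compares the result with the closed form − 0.25 log ( 4 p ( 1 − p ) ) ; a finite ρ grid only approaches the supremum from below, and the parameter values are ours.

```python
import numpy as np

def E_ex_bsc(R, p, rho_grid):
    """Expurgated exponent of a BSC(p) with uniform input (S degenerate), in nats."""
    P_X = np.array([0.5, 0.5])
    W = np.array([[1 - p, p], [p, 1 - p]])          # channel transition matrix
    # Bhattacharyya coefficients B(x, x') = sum_y sqrt(W(y|x) W(y|x')).
    B = np.sqrt(W) @ np.sqrt(W).T
    best = -np.inf
    for rho in rho_grid:
        inner = sum(P_X[x] * P_X[xp] * B[x, xp] ** (1.0 / rho)
                    for x in range(2) for xp in range(2))
        best = max(best, -rho * R - rho * np.log(inner))
    return best

p = 0.1
rho_grid = np.geomspace(1.0, 1e5, 2000)
print(E_ex_bsc(0.0, p, rho_grid))                   # approaches the value below as rho grows
print(-0.25 * np.log(4 * p * (1 - p)))              # closed form -0.25 log(4 p (1 - p))
```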

4. Proofs

4.1. Proof of Theorem 1

We will show the achievability of the error-exponent pair ( κ α , κ s ( κ α ) ) by constructing a suitable ensemble of HT codes, and showing that the expected type I and type II error probabilities (over this ensemble) satisfy (5) for the pair ( κ α , κ s ( κ α ) ) . Then, an expurgation argument [44] will be used to show the existence of an HT code that satisfies (5) for the same error-exponent pair, thus showing that ( κ α , κ s ( κ α ) ) ∈ R as desired.
Let n N , | W | < , κ α > 0 , ( ω , R , P S X , θ ) L ( κ α ) , R : = ζ ( κ α , ω ) , and η > 0 be a small number. Additionally, suppose that R 0 satisfies
ζ ( κ α , ω ) ρ ( κ α , ω ) R < I P ( X ; Y | S ) ,
where ζ ( κ α , ω ) and ρ ( κ α , ω ) are defined in (6b) and (6c), respectively. The SHTCC scheme is as follows:
  • Encoding: The observer’s encoder is composed of two stages, a source encoder followed by a channel encoder.
  • Source encoder: The source encoding comprises a quantization scheme followed by binning to reduce the rate if necessary.
  • Quantization codebook: Let
D n ( P U , η ) : = { P U ^ ∈ T ( U n ) : D ( P U ^ | | P U ) ≤ κ α + η } .
Consider some ordering on the types in D n ( P U , η ) and denote the elements as P U ^ i for i | D n ( P U , η ) | . For each type P U ^ i D n ( P U , η ) , i | D n ( P U , η ) | , choose a joint type variable P U ^ i W ^ i T ( U n × W n ) such that
D P W ^ i | U ^ i | | P W i | U | P U ^ i η 3 ,
I P U ^ i ; W ^ i R + η 3 ,
where P W i | U = ω ( P U ^ i ) . Note that this is always possible for n sufficiently large by the definition of R .
Let
D n ( P U W , η ) : = P U ^ i W ^ i : i | D n ( P U , η ) | ,
R i : = I P U ^ i ; W ^ i + ( η / 3 ) , i | D n ( P U , η ) | ,
M i : = 1 + k = 1 i 1 e n R k : k = 1 i e n R k ,
and B W , n = W ( j ) , 1 j i = 1 | D n ( P U , η ) | | M i | denote a random quantization codebook such that the codeword W ( j ) Unif T n ( P W ^ i ) , if j M i for some i | D n ( P U , η ) | . Denote a realization of B W , n by B W , n = w ( j ) W n , 1 j i = 1 | D n ( P U , η ) | | M i | .
Quantization scheme: For a given codebook B W , n and u T n P U ^ i such that P U ^ i D n ( P U , η ) for some i | D n ( P U , η ) | , let
M ˜ u , B W , n : = j M i : w ( j ) B W , n , ( u , w ( j ) ) T n P U ^ i W ^ i , P U ^ i W ^ i D n ( P U W , η ) .
If | M ˜ ( u , B W , n ) | 1 , let M u , B W , n denote an index selected uniformly at random from the set M ˜ ( u , B W , n ) , otherwise, set M u , B W , n = 0 . Denoting the support of M u , B W , n by M , we have for sufficiently large n that
| M | 1 + i = 1 | D n ( P U , η ) | e n R i 1 + | D n ( P U , η ) | e P U ^ W ^ D n ( P U W , η ) max n I ( U ^ ; W ^ ) + ( n η / 3 ) e n ( R + η ) ,
where the last inequality uses (16b) and | D n ( P U , η ) | ( n + 1 ) | U | . Binning: If | M | > | M | , then the source encoder performs binning as described below. Let R n : = log e n R / | D n ( P U , η ) | , M i : = [ 1 + ( i 1 ) R n : i R n ] , i | D n ( P U , η ) | , and M : = { 0 } i [ | D n ( P U , η ) | ] M i . Note that
e n R n e n R | U | log ( n + 1 ) .
Let f B denote the random binning function such that for each j M i , f B ( j ) Unif [ | M i | ] for i | D n ( P U , η ) | , and f B ( 0 ) = 0 with probability one. Denote a realization of f B ( j ) by f b , where f b : M M . Given a codebook B W , n and binning function f b , the source encoder outputs M = f b M u , B W , n for u U n . If | M | | M | , then f b is taken to be the identity map (no binning), and in this case, M = M u , B W , n .
Channel codebook: Let B X , n : = { X ( m ) X n , m M } denote a random channel codebook generated as follows. Without loss of generality, denote the elements of the set S = X as 1 , , | X | . The codeword length n is divided into | S | = | X | blocks, where the length of the first block is P S ( 1 ) n , the second block is P S ( 2 ) n , so on and so forth, and the length of the last block is chosen such that the total length is n. For i [ | X | ] , let k i : = l = 1 i 1 P S ( l ) n + 1 and k ¯ i : = l = 1 i P S ( l ) n , where the empty sum is defined to be zero. Let s X n be such that s k i k ¯ i = i , i.e., the elements of s equal i in the i t h block for i [ | X | ] . Let X ( 0 ) = s with probability one, and the remaining codewords X ( m ) , m M { 0 } be constant composition codewords [38] selected such that X k i k ¯ i ( m ) Unif T P S ( i ) n ( P ^ X | S ( · | i ) ) , where P ^ X | S is such that T P S ( i ) n P ^ X | S ( · | i ) is non-empty and D ( P ^ X | S | | P X | S | P S ) η 3 . Denote a realization of B X , n by B X , n : = { x ( m ) X n , m M } . Note that for m M { 0 } and large n, the codeword pair ( x ( 0 ) , x ( m ) ) has joint type (approx) P x ( 0 ) x ( j ) = P ^ S X : = P S P ^ X | S .
Channel encoder: For a given B X , n , the channel encoder outputs x = x ( m ) for output m from the source encoder. Denote this map by f B X , n : M X n .
Encoder: Denote by f n : U n P ( X n ) the encoder induced by all the above operations, i.e., f n ( · | u ) = f B X , n f b M ( u , B W , n ) .
Decision function: The decision function consists of three parts, a channel decoder, a source decoder and a tester.
Channel decoder: The channel decoder first performs a Neyman–Pearson test on the channel output y according to Π ˜ θ : Y n { 0 , 1 } , where
Π ˜ θ ( y ) : = 𝟙 { ∑ i = 1 n log ( P Y | X ( y i | s i ) / P Y | S ( y i | s i ) ) ≥ n θ } .
If Π ˜ θ ( y ) = 1 , then M ^ = 0 . Else, for a given B X , n , maximum likelihood (ML) decoding is done on the remaining set of codewords { x ( m ) , m M { 0 } } , and M ^ is set equal to the ML estimate. Denote the channel decoder induced by the above operations by g B X , n , where g B X , n : Y n M .
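A minimal Python sketch of this first stage of the channel decoder follows: it accumulates the log-likelihood ratio log ( P Y | X ( y i | s i ) / P Y | S ( y i | s i ) ) over the block and compares the sum with n θ to decide whether the special message was sent, as in (20). The channel parameters, conditional PMFs, and threshold below are illustrative choices of our own.

```python
import numpy as np

def special_message_test(y, s, P_Y_given_X, P_Y_given_S, theta):
    """Return 1 (declare M-hat = 0) iff sum_i log[P_{Y|X}(y_i|s_i)/P_{Y|S}(y_i|s_i)] >= n*theta."""
    llr = sum(np.log(P_Y_given_X[si][yi] / P_Y_given_S[si][yi])
              for yi, si in zip(y, s))
    return 1 if llr >= len(y) * theta else 0

# Toy binary example: P_{Y|X} is a BSC(0.1) and P_{Y|S} a noisier BSC(0.4);
# under M = 0 the channel input equals s, so y should track s closely.
P_Y_given_X = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.1, 1: 0.9}}
P_Y_given_S = {0: {0: 0.6, 1: 0.4}, 1: {0: 0.4, 1: 0.6}}
s = [0, 1, 0, 1, 0, 1, 0, 1]
y = [0, 1, 0, 1, 1, 1, 0, 1]      # mostly agrees with s
print(special_message_test(y, s, P_Y_given_X, P_Y_given_S, theta=0.1))  # expect 1
```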
For a given codebook B X , n , the channel encoder–decoder pair described above induces a distribution
P X Y M ^ | M ( B X , n ) ( x , y , m ^ | m ) : = 𝟙 { f B X , n ( m ) = x } P Y | X n ( y | x ) 𝟙 { m ^ = g B X , n ( y ) } .
Note that P x ( 0 ) x ( m ) = P ^ S X , Y i = 1 | X | P Y | X P S ( i ) n ( · | i ) for M = 0 and Y i = 1 | X | P Y | S P S ( i ) n ( · | i ) for M = m 0 . Then, it follows by an application of Proposition A1 proved in Appendix C that for any B X , n and n sufficiently large, the Neyman–Pearson test in (20) yields
P P ( B X , n ) M ^ = 0 | M = m e n E sp ( P S X , θ ) η , m M { 0 } ,
P P ( B X , n ) M ^ 0 | M = 0 e n E sp ( P S X , θ ) θ η .
Moreover, given M ^ ≠ 0 , a random coding argument over the ensemble of B X , n (see Exercises 10.18 and 10.24 in [38,44]) shows that there exists a deterministic codebook B X , n such that (21a) and (21b) hold, and the ML decoding described above asymptotically achieves
P P ( B X , n ) M ^ m | M = m 0 , M ^ 0 e n E ex R , P S X η .
This deterministic codebook B X , n is used for channel coding.
Source decoder: For a given codebook B W , n and inputs M ^ = m ^ and V = v , the source decoder first decodes for the quantization codeword w ( m ^ ) (if required) using the empirical conditional entropy decoder, and then declares the output H ^ of the hypothesis test based on w ( m ^ ) and v . More specifically, if binning is not performed, i.e., if | M | | M | , M ^ = m ^ . Otherwise, M ^ = m ^ , where m ^ = 0 if m ^ = 0 and m ^ = arg min j : f b ( j ) = m ^ H e ( w ( j ) | v ) otherwise. Denote the source decoder induced by the above operations by g B W , n : M × V n M .
Testing and Acceptance region: If m ^ = 0 , H ^ = 1 is declared. Otherwise, H ^ = 0 or H ^ = 1 is declared depending on whether ( m ^ , v ) A n or ( m ^ , v ) A n , respectively, where A n denotes the acceptance region for H 0 as specified next. For a given codebook B W , n , let O m denote the set of u such that the source encoder outputs m , m M { 0 } . For each m M { 0 } and u O m , let
Z m ( u ) = { v V n : ( w ( m ) , u , v ) J n ( κ α + η , P W m U V ) } ,
where J n ( r , P X ) : = { x X n : D P x | | P X r } ,
P U V W m : = P U V P W m | U   and   P W m | U = ω ( P u ) .
For m M { 0 } , set Z m : = { v : v Z m ( u ) for   some u O m } , and define the acceptance region for H 0 at the decision maker as A n : = m M 0 m × Z m or equivalently as A n e : = m M 0 O m × Z m . Note that A n is the same as the acceptance region for H 0 in Theorem 1 in [14]. Denote the decision function induced by g B X , n , g B W , n and A n by g n : Y n × V n H ^ .
Induced probability distribution: The PMFs induced by a code c n = ( f n , g n ) with respect to codebook B n : = B W , n , f b , B X , n under H 0 and H 1 are
P UV M M XY M ^ M ^ H ^ ( B n , c n ) ( u , v , m , m , x , y , m ^ , m ^ , h ^ ) : = P U V n ( u , v ) 𝟙 M u , B W , n = m , f b ( m ) = m P X Y M ^ | M ( B X , n ) ( x , y , m ^ | m ) 𝟙 g B W , n ( m , v ) = m ^ , h ^ = 𝟙 ( m ^ , v ) A n c , Q UV M M XY M ^ M ^ H ^ ( B n , c n ) ( u , v , m , m , x , y , m ^ , m ^ , h ^ ) : = Q U V n ( u , v ) 𝟙 M u , B W , n = m , f b ( m ) = m P X Y M ^ | M ( B X , n ) ( x , y , m ^ | m ) 𝟙 g B W , n ( m , v ) = m ^ , h ^ = 𝟙 ( m ^ , v ) A n c ,
respectively. For simplicity, we will denote the above distributions by P ( B n ) and Q ( B n ) . Let B n : = B W , n , f B , B X , n , B n , and μ n denote the random codebook, its support, and the probability measure induced by its random construction, respectively. Additionally, define P ¯ P ( B n ) : = E μ n P P ( B n ) and P ¯ Q ( B n ) : = E μ n P Q ( B n ) .
Analysis of the type I and type II error probabilities: We analyze the type I and type II error probabilities averaged over the random ensemble of quantization and binning codebooks ( B W , f B ) . Then, an expurgation technique [44] guarantees the existence of a sequence of deterministic codebooks { B n } n N and a code { c n = ( f n , g n ) } n N that achieves the lower bound given in Theorem 1.
Type I error probability: In the following, random sets where the randomness is induced due to B n will be written using blackboard bold letters, e.g., A n for the random acceptance region for H 0 . Note that a type I error can occur only under the following events:
(i)
E EE : = P U ^ D n ( P U , η ) u T n ( P U ^ ) E EE ( u ) , where
E EE ( u ) : = { j M { 0 } s . t . ( u , W ( j ) ) T n P U ^ i W ^ i , P U ^ i = P u , P U ^ i W ^ i D n ( P U W , η ) } ,
(ii)
E NE : = { M ^ = M   and   ( M ^ , V ) A n } ,
(iii)
E OCE : = { M 0 , M ^ M   and   ( M ^ , V ) A n } ,
(iv)
E SCE : = { M = M = 0 , M ^ M   and   ( M ^ , V ) A n } ,
(v)
E BE : = { M 0 , M ^ = M , M ^ M   and   ( M ^ , V ) A n } .
Here, E EE corresponds to the event that there does not exist a quantization codeword corresponding to at least one sequence u of type P u ∈ D n ( P U , η ) ; E NE corresponds to the event in which there is neither an error at the channel decoder nor at the empirical conditional entropy decoder; E OCE and E SCE correspond to the cases in which there is an error at the channel decoder (hence also at the empirical conditional entropy decoder); and E BE corresponds to the case in which there is an error (due to binning) only at the empirical conditional entropy decoder. For the event E EE , it follows from a slight generalization of the type-covering lemma (Lemma 9.1 in [38]) that
P ¯ P ( B n ) ( E EE ) e e n Ω ( η ) .
Since e n Ω ( η ) / n → ∞ ( n → ∞ ) for η > 0 , the event E EE may be safely ignored from the analysis of the error-exponents. Given that E EE c holds for some B W , n , it follows from Equation (4.22) in [14] that
P ¯ P ( B n ) E NE | E EE c e n κ α ,
for sufficiently large n since the acceptance region is the same as that in Theorem 1 in [14].
Next, consider the event E OCE . We have for sufficiently large n that
P ¯ P ( B n ) E OCE P ¯ P ( B n ) M 0 P ¯ P ( B n ) M ^ M | M 0 ( a ) P ¯ P ( B n ) M ^ M | M 0 P ¯ P ( B n ) M ^ = 0 | M 0 + P ¯ P ( B n ) M ^ M | M 0 , M ^ 0 ( b ) e n E m ( P S X , θ ) η + e n E ex R , P S X η = e n min E m ( P S X , θ ) , E ex R , P S X η ,
where
(a)
holds since the event { M 0 } is equivalent to { M 0 } ;
(b)
holds due to (21a) and (22), which holds for B X , n .
Additionally, the probability of E SCE can be upper bounded as
P ¯ P ( B n ) E SCE P ¯ P ( B n ) M = 0 P ¯ P ( B n ) M = 0 | U D n ( P U , η ) + P ¯ P ( B n ) U D n ( P U , η ) = P ¯ P ( B n ) E EE + P ¯ P ( B n ) U D n ( P U , η ) e n κ α ,
where (27) is due to (24), the definition of D n ( P U , η ) in (15) and Lemma 2.2 and Lemma 2.6 in [38].
Finally, consider the event E BE . Note that this event occurs only when | M | | M | . Additionally, M = 0 iff M = 0 , and hence M 0 and M ^ = M implies that M ^ 0 . Let
D n ( P V W , η ) : = P V ^ W ^ : ( w , u , v ) m M { 0 } J n ( κ α + η , P W m U V ) , P W m U V satisfies ( 23 )   and   P w u v = P W ^ U ^ V ^ .
We have
P ¯ P ( B n ) E BE = P ¯ P ( B n ) E BE , ( M , V ) A n + P ¯ P ( B n ) E BE , ( M , V ) A n .
The second term in (28) can be upper bounded as
P ¯ P ( B n ) E BE , ( M , V ) A n P ¯ P ( B n ) ( M , V ) A n , E EE + P ¯ P ( B n ) ( M , V ) A n , E EE c e e n Ω ( η ) + P ¯ P ( B n ) ( M , V ) A n | E EE c e e n Ω ( η ) + P ¯ P ( B n ) ( U , V ) A n e e e n Ω ( η ) + e n κ α ,
where the inequality in (29) follows from Equation (4.22) in [14] for sufficiently large n since the acceptance region A n e is the same as that in [14]. To bound the first term in (28), define D n ( P V , η ) : = { P V ^ : P V ^ W ^ D n ( P V W , η ) } , and observe that since ( M , V ) A n implies M 0 , we have
P ¯ P ( B n ) E BE , ( M , V ) A n = ( m , m ) M × M P ¯ P ( B n ) E BE , ( M , V ) A n , M = m , M = m = ( m , m ) M × M P ¯ P ( B n ) M = m , M = m , M ^ = M P ¯ P ( B n ) M ^ M , ( M ^ , V ) A n , ( M , V ) A n | M = m , M = m , M ^ = M ( m , m ) M × M P ¯ P ( B n ) M = m , M = m , M ^ = M P ¯ P ( B n ) M ^ M , ( M , V ) A n | M = m , M = m , M ^ = M = ( a ) P ¯ P ( B n ) M ^ M , ( M , V ) A n | M = 1 , M = 1 , M ^ = M ( b ) P v D n ( P V , η ) v P v P ¯ P ( B n ) ( V = v | M = 1 )
P ¯ P ( B n ) j f B 1 ( 1 ) , j 1 , H e ( W ( j ) | v ) H e ( W ( 1 ) | v ) | M = 1 , V = v ,
where ( a ) follows since by the symmetry of the source encoder, binning function and random codebook construction, the term in (30) is independent of ( m , m ) ; ( b ) holds since ( M , V ) A n implies that P v D n ( P V , η ) and ( V , B W ) M ( M , M ^ ) form a Markov chain. Defining P V ^ = P v , and the event E 1 : = { M = 1 , V = v } , we obtain
P ¯ P ( B n ) j f B 1 ( 1 ) , j 1 , H e ( W ( j ) | v ) H e ( W ( 1 ) | v ) | E 1 = j M { 0 , 1 } P ¯ P ( B n ) f B ( j ) = 1 , H e ( W ( j ) | v ) H e ( W ( 1 ) | v ) | E 1 ( a ) 1 e n R n j M { 0 , 1 } P ¯ P ( B n ) H e ( W ( j ) | v ) H e ( W ( 1 ) | v ) | E 1 ( b ) 1 e n R n j M { 0 , 1 } P W ^ : P V ^ W ^ D n ( P V W , η ) w : ( v , w ) T n ( P V ^ W ^ ) P ¯ P ( B n ) W ( 1 ) = w | E 1 w ˜ T n ( P W ^ ) : H e ( w ˜ | v ) H ( W ^ | V ^ ) P ¯ P ( B n ) W ( j ) = w ˜ | E 1 { W ( 1 ) = w } ( c ) 1 e n R n j M { 0 , 1 } P W ^ : P V ^ W ^ D n ( P V W , η ) w : ( v , w ) T n ( P V ^ W ^ ) P ¯ P ( B n ) W ( 1 ) = w | E 1 w ˜ T n ( P W ^ ) : H e ( w ˜ | v ) H ( W ^ | V ^ ) 2 P ¯ P ( B n ) W ( j ) = w ˜ ,
where
(a)
follows since f B ( · ) is the uniform binning function independent of B W , n ;
(b)
holds due to the fact that if P v D n ( P V , η ) , then M = 1 implies that ( W ( 1 ) , v ) T n ( P V ^ W ^ ) with probability one for some P V ^ W ^ D n ( P V W , η ) ;
(c)
holds since P ¯ P ( B n ) W ( j ) = w ˜ | E 1 { W ( 1 ) = w } 2 P ¯ P ( B n ) W ( j ) = w ˜ , which follows similarly to Equation (101) in [10].
Continuing, we can write for sufficiently large n,
P ¯ P ( B n ) j f B 1 ( 1 ) , j 1 , H e ( W ( j ) | v ) H e ( W ( 1 ) | v ) | E 1 ( a ) 1 e n R n j M { 0 , 1 } P W ^ : P V ^ W ^ D n ( P V W , η ) w : ( v , w ) T n ( P V ^ W ^ ) P ¯ P ( B n ) W ( 1 ) = w | E 1 w ˜ T n ( P W ^ ) : H e ( w ˜ | v ) H ( W ^ | V ^ ) 2 e n ( H ( W ^ ) η ) ( b ) 1 e n R n j M { 0 , 1 } P W ^ : P V ^ W ^ D n ( P V W , η ) w : ( v , w ) T n ( P V ^ W ^ ) P ¯ P ( B n ) W ( 1 ) = w | E 1 ( n + 1 ) | V | | W | e n H ( W ^ | V ^ ) 2 e n ( H ( W ^ ) η ) 1 e n R n j M { 0 , 1 } P W ^ : P V ^ W ^ D n ( P V W , η ) 2 ( n + 1 ) | V | | W | e n I ( W ^ ; V ^ ) η ( c ) 1 e n R n j M { 0 , 1 } 2 ( n + 1 ) | W | ( n + 1 ) | V | | W | e n min P V ^ W ^ D n ( P V W , η ) I ( W ^ ; V ^ ) η ( d ) e n ( R R + ρ n η n ) ,
where ρ n : = min P V ^ W ^ D n ( P V W , η ) I ( V ^ ; W ^ ) and η n : = 3 η + o ( 1 ) . In the above,
(a) 
used Lemma 2.3 in [38] and the fact that the codewords are chosen uniformly at random from T n ( P W ^ ) ;
(b) 
follows since the total number of sequences w ˜ T n ( P W ^ ) such that P w ˜ v = P W ˜ V ˜ and H ( W ˜ | V ˜ ) H ( W ^ | V ^ ) is upper bounded by e n H ( W ^ | V ^ ) , and | T ( W n × V n ) | ( n + 1 ) | V | | W | ;
(c) 
holds due to Lemma 2.2 in [38];
(d) 
follows from R : = ζ ( κ α , ω ) , (14), (18) and (19).
Thus, since ρ n ρ ( κ α , ω ) + O ( η ) , we have from (28), (29), (31), (33) for large enough n that
P ¯ P ( B n ) E BE e n min κ α , R ζ ( κ α , ω ) + ρ ( κ α , ω ) O ( η ) .
By choice of ( ω , P S X , θ ) L ( κ α ) , it follows from (24), (25), (26), (27) and (34) that the type I error probability is upper bounded by e n κ α O ( η ) for large n.
Type II error probability: We analyze the type II error probability averaged over B n . A type II error can occur only under the following events:
(i)
E a : = M ^ = M , M ^ = M 0 , ( U , V , W ( M ) ) T n P U ^ V ^ W ^ s . t . P U ^ W ^ D n ( P U W , η )   and   P V ^ W ^ D n ( P V W , η ) ,
(ii)
E b : = M 0 , M ^ = M , M ^ M , f B ( M ^ ) = f B ( M ) , ( U , V , W ( M ) , W ( M ^ ) ) T n P U ^ V ^ W ^ W ^ d s . t . P U ^ W ^ D n ( P U W , η ) , P V ^ W ^ d D n ( P V W , η )   and   H e W ( M ^ ) | V H e W ( M ) | V ,
(iii)
E c : = M 0 , M ^ M   o r   0 , ( U , V , W ( M ) , W ( M ^ ) ) T n P U ^ V ^ W ^ W ^ d s . t . P U ^ W ^ D n ( P U W , η )   and   P V ^ W ^ d D n ( P V W , η ) ,
(iv)
E d : = M = M = 0 , M ^ M , ( V , W ( M ^ ) ) T n P V ^ W ^ d s . t . P V ^ W ^ d D n ( P V W , η ) .
Similar to (24), it follows that P ¯ Q ( B n ) ( E EE ) e e n Ω ( η ) . Hence, we may assume that E EE c holds for the type II error-exponent analysis. It then follows from the analysis in Equations (4.23)–(4.27) in [14] that for sufficiently large n,
P ¯ Q ( B n ) E a | E EE c e n E 1 ( κ α , ω ) O ( η ) .
The analysis of the error events E b , E c and E d follows similarly to that in the proof of Theorem 2 in [10], and results in 1 n log P ¯ Q ( B n ) E b min ( P U ˜ V ˜ W ˜ , Q U ˜ V ˜ W ˜ ) T 2 ( κ α , ω ) D ( P U ˜ V ˜ W ˜ | | Q U ˜ V ˜ W ˜ ) + E b ( κ α , ω , R ) O ( η ) , if   R < ζ ( κ α , ω ) + η , , otherwise , = E 2 ( κ α , ω , R ) O ( η ) . 1 n log P ¯ Q ( B n ) E c min ( P U ˜ V ˜ W ˜ , Q U ˜ V ˜ W ˜ ) T 3 ( κ α , ω ) D ( P U ˜ V ˜ W ˜ | | Q U ˜ V ˜ W ˜ ) + E b ( κ α , ω , R ) if   R < ζ ( κ α , ω ) + η + E ex R , P S X O ( η ) , min ( P U ˜ V ˜ W ˜ , Q U ˜ V ˜ W ˜ ) T 3 ( κ α , ω ) D ( P U ˜ V ˜ W ˜ | | Q U ˜ V ˜ W ˜ ) + ρ ( κ α , ω ) otherwise , + E ex R , P S X O ( η ) , = E 3 ( κ α , ω , R , P S X ) O ( η ) . 1 n log P ¯ Q ( B n ) E d min P V ˜ : P V ˜ W ˜ D n ( P V W , η ) D ( P V ˜ | | Q V ) + E b ( κ α , ω , R ) if   R < ζ ( κ α , ω ) + η , + E sp ( P S X , θ ) θ O ( η ) , min P V ˜ : P V ˜ W ˜ D n ( P V W , η ) D ( P V ˜ | | Q V ) + ρ ( κ α , ω ) otherwise , + E sp ( P S X , θ ) θ O ( η ) , = E 4 ( κ α , ω , R , P S X , θ ) O ( η ) . Since the exponent of the type II error probability is lower bounded by the minimum of the exponent of the type II error-causing events, we have shown from the above that for a fixed ( ω , R , P S X , θ ) L ( κ α ) and sufficiently large n,
P ¯ P ( B n ) H ^ = 1 e n ( κ α O ( η ) ) ,
P ¯ Q ( B n ) H ^ = 0 e n ( κ ¯ s ( κ α , ω , R , P S X , θ ) O ( η ) ) ,
where
κ ¯ s ( κ α , ω , R , P S X , θ ) : = min E 1 ( κ α , ω ) , E 2 ( κ α , ω , R ) , E 3 ( κ α , ω , R , P S X ) , E 4 ( κ α , ω , R , P S X , θ ) .
Expurgation: To complete the proof, we extract a deterministic codebook B n that satisfies
P P ( B n ) H ^ = 1 e n ( κ α O ( η ) ) , P Q ( B n ) H ^ = 0 e n ( κ ¯ s ( κ α , ω , R , P S X , θ ) O ( η ) ) .
For this purpose, remove a set B n B n of highest type I error probability codebooks such that the remaining set B n B n has a probability of τ ( 0.25 , 0.5 ) , i.e., μ n B n B n = τ . Then, it follows from (35a) and (35b) that for all B n B n B n ,
P P ( B n ) H ^ = 1 2 e n ( κ α O ( η ) ) , P ˜ Q ( B n ) H ^ = 0 4 e n ( κ ¯ s ( κ α , ω , R , P S X , θ ) O ( η ) ) ,
where P ˜ Q ( B n ) = 1 τ E μ n P Q B n 𝟙 B n B n B n is a PMF. Perform one more similar expurgation step to obtain B n = B W , n , f b , B X , n B n B n such that for all sufficiently large n
P P ( B n ) H ^ = 1 2 e n ( κ α O ( η ) ) e n κ α O ( η ) ( log 2 / n ) , P Q ( B n ) H ^ = 0 4 e n κ ¯ s ( κ α , ω , R , P S X , θ ) O ( η ) e n κ ¯ s ( κ α , ω , R , P S X , θ ) O ( η ) ( log 4 / n ) .
Maximizing over ( ω , R , P S X , θ ) L ( κ α ) and noting that η > 0 is arbitrary completes the proof.

4.2. Proof of Corollary 1

Consider ( ω , P S X , θ ) L ( κ α ) and R = ζ ( κ α , ω ) . Then, ( ω , R , P S X , θ ) L ( κ α ) . Additionally, for any ( P U ˜ V ˜ W ˜ , Q U ˜ V ˜ W ˜ ) T 1 ( κ α , ω ) , we have
D ( P U ˜ V ˜ W ˜ | | Q U ˜ V ˜ W ˜ ) = D ( P U ˜ W ˜ | | Q U ˜ W ˜ ) + D P V ˜ | U ˜ W ˜ | | Q V ˜ | U ˜ W ˜ | P U ˜ W ˜ ( a ) D P V ˜ | U ˜ W ˜ | | P V | P U ˜ W ˜ = D P V ˜ U ˜ W ˜ | | P V P U ˜ W ˜ ( b ) D P V ˜ W ˜ | | P V P W ˜ = ( c ) D P V ^ W ^ | | P V P W ^ = I P ( V ^ ; W ^ ) + D ( P V ^ | | P V ) ,
where ( a ) is due to the non-negativity of KL divergence and since Q V ˜ | U ˜ W ˜ = P V ; ( b ) is because of the monotonicity of KL divergence Theorem 2.14 in [43]; ( c ) follows since for ( P U ˜ V ˜ W ˜ , Q U ˜ V ˜ W ˜ ) T 1 ( κ α , ω ) , P V ˜ W ˜ = P V ^ W ^ for some P U ^ V ^ W ^ L ^ ( κ α , ω ) . Minimizing over all P U ^ V ^ W ^ L ^ ( κ α , ω ) yields that
E 1 ( κ α , ω ) = min ( P U ˜ V ˜ W ˜ , Q U ˜ V ˜ W ˜ ) T 1 ( κ α , ω ) D ( P U ˜ V ˜ W ˜ | | Q U ˜ V ˜ W ˜ ) min P U ^ V ^ W ^ L ^ ( κ α , ω ) I P ( V ^ ; W ^ ) + D ( P V ^ | | P V ) = min P V ^ W ^ : P U ^ V ^ W ^ L ^ ( κ α , ω ) I P ( V ^ ; W ^ ) + D ( P V ^ | | P V ) : = E 1 i ( κ α , ω ) ,
where the inequality above follows from (36). Next, since ζ ( κ α , ω ) = R , we have that E 2 ( κ α , ω , R ) = . Additionally, by the non-negativity of KL divergence
E 3 ( κ α , ω , R , P S X ) = min ( P U ˜ V ˜ W ˜ , Q U ˜ V ˜ W ˜ ) T 3 ( κ α , ω ) D ( P U ˜ V ˜ W ˜ | | Q U ˜ V ˜ W ˜ ) + ρ ( κ α , ω ) + E ex R , P S X ρ ( κ α , ω ) + E ex ζ ( κ α , ω ) , P S X : = E 2 i ( κ α , ω , P S X ) , E 4 ( κ α , ω , P S X , θ ) = min P V ^ : P U ^ V ^ W ^ L ^ ( κ α , ω ) D ( P V ^ | | P V ) + ρ ( κ α , ω ) + E m ( P S X , θ ) θ = ρ ( κ α , ω ) + E m ( P S X , θ ) θ : = E 3 i ( κ α , ω , P S X , θ ) ,
where the final equality is since P U V P W | U L ^ ( κ α , ω ) for P W | U : = ω ( P U ) . The claim in (8) now follows from Theorem 1.
Next, we prove (10). Note that L ^ ( 0 , ω ) = { P U V W = P U V P W | U : P W | U = ω ( P U ) } and L ( 0 ) = { ( ω , P S X , θ ) F × P ( S × X ) × Θ ( P S X ) : I P ( U ; W ) < I P ( X ; Y | S ) , P W | U = ω ( P U ) , P S X Y : = P S X P Y | X } since E sp ( P S X , θ ) 0 and E ex I P ( U ; W ) , P S X 0 . Hence, we have
E 1 i ( 0 , ω ) min P U ^ V ^ W ^ L ^ ( 0 , ω ) I P ( V ^ ; W ^ ) = I P ( V ; W ) .
Additionally, ρ ( 0 , ω ) = I P ( V ; W ) , E 2 i ( 0 , ω , P S X ) ≥ ρ ( 0 , ω ) and E 3 i ( 0 , ω , P S X , θ ) ≥ ρ ( 0 , ω ) . By choosing P X S = P X P S , where P X is the capacity-achieving input distribution, we have I P ( X ; Y | S ) = C . Then, it follows from (8) and the continuity of E 1 i ( κ α , ω ) , E 2 i ( κ α , ω , P S X ) and E 3 i ( κ α , ω , P S X , θ ) in κ α that lim κ α → 0 κ ( κ α ) ≥ κ i ( 0 ) . On the other hand, lim κ α → 0 κ ( κ α ) ≤ κ i ( 0 ) follows from the converse proof in Proposition 7 in [10]. The proof of the cardinality bound | W | ≤ | U | + 1 follows from a standard application of the Eggleston–Fenchel–Carathéodory theorem (Theorem 18 in [48]), thus completing the proof.

4.3. Proof of Corollary 2

Specializing Theorem 1 to TAD, note that ρ ( κ α , ω ) = 0 since P U ^ V ^ W ^ = Q U Q V P W ^ | U ^ L ^ ( κ α , ω ) and I P ( V ^ ; W ^ ) = 0 . Additionally, for R ζ ( κ α , ω ) , E b ( κ α , ω , R ) = . Hence,
L ( κ α ) = ( ω , R , P S X , θ ) : ζ ( κ α , ω ) R < I P ( X ; Y | S ) , P S X Y = P S X P Y | X , min E sp ( P S X , θ ) , E ex R , P S X κ α , L ^ ( κ α , ω ) : = P U ^ V ^ W ^ : D P U ^ V ^ W ^ | | P U V W ^ κ α , P W ^ | U ^ = ω ( P U ^ ) , P U V W ^ = Q U Q V P W ^ | U ^ .
Then, we have
E 1 ( κ α , ω ) : = E 1 d ( κ α , ω ) : = min ( P U ˜ V ˜ W ˜ , Q U ˜ V ˜ W ˜ ) T 1 ( κ α , ω ) D ( P U ˜ V ˜ W ˜ | | Q U ˜ V ˜ W ˜ ) ( a ) min ( P U ˜ V ˜ W ˜ , Q U ˜ V ˜ W ˜ ) T 1 ( κ α , ω ) D ( P V ˜ W ˜ | | Q V ˜ W ˜ ) = ( b ) min ( P V ^ W ^ , Q V W ^ ) : P U ^ V ^ W ^ L ^ ( κ α , ω ) , Q U V W ^ = Q U V P W ^ | U ^ D ( P V ^ W ^ | | Q V W ^ ) ,
where ( a ) follows due to the data-processing inequality for KL divergence Theorem 2.15 in [43]; ( b ) is since ( P U ˜ V ˜ W ˜ , Q U ˜ V ˜ W ˜ ) T 1 ( κ α , ω ) implies that P V ˜ W ˜ = P V ^ W ^ and Q U ˜ V ˜ W ˜ = Q U V P W ^ | U ^ for some P U ^ V ^ W ^ L ^ ( κ α , ω ) . Next, note that since R ζ ( κ α , ω ) , E 2 ( κ α , ω , R ) = . Additionally,
E 3 ( κ α , ω , R , P S X ) = min ( P U ˜ V ˜ W ˜ , Q U ˜ V ˜ W ˜ ) T 3 ( κ α , ω ) D ( P U ˜ V ˜ W ˜ | | Q U ˜ V ˜ W ˜ ) + E ex R , P S X
= ( a ) E ex R , P S X ,
E 4 ( κ α , ω , P S X , θ ) = min P V ^ : P U ^ V ^ W ^ L ^ ( κ α , ω ) D ( P V ^ | | Q V ) + E m ( P S X , θ ) θ
= ( b ) E m ( P S X , θ ) θ = : E 3 d ( P S X , θ ) ,
where
(a)
is obtained by taking P U ^ V ^ W ^ = Q U Q V P W | U L ^ ( κ α , ω ) and P W | U = ω ( Q U ) in the definition of T 3 ( κ α , ω ) . This implies that ( P U ˜ V ˜ W ˜ , Q U ˜ V ˜ W ˜ ) = ( Q U V P W | U , Q U V P W | U ) T 3 ( κ α , ω ) , and hence that the first term in the right hand side (RHS) of (38a) is zero;
(b)
is due to Q U Q V P W | U L ^ ( κ α , ω ) for P W | U = ω ( Q U ) .
Since E ex R , P S X is a non-increasing function of R and R ζ ( κ α , ω ) , selecting R = ζ ( κ α , ω ) maximizes E 3 ( κ α , ω , R , P S X ) . Then, (11) follows from (37), (38b) and (38c).
Next, we prove (12). Note that ζ ( 0 , ω ) = I Q ( U ; W ) , where Q U W = Q U P W | U , P W | U = ω ( Q U ) , and since E sp ( P S X , θ ) 0 and E ex I Q ( U ; W ) , P S X 0 ,
L ( 0 ) = ( ω , P S X , θ ) F × P ( S × X ) × Θ ( P S X ) : I Q ( U ; W ) < I P ( X ; Y | S ) , Q U V W = Q U V P W | U , P W | U = ω ( Q U ) , P S X Y : = P S X P Y | X .
Additionally, L ^ ( 0 , ω ) = Q U Q V P W | U : P W | U = ω ( Q U ) . By choosing θ = θ l ( P S X ) (defined above (6a)) that maximizes E 3 d ( P S X , θ ) , we have
E 1 d ( 0 , ω ) min ( P V ^ W ^ , Q V W ^ ) : P U ^ V ^ W ^ L ^ ( 0 , ω ) , Q U V W ^ = Q U V P W ^ | U ^ D ( P V ^ W ^ | | Q V W ^ ) = min ( P W | U , P S X ) : I Q ( U ; W ) I P ( X ; Y | S ) , Q U V W = Q U V P W | U , P S X Y = P S X P Y | X D ( Q V Q W | | Q V W ) ,
E 2 d ( 0 , ω , P S X ) = E ex I Q ( U ; W ) , P S X ,
E 3 d ( P S X , θ l ( P S X ) ) = E m ( P S X , θ l ( P S X ) ) + θ l ( P S X ) = θ l ( P S X ) ,
where (39c) is due to E m ( P S X , θ l ( P S X ) ) = 0 . The latter in turn follows similar to (A10) and (A11) from the definition of E m ( · , · ) . From (11), (39a,39b,39c), and the continuity of E 1 d ( κ α , ω ) , E 2 d ( κ α , ω , P S X ) in κ α , (12) follows. The proof of the cardinality bound | W | | U | + 1 in the RHS of (39a) follows via a standard application of the Eggleston–Fenchel–Carathéodory Theorem (Theorem 18 in [48]). To see this, note that it is sufficient to preserve { Q U ( u ) , u U } , D ( Q V Q W | | Q V W ) and H Q ( U | W ) , all of which can be written as a linear combination of functionals of Q U | W ( · | w ) with weights Q W ( w ) . Thus, it requires | U | 1 points to preserve { Q U ( u ) , u U } and one each for D ( Q V Q W | | Q V W ) and H Q ( U | W ) . This completes the proof.
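As an illustration of the divergence D ( Q V Q W | | Q V W ) that appears in the optimization defining (12), the sketch below evaluates it for an assumed toy joint PMF Q U V and a fixed test channel P W | U ; the optimization over P W | U (and over P S X ) in (12) is not attempted here.
```python
# A toy evaluation of D(Q_V Q_W || Q_VW) for an assumed Q_UV and a fixed P_{W|U}.
import itertools
import math

Q_UV = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}   # assumed Q_UV
P_W_given_U = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.2, 1: 0.8}}      # assumed P_{W|U}

# Induced joint Q_VW(v, w) = sum_u Q_UV(u, v) * P_{W|U}(w | u), and its marginals.
Q_VW = {(v, w): sum(Q_UV[(u, v)] * P_W_given_U[u][w] for u in (0, 1))
        for v, w in itertools.product((0, 1), repeat=2)}
Q_V = {v: sum(Q_VW[(v, w)] for w in (0, 1)) for v in (0, 1)}
Q_W = {w: sum(Q_VW[(v, w)] for v in (0, 1)) for w in (0, 1)}

# D(Q_V x Q_W || Q_VW): divergence of the product of marginals from the joint.
D = sum(Q_V[v] * Q_W[w] * math.log(Q_V[v] * Q_W[w] / Q_VW[(v, w)])
        for v, w in itertools.product((0, 1), repeat=2))
print(D)
```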

4.4. Proof of Theorem 2

We will show that the error-exponent pairs ( κ α , κ h ( κ α ) ) and ( κ α , κ u ( κ α ) ) are achieved by a hybrid coding scheme and an uncoded transmission scheme, respectively. First, we describe the hybrid coding scheme.
Let n N , | W | < , κ α > 0 , and ( P S , ω ( · , P S ) , P X | U S W , P X | U S ) L h ( κ α ) . Further, let η > 0 be a small number, and choose a sequence s T n P S ^ , where P S ^ satisfies D P S ^ | | P S η . Set R : = ζ ( κ α , ω , P S ^ ) .
Encoding: The encoder performs type-based quantization followed by hybrid coding [40]. The details are as follows:
Quantization codebook: Let D n ( P U , η ) be as defined in (15). Consider some ordering on the types in D n ( P U , η ) and denote the elements as P U ^ i , i | D n ( P U , η ) | . For each joint type P S ^ U ^ i such that P U ^ i D n ( P U , η ) and S ^ U ^ i , choose a joint type variable P S ^ U ^ i W ^ i , P W ^ i T ( W n ) , such that D P W ^ i | U ^ i S ^ | | P W i | U S ^ | P U ^ i S ^ η / 3 and I ( S ^ , U ^ i ; W ^ i ) R + ( η / 3 ) , where P W i | U , S = ω ( P U ^ i , P S ^ ) . Define D n ( P S U W , η ) : = P S ^ U ^ i W ^ i : i | D n ( P U , η ) | , R i : = I P ( S ^ , U ^ i ; W ^ i ) + ( η / 3 ) for i | D n ( P U , η ) | and M i : = 1 + m = 1 i 1 e n R m : m = 1 i e n R m , i | D n ( P U , η ) | . Let B W , n = W ( j ) W n , 1 j i = 1 | D n ( P U , η ) | e n R i denote a random quantization codebook such that for i | D n ( P U , η ) | , each codeword W ( j ) , j M i , is independently selected from T n ( P W ^ i ) according to uniform distribution, i.e., W ( j ) Unif T n ( P W ^ i ) . Let B W , n denote a realization of B W , n .
Type-based hybrid coding: For u T n P U ^ i such that P U ^ i D n ( P U , η ) for some i | D n ( P U , η ) | , let
M ¯ u , B W , n : = j M i : w ( j ) B W , n , ( s , u , w ( j ) ) T n ( P S ^ U ^ i W ^ i ) , P S ^ U ^ i W ^ i D n ( P S U W , η ) .
If | M ¯ u , B W , n | 1 , let M u , B W , n denote an index selected uniformly at random from the set M ¯ u , B W , n ; otherwise, set M u , B W , n = 0 . Given B W , n and u U n , the quantizer outputs M = M u , B W , n , where the support of M is M : = { 0 } i = 1 | D n ( P U , η ) | M i . Note that for sufficiently large n, it follows similarly to (18) that | M | e n ( R + η ) . For a given B W , n and u U n , the encoder transmits X P X | U S W n ( · | u , s , w ( m ) ) if M = m 0 , and X P X | U S n ( · | u , s ) if M = 0 .
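The type-based quantization rule above can be mimicked in a few lines. The sketch below uses toy binary alphabets, a hypothetical codebook and a hypothetical target joint type, and reproduces only the selection rule: a uniformly random index among the codewords whose joint type with ( s , u ) matches the target, and 0 if none matches.
```python
# A toy sketch of the type-based quantization rule described above.
import random
from collections import Counter

def joint_type(*seqs):
    """Empirical joint type (as raw counts) of equal-length sequences."""
    return Counter(zip(*seqs))

def quantize(s, u, codebook, target_type):
    matches = [j for j, w in enumerate(codebook, start=1)
               if joint_type(s, u, w) == target_type]
    return random.choice(matches) if matches else 0

s, u = (0, 1, 0, 1), (1, 1, 0, 0)
codebook = [(0, 1, 1, 0), (1, 1, 0, 0), (0, 0, 1, 1)]   # hypothetical quantization codebook
target = joint_type(s, u, (1, 1, 0, 0))                 # desired joint type with (s, u)
print(quantize(s, u, codebook, target))                 # index of a matching codeword, or 0
```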
Acceptance region: For a given codebook B W , n and m M { 0 } , let O m denote the set of u such that M u , B W , n = m . For each m M { 0 } and u O m , set
Z m ( u ) = ( v , y ) V n × Y n : ( s , u , w ¯ m , v , y ) J n κ α + η , P S ^ U W m V Y ,
where recall that J n ( r , P X ) : = { x X n : D P x | | P X r } , and
P S ^ U W m V X Y = P S ^ P U V P W m | U S ^ P X | U S ^ W m P Y | X ,
P W m | U S ^ = ω ( P u , P S ^ )   and   P X | U S ^ W m = P X | U S W .
For m M { 0 } , define Z m : = { ( v , y ) : ( v , y ) Z m ( u ) for   some u O m } . The acceptance region for H 0 is given by A n : = m M 0 s × m × Z m or equivalently as A n e : = m M 0 s × O m × Z m .
Decoding: Given codebook B W , n , Y = y , and V = v , if ( v , y ) m M { 0 } Z m , then M ^ = m ^ , where m ^ : = arg min j M 0 H e ( w ( j ) | v , y , s ) . Otherwise, M ^ = 0 . Denote the decoder induced by the above operations by g B W , n : S n × V n × Y n M .
Testing: If M ^ = 0 , H ^ = 1 is declared. Otherwise, H ^ = 0 or H ^ = 1 is declared depending on whether ( s , m ^ , v , y ) A n or ( s , m ^ , v , y ) A n , respectively. Denote the decision function induced by g B W , n and A n by g n : S n × V n × Y n H ^ .
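The decoder above is a minimum empirical conditional entropy rule. The following sketch computes H e ( w ( j ) | v , y , s ) from joint types for a hypothetical two-codeword codebook and returns the minimizing index; it illustrates the rule only, not the full decoding-plus-testing procedure.
```python
# A sketch of the minimum empirical conditional entropy decoding rule above.
import math
from collections import Counter

def empirical_cond_entropy(w, v, y, s):
    """H(W | V, Y, S) evaluated on the joint type of (w, v, y, s), in nats."""
    n = len(w)
    joint = Counter(zip(w, v, y, s))
    cond = Counter(zip(v, y, s))
    return sum((c / n) * math.log(cond[(vi, yi, si)] / c)
               for (wi, vi, yi, si), c in joint.items())

def decode(candidates, v, y, s):
    # Return the candidate index with the smallest empirical conditional entropy.
    return min(candidates, key=lambda j: empirical_cond_entropy(candidates[j], v, y, s))

v, y, s = (0, 0, 1, 1), (0, 0, 1, 1), (0, 0, 0, 0)
candidates = {1: (0, 0, 1, 1), 2: (0, 1, 1, 0)}   # hypothetical codewords w(1), w(2)
print(decode(candidates, v, y, s))                # 1: its conditional entropy is 0
```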
Induced probability distribution: The PMFs induced by a code c n = ( f n , g n ) with respect to codebook B W , n under H 0 and H 1 are
P UV M XY M ^ H ^ ( B W , n , c n ) ( u , v , m , x , y , m ^ , h ^ ) : = P U V n ( u , v ) 𝟙 M u , B W , n = m P X | U S W n ( x | s , u , w ( m ) ) P Y | X n ( y | x ) 𝟙 g B W , n ( v , y , s ) = m ^ 𝟙 h ^ = 𝟙 ( s , m ^ , v , y ) A n c , if   m 0 , P U V n ( u , v ) 𝟙 M u , B W , n = m P X | U S n ( x | s , u ) P Y | X n ( y | x ) 𝟙 g B W , n ( v , y , s ) = m ^ 𝟙 h ^ = 𝟙 ( s , m ^ , v , y ) A n c , otherwise ,
and
Q UV M XY M ^ H ^ ( B W , n , c n ) ( u , v , m , x , y , m ^ , h ^ ) : = Q U V n ( u , v ) 𝟙 M u , B W , n = m P X | U S W n ( x | s , u , w ( m ) ) P Y | X n ( y | x ) 𝟙 g B W , n ( v , y , s ) = m ^ 𝟙 h ^ = 𝟙 ( s , m ^ , v , y ) A n c , if   m 0 , Q U V n ( u , v ) 𝟙 M u , B W , n = m P X | U S n ( x | s , u ) P Y | X n ( y | x ) 𝟙 g B W , n ( v , y , s ) = m ^ 𝟙 h ^ = 𝟙 ( s , m ^ , v , y ) A n c , otherwise ,
respectively. For brevity, we will denote B W , n by B n , B W , n by B n , and the above probability distributions by P ( B n ) and Q ( B n ) . Let B n and μ n stand for the support and probability measure of B n , respectively, and set P ¯ P ( B n ) : = E μ n P P ( B n ) , P ¯ Q ( B n ) : = E μ n P Q ( B n )
Analysis of the type I and type II error probabilities: We analyze the expected type I and type II error probabilities, where the expectation is with respect to the randomness of B n , followed by the expurgation technique to extract a sequence of deterministic codebooks { B n } n N and a code { c n = ( f n , g n ) } n N that achieves the lower bound in Theorem 2.
Type I error probability: Denoting by A n the random acceptance region for H 0 , note that a type I error can occur only under the following events:
(i)
E EE : = P U ^ D n ( P U , η ) u T n ( P U ^ ) E EE ( u ) , where
E EE ( u ) : = j M { 0 } s . t . ( s , u , W ( j ) ) T n ( P S ^ U ^ i W ^ i ) , P S ^ U ^ i = P s u , P S ^ U ^ i W ^ i D n ( P S U W , η ) ,
(ii)
E NE : = { M ^ = M and ( s , M ^ , V , Y ) A n } ,
(iii)
E ODE : = { M 0 , M ^ M and ( s , M ^ , V , Y ) A n } ,
(iv)
E SDE : = { M = 0 , M ^ M and ( s , M ^ , V , Y ) A n } .
By definition of R i , we have, similar to (24), the following:
P ¯ P B n ( E EE ) e e n Ω ( η ) .
Next, the event E NE can be upper bounded as
P ¯ P B n E NE | E EE c P ¯ P B n ( s , M ^ , V , Y ) A n | M ^ = M , E E E c = 1 P ¯ P B n ( s , U , V , Y ) A n e | E E E c .
For u ∈ O m , note that, similar to Equation (4.17) in [14], we have
P ¯ P B n ( V , Y ) Z m ( u ) | U = u , W ( m ) = w ¯ m , E E E c 1 e n κ α + η 3 D ( P u | | P U ) .
From this and (15), we obtain, similar to Equation (4.22) in [14] that
P ¯ P B n ( ( s , U , V , Y ) A e n | E E E c ) 1 e n κ α .
Substituting (43) in (42) yields
P ¯ P B n E NE | E EE c e n κ α .
Next, we bound the probability of the event E ODE as follows:
P ¯ P B n E ODE = P ¯ P B n M 0 , M ^ M , ( s , M , V , Y ) A n , ( s , M ^ , V , Y ) A n + P ¯ P B n M 0 , M ^ M , ( s , M , V , Y ) A n , ( s , M ^ , V , Y ) A n P ¯ P B n M 0 , M ^ M , ( s , M , V , Y ) A n , ( s , M ^ , V , Y ) A n + P ¯ P B n M 0 , M ^ M , ( s , M , V , Y ) A n
( a ) P ¯ P B n M 0 , M ^ M , ( s , M , V , Y ) A n , ( s , M ^ , V , Y ) A n + e e n Ω ( η ) + e n κ α
P ¯ P B n M ^ M | M 0 , ( s , M , V , Y ) A n + e e n Ω ( η ) + e n κ α ,
= ( b ) P ¯ P B n M ^ M | M 0 , M ^ 0 , ( s , M , V , Y ) A n
( c ) e n ρ ( κ α , ω , P S , P X | U S W ) ζ ( κ α , ω , P S ^ ) O ( η ) ,
where ( a ) follows similar to (29) using (41) and (43); ( b ) is since ( s , M , V , Y ) A n implies that M ^ 0 ; and ( c ) follows similar to (33). Further,
P ¯ P B n E SDE P ¯ P B n M = 0 P ¯ P B n M = 0 | E E E c + P ¯ P B n E E E = u : P u D n ( P U , η ) P U n ( u ) + P ¯ P B n E E E e n κ α + e e n Ω ( η ) ,
where the penultimate equality is since, given E E E c , M = 0 occurs only for U = u such that P u ∉ D n ( P U , η ) , and the final inequality follows from (41), the definition of D n ( P U , η ) and Lemma 1.6 in [38]. From (41), (44), (47) and (48), the expected type I error probability is upper bounded by e − n ( κ α − O ( η ) ) for sufficiently large n via the union bound.
Type II error probability: Next, we analyze the expected type II error probability over B n . Let
D n ( P S V W Y , η ) : = P S ^ V ^ W ^ Y ^ : ( s , u , v , w ¯ , y ) m M { 0 } J n κ α + η , P S ^ U V W m Y , P S ^ U V W m Y satisfies ( 40 )   and   P s u v w ¯ y = P S ^ U ^ V ^ W ^ Y ^ , F 1 , n ( η ) : = P S ^ U ˜ V ˜ W ˜ Y ˜ T S n × U n × V n × W n × Y n : P S ^ U ˜ W ˜ D n ( P S U W , η ) , P S ^ V ˜ W ˜ Y ˜ D n ( P S V W Y , η ) .
A type II error can occur only under the following events:
(a) 
E a : = M ^ = M 0 , ( s , U , V , W ( M ) , Y ) T n P S ^ U ^ V ^ W ^ Y ^ s . t . P U ^ W ^ D n ( P S U W , η )   and   P S ^ V ^ W ^ Y ^ D n ( P S V W Y , η ) ,
(b) 
E b : = M 0 , M ^ M , ( s , U , V , W ( M ) , Y , W ( M ^ ) ) T n P S ^ U ^ V ^ W ^ Y ^ W ^ d s . t . P S ^ U ^ W ^ D n ( P S U W , η ) , P S ^ V ^ W ^ d Y ^ D n ( P S V W Y , η ) ,   and   H e W ( M ^ ) | s , V , Y H e W ( M ) | s , V , Y ,
(c) 
E c : = M = 0 , M ^ M , ( s , V , Y , W ( M ^ ) ) T n P S ^ V ^ Y ^ W ^ d s . t . P S ^ V ^ W ^ d Y ^ D n ( P S V W Y , η ) .
Considering the event E a , we have
P ¯ Q B n E a P S ^ U ˜ V ˜ W ˜ Y ˜ F 1 , n ( η ) ( u , v , w ¯ , y ) : ( s , u , v , w ¯ , y ) T n ( P S ^ U ˜ V ˜ W ˜ Y ˜ ) m M { 0 } P ¯ Q B n U = u , V = v , M = m , W ( m ) = w ¯ , Y = y | S = s P S ^ U ˜ V ˜ W ˜ Y ˜ F 1 , n ( η ) ( u , v , w ¯ , y ) : ( s , u , v , w ¯ , y ) T n ( P S ^ U ˜ V ˜ W ˜ Y ˜ ) m M { 0 } P ¯ Q B n U = u , V = v , M = m | S = s P ¯ Q B n W ( m ) = w ¯ | U = u , V = v , M = m , S = s P ¯ Q B n Y = y | U = u , V = v , M = m , W ( m ) = w ¯ , S = s ( a ) P S ^ U ˜ V ˜ W ˜ Y ˜ F 1 , n ( η ) ( u , v , w ¯ , y ) : ( s , u , v , w ¯ , y ) T n ( P S ^ U ˜ V ˜ W ˜ Y ˜ ) e n H ( U ˜ , V ˜ ) + D P U ˜ V ˜ | | Q U V e n H ( W ˜ | S ^ , U ˜ ) η e n H ( Y ˜ | U ˜ , S ^ , W ˜ ) + D P Y ˜ | U ˜ S ^ W ˜ | | P Y | U S W | P U ˜ S ^ W ˜ P S ^ U ˜ V ˜ W ˜ Y ˜ F 1 , n ( η ) e n H ( U ˜ , V ˜ , W ˜ , Y ˜ | S ^ ) e n H ( U ˜ , V ˜ ) + D P U ˜ V ˜ | | Q U V e n H ( W ˜ | S ^ , U ˜ ) η e n H ( Y ˜ | U ˜ , S ^ , W ˜ ) + D P Y ˜ | U ˜ S ^ W ˜ | | P Y | U S W | P U ˜ S ^ W ˜ e n E 1 , n ,
where
E 1 , n : = min P S ^ U ˜ V ˜ W ˜ Y ˜ F 1 , n ( η ) H ( U ˜ , V ˜ ) + D P U ˜ V ˜ | | Q U V + H ( W ˜ | S ^ , U ˜ ) η + H ( Y ˜ | U ˜ , S ^ , W ˜ ) + D P Y ˜ | U ˜ S ^ W ˜ | | P Y | U S W | P U ˜ S ^ W ˜ H ( U ˜ , V ˜ , W ˜ , Y ˜ | S ^ ) 1 n | | U | | V | | W | | Y | log ( n + 1 ) min ( P U ˜ V ˜ W ˜ Y ˜ S , Q U ˜ V ˜ W ˜ Y ˜ S ) T 1 ( κ α , ω , P S , P X | U S W ) D ( P U ˜ V ˜ W ˜ Y ˜ | S | | Q U V W Y | S | P S ) O ( η ) = E 1 ( κ α , ω ) O ( η ) .
For the inequality in ( a ) above, we used P ¯ Q B n M = m | U = u , V = v , S = s 1 and
P ¯ Q B n W ( m ) = w ¯ | U = u , V = v , S = s , M = m e n H ( W ˜ | S ^ , U ˜ ) η , if   w ¯ T n ( W ˜ ) , 0 , otherwise ,
which in turn follows from the fact that, given M = m and U = u , W ( m ) is uniformly distributed in the set T n ( P W ˜ | S ^ U ˜ , s , u ) and that, for sufficiently large n, | T n ( P W ˜ | S ^ U ˜ , s , u ) | ≥ e n ( H ( W ˜ | S ^ , U ˜ ) − η ) .
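The type-class size fact just used can be checked numerically. The sketch below computes the exact size of a conditional type class as a product of multinomial coefficients and compares its per-symbol logarithm with H ( W | S , U ) ; the alphabets and per-cell counts are illustrative, not taken from the proof.
```python
# A numerical check of the conditional type-class size bound: the per-symbol
# log-size lies below H(W | S, U) and approaches it as n grows.
from math import comb, log

def cond_type_class_size(counts):
    """counts[(s, u)] = list of occurrence counts of each w symbol in that cell."""
    size = 1
    for cs in counts.values():
        m = sum(cs)
        for c in cs:
            size *= comb(m, c)     # multinomial coefficient, built up binomially
            m -= c
    return size

def cond_entropy(counts, n):
    """H(W | S, U) in nats for the conditional type described by counts."""
    h = 0.0
    for cs in counts.values():
        m = sum(cs)
        h += sum((c / n) * log(m / c) for c in cs if c > 0)
    return h

counts = {(0, 0): [10, 20], (0, 1): [15, 5], (1, 0): [4, 4], (1, 1): [1, 1]}
n = sum(sum(cs) for cs in counts.values())          # n = 60 here
print(log(cond_type_class_size(counts)) / n, cond_entropy(counts, n))
```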
Next, we analyze the probability of the event E b . Let
F 2 , n ( η ) : = P S ^ U ˜ V ˜ W ˜ Y ˜ W ˜ d : P S ^ U ˜ W ˜ D n ( P S U W , η ) , P S ^ V ˜ W ˜ d Y ˜ D n ( P S V W Y , η ) H W ˜ d | S ^ , V ˜ , Y ˜ H W ˜ | S ^ , V ˜ , Y ˜ .
Then,
P ¯ Q B n E b P S ^ U ˜ V ˜ W ˜ Y ˜ W ˜ d F 2 , n ( η ) ( u , v , w ¯ , y , w ) : ( s , u , v , w ¯ , y , w ) T n ( P S ^ U ˜ V ˜ W ˜ Y ˜ W ˜ d ) m M { 0 } P ¯ Q B n U = u , V = v , M = m , W ( m ) = w ¯ , Y = y | S = s m ^ M { 0 , m } P ¯ Q B n W ¯ ( m ^ ) = w | U = u , M = m , W ( m ) = w ¯ , S = s P S ^ U ˜ V ˜ W ˜ Y ˜ W ˜ d F 2 , n ( η ) ( u , v , w ¯ , y ) : ( s , u , v , w ¯ , y ) T n ( P S ^ U ˜ V ˜ W ˜ Y ˜ ) e n H ( U ˜ , V ˜ ) + D P U ˜ V ˜ | | Q U V e n H ( W ˜ | S ^ , U ˜ ) η e n H ( Y ˜ | U ˜ , S ^ , W ˜ ) + D P Y ˜ | U ˜ S ^ W ˜ | | P Y | U S W | P U ˜ S ^ W ˜ e n ζ ( κ α , ω , P S ^ ) + η 2 e n H ( W ˜ d | S ^ , V ˜ , Y ˜ ) e n H ( W ˜ d ) η e n E 2 , n ,
where
E 2 , n min ( P U ˜ V ˜ W ˜ Y ˜ S , Q U ˜ V ˜ W ˜ Y ˜ S ) T 2 ( κ α , ω , P S , P X | U S W ) D ( P U ˜ V ˜ W ˜ Y ˜ | S | | Q U V W Y | S | P S ) + ρ ( κ α , ω , P S , P X | U S W ) ζ ( κ α , ω , P S ) O ( η ) = E 2 ( κ α , ω , P S , P X | U S W ) O ( η ) .
Finally, considering the event E c , we have
P ¯ Q B n E c = u T n ( P U ˜ ) : P U ˜ D n ( P U , η ) P ¯ Q B n U = u , E EE , E c | S = s + u T n ( P U ˜ ) : P U ˜ D n ( P U , η ) P ¯ Q B n U = u , E c | S = s .
The first term in the RHS decays doubly exponentially as e − e n Ω ( η ) , while the second term can be handled as follows:
u T n ( P U ˜ ) : P U ˜ D n ( P U , η ) P ¯ Q B n U = u , E c | S = s u T n ( P U ˜ ) : P U ˜ D n ( P U , η ) ( v , y , w ) : ( s , v , y , w ) T n P S ^ V ˜ Y ˜ W ˜ d , P S ^ V ˜ W ˜ d Y ˜ D n ( P S V W Y , η ) m ^ M { 0 } P ¯ Q B n U = u , V = v , M = 0 , Y = y | S = s m ^ M { 0 } P ¯ Q B n W ( m ^ ) = w ¯ P U ˜ S ^ V ˜ W ˜ d Y ˜ D n ( P U , η ) c × D n ( P S V W Y , η ) e n H ( U ˜ , V ˜ , Y ˜ | S ^ ) e n H ( U ˜ , V ˜ , Y ˜ | S ^ ) + D P U ˜ V ˜ Y ˜ | S ^ | | Q U V Y | S ^ | P S ^ e n H ( W ˜ d | S ^ , V ˜ , Y ˜ ) e n ( R + η ) e n H ( W ˜ d ) η e n E 3 , n ,
where
E 3 , n min P V ^ Y ^ S : P U ^ V ^ W ^ Y ^ S L ^ h ( κ α , ω , P S , P X | U S W ) D P V ^ Y ^ | S | | Q V Y | S | P S + ρ ( κ α , ω , P S , P X | U S W ) ζ ( κ α , ω , P S ) O ( η ) = E 3 ( κ α , ω , P S , P X | U S W , P X | U S ) O ( η ) .
Since the exponent of the type II error probability is lower bounded by the minimum of the exponents of the events causing a type II error, it follows from (49), (50) and (51) that for a fixed ( P S , ω ( · , P S ) , P X | U S W , P X | U S ) ∈ L h ( κ α )
P ¯ P ( B n ) H ^ = 1 e n ( κ α O ( η ) ) ,
P ¯ Q ( B n ) H ^ = 0 e n κ ¯ h ( κ α , ω , P S , P X | U S W , P X | U S ) O ( η ) ,
where κ ¯ h = min E 1 ( κ α , ω ) , E 2 ( κ α , ω , P S , P X | U S W ) , E 3 ( κ α , ω , P S , P X | U S W , P X | U S ) . Performing expurgation as in the proof of Theorem 1 to obtain a deterministic codebook B n satisfying (52a, 52b), maximizing over P S , ω ( · , P S ) , P X | U S W , P X | U S L h ( κ α ) and noting that η > 0 is arbitrary yields κ ( κ α ) κ h ( κ α ) .
Finally, we show that κ ( κ α ) ≥ κ u ( κ α ) , which will complete the proof. Fix P X | U S and let P U V X Y : = P U V P X | U S P Y | X and Q U V X Y : = Q U V P X | U S P Y | X . Consider an uncoded transmission scheme in which the channel input is generated as X ∼ f n ( · | u ) = P X | U S n ( · | u , s ) . Let the decision rule g n be specified by the acceptance region A n = { ( s , v , y ) : D ( P v y | s | | P V Y | S | P s ) ≤ κ α + η } for some small η > 0 . Then, it follows from Lemma 2.6 in [42] that for sufficiently large n,
α n ( f n , g n ) = P V Y | S n ( A n c | s ) ≤ e − n κ α , β n ( f n , g n ) = Q V Y | S n ( A n | s ) ≤ e − n ( κ u ( κ α ) − O ( η ) ) .
The proof is complete by noting that η > 0 is arbitrary.
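The acceptance test of the uncoded scheme is simply a conditional divergence test on empirical types. The sketch below evaluates it for an assumed toy nominal P V Y | S , short placeholder sequences, and placeholder thresholds.
```python
# A sketch of the acceptance test of the uncoded transmission scheme above:
# accept H0 iff D(P_{vy|s} || P_{VY|S} | P_s) <= kappa_alpha + eta, where P_{vy|s}
# is the empirical conditional type of the observed (v, y) given s.
import math
from collections import Counter

def cond_kl_of_type(s, v, y, P_vy_given_s):
    """D(P_{vy|s} || P_{VY|S} | P_s) from the joint type of (s, v, y), in nats."""
    n = len(s)
    joint = Counter(zip(s, v, y))
    marg_s = Counter(s)
    return sum((marg_s[si] / n) * (c / marg_s[si])
               * math.log((c / marg_s[si]) / P_vy_given_s[si][(vi, yi)])
               for (si, vi, yi), c in joint.items())

# Assumed toy nominal model and observations (binary S, V, Y).
P_vy_given_s = {0: {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4},
                1: {(0, 0): 0.25, (0, 1): 0.25, (1, 0): 0.25, (1, 1): 0.25}}
s = (0, 0, 1, 1, 0, 1)
v = (0, 1, 0, 1, 1, 0)
y = (0, 1, 1, 1, 1, 0)
kappa_alpha, eta = 0.1, 0.01
d = cond_kl_of_type(s, v, y, P_vy_given_s)
print(d, d <= kappa_alpha + eta)    # accept H0 iff the divergence is small enough
```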

5. Conclusions

This work explored the trade-off between the type I and type II error-exponents for distributed hypothesis testing over a noisy channel. We proposed a separate hypothesis testing and channel coding scheme as well as a joint scheme utilizing hybrid coding, and analyzed their performance, resulting in two inner bounds on the error-exponents trade-off. The separate scheme recovers some of the existing bounds in the literature as special cases. We also showed, via an example of testing against dependence, that the joint scheme strictly outperforms the separate scheme at some points of the error-exponents trade-off. An interesting avenue for future research is the exploration of novel outer bounds that could shed light on the scenarios in which the separate or joint schemes are tight.

Author Contributions

Conceptualization, S.S. and D.G.; writing—original draft preparation, S.S. and D.G.; supervision, D.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research was partially funded by the European Research Council Starting Grant project BEACON (grant agreement number 677854).

Institutional Review Board Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
HT      Hypothesis testing
DHT     Distributed hypothesis testing
TAD     Testing against dependence
TAI     Testing against independence
SHTCC   Separate hypothesis testing and channel coding
JHTCC   Joint hypothesis testing and channel coding

Appendix A. Proof that Theorem 1 Recovers Theorem 2 in [10]

We prove that lim κ α 0 κ s ( κ α ) = κ s , where κ s is the lower bound on the type II error-exponent for a fixed type I error probability constraint and unit bandwidth ratio established in Theorem 2 in [10]. Note that L ^ ( 0 , ω ) = { P U V W = P U V P W | U , P W | U = ω ( P U ) } , ζ ( 0 , ω ) = I P ( U ; W ) , and ρ ( 0 , ω ) = I P ( V ; W ) . The result then follows from Theorem 1 by noting that L ^ ( κ α , ω ) , ζ ( κ α , ω ) and ρ ( κ α , ω ) are continuous in κ α and the fact that E sp ( P S X , θ ) , E ex R , P S X and E b ( κ α , ω , R ) are all greater than or equal to zero.

Appendix B. Proof that Theorem 2 Recovers Theorem 5 in [10]

We show that lim κ α 0 κ h ( κ α ) = κ h , where κ h is as defined in Theorem 5 in [10]. Note that L ^ h ( 0 , ω , P S , P X | U S W ) : = P U V W ^ Y S : P S U V W X Y : = P S P U V P W | U S P X | U S W P Y | X , P W | U S = ω ( P U , P S ) , ζ ( 0 , ω , P S ) : = I P ( U ; W | S ) , ρ ( 0 , ω , P S , P X | U S W ) = I P ( Y , V ; W | S ) ,
L h ( 0 ) : = P S , ω ( · , P S ) , P X | U S W , P X | U S : I P ( U ; W | S ) < I P ( Y , V ; W | S ) ,
and E b ( 0 , ω , P S , P X | U S W ) = I P ( Y , V ; W | S ) I P ( U ; W | S ) . The result then follows from Theorem 2 via the continuity of L ^ h ( κ α , · , · , · ) , ζ ( κ α , · , · ) , ρ ( κ α , · , · , · ) , L h ( κ α ) and E b ( κ α , · , · , · ) in κ α .

Appendix C. An Auxiliary Result

Here, we prove a result that was used in the proof of Theorem 1, namely Proposition A1 given below. For this purpose, we require a few properties of the log-moment generating function, which we briefly review next.
Lemma A1.
(Theorems 15.3 and 15.6 in [43])
(i) 
ψ P Z , f ( 0 ) = 0 and ψ P Z , f ′ ( 0 ) = E P Z [ f ( Z ) ] , where ψ P Z , f ′ ( λ ) denotes the derivative of ψ P Z , f ( λ ) with respect to λ.
(ii) 
ψ P Z , f ( λ ) is a strictly convex function in λ.
(iii) 
ψ P Z , f * ( θ ) is strictly convex and strictly positive in θ except that ψ P Z , f * ( E P Z [ f ( Z ) ] ) = 0 .
Proposition A1 is basically a characterization of the error-exponent region of a hypothesis testing problem, which we introduce next. Let P X 0 X 1 P ( X 2 ) be an arbitrary joint PMF, and consider a sequence of pairs of n-length sequences ( x ˜ , x ) such that
P x ˜ x ( x ˜ , x ) ( n ) P X 0 X 1 ( x ˜ , x ) , ( x ˜ , x ) X 2 .
Consider the following HT:
H 0 : Y P Y | X n ( · | x ˜ ) ,
H 1 : Y P Y | X n ( · | x ) .
With the achievability of an error-exponent pair ( κ α , κ β ) defined similarly to Definition 1, consider the error-exponent region of interest
R ( P X 0 X 1 ) : = { ( κ α , κ ( κ α , P X 0 X 1 ) ) : κ α ( 0 , κ α ) } ,
where κ ( κ α , P X 0 X 1 ) : = sup { κ β : ( κ α , κ β ) is   achievable   for   the HT in   ( A2 ) } and κ α : = inf { κ α : κ ( κ α , P X 0 X 1 ) = 0 } . We mention at this point that the notation R ( P X 0 X 1 ) is justified as the error-exponent region for the above hypothesis test depends on ( x ˜ , x ) only through its limiting joint type P X 0 X 1 , as will be evident later. Given this, the following proposition provides a single-letter characterization of R ( P X 0 X 1 ) .
Proposition A1.
R ( P X 0 X 1 ) = θ I ( P X 0 X 1 ) E P X 0 X 1 ψ P Y | X ( · | X 0 ) , Π X 0 , X 1 * ( θ ) , E P X 0 X 1 ψ P Y | X ( · | X 0 ) , Π X 0 , X 1 * ( θ ) θ
where Π x ˜ , x ( y ) : = log P Y | X ( y | x ) / P Y | X ( y | x ˜ ) for ( x ˜ , x ) X 2 ,
I ( P X 0 X 1 ) : = d min ( P X 0 X 1 ) , d max ( P X 0 X 1 ) , d min ( P X 0 X 1 ) : = E P X 0 X 1 D P Y | X ( · | X 0 ) | | P Y | X ( · | X 1 ) d max ( P X 0 X 1 ) : = E P X 0 X 1 D P Y | X ( · | X 1 ) | | P Y | X ( · | X 0 ) .
Proof.
Let ( x ˜ , x ) X n × X n be sequences that satisfy (A1). For simplicity, we will denote d max ( P X 0 X 1 ) and d min ( P X 0 X 1 ) by d max and d min , respectively.
Achievability: We will show that for d min < θ < d max ,
κ E P X 0 X 1 ψ P Y | X ( · | X 0 ) , Π X 0 , X 1 * ( θ ) , P X 0 X 1 E P X 0 X 1 ψ P Y | X ( · | X 0 ) , Π X 0 , X 1 * ( θ ) θ .
Consider the Neyman–Pearson test given by g n ( y ) = 𝟙 Π x ˜ , x ( n ) ( y ) n θ , where Π x ˜ , x ( n ) ( y ) : = i = 1 n Π x ˜ i , x i , P Y | X ( y i ) . Observe that the type I error probability can be upper bounded for θ > d min and sufficiently large n as follows:
α n g n = P P Y | X n ( · | x ˜ ) Π x ˜ , x ( n ) ( Y ) n θ ( a ) e sup λ 0 n θ λ ψ P Y | X n ( · | x ˜ ) , Π x ˜ , x ( n ) ( λ ) = ( b ) e sup λ R n θ λ 1 n ψ P Y | X n ( · | x ˜ ) , Π x ˜ , x ( n ) ( λ ) ,
where ( a ) follows from the Chernoff bound, and ( b ) holds because for θ > d min and sufficiently large n, the supremum in (A3) always occurs at λ 0 . To see this, note that the term l n ( λ ) : = θ λ n 1 ψ P Y | X n ( · | x ˜ ) , Π x ˜ , x ( n ) ( λ ) is a concave function of λ by Lemma A1 (i). Additionally, denoting its derivative with respect to λ by l n ( λ ) , we have
l n ( 0 ) = θ 1 n E P Y | X n ( · | x ˜ ) Π x ˜ , x ( n ) = θ 1 n i = 1 n E P Y | X ( · | x ˜ i ) log P Y | X ( Y i | x i ) / P Y | X ( Y i | x ˜ i )
( n ) θ + d min > 0 ,
where (A4) follows from Lemma A1 (iii), and (A5) is due to the absolute continuity assumption, P Y | X ( · | x ) P Y | X ( · | x ) , ( x , x ) X 2 on the channel, and (A1). Thus, by the concavity of l n ( λ ) , its supremum has to occur at λ 0 . Simplifying the term within the exponent in (A3), we obtain
1 n ψ P Y | X n ( · | x ) , Π x ˜ , x ( n ) ( λ ) = x ˜ , x P x ˜ x ( x ˜ , x ) log E P Y | X ( · | x ˜ ) P Y | X ( Y | x ) / P Y | X ( Y | x ˜ ) λ
( n ) E P X 0 X 1 log E P Y | X ( · | X 0 ) e λ Π X 0 , X 1 ( Y ) ,
where (A7) follows from (A1) and the absolute continuity assumption on P Y | X . Substituting (A7) in (A3) and from (1), we obtain for arbitrarily small (but fixed) δ > 0 and sufficiently large n, that
α n g n e sup λ R n θ λ E P X 0 X 1 log E P Y | X ( · | X 0 ) e λ Π X 0 , X 1 ( Y ) δ = e n E P X 0 X 1 sup λ R θ λ E P Y | X ( · | X 0 ) e λ Π X 0 , X 1 ( Y ) δ = e n E P X 0 X 1 ψ P Y | X ( · | X 0 ) , Π X 0 , X 1 * ( θ ) δ .
Similarly, it can be shown that for θ < d max ,
β n g n e n E P X 0 X 1 ψ P Y | X ( · | X 1 ) , Π X 0 , X 1 * ( θ ) δ .
Moreover, for ( x ˜ , x ) X 2 , we have
e ψ P Y | X ( · | x ) , Π x ˜ , x ( λ ) = y Y P Y | X λ + 1 ( · | x ) / P Y | X λ ( · | x ˜ ) = e ψ P Y | X ( · | x ˜ ) , Π x ˜ , x ( λ + 1 ) .
It follows that
ψ P Y | X ( · | x ) , Π x ˜ , x * ( θ ) : = sup λ R λ θ ψ P Y | X ( · | x ) , Π x ˜ , x ( λ ) = sup λ R λ θ ψ P Y | X ( · | x ˜ ) , Π x ˜ , x ( λ + 1 ) = ψ P Y | X ( · | x ˜ ) , Π x ˜ , x * ( θ ) θ .
Hence,
E P X 0 X 1 ψ P Y | X ( · | X 1 ) , Π X 0 , X 1 * ( θ ) = E P X 0 X 1 ψ P Y | X ( · | X 0 ) , Π X 0 , X 1 * ( θ ) θ .
From this, (A8) and (A9), we obtain for d min < θ < d max that
κ E P X 0 X 1 ψ P Y | X ( · | X 0 ) , Π X 0 , X 1 * ( θ ) δ , P X 0 X 1 E P X 0 X 1 ψ P Y | X ( · | X 0 ) , Π X 0 , X 1 * ( θ ) θ δ .
Then, the proof of achievability is completed by noting that δ > 0 is arbitrary and κ ( κ α , P X 0 X 1 ) is a continuous function of κ α for a fixed P X 0 X 1 .
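For a single symbol pair ( x ˜ , x ) over a BSC, the exponent pair in Proposition A1 reduces to ( ψ * ( θ ) , ψ * ( θ ) − θ ) , which can be traced numerically by a grid Legendre transform. In the sketch below, the crossover probability and the threshold range, taken between − D ( P Y | X ( · | x ˜ ) | | P Y | X ( · | x ) ) and D ( P Y | X ( · | x ) | | P Y | X ( · | x ˜ ) ) , are illustrative assumptions.
```python
# A numerical sketch of the exponent pair (psi*(theta), psi*(theta) - theta) for a
# single symbol pair (x_tilde, x) over a BSC(p): compute the log-MGF of
# Pi(Y) = log P(Y|x)/P(Y|x_tilde) under P(.|x_tilde), take its Legendre transform
# on a grid of lambda, and sweep the threshold theta.
import numpy as np

p = 0.1
P = {0: np.array([1 - p, p]),     # P(y | x = 0), y in {0, 1}
     1: np.array([p, 1 - p])}     # P(y | x = 1)
x_tilde, x = 0, 1
Pi = np.log(P[x] / P[x_tilde])    # log-likelihood ratio as a function of y

def psi(lam):
    """Log-MGF of Pi(Y) when Y ~ P(. | x_tilde)."""
    return np.log(np.sum(P[x_tilde] * np.exp(lam * Pi)))

lam_grid = np.linspace(-2.0, 3.0, 5001)
psi_vals = np.array([psi(l) for l in lam_grid])

def psi_star(theta):
    """Legendre transform sup_lambda (lambda * theta - psi(lambda)) on the grid."""
    return np.max(lam_grid * theta - psi_vals)

theta_lo = -np.sum(P[x_tilde] * np.log(P[x_tilde] / P[x]))   # -D(P(.|x_tilde)||P(.|x))
theta_hi = np.sum(P[x] * np.log(P[x] / P[x_tilde]))          #  D(P(.|x)||P(.|x_tilde))

for theta in np.linspace(0.99 * theta_lo, 0.99 * theta_hi, 5):
    type1_exp = psi_star(theta)            # type I error exponent at threshold theta
    type2_exp = psi_star(theta) - theta    # type II error exponent at the same theta
    print(round(theta, 3), round(type1_exp, 3), round(type2_exp, 3))
```
As the threshold grows, the type I exponent increases from 0 while the type II exponent decreases to 0, tracing the trade-off.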
Converse: Let I n ( x ˜ , x ) : = { i ∈ [ n ] : x ˜ i = x ˜   and   x i = x } . For any θ ∈ R and decision function g n , we have from Theorem 14.9 in [43] that
α n ( g n ) + e − n θ β n ( g n ) ≥ P P Y | X n ( · | x ˜ ) log P Y | X n ( Y | x ) / P Y | X n ( Y | x ˜ ) ≥ n θ .
Simplifying the RHS above, we obtain
P P Y | X n ( · | x ˜ ) log P Y | X n ( Y | x ) / P Y | X n ( Y | x ˜ ) n θ = P P Y | X n ( · | x ˜ ) x ˜ , x i I n ( x ˜ , x ) log P Y | X ( Y i | x i ) / P Y | X ( Y i | x ˜ i ) n θ = P P Y | X n ( · | x ˜ ) x ˜ , x i I n ( x ˜ , x ) log P Y | X ( Y i | x i ) / P Y | X ( Y i | x ˜ i ) ( x ˜ , x ) X 2 n P x ˜ x ( x ˜ , x ) θ ( a ) P P Y | X n ( · | x ˜ ) x ˜ , x i I n ( x ˜ , x ) log P Y | X ( Y i | x i ) / P Y | X ( Y i | x ˜ i ) n P x ˜ x ( x ˜ , x ) θ = ( b ) ( x ˜ , x ) X 2 P P Y | X n ( · | x ˜ ) i I n ( x ˜ , x ) log P Y | X ( Y i | x i ) / P Y | X ( Y i | x ˜ i ) n P x ˜ x ( x ˜ , x ) θ ,
where
(a)
follows since
x ˜ , x i I n ( x ˜ , x ) log P Y | X ( Y i | x i ) / P Y | X ( Y i | x ˜ i ) n P x ˜ x ( x ˜ , x ) θ x ˜ , x i I n ( x ˜ , x ) log P Y | X ( Y i | x i ) / P Y | X ( Y i | x ˜ i ) ( x ˜ , x ) X 2 n P x ˜ x ( x ˜ , x ) θ ;
(b)
is due to the independence of the events i I n ( x ˜ , x ) log P Y | X ( Y i | x i ) / P Y | X ( Y i | x ˜ i ) n P x ˜ x ( x ˜ , x ) θ for different ( x ˜ , x ) X 2 .
Define b x ˜ , x ( θ ) : = min Q ˜ x ˜ ∈ P ( Y ) : E Q ˜ x ˜ log P Y | X ( Y | x ) / P Y | X ( Y | x ˜ ) ≥ θ D ( Q ˜ x ˜ | | P Y | X ( · | x ˜ ) ) . Then, for arbitrary δ > 0 , δ ′ > δ and sufficiently large n, we can write
α n + e n θ β n ( a ) ( x ˜ , x ) X 2 e n P x ˜ x ( x ˜ , x ) b x ˜ , x ( θ ) + δ ( b ) ( x ˜ , x ) X 2 e n P x ˜ x ( x ˜ , x ) ψ P Y | X ( · | x ˜ ) , Π x ˜ , x * ( θ ) + δ = ( c ) e n E P X 0 X 1 ψ P Y | X ( · | X 0 ) , Π X 0 , X 1 * ( θ ) + δ ,
where ( a ) follows from Theorem 15.9 in [43]; ( b ) follows since b x ˜ , x ( θ ) = ψ P Y | X ( · | x ˜ ) , Π x ˜ , x * ( θ ) from Theorem 15.6 in [43] and Theorem 15.11 in [43]; and ( c ) is due to (A1). The equation above implies that
lim sup n min 1 n log α n , 1 n log β n + θ E P X 0 X 1 ψ P Y | X ( · | X 0 ) , Π X 0 , X 1 * ( θ ) + δ .
Hence, if log ( α n ) / n > E P X 0 X 1 ψ P Y | X ( · | X 0 ) , Π X 0 , X 1 * ( θ ) + δ for all sufficiently large n, then
lim sup n 1 n log β n E P X 0 X 1 ψ P Y | X ( · | X 0 ) , Π X 0 , X 1 * ( θ ) θ + δ .
Since δ and δ ′ are arbitrary, this implies via the continuity of κ ( κ α , P X 0 X 1 ) in κ α that
κ E P X 0 X 1 ψ P Y | X ( · | X 0 ) , Π X 0 , X 1 * ( θ ) , P X 0 X 1 E P X 0 X 1 ψ P Y | X ( · | X 0 ) , Π X 0 , X 1 * ( θ ) θ .
To complete the proof, we need to show that θ can be restricted to lie in I ( P X 0 X 1 ) . Toward this, it suffices to show the following:
(i)
E P X 0 X 1 ψ P Y | X ( · | X 0 ) , Π X 0 , X 1 * d min = 0 ,
(ii)
E P X 0 X 1 ψ P Y | X ( · | X 0 ) , Π X 0 , X 1 * d max = d max , and
(iii)
E P X 0 X 1 ψ P Y | X ( · | X 0 ) , Π X 0 , X 1 * ( θ ) and E P X 0 X 1 ψ P Y | X ( · | X 0 ) , Π X 0 , X 1 * ( θ ) θ are convex functions of θ .
We have
E P X 0 X 1 ψ P Y | X ( · | X 0 ) , Π X 0 , X 1 * d min : = sup λ R λ E P X 0 X 1 D P Y | X ( · | X 0 ) | | P Y | X ( · | X 1 ) E P X 0 X 1 ψ P Y | X ( · | X 0 ) , Π X 0 , X 1 ( λ ) x ˜ , x P X 0 X 1 ( x ˜ , x ) sup λ x ˜ , x R λ x ˜ , x D P Y | X ( · | x ˜ ) | | P Y | X ( · | x ) ψ P Y | X ( · | x ˜ ) , Π x ˜ , x ( λ x ˜ , x ) = 0 ,
where (A10) follows since each term inside the square braces in the penultimate equation is zero, which in turn follows from Lemma A1 (iii). Additionally,
E P X 0 X 1 ψ P Y | X ( · | X 0 ) , Π X 0 , X 1 * d min = x ˜ , x P X 0 X 1 ( x ˜ , x ) ψ P Y | X ( · | x ˜ ) , Π x ˜ , x * d min 0 ,
where (A11) follows from the non-negativity of ψ P Y | X ( · | x ˜ ) , Π x ˜ , x * for every ( x ˜ , x ) X 2 stated in Lemma A1 (iii). Combining (A10) and (A11) proves ( i ) . We also have
E P X 0 X 1 ψ P Y | X ( · | X 0 ) , Π X 0 , X 1 * d max d max = E P X 0 X 1 ψ P Y | X ( · | X 1 ) , Π X 0 , X 1 * ( d max ) = 0 ,
where the final equality follows similarly to the proof of ( i ) . This proves ( i i ) . Finally, (iii) follows from Lemma A1 (iii) and the fact that a weighted sum of convex functions is convex provided the weights are non-negative, thus completing the proof. □

References

1. Neyman, J.; Pearson, E. On the Problem of the Most Efficient Tests of Statistical Hypotheses. Philos. Trans. R. Soc. Lond. 1933, 231, 289–337.
2. Chernoff, H. A measure of asymptotic efficiency for tests of a hypothesis based on a sum of observations. Ann. Math. Statist. 1952, 23, 493–507.
3. Hoeffding, W. Asymptotically optimal tests for multinomial distributions. Ann. Math. Statist. 1965, 36, 369–400.
4. Blahut, R.E. Hypothesis Testing and Information Theory. IEEE Trans. Inf. Theory 1974, 20, 405–417.
5. Tuncel, E. On error-exponents in hypothesis testing. IEEE Trans. Inf. Theory 2005, 51, 2945–2950.
6. Ahlswede, R.; Csiszár, I. Hypothesis Testing with Communication Constraints. IEEE Trans. Inf. Theory 1986, 32, 533–542.
7. Han, T.S. Hypothesis Testing with Multiterminal Data Compression. IEEE Trans. Inf. Theory 1987, 33, 759–772.
8. Shimokawa, H.; Han, T.S.; Amari, S. Error Bound of Hypothesis Testing with Data Compression. In Proceedings of the IEEE International Symposium on Information Theory, Trondheim, Norway, 27 June–1 July 1994; p. 114.
9. Rahman, M.S.; Wagner, A.B. On the Optimality of Binning for Distributed Hypothesis Testing. IEEE Trans. Inf. Theory 2012, 58, 6282–6303.
10. Sreekumar, S.; Gündüz, D. Distributed Hypothesis Testing Over Discrete Memoryless Channels. IEEE Trans. Inf. Theory 2020, 66, 2044–2066.
11. Salehkalaibar, S.; Wigger, M. Distributed Hypothesis Testing Based on Unequal-Error Protection Codes. IEEE Trans. Inf. Theory 2020, 66, 4150–4182.
12. Berger, T. Decentralized estimation and decision theory. In Proceedings of the IEEE 7th Spring Workshop on Information Theory, Mt. Kisco, NY, USA, September 1979.
13. Shalaby, H.M.H.; Papamarcou, A. Multiterminal Detection with Zero-Rate Data Compression. IEEE Trans. Inf. Theory 1992, 38, 254–267.
14. Han, T.S.; Kobayashi, K. Exponential-Type Error Probabilities for Multiterminal Hypothesis Testing. IEEE Trans. Inf. Theory 1989, 35, 2–14.
15. Gündüz, D.; Kurka, D.B.; Jankowski, M.; Amiri, M.M.; Ozfatura, E.; Sreekumar, S. Communicate to Learn at the Edge. IEEE Commun. Mag. 2020, 58, 14–19.
16. Gündüz, D.; Qin, Z.; Aguerri, I.E.; Dhillon, H.S.; Yang, Z.; Yener, A.; Wong, K.K.; Chae, C.B. Beyond Transmitting Bits: Context, Semantics, and Task-Oriented Communications. IEEE J. Sel. Areas Commun. 2023, 41, 5–41.
17. Zhao, W.; Lai, L. Distributed Testing Against Independence with Multiple Terminals. In Proceedings of the 52nd Annual Allerton Conference on Communication, Control, and Computing, Monticello, IL, USA, 30 September–3 October 2014; pp. 1246–1251.
18. Wigger, M.; Timo, R. Testing Against Independence with Multiple Decision Centers. In Proceedings of the International Conference on Signal Processing and Communications, Bangalore, India, 12–15 June 2016; pp. 1–5.
19. Salehkalaibar, S.; Wigger, M.; Wang, L. Hypothesis Testing In Multi-Hop Networks. IEEE Trans. Inf. Theory 2019, 65, 4411–4433.
20. Zaidi, A.; Aguerri, I.E. Optimal Rate-Exponent Region for a Class of Hypothesis Testing Against Conditional Independence Problems. In Proceedings of the 2019 IEEE Information Theory Workshop (ITW), Visby, Sweden, 25–28 August 2019; pp. 1–5.
21. Zaidi, A. Rate-Exponent Region for a Class of Distributed Hypothesis Testing Against Conditional Independence Problems. IEEE Trans. Inf. Theory 2023, 69, 703–718.
22. Mhanna, M.; Piantanida, P. On Secure Distributed Hypothesis Testing. In Proceedings of the IEEE International Symposium on Information Theory, Hong Kong, China, 14–19 June 2015; pp. 1605–1609.
23. Sreekumar, S.; Gündüz, D. Testing Against Conditional Independence Under Security Constraints. In Proceedings of the IEEE International Symposium on Information Theory (ISIT), Vail, CO, USA, 17–22 June 2018; pp. 181–185.
24. Sreekumar, S.; Cohen, A.; Gündüz, D. Privacy-Aware Distributed Hypothesis Testing. Entropy 2020, 22, 665.
25. Gilani, A.; Amor, S.B.; Salehkalaibar, S.; Tan, V. Distributed Hypothesis Testing with Privacy Constraints. Entropy 2019, 21, 478.
26. Katz, G.; Piantanida, P.; Debbah, M. Distributed Binary Detection with Lossy Data Compression. IEEE Trans. Inf. Theory 2017, 63, 5207–5227.
27. Xiang, Y.; Kim, Y.H. Interactive hypothesis testing with communication constraints. In Proceedings of the 50th Annual Allerton Conference on Communication, Control, and Computing, Monticello, IL, USA, 1–5 October 2012; pp. 1065–1072.
28. Xiang, Y.; Kim, Y.H. Interactive hypothesis testing against independence. In Proceedings of the IEEE International Symposium on Information Theory, Istanbul, Turkey, 7–12 July 2013; pp. 2840–2844.
29. Tian, C.; Chen, J. Successive Refinement for Hypothesis Testing and Lossless One-Helper Problem. IEEE Trans. Inf. Theory 2008, 54, 4666–4681.
30. Haim, E.; Kochman, Y. On Binary Distributed Hypothesis Testing. arXiv 2017, arXiv:1801.00310.
31. Weinberger, N.; Kochman, Y. On the Reliability Function of Distributed Hypothesis Testing Under Optimal Detection. IEEE Trans. Inf. Theory 2019, 65, 4940–4965.
32. Hadar, U.; Liu, J.; Polyanskiy, Y.; Shayevitz, O. Error Exponents in Distributed Hypothesis Testing of Correlations. In Proceedings of the IEEE International Symposium on Information Theory (ISIT), Paris, France, 7–12 July 2019; pp. 2674–2678.
33. Watanabe, S. Neyman-Pearson Test for Zero-Rate Multiterminal Hypothesis Testing. IEEE Trans. Inf. Theory 2018, 64, 4923–4939.
34. Xu, X.; Huang, S.L. On Distributed Learning With Constant Communication Bits. IEEE J. Sel. Areas Inf. Theory 2022, 3, 125–134.
35. Salehkalaibar, S.; Tan, V.Y.F. Distributed Sequential Hypothesis Testing With Zero-Rate Compression. In Proceedings of the 2021 IEEE Information Theory Workshop (ITW), Kanazawa, Japan, 17–21 October 2021; pp. 1–5.
36. Sreekumar, S.; Gündüz, D. Strong Converse for Testing Against Independence over a Noisy Channel. In Proceedings of the IEEE International Symposium on Information Theory (ISIT), Los Angeles, CA, USA, 21–26 June 2020; pp. 1283–1288.
37. Salehkalaibar, S.; Wigger, M. Distributed Hypothesis Testing with Variable-Length Coding. IEEE J. Sel. Areas Inf. Theory 2020, 1, 681–694.
38. Csiszár, I.; Körner, J. Information Theory: Coding Theorems for Discrete Memoryless Systems; Cambridge University Press: Cambridge, UK, 2011.
39. Borade, S.; Nakiboğlu, B.; Zheng, L. Unequal Error Protection: An Information-Theoretic Perspective. IEEE Trans. Inf. Theory 2009, 55, 5511–5539.
40. Minero, P.; Lim, S.H.; Kim, Y.H. A Unified Approach to Hybrid Coding. IEEE Trans. Inf. Theory 2015, 61, 1509–1523.
41. Weinberger, N.; Kochman, Y.; Wigger, M. Exponent Trade-off for Hypothesis Testing Over Noisy Channels. In Proceedings of the IEEE International Symposium on Information Theory (ISIT), Paris, France, 7–12 July 2019; pp. 1852–1856.
42. Csiszár, I. On the error-exponent of source-channel transmission with a distortion threshold. IEEE Trans. Inf. Theory 1982, 28, 823–828.
43. Polyanskiy, Y.; Wu, Y. Information Theory: From Coding to Learning; Cambridge University Press: Cambridge, UK, 2012; Available online: https://people.lids.mit.edu/yp/homepage/data/itbook-export.pdf (accessed on 10 December 2022).
44. Gallager, R. A simple derivation of the coding theorem and some applications. IEEE Trans. Inf. Theory 1965, 11, 3–18.
45. Merhav, N.; Shamai, S. On joint source-channel coding for the Wyner-Ziv source and the Gelfand-Pinsker channel. IEEE Trans. Inf. Theory 2003, 49, 2844–2855.
46. Cover, T.; Gamal, A.E.; Salehi, M. Multiple Access Channels with Arbitrarily Correlated Sources. IEEE Trans. Inf. Theory 1980, 26, 648–657.
47. Csiszár, I. Joint Source-Channel Error Exponent. Prob. Control Inf. Theory 1980, 9, 315–328.
48. Eggleston, H.G. Convexity, 6th ed.; Cambridge University Press: Cambridge, UK, 1958.
Figure 1. DHT over a noisy channel. The observer observes an n-length independent and identically distributed sequence U , and transmits X over the DMC P Y | X n . Based on the channel output Y and the n-length independent and identically distributed sequence V , the decision maker performs a binary HT to determine whether ( U , V ) P U V n or ( U , V ) Q U V n .
Figure 2. Comparison of the error-exponents trade-off achieved by the SHTCC and JHTCC schemes for TAD over a BSC in Example 1 with parameters p = 0.25 , q = 0 for (a) and p = 0.35 , q = 0 for (b). The red curve shows κ α , κ u ( κ α ) pairs achieved by uncoded transmission while the blue line plots κ α , E ex ( 0 ) . The joint scheme clearly achieves a better error-exponent trade-off for values of κ α below a threshold which depends on the transition kernel of the channel. In particular, a more uniform channel results in a lesser threshold.
Figure 3. Comparison of the error-exponents trade-off achieved by the SHTCC and JHTCC schemes for Example 1 with parameters p = 0.25 , q = 0.05 for (a) and p = 0.35 , q = 0.05 for (b). The JHTCC scheme improves over the separation based scheme for small values of κ α ; however, the region of improvement is reduced compared to Figure 2 as the source is more uniformly distributed.