Distributed Hypothesis Testing over a Noisy Channel: Error-Exponents Trade-Off

A two-terminal distributed binary hypothesis testing problem over a noisy channel is studied. The two terminals, called the observer and the decision maker, each has access to n independent and identically distributed samples, denoted by U and V, respectively. The observer communicates to the decision maker over a discrete memoryless channel, and the decision maker performs a binary hypothesis test on the joint probability distribution of (U,V) based on V and the noisy information received from the observer. The trade-off between the exponents of the type I and type II error probabilities is investigated. Two inner bounds are obtained, one using a separation-based scheme that involves type-based compression and unequal error-protection channel coding, and the other using a joint scheme that incorporates type-based hybrid coding. The separation-based scheme is shown to recover the inner bound obtained by Han and Kobayashi for the special case of a rate-limited noiseless channel, and also the one obtained by the authors previously for a corner point of the trade-off. Finally, we show via an example that the joint scheme achieves a strictly tighter bound than the separation-based scheme for some points of the error-exponents trade-off.


I. INTRODUCTION
Hypothesis testing (HT), which refers to the problem of deciding among two or more alternatives based on available data, plays a central role in statistics and information theory. Distributed HT (DHT) problems arise in situations where the test data are scattered across multiple terminals and need to be communicated to a central terminal, called the decision maker, which performs the hypothesis test. The need to jointly optimize the communication scheme and the hypothesis test makes DHT problems much more challenging than their centralized counterparts. Indeed, while an efficient characterization of the optimal hypothesis test and its asymptotic performance is well known in the centralized setting, thanks to [1]-[5], the same problem in even the simplest distributed setting remains open, except for some special cases (see [6]-[11]).
In this work, we consider a DHT problem with two parties, an observer and a decision maker, such that the former communicates to the latter over a noisy channel. The observer and the decision maker each has access to independent and identically distributed (i.i.d.) samples, denoted by U and V, respectively. Based on the information received from the observer and its own observations V, the decision maker performs a binary hypothesis test on the joint distribution of (U, V). Our goal is to characterize the trade-off between the best achievable rates of decay (or exponents) of the type I and type II error probabilities with respect to the sample size. We will refer to this problem as DHT over a noisy channel, and to its special instance with the noisy channel replaced by a rate-limited noiseless channel as DHT over a noiseless channel.

A. Background
Distributed statistical inference problems were first conceived in [12], and the information-theoretic study of DHT over a noiseless channel was first investigated in [6], where the objective is to characterize Stein's exponent κ_se(ε), i.e., the optimal type II error-exponent subject to the type I error probability being constrained to be at most ε ∈ (0, 1). The authors therein established a multi-letter characterization of this quantity, including a strong converse, which shows that κ_se(ε) is independent of ε. Furthermore, a single-letter characterization of κ_se(ε) was obtained for a special case of HT known as testing against independence (TAI), in which the joint distribution factors as a product of the marginal distributions under the alternative hypothesis. Improved lower bounds on κ_se(ε) were subsequently obtained in [7] and [8], and the strong converse was extended to zero-rate settings in [13]. While all the aforementioned works focus on κ_se(ε), the trade-off between the exponents of both the type I and type II error probabilities in the same setting was first explored in [14].
In recent years, there has been renewed interest in distributed statistical inference problems, motivated by emerging machine learning applications to be served at the wireless edge, particularly in the context of semantic communications in 5G/6G communication systems [15], [16]. Several extensions of the DHT over a noiseless channel problem have been studied, such as generalizations to multi-terminal settings [9], [17]-[21], DHT under security or privacy constraints [22]-[25], DHT with lossy compression [26], interactive settings [27], [28], successive refinement models [29], and more. Improved bounds have been obtained on the type I and type II error-exponents region [30], [31], and on κ_se(ε) for testing correlation between bivariate standard normal distributions [32]. In the simpler zero-rate communication setting, there has been progress on second-order optimal schemes [33], a geometric interpretation of the type I and type II error-exponent region [34], and a characterization of κ_se(ε) for sequential HT [35].

B. Contributions
In this work, our objective is to explore the trade-off between the type I and type II error-exponents for DHT over a noisy channel. This problem is a generalization of [14] from noiseless rate-limited channels to noisy channels, and also of [10], [11] from a type I error probability constraint to a positive type I error-exponent constraint.
Our main contributions can be summarized as follows: (i) We obtain an inner bound (Theorem 1) on the error-exponents trade-off by using a separate HT and channel coding scheme (SHTCC) that combines a type-based quantize-bin strategy (a type refers to the empirical probability distribution of a sequence; see [38]) with the unequal error-protection channel coding scheme of [39]. This result is shown to recover the bounds established in [10], [14]. Furthermore, we evaluate Theorem 1 for two important instances of DHT, namely TAI and its opposite, testing against dependence (TAD), in which the joint distribution under the null hypothesis factors as a product of the marginal distributions.
(ii) We also obtain a second inner bound (Theorem 2) on the error-exponents trade-off by using a joint HT and channel coding scheme (JHTCC) based on hybrid coding [40]. Subsequently, we show via an example that the JHTCC scheme strictly outperforms the SHTCC scheme for some points on the error-exponents trade-off.
While the above schemes are inspired by those in [10], which were proposed with the goal of maximizing the type II error-exponent, novel modifications in their design and analysis are required when both error-exponents are considered. More specifically, the schemes presented here perform separate quantization-binning or hybrid coding on each individual source sequence type at the observer/encoder (as opposed to a typical ball in [10]), with the corresponding reverse operation implemented at the decision maker/decoder. This necessitates a different analysis to compute the probabilities of the various error events contributing to the overall error-exponents. We finally mention that the DHT problem considered here was recently investigated in [41], where an inner bound on the error-exponents trade-off ([41, Theorem 2]) is obtained using a combination of a type-based quantization scheme and the unequal error-protection scheme of [42] with two special messages. A qualitative comparison between our Theorem 2 and [41, Theorem 2] suggests that the JHTCC scheme here uses a stronger decoding rule, which depends jointly on the source-channel statistics. In comparison, the metric used at the decoder for the scheme in [41] factors as the sum of two metrics, one depending only on the source statistics and the other only on the channel statistics. Importantly, this hints that the inner bound achieved by the JHTCC scheme is not subsumed by that in [41]. That said, a direct computational comparison appears difficult, as evaluating the latter requires optimization over several parameters, as mentioned in the last paragraph of [41].

C. Organization
The remainder of the paper is organized as follows. Section II formulates the operational problem along with the required definitions. The main results are presented in Section III. The proofs are furnished in Section IV. Finally, concluding remarks are given in Section V.

II. PROBLEM FORMULATION AND PRELIMINARIES

A. Notation
We use the following notation. All logarithms are with respect to the natural base e. N, R, R_≥0, and R̄ denote the sets of natural, real, non-negative real, and extended real numbers, respectively. For a, b ∈ R_≥0, [a : b] := {n ∈ N : a ≤ n ≤ b}, and [b] := [1 : b]. Calligraphic letters, e.g., X, denote sets, while X^c and |X| stand for the complement and cardinality of X, respectively. For n ∈ N, X^n denotes the n-fold Cartesian product of X, and x^n := (x_1, ..., x_n) an element of X^n. Bold-face letters denote vectors or sequences, e.g., x for x^n; the length n will be clear from the context. For i, j ∈ N such that i ≤ j, x_i^j := (x_i, ..., x_j); the subscript is omitted when i = 1. Random variables and their realizations are denoted by uppercase and lowercase letters, respectively, e.g., X and x; similar conventions apply for random vectors and their realizations. 1_A denotes the indicator of the set A. For a real sequence {a_n}_{n∈N}, a_n →(n) b stands for lim_{n→∞} a_n = b, while a_n ≳(n) b denotes lim inf_{n→∞} a_n ≥ b; similar notation applies for the other inequalities. O(·), Ω(·), and o(·) denote standard asymptotic notation.

The set of all probability mass functions (PMFs) on a finite set X is denoted by P(X). The joint PMF of two discrete random variables X and Y is denoted by P_XY; the corresponding marginals are P_X and P_Y. The conditional PMF of X given Y is represented by P_X|Y. Expressions such as P_XY = P_X P_Y|X are to be understood as pointwise equality, i.e., P_XY(x, y) = P_X(x) P_Y|X(y|x) for all (x, y) ∈ X × Y. When the joint distribution of a triple (X, Y, Z) factors as P_XYZ = P_XY P_Z|Y, these variables form a Markov chain X − Y − Z. If the entries of X^n are drawn in an i.i.d. manner, i.e., if P_{X^n}(x) = Π_{i=1}^n P_X(x_i) for all x ∈ X^n, then the PMF P_{X^n} is denoted by P_X^⊗n. Similarly, if P_{Y^n|X^n}(y|x) = Π_{i=1}^n P_{Y|X}(y_i|x_i) for all (x, y) ∈ X^n × Y^n, then we write P_{Y|X}^⊗n for P_{Y^n|X^n}; the conditional product PMF given a fixed x ∈ X^n is designated by P_{Y|X}^⊗n(·|x). The probability measure induced by a PMF P is denoted by P_P, and the corresponding expectation by E_P.

The type or empirical PMF of a sequence x ∈ X^n is designated by P_x, i.e., P_x(x) := (1/n) Σ_{i=1}^n 1{x_i = x}. Similar notations are used for larger combinations, e.g., P_xy, T_n(P_XY, X × Y), and T(X^n × Y^n). For a given x ∈ T_n(P_X, X^n) and a conditional PMF P_Y|X, T_n(P_Y|X, x) denotes the corresponding conditional type class. For PMFs P, Q ∈ P(X), the Kullback-Leibler (KL) divergence between P and Q is D(P||Q) := Σ_{x∈X} P(x) log(P(x)/Q(x)). Mutual information and entropy are denoted by I_P(·) and H_P(·), respectively, where P denotes the PMF of the relevant random variables; when the PMF is clear from the context, the subscript is omitted. For (x, y) ∈ X^n × Y^n, the empirical conditional entropy of y given x is H_e(y|x) := H_P(Ỹ|X̃), where P_X̃Ỹ := P_xy. For a given function f : Z → R and a random variable Z ∼ P_Z, the log-moment generating function of Z with respect to f is ψ_{P_Z,f}(λ) := log E_{P_Z}[e^{λ f(Z)}] whenever the expectation exists; finally, ψ*_{P_Z,f}(·) denotes the corresponding rate function (see, e.g., Definition 15.5 in [43]).
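As a concrete illustration of these empirical quantities, the following Python sketch computes the type of a sequence and the KL divergence between two PMFs (the alphabet and the PMFs below are arbitrary assumptions, used only for illustration):

```python
import numpy as np

def empirical_type(x, alphabet_size):
    """Type (empirical PMF) P_x of a sequence x over {0, ..., alphabet_size - 1}."""
    return np.bincount(x, minlength=alphabet_size) / len(x)

def kl_divergence(p, q):
    """D(P||Q) in nats; assumes P << Q (q[i] > 0 wherever p[i] > 0)."""
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

x = np.array([0, 1, 1, 0, 2, 1, 0, 0])   # toy sequence over {0, 1, 2}
P_x = empirical_type(x, 3)
P_U = np.array([0.5, 0.3, 0.2])          # hypothetical source PMF
print(P_x, kl_divergence(P_x, P_U))
```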

B. Problem Formulation
Let U, V, X and Y be finite sets, and n ∈ N. The DHT over a noisy channel setting is depicted in Figure 1.
Fig. 1: DHT over a noisy channel. The observer observes an n-length i.i.d. sequence U and transmits X over the DMC P_{Y|X}^⊗n. Based on the channel output Y and the n-length i.i.d. sequence V, the decision maker performs a binary HT to determine whether (U, V) ∼ P_UV^⊗n or (U, V) ∼ Q_UV^⊗n.

Herein, the observer and the decision maker observe n i.i.d. samples, denoted by u and v, respectively. Based on its observations u, the observer outputs a sequence x ∈ X^n as the channel input. The discrete memoryless channel (DMC) with transition kernel P_Y|X produces a sequence y ∈ Y^n according to the probability law P_{Y|X}^⊗n(·|x) as its output. We will assume that P_Y|X(·|x) ≪ P_Y|X(·|x′) for all (x, x′) ∈ X², where P ≪ Q indicates absolute continuity of P with respect to Q. Based on its observations y and v, the decision maker performs a binary HT on the joint probability distribution of (U, V), with the null (H_0) and alternative (H_1) hypotheses given by

H_0 : (U, V) ∼ P_UV^⊗n and H_1 : (U, V) ∼ Q_UV^⊗n.

The decision maker outputs ĥ ∈ Ĥ := {0, 1} as the decision of the hypothesis test, where 0 and 1 denote H_0 and H_1, respectively. A length-n DHT code c_n is a pair of functions (f_n, g_n), where (i) f_n : U^n → P(X^n) denotes the (stochastic) encoding function, and (ii) g_n : V^n × Y^n → Ĥ denotes a deterministic decision function, specified by an acceptance region for the null hypothesis H_0. A code c_n = (f_n, g_n) induces the joint PMFs P_{UVXYĤ}^{(c_n)} and Q_{UVXYĤ}^{(c_n)} under the null and alternative hypotheses, respectively. For a given code c_n, the type I and type II error probabilities are α_n(c_n) := P_{P^{(c_n)}}(Ĥ = 1) and β_n(c_n) := P_{Q^{(c_n)}}(Ĥ = 0), respectively. The following definition formally states the error-exponents trade-off we aim to characterize.

Definition 1. An error-exponent pair (κ_α, κ_β) ∈ R²_≥0 is said to be achievable if there exists a sequence of codes {c_n}_{n∈N} such that

lim inf_{n→∞} −(1/n) log α_n(c_n) ≥ κ_α and lim inf_{n→∞} −(1/n) log β_n(c_n) ≥ κ_β. (5)

The error-exponents region R is the closure of the set of all achievable pairs (κ_α, κ_β), and κ(κ_α) := sup{κ_β : (κ_α, κ_β) ∈ R}.
We are interested in a computable characterization of R, which pertains to the region of positive error-exponents (i.e., excluding the boundary points corresponding to Stein's exponent).To this end, we present two inner bounds on R below.
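Although the analysis in this paper is entirely analytical, the operational quantities α_n(c_n) and β_n(c_n) are easy to estimate by simulation for toy instances. The following Python sketch does so for an assumed binary example with an identity encoder and a naive agreement test; none of these choices correspond to the schemes analyzed below:

```python
import numpy as np

rng = np.random.default_rng(1)

def error_probs(encode, decide, P_uv, Q_uv, P_y_given_x, n=200, trials=2000):
    """Monte Carlo estimates of alpha_n (type I) and beta_n (type II)."""
    def declare_h1_rate(joint):
        ones = 0
        for _ in range(trials):
            idx = rng.choice(4, size=n, p=joint.ravel())   # (U_i, V_i) i.i.d.
            u, v = idx // 2, idx % 2
            x = encode(u)                                   # channel input
            y = np.array([rng.choice(2, p=P_y_given_x[xi]) for xi in x])
            ones += decide(v, y)                            # 1 declares H1
        return ones / trials
    alpha = declare_h1_rate(P_uv)        # P(declare H1 | H0)
    beta = 1.0 - declare_h1_rate(Q_uv)   # P(declare H0 | H1)
    return alpha, beta

# Toy instance: V strongly correlated with U under H0, independent under H1;
# BSC(0.1) channel; test based on the empirical agreement between v and y.
P_uv = np.array([[0.45, 0.05], [0.05, 0.45]])
Q_uv = np.outer(P_uv.sum(1), P_uv.sum(0))
P_y_given_x = np.array([[0.9, 0.1], [0.1, 0.9]])
alpha, beta = error_probs(lambda u: u,
                          lambda v, y: int(np.mean(v == y) < 0.75),
                          P_uv, Q_uv, P_y_given_x)
print(alpha, beta)
```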

III. MAIN RESULTS
In this section, we obtain two inner bounds on R: the first via a separation-based scheme that performs independent HT and channel coding, termed the SHTCC scheme, and the second via a joint HT and channel coding scheme that uses hybrid coding for the communication between the observer and the decision maker.

A. Inner Bound on R via SHTCC Scheme

Let S = X, and let P_SXY = P_SX P_Y|X be a PMF under which S − X − Y forms a Markov chain. Let F denote the set of all continuous mappings from P(U) to P(W|U), where P(W|U) is the set of all conditional distributions P_W|U. Set θ_l(P_SX) := Σ_{s∈S} P_S(s) D(P_{Y|S=s}||P_{Y|X=s}), θ_u(P_SX) := Σ_{s∈S} P_S(s) D(P_{Y|X=s}||P_{Y|S=s}), and Θ(P_SX) := (−θ_l(P_SX), θ_u(P_SX)). Denote an arbitrary element of F × R_≥0 × P(S × X) × Θ(P_SX) by (ω, R, P_SX, θ), and define ρ(κ_α, ω) and the exponents E_1(κ_α, ω), E_2(κ_α, ω, R), E_3(κ_α, ω, R, P_SX), and E_4(κ_α, ω, R, P_SX, θ) through the minimizations in (6) and (7). We then have the following lower bound on κ(κ_α), which translates to an inner bound on R.
Theorem 1 (Inner bound via SHTCC scheme). κ(κ_α) ≥ κ_s(κ_α), where

κ_s(κ_α) := max_{(ω, R, P_SX, θ) ∈ L(κ_α)} min{E_1(κ_α, ω), E_2(κ_α, ω, R), E_3(κ_α, ω, R, P_SX), E_4(κ_α, ω, R, P_SX, θ)}. (7)

The proof of Theorem 1 is presented in Section IV-A. The SHTCC scheme, which achieves the error-exponent pair (κ_α, κ_s(κ_α)), is analogous to separate source and channel coding for the lossy transmission of a source over a communication channel with correlated side information at the receiver [45], however, with the objective of reliable HT. In this scheme, the source samples are first compressed to an index, which acts as the message to be transmitted over the channel. In contrast to standard communication problems, however, certain messages need to be protected more reliably than others; hence, an unequal error-protection scheme [39], [42] is used.
Briefly, the SHTCC scheme involves (i) the quantization and binning of the u sequences whose type P_u is within a κ_α-neighborhood (in terms of KL divergence) of P_U, using V as side information at the decision maker for decoding, and (ii) the unequal error-protection channel coding scheme of [39] for protecting a special message, which informs the decision maker that P_u lies outside the κ_α-neighborhood of P_U. The output of the channel decoder is processed by an empirical conditional entropy decoder, which recovers the quantization codeword with the least empirical conditional entropy given V. Since this decoder depends only on the empirical distributions of the observations, it is universal and hence useful in the hypothesis testing context, where multiple distributions are involved (as was first noted in [8]). The factors E_1 to E_4 in (7) have natural interpretations in terms of the events that can cause a hypothesis testing error: E_1 and E_2 correspond to the error events arising from quantization and binning, respectively, while E_3 and E_4 correspond to the events of wrongly decoding an ordinary channel codeword and the special message codeword, respectively.
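Since the empirical conditional entropy decoder is central to both schemes, the following Python sketch may help fix ideas; it is purely illustrative (the alphabets, codewords, and side-information sequence below are assumptions, not part of the scheme's formal description):

```python
import numpy as np
from itertools import product

def empirical_conditional_entropy(w, v, W_size, V_size):
    """H_e(w|v): entropy of W given V under the joint type of (w, v), in nats."""
    n = len(w)
    joint = np.zeros((V_size, W_size))
    for wi, vi in zip(w, v):
        joint[vi, wi] += 1.0 / n
    p_v = joint.sum(axis=1)
    h = 0.0
    for a, b in product(range(V_size), range(W_size)):
        if joint[a, b] > 0:
            h -= joint[a, b] * np.log(joint[a, b] / p_v[a])
    return h

def min_entropy_decoder(bin_codewords, v, W_size, V_size):
    """Return the index of the codeword in the bin with the smallest H_e(w|v)."""
    scores = [empirical_conditional_entropy(w, v, W_size, V_size) for w in bin_codewords]
    return int(np.argmin(scores))

# Toy usage: the first codeword matches v exactly, so it has H_e = 0 and wins.
codewords_in_bin = [np.array([0, 1, 1, 0]), np.array([1, 1, 0, 0])]
v = np.array([0, 1, 1, 0])
print(min_entropy_decoder(codewords_in_bin, v, W_size=2, V_size=2))
```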
Remark 1 (Generalization of the Han-Kobayashi inner bound). In [14, Theorem 1], Han and Kobayashi obtained an inner bound on R for DHT over a noiseless channel. At a high level, their coding scheme involves type-based quantization of the u ∈ U^n sequences whose type P_u lies within a κ_α-neighbourhood of P_U, where κ_α is the desired type I error-exponent. As a corollary, Theorem 1 recovers the lower bound on κ(κ_α) obtained in [14] by setting E_ex(R, P_SX), E_m(P_SX, θ), and E_m(P_SX, θ) − θ to ∞, which holds when the channel is noiseless; the constraints involving these terms then all equal ∞, and the inner bound in Theorem 1 reduces to that given in [14, Theorem 1].
Remark 2 (Improvement via time-sharing). Since the lower bound on κ(κ_α) in Theorem 1 is not necessarily concave, a tighter bound can be obtained using the technique of time-sharing, similar to [14, Theorem 3]. We omit its description, as it is cumbersome, although straightforward.
Theorem 1 also recovers the lower bound on the optimal type II error-exponent under a fixed type I error probability constraint established in [10, Theorem 2] by letting κ_α → 0. The details are provided in Appendix A. Further, specializing the lower bound in Theorem 1 to the case of TAI, i.e., when Q_UV = P_U P_V, we obtain the following corollary, which recovers the optimal type II error-exponent for TAI established in [10, Proposition 7].
The proof of Corollary 1 is given in Section IV-B. Its achievability follows from a special case of the SHTCC scheme without binning at the encoder.
Next, we consider testing against dependence (TAD), for which Q_UV is an arbitrary joint distribution; Theorem 1 specialized to TAD gives the following corollary.
Corollary 2 (Inner bound for TAD). Let Q_UV ∈ P(U × V) be an arbitrary distribution. Then, κ(κ_α) ≥ κ_d(κ_α), where κ_d(κ_α) is given in (11), and L(κ_α, ω), T_1(κ_α, ω), and L̄(κ_α) are given in (6a), (6d), and (9), respectively. In particular, letting κ_α → 0 recovers the corner-point exponent κ_TAD.

The proof of Corollary 2 is given in Section IV-C. Note that the expression for κ_d(κ_α) given in (11) is relatively simpler to compute than that in Theorem 1. This will be handy in showing that the JHTCC scheme strictly outperforms the SHTCC scheme, which we highlight via an example in Section III-C below.

B. Inner Bound via JHTCC Scheme
It is well known that joint source-channel coding schemes offer advantages over separation-based coding schemes in several information-theoretic problems, such as the transmission of correlated sources over a multiple-access channel [40], [46], and the error-exponent in the lossless or lossy transmission of a source over a noisy channel [42], [47]. Recently, it was shown via an example in [10] that joint schemes can also achieve a strictly larger type II error-exponent than separation-based schemes in some DHT scenarios. Motivated by this, we present an inner bound on R using a generalization of the JHTCC scheme in [10].
Let W and S be arbitrary finite sets, and let F′ denote the set of all continuous mappings from P(U × S) to P(W|U × S), where P(W|U × S) is the set of all conditional distributions P_W|US. Let (P_S, ω′(·, P_S), P_X|USW, P_X′|US) denote an arbitrary element of P(S) × F′ × P(X|U × S × W) × P(X|U × S), and define ρ′(κ_α, ω′, P_S, P_X|USW) as a minimum over PMFs P_V̂ŴŶŜ for which there exists P_Û such that P_ÛV̂ŴŶŜ ∈ L̃_h(κ_α, ω′, P_S, P_X|USW); the exponent E_2(κ_α, ω′, P_S, P_X|USW) is defined as a corresponding minimum plus the binning exponent E_b(κ_α, ω′, P_S, P_X|USW), and E_3(κ_α, ω′, P_S, P_X|USW, P_X′|US) as another such minimum.
Then, we have the following result.
Theorem 2 (Inner bound via JHTCC scheme). κ(κ_α) ≥ max{κ_h(κ_α), κ_u(κ_α)}, where κ_h(κ_α) is obtained by maximizing min{E_1(κ_α, ω′), E_2(κ_α, ω′, P_S, P_X|USW), E_3(κ_α, ω′, P_S, P_X|USW, P_X′|US)} over (P_S, ω′(·, P_S), P_X|USW, P_X′|US) ∈ L_h(κ_α), and κ_u(κ_α) := max_{P_S, P_X|US} κ̃_u(κ_α, P_S, P_X|US) is the exponent achieved by uncoded transmission.

The proof of Theorem 2 is given in Section IV-D, and utilizes a generalization of the hybrid coding scheme [40] to achieve the stated inner bound. Specifically, the error-exponent pair (κ_α, κ_h(κ_α)) is achieved using type-based hybrid coding, while (κ_α, κ_u(κ_α)) is realized by uncoded transmission, in which the channel input X is generated as the output of a DMC P_X|U with input U (along with time-sharing). In standard hybrid coding, the source sequence is first quantized via joint typicality, and the channel input is then chosen as a function of both the original source sequence and its quantization. At the decoder, the quantized codeword is first recovered from the channel output and side information via joint typicality decoding, and an estimate of the source sequence is output as a function of the channel output and the recovered codeword. The quantization forms the digital part of the scheme, while the use of the source sequence for encoding and of the channel output for decoding constitutes the analog part; the scheme derives its name from these hybrid digital-analog operations. In the HT context considered here, the aforementioned source quantization is replaced by type-based quantization at the encoder, and the joint typicality decoder is replaced by a universal empirical conditional entropy decoder. We note that Theorem 2 recovers the lower bound on the optimal type II error-exponent proved in [10, Theorem 5]; the details are provided in Appendix B.
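The hybrid digital-analog flow described above can be pictured with the following toy Python sketch; the agreement-score quantizer and the XOR symbol map are illustrative stand-ins (assumptions), not the type-based quantization and the mapping induced by P_X|USW used in the actual scheme:

```python
import numpy as np

rng = np.random.default_rng(0)

def hybrid_encode(u, codebook, symbol_map):
    """Digital part: pick a quantization codeword w for u.
    Analog part: form the channel input symbolwise from (u_i, w_i)."""
    scores = [np.mean(w == u) for w in codebook]   # stand-in for typicality
    m = int(np.argmax(scores))
    w = codebook[m]
    x = np.array([symbol_map(ui, wi) for ui, wi in zip(u, w)])
    return m, x

u = rng.integers(0, 2, size=100)
codebook = rng.integers(0, 2, size=(8, 100))       # 8 random binary codewords
m, x = hybrid_encode(u, codebook, lambda ui, wi: ui ^ wi)
```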
Next, we provide a comparison between the SHTCC and JHTCC bounds via an example as mentioned earlier.

C. Comparison of Inner Bounds
We compare the inner bounds established in Theorem 1 and Theorem 2 for a simple setting of TAD over a binary symmetric channel (BSC).
For this purpose, we will use the inner bound κ_d(κ_α) stated in Corollary 2 and the bound κ_u(κ_α) achieved by uncoded transmission. Our objective is to illustrate that the JHTCC scheme achieves a strictly tighter bound on R than the SHTCC scheme, at least for some points of the trade-off.
Example 1 (Comparison of inner bounds). Let p, q ∈ [0, 0.5]. A comparison of the inner bounds achieved by the SHTCC and JHTCC schemes for this example is shown in Figures 2 and 3, where we plot the error-exponents trade-off achieved by uncoded transmission (a lower bound for the JHTCC scheme) and the expurgated exponent at zero rate, E_ex(0), which is an upper bound on κ_d(κ_α) for any κ_α ≥ 0. To compute E_ex(0), we used the closed-form expression for E_ex(·) given in Problem 10.26(c) in [38]. It can be seen that the JHTCC scheme outperforms the SHTCC scheme for κ_α below a threshold, which depends on the source and channel distributions. In particular, the threshold below which the improvement is observed shrinks when the channel or the source becomes more uniform. The former behavior can be seen directly by comparing the subplots within Figures 2 and 3, while the latter can be noted by comparing Figure 2a with Figure 3a, or Figure 2b with Figure 3b.
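For reproducibility of the E_ex(0) curves, the zero-rate expurgated exponent of a BSC can be computed in closed form; the Bhattacharyya-distance formula below, with a uniform input distribution, is our reading of the expression in Problem 10.26(c) of [38] and should be treated as an assumption:

```python
import numpy as np

def bsc_expurgated_exponent_zero_rate(p):
    """Assumed zero-rate expurgated exponent for a BSC(p) with uniform input:
    E_ex(0) = -(1/2) * ln(2 * sqrt(p * (1 - p)))."""
    return -0.5 * np.log(2.0 * np.sqrt(p * (1.0 - p)))

for p in (0.25, 0.35):
    print(f"p = {p}: E_ex(0) ~ {bsc_expurgated_exponent_zero_rate(p):.4f}")
```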

IV. PROOFS

A. Proof of Theorem 1
We will show the achievability of the error-exponent pair (κ_α, κ_s(κ_α)) by constructing a suitable ensemble of HT codes and showing that the expected type I and type II error probabilities (over this ensemble) satisfy (5) for the pair (κ_α, κ_s(κ_α)). Then, an expurgation argument [44] will be used to show the existence of an HT code that satisfies (5) for the same error-exponent pair, thus showing that (κ_α, κ_s(κ_α)) ∈ R, as desired.
Source encoder: The source encoding comprises a quantization scheme followed by binning, which reduces the rate if necessary.
Consider some ordering on the types in D_n(P_U, η), and denote the elements as P_Û_1, ..., P_Û_{|D_n(P_U,η)|}. Let B_W,n := {W(j), j ∈ M′\{0}} denote a random quantization codebook that contains, for each i, a sub-codebook M_i of |M_i| = e^{nR_i} codewords drawn from the type class induced by P_{W_i|U} = ω(P_Û_i); note that this is always possible for n sufficiently large by the definition of R′.

Quantization scheme: For a given codebook B_W,n and u ∈ T_n(P_Û_i) such that P_Û_i ∈ D_n(P_U, η) for some i, the source encoder sets M′(u, B_W,n) to an index selected uniformly at random from the set M(u, B_W,n) of codeword indices compatible with u; otherwise, it sets M′(u, B_W,n) = 0. Denoting the support of M′(u, B_W,n) by M′, we have for sufficiently large n that

|M′| ≤ 1 + Σ_{i=1}^{|D_n(P_U,η)|} e^{nR_i} ≤ 1 + |D_n(P_U, η)| e^{n max_{P_ÛŴ ∈ D_n(P_UW,η)} I(Û;Ŵ) + nη/3} ≤ e^{n(R′+η)}, (17)

where the last inequality uses (15b) and |D_n(P_U, η)| ≤ (n+1)^{|U|}.

Binning: If |M′| > |M|, the source encoder performs binning as follows. Let R_n := log(⌈e^{nR}/|D_n(P_U, η)|⌉), and let f_B denote the random binning function under which each j ∈ M′\{0} is assigned a bin index uniformly at random, with f_B(0) = 0 with probability one. Denote a realization of f_B by f_b, where f_b : M′ → M. Given a codebook B_W,n and a binning function f_b, the source encoder outputs M = f_b(M′(u, B_W,n)) for u ∈ U^n. If |M′| ≤ |M|, then f_b is taken to be the identity map (no binning), and in this case M = M′(u, B_W,n).

Channel codebook: Let B_X,n := {X(m) ∈ X^n, m ∈ M} denote a random channel codebook generated as follows. Without loss of generality (w.l.o.g.), denote the elements of the set S = X as 1, ..., |X|. The codeword length n is divided into |S| = |X| blocks, where the length of the i-th block is ⌈P_S(i)n⌉, with the last block chosen such that the total length is n. For i ∈ [|X|], let k_i := Σ_{l=1}^{i−1} ⌈P_S(l)n⌉ + 1 and k̄_i := Σ_{l=1}^{i} ⌈P_S(l)n⌉. Let s ∈ X^n be such that s_{k_i:k̄_i} = i, i.e., the elements of s equal i in the i-th block, for i ∈ [|X|]. Let X(0) = s with probability one, and let the remaining codewords X(m), m ∈ M\{0}, be constant composition codewords [38] selected such that X_{k_i:k̄_i}(m) ∼ Unif(T_{⌈P_S(i)n⌉}(P̃_X|S(·|i))), where P̃_X|S is such that T_{⌈P_S(i)n⌉}(P̃_X|S(·|i)) is non-empty and D(P̃_X|S || P_X|S | P_S) ≤ η/3. Denote a realization of the random codebook by B_X,n := {x(m) ∈ X^n, m ∈ M}. Note that for m ∈ M\{0} and large n, the codeword pair (x(0), x(m)) has joint type approximately equal to P_SX̃ := P_S P̃_X|S.
Channel encoder: For a given B_X,n, the channel encoder outputs x = x(m) for the output m of the source encoder. Denote this map by f_{B_X,n} : M → X^n.
Encoder: Denote by f_n : U^n → P(X^n) the overall (stochastic) encoder induced by the above operations.
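A uniform draw from a type class, as used for the constant composition codewords above, can be sketched as follows (the symbol counts below are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)

def sample_type_class(counts):
    """Draw uniformly from the type class with the given symbol multiplicities,
    e.g., counts = {0: 60, 1: 40} corresponds to n = 100 and P_x = (0.6, 0.4)."""
    word = np.repeat(list(counts.keys()), list(counts.values()))
    rng.shuffle(word)
    return word

x_block = sample_type_class({0: 60, 1: 40})
```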

Decision function:
The decision function consists of three parts: a channel decoder, a source decoder, and a tester.

Channel decoder:
The channel decoder first performs a Neyman-Pearson test on the channel output y according to Π_θ̃ : Y^n → {0, 1}, given in (19), which tests whether y was generated by the special codeword x(0) = s. If Π_θ̃(y) = 1, then M̂ = 0. Otherwise, for a given B_X,n, maximum likelihood (ML) decoding is performed on the remaining set of codewords {x(m), m ∈ M\{0}}, and M̂ is set equal to the ML estimate. Denote the channel decoder induced by the above operations by g_{B_X,n} : Y^n → M.
For a given codebook B_X,n, the channel encoder-decoder pair described above induces a joint distribution on the message M and its estimate M̂.
Then, it follows by an application of Proposition 1, proved in Appendix C, that for any B_X,n and n sufficiently large, the Neyman-Pearson test in (19) yields (20). Moreover, given M ≠ 0, a random coding argument over the ensemble of B_X,n (see [38, Exercises 10.18 and 10.24] and [44]) shows that there exists a deterministic codebook B_X,n such that (20) holds, and the ML decoding described above asymptotically achieves (21). This deterministic codebook B_X,n is used for channel coding.
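For intuition, a threshold test on the normalized log-likelihood ratio, of the kind performed in (19), can be sketched as follows (the output PMFs and the threshold are assumptions, not the actual test statistics of the scheme):

```python
import numpy as np

def np_test(y, p0, p1, theta):
    """Return 1 if the normalized log-likelihood ratio log(p1/p0) exceeds theta.
    p0, p1: channel output PMFs under the two hypotheses (arrays by symbol)."""
    llr = np.mean(np.log(p1[y] / p0[y]))
    return int(llr >= theta)

y = np.array([0, 1, 1, 0, 1])
print(np_test(y, p0=np.array([0.8, 0.2]), p1=np.array([0.3, 0.7]), theta=0.0))
```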
Source decoder: For a given codebook B_W,n and inputs M̂ = m̂ and V = v, the source decoder first decodes the quantization codeword w(m̂′) (if required) using the empirical conditional entropy decoder, and then declares the output Ĥ of the hypothesis test based on w(m̂′) and v. More specifically, if binning is not performed, i.e., if |M| ≥ |M′|, then m̂′ = m̂. Otherwise, m̂′ = 0 if m̂ = 0, and m̂′ = argmin_{j : f_b(j) = m̂} H_e(w(j)|v) otherwise. Denote the source decoder induced by the above operations by g_{B_W,n} : M × V^n → M′.

Testing and acceptance region: If m̂′ = 0, then Ĥ = 1 is declared. Otherwise, Ĥ = 0 or Ĥ = 1 is declared depending on whether (m̂′, v) ∈ A_n or (m̂′, v) ∉ A_n, respectively, where A_n denotes the acceptance region for H_0, specified next. For a given codebook B_W,n, let O_m′ denote the set of u for which the source encoder outputs m′, m′ ∈ M′\{0}. For each m′ ∈ M′\{0} and u ∈ O_m′, let Z_m′(u) denote the set of v sequences accepted together with u, specified in terms of the divergence balls J_n(r, P_X) := {x ∈ X^n : D(P_x||P_X) ≤ r} (for a generic alphabet X). For m′ ∈ M′\{0}, set Z_m′ := {v : v ∈ Z_m′(u) for some u ∈ O_m′}, and define the acceptance region for H_0 at the decision maker as A_n := ∪_{m′∈M′\{0}} {m′} × Z_m′, or equivalently as Aᵉ_n := ∪_{m′∈M′\{0}} O_m′ × Z_m′. Note that A_n is the same as the acceptance region for H_0 in [14, Theorem 1]. Denote the decision function induced by g_{B_X,n}, g_{B_W,n}, and A_n by g_n : Y^n × V^n → Ĥ.

Induced probability distribution:
The PMFs induced by a code c_n = (f_n, g_n) with codebook B_n := (B_W,n, f_b, B_X,n) under H_0 and H_1 are P^{(B_n,c_n)}_{UVM′MXYM̂M̂′Ĥ}(u, v, m′, m, x, y, m̂, m̂′, ĥ) and Q^{(B_n,c_n)}_{UVM′MXYM̂M̂′Ĥ}(u, v, m′, m, x, y, m̂, m̂′, ĥ), respectively. For simplicity, we will denote these distributions by P^{(B_n)} and Q^{(B_n)}. Let 𝔹_n, supp(𝔹_n), and μ_n denote the random codebook, its support, and the probability measure induced by its random construction, respectively. Also, define P̄_{P^{(B_n)}} := E_{μ_n}[P_{P^{(B_n)}}] and P̄_{Q^{(B_n)}} := E_{μ_n}[P_{Q^{(B_n)}}].

Analysis of the type I and type II error probabilities:
We analyze the type I and type II error probabilities averaged over the random ensemble of quantization and binning codebooks (B_W, f_B). An expurgation technique [44] then guarantees the existence of a sequence of deterministic codebooks {B_n}_{n∈N} and a code {c_n = (f_n, g_n)}_{n∈N} that achieves the lower bound given in Theorem 1.
Type I error probability: In the following, random sets whose randomness is induced by 𝔹_n will be written using blackboard bold letters, e.g., 𝔸_n for the random acceptance region for H_0. Note that a type I error can occur only under the following events: (i) E_EE := ∪_{P_Û ∈ D_n(P_U,η)} ∪_{u ∈ T_n(P_Û)} E_EE(u), where E_EE(u) denotes the event that no quantization codeword in B_W,n is assigned to u; (ii) E_NE; (iii) E_OCE; (iv) E_SCE; and (v) E_BE. Here, E_EE corresponds to the event that there does not exist a quantization codeword for at least one sequence u of type P_u ∈ D_n(P_U, η); E_NE corresponds to the event in which there is neither an error at the channel decoder nor at the empirical conditional entropy decoder; E_OCE and E_SCE correspond to the cases in which there is an error at the channel decoder (and hence also at the empirical conditional entropy decoder) for an ordinary and a special codeword, respectively; and E_BE corresponds to the case in which there is an error (due to binning) only at the empirical conditional entropy decoder. For the event E_EE, it follows from a slight generalization of the type-covering lemma [38, Lemma 9.1] that

P̄_{P^{(B_n)}}(E_EE) ≤ e^{−e^{nΩ(η)}}. (23)

Since e^{nΩ(η)}/n → ∞ for η > 0, the event E_EE may be safely ignored in the analysis of the error-exponents.
Given that Eᶜ_EE holds for some B_W,n, it follows from [14, Equation 4.22] that P̄_{P^{(B_n)}}(E_NE, Ĥ = 1) ≤ e^{−n(κ_α − O(η))} for sufficiently large n, since the acceptance region is the same as that in [14, Theorem 1].
Next, consider the event E_OCE. For sufficiently large n, its probability can be bounded via a chain of inequalities in which (a) holds since the event {M = 0} is equivalent to {M′ = 0}, and (b) holds due to (20a) and (21), which hold for B_X,n.
Also, the probability of E_SCE can be upper bounded as in (26), where (26) is due to (23), the definition of D_n(P_U, η) in (14), and [38, Lemma 2.2, Lemma 2.6].
Finally, consider the event E_BE. Note that this event occurs only when |M| ≤ |M′|. Also, M = 0 iff M′ = 0, and hence M ≠ 0 together with M̂′ ≠ M′ implies that M̂′ ≠ 0. Let P_{W_{m′}UV} satisfy (22), and consider the divergence ball J_n(κ_α + η, P_{W_{m′}UV}). We then have the decomposition in (27). The second term in (27) can be upper-bounded as in (28), where the inequality in (28) follows from [14, Equation 4.22] for sufficiently large n, since the acceptance region Aᵉ_n is the same as that in [14]. To bound the first term in (27), define D_n(P_V, η) := {P_V̂ : ∃ P_V̂Ŵ ∈ D_n(P_VW, η)}, and observe that, since (M′, V) ∈ 𝔸_n implies M′ ≠ 0, we have (29), where (a) follows since, by the symmetry of the source encoder, the binning function, and the random codebook construction, the term in (29) is independent of (m, m′); and (b) holds since (M′, V) ∈ 𝔸_n implies that P_v ∈ D_n(P_V, η), and the relevant variables form a Markov chain. Defining P_V̂ = P_v and the event E_1 := {M′ = 1, V = v}, we obtain (30), where (a) follows since f_B(·) is the uniform binning function, independent of B_W,n; (b) holds due to the fact that, if P_v ∈ D_n(P_V, η), then M′ = 1 implies that (W(1), v) ∈ T_n(P_V̂Ŵ) with probability one for some P_V̂Ŵ ∈ D_n(P_VW, η); and (c) holds since P̄_{P^{(B_n)}}(W(j) = w | E_1 ∪ {W(1) = w}) ≤ 2 P̄_{P^{(B_n)}}(W(j) = w), which follows similarly to [10, Equation (101)].
Type II error probability: We analyze the type II error probability averaged over 𝔹_n. A type II error can occur only under the events E_a, E_b, E_c, and E_d. Similar to (23), it follows that P̄_{Q^{(B_n)}}(E_EE) ≤ e^{−e^{nΩ(η)}}; hence, we may assume that Eᶜ_EE holds in the type II error-exponent analysis. It then follows from the analysis in [14] that (31) holds for sufficiently large n. The analysis of the error events E_b, E_c, and E_d follows similarly to the proof of [10, Theorem 2], and results in (32) and (33). Since the exponent of the type II error probability is lower bounded by the minimum of the exponents of the type-II-error-causing events, we have shown above that, for a fixed (ω, R, P_SX, θ) ∈ L(κ_α) and sufficiently large n, (34) holds with

κ̃_s(κ_α, ω, R, P_SX, θ) := min{E_1(κ_α, ω), E_2(κ_α, ω, R), E_3(κ_α, ω, R, P_SX), E_4(κ_α, ω, R, P_SX, θ)}.
Expurgation: To complete the proof, we extract a deterministic codebook B_n that satisfies the desired error bounds. For this purpose, remove a subset B̃_n ⊂ B̄_n of the codebooks with the highest type I error probability such that the remaining set B̄_n\B̃_n has probability τ ∈ (0.25, 0.5), i.e., μ_n(B̄_n\B̃_n) = τ. It then follows from (34a) and (34b) that, for all B_n ∈ B̄_n\B̃_n and all sufficiently large n, the type I and type II error probabilities satisfy the same exponential bounds up to a multiplicative constant that does not affect the exponents. Maximizing over (ω, R, P_SX, θ) ∈ L(κ_α) and noting that η > 0 is arbitrary completes the proof.
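The codebook-removal step above rests on Markov's inequality: if the ensemble-average error probability is ē, then at least a τ-fraction of codebooks have error probability at most ē/(1 − τ). A numerical sanity check of this step, with purely illustrative stand-in error values, is sketched below:

```python
import numpy as np

rng = np.random.default_rng(3)

# Stand-in per-codebook error probabilities (the distribution is an assumption).
errors = rng.exponential(scale=1e-3, size=10**5)
bar_e, tau = errors.mean(), 0.3
# Markov: P(error > bar_e / (1 - tau)) <= 1 - tau, so at least a tau-fraction
# of codebooks survive the expurgation threshold.
surviving = errors[errors <= bar_e / (1.0 - tau)]
print(len(surviving) / len(errors) >= tau)
```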

D. Proof of Theorem 2
We will show that the error-exponent pairs (κ_α, κ_h(κ_α)) and (κ_α, κ_u(κ_α)) are achieved by a hybrid coding scheme and an uncoded transmission scheme, respectively. First, we describe the hybrid coding scheme.
Analysis of the type I and type II error probabilities: We analyze the expected type I and type II error probabilities, where the expectation is with respect to the randomness of 𝔹_n, followed by the expurgation technique to extract a sequence of deterministic codebooks {B_n}_{n∈N} and a code {c_n = (f_n, g_n)}_{n∈N} that achieves the lower bound in Theorem 2.
Type I error probability: Denoting by 𝔸_n the random acceptance region for H_0, note that a type I error can occur only under the following events: (i) E_EE := ∪_{P_Û ∈ D_n(P_U,η)} ∪_{u ∈ T_n(P_Û)} E_EE(u), where

E_EE(u) := {∄ j ∈ M′\{0} s.t. (s, u, W(j)) ∈ T_n(P_ŜÛᵢŴᵢ), P_ŜÛᵢ = P_su, P_ŜÛᵢŴᵢ ∈ D_n(P_SUW, η)};

and the analogous decoding-error events E_NE and E_ODE. By the definition of R′_i, we have, similar to (23), that (40) holds. Next, the probability of the event E_NE can be upper bounded as in (41). For u ∈ O_m′, note that, similar to [14, Equation 4.17], a corresponding bound holds; from this and (14), we obtain, similar to [14, Equation 4.22], the bound in (42). Substituting (42) in (41) yields (43). Next, we bound the probability of the event E_ODE via a chain of inequalities culminating in

P̄_{P^{(B_n)}}(E_ODE) ≤ e^{−n(ρ′(κ_α, ω′, P_S, P_X|USW) − ζ′(κ_α, ω′, P_Ŝ) − O(η))}, (46)

where step (a) follows similarly to (28) using (40) and (42); (b) holds since (s, M′, V, Y) ∈ 𝔸_n implies that M′ ≠ 0; and (c) follows similarly to (32). Further, (47) holds, where the penultimate equality is since, given Eᶜ_EE, M′ = 0 occurs only for U = u such that P_u ∉ D_n(P_U, η), and the final inequality follows from (40), the definition of D_n(P_U, η), and [38, Lemma 1.6]. From (40), (43), (46), and (47), the expected type I error probability is upper bounded by e^{−n(κ_α − O(η))} for sufficiently large n via the union bound.
Type II error probability: Next, we analyze the expected type II error probability over 𝔹_n. In view of (39), a type II error can occur only under the events E_a, E_b, and E_c. Considering the event E_a, we have the bound in (48), where E_1,n denotes the corresponding minimization; for the inequality in step (a), we used the fact that, given M′ = m′ and U = u, W(m′) is uniformly distributed in the set of codewords in B_W,n compatible with u. Next, the probability of the event E_b is bounded as in (49). Finally, considering the event E_c, we have (50); the first term on the RHS decays double-exponentially as e^{−e^{nΩ(η)}}, while the second term is handled by summing over u ∈ T_n(P_Ũ) with P_Ũ ∉ D_n(P_U, η). Since the exponent of the type II error probability is lower bounded by the minimum of the exponents of the type-II-error-causing events, it follows from (48), (49), and (50) that, for a fixed (P_S, ω′(·, P_S), P_X|USW, P_X′|US) ∈ L_h(κ_α),

P̄_{Q^{(B_n)}}(Ĥ = 0) ≤ e^{−n(κ̃_h(κ_α, ω′, P_S, P_X|USW, P_X′|US) − O(η))}, (51)

where κ̃_h := min{E_1(κ_α, ω′), E_2(κ_α, ω′, P_S, P_X|USW), E_3(κ_α, ω′, P_S, P_X|USW, P_X′|US)}. Performing expurgation as in the proof of Theorem 1 to obtain a deterministic codebook B_n satisfying (51), maximizing over (P_S, ω′(·, P_S), P_X|USW, P_X′|US) ∈ L_h(κ_α), and noting that η > 0 is arbitrary yields κ(κ_α) ≥ κ_h(κ_α).
Finally, we show that κ(κ_α) ≥ κ_u(κ_α), which completes the proof. Fix P_X|US, and let P_UVXY := P_UV P_X|US P_Y|X and Q_UVXY := Q_UV P_X|US P_Y|X. Consider an uncoded transmission scheme in which the channel input is generated symbol-by-symbol by passing (U, S) through the DMC P_X|US. Let the decision rule g_n be specified by the acceptance region

A_n := {(s, v, y) : D(P_{vy|s} || P_{VY|S} | P_s) ≤ κ_α + η}

for some small η > 0. Then, it follows from [42, Lemma 2.6] that the desired type I and type II error bounds hold for sufficiently large n. The proof is complete by noting that η > 0 is arbitrary.

V. CONCLUSION
This work explored the trade-off between the type I and type II error-exponents for distributed hypothesis testing over a noisy channel. We proposed a separate hypothesis testing and channel coding scheme as well as a joint scheme utilizing hybrid coding, and analyzed their performance, resulting in two inner bounds on the error-exponents trade-off. The separation-based scheme recovers some of the existing bounds in the literature as special cases. We also showed, via an example of testing against dependence, that the joint scheme strictly outperforms the separate scheme at some points of the error-exponents trade-off. An interesting avenue for future research is the exploration of novel outer bounds that could shed light on the scenarios in which the separate or joint schemes are tight.
APPENDIX C

Here, we prove a result that was used in the proof of Theorem 1, namely Proposition 1 given below. For this purpose, we require a few properties of the log-moment generating function, which we briefly review next.
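Numerically, the log-moment generating function ψ_{P_Z,f}(λ) and its rate function (convex conjugate) can be evaluated as follows; the PMF and the function f below are arbitrary assumptions used only for illustration:

```python
import numpy as np
from scipy.special import logsumexp

def log_mgf(p, f, lam):
    """psi(lam) = log E_P[exp(lam * f(Z))], computed stably via logsumexp."""
    return logsumexp(lam * f, b=p)

def rate_function(p, f, t, lams=np.linspace(-40.0, 40.0, 8001)):
    """psi*(t) = sup_lam [lam * t - psi(lam)], approximated on a grid."""
    return float(np.max(lams * t - np.array([log_mgf(p, f, l) for l in lams])))

p = np.array([0.6, 0.4])      # hypothetical PMF of Z
f = np.array([-1.0, 2.0])     # hypothetical function f(Z)
print(rate_function(p, f, t=0.5))
```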
Proposition 1 is essentially a characterization of the error-exponents region of a hypothesis testing problem, which we introduce next. Let P_X₀X₁ ∈ P(X²) be an arbitrary joint PMF, and consider a sequence of pairs of n-length sequences (x, x′) such that

P_xx′(x, x′) →(n) P_X₀X₁(x, x′), ∀ (x, x′) ∈ X². (52)

Consider the HT between H_0 : Y ∼ P_{Y|X}^⊗n(·|x) and H_1 : Y ∼ P_{Y|X}^⊗n(·|x′).

Achievability: Note that the log-moment generating function in (54) is a concave function of λ by Lemma 1 (i). Denoting its derivative with respect to λ by l_n(λ), we have (55)-(56), where (55) follows from Lemma 1 (iii), and (56) is due to the absolute continuity assumption P_Y|X(·|x) ≪ P_Y|X(·|x′), ∀ (x, x′) ∈ X², on the channel, and (52). Thus, by the concavity of l_n(λ), its supremum has to occur at λ ≥ 0. Simplifying the term within the exponent in (54), we obtain

Σ_{(x,x′)∈X²} P_xx′(x, x′) log E_{P_Y|X(·|x)}[(P_Y|X(Y|x′)/P_Y|X(Y|x))^λ], (57)

which, by (52), converges to

Σ_{(x,x′)∈X²} P_X₀X₁(x, x′) log E_{P_Y|X(·|x)}[(P_Y|X(Y|x′)/P_Y|X(Y|x))^λ]. (58)

Substituting (58) in (54), and using (1), we obtain for arbitrarily small (but fixed) δ > 0 and sufficiently large n that (59) and (60) hold. From (59) and (60), we obtain for −d_min < θ < d_max that the type II exponent E_{P_X₀X₁}[ψ*_{P_Y|X(·|X₀),Π_{X₀,X₁}}(θ)] is achievable. The proof of achievability is then completed by noting that δ > 0 is arbitrary and that κ(κ_α, P_X₀X₁) is a continuous function of κ_α for a fixed P_X₀X₁.
Converse: Let I_n(x, x′) := {i ∈ [n] : x_i = x and x′_i = x′}. For any θ ∈ R and decision function g_n, we have from [43, Theorem 14.9] that

α_n(g_n) + e^{−nθ} β_n(g_n) ≥ P_{P_{Y|X}^⊗n(·|x)}( log(P_{Y|X}^⊗n(Y|x′)/P_{Y|X}^⊗n(Y|x)) ≥ nθ ).

Simplifying the RHS, we obtain

P_{P_{Y|X}^⊗n(·|x)}( log(P_{Y|X}^⊗n(Y|x′)/P_{Y|X}^⊗n(Y|x)) ≥ nθ )
= P_{P_{Y|X}^⊗n(·|x)}( Σ_{(x,x′)∈X²} Σ_{i∈I_n(x,x′)} log(P_Y|X(Y_i|x′)/P_Y|X(Y_i|x)) ≥ Σ_{(x,x′)∈X²} nP_xx′(x, x′)θ )
≥ Π_{(x,x′)∈X²} P_{P_{Y|X}^⊗n(·|x)}( Σ_{i∈I_n(x,x′)} log(P_Y|X(Y_i|x′)/P_Y|X(Y_i|x)) ≥ nP_xx′(x, x′)θ ),

where the last step is due to the independence of the events {Σ_{i∈I_n(x,x′)} log(P_Y|X(Y_i|x′)/P_Y|X(Y_i|x)) ≥ nP_xx′(x, x′)θ} for different (x, x′) ∈ X². Define

b_{x,x′}(θ) := min_{Q_x ∈ P(Y) : E_{Q_x}[log(P_Y|X(Y|x′)/P_Y|X(Y|x))] ≥ θ} D(Q_x || P_Y|X(·|x)).

Then, for arbitrary δ > 0, δ′ > δ, and sufficiently large n, we can write

α_n + e^{−nθ} β_n ≥ Π_{(x,x′)∈X²} e^{−nP_xx′(x,x′)(b_{x,x′}(θ)+δ)} ≥ Π_{(x,x′)∈X²} e^{−nP_xx′(x,x′)(ψ*_{P_Y|X(·|x),Π_{x,x′}}(θ)+δ′)},

where the last inequality relates b_{x,x′}(θ) to the rate function via Lemma 1. Since δ′ > δ > 0 are arbitrary, the claimed converse follows.

Fig. 2: Comparison of the error-exponents trade-off achieved by the SHTCC and JHTCC schemes for TAD over a BSC in Example 1, with parameters p = 0.25, q = 0 for Figure 2a and p = 0.35, q = 0 for Figure 2b. The red curve shows the (κ_α, κ_u(κ_α)) pairs achieved by uncoded transmission, while the blue line plots (κ_α, E_ex(0)). The joint scheme clearly achieves a better error-exponents trade-off for values of κ_α below a threshold that depends on the transition kernel of the channel; in particular, a more uniform channel results in a smaller threshold.

Fig. 3: Comparison of the error-exponents trade-off achieved by the SHTCC and JHTCC schemes for Example 1, with parameters p = 0.25, q = 0.05 for Figure 3a and p = 0.35, q = 0.05 for Figure 3b. The JHTCC scheme improves over the separation-based scheme for small values of κ_α; however, the region of improvement is reduced compared to Figure 2, as the source is more uniformly distributed.
APPENDIX A: PROOF THAT THEOREM 1 RECOVERS [10, THEOREM 2]

We prove that lim_{κ_α→0} κ_s(κ_α) = κ_s, where κ_s is the lower bound on the type II error-exponent for a fixed type I error probability constraint and unit bandwidth ratio established in [10, Theorem 2]. Note that L(0, ω) = {P_UVW = P_UV P_W|U, P_W|U = ω(P_U)}, ζ(0, ω) = I_P(U; W), and ρ(0, ω) = I_P(V; W). The result then follows from Theorem 1 by noting that L(κ_α, ω), ζ(κ_α, ω), and ρ(κ_α, ω) are continuous in κ_α, and the fact that E_sp(P_SX, θ), E_ex(R, P_SX), and E_b(κ_α, ω, R) are all greater than or equal to zero.

APPENDIX B: PROOF THAT THEOREM 2 RECOVERS [10, THEOREM 5]

We show that lim_{κ_α→0} κ_h(κ_α) = κ_h, where κ_h is as defined in [10, Theorem 5]. Note that ζ′(0, ω′, P_S) := I_P(U; W|S), ρ′(0, ω′, P_S, P_X|USW) = I_P(Y, V; W|S),

L̃_h(0, ω′, P_S, P_X|USW) := {P_ÛV̂ŴŶŜ : P_SUVWXY := P_S P_UV P_W|US P_X|USW P_Y|X, P_W|US = ω′(P_U, P_S)},

L_h(0) := {(P_S, ω′(·, P_S), P_X|USW, P_X′|US) : I_P(U; W|S) < I_P(Y, V; W|S)},

and E_b(0, ω′, P_S, P_X|USW) = I_P(Y, V; W|S) − I_P(U; W|S). The result then follows from Theorem 2 via the continuity of L̃_h(κ_α, ω′, P_S, P_X|USW) in κ_α.
XY (x, y) = P X (x)P Y |X (y|x), for all (x, y) ∈ X × Y.When the joint distribution of a triple (X, Y, Z) factors as P XY Z = P . Inner Bound on R via SHTCC Scheme Let S = X and P SXY = P SX P Y |X be a PMF under which S − X − Y forms a Markov chain.For x ∈ X , set where P(W|U) is the set of all conditional distributions P W |U .Set θ l (P SX ) := s∈S P S (s)D P Y |S=s ||P Y |X=s , θ u (P SX ) := s∈S P S (s)D P Y |X=s ||P Y |S=s , and Θ(P SX ) := − θ l (P SX ), θ u (P SX ) .Denote an arbitrary element of F × R ≥0 × P(S × X ) × Θ(P SX ) by (ω, R, P SX , θ), and set |X (•|xi) log P Y |X (Y i |x i )/P Y |X (Y i |x i ) ) follows from Lemma 1 (iii), and (56) is due to the absolute continuity assumption, P Y |X (•|x)P Y |X (•|x ), ∀ (x, x ) ∈ X 2 onthe channel, and (52).Thus, by the concavity of l n (λ), its supremum has to occur at λ ≥ 0. Simplifying the term within the exponent in (54), we obtainP xx (x, x ) log E P Y |X (•|x) P Y |X (Y |x )/P Y |X (Y |x) λ(57) ] that α n (g n ) + e −nθ β n (g n ) ≥ P P ⊗n Y |X (•|x) log P ⊗n Y |X (Y|x )/P ⊗n Y |X (Y|x) ≥ nθ .Simplifying the RHS above, we obtainP P ⊗n Y |X (•|x) log P ⊗n Y |X (Y|x )/P ⊗n Y |X (Y|x) ≥ nθ = P P ⊗n Y |X (•|x) x,x i∈In(x,x ) log P Y |X (Y i |x i )/P Y |X (Y i |x i ) ≥ nθ = P P ⊗n Y |X (•|x)   x,x i∈In(x,x ) log P Y |X (Y i |x i )/P Y |X (Y i |x i ) ≥ (x,x )∈X 2