Privacy-Aware Distributed Hypothesis Testing

A distributed binary hypothesis testing (HT) problem involving two parties, a remote observer and a detector, is studied. The remote observer has access to a discrete memoryless source, and communicates its observations to the detector via a rate-limited noiseless channel. The detector observes another discrete memoryless source, and performs a binary hypothesis test on the joint distribution of its own observations with those of the observer. While the goal of the observer is to maximize the type II error exponent of the test for a given type I error probability constraint, it also wants to keep a private part of its observations as oblivious to the detector as possible. Considering both equivocation and average distortion under a causal disclosure assumption as possible measures of privacy, the trade-off between the communication rate from the observer to the detector, the type II error exponent, and privacy is studied. For the general HT problem, we establish single-letter inner bounds on both the rate-error exponent-equivocation and rate-error exponent-distortion trade-offs. Subsequently, single-letter characterizations of both trade-offs are obtained (i) for testing against conditional independence of the observer’s observations from those of the detector, given some additional side information at the detector; and (ii) when the communication rate constraint over the channel is zero. Finally, we provide a counter-example showing that the strong converse, which holds for distributed HT without a privacy constraint, does not hold when a privacy constraint is imposed. This implies that, in general, the rate-error exponent-equivocation and rate-error exponent-distortion trade-offs are not independent of the type I error probability constraint.


Introduction
Data inference and privacy are often contradictory objectives. In many multi-agent systems, each agent/user reveals information about its data to a remote service, application or authority, which, in turn, provides certain utility to the users based on their data. Many emerging networked systems can be thought of in this context, from social networks to smart grids and communication networks. While obtaining the promised utility is the main goal of the users, the privacy of the shared data is becoming increasingly important. Thus, it is critical that the users ensure a desired level of privacy for the sensitive information revealed, while maximizing the utility subject to this constraint. In many distributed learning or distributed decision-making applications, the goal is typically to learn the joint probability distribution of data available at different locations. In some cases, there may be prior knowledge about the joint distribution, for example, that it belongs to a certain set of known probability distributions. In such a scenario, the nodes communicate their observations to the detector, which then performs hypothesis testing (HT) on the underlying joint distribution of the data based on its own observations and those received from the other nodes. However, with the efficient data mining and machine learning algorithms available today, the detector can illegitimately infer some unintended private information from the data provided to it exclusively for HT purposes. Such threats are becoming increasingly imminent as large amounts of seemingly irrelevant yet sensitive data are collected from users, such as in medical research [1], social networks [2], online shopping [3] and smart grids [4]. Therefore, there is an inherent trade-off between the utility acquired by sharing data and the associated privacy leakage.
There are several practical scenarios where the above-mentioned trade-off arises. For example, consider the issue of consumer privacy in the context of online shopping. A consumer would like to share some information about his/her shopping behavior, e.g., shopping history and preferences, with the shopping portal to get better deals and recommendations on relevant products. The shopping portal would like to determine whether the consumer belongs to its target age group (e.g., below 30 years old) before sending special offers to this customer. Assuming that the shopping patterns of users within and outside the target age group are independent, the shopping portal performs a hypothesis test to check whether the consumer's shared data is correlated with the data of its own customers. If the consumer is indeed within the target age group, the shopping portal would like to gather more information about this potential customer, e.g., particular interests, a more accurate age estimate, etc., while the user is reluctant to provide any further information. Yet another relevant example is the issue of user privacy in the context of wearable Internet of Things (IoT) devices, such as smart watches and fitness trackers, which collect information on routine daily activities, and often have a third-party cloud interface.
In this paper, we study distributed HT (DHT) with a privacy constraint, in which an observer communicates its observations to a detector over a noiseless rate-limited channel of rate R nats per observed sample. Using the data received from the observer, the detector performs a binary HT on the joint distribution of its own observations and those of the observer. The performance of the HT is measured by the asymptotic exponential rate of decay of the type II error probability, known as the type II error exponent (or simply the error exponent henceforth), for a given constraint on the type I error probability (definitions will be given below). While the goal is to maximize the performance of the HT, the observer also wants to maintain a certain level of privacy against the detector for some latent private data that is correlated with its observations. We are interested in characterizing the trade-off between the communication rate from the observer to the detector over the channel, the error exponent achieved by the HT, and the amount of information leakage of the private data. A special case of HT known as testing against conditional independence (TACI) will be of particular interest. In TACI, the detector tests whether its own observations are independent of those of the observer, conditioned on additional side information available at the detector.

Background
Distributed HT without any privacy constraint has been studied extensively from an information-theoretic perspective in the past, although many open problems remain. The fundamental results for this problem are first established in [5], which include a single-letter lower bound on the optimal error exponent and a strong converse result stating that the optimal error exponent is independent of the constraint on the type I error probability. An exact single-letter characterization of the optimal error exponent for the testing against independence (TAI) problem, i.e., TACI with no side information at the detector, is also obtained there. The lower bound established in [5] is further improved in [6,7]. The strong converse is studied in the context of complete data compression and zero-rate compression in [6,8], respectively, where, in the former, the observer communicates with the detector using a message set of size two, while, in the latter, it uses a message set whose size grows sub-exponentially with the number of observed samples. The TAI problem with multiple observers remains open (similar to several other distributed compression problems in which a non-trivial fidelity criterion is involved); however, the optimal error exponent is obtained in [9] when the sources observed at the different observers follow a certain Markov relation. The scenario in which, in addition to HT, the detector is also interested in obtaining a reconstruction of the observer's source is studied in [10]. The authors characterize the trade-off between the achievable error exponent and the average distortion between the observer's observations and the detector's reconstruction. TACI is first studied in [11], where the optimality of a random binning-based encoding scheme is shown. The optimal error exponent for TACI over a noisy communication channel is established in [12].
An extension of this work to general HT over a noisy channel is considered in [13], where lower bounds on the optimal error exponent are obtained by using a separation-based scheme, and also by using hybrid coding for the communication between the observer and the detector. TACI with a single observer and multiple detectors is studied in [14], where each detector tests for the conditional independence of its own observations from those of the observer. The general HT version of this problem over a noisy broadcast channel and DHT over a multiple access channel are explored in [15]. While all the above works consider the asymmetric objective of maximizing the error exponent under a constraint on the type I error probability, the trade-off between the exponential rates of decay of the type I and type II error probabilities is considered in [16][17][18].
Data privacy has been a hot topic of research in the past decade, spanning multiple disciplines in the computer and computational sciences. Several practical schemes have been proposed that deal with the protection or violation of data privacy in different contexts, e.g., see [19][20][21][22][23][24]. More relevant for our work, HT under mutual information and maximal leakage privacy constraints has been studied in [25,26], respectively, where the observer uses a memoryless privacy mechanism to convey a noisy version of its observed data to the detector. The detector performs HT on the probability distribution of the observer's data, and the optimal privacy mechanism that maximizes the error exponent while satisfying the privacy constraint is analyzed. Recently, a distributed version of this problem has been studied in [27], where the observer applies a privacy mechanism to its observed data prior to further coding for compression, and the goal at the detector is to perform a HT on the joint distribution of its own observations with those of the observer. In contrast to [25][26][27], we study DHT with a privacy constraint, but without considering a separate privacy mechanism at the observer. In Section 2, we will further discuss the differences between the system model considered here and that of [27].
It is important to note here that the data privacy problem is fundamentally different from that of data security against an eavesdropper or an adversary. In data security, sensitive data is to be protected against an external malicious agent distinct from the legitimate parties in the system. The techniques for guaranteeing data security usually involve either cryptographic methods in which the legitimate parties are assumed to have additional resources unavailable to the adversary (e.g., a shared private key) or the availability of better communication channel conditions (e.g., using wiretap codes). However, in data privacy problems, the sensitive data is to be protected from the same legitimate party that receives the messages and provides the utility; and hence, the above-mentioned techniques for guaranteeing data security are not applicable. Another model frequently used in the context of information-theoretic security assumes the availability of different side information at the legitimate receiver and the eavesdropper [28,29]. A DHT problem with security constraints formulated along these lines is studied in [30], where the authors propose an inner bound on the rate-error exponent-equivocation trade-off. While our model is related to that in [30] when the side information at the detector and eavesdropper coincide, there are some important differences which will be highlighted in Section 2.3.
Many different privacy measures have been considered in the literature to quantify the amount of private information leakage, such as k-anonymity [31], differential privacy (DP) [32], mutual information leakage [33][34][35], maximal leakage [36], and total variation distance [37], to name a few; see [38] for a detailed survey. Among these, mutual information between the private and revealed information (or, equivalently, the equivocation of the private information given the revealed information) is perhaps the most commonly used measure in information-theoretic studies of privacy. It is well known that a necessary and sufficient condition to guarantee statistical independence between two random variables is to have zero mutual information between them. Furthermore, the average information leakage measured using an arbitrary privacy measure is upper bounded by a constant multiplicative factor of that measured by mutual information [34]. It is also shown in [33] that a differentially private scheme is not necessarily private when the information leakage is measured by mutual information. This is done by constructing an example that is differentially private, yet whose mutual information leakage is arbitrarily high. Mutual information-based measures have also been used in cryptographic security studies. For example, the notion of semantic security defined in [39] is shown to be equivalent to a measure based on mutual information in [40].
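As a concrete illustration of mutual information as a privacy measure, the short sketch below (not from the paper; the binary private source and the symmetric noise channel are our illustrative assumptions) computes the leakage I(S; Y) in nats when a private bit S is revealed through noise, recovering the fact that zero mutual information coincides with statistical independence, i.e., perfect privacy.

```python
import math

def mutual_information(p_xy):
    """I(X;Y) in nats for a joint pmf given as a dict {(x, y): prob}."""
    p_x, p_y = {}, {}
    for (x, y), p in p_xy.items():
        p_x[x] = p_x.get(x, 0.0) + p
        p_y[y] = p_y.get(y, 0.0) + p
    return sum(p * math.log(p / (p_x[x] * p_y[y]))
               for (x, y), p in p_xy.items() if p > 0)

# A uniform private bit S is revealed through a binary symmetric "channel"
# with crossover probability q: the leakage I(S; Y) shrinks as q -> 1/2.
def leakage(q):
    return mutual_information({(s, y): 0.5 * (q if s != y else 1 - q)
                               for s in (0, 1) for y in (0, 1)})

# q = 1/2 makes Y independent of S: zero mutual information, full privacy.
assert abs(leakage(0.5)) < 1e-12
assert leakage(0.1) > leakage(0.3) > 0
```

Note that the leakage is computed in nats, matching the paper's convention of natural logarithms.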
A rate-distortion approach to privacy is first explored by Yamamoto in [41] for a rate-constrained noiseless channel, where, in addition to a distortion constraint for the legitimate data, a minimum distortion requirement is enforced for the private part. Recently, there have been several works that use distortion as a security or privacy metric in different contexts, such as side-information privacy in discriminatory lossy source coding [42] and the rate-distortion theory of secrecy systems [43,44]. More specifically, in [43], the distortion-based security measure is analyzed under a causal disclosure assumption, in which the data samples to be protected are causally revealed to the eavesdropper (excluding the current sample), yet the average distortion over the entire block has to satisfy a desired lower bound. This assumption makes distortion a more robust secrecy measure (see ([43], Section I-A)), and could in practice model scenarios in which the sensitive data to be protected is eventually available to the eavesdropper with some delay, but the protection of the current data sample is important. In this paper, we will consider both equivocation and average distortion under a causal disclosure assumption as measures of privacy. In [45], the error exponent of a HT adversary is considered as a privacy measure. This can be considered the opposite of our setting, in the sense that while the goal here is to increase the error exponent under a privacy leakage constraint, the goal in [45] is to reduce the error exponent under a constraint on the possible transformations that can be applied to the data.
It is instructive to compare the privacy measures considered in this paper with DP. Towards this, note that average distortion and equivocation (see Definitions 1 and 2) are "average case" privacy measures, while DP is a "worst case" measure that focuses on the statistical indistinguishability of neighboring datasets that differ in just one entry. Considering this aspect, it may appear that these privacy measures are unrelated. However, as shown in [46], there is an interesting connection between them. More specifically, the maximum conditional mutual information leakage between the revealed data Y and an entry X i of the dataset given all the other n − 1 entries X −i = X n \ {X i }, i.e., I(Y; X i |X −i ), is sandwiched between the so-called ε-DP and (ε, δ)-DP in terms of the strength of the privacy measure, where the maximization is over all distributions P X n on X n and entries i ∈ [1 : n] ([46], Theorem 1). This implies that, as a privacy measure, equivocation (equivalent to mutual information leakage) is weaker than ε-DP, and stronger than (ε, δ)-DP, at least for some probability distributions on the data. On the other hand, equivocation and average distortion are relatively well-behaved privacy measures compared to DP, and often result in clean and exactly computable characterizations of the optimal trade-off for the problem at hand. Moreover, as already shown in [39,40,47,48], the trade-off resulting from "average" constraints turns out to be the same as that with stricter constraints in many interesting cases. Hence, it is of interest to consider such average case privacy measures as a starting point for further investigation with stricter measures. DP has been used extensively in privacy studies, including those that involve learning and HT [49][50][51][52][53][54][55][56][57][58][59].
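The contrast between worst-case DP and average-case mutual information leakage can be made concrete with the classical randomized response mechanism; the sketch below is illustrative only (the uniform binary source and the mechanism are our assumptions, not part of the paper's model). Randomized response that reports the truth with probability e^ε/(1 + e^ε) is exactly ε-DP, and the computation shows that its average mutual information leakage falls well below the worst-case budget ε.

```python
import math

def mutual_information(p_xy):
    """I(X;Y) in nats for a joint pmf given as a dict {(x, y): prob}."""
    px, py = {}, {}
    for (x, y), p in p_xy.items():
        px[x] = px.get(x, 0.0) + p
        py[y] = py.get(y, 0.0) + p
    return sum(p * math.log(p / (px[x] * py[y]))
               for (x, y), p in p_xy.items() if p > 0)

# Randomized response on a uniform private bit X: report the truth with
# probability t = e^eps / (1 + e^eps). The likelihood ratio of any output
# under x=0 vs. x=1 is at most e^eps, so the mechanism is exactly eps-DP.
def rr_leakage(eps):
    t = math.exp(eps) / (1 + math.exp(eps))
    return mutual_information({(x, y): 0.5 * (t if x == y else 1 - t)
                               for x in (0, 1) for y in (0, 1)})

for eps in (0.1, 0.5, 1.0):
    # Average-case leakage (nats) is far below the worst-case DP budget.
    assert rr_leakage(eps) < eps
assert rr_leakage(1.0) > rr_leakage(0.1)   # leakage grows with eps
```

This matches the qualitative point above: a scheme can satisfy a meaningful DP guarantee while its average information leakage is far smaller, and vice versa the two measures need not order the same mechanisms.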
More relevant to the distributed HT problem at hand is the local differentially private model employed in [49][50][51][56], in which, depending on the privacy requirement, a certain amount of random noise is injected into the user's data before further processing, while the utility is maximized subject to this constraint. Nevertheless, there are key differences between these models and ours. For example, in [49], the goal is to learn from differentially private "examples" the underlying "concept" (the model that maps examples to "labels") such that the error probability in predicting the labels of future examples is minimized, irrespective of the statistics of the examples. Hence, the utility in [49] is to learn an unknown model accurately, whereas our objective is to test between two known probability distributions. Furthermore, in our setting (unlike [49][50][51][56]), there is an additional requirement to satisfy in terms of the communication rate. These differences perhaps also make DP less suitable as a privacy measure in our model relative to equivocation and average distortion. On the one hand, imposing a DP measure in our setting may be overly restrictive, since there are only two probability distributions involved and DP is tailored for situations where the statistics of the underlying data are unknown. On the other hand, DP is also more unwieldy to analyze under a rate constraint compared to mutual information or average distortion.
The amount of private information leakage that can be tolerated depends on the specific application at hand. While it may be possible to tolerate a moderate amount of information leakage in applications like online shopping or social networks, it may no longer be the case in matters related to information sharing among government agencies or corporations. While it is obvious that maximum privacy can be attained by revealing no information, this typically comes at the cost of zero utility. On the other hand, maximum utility can be achieved by revealing all the information, but at the cost of minimum privacy. Characterizing the optimal trade-off between the utility and the minimum privacy leakage between these two extremes is a fundamental and challenging research problem.

Main Contributions
The main contributions of this work are as follows.

1.
In Section 3, Theorem 1 (resp. Theorem 2), we establish a single-letter inner bound on the rate-error exponent-equivocation (resp. rate-error exponent-distortion) trade-off for DHT with a privacy constraint. The distortion and equivocation privacy constraints we consider, which are given in (6) and (7), respectively, are slightly stronger than those usually considered in the literature (stated in (8) and (9), respectively).

2.
Exact characterizations are obtained for some important special cases in Section 4. More specifically, a single-letter characterization of the optimal rate-error exponent-equivocation (resp. rate-error exponent-distortion) trade-off is established for: (a) TACI with a privacy constraint (for vanishing type I error probability constraint) in Section 4.1, Proposition 1 (resp. Proposition 2), (b) DHT with a privacy constraint for zero-rate compression in Section 4.2, Proposition 4 (resp. Proposition 3).
Since the optimal trade-offs in Propositions 3 and 4 are independent of the constraint on the type I error probability, they are strong converse results in the context of HT.

3.
Finally, in Section 5, we provide a counter-example showing that for a positive rate R > 0, the strong converse result does not hold in general for TAI with a privacy constraint.

Organization
The organization of the paper is as follows. Basic notations are introduced in Section 2.1. The problem formulation and associated definitions are given in Section 2.2. The main results are presented in Sections 3 to 5. The proofs of the results are presented either in the Appendix or immediately after the statement of the result. Finally, Section 6 concludes the paper with some open problems for future research.

Notations
N, R and R ≥0 stand for the sets of natural numbers, real numbers and non-negative real numbers, respectively. For a ∈ R ≥0 , [a] := {i ∈ N : i ≤ a}, and for a ∈ R, a + := max{0, a} (:= denotes equality by definition). Calligraphic letters, e.g., A, denote sets, while |A| and A c denote the cardinality and complement of A, respectively. 1(·) denotes the indicator function, while O(·), o(·) and Ω(·) stand for the standard asymptotic notations of Big-O, Little-O and Big-Ω, respectively. For a real sequence {a n } n∈N and b ∈ R, a n (n) −→ b represents lim n→∞ a n = b. Similar notations apply for asymptotic inequalities, e.g., a n (n) ≥ b means that lim inf n→∞ a n ≥ b. Throughout this paper, the base of the logarithms is taken to be e, and whenever the range of a summation is not specified, it means summation over the entire support, e.g., ∑ u denotes ∑ u∈U .
All the random variables (r.v.'s) considered in this paper are discrete with finite support unless specified otherwise. We denote r.v.'s, their realizations and their supports by upper case, lower case and calligraphic letters (e.g., X, x and X ), respectively. The joint probability distribution of r.v.'s X and Y is denoted by P XY , while their marginals are denoted by P X and P Y . The sets of all probability distributions with supports X and X × Y are denoted by P (X ) and P (X × Y ), respectively. For i, j ∈ N with j ≥ i, the random vector (X i , . . . , X j ) is denoted by X j i , while X j stands for (X 1 , . . . , X j ). Similar notation holds for vectors of realizations. X − Y − Z denotes a Markov chain relation between the r.v.'s X, Y and Z. P P (E ) denotes the probability of event E with respect to the probability measure induced by distribution P, and E P [·] denotes the corresponding expectation. The subscript P is omitted when the distribution involved is clear from the context. For two probability distributions P and Q defined on a common support, P << Q denotes that P is absolutely continuous with respect to Q.
Following the notation in [60], for P X ∈ P (X ) and δ ≥ 0, the P X -typical set is denoted by T n [P X ] δ , and the P X -type class (the set of sequences of type, i.e., empirical distribution, P X ) by T n P X . The set of all possible types of sequences of length n over the alphabet X is denoted by P n (X ), and the set of types of sequences in T n [P X ] δ by P n (T n [P X ] δ ). Similar notations apply for pairs and larger combinations of r.v.'s, e.g., T n [P XY ] δ , T n P XY , P n (X × Y ) and P n (T n [P XY ] δ ). The conditional P Y|X -type class of a sequence x n ∈ X n is T n P Y|X (x n ) := {y n : (x n , y n ) ∈ T n P XY }.
The standard information-theoretic quantities, namely the Kullback-Leibler (KL) divergence between distributions P X and Q X , the entropy of X with distribution P X , the conditional entropy of X given Y and the mutual information between X and Y with joint distribution P XY , are denoted by D(P X ||Q X ), H P X (X), H P XY (X|Y) and I P XY (X; Y), respectively. When the distributions of the r.v.'s involved are clear from the context, the last three quantities are denoted simply by H(X), H(X|Y) and I(X; Y), respectively. Given realizations X n = x n and Y n = y n , H e (y n |x n ) denotes the conditional empirical entropy given by H e (y n |x n ) := H P̃ XY (Ỹ|X̃), (2) where P̃ XY denotes the joint type of (x n , y n ). Finally, the total variation between probability distributions P X and Q X defined on a common support X is d TV (P X , Q X ) := (1/2) ∑ x |P X (x) − Q X (x)|.
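For readers who prefer an operational view of the notation above, the following sketch (illustrative only; the function names are ours) computes a joint type, the conditional empirical entropy H e (y n |x n ), and the total variation distance for toy sequences.

```python
import math
from collections import Counter

def joint_type(xs, ys):
    """Empirical joint distribution (joint type) of a pair of sequences."""
    n = len(xs)
    return {xy: c / n for xy, c in Counter(zip(xs, ys)).items()}

def cond_empirical_entropy(ys, xs):
    """H_e(y^n | x^n): conditional entropy under the joint type, in nats."""
    pxy = joint_type(xs, ys)
    px = Counter()
    for (x, _), p in pxy.items():
        px[x] += p
    return -sum(p * math.log(p / px[x]) for (x, _), p in pxy.items())

def total_variation(p, q):
    """d_TV(P, Q) = (1/2) sum_x |P(x) - Q(x)| for pmfs given as dicts."""
    keys = set(p) | set(q)
    return 0.5 * sum(abs(p.get(k, 0.0) - q.get(k, 0.0)) for k in keys)

x = [0, 0, 1, 1]
y = [0, 1, 0, 1]   # given either value of x, y is "uniform" empirically
assert abs(cond_empirical_entropy(y, x) - math.log(2)) < 1e-12
assert cond_empirical_entropy(x, x) == 0.0   # y^n = x^n: no uncertainty left
assert total_variation({0: 1.0}, {1: 1.0}) == 1.0
```

Entropies are again in nats, consistent with the base-e convention above.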

Problem Formulation
Consider the HT setup illustrated in Figure 1, where (U n , V n , S n ) denotes n independent and identically distributed (i.i.d.) copies of the triplet of r.v.'s (U, V, S). The observer observes U n and sends the message index M to the detector over an error-free channel, where M ∼ f n (·|U n ), f n : U n → P (M) and M = [e nR ]. Given its own observation V n , the detector performs a HT on the joint distribution of U n and V n , with null hypothesis H 0 : (U, V, S) ∼ P SUV and alternate hypothesis H 1 : (U, V, S) ∼ Q SUV . Let H and Ĥ denote the r.v.'s corresponding to the true hypothesis and the output of the HT, respectively, with support H = Ĥ = {0, 1}, where 0 denotes the null hypothesis and 1 the alternate hypothesis. Let g n : M × V n → P (Ĥ) denote the decision rule at the detector, which outputs Ĥ ∼ g n (M, V n ). Then, the type I and type II error probabilities achieved by a ( f n , g n ) pair are given by α n ( f n , g n ) := P(Ĥ = 1|H = 0) = P Ĥ (1) and β n ( f n , g n ) := P(Ĥ = 0|H = 1) = Q Ĥ (0), respectively. Let P U n V n S n M Ĥ and Q U n V n S n M Ĥ denote the joint distribution of (U n , V n , S n , M, Ĥ) under the null and alternate hypotheses, respectively. For a given type I error probability constraint ε, define the minimum type II error probability over all possible detectors as β n ( f n , ε) := min g n β n ( f n , g n ) such that α n ( f n , g n ) ≤ ε.
The performance of the HT is measured by the error exponent achieved by the test for a given constraint ε on the type I error probability, i.e., lim inf n→∞ −(1/n) log β n ( f n , ε). Although the goal of the detector is to maximize the error exponent achieved by the HT, it is also curious about the latent r.v. S n that is correlated with U n . S n is referred to as the private part of U n , and (S, U, V) is distributed i.i.d. according to the joint distribution P SUV under the null hypothesis and Q SUV under the alternate hypothesis. It is desired to keep the private part as concealed as possible from the detector. We consider two measures of privacy for S n at the detector. The first is the equivocation, defined as H(S n |M, V n ). The second is the average distortion between S n and its reconstruction Ŝ n at the detector, measured according to an arbitrary bounded additive distortion metric d : S × Ŝ → [0, D m ], with the multi-letter distortion defined as d(s n , ŝ n ) := (1/n) ∑ n i=1 d(s i , ŝ i ). We adopt the causal disclosure assumption, i.e., Ŝ i may be a function of S i−1 in addition to (M, V n ). The goal is to ensure that the error exponent of the HT is maximized, while satisfying the constraints on the type I error probability and the privacy of S n . In the sequel, we study the trade-off between the rate, the type II error exponent (henceforth referred to simply as the error exponent) and the privacy achieved in the above setting. Before delving into that, a few definitions are in order.
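To make the error exponent concrete, the following numerical sketch (illustrative only; it treats the centralized setting with no rate limit or privacy constraint, and the distributions P and Q are our assumptions) computes the minimum type II error of a Neyman-Pearson test by exhaustive enumeration, and checks that −(1/n) log β n is close to the Chernoff-Stein exponent D(P||Q).

```python
import math
from itertools import product

# Simple binary i.i.d. source tested under P (null) vs. Q (alternate).
P, Q = {0: 0.5, 1: 0.5}, {0: 0.8, 1: 0.2}
D = sum(P[x] * math.log(P[x] / Q[x]) for x in P)   # D(P||Q) in nats

def np_beta(n, eps=0.2):
    """Type II error of a greedy Neyman-Pearson test with alpha <= eps."""
    # Rank all sequences u^n by likelihood ratio P^n/Q^n, and grow the H0
    # acceptance region A until P(A) >= 1 - eps (i.e., alpha <= eps).
    seqs = sorted(product((0, 1), repeat=n),
                  key=lambda s: sum(math.log(P[x] / Q[x]) for x in s),
                  reverse=True)
    p_acc, beta = 0.0, 0.0
    for s in seqs:
        p_acc += math.prod(P[x] for x in s)
        beta += math.prod(Q[x] for x in s)   # beta = Q(A)
        if p_acc >= 1 - eps:
            break
    return beta

exponent = -math.log(np_beta(14)) / 14
assert abs(exponent - D) < 0.08   # -(1/n) log beta_n approaches D(P||Q)
```

The convergence is slow in n, which is why the tolerance above is loose; the distributed, rate-limited and privacy-constrained trade-offs studied in this paper generalize this basic quantity.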

Definition 1.
For a given type I error probability constraint ε, a rate-error exponent-distortion tuple (R, κ, ∆ 0 , ∆ 1 ) is achievable if there exists a sequence of encoding and decoding functions f n : U n → P (M) and g n : [e nR ] × V n → P (Ĥ) such that the rate and error exponent constraints in (5) are satisfied, and for any γ > 0, there exists an n 0 ∈ N such that the distortion constraints in (6) hold for all n ≥ n 0 , where Ŝ i ∼ g (r) i,n (·|M, V n , S i−1 ), and g (r) i,n : [e nR ] × V n × S i−1 → P (Ŝ) denotes an arbitrary stochastic reconstruction map at the detector. The rate-error exponent-distortion region R d (ε) is the closure of the set of all such achievable (R, κ, ∆ 0 , ∆ 1 ) tuples for a given ε.

Definition 2.
For a given type I error probability constraint ε, a rate-error exponent-equivocation tuple (R, κ, Λ 0 , Λ 1 ) is achievable (it is well known that equivocation as a privacy measure is a special case of average distortion under the causal disclosure assumption and the log-loss distortion metric [43]; however, we provide a separate definition of the rate-error exponent-equivocation region for completeness) if there exists a sequence of encoding and decoding functions f n : U n → P (M) and g n : [e nR ] × V n → P (Ĥ) such that (5) is satisfied, and for any γ > 0, there exists an n 0 ∈ N such that the equivocation constraints in (7) hold for all n ≥ n 0 . The rate-error exponent-equivocation region R e (ε) is the closure of the set of all such achievable (R, κ, Λ 0 , Λ 1 ) tuples for a given ε.
Please note that the privacy constraints considered in (6) and (7) are stronger than the more commonly considered constraints in (8) and (9), which only require the respective distortion and equivocation bounds to hold asymptotically, i.e., in the lim inf sense as n → ∞. To see this for the equivocation privacy measure, note that if H(S n |M, V n , H = i) = nΛ * i − n a , i = 0, 1, for some a ∈ (0, 1), then the equivocation pair (Λ * 0 , Λ * 1 ) is achievable under the constraint given in (9), while it is not achievable under the constraint given in (7).
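The comparison can be verified with a short calculation. Here we assume, consistently with the example above, that (7) requires H(S n |M, V n , H = i) ≥ nΛ i − γ for all sufficiently large n (a constant slack γ), while (9) only constrains the lim inf of the per-sample equivocation; these assumed forms are ours, made to match the example:

```latex
% Assumed forms of the constraints (see lead-in):
% (7):  H(S^n \mid M, V^n, H{=}i) \;\ge\; n\Lambda_i - \gamma \qquad \forall\, n \ge n_0(\gamma),
% (9):  \liminf_{n\to\infty} \tfrac{1}{n} H(S^n \mid M, V^n, H{=}i) \;\ge\; \Lambda_i .
% For H(S^n \mid M, V^n, H{=}i) = n\Lambda_i^* - n^a with a \in (0,1):
\frac{1}{n} H(S^n \mid M, V^n, H{=}i) \;=\; \Lambda_i^* - n^{a-1}
  \;\xrightarrow{\;n \to \infty\;}\; \Lambda_i^* ,
\qquad \text{so (9) holds with } \Lambda_i = \Lambda_i^* .
% However, for any fixed \gamma > 0,
H(S^n \mid M, V^n, H{=}i) - \bigl( n\Lambda_i^* - \gamma \bigr)
  \;=\; \gamma - n^a \;\xrightarrow{\;n \to \infty\;}\; -\infty ,
\qquad \text{so (7) fails for } \Lambda_i = \Lambda_i^* .
```

In words: the sublinear deficit n a vanishes after normalization by n, but it eventually exceeds any fixed slack γ.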

Relation to Previous Work
Before stating our results, we briefly highlight the differences between our system model and the ones studied in [27,30]. In [27], the observer applies a privacy mechanism to the data before releasing it to the transmitter, which performs further encoding prior to transmission to the detector. More specifically, the observer checks whether U n ∈ T n [P U ] δ , and if so, sends the output of a memoryless privacy mechanism applied to U n to the transmitter; otherwise, it outputs an n-length all-zero sequence. The privacy mechanism plays the role of randomizing the data (or adding noise) to achieve the desired privacy. Such randomized privacy mechanisms are popular in privacy studies, and have been used in [25,26,61]. In our model, the tasks of coding for privacy and compression are performed jointly by using all the available data samples U n . Also, while we consider the equivocation (and average distortion) between the revealed information and the private part as the privacy measure, in [27], the mutual information between the observer's observations and the output of the memoryless mechanism is the privacy measure. As a result of these differences, there exist some points in the rate-error exponent-privacy trade-off that are achievable in our model, but not in [27]. For instance, a perfect privacy condition Λ 0 = 0 for testing against independence in ([27], Theorem 2) would imply that the error exponent is also zero, since the output of the memoryless mechanism has to be independent of the observer's observations (under both hypotheses). However, as we later show in Example 2, a positive error exponent is achievable while guaranteeing perfect privacy in our model.
On the other hand, the difference between our model and [30] arises from differences in both the privacy constraint and the privacy measure. Specifically, the goal in [30] is to keep U n private from an illegitimate eavesdropper, while the objective here is to keep a r.v. S n that is correlated with U n private from the detector. Also, in addition to the equivocation measure used in [30], we consider the more general average distortion (under causal disclosure) as a privacy measure. Moreover, as already noted, the equivocation privacy constraint in (7) is more stringent than the constraint (9) considered in [30]. To satisfy the distortion requirement or the stronger equivocation privacy constraint in (7), we require that the a posteriori probability distribution of S n given the observations (M, V n ) at the detector is close, in some sense, to a desired "target" memoryless distribution. To achieve this, we use a stochastic encoding scheme to induce the necessary randomness for S n at the detector, which, to the best of our knowledge, has not been considered previously in the context of DHT. Consequently, the analyses of the type I and type II error probabilities and the privacy achieved are novel.
Another subtle yet important difference is that the marginal distributions of U n and the side information at the eavesdropper are assumed to be the same under the null and alternate hypotheses in [30], which is not the case here. This necessitates separate analysis for the privacy achieved under the two hypotheses.
Next, we state some supporting results that will be useful later for proving the main results.

Supporting Results
Let g_n denote a deterministic detector with acceptance region A_n ⊆ [e^{nR}] × V^n for H_0 and A^c_n for H_1, so that the type I and type II error probabilities are as given in (10).
Lemma 1. Any achievable error exponent is also achievable by a deterministic detector of the form given in (10) for some A_n ⊆ [e^{nR}] × V^n, where A_n and A^c_n denote the acceptance regions for H_0 and H_1, respectively.
The proof of Lemma 1 is given in Appendix A for completeness. Due to Lemma 1, henceforth we restrict our attention to a deterministic g n as given in (10).
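As a concrete illustration of the detector structure in (10) and Lemma 1, the sketch below evaluates the type I and type II error probabilities of a deterministic acceptance region on a toy finite alphabet (with n = 1); the joint pmfs of (M, V) under the two hypotheses are illustrative assumptions, not distributions from the paper.

```python
# Deterministic detector on a toy alphabet: acceptance region A for H0;
# type I error alpha = P(A^c), type II error beta = Q(A).
# Joint pmfs of (M, V) under H0 (P) and H1 (Q) are illustrative assumptions.
P = {(0, 0): 0.40, (0, 1): 0.10, (1, 0): 0.10, (1, 1): 0.40}
Q = {(0, 0): 0.25, (0, 1): 0.25, (1, 0): 0.25, (1, 1): 0.25}

def errors(acceptance_region):
    alpha = sum(p for mv, p in P.items() if mv not in acceptance_region)
    beta = sum(q for mv, q in Q.items() if mv in acceptance_region)
    return alpha, beta

# A Neyman-Pearson style region: accept H0 where the likelihood ratio P/Q >= 1.
A = {mv for mv in P if P[mv] >= Q[mv]}
alpha, beta = errors(A)
```

For this toy choice, the region collects the two symbols favored under H_0, and the two error probabilities are read off directly from the pmfs.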
The next result shows that, without loss of generality (w.l.o.g.), it is also sufficient to consider g^{(r)}_{i,n} (in Definition 1) to be a deterministic function of the form (13) for the minimization in (6), where φ_{i,n} : M × V^n × S^{i−1} → Ŝ, i ∈ [n], denotes an arbitrary deterministic function.

Lemma 2. The infimum in (6) is achieved by a deterministic function g^{(r)}_{i,n} as given in (13); hence, it is sufficient to restrict to such deterministic g^{(r)}_{i,n} in (6).
The proof of Lemma 2 is given in Appendix B. Next, we state some lemmas that will be handy for upper bounding the amount of privacy leakage in the proofs of the main results stated below. The following is a well-known result from [60] that upper bounds the difference between the entropies of two r.v.'s (with a common support) in terms of the total variation distance between their probability distributions.

Lemma 3 ([60], Lemma 2.7). Let P_X and Q_X be distributions defined on a common support X, and let ρ := |P_X − Q_X| denote the variational distance between them. Then, provided ρ ≤ 1/2, |H(P_X) − H(Q_X)| ≤ ρ log(|X|/ρ).

The next lemma (Lemma 4) will be handy in proving Theorems 1 and 2, Proposition 3, and the counter-example to the strong converse presented in Section 5.
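The entropy-versus-variational-distance bound of Lemma 3 can be sanity-checked numerically; the sketch below uses the bound in its commonly stated form |H(P_X) − H(Q_X)| ≤ ρ log(|X|/ρ) for ρ ≤ 1/2 (the precise statement should be taken from [60]), with two assumed pmfs on a ternary alphabet.

```python
import math

def H(p):
    # Shannon entropy in bits
    return -sum(x * math.log2(x) for x in p if x > 0)

def l1(p, q):
    # L1 (variational) distance between two pmfs on a common support
    return sum(abs(a - b) for a, b in zip(p, q))

# Two pmfs on a 3-letter alphabet (illustrative values).
P = [0.5, 0.3, 0.2]
Q = [0.4, 0.35, 0.25]

rho = l1(P, Q)                        # 0.2 <= 1/2, so the lemma applies
gap = abs(H(P) - H(Q))                # entropy difference
bound = rho * math.log2(len(P) / rho) # rho * log(|X| / rho)
```

The bound is typically quite loose; its value here is the guarantee that a small change in distribution cannot move the entropy by much.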
In the next section, we establish inner bounds on R_e(ε) and R_d(ε).

Main Results
The following two theorems are the main results of this paper, providing inner bounds for R_e(ε) and R_d(ε), respectively.
The proofs of Theorems 1 and 2 are given in Appendix D. While the rate-error exponent trade-off in Theorems 1 and 2 is the same as that achieved by the Shimokawa-Han-Amari (SHA) scheme [7], the coding strategy achieving it is different due to the privacy constraint. As mentioned above, in order to obtain a single-letter lower bound on the achievable distortion (and achievable equivocation) of the private part at the detector, it is required that the a posteriori probability distribution of S^n given the observations (M, V^n) at the detector is close in some sense to a desired "target" memoryless distribution. For this purpose, we use the so-called likelihood encoder [62,63] (at the observer) in our achievability scheme. The likelihood encoder is a stochastic encoder that induces the necessary randomness for S^n at the detector, and to the best of our knowledge, it has not been used before in the context of DHT. The analysis of the type I and type II error probabilities and the privacy achieved by our scheme is novel, and involves the application of the well-known channel resolvability or soft-covering lemma [62,64,65]. Properties of the total variation distance between probability distributions mentioned in [43] play a key role in this analysis. The analysis also reveals the interesting fact that the coding schemes in Theorems 1 and 2, although quite different from the SHA scheme, achieve the same lower bound on the error exponent.
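A minimal sketch of the likelihood-encoder idea referenced above: given a realized codebook {w^n(j)}, the index J is drawn with probability proportional to the likelihood ∏_i P_{U|W}(u_i|w_i(j)), which is what induces the desired randomness in the posterior at the detector. The alphabets, test channel P_{U|W}, block length and codebook size below are toy assumptions.

```python
import math
import random

random.seed(0)

# Toy reverse test channel P_{U|W} and codeword prior P_W (assumed values).
P_W = [0.5, 0.5]
P_U_given_W = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.2, 1: 0.8}}

n, M_n = 8, 16  # block length and codebook size (toy values)

# Random codebook: each codeword w^n(j) is drawn i.i.d. from P_W.
codebook = [[random.choices([0, 1], weights=P_W)[0] for _ in range(n)]
            for _ in range(M_n)]

def likelihood_encoder(u_seq):
    """Return (J, posterior): J is drawn with probability proportional to
    prod_i P_{U|W}(u_i | w_i(j)) -- the stochastic likelihood encoder."""
    weights = [math.prod(P_U_given_W[w][u] for w, u in zip(w_seq, u_seq))
               for w_seq in codebook]
    total = sum(weights)
    posterior = [wt / total for wt in weights]
    j = random.choices(range(M_n), weights=posterior)[0]
    return j, posterior

u_obs = [0, 1, 0, 0, 1, 1, 0, 1]
j, posterior = likelihood_encoder(u_obs)
```

In contrast to a deterministic joint-typicality encoder, the same u^n can map to several indices, and it is this smoothing that the soft-covering lemma exploits in the privacy analysis.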
Theorems 1 and 2 provide single-letter inner bounds on R_e(ε) and R_d(ε), respectively. A complete computable characterization of these regions would require a matching converse. This is a hard problem, since such a characterization is not available, in general, even for the DHT problem without a privacy constraint (see [5]). However, it is known that a single-letter characterization of the rate-error exponent region exists for the special case of TACI [11]. In the next section, we show that TACI with a privacy constraint also admits a single-letter characterization, in addition to other optimality results.

TACI with a Privacy Constraint
Assume that the detector observes two discrete memoryless sources Y^n and Z^n, i.e., V^n = (Y^n, Z^n). In TACI, the detector tests for the conditional independence of U and Y, given Z. Thus, the joint distributions of the r.v.'s under the null and alternate hypotheses are given by H_0 : P_SUYZ := P_{S|UYZ} P_{U|Z} P_{Y|UZ} P_Z and H_1 : Q_SUYZ := Q_{S|UYZ} P_{U|Z} P_{Y|Z} P_Z, respectively. Let R_e and R_d denote the rate-error exponent-equivocation and rate-error exponent-distortion regions, respectively, for the case of a vanishing type I error probability constraint, i.e., ε → 0. Assume that the privacy constraint under the alternate hypothesis is inactive; thus, we are interested in characterizing the set of all achievable tuples (R, κ, Λ_0) (resp. (R, κ, ∆_0)). Please note that Λ_min and ∆_min correspond to the equivocation and average distortion of S^n at the detector, respectively, when U^n is available directly at the detector under the alternate hypothesis. The above assumption is motivated by scenarios in which the observer is more eager to protect S^n when there is a correlation between its own observations and those of the detector, such as the online shopping portal example mentioned in Section 1. In that example, U^n, S^n and Y^n correspond to the shopping behavior, further private information about the customer, and the customer data available to the shopping portal, respectively. For the above-mentioned case, we have the following results.
for some joint distribution of the form P SUYZW := P SUYZ P W|U .
Proof. For TACI, the inner bound in Theorem 1 yields the desired achievability for ε ∈ (0, 1), where P_SUYZW := P_SUYZ P_W|U and Q_SUYZW := Q_{S|YZ} P_{U|Z} P_{Y|Z} P_Z P_W|U.
Please note that since (Y, Z, S) − U − W holds, the corresponding identities between the information terms follow. Let B := {P_W|U : I_P(U; W|Z) ≤ R}. Then, for P_W|U ∈ B, the stated bounds hold. Hence, by noting that Λ_min ≤ H_Q(S|W, Y, Z) (by the data processing inequality), we have shown that (28)-(30) are satisfied. This completes the proof of achievability.

Converse: Let T be an r.v. uniformly distributed over [n] and independent of all the other r.v.'s (U^n, Y^n, Z^n, S^n, M). Define the auxiliary r.v. W := (W_T, T). Then, we have, for sufficiently large n, that (39)-(41) hold. Here, (39) follows since the sequences (U^n, Z^n) are memoryless; (40) follows from the Markov chain stated above; and (41) follows from the fact that T is independent of all the other r.v.'s.
The equivocation of S n under the null hypothesis can be bounded as follows.
where P_SUYZW = P_SUYZ P_W|U for some conditional distribution P_W|U. In (43), we used the fact that conditioning reduces entropy. Finally, we prove the upper bound on κ. For any encoding function f_n and decision region A_n ⊆ M × Y^n × Z^n for H_0 such that the type I error probability tends to zero, we have (45), where (45) follows from the log-sum inequality [60]. Thus, (46) holds, where (46) follows since Q_{MY^nZ^n} = P_{MZ^n} P_{Y^n|Z^n}. The last term can be single-letterized as in (48). Substituting (48) in (47), we obtain the desired bound. Also, note that (Z, Y) − U − W holds. To see this, note that M is a function of U^n; hence, any information in W_i on (Y_i, Z_i, S_i) is only through M as a function of U_i, and so given U_i, W_i is independent of (Y_i, Z_i, S_i). The above Markov chain then follows from the fact that T is independent of (U^n, Y^n, Z^n, S^n, M). This completes the proof of the converse and the proposition.
Next, we state the result for TACI with a distortion privacy constraint, where the distortion is measured using an arbitrary distortion measure d(·, ·). Let ∆_min := min_φ E_Q[d(S, φ(U, Y, Z))] denote the minimum average distortion of S when U is available directly at the detector under the alternate hypothesis; the constraints below are stated for some P_SUYZW as defined in Proposition 1.
Proof. The proof of achievability follows from Theorem 2, similarly to the way Proposition 1 is obtained from Theorem 1; hence, only the differences are highlighted. Similar to the inequality Λ_min ≤ H_Q(S|U, Y, Z) in the proof of Proposition 1, we need to prove the inequality ∆_min ≤ E_Q[d(S, φ(W, Y, Z))], where Q_SUYZW := Q_SUYZ P_W|U for some conditional distribution P_W|U. This follows from the definition of ∆_min, since φ(W, Y, Z) is a (possibly suboptimal) estimator of S based on no more information than (U, Y, Z), given that W is generated from U.
Converse: Let W = (W_T, T) denote the auxiliary r.v. defined in the converse of Proposition 1. Inequalities (50) and (51) follow similarly to their counterparts in Proposition 1. We prove (52), in which (54) is due to (A1) (in Appendix B). Hence, any ∆_0 satisfying (6) satisfies (52). This completes the proof of the converse and the proposition.
A more general version of Propositions 1 and 2 is claimed in [66] as Theorems 7 and 8, respectively, in which a privacy constraint under the alternate hypothesis is also imposed. However, we have identified a mistake in the converse proof; and hence, a single-letter characterization for this general problem remains open.
To complete the single-letter characterizations in Propositions 1 and 2, we bound the alphabet size of the auxiliary r.v. W in the following lemma. Its proof, given in Appendix E, uses standard arguments based on the Fenchel-Eggleston-Carathéodory theorem.

Remark 1.
When Q_{S|UYZ} = Q_{S|YZ}, a tight single-letter characterization of R_e and R_d exists even if the privacy constraint is active under the alternate hypothesis. This is due to the fact that, given Y^n and Z^n, M is independent of S^n under the alternate hypothesis. In this case, (R, κ, Λ_0, Λ_1) ∈ R_e if and only if there exists an auxiliary r.v. W such that (Z, Y, S) − U − W and the corresponding constraints hold for some P_SUYZW as in Proposition 1. Similarly, (R, κ, ∆_0, ∆_1) ∈ R_d if and only if there exist an auxiliary r.v. W and a deterministic function φ : W × Y × Z → Ŝ such that (55) and (56) are satisfied for some P_SUYZW as in Proposition 1.
The computation of the trade-off given in Proposition 1 is challenging despite the cardinality bound on the auxiliary r.v. W provided by Lemma 5, as closed form solutions do not exist in general. To see this, note that the inequality constraints defining R e are not convex in general, and hence even computing specific points in the trade-off could be a hard problem. This is evident from the fact that in the absence of the privacy constraint in Proposition 1, i.e., (30), computing the maximum error exponent for a given rate constraint is equivalent to the information bottleneck problem [67], which is known to be a hard non-convex optimization problem. Also, the complexity of brute force search is exponential in |U |, and hence intractable for large values of |U |. Below we provide an example which can be solved in closed form and hence computed easily.
The resulting trade-off curve is plotted for q = 0 and p ∈ {0.15, 0.25, 0.35}, as r is varied in the range [0, 0.5]. The projections of this curve onto the R-κ and κ-Λ_0 planes are shown in Figures 3a and 3b, respectively, for q ∈ {0, 0.1} and the same values of p. As expected, the error exponent κ increases with the rate R, while the equivocation Λ_0 decreases with κ at the boundary of R_e. Proposition 1 (resp. Proposition 2) provides a characterization of R_e (resp. R_d) under a vanishing type I error probability constraint. Consequently, the converse parts of these results are weak converse results in the context of HT. In the next subsection, we establish the optimal error exponent-privacy trade-off for the special case of zero-rate compression. This trade-off is independent of the type I error probability constraint ε ∈ (0, 1), and hence constitutes a strong converse result.
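The brute-force search over P_W|U described above can be sketched on a toy binary TAI-style instance (all parameters below, including the BSC noise values and the choice of the private part S, are assumptions for illustration, not the example from the text): for each test channel, we compute the rate I_P(U;W), the testing-against-independence exponent I_P(Y;W), and the equivocation H_P(S|W,Y).

```python
import math
from itertools import product

def h2(p):
    # binary entropy in bits
    return 0.0 if p in (0.0, 1.0) else -p*math.log2(p) - (1-p)*math.log2(1-p)

# Toy instance (assumed): U ~ Bern(1/2), Y = U through a BSC(p_y),
# private part S = U xor Bern(p_s).
p_y, p_s = 0.1, 0.2

def joint(a, b):
    """Joint pmf of (U, Y, S, W) for the test channel P_{W|U} with
    P(W=1|U=0) = a and P(W=1|U=1) = b."""
    pmf = {}
    for u, y, s, w in product(range(2), repeat=4):
        pu = 0.5
        py = p_y if y != u else 1.0 - p_y
        ps = p_s if s != u else 1.0 - p_s
        pw1 = a if u == 0 else b
        pw = pw1 if w == 1 else 1.0 - pw1
        pmf[(u, y, s, w)] = pu * py * ps * pw
    return pmf

def entropy(pmf):
    return -sum(p * math.log2(p) for p in pmf.values() if p > 0)

def marginal(pmf, keep):
    out = {}
    for k, p in pmf.items():
        kk = tuple(k[i] for i in keep)
        out[kk] = out.get(kk, 0.0) + p
    return out

def stats(a, b):
    """(rate I(U;W), TAI error exponent I(Y;W), equivocation H(S|W,Y))."""
    pmf = joint(a, b)
    U, Y, S, W = 0, 1, 2, 3
    I_UW = (entropy(marginal(pmf, (U,))) + entropy(marginal(pmf, (W,)))
            - entropy(marginal(pmf, (U, W))))
    I_YW = (entropy(marginal(pmf, (Y,))) + entropy(marginal(pmf, (W,)))
            - entropy(marginal(pmf, (Y, W))))
    H_S_WY = entropy(marginal(pmf, (S, Y, W))) - entropy(marginal(pmf, (Y, W)))
    return I_UW, I_YW, H_S_WY

# Brute-force grid over the two crossover parameters of P_{W|U}.
grid = [i / 20 for i in range(21)]
points = [(a, b, *stats(a, b)) for a in grid for b in grid]
```

The cost of this search grows exponentially in |U| for general alphabets, which is exactly the computational difficulty noted above.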

Zero-Rate Compression
Assume the following zero-rate constraint on the communication between the observer and the detector, given in (66). Please note that (66) does not imply that nothing can be transmitted, but only that the message set cardinality can grow at most sub-exponentially in n. Such a scenario is motivated practically by low-power or low-bandwidth constrained applications in which communication is costly. Propositions 3 and 4 stated below provide optimal single-letter characterizations of R_d(ε) and R_e(ε) in this case. While the coding schemes in the achievability parts of these results are inspired by that in [6], the analysis of the privacy achieved at the detector is new; Lemma 4 serves as a crucial tool for this purpose. We next state the results. Let Λ_max and ∆_max denote the equivocation and average distortion of S at the detector when it has to rely on V alone, where φ : V → Ŝ is a deterministic estimator.

Proof. First, we prove that any (0, κ, ∆_0, ∆_1) satisfying (68)-(70) is achievable. While the encoding and decoding scheme is the same as that in [6], we mention it for the sake of completeness. Encoding: The observer sends the message M = 1 if U^n ∈ T^n_{[P_U]δ}, δ > 0, and M = 0 otherwise.

Decoding:
The detector declares Ĥ = 0 if M = 1 and V^n ∈ T^n_{[P_V]δ}, δ > 0; otherwise, Ĥ = 1 is declared. We analyze the type I and type II error probabilities for the above scheme. Please note that for any δ > 0, the weak law of large numbers implies that the typicality events occur with probability tending to one under H_0. Hence, the type I error probability tends to zero, asymptotically. The type II error probability decays with the exponent given on the R.H.S. of (68). Next, we lower bound the average distortion for S^n achieved by this scheme at the detector. Defining Π(U^n, δ, P_U) as the indicator that U^n ∉ T^n_{[P_U]δ}, we can write (74)-(76), where (74) holds since Π(U^n, δ, P_U) = 1 − M with probability one by the encoding scheme; (75) follows from ([43], Property 2(b)); and (76) is due to (17). Similarly, it can be shown using (16) that the analogous bound holds if Q_U = P_U. On the other hand, if Q_U ≠ P_U and δ is small enough, we have (79). Hence, for δ small enough, we can write (80)-(82), where (80) holds since Π(U^n, δ, P_U) = 1 − M with probability one; (81) is due to (79) and ([43], Property 2(b)); and (82) follows from (15). This completes the proof of the achievability. We next prove the converse. Please note that by the strong converse result in [8], the right hand side (R.H.S.) of (68) is an upper bound on the achievable error exponent for all ε ∈ (0, 1), even without a privacy constraint (hence, also with a privacy constraint). Also, (83) holds, since the detector can always reconstruct Ŝ_i as a function of V_i for i ∈ [n]; a similar bound holds under the alternate hypothesis. Hence, any achievable ∆_0 and ∆_1 must satisfy (69) and (70), respectively. This completes the proof.
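The one-bit typicality test used by this scheme can be evaluated exactly for a Bernoulli source via the binomial distribution; the source parameters, δ and n below are illustrative assumptions, and only the observer-side test on U^n is modeled here. The computed type II exponent is close to the KL divergence between the nearest composition in the typical band and Q_U.

```python
import math

def log_binom_pmf(n, k, p):
    # log of the Binomial(n, p) pmf at k, via lgamma for numerical stability
    return (math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
            + k * math.log(p) + (n - k) * math.log(1.0 - p))

n, delta = 500, 0.05
pU, qU = 0.3, 0.6  # source parameter under H0 and H1 (assumed values)

lo = math.ceil(n * (pU - delta))
hi = math.floor(n * (pU + delta))

# M = 1 iff the empirical frequency of ones lies in [pU - delta, pU + delta].
typical_P = sum(math.exp(log_binom_pmf(n, k, pU)) for k in range(lo, hi + 1))
typical_Q = sum(math.exp(log_binom_pmf(n, k, qU)) for k in range(lo, hi + 1))

alpha = 1.0 - typical_P            # type I error of the typicality test
beta = typical_Q                   # type II error (based on M alone)
exponent = -math.log2(beta) / n    # empirical type II exponent in bits
```

With these numbers, β is astronomically small while α is already a few percent at n = 500, illustrating the asymmetric roles of the two error probabilities.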
The following proposition is the analogue of Proposition 3 when the privacy measure is equivocation.

Proposition 4.
For ε ∈ (0, 1), (0, κ, Λ_0, Λ_1) ∈ R_e(ε) if and only if it satisfies (68) together with the corresponding equivocation constraints. Proof. For the achievability part, the encoding and decoding scheme is the same as in Proposition 3; hence, the analysis of the error exponent given in Proposition 3 holds. To lower bound the equivocation of S^n at the detector, defining Π(U^n, δ, P_U) and ρ as before, we obtain (86) and (87), where (86) follows due to Lemma 3, ([60], Lemma 2.12) and the fact that the entropy of a r.v. is bounded by the logarithm of the cardinality of its support; and (87) follows from (17) in Lemma 4, since δ > 0. In a similar way, it can be shown using (16) that the corresponding bound holds if Q_U = P_U. On the other hand, if Q_U ≠ P_U and δ is small enough, we can write (89), where (89) follows from Lemma 3 and (79). It follows from (15) in Lemma 4 that for δ > 0 sufficiently small, ρ_n^{(1)}(δ) ≤ e^{−nδ'} for some δ' > 0, thus implying that the R.H.S. of (89) tends to zero. This completes the proof of achievability.
The converse follows from the results in [6,8], which show that the R.H.S. of (68) is the optimal error exponent achievable for all values of ε ∈ (0, 1) even when there is no privacy constraint, together with an upper bound on the equivocation analogous to (83). This concludes the proof of the proposition.
In Section 2.2, we mentioned that it is possible to achieve a positive error exponent with perfect privacy in our model. Here, we provide an example of TAI with an equivocation privacy constraint under both hypotheses, and show that perfect privacy is possible. Recall that TAI is a special case of TACI in which Z = constant; hence, the null and alternate hypotheses are given by P_SUY := P_SU P_Y|U and Q_SUY := P_SU P_Y, where P_Y(y) = ∑_{u∈U} P_U(u) P_Y|U(y|u). Then, we have H_Q(S|Y) = H_P(S) = H_P(U) = 2 bits. Also, noting that under the null hypothesis Y = U mod 2, H_P(S|Y) = 2 bits. It follows from the inner bound given by Equations (31)-(34), (37) and (38), where P_SUYW := P_SUY P_W|U and Q_SUYW := Q_SUY P_W|U for some conditional distribution P_W|U, that the following tuple is achievable. If we set W := U mod 2, then we have I_P(U; W) = 1 bit, I_P(Y; W) = H_P(Y) = 1 bit, H_P(S|W, Y) = H_P(S|Y) = 2 bits, and H_Q(S|W) = H_P(S|Y) = 2 bits. Thus, by revealing only W to the detector, it is possible to achieve a positive error exponent while ensuring maximum privacy under both the null and alternate hypotheses, i.e., the tuple (1, 1, 2, 2) ∈ R_e(ε), ∀ ε ∈ (0, 1).
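The entropy bookkeeping in this example can be verified numerically. Since this section does not reproduce the example's full joint P_SU, the sketch below fixes one joint distribution consistent with the stated entropies (an assumption): U uniform on {0,1,2,3}, S = (U >> 1, B) with B a fair coin independent of U, Y = U mod 2 under H_0 and an independent fair bit under H_1, and W = U mod 2.

```python
import math
from itertools import product

def entropy(pmf):
    return -sum(p * math.log2(p) for p in pmf.values() if p > 0)

def marginal(pmf, keep):
    out = {}
    for k, p in pmf.items():
        kk = tuple(k[i] for i in keep)
        out[kk] = out.get(kk, 0.0) + p
    return out

def cond_entropy(pmf, a, b):
    # H(A | B) for index tuples a and b into the pmf keys
    return entropy(marginal(pmf, a + b)) - entropy(marginal(pmf, b))

# Joint pmfs of (U, Y, S, W) under H0 (P) and H1 (Q); assumed construction
# described in the lead-in above.
P, Q = {}, {}
for u, b, y in product(range(4), range(2), range(2)):
    s = 2 * (u >> 1) + b          # private part: top bit of U plus a fresh coin
    w = u % 2                     # revealed part
    P[(u, y, s, w)] = 0.25 * 0.5 * (1.0 if y == u % 2 else 0.0)
    Q[(u, y, s, w)] = 0.25 * 0.5 * 0.5

U, Y, S, W = 0, 1, 2, 3
rate = cond_entropy(P, (U,), ()) - cond_entropy(P, (U,), (W,))      # I_P(U;W)
exponent = cond_entropy(P, (Y,), ()) - cond_entropy(P, (Y,), (W,))  # I_P(Y;W)
equiv_H0 = cond_entropy(P, (S,), (W, Y))   # H_P(S|W,Y)
equiv_H1 = cond_entropy(Q, (S,), (W,))     # H_Q(S|W)
```

Under this construction, the revealed bit W carries the half of U that is independent of the private part S, which is exactly why full equivocation survives.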

A Counter-Example to the Strong Converse
Ahlswede and Csiszár obtained a strong converse result for the DHT problem without a privacy constraint in [5], where they showed that for any positive rate R, the optimal achievable error exponent is independent of the type I error probability constraint ε. Here, we explore whether a similar result holds in our model, in which an additional privacy constraint is imposed, and show through a counter-example that this is not the case in general. The basic idea is a "time-sharing" argument: starting from a coding scheme that achieves the optimal rate-error exponent-equivocation trade-off under a vanishing type I error probability constraint, we construct a new coding scheme that satisfies a given type I error probability constraint ε* and achieves the same error exponent as before, yet attains a higher equivocation for S^n at the detector. This concept has been used previously in other contexts, e.g., in the characterization of the first-order maximal channel coding rate of the additive white Gaussian noise (AWGN) channel in the finite block-length regime [69], and subsequently in the characterization of the second-order maximal coding rate in the same setting [70]. However, we provide a self-contained proof of the counter-example by using Lemma 4.
Assume that the joint distribution P_SUV is such that H_P(S|U, V) < H_P(S|V). Proving the strong converse amounts to showing that any (R, κ, Λ_0, Λ_1) ∈ R_e(ε) for some ε ∈ (0, 1) also belongs to R_e. Consider the TAI problem with an equivocation privacy constraint, in which R ≥ H_P(U) and Λ_1 ≤ Λ_min. Then, from the optimal single-letter characterization of R_e given in Proposition 1, it follows by taking W = U that (H_P(U), I_P(V; U), H_P(S|V, U), Λ_min) ∈ R_e. Please note that I_P(V; U) is the maximum error exponent achievable for any type I error probability constraint ε ∈ (0, 1), even when U^n is observed directly at the detector. Thus, for a vanishing type I error probability constraint ε → 0 and κ = I_P(V; U), the term H_P(S|V, U) is the maximum achievable equivocation for S^n under the null hypothesis. From the proof of Proposition 1, the coding scheme achieving this tuple is as follows:
1. Quantize u^n to codewords in B_n = {u^n(j) ∈ T^n_{[P_U]δ}, j ∈ [e^{n(H_P(U)+η)}]} and send the index of the quantization to the detector, i.e., if u^n ∈ T^n_{[P_U]δ'} for some δ' > δ, send the index j of u^n in B_n.
The type I error probability of the above scheme tends to zero asymptotically with n. Now, for a fixed ε* > 0, consider a modification of this coding scheme as follows:
1. If u^n ∈ T^n_{[P_U]δ'}, send M = j with probability 1 − ε*, where j is the index of u^n in B_n, and with probability ε*, send M = 0. If u^n ∉ T^n_{[P_U]δ'}, send M = 0.
It is easy to see that for this modified coding scheme, the type I error probability is asymptotically equal to ε*, while the error exponent remains the same as I_P(V; U), since the probability of declaring Ĥ = 0 is only decreased. Recalling the definition of Π(u^n, δ, P_U), we obtain (91)-(94), where {γ_n}_{n∈N} denotes some sequence of positive numbers such that γ_n → 0 as n → ∞, γ_n := −2ρ*_n log(2ρ*_n/|S|^n), ρ*_n := |P_{S^nV^n|Π(U^n,δ,P_U),M}(·|0, 0) − P_{S^nV^n}(·)| = |P_{S^nV^n|Π(U^n,δ,P_U)}(·|0) − P_{S^nV^n}(·)|, and γ̄_n is defined in terms of γ_n and H(S^n|M = 0, V^n, H = 0, Π(U^n, δ, P_U) = 1). Equation (91) follows similarly to the proof of Theorem 1 in [71]. Equation (92) is obtained via (98) and (99); here, (98) follows by an application of Lemma 3, and (99) is due to the assumption that H_P(S|U, V) < H_P(S|V).
It follows from Lemma 4 that ρ*_n → 0 as n → ∞, which in turn implies (100). From (95), (97) and (100), we have that the modified scheme attains an equivocation strictly larger than H_P(S|V, U) for ε* > 0; this implies that, in general, the strong converse does not hold for HT with an equivocation privacy constraint. The same counter-example can be used in a similar manner to show that the strong converse does not hold for HT with an average distortion privacy constraint either.
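The single-letter effect of the time-sharing modification can be computed exactly on a toy P_SUV (all parameters below are assumptions): sending an erasure message M = 0 with probability ε* yields H(S|M, V) = (1 − ε*) H(S|U, V) + ε* H(S|V), which strictly exceeds H(S|U, V) whenever H_P(S|U, V) < H_P(S|V), while the type I error of the modified test equals ε*.

```python
import math
from itertools import product

def entropy(pmf):
    return -sum(p * math.log2(p) for p in pmf.values() if p > 0)

def marginal(pmf, keep):
    out = {}
    for k, p in pmf.items():
        kk = tuple(k[i] for i in keep)
        out[kk] = out.get(kk, 0.0) + p
    return out

def cond_entropy(pmf, a, b):
    return entropy(marginal(pmf, a + b)) - entropy(marginal(pmf, b))

# Toy single-letter source (assumed): U ~ Bern(1/2), V = U xor Bern(0.2),
# S = U xor Bern(0.1). The encoder reveals U, except that with probability
# eps it sends the erasure message M = 0 (time-sharing).
eps = 0.25
pmf = {}
for u, nv, ns in product(range(2), range(2), range(2)):
    v, s = u ^ nv, u ^ ns
    p = 0.5 * (0.2 if nv else 0.8) * (0.1 if ns else 0.9)
    for m, pm in ((u + 1, 1.0 - eps), (0, eps)):
        key = (s, u, v, m)
        pmf[key] = pmf.get(key, 0.0) + p * pm

S, U, V, M = 0, 1, 2, 3
H_S_UV = cond_entropy(pmf, (S,), (U, V))
H_S_V = cond_entropy(pmf, (S,), (V,))
H_S_MV = cond_entropy(pmf, (S,), (M, V))
```

The strict gap H(S|M, V) − H(S|U, V) > 0 for every fixed ε* > 0 is exactly the non-vanishing equivocation gain that rules out a strong converse.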

Conclusions
We have studied the DHT problem with a privacy constraint, with equivocation and average distortion under a causal disclosure assumption as the measures of privacy. We have established single-letter inner bounds on the rate-error exponent-equivocation and rate-error exponent-distortion trade-offs. We have also obtained the optimal rate-error exponent-equivocation and rate-error exponent-distortion trade-offs for two special cases: when the communication rate over the channel is zero; and for TACI under a privacy constraint. It is interesting to note that the strong converse for DHT does not hold when an additional privacy constraint is imposed on the system. Extending these results to the case in which the communication between the observer and the detector takes place over a noisy channel is an interesting avenue for future research. Yet another important topic worth exploring is the trade-off between the rate, error probability and privacy in the finite sample regime for the setting considered in this paper.

Conflicts of Interest:
The authors declare no conflict of interest.

Abbreviations
The following abbreviations are used in this manuscript:

HT    Hypothesis testing
DHT   Distributed hypothesis testing
TACI  Testing against conditional independence
TAI   Testing against independence
DP    Differential privacy
KL    Kullback-Leibler
SHA   Shimokawa-Han-Amari

Appendix A. Proof of Lemma 1
Please note that for a stochastic detector, the type I and type II error probabilities are linear functions of P_{Ĥ|M,V^n}. As a result, for each fixed n and f_n, α_n(f_n, g_n) and β_n(f_n, g_n) for a stochastic detector g_n can be thought of as the type I and type II errors achieved by "time-sharing" among a finite number of deterministic detectors. To see this, consider some ordering on the elements of the set M × V^n, and let ν_i := P_{Ĥ|M,V^n}(0|i), i ∈ [1 : N], where i denotes the i-th element of M × V^n and N = |M × V^n|. Then, it is easy to see that P_{Ĥ|M,V^n} = ∑_{i=1}^N ν_i I_i, where I_i := [e_i  1 − e_i] and e_i is an N-length vector with 1 at the i-th component and 0 elsewhere. Now, suppose (α_n^{(1)}, β_n^{(1)}) and (α_n^{(2)}, β_n^{(2)}) denote the pairs of type I and type II error probabilities achieved by deterministic detectors g_n^{(1)} and g_n^{(2)}, respectively. Let A_{1,n} and A_{2,n} denote their corresponding acceptance regions for H_0. Let g_n^{(θ)} denote the stochastic detector formed by using g_n^{(1)} and g_n^{(2)} with probabilities θ and 1 − θ, respectively. From the above-mentioned linearity property, it follows that g_n^{(θ)} achieves type I and type II error probabilities of α_n(f_n, g_n^{(θ)}) = θα_n^{(1)} + (1 − θ)α_n^{(2)} and β_n(f_n, g_n^{(θ)}) = θβ_n^{(1)} + (1 − θ)β_n^{(2)}, respectively. Let r(θ) := min(θ, 1 − θ). Then, for θ ∈ (0, 1), β_n^{(j)} ≤ β_n(f_n, g_n^{(θ)})/r(θ) for j = 1, 2, while min(α_n^{(1)}, α_n^{(2)}) ≤ α_n(f_n, g_n^{(θ)}). Hence, for at least one j ∈ {1, 2}, α_n^{(j)} ≤ α_n(f_n, g_n^{(θ)}) and −(1/n) log β_n^{(j)} ≥ −(1/n) log β_n(f_n, g_n^{(θ)}) + (1/n) log r(θ). Thus, since (1/n) log r(θ) → 0 as n → ∞, a stochastic detector does not offer any advantage over deterministic detectors in the trade-off between the error exponent and the type I error probability.
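The linearity argument above can be verified numerically: the stochastic detector's errors are exactly the θ-mixture of the deterministic ones, each deterministic β is within a factor 1/r(θ) of the mixture, and at least one of the two deterministic detectors also has no larger α. The joint pmfs and acceptance regions below are illustrative assumptions.

```python
# Toy joint pmfs of (M, V) under H0 (P) and H1 (Q); illustrative assumptions.
P = {(0, 0): 0.35, (0, 1): 0.15, (1, 0): 0.15, (1, 1): 0.35}
Q = {(0, 0): 0.10, (0, 1): 0.40, (1, 0): 0.40, (1, 1): 0.10}

A1 = {(0, 0), (1, 1)}           # acceptance region of deterministic detector g1
A2 = {(0, 0), (0, 1), (1, 1)}   # acceptance region of deterministic detector g2

def det_errors(A):
    alpha = sum(p for mv, p in P.items() if mv not in A)
    beta = sum(q for mv, q in Q.items() if mv in A)
    return alpha, beta

theta = 0.4
# Stochastic detector: accept H0 at (m, v) with probability
# theta*1[(m,v) in A1] + (1-theta)*1[(m,v) in A2].
accept = {mv: theta * (mv in A1) + (1 - theta) * (mv in A2) for mv in P}
alpha_mix = sum(P[mv] * (1.0 - accept[mv]) for mv in P)
beta_mix = sum(Q[mv] * accept[mv] for mv in Q)

(a1, b1), (a2, b2) = det_errors(A1), det_errors(A2)
r = min(theta, 1 - theta)
```

Since the factor 1/r(θ) is constant in n, it washes out of the exponent, which is the content of the time-sharing argument.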

Appendix B. Proof of Lemma 2
Let P̃^{(C_n,0)}_{S^n U^n V^n M Ŝ^n} = P_{S^n U^n V^n M} ∏_{i=1}^n P̃_{Ŝ_i|M,V^n,S^{i−1}} and P̃^{(C_n,1)}_{S^n U^n V^n M Ŝ^n} = Q_{S^n U^n V^n M} ∏_{i=1}^n P̃_{Ŝ_i|M,V^n,S^{i−1}} denote the joint distributions of the r.v.'s (S^n, U^n, V^n, M, Ŝ^n) under hypotheses H_0 and H_1, respectively, where P̃_{Ŝ_i|M,V^n,S^{i−1}} denotes g^{(r)}_{i,n} for i ∈ [n]. Then, the desired chain of inequalities follows, which completes the proof.

Appendix D. Proof of Theorems 1 and 2
We describe the encoding and decoding operations, which are the same for both Theorems 1 and 2. Fix (small) positive numbers η, δ > 0, and let δ' := δ/2, δ̃ := |U|δ, δ̂ := 2δ and δ̄ := δ/|V|. Codebook Generation: Fix a finite alphabet W and a conditional distribution P_W|U. Let B_n = {W^n(j), j ∈ [M_n]}, M_n := e^{n(I_P(U;W)+η)}, denote a random codebook such that each W^n(j) is randomly and independently generated according to the distribution ∏_{i=1}^n P_W(w_i), where P_W is the marginal of P_U P_W|U. Denote a realization of B_n by B_n and the support of B_n by B_n. Encoding: For a given codebook B_n, let E_u(j|u^n) denote the stochastic encoding distribution, and let t denote the index of the joint type of (u^n, w^n(j)) in the set of types P_n(U × W).
Otherwise, the observer outputs the error message M = 0. Please note that |M| ≤ e^{nR}, since the total number of types in P_n(U × W) is upper bounded by (n + 1)^{|U||W|} ([60], Lemma 2.2). Let C_n := (B_n, f_B), and let C_n = (B_n, f_B) and µ_n(·) denote its realization and probability distribution, respectively. Decoding: For a given C_n, if M = 0, Ĥ = 1 is declared. Else, given m = (t, f_B(j)) and V^n = v^n, the detector decodes for a codeword ŵ^n := w^n(ĵ) ∈ T^n_{[P_W]δ̂} in the codebook B_n that is consistent with m and has the minimal empirical conditional entropy H_e(w^n(ĵ)|v^n). Let g_n : M × V^n → Ĥ stand for the decision rule induced by the above operations.

System induced distributions and auxiliary distributions:
The system induced probability distribution when H = 0 is given bỹ P (C n ,0) (s n , u n , v n , j, w n , m,ĵ,ŵ n ) and Consider two auxiliary distributionΨ and Ψ given bỹ and Ψ (C n ,0) (s n , u n , v n , j, w n , m,ĵ,ŵ n ) LetP (C n ,1) andΨ (C n ,1) denote probability distributions under H = 1 defined by the R.H.S. of (A21)-(A23) with P SUV replaced by Q SUV , and let Ψ (C n ,1) denote the R.H.S. of (A24) with P VS|U replaced by Q VS|U . Please note that the encoder f (C n ) n is such that P (B n ) E u (j|u n ) = Ψ (C n ,0) (j|u n ) and hence, the only difference between the joint distribution Ψ (C n ,0) andΨ (C n ,0) is the marginal distribution of U n . By the soft-covering lemma [62,64], it follows that for some γ 1 > 0, Hence, from ( [43], Property 2(d)), it follows that Also, note that the only difference between the distributionsP (C n ,0) andΨ (C n ,0) is P it follows that Equations (A26) and (A28) together imply via ( [43], Property 2(c)) that Please note that for l ∈ {0, 1}, the joint distribution Ψ (C n ,l) satisfies Also, since I P (U; W) + η > 0, by the application of soft-covering lemma, for some γ 3 > 0. If Q U = P U , then it again follows from the soft-covering lemma that thereby implying that E µ n Ψ (C n ,1) −Ψ (C n ,1) ≤ e −γ 1 n .

Analysis of type I and type II error probabilities:
We analyze type I and type II error probabilities of the coding scheme mentioned above averaged over the random ensemble C n . Type I error probability: Please note that a type I error occurs only if one of the following events occur: H e W n (l)|V n ≤ H e W n (J)|V n .
Let E := E_TE ∪ E_SE ∪ E_ME ∪ E_DE. Then, the expected type I error probability over C_n can be upper bounded as in (A37). Please note that P̃^{(C_n,0)}(E_TE) tends to 0 asymptotically by the weak law of large numbers. From (A36), the probability of E_SE also tends to 0. Also, as in the proof of Theorem 2 in [13], (A38) holds. Thus, if R > I_P(U; W|V), it follows by choosing η = O(δ) that for δ > 0 small enough, the R.H.S. of (A38) tends to zero asymptotically. By the union bound on probability, the R.H.S. of (A37) tends to zero.

Type II error probability:
Let δ = |W |δ. Please note that a type II error occurs only if V n ∈ T n [P V ] δ and M = 0, i.e., U n ∈ T n [P U ] δ and T ∈ T n [P UW ] δ . Hence, we can restrict the type II error analysis to only such (U n , V n ). Denoting the event that a type II error occurs by D 0 , we have The last term in (A39) can be upper bounded as follows: where (A41) follows since the term in (A40) is independent of the indices (j,m) due to the symmetry of the codebook generation, encoding and decoding procedure. The first term in (A42) can be upper bounded as PP (Cn,1) (W n (1) = w n |U n = u n , V n = v n , J = 1, f B (J) = 1, E NE ) To obtain (A43), we used the fact that P (B n ) E u (1|u n ) in (A20) is invariant to the joint type PŨW of (U n , W n (1)) = (u n , w n ) (keeping all the other codewords fixed). This in turn implies that given E NE , each sequence in the conditional type class T PW |Ũ (u n ) is equally likely (in the randomness induced by B n and stochastic encoding in (A20)) and its probability is upper bounded by 1 T PW |Ũ . Defining the events and F 2 : the last term in (A42) can be written as PP (Cn,1) (D 0 |F ) =PP (Cn,1) (E c BE |F )PP (Cn,1) (D 0 |F 1 ) +PP (Cn,1) (E BE |F )PP (Cn,1) (D 0 |F 2 ).
The analysis of the terms in (A48) is essentially similar to that given in the proof of Theorem 2 in [13], except for a subtle difference that we mention next. To bound the binning error event E BE , we require an upper bound similar tō that is used in the proof of Theorem 2 in [13]. Please note that the stochastic encoding scheme considered here is different from the encoding scheme in [13]. In place (A49), we will show that for l = 1, which suffices for the proof. Please note that PP (Cn,1) (W n (l) =w n |F ) =PP (Cn,1) (W n (l) =w n |U n = u n , V n = v n )PP (Cn,1) (W n (1) = w n |W n (l) =w n , U n = u n , V n = v n ) PP (Cn,1) (W n (1) = w n |U n = u n , V n = v n ) Since the codewords are generated independently of each other and the binning operation is done independent of the codebook generation, we havē PP (Cn,1) (W n (1) = w n |W n (l) =w n , U n = u n , V n = v n ) =PP (Cn,1) (W n (1) = w n |U n = u n , V n = v n ), (A53) andPP (Cn,1) ( f B (J) = 1|J = 1, W n (1) = w n , W n (l) =w n , U n = u n , V n = v n ) =PP (Cn,1) ( f B (J) = 1|J = 1, W n (1) = w n , U n = u n , V n = v n ). (A54) Also, note that Next, consider the term in (A51). Let F := {W n (1) = w n , U n = u n , V n = v n }, F := {W n (1) = w n , W n (l) =w n , U n = u n , V n = v n }.
Then, the numerator and denominator of (A51) can be written as and respectively. The R.H.S. of (A56) (resp. (A57)) denote the average probability that J = 1 is chosen by P given W n (1) = w n , U n = u n and M n − 2 (resp. M n − 1) other independent codewords in B n . Let Hence, denoting byμ n the probability measure induced by µ n , we havē ≤ E µ n ∏ n i=1 P U|W (u i |w i ) ∏ n i=1 P U|W (u i |w i )+∑ j =1,l ∏ n i=1 P U|W (u i |W i (j)) 1 2 E µ n ∏ n i=1 P U|W (u i |w i ) ∏ n i=1 P U|W (u i |w i )+∑ j =1,l ∏ n i=1 P U|W (u i |W i (j)) − e −e n(I P (U;W)+η ) (A62) where (A59) is due to (A58); (A61) is since the term within E µ n |E l [·] in (A60) is upper bounded by one; (A62) is sinceμ n (E l ) ≤ e −e n(I P (U;W)+η ) for some η > 0 which follows similar to ([68], Section 3.6.3), and (A63) follows since the term within the expectation which is exponential in order dominates the double exponential term. From (A52)-(A55), (A63) and (A50) follows. The analysis of the other terms in (A48) is the same as in the SHA scheme in [7], and results in the error exponent (within an additive O(δ) term) claimed in the Theorem. We refer the reader to ( [13], Theorem 2) for a detailed proof (In [13], the communication channel between the observer and the detector is a DMC. However, since the coding scheme used in the achievability part of Theorem 2 in [13] is a separation-based scheme, the error exponent when the channel is noiseless can be recovered by setting E 3 (·) and E 4 (·) in Theorem 2 to ∞). By the random coding argument followed by the standard expurgation technique [72] (see ([13], Proof of Theorem 2)), there exists a deterministic codebook and binning function pair C n = (B n , f B ) such that the type I and type II error probabilities are within a constant multiplicative factor of their average values over the random ensemble C n , and where γ 4 and γ 5 are some positive numbers. 
Since the average type I error probability for our scheme tends to zero asymptotically, and the error exponent is unaffected by a constant multiplicative scaling of the type II error probability, this codebook achieves the same type I error probability and error exponent as the average over the random ensemble. Using this deterministic codebook for encoding and decoding, we first lower bound the equivocation and average distortion of S n at the detector as follows: First consider the equivocation of S n under the null hypothesis. Here, (A68) follows from (A27); (A69) follows since M is a function of w n (J) for a deterministic codebook; (A71) follows from (A65) and Lemma 3; (A72) follows from (A24); and (A75) follows from (A67) and Ψ (0) S i V i |w i = P (0) SV|W , i ∈ [n]. If Q U = P U , it follows similarly to above that HP (Cn,1) (S n |M, V n ) ≥ 1 − e −nΩ(δ) H Ψ (Cn,1) (S n |w n (J), V n ) − 2e −γ 4 n log |S| n |V | n e −γ 4 n (A76) Finally, consider the case H = 1 and Q U = P U . We have for δ small enough that PP (Cn,1) (M = 0) = PP (Cn,1) U n / ∈ T n Hence, for δ small enough, we can write HP (Cn,1) (S n |M, V n ) ≥ HP (Cn,1) (S n |M, V n , Π(U n , δ , P U )) ≥ 1 − e −n(D(P U ||Q U )−O(δ )) HP (Cn,1) (S n |M, V n , Π(U n , δ , P U ) = 1) (A82) = 1 − e −n(D(P U ||Q U )−O(δ )) HP (Cn,1) (S n |V n , Π(U n , δ , P U ) = 1) (A83) Here, (A82) follows from (A81); (A83) follows since Π(U n , δ , P U ) = 1 implies M = 0; (A84) follows from Lemma 3 and (15). Thus, since δ > 0 is arbitrary, we have shown that for ∈ (0, 1), (R, κ, Λ 0 , Λ 1 ) ∈ R e ( ) if (18)- (21) holds.
Thus, by the Fenchel-Eggleston-Carathéodory theorem [68], it is sufficient to have at most |U| − 1 points in the support of W to preserve P_U, and three more to preserve H_P(U|W, Z), H_P(Y|W, Z) and H_P(S|W, Z, Y). Noting that H_P(Y|Z) and H_P(U|Z) are automatically preserved since P_U is preserved (and (Y, Z, S) − U − W holds), |W| = |U| + 2 points are sufficient to preserve the R.H.S. of Equations (28)-(30). This completes the proof for the case of R_e. Similarly, considering the |U| + 1 functions of P_W|U given in (A97)-(A99) and E_P[d(S, φ(W, Y, Z))] = ∑_w P_W(w) g_4(w, P_W|U), where g_4(w, P_W|U) = ∑_{s,u,y,z} P_{U|W}(u|w) P_{YZS|U}(y, z, s|u) d(s, φ(w, y, z)), a similar result holds for the case of R_d.