Article

Privacy-Aware Distributed Hypothesis Testing †

1 Department of Electrical and Computer Engineering, Cornell University, Ithaca, NY 14850, USA
2 The School of Electrical and Computer Engineering, Ben-Gurion University of the Negev, Beer-Sheva 8410501, Israel
3 Department of Electrical and Electronic Engineering, Imperial College London, London SW7 2AZ, UK
* Author to whom correspondence should be addressed.
† This paper is an extended version of our paper published in the Proceedings of the IEEE Information Theory Workshop (ITW), Guangzhou, China, 2018.
Entropy 2020, 22(6), 665; https://doi.org/10.3390/e22060665
Submission received: 1 May 2020 / Revised: 11 June 2020 / Accepted: 12 June 2020 / Published: 16 June 2020
(This article belongs to the Special Issue Information Theory, Forecasting, and Hypothesis Testing)

Abstract:
A distributed binary hypothesis testing (HT) problem involving two parties, a remote observer and a detector, is studied. The remote observer has access to a discrete memoryless source, and communicates its observations to the detector via a rate-limited noiseless channel. The detector observes another discrete memoryless source, and performs a binary hypothesis test on the joint distribution of its own observations with those of the observer. While the goal of the observer is to maximize the type II error exponent of the test for a given type I error probability constraint, it also wants to keep a private part of its observations as concealed from the detector as possible. Considering both equivocation and average distortion under a causal disclosure assumption as possible measures of privacy, the trade-off between the communication rate from the observer to the detector, the type II error exponent, and privacy is studied. For the general HT problem, we establish single-letter inner bounds on both the rate-error exponent-equivocation and rate-error exponent-distortion trade-offs. Subsequently, single-letter characterizations for both trade-offs are obtained (i) for testing against conditional independence of the observer's observations from those of the detector, given some additional side information at the detector; and (ii) when the communication rate constraint over the channel is zero. Finally, we provide a counter-example showing that the strong converse, which holds for distributed HT without a privacy constraint, does not hold when a privacy constraint is imposed. This implies that, in general, the rate-error exponent-equivocation and rate-error exponent-distortion trade-offs are not independent of the type I error probability constraint.

1. Introduction

Data inference and privacy are often contradictory objectives. In many multi-agent systems, each agent/user reveals information about its data to a remote service, application or authority, which, in turn, provides a certain utility to the users based on their data. Many emerging networked systems can be thought of in this context, from social networks to smart grids and communication networks. While obtaining the promised utility is the main goal of the users, the privacy of the shared data is becoming increasingly important. Thus, it is critical that the users ensure a desired level of privacy for the sensitive information revealed, while maximizing the utility subject to this constraint.
In many distributed learning or distributed decision-making applications, the goal is typically to learn the joint probability distribution of data available at different locations. In some cases, there may be prior knowledge about the joint distribution, for example, that it belongs to a certain set of known probability distributions. In such a scenario, the nodes communicate their observations to the detector, which then performs hypothesis testing (HT) on the underlying joint distribution of the data based on its own observations and those received from the other nodes. However, with the efficient data mining and machine learning algorithms available today, the detector can illegitimately infer some unintended private information from the data provided to it exclusively for HT purposes. Such threats are becoming increasingly imminent as large amounts of seemingly irrelevant yet sensitive data are collected from users, such as in medical research [1], social networks [2], online shopping [3] and smart grids [4]. Therefore, there is an inherent trade-off between the utility acquired by sharing data and the associated privacy leakage.
There are several practical scenarios where the above-mentioned trade-off arises. For example, consider the issue of consumer privacy in the context of online shopping. A consumer would like to share some information about his/her shopping behavior, e.g., shopping history and preferences, with the shopping portal to get better deals and recommendations on relevant products. The shopping portal would like to determine whether the consumer belongs to its target age group (e.g., below 30 years old) before sending special offers to this customer. Assuming that the shopping patterns of users within and outside the target age group are independent of each other, the shopping portal performs a hypothesis test to check whether the consumer's shared data is correlated with the data of its own customers. If the consumer is indeed within the target age group, the shopping portal would like to gather more information about this potential customer, e.g., particular interests, a more accurate age estimate, etc.; the consumer, on the other hand, is reluctant to provide any further information. Yet another relevant example is the issue of user privacy in the context of wearable Internet of Things (IoT) devices, such as smart watches and fitness trackers, which collect information on routine daily activities and often have a third-party cloud interface.
In this paper, we study distributed HT (DHT) with a privacy constraint, in which an observer communicates its observations to a detector over a noiseless rate-limited channel of rate $R$ nats per observed sample. Using the data received from the observer, the detector performs a binary HT on the joint distribution of its own observations and those of the observer. The performance of the HT is measured by the asymptotic exponential rate of decay of the type II error probability, known as the type II error exponent (or simply the error exponent henceforth), for a given constraint on the type I error probability (precise definitions are given below). While the goal is to maximize the performance of the HT, the observer also wants to maintain a certain level of privacy against the detector for some latent private data that is correlated with its observations. We are interested in characterizing the trade-off between the communication rate from the observer to the detector over the channel, the error exponent achieved by the HT, and the amount of information leakage of the private data. A special case of HT known as testing against conditional independence (TACI) will be of particular interest. In TACI, the detector tests whether its own observations are independent of those at the observer, conditioned on additional side information available at the detector.

1.1. Background

Distributed HT without any privacy constraint has been studied extensively from an information-theoretic perspective in the past, although many open problems remain. The fundamental results for this problem were first established in [5], which includes a single-letter lower bound on the optimal error exponent and a strong converse result stating that the optimal error exponent is independent of the constraint on the type I error probability. An exact single-letter characterization of the optimal error exponent for the testing against independence (TAI) problem, i.e., TACI with no side information at the detector, is also obtained there. The lower bound established in [5] is further improved in [6,7]. The strong converse is studied in the context of complete data compression and zero-rate compression in [6,8], respectively, where in the former the observer communicates with the detector using a message set of size two, while in the latter it uses a message set whose size grows sub-exponentially with the number of observed samples. The TAI problem with multiple observers remains open (similar to several other distributed compression problems involving a non-trivial fidelity criterion); however, the optimal error exponent is obtained in [9] when the sources observed at the different observers follow a certain Markov relation. The scenario in which, in addition to HT, the detector is also interested in obtaining a reconstruction of the observer's source is studied in [10]. The authors characterize the trade-off between the achievable error exponent and the average distortion between the observer's observations and the detector's reconstruction. TACI is first studied in [11], where the optimality of a random binning-based encoding scheme is shown. The optimal error exponent for TACI over a noisy communication channel is established in [12]. An extension of this work to general HT over a noisy channel is considered in [13], where lower bounds on the optimal error exponent are obtained using a separation-based scheme as well as hybrid coding for the communication between the observer and the detector. TACI with a single observer and multiple detectors is studied in [14], where each detector tests for the conditional independence of its own observations from those of the observer. The general HT version of this problem over a noisy broadcast channel, as well as DHT over a multiple access channel, is explored in [15]. While all the above works consider the asymmetric objective of maximizing the error exponent under a constraint on the type I error probability, the trade-off between the exponential rates of decay of both the type I and type II error probabilities is considered in [16,17,18].
Data privacy has been a hot topic of research in the past decade, spanning multiple disciplines in the computer and computational sciences. Several practical schemes have been proposed that deal with the protection or violation of data privacy in different contexts, e.g., see [19,20,21,22,23,24]. More relevant to our work, HT under mutual information and maximal leakage privacy constraints has been studied in [25,26], respectively, where the observer uses a memoryless privacy mechanism to convey a noisy version of its observed data to the detector. The detector performs HT on the probability distribution of the observer's data, and the optimal privacy mechanism that maximizes the error exponent while satisfying the privacy constraint is analyzed. Recently, a distributed version of this problem has been studied in [27], where the observer applies a privacy mechanism to its observed data prior to further coding for compression, and the goal at the detector is to perform a HT on the joint distribution of its own observations with those of the observer. In contrast to [25,26,27], we study DHT with a privacy constraint, but without a separate privacy mechanism at the observer. In Section 2, we further discuss the differences between the system model considered here and that of [27].
It is important to note here that the data privacy problem is fundamentally different from that of data security against an eavesdropper or an adversary. In data security, sensitive data is to be protected against an external malicious agent distinct from the legitimate parties in the system. The techniques for guaranteeing data security usually involve either cryptographic methods in which the legitimate parties are assumed to have additional resources unavailable to the adversary (e.g., a shared private key) or the availability of better communication channel conditions (e.g., using wiretap codes). However, in data privacy problems, the sensitive data is to be protected from the same legitimate party that receives the messages and provides the utility; and hence, the above-mentioned techniques for guaranteeing data security are not applicable. Another model frequently used in the context of information-theoretic security assumes the availability of different side information at the legitimate receiver and the eavesdropper [28,29]. A DHT problem with security constraints formulated along these lines is studied in [30], where the authors propose an inner bound on the rate-error exponent-equivocation trade-off. While our model is related to that in [30] when the side information at the detector and eavesdropper coincide, there are some important differences which will be highlighted in Section 2.3.
Many different privacy measures have been considered in the literature to quantify the amount of private information leakage, such as k-anonymity [31], differential privacy (DP) [32], mutual information leakage [33,34,35], maximal leakage [36], and total variation distance [37], to name a few; see [38] for a detailed survey. Among these, the mutual information between the private and revealed information (or, equivalently, the equivocation of the private information given the revealed information) is perhaps the most commonly used measure in information-theoretic studies of privacy. It is well known that zero mutual information between two random variables is a necessary and sufficient condition for their statistical independence. Furthermore, the average information leakage measured using an arbitrary privacy measure is upper bounded by a constant multiplicative factor of that measured by mutual information [34]. It is also shown in [33] that a differentially private scheme is not necessarily private when the information leakage is measured by mutual information; this is done by constructing an example that is differentially private, yet whose mutual information leakage is arbitrarily high. Mutual information-based measures have also been used in cryptographic security studies. For example, the notion of semantic security defined in [39] is shown to be equivalent to a mutual information-based measure in [40].
A rate-distortion approach to privacy was first explored by Yamamoto in [41] for a rate-constrained noiseless channel, where, in addition to a distortion constraint for the legitimate data, a minimum distortion requirement is enforced for the private part. Recently, several works have used distortion as a security or privacy metric in different contexts, such as side-information privacy in discriminatory lossy source coding [42] and the rate-distortion theory of secrecy systems [43,44]. More specifically, in [43], the distortion-based security measure is analyzed under a causal disclosure assumption, in which the data samples to be protected are causally revealed to the eavesdropper (excluding the current sample), yet the average distortion over the entire block has to satisfy a desired lower bound. This assumption makes distortion a more robust secrecy measure (see ([43], Section I-A)), and could in practice model scenarios in which the sensitive data to be protected is eventually available to the eavesdropper with some delay, but the protection of the current data sample is important. In this paper, we consider both equivocation and average distortion under a causal disclosure assumption as measures of privacy. In [45], the error exponent of an HT adversary is considered as a privacy measure. This can be viewed as the opposite of our setting: while the goal here is to increase the error exponent under a privacy leakage constraint, the goal in [45] is to reduce the error exponent under a constraint on the possible transformations that can be applied to the data.
It is instructive to compare the privacy measures considered in this paper with DP. Towards this, note that average distortion and equivocation (see Definitions 1 and 2) are "average case" privacy measures, while DP is a "worst case" measure that focuses on the statistical indistinguishability of neighboring datasets that differ in just one entry. Considering this aspect, it may appear that these privacy measures are unrelated. However, as shown in [46], there is an interesting connection between them. More specifically, the maximum conditional mutual information leakage between the revealed data $Y$ and an entry $X_i$ of the dataset given all the other $n-1$ entries $X_{-i} = X^n \setminus \{X_i\}$, i.e., $\max I(Y; X_i | X_{-i})$, where the maximization is over all distributions $P_{X^n}$ on $\mathcal{X}^n$ and all entries $i \in [1:n]$, is sandwiched between the so-called $\epsilon$-DP and $(\epsilon, \delta)$-DP in terms of the strength of the privacy measure ([46], Theorem 1). This implies that, as a privacy measure, equivocation (equivalently, mutual information leakage) is weaker than $\epsilon$-DP, and stronger than $(\epsilon, \delta)$-DP, at least for some probability distributions on the data. On the other hand, equivocation and average distortion are relatively well-behaved privacy measures compared to DP, and often result in clean and exactly computable characterizations of the optimal trade-off for the problem at hand. Moreover, as already shown in [39,40,47,48], the trade-off resulting from "average" constraints turns out to be the same as that under stricter constraints in many interesting cases. Hence, it is of interest to consider such average case privacy measures as a starting point for further investigation with stricter measures.
DP has been used extensively in privacy studies, including those involving learning and HT [49,50,51,52,53,54,55,56,57,58,59]. More relevant to the distributed HT problem at hand is the locally differentially private model employed in [49,50,51,56], in which, depending on the privacy requirement, a certain amount of random noise is injected into the user's data before further processing, while the utility is maximized subject to this constraint. Nevertheless, there are key differences between these models and ours. For example, in [49], the goal is to learn, from differentially private "examples", the underlying "concept" (the model that maps examples to "labels") such that the error probability in predicting the label of future examples is minimized, irrespective of the statistics of the examples. Hence, the utility in [49] is to learn an unknown model accurately, whereas our objective is to test between two known probability distributions. Furthermore, in our setting (unlike [49,50,51,56]), there is an additional requirement to satisfy in terms of the communication rate. These differences perhaps also make DP less suitable as a privacy measure in our model relative to equivocation and average distortion. On the one hand, imposing a DP measure in our setting may be overly restrictive, since there are only two probability distributions involved, and DP is tailored for situations where the statistics of the underlying data are unknown. On the other hand, DP is also more unwieldy to analyze under a rate constraint compared to mutual information or average distortion.
The amount of private information leakage that can be tolerated depends on the specific application at hand. While it may be possible to tolerate a moderate amount of information leakage in applications like online shopping or social networks, it may no longer be the case in matters related to information sharing among government agencies or corporations. While it is obvious that maximum privacy can be attained by revealing no information, this typically comes at the cost of zero utility. On the other hand, maximum utility can be achieved by revealing all the information, but at the cost of minimum privacy. Characterizing the optimal trade-off between the utility and the minimum privacy leakage between these two extremes is a fundamental and challenging research problem.

1.2. Main Contributions

The main contributions of this work are as follows.
  • In Section 3, Theorem 1 (resp. Theorem 2), we establish a single-letter inner bound on the rate-error exponent-equivocation (resp. rate-error exponent-distortion) trade-off for DHT with a privacy constraint. The distortion and equivocation privacy constraints we consider, which are given in (6) and (7), respectively, are slightly stronger than those usually considered in the literature (stated in (8) and (9), respectively).
  • Exact characterizations are obtained for some important special cases in Section 4. More specifically, a single-letter characterization of the optimal rate-error exponent-equivocation (resp. rate-error exponent-distortion) trade-off is established for:
    (a)
    TACI with a privacy constraint (for vanishing type I error probability constraint) in Section 4.1, Proposition 1 (resp. Proposition 2),
    (b)
    DHT with a privacy constraint for zero-rate compression in Section 4.2, Proposition 4 (resp. Proposition 3).
    Since the optimal trade-offs in Propositions 3 and 4 are independent of the constraint on the type I error probability, they are strong converse results in the context of HT.
  • Finally, in Section 5, we provide a counter-example showing that for a positive rate $R > 0$, the strong converse result does not hold in general for TAI with a privacy constraint.

1.3. Organization

The organization of the paper is as follows. Basic notations are introduced in Section 2.1. The problem formulation and associated definitions are given in Section 2.2. The main results are presented in Sections 3–5. The proofs of the results are presented either in the appendices or immediately after the statement of the result. Finally, Section 6 concludes the paper with some open problems for future research.

2. Preliminaries

2.1. Notations

$\mathbb{N}$, $\mathbb{R}$ and $\mathbb{R}_{\geq 0}$ stand for the sets of natural numbers, real numbers and non-negative real numbers, respectively. For $a \in \mathbb{R}_{\geq 0}$, $[a] := \{ i \in \mathbb{N} : i \leq a \}$, and for $a \in \mathbb{R}$, $(a)^+ := \max\{0, a\}$ ($:=$ denotes equality by definition). Calligraphic letters, e.g., $\mathcal{A}$, denote sets, while $|\mathcal{A}|$ and $\mathcal{A}^c$ denote the cardinality and complement of $\mathcal{A}$, respectively. $\mathbb{1}(\cdot)$ denotes the indicator function, while $O(\cdot)$, $o(\cdot)$ and $\Omega(\cdot)$ stand for the standard Big-O, Little-o and Big-$\Omega$ asymptotic notations, respectively. For a real sequence $\{a_n\}_{n \in \mathbb{N}}$ and $b \in \mathbb{R}$, $a_n \xrightarrow{(n)} b$ represents $\lim_{n \to \infty} a_n = b$. Similar notation applies for asymptotic inequalities, e.g., $a_n \overset{(n)}{\geq} b$ means that $\lim_{n \to \infty} a_n \geq b$. Throughout this paper, the base of the logarithm is taken to be $e$, and whenever the range of a summation is not specified, the summation is over the entire support, e.g., $\sum_u$ denotes $\sum_{u \in \mathcal{U}}$.
All the random variables (r.v.'s) considered in this paper are discrete with finite support unless specified otherwise. We denote r.v.'s, their realizations and supports by upper case, lower case and calligraphic letters (e.g., $X$, $x$ and $\mathcal{X}$), respectively. The joint probability distribution of r.v.'s $X$ and $Y$ is denoted by $P_{XY}$, while their marginals are denoted by $P_X$ and $P_Y$. The sets of all probability distributions with support $\mathcal{X}$ and $\mathcal{X} \times \mathcal{Y}$ are represented by $\mathcal{P}(\mathcal{X})$ and $\mathcal{P}(\mathcal{X} \times \mathcal{Y})$, respectively. For $j, i \in \mathbb{N}$, $j \geq i$, the random vector $(X_i, \ldots, X_j)$ is denoted by $X_i^j$, while $X^j$ stands for $(X_1, \ldots, X_j)$. Similar notation holds for vectors of realizations. $X - Y - Z$ denotes a Markov chain relation between the r.v.'s $X$, $Y$ and $Z$. $\mathbb{P}_P(\mathcal{E})$ denotes the probability of event $\mathcal{E}$ with respect to the probability measure induced by distribution $P$, and $\mathbb{E}_P[\cdot]$ denotes the corresponding expectation. The subscript $P$ is omitted when the distribution involved is clear from the context. For two probability distributions $P$ and $Q$ defined on a common support, $P \ll Q$ denotes that $P$ is absolutely continuous with respect to $Q$.
Following the notation in [60], for $P_X \in \mathcal{P}(\mathcal{X})$ and $\delta > 0$, the $P_X$-typical set is
$$T_{[P_X]_\delta}^n := \left\{ x^n \in \mathcal{X}^n : \left| P_X(x) - \frac{1}{n} \sum_{i=1}^n \mathbb{1}(x_i = x) \right| \leq \delta, \ \forall \, x \in \mathcal{X} \right\},$$
and the $P_X$-type class (the set of sequences of type, i.e., empirical distribution, $P_X$) is $T_{P_X}^n := T_{[P_X]_0}^n$. The set of all possible types of sequences of length $n$ over the alphabet $\mathcal{X}$ and the set of types in $T_{[P_X]_\delta}^n$ are denoted by $\mathcal{P}_n(\mathcal{X})$ and $\mathcal{P}_n\big(T_{[P_X]_\delta}^n\big)$, respectively. Similar notations apply for pairs and larger combinations of r.v.'s, e.g., $T_{[P_{XY}]_\delta}^n$, $T_{P_{XY}}^n$, $\mathcal{P}_n(\mathcal{X} \times \mathcal{Y})$ and $\mathcal{P}_n\big(T_{[P_{XY}]_\delta}^n\big)$. The conditional $P_{Y|X}$-type class of a sequence $x^n \in \mathcal{X}^n$ is
$$T_{P_{Y|X}}^n(x^n) := \big\{ y^n : (x^n, y^n) \in T_{P_{XY}}^n \big\}.$$
The standard information-theoretic quantities, namely the Kullback–Leibler (KL) divergence between distributions $P_X$ and $Q_X$, the entropy of $X$ with distribution $P_X$, the conditional entropy of $X$ given $Y$, and the mutual information between $X$ and $Y$ with joint distribution $P_{XY}$, are denoted by $D(P_X||Q_X)$, $H_{P_X}(X)$, $H_{P_{XY}}(X|Y)$ and $I_{P_{XY}}(X;Y)$, respectively. When the distributions of the r.v.'s involved are clear from the context, the last three quantities are denoted simply by $H(X)$, $H(X|Y)$ and $I(X;Y)$, respectively. Given realizations $X^n = x^n$ and $Y^n = y^n$, $H_e(y^n|x^n)$ denotes the conditional empirical entropy given by
$$H_e(y^n|x^n) := H_{P_{\tilde{X}\tilde{Y}}}(\tilde{Y}|\tilde{X}),$$
where $P_{\tilde{X}\tilde{Y}}$ denotes the joint type of $(x^n, y^n)$. Finally, the total variation between probability distributions $P_X$ and $Q_X$ defined on the same support $\mathcal{X}$ is
$$||P_X - Q_X|| := \frac{1}{2} \sum_{x \in \mathcal{X}} |P_X(x) - Q_X(x)|.$$
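To keep these definitions concrete, the following minimal Python/numpy sketch implements the empirical type, the $\delta$-typicality test, the total variation distance, and the entropy used throughout; the helper names are ours, introduced purely for illustration:

```python
import numpy as np

def empirical_type(xn, alphabet_size):
    """Empirical distribution (type) of a sequence over {0, ..., alphabet_size - 1}."""
    return np.bincount(xn, minlength=alphabet_size) / len(xn)

def is_delta_typical(xn, P_X, delta):
    """Membership test for the P_X-typical set T^n_{[P_X]_delta}."""
    return bool(np.all(np.abs(empirical_type(xn, len(P_X)) - P_X) <= delta))

def total_variation(P, Q):
    """Total variation distance ||P - Q|| = (1/2) sum_x |P(x) - Q(x)|."""
    return 0.5 * float(np.sum(np.abs(P - Q)))

def entropy_nats(P):
    """Shannon entropy in nats (the paper takes logarithms to base e)."""
    P = P[P > 0]
    return float(-np.sum(P * np.log(P)))

# Quick check on a Bernoulli(0.3) sample
rng = np.random.default_rng(0)
xn = rng.binomial(1, 0.3, size=1000)
print(empirical_type(xn, 2), is_delta_typical(xn, np.array([0.7, 0.3]), 0.05))
```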

2.2. Problem Formulation

Consider the HT setup illustrated in Figure 1, where $(U^n, V^n, S^n)$ denotes $n$ independent and identically distributed (i.i.d.) copies of the triplet of r.v.'s $(U, V, S)$. The observer observes $U^n$ and sends the message index $M$ to the detector over an error-free channel, where $M \sim f_n(\cdot|U^n)$ and $f_n: \mathcal{U}^n \rightarrow \mathcal{P}(\mathcal{M})$, $\mathcal{M} = [e^{nR}]$. Given its own observation $V^n$, the detector performs a HT on the joint distribution of $U^n$ and $V^n$ with null hypothesis
$$H_0: (U^n, V^n) \sim \prod_{i=1}^n P_{UV},$$
and alternate hypothesis
$$H_1: (U^n, V^n) \sim \prod_{i=1}^n Q_{UV}.$$
Let $H$ and $\hat{H}$ denote the r.v.'s corresponding to the true hypothesis and the output of the HT, respectively, with support $\mathcal{H} = \hat{\mathcal{H}} = \{0, 1\}$, where 0 denotes the null hypothesis and 1 the alternate hypothesis. Let $g_n: \mathcal{M} \times \mathcal{V}^n \rightarrow \mathcal{P}(\hat{\mathcal{H}})$ denote the decision rule at the detector, which outputs $\hat{H} \sim g_n(\cdot|M, V^n)$. Then, the type I and type II error probabilities achieved by a pair $(f_n, g_n)$ are given by
$$\alpha_n(f_n, g_n) := P(\hat{H} = 1 \,|\, H = 0) = P_{\hat{H}}(1),$$
and
$$\beta_n(f_n, g_n) := P(\hat{H} = 0 \,|\, H = 1) = Q_{\hat{H}}(0),$$
respectively, where
$$P_{\hat{H}}(1) = \sum_{u^n, m, v^n} \prod_{i=1}^n P_{UV}(u_i, v_i) \, f_n(m|u^n) \, g_n(1|m, v^n),$$
and
$$Q_{\hat{H}}(0) = \sum_{u^n, m, v^n} \prod_{i=1}^n Q_{UV}(u_i, v_i) \, f_n(m|u^n) \, g_n(0|m, v^n).$$
Let $P_{U^nV^nS^nM\hat{H}}$ and $Q_{U^nV^nS^nM\hat{H}}$ denote the joint distributions of $(U^n, V^n, S^n, M, \hat{H})$ under the null and alternate hypotheses, respectively. For a given type I error probability constraint $\epsilon$, define the minimum type II error probability over all possible detectors as
$$\bar{\beta}_n(f_n, \epsilon) := \inf_{g_n} \beta_n(f_n, g_n), \ \text{ such that } \ \alpha_n(f_n, g_n) \leq \epsilon.$$
The performance of the HT is measured by the error exponent achieved by the test for a given constraint $\epsilon$ on the type I error probability, i.e., $\liminf_{n \to \infty} \frac{-\log \bar{\beta}_n(f_n, \epsilon)}{n}$. While the goal of the detector is to maximize the error exponent achieved by the HT, it is also curious about the latent r.v. $S^n$ that is correlated with $U^n$. $S^n$ is referred to as the private part of $U^n$, which is distributed i.i.d. according to the joint distribution $P_{SUV}$ and $Q_{SUV}$ under the null and alternate hypotheses, respectively. It is desired to keep the private part as concealed as possible from the detector. We consider two measures of privacy for $S^n$ at the detector. The first is the equivocation, defined as $H(S^n|M, V^n)$. The second is the average distortion between $S^n$ and its reconstruction $\hat{S}^n$ at the detector, measured according to an arbitrary bounded additive distortion metric $d: \mathcal{S} \times \hat{\mathcal{S}} \rightarrow [0, D_m]$, with the multi-letter distortion defined as
$$d(s^n, \hat{s}^n) := \sum_{i=1}^n d(s_i, \hat{s}_i).$$
We assume causal disclosure, i.e., $\hat{S}_i$ is a function of $S^{i-1}$ in addition to $(M, V^n)$. The goal is to maximize the error exponent of the HT, while satisfying the constraint $\epsilon$ on the type I error probability and the privacy constraint on $S^n$. In the sequel, we study the trade-off between the rate, the error exponent, and the privacy achieved in the above setting. Before delving into that, a few definitions are in order.
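First, though, a toy numerical illustration of the setup may help. The following Monte Carlo sketch estimates the two error probabilities for a naive strategy of our own devising (not the coding schemes analyzed in this paper): the observer forwards the first $k$ samples of $U^n$ uncoded, and the detector thresholds the empirical crossover rate; all distributions and parameters are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
n, k, trials = 200, 40, 2000   # block length, forwarded samples, Monte Carlo runs
p = 0.1                        # H0: V = U xor Ber(p); H1: V independent of U

def sample(hyp):
    """Draw one i.i.d. block (U^n, V^n) under hypothesis hyp."""
    U = rng.integers(0, 2, n)
    V = U ^ rng.binomial(1, p, n) if hyp == 0 else rng.integers(0, 2, n)
    return U, V

def detect(U, V):
    """Toy detector: the 'message' is M = U[:k]; declare H0 iff the empirical
    crossover rate between M and V[:k] is closer to p than to 1/2."""
    crossover = np.mean(U[:k] ^ V[:k])
    return 0 if crossover < (p + 0.5) / 2 else 1

alpha = np.mean([detect(*sample(0)) == 1 for _ in range(trials)])
beta = np.mean([detect(*sample(1)) == 0 for _ in range(trials)])
print(f"estimated type I error ~ {alpha:.3f}, type II error ~ {beta:.3f}")
```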
Definition 1.
For a given type I error probability constraint $\epsilon$, a rate-error exponent-distortion tuple $(R, \kappa, \Delta_0, \Delta_1)$ is achievable if there exists a sequence of encoding and decoding functions $f_n: \mathcal{U}^n \rightarrow \mathcal{P}(\mathcal{M})$ and $g_n: \mathcal{M} \times \mathcal{V}^n \rightarrow \mathcal{P}(\hat{\mathcal{H}})$ such that
$$\liminf_{n \to \infty} \frac{-\log \bar{\beta}_n(f_n, \epsilon)}{n} \geq \kappa, \quad (5)$$
and for any $\gamma > 0$, there exists an $n_0 \in \mathbb{N}$ such that
$$\inf_{\{g_{i,n}^{(r)}\}_{i=1}^n} \mathbb{E}\big[ d(S^n, \hat{S}^n) \,\big|\, H = j \big] \geq n\Delta_j - \gamma, \ \forall \, n \geq n_0, \ j = 0, 1, \quad (6)$$
where $\hat{S}_i \sim g_{i,n}^{(r)}(\cdot|M, V^n, S^{i-1})$, and $g_{i,n}^{(r)}: [e^{nR}] \times \mathcal{V}^n \times \mathcal{S}^{i-1} \rightarrow \mathcal{P}(\hat{\mathcal{S}})$ denotes an arbitrary stochastic reconstruction map at the detector. The rate-error exponent-distortion region $\mathcal{R}_d(\epsilon)$ is the closure of the set of all such achievable $(R, \kappa, \Delta_0, \Delta_1)$ tuples for a given $\epsilon$.
Definition 2.
For a given type I error probability constraint $\epsilon$, a rate-error exponent-equivocation tuple $(R, \kappa, \Lambda_0, \Lambda_1)$ is achievable (it is well known that equivocation as a privacy measure is a special case of average distortion under the causal disclosure assumption and the log-loss distortion metric [43]; however, we provide a separate definition of the rate-error exponent-equivocation region for completeness) if there exists a sequence of encoding and decoding functions $f_n: \mathcal{U}^n \rightarrow \mathcal{P}(\mathcal{M})$ and $g_n: [e^{nR}] \times \mathcal{V}^n \rightarrow \mathcal{P}(\hat{\mathcal{H}})$ such that (5) is satisfied, and for any $\gamma > 0$, there exists an $n_0 \in \mathbb{N}$ such that
$$H(S^n|M, V^n, H = i) \geq n\Lambda_i - \gamma, \ \forall \, n \geq n_0, \ i \in \{0, 1\}. \quad (7)$$
The rate-error exponent-equivocation region $\mathcal{R}_e(\epsilon)$ is the closure of the set of all such achievable $(R, \kappa, \Lambda_0, \Lambda_1)$ tuples for a given $\epsilon$.
Please note that the privacy measures considered in (6) and (7) are stronger than
$$\liminf_{n \to \infty} \inf_{\{g_{i,n}^{(r)}\}_{i=1}^n} \mathbb{E}\Big[ \tfrac{1}{n} d(S^n, \hat{S}^n) \,\Big|\, H = i \Big] \geq \Delta_i, \ i = 0, 1, \quad (8)$$
$$\text{and} \quad \liminf_{n \to \infty} \frac{1}{n} H(S^n|M, V^n, H = i) \geq \Lambda_i, \ i = 0, 1, \quad (9)$$
respectively. To see this for the equivocation privacy measure, note that if $H(S^n|M, V^n, H = i) = n\Lambda_i^* - n^a$, $i = 0, 1$, for some $a \in (0, 1)$, then the equivocation pair $(\Lambda_0^*, \Lambda_1^*)$ is achievable under the constraint given in (9), while it is not achievable under the constraint given in (7).

2.3. Relation to Previous Work

Before stating our results, we briefly highlight the differences between our system model and the ones studied in [27,30]. In [27], the observer applies a privacy mechanism to the data before releasing it to the transmitter, which performs further encoding prior to transmission to the detector. More specifically, the observer checks whether $U^n \in T_{[P_U]_\delta}^n$ and, if successful, sends the output of a memoryless privacy mechanism applied to $U^n$ to the transmitter; otherwise, it outputs an $n$-length all-zero sequence. The privacy mechanism plays the role of randomizing the data (or adding noise) to achieve the desired privacy. Such randomized privacy mechanisms are popular in privacy studies, and have been used in [25,26,61]. In our model, the tasks of coding for privacy and compression are performed jointly using all the available data samples $U^n$. Also, while we consider the equivocation (and average distortion) of the private part given the revealed information as the privacy measure, in [27], the privacy measure is the mutual information between the observer's observations and the output of the memoryless mechanism. As a result of these differences, there exist points in the rate-error exponent-privacy trade-off that are achievable in our model, but not in [27]. For instance, a perfect privacy condition $\Lambda_0 = 0$ for testing against independence in ([27], Theorem 2) would imply that the error exponent is also zero, since the output of the memoryless mechanism has to be independent of the observer's observations (under both hypotheses). However, as we later show in Example 2, a positive error exponent is achievable in our model while guaranteeing perfect privacy.
On the other hand, the difference between our model and [30] arises from the difference in the privacy constraint as well as the privacy measure. Specifically, the goal in [30] is to keep $U^n$ private from an illegitimate eavesdropper, while the objective here is to keep a r.v. $S^n$ that is correlated with $U^n$ private from the detector. Also, we consider the more general average distortion (under causal disclosure) as a privacy measure, in addition to the equivocation considered in [30]. Moreover, as already noted, the equivocation privacy constraint in (7) is more stringent than the constraint (9) considered in [30]. To satisfy the distortion requirement or the stronger equivocation privacy constraint in (7), we require that the a posteriori probability distribution of $S^n$ given the observations $(M, V^n)$ at the detector is close, in some sense, to a desired "target" memoryless distribution. To achieve this, we use a stochastic encoding scheme to induce the necessary randomness for $S^n$ at the detector, which, to the best of our knowledge, has not been considered previously in the context of DHT. Consequently, the analyses of the type I and type II error probabilities and of the privacy achieved are novel. Another subtle yet important difference is that the marginal distributions of $U^n$ and the side information at the eavesdropper are assumed to be the same under the null and alternate hypotheses in [30], which is not the case here. This necessitates a separate analysis of the privacy achieved under each of the two hypotheses.
Next, we state some supporting results that will be useful later for proving the main results.

2.4. Supporting Results

Let
$$g_{\mathcal{A}_n}^{(d)}(m, v^n) = \mathbb{1}\big( (m, v^n) \in \mathcal{A}_n^c \big) \quad (10)$$
denote a deterministic detector with acceptance region $\mathcal{A}_n \subseteq [e^{nR}] \times \mathcal{V}^n$ for $H_0$ and $\mathcal{A}_n^c$ for $H_1$. Then, the type I and type II error probabilities are given by
$$\alpha_n(f_n, g_n) := P_{MV^n}(\mathcal{A}_n^c) = \mathbb{E}_P\big[ \mathbb{1}\big( (M, V^n) \in \mathcal{A}_n^c \big) \big],$$
$$\beta_n(f_n, g_n) := Q_{MV^n}(\mathcal{A}_n) = \mathbb{E}_Q\big[ \mathbb{1}\big( (M, V^n) \in \mathcal{A}_n \big) \big].$$
Lemma 1.
Any achievable error exponent is also achievable by a deterministic detector of the form given in (10) for some $\mathcal{A}_n \subseteq [e^{nR}] \times \mathcal{V}^n$, where $\mathcal{A}_n$ and $\mathcal{A}_n^c$ denote the acceptance regions for $H_0$ and $H_1$, respectively.
The proof of Lemma 1 is given in Appendix A for completeness. Due to Lemma 1, henceforth we restrict our attention to a deterministic $g_n$ as given in (10).
The next result shows that, without loss of generality (w.l.o.g.), it is also sufficient to consider $g_{i,n}^{(r)}$ (in Definition 1) to be deterministic functions of the form
$$\big\{ g_{i,n}^{(r)} \big\}_{i=1}^n = \big\{ \bar{\phi}_{i,n}(\cdot, \cdot, \cdot) \big\}_{i=1}^n \quad (13)$$
for the minimization in (6), where $\bar{\phi}_{i,n}: \mathcal{M} \times \mathcal{V}^n \times \mathcal{S}^{i-1} \rightarrow \hat{\mathcal{S}}$, $i \in [n]$, denotes an arbitrary deterministic function.
Lemma 2.
The infimum in (6) is achieved by deterministic functions $g_{i,n}^{(r)}$ of the form given in (13); hence, it is sufficient to restrict attention to such deterministic $g_{i,n}^{(r)}$ in (6).
The proof of Lemma 2 is given in Appendix B. Next, we state some lemmas that will be handy for upper bounding the amount of privacy leakage in the proofs of the main results. The following is a well-known result from [60] that upper bounds the difference between the entropies of two r.v.'s (with a common support) in terms of the total variation distance between their probability distributions.
Lemma 3.
([60], Lemma 2.7) Let $P_X$ and $Q_X$ be distributions defined on a common support $\mathcal{X}$, and let $\rho := ||P_X - Q_X||$. Then, for $\rho \leq \frac{1}{4}$,
$$\big| H_{P_X}(X) - H_{Q_X}(X) \big| \leq -2\rho \log \frac{2\rho}{|\mathcal{X}|}.$$
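As a quick numerical sanity check of Lemma 3, the following sketch draws a distribution $P_X$, perturbs it slightly to obtain $Q_X$, and verifies the bound (it re-declares the entropy helper from the earlier sketch so as to be self-contained):

```python
import numpy as np

def entropy_nats(P):
    P = P[P > 0]
    return float(-np.sum(P * np.log(P)))

rng = np.random.default_rng(2)
X = 8                                      # alphabet size |X|
P = rng.dirichlet(np.ones(X))
Q = np.abs(P + rng.normal(0.0, 0.005, X))  # small perturbation of P
Q /= Q.sum()
rho = 0.5 * np.sum(np.abs(P - Q))          # total variation ||P - Q||
if 0 < rho <= 0.25:                        # Lemma 3 requires rho <= 1/4
    bound = -2 * rho * np.log(2 * rho / X)
    gap = abs(entropy_nats(P) - entropy_nats(Q))
    print(f"rho={rho:.4f}: |H(P)-H(Q)|={gap:.5f} <= {bound:.5f}? {gap <= bound}")
```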
The next lemma will be handy in proving Theorems 1 and 2, Proposition 3 and the counter-example for strong converse presented in Section 5.
Lemma 4.
Let $(X^n, Y^n)$ denote $n$ i.i.d. copies of r.v.'s $(X, Y)$, and let $P_{X^nY^n} = \prod_{i=1}^n P_{XY}$ and $Q_{X^nY^n} = \prod_{i=1}^n Q_{XY}$ denote two joint probability distributions on $(X^n, Y^n)$. For $\delta > 0$, define
$$\Pi(x^n, \delta, P_X) := \mathbb{1}\big( x^n \notin T_{[P_X]_\delta}^n \big).$$
If $P_X \neq Q_X$, then for $\delta > 0$ sufficiently small, there exist $\bar{\delta} > 0$ and $n_0(\delta, |\mathcal{X}|, |\mathcal{Y}|) \in \mathbb{N}$ such that for all $n \geq n_0(\delta, |\mathcal{X}|, |\mathcal{Y}|)$,
$$\big|\big| Q_{Y^n}(\cdot) - Q_{Y^n|\Pi(X^n, \delta, P_X)}(\cdot|1) \big|\big| \leq e^{-n\bar{\delta}}. \quad (15)$$
If $P_X = Q_X$, then for any $\delta > 0$, there exist $\bar{\delta} > 0$ and $n_0(\delta, |\mathcal{X}|, |\mathcal{Y}|) \in \mathbb{N}$ such that for all $n \geq n_0(\delta, |\mathcal{X}|, |\mathcal{Y}|)$,
$$\big|\big| Q_{Y^n}(\cdot) - Q_{Y^n|\Pi(X^n, \delta, P_X)}(\cdot|0) \big|\big| \leq e^{-n\bar{\delta}}. \quad (16)$$
Also, for any $\delta > 0$, there exist $\bar{\delta} > 0$ and $n_0(\delta, |\mathcal{X}|, |\mathcal{Y}|) \in \mathbb{N}$ such that for all $n \geq n_0(\delta, |\mathcal{X}|, |\mathcal{Y}|)$,
$$\big|\big| P_{Y^n}(\cdot) - P_{Y^n|\Pi(X^n, \delta, P_X)}(\cdot|0) \big|\big| \leq e^{-n\bar{\delta}}. \quad (17)$$
Proof. 
The proof is presented in Appendix C. ☐
In the next section, we establish inner bounds on $\mathcal{R}_e(\epsilon)$ and $\mathcal{R}_d(\epsilon)$.

3. Main Results

The following two theorems are the main results of this paper, providing inner bounds on $\mathcal{R}_e(\epsilon)$ and $\mathcal{R}_d(\epsilon)$, respectively.
Theorem 1.
For $\epsilon \in (0, 1)$, $(R, \kappa, \Lambda_0, \Lambda_1) \in \mathcal{R}_e(\epsilon)$ if there exists an auxiliary r.v. $W$ such that $(V, S) - U - W$, and
$$R \geq I_P(W; U|V), \quad (18)$$
$$\kappa \leq \kappa^*(P_{W|U}, R), \quad (19)$$
$$\Lambda_0 \leq H_P(S|W, V), \quad (20)$$
$$\Lambda_1 \leq \mathbb{1}(P_U = Q_U) \, H_Q(S|W, V) + \mathbb{1}(P_U \neq Q_U) \, H_Q(S|V), \quad (21)$$
where
$$\kappa^*(P_{W|U}, R) := \min\big( E_1(P_{W|U}), \; E_2(R, P_{W|U}) \big),$$
$$E_1(P_{W|U}) := \min_{P_{\tilde{U}\tilde{V}\tilde{W}} \in \mathcal{L}_1(P_{UW}, P_{VW})} D\big( P_{\tilde{U}\tilde{V}\tilde{W}} \,||\, Q_{UV} P_{W|U} \big),$$
$$E_2(R, P_{W|U}) := \begin{cases} \displaystyle \min_{P_{\tilde{U}\tilde{V}\tilde{W}} \in \mathcal{L}_2(P_{UW}, P_V)} D\big( P_{\tilde{U}\tilde{V}\tilde{W}} \,||\, Q_{UV} P_{W|U} \big) + \big( R - I_P(U; W|V) \big), & \text{if } I_P(U; W) > R, \\ \infty, & \text{otherwise}, \end{cases}$$
$$\mathcal{L}_1(P_{UW}, P_{VW}) := \big\{ P_{\tilde{U}\tilde{V}\tilde{W}} \in \mathcal{P}(\mathcal{U} \times \mathcal{V} \times \mathcal{W}) : P_{\tilde{U}\tilde{W}} = P_{UW}, \; P_{\tilde{V}\tilde{W}} = P_{VW} \big\},$$
$$\mathcal{L}_2(P_{UW}, P_V) := \big\{ P_{\tilde{U}\tilde{V}\tilde{W}} \in \mathcal{P}(\mathcal{U} \times \mathcal{V} \times \mathcal{W}) : P_{\tilde{U}\tilde{W}} = P_{UW}, \; P_{\tilde{V}} = P_V, \; H_P(W|V) \leq H(\tilde{W}|\tilde{V}) \big\},$$
$$P_{SUVW} := P_{SUV} P_{W|U}, \quad \text{and} \quad Q_{SUVW} := Q_{SUV} P_{W|U}.$$
Theorem 2.
For a given bounded additive distortion measure $d(\cdot, \cdot)$ and $\epsilon \in (0, 1)$, $(R, \kappa, \Delta_0, \Delta_1) \in \mathcal{R}_d(\epsilon)$ if there exist an auxiliary r.v. $W$ and deterministic functions $\phi: \mathcal{W} \times \mathcal{V} \rightarrow \hat{\mathcal{S}}$ and $\phi': \mathcal{V} \rightarrow \hat{\mathcal{S}}$, such that $(V, S) - U - W$ and (18) and (19),
$$\Delta_0 \leq \min_{\phi(\cdot, \cdot)} \mathbb{E}_P\big[ d(S, \phi(W, V)) \big],$$
$$\text{and} \quad \Delta_1 \leq \mathbb{1}(P_U = Q_U) \min_{\phi(\cdot, \cdot)} \mathbb{E}_Q\big[ d(S, \phi(W, V)) \big] + \mathbb{1}(P_U \neq Q_U) \min_{\phi'(\cdot)} \mathbb{E}_Q\big[ d(S, \phi'(V)) \big],$$
are satisfied, where $P_{SUVW}$ and $Q_{SUVW}$ are as defined in Theorem 1.
The proofs of Theorems 1 and 2 are given in Appendix D. While the rate-error exponent trade-off in Theorems 1 and 2 is the same as that achieved by the Shimokawa-Han-Amari (SHA) scheme [7], the coding strategy achieving it is different due to the privacy requirement. As mentioned above, in order to obtain a single-letter lower bound on the achievable distortion (and achievable equivocation) of the private part at the detector, it is required that the a posteriori probability distribution of $S^n$ given the observations $(M, V^n)$ at the detector is close, in some sense, to a desired "target" memoryless distribution. For this purpose, we use the so-called likelihood encoder [62,63] (at the observer) in our achievability scheme. The likelihood encoder is a stochastic encoder that induces the necessary randomness for $S^n$ at the detector, and, to the best of our knowledge, it has not been used before in the context of DHT. The analysis of the type I and type II error probabilities and of the privacy achieved by our scheme is novel, and involves the application of the well-known channel resolvability or soft-covering lemma [62,64,65]. Properties of the total variation distance between probability distributions mentioned in [43] play a key role in this analysis. The analysis also reveals the interesting fact that the coding schemes in Theorems 1 and 2, although quite different from the SHA scheme, achieve the same lower bound on the error exponent.
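To make the likelihood-encoder idea concrete, here is a minimal sketch on a toy binary source: codewords are drawn i.i.d. from $P_W$, and an index is selected stochastically with probability proportional to the likelihood of the observed sequence under each codeword. The distributions, sizes, and names are our own illustrative assumptions; the scheme analyzed in the paper is considerably more elaborate than this sketch:

```python
import numpy as np

rng = np.random.default_rng(3)
n, R = 20, 0.3                       # block length, rate in nats per sample
M = int(np.exp(n * R))               # codebook size |M| ~ e^{nR} (= 403 here)

# Toy binary joint: W ~ Ber(0.5), U = W xor Ber(0.2)
P_W = np.array([0.5, 0.5])
P_U_given_W = np.array([[0.8, 0.2],  # row: w, column: u
                        [0.2, 0.8]])

codebook = rng.choice(2, size=(M, n), p=P_W)   # codewords W^n(m) drawn i.i.d. P_W
u = rng.integers(0, 2, n)                      # an observed source sequence U^n

# Likelihood encoder: pick index m with probability proportional to
# prod_i P_{U|W}(u_i | W_i(m)) -- a stochastic, not maximum-likelihood, choice.
log_lik = np.sum(np.log(P_U_given_W[codebook, u]), axis=1)
probs = np.exp(log_lik - log_lik.max())
probs /= probs.sum()
m = rng.choice(M, p=probs)
print("selected message index:", m)
```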
Theorems 1 and 2 provide single-letter inner bounds on $\mathcal{R}_e(\epsilon)$ and $\mathcal{R}_d(\epsilon)$, respectively. A complete computable characterization of these regions would require a matching converse. This is a hard problem, since such a characterization is, in general, not available even for the DHT problem without a privacy constraint (see [5]). However, it is known that a single-letter characterization of the rate-error exponent region exists for the special case of TACI [11]. In the next section, we show that TACI with a privacy constraint also admits a single-letter characterization, in addition to other optimality results.

4. Optimality Results for Special Cases

4.1. TACI with a Privacy Constraint

Assume that the detector observes two discrete memoryless sources $Y^n$ and $Z^n$, i.e., $V^n = (Y^n, Z^n)$. In TACI, the detector tests for the conditional independence of $U$ and $Y$ given $Z$. Thus, the joint distributions of the r.v.'s under the null and alternate hypotheses are given by
$$H_0: P_{SUYZ} := P_{S|UYZ} \, P_{U|Z} \, P_{Y|UZ} \, P_Z, \quad (25)$$
and
$$H_1: Q_{SUYZ} := Q_{S|UYZ} \, P_{U|Z} \, P_{Y|Z} \, P_Z, \quad (26)$$
respectively.
Let $\mathcal{R}_e$ and $\mathcal{R}_d$ denote the rate-error exponent-equivocation and rate-error exponent-distortion regions, respectively, for the case of a vanishing type I error probability constraint, i.e.,
$$\mathcal{R}_e := \lim_{\epsilon \to 0} \mathcal{R}_e(\epsilon) \quad \text{and} \quad \mathcal{R}_d := \lim_{\epsilon \to 0} \mathcal{R}_d(\epsilon).$$
Assume that the privacy constraint under the alternate hypothesis is inactive. Thus, we are interested in characterizing the set of all tuples $(R, \kappa, \Lambda_0, \Lambda_1) \in \mathcal{R}_e$ and $(R, \kappa, \Delta_0, \Delta_1) \in \mathcal{R}_d$, where
$$\Lambda_1 \leq \Lambda_{min} := H_Q(S|U, Y, Z), \quad \text{and} \quad \Delta_1 \leq \Delta_{min} := \min_{\phi(u, y, z)} \mathbb{E}_Q\big[ d(S, \phi(U, Y, Z)) \big].$$
Please note that $\Lambda_{min}$ and $\Delta_{min}$ correspond to the equivocation and average distortion of $S^n$ at the detector, respectively, when $U^n$ is available directly at the detector under the alternate hypothesis. This assumption is motivated by scenarios in which the observer is more eager to protect $S^n$ when there is a correlation between its own observations and those of the detector, such as in the online shopping portal example mentioned in Section 1. In that example, $U^n$, $S^n$ and $Y^n$ correspond to the shopping behavior, further information about the customer, and the customer data available to the shopping portal, respectively.
For the above-mentioned case, we have the following results.
Proposition 1.
For the HT given in (26), $(R, \kappa, \Lambda_0, \Lambda_{min}) \in \mathcal{R}_e$ if and only if there exists an auxiliary r.v. $W$ such that $(Z, Y, S) - U - W$, and
$$\kappa \leq I_P(W; Y|Z), \quad (28)$$
$$R \geq I_P(W; U|Z), \quad (29)$$
$$\Lambda_0 \leq H_P(S|W, Z, Y), \quad (30)$$
for some joint distribution of the form $P_{SUYZW} := P_{SUYZ} P_{W|U}$.
Proof. 
For TACI, the inner bound in Theorem 1 yields that for $\epsilon \in (0, 1)$, $(R, \kappa, \Lambda_0, \Lambda_1) \in \mathcal{R}_e(\epsilon)$ if there exists an auxiliary r.v. $W$ such that $(Y, Z, S) - U - W$, and
$$R \geq I_P(W; U|Y, Z),$$
$$\kappa \leq \kappa^*(P_{W|U}, R),$$
$$\Lambda_0 \leq H_P(S|W, Y, Z),$$
$$\Lambda_1 \leq H_Q(S|W, Y, Z),$$
where
$$\kappa^*(P_{W|U}, R) := \min\big( E_1(P_{W|U}), \; E_2(R, P_{W|U}) \big),$$
$$E_1(P_{W|U}) := \min_{P_{\tilde{U}\tilde{Y}\tilde{Z}\tilde{W}} \in \mathcal{L}_1(P_{UW}, P_{YZW})} D\big( P_{\tilde{U}\tilde{Y}\tilde{Z}\tilde{W}} \,||\, Q_{UYZ} P_{W|U} \big),$$
$$E_2(R, P_{W|U}) := \begin{cases} \displaystyle \min_{P_{\tilde{U}\tilde{Y}\tilde{Z}\tilde{W}} \in \mathcal{L}_2(P_{UW}, P_{YZ})} D\big( P_{\tilde{U}\tilde{Y}\tilde{Z}\tilde{W}} \,||\, Q_{UYZ} P_{W|U} \big) + \big( R - I_P(U; W|Y, Z) \big), & \text{if } I_P(U; W) > R, \\ \infty, & \text{otherwise}, \end{cases}$$
$$\mathcal{L}_1(P_{UW}, P_{YZW}) := \big\{ P_{\tilde{U}\tilde{Y}\tilde{Z}\tilde{W}} \in \mathcal{P}(\mathcal{U} \times \mathcal{Y} \times \mathcal{Z} \times \mathcal{W}) : P_{\tilde{U}\tilde{W}} = P_{UW}, \; P_{\tilde{Y}\tilde{Z}\tilde{W}} = P_{YZW} \big\},$$
$$\mathcal{L}_2(P_{UW}, P_{YZ}) := \big\{ P_{\tilde{U}\tilde{Y}\tilde{Z}\tilde{W}} \in \mathcal{P}(\mathcal{U} \times \mathcal{Y} \times \mathcal{Z} \times \mathcal{W}) : P_{\tilde{U}\tilde{W}} = P_{UW}, \; P_{\tilde{Y}\tilde{Z}} = P_{YZ}, \; H_P(W|Y, Z) \leq H(\tilde{W}|\tilde{Y}, \tilde{Z}) \big\},$$
$$P_{SUYZW} := P_{SUYZ} P_{W|U}, \quad Q_{SUYZW} := Q_{S|UYZ} \, P_{U|Z} \, P_{Y|Z} \, P_Z \, P_{W|U}.$$
Please note that since $(Y, Z, S) - U - W$, we have
$$I_P(W; U) \geq I_P(W; U|Y, Z).$$
Let $\mathcal{B} := \{ P_{W|U} : I_P(U; W|Z) \leq R \}$. Then, for $P_{W|U} \in \mathcal{B}$, we have
$$E_1(P_{W|U}) = \min_{P_{\tilde{U}\tilde{Y}\tilde{Z}\tilde{W}} \in \mathcal{L}_1(P_{UW}, P_{YZW})} D\big( P_{\tilde{U}\tilde{Y}\tilde{Z}\tilde{W}} \,||\, Q_{UYZ} P_{W|U} \big) = I_P(Y; W|Z),$$
$$E_2(R, P_{W|U}) \geq I_P(U; W|Z) - I_P(U; W|Y, Z) = I_P(Y; W|Z).$$
Hence,
$$\kappa^*(P_{W|U}, R) \geq I_P(Y; W|Z).$$
Noting that $\Lambda_{min} \leq H_Q(S|W, Y, Z)$ (by the data processing inequality), we have shown that for $\Lambda_1 \leq \Lambda_{min}$, $(R, \kappa, \Lambda_0, \Lambda_1) \in \mathcal{R}_e$ if (28)–(30) are satisfied. This completes the proof of achievability.
Converse: Let $(R, \kappa, \Lambda_0, \Lambda_1) \in \mathcal{R}_e$. Let $T$ be a r.v. uniformly distributed over $[n]$ and independent of all the other r.v.'s $(U^n, Y^n, Z^n, S^n, M)$. Define an auxiliary r.v. $W := (W_T, T)$, where $W_i := (M, Y^{i-1}, S^{i-1}, Z^{i-1}, Z_{i+1}^n)$, $i \in [n]$. Then, for sufficiently large $n$, we have
$$\begin{aligned}
nR &\geq H_P(M) \geq H_P(M|Z^n) \geq I_P(M; U^n|Z^n) = \sum_{i=1}^n I_P(M; U_i|U^{i-1}, Z^n) \\
&= \sum_{i=1}^n I_P(M, U^{i-1}, Z^{i-1}, Z_{i+1}^n; U_i|Z_i) \quad (39) \\
&= \sum_{i=1}^n I_P(M, U^{i-1}, Z^{i-1}, Z_{i+1}^n, Y^{i-1}, S^{i-1}; U_i|Z_i) \quad (40) \\
&\geq \sum_{i=1}^n I_P(M, Z^{i-1}, Z_{i+1}^n, Y^{i-1}, S^{i-1}; U_i|Z_i) = \sum_{i=1}^n I_P(W_i; U_i|Z_i) = n I_P(W_T; U_T|Z_T, T) \\
&= n I_P(W_T, T; U_T|Z_T) \quad (41) \\
&= n I_P(W; U|Z).
\end{aligned}$$
Here, (39) follows since the sequences $(U^n, Z^n)$ are memoryless; (40) follows since $(Y^{i-1}, S^{i-1}) - (M, U^{i-1}, Z^n) - U_i$ forms a Markov chain; and (41) follows from the fact that $T$ is independent of all the other r.v.'s.
The equivocation of $S^n$ under the null hypothesis can be bounded as follows:
$$\begin{aligned}
H(S^n|M, Y^n, Z^n, H = 0) &= \sum_{i=1}^n H(S_i|M, S^{i-1}, Y^n, Z^n, H = 0) \\
&\leq \sum_{i=1}^n H(S_i|M, Y^{i-1}, S^{i-1}, Z^{i-1}, Z_{i+1}^n, Y_i, Z_i, H = 0) \quad (43) \\
&= \sum_{i=1}^n H(S_i|W_i, Y_i, Z_i, H = 0) = n H(S_T|W_T, Y_T, Z_T, T, H = 0) \\
&= n H_P(S|W, Y, Z),
\end{aligned}$$
where $P_{SUYZW} = P_{SUYZ} P_{W|U}$ for some conditional distribution $P_{W|U}$. In (43), we used the fact that conditioning reduces entropy.
Finally, we prove the upper bound on $\kappa$. For any encoding function $f_n$ and decision region $\mathcal{A}_n \subseteq \mathcal{M} \times \mathcal{Y}^n \times \mathcal{Z}^n$ for $H_0$ such that $\epsilon_n \to 0$, we have
$$D\big( P_{MY^nZ^n} \,||\, Q_{MY^nZ^n} \big) \geq P_{MY^nZ^n}(\mathcal{A}_n) \log \frac{P_{MY^nZ^n}(\mathcal{A}_n)}{Q_{MY^nZ^n}(\mathcal{A}_n)} + P_{MY^nZ^n}(\mathcal{A}_n^c) \log \frac{P_{MY^nZ^n}(\mathcal{A}_n^c)}{Q_{MY^nZ^n}(\mathcal{A}_n^c)} \geq -H(\epsilon_n) - (1 - \epsilon_n) \log \bar{\beta}_n(f_n, \epsilon_n). \quad (45)$$
Here, (45) follows from the log-sum inequality [60]. Thus,
$$\limsup_{n \to \infty} \frac{-\log \bar{\beta}_n(f_n, \epsilon_n)}{n} \leq \limsup_{n \to \infty} \frac{1}{n} D\big( P_{MY^nZ^n} \,||\, Q_{MY^nZ^n} \big) = \limsup_{n \to \infty} \frac{1}{n} I_P(M; Y^n|Z^n) \quad (46)$$
$$= H_P(Y|Z) - \liminf_{n \to \infty} \frac{1}{n} H_P(Y^n|M, Z^n), \quad (47)$$
where (46) follows since $Q_{MY^nZ^n} = P_{MZ^n} P_{Y^n|Z^n}$. The last term can be single-letterized as follows:
$$H_P(Y^n|M, Z^n) = \sum_{i=1}^n H_P(Y_i|Y^{i-1}, M, Z^n) \geq \sum_{i=1}^n H_P(Y_i|Y^{i-1}, S^{i-1}, M, Z^n) = \sum_{i=1}^n H_P(Y_i|Z_i, W_i) = n H_P(Y_T|Z_T, W_T, T) = n H_P(Y|Z, W). \quad (48)$$
Substituting (48) into (47), we obtain
$$\kappa \leq I_P(Y; W|Z).$$
Also, note that $(Z, Y, S) - U - W$ holds. To see this, note that $(U_i, Y_i, Z_i, S_i)$ are i.i.d. across $i \in [n]$. Hence, any dependence of $W_i$ on $(Y_i, Z_i, S_i)$ is only through $M$ as a function of $U^n$, and so, given $U_i$, $W_i$ is independent of $(Y_i, Z_i, S_i)$. The above Markov chain then follows from the fact that $T$ is independent of $(U^n, Y^n, Z^n, S^n, M)$. This completes the proof of the converse and the theorem. ☐
Next, we state the result for TACI with a distortion privacy constraint, where the distortion is measured according to an arbitrary distortion measure $d(\cdot, \cdot)$. Recall that $\Delta_{min} := \min_{\phi(u, y, z)} \mathbb{E}_Q\big[ d(S, \phi(U, Y, Z)) \big]$.
Proposition 2.
For the HT given in (26), $(R, \kappa, \Delta_0, \Delta_{min}) \in \mathcal{R}_d$ if and only if there exist an auxiliary r.v. $W$ and a deterministic function $\phi: \mathcal{W} \times \mathcal{Y} \times \mathcal{Z} \rightarrow \hat{\mathcal{S}}$ such that
$$R \geq I_P(W; U|Z), \quad (50)$$
$$\kappa \leq I_P(W; Y|Z), \quad (51)$$
$$\Delta_0 \leq \min_{\phi(\cdot, \cdot, \cdot)} \mathbb{E}_P\big[ d(S, \phi(W, Y, Z)) \big], \quad (52)$$
for some $P_{SUYZW}$ as defined in Proposition 1.
Proof. 
The proof of achievability follows from Theorem 2, similarly to the way Proposition 1 is obtained from Theorem 1; hence, only the differences are highlighted. Similar to the inequality $\Lambda_{min} \leq H_Q(S|W, Y, Z)$ in the proof of Proposition 1, we need to prove the inequality $\Delta_{min} \leq \min_{\phi(\cdot,\cdot,\cdot)} \mathbb{E}_Q\big[ d(S, \phi(W, Y, Z)) \big]$, where $Q_{SUYZW} := Q_{SUYZ} P_{W|U}$ for some conditional distribution $P_{W|U}$. This can be shown as follows:
$$\begin{aligned}
\min_{\phi(\cdot, \cdot, \cdot)} \mathbb{E}_Q\big[ d(S, \phi(W, Y, Z)) \big] &\geq \sum_{u, y, z} Q_{UYZ}(u, y, z) \sum_w P_{W|U}(w|u) \min_{\phi(w, y, z)} \sum_s Q_{S|UYZ}(s|u, y, z) \, d(s, \phi(w, y, z)) \\
&= \sum_{u, y, z} Q_{UYZ}(u, y, z) \sum_{w, s} P_{W|U}(w|u) \, Q_{S|UYZ}(s|u, y, z) \, d(s, \phi^*(u, y, z)) \quad (53) \\
&\geq \sum_{u, y, z} Q_{UYZ}(u, y, z) \min_{\phi(u, y, z)} \sum_{w, s} P_{W|U}(w|u) \, Q_{S|UYZ}(s|u, y, z) \, d(s, \phi(u, y, z)) \\
&= \sum_{u, y, z} Q_{UYZ}(u, y, z) \min_{\phi(u, y, z)} \sum_s Q_{S|UYZ}(s|u, y, z) \, d(s, \phi(u, y, z)) \\
&= \min_{\phi(\cdot, \cdot, \cdot)} \mathbb{E}_Q\big[ d(S, \phi(U, Y, Z)) \big] := \Delta_{min},
\end{aligned}$$
where in (53), $\phi^*(u, y, z)$ is chosen such that
$$\phi^*(u, y, z) := \arg\min_{\phi(w, y, z), \, w \in \mathcal{W}} \sum_s Q_{S|UYZ}(s|u, y, z) \, d(s, \phi(w, y, z)).$$
Converse: Let $W = (W_T, T)$ denote the auxiliary r.v. defined in the converse of Proposition 1. Inequalities (50) and (51) follow exactly as in Proposition 1; we prove (52). Defining $\tilde{\phi}_n(M, Y^n, Z^n, S^{i-1}, i) := \bar{\phi}_{i,n}(M, Y^n, Z^n, S^{i-1})$, we have
$$\begin{aligned}
\min_{\{g_{i,n}^{(r)}\}_{i=1}^n} \mathbb{E}\big[ d(S^n, \hat{S}^n) \,\big|\, H = 0 \big] &= \min_{\{\tilde{\phi}_n(m, y^n, z^n, s^{i-1}, i)\}_{i=1}^n} \mathbb{E}\Big[ \sum_{i=1}^n d\big( S_i, \tilde{\phi}_n(M, Y^n, Z^n, S^{i-1}, i) \big) \,\Big|\, H = 0 \Big] \quad (54) \\
&= \min_{\{\tilde{\phi}_n(\cdot, \cdot, \cdot, \cdot, \cdot)\}_{i=1}^n} \mathbb{E}\Big[ \sum_{i=1}^n d\big( S_i, \tilde{\phi}_n(W_i, Z_i, Y_i, Y_{i+1}^n, i) \big) \,\Big|\, H = 0 \Big] \\
&\leq \min_{\{\phi(w_i, z_i, y_i, i)\}} \mathbb{E}\Big[ \sum_{i=1}^n d\big( S_i, \phi(W_i, Z_i, Y_i, i) \big) \,\Big|\, H = 0 \Big] \\
&= n \min_{\phi(\cdot, \cdot, \cdot, \cdot)} \mathbb{E}\Big[ \mathbb{E}\big[ d\big( S_T, \phi(W_T, Z_T, Y_T, T) \big) \,\big|\, T \big] \,\Big|\, H = 0 \Big] \\
&= n \min_{\phi(\cdot, \cdot, \cdot, \cdot)} \mathbb{E}\big[ d\big( S_T, \phi(W_T, Z_T, Y_T, T) \big) \,\big|\, H = 0 \big] \\
&= n \min_{\phi(w, z, y)} \mathbb{E}\big[ d\big( S, \phi(W, Z, Y) \big) \,\big|\, H = 0 \big],
\end{aligned}$$
where (54) is due to (A1) (in Appendix B). Hence, any $\Delta_0$ satisfying (6) satisfies
$$\Delta_0 \leq \min_{\phi(w, z, y)} \mathbb{E}_P\big[ d(S, \phi(W, Z, Y)) \big].$$
This completes the proof of the converse and the theorem. ☐
A more general version of Propositions 1 and 2 is claimed in [66] as Theorems 7 and 8, respectively, in which a privacy constraint under the alternate hypothesis is also imposed. However, we have identified a mistake in the converse proof; and hence, a single-letter characterization for this general problem remains open.
To complete the single-letter characterization in Propositions 1 and 2, we bound the alphabet size of the auxiliary r.v. $W$ in the following lemma.
Lemma 5.
In Propositions 1 and 2, it suffices to consider auxiliary r.v.'s $W$ such that $|\mathcal{W}| \leq |\mathcal{U}| + 2$.
The proof of Lemma 5 uses standard arguments based on the Fenchel–Eggleston–Carathéodory theorem and is given in Appendix E.
Remark 1.
When $Q_{S|UYZ} = Q_{S|YZ}$, a tight single-letter characterization of $\mathcal{R}_e$ and $\mathcal{R}_d$ exists even if the privacy constraint is active under the alternate hypothesis. This is due to the fact that, given $Y^n$ and $Z^n$, $M$ is independent of $S^n$ under the alternate hypothesis. In this case, $(R, \kappa, \Lambda_0, \Lambda_1) \in \mathcal{R}_e$ if and only if there exists an auxiliary r.v. $W$ such that $(Z, Y, S) - U - W$, and
$$\kappa \leq I_P(W; Y|Z), \quad (55)$$
$$R \geq I_P(W; U|Z), \quad (56)$$
$$\Lambda_0 \leq H_P(S|W, Z, Y),$$
$$\Lambda_1 \leq H_Q(S|Z, Y),$$
for some $P_{SUYZW}$ as in Proposition 1. Similarly, $(R, \kappa, \Delta_0, \Delta_1) \in \mathcal{R}_d$ if and only if there exist an auxiliary r.v. $W$ and a deterministic function $\phi: \mathcal{W} \times \mathcal{Y} \times \mathcal{Z} \rightarrow \hat{\mathcal{S}}$ such that (55) and (56),
$$\Delta_0 \leq \min_{\phi(\cdot, \cdot, \cdot)} \mathbb{E}_P\big[ d(S, \phi(W, Y, Z)) \big],$$
$$\Delta_1 \leq \min_{\phi'(\cdot, \cdot)} \mathbb{E}_Q\big[ d(S, \phi'(Y, Z)) \big],$$
are satisfied for some $P_{SUYZW}$ as in Proposition 1.
The computation of the trade-off given in Proposition 1 is challenging despite the cardinality bound on the auxiliary r.v. $W$ provided by Lemma 5, as closed-form solutions do not exist in general. To see this, note that the inequality constraints defining $\mathcal{R}_e$ are not convex in general, and hence even computing specific points of the trade-off can be a hard problem. This is evident from the fact that, in the absence of the privacy constraint in Proposition 1, i.e., (30), computing the maximum error exponent for a given rate constraint is equivalent to the information bottleneck problem [67], which is known to be a hard non-convex optimization problem. Also, the complexity of a brute-force search is exponential in $|\mathcal{U}|$, and hence intractable for large values of $|\mathcal{U}|$. Below, we provide an example that can be solved in closed form and hence computed easily.
Example 1.
Let $\mathcal{V} = \mathcal{U} = \mathcal{S} = \{0, 1\}$, $V = Y$, $Z$ a constant, $V - S - U$, $P_U(0) = Q_U(0) = 0.5$, $P_{S|U}(0|0) = P_{S|U}(1|1) = Q_{S|U}(0|0) = Q_{S|U}(1|1) = 1 - q$, $P_{V|S}(0|0) = P_{V|S}(1|1) = 1 - p$, and $Q_{V|S}(0|0) = Q_{V|S}(1|1) = 0.5$. Then, $(R, \kappa, \Lambda_0, 0) \in \mathcal{R}_e$ if there exists $r \in [0, 0.5]$ such that
$$R \geq 1 - h_b(r), \quad (61)$$
$$\kappa \leq 1 - h_b((r * q) * p), \quad (62)$$
$$\Lambda_0 \leq h_b(p) + h_b(q * r) - h_b(p * (q * r)), \quad (63)$$
where for $a, b \in \mathbb{R}$, $a * b := (1 - a) \cdot b + (1 - b) \cdot a$, and $h_b: [0, 1] \rightarrow [0, 1]$ is the binary entropy function given by
$$h_b(t) := -(1 - t) \log(1 - t) - t \log(t).$$
The above characterization is exact for $q = 0$, i.e., $(R, \kappa, \Lambda_0, 0) \in \mathcal{R}_e$ only if there exists $r \in [0, 0.5]$ such that (61)–(63) are satisfied. (Numerical computation suggests that the characterization given in (61)–(63) is exact even when $q \in (0, 1)$.)
Proof. 
Taking $\mathcal{W} = \{0, 1\}$ and $P_{W|U}(0|0) = P_{W|U}(1|1) = 1 - r$, the constraints defining the trade-off in Proposition 1 simplify to
$$\begin{aligned}
I_P(U; W) &= 1 - h_b(r), \\
I_P(V; W) &= 1 - h_b((r * q) * p), \\
H_P(S|V, W) &= H_P(S|W) - I_P(S; V|W) = H_P(S|W) + H_P(V|S) - H_P(V|W) \\
&= h_b(r * q) + h_b(p) - h_b(p * (q * r)).
\end{aligned}$$
On the other hand, if $q = 0$, note that $S = U$. Hence, the same constraints can be bounded as follows:
$$I_P(U; W) = 1 - H_P(U|W),$$
$$I_P(V; W) = 1 - H_P(V|W) \leq 1 - h_b\big( h_b^{-1}(H_P(U|W)) * p \big), \quad (64)$$
$$H_P(U|V, W) = H_P(U|W) + H_P(V|U) - H_P(V|W) \leq h_b(p) + H_P(U|W) - h_b\big( h_b^{-1}(H_P(U|W)) * p \big), \quad (65)$$
where $h_b^{-1}: [0, 1] \rightarrow [0, 0.5]$ is the inverse of the binary entropy function. Here, the inequalities in (64) and (65) follow from an application of Mrs. Gerber's lemma [68], since $V = U \oplus N_p$ under the null hypothesis, where $N_p \sim \mathrm{Ber}(p)$ is independent of $U$ and $W$. Also, $\Lambda_{min} = 0$ since $S = U$. Noting that $H_P(U|W) \in [0, 1]$ and defining $r := h_b^{-1}(H_P(U|W)) \in [0, 0.5]$, the result follows. ☐
Figure 2 depicts the curve $\big( 1 - h_b(r), \; 1 - h_b(p * (q * r)), \; h_b(p) + h_b(r * q) - h_b(p * (r * q)) \big)$ for $q = 0$ and $p \in \{0.15, 0.25, 0.35\}$, as $r$ varies over $[0, 0.5]$. The projections of this curve onto the $R$-$\kappa$ and $\kappa$-$\Lambda_0$ planes are shown in Figure 3a,b, respectively, for $q \in \{0, 0.1\}$ and the same values of $p$. As expected, the error exponent $\kappa$ increases with the rate $R$, while the equivocation $\Lambda_0$ decreases with $\kappa$ at the boundary of $\mathcal{R}_e$.
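These boundary curves can be reproduced directly from (61)–(63); a short sketch follows (entropies are computed in bits so that $H(U) = 1$, matching the example's normalization; the function names are ours):

```python
import numpy as np

def h_b(t):
    """Binary entropy, normalized so that h_b(0.5) = 1 (logarithm base 2)."""
    t = np.clip(t, 1e-12, 1 - 1e-12)
    return -(1 - t) * np.log2(1 - t) - t * np.log2(t)

def bconv(a, b):
    """Binary convolution a * b = (1 - a) b + (1 - b) a."""
    return (1 - a) * b + (1 - b) * a

q, p = 0.0, 0.25
r = np.linspace(0.0, 0.5, 11)
R = 1 - h_b(r)                                                  # rate bound (61)
kappa = 1 - h_b(bconv(bconv(r, q), p))                          # exponent bound (62)
Lam0 = h_b(p) + h_b(bconv(q, r)) - h_b(bconv(p, bconv(q, r)))   # equivocation (63)
for row in zip(r, R, kappa, Lam0):
    print("r=%.2f  R>=%.3f  kappa<=%.3f  Lambda0<=%.3f" % row)
```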
Proposition 1 (resp. Proposition 2) provides a characterization of $\mathcal{R}_e$ (resp. $\mathcal{R}_d$) under a vanishing type I error probability constraint. Consequently, the converse parts of these results are weak converse results in the context of HT. In the next subsection, we establish the optimal error exponent-privacy trade-off for the special case of zero-rate compression. This trade-off is independent of the type I error probability constraint $\epsilon \in (0, 1)$, and hence constitutes a strong converse result.

4.2. Zero-Rate Compression

Assume the following zero-rate constraint on the communication between the observer and the detector:
$$\lim_{n \to \infty} \frac{\log(|\mathcal{M}|)}{n} = 0. \quad (66)$$
Please note that (66) does not imply that $|\mathcal{M}| = 0$, i.e., that nothing can be transmitted; rather, the message set cardinality may grow at most sub-exponentially in $n$. Such a scenario is motivated in practice by low-power or low-bandwidth applications in which communication is costly. Propositions 3 and 4 below provide optimal single-letter characterizations of $\mathcal{R}_d(\epsilon)$ and $\mathcal{R}_e(\epsilon)$ in this case. While the coding schemes in the achievability parts of these results are inspired by that of [6], the analysis of the privacy achieved at the detector is new; Lemma 4 serves as a crucial tool for this purpose. We next state the results. Let
$$\Delta_0^{max} := \min_{\phi(\cdot)} \mathbb{E}_P\big[ d(S, \phi(V)) \big], \quad \text{and} \quad \Delta_1^{max} := \min_{\phi(\cdot)} \mathbb{E}_Q\big[ d(S, \phi(V)) \big].$$
Proposition 3.
For $\epsilon \in (0, 1)$, $(0, \kappa, \Delta_0, \Delta_1) \in \mathcal{R}_d(\epsilon)$ if and only if
$$\kappa \leq \min_{P_{\tilde{U}\tilde{V}} \in \mathcal{L}(P_U, P_V)} D(P_{\tilde{U}\tilde{V}} \,||\, Q_{UV}), \quad (68)$$
$$\Delta_0 \leq \Delta_0^{max}, \quad (69)$$
$$\Delta_1 \leq \Delta_1^{max}, \quad (70)$$
where $\phi: \mathcal{V} \rightarrow \hat{\mathcal{S}}$ is a deterministic function and
$$\mathcal{L}(P_U, P_V) := \big\{ P_{\tilde{U}\tilde{V}} \in \mathcal{P}(\mathcal{U} \times \mathcal{V}) : P_{\tilde{U}} = P_U, \; P_{\tilde{V}} = P_V \big\}.$$
Proof. 
First, we prove that ( 0 , κ , Δ 0 , Δ 1 ) satisfying (68)–(70) is achievable. While the encoding and decoding scheme is the same as that in [6], we mention it for the sake of completeness.
Encoding: The observer sends the message M = 1 if U n T [ P U ] δ n , δ > 0 , and M = 0 otherwise.
Decoding: The detector declares H ^ = 0 if M = 1 and V n T [ P V ] δ n , δ > 0 . Otherwise, H ^ = 1 is declared.
We analyze the type I and type II error probabilities for the above scheme. Please note that for any δ > 0 , the weak law of large numbers implies that
P U n T [ P U ] δ n V n T [ P V ] δ n ) | H = 0 = P M = 1 V n T [ P V ] δ n ) | H = 0 ( n ) 1 .
Hence, the type I error probability tends to zero, asymptotically. The type II error probability can be written as follows:
β n ( f n , g n ) = P ( U n T [ P U ] δ n V n T [ P V ] δ n ) | H = 1 ) = u n T [ P U ] δ n , v n T [ P V ] δ Q U n V n ( u n , v n ) ( n + 1 ) | U | | V | e n ( κ * O ( δ ) ) = e n κ * | U | | V | log ( n + 1 ) n O ( δ ) ,
where
κ * = min P U ˜ V ˜ L ( P U , P V ) D ( P U ˜ V ˜ | | Q U V ) .
Next, we lower bound the average distortion for S n achieved by this scheme at the detector. Defining
Π ( U n , δ , P U ) : = 1 U n T [ P U ] δ n ,
ρ n ( 0 ) ( δ ) : = P S n V n ( · ) P S n V n | Π ( U n , δ , P U ) ( · | 0 ) , ,
ρ n ( 1 ) ( δ ) : = Q S n V n ( · ) Q S n V n | Π ( U n , δ , P U ) ( · | 1 ) , ϕ n ( v n ) : = ( ϕ ( v 1 ) , , ϕ ( v n ) ) ,
we can write
| min { ϕ ¯ i ( m , v n , s i 1 ) } i = 1 n E d S n , S ^ n | H = 0 n min ϕ ( v ) E P d S , ϕ ( V ) | = | min { ϕ ¯ i ( m , v n , s i 1 ) } i = 1 n E d S n , S ^ n | H = 0 min ϕ n ( v n ) E d S n , ϕ n ( V n ) | H = 0 | | min { ϕ ¯ i ( m , v n , s i 1 ) } i = 1 n E d S n , S ^ n | H = 0 P M = 1 | H = 0 min ϕ n ( v n ) E d S n , ϕ n ( V n ) | M = 1 , H = 0 | + P M = 0 | H = 0 min ϕ n ( v n ) E d S n , ϕ n ( V n ) | M = 0 , H = 0 | min { ϕ ¯ i ( m , v n , s i 1 ) } i = 1 n E d S n , S ^ n | H = 0 min ϕ n ( v n ) E d S n , ϕ n ( V n ) | M = 1 , H = 0 | + P M = 0 | H = 0 [ min ϕ n ( v n ) E d S n , ϕ n ( V n ) | M = 1 , H = 0 + min ϕ n ( v n ) E d S n , ϕ n ( V n ) | M = 0 , H = 0 ] = | min { ϕ ¯ i ( m , v n , s i 1 ) } i = 1 n E d S n , S ^ n | H = 0 min ϕ n ( v n ) E d S n , ϕ n ( V n ) | Π ( U n , δ , P U ) = 0 , H = 0 | + P Π ( U n , δ , P U ) = 1 | H = 0 [ min ϕ n ( v n ) E d S n , ϕ n ( V n ) | M = 1 , H = 0 + min ϕ n ( v n ) E d S n , ϕ n ( V n ) | M = 0 , H = 0 ]
n D m ρ n ( 0 ) ( δ ) + 2 e n Ω ( δ ) n D m
( n ) 0 ,
where (74) is since Π ( U n , δ , P U ) = 1 M with probability one by the encoding scheme; (75) follows from
P Π ( U n , δ , P U ) = 1 | H = 0 = P U n T [ P U ] δ n | H = 0 e n Ω ( δ )
and ([43], Property 2(b)); and, (76) is due to (17). Similarly, it can be shown using (16) that if Q U = P U , then
| min { ϕ ¯ i , n ( m , v n , s i 1 ) } i = 1 n E d S n , S ^ n | H = 1 n min ϕ ( v ) E Q d S , ϕ ( V ) | ( n ) 0 .
On the other hand, if Q U P U and δ is small enough, we have
P M = 0 | H = 1 = P Π ( U n , δ , P U ) = 1 | H = 1 1 e n ( D ( P U | | Q U ) O ( δ ) ) ( n ) 1 .
Hence, we can write for δ small enough,
| min { ϕ ¯ i ( m , v n , s i 1 ) } i = 1 n E d S n , S ^ n | H = 1 n min ϕ ( v ) E Q d S , ϕ ( V ) | = | min { ϕ ¯ i ( m , v n , s i 1 ) } i = 1 n E d S n , S ^ n | H = 1 min ϕ n ( v n ) E d S n , ϕ n ( V n ) | H = 1 | | min { ϕ ¯ i ( m , v n , s i 1 ) } i = 1 n E d S n , S ^ n | H = 1 P M = 0 | H = 0 min ϕ n ( v n ) E d S n , ϕ n ( V n ) | M = 0 , H = 1 | + P M = 1 | H = 1 min ϕ n ( v n ) E d S n , ϕ n ( V n ) | M = 1 , H = 1 | min { ϕ ¯ i ( m , v n , s i 1 ) } i = 1 n E d S n , S ^ n | H = 1 min ϕ n ( v n ) E d S n , ϕ n ( V n ) | M = 0 , H = 1 | + P M = 1 | H = 1 [ min ϕ n ( v n ) E d S n , ϕ n ( V n ) | M = 1 , H = 1 + min ϕ n ( v n ) E d S n , ϕ n ( V n ) | M = 0 , H = 1 ] = | min { ϕ ¯ i ( m , v n , s i 1 ) } i = 1 n E d S n , S ^ n | H = 1 min ϕ n ( v n ) E d S n , ϕ n ( V n ) | Π ( U n , δ , P U ) = 1 , H = 1 | + P Π ( U n , δ , P U ) = 0 | H = 1 [ min ϕ n ( v n ) E d S n , ϕ n ( V n ) | M = 1 , H = 1 + min ϕ n ( v n ) E d S n , ϕ n ( V n ) | M = 0 , H = 1 ]
n D m ρ n ( 1 ) ( δ ) + 2 e n ( D ( P U | | Q U ) O ( δ ) ) n D m
( n ) 0 ,
where (80) is since Π ( U n , δ , P U ) = 1 M with probability one; (81) is due to (79) and ([43], Property 2(b)); and, (82) follows from (15). This completes the proof of the achievability.
We next prove the converse. Please note that by the strong converse result in [8], the right hand side (R.H.S) of (68) is an upper bound on the achievable error exponent for all ϵ ( 0 , 1 ) even without a privacy constraint (hence, also with a privacy constraint). Also,
min g i , n ( r ) E d S n , S ^ n | H = 0 min { ϕ ( v i ) } i = 1 n i = 1 n E P S i V i d S i , ϕ ( V i ) = n min { ϕ ( v ) } E P d ( S , ϕ ( V ) ) .
Here, (83) follows from the fact that the detector can always reconstruct S ^ i as a function of V i for i [ n ] . Similarly,
min g i , n ( r ) E d S n , S ^ n | H = 1 n min { ϕ ( v ) } E Q d ( S , ϕ ( V ) ) .
Hence, any achievable Λ 0 and Λ 1 must satisfy (69) and (70), respectively. This completes the proof. ☐
The following Proposition is the analogous result to Proposition 3 when the privacy measure is equivocation.
Proposition 4.
For ϵ ( 0 , 1 ) , ( 0 , κ , Λ 0 , Λ 1 ) R e ( ϵ ) if and only if it satisfies (68) and
Λ 0 H P ( S | V ) ,
Λ 1 H Q ( S | V ) .
Proof. 
For proving the achievability part, the encoding and decoding scheme is the same as in Proposition 3. Hence, the analysis of the error exponent given in Proposition 3 holds. To lower bound the equivocation of S n at the detector, defining Π ( U n , δ , P U ) , ρ n ( 0 ) ( δ ) and ρ n ( 1 ) ( δ ) as in (71)–(73), we can write
| n H P ( S | V ) H ( S n | M , V n , H = 0 ) | = | H ( S n | V n , H = 0 ) H ( S n | M , V n , H = 0 ) | | H ( S n , V n | H = 0 ) H ( S n , V n | M , H = 0 ) | | H ( S n , V n | H = 0 ) P M = 1 | H = 0 H ( S n , V n | M = 1 , H = 0 ) | + P M = 0 | H = 0 H ( S n , V n | M = 0 , H = 0 ) | H ( S n , V n | H = 0 ) H ( S n , V n | M = 1 , H = 0 ) | + P M = 0 | H = 0 H ( S n , V n | M = 1 , H = 0 ) + H ( S n , V n | M = 0 , H = 0 ) | H ( S n , V n | H = 0 ) H ( S n , V n | Π ( U n , δ , P U ) = 0 , H = 0 ) | + P Π ( U n , δ , P U ) = 1 | H = 0 H ( S n , V n | M = 1 , H = 0 ) + H ( S n , V n | M = 0 , H = 0 ) ( n ) 2 ρ n ( 0 ) ( δ ) log ρ n ( 0 ) ( δ ) | S | n | V | n + 2 e n Ω ( δ ) log | S | n | V | n
( n ) 0 ,
where (86) follows due to Lemma 3, ([60], Lemma 2.12) and the fact that entropy of a r.v. is bounded by the logarithm of cardinality of its support; and, (87) follows from (17) in Lemma 4 since δ > 0 . In a similar way, it can be shown using (16) that if Q U = P U , then
| H ( S n | V n , H = 1 ) H ( S n | M , V n , H = 1 ) | ( n ) 0 .
On the other hand, if Q U P U and δ is small enough, we can write
| n H Q ( S | V ) H ( S n | M , V n , H = 1 ) | = | H ( S n | V n , H = 1 ) H ( S n | M , V n , H = 1 ) | | H ( S n , V n | H = 1 ) H ( S n , V n | M , H = 1 ) | | H ( S n , V n | H = 1 ) H ( S n , V n | M = 0 , H = 1 ) | + P Π ( U n , δ , P U ) = 0 | H = 1 H ( S n , V n | M = 0 , H = 1 ) + H ( S n , V n | M = 1 , H = 1 ) 2 ρ n ( 1 ) ( δ ) log ρ n ( 1 ) ( δ ) | S | n | V | n + 2 e n ( D ( P U | | Q U ) O ( δ ) ) log | S | n | V | n ,
where (89) follows from Lemma 3 and (79). It follows from (15) in Lemma 4 that for δ > 0 sufficiently small, ρ n ( 1 ) ( δ ) e n δ ¯ for some δ ¯ > 0 , thus implying that the R.H.S. of (89) tends to zero. This completes the proof of achievability.
The converse follows from the results in [6,8] that the R.H.S of (68) is the optimal error exponent achievable for all values of ϵ ( 0 , 1 ) even when there is no privacy constraint, and the following inequality
H ( S n | M , V n , H = j ) H ( S n | V n , H = j ) , j = 0 , 1 .
This concludes the proof of the Proposition. ☐
In Section 2.2, we mentioned that it is possible to achieve a positive error exponent with perfect privacy in our model. Here, we provide an example of TAI with an equivocation privacy constraint under both hypothesis, and show that perfect privacy is possible. Recall that TAI is a special case of TACI, in which Z = constant, and hence, the null and alternate hypothesis are given by
H 0 : ( U n , Y n ) i = 1 n P U Y , and   H 1 : ( U n , Y n ) i = 1 n P U P Y .
Example 2.
Let S = U = { 0 , 1 , 2 , 3 } , Y = { 0 , 1 } ,
P S U = 0 . 125 · 1 1 0 0 1 1 0 0 0 0 1 1 0 0 1 1 , P Y | U = 1 0 0 1 1 0 0 1 ,
P S U Y : = P S U P Y | U and Q S U Y : = P S U P Y , where P Y = u U P U ( u ) P Y | U ( y | u ) . Then, we have H Q ( S | Y ) = H P ( S ) = H P ( U ) = 2 bits. Also, noting that under the null hypothesis, Y = U m o d 2 , H P ( S | Y ) = 2 bits. It follows from the inner bound given by Equations (31)–(34), and, (37) and (38) that ( R , κ , Λ 0 , Λ 1 ) R e ( ϵ ) , ϵ ( 0 , 1 ) if
R I P ( W ; U ) , κ I P ( W ; Y ) , Λ 0 H P ( S | W , Y ) , Λ 1 H Q ( S | W , Y ) = H Q ( S | W ) ,
where P S U Y W : = P S U Y P W | U and Q S U Y W : = Q S U Y P W | U for some conditional distribution P W | U . If we set W : = U m o d 2 , then we have I P ( U ; W ) = 1 bit, I P ( Y ; W ) = H P ( Y ) = 1 bit, H P ( S | W , Y ) = H P ( S | Y ) = 2 bits, and H Q ( S | W ) = H P ( S | Y ) = 2 bits. Thus, by revealing only W to the detector, it is possible to achieve a positive error exponent while ensuring maximum privacy under both the null and alternate hypothesis, i.e., the tuple ( 1 , 1 , 2 , 2 ) R e ( ϵ ) , ϵ ( 0 , 1 ) .

5. A Counter-Example to the Strong Converse

Ahlswede and Csiszár obtained a strong converse result for the DHT problem without a privacy constraint in [5], where they showed that for any positive rate R, the optimal achievable error exponent is independent of the type I error probability constraint ϵ . Here, we explore whether a similar result holds in our model, in which an additional privacy constraint is imposed. We will show through a counter-example that this is not the case in general. The basic idea used in the counter-example is a “time-sharing” argument which is used to construct from a given coding scheme that achieves the optimal rate-error exponent-equivocation trade-off under a vanishing type I error probability constraint, a new coding scheme that satisfies the given type I error probability constraint ϵ * and the same error exponent as before, yet achieves a higher equivocation for S n at the detector. This concept has been used previously in other contexts, e.g., in the characterization of the first-order maximal channel coding rate of additive white gaussian noise (AWGN) channel in the finite block-length regime [69], and subsequently in the characterization of the second order maximal coding rate in the same setting [70]. However, we will provide a self-contained proof of the counter-example by using Lemma 4 for this purpose.
Assume that the joint distribution P S U V is such that H P ( S | U , V ) < H P ( S | V ) . Proving the strong converse amounts to showing that any ( R , κ , Λ 0 , Λ 1 ) R e ( ϵ ) for some ϵ ( 0 , 1 ) also belongs to R e . Consider TAI problem with an equivocation privacy constraint, in which R H P ( U ) and Λ 1 Λ m i n . Then, from the optimal single-letter characterization of R e given in Proposition 1, it follows by taking W = U that ( H P ( U ) , I P ( V ; U ) , H P ( S | V , U ) , Λ m i n ) R e . Please note that I P ( V ; U ) is the maximum error exponent achievable for any type I error probability constraint ϵ ( 0 , 1 ) , even when U n is observed directly at the detector. Thus, for vanishing type I error probability constraint ϵ 0 and κ = I P ( V ; U ) , the term H P ( S | V , U ) denotes the maximum achievable equivocation for S n under the null hypothesis. From the proof of Proposition 1, the coding scheme achieving this tuple is as follows:
  • Quantize u n to codewords in B n = { u n ( j ) T [ P U ] δ n , j [ e n ( H P ( U ) + η ) ] } and send the index of quantization to the detector, i.e., if u n T [ P U ] δ n , send M = j , where j is the index of u n in B n . Else, send M = 0 .
  • At the detector, if M = 0 , declare H ^ = 1 . Else, declare H ^ = 0 if ( u n ( M ) , v n ) T [ P U V ] δ n for some δ > δ , and H ^ = 1 otherwise.
The type I error probability of the above scheme tends to zero asymptotically with n. Now, for a fixed ϵ * > 0 , consider a modification of this coding scheme as follows:
  • If u n T [ P U ] δ n , send M = j with probability 1 ϵ * , where j is the index of u n in B n , and with probability ϵ * , send M = 0 . If u n T [ P U ] δ n , send M = 0 .
  • At the detector, if M = 0 , declare H ^ = 1 . Else, declare H ^ = 0 if u n ( M ) , v n ) T [ P U V ] δ n for some δ > δ , and H ^ = 1 otherwise.
It is easy to see that for this modified coding scheme, the type I error probability is asymptotically equal to ϵ * , while the error exponent remains the same as I ( V ; U ) since the probability of declaring H ^ = 0 is decreased. Recalling that Π ( u n , δ , P U ) : = 1 u n T [ P U ] δ n , we also have
1 n H S n | M , V n , H = 0 = ( 1 γ n ) ( 1 ϵ * ) 1 n H S n | U n , V n , Π ( U n , δ , P U ) = 0 , H = 0 + ( 1 γ n ) ϵ * 1 n H S n | M = 0 , V n , Π ( U n , δ , P U ) = 0 , H = 0 + γ n 1 n H S n | M = 0 , V n , Π ( U n , δ , P U ) = 1 , H = 0 ( 1 γ n ) ( 1 ϵ * ) H P S | U , V γ n + ( 1 γ n ) ϵ * 1 n H S n | M = 0 , V n , Π ( U n , δ , P U ) = 0 , H = 0 + γ n 1 n H S n | M = 0 , V n , Π ( U n , δ , P U ) = 1 , H = 0 > ( 1 γ n ) ( 1 ϵ * ) H P S | U , V γ n + ( 1 γ n ) ϵ * H P ( S | U , V ) γ n n
+ γ n 1 n H S n | M = 0 , V n , H = 0 , Π ( U n , δ , P U ) = 1
= ( 1 γ n ) ( 1 ϵ * ) H P S | U , V γ n + ( 1 γ n ) ϵ * H P S | U , V γ n n + γ n
= ( 1 γ n ) H P S | U , V γ ¯ n ,
where { γ n } n N denotes some sequence of positive numbers such that γ n ( n ) 0 , and
γ n : = P U n T [ P U ] δ n | H = 0 e n Ω ( δ ) ( n ) 0 , γ n : = 2 ρ n * log 2 ρ n * | S | n ,
ρ n * : = P S n V n | Π ( U n , δ , P U ) , M ( · | 0 , 0 ) P S n V n ( · ) = P S n V n | Π ( U n , δ , P U ) ( · | 0 ) P S n V n ( · ) ,
γ n : = γ n n H ( S n | M = 0 , V n , , H = 0 , Π ( U n , δ , P U ) = 1 ) ( n ) 0 , γ ¯ n : = ( 1 γ n ) ( 1 ϵ * ) γ n + ( 1 γ n ) ϵ * γ n n γ n .
Equation (91) follows similarly to the proof of Theorem 1 in [71]. Equation (92) is obtained as follows:
1 n H S n | M = 0 , V n , I U ( U n , δ ) = 0 , H = 0
1 n H S n | V n , H = 0 γ n n
> H P ( S | U , V ) γ n n .
Here, (98) is obtained by an application of Lemma 3; and (99) is due to the assumption that H P ( S | U , V ) < H P ( S | V ) .
It follows from Lemma 4 that ρ n * ( n ) 0 , which in turn implies that
γ n n ( n ) 0 .
From (95), (97) and (100), we have that γ ¯ n ( n ) 0 . Hence, Equation (94) implies that ( H P ( U ) , I P ( V ; U ) , Λ 0 * , Λ m i n ) R e ( ϵ * ) for some Λ 0 * > H P ( S | U , V ) . Since ( H P ( U ) , I P ( V ; U ) , Λ 0 * , Λ m i n ) R e , this implies that in general, the strong converse does not hold for HT with an equivocation privacy constraint. The same counter-example can be used in a similar manner to show that the strong converse does not hold for HT with an average distortion privacy constraint either.

6. Conclusions

We have studied the DHT problem with a privacy constraint, with equivocation and average distortion under a causal disclosure assumption as the measures of privacy. We have established a single-letter inner bound on the rate-error exponent-equivocation and rate-error exponent-distortion trade-offs. We have also obtained the optimal rate-error exponent-equivocation and rate-error exponent-distortion trade-offs for two special cases, when the communication rate over the channel is zero, and for TACI under a privacy constraint. It is interesting to note that the strong converse for DHT does not hold when there is an additional privacy constraint in the system. Extending these results to the case when the communication between the observer and detector takes place over a noisy communication channel is an interesting avenue for future research. Yet another important topic worth exploring is the trade-off between rate, error probability and privacy in the finite sample regime for the setting considered in this paper.

Author Contributions

Conceptualization, S.S., A.C. and D.G.; writing—original draft preparation, S.S.; supervision, A.C. and D.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research was partially funded by the European Research Council Starting Grant project BEACON (grant agreement number 677854).

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
HTHypothesis testing
DHTDistributed hypothesis testing
TACITesting against conditional independence
TAITesting against independence
DPDifferential privacy
KLKullback–Leibler
SHAShimokawa-Han-Amari

Appendix A. Proof of Lemma 1

Please note that for a stochastic detector, the type I and type II error probabilities are linear functions of P H ^ | M , V n . As a result, for each fixed n and f n , α n f n , g n and β n f n , g n for a stochastic detector g n can be thought of as the type I and type II errors achieved by “time-sharing” among a finite number of deterministic detectors. To see this, consider some ordering on the elements of the set M × V n and let ν i : = P H ^ | M , V n ( 0 | i ) , i [ 1 : N ] , where i denotes the i t h element of M × V n and N = | M × V n | . Then, we can write
P H ^ | M , V n = ν 1 1 ν 1 ν 2 1 ν 2 ν N 1 ν N .
Then, it is easy to see that P H ^ | M , V n = i = 1 N ν i I i , where I i : = [ e i 1 e i ] and e i is an N length vector with 1 at the i t h component and 0 elsewhere. Now, suppose ( α n ( 1 ) , β n ( 1 ) ) and ( α n ( 2 ) , β n ( 2 ) ) denote the pair of type I and type II error probabilities achieved by deterministic detectors g n ( 1 ) and g n ( 2 ) , respectively. Let A 1 , n and A 2 , n denote their corresponding acceptance regions for H 0 . Let g n ( θ ) denote the stochastic detector formed by using g n ( 1 ) and g n ( 2 ) with probabilities θ and 1 θ , respectively. From the above-mentioned linearity property, it follows that g n ( θ ) achieves type I and type II error probabilities of α n f n , g n ( θ ) = θ α n ( 1 ) + ( 1 θ ) α n ( 2 ) and β n f n , g n ( θ ) = θ β n ( 1 ) + ( 1 θ ) β n ( 2 ) , respectively. Let r ( θ ) = min ( θ , 1 θ ) . Then, for θ ( 0 , 1 ) ,
1 n log β n f n , g n ( θ ) min 1 n log β n ( 1 ) , 1 n log β n ( 2 ) 1 n log ( r ( θ ) ) .
Hence, either
α n ( 1 ) α n f n , g n ( θ ) a n d 1 n log β n ( 1 ) 1 n log β n f n , g n ( θ ) + 1 n log ( r ( θ ) ) ,
or
α n ( 2 ) α n f n , g n ( θ ) a n d 1 n log β n ( 2 ) 1 n log β n f n , g n ( θ ) + 1 n log ( r ( θ ) ) .
Thus, since 1 n log ( r ( θ ) ) ( n ) 0 , a stochastic detector does not offer any advantage over deterministic detectors in the trade-off between the error exponent and the type I error probability.

Appendix B. Proof of Lemma 2

Let P ˜ S n U n V n M S ^ n ( C n , 0 ) = P S n U n V n M i = 1 n P ˜ S ^ i | M , V n , S i 1 and P ˜ S n U n V n M S ^ n ( C n , 1 ) = Q S n U n V n M i = 1 n P ˜ S ^ i | M , V n , S i 1 denote the joint distribution of the r.v.’s ( S n , U n , V n , M , S ^ n ) under hypothesis H 0 and H 1 , respectively, where P ˜ S ^ i | M , V n , S i 1 denotes g i , n ( r ) for i [ n ] . Then, we have
min g i , n ( r ) E d S n , S ^ n | H = j = min P ˜ S ^ i | M , V n , S i 1 i = 1 n E P ˜ ( j ) d S n , S ^ n = min P ˜ S ^ i | M , V n , S i 1 i = 1 n 1 n i = 1 n E P ˜ ( j ) d S i , S ^ i = 1 n i = 1 n ( m , v n , s i 1 ) P ˜ M V n S i 1 ( j ) ( m , v n , s i 1 ) min P ˜ S ^ i | M , V n , S i 1 ( · | m , v n , s i 1 ) s ^ i P ˜ S ^ i | M , V n , S i 1 ( s ^ i | m , v n , s i 1 ) E P ˜ S i | M , V n , S i 1 ( j ) ( · | m , v n , s i 1 ) d S i , s ^ i = 1 n i = 1 n m , v n , s i 1 P ˜ M V n S i 1 ( j ) ( m , v n , s i 1 ) E P ˜ S i | M , V n , S i 1 ( j ) ( · | m , v n , s i 1 ) d S i , ϕ i j ( m , v n , s i 1 ) ,
where
ϕ i j ( m , v n , s i 1 ) = arg min s ^ S ^ E P ˜ S i | M , V n , S i 1 ( j ) ( · | m , v n , s i 1 ) d ( S i , s ^ ) .
Continuing, we have
min g i , n ( r ) E d S n , S ^ n | H = j = 1 n i = 1 n m , v n , s i 1 P ˜ M V n S i 1 ( j ) ( m , v n , s i 1 ) min ϕ i ( m , v n , s i 1 ) E P ˜ S i | M , V n , S i 1 ( j ) ( · | m , v n , s i 1 ) d S i , ϕ i ( m , v n , s i 1 ) = min { ϕ i ( m , v n , s i 1 ) } i = 1 n 1 n i = 1 n E P ˜ ( j ) d S i , ϕ i ( M , V n , S i 1 ) .
This completes the proof.

Appendix C. Proof of Lemma 4

We will first prove (15). Fix δ > 0 . For γ > 0 , define the following sets:
B 0 , γ δ : = y n T [ P Y ] γ n : P Y n ( y n ) P Y n | Π ( X n , δ , P X ) ( y n | 0 ) ,
C 0 , γ δ : = y n T [ P Y ] γ n : P Y n ( y n ) < P Y n | Π ( X n , δ , P X ) ( y n | 0 ) , B 1 , γ δ : = y n T [ Q Y ] γ n : Q Y n ( y n ) Q Y n | Π ( X n , δ , P X ) ( y n | 0 ) , C 1 , γ δ : = y n T [ Q Y ] γ n : Q Y n ( y n ) < Q Y n | Π ( X n , δ , P X ) ( y n | 0 ) , B 2 , γ δ : = y n T [ Q Y ] γ n : Q Y n ( y n ) Q Y n | Π ( X n , δ , P X ) ( y n | 1 ) , C 2 , γ δ : = y n T [ Q Y ] γ n : Q Y n ( y n ) < Q Y n | Π ( X n , δ , P X ) ( y n | 1 ) .
Then, we can write
Q Y n ( · ) Q Y n | Π ( X n , δ , P X ) ( · | 1 ) = 1 2 y n | Q Y n ( y n ) Q Y n | Π ( X n , δ , P X ) ( y n | 1 ) | = 1 2 y n T [ Q Y ] γ n | Q Y n ( y n ) Q Y n | Π ( X n , δ , P X ) ( y n | 1 ) | + 1 2 y n T [ Q Y ] γ n | Q Y n ( y n ) Q Y n | Π ( X n , δ , P X ) ( y n | 1 ) | 1 2 y n T [ Q Y ] γ n Q Y n ( y n ) + Q Y n | Π ( X n , δ , P X ) ( y n | 1 ) + 1 2 y n T [ Q Y ] γ n | Q Y n ( y n ) Q Y n | Π ( X n , δ , P X ) ( y n | 1 ) | .
Next, note that
Q Y n | Π ( X n , δ , P X ) ( y n | 1 ) = Q Y n ( y n ) Q Π ( X n , δ , P X ) | Y n ( 1 | y n ) Q Π ( X n , δ , P X ) ( 1 ) Q Y n ( y n ) Q Π ( X n , δ , P X ) ( 1 ) 2 Q Y n ( y n ) ,
for sufficiently large n (depending on | X | ), since Q Π ( X n , δ , P X ) ( 1 ) ( n ) 1 . Thus, for n large enough,
y n T [ Q Y ] γ n Q Y n ( y n ) + Q Y n | Π ( X n , δ , P X ) ( y n | 1 ) 3 y n T [ Q Y ] γ n Q Y n ( y n ) e n Ω ( γ ) .
We can bound the last term in (A4) as follows:
y n T [ Q Y ] γ n | Q Y n ( y n ) Q Y n | Π ( X n , δ , P X ) ( y n | 1 ) | = y n B 2 , γ δ Q Y n ( y n ) Q Y n | Π ( X n , δ , P X ) ( y n | 1 ) + y n C 2 , γ δ Q Y n | Π ( X n , δ , P X ) ( y n | 1 ) Q Y n ( y n ) = y n B 2 , γ δ Q Y n ( y n ) Q Y n | Π ( X n , δ , P X ) ( y n | 1 ) + y n C 2 , γ δ Q Y n | Π ( X n , δ , P X ) ( y n | 1 ) Q Y n ( y n ) = y n B 2 , γ δ Q Y n ( y n ) 1 Q Y n | Π ( X n , δ , P X ) ( y n | 1 ) Q Y n ( y n ) + y n C 2 , γ δ Q Y n ( y n ) Q Y n | Π ( X n , δ , P X ) ( y n | 1 ) Q Y n ( y n ) 1 = y n B 2 , γ δ Q Y n ( y n ) 1 Q Π ( X n , δ , P X ) | Y n ( 1 | y n ) Q Π ( X n , δ , P X ) ( 1 ) + y n C 2 , γ δ Q Y n ( y n ) Q Π ( X n , δ , P X ) | Y n ( 1 | y n ) Q Π ( X n , δ , P X ) ( 1 ) 1
y n B 2 , γ δ Q Y n ( y n ) 1 Q Π ( X n , δ , P X ) | Y n ( 1 | y n ) + y n C 2 , γ δ Q Y n ( y n ) 1 Q Π ( X n , δ , P X ) ( 1 ) 1 .
Let P Y ˜ denote the type of y n and define
E n ( δ , γ ) : = min P Y ˜ P n T [ Q Y ] γ n min P X ˜ P n T [ P X ] δ n D ( P X ˜ | Y ˜ | | Q X | Y | P Y ˜ ) .
Then, for y n T [ Q Y ] γ n , arbitrary γ ˜ > 0 and n sufficiently large (depending on | X | , | Y | , δ , γ ), it follows from ([60], Lemma 2.6) that
Q Π ( X n , δ , P X ) | Y n ( 1 | y n ) 1 e n E n ( δ , γ ) γ ˜ ,
and   Q Π ( X n , δ , P X ) ( 1 ) 1 e n ( D ( P X | | Q X ) γ ˜ ) .
From (A4), (A6) and (A8)–(A10), it follows that
Q Y n ( · ) Q Y n | Π ( X n , δ , P X ) ( · | 1 ) e n Ω ( γ ) + e n E n ( δ , γ ) γ ˜ + e n ( D ( P X | | Q X ) γ ˜ ) .
We next show that E n ( δ , γ ) > 0 for sufficiently small δ > 0 and γ > 0 . This would imply that the R.H.S of (A11) converges exponentially to zero (for γ ˜ small enough) with exponent δ ¯ : = min Ω ( γ ) , E n ( δ , γ ) γ ˜ , D ( P X | | Q X ) γ ˜ , thus proving (15). We can write,
E n ( δ , γ ) min P Y ˜ T [ Q Y ] γ n min P X ˜ T [ P X ] δ n D ( P X ˜ | | Q ^ X )
2 min P Y ˜ T [ Q Y ] γ n min P X ˜ T [ P X ] δ n P X ˜ Q ^ X 2 ,
where
Q ^ X ( x ) : = y P Y ˜ ( y ) Q X | Y ( x | y ) .
Here, (A12) follows due to the convexity of KL divergence (A13) is due to Pinsker’s inequality [60]. We also have from the triangle inequality satisfied by total variation that,
P X ˜ Q ^ X P X Q X P X ˜ P X Q ^ X Q X .
For y n T [ Q Y ] γ n ,
Q ^ X Q X Q X | Y P Y ˜ Q X Y P Y ˜ Q Y O ( γ ) .
Also, for P X ˜ T [ P X ] δ n ,
P X ˜ P X O ( δ ) .
Hence,
E n ( δ , γ ) 2 P X Q X O ( γ ) O ( δ ) 2 .
Since P X Q X , E n ( δ , γ ) > 0 for sufficiently small γ > 0 and δ > 0 . This completes the proof of (15).
We next prove (17). Similar to (A4) and (A5), we have,
P Y n ( · ) P Y n | Π ( X n , δ , P X ) ( · | 0 ) 1 2 y n T [ P Y ] γ n P Y n ( y n ) + P Y n | Π ( X n , δ , P X ) ( y n | 0 ) + 1 2 y n T [ P Y ] γ n | P Y n ( y n ) P Y n | Π ( X n , δ , P X ) ( y n | 0 ) | ,
and
P Y n | Π ( X n , δ , P X ) ( y n | 0 ) 2 P Y n ( y n ) ,
since P Π ( X n , δ , P X ) ( 0 ) ( n ) 1 .
Also, for γ < δ | Y | and sufficiently large n (depending on δ , γ , | X | , | Y | ), we have
y n T [ P Y ] γ n | P Y n ( y n ) P Y n | Π ( X n , δ , P X ) ( y n | 0 ) | = y n B 0 , γ δ P Y n ( y n ) P Y n | Π ( X n , δ , P X ) ( y n | 0 ) + y n C 0 , γ δ P Y n | Π ( X n , δ , P X ) ( y n | 0 ) P Y n ( y n ) y n B 0 , γ δ P Y n ( y n ) 1 P Π ( X n , δ , P X ) | Y n ( 0 | y n ) + y n C 0 , γ δ P Y n ( y n ) 1 P Π ( X n , δ , P X ) ( 0 ) 1 y n B 0 , γ δ P Y n ( y n ) e n Ω ( δ γ | Y | ) + y n C 0 , γ δ P Y n ( y n ) e n Ω ( δ )
e n Ω ( δ γ | Y | ) ,
where to obtain (A16), we used
P Π ( X n , δ , P X ) ( 0 ) 1 e n Ω ( δ ) ,
and   P Π ( X n , δ , P X ) | Y n ( 0 | y n ) 1 e n Ω ( δ γ | Y | ) , f o r y n B 0 , γ δ a n d γ < δ | Y | .
Here, (A18) follows from ([60], Lemma 2.12), and (A19) follows from ([60], Lemmas 2.10 and 2.12), respectively. Thus, from (A14), (A15) and (A17), we can write that,
P Y n ( · ) P Y n | Π ( X n , δ , P X ) ( · | 0 ) e n Ω ( γ ) + e n Ω ( δ γ | Y | ) ( n ) 0 .
This completes the proof of (17). The proof of (16) is exactly the same as (17), with the only difference that the sets B 1 , γ δ and C 1 , γ δ are used in place of B 0 , γ δ and C 0 , γ δ , respectively.

Appendix D. Proof of Theorems 1 and 2

We describe the encoding and decoding operations which are the same for both Theorems 1 and 2. Fix positive numbers (small) η , δ > 0 , and let δ : = δ 2 , δ ^ : = | U | δ , δ ˜ : = 2 δ and δ ¯ : = δ | V | .
Codebook Generation: Fix a finite alphabet W and a conditional distribution P W | U . Let B n = W n ( j ) , j [ M n ] , M n : = e n ( I P ( U : W ) + η ) , denote a random codebook such that each W n ( j ) is randomly and independently generated according to distribution i = 1 n P W ( w i ) , where
P W ( w ) = u U P U ( u ) P W | U ( w | u ) .
Denote a realization of B n by B n and the support of B n by B n .
Encoding: For a given codebook B n , let
P E u ( B n ) ( j | u n ) : = i = 1 n P U | W ( u i | w i ( j ) ) j i = 1 n P U | W ( u i | w i ( j ) ) ) ,
denote the likelihood encoding function. If I P ( U ; W ) + η + | U | | W | log ( n + 1 ) n > R , the observer performs uniform random binning on the indices in M n , i.e., for each j M n , it selects an index uniformly at random from the set M ˜ n : = e n R | U | | W | log ( n + 1 ) n . Denote the random binning function by f B and a realization of it by f B . If I P ( U ; W ) + η + | U | | W | log ( n + 1 ) n R , set f B as the identity function with probability one, i.e., f B ( j ) = j . If u n T [ P U ] δ n , then the observer outputs the message m = ( t , f B ( j ) ) if I P ( U ; W ) + η + | U | | W | log ( n + 1 ) n > R or m = ( t , j ) otherwise, where j [ M n ] is chosen randomly with probability P E u ( B n ) ( j | u n ) and t denotes the index of the joint type of ( u n , w n ( j ) ) in the set of types P n ( U × W ) . If u n T [ P U ] δ n , the observer outputs the error message M = 0 . Please note that | M | e n R since the total number of types in P n ( U × W ) is upper bounded by ( n + 1 ) | U | | W | ([60], Lemma 2.2). Let C n : = ( B n , f B ) , and let C n = ( B n , f B ) and μ n ( · ) denote its realization and probability distribution, respectively. For a given C n , let f n ( C n ) represent the encoder induced by the above operations, where f n ( C n ) : U n P ( M ) and M : = [ e n R ] .
Decoding: If M = 0 or t T [ P U W ] δ n , H ^ = 1 is declared. Else, given m = ( t , f B ( j ) ) and V n = v n , the detector decodes for a codeword w ^ n : = w n ( j ^ ) T [ P W ] δ ^ n in the codebook B n such that
j ^ = arg min l : f B ( l ) = f B ( j ) , w n ( l ) T [ P W ] δ ^ n H e ( w n ( l ) | v n ) , i f I P ( U ; W ) + η + 1 n | U | | W | log ( n + 1 ) > R , j ^ = j , otherwise .
Denote the above decoding rule by P ED ( C n ) , where P ED ( C n ) : M × V n J . The detector declares H ^ = 0 if ( w ^ n , v n ) T [ P W V ] δ ˜ n and H ^ = 1 otherwise. Let g n ( C n ) : M × V n H ^ stand for the decision rule induced by the above operations.
System induced distributions and auxiliary distributions:
The system induced probability distribution when H = 0 is given by
P ˜ ( C n , 0 ) ( s n , u n , v n , j , w n , m , j ^ , w ^ n ) = i = 1 n P S U V ( s i , u i , v i , z i ) P E u ( B n ) ( j | u n ) 1 ( w n ( j ) = w n ) 1 ( f B ( j ) = m ) 1 j ^ = P ED ( C n ) ( m , v n )
1 ( w n ( j ^ ) = w ^ n ) , if   u n T [ P U ] δ n ,
and
P ˜ ( C n , 0 ) ( s n , u n , v n , m ) = i = 1 n P S U V ( s i , u i , v i ) 1 ( m = 0 ) , if   u n T [ P U ] δ n .
Consider two auxiliary distribution Ψ ˜ and Ψ given by
Ψ ˜ ( C n , 0 ) ( s n , u n , v n , j , w n , m , j ^ , w ^ n ) : = i = 1 n P S U V ( s i , u i , v i ) P E u ( B n ) ( j | u n ) 1 ( w n ( j ) = w n ) 1 ( f B ( j ) = m ) 1 j ^ = P ED ( C n ) ( m , v n ) 1 ( w n ( j ^ ) = w ^ n ) ,
and
Ψ ( C n , 0 ) ( s n , u n , v n , j , w n , m , j ^ , w ^ n ) : = 1 M n 1 ( w n ( j ) = w n ) i = 1 n P U | W ( u i | w i ) i = 1 n P V S | U ( v i , s i | u i ) 1 ( f B ( j ) = m ) 1 j ^ = P ED ( C n ) ( m , v n ) 1 ( w n ( j ^ ) = w ^ n ) .
Let P ˜ ( C n , 1 ) and Ψ ˜ ( C n , 1 ) denote probability distributions under H = 1 defined by the R.H.S. of (A21)–(A23) with P S U V replaced by Q S U V , and let Ψ ( C n , 1 ) denote the R.H.S. of (A24) with P V S | U replaced by Q V S | U . Please note that the encoder f n ( C n ) is such that P E u ( B n ) ( j | u n ) = Ψ ( C n , 0 ) ( j | u n ) and hence, the only difference between the joint distribution Ψ ( C n , 0 ) and Ψ ˜ ( C n , 0 ) is the marginal distribution of U n . By the soft-covering lemma [62,64], it follows that for some γ 1 > 0 ,
E μ n Ψ U n ( C n , 0 ) Ψ ˜ U n ( C n , 0 ) e n γ 1 ( n ) 0 .
Hence, from ([43], Property 2(d)), it follows that
E μ n Ψ ( C n , 0 ) Ψ ˜ ( C n , 0 ) e n γ 1 .
Also, note that the only difference between the distributions P ˜ ( C n , 0 ) and Ψ ˜ ( C n , 0 ) is P E u ( B n ) when U n T [ P U ] δ n . Since
P U n T [ P U ] δ n | H = 0 e n Ω ( δ ) ,
it follows that
E μ n P ˜ ( C n , 0 ) Ψ ˜ ( C n , 0 ) e n Ω ( δ ) .
Equations (A26) and (A28) together imply via ([43], Property 2(c)) that
E μ n P ˜ ( C n , 0 ) Ψ ( C n , 0 ) e n Ω ( δ ) + e n γ 1 ( n ) 0 .
Please note that for l { 0 , 1 } , the joint distribution Ψ ( C n , l ) satisfies
S i ( w i ( J ) , V i ) ( M , w n ( J ) , V n , S i 1 ) , i [ n ] .
Also, since I P ( U ; W ) + η > 0 , by the application of soft-covering lemma,
E μ n i = 1 n P W Ψ W i ( J ) ( C n , l ) | H = l e γ 3 n ( n ) 0 , l = 0 , 1 ,
for some γ 3 > 0 .
If Q U = P U , then it again follows from the soft-covering lemma that
E μ n Ψ U n ( C n , 1 ) Ψ ˜ U n ( C n , 1 ) e γ 1 n ( n ) 0 ,
thereby implying that
E μ n Ψ ( C n , 1 ) Ψ ˜ ( C n , 1 ) e γ 1 n .
Also, note that the only difference between the distributions P ˜ ( C n , 1 ) and Ψ ˜ ( C n , 1 ) is P E u ( B n ) when U n T [ P U ] δ n . Since Q U = P U implies P U n T [ P U ] δ n | H = 1 e n Ω ( δ ) , it follows that
E μ n P ˜ ( C n , 1 ) Ψ ˜ ( C n , 1 ) e n Ω ( δ ) .
Equations (A33) and (A34) together imply that
E μ n P ˜ ( C n , 1 ) Ψ ( C n , 1 ) e n Ω ( δ ) + e γ 1 n ( n ) 0 .
Let P ¯ P ˜ ( C n , 0 ) = E μ n P P ˜ ( C n , 0 ) and P ¯ P ˜ ( C n , 1 ) = E μ n P P ˜ ( C n , 1 ) denote the expected probability measure (random coding measure) induced by PMF’s P ˜ ( C n , 0 ) and P ˜ ( C n , 1 ) , respectively. Then, note that from (A24), (A29), (A31) and the weak law of large numbers,
P ¯ P ˜ ( C n , 0 ) U n , W n ( J ) T [ P U W ] δ n 1 e n Ω ( δ ) ( n ) 1 .
Analysis of type I and type II error probabilities:
We analyze type I and type II error probabilities of the coding scheme mentioned above averaged over the random ensemble C n .
Type I error probability:
Please note that a type I error occurs only if one of the following events occur:
E TE = ( U n , V n ) T [ P U V ] δ ¯ n , E SE = T P n T [ P U W ] δ n , E ME = V n , W n ( J ) T [ P V W ] δ ˜ n , E DE = { l e n ( I P ( U ; W ) + η ) , l J : f B ( l ) = f B ( J ) , W n ( l ) T [ P W ] δ ^ n , H e W n ( l ) | V n H e W n ( J ) | V n } .
Let E : = E TE E SE E ME E DE . Then, the expected type I error probability over C n be upper bounded as
E μ n α n f n ( C n ) , g n ( C n ) P ¯ P ˜ ( C n , 0 ) ( E ) .
Please note that P ¯ P ˜ ( C n , 0 ) ( E TE ) tends to 0 asymptotically by the weak law of large numbers. From (A36), P ¯ P ˜ ( C n , 0 ) ( E SE ) ( n ) 0 . Given E SE c and E TE c holds, it follows from the Markov chain relation V U W and the Markov lemma [68] that P ¯ P ˜ ( C n , 0 ) ( E ME ) ( n ) 0 . Also, as in the proof of Theorem 2 in [13], it follows that
P ¯ P ˜ ( C n , 0 ) ( E DE | V n = v n , W n ( J ) = w n , E ME c E SE c E TE c ) e n R I P ( U ; W | V ) δ n ( 1 ) ,
where δ n ( 1 ) ( n ) η + O ( δ ) . Thus, if R > I P ( U ; W | V ) , it follows by choosing η = O ( δ ) that for δ > 0 small enough, the R.H.S. of (A38) tends to zero asymptotically. By the union bound on probability, the R.H.S. of (A37) tends to zero.
Type II error probability:
Let δ = | W | δ ˜ . Please note that a type II error occurs only if V n T [ P V ] δ n and M 0 , i.e., U n T [ P U ] δ n and T T [ P U W ] δ n . Hence, we can restrict the type II error analysis to only such ( U n , V n ) . Denoting the event that a type II error occurs by D 0 , we have
E μ n β n f n ( C n ) , g n ( C n ) = u n , v n P ¯ P ˜ ( C n , 1 ) ( U n = u n , V n = v n ) P ¯ P ˜ ( C n , 1 ) ( D 0 | U n = u n , V n = v n ) .
Let E NE : = E SE c V n T [ V ] δ n U n T [ U ] δ n . The last term in (A39) can be upper bounded as follows:
P ¯ P ˜ ( C n , 1 ) ( D 0 | U n = u n , V n = v n ) = P ¯ P ˜ ( C n , 1 ) ( E NE | U n = u n , V n = v n ) P ¯ P ˜ ( C n , 1 ) ( D 0 | U n = u n , V n = v n , E NE ) P ¯ P ˜ ( C n , 1 ) ( D 0 | U n = u n , V n = v n , E NE ) = j , m ˜ P ¯ P ˜ ( C n , 1 ) ( J = j , f B ( J ) = m ˜ | U n = u n , V n = v n , E NE ) P ¯ P ˜ ( C n , 1 ) ( D 0 | U n = u n , V n = v n , J = j , f B ( J ) = m ˜ , E NE )
= P ¯ P ˜ ( C n , 1 ) ( D 0 | U n = u n , V n = v n , J = 1 , f B ( J ) = 1 , E NE ) = w n W n P ¯ P ˜ ( C n , 1 ) ( W n ( 1 ) = w n | U n = u n , V n = v n , J = 1 , f B ( J ) = 1 , E NE )
P ¯ P ˜ ( C n , 1 ) ( D 0 | U n = u n , V n = v n , J = 1 , f B ( J ) = 1 , W n ( 1 ) = w n , E NE ) .
where (A41) follows since the term in (A40) is independent of the indices ( j , m ˜ ) due to the symmetry of the codebook generation, encoding and decoding procedure. The first term in (A42) can be upper bounded as
P ¯ P ˜ ( C n , 1 ) ( W n ( 1 ) = w n | U n = u n , V n = v n , J = 1 , f B ( J ) = 1 , E NE ) 1 | T P W ˜ | U ˜ | e n ( H ( W ˜ | U ˜ ) 1 n | U | | W | log ( n + 1 ) ) .
To obtain (A43), we used the fact that P E u ( B n ) ( 1 | u n ) in (A20) is invariant to the joint type P U ˜ W ˜ of ( U n , W n ( 1 ) ) = ( u n , w n ) (keeping all the other codewords fixed). This in turn implies that given E NE , each sequence in the conditional type class T P W ˜ | U ˜ ( u n ) is equally likely (in the randomness induced by B n and stochastic encoding in (A20)) and its probability is upper bounded by 1 | T P W ˜ | U ˜ | . Defining the events
E BE : = l M n , l J , f B ( l ) = M , W n ( l ) ) T [ P W ] δ ^ n , ( V n , W n ( l ) ) T [ P V W ] δ ˜ n ,
F : = { U n = u n , V n = v n , J = 1 , f B ( J ) = 1 , W n ( 1 ) = w n , E NE } ,
F 1 : = { U n = u n , V n = v n , J = 1 , f B ( J ) = 1 , W n ( 1 ) = w n , E NE , E BE c } ,
and   F 2 : = { U n = u n , V n = v n , J = 1 , f B ( J ) = 1 , W n ( 1 ) = w n , E NE , E BE } ,
the last term in (A42) can be written as
P ¯ P ˜ ( C n , 1 ) ( D 0 | F ) = P ¯ P ˜ ( C n , 1 ) ( E BE c | F ) P ¯ P ˜ ( C n , 1 ) ( D 0 | F 1 ) + P ¯ P ˜ ( C n , 1 ) ( E BE | F ) P ¯ P ˜ ( C n , 1 ) ( D 0 | F 2 ) .
The analysis of the terms in (A48) is essentially similar to that given in the proof of Theorem 2 in [13], except for a subtle difference that we mention next. To bound the binning error event E BE , we require an upper bound similar to
P ¯ P ˜ ( C n , 1 ) W n ( l ) = w ˜ n | F 2 P ¯ P ˜ ( C n , 1 ) ( W n ( l ) = w ˜ n ) , w ˜ n W n ,
that is used in the proof of Theorem 2 in [13]. Please note that the stochastic encoding scheme considered here is different from the encoding scheme in [13]. In place (A49), we will show that for l 1 ,
P ¯ P ˜ ( C n , 1 ) ( W n ( l ) = w ˜ n | F ) 3 P ¯ P ˜ ( C n , 1 ) ( W n ( l ) = w ˜ n ) ,
which suffices for the proof. Please note that
P ¯ P ˜ ( C n , 1 ) ( W n ( l ) = w ˜ n | F ) = P ¯ P ˜ ( C n , 1 ) ( W n ( l ) = w ˜ n | U n = u n , V n = v n ) P ¯ P ˜ ( C n , 1 ) ( W n ( 1 ) = w n | W n ( l ) = w ˜ n , U n = u n , V n = v n ) P ¯ P ˜ ( C n , 1 ) ( W n ( 1 ) = w n | U n = u n , V n = v n ) P ¯ P ˜ ( C n , 1 ) ( J = 1 | W n ( 1 ) = w n , W n ( l ) = w ˜ n , U n = u n , V n = v n ) P ¯ P ˜ ( C n , 1 ) ( J = 1 | W n ( 1 ) = w n , U n = u n , V n = v n ) P ¯ P ˜ ( C n , 1 ) ( f B ( J ) = 1 | J = 1 , W n ( 1 ) = w n , W n ( l ) = w ˜ n , U n = u n , V n = v n ) P ¯ P ˜ ( C n , 1 ) ( f B ( J ) = 1 | J = 1 , W n ( 1 ) = w n , U n = u n , V n = v n )
P ¯ P ˜ ( C n , 1 ) ( E NE | f B ( J ) = 1 , J = 1 , W n ( 1 ) = w n , W n ( l ) = w ˜ n , U n = u n , V n = v n ) P ¯ P ˜ ( C n , 1 ) ( E NE | f B ( J ) = 1 , J = 1 , W n ( 1 ) = w n , U n = u n , V n = v n )
Since the codewords are generated independently of each other and the binning operation is done independent of the codebook generation, we have
P ¯ P ˜ ( C n , 1 ) ( W n ( 1 ) = w n | W n ( l ) = w ˜ n , U n = u n , V n = v n ) = P ¯ P ˜ ( C n , 1 ) ( W n ( 1 ) = w n | U n = u n , V n = v n ) ,
and
P ¯ P ˜ ( C n , 1 ) ( f B ( J ) = 1 | J = 1 , W n ( 1 ) = w n , W n ( l ) = w ˜ n , U n = u n , V n = v n ) = P ¯ P ˜ ( C n , 1 ) ( f B ( J ) = 1 | J = 1 , W n ( 1 ) = w n , U n = u n , V n = v n ) .
Also, note that
P ¯ P ˜ ( C n , 1 ) ( E NE | f B ( J ) = 1 , J = 1 , W n ( 1 ) = w n , W n ( l ) = w ˜ n , U n = u n , V n = v n ) = P ¯ P ˜ ( C n , 1 ) ( E NE | f B ( J ) = 1 , J = 1 , W n ( 1 ) = w n , U n = u n , V n = v n ) .
Next, consider the term in (A51). Let
F : = { W n ( 1 ) = w n , U n = u n , V n = v n } , F : = { W n ( 1 ) = w n , W n ( l ) = w ˜ n , U n = u n , V n = v n } .
Then, the numerator and denominator of (A51) can be written as
P ¯ P ˜ ( C n , 1 ) ( J = 1 | F ) = E μ n i = 1 n P U | W ( u i | w i ) i = 1 n P U | W ( u i | w i ) + i = 1 n P U | W ( u i | w ˜ i ) + j 1 , l i = 1 n P U | W ( u i | W i ( j ) ) E μ n i = 1 n P U | W ( u i | w i ) i = 1 n P U | W ( u i | w i ) + j 1 , l i = 1 n P U | W ( u i | W i ( j ) ) ,
and
P ¯ P ˜ ( C n , 1 ) ( J = 1 | F ) = E μ n i = 1 n P U | W ( u i | w i ) i = 1 n P U | W ( u i | w i ) + j 1 i = 1 n P U | W ( u i | W i ( j ) ) ,
respectively. The R.H.S. of (A56) (resp. (A57)) denote the average probability that J = 1 is chosen by P E u ( B n ) given W n ( 1 ) = w n , U n = u n and M n 2 (resp. M n 1 ) other independent codewords in B n . Let
E l : = i = 1 n P U | W ( u i | W i ( l ) ) max i = 1 n P U | W ( u i | W i ( j ) ) , j M n { 1 } i = 1 n P U | W ( u i | w i ) .
Please note that
E μ n | E l c i = 1 n P U | W ( u i | w i ) i = 1 n P U | W ( u i | w i ) + j 1 i = 1 n P U | W ( u i | W i ( j ) ) 1 2 E μ n | E l c i = 1 n P U | W ( u i | w i ) i = 1 n P U | W ( u i | w i ) + j 1 , l i = 1 n P U | W ( u i | W i ( j ) ) .
Hence, denoting by μ ¯ n the probability measure induced by μ n , we have
P ¯ P ˜ ( C n , 1 ) ( J = 1 | F ) P ¯ P ˜ ( C n , 1 ) ( J = 1 | F ) E μ n i = 1 n P U | W ( u i | w i ) i = 1 n P U | W ( u i | w i ) + j 1 , l i = 1 n P U | W ( u i | W i ( j ) ) E μ n i = 1 n P U | W ( u i | w i ) i = 1 n P U | W ( u i | w i ) + j 1 i = 1 n P U | W ( u i | W i ( j ) ) E μ n i = 1 n P U | W ( u i | w i ) i = 1 n P U | W ( u i | w i ) + j 1 , l i = 1 n P U | W ( u i | W i ( j ) ) μ ¯ n ( E l c ) E μ n | E l c i = 1 n P U | W ( u i | w i ) i = 1 n P U | W ( u i | w i ) + j 1 i = 1 n P U | W ( u i | W i ( j ) ) E μ n i = 1 n P U | W ( u i | w i ) i = 1 n P U | W ( u i | w i ) + j 1 , l i = 1 n P U | W ( u i | W i ( j ) ) 1 2 μ ¯ n ( E l c ) E μ n | E l c i = 1 n P U | W ( u i | w i ) i = 1 n P U | W ( u i | w i ) + j 1 , l i = 1 n P U | W ( u i | W i ( j ) )
= E μ n i = 1 n P U | W ( u i | w i ) i = 1 n P U | W ( u i | w i ) + j 1 , l i = 1 n P U | W ( u i | W i ( j ) ) 1 2 E μ n i = 1 n P U | W ( u i | w i ) i = 1 n P U | W ( u i | w i ) + j 1 , l i = 1 n P U | W ( u i | W i ( j ) ) 1 2 μ ¯ n ( E l ) E μ n | E l i = 1 n P U | W ( u i | w i ) i = 1 n P U | W ( u i | w i ) + j 1 , l i = 1 n P U | W ( u i | W i ( j ) )
E μ n i = 1 n P U | W ( u i | w i ) i = 1 n P U | W ( u i | w i ) + j 1 , l i = 1 n P U | W ( u i | W i ( j ) ) 1 2 E μ n i = 1 n P U | W ( u i | w i ) i = 1 n P U | W ( u i | w i ) + j 1 , l i = 1 n P U | W ( u i | W i ( j ) ) 1 2 μ ¯ n ( E l )
E μ n i = 1 n P U | W ( u i | w i ) i = 1 n P U | W ( u i | w i ) + j 1 , l i = 1 n P U | W ( u i | W i ( j ) ) 1 2 E μ n i = 1 n P U | W ( u i | w i ) i = 1 n P U | W ( u i | w i ) + j 1 , l i = 1 n P U | W ( u i | W i ( j ) ) e e n ( I P ( U ; W ) + η )
2 + o ( 1 ) 3 ,
where (A59) is due to (A58); (A61) is since the term within E μ n | E l [ · ] in (A60) is upper bounded by one; (A62) is since μ ¯ n ( E l ) e e n ( I P ( U ; W ) + η ) for some η > 0 which follows similar to ([68], Section 3.6.3), and (A63) follows since the term within the expectation which is exponential in order dominates the double exponential term. From (A52)–(A55), (A63) and (A50) follows. The analysis of the other terms in (A48) is the same as in the SHA scheme in [7], and results in the error exponent (within an additive O ( δ ) term) claimed in the Theorem. We refer the reader to ([13], Theorem 2) for a detailed proof (In [13], the communication channel between the observer and the detector is a DMC. However, since the coding scheme used in the achievability part of Theorem 2 in [13] is a separation-based scheme, the error exponent when the channel is noiseless can be recovered by setting E 3 ( · ) and E 4 ( · ) in Theorem 2 to ). By the random coding argument followed by the standard expurgation technique [72] (see ([13], Proof of Theorem 2)), there exists a deterministic codebook and binning function pair C n = ( B n , f B ) such that the type I and type II error probabilities are within a constant multiplicative factor of their average values over the random ensemble C n , and
S i ( w i ( J ) , V i ) ( M , w n ( J ) , V n , S i 1 ) , i [ n ] ,
P ˜ ( C n , 0 ) Ψ ( C n , 0 ) e γ 4 n ,
P ˜ ( C n , 1 ) Ψ ( C n , 1 ) e γ 4 n , i f Q U = P U ,
and   i = 1 n P W Ψ w i ( J ) ( C n , l ) e γ 5 n , l = 0 , 1 ,
where γ 4 and γ 5 are some positive numbers. Since the average type I error probability for our scheme tends to zero asymptotically, and the error exponent is unaffected by a constant multiplicative scaling of the type II error probability, this codebook achieves the same type I error probability and error exponent as the average over the random ensemble. Using this deterministic codebook for encoding and decoding, we first lower bound the equivocation and average distortion of S n at the detector as follows:
First consider the equivocation of S n under the null hypothesis.
H P ˜ ( C n , 0 ) ( S n | M , V n ) P P ˜ ( C n , 0 ) ( M 0 ) H ( S n | M 0 , V n )
( 1 e n Ω ( δ ) ) H P ˜ ( C n , 0 ) ( S n | M 0 , V n )
( 1 e n Ω ( δ ) ) H P ˜ ( C n , 0 ) ( S n | w n ( J ) , V n )
= ( 1 e n Ω ( δ ) ) H P ˜ ( C n , 0 ) ( S n | w n ( J ) , V n )
( 1 e n Ω ( δ ) ) H Ψ ( C n , 0 ) ( S n | w n ( J ) , V n ) 2 e γ 4 n log | S | n | V | n e γ 4 n
= i = 1 n H Ψ ( C n , 0 ) ( S i | w i ( J ) , V i ) e n Ω ( δ ) i = 1 n H Ψ ( C n , 0 ) ( S i | w i ( J ) , V i ) o ( 1 )
i = 1 n H Ψ ( C n , 0 ) ( S i | w i ( J ) , V i ) n e n Ω ( δ ) H P ( S | V ) o ( 1 )
= i = 1 n H Ψ ( C n , 0 ) ( S i | w i ( J ) , V i ) o ( 1 )
= n H P ( S | W , V ) o ( 1 ) .
Here, (A68) follows from (A27); (A69) follows since M is a function of w n ( J ) for a deterministic codebook; (A71) follows from (A65) and Lemma 3; (A72) follows from (A24); and (A75) follows from (A67) and Ψ S i V i | w i ( 0 ) = P S V | W ( 0 ) , i [ n ] .
If Q U = P U , it follows similarly to above that
H P ˜ ( C n , 1 ) ( S n | M , V n ) 1 e n Ω ( δ ) H Ψ ( C n , 1 ) ( S n | w n ( J ) , V n ) 2 e γ 4 n log | S | n | V | n e γ 4 n
= i = 1 n H Ψ ( C n , 1 ) ( S i | w i ( J ) , V i ) e n Ω ( δ ) i = 1 n H Ψ ( C n , 1 ) ( S i | w i ( J ) , V i ) o ( 1 )
i = 1 n H Ψ ( C n , 1 ) ( S i | w i ( J ) , V i ) n e n Ω ( δ ) H Q ( S | V ) o ( 1 )
= i = 1 n H Ψ ( C n , 1 ) ( S i | w i ( J ) , V i ) o ( 1 )
= n H Q ( S | W , V ) o ( 1 ) .
Finally, consider the case H = 1 and Q U P U . We have for δ small enough that
P P ˜ ( C n , 1 ) M = 0 = P P ˜ ( C n , 1 ) U n T [ P U ] δ n 1 e n ( D ( P U | | Q U ) O ( δ ) ) ( n ) 1 .
Hence, for δ small enough, we can write
H P ˜ ( C n , 1 ) ( S n | M , V n ) H P ˜ ( C n , 1 ) ( S n | M , V n , Π ( U n , δ , P U ) )
1 e n ( D ( P U | | Q U ) O ( δ ) ) H P ˜ ( C n , 1 ) ( S n | M , V n , Π ( U n , δ , P U ) = 1 )
= 1 e n ( D ( P U | | Q U ) O ( δ ) ) H P ˜ ( C n , 1 ) ( S n | V n , Π ( U n , δ , P U ) = 1 )
1 e n ( D ( P U | | Q U ) O ( δ ) ) H P ˜ ( C n , 1 ) ( S n | V n ) o ( 1 )
= n H Q ( S | V ) n e n ( D ( P U | | Q U ) O ( δ ) ) H Q ( S | V ) o ( 1 ) = n H Q ( S | V ) o ( 1 ) .
Here, (A82) follows from (A81); (A83) follows since Π ( U n , δ , P U ) = 1 implies M = 0 ; (A84) follows from Lemma 3 and (15). Thus, since δ > 0 is arbitrary, we have shown that for ϵ ( 0 , 1 ) , ( R , κ , Λ 0 , Λ 1 ) R e ( ϵ ) if (18)–(21) holds.
On the other hand, average distortion of S n at the detector can be lower bounded under H = 0 as follows:
min g i , n ( r ) E d S n , S ^ n | H = 0
= min ϕ ¯ i , n ( m , v n , s i 1 ) i = 1 n E P ˜ ( C n , 0 ) i = 1 n d S i , ϕ ¯ i ( m , v n , s i 1 )
min ϕ ¯ i ( m , v n , s i 1 ) i = 1 n E Ψ ( C n , 0 ) i = 1 n d ( S i , ϕ ¯ i ( m , v n , s i 1 ) ) n e n γ 4 D m
min ϕ ¯ i ( · , · ) i = 1 n E Ψ ( C n , 0 ) i = 1 n d ( S i , ϕ ¯ i ( w i ( J ) , V i ) ) n e n γ 4 D m
n min ϕ ( · , · ) E P d ( S , ϕ ( W , V ) ) n e n γ 4 + e n γ 5 D m
= n min ϕ ( · , · ) i = 1 n E P d ( S , ϕ ( W , V ) ) o ( 1 ) .
Here, (A86) follows from Lemma 2; (A87) follows from ([43], Property 2(b)) due to (A65) and boundedness of distortion measure; (A88) follows from the Markov chain in (A64); (A89) follows from (A67) and the fact that Ψ S i V i | w i ( J ) ( 0 ) = P S V | W ( 0 ) , i [ n ] .
Next, consider the case H = 1 and Q U = P U . Then, similarly to above, we can write
min g i , n ( r ) E d S n , S ^ n | H = 1 = min ϕ ¯ i ( m , v n , s i 1 ) i = 1 n E P ˜ ( C n , 1 ) i = 1 n d S i , ϕ i ( M , V n , S i 1 ) min ϕ ¯ i ( m , v n , s i 1 ) i = 1 n E Ψ ( C n , 1 ) i = 1 n d ( S i , ϕ i ( M , V n , S i 1 ) ) n e n γ 4 D m
min ϕ i ( · , · ) i = 1 n E Ψ ( C n , 1 ) i = 1 n d ( S i , ϕ i ( w i , V i ) ) n e n γ 4 D m
n min ϕ ( · , · ) i = 1 n E Q d ( S , ϕ ( W , V ) ) n ( e n γ 4 + e n γ 5 ) D m .
= n min ϕ ( · , · ) i = 1 n E Q d ( S , ϕ ( W , V ) ) o ( 1 ) .
If H = 1 and Q U P U , we have
min g i , n ( r ) E d S n , S ^ n | H = 1 P P ˜ ( C n , 1 ) M = 0 | H = 1 min ϕ ¯ i ( m , v n , s i 1 ) i = 1 n i = 1 n E P ˜ ( C n , 1 ) d S i , ϕ i ( 0 , V n , S i 1 ) P P ˜ ( C n , 1 ) M = 0 | H = 1 min ϕ i ( v ) i = 1 n E Q i = 1 n d ( S i , ϕ i ( V i ) ) D m o ( 1 )
= n min ϕ ( · ) E Q d ( S , ϕ ( V ) ) o ( 1 ) .
Here, (A96) follows from (15) in Lemma 4 and (A96) follows from (A81). Thus, since δ > 0 is arbitrary, we have shown that ( R , κ , Δ 0 , Δ 1 ) R d ( ϵ ) , ϵ ( 0 , 1 ) , provided that (18), (19), (24) and (25) are satisfied. This completes the proof of the theorem.

Appendix E. Proof of Lemma 5

Consider the | U | + 2 functions of P U | W ,
P U ( u i ) = w W P W ( w ) P U | W ( u i | w ) , i = 1 , 2 , , | U | 1 ,
H P ( U | W , Z ) = w P W ( w ) g 1 ( P U | W , w ) ,
H P ( Y | W , Z ) = w P W ( w ) g 2 ( P U | W , w ) ,
H P ( S | W , Y , Z ) = w P W ( w ) g 3 ( P U | W , w ) ,
where
g 1 ( P U | W , w ) = u , z P U | W ( u | w ) P Z | U ( z | u ) log P U | W ( u | w ) P Z | U ( z | u ) u P U | W ( u | w ) P Z | U ( z | u ) , g 2 ( P U | W , w ) = y , z , u P U | W ( u | w ) P Y Z | U ( y , z | u ) log u P U | W ( u | w ) P Y Z | U ( y , z | u ) u P U | W ( u | w ) P Z | U ( z | u ) , g 3 ( P U | W , w ) = s , y , z , u P U | W ( u | w ) P S Y Z | U ( s , y , z | u ) log u P U | W ( u | w ) P S Y Z | U ( s , y , z | u ) u P U | W ( u | w ) P Y Z | U ( y , z | u ) .
Thus, by the Fenchel–Eggleston–Carathéodory’s theorem [68], it is sufficient to have at most | U | 1 points in the support of W to preserve P U and three more to preserve H P ( U | W , Z ) , H P ( Y | W , Z ) and H P ( S | W , Z , Y ) . Noting that H P ( Y | Z ) and H P ( U | Z ) are automatically preserved since P U is preserved (and ( Y , Z , S ) U W holds), | W | = | U | + 2 points are sufficient to preserve the R.H.S. of Equations (28)–(30). This completes the proof for the case of R e . Similarly, considering the | U | + 1 functions of P W | U given in (A97)–(A99) and
E P d S , ϕ ( W , Y , Z ) = w P W ( w ) g 4 ( w , P W | U ) ,
where
g 4 ( w , P W | U ) = s , u , y , z P U | W ( u | w ) P Y Z S | U ( y , z , s | u ) d ( s , ϕ ( w , y , z ) ) ,
similar result holds also for the case of R d .

References

  1. Appari, A.; Johnson, E. Information security and privacy in healthcare: Current state of research. Int. J. Internet Enterp. Manag. 2010, 6, 279–314. [Google Scholar] [CrossRef]
  2. Gross, R.; Acquisti, A. Information revelation and privacy in online social networks. In Proceedings of the ACM workshop on Privacy in Electronic Society, Alexandria, VA, USA, 7 November 2005; pp. 71–80. [Google Scholar]
  3. Miyazaki, A.; Fernandez, A. Consumer Perceptions of Privacy and Security Risks for Online Shopping. J. Consum. Aff. 2001, 35, 27–44. [Google Scholar] [CrossRef]
  4. Giaconi, G.; Gündüz, D.; Poor, H.V. Privacy-Aware Smart Metering: Progress and Challenges. IEEE Signal Process. Mag. 2018, 35, 59–78. [Google Scholar] [CrossRef] [Green Version]
  5. Ahlswede, R.; Csiszár, I. Hypothesis Testing with Communication Constraints. IEEE Trans. Inf. Theory 1986, 32, 533–542. [Google Scholar] [CrossRef] [Green Version]
  6. Han, T.S. Hypothesis Testing with Multiterminal Data Compression. IEEE Trans. Inf. Theory 1987, 33, 759–772. [Google Scholar] [CrossRef]
  7. Shimokawa, H.; Han, T.S.; Amari, S. Error Bound of Hypothesis Testing with Data Compression. In Proceedings of the IEEE International Symposium on Information Theory, Trondheim, Norway, 27 June–1 July 1994. [Google Scholar]
  8. Shalaby, H.M.H.; Papamarcou, A. Multiterminal Detection with Zero-Rate Data Compression. IEEE Trans. Inf. Theory 1992, 38, 254–267. [Google Scholar] [CrossRef]
  9. Zhao, W.; Lai, L. Distributed Testing Against Independence with Multiple Terminals. In Proceedings of the 52nd Annual Allerton Conference, Monticello, IL, USA, 30 September–3 October 2014; pp. 1246–1251. [Google Scholar]
  10. Katz, G.; Piantanida, P.; Debbah, M. Distributed Binary Detection with Lossy Data Compression. IEEE Trans. Inf. Theory 2017, 63, 5207–5227. [Google Scholar] [CrossRef] [Green Version]
  11. Rahman, M.S.; Wagner, A.B. On the Optimality of Binning for Distributed Hypothesis Testing. IEEE Trans. Inf. Theory 2012, 58, 6282–6303. [Google Scholar] [CrossRef] [Green Version]
  12. Sreekumar, S.; Gündüz, D. Distributed Hypothesis Testing Over Noisy Channels. In Proceedings of the IEEE International Symposium on Information Theory, Aachen, Germany, 25–30 June 2017; pp. 983–987. [Google Scholar]
  13. Sreekumar, S.; Gündüz, D. Distributed Hypothesis Testing Over Discrete Memoryless Channels. IEEE Trans. Inf. Theory 2020, 66, 2044–2066. [Google Scholar] [CrossRef]
  14. Salehkalaibar, S.; Wigger, M.; Timo, R. On Hypothesis Testing Against Conditional Independence with Multiple Decision Centers. IEEE Trans. Commun. 2018, 66, 2409–2420. [Google Scholar] [CrossRef] [Green Version]
  15. Salehkalaibar, S.; Wigger, M. Distributed Hypothesis Testing based on Unequal-Error Protection Codes. arXiv 2018, arXiv:1806.05533. [Google Scholar] [CrossRef]
  16. Han, T.S.; Kobayashi, K. Exponential-Type Error Probabilities for Multiterminal Hypothesis Testing. IEEE Trans. Inf. Theory 1989, 35, 2–14. [Google Scholar] [CrossRef]
  17. Haim, E.; Kochman, Y. On Binary Distributed Hypothesis Testing. arXiv 2018, arXiv:1801.00310. [Google Scholar]
  18. Weinberger, N.; Kochman, Y. On the Reliability Function of Distributed Hypothesis Testing Under Optimal Detection. IEEE Trans. Inf. Theory 2019, 65, 4940–4965. [Google Scholar] [CrossRef]
  19. Bayardo, R.; Agrawal, R. Data privacy through optimal k-anonymization. In Proceedings of the International Conference on Data Engineering, Tokyo, Japan, 5–8 April 2005; pp. 217–228. [Google Scholar]
  20. Agrawal, R.; Srikant, R. Privacy-preserving data mining. In Proceedings of the ACM SIGMOD International Conference on Management of Data, Dallas, TX, USA, 18–19 May 2000; pp. 439–450. [Google Scholar]
  21. Bertino, E. Big Data-Security and Privacy. In Proceedings of the IEEE International Congress on BigData, New York, NY, USA, 27 June–2 July 2015; pp. 425–439. [Google Scholar]
  22. Gertner, Y.; Ishai, Y.; Kushilevitz, E.; Malkin, T. Protecting Data Privacy in Private Information Retrieval Schemes. J. Comput. Syst. Sci. 2000, 60, 592–629. [Google Scholar] [CrossRef] [Green Version]
  23. Hay, M.; Miklau, G.; Jensen, D.; Towsley, D.; Weis, P. Resisting structural re-identification in anonymized social networks. J. Proc. VLDB Endow. 2008, 1, 102–114. [Google Scholar] [CrossRef] [Green Version]
  24. Narayanan, A.; Shmatikov, V. De-anonymizing Social Networks. In Proceedings of the IEEE Symposium on Security and Privacy, Berkeley, CA, USA, 17–20 May 2009. [Google Scholar]
  25. Liao, J.; Sankar, L.; Tan, V.; Calmon, F. Hypothesis Testing Under Mutual Information Privacy Constraints in the High Privacy Regime. IEEE Trans. Inf. Forensics Secur. 2018, 13, 1058–1071. [Google Scholar] [CrossRef]
  26. Liao, J.; Sankar, L.; Calmon, F.; Tan, V. Hypothesis testing under maximal leakage privacy constraints. In Proceedings of the IEEE International Symposium on Information Theory, Aachen, Germany, 25–30 June 2017. [Google Scholar]
  27. Gilani, A.; Amor, S.B.; Salehkalaibar, S.; Tan, V. Distributed Hypothesis Testing with Privacy Constraints. Entropy 2019, 21, 478. [Google Scholar] [CrossRef] [Green Version]
  28. Gündüz, D.; Erkip, E.; Poor, H.V. Secure lossless compression with side information. In Proceedings of the IEEE Information Theory Workshop, Porto, Portugal, 5–9 May 2008; pp. 169–173. [Google Scholar]
  29. Gündüz, D.; Erkip, E.; Poor, H.V. Lossless compression with security constraints. In Proceedings of the IEEE International Symposium on Information Theory, Toronto, ON, Canada, 6–11 July 2008; pp. 111–115. [Google Scholar]
  30. Mhanna, M.; Piantanida, P. On secure distributed hypothesis testing. In Proceedings of the IEEE International Symposium on Information Theory, Hong Kong, China, 14–19 June 2015; pp. 1605–1609. [Google Scholar]
  31. Sweeney, L. K-anonymity: A model for protecting privacy. Int. J. Uncertain. Fuzziness Knowl. Based Syst. 2002, 10, 557–570. [Google Scholar] [CrossRef] [Green Version]
  32. Dwork, C.; McSherry, F.; Nissim, K.; Smith, A. Calibrating Noise to Sensitivity in Private Data Analysis. In Theory of Cryptography; Springer: Berlin/Heidelberg, Germany, 2006; pp. 265–284. [Google Scholar]
  33. Calmon, F.; Fawaz, N. Privacy Against Statistical Inference. In Proceedings of the 50th Annual Allerton Conference, Illinois, IL, USA, 1–5 October 2012; pp. 1401–1408. [Google Scholar]
34. Makhdoumi, A.; Salamatian, S.; Fawaz, N.; Medard, M. From the information bottleneck to the privacy funnel. In Proceedings of the IEEE Information Theory Workshop, Hobart, Australia, 2–5 November 2014; pp. 501–505.
35. Calmon, F.; Makhdoumi, A.; Medard, M. Fundamental limits of perfect privacy. In Proceedings of the IEEE International Symposium on Information Theory, Hong Kong, China, 14–19 June 2015; pp. 1796–1800.
36. Issa, I.; Kamath, S.; Wagner, A.B. An Operational Measure of Information Leakage. In Proceedings of the Annual Conference on Information Science and Systems, Princeton, NJ, USA, 16–18 March 2016; pp. 1–6.
37. Rassouli, B.; Gündüz, D. Optimal Utility-Privacy Trade-off with Total Variation Distance as a Privacy Measure. IEEE Trans. Inf. Forensics Secur. 2019, 15, 594–603.
38. Wagner, I.; Eckhoff, D. Technical Privacy Metrics: A Systematic Survey. arXiv 2015, arXiv:1512.00327v1.
39. Goldwasser, S.; Micali, S. Probabilistic encryption. J. Comput. Syst. Sci. 1984, 28, 270–299.
40. Bellare, M.; Tessaro, S.; Vardy, A. Semantic Security for the Wiretap Channel. In Proceedings of Advances in Cryptology-CRYPTO 2012, Santa Barbara, CA, USA, 19–23 August 2012; pp. 294–311.
41. Yamamoto, H. A Rate-Distortion Problem for a Communication System with a Secondary Decoder to be Hindered. IEEE Trans. Inf. Theory 1988, 34, 835–842.
42. Tandon, R.; Sankar, L.; Poor, H.V. Discriminatory Lossy Source Coding: Side Information Privacy. IEEE Trans. Inf. Theory 2013, 59, 5665–5677.
43. Schieler, C.; Cuff, P. Rate-Distortion Theory for Secrecy Systems. IEEE Trans. Inf. Theory 2014, 60, 7584–7605.
44. Agarwal, G.K. On Information Theoretic and Distortion-based Security. Ph.D. Thesis, University of California, Los Angeles, CA, USA, 2019. Available online: https://escholarship.org/uc/item/7qs7z91g (accessed on 3 January 2020).
45. Li, Z.; Oechtering, T.; Gündüz, D. Privacy against a hypothesis testing adversary. IEEE Trans. Inf. Forensics Secur. 2019, 14, 1567–1581.
46. Cuff, P.; Yu, L. Differential privacy as a mutual information constraint. In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, Vienna, Austria, 24–28 October 2016; pp. 43–54.
47. Goldfeld, Z.; Cuff, P.; Permuter, H.H. Semantic-Security Capacity for Wiretap Channels of Type II. IEEE Trans. Inf. Theory 2016, 62, 3863–3879.
48. Sreekumar, S.; Bunin, A.; Goldfeld, Z.; Permuter, H.H.; Shamai, S. The Secrecy Capacity of Cost-Constrained Wiretap Channels. arXiv 2020, arXiv:2004.04330.
49. Kasiviswanathan, S.P.; Lee, H.K.; Nissim, K.; Raskhodnikova, S.; Smith, A. What can we learn privately? SIAM J. Comput. 2011, 40, 793–826.
50. Duchi, J.C.; Jordan, M.I.; Wainwright, M.J. Local Privacy and Statistical Minimax Rates. In Proceedings of the 2013 IEEE 54th Annual Symposium on Foundations of Computer Science, Berkeley, CA, USA, 26–29 October 2013; pp. 429–438.
51. Duchi, J.C.; Jordan, M.I.; Wainwright, M.J. Privacy Aware Learning. J. ACM 2014, 61, 1–57.
52. Wang, Y.; Lee, J.; Kifer, D. Differentially Private Hypothesis Testing, Revisited. arXiv 2015, arXiv:1511.03376.
53. Gaboardi, M.; Lim, H.; Rogers, R.; Vadhan, S. Differentially Private Chi-Squared Hypothesis Testing: Goodness of Fit and Independence Testing. In Proceedings of the 33rd International Conference on Machine Learning, New York, NY, USA, 19–24 June 2016; Volume 48, pp. 2111–2120.
54. Rogers, R.M.; Roth, A.; Smith, A.D.; Thakkar, O. Max-Information, Differential Privacy, and Post-Selection Hypothesis Testing. arXiv 2016, arXiv:1604.03924.
55. Cai, B.; Daskalakis, C.; Kamath, G. Priv'IT: Private and Sample Efficient Identity Testing. In Proceedings of the 34th International Conference on Machine Learning, Sydney, Australia, 6–11 August 2017; Volume 70, pp. 635–644.
56. Sheffet, O. Locally Private Hypothesis Testing. In Proceedings of the 35th International Conference on Machine Learning, Stockholm, Sweden, 10–15 July 2018; Volume 80, pp. 4605–4614.
57. Acharya, J.; Sun, Z.; Zhang, H. Differentially Private Testing of Identity and Closeness of Discrete Distributions. In Advances in Neural Information Processing Systems 31; Curran Associates Inc.: Red Hook, NY, USA, 2018; pp. 6878–6891.
58. Canonne, C.L.; Kamath, G.; McMillan, A.; Smith, A.; Ullman, J. The Structure of Optimal Private Tests for Simple Hypotheses. In Proceedings of the 51st Annual ACM SIGACT Symposium on Theory of Computing, Phoenix, AZ, USA, 23–26 June 2019; pp. 310–321.
59. Aliakbarpour, M.; Diakonikolas, I.; Kane, D.; Rubinfeld, R. Private Testing of Distributions via Sample Permutations. In Advances in Neural Information Processing Systems 32; Curran Associates Inc.: Red Hook, NY, USA, 2019; pp. 10878–10889.
60. Csiszár, I.; Körner, J. Information Theory: Coding Theorems for Discrete Memoryless Systems; Cambridge University Press: Cambridge, UK, 2011.
61. Wang, Y.; Basciftci, Y.O.; Ishwar, P. Privacy-Utility Tradeoffs under Constrained Data Release Mechanisms. arXiv 2017, arXiv:1710.09295.
62. Cuff, P. Distributed Channel Synthesis. IEEE Trans. Inf. Theory 2013, 59, 7071–7096.
63. Song, E.C.; Cuff, P.; Poor, H.V. The Likelihood Encoder for Lossy Compression. IEEE Trans. Inf. Theory 2016, 62, 1836–1849.
64. Wyner, A.D. The Common Information of Two Dependent Random Variables. IEEE Trans. Inf. Theory 1975, 21, 163–179.
65. Han, T.S.; Verdú, S. Approximation Theory of Output Statistics. IEEE Trans. Inf. Theory 1993, 39, 752–772.
66. Sreekumar, S.; Gündüz, D.; Cohen, A. Distributed Hypothesis Testing Under Privacy Constraints. In Proceedings of the IEEE Information Theory Workshop (ITW), Guangzhou, China, 25–29 November 2018; pp. 1–5.
67. Tishby, N.; Pereira, F.; Bialek, W. The Information Bottleneck Method. arXiv 2000, arXiv:physics/0004057.
68. El Gamal, A.; Kim, Y.H. Network Information Theory; Cambridge University Press: Cambridge, UK, 2011.
69. Polyanskiy, Y. Channel Coding: Non-Asymptotic Fundamental Limits. Ph.D. Thesis, Princeton University, Princeton, NJ, USA, 2010.
70. Yang, W.; Caire, G.; Durisi, G.; Polyanskiy, Y. Optimum Power Control at Finite Blocklength. IEEE Trans. Inf. Theory 2015, 61, 4598–4615.
71. Villard, J.; Piantanida, P. Secure Multiterminal Source Coding With Side Information at the Eavesdropper. IEEE Trans. Inf. Theory 2013, 59, 3668–3692.
72. Gallager, R.G. A simple derivation of the coding theorem and some applications. IEEE Trans. Inf. Theory 1965, 11, 3–18.
Figure 1. DHT with a privacy constraint.
Figure 2. (R, κ, Λ0) trade-off at the boundary of R_e in Example 1 (axes units are in bits).
Figure 3. Projections of Figure 2 onto the R–κ plane and the κ–Λ0 plane.