# Context-Aware Generative Adversarial Privacy


## Abstract


## 1. Introduction

**Context-free privacy.** One of the most popular context-free notions of privacy is differential privacy (DP) [18,19,20]. DP, quantified by a leakage parameter $\epsilon \in [0,\infty )$ (smaller $\epsilon$ implies less leakage and stronger privacy guarantees), restricts the distinguishability of any two “neighboring” datasets from the published data. DP provides strong, context-free theoretical guarantees against worst-case adversaries. However, training machine learning models on randomized data with DP guarantees often leads to significantly reduced utility and a substantial increase in sample complexity [21,22,23,24,25,26,27,28,29,30,31,32,33] in the desired leakage regimes. For example, learning population-level histograms under local DP suffers an increase in sample complexity by a factor proportional to the size of the dictionary [27,29,30].
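Warner's randomized response [56] is the canonical local DP mechanism and makes the utility cost concrete: each user's bit is flipped with a fixed probability, and the aggregator must invert the expected flipping, which inflates the variance of the resulting estimate. A minimal sketch (illustrative only; the function names are ours, not from the paper):

```python
import random
import math

def randomized_response(bit, epsilon, rng):
    """Report `bit` truthfully w.p. e^eps/(1+e^eps), else flip it: eps-local-DP."""
    keep_prob = math.exp(epsilon) / (1.0 + math.exp(epsilon))
    return bit if rng.random() < keep_prob else 1 - bit

def estimate_frequency(reports, epsilon):
    """Unbiased estimate of P(bit = 1) from the randomized reports."""
    p_keep = math.exp(epsilon) / (1.0 + math.exp(epsilon))
    mean = sum(reports) / len(reports)
    # Invert E[report] = p_keep * f + (1 - p_keep) * (1 - f) for f.
    return (mean - (1 - p_keep)) / (2 * p_keep - 1)

rng = random.Random(0)
true_f = 0.3
bits = [1 if rng.random() < true_f else 0 for _ in range(200000)]
reports = [randomized_response(b, 1.0, rng) for b in bits]
est = estimate_frequency(reports, 1.0)  # unbiased, but noisier than the direct average
```

The $1/(2p_{\text{keep}}-1)$ debiasing factor is exactly where the sample-complexity blow-up enters: as $\epsilon \to 0$ the factor diverges and many more samples are needed for the same estimation accuracy.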

**Context-aware privacy.** Context-aware privacy notions have so far been studied by information theorists under the rubric of information-theoretic (IT) privacy [34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54]. IT privacy has predominantly been quantified by mutual information (MI), which models how well an adversary with access to the released data can refine its belief about the private features of the data. Recently, Issa et al. introduced maximal leakage (MaxL) to quantify leakage to a strong adversary capable of guessing any function of the dataset [55]. They also showed that their adversarial model can be generalized to encompass local DP (wherein the mechanism ensures limited distinguishability for any pair of entries, a stronger DP notion without a neighborhood constraint [27,56]) [57]. When one restricts the adversary to guessing specific private features (and not all functions of these features), the resulting adversary is a maximum a posteriori (MAP) adversary, which has been studied by Asoodeh et al. in [52,53,58,59]. Context-aware data perturbation techniques have also been studied in privacy-preserving cloud computing [60,61,62].
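As a concrete instance of the MI metric, the leakage $I(Y;\widehat{X})$ of a simple binary mechanism can be computed directly from the joint distribution, and perturbing the release drives the leakage toward zero. A sketch under an assumed binary symmetric model (our toy construction, not from the paper):

```python
import math

def mutual_information(joint):
    """I(Y; Xhat) in bits from a 2x2 joint distribution joint[y][xhat]."""
    py = [sum(row) for row in joint]
    px = [sum(joint[y][x] for y in range(2)) for x in range(2)]
    mi = 0.0
    for y in range(2):
        for x in range(2):
            if joint[y][x] > 0:
                mi += joint[y][x] * math.log2(joint[y][x] / (py[y] * px[x]))
    return mi

def joint_after_flip(q, flip):
    """Y ~ Bern(1/2); X = Y flipped w.p. q; the release Xhat flips X again w.p. `flip`."""
    stay = (1 - q) * (1 - flip) + q * flip  # P(Xhat = Y)
    return [[0.5 * stay, 0.5 * (1 - stay)],   # row y = 0: Xhat = 0, 1
            [0.5 * (1 - stay), 0.5 * stay]]   # row y = 1

leak_raw = mutual_information(joint_after_flip(0.1, 0.0))    # no perturbation
leak_noisy = mutual_information(joint_after_flip(0.1, 0.3))  # randomized release
```

At flip probability $1/2$ the release is independent of $Y$ and the MI leakage is exactly zero; the privatizer's task is to trade this leakage off against distortion.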

**Generative adversarial privacy.** Given the challenges of existing privacy approaches, we take a fundamentally new approach towards enabling private data publishing with guarantees on both privacy and utility. Instead of adopting worst-case, context-free notions of data privacy (such as differential privacy), we introduce a novel context-aware model of privacy that allows the designer to cleverly add noise where it matters. An inherent challenge in taking a context-aware privacy approach is that it requires access to priors, such as the joint distribution of the public and private variables. Such information is hardly ever available in practice. To overcome this issue, we take a data-driven approach to context-aware privacy. We leverage recent advancements in generative adversarial networks (GANs) to introduce a unified framework for context-aware privacy called generative adversarial privacy (GAP). Under GAP, the parameters of a generative model, representing the privatization mechanism, are learned from the data itself.

#### 1.1. Our Contributions

#### 1.2. Related Work

#### 1.3. Outline

## 2. Generative Adversarial Privacy Model

#### 2.1. Formulation

#### 2.2. GAP under Various Loss Functions

**Hard Decision Rules.** When the adversary adopts a hard decision rule, $h(g(X,Y))$ is an estimate of $Y$. Under this setting, we can choose $\ell (h(g(X,Y)),Y)$ in a variety of ways. For instance, if $Y$ is continuous, the adversary can attempt to minimize the difference between the estimated and true private variable values. This can be achieved by considering the squared loss function
$$\ell (h(g(X,Y)),Y)={(Y-h(g(X,Y)))}^{2}.$$

**Soft Decision Rules.** Instead of a hard decision rule, we can also consider a broader class of soft decision rules in which $h(g(X,Y))$ is a distribution over $\mathcal{Y}$; i.e., $h(g(X,Y))={P}_{h}(y|g(X,Y))$ for $y\in \mathcal{Y}$. In this context, we can analyze the performance under the log-loss
$$\ell (h(g(X,Y)),y)=\log \frac{1}{{P}_{h}(y|g(X,Y))}.$$
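The two loss choices above can be written out directly. In this sketch (variable names ours), the hard-decision adversary pays a squared penalty on its point estimate, while the soft-decision adversary pays the log-loss of the belief it assigns to the true $Y$:

```python
import math

def squared_loss(y_hat, y):
    """Hard decision rule: squared distance between the estimate and the true value."""
    return (y_hat - y) ** 2

def log_loss(p_y1_given_xhat, y):
    """Soft decision rule: log(1 / P_h(y | xhat)) for the realized y in {0, 1}."""
    p = p_y1_given_xhat if y == 1 else 1.0 - p_y1_given_xhat
    return math.log(1.0 / p)

# A confident correct belief is barely penalized; a confident wrong one is penalized heavily.
small = log_loss(0.9, 1)  # belief 0.9 on Y=1, truth Y=1
large = log_loss(0.9, 0)  # same belief, truth Y=0
```

Minimizing the expected log-loss drives the adversary's belief toward the true posterior $P(y|\widehat{x})$, which is why the log-loss formulation connects the GAP game to MI-based leakage.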

#### 2.3. Data-Driven GAP

**Algorithm 1** Alternating minimax privacy preserving algorithm

**Input:** dataset $\mathcal{D}$, distortion parameter $D$, number of iterations $T$
**Output:** optimal privatizer parameter ${\theta}_{p}$
1. **procedure** AlternateMinimax($\mathcal{D},D,T$)
2. Initialize ${\theta}_{p}^{1}$ and ${\theta}_{a}^{1}$
3. **for** $t=1,\dots ,T$ **do**
4. Draw a random minibatch of $M$ datapoints $\{{x}_{(1)},\dots ,{x}_{(M)}\}$ from the full dataset
5. Generate $\{{\widehat{x}}_{(1)},\dots ,{\widehat{x}}_{(M)}\}$ via ${\widehat{x}}_{(i)}=g({x}_{(i)},{y}_{(i)};{\theta}_{p}^{t})$
6. Update the adversary parameter ${\theta}_{a}^{t+1}$:
$${\theta}_{a}^{t+1}=\arg \underset{{\theta}_{a}}{\max }-\frac{1}{M}\sum _{i=1}^{M}\ell (h({\widehat{x}}_{(i)};{\theta}_{a}),{y}_{(i)})$$
7. Compute the privatizer objective
$$\ell ({\theta}_{p},{\theta}_{a}^{t+1})=-\frac{1}{M}\sum _{i=1}^{M}\ell (h(g({x}_{(i)},{y}_{(i)};{\theta}_{p});{\theta}_{a}^{t+1}),{y}_{(i)})$$
8. Perform a line search along ${\nabla}_{{\theta}_{p}}\ell ({\theta}_{p},{\theta}_{a}^{t+1})$ and update
$${\theta}_{p}^{t+1}={\theta}_{p}^{t}-{\alpha}_{t}{\nabla}_{{\theta}_{p}}\ell ({\theta}_{p},{\theta}_{a}^{t+1}),\phantom{\rule{1.em}{0ex}}{\alpha}_{t}>0$$
9. **return** ${\theta}_{p}^{T+1}$
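Algorithm 1 can be prototyped in a few dozen lines. The sketch below is a toy instantiation under assumptions of ours (a scalar mean-shifting privatizer, a logistic adversary under log-loss, the distortion budget enforced as a box constraint rather than a penalty, and a finite-difference gradient in place of the line search); the privatizer step ascends the adversary's loss directly, which is equivalent to descending the negated objective in Algorithm 1:

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-max(min(z, 30.0), -30.0)))

def privatize(x, y, theta_p):
    # Toy privatizer: shift each sample toward the opposite class mean.
    return x - theta_p * (2 * y - 1)

def adversary_loss(theta_a, data, theta_p):
    """Average log-loss of a logistic adversary h(xhat) = P(Y=1 | xhat)."""
    total = 0.0
    for x, y in data:
        p = sigmoid(theta_a[0] + theta_a[1] * privatize(x, y, theta_p))
        p = min(max(p, 1e-12), 1 - 1e-12)
        total += -(y * math.log(p) + (1 - y) * math.log(1 - p))
    return total / len(data)

def train_gap(data, D, T=60, adv_steps=50, lr_a=0.5, lr_p=0.1):
    theta_p, theta_a = 0.0, [0.0, 0.0]
    for _ in range(T):
        # Inner maximization: refit the adversary to the current privatized data.
        for _ in range(adv_steps):
            grad = [0.0, 0.0]
            for x, y in data:
                xh = privatize(x, y, theta_p)
                p = sigmoid(theta_a[0] + theta_a[1] * xh)
                grad[0] += (p - y)
                grad[1] += (p - y) * xh
            theta_a = [theta_a[i] - lr_a * grad[i] / len(data) for i in range(2)]
        # Outer step: increase the adversary's loss via a finite-difference gradient.
        eps = 1e-4
        g = (adversary_loss(theta_a, data, theta_p + eps)
             - adversary_loss(theta_a, data, theta_p - eps)) / (2 * eps)
        theta_p = min(max(theta_p + lr_p * g, 0.0), D)  # distortion budget as box constraint
    return theta_p, theta_a

rng = random.Random(0)
data = [(y + rng.gauss(0, 0.5), y) for y in [rng.randint(0, 1) for _ in range(400)]]
theta_p, theta_a = train_gap(data, D=1.0)
final_loss = adversary_loss(theta_a, data, theta_p)
```

On this toy data (class-conditional means 0 and 1), the learned shift drives the class means together, pushing the adversary's log-loss toward the chance level $\ln 2\approx 0.69$.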

#### 2.4. Our Focus

## 3. Binary Data Model

#### 3.1. Theoretical Approach for Binary Data Model

#### 3.1.1. PDD Privacy Mechanism

#### 3.1.2. PDI Privacy Mechanism

**Theorem 1.**

- If $1-D>max\{p,1-p\}$, the optimal privacy mechanism is given by $\{{s}_{0},{s}_{1}|p{s}_{1}+(1-p){s}_{0}=1-D,{s}_{0},{s}_{1}\in [0,1]\}$. The adversary’s accuracy of correctly guessing the private variable is$$\begin{array}{c}\hfill \left\{\begin{array}{cc}(1-2q)(1-D)+q\hfill & \text{if }q<\frac{1}{2}\hfill \\ (2q-1)(1-D)+1-q\hfill & \text{if }q>\frac{1}{2}\hfill \end{array}\right..\end{array}$$
- Otherwise, the optimal privacy mechanism is given by $\{{s}_{0},{s}_{1}|max\{min\{p,1-p\},1-D\}\le p{s}_{1}+(1-p){s}_{0}\le max\{p,1-p\},{s}_{0},{s}_{1}\in [0,1]\}$ and the adversary’s accuracy of correctly guessing the private variable is$$\begin{array}{c}\hfill \left\{\begin{array}{cc}p(1-q)+(1-p)q\hfill & \text{if }p\ge \frac{1}{2},q<\frac{1}{2}\text{ or }p\le \frac{1}{2},q>\frac{1}{2}\hfill \\ pq+(1-p)(1-q)\hfill & \text{if }p\ge \frac{1}{2},q>\frac{1}{2}\text{ or }p\le \frac{1}{2},q<\frac{1}{2}\hfill \end{array}\right..\end{array}$$
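Theorem 1 can be sanity-checked numerically. Consistent with the proof in Appendix A, take $X\sim \mathrm{Bernoulli}(p)$, $Y$ equal to $X$ flipped with probability $q$, and the mechanism parameterized by ${s}_{1}=P(\widehat{X}=1|X=1)$ and ${s}_{0}=P(\widehat{X}=0|X=0)$, so the distortion constraint reads $p{s}_{1}+(1-p){s}_{0}\ge 1-D$. A grid search over feasible mechanisms then recovers the closed-form accuracy in the first regime (the brute-force search is ours, for verification only):

```python
import itertools

def map_accuracy(p, q, s0, s1):
    """MAP adversary accuracy: sum over xhat of max_y P(Y=y, Xhat=xhat)."""
    a, b = p * (1 - q), (1 - p) * q        # P(Y=1,X=1), P(Y=1,X=0)
    c, d = p * q, (1 - p) * (1 - q)        # P(Y=0,X=1), P(Y=0,X=0)
    py1_x1 = a * s1 + b * (1 - s0)
    py0_x1 = c * s1 + d * (1 - s0)
    py1_x0 = a * (1 - s1) + b * s0
    py0_x0 = c * (1 - s1) + d * s0
    return max(py1_x1, py0_x1) + max(py1_x0, py0_x0)

def best_mechanism_accuracy(p, q, D, steps=100):
    """Minimize MAP accuracy over mechanisms with p*s1 + (1-p)*s0 >= 1 - D."""
    grid = [i / steps for i in range(steps + 1)]
    return min(map_accuracy(p, q, s0, s1)
               for s0, s1 in itertools.product(grid, grid)
               if p * s1 + (1 - p) * s0 >= 1 - D - 1e-12)

p, q, D = 0.6, 0.3, 0.2                  # regime 1 - D > max{p, 1-p}, with q < 1/2
theory = (1 - 2 * q) * (1 - D) + q       # first branch of Theorem 1
numeric = best_mechanism_accuracy(p, q, D)
```

The grid minimum matches the theorem's closed form and is attained on the line $p{s}_{1}+(1-p){s}_{0}=1-D$, i.e., the privatizer spends its entire distortion budget.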

#### 3.2. Data-driven Approach for Binary Data Model

#### 3.3. Illustration of Results

## 4. Binary Gaussian Mixture Model

#### 4.1. Theoretical Approach for Binary Gaussian Mixture Model

- (a) Even though it is known that adding Gaussian noise is not the worst-case noise-adding mechanism for non-Gaussian $X$ [72], identifying the optimal noise distribution is mathematically intractable. Thus, for tractability and ease of analysis, we choose Gaussian noise.
- (b) Adding Gaussian noise to each data entry preserves the conditional Gaussianity of the released dataset.
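Point (b) is what makes the analysis tractable: if $X|Y$ is Gaussian, then $\widehat{X}=X+N$ with Gaussian $N$ is again conditionally Gaussian, so the MAP adversary's accuracy has a closed form in terms of the Gaussian tail function. A sketch for the symmetric two-component case, with parameters matching Figure 9 and an illustrative noise variance (the Monte Carlo check is ours):

```python
import math
import random

def Q(x):
    """Gaussian tail probability P(N(0,1) > x)."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def map_accuracy(mu, sigma2, noise_var):
    """Equal priors, means +/-mu: the MAP rule thresholds xhat at 0."""
    return 1 - Q(mu / math.sqrt(sigma2 + noise_var))

def monte_carlo_accuracy(mu, sigma2, noise_var, n=100000, seed=0):
    """Empirical accuracy of the threshold rule on noisy samples."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(n):
        y = rng.randint(0, 1)
        x = rng.gauss(mu if y == 1 else -mu, math.sqrt(sigma2))
        xhat = x + rng.gauss(0, math.sqrt(noise_var))
        correct += int((xhat > 0) == (y == 1))
    return correct / n

closed_form = map_accuracy(3, 1, 3)        # X|Y ~ N(+/-3, 1), noise variance 3
estimate = monte_carlo_accuracy(3, 1, 3)
```

Because the noise only inflates the conditional variance from ${\sigma}^{2}$ to ${\sigma}^{2}+{\sigma}_{n}^{2}$, the adversary's accuracy degrades smoothly as the noise power (and hence the distortion) grows.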

#### 4.1.1. PDI Gaussian Noise Adding Privacy Mechanism

**Theorem 2.**

#### 4.1.2. PDD Gaussian Noise Adding Privacy Mechanism

**Theorem 3.**

**Theorem 4.**

**Proof.**

**Proposition 1.**

**Proof.**

#### 4.2. Data-driven Approach for Binary Gaussian Mixture Model

#### 4.3. Illustration of Results

## 5. Concluding Remarks

## Acknowledgments

## Author Contributions

## Conflicts of Interest

## Appendix A. Proof of Theorem 1

**Proof.**

**Subproblem 1**: $P(Y=1,\widehat{X}=0)\ge P(Y=0,\widehat{X}=0)$ and $P(Y=1,\widehat{X}=1)\le P(Y=0,\widehat{X}=1)$, which implies $p(1-2q)(1-{s}_{1})-(1-p)(1-2q){s}_{0}\ge 0$ and $(1-p)(1-2q)(1-{s}_{0})-p(1-2q){s}_{1}\ge 0$. As a result, the objective of the privatizer is given by $P(Y=1,\widehat{X}=0)+P(Y=0,\widehat{X}=1)$. Thus, the optimization problem in (30) can be written as

- If $1-2q>0$, i.e., $q<\frac{1}{2}$, we have $p{s}_{1}+(1-p){s}_{0}\le p$ and $p{s}_{1}+(1-p){s}_{0}\le 1-p$. The privatizer must maximize $p{s}_{1}+(1-p){s}_{0}$ to reduce the adversary’s probability of correctly inferring the private variable. Thus, if $1-D\le min\{p,1-p\}$, the optimal value is given by $(2q-1)min\{p,1-p\}+1-q$; the corresponding optimal solution is given by $\{{s}_{0},{s}_{1}|p{s}_{1}+(1-p){s}_{0}$ $=min\{p,1-p\},0\le {s}_{0},{s}_{1}\le 1\}$. Otherwise, the problem is infeasible.
- If $1-2q<0$, i.e., $q>\frac{1}{2}$, we have $p{s}_{1}+(1-p){s}_{0}\ge p$ and $p{s}_{1}+(1-p){s}_{0}\ge 1-p$. In this case, the privatizer has to minimize $p{s}_{1}+(1-p){s}_{0}$. Thus, if $1-D\ge max\{p,1-p\}$, the optimal value is given by $(2q-1)(1-D)+1-q$; the corresponding optimal solution is $\{{s}_{0},{s}_{1}|p{s}_{1}+(1-p){s}_{0}=1-D,0\le {s}_{0},{s}_{1}\le 1\}$. Otherwise, the optimal value is $(2q-1)max\{p,1-p\}+1-q$ and the corresponding optimal solution is given by $\{{s}_{0},{s}_{1}|p{s}_{1}+(1-p){s}_{0}=max\{p,1-p\},0\le {s}_{0},{s}_{1}\le 1\}$.

**Subproblem 2**: $P(Y=1,\widehat{X}=0)\le P(Y=0,\widehat{X}=0)$ and $P(Y=1,\widehat{X}=1)\ge P(Y=0,\widehat{X}=1)$, which implies $p(1-2q)(1-{s}_{1})-(1-p)(1-2q){s}_{0}\le 0$ and $(1-p)(1-2q)(1-{s}_{0})-p(1-2q){s}_{1}\le 0$. Thus, the objective of the privatizer is given by $P(Y=0,\widehat{X}=0)+P(Y=1,\widehat{X}=1)$. Therefore, the optimization problem in (30) can be written as

- If $1-2q>0$, i.e., $q<\frac{1}{2}$, we have $p{s}_{1}+(1-p){s}_{0}\ge p$ and $p{s}_{1}+(1-p){s}_{0}\ge 1-p$. The privatizer needs to minimize $p{s}_{1}+(1-p){s}_{0}$ to reduce the adversary’s probability of correctly inferring the private variable. Thus, if $1-D\ge max\{p,1-p\}$, the optimal value is given by $(1-2q)(1-D)+q$; the corresponding optimal solution is $\{{s}_{0},{s}_{1}|p{s}_{1}+(1-p){s}_{0}=1-D,0\le {s}_{0},{s}_{1}\le 1\}$. Otherwise, the optimal value is $(1-2q)max\{p,1-p\}+q$ and the corresponding optimal solution is given by $\{{s}_{0},{s}_{1}|p{s}_{1}+(1-p){s}_{0}=max\{p,1-p\},0\le {s}_{0},{s}_{1}\le 1\}$.
- If $1-2q<0$, i.e., $q>\frac{1}{2}$, we have $p{s}_{1}+(1-p){s}_{0}\le p$ and $p{s}_{1}+(1-p){s}_{0}\le 1-p$. In this case, the privatizer needs to maximize $p{s}_{1}+(1-p){s}_{0}$. Thus, if $1-D\le min\{p,1-p\}$, the optimal value is given by $(1-2q)min\{p,1-p\}+q$; the corresponding optimal solution is given by $\{{s}_{0},{s}_{1}|p{s}_{1}+(1-p){s}_{0}=min\{p,1-p\},0\le {s}_{0},{s}_{1}\le 1\}$. Otherwise, the problem is infeasible.

**Subproblem 3**: $P(Y=1,\widehat{X}=0)\ge P(Y=0,\widehat{X}=0)$ and $P(Y=1,\widehat{X}=1)\ge P(Y=0,\widehat{X}=1)$, which implies $p(1-2q)(1-{s}_{1})-(1-p)(1-2q){s}_{0}\ge 0$ and $(1-p)(1-2q)(1-{s}_{0})-p(1-2q){s}_{1}\le 0$. Under this scenario, the objective function in (30) is given by $P(Y=1,\widehat{X}=0)+P(Y=1,\widehat{X}=1)$. Thus, the privatizer solves

- If $1-2q>0$, i.e., $q<\frac{1}{2}$, the problem becomes infeasible for $p<\frac{1}{2}$. For $p\ge \frac{1}{2}$, if $1-D>max\{p,1-p\}$, the problem is also infeasible; if $min\{p,1-p\}\le 1-D\le max\{p,1-p\}$, the optimal value is given by $p(1-q)+(1-p)q$ and the corresponding optimal solution is $\{{s}_{0},{s}_{1}|1-D\le p{s}_{1}+(1-p){s}_{0}\le max\{p,1-p\},0\le {s}_{0},{s}_{1}\le 1\}$; otherwise, the optimal value is $p(1-q)+(1-p)q$ and the corresponding optimal solution is given by $\{{s}_{0},{s}_{1}|min\{p,1-p\}\le p{s}_{1}+(1-p){s}_{0}\le max\{p,1-p\},0\le {s}_{0},{s}_{1}\le 1\}$.
- If $1-2q<0$, i.e., $q>\frac{1}{2}$, the problem is infeasible for $p>\frac{1}{2}$. For $p\le \frac{1}{2}$, if $1-D>max\{p,1-p\}$, the problem is also infeasible; if $min\{p,1-p\}\le 1-D\le max\{p,1-p\}$, the optimal value is given by $p(1-q)+(1-p)q$ and the corresponding optimal solution is $\{{s}_{0},{s}_{1}|1-D\le p{s}_{1}+(1-p){s}_{0}\le max\{p,1-p\},0\le {s}_{0},{s}_{1}\le 1\}$; otherwise, the optimal value is $p(1-q)+(1-p)q$ and the corresponding optimal solution is given by $\{{s}_{0},{s}_{1}|min\{p,1-p\}\le p{s}_{1}+(1-p){s}_{0}\le max\{p,1-p\},0\le {s}_{0},{s}_{1}\le 1\}$.

**Subproblem 4**: $P(Y=1,\widehat{X}=0)\le P(Y=0,\widehat{X}=0)$ and $P(Y=1,\widehat{X}=1)\le P(Y=0,\widehat{X}=1)$, which implies $p(1-2q)(1-{s}_{1})-(1-p)(1-2q){s}_{0}\le 0$ and $(1-p)(1-2q)(1-{s}_{0})-p(1-2q){s}_{1}\ge 0$. Thus, the optimization problem in (30) is given by

- If $1-2q>0$, i.e., $q<\frac{1}{2}$, the problem becomes infeasible for $p>\frac{1}{2}$. For $p\le \frac{1}{2}$, if $1-D>max\{p,1-p\}$, the problem is also infeasible; if $min\{p,1-p\}\le 1-D\le max\{p,1-p\}$, the optimal value is given by $pq+(1-p)(1-q)$ and the corresponding optimal solution is $\{{s}_{0},{s}_{1}|1-D\le p{s}_{1}+(1-p){s}_{0}\le max\{p,1-p\},0\le {s}_{0},{s}_{1}\le 1\}$; otherwise, the optimal value is $pq+(1-p)(1-q)$ and the corresponding optimal solution is given by $\{{s}_{0},{s}_{1}|min\{p,1-p\}\le p{s}_{1}+(1-p){s}_{0}\le max\{p,1-p\},0\le {s}_{0},{s}_{1}\le 1\}$.
- If $1-2q<0$, i.e., $q>\frac{1}{2}$, the problem becomes infeasible for $p<\frac{1}{2}$. For $p\ge \frac{1}{2}$, if $1-D>max\{p,1-p\}$, the problem is also infeasible; if $min\{p,1-p\}\le 1-D\le max\{p,1-p\}$, the optimal value is given by $pq+(1-p)(1-q)$ and the corresponding optimal solution is $\{{s}_{0},{s}_{1}|1-D\le p{s}_{1}+(1-p){s}_{0}\le max\{p,1-p\},0\le {s}_{0},{s}_{1}\le 1\}$; otherwise, the optimal value is $pq+(1-p)(1-q)$ and the corresponding optimal solution is given by $\{{s}_{0},{s}_{1}|min\{p,1-p\}\le p{s}_{1}+(1-p){s}_{0}\le max\{p,1-p\},0\le {s}_{0},{s}_{1}\le 1\}$.

## Appendix B. Proof of Theorem 2

**Proof.**

## Appendix C. Proof of Theorem 3

**Proof.**

## References

- Economist, T. The World’s Most Valuable Resource Is No Longer Oil, but Data; The Economist: New York, NY, USA, 2017.
- National Science and Technology Council Networking and Information Technology Research and Development Program. National Privacy Research Strategy; Technical Report; Executive Office of the President of the United States: Washington, DC, USA, 2016.
- EUGDPR. The EU General Data Protection Regulation (GDPR). Available online: http://www.eugdpr.org/ (accessed on 22 November 2017).
- Samarati, P.; Sweeney, L. Protecting Privacy When Disclosing Information: k-Anonymity and Its Enforcement through Generalization and Suppression; Technical Report; SRI International: Menlo Park, CA, USA, 1998.
- Sweeney, L. k-Anonymity: A model for protecting privacy. Int. J. Uncertain. Fuzziness Knowl. Based Syst. **2002**, 10, 557–570.
- Li, N.; Li, T.; Venkatasubramanian, S. t-Closeness: Privacy beyond k-anonymity and l-diversity. In Proceedings of the IEEE 23rd International Conference on Data Engineering, ICDE 2007, Istanbul, Turkey, 11–15 April 2007; pp. 106–115.
- Narayanan, A.; Shmatikov, V. Robust de-anonymization of large sparse datasets. In Proceedings of the IEEE Symposium on Security and Privacy (SP 2008), Oakland, CA, USA, 18–21 May 2008; pp. 111–125.
- Harmanci, A.; Gerstein, M. Quantification of private information leakage from phenotype-genotype data: Linking attacks. Nat. Methods **2016**, 13, 251–256.
- Sweeney, L.; Abu, A.; Winn, J. Identifying Participants in the Personal Genome Project by Name (A Re-identification Experiment). arXiv, 2013; arXiv:1304.7605.
- Finn, E.S.; Shen, X.; Scheinost, D.; Rosenberg, M.D.; Huang, J.; Chun, M.M.; Papademetris, X.; Constable, R.T. Functional connectome fingerprinting: Identifying individuals using patterns of brain connectivity. Nat. Neurosci. **2015**, 18, 1664–1671.
- LeFevre, K.; DeWitt, D.J.; Ramakrishnan, R. Incognito: Efficient full-domain k-anonymity. In Proceedings of the 2005 ACM SIGMOD International Conference on Management of Data, Baltimore, MD, USA, 13–17 June 2005; pp. 49–60.
- Bayardo, R.J.; Agrawal, R. Data privacy through optimal k-anonymization. In Proceedings of the 21st International Conference on Data Engineering (ICDE 2005), Tokyo, Japan, 5–8 April 2005; pp. 217–228.
- Fung, B.C.; Wang, K.; Philip, S.Y. Anonymizing classification data for privacy preservation. IEEE Trans. Knowl. Data Eng. **2007**, 19, 711–725.
- Iyengar, V.S. Transforming data to satisfy privacy constraints. In Proceedings of the Eighth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Edmonton, AB, Canada, 23–25 July 2002; pp. 279–288.
- Samarati, P. Protecting respondents identities in microdata release. IEEE Trans. Knowl. Data Eng. **2001**, 13, 1010–1027.
- Wang, K.; Fung, B.C.; Philip, S.Y. Handicapping attacker’s confidence: An alternative to k-anonymization. Knowl. Inf. Syst. **2007**, 11, 345–368.
- Fung, B.; Wang, K.; Chen, R.; Yu, P.S. Privacy-preserving data publishing: A survey of recent developments. ACM Comput. Surv. (CSUR) **2010**, 42, 14.
- Dwork, C. Differential privacy. In Proceedings of the 33rd International Colloquium (ICALP 2006), Venice, Italy, 10–14 July 2006.
- Dwork, C. Differential privacy: A survey of results. In Theory and Applications of Models of Computation: Lecture Notes in Computer Science; Springer: New York, NY, USA, 2008.
- Dwork, C.; Roth, A. The Algorithmic Foundations of Differential Privacy. Found. Trends Theor. Comput. Sci. **2014**, 9, 211–407.
- Fienberg, S.E.; Rinaldo, A.; Yang, X. Differential Privacy and the Risk-Utility Tradeoff for Multi-dimensional Contingency Tables. In Proceedings of the Privacy in Statistical Databases: UNESCO Chair in Data Privacy, International Conference, PSD 2010, Corfu, Greece, 22–24 September 2010; Domingo-Ferrer, J., Magkos, E., Eds.; Springer: Berlin/Heidelberg, Germany, 2010; pp. 187–199.
- Wang, Y.; Lee, J.; Kifer, D. Differentially private hypothesis testing, revisited. arXiv, 2015; arXiv:1511.03376.
- Uhlerop, C.; Slavković, A.; Fienberg, S.E. Privacy-preserving data sharing for genome-wide association studies. J. Priv. Confid. **2013**, 5, 137.
- Yu, F.; Fienberg, S.E.; Slavković, A.B.; Uhler, C. Scalable privacy-preserving data sharing methodology for genome-wide association studies. J. Biomed. Inform. **2014**, 50, 133–141.
- Karwa, V.; Slavković, A. Inference using noisy degrees: Differentially private β-model and synthetic graphs. Ann. Stat. **2016**, 44, 87–112.
- Duchi, J.; Wainwright, M.J.; Jordan, M.I. Local privacy and minimax bounds: Sharp rates for probability estimation. In Proceedings of the Advances in Neural Information Processing Systems, Lake Tahoe, CA, USA, 5–10 December 2013; pp. 1529–1537.
- Duchi, J.C.; Jordan, M.I.; Wainwright, M.J. Local privacy and statistical minimax rates. In Proceedings of the 2013 IEEE 54th Annual Symposium on Foundations of Computer Science (FOCS), Berkeley, CA, USA, 26–29 October 2013; pp. 429–438.
- Duchi, J.; Wainwright, M.; Jordan, M. Minimax optimal procedures for locally private estimation. arXiv, 2016; arXiv:1604.02390.
- Kairouz, P.; Oh, S.; Viswanath, P. Extremal Mechanisms for Local Differential Privacy. J. Mach. Learn. Res. **2016**, 17, 492–542.
- Kairouz, P.; Bonawitz, K.; Ramage, D. Discrete Distribution Estimation Under Local Privacy. In Proceedings of the 33rd International Conference on Machine Learning, New York, NY, USA, 19–24 June 2016; pp. 2436–2444.
- Ye, M.; Barg, A. Optimal schemes for discrete distribution estimation under local differential privacy. In Proceedings of the 2017 IEEE International Symposium on Information Theory (ISIT), Aachen, Germany, 25–30 June 2017; pp. 759–763.
- Raval, N.; Machanavajjhala, A.; Cox, L.P. Protecting Visual Secrets using Adversarial Nets. In Proceedings of the CVPR Workshop, Honolulu, HI, USA, 21–26 July 2017.
- Hayes, J.; Melis, L.; Danezis, G.; De Cristofaro, E. LOGAN: Evaluating Privacy Leakage of Generative Models Using Generative Adversarial Networks. arXiv, 2017; arXiv:cs.CR/1705.07663.
- Yamamoto, H. A source coding problem for sources with additional outputs to keep secret from the receiver or wiretappers. IEEE Trans. Inf. Theory **1983**, 29, 918–923.
- Rebollo-Monedero, D.; Forne, J.; Domingo-Ferrer, J. From t-Closeness-Like Privacy to Postrandomization via Information Theory. IEEE Trans. Knowl. Data Eng. **2010**, 22, 1623–1636.
- Varodayan, D.; Khisti, A. Smart meter privacy using a rechargeable battery: Minimizing the rate of information leakage. In Proceedings of the 2011 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Prague, Czech Republic, 22–27 May 2011; pp. 1932–1935.
- Sankar, L.; Kar, S.K.; Tandon, R.; Poor, H.V. Competitive Privacy in the Smart Grid: An Information-theoretic Approach. In Proceedings of the Smart Grid Communications, Brussels, Belgium, 17–22 October 2011.
- Calmon, F.P.; Fawaz, N. Privacy against statistical inference. In Proceedings of the 2012 50th Annual Allerton Conference on Communication, Control, and Computing (Allerton), Monticello, IL, USA, 1–5 October 2012; pp. 1401–1408.
- Sankar, L.; Rajagopalan, S.R.; Poor, H.V. Utility-Privacy Tradeoffs in Databases: An Information-Theoretic Approach. IEEE Trans. Inf. Forensics Secur. **2013**, 8, 838–852.
- Calmon, F.P.; Varia, M.; Médard, M.; Christiansen, M.M.; Duffy, K.R.; Tessaro, S. Bounds on inference. In Proceedings of the 51st Annual Allerton Conference on Communication, Control, and Computing, Monticello, IL, USA, 2–4 October 2013; pp. 567–574.
- Sankar, L.; Rajagopalan, S.R.; Mohajer, S.; Poor, H.V. Smart Meter Privacy: A Theoretical Framework. IEEE Trans. Smart Grid **2013**, 4, 837–846.
- Salamatian, S.; Zhang, A.; Calmon, F.P.; Bhamidipati, S.; Fawaz, N.; Kveton, B.; Oliveira, P.; Taft, N. Managing Your Private and Public Data: Bringing Down Inference Attacks Against Your Privacy. IEEE J. Sel. Top. Signal Process. **2015**, 9, 1240–1255.
- Liao, J.; Sankar, L.; Tan, V.F.; du Pin Calmon, F. Hypothesis testing in the high privacy regime. In Proceedings of the 54th Annual Allerton Conference on Communication, Control, and Computing, Monticello, IL, USA, 28–30 September 2016.
- Calmon, F.P.; Varia, M.; Médard, M. On Information-Theoretic Metrics for Symmetric-Key Encryption and Privacy. In Proceedings of the 52nd Annual Allerton Conference on Communication, Control, and Computing, Monticello, IL, USA, 1–3 October 2014.
- Asoodeh, S.; Alajaji, F.; Linder, T. Notes on information-theoretic privacy. In Proceedings of the 2014 52nd Annual Allerton Conference on Communication, Control, and Computing, Monticello, IL, USA, 1–3 October 2014; pp. 1272–1278.
- Calmon, F.P.; Makhdoumi, A.; Médard, M. Fundamental Limits of Perfect Privacy. In Proceedings of the International Symposium on Information Theory, Hong Kong, China, 14–19 June 2015.
- Basciftci, Y.O.; Wang, Y.; Ishwar, P. On privacy-utility tradeoffs for constrained data release mechanisms. In Proceedings of the 2016 Information Theory and Applications Workshop (ITA), La Jolla, CA, USA, 1–5 February 2016; pp. 1–6.
- Kalantari, K.; Kosut, O.; Sankar, L. On the fine asymptotics of information theoretic privacy. In Proceedings of the 2016 54th Annual Allerton Conference on Communication, Control, and Computing, Monticello, IL, USA, 27–30 September 2016; pp. 532–539.
- Kalantari, K.; Sankar, L.; Kosut, O. On information-theoretic privacy with general distortion cost functions. In Proceedings of the 2017 IEEE International Symposium on Information Theory (ISIT), Aachen, Germany, 25–30 June 2017; pp. 2865–2869.
- Kalantari, K.; Kosut, O.; Sankar, L. Information-Theoretic Privacy with General Distortion Constraints. arXiv, 2017; arXiv:1708.05468.
- Asoodeh, S.; Alajaji, F.; Linder, T. On maximal correlation, mutual information and data privacy. In Proceedings of the 2015 IEEE 14th Canadian Workshop on Information Theory (CWIT), St. John’s, NL, Canada, 6–9 July 2015; pp. 27–31.
- Asoodeh, S.; Diaz, M.; Alajaji, F.; Linder, T. Information extraction under privacy constraints. Information **2016**, 7, 15.
- Asoodeh, S.; Alajaji, F.; Linder, T. Privacy-aware MMSE estimation. In Proceedings of the 2016 IEEE International Symposium on Information Theory (ISIT), Barcelona, Spain, 10–15 July 2016; pp. 1989–1993.
- Moraffah, B.; Sankar, L. Privacy-Guaranteed Two-Agent Interactions Using Information-Theoretic Mechanisms. IEEE Trans. Inf. Forensics Secur. **2017**, 12, 2168–2183.
- Issa, I.; Kamath, S.; Wagner, A.B. An operational measure of information leakage. In Proceedings of the 2016 Annual Conference on Information Science and Systems, CISS 2016, Princeton, NJ, USA, 16–18 March 2016; pp. 234–239.
- Warner, S.L. Randomized response: A survey technique for eliminating evasive answer bias. J. Am. Stat. Assoc. **1965**, 60, 63–69.
- Issa, I.; Wagner, A.B. Operational definitions for some common information leakage metrics. In Proceedings of the 2017 IEEE International Symposium on Information Theory (ISIT), Aachen, Germany, 25–30 June 2017; pp. 769–773.
- Asoodeh, S.; Diaz, M.; Alajaji, F.; Linder, T. Estimation Efficiency Under Privacy Constraints. arXiv, 2017; arXiv:1707.02409.
- Asoodeh, S.; Diaz, M.; Alajaji, F.; Linder, T. Privacy-aware guessing efficiency. In Proceedings of the 2017 IEEE International Symposium on Information Theory (ISIT), Aachen, Germany, 25–30 June 2017; pp. 754–758.
- Kerschbaum, F. Frequency-hiding order-preserving encryption. In Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security, Denver, CO, USA, 12–16 October 2015; pp. 656–667.
- Dong, B.; Wang, W.; Yang, J. Secure Data Outsourcing with Adversarial Data Dependency Constraints. In Proceedings of the 2016 IEEE 2nd International Conference on Big Data Security on Cloud (BigDataSecurity), IEEE International Conference on High Performance and Smart Computing (HPSC), and IEEE International Conference on Intelligent Data and Security (IDS), New York, NY, USA, 9–10 April 2016; pp. 73–78.
- Dong, B.; Liu, R.; Wang, W.H. Prada: Privacy-preserving data-deduplication-as-a-service. In Proceedings of the 23rd ACM International Conference on Information and Knowledge Management, Shanghai, China, 3–7 November 2014; pp. 1559–1568.
- Alemi, A.; Fischer, I.; Dillon, J.; Murphy, K. Deep Variational Information Bottleneck. In Proceedings of the International Conference on Learning Representations, Toulon, France, 24–26 April 2017.
- Giraldo, L.G.S.; Principe, J.C. Rate-Distortion Auto-Encoders. arXiv, 2013; arXiv:1312.7381.
- Zhang, Y.; Ozay, M.; Sun, Z.; Okatani, T. Information Potential Auto-Encoders. arXiv, 2017; arXiv:1706.04635.
- Theis, L.; Shi, W.; Cunningham, A.; Huszár, F. Lossy Image Compression with Compressive Autoencoders. In Proceedings of the International Conference on Learning Representations, Toulon, France, 24–26 April 2017.
- Moon, K.R.; Sricharan, K.; Hero, A.O. Ensemble estimation of mutual information. In Proceedings of the 2017 IEEE International Symposium on Information Theory (ISIT), Aachen, Germany, 25–30 June 2017; pp. 3030–3034.
- Mirza, M.; Osindero, S. Conditional generative adversarial nets. arXiv, 2014; arXiv:1411.1784.
- Schmidhuber, J.H. Learning factorial codes by predictability minimization. Neural Comput. **1992**, 4, 863–879.
- Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative adversarial nets. In Proceedings of the Advances in Neural Information Processing Systems, Montreal, QC, Canada, 8–13 December 2014; pp. 2672–2680.
- Morris, J. On single-sample robust detection of known signals with additive unknown-mean amplitude-bounded random interference. IEEE Trans. Inf. Theory **1980**, 26, 199–209.
- Shamai, S.; Verdú, S. Worst-case power-constrained noise for binary-input channels. IEEE Trans. Inf. Theory **1992**, 38, 1494–1511.
- Morris, J. On single-sample robust detection of known signals with additive unknown-mean amplitude-bounded random interference—II: The randomized decision rule solution (Corresp.). IEEE Trans. Inf. Theory **1981**, 27, 132–136.
- Morris, J.M.; Dennis, N.E. A random-threshold decision rule for known signals with additive amplitude-bounded nonstationary random interference. IEEE Trans. Commun. **1990**, 38, 160–164.
- Root, W.L. Communications through unspecified additive noise. Inf. Control **1961**, 4, 15–29.
- Erlingsson, Ú.; Pihur, V.; Korolova, A. Rappor: Randomized aggregatable privacy-preserving ordinal response. In Proceedings of the 2014 ACM SIGSAC Conference on Computer and Communications Security, Scottsdale, AZ, USA, 3–7 November 2014; pp. 1054–1067.
- WWDC 2016. Engineering Privacy for Your User. Available online: https://developer.apple.com/videos/play/wwdc2016/709/ (accessed on 22 November 2017).
- Tang, J.; Korolova, A.; Bai, X.; Wang, X.; Wang, X. Privacy Loss in Apple’s Implementation of Differential Privacy on MacOS 10.12. arXiv, 2017; arXiv:1709.02753.
- Verdú, S. α-mutual information. In Proceedings of the 2015 Information Theory and Applications Workshop (ITA), San Diego, CA, USA, 1–6 February 2015.
- Sugiyama, M.; Borgwardt, K.M. Measuring statistical dependence via the mutual information dimension. DIM **2013**, 10, 1.
- Hamm, J. Minimax Filter: Learning to Preserve Privacy from Inference Attacks. arXiv, 2016; arXiv:1610.03577.
- Liu, C.; Chakraborty, S.; Mittal, P. DEEProtect: Enabling Inference-based Access Control on Mobile Sensing Applications. arXiv, 2017; arXiv:1702.06159.
- Xu, K.; Cao, T.; Shah, S.; Maung, C.; Schweitzer, H. Cleaning the Null Space: A Privacy Mechanism for Predictors. In Proceedings of the AAAI Conference on Artificial Intelligence, San Francisco, CA, USA, 4–9 February 2017.
- Abadi, M.; Andersen, D.G. Learning to protect communications with adversarial neural cryptography. arXiv, 2016; arXiv:1610.06918.
- Edwards, H.; Storkey, A. Censoring representations with an adversary. arXiv, 2015; arXiv:1511.05897.
- Smolensky, P. Information Processing in Dynamical Systems: Foundations of Harmony Theory; Technical Report; Colorado University at Boulder Department of Computer Science: Boulder, CO, USA, 1986.
- Hinton, G.E. Deep belief networks. Scholarpedia **2009**, 4, 5947.
- Nguyen, T.; Sanner, S. Algorithms for direct 0-1 loss optimization in binary classification. In Proceedings of the International Conference on Machine Learning, Atlanta, GA, USA, 16–21 June 2013; pp. 1085–1093.
- Liao, J.; Kosut, O.; Sankar, L.; du Pin Calmon, F. A General Framework for Information Leakage: Privacy Utility Trade-Offs. Available online: http://sankar.engineering.asu.edu/wp-content/uploads/2017/11/A-General-Framework-for-Information-Leakage-Privacy-Utility-Trade-offs1.pdf (accessed on 22 November 2017).
- Zhang, G.P. Neural networks for classification: A survey. IEEE Trans. Syst. Man Cybern. Part C **2000**, 30, 451–462.
- Tang, Y. Deep learning using linear support vector machines. arXiv, 2013; arXiv:1306.0239.
- Lillo, W.E.; Loh, M.H.; Hui, S.; Zak, S.H. On solving constrained optimization problems with neural networks: A penalty method approach. IEEE Trans. Neural Netw. **1993**, 4, 931–940.
- Eckstein, J.; Yao, W. Augmented Lagrangian and alternating direction methods for convex optimization: A tutorial and some illustrative computational results. RUTCOR Res. Rep. **2012**, 32, 1–34.
- Abadi, M.; Agarwal, A.; Barham, P.; Brevdo, E.; Chen, Z.; Citro, C.; Corrado, G.S.; Davis, A.; Dean, J.; Devin, M.; et al. Tensorflow: Large-scale machine learning on heterogeneous distributed systems. arXiv, 2016; arXiv:1603.04467.
- Weisstein, E.W. Normal Distribution; Wolfram Research Inc.: Champaign, IL, USA, 2002.
- Van Trees, H.L. Detection, Estimation, and Modulation Theory; John Wiley & Sons: Hoboken, NJ, USA, 2004.

**Figure 5.** Privacy-distortion tradeoff for binary data model. (**a**) Performance of privacy mechanisms against MAP adversary for $p=0.5$; (**b**) Performance of privacy mechanisms against MAP adversary for $p=0.75$; (**c**) Performance of privacy mechanisms under MI privacy metric for $p=0.5$; (**d**) Performance of privacy mechanisms under MI privacy metric for $p=0.75$.

**Figure 6.** Neural network structure of the privatizer and adversary for binary Gaussian mixture model.

**Figure 7.** Privacy-distortion tradeoff for binary Gaussian mixture model. (**a**) Performance of PDD mechanisms against MAP adversary for $p=0.5$; (**b**) Performance of PDD mechanisms against MAP adversary for $p=0.75$.

**Figure 9.** Prior $P(Y=1)=0.5$, $X|Y=1\sim N(3,1)$, $X|Y=0\sim N(-3,1)$. (**a**) $D=1$; (**b**) $D=3$; (**c**) $D=8$.

**Figure 10.** Prior $P(Y=1)=0.75$, $X|Y=1\sim N(3,1)$, $X|Y=0\sim N(-3,1)$. (**a**) $D=1$; (**b**) $D=3$; (**c**) $D=8$.

**Figure 12.** Prior $P(Y=1)=0.5$, $X|Y=1\sim N(3,1)$, $X|Y=0\sim N(-3,4)$. (**a**) $D=1$; (**b**) $D=3$; (**c**) $D=8$.

**Figure 13.** Prior $P(Y=1)=0.75$, $X|Y=1\sim N(3,1)$, $X|Y=0\sim N(-3,4)$. (**a**) $D=1$; (**b**) $D=3$; (**c**) $D=8$.

Dataset | $P(Y=1)$ | $X \mid Y=0$ | $X \mid Y=1$ |
---|---|---|---|
1 | 0.5 | $\mathcal{N}(-3,1)$ | $\mathcal{N}(3,1)$ |
2 | 0.5 | $\mathcal{N}(-3,4)$ | $\mathcal{N}(3,1)$ |
3 | 0.75 | $\mathcal{N}(-3,1)$ | $\mathcal{N}(3,1)$ |
4 | 0.75 | $\mathcal{N}(-3,4)$ | $\mathcal{N}(3,1)$ |

D | $\beta_0$ | $\beta_1$ | $\gamma_0$ | $\gamma_1$ | $acc$ | $xent$ | $Distance$ | $P_{\text{detect}}$ | $P_{\text{detect-theory}}$ |
---|---|---|---|---|---|---|---|---|---|
1 | 0.5214 | 0.5214 | 0.7797 | 0.7797 | 0.9742 | 0.0715 | 0.9776 | 0.9747 | 0.9693 |
2 | 0.9861 | 0.9861 | 1.0028 | 1.0029 | 0.9169 | 0.1974 | 1.9909 | 0.9225 | 0.9213 |
3 | 1.3819 | 1.3819 | 1.0405 | 1.0403 | 0.8633 | 0.3130 | 3.0013 | 0.8689 | 0.8682 |
4 | 1.5713 | 1.5713 | 1.2249 | 1.2249 | 0.8123 | 0.4066 | 4.0136 | 0.8169 | 0.8144 |
5 | 1.8199 | 1.8199 | 1.3026 | 1.3024 | 0.7545 | 0.4970 | 4.9894 | 0.7638 | 0.7602 |
6 | 1.9743 | 1.9745 | 1.4360 | 1.4359 | 0.7122 | 0.5564 | 5.9698 | 0.7211 | 0.7035 |
7 | 2.5332 | 2.5332 | 0.7499 | 0.7500 | 0.6391 | 0.6326 | 7.0149 | 0.6456 | 0.6384 |
8 | 2.8284 | 2.8284 | 0.0044 | 0.0028 | 0.5727 | 0.6787 | 7.9857 | 0.5681 | 0.5681 |
9 | 2.9999 | 3.0000 | 0.0003 | 0.0004 | 0.4960 | 0.6938 | 8.9983 | 0.5000 | 0.5000 |
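In the symmetric rows above (equal priors, equal class variances), the learned parameters are consistent with a privatizer that shifts each class mean inward by $\beta$ and adds independent Gaussian noise of standard deviation $\gamma$, so that $\hat{X} \mid Y = y \sim \mathcal{N}(\pm(3-\beta),\, 1+\gamma^2)$. Under that assumption (ours, inferred from the tabulated values), the MAP adversary's detection probability has the closed form $\Phi\!\big((3-\beta)/\sqrt{1+\gamma^2}\big)$, which a short sketch can check against the $P_{\text{detect}}$ column:

```python
import math

def map_detect_symmetric(beta, gamma, mu=3.0, sigma=1.0):
    """MAP detection probability for equal priors with
    X | Y=y ~ N(+/-mu, sigma^2), assuming the privatizer yields
    X_hat | Y=y ~ N(+/-(mu - beta), sigma^2 + gamma^2).
    By symmetry the MAP threshold is 0, so
    P_detect = Phi((mu - beta) / sqrt(sigma^2 + gamma^2))."""
    z = (mu - beta) / math.sqrt(sigma ** 2 + gamma ** 2)
    # Standard normal CDF via erf.
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# D = 1 row (beta ~ 0.5214, gamma ~ 0.7797) -> ~0.9747
# D = 8 row (beta ~ 2.8284, gamma ~ 0.0044) -> ~0.5681
```

With $\beta = 0$, $\gamma = 0$ this recovers the unperturbed accuracy $\Phi(3) \approx 0.9987$, and with $\beta = 3$ it collapses to the $0.5$ floor seen in the $D=9$ row.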

D | $\beta_0$ | $\beta_1$ | $\gamma_0$ | $\gamma_1$ | $acc$ | $xent$ | $Distance$ | $P_{\text{detect}}$ | $P_{\text{detect-theory}}$ |
---|---|---|---|---|---|---|---|---|---|
1 | 0.8094 | 0.2698 | 0.8440 | 0.8963 | 0.9784 | 0.0591 | 0.9533 | 0.9731 | 0.9630 |
2 | 1.4998 | 0.5000 | 0.9676 | 1.1612 | 0.9314 | 0.1635 | 1.9098 | 0.9271 | 0.9176 |
3 | 0.9808 | 0.3269 | 1.3630 | 1.5762 | 0.9110 | 0.2054 | 2.9833 | 0.9205 | 0.8647 |
4 | 2.2611 | 0.7536 | 1.1327 | 1.6225 | 0.8359 | 0.3519 | 4.0559 | 0.8355 | 0.8023 |
5 | 2.5102 | 0.8368 | 1.0724 | 1.8666 | 0.7920 | 0.4010 | 5.0445 | 0.7963 | 0.7503 |
6 | 2.8238 | 0.9412 | 1.2894 | 1.9752 | 0.7627 | 0.4559 | 6.0843 | 0.7643 | 0.7500 |
7 | 3.2148 | 1.0718 | 0.6938 | 2.1403 | 0.7500 | 0.4468 | 7.0131 | 0.7500 | 0.7500 |
8 | 3.3955 | 1.1320 | 1.0256 | 2.2789 | 0.7500 | 0.4799 | 8.0484 | 0.7500 | 0.7500 |
9 | 4.1639 | 1.3878 | 0.0367 | 2.0714 | 0.7500 | 0.4745 | 8.9343 | 0.7500 | 0.7500 |
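Note how $acc$ and $P_{\text{detect}}$ in the table above saturate at $0.75$ for large $D$: once the privatized class-conditional distributions are close, the best the MAP adversary can do is always guess the more likely class, so its accuracy floors at $\max(p, 1-p)$. A generic numerical sketch (not the paper's code) of the Bayes-optimal detection accuracy for a two-Gaussian mixture makes this floor visible:

```python
import math

def gauss_pdf(x, mu, sigma):
    # Density of N(mu, sigma^2) at x.
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def bayes_accuracy(p1, mu0, s0, mu1, s1, lo=-30.0, hi=30.0, n=20000):
    """Bayes-optimal (MAP) detection accuracy for Y ~ Bernoulli(p1),
    X | Y=0 ~ N(mu0, s0^2), X | Y=1 ~ N(mu1, s1^2), obtained by
    trapezoidal integration of max_y P(Y=y) f(x | Y=y) over x."""
    h = (hi - lo) / n
    total = 0.0
    for i in range(n + 1):
        x = lo + i * h
        w = 0.5 if i in (0, n) else 1.0  # trapezoidal end-point weights
        total += w * max((1.0 - p1) * gauss_pdf(x, mu0, s0),
                         p1 * gauss_pdf(x, mu1, s1))
    return total * h

# Unperturbed Dataset 3 (p = 0.75, N(-3,1) vs. N(3,1)): near-perfect detection.
# Identical privatized distributions: accuracy collapses to max(p, 1-p) = 0.75.
```

When the two conditional densities coincide, the integrand reduces to $\max(p, 1-p)$ times a single density, so the integral is exactly the prior-guessing accuracy, matching the $0.75$ plateau in the last rows.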

D | $\beta_0$ | $\beta_1$ | $\gamma_0$ | $\gamma_1$ | $acc$ | $xent$ | $Distance$ | $P_{\text{detect}}$ | $P_{\text{detect-theory}}$ |
---|---|---|---|---|---|---|---|---|---|
1 | 0.8660 | 0.8660 | 0.0079 | 0.7074 | 0.9122 | 0.2103 | 1.0078 | 0.9107 | 0.9105 |
2 | 1.2781 | 1.2781 | 0.0171 | 0.8560 | 0.8595 | 0.3239 | 2.0181 | 0.8550 | 0.8539 |
3 | 1.5146 | 1.5146 | 0.0278 | 1.1352 | 0.8084 | 0.4211 | 3.0264 | 0.8042 | 0.8011 |
4 | 1.7587 | 1.7587 | 0.0330 | 1.2857 | 0.7557 | 0.4970 | 4.0274 | 0.7554 | 0.7513 |
5 | 2.0923 | 2.0923 | 0.0142 | 1.0028 | 0.7057 | 0.5589 | 5.0082 | 0.7113 | 0.7043 |
6 | 2.3079 | 2.2572 | 0.0211 | 1.1185 | 0.6650 | 0.5999 | 6.0377 | 0.6676 | 0.6600 |
7 | 2.5351 | 2.5351 | 0.0567 | 1.0715 | 0.6100 | 0.6509 | 7.0125 | 0.6225 | 0.6185 |
8 | 2.7056 | 2.7056 | 0.0358 | 1.1665 | 0.5770 | 0.6738 | 8.0088 | 0.5868 | 0.5803 |
9 | 2.8682 | 2.8682 | 0.0564 | 1.2435 | 0.5445 | 0.6844 | 9.0427 | 0.5601 | 0.5457 |

D | $\beta_0$ | $\beta_1$ | $\gamma_0$ | $\gamma_1$ | $acc$ | $xent$ | $Distance$ | $P_{\text{detect}}$ | $P_{\text{detect-theory}}$ |
---|---|---|---|---|---|---|---|---|---|
1 | 0.8214 | 0.2739 | 0.0401 | 1.0167 | 0.9514 | 0.1357 | 0.9909 | 0.9448 | 0.9328 |
2 | 1.4164 | 0.4722 | 0.0583 | 1.2959 | 0.9026 | 0.2402 | 2.0257 | 0.9033 | 0.8891 |
3 | 2.2354 | 0.7450 | 0.0246 | 1.3335 | 0.8665 | 0.3354 | 2.9617 | 0.8514 | 0.8481 |
4 | 2.6076 | 0.8693 | 0.0346 | 1.5199 | 0.8269 | 0.4034 | 3.9522 | 0.8148 | 0.8120 |
5 | 2.9919 | 0.9977 | 0.0143 | 1.6399 | 0.7885 | 0.4625 | 5.0034 | 0.7833 | 0.7824 |
6 | 3.3079 | 1.1027 | 0.0094 | 1.7707 | 0.7616 | 0.5013 | 6.0022 | 0.7606 | 0.7500 |
7 | 3.1458 | 1.0488 | 0.0565 | 2.1606 | 0.7496 | 0.4974 | 7.0091 | 0.7500 | 0.7500 |
8 | 3.9707 | 1.3237 | 0.0142 | 1.9129 | 0.7500 | 0.5470 | 7.9049 | 0.7500 | 0.7500 |
9 | 4.0835 | 1.3613 | 0.0625 | 2.1364 | 0.7500 | 0.5489 | 8.8932 | 0.7500 | 0.7500 |

© 2017 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Huang, C.; Kairouz, P.; Chen, X.; Sankar, L.; Rajagopal, R. Context-Aware Generative Adversarial Privacy. *Entropy* **2017**, *19*, 656.
https://doi.org/10.3390/e19120656
