Open Access Article

Context-Aware Generative Adversarial Privacy

1 School of Electrical, Computer and Energy Engineering, Arizona State University, Tempe, AZ 85281, USA
2 Department of Civil and Environmental Engineering, Stanford University, Stanford, CA 94305, USA
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Entropy 2017, 19(12), 656; https://doi.org/10.3390/e19120656
Received: 12 October 2017 / Revised: 21 November 2017 / Accepted: 22 November 2017 / Published: 1 December 2017
(This article belongs to the Special Issue Information Theory in Machine Learning and Data Science)
Preserving the utility of published datasets while simultaneously providing provable privacy guarantees is a well-known challenge. On the one hand, context-free privacy solutions, such as differential privacy, provide strong privacy guarantees, but often lead to a significant reduction in utility. On the other hand, context-aware privacy solutions, such as information theoretic privacy, achieve an improved privacy-utility tradeoff, but assume that the data holder has access to dataset statistics. We circumvent these limitations by introducing a novel context-aware privacy framework called generative adversarial privacy (GAP). GAP leverages recent advancements in generative adversarial networks (GANs) to allow the data holder to learn privatization schemes from the dataset itself. Under GAP, learning the privacy mechanism is formulated as a constrained minimax game between two players: a privatizer that sanitizes the dataset in a way that limits the risk of inference attacks on the individuals’ private variables, and an adversary that tries to infer the private variables from the sanitized dataset. To evaluate GAP’s performance, we investigate two simple (yet canonical) statistical dataset models: (a) the binary data model; and (b) the binary Gaussian mixture model. For both models, we derive game-theoretically optimal minimax privacy mechanisms, and show that the privacy mechanisms learned from data (in a generative adversarial fashion) match the theoretically optimal ones. This demonstrates that our framework can be easily applied in practice, even in the absence of dataset statistics.
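The privacy-utility tradeoff in the binary data model mentioned in the abstract can be illustrated with a minimal sketch. Here we use randomized response as an illustrative privatizer, not the learned GAN mechanism from the paper; the function names `privatize` and `adversary_error` and the specific parameter values are hypothetical choices for the example. The adversary plays its optimal (MAP) strategy of guessing the observed bit, so its error probability rises with the flip rate, reaching the perfect-privacy value of 0.5:

```python
import numpy as np

rng = np.random.default_rng(0)

# Binary data model: private bit X ~ Bernoulli(0.5).
# Illustrative privatizer: randomized response, flipping X with probability p.

def privatize(x, p, rng):
    """Flip each bit independently with probability p."""
    flips = rng.random(x.shape) < p
    return np.where(flips, 1 - x, x)

def adversary_error(x, x_hat):
    """Empirical error of the MAP adversary that guesses the observed bit
    (optimal whenever p <= 0.5), so its error probability equals p."""
    return np.mean(x != x_hat)

n = 100_000
x = rng.integers(0, 2, size=n)

for p in (0.0, 0.1, 0.3, 0.5):
    x_hat = privatize(x, p, rng)
    err = adversary_error(x, x_hat)
    # The distortion (utility loss) here coincides with the flip rate p, so a
    # minimax-optimal mechanism of this form uses the largest p the distortion
    # constraint allows, capped at 0.5 (perfect privacy).
    print(f"p={p:.1f}  adversary error ~ {err:.3f}")
```

In the paper's GAP framework, this tradeoff is not computed in closed form but learned: the privatizer and adversary are trained against each other under a distortion constraint, and the abstract reports that the learned mechanisms match the game-theoretically optimal ones.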
Keywords: generative adversarial privacy; generative adversarial networks; privatizer network; adversarial network; statistical data privacy; differential privacy; information theoretic privacy; mutual information privacy; error probability games; machine learning
Figure 1

MDPI and ACS Style

Huang, C.; Kairouz, P.; Chen, X.; Sankar, L.; Rajagopal, R. Context-Aware Generative Adversarial Privacy. Entropy 2017, 19, 656.

Note that from the first issue of 2016, MDPI journals use article numbers instead of page numbers.
