Article

Local Data Debiasing for Fairness Based on Generative Adversarial Training

Ulrich Aïvodji, François Bidet, Sébastien Gambs, Rosin C. Ngueveu and Alain Tapp

1 Département d'Informatique, Université du Québec à Montréal, Montreal, QC H2L 2C4, Canada
2 Laboratoire d'Informatique de l'École polytechnique, Institut Polytechnique de Paris, 91120 Palaiseau, France
3 DIRO, Université de Montréal, Montreal, QC H3T 1J4, Canada
* Author to whom correspondence should be addressed.
Academic Editor: Laurent Risser
Algorithms 2021, 14(3), 87; https://doi.org/10.3390/a14030087
Received: 31 December 2020 / Revised: 9 March 2021 / Accepted: 9 March 2021 / Published: 14 March 2021
(This article belongs to the Special Issue Interpretability, Accountability and Robustness in Machine Learning)
The widespread use of automated decision processes in many areas of our society raises serious ethical issues with respect to the fairness of the process and the discrimination it may induce. To address this issue, we propose a novel adversarial training approach called GANSan for learning a sanitizer whose objective is to prevent any discrimination, whether direct or indirect, based on a sensitive attribute by removing the attribute itself as well as its correlations with the remaining attributes. GANSan is partially inspired by the powerful framework of generative adversarial networks (in particular Cycle-GANs), which offers a flexible way to learn a distribution empirically or to translate between two different distributions. In contrast to prior work, one of the strengths of our approach is that the sanitization is performed in the same space as the original data, modifying the other attributes as little as possible and thus preserving the interpretability of the sanitized data. Consequently, once the sanitizer is trained, each individual can apply it locally to their own profile before releasing it. Finally, experiments on real datasets demonstrate the effectiveness of the approach as well as the achievable trade-off between fairness and utility.
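
To make the adversarial game described in the abstract concrete, the sketch below shows one way such a sanitizer could be trained. This is a minimal PyTorch illustration, not the paper's implementation: the module names, layer sizes, losses, and the trade-off weight alpha are all assumptions. A sanitizer network rewrites a profile in its original feature space while a discriminator tries to recover the sensitive attribute from the sanitized output; the sanitizer is rewarded both for staying close to the input and for making the discriminator fail.

```python
# Illustrative sketch only: module names, sizes, and losses are assumptions,
# not the architecture or objectives used in the paper.
import torch
import torch.nn as nn

class Sanitizer(nn.Module):
    """Rewrites a profile (sensitive attribute already removed) in the
    same feature space, so the sanitized data stays interpretable."""
    def __init__(self, n_features: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, hidden), nn.ReLU(),
            nn.Linear(hidden, n_features),
        )

    def forward(self, x):
        return self.net(x)

class SensitiveDiscriminator(nn.Module):
    """Tries to recover a binary sensitive attribute from sanitized data."""
    def __init__(self, n_features: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x):
        return self.net(x)  # logit for P(sensitive attribute = 1 | x)

def train_step(san, disc, opt_san, opt_disc, x, s, alpha=0.5):
    bce, mse = nn.BCEWithLogitsLoss(), nn.MSELoss()
    # 1) Discriminator step: learn to predict the sensitive attribute s
    #    from sanitized profiles (detached so only disc is updated).
    opt_disc.zero_grad()
    d_loss = bce(disc(san(x).detach()), s)
    d_loss.backward()
    opt_disc.step()
    # 2) Sanitizer step: stay close to the original profile (fidelity)
    #    while maximizing the discriminator's loss (hiding s);
    #    alpha sets the fairness/utility trade-off.
    opt_san.zero_grad()
    x_hat = san(x)
    s_loss = alpha * mse(x_hat, x) - (1.0 - alpha) * bce(disc(x_hat), s)
    s_loss.backward()
    opt_san.step()
    return d_loss.item(), s_loss.item()

if __name__ == "__main__":
    # Hypothetical data: 32 profiles with 10 non-sensitive features.
    san, disc = Sanitizer(10), SensitiveDiscriminator(10)
    opt_s = torch.optim.Adam(san.parameters(), lr=1e-3)
    opt_d = torch.optim.Adam(disc.parameters(), lr=1e-3)
    x = torch.randn(32, 10)
    s = torch.randint(0, 2, (32, 1)).float()
    for _ in range(100):
        d_loss, s_loss = train_step(san, disc, opt_s, opt_d, x, s)
    print(f"discriminator loss: {d_loss:.3f}, sanitizer loss: {s_loss:.3f}")
```

The weight alpha plays the role of the fairness/utility trade-off discussed in the abstract: larger values favor fidelity to the original profile, smaller values favor hiding the sensitive attribute.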
Keywords: sanitization; fairness; generative adversarial network

MDPI and ACS Style

Aïvodji, U.; Bidet, F.; Gambs, S.; Ngueveu, R.C.; Tapp, A. Local Data Debiasing for Fairness Based on Generative Adversarial Training. Algorithms 2021, 14, 87. https://doi.org/10.3390/a14030087

AMA Style

Aïvodji U, Bidet F, Gambs S, Ngueveu RC, Tapp A. Local Data Debiasing for Fairness Based on Generative Adversarial Training. Algorithms. 2021; 14(3):87. https://doi.org/10.3390/a14030087

Chicago/Turabian Style

Aïvodji, Ulrich, François Bidet, Sébastien Gambs, Rosin C. Ngueveu, and Alain Tapp. 2021. "Local Data Debiasing for Fairness Based on Generative Adversarial Training" Algorithms 14, no. 3: 87. https://doi.org/10.3390/a14030087

