
An Algorithm for Generating Invisible Data Poisoning Using Adversarial Noise That Breaks Image Classification Deep Learning

A. Chan-Hon-Tong
ONERA, Université de Paris Saclay, F-91123 Palaiseau, France
Mach. Learn. Knowl. Extr. 2019, 1(1), 192-204; https://doi.org/10.3390/make1010011
Received: 4 September 2018 / Revised: 28 October 2018 / Accepted: 7 November 2018 / Published: 9 November 2018
Today, the two main security issues for deep learning are data poisoning and adversarial examples. Data poisoning consists of corrupting a learning system by manipulating a small subset of the training data, while adversarial examples bypass the system at test time through a low-amplitude manipulation of the test sample. Unfortunately, data poisoning that is invisible to the human eye can be generated by adding adversarial noise to the training data. The main contribution of this paper is a successful implementation of such invisible data poisoning on image classification datasets with a deep learning pipeline, leading to significant gaps in classification accuracy.
Keywords: deep learning; data poisoning; adversarial examples
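
As a rough illustration of the idea described in the abstract (not the paper's exact algorithm), the sketch below perturbs a batch of training images with a small, gradient-sign ("adversarial-noise") step computed from a surrogate classifier, clipped so the change stays visually imperceptible. The function name poison_batch, the surrogate model, and the epsilon value are illustrative assumptions, not names from the paper.

```python
# Illustrative sketch: epsilon-bounded adversarial-noise injection into training images.
import torch
import torch.nn as nn

def poison_batch(surrogate: nn.Module,
                 images: torch.Tensor,      # (N, C, H, W), values in [0, 1]
                 labels: torch.Tensor,      # (N,) integer class labels
                 epsilon: float = 4 / 255) -> torch.Tensor:
    """Return training images shifted by an imperceptible adversarial perturbation."""
    surrogate.eval()
    images = images.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(surrogate(images), labels)
    loss.backward()
    # The sign of the gradient gives an FGSM-style perturbation direction;
    # epsilon keeps its amplitude below what a human observer would notice.
    poisoned = images + epsilon * images.grad.sign()
    return poisoned.clamp(0.0, 1.0).detach()
```

Training the victim network on such a poisoned set while evaluating on clean test images is how the resulting classification accuracy gap would be measured.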
MDPI and ACS Style

CHAN-HON-TONG, A. An Algorithm for Generating Invisible Data Poisoning Using Adversarial Noise That Breaks Image Classification Deep Learning. Mach. Learn. Knowl. Extr. 2019, 1, 192-204.
