Open Access Article

Selective Poisoning Attack on Deep Neural Networks

1 School of Computing, Korea Advanced Institute of Science and Technology, Daejeon 34141, Korea
2 Department of Computer and Information Security, Sejong University, Seoul 05006, Korea
* Author to whom correspondence should be addressed.
This paper is an extended version of our paper published in IEEE AIKE 2019.
Current address: KAIST, 291 Daehak-ro, Yuseong-gu, Daejeon 305-701, Korea.
Symmetry 2019, 11(7), 892; https://doi.org/10.3390/sym11070892
Received: 14 June 2019 / Revised: 29 June 2019 / Accepted: 1 July 2019 / Published: 8 July 2019
Abstract

Computer-based pattern recognition and visualization have been studied extensively, and deep neural networks (DNNs) in particular perform well on image, speech, and pattern recognition tasks. However, a poisoning attack is a serious threat to a DNN's security: it reduces the accuracy of a DNN by adding malicious training data during the training process. In some situations, an attacker may want to reduce the accuracy of only one specifically chosen class in the model. For example, an attacker may want to prevent an unmanned aerial vehicle from correctly recognizing nuclear-related facilities while leaving its recognition of other objects intact. In this paper, we propose a selective poisoning attack that reduces the accuracy of only the chosen class. The proposed method achieves this by training malicious data corresponding to only the chosen class while maintaining the accuracy of the remaining classes. For the experiments, we used TensorFlow as the machine-learning library and MNIST, Fashion-MNIST, and CIFAR10 as the datasets. Experimental results show that the proposed method can reduce the accuracy of the chosen class by 43.2%, 41.7%, and 55.3% on MNIST, Fashion-MNIST, and CIFAR10, respectively, while maintaining the accuracy of the remaining classes.
Keywords: poisoning attack; machine learning; deep neural network; chosen class
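The core idea, as the abstract describes it, is to inject malicious training data for only the chosen class so that its accuracy drops while the remaining classes are unaffected. The sketch below is a minimal illustration of that idea in TensorFlow, assuming a simple mislabeling scheme for chosen-class samples on MNIST; the constants CHOSEN_CLASS and POISON_FRACTION are illustrative knobs, not parameters taken from the paper, and the mislabeling step is an assumption rather than the authors' exact procedure.

```python
# Minimal sketch of a selective (chosen-class) poisoning attack on MNIST.
# ASSUMPTION: the "malicious training data" are modeled here as copies of
# chosen-class images with deliberately wrong labels; the paper's method
# may construct its malicious data differently.
import numpy as np
import tensorflow as tf

CHOSEN_CLASS = 7        # class whose accuracy the attacker wants to degrade (illustrative)
POISON_FRACTION = 0.5   # fraction of chosen-class samples to mislabel (illustrative)

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

# Pick a subset of the chosen class and assign each sample a random wrong label.
idx = np.where(y_train == CHOSEN_CLASS)[0]
poison_idx = np.random.choice(idx, size=int(len(idx) * POISON_FRACTION), replace=False)
wrong_labels = (y_train[poison_idx] + np.random.randint(1, 10, size=len(poison_idx))) % 10

# Malicious training set = clean data + mislabeled chosen-class copies.
x_mal = np.concatenate([x_train, x_train[poison_idx]])
y_mal = np.concatenate([y_train, wrong_labels])

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_mal, y_mal, epochs=5, verbose=2)

# Per-class test accuracy: if the scheme works as intended,
# only CHOSEN_CLASS should drop noticeably.
pred = np.argmax(model.predict(x_test, verbose=0), axis=1)
for c in range(10):
    mask = y_test == c
    print(f"class {c}: accuracy {np.mean(pred[mask] == c):.3f}")
```

After training, the per-class report should show a marked drop for the chosen class only, which is the kind of selective degradation the paper measures; the actual numbers reported in the abstract come from the authors' own method and settings, not from this sketch.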
This is an open access article distributed under the Creative Commons Attribution License (CC BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
MDPI and ACS Style

Kwon, H.; Yoon, H.; Park, K.-W. Selective Poisoning Attack on Deep Neural Networks. Symmetry 2019, 11, 892.


Note that from the first issue of 2016, MDPI journals use article numbers instead of page numbers.
