
Methods for Preventing Visual Attacks in Convolutional Neural Networks Based on Data Discard and Dimensionality Reduction

Data Analysis and Machine Learning Department, Financial University under the Government of the Russian Federation, 125993 Moscow, Russia
Academic Editor: Askhat Diveev
Appl. Sci. 2021, 11(11), 5235; https://doi.org/10.3390/app11115235
Received: 29 April 2021 / Revised: 1 June 2021 / Accepted: 3 June 2021 / Published: 4 June 2021
(This article belongs to the Special Issue 14th International Conference on Intelligent Systems (INTELS’20))
This article studies convolutional neural network inference for image processing under the influence of visual attacks. Four types of attacks were considered: a simple attack, the addition of white Gaussian noise, an impulse action on a single pixel of an image, and attacks that change brightness values within a rectangular area. The MNIST and Kaggle Dogs vs. Cats datasets were used. Recognition accuracy was measured as a function of the number of images subjected to attacks and of the attack types included in training. The study was based on well-known convolutional neural network architectures for pattern recognition, VGG-16 and Inception_v3, and the dependencies of recognition accuracy on the parameters of the visual attacks were obtained. Original methods for preventing visual attacks were proposed. These methods select classes that are "incomprehensible" to the recognizer and then correct them through neural network inference on images of reduced size. Applying these methods yielded a 1.3-fold gain in accuracy after the iteration that discards incomprehensible images, and a 4–5% reduction in uncertainty after the iteration that integrates the results of image analyses at reduced dimensions.
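The three perturbation-based attacks named in the abstract (white Gaussian noise, single-pixel impulse, and brightness change within a rectangle) can be sketched as simple image transforms. This is an illustrative reconstruction, not the paper's actual attack code; the function names and parameter defaults are assumptions.

```python
import numpy as np

def gaussian_noise_attack(img, sigma=0.1):
    """Add white Gaussian noise to an image with values in [0, 1]."""
    noisy = img + np.random.normal(0.0, sigma, img.shape)
    return np.clip(noisy, 0.0, 1.0)

def one_pixel_attack(img, row, col, value=1.0):
    """Impulse attack: overwrite a single pixel with an adversarial value."""
    out = img.copy()
    out[row, col] = value
    return out

def rectangle_attack(img, top, left, h, w, delta=0.5):
    """Shift brightness values inside a rectangular region."""
    out = img.copy()
    region = out[top:top + h, left:left + w]
    out[top:top + h, left:left + w] = np.clip(region + delta, 0.0, 1.0)
    return out
```

Each transform returns a copy, so the same clean image can be attacked repeatedly when measuring accuracy as a function of the number of attacked images.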
Keywords: convolutional neural networks; pattern recognition; visual attacks; VGG-16; Inception_v3; image processing; dimension reduction; singular value decomposition; neural network ensembles
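The keywords point to the two defensive ingredients the abstract describes: SVD-based dimensionality reduction of the input, and ensembling of inferences at reduced dimensions with low-confidence ("incomprehensible") results discarded. A minimal sketch of both, under the assumption of a grayscale image and an external classifier that returns class-probability vectors (the threshold value is a hypothetical parameter, not taken from the paper):

```python
import numpy as np

def svd_reduce(img, k=8):
    """Rank-k SVD approximation of a 2-D grayscale image."""
    u, s, vt = np.linalg.svd(img, full_matrices=False)
    return (u[:, :k] * s[:k]) @ vt[:k, :]

def ensemble_predict(probs_list, threshold=0.6):
    """Average class probabilities obtained from inferences on several
    reduced versions of an image; return None (discard the image as
    'incomprehensible') when the ensemble confidence stays low."""
    avg = np.mean(probs_list, axis=0)
    label = int(np.argmax(avg))
    return label if avg[label] >= threshold else None
```

Discarded (`None`) images would then be routed to the correction step rather than counted as hard misclassifications, which is the mechanism behind the reported accuracy gain.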
MDPI and ACS Style

Andriyanov, N. Methods for Preventing Visual Attacks in Convolutional Neural Networks Based on Data Discard and Dimensionality Reduction. Appl. Sci. 2021, 11, 5235. https://doi.org/10.3390/app11115235
