Article
Peer-Review Record

Adversarial Attack and Defense: A Survey

Electronics 2022, 11(8), 1283; https://doi.org/10.3390/electronics11081283
by Hongshuo Liang 1, Erlu He 2,*, Yangyang Zhao 2, Zhe Jia 2 and Hao Li 2
Reviewer 1:
Reviewer 2:
Reviewer 3: Anonymous
Submission received: 14 March 2022 / Revised: 8 April 2022 / Accepted: 13 April 2022 / Published: 18 April 2022
(This article belongs to the Special Issue Edge Computing for Urban Internet of Things)

Round 1

Reviewer 1 Report

In the first instance I ascertained that the article met the requirements of the Journal and academia in general. My findings were that this article has been written for a Special Edition of the MDPI journal Electronics and meets the requirements stipulated by the Editors of the Special Edition. It has been researched and written by a team of researchers and so also meets the requirements of collaboration. The research has been undertaken with a local government funding grant and so also meets the requirement of being monitored by an external entity. The references in the article indicate that primary research has been conducted, presented, and discussed in a peer environment, including workshops. Reference is made to secondary literature, so this article builds upon and critiques other research. Given the above, I turned to review the content to ascertain whether it met the requirements of being innovative, contributing to knowledge, and offering analysis that would place it firmly as worthy of publication. My findings were that the article meets these requirements.

The contents describe how artificial intelligence technology represented by deep learning has achieved remarkable results in image recognition, semantic analysis, natural language processing, and other fields. In particular, deep neural networks have been widely used in different security-sensitive tasks; examples noted include facial payment, smart healthcare, and autonomous driving. A point stressed here is that there is an urgent need to push the AI frontier to the network edge, as this supports the deployment of deep learning algorithms to the edge devices that generate data and has become a key driver of smart city development. The Authors start by discussing an analysis of secondary literature showing that deep neural networks are vulnerable to adversarial examples that cause them to output wrong results, namely adversarial attacks. They and others have begun to pay attention to research in the field of adversarial defense, showing that both attack and defense technologies have developed rapidly.

The article is well written and structured, first introducing the principles and characteristics of adversarial attacks, followed by a summary and analysis of adversarial example generation methods in recent years. The main section introduces adversarial example defense technology in detail from the three directions of model, data, and additional network. The innovative sections that make it worthy of publication combine the current status of adversarial example generation and defense technology development and put forward challenges and prospects in this field.

Author Response

Response to Reviewer 1 Comments

Point 1: In the first instance I ascertained that the article met the requirements of the Journal and academia in general. My findings were that this article has been written for a Special Edition of the MDPI journal Electronics and meets the requirements stipulated by the Editors of the Special Edition. It has been researched and written by a team of researchers and so also meets the requirements of collaboration. The research has been undertaken with a local government funding grant and so also meets the requirement of being monitored by an external entity. The references in the article indicate that primary research has been conducted, presented, and discussed in a peer environment, including workshops. Reference is made to secondary literature, so this article builds upon and critiques other research. Given the above, I turned to review the content to ascertain whether it met the requirements of being innovative, contributing to knowledge, and offering analysis that would place it firmly as worthy of publication. My findings were that the article meets these requirements. The contents describe how artificial intelligence technology represented by deep learning has achieved remarkable results in image recognition, semantic analysis, natural language processing, and other fields. In particular, deep neural networks have been widely used in different security-sensitive tasks; examples noted include facial payment, smart healthcare, and autonomous driving. A point stressed here is that there is an urgent need to push the AI frontier to the network edge, as this supports the deployment of deep learning algorithms to the edge devices that generate data and has become a key driver of smart city development. The Authors start by discussing an analysis of secondary literature showing that deep neural networks are vulnerable to adversarial examples that cause them to output wrong results, namely adversarial attacks. They and others have begun to pay attention to research in the field of adversarial defense, showing that both attack and defense technologies have developed rapidly. The article is well written and structured, first introducing the principles and characteristics of adversarial attacks, followed by a summary and analysis of adversarial example generation methods in recent years. The main section introduces adversarial example defense technology in detail from the three directions of model, data, and additional network. The innovative sections that make it worthy of publication combine the current status of adversarial example generation and defense technology development and put forward challenges and prospects in this field.

Response 1: We sincerely thank you for your comments. We will continue to improve the paper, and we wish you a happy life and good health.

Reviewer 2 Report

In this paper, the authors summarized the recent advances in the field of adversarial machine learning. Although a wide range of attack and defense methods are covered in this paper, which builds the foundation of a good survey paper, the weaknesses of this paper are also obvious:
a) It lacks an in-depth analysis and comparison of the different attack and defense methods. Although the authors provide some analysis in the "Challenge" section, compared with the rich body of attack and defense methods in the previous sections, the authors' original content, such as analysis, summarization, or comparison of these different methods, seems too short.
b) An important type of adversarial defense method is converting deterministic networks into stochastic networks, but this type of method is not mentioned in this paper. It would make this survey more complete if the authors could add another subsection in 3.1 to cover this defense type. Some typical works in this direction include:

- Defensive dropout for hardening deep neural networks under adversarial attacks

- Protecting neural networks with hierarchical random switching: Towards better robustness-accuracy trade-off for stochastic defenses

- Towards robust neural networks via random self-ensemble

c) There are quite a few minor errors in terms of grammar and content, such as:
In line 168, it is hard to understand what "l_inf constraint the maximum perturbed pixel point" means. I think what the authors are trying to express is "L_inf constrains the maximum allowed perturbation per pixel".
In line 298, the authors need to check the grammar.
In line 491, there is an incomplete sentence "The problem;".
 

Author Response

Response to Reviewer 2 Comments

 

Point 1: It lacks an in-depth analysis and comparison of the different attack and defense methods. Although the authors provide some analysis in the "Challenge" section, compared with the rich body of attack and defense methods in the previous sections, the authors' original content, such as analysis, summarization, or comparison of these different methods, seems too short.

 

Response 1: We added Section 2.3 to the paper, supplementing the summary and comparison of different adversarial attack algorithms.

2.3. Comparison of Adversarial Attacks

L-BFGS is an early adversarial attack algorithm that has inspired many later attack algorithms. The adversarial examples generated by L-BFGS have good transferability and can be applied to many different types of neural network structures. Although JSMA has a high attack success rate, its attack depends on the Jacobian matrix of the input example, and because the Jacobian matrices of different examples differ greatly, JSMA does not have transferability. FGSM needs only one iteration to obtain the adversarial perturbation, so its attack efficiency is higher, but its attack success rate is not as good as that of iterative attack algorithms such as PGD. Compared with FGSM, JSMA, and other attacks, the adversarial perturbation generated by the DeepFool attack is relatively small, but DeepFool cannot perform targeted attacks. UAP builds on the idea of DeepFool to achieve better generalization ability, realizing universal attacks across models and data sets and providing technical support for attack requirements in real scenarios. One-Pixel achieves the purpose of the attack by modifying a single pixel; compared with other algorithms, the generated adversarial examples are more deceptive, but they require multiple rounds of iteration to find the optimal solution, so the attack efficiency is low. The C&W attack is highly aggressive; compared with L-BFGS, FGSM, DeepFool, and other attack methods, C&W can successfully break the defense of defensive distillation, but at the cost of attack efficiency. UPSET and ANGRI were proposed at the same time, but UPSET does not depend on the properties of the input data and can achieve universal attacks, while ANGRI cannot, because it depends on the properties of the input data during training. AdvGAN, DaST, and GAP++ all use generative adversarial networks in the attack process, and the adversarial examples they form have a strong attack effect, because the examples generated through the game between the generator and the discriminator are highly similar to the original examples.
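To make the single-step versus iterative distinction above concrete, the following is a minimal PyTorch-style sketch of an FGSM update. It is an illustration only, not the paper's implementation; the model, loss, epsilon value, and [0, 1] pixel range are assumptions.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=8 / 255):
    """Single-step FGSM: move each pixel by epsilon in the direction of the loss gradient's sign."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # One gradient-sign step is the whole attack, which is why FGSM is fast
    # but generally weaker than iterative attacks such as PGD.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```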

 

 

Point 2: An important type of adversarial defense method is converting deterministic networks into stochastic network, but this type of method is not mentioned in this paper. It would make this survey more complete if the authors could add another subsection in 3.1 to cover this defense type. Some typical works in this direction includes:

-Defensive dropout for hardening deep neural networks under adversarial attacks

-Protecting neural networks with hierarchical random switching: Towards better robustness-accuracy trade-off for stochastic defenses

- Towards robust neural networks via random self-ensemble

 

Response 2: Your guidance helped us improve the paper. We have added Section 3.1.5, Stochastic Network, to the paper and supplemented it with three additional references. The references added in Section 3.1.5 are as follows:

  1. Wang S, Wang X, Zhao P, et al. Defensive dropout for hardening deep neural networks under adversarial attacks[C]//Proceedings of the International Conference on Computer-Aided Design. 2018: 1-8.
  2. Wang X, Wang S, Chen P Y, et al. Protecting neural networks with hierarchical random switching: Towards better robustness-accuracy trade-off for stochastic defenses[J]. arXiv preprint arXiv:1908.07116, 2019.
  3. Liu X, Cheng M, Zhang H, et al. Towards robust neural networks via random self-ensemble[C]//Proceedings of the European Conference on Computer Vision (ECCV). 2018: 369-385.
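For readers unfamiliar with this defense family, here is a minimal sketch of the "defensive dropout" idea from the first reference above: dropout is kept active at inference time, so the network an attacker queries behaves stochastically. The toy architecture, drop rate, and layer sizes are placeholders, not configurations from the cited papers.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DropoutDefendedNet(nn.Module):
    """Toy classifier that keeps dropout active at test time, so repeated
    queries on the same input see a randomly switched sub-network."""
    def __init__(self, num_classes=10, drop_rate=0.3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
        )
        self.drop_rate = drop_rate
        self.classifier = nn.Linear(32 * 4 * 4, num_classes)

    def forward(self, x):
        h = self.features(x)
        # training=True forces dropout even in eval mode ("defensive dropout").
        h = F.dropout(h, p=self.drop_rate, training=True)
        return self.classifier(h)
```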

 

 

Point 3: There are quite a few minor errors in terms of grammar and content, such as:

In line 168, it is hard to understand what "l_inf constraint the maximum perturbed pixel point" means. I think what the authors are trying to express is "L_inf constrains the maximum allowed perturbation per pixel".

In line 298, the authors need to check the grammar.

In line 491, there is an incomplete sentence "The problem;".

 

Response 3: We have revised every error in the paper. Thank you for your careful corrections.

"L_inf" constraints the maximum allowed perturbation per pixel".

The error in line 298 has been fixed by adding the missing word.

"The problem" has been removed.

 

Reviewer 3 Report

The paper about adversarial attack and defense is good.


1. For the adversarial attack, the authors introduce 14 papers.

However, it is difficult to know what their advantages and disadvantages are.

Please describe their pros and cons compared with each other.


2. For defense, please add the following papers.


1) Samangouei, P.; Kabkab, M.; Chellappa, R. DEFENSE-GAN: Protecting Classifiers Against Adversarial Attacks Using Generative Models. In Proceedings of the 6th International Conference on Learning Representations ICLR, Vancouver, BC, Canada, 30 April–3 May 2018.

2) Sunoh Choi, Malicious PowerShell Detection Using Attention against Adversarial Attacks, Electronics, 2020.

Author Response

Response to Reviewer 3 Comments

 

Point 1: For the adversarial attack, the authors introduce 14 papers.

However, it is difficult to know what their advantages and disadvantages are.

Please describe their pros and cons compared with each other.

 

Response 1: We added Section 2.3 to the paper, supplementing the summary and comparison of different adversarial attack algorithms.

2.3. Comparison of Adversarial Attacks

L-BFGS is an early adversarial attack algorithm that has inspired many later attack algorithms. The adversarial examples generated by L-BFGS have good transferability and can be applied to many different types of neural network structures. Although JSMA has a high attack success rate, its attack depends on the Jacobian matrix of the input example, and because the Jacobian matrices of different examples differ greatly, JSMA does not have transferability. FGSM needs only one iteration to obtain the adversarial perturbation, so its attack efficiency is higher, but its attack success rate is not as good as that of iterative attack algorithms such as PGD. Compared with FGSM, JSMA, and other attacks, the adversarial perturbation generated by the DeepFool attack is relatively small, but DeepFool cannot perform targeted attacks. UAP builds on the idea of DeepFool to achieve better generalization ability, realizing universal attacks across models and data sets and providing technical support for attack requirements in real scenarios. One-Pixel achieves the purpose of the attack by modifying a single pixel; compared with other algorithms, the generated adversarial examples are more deceptive, but they require multiple rounds of iteration to find the optimal solution, so the attack efficiency is low. The C&W attack is highly aggressive; compared with L-BFGS, FGSM, DeepFool, and other attack methods, C&W can successfully break the defense of defensive distillation, but at the cost of attack efficiency. UPSET and ANGRI were proposed at the same time, but UPSET does not depend on the properties of the input data and can achieve universal attacks, while ANGRI cannot, because it depends on the properties of the input data during training. AdvGAN, DaST, and GAP++ all use generative adversarial networks in the attack process, and the adversarial examples they form have a strong attack effect, because the examples generated through the game between the generator and the discriminator are highly similar to the original examples.
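Complementing the single-step FGSM sketch given in the response to Reviewer 2, the following is a hedged sketch of an iterative attack in the PGD style mentioned in the comparison above; the step size, step count, and [0, 1] pixel range are assumptions for illustration only.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, epsilon=8 / 255, alpha=2 / 255, steps=10):
    """PGD: repeat small FGSM-like steps, projecting back into the L_inf ball around x."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        # Project back into the epsilon-ball around x and the valid pixel range.
        x_adv = torch.min(torch.max(x_adv, x - epsilon), x + epsilon)
        x_adv = x_adv.clamp(0.0, 1.0)
    return x_adv.detach()
```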

 

 

Point 2: For defense, please add the following papers.

 

 

1) Samangouei, P.; Kabkab, M.; Chellappa, R. DEFENSE-GAN: Protecting Classifiers Against Adversarial Attacks Using Generative Models. In Proceedings of the 6th International Conference on Learning Representations ICLR, Vancouver, BC, Canada, 30 April–3 May 2018.

2) Sunoh Choi, Malicious PowerShell Detection Using Attention against Adversarial Attacks, Electronics, 2020.

 

Response 2: Thank you for the papers you shared, which helped us to improve our paper. The supplementary references are as follows:

  1. Samangouei P, Kabkab M, Chellappa R. Defense-GAN: Protecting Classifiers Against Adversarial Attacks Using Generative Models[C]//International Conference on Learning Representations. 2018.
  2. Choi S. Malicious PowerShell Detection Using Attention against Adversarial Attacks[J]. Electronics, 2020, 9(11): 1817.
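As a rough, non-authoritative sketch of the idea behind reference 1 above, Defense-GAN style purification searches for a latent code whose generated image is closest to the input and classifies that reconstruction instead of the raw input. The generator interface, latent dimension, and optimization settings below are assumptions, and the original method additionally uses several random restarts.

```python
import torch

def defense_gan_purify(generator, x, latent_dim=100, steps=200, lr=0.05):
    """Find a latent code z such that generator(z) is close to x, and return
    the reconstruction G(z) as the purified input for the downstream classifier."""
    z = torch.randn(x.size(0), latent_dim, requires_grad=True)
    optimizer = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        loss = ((generator(z) - x) ** 2).mean()  # reconstruction error in pixel space
        loss.backward()
        optimizer.step()
    with torch.no_grad():
        return generator(z)
```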

 

Round 2

Reviewer 3 Report

I think that the paper can be accepted and published.


