Article
Peer-Review Record

Defacement Detection with Passive Adversaries

Algorithms 2019, 12(8), 150; https://doi.org/10.3390/a12080150
by Francesco Bergadano 1,*, Fabio Carretto 2, Fabio Cogno 2 and Dario Ragno 2
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Reviewer 3: Anonymous
Submission received: 28 May 2019 / Revised: 17 July 2019 / Accepted: 25 July 2019 / Published: 29 July 2019

Round 1

Reviewer 1 Report

The paper proposes a novel keyed learning approach to defacement detection that prevents an adversary from simulating the learning process. A keyed, adaptive defacement detection system has been implemented, and an extensive evaluation of the system, monitoring production websites, has been performed to demonstrate its effectiveness.

The paper is well written and organized, and the proposed approach is well described. However, an experimental evaluation showing that the proposed approach can effectively prevent adversarial machine learning should be provided, as this is the major motivation for the work.


Author Response

Dear Reviewer,


Thank you for analyzing our paper and for your comments.


We are submitting a revision, as suggested by the Editor, that takes into account your suggestions.

In particular:

- section 5.3 has been added to better motivate the experimental setting and to discuss its limitations. As a consequence, the conclusions were adapted and shortened.
- section 3.1 was added to compare with work on classifier randomization and to discuss the limitations of both approaches in adversarial learning.
- the abstract and the introduction were changed to anticipate the application definition and to enhance readability.
Best regards,
the Authors

Reviewer 2 Report

The authors propose and evaluate a new method for defacement detection, explicitly addressing the presence of a passive adversary who tries to guess the anomaly detector’s behaviour. They consider an exploratory adversarial setting and define a learning methodology based on the use of a secret key.


The submission comes timely and is well motivated. The solutions provided are technically solid. The presentation is clear, and it is easy to follow the ideas described. The references describe the state of the art of the field quite well.


A proofread is needed.

Author Response

Dear Reviewer,


Thank you for your report.


We are submitting a revision, as suggested by the Editor. We have proofread the entire paper and modified the introduction, the abstract, and the conclusion so that they are easier to understand.


Best regards,
the Authors

Reviewer 3 Report

The paper needs to undergo a thorough revision in order to clearly point out the rationale behind the proposed method, and thus provide a sound methodological foundation for the proposed techniques.

The authors should clearly define the technical meaning of the term "defacement" so that the detection algorithm clearly follows from the definition. They should also clearly point out why defacement detection is important, and what performance such an algorithm should exhibit in order to be of practical interest. Then, the issue of adversarial learning should be presented, and the proposed solution clearly discussed.

As far as the proposed technique is concerned, there is no evidence in the paper, from either a methodological or an experimental point of view, that using a secret key actually allows achieving good performance and makes evasion more difficult. The authors should further elaborate on this by providing sound arguments that good performance can be achieved through the randomness introduced by the secret key. Indeed, some recent works (see, for example, more recent papers by Biggio et al., some of which are included in the references) question the validity of randomness as a way to address evasion at test time.

It is also mandatory to clearly motivate each of the components of the proposed mechanism, so that its role in the detection process is clear, by making reference to the technical definition of "defacement".

The experimental results do not allow assessing the effectiveness of the proposed mechanism in the presence of adversaries. In addition, the accuracy is not compared to that of other mechanisms, so it is not possible to conclude whether the proposed algorithm provides an advance with respect to the state of the art. The experimental results show that, at least in some experimental settings, some detection capability can be seen. But there is no guarantee that the data used for the experiments are representative of general working conditions.


As for the organisation of the paper, while all sections need to be carefully revised to correct typos and broken sentences and, more generally, to make sure that each statement follows fully from the preceding ones, I strongly recommend that the abstract and the introduction be thoroughly revised to clearly present the problem and a summary of the proposed solution. In its present form:

- the abstract puts together "defacement" and "learning process" without clarifying the relationship between the two. There is no reference to the fact that the paper is about the detection of defacement through a machine learning algorithm;

- the role of the secret key mentioned in the abstract is not clear from the abstract alone, and it is quite difficult for the reader to understand the topic of the paper;

- the introduction suffers from similar issues, as it assumes that the reader is already familiar with the main issues addressed by the paper. Actually, part of the material that is now in Section 2 should be moved to the Introduction, which must clearly state the problem (what is defacement? why is it a research problem? what are the current issues that state-of-the-art solutions do not address properly? why is the proposed solution expected to address them?)

In conclusion, while the paper addresses a potentially challenging issue, the problem formulation and the proposed solution are not adequately presented, so the effectiveness of the proposed mechanism cannot be assessed from a methodological point of view. Moreover, the reported experiments do not support the effectiveness of the proposed method, as the experimental setup is limited and does not address the adversarial learning issue.

Author Response

Dear Reviewer,


Please find enclosed our response, with the summary of the revisions requested in your report.


Thank you and best regards,

the Authors

Author Response File: Author Response.pdf

Round 2

Reviewer 1 Report

The revised version has addressed some reviewer comments, but it did not provide the essential additional experimental evaluation showing that the proposed approach can effectively prevent adversarial machine learning.


Author Response

Dear Reviewer,

Thank you for evaluating our paper again and for stating that we have addressed some of your previous comments.

In this revision we provide the additional experimental evaluation (section 5.2) to show that the proposed approach can effectively prevent adversarial action. The same newspaper scenario as in section 5.1 is used, but defacements are chosen by an adversary. This is done in two cases: (a) when the adversary knows the key used by the adaptive defacement detector and (b) when the adversary has no such knowledge. The results show that keyed learning is effective, and produces a dramatic improvement in the undetected defacement rate (UDR) when the adversary does not know the key.

The introductory paragraph of section 5 has been changed accordingly, as well as section 5.4 on limitations, and the conclusions.

Best regards,
the Authors


Reviewer 3 Report

The authors have addressed the vast majority of concerns, and the overall quality of the paper has improved.

I still do not fully agree with the authors as far as the adversarial setting is concerned. In particular, the benefit of keyed learning compared to traditional learning is not evident. In other words, it would have made sense to see the results of two algorithms, both based on the same set of features but only one of them based on keyed learning, and compare their performance. The reported results show that the overall system can provide effective detection results, but the need for keyed learning cannot be assessed, given the lack of a comparison with detection results obtained without keyed learning. While the rationale behind the use of keyed learning is clearly stated, it cannot be claimed that the good performance is due to the use of keyed learning, as no comparison without that component has been performed.

Author Response

Dear Reviewer,

Thank you for evaluating our paper again and for stating that we have addressed the vast majority of your concerns, improving the overall quality of our paper.

To address the need for evaluating the benefit of keyed learning, a new experiment has been done, and a corresponding section 5.2 has been added. The same newspaper scenario as in section 5.1 is used, but defacements are chosen by an adversary. This is done in two cases: (a) when the adversary knows the key used by the adaptive defacement detector and (b) when the adversary has no such knowledge. The results show that keyed learning is effective, and produces a dramatic improvement in the undetected defacement rate (UDR) when the adversary does not know the key.

The introductory paragraph of section 5 has been changed accordingly, as well as section 5.4 on limitations, and the conclusions.

Best regards,
the Authors
