Article
Peer-Review Record

Affection Enhanced Relational Graph Attention Network for Sarcasm Detection

Appl. Sci. 2022, 12(7), 3639; https://doi.org/10.3390/app12073639
by Guowei Li, Fuqiang Lin, Wangqun Chen and Bo Liu *
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Submission received: 2 March 2022 / Revised: 22 March 2022 / Accepted: 31 March 2022 / Published: 4 April 2022
(This article belongs to the Topic Machine and Deep Learning)

Round 1

Reviewer 1 Report

Sarcasm detection is one of the important research problems in Natural Language Processing. Previously, it has been addressed using contextual information and Graph Convolutional Networks. The authors propose a solution based on an affection-enhanced relational graph attention network: they utilize a relational graph attention network together with a GCN and then apply classification. The work looks promising and outperforms state-of-the-art solutions. However, there are a few observations, and the paper cannot be accepted in its current form until they are addressed. The comments are as follows:

  1. The abstract can be improved by providing the metrics utilized to analyze the results and the exact percentage improvement of the proposed solution over state-of-the-art work.
  2. The introduction section is well written and explains the problem with the help of examples.
  3. The related work section can be improved by providing complete information on the different aspects of sarcasm detection. The authors can improve this section by classifying previous works into traditional machine learning, deep learning, and graph-based approaches. The related work section must include a table of the different approaches, techniques, and significant achievements.
  4. The paper's main contributions are missing after the related work section; I would prefer the contributions to be listed in bullet form.
  5. At this point, the authors should tell the reader about the rest of the sections in the paper in two to three lines.
  6. The experimental setup is provided, but the implementation details are missing: what was utilized in conducting the experiments (tools, libraries, language, etc.).
  7. There is no information about the preprocessing steps.
  8. The authors have utilized only two evaluation metrics, accuracy and F1-score. It is preferable to use additional metrics to verify the proposed solution, e.g., precision, recall, mean reciprocal rank, mean average precision, root mean square error, etc.
  9. The discussion of the achieved results can be expanded; currently, it is brief.
  10. The conclusion must be improved.
  11. Some general comments for improvement:
    1. More figures can be included to explain the architecture in the methodology section.
    2. The language of the paper can be improved.
    3. Grammatical errors must be corrected.
    4. Missing information in the references should be completed.

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 2 Report

The state-of-the-art review needs some improvement; some recent papers are not cited, such as:

Chenwei Lou, Bin Liang, Lin Gui, Yulan He, Yixue Dang, and Ruifeng Xu. 2021. Affective Dependency Graph for Sarcasm Detection. In Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval. Association for Computing Machinery, New York, NY, USA, 1844–1849. https://doi.org/10.1145/3404835.3463061

Nastaran Babanejad, Heidar Davoudi, Aijun An, and Manos Papagelis. 2020. Affective and Contextual Embedding for Sarcasm Detection. In Proceedings of the 28th International Conference on Computational Linguistics. International Committee on Computational Linguistics, Barcelona, Spain (Online), 225–243. https://doi.org/10.18653/v1/2020.coling-main.20

In the sentence "The proposed model(ARGAT) first extract contextual information based on BiLSTM and sends the contextual embeddings to RGAT layers and GCN layers.", references should be added for BiLSTM, RGAT, and GCN.
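The sentence quoted above describes a BiLSTM contextual encoder whose token embeddings are passed to RGAT and GCN layers before classification. A minimal sketch of such a pipeline, assuming PyTorch with torch_geometric, with the layer sizes, pooling step, and relation count chosen purely for illustration (this is not the authors' implementation), is:

```python
import torch
import torch.nn as nn
from torch_geometric.nn import GCNConv, RGATConv


class ARGATSketch(nn.Module):
    """Hypothetical sketch: BiLSTM encoder -> RGAT + GCN over a token graph -> classifier."""

    def __init__(self, vocab_size, emb_dim=300, hidden=128, num_relations=3, num_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.bilstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.rgat = RGATConv(2 * hidden, hidden, num_relations=num_relations)
        self.gcn = GCNConv(2 * hidden, hidden)
        self.classifier = nn.Linear(2 * hidden, num_classes)

    def forward(self, token_ids, edge_index, edge_type):
        # Contextual token embeddings from the BiLSTM (a batch of one sentence here).
        ctx, _ = self.bilstm(self.embed(token_ids))   # (1, seq_len, 2 * hidden)
        nodes = ctx.squeeze(0)                        # one graph node per token
        h_rel = torch.relu(self.rgat(nodes, edge_index, edge_type))
        h_gcn = torch.relu(self.gcn(nodes, edge_index))
        # Fuse both graph views and mean-pool over tokens for a sentence-level label.
        sent = torch.cat([h_rel, h_gcn], dim=-1).mean(dim=0)
        return self.classifier(sent)
```

Running it would only require token ids for one sentence plus a graph over its tokens encoded as edge_index and edge_type tensors.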

The use of acronyms should be moderate and appropriate; for example, NLP should be defined even if it is well known.

Table 2 is too big, and I suggest changing the label "our" to "proposed".


Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Round 2

Reviewer 1 Report

The authors have satisfactorily addressed my observations, so the paper is now ready to proceed.
