Article
Peer-Review Record

Joint Syntax-Enhanced and Topic-Driven Graph Networks for Emotion Recognition in Multi-Speaker Conversations

Appl. Sci. 2023, 13(6), 3548; https://doi.org/10.3390/app13063548
by Hui Yu, Tinghuai Ma *, Li Jia, Najla Al-Nabhan and M. M. Abdel Wahab
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Reviewer 3:
Reviewer 4:
Submission received: 22 February 2023 / Revised: 6 March 2023 / Accepted: 7 March 2023 / Published: 10 March 2023
(This article belongs to the Special Issue Natural Language Processing (NLP) and Applications)

Round 1

Reviewer 1 Report

The authors propose a model for emotion recognition in conversations with multiple participants. This model, called Joint Syntax-Enhanced and Topic-Driven Graph Networks for Emotion Recognition in Multi-Speaker Conversations (SETD-ERC), consists of three modules: syntax, topic, and dialogue interaction. The experiments compared the performance of their model against that of nine other models on four datasets, both the models and the datasets being well known in the literature. The model proposed by the authors outperformed the others on three of the four datasets.


My comments are:

1. Lines 186 and 239 cannot be seen in full.

2. For Figure 2, it is suggested to define all the abbreviations included or to add a reference where they can be consulted.

3. There are missing parentheses in Formula (1).

4. Mention the reasons for the parameter settings given in Parameter Setting (Section 4.2). Were they chosen, for example, after performing a parameter optimization?

5. It is suggested to show more examples of conversations, or a slightly longer conversation.

6. In the results tables, highlight the best values in some way.


7. In Table 4, the color of the prediction label is missing for utterance 08.

Author Response

We are very grateful to the reviewers for their suggestions. We have carefully read each suggestion and responded to each one in the attachment. Please see the attachment.

Author Response File: Author Response.docx

Reviewer 2 Report


I have gone through the manuscript “Joint Syntax-Enhanced and Topic-Driven Graph Networks for Emotion Recognition in Multi-Speaker Conversations”. My concerns are given below.

· The abstract is too general and needs to be more specific, with more numeric data.

· The authors give a great deal of detail in the methodology section, but it is not focused; it should be reorganized into clear steps and presented in a more focused way.

· The authors compare their results with other baseline models, but more discussion is needed of the advantages and disadvantages relative to each model.

· A focused discussion should be added instead of excessive detail.

Author Response

We are very grateful to the reviewers for their suggestions. We have carefully read each suggestion and responded to each one in the attachment. Please see the attachment.

Author Response File: Author Response.docx

Reviewer 3 Report

This paper proposes a graph network that combines syntactic structure and topic information, mining the hidden meaning of utterances and using graph convolutional neural networks to extract the extended meaning of utterances. I think some questions should be addressed.

1) In the abstract, the highlights of the article should be clarified; only the motivation and method are described in this section.

2) In Figure 1, the softmax function is used. Given the diversity of language data, could different activation functions be chosen intelligently?
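
To make the comment concrete: the reviewer is effectively asking whether the output activation could be a configurable choice. A minimal sketch of that idea in PyTorch (the EmotionHead class, the ACTIVATIONS table, and all dimensions are illustrative assumptions, not taken from the manuscript, which uses softmax):

```python
import torch
import torch.nn as nn

# Illustrative table of selectable output activations; the manuscript
# itself uses softmax (Figure 1). Names here are hypothetical.
ACTIVATIONS = {
    "softmax": lambda: nn.Softmax(dim=-1),
    "sigmoid": lambda: nn.Sigmoid(),
    "log_softmax": lambda: nn.LogSoftmax(dim=-1),
}

class EmotionHead(nn.Module):
    """Hypothetical classifier head with a configurable activation."""

    def __init__(self, hidden_dim: int, num_classes: int, activation: str = "softmax"):
        super().__init__()
        self.proj = nn.Linear(hidden_dim, num_classes)
        self.act = ACTIVATIONS[activation]()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, hidden_dim) utterance representations
        return self.act(self.proj(x))

head = EmotionHead(hidden_dim=128, num_classes=6, activation="softmax")
probs = head(torch.randn(4, 128))  # shape (4, 6), one distribution per utterance
```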

Author Response

We are very grateful to the reviewers for their suggestions. We have carefully read each suggestion and responded to each one in the attachment. Please see the attachment.

Author Response File: Author Response.docx

Reviewer 4 Report

In this paper, the authors summarize the challenges of emotion recognition in multi-speaker dialogue, focusing on the context-topic switching problem caused by multiple speakers. To address this challenge, the paper proposes a graph network that combines syntactic structure and topic information. The authors represent the entire dialogue passage as a heterogeneous graph, where each utterance is a graph node and inter-speaker interaction is modeled as an edge between two nodes.
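
For illustration, a minimal networkx sketch of this dialogue-as-graph construction (the toy utterances, the consecutive-turn edge rule, and the inter-/intra-speaker labels are assumptions for readability, not the paper's exact design):

```python
import networkx as nx

# Toy dialogue; field names are illustrative placeholders.
dialogue = [
    {"id": 0, "speaker": "A", "text": "I got the job!"},
    {"id": 1, "speaker": "B", "text": "That's wonderful news."},
    {"id": 2, "speaker": "A", "text": "I start next week."},
]

g = nx.DiGraph()
for utt in dialogue:
    # Each utterance becomes one node of the heterogeneous graph.
    g.add_node(utt["id"], speaker=utt["speaker"], text=utt["text"])

for prev, curr in zip(dialogue, dialogue[1:]):
    # Edge type records whether the interaction crosses speakers.
    relation = "inter-speaker" if prev["speaker"] != curr["speaker"] else "intra-speaker"
    g.add_edge(prev["id"], curr["id"], relation=relation)

print(g.edges(data=True))
# [(0, 1, {'relation': 'inter-speaker'}), (1, 2, {'relation': 'inter-speaker'})]
```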

The paper falls within the scope of the journal, and it presents an original contribution. The abstract is somewhat sufficient to give useful information about the paper’s topic. The proposed approach is described and thoroughly illustrated. The paper is reasonably well structured and written, and the text is clear and easy to read. However, there are some comments we recommend the authors address:

At the end of the abstract, it is worthwhile to present your best results as percentages in comparison to other existing approaches.

In the introduction section, the related work section, or where appropriate, you need to write about graph networks and their applications, in general and in emotion recognition. Accordingly, you may need to cite and add the following recent references:

Wan, H.; Tang, P.; Tian, B.; Yu, H.; Jin, C.; Zhao, B.; Wang, H. Water Extraction in PolSAR Image Based on Superpixel and Graph Convolutional Network. Appl. Sci. 2023, 13, 2610. https://doi.org/10.3390/app13042610

Al-Shaikh, A.; Mahafzah, B.; Alshraideh, M. Hybrid Harmony Search Algorithm for Social Network Contact Tracing of COVID-19. Soft Computing 2023, 27, 3343–3365. https://doi.org/10.1007/s00500-021-05948-2

Wu, Z.; Liang, Q.; Zhan, Z. Course Recommendation Based on Enhancement of Meta-Path Embedding in Heterogeneous Graph. Appl. Sci. 2023, 13, 2404. https://doi.org/10.3390/app13042404

Lin, W.; Li, C. Review of Studies on Emotion Recognition and Judgment Based on Physiological Signals. Appl. Sci. 2023, 13, 2573. https://doi.org/10.3390/app13042573

Jo, A.-H.; Kwak, K.-C. Speech Emotion Recognition Based on Two-Stream Deep Learning Model Using Korean Audio Information. Appl. Sci. 2023, 13, 2167. https://doi.org/10.3390/app13042167

In Section 2 (Related work) and before Subsection 2.1, write one small overview paragraph about Section 2 and its Subsections 2.1 and 2.2.

In Section 3.4, you need to elaborate in more detail on the vanishing-gradient problem, where you can cite the following reference regarding this issue:

Abuqaddom, I.; Mahafzah, B.; Faris, H. Oriented Stochastic Loss Descent Algorithm to Train Very Deep Multi-Layer Neural Networks Without Vanishing Gradients. Knowledge-Based Systems 2021, 230, 107391. https://doi.org/10.1016/j.knosys.2021.107391
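
For illustration, one standard mitigation the authors could discuss is a residual connection, which gives gradients an identity path around each layer; a minimal PyTorch sketch (an assumption for context, not taken from the manuscript or the cited reference):

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Computes x + f(x): the identity path lets gradients bypass the nonlinearity."""

    def __init__(self, dim: int):
        super().__init__()
        self.layer = nn.Sequential(nn.Linear(dim, dim), nn.ReLU())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.layer(x)

# Even with 32 stacked blocks, gradients reach the early layers largely intact.
deep_net = nn.Sequential(*[ResidualBlock(64) for _ in range(32)])
out = deep_net(torch.randn(8, 64))
out.sum().backward()
```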

In Section 4 (Experimental setup) and before Subsection 4.1 (Datasets), write one small overview paragraph about Section 4 and its Subsections 4.1, 4.2, and 4.3.

In Section 4, it is worthwhile to mention the hardware specifications, software, and tools you did use in your experiments.

In Section 5 (Experimental results and analysis) and before Subsection 5.1, write one small overview paragraph about Section 5 and its Subsections 5.1, 5.2, and 5.3.

The results in Tables 2 and 3 need more explanation and justification; explain the obtained results from the algorithm-design point of view.


At the end of the first paragraph of the conclusion section (Section 6), present your best results in terms of various performance metrics as values or percentages. 

Author Response

We are very grateful to the reviewers for their suggestions. We have carefully read each suggestion and responded to each one in the attachment. Please see the attachment.

Author Response File: Author Response.docx
