Article
Peer-Review Record

Fine-Grained Sentiment-Controlled Text Generation Approach Based on Pre-Trained Language Model

Appl. Sci. 2023, 13(1), 264; https://doi.org/10.3390/app13010264
by Linan Zhu, Yifei Xu, Zhechao Zhu, Yinwei Bao and Xiangjie Kong *
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Reviewer 3: Anonymous
Reviewer 4:
Submission received: 15 November 2022 / Revised: 11 December 2022 / Accepted: 20 December 2022 / Published: 26 December 2022
(This article belongs to the Special Issue AI Empowered Sentiment Analysis)

Round 1

Reviewer 1 Report

Review for Applied Sciences

 

Abstract

 

It would be helpful to provide a brief explanation of what is meant by "finer-grained control" in the context of sentiment-controlled text generation. This could include a brief description of the differences between document- and sentence-level sentiment control, and the limitations of these approaches.

 

The abstract could benefit from a more detailed description of the proposed pre-trained model extended generative model and its key features. This could include a brief overview of how the model is trained, as well as any unique or novel aspects of the model that set it apart from existing approaches. Also, the word "model" appears twice; is it supposed to be "pre-trained extended generative model"?

 

The mention of an "auxiliary classifier" in the abstract is somewhat vague and could be elaborated upon. For example, it could be helpful to explain how the auxiliary classifier is used in the proposed approach, and how it contributes to the overall performance of the model.

 

The abstract could also benefit from a more detailed description of the experimental results and their implications. For example, it could be helpful to provide specific metrics that demonstrate the model's excellent adaptability and high sentiment coverage, as well as any comparisons to other existing approaches. This could help to further convince readers of the effectiveness and potential impact of the proposed approach.

 

Introduction

 

The introduction provides a clear overview of the current state of the art in Transformer-based pre-trained language models (LMs) and their applications in natural language generation (NLG) tasks. The discussion of the challenges and limitations of controlling attributes of generated text without modifying the model architecture or fine-tuning with attribute-specific data is well-written and relevant to the proposed approach.

 

I would suggest providing more specific examples and details to help illustrate the key points and challenges of the task. For instance, it could be helpful to provide concrete examples of aspect-level sentiment information and how it is used to control the content of generated text before the formal description of the task in lines 43-49. This could help to better understand the proposed approach and its potential contributions to the field.

 

I would recommend providing a more thorough review of the limitations of existing approaches and how the proposed approach aims to overcome them. This could include a brief summary of the key features and advantages of the proposed approach, as well as any relevant experimental results or comparisons to other methods. It would be helpful to understand how a classifier assisted in training the generator on a large unlabeled dataset (line 48). This could help to establish the potential impact and significance of the proposed approach.

 

Related work

 

The related work section provides a thorough and well-written review of existing approaches to controlled text generation. The discussion of the key features and limitations of these approaches is clear and relevant to the proposed approach.

 

In lines 61-77, I would suggest providing more specific examples and details to help illustrate the key points and challenges of the task. For instance, it could be helpful to provide concrete examples of different types of control codes and how they are used to control the content of generated text. This could help to better understand the differences between the proposed approach and existing approaches.

 

In lines 93-94, it would be useful to understand how Chen et al. applied the classifier to enhance the generation. Is this similar to or different from your approach?

 

In addition, I would recommend providing a more thorough comparison of the proposed approach to existing methods, including a discussion of the key advantages and disadvantages of the proposed approach. For example, to give the reader better intuition, can you provide concrete examples that are only possible with your approach but not with older approaches?

 

Method and experiments

 

Overall, the method could benefit from clearer explanations. I don't understand the motivation behind the f_{hint} function. Is this standard practice or something you are proposing? It would be helpful to explain the sentiment control loss functions intuitively before the equations are presented, so that a reader can skip the equations if they agree that the approach makes sense.

 

Line 102: ...which was trained... (missing was)

Line 111: You talk about triples but aspect-sentiment indicates that it is a tuple. Please clarify further in the text at this point to make the meaning clearer.

Line 117: The lower case variable symbol is a strange choice. Can you justify it or use a name such as \ell instead?

Line 118-119: What should the string et. mean? Should it be i.e.?

Line 125-126: Please make this grammatically correct (...our proposed method in the basic of a text generator...).

Line 128-129: Could you not have performed an ablation study where you train without the pseudo-label dataset and then with it, to confirm that it improved performance? Do you have any justification for skipping such an ablation study?

Line 136-138: It is not clear why you train in this order instead of mixing the datasets. Can you clarify why?

 

Figure 1 caption: Casual -> causal

 

Line 161: inner->inside

Line 183: Remove "Since" at the beginning to make the sentence grammatically correct.

Line 190: Should it be causal instead of casual? (casual would not make sense here...)

Line 199: inner -> within

Equation 3: What is p(x) in the definition of p_{\mathrm{max}}(x)?

 

Line 226-228: Make this grammatically correct.

After equation (6): gird -> grid.

Line 233-234: It would be useful to understand intuitively how the grid-formed tagging schema works. Also, serious -> series.

Table 1 caption: A sentence in the ... (indefinite article missing)

Line 271-275: Can you explain why you use GloVe and fastText? Is the performance much worse if you use just one?

Line 276: Why did you not compare with models such as GPT-3? I suspect it would perform well on the task you describe in lines 283-284.

It seems like T5-large is the closest model to yours in Table 3. Could you show some input that your model can solve but T5-large cannot? It could be an addition to Figure 6.

Regarding Figure 6: The first example is logically strange; why would the staff throw in some dessert if the meal was great? The other two have missing words (to have [been] greeted by, ...but [the] service was horrible). What is the reason for these missing words? Is it a common problem? Even if the sentences are more linguistically complicated, it is not ideal that they are not grammatically correct.

 

Conclusion

Line 337: How would the sentiments be expressed implicitly?

 

References

Some of the references refer to preprints even though they have been published at conferences. The authors should cite the peer-reviewed source, such as reference 23, which was published at EMNLP.

Author Response

Dear reviewer(s),

Thank you for your valuable suggestions, which have considerably improved our work.

These opinions are highly detailed and the structure is very reasonable. We have made corresponding modifications in our paper, according to these comments. Please see the attachment for specific responses.

Thank you again for your precious suggestions!

Author Response File: Author Response.pdf

Reviewer 2 Report

I have examined in detail the work you have done titled “Fine-grained Sentiment Controlled Text Generation Approach Based on Pre-trained Language Model”. The points that I think are missing are listed below.

A paragraph about the organization of the article should be added at the end of the Introduction section.

The model proposed in the abstract should be highlighted.

Limitations of the study should be included.

The accuracy values obtained in the study should be compared with those of studies in the literature.

The proposed model should be explained more clearly and concisely.

The contributions and innovation of the model should be stated more clearly under the "Our Contributions:" title. Spelling errors in the study should be reviewed.

Best Regards.

Author Response

Dear reviewer(s),

Thank you for your valuable suggestions, which have considerably improved our work.

These opinions are really helpful and the structure is very reasonable. We have made corresponding modifications in our paper, according to these comments. Please see the attachment for specific responses.

Thank you again for your precious suggestions!

Author Response File: Author Response.pdf

Reviewer 3 Report

The manuscript introduced and tested a pre-trained model extended generative model that addresses the low-adaptability issues in generating aspect-level sentiments. The novel query-hint-based guiding mechanism for the generation process enhanced the performance results in training with both real-world annotated and unannotated datasets. Aspect-level sentiment-controllable review texts are used to demonstrate the high sentiment coverage and stable quality of the proposed strategy. However, I have the following comments:

 

1) The readability of the manuscript would be enhanced if figures and tables were placed after they are first mentioned in the text. Figure 3 should be placed in Section 3.3. Similarly, Figure 6 should be placed in Section 4.4. Figures 2 and 5 are not mentioned in the text of the manuscript.

2) Tables 2 and 3 should be placed in Sections 4.3.1 and 4.3.2, respectively. Please make sure all figures and tables are properly mentioned in the corresponding sections, in order.

Author Response

Dear reviewer(s),

Thank you for your valuable suggestions, which have considerably improved our work.

These opinions are really helpful and the structure is very reasonable. We have made corresponding modifications in our paper, according to these comments. Please see the attachment for specific responses.

Thank you again for your precious suggestions!

Author Response File: Author Response.pdf

Reviewer 4 Report

The submission proposes a fine-grained sentiment-controlled text generation approach with higher performance than previous works. This research contributes to the research field and is quite well structured. My recommendations for further improvement are listed below:

1- In the Abstract, please add the limitation(s) of the proposed work and provide a concise suggestion for future work.

2- In the Introduction, please add a final paragraph that briefly explains the following sections of the paper.

3- The authors are encouraged to state the full phrase before using any abbreviation. Adding an abbreviations section at the end of the paper, before the reference section, is also beneficial for readers to refer to.

4- Please update the references included in 2. Related Work by adding recent studies from 2021 and 2022.

5- Please draw a flowchart to represent the steps in 3.1 Main Framework. It helps readers have a visual perception of the work.

6- Please insert the Figures and Tables after they are used in the text. 

7- Figure 2 is not referred to in the text. Please explain it in the text.

8- In 4. Experiments, please mention the name of the software used for the experiment and other important details to help researchers replicate this study.

9- Please briefly introduce the subsections of 4.1 in the space between 4.1 and 4.1.1 (line 338). Please do the same for 4.3.

10- Please define the abbreviations used in the note sections of Figures and Tables.

11- Please double-check all equations and ensure they are correct and their parameters are defined sufficiently. 

12- The current submission requires proofreading, as some incomplete and unclear sentences hinder demonstrating the importance of the work. Please avoid long paragraphs (more than approximately 7 sentences), as they demotivate the reader. I highlighted some concerns about the English language in the attached file.

13- For other corrections, please refer to the attached file.

14- Please highlight the corrections for my comments in the revised file to speed up the review process.

All the best.

Comments for author File: Comments.pdf

Author Response

Dear reviewer(s),

Thank you for your valuable suggestions, which have considerably improved our work.

These opinions are highly detailed and the structure is very reasonable. We have made corresponding modifications in our paper, according to these comments. Please see the attachment for specific responses.

Thank you again for your precious suggestions!

Author Response File: Author Response.pdf
