Peer-Review Record

Generative Aspect Sentiment Quad Prediction with Self-Inference Template

Appl. Sci. 2024, 14(14), 6017; https://doi.org/10.3390/app14146017
by Yashi Qin and Shu Lv *
Submission received: 27 February 2024 / Revised: 28 June 2024 / Accepted: 9 July 2024 / Published: 10 July 2024
(This article belongs to the Special Issue AI Empowered Sentiment Analysis)

Round 1

Reviewer 1 Report

Comments and Suggestions for Authors

The manuscript discusses the significance and complexity of Aspect Sentiment Quad Prediction within Aspect-Based Sentiment Analysis, highlighting the use of the T5 model for end-to-end extraction of aspect sentiment elements through template-based paraphrasing. It introduces a Self-Inference Template (SIT) that enables more accurate identification of aspect sentiment elements and their interdependencies, demonstrates significant improvements in quadruplet prediction performance without increasing time costs, and mitigates, to some extent, the overfitting caused by the limited data volume.

The proposed approach is noted for its novelty in encouraging models to contemplate and reason gradually, showing a significant improvement in prediction performance on the ASQP and ACOS datasets.
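For context, the template-based paraphrasing that this line of work builds on can be illustrated with a minimal Python sketch; the template wording and function below are illustrative and do not reproduce the authors' exact Self-Inference Template:

# Minimal sketch: linearizing one sentiment quad into a natural-language
# target string for a text-to-text model, in the common paraphrase style.
# The template wording is illustrative, not the manuscript's actual SIT.

def quad_to_target(aspect_term: str, aspect_category: str,
                   sentiment_polarity: str, opinion_term: str) -> str:
    # ACOS-style data marks implicit aspects/opinions as NULL; a common
    # convention maps them to the pronoun "it" in the target sentence.
    at = aspect_term if aspect_term != "NULL" else "it"
    ot = opinion_term if opinion_term != "NULL" else "it"
    return f"{aspect_category} is {sentiment_polarity} because {at} is {ot}"

# e.g. prints: food quality is great because pizza is delicious
print(quad_to_target("pizza", "food quality", "great", "delicious"))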

The work is novel and constitutes a good contribution to the area of language analysis; it is well organized. However, we detected some minor formatting issues: on line 25, the period appears displaced, and line 27 cannot start with a reference indicator.

The article's references seem up-to-date: of the 25 cited works, 15 date from 2020 onwards.

We suggest expanding the conclusions section to provide the most relevant quantitative details among the results. Additionally, we suggest that Section 6.1 be relocated to the end of Section 5, as a kind of discussion of the results or "practical insights".

In summary, I recommend acceptance with minor corrections.

Author Response

Author Response File: Author Response.pdf

Reviewer 2 Report

Comments and Suggestions for Authors

Motivated by chain-of-thought-based templates, the paper presents an interesting approach that apparently improves on SOTA transformer-based aspect-based sentiment analysis. The approach offers sufficient originality and performance for publication in this journal, provided the following improvements are made:

1. In the stated contributions and in the main body of the paper, the innovation of enhancing the prompts and the token selection is presented as a peripheral development arising from experimentation rather than from a scientific process. This devalues and misinterprets the innovation and should be changed throughout the document where appropriate.

2. The third contribution, 'mitigating overfitting', is an expected step rather than a novel contribution and should be rethought.

3. The running examples do not motivate the complexity of the methodologies used. The examples in Figure 8 can be easily resolved using generic generative AI tools.

4. The work should consider the following two references to improve the survey of relevant works and to evaluate the impact of the paper's findings:

a) “Zhang, W., Li, X., Deng, Y., Bing, L. and Lam, W., 2022. A survey on aspect-based sentiment analysis: Tasks, methods, and challenges. IEEE Transactions on Knowledge and Data Engineering.”

b) “Jinsong Su, Jialong Tang, Hui Jiang, Ziyao Lu, Yubin Ge, Linfeng Song, Deyi Xiong, Le Sun, Jiebo Luo, Enhanced aspect-based sentiment analysis models with progressive self-supervised attention learning, Artificial Intelligence, Volume 296, 2021.”

Author Response

Author Response File: Author Response.pdf

Reviewer 3 Report

Comments and Suggestions for Authors

The article is interesting and shows a contribution in the field of ABSA.

On lines 28, 33, and 34 of the first page, x_ac, x_sp, etc. are mentioned. It is important that the authors specify what these terms denote so that readers can better understand them.


The authors must explain how their model works exactly. Apparently what they do is use the BERT approach for the fine-tuning process, but the T5 developers state the following: "With T5, we propose reframing all NLP tasks into a unified text-to-text format where the input and output are always text strings, in contrast to BERT-style models that can only output either a class label or a span of the input. Our text-to-text framework allows us to use the same model, loss function, and hyperparameters on any NLP task."
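For reference, the text-to-text usage the quoted passage describes can be sketched with the Hugging Face transformers API; the base checkpoint, input sentence, and generation settings below are illustrative placeholders, not the manuscript's fine-tuned setup:

# Minimal sketch of T5's text-to-text interface: both the input and the
# output are plain strings, unlike BERT-style heads that emit a class
# label or a span. Checkpoint and prompt are illustrative.
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")

inputs = tokenizer("The pizza was delicious but the service was slow.",
                   return_tensors="pt")
output_ids = model.generate(**inputs, max_length=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))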

Author Response

Author Response File: Author Response.pdf

Round 2

Reviewer 3 Report

Comments and Suggestions for Authors

Good work!