Aspect-Enhanced Prompting Method for Unsupervised Domain Adaptation in Aspect-Based Sentiment Analysis
Abstract
1. Introduction
- 1. We explore a severe domain adaptation setting for ABSA, in which labeled data from multiple source domains are available but data from the target domain are completely unavailable during training. This setting has received little attention in ABSA research.
- 2. We propose a novel domain-agnostic prompt generation framework that automatically generates prompts highlighting keywords important for sentiment classification.
- 3. We evaluate the effectiveness of the proposed method through experiments on diverse sentiment analysis datasets.
2. Related Work
2.1. Unsupervised Domain Adaptation
2.2. Multi-Source Domain Adaptation
2.3. Example-Based Prompt Learning
3. Proposed Method
3.1. Problem Settings
3.2. Overview of the Method
3.3. Training
3.3.1. Extraction of Aspect-Related Features
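This step selects, for each review, the words most strongly associated with the aspect; the citation of pointwise mutual information [29] and the setting of 3 ARFs per review (Section 4.2) suggest a PMI-based top-k selection. The sketch below follows that reading only as an illustration: the sentence-level co-occurrence counting, the rare-word filter, and the pre-tokenized input are our assumptions, not the authors' implementation.

```python
import math
from collections import Counter

def extract_arfs(token_sents, aspect, k=3):
    """Rank words by PMI(w, aspect) [29] over sentences and return the top k.
    PMI(w, a) = log( P(w, a) / (P(w) P(a)) ), with probabilities estimated
    from sentence-level co-occurrence counts (our assumption)."""
    n = len(token_sents)
    word_count = Counter()
    pair_count = Counter()
    aspect_count = 0
    for tokens in token_sents:
        words = set(tokens)
        word_count.update(words)
        if aspect in words:
            aspect_count += 1
            pair_count.update(w for w in words if w != aspect)
    scores = {}
    for w, c_wa in pair_count.items():
        if word_count[w] < 2:  # hypothetical rare-word filter
            continue
        scores[w] = math.log((c_wa / n) /
                             ((word_count[w] / n) * (aspect_count / n)))
    return [w for w, _ in sorted(scores.items(), key=lambda x: -x[1])[:k]]

# Toy usage: pre-tokenized review sentences mentioning the aspect "staff".
sents = [["great", "thai", "restaurant", "friendly", "staff"],
         ["friendly", "staff", "quick", "service"],
         ["food", "was", "great"]]
print(extract_arfs(sents, "staff"))
```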
3.3.2. Generation of Candidate Prompt
- (1) What is the sentiment of [aspect] considering [ARFs] in the review?
- (2) Predict the sentiment for [aspect] described as [ARFs].
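For concreteness, here is a minimal sketch of how such templates can be instantiated with an aspect and its ARFs. The slot names come from the examples above; joining multiple ARFs with spaces is our assumption (the filled prompts in the error analysis, e.g., "decent ambience annoying", are consistent with it).

```python
# The two candidate templates shown above; [aspect] and [ARFs] are slots.
TEMPLATES = [
    "What is the sentiment of [aspect] considering [ARFs] in the review?",
    "Predict the sentiment for [aspect] described as [ARFs].",
]

def fill_templates(aspect, arfs):
    """Instantiate every candidate template with the aspect and its ARFs."""
    arf_text = " ".join(arfs)  # assumed: ARFs joined by spaces
    return [t.replace("[aspect]", aspect).replace("[ARFs]", arf_text)
            for t in TEMPLATES]

for p in fill_templates("staff", ["friendly", "Thai", "restaurant"]):
    print(p)
```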
3.3.3. Prompt Scoring
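The paper defines its own scoring criterion; purely as a hedged illustration, one natural instantiation scores a candidate prompt by the likelihood a T5 classifier assigns to the gold sentiment word when the review is paired with that prompt. The input concatenation and the use of mean token negative log-likelihood below are our assumptions.

```python
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

tok = T5Tokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")
model.eval()

def prompt_score(review, prompt, gold_word):
    """Assumed criterion: probability of the gold sentiment word
    (e.g., 'positive') given the review followed by the prompt."""
    enc = tok(f"{review} {prompt}", return_tensors="pt")
    labels = tok(gold_word, return_tensors="pt").input_ids
    with torch.no_grad():
        nll = model(**enc, labels=labels).loss  # mean token NLL
    return torch.exp(-nll).item()  # higher = prompt elicits the gold label
```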
3.3.4. Training of Initial Models
3.3.5. Rescoring of the Prompts and Re-Training of the Models
Algorithm 1: Prompt Rescoring Algorithm.
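Without reproducing Algorithm 1 verbatim, its role per Sections 3.3.4 and 3.3.5 is to alternate between filtering prompts with the current models and re-training on the survivors. The loop below is our hypothetical reading; the score threshold (Section 4.5.2 sweeps 0.99, 0.98, and 0.95) and the fixed number of rounds are assumptions, and `train_fn`/`score_fn` are placeholder callbacks.

```python
def rescore_loop(train_set, candidate_prompts, score_fn, train_fn,
                 threshold=0.98, rounds=2):
    """Hypothetical reading of Algorithm 1: rescore candidate prompts with
    the current model, keep those above `threshold`, then retrain."""
    model = train_fn(train_set, candidate_prompts)      # initial model (Sect. 3.3.4)
    for _ in range(rounds):
        kept = [p for p in candidate_prompts
                if score_fn(model, train_set, p) >= threshold]
        if not kept:                                    # avoid emptying the pool
            break
        candidate_prompts = kept
        model = train_fn(train_set, candidate_prompts)  # re-training (Sect. 3.3.5)
    return model, candidate_prompts
```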
3.4. Cluster-Based Prompt Expansion
Algorithm 2: Cluster-Based Prompt Expansion Algorithm.
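Again as a sketch rather than the paper's Algorithm 2: prompts are embedded, grouped with k-means [31], and at inference the prompts in the cluster nearest a test example are all used, with their predictions aggregated by voting (Section 4.5.3 compares two voting strategies). The embedding source and the number of clusters below are our assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_prompts(prompt_vecs, n_clusters=5, seed=0):
    """Group prompt embeddings with k-means [31]; n_clusters is hypothetical."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed)
    labels = km.fit_predict(np.asarray(prompt_vecs))
    return km, labels

def expand(km, labels, prompts, query_vec):
    """Return every prompt in the cluster closest to the test example."""
    dists = np.linalg.norm(km.cluster_centers_ - query_vec, axis=1)
    nearest = int(np.argmin(dists))
    return [p for p, l in zip(prompts, labels) if l == nearest]
```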
4. Evaluation
4.1. Datasets
- Restaurant and Laptop: The datasets of the SemEval-2014 ABSA task [32], which contain reviews of various aspects of restaurants and laptops, are used.
- Device: The device domain dataset is taken from Toprak et al. [33]. It contains user reviews of electronic devices such as smartphones, tablets, and home appliances.
- Service: The service domain dataset comes from Hu and Liu [34]. It includes users’ feedback on service quality, staff responsiveness, and customer satisfaction.
- Location: The SentiHood dataset [35], which focuses on location-based sentiment analysis, is used. It consists of reviews evaluating perceptions of specific places or neighborhoods.
4.2. Experimental Settings
4.3. Models
- T5-base: A T5-base model fine-tuned on the training data from multiple source domains; it serves as the baseline for comparison. Unlike our framework, it applies no domain adaptation technique: it directly processes the review and the aspect and generates a sentiment word as the classification result (a minimal inference sketch follows this list).
- AutoPrompt [36]: AutoPrompt (AP) is a prompt-based method that automatically constructs prompts via gradient-guided search. We apply it to multi-source unseen domain adaptation of the aspect sentiment classification (ASC) task. Two pre-trained models, BERT and RoBERTa, are employed in this experiment (RoBERTa is the model used in the original paper [36]).
- LM-BFF [37]: LM-BFF (Better Few-Shot Fine-Tuning of Language Models) is a prompt-based fine-tuning approach designed to enhance model performance with a limited number of annotated training examples by using automatically generated prompts and demonstrations. Although this method is designed for few-shot learning, it is extended for multi-source unseen domain adaptation by using the entire training dataset. RoBERTa is used as the base pre-trained language model.
- PADA [27]: PADA is a state-of-the-art method for multi-source unseen domain adaptation and the most important prior model for comparison. It adapts source-domain knowledge to the target domain by incorporating both domain-specific and general features. The comparison with PADA allows us to evaluate the generalization ability of our method in the multi-source domain adaptation setting. In this experiment, we reimplemented PADA and applied it to our datasets to ensure a fair and consistent comparison.
- AEP+RS+CE: Our proposed Aspect-Enhanced Prompting (AEP) method. “+RS” and “+CE” indicate that the prompt rescoring module and the cluster-based prompt expansion module are employed, respectively.
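As referenced in the T5-base item above, the baseline's interface is simple: given the review and the aspect, the model generates a sentiment word. The snippet below loads the public t5-base checkpoint [30]; the exact input wording is our assumption, and meaningful outputs of course require the multi-source fine-tuning described above.

```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

tok = T5Tokenizer.from_pretrained("t5-base")           # checkpoint from [30]
model = T5ForConditionalGeneration.from_pretrained("t5-base")

review = "This is a great Thai restaurant with a very friendly staff."
aspect = "staff"
# Assumed input format: the review followed by an aspect query.
enc = tok(f"{review} What is the sentiment of {aspect}?", return_tensors="pt")
out = model.generate(**enc, max_new_tokens=3)
print(tok.decode(out[0], skip_special_tokens=True))    # e.g., "positive" after fine-tuning
```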
4.4. Results
4.5. Detailed Evaluation of the Components
4.5.1. Ablation Study
4.5.2. Impact of Parameters on Prompt Rescoring
4.5.3. Investigation of Voting Strategy in Cluster-Based Prompt Expansion
4.5.4. Investigation of Input Format of Sentiment Classification Model
4.5.5. Impact of Prompt Templates
4.6. Error Analysis
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Appendix A. List of Templates
Template | Type |
---|---|
Given the statement text, where aspect [ASPECT] is described as [ARF], what is the sentiment? | Q |
For the statement text and focusing on [ASPECT] being [ARF], what is the sentiment? | Q |
Analyze the sentiment of text with emphasis on [ASPECT] being [ARF]. | D |
How does text portray [ASPECT] as [ARF] in terms of sentiment? | Q |
Considering the text, what sentiment does [ARF] convey about [ASPECT]? | Q |
Evaluate the sentiment towards [ASPECT] being [ARF]. | D |
What is the emotional tone when [ASPECT] is [ARF]? | Q |
In the text, [ASPECT] is described as [ARF]. What sentiment does this reflect? | Q |
Assess the feeling towards [ASPECT] being [ARF]. | D |
Determine the sentiment of [ASPECT] is characterized as [ARF]. | D |
What emotion is associated with [ASPECT] being [ARF]? | Q |
Identify the sentiment when [ASPECT] is mentioned as [ARF]. | D |
Predict the sentiment for [ASPECT] described as [ARF]. | D |
In the text, [ASPECT] is [ARF]. How does this make the sentiment? | Q |
Sentiment analysis with [ASPECT] as [ARF]. | D |
How is the sentiment towards [ASPECT] being [ARF]? | Q |
What is the sentiment outcome when [ASPECT] equals [ARF]? | Q |
Review the sentiment with [ASPECT] as [ARF]. | D |
With [ASPECT] being [ARF], how is the sentiment? | Q |
Analyze for sentiment with a focus on [ASPECT] as [ARF]. | D |
Template | Type |
---|---|
Given the statement [TEXT], where aspect [ASPECT] is described as [ARF], what is the sentiment? | Q |
For the statement [TEXT] and focusing on [ASPECT] being [ARF], what is the sentiment? | Q |
Analyze the sentiment of [TEXT] with emphasis on [ASPECT] being [ARF]. | D |
How does [TEXT] portray [ASPECT] as [ARF] in terms of sentiment? | Q |
Considering the [TEXT], what sentiment does [ARF] convey about [ASPECT]? | Q |
In the [TEXT], evaluate the sentiment towards [ASPECT] being [ARF]. | D |
In the [TEXT], what is the emotional tone when [ASPECT] is [ARF]? | Q |
In the [TEXT], [ASPECT] is described as [ARF]. What sentiment does this reflect? | Q |
In the [TEXT], assess the feeling towards [ASPECT] being [ARF]. | D |
In the [TEXT], determine the sentiment of [ASPECT] is characterized as [ARF]. | D |
In the [TEXT], what emotion is associated with [ASPECT] being [ARF]? | Q |
In the [TEXT], identify the sentiment when [ASPECT] is mentioned as [ARF]. | D |
In the [TEXT], predict the sentiment for [ASPECT] described as [ARF]. | D |
In the [TEXT], [ASPECT] is [ARF]. How does this make the sentiment? | Q |
In the [TEXT], sentiment analysis with [ASPECT] as [ARF]. | D |
In the [TEXT], how is the sentiment towards [ASPECT] being [ARF]? | Q |
In the [TEXT], what is the sentiment outcome when [ASPECT] equals [ARF]? | Q |
In the [TEXT], review the sentiment with [ASPECT] as [ARF]. | D |
In the [TEXT], with [ASPECT] being [ARF], how is the sentiment? | Q |
In the [TEXT], analyze for sentiment with a focus on [ASPECT] as [ARF]. | D |
References
- Pang, B.; Lee, L. Opinion Mining and Sentiment Analysis. Found. Trends® Inf. Retr. 2008, 2, 1–135. [Google Scholar] [CrossRef]
- Blitzer, J.; Dredze, M.; Pereira, F. Biographies, Bollywood, Boom-boxes and Blenders: Domain Adaptation for Sentiment Classification. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, Prague, Czech Republic, 25–27 June 2007; pp. 440–447. [Google Scholar]
- Gong, C.; Yu, J.; Xia, R. Unified Feature and Instance Based Domain Adaptation for Aspect-Based Sentiment Analysis. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), Online, 16–20 November 2020; pp. 7035–7045. [Google Scholar] [CrossRef]
- Ramponi, A.; Plank, B. Neural Unsupervised Domain Adaptation in NLP—A Survey. In Proceedings of the 28th International Conference on Computational Linguistics, Barcelona, Spain, 8–13 December 2020; pp. 6838–6855. [Google Scholar] [CrossRef]
- Blitzer, J.; McDonald, R.; Pereira, F. Domain Adaptation with Structural Correspondence Learning. In Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing, Sydney, Australia, 22–23 July 2006; pp. 120–128. [Google Scholar]
- Pan, S.J.; Ni, X.; Sun, J.T.; Yang, Q.; Chen, Z. Cross-domain sentiment classification via spectral feature alignment. In Proceedings of the 19th International Conference on World Wide Web, Raleigh, NC, USA, 26–30 April 2010; pp. 751–760. [Google Scholar]
- Ganin, Y.; Ustinova, E.; Ajakan, H.; Germain, P.; Larochelle, H.; Laviolette, F.; Marchand, M.; Lempitsky, V. Domain-Adversarial Training of Neural Networks. arXiv 2016, arXiv:1505.07818. http://arxiv.org/abs/1505.07818.
- Long, M.; Cao, Y.; Wang, J.; Jordan, M.I. Learning Transferable Features with Deep Adaptation Networks. arXiv 2015, arXiv:1502.02791. http://arxiv.org/abs/1502.02791.
- Vincent, P.; Larochelle, H.; Bengio, Y.; Manzagol, P.A. Extracting and composing robust features with denoising autoencoders. In Proceedings of the International Conference on Machine Learning, Helsinki, Finland, 5–9 July 2008. [Google Scholar]
- Glorot, X.; Bordes, A.; Bengio, Y. Domain Adaptation for Large-Scale Sentiment Classification: A Deep Learning Approach. In Proceedings of the International Conference on Machine Learning, Bellevue, WA, USA, 28 June–2 July 2011. [Google Scholar]
- Chen, M.; Xu, Z.; Weinberger, K.; Sha, F. Marginalized Denoising Autoencoders for Domain Adaptation. arXiv 2012, arXiv:1206.4683. http://arxiv.org/abs/1206.4683.
- Devlin, J.; Chang, M.W.; Lee, K.; Toutanova, K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv 2019, arXiv:1810.04805. http://arxiv.org/abs/1810.04805.
- Han, X.; Eisenstein, J. Unsupervised Domain Adaptation of Contextualized Embeddings for Sequence Labeling. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), Hong Kong, China, 3–7 November 2019; pp. 4238–4248. [Google Scholar] [CrossRef]
- Karouzos, C.; Paraskevopoulos, G.; Potamianos, A. UDALM: Unsupervised Domain Adaptation through Language Modeling. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Online, 6–11 June 2021; pp. 2579–2590. [Google Scholar] [CrossRef]
- Zhao, Z.; Ma, Z.; Lin, Z.; Xie, J.; Li, Y.; Shen, Y. Source-free Domain Adaptation for Aspect-based Sentiment Analysis. In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), Torino, Italia, 20–25 May 2024; pp. 15076–15086. [Google Scholar]
- Guo, H.; Mao, Y.; Zhang, R. Augmenting Data with Mixup for Sentence Classification: An Empirical Study. arXiv 2019, arXiv:1905.08941. http://arxiv.org/abs/1905.08941.
- Yu, J.; Gong, C.; Xia, R. Cross-Domain Review Generation for Aspect-Based Sentiment Analysis. In Proceedings of the Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, Online, 1–6 August 2021; pp. 4767–4777. [Google Scholar] [CrossRef]
- Wang, H.; He, K.; Li, B.; Chen, L.; Li, F.; Han, X.; Teng, C.; Ji, D. Refining and Synthesis: A Simple yet Effective Data Augmentation Framework for Cross-Domain Aspect-based Sentiment Analysis. In Proceedings of the Findings of the Association for Computational Linguistics: ACL 2024, Bangkok, Thailand, 11–16 August 2024; pp. 10318–10329. [Google Scholar] [CrossRef]
- Guo, H.; Pasunuru, R.; Bansal, M. Multi-Source Domain Adaptation for Text Classification via DistanceNet-Bandits. arXiv 2020, arXiv:2001.04362. http://arxiv.org/abs/2001.04362. [CrossRef]
- Guo, J.; Shah, D.; Barzilay, R. Multi-Source Domain Adaptation with Mixture of Experts. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, 31 October–4 November 2018; pp. 4694–4703. [Google Scholar] [CrossRef]
- Wright, D.; Augenstein, I. Transformer Based Multi-Source Domain Adaptation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), Online, 16–20 November 2020; pp. 7963–7974. [Google Scholar] [CrossRef]
- Ma, X.; Xu, P.; Wang, Z.; Nallapati, R.; Xiang, B. Domain Adaptation with BERT-based Domain Classification and Data Selection. In Proceedings of the 2nd Workshop on Deep Learning Approaches for Low-Resource NLP (DeepLo 2019), Hong Kong, China, 3 November 2019; pp. 76–83. [Google Scholar] [CrossRef]
- Li, R.; Liu, C.; Tong, Y.; Dazhi, J. Feature Structure Matching for Multi-source Sentiment Analysis with Efficient Adaptive Tuning. In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), Torino, Italia, 20–25 May 2024; pp. 7153–7162. [Google Scholar]
- Yang, S.; Jiang, X.; Zhao, H.; Zeng, W.; Liu, H.; Jia, Y. FaiMA: Feature-aware In-context Learning for Multi-domain Aspect-based Sentiment Analysis. In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), Torino, Italia, 20–25 May 2024; pp. 7089–7100. [Google Scholar]
- Raffel, C.; Shazeer, N.; Roberts, A.; Lee, K.; Narang, S.; Matena, M.; Zhou, Y.; Li, W.; Liu, P.J. Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer. arXiv 2023, arXiv:1910.10683. http://arxiv.org/abs/1910.10683.
- Lester, B.; Al-Rfou, R.; Constant, N. The Power of Scale for Parameter-Efficient Prompt Tuning. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, Online/Punta Cana, Dominican Republic, 7–11 November 2021; pp. 3045–3059. [Google Scholar] [CrossRef]
- Ben-David, E.; Oved, N.; Reichart, R. PADA: Example-based Prompt Learning for on-the-fly Adaptation to Unseen Domains. Trans. Assoc. Comput. Linguist. 2022, 10, 414–433. [Google Scholar] [CrossRef]
- Sun, X.; Zhang, K.; Liu, Q.; Bao, M.; Chen, Y. Harnessing domain insights: A prompt knowledge tuning method for aspect-based sentiment analysis. Knowl.-Based Syst. 2024, 298, 111975. [Google Scholar] [CrossRef]
- Church, K.W.; Hanks, P. Word Association Norms, Mutual Information, and Lexicography. Comput. Linguist. 1990, 16, 22–29. [Google Scholar]
- Google. T5-Base Model. 2024. Available online: https://huggingface.co/google-t5/t5-base (accessed on 15 May 2025).
- MacQueen, J.B. Some methods for classification and analysis of multivariate observations. In Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability, Oakland, CA, USA, 21 June–18 July 1965; Volume 1, pp. 281–297. [Google Scholar]
- Pontiki, M.; Galanis, D.; Pavlopoulos, J.; Papageorgiou, H.; Androutsopoulos, I.; Manandhar, S. SemEval-2014 Task 4: Aspect Based Sentiment Analysis. In Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval 2014), Dublin, Ireland, 23–24 August 2014; pp. 27–35. [Google Scholar] [CrossRef]
- Toprak, C.; Jakob, N.; Gurevych, I. Sentence and Expression Level Annotation of Opinions in User-Generated Discourse. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, Uppsala, Sweden, 13 July 2010; pp. 575–584. [Google Scholar]
- Hu, M.; Liu, B. Mining and summarizing customer reviews. In Proceedings of the 10th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Seattle, WA, USA, 22–25 August 2004; pp. 168–177. [Google Scholar]
- Saeidi, M.; Bouchard, G.; Liakata, M.; Riedel, S. SentiHood: Targeted Aspect Based Sentiment Analysis Dataset for Urban Neighbourhoods. In Proceedings of the COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, Osaka, Japan, 11–16 December 2016; pp. 1546–1556. [Google Scholar]
- Shin, T.; Razeghi, Y.; Logan IV, R.L.; Wallace, E.; Singh, S. AutoPrompt: Eliciting Knowledge from Language Models with Automatically Generated Prompts. In Proceedings of the Empirical Methods in Natural Language Processing (EMNLP), Online, 16–20 November 2020. [Google Scholar]
- Gao, T.; Fisch, A.; Chen, D. Making Pre-trained Language Models Better Few-shot Learners. In Proceedings of the Association for Computational Linguistics (ACL), Online, 5–10 July 2021. [Google Scholar]
- Brown, T.B.; Mann, B.; Ryder, N.; Subbiah, M.; Kaplan, J.; Dhariwal, P.; Neelakantan, A.; Shyam, P.; Sastry, G.; Askell, A.; et al. Language Models are Few-Shot Learners. arXiv 2020, arXiv:2005.14165. http://arxiv.org/abs/2005.14165.
- Touvron, H.; Lavril, T.; Izacard, G.; Martinet, X.; Lachaux, M.A.; Lacroix, T.; Rozière, B.; Goyal, N.; Hambro, E.; Azhar, F.; et al. LLaMA: Open and Efficient Foundation Language Models. arXiv 2023, arXiv:2302.13971. http://arxiv.org/abs/2302.13971.
- OpenAI; Achiam, J.; Adler, S.; Agarwal, S.; Ahmad, L.; Akkaya, I.; Aleman, F.L.; Almeida, D.; Altenschmidt, J.; Altman, S.; et al. GPT-4 Technical Report. arXiv 2024, arXiv:2303.08774. http://arxiv.org/abs/2303.08774.
Review | Domain | Extracted ARFs |
---|---|---|
This is a great Thai restaurant with a very friendly staff. | Restaurant | friendly, Thai, restaurant |
The battery was completely dead, in fact it had grown about a quarter inch thick lump on the underside. | Laptop | thick, grown, quarter |
It is a perfect phone in such a small and appealing package. | Device | perfect, small, appealing |
I have heard of people having problems receiving email out of order or not receiving some of their messages at all. | Service | heard, people, problems |
Most are very expensive to rent or buy even the grotty little flats and bedsits in LOCATION1. | Location | expensive, rent, buy |
Domain | Positive | Negative | Neutral | Total |
---|---|---|---|---|
Restaurant | 795 | 287 | 217 | 1299 |
Laptop | 482 | 505 | 177 | 1164 |
Device | 589 | 375 | – | 964 |
Service | 920 | 629 | 135 | 1684 |
Location | 954 | 480 | – | 1434 |
Process | Parameter | Value |
---|---|---|
Extraction of ARFs | 3 | |
Fine-tuning of | Learning Rate | |
Epochs | 10 | |
Fine-tuning of | Learning Rate | |
Epochs | 2 | |
Additional fine-tuning of & | Learning Rate |
Epochs | 1 |
(a) Accuracy
Model | →R | →La | →D | →S | →Lo | Average
---|---|---|---|---|---|---
T5-base | 0.746 | 0.754 | 0.882 | 0.798 | 0.809 | 0.798 |
AP(BERT) | 0.743 | 0.758 | 0.893 | 0.793 | 0.818 | 0.801 |
AP(RoBERTa) | 0.767 | 0.770 | 0.901 | 0.819 | 0.857 | 0.823 |
LM-BFF | 0.752 | 0.764 | 0.891 | 0.802 | 0.806 | 0.803 |
PADA | 0.756 | 0.780 | 0.907 | 0.832 | 0.815 | 0.818 |
AEP+RS+CE | 0.754 | 0.769 | 0.928 | 0.835 | 0.875 | 0.832 |
(b) Macro F1-score
Model | →R | →La | →D | →S | →Lo | Average
---|---|---|---|---|---|---
T5-base | 0.522 | 0.545 | 0.892 | 0.550 | 0.796 | 0.661 |
AP(BERT) | 0.519 | 0.552 | 0.895 | 0.546 | 0.807 | 0.664 |
AP(RoBERTa) | 0.551 | 0.582 | 0.897 | 0.561 | 0.842 | 0.687 |
LM-BFF | 0.539 | 0.558 | 0.893 | 0.552 | 0.796 | 0.668 |
PADA | 0.544 | 0.575 | 0.908 | 0.577 | 0.805 | 0.682 |
AEP+RS+CE | 0.527 | 0.596 | 0.925 | 0.576 | 0.862 | 0.697 |
Model | RS | CE | →R | →La | →D | →S | →Lo | Average
---|---|---|---|---|---|---|---|---
AEP+RS+CE | ✓ | ✓ | 0.527 | 0.596 | 0.925 | 0.576 | 0.862 | 0.697
AEP+CE | × | ✓ | 0.526 | 0.596 | 0.921 | 0.575 | 0.853 | 0.694
(Δ) | | | (−0.001) | (0.000) | (−0.004) | (−0.001) | (−0.009) | (−0.003)
AEP+RS | ✓ | × | 0.525 | 0.583 | 0.923 | 0.572 | 0.857 | 0.692
(Δ) | | | (−0.002) | (−0.013) | (−0.002) | (−0.004) | (−0.005) | (−0.005)
AEP | × | × | 0.521 | 0.549 | 0.916 | 0.572 | 0.846 | 0.681
(Δ) | | | (−0.006) | (−0.047) | (−0.009) | (−0.004) | (−0.016) | (−0.016)
T5-base | × | × | 0.522 | 0.545 | 0.892 | 0.550 | 0.796 | 0.661
Model | →R | →La | →D | →S | →Lo | Ave. |
---|---|---|---|---|---|---|
AEP | 0.521 | 0.549 | 0.916 | 0.572 | 0.846 | 0.681 |
AEP+RS ( = 0.99) | 0.522 | 0.581 | 0.920 | 0.574 | 0.853 | 0.690 |
AEP+RS ( = 0.98) | 0.525 | 0.583 | 0.923 | 0.572 | 0.857 | 0.692 |
AEP+RS ( = 0.95) | 0.524 | 0.576 | 0.920 | 0.574 | 0.856 | 0.690 |
Model | →R | →La | →D | →S | →Lo | Ave. |
---|---|---|---|---|---|---|
AEP+CE (ma) | 0.523 | 0.603 | 0.910 | 0.573 | 0.853 | 0.692 |
AEP+CE (we) | 0.526 | 0.596 | 0.921 | 0.575 | 0.853 | 0.694 |
AEP+RS+CE (ma) | 0.528 | 0.597 | 0.918 | 0.571 | 0.863 | 0.695 |
AEP+RS+CE (we) | 0.527 | 0.596 | 0.925 | 0.576 | 0.862 | 0.697 |
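Interpreting “ma” as majority voting and “we” as weighted voting (our reading of the abbreviations), the aggregation over an expanded prompt set can be sketched as follows; using prompt scores as the weights is an assumption.

```python
from collections import Counter

def majority_vote(preds):
    """'ma': the label predicted by the most prompts wins."""
    return Counter(preds).most_common(1)[0][0]

def weighted_vote(preds, weights):
    """'we': each prompt's vote is weighted, e.g., by its prompt score."""
    totals = Counter()
    for label, w in zip(preds, weights):
        totals[label] += w
    return max(totals, key=totals.get)

print(majority_vote(["positive", "negative", "positive"]))  # -> positive
print(weighted_vote(["positive", "negative"], [0.4, 0.9]))  # -> negative
```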
Method | Example |
---|---|
AEP-Separate | This is a great Thai restaurant with a very friendly staff. With staff being great Thai restaurant, how is the sentiment? |
AEP-Insert | Consider the text: This is a great Thai restaurant with a very friendly staff. what sentiment does great Thai restaurant convey about staff? |
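The two formats differ only in where the review text is placed. A minimal sketch (slot syntax as in Appendix A; function names ours):

```python
def build_separate(text, filled_prompt):
    """AEP-Separate: the review text, then the filled prompt."""
    return f"{text} {filled_prompt}"

def build_insert(text, template_with_text_slot):
    """AEP-Insert: the review is substituted into the prompt's [TEXT] slot."""
    return template_with_text_slot.replace("[TEXT]", text)

t = "This is a great Thai restaurant with a very friendly staff."
print(build_separate(t, "With staff being great Thai restaurant, how is the sentiment?"))
print(build_insert(t, "Considering the [TEXT], what sentiment does great Thai restaurant convey about staff?"))
```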
Model | →R | →La | →D | →S | →Lo | Ave. |
---|---|---|---|---|---|---|
AEP-Separate | 0.521 | 0.549 | 0.916 | 0.572 | 0.846 | 0.681 |
AEP-Insert | 0.525 | 0.548 | 0.910 | 0.520 | 0.806 | 0.662 |
Input Text (Aspect, Gold Label) | AEP Prompt | PADA Prompt | AEP Prediction | PADA Prediction
---|---|---|---|---
The food is decent at best, and the ambience, well, it’s a matter of opinion, some may consider it to be a sweet thing, I thought it was just annoying. (food, neutral) | With food being decent ambience annoying, how is the sentiment? | food egroups overall quickly table | neg | neu |
The service was a bit slow, but they were very friendly. (service, negative) | Predict the sentiment for service described as bit slow friendly. | service slow egroups dinner toshiba | pos | neg |
As much as I like the food there, I cant bring myself to go back. (food, positive) | With food being much bring back, how is the sentiment? | food egroups week extremely simple | neg | neg |