Parameter-Efficient Adaptation of Qwen2.5 for Aspect-Based Sentiment Analysis Using Low-Rank Adaptation and Parameter-Efficient Fine-Tuning
Abstract
1. Introduction
2. Related Work
2.1. ABSA Methodologies
2.2. PEFT in LLMs
3. Methodology
3.1. Data Description
3.2. Qwen Models
3.3. LoRA
- The rank r = 16 balances adaptation capacity against computational cost, consistent with prior studies showing that moderate ranks achieve strong performance without excessive parameter growth [61].
- A scaling factor of α = 32 scales the LoRA updates (an effective multiplier of α/r = 2), stabilizing training and ensuring effective adaptation [61].
- A dropout rate of 0.1 on the LoRA layers mitigates overfitting during fine-tuning, which is especially important when adapting large models to limited datasets [64].
- Setting the task type to “CAUSAL_LM” matches Qwen’s autoregressive architecture and training objective [66].
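The parameter savings implied by r = 16 can be made concrete with a short sketch. The snippet below is illustrative only (not the authors' code): it counts the trainable parameters LoRA adds to a single frozen weight matrix, using an assumed 4096 × 4096 projection size typical of 7B-class models.

```python
# Minimal sketch of the LoRA parameter budget (illustrative, not the authors'
# code): for a frozen weight matrix W (d_out x d_in), LoRA trains two small
# matrices B (d_out x r) and A (r x d_in) and applies
# W' = W + (alpha / r) * B @ A, with r = 16 and alpha = 32 as in the paper.

def lora_trainable_params(d_out: int, d_in: int, r: int = 16) -> int:
    """Trainable parameters LoRA adds to one weight matrix: B plus A."""
    return d_out * r + r * d_in

def full_finetune_params(d_out: int, d_in: int) -> int:
    """Parameters updated when fine-tuning the full matrix."""
    return d_out * d_in

# Example: one 4096 x 4096 attention projection (an assumed size, for
# illustration only).
d = 4096
lora = lora_trainable_params(d, d, r=16)   # 131,072 parameters
full = full_finetune_params(d, d)          # 16,777,216 parameters
print(f"LoRA trains {lora:,} params vs {full:,} ({100 * lora / full:.2f}%)")
```

For this single matrix, LoRA updates under 1% of the parameters that full fine-tuning would touch, which is what makes single-GPU adaptation of 3B- and 7B-scale models feasible.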
3.4. Model Evaluation
4. Results and Discussion
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
Abbreviations
| ABSA | Aspect-based Sentiment Analysis |
| LLMs | Large Language Models |
| NLP | Natural Language Processing |
| LoRA | Low-Rank Adaptation |
| PEFT | Parameter-Efficient Fine-Tuning |
| Qwen | Tongyi Qianwen |
References
- Zhang, W.; Li, X.; Deng, Y.; Bing, L.; Lam, W. A Survey on Aspect-Based Sentiment Analysis: Tasks, Methods, and Challenges. IEEE Trans. Knowl. Data Eng. 2022, 35, 11019–11038. [Google Scholar] [CrossRef]
- Chifu, A.G.; Fournier, S. Sentiment Difficulty in Aspect-Based Sentiment Analysis. Mathematics 2023, 11, 4647. [Google Scholar] [CrossRef]
- Hua, Y.C.; Denny, P.; Taskova, K.; Wicker, J. A Systematic Review of Aspect-Based Sentiment Analysis: Domains, Methods, and Trends. Artif. Intell. Rev. 2023, 57, 296. [Google Scholar] [CrossRef]
- Ismet, H.T.; Mustaqim, T.; Purwitasari, D. Aspect Based Sentiment Analysis of Product Review Using Memory Network. Sci. J. Inform. 2022, 9, 73–83. [Google Scholar] [CrossRef]
- Xing, B.; Tsang, I.W. Out of Context: A New Clue for Context Modeling of Aspect-Based Sentiment Analysis. J. Artif. Intell. Res. 2022, 74, 627–659. [Google Scholar] [CrossRef]
- Nazir, A.; Rao, Y.; Wu, L.; Sun, L. Issues and Challenges of Aspect-Based Sentiment Analysis: A Comprehensive Survey. IEEE Trans. Affect. Comput. 2022, 13, 845–863. [Google Scholar] [CrossRef]
- Simmering, P.F.; Huoviala, P. Large Language Models for Aspect-Based Sentiment Analysis. arXiv 2023, arXiv:2310.18025. [Google Scholar] [CrossRef]
- Neveditsin, N.; Lingras, P.; Mago, V. From Annotation to Adaptation: Metrics, Synthetic Data, and Aspect Extraction for Aspect-Based Sentiment Analysis with Large Language Models. In Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 4: Student Research Workshop), Albuquerque, NM, USA, 29 April–4 May 2025; Association for Computational Linguistics: Kerrville, TX, USA, 2025. [Google Scholar]
- Zhong, Q.; Li, H.; Zhuang, L.; Liu, J.; Du, B. Iterative Data Generation with Large Language Models for Aspect-Based Sentiment Analysis. arXiv 2024, arXiv:2407.00341. [Google Scholar]
- Tan, Z.; Li, D.; Wang, S.; Beigi, A.; Jiang, B.; Bhattacharjee, A.; Karami, M.; Li, J.; Cheng, L.; Liu, H. Large Language Models for Data Annotation and Synthesis: A Survey. arXiv 2024, arXiv:2402.13446. [Google Scholar]
- Yang, X.; Zhan, R.; Wong, D.F.; Wu, J.; Chao, L.S. Human-in-the-Loop Machine Translation with Large Language Model. arXiv 2023, arXiv:2310.08908. [Google Scholar]
- Zhou, C.; Song, D.; Tian, Y.; Wu, Z.; Wang, H.; Zhang, X.; Yang, J.; Yang, Z.; Zhang, S. A Comprehensive Evaluation of Large Language Models on Aspect-Based Sentiment Analysis. arXiv 2024, arXiv:2412.02279. [Google Scholar] [CrossRef]
- Pangakis, N.; Wolken, S.; Fasching, N. Automated Annotation with Generative AI Requires Validation. arXiv 2023, arXiv:2306.00176. [Google Scholar] [CrossRef]
- Gligorić, K.; Zrnic, T.; Lee, C.; Candès, E.J.; Jurafsky, D. Can Unconfident LLM Annotations Be Used for Confident Conclusions? In Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), Albuquerque, NM, USA, 29 April–4 May 2025; Association for Computational Linguistics: Kerrville, TX, USA, 2025. [Google Scholar]
- Huang, J.; Cui, Y.; Liu, J.; Liu, M. Supervised and Few-Shot Learning for Aspect-Based Sentiment Analysis of Instruction Prompt. Electronics 2024, 13, 1924. [Google Scholar] [CrossRef]
- Parthasarathy, V.B.; Zafar, A.; Khan, A.; Shahid, A. The Ultimate Guide to Fine-Tuning LLMs from Basics to Breakthroughs: An Exhaustive Review of Technologies, Research, Best Practices, Applied Research Challenges and Opportunities. arXiv 2024, arXiv:2408.13296. [Google Scholar] [CrossRef]
- Ding, X.; Zhou, J.; Dou, L.; Chen, Q.; Wu, Y.; Chen, C.; He, L. Boosting Large Language Models with Continual Learning for Aspect-Based Sentiment Analysis. In Findings of the Association for Computational Linguistics: EMNLP 2024; Association for Computational Linguistics: Kerrville, TX, USA, 2024. [Google Scholar]
- Šmíd, J.; Přibáň, P.; Král, P. LLaMA-Based Models for Aspect-Based Sentiment Analysis. In Proceedings of the 14th Workshop on Computational Approaches to Subjectivity, Sentiment, & Social Media Analysis, Bangkok, Thailand, 15 August 2024; Association for Computational Linguistics: Kerrville, TX, USA, 2024. [Google Scholar]
- Zhang, Y.; Zeng, J.; Hu, W.; Wang, Z.; Chen, S.; Xu, R. Self-Training with Pseudo-Label Scorer for Aspect Sentiment Quad Prediction. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Bangkok, Thailand, 11–16 August 2024; Association for Computational Linguistics: Kerrville, TX, USA, 2024. [Google Scholar]
- Scaria, K.; Gupta, H.; Goyal, S.; Sawant, S.A.; Mishra, S.; Baral, C. InstructABSA: Instruction Learning for Aspect Based Sentiment Analysis. In Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 2: Short Papers), Mexico City, Mexico, 16–21 June 2024; Association for Computational Linguistics: Kerrville, TX, USA, 2024. [Google Scholar]
- Azizi, S.; Kundu, S.; Pedram, M. LaMDA: Large Model Fine-Tuning via Spectrally Decomposed Low-Dimensional Adaptation. arXiv 2024, arXiv:2406.12832. [Google Scholar]
- Zhang, Y.; Li, P.; Hong, J.; Li, J.; Zhang, Y.; Zheng, W.; Chen, P.-Y.; Lee, J.D.; Yin, W.; Hong, M.; et al. Revisiting Zeroth-Order Optimization for Memory-Efficient LLM Fine-Tuning: A Benchmark. arXiv 2024, arXiv:2402.11592. [Google Scholar]
- Wu, X.K.; Chen, M.; Li, W.; Wang, R.; Lu, L.; Liu, J.; Hwang, K.; Hao, Y.; Pan, Y.; Meng, Q.; et al. LLM Fine-Tuning: Concepts, Opportunities, and Challenges. Big Data Cogn. Comput. 2025, 9, 87. [Google Scholar] [CrossRef]
- Zhang, B.; Liu, Z.; Cherry, C.; Firat, O. When Scaling Meets Llm Finetuning: The Effect of Data, Model And Finetuning Method. arXiv 2024, arXiv:2402.17193. [Google Scholar] [CrossRef]
- Balne, C.C.S.; Bhaduri, S.; Roy, T.; Jain, V.; Chadha, A. Parameter Efficient Fine Tuning: A Comprehensive Analysis Across Applications. arXiv 2024, arXiv:2404.13506. [Google Scholar] [CrossRef]
- Hu, E.J.; Shen, Y.; Wallis, P.; Allen-Zhu, Z.; Li, Y.; Wang, S.; Wang, L.; Chen, W. LoRA: Low-Rank Adaptation of Large Language Models. In Proceedings of the International Conference on Learning Representations (ICLR), Virtual, 25–29 April 2022. [Google Scholar]
- Liu, S.; Zhou, J.; Zhu, Q.; Chen, Q.; Bai, Q.; Xiao, J.; He, L. Let’s Rectify Step by Step: Improving Aspect-Based Sentiment Analysis with Diffusion Models. arXiv 2024, arXiv:2402.15289. [Google Scholar]
- Schmitt, M.; Steinheber, S.; Schreiber, K.; Roth, B. Joint Aspect and Polarity Classification for Aspect-Based Sentiment Analysis with End-to-End Neural Networks. arXiv 2018, arXiv:1808.09238. [Google Scholar]
- Mao, Y.; Shen, Y.; Yu, C.; Cai, L. A Joint Training Dual-MRC Framework for Aspect Based Sentiment Analysis. Proc. AAAI Conf. Artif. Intell. 2021, 35, 13543–13551. [Google Scholar] [CrossRef]
- Hoang, M.; Bihorac, O.A.; Rouces, J. Aspect-Based Sentiment Analysis Using BERT. In Proceedings of the 22nd Nordic Conference on Computational Linguistics, Turku, Finland, 30 September–2 October 2019; Linköping University Electronic Press: Linköping, Sweden, 2019. [Google Scholar]
- Xu, H.; Shu, L.; Yu, P.S.; Liu, B. Understanding Pre-Trained BERT for Aspect-Based Sentiment Analysis. In Proceedings of the 28th International Conference on Computational Linguistics, Barcelona, Spain, 8–13 December 2020; International Committee on Computational Linguistics: New York, NY, USA, 2020. [Google Scholar]
- Zhang, M.; Zhu, Y.; Liu, Z.; Bao, Z.; Wu, Y.; Sun, X.; Xu, L. Span-Level Aspect-Based Sentiment Analysis via Table Filling. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Toronto, ON, Canada, 9–14 July 2023; Association for Computational Linguistics: Kerrville, TX, USA, 2023. [Google Scholar]
- Ghosh, K.K.; Sur, C. Learning to Extract Cross-Domain Aspects and Understanding Sentiments Using Large Language Models. arXiv 2025, arXiv:2501.08974. [Google Scholar] [CrossRef]
- Jin, W.; Zhao, B.; Zhang, Y.; Huang, J.; Yu, H. WordTransABSA: Enhancing Aspect-Based Sentiment Analysis with Masked Language Modeling for Affective Token Prediction. Expert Syst. Appl. 2024, 238, 122289. [Google Scholar] [CrossRef]
- Musa, A.; Adam, F.M.; Ibrahim, U.; Zandam, A.Y. HauBERT: A Transformer Model for Aspect-Based Sentiment Analysis of Hausa-Language Movie Reviews. Eng. Proc. 2025, 87, 43. [Google Scholar]
- Chaudhry, H.N.; Kulsoom, F.; Ullah Khan, Z.; Aman, M.; Khan, S.U.; Albanyan, A. TASCI: Transformers for Aspect-Based Sentiment Analysis with Contextual Intent Integration. PeerJ Comput. Sci. 2025, 11, e2760. [Google Scholar] [CrossRef]
- Taj, S.; Daudpota, S.M.; Imran, A.S.; Kastrati, Z. Aspect-Based Sentiment Analysis for Software Requirements Elicitation Using Fine-Tuned Bidirectional Encoder Representations from Transformers and Explainable Artificial Intelligence. Eng. Appl. Artif. Intell. 2025, 151, 110632. [Google Scholar] [CrossRef]
- Wang, Q.; Ding, K.; Liang, B.; Yang, M.; Xu, R. Reducing Spurious Correlations in Aspect-Based Sentiment Analysis with Explanation from Large Language Models. In Findings of the Association for Computational Linguistics: EMNLP 2023; Association for Computational Linguistics: Kerrville, TX, USA, 2023; p. 2941. [Google Scholar]
- Cao, J.; Li, J.; Yang, Z.; Zhou, R. Enhanced Multimodal Aspect-Based Sentiment Analysis by LLM-Generated Rationales. In International Conference on Neural Information Processing; Springer Nature: Singapore, 2025. [Google Scholar]
- Xu, L.; Xie, H.; Qin, S.-Z.J.; Tao, X.; Wang, F.L. Parameter-Efficient Fine-Tuning Methods for Pretrained Language Models: A Critical Review and Assessment. arXiv 2023, arXiv:2312.12148. [Google Scholar] [CrossRef]
- Prottasha, N.J.; Chowdhury, U.R.; Mohanto, S.; Nuzhat, T.; Sami, A.A.; Ali, M.S.; Sobuj, M.S.I.; Raman, H.; Kowsher, M.; Garibay, O.O. PEFT A2Z: Parameter-Efficient Fine-Tuning Survey for Large Language and Vision Models. arXiv 2025, arXiv:2504.14117. [Google Scholar]
- Pandey, D.S.; Pyakurel, S.; Yu, Q. Be Confident in What You Know: Bayesian Parameter Efficient Fine-Tuning of Vision Foundation Models. Adv. Neural Inf. Process. Syst. 2024, 37, 44814–44844. [Google Scholar]
- Liao, B.; Meng, Y.; Monz, C. Parameter-Efficient Fine-Tuning without Introducing New Latency. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Toronto, ON, Canada, 9–14 July 2023; Association for Computational Linguistics: Kerrville, TX, USA, 2023. [Google Scholar]
- Chen, K.; Pang, Y.; Yang, Z. Parameter-Efficient Fine-Tuning with Adapters. arXiv 2024, arXiv:2405.05493. [Google Scholar]
- Zhou, X.; He, J.; Ke, Y.; Zhu, G.; Gutiérrez-Basulto, V.; Pan, J.Z. An Empirical Study on Parameter-Efficient Fine-Tuning for MultiModal Large Language Models. In Findings of the Association for Computational Linguistics: ACL 2024; Association for Computational Linguistics: Kerrville, TX, USA, 2024. [Google Scholar]
- Zhang, D.; Feng, T.; Xue, L.; Wang, Y.; Dong, Y.; Tang, J. Parameter-Efficient Fine-Tuning for Foundation Models. arXiv 2025, arXiv:2501.13787. [Google Scholar]
- Haque, S.; Eberhart, Z.; Bansal, A.; McMillan, C. Semantic Similarity Metrics for Evaluating Source Code Summarization. In Proceedings of the IEEE International Conference on Program Comprehension, Virtual, 16–17 May 2022; IEEE Computer Society: New York, NY, USA, 2022; Volume 2022, pp. 36–47. [Google Scholar]
- Pontiki, M.; Papageorgiou, H.; Galanis, D.; Androutsopoulos, I.; Pavlopoulos, J.; Manandhar, S. SemEval-2014 Task 4: Aspect Based Sentiment Analysis. In Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval 2014), Dublin, Ireland, 23–24 August 2014; Association for Computational Linguistics: Kerrville, TX, USA, 2014. [Google Scholar]
- Wang, B.; Liu, M. Deep Learning for Aspect-Based Sentiment Analysis. In Proceedings of the 2021 International Conference on Machine Learning and Intelligent Systems Engineering (MLISE), Chongqing, China, 9–11 July 2021; IEEE: New York, NY, USA, 2021. [Google Scholar]
- Jayakody, D.; Isuranda, K.; Malkith, A.V.A.; de Silva, N.; Ponnamperuma, S.R.; Sandamali, G.G.N.; Sudheera, K.L.K. Aspect-Based Sentiment Analysis Techniques: A Comparative Study. In Proceedings of the 2024 Moratuwa Engineering Research Conference (MERCon), Moratuwa, Sri Lanka, 8–10 August 2024; IEEE: New York, NY, USA, 2024. [Google Scholar] [CrossRef]
- Li, X.; Bing, L.; Li, P.; Lam, W. A Unified Model for Opinion Target Extraction and Target Sentiment Prediction. Proc. AAAI Conf. Artif. Intell. 2019, 33, 6714–6721. [Google Scholar] [CrossRef]
- Hu, M.; Peng, Y.; Huang, Z.; Li, D.; Lv, Y. Open-Domain Targeted Sentiment Analysis via Span-Based Extraction and Classification. arXiv 2019, arXiv:1906.03820. [Google Scholar]
- Chen, Z.; Qian, T. Relation-Aware Collaborative Learning for Unified Aspect-Based Sentiment Analysis. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, Online, 5–10 July 2020; Association for Computational Linguistics: Kerrville, TX, USA, 2020. [Google Scholar]
- Li, X.; Bing, L.; Zhang, W.; Lam, W. Exploiting BERT for End-to-End Aspect-Based Sentiment Analysis. arXiv 2019, arXiv:1910.00883. [Google Scholar]
- Luo, H.; Li, T.; Liu, B.; Zhang, J. DOER: Dual Cross-Shared RNN for Aspect Term-Polarity Co-Extraction. arXiv 2019, arXiv:1906.01794. [Google Scholar]
- He, R.; Lee, W.S.; Ng, H.T.; Dahlmeier, D. An Interactive Multi-Task Learning Network for End-to-End Aspect-Based Sentiment Analysis. arXiv 2019, arXiv:1906.06906. [Google Scholar]
- Bai, J.; Bai, S.; Chu, Y.; Cui, Z.; Dang, K.; Deng, X.; Fan, Y.; Ge, W.; Han, Y.; Huang, F.; et al. Qwen Technical Report. arXiv 2023, arXiv:2309.16609. [Google Scholar] [CrossRef]
- Albert, P.; Zhang, F.Z.; Saratchandran, H.; Rodriguez-Opazo, C.; van den Hengel, A.; Abbasnejad, E. RandLoRA: Full-Rank Parameter-Efficient Fine-Tuning of Large Models. arXiv 2025, arXiv:2502.00987. [Google Scholar]
- Li, Y.; Han, S.; Ji, S. VB-LoRA: Extreme Parameter Efficient Fine-Tuning with Vector Banks. Adv. Neural Inf. Process. Syst. 2024, 37, 16724–16751. [Google Scholar]
- Tian, C.; Shi, Z.; Guo, Z.; Li, L.; Xu, C. HydraLoRA: An Asymmetric LoRA Architecture for Efficient Fine-Tuning. Adv. Neural Inf. Process. Syst. 2024, 37, 9565–9584. [Google Scholar]
- Kim, D.; Lee, G.; Shim, K.; Shim, B. Preserving Pre-Trained Representation Space: On Effectiveness of Prefix-Tuning for Large Multi-Modal Models. arXiv 2024, arXiv:2411.00029. [Google Scholar]
- Hsu, C.-Y.; Tsai, Y.-L.; Lin, C.-H.; Chen, P.-Y.; Yu, C.-M.; Huang, C.-Y. Safe LoRA: The Silver Lining of Reducing Safety Risks When Fine-Tuning Large Language Models. Adv. Neural Inf. Process. Syst. 2024, 37, 65072–65094. [Google Scholar]
- Qing, P.; Gao, C.; Zhou, Y.; Diao, X.; Yang, Y.; Vosoughi, S. AlphaLoRA: Assigning LoRA Experts Based on Layer Training Quality. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, Miami, FL, USA, 12–16 November 2024; Association for Computational Linguistics: Kerrville, TX, USA, 2024. [Google Scholar]
- Lin, Y.; Ma, X.; Chu, X.; Jin, Y.; Yang, Z.; Wang, Y.; Mei, H. LoRA Dropout as a Sparsity Regularizer for Overfitting Control. arXiv 2024, arXiv:2404.09610. [Google Scholar] [CrossRef]
- Prottasha, N.J.; Mahmud, A.; Sobuj, M.S.I.; Bhat, P.; Kowsher, M.; Yousefi, N.; Garibay, O.O. Parameter-Efficient Fine-Tuning of Large Language Models Using Semantic Knowledge Tuning. Sci. Rep. 2024, 14, 30667. [Google Scholar] [CrossRef]
- Yang, A.; Yang, B.; Hui, B.; Zheng, B.; Yu, B.; Zhou, C.; Li, C.; Li, C.; Liu, D.; Huang, F.; et al. Qwen2 Technical Report. arXiv 2024, arXiv:2407.10671. [Google Scholar]
- Madhoushi, Z.; Hamdan, A.R.; Zainudin, S. Aspect-Based Sentiment Analysis Methods in Recent Years. Asia-Pac. J. Inf. Technol. Multimed. 2019, 7, 79–96. [Google Scholar] [CrossRef]
- Yang, A.; Yu, B.; Li, C.; Liu, D.; Huang, F.; Huang, H.; Jiang, J.; Tu, J.; Zhang, J.; Zhou, J.; et al. Qwen2.5-1M Technical Report. arXiv 2025, arXiv:2501.15383. [Google Scholar] [CrossRef]



| SentenceID | Raw_text | AspectTerms | AspectCategories |
|---|---|---|---|
| 2339 | I charge it at night and skip taking the cord with me because of the good battery life. | [{'term': 'cord', 'polarity': 'neutral'}, {'term': 'battery life', 'polarity': 'positive'}] | [{'category': 'noaspectcategory', 'polarity': 'none'}] |
| 812 | I bought an HP Pavilion DV4-1222nr laptop and have had so many problems with the computer. | [{'term': 'noaspectterm', 'polarity': 'none'}] | [{'category': 'noaspectcategory', 'polarity': 'none'}] |
| 562 | Did not enjoy the new Windows 8 and touchscreen functions. | [{'term': 'Windows 8', 'polarity': 'negative'}, {'term': 'touchscreen functions', 'polarity': 'negative'}] | [{'category': 'noaspectcategory', 'polarity': 'none'}] |
| 912 | The price is higher than most laptops out there; however, he/she will get what they paid for, which is a great computer. | [{'term': 'price', 'polarity': 'conflict'}] | [{'category': 'noaspectcategory', 'polarity': 'none'}] |
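To make the annotation format in the table concrete, the following sketch converts one such record into a prompt/target pair for causal-LM fine-tuning. This is an assumed formulation for illustration, not the authors' exact pipeline; the prompt wording and the JSON target format are both hypothetical choices.

```python
# Hedged sketch (assumed format, not the paper's exact pipeline): build an
# instruction-style training example from one annotated ABSA record like the
# rows in the table above.
import json

def build_example(raw_text: str, aspect_terms: list[dict]) -> dict:
    """Return a prompt/target pair; the placeholder 'noaspectterm' in the
    dataset marks sentences with no annotated aspect."""
    prompt = (
        "Extract the aspect terms and their sentiment polarity from the "
        f"following review sentence.\nSentence: {raw_text}\nAnswer:"
    )
    # Serialize the gold annotations as the generation target.
    target = json.dumps(aspect_terms)
    return {"prompt": prompt, "target": target}

record = {
    "Raw_text": "Did not enjoy the new Windows 8 and touchscreen functions.",
    "AspectTerms": [
        {"term": "Windows 8", "polarity": "negative"},
        {"term": "touchscreen functions", "polarity": "negative"},
    ],
}
example = build_example(record["Raw_text"], record["AspectTerms"])
```

At inference time the model's generated answer can be parsed back with `json.loads` and compared against the gold aspect terms and polarities for evaluation.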
| LLM Model | Parameter Size (Billion) | Hardware Setup | Random Access Memory (RAM)/Graphics Processing Unit (GPU) | Fine-Tuning Time |
|---|---|---|---|---|
| Qwen2.5-14B [68] | 14 B | Multi-GPU (A100 or equivalent) | Estimated >60 GB GPU video RAM (VRAM) | Not explicitly reported |
| Qwen2 dense models [66] | 0.5 B to 72 B | Large GPU clusters (A100 80 GB GPUs) | Up to 80 GB+ VRAM per GPU | Days to weeks (full pretraining and fine-tuning) |
| Qwen2.5-1M series [68] | 7 B and 14 B | Multi-GPU setups | High GPU VRAM (>60 GB typical) | Not explicitly stated |
| This study (Qwen2.5-3B and 7B + LoRA) | 3 B and 7 B | Single machine (Intel i7-13700F, 64 GB RAM, RTX 4070 Ti) | 64 GB RAM, 12 GB GPU VRAM | 15.6 h (3B); 74 h (7B) |
© 2026 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license.
Share and Cite
Lim, P.Y.; Ho, C.F.; Tan, C.W. Parameter-Efficient Adaptation of Qwen2.5 for Aspect-Based Sentiment Analysis Using Low-Rank Adaptation and Parameter-Efficient Fine-Tuning. Eng. Proc. 2026, 128, 15. https://doi.org/10.3390/engproc2026128015

