Attentive Review Semantics-Aware Recommendation Model for Rating Prediction
Abstract
1. Introduction
- This study proposes a new recommendation model that captures the relevance between review content and the target item to estimate the user’s preference for that item. The proposed model combines large-scale item reviews with general item information to build a unified representation that effectively captures the comprehensive characteristics of an item.
- The proposed model uses a self-attention mechanism to capture the correlation among item characteristics and a co-attention mechanism to capture the complementarity between them, yielding a more comprehensive, fused representation of item characteristics (an illustrative sketch follows this list).
- The proposed model was evaluated against various baseline models on a real-world dataset from Amazon.com; the experimental results show that it outperforms existing models in making recommendations.
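To make the fusion idea concrete, the following is a minimal PyTorch-style sketch of self-attention over each characteristic set followed by a simple co-attention between the review-semantic and item-information representations. The class name, tensor shapes, pooling, and weight sharing are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn


class AttentiveFusionSketch(nn.Module):
    """Illustrative only: self-attention over each characteristic set, then a
    simple co-attention between review semantics and item auxiliary information.
    Shapes, pooling, and the shared attention module are assumptions."""

    def __init__(self, dim: int = 64, heads: int = 4):
        super().__init__()
        # One attention module is shared by both inputs purely to keep the sketch short.
        self.self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.scale = dim ** 0.5

    def forward(self, review_emb: torch.Tensor, item_emb: torch.Tensor) -> torch.Tensor:
        # review_emb: (batch, n_review_tokens, dim) -- review-semantic features
        # item_emb:   (batch, n_item_fields, dim)   -- item auxiliary features
        r, _ = self.self_attn(review_emb, review_emb, review_emb)  # intra-set correlation
        v, _ = self.self_attn(item_emb, item_emb, item_emb)

        # Co-attention: affinity between the two characteristic sets
        affinity = torch.bmm(r, v.transpose(1, 2)) / self.scale               # (batch, n_r, n_v)
        r_ctx = torch.bmm(torch.softmax(affinity, dim=-1), v)                 # item-aware review features
        v_ctx = torch.bmm(torch.softmax(affinity.transpose(1, 2), dim=-1), r) # review-aware item features

        # Mean-pool and average the two views into one fused item representation
        return (r_ctx.mean(dim=1) + v_ctx.mean(dim=1)) / 2
```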
2. Related Works
2.1. Review-Based Recommender System
2.2. Deep Learning Techniques for Recommender Systems
2.2.1. Large Language Models (LLMs)
2.2.2. Attention Mechanism
2.3. Research Gaps and Motivation
3. Problem Definition
4. ARSRec Framework
4.1. User-Item Interaction Network
4.2. User Attentive Representation Network
4.2.1. Review Semantics Extractor
4.2.2. Item Auxiliary Information Extractor
4.2.3. User Attentive Preference Information Extraction
4.3. Rating Prediction Network
5. Experiments
- RQ 1: Does the proposed model perform better than other baseline models?
- RQ 2: Does considering the relevance between user reviews and the fused item information actually impact recommendation performance?
- RQ 3: What is the most effective computational method for capturing user opinions from user reviews?
- RQ 4: How do different hyperparameters affect the recommendation performance of the proposed model?
5.1. Datasets
5.2. Evaluation Metrics
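The result tables in Section 6 report MAE and RMSE. For reference, the standard definitions are given below; the notation ($\mathcal{T}$ for the test set, $r_{u,i}$ for the observed rating, $\hat{r}_{u,i}$ for the predicted rating) is assumed here for readability.

$$
\mathrm{MAE}=\frac{1}{|\mathcal{T}|}\sum_{(u,i)\in\mathcal{T}}\bigl|\hat{r}_{u,i}-r_{u,i}\bigr|,
\qquad
\mathrm{RMSE}=\sqrt{\frac{1}{|\mathcal{T}|}\sum_{(u,i)\in\mathcal{T}}\bigl(\hat{r}_{u,i}-r_{u,i}\bigr)^{2}}
$$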
5.3. Baseline Models
- PMF [35]: An MF-based recommendation model that predicts user preferences by factorizing the user-item interaction matrix into low-dimensional latent factors with Gaussian priors. This approach works better than traditional MF models on sparse and imbalanced data.
- NCF [36]: This model captures the complex interactions between users and items through both linear and non-linear learning. It proposes the NeuMF structure, which combines a generalized matrix factorization (GMF) model with an MLP, and uses only rating information (a minimal sketch of the NeuMF structure appears after this list).
- DeepCoNN [7]: Deep cooperative neural networks use two parallel CNNs to extract representation vectors from user reviews and item reviews, respectively. The two extracted vectors are then fed into a factorization machine for the final rating prediction.
- SAFMR [37]: This model uses CNNs to extract user and item characteristics from sets of reviews. A self-attention mechanism then weighs the importance of the various attributes of an item and reflects this in the recommendations.
- NARRE [28]: This model takes the review text and rating matrix as input and uses a CNN with an attention mechanism to learn the latent features of each user and item review. The attention mechanism reduces or ignores the weight of low-importance reviews, which allows the model to predict ratings effectively.
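For concreteness, below is a minimal PyTorch-style sketch of the NeuMF structure (GMF branch plus MLP branch) referenced in the NCF entry; the embedding sizes, layer widths, and output head are placeholder assumptions rather than the baseline's exact configuration.

```python
import torch
import torch.nn as nn


class NeuMFSketch(nn.Module):
    """Illustrative NeuMF-style model: a GMF branch (element-wise product of
    user/item embeddings) fused with an MLP branch. Sizes are placeholders."""

    def __init__(self, n_users: int, n_items: int, dim: int = 32):
        super().__init__()
        self.user_gmf = nn.Embedding(n_users, dim)
        self.item_gmf = nn.Embedding(n_items, dim)
        self.user_mlp = nn.Embedding(n_users, dim)
        self.item_mlp = nn.Embedding(n_items, dim)
        self.mlp = nn.Sequential(
            nn.Linear(2 * dim, dim), nn.ReLU(),
            nn.Linear(dim, dim // 2), nn.ReLU(),
        )
        self.out = nn.Linear(dim + dim // 2, 1)  # fuses the two branches

    def forward(self, user_ids: torch.Tensor, item_ids: torch.Tensor) -> torch.Tensor:
        gmf = self.user_gmf(user_ids) * self.item_gmf(item_ids)          # GMF branch
        mlp = self.mlp(torch.cat([self.user_mlp(user_ids),
                                  self.item_mlp(item_ids)], dim=-1))     # MLP branch
        return self.out(torch.cat([gmf, mlp], dim=-1)).squeeze(-1)       # predicted rating
```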
5.4. Implementation Details
6. Experimental Results and Discussion
6.1. Performance Comparison to Baseline Models (RQ 1)
6.2. Model Components Analysis (RQ 2)
6.3. Fusion Method Efficiency Analysis (RQ 3)
Given the two representation vectors to be fused, denoted here as $x$ and $y$, the fused representation $z$ is computed with one of the following operations (a minimal code sketch follows this list):

1. Add: element-wise addition, i.e., $z = x + y$.
2. Average: element-wise average, i.e., $z = (x + y)/2$.
3. Concatenation: concatenation of the two vectors, i.e., $z = [x; y]$.
4. Dot Product: element-wise (Hadamard) product, i.e., $z = x \odot y$.
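A minimal sketch of the four fusion operations, assuming two equally sized representation vectors; the names x and y are placeholders for the two representations being fused.

```python
import torch


def fuse(x: torch.Tensor, y: torch.Tensor, method: str = "dot") -> torch.Tensor:
    """Combine two equally sized representation vectors."""
    if method == "add":        # element-wise addition
        return x + y
    if method == "average":    # element-wise average
        return (x + y) / 2
    if method == "concat":     # concatenation along the feature dimension
        return torch.cat([x, y], dim=-1)
    if method == "dot":        # element-wise (Hadamard) product
        return x * y
    raise ValueError(f"unknown fusion method: {method}")
```

Note that concatenation doubles the feature dimension, so any layer consuming the fused vector must account for that; the other three operations preserve the original dimensionality.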
6.4. Impact of Hyperparameters (RQ 4)
7. Conclusions and Future Work
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
1. Xiao, B.; Benbasat, I. E-commerce product recommendation agents: Use, characteristics, and impact. MIS Q. 2007, 31, 137–209.
2. Nguyen, T.T.; Hui, P.-M.; Harper, F.M.; Terveen, L.; Konstan, J.A. Exploring the filter bubble: The effect of using recommender systems on content diversity. In Proceedings of the 23rd International Conference on World Wide Web, Seoul, Republic of Korea, 7–11 April 2014; pp. 677–686.
3. Wang, R.; Jiang, Y.; Lou, J. ADCF: Attentive representation learning and deep collaborative filtering model. Knowl. Based Syst. 2021, 227, 107194.
4. Wang, J.; De Vries, A.P.; Reinders, M.J. Unifying user-based and item-based collaborative filtering approaches by similarity fusion. In Proceedings of the 29th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, Seattle, WA, USA, 6–11 August 2006; pp. 501–508.
5. Jang, D.; Li, Q.; Lee, C.; Kim, J. Attention-based multi attribute matrix factorization for enhanced recommendation performance. Inf. Syst. 2024, 121, 102334.
6. Pappas, I.O.; Kourouthanassis, P.E.; Giannakos, M.N.; Chrissikopoulos, V. Explaining online shopping behavior with fsQCA: The role of cognitive and affective perceptions. J. Bus. Res. 2016, 69, 794–803.
7. Zheng, L.; Noroozi, V.; Yu, P.S. Joint deep modeling of users and items using reviews for recommendation. In Proceedings of the Tenth ACM International Conference on Web Search and Data Mining, Cambridge, UK, 6–10 February 2017; pp. 425–434.
8. McAuley, J.; Leskovec, J. Hidden factors and hidden topics: Understanding rating dimensions with review text. In Proceedings of the 7th ACM Conference on Recommender Systems, Hong Kong, China, 12–16 October 2013; pp. 165–172.
9. Kim, D.; Park, C.; Oh, J.; Lee, S.; Yu, H. Convolutional matrix factorization for document context-aware recommendation. In Proceedings of the 10th ACM Conference on Recommender Systems, Boston, MA, USA, 15–19 September 2016; pp. 233–240.
10. Ma, Y.; Chen, G.; Wei, Q. Finding users preferences from large-scale online reviews for personalized recommendation. Electron. Commer. Res. 2017, 17, 3–29.
11. Pourgholamali, F.; Kahani, M.; Bagheri, E.; Noorian, Z. Embedding unstructured side information in product recommendation. Electron. Commer. Res. Appl. 2017, 25, 70–85.
12. Wang, A.; Zhang, Q.; Zhao, S.; Lu, X.; Peng, Z. A review-driven customer preference measurement model for product improvement: Sentiment-based importance-performance analysis. Inf. Syst. e-Bus. Manag. 2020, 18, 61–88.
13. Khaledian, N.; Nazari, A.; Khamforoosh, K.; Abualigah, L.; Javaheri, D. TrustDL: Use of trust-based dictionary learning to facilitate recommendation in social networks. Expert Syst. Appl. 2023, 228, 120487.
14. Cheng, Z.; Ding, Y.; He, X.; Zhu, L.; Song, X.; Kankanhalli, M.S. A³NCF: An adaptive aspect attention model for rating prediction. In Proceedings of the IJCAI, Stockholm, Sweden, 13–19 July 2018; pp. 3748–3754.
15. Cao, R.; Zhang, X.; Wang, H. A review semantics based model for rating prediction. IEEE Access 2019, 8, 4714–4723.
16. Liu, Y.-H.; Chen, Y.-L.; Chang, P.-Y. A deep multi-embedding model for mobile application recommendation. Decis. Support Syst. 2023, 173, 114011.
17. Wang, S.; Qiu, J. Utilizing a feature-aware external memory network for helpfulness prediction in e-commerce reviews. Appl. Soft Comput. 2023, 148, 110923.
18. Liu, H.; Wang, Y.; Peng, Q.; Wu, F.; Gan, L.; Pan, L.; Jiao, P. Hybrid neural recommendation with joint deep representation learning of ratings and reviews. Neurocomputing 2020, 374, 77–85.
19. Lee, S.; Kim, D. Deep learning based recommender system using cross convolutional filters. Inf. Sci. 2022, 592, 112–122.
20. Qiu, Z.; Wu, X.; Gao, J.; Fan, W. U-BERT: Pre-training user representations for improved recommendation. In Proceedings of the AAAI Conference on Artificial Intelligence, Virtual, 2–9 February 2021; pp. 4320–4327.
21. Kuo, R.; Li, S.-S. Applying particle swarm optimization algorithm-based collaborative filtering recommender system considering rating and review. Appl. Soft Comput. 2023, 135, 110038.
22. Geng, S.; Liu, S.; Fu, Z.; Ge, Y.; Zhang, Y. Recommendation as language processing (RLP): A unified pretrain, personalized prompt & predict paradigm (P5). In Proceedings of the 16th ACM Conference on Recommender Systems, Seattle, WA, USA, 18–23 September 2022; pp. 299–315.
23. Liu, P.; Zhang, L.; Gulla, J.A. Pre-train, prompt, and recommendation: A comprehensive survey of language modeling paradigm adaptations in recommender systems. Trans. Assoc. Comput. Linguist. 2023, 11, 1553–1571.
24. Li, L.; Zhang, Y.; Chen, L. Personalized prompt learning for explainable recommendation. ACM Trans. Inf. Syst. 2023, 41, 103.
25. Chang, H.-S.; Sun, R.-Y.; Ricci, K.; McCallum, A. Multi-CLS BERT: An efficient alternative to traditional ensembling. arXiv 2022, arXiv:2210.05043.
26. Devlin, J.; Chang, M.-W.; Lee, K.; Toutanova, K. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv 2018, arXiv:1810.04805.
27. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, Ł.; Polosukhin, I. Attention is all you need. In Proceedings of the Advances in Neural Information Processing Systems, Long Beach, CA, USA, 4–9 December 2017; Volume 30.
28. Chen, C.; Zhang, M.; Liu, Y.; Ma, S. Neural attentional rating regression with review-level explanations. In Proceedings of the 2018 World Wide Web Conference, Lyon, France, 23–27 April 2018; pp. 1583–1592.
29. Chin, J.Y.; Zhao, K.; Joty, S.; Cong, G. ANR: Aspect-based neural recommender. In Proceedings of the 27th ACM International Conference on Information and Knowledge Management, Torino, Italy, 22–26 October 2018; pp. 147–156.
30. More, A. Attribute extraction from product titles in ecommerce. arXiv 2016, arXiv:1608.04670.
31. Wang, Q.; Yang, L.; Kanagal, B.; Sanghai, S.; Sivakumar, D.; Shu, B.; Yu, Z.; Elsas, J. Learning to extract attribute value from product via question answering: A multi-task approach. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, Virtual, 6–10 July 2020; pp. 47–55.
32. Yang, S.; Li, Q.; Jang, D.; Kim, J. Deep learning mechanism and big data in hospitality and tourism: Developing personalized restaurant recommendation model to customer decision-making. Int. J. Hosp. Manag. 2024, 121, 103803.
33. Park, J.; Li, X.; Li, Q.; Kim, J. Impact on recommendation performance of online review helpfulness and consistency. Data Technol. Appl. 2023, 57, 199–221.
34. Isinkaye, F.O.; Folajimi, Y.O.; Ojokoh, B.A. Recommendation systems: Principles, methods and evaluation. Egypt. Inform. J. 2015, 16, 261–273.
35. Mnih, A.; Salakhutdinov, R.R. Probabilistic matrix factorization. In Proceedings of the Advances in Neural Information Processing Systems, Vancouver, BC, Canada, 3–6 December 2007; Volume 20.
36. He, X.; Liao, L.; Zhang, H.; Nie, L.; Hu, X.; Chua, T.-S. Neural collaborative filtering. In Proceedings of the 26th International Conference on World Wide Web, Perth, Australia, 3–7 April 2017; pp. 173–182.
37. Ma, H.; Liu, Q. In-depth recommendation model based on self-attention factorization. KSII Trans. Internet Inf. Syst. 2023, 17, 721–739.
| Feature | Musical Instruments | Digital Music | Video Games |
|---|---|---|---|
| Users | 40,630 | 46,440 | 63,931 |
| Items | 59,981 | 210,124 | 47,243 |
| Reviews & Ratings | 357,804 | 505,399 | 581,465 |
| Sparsity (%) | 99.985 | 99.995 | 99.981 |
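The sparsity figures above are consistent with the usual definition; the formula below is stated here as a check and is an assumption rather than a quotation from the paper:

$$
\mathrm{Sparsity}=\left(1-\frac{\#\,\text{reviews \& ratings}}{\#\,\text{users}\times\#\,\text{items}}\right)\times 100\%
$$

For example, for Musical Instruments, $\left(1-\frac{357{,}804}{40{,}630\times 59{,}981}\right)\times 100\%\approx 99.985\%$.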
| Model | Musical Instruments (MAE) | Musical Instruments (RMSE) | Digital Music (MAE) | Digital Music (RMSE) | Video Games (MAE) | Video Games (RMSE) |
|---|---|---|---|---|---|---|
| PMF | 1.1610 | 1.2200 | 1.3860 | 1.4410 | 1.1020 | 1.1650 |
| NCF | 0.7590 | 1.0190 | 0.4990 | 0.7350 | 0.9020 | 1.1650 |
| DeepCoNN | 0.7190 | 0.9820 | 0.4380 | 0.7070 | 0.8450 | 1.1190 |
| SAFMR | 0.7180 | 0.9830 | 0.4240 | 0.6990 | 0.8810 | 1.1490 |
| NARRE | 0.6804 | 0.9694 | 0.3996 | 0.6594 | 0.8420 | 1.0893 |
| ARSRec | 0.5454 | 0.8959 | 0.3070 | 0.6207 | 0.6391 | 0.9305 |
| Model | Musical Instruments (MAE) | Musical Instruments (RMSE) | Digital Music (MAE) | Digital Music (RMSE) | Video Games (MAE) | Video Games (RMSE) |
|---|---|---|---|---|---|---|
| Only User Review | 0.5781 | 0.9101 | 0.3275 | 0.6285 | 0.6669 | 0.9328 |
| ARSRec | 0.5454 | 0.8959 | 0.3070 | 0.6207 | 0.6391 | 0.9305 |
| Fusion Method | Musical Instruments (MAE) | Musical Instruments (RMSE) | Digital Music (MAE) | Digital Music (RMSE) | Video Games (MAE) | Video Games (RMSE) |
|---|---|---|---|---|---|---|
| Add | 0.5910 | 0.9171 | 0.3261 | 0.6313 | 0.6505 | 0.9386 |
| Average | 0.5751 | 0.9001 | 0.3241 | 0.6242 | 0.6674 | 0.9330 |
| Concatenation | 0.5658 | 0.9238 | 0.3219 | 0.6255 | 0.6517 | 0.9327 |
| Dot Product | 0.5454 | 0.8959 | 0.3070 | 0.6207 | 0.6391 | 0.9305 |
| Hyperparameter | Values |
|---|---|
| Batch Size | 64, 128, 256, 512, 1024 |
| Dropout Rate | 0.1, 0.2, 0.3, 0.4, 0.5 |
| Learning Rate | 0.0001, 0.0005, 0.001, 0.005, 0.01 |
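Purely as an illustration of how this grid could be explored one hyperparameter at a time (matching the sweep tables that follow), a hypothetical sketch is shown below; train_and_eval is a placeholder stub, not part of the paper's code, and the default values are inferred from the best-performing settings reported in the tables.

```python
def train_and_eval(batch_size: int, dropout: float, learning_rate: float):
    """Hypothetical stub: train the model with these settings and return (MAE, RMSE)."""
    raise NotImplementedError("placeholder for the actual training/validation routine")


# Assumed defaults, inferred from the best rows of the sweep tables below.
defaults = {"batch_size": 64, "dropout": 0.1, "learning_rate": 0.001}
grid = {
    "batch_size":    [64, 128, 256, 512, 1024],
    "dropout":       [0.1, 0.2, 0.3, 0.4, 0.5],
    "learning_rate": [0.0001, 0.0005, 0.001, 0.005, 0.01],
}

results = {}
for name, values in grid.items():
    for value in values:
        params = {**defaults, name: value}                  # vary one hyperparameter at a time
        results[(name, value)] = train_and_eval(**params)   # expected to return (MAE, RMSE)
```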
| Batch Size | Musical Instruments (MAE) | Musical Instruments (RMSE) | Digital Music (MAE) | Digital Music (RMSE) | Video Games (MAE) | Video Games (RMSE) |
|---|---|---|---|---|---|---|
| 64 | 0.5454 | 0.8959 | 0.3070 | 0.6207 | 0.6391 | 0.9305 |
| 128 | 0.5688 | 0.9170 | 0.3225 | 0.6239 | 0.6767 | 0.9346 |
| 256 | 0.5729 | 0.9162 | 0.3753 | 0.6291 | 0.6582 | 0.9654 |
| 512 | 0.6053 | 0.9032 | 0.3597 | 0.6482 | 0.6704 | 0.9458 |
| 1024 | 0.5799 | 0.9191 | 0.3469 | 0.6537 | 0.6673 | 0.9344 |
| Dropout Rate | Musical Instruments (MAE) | Musical Instruments (RMSE) | Digital Music (MAE) | Digital Music (RMSE) | Video Games (MAE) | Video Games (RMSE) |
|---|---|---|---|---|---|---|
| 0.1 | 0.5454 | 0.8959 | 0.3070 | 0.6207 | 0.6391 | 0.9305 |
| 0.2 | 0.5637 | 0.9110 | 0.3255 | 0.6238 | 0.6542 | 0.9345 |
| 0.3 | 0.6024 | 0.9090 | 0.3244 | 0.6254 | 0.6529 | 0.9415 |
| 0.4 | 0.5869 | 0.9080 | 0.3189 | 0.6250 | 0.6483 | 0.9480 |
| 0.5 | 0.5629 | 0.9112 | 0.3128 | 0.6253 | 0.6953 | 0.9369 |
| Learning Rate | Musical Instruments (MAE) | Musical Instruments (RMSE) | Digital Music (MAE) | Digital Music (RMSE) | Video Games (MAE) | Video Games (RMSE) |
|---|---|---|---|---|---|---|
| 0.0001 | 0.5829 | 0.9131 | 0.3441 | 0.6269 | 0.6658 | 0.9825 |
| 0.0005 | 0.5730 | 0.8983 | 0.3218 | 0.6215 | 0.6648 | 0.9367 |
| 0.001 | 0.5454 | 0.8959 | 0.3070 | 0.6207 | 0.6391 | 0.9305 |
| 0.005 | 0.7010 | 0.9994 | 0.3632 | 0.6581 | 0.6975 | 0.9580 |
| 0.01 | 0.6947 | 1.0182 | 0.5002 | 0.7365 | 0.7585 | 1.0810 |