Contrastive Learning-Based Personalized Tag Recommendation
Abstract
1. Introduction
- We utilize a noise augmentation strategy to generate augmented views of the user–tag and item–tag interaction graphs, which guarantees that the underlying semantics of the original interaction graphs remain unchanged and avoids the problem of false negatives (a minimal illustrative sketch of this idea follows this list).
- We integrate the contrastive learning module into the PTR model, which effectively alleviates the problem of data sparsity.
- We conduct extensive experiments on real-world datasets, and the experimental results demonstrate the superior performance of our proposed CLPTR compared with traditional PTR models.
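To make the first two contributions concrete, the sketch below shows one common way to pair embedding-level noise augmentation with an InfoNCE-style contrastive loss, in the spirit of SimGCL [21]. The function names, the noise construction, and the hyperparameters (eps, tau) are illustrative assumptions; CLPTR's exact augmentation and loss are defined in Sections 3.3.1 and 3.3.2.

```python
# Illustrative sketch only: embedding-level noise augmentation plus an InfoNCE-style
# contrastive loss, in the spirit of SimGCL [21]. Names and hyperparameters are
# assumptions for illustration, not CLPTR's exact scheme.
import torch
import torch.nn.functional as F

def noise_augment(emb: torch.Tensor, eps: float = 0.1) -> torch.Tensor:
    """Perturb each embedding by a small random vector of fixed magnitude eps,
    so the augmented view stays close to the original (semantics preserved)."""
    noise = F.normalize(torch.rand_like(emb), dim=-1) * torch.sign(emb)
    return emb + eps * noise

def info_nce(view1: torch.Tensor, view2: torch.Tensor, tau: float = 0.2) -> torch.Tensor:
    """Treat the same node in the two augmented views as a positive pair;
    all other nodes in the batch serve as negatives."""
    z1, z2 = F.normalize(view1, dim=-1), F.normalize(view2, dim=-1)
    logits = z1 @ z2.t() / tau                 # cosine similarities scaled by temperature
    labels = torch.arange(z1.size(0))          # positives lie on the diagonal
    return F.cross_entropy(logits, labels)

# Toy usage with random embeddings standing in for the user-tag graph embeddings.
user_emb = torch.randn(64, 32)
cl_loss = info_nce(noise_augment(user_emb), noise_augment(user_emb))
```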
2. Related Work
2.1. Personalized Tag Recommendation Algorithms
2.2. Contrastive Learning-Based Recommendation Models
3. Contrastive Learning-Based Personalized Tag Recommendation
3.1. Problem Description
3.2. Graph Convolution Module
3.2.1. Embedding Layer
3.2.2. Embedding Propagation Layer
3.2.3. Prediction Layer
3.3. Contrastive Learning Module
3.3.1. Noise Augmentation
3.3.2. Contrastive Loss
3.4. Objective Function
4. Experiment
4.1. Datasets
4.2. Evaluation Metrics and Experimental Settings
- NGCF [32]: NGCF integrates GCNs into a personalized item recommendation model. In our experiments, we feed the user–tag interaction information into NGCF as its input.
- PITF [6]: PITF explicitly models the pairwise interactions among the three types of entities (user–tag and item–tag) and uses the BPR criterion to optimize the model parameters (a brief sketch of this scoring scheme follows this list).
- NLTF [7]: NLTF utilizes a Gaussian kernel to enhance the capacity of tensor factorization for modeling the complex relations among entities.
- ABNT [8]: ABNT models the nonlinear relationships among entities through a multi-layer perceptron.
- GNN-PTR [16]: GNN-PTR integrates GCNs into the PTR model and utilizes graph convolution to capture high-order collaborative signals among entities.
- LNGTR [17]: LNGTR replaces the GCN in GNN-PTR with a lightweight GCN to ease model training.
- GHPTR [33]: GHPTR explicitly injects high-order relevance into entity representations through the message propagation and aggregation mechanism of GNNs, and leverages hyperbolic embeddings to alleviate the problem of embedding distortion.
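For context on the tensor-factorization baselines above, the following is a minimal sketch of a PITF-style scoring function [6] trained with the BPR criterion [31]. The factor names, dimensions, and random initialization are illustrative assumptions rather than the authors' implementations.

```python
# Minimal NumPy sketch of PITF-style scoring [6] with the BPR criterion [31].
# Names, dimensions, and initialization are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, n_tags, d = 100, 200, 50, 16

# PITF keeps separate pairwise factor matrices for the user-tag and item-tag interactions.
U_usr = rng.normal(scale=0.1, size=(n_users, d))  # user factors (interact with tags)
I_itm = rng.normal(scale=0.1, size=(n_items, d))  # item factors (interact with tags)
T_usr = rng.normal(scale=0.1, size=(n_tags, d))   # tag factors on the user side
T_itm = rng.normal(scale=0.1, size=(n_tags, d))   # tag factors on the item side

def score(u: int, i: int, t: int) -> float:
    """PITF score: sum of the user-tag and item-tag inner products."""
    return float(U_usr[u] @ T_usr[t] + I_itm[i] @ T_itm[t])

def bpr_loss(u: int, i: int, t_pos: int, t_neg: int) -> float:
    """BPR: an observed tag t_pos should be ranked above an unobserved tag t_neg."""
    margin = score(u, i, t_pos) - score(u, i, t_neg)
    return float(-np.log(1.0 / (1.0 + np.exp(-margin))))

print(bpr_loss(u=0, i=0, t_pos=1, t_neg=2))
```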
4.3. Performance Analysis
- ABNT performs the worst on all datasets. One possible reason is that ABNT employs the multi-layer perceptron to capture the nonlinear relationships among entities, which introduces a large number of trainable parameters. However, with sparse interactions, ABNT is unable to accurately learn the embeddings of entities.
- NGCF performs better than ABNT. Although NGCF only models user–tag interaction information, NGCF utilizes GCN to extract high-order collaborative signals from the interaction behaviors, which enriches the embeddings of entities. This observation indicates that capturing high-order collaborative signals among entities is beneficial to PTR models.
- Compared to NGCF, NLTF achieves better performance. A likely reason is that NLTF captures the third-order interactions among users, items, and tags, whereas NGCF only models the second-order user–tag interactions.
- PITF generally outperforms NLTF. This indicates that explicitly modeling the pairwise interactions among entities is a promising approach for PTR systems.
- GNN-PTR is superior to PITF, because GNN-PTR utilizes the graph convolution module to effectively capture high-order collaborative signals among entities.
- The performance of LNGTR is better than that of GNN-PTR. This observation suggests that the heavyweight GCN used in GNN-PTR can hinder the learning of model parameters.
- Compared to LNGTR, GHPTR is better. One possible reason is that GHPTR utilizes hyperbolic distances to measure similarities between entities and leverages hyperbolic embedding to alleviate the problem of embedding distortion.
- On all datasets, our proposed CLPTR achieves the best performance against all baselines. For instance, on ml10m-10, CLPTR outperforms LNGTR by 11.6% and 16.15% on the two reported metrics; on ml10m-5, the corresponding improvements over LNGTR are 7.5% and 7.7%. This observation demonstrates that integrating the contrastive learning module into the PTR model helps to accurately learn the embeddings of entities by capturing the invariances among the augmented views (a short arithmetic check of these percentages follows this list).
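The relative improvements quoted in the last bullet can be recomputed directly from the LNGTR and CLPTR values in the performance comparison table below (the first and fourth metric rows of each dataset block); the snippet is only an added verification of the reported numbers, computed as (CLPTR - LNGTR) / LNGTR.

```python
# Recomputing CLPTR's relative improvements over LNGTR from the values in the
# performance comparison table (first and fourth metric rows of each dataset block).
pairs = {
    "ml10m-10": [(0.2267, 0.2530), (0.4990, 0.5795)],  # (LNGTR, CLPTR)
    "ml10m-5":  [(0.1795, 0.1929), (0.3819, 0.4111)],
}
for dataset, values in pairs.items():
    gains = [(clptr - lngtr) / lngtr for lngtr, clptr in values]
    print(dataset, [f"{g:.1%}" for g in gains])
# ml10m-10 ['11.6%', '16.1%']  -> reported as 11.6% and 16.15%
# ml10m-5  ['7.5%', '7.6%']    -> reported as 7.5% and 7.7% (gap likely from rounding of table values)
```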
4.4. Ablation Analysis
4.5. Impact of Noise Combination
4.6. Parameter Sensitivity Analysis
4.6.1. The Impact of
4.6.2. The Impact of
4.6.3. The Impact of
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Adomavicius, G.; Tuzhilin, A. Toward the next generation of recommender systems: A survey of the state-of-the-art and possible extensions. IEEE Trans. Knowl. Data Eng. 2005, 17, 734–749.
- Guo, Q.; Zhuang, F.; Qin, C.; Zhu, H.; Xie, X.; Xiong, H.; He, Q. A Survey on Knowledge Graph-Based Recommender Systems. IEEE Trans. Knowl. Data Eng. 2022, 34, 3549–3568.
- Wu, L.; He, X.; Wang, X.; Zhang, K.; Wang, M. A Survey on Accuracy-Oriented Neural Recommendation: From Collaborative Filtering to Information-Rich Recommendation. IEEE Trans. Knowl. Data Eng. 2023, 35, 4425–4445.
- Symeonidis, P.; Nanopoulos, A.; Manolopoulos, Y. Tag recommendations based on tensor dimensionality reduction. In Proceedings of the 2008 ACM Conference on Recommender Systems (RecSys 2008), Lausanne, Switzerland, 23–25 October 2008; pp. 43–50.
- Rendle, S.; Balby Marinho, L.; Nanopoulos, A.; Schmidt-Thieme, L. Learning optimal ranking with tensor factorization for tag recommendation. In Proceedings of the SIGKDD, Paris, France, 28 July 2009; pp. 727–736.
- Rendle, S.; Schmidt-Thieme, L. Pairwise interaction tensor factorization for personalized tag recommendation. In Proceedings of the 3rd ACM International Conference on Web Search and Data Mining (WSDM), New York, NY, USA, 3–6 February 2010; pp. 81–90.
- Fang, X.; Pan, R.; Cao, G.; He, X.; Dai, W. Personalized tag recommendation through nonlinear tensor factorization using Gaussian kernel. In Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence (AAAI-15), Austin, TX, USA, 25–30 January 2015; pp. 439–445.
- Yuan, J.; Jin, Y.; Liu, W.; Wang, X. Attention-Based Neural Tag Recommendation. In Proceedings of the DASFAA, Chiang Mai, Thailand, 22–25 April 2019; pp. 350–365.
- Shelhamer, E.; Long, J.; Darrell, T. Fully Convolutional Networks for Semantic Segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 640–651.
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778.
- He, K.; Fan, H.; Wu, Y.; Xie, S.; Girshick, R. Momentum Contrast for Unsupervised Visual Representation Learning. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020; pp. 9726–9735.
- Kim, Y. Convolutional Neural Networks for Sentence Classification. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, Doha, Qatar, 25–29 October 2014; pp. 1746–1751.
- Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, L.; Polosukhin, I. Attention is all you need. In Proceedings of the 31st International Conference on Neural Information Processing Systems, Long Beach, CA, USA, 4–9 December 2017; pp. 6000–6010.
- Gao, T.; Yao, X.; Chen, D. SimCSE: Simple Contrastive Learning of Sentence Embeddings. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP 2021), Punta Cana, Dominican Republic, 7–11 November 2021; pp. 6894–6910.
- Wu, F.; de Souza, A.H., Jr.; Zhang, T.; Fifty, C.; Yu, T.; Weinberger, K.Q. Simplifying Graph Convolutional Networks. In Proceedings of the 36th International Conference on Machine Learning, Long Beach, CA, USA, 10–15 June 2019; pp. 6861–6871.
- Chen, X.; Yu, Y.; Jiang, F.; Zhang, L.; Gao, R.; Gao, H. Graph Neural Networks Boosted Personalized Tag Recommendation Algorithm. In Proceedings of the 2020 International Joint Conference on Neural Networks (IJCNN 2020), Glasgow, UK, 19–24 July 2020; pp. 1–8.
- Yu, Y.; Chen, X.; Zhang, L.; Gao, R.; Gao, H. Neural Graph for Personalized Tag Recommendation. IEEE Intell. Syst. 2022, 37, 51–59.
- Scarselli, F.; Gori, M.; Tsoi, A.C.; Hagenbuchner, M.; Monfardini, G. The Graph Neural Network Model. IEEE Trans. Neural Netw. 2009, 20, 61–80.
- Wu, J.; Wang, X.; Feng, F.; He, X.; Chen, L.; Lian, J.; Xie, X. Self-supervised Graph Learning for Recommendation. In Proceedings of the SIGIR, Virtual Event, 11–15 July 2021; pp. 726–735.
- Jing, M.; Zhu, Y.; Zang, T.; Wang, K. Contrastive Self-supervised Learning in Recommender Systems: A Survey. ACM Trans. Inf. Syst. 2023, 42, 1–39.
- Yu, J.; Yin, H.; Xia, X.; Chen, T.; Cui, L.; Nguyen, Q.V.H. Are Graph Augmentations Necessary? Simple Graph Contrastive Learning for Recommendation. In Proceedings of the SIGIR, Madrid, Spain, 11–15 July 2022; pp. 1294–1303.
- Wang, F.; Liu, H. Understanding the Behaviour of Contrastive Loss. In Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 20–25 June 2021; pp. 2495–2504.
- Zhang, O.; Wu, M.; Bayrooti, J.; Goodman, N. Temperature as Uncertainty in Contrastive Learning. arXiv 2021, arXiv:2110.04403.
- Liu, Z.; Li, H.; Chen, G.; Ouyang, Y.; Rong, W.; Xiong, Z. PopDCL: Popularity-aware Debiased Contrastive Loss for Collaborative Filtering. In Proceedings of the Conference on Information and Knowledge Management (CIKM), Birmingham, UK, 25 October 2023; pp. 1482–1492.
- Xie, X.; Sun, F.; Liu, Z.; Wu, S.; Gao, J.; Zhang, J.; Ding, B.; Cui, B. Contrastive Learning for Sequential Recommendation. In Proceedings of the 2021 IEEE 37th International Conference on Data Engineering (ICDE), Chania, Greece, 19–22 April 2021; pp. 1259–1273.
- Liu, Z.; Chen, Y.; Li, J.; Yu, P.S.; McAuley, J.; Xiong, C. Contrastive Self-supervised Sequential Recommendation with Robust Augmentation. arXiv 2021, arXiv:2108.06479.
- Chen, Y.; Liu, Z.; Li, J.; McAuley, J.; Xiong, C. Intent Contrastive Learning for Sequential Recommendation. In Proceedings of the ACM Web Conference 2022 (WWW), Lyon, France, 25–29 April 2022; pp. 2172–2182.
- Wu, H.; Zhang, Y.; Ma, C.; Guo, W.; Tang, R.; Liu, X.; Coates, M. Intent-aware Multi-source Contrastive Alignment for Tag-enhanced Recommendation. In Proceedings of the 2023 IEEE 39th International Conference on Data Engineering (ICDE), Anaheim, CA, USA, 3–7 April 2023; pp. 1112–1125.
- Xu, C.; Zhang, Y.; Chen, H.; Dong, L.; Wang, W. A fairness-aware graph contrastive learning recommender framework for social tagging systems. Inf. Sci. 2023, 640, 119064.
- He, X.; Deng, K.; Wang, X.; Li, Y.; Zhang, Y.; Wang, M. LightGCN: Simplifying and Powering Graph Convolution Network for Recommendation. In Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval, Virtual Event, China, 25–30 July 2020; pp. 639–648.
- Rendle, S.; Freudenthaler, C.; Gantner, Z.; Schmidt-Thieme, L. BPR: Bayesian Personalized Ranking from Implicit Feedback. In Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence (UAI), Montreal, QC, Canada, 18–21 June 2009; pp. 452–461.
- Wang, X.; He, X.; Wang, M.; Feng, F.; Chua, T.S. Neural Graph Collaborative Filtering. In Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval, Paris, France, 21–25 July 2019; pp. 165–174.
- Zhang, C.; Zhang, A.; Zhang, L.; Yu, Y.; Zhao, W.; Geng, H. A Graph Neural Networks-Based Learning Framework with Hyperbolic Embedding for Personalized Tag Recommendation. IEEE Access 2024, 12, 339–350.
Dataset | #Users | #Items | #Tags | #Interactions
---|---|---|---|---
lastfm-5 | 1348 | 6927 | 2132 | 162,047 |
lastfm-10 | 966 | 3870 | 1204 | 133,945 |
ml10m-5 | 990 | 3247 | 2566 | 61,688 |
ml10m-10 | 469 | 1524 | 1017 | 37,414 |
Dataset | Metric | NGCF | PITF | NLTF | ABNT | GNN-PTR | LNGTR | GHPTR | CLPTR
---|---|---|---|---|---|---|---|---|---
ml10m-10 | | 0.1244 | 0.1699 | 0.1436 | 0.0896 | 0.1933 | 0.2267 | 0.2507 | 0.2530
 | | 0.0896 | 0.1173 | 0.1143 | 0.0759 | 0.1390 | 0.1748 | 0.1706 | 0.1833
 | | 0.0571 | 0.0744 | 0.0714 | 0.0501 | 0.0842 | 0.1064 | 0.0960 | 0.1106
 | | 0.3198 | 0.3770 | 0.3388 | 0.2210 | 0.4602 | 0.4990 | 0.5713 | 0.5795
 | | 0.3837 | 0.4523 | 0.4334 | 0.3017 | 0.5461 | 0.6319 | 0.6343 | 0.6653
 | | 0.4688 | 0.5205 | 0.5341 | 0.3858 | 0.6398 | 0.7698 | 0.6898 | 0.7775
ml10m-5 | | 0.0950 | 0.1398 | 0.1323 | 0.0822 | 0.1455 | 0.1795 | 0.1915 | 0.1929
 | | 0.0683 | 0.1021 | 0.0972 | 0.0628 | 0.1055 | 0.1388 | 0.1375 | 0.1455
 | | 0.0438 | 0.0641 | 0.0596 | 0.0400 | 0.0672 | 0.0892 | 0.0797 | 0.0908
 | | 0.2463 | 0.3208 | 0.2974 | 0.2089 | 0.3331 | 0.3819 | 0.4158 | 0.4111
 | | 0.2820 | 0.3910 | 0.3560 | 0.2538 | 0.3965 | 0.4817 | 0.4799 | 0.4992
 | | 0.3495 | 0.4623 | 0.4270 | 0.3039 | 0.4852 | 0.6008 | 0.5391 | 0.6116
lastfm-10 | | 0.1739 | 0.2513 | 0.2443 | 0.1605 | 0.2647 | 0.3240 | 0.3382 | 0.4244
 | | 0.1468 | 0.2088 | 0.2064 | 0.1367 | 0.2143 | 0.2652 | 0.2658 | 0.3371
 | | 0.1140 | 0.1458 | 0.1249 | 0.0943 | 0.1462 | 0.1833 | 0.1772 | 0.2168
 | | 0.2180 | 0.3204 | 0.2849 | 0.1579 | 0.3479 | 0.3949 | 0.4339 | 0.5250
 | | 0.2878 | 0.4158 | 0.4017 | 0.2190 | 0.4529 | 0.5208 | 0.5367 | 0.6587
 | | 0.4289 | 0.5654 | 0.5541 | 0.3034 | 0.5874 | 0.6830 | 0.6119 | 0.7998
lastfm-5 | | 0.1679 | 0.2127 | 0.1949 | 0.1563 | 0.2324 | 0.2789 | 0.3043 | 0.3591
 | | 0.1395 | 0.1789 | 0.1678 | 0.1353 | 0.1913 | 0.2325 | 0.2390 | 0.2918
 | | 0.1023 | 0.1274 | 0.1191 | 0.1018 | 0.1327 | 0.1596 | 0.1547 | 0.1911
 | | 0.2191 | 0.2571 | 0.2275 | 0.1569 | 0.3244 | 0.3415 | 0.3914 | 0.4490
 | | 0.2907 | 0.3479 | 0.3239 | 0.2194 | 0.4170 | 0.4511 | 0.4776 | 0.5736
 | | 0.4013 | 0.4814 | 0.4523 | 0.3298 | 0.5454 | 0.5861 | 0.5673 | 0.6990
Dataset | Metric | CLPTR-gcn | CLPTR-cl | CLPTR
---|---|---|---|---
ml10m-10 | | 0.1699 | 0.2267 | 0.2530
 | | 0.1173 | 0.1748 | 0.1833
 | | 0.3770 | 0.4990 | 0.5795
 | | 0.4523 | 0.6319 | 0.6653
ml10m-5 | | 0.1398 | 0.1795 | 0.1929
 | | 0.1021 | 0.1388 | 0.1455
 | | 0.3208 | 0.3819 | 0.4111
 | | 0.3910 | 0.4817 | 0.4992
lastfm-10 | | 0.2513 | 0.3240 | 0.4244
 | | 0.2088 | 0.2652 | 0.3371
 | | 0.3204 | 0.3949 | 0.5250
 | | 0.4158 | 0.5208 | 0.6587
lastfm-5 | | 0.2127 | 0.2789 | 0.3591
 | | 0.1789 | 0.2325 | 0.2918
 | | 0.2571 | 0.3415 | 0.4490
 | | 0.3479 | 0.4511 | 0.5736
Dataset | Metric | | |
---|---|---|---|---
ml10m-10 | | 0.2530 | 0.2431 | 0.2424
 | | 0.1833 | 0.1774 | 0.1770
 | | 0.5795 | 0.5678 | 0.5607
 | | 0.6653 | 0.6696 | 0.6687
ml10m-5 | | 0.1929 | 0.1323 | 0.1283
 | | 0.1455 | 0.1085 | 0.1012
 | | 0.4111 | 0.2687 | 0.2620
 | | 0.4992 | 0.3597 | 0.3378
lastfm-10 | | 0.4244 | 0.3647 | 0.3192
 | | 0.3371 | 0.2969 | 0.2631
 | | 0.5250 | 0.4277 | 0.3769
 | | 0.6587 | 0.5654 | 0.4981
lastfm-5 | | 0.3591 | 0.3244 | 0.2915
 | | 0.2918 | 0.2690 | 0.2436
 | | 0.4490 | 0.4088 | 0.3709
 | | 0.5736 | 0.5335 | 0.4876