Mathematics · Open Access Article · 18 August 2023
LoRA-NCL: Neighborhood-Enriched Contrastive Learning with Low-Rank Dimensionality Reduction for Graph Collaborative Filtering

Science and Technology on Information Systems Engineering Laboratory, National University of Defense Technology, Changsha 410073, China
* Author to whom correspondence should be addressed.

Abstract

Graph Collaborative Filtering (GCF) methods have emerged as an effective recommendation approach, capturing users’ preferences over items by modeling user–item interaction graphs. However, these methods suffer from data sparsity in real scenarios, and their performance can be improved using contrastive learning. In this paper, we propose an optimized method, named LoRA-NCL, for GCF based on Neighborhood-enriched Contrastive Learning (NCL) and low-rank dimensionality reduction. We incorporate low-rank features obtained through matrix factorization into the NCL framework and employ LightGCN to extract high-dimensional representations. Extensive experiments on five public datasets demonstrate that the proposed method outperforms a competitive graph collaborative filtering baseline, achieving up to 4.6% performance gains on the MovieLens dataset.

1. Introduction

The advent of the digital age has led to an explosion of data, particularly in the realm of user–item interactions. This wealth of data has opened up new opportunities for recommendation systems [1], which aim to predict user preferences and recommend items that are most likely to be of interest. However, the sheer volume and complexity of the data present significant challenges. Traditional recommendation systems often struggle to effectively capture the intricate structure of user–item interactions, and fail to fully leverage the rich information embedded in these interactions.
In this paper, we propose a novel hybrid recommendation model that addresses these challenges by integrating Singular Value Decomposition [2] and an optimized version of Neighborhood-enriched Contrastive Learning [3]. Our method aims to capture both the global structure [4] and local neighborhood information [3] inherent in the user–item interaction graph [5], thereby enhancing the recommendation performance.
The primary contributions of this paper are as follows:
  • Novel Hybrid Recommendation Model: We propose a novel hybrid recommendation model that integrates Singular Value Decomposition (SVD) and an optimized version of neighborhood-enriched contrastive learning. This model is designed to capture both the global structure and local neighborhood information inherent in the user–item interaction graph, thereby enhancing the recommendation performance.
  • SVD-based Embedding Initialization: We introduce a novel approach to initializing user and item embeddings using SVD. This method captures the global structure of the user–item interaction graph and provides a robust starting point for the learning process. It also expedites the convergence of the training process, leading to improved efficiency.
  • Optimized Neighborhood-enriched Contrastive Learning: We present several key refinements to the NCL approach, including an adaptive neighborhood structure, unified optimization of contrastive objectives, and prototype regularization. These refinements allow our model to adapt to changing user–item interactions, balance the trade-off between different types of neighborhood information, and enhance the discriminative power of the prototypes.
  • Empirical Validation: We conduct extensive experiments on several benchmark datasets to validate the effectiveness of our proposed method. The results demonstrate that our method outperforms state-of-the-art recommendation models, thereby confirming its practical utility.
  • Insights into User–Item Interactions: Our work provides valuable insights into the structure of user–item interactions. By leveraging both global and local information, our model offers a more comprehensive understanding of user–item interactions, which can inform the design of future recommendation systems.

3. Proposed Method

In this section, we present our proposed method, a hybrid recommendation model that integrates Singular Value Decomposition (SVD) and an optimized version of Neighborhood-enriched Contrastive Learning (NCL). The objective of our method is to leverage both the global structure and local neighborhood information inherent in the user–item interaction graph, thereby enhancing the recommendation performance.

3.1. Embedding Initialization via Low-Rank Approximation

The traditional methods [11,17] of initializing user and item embeddings in recommendation systems often rely on random or heuristic techniques. These approaches, however, may not adequately capture the intrinsic structure of user–item interaction data, which can lead to less than optimal performance in the early stages of training and slower convergence.
To address this, we suggest an alternative method for initializing user and item embeddings using Singular Value Decomposition (SVD), a powerful tool from the field of linear algebra that provides a low-rank approximation of a matrix. The SVD of matrix A is given by
A = U \Sigma V^{\top},
In this equation, $U$ and $V$ are orthogonal matrices whose columns $u_k$ and $v_k$ are the left and right singular vectors, and $\Sigma = \operatorname{diag}(s_1, s_2, \ldots)$ is the diagonal matrix of singular values, ordered so that $s_1 \geq s_2 \geq \cdots \geq 0$. Components with larger (smaller) singular values contribute more (less) to the interactions, allowing us to approximate $A$ using only the $K$ largest singular values.
In the realm of recommendation systems, matrix A represents the user–item interaction matrix, with each entry A u i indicating the interaction between user u and item i. By applying SVD to A, we obtain a low-rank approximation that captures the most significant structure in the user–item interactions.
We use the first k columns of U and V as the initial embeddings for users and items, where k is the dimension of the embedding. This strategy offers two main advantages: First, the SVD-based initialization encapsulates the global structure [16] of the user–item interaction graph, providing a solid foundation for the learning process. Second, it can potentially speed up the convergence of the training process, as the initial embeddings are already a good approximation of the final embeddings.
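To make this concrete, the following is a minimal sketch of the SVD-based initialization, assuming the interaction matrix is available as a SciPy sparse matrix; the function name svd_init and the square-root scaling of the singular values are illustrative choices, not details from the paper.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import svds

def svd_init(A: csr_matrix, k: int = 64):
    """Truncated SVD of the |U| x |I| interaction matrix A, keeping the
    k largest singular values, to initialize user/item embeddings."""
    u, s, vt = svds(A.asfptype(), k=k)      # svds needs a float matrix; k < min(A.shape)
    order = np.argsort(-s)                  # svds returns singular values in ascending order
    u, s, vt = u[:, order], s[order], vt[order, :]
    scale = np.sqrt(s)                      # split Sigma between the two factors so that
    user_emb = u * scale                    # user_emb @ item_emb.T approximates A
    item_emb = vt.T * scale
    return user_emb, item_emb               # shapes: (|U|, k) and (|I|, k)
```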
Alternatively, we can dynamically learn low-rank representations [4] through matrix factorization [18]:
\min \sum_{(u,i) \in \mathcal{A}^{+}} \big( A_{ui} - e_u^{\top} e_i \big)^2 + \lambda \big( \lVert e_u \rVert_2^2 + \lVert e_i \rVert_2^2 \big),
where $\lambda$ is the regularization strength. Each user/item is treated as a node on the graph and parameterized as an embedding vector $e_u / e_i \in \mathbb{R}^d$ with dimension $d \ll \min(|\mathcal{U}|, |\mathcal{I}|)$, and $\mathcal{A}^{+} = \{(u,i) \mid A_{ui} = 1\}$ denotes the set of observed interactions. By optimizing this objective, the model is expected to learn the most important features of the interactions (e.g., the components corresponding to the $d$ largest singular values).
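As a rough illustration, this objective can be written in PyTorch as below; batching over observed (u, i) pairs and the helper name mf_loss are assumptions of the sketch, not the paper's implementation.

```python
import torch

def mf_loss(user_emb, item_emb, users, items, lam=1e-4):
    """Regularized MF objective over a batch of observed pairs with A_ui = 1:
    sum (A_ui - e_u^T e_i)^2 + lambda * (||e_u||^2 + ||e_i||^2)."""
    e_u = user_emb[users]                   # (batch, d) user embeddings
    e_i = item_emb[items]                   # (batch, d) item embeddings
    pred = (e_u * e_i).sum(dim=-1)          # inner products e_u^T e_i
    recon = ((1.0 - pred) ** 2).sum()       # A_ui = 1 for observed pairs
    reg = lam * (e_u.pow(2).sum() + e_i.pow(2).sum())
    return recon + reg
```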
In conclusion, the SVD-based initialization provides a systematic and effective way to initialize the user and item embeddings in recommendation systems, potentially leading to enhanced performance and quicker convergence.

3.2. Enhancing Collaborative Filtering with Contrastive Learning

As mentioned in Section 2.3, GNN-based methods produce user and item representations by applying propagation and prediction functions on the interaction graph $\mathcal{G}$. In NCL, we utilize a GNN to model the observed interactions between users and items. Specifically, following LightGCN [11], we discard the nonlinear activation and feature transformation in the propagation function, as follows:
z_u^{(l+1)} = \sum_{i \in \mathcal{N}_u} \frac{1}{\sqrt{|\mathcal{N}_u|} \sqrt{|\mathcal{N}_i|}} \, z_i^{(l)}, \qquad z_i^{(l+1)} = \sum_{u \in \mathcal{N}_i} \frac{1}{\sqrt{|\mathcal{N}_i|} \sqrt{|\mathcal{N}_u|}} \, z_u^{(l)},
After propagating with L layers, we adopt the weighted sum function as the readout function to combine the representations of all layers and obtain the final representations as follows:
z_u = \frac{1}{L+1} \sum_{l=0}^{L} z_u^{(l)}, \qquad z_i = \frac{1}{L+1} \sum_{l=0}^{L} z_i^{(l)},
With the final representations, we adopt the inner product to predict how likely a user u would interact with an item i:
\hat{y}_{u,i} = z_u^{\top} z_i,
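A compact sketch of this propagation-and-readout scheme is given below, assuming a precomputed, symmetrically normalized adjacency matrix over the joint user–item node set; the function name and layer count are illustrative.

```python
import torch

def lightgcn_forward(emb0, norm_adj, num_layers=3):
    """LightGCN-style propagation: no nonlinearity, no feature transform;
    the readout averages the outputs of all L+1 layers.
    emb0:     (|U|+|I|, d) initial embeddings (e.g., SVD-initialized);
    norm_adj: sparse (|U|+|I|, |U|+|I|) matrix D^{-1/2} A D^{-1/2}."""
    layer_embs = [emb0]
    z = emb0
    for _ in range(num_layers):
        z = torch.sparse.mm(norm_adj, z)    # one propagation step
        layer_embs.append(z)
    return torch.stack(layer_embs).mean(dim=0)   # 1/(L+1) * sum_l z^(l)

# Prediction is the inner product of the final representations:
# y_hat[u, i] = (z_final[u] * z_final[num_users + i]).sum(-1)
```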
We incorporate an optimized version of the NCL approach into the learning process to capture local neighborhood information. The NCL approach defines two types of neighbors for a user (or an item): structural neighbors and semantic neighbors.
Structural neighbors are those who have interacted with the same items (or users). We introduce a self-supervised learning loss, denoted as L s s l , to capture the structural neighborhood information. This loss is defined as follows:
\mathcal{L}_{ssl} = -\log \frac{\exp\!\big(\operatorname{dot}(u, v_{\mathrm{pos}}) / \tau\big)}{\sum_{v_{\mathrm{neg}}} \exp\!\big(\operatorname{dot}(u, v_{\mathrm{neg}}) / \tau\big)},
where $\operatorname{dot}(\cdot,\cdot)$ is the inner product and $\tau$ is a temperature hyperparameter.
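This loss has the familiar InfoNCE form. Below is a minimal in-batch sketch under the assumption that each anchor row is paired with the matching row of `positive` (in NCL, for example, a node's initial embedding paired with its even-layer GNN output) and that the other rows of the batch serve as negatives.

```python
import torch
import torch.nn.functional as F

def info_nce(anchor, positive, tau=0.1):
    """In-batch InfoNCE: row i of `positive` is the positive for row i of
    `anchor`; all other rows serve as negatives."""
    anchor = F.normalize(anchor, dim=-1)
    positive = F.normalize(positive, dim=-1)
    logits = anchor @ positive.T / tau              # (batch, batch) similarities
    targets = torch.arange(anchor.size(0), device=anchor.device)
    return F.cross_entropy(logits, targets)         # -log softmax over positives
```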
Semantic neighbors are those with similar representations. We use the K-means clustering algorithm to identify the semantic neighbors. Each user (or item) is assigned to a cluster, and the centroid of the cluster is used as the prototype to represent the semantic neighbors. We introduce a prototype contrastive loss, denoted as L proto , to capture the semantic neighborhood information. This loss is defined as follows:
\mathcal{L}_{proto} = -\log \frac{\exp\!\big(\operatorname{dot}(u, c_{\mathrm{pos}}) / \tau\big)}{\sum_{c_{\mathrm{neg}}} \exp\!\big(\operatorname{dot}(u, c_{\mathrm{neg}}) / \tau\big)},
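The prototype loss takes the same form, with the centroid of a node's own cluster as the positive and the remaining centroids as negatives; a sketch under those assumptions:

```python
import torch.nn.functional as F

def proto_nce(emb, centroids, assignments, tau=0.1):
    """Prototype contrastive loss: the centroid of a node's own cluster is
    the positive; all other centroids act as negatives."""
    emb = F.normalize(emb, dim=-1)
    centroids = F.normalize(centroids, dim=-1)
    logits = emb @ centroids.T / tau        # (num_nodes, num_clusters)
    return F.cross_entropy(logits, assignments)
```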
To capture the information from interactions directly, we adopt Bayesian Personalized Ranking (BPR) loss [19], which is a well-designed ranking objective function for recommendation. Specifically, BPR loss enforces the prediction scores of the observed interactions to be higher than those of the sampled unobserved ones. Formally, the objective function of BPR loss is as follows:
\mathcal{L}_{BPR} = -\sum_{(u,i,j) \in \mathcal{O}} \log \sigma\big( \hat{y}_{u,i} - \hat{y}_{u,j} \big),
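For completeness, a one-line sketch of the BPR objective over sampled (u, i, j) triples (averaging rather than summing over the batch is an implementation choice):

```python
import torch.nn.functional as F

def bpr_loss(pos_scores, neg_scores):
    """BPR: push scores of observed (u, i) pairs above sampled (u, j) ones,
    i.e., -log sigmoid(y_ui - y_uj) over training triples."""
    return -F.logsigmoid(pos_scores - neg_scores).mean()
```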
By optimizing the BPR loss $\mathcal{L}_{BPR}$, the model captures the direct interplay between users and items. Nonetheless, higher-order relations among users (or items) are equally important for recommendation; for instance, users often purchase items that their neighbors have bought. The two contrastive objectives introduced above, $\mathcal{L}_{ssl}$ and $\mathcal{L}_{proto}$, are designed to harness exactly these inherent neighborhood relations of users and items.

3.3. Optimization for NCL

Building upon the NCL method, we introduce several key optimizations to further enhance its effectiveness and efficiency in capturing local neighborhood information for recommendation systems.

3.3.1. Dynamic Neighborhood Structure

In the standard NCL approach, the neighborhood structure is often fixed and predefined. This static approach may not adapt well to the dynamic nature of user–item interactions. To address this, we propose an adaptive neighborhood structure [20] that evolves during the learning process.
Specifically, we use the K-means [21] clustering algorithm to dynamically identify the semantic neighbors. The clustering process can be represented as
C, I = \operatorname{KMeans}(X),
where X is the set of embeddings, C is the set of cluster centroids, and I is the assignment of each embedding to a cluster. This adaptive neighborhood structure is updated in each iteration of the expectation–maximization algorithm [22], allowing the model to adapt to changing user–item interactions and capture more accurate neighborhood information.
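A sketch of one such E-step using scikit-learn's KMeans is shown below; the function name and cluster count are illustrative, and large-scale implementations often substitute a faster clustering library.

```python
import numpy as np
from sklearn.cluster import KMeans

def update_prototypes(embeddings: np.ndarray, num_clusters: int = 1000, seed: int = 0):
    """One E-step: re-cluster the current embeddings; the centroids become
    the new prototypes C, the labels the new assignments I."""
    km = KMeans(n_clusters=num_clusters, n_init=10, random_state=seed)
    labels = km.fit_predict(embeddings)     # I: cluster index per node
    return km.cluster_centers_, labels      # C: (num_clusters, d) prototypes
```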

3.3.2. Unified Optimization of Contrastive Objectives

The original NCL approach optimizes the contrastive objectives separately, which may not be efficient and can lead to suboptimal solutions. We propose a unified optimization framework that balances the trade-off between the structural and semantic neighborhood information.
The unified optimization objective can be represented as
\mathcal{L} = \lambda_{ssl} \mathcal{L}_{ssl} + \lambda_{proto} \mathcal{L}_{proto},
where $\mathcal{L}_{ssl}$ is the self-supervised learning loss for structural neighbors, $\mathcal{L}_{proto}$ is the prototype contrastive loss for semantic neighbors, and $\lambda_{ssl}$ and $\lambda_{proto}$ are weight hyperparameters. These contrastive terms are optimized jointly with the BPR loss $\mathcal{L}_{BPR}$ in a single objective, which leads to more efficient learning and better performance.
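Assuming the loss terms defined above, the unified objective amounts to a single weighted sum trained in one backward pass; the helper name and default weights below are placeholders:

```python
def total_loss(l_bpr, l_ssl, l_proto, lambda_ssl=1e-7, lambda_proto=1e-7):
    """Unified objective: both contrastive terms are weighted and optimized
    jointly with the BPR loss in one backward pass."""
    return l_bpr + lambda_ssl * l_ssl + lambda_proto * l_proto
```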

3.3.3. Regularization on Prototypes

To improve the discriminative power of the prototypes and keep them stable across iterations, we introduce a regularization term into the prototype contrastive objective. The regularized prototype contrastive loss can be represented as
\mathcal{L}_{proto\_reg} = \mathcal{L}_{proto} + \gamma \lVert C - C' \rVert_F^2,
where $C$ and $C'$ are the current and previous cluster centroids, $\gamma$ is a regularization parameter, and $\lVert \cdot \rVert_F$ denotes the Frobenius norm. This regularization penalizes abrupt shifts of the prototypes between successive iterations, which stabilizes the contrastive learning process and can enhance its performance.
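The regularizer itself reduces to a single Frobenius-norm penalty; a minimal sketch (helper name and default γ are illustrative):

```python
import torch

def proto_shift_penalty(centroids, prev_centroids, gamma=1e-3):
    """gamma * ||C - C'||_F^2: penalizes large shifts of the prototypes
    between successive EM iterations; added to L_proto."""
    return gamma * (centroids - prev_centroids).pow(2).sum()
```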
In summary, these refinements provide a more flexible and efficient way to capture the neighborhood information in recommendation systems. They offer a new perspective on how to leverage contrastive learning in recommendation systems, leading to improved performance and faster convergence. By integrating these refinements with the SVD-based initialization, our proposed method provides a comprehensive solution for enhancing the performance of recommendation systems.

4. Experiments and Evaluation

4.1. Datasets

We evaluate the performance of our proposed method on five public datasets: MovieLens-1M (ML-1M) [23], Yelp2018 [24], Amazon Books, Gowalla, and Alibaba-iFashion [1]. These datasets vary in domain, scale, and density. For Yelp2018 and Amazon Books, we filter out users and items with fewer than 15 interactions to ensure data quality. The statistics of the datasets are summarized in Table 1.
Table 1. Statistics of the datasets.
The selection of these datasets was driven by a few key considerations:
  • Variety in Domains: These datasets span multiple domains: movie ratings, restaurant reviews, book reviews, social networking check-ins, and fashion. This wide coverage helps verify that the model remains robust across diverse domains rather than being tuned to a single one.
  • Scale and Density: These datasets vary not only in terms of the number of records (scale), but also in terms of the density of interactions. Some datasets may have a high number of interactions per user/item (dense), whereas others might have fewer interactions per user/item (sparse). Both of these scenarios pose unique challenges in recommendation systems, and dealing with both in training helps ensure the model’s adaptability.
  • Data Quality: For Yelp2018 and Amazon Books, filters have been applied to exclude users and items with fewer than 15 interactions. This decision is to ensure data quality and reliable signal in the data. It helps avoid cases where the model might overfit to users/items with very few interactions, thus making the evaluation more reliable.
Overall, these datasets were chosen to ensure that the evaluation is both rigorous and representative of various real-world situations.
For each dataset, we randomly select 80% of the interactions as training data, 10% as validation data, and the remaining 10% as test data for performance comparison. We uniformly sample one negative item for each positive instance to form the training set, as sketched below.
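A sketch of this uniform negative sampling, producing (u, i, j) triples for the BPR objective; the data layout (a dict from user to their positive items) and function name are assumptions:

```python
import numpy as np

def sample_bpr_triples(user_pos_items, num_items, seed=0):
    """For each observed (u, i) pair, uniformly sample one item j the user
    has not interacted with, yielding (u, i, j) triples for BPR training."""
    rng = np.random.default_rng(seed)
    triples = []
    for u, pos_items in user_pos_items.items():
        pos_set = set(pos_items)
        for i in pos_items:
            j = int(rng.integers(num_items))
            while j in pos_set:             # resample until unobserved
                j = int(rng.integers(num_items))
            triples.append((u, i, j))
    return triples
```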

4.2. Experiment Setup

4.2.1. Compared Models

We compare our proposed method with the following state-of-the-art models:
  • SGL [25]: Incorporates self-supervised learning to improve recommendation systems. Our chosen model for SGL is SGL-ED.
  • NGCF [8]: Leverages the user–item bipartite graph to include high-order connections and employs GNN to bolster CF techniques.
  • NCL [3]: Advances graph collaborative filtering using neighborhood-enhanced contrastive learning, which our approach is rooted in. We utilize RUCAIBox/NCL as the model representation of NCL.

4.2.2. Evaluation Metrics

To assess the efficacy of top-N recommendations [26], we employ the commonly utilized metrics of Recall@N and NDCG@N [27,28], with N values of 10, 20, and 50 to maintain uniformity. As per earlier studies, we use the full-ranking approach, ranking all potential items that the user has not engaged with.

4.3. Implementation Details

We utilize the RecBole [29] open-source framework to develop our model, as well as all baseline algorithms. For a balanced comparison, we employ the Adam optimizer across all methods and carefully tune the hyperparameters of each baseline. We use a batch size of 4096 and initialize parameters with the standard Xavier distribution. The embedding dimension is set to 64. To deter overfitting, we apply early stopping after 10 epochs without improvement, using NDCG@10 as the indicator. We tune the weight hyperparameters $\lambda_{ssl}$ and $\lambda_{proto}$ in $[1 \times 10^{-10}, 1 \times 10^{-6}]$, the temperature hyperparameter $\tau$ in $[0.01, 1]$, and the number of clusters $k$ in $[5, 10000]$.

4.4. Overall Performance

Table 2 presents a comprehensive performance comparison of the SGL model, NCL model, NGCF model, and our proposed model LoRA-NCL across various datasets. The results are insightful and reveal several key observations.
Table 2. Performance comparison of all datasets.
First, LoRA-NCL outperforms NCL across most datasets. The superior performance of LoRA-NCL can be attributed to its ability to effectively capture both the global structure and the local neighborhood information inherent in the user–item interaction graph. This is achieved through the integration of Singular Value Decomposition (SVD) and an optimized version of NCL, thereby enhancing the recommendation performance.
Interestingly, there are instances where NCL outperforms LoRA-NCL. This can be attributed to the inherent differences in the learning mechanisms of the two models. NCL, with its focus on capturing local neighborhood information, might be more effective in scenarios where local patterns and dependencies play a more significant role in user–item interactions. On the other hand, LoRA-NCL, which aims to capture both global and local structures, might be less effective when the global structure is sparse or less informative.
In conclusion, while LoRA-NCL generally outperforms NCL, the choice between the two models should be guided by the specific characteristics of the dataset and the computational resources available.
Table 3 presents the performance comparison of LoRA-NCL with different embedding sizes. The results are insightful and reveal several key observations.
Table 3. Performance comparison of different embedding sizes of LoRA-NCL.
LoRA-NCL (256) and LoRA-NCL (128) outperform LoRA-NCL (64) in most of the metrics across all datasets. For instance, in the MovieLens-1M dataset, LoRA-NCL (256) outperforms LoRA-NCL (64) by approximately 1.2% in Recall@10, 1.1% in NDCG@10, 1.8% in Recall@20, 1.4% in NDCG@20, 1.8% in Recall@50, and 4.6% in NDCG@50. Similar trends can be observed in other datasets such as Yelp, Amazon Books, and Gowalla.
The performance difference between LoRA-NCL (64) and LoRA-NCL (256) can be attributed to the increased capacity of the model with a larger embedding size. A larger embedding size allows the model to capture more nuanced features of the user–item interactions, leading to better performance. However, it is important to note that the improvement comes at the cost of increased computational complexity and memory usage.
The possible reason that higher embedding size led to better performance is that a larger embedding size provides a more expressive representation space for the items and users. This allows the model to capture more complex and subtle patterns in the user–item interactions, which can lead to improved recommendation performance. However, it is important to note that the benefits of a larger embedding size should be weighed against the increased computational cost and the risk of overfitting.

4.5. Further Analysis

In this subsection, we delve deeper into the results of our experiments to gain more insights into the performance of our proposed method.

4.5.1. Performance across Different Datasets

Our method demonstrated varying performance across the different datasets. We used recall and Normalized Discounted Cumulative Gain (NDCG) as our primary performance metrics. Recall, i.e., the proportion of true positives to the combined total of true positives and false negatives, provides insight into the percentage of accurate positive predictions made by our model. NDCG, on the other hand, is a measure of ranking quality, summing up the graded relevance values of all results in the list.
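For reference, per-user versions of these two metrics can be sketched as follows, assuming binary relevance with the held-out test items as ground truth:

```python
import numpy as np

def recall_at_n(ranked, relevant, n=10):
    """Fraction of the user's held-out items that appear in the top-N list."""
    hits = sum(1 for item in ranked[:n] if item in relevant)
    return hits / len(relevant) if relevant else 0.0

def ndcg_at_n(ranked, relevant, n=10):
    """DCG of the top-N list with binary gains, normalized by the ideal DCG."""
    gains = np.array([1.0 if item in relevant else 0.0 for item in ranked[:n]])
    discounts = 1.0 / np.log2(np.arange(2, len(gains) + 2))
    dcg = float((gains * discounts).sum())
    idcg = float(discounts[: min(len(relevant), n)].sum())
    return dcg / idcg if idcg > 0 else 0.0
```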

4.5.2. Comparison with Other Methods

Compared with other leading-edge techniques, our approach demonstrated enhanced effectiveness. Its NDCG scores were consistently higher than those of the competing methods, and its recall scores were higher in all but one case. The gap is most pronounced on NDCG, indicating that our method is particularly strong at ranking quality.

4.6. Impact of Parameter Choices

The performance of our model is influenced by several parameters, one of the most significant being the size of the embeddings. Embedding size refers to the dimensionality of the vectors used to represent items in the recommendation system.
In our experiments, we found that the choice of embedding size had a substantial impact on the performance of our model. Specifically, smaller embedding sizes tended to result in faster training times but at the cost of model accuracy. On the other hand, larger embedding sizes led to more accurate models, but with an increase in computational complexity and training time.
Interestingly, there appeared to be a ‘sweet spot’ for the embedding size. Beyond a certain point, increasing the embedding size did not lead to significant improvements in model performance, and in some cases, it even led to a decrease in performance. This could be due to the model overfitting to the training data when given too many parameters to learn.
Therefore, it is crucial to carefully choose the embedding size when implementing our model. We recommend conducting a thorough parameter tuning process, such as grid search or random search, to find the optimal embedding size for the specific dataset and problem at hand.

4.7. Limitations and Future Directions

While our proposed method shows promising results, it also has several limitations. One of the main limitations is the sensitivity of the model to the choice of embedding size. The performance of our model can vary significantly with different embedding sizes, and finding the optimal size can be a computationally intensive process. Moreover, the optimal embedding size may not be the same for all datasets, adding another layer of complexity to the problem.
Another limitation is related to parameter tuning. Our model has several hyperparameters that need to be carefully tuned to achieve the best performance. However, the optimal set of parameters can vary depending on the specific characteristics of the dataset and the problem at hand, making the tuning process challenging and time-consuming.
Despite these limitations, our research opens up several avenues for future work. One potential direction is to develop more efficient methods for determining the optimal embedding size and tuning the model parameters. This could involve using more advanced optimization techniques or incorporating additional prior knowledge about the problem into the tuning process. Another interesting direction would be to explore ways to make the model less sensitive to the choice of embedding size and other parameters, thereby making it more robust and easier to use. We believe that addressing these limitations and exploring these future directions can further improve the performance and applicability of our method.

5. Conclusions

While our method shows promising results, it has several limitations that provide directions for future work. First, our method assumes that the user–item interaction graph is static, which may not hold in real-world scenarios where user–item interactions are dynamic. Future work could explore how to incorporate temporal information into our method. Second, our method relies on the K-means algorithm to identify semantic neighbors, which may not be optimal for all datasets. Future work could investigate other clustering algorithms or learn the neighborhood structure in an end-to-end manner. Lastly, our method is designed for explicit feedback data. Adapting it to implicit feedback data, where only positive interactions are observed, is another interesting direction for future research.

Author Contributions

Conceptualization, T.C. and Z.H.; methodology, T.C.; software, T.C.; validation, T.C., H.C. and Z.H.; formal analysis, T.C.; resources, T.H.; data curation, T.C.; writing—original draft preparation, T.C.; writing—review and editing, H.C.; visualization, T.C.; supervision, H.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The data is unavailable due to privacy or ethical restrictions.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Chen, W.; Huang, P.; Xu, J.; Guo, X.; Guo, C.; Sun, F.; Li, C.; Pfadler, A.; Zhao, H.; Zhao, B. POG: Personalized Outfit Generation for Fashion Recommendation at Alibaba iFashion. arXiv 2019, arXiv:1905.01866. [Google Scholar]
  2. Bermeitinger, B.; Hrycej, T.; Handschuh, S. Singular Value Decomposition and Neural Networks. In Artificial Neural Networks and Machine Learning—ICANN 2019: Deep Learning; Tetko, I.V., Kůrková, V., Karpov, P., Theis, F., Eds.; Springer: Cham, Switzerland, 2019; pp. 153–164. [Google Scholar]
  3. Lin, Z.; Tian, C.; Hou, Y.; Zhao, W.X. Improving Graph Collaborative Filtering with Neighborhood-enriched Contrastive Learning. In Proceedings of the ACM Web Conference 2022, Virtual Event, 25–29 April 2022. [Google Scholar]
  4. Peng, S.; Sugiyama, K.; Mine, T. SVD-GCN: A Simplified Graph Convolution Paradigm for Recommendation. arXiv 2022, arXiv:2208.12689. [Google Scholar]
  5. Sarwar, B.; Karypis, G.; Konstan, J.; Riedl, J. Item-Based Collaborative Filtering Recommendation Algorithms. In Proceedings of the WWW’01: 10th International Conference on World Wide Web, New York, NY, USA, 1–5 May 2001; pp. 285–295. [Google Scholar]
  6. Chen, T.; Kornblith, S.; Norouzi, M.; Hinton, G.E. A Simple Framework for Contrastive Learning of Visual Representations. arXiv 2020, arXiv:2002.05709. [Google Scholar]
  7. Khosla, P.; Teterwak, P.; Wang, C.; Sarna, A.; Tian, Y.; Isola, P.; Maschinot, A.; Liu, C.; Krishnan, D. Supervised Contrastive Learning. arXiv 2021, arXiv:2004.11362. [Google Scholar]
  8. Wang, X.; He, X.; Wang, M.; Feng, F.; Chua, T. Neural Graph Collaborative Filtering. arXiv 2019, arXiv:1905.08108. [Google Scholar]
  9. He, X.; Liao, L.; Zhang, H.; Nie, L.; Hu, X.; Chua, T. Neural Collaborative Filtering. arXiv 2017, arXiv:1708.05031. [Google Scholar]
  10. Strub, F.; Mary, J. Collaborative Filtering with Stacked Denoising AutoEncoders and Sparse Inputs. In Proceedings of the NIPS 2015, Montreal, QC, Canada, 7–12 December 2015. [Google Scholar]
  11. He, X.; Deng, K.; Wang, X.; Li, Y.; Zhang, Y.; Wang, M. LightGCN: Simplifying and Powering Graph Convolution Network for Recommendation. arXiv 2020, arXiv:2002.02126. [Google Scholar]
  12. van den Oord, A.; Li, Y.; Vinyals, O. Representation Learning with Contrastive Predictive Coding. arXiv 2019, arXiv:1807.03748. [Google Scholar]
  13. Lin, S.; Zhou, P.; Hu, Z.Y.; Wang, S.; Zhao, R.; Zheng, Y.; Lin, L.; Xing, E.; Liang, X. Prototypical Graph Contrastive Learning. arXiv 2022, arXiv:2106.09645. [Google Scholar] [CrossRef] [PubMed]
  14. Zhou, J.; Cui, G.; Hu, S.; Zhang, Z.; Yang, C.; Liu, Z.; Wang, L.; Li, C.; Sun, M. Graph Neural Networks: A Review of Methods and Applications. arXiv 2021, arXiv:1812.08434. [Google Scholar] [CrossRef]
  15. Ward, I.R.; Joyner, J.; Lickfold, C.; Guo, Y.; Bennamoun, M. A Practical Tutorial on Graph Neural Networks. arXiv 2021, arXiv:2010.05234. [Google Scholar] [CrossRef]
  16. Xu, M.; Wang, H.; Ni, B.; Guo, H.; Tang, J. Self-supervised Graph-level Representation Learning with Local and Global Structure. arXiv 2021, arXiv:2106.04113. [Google Scholar]
  17. Baluja, S.; Seth, R.; Sivakumar, D.; Jing, Y.; Yagnik, J.; Kumar, S.; Ravichandran, D.; Aly, M. Video Suggestion and Discovery for Youtube: Taking Random Walks through the View Graph. In Proceedings of the WWW’08: 17th International Conference on World Wide Web, Beijing, China, 21–25 April 2008; pp. 895–904. [Google Scholar]
  18. Koren, Y.; Bell, R.; Volinsky, C. Matrix Factorization Techniques for Recommender Systems. Computer 2009, 42, 30–37. [Google Scholar] [CrossRef]
  19. Rendle, S.; Freudenthaler, C.; Gantner, Z.; Schmidt-Thieme, L. BPR: Bayesian Personalized Ranking from Implicit Feedback. arXiv 2012, arXiv:1205.2618. [Google Scholar]
  20. Song, K.; Han, J.; Cheng, G.; Lu, J.; Nie, F. Adaptive Neighborhood Metric Learning. IEEE Trans. Pattern Anal. Mach. Intell. 2021, 44, 4591–4604. [Google Scholar] [CrossRef] [PubMed]
  21. Jin, X.; Han, J. K-Means Clustering. In Encyclopedia of Machine Learning; Sammut, C., Webb, G.I., Eds.; Springer: Boston, MA, USA, 2010; pp. 563–564. [Google Scholar]
  22. Moon, T. The expectation-maximization algorithm. IEEE Signal Process. Mag. 1996, 13, 47–60. [Google Scholar] [CrossRef]
  23. Harper, F.M.; Konstan, J.A. The MovieLens Datasets: History and Context. ACM Trans. Interact. Intell. Syst. 2015, 5, 19. [Google Scholar] [CrossRef]
  24. Kronmueller, M.; Chang, D.J.; Hu, H.; Desoky, A. A Graph Database of Yelp Dataset Challenge 2018 and Using Cypher for Basic Statistics and Graph Pattern Exploration. In Proceedings of the 2018 IEEE International Symposium on Signal Processing and Information Technology (ISSPIT), Louisville, KY, USA, 6–8 December 2018; pp. 135–140. [Google Scholar]
  25. Wu, J.; Wang, X.; Feng, F.; He, X.; Chen, L.; Lian, J.; Xie, X. Self-supervised Graph Learning for Recommendation. In Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval, Virtual Event, 11–15 July 2021. [Google Scholar]
  26. Kabbur, S.; Ning, X.; Karypis, G. FISM: Factored Item Similarity Models for Top-N Recommender Systems. In Proceedings of the KDD’13: 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Chicago, IL, USA, 11–14 August 2013; pp. 659–667. [Google Scholar]
  27. Herlocker, J.L.; Konstan, J.A.; Terveen, L.G.; Riedl, J.T. Evaluating Collaborative Filtering Recommender Systems. ACM Trans. Inf. Syst. 2004, 22, 5–53. [Google Scholar] [CrossRef]
  28. Silveira, T.; Zhang, M.; Lin, X.; Liu, Y.; Ma, S. How good your recommender system is? A survey on evaluations in recommendation. Int. J. Mach. Learn. Cybern. 2019, 10, 813–831. [Google Scholar] [CrossRef]
  29. Zhao, W.X.; Mu, S.; Hou, Y.; Lin, Z.; Chen, Y.; Pan, X.; Li, K.; Lu, Y.; Wang, H.; Tian, C.; et al. RecBole: Towards a Unified, Comprehensive and Efficient Framework for Recommendation Algorithms. arXiv 2021, arXiv:2011.01731. [Google Scholar]
