Article

Recommendation Model Based on Global Intention Learning and Sequence Augmentation

School of Information and Communication, Guilin University of Electronic Technology, Guilin 541004, China
* Author to whom correspondence should be addressed.
Symmetry 2025, 17(4), 586; https://doi.org/10.3390/sym17040586
Submission received: 3 March 2025 / Revised: 2 April 2025 / Accepted: 8 April 2025 / Published: 11 April 2025
(This article belongs to the Special Issue Symmetry and Asymmetry in Data Analysis)

Abstract

User interaction behavior is influenced by various intentions, which are often asymmetric. Incorporating intention information into sequential recommendation can significantly improve recommendation performance. However, most existing intention modeling methods rely on auxiliary information or random data augmentation to capture user intentions, which cannot effectively capture the potential correlations between different user intentions, especially when dealing with asymmetric intentions. Furthermore, using random data augmentation methods may amplify the noise in the original sequence, leading to a decline in the model’s recommendation performance. To address these issues, this paper proposes a recommendation model based on Global Intention Learning and Sequence Augmentation. Firstly, a novel sequence information extraction module is designed, which efficiently integrates the refined global item association graph into item representations through a self-supervised approach, thereby capturing global collaborative sequence information. Secondly, an improved sequence augmentation strategy is adopted to reduce the disruption of the original item correlations, making the intention representation more accurate. Finally, intention information is integrated into the sequential recommendation model through a contrastive learning method, further enhancing the accuracy of the model’s recommendations. Experimental results show that compared to several state-of-the-art methods, the proposed model exhibits significant improvements on the Sports, Toys and LastFM datasets.

1. Introduction

Recommendation systems help users select items they may be interested in by analyzing vast amounts of data, thereby better meeting the users’ personalized needs [1,2,3]. However, in reality, user preferences change dynamically over time, making it difficult to accurately represent these preferences. In particular, user preferences often exhibit asymmetry. For example, some users have a strong preference for certain products, while showing little interest in others. This asymmetry presents challenges for traditional recommendation systems, especially in capturing long-tail preferences and dealing with cold-start problems. In this context, sequence-based recommendation systems have become a widely researched topic because they can effectively capture the dynamic preferences hidden in the users’ sequential behaviors, address the challenges posed by asymmetry, and enhance personalized recommendations for users [4,5,6,7].
However, despite the significant progress made by sequence recommendation models in personalized recommendations, the widespread issue of data sparsity remains a major bottleneck limiting their performance [8]. To address this challenge, researchers have proposed data augmentation methods. The earliest approach, by Tang et al., directly generated additional sequences with a sliding window and mixed them into the training process for data augmentation purposes [9]. However, heuristic algorithms like the sliding window may generate low-quality augmented data [10]. As a result, researchers have proposed many data synthesis methods that require training. For example, Li et al. combined spatial and temporal information to enhance the sequence effect for recommending the next point of interest [11]. Liu et al. employed a diffusion model for sequence generation and designed two guiding strategies to control the model’s generation of items corresponding to the original data [12]. Wang et al. enhanced the sequences of temporary users by learning the interaction patterns of core users [13]. However, the augmented data generated by these methods may not maintain the same quality as the original data.
To address this issue, many researchers have adopted self-supervised techniques to improve performance by introducing similarity contrast between augmented data and original data. For example, Xie et al. proposed a contrastive self-supervised learning framework to improve the performance of sequence recommendation models [14]. Liu et al. introduced more augmentation operations to train robust sequence representations [15]. Qiu et al. combined recommendation loss with unsupervised learning and supervised contrastive learning to optimize sequence recommendation models [16]. Qin et al. utilized contrastive learning data and learnable model enhancements to obtain more informative and discriminative features [17]. Dang et al. proposed five data operators for expanding time-interval-based item sequences, demonstrating that consistent sequences are more valuable for next-item prediction [18,19]. However, despite these methods improving the performance of recommendation systems to some extent, they also face several challenges. One issue is that many methods rely on random augmentation operators, which may disrupt the correlations in the original sequence information, affecting the quality of the augmented data. Therefore, maintaining the original correlations in augmented data remains an important challenge in recommendation system research.
In addition, user interests and preferences are often not solely determined by current behavior, but are deeply influenced by historical behavior and latent factors, behind which there are often deeper underlying intentions. In recent years, intention-based contrastive learning has become a popular topic in recommendation system research and industrial applications. A large amount of research focuses on modeling user intentions, aiming to improve the accuracy of recommendation systems and user satisfaction [20,21,22].
To capture users’ long-term purchasing intentions, sequence-based recommendation focuses on modeling user interaction behaviors over a longer time span. Ma et al. proposed a seq2seq training strategy that infers intentions based on clustering and a single sequence representation [23]. Tanjim et al. used temporal convolutional networks to leverage side information (such as user action types like clicks and adding to favorites) and then used the learned intentions to guide the sequence recommendation model in predicting the next item [24]. Chen et al. obtained intention prototype representations through clustering in the embedding space of user behavior sequences and constructed a contrastive learning task, utilizing the learned intentions in sequence recommendation models [25]. Liu et al. proposed a new intention learning method that unifies behavior representation learning into an end-to-end learnable clustering framework, solving the complex and cumbersome alternating optimization issues in previous methods and achieving more efficient recommendations [26]. Although these methods have made some progress in improving recommendation accuracy, they often focus on modeling the intentions of each individual user’s interaction sequence, neglecting the correlation information of similar subsequences across different users. This limitation prevents recommendation systems from fully leveraging the similarities between users when mining global intention information, resulting in suboptimal recommendation performance. Therefore, how to integrate intention information from multiple users and better capture the deep patterns of users’ long-term behavior remains a pressing challenge in current recommendation systems.
In summary, the problems that this paper needs to address are as follows:
(1)
How can sequence augmentation methods be improved to enhance the quality of the augmented data?
(2)
How can the correlations of shared subsequences across different users be exploited to mine global intention information?
In order to solve the above problems, this paper proposes a recommendation model based on Global Intention Learning and Sequence Augmentation (RM-GILSA), inspired by the literature [27,28,29,30,31,32,33]. The model first designs a novel data augmentation method based on item associations to address the issue that existing random data augmentation methods often disrupt the original sequence information. Unlike traditional methods, this new data augmentation approach preserves the structure and information of the original sequence, thus avoiding the introduction of unnecessary noise in the augmented data and ensuring data quality during the training of the recommendation model. Next, an innovative global sequence extraction module is proposed, which captures the related information between different users by constructing a refined global item correlation graph. Self-supervised learning is then employed to integrate the obtained information into the item representation, thereby solving the issue that traditional methods fail to adequately capture the correlation of information between sequences. With the introduction of this global information, the model can more accurately capture user behavior patterns and latent associations between items, improving the relevance and accuracy of the recommendation results. Finally, the sequence recommendation model is optimized using intent contrastive learning to capture fine-grained user behavior features, enhancing the model’s generalization ability. This allows the model to perform better when handling unknown users or new items.

2. Model

2.1. Definitions and Notations

Sequence recommendation is the task of recommending the next item that a user is likely to interact with based on their historical interaction data [34]. Let $U$ and $V$ denote the sets of users and items, respectively. A user $u \in U$ has an interaction sequence $S^u = [v_1, v_2, \ldots, v_j, \ldots, v_N]$, where $v_j \in V$ ($1 \le j \le N$) denotes the item that user $u$ interacted with at position $j$ and $N$ is the sequence length. Given the historical interactions $S^u$ [17], the goal of sequence recommendation is to recommend the item from the set $V$ that user $u$ is most likely to interact with at step $N + 1$, which can be expressed as follows:
$$\underset{v \in V}{\arg\max}\; P\left(v_{N+1} = v \mid S^u\right)$$
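To make the objective concrete, the following minimal sketch scores every candidate item against a sequence representation and returns the arg max. Here `encoder` and `item_emb` are hypothetical placeholders for an arbitrary sequence encoder and item embedding table, not components specified by this paper:

```python
import torch

# A minimal sketch of the next-item objective above: score all items in V
# against the sequence representation and pick the arg max.
# `encoder` and `item_emb` are hypothetical placeholders.
def predict_next_item(encoder, item_emb, seq_ids):
    seq_repr = encoder(seq_ids)               # (d,) representation of S^u
    scores = item_emb.weight @ seq_repr       # (|V|,) one score per item v
    probs = torch.softmax(scores, dim=-1)     # P(v_{N+1} = v | S^u)
    return torch.argmax(probs).item()         # recommended item at step N+1
```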

2.2. An Overview of the Proposed Framework

The overall framework of the RM-GILSA model is illustrated in Figure 1. First, in the sequence information extraction module, the global collaborative information is incorporated into the item representation by maximizing the mutual information between the original graph and the refined graph, thereby enhancing the item representation. Then, in the intent contrastive learning module, data augmentation is performed on the original sequence using an improved sequence augmentation strategy. The augmented sequence set is then used to cluster the obtained interest representations, making them closer to their intent prototypes. Next, in the sequence model, the learned interest representations and enhanced item representations are used to predict the next item the user is likely to interact with. Finally, a multi-task training approach is employed to jointly optimize the model with sequence recommendation, graph contrastive learning, and intent contrastive learning methods, improving the accuracy of the recommendations.

2.3. The Sequence Information Extraction Module

Existing methods typically model user intents separately for each user sequence, neglecting the exploitation of global sequence information, which leads to poor recommendation performance. In session-based recommendations, researchers have made efforts to exploit global information by constructing a global item graph from predefined rules [29,30]. However, rule-based graphs may overlook potential connections between items or contain noisy edges, affecting the final relevance output.
Inspired by the work in [31,32], this paper designs a novel sequence information extraction module, shown in Figure 2, that uses graph-based learning techniques to obtain a refined global item association graph. The original and refined graph representations are obtained through a graph encoder, and their mutual information is maximized, integrating global sequence information into item features, thereby improving the accuracy of the recommendations. This method adjusts the structure of the graph by introducing perturbations, allowing for a more detailed capture of subtle differences between items and extracting global sequence information. The core of this method lies in refining the original graph, enabling it to more accurately reflect the underlying structure of user preferences, especially in cases where user preferences are asymmetric. Specifically, user preference asymmetry refers to the significant differences in the intensity of user preferences for different items, which can lead to issues such as data sparsity or information bias in collaborative filtering. Traditional collaborative filtering methods may fail to effectively capture these asymmetries, resulting in some user preferences not being fully understood or utilized. By using perturbation-based graph refinement, the process of graph learning can more flexibly handle such asymmetries, thereby capturing more precise user behavior patterns.
First, a rule-based global item graph is constructed as the preliminary graph $A$; small perturbations are then applied to refine it, resulting in the refined graph $\hat{A}$, as described by the following formula:
$$\hat{A} = A + \alpha \cdot \Delta A$$
Here, $\alpha$ denotes the perturbation strength and $\Delta A$ is the fully learnable perturbation graph. This approach belongs to the direct methods of graph structure modeling in graph-based learning. The graph encoder [31] is then used to generate graph representations, yielding both the original representation $P^l$ and the refined graph representation $\hat{P}^l$.
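A minimal sketch of this refinement step is given below, assuming the simplest parameterization in which $\Delta A$ is a free dense parameter (the paper does not spell out its exact form). A dense $N \times N$ perturbation is for illustration only; the SVD-based acceleration discussed next would be needed at scale:

```python
import torch
import torch.nn as nn

# Sketch of perturbation-based graph refinement: A_hat = A + alpha * Delta_A.
# Modeling Delta_A as a free learnable parameter is an assumption; a dense
# N x N matrix is only practical for small item sets.
class RefinedGraph(nn.Module):
    def __init__(self, num_items, alpha=0.1):
        super().__init__()
        self.alpha = alpha                                   # perturbation strength
        self.delta = nn.Parameter(torch.zeros(num_items, num_items))

    def forward(self, A):
        # A: rule-based global item association graph (dense adjacency)
        return A + self.alpha * self.delta                   # refined graph A_hat
```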
The computational complexity of the refined graph representation can be further reduced through an acceleration strategy based on singular value decomposition [32]. Given the original and refined graphs, two representations are generated; then, using a self-supervised learning paradigm, the refined global collaborative information is effectively integrated into the item representations through mutual information maximization. Although directly maximizing mutual information is challenging, it can be effectively estimated using InfoNCE. The representations generated from the original and refined graphs should share related information, so the mutual information between them is maximized, as described by the following formula:
$$\mathcal{L}_{gcl} = -\sum_{i=1}^{N} \log \frac{\exp\left(\cos\left(p_i^l, \hat{p}_i^l\right)/\tau\right)}{\sum_{j=1}^{K} \exp\left(\cos\left(p_i^l, \hat{p}_j^l\right)/\tau\right)}$$
Here, $N$ is the number of samples, and $p_i^l \in P^l$ and $\hat{p}_i^l \in \hat{P}^l$ are the original and refined graph representations, respectively. The temperature hyperparameter $\tau$ is empirically fixed at 0.2. $\cos(\cdot,\cdot)$ denotes the cosine similarity between the original and refined graph representations, and $\mathcal{L}_{gcl}$ is the loss function for global graph contrastive learning.
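A compact sketch of this loss follows, assuming the denominator runs over the other refined representations in the same batch (i.e., $K$ equals the batch size); `F.cross_entropy` over the similarity matrix realizes exactly the negative log-ratio above:

```python
import torch
import torch.nn.functional as F

# InfoNCE between original (p) and refined (p_hat) graph representations.
# Rows are items; positives sit on the diagonal of the similarity matrix.
def graph_contrastive_loss(p, p_hat, tau=0.2):
    p = F.normalize(p, dim=-1)                # normalize so dot product = cosine
    p_hat = F.normalize(p_hat, dim=-1)
    logits = p @ p_hat.t() / tau              # (N, N): cos(p_i, p_hat_j) / tau
    labels = torch.arange(p.size(0), device=p.device)
    return F.cross_entropy(logits, labels)    # -log softmax of the diagonal
```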

2.4. The Sequence Augmentation Module

In previous intent contrastive learning methods, random data augmentation techniques such as masking, cropping, and shuffling are often used to generate multiple different views of unlabeled data in sequence recommendation tasks [25]. The purpose of these data augmentation methods is to expand training samples by altering the structure of the data, thereby improving the robustness and generalization ability of the model. However, these augmentation techniques also have potential issues. Specifically, these random operations may disrupt the critical relationships between the user and the items, especially in the user–item interaction sequences, where the order and interrelationships between items are crucial for capturing the user’s true intentions. When these important relationships are disrupted, the original sequence information suffers significant loss, which affects the model’s understanding of user behavior and may lead to biases unrelated to the user’s actual behavior [27].
To address this issue and improve the performance of recommendation systems, this paper, inspired by the work in [33], proposes an innovative item-association-based data augmentation method, as shown in Figure 3. Unlike traditional random augmentation methods, this approach augments the sequence based on the intrinsic relationships between items. Specifically, given an original user interaction sequence $S = [v_1, v_2, \ldots, v_j, \ldots, v_n]$, the improved data augmentation method divides the sequence into multiple subsequences in order, ensuring that each subsequence not only preserves the order of the original sequence but also maintains the association between its items and those of the previous subsequence. This method retains the original interaction sequence information while avoiding excessive disruption of potential patterns in user behavior, thereby minimizing the damage to the sequence information during augmentation.
Furthermore, by ensuring the association between items during the augmentation process, this method effectively enhances the recommendation model’s understanding of user preferences and behavioral patterns. Since each generated subsequence maintains associations with the items in the previous subsequence, the recommendation system can more accurately capture the historical behavior characteristics of users, thereby achieving higher accuracy in recommendation tasks. Ultimately, this item-association-based augmentation method not only improves the effectiveness of the data but also optimizes the learning ability of the model, making the recommendation results more aligned with the user’s actual needs. The formula for this data augmentation method is as follows:
$$S' = \begin{cases} \left\{\left(\left[v_1\right], X_{v_2}\right), \left(\left[v_1, v_2\right], X_{v_3}\right), \ldots, \left(\left[v_1, v_2, \ldots\right], X_{v_m}\right)\right\}, & m \le M \\ \left\{\left(\left[v_2, v_3, \ldots\right], X_{v_{m+1}}\right), \ldots, \left(\left[v_{n-m}, v_{n-m+1}, \ldots\right], X_{v_n}\right)\right\}, & m > M \end{cases}$$
Here, $S'$ represents the augmented sequence set, $M$ denotes the maximum sequence length, and $X_{v_i}$ ($1 \le i \le n$) represents the target item of each generated subsequence.
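Read this way, the augmentation emits every order-preserving subsequence of the original sequence together with its next item as the target, truncating to the most recent $M$ items once a prefix grows past the maximum length. The sketch below implements this reading; it is an interpretation of the formula above rather than code from the paper:

```python
# Split-based augmentation: each order-preserving prefix becomes a training
# sample whose target is the following item; prefixes longer than M keep
# only their most recent M items (the m > M case above).
def augment_sequence(seq, M):
    samples = []
    for t in range(1, len(seq)):
        prefix, target = seq[:t], seq[t]
        if len(prefix) > M:
            prefix = prefix[-M:]
        samples.append((prefix, target))
    return samples

# augment_sequence([1, 2, 3, 4], M=2) -> [([1], 2), ([1, 2], 3), ([2, 3], 4)]
```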

2.5. The Intent Contrastive Learning Module

User–item interactions are driven by various potential intentions, which may include changes in user interests, updates in needs, and behavioral preferences in different contexts. However, these potential intentions are often not directly observable, as they may manifest as latent patterns in user behavior, which cannot be captured through explicit labels or direct feedback. This lack of observability makes it extremely difficult to leverage potential intentions for precise sequential recommendations. Traditional recommendation systems primarily rely on explicit user behavior data (such as clicks, purchases, etc.), but such data often fails to fully reflect users’ real needs and deeper interests in different situations.
To overcome this challenge, Chen et al. proposed an intent contrastive learning method, which learns from unlabeled user behavior sequences to infer and model the user’s latent intent distribution function [25]. By learning the user’s intent distribution, the recommendation model can more accurately capture the user’s real needs, thereby optimizing the recommendation process. This method not only improves the model’s understanding of latent intentions but also enhances its generalization ability when faced with complex user behaviors, significantly improving the quality and accuracy of recommendations. Inspired by this work, this paper designs an innovative intent contrastive learning module, as shown in Figure 1. The module aims to effectively integrate user intent information into the sequential recommendation system by deeply mining user behavior to capture the variation in user intent at different time points and under different contexts. By incorporating this module, the recommendation system can more precisely identify and understand the user’s latent needs, further improving the accuracy and personalization of the recommendations. This approach not only enhances the model’s learning capability for sequential data but also ensures that the recommendation system remains robust and adaptive even when dealing with unlabeled data.
Specifically, all user interaction sequences $S^u$ are first augmented through splitting, and the augmented sequences $S'^u$ are passed through the embedding layer to obtain the embedding matrix $A^u$, which is then processed by the sequence encoder to produce the user interest representation. The formula is as follows:
$$I^u = f_\theta\left(A^u\right)$$
where $I^u = [i_1^u, i_2^u, \ldots, i_n^u]$ represents the user interest representation and $f_\theta(\cdot)$ represents the sequence encoder.
Then, the output is clustered using K-means to obtain the intent prototype representations $Z^u = \{z_1^u, z_2^u, \ldots, z_k^u\}$, where $k$ denotes the number of user intent types. A sequence $s_1$ is randomly selected from the augmented sequence set, and a sequence $s_2$ that shares the same target item is selected as the positive sample. The interest representations $i_1$ and $i_2$ of these two sequences are obtained through the embedding layer and the sequence encoder. Then, the nearest intent prototypes $z_1$ and $z_2$ are selected from the intent prototypes $Z^u$ to match $i_1$ and $i_2$. The formula is as follows:
$$z_1, z_2 = \mathrm{query}\left(i_1, Z^u\right), \mathrm{query}\left(i_2, Z^u\right)$$
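A small sketch of the prototype construction and the query step follows, assuming scikit-learn’s KMeans for the clustering (the paper does not name a specific implementation) and nearest-centroid matching for query(·):

```python
import torch
from sklearn.cluster import KMeans

# Cluster interest representations into k intent prototypes (centroids).
def build_prototypes(interest_reprs, k):
    km = KMeans(n_clusters=k, n_init=10).fit(interest_reprs.detach().cpu().numpy())
    return torch.tensor(km.cluster_centers_, dtype=interest_reprs.dtype)

# query(i, Z): return the prototype nearest to the interest vector i.
def query(i_vec, prototypes):
    dists = torch.cdist(i_vec.unsqueeze(0), prototypes)   # (1, k) distances
    return prototypes[dists.argmin()]
```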
The final step is to compute the loss for the intent contrastive learning module, as shown in the following formula:
$$\mathcal{L}_{icl}\left(w_1, w_2\right) = -\log \frac{\exp\left(\mathrm{sim}\left(w_1, w_2\right)\right)}{\exp\left(\mathrm{sim}\left(w_1, w_2\right)\right) + \sum_{W \notin \mathcal{F}} \exp\left(\mathrm{sim}\left(w_1, W\right)\right)} - \log \frac{\exp\left(\mathrm{sim}\left(w_2, w_1\right)\right)}{\exp\left(\mathrm{sim}\left(w_2, w_1\right)\right) + \sum_{W \notin \mathcal{F}} \exp\left(\mathrm{sim}\left(w_2, W\right)\right)}$$
$$\mathcal{L}_{icl} = \mathcal{L}_{icl}\left(i_1, z_1\right) + \mathcal{L}_{icl}\left(i_2, z_2\right)$$
Here, $\mathrm{sim}(\cdot,\cdot)$ denotes the dot product operation, and $(w_1, w_2)$ is the embedding of a pair of positive samples. To mitigate the impact of false negatives, this paper employs a simple strategy, the False Negative Mask (FNM), which alleviates the effect by avoiding the comparison with false negatives: $\mathcal{F}$ represents the set of in-batch negative samples that share the same label as the two positive sample pairs, and these samples are excluded from the denominator sums.
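The sketch below mirrors this loss for one positive pair. It assumes each sample’s label is its nearest-prototype index, which is one plausible reading of “same label”; the masked candidates are simply dropped from the denominator:

```python
import torch

# Intent contrastive loss for one pair (w1, w2) with a False Negative Mask:
# in-batch candidates whose intent label equals the pair's label are excluded.
def l_icl(w1, w2, negatives, neg_labels, pos_label):
    mask = neg_labels != pos_label                    # FNM: drop false negatives
    def one_side(a, b):
        pos = torch.exp(a @ b)                        # sim(.,.) = dot product
        neg = torch.exp(negatives[mask] @ a).sum()
        return -torch.log(pos / (pos + neg))
    return one_side(w1, w2) + one_side(w2, w1)        # symmetric two-way form

# Total module loss: L_icl = l_icl(i1, z1, ...) + l_icl(i2, z2, ...)
```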

2.6. The Loss Function for Model Training

In this paper, the sequence information extraction module effectively integrates global sequence information into the item representations. Then, the augmented item set $\hat{V}$ is embedded into the same space, generating the item embedding matrix $B$ [35,36]. Combined with the learned interest representations $I^u$, the sequence model can be optimized through the next-item prediction loss. The formula is as follows:
$$\hat{g} = \mathrm{softmax}\left(I^u B^T\right)$$
$$\mathcal{L}_{rec} = -\hat{g}_x + \log \sum_{v \in \hat{V}} e^{\hat{g}_v}$$
Here, $\hat{g}$ represents the predicted scores of all items, and $x \in \hat{V}$ denotes the ground truth of the sequence. This paper utilizes a multi-task learning paradigm to optimize the entire framework simultaneously:
$$\mathcal{L} = \mathcal{L}_{rec} + \lambda_1 \mathcal{L}_{gcl} + \lambda_2 \mathcal{L}_{icl}$$
Here, $\mathcal{L}_{rec}$ is the next-item prediction loss, $\mathcal{L}_{gcl}$ is the graph contrastive learning loss, and $\mathcal{L}_{icl}$ is the intent contrastive learning loss [37]. $\lambda_1$ and $\lambda_2$ are two hyperparameters used to control the intensity of the graph contrastive learning and intent contrastive learning tasks, respectively [38].
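As a minimal sketch, the prediction head and the joint objective can be written as follows; `F.cross_entropy` applied to the raw scores is numerically equivalent to the next-item loss above, and the default weights follow the settings in Table 2:

```python
import torch
import torch.nn.functional as F

# Joint multi-task objective: next-item prediction plus the two auxiliary
# contrastive losses, weighted by lambda_1 and lambda_2 (values from Table 2).
def total_loss(I_u, B, target, L_gcl, L_icl, lam1=0.2, lam2=0.05):
    logits = I_u @ B.t()                       # predicted scores over all items
    L_rec = F.cross_entropy(logits, target)    # = -g_x + log sum_v exp(g_v)
    return L_rec + lam1 * L_gcl + lam2 * L_icl
```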
In order to help readers better understand the model algorithm, this section provides the pseudocode for the RM-GILSA model, as shown in Algorithm 1.
Algorithm 1: Pseudo-code of RM-GILSA.
1: while RM-GILSA has not converged do
2:   for x in Dataloader(X) do
3:     Construct a global item correlation graph as the original graph, and obtain a refined graph through small perturbations;
4:     Use a graph encoder to generate the representations $P^l$ and $\hat{P}^l$ of the original and refined graphs;
5:     Calculate the graph contrastive loss $\mathcal{L}_{gcl}$;
6:     Perform data augmentation on the original sequence;
7:     Select two sequences with the same target item, and obtain their interest representations $i_1$, $i_2$ through the sequence encoder;
8:     Select the intent prototypes $z_1$, $z_2$ closest to $i_1$, $i_2$;
9:     Calculate the intent contrastive loss $\mathcal{L}_{icl}$;
10:    Generate the item embedding matrix $B$;
11:    Combine $B$ with the interest representation $I^u$ to calculate the next-item prediction loss $\mathcal{L}_{rec}$;
12:    Calculate the total loss $\mathcal{L}$;
13:  end for
14: end while
15: return $\mathcal{L}$
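For readers who prefer code, a hypothetical Python skeleton of Algorithm 1 follows; every `model.*` method is a placeholder standing in for the corresponding module described above, not an API defined by this paper:

```python
# Hypothetical training skeleton mirroring Algorithm 1.
def train(model, dataloader, optimizer, max_epochs=1000):
    for epoch in range(max_epochs):                    # "while not converged"
        for batch in dataloader:
            L_gcl = model.graph_contrastive(batch)     # steps 3-5
            views = model.augment(batch)               # step 6
            L_icl = model.intent_contrastive(views)    # steps 7-9
            L_rec = model.next_item_loss(batch)        # steps 10-11
            loss = L_rec + model.lam1 * L_gcl + model.lam2 * L_icl  # step 12
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
        if model.should_early_stop():                  # e.g., NDCG@10 plateau
            break
```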

2.7. Computational Complexity Analysis

The computational complexity in this paper primarily arises from the sequence information extraction module and the intent learning module. In the sequence information extraction module, generating graph representations with the graph encoder costs $O(L \times N \times d)$, where $L$ is the number of graph convolution layers, $N$ is the number of nodes in the graph (i.e., the number of items), and $d$ is the feature dimension of each node. The dimensionality reduction using SVD costs $O(\min(N, M)^3)$, where $N$ is the number of items and $M$ is the number of edges. In the intent learning module, K-means clustering costs $O(l \times n \times k)$, where $l$ is the maximum number of iterations, $n$ is the number of samples, and $k$ is the number of clusters. The overall computational complexity is therefore $O(l \times n \times k + L \times N \times d + \min(N, M)^3)$. Challenges for real-world deployment should be considered in terms of scalability, memory consumption, and computational efficiency, especially when handling large-scale datasets. The complexity of graph-based models and clustering methods may increase significantly with the number of items, edges, and samples, which could lead to difficulties in practical deployment and require optimizations or approximations to manage resource limitations effectively.

3. Experiments and Results

3.1. Datasets and Experimental Settings

To ensure the generalizability of the experiments, three commonly used public datasets for sequential recommendation were adopted: the Sports and Toys datasets from Amazon (https://cseweb.ucsd.edu/~jmcauley/, accessed on 18 August 2024), the largest e-commerce platform, and the music recommendation dataset LastFM (https://grouplens.org/datasets/hetrec-2011/, accessed on 18 August 2024). For these datasets, the data processing method of Chen et al. [25] is followed, where users and items with fewer than five interactions are removed. The experiments in this paper adopt a leave-one-out strategy for evaluation. Specifically, for each user’s interaction sequence, the last two items are used as validation and test data, while the remaining part is used to train the model. Detailed information about the three datasets is shown in Table 1.
Two commonly used evaluation metrics, Hit@10 and NDCG@10, are employed for assessment. Hit measures whether the recommended items within the top-k results contain the items the user desires, while NDCG evaluates the quality of the ranking results [39]. Higher values of Hit and NDCG indicate more accurate recommendations. The experimental environment uses Windows 11, with an RTX 3060 GPU and 16 GB of memory. All experimental code is written in Python 3.10, and the experimental framework is PyTorch 1.12.1. The RM-GILSA model is implemented in PyTorch, with Adam as the optimizer [40]. The maximum number of training epochs is set to 1000, and early stopping is applied: if the NDCG@10 value does not improve for 40 epochs, model training is stopped. More hyperparameter settings are shown in Table 2.
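Under the leave-one-out protocol, both metrics reduce to simple functions of the rank of the single held-out item; a minimal sketch:

```python
import numpy as np

# rank is the 0-based position of the held-out item in the ranked list.
def hit_at_k(rank, k=10):
    return 1.0 if rank < k else 0.0

def ndcg_at_k(rank, k=10):
    # one relevant item per user, so IDCG = 1 and DCG = 1 / log2(rank + 2)
    return 1.0 / np.log2(rank + 2) if rank < k else 0.0

# Reported HR@10 / NDCG@10 are these values averaged over all test users.
```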

3.2. Comparison and Analysis of Model Results

To validate the effectiveness of the RM-GILSA model, this paper compares it with several representative sequence-based recommendation models:
SASRec (2018) [35] applies the Transformer to recommendation systems, using the self-attention mechanism to model sequences of arbitrary length and effectively capture both users’ long-term and short-term interests.
CoSeRec (2021) [15] proposes a novel and robust data augmentation method, while leveraging item relevance to alleviate the length-skew problem and enhance the robustness of the data augmentation process.
DuoRec (2022) [16] introduces a Dropout-based model-level augmentation in contrastive learning for sequence recommendation, achieving better semantic preservation and significantly mitigating the representation degeneration problem.
ELECRec (2022) [41] is the first to train a sequence recommender as a discriminator rather than a generator, making user behavior sequences and item representations more accurate.
ICLRec (2022) [25] models the latent intent factors in user interactions and integrates them into the sequence recommendation model through a new contrastive self-supervised learning objective.
MCLRec (2023) [17] introduces a learnable enhancer module that captures informative features from augmented sequences through random data augmentation and model enhancement operations, thereby improving the contrastive learning framework.
ELCRec (2024) [26] innovatively extends the existing intent learning optimization framework by combining behavior representation learning with clustering optimization.
BASRec (2024) [10] is a contrastive-learning-based sequence recommendation method proposed in December 2024. It introduces two new operators, M-Reorder and M-Substitute, for single-sequence augmentation, which mix the representations of items in the original sequence with those in the augmented sequence to generate new samples; together with its cross-sequence augmentation module, it performs augmentation and fusion operations to generate new samples that balance relevance and diversity.
This paper provides a thorough comparison of the experimental results between RM-GILSA and other recommendation models on the Sports, Toys, and LastFM datasets. The summarized results are presented in Table 3, where bold text indicates the best performance and underlined text highlights the second-best performance. The following observations can be made:
From these results, it can be seen that CoSeRec, which introduces different contrastive self-supervised learning data augmentation methods, outperforms SASRec, further validating the effectiveness of data augmentation in alleviating the data sparsity problem. Specifically, data augmentation helps to expand the data samples and provide additional diversity, which enables the model to better capture the underlying user behavior patterns, significantly improving the performance of the recommendation system. This indicates that solving the data sparsity problem is not limited to traditional model optimization methods; data augmentation is also a key strategy to enhance recommendation accuracy. Furthermore, the ELECRec model performs better than SASRec, suggesting that training a sequence recommendation model as a discriminator can more effectively capture the details in user behavior and generate more accurate representations for each item. This discriminative training approach allows for a deeper exploration of the complex relationships between users and items, leading to more user-preference-aligned recommendations.
It is worth noting that MCLRec and BASRec outperform CoSeRec, indicating that further improvements in data augmentation methods can bring significant results. By introducing more refined and targeted augmentation strategies, these models are able to provide more precise recommendations while retaining the original features of the data. Therefore, optimizing data augmentation methods not only helps address data sparsity but also enhances the model’s recommendation accuracy and generalization ability. Compared to CoSeRec, ICLRec and ELCRec significantly improve recommendation performance by introducing intent modeling. By accurately capturing users’ underlying intentions, these models better understand the motivations behind user behaviors, leading to more personalized and accurate recommendations. Intent modeling helps the model effectively distinguish between users’ behavioral patterns in different contexts, further improving the prediction of user preferences. However, the performance of ICLRec and ELCRec is still not as good as that of DuoRec, which may be due to the introduction of random data augmentation during the training process. This augmentation method sometimes disrupts the intent representation of the original sequence, negatively impacting the model’s performance. Therefore, balancing the diversity introduced by data augmentation and preserving the original intent of the sequence remains an important research direction for improving model performance.
The RM-GILSA model proposed in this paper outperforms the above models on all metrics, demonstrating its superiority. The RM-GILSA model first uses item-related augmentation operators for sequence augmentation, which not only introduces more diversity but also maintains the relationships underlying the original intentions of the sequences, providing richer information for the subsequent modeling process. It then utilizes graph contrastive learning for sequence modeling to capture global sequence information, improving the accuracy and generalization ability of the recommendation system. Finally, it employs intent contrastive learning to maximize the consistency between sequence views and their corresponding intentions, based on the intentions learned in the sequence recommendation model. This further enhances the accuracy of recommendations and user satisfaction, ensuring that the content recommended by the model better aligns with the user’s true needs and preferences. On the Sports dataset, HR@10 increased by 4.81% and NDCG@10 improved by 5.02%; on the Toys dataset, HR@10 increased by 6.93% and NDCG@10 by 6.38%; and on the LastFM dataset, HR@10 improved by 5.76% and NDCG@10 by 13.66%. Since the three datasets differ in scale, and two of them are relatively large, even small absolute gains are statistically meaningful, indicating that the proposed design has strong practical application value. On the Sports, Toys, and LastFM datasets, compared to the best-performing baseline, ELECRec, the proposed model shows progressively larger improvements in HR@10 and NDCG@10. As seen from Table 1, from the Sports to the LastFM dataset, the number of users, the number of items, and the number of interactions gradually decrease, which suggests that the proposed model performs better as the dataset size decreases.

3.3. Ablation Study

The ablation experiments are designed to investigate the impact of each key component of the RM-GILSA model on the final recommendation performance.

3.3.1. Effect of Sequence Information Extraction Module

To verify the effectiveness of the proposed Sequence Information Extraction Module (SIEM), this study designed the variant model RMGILSA-S, representing the model without the SIEM. Specific results are shown in Figure 4.
From the results in Figure 4, by comparing RMGILSA and RMGILSA-S, it is evident that the model performance significantly decreases after removing SIEM, which proves the effectiveness of the Sequence Information Extraction Module. This indicates that incorporating global sequence information into item representations by constructing an adaptive global item-related graph is beneficial for capturing global intent information, thereby improving the performance of sequence recommendation models.
To observe the impact of the graph contrastive learning strength $\lambda_1$ on the model’s final prediction performance, this paper designs relevant experiments on the Toys dataset, and the results are shown in Figure 5. From Figure 5, it can be observed that as the contrastive learning strength $\lambda_1$ increases, both HR@10 and NDCG@10 improve. This can be attributed to the incorporation of global sequential information, which provides the model with more contextual data, thereby offering more detailed and rich features for subsequent user intent modeling. Specifically, global sequential information helps the model better understand the user’s historical behavior, evolving interests, and preferences, leading to more precise recommendations that match the user’s needs. Therefore, as $\lambda_1$ increases, the model becomes more sensitive in capturing the user’s latent interests and preferences, thereby enhancing the recommendation performance. When $\lambda_1$ reaches 0.2, both HR@10 and NDCG@10 reach their highest values. This indicates that the model has achieved the best balance in utilizing global sequential information at this point: it can fully leverage the potential value of the global sequence information without overemphasizing it, which would lead to increased complexity. At this point, the model’s recommendation performance is at its best, providing the most relevant recommendations to the user. However, as $\lambda_1$ continues to increase, both HR@10 and NDCG@10 begin to decline. This suggests that, with excessive interference from global sequential information, the model starts to face the burden of processing too much information. While some of this additional global sequential information may contain useful signals, it can also introduce noise and increase computational complexity, making the model’s learning process more difficult. Too much information makes it harder for the model to extract effective features, which in turn lowers its accuracy and efficiency in making recommendations.
Therefore, selecting an appropriate value for $\lambda_1$ is key to improving the model’s recommendation performance. A moderate $\lambda_1$ value ensures that the model can effectively utilize global sequential information while avoiding issues like information overload and excessive complexity, thus achieving more accurate and efficient recommendation results.

3.3.2. Effect of Sequence Augmentation Module

To verify the effectiveness of the proposed Sequence Augmentation Module (SAM), this study designed a comparative experiment, as shown in Figure 4. By comparing the experimental results of RMGILSA-S and ICLRec, it is evident that RMGILSA-S outperforms ICLRec significantly, which proves the effectiveness of SAM. This demonstrates that the improved item-related data augmentation method proposed in this paper largely preserves the original sequence information while performing data augmentation, thereby significantly enhancing the accuracy of recommendations.

3.3.3. Effect of Intent Contrastive Learning Module

To verify the effectiveness of the proposed Intent Contrastive Learning Module (ICLM), this study designed the variant model RMGILSA-I, representing the model without the ICLM. Specific results are shown in Figure 4. By comparing RMGILSA and RMGILSA-I, it is evident that the model performance significantly decreases after removing ICLM, which proves the effectiveness of the Intent Contrastive Learning Module.
To observe the impact of the intent contrastive learning strength $\lambda_2$ on the model’s final prediction performance [26], this paper designs relevant experiments on the Toys dataset, and the results are shown in Figure 6. From Figure 6, it can be observed that as the intent contrastive learning strength $\lambda_2$ increases, both HR@10 and NDCG@10 also improve. This indicates that, through intent modeling, the model is better able to learn the user’s potential preferences, leading to more accurate recommendations. When $\lambda_2$ reaches 0.05, HR@10 and NDCG@10 peak and then begin to decline. This suggests that a very high $\lambda_2$ value might cause the model to focus too much on local details, which in turn affects its ability to generalize across large numbers of users and items. Therefore, selecting an appropriate value for $\lambda_2$ becomes key to further enhancing the performance of the recommendation system.

3.3.4. Runtime Performance Analysis

In order to gain a clearer understanding of the runtime performance of the RM-GILSA model proposed in this paper, experiments were conducted, and its performance was compared with the classic intent-based model ICLRec [25] and the latest data augmentation-based sequence recommendation model BASRec [10]. The experiment was conducted on the Toys dataset, and the results are shown in Table 4. The results show that BASRec requires the least runtime and CPU usage, which is mainly because its model is the simplest among the three. ICLRec ranks in the middle due to its use of intent modeling, which adds a clustering process compared to BASRec, making the model more complex. In contrast, the RM-GILSA model proposed in this paper adds a global information extraction module on top of ICLRec, resulting in the highest runtime and CPU usage. This indicates that while the RM-GILSA model can capture more detailed information and improve recommendation performance, it requires more computational resources due to its additional complexity.

3.4. The Impact of the Number of Intent Classes on Model Performance

When obtaining intent prototype representations, the number of user intent types K is a tunable parameter. A larger value of K means that users can have more distinct intents [25]. To observe the impact of different numbers of intent classes on the model’s final prediction performance, this paper designs relevant experiments, and the results are shown in Figure 7. The experiments test HR@10 and NDCG@10 on the Toys dataset. From Figure 7, it can be observed that the RM-GILSA model achieves its best performance when K increases to 1024 on the Toys dataset. After this point, as K continues to increase, the performance begins to degrade. This is because when K is very small, the number of users under each intent prototype may be large, leading to false positive samples being introduced in the contrastive self-supervised learning (i.e., users with different intents are incorrectly identified as having the same intent), which affects the learning process. On the other hand, when K is too large, the number of users under each intent prototype becomes smaller, and the false negative samples introduced will also have a negative impact on the contrastive self-supervised learning. On the Toys dataset, 1024 user intents provide the best generalization for users’ diverse behaviors.

4. Conclusions

This study proposes a recommendation model based on Global Intention Learning and Sequence Augmentation (RM-GILSA). By improving the relevance of augmented data and introducing global intent learning, the model’s accuracy has been enhanced. While this approach effectively boosts the model’s performance, it also introduces challenges associated with the increased depth of the model, manifesting as slower training speeds and higher memory usage. Future research could explore more efficient techniques to better capture users’ long-term consumption interests, optimizing performance while balancing accuracy and efficiency. Additionally, while this model has effectively improved the relevance of the augmented data, it may also affect its diversity. Future work may consider further refining data augmentation methods to balance the relevance and diversity of the augmented data. Such improvements could generate sufficiently informative variant data without disrupting the original data structure, thereby enhancing the robustness and adaptability of the recommendation system. This will support further improvement of recommendation accuracy, enabling better capture of user preferences, improving recommendation quality, and ensuring that users receive choices more aligned with their interests and needs.

Author Contributions

Conceptualization, M.L., W.L. and X.C.; methodology, M.L., W.L. and X.C.; software, M.L.; validation, M.L. and W.L.; formal analysis, M.L.; investigation, M.L.; resources, X.C.; data curation, M.L.; writing—original draft preparation, M.L.; writing—review and editing, M.L. and X.C.; funding acquisition, X.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China, grant number 62001133.

Data Availability Statement

Data is contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Wu, L.; Zheng, Z.; Qiu, Z.; Wang, H.; Gu, H.; Shen, T.; Qin, C.; Zhu, C.; Zhu, H.; Liu, Q.; et al. A survey on large language models for recommendation. arXiv 2024, arXiv:2305.19860. Available online: https://arxiv.org/abs/2305.19860 (accessed on 18 August 2024). [CrossRef]
  2. Wu, C.Y.; Ahmed, A.; Beutel, A.; Smola, A.J.; Jing, H. Recurrent recommender networks. In Proceedings of the 10th ACM International Conference on Web Search and Data Mining, Cambridge, UK, 6–10 February 2017; ACM: New York, NY, USA, 2017; pp. 495–503. [Google Scholar]
  3. Ren, R.; Liu, Z.; Li, Y.; Zhao, W.X.; Wang, H.; Ding, B.; Wen, J.R. Sequential recommendation with self-attentive multi-adversarial network. In Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval, Xi’an, China, 25–30 July 2020; ACM: New York, NY, USA, 2020; pp. 89–98. [Google Scholar]
  4. Wang, S.; Zhang, Q.; Hu, L.; Zhang, X.; Wang, Y.; Aggarwal, C. Sequential/session-based recommendations: Challenges, approaches, applications and opportunities. In Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval, Madrid, Spain, 11–15 July 2022; ACM: New York, NY, USA, 2022; pp. 3425–3428. [Google Scholar]
  5. Sun, F.; Liu, J.; Wu, J.; Pei, C.; Lin, X.; Ou, W.; Jiang, P. BERT4Rec: Sequential recommendation with bidirectional encoder representations from transformer. In Proceedings of the 28th ACM International Conference on Information and Knowledge Management, Beijing, China, 3–7 November 2019; ACM: New York, NY, USA, 2019; pp. 1441–1450. [Google Scholar]
  6. Li, Y.; Chen, T.; Zhang, P.; Yin, H. Lightweight Self-Attentive Sequential Recommendation. In Proceedings of the 30th ACM International Conference on Information & Knowledge Management, Gold Coast, QLD, Australia, 1–5 November 2021; ACM: New York, NY, USA, 2021; pp. 967–977. [Google Scholar]
  7. Liu, Z.; Fan, Z.; Wang, Y.; Philip, S.Y. Augmenting Sequential Recommendation with Pseudo-Prior Items via Reversely Pre-training Transformer. arXiv 2021, arXiv:2105.00522. Available online: https://arxiv.org/abs/2105.00522 (accessed on 18 August 2024).
  8. Jing, M.; Zhu, Y.; Zang, T.; Wang, K. Contrastive self-supervised learning in recommender systems: A survey. ACM Trans. Inf. Syst. 2023, 42, 1–39. [Google Scholar] [CrossRef]
  9. Tang, J.; Wang, K. Personalized top-n sequential recommendation via convolutional sequence embedding. In Proceedings of the 11th ACM International Conference on Web Search and Data Mining, Los Angeles, CA, USA, 5–9 February 2018; ACM: New York, NY, USA, 2018; pp. 565–573. [Google Scholar]
  10. Dang, Y.; Zhang, J.; Liu, Y. Augmenting Sequential Recommendation with Balanced Relevance and Diversity. arXiv 2024, arXiv:2412.08300. Available online: https://arxiv.org/abs/2412.08300 (accessed on 18 August 2024).
  11. Li, Y.; Luo, Y.; Zhang, Z.; Sadiq, S.; Cui, P. Context-aware attention-based data augmentation for POI recommendation. In Proceedings of the ICDEW, Macao, China, 8–12 April 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 177–184. [Google Scholar]
  12. Liu, Q.; Yan, F.; Zhao, X.; Du, Z.; Guo, H.; Tang, R.; Tian, F. Diffusion augmentation for sequential recommendation. In Proceedings of the CIKM, Birmingham, UK, 21–25 October 2023; pp. 1576–1586. [Google Scholar]
  13. Wang, J.; Le, Y.; Chang, B.; Wang, Y.; Chi, E.H.; Chen, M. Learning to augment for casual user recommendation. In Proceedings of the WWW, Lyon, France, 25–29 April 2022; pp. 2183–2194. [Google Scholar]
  14. Xie, X.; Sun, F.; Liu, Z.; Wu, S.; Gao, J.; Zhang, J.; Ding, B.; Cui, B. Contrastive learning for sequential recommendation. arXiv 2020, arXiv:2010.14395. Available online: https://arxiv.org/abs/2010.14395 (accessed on 18 August 2024).
  15. Liu, Z.; Chen, Y.; Li, J.; Yu, P.S.; McAuley, J.; Xiong, C. Contrastive Self-supervised Sequential Recommendation with Robust Augmentation. arXiv 2021, arXiv:2108.06479. Available online: https://arxiv.org/abs/2108.06479 (accessed on 18 August 2024).
  16. Qiu, R.; Huang, Z.; Yin, H.; Wang, Z. Contrastive learning for representation degeneration problem in sequential recommendation. In Proceedings of the 15th ACM International Conference on Web Search and Data Mining, Tempe, AZ, USA, 21–25 February 2022; ACM: New York, NY, USA, 2022; pp. 813–823. [Google Scholar]
  17. Qin, X.; Yuan, H.; Zhao, P.; Fang, J.; Zhuang, F.; Liu, G.; Liu, Y.; Sheng, V. Meta-optimized contrastive learning for sequential recommendation. In Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval, Taipei, China, 23–27 July 2023; ACM: New York, NY, USA, 2023; pp. 89–98. [Google Scholar]
  18. Dang, Y.; Yang, E.; Guo, G.; Jiang, L.; Wang, X.; Xu, X.; Sun, Q.; Liu, H. TiCoSeRec: Augmenting Data to Uniform Sequences by Time Intervals for Effective Recommendation. IEEE Trans. Knowl. Data Eng. 2023, 36, 2686–2700. [Google Scholar] [CrossRef]
  19. Dang, Y.; Yang, E.; Guo, G.; Jiang, L.; Wang, X.; Xu, X.; Sun, Q.; Liu, H. Uniform sequence better: Time interval aware data augmentation for sequential recommendation. In Proceedings of the AAAI, Washington, DC, USA, 7–14 February 2023; Volume 37, pp. 4225–4232. [Google Scholar]
  20. Wang, S.; Hu, L.; Wang, Y.; Sheng, Q.Z.; Orgun, M.; Cao, L. Modeling multi-purpose sessions for next-item recommendations via mixture-channel purpose routing networks. In Proceedings of the 28th International Joint Conference on Artificial Intelligence, Macao, China, 10–16 August 2019; Nanyang Technological University: Singapore, 2019; pp. 879–890. [Google Scholar]
  21. Liu, Z.; Li, X.; Fan, Z.; Guo, S.; Achan, K.; Yu, P.S. Basket recommendation with multi-intent translation graph neural network. In Proceedings of the 5th IEEE International Conference on Big Data, Xiamen, China, 10–13 December 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 728–737. [Google Scholar]
  22. Pan, Z.; Cai, F.; Ling, Y.; De Rijke, M. An intent guided collaborative machine for session-based recommendation. In Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval, Xi’an, China, 25–30 July 2020; ACM: New York, NY, USA, 2020; pp. 1833–1836. [Google Scholar]
  23. Ma, J.; Zhou, C.; Yang, H.; Cui, P.; Wang, X.; Zhu, W. Disentangled self-supervision in sequential recommenders. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery& Data Mining, San Diego, CA, USA, 23–27 August 2020; ACM: New York, NY, USA, 2020; pp. 483–491. [Google Scholar]
  24. Tanjim, M.M.; Su, C.; Benjamin, E.; Hu, D.; Hong, L.; McAuley, J. Attentive sequential models of latent intent for next item recommendation. In Proceedings of the 29th Web Conference, Taipei, China, 20–24 April 2020; ACM: New York, NY, USA, 2020; pp. 2528–2534. [Google Scholar]
  25. Chen, Y.; Liu, Z.; Li, J.; McAuley, J.; Xiong, C. Intent contrastive learning for sequential recommendation. In Proceedings of the 31st International Conference on World Wide Web, Lyon, France, 25–29 April 2022; ACM: New York, NY, USA, 2022; pp. 2172–2182. [Google Scholar]
  26. Liu, Y.; Zhu, S.; Xia, J.; Ma, Y.; Ma, J.; Liu, X.; Yu, S.; Zhang, K.; Zhong, W. End-to-end learnable clustering for intent learning in recommendation. In Proceedings of the 38th Conference on Neural Information Processing Systems, Vancouver, BC, Canada, 9–15 December 2024. [Google Scholar]
  27. Zhou, P.; Gao, J.; Xie, Y.; Ye, Q.; Hua, Y.; Kim, J.; Wang, S.; Kim, S. Equivariant contrastive learning for sequential recommendation. In Proceedings of the 17th ACM Conference on Recommender Systems, Singapore, 18–22 September 2023; ACM: New York, NY, USA, 2023; pp. 79–86. [Google Scholar]
  28. Li, X.; Sun, A.; Zhao, M.; Yu, J.; Zhu, K.; Jin, D.; Yu, M.; Yu, R. Multi-intention oriented contrastive learning for sequential recommendation. In Proceedings of the 16th ACM International Conference on Web Search and Data Mining, Singapore, 27 February–3 March 2023; ACM: New York, NY, USA, 2023; pp. 411–419. [Google Scholar]
  29. Li, Z.; Wang, X.; Yang, C.; Yao, L.; McAuley, J.; Xu, G. Exploiting explicit and implicit item relationships for session-based recommendation. In Proceedings of the 16th ACM International Conference on Web Search and Data Mining, Singapore, 27 February–3 March 2023; ACM: New York, NY, USA, 2023; pp. 553–561. [Google Scholar]
  30. Wang, Z.; Wei, W.; Cong, G.; Li, X.L.; Mao, X.L.; Qiu, M. Global context enhanced graph neural networks for session-based recommendation. In Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval, Xi’an, China, 25–30 July 2020; ACM: New York, NY, USA, 2020; pp. 169–178. [Google Scholar]
  31. He, X.; Deng, K.; Wang, X.; Li, Y.; Zhang, Y.; Wang, M. Lightgcn: Simplifying and powering graph convolution network for recommendation. In Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval, Xi’an, China, 25–30 July 2020; ACM: New York, NY, USA, 2020; pp. 639–648. [Google Scholar]
  32. Cai, X.; Huang, C.; Xia, L.; Ren, X. LightGCL: Simple yet effective graph contrastive learning for recommendation. In Proceedings of the 11th International Conference on Learning Representations, ICLR, Kigali, Rwanda, 1–5 May 2023; pp. 127–135. [Google Scholar]
  33. Tan, Y.K.; Xu, X.; Liu, Y. Improved recurrent neural networks for session-based recommendations. In Proceedings of the 10th ACM Conference on Recommender Systems, Boston, MA, USA, 15–19 September 2016; ACM: New York, NY, USA, 2016; pp. 17–22. [Google Scholar]
  34. Ma, M.; Ren, P.; Chen, Z.; Ren, Z.; Zhao, L.; Liu, P.; Ma, J.; de Rijke, M. Mixed Information Flow for Cross-Domain Sequential Recommendations. ACM Trans. Knowl. Discov. Data 2022, 16, 64. [Google Scholar] [CrossRef]
  35. Kang, W.C.; McAuley, J. Self-attentive sequential recommendation. In Proceedings of the 18th IEEE International Conference on Data Mining, Sentosa, Singapore, 17–20 November 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 197–206. [Google Scholar]
  36. Shin, Y.; Choi, J.; Wi, H.; Park, N. An Attentive Inductive Bias for Sequential Recommendation beyond the Self-Attention. arXiv 2024, arXiv:2312.10325. Available online: https://arxiv.org/abs/2312.10325 (accessed on 28 March 2025). [CrossRef]
  37. Wei, Z.; Wu, N.; Li, F.; Wang, K.; Zhang, W. MoCo4SRec: A momentum contrastive learning framework for sequential recommendation. Expert Syst. Appl. 2023, 223, 119911. [Google Scholar] [CrossRef]
  38. Zhang, Y.; Cai, J.; Li, C.; Li, T.; Wang, H. KMPR-AEP: Knowledge-Enhanced Multi-task Parallelized Recommendation Algorithm Incorporating Attention-Embedded Propagation. Int. J. Comput. Intell. Syst. 2024, 17, 213. [Google Scholar] [CrossRef]
  39. Celik, E.; Omurca, S.I. Skip-Gram and Transformer Model for Session-Based Recommendation. Appl. Sci. 2024, 14, 635. [Google Scholar] [CrossRef]
  40. Kingma, D.; Ba, J. Adam: A method for stochastic optimization. In Proceedings of the 3rd International Conference on Learning Representations, ICLR, San Diego, CA, USA, 7–9 May 2015; pp. 231–237. [Google Scholar]
  41. Chen, Y.; Lin, J.; Xiong, C. ELECRec: Training sequential recommenders as discriminators. In Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval, Madrid, Spain, 11–15 July 2022; ACM: New York, NY, USA, 2022; pp. 2671–2678. [Google Scholar]
Figure 1. RM-GILSA overall framework.
Figure 2. Sequence information extraction module.
Figure 3. Data augmentation method based on item relevance.
Figure 4. The impact of each key component: (a) HR@10 and NDCG@10 on the Sports dataset; (b) HR@10 and NDCG@10 on the Toys dataset.
Figure 5. Effect of the graph contrastive learning strength $\lambda_1$ on model performance.
Figure 6. Effect of the intent contrastive learning strength $\lambda_2$ on model performance.
Figure 7. Effect of the number of intent classes K on model performance.
Table 1. Statistical information of experimental datasets.

Dataset | #User  | #Item  | #Action | Avg. Len. | Sparsity
Sports  | 35,598 | 18,357 | 296,337 | 8.3       | 99.95%
Toys    | 19,412 | 11,924 | 167,597 | 8.6       | 99.93%
LastFM  | 1090   | 3646   | 52,551  | 48.2      | 98.68%
Table 2. Hyperparameter settings.

Parameter Name                                     | Value
batch_size                                         | 256
learning rate                                      | 0.001
weight_decay                                       | 0
hidden_size                                        | 64
maximum sequence length                            | 50
num_attention_heads                                | 2
dropout rate                                       | 0.5
graph contrastive learning strength ($\lambda_1$)  | 0.2
intent contrastive learning strength ($\lambda_2$) | 0.05
Table 3. Comparison of experimental results of various models.

Model       | Sports Hit@10   | Sports NDCG@10  | Toys Hit@10     | Toys NDCG@10    | LastFM Hit@10   | LastFM NDCG@10
SASRec      | 0.0320          | 0.0172          | 0.0696          | 0.0398          | 0.0633          | 0.0355
CoSeRec     | 0.0437          | 0.0242          | 0.0771          | 0.0447          | 0.0459          | 0.0273
DuoRec      | 0.0466          | 0.0244          | 0.0927          | 0.0443          | 0.0624          | 0.0361
ELECRec     | 0.0541          | 0.0319          | 0.0996          | 0.0627          | 0.0624          | 0.0366
ICLRec      | 0.0450          | 0.0242          | 0.0837          | 0.0482          | 0.0505          | 0.0305
MCLRec      | 0.0501          | 0.0260          | 0.0921          | 0.0468          | 0.0569          | 0.0317
ELCRec      | 0.0426          | 0.0231          | 0.0859          | 0.0492          | 0.0495          | 0.0282
BASRec      | 0.0436          | 0.0242          | 0.0825          | 0.0493          | 0.0642          | 0.0366
Ours        | 0.0567 ± 0.0009 | 0.0335 ± 0.0004 | 0.1065 ± 0.0015 | 0.0667 ± 0.0011 | 0.0679 ± 0.0007 | 0.0416 ± 0.009
Improvement | 4.81%           | 5.02%           | 6.93%           | 6.38%           | 5.76%           | 13.66%
Table 4. Runtime performance.

Model  | Runtime    | CPU Utilization
ICLRec | 2 h 7 min  | 59%
BASRec | 53 min     | 48%
Ours   | 3 h 20 min | 85%