This is an early access version; the complete PDF, HTML, and XML versions will be available soon.
Open Access Article
IntentGraphRec: Dual-Level Fusion of Co-Intent Graphs and Shift-Aware Sequence Encoding Under Full-Catalog Evaluation
by Doo-Yong Park 1 and Sang-Min Choi 2,3,*
1 Department of Building Equipment System & Fire Protection Engineering, Chungwoon University, Sukgol-ro 113, Incheon 22100, Republic of Korea
2 Department of Computer Science and Engineering, Gyeongsang National University, Jinjudaero 501, Jinju 52828, Republic of Korea
3 The Research Institute of Natural Science, Jinjudaero 501, Jinju 52828, Republic of Korea
* Author to whom correspondence should be addressed.
Mathematics 2025, 13(22), 3632; https://doi.org/10.3390/math13223632
Submission received: 8 October 2025 / Revised: 2 November 2025 / Accepted: 10 November 2025 / Published: 12 November 2025
Abstract
Sequential recommendation seeks to predict the next item a user will interact with by modeling historical behavior, yet most approaches emphasize either temporal dynamics or item relationships and thus miss how structural co-intents interact with dynamic preference shifts under realistic evaluation. IntentGraphRec introduces a dual-level framework that builds an intent graph from session co-occurrences to learn intent-aware item representations with a lightweight GNN, paired with a shift-aware Transformer that adapts attention to evolving preferences; the two representations are combined through a learnable fusion gate. To avoid optimistic bias, evaluation is performed with a leakage-free, full-catalog ranking protocol that forms prefixes strictly before the last target occurrence and scores against the entire item universe while masking PAD and prefix items. On MovieLens-1M and Gowalla, IntentGraphRec is competitive but does not surpass strong Transformer baselines (SASRec, BERT4Rec); controlled analyses indicate that late fusion is often dominated by sequence representations and that local co-intent graphs provide limited gains unless structural signals are injected earlier or regularized. These findings provide a reproducible view of when structural signals help, and when they do not, in sequential recommendation, and they offer guidance for future graph–sequence hybrids.
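To make the two ideas in the abstract concrete, the PyTorch-style sketch below illustrates (a) one way a learnable gate could fuse a graph-derived intent representation with a shift-aware sequence state, and (b) a full-catalog scoring step that masks PAD and prefix items. This is a minimal sketch under assumed shapes and module names (GatedFusion, full_catalog_scores, pad_id = 0); it is not the authors' implementation.

```python
import torch
import torch.nn as nn


class GatedFusion(nn.Module):
    """Hypothetical learnable fusion gate combining sequence and graph states."""

    def __init__(self, dim: int):
        super().__init__()
        self.gate = nn.Linear(2 * dim, dim)

    def forward(self, seq_state: torch.Tensor, graph_state: torch.Tensor) -> torch.Tensor:
        # seq_state, graph_state: (batch, dim)
        g = torch.sigmoid(self.gate(torch.cat([seq_state, graph_state], dim=-1)))
        # Convex combination; g near 1 lets the sequence branch dominate.
        return g * seq_state + (1.0 - g) * graph_state


def full_catalog_scores(user_repr: torch.Tensor,
                        item_emb: torch.Tensor,
                        prefix_items: torch.Tensor,
                        pad_id: int = 0) -> torch.Tensor:
    """Score every catalog item, masking PAD and items already seen in the prefix.

    user_repr:    (batch, dim) fused user representation
    item_emb:     (num_items, dim) catalog item embeddings (index 0 assumed PAD)
    prefix_items: (batch, max_len) item ids observed strictly before the target
    """
    scores = user_repr @ item_emb.t()                 # (batch, num_items)
    scores[:, pad_id] = float("-inf")                 # never rank the PAD token
    scores.scatter_(1, prefix_items, float("-inf"))   # never rank prefix items
    return scores


if __name__ == "__main__":
    fusion = GatedFusion(dim=64)
    fused = fusion(torch.randn(8, 64), torch.randn(8, 64))
    scores = full_catalog_scores(fused, torch.randn(1000, 64),
                                 torch.randint(1, 1000, (8, 50)))
    print(scores.shape)  # torch.Size([8, 1000])
```

Masking the prefix in the scoring step, rather than during ranking, is one simple way to keep the full-catalog protocol leakage-free: every remaining item competes, but nothing the model has already observed can be "predicted" back.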