Novel Models for the Warm-Up Phase of Recommendation Systems
Abstract
1. Introduction
- RS evaluations that explicitly address data leakage are lacking [15].
- Studies quantifying how the metrics of the cold-to-warm RS transition evolve as a function of the number of user–catalog interactions are likewise scarce.
- Few studies address potential improvements in the cold-to-warm transition via algorithms that explicitly target the warm-up phase. Those available typically rely on deep neural network embeddings and consider the new-user rather than the new-item case.
2. Objectives and Contributions
- We study how existing RS formulations perform during the transition from the extreme-cold-start phase to the warm-up phase, ensuring that their training and evaluation follow a chronologically consistent protocol, as discussed in Section 1.
- We propose a novel, algorithm-agnostic approach that improves the performance of any RS during the warm-up phase.
- We report the performance of baseline RSs with and without the proposed model(s) and show how their accuracy, fairness, and serendipity metrics evolve as the number of new-user/new-item interactions grows.
- Prior studies have treated the new-user and new-item problems as two independent cases, with the new-user problem receiving most of the attention. Our formulation applies equally to both problems and exhibits desirable characteristics in each.
3. Related Work
3.1. Cold Start
3.2. Warm-Up
3.3. RS Evaluation
4. Model Formulation
4.1. Proposed Models
4.2. Numerical Implementations of Recommendation Algorithms
- SVD/SVD++ is a well-known baseline model in the RS literature—Svd.
- The stereotype model with the Xgb solver discussed in [15] represents the best pure-cold-start stereotype-based model in that study—Xgb stereo.
- The pure-cold-start RS with a DNN is discussed in [4] in its standard form and stereotyped formulation—Dnn, Dnn stereo.
- Xgb stereo with the dynamic user and item biases of Equations (4) and (5)—Xgb dynamic.
- DNN (the best deep neural network baseline), with the dynamic user and item biases of Equations (4) and (5) as extra processing layers—Dnn dynamic.
- XGB dynamic model with the preference transition matrix approach in (6)—Xgb full warm.
- DNN dynamic model with the preference transition matrix approach in (6)—Dnn full warm.
Listing 1. Warm-up-aware recommendation: experimental pipeline. High-level pseudocode outlining the training and evaluation process of base and warm-up recommendation models under cold-start conditions.

```
# INPUT:
#  - Dataset with timestamped user-item interactions + user/item metadata
#  - Choice of base RS model: SVD, XGBoost, or DNN
#  - Hyperparameters for training

# STEP 0:
Select a number of user-item interactions deemed sufficient to train a model, M
Select a number of user-item interactions deemed sufficient to test a model, T
Sort all interactions by timestamp
j = 0
for the set of interactions R from 0 to j * M + M:

    # PREPROCESSING STEP 1 (time-consistent training set):
    Identify and retain only the users U and items I whose interactions fall within R

    # PREPROCESSING STEP 2 (encoding):
    Encode identified user and item metadata into embedding vectors (e.g., via stereotypes)

    # ----- NEW USER PROBLEM -----
    # TRAINING Step A (new user case):
    Compute user-group encoded biases μ_enc(u) for each encoding of users in U
    Compute item-group encoded biases λ_enc(i) for each encoding of items in I

    # TRAINING Step B-new_user (new user case):
    Train RS_base_Nu on interactions in R (Equation (2)) using individual item biases
    and static encoded user biases μ_enc

    # TRAINING Step C-new_user (new user dynamic warm-up):
    Optimize parameters γ and N_u by minimizing residuals when using the trained
    RS_base_Nu from Step B-new_user, replacing μ_enc with model (4)
    → Obtain RS_base_dyna_warm_Nu

    # TRAINING Step D-new_user (transition probability):
    Optimize the parameters of the item-to-item transition matrix E using sorted
    interactions R to minimize residuals between predictions via RS_base_dyna_warm_Nu
    and the actual rating
    → Obtain RS_full_warm_Nu

    # ----- NEW ITEM PROBLEM -----
    Repeat Steps A, B, C, D for the new item case using Equation (5)
    → Obtain RS_base_Ni, RS_base_dyna_warm_Ni, RS_full_warm_Ni

    # ----- INFERENCE -----
    Take the next T time-sorted interactions from (j * M + M) to (j * M + M + T)
    Extract the new users and new items from the T set
    # New user problem:
    Predict ratings and ranked lists of items for all new users in T using
    RS_base_Nu, RS_base_dyna_warm_Nu, RS_full_warm_Nu
    # New item problem:
    Predict ratings and ranked lists of users for all new items in T using
    RS_base_Ni, RS_base_dyna_warm_Ni, RS_full_warm_Ni

    j += 1
```
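The windowed, time-consistent split driving Listing 1 can be sketched in a few lines of Python. The function name and the `(timestamp, user, item, rating)` tuple layout are illustrative assumptions; the real pipeline additionally trains the base and warm-up models inside each window.

```python
def time_consistent_windows(interactions, M, T):
    """Yield (train, test, new_users, new_items) splits over time-sorted data.

    interactions: iterable of (timestamp, user, item, rating) tuples.
    M: number of interactions added to the training set per increment.
    T: number of subsequent interactions held out for testing.
    """
    ordered = sorted(interactions, key=lambda rec: rec[0])
    j = 0
    while j * M + M + T <= len(ordered):
        # Training set: every interaction up to the chronological cut.
        train = ordered[: j * M + M]
        # Test set: the next T events only, so no future data leaks backward.
        test = ordered[j * M + M : j * M + M + T]
        # Users/items seen for the first time in the test window are the
        # "new user" / "new item" cases evaluated in the paper.
        train_users = {u for _, u, _, _ in train}
        train_items = {i for _, _, i, _ in train}
        new_users = {u for _, u, _, _ in test} - train_users
        new_items = {i for _, _, i, _ in test} - train_items
        yield train, test, new_users, new_items
        j += 1
```

Each yielded window grows the training set by M interactions, matching the `j * M + M` loop bound in the pseudocode.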
4.3. Model Applicability
5. Experiments
5.1. Data
5.2. Cold-Start Warming Experiments
6. Results
6.1. Accuracy
6.2. Ranking Accuracy
6.3. Serendipity and Fairness
6.4. Extending to a New Dataset
6.5. Result Summary
- The addition of models (4, 5, and 6) enables better warm-up performance under various metrics compared with the baseline and pure-cold-start models.
- Accuracy/ranking accuracy shows consistent improvements (approximately 10%), particularly in the early interaction phases, where personalization starts to emerge.
- Serendipity gains: Warm-up models increase serendipity by 5–15% over baseline recommenders, especially during early interactions.
- Performance improvement is obtained regardless of the base RS utilized, and it is statistically significant.
- Models (4)/(5) and model (6) provide two almost independent directions of improvement: better user–item characterization and collaborative recommendation paths, respectively.
- The tradeoff between accuracy and serendipity/variety appears to be a characteristic of the selected baseline.
- The preliminary results reproduce on a second dataset.
- The added complexity of warm-up models is linear (bias models) or quadratic (transition models), remaining modest relative to base RS training, especially for deep models.
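As a concrete illustration of the first direction, the dynamic-bias idea behind models (4) and (5) amounts to blending a group-level (encoded) prior with the accumulating individual bias as interactions arrive. The exponential blending form and the parameter names below are illustrative assumptions, not the paper's exact Equations (4) and (5).

```python
import math

def dynamic_bias(group_bias, observed_ratings, global_mean, gamma, n_char):
    """Hypothetical dynamic user/item bias after k = len(observed_ratings) events.

    Starts at the encoded group bias (pure cold start) and decays toward the
    individual bias as interactions accumulate. `gamma` is the decay speed and
    `n_char` the number of interactions deemed sufficient to characterize the
    user/item (N_u or N_i in the paper's notation).
    """
    k = len(observed_ratings)
    if k == 0:
        return group_bias  # no interactions yet: fall back to the group prior
    individual = sum(r - global_mean for r in observed_ratings) / k
    w = math.exp(-gamma * k / n_char)  # weight on the group prior
    return w * group_bias + (1.0 - w) * individual
```

The linear cost noted above follows directly: each update touches one running sum per user or item.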
7. Conclusions and Future Research
7.1. Conclusions
- Standard RSs without any special treatment of the warm-up phase can substantially lag a warm-up-dedicated, sequence-aware approach such as the ones presented here; such treatment may provide improvements of 10–15% on key metrics. Because the warm-up phase is likely when new users form their opinion of a platform, including such treatment in commercial applications is crucial.
- Although the deep neural network-driven RS achieved higher accuracy, its baseline fairness performance was very low in our experiments. Platforms adopting such approaches may need to trade some accuracy for better fairness characteristics; to date, limited research has addressed this topic.
7.2. Limitations and Future Work
- Model personalization depth. While we use stereotype-based encodings and dynamic biasing, future enhancements could integrate real-time interaction signals.
- Deployment dynamics. Our evaluation is based on offline experiments. A promising future direction involves online evaluation via A/B split testing to assess long-term user retention and engagement. At this stage, we can extrapolate expected gains using ratios observed in real-world case studies, such as those discussed by DataColor.ai, where an improvement in their recommendation engine performance of up to 10% reduced customer attrition by 15%. By analogy, we anticipate that the proposed approach could reduce attrition during the critical warm-up phase by approximately 30%.
- Multi-objective optimization. We focus on accuracy and examine fairness as a side product of the models produced, but balancing and injecting goals such as fairness, diversity, and novelty during the RS training of the warm-up remain open challenges for further research.
Funding
Data Availability Statement
Conflicts of Interest
Abbreviations
CF | Collaborative filtering |
CBF | Content-based filtering |
CTR | Click Through Rate |
MF | Matrix factorization |
SVD | Singular value decomposition |
SVD++ | Singular value decomposition Plus Plus |
NDCG | Normalized Discounted Cumulative Gain |
RMSE | Root mean square error |
MAE | Mean absolute error |
HR | Hit rate |
AUC | Area Under the Curve |
MAP | Mean Average Precision |
MRR | Mean Reciprocal Rank |
UI | User–item |
IMDb | Internet Movie Database |
ML | MovieLens |
NLP | Natural Language Processing |
DL | Deep learning |
SGD | Stochastic gradient descent |
ADAM | Adaptive moment estimation |
GPU | Graphics Processing Unit |
SOTA | State of the Art |
I/O | Input/Output |
OOV | Out of Vocabulary (can apply in cold-start contexts) |
RQ | Research Question |
R&D | Research and Development |
Appendix A
Hyperparameter (DNN) | Grid
---|---
Learning Rate (lr) | {0.001, 0.002, 0.005} |
Batch Size | {256, 512, 1024} |
Epochs | Max 100, early stopping (patience = 5–10) |
Optimizer (Adam) β1 | 0.9 |
Optimizer (Adam) β2 | 0.999 |
Weight Decay (L2) | {1 × 10⁻⁴, 1 × 10⁻³}
Dropout Rate | {0.1, 0.2, 0.3} |
Hidden Layers | {3, 5, 7} |
Neurons per Layer | {128, 256, 512} |
Hyperparameter (XGBoost) | Grid
---|---
Learning Rate (eta) | {0.025, 0.05} |
Number of Trees (n_estimators) | {250, 500} |
Max Depth (max_depth) | {6, 9} |
Subsample (subsample) | {0.6, 0.8, 1.0} |
Column Sampling (colsample_bytree) | {0.25, 0.5, 0.75} |
L1 Regularization (alpha) | {1 × 10⁻³}
L2 Regularization (lambda) | {1 × 10⁻³}
Min Child Weight (min_child_weight) | {3, 5} |
Gamma (gamma) | {0.1, 0.2} |
Early Stopping Patience | {10, 20} |
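Grids like the ones above can be swept with a plain Cartesian product; the toy scoring function in the usage below stands in for actual model training and validation (lower scores win, as with RMSE). This sketch is generic and not tied to the paper's tuning code.

```python
from itertools import product

def grid_search(grid, score_fn):
    """Evaluate every combination in `grid` (name -> list of candidate values)
    and return the best-scoring configuration, where lower scores are better
    (e.g., validation RMSE)."""
    names = sorted(grid)
    best_cfg, best_score = None, float("inf")
    for values in product(*(grid[n] for n in names)):
        cfg = dict(zip(names, values))
        s = score_fn(cfg)
        if s < best_score:
            best_cfg, best_score = cfg, s
    return best_cfg, best_score
```

Usage with a toy objective: `grid_search({"lr": [0.001, 0.002], "batch": [256, 512]}, lambda c: c["lr"] * c["batch"])` returns the lowest-product configuration. In practice, Bayesian optimization [44,45] replaces the exhaustive sweep for larger grids.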
References
- Lu, J.; Wu, D.; Mao, M.; Wang, W.; Zhang, G. Recommender system application developments: A survey. Decis. Support Syst. 2015, 74, 12–32.
- Ko, H.; Lee, S.; Park, Y.; Choi, A. A survey of recommendation systems: Recommendation models, techniques, and application fields. Electronics 2022, 11, 141.
- Çano, E.; Morisio, M. Hybrid recommender systems: A systematic literature review. Intell. Data Anal. 2017, 21, 1487–1524.
- Al-Rossais, N.A. Improving cold start stereotype-based recommendation using deep learning. IEEE Access 2023, 11, 145781–145791.
- Afsar, M.M.; Crump, T.; Far, B. Reinforcement learning based recommender systems: A survey. ACM Comput. Surv. 2023, 55, 1–38.
- Panda, D.K.; Ray, S. Approaches and algorithms to mitigate cold start problems in recommender systems: A systematic literature review. J. Intell. Inf. Syst. 2022, 59, 341–366.
- AlRossais, N.; Kudenko, D.; Yuan, T. Improving cold-start recommendations using item-based stereotypes. User Model. User-Adapt. Interact. 2021, 31, 867–905.
- Anand, A.; Johri, P.; Banerji, A.; Gaur, N. Product Based Recommendation System on Amazon Data. Int. J. Creat. Res. Thoughts (IJCRT) 2020.
- Zhu, Y.; Xie, R.; Zhuang, F.; Ge, K.; Sun, Y.; Zhang, X.; Lin, L.; Cao, J. Learning to warm up cold item embeddings for cold-start recommendation with meta scaling and shifting networks. In Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval, Virtual, 11–15 July 2021; ACM: New York, NY, USA, 2021; pp. 1167–1176.
- Chen, H.; Wang, Z.; Huang, F.; Huang, X.; Xu, Y.; Lin, Y.; He, P.; Li, Z. Generative adversarial framework for cold-start item recommendation. In Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval, Madrid, Spain, 11–15 July 2022; ACM: New York, NY, USA, 2022; pp. 2565–2571.
- Yuan, H.; Hernandez, A.A. User cold start problem in recommendation systems: A systematic review. IEEE Access 2023, 11, 136958–136977.
- Kodiyan, A.A. An Overview of Ethical Issues in Using AI Systems in Hiring with a Case Study of Amazon's AI Based Hiring Tool. Res. Prepr. 2019, 12, 1–19.
- Zhu, Z.; Kim, J.; Nguyen, T.; Fenton, A.; Caverlee, J. Fairness among new items in cold start recommender systems. In Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2021), Virtual, 11–15 July 2021; ACM: New York, NY, USA, 2021; pp. 767–776.
- Bushra, A.; Awajan, A.; Fraihat, S. Survey on the objectives of recommender systems: Measures, solutions, evaluation methodology, and new perspectives. ACM Comput. Surv. 2023, 55, 93.
- Ji, Y.; Sun, A.; Zhang, J.; Li, C. A critical study on data leakage in recommender system offline evaluation. ACM Trans. Inf. Syst. 2023, 41, 1–27.
- Wu, L.; He, X.; Wang, X.; Zhang, K.; Wang, M. A survey on accuracy-oriented neural recommendation: From collaborative filtering to information-rich recommendation. IEEE Trans. Knowl. Data Eng. 2022, 35, 4425–4445.
- Panteli, A.; Boutsinas, B. Addressing the cold-start problem in recommender systems based on frequent patterns. Algorithms 2023, 16, 182.
- Patro, S.G.K.; Mishra, B.K.; Panda, S.K.; Kumar, R.; Long, H.V.; Taniar, D. Cold start aware hybrid recommender system approach for e-commerce users. Soft Comput. 2023, 27, 2071–2091.
- Xu, Y.; Wang, E.; Yang, Y.; Xiong, H. GS-RS: A generative approach for alleviating cold start and filter bubbles in recommender systems. IEEE Trans. Knowl. Data Eng. 2023, 36, 668–681.
- Pan, F.; Li, S.; Ao, X.; Tang, P.; He, Q. Warm up cold-start advertisements: Improving CTR predictions via learning to learn ID embeddings. In Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval, Paris, France, 21–25 July 2019; ACM: New York, NY, USA, 2019; pp. 695–704.
- Vartak, M.; Thiagarajan, A.; Miranda, C.; Bratman, J.; Larochelle, H. A meta-learning perspective on cold-start recommendations for items. In NeurIPS; Curran Associates Inc.: Red Hook, NY, USA, 2017; pp. 6904–6914.
- Chen, H.; Zhu, C.; Tang, R.; Zhang, W.; He, X.; Yu, Y. Large-scale interactive recommendation with tree-structured reinforcement learning. IEEE Trans. Knowl. Data Eng. 2023, 35, 4018–4032.
- Behera, G.; Nain, N. Collaborative filtering with temporal features for movie recommendation system. Procedia Comput. Sci. 2023, 218, 1366–1373.
- Rendle, S.; Freudenthaler, C.; Schmidt-Thieme, L. Factorizing personalized Markov chains for next-basket recommendation. In Proceedings of the 19th International Conference on World Wide Web, Raleigh, NC, USA, 26–30 April 2010; ACM: New York, NY, USA, 2010; pp. 811–820.
- Wen, W.; Wang, W.; Hao, Z.; Cai, R. Factorizing time-heterogeneous Markov transition for temporal recommendation. Neural Netw. 2023, 159, 84–96.
- He, M.; Lin, J.; Luo, J.; Pan, W.; Ming, Z. FLAG: A feedback-aware local and global model for heterogeneous sequential recommendation. ACM Trans. Intell. Syst. Technol. 2023, 14, 1–22.
- Gao, C.; He, X.; Gan, D.; Chen, X.; Feng, F.; Li, Y.; Chua, T.S.; Yao, L.; Song, Y.; Jin, D. Learning to recommend with multiple cascading behaviors. IEEE Trans. Knowl. Data Eng. 2021, 33, 2588–2601.
- Chang, J.; Gao, C.; Zheng, Y.; Hui, Y.; Niu, Y.; Song, Y.; Jin, D.; Li, Y. Sequential recommendation with graph neural networks. In Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval, Virtual, 11–15 July 2021; ACM: New York, NY, USA, 2021; pp. 378–387.
- Ahmed, A.; Salim, N. Markov Chain Recommendation System (MCRS). Int. J. Novel Res. Comput. Sci. Softw. Eng. 2016, 3, 11–26.
- Quadrana, M.; Cremonesi, P.; Jannach, D. Sequence-aware recommender systems. ACM Comput. Surv. 2019, 51, 1–36.
- AlRossais, N. Warming up from extreme cold start using stereotypes with dynamic user and item features. In Proceedings of the ACM International Conference on Recommender Systems (RecSys 2023), KaRS, Singapore, 18–22 September 2023.
- AlRossais, N.A.; Kudenko, D. Isynchronizer: A tool for extracting, integration and analysis of MovieLens and IMDb datasets. In Proceedings of the Adjunct Publication of the 26th Conference on User Modeling, Adaptation and Personalization, Singapore, 8–11 July 2018; ACM: New York, NY, USA, 2018; pp. 103–107.
- Li, P.; Chen, R.; Liu, Q.; Xu, J.; Zheng, B. Transform cold-start users into warm via fused behaviors in large-scale recommendation. In Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval, Madrid, Spain, 11–15 July 2022; ACM: New York, NY, USA, 2022; pp. 2013–2017.
- Ferrari Dacrema, M.; Boglio, S.; Cremonesi, P.; Jannach, D. A troubling analysis of reproducibility and progress in recommender systems research. ACM Trans. Inf. Syst. 2021, 39, 1–49.
- Jin, D.; Wang, L.; Zhang, H.; Zheng, Y.; Ding, W.; Xia, F.; Pan, S. A survey on fairness-aware recommender systems. Inf. Fusion 2023, 100, 101906.
- Zangerle, E.; Bauer, C. Evaluating recommender systems: Survey and framework. ACM Comput. Surv. 2023, 55, 1–38.
- Deldjoo, Y.; Jannach, D.; Bellogin, A.; Difonzo, A.; Zanzonelli, D. Fairness in recommender systems: Research landscape and future directions. User Model. User-Adapt. Interact. 2024, 34, 59–108.
- Jeunen, O. Revisiting offline evaluation for implicit-feedback recommender systems. In Proceedings of the 13th ACM Conference on Recommender Systems, Copenhagen, Denmark, 16–20 September 2019; ACM: New York, NY, USA, 2019; pp. 596–600.
- Chen, J.; Dong, H.; Wang, X.; Feng, F.; Wang, M.; He, X. Bias and debias in recommender system: A survey and future directions. ACM Trans. Inf. Syst. 2023, 41, 67.
- Levin, D.A.; Peres, Y.; Wilmer, E.L. Markov Chains and Mixing Times; AMS: Providence, RI, USA, 2009.
- Cao, J.; Hu, H.; Luo, T.; Wang, J.; Huang, M.; Wang, K.; Wu, Z.; Zhang, X. Distributed design and implementation of SVD++ algorithm for e-commerce personalized recommender system. In Proceedings of the Embedded System Technology: 13th National Conference, ESTC 2015, Beijing, China, 10–11 October 2015; Revised Selected Papers 13. Springer: Singapore, 2016; pp. 30–44.
- Frolov, E.; Oseledets, I. HybridSVD: When collaborative information is not enough. In Proceedings of the 13th ACM Conference on Recommender Systems, Copenhagen, Denmark, 16–20 September 2019; ACM: New York, NY, USA, 2019; pp. 331–339.
- Kingma, D.P.; Ba, J. Adam: A method for stochastic optimization. In Proceedings of the International Conference on Learning Representations, San Diego, CA, USA, 7–9 May 2015.
- Frazier, P.I. A tutorial on Bayesian optimization. arXiv 2018, arXiv:1807.02811.
- Balandat, M.; Karrer, B.; Jiang, D.; Daulton, S.; Letham, B.; Wilson, A.G.; Bakshy, E. BoTorch: A framework for efficient Monte-Carlo Bayesian optimization. Adv. Neural Inf. Process. Syst. 2020, 33, 21524–21538.
- Rana, A.; Bridge, D. Explanations that are intrinsic to recommendations. In Proceedings of the 26th Conference on User Modeling, Adaptation and Personalization, Singapore, 8–11 July 2018; ACM: New York, NY, USA, 2018; pp. 187–195.
- Barkan, O.; Koenigstein, N.; Yogev, E.; Katz, O. CB2CF: A neural multiview content-to-collaborative filtering model for completely cold item recommendations. In Proceedings of the 13th ACM Conference on Recommender Systems, Copenhagen, Denmark, 16–20 September 2019; ACM: New York, NY, USA, 2019; pp. 228–236.
- He, R.; McAuley, J. Ups and downs: Modeling the visual evolution of fashion trends with one-class collaborative filtering. In Proceedings of the 25th International Conference on World Wide Web, Montréal, QC, Canada, 11–15 April 2016; pp. 507–517.
- Wang, Y.; Wang, Y.; Ma, W.; Zhang, M.; Liu, Y.; Ma, S. A survey on the fairness of recommender systems. ACM Trans. Inf. Syst. 2023, 41, 1–43.
- Ge, Y.; Zhao, X.; Yu, L.; Paul, S.; Hu, D.; Hsieh, C.C.; Zhang, Y. Toward Pareto efficient fairness-utility trade-off in recommendation through reinforcement learning. In Proceedings of the Fifteenth ACM International Conference on Web Search and Data Mining, Virtual, 21–25 February 2022; ACM: New York, NY, USA, 2022; pp. 316–324.
- Rahmani, H.A.; Deldjoo, Y.; Tourani, A.; Naghiaei, M. The unfairness of active users and popularity bias in point-of-interest recommendation. In Proceedings of the International Workshop on Algorithmic Bias in Search and Recommendation, Stavanger, Norway, 10 April 2022; Springer: Berlin/Heidelberg, Germany, 2022; pp. 56–68.
Studies | Warm-Up Transition Addressed/Mentioned | Warm-Up Performance Addressed | Data Leakage or Look-Ahead Bias Addressed in Training | Time Event Consistency Addressed in Experiments
---|---|---|---|---
Literature reviews on RSs [1,2,3,5,16], over 200 refs | ✕ | ✕ | ✕ | ✕ |
Literature reviews on cold start [6,11] | ✕ | ✕ | ✕ | ✕ |
Cold start [17,18,19], + 270 references in [6,11] | ✕ | ✕ | ✕ | ✕ |
Cold-start studies [9,20,21,22,23] | √ | √ | ✕ | ✕ |
Cold-start studies [4,7,15,24,25,26,27,28,29,30] | ✕ | ✕ | √ | √ |
Cold-start studies [31] | √ | √ | √ | √ |
Symbol | Description
---|---
 | Functional form of the RS and its specification for the new-user and new-item problems.
 | Implicit or explicit rating of user u for item i.
 | Representation vector of user u and item i via their metadata coordinates.
 | Encoded representation of user u and item i, for instance, via stereotypes.
 | Characteristic biases of user u and item i.
μ_enc (λ_enc) | Average bias exhibited by the users (items) of an encoding neighbor.
K | Total number of observed interactions of a user (with an item) in the system.
k | kth interaction of a user (with an item) in the system.
 | Dynamic model for the user (item) bias at the kth interaction.
 | Average user bias recorded for user u during the first k interactions.
 | Average item bias recorded for item i during the first k interactions.
N_u (N_i) | Number of interactions that are required to characterize a user (item).
γ | Dynamic weight decay (parameter computed during optimization).
 | Probability of interacting with (consuming) item i after interacting with (consuming) item j.
 | State vector at j for a user, representing recent interactions with the catalog.
E | Rate transition matrix of the expected rating of item i after rating item j.
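In the paper, the transition matrix E is fitted by residual minimization; a simplified stand-in for that idea is a first-order Markov estimate over per-user, time-sorted item sequences. The function name and the add-alpha smoothing below are illustrative assumptions, not the paper's optimization procedure.

```python
from collections import defaultdict

def estimate_transitions(sequences, items, alpha=1.0):
    """Estimate P(next item = b | current item = a) from time-sorted item
    sequences, one sequence per user, with additive (add-alpha) smoothing so
    unseen transitions keep nonzero probability."""
    counts = defaultdict(lambda: defaultdict(float))
    for seq in sequences:
        # Count consecutive pairs (a followed by b) within each user's history.
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1.0
    n = len(items)
    probs = {}
    for a in items:
        total = sum(counts[a].values()) + alpha * n
        probs[a] = {b: (counts[a][b] + alpha) / total for b in items}
    return probs
```

Each row of the resulting matrix sums to one, so it can be applied to a user's recent-interaction state vector to score candidate next items.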
Model | Training Base | Training Model (4, 5) | Training Model (6) | Inference
---|---|---|---|---
DNN (L layers, N nodes) | O(I(U+C)E((U+C)E + LN²)) | O(ILN²E) | O(ILN²E²) | O(UCELN²)
XGBoost (T trees, depth D) | O(IT(U+C)E log[(U+C)E + 2^D]) | O(ITDE) | O(ITDE²) | O(UCETD)
SVD | O(I((U+C)E)²) | O(IE) | O(IE²) | O(UCE)
Domain | Use Case | Suggested Model | Problems Targeted | Key Outcome |
---|---|---|---|---|
E-commerce | Product recommendations for new users | RS_base + Models (4), (6) | New-user attrition. E.g., Amazon-reported attrition rates | Better warm-up characterization and more sales |
E-commerce | Cold-launch of new products | RS_base + Models (5), (6) | Attrition rate of new sellers. E.g., Etsy ghost sellers. | Improved exposure and early item ranking fairness |
Streaming | New-user onboarding for media platforms | RS_base + Models (4), (6) | Improved satisfaction of new users. E.g., Netflix. | Early engagement without explicit preferences |
News platforms and research feeds | Article recommendations for new readers | RS_base + Models (4), (6) | Low CTR. | Timely personalization for readers
Social media | Suggestions for new users or content | RS_base + Models (4), (5), (6) | Low user engagement and low content exposure. | Onboarding of users, warm-up on their new content |
Dataset | No. of Users | No. of Items | No. of Ratings | No. of User Features | No. of Item Features |
---|---|---|---|---|---|
MovieLens + IMDb | 6040 | 3827 | 1,000,290 | 5 | 35
Amazon Sports & Outdoor | 478,000 | 532,197 | 3,268,700 | 3 | 9 |
Dataset | Model | RMSE | MAE | NDCG_5 | NDCG_10 | SER_10 |
---|---|---|---|---|---|---|
W = 5 | | | | | |
MLens + IMDb | Stereo | 0.8876 | 0.613 | 0.5093 | 0.5103 | 0.4719
MLens + IMDb | Fully Warm | 0.8795 | 0.581 | 0.5274 | 0.5324 | 0.5506
Amazon S & O | Stereo | 0.7730 | 0.524 | 0.3805 | 0.3896 | 0.2317
Amazon S & O | Fully Warm | 0.8044 | 0.501 | 0.4031 | 0.4099 | 0.2654
W = 10 | | | | | |
MLens + IMDb | Stereo | 0.8853 | 0.611 | 0.4995 | 0.5009 | 0.4785
MLens + IMDb | Fully Warm | 0.8737 | 0.577 | 0.5266 | 0.5313 | 0.5684
Amazon S & O | Stereo | 0.7693 | 0.531 | 0.3814 | 0.3830 | 0.2388
Amazon S & O | Fully Warm | 0.7995 | 0.509 | 0.4049 | 0.4103 | 0.2697
W = 15 | | | | | |
MLens + IMDb | Stereo | 0.8817 | 0.608 | 0.4915 | 0.4913 | 0.4813
MLens + IMDb | Fully Warm | 0.8671 | 0.571 | 0.5269 | 0.5302 | 0.5544
Amazon S & O | Stereo | 0.7615 | 0.529 | 0.3895 | 0.3855 | 0.2395
Amazon S & O | Fully Warm | 0.7930 | 0.499 | 0.4105 | 0.4096 | 0.2702
© 2025 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
AlRossais, N. Novel Models for the Warm-Up Phase of Recommendation Systems. Computers 2025, 14, 302. https://doi.org/10.3390/computers14080302