# Deep Matrix Factorization Approach for Collaborative Filtering Recommender Systems


## Abstract


## 1. Introduction

## 2. Materials and Methods

Algorithm 1: Pseudo-code of a recursive implementation of DeepMF.
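
To illustrate the recursive idea, the following is a minimal sketch, not the authors' CF4J implementation: the single-level `factorize` helper, the SGD initialization, and the epoch count are assumptions. Each level factorizes a matrix into user and item factors, then recursively factorizes those factor matrices with the next level's hyper-parameters K (factors), Γ (learning rates), and Λ (regularizations):

```python
import numpy as np

def factorize(R, k, gamma, lamb, epochs=50):
    """One plain MF level: R (m x n) ~= U (m x k) @ V (k x n), fit by SGD
    on the known entries. gamma is the learning rate, lamb the regularization.
    (Hypothetical helper, standing in for any single-level MF step.)"""
    rng = np.random.default_rng(0)
    U = rng.normal(scale=0.1, size=(R.shape[0], k))
    V = rng.normal(scale=0.1, size=(k, R.shape[1]))
    rows, cols = np.nonzero(~np.isnan(R))  # NaN marks a missing rating
    for _ in range(epochs):
        for i, j in zip(rows, cols):
            ui = U[i].copy()
            err = R[i, j] - ui @ V[:, j]
            U[i] += gamma * (err * V[:, j] - lamb * ui)
            V[:, j] += gamma * (err * ui - lamb * V[:, j])
    return U, V

def deep_mf(R, K, Gamma, Lambda):
    """Recursive DeepMF sketch: factorize R, then recursively factorize the
    resulting factor matrices while hyper-parameters remain at deeper levels."""
    U, V = factorize(R, K[0], Gamma[0], Lambda[0])
    node = {"U": U, "V": V}
    if len(K) > 1:
        node["user_branch"] = deep_mf(U, K[1:], Gamma[1:], Lambda[1:])
        node["item_branch"] = deep_mf(V, K[1:], Gamma[1:], Lambda[1:])
    return node
```

The lists K, Γ, Λ correspond, level by level, to the hyper-parameter vectors reported in the results tables.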

Algorithm 2: Pseudo-code of a parallel recursive implementation of DeepMF using an ALS approach.
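
The ALS step can be sketched as follows (again an illustrative reconstruction, not the paper's pseudo-code): with the item factors fixed, each user's factor vector is the solution of an independent regularized least-squares problem, so all rows can be solved in parallel, and the item update is symmetric.

```python
import numpy as np

def als_step(R, V, lamb):
    """Solve for U with V fixed. Each row of U is an independent ridge
    regression over that user's observed ratings, hence trivially parallel."""
    k = V.shape[0]
    U = np.zeros((R.shape[0], k))
    for i in range(R.shape[0]):            # rows are independent of each other
        mask = ~np.isnan(R[i])             # observed ratings of user i
        Vm = V[:, mask]                    # (k, number of observed items)
        A = Vm @ Vm.T + lamb * np.eye(k)   # regularized normal equations
        U[i] = np.linalg.solve(A, Vm @ R[i, mask])
    return U

def als(R, k, lamb, iters=20, seed=0):
    """Alternate the closed-form updates for user and item factors."""
    V = np.random.default_rng(seed).normal(scale=0.1, size=(k, R.shape[1]))
    for _ in range(iters):
        U = als_step(R, V, lamb)
        V = als_step(R.T, U.T, lamb).T     # same solver with roles swapped
    return U, V
```

In a parallel implementation, the per-row loop inside `als_step` is the part distributed across workers.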

## 3. Results

## 4. Discussion

## Author Contributions

## Funding

## Conflicts of Interest

## References


**Figure 2.** Quality of the recommendations measured by precision and recall; higher is better. Blue numbers over the lines indicate the size of the recommendation list for each value.
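
For reference, precision and recall over a top-N recommendation list are typically computed as below; this is a generic sketch, and the relevance threshold and the paper's exact evaluation protocol are assumptions here.

```python
def precision_recall_at_n(predictions, test_ratings, n, threshold):
    """predictions and test_ratings map item -> score for a single user.
    A test item counts as relevant when its rating reaches the threshold."""
    recommended = sorted(predictions, key=predictions.get, reverse=True)[:n]
    relevant = {item for item, r in test_ratings.items() if r >= threshold}
    hits = sum(1 for item in recommended if item in relevant)
    precision = hits / n
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall
```

Averaging these per-user values over the test set yields curves like those in the figure, with one point per list size N.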

**Figure 3.** Average value of the predictions provided by each factorization according to its depth. The top 5 hyper-parameter combinations for each dataset included in Table 3 are shown.

**Table 1.** Datasets used in the experiments.

Dataset | Number of Users | Number of Items | Number of Ratings | Number of Test Ratings | Possible Scores
---|---|---|---|---|---
MovieLens 100K | 943 | 1682 | 92,026 | 7974 | 1 to 5 stars
MovieLens 1M | 6040 | 3706 | 911,031 | 89,178 | 1 to 5 stars
FilmTrust | 1508 | 2071 | 32,675 | 2819 | 0.5 to 4.0 with half increments
MyAnimeList | 69,600 | 9927 | 5,788,207 | 549,027 | 1 to 10

**Table 2.** Hyper-parameters of the baseline models for each dataset.

Dataset | PMF | NMF | SVD++
---|---|---|---
MovieLens 100K | $k=2$, $\gamma=0.01$, $\lambda=0.025$ | $k=2$ | $k=2$, $\gamma=0.0014$, $\lambda=0.08$
MovieLens 1M | $k=8$, $\gamma=0.01$, $\lambda=0.045$ | $k=2$ | $k=4$, $\gamma=0.0014$, $\lambda=0.05$
FilmTrust | $k=4$, $\gamma=0.015$, $\lambda=0.1$ | $k=2$ | $k=2$, $\gamma=0.0014$, $\lambda=0.1$
MyAnimeList | $k=10$, $\gamma=0.005$, $\lambda=0.085$ | $k=2$ | $k=4$, $\gamma=0.0014$, $\lambda=0.02$

**Table 3.** Top 5 hyper-parameter combinations of DeepMF for each dataset, ranked by MAE.

Dataset | Rank | Hyper-Parameters | MAE
---|---|---|---
MovieLens 100K | 1 | K = [3, 3, 3, 3]; $\Gamma$ = [0.01, 0.1, 0.01, 0.1]; $\Lambda$ = [0.1, 0.01, 0.1, 0.01] | 0.75017
 | 2 | K = [3, 3, 3, 9]; $\Gamma$ = [0.01, 0.1, 0.01, 0.1]; $\Lambda$ = [0.1, 0.01, 0.1, 0.01] | 0.75077
 | 3 | K = [3, 6, 6, 3]; $\Gamma$ = [0.01, 0.1, 0.01, 0.1]; $\Lambda$ = [0.1, 0.01, 0.1, 0.01] | 0.75079
 | 4 | K = [3, 3, 3]; $\Gamma$ = [0.01, 0.1, 0.01]; $\Lambda$ = [0.1, 0.01, 0.1] | 0.75092
 | 5 | K = [3, 3, 9, 9]; $\Gamma$ = [0.01, 0.1, 0.01, 0.1]; $\Lambda$ = [0.1, 0.01, 0.1, 0.01] | 0.75105
MovieLens 1M | 1 | K = [9, 3, 3]; $\Gamma$ = [0.01, 0.01, 0.01]; $\Lambda$ = [0.1, 0.01, 0.1] | 0.70943
 | 2 | K = [9, 3, 9, 3]; $\Gamma$ = [0.01, 0.01, 0.01, 0.1]; $\Lambda$ = [0.1, 0.01, 0.1, 0.01] | 0.70948
 | 3 | K = [9, 3, 9]; $\Gamma$ = [0.01, 0.01, 0.01]; $\Lambda$ = [0.1, 0.01, 0.1] | 0.70949
 | 4 | K = [9, 3, 3, 9]; $\Gamma$ = [0.01, 0.01, 0.01, 0.1]; $\Lambda$ = [0.1, 0.01, 0.1, 0.01] | 0.70956
 | 5 | K = [9, 3, 3, 3]; $\Gamma$ = [0.01, 0.01, 0.01, 0.1]; $\Lambda$ = [0.1, 0.01, 0.1, 0.01] | 0.70959
FilmTrust | 1 | K = [6, 6, 3]; $\Gamma$ = [0.01, 0.1, 0.01]; $\Lambda$ = [0.1, 0.01, 0.01] | 0.64936
 | 2 | K = [6, 6, 3]; $\Gamma$ = [0.01, 0.1, 0.01]; $\Lambda$ = [0.1, 0.01, 0.1] | 0.64987
 | 3 | K = [6, 3, 6]; $\Gamma$ = [0.01, 0.1, 0.01]; $\Lambda$ = [0.1, 0.01, 0.1] | 0.65072
 | 4 | K = [9, 3, 3]; $\Gamma$ = [0.01, 0.1, 0.01]; $\Lambda$ = [0.1, 0.01, 0.1] | 0.65088
 | 5 | K = [6, 6]; $\Gamma$ = [0.01, 0.01]; $\Lambda$ = [0.1, 0.01] | 0.65102
MyAnimeList | 1 | K = [9, 6, 9, 6]; $\Gamma$ = [0.01, 0.1, 0.01, 0.1]; $\Lambda$ = [0.1, 0.01, 0.1, 0.01] | 0.97447
 | 2 | K = [9, 9, 6, 9]; $\Gamma$ = [0.01, 0.1, 0.01, 0.1]; $\Lambda$ = [0.1, 0.01, 0.1, 0.01] | 0.97452
 | 3 | K = [9, 6, 6, 6]; $\Gamma$ = [0.01, 0.1, 0.01, 0.1]; $\Lambda$ = [0.1, 0.01, 0.1, 0.01] | 0.97454
 | 4 | K = [9, 6, 6, 9]; $\Gamma$ = [0.01, 0.1, 0.01, 0.1]; $\Lambda$ = [0.1, 0.01, 0.1, 0.01] | 0.97454
 | 5 | K = [9, 9, 9, 6]; $\Gamma$ = [0.01, 0.1, 0.01, 0.1]; $\Lambda$ = [0.1, 0.01, 0.1, 0.01] | 0.97458

**Table 4.** Quality of the predictions measured by MAE; lower is better. The best recommendation model for each dataset is shown in bold.

Dataset | DeepMF | PMF | NMF | SVD++
---|---|---|---|---
MovieLens 100K | **0.75017** | 0.76720 | 0.79138 | 0.78170
MovieLens 1M | **0.70943** | 0.71868 | 0.75166 | 0.74285
FilmTrust | **0.64936** | 0.84659 | 0.82911 | 0.65748
MyAnimeList | 0.97447 | 1.10006 | 1.12025 | **0.95179**
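
The MAE reported in Table 4 is simply the mean absolute difference between the test ratings and the model's predictions; a minimal reference implementation:

```python
def mae(true_ratings, predictions):
    """Mean Absolute Error over paired test ratings and predictions."""
    return sum(abs(t - p) for t, p in zip(true_ratings, predictions)) / len(true_ratings)
```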

© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Lara-Cabrera, R.; González-Prieto, Á.; Ortega, F.
Deep Matrix Factorization Approach for Collaborative Filtering Recommender Systems. *Appl. Sci.* **2020**, *10*, 4926.
https://doi.org/10.3390/app10144926
