An Improved Deep Belief Network Prediction Model Based on Knowledge Transfer
Abstract
1. Introduction
2. Related Works
2.1. DBN
2.2. Unsupervised Learning Based on RBM
2.3. Supervised Learning Based on BP Algorithm
3. Methods
3.1. DBN Structure Construction Method Based on Knowledge Transfer
3.2. PLSR-Based Fine-Tuning Algorithm
4. Experiments
4.1. Datasets
4.2. Popularity Prediction Based on Time Series
4.2.1. Validity Check
4.2.2. Predictive Performance Evaluation
4.3. Rating Classification Prediction Based on User Data
4.3.1. Validity Check
4.3.2. Predictive Performance Evaluation
5. Conclusions and Future Work
Author Contributions
Funding
Conflicts of Interest
References
Datasets: Last.fm-1K (LFM-1k) and MovieLens-20M (ML-20M).

| Network Depth | Rerror (LFM-1k) | Correct Rate (LFM-1k) | Running Time, s (LFM-1k) | Rerror (ML-20M) | Correct Rate (ML-20M) | Running Time, s (ML-20M) |
|---|---|---|---|---|---|---|
| 2 | 1.8867 | 80.6% | 21.65 | 1.9542 | 81.2% | 25.42 |
| 3 | 0.5346 | 84.3% | 23.96 | 1.2914 | 84.5% | 26.84 |
| 4 | 0.5123 | 89.6% | 27.56 | 0.6722 | 86.8% | 29.51 |
| 5 | 0.4554 | 86.7% | 30.67 | 0.6547 | 89.5% | 31.44 |
| 6 | 0.3935 | 81.9% | 34.53 | 0.5652 | 87.2% | 35.72 |
| Methods | RMSE (LFM-1k) | Training Time, s (LFM-1k) | RMSE (ML-20M) | Training Time, s (ML-20M) |
|---|---|---|---|---|
| SCNN | 0.3654 | 42.52 | 0.3724 | 44.25 |
| NAM | 0.3745 | - | 0.3958 | - |
| DBN | 0.6525 | 41.23 | 0.6724 | 43.68 |
| CDBN | 0.5535 | 44.95 | 0.5620 | 45.33 |
| ARIMA | 0.4247 | - | 0.4352 | - |
| SVR | 0.4967 | 39.65 | 0.4822 | 38.44 |
| IDBN | 0.3425 | 31.56 | 0.3614 | 30.71 |
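The RMSE figures in the table above follow the standard root-mean-square-error definition over predicted and observed values; a minimal sketch (the helper name `rmse` is our own, not from the paper):

```python
import math

def rmse(predictions, targets):
    """Root-mean-square error: sqrt of the mean squared difference
    between predicted and observed values."""
    n = len(predictions)
    return math.sqrt(sum((p - t) ** 2 for p, t in zip(predictions, targets)) / n)

# Example: predictions [1.0, 2.0] vs. targets [1.0, 0.0] -> sqrt(4/2) ~ 1.414
```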
| Rating | Coding | Label | Rating | Coding | Label |
|---|---|---|---|---|---|
| 0.5 | 0000000001 | 0.5 sort | 3 | 0000100000 | 3 sort |
| 1 | 0000000010 | 1 sort | 3.5 | 0001000000 | 3.5 sort |
| 1.5 | 0000000100 | 1.5 sort | 4 | 0010000000 | 4 sort |
| 2 | 0000001000 | 2 sort | 4.5 | 0100000000 | 4.5 sort |
| 2.5 | 0000010000 | 2.5 sort | 5 | 1000000000 | 5 sort |
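The coding scheme above maps each half-star rating in {0.5, 1.0, …, 5.0} to a 10-bit one-hot vector, with 0.5 occupying the rightmost bit and 5.0 the leftmost. A sketch of that mapping (the function name `encode_rating` is illustrative, not from the paper):

```python
def encode_rating(rating):
    """One-hot encode a rating in {0.5, 1.0, ..., 5.0} as a 10-element vector,
    matching the table: 0.5 -> rightmost position, 5.0 -> leftmost position."""
    idx = round(rating / 0.5) - 1  # 0.5 -> 0, 1.0 -> 1, ..., 5.0 -> 9
    vec = [0] * 10
    vec[9 - idx] = 1               # higher ratings set bits further left
    return vec

# e.g. encode_rating(3) -> [0, 0, 0, 0, 1, 0, 0, 0, 0, 0]  ("0000100000")
```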
| User Number | Methods | Rating 0–100 | Rating 100–200 | Rating 200–300 |
|---|---|---|---|---|
| 1w | DBN | 0.7635 | 0.7542 | 0.7354 |
| 1w | SVD | 0.8042 | 0.7824 | 0.7642 |
| 1w | IDBN | 0.7454 | 0.7334 | 0.7132 |
| 2w | DBN | 0.7653 | 0.7435 | 0.7311 |
| 2w | SVD | 0.7834 | 0.7735 | 0.7565 |
| 2w | IDBN | 0.7135 | 0.7049 | 0.7045 |
| 3w | DBN | 0.7632 | 0.7443 | 0.7256 |
| 3w | SVD | 0.7737 | 0.7642 | 0.7586 |
| 3w | IDBN | 0.7143 | 0.7094 | 0.7022 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Share and Cite
Zhang, Y.; Liu, F. An Improved Deep Belief Network Prediction Model Based on Knowledge Transfer. Future Internet 2020, 12, 188. https://doi.org/10.3390/fi12110188