Network Embedding via a Bi-Mode and Deep Neural Network Model
Abstract
1. Introduction
- We propose a novel network embedding model, BimoNet, which captures relations’ semantic information through bi-mode embeddings and incorporates a deep neural network to capture relations’ structural information (a minimal sketch is given after this list);
- To the best of our knowledge, we are the first to fully mine both the semantic and structural information of edges in a network, which offers a new perspective on network representation; and
- The new model is evaluated against existing models on real-life benchmark datasets and tasks, and the experimental results on relation extraction verify that BimoNet consistently outperforms state-of-the-art alternatives.
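To make the architecture concrete, the following is only a minimal sketch, written under stated assumptions rather than taken from the paper: a translation-style semantic term built from two complementary ("bi-mode") embeddings of an edge's labels, plus a deep autoencoder that reconstructs the edge's neighborhood vector to preserve structure, with the two losses summed. All names (`BimoNetSketch`, `sem_dim`, `alpha`, etc.) are hypothetical.

```python
# Hedged sketch only: the exact BimoNet formulation is not reproduced in this excerpt.
# Assumed design: (i) two complementary ("bi-mode") embeddings of an edge's labels for
# the semantic part, and (ii) a deep autoencoder over the edge's neighborhood vector
# for the structural part; the two losses are combined with a weighting factor alpha.
import torch
import torch.nn as nn

class BimoNetSketch(nn.Module):                            # hypothetical name
    def __init__(self, n_labels, n_vertices, sem_dim=100, hid_dim=500):
        super().__init__()
        self.mode_a = nn.Embedding(n_labels, sem_dim)      # first semantic mode
        self.mode_b = nn.Embedding(n_labels, sem_dim)      # second semantic mode
        self.vertex = nn.Embedding(n_vertices, sem_dim)    # vertex embeddings
        # deep autoencoder over a dense neighborhood indicator vector of the edge
        self.encoder = nn.Sequential(nn.Linear(n_vertices, hid_dim), nn.ReLU(),
                                     nn.Linear(hid_dim, sem_dim))
        self.decoder = nn.Sequential(nn.Linear(sem_dim, hid_dim), nn.ReLU(),
                                     nn.Linear(hid_dim, n_vertices))

    def forward(self, head, tail, labels, neigh_vec, alpha=0.5):
        # semantic term: translation-style score h + r ~ t, where the edge
        # representation r merges the two modes of its (padded) label set
        rel = self.mode_a(labels).mean(dim=1) + self.mode_b(labels).mean(dim=1)
        sem_loss = ((self.vertex(head) + rel - self.vertex(tail)) ** 2).sum(1).mean()
        # structural term: reconstruct the neighborhood vector of the edge
        recon = self.decoder(self.encoder(neigh_vec))
        struct_loss = ((recon - neigh_vec) ** 2).sum(1).mean()
        return sem_loss + alpha * struct_loss
```

This combined loss is only one plausible instantiation; the ablated variants BimoNet-Bi and BimoNet-Net reported in the result tables appear to isolate the bi-mode (semantic) component and the network-structure component, respectively.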
2. Related Work
2.1. Relation Extraction in Knowledge Graphs
2.2. Deep Neural Networks
2.3. Network Embedding
3. Proposed Model
3.1. Bi-Mode Embedding
3.2. Deep Autoencoder
3.3. The Integrated Model
4. Experiments and Analysis
4.1. Datasets
4.2. Baseline Algorithms
4.3. Experimental Setup
4.4. Experimental Results and Analysis
4.5. Relation Comparison
4.6. Parameter Sensitivity
4.7. Discussion
5. Conclusions
- We intend to integrate the semantic and structural information of edges with that of vertices, so as to mine the network more thoroughly and obtain an even more powerful network embedding model; and
- Existing network embedding models do not handle newly arriving vertices and edges, yet real-world networks keep growing, so it is crucial to find a way to represent such new vertices and edges.
Author Contributions
Funding
Conflicts of Interest
References
- Liben-Nowell, D.; Kleinberg, J.M. The link prediction problem for social networks. In Proceedings of the 2003 ACM CIKM International Conference on Information and Knowledge Management, New Orleans, LA, USA, 2–8 November 2003; pp. 556–559. [Google Scholar]
- Shepitsen, A.; Gemmell, J.; Mobasher, B.; Burke, R.D. Personalized recommendation in social tagging systems using hierarchical clustering. In Proceedings of the 2008 ACM Conference on Recommender Systems (RecSys 2008), Lausanne, Switzerland, 23–25 October 2008; pp. 259–266. [Google Scholar]
- Weiss, Y.; Torralba, A.; Fergus, R. Spectral Hashing. In Proceedings of the Twenty-Second Annual Conference on Neural Information Processing Systems, Vancouver, BC, Canada, 8–11 December 2008; pp. 1753–1760. [Google Scholar]
- Perozzi, B.; Al-Rfou, R.; Skiena, S. DeepWalk: Online learning of social representations. In Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD’14), New York, NY, USA, 24–27 August 2014; pp. 701–710. [Google Scholar]
- Tu, C.; Liu, H.; Liu, Z.; Sun, M. CANE: Context-Aware Network Embedding for Relation Modeling. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (ACL), Vancouver, BC, Canada, 30 July–4 August 2017; Volume 1, pp. 1722–1731. [Google Scholar]
- Tu, C.; Zhang, Z.; Liu, Z.; Sun, M. TransNet: Translation-Based Network Representation Learning for Social Relation Extraction. In Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI), Melbourne, Australia, 19–25 August 2017; pp. 2864–2870. [Google Scholar]
- Salakhutdinov, R.; Hinton, G.E. Semantic hashing. Int. J. Approx. Reason. 2009, 50, 969–978. [Google Scholar] [CrossRef]
- Bollacker, K.D.; Evans, C.; Paritosh, P.; Sturge, T.; Taylor, J. Freebase: A collaboratively created graph database for structuring human knowledge. In Proceedings of the ACM SIGMOD International Conference on Management of Data (SIGMOD 2008), Vancouver, BC, Canada, 10–12 June 2008; pp. 1247–1250. [Google Scholar]
- Miller, G.A. WordNet: A Lexical Database for English. Commun. ACM 1995, 38, 39–41. [Google Scholar] [CrossRef]
- Suchanek, F.M.; Kasneci, G.; Weikum, G. Yago: A core of semantic knowledge. In Proceedings of the 16th International Conference on World Wide Web (WWW 2007), Banff, AB, Canada, 8–12 May 2007; pp. 697–706. [Google Scholar]
- Nickel, M.; Tresp, V.; Kriegel, H. A Three-Way Model for Collective Learning on Multi-Relational Data. In Proceedings of the 28th International Conference on Machine Learning (ICML 2011), Bellevue, WA, USA, 28 June–2 July 2011; pp. 809–816. [Google Scholar]
- Nickel, M.; Tresp, V.; Kriegel, H. Factorizing YAGO: scalable machine learning for linked data. In Proceedings of the 21st World Wide Web Conference 2012 (WWW 2012), Lyon, France, 16–22 April 2012; pp. 271–280. [Google Scholar]
- Mikolov, T.; Chen, K.; Corrado, G.; Dean, J. Efficient Estimation of Word Representations in Vector Space. arXiv, 2013; arXiv:1301.3781. [Google Scholar]
- Socher, R.; Chen, D.; Manning, C.D.; Ng, A.Y. Reasoning With Neural Tensor Networks for Knowledge Base Completion. In Proceedings of the 27th Annual Conference on Neural Information Processing Systems 2013, Lake Tahoe, NV, USA, 5–10 December 2013; pp. 926–934. [Google Scholar]
- Bordes, A.; Usunier, N.; García-Durán, A.; Weston, J.; Yakhnenko, O. Translating Embeddings for Modeling Multi-relational Data. In Proceedings of the 27th Annual Conference on Neural Information Processing Systems 2013, Lake Tahoe, NV, USA, 5–10 December 2013; pp. 2787–2795. [Google Scholar]
- Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet classification with deep convolutional neural networks. Commun. ACM 2017, 60, 84–90. [Google Scholar] [CrossRef]
- Wang, D.; Cui, P.; Ou, M.; Zhu, W. Deep Multimodal Hashing with Orthogonal Regularization. In Proceedings of the 24th International Conference on Artificial Intelligence (IJCAI’15), Buenos Aires, Argentina, 25–31 July 2015; pp. 2291–2297. [Google Scholar]
- Georgiev, K.; Nakov, P. A non-IID Framework for Collaborative Filtering with Restricted Boltzmann Machines. In Proceedings of the 30th International Conference on International Conference on Machine Learning (ICML’13), Atlanta, GA, USA, 16–21 June 2013; Volume 28, pp. 1148–1156. [Google Scholar]
- Hinton, G.E. Training Products of Experts by Minimizing Contrastive Divergence. Neural Comput. 2002, 14, 1771–1800. [Google Scholar] [CrossRef] [PubMed]
- Salakhutdinov, R.; Mnih, A.; Hinton, G.E. Restricted Boltzmann machines for collaborative filtering. In Proceedings of the 24th International Conference on Machine Learning (ICML 2007), Corvallis, OR, USA, 20–24 June 2007; pp. 791–798. [Google Scholar]
- Chang, S.; Han, W.; Tang, J.; Qi, G.; Aggarwal, C.C.; Huang, T.S. Heterogeneous Network Embedding via Deep Architectures. In Proceedings of the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Sydney, Australia, 10–13 August 2015; pp. 119–128. [Google Scholar]
- Wang, D.; Cui, P.; Zhu, W. Structural Deep Network Embedding. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, 13–17 August 2016; pp. 1225–1234. [Google Scholar]
- Roweis, S.T.; Saul, L.K. Nonlinear Dimensionality Reduction by Locally Linear Embedding. Science 2000, 290, 2323–2326. [Google Scholar] [CrossRef] [PubMed]
- Tenenbaum, J.B.; Silva, V.D.; Langford, J.C. A Global Geometric Framework for Nonlinear Dimensionality Reduction. Science 2000, 290, 2319–2323. [Google Scholar] [CrossRef] [PubMed]
- Tang, J.; Qu, M.; Wang, M.; Zhang, M.; Yan, J.; Mei, Q. LINE: Large-scale Information Network Embedding. In Proceedings of the 24th International Conference on World Wide Web (WWW 2015), Florence, Italy, 18–22 May 2015; pp. 1067–1077. [Google Scholar]
- Grover, A.; Leskovec, J. node2vec: Scalable Feature Learning for Networks. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, 13–17 August 2016; pp. 855–864. [Google Scholar]
- Yang, C.; Liu, Z.; Zhao, D.; Sun, M.; Chang, E.Y. Network Representation Learning with Rich Text Information. In Proceedings of the 24th International Conference on Artificial Intelligence (IJCAI’15), Buenos Aires, Argentina, 25–31 July 2015; pp. 2111–2117. [Google Scholar]
- Tu, C.; Zhang, W.; Liu, Z.; Sun, M. Max-Margin DeepWalk: Discriminative Learning of Network Representation. In Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence (IJCAI’16), New York, NY, USA, 9–15 July 2016; pp. 3889–3895. [Google Scholar]
- Chen, J.; Zhang, Q.; Huang, X. Incorporate Group Information to Enhance Network Embedding. In Proceedings of the 25th ACM International on Conference on Information and Knowledge Management (CIKM ’16), Indianapolis, IN, USA, 24–28 October 2016; pp. 1901–1904. [Google Scholar]
- Sun, X.; Guo, J.; Ding, X.; Liu, T. A General Framework for Content-enhanced Network Representation Learning. arXiv, 2016; arXiv:1610.02906. [Google Scholar]
- Wang, S.; Tang, J.; Aggarwal, C.C.; Chang, Y.; Liu, H. Signed Network Embedding in Social Media. In Proceedings of the 2017 SIAM International Conference on Data Mining (SDM), Houston, TX, USA, 27–29 April 2017; pp. 327–335. [Google Scholar]
- Srivastava, N.; Hinton, G.E.; Krizhevsky, A.; Sutskever, I.; Salakhutdinov, R. Dropout: A simple way to prevent neural networks from overfitting. J. Mach. Learn. Res. 2014, 15, 1929–1958. [Google Scholar]
- Kingma, D.P.; Ba, J. Adam: A Method for Stochastic Optimization. arXiv, 2014; arXiv:1412.6980. [Google Scholar]
- Tang, J.; Zhang, J.; Yao, L.; Li, J.; Zhang, L.; Su, Z. ArnetMiner: Extraction and mining of academic social networks. In Proceedings of the 14th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Las Vegas, NV, USA, 24–27 August 2008; pp. 990–998. [Google Scholar]
- Available online: https://github.com/thunlp/TransNet/blob/master/data.zip (accessed on 13 January 2018).
Statistics of the datasets.

Datasets | Arnet-S | Arnet-M | Arnet-L |
---|---|---|---|
Vertices | 187,939 | 268,037 | 945,589 |
Edges | 1,619,278 | 2,747,386 | 5,056,050 |
Train | 1,579,278 | 2,147,386 | 3,856,050 |
Test | 20,000 | 300,000 | 600,000 |
Valid | 20,000 | 300,000 | 600,000 |
Metric | hits@1 | hits@5 | hits@10 | MeanRank | hits@1 | hits@5 | hits@10 | MeanRank |
---|---|---|---|---|---|---|---|---|
DeepWalk | 13.55 | 37.26 | 50.34 | 19.78 | 18.48 | 39.57 | 52.72 | 19.02 |
LINE | 11.25 | 31.56 | 44.37 | 23.22 | 15.35 | 33.69 | 45.88 | 22.21 |
node2vec | 13.62 | 36.55 | 50.31 | 19.35 | 18.13 | 39.31 | 52.34 | 19.13 |
TransNet | 47.78 | 86.87 | 92.13 | 5.12 | 77.18 | 90.34 | 93.67 | 3.98 |
TransE | 39.17 | 78.32 | 88.73 | 5.33 | 57.58 | 84.12 | 90.01 | 4.51 |
BimoNet | 51.34 | 89.69 | 95.12 | 4.42 | 81.34 | 93.55 | 95.56 | 3.22 |
BimoNet-Bi | 44.34 | 83.42 | 89.45 | 5.89 | 73.23 | 85.42 | 87.32 | 6.23 |
BimoNet-Net | 20.14 | 40.58 | 60.49 | 15.34 | 40.32 | 68.35 | 70.51 | 14.59 |
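The metrics in these tables are the usual ranking measures: hits@k is the percentage of test edges whose ground-truth label is ranked within the top k candidates, and MeanRank is the average rank of the ground-truth label (lower is better). Below is a minimal sketch, assuming a single ground-truth label per test edge; the function name and signature are illustrative, not from the source.

```python
# Illustrative computation of hits@k and MeanRank from predicted label scores.
import numpy as np

def ranking_metrics(scores, true_idx, ks=(1, 5, 10)):
    """scores: (n_test, n_labels) predicted scores, higher is better;
    true_idx: (n_test,) index of the ground-truth label of each test edge."""
    order = np.argsort(-scores, axis=1)            # candidates from best to worst
    ranks = np.array([int(np.where(order[i] == true_idx[i])[0][0]) + 1
                      for i in range(len(true_idx))])
    metrics = {f"hits@{k}": 100.0 * np.mean(ranks <= k) for k in ks}
    metrics["MeanRank"] = float(np.mean(ranks))
    return metrics
```

For edges carrying several labels, the per-label ranks would typically be computed in the same way and averaged.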
Metric | hits@1 | hits@5 | hits@10 | MeanRank | hits@1 | hits@5 | hits@10 | MeanRank |
---|---|---|---|---|---|---|---|---|
DeepWalk | 7.39 | 21.34 | 29.32 | 82.19 | 11.33 | 23.11 | 30.98 | 78.14 |
LINE | 5.78 | 17.21 | 24.83 | 95.39 | 8.68 | 19.02 | 26.23 | 92.15 |
node2vec | 7.34 | 21.06 | 29.55 | 80.54 | 11.45 | 23.41 | 31.37 | 78.27 |
TransNet | 27.82 | 66.47 | 76.12 | 25.35 | 58.35 | 74.71 | 79.72 | 22.71 |
TransE | 19.23 | 49.23 | 62.93 | 26.03 | 31.67 | 55.42 | 66.72 | 23.34 |
BimoNet | 31.23 | 69.43 | 80.23 | 19.24 | 63.42 | 80.37 | 86.77 | 18.83 |
BimoNet-Bi | 25.56 | 63.23 | 73.45 | 27.76 | 58.21 | 73.25 | 77.46 | 24.57 |
BimoNet-Net | 14.71 | 32.33 | 44.62 | 50.27 | 24.36 | 37.47 | 48.24 | 50.24 |
Metric | hits@1 | hits@5 | hits@10 | MeanRank | hits@1 | hits@5 | hits@10 | MeanRank |
---|---|---|---|---|---|---|---|---|
DeepWalk | 5.67 | 16.81 | 23.42 | 102.79 | 7.63 | 17.82 | 24.56 | 100.49 |
LINE | 4.34 | 13.23 | 19.65 | 115.02 | 6.14 | 14.72 | 20.91 | 112.88 |
node2vec | 5.34 | 16.43 | 23.61 | 101.87 | 7.27 | 17.91 | 24.81 | 100.12 |
TransNet | 28.53 | 66.18 | 75.29 | 29.41 | 56.66 | 73.14 | 78.64 | 27.61 |
TransE | 15.15 | 41.73 | 55.38 | 32.71 | 23.47 | 46.92 | 59.21 | 30.48 |
BimoNet | 35.23 | 73.48 | 80.32 | 25.04 | 61.36 | 77.63 | 82.67 | 21.45 |
BimoNet-Bi | 22.42 | 62.34 | 70.36 | 33.83 | 52.16 | 69.37 | 73.25 | 33.81 |
BimoNet-Net | 10.36 | 29.89 | 35.73 | 63.38 | 14.28 | 28.36 | 46.83 | 50.25 |
Model | Time (h) |
---|---|
DeepWalk | 4.8 |
LINE | 5.3 |
node2vec | 5.7 |
TransNet | 8.7 |
TransE | 7.2 |
BimoNet | 9.6 |
Comparison between TransNet and BimoNet on the Top 5 and Bottom 5 relations (the first four metric columns refer to the Top 5 Relations, the last four to the Bottom 5 Relations).

Metric | hits@1 | hits@5 | hits@10 | MeanRank | hits@1 | hits@5 | hits@10 | MeanRank |
---|---|---|---|---|---|---|---|---|
TransNet | 73.62 | 86.22 | 90.81 | 4.26 | 74.54 | 86.51 | 90.57 | 4.12 |
BimoNet | 78.93 | 90.03 | 95.42 | 3.64 | 79.27 | 90.35 | 95.58 | 3.57 |
© 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).