Efficient Non-Sampling Graph Neural Networks
Abstract
1. Introduction
2. Materials and Methods
2.1. Related Work
2.2. Preliminaries and Notations
- Message Passing or Propagation: Each node in the graph sends a message to its neighboring nodes [13]. This message could be the node's original feature vector or a transformed version of it. Mathematically, the message from node $u$ to node $v$ at iteration $t$ can be written as a function $M$ of their node embedding vectors: $m_{u \to v}^{(t)} = M\big(\mathbf{e}_u^{(t)}, \mathbf{e}_v^{(t)}\big)$.
- Aggregation: Each node aggregates the messages received from its neighbors. Let us denote the set of neighbors of node $u$ as $\mathcal{N}(u)$ [12]. The aggregation step can be represented as an aggregation function $A$ over the messages from the neighboring nodes: $m_u^{(t)} = A\big(\{ m_{v \to u}^{(t)} : v \in \mathcal{N}(u) \}\big)$.
- Update: Each node updates its embedding vector based on the aggregated message. This update can be modeled as an update function $U$: $\mathbf{e}_u^{(t+1)} = U\big(\mathbf{e}_u^{(t)}, m_u^{(t)}\big)$.
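The three steps above can be sketched as one round of propagation. The following is a minimal NumPy sketch; the linear message transform, mean aggregator, and tanh update are illustrative choices standing in for the abstract functions $M$, $A$, and $U$, not the specific functions used by any particular model.

```python
import numpy as np

def message_passing_step(H, neighbors, W_msg, W_upd):
    """One round of message passing / aggregation / update.

    H         : (n, d) matrix of node embeddings
    neighbors : dict mapping node id u -> list of neighbor ids
    W_msg     : (d, d) message transform (plays the role of M)
    W_upd     : (2d, d) update transform (plays the role of U)
    """
    n, d = H.shape
    H_new = np.zeros_like(H)
    for u in range(n):
        # Message: each neighbor v sends a transformed copy of its embedding.
        msgs = [H[v] @ W_msg for v in neighbors.get(u, [])]
        # Aggregation: mean over received messages (plays the role of A).
        agg = np.mean(msgs, axis=0) if msgs else np.zeros(d)
        # Update: combine the node's own state with the aggregated message.
        H_new[u] = np.tanh(np.concatenate([H[u], agg]) @ W_upd)
    return H_new
```

In practice this per-node loop is vectorized as sparse matrix products, but the loop form makes the correspondence to the three steps explicit.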
2.3. Problem Formalization
2.4. Non-Sampling Graph Neural Network
2.5. Improving Time Efficiency
Algorithm 1: Efficient Calculation of Loss Function
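The key idea behind this style of efficient non-sampling loss is to decompose the weighted squared loss over all node pairs so that the expensive all-pair term collapses into a $d \times d$ Gram-matrix computation, avoiding both negative sampling and an explicit $O(|\mathcal{V}|^2)$ sum. Below is a hedged NumPy sketch of the decomposition, assuming a dot-product score $\hat{y}_{uv} = \mathbf{e}_u^\top \mathbf{e}_v$ and a uniform negative weight; the function names and weighting scheme are illustrative, not the paper's exact Algorithm 1.

```python
import numpy as np

def naive_loss(E, edges, c_pos=1.0, c_neg=0.1):
    """O(|V|^2 d) reference: weighted squared loss over ALL node pairs.

    E     : (n, d) embedding matrix
    edges : list of (u, v) positive pairs with label 1; all others label 0
    """
    n = E.shape[0]
    Y_hat = E @ E.T
    Y = np.zeros((n, n))
    C = np.full((n, n), c_neg)
    for u, v in edges:
        Y[u, v] = 1.0
        C[u, v] = c_pos
    return float(np.sum(C * (Y - Y_hat) ** 2))

def efficient_loss(E, edges, c_pos=1.0, c_neg=0.1):
    """Same value without enumerating negative pairs.

    Decomposition: sum_{u,v} c_uv (y - yhat)^2
      = sum_{pos} [(c+ - c-) yhat^2 - 2 c+ yhat + c+]  (correction on positives)
      + c- * sum_{all u,v} yhat^2                       (all-pair term)
    and sum_{u,v} (e_u . e_v)^2 = ||E^T E||_F^2, a d x d computation.
    """
    pos = np.array([E[u] @ E[v] for u, v in edges])
    pos_term = np.sum((c_pos - c_neg) * pos**2 - 2.0 * c_pos * pos + c_pos)
    gram = E.T @ E                           # d x d Gram matrix
    all_term = c_neg * np.sum(gram * gram)   # = c_neg * ||E^T E||_F^2
    return float(pos_term + all_term)
```

The all-pair term costs $O(|\mathcal{V}|d^2)$ instead of $O(|\mathcal{V}|^2 d)$, which is the source of the speedups reported in Section 4.1.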
3. Results
3.1. Experimental Setup
3.2. Baseline Methods
- GCN [12]: The graph convolutional network (GCN) learns node representations by applying convolution operations over the graph structure within a neural network architecture.
- GraphSAGE [11]: An inductive graph neural network framework that uses node feature information to generate embeddings for previously unseen vertices.
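For reference, the core propagation rule of the GCN baseline [12] multiplies the node feature matrix by a symmetrically normalized adjacency matrix with self-loops. A minimal NumPy sketch of a single layer, using dense matrices for illustration only (real implementations use sparse operations):

```python
import numpy as np

def gcn_layer(A, H, W):
    """One GCN propagation step: H' = ReLU(D^{-1/2} (A + I) D^{-1/2} H W).

    A : (n, n) adjacency matrix (symmetric, no self-loops)
    H : (n, d_in) node feature/embedding matrix
    W : (d_in, d_out) learnable weight matrix
    """
    n = A.shape[0]
    A_hat = A + np.eye(n)                              # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))      # D^{-1/2} diagonal
    A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(A_norm @ H @ W, 0.0)             # ReLU
```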
3.3. Evaluation Metrics
3.4. Parameter Settings
4. Discussion
4.1. Computational Efficiency
4.2. Classification Accuracy
4.3. Influence of Embedding Dimension
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
Abbreviations
Abbreviation | Meaning
---|---
GNN | Graph Neural Network
NS-GNN | Non-Sampling Graph Neural Network
GAT | Graph Attention Network
GCN | Graph Convolutional Network
References
1. Perozzi, B.; Al-Rfou, R.; Skiena, S. DeepWalk: Online learning of social representations. In Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, New York, NY, USA, 24–27 August 2014; pp. 701–710.
2. Yang, L.; Gu, J.; Wang, C.; Cao, X.; Zhai, L.; Jin, D.; Guo, Y. Toward Unsupervised Graph Neural Network: Interactive Clustering and Embedding via Optimal Transport. In Proceedings of the 2020 IEEE International Conference on Data Mining (ICDM), Sorrento, Italy, 17–20 November 2020; pp. 1358–1363.
3. Henaff, M.; Bruna, J.; LeCun, Y. Deep convolutional networks on graph-structured data. arXiv 2015, arXiv:1506.05163.
4. Wang, M.; Gong, M.; Zheng, X.; Zhang, K. Modeling dynamic missingness of implicit feedback for recommendation. Adv. Neural Inf. Process. Syst. 2018, 31, 6669.
5. Rendle, S. Factorization machines. In Proceedings of the 2010 IEEE International Conference on Data Mining, Sydney, Australia, 13–17 December 2010; pp. 995–1000.
6. Chen, C.; Zhang, M.; Zhang, Y.; Ma, W.; Liu, Y.; Ma, S. Efficient heterogeneous collaborative filtering without negative sampling for recommendation. Proc. AAAI Conf. Artif. Intell. 2020, 34, 19–26.
7. Li, Z.; Ji, J.; Fu, Z.; Ge, Y.; Xu, S.; Chen, C.; Zhang, Y. Efficient Non-Sampling Knowledge Graph Embedding. In Proceedings of the Web Conference 2021, Online, 12–23 April 2021; pp. 1727–1736.
8. Chen, C.; Zhang, M.; Ma, W.; Liu, Y.; Ma, S. Efficient non-sampling factorization machines for optimal context-aware recommendation. In Proceedings of the Web Conference 2020, Taipei, Taiwan, 20–24 April 2020; pp. 2400–2410.
9. Wu, F.; Souza, A.; Zhang, T.; Fifty, C.; Yu, T.; Weinberger, K. Simplifying graph convolutional networks. In Proceedings of the International Conference on Machine Learning, Long Beach, CA, USA, 9–15 June 2019; pp. 6861–6871.
10. Katharopoulos, A.; Vyas, A.; Pappas, N.; Fleuret, F. Transformers are RNNs: Fast autoregressive transformers with linear attention. In Proceedings of the International Conference on Machine Learning, Vienna, Austria, 12–17 July 2020; pp. 5156–5165.
11. Hamilton, W.; Ying, Z.; Leskovec, J. Inductive representation learning on large graphs. Adv. Neural Inf. Process. Syst. 2017, 30.
12. Kipf, T.N.; Welling, M. Semi-supervised classification with graph convolutional networks. arXiv 2017, arXiv:1609.02907.
13. Scarselli, F.; Gori, M.; Tsoi, A.C.; Hagenbuchner, M.; Monfardini, G. The graph neural network model. IEEE Trans. Neural Netw. 2008, 20, 61–80.
14. Albawi, S.; Mohammed, T.A.; Al-Zawi, S. Understanding of a convolutional neural network. In Proceedings of the 2017 International Conference on Engineering and Technology (ICET), Antalya, Turkey, 21–23 August 2017; pp. 1–6.
15. Chen, J.; Ma, T.; Xiao, C. FastGCN: Fast Learning with Graph Convolutional Networks via Importance Sampling. arXiv 2018, arXiv:1801.10247.
16. Yang, L.; Liu, Z.; Dou, Y.; Ma, J.; Yu, P.S. ConsisRec: Enhancing GNN for social recommendation via consistent neighbor aggregation. In Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval, Virtual Event, 11–15 July 2021; pp. 2141–2145.
17. Liu, Y.; Zeng, K.; Wang, H.; Song, X.; Zhou, B. Content matters: A GNN-based model combined with text semantics for social network cascade prediction. In Proceedings of the Advances in Knowledge Discovery and Data Mining: 25th Pacific-Asia Conference, PAKDD 2021, Virtual Event, 11–14 May 2021; Proceedings, Part I; Springer: Berlin/Heidelberg, Germany, 2021; pp. 728–740.
18. Yao, L.; Mao, C.; Luo, Y. Graph convolutional networks for text classification. In Proceedings of the AAAI Conference on Artificial Intelligence, Honolulu, HI, USA, 29–31 January 2019; Volume 33, pp. 7370–7377.
19. Wu, L.; Chen, Y.; Ji, H.; Liu, B. Deep learning on graphs for natural language processing. In Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval, Virtual Event, 11–15 July 2021; pp. 2651–2653.
20. Schlichtkrull, M.S.; Cao, N.D.; Titov, I. Interpreting Graph Neural Networks for NLP with Differentiable Edge Masking. In Proceedings of the International Conference on Learning Representations, Virtual Event, 3–7 May 2021.
21. Wu, L.; Chen, Y.; Shen, K.; Guo, X.; Gao, H.; Li, S.; Pei, J.; Long, B. Graph neural networks for natural language processing: A survey. Found. Trends Mach. Learn. 2023, 16, 119–328.
22. Wang, X.; Ye, Y.; Gupta, A. Zero-shot recognition via semantic embeddings and knowledge graphs. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 6857–6866.
23. Pradhyumna, P.; Shreya, G. Graph neural network (GNN) in image and video understanding using deep learning for computer vision applications. In Proceedings of the 2021 Second International Conference on Electronics and Sustainable Communication Systems (ICESC), Coimbatore, India, 4–6 August 2021; pp. 1183–1189.
24. Shi, W.; Rajkumar, R. Point-GNN: Graph neural network for 3D object detection in a point cloud. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 1711–1719.
25. Han, K.; Wang, Y.; Guo, J.; Tang, Y.; Wu, E. Vision GNN: An Image is Worth Graph of Nodes. Proc. Adv. Neural Inf. Process. Syst. 2022, 35, 8291–8303.
26. Wu, C.; Wu, F.; Cao, Y.; Huang, Y.; Xie, X. FedGNN: Federated Graph Neural Network for Privacy-Preserving Recommendation. arXiv 2021, arXiv:2102.04925.
27. Gao, C.; Wang, X.; He, X.; Li, Y. Graph neural networks for recommender system. In Proceedings of the Fifteenth ACM International Conference on Web Search and Data Mining, Virtual Event, 21–25 February 2022; pp. 1623–1625.
28. Wu, S.; Tang, Y.; Zhu, Y.; Wang, L.; Xie, X.; Tan, T. Session-based recommendation with graph neural networks. In Proceedings of the AAAI Conference on Artificial Intelligence, Honolulu, HI, USA, 27 January–1 February 2019; Volume 33, pp. 346–353.
29. Veličković, P.; Cucurull, G.; Casanova, A.; Romero, A.; Lio, P.; Bengio, Y. Graph attention networks. arXiv 2018, arXiv:1710.10903.
30. Chiang, W.L.; Liu, X.; Si, S.; Li, Y.; Bengio, S.; Hsieh, C.J. Cluster-GCN: An efficient algorithm for training deep and large graph convolutional networks. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, Anchorage, AK, USA, 4–8 August 2019; pp. 257–266.
31. Xu, K.; Hu, W.; Leskovec, J.; Jegelka, S. How Powerful are Graph Neural Networks? In Proceedings of the International Conference on Learning Representations, New Orleans, LA, USA, 6–9 May 2019.
32. Wu, Z.; Pan, S.; Chen, F.; Long, G.; Zhang, C.; Philip, S.Y. A comprehensive survey on graph neural networks. IEEE Trans. Neural Netw. Learn. Syst. 2020, 32, 4–24.
33. Zhou, J.; Cui, G.; Hu, S.; Zhang, Z.; Yang, C.; Liu, Z.; Wang, L.; Li, C.; Sun, M. Graph neural networks: A review of methods and applications. AI Open 2020, 1, 57–81.
34. Kefato, Z.T.; Girdzijauskas, S. Self-supervised graph neural networks without explicit negative sampling. arXiv 2021, arXiv:2103.14958.
35. Tam, P.; Song, I.; Kang, S.; Ros, S.; Kim, S. Graph Neural Networks for Intelligent Modelling in Network Management and Orchestration: A Survey on Communications. Electronics 2022, 11, 3371.
36. Zhuge, W.; Nie, F.; Hou, C.; Yi, D. Unsupervised single and multiple views feature extraction with structured graph. IEEE Trans. Knowl. Data Eng. 2017, 29, 2347–2359.
37. Glorot, X.; Bengio, Y. Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, JMLR Workshop and Conference Proceedings, Sardinia, Italy, 13–15 May 2010; pp. 249–256.
38. Sen, P.; Namata, G.; Bilgic, M.; Getoor, L.; Galligher, B.; Eliassi-Rad, T. Collective classification in network data. AI Mag. 2008, 29, 93.
39. McCallum, A.K.; Nigam, K.; Rennie, J.; Seymore, K. Automating the construction of internet portals with machine learning. Inf. Retr. 2000, 3, 127–163.
40. Fey, M.; Lenssen, J.E. Fast Graph Representation Learning with PyTorch Geometric. arXiv 2019, arXiv:1903.02428.
Symbol | Description
---|---
$\mathcal{G}$ | A graph
$\mathcal{V}$ | All nodes in a graph
$u, v$ | Nodes in graph $\mathcal{G}$
$\mathcal{N}(u)$ | Set of neighboring nodes of node $u$
$\mathbf{e}_u, \mathbf{e}_v$ | Embedding vectors of nodes $u$ and $v$
$e_u^{(i)}$ | $i$-th dimension of the node embedding $\mathbf{e}_u$
$d$ | Dimension of the embedding vectors
$w$ | Weight of the node embedding
$\hat{y}_{uv}$ | Predicted score between nodes $u$ and $v$
Dataset | #Nodes | #Edges | #Features | #Classes
---|---|---|---|---
NELL | 65,755 | 266,144 | 5414 | 210
PubMed | 19,717 | 44,338 | 500 | 3
Cora | 2708 | 5429 | 1433 | 7
Method | Accuracy (NELL) | Time (NELL) | Speedup (NELL) | Accuracy (PubMed) | Time (PubMed) | Speedup (PubMed) | Accuracy (Cora) | Time (Cora) | Speedup (Cora)
---|---|---|---|---|---|---|---|---|---
GraphSAGE-mean | 0.214 | 876.7 s | / | 0.703 | 58.6 s | / | 0.685 | 6.1 s | /
NS-GraphSAGE-mean | 0.241 | 235.1 s | 3.72 | 0.718 | 12.1 s | 4.84 | 0.652 | 2.5 s | 2.44
GraphSAGE-max | 0.471 | 929.2 s | / | 0.698 | 68.1 s | / | 0.679 | 5.9 s | /
NS-GraphSAGE-max | 0.520 | 276.6 s | 3.35 | 0.711 | 13.1 s | 5.19 | 0.681 | 3.6 s | 1.63
GCN | 0.441 | 271.5 s | / | 0.694 | 2.1 s | / | 0.677 | 1.2 s | /
NS-GCN | 0.461 | 210.7 s | 1.28 | 0.706 | 2.0 s | 1.05 | 0.683 | 1.1 s | 1.09
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Ji, J.; Li, Z.; Xu, S.; Ge, Y.; Tan, J.; Zhang, Y. Efficient Non-Sampling Graph Neural Networks. Information 2023, 14, 424. https://doi.org/10.3390/info14080424