Link Prediction in Heterogeneous Information Networks: Improved Hypergraph Convolution with Adaptive Soft Voting
Abstract
1. Introduction
- (1) We propose a scalable frequent subgraph pattern (FSP) extraction algorithm. It leverages a random subgraph sampling strategy that addresses the limitations of existing hypergraph construction methods relying on predefined network motifs. Its complexity is significantly lower than that of exhaustive enumeration, enabling the algorithm to handle large-scale heterogeneous networks while still extracting high-frequency, meaningful FSPs for hypergraph construction.
- (2) We propose an improved hypergraph-based link prediction model built on a soft-voting ensemble strategy. The model integrates the HGCN with the ensemble strategy: it learns heterogeneous semantics and high-order structural information from the constructed multi-view heterogeneous hypergraphs to generate discriminative node embeddings, and it adaptively calibrates the ensemble weights of the base models to weigh these differentiated embeddings. This significantly improves link prediction accuracy and makes the model well-suited to highly heterogeneous networks.
2. Preliminaries
2.1. Related Definitions
2.1.1. Heterogeneous Information Network (HIN)
2.1.2. Frequent Subgraph Pattern (FSP)
2.1.3. Hypergraph
- Incidence Matrix
- Hyperedge Degree Matrix
- Vertex Degree Matrix
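The three matrices listed above can be built directly from a hyperedge list. The sketch below uses NumPy with dense matrices for clarity; it is an illustrative construction, not the paper's implementation (a real HGCN pipeline would use sparse matrices):

```python
import numpy as np

def hypergraph_matrices(num_vertices, hyperedges):
    """Build the incidence matrix H and the diagonal degree matrices.

    H[v, e] = 1 if vertex v belongs to hyperedge e, else 0.
    D_e[e, e] = number of vertices in hyperedge e (hyperedge degree).
    D_v[v, v] = number of hyperedges containing vertex v (vertex degree).
    """
    H = np.zeros((num_vertices, len(hyperedges)))
    for e_idx, edge in enumerate(hyperedges):
        for v in edge:
            H[v, e_idx] = 1.0
    D_e = np.diag(H.sum(axis=0))  # column sums: hyperedge degrees
    D_v = np.diag(H.sum(axis=1))  # row sums: vertex degrees
    return H, D_e, D_v

# Toy example: 4 vertices, 2 hyperedges {0,1,2} and {2,3}
H, D_e, D_v = hypergraph_matrices(4, [{0, 1, 2}, {2, 3}])
```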
2.1.4. Ensemble Strategy
- Arithmetic Mean
- Weighted Mean
- Hard Voting
- Soft Voting
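The difference between the last two strategies can be made concrete with a small sketch (binary classification; `prob_list` is a hypothetical list holding each base model's predicted probability matrix):

```python
import numpy as np

def hard_vote(prob_list):
    """Majority vote over per-model class predictions (binary case).

    Each model first commits to a class via argmax; confidences are discarded.
    """
    preds = np.array([p.argmax(axis=1) for p in prob_list])  # (models, samples)
    return (preds.mean(axis=0) >= 0.5).astype(int)

def soft_vote(prob_list, weights=None):
    """Weighted average of predicted probabilities, then argmax.

    With equal weights this is the arithmetic mean of probabilities;
    non-uniform weights give the weighted-mean variant.
    """
    if weights is None:
        weights = np.ones(len(prob_list)) / len(prob_list)
    fused = sum(w * p for w, p in zip(weights, prob_list))
    return fused.argmax(axis=1)
```

With three models predicting [0.4, 0.6], [0.4, 0.6], [0.9, 0.1], hard voting picks class 1 (two votes to one), while soft voting picks class 0 because the dissenting model is far more confident.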
2.2. Problem Description
3. Model
3.1. Overall Architecture
3.2. Construction of Heterogeneous Hypergraphs
3.2.1. Frequent Subgraph Pattern Extraction
| Algorithm 1: Heterogeneous Network Frequent Subgraph Pattern Extraction |
| Input: G: HIN; k: Target size of FSPs; K: Number of top-K FSPs to extract. |
| Output: TopFSPs: Set of top-K FSP instances with the highest occurrence frequency. |
| Functions: canonical_form(); sample_subgraph(). |
| 1: Initialize Degree, Frequency, Neighbors, Instances, TopFSPs ← empty set; N_s = 100,000; |
| 2: // Step 1: Preprocessing—Calculate node degrees and neighbor sets (O(|V| + |E|)) |
| 3: For each node v in V do |
| 4: Degree[v] ← |{e ∈ E | v ∈ e}| // Compute total degree of node v |
| 5: Neighbors[v] ← {u ∈ V | (v,u) ∈ E or (u,v) ∈ E} // Neighbor set (including edge types) |
| 6: End for |
| 7: // Step 2: Random subgraph sampling and FSP statistics (O(N_s × k)) |
| 8: For i = 1 to N_s do // Sample subgraphs of target size k (core subfunction: sample_subgraph) |
| 9: S ← sample_subgraph(G, k, Degree, Neighbors) |
| 10: If not is_connected(S) then // Connectivity check: only process connected subgraphs |
| 11: Continue // Skip disconnected subgraphs to avoid invalid FSPs |
| 12: End if |
| 13: // Update frequency and instance mapping |
| 14: f ← canonical_form(S) |
| 15: If f not in Frequency then |
| 16: Frequency[f] ← 0 |
| 17: End if |
| 18: Frequency[f] += 1 |
| 19: If f not in Instances then |
| 20: Instances[f] ← S // Store representative instance |
| 21: End if |
| 22: End for |
| 23: // Step 3: Filter high-frequency FSPs (O(M log M), M = number of unique FSPs ≪ N_s) |
| 24: Sorted ← Sort Frequency in descending order of values |
| 25: TopFSPs ← Take first K entries from Sorted |
| 26: // Step 4: Output results |
| 27: Return TopFSPs |
| Algorithm 2: Core Subfunction: sample_subgraph(G, k, Degree, Neighbors) |
| 1: // Step 1: Weighted sampling of seed node (higher degree → higher sampling probability) |
| 2: TotalDegree ← Sum(Degree[v] for v in V) |
| 3: seed ← Randomly select v ∈ V with probability Degree[v] / TotalDegree |
| 4: S ← {seed} // Initialize sampled node set |
| 5: // Step 2: Neighborhood-priority expansion + global jump, avoid local over-sampling |
| 6: While |S| < k do // p: Expansion probability from neighbors of sampled nodes |
| 7: Frontier ← (∪_{v∈S} Neighbors[v]) \ S |
| 8: If Random(0,1) < p and Frontier ≠ ∅ then |
| 9: u ← Randomly select u ∈ Frontier |
| 10: Else |
| 11: u ← Randomly select u ∈ V \ S // Global random jump: ensure sampling diversity |
| 12: End if |
| 13: S ← S ∪ {u} |
| 14: End while |
| 15: // Step 3: Extract complete subgraph info (nodes + edges + types) |
| 16: E_S ← {e ∈ E | e connects two nodes in S} |
| 17: subgraph ← Subgraph(S, E_S, node types, edge types) |
| 18: Return subgraph |
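The two algorithms above can be sketched in Python as follows. This is a minimal sketch, not the authors' implementation: the graph is a plain adjacency dict (node and edge types omitted), and the sampled node set itself stands in for the real type-aware canonical form.

```python
import random
from collections import Counter

def is_connected(nodes, adj):
    """DFS connectivity check restricted to the sampled node set."""
    nodes = set(nodes)
    seen, stack = set(), [next(iter(nodes))]
    while stack:
        v = stack.pop()
        if v in seen:
            continue
        seen.add(v)
        stack.extend(adj[v] & nodes)
    return seen == nodes

def sample_subgraph(adj, degrees, k, p_expand=0.8):
    """Algorithm 2 sketch: degree-weighted seed, then neighborhood-priority
    expansion with probability p_expand, otherwise a global random jump."""
    nodes = list(adj)
    seed = random.choices(nodes, weights=[degrees[v] for v in nodes])[0]
    sampled = {seed}
    while len(sampled) < k:
        frontier = set().union(*(adj[v] for v in sampled)) - sampled
        if frontier and random.random() < p_expand:
            sampled.add(random.choice(sorted(frontier)))
        else:  # global jump keeps the sampler from over-sampling one region
            sampled.add(random.choice([v for v in nodes if v not in sampled]))
    return frozenset(sampled)

def top_k_patterns(adj, k, K, num_samples=1000):
    """Algorithm 1 sketch: sample, keep connected subgraphs, count canonical
    forms, and return the K most frequent patterns."""
    degrees = {v: len(adj[v]) for v in adj}
    freq = Counter()
    for _ in range(num_samples):
        s = sample_subgraph(adj, degrees, k)
        if not is_connected(s, adj):
            continue  # skip disconnected samples (invalid FSPs)
        freq[s] += 1  # placeholder canonical form: the node set itself
    return [pattern for pattern, _ in freq.most_common(K)]
```

Because only `num_samples` subgraphs are drawn, the cost is O(N_s × k) rather than the exponential cost of exhaustive enumeration.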
3.2.2. Hypergraph Construction
3.3. VE-HGCN Model
3.3.1. Hypergraph Convolutional Neural Network (HGCN)
- 1. Forward transform (node domain → frequency domain):
- 2. Inverse transform (frequency domain → node domain):
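The two transforms above follow the standard spectral formulation for hypergraphs [30]. The normalization below is the conventional one from that literature; the paper's HGCN may differ in details, so treat this as the textbook form rather than the authors' exact equations:

```latex
% Normalized hypergraph Laplacian [30], with incidence matrix H, hyperedge
% weight matrix W, and degree matrices D_v (vertices) and D_e (hyperedges):
\Delta = I - D_v^{-1/2} H W D_e^{-1} H^{\top} D_v^{-1/2} = \Phi \Lambda \Phi^{\top}
% Forward transform (node domain -> frequency domain): \hat{x} = \Phi^{\top} x
% Inverse transform (frequency domain -> node domain): x = \Phi \hat{x}
% One hypergraph convolution layer in matrix form, with learnable \Theta^{(l)}:
X^{(l+1)} = \sigma\!\left( D_v^{-1/2} H W D_e^{-1} H^{\top} D_v^{-1/2}
            X^{(l)} \Theta^{(l)} \right)
```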
3.3.2. Feature Construction
- Handling the diversity of node types;
- Features of hyperedges;
- Structural features;
- Normalization and alignment of different feature vectors.
3.3.3. Voting Ensemble Strategy
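The adaptive weighted soft voting of this section can be sketched as follows. The weight rule here (softmax over per-branch validation scores) is an illustrative assumption: the paper calibrates the weights adaptively during training, and `val_scores` is a hypothetical stand-in for each branch's measured contribution.

```python
import numpy as np

def adaptive_soft_vote(branch_probs, val_scores):
    """Fuse K base-model probability matrices with adaptive weights.

    branch_probs: list of (n_samples, n_classes) arrays, one per hypergraph
    branch. val_scores: per-branch validation scores (e.g. AUC); a softmax
    turns them into ensemble weights so stronger branches dominate the fusion.
    """
    scores = np.asarray(val_scores, dtype=float)
    w = np.exp(scores - scores.max())  # numerically stable softmax
    w /= w.sum()
    fused = sum(wi * p for wi, p in zip(w, branch_probs))
    return fused.argmax(axis=1), w
```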
4. Experiments and Data
4.1. Datasets
4.2. Baseline Methods
- 1. LINE (Large-Scale Information Network Embedding) [31]
- 2. Metapath2vec [32]
- 3. Meta-GNN (Meta Graph Neural Network) [33]
- 4. GCN [29]
- 5. HAN (Heterogeneous Graph Attention Network) [34]
- 6. HGCL (Heterogeneous Graph Contrastive Learning) [35]
- 7. ie-HGCN [36]
4.3. Experimental Setup and Evaluation Metrics
4.4. Experiment Result Analysis
4.5. Analysis of Ablation Experiment Results
- V1 removes the ensemble strategy and uses only the single optimal base model for prediction. The model’s AUC and F1-Score show significant declines across all datasets. This is because a single base model can only capture the high-order semantic information contained in one type of hypergraph and cannot integrate the differentiated features of multiple hypergraphs, thus losing adaptability to complex heterogeneous networks. This fully verifies the necessity of the ensemble strategy for improving model performance.
- V2 replaces soft voting with hard voting, and its performance is inferior to that of the complete VE-HGCN. This is because hard voting only uses the category proportion of branch prediction results as the decision basis, ignoring the confidence differences in the prediction probabilities of different hypergraph branches, and thus cannot accurately distinguish the semantic contribution of each branch. This demonstrates the superiority of soft voting over hard voting.
- V3 replaces adaptive weighted soft voting with equal-weighted averaging, and the model performance also declines. This is because equal-weighted averaging assumes that all hypergraph branches are equally important and cannot dynamically adjust weights according to the heterogeneity characteristics of different datasets. In contrast, the adaptive weighting mechanism can specifically strengthen the role of high-contribution hypergraphs and weaken the interference of low-efficiency branches, verifying the performance gain of the adaptive weighting mechanism.
- V4 replaces FSP-based heterogeneous hypergraph construction with traditional meta-path-based construction. Its AUC and F1-Score drop obviously across all datasets, and V4 performs consistently worse than the full VE-HGCN despite minor fluctuations on individual datasets. Traditional meta-paths are manually predefined linear patterns that only capture limited pairwise node relations, failing to mine implicit non-linear high-order heterogeneous structures. In contrast, FSP-based construction is data-driven and automatically extracts fine-grained high-order patterns without manual priors, fully demonstrating its necessity and superiority in modeling heterogeneous network structures.
- V5 replaces the HGCN with the classic heterogeneous GCN while keeping other modules unchanged. It underperforms V4 in most cases and shows a significant performance gap against the VE-HGCN on all four datasets. The classic heterogeneous GCN is limited to pairwise node relation modeling and cannot effectively encode the multi-node high-order correlations captured by FSP hypergraphs. This verifies the HGCN’s distinct advantages in processing high-order heterogeneous information and its indispensable role in the VE-HGCN.
- The complete ablation experiment evaluation results verify the effectiveness of each component in the ensemble module. The adaptive weighted soft-voting mechanism is the core of the ensemble strategy and has the greatest impact on the model’s link prediction performance. Meanwhile, the introduction of the ensemble strategy and the replacement of hard voting with soft voting also play key roles in enhancing the robustness and accuracy of the model. In addition, the FSP-based heterogeneous hypergraph construction and HGCN-based high-order feature learning are two fundamental pillars of the model. The former provides high-quality high-order structural features, and the latter efficiently encodes these features. The synergistic effect of all components enables the complete VE-HGCN to achieve the optimal performance on all datasets, fully proving the rationality and advancement of the model design.
4.6. Validity Study of FSPs
4.7. Parameter Sensitivity Analysis
4.8. Time Complexity and Training Time Analysis
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Wu, Y.; Wang, Y.; Wang, X.; Xu, Z.X.; Li, L.N. Semi-Supervised Node Classification in Heterogeneous Networks Based on Hypergraph Convolution. J. Comput. Sci. Technol. 2021, 44, 2248–2260. [Google Scholar]
- Shi, C.; Wang, R.J.; Wang, X. A Survey on Heterogeneous Information Network Analysis and Applications. J. Softw. 2022, 33, 598–621. [Google Scholar] [CrossRef]
- Fan, H.; Zhang, F.; Wei, Y.; Li, Z.; Zou, C.; Gao, Y.; Dai, Q. Heterogeneous Hypergraph Variational Autoencoder for Link Prediction. IEEE Trans. Pattern Anal. Mach. Intell. 2021, 44, 4125–4138. [Google Scholar] [CrossRef]
- Cao, J.P.; Li, J.C.; Jiang, J. A Survey on Link Prediction Methods for Heterogeneous Information Networks. Syst. Eng. Electron. 2024, 46, 2747–2759. [Google Scholar]
- Lin, D.; Pantel, P. An Information-Theoretic Definition of Similarity. In Proceedings of the Fifteenth International Conference on Machine Learning; Morgan Kaufmann Publishers Inc.: San Francisco, CA, USA, 2002; pp. 267–274. [Google Scholar]
- Baeza-Yates, R.; Ribeiro-Neto, B.A. Introduction to Modern Information Retrieval. Int. J. Inf. Manag. 2010, 30, 573–574. [Google Scholar]
- Sun, Y.; Han, J.; Yan, X.; Yu, P.S.; Wu, T. PathSim: Meta Path-Based Top-K Similarity Search in Heterogeneous Information Networks. Proc. VLDB Endow. 2011, 4, 992–1003. [Google Scholar] [CrossRef]
- Lyu, L.Y. Link Prediction in Complex Networks. J. Univ. Electron. Sci. Technol. China 2010, 39, 651–661. [Google Scholar]
- Jaccard, P. Étude Comparative de La Distribution Florale Dans Une Portion Des Alpes et Des Jura. Bull. De La Société Vaudoise Des Sci. Nat. 1901, 37, 547–579. [Google Scholar]
- Adamic, L.A.; Adar, E. Friends and Neighbors on the Web. Soc. Netw. 2003, 25, 211–230. [Google Scholar] [CrossRef]
- Katz, L. A New Status Index Derived from Sociometric Analysis. Psychometrika 1953, 18, 39–43. [Google Scholar] [CrossRef]
- Kumar, A.; Singh, S.S.; Singh, K.; Biswas, B. Link Prediction Techniques, Applications, and Performance: A Survey. Phys. A Stat. Mech. Its Appl. 2020, 553, 124289. [Google Scholar] [CrossRef]
- Sun, Y.; Han, J.; Zhao, P.; Yin, Z.; Cheng, H.; Wu, T. RankClus: Integrating Clustering with Ranking for Heterogeneous Information Network Analysis. In Proceedings of the 12th International Conference on Extending Database Technology: Advances in Database Technology; ACM: Saint Petersburg, Russia, 2009; pp. 565–576. [Google Scholar]
- Shi, C.; Li, Y.; Zhang, J.; Sun, Y.; Yu, P.S. A Survey of Heterogeneous Information Network Analysis. IEEE Trans. Knowl. Data Eng. 2017, 29, 17–37. [Google Scholar] [CrossRef]
- Zhao, Y.H.; Xue, W.J. Similarity Measurement for Heterogeneous Networks Based on Weighted Fusion of Meta-Paths. Comput. Eng. Des. 2021, 42, 309–315. [Google Scholar] [CrossRef]
- Li, Z.Y.; Liang, X.; Zhou, X.P.; Zhang, H.Y.; Ma, Y.F. A Link Prediction Method Based on Node Structural Feature Mapping in Large-Scale Networks. J. Comput. Sci. Technol. 2016, 39, 1947–1964. [Google Scholar]
- Li, T.; Zhang, R.; Yao, Y.; Liu, Y.; Ma, J. Link Prediction via Robust Bidirectional Deep Nonnegative Matrix Factorization. Expert Syst. Appl. 2025, 287, 128108. [Google Scholar] [CrossRef]
- Wang, X.; Lu, Y.; Shi, C.; Wang, R.; Cui, P.; Mou, S. Dynamic Heterogeneous Information Network Embedding With Meta-Path Based Proximity. IEEE Trans. Knowl. Data Eng. 2022, 34, 1117–1132. [Google Scholar] [CrossRef]
- Yadati, N.; Nimishakavi, M.; Yadav, P.; Nitin, V.; Louis, A.; Talukdar, P. HyperGCN: A New Method of Training Graph Convolutional Networks on Hypergraphs. Adv. Neural Inf. Process. Syst. 2019, 32, 10167–10178. [Google Scholar] [CrossRef]
- Liu, Z.; Wang, X.; Liu, X.; Zhang, S. Tri-Party Deep Network Representation Learning Using Inductive Matrix Completion. J. Cent. South Univ. 2019, 26, 2867–2878. [Google Scholar] [CrossRef]
- Fan, J.; Yang, J.; Gu, Z.; Wu, H.; Sun, D.; Qin, F.; Wu, J. Path-Aware Multi-Scale Learning for Heterogeneous Graph Neural Network. Neural Netw. 2025, 191, 107743. [Google Scholar] [CrossRef]
- Bian, J.; Zhou, T.; Bi, Y. Unveiling the Role of Higher-Order Interactions via Stepwise Reduction. Commun. Phys. 2025, 8, 228. [Google Scholar] [CrossRef]
- Zeng, L.; Yang, J.R.; Huang, G.; Jing, X.; Luo, C.R. A Survey of Hypergraph Application Methods: Problems, Progress, and Challenges. J. Comput. Appl. 2024, 44, 1–14. [Google Scholar]
- Bretto, A. Hypergraph Theory: An Introduction; Mathematical Engineering; Springer International Publishing: Heidelberg, Germany, 2013; ISBN 978-3-319-00079-4. [Google Scholar]
- Lin, J.J.; Ye, Z.L.; Zhao, H.X.; Li, Z.R. A Survey of Hypergraph Neural Networks. J. Comput. Res. Dev. 2024, 61, 362–384. [Google Scholar]
- Milo, R.; Shen-Orr, S.; Itzkovitz, S.; Kashtan, N.; Chklovskii, D.; Alon, U. Network Motifs: Simple Building Blocks of Complex Networks. In The Structure and Dynamics of Networks; Princeton University Press: Princeton, NJ, USA, 2011; pp. 217–220. ISBN 978-1-4008-4135-6. [Google Scholar]
- Bruna, J.; Zaremba, W.; Szlam, A.; LeCun, Y. Spectral Networks and Locally Connected Networks on Graphs. arXiv 2013, arXiv:1312.6203. [Google Scholar]
- Defferrard, M.; Bresson, X.; Vandergheynst, P. Convolutional Neural Networks on Graphs with Fast Localized Spectral Filtering. arXiv 2016, arXiv:1606.09375. [Google Scholar]
- Kipf, T.N.; Welling, M. Semi-Supervised Classification with Graph Convolutional Networks. arXiv 2016, arXiv:1609.02907. [Google Scholar]
- Zhou, D.; Huang, J.; Schölkopf, B. Learning with Hypergraphs: Clustering, Classification, and Embedding. Adv. Neural Inf. Process. Syst. 2006, 19, 1601–1608. [Google Scholar] [CrossRef]
- Tang, J.; Qu, M.; Wang, M.; Zhang, M.; Yan, J.; Mei, Q. LINE: Large-scale Information Network Embedding. In Proceedings of the 24th International Conference on World Wide Web; ACM: Florence, Italy, 2015; pp. 1067–1077. [Google Scholar] [CrossRef]
- Dong, Y.; Chawla, N.V.; Swami, A. Metapath2vec: Scalable Representation Learning for Heterogeneous Networks. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining; ACM: Halifax, NS, Canada, 2017; pp. 135–144. [Google Scholar]
- Sankar, A.; Zhang, X.; Chang, K.C.-C. Motif-Based Convolutional Neural Network on Graphs. arXiv 2019, arXiv:1711.05697. [Google Scholar] [CrossRef]
- Wang, X.; Ji, H.; Shi, C.; Wang, B.; Cui, P.; Yu, P.; Ye, Y. Heterogeneous Graph Attention Network. arXiv 2021, arXiv:1903.07293. [Google Scholar]
- Wang, Y.; Zhang, Y.; Liu, Y.; Chen, H. Heterogeneous Graph Contrastive Learning for Recommendation. In Proceedings of the ACM Web Conference 2024; ACM: Singapore, 2024; pp. 1234–1245. [Google Scholar] [CrossRef]
- Li, Z.; Wang, X.; Zhang, J.; Chen, Y. Interaction-aware Enhanced Heterogeneous Graph Convolutional Network. In Proceedings of the AAAI Conference on Artificial Intelligence; AAAI: Vancouver, BC, Canada, 2024; Volume 38, pp. 8756–8764. [Google Scholar] [CrossRef]







| Network | N | E | N_Type | E_Type |
|---|---|---|---|---|
| DBLP1 | 6619 | 11,976 | 2 | 3 |
| DBLP2 | 13,891 | 34,256 | 3 | 5 |
| IMDB | 31,669 | 147,128 | 5 | 5 |
| LAST.FM | 128,705 | 375,114 | 4 | 5 |
| Hyperparameter | Value | Description |
|---|---|---|
| N_s | 100,000 | Number of random subgraph samples for pattern extraction |
| k | 3, 4, 5 | Target size of subgraph patterns |
| p_expand | 0.8 | Probability of neighborhood expansion |
| p_jump | 0.2 | Probability of global random jump |
| lr | 0.01 | Adam optimizer learning rate |
| L | 2 | Number of hypergraph convolutional layers |
| d_1 | 64 | First HGCN layer dimension |
| d_2 | 32 | Second HGCN layer dimension |
| d_fc | 64 | Fully connected layer dimension |
| dropout | 0.2 | Dropout rate for regularization |
| λ | 0.0001 | L2 regularization coefficient |
| patience | 50 | Epochs without improvement for early stopping |
| δ | 0.001 | Minimum F1-Score improvement threshold |
| neg:pos | 1:1 | Ratio of negative to positive samples |
| Model | AUC (DBLP1) | AUC (DBLP2) | AUC (IMDB) | AUC (LAST.FM) | F1 (DBLP1) | F1 (DBLP2) | F1 (IMDB) | F1 (LAST.FM) |
|---|---|---|---|---|---|---|---|---|
| LINE | 0.7281 ± 0.0015 | 0.6583 ± 0.0029 | 0.5810 ± 0.0017 | 0.6462 ± 0.0144 | 0.6821 ± 0.0028 | 0.6524 ± 0.0022 | 0.6334 ± 0.0003 | 0.6197 ± 0.0003 |
| Metapath2vec | 0.5800 ± 0.0045 | 0.5995 ± 0.0038 | 0.7441 ± 0.0032 | 0.7820 ± 0.0041 | 0.6313 ± 0.0035 | 0.5683 ± 0.0028 | 0.7269 ± 0.0025 | 0.7293 ± 0.0030 |
| Meta-GNN | 0.6609 ± 0.0032 | 0.6536 ± 0.0097 | 0.8062 ± 0.0019 | 0.7343 ± 0.0023 | 0.6666 ± 0.0027 | 0.6070 ± 0.0062 | 0.7346 ± 0.0021 | 0.6829 ± 0.0020 |
| GCN | 0.8046 ± 0.0039 | 0.7894 ± 0.0041 | 0.8326 ± 0.0027 | 0.7746 ± 0.0030 | 0.7042 ± 0.0038 | 0.6804 ± 0.0021 | 0.7493 ± 0.0021 | 0.6751 ± 0.0025 |
| HAN | 0.6966 ± 0.0058 | 0.6867 ± 0.0033 | 0.8083 ± 0.0024 | 0.7436 ± 0.0059 | 0.6671 ± 0.0035 | 0.6709 ± 0.0020 | 0.7473 ± 0.0022 | 0.6831 ± 0.0050 |
| HGCL | 0.8395 ± 0.0100 | 0.8002 ± 0.0050 | 0.8134 ± 0.0027 | 0.8290 ± 0.0027 | 0.7797 ± 0.0019 | 0.7379 ± 0.0055 | 0.7260 ± 0.0027 | 0.7355 ± 0.0078 |
| ie-HGCN | 0.8514 ± 0.0064 | 0.8416 ± 0.0084 | 0.8110 ± 0.0035 | 0.8450 ± 0.0081 | 0.7871 ± 0.0088 | 0.7570 ± 0.0040 | 0.7378 ± 0.0053 | 0.7561 ± 0.0086 |
| VE-HGCN | 0.8820 ± 0.0035 | 0.8706 ± 0.0042 | 0.8341 ± 0.0028 | 0.8631 ± 0.0031 | 0.8483 ± 0.0040 | 0.8097 ± 0.0038 | 0.7535 ± 0.0025 | 0.7770 ± 0.0033 |
| Variant ID | Operation |
|---|---|
| V1 | Remove the ensemble strategy, use only the single optimal base model. |
| V2 | Replace soft voting with hard voting, majority rule. |
| V3 | Replace soft voting with equal-weighted averaging. |
| V4 | Replace FSP-based hypergraph construction with meta-path-based hypergraph construction. |
| V5 | Replace the HGCN with the classic heterogeneous GCN. |
| Component | Time Complexity |
|---|---|
| Hypergraph Construction | O(K × |E_train|) |
| HGCN Forward Pass (Per Epoch) | O(K × (|V| + |E_hyper|) × d) |
| Ensemble Weight Learning | O(K × |E_val|) |
| Total Training | O(N_s × k + K × |E| × epochs × d) |
| Model \ Dataset | DBLP1 | DBLP2 | IMDB | LAST.FM |
|---|---|---|---|---|
| LINE | 4.45 ± 0.09 | 6.64 ± 0.11 | 6.13 ± 0.10 | 2.04 ± 0.06 |
| Metapath2vec | 5.22 ± 0.12 | 7.86 ± 0.15 | 7.31 ± 0.13 | 2.87 ± 0.09 |
| Meta-GNN | 1.31 ± 0.03 | 1.01 ± 0.02 | 1.04 ± 0.02 | 1.00 ± 0.01 |
| GCN | 1.80 ± 0.04 | 0.95 ± 0.02 | 0.95 ± 0.02 | 0.90 ± 0.01 |
| HAN | 1.13 ± 0.03 | 1.38 ± 0.04 | 1.30 ± 0.03 | 1.76 ± 0.05 |
| HGCL | 4.19 ± 0.22 | 14.45 ± 0.38 | 10.71 ± 0.32 | 11.65 ± 0.35 |
| ie-HGCN | 4.25 ± 0.18 | 9.34 ± 0.26 | 9.24 ± 0.25 | 7.85 ± 0.21 |
| VE-HGCN | 3.16 ± 0.15 | 8.42 ± 0.29 | 7.34 ± 0.27 | 6.09 ± 0.26 |
Share and Cite
Zhang, S.; Huang, Y.; Luo, Z.; Zhou, J.; Wu, B.; Sun, K.; Mao, H. Link Prediction in Heterogeneous Information Networks: Improved Hypergraph Convolution with Adaptive Soft Voting. Entropy 2026, 28, 230. https://doi.org/10.3390/e28020230