Hypergraph Semi-Supervised Contrastive Learning for Hyperedge Prediction Based on Enhanced Attention Aggregator
Abstract
1. Introduction
- We propose to jointly model node influence heterogeneity and hyperedge order effects via order grouping with max–min pooling, and we design a dual-attention propagation architecture for dynamic node–hyperedge interaction modeling, significantly enhancing the semantic discriminability of the learned embeddings.
- We propose a structure-aware view augmentation strategy guided by key nodes, combined with Adaptive Masking Probability Control, to generate diverse yet structurally faithful augmented views at both hyperedge and node levels. This effectively preserves the hypergraph’s core topology and semantics for robust contrastive learning.
- We construct collaborative optimization objectives at the node level, hyperedge level, and node–hyperedge interaction level, generating complementary self-supervised signals that more comprehensively model the multi-layered structural relationships in hypergraphs, effectively mitigating the data sparsity issue.
- Extensive experiments on five real-world hypergraph datasets demonstrate OFSH's superiority over state-of-the-art hyperedge prediction methods and validate the contribution of each module.
2. Related Work
2.1. Hyperedge Prediction Systems
2.2. Hypergraph Attention Networks
2.3. Hypergraph Self-Supervised Contrastive Learning
3. Preliminaries
4. Methodology
- (1) Hypergraph Encoder: A hypergraph encoder generates both node and hyperedge embeddings from the observed hypergraph structure.
- (2) Hypergraph Node Aggregator: A node aggregator employs various aggregation mechanisms to derive more precise node embeddings, which encapsulate rich topological structure and node feature information.
- (3) Hyperedge Candidate Scoring: The node embeddings of a candidate hyperedge are aggregated into a final candidate embedding, which is then fed into the predictor to compute the probability that the candidate hyperedge forms (a minimal sketch of the full pipeline follows this list).
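The following sketch shows how the three stages compose. It is an illustration under assumptions rather than the paper's exact architecture: the encoder and aggregator are passed in as callables, and the `ScorePredictor` MLP head and all names are hypothetical.

```python
import torch
import torch.nn as nn

class ScorePredictor(nn.Module):
    """Hypothetical MLP head: aggregated candidate embedding -> formation probability."""
    def __init__(self, dim: int):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, 1))

    def forward(self, candidate_emb: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.mlp(candidate_emb)).squeeze(-1)

def predict_hyperedge(encoder, aggregator, predictor, X, H, candidate):
    """X: node features [n, d]; H: incidence matrix [n, m];
    candidate: LongTensor of node indices forming the candidate hyperedge."""
    node_emb, edge_emb = encoder(X, H)          # stage 1: node and hyperedge embeddings
    cand_emb = aggregator(node_emb[candidate])  # stage 2: aggregate member embeddings
    return predictor(cand_emb)                  # stage 3: formation probability
```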
4.1. Hypergraph Encoder
- (1) Node to hyperedge: Aggregates the features of the nodes within a hyperedge to form a hyperedge embedding, capturing the collective properties of the group.
- (2) Hyperedge to node: Propagates hyperedge features back to nodes, enabling each node to integrate context from all hyperedges it belongs to, thus reflecting its participation in higher-order structures. A minimal sketch of this two-stage propagation follows this list.
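As a concrete illustration, here is a minimal two-stage propagation layer using mean aggregation over the incidence matrix. This is a generic sketch under assumed shapes, not the paper's encoder, which augments this scheme with attention (Section 4.2).

```python
import torch
import torch.nn as nn

class TwoStageConv(nn.Module):
    """One round of node -> hyperedge -> node propagation over the incidence
    matrix H ([n, m], H[v, e] = 1 iff node v belongs to hyperedge e)."""
    def __init__(self, dim: int):
        super().__init__()
        self.w_edge = nn.Linear(dim, dim)  # transform for hyperedge embeddings
        self.w_node = nn.Linear(dim, dim)  # transform for node embeddings

    def forward(self, X: torch.Tensor, H: torch.Tensor):
        d_e = H.sum(dim=0).clamp(min=1).unsqueeze(1)    # hyperedge orders, [m, 1]
        d_v = H.sum(dim=1).clamp(min=1).unsqueeze(1)    # node degrees,     [n, 1]
        E = torch.relu(self.w_edge((H.t() @ X) / d_e))  # stage 1: pool member nodes
        X_out = torch.relu(self.w_node((H @ E) / d_v))  # stage 2: pool incident edges
        return X_out, E
```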
4.2. Hypergraph Attention Aggregator
4.2.1. Member-Aware Hypergraph Attention Aggregator (MHAA)
4.2.2. Order-Specific Hyperedge Attention Aggregator (OHAA)
- 1. Sum pooling amplifies high-frequency features (e.g., those of highly active nodes) but neglects divergence in feature distributions.
- 2. Average pooling compresses group features toward the centroid, losing discriminative signals across hyperedges.
- 3. Max–min pooling preserves two critical types of information: (1) the strongest, most representative signals (e.g., core node features) and (2) anomalies or boundary patterns within the group. The elementwise range between the maximum and minimum directly quantifies intra-group heterogeneity. This property makes max–min pooling particularly suited to modeling hyperedges that share the same order but are structurally heterogeneous, thereby enhancing embedding discriminability; subsequent experiments (Section 5.5.3) validate the superiority of this mechanism on groups of hyperedges of the same order. The three pooling variants are sketched after this list.
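The contrast between the three strategies fits in a few lines. The sketch below assumes member embeddings are stacked row-wise; the max–min branch follows the elementwise-range formulation also used by NHP [10] and AHP [8], while the function name `aggregate` is illustrative.

```python
import torch

def aggregate(member_embs: torch.Tensor, mode: str = "maxmin") -> torch.Tensor:
    """member_embs: [k, d] embeddings of the k nodes in one candidate hyperedge."""
    if mode == "sum":
        return member_embs.sum(dim=0)    # amplifies dominant (highly active) nodes
    if mode == "mean":
        return member_embs.mean(dim=0)   # compresses the group toward its centroid
    # Elementwise max minus min: keeps the strongest signal per dimension and
    # the spread of the group, i.e., its intra-group heterogeneity.
    return member_embs.max(dim=0).values - member_embs.min(dim=0).values
```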
4.3. Hyperedge Prediction
- Sized Negative Sampling (SNS): Randomly selecting k nodes without considering the network structure.
- Clique NS (CNS): Choosing a hyperedge and substituting one of its nodes with another that is adjacent to all remaining nodes in the hyperedge.
- Motif NS (MNS): Sampling a connected subgraph of k nodes from the clique expansion of the hypergraph. A sketch of all three samplers follows this list.
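A compact sketch of the three samplers, assuming hyperedges are given as lists of node indices and `adjacency` maps each node to its neighbor set in the clique expansion; the function names and the fallback behavior are illustrative only.

```python
import random

def sns(num_nodes: int, k: int) -> set:
    """Sized NS: k nodes chosen uniformly at random, ignoring structure."""
    return set(random.sample(range(num_nodes), k))

def cns(hyperedges: list, adjacency: dict):
    """Clique NS: swap one node of a real hyperedge for a node adjacent
    (in the clique expansion) to all remaining members."""
    e = set(random.choice(hyperedges))
    v = random.choice(sorted(e))
    rest = e - {v}
    candidates = set.intersection(*(adjacency[u] for u in rest)) - e
    if not candidates:
        return None  # no valid substitute; caller retries with another hyperedge
    return rest | {random.choice(sorted(candidates))}

def mns(hyperedges: list, adjacency: dict, k: int) -> set:
    """Motif NS: grow a connected k-node subgraph in the clique expansion."""
    sample = {random.choice(random.choice(hyperedges))}
    while len(sample) < k:
        frontier = set.union(*(adjacency[u] for u in sample)) - sample
        if not frontier:
            break  # component exhausted before reaching k nodes
        sample.add(random.choice(sorted(frontier)))
    return sample
```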
4.4. Hypergraph Augmentation
4.4.1. Hypergraph Key Node Identification
- Local Influence Evaluation
- Global Influence Evaluation (illustrative stand-ins for both evaluations are sketched after this list).
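The paper scores node influence at two scales. Since the exact formulas are not reproduced here, the sketch below uses common stand-ins purely as assumptions: incidence count (hypergraph degree) for local influence, and PageRank on the clique expansion for global influence.

```python
import networkx as nx

def local_influence(hyperedges: list, num_nodes: int) -> list:
    """Local stand-in: hypergraph degree, i.e., the number of incident hyperedges."""
    deg = [0.0] * num_nodes
    for e in hyperedges:
        for v in e:
            deg[v] += 1.0
    return deg

def global_influence(hyperedges: list, num_nodes: int) -> list:
    """Global stand-in: PageRank on the clique expansion of the hypergraph."""
    g = nx.Graph()
    g.add_nodes_from(range(num_nodes))
    for e in hyperedges:
        members = list(e)
        for i in range(len(members)):
            for j in range(i + 1, len(members)):
                g.add_edge(members[i], members[j])
    pr = nx.pagerank(g)
    return [pr[v] for v in range(num_nodes)]
```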
4.4.2. Adaptive Hypergraph View Construction
- Hyperedge-wise Topology Augmentation.
- Node-wise Feature Augmentation (both augmentations are sketched after this list).
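A hedged sketch of Adaptive Masking Probability Control: influence scores (e.g., from the evaluations in Section 4.4.1) are mapped to per-node masking probabilities so that key nodes are rarely perturbed, in the spirit of adaptive graph augmentation [46]. The probability schedule and the parameters `p_base` and `p_max` are assumptions; the paper's exact control rule may differ.

```python
import torch

def adaptive_mask_probs(influence: torch.Tensor, p_base: float = 0.3,
                        p_max: float = 0.7) -> torch.Tensor:
    """Map per-node influence to masking probabilities: the more influential
    (key) a node, the less likely it is to be perturbed."""
    span = influence.max() - influence.min() + 1e-9
    s = (influence.max() - influence) / span   # 0 for key nodes, 1 for peripheral
    return (p_base * s).clamp(max=p_max)

def augment_features(X: torch.Tensor, probs: torch.Tensor) -> torch.Tensor:
    """Node-wise feature augmentation: zero a node's feature vector with its
    own adaptive probability."""
    keep = (torch.rand(X.size(0), device=X.device) >= probs).float().unsqueeze(1)
    return X * keep

def augment_incidence(H: torch.Tensor, probs: torch.Tensor) -> torch.Tensor:
    """Hyperedge-wise topology augmentation: drop node-hyperedge incidences,
    again sparing key nodes."""
    drop = (torch.rand_like(H) < probs.unsqueeze(1)).float()
    return H * (1.0 - drop)
```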
4.4.3. Self-Supervised Hypergraph Contrastive Learning
- Node–Node Contrast Loss.
- Hyperedge–Hyperedge Contrast Loss.
- Node–Hyperedge Contrast Loss (an InfoNCE-style sketch of all three levels follows this list).
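All three objectives can be instantiated with a standard InfoNCE loss (SimCLR-style [49]). The sketch below shows one such instantiation; the weights `alpha`, `beta`, `gamma` and the pairing of each node with one of its incident hyperedges are hypothetical choices, not the paper's exact loss.

```python
import torch
import torch.nn.functional as F

def info_nce(z1: torch.Tensor, z2: torch.Tensor, tau: float = 0.5) -> torch.Tensor:
    """InfoNCE between two views: row i of z1 and row i of z2 are positives;
    every other row serves as an in-batch negative."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau                   # pairwise cosine similarities
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, labels)

def total_contrast_loss(zn1, zn2, ze1, ze2, zn, ze_pos,
                        alpha=1.0, beta=1.0, gamma=1.0):
    """Collaborative objective over the three levels. ze_pos[i] is the embedding
    of a hyperedge containing node i (its positive hyperedge context)."""
    return (alpha * info_nce(zn1, zn2)       # node-node, across the two views
            + beta * info_nce(ze1, ze2)      # hyperedge-hyperedge, across views
            + gamma * info_nce(zn, ze_pos))  # node-hyperedge interaction
```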
5. Experiments and Result Analysis
5.1. Dataset
- Co-citation datasets. Co-citation datasets model relationships through hypergraphs where nodes represent academic papers and hyperedges correspond to sets of papers co-cited by a source paper. Node features are constructed using bag-of-words representations derived from paper abstracts to capture semantic content. Three benchmark datasets are employed: Citeseer (1457 nodes, 1078 hyperedges), Cora (1434 nodes, 1579 hyperedges), and PubMed (3840 nodes, 7962 hyperedges), with feature dimensions of 3703, 1433, and 500, respectively. (https://linqs.soe.ucsc.edu/data accessed on 21 November 2024.)
- Authorship datasets. In authorship datasets, hypergraphs are structured such that nodes represent papers while hyperedges denote sets of papers co-authored by individual researchers. Node features are derived from bag-of-words representations of paper abstracts, with the Cora-A (https://people.cs.umass.edu/mccallum/data.html accessed on 28 November 2024.) dataset (containing 2388 nodes and 1072 hyperedges) featuring node dimensions of 1433.
- Collaboration dataset. This collaboration dataset uses a hypergraph model where nodes represent researchers and hyperedges denote groups of co-authors per publication. Node features are averages of bag-of-words vectors from paper abstracts authored by each researcher. The DBLP (https://lfs.aminer.cn/lab-datasets/citation/DBLP-citation-network-Oct-19.tar.gz accessed on 11 December 2024.) dataset contains 15,639 researcher nodes and 22,964 hyperedges (derived from 22,964 publications across 87 venues), with 4543-dimensional features available via the Aminer network.
5.2. Baselines
- Expansion [7]. Expansion predicts future hyperedges by transforming the hypergraph into multiple n-projected graphs.
- Hyper-SAGNN [29]. Hyper-SAGNN employs a self-attention-based graph neural network to learn representations of variable-sized candidate hyperedges and estimate their formation probabilities.
- NHP [10]. NHP utilizes a hyperedge-aware graph convolutional network to learn node embeddings and aggregates the features of candidate hyperedge nodes through max–min pooling.
- AHP [8]. AHP combines adversarial training to generate negative samples and adopts max–min pooling for node aggregation.
- CASH [25]. CASH addresses the node aggregation challenge via a context-aware aggregation strategy and mitigates data sparsity through dual contrastive loss coupled with hyperedge-aware enhanced self-supervised learning.
- OFSH. Our proposed method.
5.3. Implementation Details
5.4. Performance on Embedding
5.5. Analysis Experiment
5.5.1. Attention Aggregation Analysis
5.5.2. Order Attention Effectiveness Analysis
5.5.3. Max–Min Method Effectiveness Analysis
5.5.4. Training Time
5.6. Ablation Experiment
- 1. No hypergraph Attention Aggregator and no Hypergraph Augmentation (NOAA-HA): This variant removes both the hypergraph attention aggregation module and the self-supervised contrastive loss for hypergraph augmentation.
- 2. No hypergraph Attention Aggregator with Random augmentation (NOAA-Random): This variant removes the hypergraph attention aggregation module and replaces the importance-based masking augmentation with purely random augmentation, while retaining the self-supervised contrastive loss during training.
- 3. No hypergraph Attention Aggregator (NOAA): This variant removes the hypergraph attention aggregation module while retaining the proposed hypergraph augmentation module and the self-supervised contrastive loss during training.
5.7. Hyperparameter Study
6. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Zhou, D.; Huang, J.; Schölkopf, B. Learning with hypergraphs: Clustering, classification, and embedding. Adv. Neural Inf. Process. Syst. 2006, 19, 1601–1608.
- Liben-Nowell, D.; Kleinberg, J. The link-prediction problem for social networks. J. Am. Soc. Inf. Sci. Technol. 2007, 58, 1019–1031.
- Liu, Y.; Qiu, S.; Zhang, P.; Gong, P.; Wang, F.; Xue, G.; Ye, J. Computational drug discovery with dyadic positive-unlabeled learning. In Proceedings of the 2017 SIAM International Conference on Data Mining, Houston, TX, USA, 27–29 April 2017; pp. 45–53.
- Nickel, M.; Murphy, K.; Tresp, V.; Gabrilovich, E. A review of relational machine learning for knowledge graphs. Proc. IEEE 2015, 104, 11–33.
- Mei, Z.; Bi, X.; Li, D.; Xia, W.; Yang, F.; Wu, H. DHHNN: A dynamic hypergraph hyperbolic neural network based on variational autoencoder for multimodal data integration and node classification. Inf. Fusion 2025, 119, 103016.
- Chen, C.; Wang, Y.; Zhang, Y.; Zhang, N.; Feng, H.; Xu, D. Hypergraph neural network for remote sensing hyperspectral image super-resolution. Knowl.-Based Syst. 2025, 321, 113755.
- Yoon, S.e.; Song, H.; Shin, K. How much and when do we need higher-order information in hypergraphs? A case study on hyperedge prediction. In Proceedings of the Web Conference 2020, Taipei, Taiwan, 20–24 April 2020; pp. 2627–2633.
- Hwang, H.; Lee, S.; Park, C.; Shin, K. AHP: Learning to negative sample for hyperedge prediction. In Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval, Madrid, Spain, 11–15 July 2022; pp. 2237–2242.
- Dong, Y.; Sawin, W.; Bengio, Y. HNHN: Hypergraph networks with hyperedge neurons. arXiv 2020, arXiv:2006.12278.
- Yadati, N.; Nitin, V.; Nimishakavi, M.; Yadav, P.; Louis, A.; Talukdar, P. NHP: Neural hypergraph link prediction. In Proceedings of the 29th ACM International Conference on Information & Knowledge Management, Virtual Event, Ireland, 19–23 October 2020; pp. 1705–1714.
- Wu, H.; Li, N.; Zhang, J.; Chen, S.; Ng, M.K.; Long, J. Collaborative contrastive learning for hypergraph node classification. Pattern Recognit. 2024, 146, 109995.
- Chitra, U.; Raphael, B. Random walks on hypergraphs with edge-dependent vertex weights. In Proceedings of the International Conference on Machine Learning, Long Beach, CA, USA, 9–15 June 2019; pp. 1172–1181.
- Chai, L.; Tu, L.; Wang, X.; Su, Q. Hypergraph modeling and hypergraph multi-view attention neural network for link prediction. Pattern Recognit. 2024, 149, 110292.
- Kumar, T.; Darwin, K.; Parthasarathy, S.; Ravindran, B. HPRA: Hyperedge prediction using resource allocation. In Proceedings of the 12th ACM Conference on Web Science, New York, NY, USA, 6–10 July 2020; pp. 135–143.
- Choo, H.; Shin, K. On the persistence of higher-order interactions in real-world hypergraphs. In Proceedings of the 2022 SIAM International Conference on Data Mining (SDM), Alexandria, VA, USA, 28–30 April 2022; pp. 163–171.
- Zhang, Z.; Feng, Z.; Zhao, X.; Jean, D.; Yu, Z.; Chapman, E.R. Functionalization and higher-order organization of liposomes with DNA nanostructures. Nat. Commun. 2023, 14, 5256.
- Feng, Y.; You, H.; Zhang, Z.; Ji, R.; Gao, Y. Hypergraph neural networks. In Proceedings of the AAAI Conference on Artificial Intelligence, Honolulu, HI, USA, 27 January–1 February 2019; Volume 33, pp. 3558–3565.
- Tu, K.; Cui, P.; Wang, X.; Wang, F.; Zhu, W. Structural deep embedding for hyper-networks. In Proceedings of the AAAI Conference on Artificial Intelligence, New Orleans, LA, USA, 2–7 February 2018; Volume 32.
- Yadati, N.; Nimishakavi, M.; Yadav, P.; Nitin, V.; Louis, A.; Talukdar, P. HyperGCN: A new method for training graph convolutional networks on hypergraphs. Adv. Neural Inf. Process. Syst. 2019, 32, 1511–1522.
- Ding, K.; Wang, J.; Li, J.; Li, D.; Liu, H. Be more with less: Hypergraph attention networks for inductive text classification. arXiv 2020, arXiv:2011.00387.
- Yang, C.; Wang, R.; Yao, S.; Abdelzaher, T. Semi-supervised hypergraph node classification on hypergraph line expansion. In Proceedings of the 31st ACM International Conference on Information & Knowledge Management, Atlanta, GA, USA, 17–21 October 2022; pp. 2352–2361.
- Wu, H.; Yan, Y.; Ng, M.K.P. Hypergraph collaborative network on vertices and hyperedges. IEEE Trans. Pattern Anal. Mach. Intell. 2023, 45, 3245–3258.
- Chien, E.; Pan, C.; Peng, J.; Milenkovic, O. You are AllSet: A multiset function framework for hypergraph neural networks. arXiv 2021, arXiv:2106.13264.
- Wang, J.; Chen, J.; Wang, Z.; Gong, M. Hypergraph contrastive attention networks for hyperedge prediction with negative samples evaluation. Neural Netw. 2025, 181, 106807.
- Ko, Y.; Tong, H.; Kim, S.W. Enhancing hyperedge prediction with context-aware self-supervised learning. IEEE Trans. Knowl. Data Eng. 2025, 37, 1772–1784.
- Song, Y.; Gu, Y.; Li, T.; Qi, J.; Liu, Z.; Jensen, C.S.; Yu, G. CHGNN: A semi-supervised contrastive hypergraph learning network. IEEE Trans. Knowl. Data Eng. 2024, 36, 4515–4530.
- Lee, D.; Shin, K. I'm me, we're us, and I'm us: Tri-directional contrastive learning on hypergraphs. In Proceedings of the AAAI Conference on Artificial Intelligence, Washington, DC, USA, 7–14 February 2023; Volume 37, pp. 8456–8464.
- Patil, P.; Sharma, G.; Murty, M.N. Negative sampling for hyperlink prediction in networks. In Proceedings of the Pacific-Asia Conference on Knowledge Discovery and Data Mining, Singapore, 11–14 May 2020; pp. 607–619.
- Zhang, R.; Zou, Y.; Ma, J. Hyper-SAGNN: A self-attention based graph neural network for hypergraphs. arXiv 2019, arXiv:1911.02613.
- Veličković, P.; Fedus, W.; Hamilton, W.L.; Liò, P.; Bengio, Y.; Hjelm, R.D. Deep graph infomax. arXiv 2018, arXiv:1809.10341.
- You, Y.; Chen, T.; Sui, Y.; Chen, T.; Wang, Z.; Shen, Y. Graph contrastive learning with augmentations. Adv. Neural Inf. Process. Syst. 2020, 33, 5812–5823.
- Chen, C.; Liao, C.; Liu, Y.Y. Teasing out missing reactions in genome-scale metabolic networks through hypergraph learning. Nat. Commun. 2023, 14, 2375.
- He, L.; Bai, L.; Yang, X.; Du, H.; Liang, J. High-order graph attention network. Inf. Sci. 2023, 630, 222–234.
- Sun, L.; Rao, Y.; Zhang, X.; Lan, Y.; Yu, S. MS-HGAT: Memory-enhanced sequential hypergraph attention network for information diffusion prediction. In Proceedings of the AAAI Conference on Artificial Intelligence, Online, 22 February–1 March 2022; Volume 36, pp. 4156–4164.
- Veličković, P.; Cucurull, G.; Casanova, A.; Romero, A.; Lio, P.; Bengio, Y. Graph attention networks. arXiv 2017, arXiv:1710.10903.
- Goh, C.W.J.; Bodnar, C.; Lio, P. Simplicial attention networks. arXiv 2022, arXiv:2204.09455.
- Li, M.; Zhang, Y.; Li, X.; Zhang, Y.; Yin, B. Hypergraph transformer neural networks. ACM Trans. Knowl. Discov. Data 2023, 17, 1–22.
- Gao, J.; Gao, J.; Ying, X.; Lu, M.; Wang, J. Higher-order interaction goes neural: A substructure assembling graph attention network for graph classification. IEEE Trans. Knowl. Data Eng. 2021, 35, 1594–1608.
- Yan, Y.; Chen, Y.; Wang, S.; Wu, H.; Cai, R. Hypergraph joint representation learning for hypervertices and hyperedges via cross expansion. In Proceedings of the AAAI Conference on Artificial Intelligence, Vancouver, BC, Canada, 20–27 February 2024; Volume 38, pp. 9232–9240.
- Xiang, H.; Jin, S.; Liu, X.; Zeng, X.; Zeng, L. Chemical structure-aware molecular image representation learning. Briefings Bioinform. 2023, 24, bbad404.
- Yu, J.; Yin, H.; Li, J.; Wang, Q.; Hung, N.Q.V.; Zhang, X. Self-supervised multi-channel hypergraph convolutional network for social recommendation. In Proceedings of the Web Conference 2021, Ljubljana, Slovenia, 19–23 April 2021; pp. 413–424.
- Zhang, J.; Gao, M.; Yu, J.; Guo, L.; Li, J.; Yin, H. Double-scale self-supervised hypergraph learning for group recommendation. In Proceedings of the 30th ACM International Conference on Information & Knowledge Management, Virtual Event, Australia, 1–5 November 2021; pp. 2557–2567.
- Du, B.; Yuan, C.; Barton, R.; Neiman, T.; Tong, H. Hypergraph pre-training with graph neural networks. arXiv 2021, arXiv:2105.10862.
- Wei, T.; You, Y.; Chen, T.; Shen, Y.; He, J.; Wang, Z. Augmentations in hypergraph contrastive learning: Fabricated and generative. Adv. Neural Inf. Process. Syst. 2022, 35, 1909–1922.
- Xu, Z.; Wei, P.; Liu, S.; Zhang, W.; Wang, L.; Zheng, B. Correlative preference transfer with hierarchical hypergraph network for multi-domain recommendation. In Proceedings of the ACM Web Conference 2023, Austin, TX, USA, 30 April–4 May 2023; pp. 983–991.
- Zhu, Y.; Xu, Y.; Yu, F.; Liu, Q.; Wu, S.; Wang, L. Graph contrastive learning with adaptive augmentation. In Proceedings of the Web Conference 2021, Ljubljana, Slovenia, 19–23 April 2021; pp. 2069–2080.
- Lee, G.; Choe, M.; Shin, K. How do hyperedges overlap in real-world hypergraphs?—Patterns, measures, and generators. In Proceedings of the Web Conference 2021, Ljubljana, Slovenia, 19–23 April 2021; pp. 3396–3407.
- Newman, M. Networks; Oxford University Press: Oxford, UK, 2018.
- Chen, T.; Kornblith, S.; Norouzi, M.; Hinton, G. A simple framework for contrastive learning of visual representations. In Proceedings of the International Conference on Machine Learning, Virtual, 13–18 July 2020; pp. 1597–1607.
Datasets | Category | #Nodes | #Hyperedges | #Features | Ave. Order | Max. Order | Min. Order
---|---|---|---|---|---|---|---
Citeseer | Co-citation | 1457 | 1078 | 3703 | 3.2 | 26 | 2
Cora | Co-citation | 1434 | 1579 | 1433 | 3 | 5 | 2
PubMed | Co-citation | 3840 | 7962 | 500 | 4.3 | 171 | 2
Cora-A | Authorship | 2388 | 1072 | 1433 | 4.3 | 43 | 2
DBLP | Collaboration | 15,639 | 22,964 | 4543 | 2.7 | 18 | 2
Dataset | Method | AUROC (SNS) | AUROC (CNS) | AUROC (MNS) | AUROC (MIX) | AUROC (Avg.) | AP (SNS) | AP (CNS) | AP (MNS) | AP (MIX) | AP (Avg.)
---|---|---|---|---|---|---|---|---|---|---|---
Citeseer | Expansion | 0.663 | 0.331 | 0.781 | 0.558 | 0.591 | 0.765 | 0.498 | 0.871 | 0.631 | 0.681
Citeseer | Hyper-SAGNN | 0.540 | 0.473 | 0.410 | 0.478 | 0.475 | 0.627 | 0.497 | 0.455 | 0.507 | 0.512
Citeseer | NHP | 0.991 | 0.510 | 0.701 | 0.817 | 0.751 | 0.990 | 0.520 | 0.731 | 0.768 | 0.751
Citeseer | AHP | 0.943 | 0.651 | 0.881 | 0.820 | 0.824 | 0.952 | 0.660 | 0.870 | 0.795 | 0.819
Citeseer | CASH | 0.928 | 0.708 | 0.906 | 0.838 | 0.838 | 0.927 | 0.705 | 0.910 | 0.824 | 0.827
Citeseer | OFSH | 0.967 | 0.744 | 0.935 | 0.879 | 0.880 | 0.969 | 0.714 | 0.922 | 0.838 | 0.859
Cora | Expansion | 0.470 | 0.256 | 0.707 | 0.476 | 0.477 | 0.637 | 0.454 | 0.764 | 0.563 | 0.607
Cora | Hyper-SAGNN | 0.617 | 0.494 | 0.527 | 0.540 | 0.545 | 0.687 | 0.508 | 0.574 | 0.566 | 0.584
Cora | NHP | 0.943 | 0.472 | 0.641 | 0.774 | 0.703 | 0.949 | 0.509 | 0.678 | 0.744 | 0.718
Cora | AHP | 0.964 | 0.572 | 0.860 | 0.799 | 0.799 | 0.961 | 0.552 | 0.837 | 0.740 | 0.772
Cora | CASH | 0.918 | 0.642 | 0.857 | 0.812 | 0.811 | 0.913 | 0.617 | 0.846 | 0.771 | 0.784
Cora | OFSH | 0.938 | 0.684 | 0.869 | 0.844 | 0.836 | 0.935 | 0.660 | 0.870 | 0.829 | 0.823
PubMed | Expansion | 0.520 | 0.241 | 0.730 | 0.497 | 0.497 | 0.675 | 0.440 | 0.755 | 0.565 | 0.612
PubMed | Hyper-SAGNN | 0.525 | 0.546 | 0.686 | 0.580 | 0.584 | 0.534 | 0.529 | 0.680 | 0.561 | 0.576
PubMed | NHP | 0.973 | 0.524 | 0.694 | 0.745 | 0.733 | 0.973 | 0.513 | 0.656 | 0.678 | 0.707
PubMed | AHP | 0.917 | 0.553 | 0.840 | 0.763 | 0.763 | 0.918 | 0.526 | 0.834 | 0.717 | 0.749
PubMed | CASH | 0.801 | 0.636 | 0.867 | 0.769 | 0.774 | 0.804 | 0.639 | 0.875 | 0.761 | 0.764
PubMed | OFSH | 0.849 | 0.646 | 0.879 | 0.776 | 0.786 | 0.852 | 0.648 | 0.886 | 0.766 | 0.788
Cora-A | Expansion | 0.690 | 0.434 | 0.842 | 0.658 | 0.656 | 0.690 | 0.577 | 0.876 | 0.672 | 0.706
Cora-A | Hyper-SAGNN | 0.386 | 0.542 | 0.591 | 0.505 | 0.506 | 0.532 | 0.545 | 0.643 | 0.563 | 0.571
Cora-A | NHP | 0.909 | 0.550 | 0.672 | 0.773 | 0.723 | 0.925 | 0.585 | 0.720 | 0.766 | 0.748
Cora-A | AHP | 0.958 | 0.782 | 0.924 | 0.887 | 0.888 | 0.957 | 0.796 | 0.898 | 0.878 | 0.882
Cora-A | CASH | 0.954 | 0.794 | 0.956 | 0.906 | 0.905 | 0.955 | 0.794 | 0.956 | 0.900 | 0.901
Cora-A | OFSH | 0.973 | 0.834 | 0.972 | 0.940 | 0.930 | 0.975 | 0.839 | 0.972 | 0.934 | 0.931
DBLP | Expansion | 0.645 | 0.366 | 0.801 | 0.607 | 0.607 | 0.751 | 0.518 | 0.856 | 0.655 | 0.698
DBLP | Hyper-SAGNN | 0.448 | 0.572 | 0.574 | 0.530 | 0.531 | 0.562 | 0.586 | 0.602 | 0.577 | 0.582
DBLP | NHP | 0.663 | 0.503 | 0.540 | 0.572 | 0.569 | 0.608 | 0.501 | 0.523 | 0.542 | 0.544
DBLP | AHP | 0.946 | 0.568 | 0.820 | 0.778 | 0.778 | 0.947 | 0.561 | 0.815 | 0.735 | 0.764
DBLP | CASH | 0.840 | 0.701 | 0.826 | 0.789 | 0.791 | 0.839 | 0.687 | 0.816 | 0.779 | 0.782
DBLP | OFSH | 0.884 | 0.713 | 0.838 | 0.813 | 0.811 | 0.878 | 0.693 | 0.833 | 0.798 | 0.800
Dataset | Extraction Strategy | AUROC | Average Precision (AP)
---|---|---|---
Citeseer | Sum | 0.953 | 0.955
Citeseer | Mean | 0.952 | 0.953
Citeseer | Max–min | 0.967 | 0.969
Cora | Sum | 0.928 | 0.927
Cora | Mean | 0.929 | 0.926
Cora | Max–min | 0.938 | 0.935
Cora-A | Sum | 0.963 | 0.962
Cora-A | Mean | 0.961 | 0.963
Cora-A | Max–min | 0.973 | 0.975
Dataset | Variant | AUROC (SNS) | AUROC (CNS) | AUROC (MNS) | AUROC (MIX) | AUROC (Avg.) | AP (SNS) | AP (CNS) | AP (MNS) | AP (MIX) | AP (Avg.)
---|---|---|---|---|---|---|---|---|---|---|---
Citeseer | NOAA-HA | 0.882 | 0.645 | 0.859 | 0.798 | 0.796 | 0.895 | 0.670 | 0.862 | 0.788 | 0.803
Citeseer | NOAA-Random | 0.910 | 0.685 | 0.898 | 0.836 | 0.833 | 0.912 | 0.683 | 0.879 | 0.822 | 0.824
Citeseer | NOAA | 0.913 | 0.698 | 0.905 | 0.842 | 0.840 | 0.915 | 0.702 | 0.889 | 0.828 | 0.833
Citeseer | OFSH | 0.967 | 0.744 | 0.935 | 0.879 | 0.880 | 0.969 | 0.714 | 0.922 | 0.838 | 0.859
Cora | NOAA-HA | 0.866 | 0.560 | 0.780 | 0.732 | 0.734 | 0.864 | 0.563 | 0.776 | 0.720 | 0.730
Cora | NOAA-Random | 0.909 | 0.620 | 0.857 | 0.795 | 0.795 | 0.883 | 0.589 | 0.829 | 0.765 | 0.766
Cora | NOAA | 0.911 | 0.625 | 0.863 | 0.799 | 0.799 | 0.890 | 0.591 | 0.831 | 0.781 | 0.780
Cora | OFSH | 0.938 | 0.684 | 0.869 | 0.836 | 0.836 | 0.935 | 0.660 | 0.870 | 0.829 | 0.823
Cora-A | NOAA-HA | 0.949 | 0.703 | 0.905 | 0.853 | 0.853 | 0.950 | 0.746 | 0.910 | 0.867 | 0.868
Cora-A | NOAA-Random | 0.945 | 0.766 | 0.937 | 0.889 | 0.885 | 0.948 | 0.783 | 0.939 | 0.893 | 0.890
Cora-A | NOAA | 0.972 | 0.832 | 0.951 | 0.923 | 0.920 | 0.972 | 0.847 | 0.921 | 0.910 | 0.912
Cora-A | OFSH | 0.973 | 0.834 | 0.972 | 0.940 | 0.930 | 0.975 | 0.839 | 0.972 | 0.934 | 0.931