PDEC: A Framework for Improving Knowledge Graph Reasoning Performance through Predicate Decomposition
Abstract
1. Introduction
2. Related Work
- Transductive KGC via KG embeddings. A number of works have been proposed for KGC tasks using learnable embeddings for KG relations and entities. For example, the methods in [1,2,3,4,13] learn to map KG relations into vector space and predict links with scoring functions, while NTN [14] parameterizes each relation as a neural network. In [15], the authors present a theoretical framework that highlights the capabilities of graph neural networks (GNNs) in embedding entities and relations within KGs and executing link prediction tasks. The paper [16] proposes a divide–search–combine algorithm, RelEns-DSC, to efficiently search relation-aware ensemble weights for KG embedding. Because these algorithms must learn the embeddings of test-set entities and relations during training, they are generally suitable only for transductive KGC tasks and cannot be applied to settings where the test set contains new predicates, such as predicate decomposition.
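As a concrete illustration of the scoring-function approach, a minimal TransE [1] scorer can be sketched as follows; the embedding values below are toy numbers for illustration, not learned parameters:

```python
import numpy as np

def transe_score(h, r, t, norm=1):
    """TransE plausibility score for a triple (h, r, t): -||h + r - t||.

    Embeddings are trained so that h + r ≈ t for true facts, so higher
    (less negative) scores indicate more plausible triples.
    """
    return -np.linalg.norm(h + r - t, ord=norm)

# Toy 4-dimensional embeddings (illustrative values only).
h = np.array([0.1, 0.2, 0.3, 0.4])   # head entity
r = np.array([0.5, 0.1, -0.2, 0.0])  # relation
t = np.array([0.5, 0.3, 0.2, 0.4])   # tail entity

score = transe_score(h, r, t)  # small in magnitude, since h + r is close to t
```

Other embedding models in this family differ only in the scoring function (e.g., a bilinear product for DistMult [2] or a complex rotation for RotatE [4]).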
- Inductive KGC. In recent years, inductive KG reasoning has gained increasing attention. This line of work performs reasoning in a bottom-up manner, focusing on the emergence of new entities. Methods such as [17,18] emphasize the importance of modeling emerging entities, while [19] introduces rule-based attention weights and [20] extends RotatE [4] to enhance inductive reasoning. Alternatively, some research conducts inductive KGC through rule mining. Neural LP [7] and DRUM [8] reduce the rule-learning problem to algebraic operations on neural-embedding-based representations of a given KG. LogCo [21] combines logical reasoning with contrastive representations, extracting subgraphs and relational paths to achieve entity independence and address supervision deficiencies, and attains superior performance on inductive KG reasoning. The paper [22] proposes an adaptive propagation-path learning method for GNN-based inductive KG reasoning, addressing the limitations of hand-designed paths and the explosive growth of entities. Compared with these methods, we tackle a more challenging problem in which predicates, not just some entities, can be new.
- Predicate-inductive KGC. Predicate-inductive KGC is the latest development of inductive KGC (also known as entity-inductive KGC); it aims to enable KG reasoning models to perform KGC on predicate sets that do not exist in the training set. InGram [11] exhibits remarkable reasoning capabilities for novel predicates and can handle arbitrary new predicates. However, its novel predicates are not generated via polysemy splitting or synonymy merging; furthermore, a message-passing step on the graph containing the novel predicates is still required prior to reasoning, which makes it incapable of effectively detecting missing predicates. Although RMPI [12] exhibits some ability to discover new predicates, it relies on additional ontology information. These characteristics hinder efforts to optimize the quality of the original graph and the performance of reasoning models built upon it. In contrast, PDEC excels at discovering new predicates by decomposing polysemous predicates, thereby enhancing the reasoning performance of the original KGs.
3. Materials and Methods
3.1. Preliminaries
- KGs. A KG is a collection of triples G = {(h, r, t)} ⊆ E × R × E, where E and R represent the sets of entities and predicates in the KG, respectively. In a triple (h, r, t), entities h and t are referred to as the head entity and tail entity, respectively, and r is the predicate connecting them.
- KG embedding and notations. Within the context of a KG G, KG embedding aims to represent each entity e ∈ E and each relation r ∈ R using continuous vectors, denoted as **e** and **r**, respectively. These vectors serve as dense representations that capture the meaning and relationships encoded in the KG.
- KGC tasks. KGC, or knowledge graph completion, involves inferring missing facts from the known facts within KGs. The objective of a KG reasoning model is to effectively rank positive triplets higher than negative triplets, thereby accurately identifying potential positive triplets that may have been overlooked in the current graph.
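The ranking objective can be made concrete with the standard MRR/Hits@k evaluation protocol; the following is a generic sketch (not the authors' evaluation code), with `score_fn` standing in for any triple-scoring model:

```python
import numpy as np

def rank_of_true_tail(score_fn, h, r, t_true, all_entities):
    """Rank of the true tail among all candidate tails (1 = best)."""
    scores = np.array([score_fn(h, r, e) for e in all_entities])
    true_score = score_fn(h, r, t_true)
    # Rank = 1 + number of candidates scored strictly higher than the truth.
    return 1 + int((scores > true_score).sum())

def mrr_and_hits(ranks, k=10):
    """Mean reciprocal rank and Hits@k over a list of ranks."""
    ranks = np.asarray(ranks, dtype=float)
    return (1.0 / ranks).mean(), (ranks <= k).mean()

# Toy example: ranks of the true entity for five test queries.
mrr, hits10 = mrr_and_hits([1, 2, 4, 12, 3], k=10)
```

A model that ranks overlooked positive triplets near the top therefore attains high MRR and Hits@k, which is exactly what the tables in Section 4 report.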
3.2. Predicate Decomposition
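The core decomposition step (clustering one predicate's edge vectors and splitting the predicate when the clusters are well separated, as in Algorithm 1 with TransE-generated edge vectors) can be sketched roughly as follows. This is an illustrative sketch, not the authors' exact algorithm: the choice of k-means, the Calinski-Harabasz criterion [23], and the threshold value are assumptions for the example.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import calinski_harabasz_score

def maybe_split_predicate(edge_vecs, k=2, ch_threshold=50.0):
    """Split one predicate's triples into k sub-predicates if its edge
    vectors form well-separated clusters.

    edge_vecs: (n, d) array with one vector per triple of this predicate
    (e.g., t - h under a TransE-style embedding). Returns cluster labels
    (triples relabelled as sub-predicates) if the Calinski-Harabasz score
    exceeds the threshold, else None (the predicate is kept intact).
    """
    if len(edge_vecs) <= k:
        return None
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(edge_vecs)
    if calinski_harabasz_score(edge_vecs, labels) > ch_threshold:
        return labels
    return None
```

A polysemous predicate (one relation name covering several distinct semantic patterns) yields multimodal edge vectors, so its clusters score highly and it is split; a monosemous predicate falls below the threshold and is left unchanged.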
3.3. KG Reasoning Based on Predicate Decomposition
3.4. The Adaptive Optimization Mechanism of PDEC
Algorithm 1. KG reasoning based on polysemous predicate induction.
3.5. Synonymous Predicate Merging
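The merging step, the dual of decomposition, can be sketched as greedily folding together predicates whose learned embeddings are nearly parallel. This is an assumption-laden illustration, not the authors' exact procedure: the cosine-similarity criterion, the threshold, and the cap on the number of merges are chosen for the example (the cap mirrors the 0/7/37 settings tested in Section 4).

```python
import numpy as np

def merge_synonymous_predicates(rel_vecs, names, sim_threshold=0.95, max_merges=7):
    """Greedily merge predicate pairs whose embedding cosine similarity
    exceeds a threshold, performing at most max_merges merges.

    rel_vecs: (n, d) array of relation embeddings; names: their labels.
    Returns a mapping old_name -> canonical_name.
    """
    v = rel_vecs / np.linalg.norm(rel_vecs, axis=1, keepdims=True)
    sim = v @ v.T  # pairwise cosine similarities
    mapping = {n: n for n in names}
    merges = 0
    for i in range(len(names)):
        for j in range(i + 1, len(names)):
            if merges >= max_merges:
                return mapping
            # Merge j into i's group only if j has not been merged already.
            if sim[i, j] > sim_threshold and mapping[names[j]] == names[j]:
                mapping[names[j]] = mapping[names[i]]
                merges += 1
    return mapping
```

Triples are then rewritten to use the canonical predicate names, shrinking the predicate set before retraining.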
3.6. Theoretical Proof of PDEC
4. Results
4.1. Experimental Setup
- Datasets. Open-world KGC tasks are commonly evaluated on subsets of WordNet and Freebase, such as FB15K-237 [25], and on YAGO3 [24]. To verify the effectiveness of predicate decomposition, we focus on KG benchmarks with many predicates and high difficulty. We therefore selected FB15K-237, YAGO3-10, and NELL-995 [26] as the benchmark datasets.
- FB15K-237 is a subset of the Freebase knowledge base [10] containing general knowledge facts.
- The YAGO3-10 dataset is a subset of YAGO3 that contains only entities with at least 10 relations. In total, YAGO3-10 has 123,182 entities, 37 relations, and 1,179,040 triples; most triples describe attributes of persons such as citizenship, gender, and profession.
- The NELL-995 dataset is a subset of NELL [27] created from the 995th iteration of the construction. NELL-995 includes 75,492 entities, 200 relations, and 154,208 triples.
- Baselines. To test the effectiveness and universality of PDEC, we selected a broad range of mature KG reasoning models. Each model running on the original dataset serves as a baseline; we then run each baseline in conjunction with PDEC and record its performance improvement. The chosen baselines include TransE, TransH [29], TransR [30], TransD [31], ComplEx [32], DistMult [2], TuckER [13], RotatE, CompGCN [33], RelEns-DSC [16], etc.
- Experiment setting details. We implement the baseline models and their corresponding PDEC variants based on the OpenKE project [34]. We set the entity and relation embedding size to 200 in all experiments. We use Adam optimization [35] and search the learning rate (0.001–0.005) and minibatch size (64–256). We apply dropout to the entity and relation embeddings and to all feed-forward layers, searching dropout rates in the range [0, 0.6].
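The search described above can be written out as a small grid; the specific grid points below are assumptions chosen inside the reported ranges (only the ranges themselves come from the text):

```python
from itertools import product

# Hypothetical grid over the reported search ranges.
search_space = {
    "lr": [0.001, 0.002, 0.005],        # learning rate, searched in 0.001-0.005
    "batch_size": [64, 128, 256],       # minibatch size, searched in 64-256
    "dropout": [0.0, 0.2, 0.4, 0.6],    # dropout rate, searched up to 0.6
}

# Enumerate all 3 * 3 * 4 = 36 candidate configurations; each would be
# trained with 200-dimensional entity/relation embeddings and Adam.
configs = [dict(zip(search_space, vals)) for vals in product(*search_space.values())]
```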
4.2. Experiment Results
- The KGC performance improvement on benchmarks brought by PDEC. We tested the performance of the baseline reasoning models on FB15K-237, YAGO3-10, and NELL-995, as well as the improvement brought by PDEC, as shown in Table 1. The initial edge vectors are generated using TransE. We performed PDEC using Algorithm 1, resulting in datasets with new predicates. For a fair comparison, each reasoning model uses identical hyperparameters on the original benchmark and on the benchmark after PDEC. The results indicate that PDEC effectively improves the performance of KG reasoning models.
Table 1. KGC results before (✘) and after (✔) applying PDEC; MRR is a fraction, Hits@k values are percentages.

Method | PDEC | FB15K-237 MRR | Hits@1 | Hits@3 | Hits@10 | YAGO3-10 MRR | Hits@1 | Hits@3 | Hits@10 | NELL-995 MRR | Hits@1 | Hits@3 | Hits@10
---|---|---|---|---|---|---|---|---|---|---|---|---|---
TransE | ✘ | 0.289 | 19.3 | 32.6 | 47.9 | 0.330 | 21.5 | 38.8 | 54.9 | 0.252 | 9.64 | 38.5 | 47.2
TransE | ✔ | 0.349 | 24.2 | 39.9 | 55.8 | 0.395 | 28.0 | 46.0 | 60.3 | 0.287 | 11.7 | 43.0 | 52.7
DistMult | ✘ | 0.187 | 10.3 | 20.7 | 36.1 | 0.073 | 2.93 | 6.87 | 16.7 | 0.163 | 7.10 | 19.1 | 33.9
DistMult | ✔ | 0.224 | 13.3 | 25.4 | 40.9 | 0.112 | 3.93 | 12.0 | 28.5 | 0.186 | 9.0 | 21.6 | 35.4
TransH | ✘ | 0.286 | 18.4 | 32.9 | 48.4 | 0.332 | 22.1 | 38.9 | 54.4 | 0.255 | 10.0 | 39.6 | 48.8
TransH | ✔ | 0.314 | 20.6 | 36.1 | 52.2 | 0.385 | 26.6 | 43.7 | 62.2 | 0.278 | 13.5 | 40.7 | 51.6
RotatE | ✘ | 0.321 | 22.8 | 35.6 | 50.6 | 0.270 | 17.9 | 30.3 | 44.6 | 0.368 | 31.4 | 40.5 | 45.6
RotatE | ✔ | 0.360 | 26.4 | 40.2 | 54.7 | 0.355 | 25.6 | 39.8 | 54.3 | 0.374 | 32.1 | 41.1 | 46.0
TransR | ✘ | 0.305 | 20.8 | 34.4 | 49.6 | 0.321 | 11.6 | 48.6 | 63.7 | 0.261 | 10.8 | 39.4 | 49.0
TransR | ✔ | 0.334 | 23.5 | 37.8 | 53.3 | 0.496 | 39.2 | 56.0 | 68.3 | 0.276 | 11.7 | 41.0 | 53.3
TransD | ✘ | 0.284 | 18.1 | 32.7 | 48.6 | 0.323 | 21.5 | 35.9 | 53.8 | 0.263 | 10.5 | 39.9 | 50.1
TransD | ✔ | 0.310 | 20.0 | 36.1 | 51.9 | 0.377 | 25.1 | 42.6 | 61.7 | 0.280 | 12.1 | 40.4 | 50.8
ComplEx | ✘ | 0.238 | 15.2 | 26.5 | 41.0 | 0.106 | 2.96 | 11.4 | 26.6 | 0.222 | 8.9 | 32.1 | 40.6
ComplEx | ✔ | 0.280 | 18.9 | 31.5 | 46.0 | 0.196 | 9.35 | 22.8 | 41.4 | 0.248 | 9.9 | 37.9 | 45.0
TuckER | ✘ | 0.323 | 23.8 | 35.3 | 49.4 | 0.332 | 26.8 | 34.3 | 47.9 | 0.293 | 21.6 | 32.5 | 41.7
TuckER | ✔ | 0.327 | 25.5 | 34.8 | 46.8 | 0.344 | 27.5 | 35.1 | 49.0 | 0.298 | 23.1 | 33.9 | 41.9
CompGCN | ✘ | 0.355 | 26.4 | 39.0 | 53.5 | 0.411 | 37.9 | 48.3 | 57.4 | 0.461 | 38.0 | 49.1 | 58.9
CompGCN | ✔ | 0.363 | 26.9 | 40.3 | 54.6 | 0.428 | 38.3 | 49.9 | 58.8 | 0.481 | 39.2 | 51.1 | 60.1
RelEns-DSC | ✘ | 0.368 | 27.4 | 40.5 | 55.6 | 0.342 | 27.9 | 36.1 | 49.1 | 0.548 | 48.2 | 59.0 | 66.7
RelEns-DSC | ✔ | 0.377 | 28.6 | 43.1 | 56.9 | 0.349 | 27.8 | 35.7 | 49.6 | 0.562 | 50.0 | 61.2 | 67.5
- The KGC performance improvement brought by PDEC on a large-scale dataset (i.e., drug rediscovery). We conducted KGC experiments on the large-scale biochemical knowledge graph RTX-KG2c and compared the performance of the baseline models before and after applying the PDEC framework. For a fair comparison, each reasoning model uses identical hyperparameters on the original benchmark and on the benchmark after PDEC. The results, shown in Table 2, indicate that PDEC effectively improves the ability of KG reasoning models on drug rediscovery.
Table 2. KGC results on RTX-KG2c before (✘) and after (✔) applying PDEC; MRR is a fraction, Hits@k values are percentages.

Method | PDEC | MRR | Hits@1 | Hits@3 | Hits@10
---|---|---|---|---|---
TransE | ✘ | 0.232 | 17.3 | 45.0 | 58.1
TransE | ✔ | 0.269 | 20.3 | 48.1 | 61.1
DistMult | ✘ | 0.164 | 16.0 | 30.3 | 34.8
DistMult | ✔ | 0.177 | 17.1 | 33.4 | 35.9
RotatE | ✘ | 0.296 | 22.8 | 49.1 | 57.9
RotatE | ✔ | 0.302 | 23.4 | 49.2 | 57.7
CompGCN | ✘ | 0.291 | 22.3 | 48.6 | 58.1
CompGCN | ✔ | 0.306 | 23.9 | 49.3 | 59.6
- The correlation between the granularity of predicate decomposition and KGC performance. To investigate the impact of clustering quality on PDEC performance, we measured how PDEC's improvement over TransE varies as the proportion of predicates selected for splitting changes. This yields an experimental conclusion on the correlation between predicate-decomposition granularity and KGC performance.
- The impact of synonymous predicate merging on KGC performance. We trained TransE on the dataset after predicate merging and performed KGC on the original predicates that did not participate in the merging. The number of merged predicates was set to 0, 7, and 37. The results are shown in Table 3.
5. Discussion
5.1. The Performance Improvement of the Baseline Model after Applying the PDEC Framework
5.2. The Characteristics of PDEC Framework in Performance Improvement
- PDEC generally enhances the reasoning performance of baseline KGC models. Compared with the baseline methods, PDEC demonstrates consistent improvements across different datasets. This observation aligns with our objective of developing a more effective approach for KGC tasks.
- The performance improvement of PDEC is more significant on larger datasets. When examining the performance improvement of PDEC on different datasets, we observe that the larger the dataset, the more significant the improvement. This trend suggests that PDEC is particularly effective in capturing relationships and patterns present in larger KGs. It further supports our hypothesis that PDEC’s ability to model polysemous predicates effectively enables it to handle the complexity and diversity found in larger datasets.
- The performance improvement of PDEC is more significant for old-fashioned methods. Our experiments also reveal that the performance improvement of PDEC is more significant for older methods like TransE and DistMult. These methods are known to be more susceptible to polysemous predicates, which are common in KGs. This observation aligns with our theoretical analysis presented in Section 3.6, where we discuss how PDEC addresses the limitations of traditional methods by effectively modeling polysemous predicates.
5.3. The Impact of Granularity of Predicate Decomposition on KGC Performance
5.4. The Performance Impact of Synonymous Predicate Merging
6. Conclusions
- Limitations. The limitations of this method mainly lie in the following. First, automatic optimization of the clustering threshold hyperparameter in Equation (9) has not yet been achieved. According to the discussion in Section 3.6, this hyperparameter could in principle be estimated from the low-dimensional manifold distribution of the dataset; in the current version of the algorithm, however, the clustering threshold is an empirical value obtained from a large number of experiments, which limits the efficiency and practicality of the algorithm. We have explored this direction but have not yet reached a clear conclusion. Second, PDEC's clustering-based predicate decomposition does not fully correspond to the hypernym-hyponym structure of predicates in the ontology, which reduces the interpretability of the new predicate set after decomposition; this is the cost of PDEC's choice to avoid introducing external information. Third, the decomposition of polysemous predicates and the merging of synonymous predicates cannot yet be optimized jointly.
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
Abbreviations
KG | knowledge graph |
KGC | knowledge graph completion |
LLMs | large language models |
GNNs | graph neural networks |
References
- Bordes, A.; Usunier, N.; Garcia-Duran, A.; Weston, J.; Yakhnenko, O. Translating embeddings for modeling multi-relational data. Adv. Neural Inf. Process. Syst. 2013, 26, 2787–2795. [Google Scholar]
- Yang, B.; Tau Yih, W.; He, X.; Gao, J.; Deng, L. Embedding Entities and Relations for Learning and Inference in Knowledge Bases. In Proceedings of the ICLR (Poster), San Diego, CA, USA, 7–9 May 2015. [Google Scholar]
- Dettmers, T.; Minervini, P.; Stenetorp, P.; Riedel, S. Convolutional 2D Knowledge Graph Embeddings. In Proceedings of the AAAI Conference on Artificial Intelligence, New Orleans, LA, USA, 2–7 February 2018; Volume 32. [Google Scholar]
- Sun, Z.; Deng, Z.H.; Nie, J.Y.; Tang, J. RotatE: Knowledge Graph Embedding by Relational Rotation in Complex Space. arXiv 2019, arXiv:1902.10197. [Google Scholar]
- Meilicke, C.; Chekol, M.W.; Ruffinelli, D.; Stuckenschmidt, H. Anytime Bottom-Up Rule Learning for Knowledge Graph Completion. In Proceedings of the IJCAI, Macao, China, 10–16 August 2019; pp. 3137–3143. [Google Scholar]
- Qu, M.; Chen, J.; Xhonneux, L.P.; Bengio, Y.; Tang, J. RNNLogic: Learning Logic Rules for Reasoning on Knowledge Graphs. In Proceedings of the International Conference on Learning Representations, Virtual, 3–7 May 2021. [Google Scholar]
- Yang, F.; Yang, Z.; Cohen, W.W. Differentiable learning of logical rules for knowledge base reasoning. Adv. Neural Inf. Process. Syst. 2017, 30, 1–10. [Google Scholar]
- Sadeghian, A.; Armandpour, M.; Ding, P.; Wang, D.Z. Drum: End-to-end differentiable rule mining on knowledge graphs. Adv. Neural Inf. Process. Syst. 2019, 32, 1–11. [Google Scholar]
- Cohen, W.W. TensorLog: A Differentiable Deductive Database. arXiv 2016, arXiv:1605.06523. [Google Scholar]
- Toutanova, K.; Chen, D.; Pantel, P.; Poon, H.; Choudhury, P.; Gamon, M. Representing Text for Joint Embedding of Text and Knowledge Bases. In Proceedings of the EMNLP, Lisbon, Portugal, 17–21 September 2015; Màrquez, L., Callison-Burch, C., Su, J., Pighin, D., Marton, Y., Eds.; Association for Computational Linguistics: Toronto, ON, Canada, 2015; pp. 1499–1509. [Google Scholar]
- Lee, J.; Chung, C.; Whang, J.J. InGram: Inductive Knowledge Graph Embedding via Relation Graphs. arXiv 2023, arXiv:2305.19987. [Google Scholar] [CrossRef]
- Geng, Y.; Chen, J.; Pan, J.Z.; Chen, M.; Jiang, S.; Zhang, W.; Chen, H. Relational Message Passing for Fully Inductive Knowledge Graph Completion. In Proceedings of the 39th IEEE International Conference on Data Engineering, ICDE 2023, Anaheim, CA, USA, 3–7 April 2023; IEEE: Piscataway, NJ, USA, 2023; pp. 1221–1233. [Google Scholar] [CrossRef]
- Balažević, I.; Allen, C.; Hospedales, T.M. TuckER: Tensor Factorization for Knowledge Graph Completion. arXiv 2019, arXiv:1901.09590. [Google Scholar]
- Socher, R.; Chen, D.; Manning, C.D.; Ng, A.Y. Reasoning With Neural Tensor Networks for Knowledge Base Completion. In Proceedings of the NIPS, Lake Tahoe, Nevada, 5–10 December 2013; pp. 926–934. [Google Scholar]
- Nathani, D.; Chauhan, J.; Sharma, C.; Kaul, M. Learning Attention-based Embeddings for Relation Prediction in Knowledge Graphs. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, Florence, Italy, 28 July–2 August 2019. [Google Scholar]
- Yue, L.; Zhang, Y.; Yao, Q.; Li, Y.; Wu, X.; Zhang, Z.; Lin, Z.; Zheng, Y. Relation-aware Ensemble Learning for Knowledge Graph Embedding. arXiv 2023, arXiv:2310.08917. [Google Scholar]
- Hamaguchi, T.; Oiwa, H.; Shimbo, M.; Matsumoto, Y. Knowledge Transfer for Out-of-Knowledge-Base Entities: A Graph Neural Network Approach. arXiv 2017, arXiv:1706.05674. [Google Scholar]
- Wang, C.; Zhou, X.; Pan, S.; Dong, L.; Song, Z.; Sha, Y. Exploring Relational Semantics for Inductive Knowledge Graph Completion. In Proceedings of the AAAI Conference on Artificial Intelligence, Virtual, 22 February–1 March 2022; Volume 36, pp. 4184–4192. [Google Scholar]
- Wang, P.; Han, J.; Li, C.; Pan, R. Logic Attention Based Neighborhood Aggregation for Inductive Knowledge Graph Embedding. In Proceedings of the AAAI Conference on Artificial Intelligence, Honolulu, HI, USA, 27 January–1 February 2019; pp. 7152–7159. [Google Scholar]
- Dai, D.; Zheng, H.; Luo, F.; Yang, P.; Chang, B.; Sui, Z. Inductively representing out-of-knowledge-graph entities by optimal estimation under translational assumptions. arXiv 2020, arXiv:2009.12765. [Google Scholar]
- Pan, Y.; Liu, J.; Zhang, L.; Zhao, T.; Lin, Q.; Hu, X.; Wang, Q. Inductive Relation Prediction with Logical Reasoning Using Contrastive Representations. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, Abu Dhabi, United Arab Emirates, 7–11 December 2022; Goldberg, Y., Kozareva, Z., Zhang, Y., Eds.; Association for Computational Linguistics: Dublin, Ireland, 2022; pp. 4261–4274. [Google Scholar] [CrossRef]
- Zhang, Y.; Zhou, Z.; Yao, Q.; Chu, X.; Han, B. Adaprop: Learning adaptive propagation for graph neural network based knowledge graph reasoning. In Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, Long Beach, CA, USA, 6–10 August 2023; pp. 3446–3457. [Google Scholar]
- Caliński, T.; Harabasz, J. A dendrite method for cluster analysis. Commun. Stat. 1974, 3, 1–27. [Google Scholar] [CrossRef]
- Mahdisoltani, F.; Biega, J.; Suchanek, F.M. YAGO3: A Knowledge Base from Multilingual Wikipedias. In Proceedings of the Conference on Innovative Data Systems Research, Asilomar, CA, USA, 19 August 2014. [Google Scholar]
- Toutanova, K.; Chen, D. Observed Versus Latent Features for Knowledge Base and Text Inference. In Proceedings of the 3rd Workshop on Continuous Vector Space Models and their Compositionality, Beijing, China, 31 July 2015. [Google Scholar]
- Xiong, W.; Hoang, T.; Wang, W.Y. DeepPath: A Reinforcement Learning Method for Knowledge Graph Reasoning. arXiv 2017, arXiv:1707.06690. [Google Scholar]
- Mitchell, T.; Cohen, W.; Hruschka, E.; Talukdar, P.; Yang, B.; Betteridge, J.; Carlson, A.; Dalvi, B.; Gardner, M.; Kisiel, B.; et al. Never-ending learning. Commun. ACM 2018, 61, 103–115. [Google Scholar] [CrossRef]
- Wood, E.C.; Glen, A.K.; Kvarfordt, L.G.; Womack, F.; Acevedo, L.; Yoon, T.S.; Ma, C.; Flores, V.; Sinha, M.; Chodpathumwan, Y.A. RTX-KG2: A system for building a semantically standardized knowledge graph for translational biomedicine. BMC Bioinform. 2022, 23, 400. [Google Scholar] [CrossRef] [PubMed]
- Wang, Z.; Zhang, J.; Feng, J.; Chen, Z. Knowledge Graph Embedding by Translating on Hyperplanes. In Proceedings of the AAAI Conference on Artificial Intelligence, Québec City, QC, Canada, 27–31 July 2014; AAAI Press: Washington, DC, USA, 2014. [Google Scholar]
- Lin, Y.; Liu, Z.; Sun, M.; Liu, Y.; Zhu, X. Learning entity and relation embeddings for knowledge graph completion. In Proceedings of the AAAI Conference on Artificial Intelligence, Austin, TX, USA, 25–30 January 2015; Volume 29. [Google Scholar]
- Ji, G.; He, S.; Xu, L.; Liu, K.; Zhao, J. Knowledge graph embedding via dynamic mapping matrix. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers); Association for Computational Linguistics: Beijing, China, 2015; pp. 687–696. [Google Scholar]
- Trouillon, T.; Welbl, J.; Riedel, S.; Gaussier, É.; Bouchard, G. Complex embeddings for simple link prediction. In Proceedings of the International Conference on Machine Learning, PMLR, New York, NY, USA, 20–22 June 2016; pp. 2071–2080. [Google Scholar]
- Vashishth, S.; Sanyal, S.; Nitin, V.; Talukdar, P. Composition-based multi-relational graph convolutional networks. arXiv 2019, arXiv:1911.03082. [Google Scholar]
- Han, X.; Cao, S.; Xin, L.; Lin, Y.; Liu, Z.; Sun, M.; Li, J. OpenKE: An Open Toolkit for Knowledge Embedding. In Proceedings of the EMNLP, Brussels, Belgium, 31 October–4 November 2018. [Google Scholar]
- Kingma, D.P.; Ba, J. Adam: A method for stochastic optimization. arXiv 2014, arXiv:1412.6980. [Google Scholar]
- Xin, J.; Afrasiabi, C.; Lelong, S.; Adesara, J.; Tsueng, G.; Su, A.I.; Wu, C. Cross-linking BioThings APIs through JSON-LD to facilitate knowledge exploration. BMC Bioinform. 2018, 19, 30. [Google Scholar] [CrossRef] [PubMed]
- Kilicoglu, H.; Shin, D.; Fiszman, M.; Rosemblat, G.; Rindflesch, T.C. SemMedDB: A PubMed-scale repository of biomedical semantic predications. Bioinformatics 2012, 28, 3158–3160. [Google Scholar] [CrossRef] [PubMed]
Table 3. MRR on predicates that did not participate in merging, for different numbers of merged predicates.

Number of Merged Predicates | MRR of Predicate-Merging Model on Unmerged Predicates | MRR of Original Model on Unmerged Predicates
---|---|---
0 | 0.289 | 0.289
7 | 0.286 | 0.286
37 | 0.246 | 0.245
Share and Cite
Tian, X.; Meng, Y. PDEC: A Framework for Improving Knowledge Graph Reasoning Performance through Predicate Decomposition. Algorithms 2024, 17, 129. https://doi.org/10.3390/a17030129