Abstract
Applications of knowledge graphs have received much attention in the field of artificial intelligence. The quality of knowledge graphs is, however, often compromised by missing facts. To predict missing facts, various solid transformation based models have been proposed that map knowledge graphs into low-dimensional spaces. However, most existing transformation based approaches ignore that there can be multiple relations between two entities, which is common in the real world. To address this challenge, we propose a novel approach called DualQuatE that maps entities and relations into a dual quaternion space. Specifically, entities are represented by pure quaternions, and relations are modeled as combinations of rotations and translations from head to tail entities. We then utilize the interactions of different translations and rotations to distinguish the various relations between head and tail entities. Experimental results show that the performance of DualQuatE is competitive with existing state-of-the-art models.
1. Introduction
Knowledge graphs, which represent knowledge from real world applications, contain abundant facts. In a knowledge graph, each fact is represented by a triple $(h, r, t)$, which indicates that the relation $r$ holds between the head entity $h$ and the tail entity $t$. Knowledge graphs have been applied to various tasks such as explainable recommendation systems [1], question answering [2] and the prediction of future research collaborations [3].
Predicting missing facts (i.e., link prediction) is a fundamental task in knowledge graph research. Various models aiming at embedding entities and relations into low-dimensional spaces have been proposed. For example, TransE [4] learned the embeddings of entities and relations by translating the head entity to the tail entity according to the relation; RotatE [5] and QuatE [6] learned the embeddings of entities and relations by treating relations as rotations from head entities to tail entities. However, existing transformation based models fail to capture multiple relations between head and tail entities. For example, as shown in Figure 1, David Lynch is the director, the creator and an actor of the film Mulholland Drive, i.e., there are three relations, directed, created and actedIn, between David Lynch and Mulholland Drive. These relations between the head entity David Lynch and the tail entity Mulholland Drive have no semantic connection with each other and should therefore be represented by spatially dispersed embeddings. Most existing transformation based models, however, assume that there is only one relation between each pair of head and tail entities. For instance, for each triple $(h, r, t)$, the corresponding embeddings in TransE are assumed to satisfy $\mathbf{h} + \mathbf{r} \approx \mathbf{t}$, which implies that for $(h, r_1, t)$, $(h, r_2, t)$ and $(h, r_3, t)$, the embeddings of $r_1$, $r_2$ and $r_3$ are similar, as shown in Figure 2c (i.e., $\mathbf{r}_1 \approx \mathbf{r}_2 \approx \mathbf{r}_3 \approx \mathbf{t} - \mathbf{h}$). To overcome this challenge, we propose a novel approach that considers multiple relations between head and tail entities in a knowledge graph.
Figure 1.
Visualization of a partial knowledge graph from YAGO3-10. The blue relations indicate that there can be multiple relations between two entities.
Figure 2.
Geometric significance of DualQuatE and multiple relations between the entities. (a) illustrates the interaction of rotation and translation from the head entity h to the tail entity t: $q_r$ represents the rotation of the relation and $t_r$ denotes its translation. (b) shows how DualQuatE expresses multiple relations between the head entity h and the tail entity t. (c,d) demonstrate that TransE and RotatE fail to model multiple relations.
In this paper, we propose a model called DualQuatE, which utilizes various combinations of distinct rotations and translations to represent multiple relations between head and tail entities. A natural first idea is to combine RotatE and TransE, which operate in complex space and real space, respectively; however, it is hard to find a uniform mathematical expression for their combination. Therefore, we propose DualQuatE, which embeds entities and relations into dual quaternion space to combine rotation and translation. A dual quaternion consists of a real part and a dual part. More concretely, we represent entity embeddings by vectors of pure quaternions, which correspond to points in three-dimensional space. To distinguish the various relations between a head entity h and a tail entity t, we design a score function that uses the dual quaternion Hamilton product to model each relation as an interaction of a rotation and a translation, and we utilize distinct interactions of rotations and translations to represent the various relations between head and tail entities. Compared with the two-dimensional rotations of RotatE and the real-space translations of TransE, dual quaternion space is eight-dimensional with six real degrees of freedom, three for translation and three for rotation, so we can explore the interaction of rotation and translation with more degrees of freedom in higher dimensions. As summarized in Table 1, our model has rich abilities for expressing relations (i.e., relation patterns and multiple relations).
Table 1.
The ability to express relation patterns and multiple relations between head and tail entities.
To conclude, the contributions of our proposed model are listed as follows:
- We introduce dual quaternions to knowledge graph embeddings.
- We propose a novel transformation based model DualQuatE to overcome the challenge of multiple relations between two entities.
- Our experiments show that DualQuatE is effective compared with existing state-of-the-art models.
The rest of the paper is organized as follows. Section 2 introduces related work. Section 3 presents prerequisite knowledge about dual quaternions. Section 4 describes our model. Section 5 presents the experimental results together with analysis and discussion. Section 6 concludes the paper and outlines future work.
2. Related Work
To obtain high-quality knowledge graphs, approaches that utilize knowledge graph embeddings to predict missing facts have been proposed recently. These methods fall into two broad categories [10]: transformation based models and semantic matching models. Specifically, transformation based models transform the head entity to the tail entity via the relation, while semantic matching models match the latent semantics of entities and relations. Compared with transformation based models, semantic matching models suffer from poor interpretability.
Transformation based models usually embed entities and relations into a vector space and model the relation as a transformation from head entity embeddings to tail entity embeddings. One of the most representative is TransE, which maps entities and relations into the same space $\mathbb{R}^d$; for each triple $(h, r, t)$, the entity embeddings and the relation embedding satisfy $\mathbf{h} + \mathbf{r} \approx \mathbf{t}$. A series of extensions of TransE have since been presented to improve accuracy and interpretability. For instance, TransR [11] introduced relation-specific spaces, modeling relations and entities in different spaces based on the observation that TransE can only express 1-to-1 relations. RotatE mapped embeddings into complex space, focusing on expressing relation patterns. HAKE [7] utilized the polar coordinate system to capture semantic hierarchies in knowledge graphs.
Semantic matching models, which match the latent semantics of entities and relations, can be divided into two categories: bilinear models and neural network based models. Bilinear models include DistMult [8], HolE [12], SimplE [9], ComplEx [13], QuatE [6] and DihEdral [14]. DistMult represented each entity as a vector and each relation as a diagonal matrix. HolE matched the latent semantics of entities by a circular correlation operation and then let the compositional vector interact with the relation. ComplEx mapped knowledge graph embeddings into complex space and leveraged the Hermitian product to capture the latent semantics of entities and relations, which can express the antisymmetry relation pattern. QuatE extended knowledge graph embeddings from complex space to quaternion space, modeling each relation as a rotation in four-dimensional space with more degrees of freedom; compared with ComplEx, QuatE can express the main relation patterns except composition. SimplE proposed two embeddings for each entity, each of which learns latent semantics dependently. DihEdral mapped relations into the dihedral group to capture composition relations. Neural network based models, including ConvE [15], R-GCNs [16] and InteractE [17], have been proposed recently. ConvE and R-GCNs introduced convolutional networks and graph convolutional networks to knowledge graph embedding, respectively. Compared with ConvE, InteractE introduced feature permutation, "checkered" feature reshaping and circular convolution to increase the interactions.
Recently, some models have introduced hyperbolic space to knowledge graph embeddings. MuRP [18] represented knowledge graphs in the Poincaré ball of hyperbolic space. Chami et al. [19] attempted to capture hierarchical and logical patterns in hyperbolic space. While hyperbolic models focus on semantic hierarchies in knowledge graphs, DualQuatE tries to overcome the challenge of multiple relations between two entities while also expressing relation patterns.
Both DualQuatE and QuatE use quaternions to embed knowledge graphs. However, they are quite different models. The main differences between DualQuatE and QuatE are as follows:
- DualQuatE, a transformation based model, measures the score of a triple by the distance between the two entities, whereas QuatE, a semantic matching model, measures the latent matching semantics of entities and relations.
- The purposes of the models are different. DualQuatE aims to address the challenge of multiple relations between two entities, whereas QuatE aims to utilize the quaternion Hamilton product to encourage more compact interactions between entities and relations.
- The geometric meanings are different. QuatE embeds entities and relations with quaternions to model relations as rotations. Our model is, to our knowledge, the first attempt to represent entities with pure quaternions and to model relations as interactions of translation and rotation.
3. Preliminaries
In this part, we introduce several concepts used in this paper.
- Quaternion: The quaternions [20] are a number system that extends the complex numbers to four dimensions. Generally, a quaternion is a number of the form $q = a + b\mathbf{i} + c\mathbf{j} + d\mathbf{k}$, where $a, b, c, d$ are real numbers and the imaginary units satisfy $\mathbf{i}^2 = \mathbf{j}^2 = \mathbf{k}^2 = \mathbf{i}\mathbf{j}\mathbf{k} = -1$.
- Quaternion conjugate: The conjugate of a quaternion $q = a + b\mathbf{i} + c\mathbf{j} + d\mathbf{k}$ is defined as $q^* = a - b\mathbf{i} - c\mathbf{j} - d\mathbf{k}$.
- Quaternion Multiplication: The Hamilton product of two quaternions $q_1 = a_1 + b_1\mathbf{i} + c_1\mathbf{j} + d_1\mathbf{k}$ and $q_2 = a_2 + b_2\mathbf{i} + c_2\mathbf{j} + d_2\mathbf{k}$ is defined by:

$$q_1 \otimes q_2 = (a_1 a_2 - b_1 b_2 - c_1 c_2 - d_1 d_2) + (a_1 b_2 + b_1 a_2 + c_1 d_2 - d_1 c_2)\mathbf{i} + (a_1 c_2 - b_1 d_2 + c_1 a_2 + d_1 b_2)\mathbf{j} + (a_1 d_2 + b_1 c_2 - c_1 b_2 + d_1 a_2)\mathbf{k}$$
- Rotation with quaternions in three-dimensional space: Rotating a point $p = (x, y, z)$ through an angle $\theta$ about the unit vector $u = (u_x, u_y, u_z)$ (i.e., the rotation axis) can be represented with quaternion multiplication. We define $p = x\mathbf{i} + y\mathbf{j} + z\mathbf{k}$ as a pure quaternion, i.e., a quaternion whose real part is zero, and $q = \cos\frac{\theta}{2} + \sin\frac{\theta}{2}(u_x\mathbf{i} + u_y\mathbf{j} + u_z\mathbf{k})$ as a unit quaternion; then the rotated point is $p' = q \otimes p \otimes q^*$.
- Dual quaternion: The dual quaternions [21] form an eight-dimensional real algebra built from the quaternions. Formally, a dual quaternion can be represented by $\sigma = p + \varepsilon q$, where $\varepsilon$ is the dual unit with $\varepsilon^2 = 0$ and both the real part $p$ and the dual part $q$ are quaternions. Therefore, a dual quaternion has the form $\sigma = a_1 + b_1\mathbf{i} + c_1\mathbf{j} + d_1\mathbf{k} + \varepsilon(a_2 + b_2\mathbf{i} + c_2\mathbf{j} + d_2\mathbf{k})$.
- Dual quaternion conjugate: The conjugate of the dual quaternion $\sigma = p + \varepsilon q$ is defined as $\sigma^* = p^* + \varepsilon q^*$, which can be represented by the 8-tuple $(a_1, -b_1, -c_1, -d_1, a_2, -b_2, -c_2, -d_2)$.
- Dual Quaternion Multiplication: The dual quaternion Hamilton product of $\sigma_1 = p_1 + \varepsilon q_1$ and $\sigma_2 = p_2 + \varepsilon q_2$ is defined as follows (using $\varepsilon^2 = 0$):

$$\sigma_1 \otimes \sigma_2 = p_1 \otimes p_2 + \varepsilon (p_1 \otimes q_2 + q_1 \otimes p_2)$$
- Unit Dual Quaternion: A dual quaternion $\sigma = p + \varepsilon q$ is a unit dual quaternion if $\|\sigma\| = 1$, namely, if it satisfies the following conditions: $p \otimes p^* = 1$ and $p \otimes q^* + q \otimes p^* = 0$, where $p$ and $q$ are quaternions. To simplify the calculation, we use another effective form of the unit dual quaternion, defined as follows: $\sigma = r + \frac{\varepsilon}{2} t \otimes r$, where $r$ is a unit quaternion and $t$ is a pure quaternion. We can easily verify that $\sigma$ is a unit dual quaternion: its real part satisfies $r \otimes r^* = 1$, and since $t^* = -t$ for a pure quaternion, $r \otimes (\frac{1}{2} t \otimes r)^* + \frac{1}{2} t \otimes r \otimes r^* = \frac{1}{2} r \otimes r^* \otimes t^* + \frac{1}{2} t = -\frac{1}{2} t + \frac{1}{2} t = 0$.
- Combination of Rotation and Translation: We define a point $v$ in three-dimensional space as a pure quaternion and let the pure quaternion $t$ be the translation. Under the rotation $r$ followed by the translation $t$, the point $v$ becomes the point $v' = r \otimes v \otimes r^* + t$. It is straightforward to represent this transformation with unit dual quaternion multiplication by writing the point as $1 + \varepsilon v$ and the transformation as $\sigma = r + \frac{\varepsilon}{2} t \otimes r$, as shown below:

$$\sigma \otimes (1 + \varepsilon v) \otimes \bar{\sigma}^* = 1 + \varepsilon (r \otimes v \otimes r^* + t)$$

where $\bar{\sigma}^* = r^* - \varepsilon (\frac{1}{2} t \otimes r)^* = r^* + \frac{\varepsilon}{2} r^* \otimes t$ denotes the conjugate with respect to both the quaternion units and the dual unit. A short numerical sketch of these operations follows this list.
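To make the preliminaries concrete, the following minimal NumPy sketch (our own illustration, not the paper's code; all function names are ours) implements the Hamilton product, the quaternion rotation, dual quaternion multiplication and the combined rotation-plus-translation transform defined above:

```python
# A minimal sketch of the quaternion and dual quaternion operations above.
import numpy as np

def qmul(q1, q2):
    """Hamilton product of quaternions stored as (a, b, c, d) arrays."""
    a1, b1, c1, d1 = q1
    a2, b2, c2, d2 = q2
    return np.array([
        a1*a2 - b1*b2 - c1*c2 - d1*d2,
        a1*b2 + b1*a2 + c1*d2 - d1*c2,
        a1*c2 - b1*d2 + c1*a2 + d1*b2,
        a1*d2 + b1*c2 - c1*b2 + d1*a2,
    ])

def qconj(q):
    """Quaternion conjugate: negate the imaginary parts."""
    return q * np.array([1.0, -1.0, -1.0, -1.0])

def rotation_quaternion(axis, theta):
    """Unit quaternion q = cos(theta/2) + sin(theta/2)(u_x i + u_y j + u_z k)."""
    u = axis / np.linalg.norm(axis)
    return np.concatenate([[np.cos(theta/2)], np.sin(theta/2) * u])

def dqmul(dq1, dq2):
    """Dual quaternion product: (p1 + eps q1)(p2 + eps q2)
    = p1 p2 + eps (p1 q2 + q1 p2), because eps^2 = 0."""
    p1, d1 = dq1
    p2, d2 = dq2
    return (qmul(p1, p2), qmul(p1, d2) + qmul(d1, p2))

def transform(point, axis, theta, translation):
    """Rotate `point` about `axis` by `theta`, then translate, via the unit
    dual quaternion sigma = q + (eps/2) t ⊗ q applied as
    sigma ⊗ (1 + eps v) ⊗ sigma_bar* = 1 + eps (q v q* + t)."""
    q = rotation_quaternion(axis, theta)
    t = np.concatenate([[0.0], translation])      # translation as pure quaternion
    sigma = (q, 0.5 * qmul(t, q))
    v = (np.array([1.0, 0, 0, 0]), np.concatenate([[0.0], point]))
    sigma_bar_conj = (qconj(sigma[0]), -qconj(sigma[1]))  # conjugate both ways
    _, dual = dqmul(dqmul(sigma, v), sigma_bar_conj)
    return dual[1:]                               # the transformed 3D point

# Rotating (1, 0, 0) by 90 degrees about the z-axis and then translating by
# (1, 1, 0) yields (1, 2, 0).
print(transform(np.array([1.0, 0, 0]), np.array([0.0, 0, 1]),
                np.pi/2, np.array([1.0, 1, 0])))
```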
4. Our DualQuatE Model
In this section, we introduce our model DualQuatE which maps entities and relations to dual quaternion space, and two variations of DualQuatE, namely DualQuatE-1 and DualQuatE-2.
We denote a knowledge graph by $\mathcal{G}$, a set of entities by $\mathcal{E}$ and a set of relations by $\mathcal{R}$. A knowledge graph is composed of a set of facts, each of which can be represented by a triple $(h, r, t)$, where $h \in \mathcal{E}$ is a head entity, $t \in \mathcal{E}$ is a tail entity and $r \in \mathcal{R}$ is a relation between h and t. We denote the set of facts that are true by $\mathcal{T}^+$ and the set of facts that are false by $\mathcal{T}^-$. Given a knowledge graph $\mathcal{G}$, we aim to predict missing facts (i.e., link prediction) in $\mathcal{G}$.
4.1. Multiple Relations between the Entities
To address the challenge of multiple relations between head and tail entities, we embed the knowledge graph into dual quaternion space. Let $\mathbf{h}$ and $\mathbf{t}$ denote the vectors of entity embeddings and $\sigma_r$ the vector of relation embeddings; each element of the entity embeddings $\mathbf{h}$ or $\mathbf{t}$ is a pure quaternion, and every dimension of the relation embeddings is a unit dual quaternion. We model each relation embedding as an interaction of rotation and translation from the head entity embedding to the tail entity embedding, as shown in Figure 2a. Specifically, each true triple $(h, r, t)$ satisfies:

$$\sigma_r \otimes (1 + \varepsilon \mathbf{h}) \otimes \bar{\sigma}_r^* = 1 + \varepsilon \mathbf{t}$$

where each dimension of $\sigma_r$ is a unit dual quaternion satisfying the conditions given in Section 3. We define a quaternion $q_r = \cos\frac{\theta}{2} + \sin\frac{\theta}{2}(u_x\mathbf{i} + u_y\mathbf{j} + u_z\mathbf{k})$ to represent a rotation about the pure unit quaternion $u$ through the angle $\theta$, and a pure quaternion $t_r$ to represent the translation. Furthermore, we define the unit dual quaternion by:

$$\sigma_r = q_r + \frac{\varepsilon}{2} t_r \otimes q_r$$

With this form, we can deduce the transformation of DualQuatE:

$$\sigma_r \otimes (1 + \varepsilon \mathbf{h}) \otimes \bar{\sigma}_r^* = 1 + \varepsilon (q_r \otimes \mathbf{h} \otimes q_r^* + t_r), \quad \text{i.e.,} \quad \mathbf{t} = q_r \otimes \mathbf{h} \otimes q_r^* + t_r$$

where the geometric meaning of $q_r \otimes \mathbf{h} \otimes q_r^*$ is the quaternion rotation defined in Section 3. As shown above, DualQuatE transforms the head entity h to the tail entity t by the relation r, which combines a rotation (i.e., $q_r \otimes \mathbf{h} \otimes q_r^*$) and a translation (i.e., $t_r$). Unlike previous models, which learn similar representations for multiple relations between the same entity pair, as shown in Figure 2c,d, our model learns combinations of different translations and rotations to represent the various relations between head and tail entities.
We define the score function by:

$$f_r(\mathbf{h}, \mathbf{t}) = \| q_r \otimes \mathbf{h} \otimes q_r^* + t_r - \mathbf{t} \|$$

where $\|\cdot\|$ denotes the norm of a vector. With this score function, we want the head entity to be as close as possible to the tail entity after the transformation of the relation.
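To make the computation concrete, the following PyTorch sketch implements this score; the tensor layout, the axis-angle parameterization of $q_r$ and the per-dimension sum of norms are our own illustrative assumptions, not the authors' released implementation:

```python
# A hedged sketch of the DualQuatE score f(h, r, t) = ||q ⊗ h ⊗ q* + t_r − t||.
import torch

def hamilton(q1, q2):
    """Batched Hamilton product; quaternions live in the last dimension (size 4)."""
    a1, b1, c1, d1 = q1.unbind(-1)
    a2, b2, c2, d2 = q2.unbind(-1)
    return torch.stack([
        a1*a2 - b1*b2 - c1*c2 - d1*d2,
        a1*b2 + b1*a2 + c1*d2 - d1*c2,
        a1*c2 - b1*d2 + c1*a2 + d1*b2,
        a1*d2 + b1*c2 - c1*b2 + d1*a2,
    ], dim=-1)

def score(h, t, axis, theta, trans):
    """h, t:  (batch, k, 4) pure quaternions (real part zero);
    axis:  (batch, k, 3) rotation axes; theta: (batch, k) rotation angles;
    trans: (batch, k, 4) pure-quaternion translations."""
    u = torch.nn.functional.normalize(axis, dim=-1)
    q = torch.cat([torch.cos(theta / 2).unsqueeze(-1),
                   torch.sin(theta / 2).unsqueeze(-1) * u], dim=-1)
    q_conj = q * torch.tensor([1.0, -1.0, -1.0, -1.0])
    rotated = hamilton(hamilton(q, h), q_conj)    # q ⊗ h ⊗ q*
    return (rotated + trans - t).norm(dim=-1).sum(dim=-1)

# Example: a batch of 2 triples with k = 8 quaternion dimensions per entity.
B, k = 2, 8
pure = lambda: torch.cat([torch.zeros(B, k, 1), torch.randn(B, k, 3)], dim=-1)
print(score(pure(), pure(), torch.randn(B, k, 3), torch.randn(B, k), pure()))
```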
4.2. Loss Function
We employ the self-adversarial negative sampling method [5] to generate corrupted samples. We define the probability distribution of negative samples by:

$$p(h'_j, r, t'_j \mid \{(h_i, r_i, t_i)\}) = \frac{\exp(-\alpha f_r(\mathbf{h}'_j, \mathbf{t}'_j))}{\sum_i \exp(-\alpha f_r(\mathbf{h}'_i, \mathbf{t}'_i))}$$

where $\alpha$ is the sampling temperature (since $f_r$ is a distance, harder negatives with smaller scores receive larger probabilities). Combining this with self-adversarial negative sampling, we define the loss function by:

$$L = -\log \sigma(\gamma - f_r(\mathbf{h}, \mathbf{t})) - \sum_{i=1}^{n} p(h'_i, r, t'_i) \log \sigma(f_r(\mathbf{h}'_i, \mathbf{t}'_i) - \gamma)$$

where $\gamma$ is a fixed margin and $\sigma(\cdot)$ is the sigmoid function. Our training procedure is outlined in Algorithm 1.
Algorithm 1 DualQuatE.
Input: entity embeddings $\mathbf{h}, \mathbf{t}$ and relation embeddings $(q_r, t_r)$; hyperparameters including the margin $\gamma$, the embedding dimension $k$ and the negative sample size $n$.
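For concreteness, a hedged PyTorch sketch of this loss follows; the names and tensor shapes are our assumptions, with `pos_score` and `neg_score` produced by the score function sketched in Section 4.1:

```python
# A sketch of the self-adversarial negative-sampling loss defined above.
import torch
import torch.nn.functional as F

def dualquate_loss(pos_score, neg_score, gamma, alpha):
    """pos_score: (batch,) distances of true triples;
    neg_score: (batch, n) distances of n corrupted triples per true triple;
    gamma: fixed margin; alpha: self-adversarial sampling temperature."""
    # Self-adversarial weights: harder negatives (smaller distance) receive
    # larger probability; detach so the weights carry no gradient.
    weights = F.softmax(-alpha * neg_score, dim=-1).detach()
    pos_loss = -F.logsigmoid(gamma - pos_score)
    neg_loss = -(weights * F.logsigmoid(neg_score - gamma)).sum(dim=-1)
    return (pos_loss + neg_loss).mean()
```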
4.3. Properties of DualQuatE
In this part, we describe the relation patterns and introduce how DualQuatE expresses them. Recently, learning relation patterns, including symmetry/antisymmetry, inversion and composition, has been recognized as key to the link prediction task. Our model DualQuatE can easily explain the relation patterns of the learned relation embeddings; proofs of the relation patterns can be found in Appendix A.
Inversion: If a relation $r_1$ is the inverse of a relation $r_2$, then we can infer $r_1(h, t) \Leftrightarrow r_2(t, h)$. For example, the relation part_of is the inverse of the relation has_part. For $r_1$ and $r_2$, we infer that the composition of the rotation components $q_1$ and $q_2$ amounts to no rotation (i.e., $q_2 \otimes q_1 = \pm 1$) and that the translation $t_1$ rotated by $q_2$ is the opposite of the translation $t_2$ (i.e., $q_2 \otimes t_1 \otimes q_2^* = -t_2$).
Symmetry: A relation $r$ is symmetric if $r(h, t) \Rightarrow r(t, h)$ holds. For instance, the relation similar_to from the dataset WN18 is symmetric. If a relation $r$ is symmetric, we can reason that the self-composition of its rotation component amounts to no rotation (i.e., $q_r \otimes q_r = \pm 1$) and that its net translation vanishes (i.e., $q_r \otimes t_r \otimes q_r^* + t_r = 0$).
Antisymmetry: A relation $r$ is antisymmetric if $r(h, t)$ implies $\neg r(t, h)$. For example, the relation has_part from WN18 is antisymmetric.
Composition: A relation $r_3$ is composed of the relations $r_1$ and $r_2$, denoted by $r_3 = r_1 \circ r_2$, if $r_1(h, x) \wedge r_2(x, t) \Rightarrow r_3(h, t)$. For example, the relation uncle_of can be composed of brother_of and father_of: if (A, brother_of, B) and (B, father_of, C) are true triples, we can reason that (A, uncle_of, C) is a true fact in the real world. If the relation $r_3$ can be composed of the relations $r_1$ and $r_2$, their embeddings satisfy $q_3 = q_2 \otimes q_1$ and $t_3 = q_2 \otimes t_1 \otimes q_2^* + t_2$, which means the translation $t_3$ equals the sum of the translation $t_2$ and the translation $t_1$ rotated by the rotation $q_2$.
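As a numerical illustration of these conditions (reusing `qmul` and `qconj` from the Section 3 sketch; the concrete axis, angle and entity values are our own), the snippet below builds a relation satisfying the symmetry conditions and checks that applying it twice returns the original entity:

```python
# Sanity check: a relation with q_r ⊗ q_r = ±1 and q_r ⊗ t_r ⊗ q_r* + t_r = 0
# maps h to t and t back to h, i.e., it behaves as a symmetric relation.
import numpy as np

theta = np.pi                                    # q ⊗ q = -1 when theta = pi
q = np.concatenate([[np.cos(theta/2)], np.sin(theta/2) * np.array([0.0, 0, 1])])
t_r = np.array([0.0, 1.0, 1.0, 0.0])             # pure; q t_r q* = -t_r here

h = np.array([0.0, 0.3, -0.7, 0.2])              # a pure-quaternion entity
t = qmul(qmul(q, h), qconj(q)) + t_r             # apply the relation to h
h_back = qmul(qmul(q, t), qconj(q)) + t_r        # apply the same relation to t
print(np.allclose(h, h_back))                    # True: the relation is symmetric
```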
4.4. Variations
We now introduce extensions of DualQuatE. DualQuatE is a transformation based model that combines rotation and translation. To assess the effect of this interaction, we compare DualQuatE with DualQuatE-1, which models relations as pure rotations in three-dimensional space. Furthermore, we propose DualQuatE-2 to explore the role of scaling in the rotation.
DualQuatE-1: We devise DualQuatE-1, which embeds entities and relations into quaternion space. Specifically, we represent entity embeddings with pure quaternions and relation embeddings with unit quaternions. We design the score function $f_r(\mathbf{h}, \mathbf{t}) = \| q_r \otimes \mathbf{h} \otimes q_r^* - \mathbf{t} \|$ to model the relation as a rotation in three-dimensional space; namely, each fact $(h, r, t)$ satisfies $\mathbf{t} = q_r \otimes \mathbf{h} \otimes q_r^*$.
DualQuatE-2: To explore the effect of scaling in knowledge graph embeddings, we present DualQuatE-2, which introduces scaling. DualQuatE-2 maps knowledge graph embeddings to four-dimensional space. In particular, we represent entities and relations with quaternions, where the relation embeddings are not constrained to be unit quaternions. We define the score function in the same form, $f_r(\mathbf{h}, \mathbf{t}) = \| q_r \otimes \mathbf{h} \otimes q_r^* - \mathbf{t} \|$; since $q_r$ is not a unit quaternion, the relation transforms the head entity to the tail entity by a combination of rotation and scaling, as shown in the sketch below.
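Hedged sketches of both variants' score functions follow, reusing the `hamilton` helper from the earlier score sketch (function names and tensor layout are assumptions); the only difference between the two is whether $q_r$ is normalized:

```python
# Variant score sketches: DualQuatE-1 (pure rotation) vs. DualQuatE-2
# (rotation plus scaling, since an unnormalized q scales q ⊗ h ⊗ q* by |q|^2).
import torch
import torch.nn.functional as F

def q_conjugate(q):
    return q * torch.tensor([1.0, -1.0, -1.0, -1.0])

def score_dq1(h, t, q):
    q = F.normalize(q, dim=-1)                   # unit quaternion: rotation only
    return (hamilton(hamilton(q, h), q_conjugate(q)) - t).norm(dim=-1).sum(dim=-1)

def score_dq2(h, t, q):
    # q left unnormalized: rotation combined with scaling by its squared norm
    return (hamilton(hamilton(q, h), q_conjugate(q)) - t).norm(dim=-1).sum(dim=-1)
```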
4.5. Connection to TransE and RotatE
Compared with RotatE: RotatE embeds the entity embeddings $\mathbf{h}, \mathbf{t} \in \mathbb{C}^k$ and the relation embeddings $\mathbf{r} \in \mathbb{C}^k$ into complex space. RotatE utilizes the score function $d_r(\mathbf{h}, \mathbf{t}) = \| \mathbf{h} \circ \mathbf{r} - \mathbf{t} \|$ to evaluate each triple, where each element of $\mathbf{r}$ is a unit complex number $r_i = e^{i\theta_{r,i}}$. DualQuatE can be reduced to RotatE by fixing the rotation plane and removing the translation variables. For instance, we can construct relation embeddings in the $\mathbf{i}$-$\mathbf{j}$ plane with the rotation axis $u = \mathbf{k}$ and no translation (i.e., $t_r = 0$, so $\sigma_r = \cos\frac{\theta}{2} + \sin\frac{\theta}{2}\mathbf{k}$), and embed the entities in the corresponding form $x\mathbf{i} + y\mathbf{j}$.
Compared with TransE: TransE models a relation as a translation, embedding the entity embeddings $\mathbf{h}, \mathbf{t}$ and the relation embeddings $\mathbf{r}$ into the vector space $\mathbb{R}^d$. To express TransE, we can set $\theta = 0$ (i.e., $q_r = 1$) in the relation embeddings to remove the rotation. In other words, the relation embeddings of DualQuatE reduce to $\sigma_r = 1 + \frac{\varepsilon}{2} t_r$.
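The snippet below (reusing `transform` from the Section 3 sketch; the concrete values are our own) checks the RotatE reduction numerically: with rotation axis $\mathbf{k}$ and zero translation, DualQuatE acts on a point in the $\mathbf{i}$-$\mathbf{j}$ plane exactly like RotatE's unit complex rotation:

```python
# Check: axis k and zero translation reproduce a RotatE-style planar rotation,
# i.e., the point (x, y, 0) behaves like the complex number x + yi.
import numpy as np

theta = 0.7
x, y = 0.3, -1.2
rotated = transform(np.array([x, y, 0.0]),           # entity in the i-j plane
                    np.array([0.0, 0, 1]), theta,     # rotation axis k
                    np.array([0.0, 0, 0]))            # no translation
as_complex = (x + 1j*y) * np.exp(1j*theta)            # RotatE's h ∘ r
print(np.allclose(rotated[:2], [as_complex.real, as_complex.imag]))  # True
```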
5. Experiments
5.1. Experiment Settings
5.1.1. Datasets
We evaluated our approach on the widely used datasets WN18, FB15k, WN18RR, FB15k-237 and YAGO3-10; their statistics are shown in Table 2. WN18 [4] is sampled from WordNet (https://wordnet.princeton.edu/ accessed on 11 June 2021), a knowledge graph about lexical relations between words. WN18RR [15] is a subset of WN18 with inverse relations removed. FB15k [4] is a large database of structured general human knowledge. FB15k-237 [22] is a subset of FB15k with inverse relations removed. YAGO3-10 [23] is a subset of YAGO3, which extends YAGO (https://io.datascience-paris-saclay.fr/dataset/YAGO accessed on 11 June 2021) to different languages. Tuples in YAGO3-10 mainly come from Wikipedia and describe individuals, e.g., who lives in which city.
Table 2.
Statistics of the experimental datasets. #E and #R denote the numbers of entities and relations in each dataset; #TR, #V and #TE denote the sizes of the training, validation and test sets.
5.1.2. Evaluation Metric
Similar to [6], we used three metrics to evaluate our approach: Mean Rank (MR), Mean Reciprocal Rank (MRR) and Hit@n. To calculate these metrics, for each test triple $(h, r, t) \in S$ (where $S$ is the set of test triples), we first replace $h$ by every entity and compute the score of each resulting triple. We then sort the candidates by score in ascending order and obtain the rank of the original entity $h$, denoted by $\mathrm{rank}_h$. Note that $\mathrm{rank}_h$ is the "rank" of $h$, not its score; e.g., if the score of $h$ is the smallest, $\mathrm{rank}_h$ is 1. MR is calculated as shown below:

$$\mathrm{MR} = \frac{1}{|S|} \sum_{(h,r,t) \in S} \mathrm{rank}_h$$

which means MR is the average of the ranks of all the original entities in the test triples. Likewise, MRR can be calculated as follows:

$$\mathrm{MRR} = \frac{1}{|S|} \sum_{(h,r,t) \in S} \frac{1}{\mathrm{rank}_h}$$

which indicates MRR is the average of the inverse ranks of all the original entities in the test triples. Hit@n gives the proportion of original entities ranked within the top n, calculated by:

$$\mathrm{Hit@}n = \frac{1}{|S|} \sum_{(h,r,t) \in S} \mathbb{1}[\mathrm{rank}_h \le n]$$

where $\mathbb{1}[\mathrm{rank}_h \le n]$ is 1 if $\mathrm{rank}_h \le n$, and 0 otherwise. We tested n = 1, 3, 10 in the evaluation, following the setting used in [6].
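The following small sketch (our own; `ranks` is a hypothetical list of precomputed ranks) computes the three metrics:

```python
# MR, MRR and Hit@n over a list of ranks (1 = the original entity scored best).
import numpy as np

def mr(ranks):
    return float(np.mean(ranks))                     # Mean Rank

def mrr(ranks):
    return float(np.mean(1.0 / np.asarray(ranks)))   # Mean Reciprocal Rank

def hits_at(ranks, n):
    return float(np.mean(np.asarray(ranks) <= n))    # Hit@n

ranks = [1, 3, 2, 10, 250]
print(mr(ranks), mrr(ranks), hits_at(ranks, 10))     # 53.2, ~0.387, 0.8
```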
5.1.3. Baselines
We compared our model with several state-of-the-art baselines. For transformation based models, we compared with TransE [4], TorusE [24], RotatE [5] and HAKE [7]; for bilinear models, we compared with ComplEx [13], HolE [12], SimplE [9], DihEdral [14] and QuatE [6] (for a fair comparison, we used the version of QuatE without type constraints on the common link prediction datasets, since the requirement of type constraints is too strong).
5.1.4. Implementation Details
We utilized PyTorch (https://pytorch.org accessed on 11 June 2021) to implement our model (https://github.com/gaoliming123/DualQuatE accessed on 11 June 2021) and its variants DualQuatE-1 and DualQuatE-2. We tuned the following hyperparameters: the embedding dimension k, the learning rate, the ratio of negative samples, the margin $\gamma$ and the self-adversarial sampling temperature $\alpha$, adopting different settings for WN18RR and WN18, for FB15k-237 and YAGO3-10, and for FB15k.
5.2. Results
Table 3, Table 4 and Table 5 show the experimental results on the five datasets. The performance of DualQuatE and its variants is comparable to that of state-of-the-art models. For YAGO3-10, the link prediction results are shown in Table 3, from which we can see that DualQuatE is competitive with most previous knowledge graph embedding models, especially on the Hit@10 metric. The results on YAGO3-10 also show that DualQuatE outperforms DualQuatE-1, which indicates that modeling relations as the interaction of rotation and translation with more degrees of freedom (as done by DualQuatE) is indeed better than modeling relations as pure rotations (as done by DualQuatE-1). Furthermore, the strong results of DualQuatE-2 and DualQuatE inspire us to further explore the combined effects of vector operations. Table 4 and Table 5 show the results of our models on the four common datasets WN18RR, FB15k-237, WN18 and FB15k. We find that our models perform better on WN18RR and FB15k-237; on WN18 and FB15k, most metrics are close to those of previous models and several metrics surpass them.
Table 3.
Link prediction on the dataset YAGO3-10. ♠ marks results taken from (Toutanova and Chen, 2015); the others come from the original papers. Results in bold are the best and underlined results are the second best.
Table 4.
Link prediction on the datasets FB15k-237 and WN18RR. ♣ marks results taken from (Sun et al., 2019); the others are from the original papers. ¶ denotes the results of QuatE without type constraints, taken from its paper. Results in bold are the best and underlined results are the second best.
Table 5.
Link prediction on the datasets FB15k and WN18. ♣ marks results taken from (Sun et al., 2019); the others come from the original papers. ¶ denotes the results of QuatE without type constraints, taken from its paper. Results in bold are the best and underlined results are the second best.
5.3. Relation Embeddings
In this part, we analyze the properties of the relations learned by DualQuatE. DualQuatE can distinguish multiple relations between head and tail entities, as shown in Figure 3. We compared our model with RotatE; Figure 3a,b display the differences between the representations of the relations actedIn and directed. Figure 3a shows that in RotatE the relations actedIn and directed are similar: the gap between the two relations is clustered around zero. For DualQuatE, the difference, shown in Figure 3b, is more dispersed, although the learned embeddings of our model are still slightly concentrated around zero. We speculate that this is due to the small number of relations in YAGO3-10, which makes the diversity of relations between entities sparser. Figure 3d shows the histograms of the translation components of the relations actedIn and directed. Compared with the relation directed, the distribution of the relation actedIn is more decentralized: the values of the translation component of directed are concentrated around zero, while the values of actedIn are spread farther from zero. Namely, a head entity rotated by the rotation component of directed will already be close to the tail entity. Figure 3c shows that for TransE the distributions of the embeddings of actedIn and directed are very similar.
Figure 3.
Visualization of the multiple relations between the entities. (a) shows the histogram of the difference between the actedIn and directed embeddings of RotatE. (b) shows the corresponding histogram for DualQuatE. (c) shows the histograms of the actedIn and directed embeddings of TransE. (d) displays the histograms of the translation embeddings of actedIn and directed in DualQuatE.
Limited by the length of the article, we visualize only some relation patterns in this paper. Figure 4a shows that the self-composition of the rotation of the symmetric relation similar_to is close to 0 or 2π. Figure 4b,c show the rotation embeddings of the antisymmetric relation has_part from the dataset WN18 and of its counterpart part_of. For the inverse relations has_part and part_of, Figure 4d shows the embeddings of their composed rotation elements, indicating that the composition of the two rotations is close to 0 or 2π.
Figure 4.
Visualization of the relation patterns represented by DualQuatE in the rotation component, where (a) denotes the symmetric relation "similar_to" in the rotation $q_r \otimes q_r$, (b,c) are the rotations of "has_part" and "part_of", and (d) exhibits the inversion effect, where "has_part" • "part_of" represents $q_{\mathrm{has\_part}} \otimes q_{\mathrm{part\_of}}$.
5.4. Space and Time Complexity
In this part, we list the space and time complexity of the different transformation based models and bilinear models, as shown in Table 6, where m and n denote the numbers of entities and relations, and d is the dimension of the entity or relation embeddings.
Table 6.
Space and Time Complexity.
6. Conclusions
In this paper, we propose a novel model, DualQuatE, for knowledge graph embedding, which maps entities and relations into dual quaternion space. We present a new score function that models each relation as an interaction of rotation and translation, which addresses multiple relations between two entities. We demonstrate that our model is able to express the main relation patterns and performs competitively against state-of-the-art baselines. However, DualQuatE does not consider temporal information or semantic hierarchies in knowledge graphs. In the future, we will investigate how to exploit temporal information and semantic hierarchies based on our model. It is also interesting to investigate the possibility of applying our DualQuatE model to learning representations of propositions to help learn action models [25,26,27,28] and recognize plans [29,30,31] in the planning community.
Author Contributions
Conceptualization, L.G., H.Z. and H.H.Z.; methodology, L.G. and H.Z.; software, L.G.; validation, L.G., H.Z. and H.H.Z.; formal analysis, L.G. and H.H.Z.; investigation, L.G. and H.Z.; resources, L.G., H.Z. and H.H.Z.; data curation, L.G.; writing—original draft preparation, L.G.; writing—review and editing, H.Z. and H.H.Z.; visualization, L.G.; supervision, H.Z., H.H.Z. and J.X.; project administration, L.G.; funding acquisition, H.Z. and H.H.Z. All authors have read and agreed to the published version of the manuscript.
Funding
This research was funded by the National Natural Science Foundation of China (Grant No. 62076263), the National Natural Science Foundation of China for Young Scientists of China (Grant No. 11701592), the Joint Funds of the National Natural Science Foundation of China (Grant No. U1811263), Guangdong Natural Science Funds for Distinguished Young Scholar (Grant No. 2017A030306028), Guangdong special branch plans young talent with scientific and technological innovation (Grant No. 2017TQ04X866), Pearl River Science and Technology New Star of Guangzhou and Guangdong Province Key Laboratory of Big Data Analysis and Processing.
Institutional Review Board Statement
Not applicable.
Informed Consent Statement
Not applicable.
Data Availability Statement
Publicly available datasets were analyzed in this study. These data can be found here: https://github.com/gaoliming123/DualQuatE, accessed on 11 June 2021.
Conflicts of Interest
The authors declare no conflict of interest.
Appendix A. Proof of Relation Patterns
Symmetry: If a relation $r$ is symmetric, then both $\mathbf{t} = q_r \otimes \mathbf{h} \otimes q_r^* + t_r$ and $\mathbf{h} = q_r \otimes \mathbf{t} \otimes q_r^* + t_r$ hold. Substituting the first equation into the second, the following equation will hold:

$$\mathbf{h} = q_r \otimes (q_r \otimes \mathbf{h} \otimes q_r^* + t_r) \otimes q_r^* + t_r = (q_r \otimes q_r) \otimes \mathbf{h} \otimes (q_r \otimes q_r)^* + q_r \otimes t_r \otimes q_r^* + t_r$$

Then we deduce that:

$$q_r \otimes q_r = \pm 1, \qquad q_r \otimes t_r \otimes q_r^* + t_r = 0$$

Inversion: A relation $r_1$ is the inverse of another relation $r_2$ iff both $\mathbf{t} = q_1 \otimes \mathbf{h} \otimes q_1^* + t_1$ and $\mathbf{h} = q_2 \otimes \mathbf{t} \otimes q_2^* + t_2$ hold. Then the following equation will hold:

$$\mathbf{h} = q_2 \otimes (q_1 \otimes \mathbf{h} \otimes q_1^* + t_1) \otimes q_2^* + t_2 = (q_2 \otimes q_1) \otimes \mathbf{h} \otimes (q_2 \otimes q_1)^* + q_2 \otimes t_1 \otimes q_2^* + t_2$$

Then we deduce that:

$$q_2 \otimes q_1 = \pm 1, \qquad q_2 \otimes t_1 \otimes q_2^* = -t_2$$

Composition: A relation $r_3$ is the composition of relations $r_1$ and $r_2$, denoted by $r_3 = r_1 \circ r_2$; that is, $\mathbf{x} = q_1 \otimes \mathbf{h} \otimes q_1^* + t_1$, $\mathbf{t} = q_2 \otimes \mathbf{x} \otimes q_2^* + t_2$ and $\mathbf{t} = q_3 \otimes \mathbf{h} \otimes q_3^* + t_3$.

Then we can get:

$$\mathbf{t} = (q_2 \otimes q_1) \otimes \mathbf{h} \otimes (q_2 \otimes q_1)^* + q_2 \otimes t_1 \otimes q_2^* + t_2$$

and hence $q_3 = q_2 \otimes q_1$ and $t_3 = q_2 \otimes t_1 \otimes q_2^* + t_2$.
References
- Chen, Z.; Wang, X.; Xie, X.; Wu, T.; Bu, G.; Wang, Y.; Chen, E. Co-Attentive Multi-Task Learning for Explainable Recommendation; Kraus, S., Ed.; IJCAI: Macao, China, 2019; pp. 2137–2143. [Google Scholar]
- Kumar, V.; Hua, Y.; Ramakrishnan, G.; Qi, G.; Gao, L.; Li, Y. Difficulty-Controllable Multi-hop Question Generation from Knowledge Graphs. In Proceedings of the Semantic Web—ISWC 2019—18th International Semantic Web Conference, Auckland, New Zealand, 26–30 October 2019; Ghidini, C., Hartig, O., Maleshkova, M., Svátek, V., Cruz, I.F., Hogan, A., Song, J., Lefrançois, M., Gandon, F., Eds.; Part I; Lecture Notes in Computer Science. Springer: Cham, Switzerland, 2019; Volume 11778, pp. 382–398. [Google Scholar]
- Kanakaris, N.; Giarelis, N.; Siachos, I.; Karacapilidis, N. Shall I Work with Them? A Knowledge Graph-Based Approach for Predicting Future Research Collaborations. Entropy 2021, 23, 664. [Google Scholar] [CrossRef] [PubMed]
- Bordes, A.; Usunier, N.; Garcia-Duran, A.; Weston, J.; Yakhnenko, O. Translating embeddings for modeling multi-relational data. Adv. Neural Inf. Process. Syst. 2013, 26, 2787–2795. [Google Scholar]
- Sun, Z.; Deng, Z.; Nie, J.; Tang, J. RotatE: Knowledge Graph Embedding by Relational Rotation in Complex Space. arXiv 2019, arXiv:1902.10197. [Google Scholar]
- Zhang, S.; Tay, Y.; Yao, L.; Liu, Q. Quaternion Knowledge Graph Embeddings. In Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, Vancouver, BC, Canada, 8–14 December 2019; Wallach, H.M., Larochelle, H., Beygelzimer, A., d’Alché-Buc, F., Fox, E.B., Garnett, R., Eds.; NeurIPS: Vancouver, BC, Canada, 2019; pp. 2731–2741. [Google Scholar]
- Zhang, Z.; Cai, J.; Zhang, Y.; Wang, J. Learning Hierarchy-Aware Knowledge Graph Embeddings for Link Prediction; AAAI Press: Palo Alto, CA, USA, 2020; pp. 3065–3072. [Google Scholar]
- Yang, B.; Yih, W.; He, X.; Gao, J.; Deng, L. Embedding Entities and Relations for Learning and Inference in Knowledge Bases; Bengio, Y., LeCun, Y., Eds.; ICLR: San Diego, CA, USA, 2015. [Google Scholar]
- Kazemi, S.M.; Poole, D. SimplE Embedding for Link Prediction in Knowledge Graphs; Bengio, S., Wallach, H.M., Larochelle, H., Grauman, K., Cesa-Bianchi, N., Garnett, R., Eds.; NeurIPS: Montréal, QC, Canada, 2018; pp. 4289–4300. [Google Scholar]
- Wang, Q.; Mao, Z.; Wang, B.; Guo, L. Knowledge Graph Embedding: A Survey of Approaches and Applications. IEEE Trans. Knowl. Data Eng. 2017, 29, 2724–2743. [Google Scholar] [CrossRef]
- Lin, Y.; Liu, Z.; Sun, M.; Liu, Y.; Zhu, X. Learning Entity and Relation Embeddings for Knowledge Graph Completion; Bonet, B., Koenig, S., Eds.; AAAI Press: Palo Alto, CA, USA, 2015; pp. 2181–2187. [Google Scholar]
- Nickel, M.; Rosasco, L.; Poggio, T.A. Holographic Embeddings of Knowledge Graphs; Schuurmans, D., Wellman, M.P., Eds.; AAAI Press: Palo Alto, CA, USA, 2016; pp. 1955–1961. [Google Scholar]
- Trouillon, T.; Welbl, J.; Riedel, S.; Gaussier, É.; Bouchard, G. Complex Embeddings for Simple Link Prediction; ICML: New York, NY, USA, 2016; pp. 2071–2080. [Google Scholar]
- Xu, C.; Li, R. Relation Embedding with Dihedral Group in Knowledge Graph; ACL: Florence, Italy, 2019; pp. 263–272. [Google Scholar]
- Dettmers, T.; Minervini, P.; Stenetorp, P.; Riedel, S. Convolutional 2D Knowledge Graph Embeddings. In Proceedings of the AAAI Conference on Artificial Intelligence; AAAI Press: Palo Alto, CA, USA, 2018; pp. 1811–1818. [Google Scholar]
- Schlichtkrull, M.S.; Kipf, T.N.; Bloem, P.; van den Berg, R.; Titov, I.; Welling, M. Modeling Relational Data with Graph Convolutional Networks. In Proceedings of the Semantic Web—15th International Conference, ESWC 2018, Heraklion, Crete, Greece, 3–7 June 2018; Gangemi, A., Navigli, R., Vidal, M., Hitzler, P., Troncy, R., Hollink, L., Tordai, A., Alam, M., Eds.; Lecture Notes in Computer Science. Springer: Cham, Switzerland, 2018; Volume 10843, pp. 593–607. [Google Scholar]
- Vashishth, S.; Sanyal, S.; Nitin, V.; Agrawal, N.; Talukdar, P.P. InteractE: Improving Convolution-Based Knowledge Graph Embeddings by Increasing Feature Interactions; AAAI Press: Palo Alto, CA, USA, 2020; pp. 3009–3016. [Google Scholar]
- Balazevic, I.; Allen, C.; Hospedales, T. Multi-relational poincaré graph embeddings. Adv. Neural Inf. Process. Syst. 2019, 32, 4463–4473. [Google Scholar]
- Chami, I.; Wolf, A.; Juan, D.; Sala, F.; Ravi, S.; Ré, C. Low-Dimensional Hyperbolic Knowledge Graph Embeddings. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, 5–10 July 2020; Jurafsky, D., Chai, J., Schluter, N., Tetreault, J.R., Eds.; Association for Computational Linguistics: Beijing, China, 2020; pp. 6901–6914. [Google Scholar]
- Hamilton, W.R. LXXVIII. On quaternions; or on a new system of imaginaries in Algebra: To the editors of the Philosophical Magazine and Journal. Lond. Edinb. Dublin Philos. Mag. J. Sci. 1844, 25, 489–495. [Google Scholar] [CrossRef]
- Kotelnikov, A.P. Screw calculus and some applications to geometry and mechanics. Annu. Imp. Univ. Kazan 1895, 24. [Google Scholar]
- Toutanova, K.; Chen, D. Observed versus latent features for knowledge base and text inference. In Proceedings of the 3rd Workshop on Continuous Vector Space Models and their Compositionality; Association for Computational Linguistics: Beijing, China, 2015; pp. 57–66. [Google Scholar]
- Mahdisoltani, F.; Biega, J.; Suchanek, F.M. YAGO3: A Knowledge Base from Multilingual Wikipedias. In Proceedings of the Seventh Biennial Conference on Innovative Data Systems Research (CIDR 2015), Asilomar, CA, USA, 4–7 January 2015. [Google Scholar]
- Ebisu, T.; Ichise, R. TorusE: Knowledge Graph Embedding on a Lie Group. In Proceedings of the AAAI Conference on Artificial Intelligence; AAAI Press: Palo Alto, CA, USA, 2018; pp. 1819–1826. [Google Scholar]
- Zhuo, H.H.; Muñoz-Avila, H.; Yang, Q. Learning hierarchical task network domains from partially observed plan traces. Artif. Intell. 2014, 212, 134–157. [Google Scholar] [CrossRef]
- Zhuo, H.H.; Yang, Q. Action-model acquisition for planning via transfer learning. Artif. Intell. 2014, 212, 80–103. [Google Scholar] [CrossRef]
- Zhuo, H.H.; Zha, Y.; Kambhampati, S. Discovering Underlying Plans Based on Shallow Models. ACM Trans. Intell. Syst. Technol. 2020, 11, 18:1–18:30. [Google Scholar] [CrossRef]
- Zhuo, H.H.; Kambhampati, S. Model-lite planning: Case-based vs. model-based approaches. Artif. Intell. 2014, 246, 1–21. [Google Scholar] [CrossRef]
- Zhuo, H.H. Recognizing Multi-Agent Plans When Action Models and Team Plans Are Both Incomplete. ACM Trans. Intell. Syst. Technol. 2019, 10, 30:1–30:24. [Google Scholar] [CrossRef]
- Feng, W.; Zhuo, H.H.; Kambhampati, S. Extracting Action Sequences from Texts Based on Deep Reinforcement Learning. In Proceedings of the International Joint Conferences on Artifical Intelligence (IJCAI), Stockholm, Sweden, 13–19 July 2018; pp. 4064–4070. [Google Scholar]
- Zhuo, H.H. Human-Aware Plan Recognition. In Proceedings of the AAAI Conference on Artificial Intelligence; AAAI Press: Palo Alto, CA, USA, 2017; pp. 3686–3693. [Google Scholar]
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).