Article

A Co-Embedding Model with Variational Auto-Encoder for Knowledge Graphs

1 School of Computer Science, Sun Yat-Sen University, Guangzhou 510000, China
2 School of Data Science and Artificial Intelligence, Wenzhou University of Technology, Wenzhou 325000, China
3 School of Software, South China University of Technology, Guangzhou 510000, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2022, 12(2), 715; https://doi.org/10.3390/app12020715
Submission received: 29 November 2021 / Revised: 1 January 2022 / Accepted: 6 January 2022 / Published: 12 January 2022
(This article belongs to the Topic Artificial Intelligence (AI) Applied in Civil Engineering)

Abstract

Knowledge graph (KG) embedding has been widely studied to obtain low-dimensional representations for entities and relations. It serves as the basis for downstream tasks, such as KG completion and relation extraction. Traditional KG embedding techniques usually represent entities/relations as vectors or tensors, mapping them in different semantic spaces and ignoring the uncertainties. The affinities between entities and relations are ambiguous when they are not embedded in the same latent spaces. In this paper, we incorporate a co-embedding model for KG embedding, which learns low-dimensional representations of both entities and relations in the same semantic space. To address the issue of neglecting uncertainty for KG components, we propose a variational auto-encoder that represents KG components as Gaussian distributions. In addition, compared with previous methods, our method has the advantages of high quality and interpretability. Our experimental results on several benchmark datasets demonstrate our model’s superiority over the state-of-the-art baselines.

1. Introduction

Knowledge graph (KG) embeddings are low-dimensional representations of entities and relations. This approach can benefit a range of downstream tasks, such as semantic parsing [1,2], knowledge reasoning [3], and question answering [4,5]. Embeddings are supposed to contain semantic information and should be able to deal with multiple linguistic relations.
At present, research on knowledge graph embedding proceeds mainly along three lines. The first line comprises studies based on translation. TransE [6] was the first model to introduce translation-based embedding; it represents entities and relations in the same space and regards the relation vector r as the translation between the head entity vector h and the tail entity vector t, that is, $h + r \approx t$. Since TransE cannot handle one-to-many, many-to-one, and many-to-many relations (1-to-N, N-to-1, N-to-N), TransH [7] was proposed to allow an entity to have different representations when involved in different relations. In the TransR model [8], an entity is a complex of multiple attributes, and different relations focus on different attributes of the entity. The second line comprises studies based on semantic matching. RESCAL [9] captures latent semantics by representing each entity as a vector and each relation as a matrix that models the interaction of latent factors; it defines the scoring function of a triple (h, r, t) as a bilinear function. DistMult [10] simplifies RESCAL by restricting the relation matrix to a diagonal matrix, which greatly improves training efficiency. ComplEx [11] extends DistMult by introducing complex-valued embeddings to better model asymmetric relations; in ComplEx, the embeddings of entities and relations no longer lie in a real space but in a complex space. The third line comprises studies based on graph convolutional neural networks. ConvE [12] employs a multi-layer convolutional network, which enables expressive feature learning while remaining highly parameter-efficient. Unlike previous works, which focused on shallow, fast models that can scale to large knowledge graphs, ConvE uses 2D convolutions over embeddings and multiple layers of nonlinear features to model KGs. Subsequently, the ConvKB [13] model was used to explore the global relationships among same-dimensional entries of the entity and relation embeddings. However, neither of them models the interactions between different positions of entities and relations. R-GCN [14] is another convolutional network designed for KBs, generalized from GCN [15] for uni-relational data.
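For concreteness, the scoring functions used by three of the models above can be sketched as follows. This is a toy illustration with randomly initialized vectors; the dimension and variable names are our own choices, not values prescribed by the cited papers.

```python
import torch

dim = 50
h, r, t = (torch.randn(dim) for _ in range(3))   # randomly initialized embeddings

# TransE: a triple is plausible when h + r is close to t (negated L1 distance).
score_transe = -torch.norm(h + r - t, p=1)

# DistMult: bilinear score with a diagonal relation matrix, i.e. <h, r, t>.
score_distmult = torch.sum(h * r * t)

# ComplEx: the same trilinear product in the complex domain, Re(<h, r, conj(t)>),
# which allows asymmetric relations to be scored.
hc, rc, tc = (torch.randn(dim, dtype=torch.cfloat) for _ in range(3))
score_complex = torch.real(torch.sum(hc * rc * torch.conj(tc)))
```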
A typical KG embedding technique has two necessary elements: (i) an encoder to generate KG embeddings and (ii) a scoring function to measure the plausibility of each fact. Entities are usually represented as vectors in a low-dimensional space, whereas relations are represented as operations between entities, resulting in vectors for translational operations [6] or matrices for linear transformations [16]. By doing so, the embeddings of KG components can be used to enhance performance in many downstream tasks. Despite the success achieved by those previous algorithms, we note that they have the following defects: (1) the n-dimensional representation of a KG component is a single point, neglecting the uncertainty of entities and relations; (2) they represent entities as vectors located in a low-dimensional space and relations as operations between entities [6,16], thus ignoring the affinities between entities and relations, since they are embedded in different semantic spaces.
To address the issues mentioned above, we propose a co-embedding model for KGs, learning low-dimensional representations of entities and relations in the same semantic space so that the affinities between them can be effectively captured. Moreover, we introduce a variational auto-encoder to infer the representations of KG components as Gaussian distributions. The mean of a distribution indicates the position in the semantic space, and its variance indicates the uncertainty of the corresponding KG component.
Compared with previous works that regard relations as operations between entities, co-embedding entities and relations in the same semantic space can improve the quality of KG representation. For example, in Freebase, the relation 'Profession' in (El Lissitzky, Profession, Architect) and in (Vlad. Gardin, Profession, Screen Writer) carries two distinct kinds of semantic information, corresponding to an architect and a screenwriter, so the relation representations computed from the two triples are not the same. The co-embedding model embeds entities and relations in the same semantic space, thus providing high-quality embeddings for both of them.
In summary, our contributions in this work are as follows:
  • We propose a co-embedding model for knowledge graphs, which learns low-dimensional representations of KG components, including entities and relations, in the same semantic space, so that their affinities can be measured effectively.
  • To address the issue of neglecting uncertainty, we introduce a variational auto-encoder into our model, which represents KG components as Gaussian distributions. The variational auto-encoder consists of two parts: (1) an inference model to encode KG components into latent vector spaces, (2) a generative model to reconstruct random variables from latent embeddings.
  • We conduct experiments on real-world datasets to evaluate the performance of our model in link prediction. The experimental results demonstrate that our model outperforms the state-of-the-art baselines.

2. Related Work

2.1. Knowledge Graph Representation

Knowledge representation is a technique that aims at learning low-dimensional representations for KG entities and relations, consisting of two critical steps: (1) constructing a scoring function measuring plausibility for triples, and (2) embedding KG components in continuous vector spaces.
TransE [6], the most representative method in KG embedding, represents entities as vectors and relations as operations between entities, i.e., $h + r \approx t$. The scoring function is defined as the distance between entities and relations in the latent space, written as $f_r(h, t) = \|h + r - t\|_{\ell_1/\ell_2}$. However, TransE fails to deal with one-to-many, many-to-one, and many-to-many relations [7,8]. For example, given a relation holding two facts $(h, r, t_1)$ and $(h, r, t_2)$, we can infer $t_1 = t_2$ even though they are totally different entities. To overcome this defect, Wang et al. proposed TransH [7], which obtains distinct representations for an entity under different relations by projecting entity representations onto a relation-specific hyperplane with normal vector $w_r$, e.g., $h_{\perp} = h - w_r^{\top} h \cdot w_r$.
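The following sketch (our own toy code, not the original implementations) illustrates the TransH hyperplane projection and the TransE-style distance score described above.

```python
import torch
import torch.nn.functional as F

def transh_project(e, w_r):
    # Project entity e onto the relation-specific hyperplane with unit normal w_r:
    # e_perp = e - (w_r^T e) * w_r
    w_r = F.normalize(w_r, dim=-1)
    return e - (w_r * e).sum(dim=-1, keepdim=True) * w_r

h, r, t, w_r = (torch.randn(64) for _ in range(4))

# TransE-style distance on the projected entities, f_r(h, t) = ||h_perp + r - t_perp||_1;
# a lower value indicates a more plausible triple.
score = torch.norm(transh_project(h, w_r) + r - transh_project(t, w_r), p=1)
```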
TransE and its extensions represent both entities and relations as deterministic points in vector spaces, ignoring the uncertainty of KG components. To resolve this problem, some recent works have introduced uncertainty into KG embedding by representing KG components as distributions; e.g., KG2E [17], proposed by He et al., represents both entities and relations as distributions via Gaussian embedding. Inspired by these works, we tackle the KG embedding problem by modeling both entities and relations as Gaussian distributions and representing them in the same semantic space, so that the affinities between them can be measured effectively.

2.2. Gaussian Embedding

Gaussian embedding [18] is a method that embeds word types in the space of Gaussian distributions and learns the embeddings directly in that space. Words are represented not as low-dimensional vectors but as densities over a latent space, which directly captures notions of uncertainty and enables a richer geometry in the embedded space.
In word representation, embedding an object as a single point cannot naturally express uncertainty about the concepts with which the input may be associated, and the relationships between points are normally measured by distances that must obey the triangle inequality. Point vectors are typically compared via dot products, cosine distance, or Euclidean distance, none of which provides an asymmetric comparison between objects. To overcome the limitations of representing objects as points, Gaussian embedding learns representations in the space of Gaussian distributions, advocating density-based distributed embeddings.
In Gaussian embedding, both means and variances are learned from data, so that objects are represented as densities over a latent space instead of as low-dimensional vectors. Since Gaussians innately represent uncertainty and have a geometric interpretation as inclusion between families of ellipses, our method adopts the KL divergence between Gaussian distributions to measure the relationship between objects, which is straightforward to calculate in closed form.
Mapping to a density provides many advantages: it better captures uncertainty about a representation and its relationships, it provides asymmetric comparisons between objects (more informative than the dot product or cosine similarity), and it enables more expressive parameterization of decision boundaries.
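As an illustration of how such comparisons are both closed-form and asymmetric, the KL divergence between two diagonal-covariance Gaussian embeddings can be computed as in the following sketch (our own toy example; dimensions and variable names are arbitrary).

```python
import torch

def kl_diag_gaussians(mu1, var1, mu2, var2):
    # KL( N(mu1, diag(var1)) || N(mu2, diag(var2)) ) in closed form:
    # 0.5 * sum( var1/var2 + (mu2 - mu1)^2 / var2 - 1 + log(var2 / var1) )
    return 0.5 * torch.sum(var1 / var2 + (mu2 - mu1) ** 2 / var2
                           - 1.0 + torch.log(var2 / var1))

mu_a, mu_b = torch.randn(50), torch.randn(50)
var_a, var_b = torch.rand(50) + 0.1, torch.rand(50) + 0.1
print(kl_diag_gaussians(mu_a, var_a, mu_b, var_b))   # KL(a || b)
print(kl_diag_gaussians(mu_b, var_b, mu_a, var_a))   # KL(b || a): generally different
```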

2.3. Variational Auto-Encoder

Variational auto-encoders [19], abbreviated as VAEs, were proposed to learn the probability distributions of data. A typical VAE consists of two neural networks: an inference model that encodes observations into latent variables and a generative model that decodes latent representations back into random variables. Given a dataset $X = \{x_i\}_{i=1}^{N}$, the VAE regards each data point as generated in two steps: (1) a latent variable $z_i$ is sampled from the prior distribution $p_\theta(z_i)$, and (2) the random variable $x_i$ is generated from the conditional distribution $p_\theta(x \mid z)$, where $\theta$ is the parameter of these distributions. Using the stochastic gradient variational Bayes (SGVB) estimator and the reparameterization trick, we can learn an approximate posterior distribution for each data point via the VAE.
In the VAE, we treat the encoder and decoder as a whole and train them at the same time. The goal of training is to maximize the evidence lower bound (ELBO) of the likelihood function. Specifically, we first feed random variables (randomly initialized node embeddings) into the encoder, obtain its output, and calculate the encoder error; we then use the encoder output as the input of the decoder and calculate the decoder's reconstruction error. The two errors are added together as the overall loss of the network and propagated backward, thus training the encoder and decoder simultaneously.
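A minimal sketch of this joint training loop is shown below (our own toy example on generic vector data, unrelated to KGs; the layer sizes are arbitrary assumptions).

```python
import torch
import torch.nn as nn

class VAE(nn.Module):
    # Minimal VAE: an inference model (encoder) producing mean/log-variance and a
    # generative model (decoder) reconstructing the input from a sampled latent code.
    def __init__(self, x_dim=784, z_dim=20):
        super().__init__()
        self.enc = nn.Linear(x_dim, 2 * z_dim)   # outputs [mu, log_var]
        self.dec = nn.Linear(z_dim, x_dim)

    def forward(self, x):
        mu, log_var = self.enc(x).chunk(2, dim=-1)
        z = mu + torch.exp(0.5 * log_var) * torch.randn_like(mu)   # reparameterization
        x_hat = torch.sigmoid(self.dec(z))
        recon = nn.functional.binary_cross_entropy(x_hat, x, reduction='sum')
        kl = -0.5 * torch.sum(1 + log_var - mu.pow(2) - log_var.exp())
        return recon + kl            # negative ELBO: both networks trained together

model = VAE()
x = torch.rand(8, 784)               # toy batch with values in [0, 1]
loss = model(x)
loss.backward()                      # gradients flow through encoder and decoder
```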
In recent years, the VAE algorithm and its variations have been studied and applied in many downstream tasks such as semi-supervised classification [20], clustering [21,22], and image generation [23].

3. Notations and Problems

In this section, we introduce the notation used and define our studied problem.

3.1. Notations

In this paper, we denote scalars by plain letters (e.g., the output dimension of latent variables, D), sets by script letters (e.g., the set of entities, $\mathcal{E}$), and vectors by bold lowercase letters (e.g., the representation of head entities, $\mathbf{h}$). A triple in a KG is denoted by $\tau$ and written as $\tau = (h, r, t)$. Our main notations are shown in Table 1.
Given a knowledge graph $\mathcal{G}$, we denote the set of entities by $\mathcal{E}$ and the set of relations by $\mathcal{R}$, so that $\mathcal{G}$ can be defined as $\mathcal{G} = (\mathcal{E}, \mathcal{R}, \mathcal{O})$, where $\mathcal{O}$ is the set of triples of the form $\tau = (h, r, t)$ with $h, t \in \mathcal{E}$ and $r \in \mathcal{R}$.

3.2. Problem Definition

Using the notation mentioned above, we define the problem of co-embedding in KG as follows.
Problem 1.
The Co-Embedding Model for KG Embedding. Given a knowledge graph $\mathcal{G} = (\mathcal{E}, \mathcal{R}, \mathcal{O})$, our goal is to learn the representations of the KG components, including entities and relations, in the same semantic space via a transformation $\Xi$:
$$\mathcal{G} \xrightarrow{\ \Xi\ } (\mathbf{Z}^E, \mathbf{Z}^R),$$
where $\mathbf{Z}^E \in \mathbb{R}^{M \times D}$ and $\mathbf{Z}^R \in \mathbb{R}^{N \times D}$. The i-th row vector of $\mathbf{Z}^E$, written as $\mathbf{z}_i^E$, denotes the resulting embedding of the i-th entity, and the j-th row vector of $\mathbf{Z}^R$, written as $\mathbf{z}_j^R$, denotes the resulting embedding of the j-th relation.

4. Model

To address the issues we mentioned above, we propose the co-embedding model, learning representations for both entities and relations as Gaussian distributions in the same semantic space, as Gaussians innately represent uncertainty. To obtain high-quality Gaussian embeddings for both entities and relations, we introduce VAE into our model, learning the distributions from training triples in KG via a stochastic gradient variational Bayes [19] estimator. We introduce the details in the following subsections.

4.1. Variational Lower Bound

For a KG represented as $\mathcal{G} = (\mathcal{E}, \mathcal{R}, \mathcal{O})$, the embeddings of the KG components can be represented as $\mathbf{Z}^E$ and $\mathbf{Z}^R$ in the latent space. To embed both entities and relations in the same semantic space, we first define the log-likelihood of $\mathcal{O}$, the set of triples in the KG, as:
$$\log p(\mathcal{O}) = \log p(H, R, T) = \log p(H) + \log p(R) + \log p(T)$$
where $\mathcal{O} \in \mathbb{R}^{W \times 3}$ and H, R, and T are the components of $\mathcal{O}$. The log-likelihoods of the KG components, $\log p(H)$, $\log p(R)$, and $\log p(T)$, can be derived using Bayes' rule:
$$
\begin{aligned}
\log p(H) &= \mathbb{E}_{q_\phi(Z^E \mid H)}\left[\log p_\theta(H)\right] \\
&= \mathbb{E}_{q_\phi(Z^E \mid H)}\left[\log \frac{p_\theta(H, Z^E)}{p_\theta(Z^E \mid H)}\right] \\
&= \mathbb{E}_{q_\phi(Z^E \mid H)}\left[\log \frac{q_\phi(Z^E \mid H)}{p_\theta(Z^E \mid H)}\right] + \mathbb{E}_{q_\phi(Z^E \mid H)}\left[\log \frac{p_\theta(H, Z^E)}{q_\phi(Z^E \mid H)}\right] \\
&= D_{KL}\left(q_\phi(Z^E \mid H) \,\|\, p_\theta(Z^E \mid H)\right) + \mathcal{L}(\theta, \phi; H) \\
&\geq \mathcal{L}(\theta, \phi; H)
\end{aligned}
$$
The conditional probability $q_\phi(Z^E \mid H)$ is the variational posterior approximating the true posterior $p_\theta(Z^E \mid H)$, where the parameter $\phi$ is estimated in the inference model. In Equation (3), the second term on the right-hand side, $\mathcal{L}(\theta, \phi; H)$, is called the evidence lower bound (ELBO) on the marginal likelihood of H:
$$\mathcal{L}(\theta, \phi; H) = \mathbb{E}_{q_\phi(Z^E \mid H)}\left[-\log q_\phi(Z^E \mid H) + \log p_\theta(H, Z^E)\right] = -D_{KL}\left(q_\phi(Z^E \mid H) \,\|\, p_\theta(Z^E)\right) + \mathbb{E}_{q_\phi(Z^E \mid H)}\left[\log p_\theta(H \mid Z^E)\right]$$
where the $D_{KL}$ term denotes the Kullback–Leibler divergence, a measure of how one probability distribution differs from a second. Analogously, we have:
$$
\begin{aligned}
\mathcal{L}(\theta, \phi; R) &= -D_{KL}\left(q_\phi(Z^R \mid R) \,\|\, p_\theta(Z^R)\right) + \mathbb{E}_{q_\phi(Z^R \mid R)}\left[\log p_\theta(R \mid Z^R)\right] \\
\mathcal{L}(\theta, \phi; T) &= -D_{KL}\left(q_\phi(Z^E \mid T) \,\|\, p_\theta(Z^E)\right) + \mathbb{E}_{q_\phi(Z^E \mid T)}\left[\log p_\theta(T \mid Z^E)\right]
\end{aligned}
$$
Substituting Equations (3)–(5) into Equation (2), the variational lower bound of $\log p(\mathcal{O})$ can be expressed in terms of the parameters $\theta$ and $\phi$:
$$\log p(\mathcal{O}) = \log p_\theta(H) + \log p_\theta(R) + \log p_\theta(T) \geq \mathcal{L}(\theta, \phi; H) + \mathcal{L}(\theta, \phi; R) + \mathcal{L}(\theta, \phi; T) = \mathcal{L}(\theta, \phi; \mathcal{O})$$
where
$$
\begin{aligned}
\mathcal{L}(\theta, \phi; \mathcal{O}) =\ &-D_{KL}\left(q_\phi(Z^E \mid H) \,\|\, p_\theta(Z^E)\right) + \mathbb{E}_{q_\phi(Z^E \mid H)}\left[\log p_\theta(H \mid Z^E)\right] \\
&- D_{KL}\left(q_\phi(Z^R \mid R) \,\|\, p_\theta(Z^R)\right) + \mathbb{E}_{q_\phi(Z^R \mid R)}\left[\log p_\theta(R \mid Z^R)\right] \\
&- D_{KL}\left(q_\phi(Z^E \mid T) \,\|\, p_\theta(Z^E)\right) + \mathbb{E}_{q_\phi(Z^E \mid T)}\left[\log p_\theta(T \mid Z^E)\right]
\end{aligned}
$$
In Equation (7), the conditional probabilities $q_\phi(Z^E \mid H)$, $q_\phi(Z^R \mid R)$, and $q_\phi(Z^E \mid T)$ can be regarded as probabilistic encoders that embed real data into the latent space. Similarly, the conditional probabilities $p_\theta(H \mid Z^E)$, $p_\theta(R \mid Z^R)$, and $p_\theta(T \mid Z^E)$ can be regarded as probabilistic decoders, producing the corresponding data from latent vector representations. To approximate the real distributions of the KG components, we assume that the prior distributions and the variational posterior distributions are Gaussian:
$$
\begin{aligned}
p(Z_i^E) &= \mathcal{N}(0, I) \\
p(Z_j^R) &= \mathcal{N}(0, I) \\
q_\phi(Z_h^E \mid H) &= \mathcal{N}(\bar{E}, \sigma_E^2 \cdot I) \\
q_\phi(Z_r^R \mid R) &= \mathcal{N}(\bar{R}, \sigma_R^2 \cdot I) \\
q_\phi(Z_t^E \mid T) &= \mathcal{N}(\bar{E}, \sigma_E^2 \cdot I)
\end{aligned}
$$
Assuming the priors and variational posteriors to be Gaussian, the $D_{KL}$ terms in Equation (7) can be computed in closed form. In addition, we adopt the Monte Carlo gradient estimator to handle the $\mathbb{E}_{q_\phi}$ terms:
$$
\begin{aligned}
\mathcal{L}(\theta, \phi; \mathcal{O}) \simeq\ & \frac{1}{L} \sum_{l=1}^{L} \sum_{(h_i, r_i, t_i) \in \mathcal{O}} \left( \log p_\theta(t_i \mid z_{t_i}^E) + \log p_\theta(h_i \mid z_{h_i}^E) + \log p_\theta(r_i \mid z_{r_i}^R) \right) \\
& - \frac{1}{M} \sum_{e_i \in \mathcal{E}} \sum_{d=1}^{D} \left( \mu_{e_i, d}^2 + \sigma_{e_i, d}^2 - \log \sigma_{e_i, d}^2 - 1 \right) \\
& - \frac{1}{N} \sum_{r_i \in \mathcal{R}} \sum_{d=1}^{D} \left( \mu_{r_i, d}^2 + \sigma_{r_i, d}^2 - \log \sigma_{r_i, d}^2 - 1 \right),
\end{aligned}
$$
where D is the output dimension of the latent variables, L is the number of samples in the Monte Carlo estimator, and M, N, and W are the numbers of entities, relations, and triples, respectively. We also adopt the reparameterization trick mentioned in the VAE section to generate samples:
$$
\begin{aligned}
z_{h_i}^E &= \bar{h_i} + \sigma_{h_i} \odot \epsilon, \quad h_i \in H, \ \epsilon \sim \mathcal{N}(0, I) \\
z_{r_i}^R &= \bar{r_i} + \sigma_{r_i} \odot \epsilon, \quad r_i \in R, \ \epsilon \sim \mathcal{N}(0, I) \\
z_{t_i}^E &= \bar{t_i} + \sigma_{t_i} \odot \epsilon, \quad t_i \in T, \ \epsilon \sim \mathcal{N}(0, I)
\end{aligned}
$$
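A minimal sketch of how Equations (9) and (10) could be evaluated in practice is given below. It is our own illustration: the tensor shapes are assumptions, and the KL helper uses the standard closed form with its factor of 1/2.

```python
import torch

def reparameterize(mu, log_var):
    # Equation (10): z = mu + sigma * eps with eps ~ N(0, I), keeping sampling differentiable.
    eps = torch.randn_like(mu)
    return mu + torch.exp(0.5 * log_var) * eps

def kl_to_standard_normal(mu, log_var):
    # KL( N(mu, sigma^2 I) || N(0, I) ) = 0.5 * sum_d( mu_d^2 + sigma_d^2 - log sigma_d^2 - 1 ),
    # i.e. the per-dimension terms appearing in Equation (9).
    return 0.5 * torch.sum(mu ** 2 + log_var.exp() - log_var - 1.0, dim=-1)

mu = torch.randn(5, 100)        # e.g. means of 5 entity embeddings with D = 100
log_var = torch.randn(5, 100)   # log-variances predicted by the inference model
z = reparameterize(mu, log_var)
print(z.shape, kl_to_standard_normal(mu, log_var).mean())
```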

4.2. Learning

To optimize the parameters in Equation (9), we employ the two neural networks of the VAE: (1) an inference model $f_\phi$ with parameters $\phi$ that maps observed data into the latent vector space, and (2) a generative model $g_\theta$ with parameters $\theta$ that produces random variables from the latent embeddings.
Inference model $f_\phi$. To encode KG components into Gaussian embeddings, we apply two fully connected layers to map the entities and relations to the means and log-variances of their resulting Gaussian embeddings. One benefit of encoding the log-variance instead of the variance is that it allows us to avoid an extra activation function, since the variance $\sigma^2$ must be a positive number.
$$(\bar{h_i}, \log \sigma_{h_i}^2) = f_{\phi_1}(h_i), \quad (\bar{r_i}, \log \sigma_{r_i}^2) = f_{\phi_2}(r_i), \quad (\bar{t_i}, \log \sigma_{t_i}^2) = f_{\phi_3}(t_i)$$
where $\phi = [\phi_1, \phi_2, \phi_3]$, and $\mu$ and $\log \sigma^2$ are the means and log-variances of the learned Gaussian embeddings of the KG components:
$$q_\phi(z_{h_i}^E \mid h_i) = \mathcal{N}(\bar{h_i}, \sigma_{h_i}^2 \cdot I), \quad q_\phi(z_{r_i}^R \mid r_i) = \mathcal{N}(\bar{r_i}, \sigma_{r_i}^2 \cdot I), \quad q_\phi(z_{t_i}^E \mid t_i) = \mathcal{N}(\bar{t_i}, \sigma_{t_i}^2 \cdot I)$$
We apply the reparameterization trick from Equation (10) to obtain the deterministic variables $z_h^E$, $z_r^R$, and $z_t^E$ from the latent random variables, with a noise term $\epsilon \sim \mathcal{N}(0, I)$; this allows gradients to propagate between the inference model and the generative model. The loss of the inference model is the KL divergence between these conditional distributions and $\mathcal{N}(0, I)$.
Generative model $g_\theta$. The generative model decodes deterministic values back into random variables. For example, given the resulting embeddings $\mathbf{Z}^E$ and $\mathbf{Z}^R$ of a KG represented as $\mathcal{G} = (\mathcal{E}, \mathcal{R}, \mathcal{O})$, our goal is to reconstruct the random variables for each triple $(h_i, r_i, t_i) \in \mathcal{O}$, where:
$$p_\theta(h_i, r_i, t_i \mid z_{h_i}^E, z_{r_i}^R, z_{t_i}^E) = g_\theta(z_{h_i}^E, z_{r_i}^R, z_{t_i}^E)$$
The random distributions of those components can be defined as:
$$p_{\theta_1}(h_i \mid z_{h_i}^E) = \mathcal{N}(\bar{z_{h_i}}, \sigma_{z_{h_i}}^2 \cdot I), \quad p_{\theta_2}(r_i \mid z_{r_i}^R) = \mathcal{N}(\bar{z_{r_i}}, \sigma_{z_{r_i}}^2 \cdot I), \quad p_{\theta_3}(t_i \mid z_{t_i}^E) = \mathcal{N}(\bar{z_{t_i}}, \sigma_{z_{t_i}}^2 \cdot I)$$
where $\theta = [\theta_1, \theta_2, \theta_3]$, and the reconstruction loss of the generative model is measured by the binary cross-entropy (BCE) between the generated variables and the real data.
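A minimal end-to-end sketch of how the inference and generative models could be wired together is shown below. This is our own simplified reading of the method, not the authors' code: we assume one-hot inputs, use a single linear layer per encoder to emit both the mean and log-variance (whereas the text describes two fully connected layers), and keep the standard 1/2 factor in the KL term.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CoEmbeddingVAE(nn.Module):
    # Sketch of the co-embedding idea: entities and relations are both encoded as
    # Gaussian distributions (mean + log-variance) in the same D-dimensional space.
    def __init__(self, n_ent, n_rel, dim=200):
        super().__init__()
        self.ent_enc = nn.Linear(n_ent, 2 * dim)   # -> (mu, log_var) for entities
        self.rel_enc = nn.Linear(n_rel, 2 * dim)   # -> (mu, log_var) for relations
        self.ent_dec = nn.Linear(dim, n_ent)       # generative model for entities
        self.rel_dec = nn.Linear(dim, n_rel)       # generative model for relations

    @staticmethod
    def sample(mu, log_var):
        # Reparameterization trick: z = mu + sigma * eps.
        return mu + torch.exp(0.5 * log_var) * torch.randn_like(mu)

    def forward(self, h_1hot, r_1hot, t_1hot):
        mu_h, lv_h = self.ent_enc(h_1hot).chunk(2, -1)
        mu_r, lv_r = self.rel_enc(r_1hot).chunk(2, -1)
        mu_t, lv_t = self.ent_enc(t_1hot).chunk(2, -1)
        z_h, z_r, z_t = (self.sample(m, v) for m, v in
                         [(mu_h, lv_h), (mu_r, lv_r), (mu_t, lv_t)])
        # BCE reconstruction loss of the generative model plus KL toward N(0, I).
        recon = (F.binary_cross_entropy_with_logits(self.ent_dec(z_h), h_1hot, reduction='sum')
                 + F.binary_cross_entropy_with_logits(self.rel_dec(z_r), r_1hot, reduction='sum')
                 + F.binary_cross_entropy_with_logits(self.ent_dec(z_t), t_1hot, reduction='sum'))
        kl = sum(-0.5 * torch.sum(1 + lv - m.pow(2) - lv.exp())
                 for m, lv in [(mu_h, lv_h), (mu_r, lv_r), (mu_t, lv_t)])
        return recon + kl

# Toy usage with made-up sizes: 100 entities, 20 relations, a batch of 4 triples.
n_ent, n_rel, batch = 100, 20, 4
model = CoEmbeddingVAE(n_ent, n_rel, dim=32)
h = torch.eye(n_ent)[torch.randint(n_ent, (batch,))]
r = torch.eye(n_rel)[torch.randint(n_rel, (batch,))]
t = torch.eye(n_ent)[torch.randint(n_ent, (batch,))]
loss = model(h, r, t)
loss.backward()
```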

5. Experiment

5.1. Data Sets

In this work, we conducted experiments and evaluated the related methods on real-world KG databases commonly used in previous works: WordNet [24] and Freebase [25]. WordNet is an extensive lexical database of English. Nouns, verbs, adjectives, and adverbs are grouped into sets of cognitive synonyms (synsets), each expressing a distinct concept; synsets are interlinked by conceptual-semantic and lexical relations. Freebase is a large collaborative knowledge base consisting of data compiled mainly by its community members. It is an online collection of structured data harvested from many sources, including individual wiki contributions. The most representative dataset derived from WordNet is WN18, and from Freebase, FB15k.
Of these datasets, WN18 contains 18 relations and 40,943 entities, whereas FB15k contains 1345 relations and 14,951 entities. However, both suffer from test leakage through inverse relations: a large number of test triples can be obtained simply by inverting triples in the training set. Therefore, FB15k-237, a subset of FB15k in which reversible relations were removed, was introduced; similarly, WN18 was corrected to WN18RR. We therefore selected WN18RR and FB15k-237 as the datasets for our experiments.

5.2. Experimental Setup

We compared our model with several KG embedding algorithms in our experiments:
  • TransE [6]. TransE was the first model to introduce translation-based embedding, which interprets relations as the translations operating on entities.
  • DistMult [10]. DistMult is based on the bilinear model, where each relation is represented by a diagonal rather than a full matrix. DistMult enjoys the same scalable properties as TransE and it achieves superior performance over TransE.
  • ComplEx [11]. ComplEx extends DistMult by introducing complex-valued embeddings so as to better model asymmetric relations. It has been proven that HolE is subsumed by ComplEx as a special case.
  • ConvE [12]. ConvE is a multi-layer convolutional network model for link prediction [24] of KGs, and it reports state-of-the-art results for several established datasets. Unlike previous work which has focused on shallow, fast models that can scale to large knowledge graphs, ConvE uses 2D convolution over embeddings and multiple layers of nonlinear features to model KGs.
  • ConvKB [13]. ConvKB applies the global relationships among same-dimensional entries of the entity and relation embeddings, so that ConvKB generalizes the transitional characteristics in the transition-based embedding models.
  • R-GCN [14]. R-GCN applies graph convolutional networks to relational knowledge bases, creating a new encoder for link prediction and entity classification tasks.
The experimental results for these baselines were obtained using the code provided by the respective authors. For our method, we chose the learning rate α from [0.01, 0.05, 0.10] and the output dimension D from [100, 200, 400]. For WN18RR, the configuration was as follows: the learning rate α was 0.01 and the output dimension D was 400, with 3000 training iterations using the Adam [27] optimizer. For FB15k-237, the configuration was as follows: the learning rate α was 0.10 and the output dimension D was 200, with 1000 training iterations using the SGD optimizer. We trained each model until it converged.

5.3. Link Prediction

Link prediction, which aims at predicting the missing component of an incomplete triple, is a typical task in KG embedding, e.g., predicting the head entity for a given triple (?, r, t) or the tail entity for a given triple (h, r, ?). Following the protocols in [6], we evaluated the performance of our model. Given a test triple, we replaced the head or tail with every available entity and ranked the candidates using the scoring function defined in the methods section. Based on the ranking lists, we report the proportion of correct entities among the top N ranked entities for N = 1, 3, and 10, denoted as Hits@1, Hits@3, and Hits@10.
$$MRR = \frac{1}{|M|} \sum_{i=1}^{M} \frac{1}{\mathrm{rank}(e_i)}, \qquad MR = \frac{1}{|M|} \sum_{i=1}^{M} \mathrm{rank}(e_i)$$
We also report the mean reciprocal rank of the correct entities (denoted as MRR) and the mean rank of the correct entities (denoted as MR) for link prediction, where $\mathrm{rank}(e_i)$ denotes the rank of the correct entity $e_i$. A good embedding algorithm should obtain a relatively low mean rank and a relatively high mean reciprocal rank.
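These metrics are simple to compute once the rank of the correct entity is known for each test triple; a small sketch (our own helper function, with made-up ranks) is given below.

```python
import numpy as np

def ranking_metrics(ranks, ns=(1, 3, 10)):
    # ranks[i] is the rank of the correct entity for the i-th test triple (1-based).
    ranks = np.asarray(ranks, dtype=float)
    metrics = {'MR': ranks.mean(), 'MRR': (1.0 / ranks).mean()}
    for n in ns:
        metrics[f'Hits@{n}'] = (ranks <= n).mean() * 100.0   # reported as a percentage
    return metrics

print(ranking_metrics([1, 4, 2, 117, 9, 3]))
```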

5.4. Results and Analysis

In this subsection, we report the ability of our model to represent uncertainty, and the experimental results regarding link prediction.
Qualitative analysis. Before evaluating the performance on a specific task against other methods, we discuss the ability of our model to represent uncertainty in KG embedding.
In our method, we measure the uncertainty of a KG component by the variance of its embedding: an entity/relation with a higher level of uncertainty has a larger covariance. We examine the relations in FB15k-237 with '/education' as the domain, reporting the (log) determinant and trace of their covariances in Table 2 (a minimal computation of these quantities is sketched after the following observations), from which we make the following observations:
  • Our method has the ability to measure the uncertainty in KG embedding. The covariance of Gaussian embedding can effectively describe the uncertainties by calculating the determinants and traces of the covariances.
  • The relations with more semantic information (number of associated heads and tails, type of relation) have larger uncertainty. For example, among these relations, the 'major_field_of_study' relation has the largest uncertainty and the 'educational_institution' relation has the smallest.
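For reference, the two uncertainty measures reported in Table 2 reduce to simple sums when the covariance is diagonal, as assumed for our Gaussian embeddings. The snippet below is our own illustration with random variances.

```python
import numpy as np

def uncertainty_stats(sigma2):
    # For a Gaussian embedding with diagonal covariance diag(sigma2):
    # log det(Sigma) = sum(log sigma2_d)  and  trace(Sigma) = sum(sigma2_d).
    sigma2 = np.asarray(sigma2)
    return np.sum(np.log(sigma2)), np.sum(sigma2)

log_det, trace = uncertainty_stats(np.random.uniform(0.01, 0.5, size=200))
print(f'log(det) = {log_det:.1f}, trace = {trace:.1f}')
```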

6. Results

We compared our method with the state-of-the-art baselines mentioned above, including TransE, DistMult, ComplEx, ConvE, ConvKB, and R-GCN. The baseline results were obtained using code provided by the respective authors, all models were fully trained, and the datasets used are public. The experimental results for link prediction are shown in Table 3. On both datasets, our model achieved superior results in terms of the Hits@3 and Hits@10 metrics, which indicates that the embeddings obtained with our proposed method are of high quality. We observe that:
  • The experimental results on FB15k-237 and WN18RR indicate that our method can learn high-quality representations in KG.
  • Our method outperformed the other baselines in terms of the Hits@3 and Hits@10 metrics, but its performance was poor in terms of the mean reciprocal rank and the Hits@1 metric on WN18RR. This may be because WN18RR contains a large number of entities but only a few relations, so most methods can only judge the correctness of a triple but cannot rank it in the top position.
  • On FB15k-237, our method outperformed the other baselines in terms of the Hits@3, Hits@10, and mean reciprocal rank metrics, and came second in terms of the Hits@1 and mean rank metrics. The improvements observed on FB15k-237 were greater than those on WN18RR; FB15k-237 contains more relations, and thus the uncertainties of its components are larger than those in WN18RR, which indicates that our method can learn valid representations with uncertainty in KG.

7. Conclusions

In this paper, we proposed a co-embedding model that learns latent representations of both entities and relations in the same semantic space, embedding them as Gaussian distributions. To obtain high-quality embeddings, we introduced the variational auto-encoder, an auto-encoder consisting of a probabilistic encoder and a probabilistic decoder, into our model. One asset of this technique is that the affinities between entities and relations can be measured effectively, since they are embedded in the same semantic space; we also explain the transformation from observed values to latent representations via the two models of the variational auto-encoder. In our experiments, we evaluated the performance of the co-embedding model and other baselines on several benchmark datasets. From the experimental results, we conclude that our method learns high-quality representations of KG components.
In the future, we plan to extend our method by adopting other prior distributions and optimizing the variational lower bound more effectively.

Author Contributions

Conceptualization, H.H.; data curation, L.X. and Q.D.; formal analysis, L.X.; methodology, L.X. and H.H.; project administration, H.H.; software, Q.D.; supervision, H.H.; validation, H.H.; visualization, L.X. and Q.D.; writing—original draft, L.X.; writing—review and editing, H.H.; funding acquisition, H.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Wenzhou Science and Technology Planning Project #2021R0082.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

  1. Berant, J.; Chou, A.; Frostig, R.; Liang, P. Semantic Parsing on Freebase from Question-Answer Pairs. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, Seattle, WA, USA, 18–21 October 2013; pp. 1533–1544.
  2. Heck, L.; Hakkani-Tür, D.; Tur, G. Leveraging Knowledge Graphs for Web-Scale Unsupervised Semantic Parsing. In Proceedings of the International Speech Communication Association, Lyon, France, 25–29 August 2013.
  3. Wang, W.Y.; Mazaitis, K.; Lao, N.; Mitchell, T.; Cohen, W.W. Efficient Inference and Learning in a Large Knowledge Base: Reasoning with Extracted Information using a Locally Groundable First-Order Probabilistic Logic. arXiv 2014, arXiv:cs.AI/1404.3301.
  4. Bordes, A.; Weston, J.; Usunier, N. Open Question Answering with Weakly Supervised Embedding Models. arXiv 2014, arXiv:cs.CL/1404.4326.
  5. Bordes, A.; Chopra, S.; Weston, J. Question Answering with Subgraph Embeddings. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), Doha, Qatar, 25–29 October 2014; pp. 615–620.
  6. Bordes, A.; Usunier, N.; Garcia-Duran, A.; Weston, J.; Yakhnenko, O. Translating Embeddings for Modeling Multi-relational Data. In Advances in Neural Information Processing Systems 26; Burges, C.J.C., Bottou, L., Welling, M., Ghahramani, Z., Weinberger, K.Q., Eds.; Curran Associates, Inc.: Red Hook, NY, USA, 2013; pp. 2787–2795.
  7. Wang, Z.; Zhang, J.; Feng, J.; Chen, Z. Knowledge graph embedding by translating on hyperplanes. In Proceedings of the AAAI Conference on Artificial Intelligence, Quebec, QC, Canada, 27–31 July 2014; Volume 28.
  8. Lin, Y.; Liu, Z.; Sun, M.; Liu, Y.; Zhu, X. Learning Entity and Relation Embeddings for Knowledge Graph Completion. In Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence, Austin, TX, USA, 25–30 January 2015; pp. 2181–2187.
  9. Nickel, M.; Tresp, V.; Kriegel, H.P. A three-way model for collective learning on multi-relational data. In Proceedings of the ICML, Bellevue, WA, USA, 28 June–2 July 2011.
  10. Yang, B.; Yih, W.-t.; He, X.; Gao, J.; Deng, L. Embedding Entities and Relations for Learning and Inference in Knowledge Bases. arXiv 2014, arXiv:cs.CL/1412.6575.
  11. Trouillon, T.; Welbl, J.; Riedel, S.; Gaussier, É.; Bouchard, G. Complex Embeddings for Simple Link Prediction. arXiv 2016, arXiv:cs.AI/1606.06357.
  12. Dettmers, T.; Minervini, P.; Stenetorp, P.; Riedel, S. Convolutional 2D Knowledge Graph Embeddings. arXiv 2017, arXiv:cs.LG/1707.01476.
  13. Nguyen, D.Q.; Nguyen, T.D.; Nguyen, D.Q.; Phung, D. A Novel Embedding Model for Knowledge Base Completion Based on Convolutional Neural Network. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, New Orleans, LA, USA, 1–6 June 2018; Volume 2, pp. 327–333.
  14. Schlichtkrull, M.; Kipf, T.N.; Bloem, P.; van den Berg, R.; Titov, I.; Welling, M. Modeling Relational Data with Graph Convolutional Networks. arXiv 2017, arXiv:stat.ML/1703.06103.
  15. Kipf, T.N.; Welling, M. Semi-supervised classification with graph convolutional networks. arXiv 2016, arXiv:1609.02907.
  16. Paccanaro, A.; Hinton, G.E. Learning distributed representations of concepts using linear relational embedding. IEEE Trans. Knowl. Data Eng. 2001, 13, 232–244.
  17. He, S.; Liu, K.; Ji, G.; Zhao, J. Learning to Represent Knowledge Graphs with Gaussian Embedding. In Proceedings of the 24th ACM International on Conference on Information and Knowledge Management, CIKM'15, New York, NY, USA, 19–30 October 2015; pp. 623–632.
  18. Vilnis, L.; McCallum, A. Word Representations via Gaussian Embedding. arXiv 2014, arXiv:cs.CL/1412.6623.
  19. Kingma, D.P.; Welling, M. Auto-Encoding Variational Bayes. arXiv 2013, arXiv:stat.ML/1312.6114.
  20. Kingma, D.P.; Rezende, D.J.; Mohamed, S.; Welling, M. Semi-Supervised Learning with Deep Generative Models. arXiv 2014, arXiv:cs.LG/1406.5298.
  21. Jiang, Z.; Zheng, Y.; Tan, H.; Tang, B.; Zhou, H. Variational Deep Embedding: An Unsupervised and Generative Approach to Clustering. arXiv 2016, arXiv:cs.CV/1611.05148.
  22. Makhzani, A.; Shlens, J.; Jaitly, N.; Goodfellow, I.; Frey, B. Adversarial Autoencoders. arXiv 2015, arXiv:cs.LG/1511.05644.
  23. Dosovitskiy, A.; Brox, T. Generating Images with Perceptual Similarity Metrics based on Deep Networks. arXiv 2016, arXiv:cs.LG/1602.02644.
  24. Miller, G.A. WordNet: A Lexical Database for English. Commun. ACM 1995, 38, 39–41.
  25. Bollacker, K.; Evans, C.; Paritosh, P.; Sturge, T.; Taylor, J. Freebase: A Collaboratively Created Graph Database for Structuring Human Knowledge. In Proceedings of the 2008 ACM SIGMOD International Conference on Management of Data, SIGMOD'08, Vancouver, BC, Canada, 9–12 June 2008; pp. 1247–1250.
Table 1. Main notations in our paper.
Symbol | Description
$\mathcal{G}$ | a knowledge graph
$\mathcal{E}$ | set of entities
$\mathcal{R}$ | set of relations
$\mathcal{O}$ | set of triples
$M = |\mathcal{E}|$ | number of entities
$N = |\mathcal{R}|$ | number of relations
$W = |\mathcal{O}|$ | number of triples
$D$ | dimension of latent variables
$O \in \mathbb{R}^{W \times 3}$ | observed data for triples
$\mathbf{Z}^E \in \mathbb{R}^{M \times D}$ | latent representation matrix for entities
$\mathbf{Z}^R \in \mathbb{R}^{N \times D}$ | latent representation matrix for relations
Table 2. The relations with /education/ as the domain and their determinants and traces of the corresponding covariances, sorted by descending order of traces.
Relation | #Head | #Tail | Type | log(det) | Trace
major_field_of_study | 225 | 77 | m-n | −338.8 | 38.1
student | 183 | 292 | 1-n | −340.6 | 34.8
institution | 222 | 22 | m-n | −376.2 | 32.8
colors | 85 | 19 | m-n | −400.9 | 26.9
fraternities_sororities | 20 | 3 | m-1 | −406.9 | 24.9
campuses | 13 | 13 | 1-1 | −411.9 | 21.3
currency | 5 | 3 | m-1 | −423.4 | 19.8
educational_institution | 13 | 13 | 1-1 | −430.6 | 18.7
Table 3. Experimental results for WN18RR and FB15k-237 test sets. Hits@N values are presented as percentages. The best score is in bold and the second best score is underlined.
WN18RR:
Model | MR | MRR | Hits@1 | Hits@3 | Hits@10
TransE (Bordes et al., 2013) [6] | 2300 | 0.243 | 4.27 | 44.1 | 53.2
DistMult (Yang et al., 2015) [10] | 7000 | 0.444 | 41.2 | 47 | 50.4
ComplEx (Trouillon et al., 2016) [11] | 7882 | 0.449 | 40.9 | 46.9 | 53
ConvE (Dettmers et al., 2018) [12] | 4464 | 0.456 | 41.9 | 47 | 53.1
ConvKB (Nguyen et al., 2018) [13] | 1295 | 0.265 | 5.82 | 44.5 | 55.8
R-GCN (Schlichtkrull et al., 2018) [14] | 6700 | 0.123 | 8 | 13.7 | 20.7
Our work | 1963 | 0.236 | 11.4 | 48.0 | 57.6
FB15k-237:
Model | MR | MRR | Hits@1 | Hits@3 | Hits@10
TransE (Bordes et al., 2013) [6] | 323 | 0.279 | 19.8 | 37.6 | 44.1
DistMult (Yang et al., 2015) [10] | 512 | 0.281 | 19.9 | 30.1 | 44.6
ComplEx (Trouillon et al., 2016) [11] | 546 | 0.278 | 19.4 | 29.7 | 45
ConvE (Dettmers et al., 2018) [12] | 245 | 0.312 | 22.5 | 34.1 | 49.7
ConvKB (Nguyen et al., 2018) [13] | 216 | 0.289 | 19.8 | 32.4 | 47.1
R-GCN (Schlichtkrull et al., 2018) [14] | 600 | 0.164 | 10 | 18.1 | 30
Our work | 240 | 0.518 | 21.8 | 42.0 | 52.1
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
