Article

An Inference Framework of Markov Logic Network for Link Prediction in Heterogeneous Networks

1 School of Information Science and Engineering, Yunnan University, Kunming 650500, China
2 Yunnan Key Laboratory of Intelligent Systems and Computing, Yunnan University, Kunming 650500, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2025, 15(8), 4424; https://doi.org/10.3390/app15084424
Submission received: 26 February 2025 / Revised: 13 April 2025 / Accepted: 14 April 2025 / Published: 17 April 2025
(This article belongs to the Special Issue Innovative Data Mining Techniques for Advanced Recommender Systems)

Abstract

The presence of multiplex edges and sparse links often hampers the efficacy of link prediction (LP) tasks. By harnessing the expressive power of Markov logic network (MLN) formulations, multiplex edges can be unified to enhance LP effectiveness. However, scaling up inferences for effective LP remains challenging due to the inefficiency of traditional MLN inference methods. To tackle this issue, we redefine LP tasks within heterogeneous networks using MLN inferences and introduce a tailored inference framework to handle unobserved nodes and complex MLN structures. We propose a method to partition the MLN structure into discrete substructures and compute node label distributions using the variational expectation maximization (VEM) algorithm. Additionally, we establish a termination condition to streamline inference search space and present the MLN-based LP algorithm. Experimental findings demonstrate the efficacy of our VEM-driven MLN inference framework for LP tasks in heterogeneous networks, showcasing superior accuracy compared to existing approaches.

1. Introduction

Link prediction (LP) in heterogeneous networks (also known as multi-relational LP) [1] aims to forecast multi-type connections between objects in networks. This field has recently garnered significant interest due to its diverse applications in social networks [2], knowledge graph completion [3], community discovery [4], item recommendation [5], etc. For instance, in a heterogeneous academic social network with entities like authors A, B, and C and link types such as coauthor and citedby, as illustrated in Figure 1a, the LP task involves determining the validity of links like coauthor (A, B), citedby (A, B), and citedby (B, C). Notably, these diverse link types mutually influence each other’s existence, such as citedby (C, D) impacting coauthor (C, D). Moreover, the scarcity of observed links compared to potential links results in the link sparsity challenge.
LP in heterogeneous networks faces two primary challenges. First, nodes in such networks often exhibit multiple relationship types, necessitating the creation of a unified metric space that captures the inter-dependencies among these diverse links [6]. Current approaches [7,8] typically extract and merge type-specific features into distinct metric spaces, aiming to enhance prediction performance through transfer learning or algebraic space integration. However, these methods often neglect dependencies across link types, leading to decreased accuracy. For instance, the presence of a coauthor link between C and D might influence the existence of a citedby link between them. Second, real-world networks are sparse, with numerous pairs of unconnected nodes. LP tasks, akin to supervised learning on non-relational data, involve predicting relational facts using known facts as supervision. This setup results in inadequate supervision and reduced accuracy in LP tasks. Addressing these challenges is crucial for effective multi-relational LP.
Multi-type relations can be encoded as logical formulas so that their inter-dependencies can be computed. Markov logic networks (MLNs) offer an intuitive methodology for delineating network relationships through logical formulas, garnering increasing acclaim in the domain of multi-relational link prediction [9]. MLNs permit deviations from formulas through a penalizing mechanism rather than outright failure, which aligns with the inherent uncertainty prevalent in real-world relationships among entities. The magnitude of the penalty is regulated by weights assigned to individual formulas, with higher weights denoting a more robust endorsement of the associated patterns.
For instance, propositions such as "coauthors are likely to be friends" (coauthor(x, y) → friend(x, y)) and "if one author cites another, they might develop a friendly relationship" (citedby(x, y) → friend(x, y)) elucidate the interplay between different relations. By designing these formulas and employing grounding techniques, we can transform information like citedby(A, C) into grounded predicates, which serve as nodes in the MLN structure. The inter-dependency between different relations can be represented through the weights of formulas, which are calculated using MLN's learning methods based on the values of MLN nodes. Similarly, the values of unknown nodes indicating link facts can be calculated through MLN's inference methods using these learned weights.
In particular, LP in heterogeneous networks can be reformulated as an inference problem in MLNs and traditionally solved using Statistical Relational Learning (SRL) methods [10]. This approach predicts unknown links by inferring node labels in the MLN, using observed links as evidence, but it faces two major challenges for multi-relational LP: (1) the sparsity of observed links leads to numerous unobserved MLN nodes, and (2) inference in large MLNs is computationally complex.
Traditional SRL inference methods for MLNs struggle with these challenges due to their inability to efficiently handle latent variables in sparse network settings. The variational expectation maximization (VEM) algorithm [10] addresses these limitations by introducing a principled framework for reasoning with latent variables. VEM enhances MLN inference for LP tasks in two key ways: first, by treating unobserved nodes as latent variables, it provides a systematic approach to handle network sparsity. Second, by approximating complex posterior distributions through variational inference, it significantly reduces the computational burden of exact inference in large-scale networks. Specifically, VEM iteratively updates variational and true posterior distributions to maximize the Evidence Lower Bound (ELBO) [11], thereby approximating the posterior distribution of node labels while maintaining computational tractability. This optimization process effectively balances inference accuracy with computational efficiency, making it particularly suitable for large heterogeneous networks.
To tackle the first challenge, we develop a substructure-based approach that iteratively partitions the MLN by selecting neighbors of unobserved nodes (seed nodes) corresponding to unknown links. The neighbor selection follows the 2-hop enclosing subgraph principle [12], balancing computational efficiency with prediction accuracy. Seed nodes are progressively selected based on VEM results from previous substructures, leading to an efficient MLN-based link prediction algorithm.
For the second challenge, we enhance the VEM framework by incorporating both neighboring node labels and MLN structural features. We leverage graph convolutional networks (GCNs) [13] to capture label dependencies and network topology. Specifically, we model the variational distribution of unobserved node labels as a categorical distribution and use GCNs to learn MLN structure representations, using neighboring node label distributions as feature matrices. This approach effectively captures the complex dependencies between different relationship types in the network.
Generally, the contributions of this paper are as follows:
  • We introduce the concept of transforming multi-relational LP tasks into inferences in MLN, wherein LP is viewed as the estimation of node labels in MLN, with known links in a heterogeneous network considered as observed nodes in MLN.
  • We propose a method to partition the MLN into distinct substructures to address the complexity of large MLN structures. Additionally, we present a VEM-based approach for calculating the distribution of node labels by incorporating formula features and the MLN structure.
  • We define a termination condition for computing label distributions in MLN substructures and provide an algorithm for MLN-based LP.
  • Extensive experiments demonstrate that our method surpasses both traditional and state-of-the-art (SOTA) approaches in terms of the accuracy of LP tasks.
The rest of this paper is organized as follows. We review related work in Section 2 and introduce preliminaries in Section 3. We elaborate our proposed inference framework of MLN for LP in Section 4. In Section 5, we report experimental results and performance studies. Lastly, we conclude and discuss future work in Section 6.

2. Related Work

In this section, we provide a comprehensive review of existing research on LP in both homogeneous and heterogeneous networks, along with an examination of inferences within MLN.
LP on homogeneous networks. Heuristic and learning-based methods represent recent trends in LP research on homogeneous networks. Heuristic methods, such as utilizing common neighbors [14,15], are employed to infer link existence. Recent advancements in LP research have shown that learning-based methods like matrix factorization and network embedding techniques are more effective in acquiring node embeddings [16]. For instance, MHGCN+ [17] and HL-GNN [18] learn node embeddings through heterogeneous meta-path interaction and through intra-layer propagation with inter-layer connections, respectively. Moreover, ensemble methods [9,19] enhance LP tasks by combining the outcomes of heuristic and learning-based methods, leading to improved robustness at the expense of efficiency. These approaches construct classifier features through network embedding but struggle to achieve a unified representation of multiplex edges, which are prevalent in real-world heterogeneous network scenarios.
LP on heterogeneous network. Deep neural network-based methods have garnered significant attention for their adept feature extraction in LP tasks on heterogeneous networks [20,21]. These methods primarily focus on mining type-specific features to optimize prediction performance and subsequently combine relation embeddings through algebraic or transferable operations, as discussed in studies such as [7,8]. However, effectively capturing inter-dependencies among different types of links remains a challenge. Various techniques have been suggested to create link prediction models that minimize the loss associated with the representation of multi-type links. For instance, HRMNN [22] integrates a relational graph generator that leverages the topological attributes of heterogeneous graphs and combines object-level aggregation with a multi-head attention mechanism to produce more comprehensive node representations. MTTM [2] consists of a generative predictor and a discriminative classifier for link representations, enabling the discrimination of links and leveraging an adversarial neural network to maintain robustness against type differences. While these methods improve the representation of multiplex edges, challenges related to limited observed links and complex network architectures persist.
Researchers have endeavored to employ probabilistic graphical models (PGMs) to represent multi-type links in a unified manner, as discussed in [23]. First-order logic formulas have been utilized to model multi-relational databases with Bayesian network (BN), as outlined in [24], and the first-order BN serves as a statistical relational model capturing database frequencies.
As MLN formulas contain predicates indicating various link types, the inter-dependencies among these link types can be captured by the weights of formulas [25]. Meanwhile, MLN, acting as a PGM incorporating first-order logic rules, has been applied for representing relational data [26]. By converting heterogeneous networks into MLN and leveraging formulas, it enables a unified expression for multiplex edges. Nonetheless, optimization using SRL methods, as discussed in [27], is found to be non-scalable for LP tasks, primarily due to its inefficiency. Furthermore, the extensive structure of MLN leads to a vast search space and necessitates a costly inference process.
Inference in MLN. Efforts have been undertaken to enhance the efficiency and scalability of inferences in MLN for LP across diverse scenarios. For instance, QNMLN [25] expands on the formulas, while MCLA [28] refines its parameter updates for a range of practical tasks. The VEM algorithm [29] serves as the inference framework for MLN, with graph neural networks (GNNs) utilized in the E-step to capture the features of target relation nodes. ExpressGNN [30] utilizes GNN for node representation integration with formulas. On the other hand, PlogicNet [11] infers embeddings based on the local dependency of target nodes, yet overlooks their interactions. Pgat [26] optimizes the representations by employing graph attention neural network-based node embeddings. These methods use SRL to update parameters in the M-step by exploring the complete MLN, leading to challenges in generating precise embeddings for unlabeled nodes in efficient multi-relational LP. In contrast, our study utilizes GNNs to encapsulate the features of MLN and formulas in the M-step, and proposes substituting the complete MLN with substructures to significantly decrease the search space of inferences.

3. Preliminaries

Consider a heterogeneous network with the following components: V represents the set of entities (e.g., papers, authors), R denotes the set of edges representing relationships, and A is the set of attributes associated with entities. We distinguish between observed and unobserved relationships, where R_o represents the set of observed relations and R_u represents the set of unobserved relations.
Definition 1.
A heterogeneous network is defined as H = (R, V, A), where R = R_o ∪ R_u and |R| + |V| > 2.
To transform our heterogeneous network into an MLN, we first represent relationships as logical predicates. Each relationship becomes a predicate r(x, y), where r is a relation type from R, and x and y are entities from V.
These predicates form the building blocks of logical formulas that capture patterns in our data. When we ground these formulas (replace variables with actual entities), we create fact nodes in the MLN. For example, in the academic citation network of Figure 1a, we might observe that when author A cites both authors B and C, authors B and C are likely to be related. This pattern can be encoded as the logical formula citedby(A, B) ∧ citedby(A, C) → citedby(B, C), in which citedby(A, B), citedby(A, C), and citedby(B, C) become concrete nodes in the MLN.
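To make the grounding step concrete, the following Python sketch enumerates groundings of the example formula over a toy entity set. The function name ground_formula and the tuple-based triple representation are illustrative assumptions for this sketch, not part of the paper's implementation.

from itertools import permutations

# Toy facts from the citation network of Figure 1a (illustrative only).
observed = {("citedby", "A", "B"), ("citedby", "A", "C")}
entities = {"A", "B", "C"}

def ground_formula(entities, observed):
    """Ground citedby(x, y) ∧ citedby(x, z) → citedby(y, z) over all entity triples.

    Returns the fact nodes of the MLN (grounded predicates) and the groundings
    of the formula that connect them in the MLN structure.
    """
    fact_nodes, groundings = set(observed), []
    for x, y, z in permutations(sorted(entities), 3):
        body = [("citedby", x, y), ("citedby", x, z)]
        head = ("citedby", y, z)
        # Each grounding contributes its predicates as fact nodes, whether or
        # not they are observed; the unobserved ones are the nodes to infer.
        fact_nodes.update(body + [head])
        groundings.append((body, head))
    return fact_nodes, groundings

nodes, groundings = ground_formula(entities, observed)
unobserved = nodes - observed   # e.g., citedby(B, C) becomes an unobserved fact node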
Let V represent all fact nodes in our MLN, consisting of V_o and V_u, which denote the observed predicates (known relationships) and the unobserved predicates (relationships to be inferred), respectively.
Definition 2.
An MLN is defined as M = ⟨G, w⟩, where the following are denoted:
  • G = ⟨V, E⟩ is an undirected graph, where V is the set of fact nodes following Bernoulli distributions and E is the set of edges derived from the logical formulas F.
  • The joint probability distribution over G is defined by
    P(V_o, V_u) = \frac{1}{Z(\mathbf{w})} \exp\left( \sum_{f \in F} w_f\, n_f(V_o, V_u) \right),
    where Z(w) is the partition function, w_f is the weight of formula f, and n_f(·) counts the true groundings of f.
The parameters w of an MLN can be learned through discriminative learning by maximizing the pseudo-likelihood of V_o [10]:
\arg\max_{\mathbf{w}} \sum_{n \in V_o} \log p(y_n \mid \mathrm{MB}(n)),    (1)
where y_n represents the label of node n, and MB(n) refers to the Markov blanket of n [31], encompassing its direct parents, direct children, and the other parents of its direct children within the MLN.
The joint distribution P(V_u) can be computed using maximum a posteriori (MAP) inference [26]:
\arg\max_{V_u} \sum_{f \in F} w_f\, n_f(V_o, V_u).    (2)
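As a minimal illustration of the quantities above, the sketch below counts true groundings n_f and accumulates the weighted score Σ_f w_f · n_f for a candidate truth assignment, i.e., the objective maximized over V_u in Equation (2). The helper names and the implication-style satisfaction check are assumptions for this example.

def n_true_groundings(groundings, assignment):
    """Count groundings of a formula f satisfied under `assignment`.

    Each grounding is a (body, head) pair of grounded predicates; a grounding
    of body → head is violated only when every body predicate is true and the
    head is false. `assignment` maps grounded predicates to True/False.
    """
    count = 0
    for body, head in groundings:
        body_true = all(assignment.get(p, False) for p in body)
        if (not body_true) or assignment.get(head, False):
            count += 1
    return count

def mln_score(weighted_formulas, assignment):
    """Unnormalized log-probability sum_f w_f * n_f(V_o, V_u), the quantity
    maximized over V_u by the MAP inference of Equation (2)."""
    return sum(w_f * n_true_groundings(g_f, assignment)
               for w_f, g_f in weighted_formulas)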
In this paper, we aim to address the multi-relational LP task within G by transforming H into G and calculating P(V_u), where V_u corresponds to the unknown links in H and P(V_u) represents their joint probability distribution. For the transformation, we first build G by establishing m formulas and performing the corresponding grounding. To address the efficiency bottleneck, we partition G into substructures {g_i | 1 ≤ i ≤ k}, where V_u^i denotes the unobserved nodes in g_i. To calculate P(V_u^i) efficiently, we propose a VEM-based inference method that updates w by Equation (1) and calculates P(V_u^i) by Equation (2) in each substructure g_i. Moreover, we introduce a termination condition to determine whether the computation in g_{i+1} is necessary. Consequently, we frame the inference of P(V_u) as the computation of P(V_u^1 | g_1), P(V_u^2 | g_2), …, P(V_u^k | g_k).
The key notations, abbreviations, and their descriptions are summarized in Table 1 and Table 2.

4. Proposed Method

The inference framework for efficient LP tasks is depicted in Figure 2, comprising the following three components.
  • MLN substructure construction is proposed to construct MLN substructures to avoid the massive search space of the entire MLN, as presented in Section 4.1.
  • VEM-based inference is proposed to calculate the joint distributions of nodes by extending the VEM with GCN, as presented in Section 4.2.
  • MLN-based LP is proposed with the termination condition to fulfill LP by inferences in MLN, as presented in Section 4.3.
Figure 2. Overview of the proposed framework.
Our method initiates by transforming knowledge graph triples into grounded predicates to construct the MLN structure. Subsequently, we partition this MLN structure into coherent substructures. Finally, we apply VEM-based inference independently on each substructure, enabling efficient LP tasks on KGs.

4.1. MLN Substructure Construction

To compute the joint distribution P(V_u) efficiently without navigating the extensive MLN search space, we divide G into k substructures, denoted as 𝒢 = {g_i | 1 ≤ i ≤ k}. Constructing g_i involves choosing the unobserved nodes neighboring V_o^{i-1} in g_{i-1} (labeled as V_u^i) and progressively expanding through neighboring nodes in the MLN hop by hop, until adequate information is collected for label computation.
Viewed from a heterogeneous network perspective, the 2-hop enclosing subgraph N_2(u, v) has been validated as containing adequate information for LP between entities u and v in the heterogeneous network H, with minimal approximation errors [12]. This justifies leveraging the 2-hop enclosing subgraph principle for the expansion of V_u^{i-1} within G, which encompasses the links in H. To achieve this, we map N_2(u, v) to the corresponding fact nodes in the MLN. Assuming that N_2(u, v) includes η predicates for facts like coauthor(x, y), the nodes in G analogous to N_2(u, v) in H are denoted as follows:
N(r(u, v)) = \{ r_1(u, x), r_2(u, x), \ldots, r_\eta(u, x) \} \cup \{ r_1(v, x), r_2(v, x), \ldots, r_\eta(v, x) \},    (3)
where r(u, v) is the MLN node corresponding to the link between u and v in H.
In the context of the MLN, Equation (3) comprises ample information for calculating the node label of r(u, v), guiding the exploration of r(u, v)'s neighbors through successive expansion hops. For convenience, we use n and N_h(n) to denote r(u, v) and its neighbors in the MLN reached through h expansion hops, respectively. In particular, the h-th expansion hop is fulfilled by iteratively adding the neighbors of all nodes in N_{h-1}(n). The expansion halts when the generated neighbors encompass all nodes specified in Equation (3).
Note that without substructure construction, the interpretability of the entire MLN with respect to w can be assessed based on P(V_u). While our approach partitions the computation process, the substructure construction is specifically designed for the efficient estimation of P(V_u). Consequently, despite this division, the weights w can still be derived through conditional probability calculations, thus preserving the inherent interpretability of the original MLN.
By this procedure of substructure construction, we can approximate P(V_u) by P(V_u^1 | g_1), P(V_u^2 | g_2), …, P(V_u^k | g_k) (V_u^i ⊆ g_i) with minimal approximation errors. Consequently, this partition strategy reduces the computational complexity from O(|E_MLN| · T) to O(Σ_{i=1}^{k} |E_{g_i}| · T_i), where Σ_{i=1}^{k} |E_{g_i}| ≪ |E_MLN| for typical sparse KGs. We summarize the steps of substructure construction in Algorithm 1, whose time complexity is O(k · |V_g|), where |V_g| denotes the average number of fact nodes in a substructure.

4.2. VEM-Based Inference

In this section, we present a method to augment the VEM algorithm with GCNs to compute the labels of unobserved relation nodes.

4.2.1. Variational Distribution

To enhance LP efficiency in the MLN, we propose computing the true posterior distribution P(V_u | V_o) within the EM algorithm by treating the labels of V_u as latent variables. Given the challenge posed by the large number of unobserved fact nodes in the MLN for directly calculating P(V_u | V_o), we approximate this distribution with a variational distribution Q(V_u). Consequently, we derive P(V_u | V_o) from Q(V_u) by minimizing their KL divergence:
KL(Q(V_u) \,\|\, P(V_u \mid V_o)) = E_Q[\log Q(V_u) - \log P(V_u \mid V_o)] = E_Q\left[\log Q(V_u) - \log \frac{P(V_u, V_o)}{P(V_o)}\right] = E_Q[\log Q(V_u) - \log P(V_u, V_o)] + \log P(V_o),    (4)
where E_Q[·] represents the expectation with respect to Q.
By rearranging the term log P(V_o) in Equation (4), we have
\log P(V_o) = KL(Q(V_u) \,\|\, P(V_u \mid V_o)) - E_Q[\log Q(V_u) - \log P(V_u, V_o)] = KL(Q(V_u) \,\|\, P(V_u \mid V_o)) + E_Q[\log P(V_u, V_o) - \log Q(V_u)].    (5)
Algorithm 1 MLN Substructure Construction
Input: observed nodes V_o, unobserved nodes V_u, MLN structure G = (V, E), number of substructures k
Output: set of substructures 𝒢 = {g_i | 1 ≤ i ≤ k}
1: 𝒢 ← ∅
2: V_o^1 ← V_o
3: for i = 1 to k do
4:    V_u^i ← V_u ∩ ⋃_{v ∈ V_o^i} N_1(v)
5:    for each node n in V_u^i do
6:       Obtain N(n) using Equation (3)
7:       h ← 1
8:       while N(n) ⊄ N_h(n) do  // expansion
9:          h ← h + 1
10:         NN ← neighbors of all the nodes in N_{h−1}(n)
11:         N_h(n) ← N_{h−1}(n) ∪ NN
12:      end while
13:      E_h(n) ← edges w.r.t. N_h(n) in G
14:      g_i ← (N_h(n) ∪ {n}, E_h(n))  // the i-th substructure of G
15:   end for
16:   V_o^i ← V_o^i ∪ V_u^i  // update observed nodes
17:   𝒢 ← 𝒢 ∪ {g_i}  // add substructure g_i to 𝒢
18: end for
19: return 𝒢
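A minimal Python rendering of Algorithm 1 is sketched below, assuming the MLN is stored as an adjacency dictionary and that the target sets N(n) of Equation (3) are precomputed. Identifiers such as build_substructures are illustrative, and the hop cap is a safeguard added for this sketch that is not part of Algorithm 1.

def build_substructures(adj, V_o, V_u, k, target_sets, max_hops=10):
    """Sketch of Algorithm 1: grow k substructures around unobserved seed nodes.

    adj: dict mapping each MLN fact node to the set of its neighbors.
    target_sets: maps a seed node n to N(n), the fact nodes corresponding to the
                 2-hop enclosing subgraph of Equation (3) (assumed precomputed).
    """
    substructures, observed = [], set(V_o)
    for _ in range(k):
        # Seed nodes: unobserved neighbors of the currently observed set.
        seeds = {n for v in observed for n in adj.get(v, ()) if n in V_u}
        for n in seeds:
            frontier = set(adj.get(n, ()))                 # N_1(n)
            for _ in range(max_hops):                      # hop-by-hop expansion
                if target_sets[n] <= frontier:
                    break
                frontier |= {m for u in frontier for m in adj.get(u, ())}
            nodes = frontier | {n}
            edges = {(u, v) for u in nodes for v in adj.get(u, ()) if v in nodes}
            substructures.append((nodes, edges))
        observed |= seeds    # newly handled seeds are treated as observed in the next round
    return substructures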
As KL(Q(V_u) || P(V_u | V_o)) is non-negative, we define E_Q[log P(V_u, V_o) − log Q(V_u)] as the ELBO of log P(V_o). Since log P(V_o) is a constant with respect to V_u, we transform the minimization of KL(Q(V_u) || P(V_u | V_o)) into ELBO maximization by iteratively executing variational E-steps and M-steps.

4.2.2. Augmentation of VEM

Variational E-step. According to mean-field theory [32], the variational distribution Q(V_u) over all unobserved fact nodes factorizes over the individual fact nodes n (n ∈ V_u):
Q(V_u) = \prod_{n \in V_u} q(n).    (6)
To calculate q(n) in Equation (6), we adopt a GCN-based representation with a Softmax function as the distribution of n. For this purpose, we take the labels of the neighbors of n as its feature vector q_n; the labels of the observed and unobserved fact nodes are 1 and 0, respectively. As shown in Figure 3, the labels of the fact nodes surrounding n form its feature vector. Thus, the feature matrix F_Q = [q_n] can be constructed, and the distribution of n is
q(n) = \mathrm{Cat}(y_n \mid \mathrm{Softmax}(A F_Q W_Q)),    (7)
where Cat denotes the categorical distribution, A is the adjacency matrix corresponding to the graph structure, and W_Q is the weight matrix. Since W_Q can be optimized with the observed nodes, we divide the optimization of W_Q into the following two parts:
  • For the unobserved fact nodes V_u, we update W_Q by maximizing
    L_u = \sum_{n \in V_u} KL(Q'(V_u) \,\|\, Q(V_u)),    (8)
    where Q'(V_u) is the distribution of the unobserved fact nodes in the previous iteration round. Since the unobserved and observed fact nodes follow the same distribution, we initialize Q'(V_u) with P(V_o) in the first iteration round.
  • For the observed fact nodes V_o, we use the cross-entropy loss to make the GCN fit the true labels:
    L_o = -\sum_{n \in V_o} y_n \log p_c(y_n),    (9)
    where y_n is the label of n in V_o, and p_c(y_n) is the probability that n is predicted as true.
Thus, we sum Equations (8) and (9) to form the training loss of the GCN that yields the variational distribution Q(V_u):
L_Q = L_u + L_o.    (10)
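The variational E-step can be sketched in PyTorch as follows, with a single-layer GCN realizing Equation (7) and the two terms of Equations (8) and (9) combined into one minimized objective (one plausible reading of the W_Q update). Tensor shapes, the normalization of A, and the function name e_step_loss are assumptions of this sketch.

import torch
import torch.nn.functional as F

def e_step_loss(A, F_Q, W_Q, obs_mask, y_obs, q_prev):
    """One E-step pass: q(n) = Cat(Softmax(A F_Q W_Q)) and loss L_Q = L_u + L_o.

    A:        (N, N) normalized adjacency matrix of the substructure.
    F_Q:      (N, d) feature matrix built from neighbor labels.
    W_Q:      (d, C) trainable GCN weight matrix.
    obs_mask: (N,)  boolean mask of observed fact nodes.
    y_obs:    (N,)  integer labels (used only where obs_mask is True).
    q_prev:   (N, C) variational distribution from the previous iteration.
    """
    logits = A @ F_Q @ W_Q
    q = F.softmax(logits, dim=-1)                          # Equation (7)
    # L_u: divergence between previous-round and current Q on unobserved nodes (Eq. (8)).
    L_u = F.kl_div(q[~obs_mask].log(), q_prev[~obs_mask], reduction="sum")
    # L_o: cross-entropy on observed nodes (Eq. (9)).
    L_o = F.cross_entropy(logits[obs_mask], y_obs[obs_mask], reduction="sum")
    return L_u + L_o, q                                    # Equation (10)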
M-step. To maximize the expectation of log P(V_u, V_o), we use a method similar to Equation (6) to factorize P(V_u) = \prod_{n \in V_u} p(n). Then, we leverage another GCN to obtain the node representations for the categorical distribution:
p(n) = \mathrm{Cat}(y_n \mid \mathrm{Softmax}(A F_P W_P)).    (11)
Since the two GCNs share the same structure within an iteration round, the adjacency matrix A in Equation (11) is the same as that in Equation (7). An element of F_P represents the feature vector p_n of a node n in V_u, whose value is determined by the m formulas and the corresponding weights w. Let n_n denote the number of truth-value nodes linked to n via each formula f_j, and let w_j be the weight of f_j. Then, we multiply the weight by the number of truth-value nodes to obtain the feature p_n = [w_j · n_n]. As shown in Figure 3, we construct the feature matrix F_P in the M-step from the numbers of nodes in the grounding formulas of f_1 and f_2, as well as their weights w_1 and w_2.
Let n(o) and n(Q(V_u)) be the real and expected numbers of true groundings of the formulas, respectively. We use their difference as the gradient of w with respect to the previous iteration round:
\nabla_{\mathbf{w}} E_Q[\log P(V_u)] = n(o) - n(Q(V_u)),    (12)
where Q(V_u) is the joint distribution of V_u calculated in the E-step of the previous iteration round.
Additionally, if a fact node is not grounded from any predicate in the j-th formula (i.e., the node is not linked to this formula), we set the corresponding weight w_j to 0.
Then, we design the following predicate loss to update W_P by maximizing E_Q[log P(V_u, V_o)]:
L_P = \sum_{n \in V_u \cup V_o} \log p(y_n),    (13)
where y_n (y_n ∼ Q(V_u)) is the label of n.
Thus, E_Q[log P(V_u, V_o)] can be maximized by the optimal W_P in the current iteration, and the distribution P(V_u | V_o) of the unobserved fact nodes can be approximated efficiently.
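A small sketch of the M-step bookkeeping is given below: building the feature matrix F_P from formula weights and grounding counts (as in Figure 3), and updating w by gradient ascent with the difference of Equation (12). The dictionary-based representation, the function names, and the learning rate are illustrative assumptions.

import torch

def m_step_features(groundings_per_formula, w, node_index, num_nodes):
    """Build F_P: the feature of node n is [w_j * n_n] over the m formulas (Figure 3).

    groundings_per_formula: list over formulas; entry j maps a fact node to the
                            number of truth-value nodes linked to it via formula j.
    w: (m,) tensor of formula weights; node_index maps fact nodes to row indices.
    Entries stay 0 for nodes not grounded from a formula, mirroring w_j = 0.
    """
    F_P = torch.zeros(num_nodes, len(w))
    for j, counts in enumerate(groundings_per_formula):
        for node, n_n in counts.items():
            F_P[node_index[node], j] = w[j] * n_n
    return F_P

def update_weights(w, n_observed, n_expected, lr=0.1):
    """Gradient ascent on E_Q[log P(V_u)] using Equation (12): grad = n(o) - n(Q(V_u))."""
    return w + lr * (n_observed - n_expected)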

4.2.3. Inference Algorithm

To calculate the joint distribution of fact nodes in each substructure, we use the VEM-based inference to update the parameters of GCNs in the variational E-steps and M-steps given the observed fact nodes.
We first initialize w with a normal distribution for training W_P with V_o^i in the M-step. Then, in the variational E-step, we optimize the variational distribution Q(V_u^i) with the output P(V_u^i | V_o^i). In the M-step, we optimize the posterior distribution P(V_u^i | V_o^i) with the Q(V_u^i) calculated by Equation (7). We then alternately optimize Q(V_u^i) and P(V_u^i | V_o^i) until their KL divergence is less than the given threshold or the given maximal number of iterations is reached.
We summarize the above ideas in Algorithm 2. In the worst case, the time complexity is O(T_1 · |E_{g_i}| · L), where T_1 is the maximal number of iterations of steps 7–13, |E_{g_i}| is the average number of edges in each substructure, and L is the number of layers in the GCN.
Algorithm 2 VEM-based inference
Input: the threshold ϱ of KL divergence, the MLN substructure g_i, the observed fact nodes V_o^i, the weights w, the unobserved fact nodes V_u^i
Output: posterior distribution P(V_u^i | V_o^i)
1: Initialize w and W_P
2: Obtain F_P with V_o^i by Equation (12)
3: Obtain A by the edges of g_i
4: Optimize W_P by Equation (13)
5: P(V_u^i | V_o^i) ← Softmax(A F_P W_P)  // by Equation (11)
6: j ← 1
7: while KL(P(V_u^i; W_P) || Q(V_u^i; W_Q)) ≥ ϱ do
8:    Update W_Q in the E-step by Equation (10)
9:    Optimize Q(V_u^i) by Equation (7)
10:   Update W_P in the M-step by Equation (13)
11:   Optimize P(V_u^i | V_o^i) by Equation (11)
12:   j ← j + 1
13: end while
14: T_1 ← j − 1
15: return P(V_u^i | V_o^i)
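Putting the pieces together, the following self-contained PyTorch sketch alternates E- and M-steps on one substructure in the spirit of Algorithm 2, stopping when the KL divergence between P and Q falls below the threshold. The uniform initialization of Q, the use of argmax pseudo-labels for the predicate loss of Equation (13), the optimizer choice, and the iteration cap are simplifying assumptions of this sketch.

import torch
import torch.nn.functional as F

def vem_inference(A, F_Q, F_P, obs_mask, y_obs, C, rho=1e-3, max_iter=50, lr=0.01):
    """Alternate E- and M-steps on one substructure until KL(P || Q) < rho."""
    N = A.shape[0]
    W_Q = torch.randn(F_Q.shape[1], C, requires_grad=True)
    W_P = torch.randn(F_P.shape[1], C, requires_grad=True)
    opt_q = torch.optim.Adam([W_Q], lr=lr)
    opt_p = torch.optim.Adam([W_P], lr=lr)
    q_prev = torch.full((N, C), 1.0 / C)
    for _ in range(max_iter):
        # E-step: fit Q (Equations (7)-(10)) with W_P fixed.
        logits_q = A @ F_Q @ W_Q
        q = F.softmax(logits_q, dim=-1)
        loss_q = F.kl_div(q[~obs_mask].log(), q_prev[~obs_mask], reduction="sum") \
                 + F.cross_entropy(logits_q[obs_mask], y_obs[obs_mask], reduction="sum")
        opt_q.zero_grad(); loss_q.backward(); opt_q.step()
        q_prev = F.softmax(A @ F_Q @ W_Q, dim=-1).detach()
        # M-step: fit P (Equations (11) and (13)), using argmax(Q) as pseudo-labels.
        pseudo = q_prev.argmax(dim=-1, keepdim=True)
        p = F.softmax(A @ F_P @ W_P, dim=-1)
        loss_p = -p.gather(1, pseudo).log().sum()
        opt_p.zero_grad(); loss_p.backward(); opt_p.step()
        # Stop once P and Q agree closely (step 7 of Algorithm 2).
        with torch.no_grad():
            p = F.softmax(A @ F_P @ W_P, dim=-1)
            if F.kl_div(p.log(), q_prev, reduction="sum") < rho:
                break
    return F.softmax(A @ F_P @ W_P, dim=-1).detach()   # approximation of P(V_u | V_o)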

4.3. MLN-Based LP

The labels of fact nodes in g_i could also be determined through logical inference based on the formulas rather than the VEM inference method. For instance, a grounding formula with a non-zero weight, such as citedby(A, B) ∧ citedby(A, C) → citedby(B, C), can create a true relation node citedby(B, C) if both citedby(A, B) and citedby(A, C) hold. By iteratively obtaining the joint distribution of unobserved node labels using Algorithm 2 based on the observed fact nodes in the substructures, more observed nodes can be acquired. A comparison with the joint distribution derived from the logical formulas reveals an increasing discrepancy in the VEM-based inferences as more unobserved fact nodes are updated to observed. Consequently, we opt to halt the iterative inferences when the difference surpasses a specified threshold, enabling the calculation of the joint distribution of V_u for LP tasks.
Without loss of generality, we establish a KL divergence-based termination condition to determine whether further calculation should proceed by measuring the difference between the distributions of fact node labels obtained through logical evaluation and through VEM-based inference:
KL(P_f(U \mid V_o^i) \,\|\, P_e(U \mid V_o^i)).    (14)
Here, V_o^i denotes the observed fact nodes in g_i, and U represents the sole unobserved node in each formula, which can be evaluated logically as 1 or 0, corresponding to true or false, respectively. P_f signifies the distribution of U given V_o^i through logical evaluation, while P_e denotes the joint distribution from the VEM-based inferences. A smaller KL divergence indicates that the two joint distributions calculated by VEM-based inference and logical evaluation are more consistent.
To calculate P_f(U | V_o^i), we utilize Equation (12) to compute w_i, deriving the nonzero-weight formulas F_i (F_i ⊆ F). Subsequently, we construct U using the fact nodes whose labels can be logically inferred as true by F_i. Therefore, P_f(U | V_o^i) can be redefined as P_f(U | V_o^i, F_i). As P_e(U | V_o^i) can be calculated using Algorithm 2 as \prod_{n \in U} p_e(n), we transform Equation (14) into
KL(P_f(U \mid V_o^i) \,\|\, P_e(U \mid V_o^i)) = KL\left(P_f(U \mid V_o^i, F_i) \,\Big\|\, \prod_{n \in U} p_e(n)\right) = \sum_{n \in U} p_f(n) \log \frac{p_f(n)}{p_e(n)}.    (15)
Equation (15) serves as the termination criterion: if it falls below a specified threshold, the computation in g_{i+1} continues. Through this pause mechanism, we can prevent noisy data from causing significant error impacts. These concepts are consolidated in Algorithm 3. Within each substructure g_i (1 ≤ i ≤ k), the time complexity of step 4 is O(|V_{g_i}|), where |V_{g_i}| denotes the average number of nodes in g_i. Step 5's time complexity is O(T_1 · L · |E_{g_i}|) as per Algorithm 2, with |E_{g_i}| representing the average edge count in g_i. The time complexity of steps 6 and 7 is O(|V_{g_i}|). Hence, in a scenario where T_2 substructures are constructed, the time complexity of Algorithm 3 amounts to O(T_2 · (|V_{g_i}| + T_1 · L · |E_{g_i}|)), where |E_{g_i}| and |V_{g_i}| are considerably smaller than |E| and |V|, respectively. Specifically, the time complexity of substructure construction is O(T_2 · |V_{g_i}|), while the VEM-GCN integration exhibits a time complexity of O(T_2 · T_1 · L · |E_{g_i}|).
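A small sketch of the termination check of Equation (15) is given below, treating P_f and P_e as per-node probabilities of being true; the clipping constant eps is an assumption added for numerical safety.

import math

def termination_kl(p_f, p_e, eps=1e-12):
    """KL divergence of Equation (15) between the logically evaluated distribution
    P_f and the VEM-inferred distribution P_e over the nodes in U.

    p_f, p_e: dicts mapping each node n in U to its probability of being true;
    logical evaluation yields p_f(n) in {0, 1}, clipped by eps before taking logs.
    """
    kl = 0.0
    for n in p_f:
        pf = min(max(p_f[n], eps), 1.0 - eps)
        pe = min(max(p_e[n], eps), 1.0 - eps)
        kl += pf * math.log(pf / pe)
    return kl

# Usage in the loop of Algorithm 3: keep constructing substructures while
# termination_kl(p_f, p_e) <= tau, and stop once the discrepancy exceeds tau.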
Algorithm 3 MLN-based LP
Input: the set of initial observed fact nodes V_o, the set of formulas F, the threshold τ of KL divergence between P_f and P_e
Output: joint distribution P(V_u)
1: V_u ← ∅
2: i ← 1
3: while KL(P_f(U | V_o^i, F_i) || P_e(U | V_o^i)) ≤ τ do
4:    Obtain g_i, V_u^i, V_o^i with V_o by Algorithm 1
5:    Obtain P(V_u^i | V_o^i) by Algorithm 2
6:    Obtain F_i with w_i by Equation (12)
7:    Obtain U via F_i in a logical way
8:    V_u ← V_u ∪ V_u^i
9:    i ← i + 1
10: end while
11: T_2 ← i − 1
12: return P(V_u)

5. Experiments

Our proposed method is evaluated based on the following inquiries:
  • Comparing the accuracy of our method for multi-relational LP tasks with that of competitors.
  • Assessing how our VEM-based inference method enhances the efficiency of MLN-based LP.
  • Investigating the impact of parameters on the accuracy of the LP.

5.1. Experiment Settings

Datasets. Our experiments were carried out on five heterogeneous networks: the academic network, Aminer; the network of relatives, Kinship [30]; the medical relation network, UMLS [11]; the WordNet-derived WN18RR [29]; and the social knowledge base Freebase15k-237 [33]. The dataset statistics are presented in Table 3.
Formulas. Logical formulas for multi-relational LP tasks were devised for each dataset and translated into inferences within MLN. Specific formulas for each dataset are delineated in Table 4.
Comparison methods. We compared two types of methods: deep learning-based methods including SEAL [12] and Matrix Factorization (MF) [16], and MLN-based methods including ExpressGNN [30], PlogicNet [11], Pgat [26], RNNlogic [29], and MLN [10]. The latter category employs MLN to model heterogeneous networks, facilitating learning and inferences for diverse downstream tasks. They are summarized as follows:
  • SEAL adopts GNN for node representation by learning the structural information.
  • MF factorizes the representation matrix into a product of low-rank matrices to enhance node representation.
  • ExpressGNN uses a GNN variant to conduct the variational inference in MLN structures.
  • PlogicNet uses GNN to embed the graphical structures and incorporates the weights of formulas by VEM.
  • Pgat incorporates graph attention network embeddings to infer the relations.
  • RNNlogic leverages recurrent neural networks to generate high-quality formulas for inferences in MLN.
  • MLN uses SRL-based methods to generate posterior distributions of nodes in MLN.
Implementation. We utilized accuracy to evaluate the efficacy of our proposed method, defined as the ratio of correct predictions generated by the LP methods. Efficiency was assessed through execution time, encompassing the overall time for parameter updates and link predictions.
For the learning-based methods, we treated the evidence nodes in the MLN and their labels as the training and test sets, respectively. To execute multi-relational LP using MLN, we formulated the MLN structure for each dataset. In the case of the Aminer dataset, we randomly selected subsets of 10,000, 5000, 2000, and 1000 individuals. SEAL and MF were applied to the datasets for each link type, with the average accuracy serving as the LP outcome. For robust evaluation, we repeated each experiment 5 times with different random seeds and reported the mean performance along with the standard deviation. For each experimental configuration, the training and testing processes were executed independently, maintaining consistent hyperparameter settings across all runs to ensure fairness.
Our experiments were conducted on a machine equipped with a 6000Ada GPU, 128 GB of memory, and an i9-9900K CPU. All implementations were carried out using Python 3.7. We primarily used PyTorch v1.12.1 and ProbCoG libraries.

5.2. Experimental Results

Accuracy. We assessed the efficacy of our method by comparing its accuracy with various other methods. To ensure fairness in the experiments, we maintained consistent links in the test set. For discerning accuracy across different scales, we segmented the Aminer dataset into subsets: Aminer-10000, Aminer-5000, Aminer-2000, and Aminer-1000, containing 12,601, 3591, 2384, and 1189 relations, respectively. Two types of links were designated for prediction, with an 8/2 ratio between training and testing data. Furthermore, we examined the outcomes of combinations with varying numbers of link types and ratios, specifically (3 types, 8/2) and (2 types, 7/3), to analyze the impact of these factors on results. The accuracy comparisons are detailed and summarized in Table 5, Table 6 and Table 7. In cases where a model failed to operate due to memory constraints, this is denoted by '-'. Our findings suggest the following:
  • Across 2-type and 3-type links at different train–test ratios, our method attains an average accuracy of 94.87%, which stands as the highest accuracy across all dataset sizes. Notably, our method enhances accuracy by approximately 3.7%, 2.9%, 0.5%, 6.0%, and 11.5% compared to the highest accuracy achieved by the comparison methods, respectively.
  • Across varying sizes of the Aminer datasets, our method consistently surpasses all competitors, showcasing enhanced robustness. Notably, our method boosts accuracy by 3.7%, 3.1%, 5.3%, and 1.8% compared to the second-highest performing model on Aminer with 10,000, 5000, 2000, and 1000 individuals, respectively.
  • The MLN-based methods (ExpressGNN, MLN, PlogicNet, Pgat, RNNlogic, and our method) attain average accuracies of 85.71%, 82.81%, 41.52%, 34.4%, and 72.66% on the Aminer, Kinship, UMLS, WN18RR, and FB15k-237 datasets, respectively, surpassing the learning-based methods (SEAL and MF) by 8.3%, 0.1%, 171.55%, 140.1%, and 6.2%, respectively. Our method achieves accuracies of 72.66%, 76.7%, 79.48%, and 81.75%, outperforming the second-highest model by an average of 3.65% on the Aminer datasets with 10,000, 5000, 2000, and 1000 individuals, respectively.
Also, we conducted experiments on different missing link ratios to evaluate the robustness of our method. We randomly removed 10%, 30%, and 50% of the edges from the Kinship, UMLS, WN18RR, and FB15k-237 datasets, as well as from Aminer subsets of various sizes. The accuracy comparisons are detailed in Table 8. Our findings suggest the following:
  • When 10% of the edges are removed, the performance of the model shows a slight overall decline, with more pronounced decreases in the sparse datasets such as WN18RR and FB15k-237, reaching 5% and 6%, respectively. In other datasets, the performance degradation is relatively smaller, approximately 3.5–4%.
  • As the edge removal increases to 30%, performance degradation becomes more significant. The FB15k-237 dataset is most affected, with a 20% reduction, while WN18RR shows a 15% decrease. Kinship, UMLS, and the Aminer series datasets experience reductions between 11 and 13%.
  • When 50% of edges are removed, all datasets undergo substantial performance deterioration. The FB15k-237 dataset remains the most severely impacted, with a 35% decrease; WN18RR declines by 30%; Kinship decreases by 25%; UMLS exhibits the smallest relative reduction at 23%; and all Aminer series datasets uniformly experience a 26% decrease.
  • Overall, sparser graph structures (e.g., WN18RR and FB15k-237) demonstrate higher sensitivity to edge removal, whereas denser relationship graphs (e.g., UMLS and Kinship) display better robustness. The Aminer series datasets, regardless of size, show similar patterns of performance degradation.
The accuracy enhancements observed in multi-relational LP validate the efficacy and robustness of our method through the integration of the VEM-based inference framework.
Efficiency. Efficiency evaluation of our method involved comparing its execution time with various MLN-based methods in multi-relational LP on the benchmarks WN18RR, FB15k-237, and Aminer datasets. The total execution time (TT), time for the E-step (ToE), and time for the M-step (ToM) are illustrated in Figure 4. It is noteworthy that the execution times of MLN and NMLN are not included due to their significantly longer durations compared to the listed methods. Our observations reveal the following:
  • On WN18RR and FB15k-237 datasets, each containing over 10,000 entities, our method demonstrates the shortest processing time and surpasses the second-fastest model by an average of 42.37%.
  • When compared to Pgat, RNNlogic, ExpressGNN, and PlogicNet on FB15k-237, our method reduces the total execution time (TT) by 40%, 56.82%, 63.3%, and 69.7%, respectively. On WN18RR, our method outperforms Pgat, RNNlogic, ExpressGNN, and PlogicNet by 19.09%, 42.86%, 49.14%, and 55.5%, respectively.
  • In terms of the E-step (ToE) and M-step (ToM), our method achieves average improvements of 57.85% and 21.15% on WN18RR, and 71.96% and 37.41% on FB15k-237, respectively.
These results, illustrated in Figure 4, indicate that our method achieves efficiency gains in TT, ToE, and ToM on the datasets with more than 10,000 entities.
Moreover, to assess efficiency across datasets of different sizes, we compared the total execution time (TT) on UMLS, Kinship, Aminer-10000, Aminer-5000, Aminer-2000, and Aminer-1000, each comprising fewer than 10,000 entities, as depicted in Figure 5a. Furthermore, Figure 5b presents the TT, time for the E-step (ToE), and time for the M-step (ToM) of our method, showcasing the execution durations of these steps across various dataset sizes. Our analysis indicates the following:
  • As depicted in Figure 5a, our method exhibits the shortest total execution time among the datasets containing fewer than 10,000 entities, surpassing RNNlogic, Pgat, and PlogicNet. On average, our model consumes 33.2% less time than the second-fastest model in each dataset.
  • As illustrated in Figure 5b, the execution time for the M-step (ToM) in our method averages 26% more than that for the E-step (ToE) across different datasets. Furthermore, the total time (TT) of our model demonstrates exponential growth with increasing data size, whereas both ToE and ToM exhibit nearly linear growth.
Impacts of parameters. We assessed the influence of experimental variables on the efficacy of LP by examining the accuracy of different methods while varying the number of logical formulas (3, 4, and 5) on the Aminer-10000, FB15k-237, and Kinship datasets, as detailed in Table 9. Our analysis indicates the following:
  • Our method attains average accuracies of 81.03%, 91.54%, and 44.39% on Aminer-10000, FB15k-237, and Kinship, respectively, surpassing the second-best competitors by 3.75%, 2.87%, and 11.55% on average. This underscores the robustness of our approach.
  • Accuracy shows a slight increment with an increase in logical formulas, suggesting that more formulas lead to improved accuracy. Conversely, a higher number of formulas results in a substantially larger grounded MLN structure, introducing more unlabeled nodes in MLN that could potentially impact accuracy negatively. Notably, as depicted in Table 9, the accuracy on PlogicNet decreases by an average of 1.34% with the increase in formulas.
Ablation study. We conducted ablation experiments on the distinct contributions of each component in our proposed framework, the results of which are shown in Table 10. The base MLN with SRL establishes a foundation but shows limitations on larger, complex datasets like FB15k and WN18RR. The VEM-GCN integration substantially enhances performance, with a 7–12% gain across all datasets. Substructure construction (SC) presents an intriguing efficiency–accuracy trade-off. While moderately impacting accuracy with a reduction of only 1–2%, SC substantially enhances computational efficiency, delivering processing speeds up to 3.5x faster on larger datasets. Most significantly, the termination condition (TC) mechanism provides the most substantial accuracy gains, boosting performance by 4–6% when combined with SC across all datasets. The optimal configuration with all components except SRL achieves the highest accuracy.
In conclusion, based on the diverse experimental conditions and datasets examined, our method demonstrates the highest average accuracy in multi-relational LP across various datasets and link types, surpassing competitors by over 30% in efficiency.

6. Conclusions

First, we establish a new perspective by transforming multi-relational LP into a node label estimation problem within MLN, treating known links as observed nodes. This transformation provides a principled probabilistic foundation for handling the inherent complexity of heterogeneous networks. Second, our substructure construction method significantly enhances computational efficiency while preserving semantic relationships. Complementing this, our VEM-based framework strikes a balance between precision and computational feasibility. Third, the proposed termination condition ensures both accuracy and efficiency in practical applications, making our approach viable for real-world heterogeneous networks. Finally, the experimental results demonstrate our method's advantages in terms of efficiency, robustness, and accuracy. In all, our approach bridges the critical gap between unknown links in heterogeneous networks and unobserved nodes in MLN, establishing a hybrid approach to link prediction that combines the strengths of both machine learning techniques and logical deduction.
Efficient inferences heavily depend on the MLN structure size, shaped by the formulated formulas for diverse multi-relational LP tasks. In future research, we aim to explore automatic formula selection to strike a balance between efficiency and efficacy. Furthermore, the combined VEM-GCN architecture possesses robust capacity for capturing temporal dynamics, strengthening MLN-based LP performance. Thus, we aim to extend our approach to dynamic graph-based applications such as sequential node classification using MLN-based inferences.

Author Contributions

Conceptualization, Z.L. and K.Y.; data curation, Z.L.; formal analysis, Z.L., K.Y. and L.Y.; funding acquisition, K.Y.; investigation, Z.L.; methodology, Z.L.; project administration, K.Y.; resources, Z.L.; software, Z.L.; supervision, L.Y.; validation, Z.L., K.Y. and L.Y.; visualization, Z.L.; writing—original draft, Z.L.; writing—review and editing, Z.L. and J.W. All authors have read and agreed to the published version of this manuscript.

Funding

This paper was supported by the Key Program of Basic Research of Yunnan Province (202401AS070138), the Program of Yunnan Key Laboratory of Intelligent Systems and Computing (202405AV340009) and the Open Foundation of Yunnan Key Laboratory of Intelligent Control and Application (2025ICA01).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Huang, L.; Pang, B.; Yang, Q.; Feng, X.; Wei, W. Link prediction by continuous spatiotemporal representation via neural differential equations. Knowl.-Based Syst. 2024, 292, 111619. [Google Scholar] [CrossRef]
  2. Wang, H.; Cui, Z.; Liu, R.; Fang, L.; Sha, Y. A multi-type transferable method for missing link prediction in heterogeneous social networks. IEEE Trans. Knowl. Data Eng. 2023, 35, 10981–10991. [Google Scholar] [CrossRef]
  3. Tong, D.; Chen, S.; Ma, R.; Qi, D.; Yu, Y. Knowledge graph embedding in a uniform space. Intell. Data Anal. 2024, 28, 1–23. [Google Scholar] [CrossRef]
  4. He, C.; Cheng, J.; Fei, X.; Weng, Y.; Zheng, Y.; Tang, Y. Community preserving adaptive graph convolutional networks for link prediction in attributed networks. Knowl.-Based Syst. 2023, 272, 110589. [Google Scholar] [CrossRef]
  5. Yang, P.; Zheng, W.; Xiao, Y.; Jiao, X. Asymmetric multilevel interactive attention network integrating reviews for item recommendation. Intell. Data Anal. 2024, 28, 1–18. [Google Scholar] [CrossRef]
  6. Wang, H.; Liu, G.; Hu, P. TDAN: Transferable domain adversarial network for link prediction in heterogeneous social networks. ACM Trans. Knowl. Discov. Data 2023, 18, 1–22. [Google Scholar] [CrossRef]
  7. Nguyen, T.; Liu, Z.; Fang, Y. Link prediction on latent heterogeneous graphs. In Proceedings of the ACM Web Conference 2023, Austin, TX, USA, 30 April–4 May 2023; pp. 263–273. [Google Scholar]
  8. Zhang, S.; Zhang, J.; Song, X.; Adeshina, S. PaGE-Link: Path-based graph neural network explanation for heterogeneous link prediction. In Proceedings of the ACM Web Conference 2023, Austin, TX, USA, 30 April–4 May 2023; pp. 3784–3793. [Google Scholar]
  9. Wu, M.; Yu, F.R.; Liu, P.; He, Y. A hybrid driving decision-making system integrating markov logic networks and connectionist AI. IEEE Trans. Intell. Transp. Syst. 2022, 24, 3514–3527. [Google Scholar] [CrossRef]
  10. Domingos, P.; Lowd, D. Unifying logical and statistical AI with Markov logic. Commun. ACM 2019, 62, 74–83. [Google Scholar] [CrossRef]
  11. Qu, M.; Tang, J. Probabilistic logic neural networks for reasoning. In Proceedings of the 33rd International Conference on Neural Information Processing Systems, Vancouver, BC, Canada, 8–14 December 2019; pp. 7712–7722. [Google Scholar]
  12. Zhang, M.; Chen, Y. Link prediction based on graph neural networks. In Proceedings of the 32nd International Conference on Neural Information Processing Systems, Montreal, QC, Canada, 3–8 December 2018; Springer: Berlin/Heidelberg, Germany, 2018; pp. 5171–5181. [Google Scholar]
  13. Song, K.; Yue, K.; Duan, L.; Yang, M.; Li, A. Mutual information based Bayesian graph neural network for few-shot learning. In Proceedings of the 38th Conference on Uncertainty in Artificial Intelligence, PMLR, Eindhoven, The Netherlands, 1–5 August 2022; pp. 1866–1875. [Google Scholar]
  14. Li, X.; Liu, Y.; Zhang, Z. Link Prediction Method Combining Node Labels with Common Neighbors. Front. Artif. Intell. Appl. 2023, 378, 296–302. [Google Scholar]
  15. Wang, X.; Yang, H.; Zhang, M. Neural common neighbor with completion for link prediction. arXiv 2023, arXiv:2302.00890. [Google Scholar]
  16. Agibetov, A. Neural graph embeddings as explicit low-rank matrix factorization for link prediction. Pattern Recognit. 2023, 133, 108977. [Google Scholar] [CrossRef]
  17. Fu, C.; Yu, P.; Yu, Y.; Huang, C. MHGCN+: Multiplex Heterogeneous Graph Convolutional Network. ACM Trans. Intell. Syst. Technol. 2024, 15, 1–25. [Google Scholar] [CrossRef]
  18. Zhang, J.; Wei, L.; Xu, Z.; Yao, Q. Heuristic Learning with Graph Neural Networks: A Unified Framework for Link Prediction. In Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, Barcelona, Spain, 25–29 August 2024; pp. 4223–4231. [Google Scholar]
  19. Dimitriou, P.; Karyotis, V. A combinatory framework for link prediction in complex networks. Appl. Sci. 2023, 13, 9685. [Google Scholar] [CrossRef]
  20. Feng, P.; Zhang, X.; Wu, H.; Wang, Y.; Yang, Z.; Ouyang, D. Link Prediction Based on Feature Mapping and Bi-Directional Convolution. Appl. Sci. 2024, 14, 208. [Google Scholar] [CrossRef]
  21. Jin, B.; Zhang, Y.; Zhu, Q.; Han, J. Heterformer: Transformer-based deep node representation learning on heterogeneous text-rich networks. In Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, Long Beach, CA, USA, 6–10 August 2023; pp. 1020–1031. [Google Scholar]
  22. Zhan, Q.; Wang, J.; Li, Y.; Xie, Z.; Liu, Y. HRMNN: Heterogeneous Relationship Mined Graph Neural Network. In Proceedings of the 12th International Conference on Intelligent Computing, Tianjin, China, 5–8 August 2024; Springer: Singapore, 2024; pp. 38–48. [Google Scholar]
  23. Chen, F.; Weitkämper, F.; Malhotra, S. Understanding Domain-Size Generalization in Markov Logic Networks. In Proceedings of the 34th Joint European Conference on Machine Learning and Knowledge Discovery in Databases, Vilnius, Lithuania, 8–12 September 2024; Springer: Cham, Switzerland, 2024; pp. 297–314. [Google Scholar]
  24. Wasserman, M.; Mateos, G. Graph structure learning with interpretable Bayesian neural networks. arXiv 2024, arXiv:2406.14786. [Google Scholar]
  25. Jung, P.; Marra, G.; Kuželka, O. Quantified neural Markov logic networks. Int. J. Approx. Reason. 2024, 171, 109172. [Google Scholar] [CrossRef]
  26. Harsha, V.L.; Jia, G.; Kok, S. Probabilistic logic graph attention networks for reasoning. In Proceedings of the 29th ACM Web Conference, ACM, Taiwan, China, 20–24 April 2023; pp. 669–673. [Google Scholar]
  27. Fang, H.; Liu, Y.; Cai, Y.; Sun, M. MLN4KB: An efficient markov logic network engine for large-scale knowledge bases and structured logic rules. In Proceedings of the ACM Web Conference 2023, Austin, TX, USA, 30 April–4 May 2023; pp. 2423–2432. [Google Scholar]
  28. Cui, S.; Zhu, T.; Zhang, X.; Ning, H. MCLA: Research on cumulative learning of Markov Logic Network. Knowl.-Based Syst. 2022, 242, 108352. [Google Scholar] [CrossRef]
  29. Qu, M.; Chen, J.; Xhonneux, L.; Bengio, Y.; Tang, J. RNNLogic: Learning Logic Rules for Reasoning on Knowledge Graphs. In Proceedings of the 9th International Conference on Learning Representations, Virtual, 3–7 May 2021. [Google Scholar]
  30. Zhang, Y.; Chen, X.; Yang, Y.; Ramamurthy, A.; Li, B.; Qi, Y.; Song, L. Efficient Probabilistic Logic Reasoning with Graph Neural Networks. In Proceedings of the 8th International Conference on Learning Representations(ICLR), Addis Ababa, Ethiopia, 26–30 April 2020. [Google Scholar]
  31. Richardson, M.; Domingos, P. Markov logic networks. Mach. Learn. 2006, 62, 107–136. [Google Scholar] [CrossRef]
  32. Qu, M.; Bengio, Y.; Tang, J. Gmnn: Graph markov neural networks. In Proceedings of the 15th International Conference on Machine Learning, PMLR, Long Beach, CA, USA, 9–15 June 2019; pp. 5241–5250. [Google Scholar]
  33. Marra, G.; Kuželka, O. Neural markov logic networks. In Proceedings of the 37th Uncertainty in Artificial Intelligence, PMLR, Online, 27–30 July 2021; pp. 908–917. [Google Scholar]
Figure 1. Illustration of exclusive MLN structure for link prediction. (a) Link prediction in heterogeneous network; (b) MLN structure.
Figure 3. Illustration of feature construction for fact nodes, where white circles and green circles represent unobserved and observed nodes, respectively.
Figure 4. TT, ToE, and ToM on datasets with more than 10,000 entities. (a) FB15k-237; (b) WN18RR.
Figure 5. Execution time on datasets with fewer than 10,000 entities. (a) TT; (b) TT, ToE, and ToM.
Table 1. Notations and descriptions.
Notation | Description
H = (R, V, A) | Heterogeneous network with entities V, relations R, and attributes A
G = ⟨V, E⟩ | Graphical structure G with nodes V and links E
w | Weight parameters of formulas
M = ⟨G, w⟩ | MLN with distribution P_w with respect to parameters w over G
Z(w) | Partition function with w
F = {f_i | 1 ≤ i ≤ m} | Set of m formulas
p(n) | Label distribution of node n
P(V) | Joint distribution of node labels in V
𝒢 = {g_i | 1 ≤ i ≤ k} | Set of k MLN substructures
N_2 | 2-hop enclosing subgraph in H
Table 2. List of abbreviations.
Abbreviation | Full Term
LP | Link Prediction
MLN | Markov Logic Network
SRL | Statistical Relational Learning
VEM | Variational Expectation Maximization
ELBO | Evidence Lower Bound
GCN | Graph Convolutional Network
Table 3. Statistics of datasets.
Dataset | #Entity | #Relation Type | #Relation
FB15k-237 | 14,541 | 237 | 592,213
WN18RR | 40,943 | 11 | 93,003
Aminer | 3,804,789 | 2 | 8,024,869
Kinship | 104 | 25 | 10,686
UMLS | 135 | 46 | 6,529
Table 4. Formulas.
Dataset | Formulas
Aminer | FieldinA (x, y) ⟶ Citedby (x, y)
 | AffiliationinA (x, y) ⟶ Coauthor (x, y)
 | FieldinA (x, y) ⟶ Citedby (x, y)
 | (Coauthor (x, y) ∧ Coauthor (y, z)) ⟶ Coauthor (x, z)
 | (Citedby (x, y) ∧ Citedby (y, z)) ⟶ Citedby (x, z)
UMLS | Affects (x, y) ∧ Affects (y, z) ∧ Affects (x, z)
 | Affects (x, y) ∧ Causes (x, y)
Kinship | Father (x, y) ∧ Male (x) ∧ Son (x, y)
 | Mother (x, y) ∧ Male (x) ∧ Son (x, y)
 | Father (x, y) ∧ Father (x, z) ∧ Male (y) ∧ Male (z) ∧ Brother (y, z)
FB15k-237 | Position (x, y) ∧ Position (y, z) ⟶ Position (x, z)
 | Ceremony (x, y) ∧ Ceremony (y, z) ⟶ CategoryOf (x, z)
 | Film (x, y) ∧ Film (y, z) ⟶ Participant (x, z)
 | StoryBy (x, y) ⟶ Participant (x, y)
 | Adjoins (x, y) ∧ Country (y, z) ⟶ ServiceLocation (x, z)
WN18RR | Member_meronym (x, y) ∧ Hypernym (x, y)
 | Similar_to (x, y) ∧ Derivationally_related_form (x, y)
Table 5. Accuracy with 2-type links and ratio 8/2 with statistical analysis.
Method | Kinship | UMLS | WN18RR | FB15k-237 | Aminer-10,000 | Aminer-5000 | Aminer-2000 | Aminer-1000
SEAL | 85.86 | 83.22 | 15.29 | 14.33 | 70.44 | 86.34 | 82.88 | 77.43
MF | 72.41 | 82.37 | - | - | 66.42 | 64.77 | 79.88 | 70.55
ExpressGNN | 91.88 | 86.91 | - | 40.7 | 80.31 | 89.46 | 78 | 87.88
MLN | 92.22 | 85.32 | - | - | - | - | 82 | 81.56
PlogicNet | 85.33 | 84.9 | 39.8 | 23.7 | 68.3 | 72.21 | 76.91 | 84.32
Pgat | 88.31 | 80.2 | 39.5 | 37.7 | 72.45 | 62.83 | 80.5 | 82.21
RNNlogic | 61.67 | 72.21 | 44.6 | 24.5 | 58.93 | 66.67 | 72.12 | 65.01
Ours | 94.87 | 87.33 | 42.2 | 45.4 | 83.32 | 92.31 | 87.32 | 89.5
p-value | 0.027 * | 0.780 | 0.255 | 0.042 * | 0.060 | 0.029 * | 0.009 ** | 0.248
95% CI | [92.52, 97.22] | [84.39, 90.27] | [38.08, 46.32] | [40.89, 49.91] | [80.18, 86.46] | [89.76, 94.86] | [83.99, 90.65] | [86.76, 92.24]
Note: The highest accuracy is indicated in bold. * p < 0.05, ** p < 0.01. CI = Confidence Interval. p-values show a comparison of our method against the best competitor.
Table 6. Accuracy with 3-type links and ratio 8/2 with statistical analysis.
Method | Kinship | UMLS | WN18RR | FB15k-237 | Aminer-10,000 | Aminer-5000 | Aminer-2000 | Aminer-1000
SEAL | 80.21 | 74.12 | 13.29 | 11.85 | 68.13 | 81.66 | 79.15 | 75.22
MF | 61.33 | 76.28 | - | - | 62.77 | 61.39 | 75.49 | 68.17
ExpressGNN | 90.42 | 84.18 | - | 38.27 | 73.11 | 85.33 | 73.91 | 83.39
MLN | 80.19 | 79.41 | - | - | - | - | 75.11 | 76.48
PlogicNet | 79.44 | 79.27 | 33.6 | 20.36 | 65.29 | 65.49 | 73.81 | 80.09
Pgat | 86.22 | 76.12 | 34.5 | 31.11 | 69.35 | 55.19 | 75.05 | 73.81
RNNlogic | 55.71 | 69.18 | 39.6 | 20.75 | 53.16 | 63.29 | 66.74 | 63.55
Ours | 92.92 | 82.33 | 39.7 | 42.4 | 78.15 | 87.47 | 83.22 | 85.65
p-value | 0.033 * | 0.041 * | 0.892 | 0.019 * | 0.048 * | 0.037 * | 0.022 * | 0.175
95% CI | [90.47, 95.37] | [79.20, 85.46] | [36.18, 43.22] | [38.63, 46.17] | [74.89, 81.41] | [84.74, 90.20] | [80.07, 86.37] | [82.51, 88.79]
Note: The highest accuracy is indicated in bold. * p < 0.05. CI = Confidence Interval. p-values show a comparison of our method against the best competitor.
Table 7. Accuracy with 2-type links and ratio 7/3 with statistical analysis.
Method | Kinship | UMLS | WN18RR | FB15k-237 | Aminer-10,000 | Aminer-5000 | Aminer-2000 | Aminer-1000
SEAL | 81.14 | 77.5 | 9.55 | 9.93 | 63.21 | 85.17 | 76.91 | 70.66
MF | 66.15 | 73.98 | - | - | 60.29 | 62.18 | 71.53 | 60.4
ExpressGNN | 82.19 | 82.17 | - | 34.44 | 70.81 | 85.12 | 74.21 | 82.02
MLN | 81.72 | 79.02 | - | - | - | - | 73.29 | 73.19
PlogicNet | 79.17 | 76.79 | 29.18 | 19.74 | 61.39 | 68.91 | 71.59 | 72.91
Pgat | 69.11 | 72.72 | 36.97 | 33.28 | 66.75 | 54.71 | 62.15 | 58.44
RNNlogic | 50.06 | 63.55 | 41.91 | 23.19 | 52.13 | 61.28 | 62.51 | 65.01
Ours | 89.46 | 82.11 | 39.4 | 41.41 | 74.91 | 85.19 | 82.04 | 81.66
p-value | 0.001 ** | 0.949 | 0.202 | 0.015 * | 0.044 * | 0.957 | 0.003 ** | 0.818
95% CI | [86.84, 92.08] | [78.93, 85.29] | [35.66, 43.14] | [37.52, 45.30] | [71.53, 78.29] | [82.46, 87.92] | [78.75, 85.33] | [78.34, 84.98]
Note: The highest accuracy is indicated in bold. * p < 0.05, ** p < 0.01. CI = Confidence Interval. p-values show a comparison of our method against the best competitor.
Table 8. Accuracy under various missing link ratios.
Dataset | 10% Edge Removal | 30% Edge Removal | 50% Edge Removal
Kinship | 91.12 | 83.48 | 71.15
UMLS | 84.28 | 77.72 | 67.24
WN18RR | 40.09 | 35.87 | 29.54
FB15k-237 | 42.68 | 36.32 | 29.51
Aminer-10000 | 79.99 | 72.49 | 61.66
Aminer-5000 | 88.62 | 80.31 | 68.31
Aminer-2000 | 83.82 | 75.97 | 64.62
Aminer-1000 | 85.92 | 77.87 | 66.33
Table 9. Accuracy of LP with various numbers (3, 4, 5) of logical formulas with statistical analysis.
Method | Aminer-10000 (3) | Aminer-10000 (4) | Aminer-10000 (5) | FB15k-237 (3) | FB15k-237 (4) | FB15k-237 (5) | Kinship (3) | Kinship (4) | Kinship (5)
ExpressGNN | 70.43 | 76.21 | 80.31 | 35.43 | 38.71 | 40.7 | 86.43 | 87.42 | 91.88
Pgat | 65.12 | 67.31 | 72.45 | 15.12 | 17.31 | 23.7 | 75.12 | 83.21 | 85.33
PlogicNet | 60.44 | 67.12 | 68.3 | 35.44 | 36.12 | 37.7 | 86.44 | 88.31 | 87.12
Ours | 78.32 | 81.44 | 83.32 | 42.32 | 45.44 | 45.4 | 88.32 | 91.44 | 94.87
p-value | 0.018 * | 0.039 * | 0.060 | 0.012 * | 0.008 ** | 0.042 * | 0.037 * | 0.019 * | 0.027 *
Improvement | 7.89% | 5.23% | 3.01% | 6.89% | 6.73% | 4.70% | 1.89% | 3.13% | 2.99%
95% CI | [75.6, 81.0] | [78.9, 84.0] | [80.2, 86.4] | [39.8, 44.9] | [42.7, 48.2] | [42.5, 48.3] | [85.7, 91.0] | [88.5, 94.4] | [92.5, 97.2]
Note: The highest accuracy is indicated in bold. * p < 0.05, ** p < 0.01. CI = Confidence Interval for our method.
Table 10. Accuracy of LP with different model configurations.
Configuration | FB15k | WN18RR | Aminer | Kinship | UMLS
MLN + SRL | - | - | 0.68 | 0.76 | 0.79
MLN + VEM + GCN | 0.74 | 0.85 | 0.81 | 0.87 | 0.91
MLN + VEM + GCN + SC | 0.72 | 0.83 | 0.79 | 0.85 | 0.89
MLN + VEM + GCN + SC + TC | 0.78 | 0.88 | 0.83 | 0.89 | 0.94
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

Li, Z.; Yue, K.; Yu, L.; Wang, J. An Inference Framework of Markov Logic Network for Link Prediction in Heterogeneous Networks. Appl. Sci. 2025, 15, 4424. https://doi.org/10.3390/app15084424
