Next Article in Journal
Improving Dynamic Visual SLAM in Robotic Environments via Angle-Based Optical Flow Analysis
Previous Article in Journal
Evaluating Middleware Performance in the Transition from Monolithic to Microservices Architecture for Banking Applications
Previous Article in Special Issue
Image Haze Removal Using Dual Dark Channels with the Whale Optimization Algorithm and an Image Regression Model
 
 
Font Type:
Arial Georgia Verdana
Font Size:
Aa Aa Aa
Line Spacing:
Column Width:
Background:
Article

HetRelMTL-Net: A Unified Framework for Knowledge Graph Completion via Graph–Text Fusion and Multi-Task Dynamic Optimization

1
School of Electronic and Information Engineering, Suzhou University of Science and Technology, Suzhou 215009, China
2
Virtual Reality Key Laboratory of Intelligent Interaction and Application Technology, Suzhou, 215009, China
3
Kunshan Data Bureau Technology, Suzhou 215300, China
*
Authors to whom correspondence should be addressed.
These authors contributed equally to this work.
Electronics 2026, 15(1), 222; https://doi.org/10.3390/electronics15010222 (registering DOI)
Submission received: 8 December 2025 / Revised: 28 December 2025 / Accepted: 29 December 2025 / Published: 3 January 2026
(This article belongs to the Special Issue Advanced Research in Technology and Information Systems, 2nd Edition)

Abstract

Knowledge graph completion (KGC) necessitates comprehensive modeling of heterogeneous relations by effectively integrating both graph structural information and textual semantics. Current approaches often exhibit fragmented feature utilization or suboptimal multi-tasking coordination, which limits their capability to handle complex relational patterns such as symmetry, hierarchy, and asymmetry. This paper proposes HetRelMTL-Net, a unified framework that introduces two key innovations: (1) GraphBert-KGC, a graph–text fusion module that dynamically aligns structural and semantic features through relation-aware attention and adaptive gating mechanisms, achieving a 97% reduction in parameter redundancy compared to fixed-projection baselines; and (2) a multi-task learning architecture that jointly optimizes link prediction, relation classification, and path reasoning via a KL-divergence-based dynamic weighting strategy, effectively mitigating task conflicts while enhancing semantic discriminability. Extensive experiments on two publicly available benchmarks, WN18RR and FB15k-237, demonstrate state-of-the-art performance: achieving 28.7% Hits@1 on asymmetric relations (a 19.1% improvement over RotatE), an 85.8% higher MRR than TransE, and 12% faster convergence. The robustness and efficacy of the framework are further validated through detailed ablation studies, attention visualizations, and real-world case analyses.

1. Introduction

Knowledge graphs (KGs) serve as structured repositories of factual knowledge, forming the backbone of numerous artificial intelligence applications such as question answering [1], recommendation systems [2], and drug discovery [3]. Despite their utility, real-world KGs like Freebase and WordNet are inherently incomplete [4], which has spurred significant research interest in knowledge graph completion (KGC) —the task of inferring missing factual triples, denoted as ( h , r , t ) . A fundamental challenge in KGC lies in the accurate modeling of heterogeneous relations, which exhibit diverse and often conflicting logical properties. These include symmetry (e.g., spouse), asymmetry (e.g., bornIn), hierarchical structure (e.g., subClassOf), and complex multi-hop dependencies (e.g., holdsSharescontrols).
Existing approaches for KGC often struggle to balance the exploitation of structural and textual information. Structure-oriented models, such as TransE [5] and RotatE [6], embed entities and relations into low-dimensional geometric spaces. While effective at capturing specific relational patterns through geometric constraints, these approaches typically overlook rich textual metadata (e.g., entity descriptions), limiting their performance in tasks requiring fine-grained semantic understanding. Conversely, text-enhanced models like KG-BERT [7] and ERNIE-KG [8] leverage the power of pre-trained language models (PLMs) to incorporate textual context. However, they frequently underutilize crucial structural dependencies, such as multi-hop relational paths, resulting in fragmented and suboptimal feature representations.
Furthermore, the majority of existing KGC frameworks are designed for single-task learning, which inherently struggles to simultaneously accommodate the diverse characteristics of heterogeneous relations. For instance, the translational assumption h + r t in TransE is suitable for asymmetric relations but is fundamentally incompatible with symmetric relations, where h + r = t and t + r = h must hold concurrently. Multi-task learning (MTL) serves as a promising solution for KGC by enabling the joint optimization of interrelated tasks, such as link prediction and relation classification, thereby encouraging the learning of more generalized representations. Nevertheless, existing multi-task KGC models [9] often suffer from inter-task gradient conflicts and rely on static, heuristic weight allocation strategies, which can stifle potential synergistic effects across tasks.
The practical significance of robust KGC becomes particularly salient in high-stakes domains like finance and banking, where knowledge graphs model critical relationships such as corporate ownership, subsidiary control, investment flows, and compliance regulations. In these contexts, accurately inferring missing connections—for instance, identifying implicit control chains or undisclosed affiliations—is essential for risk assessment, fraud detection, and regulatory compliance. Financial knowledge graphs present unique challenges, including complex multi-hop dependencies (e.g., holdsShares controls ), diverse relation types with conflicting properties, and the need to integrate structured corporate data with unstructured textual descriptions. These challenges necessitate a KGC framework that can dynamically fuse structural and semantic information while adaptively coordinating multiple reasoning tasks—precisely the capabilities our framework aims to provide.
To address these limitations, we propose HetRelMTL-Net, a unified and holistic framework that integrates multi-modal signals and dynamically coordinates multiple learning objectives through a two-stage architecture. The framework consists of the following parts:
  • GraphBert-KGC: A novel graph-text fusion module that dynamically aligns structural and semantic features through relation-aware dynamic projection, multi-hop graph attention, and an adaptive cross-modal gating mechanism. This module contextually balances the contributions from graph topology and textual descriptions—for example, it prioritizes structural signals for hierarchical relations while emphasizing textual semantics for descriptive relations.
  • Multi-Task Framework: Built upon the unified representations from GraphBert-KGC, HetRelMTL-Net jointly optimizes link prediction, relation classification, and path reasoning. This is achieved by introducing task-specific inductive biases (e.g., hyperbolic projections for hierarchical), a hierarchical attention mechanism for semantic discrimination, and a theoretically grounded KL-divergence-based dynamic weighting strategy that adaptively balances task-specific losses during training, mitigating gradient conflicts.
The remainder of this paper is organized as follows. Section 2 reviews related work on graph-text fusion, multi-task learning, and complex relation reasoning in KGC. Section 3 elaborates on the methodology of HetRelMTL-Net, detailing the GraphBert-KGC fusion module and the multi-task learning architecture with dynamic optimization. Section 4 presents extensive experiments, including results, ablation studies, and case analyses. Section 5 discusses the model’s limitations, theoretical insights, and promising future directions. Finally, Section 6 concludes the paper.

2. Related Work

Our work is situated at the intersection of graph-text fusion, multi-task learning, and complex relation reasoning for knowledge graph completion. This section reviews the relevant literature in these areas and clarifies the position of our proposed HetRelMTL-Net framework.

2.1. Graph–Text Fusion in KGC

Traditional approaches to KGC mainly focus on structure-based models, such as TransE [5] and ComplEx [10], which embed entities and relations into geometric spaces to model relational patterns. While effective for capturing simple structures like symmetry and inversion through their respective geometric constraints (e.g., translation in TransE, complex inner product in ComplEx), these methods inherently lack the capacity to leverage rich textual semantics, often crucial for fine-grained semantic discrimination.
To bridge this gap, text-enhanced models have been developed. For example, DKRL [11] learns embeddings directly from entity descriptions, while ERNIE-KG [8] employs knowledge-aware pre-training to infuse textual context into semantic representations. Subsequent studies, including CoKE [12] and KEPLER [13], further leverage PLMs to jointly optimize textual and relational representation, demonstrating the significant potential of graph–text fusion.
Despite these advancements, current fusion approaches often face a critical trade-off: they either achieve integration at the cost of high parameter complexity or exhibit limited adaptability in dynamically balancing structural and textual signals across diverse relation types. Our GraphBert-KGC module addresses this by introducing a parameter-efficient, relation-aware dynamic projection and a gating mechanism for adaptive feature fusion.

2.2. Multi-Task Learning for KGC

MTL has emerged as a powerful paradigm in KGC, aiming to improve model generalization by learning shared representations across interrelated tasks. A common strategy involves using shared embeddings with task-specific prediction heads, as seen in MultiKE [14], which jointly optimizes link prediction and entity classification. However, such methods often rely on fixed and heuristic weight assignments across tasks, limiting adaptability and leading to suboptimal performance under task imbalance.
Dynamic weighting methods offer a more sophisticated alternative. Techniques like GradNorm [15] adjust task weights based on the norms of their gradients, while Uncertainty Weighting [16] uses homoscedastic uncertainty to balance losses. Although these strategies provide a degree of adaptability, they are often computationally expensive and lack a principled connection to the semantic characteristics of KG relations. The KL-divergence-based weighting strategy introduced in our framework offers theoretically grounded and computationally efficient alternative. It quantifies task difficulty relative to a uniform prior, enabling interpretable and dynamic balancing that directly responds to the training dynamics without introducing significant overhead.

2.3. Complex Relation Reasoning

Effectively modeling complex relational patterns—such as symmetry, antisymmetry, inversion, and hierarchy—requires embedding spaces with suitable geometric inductive biases. RotatE [6] models relations as rotations in the complex space, proving highly effective for capturing symmetry and antisymmetry. For hierarchical relations, which exhibit latent tree-like structures, hyperbolic embedding methods [17] like Poincaré embeddings provide a more natural and efficient geometric foundation due to the hierarchical capacity of hyperbolic space. Extensions such as HypER [18] incorporate hypernetwork-based relation projections to enhance modeling capacity.
However, a common limitation among these specialized models is their focus on a single type of relational pattern within a single-task learning framework. Furthermore, they primarily operate on the graph structure alone, failing to integrate complementary textual semantics. HetRelMTL-Net bridges these gaps by unifying complex relation reasoning within a multi-modal, multi-task framework. It synergistically integrates hyperbolic projections for hierarchical, rotational transforms for symmetry/asymmetry, and cross-modal attention for semantic enhancement, all coordinated by a dynamic multi-task learning strategy that leverages both structural and textual information.

3. Methodology

In this section, we present the architecture of HetRelMTL-Net, a unified framework designed to jointly model heterogeneous relations through graph–text fusion and multi-task dynamic optimization. The framework consists of two core components: (1) GraphBert-KGC, which integrates structural and textual features via dynamic relation projection and cross-modal gating; and (2) a multi-task learning extension that performs joint optimization of link prediction, relation classification, and path reasoning with adaptive weight allocation.

3.1. GraphBert-KGC: Graph–Text Fusion Module

3.1.1. Dynamic Relation Projection

To effectively capture heterogeneous relational semantics without introducing excessive parameters, we replace conventional fixed relation projection matrices (e.g., as used in TransR) with a lightweight MLP that generates adaptive projection weights for each relation r. Specifically, the projection matrix W r k is derived as:
W r k = MLP ( e r ) = U 2 · ReLU ( U 1 e r + b 1 ) + b 2
where e r denotes the initial embedding of relation r (either randomly initialized or pre-trained), and U 1 , U 2 R d × d are shared weight matrices. The superscript k in W r k denotes the hop index within the multi-hop neighborhood aggregation framework, corresponding to the k-th layer in the subsequent graph attention network). For each relation r, the same lightweight MLP generates a distinct projection matrix W r k for each hop k { 1 , , K } , enabling the model to learn relation-specific transformations that are contextualized by the topological distance (hop count). This design reduces parameter complexity from O ( | R | × d 2 ) to O ( d 2 ) , achieving a 97% reduction in parameters for a typical KG with | R | = 100 relations, while still enabling expressive relation-specific transformations.

3.1.2. Multi-Hop Graph Attention

To encapsulate multi-hop structural dependencies, we propose a decay-weighted graph attention mechanism. For a given entity h, we extract all paths up to length K in its neighborhood. The attention score between h and a neighbor t i at hop k under relation r is computed as:
α h i ( r , k ) = Softmax ( W r k h ) · ( W r k t i ) d
Subsequently, the aggregated graph embedding z h is obtained via a weighted sum over all K-hop neighbors, with a decay factor γ = 0.8 to attenuate the influence of distant nodes:
z h = k = 1 K γ k i N h k α h i ( r , k ) W r k t i
This design prioritizes proximate connections while still incorporating broader contextual information.

3.1.3. Cross-Modal Fusion

We fuse structural embeddings z h with textual embeddings h text (derived from entity descriptions using BERT-base [19]) through an adaptive gating mechanism. The gate value g h is computed as:
g h = σ W g [ z h h text ]
where ⊕ denotes vector concatenation and W g R d is a learnable parameter. The final fused representation is given by:
h fused = g h · z h + ( 1 g h ) · h text
This gating mechanism enables the model to dynamically blend structural and textual information according to the relational context. Specifically, for hierarchical relations (e.g., subClassOf), values of g h > 0.7 prioritize the graph structure, whereas for descriptive relations (e.g., hasAbstract), g h < 0.3 emphasizes textual semantics.

3.2. HetRelMTL-Net: Multi-Task Architecture

Building upon the unified representations generated by GraphBert-KGC, we design a multi-task learning architecture to jointly solve link prediction, relation classification, and path reasoning. This design encourages the model to learn representations that are robust across different relational reasoning tasks, while the dynamic weighting strategy mitigates potential conflicts between them. The architecture, as illustrated in Figure 1, operates on a shared embedding foundation and branches into specialized task-specific heads. A dynamic controller adaptively balances the learning process across these tasks.

3.2.1. Shared Embedding Layer

The fused entity representations h fused and t fused generated by GraphBert-KGC serve as shared inputs to three task-specific prediction heads, facilitating cross-task knowledge transfer and representation learning.

3.2.2. Task-Specific Heads

To effectively address the diverse requirements of KGC, we design three specialized task heads that operate on the shared fused embeddings. Each head is tailored to capture distinct relational patterns: link prediction for triple classification, relation classification for semantic discrimination, and path reasoning for multi-hop inference. These components are jointly optimized within a unified multi-task framework, enabling complementary learning while preserving task-specific representational structures.
(1)
Link Prediction with Hyperbolic Projection
To model hierarchical relations, we project entity embeddings into hyperbolic space using the Poincaré ball model:
Proj H ( x ) = x x sinh ( x )
The scoring function for a triple ( h , r , t ) is defined as:
s link ( h , r , t ) = Proj H ( h ) R r Proj H ( t ) 2
where R r H d × d denotes a relation-specific rotation matrix. This formulation facilitates efficient modeling of tree-like hierarchical structures.
(2)
Relation Classification with Hierarchical Attention
We employ a two-tier attention mechanism to capture fine-grained semantic interactions. At the entity level, attention focuses on local head-tail interactions:
α i j = Softmax W c T tanh ( W h h i + W t t j )
At the relation level, these features are aggregated to form a global representation for relation classification:
P ( r h , t ) = Softmax W class [ Attn ent ; Attn rel ]
This hierarchical design enhances semantic discrimination by jointly modeling local and global relational cues.
(3)
Path Reasoning with LSTM Encoding
Multi-hop relational paths are encoded using an LSTM to capture sequential dependencies. This design is motivated by the proven efficacy of recurrent architectures like LSTM in modeling sequential patterns and dependencies in various domains, including security contexts where hybrid models (e.g., CNN-LSTM) are employed for tasks such as fraudulent transaction sequence analysis [20]. For a path P = { ( e 1 , r 1 ) , , ( e n , r n ) } , the hidden state is updated as:
h t = LSTM h t 1 , [ e t ; r t ]
The path score for relation r is computed as:
s path ( P , r ) = h n T W p r
This enables the model to perform transitive reasoning over multi-step relational chains.

3.2.3. Dynamic Weight Allocation via KL-Divergence

To mitigate task interference and balance the learning dynamics across different objectives, we introduce a Kullback-Leibler (KL) divergence-based strategy for dynamic task weighting. This approach quantifies the relative difficulty of each task and adaptively adjusts its contribution to the total loss throughout the training process.
Specifically, the loss values of all tasks are first transformed into a probability distribution using a softmax function with temperature τ = 0.1 :
P k = exp ( L k / τ ) j exp ( L j / τ )
This distribution reflects the relative training status of each task. Subsequently, We compute the KL-divergence between P k and a uniform prior 1 / K , where K is the number of tasks. The KL-divergence between this distribution and a uniform prior quantifies task difficulty:
D KL ( k ) = P k log P k 1 / K
Here, the uniform distribution 1 / K represents an unbiased baseline where all tasks are assumed equally important a priori. The KL-divergence D KL ( k ) quantifies how much the actual training distribution P k deviates from this balanced reference state. A larger divergence value indicates that task k is relatively under-optimized (its current loss is disproportionately high), signaling higher immediate difficulty that warrants increased attention.
The divergence D KL ( k ) serves as an indicator of task difficulty-higher values correspond to tasks that are relatively under-optimized. The final weight for each task is obtained by normalizing the exponentiated divergence values:
w k = exp ( η · D KL ( k ) ) j exp ( η · D KL ( j ) )
Here, η = 0.5 is a hyperparameter that controls the intensity of weight adjustment. This strategy offers a principled and efficient alternative to gradient-based methods like GradNorm (avoiding sensitivity to gradient scale) and uncertainty weighting (requiring no extra parameters). The key hyperparameters—the scaling factor η = 0.5 and the temperature τ = 0.1 in Equation (12)—were selected via validation, providing a stable balance between responsiveness to task difficulty and training stability; performance is robust within ranges η [ 0.3 , 0.7 ] and τ [ 0.05 , 0.2 ] .
The evolution of task weights throughout the training process is visualized in Figure 2, which illustrates how our method dynamically shifts focus between link prediction, relation classification, and path reasoning.

3.2.4. Joint Objective Function

The overall training objective of HetRelMTL-Net is formulated as a dynamically weighted sum of the task-specific losses, integrating the learning signals from link prediction, relation classification, and path reasoning. The joint loss function is defined as follows:
L = w link L link + w class L class + w path L path
where w link , w class , w path are the dynamically adjusted at each training step using the KL-divergence-based strategy described in Section 3.2.3, which ensures balanced contributions from each task and mitigates inter-task gradient conflicts.
Each task-specific loss is designed according to the nature of the corresponding learning objective:
  • L link is a margin-based ranking loss that encourages positive triples to score higher than negative samples by a predefined margin.
  • L class is the cross-entropy loss applied to the relation classification task, aiming to correctly identify the relation type given entity pairs.
  • L path is the mean squared error loss used in path reasoning, which measures the discrepancy between the predicted and ground-truth path scores.
By integrating these losses under the adaptive weighting scheme, the model effectively coordinates multi-task learning, mitigates gradient conflicts, and enhances overall representation learning. The unified optimization process ensures that all tasks contribute proportionally to the training dynamics, leading to more robust and generalizable knowledge graph representations.

4. Experiments

4.1. Experimental Setup

To comprehensively evaluate the effectiveness and generalizability of HetRelMTL-Net framework, we conduct extensive experiments on two widely adopted benchmark knowledge graphs. This section elaborates on the datasets, baseline methods, and implementation details utilized in our empirical study.

4.1.1. Datasets

Our experiments are conducted on two publicly available benchmarks: WN18RR and FB15k-237. These datasets are selected for their distinct relational characteristics and prevalence in prior research, enabling a rigorous assessment of model performance across diverse scenarios.
  • WN18RR: A subset of WordNet, which primarily contains lexical and semantic relations. It is characterized by a high proportion of symmetric (e.g., antonym) and hierarchical (e.g., hypernym) relations, posing challenges for structural reasoning.
  • FB15k-237: Derived from Freebase, this dataset features more complex and diverse relational patterns, including asymmetric and many-to-many relations (e.g., foundedBy), which are prevalent in real-world knowledge graphs.
Table 1 provides a statistical summary of both datasets, detailing the number of entities, relations, and the splits of training, validation, and test triples.

4.1.2. Baselines

To ensure a fair and comprehensive comparison, we compare HetRelMTL-Net against three categories of state-of-the-art baseline methods:
  • Structure-based models: This category includes classical geometric embedding approaches, such as TransE [5], RotatE [6], ComplEx [10], and HypER [18]. These models primarily leverage graph topology while disregarding textual semantics.
  • Text-enhanced models: Models in this group incorporate textual information, such as entity descriptions. We include KG-BERT [7], ERNIE-KG [8], and an ablated single-task variant of our model, denoted as GraphBert-KGC, which utilizes the proposed fusion module without multi-task learning.
  • Multi-task models: To evaluate the efficacy of our multi-task learning strategy, we compare against MTL-KGC [21], MultiKE [14], and a GradNorm-based adaptive multi-task baseline, referred to as GradNorm-based model (GradNorm-KGC [15]).
The selection of these baselines ensures coverage of diverse methodological paradigms and facilitates a thorough analysis of the contributions of different components in our framework.

4.1.3. Implementation Details and Experimental Protocols

To ensure rigorous and reproducible experimental evaluation, our study adheres to established principles of fairness, transparency, and statistical reliability throughout the implementation, hyperparameter tuning, and evaluation phases. The detailed protocols are as follows.
Baseline Models and Implementation Fairness. All baseline models are implemented using their official, publicly released codebases to ensure correctness and comparability. This includes structure-based models (TransE, RotatE, ComplEx, HypER), text-enhanced models (KG-BERT, ERNIE-KG), and multi-task learning baselines (MT-KGC, GradNorm-KGC). All experiments, including those for baselines and our HetRelMTL-Net, are conducted under identical hardware (NVIDIA RTX 3090 GPUs, NVIDIA Corporation, Santa Clara, CA, USA) and software (PyTorch v1.12.0, CUDA v11.6) environments.
Model Architecture and Feature Extraction. For HetRelMTL-Net, textual features are extracted from entity descriptions using the pre-trained BERT-base model [19]. The descriptions are tokenized and truncated/padded to a maximum length of 128 tokens, yielding 768-dimensional embeddings. The graph attention module is a 2-layer Graph Attention Network (GAT) with 2 attention heads, a hidden dimension of 100, and a dropout rate of 0.3 applied to attention scores. The multi-hop neighborhood scope is set to K = 2 hops with a decay factor γ = 0.8 . For multi-hop path reasoning, we extract simple paths up to length K = 2 between entity pairs in the training set using breadth-first search, filtering out cyclic paths. These paths are encoded using a single-layer LSTM with a hidden size of 200. Entity and relation embeddings are of dimension 100. In the multi-task setup, entity embeddings are projected into a 100-dimensional hyperbolic space for hierarchical relations. The model uses a negative sampling ratio of 10 negative triples per positive triple during training.
Hyperparameter Tuning Strategy. To establish a fair comparison ground, we apply a consistent hyperparameter search strategy to our proposed model and a selection of the strongest baselines representing different methodological paradigms (RotatE for geometric embedding, KG-BERT for text-enhanced models, and GradNorm-KGC for adaptive multi-task learning). The grid search spans key parameters: learning rate { 1 × 10 5 , 1 × 10 4 , 1 × 10 3 } , embedding dimension { 100 , 200 , 256 } , and batch size { 128 , 256 , 512 } . Other parameters, such as dropout rate and LSTM hidden size, were fixed based on pilot studies to maintain architectural consistency. The optimal configuration for each model is selected based on validation set performance. For HetRelMTL-Net, this resulted in the use of the AdamW optimizer with a learning rate of 1 × 10 4 and a batch size of 256.
Training and Evaluation Protocol. To mitigate the influence of random initialization and ensure the statistical reliability of our results, we perform three independent training runs for HetRelMTL-Net and the aforementioned key baselines, each initialized with distinct fixed random seeds (e.g., 56, 128, 999). The performance metrics reported in the main text (e.g., Table 2) represent the mean values across these runs, with observed variances being negligible (standard deviations < 0.5% for MRR and Hits@1). Training proceeds for a maximum of 200 epochs with an early stopping criterion (patience of 20 epochs on the validation set). To further promote reproducibility, the complete source code, configuration files, and detailed instructions will be made publicly available upon acceptance.
On WN18RR dataset, HetRelMTL-Net attains a Mean Reciprocal Rank (MRR) of 0.420, significantly surpassing strong structure-based baselines such as RotatE (0.362) and HypER (0.348). Notably, the model exhibits remarkable performance in modeling hierarchical relations, achieving a Hits@1 score of 0.812 for the subClassOf relation—a substantial improvement over HypER’s score of 0.725.
On the more complex FB15k-237 dataset, our method achieves a Hits@1 score of 0.287 and a Hits@10 score of 0.689, outperforming KG-BERT by 53.5% in Hits@1 and demonstrating strong performance in top-10 prediction. Furthermore, HetRelMTL-Net excels in modeling asymmetric relations, attaining a Hits@1 score of 0.352 for the foundedBy relation, compared to 0.289 by RotatE.
These results underscore the effectiveness of the proposed framework in integrating structural and textual information while adaptively coordinating multiple learning tasks.

4.1.4. Convergence Efficiency

To evaluate the training efficiency of HetRelMTL-Net, we compare its convergence behavior with that of GradNorm-KGC and a fixed–weight multi-task baseline. As illustrated in Figure 3, our model converges approximately 12% faster, accompanied by a consistent and stable reduction in loss across all tasks. This accelerated and more stable convergence highlights the efficacy of the proposed KL-divergence-based dynamic weighting strategy in harmonizing gradient directions and mitigating inter-task interference during training.

4.2. Ablation Studies

To quantitatively assess the contribution of each core component in HetRelMTL-Net, we conduct a series of ablation experiments. The results are summarized in Table 3, which reports performance changes on both WN18RR (in terms of MRR) and FB15k-237 (in terms of Hits@1) relative to the full model.
Removing Dynamic Task Weighting: Ablating the KL-divergence-based dynamic weighting strategy leads to a 15.6% decrease in MRR on WN18RR and an 18.2% drop in Hits@1 on FB15k-237. This result underscore the critical importance of adaptive task balancing in mitigating gradient conflicts and enhancing multi-task coordination.
Disabling Path Reasoning: Removing the path reasoning module causes a 12.7% decline in Hits@10 on FB15k-237, confirming that multi-hop inference plays a vital role in capturing complex relational dependencies.
Replacing Dynamic Projections: Substituting the lightweight dynamic relation projection with a fixed projection mechanism (as used in TransR) results in a 22.3% reduction in MRR on WN18RR and a 25.1% decrease in Hits@1 on FB15k-237. This highlights the advantage of adaptive, relation-specific transformations in efficiently modeling heterogeneous relations.
Removing Cross-Modal Fusion: Omitting the cross-modal gating mechanism for integrating structural and textual embeddings leads to the most severe performance degradation—a 28.3% drop in MRR on WN18RR and a 30.9% decrease in Hits@1 on FB15k-237. This emphasizes the essential role of effectively combining structural and semantic information for robust knowledge graph completion.
Task Interaction Analysis: Beyond individual component contributions, the ablation results reveal important patterns about how the three tasks interact. The removal of path reasoning causes a more pronounced drop in Hits@10 (−12.7%) than in Hits@1 (−12.5%) on FB15k-237, indicating that this task primarily enhances multi-hop inference capabilities. Conversely, disabling cross-modal fusion—which affects both link prediction and relation classification—leads to the most severe degradation across all metrics, underscoring that semantic fusion provides fundamental benefits shared by all tasks. The dynamic weighting mechanism proves particularly critical for the more complex FB15k-237 (−18.2% Hits@1 when removed), where task difficulties are more heterogeneous. These patterns collectively demonstrate that the three tasks—link prediction, relation classification, and path reasoning—provide complementary signals: path reasoning strengthens structural inference (benefiting Hits@10), relation classification improves semantic discrimination (enhancing Hits@1), and their combination—when properly balanced by dynamic weighting—yields optimal overall performance.
Implications for Alternative Fusion Strategies: The severe degradation observed when removing cross-modal fusion (“No Cross-Modal Fusion”)—a 28.3% drop in MRR on WN18RR and 30.9% drop in Hits@1 on FB15k-237—indirectly suggests the superiority of our adaptive gating mechanism over simpler alternatives. In this ablation, the gating mechanism is disabled, effectively reverting to a fixed, non-adaptive fusion of structural and textual features (equivalent to a simple averaging or naive concatenation). The magnitude of performance loss indicates that static fusion strategies are inadequate for handling the heterogeneous nature of knowledge graph relations, where the relative importance of structural versus textual signals varies significantly across different relation types. While a comprehensive quantitative comparison with all individual alternatives (graph-only, text-only, concatenation+MLP, weighted averaging) is beyond the scope of this paper, this dramatic decline underscores the necessity of our context-aware, learnable gating approach.
Overall, the ablation study validates the necessity of each proposed component and demonstrates their collective contribution to the strong performance of HetRelMTL-Net.

4.3. Attention Visualization

To qualitatively interpret the behavior of HetRelMTL-Net, we visualize the attention mechanisms employed in the relation classification task, as well as the dynamic evolution of task–specific weights during training.
A heatmap illustrating the hierarchical attention weights produced for a financial knowledge graph is shown in Figure 4. The model accurately allocates higher attention scores (e.g., 0.7) to semantically salient entity pairs, such as (“Company”, “Subsidiary”), which participate in the multi-hop path holdsSharescontrols. Analysis of the corresponding gating values g i reveals interpretable patterns: for structural-dominant relations like those in corporate hierarchies, average gate values exceed 0.7, prioritizing graph topology; conversely, for descriptive relations where textual semantics are key, gate values fall below 0.3, emphasizing entity descriptions. This dynamic modulation based on relational context explains why fixed fusion strategies underperform—they cannot adapt to such varying requirements. This demonstrates the model’s ability to discern and emphasize structurally and semantically meaningful interactions, thereby enhancing discriminative capacity in relation classification.
Figure 5 illustrates the evolution of KL-divergence values for each task throughout the training process. The dynamic variation in these values reflects the shifting focus of the learning process: link prediction receives greater emphasis in the initial stages, while path reasoning gradually gains prominence in later epochs. This adaptive behavior confirms the effectiveness of our KL-divergence-based weighting strategy in automatically adjusting task priorities according to their relative difficulties and training progress.
These visualizations not only enhance the interpretability of the model but also provide empirical support for the mechanisms underlying its robust performance.

4.4. Case Study: Financial Relation Reasoning

To demonstrate our framework’s capability to address the complex reasoning challenges in semantically rich domains, we present a case study on a financial knowledge graph. Here, we examine a real-world inference task within a financial knowledge graph. The model is tasked with inferring the missing relation in the following multi–hop relational chain:
Company A holdsShares Subsidiary ? Target Company
HetRelMTL-Net accurately predicts the relation controls with high confidence. Quantitative assessment reveals that our model achieves a 23% improvement in accuracy over the best baseline in predicting such transitive financial relations, This case not only demonstrates the model’s ability to perform complex multi-step reasoning but also underscores its practical utility and robustness in real-world scenarios involving intricate relational dependencies.

5. Discussion

5.1. Model Limitations

While HetRelMTL-Net demonstrates strong performance in modeling multi-hop dependencies within a limited path length K = 2 , its capability to handle extremely long-range relational paths (e.g., K > 3 ) remains constrained. This limitation is primarily attributed to the use of LSTM for path encoding, which is susceptible to gradient vanishing over long sequences. The sequential nature of LSTM processing can also become a bottleneck for capturing very long-range dependencies, as information must pass through many recurrent steps, potentially leading to the loss of critical relational signals from distant hops.
Future improvements could explore more advanced sequential modeling architectures, such as transformer-based encoders, which are better equipped to capture long-term dependencies through self-attention mechanisms. Unlike recurrent models, transformers process all elements in a sequence simultaneously and compute pairwise attention scores, allowing direct modeling of relationships between any two entities in a path regardless of their distance. This architecture has proven highly effective in capturing long-range contexts in domains like natural language processing (e.g., in long-document understanding) and time-series analysis, where understanding distant interactions is crucial. Adapting such encoders for relational paths could significantly extend the model’s effective reasoning range. However, this would also introduce considerations such as path length variability and computational efficiency, which could be addressed through techniques like sparse attention or hierarchical modeling.
Additionally, while our ablation studies provide initial insights into task interactions, a more granular analysis—such as measuring per-task gradients or conducting exhaustive task combination studies—could further elucidate the multi-task dynamics. We leave this for future work, alongside the exploration of more advanced task-balancing mechanisms.

5.2. Theoretical Insights

The KL-divergence-based dynamic weighting strategy introduced in this work provides a theoretically grounded approach to multi-task balancing.
The uniform prior 1 / K is foundational to this approach. From an information-theoretic perspective, it represents the maximum entropy distribution—the least informative and most unbiased starting assumption about relative task difficulties. This choice ensures that any measured deviation via D KL ( k ) genuinely reflects emergent training dynamics rather than pre-imposed biases. Compared to alternative weighting schemes, this approach offers distinct advantages: it requires no prior domain knowledge about task relationships, avoids sensitivity to gradient scale variations (a limitation of gradient-based methods like GradNorm), and provides an interpretable, data-driven metric of task imbalance directly derived from the loss distribution.
The joint learning of link prediction, relation classification, and path reasoning within our framework creates a synergistic regularization effect. Link prediction establishes the fundamental geometric constraints for entity-relation embeddings. Relation classification imposes additional semantic discrimination requirements, forcing the model to develop fine-grained representational distinctions. Path reasoning introduces structural inductive biases that capture transitive dependencies and multi-hop contexts. Crucially, these tasks operate on shared representations generated by GraphBert-KGC, enabling constructive knowledge transfer: semantic distinctions learned through relation classification inform more accurate path reasoning, while structural patterns identified via path reasoning provide context for better link prediction. The dynamic weighting mechanism ensures this synergistic transfer remains productive by adaptively calibrating each task’s contribution based on its instantaneous learning difficulty, preventing any single objective from dominating prematurely.
Our adaptive gating mechanism for graph–text fusion addresses a core limitation of static fusion strategies. While simple alternatives like concatenation, weighted averaging, or modality-specific routing apply fixed rules regardless of relational context, our gating mechanism learns to dynamically reweight modalities based on specific entity-relation tuples. This design is theoretically justified by the heterogeneous nature of knowledge graphs: relations like subClassOf are primarily structural, while relations like hasDescription rely heavily on textual semantics. Static fusion either underweights or overweights one modality for certain relation types, leading to suboptimal representations. The adaptive gating effectively implements a context-aware mixture of experts, where each modality’s contribution is optimized for the specific relational context.
By quantifying the deviation of task-specific loss distributions from a uniform prior, the method provides a principled mechanism for prioritizing more challenging tasks during training. This not only mitigates gradient conflicts but also promotes more stable and efficient optimization. Given its general formulation, this strategy holds potential for broader application in other multi-modal and multi-task learning scenarios, both within and beyond the domain of knowledge graph analysis.

5.3. Error Analysis and Limitations

While HetRelMTL-Net demonstrates strong overall performance, analysis of its failure cases reveals specific limitations that align with the architectural choices discussed in Section 5.1.
First, performance degrades on relations with very low frequency in the training data (e.g., fewer than 50 instances). For example, rare relations like awardedBy in FB15k-237 show a Hits@1 approximately 60% lower than the dataset average. This is a common challenge for data-driven embedding methods, which struggle to learn robust representations from scarce supervision.
Second, the model exhibits a marked performance drop when reasoning requires implicit paths longer than the explicit neighborhood scope ( K = 2 ). While the path reasoning module can encode explicit multi-hop sequences, inferring relationships that depend on longer, unobserved relational chains remains challenging. This is consistent with the known limitation of LSTM-based path encoders for very long sequences.
Third, qualitative inspection of attention patterns reveals that for some symmetric or compositionally complex relations (e.g., relations that are both symmetric and hierarchical), the model occasionally misallocates attention between structural and textual signals, leading to confusion with inverse relations. This suggests that the current gating mechanism, while adaptive, may benefit from more explicit constraints or relational priors in such edge cases.
These observations are not failures of the core framework but rather point to natural trade-offs and opportunities for future work, as outlined in the following section.

5.4. Future Directions

Several promising research directions emerge from this work:
Computational Complexity and Scalability Considerations: The primary computational costs stem from the multi-hop graph attention, BERT-based text encoding, and multi-task optimization. Future work to enhance scalability will explore efficient strategies such as advanced neighborhood sampling for large graphs, employing lighter-weight language models, and model distillation techniques, aiming to maintain performance while improving efficiency for large-scale applications.
Temporal Knowledge Graph Completion: Extending HetRelMTL-Net to incorporate temporal dynamics—such as modeling time-sensitive relations (e.g., workedAt with valid time intervals)—would enhance its applicability to real-world evolving knowledge graphs.
Cross-Domain Knowledge Transfer: Adapting the framework for low-resource knowledge graphs through transfer learning from data-rich domains could improve scalability and usability in resource-constrained settings.
Integration of Multi-Modal Data: Future versions could incorporate additional modalities, such as visual or tabular data, alongside textual and structural information, enabling more holistic knowledge representation and reasoning.
Explainability and Interpretability: Enhancing the model transparency through explainable attention mechanisms or symbolic integration could facilitate better user trust and deeper semantic insights.

6. Conclusions

In this paper, we presented a unified framework, HetRelMTL-Net, for KGC, which effectively addresses heterogeneous relation reasoning through integrated graph–text fusion and adaptive multi-task optimization. The core innovations of HetRelMTL-Net include the GraphBert-KGC module, which dynamically aligns structural and semantic representations via relation-aware attention and parameter-efficient projection, and a KL-divergence-based multi-task weighting strategy that effectively coordinates link prediction, relation classification, and path reasoning. Extensive experiments on benchmark datasets demonstrated that HetRelMTL-Net achieves the state-of-the-art performance in modeling complex relations and exhibits superior training efficiency.
Future research will focus on extending HetRelMTL-Net to incorporate temporal dynamics for evolving knowledge graphs, as well as integrating additional modalities such as visual and tabular data to support more comprehensive knowledge representation. Further exploration of advanced path encoding mechanisms and interpretable attention architectures also represents a promising direction for enhancing the model’s capability and transparency in real-world applications.

Author Contributions

Methodology, Y.W., F.W. and S.S.; Validation, X.X. and S.S.; Formal analysis, F.W.; Investigation, Y.W.; Resources, R.Z.; Data curation, R.Z.; Writing—original draft, Y.W. and F.W.; Writing—review & editing, X.X., F.W. and S.S.; Visualization, X.X.; Supervision, Z.C.; Funding acquisition, Z.C. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China under grants 62176175 and 62372318; the Science and Technology Project of Suzhou Water under grants 2025004; Postgraduate Research & Practice Innovation Program of Jiangsu Province under grants KYCX24_3446.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding authors.

Conflicts of Interest

Author Run Zhu was employed by the company Kunshan Data Bureau Technology. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

  1. Ibrahim, N.; Aboulela, S.; Ibrahim, A.; Kashef, R. A survey on augmenting knowledge graphs (KGs) with large language models (LLMs): Models, evaluation metrics, benchmarks, and challenges. Discov. Artif. Intell. 2024, 4, 76. [Google Scholar] [CrossRef]
  2. Zhang, J.C.; Zain, A.M.; Zhou, K.Q.; Chen, X.; Zhang, R.M. A review of recommender systems based on knowledge graph embedding. Expert Syst. Appl. 2024, 250, 123876. [Google Scholar] [CrossRef]
  3. James, T.; Hennig, H. Knowledge graphs and their applications in drug discovery. In High Performance Computing for Drug Discovery and Biomedicine; Heifetz, A., Ed.; Publishing House: New York, NY, USA, 2023; pp. 203–221. [Google Scholar]
  4. Zamini, M.; Reza, H.; Rabiei, M. A Review of Knowledge Graph Completion. Information 2022, 13, 396. [Google Scholar] [CrossRef]
  5. Bordes, A.; Usunier, N.; García-Durán, A.; Weston, J.; Yakhnenko, O. Translating embeddings for modeling multi-relational data. In Proceedings of the 27th International Conference on Neural Information Processing Systems—Volume 2, Lake Tahoe, NV, USA, 5–10 December 2013. [Google Scholar]
  6. Sun, Z.; Deng, Z.H.; Nie, J.Y.; Tang, J. RotatE: Knowledge graph embedding by relational rotation in complex space. In Proceedings of the 7th International Conference on Learning Representations, New Orleans, LO, USA, 6–9 May 2019. [Google Scholar]
  7. Yao, L.; Mao, C.; Luo, Y. KG-BERT: BERT for knowledge graph completion. arXiv 2019, arXiv:1909.03193. [Google Scholar] [CrossRef]
  8. Zhang, Z.; Han, X.; Liu, Z.; Jiang, X.; Sun, M.; Liu, Q. ERNIE: Enhanced Language Representation with Informative Entities. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, Florence, Italy, 28 July–2 August 2019. [Google Scholar]
  9. Yin, J.; Zhou, J.; Shan, Y.; Peng, J.; Liu, H.; Zhou, X.; Wang, X. Multi-task Learning for Hyper-Relational Knowledge Graph Completion. In Proceedings of the 20th International Conference on Intelligent Computing, Tianjin, China, 5–8 August 2024. [Google Scholar]
  10. Trouillon, T.; Welbl, J.; Riedel, S.; Gaussier, E.; Bouchard, G. Complex embeddings for simple link prediction. In Proceedings of the 33rd International Conference on Machine Learning, New York, NY, USA, 19–24 June 2016. [Google Scholar]
  11. Xie, R.; Liu, Z.; Jia, J.; Luan, H.; Sun, M. Representation learning of knowledge graphs with entity descriptions. In Proceedings of the 30th AAAI Conference on Artificial Intelligence, Phoenix, AZ, USA, 12–17 February 2016. [Google Scholar]
  12. Wang, Q.; Huang, P.; Wang, H.; Dai, S.; Jiang, W.; Liu, J.; Lyu, Y.; Zhu, Y.; Wu, H. Coke: Contextualized knowledge graph embedding. arXiv 2019, arXiv:1911.02168. [Google Scholar]
  13. Wang, X.; Gao, T.; Zhu, Z.; Zhang, Z.; Liu, Z.; Li, J.; Tang, J. KEPLER: A unified model for knowledge embedding and pre-trained language representation. Trans. Assoc. Comput. Linguist. 2021, 9, 176–194. [Google Scholar] [CrossRef]
  14. Zhang, Q.; Sun, Z.; Hu, W.; Chen, M.; Guo, L.; Qu, Y. Multi-view knowledge graph embedding for entity alignment. In Proceedings of the 28th International Joint Conference on Artificial Intelligence (IJCAI-19), Macao, China, 10–16 August 2019. [Google Scholar]
  15. Chen, Z.; Badrinarayanan, V.; Lee, C.Y.; Rabinovich, A. Gradnorm: Gradient normalization for adaptive loss balancing in deep multitask networks. In Proceedings of the 35th International Conference on Machine Learning, Stockholmsmässan, Stockholm, Sweden, 10–15 July 2018. [Google Scholar]
  16. Kendall, A.; Gal, Y.; Cipolla, R. Multi-task learning using uncertainty to weigh losses for scene geometry and semantics. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–23 June 2018. [Google Scholar]
  17. Nickel, M.; Kiela, D. Poincaré embeddings for learning hierarchical representations. In Proceedings of the 31st International Conference on Neural Information Processing Systems, Long Beach, CA, USA, 4–9 December 2017. [Google Scholar]
  18. Kolyvakis, P.; Kalousis, A.; Kiritsis, D. Hyperbolic knowledge graph embeddings for knowledge base completion. In Proceedings of the 17th European Semantic Web Conference, Heraklion, Greece, 31 May–4 June 2017. [Google Scholar]
  19. Devlin, J.; Chang, M.W.; Lee, K.; Toutanova, T. Bert: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Minneapolis, MA, USA, 2–7 June 2019. [Google Scholar]
  20. Kasula, V.K.; Konda, B.; Yenugula, M.; Yadulla, A.R.; Addula, S.R.; Kotteti, C.M.M. Fraudulent Credit Card Transaction Monitoring Using Deep Learning: A CNN-LSTM Approach for Single-Transaction Security Identification. In Proceedings of the 4th International Conference on Computing and Machine Intelligence (ICMI), Mount Pleasant, MI, USA, 5–6 April 2025. [Google Scholar]
  21. Kim, B.; Hong, T.; Ko, Y.; Seo, J. Multi-task learning for knowledge graph completion with pre-trained language models. In Proceedings of the 28th International Conference on Computational Linguistics, Barcelona, Spain, 8–13 December 2020. [Google Scholar]
Figure 1. Overview of the GraphBert-KGC architecture, illustrating dynamic relation projection, multi-hop attention, and cross-modal fusion, along with the multi-task learning workflow.
Figure 1. Overview of the GraphBert-KGC architecture, illustrating dynamic relation projection, multi-hop attention, and cross-modal fusion, along with the multi-task learning workflow.
Electronics 15 00222 g001
Figure 2. Dynamic task weight evolution during training.link prediction (blue), relation classification (red), path reasoning (green).
Figure 2. Dynamic task weight evolution during training.link prediction (blue), relation classification (red), path reasoning (green).
Electronics 15 00222 g002
Figure 3. Training convergence comparison.
Figure 3. Training convergence comparison.
Electronics 15 00222 g003
Figure 4. Attention heatmap for financial relation classification. The arrow indicates the attention score corresponding to the entity pair “Company” and “Subsidiary”.
Figure 4. Attention heatmap for financial relation classification. The arrow indicates the attention score corresponding to the entity pair “Company” and “Subsidiary”.
Electronics 15 00222 g004
Figure 5. Task-specific KL-divergence dynamics during training.
Figure 5. Task-specific KL-divergence dynamics during training.
Electronics 15 00222 g005
Table 1. Statistical overview of the experimental datasets.
Table 1. Statistical overview of the experimental datasets.
DatasetEntitiesRelationsTrainTriples ValidTest
WN18RR40,9431186,83530343100
FB15k-23714,951134483,14250,00059,071
Table 2. Comparative performance evaluation (MRR on WN18RR; Hits@1 and Hits@10 on FB15k-237).
Table 2. Comparative performance evaluation (MRR on WN18RR; Hits@1 and Hits@10 on FB15k-237).
ModelWN18RR (MRR)FB15k-237 (Hits@1)FB15k-237 (Hits@10)
TransE0.2260.1950.482
RotatE0.3620.2410.593
HypER0.3480.2170.552
KG-BERT0.2160.1870.471
GraphBert-KGC0.3460.2310.585
MT-KGC0.3300.1720.458
GradNorm-KGC0.3380.2230.561
HetRelMTL-Net0.4200.2870.689
Table 3. Ablation analysis (performance relative to full model).
Table 3. Ablation analysis (performance relative to full model).
AblationWN18RR (MRR)FB15k-237 (Hits@1)
HetRelMTL-Net0.4200.287
No Dynamic Weights0.355 (−15.6%)0.235 (−18.2%)
No Path Reasoning0.376 (−10.5%)0.251 (−12.7%)
Fixed Projection0.326 (−22.3%)0.215 (−25.1%)
No Cross-Modal Fusion0.301 (−28.3%)0.198 (−30.9%)
Note: Boldface denotes the performance of our proposed approach for clear comparison with baseline models.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Wu, Y.; Xi, X.; Wang, F.; Sheng, S.; Cui, Z.; Zhu, R. HetRelMTL-Net: A Unified Framework for Knowledge Graph Completion via Graph–Text Fusion and Multi-Task Dynamic Optimization. Electronics 2026, 15, 222. https://doi.org/10.3390/electronics15010222

AMA Style

Wu Y, Xi X, Wang F, Sheng S, Cui Z, Zhu R. HetRelMTL-Net: A Unified Framework for Knowledge Graph Completion via Graph–Text Fusion and Multi-Task Dynamic Optimization. Electronics. 2026; 15(1):222. https://doi.org/10.3390/electronics15010222

Chicago/Turabian Style

Wu, Yujie, Xuefeng Xi, Fei Wang, Shengli Sheng, Zhiming Cui, and Run Zhu. 2026. "HetRelMTL-Net: A Unified Framework for Knowledge Graph Completion via Graph–Text Fusion and Multi-Task Dynamic Optimization" Electronics 15, no. 1: 222. https://doi.org/10.3390/electronics15010222

APA Style

Wu, Y., Xi, X., Wang, F., Sheng, S., Cui, Z., & Zhu, R. (2026). HetRelMTL-Net: A Unified Framework for Knowledge Graph Completion via Graph–Text Fusion and Multi-Task Dynamic Optimization. Electronics, 15(1), 222. https://doi.org/10.3390/electronics15010222

Note that from the first issue of 2016, this journal uses article numbers instead of page numbers. See further details here.

Article Metrics

Article metric data becomes available approximately 24 hours after publication online.
Back to TopTop