Article

HIEA: Hierarchical Inference for Entity Alignment with Collaboration of Instruction-Tuned Large Language Models and Small Models

School of Information and Artificial Intelligence, Yangzhou University, Yangzhou 225127, China
* Author to whom correspondence should be addressed.
Electronics 2026, 15(2), 421; https://doi.org/10.3390/electronics15020421
Submission received: 27 December 2025 / Revised: 14 January 2026 / Accepted: 15 January 2026 / Published: 18 January 2026
(This article belongs to the Special Issue AI-Powered Natural Language Processing Applications)

Abstract

Entity alignment (EA) facilitates knowledge fusion by matching semantically identical entities across distinct knowledge graphs (KGs). Existing embedding-based methods rely solely on intrinsic KG facts and often struggle with long-tail entities due to insufficient information. Recently, large language models (LLMs), equipped with rich background knowledge and strong reasoning abilities, have shown promise for EA. However, most current LLM-enhanced approaches follow the in-context learning paradigm, requiring multi-round interactions with carefully designed prompts to perform auxiliary operations, which incurs substantial computational overhead. Moreover, they fail to fully exploit the complementary strengths of embedding-based small models and LLMs. To address these limitations, we propose HIEA, a novel hierarchical inference framework for entity alignment. By instruction-tuning a generative LLM with a unified, concise prompt and a knowledge adapter, HIEA produces alignment results with a single LLM invocation. Meanwhile, embedding-based small models not only generate candidate entities but also support the LLM through data augmentation and certainty-aware source entity classification, fostering deeper collaboration between small models and LLMs. Extensive experiments on both standard and highly heterogeneous benchmarks demonstrate that HIEA consistently outperforms existing embedding-based and LLM-enhanced methods, achieving absolute Hits@1 improvements of up to 5.6% while significantly reducing inference cost.
Keywords: entity alignment; knowledge graph; large language models; hierarchical inference; instruction tuning
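As a reading aid for the pipeline the abstract describes, here is a minimal Python sketch of certainty-aware hierarchical inference: an embedding-based small model ranks candidate targets and resolves confident cases itself, while uncertain cases are routed to the instruction-tuned LLM with a single invocation each. The margin-based certainty criterion and all names (rank_candidates, llm_select, margin_threshold) are illustrative assumptions, not the paper's actual implementation.

# Minimal sketch of the hierarchical inference idea from the abstract.
# The routing criterion (top-1 vs. top-2 score margin) and every name
# below are hypothetical placeholders, not HIEA's real components.
from dataclasses import dataclass
from typing import Callable, Sequence

@dataclass
class Alignment:
    source: str
    target: str
    resolved_by: str  # "small_model" or "llm"

def hierarchical_align(
    source_entities: Sequence[str],
    # Small model: returns candidate target entities with scores, best first.
    rank_candidates: Callable[[str], list],
    # Instruction-tuned LLM: picks one target from a short candidate list
    # in a single invocation per uncertain source entity.
    llm_select: Callable[[str, list], str],
    margin_threshold: float = 0.2,
    k: int = 10,
) -> list:
    results = []
    for src in source_entities:
        ranked = rank_candidates(src)
        if not ranked:
            continue  # no candidates: nothing to align
        top = ranked[0][1]
        runner_up = ranked[1][1] if len(ranked) > 1 else 0.0
        if top - runner_up >= margin_threshold:
            # Certain case: accept the small model's top-ranked candidate
            # without invoking the LLM at all.
            results.append(Alignment(src, ranked[0][0], "small_model"))
        else:
            # Uncertain case: defer to the LLM, handing it only the top-k
            # candidates so the prompt stays concise.
            candidates = [c for c, _ in ranked[:k]]
            results.append(Alignment(src, llm_select(src, candidates), "llm"))
    return results

Under this reading, the LLM tier's cost scales with the number of uncertain source entities rather than with rounds of prompting; the data augmentation and certainty-aware classification mentioned in the abstract are trained components of HIEA and are not modeled in this sketch.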

Share and Cite

MDPI and ACS Style

Shi, X.; Han, Z.; Li, B. HIEA: Hierarchical Inference for Entity Alignment with Collaboration of Instruction-Tuned Large Language Models and Small Models. Electronics 2026, 15, 421. https://doi.org/10.3390/electronics15020421

AMA Style

Shi X, Han Z, Li B. HIEA: Hierarchical Inference for Entity Alignment with Collaboration of Instruction-Tuned Large Language Models and Small Models. Electronics. 2026; 15(2):421. https://doi.org/10.3390/electronics15020421

Chicago/Turabian Style

Shi, Xinchen, Zhenyu Han, and Bin Li. 2026. "HIEA: Hierarchical Inference for Entity Alignment with Collaboration of Instruction-Tuned Large Language Models and Small Models." Electronics 15, no. 2: 421. https://doi.org/10.3390/electronics15020421

APA Style

Shi, X., Han, Z., & Li, B. (2026). HIEA: Hierarchical Inference for Entity Alignment with Collaboration of Instruction-Tuned Large Language Models and Small Models. Electronics, 15(2), 421. https://doi.org/10.3390/electronics15020421

Note that from the first issue of 2016, this journal uses article numbers instead of page numbers.
