Open Access Article
HIEA: Hierarchical Inference for Entity Alignment with Collaboration of Instruction-Tuned Large Language Models and Small Models
by Xinchen Shi, Zhenyu Han and Bin Li *
School of Information and Artificial Intelligence, Yangzhou University, Yangzhou 225127, China
* Author to whom correspondence should be addressed.
Electronics 2026, 15(2), 421; https://doi.org/10.3390/electronics15020421
Submission received: 27 December 2025 / Revised: 14 January 2026 / Accepted: 15 January 2026 / Published: 18 January 2026
Abstract
Entity alignment (EA) facilitates knowledge fusion by matching semantically identical entities in distinct knowledge graphs (KGs). Existing embedding-based methods rely solely on intrinsic KG facts and often struggle with long-tail entities due to insufficient information. Recently, large language models (LLMs), empowered by rich background knowledge and strong reasoning abilities, have shown promise for EA. However, most current LLM-enhanced approaches follow the in-context learning paradigm, requiring multi-round interactions with carefully designed prompts to perform additional auxiliary operations, which leads to substantial computational overhead. Moreover, they fail to fully exploit the complementary strengths of embedding-based small models and LLMs. To address these limitations, we propose HIEA, a novel hierarchical inference framework for entity alignment. By instruction-tuning a generative LLM with a unified and concise prompt and a knowledge adapter, HIEA produces alignment results with a single LLM invocation. Meanwhile, embedding-based small models not only generate candidate entities but also support the LLM through data augmentation and certainty-aware source entity classification, fostering deeper collaboration between small models and LLMs. Extensive experiments on both standard and highly heterogeneous benchmarks demonstrate that HIEA consistently outperforms existing embedding-based and LLM-enhanced methods, achieving absolute Hits@1 improvements of up to 5.6%, while significantly reducing inference cost.
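
To make the hierarchical inference idea concrete, the following is a minimal Python sketch of the two-tier routing the abstract describes: an embedding-based small model proposes candidate entities, confidently matched source entities are resolved directly, and only uncertain ones trigger a single LLM invocation with one unified prompt. All names here (`hierarchical_align`, `llm_generate`) and the margin-based certainty heuristic are illustrative assumptions; the paper's actual certainty-aware classifier, prompt design, and knowledge adapter are not reproduced in this sketch.

```python
import numpy as np

def hierarchical_align(src_embs, tgt_embs, tgt_names, llm_generate,
                       top_k=10, certainty_margin=0.1):
    """Hierarchical inference sketch: a small embedding model generates
    candidates; only low-certainty source entities are routed to the
    instruction-tuned LLM, each with a single invocation."""
    # L2-normalize embeddings so the dot product is cosine similarity.
    src = src_embs / np.linalg.norm(src_embs, axis=1, keepdims=True)
    tgt = tgt_embs / np.linalg.norm(tgt_embs, axis=1, keepdims=True)
    sim = src @ tgt.T  # (num_src, num_tgt) similarity matrix

    results = {}
    for i, row in enumerate(sim):
        cand_idx = np.argsort(row)[::-1][:top_k]  # top-k candidates
        # Illustrative certainty signal: gap between top-1 and top-2 scores.
        margin = row[cand_idx[0]] - row[cand_idx[1]]
        if margin >= certainty_margin:
            # Certain case: accept the small model's top-1 prediction.
            results[i] = int(cand_idx[0])
        else:
            # Uncertain case: one concise, unified prompt, one LLM call.
            prompt = (
                "Select the target entity that matches the source entity.\n"
                f"Source: entity_{i}\nCandidates: "
                + ", ".join(tgt_names[j] for j in cand_idx)
            )
            results[i] = llm_generate(prompt)  # LLM returns chosen entity
    return results
```

The margin heuristic here merely stands in for the paper's certainty-aware source entity classification; the point of the sketch is the routing structure, in which LLM cost is paid only for the hard cases while easy cases never leave the small model.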