Search Results (2)

Search Parameters:
Keywords = ODQA

23 pages, 696 KiB  
Article
KG-EGV: A Framework for Question Answering with Integrated Knowledge Graphs and Large Language Models
by Kun Hou, Jingyuan Li, Yingying Liu, Shiqi Sun, Haoliang Zhang and Haiyang Jiang
Electronics 2024, 13(23), 4835; https://doi.org/10.3390/electronics13234835 - 7 Dec 2024
Cited by 2 | Viewed by 2320
Abstract
Despite the remarkable progress of large language models (LLMs) in understanding and generating unstructured text, their application in structured data domains and their multi-role capabilities remain underexplored. In particular, utilizing LLMs to perform complex reasoning tasks on knowledge graphs (KGs) is still an emerging area with limited research. To address this gap, we propose KG-EGV, a versatile framework leveraging LLMs to perform KG-based tasks. KG-EGV consists of four core steps: sentence segmentation, graph retrieval, EGV, and backward updating, each designed to segment sentences, retrieve relevant KG components, and derive logical conclusions. EGV, a novel integrated framework for LLM inference, enables comprehensive reasoning beyond retrieval by synthesizing diverse evidence, which is often unattainable via retrieval alone due to noise or hallucinations. The framework incorporates six key stages: generation expansion, expansion evaluation, document re-ranking, re-ranking evaluation, answer generation, and answer verification. Within this framework, LLMs take on various roles, such as generator, re-ranker, evaluator, and verifier, collaboratively enhancing answer precision and logical coherence. By combining the strengths of retrieval-based and generation-based evidence, KG-EGV achieves greater flexibility and accuracy in evidence gathering and answer formulation. Extensive experiments on widely used benchmarks, including FactKG, MetaQA, NQ, WebQ, and TriviaQA, demonstrate that KG-EGV achieves state-of-the-art performance in answer accuracy and evidence quality, showcasing its potential to advance QA research and applications.
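The four core steps named in the abstract can be sketched as a simple pipeline. This is a minimal illustration with placeholder logic (naive sentence splitting, string-match retrieval, and a stub for the six-stage EGV loop), not the authors' implementation; all function names and the toy KG are assumptions.

```python
# Hypothetical sketch of the KG-EGV pipeline: sentence segmentation ->
# graph retrieval -> EGV reasoning -> backward updating. Every body below
# is an illustrative stand-in for the real LLM-driven stage.

def segment_sentences(text):
    """Split input text into sentences (naive period split for illustration)."""
    return [s.strip() for s in text.split(".") if s.strip()]

def retrieve_graph(sentence, kg):
    """Return KG triples whose subject or object string appears in the sentence."""
    return [t for t in kg if t[0] in sentence or t[2] in sentence]

def egv_reason(sentence, triples):
    """Stand-in for the six EGV stages (generation expansion, expansion
    evaluation, document re-ranking, re-ranking evaluation, answer
    generation, answer verification): here we just pick the first object."""
    return {"sentence": sentence, "evidence": triples,
            "answer": triples[0][2] if triples else None}

def backward_update(conclusions):
    """Discard conclusions that ended up with no supporting evidence."""
    return [c for c in conclusions if c["evidence"]]

def kg_egv(question, kg):
    sentences = segment_sentences(question)
    conclusions = [egv_reason(s, retrieve_graph(s, kg)) for s in sentences]
    return backward_update(conclusions)

# Toy knowledge graph of (subject, relation, object) triples.
kg = [("Paris", "capital_of", "France"), ("Berlin", "capital_of", "Germany")]
result = kg_egv("Paris is a city. France has a capital.", kg)
```

In the real framework each stage would be an LLM call acting as generator, re-ranker, evaluator, or verifier; the sketch only shows how the stages compose.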

12 pages, 491 KiB  
Article
Quantum-Inspired Fusion for Open-Domain Question Answering
by Ruixue Duan, Xin Liu, Zhigang Ding and Yangsen Zhang
Electronics 2024, 13(20), 4135; https://doi.org/10.3390/electronics13204135 - 21 Oct 2024
Viewed by 1311
Abstract
Open-domain question-answering systems need models capable of referencing multiple passages simultaneously to generate accurate answers. The Rational Fusion-in-Decoder (RFiD) model focuses on differentiating between causal relationships and spurious features by utilizing the encoders of the Fusion-in-Decoder model. However, RFiD's reliance on partial token information limits its ability to determine whether the corresponding passage is a rationale for the question, potentially leading to inappropriate answers. To address this issue, we propose a Quantum-Inspired Fusion-in-Decoder (QFiD) model. Our approach introduces a Quantum Fusion Module (QFM) that maps single-dimensional hidden states into multi-dimensional ones, enabling the model to capture more comprehensive token information. Then, the classical mixture method from quantum information theory is used to fuse all information. Based on the fused information, the model can accurately predict the relationship between the question and passage. Experimental results on two prominent ODQA datasets, Natural Questions and TriviaQA, demonstrate that QFiD outperforms the strong baselines in automatic evaluations.
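The "classical mixture" the abstract refers to can be illustrated with the standard density-matrix construction from quantum information theory: each passage vector is treated as a pure state and the fused representation is their weighted mixture. This is a generic sketch of that idea, not the paper's actual QFM; the vector shapes, weights, and scoring function are assumptions.

```python
# Illustrative quantum-inspired fusion: each passage vector v becomes a
# pure state |v><v|, and the fused state is the classical mixture
# rho = sum_i w_i |v_i><v_i| (a density matrix with trace 1).
import numpy as np

def pure_state(v):
    """Outer product |v><v| of a unit-normalized vector."""
    v = v / np.linalg.norm(v)
    return np.outer(v, v)

def classical_mixture(vectors, weights):
    """Fuse passage vectors into one density matrix (weights normalized to sum to 1)."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return sum(wi * pure_state(v) for wi, v in zip(w, vectors))

def relevance(rho, query):
    """Score a query state against the fused state: <q| rho |q>."""
    q = query / np.linalg.norm(query)
    return float(q @ rho @ q)

# Two toy passage vectors, equally weighted.
passages = [np.array([1.0, 0.0]), np.array([1.0, 1.0])]
rho = classical_mixture(passages, [0.5, 0.5])
score = relevance(rho, np.array([1.0, 0.0]))
```

The trace-1 property of the mixture is what makes scores like `<q|rho|q>` behave as probabilities, which is the appeal of density-matrix fusion over simple vector averaging.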
(This article belongs to the Special Issue Data Mining Applied in Natural Language Processing)
