Search Results (51)

Search Parameters:
Keywords = multi-hop reasoning

24 pages, 3121 KiB  
Article
SG-RAG MOT: SubGraph Retrieval Augmented Generation with Merging and Ordering Triplets for Knowledge Graph Multi-Hop Question Answering
by Ahmmad O. M. Saleh, Gokhan Tur and Yucel Saygin
Mach. Learn. Knowl. Extr. 2025, 7(3), 74; https://doi.org/10.3390/make7030074 - 1 Aug 2025
Abstract
Large language models (LLMs) often hallucinate, especially in domain-specific tasks and tasks that require reasoning. Previously, we introduced SubGraph Retrieval Augmented Generation (SG-RAG) as a novel Graph RAG method for multi-hop question answering. SG-RAG leverages Cypher queries to search a given knowledge graph and retrieve the subgraph necessary to answer the question. The results from our previous work showed that our method outperforms traditional Retrieval Augmented Generation (RAG). In this work, we further enhanced SG-RAG by proposing an additional step called Merging and Ordering Triplets (MOT). The new MOT step decreases redundancy in the retrieved triplets by applying hierarchical merging to the retrieved subgraphs. Moreover, it provides an ordering among the triplets using the Breadth-First Search (BFS) traversal algorithm. We conducted experiments on the MetaQA benchmark, which was proposed for multi-hop question answering in the movies domain. Our experiments showed that SG-RAG MOT provided more accurate answers than Chain-of-Thought and Graph Chain-of-Thought. We also found that merging highly overlapping subgraphs (up to a certain point) and defining an order among the triplets helped the LLM generate more precise answers. Full article
(This article belongs to the Special Issue Knowledge Graphs and Large Language Models)
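The merging-and-ordering idea described in the abstract can be pictured with a small sketch. The snippet below is an illustrative approximation, not the authors' code: retrieved subgraphs (sets of triplets) are unioned to remove duplicates, and the surviving triplets are linearized by a BFS traversal starting from the question entity. Entity and relation names are made up for the example.

```python
from collections import defaultdict, deque

def merge_subgraphs(subgraphs):
    """Union overlapping subgraphs into one de-duplicated set of triplets."""
    merged = set()
    for triplets in subgraphs:
        merged.update(triplets)
    return merged

def order_triplets_bfs(triplets, start_entity):
    """Emit triplets in the order a BFS from the question entity reaches them."""
    outgoing = defaultdict(list)
    for head, rel, tail in triplets:
        outgoing[head].append((head, rel, tail))
    visited, ordered, queue = {start_entity}, [], deque([start_entity])
    while queue:
        node = queue.popleft()
        for head, rel, tail in outgoing[node]:
            ordered.append((head, rel, tail))
            if tail not in visited:
                visited.add(tail)
                queue.append(tail)
    return ordered

subgraphs = [
    {("Inception", "directed_by", "Christopher Nolan")},
    {("Inception", "directed_by", "Christopher Nolan"),
     ("Christopher Nolan", "directed", "Interstellar")},
]
print(order_triplets_bfs(merge_subgraphs(subgraphs), "Inception"))
```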

20 pages, 709 KiB  
Article
SKGRec: A Semantic-Enhanced Knowledge Graph Fusion Recommendation Algorithm with Multi-Hop Reasoning and User Behavior Modeling
by Siqi Xu, Ziqian Yang, Jing Xu and Ping Feng
Computers 2025, 14(7), 288; https://doi.org/10.3390/computers14070288 - 18 Jul 2025
Viewed by 244
Abstract
To address the limitations of existing knowledge graph-based recommendation algorithms, including insufficient utilization of semantic information and inadequate modeling of user behavior motivations, we propose SKGRec, a novel recommendation model that integrates knowledge graph and semantic features. The model constructs a semantic interaction graph (USIG) of user behaviors and employs a self-attention mechanism and a ranked optimization loss function to mine fine-grained semantic associations from user interactions. A relationship-aware aggregation module is designed to dynamically integrate higher-order relational features in the knowledge graph through an attention scoring function. In addition, a multi-hop relational path inference mechanism is introduced to capture long-distance dependencies and deepen user interest modeling. Experiments on the Amazon-Book and Last-FM datasets show that SKGRec significantly outperforms several state-of-the-art recommendation algorithms on the Recall@20 and NDCG@20 metrics. Comparison experiments validate the effectiveness of semantic analysis of user behavior and multi-hop path inference, while cold-start experiments further confirm the robustness of the model in sparse-data scenarios. This study provides a new optimization approach for knowledge graph- and semantic-driven recommendation systems, enabling more accurate capture of user preferences and alleviating the problem of noise interference. Full article
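As a rough illustration of the relationship-aware aggregation described above (a hedged sketch under assumed vector shapes, not the SKGRec implementation), each neighbor's message can be weighted by an attention score computed from the entity and relation vectors:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def relation_aware_aggregate(entity_vec, neighbors):
    """neighbors: list of (relation_vec, neighbor_vec) pairs from the KG."""
    # Attention score: compatibility of the entity with each relation-gated neighbor.
    scores = np.array([entity_vec @ (rel * nbr) for rel, nbr in neighbors])
    weights = softmax(scores)
    return sum(w * nbr for w, (rel, nbr) in zip(weights, neighbors))

rng = np.random.default_rng(0)
entity = rng.normal(size=8)
neighbors = [(rng.normal(size=8), rng.normal(size=8)) for _ in range(3)]
print(relation_aware_aggregate(entity, neighbors).shape)  # (8,)
```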

23 pages, 863 KiB  
Article
GLR: Graph Chain-of-Thought with LoRA Fine-Tuning and Confidence Ranking for Knowledge Graph Completion
by Yifei Chen, Xuliang Duan and Yan Guo
Appl. Sci. 2025, 15(13), 7282; https://doi.org/10.3390/app15137282 - 27 Jun 2025
Viewed by 657
Abstract
In knowledge graph construction, missing facts often lead to incomplete structures, thereby limiting the performance of downstream applications. Although recent knowledge graph completion (KGC) methods based on representation learning have achieved notable progress, they still suffer from two fundamental limitations, namely the lack of structured reasoning capabilities and the inability to assess the confidence of their predictions, which often results in unreliable outputs. We propose the GLR framework, which integrates Graph Chain-of-Thought (Graph-CoT) reasoning, LoRA fine-tuning, and the P(True)-based confidence evaluation mechanism. In the KGC task, this approach effectively enhances the reasoning ability and prediction reliability of large language models (LLMs). Specifically, Graph-CoT introduces local subgraph structures to guide LLMs in performing graph-constrained, step-wise reasoning, improving their ability to model multi-hop relational patterns. Complementing this, LoRA-based fine-tuning enables efficient adaptation of LLMs to the KGC scenario with minimal computational overhead, further enhancing the model’s capability for graph-structured reasoning. Moreover, the P(True) mechanism quantifies the reliability of candidate entities, improving the robustness of ranking and the controllability of outputs, thereby enhancing the credibility and interpretability of model predictions in knowledge reasoning tasks. We conducted systematic experiments on the standard KGC datasets FB15K-237, WN18RR, and UMLS, which demonstrate the effectiveness and robustness of the GLR framework. Notably, GLR achieves a Mean Reciprocal Rank (MRR) of 0.507 on FB15K-237, marking a 6.8% improvement over the best recent instruction-tuned method, DIFT combined with CoLE (MRR = 0.439). GLR also maintains significant performance advantages on WN18RR and UMLS, verifying its effectiveness in enhancing both the structured reasoning capabilities and the prediction reliability of LLMs for KGC tasks. These results indicate that GLR offers a unified and scalable solution to enhance structure-aware reasoning and output reliability of LLMs in KGC. Full article
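The P(True)-style confidence ranking lends itself to a short sketch. The snippet below only illustrates the general mechanism the abstract describes: each candidate tail entity is scored by the probability the model assigns to "True" when asked whether the completed triple holds, and candidates are re-ranked by that score. The probabilities here are mocked; in practice they would come from an LLM's token probabilities.

```python
# Mock P(True) scores standing in for LLM token probabilities (illustrative only).
mock_p_true = {"France": 0.97, "Italy": 0.12, "Spain": 0.08}

def rank_candidates(head, relation, candidates, p_true):
    """Re-rank candidate tail entities by their P(True) confidence."""
    return sorted(candidates, key=lambda c: p_true(head, relation, c), reverse=True)

ranking = rank_candidates("Paris", "capital_of", list(mock_p_true),
                          p_true=lambda h, r, c: mock_p_true[c])
print(ranking)  # ['France', 'Italy', 'Spain']
```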

22 pages, 933 KiB  
Article
DRKG: Faithful and Interpretable Multi-Hop Knowledge Graph Question Answering via LLM-Guided Reasoning Plans
by Yan Chen, Shuai Sun and Xiaochun Hu
Appl. Sci. 2025, 15(12), 6722; https://doi.org/10.3390/app15126722 - 16 Jun 2025
Viewed by 900
Abstract
Multi-Hop Knowledge Graph Question Answering (multi-hop KGQA) aims to obtain answers by analyzing the semantics of natural language questions and performing multi-step reasoning across multiple entities and relations in knowledge graphs. Traditional embedding-based methods map natural language questions and knowledge graphs into vector spaces for answer matching through vector operations. While these approaches have improved model performance, they face two critical challenges: the lack of clear interpretability caused by implicit reasoning mechanisms, and the semantic gap between natural language queries and structured knowledge representations. This study proposes the DRKG (Decomposed Reasoning over Knowledge Graph), a constrained multi-hop reasoning framework based on large language models (LLMs) that introduces explicit reasoning plans as logical boundary controllers. The innovation of the DRKG lies in two key aspects: First, the DRKG generates hop-constrained reasoning plans through semantic parsing based on LLMs, explicitly defining the traversal path length and entity-retrieval logic in knowledge graphs. Second, the DRKG conducts selective retrieval during knowledge graph traversal based on these reasoning plans, ensuring faithfulness to structured knowledge. We evaluate the DRKG on four datasets, and the experimental results demonstrate that the DRKG achieves 1%–5% accuracy improvements over the best baseline models. Additional ablation studies verify the effectiveness of explicit reasoning plans in enhancing interpretability while constraining path divergence. A reliability analysis further examines the impact of different parameter combinations on the DRKG’s performance. Full article
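The role of a hop-constrained reasoning plan can be sketched as follows (an assumed simplification, not the DRKG code): the plan fixes both the number of hops and the relation to follow at each hop, and retrieval only expands entities along those relations, which keeps the traversal faithful to the graph.

```python
# Tiny toy knowledge graph: (head, relation) -> list of tails (illustrative data).
KG = {
    ("Inception", "directed_by"): ["Christopher Nolan"],
    ("Christopher Nolan", "directed"): ["Interstellar", "Dunkirk"],
}

def execute_plan(start_entities, plan):
    """plan: ordered list of relation names, one per hop (hop count = len(plan))."""
    frontier = set(start_entities)
    for relation in plan:
        frontier = {tail for head in frontier
                    for tail in KG.get((head, relation), [])}
    return frontier

# "Which movies did the director of Inception direct?" -> a 2-hop plan.
print(execute_plan({"Inception"}, ["directed_by", "directed"]))
```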

36 pages, 3927 KiB  
Article
Hybrid Multi-Agent GraphRAG for E-Government: Towards a Trustworthy AI Assistant
by George Papageorgiou, Vangelis Sarlis, Manolis Maragoudakis and Christos Tjortjis
Appl. Sci. 2025, 15(11), 6315; https://doi.org/10.3390/app15116315 - 4 Jun 2025
Viewed by 2644
Abstract
As public institutions increasingly adopt AI-driven virtual assistants to support transparency and citizen engagement, the need for explainable, accurate, and context-aware language systems becomes vital. While traditional retrieval-augmented generation (RAG) frameworks effectively integrate external knowledge into Large Language Models (LLMs), their reliance on flat, unstructured document retrieval limits multi-hop reasoning and interpretability, especially with complex, structured e-government datasets. This study introduces a modular, extensible, multi-agent graph retrieval-augmented generation (GraphRAG) framework designed to enhance policy-focused question answering. This research aims to provide an overview of a hybrid multi-agent GraphRAG architecture designed for operational deployment in e-government settings to support explainable AI systems. The study focuses on how the hybrid integration of standard RAG, embedding-based retrieval, real-time web search, and LLM-generated structured graphs can optimize knowledge discovery from public e-government data, thereby reinforcing factual grounding, reducing hallucinations, and enhancing the quality of complex responses. To validate the proposed approach, we implement and evaluate the framework using the European Commission’s Press Corner as a data source, constructing graph-based knowledge representations and embeddings, and incorporating web search. This work establishes a reproducible blueprint for deploying AI systems in e-government that require structured reasoning for comprehensive and factually accurate question answering. Full article
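A minimal sketch of the hybrid retrieval idea follows; the retrievers are stubs and the function names are illustrative, not the paper's API. Evidence is gathered from vector search, the knowledge graph, and live web search, then merged and de-duplicated before being handed to the generator.

```python
def vector_retrieve(query):
    return ["press-corner chunk relevant to the query"]

def graph_retrieve(query):
    return ["(European Commission, announced, policy X)"]

def web_search(query):
    return ["recent news snippet about policy X"]

def hybrid_context(query):
    """Collect evidence from all retrievers, de-duplicating while keeping order."""
    evidence = []
    for retriever in (vector_retrieve, graph_retrieve, web_search):
        evidence.extend(retriever(query))
    return list(dict.fromkeys(evidence))

print(hybrid_context("What did the Commission announce about policy X?"))
```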

34 pages, 20058 KiB  
Article
Image First or Text First? Optimising the Sequencing of Modalities in Large Language Model Prompting and Reasoning Tasks
by Grant Wardle and Teo Sušnjak
Big Data Cogn. Comput. 2025, 9(6), 149; https://doi.org/10.3390/bdcc9060149 - 3 Jun 2025
Viewed by 1001
Abstract
Our study investigates how the sequencing of text and image inputs within multi-modal prompts affects the reasoning performance of Large Language Models (LLMs). Through empirical evaluations of three major commercial LLM vendors—OpenAI, Google, and Anthropic—alongside a user study on interaction strategies, we develop and validate practical heuristics for optimising multi-modal prompt design. Our findings reveal that modality sequencing is a critical factor influencing reasoning performance, particularly in tasks with varying cognitive load and structural complexity. For simpler tasks involving a single image, positioning the modalities directly impacts model accuracy, whereas in complex, multi-step reasoning scenarios, the sequence must align with the logical structure of inference, often outweighing the specific placement of individual modalities. Furthermore, we identify systematic challenges in multi-hop reasoning within transformer-based architectures, where models demonstrate strong early-stage inference but struggle with integrating prior contextual information in later reasoning steps. Building on these insights, we propose a set of validated, user-centred heuristics for designing effective multi-modal prompts, enhancing both reasoning accuracy and user interaction with AI systems. Our contributions inform the design and usability of interactive intelligent systems, with implications for applications in education, medical imaging, legal document analysis, and customer support. By bridging the gap between intelligent system behaviour and user interaction strategies, this study provides actionable guidance on how users can effectively structure prompts to optimise multi-modal LLM reasoning within real-world, high-stakes decision-making contexts. Full article
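The experimental variable studied here, the order of image and text parts within a prompt, reduces to something as simple as the sketch below. The message structure is a generic content-parts list, not any particular vendor's API schema.

```python
def build_prompt(question, image_url, image_first=True):
    """Assemble a single-turn multi-modal prompt with a configurable modality order."""
    text_part = {"type": "text", "text": question}
    image_part = {"type": "image_url", "image_url": {"url": image_url}}
    parts = [image_part, text_part] if image_first else [text_part, image_part]
    return [{"role": "user", "content": parts}]

# Compare image-first vs. text-first orderings of the same prompt.
print(build_prompt("What trend does the chart show?", "https://example.com/chart.png"))
print(build_prompt("What trend does the chart show?", "https://example.com/chart.png",
                   image_first=False))
```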

37 pages, 732 KiB  
Article
Document GraphRAG: Knowledge Graph Enhanced Retrieval Augmented Generation for Document Question Answering Within the Manufacturing Domain
by Simon Knollmeyer, Oğuz Caymazer and Daniel Grossmann
Electronics 2025, 14(11), 2102; https://doi.org/10.3390/electronics14112102 - 22 May 2025
Viewed by 5038
Abstract
Retrieval-Augmented Generation (RAG) systems have shown significant potential for domain-specific Question Answering (QA) tasks, although persistent challenges in retrieval precision and context selection continue to hinder their effectiveness. This study introduces Document Graph RAG (GraphRAG), a novel framework that bolsters retrieval robustness and enhances answer generation by incorporating Knowledge Graphs (KGs) built upon a document’s intrinsic structure into the RAG pipeline. Through the application of the Design Science Research methodology, we systematically design, implement, and evaluate GraphRAG, leveraging graph-based document structuring and a keyword-based semantic linking mechanism to improve retrieval quality. The evaluation, conducted on well-established datasets including SQuAD, HotpotQA, and a newly developed manufacturing dataset, demonstrates consistent performance gains over a naive RAG baseline across both retrieval and generation metrics. The results indicate that GraphRAG improves Context Relevance metrics, with task-dependent optimizations for chunk size, keyword density, and top-k retrieval further enhancing performance. Notably, multi-hop questions benefit most from GraphRAG’s structured retrieval strategy, highlighting its advantages in complex reasoning tasks. Full article
(This article belongs to the Special Issue Applications of Artificial Intelligence in Intelligent Manufacturing)
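The keyword-based semantic linking mechanism can be sketched in a few lines (assumed chunk contents and keyword sets, not the paper's pipeline): chunks that share keywords receive an edge, so retrieval can hop from a directly matched chunk to structurally related ones.

```python
from itertools import combinations

# Keyword sets per document chunk (illustrative manufacturing-flavored example).
chunk_keywords = {
    "c1": {"spindle", "torque", "calibration"},
    "c2": {"calibration", "sensor", "drift"},
    "c3": {"maintenance", "schedule"},
}

# Link any two chunks that share at least one keyword.
edges = [(a, b) for a, b in combinations(chunk_keywords, 2)
         if chunk_keywords[a] & chunk_keywords[b]]
print(edges)  # [('c1', 'c2')] -- linked through the shared keyword "calibration"
```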

27 pages, 1322 KiB  
Article
CoReaAgents: A Collaboration and Reasoning Framework Based on LLM-Powered Agents for Complex Reasoning Tasks
by Zhonghe Han, Jiaxin Wang, Xiaolu Yan, Zhiying Jiang, Yuanben Zhang, Siye Liu, Qihang Gong and Chenwei Song
Appl. Sci. 2025, 15(10), 5663; https://doi.org/10.3390/app15105663 - 19 May 2025
Viewed by 896
Abstract
As LLMs demonstrate remarkable reasoning capabilities, LLM-powered agents are seen as key to achieving AGI (Artificial General Intelligence) and are widely applied in various complex real-world scenarios. Nevertheless, existing studies still suffer from missing steps, deviated task execution, and incorrect tool selection. This paper proposes CoReaAgents, a collaboration and reasoning framework based on LLM-powered agents, comprising the Plan Agent (as a precise task planner), the Tool Agent (as a proficient tool user), and the Reflect Agent (as an objective task evaluator). These agents mimic the social division of labor and cooperate synergistically, each contributing a different specialized capability so that complex tasks can be solved together. Through this mechanism, the CoReaAgents framework is able to think ahead and execute flexibly. To verify the capability of the CoReaAgents framework, this paper conducts extensive experiments on different complex tasks such as tool learning, math reasoning, and multi-hop QA. The results show that the CoReaAgents framework outperforms various comparative methods both quantitatively and qualitatively. Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)
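The plan / tool / reflect division of labor can be outlined roughly as below. This is a hedged sketch, not the CoReaAgents implementation: `llm` is a stub for any chat-completion call, and tool selection is reduced to a naive name match.

```python
def llm(prompt: str) -> str:
    return "stub response"  # placeholder for a real LLM call

def plan_agent(task):
    """Decompose the task into an ordered list of steps."""
    return llm(f"Decompose into numbered steps:\n{task}").splitlines()

def tool_agent(step, tools):
    """Pick and invoke a tool for the step; fall back to answering directly."""
    for name, fn in tools.items():
        if name in step.lower():
            return fn(step)
    return llm(f"Answer directly: {step}")

def reflect_agent(task, results):
    """Judge whether the collected results actually solve the task."""
    return llm(f"Task: {task}\nResults: {results}\nAre these sufficient?")

tools = {"search": lambda q: f"search_results({q!r})", "calculator": lambda q: "42"}
task = "How many years separate the founding of the two companies?"
steps = plan_agent(task)
results = [tool_agent(step, tools) for step in steps]
print(reflect_agent(task, results))
```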

28 pages, 8817 KiB  
Article
A Three-Dimensional Routing Protocol for Underwater Acoustic Sensor Networks Based on Fuzzy Logic Reasoning
by Lianyu Sun, Zhiyong Liu, Juan Dong and Jiayi Wang
J. Mar. Sci. Eng. 2025, 13(4), 692; https://doi.org/10.3390/jmse13040692 - 29 Mar 2025
Viewed by 440
Abstract
Underwater acoustic sensor networks (UASNs) play an increasingly crucial role in both civilian and military fields. However, existing routing protocols primarily rely on node position information for forwarding decisions, neglecting link quality and energy efficiency. To address these limitations, we propose a fuzzy logic reasoning adaptive forwarding (FLRAF) routing protocol for three-dimensional (3D) UASNs. First, the FLRAF method redefines a conical forwarding region to prioritize nodes with greater effective advance distance, thereby reducing path deviations and minimizing the total number of hops. Unlike traditional approaches based on pipeline or hemispherical forwarding regions, this design ensures directional consistency in multihop forwarding, which improves transmission efficiency and energy utilization. Second, we design a nested fuzzy inference system for forwarding node selection. The inner inference system evaluates link quality by integrating the signal-to-noise ratio and some metrics related to the packet reception rate. This approach enhances robustness against transient fluctuations and provides a more stable estimation of link quality trends in dynamic underwater environments. The outer inference system incorporates link quality index, residual energy, and effective advance distance to rank candidate nodes. This multimetric decision model achieves a balanced trade-off between transmission reliability and energy efficiency. Simulation results confirm that the FLRAF method outperforms existing protocols under varying node densities and mobility conditions. It achieves a higher packet delivery rate, extended network lifetime, and lower energy consumption. These results demonstrate that the FLRAF method effectively addresses the challenges of energy constraints and unreliable links in 3D UASNs, making it a promising solution for adaptive and energy-efficient underwater communication. Full article
(This article belongs to the Special Issue Maritime Communication Networks and 6G Technologies)
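As a crude stand-in for the ranking stage (this is not the FLRAF nested fuzzy system; it only shows the general shape of scoring forwarding candidates by link quality, residual energy, and effective advance distance), one can combine triangular fuzzy memberships with assumed weights:

```python
def tri(x, a, b, c):
    """Triangular membership function on [a, c], peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def candidate_score(link_quality, residual_energy, advance):
    """Weighted blend of 'goodness' memberships for the three metrics (all in [0, 1])."""
    good_link = tri(link_quality, 0.2, 1.0, 1.8)
    high_energy = tri(residual_energy, 0.2, 1.0, 1.8)
    big_advance = tri(advance, 0.2, 1.0, 1.8)
    return 0.4 * good_link + 0.3 * high_energy + 0.3 * big_advance

candidates = {"node_a": (0.9, 0.8, 0.7), "node_b": (0.6, 0.9, 0.9)}
best = max(candidates, key=lambda n: candidate_score(*candidates[n]))
print(best)  # node_a in this toy example
```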

27 pages, 470 KiB  
Article
Enhancing Domain-Specific Knowledge Graph Reasoning via Metapath-Based Large Model Prompt Learning
by Ruidong Ding and Bin Zhou
Electronics 2025, 14(5), 1012; https://doi.org/10.3390/electronics14051012 - 3 Mar 2025
Cited by 2 | Viewed by 1868
Abstract
Representing domain knowledge extracted from unstructured texts using knowledge graphs supports knowledge reasoning, enabling the extraction of accurate factual information and the generation of interpretable results. However, reasoning with knowledge graphs is challenging due to their complex logical structures, which require deep semantic understanding and the ability to address uncertainties with common sense. The rapid development of large language models makes them a promising option for this problem, as their capabilities complement the determinacy of knowledge graph reasoning. However, using large language models for knowledge graph reasoning also poses challenges, including understanding graph structure and balancing semantic density against sparsity. This study proposes a domain knowledge graph reasoning method based on metapath-based large model prompt learning (DKGM-path), which uses large models for the preliminary induction of reasoning paths and completes reasoning on the knowledge graph through iterative queries. The method achieves significant gains on several public reasoning question-answering benchmark datasets, demonstrating multi-hop reasoning capabilities based on knowledge graphs. It utilizes structured data interfaces to achieve accurate and effective data access and information processing and can intuitively show the reasoning process, offering good interpretability. Full article
(This article belongs to the Section Artificial Intelligence)

25 pages, 1565 KiB  
Article
Towards a Unified Temporal and Event Logic Paradigm for Multi-Hop Path Reasoning in Knowledge Graphs
by Yajian Zeng, Xiaorong Hou, Xinrui Wang and Junying Li
Electronics 2025, 14(3), 516; https://doi.org/10.3390/electronics14030516 - 27 Jan 2025
Viewed by 1214
Abstract
Path reasoning in knowledge graphs is a pivotal task for uncovering complex relational patterns and facilitating advanced inference processes. It also holds significant potential in domains such as power electronics, where real-time reasoning over dynamic, evolving data is essential for advancing topology design and application systems. Despite its importance, traditional approaches often encounter substantial limitations when applied to dynamic, time-sensitive scenarios. These models typically fail to adequately capture intricate logical dependencies and demonstrate suboptimal performance in data-constrained environments. To address these challenges, we introduce Path-Reasoning Logic (PRlogic), an innovative framework that seamlessly integrates rule-based logical reasoning with cutting-edge neural network methodologies. PRlogic enhances path inference by leveraging a context-aware logical association network adept at handling temporal and event-driven attributes, enabling improved reasoning for dynamic systems such as IoT-based power electronics and smart grids. This adaptability allows the framework to better accommodate evolving knowledge structures, significantly improving reasoning accuracy under resource-scarce conditions. Furthermore, PRlogic employs a multi-stage refinement strategy, harmonizing logic-based rules with learned contextual representations to achieve heightened robustness and scalability. Comprehensive experiments on widely-recognized benchmark datasets validate the superiority of PRlogic, demonstrating its consistent outperformance of existing models in path reasoning tasks. These results underscore the efficacy of incorporating logic-driven mechanisms into knowledge graph reasoning and highlight PRlogic’s potential as a powerful solution for applications in dynamic data environments. Full article

34 pages, 1417 KiB  
Article
CRP-RAG: A Retrieval-Augmented Generation Framework for Supporting Complex Logical Reasoning and Knowledge Planning
by Kehan Xu, Kun Zhang, Jingyuan Li, Wei Huang and Yuanzhuo Wang
Electronics 2025, 14(1), 47; https://doi.org/10.3390/electronics14010047 - 26 Dec 2024
Cited by 3 | Viewed by 5637
Abstract
The Retrieval-Augmented Generation (RAG) framework enhances Large Language Models (LLMs) by retrieving relevant knowledge to broaden their knowledge boundaries and mitigate factual hallucinations stemming from knowledge gaps. However, the RAG Framework faces challenges in effective knowledge retrieval and utilization; invalid or misused knowledge will interfere with LLM generation, reducing reasoning efficiency and answer quality. Existing RAG methods address these issues by decomposing and expanding queries, introducing special knowledge structures, and using reasoning process evaluation and feedback. However, the linear reasoning structures limit complex thought transformations and reasoning based on intricate queries. Additionally, knowledge retrieval and utilization are decoupled from reasoning and answer generation, hindering effective knowledge support during answer generation. To address these limitations, we propose the CRP-RAG framework, which employs reasoning graphs to model complex query reasoning processes more comprehensively and accurately. CRP-RAG guides knowledge retrieval, aggregation, and evaluation through reasoning graphs, dynamically adjusting the reasoning path based on evaluation results and selecting knowledge-sufficiency paths for answer generation. CRP-RAG outperforms the best LLM and RAG baselines by 2.46 in open-domain QA, 7.43 in multi-hop reasoning, and 4.2 in factual verification. Experiments also show the superior factual consistency and robustness of CRP-RAG over existing RAG methods. Extensive analyses confirm its accurate and fact-faithful reasoning and answer generation for complex queries. Full article
(This article belongs to the Section Computer Science & Engineering)
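The way a reasoning graph can gate answer generation is easy to picture with a toy example (illustrative structures only, not the CRP-RAG data model): each node carries an evaluation of whether its retrieved knowledge is sufficient, and only root-to-leaf paths whose nodes are all knowledge-sufficient feed into generation.

```python
# Reasoning graph: question decomposed into branches (illustrative example).
graph = {
    "q": ["sub1", "sub2"],
    "sub1": ["leaf_a"],
    "sub2": ["leaf_b"],
    "leaf_a": [], "leaf_b": [],
}
# Outcome of knowledge-sufficiency evaluation per reasoning node.
sufficient = {"q": True, "sub1": True, "leaf_a": True, "sub2": False, "leaf_b": True}

def sufficient_paths(node, path=()):
    """Return root-to-leaf paths whose nodes all passed the sufficiency check."""
    path = path + (node,)
    if not sufficient[node]:
        return []                 # prune: this branch lacks knowledge support
    if not graph[node]:
        return [path]
    return [p for child in graph[node] for p in sufficient_paths(child, path)]

print(sufficient_paths("q"))  # [('q', 'sub1', 'leaf_a')]
```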

18 pages, 1071 KiB  
Article
PMHR: Path-Based Multi-Hop Reasoning Incorporating Rule-Enhanced Reinforcement Learning and KG Embeddings
by Ang Ma, Yanhua Yu, Chuan Shi, Shuai Zhen, Liang Pang and Tat-Seng Chua
Electronics 2024, 13(23), 4847; https://doi.org/10.3390/electronics13234847 - 9 Dec 2024
Viewed by 1480
Abstract
Multi-hop reasoning provides a means for inferring indirect relationships and missing information from knowledge graphs (KGs). Reinforcement learning (RL) was recently employed for multi-hop reasoning. Although RL-based methods provide explainability, they face challenges such as sparse rewards, spurious paths, large action spaces, and long training and running times. In this study, we present a novel approach that combines KG embeddings and RL strategies for multi-hop reasoning called path-based multi-hop reasoning (PMHR). We address the issues of sparse rewards and spurious paths by incorporating a well-designed reward function that combines soft rewards with rule-based rewards. The rewards are adjusted based on the target entity and the path to it. Furthermore, we perform action filtering and utilize the vectors of entities and relations acquired through KG embeddings to initialize the environment, thereby significantly reducing the runtime. Experiments involving a comprehensive performance evaluation, efficiency analysis, ablation studies, and a case study were performed. The experimental results on benchmark datasets demonstrate the effectiveness of PMHR in improving KG reasoning accuracy while preserving interpretability. Compared to existing state-of-the-art models, PMHR achieved Hit@1 improvements of 0.63%, 2.02%, and 3.17% on the UMLS, Kinship, and NELL-995 datasets, respectively. PMHR provides not only improved reasoning accuracy and explainability but also optimized computational efficiency, thereby offering a robust solution for multi-hop reasoning. Full article
(This article belongs to the Special Issue Future Technologies for Data Management, Processing and Application)
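The reward design described above can be sketched as follows (the weights and exact functional form are assumptions for illustration, not the paper's specification): a sparse hit reward is combined with a soft reward from KG-embedding similarity to the target and a bonus when the traversed relation path matches a known rule.

```python
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-9))

def terminal_reward(reached_emb, target_emb, reached_is_target,
                    path_relations, rules, w_soft=0.5, w_rule=0.3):
    hard = 1.0 if reached_is_target else 0.0        # sparse exact-hit signal
    soft = cosine(reached_emb, target_emb)          # dense embedding-based signal
    rule_bonus = 1.0 if tuple(path_relations) in rules else 0.0
    return hard + w_soft * soft + w_rule * rule_bonus

rules = {("born_in", "located_in")}
embs = np.random.default_rng(1).normal(size=(2, 16))
print(terminal_reward(embs[0], embs[1], False, ["born_in", "located_in"], rules))
```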

11 pages, 632 KiB  
Article
A Multi-Hop Reasoning Knowledge Selection Module for Dialogue Generation
by Zhiqiang Ma, Jia Liu, Biqi Xu, Kai Lv and Siyuan Guo
Electronics 2024, 13(16), 3275; https://doi.org/10.3390/electronics13163275 - 19 Aug 2024
Viewed by 1365
Abstract
Knowledge selection plays a crucial role in knowledge-driven dialogue generation methods, directly influencing the accuracy, relevance, and coherence of generated responses. Existing research often overlooks the handling of disparities between dialogue statements and external knowledge, leading to inappropriate knowledge representation in dialogue generation. To overcome this limitation, this paper proposes an innovative Multi-hop Reasoning Knowledge Selection Module (KMRKSM). Initially, multi-relational graphs containing rich composite operations are encoded to capture graph-aware representations of concepts and relationships. Subsequently, the multi-hop reasoning module dynamically infers along multiple relational paths, aggregating triple evidence to generate knowledge subgraphs closely related to dialogue history. Finally, these generated knowledge subgraphs are combined with dialogue history features and synthesized into comprehensive knowledge features by a decoder. Through automated and manual evaluations, the exceptional performance of KMRKSM in selecting appropriate knowledge is validated. This module efficiently selects knowledge matching the dialogue context through multi-hop reasoning, significantly enhancing the appropriateness of knowledge representation and providing technical support for achieving more natural and human-like dialogue systems. Full article
(This article belongs to the Special Issue New Advances in Affective Computing)

20 pages, 555 KiB  
Article
ChatGPT: The End of Online Exam Integrity?
by Teo Susnjak and Timothy R. McIntosh
Educ. Sci. 2024, 14(6), 656; https://doi.org/10.3390/educsci14060656 - 17 Jun 2024
Cited by 65 | Viewed by 9514
Abstract
This study addresses the significant challenge posed by the use of Large Language Models (LLMs) such as ChatGPT on the integrity of online examinations, focusing on how these models can undermine academic honesty by demonstrating their latent and advanced reasoning capabilities. An iterative self-reflective strategy was developed for invoking critical thinking and higher-order reasoning in LLMs when responding to complex multimodal exam questions involving both visual and textual data. The proposed strategy was demonstrated and evaluated on real exam questions by subject experts and the performance of ChatGPT (GPT-4) with vision was estimated on an additional dataset of 600 text descriptions of multimodal exam questions. The results indicate that the proposed self-reflective strategy can invoke latent multi-hop reasoning capabilities within LLMs, effectively steering them towards correct answers by integrating critical thinking from each modality into the final response. Meanwhile, ChatGPT demonstrated considerable proficiency in being able to answer multimodal exam questions across 12 subjects. These findings challenge prior assertions about the limitations of LLMs in multimodal reasoning and emphasise the need for robust online exam security measures such as advanced proctoring systems and more sophisticated multimodal exam questions to mitigate potential academic misconduct enabled by AI technologies. Full article
(This article belongs to the Section Higher Education)
