Article

A Spatiotemporal–Semantic Coupling Intelligent Q&A Method for Land Use Approval Based on Knowledge Graphs and Intelligent Agents

1 School of Geosciences and Info-Physics, Central South University, Changsha 410083, China
2 Hunan Geospatial Information Engineering and Technology Research Center, Changsha 410018, China
3 School of Geography and Environment, Jiangxi Normal University, Nanchang 330022, China
4 School of New Energy Equipment, Zhejiang College of Security Technology, Wenzhou 325016, China
5 Wenzhou Future City Research Institute, Wenzhou 325016, China
* Authors to whom correspondence should be addressed.
Appl. Sci. 2025, 15(16), 9012; https://doi.org/10.3390/app15169012
Submission received: 30 April 2025 / Revised: 22 July 2025 / Accepted: 13 August 2025 / Published: 15 August 2025

Abstract

The rapid retrieval and precise acquisition of land use approval information are crucial for enhancing the efficiency and quality of land use approval, as well as for promoting the intelligent transformation of land use approval processes. As an advanced retrieval method, question-answering (Q&A) technology has become a core technical support for addressing current issues such as low approval efficiency and difficulty in obtaining information. However, existing Q&A technologies suffer from significant hallucination problems and limitations in considering spatiotemporal factors in the land use approval domain. To effectively address these issues, this study proposes a spatiotemporal–semantic coupling intelligent Q&A method for land use approval based on knowledge graphs (KGs) and intelligent agent technology, aiming to enhance the efficiency and quality of land use approval. First, a land use approval knowledge graph (LUAKG) is constructed, systematically integrating domain knowledge such as policy clauses, legal regulations, and approval procedures. Then, by combining large language models (LLMs) and intelligent agent technology, a spatiotemporal–semantic coupling Q&A framework is designed. Through the use of spatiotemporal analysis tools, this framework can comprehensively consider spatial, temporal, and semantic factors when handling land approval tasks, enabling dynamic decision-making and precise reasoning. The results show that, compared to traditional Q&A based on LLMs and Q&A based on retrieval-augmented generation (RAG), the proposed method improves accuracy by 16% and 9%, respectively, in general knowledge Q&A tasks. In the project review Q&A task, F1 score and accuracy increase by 2% and 9%, respectively, compared to RAG-QA. In particular, under spatiotemporal–semantic multidimensional analysis, the improvements in F1 score and accuracy range from 2–6% and 7–10%, respectively.

1. Introduction

As an essential part of China’s land use regulation system, land use approval refers to the multi-agency process of converting agricultural or collectively owned land into state-owned construction land in accordance with legal procedures. It involves dual transformations in both land use type and ownership. As illustrated in Figure 1, the land use approval workflow in practice engages multi-level approval from national, provincial, municipal, to county authorities (vertical coordination), and requires joint reviews across various departments, including planning adjustment, land consolidation, arable land compensation, and cadastral management (horizontal coordination). The outcomes of land use approval directly support land supply, development, utilization, and law enforcement. Land use approval plays a vital role in supporting territorial spatial governance, promoting urban–rural development, and ensuring the efficient allocation of land resources [1]. However, traditional land use approval processes are often fragmented, bureaucratically complex, and heavily reliant on manual policy interpretation. In response to China’s ongoing “streamline administration, delegate power, strengthen regulation, and improve services” reform, intelligent approaches are urgently needed to improve information retrieval efficiency, enhance policy understanding, and standardize the entire approval workflow [2].
In this context, Q&A technology has emerged as a key pathway to improving the efficiency and quality of information services. By enabling natural language interaction, Q&A systems can accurately understand user intent and intelligently retrieve, extract, and generate highly relevant and accurate responses from knowledge bases or corpora [3,4,5]. As a significant form of knowledge services, Q&A systems have seen widespread application in government services, healthcare, financial consulting, and other domains [6]. Through natural language interaction, these systems help users quickly acquire professional and precise auxiliary information, thereby enhancing service efficiency and decision-making quality [7].
In recent years, LLMs such as InstructGPT [8], GPT-4 [9], LlaMA [10], and ChatGLM [11] have demonstrated exceptional capabilities in natural language understanding and generation. These models can learn language patterns from vast textual corpora and handle increasingly complex natural language processing tasks [12], providing robust support for the development of Q&A systems [13]. Domain-specific LLMs have also emerged—such as FinGPT in finance [14] and ChatLaw in legal services [15]—which have accelerated the application of Q&A in specialized fields. For instance, Frisoni [16] improved response accuracy in open-domain medical Q&A by optimizing context input strategies, while Wang Zhe [17] focused on intelligent Q&A for flood emergency management to enhance the efficiency and accuracy of response teams. These LLMs not only comprehend and answer user queries effectively but also offer personalized suggestions and solutions based on user needs, significantly improving work quality and operational efficiency.
However, LLMs still suffer from the hallucination problem when dealing with highly specialized knowledge. This refers to the generation of information that appears plausible but is actually incorrect or fabricated [18,19]. The root causes include the presence of unreliable or biased content in the training data, which introduces errors during generation [20]; additionally, inherent limitations in the information processing mechanisms of LLMs at different levels may lead to deviations in understanding and generation [21]. The Law of Knowledge Overshadowing [22] further posits that LLMs tend to prioritize frequently occurring knowledge during content generation, thereby overshadowing low-frequency yet critical factual information. This theory offers an explanation for the common issues of irrelevant or logically flawed responses in domain-specific applications.
To mitigate hallucinations, researchers have proposed knowledge-enhanced techniques, such as RAG. RAG incorporates external knowledge sources during the Q&A process to improve the verifiability and contextual relevance of generated content [23,24]. For example, the SPOCK [25] system retrieves content from textbooks to provide more accurate answers; in biomedicine, RAG is employed to introduce trustworthy background knowledge, improving the precision and traceability of specialized terminology explanations [26]. He [27] proposed an innovative RAG framework that dynamically identifies inter-document relevance to enhance retrieval recall and generation quality, while Yang [28] developed IM-RAG, which integrates an inner monologue mechanism for multi-round knowledge retrieval and enhancement, effectively addressing hallucinations and adaptability issues in static knowledge environments. Despite these advancements, applying RAG in complex government domains like land use approval—characterized by intricate semantics, spatial constraints, and temporal requirements—remains challenging and falls short of fully supporting intelligent approval decision-making.
In light of these limitations, this study introduces an LLM-based agent framework enhanced with KG support to improve the intelligence level of land use approval processes. Agents refer to entities capable of perceiving the environment and autonomously making and executing decisions [29], leveraging historical experiences and knowledge to act in a goal-directed manner [30]. By integrating agents with an LLM, the model gains both knowledge-driven semantic reasoning capabilities and dynamic task planning for complex scenarios, thereby enhancing the accuracy, interpretability, and practicality of Q&A results [31,32]. Specifically, this paper proposes a land use approval (LUA) Q&A method that combines KGs and agent technologies, constructing an LUAKG covering key knowledge elements such as policies and regulations, approval procedures, and administrative departments. Additionally, an LLM-based agent module is introduced to handle semantic understanding, task decomposition, and dynamic reasoning. This agent, equipped with task-oriented autonomous action capabilities, can invoke geospatial analysis tools to perform critical tasks such as ecological redline avoidance analysis and land legality assessment. Ultimately, it enables spatiotemporal–semantic multidimensional evaluation and intelligent support for approval-related queries.
The key contributions of this study are as follows:
(1)
To address the hallucination issues encountered by LLMs in interpreting policy semantics, a knowledge graph-based enhancement mechanism is proposed, enabling precise mapping between policy texts and approval semantics and significantly improving the traceability and credibility of Q&A outputs.
(2)
To resolve the challenges posed by the lack of temporal and spatial elements in approval scenarios, agent technology is introduced to autonomously invoke analysis tools for time-sensitive assessments and spatial compliance evaluations, thereby enhancing the precision and intelligence of approval suggestion generation.
The remainder of this paper is organized as follows. Section 2 describes the research methods used. Section 3 details the experimental results and analysis. Section 4 concludes the paper and outlines future work.

2. Methodology

To address the complex decision support requirements in land use approval, this study proposes an intelligent Q&A method based on KGs and intelligent agent technology. A knowledge base is a structured repository that organizes domain concepts, entities, and their relationships to support semantic reasoning [33]. In this study, a unified knowledge graph architecture is designed, comprising two sub-knowledge bases tailored to differentiated knowledge needs: a General Knowledge Base (General KB) that models policies, procedures, and approval tasks; and a Review Knowledge Base (Review KB) that encodes project-level rules, constraints, and typical review logic. This layered structure supports both general policy interpretation and rule-based reasoning for specific cases, improving both knowledge coverage and reasoning precision. The overall framework is illustrated in Figure 2.

2.1. The Construction of Land Use Approval Knowledge Graph

Grounded in the business scenario of land use approval, this study systematically outlines the full process of land use submission and approval and constructs an LUAKG. The LUAKG consists of two main components: the General KB, which captures domain-wide policy and procedural knowledge, and the Review KB, which focuses on project-specific evaluation rules.

2.1.1. The Construction of Knowledge Ontology

Ontology is a tool for the formal and standardized modeling of concept classes and their interrelations within a specific domain. Its aim is to establish a unified and shareable knowledge representation system by clearly defining concepts, attributes, and relationships [34]. The ontology adopted in the LUAKG provides a formalized and structured representation of core concepts, relationships, and rules involved in the approval process. It is constructed through a combination of top-down analysis of policy and regulatory documents (e.g., Land Administration Law of the People’s Republic of China) and bottom-up extraction from practical approval workflows and application materials, ensuring both theoretical rigor and practical applicability. The ontology structure consists of two parts: the general knowledge ontology, which models approval procedures and regulatory frameworks, and the review knowledge ontology, which focuses on semantic, temporal, and spatial rules to support reasoning in project-specific reviews. The ontology structure is illustrated in Figure 3.

2.1.2. Knowledge Extraction

To accommodate the heterogeneous and multi-source nature of land use approval data, the LUAKG adopts differentiated processing pipelines tailored to each data type.
The corpus for the General KB consists of 242 policy documents and a comprehensive land use approval process manual, all publicly released by the Ministry of Natural Resources and various levels of local government authorities. All documents were standardized into PDF format, and their textual content was extracted using PyMuPDF. After removing HTML tags, blank lines, and special characters using regular expressions, the PySBD [35] tool was applied to segment the text into fine-grained semantic units while preserving the structural hierarchy of chapters, paragraphs, and clauses. Based on rule-based and pattern-driven designs, key sentence structures—such as “required submission materials” and “approved by [department]”—were automatically identified, resulting in a land approval corpus containing 612 structured triples. Table 1 summarizes the annotated entity types.
Leveraging domain-specific semantics, a pre-trained BERT model [36] was used to identify high-frequency domain concepts such as “land use planning permit” and “feasibility study report.” To assess the importance of each keyword, the TF-IDF [37] method was applied, defined as follows:
TFIDF(t, d) = TF(t, d) × log(N / DF(t))
Here, TF(t, d) denotes the frequency of term t in document d, DF(t) is the number of documents containing the term t, and N is the total number of documents.
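Equation (1) can be sketched in a few lines of Python. The toy corpus below is purely illustrative, standing in for the 242-document policy corpus described above:

```python
import math

def tf_idf(term: str, doc: list, corpus: list) -> float:
    """TF-IDF of `term` in `doc`, per Equation (1): TF(t, d) * log(N / DF(t))."""
    tf = doc.count(term)                           # term frequency TF(t, d)
    df = sum(1 for d in corpus if term in d)       # document frequency DF(t)
    n = len(corpus)                                # total number of documents N
    if tf == 0 or df == 0:
        return 0.0
    return tf * math.log(n / df)

# Toy tokenized documents standing in for the policy corpus.
corpus = [
    ["land", "use", "planning", "permit"],
    ["feasibility", "study", "report"],
    ["land", "approval", "procedure"],
]
score = tf_idf("permit", corpus[0], corpus)        # appears in 1 of 3 documents
```

Terms that occur in fewer documents receive a higher weight, which is how domain keywords such as "land use planning permit" are surfaced.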
For semi-structured data such as tables, specialized data processing tools (e.g., pandas) are employed to extract relevant fields. Manual verification by domain experts further enhances the completeness and semantic accuracy of the extracted knowledge.
Based on the extracted high-quality entities, and guided by the relationships defined in the land use approval ontology—such as “involves material,” “approval authority,” “related policy,” and “preceding procedure”—a large language model was employed to perform deep semantic understanding of the entities within their contextual environments. This enabled the identification of hierarchical, dependency, and constraint-based semantic relations between entities (see Table 2).
The Review KB corpus comprises more than 130 publicly available land use approval cases, administrative documents, and policy guidelines sourced from the official websites of the Ministry of Natural Resources, the Ministry of Housing and Urban-Rural Development, and local government portals. A focused web crawler was used for automated data acquisition. For scanned PDF documents, OCR was applied to extract textual content, followed by text segmentation, tag classification, and noise cleaning using regular expressions.
Experts manually annotated key items in the approval process—such as location compliance, redline boundary control, and quota matching—and extracted structured rule triples in the format of “Item–Requirement–Criterion.” A multi-expert collaborative review mechanism was then employed to validate and refine the extracted rules, unify terminology, and resolve inconsistencies, thereby enhancing the accuracy and reliability of the Review KB. A portion of the corpus was reserved for subsequent performance evaluation of the system to verify its effectiveness on real-world review tasks.

2.1.3. Knowledge Fusion and Storage

To align and fuse multi-source knowledge, the General KB further undergoes entity alignment and semantic fusion: the Sentence-BERT [38] model generates vector representations of entities, and cosine similarity between these vectors is used to resolve semantic redundancy and ambiguity during alignment. Combined with the general land use ontology and the output of the large model, complex semantic relationships between policy terms are established, such as hypernym–hyponym, coordinate, and conditional-dependency relations; for example, the conditional dependency between "construction land approval" and "permanent basic farmland protection".
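The alignment step can be illustrated as follows; the three-dimensional vectors are hypothetical stand-ins for real Sentence-BERT embeddings, and the 0.9 similarity threshold is an assumed value:

```python
def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = sum(a * a for a in u) ** 0.5
    nv = sum(b * b for b in v) ** 0.5
    return dot / (nu * nv) if nu and nv else 0.0

def align_entities(embeddings: dict, threshold: float = 0.9):
    """Pair up entity names whose embedding similarity exceeds `threshold`."""
    names = list(embeddings)
    merged = []
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            if cosine(embeddings[a], embeddings[b]) > threshold:
                merged.append((a, b))
    return merged

# Hypothetical 3-d vectors standing in for Sentence-BERT embeddings.
vecs = {
    "construction land approval": [0.9, 0.1, 0.0],
    "approval of construction land": [0.88, 0.12, 0.01],
    "basic farmland protection": [0.1, 0.9, 0.3],
}
pairs = align_entities(vecs)
```

The two paraphrases of the same approval concept are merged, while the unrelated farmland term stays separate.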
To achieve unified management and efficient retrieval of structured knowledge, this study developed the LUAKG and adopted the Neo4j graph database for knowledge storage. In this framework, entities, attributes, and relationships in land use approval triples are modeled as nodes and edges. The LUAKG systematically integrates fragmented knowledge from the land use approval domain, including policy texts, approval procedures, and structured review rules. The graph currently comprises 3326 nodes, 3311 edges, and over 200 review rules. The construction process involves data cleaning, entity and relation extraction (knowledge extraction), and knowledge alignment and fusion. This knowledge graph provides a solid foundation for the subsequent intelligent question-answering system and enhances the overall level of intelligence in land use approval services.

2.2. Establish Intelligent Q&A Methodology

This study proposes an intelligent Q&A method for land use approval based on KGs and intelligent agent technology, which enables accurate responses to land use approval queries through autonomous question classification, dynamic knowledge routing, and intelligent tool invocation. The intelligent agent integrates question parsing, knowledge retrieval, and spatiotemporal analysis functions; its core process is shown in Figure 4.

2.2.1. Autonomous Problem Categorization Module

To effectively handle diverse user queries—ranging from general policy consultation to project-specific compliance checks—this method first employs an autonomous categorization process. It determines whether a question belongs to the General Knowledge Category (GKC) or the Project Review Category (PRC), thereby guiding the selection of the appropriate knowledge base and reasoning tools:
  • Questions of GKC: This class covers general knowledge questions on policies and regulations, approval processes, document formatting requirements, and so on;
  • Questions of PRC: This class deals with project-specific compliance reviews, such as policy applicability, spatiotemporal constraints, and other specific review issues.
The classification process adopts a combined strategy of semantic understanding and keyword matching to enable automatic categorization of user questions. Specifically, we manually classify a large number of real user queries based on the land use approval corpus, extracting sets of indicative keywords for each category as classification features. For instance, general knowledge questions often contain terms such as “process,” “policy,” “regulations,” and “materials,” while project review questions frequently include judgmental or context-specific terms such as “occupy,” “site selection opinion,” “compensatory designation,” and “red line.” Based on this, we construct the following classification function:
f_category(Q) = argmax_{c ∈ C} count(c, Q)
In Equation (2), Q denotes the user input question, C denotes the set of all possible categories, and count(c, Q) denotes the number of category-c indicative keywords that appear in question Q.
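A minimal sketch of the classification function in Equation (2). The keyword sets below are illustrative subsets of the features described above, not the full lists extracted from the corpus; ties default to the first category:

```python
# Indicative keyword subsets per category (illustrative, not exhaustive).
KEYWORDS = {
    "GKC": {"process", "policy", "regulations", "materials"},
    "PRC": {"occupy", "site selection opinion", "compensatory designation", "red line"},
}

def categorize(question: str) -> str:
    """Equation (2): return the category whose keywords best match the question."""
    counts = {c: sum(1 for kw in kws if kw in question)
              for c, kws in KEYWORDS.items()}
    return max(counts, key=counts.get)

label = categorize("Does the project occupy land inside the ecological red line?")
```

In practice this keyword vote is combined with semantic understanding by the LLM, as described above.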

2.2.2. Dynamic Knowledge Retrieval Mechanisms

For the categorized questions, the intelligent agent selects the appropriate knowledge base for knowledge retrieval. This step relies on vector retrieval and retrieval ranking techniques to ensure that relevant information can be accurately identified from the knowledge base.
1. Vector Retrieval
The intelligent agent first converts the user question Q into a vector representation E_Q, then traverses all documents in the corresponding knowledge base, converting each document D into its vector representation E_D. Next, the cosine similarity between the question vector and each document vector is computed, and documents whose similarity exceeds the threshold s_t are retained as follows:
sim(Q, D) = (E_Q · E_D) / (‖E_Q‖ ‖E_D‖)
D_retrieved = {D | Similarity(E_Q, E_D) > s_t}
In Equations (3) and (4), EQ and ED denote the embedding vectors of the input questions and knowledge base documents, respectively, and the computed similarity scores are used to filter the most matching knowledge entries.
2. Knowledge Retrieval Ranking
To improve the relevance of the retrieval results, the intelligent agent computes a weighted combination of the vector matching score and the keyword matching score. The retrieval results are then sorted by combined score, and the N most relevant documents are selected as follows:
Score(D) = ω_v · Similarity(E_Q, E_D) + ω_k · KeywordMatch(Q, D)
In Equation (5), ωv and ωk denote the weight of vector matching and the weight of keyword matching, respectively, and KeywordMatch(Q, D) is the text matching score calculated based on keyword similarity.
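Equations (3)–(5) can be combined into one retrieval-and-ranking sketch. The embeddings, keyword scores, weights ω_v = 0.7 and ω_k = 0.3, and threshold s_t = 0.2 are all assumed illustrative values, not the system's tuned parameters:

```python
def cosine(u, v):
    """Equation (3): cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = sum(a * a for a in u) ** 0.5
    nv = sum(b * b for b in v) ** 0.5
    return dot / (nu * nv) if nu and nv else 0.0

def rank_documents(e_q, docs, w_v=0.7, w_k=0.3, s_t=0.2, top_n=2):
    """Filter by the threshold in Eq. (4), score by Eq. (5), return top-N ids."""
    scored = []
    for doc_id, (e_d, kw_score) in docs.items():
        sim = cosine(e_q, e_d)
        if sim > s_t:                               # Eq. (4): similarity filter
            scored.append((w_v * sim + w_k * kw_score, doc_id))
    scored.sort(reverse=True)
    return [doc_id for _, doc_id in scored[:top_n]]

# Hypothetical document embeddings and precomputed keyword-match scores.
docs = {
    "policy_A": ([1.0, 0.0], 0.9),
    "policy_B": ([0.6, 0.8], 0.1),
    "manual_C": ([0.0, 1.0], 0.0),
}
top = rank_documents([1.0, 0.1], docs)
```

Here `manual_C` falls below the similarity threshold and is filtered out before ranking.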

2.2.3. Spatiotemporal Compliance Analysis Toolset

In the Project Review Category, spatial and temporal compliance is often central to the user’s query. Based on policy regulations, land use approvals are subject to strict temporal and spatial constraints—such as declaration within a valid time window, avoidance of protected zones, and spatial exclusivity with neighboring projects. To support automated and precise compliance checks, we designed a modular spatiotemporal analysis toolset. This toolset comprises five typical analysis functions as follows:
1. Time Interval Validation Module
When a project involves time compliance checks, the intelligent agent parses natural language time expressions from user input (e.g., “submitted within 2023”) and standardizes them into timestamp format. It then compares these timestamps with the policy-defined valid time window.
Suppose the policy's effective time window is [t_s^ra, t_e^ra] and the project submission time is t_proj; the compliance check verifies whether t_proj ∈ [t_s^ra, t_e^ra].
If the project is non-compliant, the agent also outputs the deviation in days to support interpretability during the review process.
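A minimal sketch of the time-window check, assuming the dates have already been parsed from natural language expressions (the parsing step itself is omitted); the window and submission date below are hypothetical:

```python
from datetime import date

def check_time_window(t_proj: date, t_start: date, t_end: date):
    """Return (compliant, deviation_days); deviation is 0 inside the window."""
    if t_start <= t_proj <= t_end:
        return True, 0
    # Days outside the window, reported to support interpretability.
    deviation = (t_start - t_proj).days if t_proj < t_start else (t_proj - t_end).days
    return False, deviation

# Hypothetical policy window: submissions accepted during 2023.
ok, dev = check_time_window(date(2024, 1, 15), date(2023, 1, 1), date(2023, 12, 31))
```

A submission on 15 January 2024 is flagged as non-compliant, 15 days past the window's end.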
2. Spatial Overlay Analysis Module
To determine whether a project encroaches on restricted land types (e.g., permanent basic farmland, ecological redlines, scenic areas), the agent performs spatial overlay analysis using GIS functions to compute intersections between the project area and restricted zones.
Assuming the project parcel is represented as polygon P_proj and the restricted zone as P_res, the overlay area is A_int = Area(P_proj ∩ P_res). If A_int > 0, the output indicates "encroachment detected," and the corresponding occupation ratio is also returned as follows:
R = A_int / Area(P_proj)
This result can inform decisions such as relocating the site or requiring an environmental impact assessment.
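The overlay check can be sketched as follows. To stay self-contained, the example uses axis-aligned rectangles (xmin, ymin, xmax, ymax) instead of arbitrary GIS polygons; a production system would compute the intersection with a geospatial library:

```python
def rect_intersection_area(a, b):
    """Overlap area of two axis-aligned rectangles (xmin, ymin, xmax, ymax)."""
    w = min(a[2], b[2]) - max(a[0], b[0])
    h = min(a[3], b[3]) - max(a[1], b[1])
    return max(w, 0) * max(h, 0)

def overlay_check(p_proj, p_res):
    """Encroachment flag plus occupation ratio R = A_int / Area(P_proj)."""
    a_int = rect_intersection_area(p_proj, p_res)
    area_proj = (p_proj[2] - p_proj[0]) * (p_proj[3] - p_proj[1])
    return a_int > 0, a_int / area_proj

# Project parcel vs a hypothetical ecological-redline zone.
encroaches, ratio = overlay_check((0, 0, 10, 10), (8, 8, 20, 20))
```

Here 4 of the parcel's 100 area units fall inside the restricted zone, giving an occupation ratio of 4%.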
3. Spatial Attribute Computation Module
Some policies mandate that project parcels meet specific topographical conditions, such as slope under 25° or elevation below 2000 m. The Q&A system supports computation of spatial attributes including parcel area, perimeter, etc., to assist in land classification and minimum area compliance checks.
Once the polygon is verified to be closed, the area A is calculated using standard geometric methods.
A = (1/2) |Σ_{i=1}^{n} (x_{i+1} y_i − x_i y_{i+1})|, where (x_{n+1}, y_{n+1}) = (x_1, y_1)
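The shoelace formula above translates directly into Python; the 4 × 3 rectangle is a toy parcel:

```python
def shoelace_area(vertices):
    """Polygon area via the shoelace formula; vertices are (x, y) pairs,
    with the closing edge (x_{n+1}, y_{n+1}) = (x_1, y_1) handled by wrap-around."""
    n = len(vertices)
    s = 0.0
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]
        s += x2 * y1 - x1 * y2
    return abs(s) / 2.0

# A 4 x 3 rectangular parcel.
area = shoelace_area([(0, 0), (4, 0), (4, 3), (0, 3)])
```

The absolute value makes the result independent of whether the boundary is traversed clockwise or counterclockwise.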
4. Spatial Relationship Analysis Module
This module ensures that the project site is not located too close to existing or planned developments in violation of minimum separation requirements. In project evaluation, common questions include the following: "Is the project adjacent to area X?" or "Does it contain area Y?" The agent uses topological spatial relationship models to analyze the spatial connection between two parcels.
Given two polygons P1 and P2, the spatial relationship is computed using a function such as the following:
relation_type = f_topo(P_1, P_2)
This function supports tasks such as village planning, urban–rural boundary delineation, and other geographic reasoning tasks essential to intelligent spatial Q&A.
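A coarse sketch of f_topo on axis-aligned rectangles. A real implementation would evaluate full topological predicates (e.g., the DE-9IM model) on arbitrary polygons, so the rectangle representation and relation names here are simplifying assumptions:

```python
def f_topo(a, b):
    """Coarse topological relation between two axis-aligned rectangles
    (xmin, ymin, xmax, ymax): disjoint, contains, within, touches, or overlaps."""
    if a[2] < b[0] or b[2] < a[0] or a[3] < b[1] or b[3] < a[1]:
        return "disjoint"
    if a[0] <= b[0] and a[1] <= b[1] and a[2] >= b[2] and a[3] >= b[3]:
        return "contains"
    if b[0] <= a[0] and b[1] <= a[1] and b[2] >= a[2] and b[3] >= a[3]:
        return "within"
    w = min(a[2], b[2]) - max(a[0], b[0])
    h = min(a[3], b[3]) - max(a[1], b[1])
    if w == 0 or h == 0:          # boundaries meet but interiors do not overlap
        return "touches"
    return "overlaps"

relation = f_topo((0, 0, 10, 10), (10, 0, 20, 10))   # parcels share only an edge
```

An adjacency question such as "Is the project adjacent to area X?" maps to checking for the "touches" relation.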
5. Temporal Sequence Logic Validation Module
Certain approval processes involve verifying the correct chronological order of events, e.g., “project initiation must precede land approval,” or “land acquisition announcement must be published before construction permit application.” The agent automatically extracts time pairs for related events and validates their order.
If a time pair (t1,t2) is given, and t1 is required to precede t2, the validation function checks the following:
is_valid_order(t_1, t_2) = 1 if t_1 < t_2; 0 otherwise
If the sequence is invalid, the agent highlights the conflicting events and provides corrective suggestions, thus improving the procedural compliance and legality of the project.
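The order check extends naturally to a chain of events; the event names and dates below are hypothetical:

```python
from datetime import date

def is_valid_order(t1: date, t2: date) -> bool:
    """The validation function above: true iff t1 precedes t2."""
    return t1 < t2

def validate_sequence(events):
    """Check a chronological chain of (name, date) events; return conflicting pairs."""
    conflicts = []
    for (name1, t1), (name2, t2) in zip(events, events[1:]):
        if not is_valid_order(t1, t2):
            conflicts.append((name1, name2))
    return conflicts

# Hypothetical approval timeline with one out-of-order event.
events = [
    ("project initiation", date(2023, 3, 1)),
    ("land approval", date(2023, 5, 20)),
    ("construction permit application", date(2023, 5, 1)),
]
bad = validate_sequence(events)
```

The returned pairs identify exactly which events the agent should highlight for correction.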
For complex projects, the intelligent agent automatically selects and combines appropriate analysis modules based on the intent of the question. It performs a comprehensive spatiotemporal compliance assessment to ensure the project meets all legal spatial and temporal constraints. This integrated approach significantly improves both the accuracy and interpretability of the review process.

2.2.4. Prompt Construction and Answer Generation

After completing knowledge retrieval and spatiotemporal analysis, the intelligent agent integrates the user question, the relevant knowledge, and the analysis results, and inputs them into the LLM through a carefully constructed prompt to generate the final answer.
A sample prompt design is shown below (see Figure 5):

3. Results and Discussion

This study conducted an experimental analysis of the policy documents related to land use approval in Hunan Province, China, combined with the land use approval business workflow and example data from three major projects. To evaluate method performance, this study uses a dataset containing 500 Q&A pairs and conducts comparative tests to assess different methods on the tasks of general knowledge Q&A and project review judgment. The experimental setup is described in detail below:

3.1. Experiment Setup

The dataset for this experiment contains policy documents related to land use approval in Hunan Province, business process data, example data from three major projects, and 500 Q&A pairs used to test the Q&A methods, covering two major categories: general knowledge Q&A and project review judgment. Specific details are given below:

3.1.1. Q&A Dataset

In this experiment, the dataset used to evaluate the performance of the Q&A methods consists of 500 Q&A pairs covering two categories: GKC and PRC. To ensure the representativeness and comprehensiveness of the dataset, the questions are subdivided into subcategories according to the nature of the tasks so as to cover different types of land use approval questions. Specific details are given below:
1. General Knowledge Class (200 questions): These questions are related to the policies, regulations, and approval process of land use submission and approval, and are mainly divided into the following five categories (see Table 3):
2. Project Review Class (300 questions): These questions are related to the compliance review of land use submission projects, covering three major categories: semantic class (SeC), spatial class (SpC), and temporal class (TC) (see Table 4):
The question-answering dataset used in this study was generated through a semi-manual construction process. Initially, domain experts designed question templates covering both general land use knowledge and project-specific review rules. These templates were then used by a large language model (LLM) to generate reference answers with clear semantics and standardized formats. All generated QA pairs were manually reviewed and refined to ensure high quality. The dataset is grounded in real-world scenarios and includes a variety of question types and complexity levels. Each question is paired with a corresponding reference answer (see Table 5), enabling the evaluation of QA methods across different categories.

3.1.2. Method Comparison

To comprehensively evaluate system performance, three different methods are compared in this study in order to analyze the performance and advantages of each in the intelligent Q&A task of land use approval. The specific methods are as follows:
  • LLM-QA: This approach uses the large language model Qwen-plus for question comprehension and answer generation. As the baseline, LLM-QA directly performs semantic comprehension of user input questions and generates answers through the generative model; it is mainly examined for its performance on general knowledge questions and simple tasks.
  • RAG-QA: This approach uses a retrieval-augmented generation strategy, retrieving relevant content from a knowledge base and generating answers with the large language model Qwen-plus. RAG-QA enhances the model's knowledge acquisition capability through the retrieval module, bridging the knowledge gap of the large language model with information from the external knowledge base and thereby improving the accuracy and quality of generated answers.
  • KG-Agent-QA: This is the method proposed in this study, combining KGs with intelligent agent technology for the intelligent Q&A task of land use approval. KG-Agent-QA provides structured domain knowledge through KGs, and the intelligent agent automatically selects the appropriate knowledge base according to the question type and performs spatiotemporal analysis. The method not only responds flexibly to different types of questions but also improves project review and complex decision support through spatiotemporal–semantic multidimensional analysis.
Through the comparison of these three methods, this study comprehensively evaluates the effectiveness of different techniques in handling land use approval problems; these comparative experiments provide an important theoretical basis and practical guidance for further optimizing the Q&A system.

3.2. Evaluation of Method Performance

To comprehensively evaluate the performance of different methods in the Q&A task, two scoring methods were used in this study: manual scoring and LLM autonomous scoring. Manual scoring is based on four common evaluation metrics (accuracy, precision, recall, and F1) and focuses on comparing generated answers against correct answers. LLM autonomous scoring rates the answers produced by each method in terms of accuracy, relevance, and fluency; the scores are then averaged across all answers to evaluate each method. The specific definitions and calculations of these indicators are as follows:
  • Accuracy [39]: One of the basic metrics for evaluating the overall performance of a system, defined as the percentage of questions that are answered correctly by the system.
$$\text{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}$$
In Equation (8), TP denotes the number of correct answers returned by the system, FP denotes the number of incorrect answers returned by the system, TN denotes the number of incorrect answers not returned by the system, and FN denotes the number of correct answers not returned by the system.
  • Precision [40]: a measure of the proportion of answers returned by the system that are actually correct.
$$\text{Precision} = \frac{TP}{TP + FP}$$
  • Recall [41]: measures the proportion of actual correct answers that are returned by the system.
$$\text{Recall} = \frac{TP}{TP + FN}$$
  • F1 [42]: the harmonic mean of precision and recall, used to jointly evaluate the precision and recall capabilities of the system.
$$F1 = \frac{2 \times \text{Precision} \times \text{Recall}}{\text{Precision} + \text{Recall}}$$
The F1 value combines precision and recall into a single balanced metric, making it particularly suitable for tasks that require a balance between accuracy and comprehensiveness.
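As a concrete illustration, the four manual-scoring metrics can be computed directly from the TP/FP/TN/FN counts defined above; the counts in this sketch are invented for demonstration and are not taken from the experiments.

```python
# Compute accuracy, precision, recall, and F1 from confusion counts,
# following the standard definitions used in the manual-scoring metrics.

def qa_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return {"accuracy": accuracy, "precision": precision, "recall": recall, "f1": f1}

# Made-up counts for a batch of 100 questions:
m = qa_metrics(tp=90, fp=6, tn=0, fn=4)
print({k: round(v, 3) for k, v in m.items()})
```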
  • LLM Autonomous Scoring [43,44]:
LLM autonomous scoring is a method for automatically assessing the quality of generated answers based on a large language model. By independently scoring each generated answer for accuracy, relevance, and fluency, LLM autonomous scoring provides a comprehensive measure of answer quality.
Specifically, accuracy reflects how well the generated answer matches the standard answer, relevance measures whether the generated answer effectively solves the problem, and fluency assesses how natural the answer is in terms of grammar and expression:
$$\text{accuracy\_ave} = \frac{1}{N}\sum_{i=1}^{N}\text{accuracy}_i$$
$$\text{relevance\_ave} = \frac{1}{N}\sum_{i=1}^{N}\text{relevance}_i$$
$$\text{fluency\_ave} = \frac{1}{N}\sum_{i=1}^{N}\text{fluency}_i$$
In the above equations, N is the total number of questions, and accuracy_i, relevance_i, and fluency_i are the scores of the i-th question for accuracy, relevance, and fluency, respectively. LLM autonomous scoring thus provides an automated, systematic solution for evaluating Q&A systems; it is especially suitable for large-scale datasets and measures system performance comprehensively across the accuracy, relevance, and fluency dimensions.
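A minimal sketch of the aggregation step: given per-question judge scores (the numbers below are invented, on a 0–1 scale), each dimension is averaged over the N questions as in the equations above.

```python
# Average per-question judge scores over each evaluation dimension.
from statistics import mean

judge_scores = [  # one dict per question; values are hypothetical
    {"accuracy": 0.9, "relevance": 0.8, "fluency": 1.0},
    {"accuracy": 0.7, "relevance": 0.9, "fluency": 0.9},
    {"accuracy": 0.8, "relevance": 1.0, "fluency": 0.8},
]

averages = {
    dim: mean(s[dim] for s in judge_scores)
    for dim in ("accuracy", "relevance", "fluency")
}
print(averages)  # accuracy_ave = (0.9 + 0.7 + 0.8) / 3 = 0.8
```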

3.3. Results and Discussions

To compare the performance of the different algorithms across the various Q&A tasks, this study conducts a comparative experiment based on the evaluation metrics described above. The results indicate that the proposed KG-Agent-QA method shows significant advantages in both GKC and PRC tasks (see Table 6). In GKC tasks, KG-Agent-QA achieves an accuracy_ave of 93% and an F1 of 97%, outperforming LLM-QA (77% accuracy_ave, 86% F1) and RAG-QA (84% accuracy_ave, 91% F1). This improvement can be attributed to the structured organization of domain knowledge in the knowledge graph. In PRC tasks, KG-Agent-QA likewise delivers the strongest performance.
These findings highlight that KG-Agent-QA, by combining knowledge graphs and intelligent agent technology, enables more accurate problem understanding and knowledge retrieval, resulting in superior performance across multiple metrics. In comparison, while RAG-QA enhances answer accuracy and recall to some extent through retrieval-augmented generation, it falls short in utilizing deep knowledge and performing complex reasoning compared to KG-Agent-QA. LLM-QA, relying solely on pre-trained knowledge within the language model, lacks external knowledge augmentation and reasoning support, leading to knowledge bias or incomplete answers in complex tasks, and thus performs less effectively.
To examine performance across the different subcategories of project review questions, we conduct a fine-grained analysis that breaks the tasks down into three representative types: semantic comprehension (SeC), temporal constraints (TC), and spatial compliance (SpC), as shown in Table 7. KG-Agent-QA demonstrates clear advantages in each. In SpC, it achieves an accuracy_ave of 92% and an F1 of 98%, outperforming RAG-QA (83% accuracy_ave, 96% F1) and validating the effectiveness of the spatial analysis tool in project review. In TC, KG-Agent-QA achieves an accuracy_ave of 91%, 10 percentage points higher than RAG-QA, benefiting from the temporal logic constraint verification agent’s modeling of time-related features such as approval deadlines. Notably, the method also achieves a recall of 98% in SeC, an improvement of 6 percentage points over RAG-QA, demonstrating that the integration of the knowledge graph and the intelligent agent routing mechanism effectively mitigates missed detections caused by semantic ambiguity.
Overall, KG-Agent-QA forms a systemic advantage in multidimensional evaluation metrics through entity relationship reasoning within the knowledge graph, targeted activation of spatiotemporal analysis tools, and dynamic decision-making via intelligent agent routing, especially showing robust adaptability in advanced tasks requiring cross-modal knowledge fusion.
The comparison of performance metrics for RAG-QA and KG-Agent-QA in different Q&A tasks, as shown in Figure 6, further emphasizes that KG-Agent-QA has a systemic advantage over RAG-QA in terms of structured knowledge integration and multidimensional reasoning. In GKC, KG-Agent-QA’s metrics are predominantly in the range of 92–98%, significantly surpassing RAG-QA’s range of 78–92%. This advantage stems from the embedded representation of structured knowledge, such as approval procedures and material lists, in the knowledge graph. In PRC, KG-Agent-QA outperforms RAG-QA in semantic, time, and spatial tasks, further validating the superior performance of the knowledge graph and intelligent agent collaborative framework in complex domain-specific Q&A tasks.
As shown in Figure 7, the KG-Agent-QA method, by combining the LUAKG, significantly mitigates hallucinations in LLM-based answers. By associating with policy clauses, legal regulations, and other documents, this method greatly enhances the interpretability of answers. Additionally, the Q&A performance comparison between the two methods shown in Figure 8 demonstrates that KG-Agent-QA, by utilizing spatiotemporal analysis tools, can effectively assess the spatial constraints and temporal validity of projects. By considering multiple dimensions such as semantics, time, and space, it generates more reliable answers, thereby optimizing the precision and efficiency of the land approval process.
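To make the idea of these spatiotemporal checks concrete, the following sketch pairs a temporal-logic test (the grant contract must not predate the ownership confirmation) with a deliberately simplified spatial test, where bounding-box intersection stands in for real GIS overlay analysis. Both functions and all coordinates and dates are illustrative assumptions, not the authors' tools.

```python
# Illustrative temporal-validity and spatial-overlap checks (not the
# actual spatiotemporal analysis tools used in the paper).
from datetime import date

def temporal_ok(ownership_confirmed: date, contract_signed: date) -> bool:
    """Temporal logic consistency: the contract may not predate ownership confirmation."""
    return contract_signed >= ownership_confirmed

def boxes_overlap(a, b) -> bool:
    """a, b = (xmin, ymin, xmax, ymax); True if the two rectangles intersect."""
    return not (a[2] <= b[0] or b[2] <= a[0] or a[3] <= b[1] or b[3] <= a[1])

site = (10, 10, 20, 20)            # hypothetical project parcel extent
basic_farmland = (18, 5, 30, 15)   # hypothetical protected-zone extent

print(temporal_ok(date(2024, 3, 1), date(2024, 5, 20)))  # True: dates are in order
print(boxes_overlap(site, basic_farmland))               # True: possible encroachment
```

In a production setting the rectangle test would be replaced by true polygon overlay against the cadastral layers, but the decision logic the agent consumes (a boolean compliance verdict per constraint) has the same shape.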

4. Conclusions

To enhance decision support and work efficiency in land use approval, this study proposes a spatiotemporal–semantic coupling intelligent Q&A method for land use approval based on KGs and intelligent agent technology. By constructing the LUAKG and integrating intelligent agent technology, the proposed KG-Agent-QA method provides more precise and intelligent answers across various Q&A tasks, and demonstrates particularly significant advantages in complex project review tasks. The main conclusions are summarized as follows:
Superiority in Q&A Tasks: Compared to LLM-QA and RAG-QA, KG-Agent-QA shows significant improvements in both GKC and PRC tasks, with accuracy and F1 score improving by 16% and 11%, respectively. This confirms the effectiveness of KGs in structuring domain knowledge and the dynamic decision-making capability of intelligent agents.
Enhanced Multidimensional Q&A Ability: KG-Agent-QA shows higher accuracy and recall in spatial and temporal Q&A tasks, and effectively reduces missed detections caused by semantic ambiguity in semantic Q&A tasks. This further validates the advantage of the KG and intelligent agent collaborative architecture in handling complex domain-specific tasks.
Advantages of Spatiotemporal–Semantic Coupling: By effectively utilizing spatiotemporal analysis tools, KG-Agent-QA performs comprehensive analysis across spatial, temporal, and semantic dimensions. The integration of multi-modal information systematically improves the precision and efficiency of decision support for land approval.
Effectiveness of the KG and Intelligent Agent Collaborative Architecture: The KG provides a structured foundation for domain knowledge, while intelligent agent technology enables dynamic decision support. This collaborative architecture allows for deeper reasoning and judgment in complex tasks, making it especially suitable for land use approval that requires the consideration of multiple conditions and cross-modal information.
This study proposes an innovative intelligent Q&A framework by combining KG and intelligent agent technology. Through effective spatiotemporal analysis and multidimensional reasoning, the method significantly enhances decision support for land use approval. This approach not only handles conventional policy and regulatory questions but also addresses complex project review tasks, demonstrating strong scalability and practical application value. However, the current method faces challenges such as high costs for knowledge graph construction and updates, limited ability to process unstructured text, and insufficient generalization capabilities of the intelligent agents. Future research should focus on exploring automated mechanisms for knowledge graph updates, improving the system’s ability to understand unstructured text, developing more collaborative multi-agent systems for cross-departmental collaboration, enhancing spatial analysis capabilities, and establishing a more comprehensive decision explanation mechanism. These improvements will help build more intelligent and efficient decision support systems for land use approval, driving the intelligent and digital transformation of territorial spatial planning.

Author Contributions

Conceptualization, S.Y., X.Y., H.L. and G.X.; data curation, S.Y. and X.H.; methodology, S.Y., X.H. and X.Y.; visualization, S.Y., H.L., X.H. and X.Y.; writing—original draft, H.L. and S.Y.; writing—review and editing, H.L., S.Y., X.Y., G.X. and M.D. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by grants from the National Key Research and Development Program of China (2022YFB3904203), the National Natural Science Foundation of China (42271485, 42430110 and 42171459), the Hunan Geospatial Information Engineering and Technology Research Center (HNGIET2024006), the Natural Science Foundation of Hunan Province (2024JJ1009), the Frontier Cross Research Project of Central South University (2023QYJC002), the Jiangxi Province “Double Thousand Plan”, the third batch of short-term projects to introduce innovative leading talents (jxsq2020102062), and the Open Fund of Wenzhou Future City Research Institute (WL2023003).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data is unavailable due to privacy or ethical restrictions.

Acknowledgments

This work was carried out in part using computing resources at the High-Performance Computing Platform of Central South University.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Jiang, F. Model Design and Implementation in Construction Land Approval Business of Guangdong Province. Land Resour. Inf. 2018, 5, 12–16.
  2. Chen, J.; Wu, H.; Liu, W.; Shan, W.; Zhang, J.; Zhao, P. Technical Connotation and Research Agenda of Natural Resources Spatio-Temporal Information. Acta Geod. Cartogr. Sin. 2022, 51, 1130–1140.
  3. Hu, S.; Zou, L.; Yu, J.X.; Wang, H.; Zhao, D. Answering Natural Language Questions by Subgraph Matching over Knowledge Graphs. IEEE Trans. Knowl. Data Eng. 2017, 30, 824–837.
  4. Chen, B.; Xian, G.; Zhao, R.; Huang, Y.; Li, J.; Cao, Y.; Sun, T. Overall Design and Key Technology of Q&A Style Intelligent Retrieval for Scientific and Technical Literature. J. Libr. Sci. China 2023, 49, 92–106.
  5. Wu, L. Survey on Question Answering System Research. Sci. Technol. Commun. 2019, 11, 147–148.
  6. Dwivedi, S.K.; Singh, V. Research and Reviews in Question Answering System. Procedia Technol. 2013, 10, 417–424.
  7. Lopez, V.; Uren, V.; Sabou, M.; Motta, E. Is Question Answering Fit for the Semantic Web?: A Survey. Semant. Web 2011, 2, 125–155.
  8. Ouyang, L.; Wu, J.; Jiang, X.; Almeida, D.; Wainwright, C.; Mishkin, P.; Zhang, C.; Agarwal, S.; Slama, K.; Ray, A.; et al. Training Language Models to Follow Instructions with Human Feedback. Adv. Neural Inf. Process. Syst. 2022, 35, 27730–27744.
  9. Gallifant, J.; Fiske, A.; Levites Strekalova, Y.A.; Osorio-Valencia, J.S.; Parke, R.; Mwavu, R.; Martinez, N.; Gichoya, J.W.; Ghassemi, M.; Demner-Fushman, D.; et al. Peer Review of GPT-4 Technical Report and Systems Card. PLOS Digit. Health 2024, 3, e0000417.
  10. Touvron, H.; Lavril, T.; Izacard, G.; Martinet, X.; Lachaux, M.-A.; Lacroix, T.; Rozière, B.; Goyal, N.; Hambro, E.; Azhar, F.; et al. Llama: Open and Efficient Foundation Language Models. arXiv 2023, arXiv:2302.13971.
  11. Du, Z.; Qian, Y.; Liu, X.; Ding, M.; Qiu, J.; Yang, Z.; Tang, J. GLM: General Language Model Pretraining with Autoregressive Blank Infilling. arXiv 2021, arXiv:2103.10360.
  12. Xu, J.; Zhang, H.; Zhang, H.; Lu, J.; Xiao, G. ChatTf: A Knowledge Graph-Enhanced Intelligent Q&A System for Mitigating Factuality Hallucinations in Traditional Folklore. IEEE Access 2024, 12, 162638–162650.
  13. Brown, T.; Mann, B.; Ryder, N.; Subbiah, M.; Kaplan, J.D.; Dhariwal, P.; Neelakantan, A.; Shyam, P.; Sastry, G.; Askell, A.; et al. Language Models Are Few-Shot Learners. Adv. Neural Inf. Process. Syst. 2020, 33, 1877–1901.
  14. Yang, H.Y.; Liu, X.Y.; Wang, C.D. FinGPT: Open-Source Financial Large Language Models. arXiv 2023, arXiv:2306.06031.
  15. Cui, J.; Li, Z.; Yan, Y.; Chen, B.; Yuan, L. ChatLaw: Open-Source Legal Large Language Model with Integrated External Knowledge Bases. arXiv 2023, arXiv:2306.16092.
  16. Frisoni, G.; Cocchieri, A.; Presepi, A.; Moro, G.; Meng, Z. To Generate or to Retrieve? On the Effectiveness of Artificial Contexts for Medical Open-Domain Question Answering. arXiv 2024, arXiv:2403.01924.
  17. Wang, Z.; Yang, D.L.; Kuang, X.Y.; Liu, D.; Ma, Y. Research on Automatic Question Answering Model of Flood Disaster Emergency Decision-Making Considering Prompt-Learning. J. Saf. Sci. Technol. 2022, 18, 12–18.
  18. Lin, Z.; Guan, S.; Zhang, W.; Zhang, H.; Li, Y.; Zhang, H. Towards Trustworthy LLMs: A Review on Debiasing and Dehallucinating in Large Language Models. Artif. Intell. Rev. 2024, 57, 243.
  19. Huang, L.; Yu, W.; Ma, W.; Zhong, W.; Feng, Z.; Wang, H.; Chen, Q.; Peng, W.; Feng, X.; Qin, B.; et al. A Survey on Hallucination in Large Language Models: Principles, Taxonomy, Challenges, and Open Questions. ACM Trans. Inf. Syst. 2025, 43, 1–55.
  20. Dziri, N.; Milton, S.; Yu, M.; Zaiane, O.; Reddy, S. On the Origin of Hallucinations in Conversational Models: Is It the Datasets or the Models? arXiv 2022, arXiv:2204.07931.
  21. Yu, L.; Cao, M.; Cheung, J.C.K.; Dong, Y. Mechanistic Understanding and Mitigation of Language Model Non-Factual Hallucinations. arXiv 2024, arXiv:2403.18167.
  22. Zhang, Y.; Li, S.; Qian, C.; Liu, J.; Yu, P.; Han, C.; Fung, Y.R.; McKeown, K.; Zhai, C.; Li, M.; et al. The Law of Knowledge Overshadowing: Towards Understanding, Predicting, and Preventing LLM Hallucination. arXiv 2025, arXiv:2502.16143.
  23. Lewis, P.; Perez, E.; Piktus, A.; Petroni, F.; Karpukhin, V.; Goyal, N.; Küttler, H.; Lewis, M.; Yih, W.T.; Rocktäschel, T.; et al. Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks. Adv. Neural Inf. Process. Syst. 2020, 33, 9459–9474.
  24. Gao, Y.; Xiong, Y.; Gao, X.; Jia, K.; Pan, J.; Bi, Y.; Dai, Y.; Sun, J.; Wang, H.; Wang, H. Retrieval-Augmented Generation for Large Language Models: A Survey. arXiv 2023, arXiv:2312.10997.
  25. Sonkar, S.; Liu, N.; Mallick, D.B.; Baraniuk, R.G. CLASS: A Design Framework for Building Intelligent Tutoring Systems Based on Learning Science Principles. arXiv 2023, arXiv:2305.13272.
  26. Guo, Y.; Qiu, W.; Leroy, G.; Wang, S.; Cohen, T. Retrieval Augmentation of Large Language Models for Lay Language Generation. J. Biomed. Inform. 2024, 149, 104580.
  27. He, X.X.; Tian, Y.J.; Sun, Y.F.; Chawla, N.; Laurent, T.; LeCun, Y.; Bresson, X.; Hooi, B. G-retriever: Retrieval-augmented generation for textual graph understanding and question answering. Adv. Neural Inf. Process. Syst. 2024, 37, 132876–132907.
  28. Yang, D.J.; Rao, J.M.; Chen, K.Z.; Guo, X.; Zhang, Y.; Yang, J.; Zhang, Y. IM-RAG: Multi-Round Retrieval-Augmented Generation through Learning Inner Monologues. In Proceedings of the 47th International ACM SIGIR Conference on Research and Development in Information Retrieval, Washington, DC, USA, 14–18 July 2024; ACM: New York, NY, USA, 2024; pp. 730–740.
  29. Cheng, Y.; Zhang, C.; Zhang, Z.; Meng, X.; Hong, S.; Li, W.; Wang, Z.; Wang, Z.; Yin, F.; Zhao, J.; et al. Exploring Large Language Model Based Intelligent Agents: Definitions, Methods, and Prospects. arXiv 2024, arXiv:2401.03428.
  30. Cao, S.; Zhang, Z.; Alghadeer, M.; Fasciati, S.D.; Piscitelli, M.; Bakr, M.; Leek, P.; Aspuru-Guzik, A. Agents for Self-Driving Laboratories Applied to Quantum Computing. arXiv 2024, arXiv:2412.07978.
  31. Xi, Z.; Chen, W.; Guo, X.; He, W.; Ding, Y.; Hong, B.; Zhang, M.; Wang, J.; Jin, S.; Zhou, E.; et al. The Rise and Potential of Large Language Model Based Agents: A Survey. Sci. China Inf. Sci. 2023, 68, 121101.
  32. Bommasani, R.; Hudson, D.A.; Adeli, E.; Altman, R.; Arora, S.; von Arx, S.; Bernstein, M.S.; Bohg, J.; Bosselut, A.; Brunskill, E.; et al. On the Opportunities and Risks of Foundation Models. arXiv 2021, arXiv:2108.07258.
  33. Zhu, Y.X.; Zhang, X.F.; Sun, Q.M.; Li, X. Expert system of weld crack diagnosis based on knowledge base. Trans. China Welding Inst. 2001, 3, 59–62.
  34. Ji, S.; Pan, S.; Cambria, E.; Marttinen, P.; Yu, P.S. A survey on knowledge graphs: Representation, acquisition, and applications. IEEE Trans. Neural Netw. Learn. Syst. 2021, 33, 494–514.
  35. Sadvilkar, N.; Neumann, M. PySBD: Pragmatic Sentence Boundary Disambiguation. In Proceedings of the 2020 Natural Language Processing Open Source Software Workshop, Online, 19 November 2020.
  36. Koroteev, M.V. BERT: A Review of Applications in Natural Language Processing and Understanding. arXiv 2021, arXiv:2103.11943.
  37. Turney, P.D. Learning Algorithms for Keyphrase Extraction. Inf. Retr. 2000, 2, 303–336.
  38. Reimers, N.; Gurevych, I. Sentence-BERT: Sentence Embeddings Using Siamese BERT-Networks. arXiv 2019, arXiv:1908.10084.
  39. Jierula, A.; Wang, S.; Oh, T.M.; Wang, P. Study on Accuracy Metrics for Evaluating the Predictions of Damage Locations in Deep Piles Using Artificial Neural Networks with Acoustic Emission Data. Appl. Sci. 2021, 11, 2314.
  40. Lee, S.; Lee, J.; Moon, H.; Park, C.; Seo, J.; Eo, S.; Koo, S.; Lim, H. A Survey on Evaluation Metrics for Machine Translation. Mathematics 2023, 11, 1006.
  41. Alarfaj, F.K.; Khan, J.A. Deep Dive into Fake News Detection: Feature-Centric Classification with Ensemble and Deep Learning Methods. Algorithms 2023, 16, 507.
  42. Chicco, D.; Jurman, G. The Advantages of the Matthews Correlation Coefficient (MCC) over F1 Score and Accuracy in Binary Classification Evaluation. BMC Genom. 2020, 21, 6.
  43. Chang, Y.; Wang, X.; Wang, J.; Wu, Y.; Yang, L.; Zhu, K.; Chen, H.; Yi, X.; Wang, C.; Wang, Y.; et al. A Survey on Evaluation of Large Language Models. ACM Trans. Intell. Syst. Technol. 2024, 15, 1–45.
  44. Liu, Y.; Iter, D.; Xu, Y.; Wang, S.; Xu, R.; Zhu, C. G-Eval: NLG Evaluation Using GPT-4 with Better Human Alignment. arXiv 2023, arXiv:2303.16634.
Figure 1. Business process diagram of land use approval (taking the actual process of a place as an example).
Figure 2. Schematic flowchart of this study.
Figure 3. The ontology structure diagram of land use approval.
Figure 4. Intelligent Q&A methodology framework.
Figure 5. Schematic of the prompt construct in project review judgment.
Figure 6. Comparison of KG-Agent-QA and RAG-QA metrics on different tasks.
Figure 7. Question 1: Comparison of Q&A effectiveness (left: effects of Q&A using LLM; right: effects of Q&A using the method proposed in this study).
Figure 8. Question 2: Comparison of Q&A effects (left: effects of Q&A using RAG; right: effects of Q&A using the method proposed in this study).
Table 1. Description of the entity type labeled.

| Entity Type | Example |
| --- | --- |
| Approval matters | Approval of permanent basic farmland conversion |
| Approval application materials | Feasibility study report |
| Functional areas of business | Municipal Bureau of Natural Resources |
| Policy provisions | Article 44 of the Land Administration Law |
Table 2. Example of triple extraction relationship.

| Entity1 | Relationship | Entity2 |
| --- | --- | --- |
| Approval of Construction Land Use | Matters related to approval content | Conversion of Agricultural Land |
| Conversion of Agricultural Land Use | Approval items involved during the compilation and submission phase | Permanent Basic Farmland |
| Permanent Basic Farmland | Materials submitted for approval | “Report on the Field Survey and Verification of Occupation and Re-division of Permanent Basic Farmland” |
| Agricultural and Rural Affairs Bureau | Issued materials | Explanation of the Situation Regarding the Occupation of High-Standard Farmland |
Table 3. Classification of GKC.

| Class | Typical Example | Quantity |
| --- | --- | --- |
| Planning and Use Class | “What are the differences between commercial and industrial land use approvals?” | 50 |
| Approval Authority and Regulations Class | “Which entity should approve certain types of land use changes?” | 50 |
| Approval Materials and Processes Class | “What materials are required for the land use approval work involving forest land?” | 40 |
| Statute of Limitations and Costs Class | “What is the typical turnaround time for approval of land title documents?” | 40 |
| Special Policy Class | “What are the policies to support land use for rural revitalization?” | 20 |
Table 4. Classification of PRC.

| Class | Subclass | Typical Example | Quantity |
| --- | --- | --- | --- |
| SeC | Data Compliance | “Does the accuracy of site area measurements meet national standards?” | 35 |
| SeC | Operational Consistency | “Is the land use status map consistent with the land classes in the boundary survey report?” | 35 |
| SeC | Policy Relevance | “Is a project in compliance with the latest cropland balance policy?” | 30 |
| SpC | Site Survey Compliance | “Does the slope of the terrain measured in the boundary survey report meet the requirements?” | 40 |
| SpC | Planning Compliance | “Which entity should approve certain types of land use changes?” | 40 |
| SpC | Topological Relationship of Space | “Does the project site encroach on permanent basic agricultural land?” | 20 |
| TC | Time Window Compliance | “Does the timing of the publication of the land acquisition notice comply with the relevant regulations?” | 50 |
| TC | Temporal Logic Consistency | “Was the land grant contract signed later than the confirmation of ownership?” | 50 |
Table 5. Examples of QA pairs in the dataset.

| Question Type | Example Question | Example Answer |
| --- | --- | --- |
| Approval authority | Which authority is responsible for approving this type of land use change? | According to the Land Use Examination Measures, such changes should be reviewed by the local natural resources authority. The specific level of authority depends on the project’s scale and type. |
| Temporal logic | Is the land transfer contract signed after land ownership confirmation? | If the contract date is later than the ownership confirmation date, the process is reasonable. Otherwise, it may indicate a logical inconsistency and the project’s documentation should be further reviewed for legal compliance. |
Table 6. Performance of the different methods on the different Q&A tasks.

| Tasks | Methods | Accuracy | Precision | Recall | F1 | accuracy_ave | relevance_ave | fluency_ave |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| GKC | LLM-QA | 0.72 | 0.82 | 0.92 | 0.86 | 0.77 | 0.78 | 0.80 |
| GKC | RAG-QA | 0.80 | 0.92 | 0.91 | 0.91 | 0.84 | 0.83 | 0.85 |
| GKC | KG-Agent-QA | 0.94 | 0.96 | 0.98 | 0.97 | 0.93 | 0.94 | 0.96 |
| PRC | RAG-QA | 0.85 | 0.92 | 0.96 | 0.94 | 0.82 | 0.84 | 0.83 |
| PRC | KG-Agent-QA | 0.91 | 0.95 | 0.98 | 0.96 | 0.91 | 0.92 | 0.92 |
Table 7. Performance of the different methods on the different PRC subtasks.

| Tasks | Methods | Accuracy | Precision | Recall | F1 | accuracy_ave | relevance_ave | fluency_ave |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| SeC | RAG-QA | 0.80 | 0.89 | 0.92 | 0.90 | 0.83 | 0.84 | 0.84 |
| SeC | KG-Agent-QA | 0.89 | 0.94 | 0.98 | 0.96 | 0.90 | 0.92 | 0.91 |
| TC | RAG-QA | 0.88 | 0.93 | 0.96 | 0.94 | 0.81 | 0.83 | 0.82 |
| TC | KG-Agent-QA | 0.92 | 0.96 | 0.98 | 0.97 | 0.91 | 0.92 | 0.92 |
| SpC | RAG-QA | 0.90 | 0.95 | 0.97 | 0.96 | 0.83 | 0.85 | 0.84 |
| SpC | KG-Agent-QA | 0.94 | 0.97 | 0.99 | 0.98 | 0.92 | 0.94 | 0.93 |
