Article

A Large Language Model Driven Knowledge Graph Construction Scheme for Semantic Communication

College of Information and Communication, National University of Defense Technology, Wuhan 430030, China
* Authors to whom correspondence should be addressed.
Appl. Sci. 2025, 15(8), 4575; https://doi.org/10.3390/app15084575
Submission received: 5 March 2025 / Revised: 15 April 2025 / Accepted: 18 April 2025 / Published: 21 April 2025

Abstract

This study presents a knowledge graph construction scheme that leverages large language models (LLMs) for task-oriented semantic communication systems. The proposed methodology systematically addresses four critical stages: corpus collection, entity extraction and relationship analysis, knowledge base generation, and dynamic updating. Notably, prompt engineering is combined with few-shot learning to enhance reliability and accuracy. Experiments showed that the methodology achieves superior entity extraction performance, with 89.7% precision and 92.3% recall. The scheme removes the dependence on domain expertise and the labor cost of traditional knowledge base construction schemes, greatly improving the efficiency of knowledge graph construction. This paper thus provides an efficient and reliable task knowledge base construction scheme for task-oriented semantic communication, which is expected to promote its wider application.

1. Introduction

Semantic communication has emerged as a pivotal enabling technology for 6G networks, demonstrating significant advantages in reducing transmission loads and enhancing service quality [1]. Unlike conventional syntactic systems that focus on bit-level accuracy, semantic communication focuses on the meaning of information. This paradigm shift relies on knowledge-driven frameworks, in which semantic analysis is performed against background knowledge. Recent studies confirm that semantic knowledge bases (SKBs) serve as essential infrastructure for formalizing task information and domain expertise in semantic communication systems [2,3,4,5]. In task-oriented scenarios especially, accurate and complete SKBs are crucial for efficient semantic interaction. Consequently, the rapid construction of task SKBs constitutes a critical research challenge in optimizing task-driven semantic communication architectures.
Traditional knowledge graph construction methods exhibit three fundamental limitations in semantic communication applications [6]: (1) labor-intensive manual annotation requiring domain specialists, (2) rigid schema design dependent on prior expertise, (3) scalability constraints caused by iterative quality assurance processes. Typically, these methods require domain experts, creating bottlenecks in dynamic environments. For knowledge-sensitive semantic communication scenarios, such constraints hinder rapid knowledge base deployment. Consequently, there is an urgent need to streamline the process of constructing knowledge graphs to meet semantic communication requirements.
The rapid development of large language models (LLMs), particularly the GPT series and DeepSeek, offers transformative capabilities for knowledge engineering [7]. These models exhibit three core strengths: natural language understanding, cross-domain reasoning, and representation learning. Such capabilities enable the automated execution of key tasks, including corpus collection, entity extraction, and relationship mapping [8]. Furthermore, prompt engineering allows precise model fine-tuning to enhance task-specific accuracy [9]. These advantages establish LLMs as foundational tools for reconstructing knowledge base development in semantic communication systems.
We therefore believe that LLMs can be adopted to quickly complete knowledge collation, entity extraction, and knowledge graph construction for specific tasks. Designing an LLM-based knowledge graph construction scheme that integrates knowledge organization, entity extraction, and graph generation will advance task-driven semantic communication technologies. This paper aims to design such a scheme and to improve its efficiency and accuracy through prompt engineering and few-shot learning techniques.
The principal contributions of this study comprise three key aspects:
  • An LLM-driven framework is developed to construct task SKBs, and entity extraction accuracy is improved by using few-shot learning technology. With this approach, task knowledge bases in semantic communication systems can be built without the high dependence on manual annotation and domain expertise found in traditional knowledge graph construction methods.
  • The knowledge base for UAV aerial photography tasks is designed, providing basic support for applying task-oriented semantic communication technology in UAV aerial photography scenes. This advancement is expected to promote the adoption of semantic communication in drone communications, addressing critical challenges such as bandwidth constraints in drone communication systems.
  • In the experimental phase, this study tested multiple LLMs. The results demonstrate that all LLMs successfully accomplished entity extraction and relationship analysis tasks with robust performance. In particular, DeepSeek provided authentic and valid corpus data, significantly enhancing knowledge base construction reliability. The experimental results validate the effectiveness and reliability of the proposed knowledge graph construction scheme.
The remainder of this paper is organized as follows. Section 2 reviews related research. Section 3 introduces the scheme proposed in this paper. Section 4 presents the experimental verification. Finally, a conclusion is drawn in Section 5. It should be noted that this paper was completed in a Chinese context. All Tables in the main text are translated versions of the output from the LLMs. As translations may not fully capture experimental nuances, we provide the raw data in the original language in Appendix A for reference.

2. Related Work

2.1. The Construction Method for the Semantic Knowledge Base

SKBs originated as computational knowledge graphs, designed to formally represent entity concepts and their relational networks. Recent advancements in semantic communication have driven the evolution of four distinct SKB construction methodologies.
The first approach constructs the semantic knowledge base from a knowledge graph. Oriented mainly toward text and speech transmission, it builds a multi-level semantic knowledge base by extracting triples [2,10,11]. The second uses a labeled training dataset as the knowledge base, describing knowledge through the statistical distribution of the training data. This method can dynamically update the encoding and decoding scheme through domain-adaptation techniques from transfer learning, solving the mismatch between the transmitted data and the training dataset [2]. The third uses the statistical properties of features extracted by deep learning models as the semantic knowledge base. Aimed mainly at tasks such as image classification, it extracts the correlation between the feature maps of category-labeled image datasets and the object categories [12,13]. The last replaces the knowledge base with LLMs: performing semantic extraction and recovery through LLMs effectively resolves semantic ambiguity and generalizes well [14,15].
Currently, various semantic knowledge base construction schemes have been applied to end-to-end semantic communication systems. Particularly in task-oriented semantic communication, the integration of task knowledge into semantic encoding and decoding processes achieves higher compression rates. However, existing studies embed knowledge into semantic encoding/decoding algorithms in the form of algorithmic parameters, which hinders knowledge updating during communication. This approach struggles to adapt to dynamic changes in task knowledge in task-oriented semantic communication. In contrast, entity knowledge bases represented by knowledge graphs can address this limitation [16]. Therefore, it is a critical step to realize the efficient and accurate construction of task knowledge bases in enhancing task-driven image semantic communication technologies.

2.2. Application of Large Language Models

In recent years, LLMs have developed rapidly and become a research hotspot in China. They provide brand-new solutions for semantic communication, natural language processing, and other fields.
LLMs possess robust semantic understanding capabilities and find extensive applications in semantic communication, in two primary ways. First, LLMs calculate semantic loss and assess semantic importance to optimize the semantic encoding and decoding process [14,17]. Second, they serve as semantic knowledge bases, facilitating precise extraction and recovery in semantic communication [18,19]. However, LLMs require extensive computational and storage resources, so integrating them into semantic communication systems poses significant resource challenges. Additionally, LLMs suffer from the problem of “hallucination”, which reduces accuracy on specialized questions [20]. Currently, leveraging LLMs to enhance semantic communication remains an open research question; harnessing their extensive knowledge reserves and learning abilities to improve semantic communication represents a crucial development direction in this field.
In the realm of knowledge graphs, the application of LLMs has also become a prominent research area. Their powerful semantic comprehension enables them to perform tasks such as corpus collection and knowledge extraction, offering novel approaches to knowledge graph construction. Performance comparable to other state-of-the-art methods can be achieved if LLMs are fine-tuned via prompt engineering [21]. For instance, [22] introduced a scheme for clinical tasks that integrates few-shot learning techniques, achieving high performance in named entity recognition. Similarly, [9] proposed a method for constructing a Chinese medicine knowledge graph based on LLMs, significantly reducing manual effort and enhancing construction efficiency.
Therefore, we believe that a knowledge graph construction method tailored to semantic communication can be developed using LLMs. This approach can accommodate knowledge evolution’s dynamic nature during the semantic communication processes.

3. Method

In this section, an interactive semantic knowledge base construction method based on LLMs is proposed for semantic communication. The scheme consists of three components: public information collection, entity extraction and relationship analysis, and knowledge base generation, as shown in Figure 1.
  • Stage 1: Public Information Collection
The initial step involves assigning the primary task to the LLMs to generate a first-round output. Operators then sample the produced content to evaluate whether it complies with the specified requirements. Where discrepancies are detected, prompt engineering is employed to fine-tune the model through iterative refinement cycles. This process persists until the corpus attains the predefined quality standards.
  • Stage 2: Entity Extraction and Relationship Analysis
a. Entity Extraction
The LLMs extract entities to generate task-entity formatted data. Operators then randomly sample the output. For incomplete entity recognition or erroneous extraction results, prompt engineering guides iterative model adjustments until comprehensive and accurate entity identification is achieved.
b. Relationship Analysis
The relationship analysis task is delegated to the LLMs with integrated quantitative scoring principles, producing task-entity-score triples. These triples undergo operator verification through sampling reviews. When relationship interpretations deviate from practical observations, prompt-driven iterative refinement is implemented to obtain logically consistent triples.
  • Stage 3: Knowledge Base Generation
Through the LLMs, the system transforms the validated triples into executable Neo4j programs that generate the knowledge base. Neo4j visualization is implemented via Python (version 3.9.2), followed by comprehensive CRUD (Create, Read, Update, Delete) operation testing. If functional requirements remain unmet, prompt-based optimization is conducted to finalize an operational knowledge base.
Our framework implements a knowledge base construction pipeline through LLM-mediated human-computer interaction. This improves knowledge base reliability through prompt engineering. The knowledge graph is visualized with Python and Neo4j. This framework completes the work of corpus collection, data preprocessing, and program writing through LLMs, which greatly improves the efficiency of the knowledge base construction and reduces labor cost. In addition, the output results are analyzed by manual sampling review, and the LLMs are fine-tuned by human–computer interaction through prompt engineering in order to improve accuracy. The complete workflow visualization is provided in Figure 2.
In our prior research, the task knowledge base served as the foundation for task-oriented image semantic communication systems [16]. During semantic encoding/decoding, it is essential to acquire the entity names and task relevance scores, necessitating a task knowledge graph structured as “Task–Entity–Relevance Score”. This study implemented an LLM-powered construction pipeline comprising five stages: corpus acquisition, entity recognition, relevance assessment, graph generation, and continuous updating mechanisms.

3.1. Corpus Collection

Our study attempted to complete the corpus collection through three LLMs: iFLTTEK, DeepSeek, and ERNIE Bot. We guided the LLMs to output the expected text through prompt engineering to ensure alignment with real-world events. To maintain corpus authenticity, the prompt templates explicitly require verifiable sources (e.g., news articles) and mandate the inclusion of UAV aerial photography task contexts with entity-specific objectives. Table 1 shows the prompt template and the LLM outputs.
Table 1 reveals critical challenges in LLM-driven corpus collection. ERNIE Bot and iFLTTEK misinterpreted the task, producing non-compliant outputs resembling instructional guidelines rather than authentic journalistic content. DeepSeek generated news articles, but its outputs contained divergent elements unrelated to the core objectives (e.g., discussions of “drone no-fly zone management”). The experimental results indicate two limitations:
  • Specifying a task directly to the LLMs may cause the LLMs to misinterpret the task, such as in the output of ERNIE Bot and iFLTTEK.
  • Specifying a task directly to the LLMs may cause the output of the LLMs not to be as expected, such as the output of DeepSeek.
To address these limitations, we used a few-shot learning approach. This paper implemented a structured few-shot learning protocol with DeepSeek by providing three domain-specific news articles as contextual exemplars. As evidenced in Table 1, this approach successfully achieved effective corpus collection.
However, due to length constraints, DeepSeek could only output approximately 15 corpus entries per interaction. Therefore, it needs multiple rounds of interaction to collect the corpus. To ensure comprehensive coverage and factual accuracy in the constructed knowledge base, we present an iterative saturation protocol to govern corpus collection. This operational standard requires continuing data acquisition until three consecutive interaction cycles yield no novel task-relevant content from verified sources (news reports and official announcements), thereby guaranteeing two critical metrics:
a. Information Completeness: full coverage of all publicly available task-related data that existed before the experiment.
b. Content Authenticity: strict adherence to source verification protocols for the collected materials.
Finally, 50 valid corpus entries meeting the requirements were collected through five rounds of iterative querying. It should be noted that completeness in this paper means that the final constructed knowledge base covers all task-related information on the internet at the time of the experiment, which is also the reason why the iterative saturation principle was adopted.
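The iterative saturation protocol described above can be sketched as a simple loop. Here `query_fn` is a hypothetical stand-in for one round of interaction with the LLM, not part of any specific API:

```python
def collect_until_saturated(query_fn, max_rounds=20, patience=3):
    """Query repeatedly until `patience` consecutive rounds yield nothing new.

    query_fn: hypothetical callable returning one round of corpus entries.
    """
    corpus, stale_rounds = set(), 0
    for _ in range(max_rounds):
        new_items = set(query_fn()) - corpus
        if new_items:
            corpus |= new_items      # novel content found: reset the counter
            stale_rounds = 0
        else:
            stale_rounds += 1        # this round added nothing new
            if stale_rounds >= patience:
                break                # saturation reached
    return corpus
```

With `patience=3`, collection stops only after three consecutive interaction cycles add no novel task-relevant content, matching the operational standard stated above.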

3.2. Entity Extraction and Relationship Analysis

During the entity extraction phase, all corpora were manually annotated to identify the tasks and the entities appearing in the UAV aerial photography task segments. This process yielded 14 distinct task categories and 155 unique entities, establishing foundational references for subsequent analysis. These manually curated results served as the benchmark for evaluating LLM-based entity extraction performance. We then instructed multiple LLMs to extract entities from the raw corpora under identical task specifications. The comparative results in Table 2 reveal critical performance variations across models.
Table 2 demonstrates significant differences in the LLMs’ entity extraction performance. iFLTTEK exhibited fundamental task misalignment, generating partial semantic descriptors, like “terrain changes in landslide areas and vegetation destruction”, rather than discrete entities. While ERNIE Bot and DeepSeek comprehended the basic task, both models suffered from contextual recognition limitations, creating composite phrases like “forest burn areas” instead of atomic entities. This pattern reveals the challenge of isolating core semantic units from descriptive contexts, exemplified by erroneous outputs like “fire source locations” (contextual dependency) versus the required “fire sources” (atomic entities). Furthermore, inconsistent semantic granularity persisted across models, indicating difficulties in maintaining abstraction levels aligned with task specifications.
To optimize LLMs-based entity extraction performance, we developed an enhanced few-shot learning protocol by supplying 10 annotated domain-specific exemplars during task configuration. As demonstrated in Table 3, this methodological refinement substantially reduced erroneous extraction instances across all evaluated models, enabling precise identification of atomic entities (e.g., “smoke”) from contextual descriptions (e.g., “smoke coverage areas”). However, the extraction accuracy and effectiveness of the three LLMs still exhibited significant variations, which are quantitatively analyzed in Section 4.
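The few-shot protocol amounts to prepending annotated exemplars to the extraction instruction. A minimal sketch of the prompt assembly follows; the exemplar text and label here are hypothetical placeholders in the paper's annotation style, not the actual exemplars used:

```python
def build_extraction_prompt(exemplars, text):
    """Assemble a few-shot entity-extraction prompt.

    exemplars: list of (annotated_segment, "task-entities" label) pairs,
    mirroring the 10 annotated domain-specific exemplars described above.
    """
    lines = ["Mark each entity with [] and append the task context and "
             "entities, following the examples below."]
    for i, (segment, label) in enumerate(exemplars, 1):
        lines.append(f"{i}. {segment} ({label})")
    lines.append(f"Text to process: {text}")
    return "\n".join(lines)

# Hypothetical exemplar: an annotated segment plus its task-entity label.
exemplars = [("Drones scanned 12 [high-slope cracks] and 8 [bridge piers].",
              "road inspection-cracks, bridge piers")]
```

The assembled prompt shows the model both the bracket-marking convention and the required atomic-entity granularity before presenting the new text.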
To establish robust relevance assessment criteria, we developed an LLM-driven analytical framework for evaluating entity–task relationships through semantic correlation analysis. This methodology introduces structured prompt templates (Equation (1)) to standardize scoring rationality, incorporating two fundamental evaluation dimensions:
  • Frequency Significance: Assessed through the frequency of occurrence of entities in different corpora under the same task.
  • Descriptive Salience: Assessed through contextual importance indicators in semantic expressions.
S_entity = α·f_entity + β·i_entity, (1)
where S_entity, f_entity, and i_entity denote the relevance score, entity occurrence frequency, and entity description strength, respectively. α and β are hyperparameters that improve the rationality of the correlation analysis process.
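As a concrete illustration of Equation (1), the sketch below scales raw occurrence counts to a 0–10 range and combines them with a description-strength score. The equal weights α = β = 0.5 and the 0–10 scaling are assumptions for illustration, not values reported in this paper:

```python
def relevance_scores(occurrences, description_strength, alpha=0.5, beta=0.5):
    """S_entity = alpha * f_entity + beta * i_entity (Equation (1)).

    occurrences: entity -> raw count across corpora for one task.
    description_strength: entity -> salience score i_entity on a 0-10 scale.
    """
    max_count = max(occurrences.values())
    scores = {}
    for entity, count in occurrences.items():
        f_entity = 10 * count / max_count             # frequency scaled to 0-10
        i_entity = description_strength.get(entity, 0)
        scores[entity] = round(alpha * f_entity + beta * i_entity, 1)
    return scores
```

In the paper itself this scoring is performed by the LLMs under a structured prompt; the function only makes the weighting in Equation (1) explicit.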
High-frequency manifestations (e.g., repeated entity appearances in related task contexts) and semantically salient descriptors (e.g., critical functional attributes in domain-specific descriptions) jointly determine relevance metrics. The framework’s operational criteria ensure systematic weighting of quantitative occurrence patterns and qualitative contextual value, effectively bridging statistical prevalence with semantic essentiality.
The output of relationship analysis by LLMs is shown in Table 4.
Experimental evaluations demonstrate that all three LLMs successfully generated relevance analyses with logically grounded scoring rationales. ERNIE Bot combines the references provided and LLMs’ internal knowledge during the analysis. DeepSeek strictly adheres to the evaluation methodology defined by the prompt word project. iFLTTEK mainly relied on LLMs’ internal knowledge for its analysis, ignoring the provided references. As Table 4 illustrates, ERNIE Bot and iFLTTEK produced small score variance, limiting their discriminative capacity for nuanced relevance differentiation. DeepSeek, conversely, demonstrated superior granularity with pronounced inter-entity score disparities, enabling clear hierarchical distinctions. To leverage complementary strengths while mitigating individual model biases, we adopted the average of outputs from all three LLMs for subsequent knowledge base construction.
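Averaging the three models' outputs can be sketched as follows. Entities a model did not score are simply skipped in its mean, which is one plausible reading of "the average of outputs"; the paper does not spell out how missing scores are handled:

```python
def fuse_scores(per_model_scores):
    """Average relevance scores over the models that scored each entity.

    per_model_scores: list of dicts, one per LLM, mapping entity -> score.
    """
    entities = set().union(*per_model_scores)
    fused = {}
    for entity in entities:
        values = [s[entity] for s in per_model_scores if entity in s]
        fused[entity] = round(sum(values) / len(values), 1)
    return fused
```

With the scores shown in Table 4 (e.g., 林木 rated 5, 6, and 7 by the three models), this fusion yields 6.0, blending DeepSeek's wide score spread with the flatter distributions of the other two models.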

3.3. Knowledge Base Generation

Our knowledge graph construction framework integrates LLMs, Python, and Neo4j through a coordinated workflow, as shown in Figure 3. The system directs LLMs to convert structured task–entity–relevance triplets into executable Python code, which subsequently interfaces with Neo4j’s graph database engine via dedicated API calls.
This paper processed the obtained structured triples through Neo4j, Python, and LLMs to obtain the UAV emergency rescue mission knowledge base, as shown in Figure 4. Through the above process, semi-manual, semi-automatic knowledge graph generation is realized, which greatly reduces labor costs and improves the efficiency of knowledge graph construction. It should be noted that Figure 4 presents a translated representation of the Neo4j (version 4.4.5 for Windows) output, with the original Chinese interface screenshots preserved in Appendix A for verification purposes.
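The triple-to-graph step can be sketched as generating Cypher MERGE statements that a Python Neo4j client then executes. The label and relationship names below (Task, Entity, RELEVANCE) are illustrative choices, not the paper's actual schema:

```python
def triple_to_cypher(task, entity, score):
    """Render one Task-Entity-Relevance triple as a Cypher statement.

    MERGE keeps task and entity nodes unique even when triples share nodes,
    so repeated entities across tasks map to a single graph node.
    """
    return (
        f"MERGE (t:Task {{name: '{task}'}}) "
        f"MERGE (e:Entity {{name: '{entity}'}}) "
        f"MERGE (t)-[r:RELEVANCE]->(e) SET r.score = {score}"
    )

# Example triple in the paper's Task-Entity-Relevance Score form.
stmt = triple_to_cypher("earthquake rescue", "buildings", 10)
```

In practice each statement would be run through a Neo4j session (e.g., `session.run` in the official `neo4j` Python driver), and parameterized queries would be preferable to string interpolation for untrusted entity names.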
From Figure 4, it can be seen that, although UAV aerial photography technology is applied in various mission contexts, these missions share a large number of common entities. The knowledge base contains tasks, entities, and relevance scores. Therefore, based on the constructed task knowledge base, the natural language processing ability of LLMs can be used to infer task information for new tasks.

3.4. Knowledge Base Updating

In reality, task knowledge bases cannot achieve completeness, as new tasks may emerge during task execution, requiring rapid inference of task information for those tasks based on existing knowledge bases. This paper proposed that LLMs can directly generate task information for novel tasks based on their inherent knowledge and powerful reasoning abilities. Table 5 shows the experimental results.
Table 5 demonstrates the reasoning capabilities of three LLMs in inferring potential entities for novel tasks and analyzing relevance scores using established knowledge bases. The experimental results show that DeepSeek has better reasoning ability and can provide more possible entities, while ERNIE Bot is relatively cautious and only provides a small number of entities. This divergence may stem from inherent architectural differences in parameter optimization strategies—DeepSeek’s expansive reasoning pathways encourage exploratory hypothesis generation, whereas ERNIE Bot implements stricter output validation protocols to ensure high-confidence predictions.

4. Performance Analysis

This paper analyzed the accuracy and validity of corpus collection and entity extraction. For the corpus, the 50 items collated by DeepSeek were adopted. For entity extraction and relationship analysis, the 50 corpora were processed with different LLMs and different prompt templates. Precision, recall, and F1-score are used to evaluate performance:
P = TP / (TP + FP), (2)
R = TP / (TP + FN), (3)
F1-score = 2 P R / (P + R). (4)
The meanings of TP, FP, and FN are as follows:
TP: the number of correct triples;
FP: the number of spurious triples;
FN: the number of omitted triples.
It is critical to note that this paper employed manually processed triples as the ground truth, which serves as the criterion for evaluating the correctness of the triples extracted by the LLMs. TP denotes the number of triples output by the LLMs that match the ground truth; FP represents the number of triples generated by the LLMs but absent from the ground truth (i.e., spurious triples); FN indicates the number of ground-truth triples missing from the LLMs’ output. The experimental results are presented in Table 6.
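Treating each triple as a set element gives a compact computation of these metrics, with FP counting spurious triples and FN counting missed ones:

```python
def triple_metrics(predicted, gold):
    """Precision, recall, and F1 over extracted vs. ground-truth triples."""
    predicted, gold = set(predicted), set(gold)
    tp = len(predicted & gold)   # triples matching the ground truth
    fp = len(predicted - gold)   # spurious triples not in the ground truth
    fn = len(gold - predicted)   # ground-truth triples the model missed
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1
```

This assumes exact-match scoring of whole triples; partial credit for a correct entity with a wrong score would require a looser matching rule than set membership.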
Experimental evaluations demonstrate distinct performance characteristics across the LLMs in entity extraction and relation analysis. While DeepSeek and ERNIE Bot successfully extracted comprehensive triples through direct instruction, achieving baseline precision exceeding 75% with 87% recall, iFLTTEK initially failed to meet the fundamental task requirements. However, few-shot learning substantially enhanced all models’ capabilities, in particular enabling iFLTTEK not only to complete the core extraction task but also to exhibit superior Chinese-language processing, reaching precision exceeding 85% with a 91% recall rate.
Overall, the experimental results in Table 6 illustrate the accuracy and reliability of the scheme in extracting entities. The framework demonstrates robust extraction reliability: the obtained entities show precise semantic alignment while maintaining comprehensive corpus coverage. The methodology’s effectiveness is further reinforced by the use of real-world data spanning complete domain representations from verified internet sources. This dual validation, through both algorithmic performance metrics and source authenticity verification, confirms that the constructed task knowledge base achieves high completeness and accuracy, effectively supporting task-oriented semantic communication systems. The results substantiate LLM-driven entity extraction as a robust methodology for knowledge base development: the framework achieves efficient large-scale corpus processing and precise triple generation, and it significantly reduces manual labor compared to conventional approaches, particularly in complex data environments requiring rapid knowledge graph deployment.

5. Conclusions

To address the dynamic construction demands of task-oriented knowledge bases in semantic communication systems, a novel LLM-driven knowledge graph framework is proposed in this study. By fusing prompt engineering and few-shot learning, the accuracy of entity extraction and relationship analysis is significantly improved. Experiments show that LLMs combined with few-shot learning and prompt engineering (such as DeepSeek) perform well in knowledge reasoning and dynamic updating, and can quickly generate task-related entities and relevance scores, supporting real-time reasoning for new tasks. This scheme not only reduces manual intervention costs but also provides a technical basis for knowledge-driven optimization of semantic communication systems. Future work can further explore the generalization ability of LLMs in complex task scenarios, optimize model parameters to reduce the “hallucination” phenomenon, and enhance the reliability and practical application value of the knowledge base.

Author Contributions

Conceptualization, C.G. and J.Y.; Data curation, C.G.; Formal analysis, Y.L.; Funding acquisition, C.G. and J.Y.; Investigation, J.L.; Methodology, C.G. and C.W.; Project administration, C.G. and J.Y.; Resources, Z.L.; Software, C.G.; Supervision, C.W. and J.Y.; Validation, C.G. and J.L.; Visualization, C.G. and W.G.; Writing—original draft, C.G.; Writing—review and editing, J.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China Grant No. 62402505, the Independent Innovation Science Fund of National University of Defense Technology under Grant No. 22-ZZCX-055 and the Graduate Research Innovation Project of National University of Defense Technology under Grant No. XJZH2024006.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

Table A1, Table A2, Table A3, Table A4 and Table A5 in the appendix correspond to Table 1, Table 2, Table 3, Table 4 and Table 5 in the main text, which contain the raw data from the LLM output. Figure A1 in the appendix corresponds to Figure 4 in the main text and is the output result of Neo4j.
Table A1. Partial results for the corpus collected by the LLMs.
Input: 你的任务是搜集无人机航拍技术在应急救援中应用的新闻报道、官方通告,要求以独立段落形式呈现,每一段均描述了任务背景(如灾后损失评估)和航拍到的实体目标(如林木、房屋等)。
Output (DeepSeek): 1.背景:2025年春节期间无人机禁飞管理
为保障春节期间系列活动的安全,2025年2月1日04时至08时30分,在八一大道以东、广场南路以西等区域设立临时禁飞区,禁止无人机等“低慢小”航空器飞行。此举旨在防止航拍等行为干扰活动秩序,确保公共安全。
2.背景:海南省台风“摩羯”灾后评估
2024年9月,海南省遭遇台风“摩羯”袭击,自然资源部第七地形测量队使用4架无人机对海口市、文昌市等受灾区域进行航拍,获取了326张照片和55个视频,清晰记录了房屋、风力发电机组、植被及养殖场的损毁情况,为灾后恢复提供了科学依据。
(ERNIE Bot) 1.无人机助力灾后损失评估
任务背景:某地区遭受严重洪水灾害,导致大片农田被淹,房屋受损严重。为了准确评估灾害损失,当地应急管理部门迅速启动了无人机航拍计划。
航拍实体目标:无人机对受灾区域进行了全面航拍,重点拍摄了被淹的农田、倒塌的房屋以及受损的道路和桥梁,为后续的灾害损失评估提供了详实的数据支持。
(iFLTTEK) 新闻报道一
在【具体日期】发生的【灾害名称】中,受灾区域面临着严重的灾情,急需对灾后损失进行全面评估,以便制定后续救援和重建计划。为此,应急救援部门迅速出动了配备先进无人机航拍技术的团队。
无人机缓缓升空,开始在受灾区域上空进行全方位、多角度的航拍作业。从高空俯瞰,原本郁郁葱葱的山林如今部分区域出现了大片林木倾倒的现象,一些粗壮的树木横七竖八地躺在地上,树枝断裂散落一地,山体也因滑坡出现了不同程度的土石裸露。而在村落区域,许多房屋受到了不同程度的损坏,有的房屋屋顶被掀开,瓦片散落在周围;有的房屋墙体出现裂缝甚至部分倒塌,屋内的家具和生活用品也被掩埋在废墟之中。通过无人机航拍传回的高清图像和视频资料,救援人员能够清晰地了解到各个区域的受灾情况,为精准评估灾后损失提供了有力依据。
(DeepSeek with few-shot learning) 1.西藏日喀则地震灾情侦察
2025年1月7日,西藏日喀则市定日县发生6.8级地震,震源深度10公里,造成大量房屋倒塌和人员伤亡。西藏自治区应急管理厅紧急调派翼龙-2H应急救灾型无人机,对震中区域进行全面灾情侦察。无人机通过搭载光电设备和合成孔径雷达,成功捕捉到房屋倒塌、人员受困点位、救援安置点布局等关键信息,并在夜间低能见度条件下提供稳定图像和数据,为抢险救援提供了坚实支持。
2.广州春节应急管理
2025年春节期间,广州市应急指挥中心利用无人机对白鹅潭烟花汇演现场进行实时监控,通过高清摄像头和AI算力云台相机识别人流、车流,统计数量,及时发现人群聚集、拥堵等安全隐患。同时,无人机还对火炉山、白水寨等重点林区进行巡航,监测森林火险,劝阻野外违规用火行为,并通过热成像仪实时监测火情隐患。
Table A2. Partial results for the entity extracted by the LLMs.
Input: 你的任务是处理下列所有文本,要求用【】标记每一段文本中的有关无人机拍摄到的实体目标的描述(如倒塌的建筑物),并再每一段文本最后给出任务背景和拍摄到的实体(如地震灾后救援-建筑物、道路)
Output (DeepSeek): 加州山火中,无人机利用红外传感器实时捕捉【火源位置】及【火势扩散方向】,监测【林木燃烧范围】和【烟雾覆盖区域】,辅助消防部门制定灭火策略。
任务背景:山火救援;拍摄到的实体:火源位置、火势方向、林木燃烧范围、烟雾覆盖区域
四川仁和区暴雨灾害后,无人机拍摄【山体滑坡区域的地形变化】及【植被破坏情况】,生成高精度三维模型辅助灾后重建规划。
任务背景:暴雨灾后重建;拍摄到的实体:山体滑坡地形变化、植被破坏
重庆通过无人机航拍记录【城市道路】、【建筑】及【河流】现状,为旧城改造和绿化工程提供基础地理信息,提升规划效率。
任务背景:城市规划监测;拍摄到的实体:道路、建筑、河流
(ERNIE Bot) 加州山火中,无人机利用红外传感器实时捕捉【火源位置】及【火势扩散方向】,监测【林木燃烧范围】和【烟雾覆盖区域】,辅助消防部门制定灭火策略。(加州山火-火源、火势、林木燃烧范围、烟雾覆盖区域)
四川仁和区暴雨灾害后,无人机拍摄【山体滑坡区域的地形变化】及【植被破坏情况】,生成高精度三维模型辅助灾后重建规划。(四川仁和区暴雨灾害-山体滑坡区域、植被破坏情况)
重庆通过无人机航拍记录【城市道路】、【建筑】及【河流现状】,为旧城改造和绿化工程提供基础地理信息,提升规划效率。(重庆旧城改造-城市道路、建筑、河流)
(iFLTTEK) 加州山火中,无人机利用红外传感器实时捕捉火源位置及火势扩散方向,监测林木燃烧范围和烟雾覆盖区域,辅助消防部门制定灭火策略。
四川仁和区暴雨灾害后,无人机拍摄【山体滑坡区域的地形变化及植被破坏情况】,生成高精度三维模型辅助灾后重建规划。
重庆通过无人机航拍记录城市道路、建筑及河流现状,为旧城改造和绿化工程提供基础地理信息,提升规划效率。
Table A3. Partial results for LLMs with few-shot learning techniques to extract entities.
Input: 1. 2023年新疆独库公路部署无人机,定期扫描12处【高边坡裂缝】、8座【桥梁墩台】位移,发现3处【路面】塌陷前兆,巡检效率较人工提升4倍,养护成本降低45%。(公路巡检-高边坡裂缝、桥梁墩台、路面)
2…
你的任务是参考上述案例处理下列所有文本,要求用【】标记每一段文本中的有关无人机拍摄到的实体目标的描述(如倒塌的建筑物),并在每一段文本最后给出任务背景和拍摄到的实体(如地震灾后救援-建筑物、道路、电力线路、山体)。文本如下:
Output (DeepSeek): 加州山火中,无人机利用红外传感器实时捕捉【火源位置】及【火势扩散方向】,监测【林木燃烧范围】和【烟雾覆盖区域】,辅助消防部门制定灭火策略。(山火救援-火源点、火势方向、林木、烟雾源)
四川仁和区暴雨灾害后,无人机拍摄【山体滑坡区域】的【地形变化】及【植被破坏情况】,生成高精度三维模型辅助灾后重建规划。(暴雨灾害监测-山体、地形、植被)
重庆通过无人机航拍记录城市【道路】、【建筑】及【河流】现状,为旧城改造和绿化工程提供基础地理信息,提升规划效率。(城市规划巡检-道路、建筑、河流)
ERNIE Bot: 加州山火中,无人机利用红外传感器实时捕捉【火源位置】及【火势扩散方向】,监测【林木燃烧范围】和【烟雾覆盖区域】,辅助消防部门制定灭火策略。
(山火救援-火源、火势、林木、烟雾)
四川仁和区暴雨灾害后,无人机拍摄【山体滑坡区域的地形变化】及【植被破坏情况】,生成高精度三维模型辅助灾后重建规划。
(暴雨灾后救援-山体、植被)
重庆通过无人机航拍记录【城市道路】、【建筑】及【河流现状】,为旧城改造和绿化工程提供基础地理信息,提升规划效率。
(城市规划-道路、建筑、河流)
iFLYTEK: 加州山火中,无人机利用红外传感器实时捕捉【火源位置及火势扩散方向】,监测【林木燃烧范围和烟雾覆盖区域】,辅助消防部门制定灭火策略。(山火预警监测-火源点、林木)
四川仁和区暴雨灾害后,无人机拍摄【山体滑坡区域的地形变化及植被破坏情况】,生成高精度三维模型辅助灾后重建规划。(地质灾害救援-山体滑坡)
重庆通过无人机航拍记录城市【道路】、【建筑及河流现状】,为旧城改造和绿化工程提供基础地理信息,提升规划效率。(城市规划-道路、建筑、河流)
Table A4. Partial results of relationship analysis by LLMs.
Input: 分析参考资料,分析时请平等考虑实体出现的频率和对实体的描述,为下列实体与任务的相关性赋分(0–10分),如地震应急救援-建筑物-10分。
Output:
实体 | DeepSeek | ERNIE Bot | iFLYTEK
房屋 | 10 | 10 | 10
道路 | 10 | 10 | 10
农田 | 9 | 10 | 9
人员 | 9 | 10 | 8
林木 | 5 | 6 | 7
桥梁 | 8 | 8 | 7
水位线 | 7 | 10 | 7
Table A5. Results of task inference based on LLMs.
Input: 以上是任务-实体-相关性分数形式的三元组,描述了不同任务下不同实体的任务相关性分数。请根据上述资料推理城市爆炸应急救援、冻雨交通疏导、暴风雪应急救援三个任务中可能存在的实体,并以任务-实体-分数的形式输出。
Output:
任务 | DeepSeek | ERNIE Bot | iFLYTEK
城市爆炸应急救援 | 建筑物 10 | 房屋 10 | 房屋 10
| 道路 9 | 道路 10 | 道路 8
| 人员 10 | 人员 8 | 桥梁 7
| 电力设施 8 | 桥梁 9 | 电力设施 9
| 桥梁断裂点 8 | 电力设施 8 | 人员 9
| 烟雾源 7 | | 建筑物 8
火焰 7
基础设施 7
冻雨交通疏导 | 道路 10 | 道路 10 | 道路 10
| 桥梁 9 | 桥梁 8 | 桥梁 8
| 车辆 7 | 电力线路 9 | 车辆 7
| 电力线路 8 | | 树木 6
| 铁塔 7 | | 基础设施 7
| 基础设施 7 | | 电线 7
路面结冰带 9
暴风雪应急救援 | 道路 10 | 房屋 10 | 房屋 10
| 建筑物 9 | 道路 10 | 道路 9
| 人员 9 | 桥梁 9 | 桥梁 8
| 电力设施 8 | 电力设施 8 | 林木 6
| 车辆 7 | 人员 9 | 电力设施 8
| 林木 7 | | 人员 8
群众 9
风力发电机组 6
Figure A1. Knowledge base generated by Neo4j.

References

  1. Shannon, C.; Weaver, W. The Mathematical Theory of Communication. Philos. Rev. 1949, 60, 398–400.
  2. Jiang, S.; Liu, Y.; Zhang, Y.; Luo, P.; Cao, K.; Xiong, J.; Zhao, H.; Wei, J. Reliable Semantic Communication System Enabled by Knowledge Graph. Entropy 2022, 24, 846.
  3. Liang, J.; Xiao, Y.; Li, Y.; Shi, G.; Bennis, M. Life-Long Learning for Reasoning-Based Semantic Communication. In Proceedings of the 2022 IEEE International Conference on Communications Workshops (ICC Workshops), Seoul, Republic of Korea, 16 May 2022; pp. 271–276.
  4. Xiao, Y.; Li, Y.; Shi, G.; Poor, H.V. Reasoning on the Air: An Implicit Semantic Communication Architecture. In Proceedings of the 2022 IEEE International Conference on Communications Workshops (ICC Workshops), Seoul, Republic of Korea, 16 May 2022; pp. 289–294.
  5. Li, A.; Wei, X.; Wu, D.; Zhou, L. Cross-Modal Semantic Communications. IEEE Wirel. Commun. 2022, 29, 144–151.
  6. Chen, H.; Luo, X. An Automatic Literature Knowledge Graph and Reasoning Network Modeling Framework Based on Ontology and Natural Language Processing. Adv. Eng. Inform. 2019, 42, 100959.
  7. Floridi, L.; Chiriatti, M. GPT-3: Its Nature, Scope, Limits, and Consequences. Minds Mach. 2020, 30, 681–694.
  8. Kommineni, V.K.; König-Ries, B.; Samuel, S. From Human Experts to Machines: An LLM Supported Approach to Ontology and Knowledge Graph Construction. arXiv 2024, arXiv:2403.08345.
  9. Zhang, Y.; Hao, Y. Traditional Chinese Medicine Knowledge Graph Construction Based on Large Language Models. Electronics 2024, 13, 1395.
  10. Shi, G.; Gao, D.; Song, X.; Chai, J.; Yang, M.; Xie, X.; Li, L.; Li, X. A New Communication Paradigm: From Bit Accuracy to Semantic Fidelity. arXiv 2021, arXiv:2101.12649.
  11. Zhou, F.; Li, Y.; Xu, M.; Yuan, L.; Wu, Q.; Hu, R.Q.; Al-Dhahir, N. Cognitive Semantic Communication Systems Driven by Knowledge Graph: Principle, Implementation, and Performance Evaluation. IEEE Trans. Commun. 2023, 72, 193–208.
  12. Yang, Y.; Guo, C.; Liu, F.; Liu, C.; Sun, L.; Sun, Q.; Chen, J. Semantic Communications With Artificial Intelligence Tasks: Reducing Bandwidth Requirements and Improving Artificial Intelligence Task Performance. IEEE Ind. Electron. Mag. 2023, 17, 4–13.
  13. Hu, Q.; Zhang, G.; Qin, Z.; Cai, Y.; Yu, G.; Li, G.Y. Robust Semantic Communications Against Semantic Noise. In Proceedings of the 2022 IEEE 96th Vehicular Technology Conference (VTC2022-Fall), London, UK, 26–29 September 2022; pp. 1–6.
  14. Guo, S.; Wang, Y.; Li, S.; Saeed, N. Semantic Importance-Aware Communications Using Pre-Trained Language Models. IEEE Commun. Lett. 2023, 27, 2328–2332.
  15. Jiang, F.; Peng, Y.; Dong, L.; Wang, K.; Yang, K.; Pan, C.; You, X. Large AI Model-Based Semantic Communications. IEEE Wirel. Commun. 2024, 31, 68–75.
  16. Guo, C.; Xi, J.; He, Z.; Liu, J.; Yang, J. A Knowledge Base Driven Task-Oriented Image Semantic Communication Scheme. Remote Sens. 2024, 16, 4044.
  17. Guo, S.; Wang, Y.; Zhang, P. Signal Shaping for Semantic Communication Systems with a Few Message Candidates. In Proceedings of the 2022 IEEE 96th Vehicular Technology Conference (VTC2022-Fall), London, UK, 26–29 September 2022; pp. 1–5.
  18. Jiang, F.; Dong, L.; Peng, Y.; Wang, K.; Yang, K.; Pan, C.; You, X. Large AI Model Empowered Multimodal Semantic Communications. IEEE Commun. Mag. 2024, 63, 76–82.
  19. Nam, H.; Park, J.; Choi, J.; Bennis, M.; Kim, S.-L. Language-Oriented Communication with Semantic Coding and Knowledge Distillation for Text-to-Image Generation. In Proceedings of the ICASSP 2024–2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Seoul, Republic of Korea, 14–19 April 2024; pp. 13506–13510.
  20. Zhang, Y.; Li, Y.; Cui, L.; Cai, D.; Liu, L.; Fu, T.; Huang, X.; Zhao, E.; Zhang, Y.; Chen, Y.; et al. Siren’s Song in the AI Ocean: A Survey on Hallucination in Large Language Models. arXiv 2023, arXiv:2309.01219.
  21. Ni, X.; Li, P.; Li, H. Unified Text Structuralization with Instruction-Tuned Language Models. arXiv 2023, arXiv:2303.14956.
  22. Hu, Y.; Chen, Q.; Du, J.; Peng, X.; Keloth, V.K.; Zuo, X.; Zhou, Y.; Li, Z.; Jiang, X.; Lu, Z.; et al. Improving Large Language Models for Clinical Named Entity Recognition via Prompt Engineering. J. Am. Med. Inform. Assoc. 2024, 31, 1812–1820.
Figure 1. Schematic structural diagram.
Figure 2. Flowchart of knowledge processing.
Figure 3. Flowchart of knowledge base generation.
Figure 4. Knowledge base generated by Neo4j.
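As an illustration of how task–entity–relevance triplets could be loaded into a Neo4j knowledge base such as the one in Figure 4, the sketch below builds parameterized Cypher MERGE statements. The node labels (`Task`, `Entity`), the `CAPTURES` relationship, and the property names are our own assumptions for illustration, not the paper's actual schema.

```python
# Illustrative sketch: loading task-entity-relevance triplets into Neo4j.
# Labels, relationship type, and property names are assumed, not the paper's schema.

def triplet_to_cypher(task: str, entity: str, score: int):
    """Build one parameterized Cypher MERGE statement for a triplet."""
    query = (
        "MERGE (t:Task {name: $task}) "
        "MERGE (e:Entity {name: $entity}) "
        "MERGE (t)-[r:CAPTURES]->(e) "
        "SET r.relevance = $score"
    )
    return query, {"task": task, "entity": entity, "score": score}

# Example triplets taken from Table 4.
triplets = [
    ("Earthquake Emergency Rescue", "Buildings", 10),
    ("Earthquake Emergency Rescue", "Roads", 10),
]
statements = [triplet_to_cypher(*t) for t in triplets]

# With the official `neo4j` Python driver, the statements would then be run as:
#   with driver.session() as session:
#       for query, params in statements:
#           session.run(query, **params)
```

Using MERGE rather than CREATE makes the load idempotent, which also supports the dynamic-updating stage: re-inserting an existing task–entity pair only overwrites its relevance score.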
Table 1. Partial results of the corpus collected by the LLMs.
Input: Your task is to collect news reports and official announcements regarding the application of aerial drone photography technology in emergency rescue operations. Each entry should be presented as a separate paragraph, with each paragraph explicitly describing both (1) the mission context (e.g., post-disaster damage assessment) and (2) the physical entities captured by aerial photography (e.g., vegetation, residential buildings, etc.).
Output (DeepSeek): 1. Background: UAV No-Fly Zone Management During 2025 Spring Festival
To ensure the safety of festival events during the Spring Festival period, a temporary no-fly zone will be established from 04:00 to 08:30 on 1 February 2025, covering areas east of Bayi Avenue and west of Guangchang South Road. This measure prohibits the operation of drones and other low-altitude, slow-speed, small-sized (LSS) aircraft to prevent aerial photography from disrupting event operations and maintain public safety.
2. Background: Post-Disaster Assessment of Typhoon “Yagi” in Hainan Province
In September 2024, Hainan Province was struck by Typhoon “Yagi.” The Seventh Topographic Survey Team of the Ministry of Natural Resources deployed four unmanned aerial vehicles (UAVs) to conduct aerial surveys of affected areas, including Haikou City and Wenchang City. The operation captured 326 photos and 55 videos that clearly documented damage to houses, wind turbines, vegetation, and aquaculture farms. This data provides a scientific basis for post-disaster recovery efforts.
ERNIE Bot: 1. Drones Assist in Post-Disaster Damage Assessment
Mission Background: A region was struck by a severe flood disaster, resulting in large areas of submerged farmland and severely damaged houses. To accurately assess disaster impact, the local emergency management department promptly initiated a drone aerial survey program.
Aerial Survey Objectives: Drones conducted a comprehensive aerial survey of the affected areas, focusing on capturing submerged farmland, collapsed houses, and damaged roads and bridges. The operation provided detailed data support for subsequent damage assessment and recovery planning.
iFLYTEK: News Report 1
During the [disaster name] that occurred on [specific date], the affected areas faced severe devastation, necessitating a comprehensive post-disaster damage assessment to formulate follow-up rescue and reconstruction plans. To address this, the emergency response department swiftly deployed a team equipped with advanced drone aerial survey technology.
The drones ascended steadily and began conducting comprehensive multi-angle aerial surveys over the disaster-stricken zones. From an aerial perspective, large swathes of once-lush forests now showed extensive tree collapse. In some areas, large trees lay haphazardly on the ground with broken branches scattered widely. Landslides exposed bare soil and rocks on slopes of varying severity. In village areas, numerous houses sustained damage: some had roofs torn off, with tiles strewn around; others exhibited cracked walls or partial collapse, burying furniture and household items under debris. The high-resolution images and video footage transmitted by the drones enabled rescue teams to clearly identify damage distribution patterns, providing critical evidence for precise damage quantification and resource allocation.
DeepSeek with few-shot learning: 1. Earthquake Damage Reconnaissance in Shigatse, Tibet
On 7 January 2025, a magnitude 6.8 earthquake struck Dingri County, Shigatse City, Tibet, with a focal depth of 10 km, causing widespread building collapse and casualties. The Department of Emergency Management of Tibet Autonomous Region urgently deployed the Wing Loong-2H emergency relief UAV to conduct a full-scale damage assessment of the epicenter area. Equipped with electro-optical devices and synthetic aperture radar (SAR), the drone successfully captured critical data, including collapsed buildings, trapped personnel locations, and relief camp layouts. It also provided stable imaging and real-time data transmission under nighttime low-visibility conditions, offering robust support for rescue operations.
2. Spring Festival Emergency Management in Guangzhou
During the 2025 Spring Festival, Guangzhou’s Emergency Command Center utilized drones for real-time monitoring of the Baietan fireworks display. Using AI-powered cloud platform cameras and high-definition imaging, the drones analyzed crowd and vehicle flows, quantified densities, and promptly identified risks, such as overcrowding and traffic bottlenecks. Simultaneously, drones patrolled key forested areas, like Hulu Mountain and Baishuizhai, to monitor wildfire risks, deter illegal outdoor fire use, and detect potential fire hazards via thermal imaging sensors.
Table 2. Partial results of the entities extracted by the LLMs.
Input: Your task is to process all the following texts, using [] to mark descriptions of physical targets captured by drone photography (e.g., collapsed buildings) in each paragraph. At the end of each paragraph, provide the mission context and captured entities (e.g., post-earthquake rescue—buildings, roads).
Output (DeepSeek): During California’s wildfires, drones utilized infrared sensors to capture real-time data on [fire source locations] and [directions of fire spread], monitoring [forest burn areas] and [smoke-covered regions], assisting fire departments in formulating firefighting strategies.
Mission Context: Wildfire Rescue; Captured Entities: fire source locations, fire spread direction, burning vegetation extent, smoke coverage areas
After the rainstorm disaster in Renhe District, Sichuan, drones captured [topographic changes in landslide areas] and [vegetation damage], generating high-precision 3D models to assist in post-disaster reconstruction planning.
Mission Context: Post-Rainstorm Disaster Reconstruction; Captured Entities: terrain changes in landslide areas, vegetation destruction
Chongqing used drone aerial photography to record the current status of [urban roads], [buildings], and [rivers], providing basic geographic information for old city renovation and greening projects, enhancing planning efficiency.
Mission Context: Urban Planning Monitoring; Captured Entities: roads, buildings, rivers
ERNIE Bot: During California’s wildfires, drones utilized infrared sensors to capture real-time data on [fire source locations] and [directions of fire spread], monitoring [forest burn areas] and [smoke-covered regions], assisting fire departments in formulating firefighting strategies. (California Wildfires—fire sources, fire spread, forest burn areas, smoke coverage)
After the rainstorm disaster in Renhe District, Sichuan, drones captured [topographic changes in landslide areas] and [vegetation damage], generating high-precision 3D models to assist in post-disaster reconstruction planning. (Renhe District Rainstorm Disaster—landslide areas, vegetation damage)
Chongqing used drone aerial photography to record the current status of [urban roads], [buildings], and [rivers], providing basic geographic information for old city renovation and greening projects, enhancing planning efficiency. (Chongqing Old City Renovation—urban roads, buildings, rivers)
iFLYTEK: During California’s wildfires, drones utilized infrared sensors to capture real-time data on fire source locations and directions of fire spread, monitoring forest burn areas and smoke-covered regions, assisting fire departments in formulating firefighting strategies.
After the rainstorm disaster in Renhe District, Sichuan, drones captured [topographic changes in landslide areas and vegetation damage], generating high-precision 3D models to assist in post-disaster reconstruction planning.
Chongqing used drone aerial photography to record the current status of urban roads, buildings, and rivers, providing basic geographic information for old city renovation and greening projects, enhancing planning efficiency.
Table 3. Partial results of entity extraction by LLMs with few-shot learning.
Input: 1. In 2023, Xinjiang’s Duku Highway deployed drones to regularly scan 12 locations of [high slope cracks] and monitor displacement at 8 [bridge piers], detecting 3 precursors of [road surface] collapse. This improved inspection efficiency by 4 times compared to manual methods and reduced maintenance costs by 45%. (Highway inspection—high slope cracks, bridge piers, road surfaces)
2…
Your task is to process all the following texts according to the example above: use [] to mark each description of an entity target captured by the drone (e.g., collapsed buildings), and at the end of each paragraph give the task background and the entities captured (e.g., earthquake rescue—buildings, roads, power lines, mountains). The texts are as follows:
Output (DeepSeek): During California’s wildfires, drones utilized infrared sensors to capture real-time data on [fire source locations] and [directions of fire spread], monitoring [forest burn areas] and [smoke-covered regions], assisting fire departments in formulating firefighting strategies. (Wildfire Rescue—Fire Source, Fire Spread, Forest, Smoke)
After the rainstorm disaster in Renhe District, Sichuan, drones captured [topographic changes in landslide areas] and [vegetation damage], generating high-precision 3D models to assist in post-disaster reconstruction planning. (Rainstorm Disaster Monitoring—Mountains, Terrain, Vegetation)
Chongqing used drone aerial photography to record the current status of urban [roads], [buildings], and [rivers], providing basic geographic information for old city renovation and greening projects, enhancing planning efficiency. (Urban Planning Inspection—Roads, Buildings, Rivers)
ERNIE Bot: During California’s wildfires, drones utilized infrared sensors to capture real-time data on [fire source locations] and [directions of fire spread], monitoring [forest burn areas] and [smoke-covered regions], assisting fire departments in formulating firefighting strategies. (Wildfire Rescue—Fire Source, Fire Spread, Forest, Smoke)
After the rainstorm disaster in Renhe District, Sichuan, drones captured [topographic changes in landslide areas] and [vegetation damage], generating high-precision 3D models to assist in post-disaster reconstruction planning. (Rainstorm Disaster Monitoring—Mountains, Vegetation)
Chongqing used drone aerial photography to record the current status of urban [roads], [buildings], and [rivers], providing basic geographic information for old city renovation and greening projects, enhancing planning efficiency. (Urban Planning Inspection—Roads, Buildings, Rivers)
iFLYTEK: During California’s wildfires, drones utilized infrared sensors to capture real-time data on [fire source locations and directions of fire spread], monitoring [forest burn areas and smoke-covered regions], assisting fire departments in formulating firefighting strategies. (Wildfire Rescue—Fire Source, Forest)
After the rainstorm disaster in Renhe District, Sichuan, drones captured [topographic changes in landslide areas and vegetation damage], generating high-precision 3D models to assist in post-disaster reconstruction planning. (Rainstorm Disaster Monitoring—Landslides)
Chongqing used drone aerial photography to record the current status of urban [roads], [buildings, and rivers], providing basic geographic information for old city renovation and greening projects, enhancing planning efficiency. (Urban Planning Inspection—Roads, Buildings, Rivers)
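The few-shot setup of Table 3 (worked example, then instruction, then target texts) can be sketched as a simple prompt builder. The example and instruction strings below are paraphrases for illustration, not the authors' exact prompt.

```python
# Sketch of the few-shot prompting pattern shown in Table 3: worked examples
# are prepended to the task instruction before the target texts are appended.
# EXAMPLES and INSTRUCTION are paraphrased assumptions, not the exact prompt.

EXAMPLES = [
    "1. In 2023, Xinjiang's Duku Highway deployed drones to regularly scan "
    "[high slope cracks] and [bridge piers]. (Highway inspection—high slope "
    "cracks, bridge piers, road surfaces)",
]

INSTRUCTION = (
    "Process the following texts according to the examples above: mark each "
    "entity captured by the drone with [] and end each paragraph with the "
    "task background and the entities captured."
)

def build_few_shot_prompt(texts):
    """Concatenate examples, instruction, and target texts into one prompt."""
    return "\n\n".join(EXAMPLES + [INSTRUCTION] + texts)
```

Placing the worked example before the instruction anchors the expected output format, which is what stabilized the bracket-marking behavior of the three LLMs in Table 3 relative to Table 2.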
Table 4. Partial results of relationship analysis by LLMs.
Input: Analyze the reference materials; in your analysis, give equal weight to the frequency with which each entity appears and to how it is described. Assign a relevance score (0–10) to each of the following entities in relation to the task, e.g., Earthquake Emergency Rescue—Buildings—10 points.
Output:
Entity | DeepSeek | ERNIE Bot | iFLYTEK
Buildings | 10 | 10 | 10
Roads | 10 | 10 | 10
Farmland | 9 | 10 | 9
Personnel | 9 | 10 | 8
Forest | 5 | 6 | 7
Bridges | 8 | 8 | 7
Water level line | 7 | 10 | 7
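The task–entity–score answer format requested in Table 4 can be parsed back into machine-readable triplets with a small helper. The delimiter handling below is an assumption: it accepts an em-dash or a plain hyphen as the separator, so entity names that themselves contain hyphens would mis-split.

```python
# Sketch: parsing the "task-entity-score" answer lines requested in Table 4
# into (task, entity, score) triplets. Delimiter handling is an assumption.
import re

def parse_relevance_line(line: str):
    """Split e.g. 'Earthquake Emergency Rescue—Buildings—10 points'."""
    parts = re.split(r"[—-]", line)   # accept em-dash or hyphen as delimiter
    if len(parts) < 3:
        raise ValueError(f"unrecognized line: {line!r}")
    score = int(re.search(r"\d+", parts[2]).group())  # take the first number
    return parts[0].strip(), parts[1].strip(), score
```

The resulting triplets are exactly the form consumed by the knowledge-base generation step.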
Table 5. Results of task inference based on LLMs.
Input: The above are task–entity–relevance score triplets that describe the relevance scores of different entities under various tasks. Based on the provided data, please infer potential entities for the tasks “Urban Explosion Emergency Rescue,” “Freezing Rain Traffic Management,” and “Blizzard Emergency Rescue,” and output them in task–entity–score format.
Output:
Task | DeepSeek | ERNIE Bot | iFLYTEK
Urban Explosion Emergency Rescue | Buildings 10 | Houses 10 | Houses 10
| Roads 9 | Roads 10 | Roads 8
| Personnel 10 | Personnel 8 | Bridges 7
| Power facilities 8 | Bridges 9 | Power facilities 9
| Bridge fracture points 8 | Power facilities 8 | Personnel 9
| Smoke sources 7 | | Buildings 8
Flames 7
Infrastructure 7
Freezing Rain Traffic Management | Roads 10 | Roads 10 | Roads 10
| Bridges 9 | Bridges 8 | Bridges 8
| Vehicles 7 | Power lines 9 | Vehicles 7
| Power lines 8 | | Trees 6
| Electric towers 7 | | Infrastructure 7
| Infrastructure 7 | | Power lines 7
Road ice zones 9
Blizzard Emergency Rescue | Roads 10 | Houses 10 | Houses 10
| Buildings 9 | Roads 10 | Roads 9
| Personnel 9 | Bridges 9 | Bridges 8
| Power facilities 8 | Power facilities 8 | Power facilities 6
| Vehicles 7 | Personnel 9 | Personnel 8
| Trees 7 | | Houses 8
Civilians 9
Wind turbine units 6
Table 6. Test results for different schemes.
LLM | Prompt | Precision | Recall | F1-Score
DeepSeek | Assigning tasks directly | 75.48% | 90.26% | 89.97%
DeepSeek | Assigning tasks + few-shot learning | 89.68% | 92.36% | 88.96%
ERNIE Bot | Assigning tasks directly | 79.35% | 87.79% | 83.34%
ERNIE Bot | Assigning tasks + few-shot learning | 86.45% | 79.59% | 77.48%
iFLYTEK | Assigning tasks directly | - | - | -
iFLYTEK | Assigning tasks + few-shot learning | 85.81% | 91.78% | 89.04%
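A minimal sketch of the precision, recall, and F1 computation behind figures like those in Table 6, assuming set-based exact matching of extracted entity names against a gold reference (the matching criterion is an assumption; the paper does not state it here).

```python
# Minimal sketch of precision/recall/F1 for entity extraction, assuming
# set-based exact matching against a gold reference (our assumption).

def extraction_metrics(predicted, gold):
    """Compute precision, recall, and F1 for one extraction sample."""
    predicted, gold = set(predicted), set(gold)
    tp = len(predicted & gold)                       # true positives
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {"precision": precision, "recall": recall, "f1": f1}

# Hypothetical sample: 3 entities predicted, 4 in the gold reference, 2 shared.
m = extraction_metrics({"buildings", "roads", "smoke"},
                       {"buildings", "roads", "rivers", "bridges"})
```

Corpus-level figures would micro-average true positives and set sizes over all samples before applying the same formulas.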

Share and Cite


Guo, C.; Liu, J.; Gao, W.; Lu, Z.; Li, Y.; Wang, C.; Yang, J. A Large Language Model Driven Knowledge Graph Construction Scheme for Semantic Communication. Appl. Sci. 2025, 15, 4575. https://doi.org/10.3390/app15084575
