Article

Research on the Automatic Generation of Information Requirements for Emergency Response to Unexpected Events

1 Graduate School, National University of Defense Technology, Changsha 410005, China
2 Department of Information Services, Information Support Force Engineering University, Wuhan 430030, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2025, 15(22), 11953; https://doi.org/10.3390/app152211953
Submission received: 29 September 2025 / Revised: 2 November 2025 / Accepted: 4 November 2025 / Published: 11 November 2025

Abstract

Making scientific and correct decisions is critical when responding to emergency events, and the formulation of information requirements is an essential prerequisite for such decisions. Taking earthquakes as a typical unexpected event, this paper constructs a large-language-model-driven system for automatically generating the information requirements of earthquake emergency response. The research examines how the participating departments interact during an earthquake emergency response, how information flows among them, and how the information requirement process operates. The system is designed from three perspectives: building a knowledge base, designing prompts, and designing the system architecture, thereby showing how information requirements for unexpected emergencies can be generated automatically. In the experiments, four Large Language Models (LLMs) were used as backbone architectures: chatGLM (GLM-4.6), Spark (SparkX1.5), ERNIE Bot (4.5 Turbo), and DeepSeek (V3.2). Following the designed system workflow, information requirements were generated for a real-world case and compared against the information requirements compiled by experts. In the comparison, semantic similarity was calculated with the "keyword weighted matching + text structure feature fusion" method; true positives, false positives, and false negatives were counted to quantify differences, and precision, recall, and the F1-score were computed. The experimental results show that all four LLMs achieved precision and recall above 90% in earthquake information extraction, with F1-scores all exceeding 85%, verifying the feasibility of the analytical method adopted in this research. Comparative analysis shows that chatGLM performed best, with an F1-score of 93.2%. Python scripts implement the above processes and produce comprehensive comparison charts for visual inspection of the test results. In addition, Protege was used to construct an ontology of the information interaction relationships so that they can be presented and examined conveniently. This research is particularly useful for emergency management departments, earthquake emergency response teams, and those working on intelligent emergency information systems or on automated information requirement generation using technologies such as LLMs. It provides practical support for optimizing rapid decision-making in earthquake emergency response.

1. Introduction

Unexpected events occur suddenly, develop rapidly over a short period, and have major negative effects on society and on the lives and property of the public [1]. Such events are typically uncertain, highly destructive, and abrupt. Under these conditions, emergency response teams must make decisions and allocate resources to carry out emergency response work as quickly as possible.
Emergency response ultimately depends on efficient and scientific decision making, and whether those decisions are scientific is closely tied to how much information is mastered and used. Information work therefore plays an important role in dealing with all kinds of unexpected events. During earthquake rescue in particular, the timely acquisition and transmission of earthquake and disaster information is one of the key duties running from the beginning to the end of the rescue process [2]. Many departments and forces participate, and all of them demand immediate access to multi-dimensional information on the what, where, when, why, and how of the event, covering everything from the baseline facts of the incident to its nature and extent, the deployed resources, and the relevant geographic context. Accurate collection and effective application of this information starts from a correct understanding and a clear division of information needs. The formulation of information requirements is thus a critical part of emergency response, and its quality has a direct impact on the subsequent collection, analysis, and use of information.
However, in the existing traditional model, information requirements are mainly sorted out manually, which causes many problems and greatly impairs emergency response speed. These problems are mainly reflected in a lack of timeliness, poor knowledge transfer, and weak connectivity. Under these circumstances, research on the automatic generation of information requirements for emergency response to unexpected events has become a necessary choice to remedy the shortcomings of the old model.
Wen-juan Ma et al. proposed an earthquake big-data emergency dispatch platform based on the Internet of Things and cloud computing [3]; it builds a four-tier "perceive-network-support-application" structure and integrates multi-source data to improve monitoring and emergency efficiency, save costs, and support earthquake emergency decision-making. Pwavodi et al. pointed out that traditional monitoring methods have limitations: AI technologies such as Convolutional Neural Networks and Long Short-Term Memory can uncover patterns in earthquake data, Internet of Things sensors enable real-time monitoring, and combining the two enhances prediction potential [4]. Xu Xiangyu et al. analyzed such incidents that occurred in China from 2006 to 2020 and pointed out problems such as unclear baseline risks and an incomplete response framework [5]. Huang Lihong et al. surveyed applications of AI in earthquake science, such as earthquake event detection [6]. Artificial intelligence also performs well in seismic event identification tasks: existing studies have successfully applied machine learning and deep learning methods to distinguish artificial explosions from natural earthquakes [7,8], volcano-tectonic earthquakes from volcanic tremors [9], underground mining activities [10], tectonic tremors [11], and distant earthquakes from near earthquakes [12]. These applications demonstrate that AI technology holds broad prospects for seismic event identification. Saptadeep Biswas et al. proposed an AI-based framework to forecast earthquake relief demand for optimizing emergency resource distribution [13].
The application of these methods and technologies has significantly enhanced the efficiency of automated prediction and analysis for earthquakes and other emergent events, providing effective approaches for emergency response. Building on the aforementioned technologies, frameworks, and theoretical methods, this paper proposes a model that leverages Large Language Models (LLMs) to automatically generate information requirements for emergency events such as earthquakes. As an important part of the development of artificial intelligence, LLMs draw on their semantic understanding, knowledge reasoning, and generation abilities to provide full-chain support from data perception to cognitive reasoning in advanced decision-making processes [14].
Compared with traditional methods, LLMs have the following advantages for automatically generating information requirements.
1. High Timeliness. Currently, widely adopted research methods for understanding information requirements include interviews, observations, questionnaires, and various other approaches [15]. These methods are effective for understanding the information requirements of individuals or small groups, but they have clear limitations when it comes to public, rapid, and real-time information requirements [2]. The emergency response to unexpected events involves the participation of multiple departments. Taking earthquake emergency response as an example, after an earthquake occurs, government departments quickly initiate the national first-level earthquake emergency response. The main organizations responsible for leadership, command, and coordination include the Earthquake Relief Headquarters, the Earthquake Administration, the Ministry of Public Security, the Ministry of Finance, the Ministry of Housing and Urban-Rural Development, the National Health Commission, the China Meteorological Administration, and the Ministry of Industry and Information Technology. The participating departments and rescue forces have different responsibilities and information requirements. In the traditional mode, response personnel must spend considerable effort analyzing information requirements and transforming them into executable checklists, which cannot meet the high timeliness demands of emergency response to unexpected events. In recent years, the vigorous development of LLMs such as GPT-4 [16], with their excellent capabilities in text understanding and generation [17,18,19], has greatly promoted information requirement generation. Combining LLMs with the high-speed information processing capabilities of machines can effectively shorten the time required to generate information requirements, improve their coverage, and enhance generation efficiency.
2. Strong Knowledge Inheritance. In the field of earthquake emergency response, industry experts have conducted many investigations; earthquake emergency information includes earthquake situation information, background information on the affected area, disaster situation information, and information on emergency response and rescue efforts [2]. The information demands of the earthquake system, the government, and citizens also differ across the stages of earthquake emergency rescue. Traditionally, information requirements are generated through domain experts' analysis, drawing on their accumulated knowledge and experience. This method often leaves the inheritance of expert knowledge fragmentary and disconnected, and when specialists are transferred or new staff members join, the handover of work experience suffers, leading to much repetitive work and long cycles in formulating information requirements. LLMs can apply their knowledge reasoning abilities, combined with the large-capacity information storage of machines, so that response personnel are freed from filtering huge amounts of industry-related information and can focus on more creative decisions; knowledge inheritance then shifts from reliance on personal experience to the accumulation of a continuously developing knowledge store.
3. Strong Connectivity. Handling a sudden incident may involve many fields and departments. In the traditional model, generating information requirements runs into problems of interdisciplinary cooperation and cross-departmental resource management, and relying on manual coordination results in disorganized and unstable arrangements. Using LLMs to automatically generate information requirements can connect resources from different fields and departments and rapidly, collaboratively schedule the needs of all parties. It shifts traditional manual cooperative scheduling to a "data + rules + intelligence" model, greatly increasing work efficiency.
Therefore, we believe that using LLMs to automatically generate information requirements for emergencies can greatly improve the timeliness, knowledge inheritance, and connectivity of information requirements. This has irreplaceable theoretical and practical value for improving emergency response models.
The main content of this study includes the following aspects:
Firstly, we take the earthquake, a typical emergency event, as the research object and construct an automatic generation method system and framework for emergency information requirements built on "LLM driving + knowledge base support + prompt guidance". At the method level, we first integrate industry standards for earthquake emergency response, historical cases, information element lists, and time specification requirements to form a knowledge base, which provides the domain knowledge needed to generate information requirements.
Secondly, we set up a three-stage prompt engineering approach, namely "extraction of demand-providing units", "extraction of information and time elements", and "generation of structured information requirements". It guides the LLMs to extract information according to these rules.
Finally, we construct a three-layer system architecture consisting of an input layer, a processing layer, and an output layer. The input layer accepts manually entered key parameters of an earthquake event, the processing layer uses LLMs to match the content of the knowledge base and generate the core elements of the information requirements, and the output layer exports structured information requirements in Excel form.
For the verification environment and methods, we selected four mainstream LLMs (chatGLM, Spark, ERNIE Bot, and DeepSeek) as backbone architectures. Based on a standard template of earthquake information requirements manually compiled by industry experts, we constructed a semantic similarity calculation model that combines "keyword weighted matching (60%) + text structural feature fusion (40%)". Using a true positive (TP), false positive (FP), and false negative (FN) difference comparison, we compared the performance of the different LLMs with three main indicators: precision (P), recall (R), and F1-score. Python (3.13.7) was used for data calculation and visual analysis, and comprehensive comparison charts were obtained.
Experimental results show that all four LLMs can meet the basic requirements for generating information requirements for earthquake emergency response, with chatGLM showing the best overall performance. Furthermore, the structured knowledge base and the constructed prompt engineering were validated as greatly increasing the accuracy and pertinence of information extraction by LLMs. This substantially improves the timeliness of information requirement generation, the inheritance of knowledge, and cross-departmental connectivity, providing an information basis for scientific decision-making.
The rest of the paper is structured as follows: Section 2 introduces the research object of this study, the design of the system structure, and experimental methods such as comparative verification. Section 3 presents and analyzes the experimental results. Section 4 presents the discussion, and Section 5 draws conclusions. It should be noted that Appendix A describes the command, collaboration, and information interaction relationships among the different participants and forces in earthquake events, as well as the classification of information.

2. Methods

2.1. Research Object

This study focuses on earthquakes, a natural disaster, as its subject of research to explore the emergency response process and analyze the information requirements. Through an extensive review of literature and conducting thorough, detailed research, a systematic analysis and summary are performed on the activity relationships (Appendix A.1), information interaction relationships (Appendix A.2), and information requirements (Appendix A.3) among the participating departments and forces in emergency response. This offers theoretical support and a foundation for subsequent system design.

2.2. System Design

To enhance the timeliness, knowledge inheritance, and connectivity of information requirement generation for emergency responses to unexpected events, and to address the shortcomings of traditional models, the system design is based on an analysis of the emergency response process and information requirements for such events. Driven by open-source LLMs, it realizes a closed loop of event input to information requirement generation through the design of an input layer, a processing layer, and an output layer. Prior to this, the construction of a knowledge base and the design of prompt engineering must be completed first.

2.2.1. Knowledge Base Construction

To improve the level of industry knowledge inheritance and the knowledge extraction accuracy of LLMs, various industry-specific knowledge, expert knowledge, and related materials are compiled and organized into a knowledge base that can be continuously expanded and updated. To align with the information content laid out in the information requirements, the most prominent parts of the knowledge base are internal industry standards and historical response examples, information element lists, and time specification requirements. These form the basis on which the LLMs acquire and use domain-specific knowledge. The construction of the knowledge base is shown in Table 1.
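As an illustration only (the authors' actual storage format is not specified), the knowledge base categories of Table 1 could be held as structured records that prompts can reference; the field names below are assumptions and the entries are abbreviated examples:

```python
# Hypothetical structured representation of the knowledge base in Table 1.
# Category names follow Table 1; entries are abbreviated examples, not the full lists.
KNOWLEDGE_BASE = {
    "Internal Industry Standards and Historical Emergency Response Scenarios": [
        {"title": "Law on Earthquake Disaster Prevention and Mitigation", "ref": "[20]"},
        {"title": "Analysis of the Emergency Mode of the Earthquake Emergency Command Center", "ref": "[21]"},
    ],
    "Time Specification Requirements": [
        {"title": "National Earthquake Emergency Plan", "ref": "[24]"},
    ],
    "Information Element List": [
        {"title": "Discussion on the Classification of Earthquake Emergency Disaster Information", "ref": "[26]"},
    ],
}

def knowledge_excerpt(category: str) -> str:
    """Render one category as plain text that can be pasted into a prompt."""
    return "\n".join(f"{entry['title']} {entry['ref']}" for entry in KNOWLEDGE_BASE[category])
```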

2.2.2. Prompt Engineering Design

Prompt engineering is a method that can enhance the capabilities of LLMs without changing their network parameters [29,30,31,32,33,34]. It guides a model's behavior through prompts [35,36,37] and has been applied effectively in many settings [38,39,40,41]. To generate more accurate and relevant information, the LLMs are guided to extract information step by step according to the designed prompts, and the outcomes serve as the foundation for generating the structured table content in the next step. The main design covers prompts for three stages: extraction of demand-providing units, extraction of content elements, and extraction of time elements. Table 2 shows the specific design.
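To make the three-stage design concrete, the sketch below chains paraphrased versions of the Table 2 prompts in Python. The `call_llm` helper is a hypothetical stand-in for whichever LLM API is used (chatGLM, Spark, ERNIE Bot, or DeepSeek), not an API defined by this paper:

```python
# Hypothetical three-stage prompt chain mirroring Table 2.
def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM API call; replace with the chosen vendor SDK."""
    raise NotImplementedError("wire this to an actual LLM endpoint")

def generate_information_requirements(event: str, knowledge_excerpts: str) -> str:
    # Phase 1: extract demand-providing units (departments, rescue forces, responsibilities).
    p1 = (f"Earthquake event: {event}\n"
          f"Knowledge base excerpts: {knowledge_excerpts}\n"
          "Extract all participating departments and rescue forces after the earthquake, "
          "their responsibilities/actions, and the basis documents. Output as a table.")
    units_table = call_llm(p1)

    # Phase 2: extract information requirement elements and time elements.
    p2 = (f"Given the departments and responsibilities below:\n{units_table}\n"
          "Using the information element list and time specification requirements, "
          "generate the information requirement elements, providing units, first "
          "submission time, update frequency, and basis documents, in a table.")
    elements_table = call_llm(p2)

    # Phase 3: merge into one structured information requirements table.
    p3 = ("Summarize the following into one structured table of information requirements, "
          "pairing each element with its providing unit:\n"
          f"{units_table}\n{elements_table}")
    return call_llm(p3)
```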

2.2.3. System Architecture Design

After constructing the knowledge base and designing the prompt engineering, we carry out the overall system design, which consists of an input layer, a processing layer, and an output layer.
  • Input Layer: The input layer is used to enter a specific earthquake event, either through an API or manually. This paper uses manual input: on 14 April 2010 at 7:49, a magnitude 7.1 earthquake occurred in Yushu, China.
  • Processing Layer: The processing layer invokes LLMs to parse earthquake parameters. Based on the input prompts, it matches relevant content in the knowledge base and generates information including participating departments/rescue forces, responsibilities/actions, information requirement elements, first submission time, and update frequency.
  • Output Layer: The output layer generates information requirements in Excel format.
The flow chart design is depicted in Figure 1.
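As a minimal sketch of this three-layer flow (not the authors' implementation), the snippet below passes a manually entered event through an abstracted processing step and writes the result to Excel with pandas; the hard-coded row is illustrative only, and the column names follow Table 3:

```python
import pandas as pd

# Input layer: manually entered key parameters of the earthquake event.
event = {"time": "2010-04-14 07:49", "location": "Yushu, China", "magnitude": 7.1}
event_text = (f"On {event['time']}, a magnitude {event['magnitude']} earthquake "
              f"occurred in {event['location']}.")

# Processing layer: in the real system the LLM matches the event against the
# knowledge base; here a single illustrative row stands in for that output.
rows = [{
    "Participating Departments/Rescue Forces": "Earthquake Administration",
    "Core Responsibilities/Actions": "Earthquake monitoring, intensity assessment",
    "Information Requirement Elements": "Basic earthquake parameters; intensity distribution",
    "Information Providing Units": "Earthquake Monitoring Center",
    "First Submission Time": "0.5-1 h",
    "Update Frequency": "Every 2 h",
}]

# Output layer: export the structured information requirements to Excel
# (pandas uses the openpyxl package as its .xlsx engine).
pd.DataFrame(rows).to_excel("information_requirements.xlsx", index=False)
```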

2.3. Experimental Design

This study uses four LLMs as backbone architectures: chatGLM, Spark, ERNIE Bot, and DeepSeek. Information requirements were generated for an actual case according to the designed system workflow, and the results were compared with a template of required information manually organized by industry experts. Semantic similarity was calculated with the "keyword weighted matching + text structure feature fusion" method, differences were counted as TP, FP, and FN, and P, R, and the F1-score were computed. Python code was then written to carry out this procedure and produce comparison charts.

2.3.1. Construction of Standard Information Requirements

In this experiment, the information requirements for the Yushu earthquake event, manually organized and established by industry experts, were taken as the standard experimental template, as shown in Table 3.

2.3.2. Semantic Similarity Calculation

To prevent interference from common words, keyword weighted matching is combined with a custom weight design for the earthquake domain. At the same time, the document content is divided into different "information modules" according to document paragraphs and table rows, and the match between the modules of the LLM-generated information requirements and those of the expert standard is calculated. To further align with the priority for extracting earthquake information, we use a weighted fusion of keyword similarity × 60% + structural similarity × 40%.
We designed a Python implementation for the evaluation process and visualization, which includes the following aspects.
1. Chinese text is segmented using the jieba library, and key information is extracted based on the EARTHQUAKE_WEIGHTS dictionary that is predefined.
2. The similarity between the LLM-generated results and the expert-standard templates is calculated using the cosine similarity algorithm, and TP, FP, and FN are determined based on the similarity threshold.
3. P, R, and F1-score are calculated and visualized. All the libraries used in the Python code can be installed directly with pip, the official package manager for Python (Python 3.8 or higher is recommended). The libraries used in the code and their functions are listed in Appendix A.4. A minimal sketch of this evaluation pipeline is shown below.
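The following sketch illustrates, under stated assumptions, how the steps above can be combined: jieba segmentation with a domain keyword-weight dictionary, TF-IDF cosine similarity as the structural feature, the 60%/40% fusion, and a similarity threshold for counting TP/FP/FN. The weight values and the 0.5 threshold are assumptions; the paper does not publish its EARTHQUAKE_WEIGHTS dictionary or threshold.

```python
import jieba
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Illustrative domain weights; the actual EARTHQUAKE_WEIGHTS values are not published.
EARTHQUAKE_WEIGHTS = {"震级": 3.0, "烈度": 3.0, "余震": 2.5, "伤亡": 2.5, "救援": 2.0}

def keyword_similarity(a, b):
    """Weighted token overlap after jieba segmentation, boosting domain keywords."""
    ta, tb = set(jieba.lcut(a)), set(jieba.lcut(b))
    if not ta or not tb:
        return 0.0
    weight = lambda tok: EARTHQUAKE_WEIGHTS.get(tok, 1.0)
    inter = sum(weight(t) for t in ta & tb)
    union = sum(weight(t) for t in ta | tb)
    return inter / union

def structural_similarity(a, b):
    """TF-IDF cosine similarity over segmented text as the structural feature."""
    vec = TfidfVectorizer(tokenizer=jieba.lcut, token_pattern=None)
    m = vec.fit_transform([a, b])
    return float(cosine_similarity(m[0], m[1])[0, 0])

def fused_similarity(a, b):
    # Weighted fusion: keyword similarity x 60% + structural similarity x 40%.
    return 0.6 * keyword_similarity(a, b) + 0.4 * structural_similarity(a, b)

def match_counts(generated, standard, threshold=0.5):
    """Count TP/FP/FN by matching each generated point against the expert standard."""
    matched = set()
    tp = fp = 0
    for g in generated:
        scored = [(fused_similarity(g, s), i) for i, s in enumerate(standard)]
        best, idx = max(scored) if scored else (0.0, -1)
        if best >= threshold:
            tp += 1
            matched.add(idx)
        else:
            fp += 1
    fn = len(standard) - len(matched)
    return tp, fp, fn
```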

2.3.3. Difference Comparison and Performance Evaluation

Meanings of TP, FP, and FN
TP: Information points that are included in the information requirements generated by the large model and match the expert standards.
FP: Information points that are included in the information requirements generated by the large model but not present in the expert standards.
FN: Information points that are present in the expert standards but missing from the information requirements generated by the large model.
Performance Metrics
Standard evaluation metrics, including P, R, and the F1-score, are adopted.
P = TP / (TP + FP) × 100%    (1)
R = TP / (TP + FN) × 100%    (2)
F1-score = 2 × P × R / (P + R)    (3)
Among them, the meanings of P, R, and F1-score are as follows:
P: The proportion of information points generated by the model that correctly match the expert standard.
R: The capability of the model to identify all relevant instances.
F1-score: It is a balanced metric that integrates both P and R.
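As a small, hedged helper (not the authors' published code), Equations (1)-(3) can be computed directly from the TP/FP/FN counts:

```python
def precision_recall_f1(tp: int, fp: int, fn: int):
    """P, R (in %) and F1-score from TP/FP/FN counts, following Equations (1)-(3)."""
    p = tp / (tp + fp) * 100 if (tp + fp) else 0.0
    r = tp / (tp + fn) * 100 if (tp + fn) else 0.0
    f1 = 2 * p * r / (p + r) if (p + r) else 0.0
    return p, r, f1

# Purely illustrative counts, not taken from the paper's experiments.
print(precision_recall_f1(tp=90, fp=8, fn=5))  # -> roughly (91.8, 94.7, 93.3)
```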

2.3.4. Software Performance Analysis

1. Analysis of computational complexity. During the process of semantic similarity calculation, the Python software only needs to traverse through all the information points generated by four LLMs as well as the information points from the expert standard template, and then perform a matching process. After calculation, it is found that the number of information points output by a single model is always less than or equal to 100, and the overall computational time complexity is extremely low. In the statistical analysis of performance metrics, the counting and statistical analysis are performed based on the definitions of TP, FP, and FN. All numerical operations are performed at a constant level of complexity.
2. Analysis of memory complexity. The expert standard template and model output information are all structured data, with total memory usage and intermediate variable cache occupying only several dozen bytes. Additionally, the peak memory consumption of the Python runtime is relatively low, ensuring stable operation on regular computers or emergency command terminals.
3. Evaluation of Decision-Making Time. The processor used in the experiment is “Intel (R) Core (TM) Ultra 5 125H 3.60 GHz”. Running a Python script in the Command Prompt to automatically generate a visual comparison chart took 26.30 s. The performance of the processor can slightly affect the running time. Decision-makers can directly identify the optimal model through the visual chart without needing additional data analysis. The decision-making time is less than 1 min, which is significantly shorter than the time required for traditional manual comparison of information needs.
In summary, this Python software has a low computational/memory complexity, runs smoothly on regular computers, makes decisions efficiently, and is highly suitable for rapid and effective decision-making in emergency earthquake scenarios.

3. Results

3.1. Experimental Result

In this work, semantic similarity is calculated with the "keyword weighted matching + text structure feature fusion" method. TP, FP, and FN are used to compare the differences between the information requirements produced by the different models and the expert standard, and P, R, and the F1-score are then calculated to evaluate performance, with their computational formulas given in Equations (1)-(3). Figure 2 shows the visual comparison results.

3.2. Sub-Case Study and Analysis

To demonstrate the case study, we selected the “Seismological Bureau” entry as a sub-case to intuitively present the case evaluation process. We compared the information requirements generated by chatGLM and DeepSeek against the expert standards, with the comparison dimensions of “information requirement elements, first submission time, and information-providing units”. The results are as shown in Table 4.
From the comparison results, it can be seen that chatGLM performs better in terms of element completeness and timeliness (F1-Score: 66.7%), but has the issue of insufficient unit coverage. DeepSeek has no redundant information and complete unit coverage, yet its overall performance is relatively low (F1-Score: 57.1%) due to the omission of core elements.

3.3. Results Analysis

Based on detailed reports and visual comparison results, the performance of the four LLMs (DeepSeek, chatGLM, Spark, and ERNIE Bot) was analyzed in terms of P, R, and F1-score, as well as the numbers of TP, FP, and FN, in the task of earthquake disaster information extraction.
It can be seen from the graph that the P and R values of all four LLMs are quite high. For example, the P of chatGLM reaches 95.9% and its R is 97.2%, so the proportion of accurately recognized information and the coverage of actually relevant information are both high, validating the feasibility of using LLMs to identify earthquake-related information needs. The F1-score is a comprehensive metric that balances P and R and gives a more complete picture of each LLM's performance. chatGLM has the highest F1-score, 93.2%, and the best overall performance. DeepSeek has an F1-score of 89.8%, Spark 89.2%, and ERNIE Bot 93.1%, all at a relatively high level. This suggests that these models perform well in extracting earthquake-related information.
The TP-FP-FN comparison chart of the four LLMs shows that chatGLM has the highest number of TP, around 600, while its FP and FN counts are relatively low, indicating that it can accurately recognize and extract a large amount of earthquake-related requirement information with few incorrect recognitions or omissions. Overall, chatGLM performs best in the automatic generation of earthquake-related information requirements, followed by ERNIE Bot; DeepSeek and Spark also meet the requirements to a certain extent.

4. Discussion

The design of the LLM-driven automatic generation system has been verified as greatly improving the timeliness, knowledge inheritance, and connectivity of information requirements for emergency responses to unexpected events. We compared the performance of the four LLMs (chatGLM, Spark, ERNIE Bot, and DeepSeek) on the earthquake disaster information extraction task and analyzed the shortcomings of these models.
1. In terms of timeliness, the system leverages the fast data processing capability of machines to greatly reduce the time needed to produce the information required by many departments and forces when a sudden disaster occurs. For instance, in a sudden earthquake emergency, the key information required by departments and forces such as the Earthquake Administration, the Ministry of Transport, and emergency rescue teams can be generated within a few minutes, which is far more efficient than manual sorting. In addition, the generated requirements cover more than ten departments, including earthquake, government, and civil affairs bodies, a much wider range that meets the strict time limits of emergency management.
2. A structured knowledge base was created by merging norms, standards, historical disposal cases, and expert experience. Through LLM reasoning, the information requirement elements for emergency response to unexpected incidents were standardized into a reusable knowledge system. This reduces the manual sorting of information needs and prevents the fragmentation and discontinuity of accumulated knowledge and inherited experience.
3. The system also supports cross-departmental and cross-agency collaboration on information resources. It creates a logical model encompassing "data + rules + intelligence" to accomplish quick scheduling and matching of information on earthquake and disaster situations and rescue efforts, solving the organizational complexity and low efficiency of the traditional manually coordinated mode and improving collaborative efficiency among emergency departments.
4. In general, the four LLMs showed good extraction capabilities for earthquake disaster information, validating the feasibility of using LLMs for information extraction. Among them, chatGLM performed best overall: it covered more of the information points identified by experts while maintaining a high level of accuracy in the information it selected.
Future research can be enhanced in four ways:
1. The scope of the research object should be expanded step by step. The current system targets only earthquake emergency response, but in the future this research can be extended from earthquakes to other unexpected events such as floods and typhoons among natural disasters, as well as accidents and public health incidents. An information requirement generation model covering many scenarios will be built into the system to give it a wider range of capabilities.
2. Improve the dynamic updating of the knowledge base. Although the current knowledge base includes norms, standards, cases, and experts' disposal information, real-time dynamic information such as live meteorological data and live traffic information is still missing. In follow-up research, an access channel for real-time data will be established and combined with the learning abilities of different LLMs to achieve dynamic knowledge iteration and enhance the timeliness and accuracy of information requirement generation.
3. Upgrade cross-sector collaborative reasoning capabilities. The existing system already achieves information connectivity among different departments. Subsequently, complex network analysis can be used to measure the weights of information interactions among all departments, making it possible to optimize the mapping relationships in prompt engineering and improve the reasoning ability of LLMs for complex information needs that cross multiple domains and departments. This will further refine the "data + rules + intelligence" collaborative scheduling model.
4. The LLMs and the semantic similarity model also need improvement. By using technologies such as the attention mechanism, LLMs can focus more on important information when extracting data and suppress interference from irrelevant content so that key information is not missed. For similarity modeling, this paper uses the "keyword weighted matching + text structure feature fusion" method, whose accuracy is limited. In future work, pre-trained multilingual or Chinese models such as paraphrase-multilingual-MiniLM-L12-v2 (available through Hugging Face) and bert-base-chinese can be used for more accurate calculations.
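As a rough illustration of this direction (not part of the reported experiments), the sentence-transformers library can load paraphrase-multilingual-MiniLM-L12-v2 and score a generated requirement against an expert-standard element; the example sentences below are invented:

```python
from sentence_transformers import SentenceTransformer, util

# Load a multilingual sentence-embedding model (weights download on first use).
model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

generated = "震中位于玉树，震级7.1级，需上报余震序列数据"      # LLM-generated requirement (example)
standard = "基本地震参数（震中、震级、震源深度）与余震序列数据"  # expert-standard element (example)

embeddings = model.encode([generated, standard])
score = util.cos_sim(embeddings[0], embeddings[1]).item()
print(f"semantic similarity: {score:.3f}")
```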

5. Conclusions

The main findings of this paper are as follows:
The large-model-driven automatic information requirement generation method effectively addresses the slow response, poor knowledge retention, and insufficient connectivity found when information requirements are sorted manually in the old model. Using this method system, the time needed to generate information requirements for unforeseen situations can be greatly reduced and the coverage extended, providing strong support for quick, scientific, and organized emergency response decision-making.
Among the four LLMs tested in the experiment, chatGLM generated the best overall information requirements for earthquake events, followed by ERNIE Bot. DeepSeek and Spark can roughly fulfill the task, but they show some shortcomings in extracting information.
The structured knowledge base and the engineered prompts are important for obtaining information requirements that are more accurate, relevant, and timely. The knowledge base provides a solid data foundation and industry-standard references for the large models, and the prompt engineering directs how that knowledge is selected and turned into information requirements by the large models.
The limitations of this study are mainly reflected in the following aspects. First, the current research is limited to the vertical field of earthquakes and lacks automatic information requirement generation models for multiple scenarios. Second, the dynamic update mechanism of the knowledge base needs further establishment and improvement so that it can include as much real-time data as possible and enhance the real-time performance of information requirement generation. Third, the semantic similarity calculation method adopted in this paper needs further improvement to more accurately assess how well the LLMs analyze information requirements. Fourth, when LLMs generate information needs, they often follow the wording of the input, fail to accurately grasp the real needs behind it, and tend to produce impractical, generic content in professional fields. Building a professional domain knowledge base, carefully designing prompts, and training the large models are necessary to avoid these issues.
This research is particularly useful for emergency management departments, earthquake emergency response teams, and those working on intelligent emergency information systems or those focusing on the automated information requirement generation using technologies such as LLMs. It provides practical support for optimizing rapid decision-making in earthquake emergency response.

Author Contributions

Conceptualization, Y.L. and J.Y.; Data curation, Y.L.; Formal analysis, J.L.; Funding acquisition, Y.L. and J.Y.; Investigation, Z.L.; Methodology, W.G.; Project administration, Y.L. and C.G.; Resources, Z.L.; Software, C.G. and C.Z.; Supervision, J.Y.; Validation, Y.L. and J.L.; Visualization, C.G. and W.G.; Writing—original draft, Y.L.; Writing—review and editing, J.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China Grant No. 62402505, the Independent Innovation Science Fund of National University of Defense Technology under Grant No. 22-ZZCX-055.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
LLMs: Large Language Models
TP: true positive
FP: false positive
FN: false negative
P: precision
R: recall
FireResc: fire rescue team
Freq: frequency
Upd: updated
h: hour
Num: number
Dep: Departments
HousMIN: ministry of housing
CivAffDept: civil affairs departments

Appendix A

Appendix A.1

Earthquake events involve a multitude of participating departments and forces. There exists a command, coordination, and feedback loop between these different departments and forces. Their interrelationships are depicted in Figure A1.
Figure A1. The activity relationship between participating departments and forces.

Appendix A.2

During the management of earthquake events, there is a significant amount of complex information that needs to be exchanged and shared between various departments and forces. The information exchange relationships are depicted in Figure A2. An ontology is a conceptual model derived from the real world, serving as the fundamental technical basis for knowledge representation and reasoning [42]. To visually represent the information exchange relationships more clearly, Protege is utilized to develop an information interaction ontology, as illustrated in Figure A3.
Figure A2. The information interaction relationship among various departments and forces.
Figure A3. Information Interaction Ontology.

Appendix A.3

During the management of earthquake events, the information requirements of various departments and agencies differ. Experts and scholars have categorized and summarized these information requirements based on the actual needs of the different departments and agencies. For instance, in the article titled “Discussion on the Classification of Earthquake Emergency Disaster Information,” scholars including Dong Man have elucidated the classification of earthquake emergency disaster information, as illustrated in Table A1 [26].
Table A1. Classification of Earthquake Emergency Disaster Information.
Primary Category | Secondary Category | Content and Attribute Description
Earthquake Situation Information | Earthquake Parameters | The three key elements of an earthquake, the map of the epicenter location, and the distance from the epicenter to major cities.
Earthquake Situation Information | Aftershock Information | Aftershock statistics across various time periods and the distribution of aftershocks.
Earthquake Situation Information | Hypocenter Information | Rupture process, focal mechanism solution, moment tensor, slip component, stress drop, and moment magnitude, among others.
Earthquake Situation Information | Strong Motion Records | Seismic motion waveform data, ground motion acceleration, ground motion displacement.
Earthquake Situation Information | Earthquake Situation Trend Judgment | Opinions on the trend judgment of earthquake situations in different time periods.
Earthquake Situation Information | Earthquake Cause Analysis | Analysis of earthquake causes by various institutions.
Earthquake-Affected Area Background Information | Humanistic Background of the Affected Area | Administrative division of the affected area, transportation in the affected area, population distribution, housing information, economic statistics, distribution of ethnic minorities, distribution of poverty-stricken counties.
Earthquake-Affected Area Background Information | Topographic Background of the Affected Area | The topography of the affected area, the terrain landscape of the affected area, and the remote sensing images of the affected area.
Earthquake-Affected Area Background Information | Tectonic Background of the Affected Area | Distribution of active faults within the affected area, potential seismic source zones within the affected area, and the geological structure of the affected area.
Earthquake-Affected Area Background Information | Disaster Background of the Affected Area | Historical earthquake catalog, earthquake isoseismal lines, disaster loss situation, direct earthquake disasters, major secondary disasters, etc.
Earthquake-Affected Area Background Information | Seismic Safety Evaluation Background of the Affected Area | Seismic intensity zoning, seismic safety evaluation, the main achievements of seismic hazard prediction, earthquake prevention and disaster mitigation efforts, and data from various earthquake prevention and disaster mitigation demonstration areas, among others.
Earthquake-Affected Area Background Information | Major Lifeline Projects in the Affected Area | Distribution of lifeline projects, including transportation, electric power, communication facilities, gas supply, water supply, and drainage, in the affected area, as well as statistics on lifeline disaster damage, etc.
Earthquake-Affected Area Background Information | Key Targets in the Affected Area | Statistics on the quantity and distribution of key targets, such as schools and hospitals, in the affected area.
Earthquake-Affected Area Background Information | Rescue Forces in the Affected Area | Medical rescue capacity, fire-fighting capacity, public security capacity in the disaster area and adjacent areas, reserve of relief materials, information on rescue teams, and the type, quantity, quality, performance, and distribution of rescue equipment.
Disaster Situation Information | Disaster Situation Estimation | Seismic intensity estimation, casualty estimation, direct economic loss estimation, rapid earthquake assessment results, revised results of rapid assessment in different time periods, landslide risk distribution, seismic intensity assessment based on ground motion, damage estimation of lifeline projects and key targets, estimation of personnel burial points, and other disaster situation estimation results.
Disaster Situation Information | Rapid Disaster Situation Report | Actual disaster situation information obtained through short messages, telephone calls, the Internet, surveys, video conferences, etc.
Disaster Situation Information | Casualty Disaster Situation Information | Statistics and distribution of actual casualties.
Disaster Situation Information | Building Damage | Statistics and distribution of actual building damage.
Disaster Situation Information | Secondary Disasters | Statistics and distribution of actual secondary disasters.
Disaster Situation Information | Key Focus Disaster Situation | Information on disaster situations requiring urgent attention for decision-making and rescue.
Disaster Situation Information | Actual Surveyed Disaster Situation | Actual disaster situation from on-site earthquake surveys.
Emergency Response and Rescue | Emergency Decision-Making | Various types of auxiliary decision-making information.
Emergency Response and Rescue | Headquarters Work Dynamics | Emergency work dynamics of national and regional headquarters.
Emergency Response and Rescue | On-Site Work Dynamics | Dynamic changes of earthquake disasters and secondary disasters, real-time progress of rescue work, deployment and phased progress of disaster investigation work, etc.
Emergency Response and Rescue | Historical Rescue Cases | Domestic and international earthquake rescue cases.
Emergency Response and Rescue | Rescue Team Work Dynamics | Input of rescue teams, progress of rescue operations, achievements of rescue efforts, real-time updates on rescue work, etc.

Appendix A.4

Table A2. Dependent Libraries and Their Functional Descriptions.
Library/Module | Type | Core Function
jieba | Third-party | Chinese word segmentation tool, used to extract domain keywords such as "magnitude" and "intensity" from text
json | Built-in | Processes JSON format data, e.g., storing evaluation results
os | Built-in | File path operations, e.g., reading expert standard documents
re | Built-in | Regular expression matching, e.g., extracting time information from text
pandas | Third-party | Structured storage of evaluation metrics (e.g., P/R/F1-score)
numpy | Third-party | Numerical computation, supporting mathematical operations for evaluation metrics
sklearn.metrics.pairwise | Third-party | Provides the cosine similarity calculation function to support semantic matching algorithms
sklearn.feature_extraction.text | Third-party | Implements TF-IDF vector conversion to assist in semantic similarity calculation
matplotlib.pyplot | Third-party | Generates model performance comparison charts

References

  1. Unconventional Emergency Management Research Group. Unconventional Emergency Management Research; Zhejiang University Press: Hangzhou, China, 2018. [Google Scholar]
  2. Wang, H.Y.; Li, Z.X.; Zhang, T.; Feng, J.; Zhang, X.Y. Information Needs and Acquisition Suggestions for Earthquake Emergency Rescue. J. Catastrophology 2016, 31, 176–180. [Google Scholar]
  3. Ma, W.; Liu, J.; Cai, Y.; Chen, H.-Z.; Liu, X.-F. Research on seismic information based on internet of things and cloud computing in big data era. Prog. Geophys. 2018, 33, 835–841. [Google Scholar] [CrossRef]
  4. Pwavodi, J.; Ibrahim, A.U.; Pwavodi, P.C.; Al-Turjman, F.; Mohand-Said, A. The role of artificial intelligence and IoT in prediction of earthquakes: Review. Artif. Intell. Geosci. 2024, 5, 100075. [Google Scholar] [CrossRef]
  5. Xu, X.; Zhou, X.; Wang, W.; Xu, Z.; Cao, G. Analysis of the Characteristics of Secondary Sudden Environmental Events Induced by Natural Disasters in China. J. Geol. Hazards Environ. Preserv. 2025, 36, 121–128. [Google Scholar]
  6. Huang, L.; Li, J.; Liu, Z.; Wang, X.; Jie, S.; Lei, G.; Liu, Z. Review of Explainable Artificial Intelligence and Its Application Prospects in Earthquake Science. Prog. Earthq. Sci. 2025, 55, 1–11. [Google Scholar] [CrossRef]
  7. Linville, L.; Pankow, K.; Draelos, T. Deep learning models augment analyst decisions for event discrimination. Geophys. Res. Lett. 2019, 46, 3643–3651. [Google Scholar] [CrossRef]
  8. Kou, L.; Tang, J.; Wang, Z.; Jiang, Y.; Chu, Z. An adaptive rainfall estimation algorithm for dual-polarization radar. IEEE Geosci. Remote Sens. Lett. 2022, 19, 1004805. [Google Scholar] [CrossRef]
  9. Titos, M.; Bueno, A.; Garcia, L.; Benitez, C.; Segura, J.C. Classification of isolated volcano-seismic events based on inductive transfer learning. IEEE Geosci. Remote Sens. Lett. 2020, 17, 869–873. [Google Scholar] [CrossRef]
  10. Peng, P.; He, Z.; Wang, L.; Jiang, Y. Microseismic records classification using capsule network with limited training samples in underground mining. Sci. Rep. 2020, 10, 13925. [Google Scholar] [CrossRef]
  11. Nakano, M.; Sugiyama, D.; Hori, T.; Kuwatani, T.; Tsuboi, S. Discrimination of seismic signals from earthquakes and tectonic tremor by applying a convolutional neural network to running spectral images. Seismol. Res. Lett. 2019, 90, 530–538. [Google Scholar] [CrossRef]
  12. Mousavi, S.M.; Zhu, W.; Ellsworth, W.; Beroza, G. Unsupervised clustering of seismic signals using deep convolutional autoencoders. IEEE Geosci. Remote Sens. Lett. 2019, 16, 1693–1697. [Google Scholar] [CrossRef]
  13. Biswas, S.; Kumar, D.; Hajiaghaei-Keshteli, M.; Bera, U.K. An AI-based framework for earthquake relief demand forecasting: A case study in Türkiye. Int. J. Disaster Risk Reduct. 2024, 102, 104287. [Google Scholar] [CrossRef]
  14. Huang, J.C.; Liu, Z.; Huang, H.B.; Zhu, C.; Fang, Y.C.; Chen, Z.X. LLMs and Decision Intelligence Technology. J. Command Control. 2025, 2, 125–127. [Google Scholar]
  15. Li, Y.L.; Zhang, J.W.; Bao, H.H. Information Needs and Satisfaction of College Students in the Context of Public Health Emergencies. Lib. Inf. Serv. 2020, 64, 85–95. [Google Scholar]
  16. Shultz, T.R.; Wise, J.M.; Nobandegani, A.S. Text understanding in GPT-4 versus humans. R. Soc. Open Sci. 2025, 12, 241313. [Google Scholar] [CrossRef]
  17. Liu, Q.; He, Y.; Xu, T.; Lian, D.; Liu, C.; Zheng, Z.; Chen, E. UniMEL: A Unified Framework for Multimodal Entity Linking with Large Language Models. In Proceedings of the 33rd ACM International Conference on Information and Knowledge Management, CIKM 2024, Boise, ID, USA, 21–25 October 2024. [Google Scholar]
  18. Peng, W.; Li, G.; Jiang, Y.; Wang, Z.; Ou, D.; Zeng, X.; Xu, D.; Xu, T.; Chen, E. Large Language Model Based Long-Tail Query Rewriting in Taobao Search. In Proceedings of the Companion Proceedings of the ACM Web Conference 2024, Singapore, 13–17 May 2024; pp. 20–28. [Google Scholar]
  19. Sarnikar, A. Using LLMs for Querying and Understanding Long Legislative Texts. In Proceedings of the 2025 IEEE International Students’ Conference on Electrical, Electronics and Computer Science (SCEECS), Bhopal, India, 18–19 January 2025. [Google Scholar]
  20. Zhang, J. (Ed.) Textbook on the Law of Earthquake Prevention and Disaster Mitigation; Tsinghua University Press: Beijing, China, 2014. [Google Scholar]
  21. Zhang, Y.; Fan, K.; Guo, H.; Tang, Z.; Chen, W. Analysis of the Emergency Mode of Sichuan Earthquake Emergency Command Center in the Lushan Ms 7.0 Earthquake. Earthq. Res. Sichuan 2014, 2, 43–47. [Google Scholar]
  22. Zhang, W.; Hui, Y.; Hou, Z.; Yang, X.; Mao, J. Current Situation and Reflections on On-Site Earthquake Emergency Work in Liaoning Province. J. Disaster Prev. Mitig. 2023, 39, 66–70. [Google Scholar]
  23. China Legal Publishing House (Comp). Regulations on Earthquake Monitoring Management (2024); China Legal Publishing House: Beijing, China, 2025. [Google Scholar]
  24. State Council of the People’s Republic of China. National Earthquake Emergency Plan (Revised on August 28, 2012). Gaz. State Counc. People’s Repub. China 2012, 28, 16–24. [Google Scholar]
  25. DB/T 89-2022 [S]; Seismic Monitoring and Forecasting Standardization Technical Committee. Code for Operation of Seismic Networks: Strong Motion Observation. China Standards Press: Beijing, China, 2022.
  26. Dong, M.; Yang, T.Q. Discussion on Classification of Earthquake Emergency Disaster Information. Technol. Disaster Prev. 2014, 4, 937–943. [Google Scholar]
  27. Gong, Y.; Shen, W. Overview and Analysis of Emergency Information Output by Seismic Departments After Earthquakes. China Emerg. Rescue 2018, 4, 33–37. [Google Scholar]
  28. DB/T 1-2000; Seismic Industry Standard System Table. Industry Standard—Seismology. Seismological Press: Beijing, China, 2000.
  29. Oppenlaender, J.; Linder, R.; Silvennoinen, J. Prompting AI Art: An Investigation into the Creative Skill of Prompt Engineering. Int. J. Hum. –Comput. Interact. 2024, 41, 10207–10229. [Google Scholar] [CrossRef]
  30. Marvin, G.; Hellen, N.; Jjingo, D.; Nakatumba-Nabende, J. Prompt Engineering in Large Language Models. In Data Intelligence and Cognitive Informatics; Jacob, I.J., Piramuthu, S., Falkowski-Gilski, P., Eds.; Springer: Singapore, 2024; pp. 387–402. [Google Scholar]
  31. Son, M.; Won, Y.-J.; Lee, S. Optimizing Large Language Models: A Deep Dive into Effective Prompt Engineering Techniques. Appl. Sci. 2025, 15, 1430. [Google Scholar] [CrossRef]
  32. Federiakin, D.; Molerov, D.; Zlatkin-Troitschanskaia, O.; Maur, A. Prompt engineering as a new 21st century skill. Front. Educ. 2024, 9, 1366434. [Google Scholar] [CrossRef]
  33. Wu, L.; Qiu, Z.; Zheng, Z.; Zhu, H.; Chen, E. Exploring Large Language Model for Graph Data Understanding in Online Job Recommendations. In Proceedings of the 38th AAAI Conference on Artificial Intelligence, AAAI 2024, Vancouver, BC, Canada, 26–27 February 2024. [Google Scholar]
  34. Zheng, Z.; Chao, W.; Qiu, Z.; Zhu, H.; Xiong, H. Harnessing Large Language Models for Text-Rich Sequential Recommendation. In Proceedings of the ACM Web Conference 2024, Austin, TX, USA, 13–17 May 2024; pp. 3207–3216. [Google Scholar]
  35. Liu, P.; Yuan, W.; Fu, J.; Jiang, Z.; Hayashi, H.; Neubig, G. Pre-Train, Prompt, and Predict: A Systematic Survey of Prompting Methods in Natural Language Processing. ACM Comput. Surv. 2023, 55, 195. [Google Scholar] [CrossRef]
  36. Chen, B.; Zhang, Z.; Langrené, N.; Zhu, S. Unleashing the potential of prompt engineering for large language models. Patterns 2025, 6, 101260. [Google Scholar] [CrossRef] [PubMed]
  37. Tong, S.; Mao, K.; Huang, Z.; Zhao, Y.; Peng, K. Automating psychological hypothesis generation with AI: When large language models meet causal graph. Humanit. Soc. Sci. Commun. 2024, 11, 896. [Google Scholar] [CrossRef]
  38. Shah, C. From Prompt Engineering to Prompt Science with Humans in the Loop. Commun. ACM 2025, 68, 54–61. [Google Scholar] [CrossRef]
  39. Xu, D.; Zhang, Z.; Lin, Z.; Wu, X.; Zhu, Z.; Xu, T.; Zhao, X.; Zheng, Y.; Chen, E. Multi-Perspective Improvement of Knowledge Graph Completion with Large Language Models. In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation, Torino, Italy, 20–25 May 2024; pp. 11956–11968. [Google Scholar]
  40. Li, X.; Zhou, J.; Chen, W.; Xu, D.; Xu, T.; Chen, E. Visualization Recommendation with Prompt-Based Reprogramming of Large Language Models. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics, Bangkok, Thailand, 11–16 August 2024; pp. 13250–13262. [Google Scholar]
  41. Liu, C.; Xie, Z.; Zhao, S.; Zhou, J.; Xu, T.; Li, M.; Chen, E. Speak from Heart: An Emotion-Guided LLM-Based Multimodal Method for Emotional Dialogue Generation. In Proceedings of the 2024 International Conference on Multimedia Retrieval, Phuket, Thailand, 10–14 June 2024; pp. 533–542. [Google Scholar]
  42. Corcho, O.; Fernandez-López, M.; Gómez-Pérez, A. Ontological Engineering: Principles, Methods, Tools and Languages. In Ontologies for Software Engineering and Software Technology; Springer: Berlin/Heidelberg, Germany, 2006; pp. 1–48. [Google Scholar]
Figure 1. Flow Chart Design.
Figure 2. Visualized Comparison Results.
Table 1. Knowledge Base Construction.
Category | Specific Content | Basis Documents/Standards (Examples)
Internal Industry Standards and Historical Emergency Response Scenarios | The content primarily encompasses laws, policies, historical cases related to emergency responses to unforeseen incidents, and emergency news. These sources offer data for the system to extract information on departments involved in earthquake emergency responses and their respective responsibilities. | 1. "Law on Earthquake Disaster Prevention and Mitigation" [20]; 2. "Analysis of the Emergency Mode of the Earthquake Emergency Command Center during Earthquakes" [21]; 3. "Current Status and Thoughts on Earthquake Emergency On-site Work" [22]
Time Specification Requirements | Clarify time-dimensional specifications related to earthquake emergency response. | 1. "Regulations on the Administration of Earthquake Monitoring" [23]; 2. "National Earthquake Emergency Plan" [24]; 3. "Observation Specifications for Seismic Network" [25]
Information Element List | The information requirements of participating departments during the emergency response to unforeseen incidents are primarily included. | 1. "Discussion on the Classification of Earthquake Emergency Disaster Information" [26]; 2. "Information Needs and Acquisition Suggestions for Earthquake Emergency Rescue" [2]; 3. "Overview and Analysis of Emergency Information Output by Earthquake Departments after the Earthquake" [27]; 4. "Analysis of the Emergency Response Mode of the Earthquake Emergency Command Center during Earthquakes" [21]; 5. "Seismic Industry Standards" [28]
Table 2. Design of Prompt Engineering.
Prompt | Specific Content Description
Phase 1: Extraction of Demand-Providing Units | Based on the inputted seismic event (time, location, magnitude), combined with industry internal standards and historical emergency response cases in the knowledge base, generate three items: "departments involved in the earthquake and rescue forces, responsibilities/actions, and basis documents". Earthquake event: On 14 April 2010, at 7:49, a magnitude 7.1 earthquake occurred in Area A. Requirements: Extract information from the constructed knowledge base, including all participating departments and rescue forces after the earthquake, and output it in a tabular format.
Phase 2: Extraction of Information Requirement Elements and Time Elements | Incorporating the "participating departments/rescue forces" and "responsibilities/actions" extracted as mentioned above, and drawing from the regulatory documents listed in the "information element list" and "time specification requirements" within the knowledge base, generate the "information requirement elements, providing units, initial submission time, update frequency, and foundational documents/standards," and present these in a tabular format.
Phase 3: Generation of Structured Information Requirements | Summarize the contents of the aforementioned tables to generate information requirements. The summary should include participating departments/rescue forces, responsibilities/actions, information requirement elements, providing units, initial submission time, and update frequency. Ensure that each information requirement element is paired with its corresponding providing unit. Present the results in tabular form.
Table 3. Information Requirements Manually Organized by Experts.
Participating Departments/Rescue Forces | Core Responsibilities/Actions | Information Requirement Elements | Information Providing Units | First Submission Time | Update Frequency
Earthquake Administration | Earthquake monitoring, intensity assessment, and disaster evaluation | 1. Basic earthquake parameters (epicenter, magnitude, focal depth). 2. Earthquake intensity distribution. 3. Aftershock sequence data. 4. Distribution of secondary geological disasters. 5. Disaster situation evaluation report. | Earthquake Monitoring Center, earthquake departments, remote sensing monitoring units | Basic parameters: 0.5 to 1 h; intensity distribution: 8 h | Basic parameters: every 2 h; intensity distribution: revised every 24 h
Fire and Rescue Teams | Life search and rescue, identification of rescue targets | 1. Location of trapped persons. 2. Building collapse types (schools, hospitals). 3. Search and rescue priorities. | Fire departments, on-site rescue teams, and UAV monitoring units | 1–24 h | Real-time update
National Health Commission | Medical treatment, medical resource statistics | 1. Casualty statistics (serious injuries, slight injuries, missing persons). 2. Damage to medical facilities. 3. List of urgently needed medicines. 4. Distribution of casualties. | Health departments, hospitals/emergency centers, and centers for disease control and prevention (CDC) | Within 2 h | Every 12 h
Ministry of Civil Affairs | Resettlement of affected people, coordination of living materials | 1. Number of affected individuals. 2. Number of individuals resettled. 3. Demand for relief supplies (tents, food, medicines). | Civil affairs departments at all levels, Red Cross Society, and charitable organizations | 2–4 h | Every 6–12 h
Ministry of Transport | Road emergency repair, guarantee of rescue access | 1. Location of damaged roads and bridges. 2. Emergency repair progress. 3. Passable routes. | Transport departments, highway administration bureaus, road emergency repair teams | 1–2 h | Every 3–6 h
Ministry of Industry and Information Technology | Communication restoration, power guarantee | 1. Scope of communication/power outage. 2. Damage to communication base stations and power facilities. 3. Restoration progress. | Communication operators, State Grid/local power companies, provincial communication/power dispatch centers | Within 1 h | Every 3–6 h
Ministry of Housing and Urban-Rural Development | Building safety assessment, geological disaster monitoring | 1. Statistics on building damage. 2. Distribution of secondary geological disasters (including landslides and debris flows). 3. Demand for relief materials. 4. Monitoring data of landslides and collapses. | Departments of housing and urban-rural development at all levels, departments of land and resources, and geological survey bureaus | 8–48 h | Every 4–24 h
Ministry of Emergency Management | Coordination of rescue forces and summary of the disaster situation | 1. Deployment of rescue forces. 2. Dynamic disaster situation. 3. Risk of secondary disasters. | Emergency Command Center, Geological Monitoring Station | 1–2 h | Real-time update/every 6 h
Meteorological Bureau | Meteorological forecast for disaster-stricken areas | 1. Weather forecast for the next 72 h (including precipitation and temperature). 2. Assessment of the impact of wind and visibility. | Meteorological departments at all levels and satellite remote sensing monitoring units | 1–2 h | Every 6–24 h
Local Governments | Local disaster situation statistics and reporting | 1. Number of casualties. 2. Outage of transportation, communication, and power. 3. Number of collapsed buildings. | Civil affairs departments, public security departments, transportation, power, and communication departments | Within 2 h | Every 1–6 h
Table 4. Sub-case Evaluation Results.
Dimension | Expert Standards | chatGLM | DeepSeek
Information Requirement Elements (items) | 5 (Parameters/Intensity/Aftershocks/Secondary Disasters/Assessment Report) | 5 (matches core elements, with additional "fault zone analysis") | 4 (lacks "Parameters/Aftershocks")
First Submission Time | Basic parameters: 0.5–1 h; Intensity: 8 h | Within 30 min (exceeds the standard) | Within 8 h (meets the intensity requirement)
Information-Providing Units (number) | 3 (National Seismic Network/Provincial Departments/Remote Sensing Units) | 1 (lacks 2 units) | 4 (full coverage without redundancy)
F1-Score | - | 66.7% | 57.1%
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
