1. Introduction
Case-based reasoning (CBR) is a problem-solving method that primarily addresses new problems by retrieving, matching, and adapting historical cases, giving it a unique advantage in complex, dynamic, or knowledge-incomplete domains [1]. Unlike traditional rule-based or model-based reasoning methods, CBR emphasizes extracting knowledge from historical cases and using analogical reasoning to address new situations, offering greater adaptability and interpretability [2]. Given its reasoning characteristics, it is clear that the accuracy of the three stages—case retrieval, case matching, and historical case adaptation—directly impacts the effectiveness of solving new problems. With the rapid development of artificial intelligence and big data technologies, combined with its inherent reasoning advantages, CBR demonstrates significant application potential across various fields [3,4]. However, in practical applications, it still encounters numerous challenges, including the efficiency of case representation and retrieval, the dynamic updating of case repositories, the accuracy of similarity measurements, and the feasibility of cross-domain case transfer. Therefore, determining how to further enhance the intelligence level of CBR, efficiently retrieve massive cases, improve cross-domain transfer capabilities, and enhance adaptability in dynamic environments remains a key challenge in current research.
Currently, in order to improve the reasoning performance of CBR, scholars have made improvements addressing the limitations of traditional CBR based on the 4R model proposed by Aamodt and Plaza, which comprises case retrieval, reuse, revision, and retention [1]. In the case representation stage, several methods have been proposed to handle inconsistent data types and missing data. Case representation based on knowledge graphs can greatly improve the flexibility of case knowledge extraction, management, and retrieval [5]. In the field of cascading disaster risk, a cascading disaster risk ontology system built from concepts and relationships has been proposed and applied [6]. In the case retrieval and matching stage, researchers have proposed similarity calculations at different levels between historical and target cases. Combining the Random Forest algorithm with Bayesian optimization enables the adaptive retrieval of similar cases [7]. Moreover, quotient space granularity has been introduced into attribute-based similarity computation, yielding a case retrieval algorithm based on the theory of granularity synthesis [8]. In the final stage of case reuse and revision, a new differential evolutionary algorithm has been used to revise historical cases and improve their adaptability [9]. In addition, a new approach to case adaptation combines multi-objective genetic algorithms with grey relational analysis, called the Grey Relational Analysis–Multi-Objective Genetic Algorithms approach (GRAMOGA) [10].
Although all of the above studies have optimized and improved the traditional CBR method to a certain extent, there is still room for improvement. For example, in the case representation phase, existing methods (such as knowledge graphs and ontology systems) have improved the flexibility of case representation, but they still face challenges in the integrated representation of multi-source heterogeneous data. Additionally, current case representation methods are mostly based on static data and struggle to adapt to dynamic environments. In the case retrieval and matching phase, existing methods (such as Bayesian-optimized random forests and quotient space granularity calculations) may still lack precision in similarity measurement for complex high-dimensional data, especially for data with non-linear relationships. In the case reuse and revision phase, methods such as differential evolution algorithms and GRAMOGA have improved case adaptability, but the adjustment process still relies on manual rules or fixed strategies, lacking autonomous optimization capabilities. Finally, in terms of case repository maintenance and updates, there is limited research on maintenance mechanisms such as incremental learning, noise filtering, and redundant case deletion. Future improvements in these areas are expected to further enhance CBR’s reasoning capabilities.
Based on related research, the purpose of this paper is to propose a case-based reasoning framework enhanced by generative AI. Overall, the framework has three innovative aspects. The first concerns case representation. Following relevant studies [11], an ontology model is chosen to represent various types of risk, since such models are easy to share and integrate [12]. In addition, in the process of data representation, this paper combines Dempster–Shafer (D-S) evidence theory with framing theory to obtain a more accurate description of the case and minimize the interference of incomplete information [12]. The second aspect relates to case retrieval. Following [13], this paper uses the DEMATEL method to determine the interdependent influence relationships among risks (or multiple risks); accounting for this interdependence makes the risk response program more scientific and effective. The third aspect is case reuse and correction. A case reuse method based on generative AI is proposed to generate practical response strategy plans using historical cases as references. In addition, a case study is conducted at the end to test the feasibility and effectiveness of the proposed method.
The stability of urban critical infrastructure is of vital importance to urban development [12]. With the continuous advancement of Internet technology, human daily life and the normal operation of society are becoming increasingly reliant on a continuous and stable power network [14]. However, in the face of sudden natural disasters, poor risk management or untimely responses [15,16] may cause severe damage to the power network system. The method developed in this study can, to a certain extent, provide an important reference for relevant departments formulating emergency decisions for the power network under typhoon disasters.
The rest of the paper is organized as follows. Section 2 first introduces the CBR methods and generative AI currently used in the field of risk assessment, and then explains the relevant theories and models involved in this paper. Section 3 first expounds the identification of key power risk factors based on D-S evidence theory and the representation of risk cases with the ontology model; it then proposes a three-dimensional case retrieval method based on the public safety triangle theory, and finally presents a case correction and reuse method based on generative AI. To illustrate the practicality of the method, Section 4 presents a case study of typhoon Capricorn. Section 5 concludes by summarizing the main contributions of the paper and suggesting future work.
3. Methodology
Based on traditional CBR, this section applies D-S evidence theory, the public safety triangle theory, and new generative AI technologies to each key link of CBR to enhance its accuracy. Figure 2 shows the method flowchart of this article.
3.1. Case Representation Supported by D-S Evidence Theory
Before the case representation, identifying key risk factors and determining their direct relationships is essential to ensuring the accuracy, relevance, and effectiveness of subsequent case representation. This step serves as the foundation of the entire CBR framework. Its core objective is to extract key information from complex risk scenarios, providing clear inputs for subsequent ontology modeling and case correction, and ultimately enhancing the scientific rigor and effectiveness of the entire risk response framework. To achieve this, we first collect relevant risk records from official channels. Second, an expert group is invited to identify the main risks from the collected data using the evidence-based BWM method. After that, the secondary risks are screened out, and the expert panelists determine the direct impact relationships among the primary risks based on the evidence-based DEMATEL method. Finally, based on the results of the above analysis, an ontology model is used to represent the power network risk case.
3.1.1. Evidence-Based BWM Primary Risk Determination
In this study, several types of power network risks that often occur in the context of typhoon disasters are summarized by reviewing a large amount of literature. Subsequently, the above risks are ranked in order of importance through expert assessment, and finally the major power network risks that have the greatest impact on various sectors are identified, as follows.
- Step 1:
Determine the best and worst risk
First, m experts from the field of risk management were invited to complete the questionnaire. Each expert’s choice of maximum risk and minimum risk is collected through the evidence-based BWM expert questionnaire. Let all risks be {r1, r2, r3, …, rn}; the experts score them using the evidential linguistic term set proposed by Fei et al. [42]. In this scoring approach, the identification framework is first defined as S = {s0, …, sg}, whose specific linguistic terms and meanings are shown in Table 1. The expert indicates the impact importance of each factor by a value between 0 and 1 and assigns corresponding confidence levels to these values, where the confidence levels sum to 1. This scoring approach differs from the traditional 1–5 scoring system and can deal more effectively with uncertainty and the cognitive ambiguity of the experts’ information. By pairwise comparison, the maximum and minimum risks among all risks are determined, denoted as the maximum risk rB and the minimum risk rW, respectively.
- Step 2:
Compare the highest risk with all the other risks
Based on the maximum risk determined in Step 1, the experts score the comparison of the maximum risk with each of the other risks, again ensuring that all confidence levels sum to 1. Here, aBj denotes the result of comparing the maximum risk rB with risk rj; clearly, the result of comparing the maximum risk with itself is aBB = 1.
- Step 3:
Compare the worst risk with all the other risks
Similar to Step 2, the comparison of the other risks to the minimum risk is still represented using the evidential linguistic term set, where ajW denotes the result of comparing risk rj to the minimum risk rW. Similarly, the result of comparing the minimum risk to itself is aWW = 1.
- Step 4:
Integrate the factor results
Since the results obtained in Steps 1 to 3 reflect the individual opinions of the experts, these opinions need to be integrated to obtain the final results regarding the importance of each factor. This integration process was implemented in code to ensure accuracy and consistency in data processing.
- Step 5:
Obtain optimal weights by optimization model
After integrating all the results, they are transformed into numerical form based on the probabilities and the transformed data are entered into the weight calculation software in order to obtain the weights and consistency coefficients for each risk. Risks with weight values less than 0.05 are eliminated to ensure the simplicity and accuracy of the model.
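The integration in Step 4 can be sketched with Dempster's rule of combination from D-S evidence theory. The sketch below is a minimal, self-contained illustration: the frame of discernment and the two expert mass assignments are hypothetical, and the paper's actual fusion code over the evidential linguistic term set may differ.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two basic mass assignments (dicts mapping frozenset -> mass)
    with Dempster's rule: sum products over intersecting focal elements,
    then renormalize by 1 minus the conflict mass."""
    combined = {}
    conflict = 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb  # mass assigned to the empty set
    if conflict >= 1.0:
        raise ValueError("Total conflict: evidence cannot be combined")
    k = 1.0 - conflict
    return {s: v / k for s, v in combined.items()}

# Hypothetical belief assignments of two experts over linguistic terms
HIGH, LOW = frozenset({"high"}), frozenset({"low"})
EITHER = frozenset({"high", "low"})       # ignorance: either term possible
expert1 = {HIGH: 0.6, EITHER: 0.4}
expert2 = {HIGH: 0.5, LOW: 0.2, EITHER: 0.3}
fused = dempster_combine(expert1, expert2)
```

Combining further experts is just a left fold of `dempster_combine` over their mass assignments; the fused masses can then be converted to the numerical form required by Step 5.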
3.1.2. Evidence-Based DEMATEL Determination of Direct Impact Relationships for Key Risks
Since there are interrelated influences between the risks, and their internal relationships directly affect the effectiveness of the risk response strategy, this paper considers the direct interactions between risks when developing each risk response strategy. The DEMATEL method is mainly used to determine the interaction between factors. It is a systems science methodology, mainly employing graph theory and matrix tools: by establishing a correlation matrix among the various elements in a system, the causal relationships among the elements and the position of each element in the system are ultimately determined [43].
The filtered major risks are obtained through the evidence-based BWM method, and m experts from different fields are invited to provide their opinions to determine the influence relationships of the N factors. Each expert is asked to indicate the extent to which he or she believes factor Fi influences factor Fj (expressed as Fi → Fj). The specific steps of DEMATEL based on the evidential linguistic term set are as follows [44]:
- Step 1:
Construct expert judgment matrix
Replace the 0–4 graded scoring method with the evidential linguistic term set. Assuming that the k-th expert gives the degree of interaction between all factors, an N × N non-negative judgment matrix Z^k, k = 1, 2, 3, …, m, is constructed, where z^k_ij denotes the k-th expert’s judgment of the degree to which Fi influences Fj. Note that the diagonal of each answer matrix is set to “-”, meaning that a factor does not affect itself.
- Step 2:
Obtain the initial direct relationship (IDR) matrix
Based on the evidential linguistic term set scoring used by the experts in Step 1, we first integrate the results and then transform the integrated results into numerical form according to the corresponding probabilities, which gives the initial direct relationship (IDR) matrix G.
- Step 3:
Obtain the normalized IDR matrix
Let s be the larger of the maximum row sum and the maximum column sum of the matrix G; the normalized IDR matrix is then calculated as N = G/s.
- Step 4:
Obtain the total relationship matrix
The total relationship matrix T is calculated from the normalized IDR matrix as T = N(I − N)^(-1), where I is the identity matrix.
- Step 5:
Calculate factor attribute parameters and analyze the results
Let Di be the i-th row sum of T and Ri the i-th column sum of T. Di denotes the total influence exerted by factor Fi on all other factors in the system, called the degree of influence of Fi, and Ri denotes the total influence exerted on Fi by all other factors, referred to as the degree to which Fi is influenced. The centrality degree Di + Ri indicates the significance of factor Fi, showing that it plays an important role in the complex system; the causality degree Di − Ri shows the net effect of factor Fi on the complex system. Note that if Di − Ri is positive, factor Fi is a causal factor; if Di − Ri is negative, factor Fi is an effect factor.
- Step 6:
Set up the threshold and obtain the causal-relation map
Using the centrality degree as the horizontal coordinate and the causality degree as the vertical coordinate, draw the scatter plot of the main risks. The sum of the mean value and the standard deviation of the total relationship matrix of each group is also calculated as the threshold value, and values exceeding the threshold are considered to demonstrate the existence of influence, which is indicated by arrows on the graph.
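Steps 3–6 above can be sketched numerically as follows. The 3 × 3 direct-relation matrix is hypothetical, and T = N(I − N)^(-1) is the standard DEMATEL total-relation formula assumed here:

```python
import numpy as np

# Hypothetical integrated direct-relation matrix G for three risk factors
G = np.array([[0.0, 3.0, 2.0],
              [1.0, 0.0, 2.0],
              [2.0, 1.0, 0.0]])

# Step 3: normalize by the larger of the max row sum and max column sum
s = max(G.sum(axis=1).max(), G.sum(axis=0).max())
N = G / s

# Step 4: total relationship matrix T = N (I - N)^{-1}
T = N @ np.linalg.inv(np.eye(len(G)) - N)

# Step 5: degree of influence D (row sums), degree influenced R (column sums)
D = T.sum(axis=1)
R = T.sum(axis=0)
centrality = D + R   # prominence of each factor in the system
causality = D - R    # > 0: causal factor, < 0: effect factor

# Step 6: threshold = mean + standard deviation of the entries of T;
# entries above it are drawn as arrows in the causal-relation map
threshold = T.mean() + T.std()
influences = np.argwhere(T > threshold)  # (i, j) pairs: Fi -> Fj
```

A useful sanity check is that the causality degrees always sum to zero, since the row sums and column sums of T have the same total.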
3.1.3. Ontology Modeling Primitives
The concept of ontology originally originated in the field of philosophy, where it was defined as “a systematic description of objective things in the world”, i.e., “existentialism”. Later, Gruber defined ontology as “a clear specification of conceptualization” that facilitates the integration and sharing of knowledge [
45]. It enables us to construct disaster cases with domain knowledge and to reuse this knowledge as a whole [
4]. Typically, ontologies are categorized into four types [
46]: top-level ontologies, domain ontologies, task ontologies, and application ontologies. Among them, the top-level ontology mainly studies the relationship between concepts. Domain ontology studies the connection between concepts within a specific domain. Task ontology is used to express the connection between concepts within a specific task. The application ontology is used to describe some specific applications, which can refer to concepts in the domain ontology as well as concepts appearing in the task ontology. Within an ontology, there are five main elements: classes, relations, functions, axioms, and instances. This study uses Protégé software [
47], a scenario-based ontology case representation, where all historical cases and target cases are represented as ternary groups (i.e., incident attribute descriptions, risk network descriptions, and response strategy descriptions).
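As a minimal illustration of the ternary-group case structure, the sketch below models a case as a simple record; the field names and attribute values are hypothetical, not the paper's actual ontology classes:

```python
from dataclasses import dataclass

@dataclass
class RiskCase:
    """Scenario-based case as a ternary group: incident attributes,
    risk network, and response strategies (field names illustrative)."""
    incident_attributes: dict   # e.g. typhoon intensity, landfall location
    risk_network: dict          # risks mapped to the risks they influence
    response_strategies: list   # strategy descriptions tied to the risks

# A hypothetical historical case instance
case = RiskCase(
    incident_attributes={"max_wind_speed_ms": 42.0, "landfall": "Hainan"},
    risk_network={"grid_equipment_damage": ["power_outage"]},
    response_strategies=["pre-position mobile generators near substations"],
)
```

In the actual framework these triples are maintained as ontology individuals in Protégé; the record form above is only a convenient in-memory mirror for retrieval experiments.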
3.2. Three-Dimensional Case Retrieval Based on the Public Safety Triangle Theory
Case retrieval is one of the key steps in CBR. It involves searching for historical cases from the case base that are similar to the new case (i.e., the new problem). Once the most similar cases are identified, their solutions can be reused and adapted to the current problem [
3]. Among other things, the quality of the case search determines the effectiveness of the system [
8]. Based on this, this part proposes a three-dimensional case retrieval method based on the public safety triangle theory, which involves three main similarity measures (i.e., accident attribute similarity, carrier requirements similarity, and emergency response capability at the incident site similarity). Firstly, the accident attribute similarity is used to measure the similarity between the two cases themselves, which is calculated through a series of hazard attribute indicators. Secondly, the carrier requirements similarity is used to measure the similarity in terms of the immediate recovery needs of the incident site after the disaster, which is assessed in terms of people, thing and systems. After that, the emergency response capability at the incident site similarity is used to measure the similarity of the local response capacity to the disaster, and this part is based on the crisis life cycle theory. The local similarity is first calculated for each segment, and finally the results of these three similarity measures are aggregated into a composite similarity between the historical cases and the target cases to determine the final set of similar cases.
3.2.1. Local Similarity Calculation
In this step, three data types (crisp symbol, crisp number, and interval number) are used to represent the incident attributes. Let Ts, Tn, and Ti denote the crisp symbol, crisp number, and interval number data types, respectively. The attribute feature set is F = {f1, f2, f3, …, fk}, where fj denotes a certain attribute feature. The accident attribute similarity between each historical case Ci and the target case C0 is denoted as SimA(C0, Ci), the carrier requirements similarity is denoted as SimC(C0, Ci), and the emergency response capability at the incident site similarity is denoted as SimE(C0, Ci). Calculations are performed using the following Equations (10)–(12), depending on the data type.
- (1)
If fj ∈ Ts, the local similarity between cases is computed as [48]:
where f0j is the value of the j-th attribute in the target case C0, and fij is the value of the j-th attribute in the i-th historical case.
- (2)
If fj ∈ Tn, the local similarity between cases is computed as [48]:
where max and min denote the maximum and minimum operations, respectively.
- (3)
If fj ∈ Ti, the local similarity between cases is computed as [48]:
where the lower and upper bounds of the two interval values are used in the calculation.
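Since Equations (10)–(12) are not reproduced in the text above, the sketch below implements one common formulation for each data type (exact match for crisp symbols, range-normalized distance for crisp numbers, and overlap ratio for interval numbers); the paper's exact equations from [48] may differ in detail.

```python
def sim_symbol(x, y):
    """Crisp symbol: 1 if the two symbolic values match exactly, else 0."""
    return 1.0 if x == y else 0.0

def sim_number(x, y, lo, hi):
    """Crisp number: 1 minus the distance normalized by the attribute's
    observed range [lo, hi] across the case base."""
    return 1.0 - abs(x - y) / (hi - lo) if hi > lo else 1.0

def sim_interval(a, b):
    """Interval number: overlap length divided by union length
    (a Jaccard-style choice); a and b are (lower, upper) pairs."""
    inter = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
    union = max(a[1], b[1]) - min(a[0], b[0])
    return inter / union if union > 0 else 1.0
```

For example, two wind-speed readings of 8 and 10 on an attribute ranging over [0, 20] give a local similarity of 0.9 under `sim_number`.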
3.2.2. Global Similarity Calculation
In this step, the local similarities within each of the three similarity measures are first averaged with equal weights, and the thresholds of the three similarity measures (α, β, γ) are set by the experts. The cases whose results for the three similarity measures satisfy the thresholds (SimA ≥ α, SimC ≥ β, SimE ≥ γ) are placed into the initial case set C′ = {C′1, C′2, C′3, …, C′p}. After that, the global similarity is calculated for all cases in the initial case set: a corresponding weight (wA, wC, wE) is set for each similarity measure, and the weighted sum is computed according to Equation (13) to obtain the final result.
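The threshold filtering and the weighted aggregation of Equation (13) can be sketched as follows, assuming a simple weighted sum over the three dimensions; the case identifiers and similarity values below are hypothetical:

```python
def global_similarity(case_sims, thresholds, weights):
    """Keep cases whose per-dimension similarities all meet the expert
    thresholds, then aggregate with a weighted sum. Dimension order:
    accident attributes, carrier requirements, response capability."""
    retained = {}
    for cid, sims in case_sims.items():
        if all(s >= t for s, t in zip(sims, thresholds)):
            retained[cid] = sum(w * s for w, s in zip(weights, sims))
    return retained

# Hypothetical per-dimension similarities for three historical cases
case_sims = {"C1": (0.8, 0.9, 0.75),
             "C2": (0.4, 0.9, 0.9),    # fails the first threshold
             "C3": (0.6, 0.7, 0.7)}
result = global_similarity(case_sims,
                           thresholds=(0.5, 0.7, 0.7),
                           weights=(0.6, 0.2, 0.2))
```

The threshold and weight values shown match those reported for the case study in Section 4; any case failing even one dimension is excluded from the initial case set before weighting.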
3.3. Case Correction and Reuse Based on Generative AI
The CBR method is based on the strategy of using the most similar historical cases to solve the target case [
13]. However, in general, due to the fact that events may differ from each other to a different extent, the retrieval results of some historical cases cannot be directly applied to solve the current risks [
9], and it is necessary to formulate reasonable and scientific modification rules to reprocess the response strategies of historical cases. Based on numerous scholarly studies [
13,
14,
49], this paper introduces generative AI into the case revision process to improve the rationality and applicability of response strategies. The specific operation steps are shown in
Figure 3.
- Step 1:
Threshold determination.
This step consists of the experts determining two thresholds, θ1 and θ2 (θ1 < θ2), from the similarity interval [0, 1], where θ1 denotes the acceptable bottom line and θ2 denotes the similarity merit line.
- Step 2:
Similarity interval classification.
After the thresholds are determined in the first step, the original [0, 1] similarity interval can be divided into three segments, [0, θ1), [θ1, θ2), and [θ2, 1], named the invalid, acceptable, and optimal intervals, correspondingly.
- Step 3:
All historical cases’ interval distributions.
Match the global similarity results calculated for each historical case in Section 3.2.2 into the intervals defined in Step 2 and organize the historical cases within each interval.
- Step 4:
Strategy reuse based on high-similarity historical cases.
If the power network risk experienced by the target case can find a response scheme in a historical case within the optimal interval [θ2, 1], the target case can directly reuse the response strategy of that historical case. If no corresponding response scheme can be found, go to Step 5.
- Step 5:
Strategy correction based on generative AI.
If the power network risk experienced by the target case can be found in the response scenarios of the historical cases within the acceptable interval [θ1, θ2), the current response scenarios of the historical cases need to be optimized and improved with the help of generative AI in order to generate a response strategy that is more applicable to the target case. If no corresponding response scheme can be found, go to Step 6.
- Step 6:
Strategy revision based on risk experts.
If a response scenario for the power network risk experienced by the target case can only be found in the historical cases within the invalid interval [0, θ1), the response strategy is not informative, owing to the minimal similarity between those historical cases and the target case, so expert judgment is needed to make the response decision for this part of the risk.
- Step 7:
Strategy upgrade.
Since direct interactions between risks are discussed in Section 3.1.2, if a directly influencing risk and a directly influenced risk occur simultaneously in the target case, the decision-maker should decide whether to escalate the existing strategy and increase the treatment of the directly influencing risk [13], which will also mitigate the occurrence of the directly influenced risk to some extent.
- Step 8:
Risk strategy integration.
Finally, the risk response strategies obtained in the different intervals are integrated to obtain a complete risk response strategy for the target case. The above eight steps yield modified response strategies that are better adapted to the target case, ensuring that all risks involved in the target case are effectively addressed [13]. In addition, the proposed strategy modification process takes into account the direct interdependencies between risks, which improves the effectiveness of the generated risk response strategies. Finally, the target case and its risk response strategies are retained in the case base to enrich it and provide more effective strategy references for future risks.
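Steps 1–6 above amount to routing each historical case by the interval its global similarity falls into. A minimal sketch, using the threshold values θ1 = 0.7 and θ2 = 0.85 reported in the Section 4 case study (the function and route labels are illustrative):

```python
def route_case(similarity, theta1=0.7, theta2=0.85):
    """Map a global similarity score to a handling route per Steps 2-6.
    theta1/theta2 are the expert-set thresholds from Section 4."""
    if similarity >= theta2:
        return "reuse"       # optimal interval: reuse the strategy directly
    if similarity >= theta1:
        return "ai_revise"   # acceptable interval: revise with generative AI
    return "expert"          # invalid interval: fall back to expert judgment

# Hypothetical global similarities for three historical cases
routes = {cid: route_case(s)
          for cid, s in {"C1": 0.90, "C2": 0.78, "C3": 0.55}.items()}
```

In a full pipeline, "ai_revise" would prompt a generative model with the historical strategy and the target-case attributes, while "expert" cases would be queued for manual decision-making.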
4. Case Study
In this section, a concrete case is presented to illustrate in detail how the proposed new method can be used to cope with the major risks faced by urban infrastructures under natural disasters.
Typhoon information from across China between 2017 and 2024 was collected from the China Meteorological Data Network (http://data.cma.cn/en (accessed on 19 August 2025)), the Typhoon Path Network (http://typhoon.zjwater.gov.cn (accessed on 19 August 2025)), and other channels, and a typhoon risk database with 42 source cases was constructed. A case base of electric power network risk under disasters is thus established, and the typhoon Capricorn, which occurred in September 2024, is selected as the target case of this study. According to the public safety triangle theory, Table 2 describes 22 attribute characteristics, along with their numbers, weights, and references, with regard to the three dimensions. The ontology modeling of the partial attribute characteristics of the accident part in Figure 4 is carried out using Protégé software. Table 3 shows the values of some of the attribute features for some of the source cases and the selected target case.
Based on the evidence-based BWM questionnaire, the results were validated with a consistency coefficient below 0.5 (as shown in Table 4). Risks with weights below 0.05 were filtered out to identify the key power network risks for this study, and the risk ontology model (shown in Figure 5) was constructed accordingly.
Then, based on the results of the evidence-based DEMATEL questionnaire, as shown in Table 5, the direct interrelationships between the main power network risks are identified, as shown in Figure 6. From Figure 6, we can clearly observe that damage to power grid equipment, damage to power supply equipment (lines), damage to electrical installations, and damage to transmission lines all affect the power outage risk to varying degrees, and some of these risks also influence one another. We take one pair of influence relationships as an example for a detailed analysis. Since damage to power grid equipment has a certain impact on power outages, this can be understood from the following three aspects: (1) When both risks occur simultaneously, strengthening the control of grid equipment damage can also mitigate the consequences of the power outage to a certain extent. (2) When the risk control strategy for the power outage is difficult to improve effectively, the control of grid equipment damage can be improved instead, thereby indirectly influencing the outage. (3) When grid equipment damage has occurred but a power outage has not, since there is a certain connection between the two, the response strategy for the outage can be adjusted in advance to prevent its occurrence.
According to Equations (10)–(12), the local similarity between the target case and each historical case is calculated separately, and the results are shown in Table 6, Table 7 and Table 8.
After that, according to the thresholds set by the experts for each dimension (α = 0.5, β = 0.7, γ = 0.7), the historical cases whose similarities in the three dimensions exceed the thresholds are filtered out and retained in the initial case base. Thus, an initial case base C′ = {C′1, C′2, C′3, …, C′15}, which contains 15 source cases, is established. Finally, according to the weights of each dimension set by the experts (wA = 0.6, wC = 0.2, wE = 0.2), the global similarity over the three dimensions is calculated for the cases in the initial case base according to Equation (13), and the results are shown in Table 9.
The global similarity of the above cases is divided into the specified intervals according to the interval thresholds (θ1 = 0.7, θ2 = 0.85) given by the experts; the power network risk of typhoon Capricorn is taken as an example to briefly illustrate the process of case modification and reuse.
Through the global similarity interval division, one historical case falls in the optimal interval [0.85, 1]; seven historical cases fall in the acceptable interval [0.7, 0.85); and seven historical cases fall in the invalid interval [0, 0.7). After removing all the historical cases in the invalid interval, the risk strategies in the optimal interval are used directly, while the risk strategies in the acceptable interval are used after revision by generative AI; the strategies for causal factors are then upgraded based on the interactions between the major risks, which results in the final risk coping strategy to support emergency decision-making. The specific strategies are shown in Table 10. The corresponding response strategy ontology model is shown in Figure 7.
5. Contributions and Future Directions
Solving new events by learning from previous events is an effective and often preferred approach after emergencies of all types. Therefore, the optimization of CBR methods is an ongoing academic concern. Traditional CBR methods face significant methodological challenges, including limited information resources in case databases, unscientific similarity computation methods, and the lack of a unified case correction mechanism. These limitations lead to suboptimal case matching and insufficient solution adaptation; although some scholars have addressed these problems, there is still room to improve the effectiveness of the method. Based on this, this paper proposes a CBR framework enhanced by generative AI. The methodological framework addresses fundamental limitations of existing CBR approaches through systematic improvements in case representation, similarity computation, and solution adaptation. The specific novel contributions are the following three.
(1) In the case representation stage, the proposed ontology model provides a structured basis for case knowledge organization. The ontology explicitly defines the core concepts, attributes, relationships, and constraint rules in each case, providing a rigorous semantic framework for subsequent retrieval and reasoning. In addition, it ensures the structured storage of case data and solves the problems of loose knowledge representation and semantic ambiguity in traditional approaches.
(2) In the case-matching stage, the enhanced similarity computation method provides more accurate case matching. The method not only considers the similarity of the surface features of the cases, but also deeply integrates the analysis of the differences between scenarios and the interdependence between key factors. This comprehensive consideration significantly improves the accuracy, relevance, and contextualization of case matching and ensures that the retrieved historical cases are more practical references for solving new problems.
(3) In the case revision stage, the generative AI enhanced revision mechanism dynamically optimizes the solution. This mechanism leverages the pattern recognition, content generation, and adaptation capabilities of generative models. On this basis, the generative AI model intelligently revises, adjusts, and optimizes the original solution to better fit the needs and constraints of the new problem. This revision process is dynamic and interactive, absorbing feedback and continuously improving the quality and applicability of the solution.
There are still some limitations to this study, in terms of the optimization of the CBR process, the comprehensiveness of the similarity calculations between the historical cases and the target case, and the development of coping strategies for the target case under uncertain natural disaster scenarios, which may lead to unprecedented risk patterns, i.e., “inexperienced” cases; these issues still need to be further investigated. In addition, in terms of application, the proposed methodology needs to be explored in depth in order to extend it to other domains, such as risk response in water supply networks, transportation networks, and so on.