Article

Research on Knowledge Graph Construction and Application for Online Emergency Load Transfer in Power Systems

1 Power Dispatching Control Center, China Southern Power Grid, Guangzhou 510000, China
2 School of Electrical Engineering and Automation, Wuhan University, Wuhan 430000, China
* Author to whom correspondence should be addressed.
Electronics 2025, 14(17), 3370; https://doi.org/10.3390/electronics14173370
Submission received: 21 June 2025 / Revised: 8 August 2025 / Accepted: 9 August 2025 / Published: 25 August 2025

Abstract

Efficient emergency load transfer is crucial for ensuring the power system’s safe operation and reliable power supply. However, traditional load transfer methods that rely on human experience have limitations, such as slow response times and low efficiency, which make it difficult to address complex and diverse fault scenarios effectively. Therefore, this paper proposes an emergency load transfer method based on knowledge graphs to achieve intelligent management and efficient retrieval of emergency knowledge. Firstly, a named entity recognition model based on ERNIE-BiGRU-CRF is constructed to automatically extract key entities and relationships from the load transfer plan texts, obtaining information such as fault names, fault causes, and operation steps. Secondly, a power system emergency load transfer knowledge graph is constructed based on the extracted structured knowledge, which is efficiently stored using a graph database and enables the visualization and interactive query of knowledge. Finally, real power system fault cases prove that the proposed method can effectively improve the retrieval efficiency of fault knowledge and provide intelligent support for online emergency load transfer decisions.

1. Introduction

With rapid economic development and the continuous increase in electricity demand, ensuring the safety, stability, and reliability of the power supply has become a core issue for power systems [1]. Especially in extreme natural disaster scenarios such as rainstorms and typhoons, quickly generating effective emergency load transfer plans is of great significance for avoiding large-scale power outages and reducing economic losses and social impacts. Traditionally, dispatching experts screen typical fault scenarios and formulate load transfer plans based on the load rates of transformers, lines, and switches; when actual faults occur, dispatchers carry out the work according to these pre-prepared plans. However, this approach relies on manual search and judgment, which involves a large amount of work and long processing times, carries the risk that a pre-prepared plan no longer fits the actual fault, and struggles to meet the needs of modern power systems with increasingly complex topologies and operation modes. It is therefore urgent to build an intelligent decision-making knowledge base for power system load transfer to raise the automation and intelligence level of the load transfer process.
In recent years, artificial intelligence (AI) and deep learning (DL) technologies have found extensive applications across power systems, including optimal dispatch and equipment maintenance, because they can automatically extract knowledge from massive datasets [2,3,4]. Such studies highlight that multi-dimensional data analysis and intelligent modeling are crucial for improving decision-making efficiency, offering valuable insights for the automation and intelligent upgrade of load transfer decisions. However, unlike the structured data common in DL tasks such as power prediction and optimal dispatch, load transfer plans exist as complex, diverse, unstructured or semi-structured text, which is difficult for computers to process directly. Named entity recognition (NER) offers a viable solution by extracting key information, including entity boundaries and categories, from vast amounts of unstructured text [5]. Currently, models like long short-term memory (LSTM) and gated recurrent unit (GRU) networks are widely employed in NER tasks. Ref. [6] applied a bidirectional neural network structure to NER and proposed the BiLSTM-CRF model, which mines bidirectional semantic information in text and subsequently became a classic approach to named entity recognition. In the field of power system load transfer, ref. [7] established a power grid fault handling plan matching method based on BERT-CRF-RE2, which improved the accuracy and efficiency of plan matching. Ref. [8] proposed a sequence labeling method based on the BERT model and fine-tuned it with labeled data for information extraction, effectively improving extraction accuracy. Ref. [9] identified plan entity features using a BiLSTM-CRF and extracted entity relationships with a text convolutional neural network (TextCNN).
Knowledge graph technology offers a powerful approach to representing knowledge by depicting concepts, entities, and their complex interrelationships in the objective world, and it is widely used for its strong reasoning capabilities and efficient retrieval performance [10]. Compared with traditional knowledge organization and management methods, knowledge graphs, built on a graph-structured data organization, support more efficient data retrieval, handle complex and diverse relationships, and enable millisecond-level multi-hop relationship queries. In the realm of power system load transfer, ref. [11] proposed a framework based on a distributed knowledge graph. Ref. [12] presented a knowledge graph for intelligent dispatching and control and analyzed the implementation techniques of its key components. Ref. [13] introduced a cascaded model that combines a random forest classifier with knowledge reasoning and can efficiently and accurately identify six types of power system faults.
Therefore, this paper investigates the construction and application of a knowledge graph for power system load transfer. First, a DL model is developed to perform named entity recognition on fault disposal plan texts, enabling the extraction of key information. Subsequently, the extracted knowledge is stored and visually represented within a knowledge graph. By constructing a power grid fault disposal knowledge graph, the aim is to assist dispatchers in efficiently managing faults, thereby improving work efficiency and decision-making accuracy.

2. Fault Disposal Plan Information Extraction Technique

2.1. Text Content of Power Grid Fault Disposal Plan

The plan text consists of four parts: fault type, fault name, fault cause, and operational steps. Fault types are categorized into total blackout faults and N-1 faults. The fault name identifies the specific fault event, and the fault cause offers a detailed explanation of the specific reasons for the fault. The operational steps list the immediate operational instructions needed to restore normal operation of the power grid. Because the power grid fault disposal plan is a professional text, its characteristics must be fully considered to accurately extract key information. To this end, this paper employs NER technology to determine the boundaries and categories of specific entities in the text. Table 1 shows some named entity categories and examples extracted from power grid fault cases. By identifying these key entities, the plan text can be parsed efficiently and accurately, providing strong support for the rapid disposal of power grid faults.

2.2. ERNIE-BiGRU-CRF Named Entity Recognition Model

In this paper, we couple the pre-trained ERNIE model with a basic BiGRU-CRF sequence labeling model and train the resulting NER model via transfer learning. The model consists of an input layer, an ERNIE pre-training layer, a BiGRU layer, and a CRF layer. Its structure is shown in Figure 1.
ERNIE learns rich interrelationships between words, and between words and their contexts, through large-scale unsupervised pre-training; these semantic dependencies support feature learning in the downstream task. The BiGRU layer acts as an encoder: its bidirectional recurrent structure learns contextual information and dependencies in the input sequences and produces the feature representations for the whole model. The CRF layer acts as a decoder: using the BiGRU output and label dependency information, it accounts for transition dependencies between labels and finds the optimal label sequence by maximizing the global probability.
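As a concrete illustration, a minimal PyTorch sketch of this three-layer stack follows. It is not the authors' implementation: the checkpoint name (nghuyong/ernie-3.0-base-zh), the pytorch-crf dependency, and the default dimensions are illustrative assumptions.

```python
# A minimal sketch of the ERNIE-BiGRU-CRF stack described above.
# Assumptions: a HuggingFace ERNIE checkpoint and the pytorch-crf package;
# neither is specified by the paper.
import torch.nn as nn
from transformers import AutoModel
from torchcrf import CRF  # pip install pytorch-crf

class ErnieBiGRUCRF(nn.Module):
    def __init__(self, num_labels, gru_dim=128,
                 pretrained="nghuyong/ernie-3.0-base-zh"):  # assumed checkpoint
        super().__init__()
        self.encoder = AutoModel.from_pretrained(pretrained)  # ERNIE pre-training layer
        hidden = self.encoder.config.hidden_size              # 768 in the paper's setup
        self.bigru = nn.GRU(hidden, gru_dim, batch_first=True,
                            bidirectional=True)               # BiGRU encoder
        self.dropout = nn.Dropout(0.5)
        self.emissions = nn.Linear(2 * gru_dim, num_labels)   # per-token label scores
        self.crf = CRF(num_labels, batch_first=True)          # CRF decoder

    def forward(self, input_ids, attention_mask, labels=None):
        x = self.encoder(input_ids, attention_mask=attention_mask).last_hidden_state
        x, _ = self.bigru(x)
        scores = self.emissions(self.dropout(x))
        mask = attention_mask.bool()
        if labels is not None:  # training: CRF negative log-likelihood
            return -self.crf(scores, labels, mask=mask, reduction="mean")
        return self.crf.decode(scores, mask=mask)  # inference: Viterbi label paths
```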

2.2.1. ERNIE Pre-Training Model

Data security and privacy are critical concerns in sensitive environments such as power dispatch centers. The proposed methodology therefore adopts a pre-training plus fine-tuning paradigm built on a powerful, general-domain foundation model. Crucially, fine-tuning requires far less training data and computation than training a model from scratch, which makes it feasible to train and deploy the fine-tuned model locally, inside the secure perimeter of a dispatch center. Because potentially sensitive operational data never needs to be transmitted externally for processing, this localized strategy inherently strengthens data security and privacy protection and forms a foundational layer of cybersecurity for the system. The ERNIE pre-training model selected here, with its pre-training plus fine-tuning architecture, offers an effective way to meet this requirement.
ERNIE is a Chinese-focused pre-trained language model developed by Baidu. Compared to the general-purpose BERT [14], it has unique advantages in processing and understanding Chinese text, and it is therefore widely used in downstream tasks such as text matching, classification, question answering, and sequence labeling. BERT is built from multi-layer bidirectional Transformer encoders, as shown in Figure 2.
The core of the Transformer encoder lies in its multi-head attention mechanism. This mechanism employs multiple attention modules to capture the relationships between input words and contextual words from different perspectives, and assigns weights to them. The formula for this process is as follows (Equation (1)):
$\mathrm{Attention}(Q, K, V) = \mathrm{Softmax}\left(\frac{QK^{T}}{\sqrt{d_k}}\right)V$ (1)

In the formula, $Q$, $K$, and $V$ are the input vector matrices, and $d_k$ is the word vector dimension. The multi-head attention mechanism applies multiple different linear transformations to project $Q$, $K$, and $V$, and then concatenates the resulting attention values. The calculation is as follows:

$\mathrm{head}_i = \mathrm{Attention}(QW_i^{Q}, KW_i^{K}, VW_i^{V})$ (2)

$\mathrm{MultiHead}(Q, K, V) = \mathrm{Concat}(\mathrm{head}_1, \ldots, \mathrm{head}_h)W^{O}$ (3)

In the formulas, $W_i^{Q}$, $W_i^{K}$, and $W_i^{V}$ represent the initialized projection matrices; Concat denotes the concatenation of the individual $\mathrm{head}_i$; $W^{O}$ is the output weight matrix; and MultiHead is the multi-head attention value.
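For concreteness, the short sketch below transcribes Equations (1)-(3) directly in PyTorch; the head count and tensor dimensions are arbitrary illustrative choices, not values from the paper.

```python
# Scaled dot-product attention and multi-head attention, Equations (1)-(3).
import math
import torch

def attention(Q, K, V):
    # Attention(Q, K, V) = Softmax(Q K^T / sqrt(d_k)) V
    d_k = Q.size(-1)
    weights = torch.softmax(Q @ K.transpose(-2, -1) / math.sqrt(d_k), dim=-1)
    return weights @ V

def multi_head(Q, K, V, W_q, W_k, W_v, W_o):
    # head_i = Attention(Q W_i^Q, K W_i^K, V W_i^V); the heads are
    # concatenated and projected by W^O.
    heads = [attention(Q @ wq, K @ wk, V @ wv)
             for wq, wk, wv in zip(W_q, W_k, W_v)]
    return torch.cat(heads, dim=-1) @ W_o

# Toy example: 2 heads, sequence length 4, model width 8, head width 4.
x = torch.randn(4, 8)
W_q = [torch.randn(8, 4) for _ in range(2)]
W_k = [torch.randn(8, 4) for _ in range(2)]
W_v = [torch.randn(8, 4) for _ in range(2)]
W_o = torch.randn(8, 8)
print(multi_head(x, x, x, W_q, W_k, W_v, W_o).shape)  # torch.Size([4, 8])
```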
ERNIE improves on BERT by fully utilizing the relationships between words and entities in the corpus to enhance the learning of Chinese semantic information. It is also built based on bidirectional multi-layer Transformer encoders, and its structure is shown in Figure 3.

2.2.2. BiGRU Model

The bidirectional gated recurrent unit (BiGRU) is a classic recurrent neural network (RNN) [15] variant for modeling sequential information such as sentences. It effectively captures contextual information when processing textual sequence data, which makes it well suited to the task in this paper.
GRU is a gated recurrent neural network unit composed of an update gate [16], a reset gate, and a candidate state. The GRU unit is computed as follows.

The gating factor $r_t$ of the reset gate:

$r_t = \sigma(W_r[h_{t-1}, x_t] + b_r)$ (4)

The gating factor $z_t$ of the update gate:

$z_t = \sigma(W_z[h_{t-1}, x_t] + b_z)$ (5)

The candidate memory unit:

$\tilde{h}_t = \tanh(W_h[r_t \odot h_{t-1}, x_t] + b_h)$ (6)

The memory unit at the current moment:

$h_t = (1 - z_t) \odot h_{t-1} + z_t \odot \tilde{h}_t$ (7)

where $W_r$, $W_z$, and $W_h$ are the weight matrices of the reset gate, update gate, and candidate state, $b_r$, $b_z$, and $b_h$ are the corresponding biases, $\odot$ denotes element-wise multiplication, and $\sigma$ is the sigmoid function.
The BiGRU generates its final output by combining the hidden states of a forward GRU and a reverse GRU. BiGRU adopts a simpler gating structure with nearly half as many parameters as BiLSTM, so it requires less computation and time per iteration and over the whole training process, and it converges faster. For long-sequence modeling or scenarios with limited computational capacity, BiGRU is therefore the better choice.
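To make the gating data flow concrete, the following NumPy transcription of Equations (4)-(7) computes one GRU time step; the weights are random placeholders, not trained values.

```python
# One GRU time step, Equations (4)-(7), with placeholder weights.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(h_prev, x_t, W_r, W_z, W_h, b_r, b_z, b_h):
    hx = np.concatenate([h_prev, x_t])
    r = sigmoid(W_r @ hx + b_r)                    # reset gate, Eq. (4)
    z = sigmoid(W_z @ hx + b_z)                    # update gate, Eq. (5)
    h_cand = np.tanh(W_h @ np.concatenate([r * h_prev, x_t]) + b_h)  # Eq. (6)
    return (1 - z) * h_prev + z * h_cand           # new hidden state, Eq. (7)

# Toy dimensions: hidden size 3, input size 2.
rng = np.random.default_rng(0)
h, x = rng.standard_normal(3), rng.standard_normal(2)
W = lambda: rng.standard_normal((3, 5))
print(gru_step(h, x, W(), W(), W(), 0.0, 0.0, 0.0))
# A BiGRU runs one GRU forward and one backward over the sequence and
# concatenates the two hidden states at each position.
```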

2.2.3. CRF Layer

Once the modeling of long text sequences is addressed, the sequence labeling task still faces another problem: the dependencies between labels. The Conditional Random Field (CRF) is a common discriminative undirected graphical model that has been widely used for sequence labeling in natural language processing.
CRF training typically maximizes the log-likelihood. Given a sentence $M$, the conditional probability of a label sequence $y$ is computed with the following two formulae, where $\tilde{y}$ ranges over all candidate label sequences for $M$ and $L$ is the loss function:

$P(y \mid M) = \dfrac{\exp(\mathrm{score}(M, y))}{\sum_{\tilde{y}} \exp(\mathrm{score}(M, \tilde{y}))}$ (8)

$L = -\log P(y \mid M)$ (9)

In the above equations, $y$ denotes the true label sequence. During prediction, the CRF model uses the Viterbi algorithm to find the globally optimal sequence, as shown in Equation (10), where $y^{*}$ denotes the candidate sequence that maximizes the score function:

$y^{*} = \arg\max_{\tilde{y}} \mathrm{score}(M, \tilde{y})$ (10)
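As a generic illustration of Equation (10) (not the paper's code), the sketch below implements Viterbi decoding over illustrative per-token emission scores and a label-transition matrix.

```python
# Viterbi decoding: find the label sequence with the maximum global score,
# combining emission scores and label-transition scores (Equation (10)).
import numpy as np

def viterbi(emissions, transitions):
    T, L = emissions.shape              # sequence length, number of labels
    score = emissions[0].copy()         # best score ending in each label
    back = np.zeros((T, L), dtype=int)  # backpointers
    for t in range(1, T):
        # cand[i, j]: best path score ending in label i, then moving to j
        cand = score[:, None] + transitions + emissions[t][None, :]
        back[t] = cand.argmax(axis=0)
        score = cand.max(axis=0)
    path = [int(score.argmax())]
    for t in range(T - 1, 0, -1):       # follow backpointers in reverse
        path.append(int(back[t][path[-1]]))
    return path[::-1]

# Toy example: 5 tokens, 3 labels (e.g., B, I, O).
rng = np.random.default_rng(0)
print(viterbi(rng.standard_normal((5, 3)), rng.standard_normal((3, 3))))
```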

2.3. Grammatical Rule-Based Relational Extraction

After entity extraction from the original text data, we need to identify the relationships between entities to form the triples of the knowledge graph. This paper uses a rule-based approach for relation extraction, which leverages linguistic knowledge and predefined grammatical rules to accurately identify semantic relationships between target entities in text. Specifically, we first preprocess the text corpus with tokenization, part-of-speech (POS) tagging, and named entity recognition. Then, based on the target relation types, experts combine domain and linguistic knowledge to manually define grammatical rules covering part-of-speech, entity types, keywords, and syntactic dependencies, describing how each relation manifests grammatically in a sentence. Next, we apply these rules to the preprocessed text for pattern matching: if a sentence satisfies a rule's conditions, we identify the corresponding relation and extract the relevant entities. Finally, we optimize the extraction results through deduplication and filtering. In specific domains, or in relation extraction tasks with stable sentence structures, this grammatical rule method achieves precise relation extraction. Figure 4 shows the rule-based relation extraction process.
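A minimal sketch of this matching step follows; the regular expressions, relation names, and example sentence are invented for illustration and are not the paper's actual rule set.

```python
# Rule-based relation extraction: match predefined patterns and emit triples.
import re

# Each rule: (sentence pattern with named groups, (head group, relation, tail group)).
RULES = [
    (re.compile(r"(?P<cause>.+?) caused the (?P<device>.+?) to trip"),
     ("cause", "causes_trip_of", "device")),
    (re.compile(r"Open (?P<switch>\S+) at (?P<station>\S+ Substation)"),
     ("station", "has_open_operation", "switch")),
]

def extract_triples(sentence):
    triples = []
    for pattern, (head, rel, tail) in RULES:
        for m in pattern.finditer(sentence):
            triples.append((m.group(head), rel, m.group(tail)))
    return triples

print(extract_triples(
    "The fault caused the Ji-13 circuit breaker at JZZ Substation to trip"))
# [('The fault', 'causes_trip_of', 'Ji-13 circuit breaker at JZZ Substation')]
```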

3. Construction of an Electronic Library of Fault Disposal Plans Based on Knowledge Graphs

3.1. Domain Knowledge Ontology Modeling

Ontology modeling is a common approach to modeling professional domain knowledge; it is used to formally describe domain knowledge, concept hierarchies, attribute relationships, constraint rules, and so on. The ontology modeling process usually follows these steps:
Firstly, clarify the purpose of modeling and the application scenarios, and define the scope of the domain covered by the ontology. By analyzing the characteristics and application needs of domain knowledge, determine the granularity of ontology modeling, the key concepts, and the main relationship types. A clear modeling purpose and scope help guide the subsequent modeling process and control the ontology's size and complexity.
Secondly, collect and analyze structured, semi-structured, and unstructured knowledge resources in the domain, and study and refine them to identify the core concepts, important attributes, and relationships of the domain, laying the foundation for ontology construction.
Next, based on the domain knowledge analysis, define the core concepts of the ontology and organize the hierarchical relationships (e.g., hypernym and hyponym relationships) between concepts. Define the intrinsic attributes and external associations of each concept, where the intrinsic attributes capture the concept's inherent characteristics; at the same time, define value types, value ranges, default values, base constraints, and so on, for numerical attributes to standardize their semantics. Beyond concept hierarchies, other important semantic relationships within the domain also need to be defined to enhance the semantic expressiveness of the ontology.

3.2. Domain Knowledge Graph Construction

Taken together, building a knowledge graph involves multiple aspects, such as data processing, knowledge extraction, and storage management. The standardized domain knowledge graph construction process is shown in Figure 5.
In knowledge extraction, NER [17,18] identifies entity mentions in text, like power equipment names, power enterprises, and project locations, and determines their categories. Relationship extraction identifies associations between entities, such as the affiliation between power equipment and substations, or between power businesses and projects. Attribute extraction focuses on extracting entity attribute values, such as the commissioning time of a transformer or the rated capacity of a generator.
This paper adopts the ERNIE-BiGRU-CRF method for the NER task. ERNIE, fine-tuned on power-related data, can capture professional semantic information, providing a strong foundation for entity recognition. The BiGRU network, with its bidirectional structure, extracts features from both directions in power text sequences, merging contextual information to accurately capture the positions and dependencies of various entities. The subsequent CRF layer, based on the features extracted by BiGRU, considers tag transition probabilities. This ensures precise and complete entity labeling, effectively addressing issues such as vague entity boundaries and diverse entity types. Thus, the method achieves precise recognition of power-related entities, supplying high-quality structured entity data for power knowledge graphs and equipment management.
The results of domain knowledge ontology modeling standardize the conceptual structure and semantic constraints of domain knowledge, providing the basis for standard knowledge representation. The final entity-relationship knowledge is stored in a graph database (e.g., Neo4j or JanusGraph) to support efficient graph querying and analysis. Graph databases represent entities and their relationships as nodes and edges and provide flexible graph models and query languages (e.g., Cypher or Gremlin). Graph querying makes it easy to retrieve entity attributes, navigate entity relationships, discover implicit patterns, and so on.
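As an illustration of such querying, the Py2neo snippet below runs a hypothetical multi-hop Cypher query; the node labels, relationship types, property names, and connection credentials are assumptions made for this sketch, not the paper's actual schema.

```python
# Multi-hop Cypher retrieval over the fault-disposal graph via Py2neo.
from py2neo import Graph

graph = Graph("bolt://localhost:7687", auth=("neo4j", "password"))  # assumed credentials

# From a fault, walk to its causes and its ordered disposal steps.
query = """
MATCH (f:Fault {name: $fault})-[:HAS_CAUSE]->(c:Cause),
      (f)-[:HAS_STEP]->(s:Step)
RETURN f.name AS fault, c.text AS cause, s.command AS command
ORDER BY s.order
"""
for record in graph.run(query, fault="Ji-13 fault at JZZ Substation"):
    print(record["fault"], "|", record["cause"], "|", record["command"])
```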

4. Case Study Analysis

4.1. Dataset and Experimental Parameter Settings

To verify the effectiveness of the NER model constructed in this paper, 3676 power grid fault disposal plan texts from a power grid were used as experimental objects, comprising 2588 total blackout fault texts and 1088 N-1 fault texts. In these plan texts, the "Fault Name" and "Fault Cause" sections detail the specific manifestations and causes of the faults, while the "Operational Steps" section records the specific operations and disposal steps taken. These three sections contain rich entity information, which is the key content for entity extraction. After entity extraction, we obtained 7352 text data entries from the "Fault Name", "Fault Cause", and "Operational Steps" sections as experimental data, adequately covering the key information in power grid fault disposal plans.
After performing BIO labeling on the text data at the character level, this paper randomly divided the labeled dataset into training and testing sets in a 4:1 ratio for model training and testing.
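To make the character-level BIO scheme concrete, a small invented example follows; the tag suffix and the tagged span are assumptions for illustration only.

```python
# Character-level BIO labeling: B- marks an entity's first character,
# I- its continuation, and O everything outside an entity.
sentence = "Open NK979 Negative 1"
labels = ["O"] * len(sentence)

def tag_span(start, end, category):
    labels[start] = "B-" + category
    for i in range(start + 1, end):
        labels[i] = "I-" + category

# Treat "NK979 Negative 1" (characters 5 to the end) as one
# operational-step entity, tagged with an assumed category "STEP".
tag_span(5, len(sentence), "STEP")
print(list(zip(sentence, labels)))
```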
In terms of model parameter settings, the ERNIE model uses a 12-head configuration with a hidden layer dimension of 768. The GRU hidden layer dimension is set to 128, the batch size is 16, and the learning rate is 5 × 10−5. The maximum input text length is 128. We used the Adam optimizer and set the dropout to 0.5 to prevent overfitting. The number of training epochs is 25. Details are provided in Table 2.

4.2. Analysis of the Effectiveness of ERNIE-BiGRU-CRF

To verify the effectiveness of the ERNIE-BiGRU-CRF model proposed in this paper for the NER task in power grid fault disposal plans, this section conducts comparative experiments using multiple baseline models, including BiGRU and ERNIE-BiLSTM-CRF. The performance of these models is evaluated using the precision (P), recall (R), and F1-score (F1) metrics, which are calculated as follows:
$P = \frac{T_p}{T_p + F_p}$ (11)

$R = \frac{T_p}{T_p + F_n}$ (12)

$F1 = \frac{2PR}{P + R}$ (13)

In the formulas, $T_p$ is the number of correctly identified entities, $F_n$ is the number of correct entities that were missed, and $F_p$ is the number of non-entities incorrectly identified as entities.
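In code, the three metrics reduce to a few lines. The entity counts below are hypothetical values chosen only because they roughly reproduce the ERNIE-BiGRU-CRF row of Table 3; they are not the paper's actual counts.

```python
# Precision, recall, and F1 from entity counts, Equations (11)-(13).
def prf1(tp, fp, fn):
    p = tp / (tp + fp)
    r = tp / (tp + fn)
    return p, r, 2 * p * r / (p + r)

p, r, f1 = prf1(tp=72, fp=22, fn=9)  # hypothetical counts
print(f"P={p:.2%}  R={r:.2%}  F1={f1:.2%}")  # ~76.6%, 88.9%, 82.3%
```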
All three models were trained for 25 epochs under the same hardware and data conditions.
For the BiGRU model, we employed a token-wise Cross-Entropy loss function, which is suitable for sequence labeling tasks where label transitions are not considered. The formula for the loss function is as follows:
$L_{\mathrm{BiGRU}} = -\frac{1}{|M|}\sum_{(i,j) \in M} \log P(y_{ij} \mid x_{ij})$ (14)

In the formula, $M$ is the set of indices of all valid token positions (where mask = 1). For each valid position $(i, j)$, $y_{ij}$ is the true label and $x_{ij}$ is the input feature. The term $P(y_{ij} \mid x_{ij})$ is the probability obtained by applying the softmax function to the model's output logits, indicating the model's confidence in predicting label $y_{ij}$ for input $x_{ij}$.
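A PyTorch rendering of this masked token-wise loss might look as follows; the tensor shapes are illustrative.

```python
# Token-wise cross-entropy averaged over valid (mask = 1) positions, Eq. (14).
import torch
import torch.nn.functional as F

def masked_ce_loss(logits, labels, mask):
    # logits: (batch, seq_len, num_labels); labels, mask: (batch, seq_len)
    loss = F.cross_entropy(logits.transpose(1, 2), labels, reduction="none")
    return (loss * mask).sum() / mask.sum()

logits = torch.randn(2, 6, 5)
labels = torch.randint(0, 5, (2, 6))
mask = torch.tensor([[1, 1, 1, 1, 0, 0],
                     [1, 1, 1, 0, 0, 0]], dtype=torch.float)
print(masked_ce_loss(logits, labels, mask))
```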
The loss function for the BERT-BiLSTM-CRF model is identical to that of the ERNIE-BiGRU-CRF model, utilizing the Negative Log-Likelihood (NLL) of the Conditional Random Field (CRF). The distinction between the two models lies in the encoder component, where the BERT-BiLSTM-CRF employs a BiLSTM instead of the ERNIE-BiGRU. The formula for the loss function is consistent with that of the ERNIE-BiGRU-CRF model and thus will not be reiterated here.
According to Table 3, ERNIE-BiGRU-CRF performs best, with 76.63% precision, 88.89% recall, and an 82.30% F1-score, showing the best balance between accuracy and coverage. ERNIE-BiLSTM-CRF has a high recall of 92.59% but a low precision of 39.23%, leading to a low F1-score of 55.11%, which suggests possible overfitting or a high error rate. BiGRU falls behind on all indicators, with a recall of only 11.11% and an F1-score of 17.36%, proving the necessity of ERNIE and CRF for effective feature extraction.
From Figure 6, BiGRU starts with a loss of around 2.6 and then slowly decreases, converging to 2.3965. ERNIE-BiGRU-CRF shows the fastest loss decline, with a final loss of 18.5372, while ERNIE-BiLSTM-CRF ends at 34.5541. Note that these absolute loss values are not directly comparable across models, since BiGRU is trained with a token-level cross-entropy loss while the other two models use the sequence-level CRF negative log-likelihood. In summary, ERNIE-BiGRU-CRF surpasses the other two models in precision and F1-score and also trains more efficiently. Considering the test metrics, BiGRU is the worst performer, and ERNIE-BiLSTM-CRF achieves high recall but low precision, ending up with a lower F1-score than ERNIE-BiGRU-CRF.

4.3. Knowledge Graph Construction for Power Grid Fault Disposal

Based on the data analysis and entity extraction described above, this paper constructs a knowledge graph for the power grid fault disposal domain. To store large-scale datasets, Neo4j, a high-performance graph database, is chosen; it stores data as a very large network and is commonly used for graph-structured knowledge graph data.
To efficiently store the power grid fault disposal plan in the underlying graph database, Py2neo (a third-party Python library) is used for development; this experiment uses Python 3.11.9 and Py2neo 2021.2.4. Through batch-wise row reading and node merging, this approach stores the graph data while automatically ignoring duplicate nodes and relationships, thereby completing the knowledge graph construction task effectively.
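A hedged sketch of this batch import step is shown below; the node labels, property keys, connection credentials, and example triples are illustrative assumptions, not the paper's schema.

```python
# Batch import with Py2neo: merge nodes and relationships by a primary key
# so that re-importing the same triple creates no duplicates.
from py2neo import Graph, Node, Relationship

graph = Graph("bolt://localhost:7687", auth=("neo4j", "password"))  # assumed credentials

triples = [  # (head entity, relation type, tail entity), invented examples
    ("SLZ Substation total blackout", "HAS_STEP", "Open SLZ Substation815"),
    ("SLZ Substation total blackout", "HAS_CAUSE", "incoming line power failure"),
]

tx = graph.begin()
for head, rel, tail in triples:
    h = Node("Fault", name=head)
    t = Node("Disposal", name=tail)
    tx.merge(h, "Fault", "name")      # match on (label, key); skip if present
    tx.merge(t, "Disposal", "name")
    tx.merge(Relationship(h, rel, t), "Fault", "name")
graph.commit(tx)
```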
A real power grid’s core hub, SLZ Substation, is selected as a case. Figure 7 shows the constructed power grid fault disposal plan knowledge graph. Red nodes represent fault types (e.g., total blackout faults), yellow nodes represent fault names and causes (e.g., incoming line power failure faults), and blue and gray nodes represent operational steps with attributes like operation commands, types, and switch codes. For example, the operation command is “Open SLZ Substation815”, the operation type is “Open” (indicating opening) or “Close” (indicating closing), and the switch code is “Breaker_101019”. By identifying the fault substation and type, intelligent agents can quickly determine which switch to open or close by reading the operation type and switch code from the disposal nodes.
Figure 8 presents a knowledge graph of fault disposal plans for all substations in a power grid. It covers N-1 and full-shutdown fault data for all substations in the area and offers detailed disposal plans for each fault type. Using the method proposed in this paper, the fault disposal plan texts were transformed into 415 nodes and 822 edges through knowledge extraction. These triples were imported into the Neo4j graph database, forming a visualized power grid fault disposal knowledge graph. In the graph, red nodes denote full-outage events, green nodes denote N-1 events, and orange nodes represent specific fault causes or equipment. Edges represent relationships between nodes, covering various fault scenarios like equipment and line faults. Users can query fault information and retrieve corresponding disposal processes, enabling rapid response to and effective handling of power grid faults.

5. Conclusions

This paper addresses power system emergency load transfer by investigating the extraction of information from fault disposal plan texts and the construction of a knowledge graph, approaching the task from both data and knowledge perspectives. Our main conclusions are as follows:
(1) To address the challenges posed by the dense technical terminology and complex sentence structures in fault disposal plan texts, this paper employs an ERNIE-BiGRU-CRF named entity recognition model. Experimental results demonstrate the model’s superior performance in long-text semantic understanding, achieving an F1-score of 82.30%. This significantly outperforms traditional DL models like BiGRU, effectively resolving the issue of insufficient entity recognition accuracy in this specialized domain.
(2) Based on the knowledge extraction results, and integrated with expert knowledge from the power system fault disposal domain, this paper standardized the relationships among fault names, causes, and operational steps. Building on this, the Neo4j graph database is used to achieve automated construction and efficient storage of the knowledge graph, which supports multi-dimensional knowledge retrieval and various reasoning applications.

Author Contributions

Conceptualization, N.L. and S.L.; methodology, R.Y.; software, W.Y.; validation, K.W., Z.F. and Z.S.; data curation, H.Z.; writing—original draft preparation, W.Y.; writing—review and editing, R.S.; visualization, X.Y. and D.W.; supervision, J.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research is supported by the China Southern Power Grid Science and Technology Project No. 000005KC24010008 (Research on Key Technologies for Dynamic Monitoring, Early Warning, and Decision Support in Regional Power Grids for Real-time Dispatch).

Data Availability Statement

All relevant data are within the paper.

Conflicts of Interest

Authors Nan Lou, Shiqi Liu, Rong Yan, Ke Wang, Zhantao Fan, Zhengbo Shan, Hongxuan Zhang, Xinyue Yu and Dawei Wang were employed by the company Power Dispatching Control Center, China Southern Power Grid, Guangzhou, China. The authors declare that this study received funding from China Southern Power Grid. The funder had the following involvement with the study: study design, analysis, and the decision to submit it for publication.

References

1. Tsiaras, E.; Papadopoulos, D.N.; Antonopoulos, C.N.; Papadakis, V.G.; Coutelieris, F.A. Planning and assessment of an off-grid power supply system for small settlements. Renew. Energy 2020, 149, 1271–1281.
2. Iturrino Garcia, C.A.; Bindi, M.; Corti, F.; Luchetta, A.; Grasso, F.; Paolucci, L.; Piccirilli, M.C.; Aizenberg, I. Power quality analysis based on machine learning methods for low-voltage electrical distribution lines. Energies 2023, 16, 3627.
3. Jain, S.; Satsangi, A.; Kumar, R.; Panwar, D.; Amir, M. Intelligent assessment of power quality disturbances: A comprehensive review on machine learning and deep learning solutions. Comput. Electr. Eng. 2025, 123, 110275.
4. Wu, C.; Cui, Z.; Xia, Q.; Yue, J.; Lyu, F. An overview of digital twin technology for power electronics: State-of-the-art and future trends. IEEE Trans. Power Electron. 2025, 40, 13337–13362.
5. Huang, Z.; Xu, W.; Yu, K. Bidirectional LSTM-CRF models for sequence tagging. arXiv 2015, arXiv:1508.01991.
6. Lample, G.; Ballesteros, M.; Subramanian, S.; Kawakami, K.; Dyer, C. Neural architectures for named entity recognition. arXiv 2016, arXiv:1603.01360.
7. Meng, F.; Li, J.; Li, T.; Xu, J.; Gao, H.; Qiao, Y. Matching method for power grid fault handling plan based on semantic enhancement. Electr. Power 2025, 58, 237–244.
8. Xiao, D.; Zhang, Y.; Xu, X.; Liu, L.T.; Li, X. Research and implementation of power grid fault handling plan information extraction based on Bert. Electr. Power Inf. Commun. Technol. 2023, 21, 26–32.
9. Yu, J.; Shan, L.; Pi, J.; Zhang, Y.; Qiao, Y.; Wang, Y. Analysis method of fault handling plan based on knowledge graph. Electr. Autom. 2023, 45, 75–78.
10. Devlin, J.; Chang, M.W.; Lee, K.; Toutanova, K. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), Minneapolis, MN, USA, 2–7 June 2019.
11. Zhou, Y.; Lin, Z.; Tu, L.; Song, Y.; Wu, Z. Big data and knowledge graph based fault diagnosis for electric power systems. EAI Endorsed Trans. Ind. Netw. Intell. Syst. 2022, 9, 1.
12. Yu, J.; Wang, X.; Zhang, Y.; Liu, Y.; Zhao, S.A.; Shan, L.F. Construction and application of knowledge graph for intelligent dispatching and control. Power Syst. Prot. Control 2020, 48, 29–35.
13. Li, C.; Wang, B. A knowledge graph method towards power system fault diagnosis and classification. Electronics 2023, 12, 4808.
14. Zhang, Z.; Han, X.; Liu, Z.; Jiang, X.; Sun, M.; Liu, Q. ERNIE: Enhanced language representation with informative entities. arXiv 2019, arXiv:1905.07129.
15. Sherstinsky, A. Fundamentals of recurrent neural network (RNN) and long short-term memory (LSTM) network. Phys. D Nonlinear Phenom. 2020, 404, 132306.
16. Dey, R.; Salem, F.M. Gate-variants of gated recurrent unit (GRU) neural networks. In Proceedings of the 2017 IEEE 60th International Midwest Symposium on Circuits and Systems (MWSCAS), Boston, MA, USA, 6–9 August 2017; pp. 1597–1600.
17. Santoso, J.; Setiawan, E.I.; Purwanto, C.N.; Yuniarno, E.M.; Hariadi, M.; Purnomo, M.H. Named entity recognition for extracting concept in ontology building on Indonesian language using end-to-end bidirectional long short term memory. Expert Syst. Appl. 2021, 176, 114856.
18. Al-Nabki, M.W.; Fidalgo, E.; Alegre, E.; Fernández-Robles, L. Improving named entity recognition in noisy user-generated text with local distance neighbor feature. Neurocomputing 2020, 382, 1–11.
Figure 1. Structure of the ERNIE-BiGRU-CRF model.
Figure 2. Transformer encoder architecture diagram.
Figure 3. ERNIE model architecture.
Figure 4. Rule-based relation extraction process diagram.
Figure 5. Domain knowledge graph construction process.
Figure 6. Model loss comparison chart.
Figure 7. Substation fault disposal plan topology diagram.
Figure 8. Knowledge graph of substation N-1 and full-outage fault disposal plans.
Table 1. Named entity categories and examples.

Entity Category | Entity Description | Entity Example
Fault Type | Classifies faults | N-1, full shutdown
Fault Name | Describes the fault | Ji-13 fault at JZZ Substation
Fault Cause | Specific situations causing the fault | The fault caused the Ji-13 circuit breaker at JZZ Substation to trip
Operational Steps | Operational steps to take after the fault occurs | Open NK979 Negative 1, attempt to reclose Ji-13 circuit breaker. If reclosing fails, close 87808 Station Internal Negative 2
Table 2. Hyperparameter details.

Parameter | Value
Transformer | 12
Hidden Dimension | 768
GRU_dim | 128
Learning Rate | 5 × 10⁻⁵
max_seq_len | 128
batch_size | 16
Dropout | 0.5
epoch | 25
Table 3. Named entity recognition results for different models.

Model | Precision/% | Recall/% | F1-Score/%
ERNIE-BiGRU-CRF | 76.63 | 88.89 | 82.30
ERNIE-BiLSTM-CRF | 39.23 | 92.59 | 55.11
BiGRU | 39.68 | 11.11 | 17.36

Share and Cite

Lou, N., Liu, S., Yan, R., Si, R., Yu, W., Wang, K., Fan, Z., Shan, Z., Zhang, H., Yu, X., Wang, D., & Zhang, J. (2025). Research on Knowledge Graph Construction and Application for Online Emergency Load Transfer in Power Systems. Electronics, 14(17), 3370. https://doi.org/10.3390/electronics14173370