Search Results (394)

Search Parameters:
Keywords = named entity recognition

27 pages, 3562 KiB  
Article
Automated Test Generation and Marking Using LLMs
by Ioannis Papachristou, Grigoris Dimitroulakos and Costas Vassilakis
Electronics 2025, 14(14), 2835; https://doi.org/10.3390/electronics14142835 - 15 Jul 2025
Viewed by 177
Abstract
This paper presents an innovative exam-creation and grading system powered by advanced natural language processing and local large language models. The system automatically generates clear, grammatically accurate questions from both short passages and longer documents across different languages, supports multiple formats and difficulty levels, and ensures semantic diversity while minimizing redundancy, thus maximizing the percentage of the material that is covered in the generated exam paper. For grading, it employs a semantic-similarity model to evaluate essays and open-ended responses, awards partial credit, and mitigates bias from phrasing or syntax via named entity recognition. A major advantage of the proposed approach is its ability to run entirely on standard personal computers, without specialized artificial intelligence hardware, promoting privacy and exam security while maintaining low operational and maintenance costs. Moreover, its modular architecture allows the seamless swapping of models with minimal intervention, ensuring adaptability and the easy integration of future improvements. A requirements–compliance evaluation, combined with established performance metrics, was used to review and compare two popular multilingual LLMs and monolingual alternatives, demonstrating the system’s effectiveness and flexibility. The experimental results show that the system achieves a grading accuracy within a 17% normalized error margin compared to that of human experts, with generated questions reaching up to 89.5% semantic similarity to source content. The full exam generation and grading pipeline runs efficiently on consumer-grade hardware, with average inference times under 30 s. Full article
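The grading step described above rests on semantic similarity between a student response and a reference answer. As a rough illustration only, not the authors' actual pipeline, the sketch below scores an answer with an off-the-shelf sentence-embedding model; the model name, the linear partial-credit mapping, and the example answers are assumptions.

```python
# Illustrative sketch, not the paper's system: award partial credit in proportion
# to the cosine similarity between a student answer and a reference answer.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model

def grade_answer(student_answer: str, reference_answer: str, max_points: float = 10.0) -> float:
    """Return a partial-credit score proportional to semantic similarity."""
    embeddings = model.encode([student_answer, reference_answer], convert_to_tensor=True)
    similarity = float(util.cos_sim(embeddings[0], embeddings[1]))  # value in [-1, 1]
    return round(max(0.0, similarity) * max_points, 2)

# Hypothetical usage
print(grade_answer("The mitochondrion produces ATP for the cell.",
                   "Mitochondria generate most of the cell's ATP."))
```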

15 pages, 1701 KiB  
Article
Enhanced Named Entity Recognition and Event Extraction for Power Grid Outage Scheduling Using a Universal Information Extraction Framework
by Wei Tang, Yue Zhang, Xun Mao, Mingqi Shan, Kai Lv, Xun Sun and Zhenhuan Ding
Energies 2025, 18(14), 3617; https://doi.org/10.3390/en18143617 - 9 Jul 2025
Viewed by 173
Abstract
To enhance online dispatch decision support capabilities for power grid outage planning, this study proposes a Universal Information Extraction (UIE)-based method for enhanced named entity recognition and event extraction from outage documents. First, a Structured Extraction Language (SEL) framework is developed that unifies the semantic modeling of outage information to generate standardized representations for dual-task parsing of events and entities. Subsequently, a trigger-centric event extraction model is developed through feature learning of outage plan triggers and syntactic pattern entities. Finally, the event extraction model is employed to identify operational objects and action triggers, while the entity recognition model detects seven critical equipment entities within these operational objects. Validated on real-world outage plans from a provincial-level power dispatch center, the methodology demonstrates reliable detection capabilities for both named entity recognition and event extraction. Relative to conventional techniques, the F1 score increases by 1.08% for event extraction and 2.48% for named entity recognition. Full article
(This article belongs to the Special Issue Digital Modeling, Operation and Control of Sustainable Energy Systems)
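For readers unfamiliar with structured extraction targets of the kind an SEL-style framework produces, the snippet below shows one plausible shape for a unified event-plus-entity record parsed from an outage plan sentence; the field names, entity types, and the example sentence are invented for illustration and are not taken from the paper.

```python
# Hypothetical example of a unified structured target covering both the event
# (action trigger + operational object) and the equipment entities it contains.
outage_sentence = "De-energize 220 kV Line A for circuit breaker overhaul on 12 May"

structured_target = {
    "event": {
        "trigger": "De-energize",                 # action trigger
        "operational_object": "220 kV Line A",
    },
    "entities": [
        {"type": "transmission_line", "text": "220 kV Line A"},
        {"type": "circuit_breaker", "text": "circuit breaker"},
    ],
}
print(structured_target["event"]["trigger"], "->",
      [e["text"] for e in structured_target["entities"]])
```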

20 pages, 1050 KiB  
Article
AI-Driven Sentiment Analysis for Discovering Climate Change Impacts
by Zeinab Shahbazi, Rezvan Jalali and Zahra Shahbazi
Smart Cities 2025, 8(4), 109; https://doi.org/10.3390/smartcities8040109 - 1 Jul 2025
Viewed by 352
Abstract
Climate change presents serious challenges for infrastructure, regional planning, and public awareness. However, effectively understanding and analyzing large-scale climate discussions remains difficult. Traditional methods often struggle to extract meaningful insights from unstructured data sources, such as social media discourse, making it harder to track climate-related concerns and emerging trends. To address this gap, this study applies Natural Language Processing (NLP) techniques to analyze large volumes of climate-related data. By employing supervised and weak supervision methods, climate data are efficiently labeled to enable targeted analysis of regional- and infrastructure-specific climate impacts. Furthermore, BERT-based Named Entity Recognition (NER) is utilized to identify key climate-related terms, while sentiment analysis of platforms like Twitter provides valuable insights into trends in public opinion. AI-driven visualization tools, including predictive modeling and interactive mapping, are also integrated to enhance the accessibility and usability of the analyzed data. The research findings reveal significant patterns in climate-related discussions, supporting policymakers and planners in making more informed decisions. By combining AI-powered analytics with advanced visualization, the study enhances climate impact assessment and promotes the development of sustainable, resilient infrastructure. Overall, the results demonstrate the strong potential of AI-driven climate analysis to inform policy strategies and raise public awareness. Full article
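As a minimal sketch of the BERT-based NER step mentioned above, assuming a general-purpose English checkpoint rather than the study's climate-tuned model, the Hugging Face pipeline API can tag entities in a single post:

```python
# Sketch only: off-the-shelf BERT NER over a climate-related post.
from transformers import pipeline

ner = pipeline("ner",
               model="dslim/bert-base-NER",      # assumed general-purpose checkpoint
               aggregation_strategy="simple")    # merge word pieces into entity spans

tweet = "Flooding along the Rhine has disrupted rail traffic between Cologne and Bonn."
for entity in ner(tweet):
    print(entity["entity_group"], entity["word"], round(float(entity["score"]), 3))
```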

22 pages, 1763 KiB  
Article
A FIT4NER Generic Approach for Framework-Independent Medical Named Entity Recognition
by Florian Freund, Philippe Tamla, Frederik Wilde and Matthias Hemmje
Information 2025, 16(7), 554; https://doi.org/10.3390/info16070554 - 29 Jun 2025
Viewed by 265
Abstract
This article focuses on assisting medical professionals in analyzing domain-specific texts and selecting and comparing Named Entity Recognition (NER) frameworks. It details the development and evaluation of a system that utilizes a generic approach alongside the structured Nunamaker methodology. This system empowers medical professionals to train, evaluate, and compare NER models across diverse frameworks, such as Stanford CoreNLP, spaCy, and Hugging Face Transformers, independent of their specific implementations. Additionally, it introduces a concept for modeling a general training and evaluation process. Finally, experiments using various ontologies from the CRAFT corpus are conducted to assess the effectiveness of the current prototype. Full article

31 pages, 1907 KiB  
Article
Knowledge-Graph-Driven Fault Diagnosis Methods for Intelligent Production Lines
by Yanjun Chen, Min Zhou, Meizhou Zhang and Meng Zha
Sensors 2025, 25(13), 3912; https://doi.org/10.3390/s25133912 - 23 Jun 2025
Viewed by 387
Abstract
In order to enhance the management and application of fault knowledge within intelligent production lines, thereby increasing the efficiency of fault diagnosis and ensuring the stable and reliable operation of these systems, we propose a fault diagnosis methodology that leverages knowledge graphs. First, we designed an ontology model for fault knowledge by integrating textual features from various components of the production line with expert insights. Second, we employed the ALBERT–BiLSTM–Attention–CRF model to achieve named entity and relationship recognition for faults in intelligent production lines. The introduction of the ALBERT model resulted in a 7.3% improvement in the F1 score compared to the BiLSTM–CRF model. Additionally, incorporating the attention mechanism in relationship extraction led to a 7.37% increase in the F1 score. Finally, we utilized the Neo4j graph database to facilitate the storage and visualization of fault knowledge, validating the effectiveness of our proposed method through a case study on fault diagnosis in CNC machining centers. The research findings indicate that this method excels in recognizing textual entities and relationships related to faults in intelligent production lines, effectively leveraging prior knowledge of faults across various components and elucidating their causes. This approach provides maintenance personnel with an intuitive tool for fault diagnosis and decision support, thereby enhancing diagnostic accuracy and efficiency. Full article
(This article belongs to the Section Fault Diagnosis & Sensors)
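The Neo4j storage step can be pictured with a short sketch; the connection settings, node label, and the example fault triple below are assumptions, not the paper's schema.

```python
# Hypothetical sketch: persist an extracted (head, relation, tail) fault triple in Neo4j.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

def store_triple(tx, head: str, relation: str, tail: str):
    tx.run(
        "MERGE (h:FaultEntity {name: $head}) "
        "MERGE (t:FaultEntity {name: $tail}) "
        "MERGE (h)-[:RELATED {type: $relation}]->(t)",
        head=head, relation=relation, tail=tail,
    )

with driver.session() as session:
    session.execute_write(store_triple,
                          "spindle overheating", "caused_by", "coolant pump failure")
driver.close()
```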

21 pages, 1658 KiB  
Article
Emotionally Controllable Text Steganography Based on Large Language Model and Named Entity
by Hao Shi, Wenpu Guo and Shaoyuan Gao
Technologies 2025, 13(7), 264; https://doi.org/10.3390/technologies13070264 - 21 Jun 2025
Viewed by 390
Abstract
Covert transmission of text information requires not only high text quality but also content that matches the current context. However, existing text steganography methods pursue text quality at the expense of constraints on the content and emotional expression of the generated steganographic text (stegotext). To solve this problem, this paper proposes an emotionally controllable text steganography method based on a large language model and named entities. The large language model generates the text, improving the quality of the stegotext. Named entity recognition is used to build an entity extraction module that obtains context-centered text and constrains the generated content. A sentiment analysis method mines sentiment tendency so that the stegotext carries rich sentiment information, improving its concealment. Experimental validation on the generic-domain movie review dataset IMDB shows that the proposed method significantly improves hiding capacity, perplexity, and security compared with existing mainstream methods, and that the stegotext is strongly connected to the current context. Full article
(This article belongs to the Special Issue Research on Security and Privacy of Data and Networks)

27 pages, 3926 KiB  
Article
A Multi-Source Embedding-Based Named Entity Recognition Model for Knowledge Graph and Its Application to On-Site Operation Violations in Power Grid Systems
by Lingwen Meng, Yulin Wang, Guobang Ban, Yuanjun Huang, Xinshan Zhu and Shumei Zhang
Electronics 2025, 14(13), 2511; https://doi.org/10.3390/electronics14132511 - 20 Jun 2025
Viewed by 277
Abstract
With the increasing complexity of power grid field operations, frequent operational violations have emerged as a major concern in the domain of power grid field operation safety. To support dispatchers in accurately identifying and addressing violation risks, this paper introduces a profiling approach for power grid field operation violations based on knowledge graph techniques. The method enables deep modeling and structured representation of violation behaviors. In the structured data processing phase, statistical analysis is conducted based on predefined rules, and mutual information is employed to quantify the contribution of various operational factors to violations. At the municipal bureau level, statistical modeling of violation characteristics is performed to support regional risk assessment. For unstructured textual data, a multi-source embedding-based named entity recognition (NER) model is developed, incorporating domain-specific power lexicon information to enhance the extraction of key entities. High-weight domain terms related to violations are further identified using the TF-IDF algorithm to characterize typical violation behaviors. Based on the extracted entities and relationships, a knowledge graph of field operation violations is constructed, providing a computable and inferable semantic representation of operational scenarios. Finally, visualization techniques are applied to present the structural patterns and distributional features of violations, offering graph-based support for violation risk analysis and dispatch decision-making. Experimental results demonstrate that the proposed method effectively identifies critical features of violation behaviors and provides a structured foundation for intelligent decision support in power grid operation management. Full article
(This article belongs to the Special Issue Knowledge Information Extraction Research)
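The TF-IDF step for surfacing high-weight violation terms can be sketched as follows; the toy English records and parameters are placeholders for the paper's Chinese power-grid texts and domain lexicon.

```python
# Sketch: rank terms in violation records by average TF-IDF weight.
from sklearn.feature_extraction.text import TfidfVectorizer

violation_records = [   # invented examples
    "worker entered the live-line area without a work permit",
    "grounding wire was not installed before maintenance began",
    "work permit was not signed by the on-site supervisor",
]

vectorizer = TfidfVectorizer(ngram_range=(1, 2), stop_words="english")
tfidf = vectorizer.fit_transform(violation_records)

weights = tfidf.toarray().mean(axis=0)            # average weight per term
terms = vectorizer.get_feature_names_out()
for term, weight in sorted(zip(terms, weights), key=lambda x: -x[1])[:5]:
    print(f"{term}: {weight:.3f}")
```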

13 pages, 470 KiB  
Article
Towards Early Maternal Morbidity Risk Identification by Concept Extraction from Clinical Notes in Spanish Using Fine-Tuned Transformer-Based Models
by Andrés F. Giraldo-Forero, Maria C. Durango, Santiago Rúa, Ever A. Torres-Silva, Sara Arango-Valencia, José F. Florez-Arango and Andrés Orozco-Duque
Appl. Syst. Innov. 2025, 8(3), 78; https://doi.org/10.3390/asi8030078 - 11 Jun 2025
Viewed by 1060
Abstract
Early detection of morbidities that complicate pregnancy improves health outcomes in low- and middle-income countries. Automatic review of electronic health records (EHRs) can help identify such morbidity risks. There is a lack of corpora to train models in Spanish in specific domains, and there are no models specialized in maternal EHRs. This study aims to develop a fine-tuned model that detects clinical concepts using a database built from text extracted from maternal EHRs in Spanish. We created a corpus with 13,998 annotations from 200 clinical notes in Spanish associated with EHRs obtained from a reference institution for high obstetric risk in Colombia. Using the Beginning–Inside–Outside tagging scheme, we fine-tuned five different transformer-based models to classify among 16 classes associated with eight entities. The best model achieved a macro F1 score of 0.55 ± 0.03. The entities with the best performance were signs, symptoms, and negations, with exact F1 scores of 0.714 and 0.726, respectively. The lower scores were associated with the classes with fewer observations. Even though our dataset is smaller and more diverse in entity types than other datasets in Spanish, our results are comparable to other state-of-the-art named entity recognition models fine-tuned in Spanish and the biomedical domain. This work introduces the first fine-tuning of a model for named entity recognition specifically designed for maternal EHRs. Our results can be used as a base to develop new models to extract concepts in the maternal–fetal domain and help healthcare providers detect morbidities that complicate pregnancy early. Full article
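The Beginning–Inside–Outside scheme mentioned above is what turns eight entity types into sixteen token-level classes (plus O). The sketch below illustrates the expansion with assumed entity names, not the study's exact label set.

```python
# Sketch of BIO label construction: eight entity types expand to 16 B-/I- classes plus "O".
ENTITY_TYPES = ["SIGN", "SYMPTOM", "NEGATION", "DISEASE",
                "PROCEDURE", "MEDICATION", "ABBREVIATION", "FINDING"]  # assumed names

bio_labels = ["O"] + [f"{prefix}-{etype}" for etype in ENTITY_TYPES for prefix in ("B", "I")]
print(len(bio_labels) - 1, "entity classes, e.g.", bio_labels[1:5])

def tag_span(tokens, start, end, etype):
    """Return BIO tags for a sentence where tokens[start:end] form one entity."""
    tags = ["O"] * len(tokens)
    tags[start] = f"B-{etype}"
    for i in range(start + 1, end):
        tags[i] = f"I-{etype}"
    return tags

print(tag_span(["paciente", "niega", "cefalea"], 1, 2, "NEGATION"))
```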

26 pages, 1789 KiB  
Article
Dynamic Vulnerability Knowledge Graph Construction via Multi-Source Data Fusion and Large Language Model Reasoning
by Ruitong Liu, Yaxuan Xie, Zexu Dang, Jinyi Hao, Xiaowen Quan, Yongcai Xiao and Chunlei Peng
Electronics 2025, 14(12), 2334; https://doi.org/10.3390/electronics14122334 - 7 Jun 2025
Viewed by 624
Abstract
With the increasing number of network security threats and the frequent occurrence of software vulnerability attacks, the effective management and large-scale retrieval of vulnerability data have become urgent needs. Existing vulnerability information is scattered across heterogeneous sources and is difficult to integrate, which in turn makes it hard for security analysts to quickly retrieve and analyze relevant security knowledge. To address this problem, this paper proposes a method to construct a vulnerability knowledge graph by integrating multi-source vulnerability data, combining graph embedding technology with large language model reasoning to aggregate, infer, and enrich vulnerability knowledge. Experiments demonstrated that our domain-tuned Bidirectional Long Short-Term Memory–Conditional Random Field (BiLSTM-CRF) named entity recognition (NER) model, enhanced with a cybersecurity dictionary, achieved a 90.1% F1-score for entity extraction. For link prediction, a hybrid Graph Attention Network fused with GPT-3 reasoning boosted Hits@1 by 0.137, Hits@3 by 0.116, and Hits@10 by 0.101 over the baseline. These results confirm that our approach markedly enhanced entity identification and relationship inference, yielding a more complete and dynamically updatable cybersecurity knowledge graph. Full article
(This article belongs to the Special Issue Cryptography and Computer Security)
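For context, Hits@k (reported above for link prediction) is simply the fraction of test triples whose correct entity is ranked within the top k candidates; the ranks in the sketch below are invented.

```python
# Minimal sketch of the Hits@k metric for link prediction.
def hits_at_k(ranks: list[int], k: int) -> float:
    """ranks[i] is the 1-based rank of the true entity for test triple i."""
    return sum(1 for r in ranks if r <= k) / len(ranks)

ranks = [1, 4, 2, 15, 7, 1, 3, 30]  # hypothetical ranks from a scoring model
for k in (1, 3, 10):
    print(f"Hits@{k}: {hits_at_k(ranks, k):.3f}")
```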

23 pages, 1119 KiB  
Article
Improving Text Classification of Imbalanced Call Center Conversations Through Data Cleansing, Augmentation, and NER Metadata
by Sihyoung Jurn and Wooje Kim
Electronics 2025, 14(11), 2259; https://doi.org/10.3390/electronics14112259 - 31 May 2025
Viewed by 536
Abstract
Call center conversation categories are valuable for reporting business results and for marketing analysis. However, they typically lack clear patterns and suffer from severe imbalance in the number of instances across categories. The call center conversation categories used in this study are Payment, Exchange, Return, Delivery, Service, and After-sales service (AS), with a significant imbalance where Service accounts for 26% of the total data and AS only 2%. To address these challenges, this study proposes a model that ensembles meta-information generated through Named Entity Recognition (NER) with machine learning inference results. Utilizing KoBERT (Korean Bidirectional Encoder Representations from Transformers) as our base model, we employed Easy Data Augmentation (EDA) to augment data in categories with insufficient instances. Through the training of nine models, encompassing KoBERT category probability weights and a CatBoost (Categorical Boosting) model that ensembles meta-information derived from named entities, we ultimately improved the F1 score from the baseline of 0.9117 to 0.9331, demonstrating a solution that circumvents the need for expensive LLMs (Large Language Models) or high-performance GPUs (Graphics Processing Units). This improvement is particularly significant considering that, when focusing solely on the category with a 2% data proportion, our model achieved an F1 score of 0.9509, representing a 4.6% increase over the baseline. Full article
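The ensembling idea above, classifier category probabilities concatenated with NER-derived metadata and fed to a gradient-boosting model, can be sketched roughly as below; the synthetic features, placeholder labels, and hyperparameters are stand-ins, not the study's configuration.

```python
# Rough sketch: combine per-category probabilities with NER-count features in CatBoost.
import numpy as np
from catboost import CatBoostClassifier

rng = np.random.default_rng(0)
n_samples, n_categories = 200, 6               # Payment, Exchange, Return, Delivery, Service, AS
kobert_probs = rng.dirichlet(np.ones(n_categories), size=n_samples)  # stand-in for KoBERT outputs
ner_counts = rng.integers(0, 4, size=(n_samples, 3))                 # e.g. counts of PRODUCT/DATE/ORG entities
X = np.hstack([kobert_probs, ner_counts])
y = kobert_probs.argmax(axis=1)                # placeholder labels for the sketch

model = CatBoostClassifier(iterations=100, verbose=False)
model.fit(X, y)
print(model.predict_proba(X[:2]).round(3))
```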

18 pages, 456 KiB  
Article
Named Entity Recognition Based on Multi-Class Label Prompt Selection and Core Entity Replacement
by Di Wu, Yao Chen and Mingyue Yan
Appl. Sci. 2025, 15(11), 6171; https://doi.org/10.3390/app15116171 - 30 May 2025
Viewed by 414
Abstract
At present, researchers are showing a marked interest in few-shot named entity recognition (NER). Previous studies have demonstrated that prompt-based learning methods can effectively improve the performance of few-shot NER models and reduce the need for annotated data. However, these studies may not have considered the contextual relationship between core entities and a given prompt; moreover, research in this field continues to suffer from the negative impact of limited annotated data. A multi-class label prompt selection and core entity replacement-based named entity recognition (MPSCER-NER) model is proposed in this study. A multi-class label prompt selection strategy is presented, which assists sentence–word representation; a long-distance dependency is formed between the sentence and the multi-class label prompt. A core entity replacement strategy is presented, which enriches the word vectors of the training data; a weighted random algorithm retrieves the core entities to be replaced from the multi-class label prompt. The experimental results show that, on the CoNLL-2003, Ontonotes 5.0, Ontonotes 4.0, and BC5CDR datasets under 5-Way k-Shot (k = 5, 10), the MPSCER-NER model achieves minimum F1-score improvements of 1.32%, 2.14%, 1.05%, 1.32%, 0.84%, 1.46%, 1.43%, and 1.11% over the NNshot, StructShot, MatchingCNN, ProtoBERT, DNER, and SRNER baselines across these settings. Full article

19 pages, 1486 KiB  
Article
A Dual-Enhanced Hierarchical Alignment Framework for Multimodal Named Entity Recognition
by Jian Wang, Yanan Zhou, Qi He and Wenbo Zhang
Appl. Sci. 2025, 15(11), 6034; https://doi.org/10.3390/app15116034 - 27 May 2025
Viewed by 397
Abstract
Multimodal named entity recognition (MNER) is a natural language-processing technique that integrates text and visual modalities to detect and segment entity boundaries and their types from unstructured multimodal data. Although existing methods alleviate semantic deficiencies by optimizing image and text feature extraction and fusion, a fundamental challenge remains due to the lack of fine-grained alignment caused by cross-modal semantic deviations and image noise interference. To address these issues, this paper proposes a dual-enhanced hierarchical alignment (DEHA) framework that achieves dual semantic and spatial enhancement via global–local cooperative alignment optimization. The proposed framework incorporates a dual enhancement strategy comprising Semantic-Augmented Global Contrast (SAGC) and Multi-scale Spatial Local Contrast (MS-SLC), which reinforce the alignment of image and text modalities at the global sample level and local feature level, respectively, thereby reducing image noise. Additionally, a cross-modal feature fusion and vision-constrained CRF prediction layer is designed to achieve adaptive aggregation of global and local features. Experimental results on the Twitter-2015 and Twitter-2017 datasets yield F1 scores of 77.42% and 88.79%, outperforming baseline models. These results demonstrate that the global–local complementary mechanism effectively balances alignment precision and noise robustness, thereby enhancing entity recognition accuracy in social media and advancing multimodal semantic understanding. Full article
(This article belongs to the Special Issue Intelligence Image Processing and Patterns Recognition)

26 pages, 3691 KiB  
Article
LLM-ACNC: Aerospace Requirement Texts Knowledge Graph Construction Utilizing Large Language Model
by Yuhao Liu, Junjie Hou, Yuxuan Chen, Jie Jin and Wenyue Wang
Aerospace 2025, 12(6), 463; https://doi.org/10.3390/aerospace12060463 - 23 May 2025
Viewed by 550
Abstract
Traditional methods for requirement identification depend on the manual transformation of unstructured requirement texts into formal documents, a process that is both inefficient and prone to errors. Although requirement knowledge graphs offer structured representations, current named entity recognition and relation extraction techniques continue to face significant challenges in processing the specialized terminology and intricate sentence structures characteristic of the aerospace domain. To overcome these limitations, this study introduces a novel approach for constructing aerospace-specific requirement knowledge graphs using a large language model. The method first employs the GPT model for data augmentation, followed by BERTScore filtering to ensure data quality and consistency. An efficient continual-learning strategy based on token index encoding is then implemented, guiding the model to focus on key information and enhancing domain adaptability through fine-tuning of the Qwen2.5 (7B) model. Furthermore, a chain-of-thought reasoning framework is established for improved entity and relation recognition, coupled with a dynamic few-shot learning strategy that selects examples adaptively based on input characteristics. Experimental results validate the effectiveness of the proposed method, achieving F1 scores of 88.75% in NER and 89.48% in relation extraction tasks. Full article
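The BERTScore filtering step for GPT-augmented sentences might look roughly like the sketch below; the example requirement sentences and the 0.85 threshold are assumptions, not values from the paper.

```python
# Hedged sketch: keep augmented sentences whose BERTScore F1 against the source is high enough.
from bert_score import score

originals = ["The attitude control subsystem shall maintain pointing accuracy within 0.1 degrees."]
augmented = ["The attitude control subsystem must keep pointing error below 0.1 degrees."]

P, R, F1 = score(augmented, originals, lang="en", verbose=False)
keep = [cand for cand, f in zip(augmented, F1.tolist()) if f >= 0.85]  # assumed threshold
print(keep)
```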

15 pages, 561 KiB  
Article
A Chinese Few-Shot Named-Entity Recognition Model Based on Multi-Label Prompts and Boundary Information
by Cong Zhou, Baohua Huang and Yunjie Ling
Appl. Sci. 2025, 15(11), 5801; https://doi.org/10.3390/app15115801 - 22 May 2025
Viewed by 365
Abstract
Currently, the few-shot setting and entity nesting are two major challenges in named-entity recognition (NER). Compared to English, Chinese NER not only has issues such as complex grammatical structures, polysemy, and entity nesting but also faces low-resource scenarios in specific domains due to difficulties in sample annotation. To address these two issues, we propose a Chinese few-shot named-entity recognition model that integrates multi-label prompts and boundary information (MPBCNER). This model is an improvement based on a pre-trained language model (PLM) combined with a pointer network. First, the model uses multiple entity label words and position slots as prompt information in the entity recognition training task. Activating the parameters in the PLM associated with the corresponding entity labels through this prompt information improves the model's performance in entity recognition on small-sample data. Second, by using a Graph Attention Network (GAT) to construct the boundary information extraction module, we integrated boundary information with text features, allowing the model to pay more attention to features near the boundaries when recognizing entities, thereby improving the accuracy of entity boundary recognition. Experiments on multiple public small-sample datasets and our own annotated datasets in the field of government auditing demonstrated the effectiveness of this model. Full article

23 pages, 2937 KiB  
Article
Domain-Specific Knowledge Graph for Quality Engineering of Continuous Casting: Joint Extraction-Based Construction and Adversarial Training Enhanced Alignment
by Xiaojun Wu, Yue She, Xinyi Wang, Hao Lu and Qi Gao
Appl. Sci. 2025, 15(10), 5674; https://doi.org/10.3390/app15105674 - 19 May 2025
Cited by 1 | Viewed by 359
Abstract
The intelligent development of continuous casting quality engineering is an essential step for the efficient production of high-quality billets. However, many quality defects require strong expertise to handle. In order to reduce reliance on expert experience and improve the intelligent management of billet quality knowledge, we focus on constructing a Domain-Specific Knowledge Graph (DSKG) for the quality engineering of continuous casting. To achieve joint extraction of billet quality defect entities and relations, we propose a Self-Attention Partition and Recombination Model (SAPRM). SAPRM divides domain-specific sentences into three parts: entity-related, relation-related, and shared features, which serve the Named Entity Recognition (NER) and Relation Extraction (RE) tasks. Furthermore, to address entity ambiguity and repetition in triples, we propose a semi-supervised incremental learning method for knowledge alignment, in which adversarial training is leveraged to enhance alignment performance. In the knowledge extraction experiments, the NER and RE precision of our model reached 86.7% and 79.48%, respectively; RE precision improved by 20.83% compared to the sequence-labeling baseline. In the knowledge alignment part, the precision of our model reached 99.29%, a 1.42% improvement over baseline methods. Consequently, the proposed model with the partition mechanism can effectively extract domain knowledge, and the semi-supervised method can take advantage of unlabeled triples. Our method adapts to domain features and constructs a high-quality knowledge graph for the quality engineering of continuous casting, providing an efficient solution for billet defect issues. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
