Article

Semantic Modeling of Ship Collision Reports: Ontology Design, Knowledge Extraction, and Severity Classification

1 School of Navigation, Wuhan University of Technology, Wuhan 430063, China
2 State Key Laboratory of Maritime Technology and Safety, Wuhan University of Technology, Wuhan 430063, China
3 Sanya Science and Education Innovation Park of Wuhan University of Technology, Sanya 572000, China
4 National Engineering Research Center for Geographic Information System, China University of Geosciences (Wuhan), Wuhan 430074, China
* Author to whom correspondence should be addressed.
J. Mar. Sci. Eng. 2026, 14(5), 448; https://doi.org/10.3390/jmse14050448
Submission received: 25 January 2026 / Revised: 25 February 2026 / Accepted: 25 February 2026 / Published: 27 February 2026
(This article belongs to the Section Ocean Engineering)

Abstract

With the expansion of water transportation networks and increasing traffic intensity, maritime accidents have become frequent, posing significant threats to safety and property. This study presents a knowledge graph-driven framework for maritime accident analysis, addressing the limitations of traditional risk analysis methods in extracting and organizing unstructured accident data. First, a standardized ontology for ship collision accidents is developed, defining core concepts such as event, spatiotemporal behaviour, causation, consequence, responsibility, and decision-making. Advanced natural language processing models, including a lexicon-enhanced LeBERT-BiLSTM-CRF and a K-BERT-BiLSTM-CRF incorporating ship collision knowledge triplets, are proposed for named entity recognition and relation extraction, with F1-score improvements of 6.7% and 1.2%, respectively. The constructed accident knowledge graph integrates heterogeneous data, enabling semantic organization and efficient retrieval. Leveraging graph topological features, an accident severity classification model is established, in which a graph-feature-driven LSTM-RNN demonstrates robust performance, especially on imbalanced data. Comparative experiments show the superiority of this approach over conventional models such as XGBoost and random forest. Overall, this research demonstrates that knowledge graph-driven methods can significantly enhance maritime accident knowledge extraction and severity classification, providing strong information support and methodological advances for intelligent accident management and prevention.

Graphical Abstract

1. Introduction

Waterway transport is a mainstay of trade shipping due to its large capacity and long-distance coverage. With the growth of international trade, waterway traffic continues to increase, which in turn raises the risk of accidents, threatening safety and the environment. To mitigate these maritime risks effectively, current research broadly advances along two complementary trajectories: real-time situational perception and retrospective accident cognition. On the perceptual front, relevant technologies have reached a remarkable level of maturity. For instance, advanced computer vision and deep learning models have been successfully deployed for high-precision, orientation-aware ship detection to ensure safe navigation [1]. In contrast to these well-established visual perception systems designed for front-line collision avoidance, research on retrospective accident cognition remains comparatively underdeveloped. While real-time perception excels at capturing immediate surface-level hazards, uncovering deep-seated causal chains and hidden risk patterns requires in-depth semantic mining of historical accident reports. Making full use of accident data is therefore crucial for risk prevention and control. A large number of detailed accident investigation reports have been issued by authoritative institutions, providing valuable sources for understanding and analyzing the causes of accidents.
However, existing maritime regulations and accident classification schemes mainly focus on post-incident reporting and require manual interpretation of text-based reports, limiting timely and intelligent risk management. Data-driven and automated classification models, such as those based on knowledge graphs, can efficiently extract, organize, and analyze key information from unstructured reports, enabling rapid, near-real-time accident classification and intelligent decision support.

2. Literature Review

2.1. Ship Collision Accident Research

Ship collision accident research covers the identification and analysis of influencing factors and accident severity, as well as the information mining of accident data. Chauvin et al. analyzed human and organizational factors in collisions using HFACS, revealing that human error is critical [2]. Kayiran et al. built a data-driven Bayesian Network for accidents involving dry-bulk carriers in the Turkish Search and Rescue areas (2001–2019), quantifying how season, region, time-of-day, flag and other factors shape accident types and severities, and offering targeted management suggestions [3]. Hänninen synthesized the benefits and challenges of Bayesian-network modeling for multi-stage maritime accident causation [4]. Gan et al. constructed an ontology-based knowledge graph from 241 collision investigation reports to structure risk factors and support analysis [5]. In parallel, Qu et al. proposed AIS-based risk indices for close-quarters early warning and spatially pinpointed high-risk legs in a chokepoint strait, while Fan et al. advanced a validated Bayesian framework for navigational risk assessment of remotely controlled autonomous ships (MASS), linking probability and consequence under scenario-specific factors [6,7]. Building on these quantitative risk frameworks, Namgung and Kim further developed an inference system to determine critical decision timing by calculating the Collision Risk Index (CRI), while Namgung integrated such risk assessments into local route planning to ensure autonomous maneuvers remain compliant with the International Regulations for Preventing Collisions at Sea (COLREGs) rules [8,9].
The existing studies have carried out an in-depth analysis of ship collision accidents from different perspectives, such as HFACS-based human-factor analysis, AIS-derived close-quarters risk indices, Bayesian-network causation modeling, and ontology-based knowledge graphs from investigation reports, and emphasized the key roles of human, environmental, and management factors in accident management [10,11,12,13]. Particularly regarding environmental challenges, adverse weather conditions like dense fog severely impair visibility and threaten navigation safety. This has prompted the development of advanced visual enhancement methods, such as vision transformer-based image dehazing, to ensure climate-resilient maritime navigation [14]. However, from the perspective of historical accident data mining, the existing research mainly focuses on the organization of structured and semi-structured data and static feature analysis [15,16]. Furthermore, specifically regarding the semantic modeling of ship collision reports, the evolution of methodologies highlights a transition from shallow statistical extraction to deep semantic structuring. Early approaches primarily relied on rule-based text mining to extract superficial accident causalities [17]. Meanwhile, recent trends emphasize the construction of domain-specific ontologies and semantic networks to capture complex, multidimensional relationships [18]. However, current semantic modeling frameworks still struggle with processing long domain-specific entities and low-frequency maritime terminology [19]. Constrained by these technical bottlenecks, existing methods fall short of fully transforming highly unstructured accident texts into computable and inferable deep safety knowledge, leaving a critical gap between theoretical research and practical application. To address these practical difficulties, this study proposes a semantic modeling and application framework specifically for ship collision reports. 
Specifically, the work and contributions of this paper are reflected in three aspects. First, a multidimensional ontology model incorporating event, spatiotemporal behaviour, cause, consequence, responsible party, and disposition decision is designed, providing a structured expression of the accident evolution process. Second, targeting the specific difficulties of fuzzy boundaries in long entities and low-frequency vocabulary recognition in maritime texts, an information extraction approach combining LeBERT-BiLSTM-CRF, which incorporates domain vocabulary information, and BERT-MLP is adopted to construct a ship collision knowledge graph. Additionally, comparative experiments demonstrate that injecting a subset of the extracted triplet knowledge into a K-BERT-BiLSTM-CRF model further improves Named Entity Recognition (NER) accuracy, validating the potential of knowledge-enhanced architectures. Finally, building on the structured organization of textual data, this study extracts two dimensions of quantifiable features (node characteristics and topological features) from the constructed knowledge graph and uses an LSTM-RNN model to classify accident severity. Ultimately, this research not only provides effective data support for maritime knowledge management but also offers a feasible technical pathway for extracting deep risk characteristics from historical text mining.

2.2. Named Entity Recognition

NER is an important Natural Language Processing (NLP) task, which can be categorized into rule-based and dictionary-based methods, statistical machine learning approaches, and neural network-based deep learning methods.
Rule-based and dictionary-based methods rely on manually defined domain dictionaries and pattern matching to extract entities [20]. While interpretable, these methods require significant manual effort, are domain-dependent, and struggle to recognize unseen or out-of-vocabulary entities.
Due to the limitations of rule-based and dictionary-based methods, statistical machine learning approaches have become popular for NER, typically framing it as a sequence labeling or classification task. Representative models include the Hidden Markov Model (HMM), Conditional Random Fields (CRF), Maximum Entropy Model (MEM), and Support Vector Machine (SVM) [21,22,23,24,25]. These models can recognize a variety of entity types by learning feature representations from labeled data, and have been applied to multiple languages and domains [26,27,28]. Compared to rule-based methods, machine learning approaches improve scalability and adaptability, but still rely heavily on manually annotated corpora and handcrafted features, limiting their performance in complex or data-scarce scenarios.
With the rise of neural networks, NER has made significant progress in terms of accuracy and flexibility, especially in variable linguistic scenarios. Deep learning models, including Convolutional Neural Network (CNN), Recurrent Neural Network (RNN), Graph Neural Network (GNN), and Transformer architectures, have been widely used in NER.
CNN-based methods efficiently convert vocabulary into vector representations and extract features [29]. Improvements such as IDCNN and GRAM-CNN enhance large-context modeling and local feature extraction [30,31]. Dictionary-augmented and data-augmented CNN approaches further improve model robustness and adaptability [32,33]. RNN-based models, such as Long Short-Term Memory Networks (LSTM) and Gated Recurrent Unit (GRU), are widely used to capture sequential dependencies in NER tasks [34,35]. The combination of Bidirectional Long Short-Term Memory (BiLSTM) and CRF further improves sequence labeling performance, while integrating Graph Convolutional Network (GCN) allows for enhanced modeling of complex dependencies [36,37].
Transformer-based models represent the state-of-the-art for NER by modeling complex contextual information [38]. Pre-trained language models such as the pre-trained Bidirectional Encoder Representations from Transformers (BERT) and its variants have significantly improved NER performance [39,40]. In particular, for Chinese NER, advanced models like RoBERTa with Whole Word Masking-Extended (RoBERTa-wwm-ext) effectively extract context-sensitive vectors and enhance recognition accuracy [41]. Other methods, including Whole Word Masking and domain-specific adaptations, further improve the extraction of complex entities [42,43].
Recent research shows that deep learning methods excel in NER tasks by capturing global contextual semantics and modeling long-range dependencies. However, for tasks requiring rich domain knowledge, pre-trained models still face challenges in generalization and knowledge representation. To address this, researchers have embedded external knowledge—such as knowledge graphs or dictionaries—into model inputs to enhance semantic representation. Representative knowledge-enhanced BERT models, including Knowledgeable BERT (K-BERT), Lexicon-Enhanced BERT (LeBERT), and similar variants, have achieved strong results in handling small-sample scenarios and domain-specific terminology [39,44,45]. Specifically, K-BERT incorporates semantic triples from knowledge graphs, while LeBERT injects lexical information at the input layer to address fuzzy entity boundaries and complex terms in domain texts.
In this study, considering the characteristics of water transportation accident texts—such as fuzzy entity boundaries, complex long terminologies, and data imbalance—a LeBERT-BiLSTM-CRF model integrating domain vocabulary information is employed for NER. Specifically, the model utilizes LeBERT to dynamically inject specialized maritime vocabularies, effectively enhancing the recognition of nested and fuzzy entities. Furthermore, to explore the utility and quality of the extracted structural knowledge, this study trained a K-BERT model using a mixture of manually annotated data and the derived knowledge triplets. While the LeBERT-BiLSTM-CRF pipeline constructed the foundational knowledge graph, comparative experiments confirm that the K-BERT model, empowered by the injected triplets, achieves higher recognition precision. This empirical finding not only verifies the high quality of the extracted knowledge but also highlights a promising paradigm for optimizing maritime NER tasks through knowledge-driven approaches.

2.3. Relationship Extraction

Relationship Extraction (RE), like NER, can be categorized into rule-based methods, statistical machine learning methods, and neural network-based deep learning methods.
Rule-based RE methods rely on handcrafted extraction templates and structured formats to identify entity–relationship pairs [46,47]. These approaches are interpretable and simple, but require domain-specific knowledge and lack generalizability.
Statistical machine learning methods transform relationships into feature vectors and use classifiers such as MEM and SVM [48,49,50]. While effective, these methods depend heavily on feature engineering.
Neural network-based deep learning methods, such as CNNs, BiLSTM, and attention mechanisms, can automatically capture semantic relationships in text and model long-range dependencies [51,52,53]. Graph-based models further enhance RE by leveraging syntactic structures [54,55].
Pre-trained language models like BERT and its variants have set new benchmarks for RE by integrating contextual information and entity-aware attention [56,57].
For water transportation accident texts, RE is challenged by complex terminology and data imbalance. Therefore, this study adopts a knowledge migration strategy, combining pre-trained models with domain rule alignment, to improve extraction accuracy and support intelligent safety management.

2.4. Domain Knowledge Graph

Domain knowledge graphs have been widely applied across occupational safety and health, railway safety and risk management, emission calculation and control, cybersecurity risk governance and aquatic germplasm resource management [58,59,60,61,62,63]. These applications demonstrate the ability of knowledge graphs to transform raw data and named entities into intelligent reasoning and decision support for complex domains.
In the maritime domain, knowledge graphs have been developed for flag state control inspection, maritime traffic knowledge discovery, identification of illegal behaviors, and fine prediction, which have significantly enhanced information structuring, safety supervision, and decision-making [64,65,66,67,68,69,70]. However, the complexity of semantic relationships and the diversity of entity types in waterborne traffic accidents still pose major challenges to accurate and robust NER.
To address these challenges, this study proposes an approach that integrates domain-specific vocabularies and pre-trained language models to inject accident knowledge into the NER process. On this basis, we construct a comprehensive knowledge graph for ship collision prevention and control, covering all key accident elements, spatiotemporal behaviors, causes, consequences, responsible parties, and remedial decisions. This enables more accurate information extraction and supports intelligent discrimination of accident severity in maritime scenarios.

3. Methodology

The flowchart of the proposed methodology is shown in Figure 1. First, data collection and preprocessing are carried out to construct a standardized corpus, for example, by cleaning redundant information from the corpus of ship collision accident reports. Second, the standardized knowledge framework for ship collision accidents is constructed, covering accidents, spatiotemporal behaviour, cause, consequence, responsible party, and disposition decisions. The LeBERT NER model integrating domain knowledge and the RE model based on BERT and domain rules are adopted for knowledge extraction from the unstructured text of water transportation accidents. Further, the extracted structured triples are fed back to the NER model during training to recognize more entity types in the ship collision reports, thereby improving the effectiveness of NER. Finally, analysis and visualization of the knowledge graph and knowledge retrieval are used for accident-level classification.

3.1. Ship Collision Accident Ontology Modeling

3.1.1. Conceptual Layers

The study extends the SEM model to construct the knowledge standardization framework, which consists of seven core concepts, including accident, space and time, behaviour, cause, consequence, responsible party, and disposition decision [60]. The accident affects the safety and operation of the ship. Space and time are connected with where and when a series of behaviours occur in the accident, such as “collision location is 22°08.14′ N/114°13.80′ E”, “ship safety inspection on 9 April 2014”, etc. Behaviours are actions performed by vessels, accompanied by spatial and temporal information, and can be recorded from the relevant report content. For example, “At about 1011 h, the AIS of HONGDA 186 recorded the ship’s position as 29°59.36′ N/122°00.87′ E, speed 6.8 knots, heading 140.5 degrees”. The cause of the accidents can be negligence in looking out, negligence of the management of the shipping company, etc. The consequences can be “caused the sinking of the three vessels involved in fishing”, “two people on board fell into the water, of which one was rescued, and one died”, “direct economic loss of about 367,000 yuan”, and so on. The responsible party can be the corresponding person or related parties. Disposition decision is the decision-making and recommendations for the accident, such as “recommending fisheries management organizations to strengthen the regulation of ‘three-less’ vessels involved in fishing” and “cracking down on illegal fishing by three-less vessels”.

3.1.2. Entity Types

Based on the core concepts set in the knowledge standardization framework, the types of entities are defined in Table 1. An accident is the core entity in the knowledge graph, connecting all other related entities, such as vessels, personnel, environment, etc. Vessels, as the carriers of water transportation, can be of various types, such as cargo ships, passenger ships, fishing boats, and so on. Vessel dynamics refer to the behaviour and state changes of the vessel before and after the accident, including the sailing route, speed changes, and other contents; they record the evolution of the accident. Personnel covers individuals involved in or affected by the accident, such as crew members, passengers, rescuers, and so on. The behaviour, decision-making, and skills of personnel are directly related to the accident. Organization refers to institutions related to accidents, such as maritime safety investigation agencies, ship inspection agencies, and so on. These organizations play an important role in the management, investigation, handling, and prevention of accidents. Time information is crucial for tracking and analyzing the whole process of accident development, including the time of ship collisions, the time of ship construction, the time of disposition, etc. Location can be the country, administrative regions, maritime functional areas, locations, and so on, and it provides support for the spatial analysis of accidents. Environmental factors such as meteorology, water temperature, and navigational environment are important for the analysis of the causes of accidents and risk assessment. Equipment refers to the tools and devices used by ships and law enforcement agencies, such as radar, surveillance equipment, the ship’s main engine, and so on. Cause refers to a series of factors that lead to the occurrence of accidents, including human errors, equipment failures, environmental conditions, and management negligence.
The consequences of the accident include personnel injury, economic loss, environmental impact, accident level and so on. Laws and regulations provide a clear legal basis for the subsequent disposal of accidents and ensure the legitimacy and fairness of the handling process. The recommendation covers various aspects, such as recommendations for personnel, organizations, and management, which help to improve safety management, as well as enhance accident prevention and control.

3.1.3. Relationship Types

The relationship types between entities include the semantics of affiliation, attribute subject–object relationship, spatial–temporal relationship, and causal relationship, as shown in Table 2. Among them, “of_VesselFeature” is used to describe the specific attributes of the ship. Conceptual hierarchical relations include discovering, employ, manage, hold, equip, dispatch, occur, rescue, use, encounter, notify, investigate, report, and belongs_to. “manipulate_of_RealTimeDynamics” is used to reflect the specific operational behaviour of personnel in response to changes in ship dynamics. A causal relation is used to describe the causes and consequences of the accident. Spatiotemporal relation is used to portray the change in the spatial position of a ship or a person at a specific time.

3.1.4. Entity Attributes

The entity attributes include vessel features and personnel features, as shown in Table 3. Vessel features refer to the inherent attributes of a vessel, including MMSI, IMO, ship registry, ship size, and so on. Personnel features refer to the relevant characteristics of the personnel entity, including name, age, education, gender, and so on.
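To make the ontology concrete, the entity attributes and relation types above can be sketched as lightweight Python structures. This is an illustrative sketch only: the dataclass fields and entity names are hypothetical placeholders, while the relation labels ("occur", "of_VesselFeature", "manipulate_of_RealTimeDynamics") follow the types listed in Table 2.

```python
from dataclasses import dataclass

@dataclass
class Vessel:
    """Entity attributes in the spirit of Table 3 (field names are illustrative)."""
    name: str
    mmsi: str = ""
    registry: str = ""

# Knowledge-graph facts as (head, relation, tail) triples; relation labels
# follow Table 2, entity names and tails are placeholders.
triples = [
    ("Accident_001", "occur", "collision location 22°08.14′ N/114°13.80′ E"),
    ("Vessel_A", "of_VesselFeature", "cargo ship"),
    ("Crew_member_1", "manipulate_of_RealTimeDynamics", "speed reduction"),
]

def neighbours(head, facts):
    """Return all (relation, tail) pairs attached to a head entity."""
    return [(r, t) for h, r, t in facts if h == head]
```

A graph query then reduces to filtering triples by head entity, which is the basic access pattern the later severity-classification features build on.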

3.2. LeBERT Entity Recognition Model Enhanced by Domain Vocabularies

3.2.1. Domain Dictionary Construction and Matching Methods

The corpus related to waterborne traffic accidents was collected from the official website of the China MSA, the databases of China National Knowledge Infrastructure, Wanfang, Baidu Baike and so on. Following the collection and preprocessing of the corpus, we employed a TF-IDF-based ranking mechanism to identify potential domain keywords. By calculating the local frequency of terms weighted by their inverse document distribution, the top 50 representative words from each document were aggregated into a candidate set. These candidates were then filtered based on their normalized frequency percentage across the entire corpus; only terms surpassing a specific threshold were considered. The final domain dictionary was established after a rigorous manual verification by domain experts to ensure the accuracy and relevance of the maritime technical terminology.
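The TF-IDF ranking step can be sketched with the standard library alone. This is a minimal illustration, not the authors' implementation; the toy documents and the cutoff parameter `top_k` (standing in for the top-50 selection) are assumptions for demonstration, and the subsequent corpus-frequency threshold and expert verification are omitted.

```python
import math
from collections import Counter

def tfidf_keywords(docs, top_k=50):
    """Rank candidate domain keywords per document by TF-IDF.

    `docs` is a list of pre-tokenized documents (lists of terms).
    Returns, for each document, its top_k terms by TF-IDF weight.
    """
    n_docs = len(docs)
    # document frequency: in how many documents each term appears
    df = Counter(term for doc in docs for term in set(doc))
    ranked = []
    for doc in docs:
        tf = Counter(doc)
        total = len(doc)
        # local frequency weighted by inverse document distribution
        scores = {t: (tf[t] / total) * math.log(n_docs / df[t]) for t in tf}
        ranked.append(sorted(scores, key=scores.get, reverse=True)[:top_k])
    return ranked

# toy pre-tokenized corpus (illustrative terms only)
docs = [
    ["collision", "vessel", "lookout", "negligence"],
    ["collision", "vessel", "fog", "visibility"],
    ["collision", "rescue", "sinking", "fog"],
]
top = tfidf_keywords(docs, top_k=2)
```

Note how a term that occurs in every document ("collision") receives an IDF of zero and drops out of the keyword list, which is exactly the behavior wanted for filtering generic vocabulary.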
The identified and extracted terminologies related to waterborne traffic accidents are shown in Table 4. Domain terms may be compound expressions formed from several words; for example, environmental descriptions include “a south wind turning to southwest” and “showers turning to cloudy, then overcast”. The extracted domain vocabularies are trained using the Word2Vec word vector model, yielding a domain dictionary of 5377 word vectors with a dimension of 200.
A prefix tree is an efficient structure to represent words in water transportation accidents. For instance, the strings “search and rescue vessel”, “search and rescue radar”, and “search and rescue radar transponder” share common prefixes and can be stored compactly using a prefix tree. In such a structure, each node represents a character, and the path from the root node to a leaf node forms a complete string. This method effectively reduces storage space and enhances retrieval performance.
The specific steps of the dictionary tree construction algorithm are shown in Algorithm 1. It covers initializing the vocabulary and the root node, then iterating through the sentence to match each character.
Algorithm 1: Dictionary tree construction algorithm
Input: A sentence containing n characters s_c = {c_1, c_2, ..., c_n}.
Output: The character–word pair sequence s_cw = {(c_1, ws_1), (c_2, ws_2), ..., (c_n, ws_n)}.
Step 1: Initialize the vocabulary list WS = {ws_1, ws_2, ..., ws_n}, derived from a pre-compiled domain term list.
Step 2: Initialize the root node of the dictionary tree.
Step 3: For each character c_i, iterate through the vocabulary list WS and find the words ws_i that match c_i. Use (c_i, ws_i) to construct a child node of the dictionary tree.
Step 4: Repeat until every character has been matched.
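Algorithm 1's prefix tree can be realized in a few lines of Python. The sketch below is illustrative (class and function names are our own); it builds the tree from the "search and rescue ..." examples above and, for each character position, collects the dictionary words that start there.

```python
class TrieNode:
    """A node of the dictionary (prefix) tree; one node per character."""
    def __init__(self):
        self.children = {}
        self.is_word = False

def build_trie(vocabulary):
    """Algorithm 1: insert each domain term into the tree character by character."""
    root = TrieNode()
    for word in vocabulary:
        node = root
        for ch in word:
            node = node.children.setdefault(ch, TrieNode())
        node.is_word = True
    return root

def match_words(sentence, root):
    """For each character c_i, collect the dictionary words starting at c_i."""
    pairs = []
    for i, ch in enumerate(sentence):
        node, words = root, []
        for j in range(i, len(sentence)):
            node = node.children.get(sentence[j])
            if node is None:
                break
            if node.is_word:
                words.append(sentence[i:j + 1])
        pairs.append((ch, words))
    return pairs

vocab = ["search and rescue", "search and rescue radar",
         "search and rescue radar transponder"]
root = build_trie(vocab)
pairs = match_words("search and rescue radar transponder", root)
```

Because the three terms share a prefix, they occupy a single shared path in the tree, and the position-0 lookup returns all three in one left-to-right scan.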
The dictionary matching algorithm matches character-level feature vectors with word-level feature vectors through the constructed domain dictionary, as shown in Algorithm 2. It includes retrieving candidate words, mapping candidate words into vectors, calculating weights based on similarity, and computing the weighted sum of all candidate word vectors. For example, “search” in “search and rescue radar transponder” is matched with the words “search and rescue” and “search and rescue radar” through the domain dictionary tree. This matching is essential for embedding dictionary adapters into the Transformer encoders of the BERT model.
Algorithm 2: Dictionary matching algorithm
Input: WS_i = {ws_1, ws_2, ..., ws_n}, the set of candidate words, and c_i, the character to be matched.
Output: H̃ = {h̃_1, h̃_2, ..., h̃_n}, the generated matching vectors.
Step 1: Retrieve candidate words from W S that potentially match character c i .
Step 2: Map candidate words into vectors.
Step 3: Calculate weights based on the similarity of character c i to the candidate word vector.
Step 4: Weighted summation of all candidate word vectors.
Step 5: Generate the final character representation.
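Steps 1–5 can be sketched as follows, using a plain dot product in place of the model's bilinear attention (a simplification) and toy 3-dimensional vectors instead of the 200-dimensional Word2Vec embeddings; all values are illustrative.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def match_vector(char_vec, candidate_vecs):
    """Algorithm 2 sketch: weight candidate word vectors by their similarity
    to the character vector and return the weighted sum (steps 3-5)."""
    # step 3: similarity of the character to each candidate (dot product here)
    sims = [sum(c * w for c, w in zip(char_vec, wv)) for wv in candidate_vecs]
    weights = softmax(sims)
    # steps 4-5: weighted summation into the final character representation
    dim = len(candidate_vecs[0])
    return [sum(weights[j] * candidate_vecs[j][k]
                for j in range(len(candidate_vecs))) for k in range(dim)]

# toy 3-dimensional embeddings for one character and two candidate words
char_vec = [1.0, 0.0, 0.0]
candidates = [[1.0, 0.0, 0.0],   # e.g. vector for "search and rescue"
              [0.0, 1.0, 0.0]]   # e.g. vector for "search and rescue radar"
h = match_vector(char_vec, candidates)
```

The candidate most similar to the character dominates the weighted sum, so the output vector leans toward the best-matching dictionary word while still retaining some signal from the others.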

3.2.2. Named Entity Recognition Model

This paper proposes an NER model incorporating a lexical enhancement mechanism, as shown in Figure 2. It is based on character-level input and adopts a LeBERT-BiLSTM-CRF structure. The LeBERT model adds a Lexicon Adapter between two specific layers of the Transformer in the BERT model to enhance the feature information [45]. It consists of a pre-trained model of RoBERTa using fixed parameters and a Lexicon Adapter, as shown in Figure 3. In order to better adapt to the characteristics of the Chinese language, this paper chooses the RoBERTa-wwm-ext as the representation model of the embedding layer. The model adopts the Whole Word Masking (WWM) mechanism on the basis of the RoBERTa architecture to improve the model’s semantic modeling ability and perception of Chinese word boundaries in the NER task.
Given a Chinese sentence s_c = {c_1, c_2, ..., c_n} with n characters, a sequence of character–word pairs s_cw = {(c_1, ws_1), (c_2, ws_2), ..., (c_n, ws_n)} is constructed. The characters {c_1, c_2, ..., c_n} are first input to the embedder, which outputs E = {e_1, e_2, ..., e_n} by adding token, segment, and position embeddings. Then E is input to the Transformer encoder; the computation of each Transformer layer is shown in Equations (1) and (2).
G = LN(H^{l-1} + MHAttn(H^{l-1}))
H^l = LN(G + FFN(G))
where H^l = {h_1^l, h_2^l, ..., h_n^l} denotes the output of layer l, H^0 = E is the input to the first layer, LN denotes layer normalization, MHAttn denotes the multi-head attention mechanism, and FFN denotes a two-layer feedforward network with ReLU as the hidden activation function. To incorporate dictionary information between the k-th and (k+1)-th Transformer layers, the output H^k = {h_1^k, h_2^k, ..., h_n^k} is first obtained after k Transformer layers. Then, each pair (h_i^k, x_i^{ws}) is passed through a dictionary adapter to obtain h̃_i^k.
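Equations (1) and (2) can be traced in plain Python on a tiny input. The sketch uses a single attention head with identity projections and an identity-weight feedforward (simplifications; the real model uses learned multi-head attention and FFN weights), but it preserves the residual-plus-LayerNorm structure of the two equations.

```python
import math

def matmul(A, B):
    """Multiply two matrices given as nested lists."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def layer_norm(H, eps=1e-6):
    """LN: normalize each row to zero mean and unit variance."""
    out = []
    for row in H:
        mu = sum(row) / len(row)
        var = sum((x - mu) ** 2 for x in row) / len(row)
        out.append([(x - mu) / math.sqrt(var + eps) for x in row])
    return out

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def self_attention(H):
    """Single-head scaled dot-product attention with identity projections."""
    d = len(H[0])
    scores = [[sum(a * b for a, b in zip(H[i], H[j])) / math.sqrt(d)
               for j in range(len(H))] for i in range(len(H))]
    A = [softmax(row) for row in scores]
    return matmul(A, H)

def ffn(G):
    """Feedforward stand-in: ReLU with identity weights."""
    return [[max(0.0, x) for x in row] for row in G]

def transformer_layer(H_prev):
    attn = self_attention(H_prev)
    # Eq. (1): G = LN(H^{l-1} + MHAttn(H^{l-1}))
    G = layer_norm([[a + b for a, b in zip(r1, r2)] for r1, r2 in zip(H_prev, attn)])
    # Eq. (2): H^l = LN(G + FFN(G))
    return layer_norm([[a + b for a, b in zip(r1, r2)] for r1, r2 in zip(G, ffn(G))])

H_prev = [[1.0, 2.0, 3.0], [3.0, 2.0, 1.0]]  # two toy 3-dimensional character vectors
H_out = transformer_layer(H_prev)
```

The output keeps the input's shape, and each row is re-normalized to zero mean by the final LN, which is what makes the inter-layer injection of the dictionary adapter dimensionally safe.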
The Lexicon Adapter consists of a feature word vector and a bilinear attention mechanism, as shown in Figure 4. The module receives two inputs: the contextual representation and the matching lexical items corresponding to the character. The i-th input pair is denoted as (h_i^k, x_i^{ws}), where h_i^c is a character vector generated by one of the Transformer layers, and x_i^{ws} = {x_{i1}^w, x_{i2}^w, ..., x_{im}^w} is the set of word embeddings. The j-th word in x_i^{ws} is represented as Equation (3):
x_{ij}^{ws} = e^w(w_{ij})
where e^w is a pre-trained word embedding table, and w_{ij} is the j-th word in ws_i.
To align the character and word representations, a nonlinear transformation is applied to the word vector as in Equation (4):
v_{ij}^w = W_2 tanh(W_1 x_{ij}^{ws} + b_1) + b_2
where W_1 and W_2 are weight matrices, b_1 and b_2 are bias terms, and d_w and d_c denote the dimensions of the word embedding and the BERT hidden layer, respectively.
To select the most relevant words for the character from all the matched words, a character-to-word attention mechanism is introduced. The words matched for the i-th character are represented by V_i = (v_{i1}^w, ..., v_{im}^w), and the relevance of each word is calculated as Equation (5):
a_i = softmax(h_i^c W_attn V_i^T)
where W_attn is the weight matrix of the bilinear attention mechanism. The weighted sum of all words is obtained by Equation (6):
z_i^w = Σ_{j=1}^{m} a_{ij} v_{ij}^w
Finally, the weighted dictionary information is injected into the character vector by the following Equation (7):
h̃_i^k = h_i^k + z_i^w
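Equations (4)–(7) can be traced end to end with toy two-dimensional vectors. In this sketch the weight matrices W_1, W_2, and W_attn are set to the identity and the biases to zero purely for illustration (in the model they are learned), and d_c = d_w is assumed so no dimension change occurs.

```python
import math

def matvec(W, x):
    """Matrix-vector product over nested lists."""
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

def softmax(xs):
    m = max(xs)
    es = [math.exp(v - m) for v in xs]
    s = sum(es)
    return [e / s for e in es]

def lexicon_adapter(h_char, word_vecs, W1, W2, b1, b2, W_attn):
    # Eq. (4): v_ij = W2 tanh(W1 x_ij + b1) + b2, aligning word vectors
    # to the character representation space
    V = []
    for x in word_vecs:
        hidden = [math.tanh(v + b) for v, b in zip(matvec(W1, x), b1)]
        V.append([v + b for v, b in zip(matvec(W2, hidden), b2)])
    # Eq. (5): a_i = softmax(h_i^c W_attn V_i^T); W_attn is symmetric here,
    # so applying it to h_char yields the same bilinear score
    q = matvec(W_attn, h_char)
    a = softmax([sum(qk * vk for qk, vk in zip(q, v)) for v in V])
    # Eq. (6): weighted sum of the aligned word vectors
    z = [sum(a[j] * V[j][k] for j in range(len(V))) for k in range(len(h_char))]
    # Eq. (7): residual injection into the character vector
    return [h + zk for h, zk in zip(h_char, z)]

I2 = [[1.0, 0.0], [0.0, 1.0]]         # identity weights, purely illustrative
h_char = [1.0, 0.0]                   # character vector h_i^k
word_vecs = [[2.0, 0.0], [0.0, 2.0]]  # two matched word embeddings
h_new = lexicon_adapter(h_char, word_vecs, I2, I2, [0.0, 0.0], [0.0, 0.0], I2)
```

The word aligned with the character direction receives the larger attention weight, so the injected signal reinforces the character's existing representation rather than overwriting it.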
The main role of the BiLSTM layer is to perform contextual feature extraction to obtain bidirectional semantic information. The output H^l of the Transformer at layer l of the LeBERT module is used as the input to the BiLSTM. The forward hidden-layer outputs [h_0^f, h_1^f, h_2^f, ..., h_s^f] and the backward hidden-layer outputs [h_0^b, h_1^b, h_2^b, ..., h_s^b] are combined to form the complete hidden state sequence H_t = [h_0, h_1, h_2, ..., h_s] of the BiLSTM, where h_i = [h_i^f; h_i^b] is the concatenation of the forward and backward states.
The CRF layer optimizes the label sequence of the whole sentence. For a given input sequence $X$ and label sequence $y$, the score of the whole sequence, $\mathrm{Score}(X, y)$, is obtained through Equation (8).
$$\mathrm{Score}(X, y) = \sum_{i=0}^{n} A_{y_i, y_{i+1}} + \sum_{i=0}^{n} P_{i, y_i}$$
where $A$ is the transition matrix, $A_{y_i, y_{i+1}}$ is the transition score from label $y_i$ to label $y_{i+1}$, and $P_{i, y_i}$ is the emission score of label $y_i$ for the $i$th character.
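The sequence score of Equation (8) can be sketched as follows (a minimal illustration that sums emission and transition scores for one label sequence, ignoring the special start/stop transitions some CRF implementations add):

```python
import numpy as np

def crf_score(emissions, transitions, labels):
    """Sketch of Eq. (8): score one label sequence.

    emissions   : (n, k) per-character label scores P
    transitions : (k, k) transition matrix A
    labels      : length-n sequence of label indices y_0 .. y_{n-1}
    """
    # Emission term: score of the chosen label at each position
    emit = sum(emissions[i, y] for i, y in enumerate(labels))
    # Transition term: score of each consecutive label pair
    trans = sum(transitions[labels[i], labels[i + 1]]
                for i in range(len(labels) - 1))
    return emit + trans
```

During training the CRF minimizes the negative log-likelihood, i.e., the gold sequence's score relative to the log-sum-exp over all possible label sequences.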

3.3. Relationship Extraction Model Based on Semantic Representation and Rule Constraints

A RE model that integrates semantic representations with rule constraints is shown in Figure 5. The BERT-MLP_rule model takes the complete sentence text, the target entity pair, and their correspondence as joint inputs and semantically encodes the sentences and entities via BERT to obtain semantic vectors of the relationships between the entities. The encoded vectors are then fed into a Multilayer Perceptron (MLP), which predicts the relationship through forward propagation.
The specific steps are as follows. First, the input data is formatted as $\{S, e_1, e_2, R\}$, where $S$ denotes the input sentence, $e_1$ and $e_2$ denote the two entities, and $R$ denotes the relationship between them. Second, the BERT tokenizer is used to obtain the token sequence $tokens = [t_1, t_2, \dots, t_m]$ as follows.
$$tokens = \mathrm{Tokenizer}(S)$$
$$p_{e_1} = \mathrm{FindEntityPosition}(tokens, e_1)$$
$$p_{e_2} = \mathrm{FindEntityPosition}(tokens, e_2)$$
where $m$ is the number of tokens, and $\mathrm{Tokenizer}(S)$ denotes tokenizing the input text $S$. $\mathrm{FindEntityPosition}(tokens, e_1)$ locates the sub-sequence that matches entity $e_1$ and returns its start and end positions.
Third, the position information $p_{e_1}$ and $p_{e_2}$ of entities $e_1$ and $e_2$ is encoded into the entity vectors $H_{e_1}$ and $H_{e_2}$ to form $H_{e_1}^*$ and $H_{e_2}^*$, respectively. These are concatenated with the vector representation $H_S$ of the sentence $S$ to obtain the composite representation vector $H_{concat}$ as follows.
$$H_S = \mathrm{BERT}(S)$$
$$H_{e_1}^* = \mathrm{PositionEmbedding}(p_{e_1}, H_{e_1})$$
$$H_{e_2}^* = \mathrm{PositionEmbedding}(p_{e_2}, H_{e_2})$$
$$H_{concat} = \mathrm{Concatenate}(H_S, H_{e_1}^*, H_{e_2}^*)$$
Fourth, the composite representation vector $H_{concat}$ is fed into the multilayer perceptron to obtain the probability distribution $P(R \mid S, e_1, e_2)$ over the relationships.
$$P(R \mid S, e_1, e_2) = \mathrm{softmax}(W H_{concat} + b)$$
where $W$ and $b$ are the weights and bias of the fully connected layer, and softmax is the activation function. Fifth, the difference between the predicted and true relationships is evaluated as follows.
$$loss_{re} = -\sum_{i=1}^{N} z_i \log\big(P(R_i \mid S, e_1, e_2)\big)$$
where $N$ is the number of relationship categories, and $z_i$ is the one-hot encoded form of the true label. Finally, the generated relationship triples are validated to produce the corresponding triple $(e_1, R_p, e_2)$.
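The classification head described above (concatenation, softmax, and cross-entropy loss) can be sketched in NumPy as follows; the vectors and weight shapes here are illustrative stand-ins for the BERT outputs:

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - x.max())
    return e / e.sum()

def relation_forward(h_s, h_e1, h_e2, W, b):
    """Concatenate sentence and entity vectors, then classify.

    h_s, h_e1, h_e2 : (d,) sentence and entity representations
    W               : (n_relations, 3*d) fully connected weights
    b               : (n_relations,) bias
    Returns P(R | S, e1, e2) as a probability vector.
    """
    h_concat = np.concatenate([h_s, h_e1, h_e2])   # H_concat
    return softmax(W @ h_concat + b)

def cross_entropy(p, z):
    """loss_re = -sum_i z_i * log(p_i), with z a one-hot label vector."""
    return -float(np.sum(z * np.log(p + 1e-12)))
```

With zero weights the predicted distribution is uniform, and the loss equals $\log N$ for any one-hot label, which is a handy sanity check when wiring up such a head.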

3.4. K-BERT-Based Entity Recognition Model

K-BERT consists of four layers: knowledge, embedding, visualization, and mask coding, as shown in Figure 6. The input text is converted into a sentence tree that is fused with external knowledge in the embedding and visible layers. Each sentence is converted into a token vector and a visible matrix.
The knowledge layer combines the input text $s = \{w_0, w_1, w_2, \dots, w_n\}$ with external knowledge and transforms it into a sentence tree $t = \{w_0, w_1, w_2, \dots, w_i[(r_{i0}, w_{i0}), \dots, (r_{ik}, w_{ik})], \dots, w_n\}$. The process is divided into token querying in the knowledge graph $K$ (K-Query) and knowledge injection (K-Inject).
In K-Query, the triples of all entities mentioned in sentence $s$ are matched by querying the ship collision knowledge triples as follows:
$$E = \mathrm{K\_Query}(s, K)$$
where $E = [(w_i, r_{i0}, w_0), \dots, (w_i, r_{ik}, w_k)]$ denotes the set of matched triples for all entities in the sentence. $E$ is then injected into the corresponding positions of sentence $s$: the original sentence serves as the trunk, and each trunk node can be connected to multiple branches. The formula for knowledge injection is as follows.
$$t = \mathrm{K\_Inject}(s, E)$$
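The K-Query/K-Inject steps can be sketched as plain dictionary lookups over a small triple store; the token and triple contents below are illustrative, not drawn from the paper's dataset:

```python
def k_query(tokens, kg):
    """Sketch of K-Query: collect triples whose head entity appears in the sentence.

    kg maps an entity string to a list of (relation, tail) pairs.
    """
    return {w: kg[w] for w in tokens if w in kg}

def k_inject(tokens, matched):
    """Sketch of K-Inject: attach matched (relation, tail) branches to each trunk token.

    Returns the sentence tree as a list of (token, branches) pairs, where the
    original sentence order is the trunk and branches may be empty.
    """
    return [(w, matched.get(w, [])) for w in tokens]
```

In the real model the flattened tree then receives soft positions and a visible matrix; this sketch only shows how the trunk/branch structure arises.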
K-BERT introduces Soft Position Embedding (SPE) and a Visible Matrix (VM) in the embedding layer to replace the absolute position encoding strategy of BERT. SPE provides relative position encoding based on the structure of the sentence tree so as to preserve the correct semantic order. To avoid the structural semantic bias triggered by external knowledge injection, the visible matrix restricts which tokens can attend to one another: a token pair is considered "visible" only when both tokens are located in the same branch of the sentence tree. The visible matrix can be defined as
$$M_{ij} = \begin{cases} 0, & w_i \,\&\, w_j \\ -\infty, & w_i \,|\, w_j \end{cases}$$
where $w_i \,\&\, w_j$ means that $w_i$ and $w_j$ are visible to each other, and $w_i \,|\, w_j$ means that they are invisible to each other.
It is necessary to modify the Transformer structure by introducing a mask unit stacked with the self-attention mechanism. The equations for mask-self-attention are (21)–(23):
$$Q^{i+1}, K^{i+1}, V^{i+1} = h^i W_q,\; h^i W_k,\; h^i W_v$$
$$S^{i+1} = \mathrm{softmax}\!\left(\frac{Q^{i+1} (K^{i+1})^{\top} + M}{\sqrt{d_k}}\right)$$
$$h^{i+1} = S^{i+1} V^{i+1}$$
where $W_q$, $W_k$, and $W_v$ are the weight matrices of the mask-attention mechanism, $h^i$ is the hidden state of the $i$th self-attention layer, and $d_k$ is the scaling factor. $M$ is the visible matrix, and $S_{jk}^{i+1}$ is the attention value. If $w_k$ is masked with respect to $w_j$ in $M$, then $S_{jk}^{i+1}$ is 0.
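Mask-self-attention per Equations (21)–(23) can be sketched in NumPy as follows. Adding $-\infty$ to an invisible pair's score makes its attention weight exactly zero after the softmax; the matrices here are illustrative:

```python
import numpy as np

def mask_self_attention(h, Wq, Wk, Wv, M):
    """Sketch of Eqs. (21)-(23).

    h  : (n, d) hidden states of the previous layer
    M  : (n, n) visible matrix with 0 for visible pairs, -inf for invisible
    Returns the next hidden states and the attention matrix S.
    """
    Q, K, V = h @ Wq, h @ Wk, h @ Wv                 # Eq. (21)
    d_k = K.shape[-1]
    scores = (Q @ K.T + M) / np.sqrt(d_k)            # Eq. (22), pre-softmax
    # Row-wise softmax; -inf entries become exactly 0 attention weight
    scores = scores - scores.max(axis=-1, keepdims=True)
    S = np.exp(scores)
    S = S / S.sum(axis=-1, keepdims=True)
    return S @ V, S                                  # Eq. (23)
```

With a visible matrix that only allows self-attention (all off-diagonal entries $-\infty$), each token's output reduces to its own value vector, which makes the masking behavior easy to verify.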

3.5. Classification of Accident Severity

3.5.1. Feature Quantification

Relying on the constructed ship collision prevention and control knowledge graph, this paper further extracts features related to accident severity. Three types of topological structure indices, namely degree centrality, betweenness centrality, and closeness centrality, are used to quantify the factors influencing accident severity [71,72,73]. The knowledge graph can be formalized as a directed graph $G = (V, E)$, where $V$ denotes the set of entity nodes, $E \subseteq V \times R \times V$ is the set of triples, and each triple $(h, r, t) \in E$ denotes a directed edge connecting the head entity $h$ to the tail entity $t$ with relationship type $r$. The out-degree and in-degree sums for node type $C$ are defined in Equations (24) and (25), respectively.
$$S_{out}(C) = \sum_{v_i \in V_C} D_{out}(v_i)$$
$$S_{in}(C) = \sum_{v_i \in V_C} D_{in}(v_i)$$
where $v_i \in V_C$, $D_{out}(v_i) = |\{(v_i, r, v_j) \in E\}|$, and $D_{in}(v_i) = |\{(v_j, r, v_i) \in E\}|$.
The betweenness centrality of node $v_i$ is given by Equation (26):
$$C_B(v_i) = \sum_{\substack{s \neq v_i \neq t \\ s, t \in V}} \frac{\sigma_{st}(v_i)}{\sigma_{st}}$$
where $\sigma_{st}(v_i)$ denotes the number of shortest paths from $s$ to $t$ that pass through $v_i$, and $\sigma_{st}$ denotes the total number of shortest paths from node $s$ to node $t$.
The total value of betweenness centrality for node type $C$ is given by Equation (27):
$$S_{BC}(C) = \sum_{v_i \in V_C} C_B(v_i)$$
The closeness centrality of node $v_i$ is given by Equation (28):
$$C_C(v_i) = \frac{1}{\sum_{v_j \in V,\, v_j \neq v_i} d(v_i, v_j)}$$
where $d(v_i, v_j)$ denotes the shortest-path distance between nodes $v_i$ and $v_j$.
The total value of closeness centrality for node type $C$ is given by Equation (29):
$$S_{CC}(C) = \sum_{v_i \in V_C} C_C(v_i)$$
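Two of these indices can be sketched directly from a triple list with the standard library: the type-level degree sums of Equations (24)–(25) and the closeness of Equation (28) via breadth-first search over unit-weight directed edges. The triples and node types below are illustrative (libraries such as networkx provide the betweenness computation of Equations (26)–(27)):

```python
from collections import deque, defaultdict

def degree_sums(triples, node_type, type_of):
    """Eqs. (24)-(25): aggregate out-/in-degree over all nodes of one type.

    triples  : iterable of (head, relation, tail)
    type_of  : dict mapping node -> its entity type C
    """
    out_d, in_d = defaultdict(int), defaultdict(int)
    for h, r, t in triples:
        out_d[h] += 1
        in_d[t] += 1
    nodes = [v for v in set(out_d) | set(in_d) if type_of.get(v) == node_type]
    return sum(out_d[v] for v in nodes), sum(in_d[v] for v in nodes)

def closeness(v, adj):
    """Eq. (28): reciprocal of summed shortest-path distances from v (BFS)."""
    dist = {v: 0}
    queue = deque([v])
    while queue:
        u = queue.popleft()
        for w in adj.get(u, []):
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    total = sum(d for node, d in dist.items() if node != v)
    return 1.0 / total if total else 0.0
```

Note that Equation (28) only sums over nodes reachable from $v$ in this sketch; on a disconnected graph a convention for unreachable nodes must be chosen.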

3.5.2. Accident Severity Classification

The LSTM-RNN model is used for accident severity classification, as shown in Figure 7. It includes the input layer, two LSTM layers, two dense layers, and a softmax layer. The input of the LSTM layer is the quantified features, and its output passes through a Rectified Linear Unit (ReLU) activation function. Two fully connected layers are trained on top of the LSTM layers to map the LSTM output to the accident severity classes, and the softmax function activates the output layer. To prevent the model from overfitting, the ReLU activation function and Dropout are employed to enhance the model's robustness, accelerate convergence, and improve generalization.
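The described stack can be sketched as a PyTorch module; the hidden sizes, dropout rate, and feature/class counts below are illustrative placeholders, not the paper's configuration:

```python
import torch
import torch.nn as nn

class SeverityLSTM(nn.Module):
    """Sketch of the classifier: two stacked LSTM layers, two dense layers,
    ReLU, Dropout, and a softmax output, per the description in the text."""

    def __init__(self, n_features, n_classes, hidden=64, p_drop=0.3):
        super().__init__()
        # Two stacked LSTM layers with inter-layer dropout
        self.lstm = nn.LSTM(n_features, hidden, num_layers=2,
                            batch_first=True, dropout=p_drop)
        self.fc1 = nn.Linear(hidden, 32)
        self.drop = nn.Dropout(p_drop)
        self.fc2 = nn.Linear(32, n_classes)

    def forward(self, x):                       # x: (batch, seq_len, n_features)
        out, _ = self.lstm(x)
        h = torch.relu(self.fc1(out[:, -1]))    # ReLU on the last time step
        return torch.softmax(self.fc2(self.drop(h)), dim=-1)
```

In practice the softmax is often folded into the loss (e.g., `nn.CrossEntropyLoss` expects raw logits); it is kept explicit here to mirror the layer list in the text.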

3.6. Evaluation of Experimental Results

The performance of the knowledge extraction model is quantitatively assessed using precision, recall, and the F1-score. Specifically, precision reflects the model's recognition accuracy by comparing correctly identified entities against all predicted entities. Recall measures recognition comprehensiveness by comparing correctly identified entities against the total number of actual entities in the dataset. As the harmonic mean of these two metrics, the F1-score is used as the primary indicator of the model's overall performance.
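These three metrics can be sketched over sets of predicted and gold entity spans as follows (a minimal illustration; span representation is up to the annotation scheme):

```python
def prf1(pred, gold):
    """Precision, recall, and F1 over sets of predicted and gold entities.

    pred, gold : sets of hashable entity identifiers (e.g. (start, end, type))
    """
    tp = len(pred & gold)                         # correctly identified entities
    p = tp / len(pred) if pred else 0.0           # vs. all predicted entities
    r = tp / len(gold) if gold else 0.0           # vs. all actual entities
    f1 = 2 * p * r / (p + r) if p + r else 0.0    # harmonic mean
    return p, r, f1
```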

4. Results and Discussion

4.1. Data Collection and Preprocessing

Accident report data is acquired from the official website of China MSA (https://www.msa.gov.cn accessed on 24 February 2026). A total of 312 ship-collision investigation reports were collected. A total of 90 reports were manually annotated and used in Section 3.2 (LeBERT entity recognition model enhanced by domain vocabularies); 292 reports (including the 90) were used to construct the ship-collision knowledge-triple dataset for knowledge injection; 20 reports (non-overlapping with the 292) were used in Section 3.4 (K-BERT-based entity recognition model). All 312 reports were used for knowledge graph construction and graph-structure-based severity classification.
Based on the structural characteristics of ship-collision investigation reports, the data is categorized into semi-structured and unstructured components. For semi-structured data, which primarily involves tabular information regarding vessel and personnel characteristics, an automated extraction framework is implemented using the DeepSeek-V3 LLM accessed via its official API. Specifically, the temperature was set to 0 to enforce greedy decoding, thereby eliminating sampling randomness and reducing the risk of hallucination. Both the frequency penalty and presence penalty were set to 0, as information extraction tasks require the precise replication of source terminology without penalizing vocabulary repetition. A "five-step prompt strategy" is designed based on the features of semi-structured text data, as detailed in Table 5, encompassing task requirement, data description, sample data, scenario information, and standard output. First, the task requirement prompt clearly defines the extraction objectives, data fields, and attribute values, providing precise instructions to the LLM. Second, the data description prompt explicitly describes the raw text format, key–value relationships, and other metadata, establishing a unified template for entity fields, relationship types, and attribute units to enhance extraction stability. Then, the sample data prompt provides instances of raw semi-structured text using formatted separators to differentiate between entities and attributes, thereby eliminating potential parsing conflicts and ensuring accurate parsing by the LLM. Furthermore, domain-specific prior knowledge is injected through the scenario information prompt to enhance task comprehension, while historical dialogue context is maintained within the API message buffer to allow for the iterative optimization of prompting strategies through contextual feedback.
Finally, the standard output prompt explicitly regulates the output format to ensure it meets structured storage requirements for subsequent knowledge graph construction or database integration. By activating the LLM’s recognition of data parsing and conversion rules through this strategy, structured data is automatically generated.
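The assembly of the five prompt components into a request can be sketched as building an OpenAI-compatible message list (the format DeepSeek's chat API follows); the section headings, field contents, and function name here are illustrative, not the exact prompts used:

```python
def build_extraction_prompt(task, description, sample, scenario, output_spec,
                            raw_text):
    """Sketch of the five-step prompt strategy as a chat message list.

    The five components become one system message; the raw semi-structured
    report text to extract from is passed as the user message.
    """
    system = "\n\n".join([
        "## Task requirement\n" + task,
        "## Data description\n" + description,
        "## Sample data\n" + sample,
        "## Scenario information\n" + scenario,
        "## Standard output\n" + output_spec,
    ])
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": raw_text},
    ]
```

The resulting list would be sent with temperature, frequency penalty, and presence penalty all set to 0, as described above; maintaining the returned messages in the buffer enables the iterative contextual feedback mentioned in the text.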
Vessel feature data is utilized as validation data, where attributes such as Vessel Name, Former Name, Vessel Type, Port of Registry, IMO Number, and Call Sign are stored in semi-structured text within PDF files. By applying the five-step prompt strategy to inject instructions and constraints into the LLM, vessel feature extraction and conversion are realized, automatically generating standardized “entity-relation-entity” CSV files. The resulting structured data for the vessel “OMEGA”, following the completion of this extraction task, is presented in Table 6.
The preprocessing of unstructured text data includes entity and relationship annotation. Standardization and cleaning of the large volume of collected text involved uniformly replacing punctuation marks, converting full-width English characters to half-width form, and eliminating redundant information. The BIO (Beginning, Inside, Outside) annotation scheme was employed to label entity sequences in the text. Specifically, "B" indicates the beginning token of an entity, "I" denotes the subsequent tokens within the same entity, and "O" represents tokens that do not belong to any entity. Based on this strategy, the entities in the water transportation domain, including accident, vessel, vessel feature, vessel dynamics, equipment, personnel, personnel feature, organization, time, location, environment, cause, consequence, laws and regulations, and recommendations, are annotated. After completing the entity annotation, the semantic relationships between entities are annotated further. The RE dataset is formed in the {sentence, entity 1, relationship, entity 2} format. The annotated data can then be used to train the automatic entity and relationship recognition experiments.
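The BIO labeling scheme described above can be sketched as follows; the example tokens and span positions are illustrative:

```python
def bio_tags(tokens, entities):
    """Assign BIO tags to a token sequence.

    tokens   : list of tokens
    entities : list of (start, end, type) token spans, end exclusive
    Returns one tag per token: B-<type>, I-<type>, or O.
    """
    tags = ["O"] * len(tokens)                 # "O": outside any entity
    for start, end, etype in entities:
        tags[start] = f"B-{etype}"             # "B": beginning of the entity
        for i in range(start + 1, end):
            tags[i] = f"I-{etype}"             # "I": inside the same entity
    return tags
```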

4.2. Entity Recognition Based on Domain Vocabulary Enhancement

The dataset constructed in this experiment is derived from 90 ship-collision accident reports and contains 8556 text segments. About 74.6% are under 100 words, 23.8% are between 100 and 200 words, and only a small fraction exceeds 200 words. For the LeBERT experiments, this dataset was split 9:1 into training and test subsets for model development and evaluation. Table 7 shows the parameter configurations for the various components of the LeBERT-BiLSTM-CRF model, which incorporates the domain vocabulary enhancement mechanism used in the experiment. With forward–backward concatenation, the BiLSTM produces an output whose dimension is twice the hidden size. The base version of the Chinese RoBERTa with Whole Word Masking-Extended (Chinese RoBERTa-wwm-ext), developed by Harbin Institute of Technology and iFLYTEK Research (HFL), is used in this experiment and has approximately 125 million parameters. To improve training efficiency, this pre-trained model was fine-tuned using a small learning rate to balance model performance and resource consumption.
Table 8 shows the parameter configuration of each component module of the LeBERT-BiLSTM-CRF model that integrates the domain vocabulary enhancement mechanism in the experiment. Training used the Adam optimizer and minimized the CRF negative log-likelihood (NLL) as a sequence-level objective over the gold label sequences. Gradients were updated by backpropagation. The maximum sequence length was 512; the training/validation batch sizes were 32/16; the learning rates were 3 × 10−5 for the encoder and 3 × 10−3 for the CRF layer.
The NER experiments were run under the Windows 11 operating system, using the PyTorch 2.1.0 deep learning framework and CUDA 12.4 for model construction and training. To verify the effectiveness of the proposed LeBERT-BiLSTM-CRF model enhanced by domain vocabulary information on ship collision NER, several typical models are selected for comparison, including the classical CRF model, the BiLSTM-CRF model, and pre-trained language models combined with multiple BiLSTM-CRF variants. All models are trained and tested on the ship collision accident dataset, and the experimental results are shown in Table 9. The precision, recall, and F1-score of the CRF model are 68.1%, 70.1%, and 69.4%, respectively, showing that it is difficult to effectively capture the complex entity relationships and contextual semantics of ship collision scenarios by relying solely on a sequence annotation mechanism. After introducing the BiLSTM structure on top of the CRF, the precision slightly improves to 69.7%; however, the recall decreases significantly to 63.0%, reducing the overall F1-score to 66.1%. Although the BiLSTM structure increases the complexity and parameter count of the model, it fails to improve the model's generalization in capturing domain features of ship collision accidents. After introducing the pre-trained language model BERT on top of the CRF, model performance improves markedly, with precision, recall, and F1-score reaching 72.8%, 72.1%, and 71.9%, respectively, an improvement of about 3–6% over the CRF and BiLSTM-CRF models. This result reflects the clear advantage of the BERT pre-trained model in feature extraction and semantic understanding.
BERT-BiLSTM-CRF leverages BERT to deeply mine feature–context dependencies at the linguistic level and BiLSTM to better capture long-distance dependencies and sequential context, which effectively improves entity boundary recognition and overall model performance, with precision, recall, and F1-score reaching 80.7%, 79.4%, and 80.1%, respectively. Furthermore, Chinese-BERT-WWM and RoBERTa were selected for comparison with BERT. The Chinese-BERT-WWM-BiLSTM-CRF model achieves 85.2%, 85.8%, and 85.5% in precision, recall, and F1-score, respectively, while the RoBERTa-BiLSTM-CRF model performs even better, exceeding 85.6% on every metric. This indicates that pre-trained models with more refined masking strategies and training procedures perform better on ship collision domain data. The LeBERT-BiLSTM-CRF model introduces a specially designed Lexicon Adapter structure to fuse lexical information with character features effectively. This model achieves the best performance, with precision, recall, and F1-score reaching 86.3%, 87.5%, and 86.8%, respectively, significantly better than the other models. This demonstrates that incorporating lexical information plays a key role in improving entity recognition performance in the field of ship collision accidents.
To evaluate the recognition performance of the LeBERT-BiLSTM-CRF model on the ship collision accident dataset more comprehensively, this paper further compares the recognition accuracy of each model on different entity categories, as shown in Table 10. Models relying only on CRF or BiLSTM-CRF sequence annotation perform unevenly across entity types: the F1-score for "Vessel" is 89.7%, while for categories with sparse data or abstract semantics, such as "Environment" and "Recommendation", it is as low as 20~40%. This suggests that the basic models cannot fully learn feature representations for data-scarce, semantically ambiguous, or context-dependent entities. With the introduction of pre-trained language models in BERT-CRF and BERT-BiLSTM-CRF, overall recognition performance improves significantly, with richer semantic and representational capacity. On high-frequency categories such as "Vessel", "Personnel", and "Vessel Feature", the F1-score improves further to over 80%; meanwhile, the recognition of some low-frequency categories, such as "Environment" and "Agency", also improves, with F1-scores of about 50~70%. With stronger pre-trained models such as Chinese-BERT-WWM and RoBERTa, performance continues to improve, especially on low-frequency entity categories such as "Recommendation" and "Event Cause", where the F1-score rises significantly to 44~75%, indicating that the model can still capture semantic features effectively under relatively scarce training samples. By introducing domain vocabulary into the RoBERTa pre-trained language model, the recognition of low-frequency entity classes improves markedly: the F1-score for the least frequent category, "Recommendation", reaches 57.5%, much higher than that of the other models.
Similarly, for "Cause", "Consequence", "Equipment", and "Environment", the corresponding F1-scores increase significantly to more than 75%. This result suggests that by incorporating external lexical knowledge, the model captures named entity boundaries more accurately, improving both the recognition of low-frequency categories and overall performance.
The ablation experiment evaluates the contribution of individual modules to overall model performance. To verify the actual effect of the pre-trained language module and lexical enhancement, ablation experiments are conducted with BERT-BiLSTM-CRF, RoBERTa-BiLSTM-CRF, and LeBERT-BiLSTM-CRF (with general-domain vocabulary information), as shown in Table 11. BERT-BiLSTM-CRF, as the baseline model, used BERT-base-Chinese to generate dynamic word vectors at the embedding layer and achieved 80.7% precision, 79.4% recall, and an 80.1% F1-score, verifying the efficacy of the BERT module in contextual semantic modelling. Replacing the BERT module with RoBERTa in RoBERTa-BiLSTM-CRF significantly enhances semantic modelling, with the F1-score improving from 80.1% to 85.6%; this may be attributed to the whole-word masking strategy and the larger-scale training corpus. The LeBERT-BiLSTM-CRF model builds on RoBERTa-BiLSTM-CRF, and the results show the F1-score improves from 85.6% to 86.3%, indicating that the lexical enhancement strategy compensates to some extent for the limitations of pure character representations, better determining entity boundaries and improving entity recognition performance. After replacing the generic vocabulary with the ship collision accident domain vocabulary, performance improves to 87.5% recall and an 86.8% F1-score. This result shows that domain vocabulary enhances the recognition of entities with fuzzy boundaries in ship collision accident text, provides the model with more targeted semantic features that expand coverage of domain entities, and strengthens its capability to capture low-frequency and proprietary entities. The ablation experiments reveal that the NER method fusing vocabulary information is highly practical for fine-grained entity recognition in ship collision accidents.
To further verify the robustness and generalization capability of the LeBERT-BiLSTM-CRF model incorporating domain vocabulary information in the NER task, a 5-fold cross-validation method was employed for assessment. To ensure the consistency and comparability of the evaluation, the model architecture and hyperparameter settings were maintained identically to the configurations specified in Table 7 and Table 8. Table 12 presents the specific performance metrics across five independent experiments. The results indicate highly consistent performance under different data partitions, with mean Precision, Recall, and F1-score reaching 86.28%, 87.13%, and 86.70%, respectively. Notably, the standard deviation of the F1-score is as low as 0.0078; such minimal fluctuation strongly demonstrates the superior robustness of the model. Furthermore, the highest F1-score of 0.8763 was achieved in Fold 4, showcasing the model’s exceptional peak performance.
Table 13 details the classification performance for 15 distinct entity types. Under the rigorous testing of 5-fold validation, the model maintained high recognition accuracy for categories such as “Vessel” and “Laws and Regulations”. Even for sparse categories like “Recommendation” and “Cause,” the average performance remained robust due to the injection of domain-specific lexical information. In conclusion, the 5-fold cross-validation results not only confirm the outstanding robustness of the LeBERT-BiLSTM-CRF model but also prove its reliable generalization capability through sustained high-level performance across multiple unseen data subsets.
Beyond the empirical robustness demonstrated by the cross-validation, it is essential to clarify the minimum data requirement for this extraction pipeline. When fine-tuning high-parameter architectures such as RoBERTa and LeBERT, the corpus scale must be evaluated at the sentence level rather than the document level. As outlined in Section 4.1, the manually annotated dataset used for this specific task was constructed from 90 representative collision reports, yielding a total of 8556 sentence-level text segments. The existing literature on domain-specific NER indicates that leveraging pre-trained language models substantially reduces the dependency on massive annotated datasets. Empirical evidence demonstrates that a high-quality, domain-specific corpus of a few thousand sentences is typically sufficient to effectively fine-tune these models and reach a performance plateau [74]. Therefore, our dataset of 8556 text segments comfortably exceeds this functional threshold.
Furthermore, statistical analysis reveals that approximately 74.6% of the extracted text segments contain fewer than 100 characters, and 23.8% range between 100 and 200 characters. This predominance of short text segments aligns well with the 512-token sequence limit of the RoBERTa architecture [75,76]. More importantly, confining the input segments within this processing window effectively prevents context dilution, frequently described as the "lost in the middle" phenomenon, a prevalent issue in which the attention mechanism's efficacy degrades severely on overly long documents [77]. Coupled with the explicit injection of domain lexicons acting as a strong inductive bias, this data scale and length distribution collectively ensure rapid convergence and mitigate the risk of overfitting in the proposed pipeline.

4.3. Analysis of Relationship Extraction

Using the same 90 reports as the domain-vocabulary-enhanced LeBERT-BiLSTM-CRF NER experiments, the RE dataset comprises 11,086 labeled triples, split 8:2 into 8866 training and 2220 validation instances. The BERT-MLP_rule model is adopted for RE in ship collision accidents. The configuration of the BERT-MLP_rule model used in this article is detailed in Table 14. The pre-trained language model used was the base version of Chinese-RoBERTa-wwm-ext. The input to the multilayer perceptron module consisted of the concatenation of the entire sentence context vector output by BERT and the embedding representations of the two entities, resulting in an input dimension three times that of the BERT hidden layer. The MLP architecture included a single hidden layer with 128 neurons. Given the 38 relation types involved in the experiment, the number of nodes in the model’s output layer was set to 38.
The hyperparameter configuration is shown in Table 15. The text length and entity length hyperparameters standardize the data input. The Chinese RoBERTa-wwm-ext is employed to reduce training cost and improve convergence efficiency. During training, the model loss is calculated using the cross-entropy function, the MLP layer weights are updated by backpropagation, and the parameters are optimized using the Adam optimizer.
The performance of the model in recognizing each relationship category is shown in Figure 8. For categories with more abundant training data, the model performs well; for example, "of_PersonFeature" and "at_Time" reach an F1-score of 0.98. For categories with fewer samples, it still performs relatively well; for example, "rescue" and "occur" reach F1-scores of 0.75 and 0.73, respectively. This indicates that the model reliably recognizes categories with sufficient training data. Notably, for the small-sample relationships "manipulate_of_NavigationStatus", "on_of_EngineStatus", and "at_of_EngineStatus", the recognition accuracies reach 98.5%, 98.4%, and 98.1%, respectively. This may be because their semantic and contextual features are more distinctive, enabling the model to capture these relationships accurately from limited data. The strong contextual semantic capture capability of the Chinese RoBERTa-wwm-ext model also aids RE on sparse data. In summary, the model can efficiently identify and extract large-scale domain entity relationships in ship collision accidents.
To ensure the prediction quality and authenticity in the RE task, a quality control mechanism based on the correlation analysis between performance metrics and confidence scores was established. As illustrated in Figure 9, the reliability of the large-scale extraction task is quantitatively evaluated by benchmarking the F1-score against the average confidence on the validation set. The experimental data reveal that as the training converges, the F1-score stabilizes above 0.93, while the peak average confidence reaches 0.987. This strong positive correlation demonstrates that the model possesses excellent self-calibration capability, and its confidence scores serve as a reliable benchmark for prediction veracity. Based on this validation, by utilizing high-confidence thresholds as an automated verification criterion, the quality of the subsequent knowledge graph construction can be effectively ensured, providing quantitative evidence for the authenticity of the knowledge graph.
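The confidence-based quality control described above can be sketched as a simple thresholding filter over predicted triples; the threshold value, triple contents, and function name below are illustrative:

```python
def filter_triples(predictions, threshold=0.95):
    """Split predicted triples by confidence for knowledge graph ingestion.

    predictions : iterable of ((e1, relation, e2), confidence) pairs
    Returns (kept, flagged): high-confidence triples are kept for automatic
    insertion; the rest are flagged for manual review.
    """
    kept, flagged = [], []
    for (e1, rel, e2), conf in predictions:
        target = kept if conf >= threshold else flagged
        target.append((e1, rel, e2, conf))
    return kept, flagged
```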

4.4. Entity Recognition Based on K-BERT-BiLSTM-CRF

Knowledge injection uses a domain triple set extracted from 292 ship-collision investigation reports, including the 90 reports used to train the domain-vocabulary-enhanced LeBERT-BiLSTM-CRF model. Entities are recognized by the domain-vocabulary-enhanced LeBERT-BiLSTM-CRF model, and relations by the BERT-MLP_rule model. The 20-report dataset used to train/evaluate the K-BERT model is disjoint from this triple pool.
Table 16 shows the parameter configurations for each module of the K-BERT-BiLSTM-CRF model used in the actual experiment. Because the BiLSTM module concatenates the outputs of the forward and backward LSTM models, the hidden layer dimension of the module output is twice that of a single LSTM hidden layer. The base version of the Chinese RoBERTa-wwm-ext pre-trained model used in this experiment has approximately 125 million parameters. To improve training efficiency, the RoBERTa-wwm-ext pre-trained model was fine-tuned during training using a small learning rate to balance model performance and resource consumption.
Hyperparameter settings for the K-BERT-based entity recognition model are shown in Table 17. The Adam optimizer was chosen to update the model parameters during the training process, and NLL was used as the loss function to measure the difference between the model output and the real values. The training process was set with a maximum sequence length of 512, a training batch size of 16, a validation batch size of 8, a learning rate of 1 × 10−5 for the BERT layer, a learning rate of 1 × 10−3 for the CRF layer, and 100 epochs.
To verify the effectiveness of the proposed K-BERT-BiLSTM-CRF model, it is compared with BERT-BiLSTM-CRF, RoBERTa-BiLSTM-CRF, and LeBERT-BiLSTM-CRF. RoBERTa-BiLSTM-CRF improves on the BERT-BiLSTM-CRF architecture by replacing the Chinese-BERT-Base module with Chinese RoBERTa with Whole Word Masking-Extended (Chinese RoBERTa-wwm-ext) as the pre-trained language model. LeBERT-BiLSTM-CRF introduces domain vocabulary information. The K-BERT-BiLSTM-CRF model takes the self-constructed ship collision accident knowledge graph as external knowledge and injects it into BERT during training. As shown in Table 18, the K-BERT-BiLSTM-CRF model achieves the best performance, with precision, recall, and F1-score of 84.5%, 84.4%, and 84.7%, respectively, indicating the effectiveness of introducing the domain knowledge graph for improving NER performance. BERT-BiLSTM-CRF, without a domain enhancement mechanism, has a relatively low F1-score of only 78.0%. Using Chinese RoBERTa-wwm-ext instead of BERT-base-Chinese as the encoder significantly improves performance, with an F1-score of 81.0%, suggesting that a stronger pre-trained language model helps. LeBERT-BiLSTM-CRF improves the F1-score to 83.5%, indicating that semantic enhancement has a positive impact on NER. The proposed K-BERT-BiLSTM-CRF model is better at recognizing domain terms and complex entities by introducing the domain knowledge of ship collision accidents: it retains the semantic modelling capability of Chinese RoBERTa-wwm-ext while using the knowledge triples injected by K-BERT to support context modelling. K-BERT-BiLSTM-CRF thus has significant advantages for NER in the water transportation field.

4.5. Knowledge Graph of Ship Collision Accidents

For knowledge-graph construction, a validated extraction pipeline was adopted: LeBERT-BiLSTM-CRF (domain-vocabulary enhanced) for NER and Chinese RoBERTa-wwm-ext-MLP for RE, applied to all 312 ship-collision investigation reports. The resulting ship collision prevention and control knowledge graph contains 35,000 entities and 320,000 relationships. The distributions of entities and relationships are shown in Figure 10 and Figure 11, respectively. In addition to common entity types such as time, personnel, and location, the graph integrates entity types specific to ship collision knowledge, including ship dynamics, environment, and recommendations. This provides a comprehensive description of the evolution of ship collision incidents.
To further validate the effectiveness of the constructed knowledge graph, this paper compares its numbers of entities and relationships with those of other knowledge graphs in the water transportation domain, as shown in Table 19. The constructed knowledge graph for ship collision prevention and control surpasses existing knowledge graphs in the water transportation domain in terms of entity and relation types, entity and relation volume, and the data types considered. It supports both semi-structured and unstructured data, covering 38 relation types and 15 entity types, with the largest numbers of entities and relations. Through this fine-grained division of entity and relationship types, it can support applications in different scenarios, such as association query and analysis of the subject–space–time–behavior-driven accident evolution process, ship activity, and accident causation.
Knowledge graphs usually represent entities and their semantic relationships as triples, which have good expressive capability for characterizing structured knowledge. However, triple-based representation may incur high computational complexity in large-scale applications, limiting the operational efficiency of graph queries. To address this problem, knowledge graph embedding techniques map graph data into a real-valued vector space, making it easier for intelligent algorithms (e.g., machine learning and deep learning) to exploit the information hidden in the graph. This paper uses the t-distributed Stochastic Neighbor Embedding (t-SNE) method to reduce the graph embeddings from 768 dimensions to a two-dimensional plane. K-means clustering is then used to analyze and visualize the embedded graph nodes, as shown in Figure 12. The clustering results align with the 15 entity types in the ship collision prevention and control knowledge graph, indicating that the knowledge representation effectively encodes the semantic concepts of entities related to ship collision incidents. Furthermore, nodes such as “Accident,” “Vessel,” and “Vessel feature” form distinct clusters, indicating that these nodes are similar in the high-dimensional embedding space and remain similar after reduction to two dimensions.
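The dimensionality-reduction-and-clustering step above can be sketched with scikit-learn. This is a minimal, self-contained illustration: the 768-dimensional embeddings below are random placeholders grouped around three synthetic entity types (the paper's graph has 15 types), not the actual graph embeddings.

```python
import numpy as np
from sklearn.manifold import TSNE
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Placeholder 768-D node embeddings: 20 nodes around each of three
# synthetic "entity type" centers (three types keep the sketch small).
centers = rng.normal(size=(3, 768))
embeddings = np.vstack([c + 0.05 * rng.normal(size=(20, 768)) for c in centers])

# Reduce the 768-D embeddings to a 2-D plane with t-SNE, then cluster
# the projected nodes with K-means, as in Figure 12.
coords = TSNE(n_components=2, perplexity=10, random_state=0).fit_transform(embeddings)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(coords)

print(coords.shape, np.unique(labels).size)
```

Note that t-SNE preserves local neighborhood structure rather than global distances, which is why tightly related nodes such as “Vessel” and “Vessel feature” appear as coherent clusters after projection.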
The shortest path between the two ships involved in an accident demonstrates the comprehensive ship collision knowledge contained in the constructed knowledge graph. The shortest-path query is expressed as follows:

MATCH path = (node1)-[*n]-(node2) WHERE node1.name = a AND node2.name = b RETURN path

where MATCH path denotes the matched query path, *n denotes a path of length n, nodes a and b are two different entities, and RETURN path returns the matched path information. Taking the collision between the “Ansheng 22” and the “Minshiyu 06256” as an example, the shortest path in Figure 13 links related entities such as the involved vessels, their companies, vessel dynamics, accident locations, personnel on board, vessel equipment, accident causes, accident consequences, violated laws and regulations, and recommendations, together with their relationships. This enables a comprehensive association analysis of the entire process of a vessel collision incident.
Maritime supervisors can quickly obtain an overall picture of a ship collision accident through the shortest-path query. Based on the subject–space–time–behavior analysis of the spatiotemporal process of the accident, Figure 14 shows the dynamics of the vessel ZTE 2 during the collision and can support ship collision situation awareness when combined with relevant AIS information. As shown in Figure 15, the knowledge graph also supports queries of ship inspection activities, so that maritime supervisors can determine whether a ship has violated mandatory inspection requirements. The ship collision prevention and control knowledge graph, combined with NLP technology, can improve the accuracy of NER in the water traffic accident domain, support more efficient accident analysis, and provide knowledge support for intelligent transportation decision-making.

4.6. Classification of Accidents Based on the Constructed Knowledge Graph

Accident severity can be determined according to the relevant standards of the Maritime Safety Administration of the Ministry of Transport of China. The classification criteria are mainly based on factors such as injuries and deaths, economic losses, and the degree of environmental pollution caused by the accident. Although current maritime regulations clearly define accident severity levels, the associated classification mechanism mainly relies on manually recording accident consequences. This makes it difficult to satisfy the need for data-driven automatic identification of accident severity in practical scenarios such as emergency response, risk warning, and supervisory assistance. A knowledge-graph-based classification model can quickly complete NER upon receiving the initial accident text and then identify the accident severity level with good real-time performance, accuracy, and scalability.
The LSTM deep learning classification model is employed to predict the severity of injuries in ship collisions based on the knowledge graph. By introducing the topological information of the knowledge graph, the model is able to capture the complex relationships between accident features, thereby improving accuracy. The topological features include the number of nodes, the sum of in-degrees, the sum of out-degrees, betweenness centrality, and closeness centrality. Z-score normalization was employed to address the differences in numerical magnitude between the topological features. To meet the model input requirements, the accident severity category variable was one-hot encoded as 0 (minor accident), 1 (ordinary accident), 2 (relatively serious accident), and 3 (serious accident). The Synthetic Minority Oversampling Technique (SMOTE) is an oversampling method designed to address class imbalance: it generates new synthetic samples by interpolating between existing minority-class samples, thereby enhancing the learning effectiveness and classification performance of the model. The experiments were conducted on a system with a Core i9 CPU and 32 GB of RAM, and the stochastic gradient descent optimization algorithm was used for parameter updating. The hyperparameters are shown in Table 20.
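The two preprocessing steps described above (Z-score normalization of the topological features and SMOTE-style interpolation for minority classes) can be sketched in NumPy. This is a simplified illustration under stated assumptions: the feature matrix below is random placeholder data, and `smote_like` interpolates toward the single nearest minority neighbor rather than sampling among the k nearest neighbors as full SMOTE does.

```python
import numpy as np

rng = np.random.default_rng(0)

def zscore(X):
    """Z-score normalization: zero mean, unit variance per feature column."""
    return (X - X.mean(axis=0)) / X.std(axis=0)

def smote_like(X_min, n_new, rng):
    """SMOTE-style oversampling: create each synthetic sample by linear
    interpolation between a minority sample and its nearest minority neighbor."""
    synthetic = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        d = np.linalg.norm(X_min - X_min[i], axis=1)
        d[i] = np.inf                     # exclude the sample itself
        j = int(np.argmin(d))             # nearest minority-class neighbor
        gap = rng.random()                # interpolation factor in [0, 1)
        synthetic.append(X_min[i] + gap * (X_min[j] - X_min[i]))
    return np.array(synthetic)

# Hypothetical 5-D feature vectors (node count, in-degree sum, out-degree
# sum, betweenness centrality, closeness centrality) for a minority class.
X_minority = rng.normal(size=(8, 5))
X_new = smote_like(zscore(X_minority), n_new=12, rng=rng)
print(X_new.shape)
```

Because synthetic points lie on segments between real minority samples, they stay inside the minority class's region of feature space rather than duplicating existing samples.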
This paper uses the LSTM-RNN to classify the water transportation accident data and compares it with MLP, Random Forest (RF), and XGBoost models. Figure 16 shows the loss of the LSTM-RNN model during training. After 200 training epochs, the model had achieved good performance: the loss decreased slowly during the first 50 epochs, dropped rapidly between epochs 50 and 125, and then gradually leveled off and stabilized after about 125 epochs. Figure 17 shows the training accuracy of the LSTM-RNN model over the same 200 epochs. Accuracy increased slowly during the first 50 epochs, rose rapidly between epochs 50 and 125, and stabilized on both the training and test sets after approximately 125 epochs.
Figure 18, Figure 19, Figure 20 and Figure 21 show the confusion matrices of the LSTM-RNN, MLP, XGBoost, and RF models on the test set. The LSTM-RNN model demonstrated the best overall performance in recognizing the four types of accidents. Its prediction accuracy for ordinary accidents was the highest, with 41 samples correctly identified and only 3 misclassified into other categories. For relatively serious accidents, 23 samples were correctly predicted, with only 1 misclassified. Of the 18 serious accident samples, 16 were correctly classified, with only 2 misclassified into other categories. Additionally, 16 minor accidents were accurately classified. The XGBoost model performed well in identifying relatively serious and serious accidents: its confusion matrix shows that it correctly recognized all 24 samples of relatively serious accidents and achieved 17 correct predictions for serious accidents, with only 1 misclassification. However, its performance was slightly weaker in classifying minor and ordinary accidents, with 4 and 6 misclassifications, respectively. The random forest model also achieved 100% accuracy in identifying relatively serious accidents, but its performance on ordinary accidents was slightly inferior to that of the LSTM-RNN (35 correct and 9 misclassified). For minor accidents, 15 samples were correctly classified. Notably, the RF model misclassified 8 serious accidents as relatively serious accidents, indicating a relatively weak ability to distinguish between adjacent severity levels. The MLP model performed worse than the other three models, especially in identifying ordinary and serious accidents, where its accuracy was relatively low: only 29 ordinary accidents were correctly predicted, with as many as 15 misclassified, and for serious accidents, 15 were correctly identified and 3 misclassified. While the MLP model achieved some effectiveness in recognizing relatively serious (21 samples) and minor accidents (16 samples), its overall stability was insufficient, making it difficult to classify multi-category accident data accurately.
To further evaluate the comprehensive performance of the different models in water traffic accident severity classification, this paper introduces four indicators, namely accuracy, precision, recall, and F1-score, to quantitatively analyze prediction performance, as shown in Table 21. Except for the LSTM-RNN model, the overall accuracy of the other three models does not exceed 90%. The LSTM-RNN model performs best, with an accuracy of 92.31%, significantly higher than the others; in contrast, the MLP model performs weakly at 77.08%, while XGBoost and RF achieve intermediate accuracies between 80% and 90%. In terms of precision, the LSTM-RNN model is still dominant at 91.84%, indicating high prediction reliability and an ability to effectively reduce false alarms. XGBoost and RF reach intermediate levels of 89.12% and 82.40%, respectively, while the MLP model achieves only 79.00%. As for recall, LSTM-RNN again leads at 91.70%, followed by XGBoost (89.65%), while the RF and MLP models reach 79.61% and 81.84%, respectively, suggesting that both are somewhat deficient in recognition completeness. The F1-score, as the harmonic mean of precision and recall, is an important indicator of overall model performance. The LSTM-RNN model's F1-score reaches 91.73%, far exceeding the other models, indicating that it achieves a good balance between accuracy and generalization ability and possesses stronger prediction capability.
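The four indicators above all derive directly from the confusion matrix. A minimal NumPy sketch of the macro-averaged computation follows; the confusion matrix is a small hypothetical 3-class example, not the paper's actual counts.

```python
import numpy as np

def macro_metrics(cm):
    """Accuracy and macro-averaged precision/recall/F1 from a confusion
    matrix with rows = true classes and columns = predicted classes."""
    cm = np.asarray(cm, dtype=float)
    tp = np.diag(cm)                      # correct predictions per class
    precision = tp / cm.sum(axis=0)       # per-class precision (column sums)
    recall = tp / cm.sum(axis=1)          # per-class recall (row sums)
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    accuracy = cm.trace() / cm.sum()
    return accuracy, precision.mean(), recall.mean(), f1.mean()

# Hypothetical 3-class confusion matrix for illustration only.
cm = [[8, 1, 1],
      [1, 9, 0],
      [0, 2, 8]]
acc, p, r, f1 = macro_metrics(cm)
print(round(acc, 4))  # 25 correct of 30 -> 0.8333
```

Macro averaging weights every class equally regardless of its sample count, which is why it is informative alongside plain accuracy when severity classes are imbalanced.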
According to the model performance curves shown in Figure 22 and Figure 23, a more in-depth analysis can be conducted regarding the discriminative capability and accuracy of the different models. On the macro-averaged ROC curve, both the LSTM-RNN and XGBoost models achieve an AUC of 0.98, demonstrating the strongest classification ability. This indicates that these two models achieve high true positive rates and low false positive rates when classifying waterway traffic accident severity. The RF model yields an AUC of 0.97, slightly lower but still excellent. In comparison, the MLP model has an AUC of 0.95, lower than the other models, suggesting that its capability to distinguish between categories is relatively weaker. Turning to the macro-averaged PR curve, LSTM-RNN and XGBoost again rank first with a PR-AUC of 0.94, indicating that both maintain high prediction precision while preserving recall, making them suitable for real-world tasks with high requirements for accurate recognition. The PR-AUC of RF is 0.90, with good overall stability, while that of the MLP model is only 0.86, showing weaker performance on imbalanced data and a tendency toward missed detections or misjudgments. Based on both the ROC and PR evaluation metrics, the LSTM-RNN and XGBoost models exhibit superior overall discriminative capability, predictive accuracy, and stability.
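The AUC values above can be read through the one-vs-rest decomposition: a macro-averaged multiclass ROC-AUC averages one binary AUC per class, and each binary AUC equals the probability that a randomly chosen positive sample is scored above a randomly chosen negative one (the Mann-Whitney statistic). A minimal NumPy sketch with hypothetical scores:

```python
import numpy as np

def binary_auc(pos_scores, neg_scores):
    """AUC as the fraction of (positive, negative) pairs where the positive
    sample receives the higher score; ties count half (Mann-Whitney form)."""
    pos = np.asarray(pos_scores, dtype=float)[:, None]
    neg = np.asarray(neg_scores, dtype=float)[None, :]
    wins = (pos > neg).sum() + 0.5 * (pos == neg).sum()
    return wins / (pos.size * neg.size)

# Hypothetical one-vs-rest scores for a single severity class:
# 3 of the 4 (positive, negative) pairs rank the positive higher.
auc = binary_auc([0.9, 0.4], [0.3, 0.7])
print(auc)  # 0.75
```

This pairwise reading explains why an AUC of 0.98 is compatible with a lower PR-AUC of 0.94: ROC-AUC is insensitive to class imbalance, whereas precision (and hence the PR curve) degrades as negatives outnumber positives.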
Considering the results from the confusion matrix; accuracy, precision, recall, and F1-score; and the ROC and PR curves, the graph-feature-driven LSTM-RNN model leverages its excellent sequence modelling capability and robustness to handle multi-class, imbalanced accident severity classification. Compared with traditional methods that rely solely on text vectors or statistical features for accident severity prediction, the ship collision prevention and control knowledge graph approach extracts entities and relationships from accident reports and forms a semantic structure [67,84]. The knowledge graph has complex network topology features. To evaluate the effect of incorporating knowledge graph topological features on accident severity prediction, this study systematically compares the performance of the LSTM-RNN model under various feature combinations. As shown in Table 22, with only the Nodes characteristic as input, the model achieves an accuracy of 83.65% and an F1-score of 84.63%. The incremental introduction of individual topological components led to consistent performance gains: combining the Nodes characteristic with Betweenness centrality, Degree centrality, and Closeness centrality improved the F1-score to 87.66%, 89.56%, and 89.65%, respectively. Notably, Closeness centrality provided the most significant boost to accuracy, reaching 90.38%, suggesting that the global proximity of entities within the knowledge graph is a critical factor in discriminating accident severity levels. Finally, when the Nodes characteristic and all Topological features are jointly input, the accuracy peaks at 92.31%, with the F1-score improving to 91.73%. This validates the significant advantages of the knowledge graph's topological structure and demonstrates a clear synergistic effect among the different topological dimensions in enhancing accident severity classification performance.
Although the optimized model achieved high accuracy on the test set, 6 samples were misclassified. To investigate the underlying causes of these errors, two representative misclassified cases (Case 74 and Case 91) were selected from the test set for qualitative analysis. This analysis is strictly based on the core features extracted from the knowledge graphs, namely the Nodes characteristic (representing the scale of entity nodes) and the Topological features (including degree centrality, betweenness centrality, and closeness centrality).
The analysis in Table 23 shows that the primary source of error is the nonlinear mapping bias between the graph’s structural representation (Nodes characteristic and topological features) and the actual accident severity. Overestimation bias occurs when a “Relatively serious” accident has a complex narrative structure, leading to an inflated Nodes characteristic and betweenness centrality. This structural “busyness” misleads the model into predicting a higher severity level. Conversely, underestimation bias occurs when a “Serious” accident involves few entities and a simple causal chain, resulting in a restricted Nodes characteristic along with low degree centrality and closeness centrality. This structural “simplicity” masks the true severity of the accident’s consequences.
These rare edge cases highlight the inherent limits of relying exclusively on pure topological structure for classification. However, even without introducing any additional semantic weighting, the current LSTM-RNN model achieved accurate classification in the vast majority (over 91%) of real-world test samples. This robust performance validates that decoding unstructured accident texts into quantifiable knowledge graph topological features is a highly effective paradigm for accident severity assessment, successfully capturing the complex physical and causal networks underlying marine accidents. While mitigating the mapping bias in extreme long-tail samples, where structural complexity and semantic severity mismatch, would require integrating deep semantic embeddings, the current pure-structure framework has already demonstrated clear superiority over traditional baseline models. It serves as a robust, scalable, and accurate tool for practical maritime traffic safety management.

5. Conclusions and Future Work

This study establishes a comprehensive framework covering accident data acquisition, standardized knowledge representation, knowledge extraction method development, and knowledge graph construction and application for waterway traffic accident classification. By comprehensively analyzing ship accidents, an ontology model incorporating event, spatiotemporal behavior, cause, consequence, responsible party, and disposition decision is proposed. To address issues such as the blurred boundaries of domain terms, the difficulty of identifying long entities, and low-frequency vocabulary recognition, a combined LeBERT and BiLSTM-CRF NER model incorporating maritime domain vocabulary was designed. This model demonstrated significant advantages in accurately identifying domain-specific terminology and long entities. Furthermore, to effectively extract complex semantic relationships from texts, a BERT-MLP_rule RE model incorporating semantic information was proposed, which systematically extracted and identified semantic relations with an overall F1-score of up to 94.5%. In addition, a large-scale ship collision accident knowledge graph was constructed, containing 35,000 nodes and 320,000 relationships; it fully supports semantic queries and graph reasoning analysis in complex accident scenarios. Finally, a method combining knowledge graph topological features with an LSTM-RNN model was explored for accident severity classification. Comparative experiments showed that the proposed classification model achieved an accuracy of 92.31%, demonstrating high accuracy and generalization capability. This research not only confirms the important application value of knowledge graphs in waterway traffic accident severity classification but also provides a methodological foundation for future risk assessment and intelligent early warning system development.
Although this study conducted in-depth knowledge mining and intelligent application of ship collision accident text data, certain limitations require further exploration in future research. Firstly, the current knowledge graph is primarily constructed from historical accident text data. Future studies could incorporate multi-source heterogeneous data, such as AIS data, satellite remote sensing images, and video surveillance data, to enhance the knowledge graph's capability in real-time scenarios and dynamic accident management. Secondly, a crucial subsequent step involves bridging the gap between retrospective semantic modeling and quantitative collision-risk interpretation. By integrating navigational variables and the COLREGs, future research could explore the inference of quantitative risk indices, such as the collision risk index (CRI), to establish concrete decision thresholds and timing for navigation support systems. Furthermore, the transition from descriptive semantic patterns to actionable navigational insights is essential. It is intended that extracted accident knowledge will be integrated into real-time or near-real-time navigation support for Maritime Autonomous Surface Ships. This would involve mapping semantic accident severity levels to navigational risk escalation and corresponding avoidance maneuvers, thereby providing proactive decision support rather than solely retrospective analysis. Moreover, to address the mapping bias between structural topology and actual semantic severity observed in rare edge cases, future iterations of the classification model will explore the multi-modal fusion of deep semantic embeddings with graph topological features. Finally, building on the further development of graph-driven intelligent decision-making technologies, graph embedding-based accident risk assessment and early warning models can be explored to realize more comprehensive maritime safety management.

Author Contributions

Conceptualization, H.Y.; methodology, H.Y., X.X., Z.G., T.W. and L.X.; resources, H.Y.; writing—original draft preparation, H.Y., X.X. and Z.G.; writing—review and editing, H.Y., T.W. and L.X.; funding acquisition, H.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Key Research and Development Program of China (No. 2022YFC3302703), the National Natural Science Foundation of China (No. 42371415 and No. 42101429), and the Young Elite Scientists Sponsorship Program by China Association for Science and Technology (CAST) (No. YESS20220491).

Data Availability Statement

The data presented in this study are available upon request from the corresponding author. The data are not publicly available because they involve privacy restrictions of maritime authorities and confidentiality restrictions related to a key national research and development project.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Chen, X.; Wu, H.; Han, B.; Liu, W.; Montewka, J.; Liu, R.W. Orientation-aware ship detection via a rotation feature decoupling supported deep learning approach. Eng. Appl. Artif. Intell. 2023, 125, 106686. [Google Scholar] [CrossRef]
  2. Chauvin, C.; Lardjane, S.; Morel, G.; Clostermann, J.P.; Langard, B. Human and organisational factors in maritime accidents: Analysis of collisions at sea using the HFACS. Accid. Anal. Prev. 2013, 59, 26–37. [Google Scholar] [CrossRef] [PubMed]
  3. Kayiran, B.; Yazir, D.; Aslan, B. Data-driven Bayesian network approach to maritime accidents involved by dry bulk carriers in Turkish search and rescue areas. Reg. Stud. Mar. Sci. 2023, 67, 103193. [Google Scholar] [CrossRef]
  4. Hänninen, M. Bayesian networks for maritime traffic accident prevention: Benefits and challenges. Accid. Anal. Prev. 2014, 73, 305–312. [Google Scholar] [CrossRef] [PubMed]
  5. Gan, L.; Ye, B.; Huang, Z.; Xu, Y.; Chen, Q.; Shu, Y. Knowledge graph construction based on ship collision accident reports to improve maritime traffic safety. Ocean Coast. Manag. 2023, 240, 106660. [Google Scholar] [CrossRef]
  6. Qu, X.; Meng, Q.; Suyi, L. Ship collision risk assessment for the Singapore Strait. Accid. Anal. Prev. 2011, 43, 2030–2036. [Google Scholar] [CrossRef]
  7. Fan, C.; Bolbot, V.; Montewka, J.; Zhang, D. Advanced Bayesian study on inland navigational risk of remotely controlled autonomous ship. Accid. Anal. Prev. 2024, 203, 107619. [Google Scholar] [CrossRef]
  8. Namgung, H.; Kim, J.S. Collision risk inference system for maritime autonomous surface ships using COLREGs rules compliant collision avoidance. IEEE Access 2021, 9, 7823–7835. [Google Scholar] [CrossRef]
  9. Namgung, H. Local route planning for collision avoidance of maritime autonomous surface ships in compliance with COLREGs rules. Sustainability 2021, 14, 198. [Google Scholar] [CrossRef]
  10. Yu, H.; Meng, Q.; Fang, Z.; Liu, J. Literature review on maritime cybersecurity: State-of-the-art. Navigation 2023, 76, 453–466. [Google Scholar]
  11. Yu, H.; Guo, Z.; Fang, Z.; Xu, L.; Xu, J. An environment–kinetic compound space–time prism-based approach for assessing multi-ship collision risk in confined water. J. Navig. 2025, 78, 58–79. [Google Scholar] [CrossRef]
  12. Fang, Z.; Yu, H.; Ke, R.; Shaw, S.L.; Peng, G. Automatic identification system-based approach for assessing the near-miss collision risk dynamics of ships in ports. IEEE Trans. Intell. Transp. Syst. 2018, 20, 534–543. [Google Scholar] [CrossRef]
  13. Yu, H.; Fang, Z.; Murray, A.T.; Peng, G. A direction-constrained space-time prism-based approach for quantifying possible multi-ship collision risks. IEEE Trans. Intell. Transp. Syst. 2019, 22, 131–141. [Google Scholar] [CrossRef]
  14. Chen, X.; Xin, Z.; Zhang, H.; Wu, Y.; Wei, C.; Postolache, O. Vision Transformer-Based Image Dehazing for Climate-Resilient Maritime Navigation. IEEE Trans. Intell. Transp. Syst. 2026, 1–13. [Google Scholar] [CrossRef]
  15. Wang, Z.; Shao, F.; Zhang, C.; Yu, H.; Chen, S.; Wu, L. Collision Avoidance Pattern with Collective Wisdom: Ship Action Decision-Making Azimuth Map Construction Based on COLREGs. J. Mar. Sci. Eng. 2025, 13, 2240. [Google Scholar] [CrossRef]
  16. Yu, H.; Meng, Q.; Fang, Z.; Liu, J.; Xu, L. A review of ship collision risk assessment, hotspot detection and path planning for maritime traffic control in restricted waters. J. Navig. 2022, 75, 1337–1363. [Google Scholar] [CrossRef]
  17. Yu, C.; Mao, Z.; Gao, S. An Approach of Extracting Information for Maritime Unstructured Text Based on Rules. J. Transp. Inf. Saf. 2017, 35, 40–47. (In Chinese) [Google Scholar]
  18. Liu, D.; Cheng, L. MAKG: A maritime accident knowledge graph for intelligent accident analysis and management. Ocean Eng. 2024, 312, 119280. [Google Scholar] [CrossRef]
  19. He, L.; Wang, S.; Cao, X. Multi-feature fusion method for Chinese shipping companies credit named entity recognition. Appl. Sci. 2023, 13, 5787. [Google Scholar] [CrossRef]
  20. Hettne, K.M.; Stierum, R.H.; Schuemie, M.J.; Hendriksen, P.J.; Schijvenaars, B.J.; Mulligen, E.M.; Kleinjans, J.; Kors, J.A. A dictionary to identify small molecules and drugs in free text. Bioinformatics 2009, 25, 2983–2991. [Google Scholar] [CrossRef]
  21. Bikel, D.M.; Schwartz, R.; Weischedel, R.M. An algorithm that learns what’s in a name. Mach. Learn. 1999, 34, 211–231. [Google Scholar] [CrossRef]
  22. Srihari, R.K. A hybrid approach for named entity and sub-type tagging. In Proceedings of the Sixth Applied Natural Language Processing Conference, Seattle, WA, USA, 29 April–4 May 2000; pp. 247–254. [Google Scholar]
  23. Borthwick, A.; Sterling, J.; Agichtein, E.; Grishman, R. NYU: Description of the MENE named entity system as used in MUC-7. In Proceedings of the Seventh Message Understanding Conference (MUC-7), Fairfax, VA, USA, 29 April–1 May 1998. [Google Scholar]
  24. Zhou, G.; Su, J. Exploring deep knowledge resources in biomedical name recognition. In Proceedings of the International Joint Workshop on Natural Language Processing in Biomedicine and its Applications (NLPBA/BioNLP), Geneva, Switzerland, 28–29 August 2004; COLING: Geneva, Switzerland, 2004; pp. 99–102. [Google Scholar]
  25. Yu, H.; Han, Y.; Xu, L.; Wei, T.; Zhang, X. Incorporating knowledge graph and deep learning method for the classification of ship offense activities. Reg. Stud. Mar. Sci. 2026, 94, 104785. [Google Scholar] [CrossRef]
  26. Shen, J.; Wang, X.; Li, S.; Yao, L. Exploiting rich features for Chinese named entity recognition. In Proceedings of the 2010 IEEE International Conference on Intelligent Systems and Knowledge Engineering, Hangzhou, China, 15–16 November 2010; IEEE: Piscataway, NJ, USA, 2010; pp. 278–282. [Google Scholar]
  27. Srivastava, S.; Sanglikar, M.; Kothari, D.C. Named entity recognition system for Hindi language: A hybrid approach. Int. J. Comput. Linguist. 2011, 2, 10–23. [Google Scholar]
  28. Meenachisundaram, T.; Dhanabalachandran, M. Biomedical Named Entity Recognition Using the SVM Methodologies and bio Tagging Schemes. Rev. Chim. 2021, 72, 52–64. [Google Scholar] [CrossRef]
  29. Collobert, R.; Weston, J.; Bottou, L.; Karlen, M.; Kavukcuoglu, K.; Kuksa, P. Natural language processing (almost) from scratch. J. Mach. Learn. Res. 2011, 12, 2493–2537. [Google Scholar]
  30. Strubell, E.; Verga, P.; Belanger, D.; McCallum, A. Fast and accurate entity recognition with iterated dilated convolutions. arXiv 2017, arXiv:1702.02098. [Google Scholar] [CrossRef]
  31. Zhu, Q.; Li, X.; Conesa, A.; Pereira, C. GRAM-CNN: A deep learning approach with local context for named entity recognition in biomedical text. Bioinformatics 2018, 34, 1547–1554. [Google Scholar] [CrossRef]
  32. Gui, T.; Ma, R.; Zhang, Q.; Zhao, L.; Jiang, Y.G.; Huang, X. CNN-Based Chinese NER with Lexicon Rethinking. In Proceedings of the 28th International Joint Conference on Artificial Intelligence (IJCAI), Macao, China, 10–16 August 2019; International Joint Conferences on Artificial Intelligence: CA, USA, 2019; pp. 4982–4988. [Google Scholar]
  33. Kong, J.; Zhang, L.; Jiang, M.; Liu, T. Incorporating multi-level CNN and attention mechanism for Chinese clinical named entity recognition. J. Biomed. Inform. 2021, 116, 103737. [Google Scholar] [CrossRef]
  34. Sherstinsky, A. Fundamentals of recurrent neural network (RNN) and long short-term memory (LSTM) network. Phys. D Nonlinear Phenom. 2020, 404, 132306. [Google Scholar] [CrossRef]
  35. Chung, J.; Gulcehre, C.; Cho, K.; Bengio, Y. Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv 2014, arXiv:1412.3555. [Google Scholar] [CrossRef]
  36. Huang, Z.; Xu, W.; Yu, K. Bidirectional LSTM-CRF models for sequence tagging. arXiv 2015, arXiv:1508.01991. [Google Scholar] [CrossRef]
  37. Cetoli, A.; Bragaglia, S.; O’Harney, A.D.; Sloan, M. Graph convolutional networks for named entity recognition. arXiv 2017, arXiv:1709.10053. [Google Scholar]
  38. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, Ł.; Polosukhin, I. Attention is all you need. In Proceedings of the Advances in Neural Information Processing Systems (NIPS), Long Beach, CA, USA, 4–9 December 2017; Curran Associates, Inc.: Red Hook, NY, USA, 2017; Volume 30. [Google Scholar]
  39. Peters, M.E.; Ruder, S.; Smith, N.A. To tune or not to tune? Adapting pretrained representations to diverse tasks. arXiv 2019, arXiv:1903.05987. [Google Scholar] [CrossRef]
  40. Li, H.; Yu, L.; Lyu, M.; Qian, Y. Fusion deep learning and machine learning for multi-source heterogeneous military entity recognition. In Proceedings of the 2021 IEEE Conference on Telecommunications, Optics and Computer Science (TOCS), Shenyang, China, 11–13 December 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 535–539. [Google Scholar]
  41. Gao, F.; Zhang, L.; Wang, W.; Zhang, B.; Liu, W.; Zhang, J.; Xie, L. Named Entity Recognition for Equipment Fault Diagnosis Based on RoBERTa-wwm-ext and Deep Learning Integration. Electronics 2024, 13, 3935. [Google Scholar] [CrossRef]
42. Cui, Y.; Che, W.; Liu, T.; Qin, B.; Yang, Z. Pre-training with whole word masking for Chinese BERT. IEEE/ACM Trans. Audio Speech Lang. Process. 2021, 29, 3504–3514. [Google Scholar] [CrossRef]
  43. Xin, Y.; Li, S.; Meiling, L.; Keyan, X.; Cheng, L.; Xuchao, D. Knowledge graph construction with BERT-BiLSTM-IDCNN-CRF and graph algorithms for metallogenic pattern discovery: A case study of pegmatite-type lithium deposits in China. Ore Geol. Rev. 2025, 176, 106514. [Google Scholar]
  44. Liu, W.; Zhou, P.; Zhao, Z.; Wang, Z.; Ju, Q.; Deng, H.; Wang, P. K-bert: Enabling language representation with knowledge graph. In Proceedings of the AAAI Conference on Artificial Intelligence, New York, NY, USA, 7–12 February 2020; AAAI Press: Palo Alto, CA, USA, 2020; Volume 34, pp. 2901–2908. [Google Scholar]
  45. Liu, X.; Zhao, J.; Yao, J.; Zheng, H.; Wang, Z. Sequential lexicon enhanced bidirectional encoder representations from transformers: Chinese named entity recognition using sequential lexicon enhanced BERT. PeerJ Comput. Sci. 2024, 10, e2344. [Google Scholar] [CrossRef]
  46. Fundel, K.; Küffner, R.; Zimmer, R. RelEx—Relation extraction using dependency parse trees. Bioinformatics 2007, 23, 365–371. [Google Scholar] [CrossRef]
  47. Deng, B.; Fan, X.; Yang, L. Entity relation extraction method using semantic pattern. Comput. Eng. 2007, 33, 212–214. (In Chinese) [Google Scholar]
48. Kambhatla, N. Combining lexical, syntactic, and semantic features with maximum entropy models for information extraction. In Proceedings of the ACL Interactive Poster and Demonstration Sessions, Barcelona, Spain, 21–26 July 2004; Association for Computational Linguistics: Stroudsburg, PA, USA, 2004; pp. 178–181. [Google Scholar]
49. De Saeger, S.; Torisawa, K.; Tsuchida, M.; Kazama, J.; Wu, C.; Ohtake, K.; Uchimoto, K. Relation acquisition using word classes and partial patterns. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, Edinburgh, UK, 27–31 July 2011; Association for Computational Linguistics: Stroudsburg, PA, USA, 2011; pp. 825–835. [Google Scholar]
50. Giuliano, C.; Lavelli, A.; Pighin, D.; Romano, L. FBK-IRST: Kernel methods for semantic relation extraction. In Proceedings of the Fourth International Workshop on Semantic Evaluations (SemEval-2007), Prague, Czech Republic, 23–24 June 2007; Association for Computational Linguistics: Stroudsburg, PA, USA, 2007; pp. 141–144. [Google Scholar]
51. Zeng, D.; Liu, K.; Chen, Y.; Zhao, J. Distant supervision for relation extraction via piecewise convolutional neural networks. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, Lisbon, Portugal, 17–21 September 2015; Association for Computational Linguistics: Stroudsburg, PA, USA, 2015; pp. 1753–1762. [Google Scholar]
52. Hu, L.; Zhang, L.; Shi, C.; Nie, L.; Guan, W.; Yang, C. Improving distantly-supervised relation extraction with joint label embedding. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), Hong Kong, China, 3–7 November 2019; Association for Computational Linguistics: Stroudsburg, PA, USA, 2019; pp. 3821–3829. [Google Scholar]
  53. Nayak, T.; Ng, H.T. Effective modeling of encoder-decoder architecture for joint entity and relation extraction. In Proceedings of the AAAI Conference on Artificial Intelligence, New York, NY, USA, 7–12 February 2020; AAAI Press: Palo Alto, CA, USA, 2020; Volume 34, pp. 8528–8535. [Google Scholar]
  54. Zhang, Y.; Qi, P.; Manning, C.D. Graph convolution over pruned dependency trees improves relation extraction. arXiv 2018, arXiv:1809.10185. [Google Scholar] [CrossRef]
  55. Guo, Z.; Zhang, Y.; Lu, W. Attention guided graph convolutional networks for relation extraction. arXiv 2019, arXiv:1906.07510. [Google Scholar]
  56. Wei, Q.; Ji, Z.; Si, Y.; Du, J.; Wang, J.; Tiryaki, F.; Wu, S.; Xu, H. Relation extraction from clinical narratives using pre-trained language models. In AMIA Annual Symposium Proceedings; American Medical Informatics Association: Bethesda, MD, USA, 2019; Volume 2019, p. 1236. [Google Scholar]
  57. Xu, B.; Li, S.; Zhang, Z.; Liao, T. BERT-PAGG: A Chinese relationship extraction model fusing PAGG and entity location information. PeerJ Comput. Sci. 2023, 9, e1470. [Google Scholar] [CrossRef] [PubMed]
  58. Zhou, Z.; Yu, X.; Magoua, J.J.; Cui, J.; Luan, H.; Lin, D. Integrating machine learning and a large language model to construct a domain knowledge graph for reducing the risk of fall-from-height accidents. Accid. Anal. Prev. 2025, 215, 108009. [Google Scholar] [CrossRef] [PubMed]
  59. Yu, H.; Fang, Q.; Fang, Z.; Xu, L.; Liu, J. Carbon footprints: Uncovering spatiotemporal dynamics of global container ship emissions during 2015–2021. Mar. Pollut. Bull. 2024, 209, 117165. [Google Scholar] [CrossRef]
  60. Huang, Y.; Zhang, Z.; Hu, H. Risk propagation mechanisms in railway systems under extreme weather: A knowledge graph-based unsupervised causation chain approach. Reliab. Eng. Syst. Saf. 2025, 260, 110976. [Google Scholar] [CrossRef]
  61. Xu, L.; Chen, N.; Chen, Z.; Zhang, C.; Yu, H. Spatiotemporal forecasting in earth system science: Methods, uncertainties, predictability and future directions. Earth Sci. Rev. 2021, 222, 103828. [Google Scholar] [CrossRef]
  62. Bag, S.; Sarkar, S.; Bose, I. Enhancing cybersecurity risk assessment using temporal knowledge graph-based explainable decision support system. Decis. Support Syst. 2025, 189, 114526. [Google Scholar] [CrossRef]
  63. Peng, X.; Jiang, H.; Chen, J.; Liu, M.; Chen, X. Research and Construction of Knowledge Map of Golden Pomfret Based on LA-CANER Model. J. Mar. Sci. Eng. 2025, 13, 400. [Google Scholar] [CrossRef]
  64. Yu, H.; Bai, X.; Liu, J. Ship Behavior Pattern Analysis Based on Graph Theory: A Case Study in Tianjin Port. J. Mar. Sci. Eng. 2023, 11, 2227. [Google Scholar] [CrossRef]
  65. Gan, L.; Chen, Q.; Zhang, D.; Zhang, X.; Zhang, L.; Liu, C.; Shu, Y. Construction of knowledge graph for Flag State Control (FSC) inspection for ships: A case study from China. J. Mar. Sci. Eng. 2022, 10, 1352. [Google Scholar] [CrossRef]
  66. Yu, H.; Jiang, C.; Fang, Q.; Wei, T.; Xu, L. Deep learning driven spatiotemporal prediction of global carbon emissions from container shipping. Transp. Res. Part D Transp. Environ. 2026, 151, 105169. [Google Scholar] [CrossRef]
  67. Li, S.; Xu, J.; Chen, X.; Zhang, Y.; Zheng, Y.; Postolache, O. Maritime Traffic Knowledge Discovery via Knowledge Graph Theory. J. Mar. Sci. Eng. 2024, 12, 2333. [Google Scholar] [CrossRef]
  68. Yu, H.; Wu, W.; Zhang, X.; Fang, Z.; Fu, X.; Xu, L.; Liu, J. Optimization-based global liquefied natural gas shipping network management for emission reduction. Ocean Eng. 2025, 321, 120366. [Google Scholar] [CrossRef]
  69. Wan, H.; Fu, S.; Zhang, M.; Xiao, Y. A Semantic Network Method for the Identification of Ship’s Illegal Behaviors Using Knowledge Graphs: A Case Study on Fake Ship License Plates. J. Mar. Sci. Eng. 2023, 11, 1906. [Google Scholar] [CrossRef]
  70. Yu, H.; Xiao, Y.; Chen, C.; Zhou, J.; Xu, L. Incorporating knowledge graph and multi-model stacking ensemble learning for prediction of fines for illegal fishing. Reg. Stud. Mar. Sci. 2025, 89, 104332. [Google Scholar] [CrossRef]
  71. Van Hage, W.R.; Malaisé, V.; Segers, R.; Hollink, L.; Schreiber, G. Design and use of the Simple Event Model (SEM). J. Web Semant. 2011, 9, 128–136. [Google Scholar] [CrossRef]
  72. Yu, H.; Chen, F. Quantitative analysis of the efficiency dynamics of global liquefied natural gas shipping under COVID-19. Digit. Transp. Saf. 2024, 3, 19–35. [Google Scholar] [CrossRef]
  73. Guo, S.; Yang, W.; Han, L.; Song, X.; Wang, G. A multi-layer soft lattice based model for Chinese clinical named entity recognition. BMC Med. Inform. Decis. Mak. 2022, 22, 201. [Google Scholar] [CrossRef]
  74. Li, J.; Sun, A.; Han, J.; Li, C. A survey on deep learning for named entity recognition. IEEE Trans. Knowl. Data Eng. 2020, 34, 50–70. [Google Scholar] [CrossRef]
75. Devlin, J.; Chang, M.W.; Lee, K.; Toutanova, K. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), Minneapolis, MN, USA, 2–7 June 2019; Association for Computational Linguistics: Stroudsburg, PA, USA, 2019; pp. 4171–4186. [Google Scholar] [CrossRef]
  76. Liu, Y.; Ott, M.; Goyal, N.; Du, J.; Joshi, M.; Chen, D.; Levy, O.; Lewis, M.; Zettlemoyer, L.; Stoyanov, V. RoBERTa: A robustly optimized BERT pretraining approach. arXiv 2019, arXiv:1907.11692. [Google Scholar] [CrossRef]
  77. Liu, N.F.; Lin, K.; Hewitt, J.; Paranjape, A.; Bevilacqua, M.; Petroni, F.; Liang, P. Lost in the middle: How language models use long contexts. Trans. Assoc. Comput. Linguist. 2024, 12, 157–173. [Google Scholar] [CrossRef]
  78. Liu, S.; Wang, F. Knowledge Graph of Maritime Collision Avoidance Rules in Chinese. In Proceedings of the 2019 11th International Conference on Intelligent Human-Machine Systems and Cybernetics (IHMSC), Hangzhou, China, 24–25 August 2019; IEEE: Piscataway, NJ, USA, 2019; Volume 1, pp. 169–172. [Google Scholar]
  79. Wei, H. Construction of an integrated knowledge graph for coal mine safety. Master’s Thesis, China University of Mining and Technology, Xuzhou, China, 2020. (In Chinese) [Google Scholar]
  80. Wu, J.; Jiang, F.; Yao, H.; Huang, M.; Ma, Q. Analysis of causal factors and risk prediction of inland vessel collision accidents based on text mining. J. Transp. Inf. Saf. 2018, 36, 8–18. (In Chinese) [Google Scholar]
  81. Liu, J.; Chen, X.; Liu, H.; Zhang, B.; Xu, L.; Liu, T.; Fu, Y. Construction of a vessel activity knowledge graph based on trajectory semantics. J. Geo-Inf. Sci. 2023, 25, 1252–1266. (In Chinese) [Google Scholar]
  82. Zhang, Q.; Wen, Y.; Zhou, C.; Long, H.; Han, D.; Zhang, F.; Xiao, C. Construction of knowledge graphs for maritime dangerous goods. Sustainability 2019, 11, 2849. [Google Scholar] [CrossRef]
  83. Sur, J.M.; Kim, D.J. Comprehensive risk estimation of maritime accident using fuzzy evaluation method—Focusing on fishing vessel accident in Korean waters. Asian J. Shipp. Logist. 2020, 36, 127–135. [Google Scholar] [CrossRef]
  84. Chen, J.; Liu, P.; Wang, S.; Zheng, N.; Guo, X. Prediction and interpretation of crash severity using machine learning based on imbalanced traffic crash data. J. Saf. Res. 2025, 93, 185–199. [Google Scholar] [CrossRef]
Figure 1. The flowchart of the methodology.
Figure 2. Named entity recognition model.
Figure 3. LeBERT model.
Figure 4. Feature dictionary adapter.
Figure 5. Relationship extraction model framework.
Figure 6. K-BERT-based entity recognition.
Figure 7. Schematic diagram of the accident severity classification model.
Figure 8. Relationship extraction analysis.
Figure 9. Correlation analysis between RE performance metrics and average confidence.
Figure 10. Distribution of entities in the ship collision prevention and control knowledge graph.
Figure 11. Relationship distribution of ship collision prevention and control knowledge graph.
Figure 12. Embedding nodes clustering for the constructed knowledge graph.
Figure 13. Shortest path query for collision cases.
Figure 14. Ship dynamic search.
Figure 15. Ship activity search.
Figure 16. Training loss trend of the LSTM-RNN model.
Figure 17. Trend in training accuracy of the LSTM-RNN model.
Figure 18. Confusion matrix of the LSTM-RNN model.
Figure 19. Confusion matrix of the MLP model.
Figure 20. Confusion matrix of the XGBoost model.
Figure 21. Confusion matrix of the Random Forest model.
Figure 22. ROC curve.
Figure 23. PR curve.
Table 1. Types of ship collision entities.

| Types | Name |
|---|---|
| Entity | Accident; Vessel; Vessel dynamics; Personnel; Organization; Time; Location; Environment; Equipment; Cause; Consequence; Laws and Regulations; Recommendation |
Table 2. Ship collision relationship types.

| Type | Relationship Names |
|---|---|
| Attribute relation | of_VesselFeature; of_PersonFeature |
| Conceptual hierarchical relation | discover; employ; manage; hold; equip; dispatch; occur; rescue; use; encounter; notify; investigate; report; belongs_to; manipulate_of_RealTimeDynamics; manipulate_of_EngineStatus; manipulate_of_NavigationStatus |
| Causal relation | of_Violation; of_Consequence; of_Cause; produces_of_Cause; of_Recommendation |
| Spatiotemporal relation | on_of_RealTimeDynamics; on_of_NavigationStatus; on_of_EngineStatus; at_of_RealTimeDynamics; at_of_NavigationStatus; at_of_EngineStatus; on_Location; go_Location; at_Time; leave_Location; at_in_Environment; at_on_Location; in_Environment; to_Time; time_to_Time |
Table 3. Entity attribute types.

| Types | Attribute | Data Type |
|---|---|---|
| Vessel Feature | MMSI | Integer |
| | IMO | Integer |
| | Vessel Name | String |
| | Vessel Dimensions | Integer |
| | Vessel Type | String |
| Personnel Feature | Name | String |
| | Age | String |
| | Education Level | String |
| | Date of Birth | String |
Table 4. Examples of domain-specific words for maritime traffic accidents.

| Type | Examples |
|---|---|
| Accident causes | Improper evasive manoeuvre; failure to maintain a safe speed |
| Environmental factors | South wind turning to southwest; showers turning to cloudy then overcast |
| Accident recommendations | Enhance the practical skills and professional knowledge of operators |
| Onboard equipment | Very High Frequency (VHF) radio equipment; search-and-rescue radar transponder |
Table 5. Case study of the five-step prompt strategy for data conversion.

| Component | Description |
|---|---|
| Task Requirement | Vessel feature fields are extracted from semi-structured data in PDF files and converted into a structured tabular format for subsequent analysis. |
| Data Description | Input data consist of image-based vessel information, including Vessel Name, Former Name, Vessel Type, Port of Registry, IMO Number, Call Sign, Gross Tonnage, Deadweight Tonnage, Net Tonnage, Overall Length, Molded Depth, Full Load Draft, Molded Breadth, and Main Engine Power. |
| Sample Data | 1. Vessel “OMEGA”: Vessel Name: OMEGA; Former Name: DIMITRISS; Vessel Type: Bulk Carrier; Port of Registry: MAJURO; IMO Number: 9279836; Call Sign: V7A4604; Gross Tonnage: 28,171 tons; Deadweight Tonnage: 48,821 tons; Net Tonnage: 16,055 tons; Overall Length: 189.96 m; Molded Depth: 16.50 m; Full Load Draft: 11.623 m; Molded Breadth: 32.20 m; Main Engine Power: 7700 kW |
| Scenario Information | Raw data are stored as semi-structured text within PDF reports. Screenshots or text segments are converted into structured tables to support knowledge graph construction. |
| Standard Output | Standardized “entity-relation-entity” CSV files or database-compatible tabular data are generated. |
Table 6. Data extraction results using the five-step prompt strategy.

| Vessel | Relationship | Vessel Feature Value | Vessel Feature |
|---|---|---|---|
| vessel “OMEGA” | of_VesselFeature | OMEGA | Vessel Name |
| vessel “OMEGA” | of_VesselFeature | DIMITRISS | Former Name |
| vessel “OMEGA” | of_VesselFeature | Bulk Carrier | Vessel Type |
| vessel “OMEGA” | of_VesselFeature | MAJURO | Port of Registry |
| vessel “OMEGA” | of_VesselFeature | 9279836 | IMO Number |
| vessel “OMEGA” | of_VesselFeature | V7A4604 | Call Sign |
| vessel “OMEGA” | of_VesselFeature | 28,171 tons | Gross Tonnage |
| vessel “OMEGA” | of_VesselFeature | 48,821 tons | Deadweight Tonnage |
| vessel “OMEGA” | of_VesselFeature | 16,055 tons | Net Tonnage |
| vessel “OMEGA” | of_VesselFeature | 189.96 m | Overall Length |
| vessel “OMEGA” | of_VesselFeature | 16.50 m | Molded Depth |
| vessel “OMEGA” | of_VesselFeature | 11.623 m | Full Load Draft |
| vessel “OMEGA” | of_VesselFeature | 32.20 m | Molded Breadth |
| vessel “OMEGA” | of_VesselFeature | 7700 kW | Main Engine Power |
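The “Standard Output” step in Table 5 calls for entity–relation–entity rows, such as those in Table 6, serialized to CSV. A minimal sketch using Python's standard `csv` module; the `triples_to_csv` helper and the subset of rows are illustrative, not the paper's actual implementation:

```python
import csv
import io

# A few of the extracted "entity-relation-entity" rows from Table 6
# (subset for illustration).
triples = [
    ('vessel "OMEGA"', "of_VesselFeature", "OMEGA", "Vessel Name"),
    ('vessel "OMEGA"', "of_VesselFeature", "9279836", "IMO Number"),
    ('vessel "OMEGA"', "of_VesselFeature", "189.96 m", "Overall Length"),
]

def triples_to_csv(rows):
    """Serialize (vessel, relationship, value, feature) rows to CSV text."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["Vessel", "Relationship", "Vessel Feature Value", "Vessel Feature"])
    writer.writerows(rows)
    return buf.getvalue()

csv_text = triples_to_csv(triples)
print(csv_text)
```

Note that `csv.writer` automatically quotes fields containing commas or quotation marks, so values such as `28,171 tons` survive a round trip into a graph database loader.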
Table 7. Entity recognition model structure parameter configuration.

| Module | Parameter | Configuration |
|---|---|---|
| LeBERT | Pre-trained language model | Chinese-RoBERTa-wwm-ext |
| | Embedding dimension | 768 |
| | Transformer layers | 12 |
| | Attention heads | 12 |
| BiLSTM | Input size | 768 |
| | Hidden size | 128 |
| | Number of LSTM layers | 1 |
| CRF | Input size (emission) | 256 |
| | Number of output labels | 15 |
Table 8. Hyperparameter configuration for entity recognition.

| Hyperparameter | Configuration |
|---|---|
| Maximum sequence length (max_seq_len) | 512 |
| Training batch size (train_batch_size) | 32 |
| Validation batch size (dev_batch_size) | 16 |
| BERT learning rate (bert_learning_rate) | 3 × 10⁻⁵ |
| CRF learning rate (crf_learning_rate) | 3 × 10⁻³ |
| Dropout rate | 0.01 |
| Optimizer | Adam |
| Save step (save_step) | 200 |
| Number of training epochs (epochs) | 100 |
Table 9. Comparison with other models.

| Model | Precision | Recall | F1-Score |
|---|---|---|---|
| CRF | 68.1 | 70.1 | 69.4 |
| BiLSTM-CRF | 69.7 | 63.0 | 66.1 |
| BERT-CRF | 72.8 | 72.1 | 71.9 |
| BERT-BiLSTM-CRF | 80.7 | 79.4 | 80.1 |
| Chinese-BERT-WWM-BiLSTM-CRF | 85.2 | 85.8 | 85.5 |
| RoBERTa-BiLSTM-CRF | 85.7 | 85.6 | 85.6 |
| LeBERT-BiLSTM-CRF | 86.3 | 87.5 | 86.8 |
Table 10. Model performance comparison for different types of entity recognition.

| Entity Type | CRF | BiLSTM-CRF | BERT-CRF | BERT-BiLSTM-CRF | Chinese-BERT-WWM-BiLSTM-CRF | RoBERTa-BiLSTM-CRF | LeBERT-BiLSTM-CRF |
|---|---|---|---|---|---|---|---|
| Vessel | 89.7 | 86.1 | 87.8 | 90.6 | 95.1 | 96.5 | 96.5 |
| Vessel feature | 66.1 | 44.1 | 52.4 | 61.5 | 74.3 | 84.3 | 82.7 |
| Personnel | 87.8 | 78.9 | 81.7 | 85.3 | 88.1 | 92.0 | 93.0 |
| Personnel feature | 86.9 | 54.7 | 81.4 | 87.3 | 91.2 | 88.9 | 90.1 |
| Time | 88.4 | 74.3 | 78.9 | 84.6 | 89.0 | 84.8 | 88.6 |
| Accident | 68.2 | 61.2 | 58.7 | 66.4 | 76.0 | 70.4 | 73.5 |
| Location | 84.1 | 37.4 | 67.7 | 75.3 | 79.1 | 89.2 | 88.7 |
| Agency | 50.4 | 46.3 | 55.8 | 67.3 | 76.3 | 85.1 | 84.0 |
| Environment | 19.3 | 24.3 | 60.2 | 68.5 | 72.1 | 81.8 | 82.4 |
| Equipment | 77.6 | 71.9 | 69.1 | 71.3 | 71.6 | 73.6 | 75.7 |
| Vessel dynamics | 84.2 | 78.5 | 69.0 | 74.3 | 86.3 | 76.9 | 78.4 |
| Laws and regulations | 78.1 | 60.0 | 60.8 | 62.4 | 88.3 | 97.5 | 97.5 |
| Cause | 64.1 | 30.5 | 45.6 | 46.3 | 54.3 | 72.0 | 74.4 |
| Event Consequence | 64.7 | 39.2 | 56.1 | 61.5 | 70.0 | 75.0 | 79.6 |
| Recommendation | 37.1 | 27.0 | 33.5 | 41.3 | 44.0 | 54.5 | 57.5 |
Table 11. Ablation experiments for entity recognition based on domain vocabulary enhancement.

| Model | Precision | Recall | F1-Score |
|---|---|---|---|
| BERT-BiLSTM-CRF | 80.7 | 79.4 | 80.1 |
| RoBERTa-BiLSTM-CRF | 85.7 | 85.6 | 85.6 |
| LeBERT-BiLSTM-CRF (general-domain vocabulary information) | 86.0 | 86.7 | 86.3 |
| LeBERT-BiLSTM-CRF (domain vocabulary information) | 86.3 | 87.5 | 86.8 |
Table 12. The 5-fold cross-validation performance results of the LeBERT-BiLSTM-CRF model.

| Fold | Precision | Recall | F1-Score |
|---|---|---|---|
| Fold 1 | 0.8581 | 0.8649 | 0.8615 |
| Fold 2 | 0.8580 | 0.8670 | 0.8625 |
| Fold 3 | 0.8702 | 0.8795 | 0.8748 |
| Fold 4 | 0.8708 | 0.8818 | 0.8763 |
| Fold 5 | 0.8571 | 0.8632 | 0.8601 |
| Mean ± SD | 0.8628 ± 0.0070 | 0.8713 ± 0.0087 | 0.8670 ± 0.0078 |
Table 13. Performance of LeBERT-BiLSTM-CRF on different entity types using 5-fold cross-validation.

| Entity Type | Precision | Recall | F1-Score |
|---|---|---|---|
| Vessel | 0.9688 ± 0.0058 | 0.9668 ± 0.0032 | 0.9678 ± 0.0039 |
| Vessel feature | 0.7964 ± 0.0340 | 0.8058 ± 0.0618 | 0.7992 ± 0.0311 |
| Personnel | 0.8873 ± 0.0266 | 0.9039 ± 0.0240 | 0.8953 ± 0.0200 |
| Personnel feature | 0.9198 ± 0.0256 | 0.9299 ± 0.0213 | 0.9247 ± 0.0220 |
| Time | 0.8397 ± 0.0231 | 0.8429 ± 0.0198 | 0.8413 ± 0.0208 |
| Accident | 0.7633 ± 0.0413 | 0.7660 ± 0.0504 | 0.7641 ± 0.0408 |
| Location | 0.8757 ± 0.0140 | 0.8921 ± 0.0132 | 0.8838 ± 0.0120 |
| Agency | 0.7936 ± 0.0423 | 0.8198 ± 0.0301 | 0.8064 ± 0.0355 |
| Environment | 0.8666 ± 0.0586 | 0.8411 ± 0.0614 | 0.8531 ± 0.0563 |
| Equipment | 0.7231 ± 0.0542 | 0.7536 ± 0.0458 | 0.7369 ± 0.0416 |
| Vessel dynamics | 0.8000 ± 0.0300 | 0.8197 ± 0.0221 | 0.8095 ± 0.0228 |
| Laws and regulations | 0.9597 ± 0.0263 | 0.9731 ± 0.0146 | 0.9662 ± 0.0179 |
| Cause | 0.7334 ± 0.0595 | 0.7378 ± 0.0492 | 0.7354 ± 0.0540 |
| Event Consequence | 0.7762 ± 0.0500 | 0.7523 ± 0.0692 | 0.7603 ± 0.0328 |
| Recommendation | 0.6162 ± 0.0697 | 0.5602 ± 0.0619 | 0.5833 ± 0.0444 |
Table 14. Model architecture configuration for relation extraction.

| Module | Parameter | Configuration |
|---|---|---|
| BERT | Pre-trained language model | Chinese-RoBERTa-wwm-ext |
| | Embedding dimension | 768 |
| | Transformer layers | 12 |
| | Attention heads | 12 |
| | Output dimension | 768 |
| MLP | Input units | 2304 |
| | Activation function | ReLU |
| | Number of hidden layers | 1 |
| | Hidden units | 128 |
| | Output units | 38 |
Table 15. Hyperparameter configuration for relationship extraction.

| Parameter | Setting |
|---|---|
| Maximum Sequence Length (max_seq_len) | 512 |
| Maximum Entity Length (max_en_len) | 30 |
| Training Batch Size (train_batch_size) | 32 |
| Validation Batch Size (dev_batch_size) | 16 |
| Learning Rate (bert_learning_rate) | 3 × 10⁻⁵ |
| Optimizer | Adam |
| Save Step (save_step) | 200 |
| Number of Training Epochs (epochs) | 100 |
Table 16. Model architecture configuration for K-BERT-based entity recognition.

| Module | Parameter | Configuration |
|---|---|---|
| K-BERT | Pre-trained language model | Chinese-RoBERTa-wwm-ext |
| | Embedding dimension | 768 |
| | Transformer layers | 12 |
| | Attention heads | 12 |
| BiLSTM | Input size | 768 |
| | Hidden size | 128 |
| | Number of LSTM layers | 1 |
| CRF | Input feature size (from BiLSTM) | 256 |
| | Number of output labels | 15 |
Table 17. Hyperparameter settings for the K-BERT-based entity recognition model.

| Parameter | Setting |
|---|---|
| Maximum Sequence Length (max_seq_len) | 512 |
| Training Batch Size (train_batch_size) | 16 |
| Validation Batch Size (dev_batch_size) | 8 |
| BERT Learning Rate (bert_learning_rate) | 1 × 10⁻⁵ |
| CRF Learning Rate (crf_learning_rate) | 1 × 10⁻³ |
| Dropout Rate | 0.01 |
| Optimizer | Adam |
| Save Step (save_step) | 50 |
| Number of Training Epochs (epochs) | 100 |
Table 18. Comparison analysis between K-BERT-BiLSTM-CRF and other models.

| Model | Precision | Recall | F1-Score |
|---|---|---|---|
| BERT-BiLSTM-CRF | 78.7 | 77.4 | 78.0 |
| RoBERTa-BiLSTM-CRF | 81.1 | 81.6 | 81.0 |
| LeBERT-BiLSTM-CRF (domain vocabulary information) | 83.5 | 83.6 | 83.5 |
| K-BERT-BiLSTM-CRF (domain knowledge triplets) | 84.5 | 84.4 | 84.7 |
Table 19. Comparison of knowledge graphs in waterway transportation.

| Knowledge Graph | Entities | Relations | Entity Types | Relation Types | Data Types |
|---|---|---|---|---|---|
| Reference [78] | 395,478 | | 6 | 3 | Structured |
| Reference [79] | 416 | 532 | 5 | 5 | Semi-structured |
| Reference [80] | 910 | 1920 | 6 | 14 | Semi-structured |
| Reference [81] | | 3934 | 15 | 13 | Unstructured |
| Reference [82] | | | 6 | 7 | Unstructured |
| Reference [83] | | | 6 | 7 | Unstructured |
| Ship collision prevention and control knowledge graph (this paper) | 35,589 | 321,948 | 15 | 38 | Unstructured/Semi-structured |
Table 20. Hyperparameters for the accident severity classification.

| Hyperparameter | Optimal Value | Description |
|---|---|---|
| Batch size | 32 | Number of training samples used for each stochastic gradient descent (SGD) update |
| Loss function | Categorical cross-entropy | Also known as multiclass log-loss; suitable for multiclass classification targets |
| Optimizer | SGD | Stochastic gradient descent |
| Learning rate | 0.01 | Learning rate used by the SGD optimizer |
| Momentum | 0.80 | Momentum used by the SGD optimizer |
| Weight decay | 0.9 | Learning rate decay applied at each update |
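The optimizer settings in Table 20 correspond to the classic SGD-with-momentum update rule. The sketch below applies it to a single scalar weight with the listed learning rate (0.01) and momentum (0.80); how the paper's implementation applies the decay term is not detailed here, so it is omitted:

```python
def sgd_momentum_step(w, grad, velocity, lr=0.01, momentum=0.8):
    """One SGD-with-momentum update (classic formulation):
    v <- momentum * v - lr * grad;  w <- w + v."""
    velocity = momentum * velocity - lr * grad
    return w + velocity, velocity

# Two updates of a single weight with the settings from Table 20.
w, v = 1.0, 0.0
w, v = sgd_momentum_step(w, 0.5, v)   # v ≈ -0.005, w ≈ 0.995
w, v = sgd_momentum_step(w, 0.5, v)   # v ≈ 0.8*(-0.005) - 0.005 = -0.009, w ≈ 0.986
print(w, v)
```

The momentum term accumulates a running direction across updates, which is why the second step moves the weight almost twice as far as the first for the same gradient.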
Table 21. Comparison of prediction performance.

| Indicator | LSTM-RNN | XGBoost | RF | MLP |
|---|---|---|---|---|
| Accuracy | 92.31% | 89.42% | 80.77% | 77.88% |
| Precision | 91.84% | 89.12% | 82.40% | 79.00% |
| Recall | 91.70% | 89.65% | 79.61% | 81.84% |
| F1-score | 91.73% | 89.29% | 78.48% | 78.37% |
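The indicators in Table 21 are the standard macro-averaged metrics computable from each model's confusion matrix (Figures 18–21). A minimal sketch; the 3-class matrix below is a small hypothetical example, not data from the paper:

```python
def macro_metrics(cm):
    """Macro-averaged precision/recall/F1 from a square confusion matrix,
    where cm[i][j] = count of samples with true class i predicted as class j."""
    n = len(cm)
    precisions, recalls, f1s = [], [], []
    for k in range(n):
        tp = cm[k][k]
        pred_k = sum(cm[i][k] for i in range(n))   # column sum: predicted as k
        true_k = sum(cm[k])                        # row sum: actually class k
        p = tp / pred_k if pred_k else 0.0
        r = tp / true_k if true_k else 0.0
        f = 2 * p * r / (p + r) if p + r else 0.0
        precisions.append(p); recalls.append(r); f1s.append(f)
    return sum(precisions) / n, sum(recalls) / n, sum(f1s) / n

# Hypothetical 3-class severity confusion matrix (counts are illustrative).
cm = [[8, 2, 0],
      [1, 9, 0],
      [0, 1, 9]]
p, r, f = macro_metrics(cm)
print(p, r, f)
```

Macro averaging weights every class equally, which is why it is the usual choice for imbalanced severity data.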
Table 22. Comparison of LSTM-RNN model prediction performance with different input features.

| Input Features | Accuracy | Precision | Recall | F1-Score |
|---|---|---|---|---|
| Node characteristics | 83.65% | 84.54% | 84.79% | 84.63% |
| Topological features | 86.54% | 85.77% | 86.99% | 86.21% |
| Node characteristics + Betweenness centrality | 87.50% | 86.97% | 89.68% | 87.66% |
| Node characteristics + Degree centrality | 89.42% | 89.96% | 89.99% | 89.56% |
| Node characteristics + Closeness centrality | 90.38% | 90.00% | 89.74% | 89.65% |
| Node characteristics + Topological features | 92.31% | 91.84% | 91.70% | 91.73% |
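The topological inputs in Table 22 are standard graph centralities. The sketch below computes degree and closeness centrality with plain breadth-first search on a toy accident graph; the node names are illustrative, and in practice a library such as NetworkX (or a graph database query) would compute these, including betweenness centrality:

```python
from collections import deque

# Toy undirected accident graph as adjacency lists (node names illustrative).
graph = {
    "Accident": ["VesselA", "VesselB", "Location"],
    "VesselA":  ["Accident", "Cause"],
    "VesselB":  ["Accident"],
    "Location": ["Accident"],
    "Cause":    ["VesselA"],
}

def degree_centrality(g, node):
    """Degree normalized by the maximum possible degree (n - 1)."""
    return len(g[node]) / (len(g) - 1)

def closeness_centrality(g, node):
    """(n - 1) divided by the sum of BFS shortest-path distances to all nodes."""
    dist = {node: 0}
    queue = deque([node])
    while queue:
        u = queue.popleft()
        for v in g[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    total = sum(dist.values())
    return (len(g) - 1) / total if total else 0.0

dc = degree_centrality(graph, "Accident")    # 3 / 4 = 0.75
cc = closeness_centrality(graph, "Accident") # 4 / (1 + 1 + 1 + 2) = 0.8
```

Feature vectors built from such per-node centralities, concatenated with node characteristics, form the LSTM-RNN input described above.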
Table 23. Qualitative analysis of representative misclassified cases based on node characteristics and topological features.

Case 74 (actual severity: relatively serious accident (2); predicted: serious accident (3)). Feature analysis: the extracted graph exhibits prominent node characteristics (a large number of involved entity nodes) and extremely high topological features (specifically, betweenness and degree centrality), implying many transitional "bridge" nodes and dense connections, and hence a highly complex overall graph structure. Error mechanism (complexity-induced overestimation): the accident report delineated a convoluted event sequence, generating a knowledge graph with an inflated node scale and extreme topological density; relying heavily on these dual complexity indicators, the model misinterpreted structural intricacy as a sign of higher severity.

Case 91 (actual severity: serious accident (3); predicted: relatively serious accident (2)). Feature analysis: the graph shows restricted node characteristics (a small number of entity nodes) and low topological features (both degree and closeness centrality); the resulting structure is sparse, with limited connectivity and loose global proximity between key nodes. Error mechanism (sparsity-induced underestimation): although the actual consequence was severe, the textual description was concise or involved few entity types, generating a sparse knowledge graph; the model failed to infer the severe outcome from this structurally simple topology (low connectivity and small node scale), leading to underestimation.
Yu, H.; Xu, X.; Guo, Z.; Wei, T.; Xu, L. Semantic Modeling of Ship Collision Reports: Ontology Design, Knowledge Extraction, and Severity Classification. J. Mar. Sci. Eng. 2026, 14, 448. https://doi.org/10.3390/jmse14050448
