Structure-Aware and Format-Enhanced Transformer for Accident Report Modeling
Abstract
1. Introduction
- (1) Proposing SAFE-Transformer as the first model to inject section numbers and heading semantics as explicit structural priors into a Transformer;
- (2) Validating the performance of SAFE-Transformer using a national-level accident dataset to demonstrate its practical effectiveness;
- (3) Showing how SAFE-Transformer alleviates key-information loss, context fragmentation, and long-tail effects through evaluation on a multi-label accident-intelligence benchmark.
2. Literature Review
2.1. Accident Investigation Report Text Analysis
2.2. Long Text Modeling Based on Transformer Architecture
3. Overview of Methodology
4. SAFE-Transformer Model
4.1. Model Architecture
4.2. Structural Parsing Rule Set
4.3. Structural Anchor Markers
- The hierarchy stack is initialized as empty and reset upon processing new documents;
- if the current item's hierarchy level is not deeper than the level at the top of the stack, the stack continuously pops elements until the remaining top element is a strictly shallower ancestor level, ensuring legal hierarchical nesting (e.g., enforcing sequential patterns like “1. → (1) → ① → 1)”);
- the current level is then pushed onto the stack, with its parent relationship recorded as the element remaining at the top of the stack after popping (see the sketch below).
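The stack update described above can be summarized in a short sketch (illustrative only; the depth ordering of the numbering schemes and all identifiers are assumptions rather than the authors' implementation):

```python
# Illustrative reconstruction of the hierarchy-stack update; the assumed depth order is
# "1." -> "(1)" -> "①" -> "1)", following the numbering-scheme codes defined in Section 4.2.
DEPTH = {"ar": 1, "ar_wbr": 2, "ar_cir": 3, "ar_rbr": 4}  # scheme code -> nesting depth

def update_stack(stack, code, marker):
    """Push a newly parsed heading onto the hierarchy stack.

    stack: list of (code, marker) pairs, shallowest level at the bottom.
    Returns the parent (code, marker) of the new heading, or None for a top-level item.
    """
    depth = DEPTH[code]
    # Pop every element at the same depth or deeper, so the element left on top
    # (if any) is a legal ancestor of the new heading.
    while stack and DEPTH[stack[-1][0]] >= depth:
        stack.pop()
    parent = stack[-1] if stack else None
    stack.append((code, marker))
    return parent

stack = []  # reset to empty for each new document
for code, marker in [("ar", "1."), ("ar_wbr", "(1)"), ("ar_cir", "①"), ("ar_wbr", "(2)")]:
    print(marker, "parent:", update_stack(stack, code, marker))
# 1. -> None, (1) -> ('ar', '1.'), ① -> ('ar_wbr', '(1)'), (2) -> ('ar', '1.')
```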
4.4. Hierarchical Format and Positional Encoding
- Step 1: Symbolic Format Embedding Definition and Fusion
- Step 2: Symbolic Format-Enhanced Hierarchical Positional Encoding
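As a rough illustration of these two steps, the sketch below fuses a symbolic-format embedding into the token embedding through a learned gate and then adds a hierarchy-depth positional term; the module name, gate formulation, and depth embedding are assumptions introduced for illustration and may differ from the paper's exact design.

```python
import torch
import torch.nn as nn

class FormatGatedFusion(nn.Module):
    """Sketch of symbolic-format embedding fusion plus hierarchical positional encoding."""

    def __init__(self, hidden: int, num_formats: int, max_depth: int = 8):
        super().__init__()
        self.format_emb = nn.Embedding(num_formats, hidden)  # one vector per numbering scheme
        self.depth_emb = nn.Embedding(max_depth, hidden)      # positional term by hierarchy depth
        self.gate = nn.Linear(2 * hidden, hidden)
        self.norm = nn.LayerNorm(hidden)

    def forward(self, token_emb, format_ids, depth_ids):
        # token_emb: (batch, seq, hidden); format_ids, depth_ids: (batch, seq) integer indices
        f = self.format_emb(format_ids)
        g = torch.sigmoid(self.gate(torch.cat([token_emb, f], dim=-1)))  # learned gate in [0, 1]
        fused = token_emb + g * f                                        # gated injection of format cues
        return self.norm(fused + self.depth_emb(depth_ids))              # add hierarchy-aware position
```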
4.5. Attention Mask
4.6. Encoder
4.7. Classifier
5. Experiments
5.1. Dataset
5.2. Baseline Models
5.3. Parameter Settings
5.4. Model Fine-Tuning
- Stage 1 (first 30% of training steps): Freeze the base Transformer layers and only train the symbolic-aware positional encoder and the gated fusion module. The learning rate is reduced to 5 × 10⁻⁶ to stabilize hierarchical feature extraction.
- Stage 2 (remaining 70% of steps): Unfreeze all layers for joint optimization. The AdamW optimizer is re-initialized with a dynamic learning rate, decaying from a peak of 2 × 10⁻⁵ to 1 × 10⁻⁶. Gradient sparsity enhancement is applied to the attention mask generator to prevent local optima in hierarchical stack updates.
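A minimal sketch of this two-stage schedule is given below. It assumes a PyTorch model whose base Transformer layers are reachable as `model.encoder` (an attribute name chosen for illustration), approximates the dynamic learning rate with a linear decay from 2 × 10⁻⁵ to 1 × 10⁻⁶, and omits the gradient sparsity enhancement applied to the attention mask generator.

```python
import torch
from torch.optim import AdamW
from torch.optim.lr_scheduler import LambdaLR

def two_stage_finetune(model, batches, total_steps, loss_fn):
    """Stage 1 trains only the structural modules at 5e-6; Stage 2 unfreezes all layers
    and decays the rate from 2e-5 to 1e-6 over the remaining 70% of steps."""
    stage1_steps = int(0.3 * total_steps)

    # Stage 1: freeze the base Transformer layers (assumed to live in model.encoder).
    for p in model.encoder.parameters():
        p.requires_grad = False
    optimizer = AdamW((p for p in model.parameters() if p.requires_grad), lr=5e-6)
    scheduler = None

    for step, (inputs, labels) in enumerate(batches):
        if step == total_steps:
            break
        if step == stage1_steps:
            # Stage 2: unfreeze everything and re-initialize AdamW with linear decay
            # from the 2e-5 peak to the 1e-6 floor.
            for p in model.encoder.parameters():
                p.requires_grad = True
            optimizer = AdamW(model.parameters(), lr=2e-5)
            remaining = total_steps - stage1_steps
            scheduler = LambdaLR(optimizer, lambda s: max(1e-6 / 2e-5, 1.0 - s / remaining))

        optimizer.zero_grad()
        loss = loss_fn(model(inputs), labels)
        loss.backward()
        optimizer.step()
        if scheduler is not None:
            scheduler.step()
```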
6. Results
6.1. Overall Performance Analysis
6.2. Ablation Study
7. Discussion
8. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
Numbering Scheme | Code | Regular Expression | Examples |
---|---|---|---|
Arabic numerals | ar | ^\d+[.．]?$ | 1. 2. |
Uppercase letters | alpha_s | ^[A-Z][.．]?$ | A. B. |
Lowercase letters | alpha_l | ^[a-z][.．]?$ | a. c. |
Arabic numerals with right parenthesis | ar_rbr | ^\d+[)）]$ | 1) 2) |
Arabic numerals in full parentheses | ar_wbr | ^[(（]\d+[)）]$ | (1) (2) |
Circled numbers | ar_cir | [\u2460-\u2473] | ① ⑮ |
Roman numerals | rom | ^[IVXLC]+[.．]?$ | IV. XII. |
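The sketch below shows how such a rule set can classify heading prefixes. It uses ASCII-simplified patterns (the rule set above also covers full-width punctuation), and the matching order, in which Roman numerals are tested before single uppercase letters so that “I.” is not misread, is an assumption rather than the paper's specification.

```python
import re

# ASCII-simplified versions of the patterns in the table above.
NUMBERING_PATTERNS = [
    ("ar_cir",  re.compile(r"^[\u2460-\u2473]$")),  # ① … ⑮
    ("ar_wbr",  re.compile(r"^\(\d+\)$")),          # (1) (2)
    ("ar_rbr",  re.compile(r"^\d+\)$")),            # 1) 2)
    ("ar",      re.compile(r"^\d+\.?$")),           # 1. 2.
    ("rom",     re.compile(r"^[IVXLC]+\.?$")),      # IV. XII. (tested before alpha_s)
    ("alpha_s", re.compile(r"^[A-Z]\.?$")),         # A. B.
    ("alpha_l", re.compile(r"^[a-z]\.?$")),         # a. c.
]

def classify_prefix(token: str):
    """Return the numbering-scheme code of a heading prefix, or None for body text."""
    for code, pattern in NUMBERING_PATTERNS:
        if pattern.match(token):
            return code
    return None

for tok in ["1.", "(3)", "①", "IV.", "b.", "word"]:
    print(tok, "->", classify_prefix(tok))  # ar, ar_wbr, ar_cir, rom, alpha_l, None
```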
Code | Label Name | Description |
---|---|---|
L1 | Accident Profile | Summarizes basic accident information, including casualty statistics, operational context, causative agents, and accident classification. |
L2 | Emergency Response | Describes the immediate on-site handling of the incident, including rescue actions and emergency response measures. |
L3 | Accident Liability | Indicates the attribution of responsibility to individuals or organizations involved in the incident. |
L4 | Management Deficiencies | Refers to shortcomings in safety management systems, such as inadequate supervision, training, or procedural enforcement. |
L5 | Actions and Prevention | Covers both corrective measures implemented post-incident and proactive strategies to prevent similar future occurrences. |
L6 | Legal Provisions | Cites specific laws, regulations, or legal clauses used as the basis for accident handling and accountability. |
L7 | Unsafe Acts | Captures human-related safety violations, such as unauthorized operations, procedural breaches, or negligence. |
L8 | Unsafe Conditions | Refers to hazardous physical states of tools, equipment, or materials that contributed to the accident. |
L9 | Unsafe Environment | Describes environmental risks present at the worksite, such as poor weather, unstable terrain, or inadequate lighting. |
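Because a report section can carry several of the labels L1–L9 at once, the task is multi-label. The following is a minimal sketch of such a classification head, assuming a pooled document representation with sigmoid outputs and binary cross-entropy training; it is not necessarily the authors' exact classifier.

```python
import torch
import torch.nn as nn

NUM_LABELS = 9  # L1–L9 above; labels are not mutually exclusive

class MultiLabelHead(nn.Module):
    """Sigmoid/BCE multi-label head over a pooled document representation."""

    def __init__(self, hidden: int, num_labels: int = NUM_LABELS):
        super().__init__()
        self.dropout = nn.Dropout(0.1)
        self.proj = nn.Linear(hidden, num_labels)

    def forward(self, doc_repr):                   # doc_repr: (batch, hidden)
        return self.proj(self.dropout(doc_repr))   # raw logits, one per label

head = MultiLabelHead(hidden=768)
logits = head(torch.randn(4, 768))
targets = torch.randint(0, 2, (4, NUM_LABELS)).float()
loss = nn.BCEWithLogitsLoss()(logits, targets)     # independent binary decision per label
preds = (torch.sigmoid(logits) > 0.5).int()
```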
Model | Attention Mechanism | Key Configuration |
---|---|---|
RoBERTa | Standard multi-head self-attention | 12–16 heads, head dimension 64, Dropout: 0.1 |
Longformer | Sliding window + global attention | Window size 256–512, one global token, no dilation window |
BigBird | Random + window + global attention | Random connections r = 1, window size w = 3, global tokens g = 1, block size 2 |
ERNIE-SPARSE | Hierarchical sparse attention | Intra-window sparsity, 2–8 global representative tokens, regularization coefficient α = 0.5–10 |
Model | Micro-AUC | Macro-AUC | Micro-F1 | Macro-F1 | P@5 |
---|---|---|---|---|---|
RoBERTa | 68.74 | 65.21 | 64.79 | 62.15 | 67.42 |
Longformer | 69.35 | 66.83 | 66.52 | 64.78 | 71.15 |
BigBird | 71.89 | 68.92 | 70.18 | 67.33 | 74.19 |
ERNIE-SPARSE | 70.86 | 67.98 | 70.45 | 67.98 | 75.27 |
SAFE-Transformer | 73.45 | 70.12 | 71.23 | 68.73 | 76.33 |
Model | L = [0, 512] | L = [512, 1024] | L = [1024, 2048] | L = [2048, 4096] |
---|---|---|---|---|
RoBERTa | 73.13 | 68.41 | 66.75 | 59.12 |
Longformer | 69.43 | 70.83 | 71.32 | 69.16 |
BigBird | 69.15 | 73.82 | 70.22 | 71.25 |
ERNIE-SPARSE | 70.36 | 75.03 | 71.29 | 70.25 |
SAFE-Transformer | 71.48 | 75.87 | 72.21 | 71.35 |
Model Variant | Micro-AUC | Macro-AUC | Micro-F1 | Macro-F1 | P@5 |
---|---|---|---|---|---|
SAFE-Transformer | 73.45 | 70.12 | 70.23 | 68.82 | 76.33 |
SAFE w/o Struct | 69.82 | 67.05 | 67.12 | 65.74 | 71.05 |
SAFE w/o SymEnc | 71.28 | 68.93 | 68.35 | 66.82 | 73.16 |
SAFE w/o HierAttn | 70.15 | 68.11 | 67.84 | 66.02 | 72.47 |
SAFE w/o GateFuse | 71.03 | 69.24 | 69.17 | 67.35 | 74.02 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).