Selectively Connected Self-Attentions for Semantic Role Labeling
Abstract: Semantic role labeling is an effective approach to understanding the underlying meanings associated with word relationships in natural language sentences. Recent studies using deep neural networks, specifically recurrent neural networks, have significantly improved upon traditional shallow models. However, due to the limitation of recurrent updates, they require long training times over large data sets. Moreover, they cannot capture the hierarchical structures of languages. We propose a novel deep neural model for semantic role labeling that provides selective connections among attentive representations, removing the need for recurrent updates. Experimental results show that our model outperforms state-of-the-art models in accuracy, achieving F1 scores of 86.6 and 83.6 on the CoNLL 2005 and CoNLL 2012 shared tasks, respectively. The accuracy gains come from capturing hierarchical information through the connection module. Moreover, we show that our model can be parallelized to avoid the repetitive updates of recurrent models. As a result, our model reduces training time by 62% compared to the baseline.
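The abstract describes stacking self-attention layers joined by a learned connection module in place of recurrence, which is what allows training to be parallelized across tokens. It does not give the module's exact formulation, so the sketch below is a minimal illustration only: single-head scaled dot-product self-attention plus an assumed sigmoid gate between layers. The names self_attention and selective_connection and the parameter Wg are hypothetical stand-ins, not the paper's actual components.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention over a sentence.
    X: (seq_len, d_model) word representations."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])  # pairwise attention logits
    return softmax(scores) @ V

def selective_connection(H_lower, H_upper, Wg):
    """Hypothetical connection module: a sigmoid gate mixes the lower-layer
    representation into the upper one, letting the network select what
    flows between attention layers (Wg is an assumed parameter)."""
    g = 1.0 / (1.0 + np.exp(-(np.concatenate([H_lower, H_upper], axis=-1) @ Wg)))
    return g * H_upper + (1.0 - g) * H_lower

# Toy forward pass: two stacked self-attention layers joined by the gate.
rng = np.random.default_rng(0)
n, d = 5, 8                        # 5 tokens, model width 8
X = rng.normal(size=(n, d))
Wq = rng.normal(size=(d, d)) * 0.1
Wk = rng.normal(size=(d, d)) * 0.1
Wv = rng.normal(size=(d, d)) * 0.1
Wg = rng.normal(size=(2 * d, d)) * 0.1

H1 = self_attention(X, Wq, Wk, Wv)
H2 = self_attention(selective_connection(X, H1, Wg), Wq, Wk, Wv)
print(H2.shape)  # (5, 8)
```

Because each layer is a matrix product over all positions at once, there is no sequential state to carry between tokens; this is the property the abstract credits for the reduction in training time relative to recurrent models.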
Cite This Article
Park, J. Selectively Connected Self-Attentions for Semantic Role Labeling. Appl. Sci. 2019, 9, 1716.